Search Results

Search found 2337 results on 94 pages for 'green hyena'.

Page 53/94

  • Bounding volume hierarchy - linked nodes (linear model)

    - by teodron
    The scenario: a chain of points (Pi), i = 0..N, where each Pi is linked to its direct neighbours Pi-1 and Pi+1. The goal: perform efficient collision detection between any two non-adjacent links, (PiPi+1) vs. (PjPj+1). The question: every work treating this subject recommends using a broad phase for collision detection and implementing it via a bounding volume hierarchy. For a chain made of the Pi nodes it can look like this: I imagine the big blue sphere containing all the links, the green ones half of them, the red ones a quarter, and so on (the picture is not accurate, but it is there to help understand the question).

    What I do not understand is: how can such a hierarchy speed up the computation of segment collision pairs if, for a deformable linear object such as a chain or wire, it has to be updated every frame? More plainly, what is the actual principle of a collision detection broad phase in this particular case, and how can it work when computing the bounding spheres is itself a time-consuming task that has to be redone on every frame update (since the geometry changes)? I think I am missing a key point: if we look at the picture where the chain is in a spiral pose, most spheres are already contained within half of the others, or intersect them. It seems odd if this is the way it is supposed to work.
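    For reference, the broad phase usually gets away with this because the hierarchy is refit bottom-up each frame rather than rebuilt from scratch, which is linear in the number of links. Below is a minimal, illustrative C++ sketch (not from the post); it assumes the tree is complete and stored in an array, and all names are made up for the example.

        #include <cmath>
        #include <cstddef>
        #include <vector>

        struct Vec3   { float x, y, z; };
        struct Sphere { Vec3 c; float r; };

        static float dist(const Vec3& a, const Vec3& b) {
            float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
            return std::sqrt(dx * dx + dy * dy + dz * dz);
        }

        // Smallest sphere enclosing two spheres.
        static Sphere enclose(const Sphere& a, const Sphere& b) {
            float d = dist(a.c, b.c);
            if (d + b.r <= a.r) return a;      // b already inside a
            if (d + a.r <= b.r) return b;      // a already inside b
            Sphere s;
            s.r = 0.5f * (d + a.r + b.r);
            float t = (s.r - a.r) / d;         // slide the centre from a.c toward b.c
            s.c = { a.c.x + (b.c.x - a.c.x) * t,
                    a.c.y + (b.c.y - a.c.y) * t,
                    a.c.z + (b.c.z - a.c.z) * t };
            return s;
        }

        // Leaves (one per link) start at index firstLeaf; internal node i has children
        // 2*i+1 and 2*i+2.  After the leaf spheres are updated from the moved points,
        // one bottom-up pass refits every internal node.  The refit is O(n) per frame;
        // the pay-off is that sphere/sphere rejection then prunes most of the O(n^2)
        // non-adjacent segment pairs before any segment/segment test runs.
        void refit(std::vector<Sphere>& nodes, std::size_t firstLeaf) {
            for (std::size_t i = firstLeaf; i-- > 0; )
                nodes[i] = enclose(nodes[2 * i + 1], nodes[2 * i + 2]);
        }

    Even in a tight spiral pose where many spheres overlap, the rejection test stays cheap for the far-apart portions of the chain, which is where the savings come from.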

    Read the article

  • Assigning valid moves on board game

    - by Kunal4536
    I am making a board game in Unity 4.3 2D, similar to checkers. I have added an empty object at every point where a player can move, with a box collider on each empty object, and I attached a click-to-move script to each player token. Now I want to assign valid moves, e.g. as shown in the picture: players can only move on the vertices of each square, and only to an adjacent vertex. Thus a token can move from the red spot to the yellow one, but not to the blue spot. There is another condition: if the token of another player is on the yellow spot, the player cannot move there and will have to go from the red to the green spot instead. How can I find the valid moves of the player by scripting?

    I have another problem with click-to-move: when I click, all the objects move to that position, but I only want to move a single token. What can I add to the script to select a specific object and then click to move only that object? Here is my script for click-to-move:

        var obj : Transform;
        private var hitPoint : Vector3;
        private var move : boolean = false;
        private var startTime : float;
        var speed = 1;

        function Update () {
            if (Input.GetKeyDown(KeyCode.Mouse0)) {
                var hit : RaycastHit; // no point storing this really
                var ray = Camera.main.ScreenPointToRay(Input.mousePosition);
                if (Physics.Raycast(ray, hit, 10000)) {
                    hitPoint = hit.point;
                    move = true;
                    startTime = Time.time;
                }
            }
            if (move) {
                obj.position = Vector3.Lerp(obj.position, hitPoint, Time.deltaTime * speed);
                if (obj.position == hitPoint) {
                    move = false;
                }
            }
        }
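    For comparison only, the valid-move rule itself (adjacent vertex, on the board, not occupied) is easy to express outside the engine. A minimal C++ sketch with made-up types, assuming vertices are identified by integer grid coordinates:

        #include <set>
        #include <utility>
        #include <vector>

        using Vertex = std::pair<int, int>;   // (column, row) of a square's corner

        std::vector<Vertex> validMoves(const Vertex& from,
                                       const std::set<Vertex>& board,     // every playable vertex
                                       const std::set<Vertex>& occupied)  // vertices holding a token
        {
            static const int dx[] = { 1, -1, 0, 0 };
            static const int dy[] = { 0, 0, 1, -1 };
            std::vector<Vertex> moves;
            for (int i = 0; i < 4; ++i) {
                Vertex to{ from.first + dx[i], from.second + dy[i] };
                if (board.count(to) && !occupied.count(to))   // on the board and free
                    moves.push_back(to);
            }
            return moves;
        }

    For the second problem, the usual approach is to move only the object the raycast actually hit (RaycastHit exposes the hit transform) instead of a shared obj reference, so a click selects the token under the cursor; check the Unity scripting reference for the exact field in your version.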

    Read the article

  • cocos2d/OpenGL multitexturing problem

    - by Gajoo
    I've got a simple shader to test multitexturing; the problem is that both samplers are using the same image. The shader code is basically just this:

        vec4 mid = texture2D(u_texture, v_texCoord);
        float g = texture2D(u_guide, v_guideCoord);
        gl_FragColor = vec4(g, mid.g, 0, 1);

    and this is how I'm calling the draw function:

        int last_State;
        glGetIntegerv(GL_ACTIVE_TEXTURE, &last_State);

        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, getTexture()->getName());

        glActiveTexture(GL_TEXTURE1);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, mGuideTexture->getName());

        ccGLEnableVertexAttribs(kCCVertexAttribFlag_TexCoords | kCCVertexAttribFlag_Position);
        glVertexAttribPointer(kCCVertexAttrib_Position, 2, GL_FLOAT, GL_FALSE, 0, vertices);
        glVertexAttribPointer(kCCVertexAttrib_TexCoords, 2, GL_FLOAT, GL_FALSE, 0, texCoord);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

        glDisable(GL_TEXTURE_2D);

    I've already checked that mGuideTexture->getName() and getTexture()->getName() return the correct textures, but looking at the result I can tell that both samplers are reading from getTexture()->getName(). Here are some screenshots showing what is happening: the image rendered using the above code, and the image rendered when I swap the textures passed to the samplers. I'm expecting to see green objects from the first picture with red objects hanging from the top.
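    One hedged guess at the cause, not confirmed from the post: sampler uniforms that are never set all default to 0, so both texture2D() calls end up reading texture unit 0 no matter what is bound to GL_TEXTURE1. A minimal sketch of pointing each sampler at its unit once after the program is linked; 'program' is assumed to be the handle of the shader quoted above, and the uniform names are taken from it:

        #include <GLES2/gl2.h>   // or whichever GL header your cocos2d build pulls in

        void bindSamplerUnits(GLuint program)
        {
            glUseProgram(program);
            glUniform1i(glGetUniformLocation(program, "u_texture"), 0); // reads GL_TEXTURE0
            glUniform1i(glGetUniformLocation(program, "u_guide"),   1); // reads GL_TEXTURE1
        }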

    Read the article

  • AndEngine Box2d game

    - by OneMoreVladimir
    I'm developing a 2D survival shooter using the Box2D extension and I've got some questions.

    I have two AnalogOnScreenControls. Their listeners modify both sprites and bodies. I receive "TouchEventPool was exhausted" warnings, and as their number grows the game crashes at random. I've tried putting the modification of the bodies and sprites on the UpdateThread, but that does not solve the problem. What could be the cause?

    I have a class that at the beginning of the game loads all the textures. After I relaunch the game activity several times I receive "Unable to find Phys Addr for" and a "green colour" interface. That doesn't happen if I clear the memory manually through the Task Manager before relaunching. What could be the cause? I unload my atlas at the end of the game.

    The game sometimes crashes at start with a NullPointerException in onResumeGame. The suggested solution is to set android:configChanges="orientation|screenSize", but my device is API 10, so it doesn't have the screenSize property, and orientation alone does not seem to help, because the game still starts in portrait mode at times (though landscape is set in the code).

    Read the article

  • Adjust sprite bounds of the visible part of texture

    - by Crazy D0G
    Is there any way to adjust the boundaries of the visible part of the sprite? To make it easier to understand: I have a texture, such as shown at figure 1. Then I break it into pieces and fill the resulting fragments using PRKit (wood texture on figure 2 and 3). But the resulting fragments have the transparent (green color on figure 2 and 3) and when creating a sprite from the fragments they have the size of the initial texture. Is there a way to get rid of this transparency and to adjust the size of the visible part (wood texture), openGL or cocos2d-x means? Maybe it help - draw() method from PRKit:

        void PRFilledPolygon::draw() {
            //CCNode::draw();
            glDisableClientState(GL_COLOR_ARRAY);

            // we have a pointer to vertex points so enable client state
            glBindTexture(GL_TEXTURE_2D, texture->getName());
            glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_ONE_MINUS_SRC_ALPHA);

            glVertexPointer(2, GL_FLOAT, 0, areaTrianglePoints);
            glTexCoordPointer(2, GL_FLOAT, 0, textureCoordinates);
            glDrawArrays(GL_TRIANGLES, 0, areaTrianglePointCount);

            glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

            //Restore texture matrix and switch back to modelview matrix
            glEnableClientState(GL_COLOR_ARRAY);
        }
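    One way to get the tight rectangle, shown only as an illustrative C++ sketch rather than PRKit or cocos2d-x API: compute the axis-aligned bounds of the polygon points that were actually filled, and use that rectangle as the sprite's content size instead of the full texture size.

        #include <algorithm>
        #include <vector>

        struct Point2 { float x, y; };
        struct Rect2  { float minX, minY, maxX, maxY; };

        // Tight bounds of the filled fragment; assumes polygonPoints is non-empty.
        Rect2 visibleBounds(const std::vector<Point2>& polygonPoints)
        {
            Rect2 r{ polygonPoints[0].x, polygonPoints[0].y,
                     polygonPoints[0].x, polygonPoints[0].y };
            for (const Point2& p : polygonPoints) {
                r.minX = std::min(r.minX, p.x);  r.maxX = std::max(r.maxX, p.x);
                r.minY = std::min(r.minY, p.y);  r.maxY = std::max(r.maxY, p.y);
            }
            return r;   // width = maxX - minX, height = maxY - minY
        }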

    Read the article

  • How does a CS student negotiate in/after a job interview?

    - by Billy ONeal
    Alright, I've gotten to the second step in the interview process. At this point I'm working under the assumption that I might be offered a position - flying my butt to Redmond would be quite an expense if they weren't at least considering me for something (*crosses fingers*). So, if one is offered a position, how should a CS student negotiate?

    I've heard a few strategies for dealing with software companies when you are being considered for a hire, but most of them assume the developer is in a powerful position: (s)he has lots of job experience, may even be overqualified for what the employer is looking for, and is part of a small market of qualified developers, because 99% of the applications companies receive are from people who are woefully underqualified. I'm in a completely different position. I think I compare favorably to most of my fellow students, and I have been a programmer for almost 10 years, but I often still feel green compared to most of my coworkers. I'm in a position where the employer holds most of the chips; they'd be doing me quite a favor by hiring me. I think this scenario is considerably different from the one most of the advice I've seen is aimed at.

    Above all, I don't want to be such a prick in negotiating that it damages my chances of actually getting the position, even if that means not negotiating at all. How should one approach a scenario like this?

    P.S. If this is off topic feel free to close it - I think it's borderline, and I'm of the opinion that it's better to ask and be closed than not to ask at all ;)

    Read the article

  • Wrong faces culled in OpenGL when drawing a rectangular prism

    - by BadSniper
    I'm trying to learn OpenGL. I wrote some code to build a rectangular prism. I don't want to draw back faces, so I used glCullFace(GL_BACK) and glEnable(GL_CULL_FACE). But I keep seeing back faces even when viewing from the front, and sometimes when rotating, side faces vanish. Can someone point me in the right direction?

        glPolygonMode(GL_FRONT, GL_LINE); // draw wireframe polygons
        glColor3f(0, 1, 0);               // set color green
        glCullFace(GL_BACK);              // don't draw back faces
        glEnable(GL_CULL_FACE);           // don't draw back faces
        glTranslatef(-10, 1, 0);          // position
        glBegin(GL_QUADS);
        // face 1
        glVertex3f(0,-1,0);
        glVertex3f(0,-1,2);
        glVertex3f(2,-1,2);
        glVertex3f(2,-1,0);
        // face 2
        glVertex3f(2,-1,2);
        glVertex3f(2,-1,0);
        glVertex3f(2,5,0);
        glVertex3f(2,5,2);
        // face 3
        glVertex3f(0,5,0);
        glVertex3f(0,5,2);
        glVertex3f(2,5,2);
        glVertex3f(2,5,0);
        // face 4
        glVertex3f(0,-1,2);
        glVertex3f(2,-1,2);
        glVertex3f(2,5,2);
        glVertex3f(0,5,2);
        // face 5
        glVertex3f(0,-1,2);
        glVertex3f(0,-1,0);
        glVertex3f(0,5,0);
        glVertex3f(0,5,2);
        // face 6
        glVertex3f(0,-1,0);
        glVertex3f(2,-1,0);
        glVertex3f(2,5,0);
        glVertex3f(0,5,0);
        glEnd();
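    A hedged note rather than a verified diagnosis: with culling enabled, OpenGL decides front versus back purely from winding order (glFrontFace defaults to GL_CCW), so every quad has to be wound the same way when viewed from outside the prism; otherwise some faces are culled when seen from the front while others show through from the back. As an illustrative check in the same fixed-function style, the face at z = 2 wound counter-clockwise as seen from +z looks like this:

        #include <GL/gl.h>   // plus your platform's usual GL setup headers

        void drawFrontFaceAtZ2()
        {
            glFrontFace(GL_CCW);   // the default, shown only for clarity
            glBegin(GL_QUADS);
            glVertex3f(0.0f, -1.0f, 2.0f);
            glVertex3f(2.0f, -1.0f, 2.0f);
            glVertex3f(2.0f,  5.0f, 2.0f);
            glVertex3f(0.0f,  5.0f, 2.0f);
            glEnd();
        }

    Walking around each face and writing its vertices counter-clockwise as seen from outside the solid is usually enough to make GL_CULL_FACE behave.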

    Read the article

  • my web cam not working properly in skype 12.04

    - by elvin
        oller #1 (rev 02)
        00:1d.1 USB controller: Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #2 (rev 02)
        00:1d.2 USB controller: Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #3 (rev 02)
        00:1d.3 USB controller: Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #4 (rev 02)
        00:1d.7 USB controller: Intel Corporation 82801EB/ER (ICH5/ICH5R) USB2 EHCI Controller (rev 02)
        00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev c2)
        00:1f.0 ISA bridge: Intel Corporation 82801EB/ER (ICH5/ICH5R) LPC Interface Bridge (rev 02)
        00:1f.1 IDE interface: Intel Corporation 82801EB/ER (ICH5/ICH5R) IDE Controller (rev 02)
        00:1f.2 IDE interface: Intel Corporation 82801EB (ICH5) SATA Controller (rev 02)
        00:1f.3 SMBus: Intel Corporation 82801EB/ER (ICH5/ICH5R) SMBus Controller (rev 02)
        00:1f.5 Multimedia audio controller: Intel Corporation 82801EB/ER (ICH5/ICH5R) AC'97 Audio Controller (rev 02)
        01:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10)
        01:08.0 Ethernet controller: Intel Corporation 82562EZ 10/100 Ethernet Controller (rev 01)

    This is what I got when I typed lspci in a terminal; how do I find out what type of cam it is? The command letters I wrote here may be wrong - never mind, I wrote them from memory. My web cam shows a green colour shade and no proper colour. Please help me fix this problem. Second option: if I buy a Microsoft web cam, will the image get corrected? I think my web cam is a Hyundai Electronics make.

    Read the article

  • How to Generate a Create Table DDL Script Along With Its Related Tables

    - by Compudicted
    Have you ever wondered, when creating table diagrams in SQL Server Management Studio (SSMS), how slickly you can add related tables to a diagram just by right-clicking on the interesting table name? Have you also ever needed to script those related tables, including the master one? And then discovered you have dozens of related tables? Or maybe had no SSMS at your disposal? That was me one day. Well, creativity to the rescue! I Binged and Googled around until I found more or less what I wanted, but it all involved T-SQL - long and convoluted CROSS APPLYs - and then I saw a PowerShell solution that I quickly adapted to my needs (I am not referencing any particular author because it was a mashup):

        ###########################################################################################################
        # Created by: Arthur Zubarev on Oct 14, 2012                                                              #
        # Synopsys: Generate file containing the root table CREATE (DDL) script along with all its related tables #
        ###########################################################################################################

        [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null

        $RootTableName = "TableName" # The table name, no schema name needed

        $srv = new-Object Microsoft.SqlServer.Management.Smo.Server("TargetSQLServerName")
        $conContext = $srv.ConnectionContext
        $conContext.LoginSecure = $True
        # In case the integrated security is not used uncomment below
        #$conContext.Login = "sa"
        #$conContext.Password = "sapassword"
        $db = New-Object Microsoft.SqlServer.Management.Smo.Database
        $db = $srv.Databases.Item("TargetDatabase")

        $scrp = New-Object Microsoft.SqlServer.Management.Smo.Scripter($srv)
        $scrp.Options.NoFileGroup = $True
        $scrp.Options.AppendToFile = $False
        $scrp.Options.ClusteredIndexes = $False
        $scrp.Options.DriAll = $False
        $scrp.Options.ScriptDrops = $False
        $scrp.Options.IncludeHeaders = $True
        $scrp.Options.ToFileOnly = $True
        $scrp.Options.Indexes = $False
        $scrp.Options.WithDependencies = $True
        $scrp.Options.FileName = 'C:\TEMP\TargetFileName.SQL'

        $smoObjects = New-Object Microsoft.SqlServer.Management.Smo.UrnCollection
        Foreach ($tb in $db.Tables)
        {
            Write-Host -foregroundcolor yellow "Table name being processed" $tb.Name

            If ($tb.IsSystemObject -eq $FALSE -and $tb.Name -eq $RootTableName) # feel free to customize the selection condition
            {
                Write-Host -foregroundcolor magenta $tb.Name "table and its related tables added to be scripted."
                $smoObjects.Add($tb.Urn)
            }
        }

        # The actual act of scripting
        $sc = $scrp.Script($smoObjects)

        Write-host -foregroundcolor green $RootTableName "and its related tables have been scripted to the target file."

    Enjoy!

    Read the article

  • How to refactor when all your development is on branches?

    - by Mark
    At my company, all of our development (bug fixes and new features) is done on separate branches. When it's complete, we send it off to QA, who test it on that branch, and when they give us the green light we merge it into our main branch. This could take anywhere between a day and a year. If we try to squeeze any refactoring in on a branch, we don't know how long it will be "out" for, so it can cause many conflicts when it's merged back in.

    For example, let's say I want to rename a function because the feature I'm working on makes heavy use of it, and I found that its name doesn't really fit its purpose (again, this is just an example). So I go around and find every usage of this function and rename them all to the new name, and everything works perfectly, so I send it off to QA. Meanwhile, new development is happening, and my renamed function doesn't exist on any of the branches being forked off main. When my issue gets merged back in, they're all going to break. Is there any way of dealing with this? It's not like management will ever approve a refactor-only issue, so it has to be squeezed in with other work. It can't be developed directly on main because all changes have to go through QA, and no one wants to be the jerk that broke main so that he could do a little bit of non-essential refactoring.

    Read the article

  • External monitor problems with Asus UX32VD in 12.04

    - by rsilva
    I've posted this video where I show one of the problems I'm facing with my UX32VD in regard to external monitors. As you can see, as soon as I plug in the external monitor the laptop screen goes black and the external monitor displays a single colour (the colour changes between red, green and blue). However, this does not always happen. For instance, if I reboot the laptop with the external monitor already plugged in, I get to use it, but not the laptop screen - in the Displays settings the laptop screen is not even listed. Other times it just works: the two screens, without any problems. I tried with two different external monitors and all these apparently random issues repeat; I even tried using an HDMI cable instead. Same random behavior. This question seems to cover one of these issues, but the answer did not help me. I was forced to use the 3.5.2 kernel in order to use the Intel graphics card, else the Gallium something was used. I also have Bumblebee installed.

    Read the article

  • How to shift a vector based on the rotation of another vector?

    - by bpierre
    I'm learning 2D programming, so excuse my approximations, and please don't hesitate to correct me. I am just trying to fire a bullet from a player. I'm using the HTML canvas (top-left origin). Here is a representation of my problem: the black vector represents the position of the player (the grey square). The green vector represents its direction. The red disc represents the target. The red vector represents the direction of a bullet, which will move in the direction of the target (red dotted line). The blue cross represents the point from where I really want to fire the bullet (and the blue dotted line represents its movement).

    This is how I draw the player (this is the player object; position, direction and dimensions are 2D vectors):

        ctx.save();
        ctx.translate(this.position.x, this.position.y);
        ctx.rotate(this.direction.getAngle());
        ctx.drawImage(this.image, Math.round(-this.dimensions.x/2), Math.round(-this.dimensions.y/2),
                      this.dimensions.x, this.dimensions.y);
        ctx.restore();

    This is how I instantiate a new bullet:

        var bulletPosition = playerPosition.clone(); // Copy of the player position
        var bulletDirection = Vector2D.substract(targetPosition, playerPosition).normalize(); // Difference between the player and the target, normalized
        new Bullet(bulletPosition, bulletDirection);

    This is how I move the bullet (this is the bullet object):

        var speed = 5;
        this.position.add(Vector2D.multiply(this.direction, speed));

    And this is how I draw the bullet (this is the bullet object):

        ctx.save();
        ctx.translate(this.position.x, this.position.y);
        ctx.rotate(this.direction.getAngle());
        ctx.fillRect(0, 0, 3, 3);
        ctx.restore();

    How can I change the direction and position vectors of the bullet to ensure it is on the blue dotted line? I think I should represent the shift with a vector, but I can't see how to use it.
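    For what it's worth, one common way to express the shift is to keep the muzzle point (the blue cross) as an offset in the player's local frame, rotate it by the player's facing angle each time a bullet is fired, and aim from that world-space point toward the target. A minimal C++ sketch with made-up names, independent of the canvas API (the sign conventions have to match whatever getAngle() returns with the canvas's y-down axis):

        #include <cmath>

        struct Vec2 { float x, y; };

        static Vec2 rotate(const Vec2& v, float angle)      // rotate v by angle (radians)
        {
            float c = std::cos(angle), s = std::sin(angle);
            return { v.x * c - v.y * s, v.x * s + v.y * c };
        }

        static Vec2 normalize(const Vec2& v)
        {
            float len = std::sqrt(v.x * v.x + v.y * v.y);
            return { v.x / len, v.y / len };
        }

        // localMuzzle is the blue cross expressed in the player's own coordinates.
        void spawnBullet(const Vec2& playerPos, float playerAngle,
                         const Vec2& localMuzzle, const Vec2& targetPos,
                         Vec2& bulletPos, Vec2& bulletDir)
        {
            Vec2 offset = rotate(localMuzzle, playerAngle);                 // local -> world
            bulletPos = { playerPos.x + offset.x, playerPos.y + offset.y }; // start on the blue cross
            bulletDir = normalize({ targetPos.x - bulletPos.x,              // aim from there at the target
                                    targetPos.y - bulletPos.y });
        }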

    Read the article

  • How to apply verification and validation on the following example

    - by user970696
    I have been following verification and validation questions here with my colleagues, yet we are unable to see the slight differences, probably caused by the language barrier in technical English. An example:

    Requirement specification: the user wants to control the lights in 4 rooms by a remote command sent from the UI, for each room separately.

    Functional specification: the UI will contain 4 checkboxes labelled according to the rooms they control. When a checkbox is checked, a signal is sent to the corresponding light and a green dot appears next to the checkbox. When a checkbox is unchecked, the signal (turn off) is sent to the corresponding light and a red dot appears next to the checkbox.

    Let me start with what I learned here: verification, according to many great answers here, ensures that the product reflects the specified requirements. As the functional spec is written by the producer based on requirements from the customer, it will be verified for completeness and correctness. Then the design document will be checked against the functional spec (does it design the 4 checkboxes?), and the source code against the design (is there code for 4 checkboxes, functions to send the signals, etc. - is it traceable to requirements?).

    Okay, the product is built and we need to test it - validate. Here comes our understanding trouble: validation should ensure the product meets the requirements for its specific intended use, which is basically the business requirement (does it work? can I control the lights from the UI?), but testers will definitely work with the functional spec, making sure the checkboxes are there, working, labelled, etc. They are basically checking whether the requirements in the functional spec were met in the final product - isn't that verification? (It should not be; let's stick to ISO 12207, where only validation is the actual testing.)

    Read the article

  • What to do when TDD tests reveal new functionality that is needed that also needs tests?

    - by Joshua Harris
    What do you do when you are writing a test and you get to the point where you need to make the test pass, and you realize that you need an additional piece of functionality that should be separated into its own function? That new function needs to be tested as well, but the TDD cycle says make a test fail, make it pass, then refactor. If I am on the step where I am trying to make my test pass, I'm not supposed to go off and start another failing test to test the new functionality that I need to implement.

    For example, I am writing a Point class that has a function CollidesWithLine(LineSegment):

        public class Point {
            // Point data and constructor ...

            public bool CollidesWithLine(LineSegment lineSegment) {
                Vector PointEndOfMovement = new Vector(Position.X + Velocity.X,
                                                       Position.Y + Velocity.Y);
                LineSegment pointPath = new LineSegment(Position, PointEndOfMovement);

                if (lineSegment.Intersects(pointPath))
                    return true;

                return false;
            }
        }

    I was writing a test for CollidesWithLine when I realized that I would need a LineSegment.Intersects(LineSegment) function. But should I just stop what I am doing in my test cycle to go create this new functionality? That seems to break the "Red, Green, Refactor" principle. Should I just write the code that detects that line segments intersect inside the CollidesWithLine function and refactor it out after it is working? That would work in this case, since I can access the data from LineSegment, but what about cases where that kind of data is private?

    Read the article

  • Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 5)

    - by hinkmond
    So, here's the finished product. I have 8 networked Raspberry Pi devices strategically placed around our Oracle Santa Clara Building 21 office. I attached a JFET-transistor-based EMF sensor to each device to capture any strange fluctuations in the electromagnetic field (which, supposedly, paranormal spirits can change as they pass by). And I have a Web app (embedded in this page) which can take the readings and show a graphical display in real time. As you can see, all the Raspberry Pi devices are blinking away green, indicating they are all operational and all sensors are working correctly. But I don't see anything... Darn... Maybe I have to stare at the Web app for a while. I don't know when the "alleged" ghosts in our Oracle Santa Clara office are supposed to be active, but let me know if you see anything... Oh, and by the way, Happy Halloween from the Internet of Spooky Things! See the previous posts for the full series on the steps to this cool demo: Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 1), Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 2), Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 3), Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 4), Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 5). Hinkmond

    Read the article

  • Google is not treating two Australian schools as separate sites when both are subdomains of qld.edu.au

    - by LuckySpoon
    My question relates to two websites, each of which is a "Calvary Christian College", but in two totally different locations and entirely unrelated to each other (except by name, and thus domain). All schools in the state are issued a <school-name>.qld.edu.au subdomain, in this case calvary.qld.edu.au and calvarycc.qld.edu.au. Now, what's interesting is that these domains are crossing each other in sitelinks for searches such as "calvary christian college townsville". The green data here is for one school (the Townsville school, as per the search term), and the red data is for the other school. I put a demotion in for this 6 months ago (we control calvary.qld.edu.au), but we're seeing no change on the results page. I have been able to get the owners of calvarycc.qld.edu.au to submit demotions for our domain, which should go in sometime in the next few days. What can we do to tell Google that these websites are not interchangeable, despite both appearing as "subdomains" of qld.edu.au? We can possibly open channels of communication with the administrators of qld.edu.au, but we will need to tell them what to change, and at this point I'm out of ideas.

    Read the article

  • Tomcat + Spring + CI workflow

    - by ex3v
    We're starting our very first project with Spring and the Java web stack. This project will mainly be about rewriting a quite large ERP/CRM from Zend Framework to Java. An important factor in my question is that I come from PHP territory, where things (in terms of quality) tend to look different than in the Java world.

    Facts: there will be 2-3 developers; at least one developer uses Windows, the rest use Linux; there is one remote Linux-based machine, which should handle the test and production instances; after struggling with buggy legacy code, we want to introduce good programming and development practices (CI, tests, clean code and so on); the client is internal, with frequent business logic changes, scrum, and daily deployments.

    What I want to achieve is a good workflow across as many development stages as possible (coding - committing - testing - deploying). The problem is that I've never done this before, so I don't know what the best practices are. What I have so far: developers code locally; there is a Vagrant instance on every development machine, managed by Puppet, containing the same Linux, Jenkins and Tomcat versions as the production machine; while coding, the developer deploys to the Vagrant machine; after a local merge to the test branch, Jenkins on Vagrant handles the tests; when everything is fine, the developer pushes commits and merges; Jenkins on the remote machine pulls the commit from the test branch, runs the tests and so on; if everything looks green, Jenkins deploys to the test Tomcat instance. Deployment to production is manual (although it can be done with helper scripts) once the business logic has been tested by other divisions and everything looks fine to the client.

    Now, the real question: does the above make any sense? Things that I'm not sure about: the remote machine - won't there be any problems with two (or even three, as Jenkins might need one) instances of the same app on Tomcat? Using Vagrant to develop in a PHP environment is just wise; isn't it overkill with Tomcat? I mean, is there a higher probability that Tomcat will act the same on every machine? Is there any sense in having a local Jenkins on Vagrant?

    Read the article

  • Moving 2d camera in the y direction

    - by Alex
    I'm developing a simple game for the iPhone and am struggling to work out the best way for the camera to follow the main character. The following picture highlights the three main components: the circle (the main character), the green line (the terrain) and the black background.

    The terrain is simply made from an array of points (approx. 20 points per screen width). The terrain is moved in the x direction relative to the black background in order to keep the circle in the position shown. The distance to move the terrain is simply movex = circle.position.x - terrain.position.x, with a constant to fix the circle at some distance from the left of the screen.

    I am struggling to determine the best way to position the terrain in the y plane while keeping the focus on the character. I want to move the terrain in the y direction smoothly and not fix it to the position of the circle, so the circle can move in the y plane. If I take the same approach as the x positioning, the character is fixed at a point on the screen and the terrain moves. I could sample some terrain points either side of the character and produce an average, but in my implementation this was not smooth. I thought another approach might be to create a camera 'line' that is a smooth version of the terrain line and make the camera follow this, but I'm not sure if this is the optimum solution. Any advice is much appreciated!
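    One simple option, shown here only as an illustrative C++ sketch (nothing iPhone-specific), is to ease the camera's y toward a target each frame instead of locking it to anything. The target can be the circle's y or an average of a few terrain heights around it; the exponential factor keeps the motion smooth and roughly frame-rate independent, which tends to remove the jitter an unfiltered average produces.

        #include <cmath>

        struct Camera2D { float y; };

        // Move cam.y a fraction of the way toward targetY; 'stiffness' controls how
        // quickly the camera catches up (larger = snappier, smaller = floatier).
        void updateCameraY(Camera2D& cam, float targetY, float dt, float stiffness)
        {
            float t = 1.0f - std::exp(-stiffness * dt);   // in (0, 1) for dt > 0
            cam.y += (targetY - cam.y) * t;
        }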

    Read the article

  • How to code Time Stop or Bullet Time in a game?

    - by David Miler
    I am developing a single-player RPG platformer in XNA 4.0. I would like to add an ability that makes time "stop" or slow down, with only the player character moving at the original speed (similar to the Time Stop spell from the Baldur's Gate series). I am not looking for an exact implementation, rather some general ideas and design patterns.

    EDIT: Thanks all for the great input. I have come up with the following solution:

        public void Update(GameTime gameTime)
        {
            GameTime newGameTime = new GameTime(gameTime.TotalGameTime,
                new TimeSpan(gameTime.ElapsedGameTime.Ticks / DESIRED_TIME_MODIFIER));
            gameTime = newGameTime;
            ...
        }

    or something along these lines. This way I can set a different time for the player component and a different one for the rest. It is certainly not universal enough to work for a game where warping time like this would be a central element, but I hope it will work for this case. I rather dislike the fact that it litters the main Update loop, but it is certainly the easiest way to implement it. I guess that is essentially the same as tesselode suggested, so I'm going to give him the green tick :)
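    The same idea can be phrased engine-agnostically as a per-entity time scale, which keeps the scaling out of the main Update loop. A small illustrative C++ sketch (names made up, not XNA API):

        #include <vector>

        struct Entity {
            float timeScale = 1.0f;              // player keeps 1.0, everything else e.g. 0.1
            virtual void update(float dt) = 0;   // dt in seconds, already scaled
            virtual ~Entity() = default;
        };

        void stepWorld(const std::vector<Entity*>& entities, float frameDt)
        {
            for (Entity* e : entities)
                e->update(frameDt * e->timeScale);   // each entity sees its own clock
        }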

    Read the article

  • MQTT, GWT, ActiveMQ stack to bring jms to the browser

    - by scphantm
    I am in the preliminary stages of architecting a legacy replacement project. They already have sub-half-second performance on their green screens and they want the same from the web app. We have a 390 mainframe that can handle anything we throw at it, but there isn't a good JVM for it, so we have two tiers of WebSphere servers between the mainframe and the browser: the UI server and the BL server. For the UI I'm leaning towards GWT. But one thing that I think would seal the deal is to add messaging capabilities to the browser. The idea is, say you click on a link that displays a second panel of information: instead of the classic GWT approach, where it triggers a GWT-RPC call to the UI server, the UI server routes it to the BL server, and the BL sends it to the mainframe and back out, it drops an MQTT message directly to the BL server or directly to the mainframe. Say writes go to the BL, reads go to the mainframe. This is easy enough in classic JMS, because you can issue a message with an expected response and have your callback ready to get that response. But from what I'm reading so far, it looks like MQTT doesn't have that. It looks like it's strictly fire and forget, which would make it really tough to get a response back to the workstation that called it. Am I right here? Has anyone tried this stack before with GWT?

    Read the article

  • Draw contour around object in Opengl

    - by Maciekp
    I need to draw a contour around 2D objects in 3D space. I tried drawing lines around the object (plus points to fill the gaps), but due to the line width some part of it (~50%) was covering the object. I tried to use the stencil buffer to eliminate this problem, but I got something like this (the contour is green): http://goo.gl/OI5uc (sorry, I can't post images, due to my reputation). You can see (where the arrow points) that some parts of the line are behind the object and some are above. This changes when I move the camera, but there is always some part that is covering it. Here is the code that I use for drawing the object:

        glColorMask(1, 1, 1, 1);
        std::list<CObjectOnScene*>::iterator objIter = ptr->objects.begin(),
                                             objEnd  = ptr->objects.end();
        int countStencilBit = 1;
        while (objIter != objEnd)
        {
            glColorMask(1, 1, 1, 1);
            glStencilFunc(GL_ALWAYS, countStencilBit, countStencilBit);
            glStencilOp(GL_REPLACE, GL_KEEP, GL_REPLACE);
            (*objIter)->DrawYourVertices();

            glStencilFunc(GL_NOTEQUAL, countStencilBit, countStencilBit);
            glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
            (*objIter)->DrawYourBorder();

            ++objIter;
            ++countStencilBit;
        }

    I've tried different settings of the stencil buffer, but I always get something like that. So the questions are: 1. Am I setting the stencil buffer wrong? 2. Are there any other simple ways to create a contour on such objects? Thanks in advance.

    Read the article

  • Calculate travel time on road map with semaphores

    - by Ivansek
    I have a road map with intersections. At intersections there are semaphores (traffic lights). For each semaphore I generate a red-light time and a green-light time, represented with the syntax [R:T1, G:T2], for example (the numbers above the edges are the segment lengths in pixels):

             119                 185                 250
        A --------- B:[R:6, G:4] --------- C:[R:5, G:5] --------- D

    I want to calculate a car's travel time from A to D. Right now I do this with the following pseudo code:

        function get_travel_time(semaphores_configuration) {
            time = 0;
            for (i = 1; i < path.length; i++) {
                prev_node = path[i-1];
                next_node = path[i];
                cost = cost_between(prev_node, next_node);
                time += (cost / movement_speed);   // movement_speed = 50px per second

                light_times  = get_light_times(path[i], semaphore_configurations);
                lights_cycle = get_lights_cycle(light_times);  // e.g. [R,R,R,G,G,G,G] for [R:3, G:4]
                lights_sum   = light_times.green_time + light_times.red_light;  // cycle length
                light = lights_cycle[cost % lights_sum];
                if (light == "R") {
                    time += light_times.red_light;
                }
            }
            return time;
        }

    So for the distance 119 between A and B the travel time is 119/50 = 2.38s (the exactly measured time is between 2.5s and 2.6s); then we add time if we arrive at a red light at B. Whether we arrive at a red light is calculated with these lines:

        lights_cycle = get_lights_cycle(light_times);
        lights_sum = light_times.green_time + light_times.red_light;
        light = lights_cycle[cost % lights_sum];
        if (light == "R") {
            time += light_times.red_light;
        }

    This pseudo code doesn't calculate exactly the same times as measured, but the calculations are very close to them. Any idea how I should calculate this?
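    A likely source of the mismatch, offered as a hedged guess: the light's phase should be looked up from the arrival time at the intersection rather than from the segment length, and the added delay should be the remaining red time rather than the full red period. A minimal C++ sketch of that accounting, with assumed semantics (each cycle starts with red at t = 0, speed in px/s as in the post):

        #include <cmath>
        #include <cstddef>
        #include <vector>

        // Seconds spent waiting if we arrive at a light t seconds after its cycle started.
        double waitAtLight(double t, double red, double green)
        {
            double phase = std::fmod(t, red + green);     // where in the cycle we arrive
            return (phase < red) ? (red - phase) : 0.0;   // still red: wait out the remainder
        }

        // segmentPx[i] is the length of the i-th segment; a light with redTimes[i] /
        // greenTimes[i] sits at the end of every segment except the last.
        double travelTime(const std::vector<double>& segmentPx,
                          const std::vector<double>& redTimes,
                          const std::vector<double>& greenTimes,
                          double speedPxPerSec)
        {
            double t = 0.0;
            for (std::size_t i = 0; i < segmentPx.size(); ++i) {
                t += segmentPx[i] / speedPxPerSec;                   // driving time
                if (i + 1 < segmentPx.size())                        // a light at this node
                    t += waitAtLight(t, redTimes[i], greenTimes[i]);
            }
            return t;
        }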

    Read the article

  • Precise: Evolution laggy due to IMAP profile or due to some odd sync issue?

    - by Izzy
    I'm fighting with Evolution. Basically it's working fine, but it is very slow to react in certain situations. There is apparently some problem with syncing and IMAP.

    Helper questions: could it be that moving away from Bonobo has something to do with the slowdown? There might be some trouble with the new engine and "asynchronous actions". What can be done about it? I want to get the previous "working mood" back. How can I speed this thing up?

    Different scenarios: when sending a mail, the composer window hangs there, inactive, for a couple of seconds, everything greyed out. Though there is a green check mark saying it's sent, I'm not sure a) why it's still blocking everything and b) whether I could simply close it without "breaking" or "losing" anything. In earlier versions the composer window closed pretty fast, and one could see the message being stored in the local "outbox" until it was sent, and one could immediately continue with the next task. I prefer that behaviour over the current one.

    Switching between modules: coming from mail and switching to the address book takes a couple of seconds; the same for switching to the calendar. I read about different "possible causes" and tried a few things. I only have 3 local address books, so no networking should be involved here; to make sure, I switched to offline mode and then tried to access the address book - no noticeable difference. I use 3 Google Calendars; switching to offline mode made a minor difference, but so minor that it could also be "imagination", since one might have expected this in that case. According to some reports, disabling the tasks should help. Well, it didn't in my case, as I don't use them regularly (just two local items stored here).

    Read the article

  • Storing data offline with javascript

    - by Walker
    My question is about storing data offline, and potentially whether I will need to bring in an outside programmer or whether this could be learned within a few weeks. The website I am working on will have an interface where users log in and go through a series of quizzes in the form of checkboxes, drop-down menus, and others. Each page/quiz area could have 20-100 total checkboxes in a series of 3-5 rows because of the comprehensive nature of the course. This I can do - I know how to code the quiz and return a correct or incorrect answer based on each individual checkbox, and present a cumulative score (i.e. "you got 57% correct").

    The issue lies in the fact that I would like to save each user's results and keep them informed of their progress. When they complete all of the quizzes, I would like to have a visual output of their performance in each area. Storing the output of their results offline is where I think I may run into a problem with my lack of coding experience. I would also like to have a sidebar with their progress in each of the 10-15 sections, with a green completion bar or a percentage correct that would draw from this data. I have never had to code something that stores information like this offline, so back to my question: would it be better to learn the language needed, or to bring in a coder/developer for the back-end stuff?

    Read the article

  • How to pass one float as four unsigned chars to shader by glVertexPointAttrib?

    - by Kog
    For each vertex I use two floats as position and four unsigned bytes as color. I want to store all of them in one array, so I tried casting those four unsigned bytes to one float, but I am unable to do that correctly... All in all, my tests came to one point:

        GLfloat vertices[] = { 1.0f, 0.5f, 0, 1.0f, 0, 0 };
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), vertices);

        // VER1 - draws red triangle
        // unsigned char colors[] = { 0xff, 0, 0, 0xff, 0xff, 0, 0, 0xff, 0xff, 0, 0, 0xff };
        // glEnableVertexAttribArray(1);
        // glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 4 * sizeof(GLubyte), colors);

        // VER2 - draws greenish triangle (not "pure" green)
        // float f = 255 << 24 | 255; // Hex: 0xff0000ff
        // float colors2[] = { f, f, f };
        // glEnableVertexAttribArray(1);
        // glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 4 * sizeof(GLubyte), colors2);

        // VER3 - draws red triangle
        int i = 255 << 24 | 255; // Hex: 0xff0000ff
        int colors3[] = { i, i, i };
        glEnableVertexAttribArray(1);
        glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 4 * sizeof(GLubyte), colors3);

        glDrawArrays(GL_TRIANGLES, 0, 3);

    The above code is used to draw one simple red triangle. My question is: why do versions 1 and 3 work correctly, while version 2 draws a greenish triangle? The hex values are the ones I read by inspecting the variable in the debugger. They are equal for versions 2 and 3 - so what causes the difference?
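    A hedged explanation of the difference, reasoning from the values quoted above: "float f = 255 << 24 | 255;" converts the integer's numeric value to floating point, so what lands in colors2 is the IEEE-754 encoding of that value - a completely different byte pattern from ff 00 00 ff - and those are the bytes glVertexAttribPointer hands to the GPU. The int array keeps the bit pattern unchanged, which is why VER3 matches VER1. If the colors really have to live in a float array (one interleaved array), the bits need to be reinterpreted rather than converted; a sketch, assuming little-endian byte order so the bytes come out in R, G, B, A order:

        #include <cstdint>
        #include <cstring>

        // Pack four color bytes into a float-sized slot without changing the bit pattern.
        float packRGBA(uint8_t r, uint8_t g, uint8_t b, uint8_t a)
        {
            uint32_t bits = uint32_t(r) | uint32_t(g) << 8 | uint32_t(b) << 16 | uint32_t(a) << 24;
            float out;
            std::memcpy(&out, &bits, sizeof out);   // reinterpret, don't convert
            return out;
        }

    The usual caveat applies: a few packed values alias NaN bit patterns, so an interleaved vertex struct with plain unsigned byte color members (VER1's layout, just not in a separate array) is generally the safer route.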

    Read the article
