Search Results

Search found 25660 results on 1027 pages for 'dotnetnuke development'.

Page 460 of 1027

  • Reacting to rectangle on rectangle collisions

    - by mcjohnalds45
    I don't know how to react to collisions between two axis-aligned rectangles that have x, y, width and height values (x and y are the centre of the box) so that they simply stop overlapping. I figured I'd just move them away from each other by however far they intersect, in the opposite direction (left, right, up or down) of where they collided. If I check for collisions only on the x axis or only on the y axis it works fine, but when checking both axes crazy stuff happens. This code executes when the first box collides with the second. It's in Lua, but feel free to answer in anything that isn't too counter-intuitive.

        if box1.x < box2.x then
            box1.x = box1.x + box2.x - box1.x - (box1.width / 2) - (box2.width / 2)
        end
        if box1.x > box2.x then
            box1.x = box1.x - (box1.x - box2.x - (box1.width / 2) - (box2.width / 2))
        end
        if box1.y < box2.y then
            box1.y = box1.y + box2.y - box1.y - (box1.height / 2) - (box2.height / 2)
        end
        if box1.y > box2.y then
            box1.y = box1.y - (box1.y - box2.y - (box1.height / 2) - (box2.height / 2))
        end
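
    A common fix is to compute the overlap on each axis and push out only along the axis of least penetration; correcting both axes in the same frame is what produces the "crazy" jumps. A minimal C++ sketch of that idea (the struct and field names below are assumptions mirroring the question, not existing code):

        #include <cmath>

        struct Box { float x, y, width, height; }; // x, y = centre of the box

        // Push box1 out of box2 along whichever axis has the smaller overlap.
        void resolveOverlap(Box& box1, const Box& box2) {
            float dx = box2.x - box1.x;
            float dy = box2.y - box1.y;
            float overlapX = (box1.width  + box2.width)  / 2.0f - std::fabs(dx);
            float overlapY = (box1.height + box2.height) / 2.0f - std::fabs(dy);
            if (overlapX <= 0.0f || overlapY <= 0.0f) return; // not actually overlapping
            if (overlapX < overlapY) {
                box1.x += (dx > 0.0f) ? -overlapX : overlapX; // separate horizontally
            } else {
                box1.y += (dy > 0.0f) ? -overlapY : overlapY; // separate vertically
            }
        }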

    Read the article

  • Unity3d web player fails to load textures

    - by José Franco
    I'm having a problem with the Unity3d Web Player. I have developed a project and successfully deployed it in a web app. It works with absolutely no problem on my PC. The app is to be installed on two identical machines. I have installed it on both and it only works properly on one. The issue is that on one computer it fails to properly load the models and textures, so the game runs but instead of the models I can only see black rectangles on a blue background. It has the same problem with all browsers, and I get no errors either from the player or from JavaScript. The only difference between these computers is that the one with the problem runs Windows 8.1 and the other runs Windows 8. Could this be the cause of the issue? It works fine on my computer with Windows 8.1; however, both of the other computers have specs that are significantly lower than mine. I have already searched everywhere and it seems the issue usually has to do with the individual game, but I think it may have to do with the computer itself because it runs properly on the other two. The specs of the computers I'm installing the app on are as follows: Intel Celeron 1.40 GHz, 2GB RAM, Intel HD Graphics. If anybody could point me in the right direction I would be very grateful. I forgot to mention: I'm running Unity Web Player 4.3.5, and the version on the other two computers is 4.5.0.

    Read the article

  • Does SFML render graphics outside the window?

    - by ThePlan
    While working on a tile-based map I figured it would be a good idea to only render what the player can see in the game window, but then it occurred to me that SFML might already be optimized enough to know when it doesn't have to render those things. Let's say I draw a 30x30 square map (a medium one) but the player only sees some of the tiles, not the whole map. Would SFML automatically hide what the player doesn't see, or should I hide it myself?
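
    For what it's worth, SFML still issues draw calls for off-screen sprites (the GPU clips them after the fact), so for larger tile maps it is usually worth culling against the current view yourself. A minimal sketch, assuming a tiles[y][x] grid of sf::Sprite and a fixed tile size (both assumptions, not from the question):

        #include <SFML/Graphics.hpp>
        #include <algorithm>
        #include <vector>

        // Draw only the tiles that intersect the current view.
        void drawVisibleTiles(sf::RenderWindow& window,
                              const std::vector<std::vector<sf::Sprite>>& tiles,
                              float tileSize) {
            const sf::View& view = window.getView();
            sf::Vector2f centre = view.getCenter();
            sf::Vector2f size   = view.getSize();

            int firstX = std::max(0, static_cast<int>((centre.x - size.x / 2) / tileSize));
            int firstY = std::max(0, static_cast<int>((centre.y - size.y / 2) / tileSize));
            int lastX  = std::min(static_cast<int>(tiles[0].size()) - 1,
                                  static_cast<int>((centre.x + size.x / 2) / tileSize));
            int lastY  = std::min(static_cast<int>(tiles.size()) - 1,
                                  static_cast<int>((centre.y + size.y / 2) / tileSize));

            for (int y = firstY; y <= lastY; ++y)
                for (int x = firstX; x <= lastX; ++x)
                    window.draw(tiles[y][x]);
        }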

    Read the article

  • Is it ok to initialize an RB_ConstraintActor in PostBeginPlay?

    - by Almo
    I have a KActorSpawnable subclass that acts weird. In PostBeginPlay, I initialize an RB_ConstraintActor; the default is not to allow rotation. If I create one in the editor, it's fine, and won't rotate. If I spawn one, it rotates. Here's the class:

        class QuadForceKActor extends KActorSpawnable
            placeable;

        var(Behavior) bool bConstrainRotation;
        var(Behavior) bool bConstrainX;
        var(Behavior) bool bConstrainY;
        var(Behavior) bool bConstrainZ;
        var RB_ConstraintActor PhysicsConstraintActor;

        simulated event PostBeginPlay()
        {
            Super.PostBeginPlay();
            PhysicsConstraintActor = Spawn(class'RB_ConstraintActorSpawnable', self, '', Location, rot(0, 0, 0));
            if(bConstrainRotation)
            {
                PhysicsConstraintActor.ConstraintSetup.bSwingLimited = true;
                PhysicsConstraintActor.ConstraintSetup.bTwistLimited = true;
            }
            SetLinearConstraints(bConstrainX, bConstrainY, bConstrainZ);
            PhysicsConstraintActor.InitConstraint(self, None);
        }

        function SetLinearConstraints(bool InConstrainX, bool InConstrainY, bool InConstrainZ)
        {
            if(InConstrainX)
            {
                PhysicsConstraintActor.ConstraintSetup.LinearXSetup.bLimited = 1;
            }
            else
            {
                PhysicsConstraintActor.ConstraintSetup.LinearXSetup.bLimited = 0;
            }
            if(InConstrainY)
            {
                PhysicsConstraintActor.ConstraintSetup.LinearYSetup.bLimited = 1;
            }
            else
            {
                PhysicsConstraintActor.ConstraintSetup.LinearYSetup.bLimited = 0;
            }
            if(InConstrainZ)
            {
                PhysicsConstraintActor.ConstraintSetup.LinearZSetup.bLimited = 1;
            }
            else
            {
                PhysicsConstraintActor.ConstraintSetup.LinearZSetup.bLimited = 0;
            }
        }

        DefaultProperties
        {
            bConstrainRotation=true
            bConstrainX=false
            bConstrainY=false
            bConstrainZ=false
            bSafeBaseIfAsleep=false
            bNoEncroachCheck=false
        }

    Here's the code I use to spawn one. It's a subclass of the one above, but it doesn't reference the constraint at all.

        local QuadForceKCreateBlock BlockActor;

        BlockActor = spawn(class'QuadForceKCreateBlock', none, 'PowerCreate_Block', BlockLocation(), m_PreparedRotation, , false);
        BlockActor.SetDuration(m_BlockDuration);
        BlockActor.StaticMeshComponent.SetNotifyRigidBodyCollision(true);
        BlockActor.StaticMeshComponent.ScriptRigidBodyCollisionThreshold = 0.001;
        BlockActor.StaticMeshComponent.SetStaticMesh(m_ValidCreationBlock.StaticMesh);
        BlockActor.StaticMeshComponent.AddImpulse(m_InitialVelocity);

    I used to initialize an RB_ConstraintActor from the outside, where I spawned the block. This worked, which is why I'm pretty sure it has nothing to do with the other code in QuadForceKCreateBlock. I then added the internal constraint in QuadForceKActor for other purposes. When I realized I had two constraints on the CreateBlock doing the same thing, I removed the constraint code from the place where I spawn it. Then it started rotating. Is there a reason I should not be initializing an RB_ConstraintActor in PostBeginPlay? I feel like there's some basic thing about how the engine works that I'm missing.

    Read the article

  • Slick & NiftyGUI. Nifty initialize exception

    - by Romeo
    I found myself in trouble when trying to run a Slick game with a Nifty game state. This is the code:

        @Override
        protected void initGameAndGUI(GameContainer container, StateBasedGame game) throws SlickException {
            initNifty(container, game);
        }

    If I run this I get: java.lang.IllegalStateException: The NiftyGUI was already initialized. Its illegal to do so twice. If I delete the call to initNifty() I get another exception: java.lang.IllegalStateException: NiftyGUI was not initialized.

    Read the article

  • Interpolating between two networked states?

    - by Vaughan Hilts
    I have many entities on the client side that are simulated (their velocities are added to their positions on a per-frame basis) and I let them dead reckon themselves. They send updates about where they were last seen and about their velocity changes. This works great, and other players see it working fine. However, these players begin to desync after some time. This is because of latency. I'd like to know how I can interpolate between states so they appear to be in the correct position. I know where the player was LAST seen and their current velocity, but interpolating to the last seen state causes the player to actually move -backwards-. I could ignore velocity entirely for other clients and simply 'lerp' them toward the appropriate position, but I feel this would cause jaggy movement. What are the alternatives?
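
    One common compromise is to keep the dead reckoning but fold corrections in gradually: extrapolate a target from the last received state, then move the displayed position a fraction of the way toward that target every frame, so corrections are spread over time instead of snapping or walking backwards. A rough C++ sketch of the idea (all names here are assumptions for illustration):

        #include <cmath>

        struct Vec2 { float x, y; };

        struct RemotePlayer {
            Vec2  displayedPos;          // what we actually render
            Vec2  lastKnownPos;          // from the most recent network update
            Vec2  lastKnownVel;          // from the most recent network update
            float timeSinceUpdate = 0.0f;
        };

        // Blend the rendered position toward the dead-reckoned target.
        // correctionRate ~ 10 roughly means "close most of the gap within ~0.1 s".
        void updateRemote(RemotePlayer& p, float dt, float correctionRate = 10.0f) {
            p.timeSinceUpdate += dt;

            Vec2 target;
            target.x = p.lastKnownPos.x + p.lastKnownVel.x * p.timeSinceUpdate;
            target.y = p.lastKnownPos.y + p.lastKnownVel.y * p.timeSinceUpdate;

            float t = 1.0f - std::exp(-correctionRate * dt); // frame-rate independent blend factor
            p.displayedPos.x += (target.x - p.displayedPos.x) * t;
            p.displayedPos.y += (target.y - p.displayedPos.y) * t;
        }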

    Read the article

  • Using SVN post-commit hook to update only files that have been committed

    - by fondie
    I am using an SVN repository for my web development work. I have a development site set up which holds a checkout of the repository. I have set up an SVN post-commit hook so that whenever a commit is made to the repository the development site is updated:

        cd /home/www/dev_ssl
        /usr/bin/svn up

    This works fine, but due to the size of the repository the updates take a long time (approx. 3 minutes), which is rather frustrating when making regular commits. What I'd like is to change the post-commit hook to only update those files/directories that have been committed, but I don't know how to go about doing this. Updating the "lowest common directory" would probably be the best solution, e.g. if committing the following files:

        /branches/feature_x/images/logo.jpg
        /branches/feature_x/css/screen.css

    it would update the directory:

        /branches/feature_x/

    Can anyone help me create a solution that achieves this please? Thanks! Update: The repository and development site are located on the same server, so network issues shouldn't be involved. CPU usage is very low, and I/O should be OK (it's running on a high-spec dedicated server). The development site is approx. 7.5GB in size and contains approx. 600,000 items; this is mainly due to having multiple branches/tags.

    Read the article

  • How to do directional per-fragment lighting in world space?

    - by user
    I am attempting to create a GLSL shader for simple, per-fragment directional light. So far, after following many tutorials, I have continually run into the same issue: my light is specified in world coordinates; however, the shader treats the light's position as being in eye space, so the light direction changes when I move the camera. My question is, how do I transform a directional light position such as (50, 50, 50, 0) into eye space, or would doing things this way be the incorrect approach to the problem?
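
    If it helps: with w = 0 the vector is a direction, so it can be brought into eye space by applying only the rotation part of the view matrix (translation should not apply) and renormalizing, either in the shader or once per frame on the CPU. A small sketch using GLM on the CPU side (assuming GLM is available; viewMatrix is whatever view matrix you already upload):

        #include <glm/glm.hpp>

        // Transform a world-space light direction into eye space and pass the
        // result to the shader as a uniform.
        glm::vec3 lightDirEye(const glm::mat4& viewMatrix, const glm::vec3& lightDirWorld) {
            // Taking the upper-left 3x3 keeps the rotation and drops the translation,
            // which is exactly what w = 0 means for a direction vector.
            return glm::normalize(glm::mat3(viewMatrix) * lightDirWorld);
        }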

    Read the article

  • How can I achieve strong typing with a component messaging system?

    - by Vaughan Hilts
    I'm looking at implementing a messaging system in my entity component system. I've deduced that I can use an event/queue for passing messages, but right now I just use a generic object and cast out the data I want. I also considered using a dictionary. I see a lot of information on this, but it all involves a lot of casting and guessing. Is there any way to do this elegantly and keep strong typing on my messages?
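
    One way to keep the compiler involved is to key subscriptions by message type, so each handler receives a concrete message struct and the casting is confined to one internal spot. A compressed C++ sketch of that pattern (all names are illustrative, not from any particular engine):

        #include <functional>
        #include <typeindex>
        #include <unordered_map>
        #include <vector>

        class MessageBus {
        public:
            // Register a handler for one concrete message type.
            template <typename Msg>
            void subscribe(std::function<void(const Msg&)> handler) {
                handlers_[std::type_index(typeid(Msg))].push_back(
                    [handler](const void* m) { handler(*static_cast<const Msg*>(m)); });
            }

            // Deliver a message to every handler registered for its type.
            template <typename Msg>
            void publish(const Msg& msg) {
                auto it = handlers_.find(std::type_index(typeid(Msg)));
                if (it == handlers_.end()) return;
                for (auto& fn : it->second) fn(&msg);
            }

        private:
            std::unordered_map<std::type_index,
                               std::vector<std::function<void(const void*)>>> handlers_;
        };

        // Usage:
        //   struct DamageMessage { int amount; };
        //   bus.subscribe<DamageMessage>([](const DamageMessage& m) { /* react */ });
        //   bus.publish(DamageMessage{5});   // call sites stay fully typed, no casts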

    Read the article

  • When I shoot a gun while walking, the bullet is off center, but when standing still it's fine

    - by Vlad1k
    I am making a small project in Unity, and whenever I walk with the gun and shoot at the same time, the bullets seem to curve and shoot off 2-3 cm from the center. When I stand still this doesn't happen. This is my main JavaScript code:

        @script RequireComponent(AudioSource)

        var projectile : Rigidbody;
        var speed = 500;
        var ammo = 30;
        var fireRate = 0.1;
        private var nextFire = 0.0;

        function Update() {
            if(Input.GetButton ("Fire1") && Time.time > nextFire) {
                if(ammo != 0) {
                    nextFire = Time.time + fireRate;
                    var clone = Instantiate(projectile, transform.position, transform.root.rotation);
                    clone.velocity = transform.TransformDirection(Vector3 (0, 0, speed));
                    ammo = ammo - 1;
                    audio.Play();
                }
                else {
                }
            }
        }

    I assume that these two lines need to be tweaked:

        var clone = Instantiate(projectile, transform.position, transform.root.rotation);
        clone.velocity = transform.TransformDirection(Vector3 (0, 0, speed));

    Thanks in advance, and please remember that I just started Unity, and I might have a difficult time understanding some things. Thanks!

    Read the article

  • Check for bodies within a specific circle in Box2D

    - by ltjax
    I'm trying to find positions to insert new bodies into my world. For that, I'd like to have a "free" spot where this body wouldn't overlap with anything else. So my plan was to sample "random" positions and check whether they overlap with my "potential" new body. Since my bodies are always circular, I'd need to test within a given circle. So far, the only way to use Box2D for this seems to be b2World::QueryAABB around my circle, followed by manually doing an overlap test with all the fixtures it gives me (Box2D doesn't even seem to allow me to tap into its own overlap tests?!). It seems to me like Box2D should already provide such functionality - is there a way that lets me do this without reinventing most of the wheel again?
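
    For what it's worth, the AABB query plus a per-fixture circle test is a workable route, and the circle test itself can be delegated to b2TestOverlap rather than hand-rolled. A rough sketch against the Box2D 2.x C++ API (the helper names are made up):

        #include <Box2D/Box2D.h>

        // Reports whether any fixture overlaps a circle of 'radius' centred at 'centre'.
        class CircleOverlapQuery : public b2QueryCallback {
        public:
            CircleOverlapQuery(const b2Vec2& centre, float radius) : m_hit(false) {
                m_circle.m_p = centre;
                m_circle.m_radius = radius;
                m_transform.SetIdentity();
            }

            bool ReportFixture(b2Fixture* fixture) override {
                if (b2TestOverlap(&m_circle, 0, fixture->GetShape(), 0,
                                  m_transform, fixture->GetBody()->GetTransform())) {
                    m_hit = true;
                    return false;   // overlap found, stop the query early
                }
                return true;        // keep checking other fixtures
            }

            bool Hit() const { return m_hit; }

        private:
            b2CircleShape m_circle;
            b2Transform   m_transform;
            bool          m_hit;
        };

        // True if a circular body of 'radius' could be placed at 'centre' without overlap.
        bool spotIsFree(b2World& world, const b2Vec2& centre, float radius) {
            b2AABB aabb;
            aabb.lowerBound = centre - b2Vec2(radius, radius);
            aabb.upperBound = centre + b2Vec2(radius, radius);
            CircleOverlapQuery query(centre, radius);
            world.QueryAABB(&query, aabb);
            return !query.Hit();
        }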

    Read the article

  • GUI for DirectX

    - by DeadMG
    I'm looking for a GUI library built on top of DirectX - preferably 9, but I can also do 11. I've looked at stuff like DXUT, but it's way too much for me - I only need some UI controls which I would rather not write (and debug) myself, and their need to keep a C-compatible API is definitely a big downside. I'd rather look at UI libs that are designed to be integrated into an existing DirectX-based system, rather than forming the basis of a system. Any recommendations?

    Read the article

  • Custom extensible file format for 2d tiled maps

    - by Christian Ivicevic
    I have implemented much of my game logic by now, but I still create my maps with nasty for-loops on the fly just to have something to work with. Now I want to move on and do some research on how to (un)serialize this data. (I am not looking for a map editor - I am talking about the map file itself.) For now I am looking for suggestions and resources on how to implement a custom file format for my maps, which should provide the following functionality (based on the MoSCoW method):

    Must have:
    - Extensibility and backward compatibility
    - Handling of different layers
    - Metadata on whether a tile is solid or can be passed through
    - Special serialization of entities/triggers with associated properties/metadata

    Could have:
    - Some kind of inclusion of the tileset to prevent having scattered files/tilesets

    I am developing with C++ (using SDL) and targeting only Windows. Any useful help, tips, suggestions, ... would be appreciated!
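
    As one possible direction (purely a sketch, not a specific recommended format): a chunk-based binary layout covers the extensibility and backward-compatibility points, because a reader can skip chunk types it does not recognise and each chunk can carry its own version number. Illustrative C++ structs (all chunk ids and fields here are made up):

        #include <cstdint>

        // A file is a small file header followed by a sequence of chunks.
        // Each chunk is: ChunkHeader + 'size' bytes of payload.
        struct ChunkHeader {
            char     id[4];     // e.g. "LAYR" (tile layer), "ENTS" (entities), "TSET" (embedded tileset)
            uint32_t version;   // per-chunk version, for backward compatibility
            uint32_t size;      // payload size in bytes, so unknown chunks can be skipped
        };

        struct LayerChunk {              // payload of a "LAYR" chunk
            uint32_t width, height;      // in tiles
            uint16_t flags;              // e.g. bit 0 = tiles in this layer are solid
            // followed by width * height tile indices (uint16_t each)
        };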

    Read the article

  • Generated 3d tree meshes

    - by Jari Komppa
    I did not find a question along these lines yet; correct me if I'm wrong. Trees (and flora in general) are common in games. Due to their nature, they are a good candidate for procedural generation. There's SpeedTree, of course, if you can afford it; as far as I can tell, it doesn't provide the possibility of generating your tree meshes at runtime. Then there's SnappyTree, an online WebGL-based tree generator built on proctree.js, which is some ~500 lines of JavaScript. One could use either of the above (or some other tree generator I haven't stumbled upon) to create a few dozen tree meshes beforehand - or model them from scratch in a 3d modeller - and then randomly mirror/scale them for a few more variants. But I'd rather have a free, linkable tree mesh generator. Possible solutions:

    - Port proctree.js to C++ and deal with the open source license (doesn't seem to be GPL, so could be doable; the author may also be willing to co-operate to make the license even more free).
    - Roll my own based on L-systems.
    - Don't bother, just use offline generated trees.
    - Use some other method I haven't found yet.
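
    On the L-system option: the string-rewriting core is tiny; the real work is interpreting the resulting string into branch segments (turtle-style) and skinning them with geometry. A minimal C++ sketch of just the rewriting step (the rule set shown is only an example):

        #include <map>
        #include <string>

        // Repeatedly apply the production rules to the axiom.
        std::string expand(std::string axiom,
                           const std::map<char, std::string>& rules,
                           int iterations) {
            for (int i = 0; i < iterations; ++i) {
                std::string next;
                for (char c : axiom) {
                    auto it = rules.find(c);
                    next += (it != rules.end()) ? it->second : std::string(1, c);
                }
                axiom = next;
            }
            return axiom;
        }

        // Example: expand("F", {{'F', "FF+[+F-F-F]-[-F+F+F]"}}, 3) yields a string whose
        // F / + / - / [ / ] symbols can then be walked to emit branch segments.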

    Read the article

  • Maya Animated Character export for XNA 4.0 problem

    - by FahidK
    To begin with, I'm trying to export an animated character in .fbx format from Maya 2013 to XNA 4.0. In Maya, the model has a basic rig and the animations are in clips made in the Trax editor. The issue I'm having: after selecting the model and the root joint and hitting export in .fbx format, when I open the exported .fbx file the joint system is detached from the model, with no animation. Btw, I have the animations in clips so that they can be called in code, for example "run", "walk", "attack". So, what can I do to solve this problem? Thank you.

    Read the article

  • SFX Played Once per Collision or Hit

    - by David Dimalanta
    I have a question about using Box2D (the physics engine used with LibGDX to get realistic physics). Here is the physics code I've written so far:

        @Override
        public boolean touchUp(int screenX, int screenY, int pointer, int button) {
            // TODO Touch Up Event
            if(is_Next_Fruit_Touched) {
                BodyEditorLoader Fruit_Loader = new BodyEditorLoader(Gdx.files.internal("Shape_Physics/Fruity Physics.json"));
                Fruit_BD.type = BodyType.DynamicBody;
                Fruit_BD.position.set(x, y);
                FixtureDef Fruit_FD = new FixtureDef(); // --> Allows you to make the object's physics.
                Fruit_FD.density = 1.0f;
                Fruit_FD.friction = 0.7f;
                Fruit_FD.restitution = 0.2f;
                MassData mass = new MassData();
                mass.mass = 5f;
                Fruit_Body[n] = world.createBody(Fruit_BD);
                Fruit_Body[n].setActive(true); // --> Let your dragon fall.
                Fruit_Body[n].setMassData(mass);
                Fruit_Body[n].setGravityScale(1.0f);
                System.out.println("Eggs... " + n);
                Fruit_Loader.attachFixture(Fruit_Body[n], Body, Fruit_FD, Fruit_IMG.getWidth());
                Fruit_Origin = Fruit_Loader.getOrigin(Body, Fruit_IMG.getWidth()).cpy();
                is_Next_Fruit_Touched = false;
                up = y;
                Gdx.app.log("Initial Y-coordinate", "Y at " + up);
                // Once it's touched, the next fruit will be set to drag.
                if(n < 50) {
                    n++;
                } else {
                    System.exit(0);
                }
            }
            return true;
        }

    Now, I'm wondering in which part or line I should implement the sound effects. My objective is to have an SFX played once for every collision (or should I say "SFX played once per collision"?) in the following cases:

    - An SFX is played once when objects hit others of their own kind (e.g. apple vs. apple).
    - A different SFX is played once when an object hits the ground (e.g. an apple lands on the mud).

    Note that I'm using the Java version of Box2D via LibGDX, and I edited the physics body using the Physics Body Editor before implementing it in code. I tried checking every available method on the body, fixture definition, and body definition for hooking up the SFX on a hit, but they only seem to cover things like gravity and weight. Is there anything available in the documentation for playing an SFX when a collision happens?
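
    The usual hook for "once per collision" logic in Box2D is a contact listener rather than anything on the body or fixture definitions: BeginContact fires once when two fixtures start touching. A rough C++ sketch of the idea (LibGDX exposes the same concept as World.setContactListener / ContactListener.beginContact; the sound helpers and the ground check below are placeholders, not real API):

        #include <Box2D/Box2D.h>

        class SfxContactListener : public b2ContactListener {
        public:
            // Called once when two fixtures start touching.
            void BeginContact(b2Contact* contact) override {
                b2Body* a = contact->GetFixtureA()->GetBody();
                b2Body* b = contact->GetFixtureB()->GetBody();
                if (isGround(a) || isGround(b)) {
                    playLandSound();      // placeholder helper
                } else {
                    playFruitHitSound();  // placeholder helper
                }
            }

        private:
            // Placeholder: tag bodies (user data, a lookup table, ...) to tell ground from fruit.
            bool isGround(b2Body* /*body*/) const { return false; }
            void playLandSound() {}
            void playFruitHitSound() {}
        };

        // Registration, done once after creating the world:
        //   SfxContactListener listener;
        //   world.SetContactListener(&listener);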

    Read the article

  • 2D Car Simulation with Throttle Linear Physics

    - by James
    I'm trying to make a simulation game for an automatic cruise control system. The system simulates a car on varying inclinations and throttle settings. I've coded up to the car physics, but these do not make sense. The dynamics of the simulation are specified as follows:

        a  = V' - V
        T  = (k1)V + θ(k2) + ma
        V' = (1 - (k1 / m) V) + T - (k2 / m) θ

    where:

        T  = throttle position
        k1 = viscous friction
        V  = speed
        V' = next speed
        θ  = angle of incline
        k2 = m g sin θ
        a  = acceleration
        m  = mass

    Notice that the angle of incline in the equations is not passed through sin or cos. Even the equation for acceleration isn't right. Can anyone correct them, or am I misinterpreting the physics?
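
    For comparison, a standard way to write the longitudinal dynamics (a sketch that assumes viscous friction k1·V and the gravity component along the slope are the only resistive forces) is:

        m·a = T − k1·V − m·g·sin(θ)

    which, with a = (V' − V) / Δt, discretises to:

        V' = V + (Δt / m)·(T − k1·V − m·g·sin(θ))

    So the incline should enter through sin(θ); the equations as given appear to fold m·g·sin(θ) into k2 and then multiply by θ again, which would apply the angle twice.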

    Read the article

  • HLSL problem with divide by homogeneous component

    - by Berend
    When I try to divide my position.z by my position.w in HLSL, the result is always 1.0f or higher. Is this a common problem for some reason? When I divide my position.x or y by w this works fine, but the divide for z gives a wrong result. I use my camera's view matrix and the same projection matrix as in the game, because I want to create a depth map from the camera position. Can anybody explain what I'm doing wrong? Do I need another view matrix?

    Read the article

  • 2D SAT Collision Detection not working when using certain polygons (With example)

    - by sFuller
    My SAT algorithm falsely reports that collision is occurring when using certain polygons. I believe this happens when using a polygon that does not contain a right angle. Here is a simple diagram of what is going wrong. Here is the problematic code:

        std::vector<vec2> axesB = polygonB->GetAxes();
        // loop over axes B
        for(int i = 0; i < axesB.size(); i++)
        {
            float minA, minB, maxA, maxB;
            polygonA->Project(axesB[i], &minA, &maxA);
            polygonB->Project(axesB[i], &minB, &maxB);
            float intervalDistance = polygonA->GetIntervalDistance(minA, maxA, minB, maxB);
            if(intervalDistance >= 0)
                return false; // Collision not occurring
        }

    This function retrieves axes from the polygon:

        std::vector<vec2> Polygon::GetAxes()
        {
            std::vector<vec2> axes;
            for(int i = 0; i < verts.size(); i++)
            {
                vec2 a = verts[i];
                vec2 b = verts[(i+1) % verts.size()];
                vec2 edge = b - a;
                axes.push_back(vec2(-edge.y, edge.x).GetNormailzed());
            }
            return axes;
        }

    This function returns the normalized vector:

        vec2 vec2::GetNormailzed()
        {
            float mag = sqrt( x*x + y*y );
            return *this / mag;
        }

    This function projects a polygon onto an axis:

        void Polygon::Project(vec2* axis, float* min, float* max)
        {
            float d = axis->DotProduct(&verts[0]);
            float _min = d;
            float _max = d;
            for(int i = 1; i < verts.size(); i++)
            {
                d = axis->DotProduct(&verts[i]);
                _min = std::min(_min, d);
                _max = std::max(_max, d);
            }
            *min = _min;
            *max = _max;
        }

    This function returns the dot product of the vector with another vector:

        float vec2::DotProduct(vec2* other)
        {
            return (x*other->x + y*other->y);
        }

    Could anyone give me a pointer in the right direction as to what could be causing this bug?

    Edit: I forgot this function, which gives me the interval distance:

        float Polygon::GetIntervalDistance(float minA, float maxA, float minB, float maxB)
        {
            float intervalDistance;
            if (minA < minB)
            {
                intervalDistance = minB - maxA;
            }
            else
            {
                intervalDistance = minA - maxB;
            }
            return intervalDistance; // A positive value indicates this axis can be separated.
        }

    Edit 2: I have recreated the problem in HTML5/JavaScript: Demo
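
    One thing worth noting about the algorithm itself, independent of this specific code: SAT only works if the projection axes include the face normals of both polygons. Testing only polygonB's axes can miss a separating axis that exists only among polygonA's normals, which yields exactly this kind of false positive on shapes without right angles (where the two polygons' normals no longer coincide). A sketch of the combined loop, reusing the Polygon/vec2 interface from the question:

        // Sketch: gather axes from both polygons before projecting.
        bool polygonsCollide(Polygon* polygonA, Polygon* polygonB)
        {
            std::vector<vec2> axes  = polygonA->GetAxes();
            std::vector<vec2> axesB = polygonB->GetAxes();
            axes.insert(axes.end(), axesB.begin(), axesB.end());

            for (size_t i = 0; i < axes.size(); i++)
            {
                float minA, maxA, minB, maxB;
                polygonA->Project(&axes[i], &minA, &maxA);
                polygonB->Project(&axes[i], &minB, &maxB);
                if (polygonA->GetIntervalDistance(minA, maxA, minB, maxB) >= 0)
                    return false; // found a separating axis: no collision
            }
            return true; // no separating axis on either polygon's normals: collision
        }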

    Read the article

  • Good GUI for OpenGL

    - by Cristina
    I am starting to learn OpenGL with FreeGLUT, using the Superbible and the knowledge I have from elementary graphics to brush up on my skills. To get more from this experience I want to integrate a GUI to replace the one FreeGLUT uses. Now my question is this: is this possible, and what library should I use? Some characteristics for the library:

    - Open source
    - Multi-platform (Linux and Windows)
    - C/C++

    If you have any other recommendations, please feel free to post them along with your answers to my problem.

    Read the article

  • Need help drawing planets in Java.

    - by d33j
    I am looking for help/links/notes/algorithms/URLs/examples on drawing/rendering spheres in pure Java (so that I can hopefully, one day, generate/render planets with various surfaces & atmospheres). So for the moment, I'd be pretty happy to start off with just drawing a wireframe sphere (or several). PS: I don't want to use external libraries like Java3D or JOGL, or aftermarket engines like JMonkeyEngine; I'd rather keep it straight Java.

    Read the article

  • Component-based Rendering

    - by Kikaimaru
    I have a Renderer component that draws a Texture2D (or sprite). According to component-based architecture I should only have an OnUpdate method, and my rendering code should live there, something like:

        spriteBatch.Draw(Texture, Vector2.Zero, Color.White)

    But first I need to call spriteBatch.Begin();. Where should I call it? And how can I make sure it's called before any Renderer component's OnUpdate method? (I need to do more than just Begin(); I also need to set the right render target for the camera, etc.)

    Read the article

  • Unity3D problem. Bullets fall down instead of flying like they should

    - by user2342080
    I used this tutorial as a reference: http://www.youtube.com/watch?v=3L8eaoyZ0Go My problem is that whenever I play the game, EVERYTHING works but the bullets. They just fall down instead of flying forward. This is the Flash version of the game: http://v1k.me/swf/ Can someone help me out? Should I upload the project? This is my "Shoot.js":

        public var bulletPrefab : Transform;
        public var bulletSpeed : float = 20;

        function Update() {
            if(Input.GetMouseButton(0)) {
                if(bulletPrefab || bulletSpeed) {
                    var bulletCreate = Instantiate(bulletPrefab, GameObject.Find("SpawnPoint").transform.position, Quaternion.identity);
                    bulletCreate.rigidbody.AddForce(transform.forward * bulletSpeed);
                }
            }
        }

    Read the article

  • Ways to program glitch-style effects

    - by okkk
    Most tutorials for generating glitch art have to do with some form of manipulation of file compression. Should my goal instead be to replicate the look of these glitches in shaders, or is it somehow possible to authentically generate the compression artifacts in real time? Example: the effect I'm particularly interested in is referred to as datamoshing. It does "things" using the P-frames of a video (frames that I think store just the change in pixels). I feel like I need a better understanding of both graphics programming and data compression.

    Read the article

  • How to code Time Stop or Bullet Time in a game?

    - by David Miler
    I am developing a single-player RPG platformer in XNA 4.0. I would like to add an ability that makes time "stop" or slow down, and have only the player character move at the original speed (similar to the Time Stop spell from the Baldur's Gate series). I am not looking for an exact implementation, rather some general ideas and design patterns. EDIT: Thanks all for the great input. I have come up with the following solution:

        public void Update(GameTime gameTime)
        {
            GameTime newGameTime = new GameTime(gameTime.TotalGameTime,
                new TimeSpan(gameTime.ElapsedGameTime.Ticks / DESIRED_TIME_MODIFIER));
            gameTime = newGameTime;
        }

    or something along these lines. This way I can pass a different time to the player component and a different one to the rest. It certainly is not universal enough to work for a game where warping time like this would be a central element, but I hope it should work for this case. I kinda dislike the fact that it litters the main Update loop, but it certainly is the easiest way to implement it. I guess that is essentially the same as tesselode suggested, so I'm going to give him the green tick :)

    Read the article
