Search Results

Search found 37616 results on 1505 pages for 'model driven development'.


  • How to fetch only the sprites in the player's range of motion for collision testing? (2D, axis aligned sprites)

    - by Twodordan
    I am working on a 2D sprite game for educational purposes. (In case you want to know, it uses WebGL and JavaScript.) I've implemented movement using the Euler method (and delta time) to keep things simple. Now I'm trying to tackle collisions. The way I wrote things, my game only has rectangular sprites (axis-aligned, never rotated) of various/variable sizes. So I need to figure out what I hit and which side of the target sprite I hit (and I'm probably going to use these intersection tests). The old-fashioned method seems to be a tile-based grid, so you only test against a few tiles at a time, but that sounds silly and impractical for my game. (Splitting the whole level into blocks, with each sprite's bounding box spanning multiple blocks, I might abide. But if the sprites change size and move around, you have to keep updating which tiles they belong to every frame, which doesn't sound right.) In Flash you can test collision under one point, but it's not efficient to iterate through all the elements on stage each frame (hence why people use the tile method). Bottom line: I'm trying to figure out how to test only the elements within the player's range of motion. I know how to get the range of motion, and I have a good idea of how to write a collisionCheck(playerSprite, targetSprite) function. But how do I know which sprites are currently in the player's vicinity, so I can fetch only them? Please discuss. Cheers!
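
    One broad-phase structure that copes well with moving, resizable axis-aligned boxes is a coarse spatial hash: every sprite is re-bucketed each frame (which is cheap), and the query only touches the cells overlapped by the player's swept bounds, so the narrow-phase collisionCheck runs against a handful of candidates instead of every sprite. A minimal sketch in Java; the Sprite fields and the cell size are illustrative assumptions, not taken from the question.

        import java.util.*;

        class Sprite { float x, y, w, h; }

        class SpatialHash {
            private final float cellSize;
            private final Map<Long, List<Sprite>> cells = new HashMap<>();

            SpatialHash(float cellSize) { this.cellSize = cellSize; }

            private long key(int cx, int cy) { return ((long) cx << 32) ^ (cy & 0xffffffffL); }

            // Rebuilt every frame: clear(), then insert() each sprite's current bounds.
            void insert(Sprite s) {
                int x0 = (int) Math.floor(s.x / cellSize), x1 = (int) Math.floor((s.x + s.w) / cellSize);
                int y0 = (int) Math.floor(s.y / cellSize), y1 = (int) Math.floor((s.y + s.h) / cellSize);
                for (int cx = x0; cx <= x1; cx++)
                    for (int cy = y0; cy <= y1; cy++)
                        cells.computeIfAbsent(key(cx, cy), k -> new ArrayList<>()).add(s);
            }

            // Query with the player's swept bounds (current box expanded by this frame's movement).
            Set<Sprite> query(float x, float y, float w, float h) {
                Set<Sprite> result = new HashSet<>();
                int x0 = (int) Math.floor(x / cellSize), x1 = (int) Math.floor((x + w) / cellSize);
                int y0 = (int) Math.floor(y / cellSize), y1 = (int) Math.floor((y + h) / cellSize);
                for (int cx = x0; cx <= x1; cx++)
                    for (int cy = y0; cy <= y1; cy++) {
                        List<Sprite> bucket = cells.get(key(cx, cy));
                        if (bucket != null) result.addAll(bucket);
                    }
                return result;
            }

            void clear() { cells.clear(); }
        }

    Only the sprites returned by query need the precise intersection test, so the per-frame cost stays proportional to what is actually near the player.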

    Read the article

  • How to efficiently render resizable GUI elements in DirectX?

    - by PolGraphic
    I wonder what would be the most efficient way to render GUI elements. For constant-size elements (which can still move), a texture atlas seems like a good fit. But what about resizable elements, say a panel with textured borders? Is there any better way than rendering nine rectangles with textures on them (I guess one texture with different texture coordinates for the top-left corner, borders, middle, etc., used in the shader)?
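
    For reference, the nine-rectangle ("nine-slice") layout described above boils down to three column widths and three row heights, with each of the nine cells paired with fixed atlas coordinates; only the vertex positions change when the panel is resized. A small API-agnostic sketch in Java; the Quad type, the fixed border size b, and the square source texture are illustrative assumptions.

        class NineSlice {
            static class Quad {
                float x, y, w, h, u0, v0, u1, v1;
                Quad(float x, float y, float w, float h, float u0, float v0, float u1, float v1) {
                    this.x = x; this.y = y; this.w = w; this.h = h;
                    this.u0 = u0; this.v0 = v0; this.u1 = u1; this.v1 = v1;
                }
            }

            // Builds the 9 quads for a panel of size (w, h) whose source image is tex pixels
            // square with a fixed border of b pixels.
            static Quad[] build(float x, float y, float w, float h, float b, float tex) {
                float[] xs = { 0, b, w - b, w };              // column edges in panel space
                float[] ys = { 0, b, h - b, h };              // row edges in panel space
                float[] us = { 0, b / tex, 1 - b / tex, 1 };  // matching atlas coordinates
                Quad[] quads = new Quad[9];
                int i = 0;
                for (int row = 0; row < 3; row++)
                    for (int col = 0; col < 3; col++)
                        quads[i++] = new Quad(x + xs[col], y + ys[row],
                                              xs[col + 1] - xs[col], ys[row + 1] - ys[row],
                                              us[col], us[row], us[col + 1], us[row + 1]);
                return quads;
            }
        }

    The nine quads can go into the same dynamic vertex buffer as the rest of the GUI, so a resizable panel costs a few extra quads rather than a texture re-upload or a dedicated shader path.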

    Read the article

  • Dynamic obstacle avoidance in a navigation mesh system

    - by Variable
    I've built my pathfinding system with Unreal Engine. The pathfinding itself works just fine, but I can't find a proper way to solve the dynamic obstacle avoidance problem. My characters walk all over the map and collide with each other while they move. I tried steering them when a collision occurs, but this doesn't work well. For example, two characters block the road while the third one's path runs right between them, and he gets stuck. Can someone tell me the most popular way of doing dynamic avoidance? Thanks a lot.
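
    The commonly used answers here are local-avoidance algorithms layered on top of the navmesh path, such as RVO/ORCA-style velocity obstacles or a crowd system; a simpler first step is a separation steering force that pushes agents apart before they touch rather than reacting after the collision. A framework-neutral sketch in Java; the Agent type is illustrative and is not Unreal's API.

        import java.util.List;

        class Agent { float x, y, vx, vy, radius; }

        class Steering {
            // Adds a separation force so agents start avoiding each other before they overlap.
            static float[] separation(Agent self, List<Agent> neighbours, float strength) {
                float fx = 0, fy = 0;
                for (Agent other : neighbours) {
                    if (other == self) continue;
                    float dx = self.x - other.x, dy = self.y - other.y;
                    float dist = (float) Math.sqrt(dx * dx + dy * dy);
                    float comfort = (self.radius + other.radius) * 2f; // start pushing well before contact
                    if (dist > 0 && dist < comfort) {
                        float push = (comfort - dist) / comfort;       // 0..1, stronger when closer
                        fx += (dx / dist) * push;
                        fy += (dy / dist) * push;
                    }
                }
                return new float[] { fx * strength, fy * strength };
            }
        }

    The returned force gets blended with the path-following velocity each tick. For the case where two characters fully block the road, steering alone will not free the third one; that usually needs a repath or a time-out on blocked progress.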

    Read the article

  • How to store character moves (sprite animations)?

    - by Saad
    So I'm thinking about making a small RPG, mainly to test out different design patterns I've been learning about. But the one question I'm not too sure how to approach is how to store an array of character moves in the best way possible. Let's say I have arrays of different sprites. This is how I'm thinking about implementing it:

        array attack = new array(10);
        array attack2 = new array(5);
        (loop)
            // blit some image
            attack.push(imageInstance);
        (end loop)

    Now every time I want the animation I call on attack or attack2; is there a better structure? The problem is, let's say there are 100 different attacks and a player can have up to 10 attacks equipped. How do I tell which attack the user has; should I use a hash map?
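
    A map does fit this: keep one shared registry of all attack animations keyed by name (or an ID), and give each character a short list of keys for what it currently has equipped, so the frames themselves are never duplicated per character. A minimal sketch in Java; the Animation type and the equip limit are illustrative.

        import java.util.*;

        class Animation { List<Object> frames = new ArrayList<>(); }   // frames = your sprite images

        class AttackLibrary {
            private final Map<String, Animation> attacks = new HashMap<>();

            void register(String name, Animation anim) { attacks.put(name, anim); }
            Animation get(String name) { return attacks.get(name); }
        }

        class Character {
            private final AttackLibrary library;
            private final List<String> equipped = new ArrayList<>();    // up to 10 keys, not copies

            Character(AttackLibrary library) { this.library = library; }

            void equip(String attackName) { if (equipped.size() < 10) equipped.add(attackName); }

            Animation animationFor(int slot) { return library.get(equipped.get(slot)); }
        }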

    Read the article

  • Making organic 2D tilemaps for tile based games...

    - by Codejoy
    So I have always wondered how one makes a nice (not so squarish) 2D tile map. Is it possible? I think most games nowadays use textured polygons, but my game engine doesn't support that to my knowledge. It does support the TMX files generated by mapeditor.org's Tiled Map Editor, though. In my game I want nice twisting and turning caverns to traverse, so I was wondering about ideas for such a process: is it in the art style? The type of tile engine? Both? What are some common techniques?
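
    Both halves usually matter: the art needs dedicated edge and corner transition tiles, and the engine picks those transition tiles automatically by looking at each tile's neighbours (bitmask autotiling), which is what makes caverns read as twisting curves even on a square grid. A minimal 4-bit sketch in Java; the solid/open grid and the 16-tile transition set it indexes into are illustrative assumptions.

        class Autotile {
            // Computes a 0..15 index for the tile at (x, y): bit 0 = north neighbour solid,
            // bit 1 = east, bit 2 = south, bit 3 = west. Artists draw one tile per index,
            // so edges and corners get rounded transition art instead of hard squares.
            static int index(boolean[][] solid, int x, int y) {
                int index = 0;
                if (isSolid(solid, x, y - 1)) index |= 1;  // north
                if (isSolid(solid, x + 1, y)) index |= 2;  // east
                if (isSolid(solid, x, y + 1)) index |= 4;  // south
                if (isSolid(solid, x - 1, y)) index |= 8;  // west
                return index;
            }

            static boolean isSolid(boolean[][] solid, int x, int y) {
                if (x < 0 || y < 0 || y >= solid.length || x >= solid[0].length) return true; // map edge counts as rock
                return solid[y][x];
            }
        }

    Tiled itself has terrain brushes that apply the same idea at edit time, so the TMX files can already contain the transition tiles.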

    Read the article

  • Changing the material on an object on click in Unity

    - by user1509674
    I am working in Unity 2D. I have six game objects: Object1, Object2, Object3 (these are images) and ObjectImage1, ObjectImage2, ObjectImage3 (these are also images). I have arranged the objects in the scene as a list, one below another: Object1, Object2, Object3. When I click Object1, it should change to ObjectImage1. When I click Object2, it should change to ObjectImage2, and the previously selected Object1 (currently showing ObjectImage1) should change back to Object1. When I click Object3, it should change to ObjectImage3, and Object2 (currently showing ObjectImage2) should change back to Object2. This is similar to a selection highlight. I have it coded so that when I click Object2 it changes to ObjectImage2, but the first object does not change back from ObjectImage1 to Object1. Can anybody help me code this?

    Edit:

        public GameObject newSprite;
        private Vector3 currentSpritePosition;

        void Start()
        {
            newSprite.renderer.enabled = false;
            currentSpritePosition = transform.position;
            // then make it invisible
            renderer.enabled = false;
            // give the new sprite the position of the latter
            newSprite.transform.position = currentSpritePosition;
            // then make it visible
            newSprite.renderer.enabled = true;
        }

        void OnMouseExit()
        {
            // just the reverse process
            renderer.enabled = true;
            newSprite.renderer.enabled = false;
        }

    This is the code used to change the material:

        public GameObject newSprite;
        private Vector3 currentSpritePosition;

        void Start()
        {
            newSprite.renderer.enabled = false;
        }

        void OnMouseEnter()
        {
            // getting the current position of the current sprite if ever it can move
            currentSpritePosition = transform.position;
            // then make it invisible
            renderer.enabled = false;
            // give the new sprite the position of the latter
            newSprite.transform.position = currentSpritePosition;
            // then make it visible
            newSprite.renderer.enabled = true;
        }

        void OnMouseExit()
        {
            // just the reverse process
            renderer.enabled = true;
            newSprite.renderer.enabled = false;
        }

    Read the article

  • Client side latency when using prediction

    - by Tips48
    I've implemented client-side prediction in my game: when input is received by the client, it first sends it to the server and then acts on it just as the server will, to reduce the appearance of lag. The problem is that the server is authoritative, so when the server sends back the position of the entity to the client, it undoes the effect of the prediction and creates a rubber-banding effect. For example: client sends input to server - client reacts on input - server receives and reacts on input - server sends back response - client reaction is undone due to latency between server and client. To solve this, I've decided to store the game state and input every tick on the client, and then when I receive a packet from the server, get the game state from when the packet was sent and simulate the game up to the current point. My questions: Won't this cause lag? If I'm receiving 20-30 EntityPositionPackets a second, that means I have to run 20-30 simulations of the game state. How do I sync the client and server tick? Currently, I'm sending the millisecond the packet was sent by the server, but I think it's adding too much complexity instead of just sending the tick. The problem with converting it to sending the tick is that I have no guarantee that the client and server are ticking at the same rate, for example if the client is a low-end PC.
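
    The usual shape of this, often called server reconciliation, is to number each input, keep only the inputs the server has not yet acknowledged, and on every authoritative packet snap to the server state and replay those pending inputs; that is a handful of input re-applications per packet for the local player, not a full re-simulation of the whole game. A minimal sketch in Java; the types, field names and movement rule are illustrative.

        import java.util.ArrayDeque;
        import java.util.Deque;

        class Input { int sequence; float dx, dy; }
        class ServerState { int lastProcessedSequence; float x, y; }

        class PredictedEntity {
            float x, y;
            private final Deque<Input> pending = new ArrayDeque<>();
            private int nextSequence = 0;

            // Called when the player presses something: predict locally and remember the input.
            Input applyLocal(float dx, float dy) {
                Input in = new Input();
                in.sequence = nextSequence++;
                in.dx = dx; in.dy = dy;
                x += dx; y += dy;              // same movement rule the server uses
                pending.addLast(in);
                return in;                     // also send this to the server
            }

            // Called for every authoritative packet: snap, drop acknowledged inputs, replay the rest.
            void reconcile(ServerState state) {
                x = state.x; y = state.y;
                while (!pending.isEmpty() && pending.peekFirst().sequence <= state.lastProcessedSequence)
                    pending.removeFirst();
                for (Input in : pending) { x += in.dx; y += in.dy; }
            }
        }

    This also sidesteps tick synchronisation for the player's own entity: the server just echoes the last input sequence number it processed, so no shared clock is needed for the replay.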

    Read the article

  • How do I make time?

    - by SystemNetworks
    I want to display some text for a certain amount of time. One way is to use threads. Are there any other ways? I can't get threads working with Slick2D. This is my code when I use threads with Slick:

        package javagame;

        import org.newdawn.slick.GameContainer;
        import org.newdawn.slick.Graphics;
        import org.newdawn.slick.Image;
        import java.util.Random;
        import org.newdawn.slick.Input;
        import org.newdawn.slick.*;
        import org.newdawn.slick.state.*;
        import org.lwjgl.input.Mouse;

        public class thread1 implements Runnable {
            String showUp;
            int timeLeft;

            public thread1(String s) {
                s = showUp;            // note: this assigns the parameter from the (null) field; it was probably meant to be showUp = s
            }

            public void run(Graphics g) {
                try {
                    g.drawString("%s is sleeping %d", 500, 500);
                    Thread.sleep(timeLeft);
                    g.drawString("%s is awake", 600, 600);
                } catch (Exception e) {
                }
            }

            @Override
            public void run() {
                // TODO Auto-generated method stub
                run();                 // note: this calls the no-argument run() recursively, which is what causes the stack overflow
            }
        }

    Eclipse auto-generated the no-argument run(), and when I call it from my main class I get a stack overflow!
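
    Slick2D already passes the elapsed milliseconds into update(), so a countdown field avoids threads entirely: start the timer when the message should appear, subtract delta each update, and only draw while it is positive. A small sketch in Java written against Slick's BasicGame; the class and field names are illustrative.

        import org.newdawn.slick.BasicGame;
        import org.newdawn.slick.GameContainer;
        import org.newdawn.slick.Graphics;
        import org.newdawn.slick.SlickException;

        public class TimedMessageGame extends BasicGame {
            private String message = "";
            private int messageTimeLeft;           // milliseconds remaining

            public TimedMessageGame() { super("Timed message demo"); }

            public void showMessage(String text, int durationMillis) {
                message = text;
                messageTimeLeft = durationMillis;
            }

            @Override
            public void init(GameContainer container) throws SlickException {
                showMessage("Hello for three seconds", 3000);
            }

            @Override
            public void update(GameContainer container, int delta) throws SlickException {
                if (messageTimeLeft > 0) {
                    messageTimeLeft -= delta;      // delta = ms since the last update call
                }
            }

            @Override
            public void render(GameContainer container, Graphics g) throws SlickException {
                if (messageTimeLeft > 0) {
                    g.drawString(message, 500, 500);   // only drawn while the timer runs
                }
            }
        }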

    Read the article

  • How to move a UIView along a curved CGPath as the user drags the view

    - by Felipe Cypriano
    I'm trying to build an interface where the user can move his finger around the screen and a list of images moves along a path. The idea is that the images' centers never leave the path. Most of the things I found were about how to animate using CGPath, not about actually using the path as the track for a user movement. I need the objects to stay on the path even if the user isn't moving his finger over the path. For example (image below), if the object is at the beginning of the path and the user touches anywhere on the screen and moves his finger from left to right, I need the object to move from left to right but following the path, that is, going up as it goes to the right towards the path's end. This is the path I've drawn; imagine that I'll have a view (any image) that the user can touch and drag along the path, with no need to move the finger exactly over the path. If the user moves from left to right, the image should move from left to right, but going up if needed, following the path. This is how I'm creating the path:

        CGPoint endPointUp = CGPointMake(315, 124);
        CGPoint endPointDown = CGPointMake(0, 403);
        CGPoint controlPoint1 = CGPointMake(133, 187);
        CGPoint controlPoint2 = CGPointMake(174, 318);

        CGMutablePathRef path = CGPathCreateMutable();
        CGPathMoveToPoint(path, NULL, endPointUp.x, endPointUp.y);
        CGPathAddCurveToPoint(path, NULL, controlPoint1.x, controlPoint1.y,
                              controlPoint2.x, controlPoint2.y,
                              endPointDown.x, endPointDown.y);

    Any idea how I can achieve this?
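
    A common way to implement this kind of constrained drag is to flatten the curve into sampled points once, keep a single parameter for how far along the path the view is, advance that parameter from the horizontal drag delta, and place the view at the sampled point. A framework-neutral sketch in Java of the cubic Bezier sampling and the drag-to-parameter step; the sample count and the gesture hookup are illustrative assumptions.

        class PathFollower {
            private final float[][] samples;   // precomputed points along the curve
            private float t = 0f;              // 0 = start of path, 1 = end of path

            PathFollower(float[] p0, float[] c1, float[] c2, float[] p3, int steps) {
                samples = new float[steps + 1][2];
                for (int i = 0; i <= steps; i++) {
                    float u = i / (float) steps;
                    samples[i][0] = cubic(p0[0], c1[0], c2[0], p3[0], u);
                    samples[i][1] = cubic(p0[1], c1[1], c2[1], p3[1], u);
                }
            }

            private static float cubic(float a, float b, float c, float d, float u) {
                float v = 1 - u;
                return v * v * v * a + 3 * v * v * u * b + 3 * v * u * u * c + u * u * u * d;
            }

            // Called with the horizontal drag delta; pathWidth is the path's horizontal extent,
            // so a full left-to-right swipe walks the whole curve.
            float[] drag(float deltaX, float pathWidth) {
                t = Math.max(0f, Math.min(1f, t + deltaX / pathWidth));
                return samples[Math.round(t * (samples.length - 1))];  // new center for the view
            }
        }

    In the setup above, the drag delta would come from the touch-moved callback, and the returned point becomes the view's center each time it fires; a negative delta simply walks back down the path.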

    Read the article

  • Basic tutorial/introduction to 3D matrices, ideally in C++, without OpenGL or DirectX

    - by René Nyffenegger
    I am wondering if there is a simple tutorial that covers the basics of how to initialize rotation, translation and projection matrices, how to multiply them, and how to get the screen coordinates afterwards for a 3D point. Ideally, the tutorial comes with compilable code and does not depend on any third-party library. Searching the internet, I found lots of tutorials, so that is not the problem. Yet it seemed all of these either covered OpenGL or DirectX, or they were theoretical in nature.
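
    Until such a tutorial turns up, the core of what is being asked fits in a few dozen lines: build rotation, translation and perspective projection as 4x4 matrices, multiply them, transform a point, then do the perspective divide and viewport mapping to get pixel coordinates. A self-contained sketch (in Java rather than C++, but with no libraries involved; row-major storage and column vectors are the assumed convention):

        class Mat4 {
            final double[][] m = new double[4][4];

            static Mat4 identity() {
                Mat4 r = new Mat4();
                for (int i = 0; i < 4; i++) r.m[i][i] = 1;
                return r;
            }

            static Mat4 rotationY(double angle) {
                Mat4 r = identity();
                r.m[0][0] = Math.cos(angle);  r.m[0][2] = Math.sin(angle);
                r.m[2][0] = -Math.sin(angle); r.m[2][2] = Math.cos(angle);
                return r;
            }

            static Mat4 translation(double x, double y, double z) {
                Mat4 r = identity();
                r.m[0][3] = x; r.m[1][3] = y; r.m[2][3] = z;
                return r;
            }

            // Standard perspective projection: fov in radians, near/far clip planes.
            static Mat4 perspective(double fov, double aspect, double near, double far) {
                Mat4 r = new Mat4();
                double f = 1.0 / Math.tan(fov / 2);
                r.m[0][0] = f / aspect;
                r.m[1][1] = f;
                r.m[2][2] = (far + near) / (near - far);
                r.m[2][3] = 2 * far * near / (near - far);
                r.m[3][2] = -1;
                return r;
            }

            Mat4 multiply(Mat4 o) {
                Mat4 r = new Mat4();
                for (int i = 0; i < 4; i++)
                    for (int j = 0; j < 4; j++)
                        for (int k = 0; k < 4; k++)
                            r.m[i][j] += m[i][k] * o.m[k][j];
                return r;
            }

            // Transforms a 3D point, does the perspective divide, and maps to pixel coordinates.
            double[] projectToScreen(double x, double y, double z, int width, int height) {
                double[] v = { x, y, z, 1 };
                double[] out = new double[4];
                for (int i = 0; i < 4; i++)
                    for (int k = 0; k < 4; k++)
                        out[i] += m[i][k] * v[k];
                double sx = (out[0] / out[3] * 0.5 + 0.5) * width;
                double sy = (1 - (out[1] / out[3] * 0.5 + 0.5)) * height;  // flip y for screen space
                return new double[] { sx, sy };
            }
        }

    Usage would be along the lines of Mat4 mvp = Mat4.perspective(1.0, 16.0 / 9.0, 0.1, 100).multiply(Mat4.translation(0, 0, -5)).multiply(Mat4.rotationY(0.5)); followed by mvp.projectToScreen(x, y, z, 1280, 720).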

    Read the article

  • Can I use DllImport/PInvoke in libraries loaded as Assets in Unity Free?

    - by sebf
    I am interested in utilising third-party libraries in Unity Free. I know Unity can use managed libraries as assets, but only the Pro version supports using native libraries (DllImport within scripts). This thread, however, suggests that it is possible to import DLLs in the free version. I would like to use native libraries (as a hobbyist I cannot afford Pro), but I want to do it the supported way so I don't have to worry about Unity 'fixing' this hole, if that is what it is. Is there any supported way to use native libraries with Unity Free? (i.e. does that thread suggest a workaround or is it a 'bug'? Is it supported to use DllImport/PInvoke in libraries loaded as assets? Could I create a wrapper myself?)

    Read the article

  • How a "Collision System" should be implemented?

    - by nathan
    My game is written with an entity system approach using the Artemis framework. Right now my collision detection is called from the movement system, but I'm wondering whether that is a proper way to do collision detection with such an approach. Right now I'm thinking of a new system dedicated to collision detection that would process all the solid entities and check whether they collide with one another. Is that a correct way to handle collision detection with an entity system approach? Also, how should I implement this collision system? I thought of an IntervalEntitySystem that would check every 200ms (this value is chosen following the Artemis documentation) whether some entities are colliding.

        protected void processEntities(ImmutableBag<Entity> ib) {
            for (int i = 0; i < ib.size(); i++) {
                Entity e = ib.get(i);
                // check for collision with other entities here
            }
        }
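
    A dedicated collision system is a common arrangement for this; whichever framework drives it, its core is a pairwise pass over the solid entities (each unordered pair visited once) that hands confirmed overlaps to whatever reacts to them. A framework-neutral sketch in Java of that inner loop; the Bounds type and the listener callback are illustrative, not Artemis API.

        import java.util.List;

        class Bounds { float x, y, w, h; }

        interface CollisionListener { void onCollision(int a, int b); }

        class CollisionPass {
            // Visits each unordered pair exactly once; O(n^2), fine for modest entity counts,
            // and a spatial grid can replace the inner loop later without changing callers.
            static void check(List<Bounds> solids, CollisionListener listener) {
                for (int i = 0; i < solids.size(); i++) {
                    for (int j = i + 1; j < solids.size(); j++) {
                        Bounds a = solids.get(i), b = solids.get(j);
                        boolean overlap = a.x < b.x + b.w && b.x < a.x + a.w
                                       && a.y < b.y + b.h && b.y < a.y + a.h;
                        if (overlap) listener.onCollision(i, j);
                    }
                }
            }
        }

    One caveat about the 200ms interval: fast-moving entities can pass through each other between checks, so collision is often run every frame even when other systems are interval-based.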

    Read the article

  • What is the format of DXGI_FORMAT_D24_UNORM_S8_UINT?

    - by bobobobo
    I'm trying to read the values in a depth texture of type DXGI_FORMAT_D24_UNORM_S8_UINT. I know this means "a 32-bit z-buffer format that supports 24 bits for depth and 8 bits for stencil", but how do you interpret those 24 bits? It's clearly not going to be a 32-bit int, and it's not going to be a 32-bit float. If it is an integer value, how "far away" is a value of "1" in the depth texture?
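
    UNORM means the 24 depth bits are an unsigned normalized integer: the raw value divided by 2^24 - 1 gives the post-projection depth in [0, 1], which is non-linear, and turning that into a view-space distance means undoing the projection. A sketch of that interpretation in Java; it assumes the depth sits in the low 24 bits of each 32-bit texel on readback and a standard perspective projection that maps near to 0 and far to 1, so both assumptions should be checked against the actual data.

        public class DepthFormat {
            // Raw texel -> normalized depth in [0, 1] (UNORM with 24 bits of precision).
            static double normalizedDepth(int rawTexel) {
                int depthBits = rawTexel & 0x00FFFFFF;        // assumption: stencil in the top 8 bits
                return depthBits / (double) 0x00FFFFFF;       // 2^24 - 1
            }

            // Normalized depth -> view-space distance, for a projection mapping near to 0 and far to 1.
            static double viewSpaceDistance(double d, double near, double far) {
                return (far * near) / (far - d * (far - near));
            }

            public static void main(String[] args) {
                // A raw integer value of 1 is 1 / (2^24 - 1) in normalized terms...
                double d = normalizedDepth(1);
                // ...which lands barely past the near plane once the projection is undone.
                System.out.println(viewSpaceDistance(d, 0.1, 1000.0));
            }
        }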

    Read the article

  • Make Pong on android using OpenGL-ES

    - by brainydexter
    I am trying to make a simple Pong game using OpenGL ES. I have checked out some of the tutorials/samples, but most of them date back to 2009 or earlier. I am familiar with game programming, and consider Pong to be the hello world of games! Right now I intend to build it with the standard SDK (2.3), but eventually I want to move to the NDK, so I can port my other work to Android. Would anyone have a good reference for a starting point? Thanks

    Read the article

  • Corona SDK: Animation takes a long time to play after "prepare" step

    - by Michael Taufen
    First off, I'm using the current publicly available build, version 2011.704. I'm building a platformer, and have a character that runs along and jumps when the screen is tapped. While jumping, the animation code has him assume a svelte jumping pose, and upon the detection of a collision with the ground, he returns to running. All of this happens. The problem is that there is this strange gap of time, about 1/2 a second by the feel of it, where my character sits on the first frame of the run animation after landing, before it actually starts playing. This leads me to believe that the problem is somewhere between the "prepare" step of loading up a sprite set's animation sequence and the "play" step. Thanks in advance for any help :). My code for when my character lands is as follows:

        local function collisionHandler ( event )
            if (event.object1.myName == "character") and (event.object2.type == "terrain") then
                inAir = false
                characterInstance:prepare( "run" ) -- TODO: time between prepare and play is curiously long...
                characterInstance:play()
            end
        end

    Read the article

  • How can I view an R32G32B32 texture?

    - by bobobobo
    I have a texture with R32G32B32 floats. I create this texture in-program on D3D11, using DXGI_FORMAT_R32G32B32_FLOAT. Now I need to see the texture data for debugging purposes, but it will not save to anything but DDS, printing the error "Can't find matching WIC format, please save this file to a DDS" in the debug output. So I write it to DDS, but now I can't open it! The DirectX Texture Tool says "An error occurred trying to open that file". I know the texture is working because I can read it on the GPU and the colors seem correct. How can I view an R32G32B32 texture in an image viewer?
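
    If no viewer will accept the DDS, one workaround is to dump the raw float data from the staging readback and convert it to an ordinary 8-bit image yourself, clamping (or tone-mapping) each channel. A small sketch in Java using the standard library's ImageIO; the tightly packed float array is an assumption about how the readback was copied out.

        import java.awt.image.BufferedImage;
        import java.io.File;
        import java.io.IOException;
        import javax.imageio.ImageIO;

        public class TextureDump {
            // pixels is a tightly packed array of R, G, B floats, row by row.
            static void dumpToPng(float[] pixels, int width, int height, File out) throws IOException {
                BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
                for (int y = 0; y < height; y++) {
                    for (int x = 0; x < width; x++) {
                        int i = (y * width + x) * 3;
                        int r = toByte(pixels[i]);
                        int g = toByte(pixels[i + 1]);
                        int b = toByte(pixels[i + 2]);
                        img.setRGB(x, y, (r << 16) | (g << 8) | b);
                    }
                }
                ImageIO.write(img, "png", out);
            }

            static int toByte(float v) {
                return (int) (Math.max(0f, Math.min(1f, v)) * 255f + 0.5f);  // clamp then scale
            }
        }

    One D3D11-specific caveat when copying out of the staging texture: mapped rows can be padded to RowPitch, so copy row by row rather than as a single block.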

    Read the article

  • Realistic planetary terrain generation with weights

    - by Programmdude
    I need terrain generation for a planet. The planet will be divided up into several hundred hexes, and I need the result to be realistic and based on weights. I have dabbled in terrain generation before, but nothing like this, so I figure it would be a good idea to ask the community for answers, recommended articles or the like. By realistic, I mean not just random hexes, but continent-shaped landmasses with a few islands, more desert around the equator and more ice around the poles. I also have two weights I need to base it on: ice percentage and water percentage. That means that around XX% of the planet will need to be water. Does anyone have any advice or places to start? Generating arbitrary terrain is easy, but something a bit more "organic" like this seems rather difficult. It also needs to be seamless. That should be obvious since it's a planet, but there is no harm in pointing it out.
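
    One part of this with a clean, exact answer is hitting the requested water percentage: generate a continuous elevation value per hex with any seamless noise, then pick the sea level as a percentile of the sorted values rather than a fixed threshold, so exactly the requested fraction ends up under water; a latitude-biased temperature value handles the ice and desert bands the same way. A sketch in Java; the noise source is left abstract and the weights are illustrative.

        import java.util.Arrays;

        public class PlanetGen {
            // elevation[i] is a noise-derived height for hex i; waterFraction is e.g. 0.7 for 70% water.
            static double seaLevelFor(double[] elevation, double waterFraction) {
                double[] sorted = elevation.clone();
                Arrays.sort(sorted);
                int cut = (int) Math.round(waterFraction * (sorted.length - 1));
                return sorted[cut];                    // everything at or below this is ocean
            }

            // Ice can be decided the same way, but biased toward the poles:
            // latitude runs from 0 at the equator to 1 at the poles, plus noise-based variation.
            static boolean isIce(double latitude, double noise, double iceThreshold) {
                double coldness = Math.abs(latitude) + 0.2 * noise;
                return coldness > iceThreshold;
            }
        }

    Applying the same percentile trick to the coldness value pins the ice percentage exactly, and it is the low-frequency noise octaves that turn these thresholds into continent-shaped coastlines rather than flat bands.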

    Read the article

  • What light attenuation function does UDK use?

    - by ananamas
    I'm a big fan of the light attenuation in UDK. Traditionally I've always used the constant-linear-quadratic falloff function to control how "soft" the falloff is, which gives three values to play with. In UDK you can get similar results, but you only need to tweak one value: FalloffExponent. I'm interested in what the actual mathematical function here is. The UDK lighting reference describes it as follows: FalloffExponent: This allows you to modify the falloff of a light. The default falloff is 2. The smaller the number, the sharper the falloff and the more the brightness is maintained until the radius is reached. Does anyone know what it's doing behind the scenes?
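
    UDK's actual source is not quoted here, so this is only an assumption, but the behaviour in that documentation excerpt matches a single-parameter radial falloff of the form (1 - d/radius)^FalloffExponent: small exponents hold the value near full brightness until close to the radius and then drop off sharply, while the default of 2 gives a smooth quadratic-looking fade. A sketch of that assumed form in Java, for comparing against the in-editor result rather than as UDK's confirmed code.

        public class Falloff {
            // Assumed single-parameter falloff consistent with the quoted description;
            // distance and radius in the same units, result in [0, 1].
            static float attenuation(float distance, float radius, float falloffExponent) {
                float x = 1f - Math.min(1f, Math.max(0f, distance / radius));
                return (float) Math.pow(x, falloffExponent);
            }
        }

    With an exponent of 0.5 the light stays near full strength for most of the radius and dies off near the edge; with 8 it fades quickly close to the source, which mirrors the documentation's wording.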

    Read the article

  • Trying to implement fling events on an object

    - by Adam Short
    I have a game object, well, a bitmap, which I'd like to "fling". I'm struggling to attach an OnTouchListener for the fling because it's a Bitmap rather than a View, I'm not sure how to proceed, and I'm struggling to find resources to help. Here's my code so far: https://github.com/addrum/Shapes

    GameActivity class:

        package com.main.shapes;

        import android.app.Activity;
        import android.content.Context;
        import android.graphics.Bitmap;
        import android.graphics.BitmapFactory;
        import android.graphics.Canvas;
        import android.os.Bundle;
        import android.view.GestureDetector;
        import android.view.MotionEvent;
        import android.view.SurfaceHolder;
        import android.view.SurfaceView;
        import android.view.View.OnTouchListener;
        import android.view.Window;

        public class GameActivity extends Activity {

            private GestureDetector gestureDetector;
            View view;
            Bitmap ball;
            float x, y;

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                // Remove title bar
                this.requestWindowFeature(Window.FEATURE_NO_TITLE);
                view = new View(this);
                ball = BitmapFactory.decodeResource(getResources(), R.drawable.ball);
                gestureDetector = new GestureDetector(this, new GestureListener());
                x = 0;
                y = 0;
                setContentView(view);
                ball.setOnTouchListener(new OnTouchListener() {
                    @Override
                    public boolean onTouch(android.view.View v, MotionEvent event) {
                        // TODO Auto-generated method stub
                        return false;
                    }
                });
            }

            @Override
            protected void onPause() {
                super.onPause();
                view.pause();
            }

            @Override
            protected void onResume() {
                super.onResume();
                view.resume();
            }

            public class View extends SurfaceView implements Runnable {

                Thread thread = null;
                SurfaceHolder holder;
                boolean canRun = false;

                public View(Context context) {
                    super(context);
                    holder = getHolder();
                }

                public void run() {
                    while (canRun) {
                        if (!holder.getSurface().isValid()) {
                            continue;
                        }
                        Canvas c = holder.lockCanvas();
                        c.drawARGB(255, 255, 255, 255);
                        c.drawBitmap(ball, x - (ball.getWidth() / 2), y - (ball.getHeight() / 2), null);
                        holder.unlockCanvasAndPost(c);
                    }
                }

                public void pause() {
                    canRun = false;
                    while (true) {
                        try {
                            thread.join();
                        } catch (InterruptedException e) {
                            e.printStackTrace();
                        }
                        break;
                    }
                    thread = null;
                }

                public void resume() {
                    canRun = true;
                    thread = new Thread(this);
                    thread.start();
                }
            }
        }

    GestureListener class:

        package com.main.shapes;

        import android.view.GestureDetector.SimpleOnGestureListener;
        import android.view.MotionEvent;

        public class GestureListener extends SimpleOnGestureListener {

            private static final int SWIPE_MIN_DISTANCE = 120;
            private static final int SWIPE_THRESHOLD_VELOCITY = 200;

            @Override
            public boolean onFling(MotionEvent e1, MotionEvent e2, float velocityX, float velocityY) {
                if (e1.getX() - e2.getX() > SWIPE_MIN_DISTANCE && Math.abs(velocityX) > SWIPE_THRESHOLD_VELOCITY) {
                    // From Right to Left
                    return true;
                } else if (e2.getX() - e1.getX() > SWIPE_MIN_DISTANCE && Math.abs(velocityX) > SWIPE_THRESHOLD_VELOCITY) {
                    // From Left to Right
                    return true;
                }
                if (e1.getY() - e2.getY() > SWIPE_MIN_DISTANCE && Math.abs(velocityY) > SWIPE_THRESHOLD_VELOCITY) {
                    // From Bottom to Top
                    return true;
                } else if (e2.getY() - e1.getY() > SWIPE_MIN_DISTANCE && Math.abs(velocityY) > SWIPE_THRESHOLD_VELOCITY) {
                    // From Top to Bottom
                    return true;
                }
                return false;
            }

            @Override
            public boolean onDown(MotionEvent e) {
                // always return true since all gestures always begin with onDown and
                // if this returns false, the framework won't try to pick up onFling for example.
                return true;
            }
        }

    Read the article

  • How do I use setFilmSize in panda3d to achieve the correct view?

    - by lhk
    I'm working with Panda3D and recently switched my game to isometric rendering. I moved the virtual camera accordingly and set an orthographic lens. Then I implemented the classes "Map" and "Canvas". A canvas is a dynamically generated mesh: a flat quad. I'm using it to render the in-game graphics. Since the game itself is still set in a 3D coordinate system, I'm planning to rely on these canvases to draw sprites. I could have named this class "Tile", but as I'd like to use it for non-tile sketches (enemies, environment) as well, I thought canvas would describe its function better. Map does exactly what its name suggests: its constructor receives the number of rows and columns and then creates a standard isometric map. It uses the Canvas class for tiles. I'm planning to write a map importer that reads a file to create maps on the fly. Here's the Canvas implementation:

        class Canvas:
            def __init__(self, texture, vertical=False, width=1, height=1):
                # create the mesh
                format = GeomVertexFormat.getV3t2()
                format = GeomVertexFormat.registerFormat(format)
                vdata = GeomVertexData("node-vertices", format, Geom.UHStatic)
                vertex = GeomVertexWriter(vdata, 'vertex')
                texcoord = GeomVertexWriter(vdata, 'texcoord')

                # add the vertices for a flat quad
                vertex.addData3f(1, 0, 0)
                texcoord.addData2f(1, 0)
                vertex.addData3f(1, 1, 0)
                texcoord.addData2f(1, 1)
                vertex.addData3f(0, 1, 0)
                texcoord.addData2f(0, 1)
                vertex.addData3f(0, 0, 0)
                texcoord.addData2f(0, 0)

                prim = GeomTriangles(Geom.UHStatic)
                prim.addVertices(0, 1, 2)
                prim.addVertices(2, 3, 0)

                self.geom = Geom(vdata)
                self.geom.addPrimitive(prim)
                self.node = GeomNode('node')
                self.node.addGeom(self.geom)

                # this is the handle for the canvas
                self.nodePath = NodePath(self.node)
                self.nodePath.setSx(width)
                self.nodePath.setSy(height)
                if vertical:
                    self.nodePath.setP(90)

                # the most important part: "Drawing" the image
                self.texture = loader.loadTexture("" + texture + ".png")
                self.nodePath.setTexture(self.texture)

    Now the code for the Map class:

        class Map:
            def __init__(self, rows, columns, size):
                self.grid = []
                for i in range(rows):
                    self.grid.append([])
                    for j in range(columns):
                        # create a canvas for the tile. For testing the texture is preset
                        tile = Canvas(texture="../assets/textures/flat_concrete", width=size, height=size)
                        x = (i - 1) * size
                        y = (j - 1) * size

                        # set the tile up for rendering
                        tile.nodePath.reparentTo(render)
                        tile.nodePath.setX(x)
                        tile.nodePath.setY(y)

                        # and store it for later access
                        self.grid[i].append(tile)

    And finally the usage:

        def loadMap(self):
            self.map = Map(10, 10, 1)

    This function is called within the constructor of the World class; the instantiation of World is the entry point to the execution. The code is pretty straightforward and runs fine. Sadly, the output is not as expected. Please note: the problem is not the white rectangle, that's my player object. The problem is that although the map should have equal width and height, it's stretched weirdly. With orthographic rendering I expected the map to be a perfect square. What did I do wrong?

    UPDATE: I've changed the viewport. This is how I set up the orthographic camera:

        lens = OrthographicLens()
        lens.setFilmSize(40, 20)
        base.cam.node().setLens(lens)

    You can change the "aspect" by modifying the parameters of setFilmSize. I don't know exactly how they relate to window size and screen resolution, but after testing a little the values above seem to work for me. Now everything is rendered correctly as long as I don't resize the window. Every change of the window's size, as well as switching to fullscreen, destroys the correct rendering.
    I know that implementing a listener for resize events is not in the scope of this question. However, I wonder why I need to make the film's width two times bigger than its height; my window is square! Can you tell me how to find the correct settings for the film size?

    UPDATE 2: I can imagine that it's hard to envision the behaviour of the game. At first glance the obvious solution is to pass the window's width and height in pixels to setFilmSize. There are two problems with that approach: the parameters of setFilmSize are in-game units, so you'll get a way too big view if you pass the pixel size; and for some strange reason the image is distorted if you pass equal values for width and height. Here's the output for setFilmSize(800, 800). You'll have to strain your eyes, but you'll see what I mean.

    Read the article

  • Help comparing Cocos2d and Unity3d for this project

    - by Omega
    I will not go into details, but I would like to hear your opinions about this. Essentially, my project will be a 2D game with lots of complex levels, where some might be simple and others might be a bit deeper, with physics, etc. We want to implement our very own online structure: logging in, leaderboards, achievements, friends, etc., with our own servers. This means no OpenFeint nor GameCenter at all. We expect this game to be very large in both graphics and audio. We wish to use in-app purchases. Now, we are considering two options: Cocos2d and Unity3d. We need help deciding, using the factors I mentioned before (networking, good performance even for a game this large in graphics and audio, in-app purchases, etc.), which option would fit better. Technically, both options can create 2D games. I'd like to hear your opinion.

    Read the article

  • GameState management: hierarchical FSM vs stack-based FSM

    - by user8363
    I'm reading a bit about finite state machines for handling game states (or screens). I would like to build a fairly decent FSM that can handle multiple screens. For example, while the game is running I want to be able to pop up an in-game menu; when that happens the main screen must stop updating (the game is paused) but must still be visible in the background. However, when I open an inventory pop-up, the main screen must stay visible and continue updating, etc. I'm a bit confused about the difference in implementation and functionality between hierarchical FSMs and FSMs that manage a stack of states instead. Are they basically the same, or are there important differences?
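
    The behaviour described (a pause menu that blocks updates underneath, an inventory pop-up that does not) is exactly what a stack-based screen manager expresses with two per-screen flags: each frame the stack is walked from the top down, and every screen declares whether the ones below it may keep updating and whether they stay visible. A minimal sketch in Java; the interface and flag names are illustrative.

        import java.util.ArrayDeque;
        import java.util.Deque;

        interface Screen {
            void update(float dt);
            void draw();
            boolean blocksUpdate();   // pause menu: true, inventory pop-up: false
            boolean blocksDraw();     // full-screen states: true, translucent pop-ups: false
        }

        class ScreenStack {
            private final Deque<Screen> stack = new ArrayDeque<>();

            void push(Screen s) { stack.push(s); }
            void pop() { stack.pop(); }

            void update(float dt) {
                boolean allowed = true;
                for (Screen s : stack) {                   // iterates from the top of the stack down
                    if (allowed) s.update(dt);
                    allowed = allowed && !s.blocksUpdate();
                }
            }

            void draw() {
                // count the visible screens from the top (stop after the first opaque one),
                // then draw them bottom-up so overlays land on top
                Object[] top = stack.toArray();            // index 0 = top of stack
                int visible = 0;
                for (Object o : top) { visible++; if (((Screen) o).blocksDraw()) break; }
                for (int i = visible - 1; i >= 0; i--) ((Screen) top[i]).draw();
            }
        }

    A hierarchical FSM is a different tool: it nests states inside states so children inherit transitions, which suits behaviour logic, while the stack answers the layering question of what is on screen right now; many games use both.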

    Read the article

  • What are the cons of using DrawableGameComponent for every instance of a game object?

    - by Kensai
    I've read in many places that DrawableGameComponents should be saved for things like "levels" or various kinds of managers, instead of being used for, say, characters or tiles (like this guy says here). But I don't understand why this is so. I read this post and it made a lot of sense to me, but that view seems to be in the minority. I usually wouldn't pay too much attention to things like this, but in this case I would like to know why the apparent majority believes it is not the way to go. Maybe I'm missing something.

    Read the article

  • Gap in parallaxing background loop

    - by CinetiK
    The bug here is that my background offsets itself a bit from where it should draw, so I get a visible seam (a line). I have trouble understanding why I get this bug when I set a speed different from 1, 2, 4, 8, 16, ... In the main class I set the speed depending on the player speed:

        bgSpeed = -(int)playerMoveSpeed.X / 10;

    and here's my background class:

        class ParallaxingBackground
        {
            Texture2D texture;
            Vector2[] positions;

            public int Speed { get; set; }

            public void Initialize(ContentManager content, String texturePath, int screenWidth, int speed)
            {
                texture = content.Load<Texture2D>(texturePath);
                this.Speed = speed;
                positions = new Vector2[screenWidth / texture.Width + 2];
                for (int i = 0; i < positions.Length; i++)
                {
                    positions[i] = new Vector2(i * texture.Width, 0);
                }
            }

            public void Update()
            {
                for (int i = 0; i < positions.Length; i++)
                {
                    positions[i].X += Speed;
                    if (Speed <= 0)
                    {
                        if (positions[i].X <= -texture.Width)
                        {
                            // snapping to an exact multiple of texture.Width discards the overshoot,
                            // which is where the gap comes from when Speed doesn't divide texture.Width
                            positions[i].X = texture.Width * (positions.Length - 1);
                        }
                    }
                    else
                    {
                        if (positions[i].X >= texture.Width * (positions.Length - 1))
                        {
                            positions[i].X = -texture.Width;
                        }
                    }
                }
            }

            public void Draw(SpriteBatch spriteBatch)
            {
                for (int i = 0; i < positions.Length; i++)
                {
                    spriteBatch.Draw(texture, positions[i], Color.White);
                }
            }
        }
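
    For reference, the usual fix for this family of seams is to wrap by shifting a whole cycle (tile count times tile width) instead of snapping to an absolute position, so whatever the tile overshot by is preserved and the spacing never drifts. A framework-neutral sketch of that wrap in Java; plain floats stand in for the Vector2 X components.

        public class ParallaxWrap {
            // positions[i] is the X of tile i; width is the tile width; speed may be any value or sign.
            static void scroll(float[] positions, float width, float speed) {
                float cycle = width * positions.length;                  // total span covered by all tiles
                for (int i = 0; i < positions.length; i++) {
                    positions[i] += speed;
                    if (positions[i] <= -width) positions[i] += cycle;   // keep the overshoot
                    else if (positions[i] >= cycle - width) positions[i] -= cycle;
                }
            }
        }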

    Read the article

  • Ouya build experiencing odd graphical artifacts, screen half black

    - by Neeko
    I'm witnessing very odd graphical artifacts when I run my Unity game on Ouya. After the Unity splash screen, the game loads with the screen half black. This started occurring out of the blue. It also doesn't occur in the editor or the standalone build, only on Ouya. I can't think of a single reason why this would be happening. If I open the Ouya menu screen and close it, the game returns more or less to normal: there may be some artifacts lingering, but the screen isn't half black like in the screenshot above. I know there's not much to go on, but any insight into why this may be happening is greatly appreciated.

    Read the article
