Search Results

Search found 779 results on 32 pages for 'coordinate'.

Page 12/32

  • 12.04 tablet cursor jumps around when clicking

    - by Larry
    After updating to 12.04 I had to re-install the wizardpen driver (https://help.ubuntu.com/community/TabletSetupWizardpen) to use my Trust tablet. But now, when I change the Coordinate Transformation Matrix to anything other than the identity matrix, moving the pen moves the cursor normally, but when I click (or press with the pen) the cursor jumps to the edge of the screen, seemingly depending on how I edited the matrix. I know the tablet isn't broken, since this behaviour doesn't occur if I leave the matrix as the identity, and the tablet works perfectly on Windows. -L
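
    For reference, this matrix is an xinput device property and can be inspected and set from a terminal; a hedged example (the device name and values are illustrative, not from the question -- the identity matrix is "1 0 0 0 1 0 0 0 1"):

        # Find the tablet's device name or id
        xinput list

        # Set a row-major 3x3 matrix; this example maps the tablet
        # to the left half of a single screen.
        xinput set-prop "WizardPen Tablet" "Coordinate Transformation Matrix" \
            0.5 0 0  0 1 0  0 0 1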

    Read the article

  • I'm confused: what skill should I learn at my internship?

    - by iyad al aqel
    I'm a software engineering student doing my internship this summer. The company asked me to choose one or two skills that I want to master, and they will coordinate with me and give me small tasks up to medium projects to master them. Now I'm confused: should I continue with web development and learn .NET, given that I've been working with PHP for 2 years, or enter the mobile development world and learn Android? Any advice, guys?

    Read the article

  • Getting a mirrored mesh from my data structure

    - by Steve
    Here's the background: I'm in the beginning stages of an RTS game in Unity. I have a procedurally generated terrain with a Perlin-noise height map, as well as a function to generate a river. The problem is that the graphical creation of the map is taking the data structure of the map and rotating it by 180 degrees. I noticed this problem when I was creating my rivers: I would set the river's height to flat, and noticed that the tiles that were actually flat in the graphical representation were flipped and mirrored. Here are 3 screenshots of the map from different angles: http://imgur.com/a/VLHHq As you can see, if you flipped (graphically) the river by 180 degrees on the z axis, it would fit where the terrain is flattened. I have a suspicion it is being caused by a misunderstanding on my part of how vertices work. Alas, here is a snippet of the code that is used. This code creates a new array of Tile objects, which hold the information for each tile, including its type, coordinate, height, and its 4 vertices:

        public DTileMap (int size_x, int size_y) {
            this.size_x = size_x;
            this.size_y = size_y;
            // Initialize map_data array of Tile objects
            map_data = new Tile[size_x, size_y];
            for (int j = 0; j < size_y; j++) {
                for (int i = 0; i < size_x; i++) {
                    map_data[i, j] = new Tile();
                    map_data[i, j].coordinate.x = (int)i;
                    map_data[i, j].coordinate.y = (int)j;
                    map_data[i, j].vertices[0] = new Vector3(i * GTileMap.TileMap.tileSize, map_data[i, j].Height, -j * GTileMap.TileMap.tileSize);
                    map_data[i, j].vertices[1] = new Vector3((i + 1) * GTileMap.TileMap.tileSize, map_data[i, j].Height, -(j) * GTileMap.TileMap.tileSize);
                    map_data[i, j].vertices[2] = new Vector3(i * GTileMap.TileMap.tileSize, map_data[i, j].Height, -(j - 1) * GTileMap.TileMap.tileSize);
                    map_data[i, j].vertices[3] = new Vector3((i + 1) * GTileMap.TileMap.tileSize, map_data[i, j].Height, -(j - 1) * GTileMap.TileMap.tileSize);
                }
            }
        }

    This code sets the river tiles to height 0:

        foreach (Tile t in map_data) {
            if (t.realType == "Water") {
                t.vertices[0].y = 0f;
                t.vertices[1].y = 0f;
                t.vertices[2].y = 0f;
                t.vertices[3].y = 0f;
            }
        }

    And below is the code to generate the actual graphics from the data:

        public void BuildMesh () {
            DTileMap.DTileMap map = new DTileMap.DTileMap(size_x, size_z);
            int numTiles = size_x * size_z;
            int numTris = numTiles * 2;
            int vsize_x = size_x + 1;
            int vsize_z = size_z + 1;
            int numVerts = vsize_x * vsize_z;

            // Generate the mesh data
            Vector3[] vertices = new Vector3[numVerts];
            Vector3[] normals = new Vector3[numVerts];
            Vector2[] uv = new Vector2[numVerts];
            int[] triangles = new int[numTris * 3];

            int x, z;
            for (z = 0; z < vsize_z; z++) {
                for (x = 0; x < vsize_x; x++) {
                    normals[z * vsize_x + x] = Vector3.up;
                    uv[z * vsize_x + x] = new Vector2((float)x / size_x, 1f - (float)z / size_z);
                }
            }

            for (z = 0; z < vsize_z; z += 1) {
                for (x = 0; x < vsize_x; x += 1) {
                    if (x == vsize_x - 1 && z == vsize_z - 1) {
                        vertices[z * vsize_x + x] = DTileMap.DTileMap.map_data[x - 1, z - 1].vertices[3];
                    } else if (z == vsize_z - 1) {
                        vertices[z * vsize_x + x] = DTileMap.DTileMap.map_data[x, z - 1].vertices[2];
                    } else if (x == vsize_x - 1) {
                        vertices[z * vsize_x + x] = DTileMap.DTileMap.map_data[x - 1, z].vertices[1];
                    } else {
                        vertices[z * vsize_x + x] = DTileMap.DTileMap.map_data[x, z].vertices[0];
                        vertices[z * vsize_x + x + 1] = DTileMap.DTileMap.map_data[x, z].vertices[1];
                        vertices[(z + 1) * vsize_x + x] = DTileMap.DTileMap.map_data[x, z].vertices[2];
                        vertices[(z + 1) * vsize_x + x + 1] = DTileMap.DTileMap.map_data[x, z].vertices[3];
                    }
                }
            }

            for (z = 0; z < size_z; z++) {
                for (x = 0; x < size_x; x++) {
                    int squareIndex = z * size_x + x;
                    int triOffset = squareIndex * 6;
                    triangles[triOffset + 0] = z * vsize_x + x + 0;
                    triangles[triOffset + 2] = z * vsize_x + x + vsize_x + 0;
                    triangles[triOffset + 1] = z * vsize_x + x + vsize_x + 1;
                    triangles[triOffset + 3] = z * vsize_x + x + 0;
                    triangles[triOffset + 5] = z * vsize_x + x + vsize_x + 1;
                    triangles[triOffset + 4] = z * vsize_x + x + 1;
                }
            }

            // Create a new Mesh and populate with the data
            Mesh mesh = new Mesh();
            mesh.vertices = vertices;
            mesh.triangles = triangles;
            mesh.normals = normals;
            mesh.uv = uv;

            // Assign our mesh to our filter/renderer/collider
            MeshFilter mesh_filter = GetComponent<MeshFilter>();
            MeshCollider mesh_collider = GetComponent<MeshCollider>();
            mesh_filter.mesh = mesh;
            mesh_collider.sharedMesh = mesh;
            calculateMeshTangents(mesh);
            BuildTexture(map);
        }

    If this looks familiar to you, it's because I got most of it from Quill18. I've been slowly adapting it for my uses. Please include any suggestions you have for my code; I'm still in the very early prototyping stage.
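
    A hedged observation rather than a verified fix: in DTileMap, the tile at grid position (i, j) is given a world-space z of -j * tileSize, while BuildMesh walks map_data[x, z] with z increasing in the positive direction, so the data grid and the generated mesh are traversed in opposite z directions, which would read as a 180-degree flip. A minimal sketch of one way to keep the two consistent (building the data with +j so it matches the mesh loop; the vertex naming is illustrative):

        // Sketch: let grid j and world z grow together.
        // Vertex order: 0 = near-left, 1 = near-right, 2 = far-left, 3 = far-right.
        float s = GTileMap.TileMap.tileSize;
        map_data[i, j].vertices[0] = new Vector3(i * s,       map_data[i, j].Height, j * s);
        map_data[i, j].vertices[1] = new Vector3((i + 1) * s, map_data[i, j].Height, j * s);
        map_data[i, j].vertices[2] = new Vector3(i * s,       map_data[i, j].Height, (j + 1) * s);
        map_data[i, j].vertices[3] = new Vector3((i + 1) * s, map_data[i, j].Height, (j + 1) * s);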

    Read the article

  • How are OpenGL ES 1 framebuffers and textures sized?

    - by jens
    I am trying to draw to a texture through a framebuffer, using OpenGL ES 1.1 on Android (Java). Afterwards I want to overlay this texture full-screen over my game. In theory this works like a charm, but somehow the coordinates are off. For testing I drew something at (0,0) with width and height 200, and it is partly off-screen. This is how I create the framebuffer:

        fb = new int[1];
        depthRb = new int[1];
        renderTex = new int[1];
        gl11ep.glGenFramebuffersOES(1, fb, 0);
        gl11ep.glGenRenderbuffersOES(1, depthRb, 0); // the depth buffer
        gl.glGenTextures(1, renderTex, 0);           // generate texture
        gl.glBindTexture(GL10.GL_TEXTURE_2D, renderTex[0]);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT);
        texBuffer = ByteBuffer.allocateDirect(buf.length * 4).order(ByteOrder.nativeOrder()).asIntBuffer();
        gl.glTexImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_LUMINANCE, texW, texH, 0, GL10.GL_LUMINANCE, GL10.GL_UNSIGNED_BYTE, texBuffer);
        gl11ep.glBindRenderbufferOES(GL11ExtensionPack.GL_RENDERBUFFER_OES, depthRb[0]);
        gl11ep.glRenderbufferStorageOES(GL11ExtensionPack.GL_RENDERBUFFER_OES, GL11ExtensionPack.GL_DEPTH_COMPONENT16, texW, texH);

    Before I draw, I do this:

        gl11ep.glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, fb[0]);
        gl.glClearColor(0f, 0f, 0f, 0f);
        // specify texture as color attachment
        gl11ep.glFramebufferTexture2DOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, GL11ExtensionPack.GL_COLOR_ATTACHMENT0_OES, GL10.GL_TEXTURE_2D, renderTex[0], 0);
        // attach render buffer as depth buffer
        gl11ep.glFramebufferRenderbufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, GL11ExtensionPack.GL_DEPTH_ATTACHMENT_OES, GL11ExtensionPack.GL_RENDERBUFFER_OES, depthRb[0]);

    I set texW = 1024 and texH = 512. I then render this texture full-screen, with a light mask (size 200x200) drawn at (0, 0) and at (texW/2, texH/2). You can see that the coordinate system doesn't seem to start at (0,0): the light at the origin overlaps the screen edge, and the images are not drawn as squares (my light-cone texture is a circle, not an ellipse). So, how is the coordinate system of this offscreen-drawn texture defined? Thanks
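
    Not from the question, but the usual suspect with this setup: the viewport and projection stay sized for the screen while the bound framebuffer is texW x texH. A hedged sketch of matching them while the FBO is bound (names follow the snippet above):

        gl.glViewport(0, 0, texW, texH);      // match the FBO, not the screen
        gl.glMatrixMode(GL10.GL_PROJECTION);
        gl.glPushMatrix();
        gl.glLoadIdentity();
        gl.glOrthof(0, texW, 0, texH, -1, 1); // origin at the texture's lower-left
        gl.glMatrixMode(GL10.GL_MODELVIEW);
        // ... draw into the FBO ...
        // then restore the screen-sized viewport/projection before compositing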

    Read the article

  • LibGDX - Textures rendering at wrong position

    - by ACluelessGuy
    Update 2: Let me further explain my problem, since I think I didn't make it clear enough: the y-coordinate at the bottom of my screen should be 0. Instead, it is the height of my screen. That means the "higher" I touch/click the screen, the smaller my y-coordinate gets. On top of that, the origin is not inside my screen, at least not the 0 y-coordinate. Original post: I'm currently developing a tower defence game for fun using LibGDX. There are places on my map where the player is or is not allowed to put towers. So I created different ArrayLists holding rectangles representing tiles on my map (towerPositions):

        for(int i = 0; i < map.getLayers().getCount(); i++) {
            curLay = (TiledMapTileLayer) map.getLayers().get(i);
            // For all cells of current layer
            for(int k = 0; k < curLay.getWidth(); k++) {
                for(int j = 0; j < curLay.getHeight(); j++) {
                    curCell = curLay.getCell(k, j);
                    // If there is an actual cell
                    if(curCell != null) {
                        tileWidth = curLay.getTileWidth();
                        tileHeight = curLay.getTileHeight();
                        xTileKoord = tileWidth * k;
                        yTileKoord = tileHeight * j;
                        switch(curLay.getName()) {
                            // If layer named "TowersAllowed" picked
                            case "TowersAllowed":
                                towerPositions.add(new Rectangle(xTileKoord, yTileKoord, tileWidth, tileHeight));
                                // ... AND SO ON

    If the player later clicks on an "allowed" field, he gets the opportunity to build a tower of his choice via a menu. Now here is the problem: the towers render, but they render at the wrong position. (They appear really randomly on the map; no pattern that I can see.)

        for(Rectangle curRect : towerPositions) {
            if(curRect.contains(xCoord, yCoord)) {
                // Using a certain tower in this example (left the menu out)
                if(gameControl.createTower("towerXY")) {
                    // RenderObject is just a class holding the Texture and x/y coordinates
                    renderList.add(new RenderObject(new Texture(Gdx.files.internal("TowerXY.png")), curRect.x, curRect.y));
                }
            }
        }

    Later on I render it:

        game.batch.begin();
        for(int i = 0; i < renderList.size(); i++) {
            game.batch.draw(renderList.get(i).myTexture, renderList.get(i).x, renderList.get(i).y);
        }
        game.batch.end();

    regards
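
    A hedged note, since the symptoms match LibGDX's two coordinate systems: Gdx.input reports touch positions with the origin at the top-left and y growing downward, while world/camera space has its origin at the bottom-left with y growing upward. The usual conversion is to unproject the touch through the camera before hit-testing (sketch, assuming the OrthographicCamera used for rendering is named camera):

        Vector3 touch = new Vector3(Gdx.input.getX(), Gdx.input.getY(), 0);
        camera.unproject(touch); // touch is now in world coordinates, y-up
        float xCoord = touch.x;
        float yCoord = touch.y;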

    Read the article

  • Projectile Rotation

    - by Alex
    I'm trying to add a projectile system like the projectiles in Realm Of The Mad God (YouTube it to see what I mean). These projectiles seem to move according to their rotation perfectly and can have nearly any rotation. They also have near-perfect hitboxing. What's the maths behind this? My game works on an integer-based coordinate system, but at the moment projectiles can only be shot at 0, 45, 90, 135, 180, 225, 270 or 315 degrees.
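
    A sketch of the standard approach, not tied to any particular engine (x, y, speed, dt and the target variables are illustrative names): keep the projectile's position and velocity in floating point, derive the velocity from the firing angle with cos/sin, and round to integers only when drawing or hit-testing. Any angle then works, not just multiples of 45 degrees.

        // Aim at an arbitrary point; atan2 gives the angle for any direction.
        double theta = Math.atan2(targetY - y, targetX - x);
        double vx = Math.cos(theta) * speed; // speed in units/second
        double vy = Math.sin(theta) * speed;
        // Each update tick, accumulate in doubles:
        x += vx * dt;
        y += vy * dt;
        // Round only for rendering / integer hitboxes:
        int drawX = (int) Math.round(x);
        int drawY = (int) Math.round(y);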

    Read the article

  • Determining if something is on the right or left side of an object?

    - by meds
    I have a character in a 3D world who is facing an arbitrary direction on a flat plane. The player can click on the left or right side of the character, and depending on which side was clicked, a different action happens. How can I determine which side the click occurred on? Obviously, for facing straight ahead (0,0,1) I can simply use the x coordinate of the click point to determine whether it's the left or right hand side, but what about other cases?
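
    The standard trick, sketched here without assuming a particular engine: take the character's forward vector and the vector from the character to the click point, both on the ground plane, and test the sign of the vertical component of their cross product. With forward (0,0,1) and a click at positive x, crossY below is positive, matching the "right side" case described above; which sign means right in general depends on the handedness of the coordinate system, so verify once with a known case.

        // forward = facing direction; toClick = clickPoint - characterPosition;
        // both projected onto the ground plane (y = 0).
        float crossY = forward.z * toClick.x - forward.x * toClick.z;
        bool clickedRight = crossY > 0f; // flip the comparison if handedness differs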

    Read the article

  • Improving the Extended Financial Close and Reporting Process

    Coming out of the recession, many organizations need to build or re-build trust with key stakeholders by delivering more timely and accurate financial and operating results. In this podcast, hear about new capabilities Oracle is delivering through its Enterprise Performance Management products to help organizations coordinate and improve the extended financial close and reporting process, from closing the sub-ledgers to regulatory filings.

    Read the article

  • IOException: Unable To Delete Images Due To File Lock

    - by Arslan Pervaiz
    I am unable to delete an image file from my server path. It gives an error that the process cannot access the file "FileName" because it is being used by another process. I have tried many methods, but all in vain. Please help me out with this issue. Here is my code snippet.

        using System;
        using System.Data;
        using System.Web;
        using System.Data.SqlClient;
        using System.Web.UI;
        using System.Web.UI.HtmlControls;
        using System.Globalization;
        using System.Web.Security;
        using System.Text;
        using System.DirectoryServices;
        using System.Collections;
        using System.IO;
        using System.Drawing;
        using System.Drawing.Imaging;
        using System.Drawing.Drawing2D;

        // ============ Main block =============
        byte[] data = (byte[])ds.Tables[0].Rows[0][0];
        MemoryStream ms = new MemoryStream(data);
        Image returnImage = Image.FromStream(ms);
        returnImage.Save(Server.MapPath(".\\TmpImages\\SavedImage.jpg"), System.Drawing.Imaging.ImageFormat.Jpeg);
        returnImage.Dispose(); // I tried this Dispose method to unlock the file, but nothing happened.
        ms.Close();            // I tried the MemoryStream Close method too, but it also did not work for me.
        watermark();           // My watermark method, which prints a watermark image on the saved image (the image converted from the byte array).
        DeleteImages();        // My delete method, called to delete the images.

        // ===== My delete method ==============
        public void DeleteImages()
        {
            try
            {
                File.Delete(Server.MapPath(".\\TmpImages\\WaterMark.jpg"));  // This image is deleted fine.
                File.Delete(Server.MapPath(".\\TmpImages\\SavedImage.jpg")); // Exception thrown on deleting this image.
            }
            catch (Exception ex)
            {
                LogManager.LogException(ex, "Error in Deleting Images.");
                Master.ShowMessage(ex.Message, true);
            }
        }

        // ==== Method that draws a watermark from one image onto another ====
        public void watermark()
        {
            // create an image object containing the photograph to watermark
            Image imgPhoto = Image.FromFile(Server.MapPath(".\\TmpImages\\SavedImage.jpg"));
            int phWidth = imgPhoto.Width;
            int phHeight = imgPhoto.Height;

            // create a Bitmap the size of the original photograph
            Bitmap bmPhoto = new Bitmap(phWidth, phHeight, PixelFormat.Format24bppRgb);
            bmPhoto.SetResolution(imgPhoto.HorizontalResolution, imgPhoto.VerticalResolution);

            // load the Bitmap into a Graphics object
            Graphics grPhoto = Graphics.FromImage(bmPhoto);

            // create an image object containing the watermark
            Image imgWatermark = new Bitmap(Server.MapPath(".\\TmpImages\\PrintasWatermark.jpg"));
            int wmWidth = imgWatermark.Width;
            int wmHeight = imgWatermark.Height;

            // set the rendering quality for this Graphics object
            grPhoto.SmoothingMode = SmoothingMode.AntiAlias;

            // draw the photo Image object at original size to the Graphics object
            grPhoto.DrawImage(
                imgPhoto,                               // photo Image object
                new Rectangle(0, 0, phWidth, phHeight), // Rectangle structure
                0,                                      // x-coordinate of the portion of the source image to draw
                0,                                      // y-coordinate of the portion of the source image to draw
                phWidth,                                // width of the portion of the source image to draw
                phHeight,                               // height of the portion of the source image to draw
                GraphicsUnit.Pixel);                    // units of measure

            //-------------------------------------------------------
            // To maximize the size of the copyright message we would test
            // multiple font sizes to determine the largest possible font for
            // the width of the photograph (an array of candidate point sizes
            // would go here).
            //-------------------------------------------------------

            // Define the text layout by setting the text alignment to centered
            StringFormat StrFormat = new StringFormat();
            StrFormat.Alignment = StringAlignment.Center;

            // define a Brush which is semi-transparent black (alpha set to 153)
            SolidBrush semiTransBrush2 = new SolidBrush(Color.FromArgb(153, 0, 0, 0));

            // define a Brush which is semi-transparent white (alpha set to 153)
            SolidBrush semiTransBrush = new SolidBrush(Color.FromArgb(153, 255, 255, 255));

            //------------------------------------------------------------
            // Step #2 - insert watermark image
            //------------------------------------------------------------

            // Create a Bitmap based on the previously modified photograph Bitmap
            Bitmap bmWatermark = new Bitmap(bmPhoto);
            bmWatermark.SetResolution(imgPhoto.HorizontalResolution, imgPhoto.VerticalResolution);

            // load this Bitmap into a new Graphics object
            Graphics grWatermark = Graphics.FromImage(bmWatermark);

            // To achieve a translucent watermark we apply two color
            // manipulations, by defining an ImageAttributes object and
            // setting two of its properties.
            ImageAttributes imageAttributes = new ImageAttributes();

            // The first step in manipulating the watermark image is to replace
            // the background color with one that is transparent (Alpha=0, R=0, G=0, B=0).
            // To do this we use a ColorMap to define a remap table.
            ColorMap colorMap = new ColorMap();

            // My watermark was defined with a background of 100% green; this is
            // the color we search for and replace with transparency.
            colorMap.OldColor = Color.FromArgb(255, 0, 255, 0);
            colorMap.NewColor = Color.FromArgb(0, 0, 0, 0);
            ColorMap[] remapTable = { colorMap };
            imageAttributes.SetRemapTable(remapTable, ColorAdjustType.Bitmap);

            // The second color manipulation changes the opacity of the
            // watermark. This is done by applying a 5x5 matrix that contains
            // the coordinates for the RGBA space. Setting the alpha-scale
            // element (row 4, column 4) to 0.3f gives the level of opacity.
            float[][] colorMatrixElements = {
                new float[] {1.0f, 0.0f, 0.0f, 0.0f, 0.0f},
                new float[] {0.0f, 1.0f, 0.0f, 0.0f, 0.0f},
                new float[] {0.0f, 0.0f, 1.0f, 0.0f, 0.0f},
                new float[] {0.0f, 0.0f, 0.0f, 0.3f, 0.0f},
                new float[] {0.0f, 0.0f, 0.0f, 0.0f, 1.0f}};
            ColorMatrix wmColorMatrix = new ColorMatrix(colorMatrixElements);
            imageAttributes.SetColorMatrix(wmColorMatrix, ColorMatrixFlag.Default, ColorAdjustType.Bitmap);

            // For this example we place the watermark in the upper-right corner
            // of the photograph, offset down 10 pixels and to the left 10 pixels.
            int xPosOfWm = ((phWidth - wmWidth) - 10);
            int yPosOfWm = 10;

            grWatermark.DrawImage(imgWatermark,
                new Rectangle(xPosOfWm, yPosOfWm, wmWidth, wmHeight), // destination position
                0,                  // x-coordinate of the portion of the source image to draw
                0,                  // y-coordinate of the portion of the source image to draw
                wmWidth,            // watermark width
                wmHeight,           // watermark height
                GraphicsUnit.Pixel, // unit of measurement
                imageAttributes);   // ImageAttributes object

            // replace the original photograph's bitmap with the new Bitmap
            imgPhoto = bmWatermark;
            grPhoto.Dispose();
            grWatermark.Dispose();

            // save the new image to the file system
            imgPhoto.Save(Server.MapPath(".\\TmpImages\\WaterMark.jpg"), ImageFormat.Jpeg);
            imgPhoto.Dispose();
            imgWatermark.Dispose();
        }
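
    Not an official answer, just the usual diagnosis for this symptom: Image.FromFile keeps SavedImage.jpg locked until that exact Image instance is disposed, and in watermark() above the imgPhoto variable is reassigned to bmWatermark before disposal, so the file-backed image is never disposed and the lock persists. A hedged sketch of loading a file into a detached Bitmap so no handle outlives the load:

        // Copy the pixels and let the stream close immediately.
        Image imgPhoto;
        string path = Server.MapPath(".\\TmpImages\\SavedImage.jpg");
        using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read))
        using (var temp = Image.FromStream(fs))
        {
            imgPhoto = new Bitmap(temp); // deep copy, detached from the file
        }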

    Read the article

  • How to convert this procedural programming to object-oriented programming?

    - by manus91
    I have source code that needs to be converted by creating classes, objects and methods. So far, all I've done is convert the initial main into a separate class. But I don't know what to do about the constructor, or which variables are supposed to be private. This is the code:

        import java.util.*;

        public class Card {
            private static void shuffle(int[][] cards) {
                List<Integer> randoms = new ArrayList<Integer>();
                Random randomizer = new Random();
                for (int i = 0; i < 8;) {
                    int r = randomizer.nextInt(8) + 1;
                    if (!randoms.contains(r)) {
                        randoms.add(r);
                        i++;
                    }
                }
                List<Integer> clonedList = new ArrayList<Integer>();
                clonedList.addAll(randoms);
                Collections.shuffle(clonedList);
                randoms.addAll(clonedList);
                Collections.shuffle(randoms);
                int i = 0;
                for (int r = 0; r < 4; r++) {
                    for (int c = 0; c < 4; c++) {
                        cards[r][c] = randoms.get(i);
                        i++;
                    }
                }
            }

            public static void play() throws InterruptedException {
                int ans = 1;
                int preview;
                int r1, c1, r2, c2;
                int[][] cards = new int[4][4];
                boolean[][] cardstatus = new boolean[4][4];
                boolean gameover = false;
                int moves;
                Scanner input = new Scanner(System.in);
                do {
                    moves = 0;
                    shuffle(cards);
                    System.out.print("Enter the time(0 to 5) in seconds for the preview of the answer : ");
                    preview = input.nextInt();
                    while ((preview < 0) || (preview > 5)) {
                        System.out.print("Invalid time!! Re-enter time(0 - 5) : ");
                        preview = input.nextInt();
                    }
                    preview = 1000 * preview;
                    System.out.println(" ");
                    for (int i = 0; i < 4; i++) {
                        for (int j = 0; j < 4; j++) {
                            System.out.print(cards[i][j]);
                            System.out.print(" ");
                        }
                        System.out.println("");
                        System.out.println("");
                    }
                    Thread.sleep(preview);
                    for (int b = 0; b < 25; b++) {
                        System.out.println(" ");
                    }
                    for (int r = 0; r < 4; r++) {
                        for (int c = 0; c < 4; c++) {
                            System.out.print("*");
                            System.out.print(" ");
                            cardstatus[r][c] = false;
                        }
                        System.out.println("");
                        System.out.println(" ");
                    }
                    System.out.println("");
                    do {
                        do {
                            System.out.print("Please insert the first card row : ");
                            r1 = input.nextInt();
                            while ((r1 < 1) || (r1 > 4)) {
                                System.out.print("Invalid coordinate!! Re-enter first card row : ");
                                r1 = input.nextInt();
                            }
                            System.out.print("Please insert the first card column : ");
                            c1 = input.nextInt();
                            while ((c1 < 1) || (c1 > 4)) {
                                System.out.print("Invalid coordinate!! Re-enter first card column : ");
                                c1 = input.nextInt();
                            }
                            if (cardstatus[r1 - 1][c1 - 1] == true) {
                                System.out.println("The card is already flipped!! Select another card.");
                                System.out.println("");
                            }
                        } while (cardstatus[r1 - 1][c1 - 1] != false);
                        do {
                            System.out.print("Please insert the second card row : ");
                            r2 = input.nextInt();
                            while ((r2 < 1) || (r2 > 4)) {
                                System.out.print("Invalid coordinate!! Re-enter second card row : ");
                                r2 = input.nextInt();
                            }
                            System.out.print("Please insert the second card column : ");
                            c2 = input.nextInt();
                            while ((c2 < 1) || (c2 > 4)) {
                                System.out.print("Invalid coordinate!! Re-enter second card column : ");
                                c2 = input.nextInt();
                            }
                            if (cardstatus[r2 - 1][c2 - 1] == true) {
                                System.out.println("The card is already flipped!! Select another card.");
                            }
                            if ((r1 == r2) && (c1 == c2)) {
                                System.out.println("You can't select the same card twice!!");
                                continue;
                            }
                        } while (cardstatus[r2 - 1][c2 - 1] != false);
                        r1--; c1--; r2--; c2--;
                        System.out.println("");
                        System.out.println("");
                        System.out.println("");
                        for (int r = 0; r < 4; r++) {
                            for (int c = 0; c < 4; c++) {
                                if ((r == r1) && (c == c1)) {
                                    System.out.print(cards[r][c]);
                                    System.out.print(" ");
                                } else if ((r == r2) && (c == c2)) {
                                    System.out.print(cards[r][c]);
                                    System.out.print(" ");
                                } else if (cardstatus[r][c] == true) {
                                    System.out.print(cards[r][c]);
                                    System.out.print(" ");
                                } else {
                                    System.out.print("*");
                                    System.out.print(" ");
                                }
                            }
                            System.out.println(" ");
                            System.out.println(" ");
                        }
                        System.out.println("");
                        if (cards[r1][c1] == cards[r2][c2]) {
                            System.out.println("Cards Matched!!");
                            cardstatus[r1][c1] = true;
                            cardstatus[r2][c2] = true;
                        } else {
                            System.out.println("No cards match!!");
                        }
                        Thread.sleep(2000);
                        for (int b = 0; b < 25; b++) {
                            System.out.println("");
                        }
                        for (int r = 0; r < 4; r++) {
                            for (int c = 0; c < 4; c++) {
                                if (cardstatus[r][c] == true) {
                                    System.out.print(cards[r][c]);
                                    System.out.print(" ");
                                } else {
                                    System.out.print("*");
                                    System.out.print(" ");
                                }
                            }
                            System.out.println("");
                            System.out.println(" ");
                        }
                        System.out.println("");
                        System.out.println("");
                        System.out.println("");
                        gameover = true;
                        for (int r = 0; r < 4; r++) {
                            for (int c = 0; c < 4; c++) {
                                if (cardstatus[r][c] == false) {
                                    gameover = false;
                                    break;
                                }
                            }
                            if (gameover == false) {
                                break;
                            }
                        }
                        moves++;
                    } while (gameover != true);
                    System.out.println("Congratulations, you won!!");
                    System.out.println("It required " + moves + " moves to finish it.");
                    System.out.println("");
                    System.out.print("Would you like to play again? (1=Yes / 0=No) : ");
                    ans = input.nextInt();
                } while (ans == 1);
            }
        }

    The main class is:

        import java.util.*;

        public class PlayCard {
            public static void main(String[] args) throws InterruptedException {
                Card game = new Card();
                game.play();
            }
        }

    Should I simplify the Card class by creating other classes? As the code stands, my Javadoc shows no constructor, so I need help with this!

    Read the article

  • setTimeout in JavaScript not giving browser 'breathing room'

    - by C Bauer
    Alright, I thought I had this whole setTimeout thing perfect, but I seem to be horribly mistaken. I'm using excanvas and JavaScript to draw a map of my home state, but the drawing procedure chokes the browser. Right now I'm forced to pander to IE6 because I'm in a big organisation, which is probably a large part of the slowness. So what I thought I'd do is build a procedure called distributedDrawPolys (I'm probably using the wrong word there, so don't focus on the word distributed), which basically pops the polygons off of a global array in order to draw 50 of them at a time. This is the method that pushes the polygons onto the global array and runs the setTimeout:

        for (var x = 0; x < polygon.length; x++) {
            coordsObject.push(polygon[x]);
            fifty++;
            if (fifty > 49) {
                timeOutID = setTimeout(distributedDrawPolys, 5000);
                fifty = 0;
            }
        }

    I put an alert at the end of that method; it runs in practically a second. The distributed method looks like:

        function distributedDrawPolys() {
            if (coordsObject.length > 0) {
                for (x = 0; x < 50; x++) { // Only do 50 polygons
                    var polygon = coordsObject.pop();
                    var coordinate = polygon.selectNodes("Coordinates/point");
                    var zip = polygon.selectNodes("ZipCode");
                    var rating = polygon.selectNodes("Score");
                    if (zip[0].text.indexOf("HH") == -1) {
                        var lastOriginCoord = [];
                        for (var y = 0; y < coordinate.length; y++) {
                            var point = coordinate[y];
                            latitude = shiftLat(point.getAttribute("lat"));
                            longitude = shiftLong(point.getAttribute("long"));
                            if (y == 0) {
                                lastOriginCoord[0] = point.getAttribute("long");
                                lastOriginCoord[1] = point.getAttribute("lat");
                            }
                            if (y == 1) {
                                beginPoly(longitude, latitude);
                            }
                            if (y > 0) {
                                if (translateLongToX(longitude) > 0 && translateLongToX(longitude) < 800 && translateLatToY(latitude) > 0 && translateLatToY(latitude) < 600) {
                                    drawPolyPoint(longitude, latitude);
                                }
                            }
                        }
                        y = 0;
                        if (zip[0].text != targetZipCode) {
                            if (rating[0] != null) {
                                if (rating[0].text == "Excellent") {
                                    endPoly("rgb(0,153,0)");
                                } else if (rating[0].text == "Good") {
                                    endPoly("rgb(153,204,102)");
                                } else if (rating[0].text == "Average") {
                                    endPoly("rgb(255,255,153)");
                                }
                            } else {
                                endPoly("rgb(255,255,255)");
                            }
                        } else {
                            endPoly("rgb(255,0,0)");
                        }
                    }
                }
            }
        }

    (I don't know if that was properly formatted originally; I ended up with an extra bracket.) I thought the setTimeout method would allow the site to draw the polygons in groups, so the users would be able to interact with the page while it was still drawing. What am I doing wrong here?
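
    A hedged note on the likely culprit: the loop above schedules every batch's timeout immediately, all with the same 5-second delay, so they all fire at roughly the same moment and the browser still does the whole job in one burst. Scheduling the next batch from inside the callback yields control between batches (sketch):

        function distributedDrawPolys() {
            var batch = coordsObject.splice(0, 50); // take up to 50 polygons
            // ... draw the polygons in `batch` ...
            if (coordsObject.length > 0) {
                setTimeout(distributedDrawPolys, 50); // yield to the browser, then continue
            }
        }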

    Read the article

  • How can I change mouse keymapping

    - by zuberuber
    I have a Razer DeathAdder (left-handed edition) and an A4Tech wireless mouse. My problem is that I don't know how to change the wireless mouse's key mapping (swapping left/right click). Can somebody guide me on how to do such a thing? List of my devices:

        ⎡ Virtual core pointer                              id=2    [master pointer  (3)]
        ⎜   ↳ Virtual core XTEST pointer                    id=4    [slave  pointer  (2)]
        ⎜   ↳ Logitech Unifying Device. Wireless PID:4004   id=8    [slave  pointer  (2)]
        ⎜   ↳ Razer Razer DeathAdder                        id=11   [slave  pointer  (2)]
        ⎜   ↳ A4TECH USB Device                             id=12   [slave  pointer  (2)]
        ⎜   ↳ A4TECH USB Device                             id=13   [slave  pointer  (2)]
        ⎣ Virtual core keyboard                             id=3    [master keyboard (2)]
            ↳ Virtual core XTEST keyboard                   id=5    [slave  keyboard (3)]
            ↳ Power Button                                  id=6    [slave  keyboard (3)]
            ↳ Power Button                                  id=7    [slave  keyboard (3)]
            ↳ Logitech USB Keyboard                         id=9    [slave  keyboard (3)]
            ↳ Logitech USB Keyboard                         id=10   [slave  keyboard (3)]

    This is my Razer xinput:

        Device 'Razer Razer DeathAdder':
            Device Enabled (121): 1
            Coordinate Transformation Matrix (123): 1.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 1.000000
            Device Accel Profile (246): 0
            Device Accel Constant Deceleration (247): 5.000000
            Device Accel Adaptive Deceleration (248): 1.000000
            Device Accel Velocity Scaling (249): 10.000000
            Device Product ID (240): 5426, 22
            Device Node (241): "/dev/input/event4"
            Evdev Axis Inversion (250): 0, 0
            Evdev Axes Swap (252): 0
            Axis Labels (253): "Rel X" (131), "Rel Y" (132), "Rel Vert Wheel" (274)
            Button Labels (254): "Button Left" (124), "Button Middle" (125), "Button Right" (126), "Button Wheel Up" (127), "Button Wheel Down" (128), "Button Horiz Wheel Left" (129), "Button Horiz Wheel Right" (130), "Button Side" (269), "Button Extra" (270), "Button Forward" (271), "Button Back" (272), "Button Task" (273), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243)
            Evdev Middle Button Emulation (255): 0
            Evdev Middle Button Timeout (256): 50
            Evdev Third Button Emulation (257): 0
            Evdev Third Button Emulation Timeout (258): 1000
            Evdev Third Button Emulation Button (259): 3
            Evdev Third Button Emulation Threshold (260): 20
            Evdev Wheel Emulation (261): 0
            Evdev Wheel Emulation Axes (262): 0, 0, 4, 5
            Evdev Wheel Emulation Inertia (263): 10
            Evdev Wheel Emulation Timeout (264): 200
            Evdev Wheel Emulation Button (265): 4
            Evdev Drag Lock Buttons (266): 0

    And this is my wireless mouse xinput:

        Device 'A4TECH USB Device':
            Device Enabled (121): 1
            Coordinate Transformation Matrix (123): 1.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 1.000000
            Device Accel Profile (246): 0
            Device Accel Constant Deceleration (247): 1.000000
            Device Accel Adaptive Deceleration (248): 1.000000
            Device Accel Velocity Scaling (249): 10.000000
            Device Product ID (240): 2522, 1359
            Device Node (241): "/dev/input/event16"
            Evdev Axis Inversion (250): 0, 0
            Evdev Axes Swap (252): 0
            Axis Labels (253): "Rel X" (131), "Rel Y" (132), "Rel Horiz Wheel" (245), "Rel Vert Wheel" (274)
            Button Labels (254): "Button Left" (124), "Button Middle" (125), "Button Right" (126), "Button Wheel Up" (127), "Button Wheel Down" (128), "Button Horiz Wheel Left" (129), "Button Horiz Wheel Right" (130), "Button Side" (269), "Button Extra" (270), "Button Forward" (271), "Button Back" (272), "Button Task" (273), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243)
            Evdev Middle Button Emulation (255): 0
            Evdev Middle Button Timeout (256): 50
            Evdev Third Button Emulation (257): 0
            Evdev Third Button Emulation Timeout (258): 1000
            Evdev Third Button Emulation Button (259): 3
            Evdev Third Button Emulation Threshold (260): 20
            Evdev Wheel Emulation (261): 0
            Evdev Wheel Emulation Axes (262): 0, 0, 4, 5
            Evdev Wheel Emulation Inertia (263): 10
            Evdev Wheel Emulation Timeout (264): 200
            Evdev Wheel Emulation Button (265): 4
            Evdev Drag Lock Buttons (266): 0

    Read the article

  • GLSL subroutine not being used

    - by amoffat
    I'm using a Gaussian blur fragment shader. In it, I thought it would be concise to include 2 subroutines: one for selecting the horizontal texture coordinate offsets, and another for the vertical texture coordinate offsets. This way, I just have one Gaussian blur shader to manage. Here is the code for my shader. The {{NAME}} bits are template placeholders that I substitute in at shader compile time:

        #version 420

        subroutine vec2 sample_coord_type(int i);
        subroutine uniform sample_coord_type sample_coord;

        in vec2 texcoord;
        out vec3 color;

        uniform sampler2D tex;
        uniform int texture_size;

        const float offsets[{{NUM_SAMPLES}}] = float[]({{SAMPLE_OFFSETS}});
        const float weights[{{NUM_SAMPLES}}] = float[]({{SAMPLE_WEIGHTS}});

        subroutine(sample_coord_type) vec2 vertical_coord(int i) {
            return vec2(0.0, offsets[i] / texture_size);
        }

        subroutine(sample_coord_type) vec2 horizontal_coord(int i) {
            //return vec2(offsets[i] / texture_size, 0.0);
            return vec2(0.0, 0.0); // just for testing if this subroutine gets used
        }

        void main(void) {
            color = vec3(0.0);
            for (int i = 0; i < {{NUM_SAMPLES}}; i++) {
                color += texture(tex, texcoord + sample_coord(i)).rgb * weights[i];
                color += texture(tex, texcoord - sample_coord(i)).rgb * weights[i];
            }
        }

    Here is my code for selecting the subroutine:

        blur_program->start();
        blur_program->set_subroutine("sample_coord", "vertical_coord", GL_FRAGMENT_SHADER);
        blur_program->set_int("texture_size", width);
        blur_program->set_texture("tex", *deferred_output);
        blur_program->draw(); // draws a quad for the fragment shader to run on

    and:

        void ShaderProgram::set_subroutine(constr name, constr routine, GLenum target) {
            GLuint routine_index = glGetSubroutineIndex(id, target, routine.c_str());
            GLuint uniform_index = glGetSubroutineUniformLocation(id, target, name.c_str());
            glUniformSubroutinesuiv(target, 1, &routine_index);

            // debugging
            int num_subs;
            glGetActiveSubroutineUniformiv(id, target, uniform_index, GL_NUM_COMPATIBLE_SUBROUTINES, &num_subs);
            std::cout << uniform_index << " " << routine_index << " " << num_subs << "\n";
        }

    I've checked for errors, and there are none. When I pass in vertical_coord as the routine to use, my scene is blurred vertically, as it should be. The routine_index variable is also 1 (which is weird, because the vertical_coord subroutine is the first listed in the shader code... but no matter, maybe the compiler is switching things around). However, when I pass in horizontal_coord, my scene is STILL blurred vertically, even though the value of routine_index is 0, suggesting that a different subroutine is being used. Yet the horizontal_coord subroutine explicitly does not blur. What's more, whichever subroutine comes first in the shader is the subroutine that the shader uses permanently. Right now, vertical_coord comes first, so the shader always blurs vertically. If I put horizontal_coord first, the scene is unblurred, as expected, but then I cannot select the vertical_coord subroutine! :) Also, the value of num_subs is 2, suggesting that there are 2 subroutines compatible with my sample_coord subroutine uniform. Just to re-iterate, all of my return values are fine, and there are no glGetError() errors happening. Any ideas?
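
    One hedged thing worth checking (general GL behavior, not specific to this code): glUniformSubroutinesuiv sets all active subroutine uniforms in one call -- the array is indexed by subroutine uniform location -- and the selection is not retained program state, so it must be re-sent after every glUseProgram. A sketch:

        // indices is sized for the single subroutine uniform in this shader.
        GLuint indices[1];
        indices[uniform_index] = routine_index; // slot = subroutine uniform location
        glUseProgram(id);                       // after every bind, re-specify:
        glUniformSubroutinesuiv(GL_FRAGMENT_SHADER, 1, indices);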

    Read the article

  • First-Time GLSL Shadow Mapping Problems

    - by Locke
    I'm working on building out a 2.5D engine and having massive problems getting my shadows working. I'm at a point where I'm VERY close. So, let's see a picture to see what I have: As you can see above, the image has lighting -- but the shadow map is displaying incorrectly. The shadow map is shown in the bottom left hand side of the screen as a normal 2D texture, so we can see what it looks like at any given time. If you notice, it appears that the shadows are generating backwards in the wrong direction -- I think. But the problem is a little deeper -- I'm just plotting the shadow onto the screen, which I know is wrong -- I'm ignoring the actual test to see if we NEED to show a shadow. The incoming parameters all appear to be correct -- so there has to be something wrong with my shader code somewhere. Here's what my code looks like:

    Vertex:

        uniform mat4 LightModelViewProjectionMatrix;

        varying vec3 Normal;          // The eye-space normal of the current vertex.
        varying vec4 LightCoordinate; // The texture coordinate of the light of the current vertex.
        varying vec3 LightDirection;  // The eye-space direction of the light.

        void main() {
            Normal = normalize(gl_NormalMatrix * gl_Normal);
            LightDirection = normalize(gl_NormalMatrix * gl_LightSource[0].position.xyz);
            LightCoordinate = LightModelViewProjectionMatrix * gl_Vertex;
            LightCoordinate.xy = (LightCoordinate.xy * 0.5) + 0.5;
            gl_Position = ftransform();
            gl_TexCoord[0] = gl_MultiTexCoord0;
        }

    Fragment:

        uniform sampler2D DiffuseMap;
        uniform sampler2D ShadowMap;

        varying vec3 Normal;          // The eye-space normal of the current vertex.
        varying vec4 LightCoordinate; // The texture coordinate of the light of the current vertex.
        varying vec3 LightDirection;  // The eye-space direction of the light.

        void main() {
            vec4 Texel = texture2D(DiffuseMap, vec2(gl_TexCoord[0]));

            // Directional lighting

            // Build ambient lighting
            vec4 AmbientElement = gl_LightSource[0].ambient;

            // Build diffuse lighting
            float Lambert = max(dot(Normal, LightDirection), 0.0); // max(abs(dot(Normal, LightDirection)), 0.0);
            vec4 DiffuseElement = (gl_LightSource[0].diffuse * Lambert);

            vec4 LightingColor = (DiffuseElement + AmbientElement);
            LightingColor.r = min(LightingColor.r, 1.0);
            LightingColor.g = min(LightingColor.g, 1.0);
            LightingColor.b = min(LightingColor.b, 1.0);
            LightingColor.a = min(LightingColor.a, 1.0);
            LightingColor *= Texel;

            // Everything up to this point is PERFECT

            // Shadow mapping
            // ------------------------------
            vec4 ShadowCoordinate = LightCoordinate / LightCoordinate.w;
            float DistanceFromLight = texture2D(ShadowMap, ShadowCoordinate.st).z;
            float DepthBias = 0.001;
            float ShadowFactor = 1.0;
            if (LightCoordinate.w > 0.0) {
                ShadowFactor = DistanceFromLight < (ShadowCoordinate.z + DepthBias) ? 0.5 : 1.0;
            }
            LightingColor.rgb *= ShadowFactor;

            //gl_FragColor = LightingColor;
            // Yes, I know this is wrong, but the line above (gl_FragColor = LightingColor;) produces the wrong effect
            gl_FragColor = LightingColor * texture2D(ShadowMap, ShadowCoordinate.st);
        }

    I wanted to make sure the coordinates were correct for the shadow map -- so that's why you see it applied to the image as it is below. But the depth for each point seems to be wrong -- the shadows SHOULD be opposite (look at how the image is -- the shaded areas from normal lighting are facing the opposite direction of the shadows). Maybe my matrices are bad or something going in? They're isolated and appear to be correct -- nothing unusual is going in. When I view from the light's view and get the MVP matrices for it, they're correct. EDIT: Added an image so you can see what happens when I use the correct line at the end of the GLSL: That's the image when the last line is just gl_FragColor = LightingColor; Maybe someone has some idea of what I screwed up?
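
    For comparison, a hedged sketch of the usual form of the test (not the poster's code): the projective divide and the 0.5/0.5 remap are both applied in the fragment shader, and the bias is subtracted from the fragment's light-space depth rather than added to it.

        vec3 proj = LightCoordinate.xyz / LightCoordinate.w;
        proj = proj * 0.5 + 0.5;                         // remap x, y, z to [0, 1]
        float closest = texture2D(ShadowMap, proj.xy).z; // depth stored in the map
        float shadow = (proj.z - 0.001) <= closest ? 1.0 : 0.5;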

    Read the article

  • Problems implementing a screen space shadow ray tracing shader

    - by Grieverheart
    Here I previously asked about the possibility of ray tracing shadows in screen space in a deferred shader. Several problems were pointed out. One of the most important is that only visible objects can cast shadows, and objects between the camera and the shadow caster can interfere. Still, I thought it'd be a fun experiment. The idea is to calculate the view coordinates of pixels and cast a ray to the light. The ray is then traced pixel by pixel to the light, and its depth is compared with the depth at each pixel. If a pixel is in front of the ray, a shadow is cast at the original pixel. At first I thought that I could use the DDA algorithm in 2D to calculate the distance 't' (in p = o + t d, where o is the origin and d the direction) to the next pixel, and use it in the 3D ray equation to find the ray's z coordinate at that pixel's position. For the 2D ray, I would use the projected and biased 3D ray direction and origin. The idea was that 't' would be the same in both the 2D and 3D equations. Unfortunately, this is not the case, since the projection matrix is 4D. Thus, some tweaking needs to be done to make this work. I would like to ask if someone knows of a way to do what I described above, i.e. from a 2D ray in texture coordinate space, get the 3D ray in screen space. I did implement a simple version of the idea, which you can see in the following video: video here. Shadows may seem a bit pixelated, but that's mostly because of the size of the step in 't' I chose. And here is the shader:

        #version 330 core

        uniform sampler2D DepthMap;
        uniform vec2 projAB;
        uniform mat4 projectionMatrix;

        const vec3 light_p = vec3(-30.0, 30.0, -10.0);

        noperspective in vec2 pass_TexCoord;
        smooth in vec3 viewRay;

        layout(location = 0) out float out_AO;

        vec3 CalcPosition(void) {
            float depth = texture(DepthMap, pass_TexCoord).r;
            float linearDepth = projAB.y / (depth - projAB.x);
            vec3 ray = normalize(viewRay);
            ray = ray / ray.z;
            return linearDepth * ray;
        }

        void main(void) {
            vec3 origin = CalcPosition();
            if (origin.z < -60) discard;
            vec2 pixOrigin = pass_TexCoord; // tex coords

            vec3 dir = normalize(light_p - origin);
            vec2 texel_size = vec2(1.0 / 600.0);
            float t = 0.1;

            ivec2 pixIndex = ivec2(pixOrigin / texel_size);
            out_AO = 1.0;
            while (true) {
                vec3 ray = origin + t * dir;
                vec4 temp = projectionMatrix * vec4(ray, 1.0);
                vec2 texCoord = (temp.xy / temp.w) * 0.5 + 0.5;
                ivec2 newIndex = ivec2(texCoord / texel_size);
                if (newIndex != pixIndex) {
                    float depth = texture(DepthMap, texCoord).r;
                    float linearDepth = projAB.y / (depth - projAB.x);
                    if (linearDepth > ray.z + 0.1) {
                        out_AO = 0.2;
                        break;
                    }
                    pixIndex = newIndex;
                }
                t += 0.5;
                if (texCoord.x < 0 || texCoord.x > 1.0 || texCoord.y < 0 || texCoord.y > 1.0)
                    break;
            }
        }

    As you can see, here I just increment 't' by some arbitrary factor, then calculate the 3D ray and project it to get the pixel coordinates, which is not really optimal. I would like to optimize the code as much as possible and compare it with shadow mapping, and see how it scales with the number of lights. PS: Keep in mind that I reconstruct position from depth by interpolating rays through a full-screen quad.
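
    One hedged pointer on the 2D-to-3D question: view-space z is not an affine function of screen position, but its reciprocal is -- the same fact that makes perspective-correct interpolation work. So a 2D DDA can march in texture space and recover the ray's depth at each step from interpolated 1/z (sketch; z0 and z1 are the view-space depths at the projected segment's endpoints, s is the marched fraction):

        float invZ = mix(1.0 / z0, 1.0 / z1, s); // 1/z is linear in screen space
        float viewZ = 1.0 / invZ;                // perspective-correct depth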

    Read the article

  • Interpolation and Morphing of an image in LabVIEW and/or OpenCV

    - by Marc
    I am working on an image manipulation problem. I have an overhead projector that projects onto a screen, and I have a camera that takes pictures of that. I can establish a 1:1 correspondence between a subset of projector coordinates and a subset of camera pixels by projecting dots on the screen and finding the centers of mass of the resulting regions on the camera. I thus have a map proj_x, proj_y <-- cam_x, cam_y for scattered point pairs. My original plan was to regularize this map using the MathScript function griddata. This would work fine in MATLAB, as follows:

        [pgridx, pgridy] = meshgrid(allprojxpts, allprojypts);
        fitcx = griddata(proj_x, proj_y, cam_x, pgridx, pgridy);
        fitcy = griddata(proj_x, proj_y, cam_y, pgridx, pgridy);

    and the reverse for the camera-to-projector mapping. Unfortunately, this code causes LabVIEW to run out of memory on the meshgrid step (the camera is 5 megapixels, which apparently is too much for LabVIEW to handle). I then started looking through OpenCV and found the cvRemap function. Unfortunately, this function takes as its starting point a regularized pixel-pixel map like the one I was trying to generate above. However, it made me hope that functions for creating such a map might be available in OpenCV. I couldn't find it in the OpenCV 1.0 API (I am stuck with 1.0 for legacy reasons), but I was hoping it's there or that someone has an easy trick. So my question is one of the following:

    1) How can I interpolate from scattered points to a grid in OpenCV; i.e., given z = f(x,y) for scattered values of x and y, how do I fill an image with f(im_x, im_y)?

    2) How can I perform an image transform that maps image 1 to image 2, given that I know a scattered mapping of points in coordinate system 1 to coordinate system 2? This could be implemented either in LabVIEW or OpenCV.

    Note: I am tagging this post delaunay, because that's one method of doing a scattered interpolation, but the better tag would be "scattered interpolation".

    Read the article

  • Ray-box Intersection Theory

    - by Myx
    Hello: I wish to determine the intersection point between a ray and a box. The box is defined by its min 3D coordinate and max 3D coordinate, and the ray is defined by its origin and the direction in which it points. Currently, I am forming a plane for each face of the box and intersecting the ray with each plane. If the ray intersects the plane, then I check whether or not the intersection point is actually on the surface of the box. If so, I check whether it is the closest intersection for this ray, and I return the closest intersection. The way I check whether the plane-intersection point is on the box surface itself is through a function:

        bool PointOnBoxFace(R3Point point, R3Point corner1, R3Point corner2)
        {
            double min_x = min(corner1.X(), corner2.X());
            double max_x = max(corner1.X(), corner2.X());
            double min_y = min(corner1.Y(), corner2.Y());
            double max_y = max(corner1.Y(), corner2.Y());
            double min_z = min(corner1.Z(), corner2.Z());
            double max_z = max(corner1.Z(), corner2.Z());

            if(point.X() >= min_x && point.X() <= max_x &&
               point.Y() >= min_y && point.Y() <= max_y &&
               point.Z() >= min_z && point.Z() <= max_z)
                return true;

            return false;
        }

    where corner1 is one corner of the rectangle for that box face and corner2 is the opposite corner. My implementation works most of the time, but sometimes it gives me the wrong intersection. I was wondering if the way I'm checking whether the intersection point is on the box is correct, or if I should use some other algorithm. Thanks.
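
    A hedged alternative worth comparing: the standard "slab" method, which intersects the ray with the three pairs of axis-aligned planes and keeps the overlap of the parameter intervals, avoiding per-face point-on-face tests and their edge cases. Component indexing on R3Point/R3Vector is assumed here for brevity; substitute the X()/Y()/Z() accessors as needed.

        #include <algorithm>

        bool RayBoxIntersect(const R3Point& o, const R3Vector& d,
                             const R3Point& lo, const R3Point& hi, double& tHit)
        {
            double tmin = -1e300, tmax = 1e300;
            for (int i = 0; i < 3; i++) {
                double inv = 1.0 / d[i];        // IEEE: +/-inf when d[i] == 0
                double t0 = (lo[i] - o[i]) * inv;
                double t1 = (hi[i] - o[i]) * inv;
                if (t0 > t1) std::swap(t0, t1);
                tmin = std::max(tmin, t0);
                tmax = std::min(tmax, t1);
                if (tmin > tmax) return false;  // slab intervals don't overlap: miss
            }
            tHit = (tmin >= 0.0) ? tmin : tmax; // nearest hit not behind the origin
            return tHit >= 0.0;
        }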

    Read the article

  • MIPS return address in main

    - by Alexander
    I am confused about why, in the code below, I need to decrement the stack pointer and store the return address again. If I don't do that, then PCSpim keeps on looping. Why is that?

        ########################################################################
        ### main
        ########################################################################
                .text
                .globl main
        main:
                addi $sp, $sp, -4       # Make space on stack
                sw   $ra, 0($sp)        # Save return address

                # Start test 1
                ############################################################
                la   $a0, asize1        # 1st parameter: address of asize1[0]
                la   $a1, frame1        # 2nd parameter: address of frame1[0]
                la   $a2, window1       # 3rd parameter: address of window1[0]
                jal  vbsme              # call function

                # Printing $v0
                add  $a0, $v0, $zero    # Load $v0 for printing
                li   $v0, 1             # Load the system call number
                syscall

                # Print newline.
                la   $a0, newline       # Load value for printing
                li   $v0, 4             # Load the system call number
                syscall

                # Printing $v1
                add  $a0, $v1, $zero    # Load $v1 for printing
                li   $v0, 1             # Load the system call number
                syscall

                # Print newline.
                la   $a0, newline       # Load value for printing
                li   $v0, 4             # Load the system call number
                syscall

                # Print newline.
                la   $a0, newline       # Load value for printing
                li   $v0, 4             # Load the system call number
                syscall
                ############################################################
                # End of test 1

                lw   $ra, 0($sp)        # Restore return address
                addi $sp, $sp, 4        # Restore stack pointer
                jr   $ra                # Return

        ########################################################################
        ### vbsme
        ########################################################################
                #.text
                .globl vbsme
        vbsme:
                addi $sp, $sp, -4       # create space on the stack
                sw   $ra, 0($sp)        # save return address

        exit:
                add  $v1, $t5, $zero    # (v1) x coordinate of the block in the frame with the minimum SAD
                add  $v0, $t4, $zero    # (v0) y coordinate of the block in the frame with the minimum SAD
                lw   $ra, 0($sp)        # restore return address
                addi $sp, $sp, 4        # restore stack pointer
                jr   $ra                # return

    If I delete:

                addi $sp, $sp, -4       # create space on the stack
                sw   $ra, 0($sp)        # save return address

    and

                lw   $ra, 0($sp)        # restore return address
                addi $sp, $sp, 4        # restore stack pointer

    from vbsme, PCSpim keeps on running. Why??? I shouldn't have to increment/decrement the stack pointer in vbsme and then do the jr again, right? The jal in main is supposed to handle that.
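
    A hedged sketch of the convention at play (not specific to this program): jal overwrites $ra, so only functions that themselves call other functions need to save and restore it; a true leaf function can return with jr $ra directly. If removing the save/restore in a leaf makes PCSpim loop, something else in the full function body is likely clobbering $ra or the stack.

        leaf_fn:                        # no jal inside: $ra still holds the
                jr   $ra                # caller's return address

        non_leaf:
                addi $sp, $sp, -4
                sw   $ra, 0($sp)        # the jal below will clobber $ra
                jal  leaf_fn
                lw   $ra, 0($sp)
                addi $sp, $sp, 4
                jr   $ra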

    Read the article

  • CGContext - PDF margin

    - by Manoj Khaylia
    Hi all, I am showing PDF content in a view using this code, based on the Quartz sample:

        // PDF page drawing expects a Lower-Left coordinate system, so we flip the coordinate system
        // before we start drawing.
        CGContextTranslateCTM(context, 0.0, self.bounds.size.height);
        CGContextScaleCTM(context, 1.0, -1.0);
        // Grab the first PDF page
        CGPDFPageRef page = CGPDFDocumentGetPage(pdf, pageNo);
        // We're about to modify the context CTM to draw the PDF page where we want it, so save the graphics state in case we want to do more drawing
        CGContextSaveGState(context);
        // CGPDFPageGetDrawingTransform provides an easy way to get the transform for a PDF page. It will scale down to fit, including any
        // base rotations necessary to display the PDF page correctly.
        CGAffineTransform pdfTransform = CGPDFPageGetDrawingTransform(page, kCGPDFCropBox, self.bounds, 0, true);
        // And apply the transform.
        CGContextConcatCTM(context, pdfTransform);
        // Finally, we draw the page and restore the graphics state for further manipulations!
        CGContextDrawPDFPage(context, page);
        CGContextRestoreGState(context);

    This all works fine. I want to set the margin for the PDF context; by default it shows a 50 px margin on every side. I have tried CGContext methods but have not found the appropriate one. Can anybody help me with this? Thanks, Monaj
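
    A hedged guess at the cause: CGPDFPageGetDrawingTransform fits the page's crop box inside self.bounds while preserving aspect ratio, so any mismatch in aspect ratios shows up as margins. One sketch of building the transform yourself so the page fills the view instead (at the cost of cropping the overflowing edges):

        CGRect box = CGPDFPageGetBoxRect(page, kCGPDFCropBox);
        CGFloat sx = self.bounds.size.width / box.size.width;
        CGFloat sy = self.bounds.size.height / box.size.height;
        CGFloat scale = MAX(sx, sy);              // fill rather than fit
        CGContextScaleCTM(context, scale, scale);
        CGContextTranslateCTM(context, -box.origin.x, -box.origin.y);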

    Read the article

  • Objective-C / iPhone: comparing 2 CLLocations / GPS coordinates

    - by user289503
    I have an app that finds your GPS location successfully, but I need to be able to compare that location with a list of GPS locations; if both are the same, you get a bonus. I thought I had it working, but it seems not. I have 'newLocation' as the location where you are. I think the problem is that I need to be able to separate the longitude and latitude data of newLocation. So far I've tried this:

        NSString *latitudeVar = [[NSString alloc] initWithFormat:@"%g°", newLocation.coordinate.latitude];
        NSString *longitudeVar = [[NSString alloc] initWithFormat:@"%g°", newLocation.coordinate.longitude];

    An example of the list of GPS locations:

        location:(CLLocation*)newLocation;
        CLLocationCoordinate2D bonusOne;
        bonusOne.latitude = 37.331689;
        bonusOne.longitude = -122.030731;

    and then:

        if (latitudeVar == bonusOne.latitude && longitudeVar == bonusOne.longitude) {
            UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"infinite loop firday"
                                                            message:@"infloop"
                                                           delegate:nil
                                                  cancelButtonTitle:@"Stinky"
                                                  otherButtonTitles:nil];
            [alert show];
            [alert release];
        }

    This comes up with the error 'invalid operands to binary == have struct NSString and CLLocationDegrees'. Any thoughts?
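
    A hedged suggestion on the comparison itself: exact equality on GPS fixes almost never matches, and comparing an NSString against a CLLocationDegrees will not compile, as the error says. Core Location can measure the distance between two fixes directly; the 25 m threshold below is illustrative (distanceFromLocation: is getDistanceFrom: on very old SDKs):

        CLLocation *bonus = [[CLLocation alloc] initWithLatitude:37.331689
                                                       longitude:-122.030731];
        CLLocationDistance meters = [newLocation distanceFromLocation:bonus];
        if (meters < 25.0) {
            // close enough to the bonus spot: award it
        }
        [bonus release];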

    Read the article

  • How to parse a PDF document on iPhone or iPad?

    - by JohnWhite
    I have implemented a PDF parsing application for iPad, but the content of the PDF does not display clearly. I don't know why this is happening; can you help me with this problem? Edit: I have used the code below for PDF parsing, but the text and images are not clear.

        - (id)initWithFrame:(CGRect)frame {
            if (self = [super initWithFrame:frame]) {
                // Initialization code
                /* Demo PDF printed directly from Wikipedia without permission; all content (c) respective owners */
                CFURLRef pdfURL = CFBundleCopyResourceURL(CFBundleGetMainBundle(), CFSTR("2008_11_mp.pdf"), NULL, NULL);
                pdf = CGPDFDocumentCreateWithURL((CFURLRef)pdfURL);
                CFRelease(pdfURL);
                counts += 1;
                self.pageNumber = counts;
                self.backgroundColor = nil;
                self.opaque = NO;
                self.userInteractionEnabled = NO;
            }
            return self;
        }

        - (void)drawRect:(CGRect)rect {
            // Drawing code
            CGContextRef context = UIGraphicsGetCurrentContext();
            // PDF page drawing expects a Lower-Left coordinate system, so we flip the coordinate system
            // before we start drawing.
            // Grab the first PDF page
            CGPDFPageRef page = CGPDFDocumentGetPage(pdf, pageNumber);
            CGContextTranslateCTM(context, 0, self.bounds.size.height);
            CGContextScaleCTM(context, 1, -1);
            // We're about to modify the context CTM to draw the PDF page where we want it, so save the graphics state in case we want to do more drawing
            CGContextSaveGState(context);
            // CGPDFPageGetDrawingTransform provides an easy way to get the transform for a PDF page. It will scale down to fit, including any
            // base rotations necessary to display the PDF page correctly.
            CGAffineTransform pdfTransform = CGPDFPageGetDrawingTransform(page, kCGPDFMediaBox, self.bounds, 0, true);
            // And apply the transform.
            CGContextConcatCTM(context, pdfTransform);
            // Finally, we draw the page and restore the graphics state for further manipulations!
            CGContextDrawPDFPage(context, page);
            CGContextRestoreGState(context);
        }

    Please help me solve this problem.
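
    A hedged guess, not from the question: on a Retina device a plain UIView renders its backing store at scale 1.0 unless told otherwise, which makes Quartz-drawn PDF content look soft. Worth trying in initWithFrame: (contentsScale needs QuartzCore imported; a zoomable PDF view would eventually want CATiledLayer, which is beyond this sketch):

        // Render the backing store at the device's native scale (2.0 on Retina).
        self.contentScaleFactor = [UIScreen mainScreen].scale;
        self.layer.contentsScale = [UIScreen mainScreen].scale;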

    Read the article

  • Positioning SVG Elements

    - by Rob Wilkerson
    In the course of toying with SVG for the first time (using the Raphael library), I've run into a problem positioning dynamic elements on the canvas in such a way that they're completely contained within the canvas. What I'm trying to do is randomly position n words/short phrases. Since the text is variable, its position needs to be variable as well, so what I'm doing is:

    1. Initially creating the text at point 0,0 with no opacity.
    2. Checking the width of the drawn text element using text.getBBox().width.
    3. Setting a new x coordinate as Math.random() * (canvas_width - (text_width/2) - pad).
    4. Altering the x coordinate of the text to the newly set value (text.attr('x', x)).
    5. Setting the opacity attribute of the text to 1.

    I'll be the first to admit that my math acumen is limited, but this seems pretty straightforward. Somehow, I still end up with text running off beyond the right edge of my canvas. For simplicity above, I removed the bit that also sets a minimum x value by adding it to the Math.random() result. It is there, though, and I see the same problem on the leading edge of the canvas. My understanding (such as it is) is that Math.random() generates a number between 0 and 1, which can then be multiplied by some number (in my case, the canvas width minus half the text width minus some arbitrary padding) to get the outer bound. I'm dividing the width of the text in half because its position on the grid is set at its center. I hope I've just been staring at this for too long, but is my math that rusty, or am I misunderstanding something about the behavior of Math.random(), SVG, text, or anything else that's under the hood of this solution?
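
    For what it's worth, a hedged sketch of the clamped version: since Raphael anchors text at its center, the center must stay at least half the text width (plus padding) away from both edges, so the random value should span [halfW + pad, W - halfW - pad] rather than starting at 0 (canvasWidth and pad are illustrative names):

        var halfW = text.getBBox().width / 2;
        var min = halfW + pad;
        var max = canvasWidth - halfW - pad;
        text.attr('x', min + Math.random() * (max - min));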

    Read the article

  • Cocos2D: Problem Rotating a CCMenu

    - by srikanth rongali
    When I try to run actions over menu items, the actions do not run as expected. I think the code below should make the menuItem rotate by 90 degrees, but when I run it, the menuItem translates from its coordinates to another coordinate and then returns to its original coordinate. The complete translation takes 3 seconds. What I need is for the menuItem to rotate by 90 degrees in place over a 3-second duration. Please explain where I have gone wrong.

        CCMenuItemImage *targetE; // Globally declared
        CCMenu *menu;             // Globally declared

        - (id)init {
            if ((self = [super init])) {
                isTouchEnabled = YES;
                CGSize windowSize = [[CCDirector sharedDirector] winSize];
                targetE = [CCMenuItemImage itemFromNormalImage:@"grossinis_sister1.png"
                                                 selectedImage:@"grossinis_sister1.png"
                                                        target:self
                                                      selector:@selector(touch:)];
                menu = [CCMenu menuWithItems:targetE, nil];
                id action4 = [CCRotateBy actionWithDuration:3.0 angle:90];
                [menu runAction:[CCSequence actions:action4, nil]];
                menu.position = ccp(windowSize.width/2 + 200, windowSize.height/2);
                [self addChild:menu z:10];
            }
            return self;
        }
        @end

    Thank You.
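
    A hedged suggestion (names from the snippet above): nodes rotate around their own anchor point, so running the rotation on the menu -- whose position is also being set after the action starts -- can look like a translation. Rotating the item itself, after the menu has been positioned, spins it in place:

        menu.position = ccp(windowSize.width/2 + 200, windowSize.height/2);
        id rotate = [CCRotateBy actionWithDuration:3.0 angle:90];
        [targetE runAction:rotate]; // rotate the item around its own anchor point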

    Read the article
