Search Results

Search found 5654 results on 227 pages for '3d rendering'.

Page 54/227 | < Previous Page | 50 51 52 53 54 55 56 57 58 59 60 61  | Next Page >

  • Android Java: assigning a 2D array to a 3D array

    - by semajhan
    I'm running into problems assigning a 2D array to a 3D array, so I thought I'd ask a question about 3D and 2D arrays. Say I have a masterArray[][][] and want to put childArray1[][] and childArray2[][] into it. This is how I have done it, and I was wondering whether it is the correct way of applying it:

        private int[][][] masterArray;

        private int[][] childArray1 = {
            {1, 1, 1, 1, 1, 1, 1, 1, 1, 1},
            {1, 1, 1, 0, 0, 1, 0, 1, 1, 1},
            {1, 1, 1, 1, 1, 1, 1, 1, 1, 1},
            {1, 0, 1, 1, 1, 1, 1, 1, 0, 1},
            {1, 0, 1, 1, 1, 1, 0, 1, 0, 1},
            {1, 0, 1, 1, 0, 1, 0, 1, 0, 1},
            {1, 0, 1, 1, 0, 1, 1, 1, 0, 1},
            {1, 0, 1, 1, 1, 1, 1, 1, 0, 1},
            {1, 0, 1, 1, 0, 1, 1, 1, 0, 1},
            {1, 0, 1, 1, 0, 1, 8, 1, 0, 1},
            {1, 0, 7, 1, 1, 1, 0, 1, 0, 1},
            {1, 0, 1, 1, 1, 1, 0, 1, 0, 1},
            {1, 0, 1, 1, 1, 1, 1, 1, 0, 1},
            {1, 0, 0, 0, 0, 0, 0, 9, 0, 1},
            {1, 1, 1, 1, 1, 1, 1, 1, 1, 1}
        };

        private int[][] childArray2 = {
            {1, 1, 1, 1, 1, 1, 1, 1, 1, 1},
            {1, 1, 1, 1, 1, 1, 1, 1, 1, 1},
            {1, 1, 1, 1, 1, 1, 1, 1, 1, 1},
            {1, 1, 1, 1, 1, 1, 1, 1, 1, 1},
            {1, 1, 1, 1, 1, 1, 1, 1, 1, 1},
            {1, 1, 1, 1, 1, 1, 1, 1, 1, 1},
            {0, 0, 0, 0, 0, 0, 0, 0, 1, 1},
            {1, 1, 1, 1, 7, 1, 1, 1, 1, 1},
            {1, 1, 1, 1, 1, 1, 1, 1, 1, 1},
            {1, 1, 1, 1, 0, 1, 1, 1, 1, 1},
            {1, 1, 1, 1, 0, 0, 0, 1, 1, 1},
            {1, 1, 1, 9, 1, 1, 8, 0, 1, 1},
            {1, 1, 1, 1, 1, 1, 1, 1, 1, 1},
            {1, 1, 1, 1, 1, 1, 1, 1, 1, 1},
            {1, 1, 1, 1, 1, 1, 1, 1, 1, 1}
        };

    OK, so in my init method I use a couple of methods to copy the child arrays into the master array. What I was curious about is how exactly this works. I assumed the following:

        masterArray = new int[MAX_LEVELS][MAP_WIDTH][MAP_HEIGHT];

        // copy one 2D child level into the current level of the 3D master array
        for (int x = 0; x < MAP_WIDTH; x++) {
            for (int y = 0; y < MAP_HEIGHT; y++) {
                masterArray[currentLevel][x][y] = childArray1[x][y];
            }
        }

    Would that work? Things aren't working in my application, so I'm picking out the pieces of code I'm not 100% sure about.

    Read the article

  • DirectX: Render to a screen buffer without using a render target

    - by knight666
    Hello, I'm writing an open source 2D game engine, and I want to support as many devices and platforms as possible. Currently I only have Windows Mobile, though. I'm rendering using DirectX Mobile, with DirectDraw as a fallback path. However, I've run into a bit of trouble: it seems that while the reference driver supports CreateRenderTarget, many, many physical devices do not. I need some way to render to the screen without using a render target, because I render sprites using textured quads, but I also need to be able to draw individual pixels. This is how I do it right now:

        // save old values
        if (Error::Failed(m_D3DDevice->GetRenderTarget(&m_D3DOldTarget))) {
            ERROR_EXPLAIN("Could not retrieve backbuffer.");
            return false;
        }

        // clear render surface
        if (Error::Failed(m_D3DDevice->SetRenderTarget(m_D3DRenderSurface, NULL))) {
            ERROR_EXPLAIN("Could not set render target to render texture.");
            return false;
        }
        if (Error::Failed(m_D3DDevice->Clear(
                0, NULL,                    // target rectangle
                D3DMCLEAR_TARGET,
                D3DMCOLOR_XRGB(0, 0, 0),    // clear color
                1.0f, 0))) {
            ERROR_EXPLAIN("Failed to clear render texture.");
            return false;
        }

        D3DMLOCKED_RECT render_rect;
        if (Error::Failed(m_D3DRenderSurface->LockRect(&render_rect, NULL, NULL))) {
            ERROR_EXPLAIN("Failed to lock render surface pixels.");
        } else {
            m_D3DBackSurf->SetBuffer((Pixel*)render_rect.pBits);
            m_D3DRenderSurface->UnlockRect();
        }

        // begin scene
        if (Error::Failed(m_D3DDevice->BeginScene())) {
            ERROR_EXPLAIN("Failed to start rendering.");
            return false;
        }

        // =====================
        // example rendering
        // =====================

        // some other stuff, but the most important part of rendering a sprite:
        device->SetTexture(0, m_Texture);
        device->SetStreamSource(0, m_VertexBuffer, sizeof(Vertex));
        device->DrawPrimitive(D3DMPT_TRIANGLELIST, 0, 2);

        // plotting a pixel
        Surface* target = (Surface*)Device::GetRenderMethod()->GetRenderTarget();
        buffer = target->GetBuffer();
        buffer[somepixel] = MAKECOLOR(255, 0, 0);

        // end scene
        if (Error::Failed(device->EndScene())) {
            ERROR_EXPLAIN("Failed to end scene.");
            return false;
        }

        // restore the backbuffer, copy the render texture onto it and present
        if (Error::Failed(device->SetRenderTarget(m_D3DOldTarget, NULL))) {
            ERROR_EXPLAIN("Couldn't set render target to backbuffer.");
            return false;
        }
        if (Error::Failed(device->GetBackBuffer(0, D3DMBACKBUFFER_TYPE_MONO, &m_D3DBack))) {
            ERROR_EXPLAIN("Couldn't retrieve backbuffer.");
            return false;
        }
        RECT dest = { 0, 0, Device::GetWidth(), Device::GetHeight() };
        if (Error::Failed(device->StretchRect(m_D3DRenderSurface, NULL, m_D3DBack, &dest, D3DMTEXF_NONE))) {
            ERROR_EXPLAIN("Failed to stretch render texture to backbuffer.");
            return false;
        }
        if (Error::Failed(device->Present(NULL, NULL, NULL, NULL))) {
            ERROR_EXPLAIN("Failed to present device.");
            return false;
        }

    I'm looking for a way to do the same thing (render sprites using hardware acceleration and plot pixels on a buffer) without using a render target. Thanks in advance.

    Read the article

  • Interpolation of scattered data: What could I do?

    - by Simon
    Hi! I need your help. I'm working on a 3D chart in Java using Java 3D. It should be able to display a set of measured values, but the data I get from the measurements is scattered. This means I will have to interpolate the missing points in order to get my surface plotted nicely. I haven't studied all that 3D geometry stuff yet and I don't know where to start. My idea is to triangulate the points into a surface and then, based on that triangulation, interpolate the missing points (see this to get a rough idea of what I want to achieve). Does anyone have experience with the interpolation of scattered data? Is my approach the right one? If yes, what kind of data structures and algorithms will I need in order to triangulate my point cloud?
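
    A minimal sketch of the triangulate-then-interpolate idea, shown in Python with SciPy rather than Java 3D (the point data and the grid resolution are illustrative assumptions): SciPy's griddata builds a Delaunay triangulation of the scattered (x, y) locations and interpolates linearly inside each triangle, which is exactly the approach described above.

        import numpy as np
        from scipy.interpolate import griddata

        # scattered measurements: (x, y) locations and the value measured at each one
        points = np.random.rand(200, 2)
        values = np.sin(6.0 * points[:, 0]) * points[:, 1]

        # regular grid on which the surface will be plotted
        grid_x, grid_y = np.mgrid[0:1:100j, 0:1:100j]

        # Delaunay-triangulate the scattered points and interpolate inside each triangle
        grid_z = griddata(points, values, (grid_x, grid_y), method="linear")

        # grid points outside the convex hull come back as NaN; fall back to nearest-neighbour there
        nearest = griddata(points, values, (grid_x, grid_y), method="nearest")
        grid_z = np.where(np.isnan(grid_z), nearest, grid_z)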

    Read the article

  • Unity3D, Torque3D, Google O3D, WebGL... which to choose?

    - by gpwjg
    For development of interactive 3D web applications, which engine is recommended? I am aware that WebGL has been announced and should become standardized across browsers in the near future (1-2 years). I am afraid that time invested in a proprietary game engine such as Unity or Torque would be wasted once plugin-less, open-source 3D rendering (WebGL and 3D JavaScript) arrives. Is this a stupid thing to worry about? Should I begin with Unity3D (its demos and tools were mind-blowing)?

    Read the article

  • Problems with word completion on Windows Mobile

    - by Rowland Shaw
    For "some reason" the word completion function on my windows mobile phone (HTC Diamond, rebadged as a T-Mobile MDA Compact IV (UK) running WM6.1 with HTC Touch Flo 3D) hasn't worked since one of my firends was taking a look at the phone (I remember him bitching about it being too obtrusive for him, as an iPhone fanboy). I've checked all the obvious settings ( Start Input Word Completion ) and everything looks set there; I tried a hard reset, to no avail and even tried upgrading the ROM t the latest from my network provider. I even tried walking into the store where I bought the phone, and the staff couldn't fix the issue. I still have my old handset, which also runs WM6.1 (a T-Mobile MDA Compact III (UK), albeit without Touch Flo 3D), and the word completion works on there, so I'm a little confused as to why I can't get it to work again on my new handset. Can anybody identify why this might not be working, or help me fix it? Edit: Even "Touch Input Settings" has both "Word Completion in T9 mode" and "Word Completion in ABC mode" checked. The full qwerty keyboard option is in T9 mode, and word completion works for this input method; It still does not work for my preferred, "Letter Recogniser" method.

    Read the article

  • Is it possible to run C++ code bound to SDL+OpenGL in a web browser?

    - by unknownthreat
    My client wants her website to have an application that renders 3D (light 3D stuff; we are only drawing flat squares in a 3D world), but web programming is not my thing. So I am looking for something that can run a C++ program from a web browser. As I understand it, though, in that case the client side must download the program first, and that's not what I want: the user should only be able to use this application on the website itself. I came across Google Native Client, which claims that it can run x86 native code in web applications. I haven't decided whether it is worth it or not, and I don't know whether this is what I want, so I decided to ask experienced people about it. If I want something like this, is what I described above possible? Or do I need a completely different technology such as Flex because this isn't worth the trouble? Or is Google Native Client suitable for doing something like this?

    Read the article

  • Rendering a random generated maze in WinForms.NET

    - by Claus Jørgensen
    Hi, I'm trying to create a maze generator, and for this I have implemented the randomized Prim's algorithm in C#. However, the result of the generation is invalid, and I can't figure out whether it's my rendering or my implementation that's wrong. So for starters, I'd like to have someone take a look at the implementation (maze is a matrix of cells):

        var cell = maze[0, 0];
        cell.Connected = true;
        var walls = new HashSet<MazeWall>(cell.Walls);
        while (walls.Count > 0) {
            // pick a random wall; if it leads to an unconnected cell, open it as a passage
            var randomWall = walls.GetRandom();
            var randomCell = randomWall.CellA.Connected ? randomWall.CellB : randomWall.CellA;
            if (!randomCell.Connected) {
                randomWall.IsPassage = true;
                randomCell.Connected = true;
                foreach (var wall in randomCell.Walls)
                    walls.Add(wall);
            }
            walls.Remove(randomWall);
        }

    Here's an example of the rendered result (see the attached image).

    Edit: OK, let's have a look at the rendering part then:

        private void MazePanel_Paint(object sender, PaintEventArgs e) {
            int size = 20;
            int cellSize = 10;
            MazeCell[,] maze = RandomizedPrimsGenerator.Generate(size);

            mazePanel.Size = new Size(size * cellSize + 1, size * cellSize + 1);
            e.Graphics.DrawRectangle(Pens.Blue, 0, 0, size * cellSize, size * cellSize);

            for (int y = 0; y < size; y++)
                for (int x = 0; x < size; x++) {
                    // draw every wall of this cell that has not been opened as a passage
                    foreach (var wall in maze[x, y].Walls.Where(w => !w.IsPassage)) {
                        if (wall.Direction == MazeWallOrientation.Horisontal) {
                            e.Graphics.DrawLine(Pens.Blue,
                                x * cellSize, y * cellSize,
                                x * cellSize + cellSize, y * cellSize);
                        } else {
                            e.Graphics.DrawLine(Pens.Blue,
                                x * cellSize, y * cellSize,
                                x * cellSize, y * cellSize + cellSize);
                        }
                    }
                }
        }

    And I guess, to understand this, we need to see the MazeCell and MazeWall classes:

        namespace MazeGenerator.Maze {
            class MazeCell {
                public int Column { get; set; }
                public int Row { get; set; }
                public bool Connected { get; set; }

                private List<MazeWall> walls = new List<MazeWall>();
                public List<MazeWall> Walls {
                    get { return walls; }
                    set { walls = value; }
                }

                public MazeCell() {
                    this.Connected = false;
                }

                public void AddWall(MazeCell b) {
                    walls.Add(new MazeWall(this, b));
                }
            }

            enum MazeWallOrientation {
                Horisontal,
                Vertical,
                Undefined
            }

            class MazeWall : IEquatable<MazeWall> {
                public IEnumerable<MazeCell> Cells {
                    get {
                        yield return CellA;
                        yield return CellB;
                    }
                }

                public MazeCell CellA { get; set; }
                public MazeCell CellB { get; set; }
                public bool IsPassage { get; set; }

                public MazeWallOrientation Direction {
                    get {
                        if (CellA.Column == CellB.Column) {
                            return MazeWallOrientation.Horisontal;
                        } else if (CellA.Row == CellB.Row) {
                            return MazeWallOrientation.Vertical;
                        } else {
                            return MazeWallOrientation.Undefined;
                        }
                    }
                }

                public MazeWall(MazeCell a, MazeCell b) {
                    this.CellA = a;
                    this.CellB = b;
                    a.Walls.Add(this);
                    b.Walls.Add(this);
                    IsPassage = false;
                }

                #region IEquatable<MazeWall> Members

                public bool Equals(MazeWall other) {
                    return (this.CellA == other.CellA) && (this.CellB == other.CellB);
                }

                #endregion
            }
        }
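
    For comparison, a compact reference sketch of randomized Prim's on a grid, in Python rather than C# (the cell/wall representation here is illustrative and not the poster's data structures):

        import random

        def prims_maze(width, height):
            """Return the set of opened walls as (cell, neighbour) pairs of (x, y) tuples."""
            def walls_of(cell):
                x, y = cell
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if 0 <= nx < width and 0 <= ny < height:
                        yield (cell, (nx, ny))

            connected = {(0, 0)}
            passages = set()
            frontier = list(walls_of((0, 0)))
            while frontier:
                # take a random wall out of the frontier
                a, b = frontier.pop(random.randrange(len(frontier)))
                if b not in connected:
                    # the wall leads to an unvisited cell: open it and grow the frontier
                    passages.add((a, b))
                    connected.add(b)
                    frontier.extend(walls_of(b))
            return passages

        print(len(prims_maze(20, 20)))   # a 20x20 maze always opens 399 walls (cells - 1)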

    Read the article

  • Wrapping/warping a CALayer/UIView (or OpenGL) in 3D (iPhone)

    - by jbrennan
    I've got a UIView (and thus a CALayer) which I'm trying to warp or bend slightly in 3D space. That is, imagine my UIView is a flat label which I want to partially wrap around a beer bottle (not 360 degrees around, just on one "side"). I figured this would be possible by applying a transform to the view's layer, but as far as I can tell, this transform is limited to rotation, scale and translation of the layer uniformly. I could be wrong here, as my linear algebra is foggy at this point, to say the least. How can I achieve this?

    Read the article

  • PhantomJS not exactly rendering HTML to PNG

    - by John Leonard
    I'm having trouble getting PhantomJS to create a PNG file that matches the original browser presentation. Here is the entire sample HTML file. It's a sankey diagram created using rCharts and d3-sankey. (You'll need to save the file to your hard drive and view it from there.) I'm running on Windows and using rasterize.js:

        phantomjs.exe rasterize.js test.html test.png

    ISSUE: Below is a snip of one of the text strings when viewed in a browser, and a snip of the same string from the PNG created by PhantomJS (see the attached images). How do I make the text-shadow go away? I've played around with various CSS attributes (text-shadow) and WebKit-specific attributes (e.g., -webkit-text-rendering), but can't seem to make it go away. Is this a setting in PhantomJS? In the underlying WebKit? Or somewhere else? Many thanks!

    Read the article

  • How do I translate movement on the Canvas3D to movement in the virtual 3D world

    - by Coder
    My goal is to move a shape in the virtual world in such a way that it ends up where the mouse pointer is on the canvas.

    What I have:
    - the mouse position (x, y) on a Canvas3D object
    - a Point3d object for the point where a pick ray starting from the Canvas3D viewport intersects the first scene object (the point in 3D space where I want to start the drag)

    What I want:
    - some way to translate the Point3d's coordinates so that the initial point of intersection always stays under the mouse position on the canvas (the same position used when the pick ray determined what the user clicked on from the Canvas3D object).

    Thanks!
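
    One common way to do this, sketched below in Python with NumPy rather than Java 3D (all names are illustrative assumptions), is to keep the grabbed point on a plane that faces the camera and passes through the original intersection point: on every mouse move, build the pick ray for the current mouse position and intersect it with that plane.

        import numpy as np

        def drag_point(ray_origin, ray_dir, grab_point, plane_normal):
            """Intersect the current pick ray with the plane through the originally
            picked point; the result is the new world-space position for the shape."""
            ray_dir = ray_dir / np.linalg.norm(ray_dir)
            denom = np.dot(plane_normal, ray_dir)
            if abs(denom) < 1e-9:          # ray is parallel to the plane
                return None
            t = np.dot(plane_normal, grab_point - ray_origin) / denom
            return ray_origin + t * ray_dir

        # example: eye at z = 10 looking down -Z, plane facing the camera through the grab point
        new_pos = drag_point(np.array([0.0, 0.0, 10.0]),    # pick ray origin (eye point)
                             np.array([0.1, 0.2, -1.0]),    # ray through the current mouse position
                             np.array([1.0, 1.0, 2.0]),     # point picked on mouse-down
                             np.array([0.0, 0.0, 1.0]))     # plane normal toward the camera
        print(new_pos)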

    Read the article

  • How do I draw a texture-mapped triangle in MATLAB?

    - by Petter
    I have a triangle in (u,v) coordinates in an image. I would like to draw this triangle at the 3D coordinates (X,Y,Z), texture-mapped with the triangle from the image. Here, u, v, X, Y, Z are all vectors with three elements, representing the three corners of the triangle. I have a very ugly, slow and unsatisfactory solution in which I (1) extract a rectangular part of the image, (2) transform it to 3D space with the transformation defined by the three points, (3) draw it with surface, and (4) finally mask out everything that is not part of the triangle with AlphaData. Surely there must be an easier way of doing this?
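
    The mapping involved can be written down directly with barycentric coordinates; a sketch of the math (the notation here is assumed, not taken from the question):

        % A point p = (u, v) inside the image triangle with corners (u_i, v_i)
        % has barycentric coordinates (\lambda_1, \lambda_2, \lambda_3) solving
        \begin{pmatrix} u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \\ 1 & 1 & 1 \end{pmatrix}
        \begin{pmatrix} \lambda_1 \\ \lambda_2 \\ \lambda_3 \end{pmatrix}
        =
        \begin{pmatrix} u \\ v \\ 1 \end{pmatrix},
        % and the corresponding point on the 3D triangle is the same combination
        % of the 3D corners:
        P(u, v) = \lambda_1 \, C_1 + \lambda_2 \, C_2 + \lambda_3 \, C_3,
        \qquad C_i = (X_i, Y_i, Z_i).

    Sampling the image colour at (u, v) and placing it at P(u, v) is all the texture mapping has to do for a single triangle.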

    Read the article

  • Intersections of 3D polygons in python

    - by Andrew Walker
    Are there any open source tools or libraries (ideally in Python) available for performing lots of intersection tests against 3D geometry read from an ESRI shapefile? Most of the tests will be simple line segments vs polygons. I've looked into OGR 1.7.1 / GEOS 3.2.0, and whilst it loads the data correctly, the resulting intersections aren't correct, and most of the other tools available seem to build on this work. Whilst CGAL would have been an alternative, its license isn't suitable. The Boost generic geometry library looks fantastic, but the API is huge and doesn't seem to support WKT or WKB readers out of the box.
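
    Since most of the tests are line segments against polygons, one self-contained fallback is to triangulate each polygon and test segments against the triangles directly. A minimal NumPy sketch of a Möller-Trumbore style segment/triangle test (not tied to OGR or the shapefile data; the example coordinates are made up):

        import numpy as np

        def segment_hits_triangle(p0, p1, a, b, c, eps=1e-12):
            """Return the intersection point of segment p0-p1 with triangle (a, b, c), or None."""
            d = p1 - p0
            e1, e2 = b - a, c - a
            h = np.cross(d, e2)
            det = np.dot(e1, h)
            if abs(det) < eps:                 # segment parallel to the triangle plane
                return None
            f = 1.0 / det
            s = p0 - a
            u = f * np.dot(s, h)
            if u < 0.0 or u > 1.0:
                return None
            q = np.cross(s, e1)
            v = f * np.dot(d, q)
            if v < 0.0 or u + v > 1.0:
                return None
            t = f * np.dot(e2, q)
            if t < 0.0 or t > 1.0:             # intersection lies outside the segment
                return None
            return p0 + t * d

        hit = segment_hits_triangle(np.array([0.0, 0.0, -1.0]), np.array([0.0, 0.0, 1.0]),
                                    np.array([-1.0, -1.0, 0.0]), np.array([2.0, -1.0, 0.0]),
                                    np.array([-1.0, 2.0, 0.0]))
        print(hit)   # -> [0. 0. 0.]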

    Read the article

  • bitmap button not displaying in 3D style

    - by Rohit Sasikumar
    Hi, I want to display a bitmap button on my dialog. I am using the code below:

        CImage image;
        hr = image.Load(_T("myimage.png"));   // just change the extension to load a JPG
        bitmap.Attach(image.Detach());
        m_button.ModifyStyle(0, BS_BITMAP);
        m_button.SetBitmap(bitmap);

    This way the bitmap is correctly displayed on the button, but the button is not drawn in the 3D style that normal buttons have. I have set the owner-drawn property to false, yet it still displays like this. Any ideas as to what could be wrong? Thanks, Rohit

    Read the article

  • PyOpenGL: glVertexPointer() offset problem

    - by SurvivalMachine
    My vertices are interleaved in a NumPy array (dtype = float32) like this: ... tu, tv, nx, ny, nz, vx, vy, vz, ... When rendering, I'm calling the gl*Pointer() functions like this (I have enabled the arrays beforehand):

        stride = (2 + 3 + 3) * 4
        glTexCoordPointer( 2, GL_FLOAT, stride, self.vertArray )
        glNormalPointer( GL_FLOAT, stride, self.vertArray + 2 )
        glVertexPointer( 3, GL_FLOAT, stride, self.vertArray + 5 )
        glDrawElements( GL_TRIANGLES, len( self.indices ), GL_UNSIGNED_SHORT, self.indices )

    The result is that nothing renders. However, if I reorganize my array so that the vertex position is the first element ( ... vx, vy, vz, tu, tv, nx, ny, nz, ... ), I get correct positions for the vertices, but the texture coords and normals aren't rendered correctly. This leads me to believe that I'm not setting the pointer offset right. How should I set it? I'm using almost exactly the same code in my other app in C++ and it works.
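
    A sketch of one way to get the offsets right, assuming the same interleaved layout (the use of PyOpenGL's VBO wrapper and the local names are assumptions; vert_array and indices stand for the poster's self.vertArray and self.indices). The key point is that the offsets passed to the pointer functions are in bytes from the start of the buffer, not in floats:

        import ctypes
        from OpenGL.GL import (glTexCoordPointer, glNormalPointer, glVertexPointer,
                               glDrawElements, GL_FLOAT, GL_TRIANGLES, GL_UNSIGNED_SHORT)
        from OpenGL.arrays import vbo

        stride = (2 + 3 + 3) * 4                 # 8 floats per vertex, 4 bytes each
        vertex_vbo = vbo.VBO(vert_array)         # upload the interleaved float32 array

        vertex_vbo.bind()
        glTexCoordPointer(2, GL_FLOAT, stride, ctypes.c_void_p(0))       # tu, tv at byte 0
        glNormalPointer(GL_FLOAT, stride, ctypes.c_void_p(2 * 4))        # nx, ny, nz at byte 8
        glVertexPointer(3, GL_FLOAT, stride, ctypes.c_void_p(5 * 4))     # vx, vy, vz at byte 20
        glDrawElements(GL_TRIANGLES, len(indices), GL_UNSIGNED_SHORT, indices)
        vertex_vbo.unbind()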

    Read the article

  • WPF 4.0 Font Rendering Issue

    - by Tom Allen
    I'm getting a weird rendering issue with WPF 4 applications in the way they render some of the text: it stretches it and makes it very narrow (compare the attached .NET 3.5 and .NET 4.0 screenshots). At first I thought it could be a problem with the font, but I'm also seeing the same problem in the Blend 4 beta. I'm running XP SP3 and Visual Studio 2010 Professional, and everything is as up to date as it can be. I'm not noticing any such problem with Silverlight 4 apps I have built on the same machine... Has anyone else seen this, or know why it's happening?

    Read the article

  • Multilingual Unicode rendering in OpenGL

    - by sum1stolemyname
    Hi folks, I have to extend an OpenGL rendering system to support international characters (especially Hebrew, Arabic and Cyrillic). The development platform is Windows (XP|Vista|7), alas using Embarcadero Delphi 2010. I currently use wglUseFontOutlines(...) to build my font's display lists and glCallLists(length(m_Text), UNSIGNED_SHORT, PWchar(m_Text)) to render my strings. While this is feasible for Latin-1 characters, building the full Unicode character set in advance is pretty time-consuming (about 8.5 minutes on my machine), so I am looking for a more efficient solution. I thought about limiting the range to U+0020 - U+077F (Latin, Greek, Cyrillic, Arabic and Hebrew) to include just the glyphs I need, but that would only be a solution for my current needs and will become insufficient once other encodings are needed. On the upside, I do not have to worry about left-to-right or right-to-left direction, as our application can handle this already. I would expect this to be a well-known problem, so I would like to ask whether there is any reference material on this on the web, or whether you could share some insight?

    Read the article

  • Resources for Programmatic Rendering of Topology Maps

    - by bn
    Servus, do you know of any frameworks, APIs, languages, or other resources that are well suited to drawing topology maps that let a user interact with the objects on the map? I am not constrained by language choice, and the program can be web-based or stand-alone. I thought I would check before rolling my own. My goal is not to draw cartographic maps but something more like this picture: http://www.fineconnection.com/files/images/GraphicalNM.PNG or, if you are familiar with Edward Tufte's books, the data-visualization mechanisms he describes, such as a map of a metro or subway. Also, if you have had any experience rendering these types of user interfaces, or with the underlying data structures, I would be grateful to hear any thoughts, advice, or "gotchas" you have on the subject. Thank you very much for your time, -bn

    Read the article

  • IE6 not rendering the page

    - by JX
    I have a web page which consists of a few component pages. When a user requests the page, the server assembles the component pages into the main page and sends it back to the browser. When there is an update to a component page, IE6 does not show the updated content; IE7 and Firefox, however, are fine. Checking with Fiddler, the HTTP response returns 200, the If-Modified-Since header is the same as the last modified date of the main page, and the raw HTML of the response contains the updated content. It seems that IE6 abandons rendering of the page because If-Modified-Since is the same as the last modified date of the main page. Does anyone know how to work around this issue in IE6?

    Read the article

  • Optimising local image loading/rendering on iPhone

    - by Tricky
    Hi, I'm looking to create an interface where the user can navigate through large volumes of images. Each image has a 128x128 thumbnail that I wish to display, and the interface is somewhat similar to Cover Flow in operation. I have this all working in principle, but I'm running into trouble when navigating through content at speed: the interface begins to stutter and become jerky. I believe this is primarily because of disk I/O and the cost of rendering each image. Is there any way this can simply be handed off to a separate thread, defaulting to a greyed-out thumbnail until the image has loaded? How has Apple managed to achieve this in Cover Flow? Many thanks,

    Read the article

  • ListView control rendering issue with Groups, CheckBoxes and View mode SmallIcon

    - by volody
    The Microsoft MSDN site has the following remark: "Any groups assigned to a ListView control appear whenever the ListView.View property is set to a value other than View.List." My problem is that I would like to have View set to SmallIcon. In this mode the ListView control is shifted left and the CheckBoxes are covered by the left edge. How can I solve this issue, or at least shift the rendering of the control to the right? My OS is Windows XP Service Pack 3. It looks like ListView items with Groups and CheckBoxes only show correctly when View is set to Details.

    Read the article

  • Can't get custom error rendering to work in symfony 1.4

    - by hongkildong
    I'm trying to customize error rendering in my form according to this example. Here is my code:

        if ($this['message']->hasError())
        {
            $error_msg = '<ul>';
            foreach ($this['message']->getError() as $error)
                $error_msg .= "<li>$error</li>";
            $error_msg .= '</ul>';
        }
        return $error_msg;

    But when $this['message'] has an error, this code returns '<ul></ul>', so it seems that foreach ($this['message']->getError() as $error) causes no iterations. $this['message']->getError() returns an sfValidatorError object; maybe something changed in symfony 1.4 and it isn't iterable anymore... At first I thought all the magic in that example happened because the object placed in $error by the iteration implements __toString(), but it seems no iteration happens at all...

    Read the article

  • Draw a textured triangle (patch) in Matlab

    - by Petter
    I have a triangle in (u,v) coordinates in an image. I would like to draw this triangle at the 3D coordinates (X,Y,Z), texture-mapped with the triangle from the image. Here, u, v, X, Y, Z are all vectors with three elements, representing the three corners of the triangle. I have a very ugly, slow and unsatisfactory solution in which I (1) extract a rectangular part of the image, (2) transform it to 3D space with the transformation defined by the three points, (3) draw it with surface, and (4) finally mask out everything that is not part of the triangle with AlphaData. Surely there must be an easier way of doing this?

    Read the article

  • Using glOrtho to view Side, Front, Top perspectives of a 3D scene

    - by talldan
    Dear all, I'm building a game level editing app as part of a university project. In my application I have multiple viewports: a perspective viewport and three orthographic views, all set up to view the same scene. I've successfully set up the orthographic views and can translate and scale them to mimic scrolling and zooming. Unfortunately, I'm having one problem: my scene still has three dimensions, so objects viewed in orthographic mode are clipped when their depth falls outside of my clipping volume. Most 3D authoring tools and level editors let you view all objects in orthographic mode regardless of depth. I guess what I need to do is scale my scene in the appropriate dimension so that all values lie between -1 and 1; is there a straightforward way of going about this, or is there a different, better approach? Thanks very much for your help, Dan
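
    One approach that avoids rescaling the scene is simply to give the orthographic projection a near/far range large enough to enclose everything, so no object is clipped by depth. A minimal sketch in Python with PyOpenGL (the viewport dimensions and the scene_extent value are assumptions; pick an extent larger than the scene's depth):

        from OpenGL.GL import glMatrixMode, glLoadIdentity, glOrtho, GL_PROJECTION, GL_MODELVIEW

        def set_ortho_view(view_width, view_height, scene_extent=10000.0):
            """Orthographic projection whose near/far planes enclose the whole scene."""
            glMatrixMode(GL_PROJECTION)
            glLoadIdentity()
            glOrtho(-view_width / 2.0, view_width / 2.0,     # left, right
                    -view_height / 2.0, view_height / 2.0,   # bottom, top
                    -scene_extent, scene_extent)             # near, far: generous depth range
            glMatrixMode(GL_MODELVIEW)
            glLoadIdentity()

    Scrolling and zooming then remain a matter of translating and scaling the left/right/bottom/top extents, while the depth range never clips anything.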

    Read the article

  • Weird Rails URL issue when rendering a new action

    - by Tony
    I am rendering a new action but somehow getting the "index" URL. To be more specific, my create action looks like this:

        class ListingsController < ApplicationController
          def create
            @listing = Listing.new(params[:listing])
            @listing.user = @current_user
            if @listing.save
              redirect_to @listing
            else
              flash[:error] = "There were errors"
              render :action => "new"
            end
          end
        end

    When there are errors, I get the "new" action but my URL is the index URL - http://domain.com/listings. Anyone know why this would happen? My routes file is fairly standard:

        map.connect 'listings/send_message', :controller => 'listings', :action => 'send_message'
        map.resources :listings
        map.root :controller => "listings"
        map.connect ':controller/:action/:id'
        map.connect ':controller/:action/:id.:format'

    Read the article
