Search Results

Search found 5654 results on 227 pages for '3d rendering'.

Page 115/227

  • Anti-aliasing works for debug runtime but not retail runtime

    - by DeadMG
    I'm experimenting with various graphics settings in my Direct3D9 application, and I'm currently facing a curious problem with anti-aliasing. When running under the debug runtime, AA works as expected and I get no errors or warnings. But when running under the retail runtime, the image isn't anti-aliased at all. I don't get any errors; the device is created and runs just fine. As I honestly have little idea where the problem is, I will simply give a relatively high-level overview of the architecture involved rather than specific problematic code. Simply put, I render my 3D content to a texture, which I then render to the back buffer. Any suggestions as to where to look?
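
    One place worth checking, offered as a hedged sketch rather than a confirmed diagnosis: in Direct3D9, a render target created with CreateTexture cannot itself be multisampled, so a render-to-texture path silently drops AA unless the scene is drawn into a separate multisampled render surface and then resolved into the texture. Roughly (width, height and textureSurface are assumed to exist; the format and sample count are illustrative):

        // Verify the runtime actually supports the requested sample count.
        DWORD quality = 0;
        HRESULT hr = d3d9->CheckDeviceMultiSampleType(
            D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, D3DFMT_A8R8G8B8,
            TRUE /* windowed */, D3DMULTISAMPLE_4_SAMPLES, &quality);
        if (SUCCEEDED(hr)) {
            // Draw the scene into a multisampled color surface...
            IDirect3DSurface9* msaaSurface = nullptr;
            device->CreateRenderTarget(width, height, D3DFMT_A8R8G8B8,
                D3DMULTISAMPLE_4_SAMPLES, quality - 1, FALSE, &msaaSurface, nullptr);
            // ...then resolve it into the texture drawn to the back buffer.
            device->StretchRect(msaaSurface, nullptr, textureSurface, nullptr,
                D3DTEXF_NONE);
        }

    It may also be worth comparing the CheckDeviceMultiSampleType result under both runtimes, since the two can behave differently around validation.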

    Read the article

  • Shader inputs in a general purpose engine

    - by dreta
    I'm not familiar with SDKs like Unity or UDK that much, so I can't check this offhand. Do general-purpose engines allow users to create custom uniform variables? The way I see it, and the way I have implemented it in an engine I'm writing to learn 3D, is that there is a set of uniforms provided by the engine, and if you want to write a custom shader you work with the uniforms you need to create the desired effect. Now, first of all I'm not an artist, and second of all, I haven't had a chance to create complex scenes yet. So my question is: is it common practice to define the variables that the engine provides and only allow the user to work with what they're given? Allowing users to add custom programs and use them where they want is not hard, but I have trouble imagining how you'd go about doing the same for uniforms.
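
    For illustration only (hypothetical names, not Unity's or UDK's actual API), one common pattern is to let the engine upload its built-in uniforms every frame and additionally walk a per-material map of user-registered values:

        #include <array>
        #include <string>
        #include <unordered_map>
        #include <variant>

        // A value for a user-defined uniform; extend with matrices etc. as needed.
        using UniformValue = std::variant<float, int, std::array<float, 3>>;

        struct Material {
            std::unordered_map<std::string, UniformValue> customUniforms;

            void set(const std::string& name, UniformValue v) {
                customUniforms[name] = std::move(v);
            }
        };

        // At draw time the engine first uploads its fixed set (matrices, lights,
        // time), then resolves each custom name to a shader location and uploads it.

    So the engine-provided set and user-defined uniforms are not mutually exclusive; the fixed set just gets filled in automatically while the map is left to the shader author.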

    Read the article

  • Portal Ported to a Graphing Calculator

    - by Jason Fitzpatrick
    It’s not exactly a 3D-rendered GPU-burner, but this calculator-based version of Portal still features the same portal-jumping tricks that delighted players in the original game. Built using Axe Parser, an advanced programming language for graphing calculators, Portal: Prelude is part experiment in pushing the limits of Axe Parser and part long-standing tradition of porting popular video games to graphing calculators. You can read more about Axe Parser and the many games and program projects under development using it here. [via Geeks Are Sexy]

    Read the article

  • How does key-based caching work?

    - by Dominic Santos
    I recently read an article on the 37Signals blog and I'm left wondering how it is that they get the cache key. It's all well and good having a cache key that includes the object's timestamp (this means that when you update the object the cache is invalidated); but how do you then use the cache key in a template without causing a DB hit for the very object that you are trying to fetch from the cache? Specifically, how does this affect one-to-many relations where you are rendering a post's comments, for example. Example in Django:

        {% for comment in post.comments.all %}
          {% cache comment.pk comment.modified %}
            <p>{{ comment.body }}</p>
          {% endcache %}
        {% endfor %}

    Is caching in Rails different from just making requests to memcached, for example (I know that they convert your cache key to something different)? Do they also cache the cache key?
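
    As a language-neutral illustration of the key-based expiration idea (a hypothetical C++ sketch, not what Rails actually does internally): the key embeds the object's id and last-modified time, so an update changes the key itself and stale entries are simply never read again.

        #include <cstdint>
        #include <string>
        #include <unordered_map>

        // Hypothetical fragment cache keyed by "type/id-timestamp".
        std::unordered_map<std::string, std::string> fragmentCache;

        std::string cacheKey(const std::string& type, std::uint64_t id,
                             std::uint64_t modifiedAt) {
            // The timestamp is part of the key, so nothing is invalidated
            // explicitly; updated objects just miss and re-render.
            return type + "/" + std::to_string(id) + "-" + std::to_string(modifiedAt);
        }

        std::string renderComment(std::uint64_t id, std::uint64_t modifiedAt,
                                  const std::string& body) {
            const std::string key = cacheKey("comment", id, modifiedAt);
            auto it = fragmentCache.find(key);
            if (it != fragmentCache.end()) return it->second;  // hit
            std::string html = "<p>" + body + "</p>";          // miss: "render"
            fragmentCache.emplace(key, html);
            return html;
        }

    Note the sketch still needs id and modifiedAt in hand to build the key, which is exactly the DB-hit concern raised above; in this pattern the parent query that loads the comments supplies both.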

    Read the article

  • Questions about a Java Engine

    - by CJ Sculti
    So I am going to start developing Java games (3D), but I have a few questions. I don't know if I should use an engine or make my own. I feel like I am "cheating" if I use an engine to make my game. Is it frowned upon in the game-developing world? What are some advantages and disadvantages of using an engine for my game, and is it really that much harder to make my own engine? I know that engines have built-in models and textures with easy drag-and-drop interfaces; would I have any of that if I were to code my own engine? Thanks guys.

    Read the article

  • How to break terrain (in Blender) into chunks for a game engine

    - by Red
    I've created an island in Blender, 2048x2048 Blender units. The engine developer wants me to split the terrain into 128x128 "chunks", so that would be 16x16 chunks from a top-down view. The engine isn't using height maps, in order to allow caves, 3D overhangs, etc. I'm not in charge of the engine, so suggestions about that aren't needed here :/ I just need a good, seamless way to split a terrain mesh into 16x16 pieces without leaving holes. I'm very new to Blender, so baby steps are very much welcomed :) Here's a quick render of what I've got so far. I added a plane to show sea level, but I've since removed it. http://i.imgur.com/qTsoC.png

    Read the article

  • How to render 2D particles as fluid?

    - by luke
    Suppose you have a nice way to move your 2D particles in order to simulate a fluid (like water). Any ideas on how to render it? This is for a 2D game, where the perspective is from the side, like this. The water will be contained in boxes that can be broken in order to let it fall down and interact with other objects. The simplest way that comes to my mind is to use a small image for each particle. I am interested in hearing more ways of rendering water.
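
    One technique often used for this (a sketch of the idea, not the only answer) is screen-space metaballs: draw each particle as a soft radial blob, accumulate the blobs into a field, and threshold the field so nearby particles merge into one smooth surface. A CPU-side illustration, with grid size, radius and threshold purely illustrative:

        #include <vector>

        struct Particle { float x, y; };

        // Accumulate a falloff field from every particle, then threshold it;
        // cells above the threshold are "inside" the fluid surface.
        std::vector<bool> fluidMask(const std::vector<Particle>& ps,
                                    int w, int h, float radius, float threshold) {
            std::vector<float> field(w * h, 0.0f);
            for (const Particle& p : ps)
                for (int y = 0; y < h; ++y)
                    for (int x = 0; x < w; ++x) {
                        float dx = x - p.x, dy = y - p.y;
                        // Cheap metaball kernel: falls off with squared distance.
                        field[y * w + x] += (radius * radius) / (dx * dx + dy * dy + 1.0f);
                    }
            std::vector<bool> mask(w * h);
            for (int i = 0; i < w * h; ++i)
                mask[i] = field[i] > threshold;
            return mask;
        }

    In a real game the same idea usually runs on the GPU: additively blend the blob sprites into an offscreen target, then apply an alpha threshold when compositing, which gives the classic 2D "liquid" look.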

    Read the article

  • Alpha blending without depth writing

    - by teodron
    A recurring problem I run into is this one: given two different billboard sets with alpha textures intended to create particle effects (such as point lights and smoke puffs), rendering them correctly is tedious. The issue in this scenario is that with depth writing disabled there's no way to make certain billboards obey depth information, so they appear in front of others that are clearly closer to the camera. I've described the problem on the Ogre forums several times without any suggestions being given (the application I'm writing uses their engine). What can be done, then? Sort all individual billboards from different billboard sets, to avoid writing depth and still get nice alpha-blended results? If yes, please do point out some resources to start with, within the aforementioned Ogre engine. Any other suggestions are welcome!
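
    The standard recipe (sketched generically here, not in Ogre's API) is to keep depth testing on, disable depth writes for translucent geometry only, and sort every translucent billboard from every set together, back to front, each frame:

        #include <algorithm>
        #include <vector>

        struct Vec3 { float x, y, z; };

        struct Billboard {
            Vec3 position;
            int  materialId;  // which billboard set / texture it belongs to
        };

        float distSq(const Vec3& a, const Vec3& b) {
            float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
            return dx * dx + dy * dy + dz * dz;
        }

        // Farthest-first order, so nearer alpha-blended quads composite on top.
        void sortForAlphaBlend(std::vector<Billboard>& all, const Vec3& camera) {
            std::sort(all.begin(), all.end(),
                      [&](const Billboard& a, const Billboard& b) {
                          return distSq(a.position, camera) > distSq(b.position, camera);
                      });
        }

    The price is extra material switching when billboards from different sets interleave in depth, which is why engines often batch per material and accept occasional sorting errors within a set.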

    Read the article

  • Tour the Cosmos with 100,000 Stars

    - by Jason Fitzpatrick
    The newest Google Chrome Experiment, 100,000 Stars, combines web technologies to serve up a 3D star map you can manually zoom about or sit back and enjoy as a star tour. From the automated tour that explores the Milky Way at an ever-increasing scale to manually moving about the cloud of stars with the zoom and pan features, the interactive map makes it easy to explore the 100,000 closest stars to our Sun in style. Hit up the link to take it for a spin. 100,000 Stars [Google Chrome Experiments]

    Read the article

  • Why would GLCapabilities.setHardwareAccelerated(true/false) have no effect on performance?

    - by Luke
    I've got a JOGL application in which I am rendering 1 million textures (all the same texture) and 1 million lines between those textures. Basically it's a ball-and-stick graph. I am storing the vertices in a vertex array on the card and referencing them via index arrays, which are also stored on the card. Each pass through the draw loop I am basically doing this:

        gl.glBindBuffer(GL.GL_ARRAY_BUFFER, <buffer id>);
        gl.glBindBuffer(GL.GL_ELEMENT_ARRAY_BUFFER, <buffer id>);
        gl.glDrawElements(GL.GL_POINTS, <size>, GL.GL_UNSIGNED_INT, 0);

        gl.glBindBuffer(GL.GL_ARRAY_BUFFER, <buffer id>);
        gl.glBindBuffer(GL.GL_ELEMENT_ARRAY_BUFFER, <buffer id>);
        gl.glDrawElements(GL.GL_LINES, <size>, GL.GL_UNSIGNED_INT, 0);

    I noticed that the JOGL library is pegging one of my CPU cores. Every frame, the run method internal to the library takes quite long. I'm not sure why this is happening, since I have called setHardwareAccelerated(true) on the GLCapabilities used to create my canvas. What's more interesting is that I changed it to setHardwareAccelerated(false) and there was no impact on performance at all. Is it possible that my code is not using hardware rendering even when it is set to true? Is there any way to check?

    EDIT: As suggested, I have tested breaking my calls up into smaller chunks. I have tried using glDrawRangeElements and respecting the limits that it requests. All of this simply resulted in the same pegged CPU usage and worse framerates. I have also narrowed the problem down to a simpler example where I just render 4 million textures (no lines). The draw loop is then just doing this:

        gl.glEnableClientState(GL.GL_VERTEX_ARRAY);
        gl.glEnableClientState(GL.GL_INDEX_ARRAY);
        gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);
        gl.glMatrixMode(GL.GL_MODELVIEW);
        gl.glLoadIdentity();
        <... camera and transform related code ...>
        gl.glEnableVertexAttribArray(0);
        gl.glEnable(GL.GL_TEXTURE_2D);
        gl.glAlphaFunc(GL.GL_GREATER, ALPHA_TEST_LIMIT);
        gl.glEnable(GL.GL_ALPHA_TEST);
        <... bind texture ...>
        gl.glBindBuffer(GL.GL_ARRAY_BUFFER, <buffer id>);
        gl.glBindBuffer(GL.GL_ELEMENT_ARRAY_BUFFER, <buffer id>);
        gl.glDrawElements(GL.GL_POINTS, <size>, GL.GL_UNSIGNED_INT, 0);
        gl.glDisable(GL.GL_TEXTURE_2D);
        gl.glDisable(GL.GL_ALPHA_TEST);
        gl.glDisableVertexAttribArray(0);
        gl.glFlush();

    Here the first buffer contains 12 million floats (the x, y, z coords of the 4 million textures) and the second (element) buffer contains 4 million integers; in this simple example, simply the integers 0 through 3999999. I really want to know what is being done in software that is pegging my CPU, and how I can make it stop (if I can).

    My buffers are generated by the following code:

        gl.glBindBuffer(GL.GL_ARRAY_BUFFER, <buffer id>);
        gl.glBufferData(GL.GL_ARRAY_BUFFER, <size> * BufferUtil.SIZEOF_FLOAT, <buffer>, GL.GL_STATIC_DRAW);
        gl.glVertexAttribPointer(0, 3, GL.GL_FLOAT, false, 0, 0);

    and:

        gl.glBindBuffer(GL.GL_ELEMENT_ARRAY_BUFFER, <buffer id>);
        gl.glBufferData(GL.GL_ELEMENT_ARRAY_BUFFER, <size> * BufferUtil.SIZEOF_INT, <buffer>, GL.GL_STATIC_DRAW);

    ADDITIONAL INFO: Here is my initialization code:

        gl.setSwapInterval(1); // also tried 0
        gl.glShadeModel(GL.GL_SMOOTH);
        gl.glClearDepth(1.0f);
        gl.glEnable(GL.GL_DEPTH_TEST);
        gl.glDepthFunc(GL.GL_LESS);
        gl.glHint(GL.GL_PERSPECTIVE_CORRECTION_HINT, GL.GL_FASTEST);
        gl.glPointParameterfv(GL.GL_POINT_DISTANCE_ATTENUATION, POINT_DISTANCE_ATTENUATION, 0);
        gl.glPointParameterfv(GL.GL_POINT_SIZE_MIN, MIN_POINT_SIZE, 0);
        gl.glPointParameterfv(GL.GL_POINT_SIZE_MAX, MAX_POINT_SIZE, 0);
        gl.glPointSize(POINT_SIZE);
        gl.glTexEnvf(GL.GL_POINT_SPRITE, GL.GL_COORD_REPLACE, GL.GL_TRUE);
        gl.glEnable(GL.GL_POINT_SPRITE);
        gl.glClearColor(clearColor.getX(), clearColor.getY(), clearColor.getZ(), 0.0f);

    Also, I'm not sure if this helps or not, but when I drag the entire graph off the screen, the FPS shoots back up and the CPU usage falls to 0%. This seems obvious and intuitive to me, but I thought it might give a hint to someone else.

    Read the article

  • Physics engine that can handle multiple attractors?

    - by brice
    I'm putting together a game that will be played mostly with three-dimensional gravity. By that I mean multiple planets/stars/moons behaving realistically, with path plotting and path prediction in the gravity field. I have looked at a variety of physics engines, such as Bullet, Tokamak or Newton, but none of them seem to be suitable, as I'd essentially have to re-write the gravity engine within their framework. Do you know of a physics engine that is capable of dealing with multiple bodies all attracted to one another? I don't need scene-graph management or rendering, just core physics (collision detection would be a bonus, as would rigid-body dynamics). My background is in physics, so I would be able to write an engine that uses Verlet integration or RK4 (or even Euler integration, if I had to), but I'd much rather adapt an off-the-shelf solution. [edit]: There are some great resources for physics simulation of n-body problems online, and on Stack Overflow.
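
    Since the post mentions Verlet and RK4, here is a minimal velocity-Verlet sketch of mutually attracting bodies (units, constants and the softening term are illustrative); any off-the-shelf engine would need a loop like this at its core:

        #include <cmath>
        #include <vector>

        struct Vec3 { double x = 0, y = 0, z = 0; };
        Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
        Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
        Vec3 operator*(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }

        struct Body { Vec3 pos, vel, acc; double mass; };

        const double G = 6.674e-11;  // gravitational constant (SI)

        // Sum pairwise Newtonian attraction over all bodies.
        void computeAccelerations(std::vector<Body>& bodies) {
            for (Body& b : bodies) b.acc = {};
            for (size_t i = 0; i < bodies.size(); ++i)
                for (size_t j = i + 1; j < bodies.size(); ++j) {
                    Vec3 d = bodies[j].pos - bodies[i].pos;
                    double r2 = d.x * d.x + d.y * d.y + d.z * d.z + 1e-9;  // softened
                    Vec3 dir = d * (1.0 / std::sqrt(r2));
                    bodies[i].acc = bodies[i].acc + dir * (G * bodies[j].mass / r2);
                    bodies[j].acc = bodies[j].acc - dir * (G * bodies[i].mass / r2);
                }
        }

        // One velocity-Verlet step; symplectic, so orbits stay stable long-term.
        void step(std::vector<Body>& bodies, double dt) {
            for (Body& b : bodies)
                b.pos = b.pos + b.vel * dt + b.acc * (0.5 * dt * dt);
            std::vector<Vec3> oldAcc;
            for (const Body& b : bodies) oldAcc.push_back(b.acc);
            computeAccelerations(bodies);
            for (size_t i = 0; i < bodies.size(); ++i)
                bodies[i].vel = bodies[i].vel + (oldAcc[i] + bodies[i].acc) * (0.5 * dt);
        }

    Path prediction then falls out for free: copy the body list, step the copy forward, and record the positions along the way.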

    Read the article

  • How do I render my own DirectX Stuff to a full screen WPF's DirectX surface?

    - by marc40000
    Basically Danny Varod seems to know, as he posted it as an answer to this question: Display a Message Box over a Full Screen DirectX application. I think this might theoretically work, but I have no idea how to actually do it. Since I'm not allowed to post a comment under his answer, nor am I allowed to ask on meta how to contact another user, I'm asking this as a normal question here: how do I render my own DirectX stuff to a full-screen WPF's DirectX surface? For starters, I have no idea how to get the DirectX surface from a WPF window. And if I had it, what would I have to take care of so that the WPF rendering doesn't screw up my own rendering, or vice versa?

    Read the article

  • Nvidia G96 [GeForce 9400 GT] and application graphic issues

    - by Fabio
    I've got a quite old NVIDIA graphics card on which I installed the restricted drivers from the Settings panel (as also shown in this thread).

        $ lspci
        02:00.0 VGA compatible controller: NVIDIA Corporation G96 [GeForce 9400 GT] (rev a1)

    I tried a lot of them: version 173-update, current, beta, but the only one that can run unity-2d is current-update. This is Ubuntu 12.04.1 LTS 64-bit. However... Unity crashes, sometimes window borders disappear, the Java Virtual Machine doesn't work, font rendering is slow, and so on. How can I solve this? Any suggestions? Thanks!

    Read the article

  • Homemaking a 2d soft body physics engine

    - by Griffin
    Hey, so I've decided to code my own 2D soft-body physics engine in C++, since apparently none exist, and I'm starting with only a general idea/understanding of how the physics work and could be simulated: by giving points, and connections between points, properties such as elasticity, density, mass, shape retention, friction, stickiness, etc. What I want is a starting point: resources and helpful examples/sites that could give me the specifics needed to actually make this, such as equations and the required physics knowledge. It would be great if anyone out there would also share their attempts or ideas. Finally, I was wondering if it would be possible to: use the source code of an existing 3D engine such as Bullet and transform it to be 2D-based? Use the source code of a 2D rigid-body physics engine such as Box2D as a starting point?
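
    As a concrete starting point, the point-and-connection model described above is usually implemented as position-based (Verlet) particles with distance constraints; the classic write-up is Thomas Jakobsen's "Advanced Character Physics". A rough sketch:

        #include <cmath>
        #include <vector>

        // Verlet particle: velocity is implicit in (pos - prevPos).
        struct Point2D { float x, y, px, py; };

        struct Spring {
            int a, b;         // indices into the point list
            float restLen;    // distance the constraint tries to keep
            float stiffness;  // 0..1, correction strength per iteration
        };

        void simulate(std::vector<Point2D>& pts, const std::vector<Spring>& springs,
                      float dt, float gravity, int iterations) {
            // 1. Integrate: each point keeps drifting the way it moved last step.
            for (Point2D& p : pts) {
                float vx = p.x - p.px, vy = p.y - p.py;
                p.px = p.x; p.py = p.y;
                p.x += vx;
                p.y += vy + gravity * dt * dt;
            }
            // 2. Relax constraints; more iterations means a stiffer body.
            for (int k = 0; k < iterations; ++k)
                for (const Spring& s : springs) {
                    Point2D& a = pts[s.a];
                    Point2D& b = pts[s.b];
                    float dx = b.x - a.x, dy = b.y - a.y;
                    float d = std::sqrt(dx * dx + dy * dy) + 1e-6f;
                    float diff = (d - s.restLen) / d * 0.5f * s.stiffness;
                    a.x += dx * diff; a.y += dy * diff;
                    b.x -= dx * diff; b.y -= dy * diff;
                }
        }

    Elasticity, shape retention and the rest then map onto the spring layout and stiffness values; pressure-based soft-body models add an internal area constraint on top of this same loop.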

    Read the article

  • Magento CSS Merge breaks layout in IE browsers

    - by Subi
    I am developing a Magento website that uses the CSS merge option. Currently, in IE, the CSS is not rendering properly. When I remove some part of the CSS file it works: sometimes it works when I remove 50 lines from the top, sometimes when I remove 100 lines from the bottom. So it's nothing related to the specific CSS I wrote. The merged file contains about 6000 lines and has a 380 KB file size. Can anybody help me with this? Thanks

    Read the article

  • Using model tools as map editor

    - by cooky451
    I want to make a game that would require a 3D map editor. Of course, I would like to avoid creating such an editor myself. My idea is to use modeling tools (3DS Max, Maya, Blender) to create the map, and to give game-specific objects designated names. This way I'd just need to write a COLLADA-to-native-map-format converter. But I'm not sure this is possible the way I imagine it, so I'd like to hear your thoughts on the matter. Are modeling tools suitable for creating big open-world maps? Can this "naming convention" idea for game-specific objects work? Are the modeling tools able to export a scene in chunks, or in some other way that lets occlusion culling and collision detection be done properly? If not: is there a way to build a suitable data structure from the exported data?
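
    The naming-convention part, at least, is easy to sketch: after loading the exported scene with whatever COLLADA library you choose, walk the nodes and dispatch on a name prefix. Everything below (node type, prefixes) is hypothetical:

        #include <string>
        #include <vector>

        // Hypothetical node from a parsed COLLADA scene.
        struct SceneNode {
            std::string name;  // e.g. "spawn_player1", "trigger_door3", "mesh_rock"
            float x, y, z;
        };

        bool hasPrefix(const std::string& s, const std::string& p) {
            return s.rfind(p, 0) == 0;
        }

        // Convert generic scene nodes into game-specific map entries.
        void convert(const std::vector<SceneNode>& nodes) {
            for (const SceneNode& n : nodes) {
                if (hasPrefix(n.name, "spawn_")) {
                    // emit a spawn point at (n.x, n.y, n.z)
                } else if (hasPrefix(n.name, "trigger_")) {
                    // emit a trigger volume
                } else {
                    // everything else is plain level geometry
                }
            }
        }

    Chunking for occlusion culling can then happen inside the converter, by binning geometry into a spatial grid, independently of how the modeling tool exported the scene.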

    Read the article

  • How to autohide Unity 2D?

    - by ph1b
    In regular Unity 3D, I used CompizConfig Settings Manager, where I could configure the Unity plugin to autohide the left launcher and not show it when moving my mouse to the left, so it was perfect for using Docky. But with Unity 2D the Unity settings don't have any effect. How can I completely hide the launcher and have it shown only when pressing the Windows key? Even with the following:

        dconf write /com/canonical/unity-2d/launcher/hide-mode 1
        dconf write /com/canonical/unity-2d/launcher/use-strut false

    it still opens the bar when I move my mouse to the left edge.

    Read the article

  • My Silverlight 4 talk at ConFoo Montreal

    Wednesday I did a Silverlight 4 talk at ConFoo (www.confoo.ca), a very good conference in Montreal! As my audience was mostly PHP devs, I started by quickly introducing Silverlight, then showed several demos:

    Easy Painter: http://nokola.com/easypainter/
    Physics Games: http://www.spritehand.com/
    3D: http://www.ingebrigtsen.info/silverlight/Balder/20100208/TestPage.html
    Bouncing Plane: http://weblogs.asp.net/lduveau/archive/2009/04/18/silverlight-3-the-amazing-bouncing-plane-demo.aspx

    Read the article

  • Dynamic Tab Implementation in ADF

    - by Vijay Mohan
    Well, this can be a common use case across apps: opening tabs dynamically at runtime based on the request. In order to achieve this you can have a parent container, let's say a panelTab component. Inside the panelTab you can have a showDetailItem inside an af:forEach or an af:iterator bound to a static list on a bean, which will hold as many showDetailItems as you wish to show. Something like this:

        private static List<ShowDetailItem> items =
            { new ShowDetailItem("1"), new ShowDetailItem("2") ... };

    Now in the backing bean you can have a method that takes care of rendering and disclosing a specific tab based on its index:

        public void openMyTab() {
            List<MyItems> list = refToParentContainer.getChildren();
            int indexOfTabToBeOpened = ...; // write a method that computes the index of the next tab
            list.get(indexOfTabToBeOpened).setRendered(true);
            list.get(indexOfTabToBeOpened).setDisclosed(true);
            // similarly you can set other properties too
        }

    Alternatively, instead of having af:forEach/af:iterator iterate through the showDetailItems, you can put static showDetailItems in the page with the rendered property set to false, and then follow the same approach to render/disclose them at runtime.

    Read the article

  • Creating natural environments that can run on lower end computers in Unity3D/C#

    - by Timothy Williams
    So, I'm starting work on a project soon that will require me to create realistic environments that can preferably run on PCs besides high-end ones. The goal is to get as real an environment as possible while still being easy(ish) to run. The only problem is I've NEVER done anything with 3D environments: making trees sway, grass move, lighting, etc. Can anyone give me any help? Perhaps describe how it's done? Link me to articles? I'm just looking to be pointed in the right direction, not for you to write the code for me. Any help at all would be greatly appreciated. I'm using Unity3D and C# as my language. Thanks, Tim.

    Read the article

  • Trouble with UV Mapping Blender => Unity 3

    - by Lea Hayes
    For some reason I am getting nasty grey edges around rendered 3D models that are not present in Blender. I seem to be able to solve the problem by reducing the size of the UV coordinates within the part of the texture that is to be mapped. But this means that: I am wasting valuable texture space, and I lose accuracy in drawn UV maps. Could I be doing something wrong, perhaps a setting in Unity that needs changing? I have watched countless tutorials that demonstrate Blender's default generated UV coordinates with "Texture Paint", perfectly aligned in Unity. Here is an illustration of the problem. Left: approximately 15 pixels of margin on each side of the UV coordinates. Right: approximately 3 pixels of margin on each side of the UV coordinates. Note: the texture image resolution is 1024x1024.

    Read the article

  • Poor mobile performance when running from Eclipse

    - by Yajirobe_LOL
    So after weeks of thinking my rendering code was bad, I accidentally discovered the following, running my game on a Nexus S:

        From Eclipse (Debug as - Android application): 12fps
        From the device while still attached to USB (still getting log info in Eclipse): 24fps
        From the device while not attached via USB: 56fps

    I was wondering if anyone else has issues like this? I mean, the problem really isn't a problem, since the final release build will likely have good performance, but for the time being I don't want to have to keep (un)plugging my device in and out when testing code all day long. Is there some remedy for this, or does anyone have any input/advice? Thanks.

    Read the article

  • Setting effects variables in XNA

    - by Badescu Alexandru
    Hello! I am currently reading a book named "3D Graphics with XNA Game Studio 4.0" by Sean James and have some questions to ask. If I create an effect parameter named, let's say, SpecularPower, and have in my effect a variable named SpecularPower, will doing something like

        effect.Parameters["SpecularPower"].SetValue(3)

    change the SpecularPower variable in my effect? And a second question, not regarding the book: if I have a spaceship and I've created a "boost" functionality that speeds up my spaceship, what effects should I implement to create the impression of high speed? I was thinking of making everything except my spaceship blurry, but I think there would be something missing. Any ideas? Regards, Alex Badescu

    Read the article

  • Renderbuffer to GLSL shader?

    - by Dan
    I have a piece of software that performs volume rendering through a raycasting approach. The raycasting shader writes the raycasted volume depth into a framebuffer object, through gl_FragDepth, which I bind before calling the shader. The problem I have is that I would like to use this depth in another shader that I call later on. I figured out that the only way to do that is to bind the framebuffer once the raycasting has finished, read the depth map back with something like

        glReadPixels(0, 0, m_winSize.x, m_winSize.y, GL_DEPTH_COMPONENT, GL_FLOAT, pixels);

    write it to a 2D texture as usual with

        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, m_winSize.x, m_winSize.y, 0, GL_DEPTH_COMPONENT, GL_FLOAT, pixels);

    and then pass this 2D texture, which contains a simple depth map, to the other shader. However, I am not entirely sure that this is the proper way to do it. Is there any way to pass the framebuffer that I fill up in my raycasting shader to the other shader?
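
    One commonly used alternative (a sketch, not taken from the original setup) avoids the GPU-to-CPU round trip entirely: attach a depth texture to the raycasting FBO so gl_FragDepth writes straight into it, then bind that same texture as a sampler in the later pass. Here raycastFbo, secondShader, width and height are assumed to exist already:

        // Create a depth texture and attach it to the raycasting FBO.
        GLuint depthTex;
        glGenTextures(1, &depthTex);
        glBindTexture(GL_TEXTURE_2D, depthTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
                     GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glBindFramebuffer(GL_FRAMEBUFFER, raycastFbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                               GL_TEXTURE_2D, depthTex, 0);

        // ... raycasting pass renders here; depth lands in depthTex ...

        // Second pass: sample the depth texture directly, no glReadPixels.
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, depthTex);
        glUniform1i(glGetUniformLocation(secondShader, "depthMap"), 0);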

    Read the article

  • Pix for visual studio express 2012 (Desktop)

    - by JohnB
    (Originally asked on Stack Overflow.) Using Visual C++ Express 2010 for Direct3D, you have to download the DirectX SDK, and there is a tool called PIX for debugging shaders, looking at 3D resources, etc. With Visual Studio 2012 Express, the DirectX SDK is included in the Windows SDK that comes with it, but this does not seem to include the winpix.exe tool. Is this very useful tool still available? I guess I can still use the one from the previous SDK, but it seems wrong to install the entire SDK just for that tool. Is there a version for VS 2012 Express that I'm missing?

    Read the article
