Search Results

Search found 2515 results on 101 pages for 'opengl es2'.


  • Hybrid graphics functionality won't work with my Asus UL30V anymore

    - by futuress
    The problem is that I am no longer able to boot in compatibility mode to turn on just my Nvidia graphics and install the driver, because no login screen appears when Ubuntu loads. In Ubuntu 11.10 I was able to activate the 'Nvidia graphics only' option this way:

    1) Change the BIOS to 'compatibility mode', which turns off the Intel card.
    2) Install the Nvidia proprietary driver using Ubuntu's driver finder (Additional Drivers) and then reboot.

    I was not interested in using only the Intel graphics, even for the sake of battery life. Now I have both cards running and they drain my battery dramatically. And the main problem of this configuration is that no OpenGL is available, so I can't play any games any more. For now I have a partial solution: I uninstalled the Nvidia drivers and installed Bumblebee, and the Intel card is now recognized. I would prefer to run just the Nvidia card as in Ubuntu 11.10, but for now this is better than nothing. Does anybody else have the same problem?

    Read the article

  • Architecture for a central renderer rather than self-rendering

    - by The Communist Duck
    For the architectural side of rendering, there are two main approaches: having each object render itself, or having a single renderer which renders everything. I'm currently aiming for the second, for the following reasons:

    - The list can be sorted so each shader is bound only once. Otherwise each object would have to bind its shader every time, because it can't know which one is currently active. The objects could also be sorted and grouped.
    - Easier to swap APIs. With a few macro lines, it can be easy to swap between a DirectX renderer and an OpenGL renderer (not a reason for my project, but still a good point).
    - Easier to manage rendering code.

    Of course, if anyone has strong recommendations for the first method, I will listen to them. But I was wondering how to make this work.

    First idea: The renderer has a list of pointers to the renderable components of each entity, which register themselves on RenderComponent creation. However, I'm worried that this may end up as a lot of extra pointer weight. But I could sort the list of pointers every so often.

    Second idea: The entire list of entities is passed to the renderer each render call. The renderer then sorts the list (each call, or maybe once?) and takes what it wants. That's a lot of passing and/or sorting, however.

    Other ideas: ??? PROFIT. Anyone got ideas? Thank you.
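    A sketch of the first idea, with hypothetical names (Renderer, RenderComponent, and shaderId are illustrative, not from any particular engine): components register a pointer on construction, and the renderer re-sorts lazily so each shader binds only once per pass:

        #include <algorithm>
        #include <vector>

        class RenderComponent;

        // Holds registered pointers; sorts by shader id so state changes
        // happen once per shader rather than once per object.
        class Renderer {
        public:
            void add(RenderComponent* rc);
            void remove(RenderComponent* rc);
            void renderAll();
        private:
            std::vector<RenderComponent*> components_;
            bool dirty_ = false;
        };

        class RenderComponent {
        public:
            RenderComponent(Renderer& r, unsigned shader)
                : shaderId(shader), renderer_(r) { renderer_.add(this); }
            ~RenderComponent() { renderer_.remove(this); }

            unsigned shaderId;
            void draw() const { /* issue the actual draw call here */ }
        private:
            Renderer& renderer_;
        };

        void Renderer::add(RenderComponent* rc) { components_.push_back(rc); dirty_ = true; }

        void Renderer::remove(RenderComponent* rc) {
            auto it = std::find(components_.begin(), components_.end(), rc);
            if (it != components_.end()) components_.erase(it);
        }

        void Renderer::renderAll() {
            if (dirty_) {  // the "sort every so often" idea: only when the set changed
                std::sort(components_.begin(), components_.end(),
                          [](const RenderComponent* a, const RenderComponent* b) {
                              return a->shaderId < b->shaderId;
                          });
                dirty_ = false;
            }
            unsigned bound = ~0u;
            for (const RenderComponent* rc : components_) {
                if (rc->shaderId != bound) { bound = rc->shaderId; /* bindShader(bound) */ }
                rc->draw();
            }
        }

    The dirty_ flag is what keeps the pointer list cheap: it only re-sorts when a component is added or removed, not every frame.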

    Read the article

  • How do I create my own programming language and a compiler for it

    - by Dave
    I have a thorough programming background and have come across languages including BASIC, FORTRAN, COBOL, LISP, LOGO, Java, C++, C, MATLAB, Mathematica, Python, Ruby, Perl, JavaScript, Assembly and so on. But I can't understand how people create programming languages and devise compilers for them. I also can't understand how people create OSes like Windows, Mac, UNIX, DOS and so on. Another thing that is mysterious to me is how people create libraries like OpenGL, OpenCL, OpenCV, Cocoa, MFC and so on. The last thing I am unable to figure out is how scientists devise an assembly language and an assembler for a microprocessor. I would really like to learn all of this, and I am 15 years old. I always wanted to be a computer scientist, someone like Babbage, Turing, Shannon, or Dennis Ritchie. I have already read Aho's Compiler Design and Tanenbaum's OS concepts book, but they only discuss concepts and code at a high level. They don't go into the details and nuances of how to actually devise a compiler or operating system. I want a concrete understanding so that I can create one myself, not just an understanding of what a thread, semaphore, process, or parser is. I asked my brother about all this. He is an SB student in EECS at MIT and hasn't got a clue how to actually create any of this in the real world. All he knows is an understanding of compiler design and OS concepts like the ones mentioned above (i.e., threads, synchronisation, concurrency, memory management, lexical analysis, intermediate code generation and so on).
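    To make the gap between "concepts" and "real code" a little smaller, here is a toy sketch of the first phase every compiler in Aho's book starts with, lexical analysis. This is nothing like a production compiler, just the shape of the idea; parsing and code generation build on the same kind of loop:

        #include <cctype>
        #include <iostream>
        #include <string>
        #include <vector>

        // Toy lexer: turns "12 + 34*5" into a stream of tokens.
        struct Token { char kind; long value; };  // kind: 'n' = number, else an operator char

        std::vector<Token> lex(const std::string& src) {
            std::vector<Token> out;
            for (size_t i = 0; i < src.size(); ) {
                if (std::isspace((unsigned char)src[i])) { ++i; continue; }
                if (std::isdigit((unsigned char)src[i])) {
                    long v = 0;
                    while (i < src.size() && std::isdigit((unsigned char)src[i]))
                        v = v * 10 + (src[i++] - '0');   // accumulate the number
                    out.push_back({'n', v});
                } else {
                    out.push_back({src[i], 0});          // single-char operator token
                    ++i;
                }
            }
            return out;
        }

        int main() {
            for (const Token& t : lex("12 + 34*5"))
                std::cout << t.kind << ' ' << t.value << '\n';
        }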

    Read the article

  • Relationship between the model and the renderer

    - by acrilige
    I tried to build a simple graphics engine and ran into these problems: I have a list of models that I need to draw, and an object (renderer) that implements an IRenderer interface with the method DrawObject(Object* obj). The implementation of the renderer depends on which graphics library is used (OpenGL/DirectX).

    1st question: The model should know nothing about the renderer implementation, but in that case, where can I hold (cache) information that depends on the renderer implementation? For example, if the model has this definition:

        class Model
        {
        public:
            Model();
            Vertex* GetVertices() const;
        private:
            Vertex* m_vertices;
        };

    what is the best way to cache, for example, the vertex buffer of this model for DX11? Hold it in the renderer object?

    2nd question: What is the best way for the model to tell the renderer HOW it must be rendered (for example with a texture, with bump mapping, or maybe just in one color)? I thought it could be done with flags, like this:

        model->SetRenderOptions(RENDER_TEXTURE | RENDER_BUMPMAPPING | RENDER_LIGHTING);

    and checking each flag in the Renderer::DrawModel method. But it looks like that will become unwieldy as the number of options grows...
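    One common shape for an answer, sketched here with hypothetical names: keep the API-specific buffer on the renderer's side, keyed by model, and replace the flag mask with a material description so new options become fields rather than new bits:

        #include <unordered_map>
        #include <vector>

        struct Vertex { float x, y, z; };

        class Model {
        public:
            const std::vector<Vertex>& GetVertices() const { return m_vertices; }
        private:
            std::vector<Vertex> m_vertices;
        };

        // Instead of RENDER_* flag bits, the model carries a description the
        // renderer interprets; adding an option never touches existing code paths.
        struct Material {
            int  diffuseTexture = -1;   // -1 means "flat color"
            int  normalMap      = -1;   // -1 means "no bump mapping"
            bool lit            = true;
        };

        // API-specific data lives with the renderer, keyed by the model, so the
        // Model class never learns what a DX11/GL vertex buffer is.
        class Dx11Renderer {
        public:
            void DrawModel(const Model& m, const Material& mat) {
                VertexBufferHandle& vb = m_cache[&m];
                if (!vb.valid) {
                    // create the API buffer from m.GetVertices() on first use
                    vb.valid = true;
                }
                // bind textures/shaders according to mat, then draw
            }
        private:
            struct VertexBufferHandle { bool valid = false; /* ID3D11Buffer*, etc. */ };
            std::unordered_map<const Model*, VertexBufferHandle> m_cache;
        };

    The cache also needs an eviction path when a model is destroyed; a destruction callback or generation counter is the usual fix for that.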

    Read the article

  • Is it a must to focus on one specific IT subject to be successful?

    - by Ahmet Yildirim
    Lately I've been deeply disturbed by the thought that I'm still not devoted to one specific IT subject after so many years of doing it as a hobby. I've been into so many different IT-related hobbies since I was 12; I have spent 8 years on them, I'm now 20 and just finished freshman year in Computer Engineering. Just to summarize the variety:

    - 3D game dev and modelling (Acknex, Irrlicht, OpenGL, GLES, 3ds Max)
    - Mobile app dev (Symbian, Maemo, Android)
    - Electronics (Arduino)
    - Web dev (PHP, MySQL, JavaScript, jQuery, RaphaelJS, Canvas, Flash, etc.)
    - Computer vision (OpenCV)

    I need to start making money, but I'm having trouble picking the right IT business to do so. Is it a problem (in the business world) to have an interest in so many different IT subjects? I have a lot of fun doing all of this from time to time. Beyond the money question, I've also noticed that having so many different interests lowers my productivity. But I'm still having difficulty picking one; I feel close to all of those subjects, from time to time.

    Read the article

  • Where to Start?

    - by freemann098
    My name is Chase. I've been programming for over 3 years now and I've made very little progress towards game development, and I blame myself for it. I have experience in many languages such as C++, C#, and Java, and a little bit of knowledge of JavaScript/HTML and Python. My question is where to start on actually understanding game development. Whenever I watch game development tutorials, everything mostly makes sense until we hit OpenGL or advanced topics that make no sense at all, for example something like the glOrtho matrix. Videos either don't explain things like this or don't explain them well. Do I not know enough basics? I find myself always copying code from a video while understanding very little of it. It's like I'm memorizing things I don't understand, which makes it hard to program at all. If I want to get to the point where I can write my own game engine, or just a game by myself in C++ using little more than documentation, how would I start mastering things to that level? Should I learn C first, or get really good at the basics in general with C++? I know there is a similar question on this site, but it's not the same, because the person asking it already has a solid level of programming knowledge. I'm stuck in a loop of learning the same things, but if I go further I don't understand. I'm stuck in the same spot and need to make progress.

    Read the article

  • Extracting Frustum Planes (Hartmann & Gribb method)

    - by DAVco
    I have been grappling with the Hartmann/Gribb method of extracting the frustum planes for some time now, with little success. There doesn't appear to be a "definitive" topic or tutorial which combines all the necessary information, so perhaps this can be it.

    First of all, I am attempting to do this in C# (for PlayStation Mobile), using OpenGL-style column-major matrices in a right-handed coordinate system, but obviously the math will work in any language. My projection matrix has a near plane at 1.0, a far plane at 1000, an FOV of 45.0 and an aspect of 1.7647. I want my planes in world space, so I build my frustum from the view-projection matrix (that's projectionMatrix * viewMatrix). The view matrix is the inverse of the camera's world transform.

    The problem is: regardless of what I tweak, I can't seem to get a correct frustum, and I think I may be missing something obvious. Focusing on the near and far planes for the moment (since they have the most obvious normals when correct): when my camera is positioned looking down the negative z-axis, I get two planes facing in the same direction, rather than opposite directions. If I strafe my camera left and right (while still looking along the z-axis), the x value of the normal vector changes. Obviously, something is fundamentally wrong here; I just can't figure out what. Maybe someone here can?
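    For reference, a minimal sketch of the extraction itself, assuming a column-major float[16] clip matrix m = projection * view, so element(row, col) = m[col*4 + row]. With these signs the plane normals point inward; two planes coming out parallel usually means the matrix was multiplied in the other order or stored transposed:

        #include <cmath>

        struct Plane { float a, b, c, d; };  // ax + by + cz + d = 0

        // Gribb/Hartmann: plane = row4 + sign * row_r of the clip matrix.
        static Plane planeFromRow(const float* m, int r, float sign)
        {
            Plane p;
            p.a = m[3]  + sign * m[r];        // row 4 and row r, column 1
            p.b = m[7]  + sign * m[r + 4];    // column 2
            p.c = m[11] + sign * m[r + 8];    // column 3
            p.d = m[15] + sign * m[r + 12];   // column 4
            float len = std::sqrt(p.a * p.a + p.b * p.b + p.c * p.c);
            p.a /= len; p.b /= len; p.c /= len; p.d /= len;  // normalize for real distances
            return p;
        }

        // m is the column-major combined matrix; planes come out in world space.
        void extractFrustum(const float* m, Plane out[6])
        {
            out[0] = planeFromRow(m, 0, +1.0f);  // left
            out[1] = planeFromRow(m, 0, -1.0f);  // right
            out[2] = planeFromRow(m, 1, +1.0f);  // bottom
            out[3] = planeFromRow(m, 1, -1.0f);  // top
            out[4] = planeFromRow(m, 2, +1.0f);  // near
            out[5] = planeFromRow(m, 2, -1.0f);  // far
        }

    A point p is inside the frustum when a*p.x + b*p.y + c*p.z + d >= 0 for all six planes, which is also a quick sanity test: the camera position itself should pass the near plane and fail nothing else.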

    Read the article

  • Thunderbird keeps crashing on Ubuntu 12.04 64-bit

    - by maurizio ribera d'alcala'
    I have been using Thunderbird for years under different versions of Ubuntu, including 12.04 since its release. For the past couple of days it has kept crashing a few seconds after starting. I tried reinstalling it, creating a new profile and copying the old mail over, file by file. After one day of normal functioning it started crashing again. Mozilla is receiving the crash reports. The content of the last one follows:

        Add-ons: [email protected]:15.0,[email protected]:0.3.11,[email protected]:0.9.3,[email protected]:3.4.1,[email protected]:15.0,[email protected]:0.5.1,{b4447f60-db9c-11da-a94d-0800200c9a66}:0.9.1,{972ce4c6-7e08-4474-a285-3208198ce6fd}:15.0
        BuildID: 20120827103657
        CrashTime: 1347200254
        EMCheckCompatibility: true
        Email: [email protected]
        FramePoisonBase: 7ffffffff0dea000
        FramePoisonSize: 4096
        InstallTime: 1346431480
        Notes: OpenGL: Tungsten Graphics, Inc -- Mesa DRI Intel(R) Sandybridge Mobile -- 3.0 Mesa 8.0.2 -- texture_from_pixmap
        ProductID: {3550f703-e582-4d05-9a08-453d09bdfdc6}
        ProductName: Thunderbird
        ReleaseChannel: release
        SecondsSinceLastCrash: 1109
        StartupTime: 1347200242
        Theme: classic/1.0
        Throttleable: 1
        Vendor:
        Version: 15.0

    I started Thunderbird in safe mode and tested all the add-ons. Apparently it is the 'Unity Launcher Integration' add-on that creates the problem. I say apparently because I want to wait two or three days to be sure Thunderbird has returned to regular functioning. Is this a bug? Can it be solved?

    Read the article

  • Who can change the View in MVC?

    - by Luke
    I'm working on a thick-client graph display and manipulation application, and I'm trying to apply the MVC pattern to our 3D visualization component. Here is what I have for the Model, View, and Controller:

    Model: The graph and its metadata. This includes vertices, edges, and the attributes of each. It does not contain position information, icons, colors, or anything display-related.

    View: This would commonly be called a scene graph. It includes the 3D display information, texture information, color information, and anything else related specifically to the visualization of the model.

    Controller: The controller takes the view and displays it in a window using OpenGL (but it could potentially be any 3D graphics package).

    The application has various "layouts" that change the positions of the vertices in the display. For instance, one layout may arrange the vertices in a circle. Is it common for these layouts to access and change the view directly? Should they go through the controller to access the view? If they go through the controller, should they just ask for direct access to the view, or should each change go through the controller? I realize this is a bit different from the standard MVC example where there are a finite number of views; in this case, the view can change in an infinite number of ways. Perhaps I'm shattering some basic principle of MVC here. Thanks in advance!
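    A sketch of the "go through the Controller" option, with every name hypothetical: the layout never sees the View type at all, only the narrow mutation interface the controller exposes, so the controller stays the single place that validates changes and schedules redraws:

        #include <cmath>
        #include <unordered_map>
        #include <vector>

        struct Vec3 { float x, y, z; };

        // Minimal stand-in for the scene graph (the View).
        class SceneGraph {
        public:
            void setPosition(int vertexId, const Vec3& p) { positions_[vertexId] = p; }
        private:
            std::unordered_map<int, Vec3> positions_;
        };

        class ViewController {
        public:
            // The only mutation the controller exposes to layouts.
            void setVertexPosition(int vertexId, const Vec3& pos) {
                view_.setPosition(vertexId, pos);  // validate/forward to the View
                dirty_ = true;                     // schedule a redraw
            }
        private:
            SceneGraph view_;
            bool dirty_ = false;
        };

        class CircleLayout {
        public:
            // The layout depends on the controller interface, never on SceneGraph.
            void apply(ViewController& vc, const std::vector<int>& vertexIds) {
                const float r = 10.0f;
                for (size_t i = 0; i < vertexIds.size(); ++i) {
                    float a = 2.0f * 3.14159265f * float(i) / float(vertexIds.size());
                    vc.setVertexPosition(vertexIds[i],
                                         {r * std::cos(a), r * std::sin(a), 0.0f});
                }
            }
        };

    The trade-off is chattiness: one call per vertex. If layouts move thousands of vertices, a batch method on the controller (set all positions, one redraw) keeps the same boundary with less overhead.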

    Read the article

  • Ubuntu 64bit Black Screen on Minecraft

    - by Signify
    I have tried posting on the forums, but I really need help (I'm a server admin and really don't want to have to switch to Windows just to run Minecraft). Anyhow, I originally was running OpenJDK 6, as I was told that 7 was unstable, and was getting periodic lag spikes while walking (at least once every 3 seconds the screen would freeze for a tenth of a second). After that, I attempted to install Sun's Java JDK 7 (I couldn't get ahold of 6 without signing up for Oracle's newsletters). Upon attempting to run Minecraft, I got a black screen after logging in, with this error message:

        27 achievements
        182 recipes
        Setting user: Thunder7102, -1618112820878091307
        Exception in thread "Minecraft main thread" java.lang.UnsatisfiedLinkError: /home/noiro/.minecraft/bin/natives/liblwjgl.so: /home/noiro/.minecraft/bin/natives/liblwjgl.so: wrong ELF class: ELFCLASS32 (Possible cause: architecture word width mismatch)
            at java.lang.ClassLoader$NativeLibrary.load(Native Method)
            at java.lang.ClassLoader.loadLibrary1(ClassLoader.java:1939)
            at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1864)
            at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1825)
            at java.lang.Runtime.load0(Runtime.java:792)
            at java.lang.System.load(System.java:1059)
            at org.lwjgl.Sys$1.run(Sys.java:69)
            at java.security.AccessController.doPrivileged(Native Method)
            at org.lwjgl.Sys.doLoadLibrary(Sys.java:65)
            at org.lwjgl.Sys.loadLibrary(Sys.java:81)
            at org.lwjgl.Sys.<clinit>(Sys.java:98)
            at org.lwjgl.opengl.Display.<clinit>(Display.java:132)
            at net.minecraft.client.Minecraft.a(SourceFile:184)
            at net.minecraft.client.Minecraft.run(SourceFile:657)
            at java.lang.Thread.run(Thread.java:722)

    Now, this got me fed up, so I tried to install a Windows 7 virtual machine through VirtualBox. I gave it 256 MB of graphics memory with 2D and 3D acceleration and 3 GB of RAM, and installed Java JDK 7 for Windows (which does work, from experience, on my other Windows 7 partition). Once again, a black screen after login. What the heck is going on, guys?

    My system specs:
    - Ubuntu 12.04 64-bit, fully updated, running GNOME 3
    - Nvidia GTS 450 1.3GB, overclocked
    - AMD Athlon II 4x 2.8GHz
    - 6GB of RAM

    So, what do you think?

    Read the article

  • Central renderer for a given scene

    - by Loggie
    When creating a central rendering system for all game objects in a given scene, I am trying to work out the best way to pass the scene to the render system to be rendered. If the scene is managed by an arbitrary spatial structure (an octree, BSP tree, quadtree, kd-tree, etc.), what is the best way to pass it to the render system? The obvious problem is that if the render system were simply given the root node of the structure, it would require intrinsic knowledge of the structure in order to traverse it.

    My solution is to cull all objects outside the frustum in the scene manager, then create a list of the objects that are left and pass this simple list to the render system, be it an array, a vector, a linked list, etc. (this would be a structure required by the render system as a means of knowing which objects should be rendered). The list would of course attempt to minimise OpenGL state changes by grouping objects that require the same rendering operations.

    I have been thinking a lot about this and have searched various terms on here, following any additional information/links, but I have not really found a definitive answer. It may be that there is no definitive answer, but I would appreciate some advice and tips. My question is: is this a reasonable solution to the problem? Are there any improvements I could make? Are there any caveats I should know about?

    Side question: am I right in assuming that octrees, BSP trees, etc. are all forms of BVHs?
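    The handoff described above might look like this in outline (RenderItem, sortKey, and collectVisible are illustrative names, not a known engine API); the point is that traversal knowledge stays inside the scene manager and the renderer only ever sees a flat, sortable list:

        #include <algorithm>
        #include <cstdint>
        #include <vector>

        struct Mesh {};
        struct Material {};
        struct Mat4 { float m[16]; };
        struct Frustum {};

        // Everything the renderer needs: draw data plus a key for state sorting.
        struct RenderItem {
            uint32_t sortKey;        // e.g. shader id in high bits, texture id below
            const Mesh* mesh;
            const Material* material;
            Mat4 worldTransform;
        };

        // Stand-in for the spatial structure; collectVisible is the only entry
        // point the rendering side ever needs to know about.
        struct Octree {
            std::vector<RenderItem> allItems;  // toy storage for the sketch
            void collectVisible(const Frustum&, std::vector<RenderItem>& out) const {
                out = allItems;                // real code would frustum-cull here
            }
        };

        std::vector<RenderItem> buildRenderList(const Octree& scene, const Frustum& frustum)
        {
            std::vector<RenderItem> items;
            scene.collectVisible(frustum, items);
            std::sort(items.begin(), items.end(),
                      [](const RenderItem& a, const RenderItem& b) {
                          return a.sortKey < b.sortKey;   // group equal state together
                      });
            return items;
        }

    Reusing the vector between frames (clear, don't reallocate) avoids the per-frame allocation cost that otherwise makes the "pass a fresh list every call" worry real.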

    Read the article

  • How to store bitmaps in memory?

    - by Geotarget
    I'm working with general-purpose image rendering and high-performance image processing, so I need to know how to store bitmaps in memory (24 bpp / 32 bpp, compressed/raw, etc.). I'm not working with 3D graphics or DirectX/OpenGL rendering, so I don't need graphics-card-compatible bitmap formats.

    My questions:

    - What is the "usual" or "normal" way to store bitmaps in memory (in C++ engines/projects)?
    - How should bitmaps be stored for high-performance algorithms, so that read/write times are fastest (fixed array? with/without padding? 24 bpp or 32 bpp?)
    - How should bitmaps be stored in applications handling a lot of bitmap data, to minimize memory usage (JPEG? or a faster [de]compression algorithm?)

    Some possible methods:

    - Use a fixed packed 24-bpp or 32-bpp int[] array and simply access pixels using pointer access; all pixels are allocated in one continuous memory chunk (which could be 1-10 MB).
    - Use a form of "sparse" data storage where each line of the bitmap is allocated separately, requiring smaller contiguous memory segments.
    - Store bitmaps in compressed form (PNG, JPG, GIF, etc.) and unpack only when needed, reducing the amount of memory used; delete the unpacked data if it's not used for 10 seconds.
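    A sketch of the first method (one contiguous packed 32-bpp chunk with an explicit row stride); the class name and fields are illustrative, but this layout is the one most C++ imaging code converges on, since 32 bpp keeps every pixel word-aligned even though it wastes a byte versus 24 bpp:

        #include <cstdint>
        #include <vector>

        class Bitmap32 {
        public:
            Bitmap32(int w, int h)
                : width_(w), height_(h), stride_(w), pixels_(size_t(w) * size_t(h)) {}

            uint32_t& at(int x, int y)       { return pixels_[size_t(y) * stride_ + x]; }
            uint32_t  at(int x, int y) const { return pixels_[size_t(y) * stride_ + x]; }

            // Raw row pointer for tight inner loops (scanline filters, blits).
            uint32_t* row(int y) { return pixels_.data() + size_t(y) * stride_; }

            int width()  const { return width_; }
            int height() const { return height_; }

        private:
            int width_, height_;
            int stride_;                    // pixels per row; may exceed width_ for alignment
            std::vector<uint32_t> pixels_;  // 0xAARGGBB-style packed, one cache-friendly chunk
        };

    Keeping stride separate from width is what lets the same accessors work for padded rows or sub-rectangle views later, without changing any processing code.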

    Read the article

  • Checking for collisions on a 3D heightmap

    - by Piku
    I have a 3D heightmap drawn using OpenGL (which isn't important). It's represented by a 2D array of height data. To draw it, I go through the array using each point as a vertex. Three vertices are wound together to form a triangle, two triangles to make a quad. To stop the whole mesh being tiny I scale it by a certain amount called 'gridsize'. This produces a fairly nice, lumpy, angular terrain, similar to something you'd see in old Atari/Amiga or DOS '3D' games (think Virus/Zarch on the Atari ST).

    I'm now trying to work out how to do collision with the terrain, testing to see if the player is about to collide with a piece of scenery sticking upwards or fall into a hole. At the moment I simply divide the player's coordinates by the gridsize to find which vertex the player is on top of, and it works well when the player is exactly over the corner of a triangle of terrain. However, how can I make it more accurate for the bits between the vertices? I get confused since those points don't exist in my heightmap data; they're a product of the GPU drawing a triangle between three points. I can calculate the height of the vertex closest to the player, but not the space between them. That is, if the player is hovering over the centre of one of these 'quads', rather than over a corner vertex, how do I work out the height of the terrain below them? Later on I may want the player to slide down the slopes in the terrain.
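    For the bits between vertices, the usual answer is to interpolate across the plane of the triangle the player is standing in, rather than snapping to the nearest vertex. A sketch, assuming quads are split along the (0,0)-(1,1) diagonal; if your winding splits the other way, swap the triangle test (bounds checks omitted):

        #include <cmath>

        // Height under world position (x, z) for heightmap[row * mapWidth + col],
        // with 'gridsize' world units per cell, matching the question's setup.
        float terrainHeight(const float* heightmap, int mapWidth,
                            float x, float z, float gridsize)
        {
            float gx = x / gridsize, gz = z / gridsize;
            int cx = (int)std::floor(gx), cz = (int)std::floor(gz);
            float fx = gx - cx, fz = gz - cz;   // 0..1 position inside the quad

            float h00 = heightmap[ cz      * mapWidth + cx    ];
            float h10 = heightmap[ cz      * mapWidth + cx + 1];
            float h01 = heightmap[(cz + 1) * mapWidth + cx    ];
            float h11 = heightmap[(cz + 1) * mapWidth + cx + 1];

            // Each quad is two triangles; pick the one (fx, fz) falls in and
            // interpolate across its plane, which matches what the GPU drew.
            if (fx + fz <= 1.0f)
                return h00 + (h10 - h00) * fx + (h01 - h00) * fz;  // first triangle
            else
                return h11 + (h01 - h11) * (1.0f - fx)
                           + (h10 - h11) * (1.0f - fz);            // second triangle
        }

    This returns exactly the rendered surface height, so the player never sinks into or floats above the visible mesh. The same two plane gradients also give the slope direction for the sliding behaviour mentioned at the end.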

    Read the article

  • Clutter for game GUI

    - by tjameson
    I'm pretty new to game development, having only written a simple 3D game for a class project, but I'd like to get started on a bigger project. I'm writing an MMORPG to run both in the browser (WebGL) and natively (OpenGL ES 2). In choosing a GUI toolkit, I'm trying to find a style that would work natively and would be simple to emulate in WebGL. I am considering using D or Go for writing my game, so interfacing with C++ libraries will be difficult, if not impossible. Of course, the language isn't the end goal here, so if using C++ will save considerable time, I'll bite the bullet and use that. In order to reduce the amount of code I'll have to write for the browser, I'm considering using something simple like Clutter for basic abstractions, which I think would be pretty easy to emulate (layered canvases maybe?). Does anyone have experience using Clutter for a 3D game?

    Note: I haven't used any game development libraries, and I only have limited experience with GUI libraries. I do have HTML+CSS experience, so maybe libRocket is a viable solution?

    Read the article

  • Coordinate spaces and transformation matrices

    - by Belgin
    I'm trying to get an object from object space into projected space using these intermediate matrices:

    The first matrix (I) transforms from object space into inertial space, but since my object is not rotated or translated in any way inside object space, this matrix is the 4x4 identity matrix.

    The second matrix (W) transforms from inertial space into world space, which is just a scale transform with factor a = 14.1 on all coordinates, since the inertial space origin coincides with the world space origin:

            / a 0 0 0 \
        W = | 0 a 0 0 |
            | 0 0 a 0 |
            \ 0 0 0 1 /

    The third matrix (C) transforms from world space into camera space. This matrix is a translation matrix with a translation of (0, 0, 10), because I want the camera to be located behind the object, so the object must be positioned 10 units along the z axis:

            / 1 0 0 0  \
        C = | 0 1 0 0  |
            | 0 0 1 10 |
            \ 0 0 0 1  /

    Finally, the fourth matrix is the projection matrix (P). Bearing in mind that the eye is at the origin of world space and the projection plane is defined by z = 1, the projection matrix is:

            / 1 0 0   0 \
        P = | 0 1 0   0 |
            | 0 0 1   0 |
            \ 0 0 1/d 0 /

    where d is the distance from the eye to the projection plane, so d = 1.

    I'm multiplying them like this: (((P x C) x W) x I) x V, where V is the vertex's coordinates in column-vector form:

            / x \
        V = | y |
            | z |
            \ 1 /

    After I get the result, I divide the x and y coordinates by w to get the actual screen coordinates. Apparently, I'm doing something wrong or missing something completely here, because it's not rendering properly. Here's a picture of what is supposed to be the bottom side of the Stanford Dragon. Also, I should add that this is a software renderer, so no DirectX or OpenGL stuff here.
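    One way to debug a setup like this is to reproduce the pipeline in a few lines and check a known vertex. A sketch using the question's own numbers (a = 14.1, d = 1, column vectors); for a vertex at the object-space origin, w should come out as the camera-space z, here 10:

        #include <cstdio>

        struct Vec4 { float x, y, z, w; };
        struct Mat4 { float m[4][4]; };  // m[row][col]

        Vec4 mul(const Mat4& M, const Vec4& v) {
            return {
                M.m[0][0]*v.x + M.m[0][1]*v.y + M.m[0][2]*v.z + M.m[0][3]*v.w,
                M.m[1][0]*v.x + M.m[1][1]*v.y + M.m[1][2]*v.z + M.m[1][3]*v.w,
                M.m[2][0]*v.x + M.m[2][1]*v.y + M.m[2][2]*v.z + M.m[2][3]*v.w,
                M.m[3][0]*v.x + M.m[3][1]*v.y + M.m[3][2]*v.z + M.m[3][3]*v.w,
            };
        }

        int main() {
            Mat4 W = {{{14.1f,0,0,0},{0,14.1f,0,0},{0,0,14.1f,0},{0,0,0,1}}};  // scale
            Mat4 C = {{{1,0,0,0},{0,1,0,0},{0,0,1,10},{0,0,0,1}}};             // translate z+10
            Mat4 P = {{{1,0,0,0},{0,1,0,0},{0,0,1,0},{0,0,1.0f,0}}};           // 1/d, d = 1

            Vec4 v = {0, 0, 0, 1};                    // object-space origin
            Vec4 clip = mul(P, mul(C, mul(W, v)));    // I is identity, omitted
            std::printf("clip = (%f, %f, %f, %f), screen = (%f, %f)\n",
                        clip.x, clip.y, clip.z, clip.w,
                        clip.x / clip.w, clip.y / clip.w);
        }

    If w does not track camera-space depth like this, the usual suspects are a row-major multiply applied to column vectors (effectively transposing every matrix) or the factors composed in the reverse order.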

    Read the article

  • About C# objects and the possibilities it has

    - by user527825
    As a novice programmer, I always wonder about C#'s capabilities. I know it is still early for me to judge, but all I want to know is: can C# do complex stuff, or anything outside the Windows OS? I think C# is a proprietary language (I don't know if I said that right), meaning you can't use it outside Visual Studio or Windows.

    1. Can you create your own controls (is "object" the right word?), or are you forced to use the ones available in the toolbox and their properties and methods?

    2. Can C# be used with the OpenGL API or the DirectX API?

    3. It always bothers me when I start doing things in Visual Studio. I know it sounds arrogant to say, but sometimes I feel that I don't like being forced to use something even if it's helpful; I feel (do I have the right to feel?) that I want to do everything by myself. Don't laugh, I just feel that this would give me a better understanding.

    4. Is Visual C# like using MaxScript inside 3ds Max, in that C# is exclusively for Windows, Forms, and Windows-related components, the way MaxScript is only for 3D editing and manipulation inside that one piece of software?

    If it is too difficult for a beginner, I hope you don't answer the fourth question, as I don't have much motivation and I want to keep the little I have.

    Note: Sorry for my English; I am self-taught and have never used the language with native speakers, so expect some errors. I have a lot of questions about many things. What daily number of questions do you think would not bother the admins of the site and the members here? Thank you for your time.

    Read the article

  • Xlib: extension "GLX" missing on display ":0". Asus k53s

    - by Steve
    I had everything working fine. I use a number of OpenGL graphics programs, for example PyMOL, MGLTools, VMD, BALLView, RasMol, etc. All of these now give the error:

        Xlib: extension "GLX" missing on display ":0".

    and fail to initialize. I have an i7 Asus K53S with Nvidia GeForce graphics, and I need these programs to do my work. I tried the Bumblebee fix, removing all the Nvidia drivers, and rolling back. I do not know why these were working and then stopped, but I did notice a new Nvidia console in the menu, which I assume came from enabling the automatic Nvidia feeds, etc. I also played with the xorg configuration, but I have no clue what settings are valid. In addition, the display is now not recognized 50% of the time; it just gives a generic 640x480 and I have to log in and out 10 times to get it to return to a normal setting. When I try to set it manually, no other setting is allowed from the settings menus, and the terminal changes just get reset every time I log out.

    Read the article

  • How do I install the nvidia driver for a GeForce 9300M?

    - by Alex
    Yesterday I installed Ubuntu 11.04 (instead of 10.10), and I need to install the Nvidia driver which supports OpenGL 3.3. In 10.10 I did it this way:

    1. Ctrl+Alt+F1
    2. log in
    3. sudo /etc/init.d/gdm stop
    4. sudo sh driver.run
    5. startx

    Now it doesn't work, because Ctrl+Alt+F1 doesn't show a login screen, just a black screen. I've googled this problem; some people have it, but no one knows how to solve it. Some say it is connected with the video card or driver, but I have a GeForce 9300M G and have activated a standard driver. Anyway, it worked in 10.10 but doesn't work now. The main problem is that I need to kill the X server to install this driver, and killing the process just restarts the X server. Also, I've tried /etc/init.d/gdm stop in a GUI console; it says "Fake initctl, doing nothing", and Google didn't help in this case either. Any ideas for installing that driver?

    Read the article

  • What are the most necessary non-language-specific things a programmer needs to know?

    - by Josh
    I've been thinking about this a lot lately. Right now I work at a little web company, I'm almost done with school, and I've written an iPhone app, but I'm not sure what else I need to focus my learning energies on. I've decided I want to do software programming, so I've been actively reading everything I can get my hands on that deals with Objective-C/C++ (Cocoa, OpenGL, etc.). But those are not the things I'm talking about. I know I need to "master" a language or two. What I'm talking about are the other "things", such as learning and using source control, design patterns, etc. What thing (or things, one per response) would you say I should concurrently be focusing on? You can consider in your answer that I want to follow the aforementioned career path, but you don't have to. I just want a nice list of things to research and actually use in my career. Also, can someone help me with tagging this? I'm not sure what exactly would be a good tag for such a question.

    Read the article

  • MySQL Workbench 5.2.39 GA Released

    - by user13164789
    The MySQL Developer Tools team is announcing the next maintenance release of its flagship product, MySQL Workbench, version 5.2.39. This version contains MySQL Utilities 1.0.5, a set of command-line Python utilities for helping to perform and script various administration tasks for MySQL. A complete list of changes in this release of the Utilities can be found at: http://dev.mysql.com/doc/workbench/en/wb-utils-news-1-0-5.html

    MySQL Workbench 5.2 GA:
    • Data Modeling
    • Query (replaces the old MySQL Query Browser)
    • Administration (replaces the old MySQL Administrator)

    Please get your copy from our download site. Sources and binary packages are available for several platforms, including Windows, Mac OS X and Linux: http://dev.mysql.com/downloads/workbench/

    Workbench documentation can be found here: http://dev.mysql.com/doc/workbench/en/index.html
    Utilities documentation can be found here: http://dev.mysql.com/doc/workbench/en/mysql-utilities.html

    In addition to the new Query/SQL Development and Administration modules, version 5.2 features improved stability and performance, especially on Windows, where OpenGL support has been enhanced and the UI was optimized to offer better responsiveness. This release also includes improvements to the scripting capabilities of the SQL Editor. You can read more about it at http://wb.mysql.com/workbench/doc/

    For a detailed list of resolved issues, see the change log: http://dev.mysql.com/doc/workbench/en/wb-change-history.html

    If you need any additional info or help, please get in touch with us. Post in our forums or leave comments on our blog pages.

    - The MySQL Workbench Team

    Read the article

  • Errors in ~/.xsession-errors

    - by Kuberan Naganathan
    I'm getting errors in ~/.xsession-errors. I'm running Ubuntu 12.04. Many apps fail to run without any mention of problems in the .xsession-errors file. I looked around and tried to resolve the issues myself, but have failed so far. I have to say it's possible that the issue is related to me mounting /home on another partition (I say possibly because things worked OK for a while). Fortunately my .xsession-errors file is small enough to post here. Thanks in advance for the help:

        gnome-keyring-daemon: insufficient process capabilities, unsecure memory might get used
        gnome-keyring-daemon: insufficient process capabilities, unsecure memory might get used
        gnome-keyring-daemon: insufficient process capabilities, unsecure memory might get used
        gnome-keyring-daemon: insufficient process capabilities, unsecure memory might get used
        Backend     : gconf
        Integration : true
        Profile     : unity
        Adding plugins
        Initializing core options...done
        (gnome-settings-daemon:2547): color-plugin-WARNING **: failed to get edid: unable to get EDID for output
        (gnome-settings-daemon:2547): color-plugin-WARNING **: unable to get EDID for xrandr-default: unable to get EDID for output
        (gnome-settings-daemon:2547): color-plugin-WARNING **: failed to reset xrandr-default gamma tables: gamma size is zero
        Initializing composite options...done
        Initializing opengl options...done
        Initializing decor options...done
        ** Message: applet now removed from the notification area
        Initializing vpswitch options...done
        Initializing snap options...done
        Initializing mousepoll options...done
        Initializing resize options...done
        Initializing place options...done
        Initializing move options...done
        Initializing wall options...done
        Initializing grid options...done
        I/O warning : failed to load external entity "/home/kuberan/.compiz/session/10754cf696d335e98e13471376531156900000024960034"
        Initializing session options...done
        Initializing gnomecompat options...done
        Initializing animation options...done
        Initializing fade options...done
        Initializing unitymtgrabhandles options...done
        Initializing workarounds options...done
        Initializing scale options...done
        compiz (expo) - Warn: failed to bind image to texture
        Initializing expo options...done
        Initializing ezoom options...done
        ** Message: using fallback from indicator to GtkStatusIcon
        (compiz:2560): GConf-CRITICAL **: gconf_client_add_dir: assertion `gconf_valid_key (dirname, NULL)' failed
        Initializing unityshell options...done
        Setting Update "main_menu_key"
        Setting Update "run_key"
        Setting Update "icon_size"
        ** Message: moving back from GtkStatusIcon to indicator

    Read the article

  • Weird UIView transforms in Retina iPhone

    - by ggambett
    I'm having a problem I don't understand. I'm developing an OpenGL app for iOS. Because at some points I want to force the orientation of the view programmatically, and Apple for whatever reason doesn't make that easy (or even possible), I'm doing it by hand: I always return NO in shouldAutorotateToInterfaceOrientation, and when I want to change the orientation (to portrait, for example), I do something like this in the UIView:

        [self setTransform:CGAffineTransformMake(1, 0, 0, 1, 0, 0)];
        [self setBounds:CGRectMake(0, 0, 768, 1024)];

    This works fine. In order to support Retina devices, I started checking [UIScreen mainScreen].scale and setting self.contentScaleFactor accordingly. I also modified the code above to account for the new dimensions, like this:

        [self setTransform:CGAffineTransformMake(1, 0, 0, 1, 0, 0)];
        [self setBounds:CGRectMake(0, 0, 2*768, 2*1024)];

    Same rotation, different size. The weird result is that I get a "screen" of the right size, but offset half a screen to the bottom and the left. To correct for this, I need to do the following:

        [self setTransform:CGAffineTransformMake(1, 0, 0, 1, 0, 0)];
        [self setBounds:CGRectMake(-768, -1024, 2*768 - 768, 2*1024 - 1024)];

    This works, but it's ugly, I also need to make similar corrections when I get touch coordinates, and worst of all, I don't understand what's going on or why the above "correction" works. Can anyone shed some light on this issue?

    Read the article

  • How do I improve terrain rendering batch counts using DirectX?

    - by gamer747
    We have determined that our terrain rendering system needs some work to minimize the number of batches being sent to the GPU in order to improve performance. I'm looking for suggestions on how best to improve what we're trying to accomplish.

    We logically split our terrain mesh into smaller grid cells of 32x32 world units. Each cell has metadata that dictates the four 256x256 textures used for splatting, along with the alpha blend data and the shadow and light mappings. Each cell contains 81 vertices in a 9x9 grid.

    Presently, we examine each cell and determine the four textures being used to splat it. We combine that geometry with any other cell that uses the same four textures, regardless of splat order. If the splat order for a cell differs, the blend map is adjusted so that the splat order matches the other like cells and blending still happens in the right order. But even with this batching approach, when looking out across an area of open terrain it isn't uncommon to have a batch count between 1200 and 1700, depending upon how frequently textures or texture blends differ between cells. We are only doing frustum culling presently.

    So, using texture splatting, are there alternatives that can reduce the batch count and keep rendering extremely performance-friendly, even under DirectX 9c? We considered texture atlases, since we're targeting DirectX 9c and older OpenGL platforms, but trying to repeat textures out of an atlas in shaders produces seam artifacts which we haven't been able to eliminate except by disabling mipmapping, and disabling mipmapping gives poor texture quality at a distance. How have others batched together terrain geometry so that terrain can be splatted with various textures while minimizing batch count and texture state switches, so that rendering performance isn't negatively impacted?
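    The order-independent grouping described above can be expressed as a single sort key built from the cell's canonicalized texture set; a sketch with hypothetical names, assuming the blend maps are rewritten to match the canonical texture order as the question already does:

        #include <algorithm>
        #include <array>
        #include <cstdint>
        #include <vector>

        // A 32x32 terrain cell: four splat-layer texture ids plus mesh location.
        struct Cell {
            std::array<uint16_t, 4> textures;  // splat layers, any order
            int firstVertex;                   // offset into the shared vertex buffer
        };

        // Canonical key: sort the texture ids so cells with the same set,
        // in any splat order, compare equal and can share one batch.
        uint64_t batchKey(const Cell& c) {
            auto t = c.textures;
            std::sort(t.begin(), t.end());
            return (uint64_t(t[0]) << 48) | (uint64_t(t[1]) << 32)
                 | (uint64_t(t[2]) << 16) |  uint64_t(t[3]);
        }

        // After frustum culling, sort visible cells by key; adjacent cells with
        // equal keys are then merged into one draw call and one set of state changes.
        void sortForBatching(std::vector<Cell>& visibleCells) {
            std::sort(visibleCells.begin(), visibleCells.end(),
                      [](const Cell& a, const Cell& b) {
                          return batchKey(a) < batchKey(b);
                      });
        }

    The batch count then equals the number of unique texture sets in view rather than the number of cells, which is also the number this approach cannot reduce below; getting lower than that means reducing texture-set variety (fewer layers, shared layer palettes) rather than smarter merging.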

    Read the article

  • Can't get Unity 3D to work in 11.10

    - by pmoseph
    I recently upgraded to 11.10 on my Lenovo ThinkPad T520, and I'm not able to load Unity 3D (I'm not selecting 2D at the login menu either).

        me@mycomp:~$ echo $DESKTOP_SESSION
        ubuntu-2d

    I ran the unity support test as well:

        me@mycomp:~$ /usr/lib/nux/unity_support_test -p
        Xlib: extension "GLX" missing on display ":0.0".
        Xlib: extension "GLX" missing on display ":0.0".
        Xlib: extension "GLX" missing on display ":0.0".
        Error: unable to create the OpenGL context

    And it looks like I only have one graphics card:

        me@mycomp:~$ lspci | grep VGA
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)

    Also, Ubuntu lists nothing under the "Additional Drivers" window. Any help would be extremely appreciated, as I'm somewhat of a noob. Thanks!

    Edit 1: Here is the output of lshw -C display:

        me@mycomp:~$ sudo lshw -C display
          *-display
               description: VGA compatible controller
               product: 2nd Generation Core Processor Family Integrated Graphics Controller
               vendor: Intel Corporation
               physical id: 2
               bus info: pci@0000:00:02.0
               version: 09
               width: 64 bits
               clock: 33MHz
               capabilities: msi pm vga_controller bus_master cap_list rom
               configuration: driver=i915 latency=0
               resources: irq:43 memory:f0000000-f03fffff memory:e0000000-efffffff ioport:5000(size=64)

    Read the article

