Search Results

Search found 8165 results on 327 pages for '3d graphics'.


  • Why does Unity's obj import flip my x coordinate?

    - by milkplus
    When I import my Wavefront .obj model into Unity and then draw lines over it with the same coordinates as in the .obj file, the x coordinate is negated. I don't see any option in the importer that might be doing that, and I'm using the same localToWorldMatrix and the same coordinate data from the .obj file. GL.PushMatrix(); GL.MultMatrix(transform.localToWorldMatrix); CreateMaterial(); lineMaterial.SetPass(0); GL.Color(new Color(0, 1, 0)); GL.Begin(GL.LINES); GL.Vertex(p1); GL.Vertex(p2); GL.Vertex(p2); GL.Vertex(p3); //... GL.End(); GL.PopMatrix();

    Read the article

  • After 10.10 -> 11.04 upgrade, can only login via Classic (No Effects)

    - by Ryan P.
    Yesterday I upgraded from 10.10 to 11.04; everything seemed to go okay until immediately after login, when the desktop goes into a "corrupted"-looking state (similar to having too high a resolution set). I can see some kind of movement by moving the mouse around/right-clicking, and can reach text terminals via Ctrl + Alt + F1. It does this in both plain "Ubuntu" and "Ubuntu Classic", and only seems to log in/start up properly with Ubuntu Classic (No Effects). I have checked my video card (Radeon X600) and run the Unity support test, which passes with all "yes" results (Unity supported: yes): /usr/lib/nux/unity_support_test -p I have tried re-installing my Ubuntu desktop: rm -rf .gnome .gnome2 .gconf .gconfd .metacity sudo apt-get remove ubuntu-desktop sudo apt-get install ubuntu-desktop With no success. I can work around this for now with Classic (No Effects), but I'd really like to find the root problem. Any suggestions on what else to try would be appreciated!

    Read the article

  • When to use Euler vs Axis angles vs Quaternions?

    - by manning18
    I understand the theory behind each, but I was wondering if people could share their experiences of when one would use one representation over the others. For instance, if you were implementing a chase camera, an FPS-style mouse look, or writing some kinematic routine, what factors would you consider in choosing one over another, and when might you need to convert from one form of representation to the other? Are there certain things that only one system can do that the others can't (e.g. smooth interpolation with quaternions)?
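
    On the interpolation point: a self-contained slerp sketch in Java, with quaternions stored as plain {x, y, z, w} arrays (no particular engine's API assumed):

      /** Spherical linear interpolation between unit quaternions a and b,
       *  each stored as {x, y, z, w}; t runs from 0 to 1. */
      static float[] slerp(float[] a, float[] b, float t) {
          float dot = a[0]*b[0] + a[1]*b[1] + a[2]*b[2] + a[3]*b[3];
          float sign = 1f;
          if (dot < 0f) { dot = -dot; sign = -1f; }   // take the shorter arc
          float wa, wb;
          if (dot > 0.9995f) {                        // nearly parallel: lerp, then renormalize
              wa = 1f - t;
              wb = t;
          } else {
              double theta = Math.acos(dot);          // angle between the two rotations
              wa = (float) (Math.sin((1.0 - t) * theta) / Math.sin(theta));
              wb = (float) (Math.sin(t * theta) / Math.sin(theta));
          }
          float[] q = new float[4];
          float len = 0f;
          for (int i = 0; i < 4; i++) {
              q[i] = wa * a[i] + sign * wb * b[i];
              len += q[i] * q[i];
          }
          len = (float) Math.sqrt(len);
          for (int i = 0; i < 4; i++) q[i] /= len;    // keep it a unit quaternion
          return q;
      }

    This is the "smooth interpolation" Euler angles can't give you directly: slerp moves at constant angular velocity along the shortest rotation arc, with no gimbal trouble.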

    Read the article

  • Is using a dedicated thread just for sending GPU commands a good idea?

    - by tigrou
    The most basic game loop is like this: while(1) { update(); draw(); swapbuffers(); } This is very simple but has a problem: some drawing commands can block, and the CPU will wait when it could be doing other things (like processing the next update() call). Another possible solution I have in mind would be to use two threads: one for updating and preparing commands to be sent to the GPU, and one for sending these commands to the GPU: //first thread while(1) { update(); render(); // use game state to generate all needed triangles and commands for the gpu; put them in a buffer, no command is sent to the gpu yet (two buffers will be used, see below) pulse(); // signal the other thread that data is ready } //second thread while(1) { wait(); // wait for the first thread's data to be ready send_data_to_gpu(); // send prepared commands from the buffer to the graphics card swapbuffers(); } Also: two buffers would be used, so one buffer can be filled with GPU commands while the other is being processed by the GPU. Do you think such a solution would be effective? What would be the advantages and disadvantages of such a solution, especially compared to a simpler one (e.g. single-threaded with triple buffering enabled)?
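
    A minimal Java sketch of the handoff described above (CommandBuffer and the GPU-submission calls are placeholders, not a real API); two blocking queues play the role of pulse()/wait() and give the two-buffer rotation for free:

      import java.util.concurrent.ArrayBlockingQueue;
      import java.util.concurrent.BlockingQueue;

      class CommandBuffer { /* recorded draw commands; placeholder type */ }

      class TwoThreadLoop {
          // Two buffers circulate: the update thread fills one while the
          // GPU thread drains the other. take() blocks, which is the
          // "wait"/"pulse" pair from the question.
          private final BlockingQueue<CommandBuffer> ready = new ArrayBlockingQueue<>(2);
          private final BlockingQueue<CommandBuffer> free = new ArrayBlockingQueue<>(2);

          TwoThreadLoop() {
              free.add(new CommandBuffer());
              free.add(new CommandBuffer());
          }

          void updateThread() throws InterruptedException {
              while (true) {
                  CommandBuffer buf = free.take();   // grab a drained buffer
                  update();                          // game logic
                  record(buf);                       // build commands, no GPU calls here
                  ready.put(buf);                    // hand off to the GPU thread
              }
          }

          void gpuThread() throws InterruptedException {
              while (true) {
                  CommandBuffer buf = ready.take();  // block until commands are prepared
                  submitToGpu(buf);                  // the only thread touching the GL context
                  swapBuffers();
                  free.put(buf);                     // recycle for the next frame
              }
          }

          // Placeholders for the question's update/render/submit steps.
          private void update() {}
          private void record(CommandBuffer buf) {}
          private void submitToGpu(CommandBuffer buf) {}
          private void swapBuffers() {}
      }

    One constraint worth noting: an OpenGL context can be current on only one thread at a time, so all actual GL calls have to stay on the submitting thread, as they do here.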

    Read the article

  • OpenGL Fast-Object Instancing Error

    - by HJ Media Studios
    I have some code that loops through a set of objects and renders instances of those objects. The list of objects that needs to be rendered is stored as a std::map, where an object of class MeshResource contains the vertices and indices with the actual data, and an object of class MeshRenderer defines the point in space the mesh is to be rendered at. My rendering code is as follows: glDisable(GL_BLEND); glEnable(GL_CULL_FACE); glDepthMask(GL_TRUE); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glEnable(GL_DEPTH_TEST); for (std::map<MeshResource*, std::vector<MeshRenderer*> >::iterator it = renderables.begin(); it != renderables.end(); it++) { it->first->setupBeforeRendering(); cout << "<"; for (unsigned long i =0; i < it->second.size(); i++) { //Pass in an identity matrix to the vertex shader- used here only for debugging purposes; the real code correctly inputs any matrix. uniformizeModelMatrix(Matrix4::IDENTITY); /** * StartHere fix rendering problem. * Ruled out: * Vertex buffers correctly. * Index buffers correctly. * Matrices correct? */ it->first->render(); } it->first->cleanupAfterRendering(); } geometryPassShader->disable(); glDepthMask(GL_FALSE); glDisable(GL_CULL_FACE); glDisable(GL_DEPTH_TEST); The function in MeshResource that handles setting up the uniforms is as follows: void MeshResource::setupBeforeRendering() { glEnableVertexAttribArray(0); glEnableVertexAttribArray(1); glEnableVertexAttribArray(2); glEnableVertexAttribArray(3); glEnableVertexAttribArray(4); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, iboID); glBindBuffer(GL_ARRAY_BUFFER, vboID); glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0); // Vertex position glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 12); // Vertex normal glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 24); // UV layer 0 glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 32); // Vertex color glVertexAttribPointer(4, 1, GL_UNSIGNED_SHORT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 44); //Material index } The code that renders the object is this: void MeshResource::render() { glDrawElements(GL_TRIANGLES, geometry->numIndices, GL_UNSIGNED_SHORT, 0); } And the code that cleans up is this: void MeshResource::cleanupAfterRendering() { glDisableVertexAttribArray(0); glDisableVertexAttribArray(1); glDisableVertexAttribArray(2); glDisableVertexAttribArray(3); glDisableVertexAttribArray(4); } The end result of this is that I get a black screen, although the end of my rendering pipeline after the rendering code (essentially just drawing axes and lines on the screen) works properly, so I'm fairly sure it's not an issue with the passing of uniforms. If, however, I change the code slightly so that the rendering code calls the setup immediately before rendering, like so: void MeshResource::render() { setupBeforeRendering(); glDrawElements(GL_TRIANGLES, geometry->numIndices, GL_UNSIGNED_SHORT, 0); } The program works as desired. I don't want to have to do this, though, as my aim is to set up vertex, material, etc. data once per object type and then render each instance, updating only the transformation information. The uniformizeModelMatrix works as follows: void RenderManager::uniformizeModelMatrix(Matrix4 matrix) { glBindBuffer(GL_UNIFORM_BUFFER, globalMatrixUBOID); glBufferSubData(GL_UNIFORM_BUFFER, 0, sizeof(Matrix4), matrix.ptr()); glBindBuffer(GL_UNIFORM_BUFFER, 0); }
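
    Not necessarily the fix for the black screen above, but the standard tool for the set-up-once, draw-many goal it describes is a vertex array object (OpenGL 3.0+), which records the attribute state so each draw needs only a single bind. A hedged sketch in LWJGL-style Java (the question's code is C++, but the calls map one-to-one; vboID, iboID, and the layout are assumed from the question):

      import static org.lwjgl.opengl.GL11.*;
      import static org.lwjgl.opengl.GL15.*;
      import static org.lwjgl.opengl.GL20.*;
      import static org.lwjgl.opengl.GL30.*;

      /** Once, at load time: record the attribute layout into a VAO. */
      static int createVao(int vboID, int iboID, int vertexSize) {
          int vao = glGenVertexArrays();
          glBindVertexArray(vao);
          glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, iboID); // element binding is captured by the VAO
          glBindBuffer(GL_ARRAY_BUFFER, vboID);         // not captured itself, but the pointers below are
          for (int i = 0; i <= 4; i++) glEnableVertexAttribArray(i);
          glVertexAttribPointer(0, 3, GL_FLOAT, false, vertexSize, 0);  // position
          glVertexAttribPointer(1, 3, GL_FLOAT, false, vertexSize, 12); // normal
          // ... attributes 2-4 exactly as in setupBeforeRendering() ...
          glBindVertexArray(0);
          return vao;
      }

      /** Per object, per frame: one bind replaces the whole setup call. */
      static void drawInstance(int vao, int numIndices) {
          glBindVertexArray(vao);
          glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, 0);
          glBindVertexArray(0);
      }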

    Read the article

  • Book suggestions for GUI related topics [closed]

    - by asrijaal
    In the past I've developed more on the server side, so no big GUI/graphics programming topics have crossed my way so far. I'm switching a little bit to the mobile platforms (iOS, Android) and would like to gather more info on the visual/graphics side. Using the SDK controls/widgets isn't that challenging, and I would love to get deeper into the view/presentation layer: animations, drawing itself, probably OpenGL. Any books that would take me a little deeper into understanding how drawing actually works would be helpful.

    Read the article

  • How can I support scrolling when using batched rendering for my tiles?

    - by dardanel
    I have a tiled map of 100x75 tiles, each 32x32 pixels. I want to use batching for performance, but I can't figure out how, because my game needs scrolling, and every frame I draw 22x16 tiles (my screen is 20x16 tiles). I thought about rebatching the tiles every frame; is that good, or is there a better way? Edit, to clarify: I want to use occlusion culling and batching at the same time, i.e. draw only the visible area and batch those tiles together. But there is something I couldn't figure out. When scrolling the screen with a translation matrix, once one row becomes invisible I bind a new row and batch everything again, and every batched object needs to be buffered again. So I rebatch the tiles and upload them to the VBO every time a row becomes invisible. I don't know whether this approach is efficient; that is my question, and I am open to any suggestions.
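
    One way to frame the rebatching described above: recompute the visible tile range from the camera each frame, and rebuild the batch only when that range actually changes, i.e. when scrolling crosses a tile boundary. A rough Java sketch (tile and screen sizes from the question; the buffer-fill call is a placeholder):

      class TileBatch {
          static final int TILE = 32, MAP_W = 100, MAP_H = 75;
          static final int VIEW_COLS = 22, VIEW_ROWS = 16;  // one spare column beyond the 20x16 screen

          private int lastCol = Integer.MIN_VALUE, lastRow = Integer.MIN_VALUE;

          void update(float cameraX, float cameraY) {
              int col = Math.max(0, (int) (cameraX / TILE));
              int row = Math.max(0, (int) (cameraY / TILE));
              // Rebuild and re-upload only when a new row/column scrolls in;
              // sub-tile motion is covered by the translation matrix alone.
              if (col != lastCol || row != lastRow) {
                  rebuild(col, row,
                          Math.min(col + VIEW_COLS, MAP_W),
                          Math.min(row + VIEW_ROWS, MAP_H));
                  lastCol = col;
                  lastRow = row;
              }
          }

          private void rebuild(int c0, int r0, int c1, int r1) {
              // Fill one vertex buffer with the quads for tiles [c0,c1) x [r0,r1)
              // and upload it in a single call (e.g. glBufferData / glBufferSubData).
          }
      }

    With ~350 visible tiles at a few bytes per vertex, a full re-upload on boundary crossings is a small transfer; the win comes from drawing the whole window as one batch instead of one draw call per tile.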

    Read the article

  • Calculating a circle or sphere along a vector

    - by Sparky
    Updated this post and the one at Math SE (http://math.stackexchange.com/questions/127866/calculating-a-circle-or-sphere-along-a-vector); hope this makes more sense. I previously posted a question (about half an hour ago) involving computations along line segments, but the question and discussion went really off track and were not what I was trying to get at. I am working with an FPS engine I am attempting to build in Java. The problem I am encountering is with hitboxing: I am trying to calculate whether or not a "shot" is valid. I am working with several approaches, and any insight would be helpful. I am not a native speaker of English nor skilled in math, so please bear with me. The player position is at P0 = (x0,y0,z0); the enemy is at P1 = (x1,y1,z1). I can of course compute the distance between them easily. The target needs a "hitbox" object, which is basically a square/rectangle/mesh either in front of, in, or behind them. Here are the solutions I am considering. I have ruled the first one out; it doesn't seem practical. [Place a "hitbox" a small distance in front of the target. Then I would be able to find the distance between the player and the hitbox, and between the hitbox and the target. It is my understanding that you can compute a circle with this information, and I could simply consider any shot within that circle a "hit". However, this seems not to be an optimal solution, because it requires a lot of calculations and is not fully accurate.] Input, please! Place the hitbox "in" the player. This seems like the better solution. In this case what I need is a way to calculate a circle along the vector at whatever position I wish (in this case, at the distance between the two objects). Then I can pick some radius that encompasses the whole player and count anything within this area as a "hit". I am open to your suggestions. I'm trying to do this on paper and have no familiarity with game engines. If any software folks out there think I'm doing this the hard way, I'm open to help! Also, anyone with JOGL/LWJGL experience, please chime in. Is this making sense?
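
    On the second option: "a circle along the vector at whatever distance" amounts to a sphere around the enemy plus a point-to-ray distance test, and the distance never has to be computed explicitly. A self-contained Java sketch:

      /** True if a ray from origin o along direction d passes within
       *  radius r of the point c, in front of the shooter.
       *  All vectors are {x, y, z}; d need not be normalized. */
      static boolean shotHits(double[] o, double[] d, double[] c, double r) {
          double[] oc = { c[0]-o[0], c[1]-o[1], c[2]-o[2] };  // shooter -> target
          double t = dot(oc, d) / dot(d, d);   // parameter of the closest point on the ray
          if (t < 0) return false;             // target is behind the shooter
          double[] p = { o[0] + t*d[0], o[1] + t*d[1], o[2] + t*d[2] };
          double dx = c[0]-p[0], dy = c[1]-p[1], dz = c[2]-p[2];
          return dx*dx + dy*dy + dz*dz <= r*r; // inside the hit sphere?
      }

      static double dot(double[] a, double[] b) {
          return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
      }

    Here o = P0, d = the shot's aim direction, c = P1, and r is the radius chosen to encompass the player; comparing squared distances avoids the square root.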

    Read the article

  • Theme sometimes fails to load on some UI elements

    - by Neil
    I have no idea if this error has been seen before or not. I have looked around and found people describing errors that sound the same, but none of the fixes work. My issue is that sometimes the theme does not load on some parts of the UI (the buttons and icons in this case) while the rest is just fine (window bars, etc.), so I have no idea what the issue is. It works just fine for the most part but sometimes has this bug, and restarting the system tends to fix it for some time, so it's hard to see how it could be a graphics card error. However, I am very new to Linux systems, so I may be missing something very fundamental here. Thank you in advance. This image shows my problem

    Read the article

  • How to ask developers to think about high resolution screens

    - by WhiteWind
    I just got an ASUS N56VZ laptop, and it's all good, but the screen resolution is 1920x1080 px. I am not asking how to scale UI elements. For anyone interested, I have found some tricks: increase the font size in system settings; increase the Unity dock size in MyUnity or in system settings in modern Ubuntu versions; tweak userChrome.js in Firefox to make buttons/panels/icons larger; add the DefaultZoomLevel extension to Firefox to make it zoom pages initially. But all of this is miserable, because there are some big bugs: window decoration elements are way too small to grab with the mouse, so I can't resize a window easily and I can't position my cursor quickly on the close/maximize buttons. Slider controls, such as in the sound volume dialog, are hardly clickable at all. The Unity top panel (status panel & tray) can hardly contain the bigger font, so it looks ugly, yet the icons stay the same size. Sometimes I can't read text because it is cropped (and I can't resize some dialogs, as they have a fixed size). Chromium is not usable at all (OK, that's not an Ubuntu problem, but the problem still exists). Java applications are not scalable (same as above). In Firefox I can get decent results in most cases, but the multiplication of the system font increase and the browser font increase makes system controls (comboboxes, lists, drop-down lists) extremely big, so I can't even control the zoom level on the page. IMHO, we should post a bug report (but what kind of bug?) and vote for it! The problem goes even deeper, but at least we should ask developers to think about it. So my question is: how can I post a report (the right words and the right place), and how can we (those who already have this problem, or who want it solved before their next hardware upgrade) vote for a faster solution? Any ideas?

    Read the article

  • Cheap ways to do scaling ops in shader?

    - by Nick Wiggill
    I've got an extensive world terrain that uses vec3 for the vertex position attribute. That's good, because the terrain has endless gradations due to the use of floating point. But I'm thinking about how to reduce the amount of data uploaded to the GPU. For my terrain, which uses discrete / grid-based vertex positions in x and z, it's pretty clear that I can replace my vec3s (floats, really) with shorts, halving the per-vertex position attribute cost from 12 bytes each to 6 bytes. Considering I've got little enough other vertex data, and an enormous amount of terrain data to push into the world, it's a major gain. Currently in my code, one unit in GLSL shaders is equal to 1m in the world. I like that scale. If I move over to using shorts, though, I won't be able to use the same scale, as I would then have a very blocky world where every step in height is an entire metre. So I see these potential solutions to scale the positional data correctly once it arrives at the vertex shader stage: Use 10:1 scaling, i.e. 1 short unit = 1 decimetre in CPU-side code. Do a division by 10 in the vertex shader to scale incoming decimetre values back to metres. Arbitrary (non-PoT) divisions tend to be slow, however. Use (some-power-of-two):1 scaling (e.g. 8:1), which enables the use of a bitshift (e.g. val >> 3) to do the division... not sure how performant this is in shaders, though. Not as intuitive to read values, but possibly quite a bit faster than div by a non-PoT value. Use a texture as lookup table. I've heard that this is really fast. Or whatever solutions others can offer to achieve the same results -- minimal vertex data with sensible scaling.
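
    A sketch of the power-of-two option, with the scale folded into the vertex shader as a single multiply: 1/8 is exactly representable in floating point, so multiplying by 0.125 gives the same value a true divide-by-8 (or a bitshift) would, at one multiply per vertex. Names are illustrative; the shader is held in a Java string for consistency with the other examples here:

      // GLSL: positions arrive as shorts, converted to float by GL, so one
      // multiply rescales 1 unit = 1/8 m back into metres.
      String terrainVertexShader =
            "#version 330 core\n"
          + "layout(location = 0) in vec3 position;\n"
          + "uniform mat4 mvp;\n"
          + "void main() {\n"
          + "    gl_Position = mvp * vec4(position * 0.125, 1.0);\n"
          + "}\n";

      // CPU side (LWJGL-style): declare the attribute as unnormalized GL_SHORT;
      // the driver hands the shader plain float values like 8.0, 9.0, 10.0.
      // glVertexAttribPointer(0, 3, GL11.GL_SHORT, false, stride, 0);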

    Read the article

  • Question about creating a sprite-based 2-D side scroller with scaling/zooming

    - by Arthur
    I'm just wondering if anyone can offer advice on how best to go about creating a 2-D game with zooming/scaling features akin to the early Samurai Shodown games. In this case it would be a side-scroller a la Metal Slug; the zooming would come in as more enemy sprites entered the screen, or when facing a large boss, a feature that would be both cosmetic and functional to the game. I've done some reading and noticed a few suggestions that included drawing different-sized sprites: a standard size and a zoomed-out size. Any thoughts? Thanks for your time.
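
    For the functional side of that zoom (pulling back as more fighters come on screen), one simple scheme is to fit the camera scale to the bounding box of the active sprites each frame, clamped and eased. A rough sketch with made-up names:

      class CameraZoom {
          float currentZoom = 1f;

          void update(java.util.List<float[]> positions, float viewW, float viewH) {
              if (positions.isEmpty()) return;
              float minX = Float.MAX_VALUE, maxX = -Float.MAX_VALUE;
              float minY = Float.MAX_VALUE, maxY = -Float.MAX_VALUE;
              for (float[] p : positions) {            // p = {x, y} of each sprite
                  minX = Math.min(minX, p[0]); maxX = Math.max(maxX, p[0]);
                  minY = Math.min(minY, p[1]); maxY = Math.max(maxY, p[1]);
              }
              float margin = 64f;                      // keep sprites off the screen edge
              float zoomX = viewW / (maxX - minX + 2 * margin);
              float zoomY = viewH / (maxY - minY + 2 * margin);
              float target = Math.min(1f, Math.min(zoomX, zoomY)); // never zoom past 1:1
              currentZoom += (target - currentZoom) * 0.1f;        // ease toward the target
          }
      }

    With a scheme like this, pre-drawn zoomed-out sprite sets become an art-quality choice rather than a requirement, since the same factor can simply scale the render.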

    Read the article

  • 2D Polygon Triangulation

    - by BleedObsidian
    I am creating a game engine using the JBox2D physics engine. It only allows you to create polygon fixtures with up to 8 vertices; to create a body with more than 8 vertices, you need to create multiple fixtures for the body. My question is: how can I split the polygons a user creates into smaller polygons for JBox2D? Also, what topology should I use when splitting the polygons, and why? (If JBox2D can have up to 8 vertices, why not split polygons into 8 vertices per polygon?)
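
    If the user-drawn polygon is convex, the split can indeed use the full 8-vertex budget: a fan decomposition where every slice reuses the first vertex and neighboring slices share an edge. A sketch (concave input would first need a convex decomposition, e.g. via ear clipping, before this step):

      import java.util.ArrayList;
      import java.util.List;

      /** Splits a convex polygon (vertices in order, each {x, y}) into
       *  fan-shaped pieces of at most maxVerts vertices each. */
      static List<float[][]> splitConvex(float[][] poly, int maxVerts) {
          List<float[][]> pieces = new ArrayList<>();
          int i = 1;
          while (i < poly.length - 1) {
              // Each piece: the shared apex plus up to (maxVerts - 1) consecutive vertices.
              int take = Math.min(maxVerts - 1, poly.length - i);
              float[][] piece = new float[take + 1][];
              piece[0] = poly[0];                      // common apex
              for (int k = 0; k < take; k++) piece[k + 1] = poly[i + k];
              pieces.add(piece);
              i += take - 1;                           // overlap one vertex so edges are shared
          }
          return pieces;
      }

    Calling splitConvex(poly, 8) on a 10-vertex convex polygon yields one 8-vertex piece and one 4-vertex piece sharing an edge, which keeps the fixture count low while staying under the limit.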

    Read the article

  • NVIDIA Additional Drivers Empty - maximum resolution 640x480 - Driver disappears

    - by Hannibal
    EDIT: Optimus card. For background, please read this thread: "You do not appear to be using the nvidia x server" (screenshot included), and this one: Ubuntu 11.10 problem with Nvidia. Thanks! I know, I know, yet another NVIDIA question. So I did all the research: I uninstalled and reinstalled nvidia-settings, the drivers, and nvidia-current from the PPA repositories, which are the most up to date, and I executed nvidia-xconfig. I have two major problems. One: the Additional Drivers dialog is empty! It doesn't list any driver, although one is installed. I have executed apt-get update too, but the list is still empty. Two: if I execute nvidia-xconfig, it writes a proper xorg.conf file, but after I restart, the maximum resolution I get is 640x480. I tried xrandr, but I can't add any resolution to display LVDS1; some weird error occurs. So I can't add a proprietary driver, and I can't boot with the xorg.conf file created by NVIDIA. What can I do? With some work (uninstalling nvidia-current and installing libgl1-mesa-glx) I got some use out of my card, because the resolution got better, and I added Bumblebee too because I have multiple video cards, but the list is still empty. I don't know what to do at this point! Also, and this is the most important part: when I first installed Ubuntu 11.10 yesterday, I saw the driver! The driver was there. Then, of course, I updated every package from the internet, and after that it was gone, and I can't bring it back. So there must be something wrong with one of the updates. But which? Thanks for any extra info you can provide! I'm really desperate to solve this issue.

    Read the article

  • Rotate Rigged and Animated Scene?

    - by Nick
    I have a rigged and animated mesh that I need to import into Unity. We have several characters that all use the same script and access their bones to do procedural animations as well. The problem is that the new model I was given is facing the wrong way: instead of facing forward, the model is facing to the right. Is there any way to rotate the model together with its animations, without screwing them up, so that it will import properly into Unity facing forward? Because of the way it was rigged, selecting everything in the scene and just rotating it by 90 degrees ruins some of the animations, so I need a program that can fix this.

    Read the article

  • Why is Firefox changing the color calibration of this image?

    - by eoinoc
    The symptom of my problem is that a hex color in a PNG image does not match a CSS-defined color that uses the same hex code. This only happens in Firefox when gfx.color_management.mode is set to 2 (tagged images only) rather than 0 (off). (Firefox's ICC color correction is described here.) The image is http://dzfk93w6juz0e.cloudfront.net/images/background-top-light.png, which at the bottom has the color #c8e8bd. However, the shade of green differs from that color when Firefox color calibration is enabled. Is this image inadvertently "tagged" for color correction?

    Read the article

  • Rendering multiple squares fast?

    - by Sam
    So I'm taking my first steps with OpenGL development on Android, and I'm kind of stuck on some serious performance issues... What I'm trying to do is render a whole grid of single-colored squares onto the screen, and I'm getting framerates of ~7 FPS. The squares are 9 px in size right now, with a one-pixel border in between, so I get a few thousand of them. I have a class Square, and the renderer iterates over all Squares every frame and calls the draw() method of each (just the iteration is fast enough; with no OpenGL code the whole thing runs smoothly at 60 FPS). Right now the draw() method looks like this: // Prepare the square coordinate data GLES20.glVertexAttribPointer(mPositionHandle, COORDS_PER_VERTEX, GLES20.GL_FLOAT, false, vertexStride, vertexBuffer); // Set color for drawing the square GLES20.glUniform4fv(mColorHandle, 1, color, 0); // Draw the square GLES20.glDrawElements(GLES20.GL_TRIANGLES, drawOrder.length, GLES20.GL_UNSIGNED_SHORT, drawListBuffer); So it's actually only 3 OpenGL calls. Everything else (loading shaders, filling buffers, getting the appropriate handles, etc.) is done in the constructor, and things like the program and the handles are static attributes. What am I missing here; why is it rendering so slowly? I've also tried loading the buffer data into VBOs, but that was actually slower... maybe I did something wrong though. Any help greatly appreciated! :)
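
    The usual remedy for this pattern is batching: put every square's triangles into one buffer and issue a single draw call, so the per-frame GL call count stops scaling with the number of squares. A sketch using android.opengl.GLES20 (the position handle and per-square layout are assumed from the question's setup):

      import android.opengl.GLES20;
      import java.nio.ByteBuffer;
      import java.nio.ByteOrder;
      import java.nio.FloatBuffer;

      class SquareBatch {
          // Two triangles, three vertices each, x and y per vertex.
          static final int FLOATS_PER_SQUARE = 6 * 2;

          /** Builds one client-side buffer holding every square's triangles. */
          static FloatBuffer build(float[] xy, float size) {
              int n = xy.length / 2;
              FloatBuffer fb = ByteBuffer.allocateDirect(n * FLOATS_PER_SQUARE * 4)
                      .order(ByteOrder.nativeOrder()).asFloatBuffer();
              for (int i = 0; i < n; i++) {
                  float x = xy[2 * i], y = xy[2 * i + 1];
                  fb.put(new float[] {
                      x, y,  x + size, y,  x, y + size,                     // lower-left triangle
                      x + size, y,  x + size, y + size,  x, y + size });    // upper-right triangle
              }
              fb.position(0);
              return fb;
          }

          /** One draw call for the whole grid instead of one per square. */
          static void draw(FloatBuffer batch, int positionHandle, int vertexCount) {
              GLES20.glEnableVertexAttribArray(positionHandle);
              GLES20.glVertexAttribPointer(positionHandle, 2, GLES20.GL_FLOAT, false, 0, batch);
              GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, vertexCount);
          }
      }

    If the squares need individual colors, a per-vertex color attribute in the same buffer keeps it to one draw call; a per-square uniform would force the loop back in.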

    Read the article

  • What's wrong with this OpenGL model picking code?

    - by openglNewbie
    I am making a simple model viewer using OpenGL. When I want to pick an object, OpenGL returns nothing or an object that is in another place. This is my code: GLuint buff[1024] = {0}; GLint hits,view[4]; glSelectBuffer(1024,buff); glGetIntegerv(GL_VIEWPORT, view); glMatrixMode(GL_PROJECTION); glPushMatrix(); glLoadIdentity(); gluPickMatrix(x,y,1.0,1.0,view); gluPerspective(45,(float)view[2]/(float)view[4],1.0,1500.0); glMatrixMode(GL_MODELVIEW); glRenderMode(GL_SELECT); glLoadIdentity(); //I make the same transformations for normal render glTranslatef(0, 0, -zoom); glMultMatrixf(transform.M); glInitNames(); glPushName(-1); for(int j=0;j<allNodes.size();j++) { glLoadName(allNodes.at(j)->id); allNodes.at(j)->Draw(textures); } glPopName(); glMatrixMode(GL_PROJECTION); glPopMatrix(); hits = glRenderMode(GL_RENDER);

    Read the article

  • Ubuntu 12.04 - NVIDIA GT 525M Error

    - by talha06
    I removed all packages associated with NVIDIA from the Synaptic Package Manager and installed nvidia-current. When I open NVIDIA Settings, it gives an error like this: You do not appear to be using the NVIDIA X driver. Please edit your X configuration file (just run `nvidia-xconfig` as root), and restart the X server. Please help me. My laptop is a Dell Inspiron N5110 with an NVIDIA GeForce GT 525M.

    Read the article

  • Ubuntu live cd : black screen and blinking cursor

    - by IFasel
    I am trying to install Ubuntu 12.04 on my computer. I can get to the purple screen on the live CD, but if I then choose "Install Ubuntu", I get a black screen with a blinking cursor (and nothing else happens). My PC: Acer Aspire M3920, CPU i5-2300, 8 GB RAM, NVIDIA GT 405. What I have already tried: 12.04 and the 13.04 daily build; a live USB and a live DVD; the boot options nomodeset and acpi=off. I googled a lot, and it seems it could be a graphics card problem. Do you know any other boot options I could try? UPDATE: This is not a duplicate; I've tried all the common boot options (nomodeset, noacpi...) and they don't change anything. With the option "nosplash" (instead of "quiet splash") I can see what happens before the forever-blinking cursor: [sdg] no caching mode page present [sdg] assuming drive cache: write through ata8.00: exception Emask 0x52 ... frozen ata8: SError: { RecovData RecovComm UnrecovData ... } ata8.00: failed command: IDENTIFY PACKET DEVICE ... ata8.00: status: { DRDY } ata8: hard resetting link Does somebody know what this means? N.B. Astonishingly, Puppy Linux boots fine (but Debian, Fedora, and Ubuntu do not). Solution: In fact, it was not a graphics card problem. I had to disconnect the DVD drive and connect it to another free SATA connector (I don't really understand why Ubuntu had trouble with that connector while Windows 7 did not). After that, everything worked fine.

    Read the article

  • My mouse blinks!

    - by Jeffrey
    Hello, I'm new to Linux systems, and I have the same problem in every distro: whenever I move my mouse or do anything with it, the cursor blinks! I think it's an X.Org problem or a driver problem. The only time it does not do that is when I start up with the low-resolution error, using an xorg.conf from another graphics chipset, and I don't think that's the best choice. Please help me if you can; I have an ATI Rage 128. Thanks a lot for your help, everyone.

    Read the article

  • Vertex data split into separate buffers or in one structure?

    - by kiba2
    Is it better to have all vertex data in one structure, like this: class MyVertex { int x,y,z; int u,v; int normalx, normaly, normalz; } Or to have each component (location, normal, texture coordinates) in separate arrays/buffers? To me it always seemed logical to keep the data grouped together in one structure, because the components would be the same for each instance of a shared vertex, and that seems to hold for things like character models (e.g. the normal should be an average of adjacent normals for smooth lighting). One instance where this doesn't seem to work is other kinds of meshes, like a cube, where the texture coordinates for each face may be the same but come out different where the vertices are shared. Does everybody normally keep them separate? Won't this make them less space efficient if there needs to be an instance of texture coordinates and normals for each triangle vertex (they won't be indexed)? Can OpenGL even handle this mixing of indexed (for location) and non-indexed buffers in the same VBO?
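
    Both layouts go through the same attribute-pointer call; interleaving is purely a matter of stride and offset. A sketch of the interleaved case for the struct above (LWJGL-style, using floats instead of ints, buffer creation omitted):

      import static org.lwjgl.opengl.GL11.GL_FLOAT;
      import static org.lwjgl.opengl.GL20.*;

      // Interleaved layout matching MyVertex:
      // position (3 floats), uv (2 floats), normal (3 floats) -> stride 8 * 4 = 32 bytes.
      int stride = 8 * 4;
      glEnableVertexAttribArray(0);
      glEnableVertexAttribArray(1);
      glEnableVertexAttribArray(2);
      glVertexAttribPointer(0, 3, GL_FLOAT, false, stride, 0);   // x, y, z
      glVertexAttribPointer(1, 2, GL_FLOAT, false, stride, 12);  // u, v
      glVertexAttribPointer(2, 3, GL_FLOAT, false, stride, 20);  // normal
      // Separate buffers would instead bind a different GL_ARRAY_BUFFER
      // before each glVertexAttribPointer call, with stride 0.

    On the indexing question: glDrawElements applies a single index to every attribute at once, so a vertex that shares a position but not a UV (the cube case) has to be duplicated either way; there is no per-attribute index buffer in core OpenGL.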

    Read the article

  • Microsoft XNA code sample won't work with Blender model

    - by FreakinaBox
    I downloaded this code sample and integrated it into my game: http://xbox.create.msdn.com/en-US/education/catalog/sample/mesh_instancing It works with the model that they supplied, but throws an exception whenever I use one of my models: The current vertex declaration does not include all the elements required by the current vertex shader. TextureCoordinate0 is missing. I tried plugging my model into their original source code, and the same thing happens. My model is an FBX from Blender and has a texture. This is the call that throws the error: GraphicsDevice.DrawInstancedPrimitives( PrimitiveType.TriangleList, 0, 0, meshPart.NumVertices, meshPart.StartIndex, meshPart.PrimitiveCount, instances.Length );

    Read the article

  • Cannot boot from K/Ubuntu install disk on my UEFI system

    - by user93241
    I just got a new system and have been trying to get it set up w/ Win7 & Kubuntu dual-boot, but I've got a major problem. The BIOS of my motherboard (an Asus Crosshair 990FX) is strictly UEFI -- there is no legacy support mode available. I've been reading up on how to get Kubuntu installed in UEFI mode but no matter what I try I cannot seem to even boot into my install CD/USB key properly. I can get as far as the selection screen ("Try Kubuntu", "Install Kubuntu"...) but this screen starts off not appearing correctly. If I try moving the cursor around it sometimes seems to correct itself and show me my choices. But once I select "Try Kubuntu" it starts loading, the screen goes black and then proceeds to flicker -- about once every 5-10 seconds or so. This continues indefinitely. I've tried this with both Kubuntu & Ubuntu installation media, even the AMD64+Mac Ubuntu variety that is supposed to be a lot more flexible w.r.t. UEFI. The only hint I've had that the system might have booted correctly is a little drum sound that plays when booting from the Ubuntu install disk. Well, that and the fact that when I hit my system's power button it seems to shut down correctly, even ejecting the CD at the end. This might be a video driver issue; my system has two nVidia 550's, one of which is attached to my primary monitor. (The secondary isn't hooked up yet.) I'll keep looking over similar questions but any advice would be greatly appreciated. UPDATE: I've tried booting into my 12.04 install CD twice now, each time using two different options supplied by my BIOS. One seemed to offer the ability to boot into my CD under UEFI mode -- this didn't even produce the initial boot menu. The other method offers the ability to boot into my CD NOT under UEFI mode. This DOES produce the boot menu, but after this point it seems I still cannot get to a proper video mode to see what's going on.

    Read the article
