Search Results

Search found 8172 results on 327 pages for 'vector graphics'.


  • Blank screen during boot after clean Ubuntu 11.10 install (Intel N10 graphics)

    - by Coen
    After a clean install of Ubuntu 11.10 on my Asus Eee PC 1005P, Ubuntu seems to boot correctly, except for initialization of the LCD screen. What I observe:

    1. I choose Ubuntu 11.10 in the GRUB 2 menu.
    2. A blank screen with a blinking cursor in the top left of the screen, for 15-20 seconds.
    3. The Ubuntu logo with 5 red dots in the center of the screen, for 1 second.
    4. The LCD screen is entirely blank.
    5. The startup sound plays (Ubuntu is configured to auto-login). Still, the LCD screen is entirely blank.

    When I press Fn-F8 (the switch between the LCD screen and external VGA), the LCD screen shows my desktop correctly and everything seems to work fine, except for the adjust-contrast buttons (Fn-F5 and Fn-F6), which seem to cycle through random brightness levels, something like: 0% - 50% - 20% - 0% - 20% - 0%. Any ideas what's causing this or how to solve it?

        coen@elpicu:~$ lspci -v
        00:02.0 VGA compatible controller: Intel Corporation N10 Family Integrated Graphics Controller (prog-if 00 [VGA controller])
            Subsystem: ASUSTeK Computer Inc. Device 83ac
            Flags: bus master, fast devsel, latency 0, IRQ 44
            Memory at f7e00000 (32-bit, non-prefetchable) [size=512K]
            I/O ports at dc00 [size=8]
            Memory at d0000000 (32-bit, prefetchable) [size=256M]
            Memory at f7d00000 (32-bit, non-prefetchable) [size=1M]
            Expansion ROM at <unassigned> [disabled]
            Capabilities: <access denied>
            Kernel driver in use: i915
            Kernel modules: i915

        00:02.1 Display controller: Intel Corporation N10 Family Integrated Graphics Controller
            Subsystem: ASUSTeK Computer Inc. Device 83ac
            Flags: bus master, fast devsel, latency 0
            Memory at f7e80000 (32-bit, non-prefetchable) [size=512K]
            Capabilities: <access denied>

    Read the article

  • Do I need both a Point and a Vector object? Or is using a Vector object to represent a Point OK?

    - by JCM
    Structuring the components of an engine that I am developing along with a friend (for learning purposes), I ran into this doubt. Initially we had a Point constructor, like the following:

        var Point = function( x, y ) {
            this.x = x;
            this.y = y;
        };

    But then we started to add some vector math to it, and decided to rename it to Vector2d. Now some methods are a bit confusing (at least in my opinion), such as the following, which is used to make a line:

        // before the renaming of Point to Vector2, the parameters were startingPoint and endingPoint
        Geometry.Line = function( startingVector, endingVector ) {
            //...
        };

    Should I make a specific constructor for the Point object, or are there no problems in defining a point as a vector? I know a vector has magnitude and direction, but I see so many people using a vector just to represent the position of an object.
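    One common answer, sketched below in C++ (the type and member names are illustrative, not from the question's engine): keep a single vector type for the math and add a type alias for positions, so signatures like Line(start, end) still read as operating on points.

        #include <cmath>

        // One concrete 2D vector type carries all the math...
        struct Vector2 {
            float x = 0.0f, y = 0.0f;
            Vector2 operator+(const Vector2& o) const { return {x + o.x, y + o.y}; }
            Vector2 operator-(const Vector2& o) const { return {x - o.x, y - o.y}; }
            float length() const { return std::sqrt(x * x + y * y); }
        };

        // ...and an alias documents intent: a Point2 is a position, not a direction.
        using Point2 = Vector2;

        struct Line {
            Point2 start;   // reads as "position" even though the storage is Vector2
            Point2 end;
            Vector2 direction() const { return end - start; }  // point minus point is a direction
        };

    The alias costs nothing at runtime; languages without aliases (like the question's JavaScript) usually just settle on the vector type and rely on parameter names to signal intent.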

    Read the article

  • Calculating a circle or sphere along a vector

    - by Sparky
    Updated this post and the one at Math SE (http://math.stackexchange.com/questions/127866/calculating-a-circle-or-sphere-along-a-vector); hope this makes more sense. I previously posted a question (about half an hour ago) involving computations along line segments, but the question and discussion went off track and were not what I was trying to get at.

    I am working on an FPS engine I am attempting to build in Java. The problem I am encountering is with hitboxes: I am trying to calculate whether or not a "shot" is valid. I am considering several approaches and any insight would be helpful. I am not a native speaker of English nor skilled in math, so please bear with me.

    The player is at P0 = (x0, y0, z0) and the enemy is at P1 = (x1, y1, z1). I can of course compute the distance between them easily. The target needs a "hitbox" object, which is basically a square/rectangle/mesh either in front of, in, or behind them. Here are the solutions I am considering:

    1. (I have ruled this out; it doesn't seem practical.) Place a "hitbox" a small distance in front of the target. Then I would be able to find the distance between the player and the hitbox, and between the hitbox and the target. It is my understanding that you can compute a circle with this information, and I could simply count any shot within that circle as a "hit". However, this seems not to be an optimal solution, because it requires a lot of calculations and is not fully accurate.

    2. Place the hitbox "in" the player. This seems like the better solution. In this case what I need is a way to calculate a circle along the vector, at whatever position I wish (in this case, at the distance between the two objects). Then I can pick some radius that encompasses the whole player and count anything within this area as a "hit". Input, please!

    I am open to your suggestions. I'm trying to do this on paper and have no familiarity with game engines. If any software folk out there think I'm doing this the hard way, I'm open to help! Also, anyone with JOGL/LWJGL experience, please chime in. Is this making sense?
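    For option 2, the test usually reduces to a ray-sphere check: place a sphere of the chosen radius at the enemy's position and ask whether the shot ray passes within that radius. A minimal sketch (in C++ for consistency with the other examples on this page; the math ports directly to Java, and all names are illustrative):

        #include <cmath>

        struct Vec3 { float x, y, z; };

        static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
        static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

        // Does a shot fired from 'origin' along the normalized direction 'dir'
        // pass within 'radius' of 'target'? Project the target onto the ray,
        // then compare the perpendicular distance against the radius.
        bool shotHits(Vec3 origin, Vec3 dir, Vec3 target, float radius) {
            Vec3 toTarget = sub(target, origin);
            float t = dot(toTarget, dir);                    // distance along the ray to the closest point
            if (t < 0.0f) return false;                      // target is behind the shooter
            float distSq = dot(toTarget, toTarget) - t * t;  // squared perpendicular distance
            return distSq <= radius * radius;                // inside the circle of radius r at distance t
        }

    The "circle along the vector" at distance t is exactly the sphere's cross-section that the ray would have to enter, which is why no explicit circle ever needs to be constructed.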

    Read the article

  • Physics/Graphics Components

    - by Brett Powell
    I have spent the last 48 hours reading up on object-component systems, and feel ready to start implementing one. I have the base Object and Component classes created, but now that I need to start creating the actual components I am a bit confused. When I think of them in terms of a HealthComponent, or something else that would basically just be a property, it makes perfect sense. When it is something more general, like a Physics or Graphics component, I get confused. My Object class looks like this so far (if you notice any changes I should make, please let me know; I'm still new to this):

        typedef unsigned int ID;

        class GameObject {
        public:
            GameObject(ID id, Ogre::String name = "");
            ~GameObject();

            ID &getID();
            Ogre::String &getName();
            virtual void update() = 0;

            // Component functions
            void addComponent(Component *component);
            void removeComponent(Ogre::String familyName);

            template<typename T>
            T* getComponent(Ogre::String familyName) {
                return dynamic_cast<T*>(m_components[familyName]);
            }

        protected:
            // Properties
            ID m_ID;
            Ogre::String m_Name;
            float m_flVelocity;
            Ogre::Vector3 m_vecPosition;

            // Components
            std::map<std::string, Component*> m_components;
            std::map<std::string, Component*>::iterator m_componentItr;
        };

    Now the problem I am running into is: what would the general population put into components such as Physics or Graphics? For Ogre (my rendering engine) the visible objects will consist of one or more Ogre::SceneNodes to attach them to the scene, one or more Ogre::Entities to show the visible meshes, and so on. Would it be best to add multiple GraphicComponents to the Object and let each GraphicComponent handle one SceneNode/Entity, or is the idea to have one of each component needed? For Physics I am even more confused. I suppose creating a RigidBody and keeping track of mass/inertia/etc. would make sense, but I am having trouble thinking of how to actually put specifics into a component. Once I get a couple of these "required" components done, I think it will make a lot more sense. As of right now, though, I am still a bit stumped.
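    One workable answer to the "multiple SceneNodes" question is the one-renderable-per-component style: each GraphicsComponent owns exactly one SceneNode/Entity pair, and an object with several meshes simply holds several such components. A minimal sketch, assuming the question's Component base class and Ogre 1.x-style scene creation (check the exact Ogre calls against your version):

        class GraphicsComponent : public Component {
        public:
            GraphicsComponent(Ogre::SceneManager *sceneMgr,
                              const Ogre::String &entityName,
                              const Ogre::String &meshName) {
                m_entity = sceneMgr->createEntity(entityName, meshName);
                m_node = sceneMgr->getRootSceneNode()->createChildSceneNode();
                m_node->attachObject(m_entity);
            }

            // Called from GameObject::update(): mirror the object's logical
            // position into the scene graph.
            void setPosition(const Ogre::Vector3 &pos) { m_node->setPosition(pos); }

        private:
            Ogre::SceneNode *m_node;    // one node...
            Ogre::Entity    *m_entity;  // ...one renderable, per component
        };

    A PhysicsComponent follows the same shape: it owns one rigid body (mass, inertia, velocity) and, each update, writes the simulated position back into the GameObject for the graphics side to read.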

    Read the article

  • Ubuntu 12.04.3 - Graphics Driver: Default vs Nvidia 319-recommended vs Nvidia 319-updated

    - by Navraj
    Background: I switched from the default driver to nvidia-319-recommended. I am guessing that this update has caused issues with keyboard shortcuts, made the battery status icon disappear, and caused power management problems, as speculated by others. Closing the laptop lid no longer suspends the laptop; it has to be done manually by clicking 'Suspend' before closing the lid. Question: How do you restore the original/default graphics driver? Thanks for your help. Regards

    Read the article

  • Graphics driver being reported as Gallium 0.4 on llvmpipe (LLVM 0x300) instead of intel

    - by schonjones
    I have an integrated Intel 945GM in a Toshiba laptop. Previously the graphics driver was reported correctly, but at some point this changed. I've noticed generally poor performance, and though the machine should meet the minimum requirements for Unity 3D, it is using Unity 2D. Under the Details panel in System Settings the driver is now reported as Gallium 0.4 on llvmpipe (LLVM 0x300). Any help would be appreciated; I have searched Google for hours trying to find an answer.

    Read the article

  • Dual Frame Buffer on Ubuntu 12.04 Intel HD Graphics 4600 i7-4770

    - by user3692512
    I have 2 monitors connected to the PC, one via HDMI and one via DVI, on Intel integrated HD 4600 graphics. As far as I understand, both monitors are currently driven by the same framebuffer, /dev/fb0. How can I detach them and create 2 framebuffers at startup, so that I can write directly to the second monitor through /dev/fb1 without touching /dev/fb0, leaving the X server to run normally on the first?

    Read the article

  • Creating a mask from a graphics context

    - by Magic Bullet Dave
    I want to be able to create a greyscale image with no alpha from a PNG in the app bundle. This works, and I get an image created:

        // Create a graphics context the size of the overlapping rectangle
        UIGraphicsBeginImageContext(rectangleOfOverlap.size);
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        // More stuff
        CGContextDrawImage(ctx, drawRect2, [UIImage imageNamed:@"Image 01.png"].CGImage);
        // Create the new UIImage from the context
        UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();

    However, the resulting image is 32 bits per pixel and has an alpha channel, so when I use CGImageCreateWithMask it doesn't work. I've tried creating a bitmap context thus:

        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
        CGContextRef ctx = CGBitmapContextCreate(nil, rectangleOfOverlap.size.width,
            rectangleOfOverlap.size.height, 8, rectangleOfOverlap.size.width,
            colorSpace, kCGImageAlphaNone);

    but UIGraphicsGetImageFromCurrentImageContext returns zero and the resulting image is not created. Am I doing something dumb here? Any help would be greatly appreciated. Regards, Dave
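    The likely culprit: UIGraphicsGetImageFromCurrentImageContext only reads the context pushed onto UIKit's stack by UIGraphicsBeginImageContext; a context made with CGBitmapContextCreate is never "current", so the call returns nil. A sketch of the usual fix, reading the image straight out of the bitmap context instead (width, height, drawRect, and sourceImage are stand-ins for the question's values):

        // Build the greyscale, alpha-free context as before.
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
        CGContextRef ctx = CGBitmapContextCreate(NULL,
                                                 width, height,  // pixels
                                                 8,              // bits per component
                                                 width,          // bytes per row for 8-bit grey
                                                 colorSpace,
                                                 kCGImageAlphaNone);

        // Draw the source image (a CGImageRef) into it.
        CGContextDrawImage(ctx, drawRect, sourceImage);

        // Read the result directly from the bitmap context: 8 bits per pixel,
        // no alpha, so it is usable with CGImageCreateWithMask.
        CGImageRef greyImage = CGBitmapContextCreateImage(ctx);

        // ... use greyImage (e.g. wrap it via UIImage's imageWithCGImage:) ...

        CGImageRelease(greyImage);
        CGContextRelease(ctx);
        CGColorSpaceRelease(colorSpace);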

    Read the article

  • OpenGL - have object follow mouse

    - by kevin james
    I want to have an object follow my mouse around the screen in OpenGL. (I am also using GLEW, GLFW, and GLM.) The best idea I've come up with is:

    Get the coordinates within the window with glfwGetCursorPos. The window was created with

        window = glfwCreateWindow( 1024, 768, "Test", NULL, NULL);

    and the code to get the coordinates is

        double xpos, ypos;
        glfwGetCursorPos(window, &xpos, &ypos);

    Next, I use GLM's unProject to get the coordinates in "object space":

        glm::vec4 viewport = glm::vec4(0.0f, 0.0f, 1024.0f, 768.0f);
        glm::vec3 pos = glm::vec3(xpos, ypos, 0.0f);
        glm::vec3 un = glm::unProject(pos, View*Model, Projection, viewport);

    There are two potential problems I can already see. The viewport is fine: the x,y coordinates of the lower left are indeed 0,0, and it is indeed a 1024x768 window. However, the position vector I create doesn't seem right. The z coordinate should probably not be zero; glfwGetCursorPos returns 2D coordinates, and I don't know how to go from there to 3D window coordinates, especially since I am not sure what the third dimension of window coordinates even means (since computer screens are 2D).

    Then, I am not sure if I am using unProject correctly. Assume the View, Model, and Projection matrices are all OK. If I passed in the correct position vector in window coordinates, does the unProject call give me the coordinates in object space? I think it does, but the documentation is not clear.

    Finally, for each vertex of the object that I want to follow the mouse around, I just increment the x coordinate by un[0], the y coordinate by -un[1], and the z coordinate by un[2]. However, since the position vector being unprojected is likely wrong, this is not giving good results: the object does move as my mouse moves, but it is offset quite a bit (i.e. moving the mouse a lot doesn't move the object that much, and the z coordinate is very large). I actually found that the z coordinate un[2] is always the same value no matter where my mouse is, probably because the position vector I pass into unProject always has 0.0 for z.

    Edit: The (incorrectly) unprojected x-values range from about -0.552 to 0.552, and the y-values from about -0.411 to 0.411.
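    Two concrete gaps can be filled here: GLFW's cursor origin is the top-left with y growing downward, while OpenGL window coordinates start at the bottom-left; and the missing z window coordinate is the depth-buffer value under the cursor (0.0 at the near plane, 1.0 at the far plane). A sketch using the question's own variables (window, View, Model, Projection):

        double xpos, ypos;
        glfwGetCursorPos(window, &xpos, &ypos);

        // Flip y: GLFW counts from the top, OpenGL from the bottom.
        float winX = (float)xpos;
        float winY = 768.0f - (float)ypos;

        // Read the depth under the cursor to get the third window coordinate.
        GLfloat depth;
        glReadPixels((GLint)winX, (GLint)winY, 1, 1,
                     GL_DEPTH_COMPONENT, GL_FLOAT, &depth);

        glm::vec4 viewport = glm::vec4(0.0f, 0.0f, 1024.0f, 768.0f);
        glm::vec3 win = glm::vec3(winX, winY, depth);

        // With View*Model as the matrix argument this yields object space;
        // pass just View to get world space instead.
        glm::vec3 objPos = glm::unProject(win, View * Model, Projection, viewport);

    Unprojecting with z fixed at 0.0, as in the question, always lands on the near plane, which explains both the constant z and the small, fixed x/y ranges observed in the edit.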

    Read the article

  • Problems with graphics of Sony Vaio Z

    - by dpcat237
    Hello, I have a problem with my Sony Vaio Z VPCZ1. It has a physical GPU selector switch which the Linux kernel does not detect, so after GRUB I see a black display (I have tried different distributions of Ubuntu and other Linux OSes). I read that the same hybrid-graphics problem was solved in Ubuntu 10.10, but not in my case. I found solutions (not easy to do) for older models, but I'm not an expert in Linux and would prefer to ask people with more experience first. Can somebody help me? Has anyone installed Ubuntu on the same laptop? PS: for more information I found different webpages: http://goo.gl/ktvq Thanks, Regards

    Read the article

  • HDMI with Intel Mobile 4 Graphics Controller

    - by roel
    When connecting my TV over HDMI, it shows up in display settings and xrandr:

        Aspire1825PTZ:~$ lspci | grep VGA
        00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07)
        Aspire1825PTZ:~$ xrandr
        Screen 0: minimum 320 x 200, current 1366 x 768, maximum 8192 x 8192
        LVDS1 connected 1366x768+0+0 (normal left inverted right x axis y axis) 256mm x 144mm
           1366x768       60.0*+
           1360x768       59.8     60.0
           1024x768       60.0
           800x600        60.3     56.2
           640x480        59.9
        VGA1 disconnected (normal left inverted right x axis y axis)
        HDMI1 connected (normal left inverted right x axis y axis)
           720x576        50.0
           720x480        59.9
        DP1 disconnected (normal left inverted right x axis y axis)
        HDMI2 disconnected (normal left inverted right x axis y axis)
        DP2 disconnected (normal left inverted right x axis y axis)
        DP3 disconnected (normal left inverted right x axis y axis)
        TV1 disconnected (normal left inverted right x axis y axis)

    However, the resolution is way too low and the screen remains black. Everything works fine in Windows, but I'd rather not reboot just to watch a film... Could anyone help me, please?

    Read the article

  • Low level Linux graphics

    - by math4tots
    For educational purposes, I'd like to write an application in a Linux environment that can process keyboard events and draw graphics without huge dependencies like X or SDL. I presume that this must be possible, because X and SDL are just programs themselves, so they must rely on other methods inherent to the environment. Is this understanding correct? If so, where might I learn to write such a program? My limited experience tells me that it would involve making calls to the kernel and/or writing to special files; however, I haven't been able to find any tutorials on the matter (I am not even sure what to Google). Also, in case it is relevant, I am running Debian Squeeze on VirtualBox. I used a netinst CD without networking, so there isn't much installed on it currently. I will install gcc, but I am hoping I can get by with nothing more.
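    The "special files" hunch is right: on a virtual terminal you can draw by mmap()ing the framebuffer device /dev/fb0, and read keys from /dev/input/event* in the same spirit. A minimal sketch, assuming a console session (not under X) with permission to open /dev/fb0 and a 32-bits-per-pixel mode:

        #include <fcntl.h>
        #include <linux/fb.h>
        #include <sys/ioctl.h>
        #include <sys/mman.h>
        #include <unistd.h>
        #include <stdint.h>
        #include <stdio.h>

        int main(void) {
            int fd = open("/dev/fb0", O_RDWR);
            if (fd < 0) { perror("open /dev/fb0"); return 1; }

            struct fb_var_screeninfo vinfo;  // resolution, bits per pixel
            struct fb_fix_screeninfo finfo;  // bytes per scanline
            ioctl(fd, FBIOGET_VSCREENINFO, &vinfo);
            ioctl(fd, FBIOGET_FSCREENINFO, &finfo);

            size_t size = (size_t)finfo.line_length * vinfo.yres;
            uint8_t *fb = (uint8_t *)mmap(NULL, size, PROT_READ | PROT_WRITE,
                                          MAP_SHARED, fd, 0);
            if (fb == MAP_FAILED) { perror("mmap"); return 1; }

            // Fill a 100x100 red square at (50, 50), assuming 32 bpp XRGB.
            for (unsigned y = 50; y < 150 && y < vinfo.yres; ++y)
                for (unsigned x = 50; x < 150 && x < vinfo.xres; ++x)
                    *(uint32_t *)(fb + y * finfo.line_length + x * 4) = 0x00FF0000;

            munmap(fb, size);
            close(fd);
            return 0;
        }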

    Read the article

  • Minecraft flickers sometimes and colors get buggy on an Intel HD Graphics 3000

    - by Oskar
    I really like Ubuntu, but I always had to switch back to Windows because I couldn't get my Intel HD Graphics 3000 to work. Now that 11.10 is out, I'm trying to get things working in this release so I can finally stay with Ubuntu and use it. Things seem more stable here, but they're still a bit fishy. I'm testing with Minecraft. Currently there's only one minor bug: the game flickers from time to time and the colors get buggy or something. I read that maybe I should update to kernel 3.1? Maybe 32-bit Ubuntu is better? It was impossible to play Minecraft on 11.04, but 11.10 is much more stable.

    Read the article

  • Drawing graphics in Java game

    - by wolf
    I am quite new to game development, so here is a question (maybe a stupid one): in my sidescroller I have a bunch of different graphics objects that I need to draw (player, background tiles, creatures, projectiles, etc.). Most tutorials I've read so far show that each object has its own draw method, which is then called from some other method. What if I had one method that does all the drawing? Let's say I keep all my objects in an array or queue (or multiple arrays), then go through each of them, get its image, and draw it. So, basically: would it be better (and why) to have each object carry its own draw method, or one method that does all the drawing? Or does it matter at all? I feel like the second option is more comfortable, because then all the drawing-related code would be in one place...
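    A common compromise keeps a small draw method on each object but centralizes the loop, ordering, and shared drawing state in one renderer. Sketched in C++ for consistency with the other examples on this page (the structure translates directly to Java interfaces); all names are illustrative:

        #include <memory>
        #include <vector>

        struct Renderer;  // wraps whatever actually blits images to the screen

        struct Drawable {
            virtual ~Drawable() = default;
            // Each object knows its own image and position...
            virtual void draw(Renderer &r) const = 0;
        };

        struct Scene {
            // ...but one place owns the collections and the draw order:
            // background first, then actors, then projectiles.
            std::vector<std::unique_ptr<Drawable>> layers[3];

            void render(Renderer &r) const {
                for (const auto &layer : layers)
                    for (const auto &obj : layer)
                        obj->draw(r);
            }
        };

    This keeps the "all drawing in one place" comfort (ordering, camera, batching live in Scene::render) without the central method needing to know how every object type draws itself.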

    Read the article

  • Graphics on boot split into three sections

    - by a sandwhich
    I just installed 13.04 onto a new laptop because of the ease of installation with the UEFI BIOS. When I boot the system, though, the screen is split into three sections, each about 640x200, at the top of the screen, with the active terminal mirrored across them. Although I can log in, startx fails due to something about a file. I have tried booting with vga=711 and with plain nomodeset, with no success. Booting the live USB I originally installed from results in the same issue. The graphics driver in the xorg.conf.something, from what I can make out, is set to vesa, but it could be some other four-character value similar to vesa; it's hard to tell. How can I fix this? One thing to note: the laptop has two dedicated GT 750Ms, along with the Intel 4000 built into the processor. This is what it looks like; the purple box is where the GRUB 2 menu was before boot.

    Read the article

  • Display resolution problem with Sony TV and Intel integrated graphics

    - by user96195
    I am trying to set the correct display resolution for my Sony TV (KDL-32V2000, native resolution 1366x768), connected via HDMI to my HTPC running Ubuntu 12.04. I have an Intel Core i3-530 and an Intel motherboard (DH57JG), so no proprietary graphics drivers. The problem is that I can't get the correct resolution to display on the TV. Initially I only had 1024x768 (or similar) as a maximum resolution, which was not displayed properly. I tried a few steps, including generating an xorg.conf (initially I didn't have one) and adding the segment described in this post regarding this particular TV. I couldn't get this to work, and at this stage have reverted to running without an xorg.conf. Another post suggested upgrading to kernel 3.5, which did give me a 1920x1080 resolution option, but this results in the TV cutting off a fair bit of the edges of the screen. My Dell laptop with ATI drivers recognises the TV screen and works well via HDMI. Any idea how to proceed?

    Read the article

  • Intel HD graphics on Ubuntu

    - by tiax
    My girlfriend got a new subnotebook for Christmas (Sony Vaio VPCY21S1E), which comes with an "Intel HD Graphics" VGA adapter. When I try to boot the Ubuntu installer from USB, the screen goes blank after a short while, before I even see the Ubuntu logo. However, when I select "nomodeset" in the boot options, I can boot to the CLI login prompt. When I start X, though, it only works in VESA mode (I've read Intel eventually got rid of user-mode setting and only offers KMS now, which I've disabled to get it to boot). What can I do to enable (a) a resolution higher than 1024x768 in VESA, and (b) hardware acceleration for Compiz, video playback, etc.?

    Read the article

  • Run integrated Intel graphics card + Nvidia PCIe graphics card?

    - by Roberto
    I have been searching for days for information/help. Is this possible at all? I am running Ubuntu 11.10 on an Intel Core i3 530 on an Intel DH55HC mainboard. I have 2 monitors attached, one on DVI and one on HDMI. I want to add a third or even a fourth monitor using an additional Nvidia graphics card in the PCIe slot. I can't get an image on all monitors at the same time, and I am wondering if this is possible at all. Any ideas where I can get information about this? Thanks in advance, Roberto

    Read the article

  • Ubuntu 12.10 AMD/Intel Hybrid Graphics not working

    - by Marian Lux
    On Ubuntu 12.04 my Sony Vaio VPCSE with Intel® HD Graphics 3000 and AMD Radeon™ HD 6630M worked fine with Catalyst Control Center version 12.6, and switching between the integrated and discrete graphics cards worked too. In both cases I followed this tutorial. But it is not working on Ubuntu 12.10. I tested the tutorial with the Catalyst Control Center from the Ubuntu Software Center, with version 12.8, and with version 12.9; always the same problem. After the installation process I am able to boot to the login screen, but after entering the password for my username, only the background image appears. Unity seems not to be starting; I am only able to reach the context menu by right-clicking. I also tried this fix, but it also does not work for me. Any ideas how to fix this problem?

    Read the article

  • failed to get i915 symbols, graphics turbo disabled error on boot

    - by Gaurav Butola
    Whenever I boot my laptop, I see this message, and it makes the boot process very slow: my screen stays black for a long time before the message appears. It normally shows for just a split second, but today it got worse: my system couldn't boot and was stuck on this error. I did several reboots and still couldn't get past it; then after some time it fixed itself, and now I can use my system as normal. I didn't pay much attention to the error when it appeared for just a split second and merely slowed the boot process, but now that it has stopped me from booting into my system, I would like to know why it occurs. The error:

        ...failed to get i915 symbols, graphics turbo disabled...

    Read the article

  • Samsung 7 Graphics Nightmare

    - by tanner
    I just bought a Samsung 7 laptop with an AMD Radeon HD 6490M, and I installed the driver. Everything was working smoothly and rendering nicely until I rebooted. I noticed that it wouldn't launch my favorite game because of a GLXBadRenderRequest. So I went over to the AMD Catalyst program, and it wouldn't fire up because it couldn't find the graphics card!! What do you think is going on? Oh, and that was the latest driver straight from AMD. I'm running 12.04 LTS.

    Read the article

  • Graphics card support

    - by Daryl
    Brand new user to Ubuntu and Linux. Quick question that I think I already know the answer to: does Ubuntu 11.10 have an updated driver for my graphics card? I was planning on being able to use S-Video out and watch videos on my TV like I could when I had Vista installed.

        daryl@daryl-Aspire-3050:~$ lspci -nn | grep VGA
        01:05.0 VGA compatible controller [0300]: ATI Technologies Inc RS482 [Radeon Xpress 200M] [1002:5975]

    The ATI Radeon Xpress 1100 is the actual card. As it is, Ubuntu does not recognize when I plug an S-Video cable into my laptop. I've narrowed it down to it using what I think is a generic driver, because all of the S-Video enable commands I found and tried have failed.

    Read the article

  • Different iPhone screen resolutions and game graphics

    - by Luke
    We are developing a 2D game for iPhone using cocos2d-x. The artists are drawing the raster graphics for a resolution of 640x960. For older iPhone devices, those with a resolution of 320x480, should we provide a completely new set of graphics adapted to the smaller resolution? I was thinking of simply scaling the whole scene by a factor of 2, which would save us the time of producing a specific set of graphic elements for the smaller resolution. What are the best practices? How do you handle different screen resolutions with respect to the graphics part of the game?
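    cocos2d-x has direct support for this single-asset approach via a design resolution: lay out every scene against one logical canvas and let the engine scale it to the device. A sketch against the cocos2d-x 2.x API (method names are from that series and should be checked against your version; HelloWorld is just the template's first scene):

        bool AppDelegate::applicationDidFinishLaunching() {
            CCDirector *director = CCDirector::sharedDirector();
            CCEGLView *glView = CCEGLView::sharedOpenGLView();
            director->setOpenGLView(glView);

            // All scenes are laid out against one logical 640x960 canvas.
            glView->setDesignResolutionSize(640, 960, kResolutionShowAll);

            // Tell the engine how large the screen is relative to the design:
            // on a 320x480 device this factor is 0.5, so the whole scene is
            // minified and no second asset set is needed.
            CCSize frame = glView->getFrameSize();
            director->setContentScaleFactor(frame.height / 960.0f);

            director->runWithScene(HelloWorld::scene());
            return true;
        }

    Scaling retina art down by 2x generally looks fine; the cost is memory and texture bandwidth on the older (weaker) devices, which is the usual argument for eventually shipping a dedicated 320x480 asset set.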

    Read the article

  • ATI Radeon XPress 1200 Graphics Card Driver Install Problems

    - by 16trohrt
    I've got an ATI Radeon Xpress 1200 graphics card, and the default driver isn't cutting it. I downloaded the proprietary driver .run file ("ati-driver-installer-9-3-x86.x86_64.run") from AMD and tried to run it with sudo sh ati-driver-installer-9-3-x86.x86_64.run. Every time I try it I get this error:

        Error: ./default_policy.sh does not support version
        default:v2:i686:lib::none:3.8.0-25-generic; make sure that the
        version is being correctly set by --iscurrentdistro

    I don't know what's throwing it, and I would really appreciate some help. Thanks in advance! :)

    Read the article

  • ASUS X53S Intel Graphics and Unity 3D

    - by Nordlöw
    I just bought an ASUS X53S. Everything works flawlessly except that I can't run Unity 3D on it, because the NVIDIA Linux drivers currently don't support Optimus, so I'm stuck with the integrated Intel graphics adapter. I have already installed Bumblebee, but it doesn't help with the Unity 3D issue. Will the Xorg driver ever support OpenGL, and especially GLX_EXT_texture_from_pixmap, so that Unity 3D will work with it? The Intel driver is really snappy with Unity 2D and seems to support most other X acceleration features, such as smooth scrolling.

    Read the article
