Search Results

Search found 5806 results on 233 pages for 'c graphics'.


  • How do you author HDR content?

    - by Nathan Reed
    How do you make it easy for your artists to author content for an HDR renderer? What kinds of tools should you provide, and what workflows need to change, in going from LDR to HDR? Note that I'm not asking about the technical aspects of implementing an HDR renderer, but about best practices for creating materials and lighting in HDR. I've googled around a bit, but there doesn't seem to be much about this topic on the web. Can anyone point me to some good resources on this, or share their own experiences? Some specific points:
    Lighting - how can lighting artists pick HDR light colors? Do they have a standard LDR color picker and then a multiplier? Is the multiplier in gamma or linear space? Maybe instead of a multiplier it's a log-luminance? Or a physical brightness level, like the number of lumens? How will they know what multiplier/luminance/brightness is "correct" for a given light?
    Materials - how can texture artists make emissive color maps, such as neon signs, TV screens, skyboxes, etc.? Can you paint one as a regular LDR (8-bit-per-channel) image and apply a multiplier (or log-luminance, etc.)? Are there cases where it's necessary to actually paint HDR images? If so, how do you go about this in Photoshop (or other software)?
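    One workflow that can work here (an assumption for illustration, not something from the question) is to keep an ordinary LDR color picker plus a separate intensity value, convert the picked color to linear space, and only then apply the multiplier, so hue and saturation stay put as the light gets brighter. A minimal sketch of that convention:

    // Sketch: one possible HDR light-authoring convention (hypothetical, not from the question).
    // The artist picks an ordinary 8-bit LDR color plus a scalar intensity; the engine
    // converts the color to linear space first, then applies the multiplier there.
    #include <cmath>
    #include <cstdint>

    struct LinearRGB { float r, g, b; };

    LinearRGB hdrLightColor(uint8_t r8, uint8_t g8, uint8_t b8, float intensity)
    {
        auto toLinear = [](uint8_t c) {
            return std::pow(c / 255.0f, 2.2f);   // approximate sRGB -> linear
        };
        // Multiplying in linear space keeps the hue stable as intensity grows.
        return { toLinear(r8) * intensity,
                 toLinear(g8) * intensity,
                 toLinear(b8) * intensity };
    }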

    Read the article

  • Does the Wacom Bamboo Pen & Touch work out of the box?

    - by Emilien
    Is there any tweaking involved in Ubuntu 10.10 to make the Wacom Bamboo Pen & Touch work? And is this hardware getting some love from the new multitouch framework? If there's no multitouch support for it, then I'd fall back on the simpler (and cheaper) Wacom Bamboo Pen (to draw, no multitouch)... ENAC's general list of Linux multitouch devices states the following regarding Wacom: "The 'wacom' kernel driver handles these, and is undergoing work to make it compliant with the kernel multitouch protocol." But is this also compatible with Ubuntu's multitouch protocol (which I understand is a different effort from the kernel's)?

    Read the article

  • how to get adobe flash fullscreen video fluid with an atom processor?

    - by Antoine Rodriguez
    My system has an Atom N270 CPU and an Intel i915 graphics chip. Under Windows I can enjoy the 720p Big Buck Bunny YouTube video fullscreen without any trouble. Under Ubuntu 12.04 I have laggy, choppy fullscreen video, and choppy video even when not fullscreen. I've seen that under Ubuntu the CPU is almost always at 100% use. What must I do in order to have videos playing well under Ubuntu? I've already tried the following:
    Forcing Flash GPU detection (no result): sudo mkdir /etc/adobe; echo "OverrideGPUValidation = 1" | sudo tee -a /etc/adobe/mms.cfg
    GRUB options (had results, but not enough): i915_enable_rc6=1 i915_enable_fbc=1 i915_lvds_downclock=1 pcie_aspm=force
    Updated Intel drivers (glasen PPA)
    Using Chrome instead of Firefox (had an impact, but not enough)

    Read the article

  • Are nvidia drivers necessary?

    - by Shubham Chaudhary
    The new Ubuntu 14.04 comes with Nvidia driver options. My system (Dell XPS) uses nvidia-331. For starters, it messed up my text font size; it is freakishly small with the Nvidia drivers on. So my question is: are these drivers really necessary? What performance gain do they provide? Will they help me save some battery life? Basically, what are these drivers doing that I was missing before (with nouveau, I guess)?

    Read the article

  • 11.04 radeon screen tearing oddity

    - by Kevin Qiu
    I have a Mobility Radeon 5850 running 11.04 with the Catalyst 11.10 drivers. I noticed that whenever I have two windows open, one on top of the other, scrolling results in minor screen tearing, usually around the border of the window on the bottom. When I minimize or close the bottom window, the tearing goes away. I have "tear free desktop" enabled and tried looking for this specific issue but couldn't find anyone else with it. It happens in both Unity and Classic. Is there any fix for this?

    Read the article

  • How do I get Unity working again after installing the wrong video driver?

    - by Jesse
    First off, I did something stupid: I downloaded an Nvidia driver even though I have an integrated chipset. After installation Unity was still working. However, when I restarted my computer I got an error message saying that Unity can't run. I uninstalled the Nvidia driver and restarted my computer, but Unity still does not work. In the terminal I type "unity" and everything looks okay until I get three error messages that say this:
    Xlib: extension "GLX" missing on display ":0.0".
    followed by:
    Compiz (opengl) - Fatal: glXCreateContext failed
    Compiz (bailer) - Info: Ensuring a shell for your session
    jesse@jesse-PC:~$ Cannot register the panel shell: there is already one running.

    Read the article

  • How to Export Flash Animation Data

    - by charliep
    I'd love for my partner, the artist, to be able to animate using Flash movieclips and timelines. Then I, the programmer, would like to read the raw Flash info and re-program it into my engine of choice (which happens to be Torque2D). The data I'd want is:
    the bitmap images that were used in Flash, like the head and body
    the links between the images, like where the head connects to the body
    the motion data from the Flash animation, like move, rotate (at what speed), shear, etc., for the head or arms or whatever
    Is there any way to get this data? Here's what I know so far. There are tools like SWFSheet and Spriteloq that convert the entire Flash animation into a frame-by-frame sprite animation (in a sprite sheet). That would take too much space in my case, so I'd like to avoid it; re-animating on the fly would take much less texture memory. There is a PDF that describes the SWF file format but NOT the individual components like the movieclips. So does anyone know of a library I can use, or how I can learn more about the movieclip components and whatnot?
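    To make that wish list concrete, here is a purely hypothetical sketch of the data an importer might fill in; none of the names come from Flash, the SWF spec, or Torque2D, they just mirror the three items above (bitmaps, links, motion).

    // Hypothetical data layout for re-animating Flash movieclips in another engine.
    // All names are invented for illustration only.
    #include <string>
    #include <vector>

    struct Keyframe {
        float time;                 // seconds into the timeline
        float x, y;                 // translation
        float rotation;             // degrees
        float scaleX, scaleY;
        float shear;
    };

    struct Part {
        std::string bitmapFile;     // e.g. "head.png", exported from the Flash library
        int parentIndex;            // which part it attaches to (-1 for the root)
        float pivotX, pivotY;       // attachment point on the parent
        std::vector<Keyframe> keys; // sampled or keyframed motion for this part
    };

    struct ClipAnimation {
        std::string name;           // movieclip name
        float frameRate;
        std::vector<Part> parts;
    };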

    Read the article

  • How do I fix my installation of ATI Catalyst Video Drivers in 12.04 LTS?

    - by Boris
    My graphics card is a Mobility Radeon HD 4200 Series. I tried the two answers from "What is the correct way to install ATI Catalyst Video Drivers in 12.04 LTS?", but unfortunately neither works for me. When running the AMD script, I get this error message:
    $ sudo sh ./amd-driver-installer-12-4-x86.x86_64.run
    ...
    DKMS part of installation failed. Please refer to /usr/share/ati/fglrx-install.log for details
    When checking that log file, I see:
    Uninstalling any previously installed drivers.
    Creating symlink /var/lib/dkms/fglrx/8.961/source -> /usr/src/fglrx-8.961
    DKMS: add completed.
    Kernel preparation unnecessary for this kernel. Skipping...
    Building module: cleaning build area....
    cd /var/lib/dkms/fglrx/8.961/build; sh make.sh --nohints --uname_r=3.2.0-24-generic-pae --norootcheck......(bad exit status: 1)
    [Error] Kernel Module : Failed to build fglrx-8.961 with DKMS
    [Error] Kernel Module : Removing fglrx-8.961 from DKMS
    Deleting module version: 8.961 completely from the DKMS tree.
    Done.
    [Reboot] Kernel Module : update-initramfs
    When checking with fglrxinfo, I get:
    $ fglrxinfo
    X Error of failed request: BadRequest (invalid request code or no such operation)
    Major opcode of failed request: 138 (ATIFGLEXTENSION)
    Minor opcode of failed request: 66 ()
    Serial number of failed request: 13
    Current serial number in output stream: 13

    Read the article

  • How to label a cuboid?

    - by usha
    Hi, this is how my 3D cuboid looks; I have attached the complete code. I want to label this cuboid with different names across its sides. How is this possible using OpenGL on Android?

    public class MyGLRenderer implements Renderer {
        Context context;
        Cuboid rect;
        private float mCubeRotation;
        // private static float angleCube = 0;     // Rotational angle in degree for cube (NEW)
        // private static float speedCube = -1.5f; // Rotational speed for cube (NEW)

        public MyGLRenderer(Context context) {
            rect = new Cuboid();
            this.context = context;
        }

        public void onDrawFrame(GL10 gl) {
            gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
            gl.glLoadIdentity();                            // Reset the model-view matrix
            gl.glTranslatef(0.2f, 0.0f, -8.0f);             // Translate right and into the screen
            gl.glScalef(0.8f, 0.8f, 0.8f);                  // Scale down (NEW)
            gl.glRotatef(mCubeRotation, 1.0f, 1.0f, 1.0f);
            // gl.glRotatef(angleCube, 1.0f, 1.0f, 1.0f);   // rotate about the axis (1,1,1) (NEW)
            rect.draw(gl);
            mCubeRotation -= 0.15f;
            // angleCube += speedCube;
        }

        public void onSurfaceChanged(GL10 gl, int width, int height) {
            if (height == 0) height = 1;                    // To prevent divide by zero
            float aspect = (float) width / height;

            // Set the viewport (display area) to cover the entire window
            gl.glViewport(0, 0, width, height);

            // Setup perspective projection, with aspect ratio matching the viewport
            gl.glMatrixMode(GL10.GL_PROJECTION);            // Select projection matrix
            gl.glLoadIdentity();                            // Reset projection matrix
            GLU.gluPerspective(gl, 45, aspect, 0.1f, 100.f); // Use perspective projection
            gl.glMatrixMode(GL10.GL_MODELVIEW);             // Select model-view matrix
            gl.glLoadIdentity();                            // Reset
        }

        public void onSurfaceCreated(GL10 gl, EGLConfig config) {
            gl.glClearColor(0.0f, 0.0f, 0.0f, 1.0f);        // Set color's clear-value to black
            gl.glClearDepthf(1.0f);                         // Set depth's clear-value to farthest
            gl.glEnable(GL10.GL_DEPTH_TEST);                // Enables depth-buffer for hidden surface removal
            gl.glDepthFunc(GL10.GL_LEQUAL);                 // The type of depth testing to do
            gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST); // nice perspective view
            gl.glShadeModel(GL10.GL_SMOOTH);                // Enable smooth shading of color
            gl.glDisable(GL10.GL_DITHER);                   // Disable dithering for better performance
        }
    }

    public class Cuboid {
        private FloatBuffer mVertexBuffer;
        private FloatBuffer mColorBuffer;
        private ByteBuffer mIndexBuffer;

        private float vertices[] = {
            // width, height, depth
            -2.5f, -1.0f, -1.0f,
             1.0f, -1.0f, -1.0f,
             1.0f,  1.0f, -1.0f,
            -2.5f,  1.0f, -1.0f,
            -2.5f, -1.0f,  1.0f,
             1.0f, -1.0f,  1.0f,
             1.0f,  1.0f,  1.0f,
            -2.5f,  1.0f,  1.0f
        };

        private float colors[] = {
            // R, G, B, A color
            0.0f, 1.0f, 0.0f, 1.0f,
            0.0f, 1.0f, 0.0f, 1.0f,
            1.0f, 0.5f, 0.0f, 1.0f,
            1.0f, 0.5f, 0.0f, 1.0f,
            1.0f, 0.0f, 0.0f, 1.0f,
            1.0f, 0.0f, 0.0f, 1.0f,
            0.0f, 0.0f, 1.0f, 1.0f,
            1.0f, 0.0f, 1.0f, 1.0f
        };

        private byte indices[] = {
            // vertices 0..7 arranged into the faces
            0, 4, 5,   0, 5, 1,
            1, 5, 6,   1, 6, 2,
            2, 6, 7,   2, 7, 3,
            3, 7, 4,   3, 4, 0,
            4, 7, 6,   4, 6, 5,
            3, 0, 1,   3, 1, 2
        };

        public Cuboid() {
            ByteBuffer byteBuf = ByteBuffer.allocateDirect(vertices.length * 4);
            byteBuf.order(ByteOrder.nativeOrder());
            mVertexBuffer = byteBuf.asFloatBuffer();
            mVertexBuffer.put(vertices);
            mVertexBuffer.position(0);

            byteBuf = ByteBuffer.allocateDirect(colors.length * 4);
            byteBuf.order(ByteOrder.nativeOrder());
            mColorBuffer = byteBuf.asFloatBuffer();
            mColorBuffer.put(colors);
            mColorBuffer.position(0);

            mIndexBuffer = ByteBuffer.allocateDirect(indices.length);
            mIndexBuffer.put(indices);
            mIndexBuffer.position(0);
        }

        public void draw(GL10 gl) {
            gl.glFrontFace(GL10.GL_CW);
            gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mVertexBuffer);
            gl.glColorPointer(4, GL10.GL_FLOAT, 0, mColorBuffer);
            gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
            gl.glEnableClientState(GL10.GL_COLOR_ARRAY);
            gl.glDrawElements(GL10.GL_TRIANGLES, 36, GL10.GL_UNSIGNED_BYTE, mIndexBuffer);
            gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
            gl.glDisableClientState(GL10.GL_COLOR_ARRAY);
        }
    }

    public class Draw3drect extends Activity {
        private GLSurfaceView glView;   // Use GLSurfaceView

        // Call back when the activity is started, to initialize the view
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            glView = new GLSurfaceView(this);           // Allocate a GLSurfaceView
            glView.setRenderer(new MyGLRenderer(this)); // Use a custom renderer
            this.setContentView(glView);                // This activity sets to GLSurfaceView
        }

        // Call back when the activity is going into the background
        @Override
        protected void onPause() {
            super.onPause();
            glView.onPause();
        }

        // Call back after onPause()
        @Override
        protected void onResume() {
            super.onResume();
            glView.onResume();
        }
    }

    Read the article

  • Trouble with UV Mapping Blender => Unity 3

    - by Lea Hayes
    For some reason I am getting nasty grey edges around the edges of rendered 3D models that are not present in Blender. I seem to be able to solve the problem by shrinking the UV coordinates within the part of the texture that is to be mapped. But this means that I am wasting valuable texture space and losing accuracy in the drawn UV maps. Could I be doing something wrong, perhaps a setting in Unity that needs changing? I have watched countless tutorials which demonstrate Blender's default generated UV coordinates with "Texture Paint" lining up perfectly in Unity. Here is an illustration of the problem:
    Left: approximately 15 pixels of margin on each side of the UV coordinates
    Right: approximately 3 pixels of margin on each side of the UV coordinates
    Note: the texture image resolution is 1024x1024

    Read the article

  • Understanding normal maps on terrain

    - by JohnB
    I'm having trouble understanding some of the math behind normal map textures. Even though I've got it to work using borrowed code, I want to understand it. I have a terrain based on a heightmap. I'm generating a mesh of triangles at load time and rendering that mesh. Now for each vertex I need to calculate a normal, a tangent, and a bitangent. My understanding is as follows; have I got this right?
    normal is a unit vector facing outwards from the surface of the triangle. For a vertex I take the average of the normals of the triangles using that vertex.
    tangent is a unit vector in the direction of the 'u' coordinates of the texture map. As my texture u,v coordinates follow the x and y coordinates of the terrain, my understanding is that this vector is simply the vector along the surface in the x direction. So I should be able to calculate it as simply the difference between vertices in the x direction (and normalize it).
    bitangent is a unit vector in the direction of the 'v' coordinates of the texture map. As my texture u,v coordinates follow the x and y coordinates of the terrain, my understanding is that this vector is simply the vector along the surface in the y direction. So I should be able to calculate it as simply the difference between vertices in the y direction (and normalize it).
    However, the code I have borrowed seems much more complicated than this and takes into account the actual values of u and v at each vertex, which I don't understand the need for, as they increase in exactly the same direction as x and y. I implemented what I thought from the above, and it simply doesn't work; the normals are clearly not working for lighting. Have I misunderstood something? Or can someone explain to me the physical meaning of the tangent and bitangent vectors when applied to a mesh generated from a heightmap like this, when the u and v texture coordinates map along the x and y directions? Thanks for any help understanding this.
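    This is not the asker's borrowed code, just a minimal sketch of the simpler construction described in the question, under the stated assumptions that z is up, the grid spacing is uniform, and u/v run along +x/+y; h(x, y) is a hypothetical height lookup.

    // Minimal sketch: per-vertex normal, tangent and bitangent for a regular heightmap
    // grid where u follows +x and v follows +y, and z is the height axis.
    #include <cmath>

    struct Vec3 { float x, y, z; };

    static Vec3 normalize(Vec3 v)
    {
        float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        return { v.x / len, v.y / len, v.z / len };
    }

    // Central differences give the surface slope in the x and y directions.
    void basisAt(int x, int y, float spacing,
                 float (*h)(int, int),            // hypothetical height lookup
                 Vec3& normal, Vec3& tangent, Vec3& bitangent)
    {
        float dhdx = (h(x + 1, y) - h(x - 1, y)) / (2.0f * spacing);
        float dhdy = (h(x, y + 1) - h(x, y - 1)) / (2.0f * spacing);

        // Tangent follows u (+x), bitangent follows v (+y); each picks up the slope in z.
        tangent   = normalize({ 1.0f, 0.0f, dhdx });
        bitangent = normalize({ 0.0f, 1.0f, dhdy });

        // Normal is the cross product of the two surface directions.
        normal = normalize({ tangent.y * bitangent.z - tangent.z * bitangent.y,
                             tangent.z * bitangent.x - tangent.x * bitangent.z,
                             tangent.x * bitangent.y - tangent.y * bitangent.x });
    }

    The reason general-purpose tangent code also reads the actual u/v values is that it has to work for arbitrary UV layouts; when the UVs really do march along x and y, it should reduce to something like the above.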

    Read the article

  • Blending animations for more character movements

    - by Noob Saibot
    I am making a hack-n-slash 3rd-person game, and I want the character movements to be more dynamic, not like fighting games where you have a moves list. I want to animate tons of different animations and have them "tween" between each other, because I want the controls to be all keyboard rather than keyboard and mouse. That way you have up to 10 inputs (all your fingers) to blend and morph animations to create more fluid movements. In the end this will be almost like characters typing a phrase or string of keys rather than "move forward, mouse look, click to melee". My question is: has anyone done this before, and how would someone go about trying to tween, let's say, one animation for each key on the keyboard, excluding Tab, Caps, R+Shift, L+Shift, Enter, R+Ctrl, L+Ctrl, L+Alt, R+Alt, Windows Key, and Menu? So that's all the numbers, letters and punctuation keys. That's 46 keys, which gives me 46! = 5502622159812088949850305428800254892961651752960000000000 possible orderings (computed in Python), and with a minimum entry of 2 keypresses it only shortens by half. It is not humanly possible to create so many unique animations in one lifetime, but I'm guessing there is a reason this hasn't been done already. Or if I just used 10 basic keys, maybe ASDF + Space (right hand) and 4, 5, 6, +, 0 on the keypad (left hand), it would give me 3,628,800 possible unique animations.

    Read the article

  • Dell xps l502 optimus ... stuck at black screen after installing bumblebee

    - by Abdul Azzawi
    I am facing a problem after installing Bumblebee: when I try to start Ubuntu normally, the machine gets stuck on a black screen. Can anyone help me with the right configuration for the graphics card? All I need is to run with full desktop effects; currently even CompizConfig effects aren't working. Also, what should I have in the blacklist, or is it correct the way Bumblebee sets it up? Running Ubuntu Natty 11.04, 32-bit. Thanks

    Read the article

  • How to create a "retro" pixel shader for transformed 2D sprites that maintains pixel fidelity?

    - by David Gouveia
    The image below shows two sprites rendered with point sampling on top of a background: The left skull has no rotation/scaling applied to it, so every pixel matches perfectly with the background. The right skull is rotated/scaled, and this results in larger pixels that are no longer axis aligned. How could I develop a pixel shader that would render the transformed sprite on the right with axis aligned pixels of the same size as the rest of the scene? This might be related to how sprite scaling was implemented in old games such as Monkey Island, because that's the effect I'm trying to achieve, but with rotation added. Edit As per kaoD's suggestions, I tried to address the problem as a post-process. The easiest approach was to render to a separate render target first (downsampled to match the desired pixel size) and then upscale it when rendering a second time. It did address my requirements above. First I tried doing it Linear -> Point and the result was this: There's no distortion but the result looks blurred and it loses most of the highlights colors. In my opinion it breaks the retro look I needed. The second time I tried Point -> Point and the result was this: Despite the distortion, I think that might be good enough for my needs, although it does look better as a still image than in motion. To demonstrate, here's a video of the effect, although YouTube filtered the pixels out of it: http://youtu.be/hqokk58KFmI However, I'll leave the question open for a few more days in case someone comes up with a better sampling solution that maintains the crisp look while decreasing the amount of distortion when moving.
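    As a rough illustration of the post-process described above (render the whole scene into a low-resolution target, then upscale it with point sampling), here is what the setup might look like; the question is XNA-based, so this OpenGL sketch is only an analogy and every name in it is an assumption, not the author's code.

    // Sketch of the low-res render target + point-sampled upscale approach ("Point -> Point").
    // OpenGL is used here only for illustration; the resolution is chosen by the caller.
    #include <GL/glew.h>

    GLuint makeRetroTarget(int lowW, int lowH, GLuint* colorTexOut)
    {
        GLuint fbo, tex;
        glGenFramebuffers(1, &fbo);
        glGenTextures(1, &tex);

        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, lowW, lowH, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
        // Point sampling in both directions keeps the big chunky pixels crisp.
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
        glBindFramebuffer(GL_FRAMEBUFFER, 0);

        *colorTexOut = tex;
        return fbo;
    }

    // Per frame: draw the sprites (rotated/scaled freely) into the low-res target, then draw
    // one full-screen quad textured with the color texture at the window resolution. Because
    // both the render into the target and the upscale use GL_NEAREST, every on-screen pixel
    // block stays axis aligned and uniform in size.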

    Read the article

  • How to flip a BC6/BC7 texture?

    - by postgoodism
    I have some code to load DDS image files into OpenGL textures, and I'd like to extend it to support the BC6 and BC7 compressed formats introduced in D3D11. Since DirectX and OpenGL disagree about whether a texture's origin is in the upper-left or lower-left corner, my DDS loader flips each image's pixels along the Y axis before passing the pixels to OpenGL. Flipping compressed textures presents an additional wrinkle: in addition to flipping each row of 4x4-pixel blocks, you also need to flip the pixels within each block. I found code here to flip BC1/BC2/BC3 blocks, and from the block diagrams on MSDN it was easy to adapt the BC3-flipping code to handle BC4 and BC5. The BC6 and BC7 formats look significantly more intimidating, though. Is there a similar bit-twiddling trick to flip these formats, or would I have to fully decompress and recompress each block?
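    For what it's worth, the outer half of the job is the same for every BC format: reorder the rows of 4x4 blocks. Only the per-block flip differs, and for BC6H/BC7 it depends on which of the many block modes is used, so it is left as a stub below. A minimal sketch, not tied to any particular loader:

    // Sketch: vertically flip a BC-compressed image by swapping rows of 4x4 blocks.
    // blockSize is 8 bytes for BC1/BC4 and 16 bytes for BC2/BC3/BC5/BC6H/BC7.
    #include <cstdint>
    #include <cstring>
    #include <vector>

    void flipBlock(uint8_t* /*block*/, size_t /*blockSize*/)
    {
        // Per-block, format-specific pixel flip goes here. For BC1-BC5 it is simple bit
        // twiddling; for BC6H/BC7 each block mode would need its own handling (or you
        // decompress and recompress, as the question suggests).
    }

    void flipCompressedImage(uint8_t* data, int width, int height, size_t blockSize)
    {
        const int blocksX = (width + 3) / 4;
        const int blocksY = (height + 3) / 4;
        const size_t rowBytes = blocksX * blockSize;
        std::vector<uint8_t> tmp(rowBytes);

        // Swap whole rows of blocks, top row with bottom row and so on.
        for (int y = 0; y < blocksY / 2; ++y) {
            uint8_t* top = data + y * rowBytes;
            uint8_t* bottom = data + (blocksY - 1 - y) * rowBytes;
            std::memcpy(tmp.data(), top, rowBytes);
            std::memcpy(top, bottom, rowBytes);
            std::memcpy(bottom, tmp.data(), rowBytes);
        }
        // Then flip the pixels inside each block.
        for (int y = 0; y < blocksY; ++y)
            for (int x = 0; x < blocksX; ++x)
                flipBlock(data + y * rowBytes + x * blockSize, blockSize);
    }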

    Read the article

  • what is the simplest 3d software for unity?

    - by kdavis8
    I've heard a lot about Daz Studio, Poser, Maya, K-3D, Anim8or, Blender, and all the rest. My question is: which one is the best choice in terms of simplicity and quality? Price is not really an issue. I'm programming games in Java for Android mobile devices at the moment, but I will eventually move onto larger platforms. I would like to use Unity3D for the game programming itself and a 3D modeling package just to create the game objects. I just need to know the best one to get started with from scratch, or should I use a combination of multiple ones? Any insight would be great, thanks!

    Read the article

  • Algorithm to generate multifaced cube?

    - by OnePie
    Is there any elegant solution for generating a simple six-sided cube where each side is made up of more than one face? The method I have used ended up a horrible and complicated mess of logic that is impossible to follow and most likely impossible to maintain. The algorithm should not generate redundant vertices, and it should output the index list for the mesh as well. The reason I need this is that the cube's vertices will be deformed depending on various factors, meaning that a simple six-faced cube will not do.
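    One common approach (a sketch under my own naming, not a known library routine) is to treat each of the six faces as an (n+1) x (n+1) grid of vertices defined by a corner and two edge directions, then emit two triangles per grid cell. Edge vertices are duplicated between faces here; welding them by position afterwards removes the redundancy the question mentions.

    // Sketch: build a subdivided cube spanning [-1, 1]^3 with n x n cells per face.
    #include <cstdint>
    #include <vector>

    struct Vec3 { float x, y, z; };

    static Vec3 add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
    static Vec3 mul(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

    void buildCube(int n, std::vector<Vec3>& verts, std::vector<uint32_t>& indices)
    {
        // Each face is described by an origin corner and two edge directions.
        const Vec3 faces[6][3] = {
            { {-1,-1,-1}, {2,0,0}, {0,2,0} },   // z = -1
            { {-1,-1, 1}, {0,2,0}, {2,0,0} },   // z = +1
            { {-1,-1,-1}, {0,0,2}, {2,0,0} },   // y = -1
            { {-1, 1,-1}, {2,0,0}, {0,0,2} },   // y = +1
            { {-1,-1,-1}, {0,2,0}, {0,0,2} },   // x = -1
            { { 1,-1,-1}, {0,0,2}, {0,2,0} },   // x = +1
        };

        for (int f = 0; f < 6; ++f) {
            uint32_t base = static_cast<uint32_t>(verts.size());
            // Grid of (n+1) x (n+1) vertices across this face.
            for (int j = 0; j <= n; ++j)
                for (int i = 0; i <= n; ++i)
                    verts.push_back(add(faces[f][0],
                                    add(mul(faces[f][1], float(i) / n),
                                        mul(faces[f][2], float(j) / n))));
            // Two triangles per grid cell.
            for (int j = 0; j < n; ++j)
                for (int i = 0; i < n; ++i) {
                    uint32_t a = base + j * (n + 1) + i;
                    uint32_t b = a + 1;
                    uint32_t c = a + (n + 1);
                    uint32_t d = c + 1;
                    indices.insert(indices.end(), { a, c, b,  b, c, d });
                }
        }
    }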

    Read the article

  • OpenGL-ES: clearing the alpha of the FrameBufferObject

    - by MrDatabase
    This question is a follow-up to "Texture artifacts on iPad". How does one "clear the alpha of the render texture frameBufferObject"? I've searched around here, Stack Overflow and various search engines, but no luck. I've tried a few things, for example calling glClear(GL_COLOR_BUFFER_BIT) at the beginning of my render loop, but it doesn't seem to make a difference. Any help is appreciated since I'm still new to OpenGL. Cheers!
    P.S. I read on SO and in Apple's documentation that glClear should always be called at the beginning of the render loop. Agree? Disagree? Here's where I read this: http://stackoverflow.com/questions/2538662/how-does-glclear-improve-performance
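    One technique that is often suggested for this (an assumption on my part that it fits the linked case, so treat it as a sketch) is to mask off the RGB channels so a clear only touches alpha:

    // Sketch: clear only the alpha channel of the currently bound framebuffer object.
    #include <GL/gl.h>   // desktop GL header; on iOS it would be the OpenGL ES headers instead

    void clearAlphaOnly()
    {
        // Allow writes to alpha only, so the clear leaves RGB untouched.
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);
        glClearColor(0.0f, 0.0f, 0.0f, 0.0f);   // alpha value to clear to (0 as an example)
        glClear(GL_COLOR_BUFFER_BIT);
        // Restore the mask for normal rendering.
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    }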

    Read the article

  • RenderState in XNA 4

    - by Shashwat
    I was going through this tutorial for having transparency which can be used to solve my problem here. The code is written in XNA 3 but I'm using XNA 4. What is the alternative for the following code in XNA 4?
    device.RenderState.AlphaTestEnable = true;
    device.RenderState.AlphaFunction = CompareFunction.GreaterEqual;
    device.RenderState.ReferenceAlpha = 200;
    device.RenderState.DepthBufferWriteEnable = false;
    I searched a lot but didn't find anything useful.

    Read the article

  • Orthographic Projection Issue

    - by Nick
    I have a problem with my ortho matrix. The engine uses the perspective projection fine, but for some reason the ortho matrix is messed up (see screenshots below). Can anyone understand what is happening here? At the minute I am taking the projection matrix * transform (translate, rotate, scale) and passing that to the vertex shader to multiply the vertices by it.
    VIDEO: shows the same scene, rotating on the Y axis. http://youtu.be/2feiZAIM9Y0
    void Matrix4f::InitOrthoProjTransform(float left, float right, float top, float bottom, float zNear, float zFar)
    {
        m[0][0] = 2 / (right - left);
        m[0][1] = 0;
        m[0][2] = 0;
        m[0][3] = 0;

        m[1][0] = 0;
        m[1][1] = 2 / (top - bottom);
        m[1][2] = 0;
        m[1][3] = 0;

        m[2][0] = 0;
        m[2][1] = 0;
        m[2][2] = -1 / (zFar - zNear);
        m[2][3] = 0;

        m[3][0] = -(right + left) / (right - left);
        m[3][1] = -(top + bottom) / (top - bottom);
        m[3][2] = -zNear / (zFar - zNear);
        m[3][3] = 1;
    }
    This is what happens with the ortho matrix: [screenshot] This is the perspective matrix: [screenshot]

    Read the article

  • 3DS Max exporting too many vertexes for model

    - by Juan Pablo
    I have a sample model of a cube and a Buddha downloaded from the internet in .3ds format, which I can load correctly into my program and view without problems, but I wanted to try creating my own model. I created a simple box mesh in 3ds Max and exported it as .3ds (Converted to mesh - export as .3ds). When inspecting the .3ds file with a hex viewer, I was expecting to see 8 vertices and 12 faces declared (as in the model I downloaded from the internet). But what I found was that it listed 26 vertices and 12 faces! And when I try to load that file with my .3ds viewer, my parser isn't detecting the face block (0x4120), which is strange because it worked for other objects downloaded from the internet. Do I have to set any special property in order to export a .3ds file with the minimum number of vertices and a vertex-index list?
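    One plausible explanation is that exporters split a vertex whenever adjacent faces need different normals, UVs or smoothing groups at that corner, so the position count grows past 8. If only positions matter to the loader, they can be welded back on import; the sketch below is hypothetical and not tied to any particular .3ds parser.

    // Sketch: weld vertices that share the same position and remap the index list.
    // Illustrative only; a real importer might also compare normals/UVs before merging.
    #include <cstdint>
    #include <map>
    #include <tuple>
    #include <vector>

    struct Vertex { float x, y, z; };

    void weldByPosition(std::vector<Vertex>& verts, std::vector<uint16_t>& indices)
    {
        std::map<std::tuple<float, float, float>, uint16_t> seen;
        std::vector<Vertex> welded;
        std::vector<uint16_t> remap(verts.size());

        for (size_t i = 0; i < verts.size(); ++i) {
            auto key = std::make_tuple(verts[i].x, verts[i].y, verts[i].z);
            auto it = seen.find(key);
            if (it == seen.end()) {
                uint16_t newIndex = static_cast<uint16_t>(welded.size());
                seen.emplace(key, newIndex);
                welded.push_back(verts[i]);
                remap[i] = newIndex;
            } else {
                remap[i] = it->second;
            }
        }
        // Point every face index at the surviving vertex.
        for (uint16_t& idx : indices)
            idx = remap[idx];
        verts.swap(welded);
    }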

    Read the article

  • Ubuntu + Wacom Intuos 4 + MyPaint HELP!

    - by Sativa
    Please keep in mind I'm not that computer savvy, but I will try any suggestion, so please help me out! My tablet will stop working if the USB connection is ever broken, or if the Ubuntu software is being updated. Sometimes it will stop working for no reason that I can see. The lights will still be on, but it won't be responsive. It doesn't work again until I restart the laptop with the tablet plugged in, which is grating if you have to do it every 25 min. or so... I'm not sure if the issue is with the port, the tablet/cable or the driver, but any suggestions would be very welcome!
    Also, MyPaint is frequently having hiccups. It seems to save fine, but at times it will randomly close down, and when I open files they're often empty. They turn into 0 Kb files and only contain a single empty layer. Also very grating, considering I lose days of work for no clear reason and without any heads up. Again, I'm not sure if the issue is with the port, the tablet/cable or the driver, but any suggestions would be very welcome! The error message reads:
    Traceback (most recent call last):
    File "/usr/share/mypaint/gui/application.py", line 177, at_application_start(*junk=()) else: self.filehandler.open_file(fn)
    variables: {'fn': ('local', u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora'), 'self.filehandler.open_file': ('local', <bound method FileHandler.wrapper of <gui.filehandling.FileHandler object at 0x7fdb89063a10>>)}
    File "/usr/share/mypaint/gui/drawwindow.py", line 60, wrapper(self=<gui.filehandling.FileHandler object>, *args=(u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora',), **kwargs={}) try: func(self, *args, **kwargs) # gtk main loop may be called in here...
    variables: {'self': ('local', <gui.filehandling.FileHandler object at 0x7fdb89063a10>), 'args': ('local', (u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora',)), 'func': ('local', <function open_file at 0x7fdb8b397b90>), 'kwargs': ('local', {})}
    File "/usr/share/mypaint/gui/filehandling.py", line 231, open_file(self=<gui.filehandling.FileHandler object>, filename=u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora') try: self.doc.model.load(filename, feedback_cb=self.gtk_main_tick) except document.SaveLoadError, e:
    variables: {'self.doc.model.load': ('local', <bound method Document.load of <lib.document.Document instance at 0x7fdb8ab4f8c0>>), 'feedback_cb': (None, []), 'self.gtk_main_tick': ('local', <function gtk_main_tick at 0x7fdb8b397b18>), 'filename': ('local', u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora')}
    File "/usr/share/mypaint/lib/document.py", line 544, load(self=<lib.document.Document instance>, filename=u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora', **kwargs={'feedback_cb': <function gtk_main_tick>}) try: load(filename, **kwargs) except gobject.GError, e:
    variables: {'load': ('local', <bound method Document.load_ora of <lib.document.Document instance at 0x7fdb8ab4f8c0>>), 'kwargs': ('local', {'feedback_cb': <function gtk_main_tick at 0x7fdb8b397b18>}), 'filename': ('local', u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora')}
    File "/usr/share/mypaint/lib/document.py", line 772, load_ora(self=<lib.document.Document instance>, filename=u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora', feedback_cb=<function gtk_main_tick>) tempdir = tempdir.decode(sys.getfilesystemencoding()) z = zipfile.ZipFile(filename) print 'mimetype:', z.read('mimetype').strip()
    variables: {'zipfile.ZipFile': ('global', <class 'zipfile.ZipFile'>), 'z': (None, []), 'filename': ('local', u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora')}
    File "/usr/lib/python2.7/zipfile.py", line 770, __init__(self=<zipfile.ZipFile object>, file=u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora', mode='r', compression=0, allowZip64=False) if key == 'r': self._RealGetContents() elif key == 'w':
    variables: {'self._RealGetContents': ('local', <bound method ZipFile._RealGetContents of <zipfile.ZipFile object at 0x7fdb9b952790>>)}
    File "/usr/lib/python2.7/zipfile.py", line 811, _RealGetContents(self=<zipfile.ZipFile object>) if not endrec: raise BadZipfile, "File is not a zip file" if self.debug > 1:
    variables: {'BadZipfile': ('global', <class 'zipfile.BadZipfile'>)}
    BadZipfile: File is not a zip file

    Read the article

  • fglrx installation without success

    - by Lucio
    I followed the steps of this guide and it doesn't work. I entered the following command and got an output with dependency errors: sudo dpkg -i fglrx*.deb
    So I tried with gdebi instead, and it worked. Now fglrx, fglrx-amdcccle and fglrx-dev are installed. The next step is "Generate a new /etc/X11/xorg.conf file", but I can't do this for the following reason: when I enter sudo aticonfig --initial -f the terminal shows me this output:
    sudo: aticonfig: command not found
    Have I installed the packages correctly or not? What do I have to do to fix the problem? NOTE: I have not uninstalled anything (drivers, configuration, etc.) before beginning the installation.

    Read the article

  • Launcher icon size and window behavior broken

    - by philipp
    I have installed the NVIDIA driver for my graphics card, just following some tutorials, and it worked fine. After this I could set the icon size of the launcher, windows had a nice little shadow, the resolution was better, and windows showed a nice effect when popping up or when being brought to full-screen... But today this was just gone after a reboot. What could this be? The NVIDIA X Server Settings are available. I installed and reinstalled wine1.5 via the apt-get commands, so this might have broken something. What can I do to fix this again? Greetings, Philipp
    EDIT: I went on searching and all I found was that this problem might be connected to the Unity mode (2D vs. 3D), but it could also be something else, because setting the mode brings no change.
    EDIT 2: The version of Ubuntu is 12.04, it is a 64-bit environment, and the graphics card is a GeForce GT 330M.
    EDIT 3: Using Google Maps in WebGL mode does not work anymore either; it was working yesterday.
    EDIT 4: the screenshot. By the way, I think Blender is not working anymore either...
    EDIT 5: I think the problem is closely connected to this output

    Read the article
