Search Results

Search found 2838 results on 114 pages for 'graphic effects'.

Page 27/114

  • Geometry Shader: distortions

    - by Christophe Lionet
    This is a cross-post from Stack Overflow; I thought it would be more appropriate here. There is a lot of code I could be posting, so to avoid overloading the page I will post any part of it on request. I am working from the ParticleGS DirectX 10 sample to build a geometry-shader-based particle system in DirectX 11. Using the sample code and changing it to my liking, I am able to draw a single quad (which is essentially one particle constantly recreating itself). However, I noticed a problem similar to one I once had: the rendered shape is distorted. Here is a video showing what is happening: http://youtu.be/6NY_hxjMfwY Now, I used to have this issue when using several effects together, until I realised that I needed to explicitly set the geometry shader to null for the other effects. That part is solved, as you can see in the video, since the rest of the scene draws properly. Note that some sides are being culled somehow, although I turned off culling in my main render state. The texturing is fine too; the texture draws with the correct proportions relative to the quad. I really don't see what I could be doing wrong here... what would cause the geometry shader to behave this way? Again, I will post any piece of code you request.

    Read the article

  • HTG Projects: Create a Pop Art Sci-Fi Poster with an Inkjet Printer

    - by Eric Z Goodnight
    Looking to decorate your house with some cool artwork? Grab some of your favorite Sci-Fi pics and some surprisingly simple tools, and create a Pop Art style poster in minutes. Through a simple process called “posterization,” you can reduce any graphic to a limited color palette, giving it a look similar to the one Andy Warhol used for his famous Marilyn Monroe prints in the sixties. Pick a theme, grab some images, and get ready to decorate your home with a surprisingly easy and surprisingly cool poster that any inkjet printer can produce.

    Read the article

  • Need to combine a color, mask, and sprite layer in a shader

    - by Donutz
    My task: to display a sprite using different team colors. I have a sprite graphic, part of which has to be displayed in a team color. The color isn't 'flat', i.e. it shades from brighter to darker. I can't "pre-build" the graphics because there are just too many, so I have to generate them at runtime. I've decided to use a shader and supply it with a texture consisting of the team color, a texture consisting of a mask (black = no color, white = full color, gray = progressively dimmed color), and the sprite graphic, with the areas where the team color shows being transparent. So here's my shader code:

        // Effect attempts to merge a color layer, a mask layer, and a sprite layer
        // to produce a complete sprite
        sampler UnitSampler  : register(s0); // the unit
        sampler MaskSampler  : register(s1); // the mask
        sampler ColorSampler : register(s2); // the color

        float4 main(float4 color : COLOR0, float2 texCoord : TEXCOORD0) : COLOR0
        {
            float4 tex1 = tex2D(ColorSampler, texCoord); // get the color
            float4 tex2 = tex2D(MaskSampler, texCoord);  // get the mask
            float4 tex3 = tex2D(UnitSampler, texCoord);  // get the unit
            float4 tex4 = tex1 * tex2.r * tex3;          // color * mask * unit
            return tex4;
        }

    My problem is the calculations involving tex1 through tex4. I don't really understand how the manipulations work, so I'm just thrashing around, producing lots of different incorrect effects. So given tex1 through tex3, what calculations do I do to take the color (tex1), mask it (tex2), and apply the result to the unit where the mask isn't zero? And would I be better off making the mask just on/off (white/black) and putting the color shading in the unit graphic?

    Read the article

  • Launcher icon size and window behavior broken

    - by philipp
    I have installed the nvidia driver for my graphics card, following some tutorials, and it worked fine. After this I could set the icon size of the launcher, windows had a nice little shadow, the resolution was better, and windows showed a nice effect when popping up or when being brought to full screen... But today all of this was just gone after a reboot. What could this be? The nvidia X server settings are still available. I installed and reinstalled wine1.5 via the apt-get commands, so this might have broken something. What can I do to fix this? Greetings, philipp. EDIT: I went on searching and all I found was that this problem might be connected to the mode of Unity (2D vs. 3D), but it could also be something else, because setting the mode brings no change. EDIT 2: the version of Ubuntu is 12.04 and it is a 64-bit environment; the graphics card is a GeForce GT 330M. EDIT 3: Using maps.google in WebGL mode does not work anymore either; it was working yesterday. EDIT 4: the screenshot. By the way, I think that Blender is not working anymore either... EDIT 5: I think that the problem is closely connected to this output

    Read the article

  • Rendering large and high poly meshes

    - by Aurus
    Consider a huge terrain with a lot of polygons. To render this terrain I thought of the following techniques: Using a height-map instead of raw meshes: yes, but I want to create a lot of caves and stuff that simply won't work with height-maps. Using voxels: yes, but I think that would be too much, since I don't even want to support changing terrain. Splitting it into multiple chunks and doing some sort of LOD with the mesh: yes, but how would I do that? Tessellation usually creates more detail, not less. Precomputing the same mesh in lower-poly versions (like Mudbox does) and rendering one of them depending on distance: graphics memory is limited, and uploading only the chunks won't solve that problem since the traffic would be too high. IMO the last one sounds really good, but imagine the following process: upload and render the chunks depending on the current player position [no problem]; the player walks straight forward; now we may have to swap one of the low-poly chunks for the high-poly one; so remove the low-poly chunk and load the high-poly chunk [already too much traffic here, I think]. I am not very experienced in graphics programming and maybe the above process is totally okay, but somehow I think it is too much. And how about the disk space it would require... I think three LOD levels would be fine, but isn't that also too much? (I am using OpenGL, but I don't think that this is important.)
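
    The distance-based selection in that last option usually amounts to very little code. A rough sketch, in C# purely for illustration (the question uses OpenGL), with made-up thresholds and level counts:

        // Pick which precomputed LOD mesh to draw for a chunk, given its distance
        // from the camera. The thresholds and the three levels are illustrative;
        // in practice they would be tuned per game.
        static int SelectChunkLod(float distanceToCamera)
        {
            if (distanceToCamera < 100f) return 0; // full-detail mesh
            if (distanceToCamera < 400f) return 1; // medium mesh
            return 2;                              // lowest-poly mesh
        }

    In practice a small hysteresis band around each threshold keeps a chunk from being re-uploaded every frame when the player hovers near a boundary, which is most of the traffic worry described above.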

    Read the article

  • How to balance a non-symmetric "extension"-based game?

    - by Klaim
    Most strategy games have fixed units and possible behaviours. However, think of a game like Magic: The Gathering: each card is a set of rules. Regularly, new sets of card types are created. I remember that the first editions of the game were said to be prohibited in official tournaments because the cards were often too powerful. Later extensions of the game provided more subtle effects/rules on the cards, and they managed to balance the game apparently effectively, even though there are thousands of different possible cards. I'm working on a strategy game that is a bit in the same position: all units are provided by extensions and the game is intended to be extended for some years, at least. The variety of unit effects is very large, even with some basic design limitations set to make sure it stays manageable. Each player chooses a set of units to play with (defining their global strategy) before playing (like choosing a themed deck of Magic cards). As it's a strategy game (you can think of Magic as a strategy game too, from some points of view), it's essentially skirmish based, so the game has to be fair even if the players don't choose the same units before starting to play. So, how do you proceed to balance this type of non-symmetric (strategy) game when you know it will always be extended? For the moment I'm trying to apply these rules, but I'm not sure they're right because I don't have enough design experience to know: each unit should provide one unique effect; each unit should have an opposite unit with an opposite effect that cancels it out; some limitations based on the gameplay; try to get a lot of beta tests before each extension release. Does it look like I'm in the most complex case?

    Read the article

  • App to make a video from photos?

    - by chtfn
    I was wondering what would be a good app (with a GUI ;) ) for making a video from a bunch of photos or images, to create a time-lapse video, a stop-motion animation, or even a video like this one. The idea is to set a really small time between each photo, but also to be able to change this time every so often, or add some effects, to make the succession smoother in particular. What would be perfect is a function that allows automatic cropping of the photos as well as exposure adjustment, so they all have the same background and the time-lapse video looks smoother. I know about the slide show app "Imagination", but the interval cannot go under one second. Cheers! Edit: here is my progress, also thanks to the first answer: I tried Luciole; it is really simple and promising, but pretty buggy, and I could only export an average video in .dv format (mpg2 and avi don't work). Apparently it has difficulties when changing the fps. I also tried StopMotion: also pretty buggy, I had to go into preferences and modify the encoding commands to get a result, but it's the best result I got so far. But neither of those has effects to make transitions smoother... I tried several slideshow apps: Imagination doesn't handle more than 1 image per second; PhotoFilmStrip (repo version and latest version from the website) has the same problem: even though you can go down to 0.1 seconds per image, it still behaves weirdly (going back to 1 second automatically); Videoporama doesn't start at all on 12.04. Any other idea, folks?

    Read the article

  • Commercial product using a GPL OS

    - by pfried
    We are planning to create a commercial product. The product consists of some MCUs and a small computer (we are developing on a Raspberry Pi at the moment). The computer needs an operating system, as we would like to keep things like WLAN and booting as simple as possible. We create some software running on this computer (a node.js application). Most operating systems, like Arch Linux, are licensed under the GPL. The product we would sell contains the computer with a preinstalled OS and our software. This system operates as a central access point to the MCU devices and is able to control them. We use others' software in our product; we do not modify their source code. The product (the computer part) consists of a computer, an OS and software we create. How does the use of such an OS affect the licence of our own code? Is there a possibility of avoiding the GPL for our own code, e.g. by shipping the software separately? Are there any effects on other components of our product, e.g. the MCU part? The node.js application delivers a web app to the client, where it is executed. Are there any effects there (as we would like to sell parts of the code as an additional app on the app stores)? I know we make use of the work of the community and I respect this. The problem is: the software alone is kind of useless without the MCU devices. I do not expect legal advice.

    Read the article

  • Dell XPS 15 L502x and Ubuntu 11.04 - HDMI output

    - by Jones
    Recently I bought my dream notebook, a Dell XPS 15, but since then this dream has become a kind of endless nightmare. I'm almost going crazy trying to make my graphics card driver work properly, but it seems to be just impossible. Yes, I have a 2GB NVIDIA GeForce GT 540M (Optimus) in it! It simply doesn't work. Every time I generate the xorg.conf, Ubuntu hangs while starting up, which forces me to remove this file to be able to start the notebook with the standard graphics settings. Another problem is that the Dell XPS 15 does NOT have a VGA output, but HDMI. So, to be able to use a second monitor I have to configure it through the NVIDIA X Server Settings, which only works if the driver is properly initialized with the xorg.conf. I've also tried to make it work with Bumblebee, but unfortunately it didn't help me much with the HDMI output. Do you guys have any idea how to solve this deadlock? Is there any way for me to use my second monitor?

    Read the article

  • 12.04 LTS boot hangs at "SP5100 TCO timer: mmio address 0xfec000f0 already in use", didn't yesterday

    - by DarkIron112
    Dual-booting Windows 7 and Ubuntu 12.04 LTS. I went to reboot from Windows to Ubuntu and found a few interesting things. My POST screen is covered in blocks of epileptic colors until I hit GRUB, and this continues when I try to boot into Ubuntu. These color blocks don't appear when I use my onboard VGA, so I'll attribute them to the card. GRUB's dimensions are swapped (card vs. onboard, probably), and when using the onboard VGA the GRUB timeout counter works, but when using my card it does not (see "[!!!]" below for more information). Booting into Ubuntu directly causes the error: SP5100 TCO timer: mmio address 0xfec000f0 already in use. Booting into recovery mode, meanwhile, and then "resuming normal boot" gets me to the desktop, but without the native 1440x900 resolution, and the graphics drivers can't tell what monitor they're looking at (I assume this is because it's not a full graphical boot and, as it says, some drivers won't run?). [!!!] When I reboot after going into recovery mode, the countdown timer works ONCE, puts me back into the default Ubuntu boot, and then does not work again until after another recovery-mode boot. Windows 7 boots perfectly, with no issues whatsoever from epileptic color blocks or driver detection. This makes me wonder why the POST screen can't handle my video card anymore. Amidst all the diagnostics I opened my case and re-seated the video card securely, making sure it wasn't a loose connection, but this did nothing to help me. Hardware: I am running an NVidia GeForce GTX 8800 video card in a PCI slot. I have 4.8GiB memory and an AMD Athlon II quad-core 640 processor on an MSI K9N6GM series mobo. Onboard video is an NVidia GeForce MCP61(V/S/P) card. Note: I did not have any of these problems yesterday. I have been using Ubuntu intensively for a week, and it's been working flawlessly for months. I've recently been using it to mod my Android phone; perhaps I messed something up in the file system?

    Read the article

  • Compiz slow under proprietary nvidia driver

    - by gsedej
    Hi! I am using Ubuntu 10.10 and have a problem with the proprietary nvidia driver for my GeForce GTS 250: poor Compiz performance. There is also the open-source "nouveau" driver. Proprietary: I tried many versions, but none works fast on the desktop. This means 30 FPS without heavy effects. Currently I am using version 270.18. Even normal desktop use feels bad (moving windows). In games (and 3D benchmarks) it is really good! (Unigine Heaven works well.) Open-source "nouveau": very fast on the desktop with heavy effects (blur, ...). I get 300 FPS and more, even in Expo mode. Games were good, but not as good as with the proprietary driver. And the driver causes Xorg to crash, even the latest (ppa:xorg-edgers/nouveau), so I switched back to the proprietary one. I also have a computer with Ubuntu 10.04, a GeForce 8600GT and drivers around 185.x, and Compiz works great there. There is a similar question: Nvidia proprietary driver performance in 10.10. Which version of the nvidia (proprietary) driver is fast in Compiz in Ubuntu 10.10? How do you install a specific version of the nvidia driver? Is it the case that each newer driver works slower with Compiz?

    Read the article

  • At the time of installing Ubuntu, I am getting a dark black screen only

    - by faruque
    I am trying to install Ubuntu 12.04 LTS dual boot with Windows 7, but when I click on Try or even Install Ubuntu, I get only a black screen. I can't see any text or anything else. When I look at my laptop's screen up close, "ubuntu" is shown in the middle of the screen, but the screen is dark black. Because of this I am unable to install Ubuntu on my laptop. Please help in this regard. Following are the details of my laptop: Manufacturer: Acer Aspire 4736; Processor: Intel Core 2 Duo CPU T660; Graphics driver: Mobile Intel(R) 4 Series Express Chipset Family (Microsoft Corporation - WDDM 1.1), current version installed 8.15.10.2302. In Ubuntu 11.04 I know how to boot with nomodeset, but I don't know how to boot with nomodeset in Ubuntu 12.04 LTS, because there is no option shown for the F6 key. My laptop is an Acer Aspire 4736, and my video/graphics card shows as unknown to Ubuntu. Please, someone help me. Can changing or upgrading my laptop's graphics card solve this problem? If yes, which graphics card should I go for that is supported by Ubuntu and other Linux distros? Please, someone help.

    Read the article

  • How to handle multiple effect files in XNA

    - by Adam 'Pi' Burch
    So I'm using ModelMesh and its built-in Effects property to draw a mesh with some shaders I'm playing with. I have a simple GUI that lets me change these parameters to my heart's desire. My question is, how do I handle shaders that have unique parameters? For example, I want a 'shiny' parameter that affects shaders with Phong-type specular components, but for an environment-mapping shader such a parameter doesn't make a lot of sense. How I have it right now is that every time I call the ModelMesh's Draw() function, I set all the Effect parameters like so:

        foreach (ModelMesh m in model.Meshes)
        {
            if (isDrawBunny == true) // Slightly change the way the world matrix is calculated if using the bunny object, since it is not quite centered in object space
            {
                world = boneTransforms[m.ParentBone.Index] * Matrix.CreateScale(scale) * rotation
                    * Matrix.CreateTranslation(position + bunnyPositionTransform);
            }
            else // If not rendering the bunny, draw normally
            {
                world = boneTransforms[m.ParentBone.Index] * Matrix.CreateScale(scale) * rotation
                    * Matrix.CreateTranslation(position);
            }

            foreach (Effect e in m.Effects)
            {
                Matrix ViewProjection = camera.ViewMatrix * camera.ProjectionMatrix;
                e.Parameters["ViewProjection"].SetValue(ViewProjection);
                e.Parameters["World"].SetValue(world);
                e.Parameters["diffuseLightPosition"].SetValue(lightPositionW);
                e.Parameters["CameraPosition"].SetValue(camera.Position);
                e.Parameters["LightColor"].SetValue(lightColor);
                e.Parameters["MaterialColor"].SetValue(materialColor);
                e.Parameters["shininess"].SetValue(shininess);
                //e.Parameters
                //e.Parameters["normal"]
            }
            m.Draw();
        }

    Note the prescience of the example! The solutions I've thought of involve preloading all the shaders and updating the unique parameters as needed. So my question is, is there a best practice I'm missing here? Is there a way to pull the parameters a given Effect needs from that Effect? Thank you all for your time!
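
    The "update the unique parameters as needed" idea often reduces to a guard like the sketch below. It assumes the parameter names used in the code above ("ViewProjection", "shininess") and relies on the string indexer of Effect.Parameters returning null when the compiled effect has no parameter with that name, so shared parameters can be set unconditionally while shader-specific ones are checked first:

        foreach (Effect e in m.Effects)
        {
            // Parameters every effect in this project is expected to expose.
            e.Parameters["ViewProjection"].SetValue(camera.ViewMatrix * camera.ProjectionMatrix);
            e.Parameters["World"].SetValue(world);

            // Shader-specific parameter: only set it if this effect declares it,
            // so Phong-style and environment-mapping shaders can share one loop.
            EffectParameter shiny = e.Parameters["shininess"];
            if (shiny != null)
            {
                shiny.SetValue(shininess);
            }
        }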

    Read the article

  • Unity 3D does not work on Dell system with an AMD Radeon HD 6470M

    - by VeeKay
    I am running 64-bit Ubuntu on a Dell with a 1GB graphics card. I log in with "Ubuntu", hoping to see Unity 3D, but it doesn't appear; Unity 2D runs instead. When I type echo "$DESKTOP_SESSION" it confirms Unity 2D. I've checked the System Info, and the graphics row shows itself as empty. So I've presumed that the graphics drivers aren't detected, and hence I went to Additional Drivers and installed the fglrx driver that the UI suggested. Even after installing it, the graphics part in the System Info details still shows nothing, and Unity 2D runs in spite of all the effort. Please help! How can I get my Unity 3D back? Hardware info: Video card: AMD Radeon™ HD 6470M - 1GB (For ICC); RAM: 6GB (1 x 2GB + 1 x 4GB) 2 DIMM DDR3 1333MHz; OS: 64-bit Ubuntu 11.10. Edit: output of /usr/lib/nux/unity_support_test -p:

        X Error of failed request:  BadRequest (invalid request code or no such operation)
          Major opcode of failed request:  155 (GLX)
          Minor opcode of failed request:  19 (X_GLXQueryServerString)
          Serial number of failed request:  21
          Current serial number in output stream:  21

    Read the article

  • I am having trouble booting 12.04: keyboard and little man in circle appear, and then nothing

    - by Rich J.
    I have a new system (ASUS 990FX motherboard, two Western Digital drives, 32 GB memory, an ASUS NVIDIA video card (GeForce GTX 560), and an ASUS DVD burner (24B1ST)), and I am struggling to get the 12.04 CD to work. I have been able to see the little man inside a circle and a graphic of a keyboard, with an == sign between them. Is this a clue? What does it mean? After that, the mauve screen with the keyboard and little man in a circle goes away. I get a dark background and an underline cursor... I even hear some work being done reading the ROM... but nothing appears on the screen. I haven't even got to the point where the graphics card is displaying improperly; it is just not displaying anything. If any kind soul has an idea about how to proceed, I am all ears. BTW, I have posted the issue to ASUS and am waiting for a reply.

    Read the article

  • Disable discrete AMD GPU

    - by Smajl
    My notebook has two graphics cards and it suffers from severe overheating after installing Ubuntu (no problem with Windows 7 on the same machine). I figured out that the problem may be the graphics card, and I would like to disable the discrete one. I followed some tutorials on this topic (for example http://planetoss.com/articles/how-to-disable-the-discrete-amd-graphics-card-in-linux/), but the problem is that after executing the commands nothing really happens and both GPUs are still running. Here is what I have done:

        smajl@smajl-mini:~$ sudo chown smajl /sys/kernel/debug/vgaswitcheroo/switch
        smajl@smajl-mini:~$ echo IGD > /sys/kernel/debug/vgaswitcheroo/switch
        smajl@smajl-mini:~$ sudo cat /sys/kernel/debug/vgaswitcheroo/switch
        0:IGD:+:DynPwr:0000:01:05.0
        1:DIS-Audio: :Pwr:0000:02:00.1
        2:DIS: :DynPwr:0000:02:00.0
        smajl@smajl-mini:~$ echo OFF > /sys/kernel/debug/vgaswitcheroo/switch
        smajl@smajl-mini:~$ sudo cat /sys/kernel/debug/vgaswitcheroo/switch
        0:IGD:+:DynPwr:0000:01:05.0
        1:DIS-Audio: :Pwr:0000:02:00.1
        2:DIS: :DynPwr:0000:02:00.0

    What am I missing here? Also, more on the overheating topic: 1) installed TLP, 2) updated the system, 3) set the power setting mode to "power save"... and nothing helps. I tried the same thing with Linux Mint without success. Is there anything else to try if I manage to disable the second GPU and the problem persists? Otherwise I would have to go back to Windows in order not to melt my laptop... :-/

    Read the article

  • When to use each user research method

    - by user12277104
    There are a lot of user research methods out there, but sometimes we get stuck in a rut, conducting all formative usability testing before coding, or running surveys to gather satisfaction data. I'll be the first to admit that it happens to me, but to get out of a rut, it just takes a minute to look at where I am in the design & development cycle, what kind(s) of data I need, and what methods are available to me. We need reminders, or refreshers, every once in a while. One tool I've found useful is a graphic organizer that I created many years ago. It's been through several revisions, as I've adapted it to the product cycles of the places I've worked, changed my mind about how to categorize it, and added methods that I've used or created over time. I shared a version of this table at the 2012 International UPA conference, and I was contacted by someone yesterday who wanted to use it in a university course on user-centered design. I was flattered at the thought, but embarrassed, because I was sure it needed updating; that was a year ago, after all. But I opened it today, and really, there's not much I'd change. Sure, I could add some nuance regarding types of formative testing, such as modality (remote, unmoderated remote, or in-person) or flavor of testing (RITE, RITE-Krug, comparative, performance), but I think it's pretty much ok as is. Click on the image below to get the full-size PDF. And whether it's entirely "right" or "wrong" isn't the whole value of looking at these methods across the product lifecycle. The real value lies in the reminder that I have options. And what those options are changes as the field changes, so while I don't expect this graphic to have an eternal shelf life, it's still ok a year after I last updated it. That said, if you find something missing or out of place, let me know :)

    Read the article

  • Cocos2d v2.0 and OpenGL 2.0/1.0: where to start

    - by mm24
    I started developing my very first game 3 months ago using Cocos2d 2.0 for iPhone. I am now at the stage where I'd like to add some cool effects to the bullets and some special weapons (see my waveforms question here). I got a good answer in the cocos2d-iphone forum (see this one). Unfortunately I am a bit paralyzed now. I don't know if I would be overdoing it by learning OpenGL 2.0, or if I should just stick to the old 1.0. There is a good intro to various tutorials written in Steffen Itterheim's blog (see this post). I would like to add to my game: a blur effect for the bullets (here is a tutorial for OpenGL 1.0), a waveform (see above), and some realistic water ripples (here is a nice sample code). So now, given that I don't want to overdo things but at the same time I want to achieve those effects, where should I start? Should I discard the OpenGL 1.0 tutorials? Or should I use only OpenGL 1.0 code? How can I avoid confusion? I mean, it seems that the compiler recognizes both, but that there are some conflicting calls in some circumstances. I am fairly sure this has some explanation; is there a reference to this somewhere?

    Read the article

  • Effect of using dedicated NVidia card instead of Intel HD4000

    - by Sman789
    Short version: can someone please advise me on the effect of adding a dedicated NVIDIA GeForce GT 630M card to an Ubuntu laptop, in terms of power consumption and performance gains/losses when doing general productivity tasks and booting up. Also, how good are the closed-source, open-source, and Bumblebee drivers for these newer cards compared to support for the Intel HD4000? Long version/background, if any info here is helpful: I'm thinking of ordering a laptop from PC Specialist (a UK company that actually sells machines without Windows pre-installed) with the following specifications: Genesis IV: 15.6" AUO Matte 95% Gamut LED Widescreen (1920x1080); Intel® Core™ i5 Dual Core Mobile Processor i5-3210M (2.50GHz) 3MB; 4GB SAMSUNG 1600MHz SODIMM DDR3 MEMORY (1 x 4GB); 120GB INTEL® 520 SERIES SSD, SATA 6 Gb/s (up to 550MB/s read, 520MB/s write); Intel 2 Channel High Definition Audio + MIC/Headphone Jack; GIGABIT LAN & WIRELESS INTEL® N135 802.11N (150Mbps) + BLUETOOTH. Now, as I want this laptop mainly for work and not for games, I would be more than content with the HD4000 integrated graphics that come with the processor. However, for compatibility reasons, I am not able to get the specs I want unless I choose an NVIDIA GeForce GT 630M 1GB graphics card, which I don't have a great deal of use for. I'm willing to buy it, however, as it's still cheaper than any other laptop with the specs I want. However, I know that Linux power management isn't fantastic with open-source graphics drivers, and I don't know much about Bumblebee. Basically, whilst I'm happy to 'tolerate' the card being there, I don't want to experience any negative effects on the rest of my system (battery, performance etc.), and if there are likely to be any, I might reconsider my purchase. So if anyone can advise me on the effects, I would be very grateful, since I doubt I can just turn the card off. Thank you for any assistance :)

    Read the article

  • Wrong resolution for Lightdm/GDM on Ubuntu 13.04 using HDMI

    - by f03lipe
    I've tried all the solutions I could find on the matter so far, but the error persists. My problem is that the login screen (both under gdm and lightdm) runs at the wrong resolution, even though all is fine once I log in. The error occurs only when I have my HDMI cable connected to my other screen. The login screen resolution becomes 1024x768 (for my 1366x768 laptop screen) and is mirrored on my other screen, which is 1920x1080. I had this issue on version 12.04 (the last one before I upgraded to 13.04), but I got it fixed by adding the xrandr commands at the beginning of the /etc/gdm/Init/Default file. This doesn't seem to work anymore. I've also tried telling lightdm to run a script fixing the resolution with xrandr (by editing /etc/lightdm/lightdm.conf), but lightdm crashes and I'm forced to log in with low graphics settings. Hint: when Ubuntu is loading, the resolution starts out OK, then goes bad right before the login screen is initialized. Does that mean there's nothing wrong with my graphics cards? What do you think? Cheers!

    Read the article

  • Converting a DrawModel() using BasicEffect to one using Effect

    - by Fibericon
    Take this DrawModel() provided by MSDN:

        private void DrawModel(Model m)
        {
            Matrix[] transforms = new Matrix[m.Bones.Count];
            float aspectRatio = graphics.GraphicsDevice.Viewport.Width / graphics.GraphicsDevice.Viewport.Height;
            m.CopyAbsoluteBoneTransformsTo(transforms);
            Matrix projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f);
            Matrix view = Matrix.CreateLookAt(new Vector3(0.0f, 50.0f, Zoom), Vector3.Zero, Vector3.Up);

            foreach (ModelMesh mesh in m.Meshes)
            {
                foreach (BasicEffect effect in mesh.Effects)
                {
                    effect.EnableDefaultLighting();
                    effect.View = view;
                    effect.Projection = projection;
                    effect.World = gameWorldRotation * transforms[mesh.ParentBone.Index] * Matrix.CreateTranslation(Position);
                }
                mesh.Draw();
            }
        }

    How would I apply a custom effect to a model with that? Effect doesn't have View, Projection, or World members. This is what they recommend replacing the foreach loop with:

        foreach (ModelMesh mesh in terrain.Meshes)
        {
            foreach (Effect effect in mesh.Effects)
            {
                mesh.Draw();
            }
        }

    Of course, that doesn't really work. What else needs to be done?
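
    A minimal sketch of the usual missing step, assuming the custom .fx file exposes matrix parameters named "World", "View" and "Projection" and has already been assigned to the model's mesh parts (e.g. part.Effect = customEffect), so that mesh.Draw() uses it:

        foreach (ModelMesh mesh in m.Meshes)
        {
            foreach (Effect effect in mesh.Effects)
            {
                // A custom Effect has no typed World/View/Projection properties,
                // so the matrices are handed over through named parameters instead.
                effect.Parameters["World"].SetValue(
                    gameWorldRotation * transforms[mesh.ParentBone.Index] * Matrix.CreateTranslation(Position));
                effect.Parameters["View"].SetValue(view);
                effect.Parameters["Projection"].SetValue(projection);
            }
            mesh.Draw(); // draws with whatever Effect each ModelMeshPart currently holds
        }

    The parameter names are whatever the shader declares; the ones above are only a common convention, not something the Effect class itself provides.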

    Read the article

  • Possible to draw a select portion of a render target? (in XNA)

    - by TheBroodian
    I'm going to do this in reverse fashion and skip straight to the punch line, then give the back story afterward. Is it possible, after drawing a scene to a RenderTarget2D, to draw only a select portion of that RenderTarget2D if I don't want the entire thing? I'm using xTile to manage world data in my game (it's a great piece of work; colinvella [xTile's author] has made an amazing product), and for the most part it works great. xTile supports parallax effects in its layers to add some wonderful depth to 2D scenes, which was great until I implemented a dynamic split-screen system in my game. I wanted to make a co-op game that wouldn't require players to be in close proximity to each other, so I made it so that if the players get too far apart, the single full-screen viewport 'snaps apart' and is replaced by two split-screen viewports, which then smoothly transition to their respective player targets. The effect is pretty smooth, aside from the part where the parallax backgrounds become skewed once the viewports split, because xTile's ratio for handling parallax effects depends on viewport size. This is unfortunate, because the effect would otherwise be really snazzy, but the backgrounds become pretty heavily affected when the game goes from single-viewport to multi-viewport. So colinvella suggests using render targets to record the scene at full viewport size, and then only drawing a portion of it. But as far as I can tell, that isn't even possible? That being said, I've never even used render targets before, so I'm still learning, hence the question here.
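
    For what it's worth, a minimal sketch of drawing only a sub-rectangle of a render target in XNA 4.0: a RenderTarget2D is a Texture2D, so the SpriteBatch.Draw overload that takes a source rectangle applies. The names (sceneTarget, DrawScene, viewport) are placeholders, and the half-width source is just an example split:

        // Render the full-size scene once.
        GraphicsDevice.SetRenderTarget(sceneTarget);
        DrawScene();
        GraphicsDevice.SetRenderTarget(null);

        // Then present only the slice this player's viewport should see.
        Rectangle source = new Rectangle(0, 0, sceneTarget.Width / 2, sceneTarget.Height);
        Rectangle destination = new Rectangle(0, 0, viewport.Width, viewport.Height);

        spriteBatch.Begin();
        spriteBatch.Draw(sceneTarget, destination, source, Color.White);
        spriteBatch.End();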

    Read the article

  • OOP implementation of buffs and stats: suggestions

    - by Mattia Manzo Manzati
    I am developing an MMORPG server using Node.js. I am not sure how to implement buffs. I mean, equipped objects or used skills have effects on the Player(), which has many Stats(), some of which have a max cap... Effects can change a Stat's value, increasing or decreasing it by a flat value or a percentage, or completely rewriting the value of the stat. After a while I decided to create a base class for buffs, which can be hidden (if they are cast from an equipped object) or shown if they come from an ability (spell). Anyway, I need suggestions on how to implement it: use an array of all active buffs for a stat and have a function calculate the value of the stat, affected by buffs, each time I need the value of the stat, or...? Other, more OOP, ways to do it? I have read this: What's a way to implement a flexible buff/debuff system? But that implements only a percentage system, where buffs can only say "+10%, +20%, etc...", while I would love to have a hybrid system that can have percentage values or static values (like WoW does); and with modifiers it's hard to implement, because modifiers refer to the current value of the stat :/ Thanks for suggestions :)
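
    One shape such a hybrid often takes, as a rough sketch only (the question targets a Node.js server; this is written in C# purely for illustration, and every name in it is made up): keep flat, percentage, and override modifiers separate and always recompute the effective value from the base stat, which avoids the trap of modifiers referring to the stat's current value.

        using System;
        using System.Collections.Generic;

        class Buff
        {
            public float Flat;      // e.g. +10
            public float Percent;   // e.g. +0.20f for +20%
            public float? Override; // completely rewrites the stat when set
        }

        class Stat
        {
            public float Base;
            public float MaxCap = float.MaxValue;
            private readonly List<Buff> buffs = new List<Buff>();

            public void Add(Buff b)    { buffs.Add(b); }
            public void Remove(Buff b) { buffs.Remove(b); }

            // Overrides win, then flat bonuses, then percentages, then the cap.
            public float Value
            {
                get
                {
                    float flat = 0f, percent = 0f;
                    float? overrideValue = null;
                    foreach (Buff b in buffs)
                    {
                        flat += b.Flat;
                        percent += b.Percent;
                        if (b.Override.HasValue) overrideValue = b.Override;
                    }
                    float value = overrideValue ?? (Base + flat) * (1f + percent);
                    return Math.Min(value, MaxCap);
                }
            }
        }

    Whether the value is recomputed on every read (as here) or cached and invalidated when a buff is added or removed is a separate trade-off; the important part is that nothing ever mutates the base value in place.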

    Read the article

  • Impact Earth Lets You Simulate Asteroid Impacts

    - by Jason Fitzpatrick
    If you’re looking for a little morbid simulation to cap off your Friday afternoon, this interactive asteroid impact simulator makes it easy to see the results of asteroid impacts big and small. The simulator is the result of a collaboration between Purdue University and Imperial College London. You can adjust the size, density, impact angle, and impact velocity of the asteroid, as well as change the target from water to land. The only feature missing is the ability to select a specific location as the point of impact (if you want to know what a direct strike on Paris would yield, for example, you’ll have to do your own layering). Once you plug all that information in, you’re treated to a little 3D animation as the simulator crunches the numbers. After it finishes, you’ll see a breakdown of a variety of effects, including the size of the crater, the energy of the impact, seismic effects, and more. Hit up the link below to take it for a spin. Impact Earth [via Boing Boing]

    Read the article

  • radeon display driver clones monitors while using Xinerama

    - by gregmuellegger
    I'm trying to get my two Radeon HD 4770 cards working with three monitors. Xinerama works so far, in the sense that I have two fully working monitors where I can move windows from one to the other. My problem now is that my third monitor is a clone of my second monitor (displaying the exact same thing). These monitors are connected to the same graphics card ("Screen Middle" and "Screen Right" in the xorg.conf below). Here is my xorg.conf:

        Section "ServerLayout"
            Identifier "ThreeMonitors"
            Screen "Screen Left" 0 0
            Screen "Screen Middle" RightOf "Screen Left"
            Screen "Screen Right" RightOf "Screen Middle"
            Option "Xinerama"
        EndSection

        Section "Monitor"
            Identifier "Monitor Left"
            Option "DPMS"
        EndSection

        Section "Monitor"
            Identifier "Monitor Middle"
            Option "DPMS"
        EndSection

        Section "Monitor"
            Identifier "Monitor Right"
            Option "DPMS"
        EndSection

        Section "Device"
            Identifier "Device Left"
            Driver "radeon"
            VendorName "ATI Technologies Inc"
            BoardName "ATI Radeon HD 4770 [RV740]"
            BusID "PCI:3:0:0"
            Screen 0
        EndSection

        Section "Screen"
            Identifier "Screen Left"
            Device "Device Left"
            Monitor "Monitor Left"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

        Section "Device"
            Identifier "Device Middle"
            Driver "radeon"
            VendorName "ATI Technologies Inc"
            BoardName "ATI Radeon HD 4770 [RV740]"
            BusID "PCI:2:0:0"
            Screen 0
        EndSection

        Section "Screen"
            Identifier "Screen Middle"
            Device "Device Middle"
            Monitor "Monitor Middle"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

        Section "Device"
            Identifier "Device Right"
            Driver "radeon"
            VendorName "ATI Technologies Inc"
            BoardName "ATI Radeon HD 4770 [RV740]"
            BusID "PCI:2:0:1"
            Screen 1
        EndSection

        Section "Screen"
            Identifier "Screen Right"
            Device "Device Right"
            Monitor "Monitor Right"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

    I'm using a fresh Kubuntu 10.10 installation with proposed-updates enabled, since this repo contains an xorg fix for using multiple graphics cards. I hope someone can help me out. Very many thanks!!

    Read the article
