Search Results

Search found 13534 results on 542 pages for 'gpu programming'.


  • Huge battery drain with a dual-graphics laptop (only the integrated GPU in use)

    - by Noel
    I use a laptop with an Intel Core i7 (Sandy Bridge) with integrated Intel HD 3000 graphics as well as an Nvidia GeForce GTS 555M. So far I had the impression my laptop was running on the Nvidia adapter only, because the fan was always at highest speed (and loudest noise) and the machine got very hot even when idle. The battery is also empty after ~40-50 minutes (compared to ~4-5 hours with Intel graphics in Windows 7). Since that can't be healthy, I wanted to switch to the integrated graphics instead, and I was fairly surprised when System Information showed "Intel M" as the graphics adapter in use. Why does my battery drain so fast under Ubuntu if the NVIDIA adapter isn't being used? Summary: I DON'T WANT to use the Nvidia adapter (Optimus), I just want the Intel solution. As far as I understand, the Intel solution is already running, yet it empties my battery roughly ten times as fast as Windows 7. What is wrong? Any ideas?
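
    One way to sanity-check which driver is actually loaded is to look at the kernel's own bookkeeping. Below is a minimal Python sketch (the paths are assumptions: /proc/modules is standard, but the vga_switcheroo debugfs file only exists on muxed hybrid laptops with a supporting driver, and Optimus machines usually rely on bbswitch/bumblebee instead):

    ```python
    # Rough check of which graphics driver/GPU the kernel is using on a hybrid laptop.
    # Assumptions: /proc/modules is standard; /sys/kernel/debug/vgaswitcheroo/switch
    # only exists when the kernel exposes switchable graphics (and usually needs root).
    from pathlib import Path

    def loaded_gpu_modules():
        # /proc/modules lists every loaded kernel module; the name is the first token per line.
        tokens = Path("/proc/modules").read_text().split()
        return [m for m in ("i915", "nouveau", "nvidia") if m in tokens]

    def switcheroo_status():
        p = Path("/sys/kernel/debug/vgaswitcheroo/switch")
        return p.read_text() if p.exists() else None

    if __name__ == "__main__":
        print("GPU-related kernel modules loaded:", loaded_gpu_modules())
        status = switcheroo_status()
        print(status or "vga_switcheroo not available (typical for Optimus/muxless setups)")
    ```

    If only i915 shows up and nouveau/nvidia are absent, the Intel GPU is driving the display; the discrete card may nonetheless still be powered on and draining the battery, which is exactly the problem bbswitch/bumblebee is meant to address.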

    Read the article

  • How do I determine whether bumblebee is working as expected?

    - by Christian Fazzini
    I followed the instructions at https://wiki.ubuntu.com/Bumblebee: sudo add-apt-repository ppa:bumblebee/stable; sudo add-apt-repository ppa:ubuntu-x-swat/x-updates; sudo apt-get update. Then, instead of installing the proprietary Nvidia drivers via "sudo apt-get install bumblebee bumblebee-nvidia linux-headers-generic", I did "sudo apt-get install --no-install-recommends bumblebee linux-headers-generic". How do I determine that power-saving mode is active and that my dedicated GPU isn't running? One thing that bugs me: if I go to System Settings - Details - Graphics, the driver is shown as Unknown.
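
    For a scriptable check, one hedged sketch (assuming bumblebee is using the bbswitch kernel module, which exposes /proc/acpi/bbswitch; if that file is missing, this check simply doesn't apply):

    ```python
    # Check whether bbswitch (used by bumblebee for power saving) reports the
    # discrete GPU as powered down. Assumption: the bbswitch module is loaded
    # and exposes /proc/acpi/bbswitch, e.g. "0000:01:00.0 OFF".
    from pathlib import Path

    BBSWITCH = Path("/proc/acpi/bbswitch")

    if BBSWITCH.exists():
        state = BBSWITCH.read_text().strip()
        print("bbswitch reports:", state)
        print("Discrete GPU is", "powered down" if state.endswith("OFF") else "powered on")
    else:
        print("bbswitch not loaded; check whether the bumblebeed service is running instead")
    ```

    Comparing the frame rate of glxgears run normally against optirun glxgears is another common smoke test: the optirun case should be noticeably faster if the discrete GPU is actually being used.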

    Read the article

  • Problems when rendering code on Nvidia GPU

    - by 2am
    I am following the OpenGL 4.0 Shading Language Cookbook. I have rendered a tessellated quad, as you can see in the screenshot below, and I am moving the Y coordinate of every vertex with a time-based sin function, as given in the code in the book. As the text in the image shows, the program runs perfectly on my processor's built-in Intel HD graphics, but my laptop also has Nvidia GT 555M graphics (switchable graphics, by the way), and when I run the program on the Nvidia card, the OpenGL shader compilation fails. It fails on the following instruction: pos.y = sin.waveAmp * sin(u); giving the error "Error C1105: Cannot call a non-function". I know the error is raised for the sin(u) call you can see in that instruction, but I am not able to understand why. When I removed sin(u) from the code, the program ran fine on the Nvidia card, and it runs fine with sin(u) on the Intel HD 3000 graphics. Also, notice that the program is almost unusable with Intel HD 3000 graphics: I am getting only 9 FPS, which is not enough; it's too much load for the Intel HD 3000. So, is the sin() function not defined in the GLSL implementation shipped by the Nvidia drivers, or is it something else?

    Read the article

  • Modern OpenGL tutorial: VBO indexing, optimize your GPU buffers in OpenGL 3 and above

    Hello everyone. The 2D/3D/Games section is pleased to present the next installment in the series of tutorials devoted to modern OpenGL (versions from OpenGL 3.3 onward). These tutorials will let you pick up the new OpenGL concepts easily, so you can get the most out of the latest features of your graphics cards. This ninth tutorial teaches you how to optimize your buffers by indexing your VBOs. Happy reading.
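
    For readers who want to see what indexed drawing looks like in practice, here is a hedged sketch using Python with PyOpenGL, GLFW and NumPy (all assumed to be installed; it only uploads the buffers and leaves the actual draw call, which needs vertex attributes and a shader wired up, as a comment):

    ```python
    # Sketch: uploading an indexed quad (4 vertices reused by 6 indices) with PyOpenGL.
    # Assumes the glfw, PyOpenGL and numpy packages are available.
    import glfw
    import numpy as np
    from OpenGL.GL import (
        glGenVertexArrays, glBindVertexArray, glGenBuffers, glBindBuffer, glBufferData,
        GL_ARRAY_BUFFER, GL_ELEMENT_ARRAY_BUFFER, GL_STATIC_DRAW,
    )

    # A hidden window just to get an OpenGL 3.3 core context.
    if not glfw.init():
        raise RuntimeError("GLFW initialisation failed")
    glfw.window_hint(glfw.CONTEXT_VERSION_MAJOR, 3)
    glfw.window_hint(glfw.CONTEXT_VERSION_MINOR, 3)
    glfw.window_hint(glfw.OPENGL_PROFILE, glfw.OPENGL_CORE_PROFILE)
    glfw.window_hint(glfw.VISIBLE, glfw.FALSE)
    window = glfw.create_window(64, 64, "indexed-vbo", None, None)
    if not window:
        raise RuntimeError("could not create an OpenGL 3.3 context")
    glfw.make_context_current(window)

    # Four vertices stored once, six indices describing two triangles that reuse them.
    vertices = np.array([-1, -1, 0,  1, -1, 0,  1, 1, 0,  -1, 1, 0], dtype=np.float32)
    indices = np.array([0, 1, 2,  2, 3, 0], dtype=np.uint32)

    vao = glGenVertexArrays(1)          # core profile needs a VAO to hold buffer state
    glBindVertexArray(vao)

    vbo = glGenBuffers(1)
    glBindBuffer(GL_ARRAY_BUFFER, vbo)
    glBufferData(GL_ARRAY_BUFFER, vertices.nbytes, vertices, GL_STATIC_DRAW)

    ebo = glGenBuffers(1)
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo)
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.nbytes, indices, GL_STATIC_DRAW)

    # In a render loop, after setting up vertex attributes and a shader program:
    # glDrawElements(GL_TRIANGLES, indices.size, GL_UNSIGNED_INT, None)
    print("uploaded", vertices.nbytes, "bytes of vertices and", indices.nbytes, "bytes of indices")
    glfw.terminate()
    ```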

    Read the article

  • GPU Temperature comparisons between similar GPUs of the same line

    - by White Phoenix
    As a general rule of thumb, if there are two similar GPUs from the same line (e.g. an NVIDIA GTX 260 vs. an NVIDIA GTX 280), will the less powerful or the more powerful GPU run hotter? This assumes all other factors stay the same - hardware configuration, cooling setup, in-game settings, etc. Or does this depend entirely on the GPU itself? I do remember reading that the GTX 280 in this example was terribly inefficient in terms of power and cooling.

    Read the article

  • Why Are We Still Using CPUs Instead of GPUs?

    - by Jason Fitzpatrick
    Increasingly, GPUs are being used for non-graphical tasks like risk computations, fluid dynamics calculations, and seismic analysis. What's to stop us from adopting GPU-driven devices? Today's Question & Answer session comes to us courtesy of SuperUser, a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    Read the article

  • GPU optimization question: pre-computed or procedural?

    - by Jay
    Good morning. I'm learning shader programming and need some general direction. I want to add noise to my laser beam (like this). What is the best way to handle it? I could pre-compute an image and pass it to the shader, then use the image to modulate opacity and easily animate the smoke by changing the offset of the texture lookup. I could also generate the noise in the shader and use it the same way the texture would be used. Is it generally better to avoid I/O to the graphics card, or the opposite? Thanks!
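
    As a concrete point of comparison for the "pre-compute an image" route, here is a hedged sketch (NumPy assumed; the noise recipe is illustrative, not taken from any particular engine) that bakes a small value-noise image on the CPU. The resulting array would be uploaded once as a texture and then sampled in the shader with an animated offset:

    ```python
    # Bake a small value-noise image on the CPU, to be uploaded once as a texture.
    # Illustrative only: a few octaves of bilinearly upsampled random grids.
    import numpy as np

    def bilinear_upsample(grid, size):
        """Bilinearly stretch a small (c x c) grid up to (size x size)."""
        c = grid.shape[0]
        coords = np.linspace(0, c - 1, size)
        i0 = np.floor(coords).astype(int)
        i1 = np.minimum(i0 + 1, c - 1)
        t = coords - i0
        rows = grid[:, i0] * (1 - t) + grid[:, i1] * t                    # blend along x
        return rows[i0, :] * (1 - t)[:, None] + rows[i1, :] * t[:, None]  # blend along y

    def bake_noise(size=256, octaves=4, seed=0):
        """Sum a few octaves of value noise; the result lies in [0, 1]."""
        rng = np.random.default_rng(seed)
        img = np.zeros((size, size))
        amp, total = 1.0, 0.0
        for o in range(octaves):
            cells = 4 * 2 ** o                       # 4, 8, 16, 32 control points per side
            img += amp * bilinear_upsample(rng.random((cells, cells)), size)
            total += amp
            amp *= 0.5
        return (img / total).astype(np.float32)      # single-channel texture data

    if __name__ == "__main__":
        tex = bake_noise()
        print(tex.shape, float(tex.min()), float(tex.max()))
    ```

    The trade-off in the question is essentially a one-time texture upload plus a texture fetch per fragment versus extra ALU work per fragment for procedural noise; which is faster depends on the hardware and on how often the noise parameters need to change.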

    Read the article

  • Scaling Literate Programming?

    - by Tetha
    Greetings. I have been looking at Literate Programming a bit now, and I do like the idea behind it: you basically write a little paper about your code, recording the design decisions, the code that will probably surround the module, the inner workings of the module, assumptions and conclusions resulting from the design decisions, potential extensions - all of this can be written down in a nice way using TeX. Granted, the first point: it is documentation, and it must be kept up to date, but that should not be too bad, because a change should have a justification and you can write that down. However, how does Literate Programming scale? Overall, Literate Programming is still just text - very human-readable text, of course, but still text - and thus it is hard to follow for large systems. For example, I reworked large parts of my compiler to chain compile steps together with some magic, because chains like "x.register_follower(y); y.register_follower(z); y.register_follower(a); ..." got really unwieldy, and rewriting them as a chained x, y, z, a expression made it a bit better, even though this is at its breaking point too. So, how does Literate Programming scale to larger systems? Does anyone try to do that? My thought would be to use LP to specify components that communicate with each other using event streams, and to chain all of these together using a subset of graphviz. This would be a fairly natural extension to LP, as you can extract documentation - a dataflow diagram - from the net, and also generate code from it really well. What do you think of it? -- Tetha.

    Read the article

  • Different programming languages' possibilities

    - by b-gen-jack-o-neill
    Hello. This should be a very simple question. There are many programming languages out there, compiled to machine code or to managed code. I first started with ASM back in high school. Assembler is very nice, since you know exactly what the CPU does. Next (as you can see from my other questions here) I decided to learn C and C++. I chose C because, from what I read, it is the language whose output is closest to assembler-written programs. But what I want to know is: can any other Windows programming language out there call the Win32 API? To be exact, C has its special headers and functions for Win32 API interaction; is that assumed to be an important part of any programming language? Or are there languages that have no support for calling the Win32 API and just use the console for I/O plus some functions for basic file I/O? Because for Windows programming with graphical output, it is essential to have access to the Win32 API. I know this question might seem silly, but please help me; I ask for study purposes. Thanks.
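
    To make the question concrete: most languages reach the Win32 API through a foreign function interface rather than through anything built into the language itself. A hedged sketch in Python (standard-library ctypes, Windows only) calling a function exported by user32.dll:

    ```python
    # Calling the Win32 API without C, via Python's ctypes FFI. Windows only:
    # ctypes.windll is not available on other platforms.
    import ctypes

    user32 = ctypes.windll.user32        # user32.dll exports MessageBoxW
    MB_OK = 0x0
    MB_ICONINFORMATION = 0x40

    # C prototype: int MessageBoxW(HWND hWnd, LPCWSTR lpText, LPCWSTR lpCaption, UINT uType);
    result = user32.MessageBoxW(None, "Hello from ctypes", "Win32 via FFI", MB_OK | MB_ICONINFORMATION)
    print("MessageBoxW returned", result)  # 1 means IDOK
    ```

    C's windows.h headers are essentially a convenient, typed wrapper over the same exported functions, which is why almost any language with an FFI can get at them.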

    Read the article

  • How can I make video games if I don't like programming?

    - by hoper
    I am studying C++ in school (my major is computer programming). Honestly, my grades are not very good and the assignments are really hard. Sometimes I feel sad that in my future job I will spend 8-10 hours per day coding, which I find stressful. But I still want to make video games; maybe that is the only reason I am taking all of these stressful courses. I am always writing down plots, stories, characters, fictional game worlds... At one point I thought I should study something artistic such as game design rather than computer technology such as C++ or C#. However, most popular game designers (or directors), such as Kojima and Miyamoto, used to be good programmers, and companies actually promote programmers to directors because they understand how a game is made. I've tried to find colleges or universities that teach game design programs, but one article listing the top 10 game design schools in North America seems untrustworthy, because the survey company scored them only from interviews with students. I once tried to attend the Art Institute of Vancouver, which is ranked 7th according to that article, but a programmer who used to be an instructor there told me the truth: the employment rate of its graduates is low. How can I have a future making games if I don't like programming?

    Read the article

  • Does immutability entirely eliminate the need for locks in multi-processor programming?

    - by GlenPeterson
    Part 1: Clearly immutability minimizes the need for locks in multi-processor programming, but does it eliminate that need, or are there instances where immutability alone is not enough? It seems to me that you can only defer processing and encapsulate state for so long before most programs have to actually DO something. If a program performs actions on multiple processors, something needs to collect and aggregate the results. All this involves multi-process communication before, after, and possibly during some transformations. The start and end states of the machines are different. Can this always be done with no locks, just by throwing out each object and creating a new one instead of changing the original (a crude view of immutability)? What cases still require locking? I'm interested in both the theoretical/academic answer and the practical/real-world answer. I know a lot of functional programmers like to talk about "no side effects", but in the "real world" everything has a side effect: every processor cycle takes time and electricity and machine resources away from other processes. So I understand that there may be more than one perspective from which to answer this question. If immutability is safe, given certain bounds or assumptions, I want to know exactly where the borders of the "safety zone" are. Some examples of possible boundaries: I/O; exceptions/errors; interfaces with programs written in other languages; interfaces with other machines (physical, virtual, or theoretical). Special thanks to @JimmaHoffa for his comment which started this question! Part 2: Multi-processor programming is often used as an optimization technique - to make some code run faster. When is it faster to use locks vs. immutable objects? Given the limits set out in Amdahl's law, when can you achieve better overall performance (with or without the garbage collector taken into account) with immutable objects vs. locking mutable ones? Summary: I'm combining these two questions into one to try to get at where the bounding box is for immutability as a solution to threading problems.
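
    To illustrate the "collect and aggregate" point, a minimal Python sketch (threading and queue from the standard library): each worker reads shared immutable input and produces a brand-new immutable tuple, so the data itself never needs a lock, but the meeting point where results are gathered is still a synchronized structure, because queue.Queue locks internally.

    ```python
    # Immutable inputs/results remove the need to lock the data itself, but the
    # point where results are collected is still synchronized (queue.Queue uses
    # an internal lock). Pure illustration, not a benchmark.
    import queue
    import threading

    def worker(task_id, inputs, results):
        # 'inputs' is never mutated; the result is a fresh immutable tuple.
        total = sum(x * x for x in inputs)
        results.put((task_id, total))          # the synchronization happens here

    def run(num_workers=4):
        results = queue.Queue()
        data = tuple(range(1000))              # shared, immutable, safely read by every thread
        threads = [threading.Thread(target=worker, args=(i, data, results))
                   for i in range(num_workers)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        # Aggregate after join(): drain the queue into a new dict.
        return {tid: total for tid, total in (results.get() for _ in range(num_workers))}

    if __name__ == "__main__":
        print(run())
    ```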

    Read the article

  • Is it a good practice to create a list of definitions for all symbols and words in a programming language?

    - by MrDaniel
    After arriving at this point in Learning Python The Hard Way, I am wondering whether it is good practice to create a list of symbols and define what each one does (as described in the exercise quoted below) for every programming language. This seems reasonable and might be very useful when jumping between programming languages. Is this something programmers actually do, or is it just a waste of effort? Exercise 22: What Do You Know So Far? There won't be any code in this exercise or the next one, so there's no WYSS or Extra Credit either. In fact, this exercise is like one giant Extra Credit. I'm going to have you do a form of review of what you have learned so far. First, go back through every exercise you have done so far and write down every word and symbol (another name for 'character') that you have used. Make sure your list of symbols is complete. Next to each word or symbol, write its name and what it does. If you can't find a name for a symbol in this book, then look for it online. If you do not know what a word or symbol does, then go read about it again and try using it in some code. You may run into a few things you just can't find out or know, so just keep those on the list and be ready to look them up when you find them. Once you have your list, spend a few days rewriting the list and double checking that it's correct. This may get boring, but push through and really nail it down. Once you have memorized the list and what the entries do, you should step it up by writing out tables of symbols, their names, and what they do from memory. When you hit some you can't recall from memory, go back and memorize them again.

    Read the article

  • Which programming languages have helped you to understand programming better?

    - by Xaisoft
    Which programming languages not only make you more proficient in the particular language you are learning, but also have a direct impact on the way you think about and understand programming in general, thereby making you a better programmer in other languages? Basically, which languages have the biggest impact on understanding the how and why of different programming concepts? What about Scheme? I have heard good things about it. I thought about taking the simplest of problems and implementing them in various languages. Has anyone done this?

    Read the article

  • Things you should implement in your own programming language

    - by I can't tell you my name.
    I've created an experimental toy programming language with a (now) working interpreter. It is Turing-complete and has a pretty low-level instruction set. Even though everything takes four to six times more code and time than in PHP, Python, or Ruby, I still love programming all kinds of things in it. So I got the "basic" things that are written in many languages working: Hello World; input/output; countdowns (not as easy as you might think, as there are no loops); factorials; array emulation; 99 Bottles of Beer (simple, with wrong inflection); 99 Bottles of Beer (canonical); the Collatz conjecture; a quine (that was a fun one!); and a Brainf*ck interpreter (to prove Turing-completeness - that made me happy). I implemented all of the above examples because they use many different aspects of the language, they are pretty interesting, and they don't take hours to write. Now my problem is: I've run out of ideas! I can't find any more examples of problems I could solve using my language. Do you have any programming problems that fit some of the criteria above for me to work out?
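
    For reference (the toy language's own syntax isn't shown here), this is roughly what the Collatz exercise from the list looks like in Python; porting something like it is a decent test for a language without loop constructs, since the iteration has to be expressed another way:

    ```python
    # The Collatz-conjecture exercise from the list above, sketched in Python
    # purely as a reference point for porting to the toy language.
    def collatz(n):
        """Return the Collatz sequence starting at n and ending at 1."""
        seq = [n]
        while n != 1:
            n = n // 2 if n % 2 == 0 else 3 * n + 1
            seq.append(n)
        return seq

    if __name__ == "__main__":
        for start in (6, 7, 27):
            s = collatz(start)
            print(start, "->", len(s), "steps, peak", max(s))
    ```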

    Read the article

  • Graphical Programming Language

    - by prosseek
    In control engineering and instrumentation, Simulink and LabVIEW (G) are pretty popular. In ESL design, Agilent SystemVue seems to be gaining some popularity. Well-established compiler theory is almost 100% about textual languages. But what about graphical languages? Is there any noticeable research or discussion about graphical programming languages, in terms of: theory of graphical languages (syntactic/semantic analysis and whatever else is relevant); expressiveness (I actually asked a question about this at SO - http://stackoverflow.com/questions/2427496/what-do-you-mean-by-the-expressiveness-in-programming-lanuguage); and the possibilities of graphical languages in general? Or, what do you think about graphical programming languages?

    Read the article

  • Programming and art

    - by user353874
    Specialized software plays a major role in every business field. Games provide new realities and are proven child-development tools. Communication has taken on a new meaning. Information has never traveled so fast. And yet programming is never referred to as an art form. Why is that? Programming is not romantic and not natural, so we don't feel naturally attached to it. Basically, our emotions don't fit programming. But it's really cool and better than art. :D

    Read the article

  • Analysis and Design for Functional Programming

    - by edalorzo
    How do you deal with the analysis and design phases when you plan to develop a system using a functional programming language like Haskell? My background is in imperative/object-oriented programming languages, so I am used to use-case analysis and to using UML to document the design of a program. But UML is inherently tied to the object-oriented way of building software, and I am intrigued by what the best way would be to develop documentation and define software designs for a system that is going to be written in a functional language. Would you still use use-case analysis, or perhaps structured analysis and design instead? How do software architects define the high-level design of the system so that developers follow it? What do you show to your clients or to new developers when you are supposed to present a design of the solution? How do you document a picture of the whole thing without first having to write it all? Is there anything comparable to UML in the functional world?

    Read the article

  • Does a person's first programming language affect their programming style, and if so, how? [closed]

    - by Scott Walsh
    I was speaking to an experienced lecturer recently who told me he could usually tell which programming language a student had learnt to program in by looking at their coding style (more specifically, when they were programming in languages other than the one they were most comfortable with). He said there have been multiple times when he's witnessed students attempting to write C# in Prolog. So I began to wonder: what specific traits do people pick up from their first (or favourite) language that carry over into their overall programming style, and, more interestingly, what good or bad habits do you think people would benefit from, or should be wary of, when learning a specific language?

    Read the article

  • How to learn web programming (JavaScript, PHP)?

    - by metal-gear-solid
    If I'm going to learn programming for the first time, how should I start? I don't know programming yet, but I'm good at XHTML and CSS. My main aim is to learn JavaScript first and PHP second; after getting a good command of JavaScript I'll move on to PHP. I want to learn all of this to become competent in every area of WordPress design and development. Although I can already use basic JavaScript, jQuery, and PHP scripts in my projects, I now want to learn programming concepts properly and build solid knowledge.

    Read the article

  • I don't get object-oriented programming

    - by Joel J. Adamson
    Note: this question is an edited excerpt from a blog posting I wrote a few months ago. After placing a link to the blog in a comment on Programmers.SE someone requested that I post a question here so that they could answer it. This posting is my most popular, as people seem to type "I don't get object-oriented programming" into Google a lot. Feel free to answer here, or in a comment at Wordpress. What is object-oriented programming? No one has given me a satisfactory answer. I feel like you will not get a good definition from someone who goes around saying “object” and “object-oriented” with his nose in the air. Nor will you get a good definition from someone who has done nothing but object-oriented programming. No one who understands both procedural and object-oriented programming has ever given me a consistent idea of what an object-oriented program actually does. Can someone please give me their ideas of the advantages of object-oriented programming?

    Read the article

  • Dependently typed language best suited to “real world” programming?

    - by Kim
    Which dependently typed programming languages could be used for real-world application development? I will mostly be writing toy applications at first; after that, maybe web development or a simple DBMS. These are some points that I think are important: documentation, example programs, a good/big standard library, an easy-to-use foreign function interface, a community of people using the language for real-world tasks, and tool support.

    Read the article

  • What are some great papers/publications relating to game programming?

    - by Archagon
    What are some of your favorite papers and publications that closely relate to game programming? I'm particularly looking for examples that are well-written and illustrated, and/or have had a profound influence on the industry. (Here's one example: in this GDC talk, Bungie's David Aldridge mentions that a paper called "The TRIBES Engine Networking Model" was the starting point for Halo's network code.)

    Read the article

  • AMD Fusion GPU passthrough to KVM or Xen

    - by BigChief
    Has anyone successfully gotten passthrough working with the GPU portion of AMD's Fusion APUs (the E-350 is my target) on top of a Linux hypervisor? I.e., I want to dedicate the GPU to one VM only, excluding all other VMs as well as the host. I know PCI passthrough can work with patches / kernel rebuilds for Xen and KVM. However, since the GPU is on the same chip, I don't know whether the host OS will see it as a PCI device. I know there are a number of tangential issues here, such as: poor Fusion drivers in Linux at the moment; unsuccessful patching efforts seem common; VT-d / IOMMU is required and (from my reading) is supported on the APU, but the motherboard may not offer it; and KVM doesn't appear to support primary graphics cards, only secondary ones (described here). However, I'd like to hear from anyone who has messed with this, even failed attempts. Fedora + KVM is my preferred virtualization platform, but I'm willing to change that if it makes a difference. EDIT: The goal is to do this for a Windows 7 guest (I know it's asking a lot). Regardless, just assume this is HVM, not PV.
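
    Two of the open questions above, whether the host sees the Fusion GPU as an ordinary PCI device and whether the IOMMU is actually active, can at least be probed from sysfs. A hedged sketch (standard Linux sysfs paths, but what shows up depends entirely on the board, BIOS and kernel):

    ```python
    # Probe sysfs for display-class PCI devices and populated IOMMU groups.
    # Linux-only; results depend on the board/BIOS/kernel in question.
    from pathlib import Path

    def display_devices():
        devs = []
        for dev in Path("/sys/bus/pci/devices").iterdir():
            pci_class = (dev / "class").read_text().strip()
            if pci_class.startswith("0x03"):       # 0x03xxxx = display controller
                vendor = (dev / "vendor").read_text().strip()
                devs.append((dev.name, vendor))    # vendor 0x1002 = AMD/ATI
        return devs

    def iommu_groups():
        groups = Path("/sys/kernel/iommu_groups")
        return sorted(p.name for p in groups.iterdir()) if groups.exists() else []

    if __name__ == "__main__":
        print("Display-class PCI devices:", display_devices())
        found = iommu_groups()
        print("IOMMU groups:", found if found else "none (IOMMU disabled or unsupported)")
    ```

    If the APU's GPU appears in the first list but no IOMMU groups exist, safe passthrough is a non-starter on that board regardless of hypervisor patches.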

    Read the article
