Search Results

Search found 13534 results on 542 pages for 'gpu programming'.

Page 4/542

  • Programming Language, Turing Completeness and Turing Machine

    - by Amumu
    A programming language is said to be Turing complete if it can successfully simulate a universal TM. Let's take a functional programming language as an example. In functional programming, functions have the highest priority over anything. You can pass functions around like any primitives or objects; this is called having first-class functions. In functional programming, a function does not produce side effects, i.e. output strings onto the screen or change the state of variables outside of its scope. Each function has a copy of its own objects if the objects are passed in from the outside, and the copied objects are returned once the function finishes its job. Each function written purely in functional style is completely independent of anything outside of it; thus, the complexity of the overall system is reduced. This is referred to as referential transparency. In functional programming, each function can keep the values of its local variables even after the function exits. This is done by the garbage collector. The values can be reused the next time the function is called; this is called memoization. A function usually should solve only one thing: it should model only one algorithm to answer a problem. Do you think that a function in a functional language with the above properties simulates a Turing machine?
      - Functions (= algorithms = Turing machines) are able to be passed around as input and returned as output. A TM also accepts and simulates other TMs.
      - Memoization models the set of states of a Turing machine. The memoized variables can be used to determine the state of a TM (i.e. which lines to execute, what behavior it should take in a given state, ...). You can also use memoization to simulate your internal tape storage. In a language like C/C++, when a function exits, you lose all of its internal data (unless you store it elsewhere outside of its scope).
      - The set of symbols is the set of all strings in a programming language, which is the higher-level, human-readable version of machine code (opcodes).
      - The start state is the beginning of the function. However, the start state can be determined by memoization or, if you want, a switch/if-else statement in an imperative programming language. But then, you can't ...
      - The final accepting state is when the function returns a value; it rejects if an exception happens. Thus, the function (= algorithm = TM) is decidable. Otherwise, it's undecidable. I'm not sure about this.
    What do you think? Is my thinking right on all of this? The reason I bring up functions in functional programming is that I think they are closer to the idea of a TM. What experience with other programming languages do you have which makes you feel the idea of TMs and the ideas of Computer Science in general? Can you specify how you think?
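    As a rough illustration of the two properties the question leans on, first-class functions and memoization, here is a minimal, hypothetical C++ sketch; it is not from the question (which is about purely functional languages), just one way the ideas look in code:

    ```cpp
    #include <cstdint>
    #include <functional>
    #include <iostream>
    #include <unordered_map>

    // A memoized Fibonacci: the cache outlives each call, so earlier results
    // are reused the next time the function is called (the "memoization" above).
    std::int64_t fib(int n) {
        static std::unordered_map<int, std::int64_t> memo;  // persists across calls
        if (n < 2) return n;
        auto it = memo.find(n);
        if (it != memo.end()) return it->second;
        std::int64_t result = fib(n - 1) + fib(n - 2);
        memo[n] = result;
        return result;
    }

    int main() {
        // First-class functions: a function can be passed around like any value.
        std::function<std::int64_t(int)> f = fib;
        std::cout << f(50) << "\n";  // 12586269025
    }
    ```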

    Read the article

  • First languages with generic programming support

    - by oluies
    Which was the first language with generic programming support, and what was the first major, widely used, statically typed language with generics support? Generics implement the concept of parameterized types to allow for multiple types. The term generic means "pertaining to or appropriate to large groups of classes." I have seen the following mentions of "first": "First-order parametric polymorphism is now a standard element of statically typed programming languages. Starting with System F [20,42] and functional programming languages, the constructs have found their way into mainstream languages such as Java and C#. In these languages, first-order parametric polymorphism is usually called generics." - from "Generics of a Higher Kind", Adriaan Moors, Frank Piessens, and Martin Odersky. "Generic programming is a style of computer programming in which algorithms are written in terms of to-be-specified-later types that are then instantiated when needed for specific types provided as parameters. This approach, pioneered by Ada in 1983 ..." - from Wikipedia, Generic Programming.
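    For readers who haven't met the term, here is a minimal free-standing sketch of what "parameterized types" buy you, written as a C++ function template; it is an illustration of mine, not code from either quoted source:

    ```cpp
    #include <iostream>
    #include <string>
    #include <vector>

    // A generic (parameterized) maximum: the element type T is specified later,
    // at the point of use, exactly as the quotes above describe.
    template <typename T>
    T max_of(const std::vector<T>& xs) {
        T best = xs.front();
        for (const T& x : xs) {
            if (best < x) best = x;
        }
        return best;
    }

    int main() {
        std::cout << max_of<int>({3, 1, 4, 1, 5}) << "\n";               // 5
        std::cout << max_of<std::string>({"ada", "ml", "cpp"}) << "\n";  // ml
    }
    ```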

    Read the article

  • How to stop switchable graphics from switching to high-power GPU when charging the laptop?

    - by Saifallah
    I have an Acer laptop with Windows 7 64-bit and an ATI Radeon HD 6550M graphics card. Whenever I connect the power to charge the battery, it automatically switches to the high-power (ATI) GPU instead of the low-power (Intel) GPU. There's an option in the BIOS to stop this, but then the GPU always runs on high power and I can't switch to the low-power GPU. How can I prevent the switchable graphics from automatically using the high-power GPU when charging?

    Read the article

  • Playing HD video accelerated by GPU? (Linux)

    - by data_jepp
    I know that this laptop (Asus UL30A) supports playing HD material with the help of the GPU on Windows. The question is: how can I do this under Linux? The CPU load as it is implies that this is not activated. I assume that it's limited to a handful of codecs. But which player supports this, if it's even supported at all?

    Read the article

  • System won't boot: Gigabyte HD 7790 1GB OC GPU issue or Corsair VS550 PSU issue?

    - by MGOwen
    Installed a new GPU, and the PC won't boot. Turn it on and:
      - No monitor signal at all (tried HDMI and VGA via DVI, on 2 working monitors).
      - CPU and GPU fans DO spin, but there are no system beeps and no sounds from the drives (they might make a small noise in the first second or so, but there's definitely no OS loading or anything like that).
      - If I hit the power-off button it turns off immediately (no holding it down for 3 seconds like usual).
    If I put my old HD 5670 GPU back in, everything works fine. But (plot twist!) the card is not totally dead. My friend put it in his PC, and it works fine (he even played a game for 15 minutes, no issues). He has a Corsair TX850 850W PSU and a Gigabyte motherboard. So my main theory is: the GPU isn't getting enough power from the PSU. But is it:
      - Bad PSU? Seems unlikely, since it works fine with the other GPU. Also, the PSU is brand new and 550W (single 42A/504W 12V rail), which is overkill for this GPU. Corsair is a decent brand, but maybe just mine is faulty?
      - Bad GPU? Could it be drawing more power than it should be, somehow? Supposedly the HD 7790 needs only 21A/75W on the 12V rail, though this one is factory overclocked a bit... but should that triple the power requirement?
      - Something else? Could there be a motherboard incompatibility somehow? Both MB and GPU are less than a year old and PCI Express 3.0 x16.
    Things I've tried:
      - Re-seating the video card.
      - Testing the PC with the old GPU (works fine, same PCIe slot).
      - Checking AMD's stated amp/watt requirements of a 7790 against my PSU (see above). My PSU can output twice the amps (single rail) and 5x the wattage a 7790 needs.
    Here are the full specs:
      - Gigabyte HD 7790 1GB OC GPU
      - Corsair VS550 550W PSU
      - 4GB RAM
      - AsRock H61M U3S3 motherboard
      - i3-2100
      - 500GB SATA HDD (2007-ish)
      - Blu-ray drive (new)
      - PCI 802.11g card
    Edit: A motherboard BIOS update seems to have fixed it. (If anyone has the same problem and it doesn't work, comment here.)

    Read the article

  • GPU Debugging with VS 11

    - by Daniel Moth
    With the VS 11 Developer Preview we have invested tremendously in parallel debugging, for both CPU (managed and native) and GPU debugging. I'll be doing a whole bunch of blog posts on those topics, and in this post I just wanted to get people started with GPU debugging, i.e. with debugging C++ AMP code. First I invite you to watch 6 minutes of a glimpse of the C++ AMP debugging experience through this video (ffw to minute 51:54, up until minute 59:16). Don't read the rest of this post, just go watch that video, ideally download the High Quality WMV.
    Summary: GPU debugging essentially means debugging the lambda that you pass to the parallel_for_each call (plus any functions you call from the lambda, of course). CPU debugging means debugging all the code above and below the parallel_for_each call, i.e. all the code except the restrict(direct3d) lambda and the functions that it calls. With VS 11 you have to choose which debugger you want to use for a particular debugging session, CPU or GPU. So you can place breakpoints all over your code, then choose which debugger you want (CPU or GPU), and you'll only be able to hit breakpoints for the code type that the debugger engine understands – the remaining breakpoints will appear as unbound. If you want to hit the unbound breakpoints, you'd have to stop debugging and start again with the other debugger. Sorry. We suck. We know. But once you are past that limitation, I think you'll find the experience truly rewarding – seriously!
    Switching debugger engines: With the Developer Preview bits, one way to switch the debugger engine is through the project properties – see the screenshots that follow. The first shows the CPU option selected, which is basically the default that you are all familiar with. The second shows the GPU option selected, by changing the debugger launcher (notice that this applies to both the local and the remote case). You actually do not have to open the project properties just to switch the debugger engine; you can switch the selection from the toolbar in the VS 11 Developer Preview too – see the following screenshot (the effect is the same as if you opened the project properties and switched there).
    Breakpoint behavior: Here are two screenshots, one showing a debugging session for CPU and the other a debugging session for GPU (notice the unbound breakpoints in each case) ... and here is the GPU case (where we cannot bind the CPU breakpoints but can bind the GPU breakpoint, which is actually hit).
    Give C++ AMP debugging a try: So to debug your C++ AMP code, pull down the drop-down under the 'play' button to select the 'GPU C++ Direct3D Compute Debugger' menu option, then hit F5 (or the 'play' button itself). Then you can explore debugging through the menus under Debug and Debug->Windows. One way to do that exploration is through the C++ AMP debugging walkthrough on MSDN. Another way to explore the C++ AMP debugging experience is to use the moth.cpp code file, which is what I used in my BUILD session debugger demo. Note that for my demo I was using the latest internal VS 11 bits, so your experience with the Developer Preview bits won't be identical to what you saw me demonstrate, but it shouldn't be far off. Stay tuned for a lot more content on the parallel debugger in VS 11, both CPU and GPU, both managed and native. Comments about this post by Daniel Moth are welcome at the original blog.
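    For readers who haven't seen C++ AMP, "the lambda that you pass to the parallel_for_each call" looks roughly like the minimal sketch below. This is my own example, not Daniel's moth.cpp; note that shipped releases spell the restriction restrict(amp), where the Developer Preview used restrict(direct3d):

    ```cpp
    #include <amp.h>
    #include <vector>
    using namespace concurrency;

    void square_on_gpu(std::vector<float>& data) {
        array_view<float, 1> av(static_cast<int>(data.size()), data);

        // GPU debugging binds breakpoints inside this lambda (and anything it
        // calls); breakpoints in the surrounding code belong to the CPU debugger.
        parallel_for_each(av.extent, [=](index<1> idx) restrict(amp) {
            av[idx] = av[idx] * av[idx];
        });
        av.synchronize();  // copy results back to the CPU-side vector
    }
    ```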

    Read the article

  • Composing programs from small simple pieces: OOP vs Functional Programming

    - by Jay Godse
    I started programming when imperative programming languages such as C were virtually the only game in town for paid gigs. I'm not a computer scientist by training, so I was only exposed to Assembler and Pascal in school, and not Lisp or Prolog. Over the 1990s, Object-Oriented Programming (OOP) became more popular, because one of the marketing memes for OOP was that complex programs could be composed of loosely coupled but well-defined, well-tested, cohesive, and reusable classes and objects. And in many cases that is quite true. Once I learned object-oriented programming my C programs became better because I structured them more like classes and objects. In the last few years (2008-2014) I have programmed in Ruby, an OOP language. However, Ruby has many functional programming (FP) features such as lambdas and procs, which enable a different style of programming using recursion, currying, lazy evaluation and the like. (Through ignorance I am at a loss to explain why these techniques are so great.) Very recently, I have written code using methods from the Ruby Enumerable library, such as map(), reduce(), and select(). Apparently this is a functional style of programming. I have found that using these methods significantly reduces code volume and makes my code easier to debug. Upon reading more about FP, one of the marketing claims made by advocates is that FP enables developers to compose programs out of small, well-defined, well-tested, and reusable functions, which leads to less buggy code and low code volume. QUESTIONS:
      - Is composing a complex program using FP techniques contradictory to, or complementary to, composing a complex program using OOP techniques?
      - In which situations is OOP more effective, and when is FP more effective?
      - Is it possible to use both techniques in the same complex program?
      - Do the techniques overlap or contradict each other?
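    The "low code volume" claim is easiest to see side by side. The post is about Ruby's Enumerable, but here is a hedged C++ rendering of the same contrast, an explicit loop versus a reduce-style fold, purely as an illustration:

    ```cpp
    #include <numeric>
    #include <vector>

    // Imperative: spell out the traversal and the accumulator by hand.
    int sum_of_squares_loop(const std::vector<int>& xs) {
        int total = 0;
        for (int x : xs) {
            total += x * x;
        }
        return total;
    }

    // Reduce-style: say *what* to combine, not *how* to iterate
    // (roughly what Ruby's Enumerable#reduce expresses).
    int sum_of_squares_reduce(const std::vector<int>& xs) {
        return std::accumulate(xs.begin(), xs.end(), 0,
                               [](int acc, int x) { return acc + x * x; });
    }
    ```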

    Read the article

  • Why aren't we programming on the GPU???

    - by Chris
    So I finally took the time to learn CUDA and get it installed and configured on my computer, and I have to say, I'm quite impressed! Here's how it does rendering the Mandelbrot set at 1280 x 678 pixels on my home PC with a Q6600 and a GeForce 8800 GTS (max of 1000 iterations):
      - Maxing out all 4 CPU cores with OpenMP: 2.23 fps
      - Running the same algorithm on my GPU: 104.7 fps
    And here's how fast I got it to render the whole set at 8192 x 8192 with a max of 1000 iterations:
      - Serial implementation on my home PC: 81.2 seconds
      - All 4 CPU cores on my home PC (OpenMP): 24.5 seconds
      - 32 processors on my school's supercomputer (MPI with master-worker): 1.92 seconds
      - My home GPU (CUDA): 0.310 seconds
      - 4 GPUs on my school's supercomputer (CUDA with static domain decomposition): 0.0547 seconds
    So here's my question - if we can get such huge speedups by programming the GPU instead of the CPU, why is nobody doing it??? I can think of so many things we could speed up like this, and yet I don't know of many commercial apps that are actually doing it. Also, what kinds of other speedups have you seen by offloading your computations to the GPU?
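    For anyone who wants to reproduce numbers like these, the per-pixel escape-time loop is embarrassingly parallel. Below is a hedged C++/OpenMP sketch of the CPU case (assumed code, not the poster's); a CUDA kernel would run the same mandel() body with one GPU thread per pixel:

    ```cpp
    #include <vector>

    // Escape-time iteration count for one pixel of the Mandelbrot set.
    static int mandel(double cr, double ci, int max_iter) {
        double zr = 0.0, zi = 0.0;
        int n = 0;
        while (zr * zr + zi * zi <= 4.0 && n < max_iter) {
            double t = zr * zr - zi * zi + cr;
            zi = 2.0 * zr * zi + ci;
            zr = t;
            ++n;
        }
        return n;
    }

    // The "all 4 CPU cores with OpenMP" case: every pixel is independent, which
    // is exactly why the same loop maps so well onto thousands of GPU threads.
    void render(std::vector<int>& img, int w, int h, int max_iter) {
        #pragma omp parallel for schedule(dynamic)
        for (int y = 0; y < h; ++y) {
            for (int x = 0; x < w; ++x) {
                double cr = -2.5 + 3.5 * x / w;
                double ci = -1.25 + 2.5 * y / h;
                img[y * w + x] = mandel(cr, ci, max_iter);
            }
        }
    }
    ```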

    Read the article

  • Snapshot/Save GPU Drivers

    - by ashes999
    Since I'm running XP/32-bit, my GPU drivers are quite fragile. I've spent several hours trying to back up and restore from old versions, on at least two separate occasions. Writing down the device driver versions is not enough. In the short term, I would like to somehow save, zip, back up, snapshot, or otherwise capture the drivers, so that if I need to reinstall my OS I have a reliable way to get them back. ATI's website doesn't have the install kit anymore, and I don't have it saved; I googled, but didn't find the exact same version. How can I back up/save my drivers so that I can reinstall them later?

    Read the article

  • New book in the style of Advanced Programming Language Design by R. A. Finkel [closed]

    - by mfellner
    I am currently researching visual programming language design for a university paper and came across Advanced Programming Language Design by Raphael A. Finkel, from 1996. Other, older discussions in the same vein on Stack Overflow have mentioned Language Implementation Patterns by Terence Parr and Programming Language Pragmatics* by Michael L. Scott. I was wondering if there is more (and especially more up-to-date) literature on the general topic of programming language design. *) http://www.cs.rochester.edu/~scott/pragmatics/

    Read the article

  • Game programming course materials: What should it include?

    - by Esa
    I am tasked with creating the course materials for a game programming class, and I'd like your opinion on which aspects and areas of game programming I should include, such as game state management, game object storage, or simple AI. The course is intended to be the first step into game programming for students with novice programming skills. There will be mathematics as well, but I found that there are already multiple questions, with good answers, on that subject.

    Read the article

  • Dual monitor in Ubuntu 12.04 + Nvidia GPU with CUDA drivers?

    - by Aphex
    I'm thinking about setting up dual monitors on my desktop PC, running Ubuntu 12.04 with an Nvidia GTX 570. I use the GPU for CUDA programming, so it's set up with the CUDA drivers. Is it possible/easy to set up dual monitors with this configuration? The only questions I saw related to this were for GPUs without CUDA drivers. If anyone knows how dual monitors work with CUDA drivers, it would be much appreciated. It was enough of a pain getting everything running with the GPU and CUDA in the first place; I'd hate to ruin it all by attempting dual monitors.

    Read the article

  • Is there a simple, safe way to trigger a GPU lockup on a susceptible computer?

    - by Abe
    Answers to my previous question, "Ubuntu 12.04 froze, requiring powercycle. What should I look / grep for in the logs?", have led me to suspect that my computer is experiencing an intermittent GPU lockup. It has been happening about once a week, usually when I am using Chrome. Today it happened when I was creating a diagram on Lucidchart. I have a Dell OptiPlex 755 with an ATI Radeon HD 2400 XT and dual monitors running in Xinerama mode. I am using 12.04 with the proprietary ATI driver installed. When the computer locks up, I can still ssh in, and I would like to follow the instructions on reporting this provided at https://wiki.ubuntu.com/X/Troubleshooting/Freeze. Is there a (safe) way to cause a GPU lockup so that I can go ahead and file a bug, rather than waiting until it happens again?

    Read the article

  • Code Monster Helps Introduce Kids (and Curious Adults) to the Basics of Programming

    - by Jason Fitzpatrick
    If you’re looking for a fun way to introduce a kid to programming (or sate your own curiosity), Crunchzilla’s Code Monster is a real-time introduction to basic programming concepts. How does Code Monster work? Users are guided through the programming experience (using JavaScript) by a talkative blue monster that asks questions about the code and suggests courses of action. Play long enough and you travel from simple variables to more complex ideas like conditionals, expressions, and more. It’s not a comprehensive programming curriculum (nor does it claim to be) but it’s a great way to introduce people of all ages to programming. Hit up the link below to take it for a spin. Code Monster [via O'Reilly Radar]

    Read the article

  • From Imperative to Functional Programming

    - by user66569
    As an Electronic Engineer, my programming experience started with Assembly and continued with PL/M, C, C++, Delphi, Java, and C#, among others (imperative programming is in my blood). I'm interested in adding functional programming skills to my previous knowledge, but all I've seen until now seems very obfuscated and esoteric. Can you please answer these questions for me?
      1) What is the mainstream functional programming language today? (I don't want to get lost studying a plethora of FP languages just because language X has feature Y.)
      2) What was the first FP language (the Fortran of functional programming, if you want)?
      3) Finally, when talking about pure vs. non-pure FP, what are the mainstream languages in each category?
    Thank you in advance.

    Read the article

  • The most mind-bending programming language?

    - by Xepoch
    Of the reasonably common programming languages, which do you find to be the most mind-bending? I have been listening to a lot of programming podcasts and taking some time to learn some new languages that are considered up-and-coming and important. I'm not necessarily talking about Brainfuck, but which language would you consider to be one that challenges the common programming paradigms? For me, I did some functional and logic (e.g. Prolog) programming in the 90s, so I can't say that I find anything special there. I am far from being an expert in it, but even today the most mind-bending programming language for me is Perl. Not because "Hello World" is hard to implement, but rather because there is so much lexical flexibility that some of the hardest solutions can be decomposed so poetically that I have to walk outside, away from my terminal, to clear my head. I'm not saying I'd likely sell a commercial software implementation, just that there is a distinct reason Perl is so (in)famous. Just look at the basic list of books on it. So, what is your mind-bending language that promotes better programming and practices?

    Read the article

  • What OpenGL functions are not GPU accelerated?

    - by Xavier Ho
    I was shocked when I read this (from the OpenGL wiki): "glTranslate, glRotate, glScale: are these hardware accelerated? No, there are no known GPUs that execute this. The driver computes the matrix on the CPU and uploads it to the GPU. All the other matrix operations are done on the CPU as well: glPushMatrix, glPopMatrix, glLoadIdentity, glFrustum, glOrtho. This is the reason why these functions are considered deprecated in GL 3.0. You should have your own math library, build your own matrix, upload your matrix to the shader." For a very, very long time I thought most of the OpenGL functions used the GPU to do computation. I'm not sure if this is a common misconception, but after a while of thinking, this makes sense. Old OpenGL functions (2.x and older) are really not suitable for real-world applications, due to too many state switches. This makes me realise that, possibly, many OpenGL functions do not use the GPU at all. So, the question is: which OpenGL function calls don't use the GPU? I believe knowing the answer to the above question would help me become a better programmer with OpenGL. Please do share some of your insights.
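    "Build your own matrix, upload your matrix to the shader" amounts to something like the following sketch (modern-GL style, my own illustrative code; the uniform name u_mvp and the use of a loader such as GLEW are assumptions):

    ```cpp
    #include <GL/glew.h>  // any loader exposing GL 2.0+ entry points, initialized elsewhere

    // A column-major 4x4 translation matrix built on the CPU - the work that
    // glTranslate used to do in the driver, also on the CPU.
    void upload_translation(GLuint program, float tx, float ty, float tz) {
        const GLfloat mvp[16] = {
            1, 0, 0, 0,
            0, 1, 0, 0,
            0, 0, 1, 0,
            tx, ty, tz, 1,  // translation lives in the last column (column-major)
        };
        GLint loc = glGetUniformLocation(program, "u_mvp");  // hypothetical uniform name
        glUseProgram(program);
        glUniformMatrix4fv(loc, 1, GL_FALSE, mvp);  // hand the finished matrix to the GPU
    }
    ```

    The only part of this that touches the GPU is the final uniform upload; everything before it runs on the CPU, which is the wiki's point.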

    Read the article

  • What are some good practices when trying to teach declarative programming to imperative programmers?

    - by ChaosPandion
    I offered to do a little bit of training in F# at my company and they seemed to show some interest. They are generally VB6 and C# programmers who don't follow programming with too much passion. That being said, I feel like it is easier to write correct code when you think in a functional manner, so they should definitely get some benefit out of it. Can anyone offer advice on how I should approach this? Ideas so far: don't focus on the syntax; instead focus on how this language and the idioms it promotes can be used. Try to think of examples that are a pain to write in an imperative fashion but translate to elegant code when written in a declarative fashion.

    Read the article

  • Help migrating from VB style programming to OO programming [closed]

    - by Agent47DarkSoul
    Being a hobbyist Java developer, I quickly took to OO programming and understood its advantages over the procedural C code that I wrote in college. But I couldn't grasp VB event-based code (weird, right?). The bottom line is that OOP came naturally to me. Currently I work in a small development firm developing C# applications. My peers here are a bit attached to VB-style programming. Most of the C# code written is VB6 event-handling code in C#'s skin. I tried explaining OOP and its advantages to them, but it wasn't clear to them, maybe because I have never been much of a VB programmer. So can anybody provide any resources, books or web articles, on how to migrate from VB-style to OO-style programming?

    Read the article

  • Is there a name for this functional programming construct/pattern?

    - by dietbuddha
    I wrote a function and I'd like to find out if it is an implementation of some functional programming pattern or construct. I'd like to find out the name of this pattern or construct (if it exists). I have a function which takes a list of functions and does this to them: wrap(fn1, fn2, fn3, fn4) # returns partial(fn4, partial(fn3, partial(fn2, fn1))). There are strong similarities to compose, reduce, and other FP metaprogramming constructs, since the functions are being arranged together and returned as one function. It also has strong similarities to decorators and Python context managers, since it provides a way to encapsulate pre- and post-execution behaviors in one function, which was the impetus for writing this function. I wanted the ability that context managers provide, but I wanted to be able to have it defined in one function, and to be able to layer function after function on top.
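    For what it's worth, the construct reads like decorator-style composition, where each layer wraps the callable built so far (sometimes described as "onion" or middleware composition). The original is Python; the following C++ analogue is only an assumed rendering of that intent:

    ```cpp
    #include <functional>
    #include <vector>

    using Fn = std::function<int(int)>;                   // the innermost callable
    using Wrapper = std::function<int(const Fn&, int)>;   // gets the inner chain plus the argument

    // Mirrors partial(fn4, partial(fn3, partial(fn2, fn1))): each wrapper receives
    // the chain built so far as its first argument, so it can run code before and
    // after calling it, and the whole stack is returned as one function.
    Fn wrap(Fn innermost, const std::vector<Wrapper>& wrappers) {
        Fn chain = std::move(innermost);
        for (const Wrapper& w : wrappers) {
            Fn inner = chain;  // capture the chain built so far by value
            chain = [w, inner](int x) { return w(inner, x); };
        }
        return chain;
    }
    ```

    With logging or timing wrappers in the vector, the layered pre/post behavior matches what the decorator and context-manager comparison in the question describes.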

    Read the article

  • Have you ever bought a commercial implementation of a programming language for personal programming

    - by Nelson
    Commercial products are often a source of ideas and inspiration for open source projects. There are free and open source implementations of almost every programming language ever devised, and a lot of them are very good. For non-work-related personal programming projects, have you ever bought an expensive commercial implementation of a programming language and found it well worth the investment? If so, which one, and why?

    Read the article

  • GPU Computing - # of GPUs supported

    - by TehTypoKing
    I currently have a desktop with 6 GPUs (3x HD 5970s) in non-CrossFire mode. Unfortunately, it seems that Windows 7 64-bit only supports up to 4 GPUs. I have not been able to find a reliable source to deny or confirm this. If Windows 7 has this limitation, is there a Linux flavor that supports more than 4 GPUs? In case you are wondering, this is not for gaming but for high-speed single-precision computing. With this current setup (if I can find 6-GPU support) I am looking to reach 13.8 teraflops. Also, my motherboard does support three x16 PCI Express gen 2 slots, and I have a 1500W power supply plugged into a 20-amp outlet. Windows is able to detect all 6 GPUs, although 2 of them display the warning "Drivers failed to load". To recap: can Windows support 6 GPUs? If not, does Linux? Thank you.

    Read the article
