Search Results

Search found 18084 results on 724 pages for 'graphics programming'.


  • Can I boot up a virtual machine natively?

    - by Anshul
    My question is: is it possible to run a virtual machine natively on your hardware if you have installed the proper drivers, etc.? In other words, can I use a VHD as a regular hard drive to boot from? The reason I want to do this is that I do both graphics-intensive and audio-intensive work, but my computer is not powerful enough to handle both at the same time, and I often install a bunch of audio programs that I don't want affecting the stability of my graphics programs. Basically, I want sandboxing between the two sets of applications. So I tried running the graphics-intensive programs in a VirtualBox VM and the audio-intensive work natively (simply because it's a pain to route ASIO audio devices in and out of VirtualBox). This kind of works: the graphics-intensive stuff is tolerable, but still relatively slow, because it's running inside a VM. My next idea was to dual-boot and install the graphics and audio programs in separate partitions, but I frequently use them in tandem, so it wouldn't be practical to reboot my machine every time I need the other set of programs. I could live with this scenario, though: when I need to do more audio-intensive work, I boot into the audio partition and run the graphics programs in a VM, and when I'm working heavily on the graphics side, I boot the graphics partition as a regular OS directly on the hardware. Is this possible, for example by booting a VHD as a regular hard drive? Or by setting up dual-boot and, every time the audio partition is shut down, synchronizing the graphics VM's VHD with the native graphics partition? Is it practical, given the above scenario? And if it's not possible, barring buying another computer, can anyone suggest a best-of-all-worlds setup (the worlds being performance, sandboxing, and running in parallel) for this scenario? Thanks in advance.
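
    For what it's worth, Windows 7 Ultimate/Enterprise and later support exactly this through "native VHD boot": a boot entry can point straight at a .vhd file. A hedged sketch of the usual bcdedit steps, assuming the VHD already holds a bootable Windows installation (the path, description, and {guid} placeholder are examples only; {guid} is the ID printed by the /copy step):

        rem Run from an elevated command prompt.
        bcdedit /copy {current} /d "Graphics (VHD boot)"
        bcdedit /set {guid} device vhd=[C:]\VHDs\graphics.vhd
        bcdedit /set {guid} osdevice vhd=[C:]\VHDs\graphics.vhd
        bcdedit /set {guid} detecthal on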

    Read the article

  • Introducing Programming To a Mathematician

    - by ell
    I am currently a programmer, almost 16 years of age, and I have pretty much narrowed my career choices down to something involving a Computer Science degree or an Electrical Engineering degree (I know they are quite different, but this question is about my friend), but my friend isn't so sure. He is very interested in maths and is very good at it, and I think he would enjoy programming, but he isn't willing to try it (edit: he is willing to try, but has never done so before). Can anyone give me suggestions for a language or tool that he could dabble in (at a reasonably basic level, I assume) to solve maths problems or that involves some kind of maths? As I say, he enjoys maths a lot and I think he would enjoy programming; the problem is I don't want him to be put off by the stuff that isn't relevant at introductory levels, such as memory allocation and the like. I know that is very important, but the point is that I want him to learn a bit of programming through maths, and then hopefully, if he is interested enough, he can start learning programming as programming. Thanks in advance, ell. Edit: It's not that he's completely uninterested - more that he hasn't actively explored the area before, maybe because he isn't informed about it. I wouldn't want to force him to do something he doesn't want to do; I see this as more of a little push so that he can learn about programming. If he doesn't like it - fair enough, I can't control that and don't want to - but if he turns out to enjoy it, this push will have been the right thing.
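
    A language with an interactive interpreter keeps the maths front and centre. As a minimal illustration (the choice of problem is just an example), a few lines of Python can already explore number theory without any talk of memory allocation:

        # A maths-flavoured first program (illustrative only): count the primes
        # below N with a sieve and compare with the Prime Number Theorem's N/ln(N).
        import math

        def primes_below(n):
            """Return a list of all primes < n using the sieve of Eratosthenes."""
            is_prime = [True] * n
            is_prime[0:2] = [False, False]
            for p in range(2, int(math.sqrt(n)) + 1):
                if is_prime[p]:
                    for multiple in range(p * p, n, p):
                        is_prime[multiple] = False
            return [i for i, flag in enumerate(is_prime) if flag]

        if __name__ == "__main__":
            n = 100000
            count = len(primes_below(n))
            estimate = n / math.log(n)
            print(f"pi({n}) = {count}, N/ln(N) = {estimate:.0f}")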

    Read the article

  • Programming in academic environment vs industry environment [closed]

    - by user200340
    Possible Duplicate: Differences between programming in school vs programming in industry? This is a general discussion about programming in an industry environment. The background story is that my colleague sent me a very interesting article called "10 Things Entrepreneurs Don’t Learn in College." The first point in that post is about the author's experience of programming in an academic environment vs an industry environment. After finishing a 4-year Computer Science degree, I am currently working in an academic environment as a developer, mainly writing Java, J2EE, and JavaScript code. I know there are differences between academic programming and industry programming, but I was shocked after reading that post, and I'm trying to avoid the same thing happening to me, or to others, in the future. Can anyone from industry give some general advice about how programming works there? For example: What exactly happens when a task is received? What is the flow from beginning to end? What are the main differences between programming in industry and in academia? Is it more structured? Are more frameworks used? It would be great if some code examples could be given. Thanks.
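
    As one small, hedged illustration (not taken from the linked article) of a habit that tends to distinguish industry code from coursework: every change, however small, is expected to ship with an automated test that a build server can run. A sketch in Python, with invented names:

        import unittest

        def normalise_username(raw):
            """Trim whitespace and lower-case a username before it hits the database."""
            if raw is None:
                raise ValueError("username is required")
            return raw.strip().lower()

        class NormaliseUsernameTest(unittest.TestCase):
            def test_strips_and_lowercases(self):
                self.assertEqual(normalise_username("  Alice "), "alice")

            def test_rejects_missing_value(self):
                with self.assertRaises(ValueError):
                    normalise_username(None)

        if __name__ == "__main__":
            unittest.main()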

    Read the article

  • Mirroring a portion of the screen to an external display (in OSX)

    - by Adam
    I would like to write a program that can mirror a portion of the main display into a new window. Ideally this new window could then be displayed on an external monitor. I have seen a utility for a flight sim that does this on a PC (a multi-function display extractor): MFDex http://trac2.assembla.com/lightnings..._x86_Setup.msi I have looked at screen magnifiers and VNC clients for ideas, but I think I need to write something from scratch. I have tried to do some reading on OS X programming, but where do I start in terms of gaining access to the display? I somehow need to extract the graphics from a particular program. Is it best to go near the final output stage (the individual pixels sent to the display) or somewhere nearer the window management stage? Any ideas or pointers would be much appreciated. I just need somewhere to start from. Regards,
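
    As one possible starting point: Quartz exposes a C API for reading back a rectangular region of a display (CGDisplayCreateImageForRect, available since Mac OS X 10.6). A rough sketch, shown here through the PyObjC bridge purely for brevity, assuming PyObjC is installed and the 200x200 region is just an example:

        import Quartz

        region = Quartz.CGRectMake(0, 0, 200, 200)           # portion of the main display
        image = Quartz.CGDisplayCreateImageForRect(Quartz.CGMainDisplayID(), region)
        # The CGImage can then be drawn into your own window on the external monitor.
        print(Quartz.CGImageGetWidth(image), Quartz.CGImageGetHeight(image))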

    Read the article

  • Inspiring the method of teaching. Example- C++ :)

    - by Ashwin
    A year ago I graduated with a degree in Computer Science and Engineering. With C++ as my first choice of programming language, I have been learning it in many ways. At first - five years back - I had to take in many concepts, most of which seemed very abstract to me. It started when I knew almost everything about structs in C and nothing about classes in C++. I had a great time experimenting with them all and learning a lot, but I had a hard time weighing procedural programming against object-oriented programming; deciding when to choose one or the other took a great deal of patience, and I knew I could not underestimate either style. Procedural programming is often a better choice than simple sequential, unstructured programming: we usually divide one problem into several ordered steps, treated as functions, and then call these functions one by one to get the result. When solving problems with object-oriented principles, we divide one problem into several classes and design the interaction between them. Evaluating these two as a beginner required a lot of inspiration and thought: being taught to think step by step, understanding the related concepts deeply, and having enough interest to solve the same problem in both POP and OOP and contrast the results. If you were ever a mentor: what ideas or methods would you use to inspire students to learn a programming language (or computer science in general)?

    Read the article

  • Programming graphics and sound on PC - Total newbie questions, and lots of them!

    - by Russel
    Hello, This isn't exactly a programming question (or is it?), but I was wondering: how are graphics and sound processed from code and output by the PC? My guess for graphics: There is some reserved memory space somewhere that holds exactly enough room for a frame of graphics output for your monitor, e.g. 800 x 600 in 24-bit color mode == 800x600x3 = ~1.4MB of memory space. Between each refresh, the program writes video data to this space; this action is completed before the monitor refresh. Assume a simple 2D game: the graphics data is stored in machine code as many bytes representing color values. Depending on what the program(s) being run instruct the PC, the processor reads the appropriate data and writes it to the memory space. When it is time for the monitor to refresh, it reads from this memory space byte for byte and activates hardware depending on those values for each color element of each pixel. All of this of course happens crazy-fast, and repeats x times a second, x being the monitor's refresh rate. I've simplified my own likely-incorrect explanation by avoiding talk of double buffering, etc. Here are my questions:

    a) How close is the above guess (the three steps)?

    b) How could one incorporate graphics in pure C++ code? I assume the practical thing that everyone does is use a graphics library (SDL, OpenGL, etc.), but, for example, how do these libraries accomplish what they do? Would manual inclusion of graphics in pure C++ code (say, a 2D sprite) involve creating a two-dimensional array of bit values (or three-dimensional to include the RGB values per pixel)? Is this how it would be done waaay back in the day?

    c) Also, continuing from above, do libraries such as SDL that use bitmaps actually just build the bitmap files into the machine code of the executable and use them as though they were built in the same manner mentioned in question b above?

    d) In my hypothetical step 3 above, are there any registers involved? Like, could you write some byte value to some register to output a single color of one byte on the screen? Or is it purely dedicated memory space (= RAM) plus hardware interaction?

    e) Finally, how is all of this done for sound? (I have no idea :) )
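
    To make the framebuffer idea in the guess concrete, here is a rough sketch (in Python rather than C++, purely for brevity) that treats a frame as nothing more than a flat block of RGB bytes, "blits" a tiny sprite into it by writing byte values, and dumps it to a PPM file you can open in an image viewer - essentially question (b) done by hand, with no graphics library at all:

        # A toy "framebuffer": 800x600 pixels, 3 bytes (R, G, B) per pixel, held in
        # one flat bytearray -- roughly the ~1.4MB block described above.
        WIDTH, HEIGHT = 800, 600
        framebuffer = bytearray(WIDTH * HEIGHT * 3)   # starts out all black

        def put_pixel(x, y, r, g, b):
            """Write one pixel's colour bytes at its offset in the buffer."""
            offset = (y * WIDTH + x) * 3
            framebuffer[offset:offset + 3] = bytes((r, g, b))

        # "Blit" a 2D sprite: a 3x3 red square stored as rows of RGB triples.
        sprite = [[(255, 0, 0)] * 3 for _ in range(3)]
        for sy, row in enumerate(sprite):
            for sx, (r, g, b) in enumerate(row):
                put_pixel(100 + sx, 100 + sy, r, g, b)

        # Instead of a monitor refresh, dump the buffer as a binary PPM image file.
        with open("frame.ppm", "wb") as f:
            f.write(("P6 %d %d 255\n" % (WIDTH, HEIGHT)).encode("ascii"))
            f.write(framebuffer)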

    Read the article

  • MyPaint is an Open-Source Graphics App for Digital Painters

    - by Asian Angel
    Are you looking for a terrific graphics app to use for original painting and artwork creation on your computer? Whether it is for you or the kids, MyPaint is an app that you should definitely have on hand for when those artistic moods come along. For our example we chose to install MyPaint on Ubuntu 10.10; you can easily find it in the Ubuntu Software Center by doing a quick search. Once you have it installed, all that is left to do is decide if you want to add additional brushes (link provided below) and then start having fun creating your next work of art. Here are some of MyPaint's wonderful features:

    - Exists for several platforms (Linux, Windows, and Mac OS X)
    - Supports pressure-sensitive graphics tablets
    - Extensive brush creation and configuration options
    - Unlimited canvas (you never have to resize)
    - Basic layer support
    - Comes with a large brush collection, including charcoal and ink, to emulate real media

    MyPaint is fun to use and can quickly become very addicting as you experiment during the creation process! Links: MyPaint Homepage | Download Additional Brushes for MyPaint | Download the GIMP Plugin for the OpenRaster File Format

    Read the article

  • Rendering UIImage/CGImage into CGPDFContext results in... blankness!

    - by quixoto
    Hi all, I'm trying to take an image that I have in an image object and render it into a Core Graphics PDF context -- this happens to be on an iPhone, but the question surely applies equally to desktop Quartz. The UIImage is a simple color-on-white image at about 600x800 resolution. If I (say) turn it into a PNG file, that file looks exactly as expected -- so the data is OK. Here's what I'm doing to generate the PDF:

        NSMutableData * outputData = [[NSMutableData alloc] init];
        CGDataConsumerRef dataConsumer = CGDataConsumerCreateWithCFData((CFMutableDataRef)outputData);

        CFMutableDictionaryRef attrDictionary = NULL;
        attrDictionary = CFDictionaryCreateMutable(NULL, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
        CFDictionarySetValue(attrDictionary, kCGPDFContextTitle, @"My Awesome Document");

        CGContextRef pdfContext = CGPDFContextCreate(dataConsumer, NULL, attrDictionary);
        CFRelease(dataConsumer);
        CFRelease(attrDictionary);

        CGImageRef pageImage = [myUIImage CGImage];

        CGPDFContextBeginPage(pdfContext, NULL);
        CGContextDrawImage(pdfContext, CGRectMake(0, 0, [myUIImage size].width, [myUIImage size].height), pageImage);
        CGPDFContextEndPage(pdfContext);

        CGContextRelease(pdfContext);

    The resulting PDF, which ends up in outputData, seems like a valid PDF file (it opens correctly, and the document title is present in the metadata), but it consists of precisely one blank page. What am I doing wrong? Thanks.

    Read the article

  • How to systematically generate images from data?

    - by adamvickers
    I work for a performing arts nonprofit. We have seating charts for each of the theaters we work with; each seating chart shows the number of sections, the shape of each section, and the number of rows in each section. We'd like to create dynamic seating charts based on this info. We'd like them to look and feel kinda like this: http://www.fansnap.com/tickets/177754-on. But the tricky part is we'd like to be able to store all the info about each theater (the section names, shape/size of each section, and number of rows in each section) as data, and then build a system that reads this data and uses it to create a dynamic map. I'm a life-long web developer, but I don't have any experience with a difficult graphics problem like this. I realize it's a complex problem and I don't expect anyone to give me a complete answer here, but I would love direction on where I should be looking for more info. Is what I'm describing possible? Does this sort of technique have a name? Where can I learn more about how to accomplish this? What software should I use? Any info would be helpful.
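
    One common approach - offered here only as a hedged sketch, with made-up section data - is to keep each theater's layout as plain data and render it on demand into a vector format such as SVG, which browsers can display and scale directly:

        # Sketch: store each section as data, render one rectangle per seat into SVG.
        # The section layout below is invented purely for illustration.
        sections = [
            {"name": "Orchestra", "rows": 4, "seats_per_row": 10, "x": 20, "y": 20},
            {"name": "Balcony",   "rows": 2, "seats_per_row": 8,  "x": 20, "y": 140},
        ]

        SEAT, GAP = 18, 4  # seat square size and spacing, in pixels

        def render_svg(sections, width=400, height=240):
            parts = ['<svg xmlns="http://www.w3.org/2000/svg" width="%d" height="%d">'
                     % (width, height)]
            for s in sections:
                parts.append('<text x="%d" y="%d" font-size="12">%s</text>'
                             % (s["x"], s["y"] - 6, s["name"]))
                for row in range(s["rows"]):
                    for seat in range(s["seats_per_row"]):
                        x = s["x"] + seat * (SEAT + GAP)
                        y = s["y"] + row * (SEAT + GAP)
                        parts.append('<rect x="%d" y="%d" width="%d" height="%d" '
                                     'fill="lightgray" stroke="black"/>'
                                     % (x, y, SEAT, SEAT))
            parts.append("</svg>")
            return "\n".join(parts)

        with open("seating.svg", "w") as f:
            f.write(render_svg(sections))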

    Read the article

  • Undocumented Secrets of MATLAB-Java Programming, a book by Yair Altman, reviewed by Jérôme Briot

    Undocumented Secrets of MATLAB-Java Programming by Yair Altman. From the publisher: "For a variety of reasons, the MATLAB®-Java interface was never fully documented. This is really quite unfortunate: Java is one of the most widely used programming languages, having many times the number of programmers and programming resources as MATLAB. Also unfortunate is the popular claim that while MATLAB is a fine programming platform for prototyping..."

    Read the article

  • Motherboard warning lights when plugging in a display port cord to graphics card?

    - by rllr
    Earlier today, my computer spontaneously shut itself off and refused to turn back on. I tested my PSU and it's operating fine. I unplugged everything and let it sit for a while, and it started to make a high-pitched coil whine/hiss. When I came back an hour later and plugged in only the power cord, it turned on without any issues. After some troubleshooting, I noticed my motherboard (Intel D975XBX2) has a red CPU LED and VR LED that come on whenever I plug my monitor into my graphics card via DisplayPort. DVI does not cause a similar issue. I was running three monitors off the card, so I need both DVI ports and the DisplayPort working. Is it likely my graphics card needs to be replaced, or should I be looking elsewhere to resolve this issue?

    Read the article

  • How can I ensure my Samsung Series 7 is actually using the Radeon switchable graphics?

    - by patricksweeney
    I have a Samsung Series 7 with the Radeon 6750M switchable graphics. The ATI software lets you force programs to use the dedicated card. However, I'm not convinced it is actually ever using it, as the frame rates in some non-taxing games (Portal, TF2) are merely OK. To make matters worse, it looks like the ATI Catalyst Control Center has vanished from my laptop. To make it even worse, you can't download the driver and ATI CCC from ATI's site; you need to download them from Samsung, and the ZIP they provide is corrupted. How can I ensure my Samsung Series 7 is actually using the Radeon switchable graphics?

    Read the article

  • Low end dedicated GPU vs. integrated Intel graphics (for light CAD work)

    - by PaulJ
    I have been asked to spec a PC for an interior design business. They are going to do some AutoCAD work (but they won't be using massive datasets or anything) and also use Kitchen Draw, a program that has 3D visualization features and says, in its requirements, that "a recent NVidia or ATI card might be enough". Since they are very limited budget-wise, I had originally picked a GeForce GT 610 card, but this card is so low-end that I'm left wondering whether it will be an improvement at all over the integrated Intel HD 2500 graphics that come with the CPU (I will be using an Ivy Bridge Intel i5). Most of the information I see around is for gaming, which isn't really relevant in my case. Basically, for the use case I've described (light 3D work), can one get away with current integrated Intel HD graphics? And will a low-end GPU like the GT 610 provide a noticeable improvement?

    Read the article

  • Scaling Literate Programming?

    - by Tetha
    Greetings. I have been looking at Literate Programming a bit now, and I do like the idea behind it: you basically write a little paper about your code, recording the design decisions, the code that will probably surround the module, the inner workings of the module, assumptions and conclusions resulting from the design decisions, and potential extensions; all of this can be written down in a nice way using TeX. Granted, the first point: it is documentation. It must be kept up to date, but that should not be that bad, because your change should have a justification and you can write that down. However, how does Literate Programming scale to a larger degree? Overall, Literate Programming is still just text - very human-readable text, of course, but still text - and thus it is hard to follow large systems. For example, I reworked large parts of my compiler to use some magic to chain compile steps together, because something like "x.register_follower(y); y.register_follower(z); y.register_follower(a); ..." got really unwieldy, and changing that to a chain like "x -> y -> z -> a" made it a bit better, even though this is at its breaking point, too. So, how does Literate Programming scale to larger systems? Does anyone try to do that? My thought would be to use LP to specify components that communicate with each other using event streams, and to chain all of these together using a subset of graphviz. This would be a fairly natural extension to LP, as you can extract documentation - a dataflow diagram - from the net and also generate code from it really well. What do you think of it? -- Tetha.
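
    To make the graphviz idea concrete, here is a sketch (component names invented) of how such a net of components and event streams could be kept as plain data, with the literate text describing each node and the dataflow diagram generated rather than hand-drawn:

        # Sketch only: a net of components and the event streams between them,
        # emitted as Graphviz DOT so the diagram can live next to the prose.
        net = {
            "lexer":     ["parser"],
            "parser":    ["typecheck", "pretty_printer"],
            "typecheck": ["codegen"],
        }

        def to_dot(net, name="compiler"):
            lines = ["digraph %s {" % name, "  rankdir=LR;"]
            for source, followers in net.items():
                for target in followers:
                    lines.append("  %s -> %s;" % (source, target))
            lines.append("}")
            return "\n".join(lines)

        print(to_dot(net))   # feed the output to `dot -Tpng` to render the diagram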

    Read the article

  • Different programming languages possibilities

    - by b-gen-jack-o-neill
    Hello. This should be a very simple question. There are many programming languages out there, compiled into machine code or managed code. I first started with ASM back in high school. Assembler is very nice, since you know exactly what the CPU does. Next (as you can see from my other questions here), I decided to learn C and C++. I chose C because, from what I read, it is the language whose output is closest to assembler-written programs. But what I want to know is: can any other Windows programming language out there call the Win32 API? To be exact: C has its special headers and functions for Win32 API interaction, but is this assumed to be an important part of any programming language? Or are there languages that have no support for calling the Win32 API and just use the console for I/O plus some functions for basic file I/O? Because, for Windows programming with graphical output, it is essential to have access to the Win32 API. I know this question might seem silly, but still, please help me; I ask for study purposes. Thanks.
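
    For what it's worth, most languages on Windows reach the Win32 API through a foreign-function interface rather than a special header. As one hedged illustration, Python's standard ctypes module can call into user32.dll directly (Windows only):

        # Calling a Win32 API function without C: ctypes loads user32.dll and
        # invokes MessageBoxW directly (illustrative).
        import ctypes

        user32 = ctypes.windll.user32   # stdcall calling convention, as Win32 expects
        user32.MessageBoxW(None, "Hello from the Win32 API", "Demo", 0)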

    Read the article

  • Modern Game Programming

    - by Alon
    Hey. I've been a software and web developer for ~3 years, and I want to start learning 3D network game programming. What is the most modern and fastest way to write 3D PC games? What language? For graphics, should I use a graphics API like Direct3D/OpenGL, or is there something less painful? What math/physics skills should I know before starting? Thank you.
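
    On the maths side, the bread and butter of 3D game code is linear algebra - vectors, dot and cross products, matrices. A tiny, hedged sketch of the kind of operations that come up constantly:

        # Minimal 3D vector maths of the sort used everywhere in game code (sketch).
        import math

        class Vec3:
            def __init__(self, x, y, z):
                self.x, self.y, self.z = x, y, z

            def dot(self, o):                 # projection, angle tests, lighting
                return self.x * o.x + self.y * o.y + self.z * o.z

            def cross(self, o):               # surface normals, orientation
                return Vec3(self.y * o.z - self.z * o.y,
                            self.z * o.x - self.x * o.z,
                            self.x * o.y - self.y * o.x)

            def length(self):
                return math.sqrt(self.dot(self))

        normal = Vec3(1, 0, 0).cross(Vec3(0, 1, 0))
        print(normal.x, normal.y, normal.z)   # (0, 0, 1): the "up" axis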

    Read the article

  • How can I make video games if I don't like programming?

    - by hoper
    I am studying C++ in school (my major is computer programming). Honestly, my grades are not so good, and the assignments are really hard. Sometimes I feel sad that I will spend 8-10 hours per day coding (which is stressful) in the future for my job. But I still want to make video games. Maybe this is the only reason why I am taking all of these stressful courses. I always write down plots, stories, characters, fictional gaming worlds... Once, I thought I should study artistic technology such as game design rather than computer technology such as C++, C#, etc. However, most popular game designers (or directors) such as Kojima, Miyamoto, etc. used to be good programmers. Companies actually assign programmers as directors because they understand how to make a game. I've tried to find colleges or universities that teach game design programs. However, one article that ranks the top 10 game design schools in North America seems untrustworthy, because the survey company scores them only from interviews with students. Once, I tried to attend the Art Institute of Vancouver, which is ranked 7th according to that article. However, a programmer who used to be an instructor there told me the truth: the employment rate of its graduates is low. How can I have a future making games if I don't like programming?

    Read the article

  • Does immutability entirely eliminate the need for locks in multi-processor programming?

    - by GlenPeterson
    Part 1: Clearly immutability minimizes the need for locks in multi-processor programming, but does it eliminate that need, or are there instances where immutability alone is not enough? It seems to me that you can only defer processing and encapsulate state for so long before most programs have to actually DO something. If a program performs actions on multiple processors, something needs to collect and aggregate the results. All this involves multi-process communication before, after, and possibly during some transformations. The start and end states of the machines are different. Can this always be done with no locks, just by throwing out each object and creating a new one instead of changing the original (a crude view of immutability)? What cases still require locking? I'm interested in both the theoretical/academic answer and the practical/real-world answer. I know a lot of functional programmers like to talk about "no side effects", but in the "real world" everything has a side effect. Every processor tick takes time, electricity, and machine resources away from other processes. So I understand that there may be more than one perspective to answer this question from. If immutability is safe, given certain bounds or assumptions, I want to know exactly where the borders of the "safety zone" are. Some examples of possible boundaries: I/O; exceptions/errors; interfaces with programs written in other languages; interfaces with other machines (physical, virtual, or theoretical). Special thanks to @JimmaHoffa for his comment, which started this question!

    Part 2: Multi-processor programming is often used as an optimization technique - to make some code run faster. When is it faster to use locks vs. immutable objects? Given the limits set out in Amdahl's Law, when can you achieve better overall performance (with or without the garbage collector taken into account) with immutable objects vs. locking mutable ones?

    Summary: I'm combining these two questions into one to try to get at where the bounding box is for immutability as a solution to threading problems.
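
    A small sketch of the boundary the question is probing (Python, illustrative only): immutable inputs can be shared across threads with no lock at all, but collecting the results back together still goes through a synchronized structure - here a Queue, which locks internally:

        # Immutable inputs (tuples) are shared by worker threads without any lock;
        # the aggregation step still needs a synchronised channel -- here a Queue.
        import threading
        from queue import Queue

        points = tuple((x, x * x) for x in range(1000))   # immutable, freely shared
        results = Queue()                                  # the one coordinated object

        def worker(chunk):
            results.put(sum(y for _, y in chunk))          # Queue handles the locking

        threads = [threading.Thread(target=worker, args=(points[i::4],)) for i in range(4)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

        total = sum(results.get() for _ in range(4))
        print(total == sum(x * x for x in range(1000)))    # True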

    Read the article

  • Is it a good practice to create a list of definitions for all symbols and words in a programming language?

    - by MrDaniel
    After arriving at this point in Learning Python The Hard Way, I am wondering whether it is good practice to create a list of symbols and words, and define what each does, as noted below, for every programming language. This seems reasonable, and might be very useful to have when jumping between programming languages - is this something that programmers actually do, or is it just a waste of effort?

    Exercise 22: What Do You Know So Far? There won't be any code in this exercise or the next one, so there's no WYSS or Extra Credit either. In fact, this exercise is like one giant Extra Credit. I'm going to have you do a form of review of what you have learned so far. First, go back through every exercise you have done so far and write down every word and symbol (another name for 'character') that you have used. Make sure your list of symbols is complete. Next to each word or symbol, write its name and what it does. If you can't find a name for a symbol in this book, then look for it online. If you do not know what a word or symbol does, then go read about it again and try using it in some code. You may run into a few things you just can't find out or know, so just keep those on the list and be ready to look them up when you find them. Once you have your list, spend a few days rewriting the list and double checking that it's correct. This may get boring but push through and really nail it down. Once you have memorized the list and what they do, then you should step it up by writing out tables of symbols, their names, and what they do from memory. When you hit some you can't recall from memory, go back and memorize them again.

    Read the article

  • Which programming languages have helped you to understand programming better?

    - by Xaisoft
    Which programming languages not only make you more proficient in the particular language you are learning, but also have a direct impact on the way you think about and understand programming in general, thereby making you a better programmer in other languages? Basically, which languages have the biggest impact on understanding the how and why of different programming concepts? What about Scheme? I have heard good things about that. I thought about taking the simplest of problems and implementing them in various languages. Has anyone done this?

    Read the article

  • Things you should implement in your own programming language

    - by I can't tell you my name.
    I've created an experimental toy programming language with a (now) working interpreter. It is Turing-complete and has a pretty low-level instruction set. Even if everything takes four to six times more code and time than in PHP, Python or Ruby, I still love programming all kinds of things in it. So I got the "basic" things that are written in many languages working:

    - Hello World
    - Input - Output
    - Countdowns (not as easy as you think, as there are no loops)
    - Factorials
    - Array emulation
    - 99 Bottles of Beer (simple, wrong inflection)
    - 99 Bottles of Beer (canonical)
    - Collatz conjecture
    - Quine (that was a fun one!)
    - Brainf*ck interpreter (to prove Turing-completeness; made me happy)

    I implemented all of the above examples because they all use many different aspects of the language, they are pretty interesting, and they don't take hours to write. Now my problem is: I've run out of ideas! I can't find any more examples of problems I could solve using my language. Do you have any programming problems which fit some of the criteria above for me to work out?

    Read the article

  • Graphical Programming Language

    - by prosseek
    In control engineering and instrumentation, Simulink and LabVIEW (G) are pretty popular, and in ESL design, Agilent SystemVue is gaining some popularity. But if you look at well-established compiler theory, almost 100% of it is about textual languages. What about graphical languages? Is there any noticeable research or discussion about graphical programming languages, in terms of: theory about graphical languages - syntactic/semantic analysis and whatever else is relevant, such as expressiveness (I actually asked a question about this at SO - http://stackoverflow.com/questions/2427496/what-do-you-mean-by-the-expressiveness-in-programming-lanuguage); and the possibilities of graphical languages in general? Or, what do you think about graphical programming languages?

    Read the article

  • Display with intel integrated graphics, bitcoin mine with Radeon 6950

    - by karategeek6
    I'm on Ubuntu Linux 11.04, 64-bit. I have an Intel i5 with integrated graphics and a Radeon 6950, with one monitor. I would like to run my graphics on the integrated card and run bitcoin mining on the 6950. I have bitcoin mining working when I use the 6950 for both display and mining. Every time I try to use the integrated graphics instead, OpenCL doesn't recognize my 6950. Running aticonfig --initial while using the integrated graphics for display breaks things, so I used the xorg.conf it created as a basis and tried to edit it manually. I really don't know what I'm doing, though. My last attempt is given below; with it, the graphics ran off the integrated card, but the 6950 wasn't recognized. Any help would be greatly appreciated!

    xorg.conf:

        #Section "ServerLayout"
        #    Identifier  "Intel Layout"
        #    Screen      "Default Screen"
        #    Identifier  "aticonfig Layout"
        #    Screen      "aticonfig-Screen[0]-0"
        #    Screen 0    "aticonfig-Screen[0]-0" 0 0
        #EndSection

        Section "Module"
            Load  "glx"
        EndSection

        # Intel
        Section "Device"
            Identifier  "Intel Integrated Graphics"
            Driver      "intel"
            BusID       "PCI:0:2:0"
        EndSection

        Section "Monitor"
            Identifier  "Default Monitor"
            Option      "VendorName" "Monitor Vendor"
            Option      "ModelName" "Monitor Name"
            Option      "DPMS" "true"
        EndSection

        Section "Screen"
            Identifier   "Default Screen"
            Device       "Intel Integrated Graphics"
            Monitor      "Default Monitor"
            DefaultDepth 24
        EndSection

        # ATI
        Section "Device"
            Identifier  "aticonfig-Device[0]-0"
            Driver      "fglrx"
            BusID       "PCI:1:0:0"
        EndSection

        Section "Monitor"
            Identifier  "aticonfig-Monitor[0]-0"
            Option      "VendorName" "ATI Proprietary Driver"
            Option      "ModelName" "Generic Autodetecting Monitor"
            Option      "DPMS" "true"
        EndSection

        Section "Screen"
            Identifier   "aticonfig-Screen[0]-0"
            Device       "aticonfig-Device[0]-0"
            Monitor      "aticonfig-Monitor[0]-0"
            DefaultDepth 24
            SubSection "Display"
                Viewport 0 0
                Depth    24
            EndSubSection
        EndSection

    Read the article

  • Programming and art

    - by user353874
    Specialized software plays a major role in every business field. Games provide new realities and are proven child-development tools. Communication has taken on a new meaning. Information has never traveled so fast. And yet programming is never referred to as an art form. Why is that? Programming is not romantic and not natural, so we don't feel naturally attached to it. Basically, our emotions don't fit programming. But it's really cool and better than art. :D

    Read the article

  • Analysis and Design for Functional Programming

    - by edalorzo
    How do you deal with the analysis and design phases when you plan to develop a system using a functional programming language like Haskell? My background is in imperative/object-oriented programming languages, so I am used to use-case analysis and to documenting the design of a program with UML. But the thing is that UML is inherently tied to the object-oriented way of building software, and I am intrigued by what the best way would be to develop documentation and define software designs for a system that is going to be developed using functional programming. Would you still use use-case analysis, or perhaps structured analysis and design instead? How do software architects define the high-level design of the system so that developers follow it? What do you show to your clients or to new developers when you are supposed to present a design of the solution? How do you document a picture of the whole thing without first having to write it all? Is there anything comparable to UML in the functional world?

    Read the article
