Search Results

Search found 2513 results on 101 pages for 'opengl'.

Page 97 of 101

  • C file read leaves garbage characters

    - by KJ
    Hi. I'm trying to read the contents of a file into my program but I keep occasionally getting garbage characters at the end of the buffers. I haven't been using C a lot (rather I've been using C++) but I assume it has something to do with streams. I don't really know what to do though. I'm using MinGW. Here is the code (this gives me garbage at the end of the second read): #include <stdio.h> #include <stdlib.h> char* filetobuf(char *file) { FILE *fptr; long length; char *buf; fptr = fopen(file, "r"); /* Open file for reading */ if (!fptr) /* Return NULL on failure */ return NULL; fseek(fptr, 0, SEEK_END); /* Seek to the end of the file */ length = ftell(fptr); /* Find out how many bytes into the file we are */ buf = (char*)malloc(length+1); /* Allocate a buffer for the entire length of the file and a null terminator */ fseek(fptr, 0, SEEK_SET); /* Go back to the beginning of the file */ fread(buf, length, 1, fptr); /* Read the contents of the file into the buffer */ fclose(fptr); /* Close the file */ buf[length] = 0; /* Null terminator */ return buf; /* Return the buffer */ } int main() { char* vs; char* fs; vs = filetobuf("testshader.vs"); fs = filetobuf("testshader.fs"); printf("%s\n\n\n%s", vs, fs); free(vs); free(fs); return 0; } The filetobuf function is from this example http://www.opengl.org/wiki/Tutorial2:_VAOs,_VBOs,_Vertex_and_Fragment_Shaders_%28C_/_SDL%29. It seems right to me though. So anyway, what's up with that?
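
    A likely explanation, for what it's worth: on Windows/MinGW, fopen(file, "r") opens the file in text mode, so fread() silently translates CRLF line endings and returns fewer bytes than the size ftell() reported, leaving uninitialized bytes between the end of the real data and the terminator written at buf[length]. A minimal sketch of a variant that avoids this, opening in binary mode and terminating after the bytes actually read:

        #include <stdio.h>
        #include <stdlib.h>

        /* Sketch only: read a whole file into a NUL-terminated buffer.
           "rb" avoids CRLF translation on Windows/MinGW, and the terminator
           goes after the number of bytes fread() actually returned. */
        char *filetobuf(const char *file)
        {
            FILE *fptr = fopen(file, "rb");     /* binary mode: no newline translation */
            if (!fptr)
                return NULL;

            fseek(fptr, 0, SEEK_END);
            long length = ftell(fptr);          /* size in bytes on disk */
            fseek(fptr, 0, SEEK_SET);

            char *buf = malloc(length + 1);
            if (!buf) {
                fclose(fptr);
                return NULL;
            }

            size_t got = fread(buf, 1, length, fptr);  /* bytes actually read */
            fclose(fptr);

            buf[got] = '\0';                    /* terminate after the real data */
            return buf;
        }

    The same effect can be had while keeping text mode by simply using the value fread() returns to decide where the terminator goes.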


  • Differences between iPhone/iPod Simulator and Devices

    - by Allisone
    Hi, since I started iPhone/iPod development I have come across some differences between how the simulator and a real device react. Maybe I will come across other differences I will have to figure out as well, maybe other people haven't met these problems here (yet) and can profit from the knowledge, and maybe you know some problems/differences that you would have been happy to know about earlier, before you spent several hours or days figuring out what the heck is going on. So here is what I came across. The simulator is not case sensitive, devices are case sensitive. This means a default.png or icon.png will work in the simulator, but not on a device, where they must be named Default.png and Icon.png (if it's still not working read this answer). The simulator has different codecs to play audio and video. If you use e.g. MPMoviePlayerController you might play a certain video on the simulator while on the device it won't work (use Handbrake's iPhone & iPod Touch presets to create videos playable on both simulator and device). If you play audio with AudioServicesPlaySystemSound(&soundID) you might hear the sound on the simulator but not on a device (use Audacity to open your sound file, export it as WAV and run afconvert -f caff -d LEI16@44100 -c 1 audacity.wav output.caf in Terminal). Also there is the flickering-on-second-run problem, which can be resolved with playerViewCtrl.initialPlaybackTime = -1.0; either at the end of playing or before each beginning. The simulator is mostly much faster because it doesn't simulate the hardware but uses the Mac's resources, therefore e.g. sio2 apps (OpenGL, OpenAL, etc. framework) run much better on the simulator; really, everything that uses more resources will run visibly better in the simulator than on the device. I hope we can add some more to this.


  • iPhone programming - problem with CoreFoundation forking

    - by Tom
    Hello all, I've been working on an iPhone game for several months. It's a 2D shooting game akin to the old Smash TV type games. I'm doing everything alone and it has come out well so far, but now I am getting unpredictable crashes which seem to be related to CoreFoundation forking and not exec()ing, as the message __THE_PROCESS_HAS_FORKED_AND_YOU_CANNOT_USE_THIS_COREFOUNDATION_FUNCTIONALITY___YOU_MUST_EXEC__ always shows up somewhere in the debugger. Usually it shows up around a CFRunLoopRunSpecific and is related to either a timer firing or _InitializeTouchTapCount. I cannot figure out exactly what is causing the fork to occur. My main game loop is running on a timer, first updating all the logic and then drawing everything with OpenGL. There is nothing highly complex or unusual. I understand you cannot make CF calls on the child side of a fork, or access shared memory and things like that. I am not explicitly trying to fork anything. My question is: can anyone tell me what type of activity might cause CoreFoundation to randomly fork like this? I'd really like to finish this game and I don't know how to solve this problem. Thanks for any help.


  • Are we DELPHI, VCL or Pascal programmers?

    - by José Eduardo
    I've been a Delphi database programmer since D2. Now I'm facing some digital imaging and 3D challenges that make me start studying OpenGL, DirectX, color spaces and so on. I'm really trying, but nobody seems to use Delphi for this kind of stuff, just the good-old-paycheck database programming. OK, I know that we have some very smart guys behind some clever components, some of them open source. Is there any Photoshop, Blender, Maya, Office, Sonar, StarCraft or Call of Duty written in Delphi? Do I have to learn C++ to have access to the zillions of books about that kind of stuff? What is the fuss/hype behind this: int *varName = &anotherThing? Why do pointers seem to be the holy grail for these apps? I've downloaded MSVC++ Express and started to learn some WPF and Qt integration, and I think: "Man, Delphi does this kind of stuff with less code and fewer headaches, since the wheels were invented." This leads my mind to the following... Have you ever tried to write a simple notepad program using just Notepad and dcc32 in Pascal/Delphi? If so, Embarcadero could make our beloved Pascal compiler free and sell just the IDE, the VCL, the customer support... and back to the question: are we DELPHI, VCL or Pascal programmers?
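
    On the pointer question specifically, a tiny C illustration may help (hypothetical variable names, purely to show what int *varName = &anotherThing means): a pointer stores the address of another variable, and dereferencing it reads or writes that variable, which is why C code can hand large buffers (pixels, vertices) around without copying them.

        #include <stdio.h>

        int main(void)
        {
            int anotherThing = 42;
            int *varName = &anotherThing;   /* varName holds the address of anotherThing */

            *varName = 7;                   /* writes through the pointer...             */
            printf("%d\n", anotherThing);   /* ...so this prints 7                       */
            return 0;
        }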


  • suggestions for a 3D graph rendering library?

    - by Sandro
    Hello coders! So I'm not sure how Stack Overflow friendly this question is, since it doesn't have a quick clear-cut answer, but here we go... I have a Java program that generates data for a directed graph. Now I need to render this graph. The data needs to be laid out in 3D, and I want to be able to define which plane an edge lives in (each edge will only need to occupy one plane of the 3D space). I also need the ability to navigate around the graph. Since I know that this kind of stuff is hard, I'm going shopping. So far I've looked into (in no particular order): JUNG: lacks 3D support. Cytoscape: not sure how much I'll be able to define edge drawing; haven't seen a non-bioinformatics application of it yet. JGraph: I didn't see any 3D applications yet. Prefuse: looks promising, does anyone know anything else about it? Gephi: documentation looks scarce. Processing: does this play well with Java? I'm also considering doing some combination of OpenGL + Swing rendering to create a 3D graph from multiple 2D graphs. I am also not averse to the idea of linking from another language. Any ideas? Thank you.


  • Python pixel manipulation library

    - by silinter
    So I'm going through the beginning stages of producing a game in Python, and I'm looking for a library that is able to manipulate pixels and blit them relatively fast. My first thought was Pygame, as it deals in pure 2D surfaces, but it only allows pixel access through Surface.get_at(), Surface.set_at() and Surface.get_buffer(), all of which lock the surface each time they're called, making them slow to use. I can also use the PixelArray and surfarray classes, but they are locked for the duration of their lifetimes, and the only way to blit them to a surface is to either copy the pixels to a new surface, or use surfarray.blit_array, which requires creating a subsurface of the screen and blitting to that if the array is smaller than the screen (if it's bigger I can just use a slice of the array, which is no problem). I don't have much experience with PyOpenGL or Pyglet, but I'm wondering if there is a faster library to do pixel manipulation in, or a faster method for doing pixel manipulation in Pygame. I did some work with SDL and OpenGL in C, and I do like the idea of adding vertex/fragment shaders to my program. My program will chiefly be dealing in loading images and writing/reading to/from surfaces.


  • Android Multiple Handlers Design Question

    - by Soumya Simanta
    This question is related to an existing question I asked. I thought I'd ask a new question instead of replying back to the other question, since I cannot comment on my previous question because of a word limit. Marc wrote - "I've more than one Handlers in an Activity." Why? If you do not want a complicated handleMessage() method, then use post() (on Handler or View) to break the logic up into individual Runnables. Multiple Handlers makes me nervous. I'm new to Android. My question is: is having multiple handlers in a single activity a bad design? Here is a sketch of my current implementation. I've a MapActivity that creates a data thread (a UDP socket that listens for data). My first handler is responsible for sending data from the data thread to the activity. On the map I've a bunch of "dynamic" markers that are refreshed frequently. Some of these markers are video markers, i.e., if the user taps a video marker, I add a view that extends android.opengl.GLSurfaceView to my map activity and display video on this new view. I use my second handler to send information about the marker that the user tapped on, from the ItemizedOverlay onTap(int index) method. The user can close the video view by tapping on it. I use my third handler for this. I would appreciate it if people can tell me what's wrong with this approach and suggest better ways to implement it. Thanks.


  • getting a tiled image collection on the iPad (deepzoom)

    - by Chris B
    I have a set of tiled image collections created via Microsoft's Deep Zoom Composer, and a Silverlight app that currently consumes them for display via MultiScaleImage. It's all working pretty well. I'd just like to get some experience with iPad programming and have a couple of ideas for some iPad applications. All my ideas rely on me being able to display/manipulate these tiled image sets on the iPad. I just picked up an iMac to facilitate this. I'm not seeing any Objective-C / Cocoa Touch libraries for this though, so am assuming I will have to roll my own. (I saw the Seadragon Ajax component, which is pretty slick, but I'm dealing with collections here, which it doesn't support. I would also like to roll this as a native app just to get the experience.) The only open source project I found for displaying/manipulating the tiled image sets was OpenZoom, a Flash component. I'm not too familiar with ActionScript either (Python, Java, C#, and C are the only languages I have really used), but briefly inspecting the code I didn't really have any issues with it and can probably use it for hints on how to swap the tiles in and out, etc. But, as I'm pretty new to Obj-C/Cocoa Touch, some pointers in the right direction would be appreciated. 1) Are there any other projects out there I am missing, or is OpenZoom my best bet for some reference? 2) Should I be trying to do this display in the UIKit framework, or should I do it as an OpenGL display? 3) Any other suggestions/pointers that I didn't think to ask?


  • How to efficiently show many Images? (iPhone programming)

    - by Thomas
    In my application I needed something like a particle system so I did the following: While the application initializes I load a UIImage laserImage = [UIImage imageNamed:@"laser.png"]; UIImage *laserImage is declared in the interface of my controller. Now every time I need a new particle this code makes one: // add new laser image UIImageView *newLaser = [[UIImageView alloc] initWithImage:laserImage]; [newLaser setTag:[model.lasers count]-9]; [newLaser setBounds:CGRectMake(0, 0, 17, 1)]; [newLaser setOpaque:YES]; [self.view addSubview:newLaser]; [newLaser release]; Please notice that the images are only 17px * 1px small and model.lasers is an internal array to do all the calculations separately from the graphical output. So in my main drawing loop I set all the UIImageViews' positions to the calculated positions in my model.lasers array: for (int i = 0; i < [model.lasers count]; i++) { [[self.view viewWithTag:i+10] setCenter:[[model.lasers objectAtIndex:i] pos]]; } I incremented the tags by 10 because the default is 0 and I don't want to move all the views with the default tag. So the animation looks fine with about 10 - 20 images but really gets slow when working with about 60 images. So my question is: Is there any way to optimize this without starting over in OpenGL ES? Thank you very much and sorry for my English! Greetings from Germany, Thomas


  • Android & libgdx - disable blurry image rendering

    - by android developer
    I'm trying out libgdx as an OpenGL wrapper, and I have some issues with its graphical rendering: for some reason, all images (textures) on the Android device look a little blurred using libgdx. This also includes text (fonts). However, for normal images, even though I show the entire image, I expect it to look as sharp as I see it on a computer, especially since I have such a good screen on the device (it's a Galaxy Nexus). I've tried to set anti-aliasing off by using the next code: final AndroidApplicationConfiguration androidApplicationConfiguration=new AndroidApplicationConfiguration(); androidApplicationConfiguration.numSamples=0; //tried the value of 1 too. ... I've also tried to set the scaling method to various methods, but with no luck. Example: texture.setFilter(TextureFilter.Nearest,TextureFilter.Nearest); As a test, I've found a sharp image that is exactly the same as the seen resolution on the device (720x1184 for the Galaxy Nexus, because of the buttons bar), and I've put it as the background of the libgdx app. Of course, I had to add extra blank space in order for the texture to be loaded, so the final size of the image (which will include content and empty space) is still a power of 2 for both width and height (1024x2048 in my case). On the desktop app, it looks OK. On the device, it looked blurred. A weird thing that I've noticed is that when I change the device's orientation (horizontal/vertical), for the very short time before the rotating animation starts, I see both the image and the text very well. Can anyone please help me?


  • Sending series of images to display like a movie on iPhone

    - by unknownthreat
    Allow me to elaborate more. On the server, we will have a program that will take data from the iPhone, process that data and produce a series of images. Each time an image is generated, it will be sent back to be displayed on the iPhone. I have done all of the things above using UDP, OpenGL, and such. It works. The images are transferred to the iPhone and can be displayed, but it is slow. The image's resolution is around 320 x 420 and we send the image pixel by pixel. This naive implementation leads to a slow frame rate; I see around 2-3 frames per second. There are also some UDP packets dropped, and this is expected. Is there any sort of compression method available for something like this? Are there any other methods that can make this better? NOTE: please don't just write "compression" as an answer, because we are aware that we will need to do it in some way.
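
    One concrete direction, sketched under the assumption that consecutive frames are mostly similar: difference each frame against the previous one so unchanged pixels become runs of "no change", then run-length encode those runs before sending. This is only an illustration, not a real codec; in the worst case it expands the data to 3x, so an actual implementation would fall back to sending the raw frame whenever the delta does not pay off. The output format below (a 16-bit unchanged-run length followed by one changed byte) is made up for the example.

        #include <stdint.h>
        #include <stddef.h>

        /* Sketch: delta a frame against the previous one and run-length encode
           the unchanged stretches. The caller must supply an out buffer of at
           least 3 * n bytes. Returns the number of bytes written. */
        size_t delta_rle_encode(const uint8_t *cur, const uint8_t *prev,
                                size_t n, uint8_t *out)
        {
            size_t o = 0;
            for (size_t i = 0; i < n; ) {
                /* count consecutive bytes that are identical to the last frame */
                uint16_t run = 0;
                while (i < n && run < 0xFFFF && cur[i] == prev[i]) {
                    run++;
                    i++;
                }
                out[o++] = (uint8_t)(run & 0xFF);    /* run length, little-endian       */
                out[o++] = (uint8_t)(run >> 8);
                out[o++] = (i < n) ? cur[i] : 0;     /* one changed byte (filler at end) */
                if (i < n)
                    i++;
            }
            return o;
        }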


  • AJAX vs ActiveX/Flash for browser-based game

    - by iconiK
    I have been following the usage of JavaScript for the past few years, and with the release of extremely fast scripting engines (V8, SquirrelFish Extreme, TraceMonkey, etc.) the possibilities of JavaScript have increased dramatically. However, the usage share of Internet Explorer coupled with its total lack of support for recent standards makes me want to drop a bomb on Microsoft's HQ, as it creates a huge amount of problems for any website. The game will need to be pretty dynamic client-side, with animations and other eye-candy things, but not a full-blown game like those that run directly in the OS using DirectX or OpenGL. However, this might be a little stretch for JavaScript and will certainly feel extremely slow in Internet Explorer (given that the current IE engine can be hundreds of times slower than SFX; gotta see what IE9 will bring). Would it be better to just do the whole thing in Flash? I know this means requiring the plug-in AND I have no experience whatsoever with Flash (other than browsing YouTube :P). It also means I can't just output directly from PHP; I would have to use XML or some other format to pass data to it (JSON is directly integrated in JS, and PHP can deal with it easily). Another idea would be to provide an alternative interface just for IE, though I don't know how (ActiveX maybe? or with Flash, but then why not just provide it to all browsers), or totally not supporting it and requiring the use of other browsers, although this is plain stupid from a business perspective. So here I am, wondering what approach to take and thus asking for your advice. How should I build the client side? AJAX in all browsers, Flash in all browsers, or a mix (AJAX for "modern" browsers and something else for the "grandpa": IE)?


  • Objective-C NSDate memory issue (again)

    - by Toby Wilson
    I'm developing a graphing application and am attempting to change the renderer from OpenGL to Quartz 2D to make text rendering easier. A retained NSDate object that was working fine before suddenly seems to be deallocating itself, causing a crash when an NSMutableString attempts to append its description (now nil). Build & Analyze doesn't report any potential problems. Simplified, the code looks like this: NSDate* aDate; -(id)init { aDate = [[NSDate date] retain]; return self; } -(void)drawRect:(CGRect)rect { NSMutableString* stringy = [[NSMutableString alloc] init]; //aDate is now deallocated and pointing at 0x0? [stringy appendString:[aDate description]]; //Crash } I should stress that the actual code is a lot more complicated than that, with a separate thread also accessing the date object; however suitable locks are in place, and when stepping through the code [aDate release] is not being called anywhere. Using [[NSDate alloc] init] has the same effect. I should also add that init IS the first function to be called. Can anyone suggest something I may have overlooked, or why the NSDate object is (or appears to be) releasing itself?


  • Any high-level languages that can use C libraries?

    - by Isaiah
    I know this question could be in vain, but it's just out of curiosity, and I'm still very much a newb^^ Anyway, I've been loving Python for some time while learning it. My problem is obviously speed issues. I'd like to get into indie game creation, and for the short term, 2D and Pygame will work. But I'd eventually like to branch into the 3D area, and Python is really too slow to make anything 3D and professional. So I'm wondering if there has ever been work to create a high-level language able to import and use C libraries? I've looked at Genie and it seems to be able to use certain libraries, but I'm not sure to what extent. Will I be able to use it for OpenGL programming, or in a C game engine? I do know some Lisp and enjoy it a lot, but there aren't a great many libraries out there for it. Which leads to the problem: I can't stand C syntax, but C has libraries galore that I could need! And game engines like Irrlicht. Is there any language that can be used in place of C, yet still work with C libraries? Thanks so much guys
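
    On the "high-level language that can import and use C libraries" point: this has existed for a long time in the form of foreign-function interfaces (Python's ctypes and cffi, Lua, most Lisp implementations). A tiny hypothetical C library shows the shape of it; compiled with something like gcc -shared -fPIC -o libhello.so hello.c, it can be loaded from Python via ctypes.CDLL("./libhello.so") and its functions called directly.

        /* hello.c (hypothetical): a minimal C library meant to be called from a
           higher-level language through its foreign-function interface. Plain
           functions with simple argument types are the easiest to bind. */
        #include <math.h>

        double vector_length(double x, double y, double z)
        {
            return sqrt(x * x + y * y + z * z);
        }

        int add_ints(int a, int b)
        {
            return a + b;
        }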


  • [Qt] How to get rid of OCI.dll dependency when compiling static

    - by STL
    Hi, my application accesses an Oracle database through Qt's QSqlDatabase class. I'm compiling Qt as static for the release build, but I can't seem to be able to get rid of the OCI.dll dependency. I'm trying to link against oci.lib (as available in Oracle's Instant Client with SDK). Here's my configure line: configure -qt-libjpeg -qt-zlib -qt-libpng -nomake examples -nomake demos -no-exceptions -no-stl -no-rtti -no-qt3support -no-scripttools -no-openssl -no-opengl -no-phonon -no-style-motif -no-style-cde -no-style-cleanlooks -no-style-plastique -static -release -opensource -plugin-sql-oci -plugin-sql-sqlite -platform win32-msvc2005 I link against oci.h and oci.lib in the SDK's folder by using: set INCLUDE=C:\oracle\instantclient\sdk\include;%INCLUDE% set LIB=C:\oracle\instantclient\sdk\lib\msvc;%LIB% Then, once Qt is compiled, I use the following lines in my *.pro file: QT += sql CONFIG += static LIBS += C:\oracle\instantclient\sdk\lib\msvc\oci.lib QTPLUGIN += qsqloci Then, in my main.cpp, I add the following commands to statically compile the OCI plugin into the application: #include <QtPlugin> Q_IMPORT_PLUGIN(qsqloci) After compiling the project, I test it on my workstation and it works (as I have Oracle Instant Client installed). When I try on another workstation, I always get the message: This application has failed to start because OCI.dll was not found. Re-installing this application may fix this problem. I don't understand why I still need OCI.dll, as my statically linked application is supposed to link to oci.lib instead. Are there any Qt people here who might have a solution for me? Thanks a lot! STL


  • Using game of life or other virtual environment for artificial (intelligence) life simulation? [closed]

    - by Berlin Brown
    One of my interests in AI focuses not so much on data but more on biological computing. This includes neural networks, mapping the brain, cellular automata, virtual life and environments. Described below is an exciting project that involves developing a virtual environment for bots to evolve in. "Polyworld is a cross-platform (Linux, Mac OS X) program written by Larry Yaeger to evolve Artificial Intelligence through natural selection and evolutionary algorithms." http://en.wikipedia.org/wiki/Polyworld Polyworld is a promising project for studying virtual life, but it is still far from creating an "intelligent autonomous" agent. Here is my question: in theory, what parameters would you use to create an AI environment? Possibly a brain environment? Possibly multiple self-contained life organisms that have their own "brain" or life structures. I would like to create a spin on the Game of Life simulation. What if you have a 64x64 Game of Life grid, but instead of one grid, you have N grids? The N grids are your "life force": if all of the Game of Life entities die in a particular grid then that entire grid dies, and a group of grids makes up a life form. I don't have an immediate goal. First, I want to simulate an environment and visualize what is going on in the environment with OpenGL, and see if there are any interesting properties of the environment. I then want to add "scarce resources" and see if the AI environment can manage resources adequately.
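
    For the simulation core itself, here is a sketch of a single Game of Life step on one 64x64 grid with wrapping edges; the N-grids-per-organism idea would then just be an array of these, with a grid declared dead once its live-cell count reaches zero.

        #include <string.h>

        #define GRID 64

        /* One Conway step (birth on 3 neighbours, survival on 2 or 3) on a
           64x64 toroidal grid. Returns the number of live cells afterwards,
           so a caller can mark the whole grid "dead" when it hits 0. */
        int life_step(unsigned char cells[GRID][GRID])
        {
            unsigned char next[GRID][GRID];
            int alive = 0;

            for (int y = 0; y < GRID; y++) {
                for (int x = 0; x < GRID; x++) {
                    int n = 0;
                    for (int dy = -1; dy <= 1; dy++)
                        for (int dx = -1; dx <= 1; dx++) {
                            if (dx == 0 && dy == 0) continue;
                            n += cells[(y + dy + GRID) % GRID][(x + dx + GRID) % GRID];
                        }
                    next[y][x] = (n == 3) || (cells[y][x] && n == 2);
                    alive += next[y][x];
                }
            }
            memcpy(cells, next, sizeof next);
            return alive;
        }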


  • Screen capture doesn't work on MFC application in Vista

    - by David Thornley
    We've got some in-house applications built in MFC, with OpenGL drawing routines. They all use the same code to draw on the screen and either print the screen or save it to a JPEG file. Everything's been working fine in Windows XP, and I need to find a way to make them work on Vista. In three of our applications, everything works. In the remaining one, I can get the window border, title bar, menus, and task bar, but the interior never shows up. As I said, these applications use the exact same code to write to the screen and capture the window image, and the only difference I see that looks like it might be relevant is that the problem application uses the MFC multiple document interface, while the ones that work use the single document interface. Either the answer isn't on the net, or I'm worse at Googling than I thought. I asked on the MSDN forums, and the only practical suggestion I got was to use GDI+ rather than GDI, and that did nothing different. I have tried different things with every part of the code that captures and prints or saves, given a pointer to the window, so apparently it's a matter of the window itself. I haven't rebuilt the offending application using SDI yet, and I really don't have any other ideas. Has anybody seen anything like this?
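
    Not a diagnosis, but one workaround sometimes used when the area that refuses to capture is OpenGL-drawn: read the pixels back from the GL framebuffer with glReadPixels() while the rendering context is current, and hand that buffer to the existing JPEG/print path instead of capturing the window through GDI. A rough sketch, assuming the client area is w x h:

        #include <stdlib.h>
        #include <GL/gl.h>

        /* Sketch: copy the current OpenGL framebuffer into a caller-freed RGB
           buffer. Rows come back bottom-up, so flip them if the consumer
           expects top-down data (as GDI/JPEG code usually does). */
        unsigned char *capture_gl(int w, int h)
        {
            unsigned char *pixels = malloc((size_t)w * h * 3);
            if (!pixels)
                return NULL;

            glPixelStorei(GL_PACK_ALIGNMENT, 1);   /* tightly packed rows                 */
            glReadBuffer(GL_FRONT);                /* what is on screen now; reading       */
                                                   /* GL_BACK just before SwapBuffers      */
                                                   /* is the other common choice           */
            glReadPixels(0, 0, w, h, GL_RGB, GL_UNSIGNED_BYTE, pixels);
            return pixels;
        }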


  • What is the best approach to 2D collision detection on the iPhone?

    - by Magic Bullet Dave
    Been working on this problem of collision detection, and there appear to be three main approaches I could take: Sprite and mask approach (AND the overlap of the sprites and check for a non-zero number in the resulting sprite pixel data). Bounding circles, rectangles or polygons (create one or more shapes that enclose the sprites and do the basic maths to check for overlaps). Use an existing sprite library. The first approach, even though it would have been the way I would have done it in the old days of 16x16 sprite blocks: it appears that there just isn't an easy way of getting at the individual image pixel data and/or alpha channel within Quartz (or OpenGL for that matter). Detecting the overlap of the bounding box is easy, but then creating a third image from the overlap and then testing it for pixels is complicated, and my gut feel is that even if we could get it to work it would be slow. Am I missing something neat here? The second approach involves dividing up our sprites into several polygons and testing them for overlaps. The more polygons, the more accurate the collision detection. The benefit is that it is fast and can be accurate. The downside is it makes the sprite creation more complicated, i.e., we have to create the polygons for each sprite. For speed the best approach is to create a tree of polygons. The third approach I'm not sure about, as it involves buying code (or using an open source licence). I am not sure what the best library to use is, or whether this would make life easier or give us a problem integrating it into our app. So in short I am favouring the polygon and tree approach and would appreciate your views on this before I go and write lots of code. Best regards Dave
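
    For the second approach, the cheapest first-pass test is a bounding-circle overlap: two circles intersect when the squared distance between their centres is no greater than the square of the sum of their radii (squaring both sides avoids the square root). A sketch:

        /* Sketch: coarse collision test with bounding circles.
           Returns nonzero when the two circles overlap. */
        typedef struct { float x, y, r; } Circle;

        int circles_overlap(Circle a, Circle b)
        {
            float dx = a.x - b.x;
            float dy = a.y - b.y;
            float rsum = a.r + b.r;
            return dx * dx + dy * dy <= rsum * rsum;
        }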


  • Using the contents of an array to set individual pixels in a Quartz bitmap context

    - by Magic Bullet Dave
    I have an array that contains the RGB colour values for each pixel in a 320 x 180 display. I would like to be able to set individual pixel values in a bitmap context of the same size offscreen, then display the bitmap context in a view. It appears that I have to create 1x1 rects and either put a stroke on them or a line of length 1 at the point in question. Is that correct? I'm looking for a very efficient way of getting the array data onto the graphics context; as you can imagine, this is going to be an image buffer that cycles at 25 frames per second, and drawing in this way seems inefficient. I guess the other question is: should I use OpenGL ES instead? Thoughts/best practice would be much appreciated. Regards Dave OK, have come a short way, but can't make the final hurdle and I am not sure why this isn't working: - (void) displayContentsOfArray1UsingBitmap:(CGContextRef)context { long bitmapData[WIDTH * HEIGHT]; // Build bitmap int i, j, h; for (i = 0; i < WIDTH; i++) { for (j = 0; j < HEIGHT; j++) { h = frameBuffer01[i][j]; bitmapData[i * j] = h; } } // Blit the bitmap to the context CGDataProviderRef providerRef = CGDataProviderCreateWithData(NULL, bitmapData, 4 * WIDTH * HEIGHT, NULL); CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB(); CGImageRef imageRef = CGImageCreate(WIDTH, HEIGHT, 8, 32, WIDTH * 4, colorSpaceRef, kCGImageAlphaFirst, providerRef, NULL, YES, kCGRenderingIntentDefault); CGContextDrawImage(context, CGRectMake(0.0, HEIGHT, WIDTH, HEIGHT), imageRef); CGImageRelease(imageRef); CGColorSpaceRelease(colorSpaceRef); CGDataProviderRelease(providerRef); }
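
    One thing that looks off in the build loop (an observation on the posted code, not a tested fix): bitmapData[i * j] is not a row-major index, so every pixel in row 0 or column 0 lands in element 0 and many (i, j) pairs collide. The usual addressing, with a fixed-width 32-bit element so each entry really is the 4 bytes per pixel that CGImageCreate is told about, would look roughly like this:

        #include <stdint.h>

        #define WIDTH  320
        #define HEIGHT 180

        /* Sketch: copy a WIDTH x HEIGHT frame buffer into a flat, row-major
           32-bit pixel array (uint32_t rather than long, and kept out of the
           local stack frame, since 320 * 180 * 4 bytes is a large local). */
        static uint32_t bitmapData[WIDTH * HEIGHT];

        void fill_bitmap(uint32_t frameBuffer[WIDTH][HEIGHT])
        {
            for (int i = 0; i < WIDTH; i++)
                for (int j = 0; j < HEIGHT; j++)
                    bitmapData[j * WIDTH + i] = frameBuffer[i][j];  /* row j, column i */
        }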


  • Which Stroustrup book should I use?

    - by Chris Simmons
    I'm a C# programmer who is looking to branch out. I'm bored of writing business software and want to start getting into graphics programming and games/simulators. So I figured, although writing that stuff isn't impossible in managed code, the "right" way to do it would be to look to C++, of course focusing on the language first, then getting into OpenGL or DirectX (or whatever). Way, way back ('98? '99?) I had tried and failed to really grasp Stroustrup's The C++ Programming Language. I know that this book is often not recommended for the beginner. Anyway, I picked it back up (in a much more recent printing) and I'm actually getting it and enjoying it. I also have a copy of his textbook, Programming: Principles and Practice Using C++, which, as I understand it, is really geared toward teaching programming, not necessarily C++. I'm certainly not arrogant enough to claim I don't have anything more to learn about programming, data structures, algorithms, etc.; however I'm not a novice there either. So my question is, with the goal of gaining the broader and more real-world-useful understanding of C++ and given my background, on which should I focus? The denser (as I perceive it) TC++PL or the gentler Programming? EDIT: I thank everyone for the responses. However, I've got a personal choice here to make between these two books. Granted there are other very good books out there, but I'm already a good length into both of the books I mention and I'd like to finish one. So, can anyone respond on which would be the better and why? Time is not an issue; I'm not looking (at this point) at an "accelerated" read.


  • VB6 Game Development

    - by CVS-2600Hertz-wordpress-com
    Hi all, I am developing a game in VB6 (plz don't ask me why :) ). The storyboard is ready and a rough implementation is underway. I am following a "pure software rendering" approach (i.e. no DirectX, no OpenGL, etc.). Amongst many others, the following "serious" problems exist: 2D alpha transparency required to implement overlays. Parallax implementation to give a depth-of-field illusion. Capturing mouse-scroll events globally (as in FPSes, mapping them to changing weapons). Async sound playback with absolute "near-zero lag". Any ideas, anyone? Please suggest any well documented library/OCX or sample code. Please do suggest solutions with good performance and as little overhead as possible. Also, anyone who has developed any games and would be open to sharing her/his code would be highly appreciated. (Any well-acknowledged VB games whose source code I can study?) UPDATE: Here is a screenshot of GearHead Garage. This picture ought to describe what I was attempting in words above... :) Thank you
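
    On the 2D alpha transparency item (independent of VB6 itself), software alpha blending is a per-channel weighted average of source over destination. The arithmetic, as a small C-style sketch that translates line for line into any language that can poke pixels:

        #include <stdint.h>

        /* Sketch: classic "source over destination" blend for one 8-bit channel,
           alpha in 0..255: out = (src * a + dst * (255 - a)) / 255, rounded. */
        uint8_t blend_channel(uint8_t src, uint8_t dst, uint8_t alpha)
        {
            return (uint8_t)((src * alpha + dst * (255 - alpha) + 127) / 255);
        }

        void blend_pixel(const uint8_t src[3], uint8_t dst[3], uint8_t alpha)
        {
            for (int i = 0; i < 3; i++)
                dst[i] = blend_channel(src[i], dst[i], alpha);
        }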


  • How to create platform independent 3D video on 3D TV via HDMI 1.4?

    - by artif
    I am writing a real-time, interactive 3D visualization program, and at each point in the program I can compute 2 images (bitmaps) that are meant to look 3D together by means of stereoscopy. How do I get my program to display the image pairs such that they look 3D on a 3D TV? Is there a platform-independent way of accomplishing it? (By platform I mean independent of GPU brand, operating system, 3D TV vendor, etc.) If not, which is preferable: to lock in by GPU, OS, or 3D TV? I suppose I need to be using an HDMI 1.4 cable with the 3D TV? HDMI 1.4 can encode stereoscopy via the side-by-side method. But how do I send such an encoded signal to the monitor? What kind of libraries do I use for this sort of thing? Windows DirectShow? If DirectShow is correct, is there a cross-platform equivalent available? If anyone asks, yes I have seen this question: http://stackoverflow.com/questions/2811350/generating-3d-tv-stereoscopic-output-programmatically. However, correct me if I am wrong, it does not appear to be what I'm looking for. I do not have an OpenGL or Direct3D program that generates polygons, for which an Nvidia card can do ad-hoc impromptu stereoscopy simply by rendering the scene from 2 slightly offset points of view and then displaying those 2 images on the monitor; my program already has those image pairs and needs to display them (and they are not the result of rendering polygons). Btw, I have never done any major multimedia programming before and know very little about HDMI, DirectShow, 3D TVs, etc., so pardon me if any parts of this question did not make sense at all.
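
    On the frame-packing half of side-by-side: each eye's image is horizontally squeezed to half width and the two halves sit next to each other in one ordinary video frame, which the TV then unpacks. A plain pixel-shuffling sketch of that arrangement (nearest-neighbour squeeze, hypothetical row-major 32-bit buffers), independent of whatever API eventually presents the frame:

        #include <stdint.h>

        /* Sketch: pack a stereo pair into one side-by-side frame of the same
           width by dropping every second column of each eye. left, right and
           out are all row-major, 32 bits per pixel, w x h. */
        void pack_side_by_side(const uint32_t *left, const uint32_t *right,
                               uint32_t *out, int w, int h)
        {
            int half = w / 2;
            for (int y = 0; y < h; y++) {
                for (int x = 0; x < half; x++) {
                    out[y * w + x]        = left [y * w + x * 2];   /* left eye, squeezed  */
                    out[y * w + half + x] = right[y * w + x * 2];   /* right eye, squeezed */
                }
            }
        }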


  • OpenAL not playing on Mac OS X 10.6

    - by Grimless
    I've been working on getting a basic audio engine running on my Mac using OpenAL. It seems relatively straightforward after working with OpenGL for a while. However, despite the fact that I believe I have everything in place, my sound will not play. Here is the order of things I am doing: //Creating a new device ALCdevice* device = alcOpenDevice(NULL); //Create a new context with the device ALCcontext* context = alcCreateContext(device, NULL); //Make that context current alcMakeContextCurrent(context); //Do lots of loading stuff to bring in an AIFF... voodooAIFF = myAIFFLoader("name"); //Then use that data ALuint buf; alGenBuffers(1, &buf); //Check for errors, but none happen... //Bind buffer data. alBufferData(buf, voodooAIFF.format, voodooAIFF.data, voodooAIFF.sizeInBytes, voodooAIFF.frequency); //Check for errors, none here either... //Create Source ALuint src; alGenSources(1, &src); //Error check again, no errors. //Bind source to buffer alSourcei(src, AL_BUFFER, buf); //Set reference distance alSourcei(sourceID, AL_REFERENCE_DISTANCE, 1); //Set source attributes including gain and pitch to 1 (direction set to 0,0,0) //Check for errors, nothing... //Set up listener attributes. //Check for errors, no errors. //Begin playing. alSourcePlay(src); Observe silence... Any insight, what steps am I missing here?
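
    Two things worth ruling out, assuming the AIFF loader itself is fine: alSourcePlay() is asynchronous and returns immediately, so if the program falls off the end of main() (or the run loop never spins) the context is torn down before anything becomes audible, and OpenAL reports most failures through alGetError() rather than through return values. A minimal sketch that plays one already-filled buffer and blocks until it finishes:

        #include <stdio.h>
        #include <unistd.h>
        #include <OpenAL/al.h>

        /* Sketch: play a filled buffer and wait for the source to stop,
           checking alGetError() after the interesting calls. */
        void play_and_wait(ALuint buf)
        {
            ALuint src;
            alGenSources(1, &src);
            alSourcei(src, AL_BUFFER, buf);
            alSourcePlay(src);                      /* asynchronous: returns at once */
            if (alGetError() != AL_NO_ERROR)
                fprintf(stderr, "alSourcePlay failed\n");

            ALint state = AL_PLAYING;
            while (state == AL_PLAYING) {           /* keep the process alive */
                usleep(10000);
                alGetSourcei(src, AL_SOURCE_STATE, &state);
            }
            alDeleteSources(1, &src);
        }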


  • About C# objects and the possibilities it has

    - by user527825
    As a novice programmer I always wonder about C#'s capabilities. I know it is still early for me to judge that, but all I want to know is: can C# do complex stuff, or anything outside the Windows OS? 1- I think C# is a proprietary language (I don't know if I said that right), meaning you can't use it outside Visual Studio or Windows. 2- Also, you can't create your own controller (called an object, right?); you are forced to use those available in the toolbox and their properties and methods. 3- Can C# be used with the OpenGL API or DirectX API? 4- Finally, it always bothers me when I think I start doing things in Visual Studio. I know it sounds arrogant to say, but sometimes I feel that I don't like to be forced to use something even if it's helpful; like I feel (do I have the right to feel?) that I want to do all things by myself? Don't laugh, I just feel that this will give me a better understanding. 5- Is Visual C# like using MaxScript inside 3ds Max, in that C# is exclusive to doing Windows forms and components that are Windows related, the way MaxScript is only for 3D editing and manipulation of various things in the software? If it is too difficult for a beginner I hope you don't answer the fourth question, as I don't have enough motivation and I want to keep the little I have. Thank you for your time. Note: 1- Sorry for my English; I am self-taught and never used the language with native speakers, so expect some errors. 2- I have a lot of questions regarding many things. What daily number of questions do you think I can ask without bothering the admins of the site and the members here? Thank you for your time.

