Search Results

Search found 7346 results on 294 pages for 'touch flo 3d'.

Page 150/294 | < Previous Page | 146 147 148 149 150 151 152 153 154 155 156 157  | Next Page >

  • Testing for Auto Save and Load Game

    - by David Dimalanta
    I'm trying to make a simple app that tests save and load state. Is it a good idea for an app to auto-save and auto-load the game, so a new player can open it, play, and pick up where they left off another day? My test app is a simple moving block sprite that starts at the center coordinate. Once I've moved the sprite to the top by touch and drag, I press the back key to close the app. I expected the block sprite to still be at the top when I re-open the app, but instead it goes back to the center. Where can I find more on using preferences, or on telling the dispose method to discard only the throwaway state but not the important records (fastest time, the sprite's last coordinates, etc.)? Is there an easy shortcut, or no shortcut but one most effective way? I need help expanding these ideas. Thank you. Here are the links I'm trying to figure out how to use: http://www.youtube.com/watch?v=gER5GGQYzGc http://www.badlogicgames.com/wordpress/?p=1585 http://www.youtube.com/watch?v=t0PtLexfBCA&feature=relmfu Note that these links lead to code, but I'm not looking for code answers - I'm looking for how to get started and how to use them. Tell me if I'm wrong.

    Read the article

  • Depth interpolation for z-buffer, with scanline

    - by Twodordan
    I have to write my own software 3d rasterizer, and so far I am able to project my 3d model, made of triangles, into 2d space: I rotate, translate and project my points to get a 2d representation of each triangle. Then I take the 3 triangle points and implement the scanline algorithm (using linear interpolation) to find all points[x][y] along the triangle's left and right edges, so that I can scan it horizontally, row by row, and fill it with pixels. This works. Except I also have to implement z-buffering. This means that, knowing the rotated and translated z coordinates of the triangle's 3 vertices, I must interpolate the z coordinate for all the other points found by my scanline algorithm. The concept seems clear enough; I first find Za and Zb with these calculations:

        var Z_Slope = (bottom_point_z - top_point_z) / (bottom_point_y - top_point_y);
        var Za = top_point_z + ((current_point_y - top_point_y) * Z_Slope);

    Then for each Zp I do the same interpolation horizontally:

        var Z_Slope = (right_z - left_z) / (right_x - left_x);
        var Zp = left_z + ((current_point_x - left_x) * Z_Slope);

    And of course I write to the z-buffer if the current z is closer to the viewer than the previous value at that index. (My coordinate system is x: left to right; y: top to bottom; z: your face to the computer screen.) The problem is, it goes haywire. The project is here, and if you select the "Z-Buffered" radio button you'll see the results... (Note that the options before "Z-Buffered" use the painter's algorithm to correctly order the triangles. I also use the painter's algorithm - only - to draw the wireframe in "Z-Buffered" mode, for debugging purposes.) PS: I've read here that you must turn the z's into their reciprocals (meaning z = 1/z) before you interpolate. I tried that, and it appears that nothing changes. What am I missing? (Could anyone clarify precisely where you must turn z into 1/z and where to turn it back?)
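    For reference, the usual resolution is that after a perspective projection z itself is no longer linear in screen space, but 1/z is: take the reciprocal at the three vertices, interpolate that down the edges and across each scanline, and convert back only where a pixel's depth is actually needed. A minimal sketch of the scanline step, in C# for illustration (the names are invented, not taken from the project above):

        // invZLeft/invZRight are 1/z at the scanline's edge intersections, which must
        // themselves be linearly interpolated from the *reciprocal* vertex depths.
        static float PixelDepth(float leftX, float rightX,
                                float invZLeft, float invZRight, float x)
        {
            float t = (x - leftX) / (rightX - leftX);            // 0..1 across the span
            float invZ = invZLeft + t * (invZRight - invZLeft);  // 1/z is linear on screen
            return 1.0f / invZ;                                  // back to real z for the z-test
        }

    Note that the z-test can equally well compare the interpolated 1/z values directly (with the comparison flipped), which avoids the per-pixel divide.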

    Read the article

  • Partner Webcast – Oracle CRM: The Age of the Customer - 18 July 2013

    - by Thanos
    High-touch solutions for the complete customer experience. How does Customer Relationship Management change in "the age of the customer" - or does it at all? Customer relationship management has changed over the past years from a pure "inside out" point of view, where the customer is the center of attention, to an "outside in" discipline, where the customer has become the driving force: away from the 360° view of customer data, toward a holistic view of the customer's journey and experience, built on behavioral analysis and interaction across all touch points along the lifecycle of a customer relationship. Learn how this approach - integrating sales, service and marketing channels into one cohesive customer experience - can drive customer experience and support acquisition, retention and efficiency in your customer relationships. With Oracle's Sales, Service and Marketing cloud offerings, you can be ahead of the game and provide a consistent and personalized voice to your customers, regardless of which channels you favor and your customers prefer. Integrated, cross-channel campaign automation and service delivery, as well as feedback loops into sales automation, will give you the tools to achieve a top-of-the-line customer experience. Agenda: · Oracle Customer Experience - introduction to a new take on CRM · Oracle Sales Cloud - integrated sales force automation · Oracle Marketing Cloud - cross-channel campaign management · Oracle Service Cloud - channel-blending in service delivery. Delivery format: this FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to the start time may not receive confirmation to attend. Duration: 1 hour. REGISTER NOW. For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com.

    Read the article

  • Can't adjust backlight on an Nvidia 335m GT

    - by Vladimir
    I have a laptop mySN QMG6 / Chiligreen Mobilitas NW, which is a Quanta TW9 barebone with an Intel i3 and an Nvidia 335M GT onboard. On Ubuntu 10.04, 10.10, 11.04 and 11.10 I had problems changing the screen backlight with both the nouveau and nvidia drivers: the Fn+F4/F5 buttons did not change the brightness. I tried editing xorg.conf, adding Option "RegistryDwords" "EnableBrightnessControl=1". I also tried adding some options to grub: acpi_osi="Linux" acpi_backlight=vendor. Neither worked for me. Today I installed Ubuntu 12.04 beta 2 and... with the nouveau driver my Fn keys work and change the brightness (whether it's the new 3.2.0-22 Linux kernel or a patched nouveau driver, I don't know). This is a big step forward. But when I install the proprietary nvidia driver (295.33), the Fn buttons stop working and I can't change the brightness. I tried the xorg and grub workarounds again with no result, and installing acpi from apt - no result. Is there anything left to try? I really need the nvidia driver working with the Fn keys, as I would like to have working 3D acceleration. P.S. Does the nouveau driver have 3D acceleration like the nvidia drivers? If I need to provide some log data, please tell me what to post, as I'm a bit new to Ubuntu. P.P.S. I had the same problems with other Linux distros (Mint, Fedora and others). P.P.P.S. The other Fn buttons work with both drivers (Mute, Vol Up/Down, WiFi on/off, Bluetooth, Sleep, Start/Pause, Stop, Next/Prev song). Some new thoughts: could CONFIG_BACKLIGHT_GENERIC=m be an issue? I found this with grep BACKLIGHT /boot/config-3.2.0-22-generic-pae. The full grep output can be viewed here: http://pastebin.com/sMRd2Z4k

    Read the article

  • TU Berlin honored with Oracle Spatial Excellence Award

    - by britta wolf
    At TU Berlin, the Institut für Geodäsie und Geoinformationstechnik headed by Prof. Thomas Kolbe has been working for years on the management and analysis of spatial data in Oracle Spatial. At this year's Oracle Spatial User Conference in Washington, he and his team were recognized for their outstanding research results in the field of 3-dimensional city and terrain models. The award particularly acknowledged the development of the 3D City Database (3DCityDB), which enables the creation of virtual 3D city models on top of Oracle, as well as the team's commitment to the dissemination and further development of the CityGML standard. The 3DCityDB, with its database schema and accompanying tools, is available as open source and is already used internationally in numerous projects. Interested? An introductory webcast on Oracle Spatial, titled "Wie kommen die Daten auf die Karte? Anwendungsbeispiele zur Integration von Geodaten" ("How do the data get onto the map? Examples of geodata integration"), takes place on 11 September at 11:00. Further details are available here. (Thanks to our colleague Hans Viehmann (from the far north) for this contribution!)

    Read the article

  • high performance with xen, vmware or virtualbox

    - by Marchosius
    I was wondering which is the best method if I want to play Windows-based games. I do not want to go the dual-boot route, as restarting, logging in and booting another OS costs me time whenever I want to work or pass the time, and some of my apps rely on Windows and on my graphics card to run - for example Daz3D, Photoshop, Flash, etc. Now, I have read about HVM (hardware virtual machines), and I know about the 3D virtualization in VMware and VirtualBox. However, those two virtualize 3D without using the full power of the GPU, so that option won't perform well for recent games like D3. I was wondering if anyone has experience with HVM (like Xen, if I am not mistaken) and has tried something similar to access the full power of the GPU, successfully running newer games and other products that rely on the GPU. This will be my first time setting up an HVM - I have no experience with it, so I don't know what to expect. Any help would mean a lot, as I do not want to go back to Windows or, as mentioned, dual boot.

    Read the article

  • Writing the correct value in the depth buffer when using ray-casting

    - by hidayat
    I am doing ray-casting in a 3d texture until I hit a correct value. I am doing the ray-casting in a cube, and the cube corners are already in world coordinates, so I don't have to multiply the vertices by the modelview matrix to get the correct position. Vertex shader:

        world_coordinate_ = gl_Vertex;

    Fragment shader:

        vec3 direction = (world_coordinate_.xyz - cameraPosition_);
        direction = normalize(direction);
        for (float k = 0.0; k < steps; k += 1.0) {
            ....
            pos += direction * delta_step;
            float thisLum = texture3D(texture3_, pos).r;
            if (thisLum > surface_)
                ...
        }

    Everything works as expected. What I now want is to write the correct value to the depth buffer. The value that is now written to the depth buffer is the cube coordinate, but I want the value of pos in the 3d texture to be written. So let's say the cube is placed 10 away from the origin in -z and its size is 10*10*10. My solution, which does not work correctly, is this:

        pos *= 10;
        pos.z += 10;
        pos.z *= -1;
        vec4 depth_vec = gl_ProjectionMatrix * vec4(pos.xyz, 1.0);
        float depth = ((depth_vec.z / depth_vec.w) + 1.0) * 0.5;
        gl_FragDepth = depth;

    Read the article

  • Hex Dump using LINQ (in 7 lines of code)

    - by Fabrice Marguerie
    Eric White has posted an interesting LINQ query on his blog that shows how to create a hex dump in something like 7 lines of code. Of course, this is not production-grade code, but it's another good example that demonstrates the expressiveness of LINQ. Here is the code:

        using System;
        using System.IO;
        using System.Linq;

        byte[] ba = File.ReadAllBytes("test.xml");
        int bytesPerLine = 16;
        string hexDump = ba.Select((c, i) => new { Char = c, Chunk = i / bytesPerLine })
            .GroupBy(c => c.Chunk)
            .Select(g => g.Select(c => String.Format("{0:X2} ", c.Char))
                .Aggregate((s, i) => s + i))
            .Select((s, i) => String.Format("{0:d6}: {1}", i * bytesPerLine, s))
            .Aggregate("", (s, i) => s + i + Environment.NewLine);
        Console.WriteLine(hexDump);

    Here is a sample output:

        000000: FF FE 3C 00 3F 00 78 00 6D 00 6C 00 20 00 76 00
        000016: 65 00 72 00 73 00 69 00 6F 00 6E 00 3D 00 22 00
        000032: 31 00 2E 00 30 00 22 00 20 00 65 00 6E 00 63 00
        000048: 6F 00 64 00 69 00 6E 00 67 00 3D 00 22 00 75 00
        000064: 3E 00

    Eric White reports that he typically finds declarative code to be only 20% as long as the equivalent imperative code. Cross-posted from http://linqinaction.net

    Read the article

  • Project Corndog: Viva el caliente perro!

    - by Matt Christian
    During one of my last semesters in college we were required to take a class called Computer Graphics, which tried (quite unsuccessfully) to teach us a combination of mathematics, OpenGL, and 3D rendering techniques. The class itself was horrible, but one little gem of an idea came out of it. See, the final project in the class was to team up and create some kind of demo or game using techniques we learned in class. My friend Paul and I teamed up and developed a top-down shooter that, given the stringent timeline, was much less a game and much more a collection of 3D objects floating around a screen. The idea itself, however, I found clever and unique, and I decided it was time to spend some time developing a proper version of it. Project Corndog, as it is tentatively named, pits you as a freshly fried corndog who broke free from the shackles of fair-food slavery in a quest to escape the state fair you were born in. Obviously it's quite a serious game, with undertones of racial prejudice, immoral practices, and cheap food sold at high prices. The game itself is a top-down shooter in the style of 1942 (NES). As a delicious corndog you will have to fight through numerous enemies, including hungry babies, carnies, and the corndog serial-killer himself, the corndog eating champion! Other, more engaging and frighteningly realistic enemies await as the only thing between you and freedom. Project Corndog is being developed in Visual Studio 2008 with XNA Game Studio 3.1. It is currently hosted on Google Code and will be made available as an open source engine in the coming months.

    Read the article

  • Custom Gesture in cocos2d

    - by Lewis
    I've found a little tutorial that would be useful for my game: http://blog.mellenthin.de/archives/2012/02/13/an-one-finger-rotation-gesture-recognizer/ But I can't work out how to convert that gesture to work with cocos2d. I have found examples of pre-made gestures in cocos2d, but no custom ones. Is it possible? EDIT - STILL HAVING PROBLEMS WITH THIS: I've added the code from Sentinel below (from SO); the Gesture and RotateGesture classes have both been added to my solution and are compiling. In the rotation class, though, I now only see selectors - how do I set those up? The custom gesture found in the project above looks like this:

    Header file for the custom gesture:

        #import <Foundation/Foundation.h>
        #import <UIKit/UIGestureRecognizerSubclass.h>

        @protocol OneFingerRotationGestureRecognizerDelegate <NSObject>
        @optional
        - (void) rotation: (CGFloat) angle;
        - (void) finalAngle: (CGFloat) angle;
        @end

        @interface OneFingerRotationGestureRecognizer : UIGestureRecognizer {
            CGPoint midPoint;
            CGFloat innerRadius;
            CGFloat outerRadius;
            CGFloat cumulatedAngle;
            id <OneFingerRotationGestureRecognizerDelegate> target;
        }

        - (id) initWithMidPoint: (CGPoint) midPoint
                    innerRadius: (CGFloat) innerRadius
                    outerRadius: (CGFloat) outerRadius
                         target: (id) target;

        - (void)reset;
        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event;
        - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event;
        - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event;
        - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event;
        @end

    .m file for the custom gesture:

        #include <math.h>
        #import "OneFingerRotationGestureRecognizer.h"

        @implementation OneFingerRotationGestureRecognizer

        // private helper functions
        CGFloat distanceBetweenPoints(CGPoint point1, CGPoint point2);
        CGFloat angleBetweenLinesInDegrees(CGPoint beginLineA, CGPoint endLineA,
                                           CGPoint beginLineB, CGPoint endLineB);

        - (id) initWithMidPoint: (CGPoint) _midPoint
                    innerRadius: (CGFloat) _innerRadius
                    outerRadius: (CGFloat) _outerRadius
                         target: (id <OneFingerRotationGestureRecognizerDelegate>) _target
        {
            if ((self = [super initWithTarget: _target action: nil])) {
                midPoint = _midPoint;
                innerRadius = _innerRadius;
                outerRadius = _outerRadius;
                target = _target;
            }
            return self;
        }

        /** Calculates the distance between point1 and point2. */
        CGFloat distanceBetweenPoints(CGPoint point1, CGPoint point2)
        {
            CGFloat dx = point1.x - point2.x;
            CGFloat dy = point1.y - point2.y;
            return sqrt(dx*dx + dy*dy);
        }

        CGFloat angleBetweenLinesInDegrees(CGPoint beginLineA, CGPoint endLineA,
                                           CGPoint beginLineB, CGPoint endLineB)
        {
            CGFloat a = endLineA.x - beginLineA.x;
            CGFloat b = endLineA.y - beginLineA.y;
            CGFloat c = endLineB.x - beginLineB.x;
            CGFloat d = endLineB.y - beginLineB.y;
            CGFloat atanA = atan2(a, b);
            CGFloat atanB = atan2(c, d);
            // convert radians to degrees
            return (atanA - atanB) * 180 / M_PI;
        }

        #pragma mark - UIGestureRecognizer implementation

        - (void)reset
        {
            [super reset];
            cumulatedAngle = 0;
        }

        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
        {
            [super touchesBegan:touches withEvent:event];
            if ([touches count] != 1) {
                self.state = UIGestureRecognizerStateFailed;
                return;
            }
        }

        - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
        {
            [super touchesMoved:touches withEvent:event];
            if (self.state == UIGestureRecognizerStateFailed) return;
            CGPoint nowPoint = [[touches anyObject] locationInView: self.view];
            CGPoint prevPoint = [[touches anyObject] previousLocationInView: self.view];
            // make sure the new point is within the area
            CGFloat distance = distanceBetweenPoints(midPoint, nowPoint);
            if (innerRadius <= distance && distance <= outerRadius) {
                // calculate rotation angle between two points
                CGFloat angle = angleBetweenLinesInDegrees(midPoint, prevPoint, midPoint, nowPoint);
                // fix value, if the 12 o'clock position is between prevPoint and nowPoint
                if (angle > 180) {
                    angle -= 360;
                } else if (angle < -180) {
                    angle += 360;
                }
                // sum up single steps
                cumulatedAngle += angle;
                // call delegate
                if ([target respondsToSelector: @selector(rotation:)]) {
                    [target rotation:angle];
                }
            } else {
                // finger moved outside the area
                self.state = UIGestureRecognizerStateFailed;
            }
        }

        - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
        {
            [super touchesEnded:touches withEvent:event];
            if (self.state == UIGestureRecognizerStatePossible) {
                self.state = UIGestureRecognizerStateRecognized;
                if ([target respondsToSelector: @selector(finalAngle:)]) {
                    [target finalAngle:cumulatedAngle];
                }
            } else {
                self.state = UIGestureRecognizerStateFailed;
            }
            cumulatedAngle = 0;
        }

        - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event
        {
            [super touchesCancelled:touches withEvent:event];
            self.state = UIGestureRecognizerStateFailed;
            cumulatedAngle = 0;
        }

        @end

    It is then initialised like this:

        // calculate center and radius of the control
        CGPoint midPoint = CGPointMake(image.frame.origin.x + image.frame.size.width / 2,
                                       image.frame.origin.y + image.frame.size.height / 2);
        CGFloat outRadius = image.frame.size.width / 2;
        // outRadius / 3 is arbitrary, just choose something >> 0 to avoid strange
        // effects when touching the control near its center
        gestureRecognizer = [[OneFingerRotationGestureRecognizer alloc]
            initWithMidPoint: midPoint
                 innerRadius: outRadius / 3
                 outerRadius: outRadius
                      target: self];
        [self.view addGestureRecognizer: gestureRecognizer];

    The selector below lives in the same file as the initialisation of the gestureRecognizer:

        - (void) rotation: (CGFloat) angle
        {
            // calculate rotation angle
            imageAngle += angle;
            if (imageAngle > 360)
                imageAngle -= 360;
            else if (imageAngle < -360)
                imageAngle += 360;
            // rotate image and update text field
            image.transform = CGAffineTransformMakeRotation(imageAngle * M_PI / 180);
            [self updateTextDisplay];
        }

    I can't seem to get this working in the RotateGesture class. Can anyone help me, please? I've been stuck on this for days now. SECOND EDIT: Here is the user's code from SO that was suggested to me. The project is on GitHub: SFGestureRecognizers. It uses the built-in iOS UIGestureRecognizer and doesn't need to be integrated into the cocos2d sources. Using it, you can make any gestures, just as you would when working with UIGestureRecognizer directly. For example, I made a base class Gesture and subclassed it for each new gesture:

        //Gesture.h
        @interface Gesture : NSObject <UIGestureRecognizerDelegate> {
            UIGestureRecognizer *gestureRecognizer;
            id delegate;
            SEL preSolveSelector;
            SEL possibleSelector;
            SEL beganSelector;
            SEL changedSelector;
            SEL endedSelector;
            SEL cancelledSelector;
            SEL failedSelector;
            BOOL preSolveAvailable;
            CCNode *owner;
        }

        - (id)init;
        - (void)addGestureRecognizerToNode:(CCNode*)node;
        - (void)removeGestureRecognizerFromNode:(CCNode*)node;
        - (void)recognizer:(UIGestureRecognizer*)recognizer;
        @end

        //Gesture.m
        #import "Gesture.h"

        @implementation Gesture

        - (id)init
        {
            if (!(self = [super init]))
                return self;
            preSolveAvailable = YES;
            return self;
        }

        - (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer
            shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer
        {
            return YES;
        }

        - (BOOL)gestureRecognizer:(UIGestureRecognizer *)recognizer shouldReceiveTouch:(UITouch *)touch
        {
            //! For the swipe gesture recognizer we want it to be executed only if it
            //! occurs on the main layer, not on any of the subnodes (the main layer is
            //! higher in the hierarchy than its children, so it receives the touch by default)
            if ([recognizer class] == [UISwipeGestureRecognizer class]) {
                CGPoint pt = [touch locationInView:touch.view];
                pt = [[CCDirector sharedDirector] convertToGL:pt];
                for (CCNode *child in owner.children) {
                    if ([child isNodeInTreeTouched:pt]) {
                        return NO;
                    }
                }
            }
            return YES;
        }

        - (void)addGestureRecognizerToNode:(CCNode*)node
        {
            [node addGestureRecognizer:gestureRecognizer];
            owner = node;
        }

        - (void)removeGestureRecognizerFromNode:(CCNode*)node
        {
            [node removeGestureRecognizer:gestureRecognizer];
        }

        #pragma mark - Private methods

        - (void)recognizer:(UIGestureRecognizer*)recognizer
        {
            CCNode *node = recognizer.node;
            if (preSolveSelector && preSolveAvailable) {
                preSolveAvailable = NO;
                [delegate performSelector:preSolveSelector withObject:recognizer withObject:node];
            }
            UIGestureRecognizerState state = [recognizer state];
            if (state == UIGestureRecognizerStatePossible && possibleSelector) {
                [delegate performSelector:possibleSelector withObject:recognizer withObject:node];
            } else if (state == UIGestureRecognizerStateBegan && beganSelector) {
                [delegate performSelector:beganSelector withObject:recognizer withObject:node];
            } else if (state == UIGestureRecognizerStateChanged && changedSelector) {
                [delegate performSelector:changedSelector withObject:recognizer withObject:node];
            } else if (state == UIGestureRecognizerStateEnded && endedSelector) {
                preSolveAvailable = YES;
                [delegate performSelector:endedSelector withObject:recognizer withObject:node];
            } else if (state == UIGestureRecognizerStateCancelled && cancelledSelector) {
                preSolveAvailable = YES;
                [delegate performSelector:cancelledSelector withObject:recognizer withObject:node];
            } else if (state == UIGestureRecognizerStateFailed && failedSelector) {
                preSolveAvailable = YES;
                [delegate performSelector:failedSelector withObject:recognizer withObject:node];
            }
        }

        @end

    Subclass example:

        //RotateGesture.h
        #import "Gesture.h"

        @interface RotateGesture : Gesture

        - (id)initWithTarget:(id)target
            preSolveSelector:(SEL)preSolve
            possibleSelector:(SEL)possible
               beganSelector:(SEL)began
             changedSelector:(SEL)changed
               endedSelector:(SEL)ended
           cancelledSelector:(SEL)cancelled
              failedSelector:(SEL)failed;
        @end

        //RotateGesture.m
        #import "RotateGesture.h"

        @implementation RotateGesture

        - (id)initWithTarget:(id)target
            preSolveSelector:(SEL)preSolve
            possibleSelector:(SEL)possible
               beganSelector:(SEL)began
             changedSelector:(SEL)changed
               endedSelector:(SEL)ended
           cancelledSelector:(SEL)cancelled
              failedSelector:(SEL)failed
        {
            if (!(self = [super init]))
                return self;
            preSolveSelector = preSolve;
            delegate = target;
            possibleSelector = possible;
            beganSelector = began;
            changedSelector = changed;
            endedSelector = ended;
            cancelledSelector = cancelled;
            failedSelector = failed;
            gestureRecognizer = [[UIRotationGestureRecognizer alloc]
                initWithTarget:self action:@selector(recognizer:)];
            gestureRecognizer.delegate = self;
            return self;
        }

        @end

    Use example:

        - (void)addRotateGesture
        {
            RotateGesture *rotateRecognizer = [[RotateGesture alloc]
                   initWithTarget:self
                 preSolveSelector:@selector(rotateGesturePreSolveWithRecognizer:node:)
                 possibleSelector:nil
                    beganSelector:@selector(rotateGestureStateBeganWithRecognizer:node:)
                  changedSelector:@selector(rotateGestureStateChangedWithRecognizer:node:)
                    endedSelector:@selector(rotateGestureStateEndedWithRecognizer:node:)
                cancelledSelector:@selector(rotateGestureStateCancelledWithRecognizer:node:)
                   failedSelector:@selector(rotateGestureStateFailedWithRecognizer:node:)];
            [rotateRecognizer addGestureRecognizerToNode:movableAreaSprite];
        }

    I don't understand how to implement the custom gesture code at the start of this post in the RotateGesture class, which is a subclass of the Gesture class written by the SO user. Any ideas, please? When I get 6 more rep I'll add a bounty to this.

    Read the article

  • BlitzMax - generating 2D neon glowing line effect to png file

    - by zanlok
    Originally asked on Stack Overflow, but it became tumbleweed. I'm looking to create a glowing line effect in BlitzMax, something like a Star Wars lightsaber or a laser beam. It doesn't have to be realtime; rendering to TImage objects and then maybe saving to PNG for later use in animation is fine. I'm happy to use 3D features, but it will be for use in a 2D game. Since it will be on a black/space background, my strategy is to draw a series of blurred lines with color and high transparency, then eventually central lines that are less blurred and more white. What I actually want to draw is bezier curved lines. Drawing curved lines is easy enough, but I can't use the technique above to create a good laser/neon effect because it comes out looking very segmented. So I think it may be better to use a blur effect/shader on what does render well: a 1-pixel bezier curve. The problem I've been having is applying a shader to just the area of the screen where the lines are drawn. If there's a way to draw lines to a texture, then blur that texture and save the PNG, that would be great to hear about. There's got to be a way to do this, but I just haven't gotten the right elements working together yet. Any help from someone familiar with this stuff would be greatly appreciated. Using just 2D calls could be advantageous - simpler to understand and re-use. It would also be very nice to know how to save a PNG that preserves the transparency/alpha channel. P.S. I've reviewed this post (and many, many others on the Blitz site), have samples working, and have even developed my own 5x5 frag shaders. But that approach is 3D and scene-wide, and it doesn't seem to convert well to 2D or to just a certain area. I'd rather understand how to apply shading to a 2D scene, especially using the specifics of BlitzMax.

    Read the article

  • Want an iPad? How-To Geek is Giving One Away!

    - by The Geek
    That's right. All you have to do to enter is become a fan of our Facebook page, and we'll pick a random fan to win the prize. Once we've got 10,000 fans, we'll change the prize from an iPod Touch to an iPad 16GB (typo in the image above). Everybody who is already a fan is automatically entered in the contest! (There's no country restriction.) So to make sure we upgrade the prize, share the How-To Geek Facebook fan page with your friends who might be interested (don't mindlessly spam everybody, of course). We've already got the iPad sitting here. Win an iPad on the How-To Geek Facebook Fan Page.

    Read the article

  • NRF Week - Disney Store Tour

    - by sarah.taylor(at)oracle.com
    Disney has created a real buzz at this year's NRF event. Yesterday morning we began the Oracle Retail Exchange program with a visit to the flagship Disney store in Times Square, and Oracle made a key announcement with Disney on Oracle Retail's Point of Sale implementation in 330 stores worldwide. Today Disney's Steve Finney gave a super session on The Magic of Disney at the NRF Big Show, and we saw Disney make an exclusive news announcement about their plans for global store openings at the Oracle trade show stand - with a little help from Mickey and Minnie Mouse. Disney Stores have been entirely reinvented since the company took ownership in 2008, after previously franchising the retail arm of the business. They have subsequently been a strong Oracle partner, and technology has played a key role in their reimagination of the store environment: the new Imagination stores have 20% higher footfall, and margins are up 25%. The Disney brand is synonymous with magical and memorable experiences for children of all ages, and the company is achieving a unique retail experience that delights children and shareholders alike! Technology is a key pillar in delivering both a strong operating model and a unique customer experience - the best thirty minutes in a child's day is their aim. Steve Finney said this morning that their technology has to be as reliable as a theme park ride. Store experiences are much more enjoyable when waiting times are short and children can interact with their favourite characters through magic mirrors, mobile point of sale, touch screens and custom animations that are digitally transmitted to stores globally. The Oracle Retail Point of Sale with iPad touch screens reduces checkout times, stores customer data, ensures that promotions are applied accurately, and reduces losses. This means higher guest conversion and increased availability and convenience for customers who want to check stock at other locations. Disney is a pioneer. At NRF's 100th show, we had the privilege of learning from a retailer using technology as a creative force to drive their business forward.

    Read the article

  • C# XNA: Efficient mesh building algorithm for voxel-based terrain ("top" outside layer only, non-destructible)

    - by Tim Hatch
    To put this bluntly: for non-destructible/non-constructible voxel-style terrain, are generated meshes handled much better than instancing? Is there another method to achieve millions of visible quad faces per scene with ease? If a generated mesh per chunk is the way to go, what kind of algorithm might I want, given that only the outer layer ever needs rendering? I'm using 3D Perlin noise for terrain generation (for overhangs/caves/etc). The layout is fantastic, but even at around 20k visible faces it's quite slow with instancing (whether as one big draw call or as multiple smaller chunks). I've simplified it to the point of removing non-visible cubes and only rendering the top faces of my cube-like terrain, but with 20k quad instances it's still pretty sluggish (30 fps on my machine). My goal is a world made of quite small cubes. Where several games (e.g. Minecraft) make the player 1x1 cube in width/length and 2 high, I'm shooting for 6x6 in width/length and 9 high. Along with a lot of gameplay advantages, that means I could quite easily have a single scene with millions of truly visible quads. So I have been trying to look into changing my method from instancing to mesh generation on a chunk-by-chunk basis. Do video cards handle this type of processing better than separate quads/cubes through instancing? What kind of existing algorithms should I be looking into? I've seen references to marching cubes a few times now, but I haven't spent much time investigating it, since I don't know whether it's the better route for my situation. I'm also starting to doubt my need for 3D Perlin noise for terrain generation, since I won't want the kind of depth it seems best at. I just like the idea of overhangs and occasional cave-like structures, but I could find no better "surface only" algorithms to cover that. If anyone has better suggestions there, feel free to throw them at me too. Thanks, Mythics
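    For illustration, the core of per-chunk mesh generation is a neighbor test: emit a face only where a solid cell borders an empty cell (or the chunk boundary), so interior faces never reach the GPU. A minimal sketch - the names, chunk size and Quad type are invented here, not taken from the post:

        using System.Collections.Generic;

        struct Quad
        {
            public int X, Y, Z, Face;   // cell coordinates plus which of the 6 faces
            public Quad(int x, int y, int z, int face) { X = x; Y = y; Z = z; Face = face; }
        }

        static class ChunkMesher
        {
            const int Size = 32;        // cells per chunk axis (assumption)

            static readonly int[,] Neighbor =
            {
                { 1, 0, 0 }, { -1, 0, 0 },
                { 0, 1, 0 }, { 0, -1, 0 },
                { 0, 0, 1 }, { 0, 0, -1 },
            };

            // One pass over the chunk; 0 means an empty cell.
            public static List<Quad> Build(byte[,,] cells)
            {
                var quads = new List<Quad>();
                for (int x = 0; x < Size; x++)
                for (int y = 0; y < Size; y++)
                for (int z = 0; z < Size; z++)
                {
                    if (cells[x, y, z] == 0) continue;
                    for (int f = 0; f < 6; f++)
                    {
                        int nx = x + Neighbor[f, 0];
                        int ny = y + Neighbor[f, 1];
                        int nz = z + Neighbor[f, 2];
                        bool outside = nx < 0 || ny < 0 || nz < 0 ||
                                       nx >= Size || ny >= Size || nz >= Size;
                        if (outside || cells[nx, ny, nz] == 0)
                            quads.Add(new Quad(x, y, z, f));   // face is visible
                    }
                }
                return quads;
            }
        }

    The payoff over instancing is that the quads are baked once into a static vertex/index buffer per chunk and redrawn with one draw call until the chunk changes; greedy meshing (merging coplanar quads) and marching cubes are refinements layered on top of this same pass.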

    Read the article

  • Programming as a minor

    - by Tomas Cokis
    Hello everyone! I've never asked a question here at Programmers, and for reasons which will become obvious later I've never answered one here, but I do poke around in short bursts. Anyway, I'm 15 right now, and I've been programming in C++ for 4 years, just working on my own projects, which aim so high as to never be finished. I've been working on a single project for the last year, and every 3 months I add a new system to it. It might be a value-tabling, directory-enabled log system, or a render system, or a class to load up XML files; whatever it is, I don't mind too much that the overall project (a 3d engine) isn't ever going to get finished - I just get some satisfaction from getting what I have done building and running. I don't know what I want to do when I grow up, although I suspect I'll go into some form of engineering, but I was interested in knowing, if I do choose a career as a developer, what kind of material I could look at to push myself up and gain experience that might help my career later. I'm not talking about books in particular; I'm more interested in subject areas that will get me access to good job opportunities, or that will give me a hand up if I do computer science and software-related courses at uni. One of the things I was thinking of doing was designing some of the logic-gate components of a small computer - which I started briefly over the holidays, working out integer addition, subtraction and multiplication. That kind of stuff interests me, but is it really useful - or more useful than just more programming? But anyway: any advice? Should I continue on my perpetual 3d engine? Are there any other projects or particular accomplishments that would help my education? Perhaps I should mention that I live in Perth, Australia, so local software companies are likely to be scarcer than usual.

    Read the article

  • Run a startup script with lightdm

    - by cheshirekow
    I have a tablet PC whose graphics driver doesn't support xrandr, so in order to rotate the screen I run a script which changes the xorg.conf file and then restarts lightdm. I also have a script which uses xsetwacom and xinput to change the rotation of the input devices so that they match the new orientation. I've learned how to get the script to run when I log in, but I'd like it to run before I log in, so that I don't have to enable auto-login in lightdm. I do need it to run, though, or the input (touch and pen) is rotated with respect to the screen: when I touch the screen, the input lands in a completely different area, which makes the onscreen keyboard really difficult to use. I've looked at other questions on this site. I've tried putting my script in /etc/Xsession.d, but that didn't seem to work. I also tried putting it in /etc/rc.local, but I think that is the wrong place; nothing seems to happen. I've also tried googling for lightdm script hooks, and various other search terms. Any suggestions? Edit 1: After doing some research, it seems that I may not want to run a script with lightdm itself, but rather with the lightdm greeter (in this case, I think, the unity-greeter). Are there any script hooks for the unity-greeter?
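    For reference, lightdm itself has a hook that runs a script as root each time the X server starts, before the greeter appears. A minimal sketch, assuming the 12.04-era [SeatDefaults] section name and a script path of your own choosing:

        # /etc/lightdm/lightdm.conf
        [SeatDefaults]
        # runs as root after the X server starts, before the greeter is shown
        display-setup-script=/usr/local/bin/rotate-input.sh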

    Read the article

  • Octree implementation for frustum culling

    - by Manvis
    I'm learning modern (3.1+) OpenGL by coding a 3D turn-based strategy game in C++. The maps are composed of 100x90 3D hexagon tiles that range from 50 to 600 tris (20 different types), plus any player units on those tiles. My current rendering technique involves sorting meshes by the shaders they use (minimizing state changes) and then calling glDrawElementsInstanced() for drawing. I still get a solid 16.6 ms/frame on my GTX 560 Ti machine, but the game struggles (45.45 ms/frame) on an old 8600GT card. I'm certain that using an octree and frustum culling will help me here, but I have a few questions before I start implementing it: Is it OK for an octree node to have multiple meshes in it (e.g. can a soldier and the hex tile he's standing on end up in the same octree node)? How is one supposed to treat changes in object position (e.g. several units are moving 3 hexes down)? I can't seem to find a good explanation of how to do it. As I've noticed, sorting meshes by shaders is a really good way to save GPU. If I put a node's contents into, let's say, a std::list and sort it before rendering, do you think I would gain any performance, or would it just create overhead on the CPU's end? I know this sounds like early optimization, and implementing and testing would be the best way to find out, but perhaps someone knows from experience?
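    For illustration, both octree questions usually come down to two small pieces: a node stores an arbitrary list of objects (so a soldier and his hex tile can legitimately share a node), and a moved object is simply pulled out of its node and re-inserted from the root. A sketch in C# for brevity - every name here is invented, and XNA's BoundingBox/ContainmentType merely stand in for any AABB type:

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;

        class SceneObject
        {
            public BoundingBox Bounds;
            public OctreeNode Node;       // back-pointer makes removal O(1)
        }

        class OctreeNode
        {
            public BoundingBox Bounds;
            public OctreeNode[] Children; // null while the node is a leaf
            public List<SceneObject> Objects = new List<SceneObject>();

            public void Insert(SceneObject obj)
            {
                if (Children != null)
                {
                    foreach (var child in Children)
                    {
                        if (child.Bounds.Contains(obj.Bounds) == ContainmentType.Contains)
                        {
                            child.Insert(obj);   // fits entirely inside one octant
                            return;
                        }
                    }
                }
                Objects.Add(obj);                // straddles octants, or node is a leaf
                obj.Node = this;
                // (subdivide into 8 children once Objects.Count exceeds a threshold)
            }

            // e.g. several units moved 3 hexes down: remove, update bounds, re-insert
            public static void Relocate(OctreeNode root, SceneObject obj, BoundingBox newBounds)
            {
                obj.Node.Objects.Remove(obj);
                obj.Bounds = newBounds;
                root.Insert(obj);
            }
        }

    As for the sorting question: culling walks the tree and rejects whole nodes against the frustum, and re-sorting only the surviving objects by shader each frame is normally cheap next to the state changes it saves - but that is exactly the kind of thing worth profiling rather than assuming.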

    Read the article

  • Highly SEO optimised forum posts

    - by Tom Gullen
    Given the following forum post:

        Basics of how internals of Construct work
        I've used GameMaker in the past. And I know some C++ and have used a few 3d engines with it. I have also looked at Unity, though I didn't get too much into it. So I know my way around programming etc... My question is, how does construct work internally? I know it allows python scripting, which itself is "technically" interpreted, though python is pretty fast as far as being interpreted goes. But what about the rest? Is the executable that gets cre...

    The forum software will take the first 150 characters of the first post as the page meta description, and the title will be the thread title. All OK. So in Google it will appear as:

        Basics of how internals of Construct work
        I've used GameMaker in the past. And I know some C++ and have used a few 3d engines with it. I have also looked at Unity, though I didn't get too much...
        http://www.domain.com/forum/basics-of-how-internals-of-construct-work.html

    Now the problem (not so much with this thread, but with other ones) is that the first 150 characters don't always make the best meta description. Is it worth my time to cherry-pick threads and manually set their description/title tags so they read like:

        Internal workings of Construct 2
        Events aren't converted to any other language. The runtime is a standalone compiled EXE application, which is optimised and actually very fast. Your events...
        http://www.domain.com/forum/basics-of-how-internals-of-construct-work.html

    The H1 on the page is still the original title, but we have overridden the title and description to look more friendly in search results. Is this advantageous, setting aside the obvious time cost?

    Read the article

  • Need a PCIe desktop graphics card for dual-monitor

    - by Graham
    I have a mid-2008 workstation with two HD monitors supporting HDMI and DVI inputs. Since Ubuntu 11.10, I have experienced no end of trouble with my Nvidia Quadro NVS 290 in TwinView dual-monitor output. Others have similar desktop TwinView woes. I want a new graphics card. Previously I asked for a graphics card recommendation and the response was the Nvidia GeForce GTS 450... but really I'm looking for someone who has actually got a working dual-monitor desktop to tell me what card they use, so I can get something that is known to work. So please: people who have no issues with their 64-bit Ubuntu 12.04 Unity 3D desktop spread across two HD-resolution external monitors (either DVI or HDMI connector), and who also run Google Chrome (which throws a spanner in the works due to its own GPU compositing) - please let me know what graphics card you have so I can buy one. Gathering options: these seem to be the Nvidia cards featuring dual DVI, but they all seem to be gaming cards - what has dual DVI and good support, but is not a massive gaming card? Nvidia GTS 450 (previously recommended) - 2x DVI; Nvidia GTX 550 Ti (used by System76) - 2x DVI; Nvidia GT 430 (used by System76) - 1x DVI, 1x HDMI; Nvidia GT 640 (found on the Nvidia site) - 1x DVI, 1x HDMI (also GT 620, GT 630). Has anyone had a good desktop dual-monitor Unity 3D experience with ATI cards?

    Read the article

  • .NET Reflector Pro T-shirt contest - and the winner is...

    - by Laila
    Three weeks ago, I kicked off a T-shirt design contest. We've been eagerly poring over the results, and today it's finally announcement time! Although many of you raced to design great t-shirts for us, we ended up with a clear winner, who came up with a nice design and an original slogan that accurately represents what .NET Reflector Pro lets you do: decompile and debug C# and VB.NET code. So, the winner is... Mandeep Sangha! Mandeep sent us the following awesome design via the Twitter account mss_10: We liked the combination of detective and superhero elements in the magnifying glass and the slogan. Batman (possibly the most eminent of detective-superheroes?) would be proud to wear this under his suit. Mandeep will become the happy owner of a free copy of .NET Reflector Pro and an exciting box of Red Gate goodies... as well as a copy of his very own t-shirt once it's been brought to life by our printing shop! The t-shirts will bear the name of their designer and will be made available at .NET developer events around the world, such as conferences, tradeshows and user group events. Congratulations, Mandeep! We'll be in touch to sort out the details of your prizes. But that wasn't the only great design we received. We chose three runners-up as well: Sam Beauvois: http://twitpic.com/1vvsi9 Sherwin Rice: http://www.greenwaytechno.com/img/tee-1.png Mathieu Grétry: http://blog.section9.be/public/tshirt_reflector_01.png Thanks to you all for taking part in the contest. You'll all receive a free license for .NET Reflector Pro! We'll get in touch with each of you individually through Twitter, so that we can get you your prizes. Keep an eye out for this T-shirt - it'll soon be making its way to an event near you!

    Read the article

  • Performance issues with visibility detection and object transparency

    - by maul
    I'm working on a 3d game with a view similar to classic isometric games (Diablo, etc.). One of the things I'm trying to implement is the effect of turning walls transparent when the player walks behind them. By itself this is not a huge issue, but I'm having trouble determining exactly which walls should be transparent. I can't use a circle or square mask: there are a lot of cases where a wall piece at the same (relative) position has different visibility depending on the surrounding area. With the help of a friend I came up with this algorithm:

    1. Create a grid around the player that contains a lot of "visibility points" (my game is semi tile-based, so I create one point for every tile on the grid); the size of the square's side is close to the radius at which I make objects transparent. I found 6x6 to be a good value, so that's 36 visibility points in total.
    2. For every visibility point on the grid, check whether that point is in the player's line of sight.
    3. For every visibility point that is in the LOS, cast a ray from the camera to that point and mark all objects the ray hits as transparent.

    This algorithm works - not perfectly, but it only requires some tuning. However, it is very slow. As you can see, it requires 36 ray casts minimum, but most of the time 60-70 depending on the position. That's simply too much for the CPU. Is there a better way to do this? I'm using Unity 3D, but I'm not looking for an engine-specific solution.
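    For illustration, the algorithm above maps to Unity roughly as in this sketch (the component name, wallMask layer, grid spacing and the fade itself are invented stand-ins):

        using UnityEngine;

        public class WallFader : MonoBehaviour
        {
            public Transform player;
            public LayerMask wallMask;   // layer holding the wall colliders
            const int GridSize = 6;      // the 6x6 grid of visibility points
            const float TileStep = 1f;   // spacing between points (assumption)

            void Update()
            {
                Vector3 camPos = Camera.main.transform.position;
                for (int i = 0; i < GridSize; i++)
                for (int j = 0; j < GridSize; j++)
                {
                    Vector3 point = player.position + new Vector3(
                        (i - GridSize / 2) * TileStep, 0f, (j - GridSize / 2) * TileStep);

                    // step 2: keep only points in the player's line of sight
                    if (Physics.Linecast(player.position, point, wallMask))
                        continue;

                    // step 3: every wall between the camera and a visible point fades
                    Vector3 dir = (point - camPos).normalized;
                    float dist = Vector3.Distance(camPos, point);
                    foreach (RaycastHit hit in Physics.RaycastAll(camPos, dir, dist, wallMask))
                    {
                        Renderer r = hit.collider.GetComponent<Renderer>();
                        if (r != null)
                        {
                            Color c = r.material.color;
                            r.material.color = new Color(c.r, c.g, c.b, 0.3f);
                        }
                    }
                }
            }
        }

    Since every ray starts at the camera and the grid only changes when the player crosses a tile boundary, caching the result and recomputing on tile changes (rather than every frame) is the obvious first saving before reducing the ray count itself.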

    Read the article

  • XNA 4: GetData from Texture2D and Set it into Texture3D with specific order

    - by cubrman
    I am trying to convert my color grading 2d lookup texture into a 3d LUT. When I simply use:

        ColorAtlas.GetData(data);
        ColorAtlas3D.SetData(data);

    I get this: I tried building my 2d atlas horizontally, but that did not help - the data was just messed up in a different way. So my question is: how can I influence the order of the data I get from the 2d atlas, and how can I properly pass it into my 3d atlas? Update: I know that I can GetData from a specific rectangular area and put the results into several arrays, but the result is still the same. This is what I tried:

        Color[] data2D = new Color[0];
        for (int i = 0; i < 32; i++)
        {
            Color[] data = new Color[32 * 32];
            GraphicsDevice.SetRenderTarget(null);
            ColorAtlas.GetData(0, new Rectangle(0, i * 32, 32, 32), data, 0, data.Length);

            int oldLength = data2D.Length;
            Array.Resize<Color>(ref data2D, oldLength + data.Length);
            Array.Copy(data, 0, data2D, oldLength, data.Length);
        }
        ColorAtlas3D.SetData(data2D);
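    For what it's worth, the layout a 32x32x32 3D texture conventionally expects is x varying fastest, then y, then one full 32x32 slice per z step - index x + y*32 + z*32*32. A sketch of filling the volume by explicit indexing instead of concatenation (this assumes a vertical atlas stacking 32 slices of 32x32 top to bottom; treat the exact convention as an assumption to verify):

        // Hypothetical re-ordering sketch, not the poster's code.
        Color[] atlas = new Color[32 * 1024];   // whole 32x1024 atlas, row-major
        ColorAtlas.GetData(atlas);
        Color[] volume = new Color[32 * 32 * 32];
        for (int z = 0; z < 32; z++)
            for (int y = 0; y < 32; y++)
                for (int x = 0; x < 32; x++)
                    volume[x + y * 32 + z * 32 * 32] = atlas[x + (z * 32 + y) * 32];
        ColorAtlas3D.SetData(volume);

    If the result is still scrambled, swapping which atlas axis is treated as y and which as z is the first permutation worth trying.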

    Read the article

  • All chromium extensions throw errors since update to 13.10

    - by hugo der hungrige
    Since updating to 13.10, all Chromium extensions generate errors:

        chrome.extension is not available: 'extension' is not allowed for specified context type content script, extension page, web page, etc.). [VM] binding (56):427
        Uncaught TypeError: Cannot call method 'sendRequest' of undefined include.preload.js:105
        Uncaught TypeError: Cannot read property 'onRequest' of undefined include.postload.js:473
        GET http://edge.quantserve.com/quant.js superuser.com/:2047
        GET http://www.google-analytics.com/__utm.gif?utmwv=5.4.5&utms=2&utmn=590704726…n%3D(organic)%7Cutmcmd%3Dorganic%7Cutmctr%3D(not%2520provided)%3B&utmu=qQ~ ga.js:61
        chrome.extension is not available: 'extension' is not allowed for specified context type content script, extension page, web page, etc.). [VM] binding (56):427
        Uncaught TypeError: Cannot read property 'onRequest' of undefined content.js:233
        chrome.extension is not available: 'extension' is not allowed for specified context type content script, extension page, web page, etc.). [VM] binding (56):427
        Uncaught TypeError: Cannot read property 'onRequest' of undefined injected.js:169
        chrome.extension is not available: 'extension' is not allowed for specified context type content script, extension page, web page, etc.). [VM] binding (56):427
        Uncaught TypeError: Cannot call method 'getURL' of undefined content_js_min.js:5
        GET http://engine.adzerk.net/z/8476/adzerk2_2_17_47 superuser.com/:1719
        Uncaught TypeError: Cannot call method 'sendRequest' of undefined

    How to fix this?

    Read the article

  • Leveraging Code in Ever Bigger Games

    - by ashes999
    Summary: The same way that I continually build complex engines and libraries within a single platform and technology to allow me to build increasingly bigger and better games, how to continue this when development crosses into different platforms? If I switch platforms, how do I leverage past code and experiences? Games are hard to build. Big games are even harder to build. I've decided that to be able to make big games, I need to start building smaller games, and building up an asset base of code, assets (graphics, sounds), tools, and most importantly, game engines, so that I can eventually get there. One game at a time. Let me give an analogy. To build an MMO 3D RPG, I would approach this by building and releasing small games with increasingly more features. This could entail, for example: A simple 2D game A tile-based game A game with RPG elements (items, equipment, monsters, battle) A full-fledged RPG A 3D RPG The problem now is if I have to change platforms or tools, I don't know how to leverage past code-bases (and experience) to start with a mature product. Right now, I'm writing Silverlight (FlatRedBall) games. Let's say I stick with this for ten years, and then suddenly decide to write a PS6 game, which is in a different programming language entirely. Granted, I have ten years of game-development experience (and correspondingly ten years of professional software development experience from my day job) to back me up. But I would still like some way to transplant that 2D RPG engine into the new programming language, or else leverage it somehow. Is this even possible? What are my options?

    Read the article

  • Partial upgrade on 12.04, how to stop nagging after locking to a working NVIDIA & xorg

    - by alsk
    How do I stop the update manager from offering updates and upgrades that could potentially break my working 2D and 3D graphics? I finally got 12.04 working as it should, with the nvidia-173 drivers, by downgrading xorg and locking the versions: a 32-bit system on an Athlon64, with an (Albatron) NVIDIA GeForce FX5700XT, locked (pinned) to xorg 1:7.6-7ubuntu7, xserver-xorg-core 2:11.1-0obuntu10.07 and nvidia-173 173.14.35-0ubuntu0.2. One annoying thing remains: every time updates are checked, I get a warning about a partial upgrade, with the ambiguous options "partial upgrade" and "close". Ambiguous in the sense that, if I click close, I get the option to update a few packages, which has been fine; whereas "partial upgrade" would update my kernel to 3.2, alter xorg, remove nvidia-173, update mesa, and so on. This is not what I call appropriate after locking xorg and the NVIDIA drivers to working versions. One may say it is correct according to package-management logic, but as a user it makes little sense to me. The last Ubuntu that worked without a big mess for me was 10.10, so I will not put 12.10 on my "production" system until I can be sure it will not trash the system again. P.S. Is there a recommended way to keep the NVIDIA GeForce FX working with 3D on Ubuntu... in future?

    Read the article
