Search Results

Search found 1587 results on 64 pages for 'pixel reaper'.


  • ajax request internal server error

    - by joe
    Everything works fine locally, but when I try the same code in production, I get a 500 (Internal Server Error).

    entries controller:

        def set_spam
          @entry = Entry.find(params[:entry_id])
          @entry.spam = params[:what] == "spam" ? true : false
          @entry.save
          respond_to do |format|
            format.js
          end
        end

    application.js:

        $(".entry-actions .spams img").click(function () {
          $.post("/set-spam", {
            entry_id: $(this).attr("entry_id"),
            what: $(this).attr("class")
          });
          return false;
        });

    view:

        <div class="spams">
          <img title="spam" class="spam" src="/images/pixel.gif" entry_id="<%= entry.id %>" />
        </div>

    route:

        post "/set-spam" => "entries#set_spam"

    Read the article

  • C++ Reading and Editing pixels of a bitmap image

    - by BettyD
    I'm trying to create a program which reads in an image (starting with bitmaps because they're the easiest), searches through the image for a particular pattern (i.e. one pixel of (255, 0, 0) followed by one of (0, 0, 255)), changes the pixels in some way (only within the program, without saving to the original file), and then displays it. I'm using Windows, plain GDI for preference, and I want to avoid custom libraries just because I want to really understand what I'm doing in this test program. Can anyone help? In any way? A website? Advice? Example code? Thanks
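    For a first experiment, plain GDI can do the whole read/modify cycle with GetPixel/SetPixel; slow, but easy to follow. A minimal sketch, with the file name and the red-to-green edit as placeholders (GetDIBits is the faster route once this works):

        #include <windows.h>

        int main()
        {
            // Load the bitmap into memory only; the file on disk is never rewritten.
            HBITMAP hBmp = (HBITMAP)LoadImage(NULL, TEXT("test.bmp"), IMAGE_BITMAP,
                                              0, 0, LR_LOADFROMFILE);
            if (!hBmp) return 1;

            HDC hdcMem = CreateCompatibleDC(NULL);
            HGDIOBJ hOld = SelectObject(hdcMem, hBmp);

            BITMAP bm;
            GetObject(hBmp, sizeof(bm), &bm);

            // Scan every pixel; edit the ones matching the search parameter.
            for (int y = 0; y < bm.bmHeight; ++y)
                for (int x = 0; x < bm.bmWidth; ++x)
                    if (GetPixel(hdcMem, x, y) == RGB(255, 0, 0)) // pure red
                        SetPixel(hdcMem, x, y, RGB(0, 255, 0));   // placeholder edit

            // To display: BitBlt from hdcMem into the window's DC in WM_PAINT.

            SelectObject(hdcMem, hOld);
            DeleteDC(hdcMem);
            DeleteObject(hBmp);
            return 0;
        }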

    Read the article

  • CSS: can change height by px, but not by %

    - by cag8f
    I am trying to edit the CSS of my WordPress theme. I have an element whose height I can successfully change from within the element inspector if I specify a certain pixel height, e.g. height: 100px;. But when I try to change the height by specifying a percentage, e.g. height: 50%;, the element does not change height. Any thoughts on what I'm doing wrong, or how to troubleshoot? None of the parent elements appear to have any height properties.
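    Worth checking: a percentage height only resolves against a parent whose own height is defined, which matches the observation about the parent elements above. A minimal sketch of the usual chain (the class names are placeholders):

        html, body { height: 100%; }
        .wrapper   { height: 100%; }  /* every ancestor in the chain needs a height */
        .panel     { height: 50%; }   /* now 50% has something to resolve against */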

    Read the article

  • Possible to make jqGrid stretch to 100%?

    - by Earlz
    Is it possible to make a jqGrid have a width set to 100%? I understand that column widths must be absolute pixel sizes, but I've yet to find anything for setting the width of the actual grid to a relative size. For instance, I want to set the width to 100%. Instead of 100%, it seems to use an odd size of 450px. There is more horizontal room on the page, but with the column widths and such, it makes the container (of only the grid) scroll horizontally. Is there some way around this?
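    One commonly used workaround, since the grid width itself is a pixel value, is jqGrid's autowidth option plus a resize handler that calls setGridWidth. A sketch, assuming the grid sits in a #grid-container div (both ids are placeholders):

        $("#grid").jqGrid({
            // ... url, colModel, etc. ...
            autowidth: true,   // size the grid to its parent when it is created
            shrinkToFit: true  // scale the absolute column widths to fit
        });

        // Re-fit on window resize so the grid keeps tracking 100% of its container.
        $(window).resize(function () {
            $("#grid").jqGrid("setGridWidth", $("#grid-container").width());
        });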

    Read the article

  • Create Custom Sized Thumbnail Images with Simple Image Resizer [Cross-Platform]

    - by Asian Angel
    Are you looking for an easy way to create custom sized thumbnail images for use in blog posts, photo albums, and more? Whether it is a single image or a CD full, Simple Image Resizer is the right app to get the job done for you.

    To add the new PPA for Simple Image Resizer, open the Ubuntu Software Center, go to the Edit menu, and select Software Sources. Access the Other Software tab in the Software Sources window and add the first of the PPAs shown below (outlined in red). The second PPA will be added to your system automatically.

    Once you have the new PPAs set up, go back to the Ubuntu Software Center and click on the PPA listing for Rafael Sachetto on the left (highlighted with red in the image). The listing for Simple Image Resizer will be right at the top…click Install to add the program to your system. After the installation is complete you can find Simple Image Resizer listed as Sir in the Graphics sub-menu.

    When you open Simple Image Resizer you will need to browse for the directory containing the images you want to work with, select a destination folder, choose a target format and prefix, enter the desired pixel size for converted images, and set the quality level. Convert your image(s) when ready…

    Note: You will need to determine the image size that best suits your needs beforehand. For our example we chose to convert a single image. A quick check shows our new “thumbnailed” image looking very nice.

    Simple Image Resizer can convert “into and from” the following image formats: .jpeg, .png, .bmp, .gif, .xpm, .pgm, .pbm, and .ppm

    Command Line Installation

    Note: For older Ubuntu systems (9.04 and previous) see the link provided below.

        sudo add-apt-repository ppa:rsachetto/ppa
        sudo apt-get update && sudo apt-get install sir

    Links

    Note: Simple Image Resizer is available for Ubuntu, Slackware Linux, and Windows.

    Simple Image Resizer PPA at Launchpad
    Simple Image Resizer Homepage
    Command Line Installation for Older Ubuntu Systems

    Bonus

    The anime wallpaper shown in the screenshots above can be found here: The end where it begins [DesktopNexus]

    Read the article

  • American Modern Insurance Group recognized at 2010 INN VIP Best Practices Awards

    - by [email protected]
    Below: Helen Pitts (right), Oracle Insurance, congratulates Bruce Weisgerber, Munich Re, as he accepts a VIP Best Practices Award on behalf of American Modern Insurance Group.

    Oracle Insurance Senior Product Marketing Manager Helen Pitts is attending the 2010 ACORD LOMA Insurance Forum this week at the Mandalay Bay Resort in Las Vegas, Nevada, and will be providing updates from the show floor.

    This is one of my favorite seasons of the year--insurance trade show season. It is a time to reconnect with peers, visit with partners, make new industry connections, and celebrate our customers' achievements. It's especially meaningful when we can share the experience of having one of our Oracle Insurance customers recognized for being an innovator in its business and in the industry.

    Congratulations to American Modern Insurance Group, part of the Munich Re Group. American Modern earned an Insurance Networking News (INN) 2010 VIP Best Practice Award yesterday evening during the 2010 ACORD LOMA Insurance Forum. The award recognizes an insurer's best practice for use of a specific technology and the role, if feasible, that ACORD data standards played as a part of their business and technology.

    American Modern received an Honorable Mention for leveraging the Oracle Documaker enterprise document automation solution to:

    - Improve the quality of communications with customers in high-value, high-touch lines of business
    - Convert thousands of page elements or "forms" from their previous system, with near pixel-perfect accuracy
    - Increase efficiency and reusability by storing all document elements (fonts, logos, approved wording, etc.) in one place
    - Issue on-demand documents, such as address changes or policy transactions, to multiple recipients at once
    - Consolidate all customer communications onto a single platform
    - Gain the ability to send documents to multiple recipients at once, further improving efficiency
    - Empower agents to produce documents in real time via the Web, such as quotes, applications and policy documents, improving carrier-agent relationships

    Munich Re's Bruce Weisgerber accepted the award on behalf of American Modern from Lloyd Chumbly, vice president of standards at ACORD. In a press release issued after the ceremony Chumbly noted, "This award embodies a philosophy of efficiency--working smarter with standards, these insurers represent the 'best of the best' as chosen by a body of seasoned insurance industry professionals."

    We couldn't agree with you more, Lloyd. Congratulations again to American Modern on your continued innovation and success. You're definitely a VIP in our book!

    To learn more about how American Modern is putting its enterprise document automation strategy into practice, click here to read a case study.

    Helen Pitts is senior product marketing manager for Oracle Insurance.

    Read the article

  • How to stream H264 Video from camera over FTP?

    - by Jay
    I bought an h264 security camera system last year and set it up to FTP video to my computer. I was able to get the video to play (even though it played a little fast) on Ubuntu 11.04 using mplayer. A few months ago, I did a fresh install of 12.04 and I cannot seem to get the video to play with mplayer, smplayer or VLC. I have the restricted formats video packages installed, and when playing with any of the players, all I get is a gray video. When calling mplayer from the command line to play the video with no options, I get a lot of these errors:

        [h264 @ 0x7f278c61f280]concealing 1320 DC, 1320 AC, 1320 MV errors
        No pts value from demuxer to use for frame!
        pts after filters MISSING

    I'm not a video expert and have been coming up with a lot of dead ends when Googling for this. Could someone offer some advice about how to play these videos? Here is the output of mediainfo for a sample file:

        mediainfo -f sec-cam01-m-20120921-212454.h264

        General
        Count                                : 278
        Count of stream of this kind         : 1
        Kind of stream                       : General
        Kind of stream                       : General
        Stream identifier                    : 0
        Count of video streams               : 1
        Video_Format_List                    : AVC
        Video_Format_WithHint_List           : AVC
        Codecs Video                         : AVC
        Complete name                        : sec-cam01-m-20120921-212454.h264
        File name                            : sec-cam01-m-20120921-212454
        File extension                       : h264
        Format                               : AVC
        Format                               : AVC
        Format/Info                          : Advanced Video Codec
        Format/Url                           : http://developers.videolan.org/x264.html
        Format/Extensions usually used       : avc h264
        Commercial name                      : AVC
        Internet media type                  : video/H264
        Codec                                : AVC
        Codec                                : AVC
        Codec/Info                           : Advanced Video Codec
        Codec/Url                            : http://developers.videolan.org/x264.html
        Codec/Extensions usually used        : avc h264
        File size                            : 1097315
        File size                            : 1.05 MiB
        File size                            : 1 MiB
        File size                            : 1.0 MiB
        File size                            : 1.05 MiB
        File size                            : 1.046 MiB
        File last modification date          : UTC 2012-09-22 01:27:12
        File last modification date (local)  : 2012-09-21 21:27:12

        Video
        Count                                : 205
        Count of stream of this kind         : 1
        Kind of stream                       : Video
        Kind of stream                       : Video
        Stream identifier                    : 0
        Format                               : AVC
        Format/Info                          : Advanced Video Codec
        Format/Url                           : http://developers.videolan.org/x264.html
        Commercial name                      : AVC
        Format profile                       : [email protected]
        Format settings                      : 1 Ref Frames
        Format settings, CABAC               : No
        Format settings, CABAC               : No
        Format settings, ReFrames            : 1
        Format settings, ReFrames            : 1 frame
        Format settings, GOP                 : M=1, N=3
        Internet media type                  : video/H264
        Codec                                : AVC
        Codec                                : AVC
        Codec/Family                         : AVC
        Codec/Info                           : Advanced Video Codec
        Codec/Url                            : http://developers.videolan.org/x264.html
        Codec profile                        : [email protected]
        Codec settings                       : 1 Ref Frames
        Codec settings, CABAC                : No
        Codec_Settings_RefFrames             : 1
        Width                                : 704
        Width                                : 704 pixels
        Height                               : 480
        Height                               : 480 pixels
        Pixel aspect ratio                   : 1.000
        Display aspect ratio                 : 1.467
        Display aspect ratio                 : 3:2
        Standard                             : NTSC
        Resolution                           : 8
        Resolution                           : 8 bits
        Colorimetry                          : 4:2:0
        Color space                          : YUV
        Chroma subsampling                   : 4:2:0
        Bit depth                            : 8
        Bit depth                            : 8 bits
        Scan type                            : Progressive
        Scan type                            : Progressive
        Interlacement                        : PPF
        Interlacement                        : Progressive

    Edit: Here is a sample video using the same encoding: https://www.dropbox.com/s/l5acwzy8rtqn9xe/sec-cam08-m-20121118-105815.h264 (not the same video as the mediainfo output)
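    A bare .h264 file is a raw elementary stream with no container timestamps, which matches the "No pts value from demuxer" errors above. Two things worth trying as a sketch; the 25 fps value is a guess, since cameras often record at other rates:

        # force mplayer's raw-H.264 demuxer and supply the frame rate yourself
        mplayer -demuxer h264es -fps 25 sec-cam01-m-20120921-212454.h264

        # or remux into MP4 so every player gets real timestamps
        avconv -f h264 -r 25 -i sec-cam01-m-20120921-212454.h264 -c:v copy remuxed.mp4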

    Read the article

  • Silverlight Cream for April 08, 2010 -- #834

    - by Dave Campbell
    In this Issue: Michael Washington, Phil Middlemiss, Yochay Kiriaty, Giorgetti Alessandro, Mike Snow, John Papa, SilverLaw, smartyP, and Pete Brown.

    Shoutouts:

    - Steve Wortham sent me a link to his RegEx tool that is written in Silverlight... definitely worth a look: Introducing Code Hinting for Regular Expressions
    - Joshua Blake posted his MIX10 materials: MIX10 NUI session sample code

    From SilverlightCream.com:

    - Silverlight MVVM: An (Overly) Simplified Explanation. Michael Washington has a tutorial up for getting your arms (and head) around MVVM and Silverlight, and Blend too.
    - A Chrome and Glass Theme - Part 3. Phil Middlemiss has part 3 up of his tutorial series on building an awesome theme for Silverlight... he's styling the textbox and checkbox this time around, and improving the button too.
    - Automatic Rotation Support or Automatic Multi-Orientation Layout Support for Windows Phone. Yochay Kiriaty is giving up some WP7 goodness with his post on Multi-Orientation Layout Support... yeah, I had to say it twice myself :) good links and all the code in addition to the good blog post.
    - Silverlight Navigation Framework: resolve the pages using an IoC container. Giorgetti Alessandro has some pretty cool code up as a proof of concept of using an IoC container with the Navigation Framework of Silverlight 4.
    - Silverlight Tip of the Day No. 109 – Attach to Process Debugging. Mike Snow is back doing Tips of the Day... and number 109 is showing how to attach the debugger to a running Silverlight app.
    - Silverlight TV 20: Community Driven Development with WCF RIA Services. In his latest Silverlight TV episode, John Papa talks with Jeff Handley about RIA Services, and how feedback from the community helped shape the product.
    - ChildWindowMouseScrollResizeBehavior - Silverlight 3. SilverLaw has a new Behavior up at the Expression Gallery that gives you resizing on a ChildWindow using the mouse wheel.
    - Creating a Windows Phone 7 Metro Style Pivot Application [Part 3]. smartyP has the 3rd and final episode for his WP7 Pivot up, and this one includes not only the source but a video tutorial.
    - Layout Rounding. Pete Brown talks about Layout Rounding, and it has nothing to do with rounding corners... it has to do with rounding off where your objects get placed pixel-wise... I've blogged about this seemingly-anti-aliasing more than once... Pete has the real answer.

    Stay in the 'Light!

    Read the article

  • New Pluralsight Course: HTML5 Canvas Fundamentals

    - by dwahlin
    I just finished up a new course for Pluralsight titled HTML5 Canvas Fundamentals that I had a blast putting together. It's all about the client and involves a lot of pixel manipulation and graphics creation, which is challenging and fun at the same time. The goal of the course is to walk you through the fundamentals, start a gradual jog into the API functions, and then start sprinting as you learn how to build a business chart canvas application from scratch that uses many of the available APIs. It's fun stuff and very useful in a variety of scenarios, including Web (desktop or mobile) and even Windows 8 Metro applications. Here's a sample video from the course that talks about building a simple bar chart using the HTML5 Canvas.

    Additional details about the course are shown next.

    HTML5 Canvas Fundamentals

    The HTML5 Canvas provides a powerful way to render graphics, charts, and other types of visual data without relying on plugins such as Flash or Silverlight. In this course you'll be introduced to key features available in the canvas API and see how they can be used to render shapes, text, video, images, and more. You'll also learn how to work with gradients, perform animations, transform shapes, and build a custom charting application from scratch. If you're looking to learn more about using the HTML5 Canvas in your Web applications then this course will break down the learning curve and give you a great start!

    Getting Started with the HTML5 Canvas
    - Introduction
    - HTML5 Canvas Usage Scenarios
    - Demo: Game Demos
    - Demo: Engaging Applications
    - Demo: Charting
    - HTML5 Canvas Fundamentals
    - Hello World Demo
    - Overview of the Canvas API
    - Demo: Canvas API Documentation
    - Summary

    Drawing with the HTML5 Canvas
    - Introduction
    - Drawing Rectangles and Ellipses
    - Demo: Simple Bar Chart
    - Demo: Simple Bar Chart with Transforms
    - Demo: Drawing Circles
    - Demo: Using arcTo()
    - Drawing Lines and Paths
    - Demo: Drawing Lines
    - Demo: Simple Line Chart
    - Demo: Using bezierCurveTo()
    - Demo: Using quadraticCurveTo()
    - Drawing Text
    - Demo: Filling, Stroking, and Measuring Text
    - Demo: Using Canvas Transforms with Text
    - Drawing Images
    - Demo: Using Image Functions
    - Drawing Videos
    - Demo: Syncing Video with a Canvas
    - Summary

    Manipulating Pixels
    - Introduction
    - Rendering Gradients
    - Demo: Creating Linear Gradients
    - Demo: Creating Radial Gradients
    - Using Transforms
    - Demo: Getting Started with Transform Functions
    - Demo: Using transform() and setTransform()
    - Accessing Pixels
    - Demo: Creating Pixels Dynamically
    - Demo: Grayscale Pixels
    - Animation Fundamentals
    - Demo: Getting Started with Animation
    - Demo: Using Gradients, Transforms, and Animations
    - Summary

    Building a Custom Data Chart
    - Introduction
    - Creating the CanvasChart Object
    - Creating the CanvasChart Shell Code
    - Rendering Text and Gradients
    - Rendering Data Points Text and Guide Lines
    - Connecting Data Point Lines
    - Rendering Data Points
    - Adding Animation
    - Adding Overlays and Interactivity
    - Summary
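    As a taste of what the simple-bar-chart demo covers, a minimal sketch (the element id and data values here are made up, not from the course):

        <canvas id="chart" width="300" height="200"></canvas>
        <script>
          var ctx = document.getElementById("chart").getContext("2d");
          var data = [12, 28, 17, 40];            // made-up sample values
          var barWidth = 40, gap = 20, scale = 4;
          ctx.fillStyle = "steelblue";
          for (var i = 0; i < data.length; i++) {
            var h = data[i] * scale;
            // canvas y is measured from the top edge, so draw up from the bottom
            ctx.fillRect(gap + i * (barWidth + gap), 200 - h, barWidth, h);
          }
        </script>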

    Read the article

  • Is commented out code really always bad?

    - by nikie
    Practically every text on code quality I've read agrees that commented-out code is a bad thing. The usual example is that someone changed a line of code and left the old line there as a comment, apparently to confuse people who read the code later on. Of course, that's a bad thing.

    But I often find myself leaving commented-out code in another situation: I write a computational-geometry or image-processing algorithm. To understand this kind of code, and to find potential bugs in it, it's often very helpful to display intermediate results (e.g. draw a set of points to the screen or save a bitmap file). Looking at these values in the debugger usually means looking at a wall of numbers (coordinates, raw pixel values). Not very helpful. Writing a debugger visualizer every time would be overkill. I don't want to leave the visualization code in the final product (it hurts performance, and usually just confuses the end user), but I don't want to lose it, either. In C++, I can use #ifdef to conditionally compile that code, but I don't see much difference between this:

        /*
        // Debug visualization: draw set of found interest points
        for (int i = 0; i < count; i++)
            DrawBox(pts[i].X, pts[i].Y, 5, 5);
        */

    and this:

        #ifdef DEBUG_VISUALIZATION_DRAW_INTEREST_POINTS
        for (int i = 0; i < count; i++)
            DrawBox(pts[i].X, pts[i].Y, 5, 5);
        #endif

    So, most of the time, I just leave the visualization code commented out, with a comment saying what is being visualized. When I read the code a year later, I'm usually happy I can just uncomment the visualization code and literally "see what's going on". Should I feel bad about that? Why? Is there a superior solution?

    Update: S. Lott asks in a comment: "Are you somehow 'over-generalizing' all commented code to include debugging as well as senseless, obsolete code? Why are you making that overly-generalized conclusion?" I recently read Robert Martin's "Clean Code", which says: "Few practices are as odious as commenting-out code. Don't do this!" I've looked at the paragraph in the book again (p. 68); there's no qualification, no distinction made between different reasons for commenting out code. So I wondered if this rule is over-generalizing (or if I misunderstood the book) or if what I do is bad practice, for some reason I didn't know.

    Read the article

  • Drawing isometric map in canvas / javascript

    - by Dave
    I have a problem with my map design for my tiles. I set the player position, which is meant to be the middle tile that the canvas is looking at. However, the calculation to put them in an x:y pixel location is completely messed up for me, and I don't know how to fix it. This is what I tried:

        var offset_x = 0;    // used for scrolling on x
        var offset_y = 0;    // used for scrolling on y
        var prev_mousex = 0; // for movePos function
        var prev_mousey = 0; // for movePos function

        function movePos(e) {
            if (prev_mousex === 0 && prev_mousey === 0) {
                prev_mousex = e.pageX;
                prev_mousey = e.pageY;
            }
            offset_x = offset_x + (e.pageX - prev_mousex);
            offset_y = offset_y + (e.pageY - prev_mousey);
            prev_mousex = e.pageX;
            prev_mousey = e.pageY;
            run = true;
        }

        player_posx = 5;
        player_posy = 55;
        ct = 19;

        for (i = (player_posx - ct); i < (player_posx + ct); i++) {     // horizontal
            for (j = (player_posy - ct); j < (player_posy + ct); j++) { // vertical
                // img[0] is 64 by 64, but the graphic is 64 by 32; the rest is alpha space
                var x = (i - j) * (img[0].height / 2) + (canvas.width / 2) - (img[0].width / 2);
                var y = (i + j) * (img[0].height / 4);
                var abposx = x - offset_x;
                var abposy = y - offset_y;
                ctx.drawImage(img[0], abposx, abposy);
            }
        }

    Based on these numbers, the first renderable tile is i = 0 and j = 36, since negative indices are not in the array. But for i = 0 and j = 36 the position it calculates is -1120 : 592. Does anyone know how to center it to the canvas view properly?

    Read the article

  • how to convert avi (xvid) to mkv or mp4 (h264)

    - by bcsteeve
    Very noob when it comes to video. I'm trying to make sense of what I'm finding via Google... but it's mostly Greek to me. I have a bunch of AVI files that won't play in my WD TV Play box. Mediainfo tells me they are Xvid. Specs for the box show that should be fine... but digging through forums says it's hit-and-miss. So I'd like to try converting them to H.264-encoded MKV or MP4 files. I gather avconv is the tool, but reading the manual just has me really, really confused. I tried the very basic example of:

        avconv -i file.avi -c copy file.mp4

    It took less than 4 seconds. And it worked... sort of. It "played" in that something came up on the screen... but there was horrible artifacting and scenes would just sort of melt into each other. I want to preserve quality if possible. I'm not concerned about file size. I'm not terribly concerned with the time it takes either, provided I can do them in a batch. Can someone familiar with the process please give me a command with the options? Thank you for your help. I'm posting the mediainfo in case it helps:

        General
        Complete name            : \\SERVER\Video\Public\test.avi
        Format                   : AVI
        Format/Info              : Audio Video Interleave
        File size                : 189 MiB
        Duration                 : 11mn 18s
        Overall bit rate         : 2 335 Kbps
        Writing application      : Lavf52.32.0

        Video
        ID                       : 0
        Format                   : MPEG-4 Visual
        Format profile           : Advanced Simple@L5
        Format settings, BVOP    : 2
        Format settings, QPel    : No
        Format settings, GMC     : No warppoints
        Format settings, Matrix  : Default (H.263)
        Muxing mode              : Packed bitstream
        Codec ID                 : XVID
        Codec ID/Hint            : XviD
        Duration                 : 11mn 18s
        Bit rate                 : 2 129 Kbps
        Width                    : 720 pixels
        Height                   : 480 pixels
        Display aspect ratio     : 16:9
        Frame rate               : 29.970 fps
        Standard                 : NTSC
        Color space              : YUV
        Chroma subsampling       : 4:2:0
        Bit depth                : 8 bits
        Scan type                : Progressive
        Compression mode         : Lossy
        Bits/(Pixel*Frame)       : 0.206
        Stream size              : 172 MiB (91%)
        Writing library          : XviD 1.2.1 (UTC 2008-12-04)

        Audio
        ID                       : 1
        Format                   : MPEG Audio
        Format version           : Version 1
        Format profile           : Layer 3
        Mode                     : Joint stereo
        Mode extension           : MS Stereo
        Codec ID                 : 55
        Codec ID/Hint            : MP3
        Duration                 : 11mn 18s
        Bit rate mode            : Constant
        Bit rate                 : 192 Kbps
        Channel(s)               : 2 channels
        Sampling rate            : 48.0 KHz
        Compression mode         : Lossy
        Stream size              : 15.5 MiB (8%)
        Alignment                : Aligned on interleaves
        Interleave, duration     : 24 ms (0.72 video frame)
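    For an actual Xvid-to-H.264 re-encode, rather than the container-only stream copy above, something along these lines should work, assuming the avconv build includes libx264 (CRF is the quality knob; lower means better quality, and 18-23 is a common range):

        avconv -i file.avi -c:v libx264 -preset medium -crf 20 -c:a copy file.mkv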

    Read the article

  • Precise Touch Screen Dragging Issue: Trouble Aligning with the Finger due to Different Screen Resolution

    - by David Dimalanta
    Please, I need your help. I'm trying to make a game where I drag-n-drop a sprite/image while my finger follows the image precisely, without any offset. When I try it on the 900x1280 (X = 900, Y = 1280) screen resolution of the Google Nexus 7 tablet, it follows precisely. However, if I test on a phone smaller than 900x1280, my finger and the image won't stay aligned, although dragging still works. This is the code I used to make a sprite follow my finger, in touchDragged():

        x = ((screenX + Gdx.input.getX()) / 2) - (fruit.width / 2);
        y = ((camera_2.viewportHeight * multiplier) - ((screenY + Gdx.input.getY()) / 2) - (fruit.width / 2));

    The code above keeps the finger and the image/sprite together in place while dragging, but only works at 900x1280. You'll be wondering why there is camera_2.viewportHeight in my code. It's there for two reasons: to prevent inverted dragging (e.g. when you swipe your finger downwards, the sprite moves upward instead), and as a baseline for reading coordinates... I think.

    Then I added another orthographic camera, named camera_1, with different settings. I recently used it for adjusting the falling object by meters per pixel. It also seems effective on smartphones with smaller resolutions, and this is what I used here:

    show():

        camera_1 = new OrthographicCamera();
        camera_1.viewportHeight = 280; // smaller viewport height so the object falls faster, reducing the chance of drag force
        camera_1.viewportWidth = 196;  // keep it proportional to the original screen view size as much as possible
        camera_1.position.set(camera_1.viewportWidth * 0.5f, camera_1.viewportHeight * 0.5f, 0f);
        camera_1.update();

    touchDragged():

        x = ((screenX + (camera_1.viewportWidth / Gdx.input.getX())) / 2) - (fruit.width / 2);
        y = ((camera_1.viewportHeight * multiplier) - ((screenY + (camera_1.viewportHeight / Gdx.input.getY())) / 2) - (fruit.width / 2));

    But with this, instead of the image/sprite following closely to my finger, there is still a space/gap between the sprite/image and the finger, possibly dependent on coordinates based on the screen resolution. I'm trying to drag the blueberry sprite with my finger, and my expectation was not met, since I want my finger and the sprite/image (blueberry) to stay close together while dragging until I release it. I need to figure out how to make the image/sprite closely follow my finger while dragging, independent of the screen size, even across very different resolutions.
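    For reference, the usual resolution-independent approach in libGDX is to unproject the raw touch point through the camera instead of mixing screen pixels with viewport units. A hedged sketch (the camera and sprite names are placeholders, not from the code above):

        private final Vector3 touchPoint = new Vector3();

        @Override
        public boolean touchDragged(int screenX, int screenY, int pointer) {
            // Map raw screen pixels into camera/world units, whatever the device resolution.
            touchPoint.set(screenX, screenY, 0);
            camera.unproject(touchPoint);

            // Center the sprite on the finger.
            sprite.setPosition(touchPoint.x - sprite.getWidth() / 2,
                               touchPoint.y - sprite.getHeight() / 2);
            return true;
        }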

    Read the article

  • Correct way to handle path-finding collision matrix

    - by Xander Lamkins
    Here is an example of me utilizing path finding. The red grid represents the grid utilized by my A* library to locate a distance. This picture is only an example; currently it is all calculated on the 1x1 pixel level (pretty darn laggy). I want to make it so that the farther I click, the less accurate it will be (split the map into larger grid pieces). Edit: as mentioned by Eric, this is not a required game mechanic. I am perfectly fine with any method that allows me to make this accurate while still fast. This isn't really the topic of this question, though.

    The problem I have is that my current library uses a two-dimensional grid of integers. The higher the number in a cell, the more resistance for that grid tile. Currently I'm setting all unwalkable spots to int.MaxValue. Here is an example of what I want. I'm just not sure how I should set up the arrays of integers for the grid. Every time an element is added to or removed from the game, its collision details are updated in the table. Here is a picture of what the map looks like on my collision layer.

    I probably shouldn't be creating new arrays every time I have to do a path find, because my game needs to support tons of path finding at the same time. Should I have multiple arrays that are all updated when the dynamic elements change (a building is built / a building is destroyed)? The problem I see with this is that it will probably make the creation and destruction of buildings a little more laggy than I would want, because it would be setting the collision grid for each built-in accuracy level. I would also have to add or remove some arrays if I ever changed the map size in the future.

    Should I generate a new array based on an accuracy value every time I need to path-find? The problem I see with this is that it will probably make any form of path finding just as laggy, because it will have to search through MapWidth x MapHeight cells to shrink it all down. Or is there a better way? I'm certainly not the best at optimizing really anything. I've just started dealing with XNA, so I'm not used to optimization code really having much of an effect until now... :( If you need code examples, please ask; I'll add them as an edit.

    EDIT: While this doesn't directly relate to the question, I figure the more information I provide, the better. To keep your units from moving too accurately to the player's desired position, I've decided that once the unit path-finds over to the less accurate grid piece, it will then path-find on a more accurate level to the exact position requested.
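    As a sketch of the coarser-grid option discussed above (the method and variable names are made up), downsampling can keep the worst cost per block, so int.MaxValue walls stay solid at every accuracy level:

        // Downsample a fine-grained cost grid by an integer factor,
        // keeping the highest (worst) cost found in each block.
        static int[,] Coarsen(int[,] fine, int factor)
        {
            int w = fine.GetLength(0) / factor;
            int h = fine.GetLength(1) / factor;
            var coarse = new int[w, h];
            for (int cx = 0; cx < w; cx++)
                for (int cy = 0; cy < h; cy++)
                {
                    int worst = 0;
                    for (int dx = 0; dx < factor; dx++)
                        for (int dy = 0; dy < factor; dy++)
                            worst = System.Math.Max(worst,
                                fine[cx * factor + dx, cy * factor + dy]);
                    coarse[cx, cy] = worst; // int.MaxValue survives the downsample
                }
            return coarse;
        }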

    Read the article

  • Masking OpenGL texture by a pattern

    - by user1304844
    Tiled terrain. The user wants to build a structure. He presses build, and for each tile an "allow" or "disallow" tile sprite is added to the scene. FPS drops right away, since there are 600+ tiles added to the screen. Since the map equals the screen, there is no scrolling. I came to the idea of making an allow grid covering the whole map and masking the disallow fields.

    Approach 1: Create allow and disallow grid textures. Draw a polygon on screen. Pass both textures to the fragment shader. Determine the position inside the polygon, and use the color from the allow texture if the fragment belongs to an allow field, the disallow texture otherwise.

    Problem: How do I know if I'm on a field that isn't allowed, if I cannot pass the matrix representing the map (enum FieldStatus[][] (Allow / Disallow)) to the shader? Inside the shader I don't know which fragments should be masked.

    Approach 2: Create an allow texture. Create an empty texture buffer the same size as the allow texture. Memset the pixels of the empty texture to the desired color for each pixel that doesn't allow building. Draw a polygon on screen. Pass both textures to the fragment shader. Use the texture2 color if its alpha is greater than 0, the texture1 color otherwise.

    Problem: I'm not sure what the right way to manipulate pixels on a texture is. Do I just make a buffer of width*height*4 bytes and memcpy the color[] to the desired coordinates, or is there anything else to it? Would I have to call glTexImage2D after every change to the texture? Another problem with this approach is that it takes a lot more work to get a prettier effect, since I'm manipulating the color pixels instead of just masking two textures.

        varying vec2 TexCoordOut;
        uniform sampler2D Texture1;
        uniform sampler2D Texture2;

        void main(void) {
            vec4 allowColor = texture2D(Texture1, TexCoordOut);
            vec4 disallowColor = texture2D(Texture2, TexCoordOut);
            if (disallowColor.a > 0) {
                gl_FragColor = disallowColor;
            } else {
                gl_FragColor = allowColor;
            }
        }

    I'm working with OpenGL on Windows. Any other suggestion is welcome.
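    On the glTexImage2D question: a full re-upload is not required for every change; glTexSubImage2D can overwrite just the dirty rectangle from a CPU-side buffer. A sketch, with sizes and the fill color as placeholders:

        // One-time allocation of the mask texture (pixels filled in later).
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);

        // When some tiles change, build just that w x h rectangle on the CPU...
        std::vector<unsigned char> block(w * h * 4, 0); // tightly packed RGBA8
        for (size_t i = 3; i < block.size(); i += 4)
            block[i] = 255; // opaque alpha = "disallow" in the shader above

        // ...and overwrite only the dirty region; no reallocation needed.
        glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, w, h,
                        GL_RGBA, GL_UNSIGNED_BYTE, block.data());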

    Read the article

  • Why doesn't my texture display with this GLSL shader?

    - by Chewy Gumball
    I am trying to display a DXT1-compressed texture on a quad using a VBO and shaders, but I have been unable to get it working. All I get is a black square. I know my texture is uploaded properly, because when I use immediate mode without shaders the texture displays fine, but I will include that part just in case. Also, when I change gl_FragColor to something like vec4(0.0, 1.0, 1.0, 1.0), I get a nice blue quad, so I know that my shader is able to set the colour. It appears that either the texture is not being bound correctly in the shader, or the texture coordinates are not being picked up. However, I can't find the error! What am I doing wrong? I am using OpenTK in C# (not XNA).

    Vertex shader:

        void main()
        {
            gl_TexCoord[0] = gl_MultiTexCoord0;
            // Set the position of the current vertex
            gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
        }

    Fragment shader:

        uniform sampler2D diffuseTexture;

        void main()
        {
            // Set the output color of our current pixel
            gl_FragColor = texture2D(diffuseTexture, gl_TexCoord[0].st);
            //gl_FragColor = vec4(0.0, 1.0, 1.0, 1.0);
        }

    Drawing code:

        int vb, eb;
        GL.GenBuffers(1, out vb);
        GL.GenBuffers(1, out eb);

        // Position (x, y, z), texture (u, v)
        float[] verts = {
            0.1f, 0.1f, 0.0f, 0.0f, 0.0f,
            1.9f, 0.1f, 0.0f, 1.0f, 0.0f,
            1.9f, 1.9f, 0.0f, 1.0f, 1.0f,
            0.1f, 1.9f, 0.0f, 0.0f, 1.0f
        };
        uint[] indices = { 0, 1, 2, 0, 2, 3 };

        // Upload data to the VBO
        GL.BindBuffer(BufferTarget.ArrayBuffer, vb);
        GL.BindBuffer(BufferTarget.ElementArrayBuffer, eb);
        GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(verts.Length * sizeof(float)), verts, BufferUsageHint.StaticDraw);
        GL.BufferData(BufferTarget.ElementArrayBuffer, (IntPtr)(indices.Length * sizeof(uint)), indices, BufferUsageHint.StaticDraw);

        // Upload texture
        int buffer = GL.GenTexture();
        GL.BindTexture(TextureTarget.Texture2D, buffer);
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapS, (float)TextureWrapMode.Repeat);
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapT, (float)TextureWrapMode.Repeat);
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (float)TextureMagFilter.Linear);
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (float)TextureMinFilter.Linear);
        GL.TexEnv(TextureEnvTarget.TextureEnv, TextureEnvParameter.TextureEnvMode, (float)TextureEnvMode.Modulate);
        GL.CompressedTexImage2D(TextureTarget.Texture2D, 0, texture.format, texture.width, texture.height, 0, texture.data.Length, texture.data);

        // Draw
        GL.UseProgram(shaderProgram);
        GL.EnableClientState(ArrayCap.VertexArray);
        GL.EnableClientState(ArrayCap.TextureCoordArray);
        GL.VertexPointer(3, VertexPointerType.Float, 5 * sizeof(float), 0);
        GL.TexCoordPointer(2, TexCoordPointerType.Float, 5 * sizeof(float), 3);
        GL.ActiveTexture(TextureUnit.Texture0);
        GL.Uniform1(GL.GetUniformLocation(shaderProgram, "diffuseTexture"), 0);
        GL.DrawElements(BeginMode.Triangles, indices.Length, DrawElementsType.UnsignedInt, 0);

    Read the article

  • 2D game - Missile shooting problem on Android

    - by Niksa
    Hello, I have to make a tank that sits still but moves its turret and shoots missiles. As this is my first Android application ever and I haven't done any game development either, I've come across a few problems... Now, I did the tank and the moving turret once I read the Android tutorial for the LunarLander sample code, so this code is based on the LunarLander code. But I'm having trouble making the missile fire when the SPACE button is pressed.

        private void doDraw(Canvas canvas) {
            canvas.drawBitmap(backgroundImage, 0, 0, null);

            // draws the tank
            canvas.drawBitmap(tank, x_tank, y_tank, new Paint());

            // draws and rotates the tank turret
            canvas.rotate((float) mHeading, (float) x_turret + mTurretWidth, y_turret);
            canvas.drawBitmap(turret, x_turret, y_turret, new Paint());

            // draws the grenade, a regular circle from the ShapeDrawable class
            bullet.setBounds(x_bullet, y_bullet, x_bullet + width, y_bullet + height);
            bullet.draw(canvas);
        }

    The updateGame method:

        private void updateGame() throws InterruptedException {
            long now = System.currentTimeMillis();
            if (mLastTime > now)
                return;
            double elapsed = (now - mLastTime) / 1000.0;
            mLastTime = now;

            // dUp and dDown rotate the turret from 0 to 75 degrees.
            if (dUp)
                mHeading += 1 * (PHYS_SLEW_SEC * elapsed);
            if (mHeading >= 75)
                mHeading = 75;
            if (dDown)
                mHeading += (-1) * (PHYS_SLEW_SEC * elapsed);
            if (mHeading < 0)
                mHeading = 0;

            if (dSpace) {
                // missile logic, a straight trajectory for now
                x_bullet -= 1;
                y_bullet -= 1; // doesn't work; has to be updated every pixel, or what?
            }
        }

        boolean doKeyDown(int keyCode, KeyEvent msg) {
            boolean handled = false;
            synchronized (mSurfaceHolder) {
                if (keyCode == KeyEvent.KEYCODE_SPACE) {
                    dSpace = true;
                    handled = true;
                }
                if (keyCode == KeyEvent.KEYCODE_DPAD_UP) {
                    dUp = true;
                    handled = true;
                }
                if (keyCode == KeyEvent.KEYCODE_DPAD_DOWN) {
                    dDown = true;
                    handled = true;
                }
                return handled;
            }
        }

    And the run method that runs the game:

        public void run() {
            while (mRun) {
                Canvas c = null;
                try {
                    c = mSurfaceHolder.lockCanvas(null);
                    synchronized (mSurfaceHolder) {
                        if (mMode == STATE_RUNNING)
                            updateGame();
                        doDraw(c);
                    }
                } catch (InterruptedException e) {
                    e.printStackTrace();
                } finally {
                    // do this in a finally so that if an exception is thrown
                    // during the above, we don't leave the Surface in an
                    // inconsistent state
                    if (c != null) {
                        mSurfaceHolder.unlockCanvasAndPost(c);
                    }
                }
            }
        }

    So the question would be: how do I make the bullet fire on a single SPACE key press, from the turret to the end of the screen? Could you help me out here? I seem to be in the dark here... Thanks, Niksa
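    One pattern that fits the elapsed-time logic already in updateGame() is to spawn the bullet once per SPACE press and advance it by a velocity each frame. A rough sketch with made-up field names (it also assumes double coordinates, while x_bullet/y_bullet are ints above, so casts or shadow fields would be needed; mCanvasWidth is a placeholder):

        // Class fields: one active bullet at a time.
        private boolean bulletActive = false;
        private double bulletVX, bulletVY;
        private static final double BULLET_SPEED = 300.0; // pixels per second, a guess

        // Inside updateGame(), replacing the dSpace block:
        if (dSpace && !bulletActive) {
            bulletActive = true;
            x_bullet = x_turret; // start at the turret
            y_bullet = y_turret;
            double rad = Math.toRadians(mHeading);
            bulletVX = BULLET_SPEED * Math.cos(rad);
            bulletVY = -BULLET_SPEED * Math.sin(rad); // screen y grows downward
        }
        if (bulletActive) {
            x_bullet += bulletVX * elapsed; // elapsed: frame time in seconds
            y_bullet += bulletVY * elapsed;
            if (x_bullet > mCanvasWidth || y_bullet < 0) {
                bulletActive = false; // off screen: ready for the next shot
            }
        }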

    Read the article

  • Why does my ID3DXSprite appear to be incorrectly scaled?

    - by Bjoern
    I am using D3D9 for rendering some simple things (a movie) as the backmost layer, then on top of that some text messages, and now I wanted to add some buttons. Before adding the buttons everything seemed to have worked fine, and I was using an ID3DXSprite for the text as well (ID3DXFont). Now I am loading some graphics for the buttons, but they seem to be scaled to something like 1.2 times their original size. In my test window I centered the graphic, but being too big it just doesn't fit well: for example, the client area is 640x360 and the graphic is 440 wide, so I expect 100 pixels on the left and right. The left side is fine [I took a screenshot and "counted" the pixels in Photoshop], but on the right there are only about 20 pixels.

    My rendering code is very simple (I am omitting error checks, et cetera, for brevity):

        // initially the viewport was set to the width/height of the client area

        // clear device
        m_d3dDevice->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_STENCIL | D3DCLEAR_ZBUFFER,
                           D3DCOLOR_ARGB(0, 0, 0, 0), 1.0f, 0);

        // begin scene
        m_d3dDevice->BeginScene();

        // render movie surface (just two triangles to which the movie is rendered)
        m_d3dDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, false);
        m_d3dDevice->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR); // bilinear filtering
        m_d3dDevice->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR); // bilinear filtering
        m_d3dDevice->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
        m_d3dDevice->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE); // ignored
        m_d3dDevice->SetTextureStageState(0, D3DTSS_COLOROP, D3DTOP_SELECTARG1);
        m_d3dDevice->SetTexture(0, m_movieTexture);
        m_d3dDevice->SetStreamSource(0, m_displayPlaneVertexBuffer, 0, sizeof(Vertex));
        m_d3dDevice->SetFVF(Vertex::FVF_Flags);
        m_d3dDevice->DrawPrimitive(D3DPT_TRIANGLELIST, 0, 2);

        // render sprites
        m_sprite->Begin(D3DXSPRITE_ALPHABLEND | D3DXSPRITE_SORT_TEXTURE | D3DXSPRITE_DO_NOT_ADDREF_TEXTURE);

        // text drop shadow
        m_font->DrawText(m_playerSprite, m_currentMessage.c_str(), m_currentMessage.size(),
                         &m_playerFontRectDropShadow, DT_RIGHT | DT_TOP | DT_NOCLIP,
                         m_playerFontColorDropShadow);

        // text
        m_font->DrawText(m_playerSprite, m_currentMessage.c_str(), m_currentMessage.size(),
                         &m_playerFontRect, DT_RIGHT | DT_TOP | DT_NOCLIP,
                         m_playerFontColorMessage);

        // control object (draws a few objects like this)
        m_sprite->Draw(m_texture, 0, 0, &m_vecPos, 0xFFFFFFFF);

        m_sprite->End();

        // end scene
        m_d3dDevice->EndScene();

    What did I forget to do here? Except for the control objects (play button, pause button, etc., which are placed on a "panel" about 440 pixels wide) everything seems fine; the objects are positioned where I expect them, but are just too big. By the way, I loaded the images using D3DXCreateTextureFromFileEx (resizing the window, reacting to a lost device, etc., works fine too). For experimenting, I added some code to take an identity matrix and scale it down on the x/y axis to 0.75f, which then gave me the expected result for the controls (but also made the text smaller and out of position), but I don't know why I would need to scale anything. My rendering code is so simple; I just wanted to draw my 2D objects at 1:1 the size they came from the file... I am really very inexperienced in D3D, so the answer might be very simple...

    Read the article

  • Can a 10-bit monitor connection preserve all tones in 8-bit sRGB gradients on a wide-gamut monitor?

    - by hjb981
    This question is about color management and the use of a higher color depth, 10 bits per channel (30 bits in total, resulting in 1.07 billion colors, or 1024 shades of gray, sometimes referred to as "deep color"), compared to the standard of 8 bits per channel (24 bits in total, 16.7 million colors, 256 shades of gray, sometimes referred to as "true color"). Do not confuse this with "32-bit color", which usually refers to standard 8-bit color with an extra channel ("alpha channel") for transparency (used to achieve effects like semi-transparent windows, etc.).

    The following can be assumed to be in place:

    1. A wide-gamut monitor that supports 10-bit input. Further, it can be assumed that the monitor has been calibrated to its native gamut and that an ICC color profile has been created.
    2. A graphics card that supports 10-bit output (and is connected to the monitor via DisplayPort).
    3. Drivers for the graphics card that support 10-bit output.

    If applications that support 10-bit output and color profiles were used, I would expect them to display images that were saved using different color spaces correctly. For example, both an sRGB and an AdobeRGB image should be displayed correctly. If an sRGB image was saved using 8 bits per channel (almost always the case), then the 10-bit signal path would ensure that no tonal gradients were lost in the conversion from the sRGB of the image to the native color space of the monitor.

    For example: if the image contains a pixel that is pure red in 8 bits (255, 0, 0), the corresponding value in 10 bits would be (1023, 0, 0). However, since the monitor has a larger color space than sRGB, sending the signal (1023, 0, 0) to the monitor would result in a red that was too saturated. Therefore, according to the ICC color profile, the signal would be transformed into a different value with less red saturation, for example (987, 0, 0). Since there are still plenty of levels left between 0 and 987, all 256 values (0-255) for red in the sRGB color space of the file could be uniquely mapped to color-corrected 10-bit values in the monitor's native color space. However, if the conversion was done in 8 bits, (255, 0, 0) would be translated to (246, 0, 0), and there would now only be 247 available levels for the red channel instead of 256, degrading the displayed image quality.

    My question is: how does this work on Ubuntu? Let's say that I use Firefox (which is color-aware and uses ICC color profiles). Would I get 10-bit processing, thus preserving all levels of an 8-bit picture? What is the situation like for other applications, especially photo applications like Shotwell, Rawtherapee, Darktable, RawStudio, Photivo, etc.? Does Ubuntu differ from other operating systems (Linux and others) on this point?
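    To put rough numbers on the example above (using the question's own 255 -> 246 gamut-mapping figure; the 10-bit output value is approximate):

        8-bit path:  255 (sRGB red) -> gamut map -> 246
                     256 inputs share 247 output levels (0..246), so at least
                     9 pairs of distinct sRGB levels collapse into one.
        10-bit path: 255 -> 255 * 1023/255 = 1023 -> gamut map -> ~987
                     256 inputs map into 988 output levels (0..987),
                     about 3.9 per input, so no two inputs need to collide.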

    Read the article

  • How Do I Do Alpha Transparency Properly In XNA 4.0?

    - by Soshimo
    Okay, I've read several articles, tutorials, and questions regarding this. Most point to the same technique, which doesn't solve my problem. I need the ability to create semi-transparent sprites (Texture2Ds really) and have them overlay another sprite. I can achieve that somewhat with the code samples I've found, but I'm not satisfied with the results and I know there is a way to do this. In mobile programming (BREW) we did it old school and actually checked each pixel for transparency before rendering. In this case it seems to render the sprite below it blended with the alpha above it. This may be an artifact of how I'm rendering the texture but, as I said before, all examples point to this one technique. Before I go any further, I'll go ahead and paste my example code.

        public void Draw(SpriteBatch batch, Camera camera, float alpha)
        {
            int tileMapWidth = Width;
            int tileMapHeight = Height;

            batch.Begin(SpriteSortMode.Texture, BlendState.AlphaBlend,
                        SamplerState.PointWrap, DepthStencilState.Default,
                        RasterizerState.CullNone, null, camera.TransformMatrix);

            for (int x = 0; x < tileMapWidth; x++)
            {
                for (int y = 0; y < tileMapHeight; y++)
                {
                    int tileIndex = _map[y, x];
                    if (tileIndex != -1)
                    {
                        Texture2D texture = _tileTextures[tileIndex];
                        batch.Draw(
                            texture,
                            new Rectangle(
                                (x * Engine.TileWidth),
                                (y * Engine.TileHeight),
                                Engine.TileWidth,
                                Engine.TileHeight),
                            new Color(new Vector4(1f, 1f, 1f, alpha)));
                    }
                }
            }

            batch.End();
        }

    As you can see, in this code I'm using the overloaded SpriteBatch.Begin method which takes, among other things, a blend state. I'm almost positive that's my problem. I don't want to BLEND the sprites, I want them to be transparent when alpha is 0. In this example I can set alpha to 0 but it still renders both tiles, with the lower z-ordered sprite showing through, discolored because of the blending. This is not a desired effect; I want the higher z-ordered sprite to fade out and not affect the color beneath it in such a manner. I might be way off here as I'm fairly new to XNA development, so feel free to steer me in the correct direction in the event I'm going down the wrong rabbit hole. TIA
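    A note that may be relevant here: in XNA 4.0, BlendState.AlphaBlend assumes premultiplied alpha, so a straight (1, 1, 1, alpha) tint leaves the RGB channels at full strength. Two hedged variants (destRect stands in for the destination rectangle above):

        // Option 1: keep BlendState.AlphaBlend and premultiply the tint;
        // Color.White * alpha scales R, G, B and A together.
        batch.Draw(texture, destRect, Color.White * alpha);

        // Option 2: keep the straight-alpha tint and change the blend state.
        batch.Begin(SpriteSortMode.Texture, BlendState.NonPremultiplied,
                    SamplerState.PointWrap, DepthStencilState.Default,
                    RasterizerState.CullNone, null, camera.TransformMatrix);
        batch.Draw(texture, destRect, new Color(1f, 1f, 1f, alpha));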

    Read the article

  • Box2D Difference Between WorldCenter and Position

    - by Free Lancer
    So this problem has been bothering me for a couple of days now. First off, what is the difference between, say, Body.getWorldCenter() and Body.getPosition()? I heard that WorldCenter might have to do with the center of gravity or something. Second, when I create a Box2D body for a sprite, the body is always at the lower-left corner. I check it by printing a rectangle of 1 pixel around box.getWorldCenter(). From what I understand, the body should be in the center of the sprite and its bounding box should wrap around the sprite, correct? Here's an image of what I mean (the sprite is red, the body blue):

    Here's some code. The body creator:

        public static Body createBoxBody(final World pPhysicsWorld, final BodyType pBodyType,
                                         final FixtureDef pFixtureDef, Sprite pSprite) {
            float pRotation = 0;
            float pCenterX = pSprite.getX() + pSprite.getWidth() / 2;
            float pCenterY = pSprite.getY() + pSprite.getHeight() / 2;
            float pWidth = pSprite.getWidth();
            float pHeight = pSprite.getHeight();

            final BodyDef boxBodyDef = new BodyDef();
            boxBodyDef.type = pBodyType;
            //boxBodyDef.position.x = pCenterX / Constants.PIXEL_METER_RATIO;
            //boxBodyDef.position.y = pCenterY / Constants.PIXEL_METER_RATIO;
            boxBodyDef.position.x = pSprite.getX() / Constants.PIXEL_METER_RATIO;
            boxBodyDef.position.y = pSprite.getY() / Constants.PIXEL_METER_RATIO;

            Vector2 v = new Vector2(boxBodyDef.position.x * Constants.PIXEL_METER_RATIO,
                                    boxBodyDef.position.y * Constants.PIXEL_METER_RATIO);
            Gdx.app.log("@Physics", "createBoxBody():: Box Position: " + v);

            // Temporary box shape of the body
            final PolygonShape boxPoly = new PolygonShape();
            final float halfWidth = pWidth * 0.5f / Constants.PIXEL_METER_RATIO;
            final float halfHeight = pHeight * 0.5f / Constants.PIXEL_METER_RATIO;
            boxPoly.setAsBox(halfWidth, halfHeight); // set the anchor point to be the center of the sprite
            pFixtureDef.shape = boxPoly;

            final Body boxBody = pPhysicsWorld.createBody(boxBodyDef);
            Gdx.app.log("@Physics", "createBoxBody():: Box Center: "
                    + boxBody.getPosition().mul(Constants.PIXEL_METER_RATIO));
            boxBody.createFixture(pFixtureDef);
            boxBody.setTransform(boxBody.getWorldCenter(), MathUtils.degreesToRadians * pRotation);
            boxPoly.dispose();

            return boxBody;
        }

    Making the sprite:

        public Car(Texture texture, float pX, float pY, World world) {
            super("Car");
            mSprite = new Sprite(texture);
            mSprite.setSize(mSprite.getWidth() / 6, mSprite.getHeight() / 6);
            mSprite.setPosition(pX, pY);
            mSprite.setOrigin(mSprite.getWidth() / 2, mSprite.getHeight() / 2);

            FixtureDef carFixtureDef = new FixtureDef();
            // Set the fixture's properties, like friction, using the car's shape
            carFixtureDef.restitution = 1f;
            carFixtureDef.friction = 1f;
            carFixtureDef.density = 1f; // needed to rotate the body using applyTorque

            mBody = Physics.createBoxBody(world, BodyDef.BodyType.DynamicBody, carFixtureDef, mSprite);
        }
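    On the first question, standard Box2D semantics (not specific to the code above) in a two-line sketch:

        // getPosition(): the body origin, i.e. exactly the point placed via
        // BodyDef.position (the sprite's lower-left corner in the code above).
        Vector2 origin = body.getPosition();

        // getWorldCenter(): the center of mass in world coordinates, derived
        // from the fixtures. For a setAsBox() fixture centered on the origin
        // the two coincide; they diverge once shapes are offset or asymmetric.
        Vector2 centerOfMass = body.getWorldCenter();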

    Read the article

  • Multiplayer tile based movement synchronization

    - by Mars
    I have to synchronize the movement of multiple players over the Internet, and I'm trying to figure out the safest way to do that. The game is tile based; you can only move in 4 directions, and every move moves the sprite 32 px (over time, of course). Now, if I simply sent this move action to the server, which would broadcast it to all players, then while the walk key is held down to keep walking I would have to get the next command to the server and to all clients in time, or the movement won't be smooth anymore. I saw this in other games, and it can get ugly pretty quickly, even without lag. So I'm wondering if this is even a viable option.

    This seems like a very good method for single player, though, since it's easy, straightforward (just take the next movement action in time and add it to a list), and you can easily add mouse movement (clicking on some tile) to add a path to a queue that's walked along.

    The other thing that came to my mind was sending the information that someone started moving in some direction, and again once he stopped or changed direction, together with the position, so that the sprite will appear at the correct position, or rather so that the position can be fixed if it's wrong. This should (hopefully) only cause problems if someone really is lagging, in which case it's to be expected. For this to work out I'd need some kind of queue where incoming direction changes and stops are saved, so the sprite knows where to go after the current movement to the next tile is finished. This could actually work, but sounds kind of overcomplicated. Although it might be the only way to do this without risk of stuttering: if a stop or direction change is received on the client side, it's saved in a queue and the character keeps moving to the specified coordinates before stopping or changing direction. If the new command comes in too late, there'll be stuttering as well, of course...

    I'm having a hard time deciding on a method, and I couldn't really find any examples for this yet. My main problem is keeping the tile movement smooth, which is why other topics regarding the synchronization of pixel-based movement aren't helping too much. What is the "standard" way to do this?

    Read the article

  • How to synchronize the ball in a network pong game?

    - by Thaars
    I'm developing a multiplayer network pong game, my first game ever. The current state is: I'm running the physics engine with the same configuration on the server and the clients. The player's own paddle movement is predicted and only confirmed by the authoritative server. If a difference between them is detected, I correct the position on the client by interpolation. The opponent's paddle is also interpolated 200 ms to 100 ms in the past, because the server broadcasts snapshots every 100 ms to each client.

    So far it works very well, but now I have to simulate the ball and have a problem understanding the procedure. I've read Valve's (and many other) articles about fast-paced multiplayer several times and understood their approach. Maybe I can compare my ball with their bullets, but their advantage is that the bullets are not visible. When I have to display the ball, and I see my paddle in the present, the opponent in the past, and the server somewhere in between, how can I synchronize the ball across all instances and ensure that it always gets hit by the paddle, even if the paddle is moving fast? Currently my ball's position is simply set by a server update, so it can happen that the ball bounces back even if the paddle is some pixels away (because of a delayed server position).

    Until now I've got no synced clock across all instances. I'm sending a client step index with each update to the server. If the server did its job, it sends the snapshot with the last step index of each client back to the clients. Then I look up the stored position at the returned step index and compare them. Do I need a common clock to sync the ball?

    EDIT: I've tried to sync a common clock for the server and all clients with a timestamp. But I think it's better to use my own stepping instead of a timestamp (so I don't need to calculate with the ping and so on, and the timestamp will never be exact). The physics run 60 times per second, and I now use this for keeping them synchronized. Is that a good way?

    When the ball gets calculated by each client, the angle after bouncing can differ because of the different positions of the paddles (the opponent is 200 ms in the past). When the server sends its ball position, velocity, and angle (because it knows the position of each paddle and is authoritative), the ball could be in a very different position because of the different angles after bouncing (because the clients receive the server data after 100 ms). How is it possible to interpolate such a huge difference?

    I posted this question some days ago at stackoverflow, but got no answer yet. Maybe this is the better place for this question.

    Read the article

  • How to debug a native Java crash on Linux?

    - by Paul J. Lucas
    I've seen this question and this article on how to debug a native Java crash. The article is with respect to Windows. What are the equivalent debugging aids on Linux? Note: All I have is this crash log from a user in the field. I do not have access to the machine on which the crash occurred. Update: I am pretty sure the crash is due to JNI code we have. I never meant to imply that it was the JVM itself that was faulty. Per request, here is the crash dump (or as much of it as will fit in the 30K stackoverflow limit): # # An unexpected error has been detected by Java Runtime Environment: # # SIGSEGV (0xb) at pc=0x06300e76, pid=9983, tid=4106996592 # # Java VM: Java HotSpot(TM) Client VM (1.6.0_03-b05 mixed mode, sharing) # Problematic frame: # V [libjvm.so+0x300e76] # # If you would like to submit a bug report, please visit: # http://java.sun.com/webapps/bugreport/crash.jsp # --------------- T H R E A D --------------- Current thread (0x0922e000): VMThread [id=9985] siginfo:si_signo=11, si_errno=0, si_code=1, si_addr=0x00000008 Registers: EAX=0x00000008, EBX=0x88a829b3, ECX=0x88a829b0, EDX=0xa7d6c1dc ESP=0xf4cbba5c, EBP=0xf4cbba68, ESI=0xa7d6d1d8, EDI=0x00000404 EIP=0x06300e76, CR2=0x00000008, EFLAGS=0x00010202 Top of Stack: (sp=0xf4cbba5c) 0xf4cbba5c: a7d6c1c8 0920cc30 aa0de5c0 f4cbba98 0xf4cbba6c: 063517d7 cf8f2a20 a7d6c1c8 0920cc30 0xf4cbba7c: 0920cc30 00000000 00000000 6d224c40 0xf4cbba8c: 00000001 f4cbbbb0 0920b440 f4cbbab8 0xf4cbba9c: 061dd4df 0920cc30 f4cbbb10 f4cbbac8 0xf4cbbaac: 0633cb7e 0643b5b8 f4492968 f4cbbad8 0xf4cbbabc: 061dcd68 f4cbbaf0 0920cc30 f4cbbaf8 0xf4cbbacc: 061df31e f4cbbb10 d4cbcc2c f4cbbb08 Instructions: (pc=0x06300e76) 0x06300e66: 82 39 f2 73 34 90 8d 74 26 00 8b 02 85 c0 74 22 0x06300e76: 8b 18 80 3d 45 10 42 06 00 74 0c 89 d8 31 c9 83 Stack: [0xf4c3c000,0xf4cbd000), sp=0xf4cbba5c, free space=510k Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code) V [libjvm.so+0x300e76] V [libjvm.so+0x3517d7] V [libjvm.so+0x1dd4df] V [libjvm.so+0x1dcd68] V [libjvm.so+0x1dc3cc] V [libjvm.so+0x1d4c52] V [libjvm.so+0x1d32cc] V [libjvm.so+0x1d4229] V [libjvm.so+0x1dc82a] V [libjvm.so+0x1d1d34] V [libjvm.so+0x186125] V [libjvm.so+0x1d20bc] V [libjvm.so+0x3b2cbe] V [libjvm.so+0x3c5037] V [libjvm.so+0x3c46bc] V [libjvm.so+0x3c488a] V [libjvm.so+0x3c446f] V [libjvm.so+0x30b719] C [libpthread.so.0+0x5cb2] VM_Operation (0xf2b60728): generation collection for allocation, mode: safepoint, requested by thread 0x09449c00 --------------- P R O C E S S --------------- Java Threads: ( = current thread ) 0x092afc00 JavaThread "RawImageCache" daemon [_thread_blocked, id=10026] 0xf37d1000 JavaThread "TimerQueue" daemon [_thread_blocked, id=10022] 0x09410000 JavaThread "SunTileScheduler0Standard7" daemon [_thread_blocked, id=10021] 0x0940f000 JavaThread "SunTileScheduler0Standard6" daemon [_thread_blocked, id=10020] 0x0946fc00 JavaThread "SunTileScheduler0Standard5" daemon [_thread_blocked, id=10019] 0x0946e800 JavaThread "SunTileScheduler0Standard4" daemon [_thread_blocked, id=10018] 0x0946d400 JavaThread "SunTileScheduler0Standard3" daemon [_thread_blocked, id=10017] 0x0946c000 JavaThread "SunTileScheduler0Standard2" daemon [_thread_blocked, id=10016] 0x0946ac00 JavaThread "SunTileScheduler0Standard1" daemon [_thread_blocked, id=10015] 0x0946a000 JavaThread "SunTileScheduler0Standard0" daemon [_thread_blocked, id=10014] 0x0944a800 JavaThread "Image List Poller" [_thread_blocked, id=10012] 0x09449c00 JavaThread "Image Task Queue" [_thread_blocked, id=10011] 
  0xf37e3c00 JavaThread "Laf-Widget fade tracker" [_thread_blocked, id=10010]
  0x094abc00 JavaThread "FileCacheMonitor" daemon [_thread_blocked, id=10009]
  0xf37e3800 JavaThread "DestroyJavaVM" [_thread_blocked, id=9984]
  0xf37ee400 JavaThread "Thread-6" daemon [_thread_blocked, id=10006]
  0xf3a7c800 JavaThread "DirectoryMonitor.MonitorThread" daemon [_thread_blocked, id=10005]
  0xf3a73800 JavaThread "AWT Watchdog" daemon [_thread_blocked, id=10004]
  0xf3adb800 JavaThread "TileReaper" daemon [_thread_blocked, id=10003]
  0x093c3c00 JavaThread "process reaper" daemon [_thread_in_native, id=10001]
  0x093ac800 JavaThread "Timer-0" daemon [_thread_blocked, id=9999]
  0x093a8c00 JavaThread "AWT-EventQueue-0" [_thread_blocked, id=9997]
  0x093a8000 JavaThread "AWT-Shutdown" [_thread_blocked, id=9996]
  0x09378c00 JavaThread "AWT-XAWT" daemon [_thread_blocked, id=9994]
  0x09368400 JavaThread "Java2D Disposer" daemon [_thread_blocked, id=9993]
  0x09350000 JavaThread "Thread-1" daemon [_thread_blocked, id=9992]
  0x0923b400 JavaThread "Low Memory Detector" daemon [_thread_blocked, id=9990]
  0x09239c00 JavaThread "CompilerThread0" daemon [_thread_blocked, id=9989]
  0x09238800 JavaThread "Signal Dispatcher" daemon [_thread_blocked, id=9988]
  0x09230800 JavaThread "Finalizer" daemon [_thread_blocked, id=9987]
  0x0922f400 JavaThread "Reference Handler" daemon [_thread_blocked, id=9986]

Other Threads:
=>0x0922e000 VMThread [id=9985]
  0x09245000 WatcherThread [id=9991]

VM state: at safepoint (normal execution)

VM Mutex/Monitor currently owned by a thread: ([mutex/lock_event])
[0x09205178/0x092051a0] Threads_lock - owner thread: 0x0922e000
[0x09205638/0x09205650] Heap_lock - owner thread: 0x09449c00

Heap
 def new generation   total 83968K, used 9280K [0x55600000, 0x5b110000, 0x5ec40000)
  eden space 74688K,   0% used [0x55600000, 0x55600000, 0x59ef0000)
  from space 9280K,  100% used [0x5a800000, 0x5b110000, 0x5b110000)
  to   space 9280K,    0% used [0x59ef0000, 0x59ef0000, 0x5a800000)
 tenured generation   total 1233640K, used 1233529K [0x5ec40000, 0xaa0fa000, 0xcf800000)
   the space 1233640K,  99% used [0x5ec40000, 0xaa0de5c0, 0x8b4af400, 0xaa0fa000)
 compacting perm gen  total 13312K, used 13175K [0xcf800000, 0xd0500000, 0xd3800000)
   the space 13312K,  98% used [0xcf800000, 0xd04ddd70, 0xd04dde00, 0xd0500000)
    ro space 8192K,  69% used [0xd3800000, 0xd3d8f608, 0xd3d8f800, 0xd4000000)
    rw space 12288K,  57% used [0xd4000000, 0xd46eee98, 0xd46ef000, 0xd4c00000)

Dynamic libraries:
[ snip ]

VM Arguments:
jvm_args: -Dinstall4j.jvmDir=/home/berbmit/bin/LightZone/jre -Dinstall4j.appDir=/home/berbmit/bin/LightZone -Dexe4j.moduleName=/home/berbmit/bin/LightZone/LightZone -Dcom.lightcrafts.licensetype=ESD -Xmx2000000k
java_command: com.install4j.runtime.Launcher launch com.lightcrafts.platform.linux.LinuxLauncher true false /home/berbmit/bin/LightZone/LightZone.log /home/berbmit/bin/LightZone/LightZone.log false true false true true -1 -1 20 20 Arial 0,0,0 8 500 20 40 Arial 0,0,0 8 500 -1
Launcher Type: SUN_STANDARD

Environment Variables:
PATH=/home/berbmit/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
USERNAME=berbmit
LD_LIBRARY_PATH=/home/berbmit/bin/LightZone/jre/lib/i386/client:/home/berbmit/bin/LightZone/jre/lib/i386:/home/berbmit/bin/LightZone/jre/../lib/i386:/home/berbmit/bin/LightZone/.:
SHELL=/bin/bash
DISPLAY=:0.0

Signal Handlers:
SIGSEGV: [libjvm.so+0x3b29c0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
SIGBUS: [libjvm.so+0x3b29c0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
SIGFPE: [libjvm.so+0x309ec0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
SIGPIPE: SIG_IGN, sa_mask[0]=0x00000000, sa_flags=0x00000000
SIGILL: [libjvm.so+0x309ec0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
SIGUSR1: SIG_DFL, sa_mask[0]=0x00000000, sa_flags=0x00000000
SIGUSR2: [libjvm.so+0x30bef0], sa_mask[0]=0x00000000, sa_flags=0x10000004
SIGHUP: [libjvm.so+0x30b910], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
SIGINT: [libjvm.so+0x30b910], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
SIGQUIT: [libjvm.so+0x30b910], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
SIGTERM: [libjvm.so+0x30b910], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
SIGUSR2: [libjvm.so+0x30bef0], sa_mask[0]=0x00000000, sa_flags=0x10000004

---------------  S Y S T E M  ---------------

OS: squeeze/sid
uname: Linux 2.6.35-23-generic #41-Ubuntu SMP Wed Nov 24 11:55:36 UTC 2010 x86_64
libc: glibc 2.12.1 NPTL 2.12.1
rlimit: STACK 8192k, CORE 0k, NPROC infinity, NOFILE 1024, AS infinity
load average: 0.67 0.54 0.36

CPU: total 8 (8 cores per cpu, 2 threads per core) family 6 model 10 stepping 5, cmov, cx8, fxsr, mmx, sse, sse2, sse3, ssse3, ht

Memory: 4k page, physical 8191552k(3359308k free), swap 1016828k(1016828k free)

vm_info: Java HotSpot(TM) Client VM (1.6.0_03-b05) for linux-x86, built on Sep 24 2007 22:45:46 by "java_re" with gcc 3.2.1-7a (J2SE release)
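
Two details in the dump are worth reading closely. First, si_addr=0x00000008 is a tiny address, which usually means something dereferenced a field at a small offset of a NULL pointer. Second, the rlimit line shows CORE 0k, so core dumps were disabled on the user's machine; asking the user to run ulimit -c unlimited and mail back the core for gdb is one concrete Linux-side aid. Another is glibc's backtrace facility (execinfo.h), which prints the native frames at the point of a fault. Since the JVM installs its own SIGSEGV handler (see the Signal Handlers section above), the sketch below applies to a standalone C harness that exercises the suspect JNI logic in isolation; it is a minimal illustration written for this answer, not code from the question.

/* Minimal sketch: print a native backtrace on SIGSEGV in a
 * standalone test harness. Build with: gcc -g -rdynamic harness.c
 * (-rdynamic so backtrace_symbols_fd can resolve symbol names). */
#include <execinfo.h>
#include <signal.h>
#include <unistd.h>

static void crash_handler(int sig) {
    void *frames[64];
    int depth = backtrace(frames, 64);
    /* backtrace_symbols_fd() writes straight to a file descriptor
     * and does not call malloc(), so it is usable in a handler. */
    backtrace_symbols_fd(frames, depth, STDERR_FILENO);
    _exit(128 + sig);
}

int main(void) {
    signal(SIGSEGV, crash_handler);
    int *volatile p = 0;      /* volatile: keep the compiler from optimizing the fault away */
    return p[2];              /* reads NULL + 8 (4-byte int), faulting at 0x8 like si_addr above */
}

The unresolved V frames (libjvm.so+0x300e76 and friends) can likewise be mapped to symbols offline with addr2line -e libjvm.so 0x300e76, provided a non-stripped copy of the exact library build is available.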
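
The suspicion that JNI code is at fault also fits the dump: the crash is on the VMThread during a "generation collection for allocation", and a collector faulting mid-GC is a classic symptom of native code handing the VM a stale reference or corrupting the heap earlier. A hypothetical sketch of the most common such bug follows; the class and method names are invented for illustration and have nothing to do with the application in the log.

#include <jni.h>

/* BUG: caching a *local* reference across JNI calls. A local ref is
 * valid only until the current native call returns; the GC may later
 * move or reclaim the object, and the stale pointer then crashes a
 * subsequent collection, far from the code that caused the damage. */
static jobject cached_listener;

JNIEXPORT void JNICALL
Java_com_example_Native_setListener(JNIEnv *env, jclass cls, jobject listener) {
    cached_listener = listener;   /* wrong */
    /* FIX: promote it to a global reference instead:
     *   cached_listener = (*env)->NewGlobalRef(env, listener);
     * and release it later with:
     *   (*env)->DeleteGlobalRef(env, cached_listener);
     */
}

Running the application with -Xcheck:jni makes the JVM validate JNI usage at each call, which tends to turn a delayed GC-time crash like this one into an immediate warning at the offending call site.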

    Read the article
