Search Results

Search found 951 results on 39 pages for 'surface'.

Page 13 of 39

  • How do I get the co-ordinates of a mouse click on a transformed WPF control?

    - by BlackWasp
    I'm playing with a simple WPF application. One part of it includes a grid containing several controls. The grid is rotated using a LayoutTransform and a RotateTransform. I need to get the co-ordinates of a mouse click relative to the top-left of the grid, taking the rotation into account. To be clear, suppose I have a single drawing surface within the grid and no transform has been applied. I then click at location X = 20, Y = 10 and put a dot on the drawing surface at that point. If the grid is now rotated by 30 degrees and I click on the dot (which has also been moved by the rotation), the click position should still be X = 20, Y = 10.
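
    One common WPF approach, shown here as a minimal code-behind sketch (the name drawingSurface is a placeholder, assumed to be a Canvas declared with x:Name inside the rotated Grid): MouseButtonEventArgs.GetPosition reports the click in the coordinate space of the element you pass it, so transforms applied to the parent grid are already accounted for.

        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Input;
        using System.Windows.Media;
        using System.Windows.Shapes;

        public partial class MainWindow : Window
        {
            // Assumes <Canvas x:Name="drawingSurface" .../> inside the transformed Grid in XAML.
            private void DrawingSurface_MouseLeftButtonDown(object sender, MouseButtonEventArgs e)
            {
                // Position in the drawing surface's own coordinate space,
                // e.g. (20, 10) both before and after the grid is rotated.
                Point local = e.GetPosition(drawingSurface);

                // Drop a small dot at the clicked point to verify the coordinates.
                var dot = new Ellipse { Width = 4, Height = 4, Fill = Brushes.Red };
                Canvas.SetLeft(dot, local.X - 2);
                Canvas.SetTop(dot, local.Y - 2);
                drawingSurface.Children.Add(dot);
            }
        }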

    Read the article

  • Drag and drop an image from desktop to a web text editor (implementation in javascript)

    - by fatmatto
    I tried to write a reasonably short title, but I guess I failed. Hi everybody, here's what I'm trying to do: I want to implement a web text editor able to recognize when the user drags an image file over its editing surface, and have it automa(gically) start the upload and insert the image near the cursor position. In other words, I don't want the user to go through the usual "insert image, browse, OK" routine. At the moment I'm not very good at JavaScript. I know jQuery, but I don't have a clear idea of how to implement this, and I don't know whether there's an event handler that can help me in this situation. If not, I think there should be, or web apps would be missing some kind of interactivity. I've heard miracles about HTML5; could it help me? I've seen such things in Google Wave, but that surface doesn't seem to be a form field... Google Labs black magic, I guess. Thank you in advance.

    Read the article

  • Merging photo textures - (from calibrated cameras) - projected onto geometry

    - by freakTheMighty
    I am looking for papers/algorithms for merging projected textures onto geometry. To be more specific: given a set of fully calibrated cameras/photographs and geometry, how can we define a metric for choosing which photograph should be used to texture a given patch of the geometry? I can think of a few attributes one may seek to minimize, including the angle between the surface normal and the camera direction and the distance of the camera from the surface, as well as some parameterization of sharpness. The question is how these attributes get combined, and whether there are well-established existing solutions.
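
    One hedged way to combine the attributes above (an illustrative weighting, not taken from the question or from a particular paper) is a per-camera weight that favours head-on, close, sharp views, followed by a normalized blend over the cameras that see the patch:

        w_i(p) = \frac{\max\bigl(0,\ \mathbf{n}(p)\cdot\mathbf{v}_i(p)\bigr)^{\alpha}}{d_i(p)^{\beta}}\, s_i(p),
        \qquad
        c(p) = \frac{\sum_i w_i(p)\, c_i(p)}{\sum_i w_i(p)}

    Here n(p) is the surface normal at patch p, v_i(p) is the unit direction from the patch to camera i, d_i(p) the distance to camera i, s_i(p) a sharpness/resolution term, c_i(p) the colour sampled from photograph i, and alpha, beta are tunable exponents; picking the single camera with the largest w_i instead of blending gives a hard "best view" selection.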

    Read the article

  • How to color a mesh with values at the vertices in WPF 3D?

    - by Christo
    We've got a sphere which we want to display in 3D and color given a function that depends on spherical coordinates. The sphere was triangulated using a regular grid in (theta, phi), but this produced a lot of small triangles near the poles. In an attempt to reduce the number of triangles at the poles, we've changed our mesh generation to produce more evenly sized triangles over the surface. The first triangulation method had the advantage that we could easily create a texture and drape it over the surface. It seems that in WPF it isn't possible to assign colors to vertices the way one would in OpenGL or Direct3D. With the second triangulation method it isn't apparent how to go about generating the texture and setting the texture coordinates, since the vertices aren't aligned to a grid anymore. Maybe it would be possible to create a linear texture containing a color for each vertex, but then how will that affect the coloring? Will it still render smoothly over the triangle surfaces, as one would expect from per-vertex coloring?
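
    A minimal sketch of the linear-texture idea mentioned above (assuming the per-vertex values have already been normalized to [0, 1]; the names ColorMesh, mesh and values are placeholders, not from the question): use a one-dimensional gradient brush as the material and map each vertex's value to its texture U coordinate. WPF interpolates texture coordinates across each triangle, so the colouring varies smoothly, much like per-vertex colouring in OpenGL or Direct3D.

        using System.Windows;
        using System.Windows.Media;
        using System.Windows.Media.Media3D;

        static class VertexColoring
        {
            // Colours a MeshGeometry3D from per-vertex scalar values in [0, 1].
            public static GeometryModel3D ColorMesh(MeshGeometry3D mesh, double[] values)
            {
                // Horizontal gradient acting as a 1D colour lookup table.
                var palette = new LinearGradientBrush(
                    new GradientStopCollection
                    {
                        new GradientStop(Colors.Blue, 0.0),
                        new GradientStop(Colors.Green, 0.5),
                        new GradientStop(Colors.Red, 1.0)
                    },
                    new Point(0.0, 0.5), new Point(1.0, 0.5));

                mesh.TextureCoordinates.Clear();
                for (int i = 0; i < mesh.Positions.Count; i++)
                {
                    // U coordinate = the vertex's value; V is unused.
                    mesh.TextureCoordinates.Add(new Point(values[i], 0.5));
                }

                return new GeometryModel3D(mesh, new DiffuseMaterial(palette));
            }
        }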

    Read the article

  • C++ SDL State Machine Segfault

    - by user1602079
    The code compiles and builds fine, but it immediately segfaults. I've looked at this for a while and have no idea why. Any help is appreciated. Thank you! Here's the code: main.cpp #include "SDL/SDL.h" #include "Globals.h" #include "Core.h" #include "GameStates.h" #include "Introduction.h" int main(int argc, char** args) { if(core.Initilize() == false) { SDL_Quit(); } while(core.desiredstate != core.Quit) { currentstate->EventHandling(); currentstate->Logic(); core.ChangeState(); currentstate->Render(); currentstate->Update(); } SDL_Quit(); } Core.h #ifndef CORE_H #define CORE_H #include "SDL/SDL.h" #include <string> class Core { public: SDL_Surface* Load(std::string filename); void ApplySurface(int X, int Y, SDL_Surface* source, SDL_Surface* destination); void SetState(int newstate); void ChangeState(); enum state { Intro, STATES_NULL, Quit }; int desiredstate, stateID; bool Initilize(); }; #endif Core.cpp #include "Core.h" #include "SDL/SDL.h" #include "Globals.h" #include "Introduction.h" #include <string> /* Initilizes SDL subsystems */ bool Core::Initilize() { //Inits subsystems, reutrns false upon error if(SDL_Init(SDL_INIT_EVERYTHING) == -1) { return false; } SDL_WM_SetCaption("Game", NULL); return true; } /* Loads surfaces and optimizes them */ SDL_Surface* Core::Load(std::string filename) { //The surface to be optimized SDL_Surface* original = SDL_LoadBMP(filename.c_str()); //The optimized surface SDL_Surface* optimized = NULL; //Optimizes the image if it loaded properly if(original != NULL) { optimized = SDL_DisplayFormat(original); SDL_FreeSurface(original); } else { //returns NULL upon error return NULL; } return optimized; } /* Blits surfaces */ void Core::ApplySurface(int X, int Y, SDL_Surface* source, SDL_Surface* destination) { //Stores the coordinates of the surface SDL_Rect offsets; offsets.x = X; offsets.y = Y; //Bits the surface if both surfaces are present if(source != NULL && destination != NULL) { SDL_BlitSurface(source, NULL, destination, &offsets); } } /* Sets desiredstate to newstate */ void Core::SetState(int newstate) { if(desiredstate != Quit) { desiredstate = newstate; } } /* Changes the game state */ void Core::ChangeState() { if(desiredstate != STATES_NULL && desiredstate != Quit) { delete currentstate; switch(desiredstate) { case Intro: currentstate = new Introduction(); break; } stateID = desiredstate; desiredstate = core.STATES_NULL; } } Globals.h #ifndef GLOBALS_H #define GLOBALS_H #include "SDL/SDL.h" #include "Core.h" #include "GameStates.h" extern SDL_Surface* screen; extern Core core; extern GameStates* currentstate; #endif Globals.cpp #include "Globals.h" #include "SDL/SDL.h" #include "GameStates.h" SDL_Surface* screen = SDL_SetVideoMode(640, 480, 32, SDL_SWSURFACE); Core core; GameStates* currentstate = NULL; GameStates.h #ifndef GAMESTATES_H #define GAMESTATES_H class GameStates { public: virtual void EventHandling() = 0; virtual void Logic() = 0; virtual void Render() = 0; virtual void Update() = 0; }; #endif Introduction.h #ifndef INTRODUCTION_H #define INTRODUCTION_H #include "GameStates.h" #include "Globals.h" class Introduction : public GameStates { public: Introduction(); private: void EventHandling(); void Logic(); void Render(); void Update(); ~Introduction(); SDL_Surface* test; }; #endif Introduction.cpp #include "SDL/SDL.h" #include "Core.h" #include "Globals.h" #include "Introduction.h" /* Loads all the assets */ Introduction::Introduction() { test = core.Load("test.bmp"); } void Introduction::EventHandling() { SDL_Event event; 
while(SDL_PollEvent(&event)) { switch(event.type) { case SDL_QUIT: core.SetState(core.Quit); break; } } } void Introduction::Logic() { //to be coded } void Introduction::Render() { core.ApplySurface(30, 30, test, screen); } void Introduction::Update() { SDL_Flip(screen); } Introduction::~Introduction() { SDL_FreeSurface(test); } Sorry if the formatting is a bit off... Having to put four spaces for it to be put into a code block offset it a bit. I ran it through gdb and this is what I got: Program received signal SIGSEGV, Segmentation fault. 0x0000000000400e46 in main () Which isn't incredibly useful... Any help is appreciated. Thank you!

    Read the article

  • Generating data on unlevel background

    - by bluejay93
    I want to make an unlevel background and then generate some test data on it using Matlab. I was not clear when I asked this question earlier. So for this simple example,

        for i = 1:10
            for j = 1:10
                f(i,j) = X.^2 + Y.^2
            end
        end

    where X and Y have already been defined, it plots onto a flat surface. I don't want to distort the function itself, but I want the surface that it goes onto to be unlevel, changed by some degree or something. I hope that's a little clearer.
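
    One hedged reading of this (an assumption, not something stated in the question): keep f untouched and add a separate low-frequency "background" term to the surface the data sits on, for example

        z(x, y) = f(x, y) + b(x, y),\qquad
        b(x, y) = A\,\sin\!\left(\frac{2\pi x}{L}\right)\cos\!\left(\frac{2\pi y}{L}\right) + c_x\,x + c_y\,y

    where A and L set the amplitude and wavelength of the undulation and c_x, c_y add an overall tilt; setting A = c_x = c_y = 0 recovers the flat case.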

    Read the article

  • Proliant server will not accept new hard disks in RAID 1+0?

    - by Leigh
    I have a HP ProLiant DL380 G5, I have two logical drives configured with RAID. I have one logical drive RAID 1+0 with two 72 gb 10k sas 1 port spare no 376597-001. I had one hard disk fail and ordered a replacement. The configuration utility showed error and would not rebuild the RAID. I presumed a hard disk fault and ordered a replacement again. In the mean time I put the original failed disk back in the server and this started rebuilding. Currently shows ok status however in the log I can see hardware errors. The new disk has come and I again have the same problem of not accepting the hard disk. I have updated the P400 controller with the latest firmware 7.24 , but still no luck. The only difference I can see is the original drive has firmware 0103 (same as the RAID drive) and the new one has HPD2. Any advice would be appreciated. Thanks in advance Logs from server ctrl all show config Smart Array P400 in Slot 1 (sn: PAFGK0P9VWO0UQ) array A (SAS, Unused Space: 0 MB) logicaldrive 1 (68.5 GB, RAID 1, Interim Recovery Mode) physicaldrive 2I:1:1 (port 2I:box 1:bay 1, SAS, 73.5 GB, OK) physicaldrive 2I:1:2 (port 2I:box 1:bay 2, SAS, 72 GB, Failed array B (SAS, Unused Space: 0 MB) logicaldrive 2 (558.7 GB, RAID 5, OK) physicaldrive 1I:1:5 (port 1I:box 1:bay 5, SAS, 300 GB, OK) physicaldrive 2I:1:3 (port 2I:box 1:bay 3, SAS, 300 GB, OK) physicaldrive 2I:1:4 (port 2I:box 1:bay 4, SAS, 300 GB, OK) ctrl all show config detail Smart Array P400 in Slot 1 Bus Interface: PCI Slot: 1 Serial Number: PAFGK0P9VWO0UQ Cache Serial Number: PA82C0J9VWL8I7 RAID 6 (ADG) Status: Disabled Controller Status: OK Hardware Revision: E Firmware Version: 7.24 Rebuild Priority: Medium Expand Priority: Medium Surface Scan Delay: 15 secs Surface Scan Mode: Idle Wait for Cache Room: Disabled Surface Analysis Inconsistency Notification: Disabled Post Prompt Timeout: 0 secs Cache Board Present: True Cache Status: OK Cache Status Details: A cache error was detected. Run more information. 
Cache Ratio: 100% Read / 0% Write Drive Write Cache: Disabled Total Cache Size: 256 MB Total Cache Memory Available: 208 MB No-Battery Write Cache: Disabled Battery/Capacitor Count: 0 SATA NCQ Supported: True Array: A Interface Type: SAS Unused Space: 0 MB Status: Failed Physical Drive Array Type: Data One of the drives on this array have failed or has Logical Drive: 1 Size: 68.5 GB Fault Tolerance: RAID 1 Heads: 255 Sectors Per Track: 32 Cylinders: 17594 Strip Size: 128 KB Full Stripe Size: 128 KB Status: Interim Recovery Mode Caching: Enabled Unique Identifier: 600508B10010503956574F305551 Disk Name: \\.\PhysicalDrive0 Mount Points: C:\ 68.5 GB Logical Drive Label: A0100539PAFGK0P9VWO0UQ0E93 Mirror Group 0: physicaldrive 2I:1:2 (port 2I:box 1:bay 2, S Mirror Group 1: physicaldrive 2I:1:1 (port 2I:box 1:bay 1, S Drive Type: Data physicaldrive 2I:1:1 Port: 2I Box: 1 Bay: 1 Status: OK Drive Type: Data Drive Interface Type: SAS Size: 73.5 GB Rotational Speed: 10000 Firmware Revision: 0103 Serial Number: B379P8C006RK Model: HP DG072A9B7 PHY Count: 2 PHY Transfer Rate: Unknown, Unknown physicaldrive 2I:1:2 Port: 2I Box: 1 Bay: 2 Status: Failed Drive Type: Data Drive Interface Type: SAS Size: 72 GB Rotational Speed: 15000 Firmware Revision: HPD9 Serial Number: D5A1PCA04SL01244 Model: HP EH0072FARUA PHY Count: 2 PHY Transfer Rate: Unknown, Unknown Array: B Interface Type: SAS Unused Space: 0 MB Status: OK Array Type: Data Logical Drive: 2 Size: 558.7 GB Fault Tolerance: RAID 5 Heads: 255 Sectors Per Track: 32 Cylinders: 65535 Strip Size: 64 KB Full Stripe Size: 128 KB Status: OK Caching: Enabled Parity Initialization Status: Initialization Co Unique Identifier: 600508B10010503956574F305551 Disk Name: \\.\PhysicalDrive1 Mount Points: E:\ 558.7 GB Logical Drive Label: AF14FD12PAFGK0P9VWO0UQD007 Drive Type: Data physicaldrive 1I:1:5 Port: 1I Box: 1 Bay: 5 Status: OK Drive Type: Data Drive Interface Type: SAS Size: 300 GB Rotational Speed: 10000 Firmware Revision: HPD4 Serial Number: 3SE07QH300009923X1X3 Model: HP DG0300BALVP Current Temperature (C): 32 Maximum Temperature (C): 45 PHY Count: 2 PHY Transfer Rate: Unknown, Unknown physicaldrive 2I:1:3 Port: 2I Box: 1 Bay: 3 Status: OK Drive Type: Data Drive Interface Type: SAS Size: 300 GB Rotational Speed: 10000 Firmware Revision: HPD4 Serial Number: 3SE0AHVH00009924P8F3 Model: HP DG0300BALVP Current Temperature (C): 34 Maximum Temperature (C): 47 PHY Count: 2 PHY Transfer Rate: Unknown, Unknown physicaldrive 2I:1:4 Port: 2I Box: 1 Bay: 4 Status: OK Drive Type: Data Drive Interface Type: SAS Size: 300 GB Rotational Speed: 10000 Firmware Revision: HPD4 Serial Number: 3SE08NAK00009924KWD6 Model: HP DG0300BALVP Current Temperature (C): 35 Maximum Temperature (C): 47 PHY Count: 2 PHY Transfer Rate: Unknown, Unknown

    Read the article

  • Silverlight 4 Tools for VS 2010 and WCF RIA Services Released

    - by ScottGu
    The final release of the Silverlight 4 Tools for Visual Studio 2010 and WCF RIA Services is now available for download.  Download and Install If you already have Visual Studio 2010 installed (or the free Visual Web Developer 2010 Express), then you can install both the Silverlight 4 Tooling Support as well as WCF RIA Services support by downloading and running this setup package (note: please make sure to uninstall the preview release of the Silverlight 4 Tools for VS 2010 if you have previously installed that).  The Silverlight 4 Tools for VS 2010 package extends the Silverlight support built into Visual Studio 2010 and enables support for Silverlight 4 applications as well.  It also installs WCF RIA Services application templates and libraries: Today’s release includes the English edition of the Silverlight 4 Tooling – localized versions will be available next month for other Visual Studio languages as well. Silverlight Tooling Support Visual Studio 2010 includes rich tooling support for building Silverlight and WPF applications. It includes a WYSIWYG designer surface that enables you to easily use controls to construct UI – including the ability to take advantage of layout containers, and apply styles and resources: The VS 2010 designer enables you to leverage the rich data binding support within Silverlight and WPF, and easily wire-up bindings on controls.  The Data Sources window within Silverlight projects can be used to reference POCO objects (plain old CLR objects), WCF Services, WCF RIA Services client proxies or SharePoint Lists.  For example, let’s assume we add a “Person” class like below to our project: We could then add it to the Data Source window which will cause it to show up like below in the IDE: We can optionally customize the default UI control types that are associated for each property on the object.  For example, below we’ll default the BirthDate property to be represented by a “DatePicker” control: And then when we drag/drop the Person type from the Data Sources onto the design-surface it will automatically create UI controls that are bound to the properties of our Person class: VS 2010 allows you to optionally customize each UI binding further by selecting a control, and then right-click on any of its properties within the property-grid and pull up the “Apply Bindings” dialog: This will bring up a floating data-binding dialog that enables you to easily configure things like the binding path on the data source object, specify a format convertor, specify string-format settings, specify how validation errors should be handled, etc: In addition to providing WYSIWYG designer support for WPF and Silverlight applications, VS 2010 also provides rich XAML intellisense and code editing support – enabling a rich source editing environment. Silverlight 4 Tool Enhancements Today’s Silverlight 4 Tooling Release for VS 2010 includes a bunch of nice new features.  These include: Support for Silverlight Out of Browser Applications and Elevated Trust Applications You can open up a Silverlight application’s project properties window and click the “Enable Running Application Out of Browser” checkbox to enable you to install an offline, out of browser, version of your Silverlight 4 application.  
You can then customize a number of “out of browser” settings of your application within Visual Studio: Notice above how you can now indicate that you want to run with elevated trust, with hardware graphics acceleration, as well as customize things like the Window style of the application (allowing you to build a nice polished window style for consumer applications). Support for Implicit Styles and “Go to Value Definition” Support: Silverlight 4 now allows you to define “implicit styles” for your applications.  This allows you to style controls by type (for example: have a default look for all buttons) and avoid you having to explicitly reference styles from each control.  In addition to honoring implicit styles on the designer-surface, VS 2010 also now allows you to right click on any control (or on one of it properties) and choose the “Go to Value Definition…” context menu to jump to the XAML where the style is defined, and from there you can easily navigate onward to any referenced resources.  This makes it much easier to figure out questions like “why is my button red?”: Style Intellisense VS 2010 enables you to easily modify styles you already have in XAML, and now you get intellisense for properties and their values within a style based on the TargetType of the specified control.  For example, below we have a style being set for controls of type “Button” (this is indicated by the “TargetType” property).  Notice how intellisense now automatically shows us properties for the Button control (even within the <Setter> element): Great Video - Watch the Silverlight Designer Features in Action You can see all of the above Silverlight 4 Tools for Visual Studio 2010 features (and some more cool ones I haven’t mentioned) demonstrated in action within this 20 minute Silverlight.TV video on Channel 9: WCF RIA Services Today we also shipped the V1 release of WCF RIA Services.  It is included and automatically installed as part of the Silverlight 4 Tools for Visual Studio 2010 setup. WCF RIA Services makes it much easier to build business applications with Silverlight.  It simplifies the traditional n-tier application pattern by bringing together the ASP.NET and Silverlight platforms using the power of WCF for communication.  WCF RIA Services provides a pattern to write application logic that runs on the mid-tier and controls access to data for queries, changes and custom operations. It also provides end-to-end support for common tasks such as data validation, authentication and authorization based on roles by integrating with Silverlight components on the client and ASP.NET on the mid-tier. Put simply – it makes it much easier to query data stored on a server from a client machine, optionally manipulate/modify the data on the client, and then save it back to the server.  It supports a validation architecture that helps ensure that your data is kept secure and business rules are applied consistently on both the client and middle-tiers. WCF RIA Services uses WCF for communication between the client and the server  It supports both an optimized .NET to .NET binary serialization format, as well as a set of open extensions to the ATOM format known as ODATA and an optional JavaScript Object Notation (JSON) format that can be used by any client. You can hear Nikhil and Dinesh talk a little about WCF RIA Services in this 13 minutes Channel 9 video. 
Putting it all Together – the Silverlight 4 Training Kit Check out the Silverlight 4 Training Kit to learn more about how to build business applications with Silverlight 4, Visual Studio 2010 and WCF RIA Services. The training kit includes 8 modules, 25 videos, and several hands-on labs that explain Silverlight 4 and WCF RIA Services concepts and walks you through building an end-to-end application with them.    The training kit is available for free and is a great way to get started. Summary I’m really excited about today’s release – as they really complete the Silverlight development story and deliver a great end to end runtime + tooling story for building applications.  All of the above features are available for use both in VS 2010 as well as the free Visual Web Developer 2010 Express Edition – making it really easy to get started building great solutions. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Adding an expression based image in a client report definition file (RDLC)

    - by rajbk
    In previous posts, I showed you how to create a report using Visual Studio 2010 and how to add a hyperlink to the report.  In this post, I show you how to add an expression based image to each row of the report. This similar to displaying a checkbox column for Boolean values.  A sample project is attached to the bottom of this post. To start off, download the project we created earlier from here.  The report we created had a “Discontinued” column of type Boolean. We are going to change it to display an “available” icon or “unavailable” icon based on the “Discontinued” row value.    Load the project and double click on Products.rdlc. With the report design surface active, you will see the “Report Data” tool window. Right click on the Images folder and select “Add Image..”   Add the available_icon.png and discontinued_icon.png images (the sample project at the end of this post has the icon png files)    You can see the images we added in the “Report Data” tool window.   Drag and drop the available_icon into the “Discontinued” column row (not the header) We get a dialog box which allows us to set the image properties. We will add an expression that specifies the image to display based the “Discontinued” value from the Product table. Click on the expression (fx) button.   Add the following expression : = IIf(Fields!Discontinued.Value = True, “discontinued_icon”, “available_icon”)   Save and exit all dialog boxes. In the report design surface, resize the column header and change the text from “Discontinued” to “In Production”.   (Optional) Right click on the image cell (not header) , go to “Image Properties..” and offset it by 5pt from the left. (Optional) Change the border color since it is not set by default for image columns. We are done adding our image column! Compile the application and run it. You will see that the “In Production” column has red ‘x’ icons for discontinued products. Download the VS 2010 sample project NorthwindReportsImage.zip Other Posts Adding a hyperlink in a client report definition file (RDLC) Rendering an RDLC directly to the Response stream in ASP.NET MVC ASP.NET MVC Paging/Sorting/Filtering using the MVCContrib Grid and Pager Localization in ASP.NET MVC 2 using ModelMetadata Setting up Visual Studio 2010 to step into Microsoft .NET Source Code Running ASP.NET Webforms and ASP.NET MVC side by side Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework

    Read the article

  • Interpolating height on opengl quad

    - by ThePlague
    Okay, I've got a quad (see picture) and want to find the height at any point in it. Is there a way to do this accurately, as bilinear interpolation doesn't work in this case? As you can see, the tree's trunk is well beneath the terrain and should be at the same height as the trunks around it. I know the altitude of the vertices marked by a red circle, and the centre red circle is roughly where the base should be; I've also crudely added a line, as that surface is not flat.
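
    A common approach (a sketch, not taken from the question): since the terrain quad is actually rendered as two triangles, first find which triangle the query point (x, z) falls in, then interpolate the height barycentrically from that triangle's vertices a, b, c (taken from the quad's known corners):

        D = (b_z - c_z)(a_x - c_x) + (c_x - b_x)(a_z - c_z)

        \lambda_a = \frac{(b_z - c_z)(x - c_x) + (c_x - b_x)(z - c_z)}{D},\qquad
        \lambda_b = \frac{(c_z - a_z)(x - c_x) + (a_x - c_x)(z - c_z)}{D},\qquad
        \lambda_c = 1 - \lambda_a - \lambda_b

        y(x, z) = \lambda_a\, a_y + \lambda_b\, b_y + \lambda_c\, c_y

    This matches the renderer exactly as long as the diagonal used to split the quad here is the same one used in the terrain mesh.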

    Read the article

  • Server Core: Best Practice for Applications on Windows Server

    - by The Official Microsoft IIS Site
    I have been talking with a number of customers, CSOs, CIOs and industry professionals over the past few weeks and I realized that the availability and benefits of using the Server Core option of Windows Server 2008 or Windows Server 2008 R2 was not as widely known as I think it should be. Windows Server Core provides a minimal installation environment for running specific server roles, which reduces the maintenance and management requirements and the attack surface for those server roles. The following...(read more)

    Read the article

  • Segfault when iterating over a map<string, string> and drawing its contents using SDL_TTF

    - by Michael Stahre
    I'm not entirely sure this question belongs on gamedev.stackexchange, but I'm technically working on a game and working with SDL, so it might not be entirely offtopic. I've written a class called DebugText. The point of the class is to have a nice way of printing values of variables to the game screen. The idea is to call SetDebugText() with the variables in question every time they change or, as is currently the case, every time the game's Update() is called. The issue is that when iterating over the map that contains my variables and their latest updated values, I get segfaults. See the comments in DrawDebugText() below, it specifies where the error happens. I've tried splitting the calls to it-first and it-second into separate lines and found that the problem doesn't always happen when calling it-first. It alters between it-first and it-second. I can't find a pattern. It doesn't fail on every call to DrawDebugText() either. It might fail on the third time DrawDebugText() is called, or it might fail on the fourth. Class header: #ifndef CLIENT_DEBUGTEXT_H #define CLIENT_DEBUGTEXT_H #include <Map> #include <Math.h> #include <sstream> #include <SDL.h> #include <SDL_ttf.h> #include "vector2.h" using std::string; using std::stringstream; using std::map; using std::pair; using game::Vector2; namespace game { class DebugText { private: TTF_Font* debug_text_font; map<string, string>* debug_text_list; public: void SetDebugText(string var, bool value); void SetDebugText(string var, float value); void SetDebugText(string var, int value); void SetDebugText(string var, Vector2 value); void SetDebugText(string var, string value); int DrawDebugText(SDL_Surface*, SDL_Rect*); void InitDebugText(); void Clear(); }; } #endif Class source file: #include "debugtext.h" namespace game { // Copypasta function for handling the toString conversion template <class T> inline string to_string (const T& t) { stringstream ss (stringstream::in | stringstream::out); ss << t; return ss.str(); } // Initializes SDL_TTF and sets its font void DebugText::InitDebugText() { if(TTF_WasInit()) TTF_Quit(); TTF_Init(); debug_text_font = TTF_OpenFont("LiberationSans-Regular.ttf", 16); TTF_SetFontStyle(debug_text_font, TTF_STYLE_NORMAL); } // Iterates over the current debug_text_list and draws every element on the screen. // After drawing with SDL you need to get a rect specifying the area on the screen that was changed and tell SDL that this part of the screen needs to be updated. this is done in the game's Draw() function // This function sets rects_to_update to the new list of rects provided by all of the surfaces and returns the number of rects in the list. 
These two parameters are used in Draw() when calling on SDL_UpdateRects(), which takes an SDL_Rect* and a list length int DebugText::DrawDebugText(SDL_Surface* screen, SDL_Rect* rects_to_update) { if(debug_text_list == NULL) return 0; if(!TTF_WasInit()) InitDebugText(); rects_to_update = NULL; // Specifying the font color SDL_Color font_color = {0xff, 0x00, 0x00, 0x00}; // r, g, b, unused int row_count = 0; string line; // The iterator variable map<string, string>::iterator it; // Gets the iterator and iterates over it for(it = debug_text_list->begin(); it != debug_text_list->end(); it++) { // Takes the first value (the name of the variable) and the second value (the value of the parameter in string form) //---------THIS LINE GIVES ME SEGFAULTS----- line = it->first + ": " + it->second; //------------------------------------------ // Creates a surface with the text on it that in turn can be rendered to the screen itself later SDL_Surface* debug_surface = TTF_RenderText_Solid(debug_text_font, line.c_str(), font_color); if(debug_surface == NULL) { // A standard check for errors fprintf(stderr, "Error: %s", TTF_GetError()); return NULL; } else { // If SDL_TTF did its job right, then we now set a destination rect row_count++; SDL_Rect dstrect = {5, 5, 0, 0}; // x, y, w, h dstrect.x = 20; dstrect.y = 20*row_count; // Draws the surface with the text on it to the screen int res = SDL_BlitSurface(debug_surface,NULL,screen,&dstrect); if(res != 0) { //Just an error check fprintf(stderr, "Error: %s", SDL_GetError()); return NULL; } // Creates a new rect to specify the area that needs to be updated with SDL_Rect* new_rect_to_update = (SDL_Rect*) malloc(sizeof(SDL_Rect)); new_rect_to_update->h = debug_surface->h; new_rect_to_update->w = debug_surface->w; new_rect_to_update->x = dstrect.x; new_rect_to_update->y = dstrect.y; // Just freeing the surface since it isn't necessary anymore SDL_FreeSurface(debug_surface); // Creates a new list of rects with room for the new rect SDL_Rect* newtemp = (SDL_Rect*) malloc(row_count*sizeof(SDL_Rect)); // Copies the data from the old list of rects to the new one memcpy(newtemp, rects_to_update, (row_count-1)*sizeof(SDL_Rect)); // Adds the new rect to the new list newtemp[row_count-1] = *new_rect_to_update; // Frees the memory used by the old list free(rects_to_update); // And finally redirects the pointer to the old list to the new list rects_to_update = newtemp; newtemp = NULL; } } // When the entire map has been iterated over, return the number of lines that were drawn, ie. the number of rects in the returned rect list return row_count; } // The SetDebugText used by all the SetDebugText overloads // Takes two strings, inserts them into the map as a pair void DebugText::SetDebugText(string var, string value) { if (debug_text_list == NULL) { debug_text_list = new map<string, string>(); } debug_text_list->erase(var); debug_text_list->insert(pair<string, string>(var, value)); } // Writes the bool to a string and calls SetDebugText(string, string) void DebugText::SetDebugText(string var, bool value) { string result; if (value) result = "True"; else result = "False"; SetDebugText(var, result); } // Does the same thing, but uses to_string() to convert the float void DebugText::SetDebugText(string var, float value) { SetDebugText(var, to_string(value)); } // Same as above, but int void DebugText::SetDebugText(string var, int value) { SetDebugText(var, to_string(value)); } // Vector2 is a struct of my own making. 
It contains the two float vars x and y void DebugText::SetDebugText(string var, Vector2 value) { SetDebugText(var + ".x", to_string(value.x)); SetDebugText(var + ".y", to_string(value.y)); } // Empties the list. I don't actually use this in my code. Shame on me for writing something I don't use. void DebugText::Clear() { if(debug_text_list != NULL) debug_text_list->clear(); } }

    Read the article

  • Oracle Announces Availability of Oracle Exaskeleton with Extreme Scale

    - by J Swaroop
    Re-posting Bruce Tierney's original post - albeit a day late: I reckon this is Oracle's most interesting launch this year. Enjoy! The World’s First Human Scale Body Surface (HSBS) Designed to Toughen Spineless Wimps April 1, 2012 Building on the success of Oracle Exalogic, Oracle Exadata, and Oracle Exalytics, Oracle today announced the general availability of Oracle Exaskeleton, toughening up spineless wimps across the globe through the introduction of extreme scalability over the human body leveraging a revolutionary new technology called Human Scale Body Surface (HSBS). First Customer Ship (FCS) was received by the little known and mostly unsuccessful superhero Awkwardman. After applying Oracle Exaskeleton with extreme scale, he has since rebranded himself as Aquaman. Said Aquaman, “I used to feel so helpless in my skin…now I feel like…well…a highly scaled Engineered System thanks to Oracle!” Thousand of meek and mild individuals eagerly lined up outside Oracle Corporation’s Redwood Shores office to purchase the new Oracle Exaskeleton, with the hope of finally gaining the spine they never had. Unfortunately for the individuals, a bully was spotted allegedly kicking the sand covering the beaches of Redwood Shores into the still spineless Exaskeleton hopefuls. Supporting Quotes “Industry analysts are inquiring if Oracle Exaskeleton is a radical departure from Oracle’s traditional enterprise focus into new markets”, said Oracle representative Sabrina Twich, “Oracle has extensive expertise in unified backbone solutions for application infrastructures…this is simply a new port to the human body combining our Business Intelligence (BI) and RDBC (Remote Direct Brain Cell) technologies.” “With this release of Oracle Exaskeleton, Oracle has redefined scalability. Software and hardware vendors had it all wrong” said the Director of Oracle Exaskeleton, “Scalability for hardware is like…um…you know…so scale-ful. No, wait…can I say that again? I didn’t get that right…Scalability is hardware-on-demand with public and private…hybrid clouds, no…<long pause>…Scalability for… nevermind, I don’t want to be in this stupid press release anyway” Releases An upcoming Oracle Exaskeleton service pack release will include a new datasheet with an extensive library of three-letter acronyms (TLAs) as well as the introduction of more four-letter acronyms (FLAs) since technologies vendors have used up almost all of the 17,576 TLA permutations (TLAPs). About Oracle Oracle engineers hardware and software to work together in the cloud and in your data center. It would be an amazing coincidence if any of this is true in some secret Oracle lab, but I doubt it. Trademarks Really…you’re still reading this? Cool! Aquaman - First Customer Ship (FCS) - Oracle Exaskeleton

    Read the article

  • Deferred rendering with VSM - Scaling light depth loses moments

    - by user1423893
    I'm calculating my shadow term using a VSM method. This works correctly when using forward rendered lights but fails with deferred lights. // Shadow term (1 = no shadow) float shadow = 1; // [Light Space -> Shadow Map Space] // Transform the surface into light space and project // NB: Could be done in the vertex shader, but doing it here keeps the // "light shader" abstraction and doesn't limit the number of shadowed lights float4x4 LightViewProjection = mul(LightView, LightProjection); float4 surf_tex = mul(position, LightViewProjection); // Re-homogenize // 'w' component is not used in later calculations so no need to homogenize (it will equal '1' if homogenized) surf_tex.xyz /= surf_tex.w; // Rescale viewport to be [0,1] (texture coordinate system) float2 shadow_tex; shadow_tex.x = surf_tex.x * 0.5f + 0.5f; shadow_tex.y = -surf_tex.y * 0.5f + 0.5f; // Half texel offset //shadow_tex += (0.5 / 512); // Scaled distance to light (instead of 'surf_tex.z') float rescaled_dist_to_light = dist_to_light / LightAttenuation.y; //float rescaled_dist_to_light = surf_tex.z; // [Variance Shadow Map Depth Calculation] // No filtering float2 moments = tex2D(ShadowSampler, shadow_tex).xy; // Flip the moments values to bring them back to their original values moments.x = 1.0 - moments.x; moments.y = 1.0 - moments.y; // Compute variance float E_x2 = moments.y; float Ex_2 = moments.x * moments.x; float variance = E_x2 - Ex_2; variance = max(variance, Bias.y); // Surface is fully lit if the current pixel is before the light occluder (lit_factor == 1) // One-tailed inequality valid if float lit_factor = (rescaled_dist_to_light <= moments.x - Bias.x); // Compute probabilistic upper bound (mean distance) float m_d = moments.x - rescaled_dist_to_light; // Chebychev's inequality float p = variance / (variance + m_d * m_d); p = ReduceLightBleeding(p, Bias.z); // Adjust the light color based on the shadow attenuation shadow *= max(lit_factor, p); This is what I know for certain so far: The lighting is correct if I do not try and calculate the shadow term. (No shadows) The shadow term is correct when calculated using forward rendered lighting. (VSM works with forward rendered lights) With the current rescaled light distance (lightAttenuation.y is the far plane value): float rescaled_dist_to_light = dist_to_light / LightAttenuation.y; The light is correct and the shadow appears to be zoomed in and misses the blurring: When I do not rescale the light and use the homogenized 'surf_tex': float rescaled_dist_to_light = surf_tex.z; the shadows are blurred correctly but the lighting is incorrect and the cube model is no longer lit Why is scaling by the far plane value (LightAttenuation.y) zooming in too far? The only other factor involved is my world pixel position, which is calculated as follows: // [Position] float4 position; // [Screen Position] position.xy = input.PositionClone.xy; // Use 'x' and 'y' components already homogenized for uv coordinates above position.z = tex2D(DepthSampler, texCoord).r; // No need to homogenize 'z' component position.z = 1.0 - position.z; position.w = 1.0; // 1.0 = position.w / position.w // [World Position] position = mul(position, CameraViewProjectionInverse); // Re-homogenize position (xyz AND w, otherwise shadows will bend when camera is close) position.xyz /= position.w; position.w = 1.0; Using the inverse matrix of the camera's view x projection matrix does work for lighting but maybe it is incorrect for shadow calculation? 
EDIT: Light calculations for shadow including 'dist_to_light' // Work out the light position and direction in world space float3 light_position = float3(LightViewInverse._41, LightViewInverse._42, LightViewInverse._43); // Direction might need to be negated float3 light_direction = float3(-LightViewInverse._31, -LightViewInverse._32, -LightViewInverse._33); // Unnormalized light vector float3 dir_to_light = light_position - position; // Direction from vertex float dist_to_light = length(dir_to_light); // Normalise 'toLight' vector for lighting calculations dir_to_light = normalize(dir_to_light); EDIT2: These are the calculations for the moments (depth) //============================================= //---[Vertex Shaders]-------------------------- //============================================= DepthVSOutput depth_VS( float4 Position : POSITION, uniform float4x4 shadow_view, uniform float4x4 shadow_view_projection) { DepthVSOutput output = (DepthVSOutput)0; // First transform position into world space float4 position_world = mul(Position, World); output.position_screen = mul(position_world, shadow_view_projection); output.light_vec = mul(position_world, shadow_view).xyz; return output; } //============================================= //---[Pixel Shaders]--------------------------- //============================================= DepthPSOutput depth_PS(DepthVSOutput input) { DepthPSOutput output = (DepthPSOutput)0; // Work out the depth of this fragment from the light, normalized to [0, 1] float2 depth; depth.x = length(input.light_vec) / FarPlane; depth.y = depth.x * depth.x; // Flip depth values to avoid floating point inaccuracies depth.x = 1.0f - depth.x; depth.y = 1.0f - depth.y; output.depth = depth.xyxy; return output; } EDIT 3: I have tried the folloiwng: float4 pp; pp.xy = input.PositionClone.xy; // Use 'x' and 'y' components already homogenized for uv coordinates above pp.z = tex2D(DepthSampler, texCoord).r; // No need to homogenize 'z' component pp.z = 1.0 - pp.z; pp.w = 1.0; // 1.0 = position.w / position.w // Determine the depth of the pixel with respect to the light float4x4 LightViewProjection = mul(LightView, LightProjection); float4x4 matViewToLightViewProj = mul(CameraViewProjectionInverse, LightViewProjection); float4 vPositionLightCS = mul(pp, matViewToLightViewProj); float fLightDepth = vPositionLightCS.z / vPositionLightCS.w; // Transform from light space to shadow map texture space. float2 vShadowTexCoord = 0.5 * vPositionLightCS.xy / vPositionLightCS.w + float2(0.5f, 0.5f); vShadowTexCoord.y = 1.0f - vShadowTexCoord.y; // Offset the coordinate by half a texel so we sample it correctly vShadowTexCoord += (0.5f / 512); //g_vShadowMapSize This suffers the same problem as the second picture. I have tried storing the depth based on the view x projection matrix: output.position_screen = mul(position_world, shadow_view_projection); //output.light_vec = mul(position_world, shadow_view); output.light_vec = output.position_screen; depth.x = input.light_vec.z / input.light_vec.w; This gives a shadow that has lots surface acne due to horrible floating point precision errors. Everything is lit correctly though. EDIT 4: Found an OpenGL based tutorial here I have followed it to the letter and it would seem that the uv coordinates for looking up the shadow map are incorrect. The source uses a scaled matrix to get the uv coordinates for the shadow map sampler /// <summary> /// The scale matrix is used to push the projected vertex into the 0.0 - 1.0 region. 
/// Similar in role to a * 0.5 + 0.5, where -1.0 < a < 1.0. /// <summary> const float4x4 ScaleMatrix = float4x4 ( 0.5, 0.0, 0.0, 0.0, 0.0, -0.5, 0.0, 0.0, 0.0, 0.0, 0.5, 0.0, 0.5, 0.5, 0.5, 1.0 ); I had to negate the 0.5 for the y scaling (M22) in order for it to work but the shadowing is still not correct. Is this really the correct way to scale? float2 shadow_tex; shadow_tex.x = surf_tex.x * 0.5f + 0.5f; shadow_tex.y = surf_tex.y * -0.5f + 0.5f; The depth calculations are exactly the same as the source code yet they still do not work, which makes me believe something about the uv calculation above is incorrect.
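
    For reference, the bound the pixel shader above computes is the standard VSM Chebyshev estimate (written here from the code, with the stored moments un-flipped):

        \mu = M_1,\qquad \sigma^2 = M_2 - M_1^2,\qquad
        p_{\max}(t) = \frac{\sigma^2}{\sigma^2 + (t - \mu)^2}\quad (t > \mu)

        \text{shadow} = \max\bigl(\mathbf{1}[\,t \le \mu - \text{bias}\,],\ p_{\max}(t)\bigr)

    Here M_1, M_2 are the two stored moments and t is the receiver's distance to the light; the bound is only meaningful when t is expressed in exactly the same depth metric as the moments written by the depth pass, which in the code above is length(light_vec) / FarPlane.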

    Read the article

  • Oracle Announces Availability of Oracle Exaskeleton with Extreme Scale

    - by Bruce Tierney
    The World’s First Human Scale Body Surface (HSBS) Designed to Toughen Spineless Wimps April 1, 2012 Building on the success of Oracle Exalogic, Oracle Exadata, and Oracle Exalytics, Oracle today announced the general availability of Oracle Exaskeleton, toughening up spineless wimps across the globe through the introduction of extreme scalability over the human body leveraging a revolutionary new technology called Human Scale Body Surface (HSBS). First Customer Ship (FCS) was received by the little known and mostly unsuccessful superhero Awkwardman. After applying Oracle Exaskeleton with extreme scale, he has since rebranded himself as Aquaman. Said Aquaman, “I used to feel so helpless in my skin…now I feel like…well…a highly scaled Engineered System thanks to Oracle!” Thousand of meek and mild individuals eagerly lined up outside Oracle Corporation’s Redwood Shores office to purchase the new Oracle Exaskeleton, with the hope of finally gaining the spine they never had. Unfortunately for the individuals, a bully was spotted allegedly kicking the sand covering the beaches of Redwood Shores into the still spineless Exaskeleton hopefuls. Supporting Quotes “Industry analysts are inquiring if Oracle Exaskeleton is a radical departure from Oracle’s traditional enterprise focus into new markets”, said Oracle representative Sabrina Twich, “Oracle has extensive expertise in unified backbone solutions for application infrastructures…this is simply a new port to the human body combining our Business Intelligence (BI) and RDBC (Remote Direct Brain Cell) technologies.” “With this release of Oracle Exaskeleton, Oracle has redefined scalability. Software and hardware vendors had it all wrong” said the Director of Oracle Exaskeleton, “Scalability for hardware is like…um…you know…so scale-ful. No, wait…can I say that again? I didn’t get that right…Scalability is hardware-on-demand with public and private…hybrid clouds, no…<long pause>…Scalability for… nevermind, I don’t want to be in this stupid press release anyway” Releases An upcoming Oracle Exaskeleton service pack release will include a new datasheet with an extensive library of three-letter acronyms (TLAs) as well as the introduction of more four-letter acronyms (FLAs) since technologies vendors have used up almost all of the 17,576 TLA permutations (TLAPs). About Oracle Oracle engineers hardware and software to work together in the cloud and in your data center. It would be an amazing coincidence if any of this is true in some secret Oracle lab, but I doubt it. Trademarks Really…you’re still reading this? Cool! Aquaman - First Customer Ship (FCS) - Oracle Exaskeleton

    Read the article

  • DIY $2 Flash Diffuser [Video]

    - by Jason Fitzpatrick
    This simple flash diffuser is cheap to make, easy to assemble, and offers a wide surface area to bounce your flash. The video comes to us courtesy of StepByStep Photography, building off a design by Chuck Gardner (follow the link below to read Chuck’s full tutorial and learn a bit about using a bounce flash). DIY Diffuser for Hot-Shoe Flash [via DIY Photography]

    Read the article

  • Excellent Windows Azure benchmarks

    - by Sarang
    The Extreme Computing Group has released a fairly comprehensive set of benchmarks for almost all aspects of Windows Azure. They have also provided the source code, to alleviate any doubts that may surface with the MSFT logo lurking around the top right of their homepage :) (which also resides at a cloudapp.net URL). The code is simple and the tests comprehensive enough to hold as data points for customer interactions. Add to that the clean, no-nonsense Silverlight charts used to render the benchmarks and you are set to sell.

    Read the article

  • Cracking open five of the best open source easter eggs

    ars Technica: "A number of humorous yet undocumented features are hiding beneath the surface of some of the most popular open source software applications. Although easter eggs are generally easy to spot when you can look at an application's source code, there are a few that aren't widely known."

    Read the article

  • My mouse pointer disappears on "system surfaces" , like background picture, menus, background for all kind of system menus

    - by Siegfried
    I have an annoying problem with Ubuntu 11.10: my mouse pointer disappears on the Ubuntu surface (not in programs like Firefox or LibreOffice). The mouse pointer is there, but its size is only one pixel, and against the white background of e.g. "System" it is not seen at all, although with "Ctrl" I can see where it should be. I have not found any place where I can change the size or colors of the mouse pointer. Can anybody help me?

    Read the article

  • A community of FOSS lawyers?

    Opensource.com: "There is a fairly common perception among FOSS hackers that there is no community of FOSS lawyers. Scratch the surface, though, and it turns out that, despite our handicaps, the FOSS legal community is there and growing."

    Read the article

  • Titan – The Mystery of the Missing Waves

    - by Akemi Iwaya
    The moon Titan holds a unique distinction…it is the only other known ‘object’ in our solar system with a dense atmosphere and stable bodies of surface liquid. Unlike Earth though, there are no waves on Titan. This mystery has planetary scientists baffled and on a quest to understand this incredible phenomenon. Titan: The Mystery of the Missing Waves [YouTube]     

    Read the article

  • SCOM, 90 Days In, II. Noise.

    - by merrillaldrich
    Once you get past the basic architecture of a SCOM implementation, and build the servers, and so on, the first real problem is … well, noise. Suddenly (depending on how you deploy) the system will reach out, like marching army ants or some very clever cybernetic spider, and find, and then proceed to yell at you about, every single problem on every server you didn’t know you had. That, of course, is the point. Still, a tool like this is not useful if it doesn’t surface the real problems from the...(read more)

    Read the article

  • 3D picking lwjgl

    - by Wirde
    I have written some code to preform 3D picking that for some reason dosn't work entirely correct! (Im using LWJGL just so you know.) I posted this at stackoverflow at first but after researching some more in to my problem i found this neat site and tought that you guys might be more qualified to answer this question. This is how the code looks like: if(Mouse.getEventButton() == 1) { if (!Mouse.getEventButtonState()) { Camera.get().generateViewMatrix(); float screenSpaceX = ((Mouse.getX()/800f/2f)-1.0f)*Camera.get().getAspectRatio(); float screenSpaceY = 1.0f-(2*((600-Mouse.getY())/600f)); float displacementRate = (float)Math.tan(Camera.get().getFovy()/2); screenSpaceX *= displacementRate; screenSpaceY *= displacementRate; Vector4f cameraSpaceNear = new Vector4f((float) (screenSpaceX * Camera.get().getNear()), (float) (screenSpaceY * Camera.get().getNear()), (float) (-Camera.get().getNear()), 1); Vector4f cameraSpaceFar = new Vector4f((float) (screenSpaceX * Camera.get().getFar()), (float) (screenSpaceY * Camera.get().getFar()), (float) (-Camera.get().getFar()), 1); Matrix4f tmpView = new Matrix4f(); Camera.get().getViewMatrix().transpose(tmpView); Matrix4f invertedViewMatrix = (Matrix4f)tmpView.invert(); Vector4f worldSpaceNear = new Vector4f(); Matrix4f.transform(invertedViewMatrix, cameraSpaceNear, worldSpaceNear); Vector4f worldSpaceFar = new Vector4f(); Matrix4f.transform(invertedViewMatrix, cameraSpaceFar, worldSpaceFar); Vector3f rayPosition = new Vector3f(worldSpaceNear.x, worldSpaceNear.y, worldSpaceNear.z); Vector3f rayDirection = new Vector3f(worldSpaceFar.x - worldSpaceNear.x, worldSpaceFar.y - worldSpaceNear.y, worldSpaceFar.z - worldSpaceNear.z); rayDirection.normalise(); Ray clickRay = new Ray(rayPosition, rayDirection); Vector tMin = new Vector(), tMax = new Vector(), tempPoint; float largestEnteringValue, smallestExitingValue, temp, closestEnteringValue = Camera.get().getFar()+0.1f; Drawable closestDrawableHit = null; for(Drawable d : this.worldModel.getDrawableThings()) { // Calcualte AABB for each object... needs to be moved later... firstVertex = true; for(Surface surface : d.getSurfaces()) { for(Vertex v : surface.getVertices()) { worldPosition.x = (v.x+d.getPosition().x)*d.getScale().x; worldPosition.y = (v.y+d.getPosition().y)*d.getScale().y; worldPosition.z = (v.z+d.getPosition().z)*d.getScale().z; worldPosition = worldPosition.rotate(d.getRotation()); if (firstVertex) { maxX = worldPosition.x; maxY = worldPosition.y; maxZ = worldPosition.z; minX = worldPosition.x; minY = worldPosition.y; minZ = worldPosition.z; firstVertex = false; } else { if (worldPosition.x > maxX) { maxX = worldPosition.x; } if (worldPosition.x < minX) { minX = worldPosition.x; } if (worldPosition.y > maxY) { maxY = worldPosition.y; } if (worldPosition.y < minY) { minY = worldPosition.y; } if (worldPosition.z > maxZ) { maxZ = worldPosition.z; } if (worldPosition.z < minZ) { minZ = worldPosition.z; } } } } // ray/slabs intersection test... 
// clickRay.getOrigin().x + clickRay.getDirection().x * f = minX // clickRay.getOrigin().x - minX = -clickRay.getDirection().x * f // clickRay.getOrigin().x/-clickRay.getDirection().x - minX/-clickRay.getDirection().x = f // -clickRay.getOrigin().x/clickRay.getDirection().x + minX/clickRay.getDirection().x = f largestEnteringValue = -clickRay.getOrigin().x/clickRay.getDirection().x + minX/clickRay.getDirection().x; temp = -clickRay.getOrigin().y/clickRay.getDirection().y + minY/clickRay.getDirection().y; if(largestEnteringValue < temp) { largestEnteringValue = temp; } temp = -clickRay.getOrigin().z/clickRay.getDirection().z + minZ/clickRay.getDirection().z; if(largestEnteringValue < temp) { largestEnteringValue = temp; } smallestExitingValue = -clickRay.getOrigin().x/clickRay.getDirection().x + maxX/clickRay.getDirection().x; temp = -clickRay.getOrigin().y/clickRay.getDirection().y + maxY/clickRay.getDirection().y; if(smallestExitingValue > temp) { smallestExitingValue = temp; } temp = -clickRay.getOrigin().z/clickRay.getDirection().z + maxZ/clickRay.getDirection().z; if(smallestExitingValue < temp) { smallestExitingValue = temp; } if(largestEnteringValue > smallestExitingValue) { //System.out.println("Miss!"); } else { if (largestEnteringValue < closestEnteringValue) { closestEnteringValue = largestEnteringValue; closestDrawableHit = d; } } } if(closestDrawableHit != null) { System.out.println("Hit at: (" + clickRay.setDistance(closestEnteringValue).x + ", " + clickRay.getCurrentPosition().y + ", " + clickRay.getCurrentPosition().z); this.worldModel.removeDrawableThing(closestDrawableHit); } } } I just don't understand what's wrong, the ray are shooting and i do hit stuff that gets removed but the result of the ray are verry strange it sometimes removes the thing im clicking at, sometimes it removes things thats not even close to what im clicking at, and sometimes it removes nothing at all. Edit: Okay so i have continued searching for errors and by debugging the ray (by painting smal dots where it travles) i can now se that there is something oviously wrong with the ray that im sending out... it has its origin near the world center (nearer or further away depending on where on the screen im clicking) and always shots to the same position no matter where I direct my camera... My initial toughts is that there might be some error in the way i calculate my viewMatrix (since it's not possible to get the viewmatrix from the gluLookAt method in lwjgl; I have to build it my self and I guess thats where the problem is at)... 
Edit2: This is how i calculate it currently: private double[][] viewMatrixDouble = {{0,0,0,0}, {0,0,0,0}, {0,0,0,0}, {0,0,0,1}}; public Vector getCameraDirectionVector() { Vector actualEye = this.getActualEyePosition(); return new Vector(lookAt.x-actualEye.x, lookAt.y-actualEye.y, lookAt.z-actualEye.z); } public Vector getActualEyePosition() { return eye.rotate(this.getRotation()); } public void generateViewMatrix() { Vector cameraDirectionVector = getCameraDirectionVector().normalize(); Vector side = Vector.cross(cameraDirectionVector, this.upVector).normalize(); Vector up = Vector.cross(side, cameraDirectionVector); viewMatrixDouble[0][0] = side.x; viewMatrixDouble[0][1] = up.x; viewMatrixDouble[0][2] = -cameraDirectionVector.x; viewMatrixDouble[1][0] = side.y; viewMatrixDouble[1][1] = up.y; viewMatrixDouble[1][2] = -cameraDirectionVector.y; viewMatrixDouble[2][0] = side.z; viewMatrixDouble[2][1] = up.z; viewMatrixDouble[2][2] = -cameraDirectionVector.z; /* Vector actualEyePosition = this.getActualEyePosition(); Vector zaxis = new Vector(this.lookAt.x - actualEyePosition.x, this.lookAt.y - actualEyePosition.y, this.lookAt.z - actualEyePosition.z).normalize(); Vector xaxis = Vector.cross(upVector, zaxis).normalize(); Vector yaxis = Vector.cross(zaxis, xaxis); viewMatrixDouble[0][0] = xaxis.x; viewMatrixDouble[0][1] = yaxis.x; viewMatrixDouble[0][2] = zaxis.x; viewMatrixDouble[1][0] = xaxis.y; viewMatrixDouble[1][1] = yaxis.y; viewMatrixDouble[1][2] = zaxis.y; viewMatrixDouble[2][0] = xaxis.z; viewMatrixDouble[2][1] = yaxis.z; viewMatrixDouble[2][2] = zaxis.z; viewMatrixDouble[3][0] = -Vector.dot(xaxis, actualEyePosition); viewMatrixDouble[3][1] =-Vector.dot(yaxis, actualEyePosition); viewMatrixDouble[3][2] = -Vector.dot(zaxis, actualEyePosition); */ viewMatrix = new Matrix4f(); viewMatrix.load(getViewMatrixAsFloatBuffer()); } Would be verry greatfull if anyone could verify if this is wrong or right, and if it's wrong; supply me with the right way of doing it... I have read alot of threads and documentations about this but i can't seam to wrapp my head around it... Edit3: Okay with the help of Byte56 (thanks alot for the help) i have now concluded that it's not the viewMatrix that is the problem... I still get the same messedup result; anyone that think that they can find the error in my code, i certenly can't, have bean working on this for 3 days now :(
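
    For reference, the unprojection the code above is trying to perform can be written compactly as follows (a sketch; w and h are the viewport width and height, x_s, y_s the mouse position, and the sign of the y term depends on whether the window system measures y from the top or the bottom of the screen):

        \mathbf{p}_{\mathrm{ndc}}(z) = \left(\frac{2 x_s}{w} - 1,\ \frac{2 y_s}{h} - 1,\ z\right),\qquad
        \tilde{\mathbf{p}}(z) = (P\,V)^{-1}\,\bigl(\mathbf{p}_{\mathrm{ndc}}(z),\ 1\bigr)^{\mathsf{T}},\qquad
        \mathbf{p}(z) = \tilde{\mathbf{p}}_{xyz} / \tilde{\mathbf{p}}_{w}

        \mathbf{o} = \mathbf{p}(-1),\qquad
        \mathbf{d} = \frac{\mathbf{p}(+1) - \mathbf{p}(-1)}{\lVert\,\mathbf{p}(+1) - \mathbf{p}(-1)\,\rVert}

    Here P and V are the projection and view matrices, o is the ray origin on the near plane and d the ray direction. If the resulting ray ignores the camera, the first thing to check is that V (and the transpose/multiplication order used to form P V) matches what is actually uploaded for rendering, which is the same suspicion raised in the question.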

    Read the article

  • Problem with drawing textures in OpenGL ES

    - by droidmachine
    I'm developing a 2D game for Android and I'm using the framework described in the book Beginning Android Games by Mario Zechner. My framework is well designed and uses OpenGL ES 1.1; it's similar to libgdx. When I put my textures adjacent to each other on my 2D surface, there are gaps about 1 px in size between them. This problem only occurs on my tablet; there is no such problem on my phone. It looks like this picture: What can be the problem? I haven't been able to fix it for a week.

    Read the article

  • How to do geometric projection shadows?

    - by John Murdoch
    I have decided that since my game world is mostly flat I don't need better shadows than geometric projections - at least for now. The only problem is I don't even know how to do those properly - that is to produce a 4x4 matrix which would render shadows for my objects (that is, I guess, project them on a horizontal XZ plane). I would like a light source at infinity (e.g., the sun at some point in the sky) and thus parallel projection. My current code does something that looks almost right for small flying objects, but actually is a very rude approximation, as it doesn't project the objects onto the ground, but simply moves them there (I think). Also it always wrongly assumes the sun is always on the zenith (projecting straight down). Gdx.gl20.glEnable(GL10.GL_BLEND); Gdx.gl20.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA); //shells shellTexture.bind(); shader.begin(); for (ShellState state : shellStates.values()) { transform.set(camera.combined); transform.mul(state.transform); shader.setUniformMatrix("u_worldView", transform); shader.setUniformi("u_texture", 0); shellMesh.render(shader, GL10.GL_TRIANGLES); } shader.end(); // shadows shader.begin(); for (ShellState state : shellStates.values()) { transform.set(camera.combined); m4.set(state.transform); state.transform.getTranslation(v3); m4.translate(0, -v3.y + 0.5f, 0); // TODO HACK: + 0.5f is a hack to ensure the shadow appears above the ground; this is overall a hack as we are just moving the shell to the surface instead of projecting it on the surface! transform.mul(m4); shader.setUniformMatrix("u_worldView", transform); shader.setUniformi("u_texture", 0); // TODO: make shadow black somehow shellMesh.render(shader, GL10.GL_TRIANGLES); } shader.end(); Gdx.gl.glDisable(GL10.GL_BLEND); So my questions are: a) What is the proper way to produce a Matrix4 to pass to openGL which would render the shadows for my objects? b) I am supposed to use another fragment shader for the shadows which would paint them in semi-transparent grey, correct? c) The limitation of this simplistic approach is that whenever there is some object on the ground (it is not flat) the shadows will not be drawn, correct? d) Do I need to add something very small to the y (up) coordinate to avoid z-fighting with ground textures? Or is the fact they will be semi-transparent enough to resolve that problem?
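
    For question (a), a standard construction (a sketch, not taken from the question's code): for a parallel light travelling along direction d = (d_x, d_y, d_z) with d_y < 0 and a ground plane at y = 0, each point p is slid along d until it meets the plane,

        \mathbf{p}' = \mathbf{p} - \frac{p_y}{d_y}\,\mathbf{d}

    which, for column vectors, is the single matrix

        S = \begin{pmatrix}
              1 & -d_x/d_y & 0 & 0 \\
              0 & 0        & 0 & 0 \\
              0 & -d_z/d_y & 1 & 0 \\
              0 & 0        & 0 & 1
            \end{pmatrix}

    so the shadow pass would use something like camera.combined · S · state.transform instead of translating the shell down to the ground. For a ground plane at height y_0, substitute (p_y - y_0) for p_y and translate back up by y_0 afterwards; a small positive y offset (or polygon offset) on the result is the usual way to avoid z-fighting with the ground texture, since alpha blending alone does not prevent it.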

    Read the article
