Search Results

Search found 2180 results on 88 pages for 'projection matrix'.


  • Using GPS data in an application

    - by Moka
    Hi, I want to use GPS data (taken from the $GPRMC sentence) in a desktop application that uses MapPoint 2009. I get the latitude and longitude, but when I check these points on the map, the result is incorrect (for example, my data is 43.412 N, 79.369 W, but the correct point is 43.686 N, 79.616 W). I guess I must apply a correction before using the values; I tried projection methods like "Miller" or "Mercator", but those aren't effective. Can anyone guide me?
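    One thing worth checking here: NMEA sentences such as $GPRMC encode latitude as ddmm.mmmm and longitude as dddmm.mmmm (degrees and decimal minutes), so a raw field of 4341.2 means 43° 41.2', i.e. 43.6867°, which matches the "correct" value above, while reading the same field as a plain decimal gives 43.412. A minimal C# sketch of that conversion (field parsing and names are illustrative only):

        // Convert an NMEA ddmm.mmmm / dddmm.mmmm field plus hemisphere letter into decimal degrees.
        static double NmeaToDecimalDegrees(double rawValue, char hemisphere)
        {
            double degrees = Math.Floor(rawValue / 100.0);   // 4341.2 -> 43
            double minutes = rawValue - degrees * 100.0;     // 4341.2 -> 41.2
            double result  = degrees + minutes / 60.0;       // 43 + 41.2/60 = 43.6867
            return (hemisphere == 'S' || hemisphere == 'W') ? -result : result;
        }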

    Read the article

  • OpenGL textures trigger error 1281 and strange background behavior

    - by user3714670
    I am using SOIL to apply textures to VBOs, without textures i could change the background and display black (default color) vbos easily, but now with textures, openGL is giving an error 1281, the background is black and some textures are not applied. There must be something i didn't understand about applying/loading the textures. BUt the texture IS applied (nothing else is working though), the error is applied when i try to use the shader program however i checked the compilation of these and no problems were written. Here is the code i use to load textures, once loaded it is kept in memory, it mostly comes from the example of SOIL : texture = SOIL_load_OGL_single_cubemap( filename, SOIL_DDS_CUBEMAP_FACE_ORDER, SOIL_LOAD_AUTO, SOIL_CREATE_NEW_ID, SOIL_FLAG_POWER_OF_TWO | SOIL_FLAG_MIPMAPS | SOIL_FLAG_DDS_LOAD_DIRECT ); if( texture > 0 ) { glEnable( GL_TEXTURE_CUBE_MAP ); glEnable( GL_TEXTURE_GEN_S ); glEnable( GL_TEXTURE_GEN_T ); glEnable( GL_TEXTURE_GEN_R ); glTexGeni( GL_S, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP ); glTexGeni( GL_T, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP ); glTexGeni( GL_R, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP ); glBindTexture( GL_TEXTURE_CUBE_MAP, texture ); std::cout << "the loaded single cube map ID was " << texture << std::endl; } else { std::cout << "Attempting to load as a HDR texture" << std::endl; texture = SOIL_load_OGL_HDR_texture( filename, SOIL_HDR_RGBdivA2, 0, SOIL_CREATE_NEW_ID, SOIL_FLAG_POWER_OF_TWO | SOIL_FLAG_MIPMAPS ); if( texture < 1 ) { std::cout << "Attempting to load as a simple 2D texture" << std::endl; texture = SOIL_load_OGL_texture( filename, SOIL_LOAD_AUTO, SOIL_CREATE_NEW_ID, SOIL_FLAG_POWER_OF_TWO | SOIL_FLAG_MIPMAPS | SOIL_FLAG_DDS_LOAD_DIRECT ); } if( texture > 0 ) { // enable texturing glEnable( GL_TEXTURE_2D ); // bind an OpenGL texture ID glBindTexture( GL_TEXTURE_2D, texture ); std::cout << "the loaded texture ID was " << texture << std::endl; } else { glDisable( GL_TEXTURE_2D ); std::cout << "Texture loading failed: '" << SOIL_last_result() << "'" << std::endl; } } and how i apply it when drawing : GLuint TextureID = glGetUniformLocation(shaderProgram, "myTextureSampler"); if(!TextureID) cout << "TextureID not found ..." << endl; // glEnableVertexAttribArray(TextureID); glActiveTexture(GL_TEXTURE0); if(SFML) sf::Texture::bind(sfml_texture); else { glBindTexture (GL_TEXTURE_2D, texture); // glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 1024, 768, 0, GL_RGB, GL_UNSIGNED_BYTE, &texture); } glUniform1i(TextureID, 0); I am not sure that SOIL is adapted to my program as i want something as simple as possible (i used sfml's texture object which was the best but i can't anymore), but if i can get it to work it would be great. 
EDIT : After narrowing the code implied by the error, here is the code that provokes it, it is called between texture loading and bos drawing : glEnableClientState(GL_VERTEX_ARRAY); //this gives the error : glUseProgram(this->shaderProgram); if (!shaderLoaded) { std::cout << "Loading default shaders" << std::endl; if(textured) loadShaderProgramm(texture_vertexSource, texture_fragmentSource); else loadShaderProgramm(default_vertexSource,default_fragmentSource); } glm::mat4 Projection = camera->getPerspective(); glm::mat4 View = camera->getView(); glm::mat4 Model = glm::mat4(1.0f); Model[0][0] *= scale_x; Model[1][1] *= scale_y; Model[2][2] *= scale_z; glm::vec3 translate_vec(this->x,this->y,this->z); glm::mat4 object_transform = glm::translate(glm::mat4(1.0f),translate_vec); glm::quat rotation = QAccumulative.getQuat(); glm::mat4 matrix_rotation = glm::mat4_cast(rotation); object_transform *= matrix_rotation; Model *= object_transform; glm::mat4 MVP = Projection * View * Model; GLuint ModelID = glGetUniformLocation(this->shaderProgram, "M"); if(ModelID ==-1) cout << "ModelID not found ..." << endl; GLuint MatrixID = glGetUniformLocation(this->shaderProgram, "MVP"); if(MatrixID ==-1) cout << "MatrixID not found ..." << endl; GLuint ViewID = glGetUniformLocation(this->shaderProgram, "V"); if(ViewID ==-1) cout << "ViewID not found ..." << endl; glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &MVP[0][0]); glUniformMatrix4fv(ModelID, 1, GL_FALSE, &Model[0][0]); glUniformMatrix4fv(ViewID, 1, GL_FALSE, &View[0][0]); drawObject();

    Read the article

  • 3D Math: Calculate Bank (Roll) angle from Look and Up orthogonal vectors

    - by 742
    I hope this is the proper location to ask this question, which is the same as this one, but expressed as pure math instead of graphically (at least I hope I translated the problem to math correctly). Considering: two vectors that are orthogonal, Up (ux, uy, uz) and Look (lx, ly, lz); a plane P which is perpendicular to Look (hence including Up); and Y1, which is the projection of Y (the vertical axis) along Look onto P. Question: what is the value of the angle between Y1 and Up? As mathematicians will agree, this is a very basic question, but I've been scratching my head for at least two weeks without being able to visualize how to project Y onto P... maybe I'm now too old for finding solutions to school exercises. I'm looking for the trigonometric solution, not a solution using a matrix. Thanks.
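    For reference, projecting along Look onto a plane perpendicular to Look simply removes the Look component, and the angle can then be read off from dot and cross products. Writing \hat{L} for the normalized Look vector and Y for the world vertical axis (the sign convention for the bank depends on handedness and on which direction counts as positive roll):

        Y_1 = Y - (Y \cdot \hat{L})\,\hat{L}
        \cos\theta = \frac{Y_1 \cdot U}{\|Y_1\|\,\|U\|}, \qquad
        \sin\theta = \frac{(U \times Y_1) \cdot \hat{L}}{\|Y_1\|\,\|U\|}, \qquad
        \theta = \operatorname{atan2}(\sin\theta, \cos\theta)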

    Read the article

  • Why does UMN-Mapserver show an ERDAS image file (.img) as a white shape?

    - by Mnementh
    I want to render an ERDAS image file (suffix .img) with the UMN-Mapserver. The data is rendered in the right position and with the correct shape, but everything is white instead of the raster image. The image contains many layers. My mapfile looks like this: MAP NAME "Test" WEB METADATA "wms_title" "test" "WMS_SRS" "epsg:31466 epsg:31467 epsg:31468 epsg:31469 epsg:4326 epsg:25832 epsg:3035" END LOG "test.log" IMAGEPATH "." END SHAPEPATH "." PROJECTION "init=epsg:32632" END LAYER NAME "testlayer" TYPE RASTER DATA "test.img" STATUS ON OFFSITE 0 0 0 END OUTPUTFORMAT NAME png DRIVER "GD/PNG" MIMETYPE "image/png" IMAGEMODE RGBA END END

    Read the article

  • getItemIdAtPosition() problem in Android?

    - by Praveen Chandrasekaran
    I am using getItemIdAtPosition() to get the BaseColumns _ID of a record in the SQLite database. The code is below: protected void onListItemClick(ListView l, View v, int position, long id) { Cursor cursor = managedQuery(Constants.CONTENT_URI, PROJECTION, BaseColumns._ID + "=" + l.getItemIdAtPosition(position), null, null); } But it does not retrieve the id correctly; it returns the position in the ListView instead. Does this method depend on something at the time of setting the adapter or creating the DB? I have no idea. Any ideas?

    Read the article

  • NHibernate - get List<long> representing primary keys?

    - by Nathan
    I have a situation where I definitely don't want to get the whole domain object. Basically, the entity has a primary key of long (.NET)/bigint(sql server 2005). I simply need to pass the primary key to an external system which will access the database directly - and since the list of ids could be large, I don't want to rehydrate the entire domain object just to get the Id. In linq2sql, I could accomplish this with a projection, but I am restricted to NHibernate 1.2.1.4000, which doesn't support Linq. Is there a way to accomplish this using NHibernate 1.2.1.4000? (I am open to using a named-query if that will work)
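    One route that works without Linq support is an HQL projection that selects only the identifier, so nothing but the keys is ever hydrated. A minimal sketch, with the entity and property names purely as placeholders (the generic List<T>() overload should be available on IQuery in NHibernate 1.2; if not, the non-generic List() plus a cast does the same job):

        // Select only the primary keys; no full entities are materialized.
        IList<long> ids = session
            .CreateQuery("select o.Id from Order o where o.Customer = :customer")
            .SetParameter("customer", customer)
            .List<long>();

    The same HQL string can also be registered as a named query in the mapping file and retrieved with session.GetNamedQuery, which matches the named-query option mentioned in the question.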

    Read the article

  • [C++] Two fullscreen windows on two different screens

    - by sirithang
    Hi! I'm working on an app that displays an image onto a dome. The dome projection system consists of two projectors and a PC running Gentoo Linux and KDE, with NVIDIA's TwinView. So far I've used SDL to display a fullscreen window, and it displays my app across the two screens. But I've just realised that I need to project two different images, one on each projector. That's why I'm looking for a way to display a fullscreen window on the first screen (projector) and another on the second. But SDL fullscreen just extends the window across the two screens. I can use any library (as long as it's light and free, since I will wrap it into my small "API"), or change display settings. BTW, it would be nice to have OpenGL support, since SDL manages only one window ^^

    Read the article

  • Convert Lat/Longs to X/Y Co-ordinates

    - by michael
    I have the lat/long value of New York City, NY (40.7560540, -73.9869510) and a flat image of the earth, 1000px × 446px. I would like to be able to convert, using JavaScript, the lat/long to an X,Y coordinate where the point reflects the location. So the X,Y coordinate from the top-left corner of the image would be 289, 111. Things to note: don't worry about which projection to use, make your own assumption or go with what you know might work; X,Y can be from any corner of the image. Bonus points for the same solution in PHP (but I really need the JS).
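    For a first pass, the simplest assumption is an equirectangular (plate carrée) image spanning the full latitude/longitude range, in which case the mapping from the top-left corner is a pair of linear rescales. With \lambda the longitude, \varphi the latitude, and W \times H the image size in pixels (the exact constants depend on the projection and crop of the particular image, and a Mercator-style map would need y \propto \ln\tan(\pi/4 + \varphi/2) instead):

        x = \frac{\lambda + 180}{360}\, W, \qquad y = \frac{90 - \varphi}{180}\, H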

    Read the article

  • Linq order by aggregate in the select { }

    - by Joe Pitz
    Here is one I am working on: var fStep = from insp in sq.Inspections where insp.TestTimeStamp > dStartTime && insp.TestTimeStamp < dEndTime && insp.Model == "EP" && insp.TestResults != "P" group insp by new { insp.TestResults, insp.FailStep } into grp select new { FailedCount = (grp.Key.TestResults == "F" ? grp.Count() : 0), CancelCount = (grp.Key.TestResults == "C" ? grp.Count() : 0), grp.Key.TestResults, grp.Key.FailStep, PercentFailed = Convert.ToDecimal(1.0 * grp.Count() /tcount*100) } ; I would like to orderby one or more of the fields in the select projection.
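    A note on ordering here: the projected fields live on an anonymous type, so they can be ordered either by appending OrderBy/ThenBy to the query variable, or by putting an orderby clause on the group before the select. A small sketch of the first option, reusing the names from the query above (whether the computed columns translate to SQL depends on the provider; calling .AsEnumerable() first is the safe fallback):

        // Order the projected rows by fields of the anonymous type.
        var ordered = fStep
            .OrderByDescending(r => r.PercentFailed)
            .ThenBy(r => r.FailStep);

        // Equivalent inside the query: place "orderby grp.Count() descending"
        // between the "group ... into grp" and the "select new { ... }".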

    Read the article

  • How to very efficiently assign a lat/long to a city boundary described by a shapefile?

    - by watcherFR
    I have a huge shapefile of 36,000 non-overlapping polygons (city boundaries). I want to easily determine the polygon into which a given lat/long falls. What would be the best way, given that it must be extremely computationally efficient? I was thinking of creating a lookup table (tilex, tiley, polygon_id) where tilex and tiley are tile identifiers at zoom level 21 or 22. Yes, the lack of precision from using tile numbers and a planar projection is acceptable in my application. I would rather not use Postgres's GIS extension, and I am fine with a program that will run for 2 days to generate all the INSERT statements.
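    For the lookup-table idea, the tile indices at a given zoom level follow the usual slippy-map (Web Mercator) numbering, which is a closed-form conversion; a small sketch of that step (shown in C# purely for illustration, the formula itself is language-independent):

        // lat/long in degrees -> tile x/y at the given zoom level (2^zoom tiles per axis).
        static (long X, long Y) LatLonToTile(double latDeg, double lonDeg, int zoom)
        {
            double latRad = latDeg * Math.PI / 180.0;
            double n = Math.Pow(2.0, zoom);
            long x = (long)Math.Floor((lonDeg + 180.0) / 360.0 * n);
            long y = (long)Math.Floor((1.0 - Math.Log(Math.Tan(latRad) + 1.0 / Math.Cos(latRad)) / Math.PI) / 2.0 * n);
            return (x, y);
        }

    Populating the table then amounts to rasterizing each polygon into the tiles it covers and emitting one row per (tilex, tiley, polygon_id).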

    Read the article

  • Hidden line removal in JavaScript or Python?

    - by feklee
    I have the following task. Input: a 3D scene composed of a set of cuboids (it could be broken down into a set of triangles) and a description of a camera: position, direction, focal length. Output: a 2D wireframe projection of the scene as a set of lines, with hidden line removal already applied. Platform: a web app running on Google App Engine for Python. Any idea if there is a JavaScript or Python library that does this?

    Read the article

  • Hibernate Hql find result size for paginator

    - by KCore
    Hi, I need to add a paginator to my Hibernate application. I applied it to some of my database operations, which I perform using Criteria, by setting Projections.count(); this is working fine. But when I query with HQL, I can't seem to find an efficient way to get the result count. If I do query.list().size() it takes a lot of time, and I think Hibernate loads all the objects into memory. Can anyone please suggest an efficient method to retrieve the result count when using HQL?

    Read the article

  • Can I use Linq to project a new typed datarow?

    - by itchi
    I currently have a csv file that I'm parsing with an example from here: http://alexreg.wordpress.com/2009/05/03/strongly-typed-csv-reader-in-c/ I then want to loop the records and insert them using a typed dataset xsd to an Oracle database. It's not that difficult, something like: foreach (var csvItem in csvfile) { DataSet.MYTABLEDataTable DT = new DataSet.MYTABLEDataTable(); DataSet.MYTABLERow row = DT.NewMYTABLERow(); row.FIELD1 = csvItem.FIELD1; row.FIELD2 = csvItem.FIELD2; } I was wondering how I would do something with LINQ projection: var test = from csvItem in csvfile select new MYTABLERow { FIELD1 = csvItem.FIELD1, FIELD2 = csvItem.FIELD2 } But I don't think I can create datarows like this without the use of a rowbuilder, or maybe a better constructor for the datarow?
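    Typed DataRows have no public constructor, so the object-initializer form won't compile, but the projection still works if the lambda body calls the factory method the typed DataSet generates. A sketch using the table and column names from the question (whether to also attach each row to the table is up to the caller):

        var table = new DataSet.MYTABLEDataTable();

        var rows = csvfile.Select(item =>
        {
            var row = table.NewMYTABLERow();   // factory generated by the typed DataSet
            row.FIELD1 = item.FIELD1;
            row.FIELD2 = item.FIELD2;
            table.AddMYTABLERow(row);          // optional: add to the table as we go
            return row;
        }).ToList();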

    Read the article

  • Python. Draw rectangle in basemap

    - by user2928318
    I need to add several rectangles to my basemap. I need four rectangles with lat and lon ranges as below. 1) llcrnrlon=-10, urcrnrlon=10, llcrnrlat=35,urcrnrlat=60 2) llcrnrlon=10.5, urcrnrlon=35, llcrnrlat=35,urcrnrlat=60 3) llcrnrlon=35.5, urcrnrlon=52, llcrnrlat=30,urcrnrlat=55 4) llcrnrlon=-20, urcrnrlon=35, llcrnrlat=20,urcrnrlat=34.5 My script is below. I found "polygon" packages for adding lines, but I do not know exactly how to use them. Please help me!! Thanks a lot in advance! from mpl_toolkits.basemap import Basemap m=basemaputpart.Basemap(llcrnrlon=-60, llcrnrlat=20, urcrnrlon=60, urcrnrlat=70, resolution='i', projection='cyl', lon_0=0, lat_0=45) lon1=np.array([[-180.+j*0.5 for j in range(721)] for i in range(181)]) lat1=np.array([[i*0.5 for j in range(721)] for i in range(181) ]) Nx1,Ny1=m(lon1,lat1,inverse=False) toplot=data[:,:] toplot[data==0]=np.nan toplot=np.ma.masked_invalid(toplot) plt.pcolor(Nx1,Ny1,np.log(toplot),vmin=0, vmax=5) cbar=plt.colorbar() m.drawcoastlines(zorder=2) m.drawcountries(zorder=2) plt.show()

    Read the article

  • Optimized OCR black/white pixel algorithm

    - by eagle
    I am writing a simple OCR solution for a finite set of characters. That is, I know the exact way all 26 letters in the alphabet will look like. I am using C# and am able to easily determine if a given pixel should be treated as black or white. I am generating a matrix of black/white pixels for every single character. So for example, the letter I (capital i), might look like the following: 01110 00100 00100 00100 01110 Note: all points, which I use later in this post, assume that the top left pixel is (0, 0), bottom right pixel is (4, 4). 1's represent black pixels, and 0's represent white pixels. I would create a corresponding matrix in C# like this: CreateLetter("I", new List<List<bool>>() { new List<bool>() { false, true, true, true, false }, new List<bool>() { false, false, true, false, false }, new List<bool>() { false, false, true, false, false }, new List<bool>() { false, false, true, false, false }, new List<bool>() { false, true, true, true, false } }); I know I could probably optimize this part by using a multi-dimensional array instead, but let's ignore that for now, this is for illustrative purposes. Every letter is exactly the same dimensions, 10px by 11px (10px by 11px is the actual dimensions of a character in my real program. I simplified this to 5px by 5px in this posting since it is much easier to "draw" the letters using 0's and 1's on a smaller image). Now when I give it a 10px by 11px part of an image to analyze with OCR, it would need to run on every single letter (26) on every single pixel (10 * 11 = 110) which would mean 2,860 (26 * 110) iterations (in the worst case) for every single character. I was thinking this could be optimized by defining the unique characteristics of every character. So, for example, let's assume that the set of characters only consists of 5 distinct letters: I, A, O, B, and L. These might look like the following: 01110 00100 00100 01100 01000 00100 01010 01010 01010 01000 00100 01110 01010 01100 01000 00100 01010 01010 01010 01000 01110 01010 00100 01100 01110 After analyzing the unique characteristics of every character, I can significantly reduce the number of tests that need to be performed to test for a character. For example, for the "I" character, I could define it's unique characteristics as having a black pixel in the coordinate (3, 0) since no other characters have that pixel as black. So instead of testing 110 pixels for a match on the "I" character, I reduced it to a 1 pixel test. This is what it might look like for all these characters: var LetterI = new OcrLetter() { Name = "I", BlackPixels = new List<Point>() { new Point (3, 0) } } var LetterA = new OcrLetter() { Name = "A", WhitePixels = new List<Point>() { new Point(2, 4) } } var LetterO = new OcrLetter() { Name = "O", BlackPixels = new List<Point>() { new Point(3, 2) }, WhitePixels = new List<Point>() { new Point(2, 2) } } var LetterB = new OcrLetter() { Name = "B", BlackPixels = new List<Point>() { new Point(3, 1) }, WhitePixels = new List<Point>() { new Point(3, 2) } } var LetterL = new OcrLetter() { Name = "L", BlackPixels = new List<Point>() { new Point(1, 1), new Point(3, 4) }, WhitePixels = new List<Point>() { new Point(2, 2) } } This is challenging to do manually for 5 characters and gets much harder the greater the amount of letters that are added. You also want to guarantee that you have the minimum set of unique characteristics of a letter since you want it to be optimized as much as possible. 
I want to create an algorithm that will identify the unique characteristics of all the letters and would generate similar code to that above. I would then use this optimized black/white matrix to identify characters. How do I take the 26 letters that have all their black/white pixels filled in (e.g. the CreateLetter code block) and convert them to an optimized set of unique characteristics that define a letter (e.g. the new OcrLetter() code block)? And how would I guarantee that it is the most efficient definition set of unique characteristics (e.g. instead of defining 6 points as the unique characteristics, there might be a way to do it with 1 or 2 points, as the letter "I" in my example was able to). An alternative solution I've come up with is using a hash table, which will reduce it from 2,860 iterations to 110 iterations, a 26 time reduction. This is how it might work: I would populate it with data similar to the following: Letters["01110 00100 00100 00100 01110"] = "I"; Letters["00100 01010 01110 01010 01010"] = "A"; Letters["00100 01010 01010 01010 00100"] = "O"; Letters["01100 01010 01100 01010 01100"] = "B"; Now when I reach a location in the image to process, I convert it to a string such as: "01110 00100 00100 00100 01110" and simply find it in the hash table. This solution seems very simple, however, this still requires 110 iterations to generate this string for each letter. In big O notation, the algorithm is the same since O(110N) = O(2860N) = O(N) for N letters to process on the page. However, it is still improved by a constant factor of 26, a significant improvement (e.g. instead of it taking 26 minutes, it would take 1 minute). Update: Most of the solutions provided so far have not addressed the issue of identifying the unique characteristics of a character and rather provide alternative solutions. I am still looking for this solution which, as far as I can tell, is the only way to achieve the fastest OCR processing. I just came up with a partial solution: For each pixel, in the grid, store the letters that have it as a black pixel. Using these letters: I A O B L 01110 00100 00100 01100 01000 00100 01010 01010 01010 01000 00100 01110 01010 01100 01000 00100 01010 01010 01010 01000 01110 01010 00100 01100 01110 You would have something like this: CreatePixel(new Point(0, 0), new List<Char>() { }); CreatePixel(new Point(1, 0), new List<Char>() { 'I', 'B', 'L' }); CreatePixel(new Point(2, 0), new List<Char>() { 'I', 'A', 'O', 'B' }); CreatePixel(new Point(3, 0), new List<Char>() { 'I' }); CreatePixel(new Point(4, 0), new List<Char>() { }); CreatePixel(new Point(0, 1), new List<Char>() { }); CreatePixel(new Point(1, 1), new List<Char>() { 'A', 'B', 'L' }); CreatePixel(new Point(2, 1), new List<Char>() { 'I' }); CreatePixel(new Point(3, 1), new List<Char>() { 'A', 'O', 'B' }); // ... CreatePixel(new Point(2, 2), new List<Char>() { 'I', 'A', 'B' }); CreatePixel(new Point(3, 2), new List<Char>() { 'A', 'O' }); // ... CreatePixel(new Point(2, 4), new List<Char>() { 'I', 'O', 'B', 'L' }); CreatePixel(new Point(3, 4), new List<Char>() { 'I', 'A', 'L' }); CreatePixel(new Point(4, 4), new List<Char>() { }); Now for every letter, in order to find the unique characteristics, you need to look at which buckets it belongs to, as well as the amount of other characters in the bucket. So let's take the example of "I". We go to all the buckets it belongs to (1,0; 2,0; 3,0; ...; 3,4) and see that the one with the least amount of other characters is (3,0). 
In fact, it only has 1 character, meaning it must be an "I" in this case, and we found our unique characteristic. You can also do the same for pixels that would be white. Notice that bucket (2,0) contains all the letters except for "L", this means that it could be used as a white pixel test. Similarly, (2,4) doesn't contain an 'A'. Buckets that either contain all the letters or none of the letters can be discarded immediately, since these pixels can't help define a unique characteristic (e.g. 1,1; 4,0; 0,1; 4,4). It gets trickier when you don't have a 1 pixel test for a letter, for example in the case of 'O' and 'B'. Let's walk through the test for 'O'... It's contained in the following buckets: // Bucket Count Letters // 2,0 4 I, A, O, B // 3,1 3 A, O, B // 3,2 2 A, O // 2,4 4 I, O, B, L Additionally, we also have a few white pixel tests that can help: (I only listed those that are missing at most 2). The Missing Count was calculated as (5 - Bucket.Count). // Bucket Missing Count Missing Letters // 1,0 2 A, O // 1,1 2 I, O // 2,2 2 O, L // 3,4 2 O, B So now we can take the shortest black pixel bucket (3,2) and see that when we test for (3,2) we know it is either an 'A' or an 'O'. So we need an easy way to tell the difference between an 'A' and an 'O'. We could either look for a black pixel bucket that contains 'O' but not 'A' (e.g. 2,4) or a white pixel bucket that contains an 'O' but not an 'A' (e.g. 1,1). Either of these could be used in combination with the (3,2) pixel to uniquely identify the letter 'O' with only 2 tests. This seems like a simple algorithm when there are 5 characters, but how would I do this when there are 26 letters and a lot more pixels overlapping? For example, let's say that after the (3,2) pixel test, it found 10 different characters that contain the pixel (and this was the least from all the buckets). Now I need to find differences from 9 other characters instead of only 1 other character. How would I achieve my goal of getting the least amount of checks as possible, and ensure that I am not running extraneous tests?
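    One way to mechanize the search for cheap distinguishing tests is to stop hunting for per-letter characteristics and instead build a small decision tree greedily: at each step pick the pixel whose black/white split divides the remaining candidate letters most evenly, then recurse on each half until every leaf holds a single letter. A rough C# sketch of that idea (bitmaps as bool[height, width]; the greedy choice is a heuristic, so it gives short test chains but is not guaranteed to be minimal):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        sealed class Node
        {
            public int X, Y;               // pixel tested at this node (unused on leaves)
            public string Letter;          // set when this node is a leaf
            public Node IfBlack, IfWhite;
        }

        static class OcrTreeBuilder
        {
            // candidates: letter name -> bool[height, width] bitmap (true = black pixel).
            public static Node Build(Dictionary<string, bool[,]> candidates, int width, int height)
            {
                if (candidates.Count == 1)
                    return new Node { Letter = candidates.Keys.First() };

                // Pick the pixel whose black/white split is closest to half and half.
                int bestX = -1, bestY = -1, bestScore = int.MaxValue;
                for (int y = 0; y < height; y++)
                    for (int x = 0; x < width; x++)
                    {
                        int black = candidates.Values.Count(b => b[y, x]);
                        if (black == 0 || black == candidates.Count) continue; // can't discriminate
                        int score = Math.Abs(candidates.Count - 2 * black);
                        if (score < bestScore) { bestScore = score; bestX = x; bestY = y; }
                    }

                if (bestX < 0) // identical bitmaps remain; report the ambiguity instead of recursing forever
                    return new Node { Letter = string.Join("/", candidates.Keys.ToArray()) };

                var blackSet = candidates.Where(kv => kv.Value[bestY, bestX])
                                         .ToDictionary(kv => kv.Key, kv => kv.Value);
                var whiteSet = candidates.Where(kv => !kv.Value[bestY, bestX])
                                         .ToDictionary(kv => kv.Key, kv => kv.Value);

                return new Node
                {
                    X = bestX, Y = bestY,
                    IfBlack = Build(blackSet, width, height),
                    IfWhite = Build(whiteSet, width, height),
                };
            }
        }

    Classifying a glyph is then a walk from the root, one pixel test per level, so the number of tests per character ends up near log2(26) (about 5) rather than 110, and the tree itself records which pixels are the distinguishing ones.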

    Read the article

  • Silverlight Cream for March 20, 2010 -- #815

    - by Dave Campbell
    In this Issue: Andy Beaulieu(-2-, -3-), Alex Golesh, Damian Schenkelman, Adam Kinney(-2-), Jeremy Likness, Laurent Bugnion, and John Papa. Shoutouts: Adam Kinney has a good summary up of where to go for all the tools and toys: Install checklist for Silverlight 4 RC, Blend 4 Beta and Windows Phone Developer tools from MIX10 ... tons of links Laurent Bugnion had a few announcements at MIX10: MVVM Light V3 released at #MIX10, and he followed that with What’s new in MVVM Light V3 ... now for Windows Phone! Laurent Bugnion also has announced Sample code for my #mix10 talk online From SilverlightCream.com: Physics Games in Silverlight on Windows Phone 7 Andy Beaulieu has the Physics Helper working for WP7 already... read his post, check out all the links and get going on something fun... was great seeing you at MIX, too, Andy! Silverlight 4: GPU Accelerated PlaneProjection Andy Beaulieu has a comparison up of Plane Projection with and without the new GPU acceleration... be sure to read his notes section. Silverlight 4 PathListBox for Motion Path Animation Have you heard of the PathListBox? Well, showing is better than telling, so check out Andy Beaulieu's post on it Silverlight at Windows Phone 7 Alex Golesh has a quick overview on developing a Windows Phone 7 app in Silverlight using the new toys, and executiting it in the emulator Prism v2.1: Creating a Region Adapter for the Accordion control Damian Schenkelman shows how to use the Accordian control from the toolkit as a region in a Prism app. Expression Blend 4 Beta Feature Overview available for download Adam Kinney announced the presence of an Expression Blend whitepaper as well... you should go grab that too .toolbox – Free online Silverlight and Expression Blend training Want to improve your Silverlight chops or gain some Expression Blend chops? Check out .toolbox post that Adam Kinney posted Introducing the Visual State Aggregator Jeremy Likness describes the basic panel A/panel B problem, describes ways he and other folks have flipped between them, then describes his Visual State Aggregator ... and it's downloadable for you to give it a dance! Multithreading in Windows Phone 7 emulator: A bug Laurent Bugnion found a bug wit multi-threading on the Windows Phone emulator. He confirmed this with the team, and has a workaround you'll be needing... thanks Laurent. Silverlight Overview - Technical Whitepaper John Papa has reiterated the existence of this Silverlight 4 whitepaper ... it was updated this week, and we all should be aware of it. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • formula for replicating glTexGen in opengl es 2.0 glsl

    - by visualjc
    I also posted this on the main StackExchange, but this seems like a better place, but for give me for the double post if it shows up twice. I have been trying for several hours to implement a GLSL replacement for glTexGen with GL_OBJECT_LINEAR. For OpenGL ES 2.0. In Ogl GLSL there is the gl_TextureMatrix that makes this easier, but thats not available on OpenGL ES 2.0 / OpenGL ES Shader Language 1.0 Several sites have mentioned that this should be "easy" to do in a GLSL vert shader. But I just can not get it to work. My hunch is that I'm not setting the planes up correctly, or I'm missing something in my understanding. I've pored over the web. But most sites are talking about projected textures, I'm just looking to create UV's based on planar projection. The models are being built in Maya, have 50k polygons and the modeler is using planer mapping, but Maya will not export the UV's. So I'm trying to figure this out. I've looked at the glTexGen manpage information: g = p1xo + p2yo + p3zo + p4wo What is g? Is g the value of s in the texture2d call? I've looked at the site: http://www.opengl.org/wiki/Mathematics_of_glTexGen Another size explains the same function: coord = P1*X + P2*Y + P3*Z + P4*W I don't get how coord (an UV vec2 in my mind) is equal to the dot product (a scalar value)? Same problem I had before with "g". What do I set the plane to be? In my opengl c++ 3.0 code, I set it to [0, 0, 1, 0] (basically unit z) and glTexGen works great. I'm still missing something. My vert shader looks basically like this: WVPMatrix = World View Project Matrix. POSITION is the model vertex position. varying vec4 kOutBaseTCoord; void main() { gl_Position = WVPMatrix * vec4(POSITION, 1.0); vec4 sPlane = vec4(1.0, 0.0, 0.0, 0.0); vec4 tPlane = vec4(0.0, 1.0, 0.0, 0.0); vec4 rPlane = vec4(0.0, 0.0, 0.0, 0.0); vec4 qPlane = vec4(0.0, 0.0, 0.0, 0.0); kOutBaseTCoord.s = dot(vec4(POSITION, 1.0), sPlane); kOutBaseTCoord.t = dot(vec4(POSITION, 1.0), tPlane); //kOutBaseTCoord.r = dot(vec4(POSITION, 1.0), rPlane); //kOutBaseTCoord.q = dot(vec4(POSITION, 1.0), qPlane); } The frag shader precision mediump float; uniform sampler2D BaseSampler; varying mediump vec4 kOutBaseTCoord; void main() { //gl_FragColor = vec4(kOutBaseTCoord.st, 0.0, 1.0); gl_FragColor = texture2D(BaseSampler, kOutBaseTCoord.st); } I've tried texture2DProj in frag shader Here are some of the other links I've looked up http://www.gamedev.net/topic/407961-texgen-not-working-with-glsl-with-fixed-pipeline-is-ok/ Thank you in advance.

    Read the article

  • OpenGL - Cascaded shadow mapping - Texture lookup

    - by Silverlan
    I'm trying to implement cascaded shadow mapping in my engine, but I'm somewhat stuck at the last step. For testing purposes I've made sure all cascades encompass my entire scene. The result is currently this: The different intensity of the cascades is not on purpose, it's actually the problem. This is how I do the texture lookup for the shadow maps inside the fragment shader: layout(std140) uniform CSM { vec4 csmFard; // far distances for each cascade mat4 csmVP[4]; // View-Projection Matrix int numCascades; // Number of cascades to use. In this example it's 4. }; uniform sampler2DArrayShadow csmTextureArray; // The 4 shadow maps in vec4 csmPos[4]; // Vertex position in shadow MVP space float GetShadowCoefficient() { int index = numCascades -1; vec4 shadowCoord; for(int i=0;i<numCascades;i++) { if(gl_FragCoord.z < csmFard[i]) { shadowCoord = csmPos[i]; index = i; break; } } shadowCoord.w = shadowCoord.z; shadowCoord.z = float(index); shadowCoord.x = shadowCoord.x *0.5f +0.5f; shadowCoord.y = shadowCoord.y *0.5f +0.5f; return shadow2DArray(csmTextureArray,shadowCoord).x; } I then use the return value and simply multiply it with the diffuse color. That explains the different intensity of the cascades, since I'm grabbing the depth value directly from the texture. I've tried to do a depth comparison instead, but with limited success: [...] // Same code as above shadowCoord.w = shadowCoord.z; shadowCoord.z = float(index); shadowCoord.x = shadowCoord.x *0.5f +0.5f; shadowCoord.y = shadowCoord.y *0.5f +0.5f; float z = shadow2DArray(csmTextureArray,shadowCoord).x; if(z < shadowCoord.w) return 0.25f; return 1.f; } While this does give me the same shadow value everywhere, it only works for the first cascade, all others are blank: (I colored the cascades because otherwise the transitions wouldn't be visible in this case) What am I missing here?

    Read the article

  • flicker when drawing 4 models for the first time

    - by Badescu Alexandru
    i have some models that i only draw at a certain moment in the game (after some seconds since the game has started). The problem is that in that first second when i start to draw the models, i see a flicker (in the sence that everything besides those models, dissapears, the background gets purple). The flicker only lasts for that frame, and then everything seems to run the way it should. UPDATE I see now that regardless of the moment i draw the models, the first frame has always the flickering aspect What could this be about? i'll share my draw method: int temp = 0; foreach (MeshObject meshObj in ShapeList) { foreach (BasicEffect effect in meshObj.mesh.Effects) { #region color elements int i = int.Parse(meshObj.mesh.Name.ElementAt(1) + ""); int j = int.Parse(meshObj.mesh.Name.ElementAt(2) + ""); int getShapeColor = shapeColorList.ElementAt(i * 4 + j); if (getShapeColor == (int)Constants.shapeColor.yellow) effect.DiffuseColor = yellow; else if (getShapeColor == (int)Constants.shapeColor.red) effect.DiffuseColor = red; else if (getShapeColor == (int)Constants.shapeColor.green) effect.DiffuseColor = green; else if (getShapeColor == (int)Constants.shapeColor.blue) effect.DiffuseColor = blue; #endregion #region lighting effect.LightingEnabled = true; effect.AmbientLightColor = new Vector3(0.25f, 0.25f, 0.25f); effect.DirectionalLight0.Enabled = true; effect.DirectionalLight0.Direction = new Vector3(-0.3f, -0.3f, -0.9f); effect.DirectionalLight0.SpecularColor = new Vector3(.7f, .7f, .7f); Vector3 v = Vector3.Normalize(new Vector3(-100, 0, -100)); effect.DirectionalLight1.Enabled = true; effect.DirectionalLight1.Direction = v; effect.DirectionalLight1.SpecularColor = new Vector3(0.6f, 0.6f, .6f); #endregion effect.Projection = camera.projectionMatrix; effect.View = camera.viewMatrix; if (meshObj.isSetInPlace == true) { effect.World = transforms[meshObj.mesh.ParentBone.Index] * gameobject.orientation; // draw in original cube-placed position meshObj.mesh.Draw(); } else { effect.World = meshObj.Orientation; // draw inSetInPlace position meshObj.mesh.Draw(); } } temp++; }

    Read the article

  • iPhone SDK vs. Windows Phone 7 Series SDK Challenge, Part 2: MoveMe

    In this series, I will be taking sample applications from the iPhone SDK and implementing them on Windows Phone 7 Series.  My goal is to do as much of an apples-to-apples comparison as I can.  This series will be written to not only compare and contrast how easy or difficult it is to complete tasks on either platform, how many lines of code, etc., but Id also like it to be a way for iPhone developers to either get started on Windows Phone 7 Series development, or for developers in general to learn the platform. Heres my methodology: Run the iPhone SDK app in the iPhone Simulator to get a feel for what it does and how it works, without looking at the implementation Implement the equivalent functionality on Windows Phone 7 Series using Silverlight. Compare the two implementations based on complexity, functionality, lines of code, number of files, etc. Add some functionality to the Windows Phone 7 Series app that shows off a way to make the scenario more interesting or leverages an aspect of the platform, or uses a better design pattern to implement the functionality. You can download Microsoft Visual Studio 2010 Express for Windows Phone CTP here, and the Expression Blend 4 Beta here. If youre seeing this series for the first time, check out Part 1: Hello World. A note on methodologyin the prior post there was some feedback about lines of code not being a very good metric for this exercise.  I dont really disagree, theres a lot more to this than lines of code but I believe that is a relevant metric, even if its not the ultimate one.  And theres no perfect answer here.  So I am going to continue to report the number of lines of code that I, as a developer would need to write in these apps as a data point, and Ill leave it up to the reader to determine how that fits in with overall complexity, etc.  The first example was so basic that I think it was difficult to talk about in real terms.  I think that as these apps get more complex, the subjective differences in concept count and will be more important.  MoveMe The MoveMe app is the main end-to-end app writing example in the iPhone SDK, called Creating an iPhone Application.  This application demonstrates a few concepts, including handling touch input, how to do animations, and how to do some basic transforms. The behavior of the application is pretty simple.  User touches the button: The button does a throb type animation where it scales up and then back down briefly. User drags the button: After a touch begins, moving the touch point will drag the button around with the touch. User lets go of the button: The button animates back to its original position, but does a few small bounces as it reaches its original point, which makes the app fun and gives it an extra bit of interactivity. Now, how would I write an app that meets this spec for Windows Phone 7 Series, and how hard would it be?  Lets find out!     Implementing the UI Okay, lets build the UI for this application.  In the HelloWorld example, we did all the UI design in Visual Studio and/or by hand in XAML.  In this example, were going to use the Expression Blend 4 Beta. You might be wondering when to use Visual Studio, when to use Blend, and when to do XAML by hand.  Different people will have different takes on this, but heres mine: XAML by hand simple UI that doesnt contain animations, gradients, etc., and or UI that I want to really optimize and craft when I know exactly what I want to do. Visual Studio Basic UI layout, property setting, data binding, etc. 
Blend Any serious design work needs to be done in Blend, including animations, handling states and transitions, styling and templating, editing resources. As in Part 1, go ahead and fire up Visual Studio 2010 Express for Windows Phone (yes, soon it will take longer to say the name of our products than to start them up!), and create a new Windows Phone Application.  As in Part 1, clear out the XAML from the designer.  An easy way to do this is to just: Click on the design surface Hit Control+A Hit Delete Theres a little bit left over (the Grid.RowDefinitions element), just go ahead and delete that element so were starting with a clean state of only one outer Grid element. To use Blend, we need to save this project.  See, when you create a project with Visual Studio Express, it doesnt commit it to the disk (well, in a place where you can find it, at least) until you actually save the project.  This is handy if youre doing some fooling around, because it doesnt clutter your disk with WindowsPhoneApplication23-like directories.  But its also kind of dangerous, since when you close VS, if you dont save the projectits all gone.  Yes, this has bitten me since I was saving files and didnt remember that, so be careful to save the project/solution via Save All, at least once. So, save and note the location on disk.  Start Expression Blend 4 Beta, and chose File > Open Project/Solution, and load your project.  You should see just about the same thing you saw over in VS: a blank, black designer surface. Now, thinking about this application, we dont really need a button, even though it looks like one.  We never click it.  So were just going to create a visual and use that.  This is also true in the iPhone example above, where the visual is actually not a button either but a jpg image with a nice gradient and round edges.  Well do something simple here that looks pretty good. In Blend, look in the tool pane on the left for the icon that looks like the below (the highlighted one on the left), and hold it down to get the popout menu, and choose Border:    Okay, now draw out a box in the middle of the design surface of about 300x100.  The Properties Pane to the left should show the properties for this item. First, lets make it more visible by giving it a border brush.  Set the BorderBrush to white by clicking BorderBrush and dragging the color selector all the way to the upper right in the palette.  Then, down a bit farther, make the BorderThickness 4 all the way around, and the CornerRadius set to 6. In the Layout section, do the following to Width, Height, Horizontal and Vertical Alignment, and Margin (all 4 margin values): Youll see the outline now is in the middle of the design surface.  Now lets give it a background color.  Above BorderBrush select Background, and click the third tab over: Gradient Brush.  Youll see a gradient slider at the bottom, and if you click the markers, you can edit the gradient stops individually (or add more).  In this case, you can select something you like, but wheres what I chose: Left stop: #BFACCFE2 (I just picked a spot on the palette and set opacity to 75%, no magic here, feel free to fiddle these or just enter these numbers into the hex area and be done with it) Right stop: #FF3E738F Okay, looks pretty good.  Finally set the name of the element in the Name field at the top of the Properties pane to welcome. Now lets add some text.  
Just hit T and itll select the TextBlock tool automatically: Now draw out some are inside our welcome visual and type Welcome!, then click on the design surface (to exit text entry mode) and hit V to go back into selection mode (or the top item in the tool pane that looks like a mouse pointer).  Click on the text again to select it in the tool pane.  Just like the border, we want to center this.  So set HorizontalAlignment and VerticalAlignment to Center, and clear the Margins: Thats it for the UI.  Heres how it looks, on the design surface: Not bad!  Okay, now the fun part Adding Animations Using Blend to build animations is a lot of fun, and its easy.  In XAML, I can not only declare elements and visuals, but also I can declare animations that will affect those visuals.  These are called Storyboards. To recap, well be doing two animations: The throb animation when the element is touched The center animation when the element is released after being dragged. The throb animation is just a scale transform, so well do that first.  In the Objects and Timeline Pane (left side, bottom half), click the little + icon to add a new Storyboard called touchStoryboard: The timeline view will appear.  In there, click a bit to the right of 0 to create a keyframe at .2 seconds: Now, click on our welcome element (the Border, not the TextBlock in it), and scroll to the bottom of the Properties Pane.  Open up Transform, click the third tab ("Scale), and set X and Y to 1.2: This all of this says that, at .2 seconds, I want the X and Y size of this element to scale to 1.2. In fact you can see this happen.  Push the Play arrow in the timeline view, and youll see the animation run! Lets make two tweaks.  First, we want the animation to automatically reverse so it scales up then back down nicely. Click in the dropdown that says touchStoryboard in Objects and Timeline, then in the Properties pane check Auto Reverse: Now run it again, and youll see it go both ways. Lets even make it nicer by adding an easing function. First, click on the Render Transform item in the Objects tree, then, in the Property Pane, youll see a bunch of easing functions to choose from.  Feel free to play with this, then seeing how each runs.  I chose Circle In, but some other ones are fun.  Try them out!  Elastic In is kind of fun, but well stick with Circle In.  Thats it for that animation. Now, we also want an animation to move the Border back to its original position when the user ends the touch gesture.  This is exactly the same process as above, but just targeting a different transform property. Create a new animation called releaseStoryboard Select a timeline point at 1.2 seconds. Click on the welcome Border element again Scroll to the Transforms panel at the bottom of the Properties Pane Choose the first tab (Translate), which may already be selected Set both X and Y values to 0.0 (we do this just to make the values stick, because the value is already 0 and we need Blend to know we want to save that value) Click on RenderTransform in the Objects tree In the properties pane, choose Bounce Out Set Bounces to 6, and Bounciness to 4 (feel free to play with these as well) Okay, were done. Note, if you want to test this Storyboard, you have to do something a little tricky because the final value is the same as the initial value, so playing it does nothing.  
If you want to play with it, do the following: Next to the selection dropdown, hit the little "x (Close Storyboard) Go to the Translate Transform value for welcome Set X,Y to 50, 200, respectively (or whatever) Select releaseStoryboard again from the dropdown Hit play, see it run Go into the object tree and select RenderTransform to change the easing function. When youre done, hit the Close Storyboard x again and set the values in Transform/Translate back to 0 Wiring Up the Animations Okay, now go back to Visual Studio.  Youll get a prompt due to the modification of MainPage.xaml.  Hit Yes. In the designer, click on the welcome Border element.  In the Property Browser, hit the Events button, then double click each of ManipulationStarted, ManipulationDelta, ManipulationCompleted.  Youll need to flip back to the designer from code, after each double click. Its code time.  Here we go. Here, three event handlers have been created for us: welcome_ManipulationStarted: This will execute when a manipulation begins.  Think of it as MouseDown. welcome_ManipulationDelta: This executes each time a manipulation changes.  Think MouseMove. welcome_ManipulationCompleted: This will  execute when the manipulation ends. Think MouseUp. Now, in ManipuliationStarted, we want to kick off the throb animation that we called touchAnimation.  Thats easy: 1: private void welcome_ManipulationStarted(object sender, ManipulationStartedEventArgs e) 2: { 3: touchStoryboard.Begin(); 4: } .csharpcode, .csharpcode pre { font-size: small; color: black; font-family: consolas, "Courier New", courier, monospace; background-color: #ffffff; /*white-space: pre;*/ } .csharpcode pre { margin: 0em; } .csharpcode .rem { color: #008000; } .csharpcode .kwrd { color: #0000ff; } .csharpcode .str { color: #006080; } .csharpcode .op { color: #0000c0; } .csharpcode .preproc { color: #cc6633; } .csharpcode .asp { background-color: #ffff00; } .csharpcode .html { color: #800000; } .csharpcode .attr { color: #ff0000; } .csharpcode .alt { background-color: #f4f4f4; width: 100%; margin: 0em; } .csharpcode .lnum { color: #606060; } Likewise, when the manipulation completes, we want to re-center the welcome visual with our bounce animation: 1: private void welcome_ManipulationCompleted(object sender, ManipulationCompletedEventArgs e) 2: { 3: releaseStoryboard.Begin(); 4: } .csharpcode, .csharpcode pre { font-size: small; color: black; font-family: consolas, "Courier New", courier, monospace; background-color: #ffffff; /*white-space: pre;*/ } .csharpcode pre { margin: 0em; } .csharpcode .rem { color: #008000; } .csharpcode .kwrd { color: #0000ff; } .csharpcode .str { color: #006080; } .csharpcode .op { color: #0000c0; } .csharpcode .preproc { color: #cc6633; } .csharpcode .asp { background-color: #ffff00; } .csharpcode .html { color: #800000; } .csharpcode .attr { color: #ff0000; } .csharpcode .alt { background-color: #f4f4f4; width: 100%; margin: 0em; } .csharpcode .lnum { color: #606060; } Note there is actually a way to kick off these animations from Blend directly via something called Triggers, but I think its clearer to show whats going on like this.  A Trigger basically allows you to say When this event fires, trigger this Storyboard, so its the exact same logical process as above, but without the code. But how do we get the object to move?  Well, for that we really dont want an animation because we want it to respond immediately to user input. 
We do this by directly modifying the transform to match the offset for the manipulation, and then well let the animation bring it back to zero when the manipulation completes.  The manipulation events do a great job of keeping track of all the stuff that you usually had to do yourself when doing drags: where you started from, how far youve moved, etc. So we can easily modify the position as below: 1: private void welcome_ManipulationDelta(object sender, ManipulationDeltaEventArgs e) 2: { 3: CompositeTransform transform = (CompositeTransform)welcome.RenderTransform; 4:   5: transform.TranslateX = e.CumulativeManipulation.Translation.X; 6: transform.TranslateY = e.CumulativeManipulation.Translation.Y; 7: } .csharpcode, .csharpcode pre { font-size: small; color: black; font-family: consolas, "Courier New", courier, monospace; background-color: #ffffff; /*white-space: pre;*/ } .csharpcode pre { margin: 0em; } .csharpcode .rem { color: #008000; } .csharpcode .kwrd { color: #0000ff; } .csharpcode .str { color: #006080; } .csharpcode .op { color: #0000c0; } .csharpcode .preproc { color: #cc6633; } .csharpcode .asp { background-color: #ffff00; } .csharpcode .html { color: #800000; } .csharpcode .attr { color: #ff0000; } .csharpcode .alt { background-color: #f4f4f4; width: 100%; margin: 0em; } .csharpcode .lnum { color: #606060; } Thats it! Go ahead and run the app in the emulator.  I suggest running without the debugger, its a little faster (CTRL+F5).  If youve got a machine that supports DirectX 10, youll see nice smooth GPU accelerated graphics, which also what it looks like on the phone, running at about 60 frames per second.  If your machine does not support DX10 (like the laptop Im writing this on!), it wont be quite a smooth so youll have to take my word for it! Comparing Against the iPhone This is an example where the flexibility and power of XAML meets the tooling of Visual Studio and Blend, and the whole experience really shines.  So, for several things that are declarative and 100% toolable with the Windows Phone 7 Series, this example does them with code on the iPhone.  In parens is the lines of code that I count to do these operations. PlacardView.m: 19 total LOC Creating the view that hosts the button-like image and the text Drawing the image that is the background of the button Drawing the Welcome text over the image (I think you could technically do this step and/or the prior one using Interface Builder) MoveMeView.m:  63 total LOC Constructing and running the scale (throb) animation (25) Constructing the path describing the animation back to center plus bounce effect (38) Beyond the code count, yy experience with doing this kind of thing in code is that its VERY time intensive.  When I was a developer back on Windows Forms, doing GDI+ drawing, we did this stuff a lot, and it took forever!  You write some code and even once you get it basically working, you see its not quite right, you go back, tweak the interval, or the math a bit, run it again, etc.  You can take a look at the iPhone code here to judge for yourself.  Scroll down to animatePlacardViewToCenter toward the bottom.  I dont think this code is terribly complicated, but its not what Id call simple and its not at all simple to get right. And then theres a few other lines of code running around for setting up the ViewController and the Views, about 15 lines between MoveMeAppDelegate, PlacardView, and MoveMeView, plus the assorted decls in the h files. 
Adding those up, I conservatively get something like 100 lines of code (19+63+15+decls) on iPhone that I have to write, by hand, to make this project work. The lines of code that I wrote in the examples above is 5 lines of code on Windows Phone 7 Series. In terms of incremental concept counts beyond the HelloWorld app, heres a shot at that: iPhone: Drawing Images Drawing Text Handling touch events Creating animations Scaling animations Building a path and animating along that Windows Phone 7 Series: Laying out UI in Blend Creating & testing basic animations in Blend Handling touch events Invoking animations from code This was actually the first example I tried converting, even before I did the HelloWorld, and I was pretty surprised.  Some of this is luck that this app happens to match up with the Windows Phone 7 Series platform just perfectly.  In terms of time, I wrote the above application, from scratch, in about 10 minutes.  I dont know how long it would take a very skilled iPhone developer to write MoveMe on that iPhone from scratch, but if I was to write it on Silverlight in the same way (e.g. all via code), I think it would likely take me at least an hour or two to get it all working right, maybe more if I ended up picking the wrong strategy or couldnt get the math right, etc. Making Some Tweaks Silverlight contains a feature called Projections to do a variety of 3D-like effects with a 2D surface. So lets play with that a bit. Go back to Blend and select the welcome Border in the object tree.  In its properties, scroll down to the bottom, open Transform, and see Projection at the bottom.  Set X,Y,Z to 90.  Youll see the element kind of disappear, replaced by a thin blue line. Now Create a new animation called startupStoryboard. Set its key time to .5 seconds in the timeline view Set the projection values above to 0 for X, Y, and Z. Save Go back to Visual Studio, and in the constructor, add the following bold code (lines 7-9 to the constructor: 1: public MainPage() 2: { 3: InitializeComponent(); 4:   5: SupportedOrientations = SupportedPageOrientation.Portrait; 6:   7: this.Loaded += (s, e) => 8: { 9: startupStoryboard.Begin(); 10: }; 11: } .csharpcode, .csharpcode pre { font-size: small; color: black; font-family: consolas, "Courier New", courier, monospace; background-color: #ffffff; /*white-space: pre;*/ } .csharpcode pre { margin: 0em; } .csharpcode .rem { color: #008000; } .csharpcode .kwrd { color: #0000ff; } .csharpcode .str { color: #006080; } .csharpcode .op { color: #0000c0; } .csharpcode .preproc { color: #cc6633; } .csharpcode .asp { background-color: #ffff00; } .csharpcode .html { color: #800000; } .csharpcode .attr { color: #ff0000; } .csharpcode .alt { background-color: #f4f4f4; width: 100%; margin: 0em; } .csharpcode .lnum { color: #606060; } If the code above looks funny, its using something called a lambda in C#, which is an inline anonymous method.  Its just a handy shorthand for creating a handler like the manipulation ones above. So with this youll get a nice 3D looking fly in effect when the app starts up.  Here it is, in flight: Pretty cool!Did you know that DotNetSlackers also publishes .net articles written by top known .net Authors? We already have over 80 articles in several categories including Silverlight. Take a look: here.

    Read the article

  • To Serve Man?

    - by Dave Convery
    Since the announcement of Windows 8 and its 'Metro' interface, the .NET community has wondered if the skills they've spent so long developing might be swept aside,in favour of HTML5 and JavaScript. Mercifully, that only seems to be true of SilverLight (as Simon Cooper points out), but it did leave me thinking how easy it is to impose a technology upon people without directly serving their needs. Case in point: QR codes. Once, probably, benign in purpose, they seem to have become a marketer's tool for determining when someone has engaged with an advert in the real world, with the same certainty as is possible online. Nobody really wants to use QR codes - it's far too much hassle. But advertisers want that data - they want to know that someone actually read their billboard / poster / cereal box, and so this flawed technology is suddenly everywhere, providing little to no value to the people who are actually meant to use it. What about 3D cinema? Profits from the film industry have been steadily increasing throughout the period that digital piracy and mass sharing has been possible, yet the industry cinema chains have forced 3D films upon a broadly uninterested audience, as a way of providing more purpose to going to a cinema, rather than watching it at home. Despite advances in digital projection, 3D cinema is scarcely more immersive to us than were William Castle's hoary old tricks of skeletons on wires and buzzing chairs were to our grandparents. iTunes - originally just a piece of software that catalogued and ripped music for you, but which is now multi-purpose bloatware; a massive, system-hogging behemoth. If it was being built for the people that used it, it would have been split into three or more separate pieces of software long ago. But as bloatware, it serves Apple primarily rather than us, stuffed with Music, Video, Various stores and phone / iPad management all bolted into one. Why? It's because, that way, you're more likely to bump into something you want to buy. You can't even buy a new laptop without finding that a significant chunk of your hard drive has been sold to 'select partners' - advertisers, suppliers of virus-busting software, and endless bloatware-flogging pop-ups that make using a new laptop without reformatting the hard drive like stepping back in time. The product you want is not the one you paid for. This is without even looking at services like Facebook and Klout, who provide a notional service with the intention of slurping up as much data about you as possible (in Klout's case, whether you create an account with them or not). What technologies do you find annoying or intrusive, and who benefits from keeping them around?

    Read the article

  • GLM Velocity Vectors - Basic Maths to Simulate Steering

    - by Reanimation
    UPDATE - Code updated below but still need help adjusting my math. I have a cube rendered on the screen which represents a car (or similar). Using Projection/Model matrices and Glm I am able to move it back and fourth along the axes and rotate it left or right. I'm having trouble with the vector mathematics to make the cube move forwards no matter which direction it's current orientation is. (ie. if I would like, if it's rotated right 30degrees, when it's move forwards, it travels along the 30degree angle on a new axes). I hope I've explained that correctly. This is what I've managed to do so far in terms of using glm to move the cube: glm::vec3 vel; //velocity vector void renderMovingCube(){ glUseProgram(movingCubeShader.handle()); GLuint matrixLoc4MovingCube = glGetUniformLocation(movingCubeShader.handle(), "ProjectionMatrix"); glUniformMatrix4fv(matrixLoc4MovingCube, 1, GL_FALSE, &ProjectionMatrix[0][0]); glm::mat4 viewMatrixMovingCube; viewMatrixMovingCube = glm::lookAt(camOrigin, camLookingAt, camNormalXYZ); vel.x = cos(rotX); vel.y=sin(rotX); vel*=moveCube; //move cube ModelViewMatrix = glm::translate(viewMatrixMovingCube,globalPos*vel); //bring ground and cube to bottom of screen ModelViewMatrix = glm::translate(ModelViewMatrix, glm::vec3(0,-48,0)); ModelViewMatrix = glm::rotate(ModelViewMatrix, rotX, glm::vec3(0,1,0)); //manually turn glUniformMatrix4fv(glGetUniformLocation(movingCubeShader.handle(), "ModelViewMatrix"), 1, GL_FALSE, &ModelViewMatrix[0][0]); //pass matrix to shader movingCube.render(); //draw glUseProgram(0); } keyboard input: void keyboard() { char BACKWARD = keys['S']; char FORWARD = keys['W']; char ROT_LEFT = keys['A']; char ROT_RIGHT = keys['D']; if (FORWARD) //W - move forwards { globalPos += vel; //globalPos.z -= moveCube; BACKWARD = false; } if (BACKWARD)//S - move backwards { globalPos.z += moveCube; FORWARD = false; } if (ROT_LEFT)//A - turn left { rotX +=0.01f; ROT_LEFT = false; } if (ROT_RIGHT)//D - turn right { rotX -=0.01f; ROT_RIGHT = false; } Where am I going wrong with my vectors? I would like change the direction of the cube (which it does) but then move forwards in that direction.

    Read the article

  • Drawing a texture at the end of a trace (crosshair?) UDK

    - by Dave Voyles
    I'm trying to draw a crosshair at the end of my trace. If my crosshair does not hit a pawn or static mesh (ex, just a skybox) then the crosshair stays locked on a certain point at that actor - I want to say its origin. Ex: Run across a pawn, then it turns yellow and stays on that pawn. If it runs across the skybox, then it stays at one point on the box. Weird? How can I get my crosshair to stay consistent? I've included two images for reference, to help illustrate. Note: The wrench is actually my crosshair. The "X" is just a debug crosshair. Ignore that. /// Image 1 /// /// Image 2 /// /*************************************************************************** * Draws the crosshair ***************************************************************************/ function bool CheckCrosshairOnFriendly() { local float CrosshairSize; local vector HitLocation, HitNormal, StartTrace, EndTrace, ScreenPos; local actor HitActor; local MyWeapon W; local Pawn MyPawnOwner; /** Sets the PawnOwner */ MyPawnOwner = Pawn(PlayerOwner.ViewTarget); /** Sets the Weapon */ W = MyWeapon(MyPawnOwner.Weapon); /** If we don't have an owner, then get out of the function */ if ( MyPawnOwner == None ) { return false; } /** If we have a weapon... */ if ( W != None) { /** Values for the trace */ StartTrace = W.InstantFireStartTrace(); EndTrace = StartTrace + W.MaxRange() * vector(PlayerOwner.Rotation); HitActor = MyPawnOwner.Trace(HitLocation, HitNormal, EndTrace, StartTrace, true, vect(0,0,0),, TRACEFLAG_Bullet); DrawDebugLine(StartTrace, EndTrace, 100,100,100,); /** Projection for the crosshair to convert 3d coords into 2d */ ScreenPos = Canvas.Project(HitLocation); /** If we haven't hit any actors... */ if ( Pawn(HitActor) == None ) { HitActor = (HitActor == None) ? None : Pawn(HitActor.Base); } } /** If our trace hits a pawn... */ if ((Pawn(HitActor) == None)) { /** Draws the crosshair for no one - Grey*/ CrosshairSize = 28 * (Canvas.ClipY / 768) * (Canvas.ClipX /1024); Canvas.SetDrawColor(100,100,128,255); Canvas.SetPos(ScreenPos.X - (CrosshairSize * 0.5f), ScreenPos.Y -(CrosshairSize * 0.5f)); Canvas.DrawTile(class'UTHUD'.default.AltHudTexture, CrosshairSize, CrosshairSize, 600, 262, 28, 27); return false; } /** Draws the crosshair for friendlies - Yellow */ CrosshairSize = 28 * (Canvas.ClipY / 768) * (Canvas.ClipX /1024); Canvas.SetDrawColor(255,255,128,255); Canvas.SetPos(ScreenPos.X - (CrosshairSize * 0.5f), ScreenPos.Y -(CrosshairSize * 0.5f)); Canvas.DrawTile(class'UTHUD'.default.AltHudTexture, CrosshairSize, CrosshairSize, 600, 262, 28, 27); return true; }
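
    One possible explanation, offered as an assumption rather than a confirmed reading of the UDK API: when the trace hits nothing, HitLocation may simply never be written, so Canvas.Project keeps projecting the same stale point and the crosshair appears locked in place. The usual pattern is to fall back to the far end of the trace on a miss before projecting. A minimal C++-style sketch of that pattern, using hypothetical stand-ins for the engine's trace and projection calls:

        #include <optional>

        struct Vec3 { float x = 0, y = 0, z = 0; };
        struct Vec2 { float x = 0, y = 0; };

        // Hypothetical stand-in for the engine's trace: empty result means a miss.
        std::optional<Vec3> traceWorld(const Vec3& /*start*/, const Vec3& /*end*/)
        {
            return std::nullopt; // pretend we only hit the skybox
        }

        // Hypothetical stand-in for a world-space to screen-space projection.
        Vec2 projectToScreen(const Vec3& world)
        {
            return { world.x, world.y }; // placeholder projection
        }

        Vec2 crosshairScreenPos(const Vec3& start, const Vec3& end)
        {
            // On a miss, project the far end of the trace instead of a stale hit
            // location, so the crosshair keeps following the view direction.
            const std::optional<Vec3> hit = traceWorld(start, end);
            return projectToScreen(hit ? *hit : end);
        }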

    Read the article

  • Open GL polygons not displaying

    - by Darestium
    I have tried to follow NeHe's OpenGL tutorial, lesson 2. I use SFML for my window creation. The problem I have is that both the triangle and the quad don't show up on the screen: #include <SFML/System.hpp> #include <SFML/Window.hpp> #include <iostream> void processEvents(sf::Window *app); void processInput(sf::Window *app, const sf::Input &input); void renderCube(sf::Window *app, sf::Clock *clock); void renderGlScene(sf::Window *app); void init(); int main() { sf::Window app(sf::VideoMode(800, 600, 32), "Nehe Lesson 2"); app.UseVerticalSync(false); init(); while (app.IsOpened()) { processEvents(&app); renderGlScene(&app); app.Display(); } return EXIT_SUCCESS; } void init() { glClearDepth(1.f); glClearColor(0.f, 0.f, 0.f, 0.f); // Enable z-buffer and read and write glEnable(GL_DEPTH_TEST); glDepthMask(GL_TRUE); // Setup a perspective projection glMatrixMode(GL_MODELVIEW); glLoadIdentity(); gluPerspective(45.f, 1.f, 1.f, 500.f); glShadeModel(GL_SMOOTH); } void processEvents(sf::Window *app) { sf::Event event; while (app->GetEvent(event)) { if (event.Type == sf::Event::Closed) { app->Close(); } if (event.Type == sf::Event::KeyPressed && event.Key.Code == sf::Key::Escape) { app->Close(); } } } void renderGlScene(sf::Window *app) { app->SetActive(); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear the screen and the depth buffer glLoadIdentity(); // Reset the view glTranslatef(-1.5f, 0.0f, -6.0f); // Move Left 1.5 units and into the screen 6.0 glBegin(GL_TRIANGLES); glVertex3f( 0.0f, 1.0f, 0.0f); // Top glVertex3f(-1.0,-1.0f, 0.0f); // Bottom Left glVertex3f( 1.0f,-1.0f, 0.0f); // Bottom Right glEnd(); glTranslatef(3.0f, 0.0f, 0.0f); glBegin(GL_QUADS); // Draw a quad glVertex3f(-1.0f, 1.0f, 0.0f); glVertex3f( 1.0f, 1.0f, 0.0f); glVertex3f( 1.0f,-1.0f, 0.0f); glVertex3f(-1.0f,-1.0f, 0.0f); glEnd(); } I would greatly appreciate it if someone could help me resolve my issue.
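
    A likely culprit, judging only from the code above rather than from testing this exact program: gluPerspective is called while the matrix mode is still GL_MODELVIEW, so no projection matrix is ever set, and the glLoadIdentity at the start of renderGlScene then wipes the perspective out of the modelview matrix; depending on the SFML version, an explicit glViewport call may also be worth adding. A minimal sketch of a corrected init, assuming the 800x600 window from the question:

        #include <GL/glu.h> // gluPerspective; pulls in GL/gl.h for the rest

        void init()
        {
            glClearDepth(1.f);
            glClearColor(0.f, 0.f, 0.f, 0.f);

            // Enable z-buffer read and write
            glEnable(GL_DEPTH_TEST);
            glDepthMask(GL_TRUE);

            // Map normalized device coordinates to the 800x600 window
            glViewport(0, 0, 800, 600);

            // Set up the perspective projection on the PROJECTION stack...
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            gluPerspective(45.f, 800.f / 600.f, 1.f, 500.f);

            // ...then switch back to MODELVIEW for the per-frame transforms
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();

            glShadeModel(GL_SMOOTH);
        }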

    Read the article

  • Accounting for waves when doing planar reflections

    - by CloseReflector
    I've been studying Nvidia's examples from the SDK, in particular the Island11 project, and I've found something curious about a piece of HLSL code which corrects the reflections up and down depending on the state of the wave's height. Naturally, after examining the brief paragraph of code: // calculating correction that shifts reflection up/down according to water wave Y position float4 projected_waveheight = mul(float4(input.positionWS.x,input.positionWS.y,input.positionWS.z,1),g_ModelViewProjectionMatrix); float waveheight_correction=-0.5*projected_waveheight.y/projected_waveheight.w; projected_waveheight = mul(float4(input.positionWS.x,-0.8,input.positionWS.z,1),g_ModelViewProjectionMatrix); waveheight_correction+=0.5*projected_waveheight.y/projected_waveheight.w; reflection_disturbance.y=max(-0.15,waveheight_correction+reflection_disturbance.y); My first guess was that it compensates for the planar reflection when it is subjected to vertical perturbation (the waves), shifting the reflected geometry to a point where there is nothing to reflect, so the water is rendered as if only the sky were there: Now, that's the sky reflecting where we should see the terrain's green/grey/yellowish reflection lerped with the water's baseline. My problem now is that I cannot really pinpoint the logic behind it. Projecting the actual world space position of a point of the wave/water geometry and then multiplying by -.5f, only to take another projection of the same point, this time with its y coordinate changed to -0.8 (why -0.8?). Clues in the code seem to indicate it was derived with trial and error because there is redundancy. For example, the author takes the negative half of the projected y coordinate (after the w divide): float waveheight_correction=-0.5*projected_waveheight.y/projected_waveheight.w; And then does the same for the second point (only positive, to get a difference of some sort, I presume) and combines them: waveheight_correction+=0.5*projected_waveheight.y/projected_waveheight.w; By removing the divide by 2, I see no difference in quality improvement (if someone cares to correct me, please do). The crux of it seems to be the difference in the projected y; why is that? This redundancy and the seemingly arbitrary selection of -.8f and -0.15f lead me to conclude that this might be a combination of heuristics/guesswork. Is there a logical underpinning to this or is it just a desperate hack? Here is an exaggeration of the initial problem which the code fragment fixes, observed at the lowest tessellation level. Hopefully, it might spark an idea I'm missing. The -.8f might be a reference height from which to deduce how much to disturb the texture coordinate sampling the planarly reflected geometry render and -.15f might be the lower bound, a safety measure.
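
    One plausible reading of the fragment (an interpretation, not something stated in the SDK sample): the two projections measure, in post-divide normalized device coordinates, how far the displaced wave vertex sits from a fixed reference water level at y = -0.8 in world space, and the factor of 0.5 converts that NDC difference into a texture-coordinate offset for the reflection lookup (since uv = 0.5 * ndc + 0.5 when sampling a screen-space render target), with -0.15 acting as a hand-tuned lower clamp. In LaTeX, with M the model-view-projection matrix:

        y_{ndc}(y) \;=\; \frac{\big[(x,\,y,\,z,\,1)\,M\big]_y}{\big[(x,\,y,\,z,\,1)\,M\big]_w}, \qquad
        \text{correction} \;=\; \tfrac{1}{2}\Big(y_{ndc}(-0.8) - y_{ndc}(y_{\text{wave}})\Big),

        \text{disturbance}_y \;\leftarrow\; \max\big(-0.15,\; \text{correction} + \text{disturbance}_y\big)

    Under that reading, the divide by 2 is not redundant so much as the NDC-to-UV scale, and -0.8 is presumably just the resting height of the water surface in the island's world units.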

    Read the article
