Search Results

Search found 951 results on 39 pages for 'surface'.

Page 12/39

  • From simple physics with a ball, to a more complicated shape

    - by Maximus
    Hello fellow game devs and stack overflowers... I recently made the transition from OpenGL ES 1.1 to 2.0 (on Android via the NDK) and things are going well so far. I'm working on a dice-rolling application (gaming dice up to 20-sided, not just the regular 6-sided die) as a way to learn more about how physics is implemented in a gaming environment. I've explored existing engine options (such as Bullet) and I don't think I need something quite so sophisticated. I've found several tutorials that handle a lot of the general physics involved with initial trajectory, velocity, angle of contact and reflection angle, etc. I'm confident that I'd be able to implement ball-like behavior without much trouble. My question arises when I attempt to make the interaction of the die shape with another surface more "realistic," for example when the die strikes the floor surface at such an angle that only one corner makes contact with the floor. In my mind, the center of gravity of the object would play a part in determining how the die bounces away, possibly even spinning it faster, etc., but I am not sure what the actual math involved is. Are there any recommended resources for getting into this level of detail? Initial searches haven't turned up much... Thanks to everyone in the community, -Jeremiah
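
    For a single corner hit, the standard rigid-body answer is an impulse applied at the contact point: the offset from the centre of mass turns part of the impulse into spin. A minimal sketch of that response, assuming a simplified scalar inertia and a frictionless floor; Vec3, cross and dot stand in for whatever math types the engine already has, and all names are illustrative rather than taken from any particular engine:

        // Hedged sketch: impulse response for a single corner contact with the floor.
        // Vec3, cross() and dot() are placeholders for the engine's own math types;
        // inertia is an idealised scalar (a real die wants a 3x3 inertia tensor).
        struct RigidDie {
            Vec3  pos, vel;       // centre of mass position and linear velocity
            Vec3  angVel;         // angular velocity in world space
            float mass;
            float inertia;        // simplified scalar moment of inertia
            float restitution;    // 0 = dead drop, 1 = perfectly bouncy
        };

        void resolveCornerHit(RigidDie& die, const Vec3& contactPoint, const Vec3& floorNormal)
        {
            Vec3  r  = contactPoint - die.pos;                    // lever arm from the centre of mass
            Vec3  vC = die.vel + cross(die.angVel, r);            // velocity of the corner itself
            float vn = dot(vC, floorNormal);
            if (vn >= 0.0f) return;                               // already separating, nothing to do

            // Impulse magnitude for a point contact (standard rigid-body result):
            // j = -(1 + e) * vn / (1/m + |r x n|^2 / I)
            Vec3  rxn = cross(r, floorNormal);
            float j   = -(1.0f + die.restitution) * vn /
                        (1.0f / die.mass + dot(rxn, rxn) / die.inertia);

            die.vel    = die.vel + floorNormal * (j / die.mass);                       // push the die away
            die.angVel = die.angVel + cross(r, floorNormal * j) * (1.0f / die.inertia); // and spin it about the corner
        }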

    Read the article

  • What would be a good filter to create 'magnetic deformers' from a depth map?

    - by sebf
    In my project, I am creating a system for deforming a highly detailed mesh (clothing) so that it 'fits' a convex mesh. To do this I use depth maps of the item and the 'hull' to determine at what point in world space the deviation occurs and its extent. Simply transforming all occluded vertices to the depths defined by the 'hull' is fairly effective, and has good performance, but it suffers from not preserving the features of the mesh and requires extensive culling to avoid false positives. I would like instead to generate from the depth deviation map a set of simple 'deformers' which will 'push'* all vertices of the deformed mesh outwards (in world space). This way, all features of the mesh are preserved and there is no need for complex heuristics to cull inappropriate vertices. I am not sure how to go about generating this deformer set, however. I am imagining something like an algorithm that attempts to match a spherical surface to each patch of contiguous deviations within a certain range, but I do not know where to start. Can anyone suggest a suitable filter or algorithm for generating deformers? Or, to put it another way, for 'compressing' a depth map? (*Push because it's fitting to a convex 'bulgy' humanoid, so transforms are likely to be 'spherical' from the POV of the surface.)
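
    One possible way to 'compress' the deviation map, sketched under the assumption that it is a plain 2D float array: flood-fill contiguous deviating texels into patches, then emit one sphere per patch (centre from the patch centroid, radius from its area, strength from its mean deviation). All names below are illustrative, and converting texel coordinates plus depth back into world space is left out:

        #include <vector>
        #include <queue>
        #include <cmath>

        // Hedged sketch: turn a depth-deviation map into a set of spherical "deformers".
        // dev[y*w + x] > threshold marks a texel where the mesh sinks below the hull.
        struct Deformer { float cx, cy, cz, radius, strength; };

        std::vector<Deformer> extractDeformers(const std::vector<float>& dev,
                                               int w, int h, float threshold)
        {
            std::vector<int> label(w * h, -1);
            std::vector<Deformer> out;

            for (int start = 0; start < w * h; ++start) {
                if (dev[start] <= threshold || label[start] != -1) continue;

                // Flood-fill one contiguous patch of deviating texels.
                std::queue<int> q;
                q.push(start);
                label[start] = (int)out.size();
                double sx = 0, sy = 0, sdev = 0; int count = 0;

                while (!q.empty()) {
                    int i = q.front(); q.pop();
                    int x = i % w, y = i / w;
                    sx += x; sy += y; sdev += dev[i]; ++count;
                    const int nb[4][2] = {{1,0},{-1,0},{0,1},{0,-1}};
                    for (auto& d : nb) {
                        int nx = x + d[0], ny = y + d[1];
                        if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                        int j = ny * w + nx;
                        if (dev[j] > threshold && label[j] == -1) { label[j] = label[start]; q.push(j); }
                    }
                }

                // One sphere per patch: centre from the centroid, radius from the area,
                // strength from the mean deviation. World-space conversion is left to you.
                Deformer def;
                def.cx = (float)(sx / count);
                def.cy = (float)(sy / count);
                def.cz = 0.0f;                                          // fill in from your depth map
                def.radius = std::sqrt((float)count / 3.14159265f);     // area -> radius, in texels
                def.strength = (float)(sdev / count);
                out.push_back(def);
            }
            return out;
        }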

    Read the article

  • Introducing Code Map for Visual Studio 2012 September CTP

    - by krislankford
    As part of the Visual Studio 2012 CTP for September, Visual Studio got a little sexier at helping you discover and visualize your code. The introduction of the Code Map feature helps complement the variety of other tools included with Visual Studio to help you analyze and visualize your projects and solutions. Code Map leverages the DGML format within Visual Studio that is currently used by the Architecture and Modeling tools. This is a nice addition that gets us from point A to point B a little faster. The great thing about Code Map is that you can access the functionality directly within your code from the context menu. The Code Map functionality is also context specific, based on your cursor. You can evaluate and add items such as methods and variables directly to the Code Map window. As you add items, the Code Map surface is updated to show your new item plus any relationships and dependencies that have been introduced in your code. Something else that is very nice is that the Code Map surface is interactive and allows you to use the F12 key (Go To Definition), which can help you navigate your code, especially if you are adding items that span multiple files or projects. To get started, all you have to do is go out and download the September CTP for Visual Studio 2012, located here. Happy Coding! (Figure: Code Map Window)

    Read the article

  • Recasting and Drawing in SDL

    - by user1078123
    I have some code that essentially draws a column on the screen of a wall in a raycasting-type 3D engine. I am trying to optimize it, as it takes about 10 milliseconds to draw a million pixels this way, and the vast majority of game time is spent in this loop. However, I don't quite understand what's occurring, particularly the recasting (I modified the "pixel manipulation" sample code from the SDL documentation). "canvas" is the surface I am drawing to, and "hello" is the surface containing the texture for the column. int c = (curcol)* canvas->format->BytesPerPixel; void *canvaspixels = canvas->pixels; Uint16 texpitch = hello->pitch; int lim = (drawheight +startdraw) * canvpitch +c + (int) canvaspixels; Uint8 *k = (Uint8 *)hello->pixels + (hit)* hello->format->BytesPerPixel; for (int j= (startdraw)*(canvpitch)+c + (int) canvaspixels; (j< lim); j+= canvpitch){ Uint8 *q = (Uint8 *) ((int(h))*(texpitch)+k); *(Uint32 *)j = *(Uint32 *)q; h += s; } We have void pointers (not sure how those are even represented), 8-, 16-, and 32-bit ints (h and s are floats), all being intermingled, and while it works, it is quite confusing.
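
    The casts become easier to follow if the addresses are kept as Uint8* byte offsets instead of being round-tripped through int. A hedged, roughly equivalent rewrite of the same column copy, assuming canvpitch is canvas->pitch, both surfaces are 32-bit, and curcol, startdraw, drawheight, hit, h and s mean what they do above:

        // Hedged rewrite of the same column copy using byte pointers only.
        Uint8* dst = (Uint8*)canvas->pixels
                   + startdraw * canvas->pitch
                   + curcol * canvas->format->BytesPerPixel;   // top of the destination column
        Uint8* src = (Uint8*)hello->pixels
                   + hit * hello->format->BytesPerPixel;       // texture column selected by the ray hit

        for (int row = 0; row < drawheight; ++row) {
            // Pick the texel for this screen row, then step one row down on the canvas.
            *(Uint32*)dst = *(Uint32*)(src + (int)h * hello->pitch);
            dst += canvas->pitch;
            h   += s;
        }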

    Read the article

  • Changes to the LINQ-to-StreamInsight Dialect

    - by Roman Schindlauer
    In previous versions of StreamInsight (1.0 through 2.0), CepStream<> represents temporal streams of many varieties: Streams with ‘open’ inputs (e.g., those defined and composed over CepStream<T>.Create(string streamName) Streams with ‘partially bound’ inputs (e.g., those defined and composed over CepStream<T>.Create(Type adapterFactory, …)) Streams with fully bound inputs (e.g., those defined and composed over To*Stream – sequences or DQC) The stream may be embedded (where Server.Create is used) The stream may be remote (where Server.Connect is used) When adding support for new programming primitives in StreamInsight 2.1, we faced a choice: Add a fourth variety (use CepStream<> to represent streams that are bound the new programming model constructs), or introduce a separate type that represents temporal streams in the new user model. We opted for the latter. Introducing a new type has the effect of reducing the number of (confusing) runtime failures due to inappropriate uses of CepStream<> instances in the incorrect context. The new types are: IStreamable<>, which logically represents a temporal stream. IQStreamable<> : IStreamable<>, which represents a queryable temporal stream. Its relationship to IStreamable<> is analogous to the relationship of IQueryable<> to IEnumerable<>. The developer can compose temporal queries over remote stream sources using this type. The syntax of temporal queries composed over IQStreamable<> is mostly consistent with the syntax of our existing CepStream<>-based LINQ provider. However, we have taken the opportunity to refine certain aspects of the language surface. Differences are outlined below. Because 2.1 introduces new types to represent temporal queries, the changes outlined in this post do no impact existing StreamInsight applications using the existing types! SelectMany StreamInsight does not support the SelectMany operator in its usual form (which is analogous to SQL’s “CROSS APPLY” operator): static IEnumerable<R> SelectMany<T, R>(this IEnumerable<T> source, Func<T, IEnumerable<R>> collectionSelector) It instead uses SelectMany as a convenient syntactic representation of an inner join. The parameter to the selector function is thus unavailable. Because the parameter isn’t supported, its type in StreamInsight 1.0 – 2.0 wasn’t carefully scrutinized. Unfortunately, the type chosen for the parameter is nonsensical to LINQ programmers: static CepStream<R> SelectMany<T, R>(this CepStream<T> source, Expression<Func<CepStream<T>, CepStream<R>>> streamSelector) Using Unit as the type for the parameter accurately reflects the StreamInsight’s capabilities: static IQStreamable<R> SelectMany<T, R>(this IQStreamable<T> source, Expression<Func<Unit, IQStreamable<R>>> streamSelector) For queries that succeed – that is, queries that do not reference the stream selector parameter – there is no difference between the code written for the two overloads: from x in xs from y in ys select f(x, y) Top-K The Take operator used in StreamInsight causes confusion for LINQ programmers because it is applied to the (unbounded) stream rather than the (bounded) window, suggesting that the query as a whole will return k rows: (from win in xs.SnapshotWindow() from x in win orderby x.A select x.B).Take(k) The use of SelectMany is also unfortunate in this context because it implies the availability of the window parameter within the remainder of the comprehension. 
The following compiles but fails at runtime: (from win in xs.SnapshotWindow() from x in win orderby x.A select win).Take(k) The Take operator in 2.1 is applied to the window rather than the stream: Before After (from win in xs.SnapshotWindow() from x in win orderby x.A select x.B).Take(k) from win in xs.SnapshotWindow() from b in     (from x in win     orderby x.A     select x.B).Take(k) select b Multicast We are introducing an explicit multicast operator in order to preserve expression identity, which is important given the semantics about moving code to and from StreamInsight. This also better matches existing LINQ dialects, such as Reactive. This pattern enables expressing multicasting in two ways: Implicit Explicit var ys = from x in xs          where x.A > 1          select x; var zs = from y1 in ys          from y2 in ys.ShiftEventTime(_ => TimeSpan.FromSeconds(1))          select y1 + y2; var ys = from x in xs          where x.A > 1          select x; var zs = ys.Multicast(ys1 =>     from y1 in ys1     from y2 in ys1.ShiftEventTime(_ => TimeSpan.FromSeconds(1))     select y1 + y2; Notice the product translates an expression using implicit multicast into an expression using the explicit multicast operator. The user does not see this translation. Default window policies Only default window policies are supported in the new surface. Other policies can be simulated by using AlterEventLifetime. Before After xs.SnapshotWindow(     WindowInputPolicy.ClipToWindow,     SnapshotWindowInputPolicy.Clip) xs.SnapshotWindow() xs.TumblingWindow(     TimeSpan.FromSeconds(1),     HoppingWindowOutputPolicy.PointAlignToWindowEnd) xs.TumblingWindow(     TimeSpan.FromSeconds(1)) xs.TumblingWindow(     TimeSpan.FromSeconds(1),     HoppingWindowOutputPolicy.ClipToWindowEnd) Not supported … LeftAntiJoin Representation of LASJ as a correlated sub-query in the LINQ surface is problematic as the StreamInsight engine does not support correlated sub-queries (see discussion of SelectMany). The current syntax requires the introduction of an otherwise unsupported ‘IsEmpty()’ operator. As a result, the pattern is not discoverable and implies capabilities not present in the server. The direct representation of LASJ is used instead: Before After from x in xs where     (from y in ys     where x.A > y.B     select y).IsEmpty() select x xs.LeftAntiJoin(ys, (x, y) => x.A > y.B) from x in xs where     (from y in ys     where x.A == y.B     select y).IsEmpty() select x xs.LeftAntiJoin(ys, x => x.A, y => y.B) ApplyWithUnion The ApplyWithUnion methods have been deprecated since their signatures are redundant given the standard SelectMany overloads: Before After xs.GroupBy(x => x.A).ApplyWithUnion(gs => from win in gs.SnapshotWindow() select win.Count()) xs.GroupBy(x => x.A).SelectMany(     gs =>     from win in gs.SnapshotWindow()     select win.Count()) xs.GroupBy(x => x.A).ApplyWithUnion(gs => from win in gs.SnapshotWindow() select win.Count(), r => new { r.Key, Count = r.Payload }) from x in xs group x by x.A into gs from win in gs.SnapshotWindow() select new { gs.Key, Count = win.Count() } Alternate UDO syntax The representation of UDOs in the StreamInsight LINQ dialect confuses cardinalities. 
    Based on the semantics of user-defined operators in StreamInsight, one would expect to construct queries in the following form: from win in xs.SnapshotWindow() from y in MyUdo(win) select y Instead, the UDO proxy method is referenced within a projection, and the (many) results returned by the user code are automatically flattened into a stream: from win in xs.SnapshotWindow() select MyUdo(win) The “many-or-one” confusion is exemplified by the following example that compiles but fails at runtime: from win in xs.SnapshotWindow() select MyUdo(win) + win.Count() The above query must fail because the UDO is in fact returning many values per window while the count aggregate is returning one. Original syntax New alternate syntax from win in xs.SnapshotWindow() select win.UdoProxy(1) from win in xs.SnapshotWindow() from y in win.UserDefinedOperator(() => new Udo(1)) select y -or- from win in xs.SnapshotWindow() from y in win.UdoMacro(1) select y Notice that this formulation also sidesteps the dynamic type pitfalls of the existing “proxy method” approach to UDOs, in which the type of the UDO implementation (TInput, TOutput) and the type of its constructor arguments (TConfig) need to align in a precise and non-obvious way with the argument and return types for the corresponding proxy method. UDSO syntax UDSO currently leverages the DataContractSerializer to clone initial state for logical instances of the user operator. Initial state will instead be described by an expression in the new LINQ surface. Before After xs.Scan(new Udso()) xs.Scan(() => new Udso()) Name changes ShiftEventTime => AlterEventStartTime: The alter event lifetime overload taking a new start time value has been renamed. CountByStartTimeWindow => CountWindow

    Read the article

  • How to alter image pixels of a wild life bird?

    - by NoobScratcher
    Hello, so I was hoping someone knew how to move or change the color and position of actual image pixels, and could explain and show the code to do so. I know how to write pixels to a surface or screen surface: unsigned int *ptr = static_cast <unsigned int *> (screen->pixels); int offset = y * (screen->pitch / sizeof(unsigned int)); ptr[offset + x] = color; But I don't know how to alter or manipulate an image pixel of a PNG image. My thoughts on this were: How do I get the values and locations of pixels, and what do I have to write to make that happen? Then how do I actually change the values or locations of those pixels, and how do I make that happen? Any ideas, tips or suggestions are also welcome! int main(int argc , char *argv[]) { SDL_Surface *Screen = SDL_SetVideoMode(640,480,32,SDL_SWSURFACE); SDL_Surface *Image; Image = IMG_Load("image.png"); bool done = false; SDL_Event event; while(!done) { SDL_FillRect(Screen,NULL,(0,0,0)); SDL_BlitSurface(Image,NULL,Screen,NULL); while(SDL_PollEvent(&event)) { switch(event.type) { case SDL_QUIT: return 0; break; } } SDL_Flip(Screen); } return 0; }
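
    A hedged sketch of reading and writing individual pixels of the loaded image surface, assuming it is (or has been converted to) a 32-bit surface; SDL_GetRGB/SDL_MapRGB keep it independent of channel order, and tintPixel is an illustrative name, not an SDL call:

        // Hedged sketch: read a pixel from the loaded image, change it, write it back.
        void tintPixel(SDL_Surface* img, int x, int y)
        {
            if (SDL_MUSTLOCK(img)) SDL_LockSurface(img);

            Uint32* pixels = (Uint32*)img->pixels;
            int     stride = img->pitch / sizeof(Uint32);       // pixels per row

            Uint32 value = pixels[y * stride + x];               // get the value of a pixel

            Uint8 r, g, b;
            SDL_GetRGB(value, img->format, &r, &g, &b);          // decode it
            r = (Uint8)(255 - r);                                // any manipulation you like
            pixels[y * stride + x] = SDL_MapRGB(img->format, r, g, b);  // write it back

            if (SDL_MUSTLOCK(img)) SDL_UnlockSurface(img);
        }

    "Moving" a pixel is then just writing its value at a different (x, y) offset and overwriting the old location, before the surface is blitted to the screen as in the main loop above.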

    Read the article

  • 2D Tile Collision free movement

    - by andrepcg
    I'm coding a 3D game for a project using OpenGL and I'm trying to do tile collision on a surface. The surface plane is split into a grid of 64x64-pixel cells and I can simply check if the (x,y) tile is empty or not. Besides having a grid for collision, there's still free movement inside a tile. For each entity, at the end of the update function I simply increase the position by the velocity: pos.x += v.x; pos.y += v.y; I already have a collision grid created but my collide function is not great; I'm not sure how to handle it. I can check if the collision occurs, but the way I handle it is terrible. int leftTile = repelBox.x / grid->cellSize; int topTile = repelBox.y / grid->cellSize; int rightTile = (repelBox.x + repelBox.w) / grid->cellSize; int bottomTile = (repelBox.y + repelBox.h) / grid->cellSize; for (int y = topTile; y <= bottomTile; ++y) { for (int x = leftTile; x <= rightTile; ++x) { if (grid->getCell(x, y) == BLOCKED){ Rect colBox = grid->getCellRectXY(x, y); Rect xAxis = Rect(pos.x - 20 / 2.0f, pos.y - 20 / 4.0f, 20, 10); Rect yAxis = Rect(pos.x - 20 / 4.0f, pos.y - 20 / 2.0f, 10, 20); if (colBox.Intersects(xAxis)) v.x *= -1; if (colBox.Intersects(yAxis)) v.y *= -1; } } } If instead of reversing the direction I set the velocity to zero, then when the entity tries to get away from the wall it's still intersecting the tile and gets stuck in that position. EDIT: I've worked with Flashpunk and it has a great function for movement and collision called moveBy. Are there any simplified implementations out there so I can check them out?
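
    A hedged sketch of the Flashpunk-style moveBy idea: resolve one axis at a time, and when the destination cell is blocked, snap flush against the cell edge and zero that velocity component instead of reversing it, so the entity can slide away afterwards. Grid, cellSize, getCell and BLOCKED follow the question; Entity, halfW and halfH (the entity's half-extents) are illustrative:

        // Returns true if any cell overlapped by the box centred at (x, y) is BLOCKED.
        bool blockedAt(Grid* grid, float x, float y, float halfW, float halfH)
        {
            int left   = (int)((x - halfW) / grid->cellSize);
            int right  = (int)((x + halfW) / grid->cellSize);
            int top    = (int)((y - halfH) / grid->cellSize);
            int bottom = (int)((y + halfH) / grid->cellSize);
            for (int cy = top; cy <= bottom; ++cy)
                for (int cx = left; cx <= right; ++cx)
                    if (grid->getCell(cx, cy) == BLOCKED) return true;
            return false;
        }

        // Axis-separated movement: resolve X, then Y, so the entity slides along walls.
        void moveBy(Entity& e, float dx, float dy, Grid* grid, float halfW, float halfH)
        {
            float newX = e.pos.x + dx;
            if (blockedAt(grid, newX, e.pos.y, halfW, halfH)) {
                // Snap flush against the blocking column (a tiny epsilon can be
                // subtracted to avoid touching the boundary) and stop horizontal motion.
                int cell = (int)((newX + (dx > 0 ? halfW : -halfW)) / grid->cellSize);
                newX = (dx > 0) ? cell * grid->cellSize - halfW
                                : (cell + 1) * grid->cellSize + halfW;
                e.v.x = 0;
            }
            e.pos.x = newX;

            float newY = e.pos.y + dy;
            if (blockedAt(grid, e.pos.x, newY, halfW, halfH)) {
                int cell = (int)((newY + (dy > 0 ? halfH : -halfH)) / grid->cellSize);
                newY = (dy > 0) ? cell * grid->cellSize - halfH
                                : (cell + 1) * grid->cellSize + halfH;
                e.v.y = 0;
            }
            e.pos.y = newY;
        }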

    Read the article

  • Take a snapshot with JavaFX!

    - by user12610255
    JavaFX 2.2 has a "snapshot" feature that enables you to take a picture of any node or scene. Take a look at the API Documentation and you will find new snapshot methods in the javafx.scene.Scene class. The most basic version has the following signature: public WritableImage snapshot(WritableImage image) The WritableImage class (also introduced in JavaFX 2.2) lives in the javafx.scene.image package, and represents a custom graphical image that is constructed from pixels supplied by the application. In fact, there are 5 new classes in javafx.scene.image: PixelFormat: Defines the layout of data for a pixel of a given format. WritablePixelFormat: Represents a pixel format that can store full colors and so can be used as a destination format to write pixel data from an arbitrary image. PixelReader: Defines methods for retrieving the pixel data from an Image or other surface containing pixels. PixelWriter: Defines methods for writing the pixel data of a WritableImage or other surface containing writable pixels. WritableImage: Represents a custom graphical image that is constructed from pixels supplied by the application, and possibly from PixelReader objects from any number of sources, including images read from a file or URL. The API documentation contains lots of information, so go investigate and have fun with these useful new classes! -- Scott Hommel

    Read the article

  • Spritegroups and colorkeys

    - by Fristi
    I have a problem using spritegroups in pygame. In my situation I have 2 spritegroups, one for humans, one for "infected". A human is represented by a blue circle: image = pygame.Surface((32,32)) image.fill((255,255,255)) pygame.draw.circle(image,(0,0,255),(16,16),16) image = image.convert() image.set_colorkey((255,255,255)) An infected is represented by a red one (same code, different color). I update my spritegroups as follows: self.humans.clear(self.screen, self.bg) self.humans.update(time_passed) self.humans.draw(self.screen) self.infected.clear(self.screen, self.bg) self.infected.update(time_passed) self.infected.draw(self.screen) self.bg is defined as: self.bg = pygame.Surface((SCREEN_WIDTH, SCREEN_HEIGHT)) self.bg.fill((255,255,255)) self.bg.convert() This all works, except that when a red circle overlaps with a blue one, you can see the white corners of the bounding box around the actual circle. Within a spritegroup it works, thanks to the set_colorkey function: this does not happen with overlapping blue circles or overlapping red circles. I tried adding a colorkey to self.bg but that did not work. Same for adding a colorkey to self.screen.

    Read the article

  • What sort of loop structure to compare checkbox matrix with Google Maps markers?

    - by Kirkman14
    I'm trying to build a map of trails around my town. I'm using an XML file to hold all the trail data. For each marker, I have categories like "surface," "difficulty," "uses," etc. I have seen many examples of Google Maps that use checkboxes to show markers by category. However these examples are usually very simple: maybe three different checkboxes. What's different on my end is that I have multiple categories, and within each category there are several possible values. So, a particular trail might have "use" values of "hiking," "biking," "jogging," and "equestrian" because all are allowed. I put together one version, which you can see here: http://www.joshrenaud.com/pd/trails_withcheckboxes3.html In this version, any trail that has any value checked by the user will be displayed on the map. This version works. (although I should point out there is a bug where despite only one category being checked on load, all markers display anyway. After your first click on any checkbox, the map will work properly) However I now realize it's not quite what I want. I want to change it so that it will display only markers that match ALL the values that are checked (rather than ANY, which is what the example above does). I took a hack at this. You can see the result online, but I can't type a link to it because I am new user. Change the "3" in the URL above to a "4" to see it. My questions are about this SECOND url. (trails_withcheckboxes4.html) It doesn't work. I am pretty new to Javascript, so I am sure I have done something totally wrong, but I can't figure out what. My specific questions: Does anyone see anything glaringly obvious that is keeping my second example from working? If not, could someone just suggest what sort of loop structure I would need to build to compare the several arrays of checkboxes with the several arrays of values on any given marker? Here is some of the relevant code, although you can just view source on the examples above to see the whole thing: function createMarker(point,surface,difficulty,use,html) { var marker = new GMarker(point,GIcon); marker.mysurface = surface; marker.mydifficulty = difficulty; marker.myuse = use; GEvent.addListener(marker, "click", function() { marker.openInfoWindowHtml(html); }); gmarkers.push(marker); return marker; } function show() { hide(); var surfaceChecked = []; var difficultyChecked = []; var useChecked = []; var j=0; // okay, let's run through the checkbox elements and make arrays to serve as holders of any values the user has checked. for (i=0; i<surfaceArray.length; i++) { if (document.getElementById('surface'+surfaceArray[i]).checked == true) { surfaceChecked[j] = surfaceArray[i]; j++; } } j=0; for (i=0; i<difficultyArray.length; i++) { if (document.getElementById('difficulty'+difficultyArray[i]).checked == true) { difficultyChecked[j] = difficultyArray[i]; j++; } } j=0; for (i=0; i<useArray.length; i++) { if (document.getElementById('use'+useArray[i]).checked == true) { useChecked[j] = useArray[i]; j++; } } //now that we have our 'xxxChecked' holders, it's time to go through all the markers and see which to show. 
for (var k=0; k<gmarkers.length; k++) { // this loop runs thru all markers var surfaceMatches = []; var difficultyMatches = []; var useMatches = []; var surfaceOK = false; var difficultyOK = false; var useOK = false; for (var l=0; l<surfaceChecked.length; l++) { // this loops runs through all checked Surface categories for (var m=0; m<gmarkers[k].mysurface.length; m++) { // this loops through all surfaces on the marker if (gmarkers[k].mysurface[m].childNodes[0].nodeValue == surfaceChecked[l]) { surfaceMatches[l] = true; } } } for (l=0; l<difficultyChecked.length; l++) { // this loops runs through all checked Difficulty categories for (m=0; m<gmarkers[k].mydifficulty.length; m++) { // this loops through all difficulties on the marker if (gmarkers[k].mydifficulty[m].childNodes[0].nodeValue == difficultyChecked[l]) { difficultyMatches[l] = true; } } } for (l=0; l<useChecked.length; l++) { // this loops runs through all checked Use categories for (m=0; m<gmarkers[k].myuse.length; m++) { // this loops through all uses on the marker if (gmarkers[k].myuse[m].childNodes[0].nodeValue == useChecked[l]) { useMatches[l] = true; } } } // now it's time to loop thru the Match arrays and make sure they are all completely true. for (m=0; m<surfaceMatches.length; m++) { if (surfaceMatches[m] == true) { surfaceOK = true; } else if (surfaceMatches[m] == false) {surfaceOK = false; break; } } for (m=0; m<difficultyMatches.length; m++) { if (difficultyMatches[m] == true) { difficultyOK = true; } else if (difficultyMatches[m] == false) {difficultyOK = false; break; } } for (m=0; m<useMatches.length; m++) { if (useMatches[m] == true) { useOK = true; } else if (useMatches[m] == false) {useOK = false; break; } } // And finally, if each of the three OK's is true, then let's show the marker. if ((surfaceOK == true) && (difficultyOK == true) && (useOK == true)) { gmarkers[i].show(); } } }

    Read the article

  • 6 Interesting Facts About NASA’s Mars Rover ‘Curiosity’

    - by Gopinath
    Humanity's quest to explore the surrounding planets and see whether we can live there is taking new shape today. NASA's Mars-probing robot, Curiosity, blasted off today on its 9-month journey to reach Mars and explore it for the possibilities of life there. Scientists say that Curiosity is one of the most advanced rovers ever launched to probe for life on other planets. Here is the launch video and some analysis by a news reporter. Let's look at the 6 interesting facts about the mission. 1. It's as big as a car Curiosity is the biggest rover ever launched by NASA to probe for life on other planets. It's as big as a car and almost double the size of its predecessor, the rover Spirit. The length of Curiosity is around 9 feet 10 inches (3 meters), its width is 9 feet 1 inch (2.8 meters) and its height is 7 feet (2.1 meters). 2. Powered by Plutonium – Lasts 24×7 for 23 months NASA's earlier Mars missions were solar powered, and that hindered the rovers' ability to move around when the Sun was hidden. Because of that dependency on the Sun, the earlier rovers could not traverse places with no sunlight. Curiosity, on the other hand, is equipped with a radioisotope power system that generates electricity from the heat emitted by plutonium's radioactive decay. The plutonium weighs around 10 pounds and can generate the power required to operate the rover for close to 23 weeks. The best part of the new power system is that Curiosity can roam around in darkness or light, all year round. 3. Rocket-powered backpack for a science-fiction-style landing Curiosity is so heavy that NASA could not use parachutes and balloons to air-drop the rover onto the surface of Mars as in its previous missions. They are trying out a new science-fiction-style air-dropping mechanism that is similar to a sky-crane heavy-lift helicopter. The landing of the rover begins with entry into the Martian atmosphere protected by a heat shield. At about 6 miles above the surface, the heat shield is jettisoned and a parachute is deployed to glide the rover down smoothly. When the rover reaches 3 miles above the surface, the parachute is jettisoned and the eight-motor rocket backpack is used for a smooth, impact-free landing, as shown in the image. Here is an animation created by NASA of the landing sequence. If you are interested in more detailed information about the landing process, check the landing sequence picture available on NASA's website. 4. Equipped with a Star Wars-style laser gun Hollywood directors and novelists have always imagined aliens coming to Earth in spaceships full of laser guns, blasting the objects that come in their way. With Curiosity the equations are going to change. It has a powerful laser gun mounted on one of its arms to beam a laser at rocks and vaporize them. This is not part of any assault mission Curiosity is expected to carry out; the laser gun will be used for experiments to detect life and understand nature. 5. Most sophisticated laboratory, powered by 10 instruments Around 10 state-of-the-art instruments are part of the Curiosity rover, and these 10 instruments form the most advanced rover-based lab ever built by NASA. There are instruments to cut through rocks to examine them, and other instruments that will search for organic compounds. Mounted cameras can study targets from a distance; arm-mounted instruments can study the targets they touch. A microscopic lens attached to the arm can see and magnify objects as tiny as 12.5 micrometers. 6. Rover carrying 1.24 million names etched on silicon In early June 2009, NASA launched a campaign called "Send Your Name to Mars" and around 1.24 million people registered their names through NASA's website. All those 1.24 million names are etched on silicon chips mounted onto Curiosity's deck. If you registered your name in the campaign, maybe your name is going to reach Mars soon. Curiosity on the Web If you wish to follow the mission, here are a few links to help you: NASA's Curiosity Web Page, Follow Curiosity on Facebook, Follow @MarsCuriosity on Twitter, Artistic Gallery Image of Mars Rover Curiosity, A printable sheet of Curiosity Mission [pdf]. Images credit: NASA. This article, titled "6 Interesting Facts About NASA's Mars Rover 'Curiosity'", was originally published at Tech Dreams. Grab our RSS feed or fan us on Facebook to get updates from us.

    Read the article

  • Specs, Form and Function – What am I Missing?

    - by Barry Shulam
    Friday, October 26th, the Microsoft Surface RT arrived at the office. I was summoned to my boss's office for the grand unpacking. If I had planned ahead I could have used my iPhone 4 to film the event and post it on YouTube; however, the desire to hold the device and turn it ON was more inviting than becoming a proxy reviewer for Engadget's website. 1980 was the first time we had a personal computer in our house. It was a Kaypro computer. It weighed 29 pounds, more than any person's lap could hold. Back then the term "portable computer" meant you could remove it from the building and take it elsewhere. Today I am typing this entry on a MacBook Air which weighs 2.38 pounds. This morning Amazon's front-page main title is: "Much More for Much Less". I was born at the right time, starting with the CP/M operating system on the Kaypro and moving through the DOS, Windows, Linux, Mac OS X and mobile phone operating systems and languages. If you are not aware, technology is moving at a rapid pace. The New iPad (for those who are keeping score, iPad 4) is replacing a 7-month-old machine, the New iPad (iPad 3). I have used and owned many technology devices in my life. The main point that most readers who are in the USA overlook is the fact that we are in the USA. The devices we purchase have a great digital garden to support them. The Kaypro computer had a 7-inch screen. It was a TV tube with two colors – black and green. You could see the 80-column screen flicker with characters – have you ever played Pac-Man emulated on the screen with ABC characters? Traveling across the world you will find that not all apps on your device will function as they did back home, because they are not offered outside of your country of origin. I think the main question a buyer of technology should be asking is Function. The greatest specs without function limit you. The most beautiful form without function is the same as a crystal vase on your shelf – not a good cereal bowl in the morning. Microsoft Surface RT, Amazon Kindle Fire and Apple iPad are all great devices in their respective customers' hands. My advice for those looking to purchase this year: if the device is your only technology device, you buy what you WANT and LIKE. Consider this parallel universe if it's not your only device: ever go shopping for clothing, shoes, and accessories with your wife, girlfriend, sister or mother? If you listen carefully you will hear the little voices coming out of their heads saying: "This goes well with that and I can use it also with that outfit", "Do you think this clashes with that?", "Ohh I love how that combination looks on you". Portable devices such as tablets and computers can offer a whole lot more when they are combined with the digital ecosystem you have at home and the manufacturer offers online. Pros of each device: Microsoft Surface RT: There is new functionality named SmartGlass which will let you share content from your tablet to your Xbox 360.
    Microsoft Office is loaded on the tablet. You can have more than one user profile on the tablet if you share it with others. Amazon Kindle Fire or Kindle Fire HD: If you are an Amazon consumer with an annual Amazon Prime service, you can consume videos and read books off the Amazon site. It's the cheapest device. It's a step up from the Kindle reader in many ways. Apple iPad or iPad mini: Over 270 thousand applications. AirPlay permits you the ability to share to your TV screen. If you are a cord cutter (a person who gets their entertainment content over the web or the air versus cable providers), AirPlay or SmartGlass are a huge bonus. iPad mini or not: The mini will fit in a purse where the larger one will not. It's lighter, which makes it nice to hold for prolonged periods. It has an option for LTE wireless, which none of the other sub-9-inch tablets offer. The screen is non-Retina, which means the applications are smaller. Speaking with individuals above 50 who wear glasses, the Retina display does not make a difference for them; however, they prefer the larger iPad over the new mini. Happy shopping this Chanukah season. The Kosher Coder. Follow me on Twitter @KosherCoder

    Read the article

  • YouTube SEO: Video Optimization

    - by Mike Stiles
    SEO optimization is still regarded as one of the primary tools in the digital marketing kit. However and wherever a potential customer is conducting a search, brands want their content to surface in the top results. Makes sense. But without a regular flow of good, relevant content, your SEO opportunities run shallow. We know from several studies video is one of the most engaging forms of content, so why not make sure that in addition to being cool, your videos are helping you win the SEO game? Keywords:-Decide what search phrases make the most sense for your video. Don’t dare use phrases that have nothing to do with the content. You’ll make people mad.-Research those keywords to see how competitive they are. Adjust them so there are still lots of people searching for it, but there are not as many links showing up for it.-Search your potential keywords and phrases to see what comes up. It’s amazing how many people forget to do that. Video Title: -Try to start and/or end with your keyword.-When you search on YouTube, visual action words tend to come up as suggested searches. So try to use action words. Video Description: -Lead with a link to your site (include http://). -Don’t stuff this with your keyword. It leads to bad writing and it won’t work anyway. This is where you convince people to watch, so write for humans. Use some showmanship. -At the end, do a call to action (subscribe, see the whole playlist, visit our social channels, etc.) Video Tags:-Don’t over-tag. 5-10 tags per video is plenty. -If you’re compelled to have more than 10, that means you should probably make more videos specifically targeting all those keywords. Find Linking Pals:-45% of videos are discovered on video sites. But 44% are found through links on blogs and sites.-Write a blog about your video’s content, then link to the video in it. -A good site for finding places to guest blog is myblogguest.com-Once you find good linking partners, they’ll link to your future videos (as long as they’re good and you’re returning the favor). Tap the Power of Similar Videos:-Use Video Reply to associate your video with other topic-related videos. That’s when you make a video responding to or referencing a video made by someone else. Content:-Again, build up a portfolio of videos, not just one that goes after 30 keywords.-Create shorter, sequential videos that pull them deeper into the content and closer to a desired final action.-Organize your video topics separately using Playlists. Playlists show up as a whole in search results like individual videos, so optimize playlists the same as you would for a video. Meta Data:-Too much importance is placed on it. It accounts for only 15% of search success.-YouTube reads Captions or Transcripts to determine what a video is about. If you’re not using them, you’re missing out.-You get the SEO benefit of captions and transcripts whether the viewers has them toggled on or not. Promotion:-This accounts for 25% of search success.-Promote the daylights out of your videos using your social channels and digital assets. Don’t assume it’s going to magically get discovered. -You can pay to promote your video. This could surface it on the YouTube home page, YouTube search results, YouTube related videos, and across the Google content network. Community:-Accounts for 10% of search success.-Make sure your YouTube home page is a fun place to spend time. Carefully pick your featured video, and make sure your Playlists are featured. -Participate in discussions so users will see you’re present. 
The volume of ratings/comments is as important as the number of views when it comes to where you surface on search. Video Sitemaps:-As with a web site, a video sitemap helps Google quickly index your video.-Google wants to know title, description, play page URL, the URL of the thumbnail image you want, and raw video file location.-Sitemaps are xml files you host or dynamically generate on your site. Once you’ve made your sitemap, sign in and submit it using Google webmaster tools. Just as with the broadcast and cable TV channels, putting a video out there is only step one. You also have to make sure everybody knows it’s there so the largest audience possible can see it. Here’s hoping you get great ratings. @mikestiles

    Read the article

  • WPF 3D - Need help writing conversion methods between 2D and 3D (Point3DToPoint and PointAndZToPoint

    - by DanM
    I'm new to WPF 3D, so I may just be missing something obvious, but how do I convert from 3D to 2D and (for a given z location) from 2D to 3D? Specifically, I need two conversion methods: Point3DToPoint - If I have an (x, y, z) coordinate in the 3D world, how do I determine the (x, y) coordinate on the projected 2D surface. Method signature: public Point Point3DToPoint(Point3D point3D) PointAndZToPoint3D - If I have an (x, y) coordinate on the projected 2D surface and a z location in the 3D world, how do I determine the intersecting (x, y, z) coordinate in the 3D world? Method signature: public Point3D PointAndZToPoint3D(Point point, double z) I'd like the 2D coordinate to be the location measured from the upper-left corner of Viewport3D and the 3D coordinate to be the location relative to the origin (0, 0, 0) of the 3D world. Note 1: I found this related question, but it only addresses conversion from 3D to 2D (not the reverse), and I'm not sure if the answers are up-to-date. Note 2: I'm currently using .NET 3.5, but if there are improvements in .NET 4.0 that would help me, please let me know.

    Read the article

  • Covering Earth with Hexagonal Map Tiles

    - by carrier
    Many strategy games use hexagonal tiles. One of the main advantages is that the distance between the center of any tile and all its neighboring tiles is the same. I was wondering if anyone has any thoughts on marrying a hexagonal tile system with the traditional geographic system (longitude/latitude). I think it would be interesting to cover a globe with hexagonal tiles and be able to map a geographic coordinate to a tile. Has anyone seen anything remotely close to this before? UPDATE I'm looking for a way to subdivide the surface of a sphere so that each division has the same surface area. Ideally, the centers of adjacent sub-divisions would be equidistant.
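
    One constraint worth knowing before searching further: a sphere cannot be tiled by hexagons alone. Assuming three tiles meet at every vertex (as on a soccer ball), with F5 pentagons and F6 hexagons the edge and vertex counts are E = (5*F5 + 6*F6)/2 and V = (5*F5 + 6*F6)/3, and Euler's formula V - E + F = 2 reduces to F5/6 = 2, so exactly twelve pentagons are required no matter how many hexagons are used. That is why geodesic subdivisions of an icosahedron (Goldberg polyhedra) are the usual starting point: mostly hexagons of nearly equal area, plus twelve pentagons at the icosahedron's vertices.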

    Read the article

  • Android source code not working, reading frame buffer through glReadPixels

    - by Muhammad Ali Rajput
    Hi, I am new to Android development and have an assignment to read frame buffer data after a specified interval of time. I have come up with the following code: public class mainActivity extends Activity { Bitmap mSavedBM; private EGL10 egl; private EGLDisplay display; private EGLConfig config; private EGLSurface surface; private EGLContext eglContext; private GL11 gl; protected int width, height; //Called when the activity is first created. @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); // get the screen width and height DisplayMetrics dm = new DisplayMetrics(); getWindowManager().getDefaultDisplay().getMetrics(dm); int screenWidth = dm.widthPixels; int screenHeight = dm.heightPixels; String SCREENSHOT_DIR = "/screenshots"; initGLFr(); //GlView initialized. savePixels( 0, 10, screenWidth, screenHeight, gl); //this gets the screen to the mSavedBM. saveBitmap(mSavedBM, SCREENSHOT_DIR, "capturedImage"); //Now we need to save the bitmap (the screen capture) to some location. setContentView(R.layout.main); //This displays the content on the screen } private void initGLFr() { egl = (EGL10) EGLContext.getEGL(); display = egl.eglGetDisplay(EGL10.EGL_DEFAULT_DISPLAY); int[] ver = new int[2]; egl.eglInitialize(display, ver); int[] configSpec = {EGL10.EGL_NONE}; EGLConfig[] configOut = new EGLConfig[1]; int[] nConfig = new int[1]; egl.eglChooseConfig(display, configSpec, configOut, 1, nConfig); config = configOut[0]; eglContext = egl.eglCreateContext(display, config, EGL10.EGL_NO_CONTEXT, null); surface = egl.eglCreateWindowSurface(display, config, SurfaceHolder.SURFACE_TYPE_GPU, null); egl.eglMakeCurrent(display, surface, surface, eglContext); gl = (GL11) eglContext.getGL(); } public void savePixels(int x, int y, int w, int h, GL10 gl) { if (gl == null) return; synchronized (this) { if (mSavedBM != null) { mSavedBM.recycle(); mSavedBM = null; } } int b[] = new int[w * (y + h)]; int bt[] = new int[w * h]; IntBuffer ib = IntBuffer.wrap(b); ib.position(0); gl.glReadPixels(x, 0, w, y + h, GL10.GL_RGBA,GL10.GL_UNSIGNED_BYTE,ib); for (int i = 0, k = 0; i < h; i++, k++) { //OpenGLbitmap is incompatible with Android bitmap //and so, some corrections need to be done. for (int j = 0; j < w; j++) { int pix = b[i * w + j]; int pb = (pix >> 16) & 0xff; int pr = (pix << 16) & 0x00ff0000; int pix1 = (pix & 0xff00ff00) | pr | pb; bt[(h - k - 1) * w + j] = pix1; } } Bitmap sb = Bitmap.createBitmap(bt, w, h, Bitmap.Config.ARGB_8888); synchronized (this) { mSavedBM = sb; } } static String saveBitmap(Bitmap bitmap, String dir, String baseName) { try { File sdcard = Environment.getExternalStorageDirectory(); File pictureDir = new File(sdcard, dir); pictureDir.mkdirs(); File f = null; for (int i = 1; i < 200; ++i) { String name = baseName + i + ".png"; f = new File(pictureDir, name); if (!f.exists()) { break; } } if (!f.exists()) { String name = f.getAbsolutePath(); FileOutputStream fos = new FileOutputStream(name); bitmap.compress(Bitmap.CompressFormat.PNG, 100, fos); fos.flush(); fos.close(); return name; } } catch (Exception e) { } finally { //if (fos != null) { // fos.close(); // } } return null; } } Also, if some one can direct me to better way to read the framebuffer it would be great. I am using Android 2.2 and virtual device of API level 8. I have gone through many previous discussions and have found that we can not know read frame buffer directly throuh the "/dev/graphics/fb0". Thanks, Muhammad Ali

    Read the article

  • Take screenshot of DirectX full-screen application

    - by iconiK
    This boggles me. DirectX bypasses everything and talks directly to the device driver, thus GDI and other usual methods won't work - unless Aero is disabled (or unavailable), all that appears is a black rectangle at the top left of the screen. I have tried what others have suggested on several forums, using DirectX to get the back buffer and save it, but I get the same result: device->GetFrontBufferData(0, surface); D3DXSaveSurfaceToFile("fileName", D3DXIFF_BMP, surface, NULL, NULL); Is there any way to get a screenshot of another full-screen DirectX application when Aero is enabled?

    Read the article

  • Interpolation of scattered data: What could I do?

    - by Simon
    Hi! I need your help. I'm working on a 3D chart in Java using Java 3D. It should be able to display a bunch of measured values. As measured, the data I get is scattered. This means I will have to interpolate the missing points in order to get my surface plotted nicely. I haven't studied all that 3D geometry stuff yet and I don't know where to start. My idea is to triangulate the points into a surface and then, based on the triangulation, interpolate the missing points. (see this to have a rough idea of what I want to achieve) Does someone have experience with the interpolation of scattered data? Is my approach the right one? If yes, what kind of data structures and algorithms will I need in order to triangulate my point cloud?
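
    Once a Delaunay triangulation of the (x, y) sample positions exists (from a library or a standard incremental algorithm), filling a missing point is a matter of finding the triangle that contains it and blending the three measured z values with barycentric weights. A hedged sketch of just that interpolation step, with illustrative names:

        #include <cmath>

        // Hedged sketch: piecewise-linear interpolation over one triangle of a
        // triangulation. (px, py) is the query point; (x[i], y[i], z[i]) are the
        // triangle's three measured corner values.
        bool interpolateInTriangle(const double x[3], const double y[3], const double z[3],
                                   double px, double py, double& zOut)
        {
            // Barycentric coordinates of (px, py) with respect to the triangle.
            double det = (y[1] - y[2]) * (x[0] - x[2]) + (x[2] - x[1]) * (y[0] - y[2]);
            if (std::fabs(det) < 1e-12) return false;          // degenerate triangle

            double w0 = ((y[1] - y[2]) * (px - x[2]) + (x[2] - x[1]) * (py - y[2])) / det;
            double w1 = ((y[2] - y[0]) * (px - x[2]) + (x[0] - x[2]) * (py - y[2])) / det;
            double w2 = 1.0 - w0 - w1;

            if (w0 < 0 || w1 < 0 || w2 < 0) return false;      // point lies outside this triangle

            zOut = w0 * z[0] + w1 * z[1] + w2 * z[2];          // linear blend of the samples
            return true;
        }

    Calling this for the triangle containing each missing grid position yields a piecewise-linear surface; smoother results need higher-order schemes such as natural-neighbour interpolation or thin-plate splines.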

    Read the article

  • formula for best approximation for center of 2D rotation with small angles

    - by RocketSurgeon
    This is not homework. I am asking to see if the problem is classical (trivial) or non-trivial. It looks simple on the surface, and I hope it is truly a simple problem. I have N points (N = 2) with coordinates Xn, Yn on the surface of a 2D solid body. The solid body has some small rotation (below Pi/180) combined with small shifts (below 1% of the distance between any 2 of the N points). Possibly some small deformation too (<<0.001%). The same N points have new coordinates named XXn, YYn. Calculate, with the best approximation, the location of the center of rotation as point C with coordinates XXX, YYY. Thank you
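
    This is essentially a 2D rigid registration (Procrustes) problem followed by finding the fixed point of the recovered transform. A sketch under the stated assumptions (small rotation, small shifts, deformation much smaller than both), with illustrative names; note that for rotations below Pi/180 the final 2x2 system is nearly singular, so noise in the points can move the recovered centre a lot:

        #include <cmath>
        #include <vector>

        struct Pt { double x, y; };

        // Hedged sketch: least-squares rigid fit p' ~= R*p + t, then the centre of
        // rotation C is the fixed point of that transform, (I - R) * C = t.
        bool centreOfRotation(const std::vector<Pt>& a, const std::vector<Pt>& b,
                              double& cx, double& cy)
        {
            const size_t n = a.size();                            // assumes a.size() == b.size() >= 2
            double ax = 0, ay = 0, bx = 0, by = 0;
            for (size_t i = 0; i < n; ++i) { ax += a[i].x; ay += a[i].y; bx += b[i].x; by += b[i].y; }
            ax /= n; ay /= n; bx /= n; by /= n;                   // centroids of both point sets

            // Optimal rotation angle from the centred correspondences.
            double sDot = 0, sCross = 0;
            for (size_t i = 0; i < n; ++i) {
                double px = a[i].x - ax, py = a[i].y - ay;
                double qx = b[i].x - bx, qy = b[i].y - by;
                sDot   += px * qx + py * qy;
                sCross += px * qy - py * qx;
            }
            double theta = std::atan2(sCross, sDot);
            double c = std::cos(theta), s = std::sin(theta);

            // Translation completing p' = R*p + t.
            double tx = bx - (c * ax - s * ay);
            double ty = by - (s * ax + c * ay);

            // Fixed point: (I - R) * C = t, with det(I - R) = 2*(1 - cos(theta)).
            double m00 = 1 - c, m01 = s, m10 = -s, m11 = 1 - c;
            double det = m00 * m11 - m01 * m10;
            if (std::fabs(det) < 1e-15) return false;             // pure translation, no finite centre
            cx = ( m11 * tx - m01 * ty) / det;
            cy = (-m10 * tx + m00 * ty) / det;
            return true;
        }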

    Read the article

  • UserControl Shadow

    - by noober
    Hello all, I have a user control, MBControl. Here is the code: <my:MBControl Name="MBControl" HorizontalAlignment="Center" VerticalAlignment="Center"> <my:MBControl.BitmapEffect> <DropShadowBitmapEffect Color="Black" Direction="315" Softness="0.5" ShadowDepth="10" Opacity="1" /> </my:MBControl.BitmapEffect> </my:MBControl> The problem with the code is that the shadow seems to be applied to every child element of my user control. Or, possibly, it is dropped inside as well as outside -- the control surface is darker than without the shadow. How could I fix this? I want the shadow to be dropped outside only, not affecting the control surface.

    Read the article

  • Looking for design/architecture suggestions for a simple HTML game.

    - by z-boss
    Imagine that an HTML page is a game surface (see picture). The user can have any number n of boards (blue divs) on his page. Each board can be moved, resized, relabeled, created new and removed. Inside each board there are m figures (purple divs). Each of these the user can move inside the board or to another board, resize, change its color and label, delete, or add new ones. The goal of the game is not important, but let's say it is to rearrange figures in a certain way so that they disappear. But the goal of the programmer is to save the whole game surface in the database for every user of the site, and to load it later when he returns. So, how do I go about data exchange between the client and the database? I'll give my idea in one of the answers.

    Read the article

  • Searching Techniques/Algorithms for Resources over a given area

    - by Raydon
    I have a flat area with nodes randomly placed on this flat surface. I need techniques which are able to take a starting point, move in a certain way (the algorithm), find nodes and continue searching. I do not have an overall view of the surface (i.e. I cannot see everything), only a limited view (i.e. 4 cells in any direction). Ideally, these methods would be efficient in the way that they work. Any points in the right direction would be greatly appreciated.
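
    With only a 4-cell view radius, one simple pattern is to sweep outward in an expanding square spiral around the start cell, so no ground is revisited and nearby nodes are found first. A hedged sketch with illustrative names (the visit callback would check the cells currently in view and return true to stop early, e.g. when a node is found):

        #include <functional>

        // Hedged sketch: visit grid cells ring by ring around (startX, startY).
        void spiralSearch(int startX, int startY, int maxRadius,
                          const std::function<bool(int, int)>& visit)
        {
            if (visit(startX, startY)) return;
            for (int r = 1; r <= maxRadius; ++r) {
                int x = startX - r, y = startY - r;          // top-left corner of the ring
                // Walk the four sides of the ring of "radius" r, each 2*r cells long.
                for (int i = 0; i < 2 * r; ++i) if (visit(x + i, y))                 return;  // top
                for (int i = 0; i < 2 * r; ++i) if (visit(x + 2 * r, y + i))         return;  // right
                for (int i = 0; i < 2 * r; ++i) if (visit(x + 2 * r - i, y + 2 * r)) return;  // bottom
                for (int i = 0; i < 2 * r; ++i) if (visit(x, y + 2 * r - i))         return;  // left
            }
        }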

    Read the article

  • Problem implementing Blinn–Phong shading model

    - by Joe Hopfgartner
    I did this very simple, perfectly working implementation of the Phong reflection model (there is no ambient term implemented yet, but that doesn't bother me for now). The functions should be self-explanatory. /** * Implements the classic Phong illumination Model using a reflected light * vector. */ public class PhongIllumination implements IlluminationModel { @RGBParam(r = 0, g = 0, b = 0) public Vec3 ambient; @RGBParam(r = 1, g = 1, b = 1) public Vec3 diffuse; @RGBParam(r = 1, g = 1, b = 1) public Vec3 specular; @FloatParam(value = 20, min = 1, max = 200.0f) public float shininess; /* * Calculate the intensity of light reflected to the viewer. * * @param P = The surface position expressed in world coordinates. * * @param V = Normalized viewing vector from surface to eye in world * coordinates. * * @param N = Normalized normal vector at surface point in world * coordinates. * * @param surfaceColor = Color of the surface at the current * position. * * @param lights = The active light sources in the scene. * * @return Reflected light intensity I. */ public Vec3 shade(Vec3 P, Vec3 V, Vec3 N, Vec3 surfaceColor, Light lights[]) { Vec3 surfaceColordiffused = Vec3.mul(surfaceColor, diffuse); Vec3 totalintensity = new Vec3(0, 0, 0); for (int i = 0; i < lights.length; i++) { Vec3 L = lights[i].calcDirection(P); N = N.normalize(); V = V.normalize(); Vec3 R = Vec3.reflect(L, N); // reflection vector float diffuseLight = Vec3.dot(N, L); float specularLight = Vec3.dot(V, R); if (diffuseLight > 0) { totalintensity = Vec3.add(Vec3.mul(Vec3.mul(surfaceColordiffused, lights[i].calcIntensity(P)), diffuseLight), totalintensity); if (specularLight > 0) { Vec3 Il = lights[i].calcIntensity(P); Vec3 Ilincident = Vec3.mul(Il, Math.max(0.0f, Vec3.dot(N, L))); Vec3 intensity = Vec3.mul(Vec3.mul(specular, Ilincident), (float) Math.pow(specularLight, shininess)); totalintensity = Vec3.add(totalintensity, intensity); } } } return totalintensity; } } Now I need to adapt it to become a Blinn-Phong illumination model. I used the formulas from Hearn and Baker, followed pseudocode, and tried to implement it multiple times according to Wikipedia articles in several languages, but it never worked. I just get no specular reflections, or they are so weak and/or at the wrong place and/or have the wrong color. From the numerous wrong implementations I post some little code that already seems to be wrong. So I calculate my halfway vector and my new specular light like so: Vec3 H = Vec3.mul(Vec3.add(L.normalize(), V), Vec3.add(L.normalize(), V).length()); float specularLight = Vec3.dot(H, N); With these little changes it should already work (maybe not with correct intensity, but basically it should be correct). But the result is wrong. Here are two images: left, how it should render correctly, and right, how it actually renders. If I lower the shininess factor you can see a little specular light at the top right. Although I understand the concept of Phong illumination and also the simplified, more performant adaptation of Blinn-Phong, I have been trying for days and just can't get it to work. Any help is appreciated. Edit: I was made aware of an error by this answer, that I am multiplying by |L+V| instead of dividing by it when calculating H. I changed to dividing, doing so: Vec3 H = Vec3.mul(Vec3.add(L.normalize(), V), 1/Vec3.add(L.normalize(), V).length()); Unfortunately this doesn't change much.
    The results look like this: and if I raise the specular constant and lower the shininess, you can see the effects more clearly, in a similar wrong way. However, this division is just the normalisation. I think I am missing one step, because formulas like this just don't make sense to me. If you look at this picture: http://en.wikipedia.org/wiki/File:Blinn-Phong_vectors.svg the projection of H onto N is far less than V onto R. And if you imagine changing the vector V in the picture, the angle is the same when the viewing vector is "on the left side", and becomes more and more different when going to the right. I personally would multiply the whole projection by two to get something similar (and the whole point is to avoid the calculation of R). Although I didn't read anything about that anywhere, I am gonna try this out... Result: the intensity of the specular light is far too much (white areas) and the position is still wrong. I think I am messing something else up, because the reflections are just in the wrong place. But what? Edit: Now I read on Wikipedia in the notes that the angle of N/H is in fact approximately half of V/R. To compensate for that I should multiply my shininess exponent by 4 rather than my projection. If I do that I end up with this: far too intense, but still one thing. The projection is in the wrong place. Where could I be messing up my vectors?
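
    For reference, the usual Blinn-Phong specular term only needs the normalized sum of the unit light and view directions, with both vectors pointing away from the shaded point. A language-neutral sketch (written C-style here, so Vec3, dot and normalize are placeholders rather than the Vec3 class above):

        // Hedged sketch of the standard Blinn-Phong specular term. L (surface -> light)
        // and V (surface -> eye) must be unit vectors pointing away from the shaded
        // point, N is the unit surface normal.
        Vec3 blinnSpecular(const Vec3& N, const Vec3& L, const Vec3& V,
                           const Vec3& ks, const Vec3& lightIntensity, float shininess)
        {
            Vec3  H     = normalize(L + V);        // halfway vector: just the normalized sum
            float nDotH = dot(N, H);
            if (nDotH < 0.0f) nDotH = 0.0f;
            // The N.H lobe is tighter than Phong's V.R lobe, so the Blinn-Phong exponent
            // is usually chosen roughly 2-4x larger to get a similar highlight size.
            return ks * lightIntensity * (float)std::pow(nDotH, shininess);
        }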

    Read the article

  • 3d user experience with HTML5 and Javascript

    - by chako
    I've to build a 3D user experience with HTML5 and any required JS library which provides such functionality. 3D scene consists of a cylindrical pipe and surface. It has 360 degree rotation and can zoom in and out. As user selects a parameter, specific value of that parameter at various depth of pipe in surface should display. I've searched for HTML5 3d and JS libraries and found three.js could help for this.Also found this useful presentation on HTML 3d engine: http://projects.mariusgundersen.net/OnGameStart/#1 .But as this is my first time with HTML5 3d modeling, how should I initiate to build ? What parameters should be considered ? Which tools and libraries would best fit for such requirements ? I would like to create a 3d model using HTML5 and JS 3d engine as shown in the 1st image.

    Read the article

  • Alternative approach to OpenGL View on top of Camera

    - by Mr. Roland
    Hi, the traditional way of doing a camera preview background with OpenGL on the front is to take two SurfaceViews(one for the camera another for OpenGL) and stack them on top of each other. The problem is that stacking SurfaceViews is discouraged: http://groups.google.com/group/android-developers/browse_thread/thread/4850fe5c314a3dc6 So what alternatives are there? I was considering the following: Subclass GlSurfaceView, and then call set the Camera preview onto the holder of this subclass: Camera.setPreviewDisplay(mHolder); Don't use GLSurfaceView, instead create your own SurfaceView subclass where you display the Camera preview onto the holder and also draw your openGL. This would require to use OpenGL without GLSurfaceView, has anyone done this before? I'm not sure if this even is possible or makes sense, since it implies displaying the camera preview onto the holder of the surface and at the same time drawing OpenGL on the same surface. Is there any other sensible alternative to solving the problem without using two SurfaceViews? Thanks!

    Read the article
