Search Results

Search found 1603 results on 65 pages for 'coordinate transformation'.

Page 45/65 | < Previous Page | 41 42 43 44 45 46 47 48 49 50 51 52  | Next Page >

  • Connecting two DialogBoxes in GWT

    - by Apophenia Overload
    In my GWT project, I'm trying to get it so two DialogBoxes can pass information between each other. One of them holds a MapWidget, and when a button is pressed in the other DialogBox, the position information is received from that other DialogBox's MapWidget. Does anyone have any tips for how I should coordinate between having two different DialogBoxes show up? Should I wrap the code for the two in a Composite? Furthermore, is there an example anywhere of dealing with two DialogBoxes at once in GWT? For example, if I click outside of the two boxes, both should be dismissed. I'm wondering if there's a way to keep both of them in focus at once, so I can switch between the two without causing either to disappear.

    Read the article

  • How should I think about perspectives and rotation in OpenGL ES?

    - by Omega
    As I start to write rendering code, how do I want to consider my drawing operations? Will they always be relative to a fixed coordinate system on the screen, or does this change based on the camera perspective? The best example I can try to come up with is say I'm at (0,0,0) and I draw a line to (3,3,3). If I change the perspective +1 on the X axis and conduct the same operation, does it happen at (4,3,3), or am I just getting a new view of the line still being made at (3,3,3)? When doing rotation, am I moving the point from which a frustum emanates, or am I moving the rendering underneath?
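    For illustration, a minimal numpy sketch of the usual convention (the matrix helper and the sample point are made up for this example): the line's endpoint stays at (3,3,3) in world coordinates, and moving the camera +1 in X only changes the view transform, which is the inverse of the camera's own transform.

        import numpy as np

        def translation(tx, ty, tz):
            # 4x4 homogeneous translation matrix
            m = np.eye(4)
            m[:3, 3] = [tx, ty, tz]
            return m

        p_world = np.array([3.0, 3.0, 3.0, 1.0])           # line endpoint, world space

        # Camera moved +1 along X: the view matrix is the inverse of the camera transform.
        view = np.linalg.inv(translation(1.0, 0.0, 0.0))
        print((view @ p_world)[:3])                         # [2. 3. 3.] in eye space
        # The geometry is still drawn at (3,3,3); only the view of it changed.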

    Read the article

  • What's the best way to implement one-dimensional collision detection?

    - by cyclotis04
    I'm writing a piece of simulation software, and need an efficient way to test for collisions along a line. The simulation is of a train crossing several switches on a track. When a wheel comes within N inches of a switch, the switch turns on, then turns off when the wheel leaves. Since all wheels are the same size, and all switches are the same size, I can represent each of them as a single coordinate X along the track. Switch distances and wheel distances don't change in relation to each other, once set. This is a fairly trivial problem when done through brute force by placing the X coordinates in lists and traversing them, but I need a way to do so efficiently, because it needs to be extremely accurate, even when the train is moving at high speeds. There are plenty of tutorials on 2D collision detection, but I'm not sure of the best way to go about this unique 1D scenario.
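    For illustration, a minimal Python sketch of one efficient approach, assuming switch and wheel positions are plain numbers along the track (the function name and the 5-inch trigger radius are made up for this example): keep the switch positions sorted and binary-search a window of width 2N around each wheel.

        import bisect

        def active_switches(switch_positions, wheel_positions, n_inches):
            # Switches with at least one wheel within n_inches of them.
            switches = sorted(switch_positions)
            active = set()
            for w in wheel_positions:
                lo = bisect.bisect_left(switches, w - n_inches)
                hi = bisect.bisect_right(switches, w + n_inches)
                active.update(switches[lo:hi])
            return active

        # Switches at 0, 100 and 250 inches; wheels at 98 and 260; 5-inch radius.
        print(active_switches([0, 100, 250], [98, 260], 5))    # {100}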

    Read the article

  • AspectFit Not Working for Transformed UIImageView

    - by Dex
    I have a UIImageView inside a subview and I'm trying to get an AspectFit content mode working how I want for a 90-degree transformation. Here is a demo app of what I mean: https://github.com/rdetert/image-scale-test If you run the app and hold it in a landscape orientation (the app runs in landscape), you'll see that the image does not stretch from the left edge to the right edge. What seems to be happening is that the AspectFit works correctly for the portrait/default orientation, but then the image simply gets rotated after the fact. How can I make it auto-stretch? I think the fact that it is inside a subview may have something to do with it.

    Read the article

  • Trying to draw textured triangles on device fails, but the emulator works. Why?

    - by Dinedal
    I have a series of OpenGL-ES calls that properly render a triangle and texture it with alpha blending on the emulator (2.0.1). When I fire up the same code on an actual device (Droid 2.0.1), all I get are white squares. This suggests to me that the textures aren't loading, but I can't figure out why they aren't loading. All of my textures are 32-bit PNGs with alpha channels, under res/raw so they aren't optimized per the SDK docs. Here's how I am loading my textures:

        private void loadGLTexture(GL10 gl, Context context, int reasource_id, int texture_id) {
            //Get the texture from the Android resource directory
            Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), reasource_id, sBitmapOptions);
            //Generate one texture pointer...
            gl.glGenTextures(1, textures, texture_id);
            //...and bind it to our array
            gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[texture_id]);
            //Create Nearest Filtered Texture
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
            //Different possible texture parameters, e.g. GL10.GL_CLAMP_TO_EDGE
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT);
            //Use the Android GLUtils to specify a two-dimensional texture image from our bitmap
            GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
            //Clean up
            bitmap.recycle();
        }

    Here's how I am rendering the texture:

        //Clear
        gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
        //Enable vertex buffer
        gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
        gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
        gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
        //Push transformation matrix
        gl.glPushMatrix();
        //Transformation matrices
        gl.glTranslatef(x, y, 0.0f);
        gl.glScalef(scalefactor, scalefactor, 0.0f);
        gl.glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
        //Bind the texture
        gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[textureid]);
        //Draw the vertices as triangles
        gl.glDrawElements(GL10.GL_TRIANGLES, indices.length, GL10.GL_UNSIGNED_BYTE, indexBuffer);
        //Pop the matrix back to where we left it
        gl.glPopMatrix();
        //Disable the client state before leaving
        gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);

    And here are the options I have enabled:

        gl.glShadeModel(GL10.GL_SMOOTH);    //Enable Smooth Shading
        gl.glEnable(GL10.GL_DEPTH_TEST);    //Enables Depth Testing
        gl.glDepthFunc(GL10.GL_LEQUAL);     //The Type Of Depth Testing To Do
        gl.glEnable(GL10.GL_TEXTURE_2D);
        gl.glEnable(GL10.GL_BLEND);
        gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);

    Edit: I just tried supplying a BitmapOptions to the BitmapFactory.decodeResource() call, but this doesn't seem to fix the issue, despite manually setting the same preferred config, density, and target density.

    Edit2: As requested, here is a screenshot of the emulator working. The underlying triangles are shown with a circle texture rendered onto them; the transparency is working because you can see the black background. Here is a shot of what the Droid does with the exact same code on it:

    Edit3: Here are my BitmapOptions; I updated the call above with how I am now calling the BitmapFactory, still with the same results as below:

        sBitmapOptions.inPreferredConfig = Bitmap.Config.RGB_565;
        sBitmapOptions.inDensity = 160;
        sBitmapOptions.inTargetDensity = 160;
        sBitmapOptions.inScreenDensity = 160;
        sBitmapOptions.inDither = false;
        sBitmapOptions.inSampleSize = 1;
        sBitmapOptions.inScaled = false;

    Here are my vertices, texture coords, and indices:

        /** The initial vertex definition */
        private static final float vertices[] = {
            -1.0f, -1.0f, 0.0f,
             1.0f, -1.0f, 0.0f,
            -1.0f,  1.0f, 0.0f,
             1.0f,  1.0f, 0.0f
        };

        /** The initial texture coordinates (u, v) */
        private static final float texture[] = {
            //Mapping coordinates for the vertices
            0.0f, 1.0f,
            1.0f, 1.0f,
            0.0f, 0.0f,
            1.0f, 0.0f
        };

        /** The initial indices definition */
        private static final byte indices[] = {
            //Faces definition
            0, 1, 3,
            0, 3, 2
        };

    Is there any way to dump the contents of the texture once it's been loaded into OpenGL ES? Maybe I can compare the emulator's loaded texture with the actual device's loaded texture? I did try with a different texture (the default Android icon) and again, it works fine on the emulator but fails to render on the actual phone.

    Edit4: Tried switching around when I do texture loading. No luck. Tried using a constant offset of 0 to glGenTextures, no change. Is there something that I'm using that the emulator supports that the actual phone does not?

    Edit5: Per Ryan below, I resized my texture from 200x200 to 256x256, and the issue was NOT resolved.

    Edit: As requested, added the calls to glVertexPointer and glTexCoordPointer above. Also, here is the initialization of vertexBuffer, textureBuffer, and indexBuffer:

        ByteBuffer byteBuf = ByteBuffer.allocateDirect(vertices.length * 4);
        byteBuf.order(ByteOrder.nativeOrder());
        vertexBuffer = byteBuf.asFloatBuffer();
        vertexBuffer.put(vertices);
        vertexBuffer.position(0);

        byteBuf = ByteBuffer.allocateDirect(texture.length * 4);
        byteBuf.order(ByteOrder.nativeOrder());
        textureBuffer = byteBuf.asFloatBuffer();
        textureBuffer.put(texture);
        textureBuffer.position(0);

        indexBuffer = ByteBuffer.allocateDirect(indices.length);
        indexBuffer.put(indices);
        indexBuffer.position(0);

        loadGLTextures(gl, this.context);

    Read the article

  • Anonymous iterator blocks in Clojure?

    - by Checkers
    I am using clojure.contrib.sql to fetch some records from an SQLite database:

        (defn read-all-foo []
          (with-connection *db*
            (with-query-results res ["select * from foo"]
              (into [] res))))

    Now, I don't really want to realize the whole sequence before returning from the function (i.e. I want to keep it lazy), but if I return res directly or wrap it in some kind of lazy wrapper (for example, I want to make a certain map transformation on the result sequence), the SQL-related bindings will be reset and the connection will be closed after I return, so realizing the sequence will throw an exception. How can I enclose the whole function in a closure and return a kind of anonymous iterator block (like yield in C# or Python)? Or is there another way to return a lazy sequence from this function?

    Read the article

  • preventing selection on MKPointAnnotation

    - by Derek
    Is there a way to prevent an annotation in an MKMapView instance from being enabled? In other words, when the user taps the red pin on the map, is there a way to prevent it from highlighting the pin? Right now the pin turns dark when touched... Edit: I'm using the following code to return the MKPinAnnotationView:

        // To future MKMapView users - Don't forget to set _mapView's delegate
        _mapView.delegate = self;

        _annotation = [[MKPointAnnotation alloc] init];
        _annotation.coordinate = myLocation;
        [_mapView addAnnotation:_annotation];

        -(MKAnnotationView *)mapView:(MKMapView *)mapView viewForAnnotation:(id<MKAnnotation>)annotation {
            MKPinAnnotationView *pin = [[MKPinAnnotationView alloc] initWithAnnotation:_annotation reuseIdentifier:@"id"];
            pin.enabled = NO;
            return pin;
        }

    Read the article

  • Working with image pixels

    - by Mario
    Hey guys, I'm trying to do a project in which I want to implement the following: I have an estimated rotation matrix and translation matrix, and I have an image at a certain location. I want to multiply every image pixel coordinate by the rotation matrix and add the translation matrix to the result. My issue is how to work with the pixels: how do I extract the pixels from the image in order to do the operation I mentioned above? Suggestions in either OpenCV or C++ are fine. I need to know how to do this operation: new_p(x,y) = old(x,y) * rotation_matrix + translation_matrix. I'm defining the image as an IplImage, a 3-channel image. For now I only need to do the geometrical transformation. Thank you.
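    The question asks for OpenCV or C++, but as a language-neutral illustration of the coordinate math, here is a minimal numpy sketch (the function name, the 2x2 rotation and the translation values are made up for this example) that applies new_p = R * old_p + t to every pixel coordinate at once:

        import numpy as np

        def transform_pixel_coords(height, width, rotation, translation):
            # Build all (x, y) pixel coordinates, then map them through R * p + t.
            ys, xs = np.mgrid[0:height, 0:width]
            coords = np.stack([xs.ravel(), ys.ravel()]).astype(float)   # shape (2, N)
            new_coords = rotation @ coords + translation.reshape(2, 1)
            return new_coords.reshape(2, height, width)

        R = np.array([[0.0, -1.0], [1.0, 0.0]])   # 90-degree rotation
        t = np.array([10.0, 5.0])                 # shift by (10, 5)
        mapped = transform_pixel_coords(4, 4, R, t)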

    Read the article

  • Good way to fetch XML from a remote URL, convert it to HTML and display it in a ASP.NET-page

    - by Binary255
    Hi, the use case I want to achieve is: 1. Fetch XML from a remote URL. 2. Convert it to HTML using XSLT. 3. Insert the generated HTML at a position in my ASP.NET web forms page. Alternatively, if step 1 returns a 404: 2. Generate HTML that displays an error message to the user. Only step 3 is left, as I've completed 1-2. As there is logic for handling the two execution paths and performing the XSLT transformation, I thought it would be suitable to keep it in the code-behind file. What's a good, clean way of inserting generated HTML at a position in my ASP.NET web forms page?
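    Steps 1 and 2 are already done in the question, but as a hedged, language-neutral sketch of that part of the pipeline (Python with lxml is used purely for illustration; the URL handling and the error snippet are made up for this example):

        import urllib.error
        import urllib.request
        from lxml import etree

        def render_remote_xml(url, xslt_path):
            # Fetch the XML, run it through the XSLT stylesheet, return HTML text.
            try:
                with urllib.request.urlopen(url) as resp:
                    doc = etree.parse(resp)
            except urllib.error.HTTPError:
                return "<p>The remote document could not be loaded.</p>"
            transform = etree.XSLT(etree.parse(xslt_path))
            return str(transform(doc))

    For step 3 itself, assigning the generated markup to something like a Literal control's Text property from code-behind is a common pattern in web forms.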

    Read the article

  • Applying transformations to NSBitmapImageRep

    - by Adam
    So ... I have an image loaded into an NSBitmapImageRep object, so I am able to examine the contents of specific pixels via a two-dimensional array. Now I want to apply a couple of "transformations" to the image, in preparation for some additional processing. If I were manipulating the image manually, in Photoshop, I would: rotate the image; crop a portion of it and discard the rest; apply a "threshold" transformation (which essentially converts the image to black and white, based on the threshold value I provide); and resample the image to shrink it down a bit (which, although losing some image quality, will speed up the subsequent processing) - not necessarily in that order. Are there Objective-C methods available to facilitate these specific image manipulations, with the data in the NSBitmapImageRep object? If so, can someone point me to some good examples?
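    The question is specifically about Cocoa and NSBitmapImageRep, but just to pin down the four operations, here is a hedged sketch of the same pipeline using a generic imaging library (Python/Pillow; the file names, angle, crop box and threshold are made up for this example):

        from PIL import Image

        img = Image.open("input.png")
        img = img.rotate(90, expand=True)                                # rotate
        img = img.crop((100, 100, 400, 300))                             # crop, discard the rest
        img = img.convert("L").point(lambda p: 255 if p > 128 else 0)    # threshold to black and white
        img = img.resize((img.width // 2, img.height // 2))              # resample smaller
        img.save("output.png")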

    Read the article

  • Automatic translation of images to 3D models

    - by farrakhov-bulat
    I'm quite interested in automatic translation of images to 3D models. Not really for a commercial product, but from the point of view of possible academic research and implementation. What I'd like to achieve is an almost transparent-to-the-user process of transforming a series of images (fewer is better) into a 3D model, which might be shown in Flash/Silverlight/JavaFX or similar. Consider an online furniture store with 3D models of all items in stock. It would be kind of cool to be able to see the product in 3D before purchasing it. I managed to find a few pieces of software, like insight3d, but I guess it couldn't be used in my case. So, are there any similar projects or tips for me? If it would require writing that piece of software, I'd really love to dig into research in this field.

    Read the article

  • Is it possible to manipulate the format on a DataGridView that is bound to a Data Source?

    - by Jack Johnstone
    I'm using SQL Server 2005 and Visual Studio 2008, C#. In the data source (the SQL Server data table) I use the date format mm/dd/yyyy; however, in a forms overview (DataGridView) users would like to see a completely different format, with year, week number and day number of week (yyww,d). I've created an algorithm for this transformation, but can I populate the affected cells with yyww,d instead of mm/dd/yyyy? And in that case, how would I do it? I guess I need to do it after the cells are populated, but before they are shown. The generic question is: how do I manipulate the format of data-source-bound DataGridView cells?

    Read the article

  • How to find patterns (lines, circles,...) from a list of points?

    - by Burkhard
    I have a list of points. Each point being an x and y coordinate (both of which are integers). Now I'm trying to find known patterns, such as lines, arcs or circles, knowing that the points are not perfectly on the pattern. What's the best way to do it? I don't have many clues to get started. Edit: the points are ordered. The user is drawing something and the program should detect the best patterns. For instance, if a triangle is drawn, it should detect three lines.
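    As a starting point for the straight-line case, here is a minimal Python sketch (the function name and tolerance are made up for this example): fit a total-least-squares line through the points and accept it if the worst point-to-line distance is small. Circles and arcs can be handled the same way with a least-squares circle fit, and RANSAC helps when the drawing is noisy.

        import math

        def fits_line(points, tolerance):
            # points is a list of (x, y) pairs.
            n = len(points)
            mx = sum(p[0] for p in points) / n
            my = sum(p[1] for p in points) / n
            sxx = sum((p[0] - mx) ** 2 for p in points)
            syy = sum((p[1] - my) ** 2 for p in points)
            sxy = sum((p[0] - mx) * (p[1] - my) for p in points)
            # Direction of the best-fit line from the 2x2 covariance (principal axis).
            theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
            nx, ny = -math.sin(theta), math.cos(theta)      # unit normal to the line
            worst = max(abs((p[0] - mx) * nx + (p[1] - my) * ny) for p in points)
            return worst <= tolerance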

    Read the article

  • Math - Convert Arbitrary Length to Range From -1.0 to 1.0?

    - by TheDarkIn1978
    How can I convert a length into a range of -1.0 to 1.0? Example: my stage is 440px in length and accepts mouse events. I would like to click in the middle of the stage, and rather than an output of X = 220, I'd like it to be X = 0. Similarly, I'd like the real X = 0 to become X = -1.0 and the real X = 440 to become X = 1.0. I don't have access to the stage, so I can't simply center-register it, which would make this process a lot easier. Also, it's not possible to dynamically change the actual size of my stage, so I'm looking for a formula that will translate the mouse's real X coordinate of the stage to evenly fit within a range from -1 to 1.
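    The mapping is just a scale and a shift; a minimal Python sketch (the function name is made up for this example, the 440px length comes from the question):

        def to_unit_range(x, length=440.0):
            # Map a position in [0, length] to [-1.0, 1.0].
            return (x / length) * 2.0 - 1.0

        print(to_unit_range(0))      # -1.0
        print(to_unit_range(220))    #  0.0
        print(to_unit_range(440))    #  1.0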

    Read the article

  • Reading Matrices in MATLAB and assigning coordinates to the entries

    - by Michael Schofield
    Hi, I'm a bit new to MATLAB. Basically, I have a 25x25 matrix, complete with various random entries ranging from 0 to 3. I need to write a program that reads this matrix and assigns x-y coordinates to the entries, so that when I ask for input of a particular x-y coordinate that has, say, an entry of 3, it will result in an error. I'm a bit overwhelmed, but I understand the general concept of what I'm supposed to be finding. I'm wondering if I should use a plot instead to help me.

    Read the article

  • using arrays to get best memory alignment and cache use, is it necessary?

    - by Alberto Toglia
    I'm all about performance these days because I'm developing my first game engine. I'm no C++ expert, but after some research I discovered the importance of the cache and of memory alignment. Basically what I found is that it is recommended to have memory well aligned, especially if you need to access it together, for example in a loop. Now, in my project I'm building my Game Object Manager, and I was thinking of having an array of GameObjects, meaning I would have the actual memory of my objects one after the other. static const size_t MaxNumberGameObjects = 20; GameObject mGameObjects[MaxNumberGameObjects]; But, as I will be having a list of components per object (component-based design: Mesh, RigidBody, Transformation, etc.), will I be gaining anything with the array at all? Anyway, I have seen some people just using a simple std::map for storing game objects. So what do you guys think? Am I better off using a pure component model?

    Read the article

  • Database/NoSQL - Lowest latency way to retrieve the following data...

    - by Nickb
    I have a real estate application and a "house" contains the following information: house: - house_id - address - city - state - zip - price - sqft - bedrooms - bathrooms - geo_latitude - geo_longitude I need to perform an EXTREMELY fast (low latency) retrieval of all homes within a geo-coordinate box. Something like the SQL below (if I were to use a database): SELECT * FROM houses WHERE latitude BETWEEN xxx AND yyy AND longitude BETWEEN www AND zzz Question: What would be the quickest way for me to store this information so that I can perform the fastest retrieval of data based on latitude & longitude? (e.g. database, NoSQL, memcache, etc)?
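    One common low-latency approach is to keep the houses in memory, bucketed by a coarse lat/lon grid (the same idea behind geohashes and spatial indexes). A toy Python sketch, with the cell size and field names made up for this example:

        from collections import defaultdict

        CELL = 0.01   # grid cell size in degrees

        def cell(lat, lon):
            return (int(lat // CELL), int(lon // CELL))

        class HouseIndex:
            def __init__(self):
                self.buckets = defaultdict(list)          # (i, j) -> list of houses

            def add(self, house):                         # house has 'lat' and 'lon' keys
                self.buckets[cell(house['lat'], house['lon'])].append(house)

            def query(self, lat_min, lat_max, lon_min, lon_max):
                # Scan only the cells overlapping the box, then filter exactly.
                hits = []
                for i in range(int(lat_min // CELL), int(lat_max // CELL) + 1):
                    for j in range(int(lon_min // CELL), int(lon_max // CELL) + 1):
                        for h in self.buckets.get((i, j), []):
                            if lat_min <= h['lat'] <= lat_max and lon_min <= h['lon'] <= lon_max:
                                hits.append(h)
                return hits

    A production system would more likely lean on an existing spatial index (PostGIS, SQL Server spatial types, or a geohash-keyed store) rather than hand-rolling this.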

    Read the article

  • Convert arbitrary length to a value between -1.0 a 1.0?

    - by TheDarkIn1978
    How can I convert a length into a value in the range -1.0 to 1.0? Example: my stage is 440px in length and accepts mouse events. I would like to click in the middle of the stage, and rather than an output of X = 220, I'd like it to be X = 0. Similarly, I'd like the real X = 0 to become X = -1.0 and the real X = 440 to become X = 1.0. I don't have access to the stage, so i can't simply center-register it, which would make this process a lot easier. Also, it's not possible to dynamically change the actual size of my stage, so I'm looking for a formula that will translate the mouse's real X coordinate of the stage to evenly fit within a range from -1 to 1.

    Read the article

  • Freeglut ( OpenGL ) in C

    - by user1832149
    I would like to ask for a little help with my homework. I'm studying at the University of Debrecen, and I would like to be a programmer, but I'm stuck on this homework. It's an OpenGL project in C (freeglut). The task is a window-to-viewport transformation: draw graphs of functions, as shown below. Four different rectangles define the individual views (viewports), so one window is split into 4 pieces, with one function per piece. The functions are, respectively: f(x)=x^2, g(x)=x^3, h(x)=sin(x), i(x)=cos(x). Here's a picture of what I need to make: How would I approach such a task?
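    The C/freeglut specifics aside, the core of the assignment is the window-to-viewport mapping itself; here is a minimal Python sketch of that math (the rectangle values are made up for this example). In GL the same effect usually comes from calling glViewport for each quarter of the window and setting a 2D projection that matches each function's range.

        def window_to_viewport(x, y, win, vp):
            # win and vp are (xmin, ymin, xmax, ymax) rectangles.
            sx = (vp[2] - vp[0]) / (win[2] - win[0])
            sy = (vp[3] - vp[1]) / (win[3] - win[1])
            return (vp[0] + (x - win[0]) * sx, vp[1] + (y - win[1]) * sy)

        # Sample f(x) = x*x over [-2, 2] into the lower-left quarter of a 600x600 window.
        pts = [window_to_viewport(x, x * x, (-2.0, 0.0, 2.0, 4.0), (0, 0, 300, 300))
               for x in (i / 50.0 - 2.0 for i in range(201))]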

    Read the article

  • How can I take zooming into account when a user touches a UIScrollView?

    - by Bill
    I have a UIImageView inside of a UIScrollView. The parent scroll view allows zooming and panning. When the user taps a point in the scroll view, I want to find the corresponding location in the raw image inside the UIImageView - i.e. I want the point after accounting for any zooming and panning the user has done in the scroll view. Right now, I have a UIScrollView subclass called ForwardingScrollView that handles touch events and attempts to convert them into locations in the coordinate system of the child image view. I tried adding contentOffset to these points, tried multiplying them by zoomScale, and even tried doing both. I also tried calling [touch locationInView: self] and [touch locationInView: parent], but none of these methods correctly return the point that I clicked in the underlying image. What's the best way to do this? Thanks in advance.
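    Independent of the UIKit specifics, the underlying math is usually: undo the pan first, then undo the zoom. A hedged Python sketch of that arithmetic (the names are made up; it assumes the content offset is measured in the zoomed content's coordinates):

        def content_point(touch_x, touch_y, offset_x, offset_y, zoom):
            # Touch in the visible area -> point in the unscaled content.
            return ((touch_x + offset_x) / zoom, (touch_y + offset_y) / zoom)

        # A tap at (100, 50) with the view panned to offset (200, 300) and zoomed 2x
        # maps to (150.0, 175.0) in the original image.
        print(content_point(100, 50, 200, 300, 2.0))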

    Read the article

  • Is there a slick way to deploy my silverlight app and change settings programmatically?

    - by MIke S.
    I am fairly new to web development. I am at the point of deployment (for testing). I have a few places (maybe 4 places) where I had to add a URI that was non-relative into the appliation. So now, at deployment, those need to be changed. Is there a slick way of handling this? By slick I mean not manually going through the app and changing the URIs or a blanket find and replace (too risky). I only have 4 places to change now, but this could easily change and cause deployment issues. I am using a Microsoft technology stack. Silverlight, ASP.NET, RIA, etc. Development is done in Visual Studio 2010. I noticed that the web projects have a nifty transformation for the web.config...which is nice. Is there an equivalent mechanism for silverlight resources? Any other ways? Any thoughts?

    Read the article

  • Calculating the angle between two points

    - by kingrichard2005
    I'm currently developing a simple 2D game for Android. I have a stationary object that's situated in the center of the screen and I'm trying to get that object to rotate and point to the area on the screen that the user touches. I have the constant coordinates that represent the center of the screen and I can get the coordinates of the point that the user taps on. I'm using the formula outlined in this forum post: How to get angle between two points? It says: "If you want the angle between the line defined by these two points and the horizontal axis: double angle = atan2(y2 - y1, x2 - x1) * 180 / PI;". I implemented this, but I think the fact that I'm working in screen coordinates is causing a miscalculation, since the Y-coordinate is reversed. I'm not sure if this is the right way to go about it; any other thoughts or suggestions are appreciated.
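    The usual fix for screen coordinates is simply to flip the Y difference before calling atan2; a small Python sketch with made-up center and touch coordinates:

        import math

        def angle_to_touch(cx, cy, tx, ty):
            # Degrees counter-clockwise from the +X axis, with the screen's Y axis
            # flipped so that "up" counts as positive.
            return math.degrees(math.atan2(-(ty - cy), tx - cx))

        print(angle_to_touch(240, 400, 340, 400))   #  0.0  (touch to the right)
        print(angle_to_touch(240, 400, 240, 300))   # 90.0  (touch above the center)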

    Read the article

  • How to align C++ class member names in one column in emacs ?

    - by KotBerbelot
    I would like to align all C++ class member names (not to be confused with member types) in one column. Let's look at an example of what we have to start with:

        class Foo
        {
        public:
            void method1( );
            int method2( );
            const Bar * method3( ) const;

        protected:
            float m_member;
        };

    and this is what we would like to have at the end:

        class Foo
        {
        public:
            void        method1( );
            int         method2( );
            const Bar * method3( ) const;

        protected:
            float       m_member;
        };

    So the longest member type declaration defines the column to which class member names will be aligned. How can I perform such a transformation in Emacs?

    Read the article

  • Split query result by half in TSQL (obtain 2 resultsets/tables)

    - by rubdottocom
    I have a query that returns a large number of heavy rows. When I transform these rows into a list of CustomObject I get a big memory peak, and this transformation is done by a custom .NET framework that I can't modify. I need to retrieve fewer rows at a time so I can do "the transform" in two passes and avoid the memory peak. How can I split the result of a query in half? I need to do it in the DB layer. I thought of doing a "TOP count(*)/2", but then how do I get the other half? Thank you!
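    One way to keep the split entirely in the DB layer on SQL Server 2005 is ROW_NUMBER(); a hedged Python sketch that just builds the two statements (the table and column names are placeholders, not from the question, and total_rows would come from a prior SELECT COUNT(*)):

        def split_query(table, key, total_rows):
            # Two T-SQL statements that each return one half, split on a stable ordering.
            half = (total_rows + 1) // 2
            numbered = (f"SELECT *, ROW_NUMBER() OVER (ORDER BY {key}) AS rn "
                        f"FROM {table}")
            first = f"SELECT * FROM ({numbered}) t WHERE rn <= {half}"
            second = f"SELECT * FROM ({numbered}) t WHERE rn > {half}"
            return first, second

        q1, q2 = split_query("Orders", "OrderId", 100000)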

    Read the article

  • Using 50+ 3rd party web services, should I use BizTalk or just C#?

    - by typemismatch
    I'm building a back-end application that needs to fetch data on various schedules from over 50 3rd-party web services, and that number will continue to grow. The data from these services can currently be grouped into 3 types, so each response needs to be mapped to 1 of 3 known schemas. Writing custom C# to hit each web service appears to be a management nightmare, never mind having all that data mapping in code. The current thinking is to build this on top of BizTalk 2009; still a lot of maintenance, but at least a well-defined platform with mapping/transformation capabilities already. I'm looking for any advice from anyone who might have done this before: does this really buy us anything? I am aware of the lack of polling features in BTS, however there are enough workarounds to feel comfortable about the solution. Thanks!

    Read the article
