Search Results

Search found 1510 results on 61 pages for 'barycentric coordinates'.

Page 17/61 | < Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >

  • Formula for the best approximation of the center of a 2D rotation with small angles

    - by RocketSurgeon
    This is not homework. I am asking to see whether the problem is classical (trivial) or non-trivial. It looks simple on the surface, and I hope it truly is a simple problem. I have N points (N >= 2) with coordinates Xn, Yn on the surface of a 2D solid body. The body undergoes a small rotation (below Pi/180) combined with small shifts (below 1% of the distance between any 2 of the N points), and possibly some small deformation as well (<<0.001%). The same N points then have new coordinates named XXn, YYn. Calculate, with the best approximation, the location of the center of rotation as a point C with coordinates XXX, YYY. Thank you
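
    A minimal sketch of one standard approach, assuming NumPy is available: fit the best rigid transform (rotation R plus translation t) with the Kabsch/Procrustes method, then take the center of rotation as the fixed point C solving (I - R) C = t. For very small angles (I - R) is poorly conditioned, so a least-squares solve is used; the function name is illustrative.

        import numpy as np

        def center_of_rotation(P, Q):
            """P, Q: (N, 2) arrays of old and new point coordinates."""
            cp, cq = P.mean(axis=0), Q.mean(axis=0)
            # Best-fit rotation via SVD of the cross-covariance (Kabsch method)
            H = (P - cp).T @ (Q - cq)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:          # guard against a reflection
                Vt[-1, :] *= -1
                R = Vt.T @ U.T
            t = cq - R @ cp                   # model: Q ~ R @ P + t
            # Fixed point C of the transform: (I - R) C = t
            C, *_ = np.linalg.lstsq(np.eye(2) - R, t, rcond=None)
            return C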

    Read the article

  • How can I encode four unsigned bytes (0-255) to a float and back again using HLSL?

    - by Statement
    Hello! I am facing a task where one of my HLSL shaders requires multiple texture lookups per pixel. My 2D textures are fixed at 256*256, so two bytes should be sufficient to address any given texel given this constraint. My idea is to put two xy-coordinates in each float, giving me eight xy-coordinates in pixel space when packed into a Vector4-format image. These eight coordinates are then used to sample another texture (or textures). The reason for doing this is to save graphics memory and to try to optimize processing time, since I then don't require multiple texture lookups for the coordinate data. By the way: does anyone know if encoding/decoding 16 bytes from/to 4 floats using 1 sampling is slower than 4 samplings with unencoded data?
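
    A minimal sketch of the packing arithmetic (shown in Python rather than HLSL, just to illustrate the idea): two 8-bit coordinates fit into one value as x*256 + y, which stays well within the 24-bit integer range a 32-bit float represents exactly, so the round trip is lossless. An HLSL version would do the same thing with floor/frac or fmod.

        def pack(x, y):
            # x, y in 0..255; the packed value (max 65535) fits exactly in a float32 mantissa
            return x * 256 + y

        def unpack(packed):
            x = int(packed) // 256
            y = int(packed) % 256
            return x, y

        assert unpack(pack(17, 230)) == (17, 230)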

    Read the article

  • Comparator with null values

    - by pvgoddijn
    Hi, we have some code which sorts a list of addresses based on the distance between their coordinates. This is done through Collections.sort with a custom Comparator. However, from time to time an address without coordinates is in the list, causing a NullPointerException. My initial idea for fixing this was to have the comparator return 0 as the distance for addresses where at least one of the coordinates is null. I fear this might corrupt the order of the 'valid' elements in the list. So, is returning 0 for null data in a comparator OK, or is there a cleaner way to resolve this?
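
    The question is about Java's Comparator, but the usual pattern is the same in any language: sort the entries that do have coordinates normally and push the ones without coordinates to the end, rather than returning 0 for them (which breaks transitivity and can scramble valid entries). In Java the same effect can be had with a comparator that orders a missing key last (e.g. Comparator.nullsLast on the extracted coordinates). A minimal sketch of the idea in Python, with illustrative names and a plain Euclidean distance:

        import math

        def sort_by_distance(addresses, origin):
            """addresses: objects with a .coordinates attribute of (x, y) or None."""
            def key(a):
                if a.coordinates is None:
                    return (1, 0.0)                            # missing coordinates go last
                return (0, math.dist(origin, a.coordinates))   # valid ones sorted by distance
            return sorted(addresses, key=key)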

    Read the article

  • Zip Code Radius Search question...

    - by KnockKnockWhosThere
    I'm wondering if it's possible to find all points, by longitude and latitude, within X radius of one point. So, if I provide a latitude/longitude of -76.0000, 38.0000, is it possible to simply find all the possible coordinates within (for example) a 10-mile radius of that? I know that there's a way to calculate the distance between two points, which is why I'm not clear as to whether this is possible... Because it seems like you need to know the center coordinates (-76 and 38 in this case) as well as the coordinates of every other point in order to determine whether it falls within the specified radius... Is that right?
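
    A minimal sketch of the usual approach, assuming the candidate points come from a table or list: compute the great-circle (haversine) distance from the center to each candidate and keep the ones within the radius. In practice a bounding-box pre-filter on latitude/longitude keeps the full check off most rows; names below are illustrative.

        import math

        EARTH_RADIUS_MILES = 3958.8

        def haversine_miles(lat1, lon1, lat2, lon2):
            # Great-circle distance between two lat/lon points, in miles.
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dphi = math.radians(lat2 - lat1)
            dlmb = math.radians(lon2 - lon1)
            a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
            return 2 * EARTH_RADIUS_MILES * math.asin(math.sqrt(a))

        def within_radius(points, center_lat, center_lon, radius_miles):
            """points: iterable of (lat, lon) tuples."""
            return [(lat, lon) for lat, lon in points
                    if haversine_miles(center_lat, center_lon, lat, lon) <= radius_miles]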

    Read the article

  • Python Vector Class

    - by sfjedi
    I'm coming from a C# background where this stuff is super easy; I'm trying to translate it into Python for Maya. There's gotta be a better way to do this. Basically, I'm looking to create a Vector class that will simply have x, y and z coordinates, but it would be ideal if this class returned a tuple with all 3 coordinates and if you could edit the values of this tuple through x, y and z properties, somehow. This is what I have so far, but there must be a better way to do this than using an exec statement, right? I hate using exec statements.

        class Vector(object):
            '''Creates a Maya vector/triple, having x, y and z coordinates as float values'''

            def __init__(self, x=0, y=0, z=0):
                self.x, self.y, self.z = x, y, z

            def attrsetter(attr):
                def set_float(self, value):
                    setattr(self, attr, float(value))
                return set_float

            for xyz in 'xyz':
                # note: attrgetter needs "from operator import attrgetter"
                exec("%s = property(fget=attrgetter('_%s'), fset=attrsetter('_%s'))" % (xyz, xyz, xyz))
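
    A minimal sketch of one exec-free alternative, assuming plain Python (no Maya API types): store the components in a list, expose x, y, z as properties that coerce to float, and implement __iter__ so the object unpacks like a tuple. This is just one clean option; a shared descriptor or a dataclass would work too.

        class Vector(object):
            """A Maya-style vector with float x, y, z components."""

            def __init__(self, x=0, y=0, z=0):
                self._v = [float(x), float(y), float(z)]

            def _component(index):
                def fget(self):
                    return self._v[index]
                def fset(self, value):
                    self._v[index] = float(value)
                return property(fget, fset)

            x, y, z = _component(0), _component(1), _component(2)

            def __iter__(self):
                return iter(self._v)          # allows tuple(v) and x, y, z = v

            def __repr__(self):
                return "Vector(%r, %r, %r)" % tuple(self._v)

        v = Vector(1, 2, 3)
        v.x = 5
        print(tuple(v))   # (5.0, 2.0, 3.0)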

    Read the article

  • Projecting a targeting ring using Direct3D

    - by JohnB
    I'm trying to draw a "targeting ring" on the ground below a "unit" in a hobby 3D game I'm working on. Basically I want to project a bright red patterned ring onto the ground terrain below the unit. The only approach I can think of is this: draw the world once as normal, then draw the world a second time, but in my vertex shader I have the world x, y, z coordinates of the vertex and I can pass in the coordinates of the highlighted unit, so I can calculate what the u, v coordinates into my projected texture should be at that point in the world for that vertex. I'd then use the pixel shader to pick pixels from the target ring texture and blend them into the previously drawn world. I believe that should be easy and should work, but it involves drawing the whole visible world twice, since it's hard to determine exactly which polygons the targeting ring might fall onto. It seems a big overhead to draw the whole world twice, once for the normal lit, textured ground and then again just to draw the targeting ring. Is there a better approach that I'm missing?
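
    For reference, the u, v calculation described above is just a planar projection centered on the unit. A minimal sketch of that mapping (in Python rather than HLSL), assuming a y-up world and a ring texture spanning ring_size world units:

        def ring_uv(world_x, world_z, unit_x, unit_z, ring_size):
            # Map a world-space position to 0..1 texture coordinates centered on the unit.
            u = (world_x - unit_x) / ring_size + 0.5
            v = (world_z - unit_z) / ring_size + 0.5
            return u, v   # sample the ring texture only when both fall inside [0, 1]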

    Read the article

  • How to get the row number of the QComboBox in QTableWidget

    - by dreamxiuhuishan
    Here is the code, but it does not work. Can anyone help correct it? Thanks very much!

        void add()
        {
            QComboBox *ziduan = new QComboBox;
            ziduan->addItem("??", "nd");
            int row = 0;
            int col = 1;
            QSignalMapper *signalMapper = new QSignalMapper(this);
            connect(ziduan, SIGNAL(currentIndexChanged(int)), signalMapper, SLOT(map()));
            signalMapper->setMapping(ziduan, QString("%1-%2").arg(row).arg(col));
            // changeZiduan below looks like a slot, so this probably wants SLOT(...) rather than SIGNAL(...)
            connect(signalMapper, SIGNAL(mapped(const QString &)), this, SIGNAL(changeZiduan(const QString &)));
        }

        void sqlGenerator::changeZiduan(const QString &position)
        {
            QStringList coordinates = position.split("-");
            int row = coordinates[0].toInt();
            int col = coordinates[1].toInt();
        }

    Read the article

  • Which Euler rotations can I use?

    - by melis
    I have two Cartesian coordinate systems: xyz and (big) XYZ. I want to make them parallel to each other, for example x parallel to X, y parallel to Y and z parallel to Z. I use a rotation matrix, but I have a lot of different rotation matrices to choose from. For example, I have a 3D point in the xyz Cartesian coordinate system, called A, and I want to change to the XYZ Cartesian coordinate system and find the same 3D point in those coordinates, called B. Up to there it is okay. But when I used different rotation matrices, the points changed. What can I do? Which Euler rotations can I use?
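
    A minimal sketch of one way to get a single, unambiguous rotation, assuming you know the unit axis vectors of both frames expressed in some common reference frame: build the rotation matrix directly from the bases, then (only if Euler angles are really needed) extract them for one fixed convention, here Z-Y-X (yaw, pitch, roll). Different Euler conventions give different angle triples for the same rotation, which is usually why results appear to change.

        import math
        import numpy as np

        def frame_alignment(A, B):
            """A, B: 3x3 arrays whose columns are the unit axes of the xyz and XYZ frames,
            expressed in a common reference frame. Returns R with R @ A == B, i.e. the
            rotation taking the xyz axes onto the XYZ axes."""
            A = np.asarray(A, dtype=float)
            B = np.asarray(B, dtype=float)
            # Orthonormal bases, so the inverse is the transpose.
            # (Components of a point change the other way round: v_XYZ = B.T @ A @ v_xyz.)
            return B @ A.T

        def euler_zyx(R):
            # Angles for the convention R = Rz(yaw) @ Ry(pitch) @ Rx(roll)
            pitch = math.asin(-R[2, 0])
            roll = math.atan2(R[2, 1], R[2, 2])
            yaw = math.atan2(R[1, 0], R[0, 0])
            return yaw, pitch, roll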

    Read the article

  • Programming user interface advice?

    - by onurozcelik
    Hi, in my project I am going to generate a user interface programmatically. Scalability of this UI is a very important requirement. So far I am using two-dimensional graphics to generate the UI. I think there may be other solutions, but for the moment I know only two. The first is supplying X, Y coordinates for each two-dimensional graphic in my UI. (I do not prefer this solution because I do not want to calculate the X, Y coordinates of each graphic; for the moment I don't have a logic for doing this easily.) The second (which I am currently using) is layouts, which organize their contents according to the size of each item. With this solution I don't have to calculate the X, Y coordinates of each item; the layout does it for me. But this approach may have its own pitfalls. I am very new to user interface programming. Can you give me advice about this issue?

    Read the article

  • Gradient a Parallelogram

    - by nuclearpenguin
    I'm working in JavaScript, drawing on a canvas, and have four coordinates to draw a parallelogram, called A, B, C and D, starting from the top-left, top-right, bottom-left and bottom-right, respectively. An example of some coordinates might be: A: (3, 3), B: (4, 3), C: (1, 0), D: (2, 0). I can draw the parallelogram just fine, but I would like to fill it in with a gradient. I want the gradient to fill in from left to right, but matching the angle of the shape. The library I use (CAKE) requires a start and a stop coordinate for the gradient. My start would be somewhere halfway between A and C, and the stop somewhere halfway between B and D. Of course, it is not simply EXACTLY halfway, because the angles at A, B, C and D are not right angles. So given this information (the coordinates), how do I find the point on the line A-C to start, and the point on the line B-D to stop? Remember, I'm doing this in JavaScript, so I have some good Math tools at my disposal for the calculation.
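
    A minimal sketch of one way to get those two points (shown in Python; the arithmetic translates directly to JavaScript): start at the midpoint of edge A-C, then move along the unit normal of that edge by the perpendicular distance to the parallel edge B-D. The gradient's iso-lines are then parallel to the slanted sides, which is what "matching the angle of the shape" asks for.

        import math

        def gradient_endpoints(A, B, C, D):
            """A, B, C, D: (x, y) corners as in the question (A-C and B-D are the slanted sides)."""
            ex, ey = C[0] - A[0], C[1] - A[1]                  # edge A -> C
            length = math.hypot(ex, ey)
            nx, ny = ey / length, -ex / length                 # unit normal to that edge
            width = (B[0] - A[0]) * nx + (B[1] - A[1]) * ny    # perpendicular distance A-C -> B-D
            start = ((A[0] + C[0]) / 2, (A[1] + C[1]) / 2)
            stop = (start[0] + nx * width, start[1] + ny * width)
            return start, stop

        # With the example corners, stop lands on line B-D as expected.
        print(gradient_endpoints((3, 3), (4, 3), (1, 0), (2, 0)))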

    Read the article

  • Rotation towards an object in 3d space

    - by retoucher
    Hello, I have two coordinates on a 2D plane in 3D space, and am trying to rotate one coordinate (a vector) to face the other coordinate. My vertical axis is the y-axis, so if both of the coordinates are located flat on the 2D plane, they would both have a y value of 0, and their x and z coordinates determine their position length/width-wise on the plane. Right now I'm calculating the angle like so (language agnostic):

        angle = atan2(z2 - z1, x2 - x1);

    and am rotating/translating in space like so:

        pushMatrix();
        rotateY(angle);
        popMatrix();

    This doesn't seem to be working, though. Are my calculations/process correct?

    Read the article

  • Projection matrix + world plane ~> Homography from image plane to world plane

    - by B3ret
    I think I have my wires crossed on this; it should be quite easy. I have a projection matrix from world coordinates to image coordinates (4D homogeneous to 3D homogeneous), and therefore I also have the inverse projection matrix from image coordinates to world "rays". I want to project points of the image back onto a plane within the world (which is given, of course, as a 4D homogeneous vector). The needed homography should be uniquely identified, yet I cannot figure out how to compute it. Of course I could also intersect the back-projected rays with the world plane, but this seems not a good way, knowing that there MUST be a homography doing this for me. Thanks in advance, Ben
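
    A minimal sketch of one way to build it, assuming a 3x4 projection matrix P and NumPy, and assuming you pick a point p0 on the plane and two directions b1, b2 spanning it: a plane point with 2D coordinates (u, v) is the homogeneous world point p0 + u*b1 + v*b2, so stacking those three columns gives a 4x3 matrix M, the plane-to-image homography is H = P @ M, and the image-to-plane map asked for is its inverse.

        import numpy as np

        def plane_to_image_homography(P, p0, b1, b2):
            """P: 3x4 projection matrix. p0: a point on the plane (4-vector, w=1).
            b1, b2: directions spanning the plane (4-vectors, w=0)."""
            M = np.column_stack([b1, b2, p0])   # maps (u, v, 1) -> homogeneous world point
            return P @ M                        # 3x3: plane coords -> image coords

        def image_to_plane(P, p0, b1, b2, image_point):
            H = plane_to_image_homography(P, p0, b1, b2)
            uv = np.linalg.inv(H) @ np.append(image_point, 1.0)
            return uv[:2] / uv[2]               # 2D coordinates (u, v) on the plane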

    Read the article

  • Android: who can help me with setting up this Google Maps class, please?

    - by Capsud
    Hi, firstly this has turned out to be quite a long post, so please bear with me; it's not too difficult, but you may need to clarify something with me if I haven't explained it correctly. So, with some help the other day from the guys on this forum, I managed to partially set up my 'mapClass' class, but I'm having trouble with it and it's not running correctly, so I would like some help if possible. I will post the code below so you can see. What I've got is a 'Dundrum' class, which sets up the ListView for an array of items. Then I've got a 'DundrumSelector' class, which I use to set up the setOnClickListener() methods on the list items and link them to their correct views.

    DundrumSelector class:

        public static final int BUTTON1 = R.id.anandaAddressButton;
        public static final int BUTTON2 = R.id.bramblesCafeAddressButton;
        public static final int BUTTON3 = R.id.brannigansAddressButton;

        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            int position = getIntent().getExtras().getInt("position");
            if (position == 0) {
                setContentView(R.layout.ananda);
            };
            if (position == 1) {
                setContentView(R.layout.bramblescafe);
            };
            if (position == 2) {
                setContentView(R.layout.brannigans);

            Button anandabutton = (Button) findViewById(R.id.anandaAddressButton);
            anandabutton.setOnClickListener(new View.OnClickListener() {
                public void onClick(View view) {
                    Intent myIntent = new Intent(view.getContext(), MapClass.class);
                    myIntent.putExtra("button", BUTTON1);
                    startActivityForResult(myIntent, 0);
                }
            });

            Button bramblesbutton = (Button) findViewById(R.id.bramblesCafeAddressButton);
            bramblesbutton.setOnClickListener(new View.OnClickListener() {
                public void onClick(View view) {
                    Intent myIntent = new Intent(view.getContext(), MapClass.class);
                    myIntent.putExtra("button", BUTTON2);
                    startActivityForResult(myIntent, 0);
                }
            });

            etc. etc. ...

    Then what I did was set up static ints to represent the buttons, which you can see at the top of this class. The reason for this is that in my MapClass activity I just want to have one method, because the only thing that varies is the coordinates of each location, i.e. I don't want 100+ map classes essentially doing the same thing apart from passing different coordinates into the method. So my map class is as follows:

            case DundrumSelector.BUTTON1:
                handleCoordinates("53.288719", "-6.241179");
                break;
            case DundrumSelector.BUTTON2:
                handleCoordinates("53.288719", "-6.241179");
                break;
            case DundrumSelector.BUTTON3:
                handleCoordinates("53.288719", "-6.241179");
                break;
            }
        }

        private void handleCoordinates(String l, String b) {
            mapView = (MapView) findViewById(R.id.mapView);
            LinearLayout zoomLayout = (LinearLayout) findViewById(R.id.zoom);
            View zoomView = mapView.getZoomControls();
            zoomLayout.addView(zoomView, new LinearLayout.LayoutParams(
                    LayoutParams.WRAP_CONTENT, LayoutParams.WRAP_CONTENT));
            mapView.displayZoomControls(true);
            mc = mapView.getController();
            String coordinates[] = {l, b};
            double lat = Double.parseDouble(coordinates[0]);
            double lng = Double.parseDouble(coordinates[1]);
            p = new GeoPoint((int) (lat * 1E6), (int) (lng * 1E6));
            mc.animateTo(p);
            mc.setZoom(17);
            mapView.invalidate();
        }

    Now this is where my problem is. The onClick() events don't even work from the ListView to get into the correct views. I have to comment out the methods in 'DundrumSelector' before I can get into their views. And this is what I don't understand: why won't the onClick() events work, given that the problem is not even on that next view where the map is?
    I know this is a very long post and it might be quite confusing, so let me know if you want any clarification. Just to recap, what I'm trying to do is have one class that sets up the map coordinates, like what I'm trying to do in my 'mapClass'. Please can someone help or suggest another way of doing this? Thanks a lot, everyone, for reading this.

    Read the article

  • Cache for large read only database recommendation

    - by paddydub
    I am building a site with Spring, Hibernate and MySQL. The MySQL database contains information on coordinates, locations, etc.; it is never updated, only queried. The database contains 15000 rows of coordinates and 48000 rows of coordinate connections. Every time a request is processed, the application needs to read all these coordinates, which is taking approx. 3-4 seconds. I would like to set up a cache to allow quick access to the data. I'm researching memcached at the moment; can you please advise whether this would be my best option?

    Read the article

  • Mixing Matplotlib patches with polar plot?

    - by Roger
    I'm trying to plot some data in polar coordinates, but I don't want the standard ticks, labels, axes, etc. that you get with the Matplotlib polar() function. All I want is the raw plot and nothing else, as I'm handling everything with manually drawn patches and lines. Here are the options I've considered:
    1) Drawing the data with polar(), hiding the superfluous stuff (with ax.axes.get_xaxis().set_visible(False), etc.) and then drawing my own axes (with Line2D, Circle, etc.). The problem is that when I call polar() and subsequently add a Circle patch, it's drawn in polar coordinates and ends up looking like an infinity symbol. Also, zooming doesn't seem to work with the polar() function.
    2) Skipping the polar() function and somehow making my own polar plot manually using Line2D. The problem is I don't know how to make Line2D draw in polar coordinates and haven't figured out how to use a transform to do that.
    Any idea how I should proceed?
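
    A minimal sketch of option 2, assuming Matplotlib: convert the data to Cartesian coordinates yourself and draw on a plain, undecorated Axes, so patches like Circle behave normally and zooming works. The data here is an illustrative spiral.

        import numpy as np
        import matplotlib.pyplot as plt
        from matplotlib.patches import Circle

        theta = np.linspace(0.0, 4 * np.pi, 400)
        r = np.linspace(0.0, 1.0, 400)                   # example data in polar form

        fig, ax = plt.subplots()
        ax.plot(r * np.cos(theta), r * np.sin(theta))    # polar -> Cartesian by hand

        # Hand-drawn "axes": a couple of reference circles, no ticks or spines.
        for radius in (0.5, 1.0):
            ax.add_patch(Circle((0, 0), radius, fill=False, linestyle=':'))
        ax.set_aspect('equal')
        ax.axis('off')
        plt.show()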

    Read the article

  • GoogleMaps API v3 - Need help with two "click" event scenarios. Need similar functionality to v2 AP

    - by Nathan Raley
    In version 2 of the API, the map click event returned an Overlay, a LatLng and an overlay LatLng. I used this to create a generic map event that would either retrieve the coordinates of the map click event or return the coordinates of a Marker or other type of Overlay. Now that API v3 doesn't return the Overlay or overlay LatLng during the map click event, how can I go about creating a generic "click" event for the map that works if the user clicks on a marker or overlay? I really don't want to create a click event for each marker I have on my page, as I am creating anywhere from a handful to a couple of thousand markers. Also, I had to create a custom ImageMapType in order to display the StreetViewOverlay, like we could do in v2 of the API, because I couldn't find anywhere that told me how to add the StreetViewOverlay without the pegman icon. How can I go about retrieving the LatLng coordinates of a click on this overlay type as well?

    Read the article

  • jQuery custom drag and drop

    - by samlochner
    I am trying to create functionality similar to drag and drop. I need to create my own, as there will be some significant differences from the drag and drop in the jQuery UI. I would like to have mousemove called repeatedly at all times, and mousedown called every time the mouse is pressed. So I have the following code ('coords' is the id of a div):

        $(document).bind('mousemove', function(e){
            $("#coords").text("e.pageX: " + e.pageX + ", e.pageY: " + e.pageY);
        });

        $(document).bind('mousedown', function(e){
        });

    As I move the mouse, coordinates are reported correctly in 'coords'. If I press a mouse button and then move the mouse, coordinates are still reported correctly. But if I press a mouse button on an image and then move the mouse, coordinates are reported correctly for a few sets, and then they seize up! Why is this happening and how can I fix it? Thanks, Sam

    Read the article

  • Fast image coordinate lookup in Numpy

    - by victor
    I've got a big NumPy array full of coordinates (about 400): [[102, 234], [304, 104], ...] and a NumPy 2D array my_map of size 800x800. What's the fastest way to look up the coordinates given in that array? I tried things like paletting as described in this post: http://opencvpython.blogspot.com/2012/06/fast-array-manipulation-in-numpy.html but couldn't get it to work. I was also thinking about turning each coordinate into a linear index of the map and then piping it straight into my_map, like so: my_map[linearized_coords], but I couldn't get vectorize to properly translate the coordinates into a linear fashion. Any ideas?
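
    A minimal sketch of the usual vectorized lookup, assuming the pairs are (x, y) pixel coordinates and my_map is indexed [row, column]: integer fancy indexing with the two columns does the whole batch in one call, and np.ravel_multi_index gives the linearized indices if those are wanted instead.

        import numpy as np

        rng = np.random.default_rng(0)
        my_map = rng.random((800, 800))
        coords = rng.integers(0, 800, size=(400, 2))     # illustrative (x, y) pairs

        # Fancy indexing: rows come from y, columns from x (swap if your convention differs).
        values = my_map[coords[:, 1], coords[:, 0]]

        # Equivalent linearized-index route.
        linear = np.ravel_multi_index((coords[:, 1], coords[:, 0]), my_map.shape)
        assert np.array_equal(values, my_map.ravel()[linear])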

    Read the article

  • Calculating the angle between two points

    - by kingrichard2005
    I'm currently developing a simple 2D game for Android. I have a stationary object situated in the center of the screen, and I'm trying to get that object to rotate and point to the area of the screen that the user touches. I have the constant coordinates that represent the center of the screen, and I can get the coordinates of the point that the user taps on. I'm using the formula outlined in this forum post: How to get angle between two points? It says: "If you want the angle between the line defined by these two points and the horizontal axis: double angle = atan2(y2 - y1, x2 - x1) * 180 / PI;". I implemented this, but I think the fact that I'm working in screen coordinates is causing a miscalculation, since the Y coordinate is reversed. I'm not sure if this is the right way to go about it; any other thoughts or suggestions are appreciated.
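
    A minimal sketch of the adjustment, assuming screen coordinates where y grows downward and a conventional mathematical angle (counter-clockwise from the +x axis) is wanted: negate the y difference before calling atan2. Shown in Python; the same one-character change applies in Java.

        import math

        def angle_to_touch(cx, cy, tx, ty):
            """cx, cy: screen center; tx, ty: touch point, in screen pixels (y grows down).
            Returns degrees counter-clockwise from the +x axis, in (-180, 180]."""
            return math.degrees(math.atan2(-(ty - cy), tx - cx))

        print(angle_to_touch(400, 300, 400, 0))   # touch straight above the center -> 90.0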

    Read the article

  • Effectively implementing a game view using java

    - by kdavis8
    I am writing a 2D game in Java. The game mechanics are similar to the Pokémon Game Boy Advance series, e.g. FireRed, Ruby, Diamond and so on. I need a way to draw a huge map, maybe 5000 by 5000 pixels, and then load individual in-game sprites across the entirety of the map, like rendering a scene. Game sprites would be things like terrain objects, trees, rocks, bushes, and also houses, castles, NPCs and so on. But I also need to implement some kind of camera-view class that focuses on the player. The camera-view class needs to follow the character's movements throughout the game map, but it also needs to clip the rest of the map away from the user's field of view, so that the user can only see the arbitrary proximity adjacent to the player's sprite. The proximity's range could be something like 500 pixels in every direction around the player's sprite. On top of this, I need to implement an independent resolution for the game world so that the game view will be uniform on all screen sizes and screen resolutions. I know that this does sound like a handful and may fall under the category of multiple questions, but the questions are all related and any advice would be very much appreciated. I don't need a full source code listing, but maybe some pointers to effective Java API classes that could make doing what I need to do a lot simpler. Also, any algorithmic/design advice would greatly benefit me as well. An example of what I am trying to do, in source code form, is below:

        package myPackage;

        /**
         * The purpose of GameView is to: render a scene using the Scene class, create a
         * clipping pane using the CameraView class, and finally instantiate a coordinate
         * grid using the Path class.
         *
         * Once all of these things have been done, GameView should then be instantiated
         * and used jointly with its helper classes. CameraView should be used as the
         * main drawing image. CameraView is the window to the game world. Scene passes
         * data constantly to CameraView so that the entire map flows smoothly. Path uses
         * the x and y coordinates from the camera view to construct cells for path
         * finding algorithms.
         */
        public class GameView {

            // Scene is a helper class to GameView. It renders the entire map to memory
            // for the camera view.
            Scene scene;

            // CameraView is a helper class to GameView. It clips the Scene into a
            // small image that follows the player's coordinates.
            CameraView camera;

            // Path is a helper class to GameView. It observes and calculates the
            // coordinates of the camera view and divides them into grids/cells for
            // path finding.
            Path path;

            // This represents the player and has a getSprite() method that returns the
            // current frame column/row combination of the passed sprite sheet.
            Sprite player;
        }
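
    A minimal sketch of the camera/clipping idea, assuming the map is measured in world pixels: keep the camera centered on the player, clamp the visible rectangle to the map bounds, and subtract the camera origin from every world coordinate when drawing. Shown in Python to keep it short; in Java the same arithmetic would typically feed a Graphics2D translate or a BufferedImage.getSubimage call.

        def camera_rect(player_x, player_y, view_w, view_h, map_w, map_h):
            """Top-left corner and size of the visible window, clamped to the map."""
            cam_x = min(max(player_x - view_w // 2, 0), map_w - view_w)
            cam_y = min(max(player_y - view_h // 2, 0), map_h - view_h)
            return cam_x, cam_y, view_w, view_h

        def world_to_screen(wx, wy, cam_x, cam_y):
            # Anything whose screen position falls outside (0..view_w, 0..view_h) can be skipped.
            return wx - cam_x, wy - cam_y

        print(camera_rect(100, 4900, 1000, 1000, 5000, 5000))   # -> (0, 4000, 1000, 1000)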

    Read the article

  • FBX 3ds max export, bad vertices

    - by instancedName
    I need to import a model into OpenGL via the FBX SDK, and for testing purposes I created a simple box centered at (0, 0, 0), with length 3, in 3ds Max. But when I exported it and imported it in OpenGL, it wasn't in the center. I then exported it in ASCII format and opened the file in Notepad, and indeed the Z coordinates were 0 and 3. When I converted the model to an editable mesh and checked every vertex in 3ds Max, it had the expected (+-1.5, +-1.5, +-1.5) coordinates. Can anyone help me with this one? I'm really stuck. I tried changing a whole bunch of parameters in the 3ds Max export, but every time it changes the Z coordinate.

    Read the article

  • How to detect when two moving shapes overlap?

    - by user1389813
    Given a list of circles with coordinates (x and y) that move every second in different directions (south-east, south-west, north-east and north-west), where a circle changes direction if it hits a wall, sort of like bouncing: how do we detect if any of them collide or overlap with each other? I am not sure if we can use a data structure like a binary search tree, because since all the coordinates change every second, the tree would have to be rebuilt accordingly. Or can we use a vertical sweep line algorithm each time? Any ideas on how to do this in an efficient way?
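
    A minimal sketch of the sort-and-sweep idea the question mentions, assuming equal-radius circles given as (x, y) centers: sort by x each tick (cheap, since the order barely changes between ticks), then only compare circles whose x-intervals can still overlap. Two circles overlap when the distance between centers is at most the sum of their radii.

        def overlapping_pairs(circles, radius):
            """circles: list of (x, y) centers, all with the same radius. Returns index pairs."""
            order = sorted(range(len(circles)), key=lambda i: circles[i][0])
            hits = []
            for a, i in enumerate(order):
                xi, yi = circles[i]
                for j in order[a + 1:]:
                    xj, yj = circles[j]
                    if xj - xi > 2 * radius:      # no later circle can reach circle i: stop
                        break
                    if (xj - xi) ** 2 + (yj - yi) ** 2 <= (2 * radius) ** 2:
                        hits.append((i, j))
            return hits

        print(overlapping_pairs([(0, 0), (3, 0), (1, 1)], radius=1))   # -> [(0, 2)]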

    Read the article

  • Sprite sheets, Clamp or Wrap?

    - by David
    I'm using a combination of sprite sheets (for, well, sprites) and individual textures for infinite tiling. For the tiling textures I'm obviously using Wrap to draw the entire surface in one call, but up until now I've been making a separate batch using Clamp for drawing sprites from the sprite sheets. The sprite sheets include a border (repeating the edge pixels of each sprite) and my code uses the correct source coordinates for sprites. But since I'm never giving coordinates outside of the texture when drawing sprites (and indeed the border exists to prevent bleed-over when filtering), it's struck me that I'd be better off just using Wrap so that I can combine everything into one batch. I just want to be sure that I haven't overlooked something obvious. Is there any reason that Wrap would be harmful when used with a sprite sheet?

    Read the article

  • How can I find a position between 4 vertices in a fragment shader?

    - by c4sh
    I'm creating a shader with SharpDX (DirectX 11 in C#) that takes a segment (2 points) from the output of a vertex shader and then passes it to a geometry shader, which converts this line into a rectangle (4 points) and assigns the four corners texture coordinates. After that I want a fragment shader (which receives the interpolated position and the interpolated texture coordinates) that checks the depth at the "spine" of the rectangle (that is, on the line that passes through the middle of the rectangle). The problem is I don't know how to extract the position of the corresponding fragment at the spine of the rectangle. This happens because I have the texture coordinates interpolated, but I don't know how to use them to get the fragment I want, because the coordinate systems of a) the texture and b) the position of my fragment in screen space are not the same. Thanks a lot for any help.

    Read the article

< Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >