Search Results

Search found 24734 results on 990 pages for 'floating point conversion'.


  • Performance issues with visibility detection and object transparency

    - by maul
    I'm working on a 3D game that has a view similar to classic isometric games (Diablo, etc.). One of the things I'm trying to implement is the effect of turning walls transparent when the player walks behind them. By itself this is not a huge issue, but I'm having trouble determining exactly which walls should be transparent. I can't use a circle or square mask: there are a lot of cases where a wall piece at the same (relative) position has different visibility depending on the surrounding area. With the help of a friend I came up with this algorithm:
    1. Create a grid around the player that contains a lot of "visibility points" (my game is semi tile-based so I create one point for every tile on the grid). The size of the square's side is close to the radius where I make objects transparent. I found 6x6 to be a good value, so that's 36 visibility points total.
    2. For every visibility point on the grid, check if that point is in the player's line of sight.
    3. For every visibility point that is in the LOS, cast a ray from the camera to that point and mark all objects the ray hits as transparent.
    This algorithm works - not perfectly, but it only needs some tuning - however it is very slow. As you can see, it requires 36 ray casts minimum, but most of the time 60-70 depending on the position. That's simply too much for the CPU. Is there a better way to do this? I'm using Unity 3D, but I'm not looking for an engine-specific solution.
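    A minimal sketch of the camera-ray step described above (step 3), assuming Unity's Physics.RaycastAll API; the LOS-filtered point list, the fade value and the MakeTransparent helper are placeholders for whatever the project already uses:

    using System.Collections.Generic;
    using UnityEngine;

    public class WallFader : MonoBehaviour
    {
        public Camera viewCamera;        // the isometric camera
        public float fadeAlpha = 0.3f;   // illustrative transparency value

        // 'visibilityPoints' is assumed to already be filtered by the
        // line-of-sight check from step 2 of the post.
        public void FadeWallsBetween(List<Vector3> visibilityPoints)
        {
            Vector3 camPos = viewCamera.transform.position;
            foreach (Vector3 point in visibilityPoints)
            {
                // One ray per visible grid point, from the camera towards the point.
                Vector3 toPoint = point - camPos;
                foreach (RaycastHit hit in Physics.RaycastAll(camPos, toPoint.normalized, toPoint.magnitude))
                {
                    Renderer r = hit.collider.GetComponent<Renderer>();
                    if (r != null)
                        MakeTransparent(r);
                }
            }
        }

        // Placeholder: real code would switch to a transparent shader or tween the alpha.
        void MakeTransparent(Renderer r)
        {
            Color c = r.material.color;
            c.a = fadeAlpha;
            r.material.color = c;
        }
    }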

  • Heightmap in Shader not working

    - by CSharkVisibleBasix
    I'm trying to implement GPU-based geometry clipmapping and have problems applying a simple heightmap to my terrain. For the heightmap I use a simple texture with the surface format "single". I've taken the texture from here. To apply it to my terrain, I use the following shader code:
    texture Heightmap;
    sampler2D HeightmapSampler = sampler_state {
        Texture = <Heightmap>;
        MinFilter = Point;
        MagFilter = Point;
        MipFilter = Point;
        AddressU = Mirror;
        AddressV = Mirror;
    };
    Vertex shader:
    float4 worldPos = mul(float4(pos.x, 0.0f, pos.y, 1.0f), worldMatrix);
    float elevation = tex2Dlod(HeightmapSampler, float4(worldPos.x, worldPos.z, 0, 0));
    worldPos.y = elevation * 128;
    The complete vertex shader (also containing the clipmapping transforms) looks like this:
    float4x4 View : View;
    float4x4 Projection : Projection;
    float3 CameraPos : CameraPosition;
    float LODscale;    //The LOD ring index 0:highest x:lowest
    float2 Position;   //The position of the part in the world
    texture Heightmap;
    sampler2D HeightmapSampler = sampler_state {
        Texture = <Heightmap>;
        MinFilter = Point;
        MagFilter = Point;
        MipFilter = Point;
        AddressU = Mirror;
        AddressV = Mirror;
    };
    //Accept only float2 because we extract 3rd dim out of Heightmap
    float4 wireframe_VS(float2 pos : POSITION) : POSITION {
        float4x4 worldMatrix = float4x4(
            LODscale, 0, 0, 0,
            0, LODscale, 0, 0,
            0, 0, LODscale, 0,
            -Position.x * 64 * LODscale + CameraPos.x, 0, Position.y * 64 * LODscale + CameraPos.z, 1);
        float4 worldPos = mul(float4(pos.x, 0.0f, pos.y, 1.0f), worldMatrix);
        float elevation = tex2Dlod(HeightmapSampler, float4(worldPos.x, worldPos.z, 0, 0));
        worldPos.y = elevation * 128;
        float4 viewPos = mul(worldPos, View);
        float4 projPos = mul(viewPos, Projection);
        return projPos;
    }
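    For context, a sketch of how this effect might be fed from the XNA side; the parameter names come from the shader above, while the method name and its arguments are purely illustrative:

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    // Binds the heightmap and per-patch values, then draws one clipmap patch.
    void DrawClipmapPatch(Effect terrainEffect, Texture2D heightmap,
                          Matrix view, Matrix projection, Vector3 cameraPos,
                          float lodScale, Vector2 patchPosition)
    {
        terrainEffect.Parameters["Heightmap"].SetValue(heightmap);
        terrainEffect.Parameters["View"].SetValue(view);
        terrainEffect.Parameters["Projection"].SetValue(projection);
        terrainEffect.Parameters["CameraPos"].SetValue(cameraPos);
        terrainEffect.Parameters["LODscale"].SetValue(lodScale);
        terrainEffect.Parameters["Position"].SetValue(patchPosition);

        foreach (EffectPass pass in terrainEffect.CurrentTechnique.Passes)
        {
            pass.Apply();
            // GraphicsDevice.DrawIndexedPrimitives(...) for the patch would go here.
        }
    }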

  • Shadow mapping: what is the light looking at?

    - by PgrAm
    I'm all set to set up shadow mapping in my 3D engine, but there is one thing I am struggling to understand. The scene needs to be rendered from the light's point of view, so I simply move my camera to the light's position first, but then I need to find out which direction the light is looking. Since it's a point light, it's not shining in any particular direction. How do I figure out what the orientation for the light's point of view is?

  • Why do my gvfs mounts not show up under ~/.gvfs?

    - by kynan
    From what I read, when mounting a network share via Nautilus or gvfs-mount the mount point should be in ~/.gvfs. This seems not to be the case for me: I tried mounting both an FTP and an SMB share via both Nautilus and gvfs-mount under both Ubuntu Maverick and Natty, and in none of the cases did I see any mount point under ~/.gvfs. I can access the shares just fine in Nautilus, but I want to have access via the command line, which is why I need a mount point in the file system.
    Edit: Debugging following James Henstridge's answer and enzotib's comment revealed that on my laptop gvfs-fuse-daemon is running and consequently gvfs mounts show up in ~/.gvfs, whereas on the 2 workstations where ~/.gvfs remained empty gvfs-fuse-daemon was not running. On all 3 machines there are other gvfs processes running: gvfsd, gvfs-afc-volume-monitor, etc. On the laptop, mount | fgrep gvfs yields:
    gvfs-fuse-daemon on /home/xxx/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=xxx)
    That raises the questions:
    1. How are shares mounted without gvfs-fuse-daemon running? Is there no mount point created in that case, and is every access to the share a gvfs library call? Which daemon is responsible? gvfsd?
    2. What's the role of gvfs-fuse-daemon? Does it only create a fuse mount point in ~/.gvfs?

  • Creating SparseImages for Pivot

    - by John Conwell
    Learning how to programmatically make collections for Microsoft Live Labs Pivot has been a pretty interesting ride. There are very few examples out there, and the folks at MS Live Labs are often slow on any feedback. But that is what Reflector is for, right?
    Well, I was creating these InfoCard images (similar to the Car images in the "New Cars" sample collection that MS created for Pivot), and wanted to put a tag cloud into the info card. The problem was that the size of the tag cloud might vary in order for all the tags to fit into it (oftentimes being bigger than the info card itself), because of the varying word lengths and calculated font sizes. So, to fix this, I made the tag cloud its own separate image from the info card. Then, I would create a sparse image out of the two images, where the tag cloud fits into a small section of the info card. This allows the user to see the info card, but then zoom into the tag cloud and see all the tags at a normal resolution. Kind'a cool.
    But... I couldn't find one code example (not one!) of how to create a sparse image. There is one page on the SeaDragon site (http://www.seadragon.com/developer/creating-content/deep-zoom-tools/) that goes over the API for creating images and collections, and it sparsely goes over how to create a sparse image, but unless you are familiar with the API already, the documentation doesn't help very much.
    The key is the Image.ViewportWidth and Image.ViewportOrigin properties of the image that is getting superimposed on the main image. I'll walk through the code below. I've set up a couple of Point structs to represent the parent and sub image sizes, as well as where on the parent I want to position the sub image. Next, create the parent image. This is pretty straightforward. Then I create the sub image. Then I calculate several ratios: the height to width ratio of the sub image, the width ratio of the sub image to the parent image, the height ratio of the sub image to the parent image, then the X and Y coordinates on the parent image where I want the sub image to be placed, expressed as a ratio of the position to the parent image size. After all these ratios have been calculated, I use them to calculate the Image.ViewportWidth and Image.ViewportOrigin values, then pass the image objects into the SparseImageCreator and call Create.
    The key thing that was really missing from the API documentation page is that when setting up your sub images, everything is expressed as a ratio in relation to the main parent image. If I had known this, it would have saved me a lot of trial and error time. And how did I figure this out? Reflector, of course! There is a tool called Deep Zoom Composer that came from MS Live Labs which can create a sparse image. I just dug around the tool's code until I found the method that creates sparse images. But seriously... look at the API documentation from the SeaDragon site and look at the code below and tell me if the documentation would have helped you at all. I don't think so!
    public static void WriteDeepZoomSparseImage(string mainImagePath, string subImagePath, string destination)
    {
        Point parentImageSize = new Point(720, 420);
        Point subImageSize = new Point(490, 310);
        Point subImageLocation = new Point(196, 17);
        List<Image> images = new List<Image>();

        //create main image
        Image mainImage = new Image(mainImagePath);
        mainImage.Size = parentImageSize;
        images.Add(mainImage);

        //create sub image
        Image subImage = new Image(subImagePath);
        double hwRatio = subImageSize.X / subImageSize.Y;           // height width ratio of the tag cloud
        double nodeWidth = subImageSize.X / parentImageSize.X;      // sub image width to parent image width ratio
        double nodeHeight = subImageSize.Y / parentImageSize.Y;     // sub image height to parent image height ratio
        double nodeX = subImageLocation.X / parentImageSize.X;      // x coordinate position on parent / width of parent
        double nodeY = subImageLocation.Y / parentImageSize.Y;      // y coordinate position on parent / height of parent

        subImage.ViewportWidth = (nodeWidth < double.Epsilon) ? 1.0 : (1.0 / nodeWidth);
        subImage.ViewportOrigin = new Point(
            (nodeWidth < double.Epsilon) ? -1.0 : (-nodeX / nodeWidth),
            (nodeHeight < double.Epsilon) ? -1.0 : ((-nodeY / nodeHeight) / hwRatio));
        images.Add(subImage);

        //create sparse image
        SparseImageCreator creator = new SparseImageCreator();
        creator.Create(images, destination);
    }

  • I want to host clients' websites, but not their email. What's the easiest way to handle this?

    - by Phil
    My company lets non-technical users build their own niche industry websites on our server, which we host. They can currently point their nameservers at their registrar to us, which ends up with them no longer having access to their email if they've already set it up through said registrar. We don't want to interfere with their existing email, nor do we want to get into the business of setting up email for them through our service. Thus, having them point A records/CNAMEs to us would work, but is this too complex for a non-techie user? We thought of having them point nameservers to us while pointing the MX records back to them, but this is also beyond their scope. Is there an easy way to 'point records' at their initial state? Any other ideas/feedback?

  • Circle-Rectangle collision in a tile map game

    - by furiousd
    I am making a 2D tile-map-based putt-putt game. I have collision detection working between the ball and the walls of the map, although when the ball collides at the meeting point between 2 tiles I offset it by 0.5 so that it doesn't get stuck in the wall. This ain't a huge issue though.
    if(y % 20 == 0) { y += 0.5; }
    if(x % 20 == 0) { x += 0.5; }
    Collisions work as follows:
    1. Find the closest point between each tile and the center of the ball.
    2. If distance(ball_x, ball_y, close_x, close_y) <= ball_radius and the closest point belongs to a solid object, a collision has occurred.
    3. Invert the X/Y speed according to the side of the object collided with.
    The next thing I tried to do was implement floating blocks in the middle of the map for the ball to bounce off of. When a ball collides with a corner of a block, it gets stuck in it. So I changed my determineRebound() function to treat corners as if they were circles. Here's that function (i and j are indexes of the solid object in the 2D map array; x and y are the centre point of the ball):
    void determineRebound(int _i, int _j)
    {
        if(y > _i*tile_w && y < _i*tile_w + tile_w)
        {
            //Not a corner
            xs *= -1;
        }
        else if(x > _j*tile_w && x < _j*tile_w + tile_w)
        {
            //Not a corner
            ys *= -1;
        }
        else
        {
            //Corner
            float nx = x - close_x;
            float ny = y - close_y;
            float len = sqrt(nx * nx + ny * ny);
            nx /= len;
            ny /= len;
            float projection = xs * nx + ys * ny;
            xs -= 2 * projection * nx;
            ys -= 2 * projection * ny;
        }
    }
    This is where things have gotten messy. Collisions with 'floating' corners work fine, but now when the ball collides near the meeting point of 2 tiles, it detects a corner collision and does not rebound as expected. I'm a bit in over my head at this point. I guess I'm wondering if I'm going about making this sort of game in the right way. Is a 2D tile map the way to go? If so, is there a problem with my collision logic, and where am I going wrong? Any advice/feedback would be great.
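    For reference, a minimal sketch of the closest-point test in step 1 above (clamp the ball's centre to the tile's extents, then compare squared distance to squared radius); it follows the post's tile indexing, but is written in C# and the names are illustrative:

    using System;

    static class BallTileCollision
    {
        // Returns true if the ball at (bx, by) overlaps tile (i, j); the closest
        // point found is also what determineRebound() uses as (close_x, close_y).
        public static bool CircleHitsTile(float bx, float by, float radius,
                                          int i, int j, float tileW,
                                          out float closeX, out float closeY)
        {
            float left = j * tileW, top = i * tileW;
            closeX = Math.Max(left, Math.Min(bx, left + tileW));
            closeY = Math.Max(top, Math.Min(by, top + tileW));

            float dx = bx - closeX;
            float dy = by - closeY;
            return dx * dx + dy * dy <= radius * radius;
        }
    }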

  • Should I use an interface when methods are only similar?

    - by Joshua Harris
    I was posed with the idea of creating an object that checks if a point will collide with a line:
    public class PointAndLineSegmentCollisionDetector {
        public void Collides(Point p, LineSegment s) {
            // ...
        }
    }
    This made me think that if I decided to create a Box object, then I would need a PointAndBoxCollisionDetector and a LineSegmentAndBoxCollisionDetector. I might even realize that I should have a BoxAndBoxCollisionDetector and a LineSegmentAndLineSegmentCollisionDetector. And when I add new objects that can collide, I would need to add even more of these. But they all have a Collides method, so everything I learned about abstraction is telling me, "Make an interface."
    public interface CollisionDetector {
        public void Collides(Spatial s1, Spatial s2);
    }
    But now I have a function that only accepts some abstract class or interface that is used by Point, LineSegment, Box, etc. So if I did this, then each implementation would have to do a type check to make sure that the arguments are the appropriate types, because the collision algorithm is different for each different type match-up. Another solution could be this:
    public class CollisionDetector {
        public void Collides(Point p, LineSegment s) { ... }
        public void Collides(LineSegment s, Box b) { ... }
        public void Collides(Point p, Box b) { ... }
        // ...
    }
    But this could end up being a huge class that seems unwieldy, although it would have simplicity in that it is only a bunch of Collides methods. This is similar to C#'s Convert class, which is nice because even though it is large, it is simple to understand how it works. This seems to be the better solution, but I thought I should open it for discussion as a wiki to get other opinions.
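    As a brief illustration of the overload-based option above (one possibility, not necessarily the right answer): in C#, the runtime binder can route a pair of Spatial references to the matching overload, so the type dispatch lives in one place instead of each implementation doing its own checks. The stub types, the Dispatch name and the empty bodies are placeholders:

    public abstract class Spatial { }
    public class Point : Spatial { }
    public class LineSegment : Spatial { }
    public class Box : Spatial { }

    public class CollisionDetector
    {
        public void Collides(Point p, LineSegment s) { /* point vs segment */ }
        public void Collides(Point p, Box b)         { /* point vs box */ }
        public void Collides(LineSegment s, Box b)   { /* segment vs box */ }

        // Single entry point: picks the overload matching the runtime types of both
        // arguments; throws RuntimeBinderException if no overload exists for the pair.
        public void Dispatch(Spatial s1, Spatial s2)
        {
            Collides((dynamic)s1, (dynamic)s2);
        }
    }

    Note that argument order still matters with this approach: a (LineSegment, Point) pair would need its own overload or a swap before dispatching.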

  • Applications: The Mathematics of Movement, Part 3

    - by TechTwaddle
    Previously: Part 1, Part 2. As promised in the previous post, this post will cover two variations of the marble move program. The first one, Infinite Move, keeps the marble moving towards the click point, rebounding it off the screen edges and changing its direction when the user clicks again. The second version, Finite Move, is the same as the first except that the marble does not move forever. It moves towards the click point, rebounds off the screen edges and slowly comes to rest. The amount of time that it moves depends on the distance between the click point and the marble.
    Infinite Move
    This case is simple (actually both cases are simple). In this case all we need is the direction information, which is exactly what the unit vector stores. So when the user clicks, you calculate the unit vector towards the click point and then keep updating the marble's position like crazy. And, of course, there is no stop condition. There's a little additional code in the bounds checking conditions: whenever the marble goes off the screen boundaries, we need to reverse its direction. Here is the code for the mouse up event and the UpdatePosition() method:
    //stores the unit vector
    double unitX = 0, unitY = 0;
    double speed = 6;
    //speed times the unit vector
    double incrX = 0, incrY = 0;

    private void Form1_MouseUp(object sender, MouseEventArgs e)
    {
        double x = e.X - marble1.x;
        double y = e.Y - marble1.y;
        //calculate distance between click point and current marble position
        double lenSqrd = x * x + y * y;
        double len = Math.Sqrt(lenSqrd);
        //unit vector along the same direction (from marble towards click point)
        unitX = x / len;
        unitY = y / len;
        timer1.Enabled = true;
    }

    private void UpdatePosition()
    {
        //amount by which to increment marble position
        incrX = speed * unitX;
        incrY = speed * unitY;
        marble1.x += incrX;
        marble1.y += incrY;

        //check for bounds
        if ((int)marble1.x < MinX + marbleWidth / 2)
        {
            marble1.x = MinX + marbleWidth / 2;
            unitX *= -1;
        }
        else if ((int)marble1.x > (MaxX - marbleWidth / 2))
        {
            marble1.x = MaxX - marbleWidth / 2;
            unitX *= -1;
        }

        if ((int)marble1.y < MinY + marbleHeight / 2)
        {
            marble1.y = MinY + marbleHeight / 2;
            unitY *= -1;
        }
        else if ((int)marble1.y > (MaxY - marbleHeight / 2))
        {
            marble1.y = MaxY - marbleHeight / 2;
            unitY *= -1;
        }
    }
    So whenever the user clicks we calculate the unit vector along that direction and also the amount by which the marble position needs to be incremented. The speed in this case is fixed at 6; you can experiment with different values. And under bounds checking, whenever the marble position goes out of bounds along the x or y direction we reverse the direction of the unit vector along that direction. Here's a video of it running.
    Finite Move
    The code for finite move is almost exactly the same as that of Infinite Move, except for the difference that the speed is not fixed and there is an end condition, so the marble comes to rest after a while.
    Code follows:
    //unit vector along the direction of click point
    double unitX = 0, unitY = 0;
    //speed of the marble
    double speed = 0;

    private void Form1_MouseUp(object sender, MouseEventArgs e)
    {
        double x = 0, y = 0;
        double lengthSqrd = 0, length = 0;
        x = e.X - marble1.x;
        y = e.Y - marble1.y;
        lengthSqrd = x * x + y * y;
        //length in pixels (between click point and current marble pos)
        length = Math.Sqrt(lengthSqrd);
        //unit vector along the same direction as vector(x, y)
        unitX = x / length;
        unitY = y / length;
        speed = length / 12;
        timer1.Enabled = true;
    }

    private void UpdatePosition()
    {
        marble1.x += speed * unitX;
        marble1.y += speed * unitY;

        //check for bounds
        if ((int)marble1.x < MinX + marbleWidth / 2)
        {
            marble1.x = MinX + marbleWidth / 2;
            unitX *= -1;
        }
        else if ((int)marble1.x > (MaxX - marbleWidth / 2))
        {
            marble1.x = MaxX - marbleWidth / 2;
            unitX *= -1;
        }

        if ((int)marble1.y < MinY + marbleHeight / 2)
        {
            marble1.y = MinY + marbleHeight / 2;
            unitY *= -1;
        }
        else if ((int)marble1.y > (MaxY - marbleHeight / 2))
        {
            marble1.y = MaxY - marbleHeight / 2;
            unitY *= -1;
        }

        //reduce speed by 3% in every loop
        speed = speed * 0.97f;
        if ((int)speed <= 0)
        {
            timer1.Enabled = false;
        }
    }
    So the only difference is that the speed is calculated as a function of the length when the mouse up event occurs. Again, this can be experimented with. Bounds checking is the same as before. In the update and draw cycle, we reduce the speed by 3% in every cycle. Since speed is calculated as a function of length, speed = length/12, the amount of time it takes the speed to reach zero is directly proportional to the length. Note that the speed is in 'pixels per 40ms' because the timeout value of the timer is 40ms. The readability can be improved by representing speed in 'pixels per second', which would require adding some more calculations to the code; I leave that out as an exercise (see the sketch below). Here's a video of this second version.
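    As a quick illustration of the exercise mentioned above, a minimal sketch of the conversion between the post's 'pixels per 40 ms tick' speed and a 'pixels per second' speed, assuming the 40 ms timer described in the post (names are illustrative):

    // Sketch only: unit conversion helpers for the marble's speed.
    static class SpeedConversion
    {
        const double TickMs = 40.0;   // timer interval used in the post

        // 'pixels per tick' -> 'pixels per second' (25 ticks per second).
        public static double ToPixelsPerSecond(double speedPerTick)
        {
            return speedPerTick * (1000.0 / TickMs);
        }

        // Per-tick displacement for a speed expressed in pixels per second,
        // i.e. what UpdatePosition() would multiply the unit vector by.
        public static double StepPerTick(double speedPerSecond)
        {
            return speedPerSecond * (TickMs / 1000.0);
        }
    }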

  • The Oracle Platform

    - by Naresh Persaud
    Today's enterprises typically create identity management infrastructures using ad hoc, multiple point solutions. Relying on point solutions introduces complexity and a high cost of ownership, leading many organizations to rethink this approach. In a recent worldwide study of 160 companies conducted by Aberdeen Research, there was a discernible shift in this trend, as businesses are now looking to move away from the point-solution approach with multiple vendors and adopt an integrated platform approach. By deploying a comprehensive identity and access management strategy using a single platform, companies are saving as much as 48% in IT costs, while reducing audit deficiencies by nearly 35%. According to Aberdeen's research, choosing an integrated suite or "platform" of solutions for identity management from a single vendor can have many advantages over choosing "point solutions" from multiple vendors. The Oracle Identity Management Platform is uniquely designed to offer several compelling benefits to our customers:
    Shared Services: Instead of separate solutions for administration, authentication, authorization, audit and so on, Oracle Identity Management offers a set of shared services that can be consumed by each component in the stack and by developers of new applications.
    Actionable Intelligence: The most compelling benefit of the Oracle platform is "actionable intelligence", which means that if there is a compliance violation, the same platform can fix it, and if a user is logging in from an untrusted device or we detect an attack, the platform can act proactively on that information.
    Suite Interoperability: With the Oracle platform the components all connect and integrate with each other. So if an organization purchases the platform for provisioning and later wants to manage access, the same platform can offer access management, which leads to cost savings.
    Extensible and Configurable: With point solutions you typically get limited ability to extend the tool to address custom requirements, but with the Oracle platform all of the components have a common way to extend the UI and behavior.
    Find out more about the Oracle Platform approach in this presentation.

  • Is there a way to group 2 or 3 gui windows so that they don't get lost behind other open windows?

    - by Gonzalo
    For instance, the floating panels and the main window in GIMP are independent windows. If I change focus to a full window (e.g. Firefox by doing Alt-Shift) and go back to the main GIMP window, I don't get the floating panels back as well (I have to switch to them too in order to see them). It would be great if the 3 windows could be "tied" (or linked) together so that they don't get lost behind other open windows when I make any of them the active window again. I think this configuration (if it exists) should show itself more obviously in the GNOME environment. This question seems to address the same problem, but it doesn't seem to be accurately answered.

  • Compiling for T4

    - by Darryl Gove
    I've recently had quite a few queries about compiling for T4-based systems, so it's probably a good time to review what I consider to be the best practices.
    Always use the latest compiler. Being in the compiler team, this is bound to be something I'd recommend. But the serious points are that (a) every release the tools get better and better, so you are going to be much more effective using the latest release; (b) every release we improve the generated code, so you will see things get better; and (c) old releases cannot know about new hardware.
    Always use optimisation. You should use at least -O to get some amount of optimisation. -xO4 is typically even better, as this will add within-file inlining.
    Always generate debug information, using -g. This allows the tools to attribute information to lines of source. This is particularly important when profiling an application.
    The default target of -xtarget=generic is often sufficient. This setting is designed to produce a binary that runs well across all supported platforms. If the binary is going to be deployed on only a subset of architectures, then it is possible to produce a binary that only uses the instructions supported on these architectures, which may lead to some performance gains. I've previously discussed which chips support which architectures, and I'd recommend that you take a look at the chart that goes with the discussion.
    Crossfile optimisation (-xipo) can be very useful - particularly when the hot source code is distributed across multiple source files. If you're allowed to have something as geeky as favourite compiler optimisations, then this is mine!
    Profile feedback (-xprofile=[collect: | use:]) will help the compiler make the best code layout decisions, and is particularly effective with crossfile optimisations. But what makes this optimisation really useful is that codes that are dominated by branch instructions don't typically improve much with "traditional" compiler optimisation, but often do respond well to being built with profile feedback.
    The macro flag -fast aims to provide a one-stop "give me a fast application" flag. This usually gives a best-performing binary, but with a few caveats: it assumes the build platform is also the deployment platform, it enables floating point optimisations, and it makes some relatively weak assumptions about pointer aliasing. It's worth investigating.
    The SPARC64 processor, T3, and T4 implement floating point multiply-accumulate instructions. These can substantially improve floating point performance. To generate them the compiler needs the flag -fma=fused and also an architecture that supports the instruction (at least -xarch=sparcfmaf).
    The most critical advice is that anyone doing performance work should profile their application. I cannot overstate how important it is to look at where the time is going in order to determine what can be done to improve it. I also presented at Oracle OpenWorld on this topic, so it might be helpful to review those slides.

  • Help identify the pattern for reacting on updates

    - by Mike
    There's an entity that gets updated from external sources. Update events come at random intervals, and the entity has to be processed once updated. Multiple updates may be multiplexed; in other words, the most current state of the entity is what needs to be processed. There's a point of no return during processing where the current state of the entity (and the state is consistent, i.e. no partial update is made) is saved somewhere else, and processing goes on independently of any arriving updates. Every subsequent set of updates has to trigger processing, i.e. the system should not forget about updates. And for each entity there should be no more than one running processing task (before the point of no return), i.e. the entity state should not be processed more than once.
    So what I'm looking for is a pattern to cancel the current processing before the point of no return, or abandon the processing results if an update arrives. The main challenge is to minimize race conditions and maintain integrity. The entity sits mainly in a database with some files on disk, and the system is in .NET with web services and message queues.
    What comes to my mind is a database queue-like table. An arriving update inserts a row in that table and the processing is launched. The processing gathers the necessary data before the point of no return, and once it reaches this barrier it looks into the queue table and checks whether there are more recent updates for the entity. If there are new updates, the processing simply shuts down and its data is discarded. Otherwise the processing data is persisted and it goes beyond the point of no return.
    Though it looks like a solution to me, it is not quite elegant, and I believe this scenario may be supported by some sort of middleware. If I were to use message queues for this, then I'd need to access the queue API at the point of no return to check for the existence of new messages, and this approach also lacks elegance. Is there a name for this pattern and an existing solution?
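    A rough sketch of the queue-table check described above, with the storage hidden behind an interface; the IUpdateQueue type, its members and the table semantics are illustrative, not an existing API:

    // Each arriving update inserts a row and gets a monotonically increasing version.
    public interface IUpdateQueue
    {
        long Enqueue(string entityId);          // called by the update handler
        long LatestVersion(string entityId);    // highest queued version for the entity
    }

    public class EntityProcessor
    {
        private readonly IUpdateQueue queue;
        public EntityProcessor(IUpdateQueue queue) { this.queue = queue; }

        // 'myVersion' is the version of the update that triggered this run.
        public void Process(string entityId, long myVersion)
        {
            object snapshot = GatherData(entityId);     // work done before the point of no return

            // Point of no return: abandon this run if a newer update has arrived,
            // since the run triggered by that update will see the latest state.
            if (queue.LatestVersion(entityId) > myVersion)
                return;

            Persist(snapshot);                          // from here on, updates trigger a new run
        }

        object GatherData(string entityId) { /* gather consistent state */ return new object(); }
        void Persist(object snapshot)      { /* save beyond the point of no return */ }
    }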

  • Facebook App EULA & Restrictions: What can't they do that my web app can?

    - by Adam Tannon
    I have written a nifty little web app (in Java/GWT/JS) and have been experimenting with the idea of making it available through Facebook as a Facebook App as well. After spending some time reading Facebook's developer docs, it seems like I can just create a Facebook App to point at any URL I want and use that as the app/canvas. It accomplishes this via iframes. So, my tentative plan is to just point it towards my (existing) web app so that I don't have to totally re-write it. But then that got me thinking: Facebook must regulate what sorts of things can be done through a Facebook App versus what an app can't do. For instance, I can't imagine I can point a Facebook App at a URL for a web app that accepts e-commerce payments (that would bypass Facebook altogether and not allow them to take a cut of the transaction!). Also, I can't imagine that Facebook allows developers to point their Facebook Apps to just any old URL without some sort of a scan, otherwise that would open Facebook up to the horrors of every security threat known to humanity. I know for a fact that when you write an iOS native app and put it up on the Apple App Store, Apple actually scans your source code for violations of their EULA. So my question: does Facebook do the same? If so, what are their terms and conditions for what a Facebook App can/can't do? Surprisingly, I can't find this anywhere! Thanks in advance!

  • How do I keep a 3D model on the screen in OpenGL?

    - by NoobScratcher
    I'm trying to keep a 3D model on the screen by placing my glDrawElements calls inside the draw function, with the declarations at the top of the .cpp file. When I render the model, the model attaches itself to the current vertex buffer object. This is because my whole graphical user interface is in 2D quads except the window frame. Is there a way to avoid this from happening, or any common causes of this?
    Creating the file object:
    int index = IndexAssigner(1, 1);
    //make a file object and store the list and the index of that list in a C string
    ifstream file (list[index].c_str() );
    //Make another string
    //string line;
    points.push_back(Point());
    Point p;
    int face[4];
    Model rendering code:
    int numfloats = 4;
    float* point = reinterpret_cast<float*>(&points[0]);
    int num_bytes = numfloats * sizeof(float);
    cout << "Size Of Point" << sizeof(Point) << endl;
    GLuint vertexbuffer;
    glGenVertexArrays(1, &vao[3]);
    glGenBuffers(1, &vertexbuffer);
    glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
    glBufferData(GL_ARRAY_BUFFER, points.size()*sizeof(points), points.data(), GL_STATIC_DRAW);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, num_bytes, &points[0]);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, points.size(), &points[0]);
    glEnableClientState(GL_INDEX_ARRAY);
    glIndexPointer(GL_FLOAT, faces.size(), faces.data());
    glEnableVertexAttribArray(0);
    glDrawElements(GL_QUADS, points.size(), GL_UNSIGNED_INT, points.data());
    glDrawElements(GL_QUADS, faces.size(), GL_UNSIGNED_INT, faces.data());

  • Implement Fast Inverse Square Root in Javascript?

    - by BBz
    The Fast Inverse Square Root from Quake III seems to use a floating-point trick. As I understand it, floating-point representation can have some different implementations. So is it possible to implement the Fast Inverse Square Root in JavaScript? Would it return the same result?
    float Q_rsqrt(float number)
    {
        long i;
        float x2, y;
        const float threehalfs = 1.5F;

        x2 = number * 0.5F;
        y = number;
        i = * ( long * ) &y;                     // reinterpret the float's bits as a long
        i = 0x5f3759df - ( i >> 1 );
        y = * ( float * ) &i;
        y = y * ( threehalfs - ( x2 * y * y ) ); // one iteration of Newton's method
        return y;
    }

  • Prevent collisions between mobs/npcs/units piloted by computer AI : How to avoid mobile obstacles?

    - by Arthur Wulf White
    Let's say we have character a starting at point A and character b starting at point B. Character a is headed to point B and character b is headed to point A. There are several simple ways to find the path (I will be using Dijkstra). The question is, how do I take preventative action in the code to stop the two from colliding with one another?
    Case 2: Characters a and b start from the same point at different times. Character b starts later and is the faster of the two. How do I make character b walk around character a without going through it?
    Case 3: Let's say we have m such characters on each side and there is sufficient room to pass through without the characters overlapping with one another. How do I stop the two groups of characters from "walking on top of one another" and allow them to pass around one another in a natural, organic way?
    A correct answer would be any algorithm that, given the path to the destination and a list of mobile objects that block the path, finds an alternative path or stops without stopping all units when there is sufficient room to traverse.

  • Algorithm for procedural city generation?

    - by Zove Games
    I am planning on making a (simple) procedural city generator using Java. I need ideas on what algorithm to use for the layout, and for the actual buildings. The city will mostly have skyscrapers, not really much complex stuff. For the layout I already have a simple algorithm implemented:
    1. Create a Map with java.awt.Point keys and Integer values. Fill it with all the points in the city's bounds with the value -1 (unassigned).
    2. Shuffle the map, and assign the first 10 of the keys IDs (from 1-10).
    3. Loop until all points have IDs: loop through all points, assigning points next to an assigned point the ID of the point next to them; if 2 or more assigned points border the point, randomly choose which ID the point will get.
    4. You will end up with 10 random regions. Make roads bordering these regions.
    5. Fill the inside of each region with a randomly spaced and randomly rotated grid.
    PROBLEM: This is not the fastest way to do it. What algorithm should I use for the layout? And what should I use to make each building's design? I don't even know how I'm going to do that yet (fractals maybe). I just need some ideas, not actual code.
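    Purely to restate the region-growing step above in concrete form (the poster plans Java; this sketch happens to be in C#, and all names are illustrative):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class RegionLayout
    {
        // Seeds 'regions' IDs at random cells, then grows each region outward
        // until every cell is assigned; -1 marks an unassigned cell.
        public static int[,] Grow(int width, int height, int regions, Random rng)
        {
            var ids = new int[width, height];
            for (int x = 0; x < width; x++)
                for (int y = 0; y < height; y++)
                    ids[x, y] = -1;

            var shuffled = Enumerable.Range(0, width * height).OrderBy(_ => rng.Next()).ToList();
            for (int i = 0; i < regions; i++)
                ids[shuffled[i] % width, shuffled[i] / width] = i + 1;

            bool changed = true;
            while (changed)
            {
                changed = false;
                for (int x = 0; x < width; x++)
                    for (int y = 0; y < height; y++)
                    {
                        if (ids[x, y] != -1) continue;
                        // Copy the ID of a random already-assigned neighbour, if any.
                        var neighbours = new List<int>();
                        if (x > 0 && ids[x - 1, y] != -1) neighbours.Add(ids[x - 1, y]);
                        if (x < width - 1 && ids[x + 1, y] != -1) neighbours.Add(ids[x + 1, y]);
                        if (y > 0 && ids[x, y - 1] != -1) neighbours.Add(ids[x, y - 1]);
                        if (y < height - 1 && ids[x, y + 1] != -1) neighbours.Add(ids[x, y + 1]);
                        if (neighbours.Count > 0)
                        {
                            ids[x, y] = neighbours[rng.Next(neighbours.Count)];
                            changed = true;
                        }
                    }
            }
            return ids;   // region borders become the road network, as described above
        }
    }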

  • Path Modifier in Tower Of Defense Game

    - by Siddharth
    I implemented a PathModifier for the path of each enemy in my tower defense game, giving the PathModifier a fixed time in which the enemy completes its path, like the following code describes:
    new PathModifier(speed, path);
    Here speed defines the time to complete the path. But my problem is that in a tower defense game there is a tower which slows down the movement of the enemy, and in that particular situation I am stuck. Please give me some guidance on what to do in this situation.
    EDIT:
    Path path = new Path(wayPointList.size());
    for (int j = 0; j < wayPointList.size(); j++) {
        Point point = grid.getCellPoint(wayPointList.get(j).getRow(), wayPointList.get(j).getCol());
        path.to(point.x, point.y);
    }

  • Intersection of player and mesh

    - by Will
    I have a 3D scene and a player that can move about in it. In a time step the player can move from point A to point B. The player should follow the terrain height, but slow down going up cliffs and then fall back, or stop when jumping and hitting a wall, and so on. In my first prototype I determine the Y at the player's centre's X,Z by intersecting a ray with every triangle in the scene. I am not checking their path, but rather just sampling their end point for each tick. Despite this being JavaScript, it works acceptably performance-wise. However, because I am modelling the player as a single point, the player can position themselves so that they are half inside a cliff face and so on. I need to model them as a solid, e.g. some cluster of spheres or even their full mesh. I am also concerned that if they were moving faster they might miss the test altogether. How should I solve this?
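    For reference, a minimal sketch of the "intersect a ray with a triangle" step the current prototype relies on (the post is JavaScript; this is written in C# with System.Numerics purely for illustration). A downward ray cast from above the player's X,Z against each triangle, keeping the nearest hit, gives the terrain height:

    using System;
    using System.Numerics;

    static class RayTriangle
    {
        // Möller–Trumbore intersection: returns the distance along the ray,
        // or null if the ray misses the triangle (a, b, c).
        public static float? Intersect(Vector3 origin, Vector3 dir,
                                       Vector3 a, Vector3 b, Vector3 c)
        {
            const float eps = 1e-7f;
            Vector3 ab = b - a, ac = c - a;
            Vector3 p = Vector3.Cross(dir, ac);
            float det = Vector3.Dot(ab, p);
            if (MathF.Abs(det) < eps) return null;        // ray parallel to the triangle

            float invDet = 1.0f / det;
            Vector3 t = origin - a;
            float u = Vector3.Dot(t, p) * invDet;
            if (u < 0 || u > 1) return null;

            Vector3 q = Vector3.Cross(t, ab);
            float v = Vector3.Dot(dir, q) * invDet;
            if (v < 0 || u + v > 1) return null;

            float dist = Vector3.Dot(ac, q) * invDet;
            return dist >= 0 ? dist : (float?)null;       // only hits in front of the origin
        }
    }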

  • friending istream operator with class

    - by user1388172
    hello i'm trying to overload my operator >> to my class but i ecnouter an error in eclipse. code: friend istream& operator>>(const istream& is, const RAngle& ra){ return is >> ra.x >> ra.y; } code2: friend istream& operator>>(const istream& is, const RAngle& ra) { is >> ra.x; is >> ra.y; return is } Both crash and i don't know why, please help. EDIT: ra.x & ra.y are both 2 private ints of my class; Full error: error: ..\/rightangle.h: In function 'std::istream& operator>>(std::istream&, const RAngle&)': ..\/rightangle.h:65:12: error: ambiguous overload for 'operator>>' in 'is >> ra.RAngle::x' ..\/rightangle.h:65:12: note: candidates are: c:\mingw\bin\../lib/gcc/mingw32/4.6.1/include/c++/istream:122:7: note: std::basic_istream<_CharT, _Traits>::__istream_type& std::basic_istream<_CharT, _Traits>::operator>>(std::basic_istream<_CharT, _Traits>::__istream_type& (*)(std::basic_istream<_CharT, _Traits>::__istream_type&)) [with _CharT = char, _Traits = std::char_traits<char>, std::basic_istream<_CharT, _Traits>::__istream_type = std::basic_istream<char>] <near match> c:\mingw\bin\../lib/gcc/mingw32/4.6.1/include/c++/istream:122:7: note: no known conversion for argument 1 from 'const int' to 'std::basic_istream<char>::__istream_type& (*)(std::basic_istream<char>::__istream_type&) {aka std::basic_istream<char>& (*)(std::basic_istream<char>&)}' c:\mingw\bin\../lib/gcc/mingw32/4.6.1/include/c++/istream:126:7: note: std::basic_istream<_CharT, _Traits>::__istream_type& std::basic_istream<_CharT, _Traits>::operator>>(std::basic_istream<_CharT, _Traits>::__ios_type& (*)(std::basic_istream<_CharT, _Traits>::__ios_type&)) [with _CharT = char, _Traits = std::char_traits<char>, std::basic_istream<_CharT, _Traits>::__istream_type = std::basic_istream<char>, std::basic_istream<_CharT, _Traits>::__ios_type = std::basic_ios<char>] <near match> c:\mingw\bin\../lib/gcc/mingw32/4.6.1/include/c++/istream:126:7: note: no known conversion for argument 1 from 'const int' to 'std::basic_istream<char>::__ios_type& (*)(std::basic_istream<char>::__ios_type&) {aka std::basic_ios<char>& (*)(std::basic_ios<char>&)}' c:\mingw\bin\../lib/gcc/mingw32/4.6.1/include/c++/istream:133:7: note: std::basic_istream<_CharT, _Traits>::__istream_type& std::basic_istream<_CharT, _Traits>::operator>>(std::ios_base& (*)(std::ios_base&)) [with _CharT = char, _Traits = std::char_traits<char>, std::basic_istream<_CharT, _Traits>::__istream_type = std::basic_istream<char>] <near match> c:\mingw\bin\../lib/gcc/mingw32/4.6.1/include/c++/istream:133:7: note: no known conversion for argument 1 from 'const int' to 'std::ios_base& (*)(std::ios_base&)' c:\mingw\bin\../lib/gcc/mingw32/4.6.1/include/c++/istream:241:7: note: std::basic_istream<_CharT, _Traits>& std::basic_istream<_CharT, _Traits>::operator>>(std::basic_istream<_CharT, _Traits>::__streambuf_type*) [with _CharT = char, _Traits = std::char_traits<char>, std::basic_istream<_CharT, _Traits>::__streambuf_type = std::basic_streambuf<char>] <near match> c:\mingw\bin\../lib/gcc/mingw32/4.6.1/include/c++/istream:241:7: note: no known conversion for argument 1 from 'const int' to 'std::basic_istream<char>::__streambuf_type* {aka std::basic_streambuf<char>*}' ..\/rightangle.h:66:12: error: ambiguous overload for 'operator>>' in 'is >> ra.RAngle::y' ..\/rightangle.h:66:12: note: candidates are: c:\mingw\bin\../lib/gcc/mingw32/4.6.1/include/c++/istream:122:7: note: std::basic_istream<_CharT, _Traits>::__istream_type& std::basic_istream<_CharT, _Traits>::operator>>(std::basic_istream<_CharT, 
_Traits>::__istream_type& (*)(std::basic_istream<_CharT, _Traits>::__istream_type&)) [with _CharT = char, _Traits = std::char_traits<char>, std::basic_istream<_CharT, _Traits>::__istream_type = std::basic_istream<char>] <near match> c:\mingw\bin\../lib/gcc/mingw32/4.6.1/include/c++/istream:122:7: note: no known conversion for argument 1 from 'const int' to 'std::basic_istream<char>::__istream_type& (*)(std::basic_istream<char>::__istream_type&) {aka std::basic_istream<char>& (*)(std::basic_istream<char>&)}' c:\mingw\bin\../lib/gcc/mingw32/4.6.1/include/c++/istream:126:7: note: std::basic_istream<_CharT, _Traits>::__istream_type& std::basic_istream<_CharT, _Traits>::operator>>(std::basic_istream<_CharT, _Traits>::__ios_type& (*)(std::basic_istream<_CharT, _Traits>::__ios_type&)) [with _CharT = char, _Traits = std::char_traits<char>, std::basic_istream<_CharT, _Traits>::__istream_type = std::basic_istream<char>, std::basic_istream<_CharT, _Traits>::__ios_type = std::basic_ios<char>] <near match> c:\mingw\bin\../lib/gcc/mingw32/4.6.1/include/c++/istream:126:7: note: no known conversion for argument 1 from 'const int' to 'std::basic_istream<char>::__ios_type& (*)(std::basic_istream<char>::__ios_type&) {aka std::basic_ios<char>& (*)(std::basic_ios<char>&)}' c:\mingw\bin\../lib/gcc/mingw32/4.6.1/include/c++/istream:133:7: note: std::basic_istream<_CharT, _Traits>::__istream_type& std::basic_istream<_CharT, _Traits>::operator>>(std::ios_base& (*)(std::ios_base&)) [with _CharT = char, _Traits = std::char_traits<char>, std::basic_istream<_CharT, _Traits>::__istream_type = std::basic_istream<char>] <near match> c:\mingw\bin\../lib/gcc/mingw32/4.6.1/include/c++/istream:133:7: note: no known conversion for argument 1 from 'const int' to 'std::ios_base& (*)(std::ios_base&)' c:\mingw\bin\../lib/gcc/mingw32/4.6.1/include/c++/istream:241:7: note: std::basic_istream<_CharT, _Traits>& std::basic_istream<_CharT, _Traits>::operator>>(std::basic_istream<_CharT, _Traits>::__streambuf_type*) [with _CharT = char, _Traits = std::char_traits<char>, std::basic_istream<_CharT, _Traits>::__streambuf_type = std::basic_streambuf<char>] <near match> c:\mingw\bin\../lib/gcc/mingw32/4.6.1/include/c++/istream:241:7: note: no known conversion for argument 1 from 'const int' to 'std::basic_istream<char>::__streambuf_type* {aka std::basic_streambuf<char>*}''

  • vectorizing a for loop in numpy/scipy?

    - by user248237
    I'm trying to vectorize a for loop that I have inside of a class method. The for loop has the following form: it iterates through a bunch of points and, depending on whether a certain variable (called "self.condition_met" below) is true, calls a pair of functions on the point and adds the result to a list. Each point here is an element in a vector of lists, i.e. a data structure that looks like array([[1,2,3], [4,5,6], ...]). Here is the problematic function:
    class myClass:
        def my_inefficient_method(self):
            final_vector = []
            # Assume 'my_vector' and 'my_other_vector' are defined numpy arrays
            for point in all_points:
                if not self.condition_met:
                    a = self.my_func1(point, my_vector)
                    b = self.my_func2(point, my_other_vector)
                else:
                    a = self.my_func3(point, my_vector)
                    b = self.my_func4(point, my_other_vector)
                c = a + b
                final_vector.append(c)
            # Choose random element from resulting vector 'final_vector'
    self.condition_met is set before my_inefficient_method is called, so it seems unnecessary to check it each time, but I am not sure how to write this better. Since there are no destructive operations here, it seems like I could rewrite this entire thing as a vectorized operation -- is that possible? Any ideas how to do this?
