Search Results

Search found 41789 results on 1672 pages for 'software development'.

Page 578 of 1672

  • Game Formula/Mechanic

    - by Georgiadis Abraam
    I am trying to design a game for a project I have. The main idea is three types of heroes with three stats per hero. There are no levels involved, so the differences must come entirely from the stats. The fight logic (fLogic) is that a type-1 hero has good chances of beating a type-2 hero, a type-2 hero has good chances of beating a type-3 hero, and a type-3 hero has good chances of beating a type-1 hero. For over a week I have been trying to find a stats-based formula that produces this, but I can't. I was experimenting with numbers yesterday and the results were decent, but I can't extract a formula out of them. Could you please guide me or give me hints on how I should start creating formulas for a level-less game that fulfils the fLogic?
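    For illustration only, here is a minimal Python sketch of one way a cyclic advantage can fall out of stats alone. The stat names (might, cunning, will), the numbers, and the "pressure" formula are hypothetical assumptions, not something taken from the question; the idea is simply that each stat exploits the next one in the cycle:

        # Hypothetical stats: might exploits cunning, cunning exploits will,
        # will exploits might. Each hero type is strong in a different stat.
        HEROES = {
            "type1": {"might": 8, "cunning": 1, "will": 1},
            "type2": {"might": 1, "cunning": 8, "will": 1},
            "type3": {"might": 1, "cunning": 1, "will": 8},
        }

        def pressure(a, d):
            """How strongly attacker a's stats exploit defender d's stats."""
            return (a["might"] * d["cunning"]
                    + a["cunning"] * d["will"]
                    + a["will"] * d["might"])

        def win_chance(attacker, defender):
            a, d = HEROES[attacker], HEROES[defender]
            p, q = pressure(a, d), pressure(d, a)
            return p / (p + q)

        for x, y in [("type1", "type2"), ("type2", "type3"), ("type3", "type1")]:
            print(x, "vs", y, "->", round(win_chance(x, y), 2))   # ~0.8 each

    Mirror matches come out at exactly 0.5, and shrinking the gap between the stat values softens the advantage.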

    Read the article

  • What's New in Database Lifecycle Management in Enterprise Manager 12c Release 3

    - by HariSrinivasan
    Enterprise Manager 12c Release 3 includes improvements and enhancements across every area of the product. This blog provides an overview of the new and enhanced features in the Database Lifecycle Management area. I will deep dive into specific features more in depth in subsequent posts. "What's New?"  In this release, we focused on four things: 1. Lifecycle Management Support for new Database12c - Pluggable Databases 2. Management of long running processes, such as a security patch cycle (Change Activity Planner) 3. Management of large number of systems by · Leveraging new framework capabilities for lifecycle operations, such as the new advanced ‘emcli’ script option · Refining features such as configuration search and compliance 4. Minor improvements and quality fixes to existing features · Rollback support for Single instance databases · Improved "OFFLINE" Patching experience · Faster collection of ORACLE_HOME configurations Lifecycle Management Support for new Database 12c - Pluggable Databases Database 12c introduces Pluggable Databases (PDBs), the brand new addition to help you achieve your consolidation goals. Pluggable databases offer unprecedented consolidation at database level and native lifecycle verbs for creating, plugging and unplugging the databases on a container database (CDB). Enterprise Manager can supplement the capabilities of pluggable databases by offering workflows for migrating, provisioning and cloning them using the software library and the deployment procedures. For example, Enterprise Manager can migrate an existing database to a PDB or clone a PDB by storing a versioned copy in the software library. One can also manage the planned downtime related to patching by  migrating the PDBs to a new CDB. While pluggable databases offer these exciting features, it can also pose configuration management and compliance challenges if not managed properly. Enterprise Manager features like inventory management, topology associations and configuration search can mitigate the sprawl of PDBs and also lock them to predefined golden standards using configuration comparison and compliance rules. Learn More ... Management of Long Running datacenter processes - Change Activity Planner (CAP) Currently, customers resort to cumbersome methods to create, execute, track and monitor change activities within their data center. Some customers use traditional tools such as spreadsheets, project planners and in-house custom built solutions. Customers often have weekly sync up meetings across stake holders to collect status and updates. Some of the change activities, for example the quarterly patch set update (PSU) patch rollouts are not single tasks but processes with multiple tasks. Some of those tasks are performed within Enterprise Manager Cloud Control (for example Patch) and some are performed outside of Enterprise Manager Cloud Control. These tasks often run for a longer period of time and involve multiple people or teams. Enterprise Manger Cloud Control supports core data center operations such as configuration management, compliance management, and automation. Enterprise Manager Cloud Control release 12.1.0.3 leverages these capabilities and introduces the Change Activity Planner (CAP). CAP provides the ability to plan, execute, and track change activities in real time. It covers the typical datacenter activities that are spread over a long period of time, across multiple people and multiple targets (even target types). 
Here are some examples of Change Activity Process in a datacenter: · Patching large environments (PSU/CPU Patching cycles) · Upgrading large number of database environments · Rolling out Compliance Rules · Database Consolidation to Exadata environments CAP provides user flows for Compliance Officers/Managers (incl. lead administrators) and Operators (DBAs and admins). Managers can create change activity plans for various projects, allocate resources, targets, and groups affected. Upon activation of the plan, tasks are created and automatically assigned to individual administrators based on target ownership. Administrators (DBAs) can identify their tasks and understand the context, schedules, and priorities. They can complete tasks using Enterprise Manager Cloud Control automation features such as patch plans (or in some cases outside Enterprise Manager). Upon completion, compliance is evaluated for validations and updates the status of the tasks and the plans. Learn More about CAP ...  Improved Configuration & Compliance Management of a large number of systems Improved Configuration Comparison:  Get to the configuration comparison results faster for simple ad-hoc comparisons. When performing a 1 to 1 comparison, Enterprise Manager will perform the comparison immediately and take the user directly to the results without having to wait for a job to be submitted and executed. Flattened system comparisons reduce comparison setup time and reduce complexity. In addition to the previously existing topological comparison, users now have an option to compare using a “flattened” methodology. Flattening means to remove duplicate target instances within the systems and remove the hierarchy of member targets. The result are much easier to spot differences particularly for specific use cases like comparing patch levels between complex systems like RAC and Fusion Apps. Improved Configuration Search & Advanced EMCLI Script option for Mass Automation Enterprise manager 12c introduces a new framework level capability to be able to script and stitch together multiple tasks using EMCLI. This powerful capability can be leveraged for lifecycle operations, especially when executing a task over a large number of targets. Specific usages of this include, retrieving a qualified list of targets using Configuration Search and then using the resultset for automation. Another example would be executing a patching operation and then re-executing on targets where it may have failed. This is complemented by other enhancements, such as a better usability for designing reusable configuration searches. IN EM 12c Rel 3, a simplified UI makes building adhoc searches even easier. Searching for missing patches is a common use of configuration search. This required the use of the advanced options which are now clearly defined and easy to use. Perform “Configuration Search” using the EMCLI. Users can find and execute Configuration Searches from the EMCLI which can be extremely useful for building sophisticated automation scripts. For an example, Run the Search named “Oracle Databases on Exadata” which finds all Database targets running on top of Exadata. Further filter the results by refining by options like name, host, etc.. emcli get_targets -config_search="Databases on Exadata" –target_name="exa%“ Use this in powerful mass automation operations using the new emcli script option. For example, to solve the use case of – Finding all DBs running on Exadata and housing E-Biz and Patch them. 
Create a Python script with emcli functions and invoke it in the new EMCLI script option shell. Invoke the script in the new EMCLI with script option directly: $<path to emcli>/emcli @myPSU_Patch.py Richer compliance content:  Now over 50 Oracle Provided Compliance Standards including new standards for Pluggable Database, Fusion Applications, Oracle Identity Manager, Oracle VM and Internet Directory. 9 Oracle provided Real Time Monitoring Standards containing over 900 Compliance Rules across 500 Facets. These new Real time Compliance Standards covers both Exadata Compute nodes and Linux servers. The result is increased Oracle software coverage and faster time to compliance monitoring on Exadata. Enhancements to Patch Management: Overhauled "OFFLINE" Patching experience: Simplified Patch uploads UI to improve the offline experience of patching. There is now a single step process to get the patches into software library. Customers often maintain local repositories of patches, sometimes called software depots, where they host the patches downloaded from My Oracle Support. In the past, you had to move these patches to your desktop then upload them to the Enterprise Manager's Software library through the Enterprise Manager Cloud Control user interface. You can now use the following EMCLI command to upload multiple patches directly from a remote location within the data center: $emcli upload_patches -location <Path to Patch directory> -from_host <HOSTNAME> The upload process filters all of the new patches, automatically selects the relevant metadata files from the location, and uploads the patches to software library. Other Improvements:  Patch rollback for single instance databases, new option in the Patch Plan to rollback the patches added to the patch plans. Upon execution, the procedure would rollback the patch and the SQL applied to the single instance Databases. Improved and faster configuration collection of Oracle Home targets can enable more reliable automation at higher level functions like Provisioning, Patching or Database as a Service. Just to recap, here is a list of database lifecycle management features:  * Red highlights mark – New or Enhanced in the Release 3. • Discovery, inventory tracking and reporting • Database provisioning including o Migration to Pluggable databases o Plugging and unplugging of pluggable databases o Gold image based cloning o Scaling of RAC nodes •Schema and data change management •End-to-end patch management in online and offline modes, including o Patch advisories in online (connected with My Oracle Support) and offline mode o Patch pre-deployment analysis, deployment and rollback (currently only for single instance databases) o Reporting • Upgrade planning and execution of the upgrade process • Configuration management including • Compliance management with out-of-box content • Change Activity Planner for planning, designing and tracking long running processes For more information on Enterprise Manager’s database lifecycle management capabilities, visit http://www.oracle.com/technetwork/oem/lifecycle-mgmt/index.html

    Read the article

  • PCF shadow shader math causing artifacts

    - by user2971069
    For a while now I have used PCSS as my shadow technique of choice, until I discovered a form of percentage-closer filtering. This method creates really smooth shadows, and with hopes of improving performance with only a fraction of the texture samples, I tried to implement PCF in my shader. This is the relevant code:

        float c0, c1, c2, c3;
        float f = blurFactor;
        float2 coord = ProjectedTexCoords;
        if (receiverDistance - tex2D(lightSampler, coord + float2(0, 0)).x > 0.0007) c0 = 1;
        if (receiverDistance - tex2D(lightSampler, coord + float2(f, 0)).x > 0.0007) c1 = 1;
        if (receiverDistance - tex2D(lightSampler, coord + float2(0, f)).x > 0.0007) c2 = 1;
        if (receiverDistance - tex2D(lightSampler, coord + float2(f, f)).x > 0.0007) c3 = 1;
        coord = (coord % f) / f;
        return 1 - (c0 * (1 - coord.x) * (1 - coord.y) +
                    c1 * coord.x * (1 - coord.y) +
                    c2 * (1 - coord.x) * coord.y +
                    c3 * coord.x * coord.y);

    This is a very basic implementation. blurFactor is initialized to 1 / LightTextureSize, so the if statements fetch the occlusion values for the four adjacent texels. I now want to weight each value based on the actual position of the texture coordinate: if it is near the bottom-right texel, that occlusion value should be preferred. The weighting itself is done with a simple bilinear interpolation function; however, this function takes a 2D vector in the range [0..1], so I have to convert my texture coordinate to get the distance from the first texel to the second into the range [0..1]. For that I used the modulo operator to get it into the [0..f] range and then divided by f. This code makes sense to me, and for specific blurFactors it works, producing really smooth one-pixel-wide shadows, but not for all blurFactors. Initially blurFactor is (1 / LightTextureSize) to sample the 4 adjacent texels. I now want to increase the blurFactor by a factor x to get a smooth interpolation across maybe 4 or so pixels, but that is when weird artifacts show up. Here is an image: using 1x on blurFactor produces a good result, 0.5x is, as expected, not as smooth, but 2x doesn't work at all. I found that only a factor of 1/2^n produces a good result; every other factor produces artifacts. I'm pretty sure the error lies here: coord = (coord % f) / f; Maybe the modulo is not calculated correctly? I have no idea how to fix that. Is it even possible for texels that are further than 1 pixel away?
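    For what it's worth, here is a tiny Python sketch of the bilinear weighting step on its own, written against the same c0..c3 layout as above, so the blend math can be tested with plain numbers outside the shader. The texel_size value and function name are assumptions, and it is plain Python, not HLSL:

        def blend_shadow(c0, c1, c2, c3, coord, texel_size):
            """Blend four occlusion samples (c0=(0,0), c1=(+1,0), c2=(0,+1),
            c3=(+1,+1)) by the fractional position of coord inside its texel."""
            fx = (coord[0] / texel_size) % 1.0   # stays in [0..1) for any texel size
            fy = (coord[1] / texel_size) % 1.0
            occlusion = (c0 * (1 - fx) * (1 - fy)
                         + c1 * fx * (1 - fy)
                         + c2 * (1 - fx) * fy
                         + c3 * fx * fy)
            return 1.0 - occlusion

    Printing fx and fy for a few hand-picked coordinates and texel sizes is a quick way to check whether the shader's weights behave the same way for the factors that show artifacts.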

    Read the article

  • Arcball 3D camera - how to convert from camera to object coordinates

    - by user38873
    I have checked multiple threads before posting, but I haven't been able to figure this one out. I have been following this tutorial, but I'm not using glm; I've been implementing everything myself up until now, like lookAt etc.: http://en.wikibooks.org/wiki/OpenGL_Programming/Modern_OpenGL_Tutorial_Arcball So I can rotate with a click and drag of the mouse, but when I rotate 90° around Y and then move the mouse upwards or downwards, it rotates on the wrong axis. This problem is described in this part of the tutorial: "An extra trick is converting the rotation axis from camera coordinates to object coordinates. It's useful when the camera and object are placed differently. For instance, if you rotate the object by 90° on the Y axis ("turn its head" to the right), then perform a vertical move with your mouse, you make a rotation on the camera X axis, but it should become a rotation on the Z axis (plane barrel roll) for the object. By converting the axis into object coordinates, the rotation will respect that the user works in camera coordinates (WYSIWYG). To transform from camera to object coordinates, we take the inverse of the MV matrix (from the MVP matrix triplet)." What I have to do, according to the tutorial, is convert my axis_in_camera_coordinates to object coordinates, and then the rotation is done correctly, but I'm confused about which matrix I use to do just that. The tutorial talks about converting the axis from camera to object coordinates using the inverse of the MV matrix. Then it shows these lines of code, which I haven't been able to understand:

        glm::mat3 camera2object = glm::inverse(glm::mat3(transforms[MODE_CAMERA]) * glm::mat3(mesh.object2world));
        glm::vec3 axis_in_object_coord = camera2object * axis_in_camera_coord;

    So what do I apply to my calculated axis? The inverse of what? I suppose the inverse of the model-view? So my question is: how do you transform a camera-space axis into an object-space axis? Do I apply the inverse of the lookAt matrix? My code:

        if (cur_mx != last_mx || cur_my != last_my) {
            va = get_arcball_vector(last_mx, last_my);
            vb = get_arcball_vector( cur_mx,  cur_my);
            angle = acos(min(1.0f, dotProduct(va, vb))) * 20;
            axis_in_camera_coord = crossProduct(va, vb);
            axis.x = axis_in_camera_coord[0];
            axis.y = axis_in_camera_coord[1];
            axis.z = axis_in_camera_coord[2];
            axis.w = 1.0f;
            last_mx = cur_mx;
            last_my = cur_my;
        }
        Quaternion q = qFromAngleAxis(angle, axis);
        Matrix m;
        qGLMatrix(q, m);
        vi = mMultiply(m, vi);
        up = mMultiply(m, up);
        ViewMatrix = ogLookAt(vi.x, vi.y, vi.z, 0, 0, 0, up.x, up.y, up.z);
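    As a purely illustrative companion, here is a small NumPy sketch of that camera-to-object step, mirroring the glm lines above. The function name, the column-vector layout and the view-times-model multiplication order are assumptions; depending on your conventions the order may need to be swapped:

        import numpy as np

        def camera_axis_to_object_axis(view, model, axis_cam):
            """Re-express an arcball rotation axis, given in camera space,
            in the object's local space (the glm::inverse(mat3(view) * mat3(model)) idea)."""
            mv3 = view[:3, :3] @ model[:3, :3]        # object -> camera rotation part
            cam_to_obj = np.linalg.inv(mv3)           # camera -> object
            axis_obj = cam_to_obj @ np.asarray(axis_cam, dtype=float)
            n = np.linalg.norm(axis_obj)
            return axis_obj / n if n else axis_obj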

    Read the article

  • How do I implement a score database in Android?

    - by Michael Seun Araromi
    I am making a 2D game for Android using OpenGL ES. It is a space shooting game where the player shoots enemy ships. I want to keep track of the score for the number of enemy ships destroyed and a record of a local high score. The score should be incremented whenever an enemy is destroyed. I also want a way of displaying both the current score and the high score on the game screen. I am not familiar with databases at all, and I would appreciate a clear answer or a link to a good tutorial for my case. Thanks.
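    Since the question is really about the storage, here is a small sketch of the kind of table and queries a local score database typically needs. It uses Python's built-in sqlite3 purely for illustration (the SQL is what matters; Android ships SQLite too, but the helper-class code there looks different), and all names are hypothetical:

        import sqlite3

        conn = sqlite3.connect("scores.db")
        conn.execute("CREATE TABLE IF NOT EXISTS scores ("
                     "id INTEGER PRIMARY KEY AUTOINCREMENT, "
                     "value INTEGER NOT NULL, "
                     "played_at TEXT DEFAULT CURRENT_TIMESTAMP)")

        def save_score(value):
            conn.execute("INSERT INTO scores (value) VALUES (?)", (value,))
            conn.commit()

        def high_score():
            row = conn.execute("SELECT MAX(value) FROM scores").fetchone()
            return row[0] or 0

        save_score(1200)
        print(high_score())

    For a single local high score, a simple key-value store (SharedPreferences on Android) is usually enough; a database mainly pays off once you want a top-10 list or per-level records.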

    Read the article

  • Euclidean space and vector magnitude

    - by Starkers
    Below we have distances from the origin calculated in different ways, giving the Euclidean distance, the Manhattan distance and the Chebyshev distance. Euclidean distance is what we use to calculate the magnitude of vectors in 2D/3D games, and that makes sense to me. Let's say we have a vector that gives us the range a spaceship with limited fuel can travel. If we calculated this with the Manhattan metric, our ship could travel a distance of X if it were travelling horizontally or vertically, but the second it attempted to travel diagonally it could only cover about X/1.41! So, like I say, Euclidean distance does make sense. However, I still don't quite get how we calculate 'real' distances from the vector's magnitude. Here are two points, purple at (2,2) and green at (3,3). We can subtract one point from the other to derive a vector. Let's create a vector to describe the magnitude and direction of purple from green:

        d = purple - green
        d = (purple.x, purple.y) - (green.x, green.y)
        d = (2, 2) - (3, 3)
        d = <-1, -1>

    Let's derive the magnitude of the vector via Pythagoras to get a Euclidean measurement:

        euc_magnitude = sqrt((x*x)+(y*y))
        euc_magnitude = sqrt((-1*-1)+(-1*-1))
        euc_magnitude = sqrt((1)+(1))
        euc_magnitude = sqrt(2)
        euc_magnitude = 1.41

    Now, if the answer had been 1, that would make sense to me, because 1 unit (in the direction described by the vector) from the green is bang on the purple. But it's not; it's 1.41. Travelling 1.41 units in the direction described, to me at least, makes us overshoot the purple by almost half a unit. So what do we do to the magnitude to allow us to calculate real distances on our point graph? Worth noting I'm a beginner just working my way through theory. I haven't programmed a game in my life!
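    Here is a short Python check of the same numbers, mainly to show that the 1.41 is measured along the unit direction (length 1), not along <-1,-1> itself, so there is no overshoot; nothing here is specific to any engine:

        import math

        purple = (2.0, 2.0)
        green = (3.0, 3.0)

        # Displacement from green to purple and its Euclidean magnitude.
        d = (purple[0] - green[0], purple[1] - green[1])      # (-1, -1)
        magnitude = math.hypot(d[0], d[1])                    # sqrt(2) ~ 1.41

        # The unit direction has length 1; <-1,-1> itself has length 1.41.
        direction = (d[0] / magnitude, d[1] / magnitude)      # ~(-0.707, -0.707)

        # Walking `magnitude` units along the unit direction from green
        # lands exactly on purple.
        arrival = (green[0] + magnitude * direction[0],
                   green[1] + magnitude * direction[1])
        print(arrival)                                        # (2.0, 2.0)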

    Read the article

  • Implementing Light Volume Front Faces

    - by cubrman
    I recently read an article about light-indexed deferred rendering, from here: http://code.google.com/p/lightindexed-deferredrender/ It explains its ideas in a clear way, but there was one point that I failed to understand, and it is in fact one of the most interesting ones, as it explains how to implement transparency with this approach: Typically when rendering light volumes in deferred rendering, only surfaces that intersect the light volume are marked and lit. This is generally accomplished by a "shadow volume like" technique of rendering back faces – incrementing stencil where depth is greater than – then rendering front faces and only accepting when depth is less than and stencil is not zero. By only rendering front faces where depth is less than, all future lookups by fragments in the forward rendering pass will get all possible lights that could hit the fragment. Can anyone explain how exactly you need to render only the front faces? Another question is: why do you need the front faces at all? Why can't we simply render all the lights and store the ones that overlap at each pixel in a texture? Does this approach serve as a cut-off plane to discard lights blocked by opaque geometry?

    Read the article

  • How to fetch only the sprites in the player's range of motion for collision testing? (2D, axis aligned sprites)

    - by Twodordan
    I am working on a 2D sprite game for educational purposes. (In case you want to know, it uses WebGL and JavaScript.) I've implemented movement using the Euler method (and delta time) to keep things simple. Now I'm trying to tackle collisions. The way I wrote things, my game only has rectangular sprites (axis-aligned, never rotated) of various/variable sizes. So I need to figure out what I hit and which side of the target sprite I hit (and I'm probably going to use these intersection tests). The old-fashioned method seems to be to use tile-based grids, to target only a few tiles at a time, but that sounds silly and impractical for my game: I could accept splitting the whole level into blocks and having each sprite's bounding box span multiple blocks, but if the sprites change size and move around, you have to keep changing which tiles they belong to, every frame, and that doesn't sound right. In Flash you can test collision under one point, but it's not efficient to iterate through all the elements on stage each frame (hence why people use the tile method). Bottom line is, I'm trying to figure out how to test only the elements within the player's range of motion. (I know how to get the range of motion, and I have a good idea of how to write a collisionCheck(playerSprite, targetSprite) function. But how do I know which sprites are currently in the player's vicinity, so I can fetch only them?) Please discuss. Cheers!
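    One common answer to the "which sprites are near the player" part is a spatial hash rebuilt each frame, which tolerates moving and resizing sprites because nothing is stored permanently. Below is a minimal Python sketch; the cell size, and the assumption that sprites expose x, y, w, h, are mine:

        from collections import defaultdict

        CELL = 128   # hypothetical cell size in pixels, roughly a large sprite

        def cells_for_box(x, y, w, h):
            """Every grid cell an axis-aligned box overlaps."""
            for cx in range(int(x) // CELL, int(x + w) // CELL + 1):
                for cy in range(int(y) // CELL, int(y + h) // CELL + 1):
                    yield (cx, cy)

        def build_grid(sprites):
            """Rebuilt every frame; it is only dictionary inserts."""
            grid = defaultdict(list)
            for s in sprites:
                for cell in cells_for_box(s.x, s.y, s.w, s.h):
                    grid[cell].append(s)
            return grid

        def candidates(grid, x, y, w, h):
            """Sprites sharing a cell with the player's swept box this frame."""
            found = set()
            for cell in cells_for_box(x, y, w, h):
                found.update(grid.get(cell, ()))
            return found

    The precise intersection tests from the question then only run against the handful of candidates instead of every sprite on stage.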

    Read the article

  • Client side latency when using prediction

    - by Tips48
    I've implemented client-side prediction in my game: when input is received by the client, it first sends it to the server and then acts upon it just as the server will, to reduce the appearance of lag. The problem is that the server is authoritative, so when the server sends back the position of the entity to the client, it undoes the effect of the prediction and creates a rubber-banding effect. For example: client sends input to server - client reacts to input - server receives and reacts to input - server sends back response - client reaction is undone due to latency between server and client. To solve this, I've decided to store the game state and input every tick on the client, and then when I receive a packet from the server, get the game state from when the packet was sent and simulate the game up to the current point. My questions: Won't this cause lag? If I'm receiving 20-30 EntityPositionPackets a second, that means I have to run 20-30 re-simulations of the game state. How do I sync the client and server tick? Currently I'm sending the millisecond the packet was sent by the server, but I think it's adding too much complexity instead of just sending the tick. The problem with converting it to sending the tick is that I have no guarantee that the client and server are ticking at the same rate, for example if the client is an older, low-end PC.
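    For reference, here is a compact Python sketch of the "keep unacknowledged inputs, snap to the server state, replay" idea, which avoids re-simulating the whole world; the server is assumed (hypothetically) to echo back the tick of the last input it processed:

        import collections

        class PredictedEntity:
            def __init__(self):
                self.position = 0.0
                self.tick = 0
                self.pending = collections.deque()   # (tick, input) not yet confirmed

            def apply_input(self, inp, dt):
                # Must be the exact same integration the server runs.
                self.position += inp * dt

            def local_tick(self, inp, dt):
                self.tick += 1
                self.pending.append((self.tick, inp))
                self.apply_input(inp, dt)            # predict immediately

            def on_server_state(self, server_tick, server_position, dt):
                # Snap to the authoritative state...
                self.position = server_position
                # ...drop inputs the server has already consumed...
                while self.pending and self.pending[0][0] <= server_tick:
                    self.pending.popleft()
                # ...and replay the unconfirmed ones on top of it.
                for _tick, inp in self.pending:
                    self.apply_input(inp, dt)

    Because only a handful of unconfirmed inputs are replayed per server packet, the cost stays small even at 20-30 packets a second.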

    Read the article

  • Algorithm for waypoint path following?

    - by Thierry Savard Saucier
    I have a world map with different cities on it. The player can choose a city from a menu, or click on an available city on the world map, and the toon should walk over there. I want him to follow a predefined path. Let's say our hero is in city 1. He clicks on city 4. I want him to follow the path to city 2 and from there to city 4. I was handling this easily with arrow movement (left, right, up, down) since it's a single check. Now I'm not sure how I should do this. Should I loop through each possible path and check which one leads me to the destination the fastest? And if I do, how do I avoid running in circles forever with cities 1-5-2?
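    A sketch of the standard way to do this: treat the cities as a graph and run a breadth-first search; the visited set is exactly what prevents the 1-5-2 style loops. The adjacency list below is a made-up example, not the actual map:

        from collections import deque

        WORLD = {                      # hypothetical city connections
            1: [2, 5],
            2: [1, 3, 4, 5],
            3: [2],
            4: [2],
            5: [1, 2],
        }

        def shortest_path(start, goal):
            """Fewest-hops path between two cities (BFS)."""
            queue = deque([[start]])
            visited = {start}
            while queue:
                path = queue.popleft()
                city = path[-1]
                if city == goal:
                    return path
                for neighbour in WORLD[city]:
                    if neighbour not in visited:
                        visited.add(neighbour)
                        queue.append(path + [neighbour])
            return None                # unreachable

        print(shortest_path(1, 4))     # [1, 2, 4]

    The toon then just walks the returned waypoint list one city at a time; if roads have different lengths, Dijkstra or A* is the same idea with a priority queue.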

    Read the article

  • How does interpolation actually work to smooth out an object's movement?

    - by user22241
    I've asked a few similar questions over the past 8 months or so with no real joy, so I am going to make the question more general. I have an Android game which uses OpenGL ES 2.0. Within it I have the following game loop, which works on a fixed-time-step principle (dt = 1 / ticksPerSecond):

        loops = 0;
        while (System.currentTimeMillis() > nextGameTick && loops < maxFrameskip) {
            updateLogic(dt);
            nextGameTick += skipTicks;
            timeCorrection += (1000d / ticksPerSecond) % 1;
            nextGameTick += timeCorrection;
            timeCorrection %= 1;
            loops++;
        }
        render();

    My integration works like this:

        sprite.posX += sprite.xVel * dt;
        sprite.posXDrawAt = sprite.posX * width;

    Now, everything works pretty much as I would like. I can specify that I would like an object to move across a certain distance (screen width, say) in 2.5 seconds and it will do just that. Also, because of the frame skipping that I allow in my game loop, I can do this on pretty much any device and it will always take 2.5 seconds. Problem: when a render frame skips, the graphics stutter. It's extremely annoying. If I remove the ability to skip frames, then everything is as smooth as you like, but it will run at different speeds on different devices, so that's not an option. I'm still not sure why the frame skips, but I would like to point out that this is nothing to do with poor performance; I've taken the code right back to one tiny sprite and no logic (apart from the logic required to move the sprite) and I still get skipped frames. And this is on a Google Nexus 10 tablet (and, as mentioned above, I need frame skipping to keep the speed consistent across devices anyway). So the only other option I have is to use interpolation (or extrapolation). I've read every article out there, but none have really helped me understand how it works, and all of my attempted implementations have failed. Using one method I was able to get things moving smoothly, but it was unworkable because it messed up my collision. I can foresee the same issue with any similar method, because the interpolation is passed to (and acted upon within) the rendering method - at render time. So if collision corrects the position (character now standing right next to a wall), then the renderer can alter its position and draw it in the wall. So I'm really confused. People have said that you should never alter an object's position from within the rendering method, but all of the examples online show this. So I'm asking for a push in the right direction; please do not link to the popular game loop articles (deWitters, Fix Your Timestep, etc.) as I've read them multiple times. I'm not asking anyone to write my code for me. Just explain, please, in simple terms, how interpolation actually works, with some examples. I will then go and try to integrate any ideas into my code and will ask more specific questions if need be further down the line. (I'm sure this is a problem many people struggle with.)
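    To make the idea concrete, here is a stripped-down Python sketch of the usual approach: keep the last two simulated states and have the renderer blend between them with alpha = accumulator / dt. It never writes back to the simulation state (so collision is untouched); the cost is that what you draw is up to one logic step in the past. The numbers and structure are illustrative only:

        import time

        DT = 1.0 / 60.0                            # fixed logic step

        class State:
            def __init__(self, x=0.0, vx=120.0):   # hypothetical pixels/second
                self.x, self.vx = x, vx

        def integrate(state, dt):
            return State(state.x + state.vx * dt, state.vx)

        def render(previous, current, alpha):
            # Read-only blend of two known-good states.
            draw_x = previous.x + (current.x - previous.x) * alpha
            print("draw at %.2f" % draw_x)

        previous = current = State()
        accumulator, last = 0.0, time.perf_counter()

        for _ in range(10):                        # pretend frames
            time.sleep(0.025)                      # simulate ~40 fps rendering
            now = time.perf_counter()
            accumulator += now - last
            last = now
            while accumulator >= DT:
                previous, current = current, integrate(current, DT)
                accumulator -= DT
            render(previous, current, accumulator / DT)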

    Read the article

  • Loading Wavefront Data into VAO and Render It

    - by Jordan LaPrise
    I have successfully loaded a triangulated Wavefront (.obj) model into six vectors: the first three vectors contain the locations for vertices, UV coords, and normals; the last three have the indices stored for each of the faces. I have been looking into using VAOs and VBOs to render, and I'm not quite sure how to load and render the data. One of my biggest concerns is the fact that indexed rendering only allows you to have one array of indices, meaning I somehow have to make the first three vectors the same size. The only way I have thought of doing this is to make three new arrays of equal size and load in the data for each face, but that would completely defeat the purpose of indexing. Any help would be appreciated. Thanks in advance, Jordan
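    The usual trick is to build one combined index per unique (position, uv, normal) triplet, so duplication only happens where a vertex genuinely differs. A small Python sketch of that dedup step, assuming faces arrive as triangles of 0-based (vi, ti, ni) triplets:

        def unify_indices(positions, uvs, normals, faces):
            """Collapse per-attribute OBJ indices into a single index list,
            ready for glDrawElements-style rendering."""
            seen = {}                       # (vi, ti, ni) -> unified index
            out_pos, out_uv, out_norm, indices = [], [], [], []
            for tri in faces:
                for key in tri:             # key = (vi, ti, ni)
                    if key not in seen:
                        seen[key] = len(out_pos)
                        vi, ti, ni = key
                        out_pos.append(positions[vi])
                        out_uv.append(uvs[ti])
                        out_norm.append(normals[ni])
                    indices.append(seen[key])
            return out_pos, out_uv, out_norm, indices

    Vertices shared by faces with identical attributes stay shared, so the index buffer still earns its keep; only corners where the UV or normal actually changes get duplicated.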

    Read the article

  • How to port animation from one skeleton to another?

    - by shawn
    While I need to do this in a Blender3D modeler script, the math should be similar for other modelers or realtime engines. Blender3D specific terminology: Armature = skeleton EditBone = rest pose bone (stores the rest pose matrix) PoseBone = can store a different pose (animation matrix) for each frame of your animation I need to share animations (Blender Actions) between Armatures which have EditBones with same names and which have the same positions, but can have different (rest pose) angles and scales. Plus the Armatures might have different bone hierarchy (bone parenting/ no bone parenting). Why I need this: I've made an importer/exporter for a 3d format for a game. The format doesn't store enough info to connect/parent the bones, which makes posing/animating character models in a 3d modeller nearly impossible (original model files for the 3d modeler don't exist, this is for modding). As there are only 2 character skeleton types in the game, I decided to optionally allow to generate the bone from a hardcoded data in the model importer and undo that in the exporter. This allows to easily pose the model for checking weights, easily create weights, makes it easier for Blender to generate automatic weights and of course makes animating possible. This worked perfectly: the importer optionally generated the Armature itself and the exporter removed those changes, so the exported model works with existing animations in the game. But now I'm writing an importer and exporter for the game's animation format and here come the problems of: Trying to make original animations work in Blender with my "custom" (modified) Armature Trying to make animations created by using the "custom" (modified) Armature work with the original models in the game (and Blender). Constraints or bone snapping inside Blender won't work as they don't care that the bones have different angles in the rest pose, they will still face the same direction. It seems I just need to get the "difference" between the EditBone matrices of all EditBones for the two Armatures somehow and apply that difference to PoseBone matrices of all PoseBones, for all frames of my animation. I need to know how to get that difference and how to apply it. BTW, PoseBone matrices are relative to rest pose, they are by default [1.000000, 0.000000, 0.000000, 0.000000](matrix [row 0]) [0.000000, 1.000000, 0.000000, 0.000000](matrix [row 1]) [0.000000, 0.000000, 1.000000, 0.000000](matrix [row 2]) [0.000000, 0.000000, 0.000000, 1.000000](matrix [row 3]) So the question is: How to get the difference between two bone (EditBone) matrices to apply that difference to the animation matrices (PoseBone matrices)? Please be easy on the matrix math.
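    Not an authoritative answer, but here is the change-of-basis idea in a small NumPy sketch, since the question is really about that matrix step. It assumes column-vector math and that both the rest (EditBone) and pose (PoseBone) matrices are expressed in the same space (armature space); Blender's real bone spaces may also need the parent matrices folded in:

        import numpy as np

        def port_pose(rest_a, rest_b, pose_a):
            """Re-express a pose matrix that is relative to rest pose A
            so that it is relative to rest pose B instead.

            rest_a, rest_b: 4x4 rest matrices of the same-named bone
            pose_a:         4x4 pose matrix, relative to rest_a
            """
            delta = np.linalg.inv(rest_b) @ rest_a   # maps A's rest frame into B's
            return delta @ pose_a @ np.linalg.inv(delta)

    When the two rest poses are identical, delta is the identity and the pose passes through unchanged, which is a quick sanity check for the approach.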

    Read the article

  • Extra fire simulation on iPad device

    - by Nezam
    I have an iOS app for iPad which creates a few fire simulations over a PNG. It works exactly how we wanted it to in the simulator, but when we test it on a device we get an extra fire simulation. Here are the screens: iPad Simulator: this is how it should display (iPad simulation). iPad device: this is how it is actually displaying (iPad device). I'm ready to share whichever portion of my code would help get to a solution once someone takes a look. Thanks in advance.

    Read the article

  • If I use my own normal values, should I turn off winding order culling?

    - by Phil
    I've discovered that I managed to program a series of boxes with indexed vertices in such a way that every other triangle (Half of each face) has a backwards winding order. As a result, XNA is culling half of them. However, my Vertex objects contain normal data that I have explicitly set, and I am going to implement my own backface culling shortly to reduce the size of the VertexBuffer. Should I turn off winding order culling and manage it myself, or should I make sure the winding order is consistent and let XNA handle it?
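    As an aside, one way to deal with the mixed winding offline is to check each indexed triangle against its authored normal and flip the inconsistent ones, rather than disabling culling at runtime. A small illustrative Python check (plain tuples, no XNA types; the convention that agreement means a positive dot product is an assumption that just has to match your own):

        def cross(a, b):
            return (a[1]*b[2] - a[2]*b[1],
                    a[2]*b[0] - a[0]*b[2],
                    a[0]*b[1] - a[1]*b[0])

        def sub(a, b):
            return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

        def dot(a, b):
            return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

        def winding_agrees_with_normal(v0, v1, v2, normal):
            """True if the triangle's geometric face normal points the same
            way as the authored normal; if False, swapping two indices
            (e.g. v1 and v2) makes the winding consistent."""
            face_normal = cross(sub(v1, v0), sub(v2, v0))
            return dot(face_normal, normal) > 0.0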

    Read the article

  • How do I use setFilmSize in panda3d to achieve the correct view?

    - by lhk
    I'm working with Panda3d and recently switched my game to isometric rendering. I moved the virtual camera accordingly and set an orthographic lens. Then I implemented the classes "Map" and "Canvas". A canvas is a dynamically generated mesh: a flat quad. I'm using it to render the ingame graphics. Since the game itself is still set in a 3d coordinate system I'm planning to rely on these canvases to draw sprites. I could have named this class "Tile" but as I'd like to use it for non-tile sketches (enemies, environment) as well I thought canvas would describe it's function better. Map does exactly what it's name suggests. Its constructor receives the number of rows and columns and then creates a standard isometric map. It uses the canvas class for tiles. I'm planning to write a map importer that reads a file to create maps on the fly. Here's the canvas implementation: class Canvas: def __init__(self, texture, vertical=False, width=1,height=1): # create the mesh format=GeomVertexFormat.getV3t2() format = GeomVertexFormat.registerFormat(format) vdata=GeomVertexData("node-vertices", format, Geom.UHStatic) vertex = GeomVertexWriter(vdata, 'vertex') texcoord = GeomVertexWriter(vdata, 'texcoord') # add the vertices for a flat quad vertex.addData3f(1, 0, 0) texcoord.addData2f(1, 0) vertex.addData3f(1, 1, 0) texcoord.addData2f(1, 1) vertex.addData3f(0, 1, 0) texcoord.addData2f(0, 1) vertex.addData3f(0, 0, 0) texcoord.addData2f(0, 0) prim = GeomTriangles(Geom.UHStatic) prim.addVertices(0, 1, 2) prim.addVertices(2, 3, 0) self.geom = Geom(vdata) self.geom.addPrimitive(prim) self.node = GeomNode('node') self.node.addGeom(self.geom) # this is the handle for the canvas self.nodePath=NodePath(self.node) self.nodePath.setSx(width) self.nodePath.setSy(height) if vertical: self.nodePath.setP(90) # the most important part: "Drawing" the image self.texture=loader.loadTexture(""+texture+".png") self.nodePath.setTexture(self.texture) Now the code for the Map class class Map: def __init__(self,rows,columns,size): self.grid=[] for i in range(rows): self.grid.append([]) for j in range(columns): # create a canvas for the tile. For testing the texture is preset tile=Canvas(texture="../assets/textures/flat_concrete",width=size,height=size) x=(i-1)*size y=(j-1)*size # set the tile up for rendering tile.nodePath.reparentTo(render) tile.nodePath.setX(x) tile.nodePath.setY(y) # and store it for later access self.grid[i].append(tile) And finally the usage def loadMap(self): self.map=Map(10, 10, 1) this function is called within the constructor of the World class. The instantiation of world is the entry point to the execution. The code is pretty straightforward and runs good. Sadly the output is not as expected: Please note: The problem is not the white rectangle, it's my player object. The problem is that although the map should have equal width and height it's stretched weirdly. With orthographic rendering I expected the map to be a perfect square. What did I do wrong ? UPDATE: I've changed the viewport. This is how I set up the orthographic camera: lens = OrthographicLens() lens.setFilmSize(40, 20) base.cam.node().setLens(lens) You can change the "aspect" by modifying the parameters of setFilmSize. I don't know exactly how they are related to window size and screen resolution but after testing a little the values above seem to work for me. Now everything is rendered correctly as long as I don't resize the window. Every change of the window's size as well as switching to fullscreen destroys the correct rendering. 
    I know that implementing a listener for resize events is not in the scope of this question. However, I wonder why I need to make the film's width two times bigger than its height when my window is square! Can you tell me how to find the correct setting for the film size? UPDATE 2: I can imagine that it's hard to envision the behaviour of the game. At first glance the obvious solution is to pass the window's width and height in pixels to setFilmSize, but there are two problems with that approach: the parameters for setFilmSize are in-game units, so you get a way too big view if you pass the pixel size, and for some strange reason the image is distorted if you pass equal values for width and height. Here's the output for setFilmSize(800, 800); you'll have to strain your eyes, but you'll see what I mean.
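    For what it's worth, one way to avoid hand-tuning those two numbers is to derive the film size from the actual window size so that world units stay square. A short Panda3D-flavoured sketch, assuming the standard ShowBase globals (base, base.win, base.cam) and a made-up value for how many world units should span the window horizontally:

        from panda3d.core import OrthographicLens

        def fit_ortho_lens(view_width_units=40.0):
            """Match the lens aspect ratio to the window's pixel aspect ratio."""
            aspect = base.win.getXSize() / float(base.win.getYSize())
            lens = OrthographicLens()
            lens.setFilmSize(view_width_units, view_width_units / aspect)
            base.cam.node().setLens(lens)
            return lens

    Calling this again from a window-event handler would also cover resizes and fullscreen switches, though that part is outside the scope of the question.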

    Read the article

  • Platformer Enemy AI

    - by hayer
    I'm currently developing a platformer shooter. The game is multiplayer, and while my net code could use some real work, I have put that off for now, so currently I'm trying to implement the AI. The game is pretty simple: players run around on a map filled with an X amount of zombies that try to eat their brains - classic and overused, I know. Weapons spawn at random intervals around the map. The problem is that when the zombies find their prey they have to follow it for a while, and here is the problem: running the AI nav code seems to take forever. So here are the ideas I have come up with so far: Have the AI update at different intervals, with a maximum of Y ms with no updates. Have the zombies assigned to groups of zombies; one is appointed the leader of the group, who finds the way to the player, and the rest just follow the leader. If the leader dies, another one of the zombies in the group is appointed president of the zombie swarm. If there are fewer than five zombies in a group, they try to meet up with other zombies (i.e. they are assigned to a different group and therefore a new leader). Multi-threading option one or two? For navigation I have some kind of navmesh (since the game is not tile-based) that tells the zombies where they can walk, etc. If anyone has some ideas on how to do navigation I would love some input. For LoS (zombie - player) I have split the map into grids. If the player's grid is connected to the zombie's grid (if I go with option two I would only need to check whether the leader zombie's grid is connected to the player's, i.e. fewer checks), and there has been more than 250 ms since the last check, do a raytrace. This is my first time programming AI, so input on any field is appreciated.
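    Idea one (staggered updates plus cached paths) tends to be the cheapest win, so here is a tiny illustrative Python sketch of it; the replan interval and the navmesh.find_path interface are hypothetical:

        import itertools

        REPLAN_INTERVAL = 30                 # ticks between path refreshes
        _spawn_counter = itertools.count()

        class Zombie:
            def __init__(self):
                # Each zombie gets a different phase so the horde's replans
                # are spread evenly across frames instead of spiking.
                self.offset = next(_spawn_counter) % REPLAN_INTERVAL
                self.path = []

            def update(self, tick, navmesh, my_pos, player_pos):
                if not self.path or (tick + self.offset) % REPLAN_INTERVAL == 0:
                    # The expensive call runs for only ~1/REPLAN_INTERVAL
                    # of the zombies on any given tick.
                    self.path = navmesh.find_path(my_pos, player_pos)
                # ...cheap per-tick movement along self.path goes here...

    The leader/follower grouping from idea two layers on top of this naturally: only leaders call find_path, and followers just steer toward their leader.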

    Read the article

  • I need to sell an almost-complete MMORPG project. How can I do that?

    - by Tomasz
    I need your help. We need to sell an MMORPG that is at an advanced stage of development. The game has a unique engine written specifically for it, plus graphics, sound, a map editor, a web site, etc. As is usual in an MMORPG, you can develop characters and monsters, fight other characters, or team up to solve challenges. You can fight using your own monsters or by playing your own spell cards. Unfortunately, we have no idea how to promote the game; the funding has run out and I think the whole team has given up. How can I find a buyer, and where? Thank you for your help.

    Read the article

  • obj-c classes and sub classes (Cocos2d) conversion

    - by Lewis
    Hi I'm using this version of cocos2d: https://github.com/krzysztofzablocki/CCNode-SFGestureRecognizers Which supports the UIGestureRecognizer within a CCLayer in a cocos2d scene like so: @interface HelloWorldLayer : CCLayer <UIGestureRecognizerDelegate> { } Now I want to make this custom gesture work within the scene, attaching it to a sprite in cocos2d: #import <Foundation/Foundation.h> #import <UIKit/UIGestureRecognizerSubclass.h> @protocol OneFingerRotationGestureRecognizerDelegate <NSObject> @optional - (void) rotation: (CGFloat) angle; - (void) finalAngle: (CGFloat) angle; @end @interface OneFingerRotationGestureRecognizer : UIGestureRecognizer { CGPoint midPoint; CGFloat innerRadius; CGFloat outerRadius; CGFloat cumulatedAngle; id <OneFingerRotationGestureRecognizerDelegate> target; } - (id) initWithMidPoint: (CGPoint) midPoint innerRadius: (CGFloat) innerRadius outerRadius: (CGFloat) outerRadius target: (id) target; - (void)reset; - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event; - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event; - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event; - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event; @end #include <math.h> #import "OneFingerRotationGestureRecognizer.h" @implementation OneFingerRotationGestureRecognizer // private helper functions CGFloat distanceBetweenPoints(CGPoint point1, CGPoint point2); CGFloat angleBetweenLinesInDegrees(CGPoint beginLineA, CGPoint endLineA, CGPoint beginLineB, CGPoint endLineB); - (id) initWithMidPoint: (CGPoint) _midPoint innerRadius: (CGFloat) _innerRadius outerRadius: (CGFloat) _outerRadius target: (id <OneFingerRotationGestureRecognizerDelegate>) _target { if ((self = [super initWithTarget: _target action: nil])) { midPoint = _midPoint; innerRadius = _innerRadius; outerRadius = _outerRadius; target = _target; } return self; } /** Calculates the distance between point1 and point 2. 
*/ CGFloat distanceBetweenPoints(CGPoint point1, CGPoint point2) { CGFloat dx = point1.x - point2.x; CGFloat dy = point1.y - point2.y; return sqrt(dx*dx + dy*dy); } CGFloat angleBetweenLinesInDegrees(CGPoint beginLineA, CGPoint endLineA, CGPoint beginLineB, CGPoint endLineB) { CGFloat a = endLineA.x - beginLineA.x; CGFloat b = endLineA.y - beginLineA.y; CGFloat c = endLineB.x - beginLineB.x; CGFloat d = endLineB.y - beginLineB.y; CGFloat atanA = atan2(a, b); CGFloat atanB = atan2(c, d); // convert radiants to degrees return (atanA - atanB) * 180 / M_PI; } #pragma mark - UIGestureRecognizer implementation - (void)reset { [super reset]; cumulatedAngle = 0; } - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event { [super touchesBegan:touches withEvent:event]; if ([touches count] != 1) { self.state = UIGestureRecognizerStateFailed; return; } } - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event { [super touchesMoved:touches withEvent:event]; if (self.state == UIGestureRecognizerStateFailed) return; CGPoint nowPoint = [[touches anyObject] locationInView: self.view]; CGPoint prevPoint = [[touches anyObject] previousLocationInView: self.view]; // make sure the new point is within the area CGFloat distance = distanceBetweenPoints(midPoint, nowPoint); if ( innerRadius <= distance && distance <= outerRadius) { // calculate rotation angle between two points CGFloat angle = angleBetweenLinesInDegrees(midPoint, prevPoint, midPoint, nowPoint); // fix value, if the 12 o'clock position is between prevPoint and nowPoint if (angle > 180) { angle -= 360; } else if (angle < -180) { angle += 360; } // sum up single steps cumulatedAngle += angle; // call delegate if ([target respondsToSelector: @selector(rotation:)]) { [target rotation:angle]; } } else { // finger moved outside the area self.state = UIGestureRecognizerStateFailed; } } - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event { [super touchesEnded:touches withEvent:event]; if (self.state == UIGestureRecognizerStatePossible) { self.state = UIGestureRecognizerStateRecognized; if ([target respondsToSelector: @selector(finalAngle:)]) { [target finalAngle:cumulatedAngle]; } } else { self.state = UIGestureRecognizerStateFailed; } cumulatedAngle = 0; } - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event { [super touchesCancelled:touches withEvent:event]; self.state = UIGestureRecognizerStateFailed; cumulatedAngle = 0; } @end Header file for view controller: #import "OneFingerRotationGestureRecognizer.h" @interface OneFingerRotationGestureViewController : UIViewController <OneFingerRotationGestureRecognizerDelegate> @property (nonatomic, strong) IBOutlet UIImageView *image; @property (nonatomic, strong) IBOutlet UITextField *textDisplay; @end then this is in the .m file: gestureRecognizer = [[OneFingerRotationGestureRecognizer alloc] initWithMidPoint: midPoint innerRadius: outRadius / 3 outerRadius: outRadius target: self]; [self.view addGestureRecognizer: gestureRecognizer]; Now my question is, is it possible to add this custom gesture into the cocos2d project found on that github, and if so, what do I need to change in the OneFingerRotationGestureRecognizerDelegate to get it to work within cocos2d. Because at the minute it is setup in a standard iOS project and not a cocos2d project and I do not know enough about UIViews and classing/ sub classing in obj-c to get this to work. Also it seems to inherit from a UIView where cocos2d uses CCLayer. Kind regards, Lewis. 
I also realise I may have not included enough code from the custom gesture project for readers to interpret it fully, so the full project can be found here: https://github.com/melle/OneFingerRotationGestureDemo

    Read the article

  • Why am I not getting an sRGB default framebuffer?

    - by Aaron Rotenberg
    I'm trying to make my OpenGL Haskell program gamma correct by making appropriate use of sRGB framebuffers and textures, but I'm running into issues making the default framebuffer sRGB. Consider the following Haskell program, compiled for 32-bit Windows using GHC and linked against 32-bit freeglut: import Foreign.Marshal.Alloc(alloca) import Foreign.Ptr(Ptr) import Foreign.Storable(Storable, peek) import Graphics.Rendering.OpenGL.Raw import qualified Graphics.UI.GLUT as GLUT import Graphics.UI.GLUT(($=)) main :: IO () main = do (_progName, _args) <- GLUT.getArgsAndInitialize GLUT.initialDisplayMode $= [GLUT.SRGBMode] _window <- GLUT.createWindow "sRGB Test" -- To prove that I actually have freeglut working correctly. -- This will fail at runtime under classic GLUT. GLUT.closeCallback $= Just (return ()) glEnable gl_FRAMEBUFFER_SRGB colorEncoding <- allocaOut $ glGetFramebufferAttachmentParameteriv gl_FRAMEBUFFER gl_FRONT_LEFT gl_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING print colorEncoding allocaOut :: Storable a => (Ptr a -> IO b) -> IO a allocaOut f = alloca $ \ptr -> do f ptr peek ptr On my desktop (Windows 8 64-bit with a GeForce GTX 760 graphics card) this program outputs 9729, a.k.a. gl_LINEAR, indicating that the default framebuffer is using linear color space, even though I explicitly requested an sRGB window. This is reflected in the rendering results of the actual program I'm trying to write - everything looks washed out because my linear color values aren't being converted to sRGB before being written to the framebuffer. On the other hand, on my laptop (Windows 7 64-bit with an Intel graphics chip), the program prints 0 (huh?) and I get an sRGB default framebuffer by default whether I request one or not! And on both machines, if I manually create a non-default framebuffer bound to an sRGB texture, the program correctly prints 35904, a.k.a. gl_SRGB. Why am I getting different results on different hardware? Am I doing something wrong? How can I get an sRGB framebuffer consistently on all hardware and target OSes?

    Read the article

  • 2D isometric: screen to tile coordinates

    - by Dr_Asik
    I'm writing an isometric 2D game and I'm having difficulty figuring out precisely which tile the cursor is on. Here's a drawing, where xs and ys are screen coordinates (pixels), xt and yt are tile coordinates, and W and H are the tile width and tile height in pixels, respectively. My notation for coordinates is (y, x), which may be confusing; sorry about that. The best I could figure out so far is this:

        int xtemp = xs / (W / 2);
        int ytemp = ys / (H / 2);
        int xt = (xs - ys) / 2;
        int yt = ytemp + xt;

    This seems almost correct, but it gives me a very imprecise result, making it hard to select certain tiles, and sometimes it selects a tile next to the one I'm trying to click on. I don't understand why, and I'd appreciate it if someone could help me understand the logic behind this. Thanks!
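    For comparison, here is the standard derivation for a diamond-shaped layout, written as a Python sketch so the round trip can be checked with plain numbers. It assumes tile (0,0) has its top corner at the screen origin, with xt increasing down-right and yt down-left; shift the inputs first if your origin or axes differ:

        import math

        W, H = 64, 32                     # hypothetical tile width/height in pixels

        def tile_to_screen(xt, yt):
            xs = (xt - yt) * (W / 2)
            ys = (xt + yt) * (H / 2)
            return xs, ys

        def screen_to_tile(xs, ys):
            # Exact inverse of tile_to_screen; floor to get whole tile indices.
            xt = ys / H + xs / W
            yt = ys / H - xs / W
            return math.floor(xt), math.floor(yt)

        print(screen_to_tile(*tile_to_screen(5, 3)))   # (5, 3)

    Note that both screen axes appear in both tile coordinates, each scaled by the matching tile dimension; mixing raw pixel units, as in the xt = (xs - ys) / 2 line above, is usually where the imprecision creeps in.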

    Read the article

  • Problem with Assimp 3D model loader

    - by Brendan Webster
    In my game I have model loading functions for Assimp model loading library. I can load the model and render it, but the model displays incorrectly. The models load in as if they were using a seperate projection matrix. I have looked over my code over and over again, but I probably keep on missing the obvious reason why this is happening. Here is an image of my game: It's simply a 6 sided cube, but it's off big time! Here are my code snippets for rendering the cube to the screen: void C_MediaLoader::display(void) { float tmp; glTranslatef(0,0,0); // rotate it around the y axis glRotatef(angle,0.f,0.f,1.f); glColor4f(1,1,1,1); // scale the whole asset to fit into our view frustum tmp = scene_max.x-scene_min.x; tmp = aisgl_max(scene_max.y - scene_min.y,tmp); tmp = aisgl_max(scene_max.z - scene_min.z,tmp); tmp = (1.f / tmp); glScalef(tmp/5, tmp/5, tmp/5); // center the model //glTranslatef( -scene_center.x, -scene_center.y, -scene_center.z ); // if the display list has not been made yet, create a new one and // fill it with scene contents if(scene_list == 0) { scene_list = glGenLists(1); glNewList(scene_list, GL_COMPILE); // now begin at the root node of the imported data and traverse // the scenegraph by multiplying subsequent local transforms // together on GL's matrix stack. recursive_render(scene, scene->mRootNode); glEndList(); } glCallList(scene_list); } void C_MediaLoader::recursive_render (const struct aiScene *sc, const struct aiNode* nd) { unsigned int i; unsigned int n = 0, t; struct aiMatrix4x4 m = nd->mTransformation; // update transform aiTransposeMatrix4(&m); glPushMatrix(); glMultMatrixf((float*)&m); // draw all meshes assigned to this node for (; n < nd->mNumMeshes; ++n) { const struct aiMesh* mesh = scene->mMeshes[nd->mMeshes[n]]; apply_material(sc->mMaterials[mesh->mMaterialIndex]); if(mesh->mNormals == NULL) { glDisable(GL_LIGHTING); } else { glEnable(GL_LIGHTING); } for (t = 0; t < mesh->mNumFaces; ++t) { const struct aiFace* face = &mesh->mFaces[t]; GLenum face_mode; switch(face->mNumIndices) { case 1: face_mode = GL_POINTS; break; case 2: face_mode = GL_LINES; break; case 3: face_mode = GL_TRIANGLES; break; default: face_mode = GL_POLYGON; break; } glBegin(face_mode); for(i = 0; i < face->mNumIndices; i++) { int index = face->mIndices[i]; if(mesh->mColors[0] != NULL) glColor4fv((GLfloat*)&mesh->mColors[0][index]); if(mesh->mNormals != NULL) glNormal3fv(&mesh->mNormals[index].x); glVertex3fv(&mesh->mVertices[index].x); } glEnd(); } } // draw all children for (n = 0; n < nd->mNumChildren; ++n) { recursive_render(sc, nd->mChildren[n]); } glPopMatrix(); } Sorry there is so much code to look through, but I really cannot find the problem, and I would love to have help.

    Read the article

  • CreateRenderTarget returns 0x80070057 in big surface resolution

    - by senggen
    I have created an SLI merged desktop from three 1920x1080 monitors, so the desktop resolution is 5760x1080. I get a 0x80070057 error when calling CreateRenderTarget to create the render-target surface:

        IDirect3DSurface9* _render_surface;
        HRESULT hr = _device->CreateRenderTarget(
            _desktop_width * 2,
            _desktop_height + 1,
            D3DFMT_A8R8G8B8,
            D3DMULTISAMPLE_NONE,
            0,
            TRUE,
            &_render_surface,
            NULL);

    It works fine with a desktop resolution of 1024x768, where the total resolution is 3072x768. The documentation at http://msdn.microsoft.com/en-us/library/windows/desktop/bb174361(v=vs.85).aspx says: "If the method succeeds, the return value is D3D_OK. If the method fails, the return value can be one of the following: D3DERR_NOTAVAILABLE, D3DERR_INVALIDCALL, D3DERR_OUTOFVIDEOMEMORY, E_OUTOFMEMORY." There is no description of 0x80070057 there, but: HRESULT: 0x80070057 (2147942487), Name: E_INVALIDARG, Description: An invalid parameter was passed to the returning function. Somebody please help me.

    Read the article

  • How do I make my rain effect look more like rain and less like snowfall?

    - by Nikhil Lamba
    I am making a game in which I want a rain effect, and I'm still a little way off from getting it right. I am creating the rain effect like below:

        particleSystem.addParticleInitializer(new ColorInitializer(1, 1, 1));
        particleSystem.addParticleInitializer(new AlphaInitializer(0));
        particleSystem.setBlendFunction(GL10.GL_SRC_ALPHA, GL10.GL_ONE);
        particleSystem.addParticleInitializer(new VelocityInitializer(2, 2, 20, 10));
        particleSystem.addParticleInitializer(new RotationInitializer(0.0f, 30.0f));
        particleSystem.addParticleModifier(new ScaleModifier(1.0f, 2.0f, 0, 150));
        particleSystem.addParticleModifier(new ColorModifier(1, 1, 1, 1f, 1, 1, 1, 3));
        particleSystem.addParticleModifier(new ColorModifier(1, 1, 1f, 1, 1, 1, 1, 6));
        particleSystem.addParticleModifier(new AlphaModifier(0, 1, 0, 3));
        particleSystem.addParticleModifier(new AlphaModifier(1, 0, 1, 125));
        particleSystem.addParticleModifier(new ExpireModifier(50, 50));
        scene.attachChild(particleSystem);

    But it looks like snowfall! What changes can I make so that it looks more like rain? EDIT: Here is a screenshot:

    Read the article

  • Unity GUI not in build, but works fine in editor

    - by Darren
    I have: GUITexture attached to an object A script that has GUIStyles created for the Textfield and Buttons that are created in OnGUI(). This script is attached to the same object in number 1 3 GUIText objects each separate from the above. A script that enables the GUITexture and the script in number 1 and 2 respectively This is how it is supposed to work: When I cross the finish line, number 4 script enables number 1 GUITexture component and number 2 script component. The script component uses one of number 3's GUIText objects to show you your best lap time, and also makes a GUI.Textfield for name entry and 2 GUI.Buttons for "Submit" and "Skip". If you hit "Submit" the script will submit the time. No matter which button you press, The remaining 2 GUIText objects from number 3 will show you the top 10 best times. For some reason, when I run it in editor, everything works 100%, but when I'm in different kinds of builds, the results vary. When I am in a webplayer, The GUITexture and the textfield and buttons appear, but the textfield and buttons are plain and have no evidence of GUIStyles. When I click one of the buttons, the score gets submitted but I do not get the fastest times showing. When I am in a standalone build, the GUITexture shows up, but nothing else does. If I remove the GUIStyle parameter of the GUI.Textfield and GUI.Button, they show up. Why am I getting these variations and how can I fix it? Code below: void Start () { Names.text = ""; Times.text = ""; YourBestTime.text = "Your Best Lap: " + bestTime + "\nEnter your name:"; //StartCoroutine(GetTimes("Test")); } void Update() { if (!ShowButtons && !GettingTimes) { StartCoroutine(GetTimes()); GettingTimes = true; } } IEnumerator GetTimes () { Debug.Log("Getting times"); YourBestTime.text = "Loading Best Lap Times"; WWW times_get = new WWW(GetTimesUrl); yield return times_get; WWW names_get = new WWW(GetNamesUrl); yield return names_get; if(times_get.error != null || names_get.error != null) { print("There was an error retrieiving the data: " + names_get.error + times_get.error); } else { Times.text = times_get.text; Names.text = names_get.text; YourBestTime.text = "Your Best Lap: " + bestTime; } } IEnumerator PostLapTime (string Name, string LapTime) { string hash= MD5.Md5Sum(Name + LapTime + secretKey); string bestTime_url = SubmitTimeUrl + "&Name=" + WWW.EscapeURL(Name) + "&LapTime=" + LapTime + "&hash=" + hash; Debug.Log (bestTime_url); // Post the URL to the site and create a download object to get the result. WWW hs_post = new WWW(bestTime_url); //label = "Submitting..."; yield return hs_post; // Wait until the download is done if (hs_post.error != null) { print("There was an error posting the lap time: " + hs_post.error); //label = "Error: " + hs_post.error; //show = false; } else { Debug.Log("Posted: " + hs_post.text); ShowButtons = false; PostingTime = false; } } void OnGUI() { if (ShowButtons) { //makes text box nameString = GUI.TextField( new Rect((Screen.width/2)-111, (Screen.height/2)-130, 222, 25), nameString, 20, TextboxStyle); if (GUI.Button( new Rect( (Screen.width/2-74.0f), (Screen.height/2)- 90, 64, 32), "Submit", ButtonStyle)) { //SUBMIT TIME if (nameString == "") { nameString = "Player"; } if (!PostingTime) { StartCoroutine(PostLapTime(nameString, bestTime)); PostingTime = true; } } else if (GUI.Button( new Rect( (Screen.width/2+10.0f), (Screen.height/2)- 90, 64, 32), "Skip", ButtonStyle)) { ShowButtons = false; } } } }

    Read the article
