Search Results

Search found 779 results on 32 pages for 'coordinate'.

  • How should I plan the inheritance structure for my game?

    - by Eric Thoma
    I am trying to write a platform shooter in C++ with a really good class structure for robustness. The game itself is secondary; it is the learning process of writing it that is primary. I am implementing an inheritance tree for all of the objects in my game, but I find myself unsure about some decisions. One specific issue that is bugging me is this: I have an Actor that is simply defined as anything in the game world. Under Actor is Character. Both of these classes are abstract. Under Character is the Philosopher, who is the main character that the user commands. Also under Character is NPC, which uses an AI module with stock routines for friendly, enemy and (maybe) neutral alignments. So under NPC I want to have three subclasses: FriendlyNPC, EnemyNPC and NeutralNPC. These classes are not abstract, and will often be subclassed in order to make different types of NPCs, like Engineer, Scientist and the most evil Programmer.

    Still, if I want to implement a generic NPC named Kevin, it would be nice to be able to put him in without making a new class for him. I could just instantiate a FriendlyNPC and pass some values for the AI machine and for the dialogue; that would be ideal. But what if Kevin is the one benevolent Programmer in the whole world? Now we must make a class for him (but what should it be called?). Now we have a character that should inherit from Programmer (as Kevin has all the same abilities but just uses the friendly AI functions) but also should inherit from FriendlyNPC. Programmer and FriendlyNPC branched away from each other on the inheritance tree, so inheriting from both of them would have conflicts, because some of the same functions have been implemented in different ways on the two of them.

    1) Is there a better way to order these classes to avoid these conflicts? Having three subclasses (Friendly, Enemy and Neutral) for each type of NPC (Engineer, Scientist, and Programmer) would amount to a huge number of classes. I would share specific implementation details, but I am writing the game slowly, piece by piece, and so I haven't implemented past Character yet.

    2) Is there a place where I can learn these programming paradigms? I am already trying to take advantage of some good design patterns, like MVC architecture and Mediator objects. The whole point of this project is to write something in good style. It is difficult to tell what should become a subclass and what should become a state (i.e. a Friendly boolean vs. a Friendly class). Having many states slows down code with if statements and makes classes long and unwieldy. On the other hand, having a class for everything isn't practical.

    3) Are there good rules of thumb or resources to learn more about this?

    4) Finally, where does templating come into this? How should I coordinate templates into my class structure? I have never actually taken advantage of templating, honestly, but I hear that it increases modularity, which means good code.
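
    What question 1 usually resolves to is composition over inheritance: make alignment a swappable AI component instead of a branch of the class tree, and the Friendly/Enemy/Neutral × Engineer/Scientist/Programmer explosion disappears. A minimal hedged sketch (hypothetical names, not from the question):

        // Alignment as a strategy object: "Kevin the benevolent Programmer"
        // becomes data passed to one NPC class, not a new class.
        #include <memory>
        #include <string>
        #include <utility>

        struct AIBehavior {                    // strategy interface
            virtual ~AIBehavior() = default;
            virtual void update() = 0;
        };
        struct FriendlyAI : AIBehavior { void update() override { /* stock friendly routines */ } };
        struct EnemyAI    : AIBehavior { void update() override { /* stock enemy routines */ } };

        class NPC {                            // would derive from Character
            std::string profession;            // "Engineer", "Scientist", "Programmer"
            std::unique_ptr<AIBehavior> ai;    // alignment injected, not inherited
        public:
            NPC(std::string prof, std::unique_ptr<AIBehavior> behavior)
                : profession(std::move(prof)), ai(std::move(behavior)) {}
            void update() { ai->update(); }
        };

        int main() {
            NPC kevin("Programmer", std::make_unique<FriendlyAI>());  // no Kevin class needed
            kevin.update();
        }

    Swapping the AIBehavior at runtime also covers characters that change sides, something a static inheritance tree cannot express.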

    Read the article

  • Omni-directional light shadow mapping with cubemaps in WebGL

    - by Winged
    First of all I must say that I have read a lot of posts describing the usage of cubemaps, but I'm still confused about how to use them. My goal is to achieve simple omni-directional (point) light shading in my WebGL application. I know that there are a lot more techniques (like using Two-Hemispheres or Camera Space Shadow Mapping) which are way more efficient, but for educational purposes cubemaps are my primary goal. Until now, I have adapted a simple shadow mapping which works with spotlights (with one exception: I don't know how to cut off the glitchy part beyond the reach of a single shadow map texture). So for now, this is how I understand the usage of cubemaps in shadow mapping:

    1. Setup a framebuffer (in the case of cubemaps, 6 framebuffers; 6 instead of 1 because every use of framebufferTexture2D slows down execution) and a texture cubemap. Also, depth components are not well supported in WebGL, so I need to render to RGBA first. this.texture = gl.createTexture(); gl.bindTexture(gl.TEXTURE_CUBE_MAP, this.texture); gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MIN_FILTER, gl.LINEAR); gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MAG_FILTER, gl.LINEAR); for (var face = 0; face < 6; face++) gl.texImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, gl.RGBA, this.size, this.size, 0, gl.RGBA, gl.UNSIGNED_BYTE, null); gl.bindTexture(gl.TEXTURE_CUBE_MAP, null); this.framebuffer = []; for (face = 0; face < 6; face++) { this.framebuffer[face] = gl.createFramebuffer(); gl.bindFramebuffer(gl.FRAMEBUFFER, this.framebuffer[face]); gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_CUBE_MAP_POSITIVE_X + face, this.texture, 0); gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.RENDERBUFFER, this.depthbuffer); var e = gl.checkFramebufferStatus(gl.FRAMEBUFFER); // Check for errors if (e !== gl.FRAMEBUFFER_COMPLETE) throw "Cubemap framebuffer object is incomplete: " + e.toString(); }

    2. Setup the light and the camera (I'm not sure if I should store all 6 view matrices and send them to shaders later, or if there is a way to do it with just one view matrix).

    3. Render the scene 6 times from the light's position, each time in another direction (X, -X, Y, -Y, Z, -Z): for (var face = 0; face < 6; face++) { gl.bindFramebuffer(gl.FRAMEBUFFER, shadow.buffer.framebuffer[face]); gl.viewport(0, 0, shadow.buffer.size, shadow.buffer.size); gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT); camera.lookAt( light.position.add( cubeMapDirections[face] ) ); scene.draw(shadow.program); }

    4. In a second pass, calculate the projection of the current vertex using the light's projection and view matrices. I don't know if I should calculate 6 of them, one for each face of the cubemap. ScaleMatrix pushes the projected vertex into the 0.0 - 1.0 region. vDepthPosition = ScaleMatrix * uPMatrixFromLight * uVMatrixFromLight * vWorldVertex;

    5. In the fragment shader, calculate the distance between the current vertex and the light position and check if it's deeper than the depth information read from the earlier rendered shadow map. I know how to do this with a 2D texture, but I have no idea how to use a cubemap texture here. I have read that texture lookups into cubemaps are performed with a normal vector instead of a UV coordinate. What vector should I use? Just a normalized vector pointing to the current vertex?

    For now, my code for this part looks like this (not working yet): float shadow = 1.0; vec3 depth = vDepthPosition.xyz / vDepthPosition.w; depth.z = length(vWorldVertex.xyz - uLightPosition) * linearDepthConstant; float shadowDepth = unpack(textureCube(uDepthMapSampler, vWorldVertex.xyz)); if (depth.z > shadowDepth) shadow = 0.5; Could you give me some hints or examples (preferably in WebGL code) of how I should build it?
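
    For the lookup question specifically: the vector you feed textureCube is the direction from the light to the fragment, which is the same vector whose length you compare against the stored depth, so the six view matrices are only needed while rendering the cube faces, not in the second pass. A hedged sketch reusing the question's unpack() and linearDepthConstant:

        // Direction from light to fragment selects the cubemap face.
        vec3 lightToFrag = vWorldVertex.xyz - uLightPosition;
        float currentDepth = length(lightToFrag) * linearDepthConstant;
        float storedDepth = unpack(textureCube(uDepthMapSampler, normalize(lightToFrag)));
        // Small bias guards against shadow acne (the value is a guess to tune).
        float shadow = (currentDepth - 0.003 > shadowDepth) ? 0.5 : 1.0;

    Note the sketch assumes the distance written into each face during the first pass was computed the same way (length from the light, scaled by linearDepthConstant), which the question's setup already does.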

    Read the article

  • concurrency::extent<N> from amp.h

    - by Daniel Moth
    Overview

    We saw in a previous post how index<N> represents a point in N-dimensional space and in this post we'll see how to define the N-dimensional space itself. With C++ AMP, an N-dimensional space can be specified with the template class extent<N> where you define the size of each dimension. From a look and feel perspective, you'd expect the programmatic interface of a point type and size type to be similar (even though the concepts are different). Indeed, exactly like index<N>, extent<N> is essentially a coordinate vector of N integers ordered from most- to least-significant, BUT each integer represents the size for that dimension (and hence cannot be negative). So, if you read the description of index, you won't be surprised by the description of extent<N> below. There is the rank field returning the value of N you passed as the template parameter. You can construct one extent from another (via the copy constructor or the assignment operator), you can construct it by passing an integer array, or via convenience constructor overloads for 1-, 2- and 3-dimension extents. Note that the parameterless constructor creates an extent of the specified rank with all bounds initialized to 0. You can access the components of the extent through the subscript operator (passing it an integer). You can perform some arithmetic operations between extent objects through operator overloading, i.e. ==, !=, +=, -=, +, -. There are operator overloads so that you can perform operations between an extent and an integer: -- (pre- and post-decrement), ++ (pre- and post-increment), %=, *=, /=, += and -=. Finally, there are additional overloads for plus and minus (+, -) between extent<N> and index<N> objects, returning a new extent object as the result. In addition to the usual suspects, extent offers a contains function that tests if an index is within the bounds of the extent (assuming an origin of zero). It also has a size function that returns the total linear size of this extent<N> in units of elements.

    Example code

    extent<2> e(3, 4); _ASSERT(e.rank == 2); _ASSERT(e.size() == 3 * 4); e += 3; e[1] += 6; e = e + index<2>(3,-4); _ASSERT(e == extent<2>(9, 9)); _ASSERT( e.contains(index<2>(8, 8))); _ASSERT(!e.contains(index<2>(8, 9)));

    grid<N>

    Our upcoming pre-release bits also have a similar type to extent, grid<N>. The way you create a grid is by passing it an extent, e.g. extent<3> e(4,2,6); grid<3> g(e); I am not going to dive deeper into grid; suffice for now to think of grid<N> simply as an alias for the extent<N> object, which you create when you encounter a function that expects a grid object instead of an extent object.

    Usage

    The extent class on its own simply defines the size of the N-dimensional space. We'll see in future posts that when you create containers (arrays) and wrappers (array_views) for your data, it is an extent<N> object that you'll need to use to create those (and use an index<N> object to index into them). We'll also see that it is a grid<N> object that you pass to the new parallel_for_each function that I'll cover in the next post.

    Comments about this post by Daniel Moth welcome at the original blog.
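
    To make that Usage section concrete, here is a minimal hedged sketch (assuming the concurrency namespace from amp.h and the array_view wrapper that later posts cover in detail):

        #include <amp.h>
        #include <vector>
        using namespace concurrency;

        void example() {
            extent<2> e(3, 4);                   // a 3 x 4 two-dimensional space
            std::vector<int> data(e.size(), 1);  // 12 elements of host data
            array_view<int, 2> av(e, data);      // wrap the data in that space
            index<2> idx(2, 3);                  // last row, last column
            if (e.contains(idx))                 // bounds check against the extent
                av[idx] = 42;                    // index into the view with index<2>
        }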

    Read the article

  • SDL Bullet Movement

    - by Code Assasssin
    I'm currently working on my first space shooter, and I'm in the process of making my ship shoot some bullets/lasers. Unfortunately, I'm having a hard time getting the bullets to fly vertically. I'm a total noob when it comes to this so you might have a hard time understanding my code :/ // Position Bullet Function projectilex = x + 17; projectiley = y + -20; if(keystates[SDLK_SPACE]) { alive = true; } And here's my show function if(alive) { if(frame == 2) { frame = 0; } apply_surface(projectilex,projectiley,ShootStuff,screen,&lazers[frame]); frame++; projectiley + 1; } I'm trying to get the bullet to fly vertically... and I have no clue how to do that. I've tried messing with the y coordinate but that makes things worse. The laser/bullet just follows the ship :( How would I get it to fire at the starting position and keep going in a vertical line without it following the ship? int main( int argc, char* args[] ) { Player p; Timer fps; bool quit = false; if( init() == false ) { return 1; } //Load the files if( load_files() == false ) { return 1; } clip[ 0 ].x = 0; clip[ 0 ].y = 0; clip[ 0 ].w = 30; clip[ 0 ].h = 36; clip[ 1 ].x = 31; clip[ 1 ].y = 0; clip[ 1 ].w = 39; clip[ 1 ].h = 36; clip[ 2 ].x = 71; clip[ 2 ].y = 0; clip[ 2 ].w = 29; clip[ 2 ].h = 36; lazers [ 0 ].x = 0; lazers [ 0 ].y = 0; lazers [ 0 ].w = 3; lazers [ 0 ].h = 9; lazers [ 1 ].x = 5; lazers [ 1 ].y = 0; lazers [ 1 ].w = 3; lazers [ 1 ].h = 7; while( quit == false ) { fps.start(); //While there's an event to handle while( SDL_PollEvent( &event ) ) { p.handle_input(); //If a key was pressed //If the user has Xed out the window if( event.type == SDL_QUIT ) { //Quit the program quit = true; } } //Scroll background bgX -= 8; //If the background has gone too far if( bgX <= -GameBackground->w ) { //Reset the offset bgX = 0; } p.move(); apply_surface( bgX, bgY,GameBackground, screen ); apply_surface( bgX + GameBackground->w, bgY, GameBackground, screen ); apply_surface(0,0, FullHealthBar,screen); p.shoot(); p.show(); //Apply the message //Update the screen if( SDL_Flip( screen ) == -1 ) { return 1; } SDL_Flip(GameBackground); if( fps.get_ticks() < 1000 / FRAMES_PER_SECOND ) { SDL_Delay( ( 1000 / FRAMES_PER_SECOND ) - fps.get_ticks() ); } } //Clean up clean_up(); return 0; }
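
    Hedging this as a guess from the snippets above, two things stand out: projectiley + 1; computes a value and discards it (it needs an assignment such as projectiley += 1;, or -= to move up, since SDL's y axis grows downward), and because projectilex/projectiley are recomputed from the ship's x and y on every frame, the bullet is glued to the ship. The usual fix is to copy the ship's position once at fire time and afterwards move the bullet only by its own velocity; a minimal sketch with hypothetical names:

        // Spawn once at the ship, then advance by velocity each frame.
        struct Bullet {
            float x, y;
            float vy;      // vertical speed; negative moves up in SDL coordinates
            bool alive;
        };

        void fireBullet(Bullet &b, float shipX, float shipY) {
            if (b.alive) return;      // one bullet at a time keeps the sketch simple
            b.x = shipX + 17;         // copy the ship position ONCE, at fire time
            b.y = shipY - 20;
            b.vy = -8.0f;
            b.alive = true;
        }

        void updateBullet(Bullet &b) {
            if (!b.alive) return;
            b.y += b.vy;              // note the assignment; b.y + 1 alone does nothing
            if (b.y < -9) b.alive = false;   // recycle once it leaves the screen
        }

    The show function would then blit at b.x/b.y and never touch the ship's coordinates again.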

    Read the article

  • C# 4.0: Covariance And Contravariance In Generics

    - by Paulo Morgado
    C# 4.0 (and .NET 4.0) introduced covariance and contravariance to generic interfaces and delegates. But what is this variance thing? According to Wikipedia, in multilinear algebra and tensor analysis, covariance and contravariance describe how the quantitative description of certain geometrical or physical entities changes when passing from one coordinate system to another.(*) But what does this have to do with C# or .NET? In type theory, a type T is greater (>) than a type S if S is a subtype of (derives from) T, which means that there is a quantitative description for types in a type hierarchy. So, how do covariance and contravariance apply to C# (and .NET) generic types? In C# (and .NET), variance applies to generic type parameters and not to the resulting generic type. A generic type parameter is:

    - covariant if the ordering of the generic types follows the ordering of the generic type parameters: Generic<T> ≥ Generic<S> for T ≥ S.
    - contravariant if the ordering of the generic types is reversed from the ordering of the generic type parameters: Generic<T> ≤ Generic<S> for T ≥ S.
    - invariant if neither of the above apply.

    If this definition is applied to arrays, we can see that arrays have always been covariant because this is valid code: object[] objectArray = new string[] { "string 1", "string 2" }; objectArray[0] = "string 3"; objectArray[1] = new object(); However, when we try to run this code, the second assignment will throw an ArrayTypeMismatchException. Although the compiler was fooled into thinking this was valid code because an object is being assigned to an element of an array of object, at run time, there is always a type check to guarantee that the runtime type of the definition of the elements of the array is greater than or equal to the instance being assigned to the element. In the above example, because the runtime type of the array is array of string, the first assignment of array elements is valid because string ≥ string and the second is invalid because string < object. This leads to the conclusion that, although arrays have always been covariant, they are not safely covariant: code that compiles is not guaranteed to run without errors. In C#, the way to define a generic type parameter as covariant is with the out generic modifier: public interface IEnumerable<out T> { IEnumerator<T> GetEnumerator(); } public interface IEnumerator<out T> { T Current { get; } bool MoveNext(); } Notice the convenient reuse of the pre-existing out keyword. Besides the benefit of not having to remember a new hypothetical covariant keyword, out is easier to remember because it defines that the generic type parameter can only appear in output positions — read-only properties and method return values. In a similar way, the way to define a type parameter as contravariant is with the in generic modifier: public interface IComparer<in T> { int Compare(T x, T y); } Once again, the use of the pre-existing in keyword makes it easier to remember that the generic type parameter can only be used in input positions — write-only properties and non-ref, non-out method parameters. Because covariance and contravariance apply only to the generic type parameters, a generic type definition can have both covariant and contravariant generic type parameters in its definition: public delegate TResult Func<in T, out TResult>(T arg); A generic type parameter that is not marked covariant (out) or contravariant (in) is invariant.
    All the types in the .NET Framework where variance could be applied to their generic type parameters have been modified to take advantage of this new feature. In summary, the rules for variance in C# (and .NET) are:

    - Variance in type parameters is restricted to generic interface and generic delegate types.
    - A generic interface or generic delegate type can have both covariant and contravariant type parameters.
    - Variance applies only to reference types; if you specify a value type for a variant type parameter, that type parameter is invariant for the resulting constructed type.
    - Variance does not apply to delegate combination. That is, given two delegates of types Action<Derived> and Action<Base>, you cannot combine the second delegate with the first although the result would be type safe. Variance allows the second delegate to be assigned to a variable of type Action<Derived>, but delegates can combine only if their types match exactly.

    If you want to learn more about variance in C# (and .NET), you can always read:

    - Covariance and Contravariance in Generics — MSDN Library
    - Exact rules for variance validity — Eric Lippert
    - Events get a little overhaul in C# 4, Afterward: Effective Events — Chris Burrows

    Note: Because variance is a feature of .NET 4.0 and not only of C# 4.0, all this also applies to Visual Basic 10.
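
    A short hedged example of what the two modifiers buy you in practice (hypothetical Animal/Cat types; both assignments compile only from C# 4.0 on):

        using System.Collections.Generic;

        class Animal { }
        class Cat : Animal { }

        static class VarianceDemo
        {
            static void Main()
            {
                // Covariance: IEnumerable<out T> lets a sequence of Cats stand in
                // for a sequence of Animals (Cat is a subtype of Animal).
                IEnumerable<Cat> cats = new List<Cat> { new Cat() };
                IEnumerable<Animal> animals = cats;

                // Contravariance: IComparer<in T> lets a comparer of Animals stand
                // in for a comparer of Cats.
                IComparer<Animal> animalComparer = Comparer<Animal>.Default;
                IComparer<Cat> catComparer = animalComparer;
            }
        }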

    Read the article

  • Why lock-free data structures just aren't lock-free enough

    - by Alex.Davies
    Today's post will explore why the current ways to communicate between threads don't scale, and show you a possible way to build scalable parallel programming on top of shared memory.

    The problem with shared memory

    Soon, we will have dozens, hundreds and then millions of cores in our computers. It's inevitable, because individual cores just can't get much faster. At some point, that's going to mean that we have to rethink our architecture entirely, as millions of cores can't all access a shared memory space efficiently. But millions of cores are still a long way off, and in the meantime we'll see machines with dozens of cores, struggling with shared memory.

    Alex's tip: The best way for an application to make use of that increasing parallel power is to use a concurrency model like actors, which deals with synchronisation issues for you. Then, the maintainer of the actors framework can find the most efficient way to coordinate access to shared memory to allow your actors to pass messages to each other efficiently.

    At the moment, NAct uses the .NET thread pool and a few locks to marshal messages. It works well on dual and quad core machines, but it won't scale to more cores. Every time we use a lock, our core performs an atomic memory operation (e.g. CAS) on a cell of memory representing the lock, so it's sure that no other core can possibly have that lock. This is very fast when the lock isn't contended, but we need to notify all the other cores, in case they held the cell of memory in a cache. As the number of cores increases, the total cost of a lock increases linearly. A lot of work has been done on "lock-free" data structures, which avoid locks by using atomic memory operations directly. These give fairly dramatic performance improvements, particularly on systems with a few (2 to 4) cores. The .NET 4 concurrent collections in System.Collections.Concurrent are mostly lock-free. However, lock-free data structures still don't scale indefinitely, because any use of an atomic memory operation still involves every core in the system.

    A sync-free data structure

    Some concurrent data structures are possible to write in a completely synchronization-free way, without using any atomic memory operations. One useful example is a single producer, single consumer (SPSC) queue. It's easy to write a sync-free fixed size SPSC queue using a circular buffer*. Slightly trickier is a queue that grows as needed. You can use a linked list to represent the queue, but if you leave the nodes to be garbage collected once you're done with them, the GC will need to involve all the cores in collecting the finished nodes. Instead, I've implemented a proof of concept inspired by this Intel article which reuses the nodes by putting them in a second queue to send back to the producer.

    * In all these cases, you need to use memory barriers correctly, but these are local to a core, so don't have the same scalability problems as atomic memory operations.

    Performance tests

    I tried benchmarking my SPSC queue against the .NET ConcurrentQueue, and against a standard Queue protected by locks. In some ways, this isn't a fair comparison, because both of these support multiple producers and multiple consumers, but I'll come to that later. I started on my dual-core laptop, running a simple test that had one thread producing 64 bit integers, and another consuming them, to measure the pure overhead of the queue. So, nothing very interesting here.
    Both concurrent collections perform better than the lock-based one as expected, but there's not a lot to choose between the ConcurrentQueue and my SPSC queue. I was a little disappointed, but then, the .NET Framework team spent a lot longer optimising it than I did. So I dug out a more powerful machine that Red Gate's DBA tools team had been using for testing. It is a 6 core Intel i7 machine with hyperthreading, adding up to 12 logical cores. Now the results get more interesting.

    As I increased the number of producer-consumer pairs to 6 (to saturate all 12 logical cores), the locking approach was slow, and got even slower, as you'd expect. What I didn't expect to be so clear was the drop-off in performance of the lock-free ConcurrentQueue. I could see the machine only using about 20% of available CPU cycles when it should have been saturated. My interpretation is that as all the cores used atomic memory operations to safely access the queue, they ended up spending most of the time notifying each other about cache lines that need invalidating. The sync-free approach scaled perfectly, despite still working via shared memory, which after all, should still be a bottleneck. I can't quite believe that the results are so clear, so if you can think of any other effects that might cause them, please comment!

    Obviously, this benchmark isn't realistic because we're only measuring the overhead of the queue. Any real workload, even on a machine with 12 cores, would dwarf the overhead, and there'd be no point worrying about this effect. But would that be true on a machine with 100 cores?

    Still to be solved

    The trouble is, you can't build many concurrent algorithms using only an SPSC queue to communicate. In particular, I can't see a way to build something as general purpose as actors on top of just SPSC queues. Fundamentally, an actor needs to be able to receive messages from multiple other actors, which seems to need an MPSC queue. I've been thinking about ways to build a sync-free MPSC queue out of multiple SPSC queues and some kind of sign-up mechanism. Hopefully I'll have something to tell you about soon, but leave a comment if you have any ideas.
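
    For reference, a minimal hedged sketch of the fixed-size circular-buffer SPSC queue the post describes (not the author's NAct code; Volatile.Read/Write stand in for the memory barriers the footnote mentions, and the scheme only works because exactly one thread writes each index):

        using System.Threading;

        // Single-producer single-consumer ring buffer: no locks, no CAS.
        public sealed class SpscQueue<T>
        {
            private readonly T[] _buffer;
            private int _head;   // next slot to write; producer-owned
            private int _tail;   // next slot to read; consumer-owned

            public SpscQueue(int capacity) { _buffer = new T[capacity + 1]; }

            public bool TryEnqueue(T item)        // call from the producer thread only
            {
                int head = _head;
                int next = (head + 1) % _buffer.Length;
                if (next == Volatile.Read(ref _tail)) return false;   // full
                _buffer[head] = item;
                Volatile.Write(ref _head, next);  // publish the item after writing it
                return true;
            }

            public bool TryDequeue(out T item)    // call from the consumer thread only
            {
                int tail = _tail;
                if (tail == Volatile.Read(ref _head)) { item = default(T); return false; } // empty
                item = _buffer[tail];
                Volatile.Write(ref _tail, (tail + 1) % _buffer.Length);
                return true;
            }
        }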

    Read the article

  • Frameskipping in Android gameloop causing choppy sprites (Open GL ES 2.0)

    - by user22241
    I have written a simple 2D platform game for Android and am wondering how one deals with frame skipping. Are there any alternatives? Let me explain further.

    My game loop allows rendering to be skipped if game updates and rendering do not fit into my fixed time-slice (16.667ms). This allows my game to run at identically perceived speeds on different devices. And this works great, things do run at the same speed. However, when the game loop skips a render call for even one frame, the sprite glitches. And thinking about it, why wouldn't it? You're seeing a sprite move, say, an average of 10 pixels every 16.7ms; then suddenly there is a pause of 33.3ms, and the sprite appears to jump 20 pixels. When this happens 3 or 4 times in close succession, the result is very ugly and not something I want in my game.

    Therefore, my question is how one deals with these 'pauses' and 'jumps'. I've read every article on game loops I can find (see below) and my loops are even based on code from these articles. The articles specifically mention frame skipping, but they make no reference to how to deal with the visual glitches that result from it. I've attempted various game loops. My loop must have a mechanism in place to allow rendering to be skipped to keep game speed constant across multiple devices (or an alternative, if one exists).

    I've tried interpolation, but this doesn't eliminate this specific problem (although it looks like it may mitigate the issue slightly: when it eventually draws the sprite it 'moves it back' between the old and current positions, so the 'jump' isn't so big). I've also tried a form of extrapolation which does seem to keep things smooth considerably, but I find it to be next to completely useless because it plays havoc with my collision detection (even when drawing with a 'display only' coordinate - see extrapolation-breaks-collision-detection). I've tried a loop that uses Thread.sleep when drawing/updating completes with time left over, with no frame skipping in this one; again fairly smooth, but it runs differently on different devices, so no good. And I've tried spawning my own, third thread for logic updates, but this was extremely messy to deal with and the performance really wasn't good (upon reading tons of forums, most people seem to agree that a 2-thread loop (so UI and GL threads) is safer/easier).

    Now, if I remove frame skipping, then all seems to run nice and smooth, with or without inter/extrapolation. However, this isn't an option because the game then runs at different speeds on different devices as it falls behind from not being able to render fast enough. I'm running logic at 60 ticks per second and rendering as fast as I can.

    I've read, as far as I can see, every article out there. I've tried the loops from My Secret Garden and Fix your timestep. I've also read: Against the grain, deWiTTERS Game Loop, plus various other articles on game loops. A lot of the others are derived from the above articles or just copied word for word. These are all great, but they don't touch on the issues I'm experiencing. I really have tried everything I can think of over the course of a year to eliminate these glitches, to no avail, so any and all help would be appreciated.
A couple of examples of my game loops (Code follows): From My Secret Room public void onDrawFrame(GL10 gl) { //Rre-set loop back to 0 to start counting again loops=0; while(System.currentTimeMillis() > nextGameTick && loops < maxFrameskip) { SceneManager.getInstance().getCurrentScene().updateLogic(); nextGameTick += skipTicks; timeCorrection += (1000d / ticksPerSecond) % 1; nextGameTick += timeCorrection; timeCorrection %= 1; loops++; } extrapolation = (float)(System.currentTimeMillis() + skipTicks - nextGameTick) / (float)skipTicks; render(extrapolation); } And from Fix your timestep double t = 0.0; double dt2 = 0.01; double currentTime = System.currentTimeMillis()*0.001; double accumulator = 0.0; double newTime; double frameTime; @Override public void onDrawFrame(GL10 gl) { newTime = System.currentTimeMillis()*0.001; frameTime = newTime - currentTime; if ( frameTime > (dt*5)) //Allow 5 'skips' frameTime = (dt*5); currentTime = newTime; accumulator += frameTime; while ( accumulator >= dt ) { SceneManager.getInstance().getCurrentScene().updateLogic(); previousState = currentState; accumulator -= dt; } interpolation = (float) (accumulator / dt); render(interpolation); }
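
    For what it's worth, the way the factor computed by both loops above is normally used to hide those jumps is to keep the previous logic state and let the renderer draw a point blended between the two states; collision detection keeps using the true current position, so it is unaffected. A hedged Java sketch with hypothetical names:

        // Render between the last two logic states so a skipped frame
        // shows up as a smaller visual step.
        public class SpriteState {
            float prevX, prevY;   // position at the previous logic tick
            float currX, currY;   // position at the latest logic tick

            public void tick(float dx, float dy) {
                prevX = currX; prevY = currY;   // snapshot before moving
                currX += dx;   currY += dy;
            }

            // factor is the 0..1 value the game loop already computes
            public void draw(float factor) {
                float drawX = prevX + (currX - prevX) * factor;
                float drawY = prevY + (currY - prevY) * factor;
                drawQuadAt(drawX, drawY);       // hypothetical GLES draw call
            }

            private void drawQuadAt(float x, float y) { /* issue the draw here */ }
        }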

    Read the article

  • Weird y offset when using custom frag shader (Cocos2d-x)

    - by Mister Guacamole
    I'm trying to mask a sprite so I wrote a simple fragment shader that renders only the pixels that are not hidden under another texture (the mask). The problem is that it seems my texture has its y-coordinate offset after passing through the shader. This is the init method of the sprite (GroundZone) I want to mask: bool GroundZone::initWithSize(Size size) { // [...] // Setup the mask of the sprite m_mask = RenderTexture::create(textureWidth, textureHeight); m_mask->retain(); m_mask->setKeepMatrix(true); Texture2D *maskTexture = m_mask->getSprite()->getTexture(); maskTexture->setAliasTexParameters(); // Disable linear interpolation on the mask // Load the custom frag shader with a default vert shader as the sprite’s program FileUtils *fileUtils = FileUtils::getInstance(); string vertexSource = ccPositionTextureA8Color_vert; string fragmentSource = fileUtils->getStringFromFile( fileUtils->fullPathForFilename("CustomShader_AlphaMask_frag.fsh")); GLProgram *shader = new GLProgram; shader->initWithByteArrays(vertexSource.c_str(), fragmentSource.c_str()); shader->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_POSITION, GLProgram::VERTEX_ATTRIB_POSITION); shader->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_TEX_COORD, GLProgram::VERTEX_ATTRIB_TEX_COORDS); shader->link(); CHECK_GL_ERROR_DEBUG(); shader->updateUniforms(); CHECK_GL_ERROR_DEBUG(); int maskTexUniformLoc = shader->getUniformLocationForName("u_alphaMaskTexture"); shader->setUniformLocationWith1i(maskTexUniformLoc, 1); this->setShaderProgram(shader); shader->release(); // [...] } These are the custom drawing methods for actually drawing the mask over the sprite. You need to know that m_mask is modified externally by another class; the onDraw() method only renders it. void GroundZone::draw(Renderer *renderer, const kmMat4 &transform, bool transformUpdated) { m_renderCommand.init(_globalZOrder); m_renderCommand.func = CC_CALLBACK_0(GroundZone::onDraw, this, transform, transformUpdated); renderer->addCommand(&m_renderCommand); Sprite::draw(renderer, transform, transformUpdated); } void GroundZone::onDraw(const kmMat4 &transform, bool transformUpdated) { GLProgram *shader = this->getShaderProgram(); shader->use(); glActiveTexture(GL_TEXTURE1); glBindTexture(GL_TEXTURE_2D, m_mask->getSprite()->getTexture()->getName()); glActiveTexture(GL_TEXTURE0); } Below is the method (located in another class, GroundLayer) that modifies the mask by drawing a line from point start to point end. Both points are in Cocos2d coordinates (Point (0,0) is down-left). void GroundLayer::drawTunnel(Point start, Point end) { // To dig a line, we need first to get the texture of the zone we will be digging into. Then we get the // relative position of the start and end point in the zone's node space. Finally we use the custom shader to // draw a mask over the existing texture. for (auto it = _children.begin(); it != _children.end(); it++) { GroundZone *zone = static_cast<GroundZone *>(*it); Point nodeStart = zone->convertToNodeSpace(start); Point nodeEnd = zone->convertToNodeSpace(end); // Now that we have our two points converted to node space, it's easy to draw a mask that contains a line // going from the start point to the end point and that is then applied over the current texture.
Size groundZoneSize = zone->getContentSize(); RenderTexture *rt = zone->getMask(); rt->begin(); { // Draw a line going from start and going to end in the texture, the line will act as a mask over the // existing texture DrawNode *line = DrawNode::create(); line->retain(); line->drawSegment(nodeStart, nodeEnd, 20, Color4F::RED); line->visit(); } rt->end(); } } Finally, here's the custom shader I wrote. #ifdef GL_ES precision mediump float; #endif varying vec2 v_texCoord; uniform sampler2D u_texture; uniform sampler2D u_alphaMaskTexture; void main() { float maskAlpha = texture2D(u_alphaMaskTexture, v_texCoord).a; float texAlpha = texture2D(u_texture, v_texCoord).a; float blendAlpha = (1.0 - maskAlpha) * texAlpha; // Show only where mask is invisible vec3 texColor = texture2D(u_texture, v_texCoord).rgb; gl_FragColor = vec4(texColor, blendAlpha); return; } I got a problem with the y coordinates. Indeed, it seems that once it has passed through my custom shader, the sprite's texture is not at the right place: Without custom shader (the sprite is the brown thing): With custom shader: What's going on here? Thanks :) EDIT It looks like after passing through the shader when I set the position of the sprite I set it in points, with (0,0) being in the top-right. Indeed, when I do sprite->setPosition(320, 480), the sprite is perfectly placed at the top of the screen.
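
    A hedged guess based on the symptom in the EDIT: OpenGL render textures have their origin at the bottom-left while the sprite's placement appears to count from the top after the shader is applied, so a vertical flip between the mask's texture space and the sprite's UVs would produce exactly this kind of offset. One cheap experiment is to flip only the mask lookup in the fragment shader:

        // Experiment: sample the mask with y inverted, leaving the sprite's
        // own texture lookup untouched.
        vec2 maskCoord = vec2(v_texCoord.x, 1.0 - v_texCoord.y);
        float maskAlpha = texture2D(u_alphaMaskTexture, maskCoord).a;
        float texAlpha = texture2D(u_texture, v_texCoord).a;
        gl_FragColor = vec4(texture2D(u_texture, v_texCoord).rgb,
                            (1.0 - maskAlpha) * texAlpha);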

    Read the article

  • Off center projection

    - by N0xus
    I'm trying to implement the code that was freely given by a very kind developer at the following link: http://forum.unity3d.com/threads/142383-Code-sample-Off-Center-Projection-Code-for-VR-CAVE-or-just-for-fun

    Right now, all I'm trying to do is bring it in on one camera, but I have a few issues. My class looks as follows: using UnityEngine; using System.Collections; public class PerspectiveOffCenter : MonoBehaviour { // Use this for initialization void Start () { } // Update is called once per frame void Update () { } public static Matrix4x4 GeneralizedPerspectiveProjection(Vector3 pa, Vector3 pb, Vector3 pc, Vector3 pe, float near, float far) { Vector3 va, vb, vc; Vector3 vr, vu, vn; float left, right, bottom, top, eyedistance; Matrix4x4 transformMatrix; Matrix4x4 projectionM; Matrix4x4 eyeTranslateM; Matrix4x4 finalProjection; //Calculate the orthonormal basis for the screen (the screen coordinate system) vr = pb - pa; vr.Normalize(); vu = pc - pa; vu.Normalize(); vn = Vector3.Cross(vr, vu); vn.Normalize(); //Calculate the vector from eye (pe) to screen corners (pa, pb, pc) va = pa-pe; vb = pb-pe; vc = pc-pe; //Get the distance from the eye to the screen plane eyedistance = -(Vector3.Dot(va, vn)); //Get the variables for the off center projection left = (Vector3.Dot(vr, va)*near)/eyedistance; right = (Vector3.Dot(vr, vb)*near)/eyedistance; bottom = (Vector3.Dot(vu, va)*near)/eyedistance; top = (Vector3.Dot(vu, vc)*near)/eyedistance; //Get this projection projectionM = PerspectiveOffCenter(left, right, bottom, top, near, far); //Fill in the transform matrix transformMatrix = new Matrix4x4(); transformMatrix[0, 0] = vr.x; transformMatrix[0, 1] = vr.y; transformMatrix[0, 2] = vr.z; transformMatrix[0, 3] = 0; transformMatrix[1, 0] = vu.x; transformMatrix[1, 1] = vu.y; transformMatrix[1, 2] = vu.z; transformMatrix[1, 3] = 0; transformMatrix[2, 0] = vn.x; transformMatrix[2, 1] = vn.y; transformMatrix[2, 2] = vn.z; transformMatrix[2, 3] = 0; transformMatrix[3, 0] = 0; transformMatrix[3, 1] = 0; transformMatrix[3, 2] = 0; transformMatrix[3, 3] = 1; //Now for the eye transform eyeTranslateM = new Matrix4x4(); eyeTranslateM[0, 0] = 1; eyeTranslateM[0, 1] = 0; eyeTranslateM[0, 2] = 0; eyeTranslateM[0, 3] = -pe.x; eyeTranslateM[1, 0] = 0; eyeTranslateM[1, 1] = 1; eyeTranslateM[1, 2] = 0; eyeTranslateM[1, 3] = -pe.y; eyeTranslateM[2, 0] = 0; eyeTranslateM[2, 1] = 0; eyeTranslateM[2, 2] = 1; eyeTranslateM[2, 3] = -pe.z; eyeTranslateM[3, 0] = 0; eyeTranslateM[3, 1] = 0; eyeTranslateM[3, 2] = 0; eyeTranslateM[3, 3] = 1f; //Multiply all together finalProjection = new Matrix4x4(); finalProjection = Matrix4x4.identity * projectionM*transformMatrix*eyeTranslateM; //finally return return finalProjection; } // Update is called once per frame public void FixedUpdate () { Camera cam = camera; //calculate projection Matrix4x4 genProjection = GeneralizedPerspectiveProjection( new Vector3(0,1,0), new Vector3(1,1,0), new Vector3(0,0,0), new Vector3(0,0,0), cam.nearClipPlane, cam.farClipPlane); //(BottomLeftCorner, BottomRightCorner, TopLeftCorner, trackerPosition, cam.nearClipPlane, cam.farClipPlane); cam.projectionMatrix = genProjection; } }

    My error lies in projectionM = PerspectiveOffCenter(left, right, bottom, top, near, far); The debugger states: Expression denotes a `type', where a 'variable', 'value' or 'method group' was expected.
Thus, I changed the line to read: projectionM = new PerspectiveOffCenter(left, right, bottom, top, near, far); But then the error is changed to: The type 'PerspectiveOffCenter' does not contain a constructor that takes '6' arguments. For reasons that are obvious. So, finally, I changed the line to read: projectionM = new GeneralizedPerspectiveProjection(left, right, bottom, top, near, far); And the error I get is: is a 'method' but a 'type' was expected. With this last error, I'm not sure what it is I should do / missing. Can anyone see what it is that I'm missing to fix this error?
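
    The root cause is that PerspectiveOffCenter is the name of the MonoBehaviour class itself, so every variation of the call tries to use a type as a method. The forum sample this is adapted from pairs GeneralizedPerspectiveProjection with a static helper that builds the off-center frustum matrix; a version following the standard glFrustum layout (as in Unity's Camera.projectionMatrix documentation), renamed so it cannot collide with the class name:

        // Off-center perspective projection matrix (standard OpenGL frustum).
        static Matrix4x4 BuildOffCenterProjection(float left, float right,
                                                  float bottom, float top,
                                                  float near, float far)
        {
            float x = 2.0f * near / (right - left);
            float y = 2.0f * near / (top - bottom);
            float a = (right + left) / (right - left);
            float b = (top + bottom) / (top - bottom);
            float c = -(far + near) / (far - near);
            float d = -(2.0f * far * near) / (far - near);

            Matrix4x4 m = new Matrix4x4();
            m[0, 0] = x; m[0, 1] = 0; m[0, 2] = a; m[0, 3] = 0;
            m[1, 0] = 0; m[1, 1] = y; m[1, 2] = b; m[1, 3] = 0;
            m[2, 0] = 0; m[2, 1] = 0; m[2, 2] = c; m[2, 3] = d;
            m[3, 0] = 0; m[3, 1] = 0; m[3, 2] = -1; m[3, 3] = 0;
            return m;
        }

    The call site then becomes projectionM = BuildOffCenterProjection(left, right, bottom, top, near, far); with no new keyword.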

    Read the article

  • Cowboy Agile?

    - by Robert May
    In a previous post, I outlined the rules of Scrum. This post details one of those rules.

    I've often heard similar phrases around Scrum that clue me in to someone who doesn't understand Scrum. The phrases go something like this:

    - "We don't do Agile because the idea of letting people just do whatever they want is wrong. We believe in a more structured approach." (i.e. Work is Prison, and I'm the Warden!)
    - "I love Agile. Agile lets us do whatever we want!" (Cowboy Agile?)
    - "We're Agile, but we use a process that I've created." (Cowboy Agile?)

    All of those phrases have one thing in common: the assumption that Agile, and I mean Scrum, lets you do whatever you want. This is simply not true. Executing Scrum properly requires more dedication, rigor, and diligence than happens in most traditional development methods.

    Scrum and Waterfall Compared

    Since Scrum and Waterfall are two of the most commonly used methodologies, a little bit of contrasting and comparing is in order.

    Waterfall: A project manager defines all tasks and then manages the tasks that team members are working on.
    Scrum: The team members define the tasks and estimates of the stories for the current iteration. Any team member may work on any task in the iteration.

    Waterfall: Usually there are only a few milestones that need to be met, the milestones are measured in months, and these milestones are expected to be missed. Little work is ever done to improve estimates, and poor estimators can hide behind high estimates.
    Scrum: Stories must be delivered every iteration, milestones are measured in hours, and the team is expected to figure out why their estimates were wrong, even when they were under. Repeated misses can get the entire team fired.

    Waterfall: Partially completed work is normal.
    Scrum: Partially completed work doesn't count.

    Waterfall: Nobody knows the task you're working on.
    Scrum: Everyone knows what you're working on, whether or not you're making progress, and how much longer you think it's going to take, in hours.

    Waterfall: Little requirement to show working code. Prototypes are ok.
    Scrum: Working code must be shown each iteration. No smoke and mirrors allowed.

    Waterfall: Testing is done in lengthy cycles at the end of development. Developers aren't held accountable.
    Scrum: Testing is part of the team. If the testers don't accept the story as complete, the team can't count it. Complete means that the story's functionality works as designed. The team can't have any open defects on the story.

    Waterfall: Velocity is rarely truly measured and difficult to evaluate.
    Scrum: Velocity is integral to the process, can be seen at a glance, and everyone in the company knows what it is.

    Waterfall: A business analyst writes requirements. Designers mock up screens. Developers hide behind "I did it just like the spec doc told me to and made the screen exactly like the picture."
    Scrum: Developers are expected to collaborate in real time. If a design is bad or lacks needed details, the developers are required to get it right in the iteration, because all software must be functional. Designers and Business Analysts are part of the team and must do their work in iterations slightly ahead of the developers.

    Waterfall: Upper Management is often surprised. "You told me things were going well two months ago!"
    Scrum: Management receives updates at the end of every iteration showing them exactly what the team did and how that compares to what is remaining in the backlog. Managers know every iteration what their money is buying.

    Waterfall: Status meetings are rare or don't occur. Email is a primary form of communication.
    Scrum: Teams coordinate every single day with each other and use other high-bandwidth communication channels to make sure they're making progress. Email is used only as a last resort. Instead, team members stand up, walk to each other, and talk, face to face. If that's not possible, they pick up the phone.

    Waterfall: If someone asks what happened, it's at the end of a lengthy development cycle measured in months, and nobody really knows why it happened.
    Scrum: Someone asks what happened every iteration. The team talks about what happened, and then adapts to make sure that what happened either never happens again or happens every time.

    That's probably enough for now. As you can see, a lot is required of Scrum teams!

    One of the key differences in Scrum is that the burden for many activities is shifted to a group of people who share responsibility, instead of a single person having responsibility. This is a very good thing, since small groups usually come up with better and more insightful work than single individuals. This shift also results in better velocity. Team members can take vacations and the rest of the team simply picks up the slack. With Waterfall, if a key team member takes a vacation, delays can ensue. Scrum requires much more out of every team member and as a result, Scrum teams outperform non-Scrum teams working 60 hour weeks.

    Recommended Reading

    Everyone considering Scrum should read Mike Cohn's excellent book, User Stories Applied.

    Technorati Tags: Agile,Scrum,Waterfall

    Read the article

  • JavaOne 2011: Content review process and Tips for submissions

    - by arungupta
    The Technical Sessions, Birds-of-a-Feather sessions, Panels, and Hands-on Labs (basically all the content delivered at JavaOne) form the backbone of the conference. At this year's JavaOne conference you'll have access to the rock star speakers, the ability to engage with luminaries in the hallways, and have a beer (or 2) with community peers in designated areas. Even though the conference is Oct 2-6, 2011, and will be bigger and better than last year's conference, the Call for Papers submission and review/selection evaluation started much earlier.

    In previous years, I've participated in the review process, and this year I was honored to serve as co-lead for the "Enterprise Service Architecture and Cloud" track with Ludovic Champenois. We had a stellar review team with an equal mix of Oracle and external community reviewers. The review process is very overwhelming, with the reviewers going through multiple voting iterations on each submission in order to ensure that the selected content is the BEST of the submitted lot. Our ultimate goal was to ensure that the content best represented the track and, most importantly, would draw interest and excitement from attendees. As always, the number and quality of submissions were just superb, making for a truly challenging (and rewarding) experience for the reviewers. As co-lead I tried to ensure that I applied a fair and balanced process in the evaluation of content in my track. Here are some key steps followed by all track leads:

    - Vote on sessions - Each reviewer is required to vote on the sessions on a scale of 1-5 and also provide a justifying comment.
    - Create buckets - Divide the submissions into different buckets to ensure a fair representation of different topics within a track. This ensures that if a particular bucket got higher votes then the track is not exclusively skewed towards it.
    - Top 7 - The review committee provides a list of the top 7 talks that can be used in the promotional material by the JavaOne team. Generally these talks are easy to identify and a consensus is reached upon them fairly quickly.
    - First cut - Each track is allocated a total number of sessions (including panels), BoFs, and Hands-on Labs that can be approved. The track leads then start creating the first cut of the approvals using the cast votes coupled with their prior experience in the subject matter. In our case, Ludo and I have been attending/speaking at JavaOne (and other popular Java-focused conferences) for double-digit years.
    - The Grind - The first cut is then refined and refined and refined using multiple selection criteria such as sorting on the bucket, speaker quality, topic popularity, cumulative vote total, and individual vote scale. The sessions that don't make the cut are reviewed again as well, to see whether any of them needs to replace a selected session as a potential alternate.

    I would like to thank the entire Java community for all the submissions, and many thanks to the reviewers who spent countless hours reading each abstract, voting on them, and helping us refine the list. I think approximately 3-4 hours cumulative were spent on each submission to reach an evaluation, especially for the borderline cases. We gave our recommendations to the JavaOne Program Committee Chairperson (Sharat Chander) and accept/decline notifications should show up in submitter inboxes in the next few weeks.
    Here are some points to keep in mind when submitting a session to JavaOne next time:

    - JavaOne is a technology-focused conference, so any product, marketing, or seemingly marketish talks are put at the bottom of the list. Oracle OpenWorld and Oracle Develop are better options for submitting product-specific talks.
    - Make your title catchy. Remember, the attendees are more likely to read the abstract if they like the title.
    - We try our best to recategorize a talk to a different track if it needs it, but please ensure that you are filing in the right track to have all the right eyeballs looking at it. Also, it does not hurt to mark an alternate track if your talk meets the criteria.
    - Make sure to coordinate within your team before the submission - multiple sessions from the same team or company do not ensure that the best speaker is picked. In such cases we rely upon your "google presence" and/or the review committee's prior knowledge of the speaker.
    - The reviewers may not know you or your product at all, and you get 750 characters to pitch your idea. Make sure to use all of them, to the last 750th character.
    - Make sure to read your abstract multiple times to ensure that you are giving all the relevant information. Think through your presentation and see if you are leaving out any important aspects. Also look at whether the abstract has any redundant information that will not be required by the reviewers.
    - There are additional sections that allow you to share information about the speaker and the presentation summary. Use them to blow your own horn and add any other relevant details. Please don't say "call me at xxx-xxx-xxxx to find out the details" :-)

    The review committee enjoyed reviewing the submissions and we certainly hope you'll have a great time attending them. Happy JavaOne!

    Read the article

  • PASS: 2013 Summit Location

    - by Bill Graziano
    HQ recently posted a brief update on our search for a location for 2013.  It includes links to posts by four Board members and two community members. I’d like to add my thoughts to the mix and ask you a question.  But I can’t give you a real understanding without telling you some history first. So far we’ve had the Summit in Chicago, San Francisco, Orlando, Dallas, Denver and Seattle.  Each has a little different feel and distinct memories.  I enjoyed getting drinks by the pool in Orlando after the sessions ended.  I didn’t like that our location in Dallas was so far away from all the nightlife.  Denver was in downtown but we had real challenges with hotels.  I enjoyed the different locations.  I always enjoyed the announcement during the third keynote with the location of the next Summit. There are two big events that impacted my thinking on the Summit location.  The first was our transition to the new management company in early 2007.  The event that September in Denver was put on with a six month planning cycle by a brand new headquarters staff.  It wasn’t perfect but came off much better than I had dared to hope.  It also moved us out of the cookie cutter conferences that we used to do into a model where we have a lot more control.  I think you’ll all agree that the production values of our last few Summits have been fantastic.  That Summit also led to our changing relationship with Microsoft.  Microsoft holds two seats on the PASS Board.  All the PASS Board members face the same challenge: we all have full-time jobs and PASS comes in second place professionally (or sometimes further back).  Starting in 2008 we were assigned a liaison from Microsoft that had a much larger block of time to coordinate with us.  That changed everything between PASS and Microsoft.  Suddenly we were talking to product marketing, Microsoft PR, their event team, the Tech*Ed team, the education division, their user group team and their field sales team – locally and internationally.  We strengthened our relationship with CSS, SQLCAT and the engineering teams.  We had exposure at the executive level that we’d never had before.  And their level of participation at the Summit changed from under 100 people to 400-500 people.  I think those 400+ Microsoft employees have value at a conference on Microsoft SQL Server.  For the first time, Seattle had a real competitive advantage over other cities. I’m one that looked very hard at staying in Seattle for a long, long time.  I think those Microsoft engineers have value to our attendees.  I think the increased support that Microsoft can provide when we’re in Seattle has value to our attendees.  But that doesn’t tell the whole story.  There’s a significant (and vocal!) percentage of our membership that wants the Summit outside Seattle.  Post-2007 PASS doesn’t know what it’s like to have a Summit outside of Seattle.  I think until we have a Summit in another city we won’t really know the trade-offs. I think a model where we move every third or every other year is interesting.  But until we have another Summit outside Seattle and we can evaluate the logistics and how important it is to have depth and variety in our Microsoft participation we won’t really know. Another benefit that comes with a move is variety or diversity.  I learn more when I’m exposed to new things and new people.  I believe that moving the Summit will give a different set of people an opportunity to attend. 
Grant Fritchey writes “It seems that the board is leaning, extremely heavily, towards making it a permanent fixture in Seattle.”  I don’t believe that’s true.  I know there was discussion of that earlier but I don’t believe it’s true now. And that brings me to my question.  Do we announce the city now or do we wait until the 2012 Summit?  I’m happy to announce Seattle vs. not-Seattle as soon as we sign the contract.  But I’d like to leave the actual city announcement until the 2011 Summit.  I like the drama and mystery of it.  I also like that it doesn’t give you a reason to skip a Summit and wait for the next one if it’s closer or back in Seattle.  The other side of the coin is that your planning is easier if you know where it is.  What do you think?

    Read the article

  • 2D Platformer Collision Handling

    - by defender-zone
    Hello, everyone! I am trying to create a 2D platformer (Mario-type) game and I am having some issues with handling collisions properly. I am writing this game in C++, using SDL for input, image loading, font loading, etcetera. I am also using OpenGL via the FreeGLUT library in conjunction with SDL to display graphics. My method of collision detection is AABB (Axis-Aligned Bounding Box), which is really all I need to start with. What I need is an easy way to both detect which side the collision occurred on and handle the collisions properly. So, basically, if the player collides with the top of the platform, reposition him to the top; if there is a collision to the sides, reposition the player back to the side of the object; if there is a collision to the bottom, reposition the player under the platform.

    I have tried many different ways of doing this, such as trying to find the penetration depth and repositioning the player backwards by the penetration depth. Sadly, nothing I've tried seems to work correctly. Player movement ends up being very glitchy and repositions the player when I don't want it to. Part of the reason is probably because I feel like this is something so simple but I'm over-thinking it. If anyone thinks they can help, please take a look at the code below and help me try to improve on this if you can. I would like to refrain from using a library to handle this (as I want to learn on my own) or something like the SAT (Separating Axis Theorem) if at all possible. Thank you in advance for your help!

    void world1Level1CollisionDetection() { for(int i; i < blocks; i++) { if (de2dCheckCollision(ball,block[i],0.0f,0.0f)==true) { int up = 0; int left = 0; int right = 0; int down = 0; if(ball.coords[0] < block[i].coords[0] && block[i].coords[0] < ball.coords[2] && ball.coords[2] < block[i].coords[2]) { left = 1; } if(block[i].coords[0] < ball.coords[0] && ball.coords[0] < block[i].coords[2] && block[i].coords[2] < ball.coords[2]) { right = 1; } if(ball.coords[1] < block[i].coords[1] && block[i].coords[1] < ball.coords[3] && ball.coords[3] < block[i].coords[3]) { up = 1; } if(block[i].coords[1] < ball.coords[1] && ball.coords[1] < block[i].coords[3] && block[i].coords[3] < ball.coords[3]) { down = 1; } cout << left << ", " << right << ", " << up << ", " << down << ", " << endl; if (left == 1) { ball.coords[0] = block[i].coords[0] - 16.0f; ball.coords[2] = block[i].coords[0] - 0.0f; } if (right == 1) { ball.coords[0] = block[i].coords[2] + 0.0f; ball.coords[2] = block[i].coords[2] + 16.0f; } if (down == 1) { ball.coords[1] = block[i].coords[3] + 0.0f; ball.coords[3] = block[i].coords[3] + 16.0f; } if (up == 1) { ball.yspeed = 0.0f; ball.gravity = 0.0f; ball.coords[1] = block[i].coords[1] - 16.0f; ball.coords[3] = block[i].coords[1] - 0.0f; } } if (de2dCheckCollision(ball,block[i],0.0f,0.0f)==false) { ball.gravity = -0.5f; } } }

    To explain what some of this code means: the blocks variable is basically an integer that is storing the amount of blocks, or platforms. I am checking all of the blocks using a for loop, and the number that the loop is currently on is represented by integer i. The coordinate system might seem a little weird, so that's worth explaining.

    coords[0] represents the x position (left) of the object (where it starts on the x axis).
    coords[1] represents the y position (top) of the object (where it starts on the y axis).
    coords[2] represents the width of the object plus coords[0] (right).
    coords[3] represents the height of the object plus coords[1] (bottom).
de2dCheckCollision performs an AABB collision detection. Up is negative y and down is positive y, as it is in most games. Hopefully I have provided enough information for someone to help me successfully. If there is something I left out that might be crucial, let me know and I'll provide the necessary information. Finally, for anyone who can help, providing code would be very helpful and much appreciated. Thank you again for your help!
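    For reference, here is what I understand the "resolve along the axis of least penetration" approach to look like - a minimal sketch, where the Object struct is just a stand-in for my ball/block types and de2dCheckCollision() is assumed to have already reported an overlap. Is this the right idea?

#include <cmath>

// Hypothetical stand-in for the ball/block objects above.
struct Object {
    float coords[4]; // 0 = left, 1 = top, 2 = right, 3 = bottom
    float yspeed = 0.0f;
    float gravity = 0.0f;
};

// Push 'ball' out of 'blk' along only the axis of least penetration.
void resolveCollision(Object& ball, const Object& blk)
{
    float pushLeft  = ball.coords[2] - blk.coords[0]; // move ball left by this
    float pushRight = blk.coords[2] - ball.coords[0]; // move ball right by this
    float pushUp    = ball.coords[3] - blk.coords[1]; // move ball up by this
    float pushDown  = blk.coords[3] - ball.coords[1]; // move ball down by this

    // Signed minimal displacement on each axis.
    float dx = (pushLeft < pushRight) ? -pushLeft : pushRight;
    float dy = (pushUp   < pushDown)  ? -pushUp   : pushDown;

    // Resolving both axes at once is what usually causes glitchy snapping,
    // so only the shallower axis is corrected.
    if (std::fabs(dx) < std::fabs(dy)) {   // side collision
        ball.coords[0] += dx;
        ball.coords[2] += dx;
    } else {                               // top/bottom collision
        ball.coords[1] += dy;
        ball.coords[3] += dy;
        if (dy < 0.0f) {                   // landed on top of the block
            ball.yspeed  = 0.0f;
            ball.gravity = 0.0f;
        }
    }
}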

    Read the article

  • concurrency::index<N> from amp.h

    - by Daniel Moth
    Overview C++ AMP introduces a new template class index<N>, where N can be any value greater than zero, that represents a unique point in N-dimensional space, e.g. if N=2 then an index<2> object represents a point in 2-dimensional space. This class is essentially a coordinate vector of N integers representing a position in space relative to the origin of that space. It is ordered from most-significant to least-significant (so, if the 2-dimensional space is rows and columns, the first component represents the rows). The underlying type is a signed 32-bit integer, and component values can be negative. The rank field returns N. Creating an index The default parameterless constructor returns an index with each dimension set to zero, e.g. index<3> idx; //represents point (0,0,0) An index can also be created from another index through the copy constructor or assignment, e.g. index<3> idx2(idx); //or index<3> idx2 = idx; To create an index representing something other than 0, you call its constructor as per the following 4-dimensional example: int temp[4] = {2,4,-2,0}; index<4> idx(temp); Note that there are convenience constructors (that don’t require an array argument) for creating index objects of rank 1, 2, and 3, since those are the most common dimensions used, e.g. index<1> idx(3); index<2> idx(3, 6); index<3> idx(3, 6, 12); Accessing the component values You can access each component using the familiar subscript operator, e.g. One-dimensional example: index<1> idx(4); int i = idx[0]; // i=4 Two-dimensional example: index<2> idx(4,5); int i = idx[0]; // i=4 int j = idx[1]; // j=5 Three-dimensional example: index<3> idx(4,5,6); int i = idx[0]; // i=4 int j = idx[1]; // j=5 int k = idx[2]; // k=6 Basic operations Once you have your multi-dimensional point represented in the index, you can now treat it as a single entity, including performing common operations between it and an integer (through operator overloading): -- (pre- and post-decrement), ++ (pre- and post-increment), %=, *=, /=, +=, -=, %, *, /, +, -. There are also operator overloads for operations between index objects, i.e. ==, !=, +=, -=, +, -. Here is an example (where no assertions are broken): index<2> idx_a; index<2> idx_b(0, 0); index<2> idx_c(6, 9); _ASSERT(idx_a.rank == 2); _ASSERT(idx_a == idx_b); _ASSERT(idx_a != idx_c); idx_a += 5; idx_a[1] += 3; idx_a++; _ASSERT(idx_a != idx_b); _ASSERT(idx_a == idx_c); idx_b = idx_b + 10; idx_b -= index<2>(4, 1); _ASSERT(idx_a == idx_b); Usage You'll most commonly use index<N> objects to index into data types that we'll cover in future posts (namely array and array_view). Also when we look at the new parallel_for_each function we'll see that an index<N> object is the single parameter to the lambda, representing the (multi-dimensional) thread index… In the next post we'll go beyond being able to represent an N-dimensional point in space, and we'll see how to define the N-dimensional space itself through the extent<N> class. Comments about this post by Daniel Moth welcome at the original blog.
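    As a tiny peek ahead at that parallel_for_each remark (array_view, extent and parallel_for_each each get their own posts, so treat this as an illustrative sketch rather than the final pattern):

#include <amp.h>
#include <vector>
using namespace concurrency;

void double_all(std::vector<int>& v)
{
    array_view<int, 1> av(static_cast<int>(v.size()), v);
    // The index<1> parameter identifies the point in the 1-dimensional
    // space that this invocation of the lambda is responsible for.
    parallel_for_each(av.extent, [=](index<1> idx) restrict(amp) {
        av[idx] *= 2;
    });
    av.synchronize(); // copy results back into v
}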

    Read the article

  • How customers view and interact with a company

    The Harvard Business Review article written by Rayport and Jaworski is aptly titled “Best Face Forward” because it sheds light on how customers view and interact with a company. In the past, most business interaction with customers took place in a face-to-face meeting where one party would present an item for sale and the other would decide whether to purchase the item. In addition, if there was a problem with a purchased item, the customer would bring the item back to the person who sold it for resolution. One of my earliest examples of witnessing this was when I was around 6 or 7 years old and I was allowed to spend the summer in Tennessee with my Grandparents. My Grandfather had just written a book about the local history of his town and was selling copies to his friends and local bookstores. I still remember he offered to pay me a small commission for every book I helped him sell because I was carrying the books around for him. Every sale he made was face to face with his customers, which allowed him to share his excitement for the book with everyone. In today’s modern world there is less and less human interaction, as the use of computers and other technologies allows us to communicate within seconds even though both parties may be across the globe or just next door. That being said, customers view a company through multiple access points called faces that represent the ability to interact without actually seeing a human face. As a software engineer this is a good and a bad thing, because direct human interaction and technology-based interaction each have good and bad attributes depending on the customer. How organizations coordinate business and IT functions to provide quality service varies based on each individual business and the goals and directives put in place by its management. According to Rayport and Jaworski, the type of interaction used through a particular access point may lend itself to be people-dominant, machine-dominant, or a combination of both. The method by which a company communicates information through an access point is a strategic choice that relates costs and customer outcomes. To simplify this, the choice is based on what can give the customer the best experience interacting with the company when the cost of the interaction is also a factor. I personally see examples of this every day at work. The company website is machine-dominant, with people updating and maintaining information; our group's department is people-dominant, because most of the customer interaction is done at the customer's location and is backed up by machine-based data sources; and our sales/member service department is a hybrid, because employees work in tandem with machines in order to assist customers with signing up or any other issue they may have. The positive and negative aspects of human and machine interfaces are a key factor in deciding which interface, or combination of the two, to use when allowing customers to access a company. Rayport and Jaworski also used MIT professor Erik Brynjolfsson's preliminary catalog of human and machine strengths. He stated that humans outperform machines in judgment, pattern recognition, exception processing, insight, and creativity. I have found this to be true based on the example of how sales and member service reps at my company handle a multitude of questions and various situations with a lot of unknown variables.
A machine interface could never effectively handle these scenarios because there are too many variables to consider, and it would not have the built-in logic to process each customer’s claims and needs. In addition, he also stated that machines outperform humans in collecting, storing, transmitting and routine processing. An example of this would be my employer’s website. Customers can simply go online and purchase a product without even talking to a sales or member services representative. The information is then stored in a database so that the customer can always go back and review their order, and access their selected services. A human, no matter how smart they are, would never be able to keep track of hundreds of thousands of customers, let alone know what they purchased or how much they paid. In today’s technology-driven economy, every company must offer its customers multiple methods of accessibility in order to survive. The more of an opportunity a company has to create a positive experience for its customers, in my opinion, the more likely the customer will return to that company again. I have noticed this with my personal shopping habits and experiences. References Rayport, J., & Jaworski, B. (2004). Best Face Forward. Harvard Business Review, 82(12), 47-58. Retrieved from Business Source Complete database.

    Read the article

  • Best Practices - Dynamic Reconfiguration

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (formerly named Logical Domains) Overview of dynamic Reconfiguration Oracle VM Server for SPARC supports Dynamic Reconfiguration (DR), making it possible to add or remove resources to or from a domain (virtual machine) while it is running. This is extremely useful because resources can be shifted to or from virtual machines in response to load conditions without having to reboot or interrupt running applications. For example, if an application requires more CPU capacity, you can add CPUs to improve performance, and remove them when they are no longer needed. You can even use Dynamic Resource Management (DRM) policies that automatically add and remove CPUs based on domain load. How it works (in broad general terms) Dynamic Reconfiguration is done in coordination with Solaris, which recognises a hypervisor request to change its virtual machine configuration and responds appropriately. In essence, Solaris receives a message saying "you now have 16 more CPUs numbered 16 to 31" or "8GB more RAM starting at address X" or "here's a new network or disk device - have fun with it". These actions take very little time. Solaris can then start using the new resource. In the case of added CPUs, that means dispatching processes and potentially binding interrupts to the new CPUs. For memory, Solaris adds the new memory pages to its "free" list and starts using them. Comparable actions occur with network and disk devices: they are recognised by Solaris and then used. Removing is the reverse process: after receiving the DR message to free specific CPUs, Solaris unbinds interrupts assigned to the CPUs and stops dispatching process threads. That takes very little time.

primary # ldm list
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
primary active -n-cv- SP 16 4G 1.0% 6d 22h 29m
ldom1 active -n---- 5000 16 8G 0.9% 6h 59m
primary # ldm set-core 5 ldom1
primary # ldm list
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
primary active -n-cv- SP 16 4G 0.2% 6d 22h 29m
ldom1 active -n---- 5000 40 8G 0.1% 6h 59m
primary # ldm set-core 2 ldom1
primary # ldm list
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
primary active -n-cv- SP 16 4G 1.0% 6d 22h 29m
ldom1 active -n---- 5000 16 8G 0.9% 6h 59m

Memory pages are vacated by copying their contents to other memory locations and wiping them clean. Solaris may have to swap memory contents to disk if the remaining RAM isn't enough to hold all the contents. For this reason, deallocating memory can take longer on a loaded system. Even on a lightly loaded system it took 7 or 8 seconds to switch the domain below between 8GB and 24GB of RAM.

primary # ldm set-mem 24g ldom1
primary # ldm list
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
primary active -n-cv- SP 16 4G 0.1% 6d 22h 36m
ldom1 active -n---- 5000 16 24G 0.2% 7h 6m
primary # ldm set-mem 8g ldom1
primary # ldm list
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
primary active -n-cv- SP 16 4G 0.7% 6d 22h 37m
ldom1 active -n---- 5000 16 8G 0.3% 7h 7m

What if the device is in use? (this is the anecdote that inspired this blog post) If CPU or memory is being removed, releasing it is pretty straightforward, using the method described above. The resources are released, and Solaris continues with less capacity. It's not as simple with a network or I/O device: you don't want to yank a device out from underneath an application that might be using it.
In the following example, I've added a virtual network device to ldom1 and want to take it away, even though it's been plumbed.

primary # ldm rm-vnet vnet19 ldom1
Guest LDom returned the following reason for failing the operation:
Resource Information
---------------------------------------------------------- -----------------------
/devices/virtual-devices@100/channel-devices@200/network@1 Network interface net1
VIO operation failed because device is being used in LDom ldom1
Failed to remove VNET instance

That's what I call a helpful error message - telling me exactly what was wrong. In this case the problem is easily solved. I know this NIC is seen in the guest as net1, so:

ldom1 # ifconfig net1 down unplumb

Now I can dispose of it, and even the virtual switch I had created for it:

primary # ldm rm-vnet vnet19 ldom1
primary # ldm rm-vsw primary-vsw9

If I had to take the device away disruptively, I could have used ldm rm-vnet -f, but that could disrupt whoever was using it. It's better if that can be avoided. Summary Oracle VM Server for SPARC provides dynamic reconfiguration, which lets you modify a guest domain's CPU, memory and I/O configuration on the fly without a reboot. You can add and remove resources as needed, and even automate this for CPUs by setting up resource policies. Taking things away can be more complicated than giving, especially for devices like disks and networks that may contain application and system state or be involved in a transaction. LDoms and Solaris work cooperatively to coordinate resource allocation and de-allocation in a safe and effective way. For best practices, use dynamic reconfiguration to make the best use of your system's resources.
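A footnote on the DRM policies mentioned above: they are created with the ldm add-policy subcommand. A hedged sketch follows - the property names are from memory and may differ across LDoms releases, so verify against the ldm man page before relying on them:

primary # ldm add-policy vcpu-min=4 vcpu-max=32 util-lower=25 util-upper=75 \
  enable=yes name=ldom1-drm ldom1

With a policy like this, the system would grow ldom1 toward 32 virtual CPUs while utilization stays above 75% and shrink it toward 4 when utilization drops below 25%, automating the manual ldm set-core steps shown earlier.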

    Read the article

  • How is the gimbal lock problem solved using accumulative matrix transformations

    - by Luke San Antonio
    I am reading the online "Learning Modern 3D Graphics Programming" book by Jason L. McKesson. As of now, I am up to the gimbal lock problem and how to solve it using quaternions. However, right here at the Quaternions page, it says: Part of the problem is that we are trying to store an orientation as a series of 3 accumulated axial rotations. Orientations are orientations, not rotations. And orientations are certainly not a series of rotations. So we need to treat the orientation of the ship as an orientation, as a specific quantity. I guess this is the first spot where I start to get confused; the reason is that I don't see the dramatic difference between orientations and rotations. I also don't understand why an orientation cannot be represented by a series of rotations... Also: The first thought towards this end would be to keep the orientation as a matrix. When the time comes to modify the orientation, we simply apply a transformation to this matrix, storing the result as the new current orientation. This means that every yaw, pitch, and roll applied to the current orientation will be relative to that current orientation. Which is precisely what we need. If the user applies a positive yaw, you want that yaw to rotate them relative to where they are currently pointing, not relative to some fixed coordinate system. The concept I understand; however, I don't understand how, if accumulating matrix transformations is a solution to this problem, the code given on the previous page isn't just that. Here's the code:

void display()
{
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClearDepth(1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glutil::MatrixStack currMatrix;
    currMatrix.Translate(glm::vec3(0.0f, 0.0f, -200.0f));
    currMatrix.RotateX(g_angles.fAngleX);
    DrawGimbal(currMatrix, GIMBAL_X_AXIS, glm::vec4(0.4f, 0.4f, 1.0f, 1.0f));
    currMatrix.RotateY(g_angles.fAngleY);
    DrawGimbal(currMatrix, GIMBAL_Y_AXIS, glm::vec4(0.0f, 1.0f, 0.0f, 1.0f));
    currMatrix.RotateZ(g_angles.fAngleZ);
    DrawGimbal(currMatrix, GIMBAL_Z_AXIS, glm::vec4(1.0f, 0.3f, 0.3f, 1.0f));
    glUseProgram(theProgram);
    currMatrix.Scale(3.0, 3.0, 3.0);
    currMatrix.RotateX(-90);
    //Set the base color for this object.
    glUniform4f(baseColorUnif, 1.0, 1.0, 1.0, 1.0);
    glUniformMatrix4fv(modelToCameraMatrixUnif, 1, GL_FALSE, glm::value_ptr(currMatrix.Top()));
    g_pObject->Render("tint");
    glUseProgram(0);
    glutSwapBuffers();
}

To my understanding, isn't what he is doing (modifying a matrix on a stack) considered accumulating matrices, since the author combined all the individual rotation transformations into one matrix which is being stored on the top of the stack? My understanding of a matrix is that it is used to take a point which is relative to an origin (let's say... the model), and make it relative to another origin (the camera). I'm pretty sure this is a safe definition; however, I feel like there is something missing which is blocking me from understanding this gimbal lock problem. One thing that doesn't make sense to me is: if a matrix determines the relative difference between two "spaces," how come a rotation around the Y axis for, let's say, roll, doesn't put the point in "roll space," which can then be transformed once again in relation to this roll... In other words, shouldn't any further transformations to this point be in relation to this new "roll space," and therefore not be relative to the previous "model space," which is what causes the gimbal lock? That's why gimbal lock occurs, right?
It's because we are rotating the object around set X, Y, and Z axes rather than rotating the object around its own, relative axes. Or am I wrong? Since apparently the code I linked isn't an accumulation of matrix transformations, can you please give an example of a solution using this method? So in summary: What is the difference between a rotation and an orientation? Why is the code linked in not an example of accumulation of matrix transformations? What is the real, specific purpose of a matrix, if I had it wrong? How could a solution to the gimbal lock problem be implemented using accumulation of matrix transformations? Also, as a bonus: Why are the transformations after the rotation still relative to "model space"? Another bonus: Am I wrong in the assumption that after a transformation, further transformations will occur relative to the current one? Also, if it wasn't implied, I am using OpenGL, GLSL, C++, and GLM, so examples and explanations in terms of these are greatly appreciated, if not necessary. The more detail the better! Thanks in advance...
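For reference, here is what I currently imagine "accumulating matrix transformations" to mean - a minimal sketch using GLM, assuming a GLM build where glm::rotate takes radians, and not necessarily McKesson's own code. Please correct me if this is not it:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 orientation(1.0f); // persistent; starts as the identity

// Apply this frame's small yaw/pitch/roll deltas (in degrees) about the
// object's CURRENT local axes by post-multiplying into the stored matrix.
void applyLocalRotation(float yawDeg, float pitchDeg, float rollDeg)
{
    orientation = orientation
        * glm::rotate(glm::mat4(1.0f), glm::radians(yawDeg),   glm::vec3(0.0f, 1.0f, 0.0f))
        * glm::rotate(glm::mat4(1.0f), glm::radians(pitchDeg), glm::vec3(1.0f, 0.0f, 0.0f))
        * glm::rotate(glm::mat4(1.0f), glm::radians(rollDeg),  glm::vec3(0.0f, 0.0f, 1.0f));
}
// The model-to-camera matrix would then be built as translation * orientation,
// never re-composed each frame from stored X/Y/Z angles - which, as I understand
// it, is what avoids the gimbal lock.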

    Read the article

  • Managing text-maps in a 2D array to be painted on HTML5 Canvas

    - by weka
    So, I'm making an HTML5 RPG just for fun. The map is a <canvas> (512px width, 352px height | 16 tiles across, 11 tiles top to bottom). I want to know if there's a more efficient way to paint the <canvas>. Here's how I have it right now. How tiles are loaded and painted on map The map is being painted with tiles (32x32) using the Image() object. The image files are loaded through a simple for loop and put into an array called tiles[] to be PAINTED using drawImage(). First, we load the tiles... and here's how it's being done:

// SET UP & DRAW THE MAP TILES
tiles = [];
var loadedImagesCount = 0;
for (x = 0; x <= NUM_OF_TILES; x++) {
    var imageObj = new Image(); // new instance for each image
    imageObj.src = "js/tiles/t" + x + ".png";
    imageObj.onload = function () {
        console.log("Added tile ... " + loadedImagesCount);
        loadedImagesCount++;
        if (loadedImagesCount == NUM_OF_TILES) {
            // Once all tiles are loaded ...
            // We paint the map
            for (y = 0; y <= 15; y++) {
                for (x = 0; x <= 10; x++) {
                    theX = x * 32;
                    theY = y * 32;
                    context.drawImage(tiles[5], theY, theX, 32, 32);
                }
            }
        }
    };
    tiles.push(imageObj);
}

Naturally, when a player starts a game it loads the map they last left off. But for here, it's an all-grass map. Right now, the maps use 2D arrays. Here's an example map.

[[4, 1, 4, 1, 4, 1, 4, 1, 4, 1, 4, 1, 1, 1, 1, 1],
 [1, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 1],
 [13, 13, 13, 13, 1, 1, 1, 1, 13, 13, 13, 13, 13, 13, 13, 1],
 [13, 13, 13, 13, 1, 13, 13, 1, 13, 13, 13, 13, 13, 13, 13, 1],
 [13, 13, 13, 13, 1, 13, 13, 1, 13, 13, 13, 13, 13, 13, 13, 1],
 [13, 13, 13, 13, 1, 13, 13, 1, 13, 13, 13, 13, 13, 13, 13, 1],
 [13, 13, 13, 13, 1, 1, 1, 1, 13, 13, 13, 13, 13, 13, 13, 1],
 [13, 13, 13, 13, 13, 13, 13, 1, 13, 13, 13, 13, 13, 13, 13, 1],
 [13, 13, 13, 13, 13, 11, 11, 11, 13, 13, 13, 13, 13, 13, 13, 1],
 [13, 13, 13, 1, 1, 1, 1, 1, 1, 1, 13, 13, 13, 13, 13, 1],
 [1, 1, 1, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 1, 1, 1]];

I get different maps using a simple if structure. Once the 2D array above is returned, the corresponding number in each array will be painted according to the Image() stored inside tiles[]. Then drawImage() will occur, multiplying x and y by 32 to paint at the correct x-y coordinate. How multiple map switching occurs With my game, maps have five things to keep track of: currentID, leftID, rightID, upID, and bottomID. currentID: The current ID of the map you are on. leftID: What ID of currentID to load when you exit on the left of the current map. rightID: What ID of currentID to load when you exit on the right of the current map. bottomID: What ID of currentID to load when you exit on the bottom of the current map. upID: What ID of currentID to load when you exit on the top of the current map. Something to note: If leftID, rightID, upID, or bottomID are NOT specified, that means they are 0, and the player cannot leave that side of the map. It is merely an invisible blockade. So, once a person exits a side of the map, depending on where they exited... for example, if they exited on the bottom, bottomID will be the number of the map to load and thus be painted on the map. Here's a representational .GIF to help you better visualize: As you can see, sooner or later, with many maps I will be dealing with many IDs. And that can possibly get a little confusing and hectic. The obvious pro is that it loads 176 tiles at a time, refreshes a small 512x352 canvas, and handles one map at a time. The con is that the map IDs, when dealing with many maps, may get confusing at times.
My question Is this an efficient way to store maps (given the usage of tiles), or is there a better way to handle maps? I was thinking along the lines of a giant map. The map-size is big and it's all one 2D array. The viewport, however, is still 512x352 pixels. Here's another .gif I made (for this question) to help visualize: Sorry if you cannot understand my English. Please ask anything you have trouble understanding. Hopefully, I made it clear. Thanks.
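To make the giant-map idea concrete, here's a rough sketch of what I have in mind - bigMap, camX and camY are made-up names, and clamping the camera to the map bounds is assumed to happen elsewhere:

// bigMap is one large 2D array of tile IDs; camX/camY are the camera's
// top-left position in tile units.
function paintViewport(context, bigMap, tiles, camX, camY) {
    var VIEW_COLS = 16, VIEW_ROWS = 11, TILE = 32;
    for (var row = 0; row < VIEW_ROWS; row++) {
        for (var col = 0; col < VIEW_COLS; col++) {
            var id = bigMap[camY + row][camX + col];
            if (id >= 0) { // skip "empty" tiles, as before
                context.drawImage(tiles[id], col * TILE, row * TILE, TILE, TILE);
            }
        }
    }
}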

    Read the article

  • ADF Business Components

    - by Arda Eralp
    ADF Business Components and JDeveloper simplify the development, delivery, and customization of business applications for the Java EE platform. With ADF Business Components, developers aren't required to write the application infrastructure code required by the typical Java EE application to: Connect to the database Retrieve data Lock database records Manage transactions   ADF Business Components addresses these tasks through its library of reusable software components and through the supporting design time facilities in JDeveloper. Most importantly, developers save time using ADF Business Components since the JDeveloper design time makes typical development tasks entirely declarative. In particular, JDeveloper supports declarative development with ADF Business Components to: Author and test business logic in components which automatically integrate with databases Reuse business logic through multiple SQL-based views of data, supporting different application tasks Access and update the views from browser, desktop, mobile, and web service clients Customize application functionality in layers without requiring modification of the delivered application The goal of ADF Business Components is to make the business services developer more productive.   ADF Business Components provides a foundation of Java classes that allow your business-tier application components to leverage the functionality provided in the following areas: Simplifying Data Access Design a data model for client displays, including only necessary data Include master-detail hierarchies of any complexity as part of the data model Implement end-user Query-by-Example data filtering without code Automatically coordinate data model changes with business services layer Automatically validate and save any changes to the database   Enforcing Business Domain Validation and Business Logic Declaratively enforce required fields, primary key uniqueness, data precision-scale, and foreign key references Easily capture and enforce both simple and complex business rules, programmatically or declaratively, with multilevel validation support Navigate relationships between business domain objects and enforce constraints related to compound components   Supporting Sophisticated UIs with Multipage Units of Work Automatically reflect changes made by business service application logic in the user interface Retrieve reference information from related tables, and automatically maintain the information when the user changes foreign-key values Simplify multistep web-based business transactions with automatic web-tier state management Handle images, video, sound, and documents without having to use code Synchronize pending data changes across multiple views of data Consistently apply prompts, tooltips, format masks, and error messages in any application Define custom metadata for any business components to support metadata-driven user interface or application functionality Add dynamic attributes at runtime to simplify per-row state management   Implementing High-Performance Service-Oriented Architecture Support highly functional web service interfaces for business integration without writing code Enforce best-practice interface-based programming style Simplify application security with automatic JAAS integration and audit maintenance "Write once, run anywhere": use the same business service as plain Java class, EJB session bean, or web service   Streamlining Application Customization Extend component functionality after delivery without modifying source 
code Globally substitute delivered components with extended ones without modifying the application   ADF Business Components implements the business service through the following set of cooperating components: Entity object An entity object represents a row in a database table and simplifies modifying its data by handling all data manipulation language (DML) operations for you. These are basically your 1-to-1 representation of a database table. Each table in the database will have 1 and only 1 EO. The EO contains the mapping between columns and attributes. EOs also contain the business logic and validation. These are your core data services. They are responsible for updating, inserting and deleting records. The Attributes tab displays the actual mapping between attributes and columns; the mapping has the following fields: Name: contains the name of the attribute we expose in our data model. Type: defines the data type of the attribute in our application. Column: specifies the column to which we want to map the attribute. Column Type: contains the type of the column in the database.   View object A view object represents a SQL query. You use the full power of the familiar SQL language to join, filter, sort, and aggregate data into exactly the shape required by the end-user task. The attributes in the View Objects are actually coming from the Entity Object. In the end the VO will generate a query, but you basically build a VO by selecting which EOs need to participate in the VO and which attributes of those EOs you want to use. That's why you have the Entity Usage column, so you can see the relation between VO and EO. In the query tab you can clearly see the query that will be generated for the VO. At this stage we don't need it and just use it for informational purposes. In later stages we might use it. Application module An application module is the controller of your data layer. It is responsible for keeping hold of the transaction. It exposes the data model to the view layer. You expose the VOs through the Application Module. This is the abstraction of your data layer which you want to show to the outside world. It defines an updatable data model and top-level procedures and functions (called service methods) for a logical unit of work related to an end-user task. While the base components handle all the common cases through built-in behavior, customization is always possible and the default behavior provided by the base components can be easily overridden or augmented. When you create EOs, a foreign key will be translated into an association in our model. It defines the type of relation and who is the master and child, as well as what the visibility of the association looks like. A similar concept exists to identify relations between view objects. These are called view links. These are almost identical to associations, except that a view link is based upon attributes defined in the view object. It can also be based upon an association. Here's a short summary: Entity Objects: representations of tables Associations: relations between EOs; representations of foreign keys View Objects: logical model View Links: relationships between view objects Application Module: interface to your application

    Read the article

  • "Guiding" a Domain Expert to Retire from Programming

    - by James Kolpack
    I've got a friend who does IT at a local non-profit where they're using a custom web application which is no longer supported by the company who built it. (out of business, support was too expensive, I'm not sure...) Development on this app started around 10+ years ago, so the technologies being harnessed are pretty out of date now - classic ASP using VBScript and SQL Server 2000. The application domain is in the realm of government bookkeeping - so even though the development team is long gone, there are often new requirements for this software. Enter... the domain expert. This is a middle-aged accounting whiz without much (or any?) prior development experience. He studied the pages, code and queries and learned how to ape the style of the original team, which, believe me, is mediocre at best. He's very clever and very tenacious but has no experience in software beyond what he's picked up from this app. Otherwise, he's a pleasant guy to talk to and definitely knows his domain. My friend in IT, and probably his superiors in the company, want him out of the code. They view him as wasting his expertise on coding tasks he shouldn't be doing. My friend got me involved with a few small contracts which I handled without much problem - other than somewhat of a communication barrier with the domain expert. He explained the requirements very quickly, assuming prior knowledge of the domain which I do not have. This is partially his normal style, and I think maybe partly resentment of my involvement. So, I think he feels like the owner of the code and has entrenched himself in a development position. So... his coding technique. One of his latest endeavors was to make a page that only he could reach (theoretically - the security model for the system is wretched) where he can enter a raw SQL query, run it, and save the query to run again later. A report that I worked on had been originally implemented by him using 6 distinct queries, 3 or 4 temp tables to coordinate the data between the queries, and the final result obtained by importing the data from the final query into Access and doing a pivot and some formatting. It worked - well, some of the results were incorrect - but at what cost! (I implemented the report in a single query with at least 1/10th the amount of code.) He edits code in Notepad. He doesn't seem to know about online reference material for the languages. I recently read an article on Dr. Dobb's titled "What Makes Bad Programmers Different" - and instantly thought of our domain expert. From the article: Their code is large, messy, and bug-laden. They have very superficial knowledge of their problem domain and their tools. Their code has a lot of copy/paste and they have very little interest in techniques that reduce it. They fail to account for edge cases, while inefficiently dealing with the general case. They never have time to comment their code or break it into smaller pieces. Empirical evidence plays no role in their decisions. 5.5 out of 6. My friend is wanting me to argue the case to their management - specifically, I got this email from their manager to respond to: ...Also, I need to talk to you about what effect there is from Domain Expert continuing to make edits to the live environment. If that is a problem for you I need to know so I can have his access blocked. Some examples would help. In my opinion, from a technical standpoint, it's dangerous to have him making changes without any oversight.
On the other hand, I'm just doing one-off contracts at this point and don't have much desire to get involved deeply enough that I'm essentially arguing as one of the Bobs from Office Space. I'd like to help my friend out - but I feel like I'm getting in the middle of a political battle. More importantly - if I do get involved and suggest that his editing privileges be removed, it needs to be handled carefully so that he doesn't feel belittled. He is beyond a doubt the foremost expert on this system. I'm hoping this is familiar territory for some other Stack Exchangers, because I'm feeling a little bewildered. How should I respond? Should I argue that he shouldn't be allowed to touch the code? Should I phrase it as "no single developer, no matter how experienced, should be working on production code unchecked"? Should I argue to keep him involved with the code, but with a review process? Should I say "glad I could help, but uh, I'm busy now!"? Other options? Thanks a bunch!

    Read the article

  • Draw bug in 2D player camera

    - by RedShft
    I have just implemented a 2D player camera for my game. Everything works properly except that the player on the screen jitters when moving between tiles. What I mean by jitter is that as the player moves, the camera updates the tileset to be drawn, and if the player steps to the right, the camera snaps that way. The movement is not smooth. I'm guessing this is occurring because of how I implemented the function to calculate the current viewable area, or because of how my draw function works. I'm not entirely sure how to fix this. This camera system was entirely of my own creation, and a first attempt at that, so it's very possible this is not a great way of doing things. My camera class pulls information from the current tileset and calculates the viewable area. Right now I am targeting a resolution of 800 by 600, so I try to fit the appropriate number of tiles for that resolution. My camera class, after calculating the current viewable tileset relative to the player's location, returns a slice of the original tileset to be drawn. This tileset slice is updated every frame according to the player's position. This slice is then passed to the map class, which draws the tiles on screen.

//Map Draw Function
//This draw function currently matches the GID of the tile to its location on the
//PNG file of the tileset and then draws this portion on the screen
void Draw(SDL_Surface* background, int[] _tileSet)
{
    enforce( tilesetImage != null, "Tileset is null!");
    enforce( background != null, "BackGround is null!");
    int i = 0;
    int j = 0;
    SDL_Rect DestR, SrcR;
    SrcR.x = 0;
    SrcR.y = 0;
    SrcR.h = 32;
    SrcR.w = 32;
    foreach(tile; _tileSet)
    {
        //This code is matching the current tiles ID to the tileset image
        SrcR.x = cast(short)(tileWidth * (tile >= 11 ? (tile - ((tile / 10) * 10) - 1) : tile - 1));
        SrcR.y = cast(short)(tileHeight * (tile > 10 ?
(tile / 10) : 0)); //Applying the tile to the surface SDL_BlitSurface( tilesetImage, &SrcR, background, &DestR ); //this keeps track of what column/row we are on i++; if ( i == mapWidth ) { i = 0; j++; } DestR.x = cast(short)(i * tileWidth); DestR.y = cast(short)(j * tileHeight); } } //Camera Class class Camera { private: //A rectangle representing the view area SDL_Rect viewArea; //In number of tiles int viewAreaWidth; int viewAreaHeight; //This is the x and y coordinate of the camera in MAP SPACE IN PIXELS vect2 cameraCoordinates; //The player location in map space IN PIXELS vect2 playerLocation; //This is the players location in screen space; vect2 playerScreenLoc; int playerTileCol; int playerTileRow; int cameraTileCol; int cameraTileRow; //The map is stored in a single array with the tile ids //this corresponds to the index of the starting and ending tile int cameraStartTile, cameraEndTile; //This is a slice of the current tile set int[] tileSetCopy; int mapWidth; int mapHeight; int tileWidth; int tileHeight; public: this() { this.viewAreaWidth = 25; this.viewAreaHeight = 19; this.cameraCoordinates = vect2(0, 0); this.playerLocation = vect2(0, 0); this.viewArea = SDL_Rect (0, 0, 0, 0); this.tileWidth = 32; this.tileHeight = 32; } void Init(vect2 playerPosition, ref int[] tileSet, int mapWidth, int mapHeight ) { playerLocation = playerPosition; this.mapWidth = mapWidth; this.mapHeight = mapHeight; CalculateCurrentCameraPosition( tileSet, playerPosition ); //writeln( "Tile Set Copy: ", tileSetCopy ); //writeln( "Orginal Tile Set: ", tileSet ); } void CalculateCurrentCameraPosition( ref int[] tileSet, vect2 playerPosition ) { playerLocation = playerPosition; playerTileCol = cast(int)((playerLocation.x / tileWidth) + 1); playerTileRow = cast(int)((playerLocation.y / tileHeight) + 1); //writeln( "Player Tile (Column, Row): ","(", playerTileCol, ", ", playerTileRow, ")"); cameraTileCol = playerTileCol - (viewAreaWidth / 2); cameraTileRow = playerTileRow - (viewAreaHeight / 2); CameraMapBoundsCheck(); //writeln( "Camera Tile Start (Column, Row): ","(", cameraTileCol, ", ", cameraTileRow, ")"); cameraStartTile = ( (cameraTileRow - 1) * mapWidth ) + cameraTileCol - 1; //writeln( "Camera Start Tile: ", cameraStartTile ); cameraEndTile = cameraStartTile + ( viewAreaWidth * viewAreaHeight ) * 2; //writeln( "Camera End Tile: ", cameraEndTile ); tileSetCopy = tileSet[cameraStartTile..cameraEndTile]; } vect2 CalculatePlayerScreenLocation() { cameraCoordinates.x = cast(float)(cameraTileCol * tileWidth); cameraCoordinates.y = cast(float)(cameraTileRow * tileHeight); playerScreenLoc = playerLocation - cameraCoordinates + vect2(32, 32);; //writeln( "Camera Coordinates: ", cameraCoordinates ); //writeln( "Player Location (Map Space): ", playerLocation ); //writeln( "Player Location (Screen Space): ", playerScreenLoc ); return playerScreenLoc; } void CameraMapBoundsCheck() { if( cameraTileCol < 1 ) cameraTileCol = 1; if( cameraTileRow < 1 ) cameraTileRow = 1; if( cameraTileCol + 24 > mapWidth ) cameraTileCol = mapWidth - 24; if( cameraTileRow + 19 > mapHeight ) cameraTileRow = mapHeight - 19; } ref int[] GetTileSet() { return tileSetCopy; } int GetViewWidth() { return viewAreaWidth; } }

    Read the article

  • Efficient way to render tile-based map in Java

    - by Lucius
    Some time ago I posted here because I was having some memory issues with a game I'm working on. That has been pretty much solved thanks to some suggestions here, so I decided to come back with another problem I'm having. Basically, I feel that too much of the CPU is being used when rendering the map. I have a Core i5-2500 processor and when running the game, the CPU usage is about 35% - and I can't accept that that's just how it has to be. This is how I'm going about rendering the map: I have the X and Y coordinates of the player, so I'm not drawing the whole map, just the visible portion of it; The number of visible tiles on screen varies according to the resolution chosen by the player (the CPU usage is 35% here when playing at a resolution of 1440x900); If the tile is "empty", I just skip drawing it (this didn't visibly lower the CPU usage, but reduced the drawing time in about 20ms); The map is composed of 5 layers - for more details; The tiles are 32x32 pixels; And just to be on the safe side, I'll post the code for drawing the game here, although it's as messy and unreadable as it can be T_T (I'll try to make it a little readable) private void drawGame(Graphics2D g2d){ //Width and Height of the visible portion of the map (not of the screen) int visionWidht = visibleCols * TILE_SIZE; int visionHeight = visibleRows * TILE_SIZE; //Since the map can be smaller than the screen, I center it just to be sure int xAdjust = (getWidth() - visionWidht) / 2; int yAdjust = (getHeight() - visionHeight) / 2; //This "deducedX" thing is to move the map a few pixels horizontally, since the player moves by pixels and not full tiles int playerDrawX = listOfCharacters.get(0).getX(); int deducedX = 0; if (listOfCharacters.get(0).currentCol() - visibleCols / 2 >= 0) { playerDrawX = visibleCols / 2 * TILE_SIZE; map_draw_col = listOfCharacters.get(0).currentCol() - visibleCols / 2; deducedX = listOfCharacters.get(0).getXCol(); } //"deducedY" is the same deal as "deducedX", but vertically int playerDrawY = listOfCharacters.get(0).getY(); int deducedY = 0; if (listOfCharacters.get(0).currentRow() - visibleRows / 2 >= 0) { playerDrawY = visibleRows / 2 * TILE_SIZE; map_draw_row = listOfCharacters.get(0).currentRow() - visibleRows / 2; deducedY = listOfCharacters.get(0).getYRow(); } int max_cols = visibleCols + map_draw_col; if (max_cols >= map.getCols()) { max_cols = map.getCols() - 1; deducedX = 0; map_draw_col = max_cols - visibleCols + 1; playerDrawX = listOfCharacters.get(0).getX() - map_draw_col * TILE_SIZE; } int max_rows = visibleRows + map_draw_row; if (max_rows >= map.getRows()) { max_rows = map.getRows() - 1; deducedY = 0; map_draw_row = max_rows - visibleRows + 1; playerDrawY = listOfCharacters.get(0).getY() - map_draw_row * TILE_SIZE; } //map_draw_row and map_draw_col representes the coordinate of the upper left tile on the screen //iterate through all the tiles on screen and draw them - this is what consumes most of the CPU for (int col = map_draw_col; col <= max_cols; col++) { for (int row = map_draw_row; row <= max_rows; row++) { Tile[] tiles = map.getTiles(col, row); for(int layer = 0; layer < tiles.length; layer++){ Tile currentTile = tiles[layer]; boolean shouldDraw = true; //I only draw the tile if it exists and is not empty (id=-1) if(currentTile != null && currentTile.getId() >= 0){ //The layers above 1 can be draw behing or infront of the player according to where it's standing if(layer > 1 && currentTile.getId() >= 0){ if(playerBehind(col, row, layer, listOfCharacters.get(0))){ 
behinds.get(0).add(new int[]{col, row});
//the tiles that are in front of the player won't be drawn right now
shouldDraw = false; } if(shouldDraw){ g2d.drawImage( tiles[layer].getImage(), (col-map_draw_col)*TILE_SIZE - deducedX + xAdjust, (row-map_draw_row)*TILE_SIZE - deducedY + yAdjust, null); } } } } } } There's some more code in this method but nothing relevant to this question. Basically, the biggest problem is that I iterate over around 5000 tiles (at this specific resolution) 60 times each second. I thought about rendering the visible portion of the map once and storing it in a BufferedImage; when the player moved, I would move the whole image the same amount in the opposite direction and then draw the tiles that appeared on the screen. But if I do it like that, I won't be able to have animated tiles (at least I think). That being said, any suggestions?
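For example, here's roughly how I imagine that BufferedImage caching would look inside my panel class - sketch only, where isAnimated() and the rebuild trigger are hypothetical, and visibleCols, visibleRows, TILE_SIZE, map, map_draw_col and map_draw_row are the fields from the code above:

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

// Rebuilt only when the camera crosses a tile boundary, not every frame.
private BufferedImage mapBuffer;

private void rebuildMapBuffer() {
    mapBuffer = new BufferedImage(visibleCols * TILE_SIZE,
                                  visibleRows * TILE_SIZE,
                                  BufferedImage.TYPE_INT_ARGB);
    Graphics2D g = mapBuffer.createGraphics();
    for (int col = 0; col < visibleCols; col++) {
        for (int row = 0; row < visibleRows; row++) {
            Tile[] tiles = map.getTiles(map_draw_col + col, map_draw_row + row);
            for (Tile t : tiles) {
                // isAnimated() is a hypothetical flag on Tile.
                if (t != null && t.getId() >= 0 && !t.isAnimated()) {
                    g.drawImage(t.getImage(), col * TILE_SIZE, row * TILE_SIZE, null);
                }
            }
        }
    }
    g.dispose();
}

// Each frame: one big blit instead of ~5000 drawImage calls, then the few
// animated tiles are drawn per frame on top of it.
private void drawCachedMap(Graphics2D g2d, int pixelOffsetX, int pixelOffsetY) {
    g2d.drawImage(mapBuffer, -pixelOffsetX, -pixelOffsetY, null);
}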

    Read the article

  • MVVM load data during or after ViewModel construction?

    - by mkmurray
    My generic question is, as the title states: is it best to load data during ViewModel construction or afterward through some Loaded event handling? I'm guessing the answer is after construction via some Loaded event handling, but I'm wondering how that is most cleanly coordinated between ViewModel and View. Here are more details about my situation and the particular problem I'm trying to solve: I am using the MVVM Light framework as well as Unity for DI. I have some nested Views, each bound to a corresponding ViewModel. The ViewModels are bound to each View's root control DataContext via the ViewModelLocator idea that Laurent Bugnion has put into MVVM Light. This allows for finding ViewModels via a static resource and for controlling the lifetime of ViewModels via a Dependency Injection framework, in this case Unity. It also allows Expression Blend to see everything in regard to ViewModels and how to bind them. So anyway, I've got a parent View that has a ComboBox databound to an ObservableCollection in its ViewModel. The ComboBox's SelectedItem is also bound (two-way) to a property on the ViewModel. When the selection of the ComboBox changes, this is to trigger updates in other views and subviews. Currently I am accomplishing this via the Messaging system that is found in MVVM Light. This is all working great and as expected when you choose different items in the ComboBox. However, the ViewModel is getting its data during construction time via a series of initializing method calls. This seems to only be a problem if I want to control what the initial SelectedItem of the ComboBox is. Using MVVM Light's messaging system, I currently have it set up so that the setter of the ViewModel's SelectedItem property broadcasts the update, and the other interested ViewModels register for the message in their constructors. It appears I am currently trying to set the SelectedItem via the ViewModel at construction time, before the sub-ViewModels have been constructed and registered. What would be the cleanest way to coordinate the data load and initial setting of SelectedItem within the ViewModel? I really want to stick with putting as little in the View's code-behind as is reasonable. I think I just need a way for the ViewModel to know when stuff has Loaded so that it can then continue to load the data and finalize the setup phase. Thanks in advance for your responses.

    Read the article

  • How can I add a JScrollBar to NavigableImagePanel, which is an image panel with a small navigation vi

    - by Sarah Kho
    Hi, I have the following NavigableImagePanel, it is under BSD license and I found it in the web. What I want to do with this panel is as follow: I want to add a JScrollPane to it in order to show images in their full size and let the users to re-center the image using the small navigation panel. Right now, the panel resize the images to fit them in the current panel size. I want it to load the image in its real size and let users to navigate to different parts of the image using the navigation panel. Source code for the panel: import java.awt.AWTEvent; import java.awt.BorderLayout; import java.awt.Color; import java.awt.Dimension; import java.awt.Graphics; import java.awt.Graphics2D; import java.awt.GraphicsEnvironment; import java.awt.Image; import java.awt.Point; import java.awt.Rectangle; import java.awt.RenderingHints; import java.awt.Toolkit; import java.awt.event.ComponentAdapter; import java.awt.event.ComponentEvent; import java.awt.event.MouseAdapter; import java.awt.event.MouseEvent; import java.awt.event.MouseMotionListener; import java.awt.event.MouseWheelEvent; import java.awt.event.MouseWheelListener; import java.awt.image.BufferedImage; import java.io.File; import java.io.IOException; import java.util.Arrays; import javax.imageio.ImageIO; import javax.swing.JFrame; import javax.swing.JOptionPane; import javax.swing.JPanel; import javax.swing.SwingUtilities; /** * @author pxt * */ public class NavigableImagePanel extends JPanel { /** * <p>Identifies a change to the zoom level.</p> */ public static final String ZOOM_LEVEL_CHANGED_PROPERTY = "zoomLevel"; /** * <p>Identifies a change to the zoom increment.</p> */ public static final String ZOOM_INCREMENT_CHANGED_PROPERTY = "zoomIncrement"; /** * <p>Identifies that the image in the panel has changed.</p> */ public static final String IMAGE_CHANGED_PROPERTY = "image"; private static final double SCREEN_NAV_IMAGE_FACTOR = 0.15; // 15% of panel's width private static final double NAV_IMAGE_FACTOR = 0.3; // 30% of panel's width private static final double HIGH_QUALITY_RENDERING_SCALE_THRESHOLD = 1.0; private static final Object INTERPOLATION_TYPE = RenderingHints.VALUE_INTERPOLATION_BILINEAR; private double zoomIncrement = 0.2; private double zoomFactor = 1.0 + zoomIncrement; private double navZoomFactor = 1.0 + zoomIncrement; private BufferedImage image; private BufferedImage navigationImage; private int navImageWidth; private int navImageHeight; private double initialScale = 0.0; private double scale = 0.0; private double navScale = 0.0; private int originX = 0; private int originY = 0; private Point mousePosition; private Dimension previousPanelSize; private boolean navigationImageEnabled = true; private boolean highQualityRenderingEnabled = true; private WheelZoomDevice wheelZoomDevice = null; private ButtonZoomDevice buttonZoomDevice = null; /** * <p>Defines zoom devices.</p> */ public static class ZoomDevice { /** * <p>Identifies that the panel does not implement zooming, * but the component using the panel does (programmatic zooming method).</p> */ public static final ZoomDevice NONE = new ZoomDevice("none"); /** * <p>Identifies the left and right mouse buttons as the zooming device.</p> */ public static final ZoomDevice MOUSE_BUTTON = new ZoomDevice("mouseButton"); /** * <p>Identifies the mouse scroll wheel as the zooming device.</p> */ public static final ZoomDevice MOUSE_WHEEL = new ZoomDevice("mouseWheel"); private String zoomDevice; private ZoomDevice(String zoomDevice) { this.zoomDevice = zoomDevice; } public String 
toString() { return zoomDevice; } } //This class is required for high precision image coordinates translation. private class Coords { public double x; public double y; public Coords(double x, double y) { this.x = x; this.y = y; } public int getIntX() { return (int)Math.round(x); } public int getIntY() { return (int)Math.round(y); } public String toString() { return "[Coords: x=" + x + ",y=" + y + "]"; } } private class WheelZoomDevice implements MouseWheelListener { public void mouseWheelMoved(MouseWheelEvent e) { Point p = e.getPoint(); boolean zoomIn = (e.getWheelRotation() < 0); if (isInNavigationImage(p)) { if (zoomIn) { navZoomFactor = 1.0 + zoomIncrement; } else { navZoomFactor = 1.0 - zoomIncrement; } zoomNavigationImage(); } else if (isInImage(p)) { if (zoomIn) { zoomFactor = 1.0 + zoomIncrement; } else { zoomFactor = 1.0 - zoomIncrement; } zoomImage(); } } } private class ButtonZoomDevice extends MouseAdapter { public void mouseClicked(MouseEvent e) { Point p = e.getPoint(); if (SwingUtilities.isRightMouseButton(e)) { if (isInNavigationImage(p)) { navZoomFactor = 1.0 - zoomIncrement; zoomNavigationImage(); } else if (isInImage(p)) { zoomFactor = 1.0 - zoomIncrement; zoomImage(); } } else { if (isInNavigationImage(p)) { navZoomFactor = 1.0 + zoomIncrement; zoomNavigationImage(); } else if (isInImage(p)) { zoomFactor = 1.0 + zoomIncrement; zoomImage(); } } } } /** * <p>Creates a new navigable image panel with no default image and * the mouse scroll wheel as the zooming device.</p> */ public NavigableImagePanel() { setOpaque(false); addComponentListener(new ComponentAdapter() { public void componentResized(ComponentEvent e) { if (scale > 0.0) { if (isFullImageInPanel()) { centerImage(); } else if (isImageEdgeInPanel()) { scaleOrigin(); } if (isNavigationImageEnabled()) { createNavigationImage(); } repaint(); } previousPanelSize = getSize(); } }); addMouseListener(new MouseAdapter() { public void mousePressed(MouseEvent e) { if (SwingUtilities.isLeftMouseButton(e)) { if (isInNavigationImage(e.getPoint())) { Point p = e.getPoint(); displayImageAt(p); } } } public void mouseClicked(MouseEvent e){ if (e.getClickCount() == 2) { resetImage(); } } }); addMouseMotionListener(new MouseMotionListener() { public void mouseDragged(MouseEvent e) { if (SwingUtilities.isLeftMouseButton(e) && !isInNavigationImage(e.getPoint())) { Point p = e.getPoint(); moveImage(p); } } public void mouseMoved(MouseEvent e) { //we need the mouse position so that after zooming //that position of the image is maintained mousePosition = e.getPoint(); } }); setZoomDevice(ZoomDevice.MOUSE_WHEEL); } /** * <p>Creates a new navigable image panel with the specified image * and the mouse scroll wheel as the zooming device.</p> */ public NavigableImagePanel(BufferedImage image) throws IOException { this(); setImage(image); } private void addWheelZoomDevice() { if (wheelZoomDevice == null) { wheelZoomDevice = new WheelZoomDevice(); addMouseWheelListener(wheelZoomDevice); } } private void addButtonZoomDevice() { if (buttonZoomDevice == null) { buttonZoomDevice = new ButtonZoomDevice(); addMouseListener(buttonZoomDevice); } } private void removeWheelZoomDevice() { if (wheelZoomDevice != null) { removeMouseWheelListener(wheelZoomDevice); wheelZoomDevice = null; } } private void removeButtonZoomDevice() { if (buttonZoomDevice != null) { removeMouseListener(buttonZoomDevice); buttonZoomDevice = null; } } /** * <p>Sets a new zoom device.</p> * * @param newZoomDevice specifies the type of a new zoom device. 
*/ public void setZoomDevice(ZoomDevice newZoomDevice) { if (newZoomDevice == ZoomDevice.NONE) { removeWheelZoomDevice(); removeButtonZoomDevice(); } else if (newZoomDevice == ZoomDevice.MOUSE_BUTTON) { removeWheelZoomDevice(); addButtonZoomDevice(); } else if (newZoomDevice == ZoomDevice.MOUSE_WHEEL) { removeButtonZoomDevice(); addWheelZoomDevice(); } } /** * <p>Gets the current zoom device.</p> */ public ZoomDevice getZoomDevice() { if (buttonZoomDevice != null) { return ZoomDevice.MOUSE_BUTTON; } else if (wheelZoomDevice != null) { return ZoomDevice.MOUSE_WHEEL; } else { return ZoomDevice.NONE; } } //Called from paintComponent() when a new image is set. private void initializeParams() { double xScale = (double)getWidth() / image.getWidth(); double yScale = (double)getHeight() / image.getHeight(); initialScale = Math.min(xScale, yScale); scale = initialScale; //An image is initially centered centerImage(); if (isNavigationImageEnabled()) { createNavigationImage(); } } //Centers the current image in the panel. private void centerImage() { originX = (int)(getWidth() - getScreenImageWidth()) / 2; originY = (int)(getHeight() - getScreenImageHeight()) / 2; } //Creates and renders the navigation image in the upper let corner of the panel. private void createNavigationImage() { //We keep the original navigation image larger than initially //displayed to allow for zooming into it without pixellation effect. navImageWidth = (int)(getWidth() * NAV_IMAGE_FACTOR); navImageHeight = navImageWidth * image.getHeight() / image.getWidth(); int scrNavImageWidth = (int)(getWidth() * SCREEN_NAV_IMAGE_FACTOR); int scrNavImageHeight = scrNavImageWidth * image.getHeight() / image.getWidth(); navScale = (double)scrNavImageWidth / navImageWidth; navigationImage = new BufferedImage(navImageWidth, navImageHeight, image.getType()); Graphics g = navigationImage.getGraphics(); g.drawImage(image, 0, 0, navImageWidth, navImageHeight, null); } /** * <p>Sets an image for display in the panel.</p> * * @param image an image to be set in the panel */ public void setImage(BufferedImage image) { BufferedImage oldImage = this.image; this.image = image; //Reset scale so that initializeParameters() is called in paintComponent() //for the new image. scale = 0.0; firePropertyChange(IMAGE_CHANGED_PROPERTY, (Image)oldImage, (Image)image); repaint(); } /** * <p>resets an image to the centre of the panel</p> * */ public void resetImage() { BufferedImage oldImage = this.image; this.image = image; //Reset scale so that initializeParameters() is called in paintComponent() //for the new image. 
scale = 0.0; firePropertyChange(IMAGE_CHANGED_PROPERTY, (Image)oldImage, (Image)image); repaint(); } /** * <p>Tests whether an image uses the standard RGB color space.</p> */ public static boolean isStandardRGBImage(BufferedImage bImage) { return bImage.getColorModel().getColorSpace().isCS_sRGB(); } //Converts this panel's coordinates into the original image coordinates private Coords panelToImageCoords(Point p) { return new Coords((p.x - originX) / scale, (p.y - originY) / scale); } //Converts the original image coordinates into this panel's coordinates private Coords imageToPanelCoords(Coords p) { return new Coords((p.x * scale) + originX, (p.y * scale) + originY); } //Converts the navigation image coordinates into the zoomed image coordinates private Point navToZoomedImageCoords(Point p) { int x = p.x * getScreenImageWidth() / getScreenNavImageWidth(); int y = p.y * getScreenImageHeight() / getScreenNavImageHeight(); return new Point(x, y); } //The user clicked within the navigation image and this part of the image //is displayed in the panel. //The clicked point of the image is centered in the panel. private void displayImageAt(Point p) { Point scrImagePoint = navToZoomedImageCoords(p); originX = -(scrImagePoint.x - getWidth() / 2); originY = -(scrImagePoint.y - getHeight() / 2); repaint(); } //Tests whether a given point in the panel falls within the image boundaries. private boolean isInImage(Point p) { Coords coords = panelToImageCoords(p); int x = coords.getIntX(); int y = coords.getIntY(); return (x >= 0 && x < image.getWidth() && y >= 0 && y < image.getHeight()); } //Tests whether a given point in the panel falls within the navigation image //boundaries. private boolean isInNavigationImage(Point p) { return (isNavigationImageEnabled() && p.x < getScreenNavImageWidth() && p.y < getScreenNavImageHeight()); } //Used when the image is resized. private boolean isImageEdgeInPanel() { if (previousPanelSize == null) { return false; } return (originX > 0 && originX < previousPanelSize.width || originY > 0 && originY < previousPanelSize.height); } //Tests whether the image is displayed in its entirety in the panel. private boolean isFullImageInPanel() { return (originX >= 0 && (originX + getScreenImageWidth()) < getWidth() && originY >= 0 && (originY + getScreenImageHeight()) < getHeight()); } /** * <p>Indicates whether the high quality rendering feature is enabled.</p> * * @return true if high quality rendering is enabled, false otherwise. */ public boolean isHighQualityRenderingEnabled() { return highQualityRenderingEnabled; } /** * <p>Enables/disables high quality rendering.</p> * * @param enabled enables/disables high quality rendering */ public void setHighQualityRenderingEnabled(boolean enabled) { highQualityRenderingEnabled = enabled; } //High quality rendering kicks in when when a scaled image is larger //than the original image. In other words, //when image decimation stops and interpolation starts. private boolean isHighQualityRendering() { return (highQualityRenderingEnabled && scale > HIGH_QUALITY_RENDERING_SCALE_THRESHOLD); } /** * <p>Indicates whether navigation image is enabled.<p> * * @return true when navigation image is enabled, false otherwise. */ public boolean isNavigationImageEnabled() { return navigationImageEnabled; } /** * <p>Enables/disables navigation with the navigation image.</p> * <p>Navigation image should be disabled when custom, programmatic navigation * is implemented.</p> * * @param enabled true when navigation image is enabled, false otherwise. 
 */
public void setNavigationImageEnabled(boolean enabled) {
    navigationImageEnabled = enabled;
    repaint();
}

//Used when the panel is resized
private void scaleOrigin() {
    originX = originX * getWidth() / previousPanelSize.width;
    originY = originY * getHeight() / previousPanelSize.height;
    repaint();
}

//Converts the specified zoom level to scale.
private double zoomToScale(double zoom) {
    return initialScale * zoom;
}

/**
 * <p>Gets the current zoom level.</p>
 *
 * @return the current zoom level
 */
public double getZoom() {
    return scale / initialScale;
}

/**
 * <p>Sets the zoom level used to display the image.</p>
 * <p>This method is used in programmatic zooming. The zooming center is
 * the point of the image closest to the center of the panel.
 * After a new zoom level is set the image is repainted.</p>
 *
 * @param newZoom the zoom level used to display this panel's image.
 */
public void setZoom(double newZoom) {
    Point zoomingCenter = new Point(getWidth() / 2, getHeight() / 2);
    setZoom(newZoom, zoomingCenter);
}

/**
 * <p>Sets the zoom level used to display the image, and the zooming center,
 * around which zooming is done.</p>
 * <p>This method is used in programmatic zooming.
 * After a new zoom level is set the image is repainted.</p>
 *
 * @param newZoom the zoom level used to display this panel's image.
 * @param zoomingCenter the point, in panel coordinates, around which zooming is done.
 */
public void setZoom(double newZoom, Point zoomingCenter) {
    //Clamp the zooming center to the image boundaries.
    Coords imageP = panelToImageCoords(zoomingCenter);
    if (imageP.x < 0.0) {
        imageP.x = 0.0;
    }
    if (imageP.y < 0.0) {
        imageP.y = 0.0;
    }
    if (imageP.x >= image.getWidth()) {
        imageP.x = image.getWidth() - 1.0;
    }
    if (imageP.y >= image.getHeight()) {
        imageP.y = image.getHeight() - 1.0;
    }

    Coords correctedP = imageToPanelCoords(imageP);
    double oldZoom = getZoom();
    scale = zoomToScale(newZoom);
    Coords panelP = imageToPanelCoords(imageP);

    originX += (correctedP.getIntX() - (int)panelP.x);
    originY += (correctedP.getIntY() - (int)panelP.y);

    firePropertyChange(ZOOM_LEVEL_CHANGED_PROPERTY, new Double(oldZoom),
        new Double(getZoom()));
    repaint();
}

/**
 * <p>Gets the current zoom increment.</p>
 *
 * @return the current zoom increment
 */
public double getZoomIncrement() {
    return zoomIncrement;
}

/**
 * <p>Sets a new zoom increment value.</p>
 *
 * @param newZoomIncrement new zoom increment value
 */
public void setZoomIncrement(double newZoomIncrement) {
    double oldZoomIncrement = zoomIncrement;
    zoomIncrement = newZoomIncrement;
    firePropertyChange(ZOOM_INCREMENT_CHANGED_PROPERTY,
        new Double(oldZoomIncrement), new Double(zoomIncrement));
}

//Zooms an image in the panel by repainting it at the new zoom level.
//The current mouse position is the zooming center.
private void zoomImage() {
    Coords imageP = panelToImageCoords(mousePosition);
    double oldZoom = getZoom();
    scale *= zoomFactor;
    Coords panelP = imageToPanelCoords(imageP);

    originX += (mousePosition.x - (int)panelP.x);
    originY += (mousePosition.y - (int)panelP.y);

    firePropertyChange(ZOOM_LEVEL_CHANGED_PROPERTY, new Double(oldZoom),
        new Double(getZoom()));
    repaint();
}

//Zooms the navigation image
private void zoomNavigationImage() {
    navScale *= navZoomFactor;
    repaint();
}

/**
 * <p>Gets the image origin.</p>
 * <p>Image origin is defined as the upper, left corner of the image in
 * the panel's coordinate system.</p>
 *
 * @return the point of the upper, left corner of the image in the panel's
 * coordinate system.
 */
public Point getImageOrigin() {
    return new Point(originX, originY);
}

/**
 * <p>Sets the image origin.</p>
 * <p>Image origin is defined as the upper, left corner of the image in
 * the panel's coordinate system. After a new origin is set, the image is repainted.
 * This method is used for programmatic image navigation.</p>
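The listing breaks off mid-javadoc, so the class declaration, the fields, and the setImageOrigin() body are not shown. For orientation, here is a minimal usage sketch. It assumes the code above belongs to a JPanel subclass named NavigableImagePanel with ZoomDevice as a nested class; both names, and the image file path, are assumptions based on the method signatures, not confirmed by the excerpt.

import java.awt.BorderLayout;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.SwingUtilities;

public class ZoomPanelDemo {
    public static void main(String[] args) throws Exception {
        //Load the image before touching Swing; ImageIO.read is blocking.
        BufferedImage image = ImageIO.read(new File("map.png")); //hypothetical file
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Zoomable image panel demo");
            //NavigableImagePanel is an assumed name for the panel class above.
            NavigableImagePanel panel = new NavigableImagePanel();
            panel.setImage(image);
            //Zoom with the mouse wheel; ZoomDevice is assumed to be nested in the panel class.
            panel.setZoomDevice(NavigableImagePanel.ZoomDevice.MOUSE_WHEEL);
            panel.setNavigationImageEnabled(true); //show the overview thumbnail

            //Programmatic zooming is triggered from a button here because scale
            //is only initialized on the first paint (initializeParams() runs in
            //paintComponent()); calling setZoom() before then would divide by zero.
            JButton zoomIn = new JButton("Zoom 2x");
            zoomIn.addActionListener(e -> panel.setZoom(2.0)); //2.0 = twice the fit-to-panel scale

            frame.add(panel, BorderLayout.CENTER);
            frame.add(zoomIn, BorderLayout.SOUTH);
            frame.setSize(800, 600);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        });
    }
}

The interesting design choice is in setZoom() and zoomImage(): the zooming center is converted to image coordinates before the scale change and back to panel coordinates after it, and originX/originY are shifted by the difference. That shift is exactly what keeps the point under the cursor (or the panel center) stationary while the rest of the image scales around it.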

