Search Results

Search found 11784 results on 472 pages for 'move assignment'.


  • Common SOA Problems by C2B2

    - by JuergenKress
    SOA stands for Service Oriented Architecture and has only really come together as a concrete approach in the last 15 years or so, although the concepts involved have been around for longer. Oracle SOA Suite is based around the Service Component Architecture (SCA) devised by the Open SOA collaboration of companies including Oracle and IBM. SCA, as used in SOA Suite, is designed as a way to crystallise the concepts of SOA into a standard which ensures that SOA principles like the separation of application and business logic are maintained.

    Orchestration or Integration?

    A common thing to see with many people who are beginning either to build a new SOA-based infrastructure, or to move an old system to be service oriented, is confusion about the purpose of SOA technologies like BPEL and enterprise service buses. For a lot of problems, orchestration tools like BPEL or integration tools like an ESB will both do the job and achieve the right objectives; however it's important to remember that, although a hammer can be used to drive a screw into wood, that doesn't mean it's the best way to do it.

    Service Integration is the act of connecting components together at a low level, which usually results in a single external endpoint for you to expose to your customers or other teams within your organisation – a simple product ordering system, for example, might integrate a stock checking service and a payment processing service. Process Orchestration, however, is generally a higher-level approach whereby the (often externally exposed) service endpoints are brought together to track an end-to-end business process. This might take the earlier example of a product ordering service and couple it with a business rules service and a human task to handle edge cases. A good (but not exhaustive) rule of thumb is that integrations performed by an ESB will usually be real-time, whereas process orchestration in a SOA composite might comprise processes which take a certain amount of time to complete, or have to wait pending manual intervention.

    BPEL vs BPMN

    For some, with pre-existing SOA or business process projects, this decision is effectively already made. For those embarking on new projects it's certainly an important consideration for those using Oracle SOA software since, due to the components included in SOA Suite and BPM Suite, the choice of which to buy is determined by what they offer. Oracle SOA Suite has no BPMN engine, whereas BPM Suite has both a BPMN and a BPEL engine. SOA Suite has the ESB component "Mediator", whereas BPM Suite has none. Decisions must be made, therefore, on whether just one or both process modelling languages are to be used. The wrong decision could be costly further down the line. Design for performance: read the complete article here.

    SOA & BPM Partner Community – For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Technorati Tags: C2B2, SOA best practice, SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress

    Read the article

  • Moved to SSD and now getting "the disk drive for / is not ready yet"

    - by dmt0
    I moved my Ubuntu 12.04 install over to an SSD drive. I copied all directories except for the ones most often written to – var, tmp, etc. – and reinstalled grub onto the SSD by booting with a live CD and following the commands in this post: How to move Ubuntu to an SSD. This seemed to work fine, because when I press "e" in the grub menu, I see the expected UUIDs. But right after grub I get:

        could not log bootup: Address already in use
        the disk drive for / is not ready yet or not present

    If I skip, I get the same for /tmp, /run, and other dirs. If I go into manual recovery and do mount -n -o remount,rw / it turns out that everything can mount no problem. Can't get my head around this one. My fstab seems right, grub is right, AHCI in the BIOS is enabled. Why is this happening? What can I do to fix it? When I do drop into a shell from this error and get to mount things manually, how do I get the OS to continue loading? Thank you guys for any ideas you can give me.

    Here's what my fstab looks like right now:

        # <file system> <mount point> <type> <options> <dump> <pass>
        proc                                        /proc          proc        defaults                  0 0
        UUID=67fc8a7a-f1db-485c-88bd-e007c214244f   /              ext4        defaults,noatime,discard  0 1
        # swap was on /dev/sda3 during installation
        UUID=6bc9cd6c-46b7-43a0-bfac-bd04cc26cfb6   none           swap        sw                        0 0
        UUID=7397729b-2125-4b1d-b5eb-28866898d773   /hdd           ext4        errors=remount-ro         0 1
        /hdd/home                                   /home          none        bind                      0 0
        /hdd/run                                    /run           none        bind                      0 0
        /hdd/tmp                                    /tmp           none        bind                      0 0
        /hdd/var                                    /var           none        bind                      0 0
        /dev/scd0                                   /media/cdrom0  udf,iso9660 user,noauto,exec,utf8     0 0

    Output from blkid:

        /dev/sda1: LABEL="System Reserved" UUID="EABC56C1BC568849" TYPE="ntfs"
        /dev/sda2: UUID="7CCC6124CC60D9C2" TYPE="ntfs"
        /dev/sda3: UUID="6bc9cd6c-46b7-43a0-bfac-bd04cc26cfb6" TYPE="swap"
        /dev/sda5: UUID="7397729b-2125-4b1d-b5eb-28866898d773" TYPE="ext4"
        /dev/sdb1: UUID="67fc8a7a-f1db-485c-88bd-e007c214244f" TYPE="ext4"

    Relevant output from fdisk -l:

        Device Boot   Start        End    Blocks  Id  System
        /dev/sdb1      2048  115345407  57671680  83  Linux

    Read the article

  • What are the differences between abstract classes and interfaces, and when to use them

    - by user66662
    Recently I have started to wrap my head around OOP, and I am now at the point where the more I read about the differences between abstract classes and interfaces the more confused I become. So far, neither can be instantiated. Interfaces are more or less structural blueprints that determine the skeleton, and abstracts are different by being able to partially develop code. I would like to learn more about these through my specific situation. Here is a link to my first question if you would like a little more background information: What is a good design model for my new class?

    Here are two classes I created:

        class Ad {
            $title;
            $description;
            $price;
            function get_data($website){ }
            function validate_price(){ }
        }

        class calendar_event {
            $title;
            $description;
            $start_date;
            function get_data($website){
                //guts
            }
            function validate_dates(){
                //guts
            }
        }

    So, as you can see, these classes are almost identical. Not shown here, but there are other functions, like get_zip() and save_to_database(), that are common across my classes. I have also added other classes Cars and Pets which have all the common methods and, of course, properties specific to those objects (mileage, weight, for example). Now I have violated the DRY principle and I am managing and changing the same code across multiple files. I intend on having more classes like boats, horses, or whatever. So is this where I would use an interface or abstract class?

    From what I understand about abstract classes, I would use a super class as a template with all of the common elements built into the abstract class, and then add only the items specifically needed in future classes. For example:

        abstract class content {
            $title;
            $description;
            function get_data($website){ }
            function common_function2() { }
            function common_function3() { }
        }

        class calendar_event extends content {
            $start_date;
            function validate_dates(){ }
        }

    Or would I use an interface and, because these are so similar, create a structure that each of the subclasses is forced to use for integrity reasons, and leave it up to the end developer who fleshes out that class to be responsible for each of the details of even the common functions? My thinking there is that some 'common' functions may need to be tweaked in the future for the needs of their specific class.

    Despite all that above, if you believe I am misunderstanding the what and why of abstracts and interfaces altogether, by all means let a valid answer be to stop thinking in this direction and suggest the proper way to move forward! Thanks!
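
    As an illustration (not part of the original question), a minimal sketch of the two ideas side by side, written in C# purely for contrast and with hypothetical names: the abstract class shares implementation once, while the interface only promises that a method exists.

        // Illustrative sketch only (hypothetical names): an abstract base class shares
        // implementation, while an interface merely declares what implementers must provide.
        public interface IValidatable
        {
            bool Validate();              // no body: every implementer writes its own
        }

        public abstract class Content : IValidatable
        {
            public string Title;
            public string Description;

            // Shared implementation lives once, in the base class.
            public void GetData(string website) { /* common fetching/parsing logic */ }
            public void SaveToDatabase()        { /* common persistence logic */ }

            // Each concrete content type still decides how it validates itself.
            public abstract bool Validate();
        }

        public class CalendarEvent : Content
        {
            public string StartDate;
            public override bool Validate() { /* check dates */ return true; }
        }

        public class Ad : Content
        {
            public decimal Price;
            public override bool Validate() { /* check price */ return Price >= 0; }
        }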

    Read the article

  • SSAO implementation

    - by Irbis
    I am trying to implement SSAO based on this tutorial: link. I use deferred rendering and world coordinates for the shading calculations. When saving the gbuffer, the vertex shader output looks like this:

        worldPosition = vec3(ModelMatrix * vec4(inPosition, 1.0));
        normal = normalize(normalModelMatrix * inNormal);
        gl_Position = ProjectionMatrix * ViewMatrix * ModelMatrix * vec4(inPosition, 1.0);

    Next, for the SSAO calculations, I render the scene as a full-screen quad and save an occlusion parameter in a texture. (Vertex positions in world space: link. Normals in world space: link.)

    SSAO implementation:

        subroutine (RenderPassType)
        void ssao()
        {
            vec2 texCoord = CalcTexCoord();
            vec3 worldPos = texture(texture0, texCoord).xyz;
            vec3 normal = normalize(texture(texture1, texCoord).xyz);
            vec2 noiseScale = vec2(screenSize.x / 4, screenSize.y / 4);
            vec3 rvec = texture(texture2, texCoord * noiseScale).xyz;
            vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
            vec3 bitangent = cross(normal, tangent);
            mat3 tbn = mat3(tangent, bitangent, normal);

            float occlusion = 0.0;
            float radius = 4.0;

            for (int i = 0; i < kernelSize; ++i)
            {
                vec3 pix = tbn * kernel[i];
                pix = pix * radius + worldPos;

                vec4 offset = vec4(pix, 1.0);
                offset = ProjectionMatrix * ViewMatrix * offset;
                offset.xy /= offset.w;
                offset.xy = offset.xy * 0.5 + 0.5;

                float sample_depth = texture(texture0, offset.xy).z;
                float range_check = abs(worldPos.z - sample_depth) < radius ? 1.0 : 0.0;
                occlusion += (sample_depth <= pix.z ? 1.0 : 0.0);
            }

            outputColor = vec4(occlusion, occlusion, occlusion, 1);
        }

    That code gives the following results: camera looking towards -z world space: link; camera looking towards +z world space: link.

    I wonder if it is possible to use world coordinates in the above code? When I move the camera I get different results because world space positions don't change. Can I treat worldPos.z as a linear depth? What should I change to get correct results? I expect white areas in place of occlusion, so the ground should have white areas only near to the object.

    Read the article

  • SOA Summit - Oracle Session Replay

    - by Bruce Tierney
    If you think you missed the most recent Integration Developer News (IDN) "SOA Summit" 2013... good news, you didn't. At least not the replay of the Oracle session titled: Three Solutions for Simplifying Cloud/On-Premises Integration.

    As you will see in the replay below, this session introduces three common reasons for integration complexity:

      • Disparate Toolkits
      • Lack of API Management
      • Rigid, Brittle Infrastructure

    and then the three solutions to these challenges:

      • Unify Cloud and On-premises Integration
      • Enable Multi-channel Development with API Management
      • Plan for the Unexpected – Future Readiness

    The last solution, on future readiness, describes how you can transition from being reactive to new trends, such as the Internet of Things (IoT), by modifying your integration strategy to enable business agility, and how to recognize trends through Fast Data event processing ahead of your competition.

    Oracle SOA Suite customer SFpark's (San Francisco Metropolitan Transit Authority) implementation with API Management is covered as shown in the screenshot to the right. This case study covers the core areas of API Management for partners to build their own applications by leveraging parking availability and real-time pricing, as well as mobile enablement of data integrated by SOA Suite underneath. Download the free SFpark app from the Apple and Android app stores to check it out.

    When looking into the future, the discussion starts with a historical look to better prepare for what comes next. As shown in the image below, one of the next frontiers after mobile and cloud integration is a deeper level of direct "enterprise to customer" interaction. Much of this relates to the Internet of Things. Examples of IoT from the perspective of SOA and integration are also covered in the session. For example, early adopter Turkcell and their tracking of mobile phone users as they move from point A to B to C is shown in the image to the right. As you look into more "smart services" such as Location-Based Services, how "future ready" is your application infrastructure?

    Check out the replay by clicking the video image below to learn about these three challenges and solutions, including how to "future ready" your application infrastructure:

    Read the article

  • Chrome trims the last <li> element in a row [closed]

    - by Paul
    Ok guys, I give up. Here's the code I'm struggling with:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <meta http-equiv="Content-Type" content="text/html;charset=utf-8" />
        <title>Blah</title>
        <style type="text/css">
            #container {
                margin: 0 auto;
                width: 350px;
                border: 1px solid #ccc;
            }
            ul {
                list-style-type: none;
                margin: 0;
                padding: 0;
                text-align: center;
            }
            ul li {
                display: inline;
                padding: 5px;
                margin: 0 1px;
                background-color: lime;
                line-height: 2em;
                /* border: 1px solid red; */
            }
        </style>
        </head>
        <body>
        <div id="container">
            <ul>
                <li>Element A</li>
                <li>Element B</li>
                <li>Element C</li>
                <li>Element D</li>
                <li>Element E</li>
                <li>Element F</li>
            </ul>
        </div>
        </body>
        </html>

    Why the heck does Chrome trim the right side of "Element D" (even though there is enough space to display the whole item), while Firefox and even Internet Explorer render this code properly? It becomes more visible when we apply the commented-out border. In other words, is there a way to tell the browser that I want every single <li> element to be autonomous, and thus to move it to the next row if it doesn't fit entirely in the previous one? Can't wait to see the solution, thanks in advance.

    Read the article

  • Software Tuned to Humanity

    - by Phil Factor
    I learned a great deal from a cynical old programmer who once told me that the ideal length of time for a compiler to do its work was the same time it took to roll a cigarette. For development work, this is oh so true. After intently looking at the editing window for an hour or so, it was a relief to look up, stretch, focus the eyes on something else, and roll the possibly-metaphorical cigarette. This was software tuned to humanity.

    Likewise, a user's perception of the "ideal" time that an application will take to move from frame to frame, to retrieve information, or to process their input has remained remarkably static for about thirty years, at around 200 ms. Anything else appears, and always has, to be either fast or slow. This could explain why commercial applications, unlike games, simulations and communications, aren't noticeably faster now than they were when I started programming in the Seventies. Sure, they do a great deal more, but the SLAs that I negotiated in the 1980s for application performance are very similar to what they are nowadays.

    To prove to myself that this wasn't just some rose-tinted misperception on my part, I cranked up a Z80-based Jonos CP/M machine (1985) in the roof-space. Within 20 seconds from cold, it had loaded Wordstar and I was ready to write. OK, I got it wrong: some things were faster 30 years ago. Sure, I'd now have had all sorts of animations, wizzy graphics, and other comforting features, but it seems a pity that we have used all that extra CPU and memory to increase the scope of what we develop, and the graphical prettiness, but not to speed the processes needed to complete a business procedure.

    Never mind the weight, the response time's great!

    To achieve 200 ms response times on a Z80, or similar, performance considerations influenced everything one did as a developer. If it meant writing an entire application in assembly code, applying every smart algorithm and shortcut imaginable to get the application to perform to spec, then so be it. As a result, I'm a dyed-in-the-wool performance freak and find it difficult to change my habits. Conversely, many developers now seem to feel quite differently. While all will acknowledge that performance is important, it's no longer the virtue it once was, and other factors such as user experience now take precedence. Am I wrong?

    If not, then perhaps we need a new school of development technique to rival Agile, dedicated once again to producing applications that smoke the rear wheels rather than pootle elegantly to the shops; that forgo skeuomorphism, cute animation, or architectural elegance in favor of the smell of hot rubber. I struggle to name an application I use that is truly notable for its blistering performance, and would dearly love one to do my everyday work – just as long as it doesn't go faster than my brain.

    Read the article

  • UK Oracle User Group Event: Trends in Identity Management

    - by B Shashikumar
    As threat levels rise and new technologies such as cloud and mobile computing gain widespread acceptance, security is occupying more and more mindshare among IT executives. To help prepare for the rapidly changing security landscape, the Oracle UK User Group community and our partners at Enline/SENA have put together a User Group event in London on Apr 19 where you can learn more from your industry peers about upcoming trends in identity management. Here are some of the key trends in identity management and security that we predicted at the beginning of last year, and a look at how they have turned out so far. You have to admit that we have a pretty good track record when it comes to forecasting trends in identity management and security.

    Threat levels will grow—and there will be more serious breaches: We have since witnessed breaches of high-value targets like RSA and Epsilon. Most organizations have not done enough to protect against insider threats. Organizations need to look for security solutions to stop user access to applications based on real-time patterns of fraud and for situations in which employees change roles or employment status within a company.

    Cloud computing will continue to grow—and require new security solutions: Cloud computing has since exploded into a dominant secular trend in the industry. Cloud computing continues to present many opportunities like low upfront costs, rapid deployment, etc. But cloud computing also increases policy fragmentation and reduces visibility and control. So organizations require solutions that bridge the security gap between the enterprise and cloud applications to reduce fragmentation and increase control.

    Mobile devices will challenge traditional security solutions: Since that time, we have witnessed a proliferation of mobile devices—combined with increasing numbers of employees bringing their own devices to work (BYOD)—and these trends continue to dissolve the traditional boundaries of the enterprise. This, in turn, requires a holistic approach within an organization that combines strong authentication and fraud protection, externalization of entitlements, and centralized management across multiple applications—and open standards to make all that possible.

    Security platforms will continue to converge: As organizations move increasingly toward vendor consolidation, security solutions are also evolving. Next-generation identity management platforms have best-of-breed features, and must also remain open and flexible to remain viable. As a result, developers need products such as the Oracle Access Management Suite in order to efficiently and reliably build identity and access management into applications—without requiring security experts.

    Organizations will increasingly pursue "business-centric compliance": Privacy and security regulations have continued to increase, so businesses are increasingly looking for solutions that combine strong security and compliance management tools with a business-ready experience for faster, lower-cost implementations.

    If you'd like to hear more about the top trends in identity management and learn how to empower yourself, then join us for the Oracle UK User Group event on Thu Apr 19 in London, where Oracle and Enline/SENA product experts will come together to share security trends, best practices, and solutions for your business. Register here.

    Read the article

  • Is it possible to fulfill the social needs of a human being through a 3D social game like IMVU?

    - by Totty
    (I'm not advertising nor promoting this game; it's just an example of my experience and I would like to have your opinion about the matter if possible.)

    I've started researching "things" about games and I decided to begin playing IMVU because a friend of mine said it's cool. At first it seemed like just another 3D social game, not so cool. But I "tried to like it" and after 1 day I can say I'm addicted to it! Yes; I will explain better.

    About the game: You can go into chat rooms and move to positions. Some positions are like sitting on a sofa or the floor, dancing alone or with a partner, kissing and more in this way. In the free version of the game there is no nudity. You can even listen to music, view YouTube... The 3D graphics are quite low end, so it's not as real as the paid PC games of today.

    About my experience: At first I was going with my friend into chat rooms; they seemed very nice. There were people talking about general stuff, quite like in real life. Well, I began to know some girls (yes, virtual girls controlled by a real girl, I hope!). Things happened:

      • Some girls are just crazy, not like in real life; they make out before even talking.
      • Other girls you can speak to a little bit, then they add you to their friend list. Sometimes they invite you to their virtual places.
      • Some girls have IMVU boyfriends only (but not in reality), and most of them don't even make out in the game, so there is really a level of commitment involved here! But from what my friend told me they last, for him at least, about 3 days...
      • Some others have real and IMVU boyfriends that are the same. Until now I haven't found a girl with a different boyfriend in IMVU and reality, nor multiple boyfriends.
      • There are rooms where the same people find each other every day and speak about general stuff, relationships and so on... They are nice with you, they "feel" you and show caring. This is what amazes me: they treat you like a real human being and as being their friend in the real world (of course it's not always like this).
      • There are jealous girls too, and competitiveness between females, lol. I know you loled!

    This is kind of social. So today I closed the door of my room and played it all day long, and guess what: I didn't feel a need to be with a real person at all. Normally, if I stayed alone for a full day I would get quite crazy...

    So the question is: is it just me who seems able to fulfill my social needs this way, or is there something more? Thanks for your precious time for reading my full question.

    Read the article

  • Creating a shared library that might be used with desktop applications and web projects

    - by dreza
    I have been involved in a number of MVC.NET and C# desktop projects in our company over the last year or so, while also managing to keep my nose poked into other projects (in a read-only learning capacity, of course). From this I've noticed that across the various projects and teams there is a lot of functionality that has been well designed against good interfaces and abstractions. Because we tend to like our own work at times, I noticed a couple of projects had the exact same class and method copied into them, as it had obviously worked on one and so was easily moved to a new project (probably by the same developer who originally wrote it).

    I mentioned this fact in one of the programmer meetings we have occasionally and suggested we pull some of this functionality into a core company library that we can build up over time and use across multiple projects. Everyone agreed and I started looking into this possibility. However, I've come across a stumbling block pretty early on.

    Our team primarily focuses on MVC at the moment and we have projects mainly in 2.0 but are starting to branch to 3.0. We also have a number of desktop applications that might benefit from some shared classes and basic helper methods. Initially, when creating this DLL, I included some shared classes that could be used across any project type (web, client, etc.), but then I started looking at adding some shared modules that would be useful in our MVC applications only. However, this meant I had to include a reference to some Microsoft web DLLs in order to leverage some of the classes I was creating (at this stage MVC 2.0).

    Now my issue is that we have a shared DLL that has references to web-specific libraries but that could also possibly be used in a client application. Not only that, our DLL initially referenced MVC 2.0 and we will eventually move on to MVC 3.0 for all projects, but a lot of the classes in this library I expect to still be relevant to MVC 3, etc. Our code within this DLL is separated into its own namespaces, such as:

        CompanyDLL.Primitives
        CompanyDLL.Web.Mvc
        CompanyDLL.Helpers

    So, my questions are:

      1. Is it OK to do a shared library like this, or if we have web-specific features in it should we create a separate web DLL only targeted at a specific framework or MVC version?
      2. If it's OK, what kind of issues might we face when using the library that references MVC 2 in an MVC 3 project, for example? I would be thinking that we might run into some sort of compatibility issue, or even issues where the developers using the library don't realize they need the MVC 2.0 libraries. They might only want to use some of the generic classes, etc.

    The concept seemed like a good idea at the time, but I'm starting to think maybe it's not really a practical solution. But the number of times I've seen copied classes and methods across projects because they are proven, tested code is a bit unnerving, to be perfectly honest!

    Read the article

  • Avoiding the Black Hole of Leads

    - by Charles Knapp
    Sales says, "Marketing doesn't deliver enough qualified leads. So, we generate 90% of our own leads." Meanwhile, Marketing says, "We generate most of the leads. But Sales doesn't contact them quickly enough, while the lead is still interested."

    According to Sirius Decisions:

      • Up to 90% of leads never make it to closure
      • Sales works on only 11% of the leads supplied by Marketing
      • Only 18% of the leads Sales accepts convert to opportunities
      • Yet, 45% of prospects typically buy a product from someone within 12 months

    The root cause of these commonplace complaints is a disconnect between the funnels of marketing and sales. Unfortunately, we often see companies with an assortment of poorly integrated marketing tools. It takes too long and too many people to move the data around, scrub it, upload it from one system to another, and get it routed to the right sales teams. As a result, leads fall through the cracks, contextual information is lost, and by the time sales actually contacts a customer it may be too late.

    Sales automation alone is not enough. Marketing automation (including social) is not enough. Sales and Marketing must work together. It's time to connect the silos of marketing and sales pipelines and analytics. It's time for integrated Sales and Marketing automation.

    Integrated pipelines improve lead quality and timeliness. Marketing systems can track a rich set of contextual information about a prospect—self-disclosed information about interests, content viewed, and so on. This insight can equip the sales rep with rich information to make a face-to-face conversation more relevant and more likely to convert to the next stage in the sales process.

    Integrated lead-to-revenue (LTR) management provides end-to-end visibility, enabling the company to measure what is working. Marketing can measure its impact on revenue and other business outcomes, and sales can harness and redirect marketing investments to areas where they most help achieve sales objectives. It's a win-win play. Marketing delivers more leads that are qualified, cuts cost per lead, and demonstrates a strong Return on Marketing Investment (ROMI). Sales spends more time with warm leads and less time on cold calls, achieves higher close rates, and delivers more revenue.

    Learn more by attending our Integrated Sales and Marketing session at the upcoming CloudWorld conferences. Or visit our Sales and Marketing Cloud Service site for videos and other learning resources.

    Read the article

  • Alternatives to multiple inheritance for my architecture (NPCs in a Realtime Strategy game)?

    - by Brettetete
    Coding isn't actually that hard. The hard part is writing code that makes sense and is readable and understandable. So I want to become a better developer and create some solid architecture. Specifically, I want to create an architecture for NPCs in a video game. It is a real-time strategy game like Starcraft, Age of Empires, Command & Conquer, etc. So I'll have different kinds of NPCs. An NPC can have one to many abilities (methods) of these: Build(), Farm() and Attack().

    Examples:

      • Worker can Build() and Farm()
      • Warrior can Attack()
      • Citizen can Build(), Farm() and Attack()
      • Fisherman can Farm() and Attack()

    I hope everything is clear so far. So now I have my NPC types and their abilities. But let's come to the technical/programmatic aspect. What would be a good programmatic architecture for my different kinds of NPCs? Okay, I could have a base class. Actually I think this is a good way to stick with the DRY principle. So I can have methods like WalkTo(x,y) in my base class, since every NPC will be able to move. But now let's come to the real problem: where do I implement my abilities (remember: Build(), Farm() and Attack())?

    Since the abilities will consist of the same logic, it would be annoying and would break the DRY principle to implement them for each NPC (Worker, Warrior, ...). Okay, I could implement the abilities within the base class. This would require some kind of logic that verifies whether an NPC can use ability X: IsBuilder, CanBuild, etc. I think it is clear what I want to express. But I don't feel very comfortable with this idea. It sounds like a bloated base class with too much functionality.

    I use C# as the programming language, so multiple inheritance isn't an option here. That means having extra base classes like Fisherman : Farmer, Attacker won't work.
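
    As an illustration (not part of the original question), a minimal C# sketch of the usual alternative to multiple inheritance: each ability becomes a small interface, the shared ability logic lives once in reusable components, and concrete NPC types delegate to the components they need. All names are hypothetical.

        // Hypothetical sketch: abilities as interfaces, shared logic in components.
        public interface IBuilder { void Build(); }
        public interface IFarmer  { void Farm();  }

        // Shared implementations live exactly once, satisfying DRY.
        public class BuildAbility { public void Build() { /* common build logic */ } }
        public class FarmAbility  { public void Farm()  { /* common farm logic  */ } }

        public abstract class Npc
        {
            // Shared by every NPC, so it stays in the base class.
            public void WalkTo(float x, float y) { /* common movement */ }
        }

        public class Worker : Npc, IBuilder, IFarmer
        {
            private readonly BuildAbility _build = new BuildAbility();
            private readonly FarmAbility _farm = new FarmAbility();

            // The NPC only forwards to the shared components it owns.
            public void Build() { _build.Build(); }
            public void Farm()  { _farm.Farm();  }
        }

    Callers can then test for `npc is IBuilder` instead of flags like IsBuilder/CanBuild, which keeps the base class small.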

    Read the article

  • Uniform not being applied to proper mesh

    - by HaMMeReD
    Ok, I have some code where you select blocks on a grid. The selection works. I can modify the blocks to be raised when selected, and the correct one shows. I set a color which I use in the shader. However, I am trying to change the color before rendering the geometry, and the last rendered geometry (in the sequence) is rendered light. To debug the logic I decided to move the block up and make it white, in which case one block moves up and another block becomes white. I checked all my logic and it knows the correct one is selected, and it is showing in the correct place and rendering correctly. When there is only one it works properly.

    Video of the bug in action; note how the highlighted and elevated blocks are not the same block. The code for the color and my renderer (for the items being drawn) is here:

        public void render(Renderer renderer) {
            mGrid.render(renderer, mGameState);
            for (Entity e : mGameEntities) {
                UnitTypes ut = UnitTypes.valueOf((String) e.getObject(D.UNIT_TYPE.ordinal()));
                if (ut == UnitTypes.Soldier) {
                    renderer.testShader.begin();
                    renderer.testShader.setUniformMatrix("u_mvpMatrix", mEntityMatrix);
                    renderer.texture_soldier.bind(0);
                    Vector2 pos = (Vector2) e.getObject(D.COORDS.ordinal());
                    mEntityMatrix.set(renderer.mCamera.combined);
                    if (mSelectedEntities.contains(e)) {
                        mEntityMatrix.translate(pos.x, 1f, pos.y);
                        renderer.testShader.setUniformf("v_color", 0.5f, 0.5f, 0.5f, 1f);
                    } else {
                        mEntityMatrix.translate(pos.x, 0f, pos.y);
                        renderer.testShader.setUniformf("v_color", 1f, 1f, 1f, 1f);
                    }
                    mEntityMatrix.scale(0.2f, 0.2f, 0.2f);
                    renderer.model_soldier.render(renderer.testShader, GL20.GL_TRIANGLES);
                    renderer.testShader.end();
                } else if (ut == UnitTypes.Enemy_Infiltrator) {
                    renderer.testShader.begin();
                    renderer.testShader.setUniformMatrix("u_mvpMatrix", mEntityMatrix);
                    renderer.testShader.setUniformf("v_color", 1.0f, 1, 1, 1.0f);
                    renderer.texture_enemy_infiltrator.bind(0);
                    Vector2 pos = (Vector2) e.getObject(D.COORDS.ordinal());
                    mEntityMatrix.set(renderer.mCamera.combined);
                    mEntityMatrix.translate(pos.x, 0f, pos.y);
                    mEntityMatrix.scale(0.2f, 0.2f, 0.2f);
                    renderer.model_enemy_infiltrator.render(renderer.testShader, GL20.GL_TRIANGLES);
                    renderer.testShader.end();
                }
            }
        }

    Read the article

  • Rotate camera around player and set new forward directions

    - by Samurai Fox
    I have a 3rd person camera which can rotate around the player. When I look at the back of the player and press forward, the player goes forward. Then I rotate 360 degrees around the player and the "forward direction" is tilted by 90 degrees. So with every 360-degree turn there is a 90-degree change of direction. For example, when the camera is facing the right side of the player and I press the button to move forward, I want the player to turn to the left and make that the "new forward".

    I have a Player object with a Camera as a child object. The Camera object has a Camera script. Inside the Camera script there are Player and Camera classes. The Player object itself has an Input Controller. Also, I'm making this script primarily for a joystick/controller.

    My camera script so far:

        using UnityEngine;
        using System.Collections;

        public class CameraScript : MonoBehaviour
        {
            public GameObject Target;
            public float RotateSpeed = 10, FollowDistance = 20, FollowHeight = 10;

            float RotateSpeedPerTime, DesiredRotationAngle, DesiredHeight,
                  CurrentRotationAngle, CurrentHeight, Yaw, Pitch;
            Quaternion CurrentRotation;

            void LateUpdate()
            {
                RotateSpeedPerTime = RotateSpeed * Time.deltaTime;
                DesiredRotationAngle = Target.transform.eulerAngles.y;
                DesiredHeight = Target.transform.position.y + FollowHeight;
                CurrentRotationAngle = transform.eulerAngles.y;
                CurrentHeight = transform.position.y;
                CurrentRotationAngle = Mathf.LerpAngle(CurrentRotationAngle, DesiredRotationAngle, 0);
                CurrentHeight = Mathf.Lerp(CurrentHeight, DesiredHeight, 0);
                CurrentRotation = Quaternion.Euler(0, CurrentRotationAngle, 0);
                transform.position = Target.transform.position;
                transform.position -= CurrentRotation * Vector3.forward * FollowDistance;
                transform.position = new Vector3(transform.position.x, CurrentHeight, transform.position.z);
                Yaw = Input.GetAxis("Right Horizontal") * RotateSpeedPerTime;
                Pitch = Input.GetAxis("Right Vertical") * RotateSpeedPerTime;
                transform.Translate(new Vector3(Yaw, -Pitch, 0));
                transform.position = new Vector3(transform.position.x, transform.position.y, transform.position.z);
                transform.LookAt(Target.transform);
            }
        }
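
    As an illustration (not part of the original question), a minimal, hypothetical C# sketch of the common approach to this problem: read the camera's forward and right vectors every frame, flatten them onto the ground plane, and build the movement direction from them, so "forward" always means "away from the camera" no matter how far the camera has orbited.

        // Hypothetical player-movement sketch: input is interpreted relative to the camera,
        // so pushing "up" on the stick always moves the player away from the camera.
        using UnityEngine;

        public class CameraRelativeMovement : MonoBehaviour
        {
            public Transform CameraTransform;
            public float MoveSpeed = 5f;

            void Update()
            {
                // Flatten the camera axes onto the ground plane so camera pitch
                // doesn't push the player into the ground or the air.
                Vector3 forward = CameraTransform.forward;
                forward.y = 0f;
                forward.Normalize();

                Vector3 right = CameraTransform.right;
                right.y = 0f;
                right.Normalize();

                Vector3 input = new Vector3(Input.GetAxis("Horizontal"), 0f, Input.GetAxis("Vertical"));
                Vector3 direction = forward * input.z + right * input.x;

                if (direction.sqrMagnitude > 0.001f)
                {
                    // Face the movement direction and make it the "new forward".
                    transform.rotation = Quaternion.LookRotation(direction);
                    transform.position += direction * MoveSpeed * Time.deltaTime;
                }
            }
        }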

    Read the article

  • How to set the initial component focus

    - by frank.nimphius
    In ADF Faces, you use the af:document tag's initialFocusId attribute to define the initial component focus. For this, specify the id property value of the component that you want to put the initial focus on. Identifiers are relative to the component, and must account for NamingContainers. You can use a single colon to start the search from the root, or multiple colons to move up through the NamingContainers – "::" will pop out of the component's naming container and begin the search from there, ":::" will pop out of two naming containers and begin the search from there. Alternatively you can add the naming container IDs as a prefix to the component id, e.g. nc1:nc2:comp1. http://download.oracle.com/docs/cd/E17904_01/apirefs.1111/e12419/tagdoc/af_document.html

    To set the initial focus to a component located in a page fragment that is exposed through an ADF region, keep in mind that an ADF Faces region – af:region – is a naming container too. To address an input text field with the id "it1" in an ADF region exposed by an af:region tag with the id r1, you use the following reference in af:document:

        <af:document id="d1" initialFocusId="r1:0:it1">

    Note the "0" index in the client id. Also, make sure the input text component has its clientComponent property set to true, as otherwise no client component exists to put focus on.

    Read the article

  • How to apply effects that occur (or change) over time to characters in a game?

    - by Joshua Harris
    So assume that I have a system that applies Effects to Characters like so:

        public class Character {
            private Collection<Effect> _effects;

            public void AddEffect(Effect e) {
                e.ApplyTo(this);
                _effects.Add(e);
            }

            public void RemoveEffect(Effect e) {
                e.RemoveFrom(this);
                _effects.Remove(e);
            }
        }

        public interface Effect {
            public void ApplyTo(Character character);
            public void RemoveFrom(Character character);
        }

    Example Effect: an armor buff for 5 seconds.

        void someFunction() {
            // Do stuff ...
            Timer armorTimer = new Timer(5 seconds);
            ArmorBuff armorBuff = new ArmorBuff();
            character.AddEffect(armorBuff);
            armorTimer.Start();
            // Do more stuff ...
        }

        // Somewhere else in code
        public void ArmorTimer_Complete() {
            character.RemoveEffect(armorBuff);
        }

        public class ArmorBuff implements Effect {
            public void ApplyTo(Character character) {
                character.changeArmor(20);
            }

            public void RemoveFrom(Character character) {
                character.changeArmor(-20);
            }
        }

    Ok, so this example would buff the character's armor for 5 seconds. Easy to get working. But what about effects that change over the duration of the effect being applied? Two examples come to mind:

    Damage over time: 200 damage every second for 3 seconds. I could mimic this by applying an Effect that lasts for 1 second and has a counter set to 3; then, when it is removed, it could deal 200 damage, clone itself, decrement the counter of the clone, and apply the clone to the character. If it repeats this until the counter is 0, then you have a damage-over-time ability. I'm not a huge fan of this approach, but it does describe the behavior exactly.

    Degenerating speed boost: gain a speed boost that degrades over 3 seconds until you return to your normal speed. This is a bit harder. I can basically do the same thing as above, except having timers set to some portion of a second, such that they occur fast enough to give the appearance of degrading smoothly over time (even though they are really just stepping down incrementally). I feel like you could get away with only 12 steps over a second (maybe less, I would have to test it and see), but this doesn't seem very elegant to me.

    The only other way to implement this effect would be to change the system so that the Character checks the _effects collection for effects that alter any of the properties any time they are used. I could handle this in functions like getCurrentSpeed() and getCurrentArmor(), but you can imagine how much of a hassle it would be to have that kind of overhead every time you want to do a calculation with movement speed (which would be every time you move your character). Is there a better way to deal with these kinds of effects or events?
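
    As an illustration only (not part of the original question, and sketched in C# rather than the Java-style pseudocode above): a common alternative is to give each effect a remaining duration and an update hook that the character ticks every frame, so damage-over-time and decaying buffs fall out naturally without external timers. Names here are hypothetical.

        // Hypothetical sketch: effects carry a duration and an OnUpdate hook that the
        // owning character ticks each frame, so time-varying effects need no timers.
        using System.Collections.Generic;

        public abstract class TimedEffect
        {
            public float Duration;          // seconds remaining
            public virtual void OnApply(Character c) { }
            public virtual void OnUpdate(Character c, float deltaTime) { }
            public virtual void OnRemove(Character c) { }
        }

        // 200 damage per second for 3 seconds, applied continuously.
        public class DamageOverTime : TimedEffect
        {
            public DamageOverTime() { Duration = 3f; }

            public override void OnUpdate(Character c, float deltaTime)
            {
                c.TakeDamage(200f * deltaTime);
            }
        }

        public class Character
        {
            private readonly List<TimedEffect> _effects = new List<TimedEffect>();

            public void AddEffect(TimedEffect e) { e.OnApply(this); _effects.Add(e); }

            public void TakeDamage(float amount) { /* reduce health */ }

            // Called once per frame from the game loop.
            public void Update(float deltaTime)
            {
                for (int i = _effects.Count - 1; i >= 0; i--)
                {
                    TimedEffect e = _effects[i];
                    e.OnUpdate(this, deltaTime);
                    e.Duration -= deltaTime;
                    if (e.Duration <= 0f) { e.OnRemove(this); _effects.RemoveAt(i); }
                }
            }
        }

    A degrading speed boost would then scale its bonus by the fraction of Duration remaining inside OnUpdate, instead of stepping down through many tiny timers.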

    Read the article

  • Setting up Ubuntu on my mother's computer

    - by idealmachine
    Intended use

    My mother had an old Compaq desktop computer running Windows 98, which she used for occasional Web browsing and playing cards. The name of her card game is Hoyle Card Games 3. Although I had to repair it several times over the last 10 years, it worked fine until it finally died at the end of last year.

    Hardware specifications

    A relative brought up a newer computer soon afterward:

      • Operating system: Windows XP
      • Asus K8N motherboard (with broken on-board sound; getting a sound card)
      • Athlon 64? processor (don't remember the clock speed)
      • 512 MB RAM
      • Hope the graphics card works...

    The replacement sound card will be one of:

      • Ensoniq ES1370 AudioPCI
      • Diamond Monster Sound MX300 (Aureal chipset)
      • Sound Blaster Audigy 2 SE

    Peripherals

      • HP Scanjet 3400c scanner (USB connected)
      • HP LaserJet multi-function printer (parallel port connected, and printing works with a PCL driver)
      • Same serial mouse as the old computer

    Question

    I had set up an SSH/VNC connection to allow for remotely working out problems. Or so I thought. A month later, the computer would not boot, rendering the SSH connection useless and an OS reinstall necessary. Unfortunately, I have neither the original Windows disc nor the product key. Unless I were to pay $200 for a full Windows 7 Home Premium license for my computer, I would not be able to re-install Windows XP on hers. I consider myself an advanced Linux user, having used Debian for years. So here are my questions. I have only one day to decide whether to use Ubuntu or buy Windows:

      • A quick search leads me to believe all the hardware listed above is supposed to work with Linux, but am I mistaken?
      • Would Ubuntu/Xubuntu suffice (specify which one if it matters), or would I be better off paying the $200 necessary for Windows XP?
      • Is the card game likely to run on Wine? I believe the minimum system requirement is Windows 95.
      • Failing Wine compatibility, will VirtualBox run fast enough on such a computer (Windows 98 as the guest OS)?
      • Are there any free card games just as good? She plays mainly Bridge, Poker, and Solitaire.
      • Is there any "Large Fonts" option for those with poor vision? The lack of it would be a big disadvantage.

    BONUS: Although I would probably replace the old mouse upon a move to Ubuntu, is it even possible to get a serial mouse working?

    Read the article

  • Oops, I left my kernel zone configuration behind!

    - by mgerdts
    Most people use boot environments to move in one direction. A system starts with an initial installation, and from time to time new boot environments are created – typically as a result of pkg update – and then the new BE is booted. This post is of little interest to those people, as no hackery is needed. This post is about some mild hackery.

    During development, I commonly test different scenarios across multiple boot environments. Many times, those tests aren't related to the act of configuring or installing zones, so it's kinda handy to avoid the effort involved in zone configuration and installation. A somewhat common order of operations is like the following:

        # beadm create -e golden -a test1
        # reboot

    Once the system is running in the test1 BE, I install a kernel zone.

        # zonecfg -z a178 create -t SYSsolaris-kz
        # zoneadm -z a178 install

    Time passes, and I do all kinds of stuff to the test1 boot environment and want to test other scenarios in a clean boot environment. So then I create a new one from my golden BE and reboot into it.

        # beadm create -e golden -a test2
        # reboot

    Since the test2 BE was created from the golden BE, it doesn't have the configuration for the kernel zone that I configured and installed. Getting that zone over to the test2 BE is pretty easy. My test1 BE is really known as s11fixes-2.

        root@vzl-212:~# beadm mount s11fixes-2 /mnt
        root@vzl-212:~# zonecfg -R /mnt -z a178 export | zonecfg -z a178 -f -
        root@vzl-212:~# beadm unmount s11fixes-2
        root@vzl-212:~# zoneadm -z a178 attach
        root@vzl-212:~# zoneadm -z a178 boot

    On the face of it, it would seem as though it would have been easier to just use zonecfg -z a178 create -t SYSsolaris-kz within the test2 BE to get the new configuration over. That would almost work, but it would have left behind the encryption key required for access to host data and any suspend image. See solaris-kz(5) for more info on host data. I very commonly have more complex configurations that contain many storage URIs and non-default resource controls. Retyping them would be rather tedious.

    Read the article

  • MOSSLover Lives On…

    - by MOSSLover
    A while back, maybe 6 months ago, I got some bad news about 2010. Microsoft was removing Office from the MOSS equivalent of 2010, so basically my alias would be obsolete the second 2010 caught on in the community. I thought about it for some time. I had some discussions with friends in the community. I even noticed that the MOSSMan changed his twitter id. I started my blog around a WSS 3.0 project when I worked for LRS in their St. Louis office in February/March 2007. So I think it's fitting to keep the name, because my community involvement centers around 2007.

    My first ever speaking ordeal was at the Kansas City Office Geeks meeting in November of 2007 on Disaster Recovery, where about three people attended. The first user group meeting I ever attended was around the month of June 2007 at the KC .Net User Group, about two weeks after my braces were installed. It's definitely fitting to say that 2007 paved the way for everything that happened in the past 2 to 2 1/2 years.

    If anyone asks what MOSSLover means, I added a description on twitter and I also added my name. I added my name for other reasons, because I'm sick of people thinking I am the guy in the photo. Also, I'd like people to recognize me for who I am. Everyone should expect less of the hat in the upcoming year and more of my hair. I've taken a vow to wear the hat less and less this year. I am sick of buying hats, plus I want to move forward to gain more self confidence. The hat does not really help. I will still wear a t-shirt and jeans in most of my presentations. That is who I am and it will not change any time soon. If you expect to see me in a skirt, good luck with that, as it won't be happening unless I am forced at gun point. I hope you guys have a good weekend. Later all…

    Technorati Tags: MOSSLover, Cardinal's Hat, Becky Isserman

    Read the article

  • My 2011 Professional Development Goals

    - by kerry
    I thought it might be a good idea to post some professional goals for 2011. Hopefully, I can look at this list at the end of the year and have accomplished most of them.

      • Release an Android app to the marketplace – I figured I would put this first because I have one that I have been working on for a while and it is about ready. Along with this, I would like to start another one and continue to develop my Android skills.
      • Contribute free software to the community – Again, I have an SMF plugin that will fill this requirement nicely. Just need to give it some polish and release it. That's not all; I would like to add a few more libraries on github, or possibly contribute to an open source project.
      • Regularly attend user group meetings outside of Java – A great way to meet people and learn new things.
      • Obtain the Oracle Certified Web Developer Certification – I got the SCJP a few years ago and would like to obtain another one. One step closer to Certified Enterprise Architect.
      • Learn Scala – As a language geek, I like to stick to the Pragmatic Programmer's 'learn a new language every year' rule (last year was Ruby). Scala presents some new concepts all wrapped in a JVM-based OOP language. Time to dig in.
      • Write an app using JSF – New JEE 6 features are pretty slick. I want to really leverage them in an app.
      • Present at a user group meeting – Last but not least, I would like to improve my public speaking and presentation skills. Also, it is a great reason to dig into some of the latest and greatest tech.
      • Use git more, and more effectively – Trying to move all my personal projects from Subversion to Git.

    That's it. A little daunting, but I am confident I can at least touch on most of these, and it's a great roadmap for my professional development.

    Read the article

  • I need help with specific types of movement.

    - by IronGiraffe
    I'm adding movable crates to my game and I need some help with my movement code. The way I've set up my movement code, the crate's X and Y are moved according to a Vector2 unless it hits a wall. Here's the movement code:

        if (frameCount % (delay / 2) == 0)
        {
            for (int i = 0; i < Math.Abs(cSpeed.X); i++)
            {
                if (!Level.PlayerHit(new Rectangle(crateBounds.X + (Math.Sign(cSpeed.X) * 32), crateBounds.Y, crateBounds.Width, crateBounds.Height)))
                {
                    if (!Level.CollideTiles(crateBounds.X + (Math.Sign(cSpeed.X) * 32), crateBounds.Y, crateBounds.Width, crateBounds.Height))
                    {
                        if (cSpeed.X != 0)
                        {
                            crateBounds.X += Math.Sign(cSpeed.X);
                        }
                        else
                        {
                            Equalize(2);
                        }
                    }
                    else
                    {
                        cSpeed.X = 0f;
                    }
                }
                else
                {
                    if (!Level.CollideTiles(crateBounds.X - (Math.Sign(cSpeed.X) * 32), crateBounds.Y, crateBounds.Width, crateBounds.Height))
                    {
                        if (cSpeed.X != 0)
                        {
                            crateBounds.X -= Math.Sign(cSpeed.X);
                        }
                        else
                        {
                            Equalize(2);
                        }
                    }
                    else
                    {
                        cSpeed.X = 0f;
                    }
                }
            }

            for (int i = 0; i < Math.Abs(cSpeed.Y); i++)
            {
                if (!Level.PlayerHit(new Rectangle(crateBounds.X, crateBounds.Y + Math.Sign(cSpeed.Y), crateBounds.Width, crateBounds.Height)))
                {
                    if (!Level.CollideTiles(crateBounds.X, crateBounds.Y + Math.Sign(cSpeed.Y), crateBounds.Width, crateBounds.Height))
                    {
                        crateBounds.Y += Math.Sign(cSpeed.Y);
                    }
                    else
                    {
                        cSpeed.Y = 0f;
                    }
                }
                else
                {
                    if (!Level.CollideTiles(crateBounds.X, crateBounds.Y - Math.Sign(cSpeed.Y), crateBounds.Width, crateBounds.Height))
                    {
                        crateBounds.Y -= Math.Sign(cSpeed.Y);
                    }
                    else
                    {
                        cSpeed.Y = 0f;
                    }
                }
            }
        }

    The frameCount and delay variables just slow down the movement somewhat. Anyway, I've added a tool to my game that acts as a gravity well (drawing objects into its center; the closer they get to the center the faster they go) and what I'm trying to do is make it so that the crate will bounce off the player (and vice versa) when they collide. Thus far, my code only keeps the crate and player from getting stuck inside each other (the player can still get stuck under the crate, but I'll fix that later). So what I'd like to know is how I can best make the crate bounce off the player.

    The other movement problem I'm having is related to another tool which allows the player to pick up crates and move around with them. The problem is that the crate's movement while being carried isn't tied to the movement script (it moves the crate itself, instead of adding to speed), which makes the crate go through walls and such. I know what I have to do: make the crate's speed match the player's speed while the player is holding it. But I need the crate to snap to a certain position (just in front of the player) when the player grabs it, and it needs to switch to another position (just in front of the player on the other side) when the player faces the other direction while holding it. What would be the best way to make it switch places while keeping all the movement tied to the speed vector?
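
    As an illustration (not from the original question), here is one common way to handle the bounce, sketched in C# with XNA-style types and hypothetical names: approximate the collision normal from the two bounding boxes, then reflect the crate's velocity about that normal and damp it so each bounce loses energy.

        // Hypothetical bounce sketch (XNA-style types assumed): reflect the crate's
        // velocity along the collision normal and damp it with a restitution factor.
        using Microsoft.Xna.Framework;

        public static class CrateBounce
        {
            // Returns the new crate speed after bouncing off the player.
            public static Vector2 Bounce(Vector2 crateSpeed, Rectangle crate, Rectangle player, float restitution)
            {
                // Approximate the collision normal from the direction between centers.
                Vector2 normal = new Vector2(
                    crate.Center.X - player.Center.X,
                    crate.Center.Y - player.Center.Y);
                if (normal == Vector2.Zero)
                    normal = Vector2.UnitY;        // degenerate overlap: pick any axis
                normal.Normalize();

                // Standard reflection: v' = v - 2 (v . n) n, scaled by restitution.
                float dot = Vector2.Dot(crateSpeed, normal);
                if (dot < 0f)                       // only bounce if moving into the player
                    crateSpeed = (crateSpeed - 2f * dot * normal) * restitution;

                return crateSpeed;
            }
        }

    The result would feed straight back into cSpeed, so the existing per-pixel collision loop above keeps handling walls unchanged.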

    Read the article

  • Accessing your web server via IPv6

    Being able to run your systems on IPv6, have automatic address assignment and the ability to resolve host names are the necessary building blocks in your IPv6 network infrastructure. Now that everything is in place, it is about time we enable another service to respond to IPv6 requests. The following article will guide you through the steps of enabling Apache2 httpd to listen and respond to incoming IPv6 requests.

    This is the fourth article in a series on IPv6 configuration:

      • Configure IPv6 on your Linux system
      • DHCPv6: Provide IPv6 information in your local network
      • Enabling DNS for IPv6 infrastructure
      • Accessing your web server via IPv6

    Piece of advice: This is based on my findings on the internet while reading other people's helpful articles and going through a couple of man pages on my local system.

    Surfing the web – IPv6 style

    Enabling IPv6 connections in Apache 2 is fairly simple. But first let's check whether your system has a running instance of Apache2 or not. You can check this like so:

        $ service apache2 status
        Apache2 is running (pid 2680).

    In case you got a 'service unknown' you have to install Apache to proceed with the following steps:

        $ sudo apt-get install apache2

    Out of the box, Apache binds to all your available network interfaces and listens to TCP port 80. To check this, run the following command:

        $ sudo netstat -lnptu | grep "apache2\W*$"
        tcp6       0      0 :::80                   :::*                    LISTEN      28306/apache2

    In this case Apache2 is already binding to IPv6 (and implicitly to IPv4). If you only got a tcp output, then your HTTPd is not yet IPv6 enabled. Check your Listen directive; depending on your system this might be in a different location than the default in Ubuntu.

        $ sudo nano /etc/apache2/ports.conf

        # If you just change the port or add more ports here, you will likely also
        # have to change the VirtualHost statement in
        # /etc/apache2/sites-enabled/000-default
        # This is also true if you have upgraded from before 2.2.9-3 (i.e. from
        # Debian etch). See /usr/share/doc/apache2.2-common/NEWS.Debian.gz and
        # README.Debian.gz
        NameVirtualHost *:80
        Listen 80

        <IfModule mod_ssl.c>
            # If you add NameVirtualHost *:443 here, you will also have to change
            # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
            # to <VirtualHost *:443>
            # Server Name Indication for SSL named virtual hosts is currently not
            # supported by MSIE on Windows XP.
            Listen 443
        </IfModule>

        <IfModule mod_gnutls.c>
            Listen 443
        </IfModule>

    Just in case you don't have a ports.conf file, look for it like so:

        $ cd /etc/apache2/
        $ fgrep -r -i 'listen' ./*

    And modify the related file instead of ports.conf. Most probably that will be either apache2.conf or httpd.conf anyway.

    Okay, please bear in mind that Apache can only bind once on the same interface and port. So, eventually, you might be interested in adding another port which explicitly listens to IPv6 only. In that case, you would add the following to your configuration file:

        Listen 80
        Listen [2001:db8:bad:a55::2]:8080

    But this is completely optional... Anyway, just to complete all steps, you save the file and then check the syntax like so:

        $ sudo apache2ctl configtest
        Syntax OK

    Ok, now let's apply the modifications to our running Apache2 instance:

        $ sudo service apache2 reload
         * Reloading web server config apache2   ...done.

        $ sudo netstat -lnptu | grep "apache2\W*$"
        tcp6       0      0 2001:db8:bad:a55:::8080 :::*                    LISTEN      5922/apache2
        tcp6       0      0 :::80                   :::*                    LISTEN      5922/apache2

    There we have two daemons running and listening to different TCP ports. Now that the basics are in place, it's time to prepare any website to respond to incoming requests on the IPv6 address. Open up any configuration file you have below your sites-enabled folder.

        $ ls -al /etc/apache2/sites-enabled/
        ...
        $ sudo nano /etc/apache2/sites-enabled/000-default

        <VirtualHost *:80 [2001:db8:bad:a55::2]:8080>
                ServerAdmin [email protected]
                ServerName server.ios.mu
                ServerAlias server

    Here, we have to check and modify the VirtualHost directive and enable it to respond to the IPv6 address and port our web server is listening to. Save your changes, run the configuration test and reload Apache2 in order to apply your modifications. After successful steps you can launch your favourite browser and navigate to your IPv6 enabled web server.

    Accessing an IPv6 address in the browser

    That looks like a successful surgery to me... Note: In case you receive a timeout, check whether your client is operating on IPv6, too.

    Read the article

  • Pixels - A cry for some insight

    - by CarrotFile
    I'm pretty new to web development and I'd love some clarification. Although I have read more than one book on the topic, I cannot seem to wrap my head around the pixel concept. I encounter problems with this issue when trying to use CSS and pixel units for designs that fit different screen sizes.

    To my understanding, a pixel is the most basic unit used by a monitor in order to compose an image on the screen. So if my resolution is 800 by 600, everything on my screen is rendered using those 800*600 basic building blocks. If I were to enlarge my screen resolution, three things would occur:

      A. The basic image building block (the pixel) would shrink in size
      B. The pixels would move closer together
      C. Well, more pixels would now be available

    All these combined lead to a sharper (depending on the viewing distance) and more detailed image. Well, so far so good.

    Here is where I start getting lost: to my knowledge a pixel is not a physical, real object. Monitors are not embedded with a few thousand pixels. I am drawn to this conclusion because anyone can change his screen's resolution, making a pixel on his screen bigger or smaller, and adding or subtracting the total number of pixels on screen. Adding to that, I have heard that different monitors have different pixel densities, for example Apple's Retina displays.

    Taking all of the above as my knowledge base, these are my questions:

      1. If a pixel has no real-world constant size, what does comparing different pixel densities matter? Each screen company can define its own pixel concept and declare the higher density.
      2. What does a bigger pixel density mean? Say we take two screens with the same physical dimensions, but with a different pixel density: am I to assert that the main difference would be the larger-density screen being able to display a higher maximum resolution? Or am I to assert that, given the same resolution on both monitors, the higher-density one would display a sharper, smaller image?
      3. If a pixel is not a fixed size within one monitor, is it a fixed size between the same resolution on two different monitors? For example, would two different monitors, set to the same resolution, be composed of pixels of the same size and quantity?

    I'd love some help (:
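
    As an illustration (not part of the original question), a small hypothetical C# helper that makes the density idea concrete: physical pixel density (PPI) follows from the native resolution and the physical diagonal, which is why two panels running the same resolution can still have different-sized pixels.

        // Hypothetical sketch: pixels per inch from native resolution and diagonal size.
        // Same resolution on a smaller panel means higher PPI, i.e. physically smaller pixels.
        using System;

        class PixelDensity
        {
            static double Ppi(int widthPx, int heightPx, double diagonalInches)
            {
                double diagonalPx = Math.Sqrt((double)widthPx * widthPx + (double)heightPx * heightPx);
                return diagonalPx / diagonalInches;
            }

            static void Main()
            {
                // Two panels running the same 1920x1080 resolution:
                Console.WriteLine($"24 inch monitor: {Ppi(1920, 1080, 24):F0} PPI");  // roughly 92 PPI
                Console.WriteLine($"13 inch laptop:  {Ppi(1920, 1080, 13):F0} PPI");  // roughly 170 PPI
            }
        }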

    Read the article

  • In Technology, Ignorance is NOT Bliss

    - by Tanu Sood
    Author: Debra Lilley, ACE Director, UK

    Proof I'm not technical – I've just finished a Latin America tour with OTN and a funny thing happened that I want to share with you, because it is quite a good analogy for how many of us use technology today, and you know how I love analogies.

    In Costa Rica we had a really long journey up through the mountains to where our conference was to be. The road was windy and narrow, and once it got dark there was no scenery to see, so boredom set in. At one stage I looked at my watch to see the time, but in the dark I couldn't make it out, so I thought I would be clever and use the torch in my smartphone! Even though as soon as I switched on the phone it showed the time, I ignored it and used the torch to read my watch.

    That's us when we pay maintenance on software, ask for enhancements, and either choose not to upgrade or, as I have seen so many times, upgrade but don't use the new features. I know there are always other factors, not least the upgrade costs themselves, but in the later releases of the Oracle family of applications Oracle has done a lot to make their interoperability with Oracle Fusion Middleware more successful, and in many cases for the first time.

    My heritage is Oracle E-Business Suite (EBS), and the availability of Oracle WebLogic for EBS is fantastic for an Oracle-powered organisation that can move away from supporting multiple flavours of application server. The same release made available the no-downtime patching that Oracle Database 11g introduced with Edition Based Redefinition.

    I am not saying you must use these features, but you must be aware of what each release of your application brings and make a business-based decision as to whether it is for you or not. I like to have a simple spreadsheet of features with no-value, nice-to-have, and must-have ratings, but make the spreadsheet cumulative so that when you do upgrade you have listed all the features you previously didn't take up. That way you can avoid the 'using your phone to read your watch' scenario.

    About the Author: Debra Lilley, Fusion Champion, UKOUG Board Member, Fusion User Experience Advocate and ACE Director. Lilley has 18 years' experience with Oracle Applications, with E-Business Suite since 9.4.1, moving to Business Intelligence Team Lead and Oracle Alliance Director. She has spoken at over 100 conferences worldwide and posts at debrasoraclethoughts.

    Read the article

  • Cone of Uncertainty in classic and agile projects

    - by DigiMortal
    David Starr from Scrum.org gave an interesting session at TechEd Europe 2012 – Implementing Scrum Using Team Foundation Server 2012. One of the interesting things for me was how the Cone of Uncertainty looks in agile projects (or how agile methodologies distort the cone we know from waterfall projects). This posting illustrates two cones – one for waterfall and one for the agile world.

    Cone of Uncertainty

    The Cone of Uncertainty was introduced to the software development community by Steve McConnell, and it visualizes how accurate our estimates are over the project timeline. Here is the Cone of Uncertainty when we deal with waterfall and Big Design Up-Front (BDUF).

    Cone of Uncertainty. Taken from the MSDN Library page Estimating.

    The closer we are to the project end, the more accurate our estimates are. When the project ends we know exactly how much time every task took. As we can see, the cone is wide at the point where we usually have to give our estimates – somewhere between Initial Project Concept and Requirements Complete. Don't ask me why Initial Project Concept is the stage where some companies give their best estimates – they just do it every time and don't learn a thing later. This cone is inevitable for software development, and the agile methodologies that try to make the software world better are also able to change the cone.

    Cone of Uncertainty in agile projects

    Agile methodologies usually try to avoid BDUF, waterfalls and other things that make all our mistakes highly expensive. Agile methodologies treat development as creative work and focus on making it better. One main trick is to focus on small and short iterations. What does that mean? We are estimating functionalities that are easier for us to understand and implement, therefore our estimates are more accurate. As we move from a few big iterations to many small iterations we also distort and slice the Cone of Uncertainty. This is how the cone looks when agile methodologies are used.

    Cone of Uncertainty in agile projects.

    We have more cones to live with, but they are way smaller. I don't have any numbers to put here because I couldn't find any, but still this "chart" should give you the point: more, smaller iterations cause more, but way smaller, cones of uncertainty. We can handle these small uncertainties because the steps we take to complete small tasks are more predictable and don't often grow above our heads.

    One more note: consider that both of the charts given in this posting describe exactly the same phase of the same project – just the uncertainties are different.

    Read the article
