Search Results

Search found 10244 results on 410 pages for 'space complexity'.


  • Engineered to Inform, Inspire, Entertain

    - by Oracle OpenWorld Blog Team
    by Karen Shamban Take note! The keynote lineup for this year's Oracle OpenWorld conference has just been announced. Expert speakers will provide insights into industry trends, the latest technology developments and futures, as well as key strategies for achieving business efficiency and innovation. Critical business drivers such as engineered systems, cloud computing, customer experience, and business analytics and big data will be featured topics. Executive keynotes include: Oracle CEO Larry Ellison on "Hardware and Software, Engineered to Work Together: Why It's a Different Approach" and "The Oracle Cloud: Where Social is Built In"; Oracle President Mark Hurd discussing "Shift Complexity" with SVP of Oracle Database Development Andrew Mendelsohn, and "See More, Act Faster: Oracle Business Analytics"; Oracle EVP of Product Development Thomas Kurian focusing on "The Oracle Cloud: Oracle's Cloud Platform and Applications Strategy"; and Oracle EVP of Systems John Fowler, Oracle Chief Corporate Architect Edward Screven, and Oracle SVP of Systems Technology Juan Loiaza on "Oracle Cloud Infrastructure and Engineered Systems: Fast, Reliable, Virtualized". For more information on speakers, topics, and schedule, go to the Oracle OpenWorld Keynotes page.

    Read the article

  • Which offline method is better for a large-scale application?

    - by Manish Pansiniya
    We have a big data management website used by several properties. Some of our customers have downtime (they can't access the net for an hour or two). We want our site to support offline data viewing and inventory management (typical data search and add/remove), and when the user goes back online we can sync the changes to our central database. Customers use several platforms like Windows, iOS, etc. We've been looking into several different options; here are the major choices: (1) Develop an offline web app supported in HTML5, with a 'fallback' mechanism that interacts with data from the app cache as explained here (http://www.htmlgoodies.com/html5/tutorials/introduction-to-offline-web-applications-using-). (2) Develop a desktop-based cross-platform solution. I remember the old Mono, which has been popular. Here's a post (What do you suggest for cross platform apps, including web) and another one (Technology choice for cross platform development (desktop and phone)?). I realize the desktop solution might be hard to maintain, could run into compatibility issues, and would demand test environments. Can HTML5 handle moderate-to-high complexity and data flow? Or would it be better to rely on a desktop-based app for better scalability and performance?
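
    A rough sketch of the sync-on-reconnect idea mentioned above: record each add/remove locally while the connection is down, then replay the queue against the central database once the user is back online. The class and method names are hypothetical, and the example is in Java purely for illustration; the same shape works in an HTML5 local-storage app or a desktop client.

    import java.util.ArrayDeque;
    import java.util.Deque;

    final class OfflineSync {
        /** One recorded change, e.g. an inventory add/remove posted to the central DB. */
        interface Change { void applyToServer() throws Exception; }

        private final Deque<Change> pending = new ArrayDeque<>();

        /** Called for every edit made while the network is down. */
        synchronized void record(Change change) { pending.add(change); }

        /** Called when connectivity returns; stops at the first failure so nothing is lost. */
        synchronized void replay() {
            while (!pending.isEmpty()) {
                Change next = pending.peek();
                try {
                    next.applyToServer();
                    pending.remove();
                } catch (Exception stillOffline) {
                    break;   // retry on the next connectivity event
                }
            }
        }
    }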

    Read the article

  • Grading an algorithm: Readability vs. Compactness

    - by amiregelz
    Consider the following question in a test/interview: Implement the strcpy() function in C: void strcpy(char *destination, char *source); The strcpy function copies the C string pointed to by source into the array pointed to by destination, including the terminating null character. Assume that the size of the array pointed to by destination is long enough to contain the same C string as source, and does not overlap in memory with source. Say you were the tester; how would you grade the following answers to this question?

    1) void strcpy(char *destination, char *source)
       {
           while (*source != '\0')
           {
               *destination = *source;
               source++;
               destination++;
           }
           *destination = *source;
       }

    2) void strcpy(char *destination, char *source)
       {
           while (*(destination++) = *(source++))
               ;
       }

    The first implementation is straightforward - it is readable and programmer-friendly. The second implementation is shorter (one line of code) but less programmer-friendly; it's not so easy to understand the way this code works, and if you're not familiar with the operator precedence involved then it's a problem. I'm wondering if the second answer would show more complexity and more advanced thinking, in the tester's eyes, even though both algorithms behave the same, and although code readability is considered to be more important than code compactness. It seems to me that since making an algorithm this compact is more difficult, it will show a higher level of thinking as an answer in a test. However, it is also possible that a tester would consider the second answer not good because it's not readable. I would also like to mention that this is not specific to this example, but general for code readability vs. compactness when implementing an algorithm, specifically in tests/interviews.

    Read the article

  • 2012 Oracle Fusion Middleware Innovation Awards Announced

    - by Tanu Sood
    Guest Contributor: Margaret Harrist. Originally posted on Oracle NewsCentral. Companies from around the world were honored Tuesday for their innovative solutions using Oracle Fusion Middleware. This year’s 27 award winners, representing 11 countries and a wide span of industries, wowed the judges with a range of projects across eight product categories. A panel of judges scored each entry across multiple categories, including the uniqueness of their business case, business benefits, level of impact relative to the size of the organization, complexity and magnitude of implementation, and the architecture’s originality. In a general session just before the award presentation, Oracle Executive Vice President Hasan Rizvi highlighted a few of the winners’ original implementations, including Nike, Los Angeles Department of Water and Power, and Nintendo of America. Congratulations to the 2012 winners:
    Oracle Exalogic: Netshoes, Claro, UL, and Ingersoll Rand
    Oracle Cloud Application Foundation: Mazda Motor Corporation, HOTELBEDS Technology, Globalia, Nike, and Comcast Corporation
    Oracle SOA and Oracle BPM: NTT Docomo, Schneider National, Amadeus, and Motability
    Oracle WebCenter: News Limited, University of Louisville, China Mobile Jiangsu, and Life Technologies
    Oracle Identity Management: Education Testing Service and Avea
    Oracle Data Integration: Raymond James and William Morrison Supermarkets
    Oracle Application Development Framework and Oracle Fusion Development: Qualcomm, Micros Systems, and Marfin Egnatia Bank
    Business Analytics (Oracle BI, Oracle EPM, Oracle Exalytics): INC Research, Experian, and Hologic

    Read the article

  • Can Clojure's thread-based agents handle c10k performance?

    - by elliot42
    I'm writing a c10k-style service and am trying to evaluate Clojure's performance. Can Clojure's thread-based agents handle this scale of concurrency? Other high-performance systems seem to be moving towards async-IO/events/greenlets, albeit at a seemingly higher complexity cost. Suppose there are 10,000 clients connected, sending messages that should be appended to 1,000 local files--the Clojure service is trying to write to as many files in parallel as it can, while not letting any two separate requests mangle the same single file by writing at the same time. Clojure agents are extremely elegant conceptually--they would allow separate files to be written independently and asynchronously, while serializing (in the database sense) multiple requests to write to the same file. My understanding is that agents work by starting a thread for each operation (assume we are IO-bound and using send-off)--so in this case is it correct that it would start 1,000+ threads? Can current-day systems handle this number of threads efficiently? Most of them should be IO-bound and sleeping most of the time, but I presume there would still be a context-switching penalty that is theoretically higher than in async-IO/event-based systems (e.g. Erlang, Go, node.js). If the Clojure solution can handle the performance, it seems like the most elegant thing to code. However, if it can't handle the performance then something like Erlang or Go's lightweight processes might be preferable, since they are designed to have tens of thousands of them spawned at once, and are only moderately more complex to implement. Has anyone approached this problem in Clojure or compared it to these other platforms? (Thanks for your thoughts!)
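
    Not Clojure agents themselves, but a rough Java analogue of what send-off to one agent per file buys you: a single-threaded executor per file serializes writes to that file while different files proceed in parallel. As the question notes, this still ends up with roughly one (mostly sleeping) thread per active file; all names below are illustrative.

    import java.nio.charset.StandardCharsets;
    import java.nio.file.*;
    import java.util.concurrent.*;

    final class FileWriters {
        // one single-threaded executor per file = one "agent" per file
        private final ConcurrentHashMap<Path, ExecutorService> writers = new ConcurrentHashMap<>();

        void append(Path file, String line) {
            ExecutorService writer =
                writers.computeIfAbsent(file, f -> Executors.newSingleThreadExecutor());
            writer.submit(() -> {
                try {
                    // writes to the same file are serialized; different files run in parallel
                    Files.write(file, (line + "\n").getBytes(StandardCharsets.UTF_8),
                                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
                } catch (Exception e) {
                    e.printStackTrace();   // a real service would report this back to the caller
                }
            });
        }
    }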

    Read the article

  • Dell XPS 15z fan issue in Ubuntu 12.04

    - by Paxinum
    I just updated to Ubuntu 12.04 on my Dell XPS 15z laptop. The trouble is that I hear a slight ticking sound every third second, probably from a fan. This is a new issue in this Ubuntu version. I use the recommended boot options for grub, i.e. acpi_backlight=vendor, but I do not use any acpi=off or acpi=noirq. Is there a way to fix this issue from Ubuntu, by maybe controlling the fans somehow? EDIT: Notice, the sound goes away as the fan speeds up (when doing calculations or such), so it is really a fan issue. EDIT2: I have located the issue: if I use conky 1.9, together with the command execpi for a Python application, then the sound appears, and the noise syncs with the update interval for conky (NOT with the update interval for execpi!). The noise seems to be proportional to the complexity of the drawing that is needed. This is very strange, as this issue was not in the previous version of conky I used. The solution was to increase the update interval for conky from 0.5 to 3, i.e. update every third second instead of twice a second.

    Read the article

  • Making LISPs manageable

    - by Andrea
    I am trying to learn Clojure, which seems a good candidate for a successful LISP. I have no problem with the concepts, but now I would like to start actually doing something. Here comes my problem. As I mainly do web stuff, I have been looking into existing frameworks, database libraries, templating libraries and so on. Often these libraries are heavily based on macros. Now, I very much like the possibility of writing macros to get a simpler syntax than would otherwise be possible. But it definitely adds another layer of complexity. Let me take an example of a migration in Lobos from a blog post:

    (defmigration add-posts-table
      (up [] (create clogdb
               (table :posts
                 (integer :id :primary-key)
                 (varchar :title 250)
                 (text :content)
                 (boolean :status (default false))
                 (timestamp :created (default (now)))
                 (timestamp :published)
                 (integer :author [:refer :authors :id] :not-null))))
      (down [] (drop (table :posts))))

    It is very readable indeed. But it is hard to recognize what the structure is. What does the function timestamp return? Or is it a macro? Having all this freedom of writing my own syntax means that I have to learn other people's syntax for every library I want to use. How can I learn to use these components effectively? Am I supposed to learn each small DSL as a black box?

    Read the article

  • BizTalk 2009 - Error when Testing Map with Flat File Source Schema

    - by StuartBrierley
    I have recently been creating some flat file schemas using the BizTalk Server 2009 Flat File Schema Wizard. I have then been mapping these flat file schemas to a "normal" XML schema format. I had not previously had any cause to map flat files and ran into some trouble when testing the first of these flat file maps; with an instance of the flat file as the source it threw an XSL transform error: Test Map.btm: error btm1050: XSL transform error: Unable to write output instance to the following <file:///C:\Documents and Settings\sbrierley\Local Settings\Temp\_MapData\Test Mapping\Test Map_output.xml>. Data at the root level is invalid. Line 1, position 1. Due to the complexity of the map in question I decided to create a small test map using the same source and destination schemas to see if I could pinpoint the problem. Although the source message instance validated correctly against the flat file schema, when I then tested this simplified map I got the same error. After a time of fruitless head scratching and some serious Google time I figured out what the problem was. Looking at the map properties I noticed that I had the test map input set to "XML" - for a flat file instance this should be set to "Native".

    Read the article

  • Of transactions and Mongo

    - by Nuri Halperin
    Originally posted on: http://geekswithblogs.net/nuri/archive/2014/05/20/of-transactions-and-mongo-again.aspx What's the first thing you hear about NoSQL databases? That they lose your data? That there are no transactions? No joins? No hope for "real" applications? Well, you *should* be wondering whether a certain kind of database is the right one for your job. But if you do so, you should be wondering that about "traditional" databases as well! In the spirit of exploration let's take a look at a common challenge: You are a bank. You have customers with accounts. Customer A wants to pay B. You want to allow that only if A can cover the amount being transferred. Let's look at the problem without the context of any database engine in mind. What would you do? How would you ensure that the transfer is done "properly"? Would you prevent a "transaction" from taking place unless A can cover the amount? There are several options: (1) Prevent any change to A's account while the transfer is taking place - that boils down to locking. (2) Apply the change, and allow A's balance to go below zero; charge person A some interest on the negative balance - not friendly, but certainly a choice. (3) Do neither. Options 1 and 2 are difficult to attain in the NoSQL world. Mongo won't save you headaches here either. Option 3 looks a bit harsh. But here's where this can go: a ledger. See, an account doesn't need to be represented by a single row in a table of all accounts with only the current balance on it. More often than not, accounting systems use ledgers. And entries in ledgers, as it turns out, don't actually get updated. Once a ledger entry is written, it is not removed or altered. A transaction is represented by an entry in the ledger stating an amount withdrawn from A's account and an entry in the ledger stating an addition of said amount to B's account. For the sake of space-saving, that can happen using one entry. Think {Timestamp, FromAccountId, ToAccountId, Amount}. The implication of the original question - "how do you enforce the non-negative balance rule" - then boils down to: (1) insert an entry in the ledger; (2) run validation of recent entries; (3) insert a reverse entry to roll back the transaction if validation failed. What is validation? Sum up the transactions that A's account has (all deposits and debits), and ensure the balance is positive. For the sake of efficiency, one can roll up transactions and "close the book" on them with a pseudo entry stating the balance as of midnight or something. This lets you avoid doing math on the fly over too many transactions; you simply run from the latest "approved balance" marker to date. But that's an optimization, and premature optimizations are the root of (some? most?) evil. Back to some nagging questions though: "But Mongo is only eventually consistent!" Well, yes, kind of. It's not actually true that Mongo has no transactions. It would be more descriptive to say that Mongo's transaction scope is a single document in a single collection. A write to a Mongo document happens completely or not at all. So although it is true that you can't update more than one document "at the same time" under a "transaction" umbrella as an atomic update, it is NOT true that there is no isolation. A competition between two concurrent updates is completely coherent and the writes will be serialized. They will not scribble on the same document at the same time.
In our case - in choosing a ledger approach - we're not even trying to "update" a document, we're simply adding a document to a collection. So there goes the "no transactions" issue. Now let's turn our attention to consistency. What you should know about Mongo is that at any given moment, only one member of a replica set is writable. This means that the writable instance in a set of replicated instances always has "the truth". There could be a replication lag such that a reader going to one of the replicas still sees an "old" state of a collection or document. But in our ledger case, things fall nicely into place: run your validation against the writable instance. It is guaranteed to have the ledger either with (after) or without (before) the entry that was just written. No funky states. Again, the ledger writing *adds* a document, so there's no inconsistent document state to be had either way. Next, we might worry about data loss. Here, Mongo offers several write concerns. A write concern in Mongo is a mode that marshals how uptight you want the db engine to be about actually persisting a document write to disk before it reports to the application that it is "done". The most volatile is to say you don't care: in that case, Mongo would just accept your write command and say back "thanks" with no guarantee of persistence. If the server loses power at the wrong moment, it may have said "ok" but actually not written the data to disk. That's kind of bad. Don't do that with data you care about. It may be good for votes in a poll regarding how cute a furry animal is, but not so good for business. There are several other write concerns, varying from flushing the write to the disk of the writable instance, to flushing to disk on several members of the replica set, a majority of the replica set, or all of the members of a replica set. The first choice is the quickest, as no network coordination is required beyond the main writable instance. The others impose extra network and time cost. Depending on your tolerance for latency and read lag, you will face a choice of what works for you. It's really important to understand that no data loss occurs once a document is flushed to an instance. The record is on disk at that point. From that point on, backup strategies and disaster recovery are your worry, not loss of power to the writable machine. This scenario is no different from a relational database at that point. Where does this leave us? Oh, yes. Eventual consistency. By now, we have ensured that the "source of truth" instance has the correct data, persisted and coherent. But because of lag, the app may have gone to the writable instance, performed the update, and then gone to a replica and looked at the ledger there before the transaction replicated. Here are two options to deal with this. Similar to write concerns, Mongo supports read preferences. An app may choose to read only from the writable instance. This is not an awesome choice to make for every read, because it just burdens the one instance and doesn't make use of the other read-only servers. But this choice can be made on a query-by-query basis. So for the app that our person A is using, we can have person A issue the transfer command to B, and then if that same app is going to immediately ask "are we there yet?" we'll query that same writable instance. But B and anyone else in the world can just chill and read from the read-only instance. They have no basis to expect that the ledger has just been written to.
So as far as they know, the transaction hasn't happened until they see it appear later. We can further relax the demand by creating an application UI that reacts to a write command with "thank you, we will post it shortly" instead of "thank you, we just did everything and here's the new balance". This is a very powerful thing. UI design for highly scalable systems can't insist that all databases be locked just to paint an "all done" on screen. People understand. They were trained by many online businesses already that placing an order does not mean the product is already outside your door waiting (yes, I know, large retailers are working on it... but we're not there yet). The second thing we can do is add some artificial delay to a transaction's visibility on the ledger. The way that works is simply adding some logic such that the query against the ledger never shows customers a transaction newer than, say, 15 minutes whose validation flag is not set. This buys us time in two ways: replication can catch up to all instances by then, and validation rules can run and determine if the transaction should be "negated" with a compensating transaction. In case we do need to "roll back" the transaction, the backend system can place the timestamp of the compensating transaction at the exact same time or 1ms after the original one. Effectively, once A or B visits their ledger, both transactions would be visible and the overall balance "as of now" would reflect no change. The two transactions (attempted/reverted) would be visible, since we do actually account for the attempt. Hold on a second. There's a hole in the story: what if several transfers from A to some accounts are registered, and two independent validators attempt to compute the balance concurrently? Is there a chance that both would conclude non-sufficient funds even though rolling back transaction 100 would free up enough for transaction 117 (some random later transaction)? Yes, there is that chance. But the integrity of the business rule is not compromised, since the prime rule is don't dispense money you don't have. To minimize or eliminate this scenario, we can also assign a single validation process per origin account. This may seem non-scalable, but it can easily be done as a "sharded" distribution. Say we have 11 validation threads (or processing nodes, etc.). We divide the account number space such that each validator is exclusively responsible for a certain range of account numbers. Sounds cunningly similar to Mongo's sharding strategy, doesn't it? Each validator then works in isolation. More capacity needed? Chop the account space into more chunks. So where are we now with the nagging questions? "No joins": Huh? What are those for? "No transactions": You mean no cross-collection and no cross-document transactions? Granted - but you don't always need them either. "No hope for real applications": well... There are more issues and edge cases to slog through, I'm sure. But hopefully this gives you some ideas of how to solve common problems without distributed locking and relational databases. But then again, you can choose relational databases if they suit your problem.
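
A rough sketch of the ledger write plus validation described above, using the MongoDB Java sync driver; the collection layout, account ids, and amounts are made up for illustration, and a production version would add the sharded per-account validator the post goes on to describe.

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import org.bson.Document;
    import java.util.Date;
    import static com.mongodb.client.model.Filters.eq;

    public class LedgerTransfer {
        public static void transfer(MongoCollection<Document> ledger,
                                    String from, String to, long amount) {
            // 1. Append the transfer as a single ledger document (a single-document write, hence atomic).
            Document entry = new Document("ts", new Date())
                    .append("from", from).append("to", to).append("amount", amount);
            ledger.insertOne(entry);

            // 2. Validate: sum everything 'from' has received minus everything it has sent.
            long balance = 0;
            for (Document d : ledger.find(eq("to", from)))   balance += d.getLong("amount");
            for (Document d : ledger.find(eq("from", from))) balance -= d.getLong("amount");

            // 3. If "don't dispense money you don't have" is broken, append a compensating entry.
            if (balance < 0) {
                ledger.insertOne(new Document("ts", entry.getDate("ts"))
                        .append("from", to).append("to", from).append("amount", amount)
                        .append("compensates", entry.getObjectId("_id")));
            }
        }

        public static void main(String[] args) {
            try (MongoClient client = MongoClients.create("mongodb://localhost")) {
                MongoCollection<Document> ledger =
                        client.getDatabase("bank").getCollection("ledger");
                transfer(ledger, "accountA", "accountB", 100L);
            }
        }
    }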

    Read the article

  • The Oracle Platform

    - by Naresh Persaud
    Today’s enterprises typically create identity management infrastructures using ad-hoc, multiple point solutions. Relying on point solutions introduces complexity and a high cost of ownership, leading many organizations to rethink this approach. In a recent worldwide study of 160 companies conducted by Aberdeen Research, there was a discernible shift in this trend as businesses are now looking to move away from the point-solution approach with multiple vendors and adopt an integrated platform approach. By deploying a comprehensive identity and access management strategy using a single platform, companies are saving as much as 48% in IT costs, while reducing audit deficiencies by nearly 35%. According to Aberdeen's research, choosing an integrated suite or "platform" of solutions for identity management from a single vendor can have many advantages over choosing "point solutions" from multiple vendors. The Oracle Identity Management Platform is uniquely designed to offer several compelling benefits to our customers. Shared services: instead of separate solutions for administration, authentication, authorization, audit and so on, Oracle Identity Management offers a set of shared services that can be consumed by each component in the stack and by developers of new applications. Actionable intelligence: the most compelling benefit of the Oracle platform is "actionable intelligence", which means that if there is a compliance violation, the same platform can fix it, and if a user is logging in from an untrusted device, or we detect an attack, the platform can act proactively on that information. Suite interoperability: with the Oracle platform the components all connect and integrate with each other, so if an organization purchases the platform for provisioning and later wants to manage access, the same platform can offer access management, which leads to cost savings. Extensible and configurable: with point solutions you typically get limited ability to extend the tool to address custom requirements, but with the Oracle platform all of the components have a common way to extend the UI and behavior. Find out more about the Oracle Platform approach in this presentation.

    Read the article

  • Using Behavior Trees and Events together

    - by weichsem
    I am beginning to work with behavior trees and am unsure how events should be handled within the tree. Let's say we have a space game where the player is dogfighting with a handful of other ships, some friendly, some not. The player destroys a ship and the rest of the hostile ships should then start to retreat. How should the shipWasDestroyed event affect the other ships' behavior trees so that they start running the retreat behavior? One way I can think of doing this is to have all the conditions I care about be high-level nodes that effectively change the ship's state. This would mean I'd have to check all of these state-change conditions on every frame the behavior tree is run, even if they are very rare occurrences. I'd prefer not to do this for performance and complexity reasons. From looking at the Halo papers on behavior trees, it seems that they handled this by dynamically placing nodes into the tree when the event occurred. It seems like calculating where the new node should go could be problematic depending on the current state of the running behavior. How is this normally handled?
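
    A minimal sketch of the first option described above - the retreat condition sits near the top of a selector and is re-checked every tick, reacting to a flag that the shipWasDestroyed event sets. The node, ship, and world types are all made up for illustration.

    interface Node { boolean tick(Ship ship); }          // returns true when it handled this tick

    class Selector implements Node {                      // runs children in priority order
        private final Node[] children;
        Selector(Node... children) { this.children = children; }
        public boolean tick(Ship ship) {
            for (Node c : children) if (c.tick(ship)) return true;
            return false;
        }
    }

    class RetreatIfAllyDestroyed implements Node {
        public boolean tick(Ship ship) {
            if (!ship.world.allyDestroyed) return false;  // flag set by the shipWasDestroyed event
            ship.retreat();
            return true;
        }
    }

    class Dogfight implements Node {
        public boolean tick(Ship ship) { ship.attackNearestEnemy(); return true; }
    }

    class World { volatile boolean allyDestroyed; }       // the event handler flips this flag

    class Ship {
        final World world;
        final Node behavior = new Selector(new RetreatIfAllyDestroyed(), new Dogfight());
        Ship(World world) { this.world = world; }
        void retreat() { /* steer away from the fight */ }
        void attackNearestEnemy() { /* normal dogfight behaviour */ }
        void update() { behavior.tick(this); }            // called once per frame
    }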

    Read the article

  • Oracle Exalogic Customer Momentum @ OOW'12

    - by Sanjeev Sharma
    [Adapted from here] At Oracle OpenWorld 2012, I sat down with some of the Oracle Exalogic early adopters to discuss the business benefits these businesses were realizing by embracing the engineered-systems approach to data-center modernization and application consolidation. Below is an overview of the four businesses that won the Oracle Fusion Middleware Innovation Award for Oracle Exalogic this year.
    Company: Netshoes. About: Leading online retailer of sporting goods in Latin America. Challenges: Rapid business growth resulted in frequent outages and poor response time of the online storefront; a conventional ad-hoc approach to horizontal scaling resulted in high CAPEX and OPEX; poor performance and unavailability of the online storefront resulted in revenue loss from purchase abandonment. Solution: Consolidated ATG Commerce and Oracle WebLogic running on Oracle Exalogic. Business Impact: Reduced abandonment rates resulting in a two-digit increase in online conversion rates, translating directly into revenue uplift.
    Company: Claro. About: Leading communications services provider in Latin America. Challenges: Support business growth over the next 3-5 years while maximizing re-use of existing middleware and application investments with minimal effort and risk. Solution: Consolidated Oracle Fusion Middleware components (Oracle WebLogic, Oracle SOA Suite, Oracle Tuxedo) and Java applications onto Oracle Exalogic and Oracle Exadata. Business Impact: Improved partner SLAs 7x while improving throughput 5x and response time 35x for Java applications.
    Company: UL. About: Leading safety testing and certification organization in the world. Challenges: Transition from being a non-profit to a profit-oriented enterprise and grow from $1B to $5B in annual revenues in the next 5 years; undertake a massive business transformation by aligning change strategy with execution. Solution: Consolidated Oracle Applications (E-Business Suite, Siebel, BI, Hyperion) and Oracle Fusion Middleware (AIA, SOA Suite) on Oracle Exalogic and Oracle Exadata. Business Impact: Reduced financial and operating risk in re-architecting IT services to support new business capabilities supporting 87,000 manufacturers.
    Company: Ingersoll Rand. About: Leading manufacturer of industrial, climate, residential and security solutions. Challenges: Business continuity risks due to complexity in enforcing consistent operational and financial controls; reactive business decisions reduced the ability to offer differentiation and compete. Solution: Consolidated Oracle E-Business Suite on Oracle Exalogic and Oracle Exadata. Business Impact: Service differentiation with faster order provisioning and a shorter lead-to-cash cycle, translating into higher customer satisfaction and quicker cash conversion.
    Check out the winners of the Oracle Fusion Middleware Innovation awards in other categories here.

    Read the article

  • If a library doesn't provide all my needs, how should I proceed?

    - by 9a3eedi
    I'm developing an application involving math and physics models, and I'd like to use a Math library for things like Matrices. I'm using C#, and so I was looking for some libraries and found Math.NET. I'm under the impression, from past experience, that for math, using a robust and industry-approved third party library is much better than writing your own code. It seems good for many purposes, but it does not provide support for Quaternions, which I need to use as a type. Also, I need some functions in Vector and Matrix that also aren't provided, such as rotation matrices and vector rotation functions, and calculating cross products. At the same time, it provides a lot of functions/classes that I simply do not need, which might mean a lot of unnecessary bloat and complexity. At this rate, should I even bother using the library? Should I write my own math library? Or is it a better idea to stick to the third party library and somehow wrap around it? Perhaps I should make a subclass of the Matrix and Vector type of the library? But isn't that considered bad style? I've also tried looking for other libraries but unfortunately I couldn't find anything suitable.
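
    For the "wrap around it" option, a small composition-based wrapper (sketched here in Java; the real thing would wrap Math.NET's vector type in C#) keeps the library's numerics while adding the missing operations such as the cross product, without subclassing:

    // Vector3 wraps whatever the third-party library provides (represented here by a
    // plain double[]) and adds the operations the library lacks.
    final class Vector3 {
        private final double[] v;                          // stand-in for the wrapped library vector
        Vector3(double x, double y, double z) { this.v = new double[] {x, y, z}; }

        double x() { return v[0]; }
        double y() { return v[1]; }
        double z() { return v[2]; }

        Vector3 cross(Vector3 o) {                         // the operation missing from the library
            return new Vector3(y() * o.z() - z() * o.y(),
                               z() * o.x() - x() * o.z(),
                               x() * o.y() - y() * o.x());
        }
        double dot(Vector3 o) { return x() * o.x() + y() * o.y() + z() * o.z(); }
    }

    Delegating to the library for everything it already does well keeps the wrapper thin, and avoids the fragility of subclassing a type that was never designed for inheritance.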

    Read the article

  • Authorization design-pattern / practice?

    - by Lawtonfogle
    On one end, you have users. On the other end, you have activities. I was wondering if there is a best practice to relate the two. The simplest way I can think of is to have every activity have a role, and assign every user every role they need. The problem is that this gets really messy in practice as soon as you go beyond a trivial system. A way I recently designed was to have users who have roles, roles that have privileges, and activities that require some combination of privileges. For the trivial case, this is more complex, but I think it will scale better. But after I implemented it, I felt like it was overkill for the system I had. Another option would be to have users who have roles, and activities that require a certain role to perform, with many activities sharing roles. A more complex variant of this would give activities many possible roles, of which you only need one. And an even more complex variant would be to allow logical statements of role ownership to use an activity (e.g. must have A and (B exclusive-or C) and must not have D). I could continue to list more, but I think this already gives a picture. And many of these have trade-offs. But in software design there are oftentimes solutions that, while perhaps not perfect in every possible case, are so clearly top of the pack that it isn't even considered opinion-based (e.g. how to store passwords: plain text is worst, hashing is better, hashing with a salt better still, despite the increased complexity of each level; or again, Smart UI designs for applications are bad, even if it is subjective what the best design is). So, is there a best practice for authorization design that is not purely opinion-based/subjective?
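
    A minimal sketch of the users-have-roles, roles-have-privileges, activities-require-privileges variant described above; all names are illustrative.

    import java.util.*;

    enum Privilege { VIEW_REPORTS, EDIT_REPORTS, MANAGE_USERS }

    final class Role {
        final Set<Privilege> privileges = EnumSet.noneOf(Privilege.class);
        Role(Privilege... ps) { Collections.addAll(privileges, ps); }
    }

    final class User {
        final Set<Role> roles = new HashSet<>();
        Set<Privilege> privileges() {
            Set<Privilege> all = EnumSet.noneOf(Privilege.class);
            for (Role r : roles) all.addAll(r.privileges);   // union of all role privileges
            return all;
        }
    }

    final class Activity {
        final Set<Privilege> required = EnumSet.noneOf(Privilege.class);
        Activity(Privilege... ps) { Collections.addAll(required, ps); }
        boolean permittedFor(User u) { return u.privileges().containsAll(required); }
    }

    // Usage: an "auditor" role can view but not edit.
    //   Role auditor = new Role(Privilege.VIEW_REPORTS);
    //   user.roles.add(auditor);
    //   new Activity(Privilege.VIEW_REPORTS).permittedFor(user)  -> true
    //   new Activity(Privilege.EDIT_REPORTS).permittedFor(user)  -> false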

    Read the article

  • What kind of physics to choose for our arcade 3D MMO?

    - by Nick
    We're creating an action MMO using Three.js (WebGL) with an arcadish feel, and implementing physics for it has been a pain in the butt. Our game has terrain that the character will walk on, and in the future, 3D objects (a house, a tree, etc.) that will have collisions. In terms of complexity, the physics engine should be like World of Warcraft. We don't need friction, bouncing behaviour or anything more complex like joints, etc. Just gravity. I have managed to implement terrain physics so far by casting a ray downwards, but it does not take into account possible 3D objects. Note that these 3D objects need to have convex collisions, so our artists can create a 3D house and the player can walk inside but can't walk through the walls. How do I implement proper collision detection with 3D objects like in World of Warcraft? Do I need an advanced physics engine? I read about Physijs which looks cool, but I fear that it may be overkill to implement that for our game. Also, how does WoW do it? Do they have a separate raycasting system for the terrain? Or do they treat the terrain like any other convex mesh? A screenshot of our game so far:
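
    A minimal sketch of the approach the question already uses for terrain (sample the ground height under the player, which is what the downward ray effectively does), plus simple axis-aligned blockers standing in for convex obstacles like house walls. The types are illustrative, not Three.js or WoW internals.

    final class Terrain {
        double heightAt(double x, double z) { return 0.0; }   // sample your heightmap here
    }

    final class Box {        // axis-aligned blocker (a wall, a house footprint, ...)
        final double minX, maxX, minZ, maxZ;
        Box(double minX, double maxX, double minZ, double maxZ) {
            this.minX = minX; this.maxX = maxX; this.minZ = minZ; this.maxZ = maxZ;
        }
        boolean contains(double x, double z) {
            return x >= minX && x <= maxX && z >= minZ && z <= maxZ;
        }
    }

    final class Player {
        double x, y, z, vy;

        void step(double dt, Terrain terrain, java.util.List<Box> blockers,
                  double newX, double newZ) {
            // horizontal move is rejected if it would end inside a blocker (no walking through walls)
            boolean blocked = false;
            for (Box b : blockers) if (b.contains(newX, newZ)) { blocked = true; break; }
            if (!blocked) { x = newX; z = newZ; }

            // vertical: apply gravity, then clamp to the terrain height under the player
            vy -= 9.8 * dt;
            y += vy * dt;
            double ground = terrain.heightAt(x, z);
            if (y < ground) { y = ground; vy = 0; }
        }
    }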

    Read the article

  • ArchBeat Link-o-Rama for 2012-09-12

    - by Bob Rhubart
    15 Lessons from 15 Years as a Software Architect | Ingo Rammer - In this presentation from the GOTO Conference in Copenhagen, Ingo Rammer shares 15 tips regarding people, complexity and technology that he learned doing software architecture for 15 years.
    Adding a runtime picker to a taskflow parameter in WebCenter | Yannick Ongena - Oracle ACE Yannick Ongena shows how to create an Oracle WebCenter popup to allow users to "select items or do more complex things."
    Oracle Identity Manager 11g R2 Catalog | Daniel Gralewski - Oracle Fusion Middleware A-Team blogger Daniel Gralewski shares a detailed overview of the new Catalog feature, one of the most talked about features in the latest release of Oracle Identity Manager 11g.
    Cloud API and service designers, stop thinking small | Cloud Computing - InfoWorld - "The focus must shift away from fine-grained APIs that provide some type of primitive service, such as pushing data to a block of storage or perhaps making a request to a cloud-rooted database," says InfoWorld's David Linthicum. "To go beyond primitives, you must understand how these services should be used in a much larger architectural context. In other words, you need to understand how businesses will employ these services to form real workplace solutions -- inside and outside the enterprise."
    Oracle Solaris 8 P2V with Oracle database 10.2 and ASM | Orgad Kimchi - Orgad Kimchi's technical post illustrates the migration of "a Solaris 8 physical system, with Oracle database version 10.2.0.5 with ASM file-system located on a SAN storage, into a Solaris 8 branded zone inside a Solaris 10 guest domain on top of a Solaris 11 control domain."
    Thought for the Day: "The hardest single part of building a software system is deciding precisely what to build." - Fred Brooks (Source: SoftwareQuotes.com)

    Read the article

  • How do you KISS?

    - by Conor
    The KISS principle is a highly quoted design mantra. The aim of this principle is to stamp out unnecessary complexity on a project. This is a good thing, saving time, energy and money. It can lead to a relatively stress-free implementation and a simple, elegant, maintainable end product. A lot of discussion on KISS provides mechanisms to simplify requirements, design and implementation. Things that spring to mind include: avoid scope creep; simple, obvious design and code; minimal run-time dependencies; refactoring to maintain simplicity; etc. However there are a lot of implicit things that we do to KISS. Examples: small team sizes; minimal management layers; tidy desk; mastery of a single IDE; clear, concise error messages; scripts to automate/encapsulate tasks; etc. What KISS practices do you apply? How have they been of benefit? I'm especially interested in non-obvious practices.

    Read the article

  • Congratulations to 2012 Innovation Award winners in BPM category

    - by Manoj Das
    Last year many of our customers went live on BPM 11g. It is my extreme pleasure to congratulate two of them – Amadeus and Navistar – for being awarded Oracle Fusion Middleware Innovation Award at Oracle OpenWorld 2012. We invited our customers to submit their most innovative BPM implementations that have delivered substantiated value to them. This year we saw more than 20 submissions from our customers seeing significant business value from their live BPM 11g deployments. The submissions came from across the world, spanning various industry verticals including manufacturing, healthcare, logistics, Hi-Tech, Public Sector, Education and covering many process usage patterns. Award submissions were evaluated based on the uniqueness of their business case, business benefits, level of impact relative to the size of the organization, complexity and magnitude of implementation, and the originality of architecture. Amadeus Team Receiving Innovation Award from Hasan Rizvi Congratulations to Amadeus and Navistar and their teams on being recognized from among some very strong submissions and more importantly for the business value delivered. It is an honor to be part of your success and to play a small role in the innovation you drive. Navistar is a leading truck manufacturing company which produces International® brand commercial and military trucks, MaxxForce® brand diesel engines, IC Bus™ brand school and commercial buses, and Navistar RV brands of recreational vehicles. The company also provides truck and diesel engine service parts. Amadeus is a leading transaction processor for the global travel and tourism industry, providing transaction processing power and technology solutions to both travellers and travel providers. Both Navistar and Amadeus have leveraged Oracle BPM Suite to improve visibility into their business and made their business more agile and efficient. We congratulate them again and wish them continued success in their business and future BPM initiatives.

    Read the article

  • Triangulating a partially triangulated mesh (2D)

    - by teodron
    Referring to the above exhibits, this is the scenario I am working with: starting with a planar graph (in my case, a 2D mesh) with a given triangulation, based on a certain criterion, the graph nodes are labeled as RED and BLACK. (A) a subgraph containing all the RED nodes (with edges between only the directly connected neighbours) is formed (note: although this figure shows a tree forming, it may well happen that the subgraph contain loops) (B) Problem: I need to quickly build a triangulation around the subgraph (e.g. as shown in figure C), but under the constraint that I have to keep the already present edges in the final result. Question: Is there a fast way of achieving this given a partially triangulated mesh? Ideally, the complexity should be in the O(n) class. Some side-remarks: it would be nice for the triangulation algorithm to take into account a certain vertex priority when adding edges (e.g. it should always try to build a "1-ring" structure around the most important nodes first - I can implement iteratively such a routine, but it's O(n^2) ). it would also be nice to reflect somehow the "hop distance" when adding edges: add edges first between the nodes that were "closer" to each other given the start topology. Nevertheless, disregarding the remarks, is there an already known scenario similar to this one where a triangulation is built upon a partially given set of triangles/edges?

    Read the article

  • For an AJAX-heavy web application, which would be better: SOAP or REST?

    - by coder
    I'm building an AJAX-heavy application (the client side is strictly HTML/CSS/JS) which will be getting all the data and using server business logic via web services. I know REST seems to be the hot topic, but I can't find any good arguments. The main argument seems to be its "light-weight" nature. My impression so far is that WSDL/SOAP-based services are more expressive and allow for a more complex transfer of data. It appears that SOAP would be more useful in the application I'm building, where the only code consuming the services will be the JS downloaded to the client browser. REST, on the other hand, seems to have a smaller entry barrier and so can be more useful for services like Twitter's in allowing other developers to consume these services easily. Also, REST seems to be better suited for simple data transfers. So in summary SOAP is useful for complex data transfer and REST is useful for simple data transfer. I'm currently under the impression that using SOAP would be best due to the complexity of the messages, but perhaps there are other factors. What are your thoughts on the pros/cons of SOAP/REST for a heavy AJAX web app?

    Read the article

  • Implementing the NetBeans Project API on Maven in IntelliJ IDEA

    - by Geertjan
    James McGivern, one of the speakers I met at JAX London, is creating media software on the NetBeans Platform. However, he's using Maven and IntelliJ IDEA and one of the features he needs is project support, i.e., the project infrastructure that's part of NetBeans IDE. The two documents that describe the NetBeans Project API are these: http://platform.netbeans.org/tutorials/nbm-projecttype.html http://netbeans.dzone.com/how-create-maven-nb-project-type By combining the above two, you'll understand how to create a project infrastructure on top of the NetBeans Platform with Maven. However, an additional step of complexity is added when IntelliJ IDEA is included into the mix and therefore I created the following screencast which, in 15 minutes, puts all the pieces together. Be aware that I'm probably not using IntelliJ IDEA and Maven as optimally as I could and I'm publishing this at least partly so that the errors of my ways can be pointed out to me. But, first and foremost, this is especially for you James:  Note: Intentionally no sound, only callouts explaining what I'm doing. You'll probably need to pause the movie here and there to absorb the text; for details on the text, see the two links referred to above.

    Read the article

  • Companies and Ships

    - by TechnicalWriting
    I have worked for small, medium, large, and extra large companies and they have something in common with ships. These metaphors have been used before, I know, but I will have a go at them.
    The small company is like a speed boat, exciting and fast, and can turn on a dime, literally. Captain and crew share a lot of the work. A speed boat has a short range and needs to refuel a lot. It has difficulty getting through bad weather. (Small companies often live quarter to quarter. By the way, if a larger company is living quarter to quarter, it is taking on water.)
    The medium company is like a battleship. It can maneuver, has a longer range, and the crew is focused on its mission. Its main concerns are the other battleships trying to blow it out of the water, but it can respond quickly. Bad weather can jostle it, but it can get through most storms.
    The large company is like an aircraft carrier: a floating city. It is well-provisioned and can carry a specialized load for a very long range. Because of its size and complexity, it has to be well-organized to be effective and most of its functions are specialized (with little to no functional cross-over). There are many divisions and layers between Captain and crew. It is not very maneuverable; it has to set its course well in advance and have a plan of action.
    The extra large company is like a cruise liner. It also has to be well-organized and changes in direction are often slow. Some of the people are hard at work behind the scenes to run the ship; others can be along for the ride. They sail the same routes over and over again (often happily) with the occasional cosmetic face-lift to the ship and entertainment. It should stay in warm, friendly waters and avoid risky speed through fields of icebergs.
    I have enjoyed my career on the various Ships of Technical Writing, but I get most of my juice from the battleship, where I am closer to the campaign and my contributions have a greater impact on success.
    Mark Metcalfe, www.linkedin.com/in/MarkMetcalfe

    Read the article

  • Being rocked...

    - by ZacHarlan
    After almost four and a half years, I finally escaped from the world of telemarketing. I'm now at a place that writes really good code, values testing, does routine code reviews, and collaborates so continuously and effectively that somebody should make a documentary about it! Today alone, I had two really smart and well-respected developers go line by line through my code and show me how to make it better. Seriously, people pay really good money for something like this and they don't get near the quality of feedback I got! +1 for me finally getting to a point in my career where I get to work with some of the best of the best in the software world! I've been rocked by the fact that places like this actually exist. I've been Rocked by the sheer size, complexity and simplicity of our website. Most importantly I've been ROCKED by the fact that this many smart people check their egos at the door, gel together and look for ways to make software better than how they found it. This is how to grow a business with tech... hire great people and watch them go! Seriously, bravo.

    Read the article

  • Questions on a particular algorithm

    - by paul smith
    Upon searching for a fast prime algorithm, I stumbled upon this:

    public static boolean isP(long n) {
        if (n==2 || n==3) return true;
        if ((n&0x1)==0 || n%3==0 || n<2) return false;
        long root=(long)Math.sqrt(n)+1L;
        // we check just numbers of the form 6*k+1 and 6*k-1
        for (long k=6;k<=root;k+=6) {
            if (n%(k-1)==0) return false;
            if (n%(k+1)==0) return false;
        }
        return true;
    }

    My questions are: Why is long being used everywhere instead of int? Because with a long type the argument could be much larger than Integer.MAX, thus making the method more flexible? In the second 'if', is n&0x1 the same as n%2? If so, why didn't the author just use n%2? To me it's more readable. In the line that sets the 'root' variable, why add the 1L? What is the run-time complexity? Is it O(sqrt(n/6)) or O(sqrt(n)/6)? Or would we just say O(n)?

    Read the article

  • You may be tempted by IaaS, but you should PaaS on that or your database cloud journey will be a short one

    - by B R Clouse
    Before we examine Consolidation, the next step in the journey to cloud, let's take a short detour to address a critical choice you will face at the outset of your journey: whether to deploy your databases in virtual machines or not. A common misconception we've encountered is the belief that moving to cloud computing can be accomplished by simply hosting one's current operating environment as-is within virtual machines, and then stacking those VMs together in a consolidated environment. This solution is often described as "Infrastructure as a Service" (IaaS) because the building block for deployments is a VM, which behaves like a full complement of infrastructure. This approach is easy to understand and may feel like a good first step, but it won't take your databases very far in the journey to cloud computing. In fact, if you follow the IaaS fork in the road, your journey will end quickly, without realizing the full benefits of cloud computing. The better option is to rationalize the deployment stack so that VMs are needed only for exceptional cases. By settling on a standard operating system and patch level, you create an infrastructure that potentially all of your databases can share. Now, the building block will be database instances or possibly schemas within databases. These components are the platforms on which you will deploy workloads, hence this is known as "Platform as a Service" (PaaS). PaaS opens the door to higher degrees of consolidation than IaaS, because with PaaS you will not need to accommodate the footprint (operating system, hypervisor, processes, ...) that each VM brings with it. You will also reduce your maintenance overhead if you move forward without the VMs and their O/Ses to patch and monitor. So while IaaS simply shuffles complex and varied environments into VMs, PaaS actually reduces complexity by rationalizing to the smallest possible set of components. Now we're ready to look at the consolidation options that PaaS provides -- in our next blog posting.

    Read the article
