Search Results

Search found 10244 results on 410 pages for 'space complexity'.


  • Oracle buys Secerno

    - by Paulo Folgado
    Adds Heterogeneous Database Firewall to Oracle's Industry-Leading Database Security Solutions

    Redwood Shores, CA - May 20, 2010

    News Facts
    Oracle has agreed to acquire Secerno, a provider of database firewall solutions for Oracle and non-Oracle databases. Organizations require a comprehensive security solution that includes database firewall functionality to prevent sophisticated attacks from reaching databases. Secerno's solution adds a critical defensive layer of security around databases, which blocks unauthorized activity in real time. Secerno's products are expected to augment Oracle's industry-leading portfolio of database security solutions, including Oracle Advanced Security, Oracle Database Vault, and Oracle Audit Vault, to further ensure data privacy, protect against threats, and enable regulatory compliance. The combination of Oracle and Secerno underscores Oracle's commitment to providing customers with the most comprehensive and advanced security offering, helping reduce the cost and complexity of securing their information throughout the enterprise. The transaction is expected to close before the end of June 2010. Financial details of the transaction were not disclosed.

    Supporting Quotes
    "The Secerno acquisition is in direct response to increasing customer challenges around mitigating database security risk," said Andrew Mendelsohn, senior vice president, Oracle Database Server Technologies. "Secerno's database firewall product acts as a first line of defense against external threats and unauthorized internal access with a protective perimeter around Oracle and non-Oracle databases. Together, Oracle's complete set of database security solutions and Secerno's technology will provide customers with the ability to safeguard their critical business information."

    "As a provider of database firewall solutions that help customers safeguard their enterprise databases, Secerno is a natural addition to Oracle's industry-leading database security solutions," said Steve Hurn, CEO, Secerno. "Secerno has been providing enterprises and their IT security departments strong assurance that their databases are protected from attacks and breaches. We are excited to bring Secerno's domain expertise to Oracle, and to ensure continuity and success for our current customers, partners, and prospects."

    Support Resources: About Oracle and Secerno | General Presentation | FAQ | Customer Letter | Partner Letter

    Read the article

  • How to start a task before networking?

    - by user1252434
    I've written an upstart task that modifies /etc/network/interfaces (actually a file sourced into it). Which "start on" condition do I need to declare to make my task run before any networking jobs? I've tried "start on starting networking", but that's apparently too late: when I log in after booting I can see that the changes were written, but they are obviously not used; the new config states a static IP, yet the boot process waits for a non-existent DHCP server (from the old config) to time out. I've also tried "start on starting network-interface INTERFACE=eth0", which didn't work either; IIRC there was an error in the log saying the change couldn't be written. Background: I need a VM template that can be cloned and the clones configured through a script. Among other settings, I need to give them a static IP address so I can reach them from the host. I use guestfish to write a config file to one of the virtual disks and let a script apply these settings to the system. I don't want that disk to contain an actual system settings file, and I can't modify /etc directly because that disk is shared (copy-on-write/diff) among the clones, and guestfish apparently doesn't support that type of image. I could let the clones use DHCP and set up a server that assigns IPs by MAC address, but I'm wary of the complexity. I could also add another virtual disk just for configuration files, but if possible I'd prefer to store settings directly on the system disk image. Software used: Ubuntu Server 12.04 on VirtualBox; the configuration modifier is a self-written Ruby script.
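    For anyone experimenting with this, a minimal upstart job sketch follows. The stanza shapes (description, start on, task, script) are standard upstart, but the exact event expression and the helper path are assumptions that will likely need tuning per release, since upstart's event ordering differs between Ubuntu versions.

        # /etc/init/rewrite-interfaces.conf -- hedged sketch; the event mix is an assumption
        description "rewrite /etc/network/interfaces before interfaces come up"

        # "starting network-interface" should hold each interface job until this task
        # finishes; "local-filesystems" guards against /etc not being writable yet,
        # one plausible cause of the logged write error mentioned above.
        start on (starting network-interface and local-filesystems)
        task

        script
            # hypothetical path to the self-written Ruby script from the question
            /usr/local/sbin/apply-static-ip.rb
        end script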

    Read the article

  • Defense Manpower Data Center Wins Award for Excellence in the Workforce

    - by Peggy Chen
    The Defense Manpower Data Center milConnect website recently won the 2012 Excellence.gov Award for Excellence in the Workforce. milConnect is a centralized online resource that gives military service members (active and retired) and their families (over 42 million people in total) quick access to their profiles, health care enrollments, benefits, and other military-related topics. An easy-to-use, safe, and secure website, milConnect also provides service members with convenient access to their personnel and service-related information. The self-service website allows users to quickly and easily find and, where applicable, update their information in the Defense Enrollment Eligibility Reporting System (DEERS), and it transmits information to and from one reliable source safely and securely. The Defense Manpower Data Center (DMDC) maintains the largest, most comprehensive central repository of personnel, manpower, casualty, pay, entitlement, personnel security, person identity and attributes, survey, testing, training, and financial data in the Department of Defense (DoD); it is one of the largest systems of record in the world. milConnect faced the challenge of modernizing the user experience for over 42 million users. With records in over 22 applications and 25 interfaces across hundreds of existing systems, milConnect needed to reduce the complexity of multiple authentication sources and consolidate access to existing systems holding sensitive information. It accomplished this using a service-oriented architecture, enterprise security, and access and identity management for self-service access on a massive scale. By providing 24x7x365 secure access and handling over 5 million transactions daily, milConnect, built on Oracle WebCenter, has not only streamlined and improved the customer experience for military personnel and their families; it has done so while cutting costs, enabling self-service access, and promoting electronic government. Congrats to the Defense Manpower Data Center and milConnect!

    Read the article

  • 10 Windows Tweaking Myths Debunked

    - by Chris Hoffman
    Windows is big, complicated, and misunderstood. You'll still stumble across bad advice from time to time when browsing the web. These Windows tweaking, performance, and system maintenance tips are mostly just useless, but some are actively harmful. Luckily, most of these myths have been stomped out on mainstream sites and forums. However, if you start searching the web, you'll still find websites that recommend you do these things.

    Erase Cache Files Regularly to Speed Things Up
    You can free up disk space by running an application like CCleaner, another temporary-file-cleaning utility, or even the Windows Disk Cleanup tool. In some cases, you may even see an old computer speed up when you erase a large amount of useless files. However, running CCleaner or similar utilities every day to erase your browser's cache won't actually speed things up. It will slow down your web browsing, as your web browser is forced to redownload the files all over again and reconstruct the cache you regularly delete. If you've installed CCleaner or a similar program and run it every day with the default settings, you're actually slowing down your web browsing. Consider at least preventing the program from wiping out your web browser cache.

    Enable ReadyBoost to Speed Up Modern PCs
    Windows still prompts you to enable ReadyBoost when you insert a USB stick or memory card. On modern computers, this is completely pointless: ReadyBoost won't actually speed up your computer if you have at least 1 GB of RAM. If you have a very old computer with a tiny amount of RAM (think 512 MB), ReadyBoost may help a bit. Otherwise, don't bother.

    Open the Disk Defragmenter and Manually Defragment
    On Windows 98, users had to manually open the defragmentation tool and run it, ensuring no other applications were using the hard drive while it did its work. Modern versions of Windows can defragment your file system while other programs are using it, and they automatically defragment your disks for you. If you're still opening the Disk Defragmenter every week and clicking the Defragment button, you don't need to: Windows is doing it for you unless you've told it not to run on a schedule. Modern computers with solid-state drives don't need to be defragmented at all.

    Disable Your Pagefile to Increase Performance
    When Windows runs out of empty space in RAM, it swaps out data from memory to a pagefile on your hard disk. If a computer doesn't have much memory and it's running slowly, it's probably moving data to the pagefile or reading data from it. Some Windows geeks seem to think that the pagefile is bad for system performance and disable it completely. The argument seems to be that Windows can't be trusted to manage a pagefile and won't use it intelligently, so the pagefile needs to be removed. As long as you have enough RAM, it's true that you can get by without a pagefile. However, if you do have enough RAM, Windows will rarely use the pagefile anyway. Tests have found that disabling the pagefile offers no performance benefit.

    Enable CPU Cores in MSConfig
    Some websites claim that Windows may not be using all of your CPU cores, or that you can speed up your boot time by increasing the number of cores used during boot. They direct you to the MSConfig application, where you can indeed select an option that appears to increase the number of cores used. In reality, Windows always uses the maximum number of processor cores your CPU has. (Technically, only one core is used at the beginning of the boot process, but the additional cores are quickly activated.) Leave this option unchecked. It's just a debugging option that lets you set a maximum number of cores, which would be useful if you wanted to force Windows to use only a single core on a multi-core system; all it can do is restrict the number of cores used.

    Clean Your Prefetch to Increase Startup Speed
    Windows watches the programs you run and creates .pf files in its Prefetch folder for them. The Prefetch feature works as a sort of cache: when you open an application, Windows checks the Prefetch folder, looks at the application's .pf file (if it exists), and uses it as a guide to start preloading data that the application will use. This helps your applications start faster. Some Windows geeks have misunderstood this feature. They believe that Windows loads these files at boot, so your boot time will slow down due to Windows preloading the data specified in the .pf files. They also argue that you'll build up useless files as you uninstall programs and .pf files are left over. In reality, Windows only loads the data in these .pf files when you launch the associated application, and it only stores .pf files for the 128 most recently launched programs. If you were to regularly clean out the Prefetch folder, not only would programs take longer to open because they aren't preloaded, Windows would also have to waste time recreating all the .pf files. You could also modify the PrefetchParameters setting to disable Prefetch, but there's no reason to do that. Let Windows manage Prefetch on its own.

    Disable QoS to Increase Network Bandwidth
    Quality of Service (QoS) is a feature that allows your computer to prioritize its traffic. For example, a time-critical application like Skype could choose to use QoS and prioritize its traffic over a file-downloading program, so your voice conversation would work smoothly even while you were downloading files. Some people incorrectly believe that QoS always reserves a certain amount of bandwidth and that this bandwidth sits unused until you disable it. This is untrue. In reality, 100% of bandwidth is normally available to all applications unless a program chooses to use QoS. Even if a program does choose to use QoS, the reserved bandwidth will be available to other programs unless the program is actively using it. No bandwidth is ever set aside and left empty.

    Set DisablePagingExecutive to Make Windows Faster
    The DisablePagingExecutive registry setting is set to 0 by default, which allows drivers and system code to be paged to the disk. When set to 1, drivers and system code are forced to stay resident in memory. Once again, some people believe that Windows isn't smart enough to manage the pagefile on its own, and that changing this option will force Windows to keep important files in memory rather than stupidly paging them out. If you have more than enough memory, changing this won't really do anything. If you have little memory, changing this setting may force Windows to push programs you're using to the pagefile rather than push unused system files there, which would slow things down. This is an option that may be helpful for debugging in some situations, not a setting to change for more performance.

    Process Idle Tasks to Free Memory
    Windows does things, such as creating scheduled System Restore points, when you step away from your computer. It waits until your computer is "idle" so it won't slow your computer down and waste your time while you're using it. Running the "Rundll32.exe advapi32.dll,ProcessIdleTasks" command forces Windows to perform all of these tasks while you're using the computer. This is completely pointless and won't help free memory or anything like that; all you're doing is forcing Windows to slow your computer down while you're using it. This command only exists so benchmarking programs can force idle tasks to run before performing benchmarks, ensuring idle tasks don't start running and interfere with the benchmark.

    Delay or Disable Windows Services
    There's no real reason to disable Windows services anymore. There was a time when Windows was particularly heavy and computers had little memory (think Windows Vista and those "Vista Capable" PCs Microsoft was sued over). Modern versions of Windows like Windows 7 and 8 are lighter than Windows Vista, and computers have more than enough memory, so you won't see any improvement from disabling the system services included with Windows. Some people argue against disabling services, however, and instead recommend setting services from "Automatic" to "Automatic (Delayed Start)". By default, the Delayed Start option just starts services two minutes after the last "Automatic" service starts. Setting services to Delayed Start won't really speed up your boot time, as the services still need to start; in fact, it may lengthen the time it takes to get a usable desktop, as services will still be loading two minutes after booting. Most services can load in parallel, and loading the services as early as possible results in a better experience. The "Delayed Start" feature is primarily useful for system administrators who need to ensure a specific service starts later than another one.

    If you ever find a guide that recommends you set a little-known registry setting to improve performance, take a closer look: the change is probably useless. Want to actually speed up your PC? Try disabling the useless startup programs that run at boot, increase your boot time, and consume memory in the background. That is a much better tip than doing any of the above, especially considering most Windows PCs come packed to the brim with bloatware.

    Read the article

  • How can I be sure that professional programming is not for me?

    - by user17766
    Hello everybody. I love programming, developing projects as a hobby, and learning new concepts, but I'm finding my current job harder and harder despite having learned many things well. I can hardly even understand my assigned tasks, and I keep asking myself why I'm struggling so much. Maybe it isn't my fault? Our architect doesn't spend enough time explaining the complicated parts of the project to us; or maybe I'm just not smart enough to understand them quickly. I'm not sure our architect even knows what kind of hell he is creating: seeing three-level generic types and four-to-five-level generic inheritance in the domain model objects really makes me think so. It looks like abusing concepts rather than reducing complexity, and I suspect he hadn't experienced a project this big before, given how confused he gets by its problems. Maybe I'm not in the right company? Maybe I'm not a good programmer? Maybe I'm really stupid? Being good with programming concepts isn't enough to deal with a big project's complications, so should someone tell me that even a good programmer still has to put in a lot of effort to adapt to any big project? I also had bad experiences at my previous job. My professional experience amounts to only a few months, but I spent two years learning and coding for fun, and I can honestly say I have solid skills in OOP, design patterns, and coding standards, plus deep knowledge of the language I currently use. Sometimes I think about leaving professional programming, working some lame job, and keeping programming as a hobby. Looking forward to your suggestions and insights.

    Read the article

  • EVENT RECAP: Oracle Day & Product Fair - Atlanta

    - by cwarticki
    Are you attending any of the Oracle Days and other events? They are fantastic! Keep track of Oracle events by following @OracleEvents on Twitter, and stay in the know by subscribing to one of the several Oracle newsletters; those will also keep you posted on upcoming in-person and webcast events. From the Oracle Events website, simply navigate to your geography and refine your options to locate what interests you. You can also perform keyword searches. Yesterday, I had the opportunity to participate in the Oracle Day & Product Fair in Atlanta, Georgia. Thanks to those who stopped by to ask your support questions and watch me demo My Oracle Support features and best practices. It was a fantastic turnout, and the Buckhead Theatre served as an excellent venue. It was standing room only for the double keynotes on topics of interest to our customers: Navigating Complexity by Simplifying I.T., and Oracle Exadata X3 - Transforming Data Management. The Product Fair was staffed by many Oracle professionals and our partners too. It was great meeting new people like the team representing OAUG. Many thanks to our sponsors: BIAS, Cloudera, Intel, and TekStream Solutions. Come attend one of the many Oracle Days and other events planned for you. I'll be attending the one in Ft. Lauderdale, FL on November 16th. See you there. -Chris Warticki, Global Customer Management

    Read the article

  • Unlock More Value: Oracle Platinum Services at Oracle OpenWorld

    - by Oracle OpenWorld Blog Team
    In a bold move to provide even more value to customers who adopt the extreme performance of Oracle Exalogic Elastic Cloud, Oracle Exadata, and Oracle SPARC SuperCluster, Oracle recently launched a set of enhanced services that help IT managers decrease the cost and complexity of supporting their IT environments: Oracle Platinum Services. Learn more by attending the Oracle Platinum Services: Unlock More Value with Advanced Support session at Oracle OpenWorld. In this session, Oracle shares how to achieve maximum performance and lower total cost of ownership through certified configurations for Oracle engineered systems and Oracle Platinum Services. Hear about the industry-leading Oracle Platinum Services offering and the tools already used by Oracle customers, including remote fault monitoring, faster response times, and patching services. Vincent Biddlecombe, chief technology officer of Transplace, a third-party logistics provider, is seeing results already. He says, “The Platinum Services offering has been a great addition to Oracle Premier Support. This level of support is unique in my experience. We saw results very quickly. Our experience has exceeded my expectations.” The patching services have enabled Transplace to stay up to date on the latest improvements. According to Biddlecombe, “We've gone from being eight patches behind to completely up to date, and I'm extremely happy.” Visit us on Monday, October 1 at 12:15 p.m. to become familiar with the industry-leading Oracle Platinum Services. For more information on Oracle Customer Support Services sessions and events, go to Oracle Customer Support Services.

    Read the article

  • Webinar: NoSQL - Data Center Centric Application Enablement

    - by Charles Lamb
    NoSQL - Data Center Centric Application Enablement

    AUGUST 6 WEBINAR

    About the Webinar
    The growth of data center infrastructure is trending out of bounds, along with the pace of user activity and data generation in this digital era. At the same time, the nature of typical application deployment within the data center is changing to accommodate new business needs. Those changes introduce complexities in application deployment architecture and design, which cascade into requirements for a new generation of database technology (NoSQL) destined to ease that complexity. This webcast will discuss the modern data center's data-centric applications, the complexities that must be dealt with, and common architectures used to describe and prescribe new data-center-aware services. We'll look at the practical issues in implementation and give an overview of the current state of the art in NoSQL database technology for solving the problems of data center awareness in application development. REGISTER NOW >> MORE INFORMATION >> NOTE! All attendees will be entered to win a guest pass to the NoSQL Now! 2013 Conference & Expo.

    About the Speaker
    Robert Greene, Oracle NoSQL Product Management
    Robert Greene is a principal product manager / strategist for Oracle's NoSQL Database technology. Prior to Oracle he was the VP of Technology for a NoSQL database company, Versant Corporation, where he set the strategy for alignment with Big Data technology trends, resulting in the acquisition of the company by Actian Corp in 2012. Robert has been an active member of both commercial and open source initiatives in the NoSQL and object-relational mapping spaces for the past 18 years, developing software, leading project teams, authoring articles, and presenting at major conferences on these topics. In an earlier life, Robert was an electronics engineer developing first-generation wireless, spread-spectrum-based security systems.

    Read the article

  • Ajax application: using SOAP vs REST ?

    - by coder
    I'm building an Ajax-heavy application (client-side strictly HTML/CSS/JS) which will be getting all of its data, and using server business logic, via web services. I know REST seems to be the hot topic, but I can't find any good arguments for it; the main argument seems to be that it's "lightweight". My impression so far is that WSDL/SOAP-based services are more expressive and allow for more complex transfer of data. It appears that SOAP would be more useful in the application I'm building, where the only code consuming the services will be the JS downloaded in the client browser. REST, on the other hand, seems to have a smaller entry barrier and so can be more useful for services like Twitter, allowing other developers to consume those services easily. Also, REST seems to be better suited for simple data transfers. So, in summary: SOAP is useful for complex data transfer, and REST is useful for simple data transfer. I'm currently under the impression that using SOAP would be best due to the complexity of the messages, but perhaps there are other factors. What are your thoughts on the pros and cons of SOAP vs. REST for a heavy Ajax web app? EDIT: While the WSDL is in XML, the data I'm transferring back and forth is actually JSON. It just appears more natural to use WSDL/SOAP here due to the nature of the app. The verbs GET and POST may not be enough; I may want to say something like processQueue or executeTimer. This is why my conclusion has been that WSDL/SOAP would be good for bridging a complex layer between two applications (client and server), whereas REST would be better, due to its simplicity, for allowing many developer-users to consume resources programmatically. So you could say the choice falls along two lines: Will the app be verb-oriented (completing tasks: use SOAP) or noun-oriented (consuming resources: use REST)? Will the API be consumed by few developers or many developers (REST is strong for many developers)? Since such an Ajax-heavy app would potentially use many verbs and would only be used by the client developer, it appears SOAP/WSDL would be the best fit.
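    To make the noun-vs-verb split concrete, here is a small hedged sketch (Python with the requests library; the endpoints and payload shapes are hypothetical, not any real API):

        import requests

        BASE = "https://api.example.com"

        # Noun-oriented (REST): URLs name resources, the HTTP verb is the action.
        queue = requests.get(f"{BASE}/queues/42").json()   # read a resource
        requests.delete(f"{BASE}/queues/42/items/7")       # act through the verb

        # Verb-oriented (SOAP-style RPC over HTTP): one endpoint, and the action
        # is named in the payload -- the processQueue / executeTimer cases above.
        requests.post(f"{BASE}/service",
                      json={"op": "processQueue", "queueId": 42})
        requests.post(f"{BASE}/service",
                      json={"op": "executeTimer", "timerId": 3})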

    Read the article

  • links for 2011-02-22

    - by Bob Rhubart
    Eleven BI trends for 2011 | ITWeb Business Intelligence (tags: ping.fm)

    The Buttso Blathers: WebLogic Schema Files
    Buttso shares a link. (tags: oracle weblogic)

    Cloud Computing & Enterprise Architecture | Open Group Blog
    "On the first look, it may seem like Enterprise Architecture is irrelevant in a company if your complete IT is running on Cloud Computing, SaaS and outsourcing/offshoring. I was of the same opinion last year. However, it is not the case. In fact, the complexity is going to get multiplied." (tags: opengroup cloud enterprisearchitecture)

    James Taylor: Change Logging Level for SOA 11g
    James says: "I'm sure there are many blogs out there that have this solution. But I seem to get asked this question a lot, so I thought I would post it here for my convenience." (tags: oracle middleware soa)

    David Linthicum: The Truth behind Standards, SOA, and Cloud Computing
    "Most of the standards we've worked on in the world of SOA over the past several years are applicable to the world of cloud computing. Cloud computing is simply a change in platform, and the existing architectural standards we leverage should transfer nicely to the cloud computing space." - David Linthicum (tags: enterprisearchitecture soa cloud)

    C. Martin Harris, MD: HIMSS11 Update from the Chairman
    "We cannot allow ourselves to focus exclusively on near term goals. Our real goal is a technology-driven transformation of healthcare that will never stop. A true transformation is a process of lessons learned and applied, that continually open broad new horizons of opportunity." - C. Martin Harris, MD (tags: enterprisearchitecture modernization)

    Read the article

  • Case Study: Polystar Improves Telecom Networks Performance with Embedded MySQL

    - by Bertrand Matthelié
    Polystar delivers and supports systems that increase the quality, revenue, and customer satisfaction of telecommunication services. Headquartered in Sweden, Polystar helps operators worldwide, including Telia, Tele2, Telekom Malaysia, and T-Mobile, to monitor their network performance and improve service levels.

    Challenges
    Deliver complete turnkey solutions to customers, integrating a database that ensures high performance at scale while being very easy to use, manage, and optimize. Enable the implementation of distributed architectures, including one database per server, while maintaining a low Total Cost of Ownership (TCO). Avoid growing database complexity as the volume of mobile data to monitor and analyze drastically increases.

    Solution
    Evaluation of several databases and selection of MySQL based on its high performance, manageability, and low TCO. The MySQL databases implemented within the Polystar solutions handle on average 3,000 to 5,000 transactions per second. Up to 50 million records are inserted every day into each database. Typical installations include between 50 and 100 MySQL databases, up to 300 for the largest ones. Data is then periodically aggregated and the original records overwritten, as operators no longer need the detailed information after a few weeks. The exponential growth in mobile data traffic, driven by the proliferation of smartphones and the use of social media, requires ever more powerful solutions to monitor, analyze, and turn network data into actionable business intelligence. With MySQL, Polystar can deliver powerful, yet easy to manage, solutions to its customers. MySQL-based Polystar solutions enable operators to monitor, manage, and improve the service levels of their telecom networks in over a dozen countries from a single location. The new and innovative MySQL features constantly delivered by Oracle help ensure Polystar that it will be able to meet its customers' needs as they evolve.

    "MySQL has been a great embedded database choice for us. It delivers the high performance we need while remaining very easy to use, manage and tune. Power and simplicity at its best." - Mats Söderlindh, COO at Polystar.

    Read the article

  • What is the value in hiding the details through abstractions? Isn't there value in transparency?

    - by user606723
    Background: I am not a big fan of abstraction. I will admit that one can benefit from the adaptability, portability, and re-usability of interfaces, etc. There is real benefit there, and I don't wish to question that, so let's ignore it. There is the other major "benefit" of abstraction, which is to hide implementation logic and details from users of the abstraction. The argument is that you don't need to know the details, and that one should concentrate on one's own logic at this point. Makes sense in theory. However, whenever I've been maintaining large enterprise applications, I always need to know more details. It becomes a huge hassle digging deeper and deeper into the abstraction at every turn just to find out exactly what something does; i.e., having to do "open declaration" about 12 times before finding the stored procedure used. This "hide the details" mentality seems to just get in the way. I'm always wishing for more transparent interfaces and less abstraction. I can read high-level source code and know what it does, but I'll never know how it does it, when how it does it is what I really need to know. What's going on here? Has every system I've ever worked on just been badly designed (from this perspective, at least)?

    My philosophy: When I develop software, I feel like I try to follow a philosophy closely related to the ArchLinux philosophy: "Arch Linux retains the inherent complexities of a GNU/Linux system, while keeping them well organized and transparent. Arch Linux developers and users believe that trying to hide the complexities of a system actually results in an even more complex system, and is therefore to be avoided." And therefore, I never try to hide the complexity of my software behind abstraction layers. I try to abuse abstraction, not become a slave to it.

    Question at heart: Is there real value in hiding the details? Aren't we sacrificing transparency? Isn't this transparency valuable?

    Read the article

  • 2012 Oracle Fusion Middleware Innovation Awards for Oracle Exalogic

    - by Sanjeev Sharma
    Companies from around the world were honored for their innovative solutions using Oracle Fusion Middleware. This year's 27 award winners, representing 11 countries and a wide span of industries, wowed the judges with a range of projects across eight product categories. Four awards were given to customers who demonstrated innovative application of Oracle Exalogic for their mission-critical applications. Below is an overview of the four businesses that won the Oracle Fusion Middleware Innovation Award for Oracle Exalogic this year.

    Company: Netshoes
    About: Leading online retailer of sporting goods in Latin America.
    Challenges: Rapid business growth resulted in frequent outages and poor response time of the online storefront. A conventional ad-hoc approach to horizontal scaling resulted in high CAPEX and OPEX. Poor performance and unavailability of the online storefront resulted in revenue loss from purchase abandonment.
    Solution: Consolidated ATG Commerce and Oracle WebLogic running on Oracle Exalogic.
    Business Impact: Reduced abandonment rates, resulting in a two-digit increase in online conversion rates translating directly into revenue uplift.

    Company: Claro
    About: Leading communications services provider in Latin America.
    Challenges: Support business growth over the next 3-5 years while maximizing re-use of existing middleware and application investments with minimal effort and risk.
    Solution: Consolidated Oracle Fusion Middleware components (Oracle WebLogic, Oracle SOA Suite, Oracle Tuxedo) and Java applications onto Oracle Exalogic and Oracle Exadata.
    Business Impact: Improved partner SLAs 7x while improving throughput 5x and response time 35x for Java applications.

    Company: UL
    About: Leading safety testing and certification organization in the world.
    Challenges: Transition from being a non-profit to a profit-oriented enterprise and grow from $1B to $5B in annual revenues in the next 5 years. Undertake a massive business transformation by aligning change strategy with execution.
    Solution: Consolidated Oracle Applications (E-Business Suite, Siebel, BI, Hyperion) and Oracle Fusion Middleware (AIA, SOA Suite) on Oracle Exalogic and Oracle Exadata.
    Business Impact: Reduced financial and operating risk in re-architecting IT services to support new business capabilities, supporting 87,000 manufacturers.

    Company: Ingersoll Rand
    About: Leading manufacturer of industrial, climate, residential, and security solutions.
    Challenges: Business continuity risks due to complexity in enforcing consistent operational and financial controls; reactive business decisions reduced the ability to offer differentiation and compete.
    Solution: Consolidated Oracle E-Business Suite on Oracle Exalogic and Oracle Exadata.
    Business Impact: Service differentiation with faster order provisioning and a shorter lead-to-cash cycle, translating into higher customer satisfaction and quicker cash conversion.

    Check out the winners of the Oracle Fusion Middleware Innovation Awards in other categories here.

    Read the article

  • Clientside anticheating in multiplayer game 1vs1

    - by garnav
    I'm developing a simple card game with a matchmaking system that will pit you against another human player. This will be the only game mode available: 1vs1 against another human, no AI. I want to prevent cheating as much as possible. I have already read a lot of similar questions here, and I know that I cannot trust the client and have to make all verifications server-side. I intend to have a server (I need one for the matchmaking anyway) and to verify some actions server-side, but if I want to check everything server-side, the server has to keep track of the state of all current games and check every action, and I don't have the money or infrastructure to support that. My idea is to make clients check and verify some of the actions made by their opponent, and if they find an illegal action, notify the server of the possible cheating and have the server verify it. This still requires my server to keep track of all current games, but it saves resources by only checking server-side the things that cannot be checked client-side (like card order in the deck), and only checking other things when they are actually disputed. (Clients can only check what they can see without enabling cheating themselves; for example, they can't check whether a played card was in the opponent's hand, because that would require them to know all the cards in that hand.) Summing up, my questions are: Is this a viable approach? Will I actually save resources doing this, or is the extra complexity in the server and client for exchanging these messages not worth it? Do you know any game that has successfully or unsuccessfully tried a similar approach? Thanks all for reading and answering.
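    As a rough illustration of how cheap the server-side spot check can be, here is a hedged Python sketch (all names hypothetical): the server stores only a per-game shuffle seed and re-derives the deck when a client reports a suspected illegal action, instead of tracking every move of every game.

        import random

        def deal_deck(seed):
            """Deterministically shuffle card ids 0..51 from the stored game seed."""
            deck = list(range(52))
            random.Random(seed).shuffle(deck)
            return deck

        def verify_draw_report(seed, turn, claimed_card):
            """Re-derive the deck on demand and check the disputed draw."""
            return deal_deck(seed)[turn] == claimed_card

        # The matchmaking server keeps just {game_id: seed}; full state is only
        # reconstructed when a cheating report actually arrives.
        assert verify_draw_report(1234, 0, deal_deck(1234)[0])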

    Read the article

  • Character progression through leveling, skills or items?

    - by Anton
    I'm working on a design for an RPG game, and I'm having some doubts about the skill and level system. I'm going for a more casual, explorative gaming experience, and so thought about lowering the game complexity by simplifying character progression. But I'm having trouble deciding between the following:

    1. Progression through leveling: no complex skill progression; leveling increases base stats.
    2. Progression through skills: no leveling or base stat changes; skills progress through usage.
    3. Progression through items: more focus on stat-changing items; items confer skills; no leveling.

    However, I'm uncertain what the effects on gameplay might be in the end. So, my question is this: What would be the effects of choosing one of the above alternatives over the others? (Particularly with regards to the style and feel of the gameplay.) My take on it is that the first sacrifices more frequent rewards and customization in favor of a simpler gameplay; the second sacrifices explicit customization and player control in favor of more frequent rewards and a somewhat simpler gameplay; while the third sacrifices inventory simplicity and a player metric in favor of player control, customization, and progression simplicity.

    Addendum: I'm not really limiting myself to the above three; they are just the ones I liked most and am primarily interested in.
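    For what it's worth, a hedged sketch of how small option 2 can be in code (the numbers and growth curve are placeholders, not a tuned design):

        class Skill:
            """Option 2: no levels or base stats; a skill grows through use."""
            def __init__(self, name):
                self.name = name
                self.uses = 0

            def use(self):
                self.uses += 1

            @property
            def rank(self):
                # Diminishing returns: each rank needs roughly twice the uses.
                return self.uses.bit_length()

        swords = Skill("swords")
        for _ in range(100):
            swords.use()
        print(swords.name, swords.rank)  # ranks come quickly at first, then slow down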

    Read the article

  • Which data structure should I use for dynamically generated platforms?

    - by Joey Green
    I'm creating a platformer type of game with various types of platforms: platforms that move, shake, rotate, etc. Multiple types, and multiple platforms of each type, can be on the screen at once, and the platforms will be procedurally generated. I'm trying to figure out which of the following would be a better platform system: (1) pre-allocate all platforms when the scene loads, storing each platform type in a different array (i.e., regPlatformArray), and just grab one when I need one; or (2) allocate and load what I need when my code needs it. The problem with 1 is keeping track of which indices are in use on screen and which aren't. The problem with 2 is that I'm having a hard time wrapping my head around how I would store these platforms so that I can call the update/draw methods on them, and how I would manage the data structure that holds them; it would constantly be growing and shrinking, so it seems there could be too much complexity. I'm using the cocos2d iPhone game engine. Anyway, which option would be best, or is there a better option?
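    For option 1, the usual trick is an object pool with an "active" flag per platform, which also answers the which-indices-are-in-use problem. A hedged, language-agnostic sketch follows (the question's engine is cocos2d for iPhone, so this shows the shape rather than drop-in code; the class names and pool sizes are placeholders):

        class Platform:
            def __init__(self, kind):
                self.kind = kind
                self.active = False   # tracks "in use on screen" per object

        class PlatformPool:
            def __init__(self, kind, size):
                self.items = [Platform(kind) for _ in range(size)]

            def acquire(self):
                for p in self.items:
                    if not p.active:
                        p.active = True
                        return p
                return None           # pool exhausted: size it for the worst case

            def release(self, p):
                p.active = False      # no deallocation; the object just goes dormant

        pools = {k: PlatformPool(k, 16) for k in ("regular", "moving", "rotating")}
        platform = pools["moving"].acquire()
        # each frame: update/draw only the platforms whose active flag is set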

    Read the article

  • Guest Blog: Secure your applications based on your business model, not your application architecture, by Yaldah Hakim

    - by Darin Pendergraft
    Today's businesses are looking for new ways to engage their customers and embrace mobile applications, while staying in compliance, improving security, and driving down costs. For many, the solution to that problem is to host their applications with a cloud services provider, but concerns that a hosted application will be less secure continue to cause doubt. Oracle is recognized by Gartner as a leader in the User Provisioning and Identity and Access Governance magic quadrants, and has helped thousands of companies worldwide to secure their enterprise applications and identities. Now those same world-class IDM capabilities are available as a managed service, both for enterprise applications and for Oracle hosted applications.

    Listen to our IDM in the cloud podcast to hear Yvonne Wilson, Director of the IDM Practice in Cloud Service, explain how Oracle Managed Services provides IDM as a service.

    Selecting Oracle Managed Cloud Services to deploy and manage Oracle Identity Management services is a smart business decision for a variety of reasons. Oracle hosted Identity Management infrastructure is deployed securely, is resilient to failures, and is supported by Oracle experts. In addition, Oracle Managed Cloud Services monitors customer solutions from several perspectives to ensure they continue to work smoothly over time. Customers gain the benefit of Oracle Identity Management expertise to achieve predictable and effective results for their organization. Customers can select Oracle to host and manage any number of Oracle IDM products as a service, as well as other Oracle security products, providing a flexible, cost-effective alternative to on-site hardware and software costs. Security is a major concern for all organizations, making it increasingly important to partner with a company like Oracle to ensure consistency and a layered approach to security and compliance when selecting a cloud provider. Oracle Managed Cloud Services makes this possible for our customers by taking away the headache and complexity of managing identity management infrastructure and other security solutions.

    For more information: http://www.oracle.com/us/solutions/cloud/managed-cloud-services/overview/index.html
    Twitter: https://twitter.com/OracleCloudZone
    Facebook: http://www.facebook.com/OracleCloudComputing

    Read the article

  • New technical product guide for Sun Ray clients

    - by Jaap
    A new Sun Ray Clients Technical Product Guide has been published in the Oracle online documentation system. The document provides detailed information about the similarities and differences between the three Sun Ray client hardware models: Sun Ray 3, Sun Ray 3 Plus, and Sun Ray 3i. From the description in the Technical Product Guide, I want to quote the following section: "...Since Sun Ray 3 Series Clients have no local operating system and require no local management, they eliminate the complexity, expenses, and security vulnerabilities associated with other thin client and PC solutions..." This has always been one of the great advantages of Sun Ray clients compared to other thin clients (which are actually low-fat PCs where you have to manage thin client OS images). The guide lists the features and technical specifications of the Sun Ray clients, such as the number of ports, chassis, graphics, network interfaces, power supply, operating conditions, MTBF, reliability, and other standards. The guide also contains a separate chapter about environmental data. As you may know, the Sun Ray 3 Series clients are designed specifically to be sensitive to a spectrum of environmental concerns and standards, from materials to manufacturing processes to shipping, operation, and end of life. The Sun Ray 3 Series clients comply with environmental standards and certifications such as Energy Star 5.0, EPEAT, WEEE, and RoHS (see the Oracle policy for RoHS and REACH).

    Read the article

  • IASA ITARC &ndash; Denver May 6th

    - by Jeff Certain
    The Denver chapter of the International Association of Software Architects (IASA) is holding an IT Architect Regional Conference (ITARC) in Denver on May 6th. The speaker list for this conference is amazing: Paul Rayner, Dave McComb, Randy Kahle, Peter Provost, Randy Stafford, George Fairbanks – all great speakers, and from Colorado. Brandon Satrom (who also happens to be the president of the IASA Austin chapter) will also be speaking, as will some other heavy hitters (for example, Ted Farrell, Chief Architect and Senior VP of Oracle). This is an amazing line-up, and the conference is quite reasonably priced ($150 for IASA members until April 10th, including a catered lunch). I also have the privilege of being a presenter at this conference. If you've ever heard any of the previously named speakers, you know that they set the bar quite high. Sounds like I'm going to have to step up my game. What I get to talk about is really cool stuff. The company I work for, Colorado CustomWare, brought me on board nearly two years ago. To say there was some technical debt is somewhat... understated. Equally understated would be saying that management is committed to doing the right thing. Over the past two years, we've done significant architectural refactoring, including an effort that took the entire team offline for most of a month. We've reduced the application size by 50% without losing functionality. As you can imagine, this has reduced the complexity of the application, making development faster and less prone to bugs. We've made many other changes: moving to an agile process, training developers, moving towards a more OO architecture. The changes we've made reveal, in some ways, just how far afield we were... and there are still more changes to be made. Amazingly enough, our leadership team is eager for me to share these experiences with other architects. I'm really looking forward to being able to do so.

    Read the article

  • Today's Links (6/21/2011)

    - by Bob Rhubart
    Keeping your process clean: Hiding technology complexity behind a service | Izaak de Hullu
    Izaak de Hullu offers a solution to "technology pollution like exception handling, technology adapters and correlation."

    WebLogic Weekly for June 20th, 2011 | James Bayer
    James Bayer presents "a round-up of what has been going on in WebLogic over the past week."

    Publish to EDN from Java & OSB with JMS | Edwin Biemond
    Busy blogger and Oracle ACE Edwin Biemond shows "how you can publish events from Java and OSB."

    How is HTML 5 changing web development? | Audrey Watters - O'Reilly Radar
    In this interview, OSCON speaker Remy Sharp discusses HTML5's current usage and how it could influence the future of web apps and browsers.

    SOA Governance Book | SOA Partner Community Blog
    Information on how those in EMEA can win a free copy of SOA Governance: Governing Shared Services On-Premise and in the Cloud by Thomas Erl, et al.

    Keeping The Faith on 11i | Floyd Teter
    "The iceberg is melting, the curtain is coming down, the lights are dimming, the fat lady is singing," says Oracle ACE Director Floyd Teter.

    Configure and test JMS based EDN in SOA Suite 11g | Edwin Biemond
    Oracle ACE Edwin Biemond shows you "how to configure EDN-JMS and how to publish an Event to this JMS Queue."

    Choosing the best way for SOA Suite and Oracle Service Bus to interact with the Oracle Database | Lucas Jellema
    Oracle ACE Director Lucas Jellema illustrates "over 20 different interaction channels" covering "a fairly wild variation of attributes, required skills, productivity and performance characteristics."

    Oracle Data Integrator 11.1.1.5 Complex Files as Sources and Targets | Alex Kotopoulis
    ODI 11.1.1.5 adds the new Complex File technology for use with file sources and targets. The goal is to read or write file structures that are too complex to be parsed using the existing ODI File technology.

    Java Spotlight Podcast Episode 35: JVM Performance and Quality
    Featuring an interview with Vladimir Ivanov, Ivan Krylov, and Sergey Kuksenko on JDK 7 Java Virtual Machine performance and quality. Also includes the Java All Star Developer Panel featuring Dalibor Topic, Java Free and Open Source Software Ambassador, and Alexis Moussine-Pouchkine, Java EE Developer Advocate.

    Read the article

  • Oracle's SPARC T4, 007 Style

    - by Kristin Rose
    The name's 4... T4, and this powerhouse travels hand in hand with its good friend SPARC. About six years ago, on-chip encryption acceleration first shipped in a commercial system, the SPARC T1. Today, thanks to Oracle's innovative leadership in on-chip encryption acceleration, hardware support for complex cryptographic computation was born and has since rapidly evolved. Customers can now have security with performance, because we, my friend, are in the Age of Big Data. If you need some high-speed action in your life, listen here. The SPARC T4 systems offer customers much more value for applications than just increased performance through the cross-sell opportunity. This is done by enabling partners to integrate their own applications with Oracle's SPARC T4 servers for cloud deployments, and by providing direct business benefits that supersede the commodity approach to data center computing, such as security, performance, and optimization. As companies continue down the complex path of big data, eCommerce, and mobility, the need to provide better and more in-depth security is more prominent than ever. Oracle's SPARC T4 processor allows customers to deliver the highest levels of application security, as well as the necessary level of performance, without added cost and complexity. To learn more about the value of SPARC T4, check out a more in-depth blog here. For more on the SPARC T4 family of products, click here.

    Encryption Lives Another Day,
    The OPN Communications Team

    Read the article

  • A Multi-Channel Contact Center Can Reduce Total Cost of Ownership

    - by Tom Floodeen
    In order to remain competitive in today's market, CRM customers need to provide a feature-rich, superior call center experience to their customers across all communication channels, while improving service agent productivity. They also require a call center that is deeply integrated with their CRM system, and they need to implement all this quickly, seamlessly, and without breaking the bank. Oracle's Siebel Customer Relationship Management (CRM) is the world's leading application suite for automated customer-facing operations for sales and marketing, and for managing all aspects of providing service to customers. Oracle's Contact On Demand (COD) is a world-class, carrier-grade, hosted multi-channel contact center solution that can be deployed in days without up-front capital expenditures or integration costs. Agents can work efficiently from anywhere in the world with 360-degree views into customer interactions and real-time business intelligence. Customers gain from rapid and personalized sales and service, while organizations can dramatically reduce costs and increase revenues. Oracle's latest update of Siebel CRM now comes pre-integrated with Oracle's Contact On Demand. This solution seamlessly runs a fully-functional contact center provided by a single vendor, significantly reducing your total cost of ownership. It supports Siebel 7.8 and higher for voice, and Siebel 8.1 and higher for voice and Siebel CRM Chat. The impressive feature list of Oracle's COD solution includes a full-control CTI toolbar with voice, chat, and click-to-dial features. It also includes context-sensitive screens, automated desktops, built-in IVR, multidimensional routing, supervisor and quality monitoring, and instant provisioning. The solution also ships with an extensible web services interface for implementing more complex business processes. Click here to learn how to reduce the complexity and total cost of ownership of your contact center. Contact Ann Singh at [email protected] for additional information.

    Read the article

  • Style bits vs. separate bools

    - by peterchen
    My main platform (WinAPI) still heavily uses bits for control styles etc. (example). When introducing custom controls, I'm always wondering whether to follow that style or rather use individual bools. Let's pit them against each other:

        enum EMyCtrlStyles
        {
            mcsUseFileIcon = 1,
            mcsTruncateFileName = 2,
            mcsUseShellContextMenu = 4,
        };

        void SetStyle(DWORD mcsStyle);
        void ModifyStyle(DWORD mcsRemove, DWORD mcsAdd);
        DWORD GetStyle() const;
        ...
        ctrl.SetStyle(mcsUseFileIcon | mcsUseShellContextMenu);

    vs.

        CMyCtrl & SetUseFileIcon(bool enable = true);
        bool GetUseFileIcon() const;
        CMyCtrl & SetTruncateFileName(bool enable = true);
        bool GetTruncateFileName() const;
        CMyCtrl & SetUseShellContextMenu(bool enable = true);
        bool GetUseShellContextMenu() const;
        ctrl.SetUseFileIcon().SetUseShellContextMenu();

    As I see it:

    Pro style bits: consistent with the platform; less library code (without gaining complexity) and fewer places to modify when adding a new style; less caller code (without losing notable readability); easier to use in some scenarios (e.g., remembering or transferring settings); and the binary API remains stable if new style bits are introduced. Now, the first and the last are minor in most cases.

    Pro individual booleans: IntelliSense and refactoring tools reduce the "less typing" effort; single-purpose entities; more literate code (as in "flows more like a sentence"); no change of paradigm for non-bool properties. These sound more modern, but they are also "soft" advantages. I must admit the "platform consistency" argument is much more enticing than I can justify, and less code without losing much quality is a nice bonus.

    1. What do you prefer? Subjectively, for writing the library, or for writing client code?
    2. Any (semi-)objective statements, studies, etc.?
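    As a side note for readers outside the WinAPI world, the same style-bit pattern exists in most languages; here is a small hedged illustration using Python's enum.Flag, with names invented to mirror the control above:

        from enum import Flag, auto

        class MyCtrlStyle(Flag):
            NONE = 0
            USE_FILE_ICON = auto()
            TRUNCATE_FILE_NAME = auto()
            USE_SHELL_CONTEXT_MENU = auto()

        style = MyCtrlStyle.USE_FILE_ICON | MyCtrlStyle.USE_SHELL_CONTEXT_MENU
        style &= ~MyCtrlStyle.USE_FILE_ICON       # ModifyStyle(remove, ...) analogue
        style |= MyCtrlStyle.TRUNCATE_FILE_NAME   # ModifyStyle(..., add) analogue
        print(MyCtrlStyle.TRUNCATE_FILE_NAME in style)  # True: one word holds all flags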

    Read the article

  • links for 2011-02-03

    - by Bob Rhubart
    Webcast: Reduce Complexity and Cost with Application Integration and SOA Speakers: Bruce Tierney (Product Director, Oracle Fusion Middleware) and Rajendran Rajaram (Oracle Technical Consultant). Thursday, February 17, 2011. 10 a.m. PT/1 p.m. ET. (tags: oracle otn soa fusionmiddleware) William Vambenepe: The API, the whole API and nothing but the API William asks: "When programming against a remote service, do you like to be provided with a library (or service stub) or do you prefer 'the API, the whole API, nothing but the API?'" (tags: oracle otn API webservices soa) Gary Myers: Fluffy white Oracle clouds by the hour Gary says: "Pay-by-the-hour options are becoming more common, with Amazon and Oracle are getting even more intimate in the next few months. Yes, you too will be able to pay for a quickie with the king of databases (or queen if you prefer that as a mental image). " (tags: oracle otn cloudcomputing amazon ec2) Conversation as User Assistance (the user assistance experience) "To take advantage of the conversations on the web as user assistance, enterprises must first establish where on the spectrum their community lies." -- Ultan O'Broin (tags: oracle otn enterprise2.0 userexperience) Webcast: Oracle WebCenter Suite – Giving Users a Modern Experience Thursday, February 10, 2011. 11 a.m. PT/2 p.m. ET. Speakers: Vince Casarez, Vice President of Enterprise 2.0 Product Management, Oracle; Erin Smith, Consulting Practice Manager – Portals, Oracle; Robert Wessa, Consulting Technical Director,  Enterprise 2.0 Infrastructure, Oracle.  (tags: oracle otn enterprise2.0 webcenter)

    Read the article

  • Entity Framework with large systems - how to divide models?

    - by jkohlhepp
    I'm working with a SQL Server database with 1000+ tables, another few hundred views, and several thousand stored procedures. We are looking to start using Entity Framework for our newer projects, and we are working on our strategy for doing so. The thing I'm hung up on is how best to split the tables into different models (EDMX or DbContext if we go code-first). I can think of a few strategies right off the bat:

    Split by schema. We have our tables split across probably a dozen schemas. We could do one model per schema. This isn't perfect, though, because dbo still ends up being very large, with 500+ tables/views. Another problem is that certain units of work will end up having to do transactions that span multiple models, which adds complexity, although I assume EF makes this fairly straightforward.

    Split by intent. Instead of worrying about schemas, split the models by intent. So we'll have different models for each application, or project, or module, or screen, depending on how granular we want to get. The problem I see with this is that certain tables inevitably have to be used in every case, such as User or AuditHistory. Do we add those to every model (which violates DRY, I think), or do they live in a separate model that is used by every project?

    Don't split at all: one giant model. This is obviously simple from a development perspective, but from my research and my intuition it seems like it could perform terribly at design time, compile time, and possibly run time.

    What is the best practice for using EF against such a large database? Specifically, what strategies do people use in designing models against this volume of DB objects? Are there options I'm not thinking of that work better than the ones above? Also, is this a problem in other ORMs such as NHibernate? If so, have they come up with any better solutions than EF?
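    One middle road between "split by intent" and DRY is a shared-kernel layout: the cross-cutting entities live in one core module that every bounded model references. A hedged, deliberately EF-free Python sketch of the shape (all names illustrative; in EF terms the core would be a class library referenced by each DbContext project):

        from dataclasses import dataclass

        # core (shared kernel): entities every module needs, defined exactly once
        @dataclass
        class User:
            id: int
            name: str

        @dataclass
        class AuditHistory:
            id: int
            user_id: int
            action: str

        # billing model: its own tables plus references into core
        @dataclass
        class Invoice:
            id: int
            customer: User      # reuses the core User instead of redefining it

        # shipping model: likewise
        @dataclass
        class Shipment:
            id: int
            requested_by: User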

    Read the article
