Search Results

Search found 1848 results on 74 pages for 'significant'.

Page 5/74 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • TicTacToe strategic reduction

    - by NickLarsen
    I decided to write a small program that solves TicTacToe in order to try out the effect of some pruning techniques on a trivial game. The full game tree using minimax to solve it only ends up with 549,946 possible games. With alpha-beta pruning, the number of states required to evaluate was reduced to 18,297. Then I applied a transposition table that brings the number down to 2,592. Now I want to see how low that number can go.
    The next enhancement I want to apply is a strategic reduction. The basic idea is to combine states that have equivalent strategic value. For instance, on the first move, if X plays first, there is nothing strategically different (assuming your opponent plays optimally) about choosing one corner instead of another. In the same situation, the same is true of the middles of the walls (edges) of the board, and the center square forms its own class. By reducing to strategically significant states only, you end up with only 3 states for evaluation on the first move instead of 9. This technique should be very useful since it prunes states near the top of the game tree. This idea came from the GameShrink method created by a group at CMU, only I am trying to avoid writing the general form, and just doing what is needed to apply the technique to TicTacToe.
    In order to achieve this, I modified my hash function (for the transposition table) to enumerate all strategically equivalent positions (using rotation and flipping functions), and to only return the lowest of the values for each board. Unfortunately, now my program thinks X can force a win in 5 moves from an empty board when going first. After a long debugging session, it became apparent to me that the program was always returning the move stored for the lowest (canonical) strategically equivalent board (I store the last move in the transposition table as part of my state). Is there a better way I can go about adding this feature, or a simple method for determining the correct move applicable to the current situation with what I have already done?
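
    As an illustration of the canonical-form idea described above, here is a minimal C++ sketch (not the poster's actual code) that collapses the 8 symmetric variants of a board to a single transposition-table key. The base-3 board encoding and the symmetry tables are assumptions made for the example.

        #include <algorithm>
        #include <array>
        #include <cstdint>

        // Board: 9 cells, 0 = empty, 1 = X, 2 = O, indexed row-major.
        using Board = std::array<int, 9>;

        // Index remappings for the 8 symmetries of the square
        // (identity, 3 rotations, 4 reflections).
        static const int SYMMETRIES[8][9] = {
            {0,1,2,3,4,5,6,7,8},  // identity
            {6,3,0,7,4,1,8,5,2},  // rotate 90
            {8,7,6,5,4,3,2,1,0},  // rotate 180
            {2,5,8,1,4,7,0,3,6},  // rotate 270
            {2,1,0,5,4,3,8,7,6},  // mirror left-right
            {6,7,8,3,4,5,0,1,2},  // mirror top-bottom
            {0,3,6,1,4,7,2,5,8},  // transpose (main diagonal)
            {8,5,2,7,4,1,6,3,0},  // anti-diagonal
        };

        // Encode a board as a base-3 integer.
        std::uint32_t encode(const Board& b) {
            std::uint32_t key = 0;
            for (int cell : b) key = key * 3 + static_cast<std::uint32_t>(cell);
            return key;
        }

        // Canonical key: the smallest encoding over all 8 symmetric variants.
        // Strategically equivalent positions collapse to one table entry.
        std::uint32_t canonicalKey(const Board& b) {
            std::uint32_t best = UINT32_MAX;
            for (const auto& perm : SYMMETRIES) {
                Board t{};
                for (int i = 0; i < 9; ++i) t[i] = b[perm[i]];
                best = std::min(best, encode(t));
            }
            return best;
        }

    One common way to address the symptom reported above is to also record which symmetry produced the canonical key, and to map the stored best move back through that symmetry's inverse before playing it on the actual board.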

    Read the article

  • Social-Networking Startup, Hosting Plan

    - by pws5068
    I've created a social networking community which is soon ready to release, and I'm trying to decide on a type of hosting plan. I have considered options such as VPS and Reseller plans. I anticipate (or hope for at least) a significant amount of traffic/bandwidth in the not-too-distant future. If I open a reseller, will I receive the same amount of server lag during busy hours that I do with a shared account? How significant is the profit margin with the reseller option? Aside from generalized "configurability", what advantages merit purchasing a VPS? Is there anything stopping me from reselling space on a VPS account? Features I need Include: PHP, MySql, Unlimited Domains, Ruby on Rails, Remote Database Connections

    Read the article

  • What’s New in JD Edwards EnterpriseOne Release 9.1

    - by Breanne Cooley
    Oracle JD Edwards EnterpriseOne 9.1 offers customers significant updates to usability and business processes to improve productivity and bolster business value. This release addresses critical user needs, while delivering key enhancements in several areas, including:
    - New User Experience: Release 9.1 offers significant enhancements to the user experience. New Web 2.0 features reduce task time and enable you to access meaningful information.
    - Enhanced Reporting: Oracle’s JD Edwards EnterpriseOne One View Reporting is a breakthrough solution that allows business users to create interactive reports - without IT support.
    - Industry Specific Functionality: This new release delivers key enhancements for the consumer goods, real estate management, and manufacturing and distribution industries.
    - Enhanced Support for Global Operations: JD Edwards EnterpriseOne 9.1 supports global operations with several new features, including enhancements that consider the entire ERP business process associated with managing country of origin requirements.
    - Productivity Features: This new release offers new, more tightly integrated business processes and other productivity advancements like improved data access and enhanced financial controls.
    Want to find out more?
    - Bookmark the JD Edwards EnterpriseOne web page
    - Listen to the Podcast: Announcing JD Edwards EnterpriseOne 9.1
    - Watch the Demo: JD Edwards EnterpriseOne 9.1 Features Demo
    - Watch the Demo: JD Edwards EnterpriseOne One View Reporting Demo
    - Review the Data Sheet: JD Edwards EnterpriseOne Tools 9.1
    New Training: JD Edwards EnterpriseOne 9.1 training through Oracle University is designed to help you leverage these usability and productivity enhancements. The curriculum is aligned to the JD Edwards EnterpriseOne products and will teach your team how to efficiently and effectively implement and use your applications. Get started today! View available training and schedules.
    -Jim Vonick, Oracle University Market Development Manager

    Read the article

  • SOA Suite 11gR1 Patch Set 2 (PS2) released today!

    - by Demed L'Her
    This morning we released SOA Suite 11gR1 Patch Set 2 (PS2)! You can download it as usual from OTN (main platforms only) or eDelivery (all platforms). 11gR1 PS2 is delivered as a sparse installer, that is to say it is meant to be applied on top of the latest full release (11gR1 PS1). The good part is that it’s great for existing PS1 users, who simply need to apply the patch and run the patch assistant – the not so good part is that new users will first need to download PS1. What’s in that release? Bug fixes of course, but also several significant new features. Here is a short selection of the most significant features in PS2:
    - Spring component (for native Java extensibility and integration)
    - SOA Partitions (to organize and manage your composites)
    - Direct Binding (for transactional invocations to and from Oracle Service Bus)
    - HTTP binding (for those of you trying to do away with SOAP and looking for simple GET and POST)
    - Resequencer (for ordering out-of-order messages)
    - WS Atomic Transactions (WS-AT) support (for propagation of transactions across heterogeneous environments)
    Check out the complete list of new features in PS2 for more (including links to the documentation for the above)! But maybe even more importantly, we are also releasing Oracle Service Bus 11gR1 and BPM Suite 11gR1 at the same time – all on the same base platform (WebLogic Server 10.3.3)! (NB: it might take a while for all pages and caches to be updated with the new content, so if you don’t find what you need today, try again soon!)

    Read the article

  • Getting More Out of UPK

    - by [email protected]
    Are you getting the most out of UPK? Remember the idea of streamlining your content creation efforts? How about the concept of collaboration during development? How are you leveraging the System Process Documents or Test Scripts? Is your training team benefiting from the creation of process documentation? Is UPK linked into the help menu of your application, or even at the browser level (Smart Help)? Many customers underutilize UPK. Some customers just think of UPK as a training creation solution or just for creating documentation. To get the full value of UPK you need to first evaluate how the UPK developer is installed. Single User or Multi User? If you have more than two UPK developers, then there is a significant benefit from installing UPK in multi user mode. This helps drive collaboration, automatic version control and better facilitation of the workflow and state features with use of customized views for the developers. Has your organization installed Usage Tracking? How are the outputs deployed and for how many applications? If these questions have you thinking about your overall usage of UPK and you see potential for significant improvement by using more of what UPK has to offer, then it could be time for a UPK Health Check. Contact your UPK Sales Consultant to help understand your environment and how to maximize the value of UPK and start getting more out of the product.

    Read the article

  • Oracle VM VirtualBox 4.0 Now Available

    - by Paulo Folgado
    Delivering on Oracle's commitment to open source, Oracle VM VirtualBox 4.0 is now available, further enhancing the popular, open source, cross-platform virtualization software. "Oracle VM VirtualBox 4.0 is the third major product release in just over a year, and adds to the many new product releases across the Oracle Virtualization product line, illustrating the investment and importance that Oracle places on providing a comprehensive desktop to datacenter virtualization solution," says Wim Coekaerts, senior vice president, Linux and Virtualization Engineering, Oracle. "With an improved user interface and added virtual hardware support, customers will find Oracle VM VirtualBox 4.0 provides a richer user experience." Part of Oracle's comprehensive portfolio of virtualization solutions, Oracle VM VirtualBox enables desktop or laptop computers to run multiple guest operating systems simultaneously, allowing users to get the most flexibility and utilization out of their PCs, and supports a variety of host operating systems, including Windows, Mac OS X, most popular flavors of Linux (including Oracle Linux), and Oracle Solaris. Oracle VM VirtualBox 4.0 delivers increased capacity and throughput to handle greater workloads, enhanced virtual appliance capabilities, and significant usability improvements. Support for the latest in virtual hardware, including chipsets supporting PCI Express, further extends the value delivered to customers, partners, and developers. Highlights of Oracle VM VirtualBox 4.0 include:
    - New Open Architecture - Oracle and community developers can now create extensions that customize Oracle VM VirtualBox and add features not previously available.
    - Enhanced Usability - A new scalable display mode enables users to view more virtual displays on their existing monitors. Improvements to VM management, including visual VM previews, an optional attributes display, and easy launch shortcut creation, enable administrators and power users to customize the interface to make it as simple or as comprehensive as required.
    - Increased Capacity and Throughput - A new asynchronous I/O model for networked (iSCSI) and local storage delivers significant storage-related performance improvements, while new optimizations allow larger datacenter-class workloads, such as Oracle's middleware, to be run on 32-bit Windows hosts for testing and demo purposes.
    - Powerful Virtual Appliance Sharing Capabilities - Enhanced support for standards-compliant OVF appliances and added support for OVA format descriptors. All information about a VM may be stored in a single folder to facilitate easier direct sharing among VMs.
    - Support for Latest Virtual Hardware - A new, modern virtual chipset supporting PCI Express and other hardware enhancements, including high-definition audio devices, helps ensure support for the most demanding virtual workloads.

    Read the article

  • Why Oracle Data Integrator for Big Data?

    - by Mala Narasimharajan
    Big Data is everywhere these days - but what exactly is it? It’s data that comes from a multitude of sources – not only structured data, but unstructured data as well. The sheer volume of data is mind-boggling – here are a few examples of big data: climate information collected from sensors, social media information, digital pictures, log files, online video files, medical records or online transaction records. These are just a few examples of what constitutes big data. Embedded in big data is tremendous value, and being able to manipulate, load, transform and analyze big data is key to enhancing productivity and competitiveness. The value of big data lies in its propensity for greater in-depth analysis and data segmentation -- in turn giving companies detailed information on product performance, customer preferences and inventory. Furthermore, by being able to store and create more data in digital form, “big data can unlock significant value by making information transparent and usable at much higher frequency.” (McKinsey Global Institute, May 2011) Oracle's flagship product for bulk data movement and transformation, Oracle Data Integrator, is a critical component of Oracle’s Big Data strategy. ODI provides automation, bulk loading, and validation and transformation capabilities for Big Data while minimizing the complexities of using Hadoop. Specifically, the advantages of ODI in a Big Data scenario are due to pre-built Knowledge Modules that drive processing in Hadoop. These modules leverage the graphical UI to load and unload data from Hadoop, perform data validations, and create mapping expressions for transformations. The Knowledge Modules provide a key jump-start and eliminate a significant amount of Hadoop development. Using Oracle Data Integrator together with Oracle Big Data Connectors, you can not only simplify the complexities of mapping, accessing, and loading big data (via NoSQL or HDFS) but also correlate it with your enterprise data – this correlation may require integrating across heterogeneous and standards-based environments, connecting to Oracle Exadata, or sourcing via a big data platform such as Oracle Big Data Appliance. To learn more about Oracle Data Integration and Big Data, download our resource kit to see the latest in whitepapers, webinars, downloads, and more… or go to our website at www.oracle.com/bigdata

    Read the article

  • Library Organization in .NET

    - by Greg Ros
    I've written a .NET bitwise operations library as part of my projects (stuff ranging from get-most-significant-bit-set to some more complicated bitwise transformations) and I mean to release it as free software. I'm a bit confused about a design aspect of the library, though. Many of the methods/transformations in the library come with different endianness. A simple example is a getBitAt method that regards index 0 as the least significant bit, or the most significant bit, depending on the version used. In practice, I've found that using separate functions for different endianness results in much more comprehensible and reusable code than assuming all operations are little-endian or something. I'm really stumped regarding how best to package the library. Should I have methods that have LE and BE versions take an enum parameter in their signature, e.g. Endianness.Little, Endianness.Big? Should I have different static classes with identically named methods, such as MSB.GetBit and LSB.GetBit? On a much wider note, is there a standard I could use in cases like this? Some guide? Is my design issue trivial? I have a perfectionist bent, and I sometimes get stuck on tricky design issues like this... Note: I've sort of realized I'm using endianness somewhat colloquially to refer to the order/place value of digital component parts (be they bits, bytes, or words) in a larger whole, in any setting. I'm not talking about machine-level endianness or serial transmission endianness. Just about place-value semantics in general. So there isn't a context of targeting different machines/transmission techniques or something.
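
    For what it's worth, here is a small sketch of the two packaging options being weighed (written in C++ rather than C# purely to keep the code samples in this listing consistent): a single function taking a bit-order enum versus identically named functions grouped by bit-order. The names are hypothetical, not part of the poster's library.

        #include <cstdint>

        enum class Endianness { Little, Big };

        // Option A: one method, bit-order passed as an enum parameter.
        inline bool GetBitAt(std::uint32_t value, unsigned index, Endianness order) {
            unsigned pos = (order == Endianness::Little) ? index : 31u - index;
            return (value >> pos) & 1u;
        }

        // Option B: identically named methods grouped by bit-order
        // (the C++ analogue of static classes MSB / LSB).
        namespace LSB { inline bool GetBitAt(std::uint32_t v, unsigned i) { return (v >> i) & 1u; } }
        namespace MSB { inline bool GetBitAt(std::uint32_t v, unsigned i) { return (v >> (31u - i)) & 1u; } }

    Option A keeps a single entry point but pushes a branch (and a documentation burden) into every call; Option B makes the intent part of the name at the call site, e.g. MSB::GetBitAt(x, 0).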

    Read the article

  • Fresh install on SSD with Ubuntu and Windows Vista, using whole disk encryption for Ubuntu

    - by nategator
    I would like to do a fresh install on an OCZ Vertex Plus R2 SSD 60GB drive I purchased on the cheap. Since the AES encryption looks like it may not work optimally for this drive, I would like to set up a dual-boot to Windows Vista (the only Windows copy I have for clean install purposes) and Ubuntu 12.04 with the best encryption scheme possible. My plan is to have Windows around just in case I need to use a program that won't work with Wine, and Ubuntu as my daily OS with all of my information secured in case the laptop is ever stolen or sold. Although this setup will not provide a lot of space, I think I can squeeze in both OSes and have enough room for second-computer office tasks. So, my questions are:
    - Which OS should I install first, Ubuntu or Vista? Any special considerations when partitioning the drive?
    - How should I install Ubuntu to ensure full disk encryption for the Linux partition(s) and/or my daily computing?
    - Is there a significant performance upgrade with doing a solo install of Ubuntu instead of a dual boot setup? Will TRIM, for example, work correctly?
    - Are there any significant security concerns with going the route of a dual-boot, other than the fact that any activity on Windows may be fully recoverable if the drive is stolen or sold?
    Thanks in advance!

    Read the article

  • JCP 2012 Award Nominations Announced

    - by heathervc
    The 10th Annual JCP Program Award Nominations have been posted on JCP.org. The community gets together every year during JavaOne to congratulate the winners and nominees at the JCP Community Party held in San Francisco. This year there are three awards: JCP Member/Participant of the Year, Outstanding Spec Lead, and Most Significant JSR.
    Member of the Year:
    - Stephen Colebourne
    - Markus Eisele
    - Google
    - JUG Chennai
    - Werner Keil
    - London Java Community and SouJava
    - Antoine Sabot-Durand
    Outstanding Spec Lead:
    - Michael Ernst, JSR 308, Annotations on Java Types
    - Victor Grazi, Credit Suisse, JSR 354, Money and Currency API
    - Nigel Deakin, Oracle, JSR 343, Java Message Service 2.0
    - Pete Muir, Red Hat, JSR 346, Contexts and Dependency Injection for Java EE 1.1
    Most Significant JSR:
    - API for JSON Processing, JSR 353
    - Money and Currency API, JSR 354
    - Java State Management, JSR 350
    - Java Message Service 2, JSR 343
    - JCP.Next, JSR 348, JSR 355, and JSR 358
    Congratulations to the nominees; you can read the nomination text and more information about the awards here. And remember to join us on Tuesday, 2 October at the Infusion Lounge to celebrate with the winners and nominees!

    Read the article

  • What’s New for Oracle Commerce? Executive QA with John Andrews, VP Product Management, Oracle Commerce

    - by Katrina Gosek
    Oracle Commerce was for the fifth time positioned as a leader by Gartner in the Magic Quadrant for E-Commerce. This inspired me to sit down with Oracle Commerce VP of Product Management, John Andrews to get his perspective on what continues to make Oracle a leader in the industry and what’s new for Oracle Commerce in 2013. Q: Why do you believe Oracle Commerce continues to be a leader in the industry? John: Oracle has a great acquisition strategy – it brings best-of-breed technologies into the product fold and then continues to grow and innovate them. This is particularly true with products unified into the Oracle Commerce brand. Oracle acquired ATG in late 2010 – and then Endeca in late 2011. This means that under the hood of Oracle Commerce you have market-leading technologies for cross-channel commerce and customer experience, both designed and developed in direct response to the unique challenges online businesses face. And we continue to innovate on capabilities core to what our customers need to be successful – contextual and personalized experience delivery, merchant-inspired tools, and architecture for performance and scalability. Q: It’s not a slow moving industry. What are you doing to keep the pace of innovation at Oracle Commerce? John: Oracle owes our customers the most innovative commerce capabilities. By unifying the core components of ATG and Endeca we are delivering on this promise. Oracle Commerce is continuing to innovate and redefine how commerce is done and in a way that drive business results and keeps customers coming back for experiences tailored just for them. Our January and May 2013 releases not only marked the seventh significant releases for the solution since the acquisitions of ATG and Endeca, we also continue to demonstrate rapid and significant progress on the unification of commerce and customer experience capabilities of the two commerce technologies. Q: Can you tell us what was notable about these latest releases under the Oracle Commerce umbrella? John: Specifically, our latest product innovations give businesses selling online the ability to get to market faster with more personalized commerce experiences in the following ways: Mobile: the latest Commerce Reference Application in this release offers a wider range of examples for online businesses to leverage for iOS development and specifically new iPad reference capabilities. This release marks the first release of the iOS Universal application that serves both the iPhone and iPad devices from a single download or binary. Business users can now drive page content management and layout of search results and category pages, as well as create additional storefront elements such as categories, facets / dimensions, and breadcrumbs through Experience Manager tools. Cross-Channel Commerce: key commerce platform capabilities have been added to support cross-channel commerce, including an expanded inventory model to maintain inventory for stores, pickup in stores and Web-based returns. Online businesses with in-store operations can now offer advanced shipping options on the web and make returns and exchange logic easily available on the web. Multi-Site Capabilities: significant enhancements to the Commerce Platform multi-site architecture that allows business users to quickly launch and manage multiple sites on the same cluster and share data, carts, and other components. 
    First introduced in 2010, with this latest release business users can now partition or share customer profiles, control users’ site-based access, and manage personalization assets using site groups. Internationalization: continued language support and enhancements for business user tools as well as search and navigation. Guided Search now supports 35 total languages with 11 new languages (including Danish, Arabic, Norwegian, Serbian Cyrillic) added in this release. Commerce Platform tools now include localized support for 17 locales with 4 new languages (Danish, Portuguese (European), Finnish, and Thai). No development or customization is required in order for business users to use the applications in any of these supported languages. Business Tool Experience: valuable new Commerce Merchandising features include a new workflow for making emergency changes quickly and increased visibility into promotion rules and qualifications in preview mode. Oracle Commerce business tools continue to become more and more feature rich to provide intuitive, easy-to-use (yet powerful) capabilities that allow business users to manage content and the shopping experience. Commerce & Experience Unification: demonstrable unification of commerce and customer experience capabilities includes productized cartridges that provide supported integration between the Commerce Platform and Experience Management tools, cross-channel returns, Oracle Service Cloud integration, and an integrated iPad application. The mission guiding our product development is to deliver differentiated, personalized user experiences across any device in a contextual manner – and to give the business the best tools to tune and optimize those user experiences to meet their business objectives. We also need to do this in a way that makes it operationally efficient for the business, keeping the overall total cost of ownership low – yet also allows the business to expand, whether it be to new business models, geographies or brands. To learn more about the latest Oracle Commerce releases and mission, visit the links below:
    • Hear more from John about the Oracle Commerce mission
    • Hear from Oracle Commerce customers
    • Documentation on the new releases
    • Listen to the Oracle ATG Commerce 10.2 Webcast
    • Listen to the Oracle Endeca Commerce 3.1.2 Webcast

    Read the article

  • Columnstore Case Study #2: Columnstore faster than SSAS Cube at DevCon Security

    - by aspiringgeek
    Preamble
    This is the second in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014. Many of these can be found in my big deck along with details such as internals, best practices, caveats, etc. The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative. See also Columnstore Case Study #1: MSIT SONAR Aggregations.
    Why Columnstore?
    As stated previously, if we’re looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we’re asking a question which by design needs to hit lots of rows—DW, reporting, aggregations, grouping, scans, etc.—SQL Server has never had a good mechanism—until columnstore. Columnstore indexes were introduced in SQL Server 2012. However, they're still largely unknown. Some adoption blockers existed; yet columnstore was nonetheless a game changer for many apps. In SQL Server 2014, potential blockers have been largely removed & they're going to profoundly change the way we interact with our data. The purpose of this series is to share the performance benefits of columnstore & to document why columnstore is a compelling reason to upgrade to SQL Server 2014.
    The Customer
    DevCon Security provides home & business security services & has been in business for 135 years. I met DevCon personnel while speaking to the Utah County SQL User Group on 20 February 2012. (Thanks to TJ Belt (b|@tjaybelt) & Ben Miller (b|@DBADuck) for the invitation, which serendipitously coincided with the height of ski season.)
    The App: DevCon Security Reporting: Optimized & Ad Hoc Queries
    DevCon users interrogate a SQL Server 2012 Analysis Services cube via SSRS. In addition, the SQL Server 2012 relational back end is the target of ad hoc queries; this DW back end is refreshed nightly during a brief maintenance window via conventional table partition switching.
    SSRS, SSAS, & MDX
    Conventional relational structures were unable to provide adequate performance for user interaction for the SSRS reports. An SSAS solution was implemented, requiring personnel to ramp up technically, including learning enough MDX to satisfy requirements.
    Ad Hoc Queries
    Even though the fact table is relatively small—only 22 million rows & 33GB—the table was a typical DW table in terms of its width: 137 columns, any of which could be the target of ad hoc interrogation. As is common in DW reporting scenarios such as this, it is often nearly impossible to optimize for such queries using conventional indexing. DevCon DBAs & developers attended PASS 2012 & were introduced to the marvels of columnstore in a session presented by Klaus Aschenbrenner (b|@Aschenbrenner).
    The Details
    Classic vs. columnstore before-&-after metrics are impressive.
    Scenario | Conventional Structures | Columnstore | Improvement
    SSRS via SSAS | 10 - 12 seconds | 1 second | >10x
    Ad Hoc | 5 - 7 minutes (300 - 420 seconds) | 1 - 2 seconds | >100x
    Here are two charts characterizing this data graphically. The first is a linear representation of Report Duration (in seconds) for Conventional Structures vs. Columnstore Indexes. As is so often the case when we chart such significant deltas, the linear scale doesn’t expose some of the dramatically improved values corresponding to the columnstore metrics. Just to make it fair, here’s the same data represented logarithmically; yet even here the values corresponding to 1 - 2 seconds aren’t visible.
    The Wins
    Performance: Even prior to columnstore implementation, at 10 - 12 seconds canned report performance against the SSAS cube was tolerable. Yet the 1 second performance afterward is clearly better. As significant as that is, imagine the user experience re: ad hoc interrogation. The difference between several minutes vs. one or two seconds is a game changer, literally changing the way users interact with their data—no mental context switching, no wondering when the results will appear, no preoccupation with the spinning mind-numbing hurry-up-&-wait indicators. As we’ve commonly found elsewhere, columnstore indexes here provided performance improvements of one, two, or more orders of magnitude.
    Simplified Infrastructure: Because in this case a nonclustered columnstore index on a conventional DW table was faster than an Analysis Services cube, the entire SSAS infrastructure was rendered superfluous & was retired.
    PASS Rocks: Once again, the value of attending PASS is proven out. The trip to Charlotte combined with eager & enquiring minds led directly to this success story. Find out more about the next PASS Summit here, hosted this year in Seattle on November 4 - 7, 2014. DevCon BI Team Lead Nathan Allan provided this unsolicited feedback: “What we found was pretty awesome. It has been a game changer for us in terms of the flexibility we can offer people that would like to get to the data in different ways.”
    Summary
    For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing. I have documented here, in the second in a series of reports on columnstore implementations, results from DevCon Security, a live customer production app for which performance increased by factors of 10x to 100x for all report queries, including canned queries, and for which time to results for ad hoc queries dropped from 5 - 7 minutes to 1 - 2 seconds. As a result of columnstore performance, the customer retired their SSAS infrastructure. I invite you to consider leveraging columnstore in your own environment. Let me know if you have any questions.

    Read the article

  • Seizing the Moment with Mobility

    - by Divya Malik
    Empowering people to work where they want to work is becoming more critical now with the consumerisation of technology. Employees are bringing their own devices to the workplace and expecting to be productive wherever they are. Sales people welcome the ability to run their critical business applications where they can be most effective which is typically on the road and when they are still with the customer. Oracle has invested many years of research in understanding customer's Mobile requirements. “The keys to building the best user experience were building in a lot of flexibility in ways to support sales, and being useful,” said Arin Bhowmick, Director, CRM, for the Applications UX team. “We did that by talking to and analyzing the needs of a lot of people in different roles.” The team studied real-life sales teams. “We wanted to study salespeople in context with their work,” Bhowmick said. “We studied all user types in the CRM world because we wanted to build a user interface and user experience that would cater to sales representatives, marketing managers, sales managers, and more. Not only did we do studies in our labs, but also we did studies in the field and in mobile environments because salespeople are always on the go.” Here is a recent post from Hernan Capdevila, Vice President, Oracle Fusion Apps which was featured on the Oracle Applications Blog.  Mobile devices are forcing a paradigm shift in the workplace – they’re changing the way businesses can do business and the type of cultures they can nurture. As our customers talk about their mobile needs, we hear them saying they want instant-on access to enterprise data so workers can be more effective at their jobs anywhere, anytime. They also are interested in being more cost effective from an IT point of view. The mobile revolution – with the idea of BYOD (bring your own device) – has added an interesting dynamic because previously IT was driving the employee device strategy and ecosystem. That's been turned on its head with the consumerization of IT. Now employees are figuring out how to use their personal devices for work purposes and IT has to figure out how to adapt. Blurring the Lines between Work and Personal Life My vision of where businesses will be five years from now is that our work lives and personal lives will be more interwoven together. In turn, enterprises will have to determine how to make employees’ work lives fit more into the fabric of their personal lives. And personal devices like smartphones are going to drive significant business value because they let us accomplish things very incrementally. I can be sitting on a train or in a taxi and be productive. At the end of any meeting, I can capture ideas and tasks or follow up with people in real time. Mobile devices enable this notion of seizing the moment – capitalizing on opportunities that might otherwise have slipped away because we're not connected. For the industry shapers out there, this is game changing. The lean and agile workforce is definitely the future. This notion of the board sitting down with the executive team to lay out strategic objectives for a three- to five-year plan, bringing in HR to determine how they're going to staff the strategic activities, kicking off the execution, and then revisiting the plan in three to five years to create another three- to five-year plan is yesterday's model. Businesses that continue to approach innovating in that way are in the dinosaur age. 
Today it's about incremental planning and incremental execution, which requires a lot of cohesion and synthesis within the workforce. There needs to be this interweaving notion within the workforce about how ideas cascade down, how people engage, how they stay connected, and how insights are shared. How to Survive and Thrive in Today’s Marketplace The notion of Facebook isn’t new. We lived it pre-Internet days with America Online and Prodigy – Facebook is just the renaissance of these services in a more viral and pervasive way. And given the trajectory of the consumerization of IT with people bringing their personal tooling to work, the enterprise has no option but to adapt. The sooner that businesses realize this from a top-down point of view the sooner that they will be able to really drive significant innovation and adapt to the marketplace. There are a small number of companies right now (I think it's closer to 20% rather than 80%, but the number is expanding) that are able to really innovate in this incremental marketplace. So from a competitive point of view, there's no choice but to be social and stay connected. By far the majority of users on Facebook and LinkedIn are mobile users – people on iPhones, smartphones, Android phones, and tablets. It's not the couch people, right? It's the on-the-go people – those people at the coffee shops. Usually when you're sitting at your desk on a big desktop computer, typically you have better things to do than to be on Facebook. This is a topic I'm extremely passionate about because I think mobile devices are game changing. Mobility delivers significant value to businesses – it also brings dramatic simplification from a functional point of view and transforms our work life experience. Hernan Capdevila Vice President, Oracle Applications Development

    Read the article

  • Implementing a Custom Coherence PartitionAssignmentStrategy

    - by jpurdy
    A recent A-Team engagement required the development of a custom PartitionAssignmentStrategy (PAS). By way of background, a PAS is an implementation of a Java interface that controls how a Coherence partitioned cache service assigns partitions (primary and backup copies) across the available set of storage-enabled members. While seemingly straightforward, this is actually a very difficult problem to solve. Traditionally, Coherence used a distributed algorithm spread across the cache servers (and as of Coherence 3.7, this is still the default implementation). With the introduction of the PAS interface, the model of operation was changed so that the logic would run solely in the cache service senior member. Obviously, this makes the development of a custom PAS vastly less complex, and in practice does not introduce a significant single point of failure/bottleneck. Note that Coherence ships with a default PAS implementation but it is not used by default. Further, custom PAS implementations are uncommon (this engagement was the first custom implementation that we know of). The particular implementation mentioned above also faced challenges related to managing multiple backup copies, but that won't be discussed here. There were a few challenges that arose during design and implementation:
    - Naive algorithms had an unreasonable upper bound of computational cost.
    - There was significant complexity associated with configurations where the member count varied significantly between physical machines.
    - Most of the complexity of a PAS is related to rebalancing, not initial assignment (which is usually fairly simple).
    A custom PAS may need to solve several problems simultaneously, such as:
    - Ensuring that each member has a similar number of primary and backup partitions (e.g. each member has the same number of primary and backup partitions)
    - Ensuring that each member carries similar responsibility (e.g. the most heavily loaded member has no more than one partition more than the least loaded)
    - Ensuring that each partition is on the same member as a corresponding local resource (e.g. for applications that use partitioning across message queues, to ensure that each partition is collocated with its corresponding message queue)
    - Ensuring that a given member holds no more than a given number of partitions (e.g. no member has more than 10 partitions)
    - Ensuring that backups are placed far enough away from the primaries (e.g. on a different physical machine or a different blade enclosure)
    - Achieving the above goals while ensuring that partition movement is minimized
    These objectives can be even more complicated when the topology of the cluster is irregular. For example, if multiple cluster members may exist on each physical machine, then clearly the possibility exists that at certain points (e.g. following a member failure), the number of members on each machine may vary, in certain cases significantly so. Consider the case where there are three physical machines, with 3, 3 and 9 members each (respectively). This introduces complexity since the backups for the 9 members on the largest machine must be spread across the other 6 members (to ensure placement on different physical machines), preventing an even distribution. For any given problem like this, there are usually reasonable compromises available, but the key point is that objectives may conflict under extreme (but not at all unlikely) circumstances.
The most obvious general purpose partition assignment algorithm (possibly the only general purpose one) is to define a scoring function for a given mapping of partitions to members, and then apply that function to each possible permutation, selecting the most optimal permutation. This would result in N! (factorial) evaluations of the scoring function. This is clearly impractical for all but the smallest values of N (e.g. a partition count in the single digits). It's difficult to prove that more efficient general purpose algorithms don't exist, but the key take away from this is that algorithms will tend to either have exorbitant worst case performance or may fail to find optimal solutions (or both) -- it is very important to be able to show that worst case performance is acceptable. This quickly leads to the conclusion that the problem must be further constrained, perhaps by limiting functionality or by using domain-specific optimizations. Unfortunately, it can be very difficult to design these more focused algorithms. In the specific case mentioned, we constrained the solution space to very small clusters (in terms of machine count) with small partition counts and supported exactly two backup copies, and accepted the fact that partition movement could potentially be significant (preferring to solve that issue through brute force). We then used the out-of-the-box PAS implementation as a fallback, delegating to it for configurations that were not supported by our algorithm. Our experience was that the PAS interface is quite usable, but there are intrinsic challenges to designing PAS implementations that should be very carefully evaluated before committing to that approach.

    Read the article

  • Differences in IE8 behavior between XP, Vista, Win7?

    - by Piskvor
    Is there any significant difference in behavior (HTML, CSS, Javascript, ...) with Internet Explorer 8 on different operating systems? In other words, will a web page work the same way across IE8+XP, IE8+Vista and IE8+Win7, or are there some significant differences? (I'm aware that installed plugins and fonts will have an impact, but that's a bit outside my scope at the moment; assuming compatibility mode X-UA-Compatible: IE=8 or edge) Although The IEBlog contains very useful information, I haven't found this data there - so I'm assuming that there should not be any difference. However, search has turned up this (vague) question: "IE8 on XP: looks great! IE8 on Vista: looks terrible". Will have to check IE8+{XP,V,7} in VM in the meantime.

    Read the article

  • Fastest way to calculate an X-bit bitmask?

    - by Virtlink
    I have been trying to solve this problem for a while, but couldn't with just integer arithmetic and bitwise operators. However, I think it's possible and it should be fairly easy. What am I missing? The problem: to get an integer value of arbitrary length (this is not relevant to the problem) with its X least significant bits set to 1 and the rest to 0. For example, given the number 31, I need to get an integer value which equals 0x7FFFFFFF (31 least significant bits are 1 and the rest zeros). Of course, using a loop OR-ing a shifted 1 into an integer X times will do the job. But that's not the solution I'm looking for. It should be more in the direction of (X << Y - 1), thus using no loops.
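
    For reference, a loop-free version along the lines being suggested is sketched below in C++; the only subtlety is that shifting a 32-bit value by 32 is undefined behaviour in C and C++, so the X == 32 case needs either a branch or a wider intermediate type.

        #include <cstdint>

        // Mask with the x least significant bits set, e.g. x = 31 -> 0x7FFFFFFF.
        std::uint32_t low_bits_mask(unsigned x) {
            if (x >= 32) return 0xFFFFFFFFu;          // avoid shifting a 32-bit value by 32
            return (std::uint32_t{1} << x) - 1u;      // (1 << x) - 1
        }

        // Branch-free alternative for x <= 32: do the shift in 64 bits, then truncate.
        std::uint32_t low_bits_mask64(unsigned x) {
            return static_cast<std::uint32_t>((std::uint64_t{1} << x) - 1u);
        }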

    Read the article

  • No speed-up with useless printf's using OpenMP

    - by t2k32316
    I just wrote my first OpenMP program that parallelizes a simple for loop. I ran the code on my dual core machine and saw some speed-up when going from 1 thread to 2 threads. However, I ran the same code on a school Linux server and saw no speed-up. After trying different things, I finally realized that removing some useless printf statements caused the code to have significant speed-up. Below is the main part of the code that I parallelized:
        #pragma omp parallel for private(i)
        for(i = 2; i <= n; i++) {
            printf("useless statement");
            prime[i-2] = is_prime(i);
        }
    I guess that the implementation of printf has significant overhead that OpenMP must be duplicating with each thread. What causes this overhead and why can OpenMP not overcome it?
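
    For comparison, here is a self-contained sketch of the same kind of loop with the I/O removed and timed with omp_get_wtime. The trial-division is_prime and the value of n are stand-ins for the poster's originals, not their actual code; stdio calls typically take an internal lock, so leaving them inside the region serializes the threads and can hide any speed-up.

        #include <omp.h>
        #include <cstdio>
        #include <vector>

        // Simple trial-division primality test (stand-in for the original is_prime).
        static bool is_prime(int n) {
            if (n < 2) return false;
            for (int d = 2; (long long)d * d <= n; ++d)
                if (n % d == 0) return false;
            return true;
        }

        int main() {
            const int n = 2000000;
            std::vector<char> prime(n - 1);

            double t0 = omp_get_wtime();
            #pragma omp parallel for schedule(dynamic, 1000)
            for (int i = 2; i <= n; i++) {
                prime[i - 2] = is_prime(i);   // no stdout work inside the parallel region
            }
            double elapsed = omp_get_wtime() - t0;

            std::printf("computed %d flags in %.3f s\n", n - 1, elapsed);
            return 0;
        }

    Timing the region this way (and compiling with -fopenmp or the compiler's equivalent) makes it easier to see whether the parallel loop itself scales once the printf calls are out of the picture.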

    Read the article

  • silverlight for .NET / CLR based numerical computing on osx

    - by Jonathan Shore
    I'm interested in using F# for numerical work, but my platforms are not Windows-based. Mono still has a significant performance penalty for programs that generate a significant number of short-lived objects (as would be typical for functional languages). Silverlight is available on OSX. I had seen some reference indicating that assemblies compiled in the usual way could not be referenced, but I'm not clear on the details. I'm not interested in UIs, but am wondering whether I could use the VM bundled with Silverlight effectively for execution. I would want to be able to reference a large library of numerical models I already have in Java (cross-compiled via IKVM to .NET assemblies) and a new codebase written in F#. My hope would be that the Silverlight VM on OSX has good performance and can reference external assemblies and native libraries. Is this doable?

    Read the article

  • Endianness inside CPU registers

    - by Abhishek Tamhane
    I need help understanding endianness inside CPU registers of x86 processors. I wrote this small assembly program:
        section .data
        section .bss
        section .text
        global _start

        _start:
            nop
            mov eax, 0x78FF5ABC
            mov ebx, 'WXYZ'
            nop                 ; GDB breakpoint here.
            mov eax, 1
            mov ebx, 0
            int 0x80
    I ran this program in GDB with a breakpoint on line number 10 (commented in the source above). At this breakpoint, info registers shows the value of eax=0x78ff5abc and ebx=0x5a595857. Since the ASCII codes for W, X, Y, Z are 0x57, 0x58, 0x59, 0x5A respectively, and Intel is little-endian, 0x5a595857 seems like the correct byte order (least significant byte first). Why, then, isn't the output for the eax register 0xbc5aff78 (least significant byte of the number 0x78ff5abc first) instead of 0x78ff5abc?
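
    A short C++ experiment (an illustration, not part of the original program) makes the distinction visible: byte order only exists once a value is laid out in memory; inside a register, eax simply holds the number 0x78FF5ABC, which is why GDB prints it that way.

        #include <cstdint>
        #include <cstdio>
        #include <cstring>

        int main() {
            std::uint32_t value = 0x78FF5ABC;   // same constant as the mov eax example
            unsigned char bytes[4];
            std::memcpy(bytes, &value, sizeof value);

            // On a little-endian x86 machine this prints: bc 5a ff 78
            // (least significant byte at the lowest address).
            for (unsigned char b : bytes)
                std::printf("%02x ", b);
            std::printf("\n");
            return 0;
        }

    The ebx value only looks byte-swapped because NASM packs the character constant 'WXYZ' with 'W' in the least significant byte, giving the numeric value 0x5A595857; once loaded, the register again just holds a number.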

    Read the article

  • C++ long long manipulation

    - by Krakkos
    Given 2 32-bit ints, iMSB and iLSB:
        int iMSB = 12345678; // Most Significant Bits of file size in Bytes
        int iLSB = 87654321; // Least Significant Bits of file size in Bytes
    the long long form would be...
        // Always positive so use 31 bits
        long long full_size = ((long long)iMSB << 31);
        full_size += (long long)(iLSB);
    Now... I don't need that much precision (that exact number of bytes), so how can I convert the file size to MiBytes to 3 decimal places and convert it to a string? I tried this:
        long double file_size_megs = file_size_bytes / (1024 * 1024);
        char strNumber[20];
        sprintf(strNumber, "%ld", file_size_megs);
    ... but it doesn't seem to work. i.e. 1234567899878 Bytes = 1177375.698 MiB ??
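
    A hedged sketch of one way to do this, assuming iMSB and iLSB really are the upper and lower 32-bit halves of the size (so the shift should be a full 32 bits) and noting that %ld is a signed-integer conversion, not a floating-point one:

        #include <cstdint>
        #include <cstdio>
        #include <string>

        int main() {
            std::uint32_t iMSB = 12345678;   // upper 32 bits of the size in bytes
            std::uint32_t iLSB = 87654321;   // lower 32 bits of the size in bytes

            // Combine the two halves in an unsigned 64-bit type; shift by 32, not 31.
            std::uint64_t size_bytes = (static_cast<std::uint64_t>(iMSB) << 32) |
                                        static_cast<std::uint64_t>(iLSB);

            // Convert to MiB with three decimal places using a floating-point format.
            double size_mib = static_cast<double>(size_bytes) / (1024.0 * 1024.0);

            char buf[32];
            std::snprintf(buf, sizeof buf, "%.3f", size_mib);
            std::string text(buf);            // the string form asked for

            std::printf("%s MiB\n", text.c_str());
            return 0;
        }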

    Read the article

  • Storing script files outside web root

    - by memilanuk
    I've seen recommendations to store some or all php include files some place other than in the web document root directory (username/public_html in my case) for the specific reason of protecting php files with sensitive information (like database connection and login info) in the event that the web server hiccups and stops protecting php files and they become 'visible' to outsiders who know where to look. It seems somewhat paranoid to me, but I'm guessing people have gotten burned badly on this before so I'm willing to go along. The suggestion usually takes the form of having the include files in something like '../include_files/' so its not directly in the document root and not directly accessible to outsiders through the web server. My question is this: is there a significant difference in security between that way and just putting your 'include_files' directory under the document root and sticking an .htaccess file in there (with the appropriate entries)? Would putting an .htaccess file in '../include_files/' make any significant improvement there? TIA, Monte

    Read the article

  • Standard (cross-platform) way for bit manipulation

    - by Kiril Kirov
    As there are different binary representations of numbers (for example, take big/little endian), is this cross-platform:
        some_unsigned_type variable = some_number;
        // set n-th bit, starting from 1,
        // right-to-left (least significant to most significant)
        variable |= ( 1 << ( n - 1 ) );
        // clear the same bit:
        variable &= ~( 1 << ( n - 1 ) );
    In other words, does the compiler always take care of the different binary representations of unsigned numbers, or is it platform-specific? And what if variable is a signed integral type (for example, int) and its value is zero, positive, or negative? What does the Standard say about this? P.S. And, yes, I'm interested in both - C and C++, please don't tell me they are different languages, because I know this :) I can paste a real example if needed, but the post will become too long.
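
    To illustrate the point in question form, here is a tiny C++ sketch: shifts and masks operate on values, not on the bytes' in-memory order, so the same expressions give the same results on big- and little-endian machines, and using an unsigned type (or an unsigned literal like 1u) sidesteps the signed-shift corner cases.

        #include <cassert>
        #include <cstdint>

        // Set / clear the n-th bit, n counted from 1 at the least significant end.
        std::uint32_t set_bit(std::uint32_t v, unsigned n)   { return v |  (1u << (n - 1)); }
        std::uint32_t clear_bit(std::uint32_t v, unsigned n) { return v & ~(1u << (n - 1)); }

        int main() {
            std::uint32_t x = 0;
            x = set_bit(x, 3);        // x == 4 on any endianness
            assert(x == 4u);
            x = clear_bit(x, 3);
            assert(x == 0u);
            return 0;
        }

    Endianness only becomes observable once you reinterpret the object representation (e.g. by memcpy into a byte array); for signed types, shifting into or past the sign bit is where the Standard's guarantees get murky, which is why unsigned types are the usual recommendation for bit manipulation.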

    Read the article

  • What Use are Threads Outside of Parallel Problems on MultiCore Systems?

    - by Robert S. Barnes
    Threads make the design, implementation and debugging of a program significantly more difficult. Yet many people seem to think that every task in a program that can be threaded should be threaded, even on a single core system. I can understand threading something like an MPEG2 decoder that's going to run on a multicore cpu ( which I've done ), but what can justify the significant development costs threading entails when you're talking about a single core system or even a multicore system if your task doesn't gain significant performance from a parallel implementation? Or more succinctly, what kinds of non-performance related problems justify threading? Edit Well I just ran across one instance that's not CPU limited but threads make a big difference: TCP, HTTP and the Multi-Threading Sweet Spot Multiple threads are pretty useful when trying to max out your bandwidth to another peer over a high latency network connection. Non-blocking I/O would use significantly less local CPU resources, but would be much more difficult to design and implement.

    Read the article
