Search Results

Search found 6869 results on 275 pages for 'tek systems'.

  • How to Use Steam In-Home Streaming

    - by Chris Hoffman
    Steam’s In-Home Streaming is now available to everyone, allowing you to stream PC games from one PC to another PC on the same local network. Use your gaming PC to power your laptops and home theater system. This feature doesn’t allow you to stream games over the Internet, only over the same local network. Even if you tricked Steam, you probably wouldn’t get good streaming performance over the Internet.

    Why Stream?
    When you use Steam In-Home Streaming, one PC sends its video and audio to another PC. The other PC views the video and audio like it’s watching a movie, sending mouse, keyboard, and controller input back to the first PC. This allows you to have a fast gaming PC power your gaming experience on slower PCs. For example, you could play graphically demanding games on a laptop in another room of your house, even if that laptop has slower integrated graphics. You could connect a slower PC to your television and use your gaming PC without hauling it into a different room of your house. Streaming also enables cross-platform compatibility. You could have a Windows gaming PC and stream games to a Mac or Linux system. This will be Valve’s official solution for compatibility with old Windows-only games on the Linux (Steam OS) Steam Machines arriving later this year. NVIDIA offers its own game streaming solution, but it requires certain NVIDIA graphics hardware and can only stream to an NVIDIA Shield device.

    How to Get Started
    In-Home Streaming is simple to use and doesn’t require any complex configuration — or any configuration, really. First, log into the Steam program on a Windows PC. This should ideally be a powerful gaming PC with a powerful CPU and fast graphics hardware. Install the games you want to stream if you haven’t already — you’ll be streaming from your PC, not from Valve’s servers. (Valve will eventually allow you to stream games from Mac OS X, Linux, and Steam OS systems, but that feature isn’t yet available. You can still stream games to these other operating systems.) Next, log into Steam on another computer on the same network with the same Steam username. Both computers have to be on the same subnet of the same local network. You’ll see the games installed on your other PC in the Steam client’s library. Click the Stream button to start streaming a game from your other PC. The game will launch on the host PC, and it will send its audio and video to the PC in front of you. Your input on the client will be sent back to the server. Be sure to update Steam on both computers if you don’t see this feature. Use the Steam > Check for Updates option within Steam and install the latest update. Updating to the latest graphics drivers for your computer’s hardware is always a good idea, too.

    Improving Performance
    Here’s what Valve recommends for good streaming performance:

    Host PC: A quad-core CPU for the computer running the game, minimum. The computer needs enough processor power to run the game, compress the video and audio, and send it over the network with low latency.
    Streaming Client: A GPU that supports hardware-accelerated H.264 decoding on the client PC. This hardware is included in all recent laptops and PCs. If you have an older PC or netbook, it may not be able to decode the video stream quickly enough.
    Network Hardware: A wired network connection is ideal. You may have success with wireless N or AC networks with good signals, but this isn’t guaranteed.
    Game Settings: While streaming a game, visit the game’s settings screen and lower the resolution or turn off VSync to speed things up.
    In-Home Streaming Settings: On the host PC, click Steam > Settings and select In-Home Streaming to view the In-Home Streaming settings. You can modify your streaming settings to improve performance and reduce latency. Feel free to experiment with the options here and see how they affect performance — they should be self-explanatory.

    Check Valve’s In-Home Streaming documentation for troubleshooting information. You can also try streaming non-Steam games. Click Games > Add a Non-Steam Game to My Library on your host PC and add a PC game you have installed elsewhere on your system. You can then try streaming it from your client PC. Valve says this “may work but is not officially supported.”

    Image Credit: Robert Couse-Baker on Flickr, Milestoned on Flickr

    Read the article

  • BizTalk 2009 - Installing BizTalk Server 2009 on XP for Development

    - by StuartBrierley
    At my previous employer, when developing for BizTalk Server 2004 using Visual Studio 2003, we made use of separate development and deployment environments; developing in Visual Studio on our client PCs and then deploying to a separate shared BizTalk 2004 Server from there. This server was part of a multi-server Standard BizTalk environment comprising separate BizTalk Server 2004 and SQL Server 2000 servers. This environment was implemented a number of years ago by an outside consulting company, and while it worked it did occasionally cause contention issues with three developers deploying to the same server to carry out unit testing!

    Now that I am making the design and implementation decisions about the environment that BizTalk will be developed in and deployed to, I have chosen to create a single "server" installation on my development PC, installing SQL Server 2008, Visual Studio 2008 and BizTalk Server 2009 on a single system. The client PC in use is actually a MacBook Pro running Windows XP; not the most powerful of systems for high volume processing, but it should be powerful enough to allow development and initial unit testing to take place. I did not need to, and so chose not to, install all of the components detailed in the Microsoft guide for installing BizTalk 2009 on Windows XP, but I did follow the basics of the procedures detailed within. Outlined below are the highlights of this process and details of the choices I made.

    Install IIS
    I had previously installed Windows XP, including all current service packs and critical updates. At the time of installation this included Service Pack 3, the .NET Framework 3.5 and MS Windows Installer 3.1. Having a running XP system, my first step was to install IIS - this is quite straightforward and posed no difficulties.

    Install Visual Studio 2008
    The next step for me was to install Visual Studio 2008. Making sure to select a custom installation is crucial at this point, as you need to make sure that you deselect SQL Server 2005 Express Edition, as it can cause the BizTalk installation to fail. The installation guide suggests that you only select Visual C# when selecting features to install, but I decided that due to some legacy systems I have code for, I would also select the VB and ASP options.

    Visual Studio 2008 Service Pack 1
    Following the completion of the installation of Visual Studio itself, you should then install Visual Studio 2008 Service Pack 1.

    SQL Server 2008 Standard Edition
    The next step before installing BizTalk Server 2009 itself is to install SQL Server 2008 Standard Edition. On the feature selection screen make sure that you select the following options: Database Engine Services, SQL Server Replication, Full-Text Search, Analysis Services, Reporting Services, Business Intelligence Development Studio, Client Tools Connectivity, Integration Services, and Management Tools (Basic and Complete). Use the default instance and the same accounts for all SQL Server instances - in my case I used the Network Service and Local Service accounts for the two sets of accounts. On the database engine configuration screen I selected Windows authentication and added the current user, adding the same user again on the Analysis Services configuration screen. All other screens were left on the default settings. The SQL Server 2008 installation also included the installation of hotfix for XP KB942288-v3, the Windows Installer 4.5 Redistributable.

    System Configuration
    At this stage I took a moment to disable the SQL Server shared memory protocol and enable the Named Pipes and TCP/IP protocols. These can be found in SQL Server Configuration Manager > SQL Server Network Configuration > Protocols for MSSQLServer. I also made sure that the DTC settings were configured correctly.

    BizTalk Server 2009
    The penultimate step is to install BizTalk Server 2009 Standard Edition. I had previously downloaded the redistributable prerequisites as a CAB file, so I was able to make use of this when carrying out the installation. When selecting which components to install I selected: Server Runtime, BizTalk EDI/AS2 Runtime, WCF Adapter Runtime, Portal Components, Administrative Tools, WCF Administration Tools, Developer Tools and SDK, Enterprise SSO Administration Module, Enterprise SSO Master Secret Server, Business Rules Components, BAM Alert Provider, BAM Client, and BAM Eventing. Once installation has completed, clear the Launch BizTalk Server Configuration check box and select Finish.

    Verify the Installation
    Before configuring BizTalk Server it is a good idea to check that BizTalk Server 2009 is installed and that SQL Server 2008 has started correctly. The easiest way to verify the BizTalk installation is to check Programs and Features in Control Panel. Check that SQL Server is started by looking in SQL Server Configuration Manager.

    Configure BizTalk Server 2009
    Finally we are ready to configure BizTalk Server 2009. To start this I opted for a custom configuration that allowed me to choose the settings to be used in more detail. For all databases I selected the local server and default database names. For all accounts I used a local account that had been created specifically for the BizTalk services. For all Windows groups I allowed the configuration wizard to create the default local groups. The configuration wizard then ran. Upon completion you will be presented with a screen detailing the success or failure of the configuration. If your configuration failed you will need to sort out the issues and try again (it is possible to save the configuration settings for later use if you want to - except passwords, of course!). If you see lots of nice green ticks - congratulations, BizTalk Server 2009 on XP is now installed and configured ready for development.

    Read the article

  • How To: Use Monitoring Rules and Policies

    - by Owen Allen
    One of Ops Center's most useful features is its asset monitoring capability. When you discover an asset - an operating system, say, or a server - a default monitoring policy is applied to it, based on the asset type. This policy contains rules that specify what properties are monitored and what thresholds are considered significant. Ops Center will send a notification if a monitored asset passes one of the specified thresholds. But sometimes you want different assets to be monitored in different ways. For example, you might have a group of mission-critical systems, for which you want to be notified immediately if their file system usage rises above a specific threshold. You can do so by creating a new monitoring policy and applying it to the group. You can also apply monitoring policies to individual assets, and edit them to meet the requirements of your environment. The Tuning Monitoring Rules and Policies How-To walks you through all of these procedures.

    Read the article

  • New Year, New Position, New Opportunity and Adventures!

    - by D'Arcy Lussier
    2010 was an incredible year of change for me. On the personal side, we celebrated our youngest daughter’s first birthday and welcomed my oldest daughter into our family (both my girls are adopted). Professionally, I put on the first ever Prairie Developer Conference, the 3rd annual Winnipeg Code Camp, and the Software Development and Evolution Conference, continued to build the technology community in Winnipeg, was awarded a Microsoft MVP award for the 4th year, created a certification program to help my employer, Protegra, attain Microsoft Partner status, and had great project work throughout the year. So now it's 2011, and I’m looking ahead to new challenges and opportunities with a new employer. Starting in mid February I’ll be the Microsoft Practice Lead with Online Business Systems, a Microsoft partner here in Winnipeg! I’m very excited about working with such great people and helping to continue delivering the quality solutions and consulting that the organization has become known for. 2010 was great, but 2011 is shaping up to be a banner year both personally and professionally!

    Read the article

  • 7 Ubuntu File Manager Features You May Not Have Noticed

    - by Chris Hoffman
    The Nautilus file manager included with Ubuntu has some useful features you may not notice unless you go looking for them. You can create saved searches, mount remote file systems, use tabs in your file manager, and more. Ubuntu’s file manager also includes built-in support for sharing folders on your local network – the Sharing Options dialog creates and configures network shares compatible with both Linux and Windows machines.

    Read the article

  • How is technical debt best measured? What metric(s) are most useful?

    - by throp
    If I wanted to help a customer understand the degree of technical debt in his application, what would be the best metric to use? I've stumbled across Erik Doernenburg's code toxicity, and also Sonar's technical debt plugin, but was wondering what others exist. Ideally, I'd like to say "system A has a score of 100 whereas system B has a score of 50, so system A will most likely be more difficult to maintain than system B". Obviously, I understand that boiling down complex concepts like "technical debt" or "maintainability" into a single number might be misleading or inaccurate (in some cases); however, I need a simple way to convey to a customer (who is not hands-on in the code) roughly how much technical debt is built into their system (relative to other systems), with the goal of building a case for refactoring/unit tests/etc. Again, I'm looking for a single number/graph/visualization, and not a comprehensive list of violations (e.g. CheckStyle, PMD, etc.).
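
    For illustration, here is a rough sketch of the kind of single-number roll-up such tools compute: take per-category violation counts from any static analyzer, weight and sum them, and normalize by code size so systems of different sizes stay comparable. The categories and weights below are invented for the example, not taken from Sonar or from Doernenburg's toxicity charts.

        # Illustrative single-number "debt score": a weighted sum of
        # static-analysis findings per 1,000 lines of code. The categories
        # and weights are made up for this sketch.
        WEIGHTS = {"duplication": 3.0, "complexity": 2.0, "style": 0.5}

        def debt_score(findings, total_loc):
            # findings: dict mapping violation category -> count
            raw = sum(WEIGHTS.get(category, 1.0) * count
                      for category, count in findings.items())
            return 1000.0 * raw / max(total_loc, 1)

        system_a = debt_score({"duplication": 120, "complexity": 340, "style": 900}, 50000)
        system_b = debt_score({"duplication": 40, "complexity": 100, "style": 450}, 48000)
        print("System A: %.1f, System B: %.1f" % (system_a, system_b))

    Whatever the weighting, the resulting number only supports a comparison when the same analyzer, rule set and weights are applied to both systems.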

    Read the article

  • Oracle + Sun Product Strategy Webcast Series

    - by Paulo Folgado
    The Oracle + Sun Product Strategy Webcast series is composed of informative, on-demand sessions that offer strategies for Sun's major product lines related to the company combination, explain how Oracle will deliver more innovation to our customers, and outline our approach to protecting customers' investments. Ranging from 5 to 27 minutes each, the Webcasts cover the strategies for hardware, systems, software, solutions, and partners.

    In addition, Judson Althoff, SVP, Worldwide Alliances and Channels, Oracle, followed up the Webcast series with a video FAQ to help answer the following top partner questions about the Oracle + Sun combination and the OPN Specialized program:

    - What is the impact the overall combined company will have on the partners?
    - What are Oracle's plans for selling direct and what is the impact to partners?
    - How will Sun partners integrate into OPN Specialized?
    - As a Sun partner, am I automatically migrated into OPN Specialized?
    - Will Oracle continue to partner with other hardware vendors?
    - How will Oracle map existing Sun investments and certifications into OPN Specialized?
    - As a Sun partner new to Oracle, where should I be placing my focus?
    - What can partners expect to see relative to Exadata V2?
    - How do content delivery platforms (CDPs) fit into the Oracle framework?
    - How do existing Sun Partners place orders?

    Read the article

  • Oracle OpenWorld Update -- General Session: Oracle Fusion Middleware Strategies Driving Business Innovation

    - by Ruma Sanyal
    Today we kick it off with a fantastic general session focused on Fusion Middleware by Hasan Rizvi. Oracle Fusion Middleware is the leading business innovation platform for the enterprise and the cloud. Innovative businesses today are utilizing new platform technologies for their enterprise applications—embracing social, mobile, and cloud technologies. Convergence of these three technologies opens the door for business innovation—changing how customers interact, employees collaborate, and IT manages services. Successful adoption requires a comprehensive middleware platform that delivers secure multichannel user experiences, integrates back-end systems, and supports flexible deployment. In this general session, hear from Hasan Rizvi and many of our customers how they leverage new innovations in their applications and achieve their business innovation goals with Oracle Fusion Middleware. For more information about this and other Fusion Middleware sessions, review the Oracle Fusion Middleware Focus On document. Details: Tuesday, Oct 2, 10:15 AM - 11:15 AM - Moscone North - Hall D

    Read the article

  • Battling Emacs Pinky?

    - by haziz
    My problem is not so much Emacs pinky as having to work with multiple machines, across three operating systems, both desktop and laptop, with differing keyboard layouts and different locations for the Ctrl and Alt/Meta keys, so I often have to pause and think about where the Ctrl key is on this machine. How do you deal with varying keyboard layouts between Mac keyboards (mostly the laptops) and PC keyboards (mostly 101-key in my case, yes, the original PC keyboard)? I have turned the Caps Lock key into a Ctrl key (losing the Caps Lock function completely rather than swapping it with Ctrl) on most of them, but I still find myself hunting for the original Ctrl-labeled key most of the time. How do you deal with this keyboard confusion? Suggestions, ideas and feedback welcome.

    Read the article

  • Best Practices For Database Consolidation On Exadata - New Whitepapers

    - by Javier Puerta
    Best Practices For Database Consolidation On Exadata Database Machine (Nov. 2011)
    Consolidation can minimize idle resources, maximize efficiency, and lower costs when you host multiple schemas, applications or databases on a target system. Consolidation is a core enabler for deploying Oracle database on public and private clouds. This paper provides the Exadata Database Machine (Exadata) consolidation best practices to set up and manage systems and applications for maximum stability and availability. Download here.

    Oracle Exadata Database Machine Consolidation: Segregating Databases and Roles (Sep. 2011)
    This paper is focused on the aspects of segregating databases from each other in a platform consolidation environment on an Oracle Exadata Database Machine. Platform consolidation is the consolidation of multiple databases on to a single Oracle Exadata Database Machine. When multiple databases are consolidated on a single Database Machine, it may be necessary to isolate certain database components or functions in order to meet business requirements and provide best practices for a secure consolidation. In this paper we outline the use of Oracle Exadata database-scoped security to securely separate database management and provide a detailed case study that illustrates the best practices. Download here.

    Read the article

  • ORACLE RIGHTNOW DYNAMIC AGENT DESKTOP CLOUD SERVICE - Putting the Dynamite into Dynamic Agent Desktop

    - by Andreea Vaduva
    There’s a mountain of evidence to prove that a great contact centre experience results in happy, profitable and loyal customers. The very best contact centres are those with high first contact resolution, customer satisfaction and agent productivity. But how many companies really believe they are the best? And how many believe that they can be? We know that with the right tools, companies can aspire to greatness – and achieve it. Core to this is ensuring their agents have the best tools that give them the right information at the right time, so they can focus on the customer and provide a personalised, professional and efficient service.

    Today there are multiple channels through which customers can communicate with you: phone, web, chat and social, to name a few. But regardless of how they communicate, customers expect a seamless, quality experience. Most contact centre agents need to switch between lots of different systems to locate the right information. This hampers their productivity, frustrates both the agent and the customer, and increases call handling times. With this in mind, Oracle RightNow has designed and refined a suite of add-ins to optimize the Agent Desktop. Each is designed to simplify and adapt the agent experience for any given situation and unify the customer experience across your media channels. Let’s take a brief look at some of the most useful tools available and see how they make a difference.

    Contextual Workspaces: The screen where agents do their job. Agents don’t want to be slowed down by busy screens, scrolling through endless tabs or links to find what they’re looking for. They want quick, accurate and easy. Contextual Workspaces are fully configurable and, through workspace rules, apply if-then-else logic to display only the information the agent needs for the issue at hand. Assigned at the Profile level, different levels of agent, from a novice to the most experienced, get a screen that is relevant to their role and responsibilities and ensures their job is done quickly and efficiently the first time round.

    Agent Scripting: Sometimes, agents need to deliver difficult or sensitive messages while maximising the opportunity to cross-sell and up-sell. After all, contact centres are now increasingly viewed as revenue generators. Containing sophisticated branching logic, scripting helps agents to capture the right level of information and guides the agent step by step, ensuring no mistakes, inconsistencies or missed opportunities.

    Guided Assistance: This is typically used to solve common troubleshooting issues, displaying a series of question and answer sets in a decision-tree structure. This means agents avoid having to bookmark favourites or rely on written notes. Agents find particular value in these guides to quickly craft chat and email responses. What’s more, by publishing guides in answers on support pages, customers can resolve issues themselves, without needing to contact your agents. And because it can also accelerate agent ramp-up time, it ensures that even novice agents can solve customer problems like an expert.

    Desktop Workflow: Take a step back and look at the full customer interaction of your agents. It probably spans multiple systems and multiple tasks. With Desktop Workflows you can design workflows that span the full customer interaction from start to finish. As sequences of decisions and actions, workflows are unique in that they can create or modify different records and provide automation behind the scenes. This means your agents can save time and provide better quality of service by having the tools they need and the relevant information as required. And doing this boosts satisfaction among your customers, your agents and you – so win, win, win!

    I have highlighted above some of the tools which can be used to optimise the desktop; however, this is by no means an exhaustive list. In approaching your design, it’s important to understand why and how your customers contact you in the first place. Once you have this list of “whys” and “hows”, you can design effective policies and procedures to handle each category of problem, and then implement the right agent desktop user interface to support them. This will avoid duplication and wasted effort.

    Five top tips to take away:
    - Start by working out “why” and “how” customers are contacting you.
    - Implement a clean and relevant agent desktop to support your agents.
    - If your workspaces are getting complicated, consider using Desktop Workflow to streamline the interaction.
    - Enhance your knowledgebase with Guides. Agents can access them proactively, and they can be published on your web pages for customers to help themselves.
    - Script any complex, critical or sensitive interactions to ensure consistency and accuracy.

    Desktop optimization is an ongoing process, so continue to monitor and incorporate feedback from your agents and your customers to keep your contact centre successful.

    Want to learn more? Having attended the 3-day Oracle RightNow Customer Service Administration class, your next step is the 2-day Oracle RightNow Customer Portal Design and Dynamic Agent Desktop Administration class. Here you’ll learn not only how to leverage the Agent Desktop tools but also how to optimise your self-service pages to enhance your customers’ web experience.

    Useful resources: Review the Best Practice Guide. Review the tune-up guide.

    About the Author: Angela Chandler joined Oracle University as a Senior Instructor through the RightNow Customer Experience acquisition. Her other areas of expertise include Business Intelligence and Knowledge Management. She currently delivers the following Oracle RightNow courses in the classroom and as a Live Virtual Class: RightNow Customer Service Administration (3 days), RightNow Customer Portal Design and Dynamic Agent Desktop Administration (2 days), RightNow Analytics (2 days), and RightNow Chat Cloud Service Administration (2 days).

    Read the article

  • What can I do with the Twitter API?

    - by aditya menon
    I've tried googling for this but could not find concrete developer examples. When building mundane daily web applications like classified websites, job boards or intranet-targeted document management systems, how can the Twitter API help me do more? May I please have some examples of how developers have used Twitter to make their apps better? Other than the obvious use for promotional and search engine optimization purposes (yay, there's a new job post on our site), what can I do with it? Also, am I late to the party? I hear a lot of upset on the internet about how Twitter is apparently slowly betraying developers (I don't understand the specifics), so should I even look at the system or consider alternatives?
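
    As one concrete sketch of the promotional case mentioned above: a job board that tweets each new posting. This is a hedged example, assuming Twitter's REST API v1.1 statuses/update endpoint and the requests_oauthlib package; the credential values and the announce_job helper are placeholders, not part of any real site.

        import requests
        from requests_oauthlib import OAuth1

        # Placeholder credentials for a registered Twitter application.
        auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET",
                      "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

        def announce_job(title, url):
            # Post a status update announcing a new job listing.
            status = "New job posted: %s %s" % (title, url)
            resp = requests.post(
                "https://api.twitter.com/1.1/statuses/update.json",
                data={"status": status},
                auth=auth,
            )
            resp.raise_for_status()
            return resp.json()

        announce_job("Senior Web Developer", "http://example.com/jobs/123")

    Beyond announcements, the same API can also pull data back into the application, for instance surfacing search results or mentions of a listing next to the listing itself.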

    Read the article

  • AutoVue for Mobile Asset Management

    - by Warren Baird
    Our partner VentureForth is working with us on a solution to use AutoVue in their mobile asset management system. The idea is to allow workers in the field to easily access visual information they might need while doing their work - and to annotate it. For instance, the worker can take a photo of something unexpected they found on-site and indicate on the drawing what they found, so there's a record of it for the future. VentureForth's vMobile suite allows workers to take information from EBS eAM and PeopleSoft CAM (as well as multiple other systems) offline with them while working in the field, and with the AutoVue integration the workers can then view and annotate CAD drawings or photographs and have that information saved back into the asset management system. VentureForth has posted a video of this in action on YouTube: http://youtu.be/X5Lv_X2v7uc - check it out and let us know what your thoughts are! Learn more about VentureForth at http://ventureforth.com/

    Read the article

  • Announcement: Oracle Solaris Cluster 4.1 Availability!

    - by uwes
    On 26th of October Oracle announced the availability of Oracle Solaris Cluster 4.1. Highlights include:

    - New Oracle Solaris 10 Zone Clusters: customers can now consolidate mission-critical Oracle Solaris 10 applications on Oracle Solaris 11 virtualized systems in a virtual cluster
    - Expanded disaster recovery operations: Oracle Solaris Cluster now offers managed switchover and disaster-recovery takeover of applications and data using ZFS Storage Appliance replication services in a multi-site, multi-cluster configuration
    - Faster application recovery with improved storage failure detection and resource dependencies management
    - New labeled security environment for mission-critical deployments in Oracle Solaris Zone Clusters with Oracle Solaris 11 Trusted Extensions

    Learn more about Oracle Solaris Cluster 4.1: What's New in Oracle Solaris Cluster 4.1, the Oracle Solaris Cluster 4.1 FAQ, the Oracle.com Oracle Solaris Cluster page, and the Oracle Technology Network Oracle Solaris Cluster page.

    Resources for downloading: Oracle Solaris Cluster 4.1 download, or order a media kit. Existing Oracle Solaris Cluster 4.0 customers can quickly and simply update by using the network-based repository. Note: this repository requires keys and certificates, which can be obtained here.

    Read the article

  • IBM "per core" comparisons for SPECjEnterprise2010

    - by jhenning
    I recently stumbled upon a blog entry from Roman Kharkovski (an IBM employee) comparing some SPECjEnterprise2010 results for IBM vs. Oracle. Mr. Kharkovski's blog claims that SPARC delivers half the transactions per core vs. POWER7. Prior to any argument, I should say that my predisposition is to like Mr. Kharkovski, because he says that his blog is intended to be factual; that the intent is to try to avoid marketing hype and FUD tactics; and mostly because he features a picture of himself wearing a bike helmet (me too). Therefore, in a spirit of technical argument, rather than FUD fight, there are a few areas in his comparison that should be discussed.

    Scaling is not free
    For any benchmark, if a small system scores 13k using quantity R1 of some resource, and a big system scores 57k using quantity R2 of that resource, then, sure, it's tempting to divide: is 13k/R1 > 57k/R2? It is tempting, but not necessarily educational. The problem is that scaling is not free. Building big systems is harder than building small systems. Scoring 13k/R1 on a little system provides no guarantee whatsoever that one can sustain that ratio when attempting to handle more than 4 times as many users.

    Choosing the denominator radically changes the picture
    When ratios are used, one can vastly manipulate appearances by the choice of denominator. In this case, lots of choices are available for the resource to be compared (R1 and R2 above). IBM chooses to put cores in the denominator. Mr. Kharkovski provides some reasons for that choice in his blog entry. And yet, it should be noted that the very concept of a core is: arbitrary (not necessarily comparable across vendors); fluid (modern chips shift chip resources in response to load); and invisible (unless you have a microscope, you can't see it). By contrast, one can actually see processor chips with the naked eye, and they are a bit easier to count. If we put chips in the denominator instead of cores, we get 13161.07 EjOPS / 4 chips = 3290 EjOPS per chip for IBM vs. 57422.17 EjOPS / 16 chips = 3588 EjOPS per chip for Oracle. The choice of denominator makes all the difference in the appearance. Speaking for myself, dividing by chips just seems to make more sense, because: I can see chips and count them; I can accurately compare the number of chips in my system to the count in some other vendor's system; and the probability of being able to continue to accurately count them over the next 10 years of microprocessor development seems higher than the probability of being able to accurately and comparably count "cores".

    SPEC Fair Use requirements
    Speaking as an individual, not speaking for SPEC and not speaking for my employer, I wonder whether Mr. Kharkovski's blog article, taken as a whole, meets the requirements of the SPEC Fair Use rule, www.spec.org/fairuse.html section I.D.2. For example, Mr. Kharkovski's footnote (1) begins: "Results from http://www.spec.org as of 04/04/2013 Oracle SUN SPARC T5-8 449 EjOPS/core SPECjEnterprise2010 (Oracle's WLS best SPECjEnterprise2010 EjOPS/core result on SPARC). IBM Power730 823 EjOPS/core (World Record SPECjEnterprise2010 EJOPS/core result)". The questionable tactic, from a Fair Use point of view, is that there is no such metric at the designated location. At www.spec.org, you can find the SPEC metric 57422.17 SPECjEnterprise2010 EjOPS for Oracle, and you can also find the SPEC metric 13161.07 SPECjEnterprise2010 EjOPS for IBM. Despite the implication of the footnote, you will not find any mention of 449 nor anything that says 823. SPEC says that you can, under its fair use rule, derive your own values; but it emphasizes: "The context must not give the appearance that SPEC has created or endorsed the derived value."

    Substantiation and transparency
    Although SPEC disclaims responsibility for non-SPEC information (section I.E), it says that non-SPEC data and methods should be accurate, should be explained, and should be substantiated. Unfortunately, it is difficult or impossible for the reader to independently verify the pricing: Were like units compared to like (e.g. list price to list price)? Were all components (hw, sw, support) included? Were all fees included? Note that when tpc.org shows IBM pricing, there are often items such as "PROCESSOR ACTIVATION" and "MEMORY ACTIVATION". Without the transparency of a detailed breakdown, the pricing claims are questionable.

    T5 claim for "Fastest Processor"
    Mr. Kharkovski several times questions Oracle's claim for fastest processor, writing: "You see, when you publish industry benchmarks, people may actually compare your results to other vendor's results." Well, as we performance people always say, "it depends". If you believe in performance-per-core as the primary way of looking at the world, then yes, the POWER7+ is impressive, spending its chip resources to support up to 32 threads (8 cores x 4 threads). Or, it just might be useful to consider performance-per-chip. Each SPARC T5 chip allows 128 hardware threads to be simultaneously executing (16 cores x 8 threads). The industry standard benchmark that focuses specifically on processor chip performance is SPEC CPU2006. For this very well known and popular benchmark, SPARC T5 provides better performance than both POWER7 and POWER7+: for 1 chip vs. 1 chip, for 8 chips vs. 8 chips, for integer (SPECint_rate2006) and floating point (SPECfp_rate2006), for peak tuning and for base tuning. For example, at the 8-chip level, integer throughput (SPECint_rate2006) is 3750 for SPARC vs. 2170 for POWER7+. You can find the details at the March 2013 BestPerf CPU2006 page.

    SPEC is a trademark of the Standard Performance Evaluation Corporation, www.spec.org. The two specific results quoted for SPECjEnterprise2010 are posted at the URLs linked from the discussion. Results for SPEC CPU2006 were verified at spec.org 1 July 2013, and can be rechecked here.
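
    To make the denominator argument concrete, here is a small sketch that recomputes both ratios from the published EjOPS scores quoted above. The chip counts come from this entry; the core counts are inferred from the footnote's per-core figures (823 and 449 EjOPS/core), so treat them as assumptions.

        # Published SPECjEnterprise2010 scores quoted above, with chip
        # counts from this entry and core counts implied by the footnote's
        # per-core figures.
        systems = {
            "IBM Power730":      {"ejops": 13161.07, "chips": 4,  "cores": 16},
            "Oracle SPARC T5-8": {"ejops": 57422.17, "chips": 16, "cores": 128},
        }

        for name, s in systems.items():
            print("%s: %.0f EjOPS/core, %.0f EjOPS/chip"
                  % (name, s["ejops"] / s["cores"], s["ejops"] / s["chips"]))

        # Per-core flatters the smaller system; per-chip reverses the
        # ranking. The denominator, not the raw scores, drives the conclusion.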

    Read the article

  • Is there a very simple XML Editor/Viewer?

    - by Paul Spangle
    Part of our support job is to look at XML documents sent to our systems by our clients that have failed to load correctly. I do this by pasting the file into Visual Studio as an XML file; VS colours it and adds line breaks, indenting, etc., and it's then usually quite simple to spot the error, manually fix it and resubmit the document. Do I really have to use Visual Studio to do this? OK, it does the job quite well, but I'd have thought that something like Notepad++ would do the job just as well with a fraction of the machine resource usage. Is there a plugin for Notepad++ or another bit of freeware that can make XML look pretty and let me make simple edits?
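
    Short of a dedicated editor, even a small script covers the "make it pretty" half of the job. A minimal sketch using only the Python standard library; the file name is a placeholder.

        import xml.dom.minidom

        # Re-serialize the document with indentation and line breaks so a
        # stray character or missing tag is easy to spot.
        pretty = xml.dom.minidom.parse("failed_document.xml").toprettyxml(indent="  ")
        print(pretty)

        # If the document is not well-formed, parse() raises an ExpatError
        # whose message includes the offending line and column - often all
        # you need to fix the file and resubmit it.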

    Read the article

  • Java EE/GlassFish Adoption Story by Kerry Wilson/Vanderbilt University

    - by reza_rahman
    Kerry Wilson is a Software Engineer at the Vanderbilt University Medical Center. He served in a consultant role to design a lightweight systems integration solution for the next generation Foundations Recovery Network using GlassFish, Java EE 6, JPA, @Scheduled EJBs, CDI, JAX-RS and JSF. He lives in Nashville, TN, where he helps organize the Nashville Java User Group. Kerry shared his Java EE/GlassFish adoption story at the JavaOne 2013 Sunday GlassFish community event - check out the video below. Here is the slide deck for his talk: GlassFish Story by Kerry Wilson/Vanderbilt University Medical Center. Kerry outlined some of the details of the implementation and emphasized the fact that Java EE can be a great solution for applications that are considered small/lightweight. He mentioned the productivity gains through the modern Java EE programming model centered on annotations, POJOs and zero-configuration, comparing it with competing frameworks that aim towards similar productivity for lightweight applications. Kerry also stressed the quality of the excellent NetBeans integration with GlassFish and the need for community self-support in free, non-commercial open source projects like GlassFish.

    Read the article

  • April 18: Learn about Oracle Hyperion Data Relationship Management

    - by Theresa Hickman
    Do you have multiple charts of accounts on different application instances? Would you like an easy way to synchronize your charts of accounts across instances? If you answered yes, then please join us in an informal reference call with Johnson Controls, who were able to synchronize their charts of accounts across 5 HFM (Hyperion Financial Management) instances using Hyperion Data Relationship Management (DRM). Johnson Controls is a global technology and industrial leader with 162,000 employees, serving customers in more than 150 countries. This call will include a brief overview of Johnson Controls and their solution, followed by a candid discussion and an open question and answer session.

    When: April 18, 2012
    Time: 8:00 am PST
    Duration: 1 hour
    Speaker: Raymond Chontos, HFM Application Manager, Global Financial Systems

    Click here to register.

    Read the article

  • BPM 11.1.1.5 for Apps: BPM for EBS Demo available

    - by JuergenKress
    For access to the Oracle demo systems please visit OPN and talk to your Partner Expert.

    Demo highlights:
    - BPM integration with E-Business Suite
    - BPM Process Spaces, providing role-based dashboards and monitoring of EBS processes
    - Automated workflow generation and enforcement of business rules
    - Seamless integration with the E-Business Suite iExpense module using SOA
    - Worklist approvals via a mobile device

    See also: Demo Architecture, Demo Collateral, OFM Demos Corner, DSS Offerings, Scheduling Demos on DSS, and DSS Support.

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Sterci today announced it has earned Oracle Exadata and Oracle Exalogic Optimized status

    - by Javier Puerta
    Sterci has announced it has earned Oracle Exadata Optimized and Oracle Exalogic Optimized status (read the full announcement here). "GTExchange from Sterci is a high-performance multi-network and multi-standard financial messaging solution that provides a comprehensive connection hub to SWIFT and other networks, as well as handling internal message transfer. It supports high volume and complex message flows from multiple counterparties, delivering control, transparency and proven efficiencies. By achieving Oracle Exadata Optimized and Oracle Exalogic Optimized status, Sterci has shown that its GTExchange solution has achieved 3.8x greater throughput (nearly 4 million messages an hour) than any previous tests on comparable x86 systems."

    Read the article

  • Resources for Test Driven Development in Web Applications?

    - by HorusKol
    I would like to try and implement some TDD in our web applications to reduce regressions and improve release quality, but I'm not convinced of how well automated testing can perform with something as fluffy as web applications. I've read about and tried TDD and unit testing, but the examples are 'solid' and rather simple functionalities like currency converters, and so on. Are there any resources that can help with unit testing content management and publication systems? How about unit testing a shopping cart/store (physical and online products)? AJAX? Googling for "Web Test Driven Development" just gets me old articles from several years ago, either covering the same examples of calculator-like functions or discussions about why TDD is better than anything else (without any examples).
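
    For the shopping-cart case specifically, most of what makes a cart a cart (quantities, totals, discounts) is plain domain logic that never touches HTTP, so it unit-tests exactly like the 'solid' examples. A minimal sketch with Python's unittest; the Cart class here is hypothetical, not from any framework.

        import unittest

        class Cart:
            # Hypothetical cart: pure domain logic, no HTTP or database.
            def __init__(self):
                self.items = {}  # sku -> (unit_price_cents, quantity)

            def add(self, sku, unit_price_cents, quantity=1):
                _, qty = self.items.get(sku, (unit_price_cents, 0))
                self.items[sku] = (unit_price_cents, qty + quantity)

            def total_cents(self):
                return sum(price * qty for price, qty in self.items.values())

        class CartTest(unittest.TestCase):
            def test_empty_cart_totals_zero(self):
                self.assertEqual(Cart().total_cents(), 0)

            def test_adding_same_sku_twice_accumulates_quantity(self):
                cart = Cart()
                cart.add("BOOK-1", 1999)
                cart.add("BOOK-1", 1999)
                self.assertEqual(cart.total_cents(), 3998)

        if __name__ == "__main__":
            unittest.main()

    The genuinely fluffy layers (templates, AJAX endpoints) then need only a thinner set of integration or browser tests on top, rather than carrying all the coverage themselves.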

    Read the article

  • OS Analytics Post and Discussion

    - by Owen Allen
    Eran Steiner has written an interesting piece over on the Enterprise Manager blog about the OS Analytics feature of Ops Center. OS Analytics gives you a huge amount of information about the characteristics of managed operating systems and lets you track changes to these characteristics over time. Take a look; it's a useful feature. The OS Analytics feature is also the subject of the community call this week (Eran is leading that one too). It's at 11 am EST. To join the conference: Go to https://oracleconferencing.webex.com/oracleconferencing/j.php?ED=209833067&UID=1512092402&PW=NY2JhMmFjMmFh&RT=MiMxMQ%3D%3D If requested, enter your name and email address. If a password is required, enter the meeting password: oracle123 Click Join. To dial into the conference, dial 1-866-682-4770 (US/Canada) or go here for the numbers in other countries. The conference code is 7629343# and the security code is 7777#.

    Read the article

  • Using Hadoop (HDInsight) with Microsoft - Two (OK, Three) Options

    - by BuckWoody
    Microsoft has many tools for “Big Data”. In fact, you need many tools – there’s no product called “Big Data Solution” in a shrink-wrapped box – if you find one, you probably shouldn’t buy it. It’s tempting to want a single tool that handles everything in a problem domain, but with large, complex data, that isn’t a reality. You’ll mix and match several systems, open and closed source, to solve a given problem. But there are tools that help with handling data at large, complex scales. Normally the best way to do this is to break up the data into parts, and then put the calculation engines for that chunk of data right on the node where the data is stored. These systems are in a family called “Distributed File and Compute”. Microsoft has a couple of these, including the High Performance Computing edition of Windows Server. Recently we partnered with Hortonworks to bring the Apache Foundation’s release of Hadoop to Windows. And as it turns out, there are actually two (technically three) ways you can use it. (There’s a more detailed set of information here: http://www.microsoft.com/sqlserver/en/us/solutions-technologies/business-intelligence/big-data.aspx; I’ll cover the options at a general level below.)

    First Option: Windows Azure HDInsight Service
    Your first option is that you can simply log on to a Hadoop control node and begin to run Pig or Hive statements against data that you have stored in Windows Azure. There’s nothing to set up (although you can configure things where needed), and you can send the commands, get the output of the job(s), and stop using the service when you are done – and repeat the process later if you wish. (There are also connectors to run jobs from Microsoft Excel, but that’s another post.) This option is useful when you have a periodic burst of work for a Hadoop workload, or the data collection has been happening into Windows Azure storage anyway. That might be from a web application, the logs from a web application, telemetrics (remote sensor input), and other modes of constant collection. You can read more about this option here: http://blogs.msdn.com/b/windowsazure/archive/2012/10/24/getting-started-with-windows-azure-hdinsight-service.aspx

    Second Option: Microsoft HDInsight Server
    Your second option is to use the Hadoop distribution for on-premises Windows called Microsoft HDInsight Server. You set up the Name Node(s), Job Tracker(s), and Data Node(s), among other components, and you have control over the entire ecostructure. This option is useful if you want to have complete control over the system, leave it running all the time, or you have a huge quantity of data that you have to bulk-load constantly – something that isn’t going to be practical with a network transfer or disk-mailing scheme. You can read more about this option here: http://www.microsoft.com/sqlserver/en/us/solutions-technologies/business-intelligence/big-data.aspx

    Third Option (unsupported): Installation on Windows Azure Virtual Machines
    Although unsupported, you could simply use a Windows Azure Virtual Machine (we support both Windows and Linux servers) and install Hadoop yourself – it’s open-source, so there’s nothing preventing you from doing that. Aside from being unsupported, there are other issues you’ll run into with this approach – primarily involving performance and the amount of configuration you’ll need to do to access the data nodes properly. But for a single-node installation (where all components run on one system) such as learning, demos, training and the like, this isn’t a bad option. Did I mention that’s unsupported? :) You can learn more about Windows Azure Virtual Machines here: http://www.windowsazure.com/en-us/home/scenarios/virtual-machines/ And more about Hadoop and the installation/configuration (on Linux) here: http://en.wikipedia.org/wiki/Apache_Hadoop And more about the HDInsight installation here: http://www.microsoft.com/web/gallery/install.aspx?appid=HDINSIGHT-PREVIEW

    Choosing the right option
    Since you have two or three routes you can go, the best thing to do is evaluate the need you have, and place the workload where it makes the most sense. My suggestion is to install HDInsight Server locally on a test system and play around with it. Read up on the best ways to use Hadoop for a given workload, understand the parts, write a little Pig and Hive, and get your feet wet. Then sign up for a test account on HDInsight Service, and see how that leverages what you know. If you're a true tinkerer, go ahead and try the VM route as well. Oh – there’s another great reference on Windows Azure HDInsight that just came out, here: http://blogs.msdn.com/b/brunoterkaly/archive/2012/11/16/hadoop-on-azure-introduction.aspx
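
    "Write a little Pig and Hive" can be as small as one query. Here is a hedged sketch of driving a HiveQL statement from Python on a test node, assuming the hive command-line client is on the PATH and a hypothetical weblogs table already exists in the metastore:

        import subprocess

        # One HiveQL statement against a hypothetical 'weblogs' table:
        # count page hits per HTTP status code.
        query = (
            "SELECT status, COUNT(*) AS hits "
            "FROM weblogs "
            "GROUP BY status "
            "ORDER BY hits DESC LIMIT 10"
        )

        # 'hive -e' runs a single statement and exits; the same query text
        # should work unchanged against the hosted HDInsight Service.
        result = subprocess.check_output(["hive", "-e", query])
        print(result.decode("utf-8"))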

    Read the article

  • Accenture Launches Smart Grid Data Management Platform

    - by caroline.yu
    Accenture announced today it has launched the Accenture Intelligent Network Data Enterprise (INDE), a data management platform to help utilities design, deploy and manage smart grids. INDE's functionality can be enabled by an array of third party technologies. In addition, Accenture plans to offer utilities the option of implementing the INDE solution based on a pre-configured suite of Oracle technologies. The Oracle-based version of INDE will accelerate the design of smart grids and help reduce the costs and risks associated with smart grid implementation. Stephan Scholl, Senior Vice President and General Manager of Oracle Utilities said, "Oracle and Accenture share a common vision of how the smart grid will enable more efficient energy choices for utilities and their customers. Our combined expertise in delivering mission-critical smart grid applications, security, data management and systems integration can help accelerate utilities toward a more intelligent network now and as future needs arise." For the full press release, click here.

    Read the article
