Search Results

Search found 11913 results on 477 pages for 'fail fast fail early'.


  • Announcing: Oracle Enterprise Manager 12c Delivers Advanced Self-Service Automation for Oracle Database 12c Multitenant

    - by Scott McNeil
    New Self-Service Driven Provisioning of Pluggable Databases

    Today Oracle announced new capabilities that support managing the full lifecycle of pluggable database as a service in Oracle Enterprise Manager 12c Release 3 (12.1.0.3). This latest release builds on the existing capabilities to provide advanced automation for deploying database as a service using the Oracle Database 12c Multitenant option. It takes it one step further by offering pluggable database as a service through the Oracle Enterprise Manager 12c self-service portal, providing customers with fast provisioning of database cloud services with minimal time and effort. This is a significant addition to Oracle Enterprise Manager 12c's existing portfolio of cloud services, which includes infrastructure as a service, database as a service, testing as a service, and Java platform as a service. The solution provides a self-service mechanism to provision pluggable databases, allowing users to request and access databases on demand. The self-service operations are also enabled through REST APIs, allowing customers to integrate with third-party automation systems or their custom enterprise portals (a rough client sketch follows at the end of this entry).

    Benefits:

    • Self-service provisioning allows rapid access to pluggable database as a service for hosting or certifying applications on Oracle Database 12c
    • Self-service driven migration to pluggable database as a service, in order to migrate a pre-Oracle Database 12c database to a pluggable database as a service model and test the consolidation strategy
    • Single service catalog for all approved pluggable database as a service configurations, which helps customers achieve standardization while catering to all applications and users in the enterprise
    • Resource guarantee via Database Resource Manager (and IORM on Oracle Exadata) that enables deployment of mixed workloads in a shared environment
    • Quota, role-based access, and policy-based management that enforce governance and reduce administrative overhead
    • Chargeback or showback, which improves metering and accountability for services consumed by each pluggable database
    • Comprehensive REST APIs that support integration with ticketing or change management systems, and/or with other self-service portals
    • Minimal administrative and maintenance overhead through self-managing automation that allows for intelligent placement of pluggable databases

    To understand how pluggable database as a service works, watch this quick demo:

    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Download the Oracle Enterprise Manager Cloud Control 12c Mobile app
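    As a rough illustration of the kind of integration those REST APIs enable, here is a minimal C# sketch that submits a provisioning request from a custom portal. The URL path, JSON fields, and credentials are hypothetical placeholders, not the documented EM 12c API surface; consult the Enterprise Manager Cloud Administration documentation for the actual resource model.

        // Hypothetical sketch only: the endpoint, JSON payload, and credentials
        // below are illustrative placeholders, not the documented EM 12c API.
        using System;
        using System.IO;
        using System.Net;
        using System.Text;

        class PdbSelfServiceSketch
        {
            static void Main()
            {
                var request = (HttpWebRequest)WebRequest.Create(
                    "https://em.example.com/em/cloud/dbaas/pluggabledbservice"); // placeholder URL
                request.Method = "POST";
                request.ContentType = "application/json";
                request.Credentials = new NetworkCredential("ssa_user", "password"); // placeholder

                byte[] body = Encoding.UTF8.GetBytes(
                    "{ \"name\": \"HRPDB1\", \"zone\": \"SalesZone\" }"); // placeholder fields
                using (Stream s = request.GetRequestStream())
                    s.Write(body, 0, body.Length);

                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                    Console.WriteLine(reader.ReadToEnd()); // provisioning request accepted/queued
            }
        }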

    Read the article

  • Web application stopped behaving normally after migration from dedicated servers to EC2 servers

    - by sunny
    Web application stopped behaving normally after migration from dedicated servers to EC2 servers.

    Old dedicated server configuration:
    System type: 32-bit operating system
    RAM: 3.99 GB
    Processor: 1.86 GHz

    New servers in EC2:
    System type: 64-bit operating system
    RAM: 3.99 GB
    Processor: 2.73 GHz - 2.31 GHz

    Everything was working fine on our production server. But when we migrated our web application from the old servers to the new servers and transferred the entire network traffic to the new servers, the site suddenly started behaving abnormally. Sometimes it's super fast, sometimes slow, sometimes normal, sometimes super slow, and sometimes there is no response. All of the above happens at intervals of around 2-3 minutes, and this went on for 8-10 hours. A few differences between the old and new servers: the old servers are using IIS 6 and the new servers are using IIS 7.5. We are using exactly the same code on the old and new servers. The EC2 servers even have faster CPUs than the older servers, yet performance is still lower. I'm not sure how this happened. Please suggest your views...

    Read the article

  • null pointers vs. Null Object Pattern

    - by GlenH7
    Attribution: This grew out of a related P.SE question. My background is in C / C++, but I have worked a fair amount in Java and am currently coding C#. Because of my C background, checking passed and returned pointers is second nature, but I acknowledge it biases my point of view. I recently saw mention of the Null Object Pattern, where the idea is that an object is always returned. The normal case returns the expected, populated object, and the error case returns an empty object instead of a null pointer. The premise is that the calling function will always have some sort of object to access and can therefore avoid null-access memory violations. So what are the pros / cons of a null check versus using the Null Object Pattern? I can see cleaner calling code with the NOP, but I can also see where it would create hidden failures that don't otherwise get raised. I would rather have my application fail hard (aka an exception) while I'm developing it than have a silent mistake escape into the wild. Can't the Null Object Pattern have similar problems to not performing a null check? Many of the objects I have worked with hold objects or containers of their own. It seems like I would have to have a special case to guarantee all of the main object's containers had empty objects of their own. Seems like this could get ugly with multiple layers of nesting.
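    To make the trade-off concrete, here is a minimal C# sketch contrasting the two approaches. All type names are invented purely for illustration:

        // Minimal illustration of a null check vs. the Null Object Pattern.
        // The ICustomer/CustomerRepository types are invented for this example.
        using System;
        using System.Collections.Generic;

        public interface ICustomer
        {
            string Name { get; }
            IList<string> Orders { get; }
        }

        public class Customer : ICustomer
        {
            public string Name { get; private set; }
            public IList<string> Orders { get; private set; }
            public Customer(string name, IList<string> orders) { Name = name; Orders = orders; }
        }

        // The Null Object: always safe to traverse, but it silently "succeeds".
        public class NullCustomer : ICustomer
        {
            public string Name { get { return string.Empty; } }
            public IList<string> Orders { get { return new List<string>(); } }
        }

        public class CustomerRepository
        {
            private readonly Dictionary<string, ICustomer> _db = new Dictionary<string, ICustomer>();

            // Variant 1: caller must null check, or risk a NullReferenceException.
            public ICustomer FindOrNull(string id)
            {
                ICustomer c;
                return _db.TryGetValue(id, out c) ? c : null;
            }

            // Variant 2: caller always gets an object; a failed lookup is invisible.
            public ICustomer FindOrNullObject(string id)
            {
                ICustomer c;
                return _db.TryGetValue(id, out c) ? c : new NullCustomer();
            }
        }

    The sketch shows exactly the tension raised above: FindOrNull fails fast at the call site, while FindOrNullObject keeps calling code clean but lets a missing customer flow through as an empty one, including the empty Orders container, which is how the pattern addresses (or hides, depending on your view) the nested-containers concern.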

    Read the article

  • Difference between Detach/Attach and Restore/BackUp a DB

    - by SAMIR BHOGAYTA
    Transact-SQL BACKUP/RESTORE is the normal method for database backup and recovery. Databases can be backed up while online. The backup file size is usually smaller than the database files, since only used pages are backed up. Also, in the FULL or BULK_LOGGED recovery model, you can reduce potential data loss by performing transaction log backups. Detaching a database removes the database from SQL Server while leaving the physical database files intact. This allows you to rename or move the physical files and then re-attach. Although one could perform cold backups using this technique, detach/attach isn't really intended to be used as a backup/recovery process. It is commonly recommended that you use BACKUP/RESTORE for disaster recovery (DR) scenarios and for copying data from one location to another. But this is not absolute: for a very large database, the backup/restore process may take more time than you would like when moving it from one location to another. In that case, detaching/attaching the database is a better way, since you can attach a workable database very quickly. But you need to be aware that detaching a database takes it offline for a short time, and that detach/attach does not provide a DR function. For more information about detaching and attaching databases, you can refer to: Detaching and Attaching Databases http://technet.microsoft.com/en-us/library/ms190794.aspx
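    As a hedged illustration of the first approach, here is a minimal C# sketch that issues a full backup over ADO.NET while the database stays online. The server, database name, and backup path are placeholders, not values from this article:

        // Sketch: run a full T-SQL BACKUP from client code; the database stays online.
        // Connection string, database name, and backup path are placeholder values.
        using System;
        using System.Data.SqlClient;

        class BackupSketch
        {
            static void Main()
            {
                using (var conn = new SqlConnection("Server=.;Database=master;Integrated Security=true"))
                {
                    conn.Open();
                    var sql = @"BACKUP DATABASE [SalesDb] TO DISK = N'C:\Backups\SalesDb.bak' WITH INIT";
                    using (var cmd = new SqlCommand(sql, conn))
                    {
                        cmd.CommandTimeout = 0; // large databases can take a while
                        cmd.ExecuteNonQuery();
                    }
                    Console.WriteLine("Backup complete.");
                }
            }
        }

    Restoring is the mirror image (RESTORE DATABASE ... FROM DISK = ...), and unlike detach/attach it never requires taking the source database offline.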

    Read the article

  • Oracle University New Courses (Week 35)

    - by swalker
    Oracle University released the following new (versions of) courses recently:

    Fusion Middleware:
    • Oracle Directory Services 11g: Administration (5 days)
    • Oracle SOA Suite 11g: Essential Concepts (Training on Demand)

    e-Business Suite:
    • R12 Oracle HRMS iRecruitment Fundamentals (Self-Study Course)
    • R12 Oracle Payroll Fundamentals: Administration (Self-Study Course)
    • R12 Oracle HRMS System Administration Fundamentals (Self-Study Course)
    • R12 Oracle HRMS Self Service Fundamentals (Self-Study Course)
    • R12 Oracle HRMS Implement and Use Fast Formula (Self-Study Course)
    • R12 HRMS Work Structures Fundamentals (Self-Study Course)
    • R12 HRMS Total Compensation Foundations (Self-Study Course)

    Siebel:
    • Siebel 8.1.x Chat and Voice Integration Using CCA (Self-Study Course)
    • Siebel 8.1.x Search using Oracle Secure Enterprise Search (Self-Study Course)
    • Siebel 8.1.x COM Web Services (Self-Study Course)
    • Siebel 8.1.x COM Asset Based Order Management (Self-Study Course)
    • Siebel 8.1.x COM: What is New in Product Configurator (Self-Study Course)
    • Siebel 8.1.x COM Product Configurator Caching & Performance Management (Self-Study Course)
    • Siebel 8.1.x COM PSP Engine Caching and Performance Management (Self-Study Course)
    • Siebel 8.1.x Remote: Administration (Self-Study Course)
    • Siebel 8.1.x Remote: Technical Foundations (Self-Study Course)
    • Siebel Tools: Configuring Chart and Tree Applets (Self-Study Course)

    Sun - Server Administration:
    • SPARC SuperCluster Administration and Maintenance Seminar (2 days) - OPN Only
    • SPARC T4-Based Servers Installation Boot Camp (1 day)

    Primavera:
    • Primavera P6 Application Administration Rel 8.x (2 days)

    Oracle Retail:
    • Retail Merchandising System (RMS) Business Overview (Self-Study Course)
    • Retail Invoice Matching (ReIM) Product Overview (Self-Study Course)
    • Retail Invoice Matching (ReIM) Business Introduction (Self-Study Course)
    • Retail Demand Forecasting: RDF Classic Product Overview (Self-Study Course)
    • Retail Demand Forecasting Introduction (Self-Study Course)
    • Retail Data Warehouse (RDW) Overview 13.1 (Self-Study Course)
    • Oracle Retail Point-of-Service (POS) Product Overview (Self-Study Course)
    • Retail Sales Audit (ReSA) Product Overview (Self-Study Course)
    • Retail Price Management (RPM) Product Overview (Self-Study Course)
    • Retail Merchandising System (RMS) Technical Introduction (Self-Study Course)
    • Oracle Retail Integration Bus (RIB) Product Overview (Self-Study Course)

    Oracle Communications:
    • Unified Communications Suite Convergence Customization (2 days)
    • OSM Foundations I: Tasks, Processes and Orders

    Get in contact with your local Oracle University team for more details and course dates.

    Stay Connected to Oracle University: LinkedIn OracleMix Twitter Facebook Google+

    Read the article

  • New Packt Books: APEX & JRockit

    - by [email protected]
    I have received these 2 ebooks from Packt Publishing and I am currently reviewing them. Both of them look great so far.

    Oracle Application Express 3.2 - The Essentials and More
    First of all, I have to mention that I am new to APEX. I was interested in this product, which is a development tool for web applications on the Oracle Database. As I support JDeveloper and ADF, which are products that work very closely with the Oracle Database and are rapid development tools as well, it is always interesting and useful to know complementary tools. APEX looks very useful and the book includes many working examples. A more complete review of this book is coming soon. Further information about this book can be seen at Packt.

    Oracle JRockit: The Definitive Guide
    Many of our Oracle Coherence customers run their caches and clusters using JRockit. This JVM has helped us to solve lots of Service Requests. It is a really reliable, fast and stable JVM. It works great in both development and production environments with big amounts of data, concurrency, multi-threading and many other factors that can make a JVM crash. I must also mention JRockit Mission Control (JRMC), which is a great tool for management and monitoring. I really recommend it. As a matter of fact, some months ago I created a document entitled "How to Monitor Coherence-Based Applications using JRockit Mission Control" (Doc Id 961617.1) on My Oracle Support. Also, the JRockit Runtime Analyzer (JRA) and its successor in newer versions, the JRockit Flight Recorder (JFR), are deeply reviewed. This book contains very clear and complete information about all this and more. I will post an entry with a more complete review soon (and will probably post an entry about Coherence monitoring with JRMC soon too). Further information about this book can be seen at Packt.

    Read the article

  • Lenovo Ideapad S205 and Ubuntu 12.04 LTS

    - by Oranges
    I just wanted to report one thing, because I know there are a lot of Ideapad S205 users out there who had issues running Ubuntu on their Lenovo Ideapad S205 in the past.

    Lenovo Ideapad S205 with Ubuntu 12.04 LTS: everything works out of the box. I just installed it and everything, including the microphone, the webcam, the WLAN, the AMD E-350 CPU/GPU and the sound, worked right away. In other words: everything works. Even the AMD E-350 GPU works very well with the open source drivers and with the proprietary drivers. The proprietary drivers were automatically offered to me by Ubuntu itself. It works great. Long story short: I love it. It's extremely stable and very fast. I am using the German version of the Ideapad S205, combined with the 32-bit version of Ubuntu.

    There is only one single thing to keep in mind when installing Ubuntu on the S205: plug your USB/CD/DVD drive into the left USB port. I don't know why, but it only works with the USB port on the left side of the netbook. This is only for installing Ubuntu. After it has been installed and updated successfully, all of the USB ports will work.

    Oranges - feedback on Lenovo S205 with Ubuntu 12.04 LTS.

    Read the article

  • Infragistics - Now Available! 2 New Packs and Silverlight CTPs

    Infragistics® glows silver this season as we continue to innovate for the Silverlight 3 platform and deliver stockings stuffed with the high-performance controls needed to quickly and easily create great user experiences in Silverlight, plus two new ICON packs guaranteed to make your applications shine.

    First Silverlight Pivot Grid Now Available
    Perfect for working with multi-dimensional data, the xamWebPivotGrid™ presents decision makers with highly interactive pivoting views of business intelligence. Our new high-performance Silverlight charting control, the xamWebDataChart™, enables blazing-fast updates every few milliseconds to charts with millions of data points. Both of these controls are planned for the 2010 Volume 1 release of NetAdvantage for Silverlight Data Visualization.

    Gift to Silverlight Line of Business Applications
    You'll be able to deliver superior user experiences in LOB applications with Silverlight RIA services support, a ZIP compression library, a new control persistence framework, and new Silverlight data grid features like unbound columns and template layouts, plus an Office 2007-style ribbon UI for Silverlight.

    Tis the Season to Add Some Shine
    The NetAdvantage ICONS Legal Pack adds a touch of legalese to any application user interface with its rich, legal system-themed graphic icons. The NetAdvantage ICONS Education Pack supplies familiar academic icons that developers can easily add to software reaching students, educators, schools and universities. Sold in themed packs, the ICON packs already available are: Web & Commerce, Healthcare, Office Basics, Business & Finance and Software & Computing. Buy any two of the seven packs for $299 USD (MSRP); 3 packs for $399 USD (MSRP); or sold separately for $199 USD (MSRP) each.

    For more product details, contact Infragistics:
    +1 (800) 231-8588
    In Europe (English speakers): +44 (0) 20 8387 1474
    En France (en langue française): +33 (0) 800 667 307
    Für Deutschland (Deutscher Sprecher): 0800 368 6381
    In India: +91 (80) 6785 1111

    Read the article

  • MTN WMS Implementation Story

    - by aditya.agarkar
    MTN is Africa's largest cellular phone company, serving millions of customers across 21 countries. MTN uses Oracle WMS to manage its distribution activities and its sizzling growth. Just for perspective: since 2004, Africa has been the fastest growing mobile phone market in the world. If you want to know more about MTN and the WMS project at MTN, a summarized view of the MTN WMS project is here. The WMS project at MTN was presented at Oracle OpenWorld in 2007. The extensive automation at MTN includes interfaces with a conveyor for item transport, a high-speed sorter for item routing, put-to-light for packing accuracy, an ASRS carousel/lift for inventory security and storage optimization, a check-weight scale for shipping accuracy, automated carton erectors for package creation, and automated carton labeling. Subsequent to this presentation and their go-live in 2007, the MTN warehouse has scaled new heights. The volume has grown manifold (as can be expected in a fast-growing cellular market). Oracle WMS has been able to scale very well to the increase in volume, just as it was designed to do. Here are a couple of videos that highlight the WMS operations at MTN:

    1) Video interview with Margaretha Theart (Warehouse Manager at MTN)
    2) Automation video at MTN (Hat tip: Syed Imran)

    Enjoy!

    Read the article

  • How do I transfer music to my iPhone?

    - by bobdobbs
    I'm using Ubuntu 12.04 and I have a first-generation iPhone, 16 GB. The iPhone is jailbroken. Under 10.04, I was able to transfer music onto the phone. I used Banshee. Under 12.04, Ubuntu can see the iPhone and its file system, but can't transfer music. If I just plug the iPhone in, Rhythmbox can't see the iPhone. Banshee can see it sometimes, but can't see its music, and can't transfer music onto it. Attempts to copy over tracks fail with the error "mp3 format is not supported by the device and no converter was found to convert it". If I plug in the iPhone and start Nautilus, Nautilus can see the iPhone and gives me options for opening it: Rhythmbox or a photo management application. If I then open the phone in Rhythmbox, I can see the music collection. But when I attempt to copy files over, the syncing process seems to take forever. The only way to end it is to cancel the sync. Afterwards, no new tracks have been added. So, how do I transfer files over to my iPhone from Ubuntu 12.04?

    Read the article

  • Security vulnerability and NDAs [closed]

    - by Chris
    I want to propose a situation and gain insight from the community's thoughts.

    A customer, call them Customer X, has a contract with a vendor, Vendor Y, to provide an application and services. Customer X discovers a serious authentication vulnerability in Vendor Y's software. Vendor Y and Customer X have a discussion. Vendor Y acknowledges/confirms the flaw. Vendor Y confirms they will put in effort to fix it. Customer X requests that Vendor Y inform all customers impacted by this. The vendor agrees. Fast forward 2 months, and the flaw has not been fixed. Patches were applied to mitigate it, but the flaw still exists. However, no customers were informed of the issue. At this point Customer X contacts Vendor Y to determine the status and understand why customers were not informed. The vendor nicely reminds the customer they are under an NDA and are still working on the issue.

    A few questions/discussion pieces come out of this. By discussing a software flaw with a vendor, does this imply you have agreed to any type of NDA disclosure? Additionally, what rights does Customer X have to inform other customers of this vulnerability if the vendor does not appear willing to comply? I (the OP) am under the impression that when this situation occurs, you are supposed to notify the vendor of the issue and provide them with ample time to respond; if there is no response, you are able to do what you wish with the information. I am thinking back to the MIT/subway incident where the students contacted the transit authorities, the transit authorities didn't respond in a timely fashion, and so the students disclosed the information publicly on their own.

    A few things to note about this: I am not the customer in the above situation. Also, let's assume for purposes of keeping the discussion in line that Customer X has no intention of disclosing the information; they are merely concerned and interested in making sure other customers are aware until it is fixed, so they do not experience a major security breach. (More information can be supplied if needed to add context to the question.)

    Read the article

  • Issue with multiplayer interpolation

    - by Ben Cracknell
    In a fast-paced multiplayer game I'm working on, there is an issue with the interpolation algorithm. You can see it clearly in the image below.

    Cyan: local position when a packet is received
    Red: position received from packet (goal)
    Blue: line from local position to goal when packet is received
    Black: local position every frame

    As you can see, the local position seems to oscillate around the goals instead of moving between them smoothly. Here is the code:

        // Local transform position when the last packet arrived. Will lerp from here to the goal.
        private Vector3 positionAtLastPacket;

        // Location received from the last packet.
        private Vector3 goal;

        // Time since the last packet arrived.
        private float currentTime;

        // Estimated time to reach the goal (also the expected time of the next packet).
        private float timeToReachGoal;

        private void PacketReceived(Vector3 position, float timeBetweenPackets)
        {
            positionAtLastPacket = transform.position;
            goal = position;
            timeToReachGoal = timeBetweenPackets;
            currentTime = 0;

            Debug.DrawRay(transform.position, Vector3.up, Color.cyan, 5);  // current local position
            Debug.DrawLine(transform.position, goal, Color.blue, 5);       // path to goal
            Debug.DrawRay(goal, Vector3.up, Color.red, 5);                 // received goal position
        }

        private void FrameUpdate()
        {
            currentTime += Time.deltaTime;
            float delta = currentTime / timeToReachGoal;
            transform.position = FreeLerp(positionAtLastPacket, goal, delta);
            Debug.DrawRay(transform.position, Vector3.up * 0.5f, Color.black, 5); // local position this frame
        }

        /// <summary>
        /// Lerp without being locked to 0-1.
        /// </summary>
        Vector3 FreeLerp(Vector3 from, Vector3 to, float t)
        {
            return from + (to - from) * t;
        }

    Any idea what's going on?
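    One plausible culprit, offered as a hedged guess rather than a confirmed diagnosis: FreeLerp is deliberately not clamped, so whenever the next packet arrives later than timeToReachGoal predicted, delta exceeds 1 and the object extrapolates past the goal, then snaps back toward the new goal when the packet finally lands. That overshoot-and-snap pattern looks exactly like oscillation around the goals. A minimal guard (an assumed fix, untested against the original project) would cap the extrapolation; this is a drop-in replacement for the FrameUpdate above:

        // Sketch of a guard in FrameUpdate: allow slight extrapolation to hide
        // jitter, but stop a late packet from flinging the object past the goal.
        // The 1.2f cap is an arbitrary illustrative value.
        private void FrameUpdate()
        {
            currentTime += Time.deltaTime;
            float delta = Mathf.Min(currentTime / timeToReachGoal, 1.2f);
            transform.position = FreeLerp(positionAtLastPacket, goal, delta);
        }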

    Read the article

  • Troubleshoot broken ZFS

    - by BBK
    I have one zpool called tank, in RAIDZ1, with 5x1TB SATA HDDs. I'm using Ubuntu Server 11.10 Oneiric, kernel 3.0.0-15-server. I installed ZFS from the PPA, and I'm also using zfs-auto-snapshot. When the zfs module is loaded into the kernel, the ZFS file system hangs my computer.

    Before that, I had created a few new file systems:

        zfs create -V 10G tank/iscsi1
        zfs create -V 10G tank/iscsi2
        zfs create -V 10G tank/iscsi3

    I shared them through iSCSI by the /dev/tank/iscsiX path. My computer started hanging sometimes when I used tank/iscsiX over iSCSI; I don't know why exactly. I switched off iSCSI and started to remove these file systems:

        zfs destroy tank/iscsi3

    Since I'm using zfs-auto-snapshot, I had snapshots, and without the -r key my command would not destroy the FS. So I issued the next command:

        zfs destroy tank/iscsi3 -r

    The tank/iscsi3 FS was clean and contained nothing - it was destroyed without an issue. But tank/iscsi2 and tank/iscsi1 contained a lot of information. I tried:

        zfs destroy tank/iscsi2 -r

    After some time my computer hung. I rebooted. It didn't boot very fast; the HDDs started working like crazy, making a lot of noise. After 15 minutes the HDDs stopped going crazy and the OS booted at last. All seemed to be OK - tank/iscsi2 was destroyed. Once the file systems on the tank were accessible again, zpool status showed no corruption. I issued a new command:

        zfs destroy tank/iscsi1 -r

    The situation repeated itself - after some time my computer hung. But this time ZFS seems not to have healed itself. After the computer was switched on, it started to work: loading scripts and kernel modules, but once zfs started working it hung the computer. I need to recover the other ZFS file systems lying in the same zpool. A few months ago I backed up the OS to a flash drive. Booting from the backed-up OS and importing the pool has the same result - the OS starts hanging. How do I recover my data on the ZFS tank?

    Read the article

  • #mvvmlight V4 update for Win8 RTM

    - by Laurent Bugnion
    With Windows 8 RTM out of the door (at least for some of us), it was also time to create an update to MVVM Light. I selected the V4 RTM to do this (V4.0.23). This RTM version was released a few weeks ago without many bells and whistles because I was just too busy to write much about it. Now, after some vacation, I will resume blogging on all my favorite topics, including of course MVVM Light.

    Upgrade
    Upgrading the installation should not require an uninstall, so try to simply run the MSI downloaded from the Download section at http://mvvmlight.codeplex.com/. Should you encounter any issue, try to uninstall the old version first, following the steps at http://galasoft.ch/mvvm/cleaning/.

    Upgrading current apps from Win8 RP to Win8 RTM
    I didn't recompile the assemblies of MVVM Light, so if you had a version running with V4.0.23 on Windows 8 RP, you should be able to use the same DLLs on Windows 8 RTM. If you were using earlier versions, however, I would recommend doing an upgrade. I noticed a warning regarding the signing certificate. It is due to the PFX key, which appears to be outdated after the upgrade to Windows 8 RTM. I solved this warning by replacing the old PFX key with a new one copied from a new project. The warning did not cause the build to fail, though.

    About MVVM Light V4 RTM
    The RTM finalizes quite a lot of issues. The change log is available at http://galasoft.ch/mvvm/installing/changes/. There are some issues that are either still open or that popped up since then, and I am working on V4.1, to be released in the next few weeks. In addition to that, I have plans to support Windows Phone 8 (when it comes) and have a nice list of ideas for V5 with a few new components.

    Thanks again for your continuous support of MVVM Light!

    Laurent

    Laurent Bugnion (GalaSoft)
    Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • How do I stop someone using my domain for Spam emails?

    - by Vizioz Limited
    Hi Blog Readers,

    Every now and then I seem to have one of those low-life Viagra sellers using my domain for spam emailing people. I have done everything I can think of to try and prevent them from doing this, but they keep doing it. I just wondered if anyone out there knew of a way to stop them? The headers from one of the bounce-backs look like this:

        Return-Path: <[email protected]>
        Received: from rctp.telecomitalia.it (host49-133-dynamic.52-82-r.retail.telecomitalia.it [82.52.133.49]) by mx.google.com with SMTP id o8si307731weq.161.2010.07.23.05.33.53; Fri, 23 Jul 2010 05:33:59 -0700 (PDT)
        Received-SPF: fail (google.com: domain of [email protected] does not designate 82.52.133.49 as permitted sender) client-ip=82.52.133.49;
        Authentication-Results: mx.google.com; spf=hardfail (google.com: domain of [email protected] does not designate 82.52.133.49 as permitted sender) [email protected]
        Message-ID: <[email protected]>
        Date: Fri, 23 Jul 2010 14:33:52 +0200
        From: Garnett Mckinnie <[email protected]>
        MIME-Version: 1.0
        To: NAME REMOVED <[email protected]>
        Subject: Where we are well established we are

    As you can see from the headers, I have set up the SPF record and it is producing a "hardfail". We are using Google Apps for our email hosting, so you'd kinda hope they have things pretty well secured down. So what am I missing? Or is it always going to be possible for other people to fake sending emails using another domain?
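    For readers wondering what produces that hardfail: it comes from publishing an SPF TXT record that ends in -all, which for Google Apps-hosted mail of this era looked along these lines (a sketch with a placeholder domain; the exact record depends on which services send mail for the domain):

        yourdomain.com.  IN  TXT  "v=spf1 include:_spf.google.com -all"

    The catch, and likely the answer to the question, is that SPF cannot stop a spammer from putting your domain in the From line; it only lets receiving servers detect and reject such mail, and only receivers that actually enforce SPF will do so.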

    Read the article

  • Tab Sweep - Upgrade to Java EE 6, Groovy NetBeans, JSR310, JCache interview, OEPE, and more

    - by alexismp
    Recent Tips and News on Java, Java EE 6, GlassFish & more:

    • Implementing JSR 310 (New Date/Time API) in Java 8 Is Very Strongly Favored by Developers (java.net)
    • Upgrading To The Java EE 6 Web Profile (Roger)
    • NetBeans for Groovy (blogs.oracle.com)
    • Client Side MOXy JSON Binding Explained (Blaise)
    • Control CDI Containers in SE and EE (Strub)
    • Java EE on Google App Engine: CDI to the Rescue - Aleš Justin (jaxenter)
    • The Java EE 6 Example - Testing Galleria - Part 4 (Markus)
    • Why is OpenWebBeans so fast? (Strub)
    • Welcome to the new Oracle Enterprise Pack for Eclipse Blog (blogs.oracle.com)
    • Java Spotlight Episode 75: Greg Luck on JSR 107 Java Temporary Caching API (Spotlight Podcast)
    • Glassfish cluster installation and administration on top of SSH + public key (Paulo)
    • Jfokus 2012 on Parleys.com (Parleys)
    • Java Tuning in a Nutshell - Part 1 (Rupesh)
    • New Features in Fork/Join from Java Concurrency Master, Doug Lea (DZone)
    • A Java7 Grammar for VisualLangLab (Sanjay)
    • Glassfish version 3.1.2: Secure Admin must be enabled to access the DAS remotely (Charlee)
    • Oracle Announces the Certification of the Oracle Database on Oracle Linux 6 and Red Hat Enterprise Linux 6

    Read the article

  • The Splendiferous Array of Culinary Tools [Infographic]

    - by ETC
    If your geeking out extends from the workbench to the kitchen counter, you'll love this swanky infographic detailing the families of utensils in your kitchen drawers and cupboards. The poster showcases everything from scissors to strainers in a retro-style poster. If you can find a culinary tool in your kitchen that isn't on the chart, then you're obviously a culinary wizard of the highest order. You can hit up the link below to check out the poster in full-size and downloadable glory, or head over to the design company that created it here (and pre-order a printed copy for your kitchen).

    A Complete Guide to Your Kitchen Tools [Fast Co. Design via Design Sponge]

    Read the article

  • Code Reuse is (Damn) Hard

    - by James Michael Hare
    Being a development team lead, the task of interviewing new candidates was part of my job. Like any typical interview, we started with some easy questions to get them warmed up and help calm their nerves before hitting the hard stuff. One of those easier questions was almost always: "Name some benefits of object-oriented development." Nearly every time, the candidate would chime in with a plethora of canned answers which typically included: "it helps ease code reuse." Of course, this is a gross oversimplification. Tools only ease reuse; it's developers that ultimately can cause code to be reusable or not, regardless of the language or methodology.

    But it did get me thinking... we always used to say that as part of our mantra as to why Object-Oriented Programming was so great. With polymorphism, inheritance, encapsulation, etc. we in essence set up the concepts to help facilitate reuse as much as possible. And yes, as a developer now of many years, I unquestionably held that belief for ages before it really struck me how my views on reuse have jaded over the years. In fact, in many ways Agile rightly eschews reuse as taking a backseat to developing what's needed for the here and now. It used to be I was in complete opposition to that view, but more and more I've come to see the logic in it. Too many times I've seen developers (myself included) get lost in design paralysis trying to come up with the perfect abstraction that would stand the test of time. Nearly without fail, all of these pieces of code become obsolete in a matter of months or years.

    It's not that I don't like reuse - it's just that reuse is hard. In fact, reuse is DAMN hard. Many times it is just a distraction that eats up architect and developer time, and worse yet can be counter-productive and force wrong decisions. Now don't get me wrong, I love the idea of reusable code when it makes sense. These are the few cases where you are designing something that is inherently reusable. The problem is, most business-class code is inherently unfit for reuse! Furthermore, code that is reusable will often fail to be reused if you don't have the proper framework in place for effective reuse, which includes standardized versioning, building, releasing, and documenting of the components. That should always be standard across the board when promoting reusable code. All of this is hard, and it should only be done when you have code that is truly reusable, or you will be exerting a large amount of development effort for very little bang for your buck.

    But my goal here is not to get into how to reuse (that is a topic unto itself) but what should be reused. First, let's look at an extension method. There are many times when I want to kick off a thread to handle a task, and when I want to rein that thread in, of course I want to do a Join on it. But what if I only want to wait a limited amount of time and then Abort? Well, I could of course write that logic out by hand each time, but it seemed like a great extension method:

        public static class ThreadExtensions
        {
            public static bool JoinOrAbort(this Thread thread, TimeSpan timeToWait)
            {
                bool isJoined = false;

                if (thread != null)
                {
                    isJoined = thread.Join(timeToWait);

                    if (!isJoined)
                    {
                        thread.Abort();
                    }
                }
                return isJoined;
            }
        }

    When I look at this code, I can immediately see things that jump out at me as reasons why this code is very reusable. Some of them are standard OO principles, and some are kind-of home-grown litmus tests:

    • Single Responsibility Principle (SRP) – The only reason this extension method need change is if the Thread class itself changes (one responsibility).
    • Stable Dependencies Principle (SDP) – This method only depends on classes that are more stable than it is (System.Threading.Thread), and in itself is very stable, hence other classes may safely depend on it. It is also not dependent on any business domain, and thus isn't subject to changes as the business itself changes.
    • Open-Closed Principle (OCP) – This class is inherently closed to change.
    • Small and Stable Problem Domain – This method only cares about System.Threading.Thread.
    • All-or-None Usage – A user of a reusable class should want the functionality of that class, not parts of that functionality. That's not to say they must use every method, but they shouldn't be using a method just to get half of its result.
    • Cost of Reuse vs. Cost to Recreate – Since this class is highly stable and minimally complex, we can offer it up for reuse very cheaply by promoting it as "ready-to-go" and already unit tested (important!) and available through a standard release cycle (very important!).

    Okay, all seems good there. Now let's look at an entity and DAO. I don't know about you all, but there have been times I've been in organizations that get the grand idea that all DAOs and entities should be standardized and shared. While this may work for small or static organizations, it's near ludicrous for anything large or volatile.

        namespace Shared.Entities
        {
            public class Account
            {
                public int Id { get; set; }

                public string Name { get; set; }

                public Address HomeAddress { get; set; }

                public int Age { get; set; }

                public DateTime LastUsed { get; set; }

                // etc, etc, etc...
            }
        }

        ...

        namespace Shared.DataAccess
        {
            public class AccountDao
            {
                public Account FindAccount(int id)
                {
                    // dao logic to query and return account
                }

                ...
            }
        }

    Now to be fair, I'm not saying there doesn't exist an organization where some entities may be extremely static and unchanging. But at best such entities and DAOs will be problematic cases of reuse. Let's examine those same tests:

    • Single Responsibility Principle (SRP) – The reasons to change for these classes will be strongly dependent on what the definition of the account is, which can change over time and may have multiple influences depending on the number of systems an account can cover.
    • Stable Dependencies Principle (SDP) – This method depends on the data model beneath itself, which is also largely dependent on the business definition of an account, which can be inherently unstable.
    • Open-Closed Principle (OCP) – This class is not really closed for modification. Every time the account definition changes, you'd need to modify this class.
    • Small and Stable Problem Domain – The definition of an account is inherently unstable and in fact may be very large. What if you are designing a system that aggregates account information from several sources?
    • All-or-None Usage – What if your view of the account encompasses data from 3 different sources but you only care about one of those sources or one piece of data? Should you have to take the hit of looking up all the other data? On the other hand, should you have ten different methods returning portions of data in chunks people tend to ask for? Neither is really a great solution.
    • Cost of Reuse vs. Cost to Recreate – DAOs are really trivial to rewrite, and unless your definition of an account is EXTREMELY stable, the cost to promote, support, and release a reusable account entity and DAO is usually far higher than the cost to recreate them as needed.

    It's no accident that my case for reuse was a utility class and my case for non-reuse was an entity/DAO. In general, the smaller and more stable an abstraction is, the higher its level of reuse. When I became the lead of the Shared Components Committee at my workplace, one of the original goals we looked at satisfying was to find (or create), version, release, and promote a shared library of common utility classes, frameworks, and data access objects. Now, of course, many of you will point to NHibernate and Entity for the latter, but we were looking at larger, macro collections of data that span multiple data sources of varying types (databases, web services, etc).

    As we got deeper and deeper into the details of how to manage and release these items, it quickly became apparent that while the case for reuse was typically a slam dunk for utilities and frameworks, the data access objects just didn't "smell" right. We ended up having session after session of design meetings to try and find the right way to share these data access components. When someone asked me why it was taking so long to iron out the shared entities, my response was quite simple: "Reuse is hard..." And that's when I realized that while reuse is an awesome goal and we should strive to make code maintainable, oftentimes you end up creating far more work for yourself than necessary by trying to force code to be reusable that inherently isn't.

    Think about the times you've worked in a company where, in the design session, people fight over the best way to implement a class to make it maximally reusable, extensible, and any other buzzwordable. Then think about how quickly that design became obsolete. Many times I set out to do a project and think, "yes, this is the best design, I can extend it easily!" only to find out the business requirements change COMPLETELY in such a way that the design is rendered invalid. Code, in general, tends to rust and age over time. As such, writing reusable code can often be difficult, many times ends up being a futile exercise, and worse yet sometimes makes the code harder to maintain because it obfuscates the design in the name of extensibility or reusability.

    So what do I think are reusable components?

    • Generic Utility classes – These tend to be small classes that assist in a task and have no business context whatsoever.
    • Implementation Abstraction Frameworks – Home-grown frameworks that try to isolate changes to third-party products you may be depending on (like writing a messaging abstraction layer for publishing/subscribing that is independent of whether you use JMS, MSMQ, etc).
    • Simplification and Uniformity Frameworks – To some extent this is similar to an abstraction framework, but there may be one chosen provider and a development-shop mandate to perform certain complex items in a certain way. Or, perhaps, to simplify and dumb down a complex task for the average developer (such as implementing a particular development shop's method of encryption).

    And what are less reusable?

    • Application and Business Layers – These tend to fluctuate a lot as requirements change and new features are added, so they tend to be an unstable dependency. They may be reused across applications but are also very volatile.
    • Entities and Data Access Layers – These tend to be tuned to the scope of the application, so reusing them can be hard unless the abstraction is very stable.

    So what's the big lesson? Reuse is hard. In fact it's damn hard. And much of the time I'm not convinced we should focus too hard on it. If you're designing a utility or framework, then by all means design it for reuse. But you must also really set down a good versioning, release, and documentation process to maximize your chances. For anything else, design it to be maintainable and extendable, but don't waste the effort on reusability for something that most likely will be obsolete in a year or two anyway.

    Read the article

  • HTML5 game engine for a 2D or 2.5D RPG style "map walk"

    - by stargazer
    please help me to choose an HTML5 game engine or JavaScript libraries. I want to do the following in the game:

    • When the game starts, a part of a huge map (full size of the map: about 7 screens) is shown. The map itself is completely designed in the editor mapeditor.org (or in some comparable editor - if you know a good alternative to mapeditor.org, let me know) and loaded at runtime or at design time.
    • The game engine should support loading of isometric maps (well, in the worst case only orthogonal maps will be sufficient); both the "tile layer" and the "object layer" from mapeditor.org should be supported.
    • Scrolling/performance of this map should be fast enough.
    • The map and the game should be either in 2D (orthogonal map) or in 2.5D (isometric map).
    • The game engine should support movement of sprites with animation. Let's say I have a sprite for "human" with animation sequences showing "walking" in 8 directions - it should be imported into the game engine and should "walk" on the map without writing a lot of JavaScript code.
    • Automatic scrolling of the map when the "human" nears the screen border.
    • Collision detection, "solid" objects. mapeditor.org supports properties on tiles. Let's say I assign a "solid" property to some tiles in the editor. It should be easy to check this "solid" property in the game engine and implement a kind of "solid" behavior, so the animated sprites do not walk through walls (a sketch of this check follows at the end of this post).
    • Collision detection - it should be easy to implement some custom functionality like "when sprite A is close to sprite B - call this function".
    • Showing "dialogs" or popup windows on top of the map - should be easy to implement.
    • Cross-browser audio support (it is implemented quite well in Construct 2 from Scirra, so I'm looking for comparable audio quality).

    The game itself is a kind of RPG, but without fighting scenes and without a huge "inventory". The main character just walks on the map, discovers some things, and there are dialogs and sounds. The functionality of this example from sprite.js http://batiste.dosimple.ch/sprite.js/tests/mapeditor/map_reader.html is very close to what I'm developing. But I'm not a JavaScript guru (and a very lazy guy) and would like to write even less JavaScript code than in the example...
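    Engine-agnostic, the "solid tile" check described above usually reduces to mapping a world coordinate to a tile index and consulting that tile's editor-assigned properties. Here is a rough sketch of the idea in C# (all type and property names are invented for illustration; an HTML5 engine would express the same logic in JavaScript):

        // Illustrative sketch of a "solid tile" collision check; the TileMap type
        // and property names are invented, not from any particular HTML5 engine.
        using System.Collections.Generic;

        class TileMap
        {
            public int TileWidth = 32, TileHeight = 32;
            // For each tile id, the properties assigned in the map editor (e.g. "solid").
            public Dictionary<int, HashSet<string>> TileProperties =
                new Dictionary<int, HashSet<string>>();
            public int[,] Tiles;  // tile id per (column, row), filled from the loaded map

            public bool IsSolidAt(float worldX, float worldY)
            {
                int col = (int)(worldX / TileWidth);
                int row = (int)(worldY / TileHeight);
                if (col < 0 || row < 0 || col >= Tiles.GetLength(0) || row >= Tiles.GetLength(1))
                    return true;  // treat out-of-map as solid so sprites cannot walk off

                HashSet<string> props;
                return TileProperties.TryGetValue(Tiles[col, row], out props)
                       && props.Contains("solid");
            }
        }

        // A sprite then refuses a move into a solid tile:
        // if (!map.IsSolidAt(nextX, nextY)) { x = nextX; y = nextY; }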

    Read the article

  • Netretail's online retail operation benefits from personal contact

    - by christopher.jones
    Hot on oracle.com is a snapshot of Netretail Holding B.V., profiling their use of PHP and Oracle technology, such as the Oracle RAC cluster database, to become a leading online retailer across Central and Eastern Europe. We've also just refreshed our key PHP Scalability and High Availability whitepaper, which talks about connection pooling (DRCP) and Fast Application Notification (FAN). We brought it up to date for 11gR2 and PHP 5.3. It now includes the new 11gR2 V$CPOOL_CONN_INFO view, the new columns for DBA_CPOOL_INFO, information about LOGOFF triggers, and information about the support for Client Result Caching with DRCP.

    Back to Netretail. Two of their secrets to success are keeping technically up to date, and networking. That is, networking in the business sense. I had the pleasure of meeting Michal Táborský (@whizz), the Chief System Architect, when he was in California for a Velocity conference. Michal took time to visit Oracle HQ and talk with our developers about his then-current architecture and future needs. I also met his manager at last year's Oracle OpenWorld conference. Having built up a relationship with us, Netretail now has access to Oracle Development staff. While this will never bypass Oracle Support (which has tools, systems, etc. that are needed and useful for resolving issues), it makes communication easier for some classes of questions. It helps discussions that will let us improve Oracle products, and makes Netretail stronger. I like this. And there's no reason why you can't talk with us too. You know where to email me.

    Read the article

  • SD Card Reader not working in Ubuntu 12.04

    - by tripkane
    I have read many other posts on this issue and believe that Ubuntu 12.04 is not even recognizing my SD card reader as just that:

    Computer model: Metabox (Australian builder of Clevo laptops) / Clevo P150EM
    OS: Ubuntu 12.04 (64-bit)
    CPU: Intel(R) Core(TM) i7-3720QM CPU @ 2.60GHz
    HD: 120GB Intel 550/520MB/s SSD

    According to the people who built my computer, the specs of the SD card reader in my comp are as follows:

    Manufacturer: Realtek Semiconductor Corp.
    Location: PCI bus 3
    Hardware ID: PCI\Ven_10EC&DEV_5289&SUBSYS_51051558
    Physical device object name: \Device\NTPNP_PCI0015

    Here are the relevant outputs of the following commands run from the terminal:

    sudo lshw

        *-generic UNCLAIMED
             description: Unassigned class
             product: Realtek Semiconductor Co., Ltd.
             vendor: Realtek Semiconductor Co., Ltd.
             physical id: 0
             bus info: pci@0000:03:00.0
             version: 01
             width: 32 bits
             clock: 33MHz
             capabilities: pm msi pciexpress msix vpd bus_master cap_list
             configuration: latency=0
             resources: memory:f6a00000-f6a0ffff

    sudo lspci -v -nn

        03:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. Device [10ec:5289] (rev 01)
             Subsystem: CLEVO/KAPOK Computer Device [1558:5105]
             Flags: bus master, fast devsel, latency 0, IRQ 4
             Memory at f6a00000 (32-bit, non-prefetchable) [size=64K]
             Capabilities: [40] Power Management version 3
             Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit+
             Capabilities: [70] Express Endpoint, MSI 00
             Capabilities: [b0] MSI-X: Enable- Count=1 Masked-
             Capabilities: [d0] Vital Product Data
             Capabilities: [100] Advanced Error Reporting
             Capabilities: [140] Virtual Channel
             Capabilities: [160] Device Serial Number 00-00-00-00-00-00-00-00

    Do the "UNCLAIMED"/"Unassigned class" details in these outputs mean that Ubuntu doesn't know that the SD card reader is one, or what to do with it? And if so, how should I go about fixing it?

    Cheers ;)

    Read the article

  • Wireless not detected on an HP Pavilion g6 - RT5390 PCIe

    - by Earl Larson
    I just bought an HP Pavilion g6 laptop and I installed Natty Narwhal alongside Windows 7. In Windows 7, my WiFi works perfectly, but in Ubuntu it absolutely (no matter what I do) won't detect my home WiFi network. Here is the output of "lspci":

        earl@ubuntu:~$ lspci
        00:00.0 Host bridge: Advanced Micro Devices [AMD] RS880 Host Bridge
        00:01.0 PCI bridge: Advanced Micro Devices [AMD] RS780/RS880 PCI to PCI bridge (int gfx)
        00:04.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (PCIE port 0)
        00:05.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (PCIE port 1)
        00:06.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (PCIE port 2)
        00:11.0 SATA controller: ATI Technologies Inc SB7x0/SB8x0/SB9x0 SATA Controller [AHCI mode]
        00:12.0 USB Controller: ATI Technologies Inc SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
        00:12.2 USB Controller: ATI Technologies Inc SB7x0/SB8x0/SB9x0 USB EHCI Controller
        00:13.0 USB Controller: ATI Technologies Inc SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
        00:13.2 USB Controller: ATI Technologies Inc SB7x0/SB8x0/SB9x0 USB EHCI Controller
        00:14.0 SMBus: ATI Technologies Inc SBx00 SMBus Controller (rev 42)
        00:14.1 IDE interface: ATI Technologies Inc SB7x0/SB8x0/SB9x0 IDE Controller (rev 40)
        00:14.2 Audio device: ATI Technologies Inc SBx00 Azalia (Intel HDA) (rev 40)
        00:14.3 ISA bridge: ATI Technologies Inc SB7x0/SB8x0/SB9x0 LPC host controller (rev 40)
        00:14.4 PCI bridge: ATI Technologies Inc SBx00 PCI to PCI Bridge (rev 40)
        00:14.5 USB Controller: ATI Technologies Inc SB7x0/SB8x0/SB9x0 USB OHCI2 Controller
        00:16.0 USB Controller: ATI Technologies Inc SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
        00:16.2 USB Controller: ATI Technologies Inc SB7x0/SB8x0/SB9x0 USB EHCI Controller
        00:18.0 Host bridge: Advanced Micro Devices [AMD] Family 10h Processor HyperTransport Configuration
        00:18.1 Host bridge: Advanced Micro Devices [AMD] Family 10h Processor Address Map
        00:18.2 Host bridge: Advanced Micro Devices [AMD] Family 10h Processor DRAM Controller
        00:18.3 Host bridge: Advanced Micro Devices [AMD] Family 10h Processor Miscellaneous Control
        00:18.4 Host bridge: Advanced Micro Devices [AMD] Family 10h Processor Link Control
        01:05.0 VGA compatible controller: ATI Technologies Inc M880G [Mobility Radeon HD 4200]
        01:05.1 Audio device: ATI Technologies Inc RS880 Audio Device [Radeon HD 4200]
        02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 05)
        03:00.0 Network controller: Ralink corp. Device 5390
        04:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. Device 5209 (rev 01)
        earl@ubuntu:~$

    Read the article

  • What are the benefits/drawbacks of classifying defects during a peer code review

    - by DXM
    About 3 months ago, our engineering group rolled out Review Board to be used for all peer code reviews. Today, I had a discussion with one of the people involved in that process and found out that we are already looking for a replacement (possibly something commercial) because of several missing features. One of the features apparently asked for by many people is the ability to classify/categorize each code review comment (i.e. is it a style issue, coding convention, resource leak, logic error, crash... whatever). For those teams that regularly practice code review, is this categorization a common practice? Do you do it? Have you done it in the past? Is it good/bad?

    On one hand, it gives the team some more metrics and possibly will indicate more specific areas where developers may potentially need to be trained (at least that seems to be the argument). Are there other benefits? And on the other hand, and this is my concern, it will slow down the code review process that much more. As a team lead, I've done a fairly large share of reviews, and I've always liked the ability to highlight a chunk of code, hammer off a comment and move on as fast as possible. Although I haven't tried it personally, I have a feeling that expanding that combo box every time and scrolling/searching for the right category would feel like something is tripping you up. Also, if we start keeping metrics on this stuff, my other concern is that valuable code review meeting time will be spent arguing whether something is a logic error or whether it should be classified as a crash.

    Read the article

  • How to find location of installed library

    - by Raven
    Background: I'm trying to build my program, but first I need to set up libraries in NetBeans. My project is using GLU and therefore I installed libglu-dev. I didn't note the location where the libraries were installed and now I can't find them. I switched to Linux just a few days ago and so far I'm very content with it, but I couldn't google this one out and am becoming frustrated.

    Is there a way to find out where the files of a package were installed without running the installation again? I mean, if I have library xxx and installed it some time ago, is there some command that will print this info?

    I've already tried the locate, find and whereis commands, but either I'm missing something or I just can't do it correctly. For libglu, locate returns:

        /usr/share/bug/libglu1-mesa
        /usr/share/bug/libglu1-mesa/control
        /usr/share/bug/libglu1-mesa/script
        /usr/share/doc/libglu1-mesa
        /usr/share/doc/libglu1-mesa/changelog.Debian.gz
        /usr/share/doc/libglu1-mesa/copyright
        /usr/share/lintian/overrides/libglu1-mesa
        /var/lib/dpkg/info/libglu1-mesa:i386.list
        /var/lib/dpkg/info/libglu1-mesa:i386.md5sums
        /var/lib/dpkg/info/libglu1-mesa:i386.postinst
        /var/lib/dpkg/info/libglu1-mesa:i386.postrm
        /var/lib/dpkg/info/libglu1-mesa:i386.shlibs

    The other two commands fail to find anything. Now, locate did its job, but I'm sure none of those paths is where the library actually resides (at least everything I was linking so far was in /usr/lib or /usr/local/lib). libglu was introduced just as an example; I'm looking for a general solution to this problem.

    Read the article

  • Stereo images rectification and disparity: which algorithms?

    - by alessandro.francesconi
    I'm trying to figure out what are currently the two most efficient algorithms that permit, starting from an L/R pair of stereo images created using a traditional camera (so affected by some epipolar line misalignment), producing a pair of adjusted images plus their depth information by looking at their disparity. Actually I've found lots of papers about these two methods, like:

    • "Computing Rectifying Homographies for Stereo Vision" (Zhang - seems one of the best for rectification only)
    • "Three-step image rectification" (Monasse)
    • "Rectification and Disparity" (slideshow by Navab)
    • "A fast area-based stereo matching algorithm" (Di Stefano - seems a bit inaccurate)
    • "Computing Visual Correspondence with Occlusions via Graph Cuts" (Kolmogorov - this one produces a very good disparity map, with occlusion information too, but is it efficient?)
    • "Dense Disparity Map Estimation Respecting Image Discontinuities" (Alvarez - too long for a first review)

    Could anyone please give me some advice for orienting myself in this wide topic? What kind of algorithm/method should I study first, considering that I'll work on a very simple input: a pair of left and right images and nothing else, no more information (some papers are based on additional, pre-taken calibration info)?

    Speaking about working implementations, the only interesting results I've seen so far belong to this piece of software, but only for automatic rectification, not disparity: http://stereo.jpn.org/eng/stphmkr/index.html. I tried the "auto-adjustment" feature and it seems really effective. Too bad there is no source code...
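    For intuition on why a disparity map yields depth at all: under the standard rectified pinhole stereo model (an assumption of this note, not something stated in the question), depth is inversely proportional to disparity:

        Z = \frac{f \cdot B}{d}

    where f is the focal length in pixels, B is the baseline between the two camera centers, and d = x_L - x_R is the disparity of a corresponding point between the left and right images. This is also why rectification comes first: the relation only holds once corresponding points lie on the same scanline, which is exactly what the homography-based methods listed above (e.g. Zhang's) aim to achieve.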

    Read the article
