Search Results

Search found 9611 results on 385 pages for 'cool hand luke uk'.


  • Quickly Copy Movie Files to Individually Named Folders

    - by DigitalGeekery
    Some HTPC media manager applications require movie files to be stored in separate folders in order to properly store information such as cover art images and other metadata. Here we look at copying movie files to individual folders. If you already have a large movie collection stored in a single folder, we’ll show you how to quickly move those files into their own individually named folders.

    File2Folder
    File2folder is a handy portable app that automatically creates and moves movie files into a folder of the same filename. There is no installation needed; simply download and run the .exe file (link below). Enter the current movie directory, or browse for the folder. File2folder now supports both local and network shares. When you are ready to create the folders and move the files, click Move! You’ll see the move progress displayed in the window. When the process is finished, you’ll have all your movie files in individual folders. Change your mind? Just click the Undo! button and the move and folder creation process will be undone.

    If you would like to have the folder monitored for new files, click the Start button. File2folder will process any new files it discovers every 180 seconds. To turn it off, click Stop. This simple little program is a huge timesaver for those looking to organize movie collections for their HTPC. We should also note that this will work with any files, not just videos. Download file2folder
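    As an aside for readers who would rather script this behavior than run a separate app, here is a rough Java sketch of the same create-folder-and-move idea. The directory path is a made-up example and there is no error handling; it is a sketch, not a replacement for the tool above.

    import java.io.IOException;
    import java.nio.file.*;

    public class File2FolderSketch {
        public static void main(String[] args) throws IOException {
            Path movieDir = Paths.get("D:/Movies"); // hypothetical collection folder
            try (DirectoryStream<Path> files = Files.newDirectoryStream(movieDir)) {
                for (Path file : files) {
                    if (!Files.isRegularFile(file)) continue; // skip folders already created
                    String name = file.getFileName().toString();
                    int dot = name.lastIndexOf('.');
                    String folderName = (dot > 0) ? name.substring(0, dot) : name;
                    Path target = movieDir.resolve(folderName);
                    Files.createDirectories(target);        // folder named after the file
                    Files.move(file, target.resolve(name)); // move the file into its folder
                }
            }
        }
    }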

    Read the article

  • Partner BI Applications 4-Day Hands-on Training Workshop

    - by Mike.Hallett(at)Oracle-BI&EPM
    12th - 15th February 2012, Oracle Reading (UK) - REGISTER NOW
    This training will provide attendees with an in-depth working understanding of the architecture and the technical and functional content of the Oracle Business Intelligence Applications, whilst also providing an understanding of their installation, configuration and extension.
    The course will cover the following topics:
    • Overview of Oracle Business Intelligence Applications
    • Oracle BI Applications Fundamentals and Features
    • Configuring BI Applications for Oracle E-Business Suite
    • Understanding BI Applications Architecture
    • Fundamentals of BI Applications Security
    Prerequisites - This training is only for OPN member Partners.
    • Good understanding of basic data warehousing concepts
    • Hands-on experience in Oracle Business Intelligence Enterprise Edition
    • Hands-on experience in Informatica
    • Good understanding of any of the following Oracle EBS modules: General Ledger, Accounts Receivables, Accounts Payables
    • Some understanding of Oracle BI Applications is required (see Sales & Technical Tutorials for OBI, BI-Apps and Hyperion EPM)
    Please note that attendees are required to bring a laptop meeting these minimum requirements:
    • 4GB RAM, recognized by 64-bit Windows
    • 80GB free space on the hard drive or an external device
    • CPU: Core 2 Duo or higher
    • Operating system: Windows 7, Windows XP or Windows 2003 (Windows Vista is not supported)
    • An Administrator user account

    Read the article

  • Oracle ENDECA Discovery 3.1 Partner Training 3-Day Workshop

    - by Mike.Hallett(at)Oracle-BI&EPM
    To find out more about the ENDECA training, and to register, click here. June 24-26, 2014: Oracle Reading, UK – free to partners in EMEA.
    FREE of charge to OPN member Partners, this Oracle Endeca Information Discovery (OEID) 3-day bootcamp is designed to give partners an understanding of OEID’s features, and how it complements the existing Oracle Business Intelligence suite. This workshop will provide hands-on experience with Oracle Endeca Information Discovery. Topics covered will include Data Exploration with Endeca Information Discovery, Data Ingest, Project Lifecycle, Building an Endeca Server data model and advanced modeling techniques, and Working with Studio. You will also learn about working with ETL components for content acquisition and other aspects of the project, such as security. After taking this course, you will be well prepared to architect, build, demo, and implement an end-to-end Endeca Information Discovery solution. If you are a Big Data Analytics architect or developer, or a BI or Data Warehouse architect, developer or consultant, you don’t want to miss this 3-day workshop. Click here to register.

    Read the article

  • Ubuntu 12.04 x64 LTS VPN Server not changing IP

    - by user288778
    I used this guide http://silverlinux.blogspot.co.uk/2012/05/how-to-pptp-vpn-on-ubuntu-1204-pptpd.html and it worked fine. I'm able to connect, but the problem is that my IP is being changed to the "localip", not the "remoteip". This is what I get from tail -f /var/log/syslog:

    Jun 6 00:09:19 instant5860 NetworkManager[1456]: Unmanaged Device found; state CONNECTED forced (see http://bugs.launchpad.net/bugs/191889)
    Jun 6 00:09:19 instant5860 NetworkManager[1456]: Marking connection 'Wired connection 1' invalid.
    Jun 6 00:09:19 instant5860 NetworkManager[1456]: Activation (eth1) failed.
    Jun 6 00:09:19 instant5860 NetworkManager[1456]: Activation (eth1) Stage 4 of 5 (IPv4 Configure Timeout) complete.
    Jun 6 00:09:19 instant5860 NetworkManager[1456]: (eth1): device state change: failed - disconnected (reason 'none') [120 30 0]
    Jun 6 00:09:19 instant5860 NetworkManager[1456]: (eth1): deactivating device (reason 'none') [0]
    Jun 6 00:09:19 instant5860 NetworkManager[1456]: Unmanaged Device found; state CONNECTED forced.
    avahi-daemon[440]: Withdrawing address record for fe80... on eth1.
    avahi-daemon[440]: Leaving mDNS multicast group on interface eth1.IPv6 with address fe80...
    avahi-daemon[440]: Interface eth1.IPv6 no longer relevant for mDNS.
    avahi-daemon[440]: Joining mDNS multicast group on interface eth1.IPv6 with address fe80...
    avahi-daemon[440]: New relevant interface eth1.IPv6 for mDNS.
    avahi-daemon[440]: Registering new address record for fe80... on eth1.*.
    snmpd[1172]: error on subcontainer 'ia_addr' insert (-1)
    dbus[382]: [system] Activating service name='org.freedesktop.PackageKit' (using servicehelper)
    AptDaemon: INFO: Initializing daemon
    AptDaemon.PackageKit: INFO: Initializing PackageKit compat layer
    dbus[382]: [system] Successfully activated service 'org.freedesktop.PackageKit'
    AptDaemon.PackageKit: INFO: Initializing PackageKit transaction
    AptDaemon.Worker: INFO: Simulating trans: /org/debian/apt/transaction/233beca013a0473ea34d9dea805af5df
    AptDaemon.Worker: INFO: Processing transaction /org/debian/apt...
    AptDaemon.PackageKit: INFO: Get updates()
    AptDaemon.Worker: INFO: Finished
    snmpd[1172]: error on subcontainer 'ia_addr' insert (-1)
    pptpd[23611]: CTRL: Client 82.33.... control connection started
    pptpd[23611]: CTRL: Starting call (launching pppd, opening GRE)
    pptpd[23611]: pppd 2.4.5 started by root, uid 0
    pptpd[23611]: Using interface ppp0
    pptpd[23611]: Connect: ppp0 <--> /dev/pts/1
    NetworkManager[1456]: SCPlugin-Ifupdown: device added (path: /sys/devices/virtual/net/ppp0, iface: ppp0)
    NetworkManager[1456]: SCPlugin-Ifupdown: device added (path: /sys/devices/virtual/net/ppp0, iface: ppp0): no ifupdown configuration found.
    pptpd[23612]: peer from calling number 82... authorized.
    kernel: [2918261.416923] init: ufw pre-start process (23613) terminated with status 1
    dhclient: DHCPDISCOVER on eth1 to 255.255.255.255 port 67 interval 7
    CTRL: Ignored a SET LINK info packet with real ACCMs!
    local IP address: 109.0.121.197
    remote IP address: 109.0.84.56
    dhclient: DHCPDISCOVER on eth1 to 255.255.255.255 port 67 interval 13
    NetworkManager[1456]: (eth1): DHCPv4 request timed out.
    NetworkManager[1456]: (eth1): canceled DHCP transaction, DHCP client pid 23280
    NetworkManager[1456]: Activation (eth1) Stage 4 of 5 (IPv4 Configure Timeout) scheduled...
    NetworkManager[1456]: Activation (eth1) Stage 4 of 5 (IPv4 Configure Timeout) started...
    NetworkManager[1456]: (eth1): device state change: ip-config - failed (reason 'ip-config-unavailable') [70 120 5]
    NetworkManager[1456]: Unmanaged 'ia_addr' insert (-1)
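    As a side note for anyone debugging the same symptom: the "local IP address / remote IP address" pair in that log comes from /etc/pptpd.conf, so it is worth double-checking that the two directives are not swapped. A minimal excerpt, with addresses assumed purely for illustration (the linked tutorial uses its own values):

    # /etc/pptpd.conf (excerpt - addresses are examples only)
    localip 192.168.0.1          # address assigned to the server's end of ppp0
    remoteip 192.168.0.100-200   # pool of addresses handed out to connecting clients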

    Read the article

  • SQL Saturday 194 - Exeter

    - by Dave Ballantyne
    Many kudos go to Jonathan and Annette Allen and the others on the team for confirming SQL Saturday 194 in Exeter on the 8th and 9th of March. The event home page is here: http://www.sqlsaturday.com/194/eventhome.aspx and I'm delighted that Dave Morrison and I will be presenting a full-day pre-con on the 8th on favourite subjects “TSQL and Internals”. Here is the full abstract:
    TSQL and internals - When faced with performance issues there are many lines of attack. Tuning the engine itself can get you so far; however, for maximum effect you need to understand how the engine works and how it translates SQL statements into performable actions. This is not a simple task: dealing with a multi-table join is a massive undertaking, and the number of permutations can be immense. Backed by this knowledge, we can create better-performing TSQL, understand the impact it has upon the engine, and recognize the pitfalls and gotchas that exist in SQL Server. Ultimately, there is no ‘best way’ to perform a single task, only many variations of ‘it depends’, but now we can pick the most appropriate option for the required data load. Over the years, many myths and misconceptions have grown around the product; some have a basis in older versions and some are just wrong. Continuing to build on the knowledge given so far, these issues will be explored, broken down, and proved or disproved. Finally we will look to the future and explore SQL Server 2012, the new functionality it brings, and some of the common uses it will be able to address.
    After completion of this day's pre-con, attendees will have a more complete knowledge of execution plans and how they relate to the physical and logical actions that SQL Server will be executing on their behalf. The attendees will also have a more rounded and fuller knowledge of TSQL and the implications of incorrectly defining a query.
    Dave is a fountain of knowledge on execution plans and optimizer internals and, though I may flatter myself, I’m no shrinking violet when it comes to TSQL and such matters. I hope that if you can't join us, there are other pre-cons available from other experts in their fields that may ‘float your boat’ too. The pre-con page is http://sqlsouthwest.co.uk/SQLSaturday_precon.htm
    Also, excitingly, this pre-con day is sponsored by Fusion-IO, which is a great boon for the day. If you want more of this, I am offering a 2-day TSQL course starting on the 19th of March. More details are available here.

    Read the article

  • Link instead of Attaching

    - by Daniel Moth
    With email storage not being an issue in many companies (I think I currently have 25GB of storage on my email account; I don’t even think about storage), people are encouraged into bad behaviors such as liberally attaching Office documents to emails instead of sharing a link to the document in SharePoint or SkyDrive or some file share, etc. Attaching a file admittedly has its usage scenarios too, but it should not be the default. I thought I'd list the reasons why sharing a link can be better than attaching files directly. In no particular order:
    • Better review. It allows multiple recipients to review the file, and their comments are aggregated into a single document. The alternative is everyone having to detach the document, add their comments, then send it back to you, and then you have to collate. With the alternative, you also potentially miss out on recipients reading comments from other recipients.
    • Always up to date. The attachment becomes a fork instead of an always up-to-date document. For example, you send the email on Thursday and I only open it on Tuesday: between those days you could have made updates that I am now missing because you decided to share an attachment instead of a link.
    • Better bookmarking. When I need to find that document you shared, you are forcing me to search through my email (I may not even be running Outlook), instead of opening the link which I have bookmarked in my browser, or in my collection of links in OneNote, or from the recent/pinned links of the Office app on my task bar, etc.
    • Can control access. If someone accidentally or naively forwards your link to someone outside your group/org whom you’d prefer not to have access to it, the location of the document can be protected with specific access control.
    • Can add more recipients. If someone adds people to the email thread in Outlook, your attachment doesn't get re-attached - instead, the person added is left without the attachment unless someone remembers to re-attach it. If it was a link, they are immediately caught up without further action.
    • Enable discovery. If you put it on a share, I may be able to discover other cool stuff that lives alongside that document.
    • Save on storage. This doesn't apply to me given my opening statement, but if your company does have such limitations, attaching files eats up storage on all recipients' accounts and will also get "lost" when those people archive email (and be lost completely at some point if they follow the company retention policy).
    Like I said, attachments do have their place, but they should be an explicit choice for explicit reasons rather than the default.
    Comments about this post by Daniel Moth are welcome at the original blog.

    Read the article

  • Brain Teaser: How Did I Do This (Part 1: The Solution)

    - by Geertjan
    In Part 1: The Challenge, published this time last week, I introduced a "brain teaser". The brain teaser asks you to figure out how to allow images and other files to be meaningfully dropped onto a NetBeans Platform application, i.e., on drop, something useful should happen with the dropped file: if the file is an image, the image should open in the IDE; if the file is a PDF document, the PDF viewer should open externally; if the file is a text file, it should open as text in the IDE, etc.
    Solution. And here is the solution: http://bits.netbeans.org/dev/javadoc/org-openide-windows/org/openide/windows/ExternalDropHandler.html
    When an implementation of the "ExternalDropHandler" class is available in the global Lookup, and an object is being dragged over some part of the main window, the window system may call the methods of this class to decide whether it can accept or reject the drag operation. And when the object is actually dropped, this class will be asked to handle the drop.
    OK, so go ahead and implement the above class and put it into the Lookup. Or... guess what? The NetBeans Platform has a default implementation of the above class, appropriately named "DefaultExternalDropHandler". Not only is this useful for learning how to implement the ExternalDropHandler class (i.e., by reading the source here): you can simply include the module that contains this class in your own NetBeans Platform application, and then your application will be able to receive external drag/drop events and do something meaningful with them thanks to the DefaultExternalDropHandler. Do this:
    • Open your NetBeans Platform application in NetBeans IDE.
    • Right-click the application in the Projects window and choose Properties.
    • In the Libraries tab, expand the "ide" cluster, and select "User Utilities". (That's where "DefaultExternalDropHandler.java" is found and registered in the Lookup.) Now click the "Resolve" button, if it appears, because some additional related modules now need to be included, if they haven't been included yet.
    • Again in the "ide" cluster in the Libraries tab, select "Image". That's the Image Editor.
    • Click OK.
    • Run the application. Drag an image or some other type of file into your application, from outside the application, and you'll see the application tries to handle the drop. If the file being dragged is an image, it will open in the Image Editor, which you included in the previous step of these instructions.
    Hurray, you're done. Without any programming at all, you've added a cool new feature to your application.
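    If you do want to implement ExternalDropHandler yourself rather than reuse DefaultExternalDropHandler, a minimal sketch registered in the global Lookup might look like the following. The flavor check and the empty drop handling are placeholders; see the Javadoc linked above for the real contract.

    import java.awt.datatransfer.DataFlavor;
    import java.awt.dnd.DropTargetDragEvent;
    import java.awt.dnd.DropTargetDropEvent;
    import org.openide.util.lookup.ServiceProvider;
    import org.openide.windows.ExternalDropHandler;

    @ServiceProvider(service = ExternalDropHandler.class, position = 100)
    public class MyDropHandler extends ExternalDropHandler {

        @Override
        public boolean canDrop(DropTargetDragEvent e) {
            return e.isDataFlavorSupported(DataFlavor.javaFileListFlavor);
        }

        @Override
        public boolean canDrop(DropTargetDropEvent e) {
            return e.isDataFlavorSupported(DataFlavor.javaFileListFlavor);
        }

        @Override
        public boolean handleDrop(DropTargetDropEvent e) {
            // placeholder: extract the dropped file list and open each file;
            // DefaultExternalDropHandler's source shows the full recipe
            return false; // return true once the drop has actually been handled
        }
    }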

    Read the article

  • Oredev 2012: Summary and source code

    - by Laurent Bugnion
    This week, I had the pleasure of being invited to talk at Oredev, a really cool conference taking place in Malmo, Sweden. The whole event is awesome, including a very special dinner on Monday with sauna and swimming in a 6-degree cold Baltic sea, and a reception with dinner at the town hall, including the mayor himself. Considering Malmo is a town of 300'000 inhabitants, it is a pretty nice occasion, and the historical building itself is really worth seeing. For those interested, I placed my pictures on my Flickr account.
    I had a workshop on Tuesday morning about Windows 8 development with XAML/C#, and then a session on Wednesday about MVVM in Windows Phone 8 and Windows 8, of course using MVVM Light. I was very nervous because I had reworked some of my demos as recently as that morning, in the wake of the Build conference last week and the release of both the Windows Phone SDK and MVVM Light V4.1. Everything went well, however, and judging by the people I talked to after the talk, and by Twitter, it went pretty well.
    Before my talk on Tuesday, I had the pleasure of seeing a talk by Iris Classon (@irisclasson) on the challenges of being a "n00b" and a woman in software development. I especially appreciated her research and conclusions on the lack of women in our industry, a topic that is dear to my heart (because I want the best possible future for my two daughters, and also because I really enjoy working with women on projects and getting a different insight on the art of software development).
    I really want to thank the excellent organization committee for their hard work and their fantastic welcome to Malmo. In particular, Emily Holweck did a wonderful job and was super helpful throughout the preparation and the conference itself. I made a few pictures during my stay, all with the new Nokia Lumia 920, and hope you will enjoy them too.
    The source code and the slides…
    The source code is available for download from SkyDrive. You will find the following:
    • Windows 8 workshop slides.
    • MVVM Applied slides.
    • Source code package with Win8Demo: the demo I built during the 4-hour workshop, with some light MVVM, web services (JSON), GridView, design-time data (Blend / Visual Studio designer), Bing Maps integration, location sensor, and Search pane integration.
    • SemanticZoomSample: a sample I put together to demonstrate the SemanticZoom control, with two GridViews and of course full design-time data for Blend work. Due to time constraints, I was not able to show this demo during the workshop, but I publish it anyway, hoping it will be useful to someone.
    • PictureUploader: the demo I built during my 50-minute session about MVVM Applied in Windows Phone 8 and Windows 8. Code sharing, design-time data, and MVVM Light are used in Windows Phone 8 and Windows 8 apps.
    And in video…
    You can also see the video of my MVVM talk thanks to the good services of the Oredev team! MVVM Applied in Windows Phone and Windows 8 from Øredev Conference on Vimeo.
    Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • ArchBeat Link-o-Rama for 10-24-2012

    - by Bob Rhubart
    Play Oracle Vanquisher
    Here's a little respite from whatever it is you normally spend your time on. Oracle Vanquisher is an online diversion that makes a game of data center optimization. According to the description: "Armed with a cool Oracle vacuum pack suit and a strategic IT roadmap, you will thwart threats and optimize your data center to increase your company’s stock price and boost your company's position." Mainly you avoid electric shock and killer birds. The current high score belongs to someone identified as "TEN." My score? Never mind.
    Book: DevOps for Developers | The Java Source
    The subject of DevOps has come up in a couple of recent OTN ArchBeat Podcasts, so it's somewhat serendipitous that Tori Wieldt's recent blog post offers an overview of Java Champion Michael Hutterman's new book, DevOps for Developers, now available from Apress.
    Bring Your Own Device (BYOD): Context is everything… | The ORACLE-BASE Blog
    BYOD is a factor in the evolution of IT, but in what context? "The real IT work in companies is still being done on PCs," says Oracle ACE Director Tim Hall. "Yes, you can use a cloud service on your phone, but look around the office and you will see those cloud services are actually being used by people on PCs."
    Oracle in the Cloud: Oracle EBusiness Suite sizing | Tom Laszewski
    Cloud expert Tom Laszewski shares several technical resources that will be helpful for sizing of Oracle EBusiness Suite.
    Setting Up, Configuring, and Using an Oracle WebLogic Server Cluster
    Author and expert Yuli Vasiliev shows you how to take advantage of multiple Oracle WebLogic Server instances grouped into a cluster to maximize scalability and availability.
    Webcast: Reduce Costs with Oracle's Database Storage Management
    Watch this! Join Oracle experts Kevin Jernigan and Margaret Hamburger for an interactive webcast in which you'll learn how Oracle's Database Storage Management can reduce storage costs and management complexity while improving query performance to meet service-level agreements and compliance requirements. Event Date: Tuesday, November 6, 2012. Event Time: 10 a.m. PT/1 p.m. ET.
    Thought for the Day
    "Most software today is very much like an Egyptian pyramid with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves." — Alan Kay
    Source: softwarequotes.com

    Read the article

  • Exalytics OBI11g Partner Training 3-day hands-on Workshops

    - by Mike.Hallett(at)Oracle-BI&EPM
    These FREE to OPN Partners hands-on workshops highlight both the hardware and software components that are engineered to work together to deliver Oracle Exalytics - an optimized version of the industry-leading Oracle TimesTen In-Memory Database with analytic extensions, a highly scalable Oracle server designed specifically for in-memory business intelligence, and Oracle's proven Business Intelligence Foundation (OBI 11g v11.1.1.6 and Essbase) with enhanced visualization capabilities and performance optimizations.
    Priority will be given to Partner individuals who have passed or are scheduled to take the Oracle Business Intelligence Foundation Suite 11g Essentials (1Z1-591) exam, and to Partners who have purchased an Exalytics for their own data centres to demonstrate it to their clients.
    Topics covered will include:
    • Exalytics Architectural Overview
    • Upgrade and Lifecycle Management
    • TimesTen for Exalytics
    • Summary Advisor Utility
    • Essbase and EPM System on Exalytics
    • Dashboard and Analysis Interactions
    • OBIEE 11.1.1.6 Features and Advanced Topics
    After taking this course, you will be well prepared to architect, build, demo, and implement an end-to-end Exalytics solution. You will also be able to extend your current analytical and enterprise performance management application implementations with numerous Oracle technologies specifically enhanced to take advantage of the compute capacity and in-memory capabilities of Oracle Exalytics.
    Prerequisites:
    • Experience and understanding of OBIEE 11g is required.
    • Previous attendance of the Oracle Business Intelligence Foundation Suite Workshop or BIEE 11g Introduction Workshop is highly recommended, and priority will be given to Partner individuals who have passed or are scheduled to take the Oracle Business Intelligence Foundation Suite 11g Essentials (1Z1-591) exam.
    • Good understanding of data warehousing and data modelling for reporting and analysis purposes.
    • Strong experience with database technologies preferred.
    Attendees must provide their own laptops, which must meet the following minimum hardware/software requirements:
    Hardware:
    • Minimum 8GB RAM
    • 60GB free disk space (includes staging)
    • USB 2.0 port (at least one available)
    • It is strongly recommended that you bring a mouse. You will be working in a development environment and using the mouse heavily.
    Software:
    • One of the following operating systems: 64-bit Windows host/laptop OS, or a 64-bit host/laptop OS with a Windows VM (XP, Server, or Win 7, BIC2g, etc.)
    • Internet Explorer 7.x/8.x or Firefox 3.5.x
    • WinRAR or 7-Zip utility to unzip workshop files: downloadable from http://www.win-rar.com/download.html and http://www.7zip.com/
    • Oracle VirtualBox 4.0.2 or higher, downloadable from http://www.virtualbox.org/wiki/Downloads
    • CPU virtualization mode needs to be enabled. We will provide guidance on the day of the workshop.
    Attendees will be given a VirtualBox image containing a pre-installed Oracle Exalytics environment.
    Register here for the 3-day workshops:
    • 11-Dec-12 Birmingham, UK
    • 29-Jan-13 Utrecht, NL
    • 12-Feb-13 Frankfurt, Germany
    • 12-Mar-13 Moscow, Russia

    Read the article

  • In Technology, Ignorance is NOT Bliss

    - by Tanu Sood
    Author: Debra Lilley, ACE Director, UK
    Proof that I’m not technical - I’ve just finished a Latin America tour with OTN, and a funny thing happened that I want to share with you, because it is quite a good analogy for how many of us use technology today, and you know how I love analogies.
    In Costa Rica we had a really long journey up through the mountains to where our conference was to be. The road was windy and narrow, and once it got dark there was no scenery to see; boredom set in. At one stage I looked at my watch to see the time, but in the dark I couldn’t make it out, so I thought I would be clever and use the torch in my smartphone! Even though as soon as I switched on the phone it showed the time, I ignored it and used the torch to read my watch.
    That’s us when we pay maintenance on software, ask for enhancements, and either choose not to upgrade or, as I have seen so many times, upgrade but don’t use the new features. I know there are always other factors, not least the upgrade costs themselves, but in the later releases of all the Oracle family of applications, Oracle has done a lot to make their interoperability with Oracle Fusion Middleware more successful, in many cases for the first time.
    My heritage is Oracle E-Business Suite (EBS), and the availability of Oracle WebLogic for EBS is fantastic for an Oracle-powered organisation that can move away from supporting multiple flavours of application server. The same release made available the no-downtime patching that Oracle Database 11g introduced with Edition-Based Redefinition.
    I am not saying you must use these features, but you must be aware of what each release of your application brings and make a business-based decision as to whether it is for you or not. I like to have a simple spreadsheet of features with no-value, nice-to-have, and must-have ratings, but make the spreadsheet cumulative so that when you do upgrade you have listed all the features you previously didn’t take up. That way you can avoid the ‘using your phone to read your watch’ scenario.
    About the Author: Debra Lilley, Fusion Champion, UKOUG Board Member, Fusion User Experience Advocate and ACE Director. Lilley has 18 years experience with Oracle Applications, with E-Business Suite since 9.4.1, moving to Business Intelligence Team Lead and Oracle Alliance Director. She has spoken at over 100 conferences worldwide and posts at debrasoraclethoughts.

    Read the article

  • Legal concerns with orchestrating a music submission contest

    - by Amplify91
    My team and I are getting pretty far along in the development of our latest game and have been thinking about audio. We decided to host an audio submission contest where we will offer a little cash and some equity stake in the game as prizes. We are also giving away copies of the game to participants. We hope not only to find audio for our game, but to meet some cool sound artists and promote the game a bit through the process.
    First of all, is this even a good idea? What are some potential dangers in doing this? Will it even be well received among artists?
    Secondly, I wrote up some Terms and Conditions in my best legal-speak to try to protect us and clarify how the contest will be run. Are these sufficient to make sure everyone involved is treated fairly and is legally protected? They are as follows:
    • All submissions (The Submission) must be licensed under a Creative Commons Attribution 3.0 Unported License (CC-BY-3.0).
    • By applying a CC-BY-3.0 license, you (The Submitter) expressly give Detour Games (and all members wherein) permission to copy, distribute, transmit, modify, adapt, and make commercial use of The Submission.
    • The Submitter must own all rights to The Submission and be within their rights to license it as specified and submit it. The Submitter claims responsibility for the legality of The Submission. If The Submission is found to infringe on the rights of a person or entity other than those of The Submitter, Detour Games will not be held liable, as all responsibility and liability for the legality of The Submission is that of The Submitter's.
    • No more than two free copies of The Game per submitter.
    • All flat cash prizes will only be disbursed pending the success of our first $5,000 Kickstarter campaign. These prizes will be disbursed 30 days after Detour Games receives the Kickstarter funds.
    • All equity prizes (percentage of profits) are defined as the given percent of total profits after costs for a period of one year (12 months) after the release of RAW. These prizes will be disbursed semi-annually.
    • All prize money will be disbursed through either an electronic fund transfer through a service such as PayPal or by a mailed money order. It is The Submitter's responsibility to cooperate with Detour Games in the disbursement of the funds.
    • Detour Games reserves the right to change these Terms and Conditions at any time without notice.
    • By participating in the contest, The Submitter agrees to and accepts all terms and conditions listed.
    What else could I do (legally) to protect everyone involved?

    Read the article

  • So Much Happening at Devoxx

    - by Tori Wieldt
    Devoxx, the premier Java conference in Europe, has been sold out for a while. The organizers (thanks Stephan and crew!) cap the attendance to make sure all attendees have a great experience, and that speaks volumes about their priorities. The speakers, hackathons, labs, and networking are all first class. The Oracle Technology Network will be there, and if you were smart/lucky enough to get a ticket, come find us and join the fun:
    IoT Hack Fest
    Build fun and creative Internet of Things (IoT) applications with Java Embedded, Raspberry Pi and Leap Motion on the University Days (Monday and Tuesday). Learn from top experts Yara & Vinicius Senger and Geert Bevin at two Raspberry Pi & Leap Motion hands-on labs and hacking sessions. Bring your computer; training and equipment will be provided. Devoxx will also host an Internet of Things shop on the exhibition floor where attendees can purchase Arduino, Raspberry Pi and robot starter kits. Bring your IoT wish list!
    Video Interviews
    Yolande Poirier and I will be interviewing members of the Java Community in the back of the Expo hall on Wednesday and Thursday. Videos are posted on Parleys and YouTube/Java. We have a few slots left, so contact me (you can DM @Java) if you want to share your insights or a cool new tip or trick with the rest of the developer community. (No commercials, no fluff. Keep it techie and keep it real.)
    Oracle Keynote
    Wednesday morning, Mark Reinhold, Chief Java Platform Architect, and Brian Goetz, Java Language Architect, will provide an update on Java 8 and beyond.
    Oracle Booth
    Drop by the Oracle booth to see old and new friends. We'll have Java in Action demos and the experts to explain them and answer your questions. We are raffling off Raspberry Pis each day, so be sure to get your badge scanned. We'll have beer in the booth each evening. Look for @Java in her lab coat.
    See you at Devoxx!

    Read the article

  • Best system for creating a 2d racing track

    - by tesselode
    I am working on a 2D racing game and I'm trying to figure out the best way to define the track. At the very least, I need to be able to create a closed circuit with any number of turns at any angle, and I need vehicles to collide with the edges of the track. I also want the following things to be true if possible (but they are optional):
    • The code is simple and free of funky workarounds and extras.
    • I can define all of the parts of the track (such as turns) relative to the previous parts.
    • I can predict the exact position of the road at a certain point (that way I can easily and cleanly make closed circuits).
    Here are my options:
    • Use a set of points. This is my current system. I have a set of turns and width changes that the track is supposed to make over time. I have a point which I transform according to these instructions, and I place a point every 5 steps or so, depending on how precise I want the track to be. These points make up the track. The main problem with this is the discrepancy between the collisions and the way the track is drawn. I won't get into too much detail, but the picture below shows what is happening (although it is exaggerated a bit). The blue lines are what is drawn, the red lines are what the vehicle collides with. I could work around this, but I'd rather avoid funky workaround code.
    • Bézier curves. These seem cool, but my first impression of them is that they'll be a little daunting to learn and are probably too complicated for my needs.
    • Some other kind of curve? I have heard of some other kinds of curves; maybe those are more applicable (see the sketch after this list).
    • Use Box2D or another physics engine. Instead of defining the center of the track, I could use a physics engine to define shapes that make up the road. The downside to this, however, is that I have to put in a little more work to place the checkpoints.
    • Something completely different.
    Basically, what is the simplest system for generating a race track that would allow me to create closed circuits cleanly, handle collisions, and not have a ton of weird code?
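    As one concrete candidate for the "some other kind of curve" option, here is a small Java sketch of uniform Catmull-Rom sampling. Unlike general Béziers, a Catmull-Rom spline passes through every control point, so a closed circuit is just a wrapped point list; all names here are made up for the example.

    public final class TrackCurve {
        /** Uniform Catmull-Rom interpolation between p1 and p2, t in [0,1]. */
        static double catmullRom(double p0, double p1, double p2, double p3, double t) {
            return 0.5 * ((2 * p1)
                    + (-p0 + p2) * t
                    + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t
                    + (-p0 + 3 * p1 - 3 * p2 + p3) * t * t * t);
        }

        /** Sample a closed track defined by control points (x[i], y[i]). */
        static double[][] sampleClosedTrack(double[] x, double[] y, int samplesPerSegment) {
            int n = x.length;
            double[][] out = new double[n * samplesPerSegment][2];
            int k = 0;
            for (int i = 0; i < n; i++) {
                // wrapping the indices is what closes the circuit
                int i0 = (i - 1 + n) % n, i1 = i, i2 = (i + 1) % n, i3 = (i + 2) % n;
                for (int s = 0; s < samplesPerSegment; s++) {
                    double t = s / (double) samplesPerSegment;
                    out[k][0] = catmullRom(x[i0], x[i1], x[i2], x[i3], t);
                    out[k][1] = catmullRom(y[i0], y[i1], y[i2], y[i3], t);
                    k++;
                }
            }
            return out;
        }
    }

    Because both the drawn outline and the collision edges can be derived from the same sampled polyline (offset left/right by half the track width), the blue/red mismatch described above disappears.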

    Read the article

  • Play Framework Plugin for NetBeans IDE (Part 2)

    - by Geertjan
    After I published part 1 of this series, the first external contribution (i.e., not by me) to the NetBeans plugin for Play Framework 2 was committed today. Yann D'Isanto added support for creating new Play projects. That completely solves a problem I was working on, in a different way altogether. I was working on creating a new wizard that would call "play new" on the command line and pass it the entered name and application type (1 for Java and 2 for Scala). However, Yann's solution is better, at least in the sense that it works, as opposed to mine, which didn't, because of problems I continually had with the command line, since one needs to press Enter multiple times on the Play command line when creating new projects, which I wasn't able to simulate in my new wizard. Yann's approach is simply to follow the approach taken in the Project Type Module Tutorial, which explains how to register a project sample in the IDE.
    I was inspired by Yann's contribution, especially when he mentioned that one needs to build Play projects on the command line. So, I added a new menu item on the right-click of a project for building Play projects, which simply passes "play compile" to the command line for the current project.
    Via the IDE's main menu bar, you can also Build and Run the application, though the code for the Clean function still needs to be added, which would be a cool thing for anyone out there to add, by using all the existing code and then passing "play clean compile" to the command line.
    Something else that Yann added is an Options window extension, thanks to the Options Window Module Tutorial, for registering the Play installation, which is a step forward from my hardcoded solution. I changed things slightly so that, when Build or Run is selected without a Play installation being defined, the Options window opens, displaying the tab that Yann created, shown below. Notice that there's no Browse button, which would be a simple next step for anyone else to contribute. A small tip is to use the FileChooserBuilder from the NetBeans IDE APIs when working on the Browse button; a minimal sketch follows below.
    Looking forward to more contributions to the Play Framework 2 plugin for NetBeans IDE. Just leave a message here with your ideas, with your java.net name, and then I'll add you to the project on java.net, where I very much look forward to your contributions: http://java.net/projects/nbplay/sources/nbplay
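    For whoever picks up that Browse button, a minimal sketch of the FileChooserBuilder tip might look like this (the dialog title and folder key are made up for the example):

    import java.io.File;
    import org.openide.filesystems.FileChooserBuilder;

    public class PlayLocationPanel {
        /** Let the user pick the Play installation folder; returns null if cancelled. */
        static File choosePlayFolder() {
            return new FileChooserBuilder("play-installation") // key under which the last folder is remembered
                    .setTitle("Locate Play Framework Installation")
                    .setDirectoriesOnly(true)
                    .showOpenDialog();
        }
    }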

    Read the article

  • WebLogic Partner Community Newsletter October 2012

    - by JuergenKress
    Dear WebLogic partner community member,
    Oracle OpenWorld and JavaOne are just over, with lots of product updates and highlights. In this newsletter you will find the key information on many new products and launches. Make sure you download the presentations from our WebLogic Community Workspace (WebLogic Community membership required), to train yourself and for your next customer meeting. Thanks for all the tweets #WebLogicCommunity, the pictures on our facebook page and the nice blog posts from Guido & Lucas & Jan.
    JavaOne was a super success - JavaOne 2012: Strategy and Technical Keynote - Java 2.5 years after the acquisition - IDC report - make the future Java! If you want to become a Java expert, make sure you attend one of our WebLogic 12c Bootcamps or our first ExaLogic Hackers Night - November 19th, Nürnberg, Germany. All developers can use WebLogic free of charge!
    For developers, there is lots of ADF news on Oracle ADF Essentials & ADF training material now on the iPad by Grant Ronald & GlassFish Extension for Oracle JDeveloper & Installing, Configuring, and Testing WebLogic Server 12c Developer Zip Distribution in NetBeans.
    If you want to become a certified WebLogic company, WebLogic Server 12c Specialization is now available for you. You just need to go to the Knowledge Zone section, select the “Specialization” tab and click on “Apply Now”. Now available: WebLogic Server 12c Implementation Specialist Boot Camp LVT. Now in production: Oracle WebLogic Server 12c Implementation Specialist certification (1Z0-599).
    In our specialization benefit series we highlight this month the opportunity to promote your WebLogic services via Google ads. Torsten Winterberg, OFM ACE Director, published Mobile Web Applications – A guide for professional development. Please feel free to let us know if you publish a book or article! Hope to see you at the Middleware Day at the UK Oracle User Group Conference 2012 in Birmingham.
    Jürgen Kress, Oracle WebLogic Partner Adoption EMEA
    To read the newsletter please visit http://tinyurl.com/WebLogicnewsOctober2012 (OPN account required).
    To become a member of the WebLogic Partner Community please register at http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.
    Blog Twitter LinkedIn Mix Forum Wiki
    Technorati Tags: WebLogic Community newsletter,newsletter,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Play Framework Plugin for NetBeans IDE

    - by Geertjan
    The start of minimal support for the Play Framework in NetBeans IDE 7.3 Beta would constitute (1) recognizing Play projects, (2) an action to run a Play project, and (3) classpath support. Well, most of that I've created already, as can be seen, e.g., below you can see logical views in the Projects window for Play projects (i.e., I can open all the samples that come with the Play distribution). Right-clicking a Play project lets you run it and, if the embedded browser is selected in the Options window, you can see the result in the IDE. Make a change to your code and refresh the browser, which immediately shows you your changes (a rough sketch of shelling out to the Play script appears after this entry).
    What needs to be done, among other things:
    • A wizard for creating new Play projects, i.e., it would use the Play command line to create the application and then open it in the IDE.
    • Integration of everything available on the Play command line.
    • Maybe the logical view, i.e., what is shown in the Projects window, should be changed. Right now, only the folders "app" and "test" are shown there, with everything else accessible in the Files window, as can be seen in the screenshot above.
    • More work on the classpath, i.e., I've hardcoded a few things just to get things to work correctly.
    • An Options window extension to register the Play executable, instead of the current hardcoded solution.
    • Scala integrations, i.e., investigate if/how the NetBeans Scala plugin is helpful and, if not, create different/additional solutions. E.g., the HTML templates are partly in Scala, so we need to embed Scala support into HTML.
    • Hyperlinking in the "routes" file, as well as special support for the "application.conf" file.
    Anyone interested, especially if you're a Play fan (a "playboy"?), in joining me in working on this NetBeans plugin? I'll be uploading the sources to a java.net repository soon. It will be here, once it has been made publicly accessible: http://java.net/projects/nbplay/sources/nbplay
    A kind of cool detail is that the NetBeans plugin is based on Maven, which means that you could use any Maven-supporting IDE to work on this plugin.
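    For anyone picking up the run-action item, the Run command presumably just shells out to the installed Play script inside the project folder. A minimal, hypothetical sketch (the executable path would come from the Options window extension mentioned above):

    import java.io.File;
    import java.io.IOException;

    public class PlayRunner {
        /** Launch "play run" (or any other play command) inside a project directory. */
        public static Process runPlayCommand(File projectDir, String playExecutable, String command) throws IOException {
            ProcessBuilder pb = new ProcessBuilder(playExecutable, command);
            pb.directory(projectDir);     // run in the Play project's root folder
            pb.redirectErrorStream(true); // merge stderr into stdout for an output window
            return pb.start();
        }
    }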

    Read the article

  • Rebuilding a Mac Mini (early 2009)

    - by Kelly Jones
    This weekend I decided to rebuild the family’s Mac Mini.  It’s the early 2009 model and I hadn’t done it since we got it in March of 2009.  Even worse, I had done the import data step (or whatever Apple calls it) which brought over all of the data files and apps from our previous Mac.  AND that install goes back to before 2005, as far as I can remember.  SO, to say that “cruft” had built up in the operating system, is probably a bit of an understatement. The rebuild went pretty smoothly, especially since I had a couple of spare hard drives.  I hooked up a spare USB drive and formatted it for use with the Mac.  I then used Carbon Copy to clone the internal hard drive onto the USB drive.  (Carbon Copy is a great little app that I used several years ago and I was happy to see it was not only still around, but updated as well.) Once I had my backup, I shut down the Mac and replaced the internal hard drive.  I had purchased the hard drive last fall to use with my work laptop, but I got a new work laptop (with awesome dual SSDs) so I wasn’t using it anymore.  The replacement drive (Seagate Momentus 7200.4 ST9500420AS 500GB 7200 RPM 2.5" SATA 3.0Gb/s Internal Notebook Hard Drive) has more than double the original’s capacity and is also faster.  I’ll have to keep an eye on the temperature, since that 7200 drive will run hotter. Opening the Mac Mini is not for the easily intimidated!  That cool little case is quite the pain to open.  Luckily, OWC put a video together here.  After replacing the drive, I then installed a clean copy of OS 10.5 using the DVDs that came with the Mac.  After the OS, it was time to reinstall the apps.  I downloaded some of the freeware, just to make sure I had the latest versions.  For the rest, I just copied from the backup cloned drive to the new drive.  (I love the way most Mac apps are written – with almost everything contained within a “package” that I can just copy from one drive to another.  MUCH better than the Windows way of using shared DLLs and the registry to store critical pieces that the app needs in order to run!) The whole process took longer than I would have preferred, but it was long overdue.  It definitely “feels” faster, especially boot time and application launches.

    Read the article

  • Boot error aftter clean Ubuntu 13.04 install: [Reboot and select proper boot device]

    - by IcarusNM
    I am having the same problem as this guy, where a fresh Ubuntu install completes beautifully but will not boot. I get the ASUS "Reboot and select proper boot device" error, first with Xubuntu 13.10, and after finally giving up there and trying Xubuntu 13.04, I am back to regular Ubuntu 13.04. ASUS Z77 motherboard, Intel chipset. Standard internal 500GB SATA HD. 64-bit. All-new hardware less than 3 months old. It was running Ubuntu 12.04 LTS great until I tried this upgrade.
    I have re-installed from scratch every which way: with LVM, without LVM; with the default partitions, with my own partitions; with ext3 or ext4; alongside; replace; upgrade. No difference. On the last two tries, I have booted afterward from the same USB stick, downloaded and run boot-repair, and now I guess I am off to the boot-repair support email with my URLs from that. It did all kinds of cool stuff but ultimately made no difference. I never got anything like this with Ubuntu 12.04. I've now probably re-installed Ubuntu 13.04 ten times, a slightly different way each time. I finally found how to skip the language packs, so at least that sped things up! :)
    This starts from the ubuntu-13.04-desktop-amd64.iso and UNetbootin, as suggested in the official instructions for USB thumb drive creation from OS X. That part all works fine (booting the USB on the PC and trying Ubuntu and/or installing from there onto the PC HD). I have no CD drive on this PC, but I suppose I could get one. I would rather find some Linux install that works from USB like I've always done.
    After running boot-repair twice, in the ASUS BIOS I now see three different UEFI boot options in the priority list, and they are all labeled exactly the same:
    • ubuntu (P6: WDC WD5000AAKX-00U6AA0)
    Then there's a non-UEFI option:
    • P6: WDC WD5000AAKX-00U6AA0 (476940MB)
    And a fifth option appeared after the first boot-repair:
    • Windows Boot Manager (P6: WDC WD5000AAKX-00U6AA0)
    I have tried all 5 of these, and I get exactly the same error. I have never had Windows installed on this HD. ASUS is calling it Windows Boot Manager, but I presume that's a mistaken label for whatever boot-repair did. I can boot on USB and run GParted and it looks great; the partitions all look normal. I found another case of this online with no solution posted, and I can't find much else about it. Does it need a Master Boot Record wipe/redo? I'm not sure how.

    Read the article

  • Integrating with a payment provider; Proper and robust OOP approach

    - by ExternalUse
    History
    We are currently using a so-called redirect model for our online payments (where you send the payer to a payment gateway, where he inputs his payment details - the gateway will then return him to a success/failure callback page). That's easy and straightforward, but unfortunately quite inconvenient and at times confusing for our customers (leaving the site, changing their credit card details with an additional login on another site, etc.).
    Intention & Problem description
    We are now intending to switch to an integrated approach using an exchange of XML requests and responses. My problem is how to cater for all (or rather most) of the things that may happen during processing - bearing in mind that normally simplicity is robust whereas complexity is fragile.
    Examples
    User abort: The user inputs credit card details and hits submit. An XML message is sent to the provider's gateway and we are waiting for the response. The user hits "stop" in his browser or closes the window. ignore_user_abort() in PHP may be an option - but is that reliable? Might it be better to redirect the user to a "please wait" page, which in turn opens an AJAX or other request to the actual processor that does not rely on the connection?
    Database goes away: This sounds over-complicated, but with e.g. a webserver in the States and a DB in the UK, it has happened and will happen again: the user clicks together his order, the payment request has been sent to the provider, but the response cannot be stored in the database. What approach could I use, using PHP, to sort of start an SQL-like "transaction" that only at the very end gets committed or rolled back, depending on the individual steps (see the sketch below)? Should neither commit nor rollback happen, I could sort of "lock" the user to prevent him from paying again or to properly account for payments - but how? And what else do I need to consider technically?
    None of the integration examples of e.g. Worldpay, Realex or SagePay offer any insight, and neither Google nor my search terms were good enough to find somebody else's thoughts on this.
    Thank you very much for any insight on how you would approach this!
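    One common pattern for the "database goes away" case - sketched here in Java/JDBC purely to illustrate the shape, which ports directly to PHP - is to record the payment attempt as PENDING before contacting the gateway and settle the record afterwards; anything left PENDING after a crash or lost DB link is reconciled against the provider's transaction report instead of being retried blindly. All table, column and type names below are invented:

    import java.sql.*;

    public class PaymentFlow {
        /** Record intent first, then talk to the gateway, then settle the record. */
        static void pay(Connection db, String orderId, PaymentGateway gateway) throws SQLException {
            db.setAutoCommit(false);
            try (PreparedStatement ps = db.prepareStatement(
                    "INSERT INTO payments(order_id, status) VALUES (?, 'PENDING')")) {
                ps.setString(1, orderId);
                ps.executeUpdate();
                db.commit(); // the intent is durable before any money moves
            }
            boolean ok = gateway.charge(orderId); // the XML request/response happens here
            try (PreparedStatement ps = db.prepareStatement(
                    "UPDATE payments SET status = ? WHERE order_id = ?")) {
                ps.setString(1, ok ? "SETTLED" : "FAILED");
                ps.setString(2, orderId);
                ps.executeUpdate();
                db.commit();
            }
            // rows stuck in PENDING are later reconciled against the provider's
            // records - the "lock" you describe - and never charged a second time
        }

        interface PaymentGateway { boolean charge(String orderId); } // hypothetical
    }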

    Read the article

  • Should I use my own public API on my site (via JS)?

    - by newboyhun
    First of all, this question is quite different from other 'public API' questions like this one: Should a website use its own public API?. Second, sorry for my English.
    You can find the question summarized at the bottom of this question.
    What I want to achieve is a big website with a public API, so that people who like programming (like me) and like my website can replicate my website's data with a much better approach (of course with some restrictions). Almost everything could be used via the public API. Because of this, I was thinking about making the whole website AJAX-driven. There would be parts of the API which would be limited only to my website (domain), like login and registration. There would be only an INTERFACE on the client side, which would use the public and private API to make this interface work. The website would be ONLY CLIENT SIDE; well, I mean, the website would only use AJAX to use the API.
    How do I imagine this? The website would be like a mobile application: the application only sends a request to a webserver, which returns JSON; the application parses it and uses it to advance in the application (e.g. login).
    My thoughts:
    Pros:
    • The whole website is built with JavaScript, which means I don't need to transfer the HTML to the client, saving bandwidth (I hope so).
    • Anyone can use the data of my website to make their own cool things (is this a con or a pro?).
    • The public API is always in use, so I can see if there are any errors.
    Cons:
    • Without JavaScript the website is unusable.
    • The bad guys can easily load the server by requesting too much data (like 10,000 requests per second), but this can be countered by limiting requests with some PHP code and logging (see the sketch below).
    • Probably much more work.
    So the question in a few words is: Should I build my website around my own API? Is it good to work only on the client side? Is this good for a big website? (e.g. Facebook; yeah, Facebook is a different story, but could it run with an 'architecture' like this?)
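    On the request-flooding con, a simple fixed-window counter per client is often enough as a first line of defense. Here is a small hypothetical sketch (in Java rather than PHP, but the idea ports directly; the limit is arbitrary):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicInteger;

    public class RateLimiter {
        private static final int MAX_PER_MINUTE = 60; // arbitrary limit for the example
        private final Map<String, Window> windows = new ConcurrentHashMap<>();

        /** Returns true if this client (API key or IP) may make another call. */
        public boolean allow(String clientKey) {
            long minute = System.currentTimeMillis() / 60_000;
            Window w = windows.compute(clientKey, (k, old) ->
                    (old == null || old.minute != minute) ? new Window(minute) : old);
            return w.count.incrementAndGet() <= MAX_PER_MINUTE; // deny and log when exceeded
        }

        private static final class Window {
            final long minute;                          // which minute this window covers
            final AtomicInteger count = new AtomicInteger();
            Window(long minute) { this.minute = minute; }
        }
    }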

    Read the article

  • easiest and best way to make a server queue java

    - by houlahan
    I have a server at the moment which makes a new thread for every user connected, but after about 6 people are on the server for more than 15 minutes it tends to flop and give me a "java heap out of memory" error. I have 1 thread that checks with a MySQL database every 30 seconds to see if any of the users currently logged on have any new messages. What would be the easiest way to implement a server queue? This is the main method for my server (a pooled rewrite is sketched after the listing):

    public class Server {

        public static int MaxUsers = 1000;
        //public static PrintStream[] sessions = new PrintStream[MaxUsers];
        public static ObjectOutputStream[] sessions = new ObjectOutputStream[MaxUsers];
        public static ObjectInputStream[] ois = new ObjectInputStream[MaxUsers];
        private static int port = 6283;
        public static Connection conn;
        static Toolkit toolkit;
        static Timer timer;

        public static void main(String[] args) {
            try {
                conn = (Connection) Mysql.getConnection();
            } catch (Exception ex) {
                Logger.getLogger(Server.class.getName()).log(Level.SEVERE, null, ex);
            }
            System.out.println("****************************************************");
            System.out.println("*                                                  *");
            System.out.println("*                   Cloud Server                   *");
            System.out.println("*                       ©2010                      *");
            System.out.println("*                                                  *");
            System.out.println("*                   Luke Houlahan                  *");
            System.out.println("*                                                  *");
            System.out.println("*                  Server Online                   *");
            System.out.println("*             Listening On Port " + port + "              *");
            System.out.println("*                                                  *");
            System.out.println("****************************************************");
            System.out.println("");
            mailChecker();
            try {
                int i;
                ServerSocket s = new ServerSocket(port);
                for (i = 0; i < MaxUsers; ++i) {
                    sessions[i] = null;
                }
                while (true) {
                    try {
                        Socket incoming = s.accept();
                        boolean found = false;
                        int numusers = 0;
                        int usernum = -1;
                        synchronized (sessions) {
                            for (i = 0; i < MaxUsers; ++i) {
                                if (sessions[i] == null) {
                                    if (!found) {
                                        sessions[i] = new ObjectOutputStream(incoming.getOutputStream());
                                        ois[i] = new ObjectInputStream(incoming.getInputStream());
                                        new SocketHandler(incoming, i).start();
                                        found = true;
                                        usernum = i;
                                    }
                                } else {
                                    numusers++;
                                }
                            }
                            if (!found) {
                                ObjectOutputStream temp = new ObjectOutputStream(incoming.getOutputStream());
                                Person tempperson = new Person();
                                tempperson.setFlagField(100);
                                temp.writeObject(tempperson);
                                temp.flush();
                                temp = null;
                                tempperson = null;
                                incoming.close();
                            } else {
                            }
                        }
                    } catch (IOException ex) {
                        System.out.println(1);
                        Logger.getLogger(Server.class.getName()).log(Level.SEVERE, null, ex);
                    }
                }
            } catch (IOException ex) {
                System.out.println(2);
                Logger.getLogger(Server.class.getName()).log(Level.SEVERE, null, ex);
            }
        }

        public static void mailChecker() {
            toolkit = Toolkit.getDefaultToolkit();
            timer = new Timer();
            timer.schedule(new mailCheck(), 0, 10 * 1000);
        }
    }
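    A straightforward way to get the queueing behavior asked about is a bounded thread pool: accept sockets on the main thread and hand each connection to an ExecutorService, so at most a fixed number of handlers run at once and further connections wait in the executor's queue. A minimal sketch (it assumes SocketHandler is adapted to implement Runnable and to do its own session bookkeeping; the pool size is arbitrary):

    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class PooledServer {
        public static void main(String[] args) throws IOException {
            ExecutorService pool = Executors.newFixedThreadPool(16); // bounded worker pool
            try (ServerSocket server = new ServerSocket(6283)) {
                while (true) {
                    Socket client = server.accept();
                    // queued until a worker is free, instead of spawning a new thread
                    pool.execute(new SocketHandler(client)); // SocketHandler adapted to Runnable
                }
            }
        }
    }

    This also caps memory use: instead of one thread stack per connected user, you pay for at most 16 worker stacks plus the queued sockets. If the heap errors persist, the per-user ObjectOutputStream/ObjectInputStream arrays sized at MaxUsers and the 30-second MySQL poller are the next places to look.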

    Read the article

  • Inheritance Mapping Strategies with Entity Framework Code First CTP5 Part 1: Table per Hierarchy (TPH)

    - by mortezam
    A simple strategy for mapping classes to database tables might be “one table for every entity persistent class.” This approach sounds simple enough and, indeed, works well until we encounter inheritance. Inheritance is such a visible structural mismatch between the object-oriented and relational worlds because object-oriented systems model both “is a” and “has a” relationships. SQL-based models provide only "has a" relationships between entities; SQL database management systems don’t support type inheritance - and even when it’s available, it’s usually proprietary or incomplete. There are three different approaches to representing an inheritance hierarchy:
    • Table per Hierarchy (TPH): Enable polymorphism by denormalizing the SQL schema, and utilize a type discriminator column that holds type information.
    • Table per Type (TPT): Represent "is a" (inheritance) relationships as "has a" (foreign key) relationships.
    • Table per Concrete class (TPC): Discard polymorphism and inheritance relationships completely from the SQL schema.
    I will explain each of these strategies in a series of posts, and this one is dedicated to TPH. In this series we'll dig deeply into each of these strategies and will learn about "why" to choose them as well as "how" to implement them. Hopefully it will give you a better idea about which strategy to choose in a particular scenario.
    Inheritance Mapping with Entity Framework Code First
    All of the inheritance mapping strategies that we discuss in this series will be implemented with EF Code First CTP5. The CTP5 build of the new EF Code First library was released by the ADO.NET team earlier this month. EF Code First enables a pretty powerful code-centric development workflow for working with data. I’m a big fan of the EF Code First approach, and I’m pretty excited about the productivity and power that it brings. When it comes to inheritance mapping, not only does Code First fully support all the strategies, it also gives you ultimate flexibility to work with domain models that involve inheritance. The fluent API for inheritance mapping in CTP5 has been improved a lot and is now more intuitive and concise compared to CTP4.
    A Note For Those Who Follow Other Entity Framework Approaches
    If you are following EF's "Database First" or "Model First" approaches, I still recommend reading this series since, although the implementation is Code First specific, the explanations around each of the strategies apply perfectly to all approaches, be it Code First or others.
    A Note For Those Who are New to Entity Framework and Code-First
    If you choose to learn EF you've chosen well. If you choose to learn EF with Code First you've done even better. To get started, you can find a great walkthrough by Scott Guthrie here and another one by the ADO.NET team here. In this post, I assume you have already set up your machine to do Code First development and also that you are familiar with Code First fundamentals and basic concepts. You might also want to check out my other posts on EF Code First, like Complex Types and Shared Primary Key Associations.
    A Top Down Development Scenario
    These posts take a top-down approach; they assume that you’re starting with a domain model and trying to derive a new SQL schema. Therefore, we start with an existing domain model, implement it in C# and then let Code First create the database schema for us. However, the mapping strategies described are just as relevant if you’re working bottom up, starting with existing database tables.
I'll show some tricks along the way that help you deal with imperfect table layouts. Let's start with the mapping of entity inheritance.

The Domain Model
In our domain model, we have a BillingDetail base class which is abstract (note the italic font on the UML class diagram below). We allow various billing types and represent them as subclasses of BillingDetail. For now, we support CreditCard and BankAccount.

Implement the Object Model with Code First
As always, we start with the POCO classes. Note that in our DbContext, I only define one DbSet for the base class, BillingDetail. Code First will find the other classes in the hierarchy based on the Reachability Convention.

public abstract class BillingDetail
{
    public int BillingDetailId { get; set; }
    public string Owner { get; set; }
    public string Number { get; set; }
}

public class BankAccount : BillingDetail
{
    public string BankName { get; set; }
    public string Swift { get; set; }
}

public class CreditCard : BillingDetail
{
    public int CardType { get; set; }
    public string ExpiryMonth { get; set; }
    public string ExpiryYear { get; set; }
}

public class InheritanceMappingContext : DbContext
{
    public DbSet<BillingDetail> BillingDetails { get; set; }
}

This object model is all that is needed to enable inheritance with Code First. If you put this in your application you can immediately start working with the database and do CRUD operations. Before going into details about how EF Code First maps this object model to the database, we need to learn about one of the core concepts of inheritance mapping: polymorphic and non-polymorphic queries.

Polymorphic Queries
LINQ to Entities and EntitySQL, as object-oriented query languages, both support polymorphic queries, that is, queries for instances of a class and all instances of its subclasses. For example, consider the following query:

IQueryable<BillingDetail> linqQuery = from b in context.BillingDetails select b;
List<BillingDetail> billingDetails = linqQuery.ToList();

Or the same query in EntitySQL:

string eSqlQuery = @"SELECT VALUE b FROM BillingDetails AS b";
ObjectQuery<BillingDetail> objectQuery = ((IObjectContextAdapter)context).ObjectContext
                                                                         .CreateQuery<BillingDetail>(eSqlQuery);
List<BillingDetail> billingDetails = objectQuery.ToList();

linqQuery and eSqlQuery are both polymorphic and return a list of objects of type BillingDetail, which is an abstract class; the actual concrete objects in the list are of the subtypes of BillingDetail: CreditCard and BankAccount.

Non-polymorphic Queries
All LINQ to Entities and EntitySQL queries are polymorphic: they return not only instances of the specific entity class to which they refer, but all subclasses of that class as well. Non-polymorphic queries, on the other hand, are queries whose polymorphism is restricted and which return only instances of a particular subclass. In LINQ to Entities, this can be specified by using the OfType<T>() method. For example, the following query returns only instances of BankAccount:

IQueryable<BankAccount> query = from b in context.BillingDetails.OfType<BankAccount>() select b;

EntitySQL has an OFTYPE operator that does the same thing:

string eSqlQuery = @"SELECT VALUE b FROM OFTYPE(BillingDetails, Model.BankAccount) AS b";

In fact, the above query with the OFTYPE operator is a short form of the following query expression that uses the TREAT and IS OF operators:

string eSqlQuery = @"SELECT VALUE TREAT(b as Model.BankAccount)
                     FROM BillingDetails AS b
                     WHERE b IS OF(Model.BankAccount)";

(Note that in the above query, Model.BankAccount is the fully qualified name for the BankAccount class. You need to replace "Model" with your own namespace name.)

Table per Class Hierarchy (TPH)
An entire class hierarchy can be mapped to a single table. This table includes columns for all properties of all classes in the hierarchy. The concrete subclass represented by a particular row is identified by the value of a type discriminator column. You don't have to do anything special in Code First to enable TPH; it's the default inheritance mapping strategy. This mapping strategy is a winner in terms of both performance and simplicity. It's the best-performing way to represent polymorphism (both polymorphic and non-polymorphic queries perform well) and it's even easy to implement by hand. Ad-hoc reporting is possible without complex joins or unions, and schema evolution is straightforward.

Discriminator Column
As you can see in the DB schema above, Code First has to add a special column to distinguish between persistent classes: the discriminator. This isn't a property of the persistent class in our object model; it's used internally by EF Code First. By default, the column name is "Discriminator" and its type is string. The values default to the persistent class names, in this case "BankAccount" or "CreditCard". EF Code First automatically sets and retrieves the discriminator values.

TPH Requires Properties in Subclasses to be Nullable in the Database
TPH has one major problem: columns for properties declared by subclasses will be nullable in the database. For example, Code First created an (INT, NULL) column to map the CardType property in the CreditCard class. In a typical mapping scenario, Code First would create an (INT, NOT NULL) column for an int property on a persistent class; here, however, since a BankAccount instance won't have a CardType, the CardType field must be NULL for that row, so Code First creates an (INT, NULL) column instead. If your subclasses each define several non-nullable properties, the loss of NOT NULL constraints may be a serious problem from the point of view of data integrity.

TPH Violates the Third Normal Form
Another important issue is normalization. We've created functional dependencies between nonkey columns, violating the third normal form. Basically, the value of the Discriminator column determines the corresponding values of the columns that belong to the subclasses (e.g. BankName), but Discriminator is not part of the primary key for the table. As always, denormalization for performance can be misleading, because it sacrifices long-term stability, maintainability, and the integrity of data for immediate gains that may also be achieved by proper optimization of the SQL execution plans (in other words, ask your DBA).
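To tie all of this together, here is a minimal end-to-end sketch of TPH in action, using the classes and context defined above. Note that this sketch is my own illustration rather than part of the original walkthrough: the Program class and the sample values are additions, and it assumes Code First creates the database using its default conventions (a SQL Server Express database named after the context) on first use.

using System;
using System.Linq;

class Program
{
    static void Main()
    {
        // Assumes the BillingDetail hierarchy and InheritanceMappingContext
        // from the listings above are defined in this project.
        using (var context = new InheritanceMappingContext())
        {
            // Both entities end up as rows in the single BillingDetails table;
            // Code First fills in the Discriminator column automatically.
            context.BillingDetails.Add(new BankAccount
            {
                Owner = "Jane Doe", Number = "101", BankName = "MyBank", Swift = "MYBKGB2L"
            });
            context.BillingDetails.Add(new CreditCard
            {
                Owner = "Jane Doe", Number = "202", CardType = 1, ExpiryMonth = "12", ExpiryYear = "2012"
            });
            context.SaveChanges();

            // Polymorphic read: returns both CreditCard and BankAccount instances as BillingDetail.
            foreach (var detail in context.BillingDetails)
                Console.WriteLine("{0}: {1}", detail.GetType().Name, detail.Owner);

            // Non-polymorphic read: only the BankAccount rows come back.
            var accounts = context.BillingDetails.OfType<BankAccount>().ToList();
        }
    }
}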
Generated SQL Query
Let's take a look at the SQL statements that EF Code First sends to the database when we write queries in LINQ to Entities or EntitySQL. For example, the polymorphic query for BillingDetails that you saw generates the following SQL statement:

SELECT
 [Extent1].[Discriminator] AS [Discriminator],
 [Extent1].[BillingDetailId] AS [BillingDetailId],
 [Extent1].[Owner] AS [Owner],
 [Extent1].[Number] AS [Number],
 [Extent1].[BankName] AS [BankName],
 [Extent1].[Swift] AS [Swift],
 [Extent1].[CardType] AS [CardType],
 [Extent1].[ExpiryMonth] AS [ExpiryMonth],
 [Extent1].[ExpiryYear] AS [ExpiryYear]
FROM [dbo].[BillingDetails] AS [Extent1]
WHERE [Extent1].[Discriminator] IN ('BankAccount','CreditCard')

The non-polymorphic query for the BankAccount subclass generates this SQL statement:

SELECT
 [Extent1].[BillingDetailId] AS [BillingDetailId],
 [Extent1].[Owner] AS [Owner],
 [Extent1].[Number] AS [Number],
 [Extent1].[BankName] AS [BankName],
 [Extent1].[Swift] AS [Swift]
FROM [dbo].[BillingDetails] AS [Extent1]
WHERE [Extent1].[Discriminator] = 'BankAccount'

Note how Code First adds a restriction on the discriminator column and also how it selects only those columns that belong to the BankAccount entity.

Change Discriminator Column Data Type and Values With Fluent API
Sometimes, especially in legacy schemas, you need to override the conventions for the discriminator column so that Code First can work with the schema. The following fluent API code changes the discriminator column name to "BillingDetailType" and the values to "BA" and "CC" for BankAccount and CreditCard respectively:

protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
{
    modelBuilder.Entity<BillingDetail>()
                .Map<BankAccount>(m => m.Requires("BillingDetailType").HasValue("BA"))
                .Map<CreditCard>(m => m.Requires("BillingDetailType").HasValue("CC"));
}

Changing the data type of the discriminator column is also interesting. In the above code we passed strings to the HasValue method, but this method is defined to accept an object:

public void HasValue(object value);

Therefore, if we pass it a value of type int, for example, Code First not only uses our desired values (i.e. 1 and 2) in the discriminator column but also changes the column type to (INT, NOT NULL):

modelBuilder.Entity<BillingDetail>()
            .Map<BankAccount>(m => m.Requires("BillingDetailType").HasValue(1))
            .Map<CreditCard>(m => m.Requires("BillingDetailType").HasValue(2));

Summary
In this post we learned about Table per Hierarchy as the default mapping strategy in Code First. The disadvantages of the TPH strategy may be too serious for your design; after all, denormalized schemas can become a major burden in the long run, and your DBA may not like it at all. In the next post, we will learn about the Table per Type (TPT) strategy, which doesn't expose you to this problem.

References
ADO.NET team blog
Java Persistence with Hibernate book

    Read the article

  • HTG Reviews the CODE Keyboard: Old School Construction Meets Modern Amenities

    - by Jason Fitzpatrick
There's nothing quite as satisfying as the smooth and crisp action of a well-built keyboard. If you're tired of mushy keys and cheap-feeling keyboards, a well-constructed mechanical keyboard is a welcome respite from the $10 keyboard that came with your computer. Read on as we put the CODE mechanical keyboard through its paces.

What is the CODE Keyboard?
The CODE keyboard is a collaboration between manufacturer WASD Keyboards and Jeff Atwood of Coding Horror (the guy behind the Stack Exchange network and Discourse forum software). Atwood's focus was incorporating the best of traditional mechanical keyboards and the best of modern keyboard usability improvements. In his own words:

The world is awash in terrible, crappy, no name how-cheap-can-we-make-it keyboards. There are a few dozen better mechanical keyboard options out there. I've owned and used at least six different expensive mechanical keyboards, but I wasn't satisfied with any of them, either: they didn't have backlighting, were ugly, had terrible design, or were missing basic functions like media keys. That's why I originally contacted Weyman Kwong of WASD Keyboards way back in early 2012. I told him that the state of keyboards was unacceptable to me as a geek, and I proposed a partnership wherein I was willing to work with him to do whatever it takes to produce a truly great mechanical keyboard.

Even the ardent skeptic who questions whether Atwood has indeed created a truly great mechanical keyboard certainly can't argue with the position he starts from: there are so many agonizingly crappy keyboards out there. Even worse, in our opinion, is that unless you're a typist of a certain vintage there's a good chance you've never actually typed on a really nice keyboard. Those who didn't start using computers until the mid-to-late 1990s have most likely always typed on modern mushy-key keyboards and never known the joy of typing on a really responsive and crisp mechanical keyboard. Is our preference for and love of mechanical keyboards shining through here? Good. We're not even going to try to hide it. So where does the CODE keyboard stack up in the pantheon of keyboards? Read on as we walk you through the simple setup and our experience using the CODE.

Setting Up the CODE Keyboard
Although the setup of the CODE keyboard is essentially plug and play, there are two distinct setup steps that you likely haven't had to perform on a previous keyboard. Both highlight the degree of care put into the keyboard and the amount of customization available. Inside the box you'll find the keyboard, a micro USB cable, a USB-to-PS/2 adapter, and a tool you may be unfamiliar with: a key puller. We'll return to the key puller in a moment. Unlike the majority of keyboards on the market, the cord isn't permanently affixed to the keyboard. What does this mean for you? Aside from the obvious need to plug it in yourself, it makes it dead simple to repair your own keyboard cord if it gets attacked by a pet, mangled in a mechanism on your desk, or otherwise damaged. It also makes it easy to take advantage of the cable routing channels on the underside of the keyboard to route your cable exactly where you want it. While we're staring at the underside of the keyboard, check out those beefy rubber feet. By peripheral standards they're huge (and there are six instead of the usual four). Once you plunk the keyboard down where you want it, it might as well be glued down; the rubber feet work that well.
After you've secured the cable and adjusted it to your liking, there is one more task before plugging the keyboard into the computer. On the bottom left-hand side of the keyboard, you'll find a small recess in the plastic with some DIP switches inside. The DIP switches are there to switch hardware functions for various operating systems and keyboard layouts, and to enable or disable function keys. By toggling the DIP switches you can change the keyboard from QWERTY mode to Dvorak mode or Colemak mode, the two most popular alternative keyboard layouts. You can also use the switches to enable Mac functionality (for the Command/Option keys). One of our favorite little toggles is the SW3 DIP switch: you can disable the Caps Lock key; goodbye accidentally pressing Caps when you mean to press Shift. You can review the entire DIP switch configuration chart here. The quick start for Windows users is simple: double check that all the switches are in the off position (as seen in the photo above) and then simply toggle SW6 on to enable the media and backlighting function keys (this turns the menu key on the keyboard into a function key as typically found on laptop keyboards). After adjusting the DIP switches to your liking, plug the keyboard into an open USB port on your computer (or into your PS/2 port using the included adapter).

Design, Layout, and Backlighting
The CODE keyboard comes in two flavors, a traditional 87-key layout (no number pad) and a traditional 104-key layout (number pad on the right-hand side). We call the layout traditional because, despite some modern trappings and sneaky shortcuts, the actual form factor of the keyboard, from the shape of the keys to their spacing and position, is as classic as it comes. You won't have to learn a new keyboard layout or spend weeks conditioning yourself to a smaller-than-normal backspace key or a PgUp/PgDn pair in an unconventional location. Just because the keyboard is very conventional in layout, however, doesn't mean you'll be missing modern amenities like media-control keys. The following additional functions are hidden in the F11, F12, and Pause buttons and the 2x6 grid formed by the Insert and Delete rows: keyboard illumination brightness, keyboard illumination on/off, mute, and the typical play/pause, forward/backward, stop, and volume +/- in the Insert and Delete rows, respectively. While we weren't sure what we'd think of the function-key system at first (especially after retiring a Microsoft SideWinder keyboard with a huge and easily accessible volume knob on it), it took less than a day for us to adapt to using the Fn key, located next to the right Ctrl key, to adjust our media playback on the fly. Keyboard backlighting is a largely hit-or-miss undertaking, but the CODE keyboard nails it. Not only does it have pleasant and easily adjustable through-the-keys lighting, but the switches the keys are attached to are mounted on a steel plate painted white. Enough of the light reflects off the interior cavity of the keys and then diffuses across the white plate to provide nice, even illumination between the keys. Highlighting the steel plate beneath the keys brings us to the actual construction of the keyboard. It's rock solid. The 87-key model, the one we tested, weighs 2.0 pounds; the 104-key is nearly a half pound heavier at 2.42 pounds. Between the steel plate, the extra-thick PCB beneath it, and the thick ABS plastic housing, the keyboard has a very solid feel to it.
Combine that heft with the previously mentioned thick rubber feet and you have a tank-like keyboard that won't budge a millimeter during normal use.

Examining The Keys
This is the section of the review the hardcore typists and keyboard ninjas have been waiting for. We've looked at the layout of the keyboard and its general construction, but what about the actual keys? There are a wide variety of keyboard construction techniques, but the vast majority of modern keyboards use a rubber-dome construction. The key floats in a plastic frame over a rubber membrane that has a little rubber dome for each key. Pressing the physical key compresses the rubber dome downwards, and a little bit of conductive material on the inside of the dome's apex connects with the circuit board. Despite the near ubiquity of the design, many people dislike it. The principal complaint is that dome keyboards require a complete compression to register a keystroke; keyboard designers and enthusiasts refer to this as "bottoming out". In other words, to register the "b" key, you need to press that key all the way down. This slows you down and requires additional pressure and movement that, over the course of tens of thousands of keystrokes, adds up to a whole lot of wasted time and fatigue. The CODE keyboard features key switches manufactured by Cherry, a company that has made key switches since the 1960s. Specifically, the CODE features Cherry MX Clear switches. These switches share the classic design of the other Cherry switches (such as the MX Blue and Brown lineups) but are significantly quieter (yes, this is a mechanical keyboard, but no, your neighbors won't think you're firing off a machine gun) as they lack the audible click found in most Cherry switches. This isn't to say the keyboard doesn't make a nice audible sound when a key is fully depressed, only that the mechanism doesn't create a loud click when it triggers. One of the great features of the Cherry MX Clear is a tactile "bump" that indicates the key has been compressed enough to register the stroke. For touch typists the very subtle tactile feedback is a great indicator that you can move on to the next stroke, and it provides a welcome speed boost. Even if you're not trying to break any words-per-minute records, that little bump when pressing the key is satisfying. The Cherry key switches, in addition to providing a much more pleasant typing experience, are also significantly more durable than dome-style switches. Rubber-dome membrane keyboards are typically rated for 5-10 million contacts, whereas the Cherry mechanical switches are rated for 50 million. You'd have to write the next War and Peace, follow that up with A Tale of Two Cities: Zombie Edition, and then turn around and transcribe them both into a dozen different languages to even begin putting a tiny dent in the lifecycle of this keyboard. So what do the switches look like under the classically styled keys? You can take a look yourself with the included key puller. Slide the loop between the keys and then gently beneath the key you wish to remove. Wiggle the key puller gently back and forth while exerting a gentle upward pressure to pop the key off; you can repeat the process for every key if you ever find yourself needing to extract piles of cat hair, Cheeto dust, or other foreign objects from your keyboard.
There it is, the naked switch, the source of that wonderful crisp action with the tactile bump on each keystroke. The last feature worthy of a mention is the N-key rollover functionality of the keyboard. This is a feature you simply won't find on non-mechanical keyboards, and even gaming keyboards typically only offer rollover on high-frequency keys like WASD. So what is N-key rollover and why do you care? On a typical mass-produced rubber-dome keyboard you cannot simultaneously press more than two keys, as the third one doesn't register. PS/2 keyboards allow for unlimited rollover (in other words, you can't out-type the keyboard, as all of your keystrokes, no matter how fast, will register); if you use the CODE keyboard with the PS/2 adapter you gain this ability. If you stick with the native USB connection instead, you still get 6-key rollover (and Ctrl, Alt, and Shift don't count towards the six), so realistically you still won't be able to out-type the computer: even the more finger-twisting keyboard combos and high-speed typing will fall well within the 6-key rollover. The rollover absolutely doesn't matter if you're a slow hunt-and-peck typist, but if you've read this far into a keyboard review there's a good chance you're a serious typist, and that kind of quality construction and high-number key rollover is a fantastic feature.

The Good, The Bad, and the Verdict
We've put the CODE keyboard through its paces: we've played games with it, typed articles with it, left lengthy comments on Reddit, and otherwise used and abused it like we would any other keyboard.

The Good:
  • The construction is rock solid. In an emergency, we're confident we could use the keyboard as a blunt weapon (and then resume using it later in the day with no ill effect on the keyboard).
  • The Cherry switches are an absolute pleasure to type on; the Clear variety found in the CODE keyboard offers a really nice middle ground between the gunshot clack of a louder mechanical switch and the quietness of a lesser-quality dome keyboard, without sacrificing quality. Touch typists will love the subtle tactile bump feedback.
  • The DIP switch system makes it very easy for users on different systems and with different keyboard layout needs to switch between operating systems and layouts. If you're investing a chunk of change in a keyboard, it's nice to know you can take it with you to a different operating system or "upgrade" it to a new layout if you decide to take up Dvorak-style typing.
  • The backlighting is perfect. You can adjust it from a barely-visible glow to a blazing light-up-the-room brightness. Whatever your intensity preference, the white-coated steel backplate does a great job diffusing the light between the keys.
  • You can easily remove the keys for cleaning (or to rearrange the letters to support a new keyboard layout).
  • The weight of the unit combined with the extra-thick rubber feet keeps it planted exactly where you place it on the desk.

The Bad:
  • While you're getting your money's worth, the $150 price tag is a shock when compared to the $20-60 price tags found on lower-end keyboards.
  • People used to large dedicated media keys independent of the traditional key layout (such as the large buttons and volume controls found on many modern keyboards) might be put off by the Fn-key style media controls on the CODE.

The Verdict: The keyboard is clearly and heavily influenced by the needs of serious typists.
Whether you're a programmer, transcriptionist, or just somebody who wants to leave the lengthiest article comments the Internet has ever seen, the CODE keyboard offers a rock-solid typing experience. Yes, $150 isn't pocket change, but the quality of the CODE keyboard is so high and the typing experience so enjoyable that you're easily getting ten times the value you'd get out of a lesser keyboard. Even compared to other mechanical keyboards on the market, like the Das Keyboard, you're still getting more for your money, as other mechanical keyboards don't come with the lovely-to-type-on Cherry MX Clear switches, backlighting, and hardware-based operating system and keyboard layout switching. If it's in your budget to upgrade your keyboard (especially if you've been slogging along with a low-end rubber-dome keyboard), there's no good reason not to pick up a CODE keyboard. Key animation courtesy of Geekhack.org user Lethal Squirrel.

    Read the article

  • How to resolve Windows Update Error 8024402F on Windows 7 Home Premium 64bit?

    - by Day
I have been having the same problem with Windows Update on two of my machines at home, both running Windows 7 Home Premium 64-bit. One of the two machines is a brand new install; the other has run Windows Update successfully in the past but is also not working now. When I manually check for updates using the Control Panel, I get error code 8024402F. I followed the link to "Get help with this error", which brings up several articles in Windows Help and Support, none of which are for this specific error code. From the help and general googling I've tried:

  • Checking internet connectivity. Most of the help suggests that this error is caused by a general internet connectivity problem. But if you're reading this, my connection is definitely working fine.
  • Disabling antivirus temporarily and trying to run Windows Update. This didn't help (I run AVG Free).
  • Running Control Panel - Troubleshooting - Security Systems - Fix Problems with Windows Update. This said it detected and resolved problems, but didn't help.
  • Updating using IE (as I used to in XP). Going to http://windowsupdate.microsoft.com/ redirects me to http://test.update.microsoft.com/windowsupdate/v6/vistadefault.aspx, for which IE displays a "connection problem" (i.e. site unreachable).

I've had the same problem for 24 hours now, so surely the Windows Update servers haven't been down this whole time? A quick check on Twitter shows no worldwide outcry about Windows Update being unavailable, so is it just me? I'm based in the UK, but I notice that the http://test.update.microsoft.com/windowsupdate/v6/vistadefault.aspx URL is also unreachable using wget from my webserver in Chicago:

day@ord1:~$ wget http://test.update.microsoft.com/windowsupdate/v6/vistadefault.aspx
--2011-03-17 00:01:27--  http://test.update.microsoft.com/windowsupdate/v6/vistadefault.aspx
Resolving test.update.microsoft.com... failed: Name or service not known.
wget: unable to resolve host address `test.update.microsoft.com'

day@ord1:~$ host test.update.microsoft.com
Host test.update.microsoft.com not found: 3(NXDOMAIN)

    Read the article
