Search Results

Search found 9233 results on 370 pages for 'building'.


  • ADF Enterprise Application Development - Made Simple (Book Review)

    - by Frank Nimphius
      Sten E. Vesterli wrote the "Oracle ADF Enterprise Application Development – Made Simple" book, published by Packt Publishing in 2011 (http://www.packtpub.com/oracle-adf-enterprise-application-development/book). A common question on OTN, but also when talking to clients or customers, is where and how to start your ADF application development. Especially when the current programming background is not in Java, but in 4GL or PL/SQL, developers often look for answers to the following questions:
      · How long does it take to learn Oracle ADF?
      · How long does it take to replace a Forms application with ADF?
      · How many developers do I need?
      · Do I need to know Java to use ADF and, if yes, how well do I need to know it?
      · How do I structure my programming files, organizing them in JDeveloper workspaces, projects and libraries?
      · What are best practices for naming Java packages, and how do I avoid naming conflicts in ADF in general?
      · How many Application Modules do I need or should I create?
      · How do I test applications?
      Sten Vesterli answers all of the above questions and more in his book, which makes it a great addition to the 3 existing Oracle ADF books. In order of complexity (which also is the order in which reading the available Oracle ADF books makes sense), in my opinion, Sten's book should come second – though it also is useful to those who are already more advanced with Oracle ADF. So if you are absolutely new to Oracle ADF, then the order of books to read to get you up to an expert level should be:
      1. Grant Ronald; "Quick Start Guide to Oracle Fusion Development: Oracle JDeveloper and Oracle ADF" (McGraw Hill 2010)
      2. Sten Vesterli; "Oracle ADF Enterprise Application Development – Made Simple" (Packt Publishing 2011)
      3. Duncan Mills, Peter Koletzke; "Oracle JDeveloper 11g Handbook: A Guide to Fusion Web Development" (McGraw Hill 2009)
      4. Frank Nimphius, Lynn Munsinger; "Oracle Fusion Developer Guide: Building Rich Internet Applications with Oracle ADF Business Components and Oracle ADF Faces" (McGraw Hill 2010)
      If you are not new to Oracle ADF and Oracle JDeveloper, then buy Sten Vesterli's book anyway. It is worth it and you want to have it on your book shelf. See below the table of contents to get a better idea of what this book covers:
      · Chapter 1: The ADF Proof of Concept
      · Chapter 2: Estimating the Effort
      · Chapter 3: Getting Organized
      · Chapter 4: Productive Teamwork
      · Chapter 5: Prepare to Build
      · Chapter 6: Building the Enterprise Application
      · Chapter 7: Testing your Application
      · Chapter 8: Look and Feel
      · Chapter 9: Customizing the Functionality
      · Chapter 10: Securing your ADF Application
      · Chapter 11: Package and Deliver
      · Appendix: Internationalization
      The book is written with a lot of good humor, which makes the read very enjoyable (from a geek's perspective, of course). My favorite quote – just in case you are interested – is from page 97, when Sten talks about getting organized: "Stop sending e-mails to your team. Just stop it. E-mail is so last century.…" So true, so true! This quote's runner-up is the "boss key" on page 128, where Sten talks about productivity and how Oracle Team Productivity Center (TPC) can help you with it. Quotes like these stick in your brain and make sure you never forget. Go for it!

    Read the article

  • Java Spotlight Episode 76: Pro JavaFX 2 - A Definitive Guide to Rich Clients with Java Technology

    - by Roger Brinkley
    Tweet An interview with the authors of Pro JavaFX 2: A Definitive Guide to Rich Clients with Java Technology. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes.

    Show Notes

    News
    Angela Caicedo has created 3 new JavaFX screencast videos on the Java YouTube channel: Part 1: Building your First JavaFX Application with NetBeans 7.1, Part 2: Building your First JavaFX Application with NetBeans 7.1, and Getting Started with Scene Builder. (A minimal sketch of what such a first JavaFX application looks like appears at the end of this entry.)

    Events
    March 26-29, EclipseCon, Reston, USA
    March 27, Virtual Developer Days - Java (Asia Pacific (English)), 9:30 am to 2:00 pm IST / 12:00 pm to 4:30 pm SGT / 3:00 pm - 7:30 pm AEDT
    April 4-5, JavaOne Japan, Tokyo, Japan
    April 12, GreenJUG, Greenville, SC
    April 17-18, JavaOne Russia, Moscow, Russia
    April 18–20, Devoxx France, Paris, France
    April 26, Mix-IT, Lyon, France
    May 3-4, JavaOne India, Hyderabad, India

    Feature Interview
    Pro JavaFX 2: A Definitive Guide to Rich Clients with Java Technology is available from Amazon.com in either paperback or on the Kindle.
    James L. (Jim) Weaver is a Java and JavaFX developer, author, and speaker with a passion for helping rich-client Java and JavaFX become preferred technologies for new application development. Books that Jim has authored include Inside Java, Beginning J2EE, and Pro JavaFX Platform, with the latter being updated to cover JavaFX 2.0. His professional background includes 15 years as a systems architect at EDS, and the same number of years as an independent developer. Jim is an international speaker at software technology conferences, including the JavaOne conferences in San Francisco and São Paulo. Jim blogs at http://javafxpert.com and tweets @javafxpert.
    Weiqi Gao is a principal software engineer with Object Computing, Inc., in St. Louis, MO. He has more than 18 years of software development experience and has been using Java technology since 1998. He is interested in programming languages, object-oriented systems, distributed computing, and graphical user interfaces. He is a presenter and a member of the steering committee of the St. Louis Java Users Group. Weiqi holds a PhD in mathematics.
    Stephen Chin is chief agile methodologist at GXS and a technical expert in client UI technologies. He is lead author on the Pro Android Flash title and coauthored the Pro JavaFX Platform title, which is the leading technical reference for JavaFX. In addition, Stephen runs the very successful Silicon Valley JavaFX User Group, which has hundreds of members and tens of thousands of online viewers. Finally, he is a Java Champion, chair of the OSCON Java conference, and an internationally recognized speaker featured at Devoxx, Codemash, AnDevCon, Jazoon, and JavaOne, where he received a Rock Star Award. Stephen can be followed on Twitter @steveonjava and reached via his blog: http://steveonjava.com.
    Dean Iverson has been writing software professionally for more than 15 years. He is employed by the Virginia Tech Transportation Institute, where he is a rich client application developer. He also has a small software consultancy called Pleasing Software Solutions, which he cofounded with his wife.
    Johan Vos started to work with Java in 1995. As part of the Blackdown team, he helped port Java to Linux. With LodgON, the company he cofounded, he has been mainly working on Java-based solutions for social networking software. Because he can't make a choice between embedded development and enterprise development, his main focus is on end-to-end Java, combining the strengths of backend systems and embedded devices. His favorite technologies are currently Java EE/GlassFish at the backend and JavaFX at the frontend. Johan's blog can be followed at http://blogs.lodgon.com/johan, and he tweets at http://twitter.com/johanvos.

    Mail Bag

    What's Cool
    Gerrit Grunwald's SteelSeries FX Experience Tools
    Canned Animations
    ComboBox
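    For readers new to JavaFX, here is a minimal sketch of what such a "first JavaFX application" typically looks like (an illustrative example only, not code from the screencasts; the class name is a placeholder):

        import javafx.application.Application;
        import javafx.event.ActionEvent;
        import javafx.event.EventHandler;
        import javafx.scene.Scene;
        import javafx.scene.control.Button;
        import javafx.scene.layout.StackPane;
        import javafx.stage.Stage;

        // A minimal JavaFX 2 application: one button in a window.
        public class HelloJavaFX extends Application {
            @Override
            public void start(Stage stage) {
                Button button = new Button("Say 'Hello JavaFX'");
                button.setOnAction(new EventHandler<ActionEvent>() {
                    @Override
                    public void handle(ActionEvent event) {
                        System.out.println("Hello JavaFX");
                    }
                });
                StackPane root = new StackPane();
                root.getChildren().add(button);
                stage.setScene(new Scene(root, 300, 200));
                stage.setTitle("Hello JavaFX");
                stage.show();
            }

            public static void main(String[] args) {
                launch(args);  // boots the JavaFX runtime and calls start()
            }
        }

    The JavaFX project wizard in NetBeans 7.1 generates essentially this skeleton for you, and Scene Builder can then be used to replace the hand-built scene graph with an FXML layout.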

    Read the article

  • JDeveloper 11g R1 (11.1.1.4.0) - New Features on ADF Desktop Integration Explained

    - by juan.ruiz
    One of the areas that introduced many new features in the latest release (11.1.1.4.0) of JDeveloper 11g R1 is ADF Desktop Integration - in this article I'll provide an overview of these new features.

    New ADF Desktop Integration Ribbon in Excel - After installing the ADF Desktop Integration add-in, and depending on the mode in which you open the desktop integration workbook, the ADF Desktop Integration ribbon for design time or runtime is displayed as a separate tab within Excel. In previous versions the ADF Desktop Integration environment was placed inside the Add-Ins tab. On the design time ribbon you can manage the workbook and worksheet properties, worksheet component properties, diagnostics, and execution and publication of the workbook. The runtime version of the ribbon is totally customizable and represents what used to be the runtime menu on the spreadsheet; in this ribbon you can include all the operations and actions that can be executed by the end user while working with the spreadsheet data.

    Diagnostics - A very important aspect for developers is how to debug or verify the interactions of the client with the server; for that, ADF Desktop Integration has provided a series of diagnostics tools since day one. In this release the diagnostics tools are more visible and are really easy to configure. You can access the client console while testing the workbook, or you can simply dump all the messages to a log file, with the ability to set the output level for both.

    Security - There are a number of enhancements around security, but the one with the most impact for developers is that security is now optional when using ADF Desktop Integration. Until this version, every time you wanted to work with ADFdi the application had to be secured first. In this release security is optional, which means that if you have previously defined security on your application, then you must secure the ADFdi servlet as explained in one of my previous (ADD LINK) posts. On the other hand, if by the time you start working with ADFdi you have not defined security, you can test and publish your workbooks without adding security.

    Support for Continuous Integration - In this release we have added tooling for continuous integration builds. In the ADF Desktop Integration space, the concept translates to adding functionality that developers can use to publish ADFdi workbooks as part of their entire application build. For that purpose, we have a publish tool that can easily be invoked from an ANT task, so that all the design time workbooks are re-published as part of the latest application build.

    Key Column - At runtime, on any worksheet containing editable tables you will notice an additional new column called the key column. The purpose of this column is to make the end user aware that all rows on the table need to be selected at the time of sorting. Users cannot alter the value of this column. From the developer's point of view there are no steps required in order to have the key column included in the worksheets.

    Installation and Creation of New Workbooks - Both use cases can now be executed directly from JDeveloper. As part of the Tools menu options the developer can install the ADF Desktop Integration designer. Also, creating new workbooks, which previously was done through the convert tool shipped with JDeveloper, is now done automatically from the New Gallery. Creating a new ADFdi workbook adds metadata information to the Excel workbook so you can work in design time.

    Other Enhancements - Support for Excel 2010; also, ADF components with read-only enabled no longer allow their value to be changed - the cell in Excel is automatically protected. This could cause confusion among customers of previous releases.

    Read the article

  • Logging connection strings

    If you use some of the dynamic features of SSIS, such as package configurations or property expressions, then sometimes trying to work out where your connections are pointing can be a bit confusing. You will work it out in the end, but it can be useful to explicitly log this information so that when things go wrong you can just review the logs. You may wish to develop this idea further and encapsulate such logging into a custom task, but for now let's keep it simple and use the Script Task. The Script Task code below will raise an Information event showing the name and connection string for a connection.

        Imports System
        Imports Microsoft.SqlServer.Dts.Runtime

        Public Class ScriptMain
            Public Sub Main()
                Dim fireAgain As Boolean

                ' Get the connection string, we need to know the name of the connection
                Dim connectionName As String = "My OLE-DB Connection"
                Dim connectionString As String = Dts.Connections(connectionName).ConnectionString

                ' Format the message and log it via an information event
                Dim message As String = String.Format("Connection ""{0}"" has a connection string of ""{1}"".", _
                    connectionName, connectionString)
                Dts.Events.FireInformation(0, "Information", message, Nothing, 0, fireAgain)

                Dts.TaskResult = Dts.Results.Success
            End Sub
        End Class

    Building on that example, it is probably more flexible to log all connections in a package, as shown in the next example.

        Imports System
        Imports Microsoft.SqlServer.Dts.Runtime

        Public Class ScriptMain
            Public Sub Main()
                Dim fireAgain As Boolean

                ' Loop through all connections in the package
                For Each connection As ConnectionManager In Dts.Connections
                    ' Get the connection string and log it via an information event
                    Dim message As String = String.Format("Connection ""{0}"" has a connection string of ""{1}"".", _
                        connection.Name, connection.ConnectionString)
                    Dts.Events.FireInformation(0, "Information", message, Nothing, 0, fireAgain)
                Next

                Dts.TaskResult = Dts.Results.Success
            End Sub
        End Class

    Using the Information event makes the output readily available in the designer, for example in the Visual Studio Output window (Ctrl+Alt+O) or the package designer Execution Results tab, and also allows you to readily control the logging by choosing which events to log in the normal way. Now before somebody starts commenting that this is a security risk, I would like to highlight good practice for building connection managers. Firstly, the Password property, or any other similar sensitive property, is always defined as write-only, and secondly the connection string property only uses the public properties to assemble the connection string value when requested. In other words the connection string will never contain the password. I have seen a couple of cases where this is not true, but that was just bad development by third parties; you won't find anything like that in the box from Microsoft.

    Whilst writing this code it made me wish that there was a custom log entry that you could just turn on that did this for you, but alas connection managers do not even seem to support custom events. It did however remind me of a very useful event that is often overlooked and fits rather well alongside connection string logging: the Execute SQL Task's custom ExecuteSQLExecutingQuery event. To quote the help reference: Custom Messages for Logging - Provides information about the execution phases of the SQL statement. Log entries are written when the task acquires connection to the database, when the task starts to prepare the SQL statement, and after the execution of the SQL statement is completed. The log entry for the prepare phase includes the SQL statement that the task uses. It is the last part that is so useful; how often have you used an expression to derive a SQL statement and wanted to log it to make sure the correct SQL is being returned? You need to turn it on, as by default no custom log events are captured, but I'll refer you to a walkthrough on setting up the logging for ExecuteSQLExecutingQuery by Jamie.

    Read the article

  • VMware Player 3.1.5 fails to start on Ubuntu 12.04 32-bit

    - by Chris Lee
    Install got no problem but at start of VMWare Player it want to compile and exit with this error: "Unable to build kernel module." Thats the log: Jun 05 00:03:36.153: app-3075712704| Log for VMware Workstation pid=3709 version=7.1.5 build=build-491717 option=Release Jun 05 00:03:36.153: app-3075712704| The process is 32-bit. Jun 05 00:03:36.153: app-3075712704| Host codepage=UTF-8 encoding=UTF-8 Jun 05 00:03:36.153: app-3075712704| Logging to /tmp/vmware-root/setup-3709.log Jun 05 00:03:36.284: app-3075712704| modconf query interface initialized Jun 05 00:03:36.284: app-3075712704| modconf library initialized Jun 05 00:03:36.336: app-3075712704| Your GCC version: 4.6 Jun 05 00:03:36.350: app-3075712704| Your GCC version: 4.6 Jun 05 00:03:36.371: app-3075712704| Your GCC version: 4.6 Jun 05 00:03:36.417: app-3075712704| Your GCC version: 4.6 Jun 05 00:03:36.429: app-3075712704| Your GCC version: 4.6 Jun 05 00:03:36.521: app-3075712704| Trying to find a suitable PBM set for kernel 3.2.0-24-generic-pae. Jun 05 00:03:36.526: app-3075712704| Trying to find a suitable PBM set for kernel 3.2.0-24-generic-pae. Jun 05 00:03:36.529: app-3075712704| Trying to find a suitable PBM set for kernel 3.2.0-24-generic-pae. Jun 05 00:03:36.533: app-3075712704| Trying to find a suitable PBM set for kernel 3.2.0-24-generic-pae. Jun 05 00:03:36.536: app-3075712704| Trying to find a suitable PBM set for kernel 3.2.0-24-generic-pae. Jun 05 00:03:36.579: app-3075712704| Trying to find a suitable PBM set for kernel 3.2.0-24-generic-pae. Jun 05 00:03:36.583: app-3075712704| Trying to find a suitable PBM set for kernel 3.2.0-24-generic-pae. Jun 05 00:03:36.586: app-3075712704| Trying to find a suitable PBM set for kernel 3.2.0-24-generic-pae. Jun 05 00:03:36.589: app-3075712704| Trying to find a suitable PBM set for kernel 3.2.0-24-generic-pae. Jun 05 00:03:36.593: app-3075712704| Trying to find a suitable PBM set for kernel 3.2.0-24-generic-pae. Jun 05 00:03:36.602: app-3075712704| Your GCC version: 4.6 Jun 05 00:03:36.624: app-3075712704| Your GCC version: 4.6 Jun 05 00:03:39.677: app-3075712704| Trying to find a suitable PBM set for kernel 3.2.0-24-generic-pae. Jun 05 00:03:39.689: app-3075712704| Trying to find a suitable PBM set for kernel 3.2.0-24-generic-pae. Jun 05 00:03:39.694: app-3075712704| Trying to find a suitable PBM set for kernel 3.2.0-24-generic-pae. Jun 05 00:03:39.697: app-3075712704| Trying to find a suitable PBM set for kernel 3.2.0-24-generic-pae. Jun 05 00:03:39.701: app-3075712704| Trying to find a suitable PBM set for kernel 3.2.0-24-generic-pae. Jun 05 00:03:39.711: app-3075712704| Your GCC version: 4.6 Jun 05 00:03:39.733: app-3075712704| Your GCC version: 4.6 Jun 05 00:03:39.854: app-3075712704| Trying to find a suitable PBM set for kernel 3.2.0-24-generic-pae. Jun 05 00:03:39.858: app-3075712704| Trying to find a suitable PBM set for kernel 3.2.0-24-generic-pae. Jun 05 00:03:39.862: app-3075712704| Trying to find a suitable PBM set for kernel 3.2.0-24-generic-pae. Jun 05 00:03:39.865: app-3075712704| Trying to find a suitable PBM set for kernel 3.2.0-24-generic-pae. Jun 05 00:03:39.868: app-3075712704| Trying to find a suitable PBM set for kernel 3.2.0-24-generic-pae. Jun 05 00:03:40.065: app-3075712704| Trying to find a suitable PBM set for kernel 3.2.0-24-generic-pae. Jun 05 00:03:40.066: app-3075712704| Building module vmmon. Jun 05 00:03:40.066: app-3075712704| Extracting the sources of the vmmon module. 
Jun 05 00:03:40.075: app-3075712704| Building module with command: /usr/bin/make -C /tmp/vmware-root/modules/vmmon-only auto-build SUPPORT_SMP=1 HEADER_DIR=/lib/modules/3.2.0-24-generic-pae/build/include CC=/usr/bin/gcc GREP=/usr/bin/make IS_GCC_3=no VMCCVER=4.6 Jun 05 00:03:42.016: app-3075712704| Failed to compile module vmmon! Is it a gcc problem ?

    Read the article

  • How to "apt-get -f install" without deleting software?

    - by Jeggy
    I know Guitar pro doesn't support 64 bit, but i did get it to work with this command jeggy@jeggy-XPS:~$ sudo dpkg --force-architecture -i GuitarPro6-rev9063.deb [sudo] password for jeggy: Selecting previously unselected package guitarpro6:i386. (Reading database ... 285729 files and directories currently installed.) Unpacking guitarpro6:i386 (from GuitarPro6-rev9063.deb) ... dpkg: dependency problems prevent configuration of guitarpro6:i386: guitarpro6:i386 depends on gksu. dpkg: error processing guitarpro6:i386 (--install): dependency problems - leaving unconfigured Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Processing triggers for desktop-file-utils ... Processing triggers for gnome-menus ... Errors were encountered while processing: guitarpro6:i386 And even after i get that error the program perfectly works fine and updating and adding PPA's to the system works great, but when I'm trying to install some other software i get this error: jeggy@jeggy-XPS:~$ sudo apt-get install elinks Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: elinks : Depends: libfsplib0 (>= 0.9) but it is not going to be installed Depends: liblua50 (>= 5.0.3) but it is not going to be installed Depends: liblualib50 (>= 5.0.3) but it is not going to be installed Depends: libtre5 but it is not going to be installed Depends: elinks-data (= 0.12~pre5-7ubuntu1) but it is not going to be installed guitarpro6:i386 : Depends: gksu:i386 but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). And whenever i write "apt-get -f install" i get this jeggy@jeggy-XPS:~$ sudo apt-get -f install [sudo] password for jeggy: Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following packages were automatically installed and are no longer required: dconf-gsettings-backend:i386 python-levenshtein python-indicate libav-tools libstartup-notification0:i386 libxmuu1:i386 libavfilter-extra-2 libbabl-0.0-0 libgegl-0.0-0 libgconf2-4:i386 python-vobject libgtk-3-0:i386 libpam-cap:i386 python-utidylib libdconf0:i386 python-iniparse python-xmpp libpam-gnome-keyring:i386 libxcb-util0:i386 python-farstream Use 'apt-get autoremove' to remove them. The following packages will be REMOVED: guitarpro6:i386 0 upgraded, 0 newly installed, 1 to remove and 7 not upgraded. 1 not fully installed or removed. After this operation, 84,0 MB disk space will be freed. Do you want to continue [Y/n]? y (Reading database ... 286979 files and directories currently installed.) Removing guitarpro6:i386 ... dpkg: warning: while removing guitarpro6:i386, directory '/opt/GuitarPro6/updater' not empty so not removed. dpkg: warning: while removing guitarpro6:i386, directory '/opt/GuitarPro6/Data/Soundbanks' not empty so not removed. Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Processing triggers for desktop-file-utils ... Processing triggers for gnome-menus ... And now Guitar Pro is deleted. How can i install Guitar Pro and still be able to install other software afterwards?

    Read the article

  • How to Build Services from Legacy Applications

    - by Chris Falter
    The SOA consultants invaded the executive suite at your company or agency, preached the true religion, and converted the unbelievers. Now by divine imperative you must convert your legacy applications into a suite of reusable services.  But as usual, you lack the time and resources that you need in order to develop the services properly.  So you googled or bing’ed, found this blog post, and began crying in gratitude.  Yes, as the title implies, I am going to reveal my easy, 3-step, works-every-time process for converting silos of legacy applications into the inventory of services your CIO has been dreaming about.  So just close your eyes and count to 3 … now open them … and here it is…. Not. While wishful thinking is too often the coin of the IT realm, even the most naive practitioner knows that converting legacy applications into reusable services requires more than a magic wand.  The reason is simple: if your starting point is your legacy applications, then you will simply be bolting a web service technology layer on top of your legacy API.  And that legacy API is built in the image of the silo applications.  Enter the wide gate of the legacy API, follow the broad path of generating service interfaces from existing code, and you will arrive at the siloed enterprise destruction that you thought you were escaping. The Straight and Narrow Path This past week I had the opportunity to learn how the FBI Criminal Justice Information Systems department has been transitioning from silo applications to a service inventory.  Lafe Hutcheson, IT Specialist in the architecture group and fellow attendee at an SOA Architect Certification Workshop, was my guide.  Lafe has survived the chaos of an SOA initiative, so it is not surprising that he was able to return from a US Army deployment to Kabul, Afghanistan with nary a scratch.  According to Lafe, building their service inventory is a three-phase process: Model a business process.  This requires intense collaboration between the IT and business wings of the organization, of course.  The FBI uses IBM Websphere tools to model the process with BPMN. Identify candidate services to facilitate the business process. Convert the BPMN to an executable BPEL orchestration, model and develop the services, and use a BPEL engine to run the process.  The FBI uses ActiveVOS for orchestration services. The 12 Step Program to End Your Legacy API Addiction Thomas Erl has documented a process for building a web service inventory that is quite similar to the FBI process. Erl’s process adds a technology architecture definition phase, which allows for the technology environment to influence the inventory blueprint.  For example, if you are using an enterprise service bus, you will probably not need to build your own utility services for logging or intermediate routing.  Erl also lists a service-oriented analysis phase that highlights the 12-step process of applying the principles of service orientation to modeling your services.  Erl depicts the modeling of a service inventory as an iterative process: model a business process, define the relevant technology architecture, define the service inventory blueprint, analyze the services, then model another business process, rinse and repeat.  (Astute readers will note that Erl’s diagram, restricted to analysis and modeling process, does not include the implementation phase that concludes the FBI service development methodology.) 
The service-oriented analysis phase is where you find the 12 steps that will free you from your legacy API addiction. In a nutshell, you identify the steps in the process that need services; identify the different types of services (agnostic entity services, service compositions, and utility services) that are required; apply service-orientation principles; and normalize the inventory into cohesive service models. Rather than discuss each of the 12 steps individually, I will close by simply referring my readers to Erl’s explanation.

    Read the article

  • Heading out to Dallas GiveCamp 2011

    - by dotgeek
    The day has finally arrived for twelve local charities here in the Dallas area, when they'll get some help with their website initiatives from various local developers at this year's Dallas GiveCamp. I'm really looking forward to helping out at this year's event, which I hope will be the start of many more GiveCamps to follow. Similar to Habitat for Humanity, where people gather to help build and improve homes for people in need, GiveCamp brings together programmers and equips them with the virtual tools they need to build and improve the charities' existing websites. Tonight is when things kick off for this weekend's events and teams will start working on their various projects. The building then continues on through the night and all the way through until Sunday afternoon. The end goal for the teams and charities is to have a completed and working website for each charity to begin using, and to turn over all the production code and digital assets to them. None of this would be possible without the great sponsors we have returning once again and their donations of various products to help these charities with their projects - like Telerik's CMS product Sitefinity 4.0, paired with a year of hosting from Verio, to mention just a few. Just like the skilled builders who might train volunteers in the use of a nail gun when building a house, training is also available here on site for the developers and these local charities, giving them all the skills to manage and use these products, from site development through to actual production - a key to the success of this weekend's event. Tonight's training sessions will kick off with a real treat from Giovanni Gallucci, as he speaks about Social Media for NPOs, and later Gabe Sumner from Telerik will begin a training session on Sitefinity for Developers. These training sessions will continue throughout the weekend, with DotNetNuke and Mojo Portal sessions also planned. If you're a developer and would like to help out in the future, then check in your area and with your local user groups to find out if you already have a GiveCamp near you. If you don't have one available, then consider starting up a local GiveCamp and then you too can help Code it Forward.

    About GiveCamp
    GiveCamp is a weekend-long event where software developers, designers, and database administrators donate their time to create custom software for non-profit organizations. This custom software could be a new website for the nonprofit organization, a small data-collection application to keep track of members, or an application for the Red Cross that automatically emails a blood donor three months after they've donated blood to remind them that they are now eligible to donate again. The only limitation is that the project should be scoped to be able to be completed in a weekend. During GiveCamp, developers are welcome to go home in the evenings or camp out all weekend long. Food and drink are usually provided at the event. There are sometimes even game systems set up for when you and your team need a little break! Overall, it's a great opportunity for people to work together, developing new friendships, and doing something important for their community. At GiveCamp, there is an expectation of "What Happens at GiveCamp, Stays at GiveCamp".
Therefore, all source code must be turned over to the charities at the end of the weekend (developers cannot ask for payment) and the charities are responsible for maintaining the code moving forward (charities cannot expect the developers to maintain the codebase).

    Read the article

  • Ubuntu 12.04 LTS initramfs-tools dependency issue

    - by Mike
    I know this has been asked several times, but each issue and resolution seems different. I've tried almost everything I could think of, but I can't fix this. I have a VM (VMware I think) running 12.04.03 LTS which has stuck dependencies. The VM is on a rented host, running a live system so I don't want to break it (further). uname -a Linux support 3.5.0-36-generic #57~precise1-Ubuntu SMP Thu Jun 20 18:21:09 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux Some more: sudo apt-get update [sudo] password for tracker: Reading package lists... Done Building dependency tree Reading state information... Done You might want to run ‘apt-get -f install’ to correct these. The following packages have unmet dependencies. initramfs-tools : Depends: initramfs-tools-bin (< 0.99ubuntu13.1.1~) but 0.99ubuntu13.3 is installed E: Unmet dependencies. Try using -f. sudo apt-get install -f Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: initramfs-tools The following packages will be upgraded: initramfs-tools 1 upgraded, 0 newly installed, 0 to remove and 2 not upgraded. 2 not fully installed or removed. Need to get 0 B/50.3 kB of archives. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? Y dpkg: dependency problems prevent configuration of initramfs-tools: initramfs-tools depends on initramfs-tools-bin (<< 0.99ubuntu13.1.1~); however: Version of initramfs-tools-bin on system is 0.99ubuntu13.3. dpkg: error processing initramfs-tools (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates it's a follow-up error from a previous failure. dpkg: dependency problems prevent configuration of apparmor: apparmor depends on initramfs-tools; however: Package initramfs-tools is not configured yet. dpkg: error processing apparmor (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates it's a follow-up error from a previous failure. Errors were encountered while processing: initramfs-tools apparmor E: Sub-process /usr/bin/dpkg returned an error code (1) If I look at the policy behind initramfs-tools / bin I get: apt-cache policy initramfs-tools initramfs-tools: Installed: 0.99ubuntu13.1 Candidate: 0.99ubuntu13.3 Version table: 0.99ubuntu13.3 0 500 http://gb.archive.ubuntu.com/ubuntu/ precise-updates/main amd64 Packages *** 0.99ubuntu13.1 0 100 /var/lib/dpkg/status 0.99ubuntu13 0 500 http://gb.archive.ubuntu.com/ubuntu/ precise/main amd64 Packages apt-cache policy initramfs-tools-bin initramfs-tools-bin: Installed: 0.99ubuntu13.3 Candidate: 0.99ubuntu13.3 Version table: *** 0.99ubuntu13.3 0 500 http://gb.archive.ubuntu.com/ubuntu/ precise-updates/main amd64 Packages 100 /var/lib/dpkg/status 0.99ubuntu13 0 500 http://gb.archive.ubuntu.com/ubuntu/ precise/main amd64 Packages So the issue seems to be I have 0.99ubuntu13.3 for initramfs-tools-bin yet 0.99ubuntu13.1 for initramfs-tools, and can't upgrade to 0.99ubuntu13.3. I've performed apt-get clean/autoclean/install -f/upgrade -f many times but they won't resolve. I can think of only 2 other 'solutions': Edit the dpkg dependency list to trick it into doing the installation with a broken dependency. This seems very dodgy and it would be a last resort Downgrade both initramfs-tools and initramfs-tools-bin to 0.99ubuntu13 from the precise/main sources and hope that would get them in step. 
However I'm not sure if this will be possible, or whether it would introduce more issues. I'm not sure how this situation arose in the first place. /boot was 96% full; it's now 56% full (it's tiny - 64MB ... this is what I got from the hosting company). Can anyone offer advice please?

    Read the article

  • Review – Build Android and iOS apps in Visual Studio with Nomad

    - by Bill Osuch
    Nomad is a Visual Studio extension that allows you to build apps for both Android and iOS platforms in Visual Studio using HTML5. There is no need to switch between .Net, Java and Objective-C to target different platforms - write your code once in HTML5 and build for all common mobile platforms and tablets. You have access to the native hardware functions (such as camera and GPS) through the PhoneGap library, and UI libraries such as jQuery Mobile allow you to create an impressive UI with minimal work. Nomad is still in an early access beta stage, so the documentation is a bit sparse. In fact, the only documentation is a simple series of steps on how to install the plug-in, set up a project, and build and deploy it. You're going to want to be at least a little familiar with the PhoneGap library and jQuery Mobile to really tap into the power of this. The sample project included with the download shows you just how simple it is to create projects in Visual Studio. The sample solution comes with an index.html file containing the HTML5 code, the Cordova (PhoneGap) library, jQuery libraries, and a jQuery style sheet. The html file is pretty straightforward. If you haven't experimented with jQuery Mobile before, some of the attributes (such as data-role) might be new to you, but some quick Googling will fill in everything you need to know. The first part of the file builds a simple (but attractive) list with some links in it. The second part of the file is where things get interesting and it taps into the PhoneGap library. For instance, it gets the geolocation position by calling position.coords.latitude and position.coords.longitude, and then displays it in a simple span. Building is pretty simple, at least for Android (I'm not an iOS developer so I didn't look at that feature) - just configure the display name, version number, and package ID. There's no need to specify the Android version; Nomad supports 2.2 and later. Enter these bits of information, click the new "Build for Android" button (not the regular Visual Studio Build link...) and you get a dialog box saying that your code is being built by their cloud build service (so no building while away from a WiFi signal, apparently). After a couple of minutes you wind up with a .apk file that can be copied over to your device. Applications built with Nomad for Android currently use a temporary certificate, so you can test the app on your devices but you cannot publish them in the Google Play Store (yet). And I love the "success" dialog box. Since Nomad is still in beta, no pricing plans have been announced yet, so I'll be curious to see if this becomes a cost-effective solution to mobile app development. If it is, I may even be tempted to spring for the $99 iOS membership fee! In the meantime, I plan to work on porting some of my apps over to it and seeing how they work. My only quibble at this time is the lack of a centralized documentation location - I'd like to at least see which (if any) features of jQuery and PhoneGap are limited or not supported. Also, some notes on targeting different Android screen sizes would be nice, but it's relatively easy to find jQuery examples out on the InterWebs. Oh well, trial and error! You can download the Nomad extension for Visual Studio by going to their web site: www.vsnomad.com. Technorati Tags: Android, Nomad

    Read the article

  • Notes from a short presentation on NodeJs

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2014/05/30/notes-from-a-short-presentation-on-nodejs.aspx
    I volunteered myself to give a short 30 minute presentation at a work lunch and learn on NodeJs. With my limited experience I see using Node as a great tool for build process improvement, scaffolding with yeoman, and running tests with Karma. I haven't looked into using it as a full server or development stack. I guess I'm too stuck on IIS and Visual Studio :-). Here are my notes, which aren't very well formatted, but I wanted to share them anyway.

    What is it?
    "Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices."

    Why should you be interested?
    another popular tool that can help you get the job done
    you can use the command prompt!
    can be run at build or release time to automate tasks

    What are some uses?
    https://www.npmjs.org/ - NuGet for Node packages
    http://bower.io/ - NuGet for UI JavaScript libraries (jQuery, Bootstrap, Angular, etc)
    http://yeoman.io/ - "Our workflow is comprised of three tools for improving your productivity and satisfaction when building a web app: yo (the scaffolding tool), grunt (the build tool) and bower (for package management)." -> yeoman asks which components you want
    alternative - http://joakimbeng.eu01.aws.af.cm/slush-replacing-yeoman-with-gulp/
    https://www.npmjs.org/package/generator-cg-angular - phantom js, less
    // git is needed for bower - http://git-scm.com/ - run installer in Windows before you can use bower
    // select Run Git from the Windows Command Prompt in the installer // requires a reboot
    http://stackoverflow.com/questions/20069297/bower-git-not-in-the-path-error
    npm install -g git
    npm install -g yo
    npm install -g generator-cg-angular
    mkdir myapp
    cd myapp
    yo cg-angular
    npm install -g bower
    npm install -g grunt-cli
    yo bower
    grunt serve
    grunt test
    grunt build
    // there are many generators (generator-angular is another one)
    // I like the NuGet HotTowel-Angular from John Papa myself
    // needed IIS Node for Express -> prompt from WebMatrix
    Karma - bat to startup Karma - see below
    image compression - https://www.npmjs.org/search?q=optimize+images, https://github.com/heldr/node-smushit - do it from the command line
    LESS compiling
    js and css combine and minification at build with Gulp for requireJS apps
    quick lightweight HTTP server - "Express"
    Build pipeline with Grunt or Gulp - http://www.johnpapa.net/gulp-and-grunt-at-anglebrackets/ - Gulp is the newer and improved over Grunt. Supposed to be easier to use, but Grunt is more established.
    https://github.com/johnpapa/ng-demos/tree/master/grunt-gulp
    https://github.com/assetgraph/assetgraph-builder - does a lot of the minimizing, combining, image optimization etc. using Node. Looks interesting....
    http://nodejs.org
    http://nodeschool.io/
    http://sub.watchmecode.net/getting-started-with-nodejs-installing-and-writing-your-first-code/
    https://stormpath.com/blog/build-a-killer-node-dot-js-client-for-your-rest-plus-json-api/
    https://codio.com/
    http://www.hanselman.com/blog/ItsJustASoftwareIssueEdgejsBringsNodeAndNETTogetherOnThreePlatforms.aspx
    run unit tests - Karma in msBuild

    karma-start.bat:
        @echo off
        cd %~dp0\..
        REM 604800 is to make sure we only update once every 7 days
        call npm install --cache-min 604800 -g grunt-cli
        call npm install --cache-min 604800
        call npm install --cache-min 604800 -g karma-cli
        karma start UnitTests\karma.conf.js
        REM karma start UnitTests\karma.conf.js --single-run
        REM see karma-start.bat and karam.config.js
        REM jsHint comes from Nuget

    Read the article

  • Join us on our Journey to be #1 in SaaS!

    - by jessica.ebbelaar(at)oracle.com
    WHY ORACLE?
    Oracle is a robust organization that has proven to maintain growth and innovation at all levels with a constantly evolving attitude. The main ingredient of Oracle's success is the 105,000 talented employees who constantly amaze each other in building a better and more innovative organization. Oracle is a company where YOU can make a difference.

    What is OD?
    Oracle Direct is a state-of-the-art, multi-channel EMEA sales operation bringing to life the benefits of Oracle's complete technology stack. It offers you the unique opportunity to work with the most talented and like-minded sales professionals in the industry. You will have access to world class training and structured career development programmes allowing you to accelerate your Solution Sales career across a multitude of product lines and a choice of attractive locations.

    What positions is OD hiring for?
    Oracle is on a journey to be the #1 SaaS vendor in EMEA. Due to recent expansion and acquisitions within our Cloud Business, we are now growing our EMEA Cloud Applications Sales Group in Dublin. We have many exciting NEW opportunities across our CRM and HCM SaaS Sales teams. As a SaaS Sales Account Manager, you will proactively manage an assigned territory / vertical with responsibility for the full sales cycle. This role requires strong business development, solution selling, account management and closing skills.

    What is the Business Development Group (BDG)?
    The Business Development Group is the key entry point in Oracle for the future Sales and Management talent of the organisation. We are the Demand Generation engine for Oracle in EMEA. We provide revenue generating, quality sales pipeline to our Inside and Field Sales professionals as well as to our Channel Partners. Our current focus is to provide an agile and flexible service offering to our customers and stakeholders to meet ever changing business needs, whilst constantly striving to improve the customer experience, quality of our pipeline, market coverage and penetration. As a SaaS Business Development Consultant (BDC) you will be the first touch point with new customers. Your goal is to proactively identify and qualify business opportunities leading to revenue for Oracle. You will work closely with your Inside Sales colleagues who will progress your qualified pipeline and opportunities.

    Work for us
    Work for the only multi-pillar SaaS vendor in the market
    Be part of a FUN, fast paced and truly international sales team
    Develop your solution sales EXPERTISE
    Drive your CAREER development within a structured and supportive environment

    The Profile
    You have a passion for selling cutting-edge technology
    You thrive in a fast paced and dynamic work environment where being the best is paramount
    Your priority is always the customer
    You live for a challenge and you love to win

    Join us on our Journey to be #1 in SaaS and be part of our Cloud Success Story! You will find more information about open roles here

    Read the article

  • How to remove synaptic without installing all the unwanted packages?

    - by Jay
    I am trying to uninstall synaptic. I prefer using apt-get and other command line tools to manage my packages. So I do not need synaptic and the software manager. I'm trying to remove both of them using apt-get. Its a new box. Recently installed Linux Mint mate 15. After installation, the only thing I did was, sudo apt-get update and sudo apt-get dist-upgrade After that, I did this command for removing synaptic, sudo apt-get remove --purge synaptic But this gives me a very weird output, Reading package lists... Done Building dependency tree Reading state information... Done The following packages were automatically installed and are no longer required: apturl-kde icoutils kate-data katepart kde-runtime kde-runtime-data kdelibs-bin kdelibs5-data kdelibs5-plugins kdesudo kdoctools kubuntu-debug-installer libattica0.4 libdlrestrictions1 libkactivities-bin libkactivities-models1 libkactivities6 libkatepartinterfaces4 libkcmutils4 libkde3support4 libkdeclarative5 libkdecore5 libkdesu5 libkdeui5 libkdewebkit5 libkdnssd4 libkemoticons4 libkfile4 libkhtml5 libkidletime4 libkio5 libkjsapi4 libkjsembed4 libkmediaplayer4 libknewstuff3-4 libknotifyconfig4 libkntlm4 libkparts4 libkpty4 libkrosscore4 libktexteditor4 libkxmlrpcclient4 libnepomuk4 libnepomukcore4abi1 libnepomukquery4a libnepomukutils4 libntrack-qt4-1 libntrack0 libphonon4 libplasma3 libpolkit-qt-1-1 libpoppler-qt4-4 libqapt2 libqapt2-runtime libqca2 libqt4-qt3support libsolid4 libsoprano4 libstreamanalyzer0 libstreams0 libthreadweaver4 libvirtodbc0 nepomuk-core nepomuk-core-data ntrack-module-libnl-0 odbcinst odbcinst1debian2 oxygen-icon-theme phonon phonon-backend-gstreamer plasma-scriptengine-javascript qapt-batch shared-desktop-ontologies soprano-daemon virtuoso-minimal virtuoso-opensource-6.1-bin virtuoso-opensource-6.1-common Use 'apt-get autoremove' to remove them. 
The following extra packages will be installed: apturl-kde icoutils kate-data katepart kde-runtime kde-runtime-data kdelibs-bin kdelibs5-data kdelibs5-plugins kdesudo kdoctools kubuntu-debug-installer libattica0.4 libdlrestrictions1 libkactivities-bin libkactivities-models1 libkactivities6 libkatepartinterfaces4 libkcmutils4 libkde3support4 libkdeclarative5 libkdecore5 libkdesu5 libkdeui5 libkdewebkit5 libkdnssd4 libkemoticons4 libkfile4 libkhtml5 libkidletime4 libkio5 libkjsapi4 libkjsembed4 libkmediaplayer4 libknewstuff3-4 libknotifyconfig4 libkntlm4 libkparts4 libkpty4 libkrosscore4 libktexteditor4 libkxmlrpcclient4 libnepomuk4 libnepomukcore4abi1 libnepomukquery4a libnepomukutils4 libntrack-qt4-1 libntrack0 libphonon4 libplasma3 libpolkit-qt-1-1 libpoppler-qt4-4 libqapt2 libqapt2-runtime libqca2 libqt4-qt3support libsolid4 libsoprano4 libstreamanalyzer0 libstreams0 libthreadweaver4 libvirtodbc0 libxml2-utils nepomuk-core nepomuk-core-data ntrack-module-libnl-0 odbcinst odbcinst1debian2 oxygen-icon-theme phonon phonon-backend-gstreamer plasma-scriptengine-javascript qapt-batch shared-desktop-ontologies soprano-daemon virtuoso-minimal virtuoso-opensource-6.1-bin virtuoso-opensource-6.1-common Suggested packages: libterm-readline-gnu-perl libterm-readline-perl-perl djvulibre-bin finger hspell libqca2-plugin-cyrus-sasl libqca2-plugin-gnupg libqca2-plugin-ossl phonon-backend-vlc phonon-backend-xine phonon-backend-mplayer The following packages will be REMOVED: aptoncd* apturl* mintupdate* mintwelcome* synaptic* The following NEW packages will be installed: apturl-kde icoutils kate-data katepart kde-runtime kde-runtime-data kdelibs-bin kdelibs5-data kdelibs5-plugins kdesudo kdoctools kubuntu-debug-installer libattica0.4 libdlrestrictions1 libkactivities-bin libkactivities-models1 libkactivities6 libkatepartinterfaces4 libkcmutils4 libkde3support4 libkdeclarative5 libkdecore5 libkdesu5 libkdeui5 libkdewebkit5 libkdnssd4 libkemoticons4 libkfile4 libkhtml5 libkidletime4 libkio5 libkjsapi4 libkjsembed4 libkmediaplayer4 libknewstuff3-4 libknotifyconfig4 libkntlm4 libkparts4 libkpty4 libkrosscore4 libktexteditor4 libkxmlrpcclient4 libnepomuk4 libnepomukcore4abi1 libnepomukquery4a libnepomukutils4 libntrack-qt4-1 libntrack0 libphonon4 libplasma3 libpolkit-qt-1-1 libpoppler-qt4-4 libqapt2 libqapt2-runtime libqca2 libqt4-qt3support libsolid4 libsoprano4 libstreamanalyzer0 libstreams0 libthreadweaver4 libvirtodbc0 libxml2-utils nepomuk-core nepomuk-core-data ntrack-module-libnl-0 odbcinst odbcinst1debian2 oxygen-icon-theme phonon phonon-backend-gstreamer plasma-scriptengine-javascript qapt-batch shared-desktop-ontologies soprano-daemon virtuoso-minimal virtuoso-opensource-6.1-bin virtuoso-opensource-6.1-common 0 upgraded, 78 newly installed, 5 to remove and 0 not upgraded. Need to get 60.9 MB of archives. After this operation, 146 MB of additional disk space will be used. Do you want to continue [Y/n]? n Abort. As you can see, apt-get is trying to install the same packages that it is asking me to autoremove. Could someone please tell me, how to uninstall synaptic properly? Or am I missing something? Just for the record, I also did, sudo apt-get autoremove --purge like it asked me to ... and this is what I got, Reading package lists... Done Building dependency tree Reading state information... Done 0 upgraded, 0 newly installed, 0 to remove and 6 not upgraded.

    Read the article

  • NFJS Central Iowa Software Symposium Des Moines Trip Report

    - by reza_rahman
    As some of you may be aware, I recently joined the well-respected US based No Fluff Just Stuff (NFJS) Tour. If you work in the US and still don't know what the No Fluff Just Stuff (NFJS) Tour is, you are doing yourself a very serious disservice. NFJS is by far the cheapest and most effective way to stay up to date through some world class speakers and talks. Following the US cultural tradition of old-fashioned roadshows, NFJS is basically a set program of speakers and topics offered at major US cities year round. The NFJS Central Iowa Software Symposium was held August 8 - 10 in Des Moines. The attendance at the event and my sessions was moderate by comparison to some of the other shows. It is one of the few events of its kind that takes place in this part of the country, so it is extremely important. I had five talks total over two days, more or less back-to-back.

    The first one was my JavaScript + Java EE 7 talk titled "Using JavaScript/HTML5 Rich Clients with Java EE 7". This talk is basically about aligning EE 7 with the emerging JavaScript ecosystem (specifically AngularJS). The slide deck for the talk is here: JavaScript/HTML5 Rich Clients Using Java EE 7 from Reza Rahman. The demo application code is posted on GitHub. The code should be a helpful resource if this development model is something that interests you. Do let me know if you need help with it, but the instructions should be fairly self-explanatory. I am delivering this material at JavaOne 2014 as a two-hour tutorial. This should give me a little more bandwidth to dig a little deeper, especially on the JavaScript end.

    The second talk (on the second day) was our flagship Java EE 7/8 talk. Currently the talk is basically about Java EE 7, but I'm slowly evolving it to transform it into a Java EE 8 talk as we move forward. The following is the slide deck for the talk: JavaEE.Next(): Java EE 7, 8, and Beyond from Reza Rahman.

    The next talk I delivered was my Cargo Tracker/Java EE + DDD talk. This talk basically overviews DDD and describes how DDD maps to Java EE using code examples/demos from the Cargo Tracker Java EE Blue Prints project: Applied Domain-Driven Design Blue Prints for Java EE from Reza Rahman.

    The fourth was my talk titled "Using NoSQL with ~JPA, EclipseLink and Java EE". The talk covers an interesting gap that there is surprisingly little material on out there. The talk has three parts -- a bird's-eye view of the NoSQL landscape, how to use NoSQL via a JPA centric facade using EclipseLink NoSQL, Hibernate OGM, DataNucleus, Kundera, Easy-Cassandra, etc., and how to use NoSQL native APIs in Java EE via CDI. The slides for the talk are here: Using NoSQL with ~JPA, EclipseLink and Java EE from Reza Rahman. The JPA based demo is available here, while the CDI based demo is available here. Both demos use MongoDB as the data store. Do let me know if you need help getting the demos up and running.

    I finished off the event with a talk titled Building Java HTML5/WebSocket Applications with JSR 356. The talk introduces HTML5 WebSocket, overviews JSR 356, tours the API and ends with a small WebSocket demo on GlassFish 4 (a minimal endpoint sketch in the same spirit appears at the end of this entry). The slide deck for the talk is posted below: Building Java HTML5/WebSocket Applications with JSR 356 from Reza Rahman. The demo code is posted on GitHub: https://github.com/m-reza-rahman/hello-websocket.

    My next NFJS show is the Greater Atlanta Software Symposium on September 12 - 14. Here's my tour schedule so far; I'll keep you up-to-date as the tour goes forward:
    September 12 - 14, Atlanta.
    September 19 - 21, Boston.
    October 17 - 19, Seattle.
    I hope you'll take this opportunity to get some updates on Java EE as well as the other useful content on the tour!
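    To give a flavor of the JSR 356 API covered in that last talk, here is a minimal sketch of a Java WebSocket server endpoint (an illustrative echo endpoint, not the actual demo from the session; it should deploy to any Java EE 7 container such as GlassFish 4):

        import javax.websocket.OnMessage;
        import javax.websocket.OnOpen;
        import javax.websocket.Session;
        import javax.websocket.server.ServerEndpoint;

        // Annotated JSR 356 endpoint; the container manages the WebSocket handshake and lifecycle.
        @ServerEndpoint("/echo")
        public class EchoEndpoint {

            @OnOpen
            public void onOpen(Session session) {
                System.out.println("WebSocket opened: " + session.getId());
            }

            @OnMessage
            public String onMessage(String message) {
                // A non-void return value is sent back to the client over the same connection.
                return "Echo: " + message;
            }
        }

    A browser client can then talk to such an endpoint with plain HTML5 JavaScript, e.g. new WebSocket("ws://localhost:8080/myapp/echo") (the context path here is just a placeholder), which is the kind of pairing the talk demonstrates on GlassFish 4.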

    Read the article

  • Construction Paper, Legos, and Architectural Modeling

    I can remember as a kid playing with construction paper and Legos to explore my imagination. Through my exploration I was able to build airplanes, footballs, guns, and more out of paper. Additionally, I could create entire cities, robots, or anything else I could imagine out of Legos. These toys, I now realize, were in fact tools that gave me an opportunity to explore my ideas in the physical world through the use of modeling. My imagination was allowed to run wild as I, unknowingly at the time, made design decisions that directly affected the models I was building from the raw materials. To prove my point further, I can remember building a paper airplane that seemed to go nowhere when I tried to throw it. So I decided to attach a paper clip to the plane before throwing it the next time, to test my concept that by adding more weight the plane would fly better and for longer distances. The paper airplane allowed me to model my design decision by creating an artifact: a paper airplane that carried extra weight through the incorporation of the paper clip into the design. Also, I remember using Legos to build all sorts of creations, and these creations became artifacts of my imagination. As I further and further defined my Lego creations through the process of playing, I was able to create elaborate artifacts of my imagination. These artifacts represented design decisions I had made in the evolution of my creation through my childlike design process. In some form or fashion the artifacts I created as a kid are very similar to the artifacts that I create when I model a software architectural concept or a software design, in that the process of making decisions is directly translated into a tangible model in the form of an architectural model.

    Architectural models have been defined as artifacts that depict design decisions of a system's architecture. The act of creating architectural models is the act of architectural modeling. Furthermore, architectural modeling is the process of creating a physical model based on architectural concepts and documenting these design decisions. In the process of creating models, the standard notation used is architectural modeling notation. This notation is the primary method of capturing the essence of design decisions regarding architecture. Modeling notations can vary based on the need and intent of a project; typically they range from natural language to a diagram based notation. Currently, the Unified Modeling Language (UML) is the industry standard in terms of architectural modeling notation, because it allows for architectures to be defined through a series of boxes, lines, arrows and other basic symbols that encapsulate design decisions into virtual components, connectors, configurations and interfaces. Furthermore, UML allows for additional breakdown of models through the use of natural language to explain each section of the model in plain English.

    One of the major factors in architectural modeling is to define what is to be modeled. As a basic rule of thumb, I tend to model architecture based on the complexity of systems or sub-systems of the architecture. Another key factor is the level of detail that is actually needed for a model. For example, if I am modeling a system for a CEO to view, then the low level details will be omitted. In comparison, if I were modeling a system for another engineer to actually implement, I would include as much detailed information as I could to help the engineer implement my design.

    Read the article

  • Mastering snow and Java development at jDays in Gothenburg

    - by JavaCecilia
    Last weekend, I took the train from Stockholm to Gothenburg to attend and present at the new Java developer conference jDays. It was professionally arranged in the Swedish exhibition hall close to the amusement park Liseberg, and we got a great deal out of the top-level presenters and hallway discussions. Understanding and Improving Your Java Process: Our main purpose was to spread information on the JVM and our monitoring tools for Java processes, so I held a crash course in the most important terms and concepts you need if you want to affect the performance of your Java process - from the JVM specification all the way to interpreting heap usage graphs. For correct analysis, you also need to understand something about process memory - you need space for the Java heap (-Xms for initial size and -Xmx for max heap size), but the process memory also contains the thread stacks (of a size set by -Xss), JVM internal data structures used for keeping track of Java objects on the heap, method compilation/optimization, native libraries, etc. If you get long pause times, make sure to monitor your application and look at the allocation rate and the frequency of pause times. My colleague Klara Ward then held a presentation on the Java Mission Control product, the profiling and diagnostics tools suite for HotSpot, coming soon. The room was packed and the talk very appreciated; Klara demonstrated four different scenarios, e.g. how to diagnose and fix latencies due to lock contention for logging. My German colleague, OpenJDK ambassador Dalibor Topic, travelled to Sweden to do the second keynote, "Make the Future Java". He let us in on the coming features and roadmap of Java, now delivering major versions on a two-year schedule (Java 7 in 2011, Java 8 in 2013, etc.), and on where to download early versions of Java 8 so problems can be reported early on. Software Development in Teams: Being a scout leader, I'm drilled in different team-building and workshop techniques for creating strong groups - of course, I had to attend Henrik Berglund's session on building successful teams. He spoke about the importance of clear goals, autonomy and agreed processes. Thomas Sundberg ended the conference by doing live remote pair programming with Alex in Romania, and gave concrete tips for people wanting to try it out (for local collaboration, remember to wash and change clothes). Memory Master Keynote: The conference keynote was delivered by the Swedish memory master Mattias Ribbing, showing off by remembering the order of a deck of cards he'd seen once. He made it interactive by getting the audience to learn a memory-mastering technique for remembering ten ordered things by heart, asking us to shout out the order backwards - and we made it! I desperately need this - I bought the book and will get back on the subject. Continuous Delivery: The most impressive presenter was Axel Fontaine on Continuous Delivery. He had very well prepared slides with key images of his message and moved about the stage like a rock star. The topic is of course highly interesting: how to create an infrastructure enabling immediate feedback to developers and the ability to release your product several times per day. Tomek Kaczanowski delivered a funny and useful presentation on good and bad tests, providing comic relief with poorly written tests and useful rules of thumb on how to rewrite them. To conclude, we had a great time and hope to see you at jDays next year :)

    Read the article

  • Finding Leaders Breakfasts - Adelaide and Perth

    - by rdatson-Oracle
    HR Executives Breakfast Roundtables: Find the best leaders using science and social media! Perth, 22nd July & Adelaide, 24th July. What is leadership in the 21st century? What does the latest research tell us about leadership? How do you recognise leadership qualities in individuals? How do you find individuals with these leadership qualities, hire and develop them? Join the Neuroleadership Institute, the Hay Group, and Oracle to hear: 1. the latest neuroscience research about human bias, and how it applies to finding and building better leaders; 2. the latest techniques to recognise leadership qualities in people; 3. how you can harness your people and social media to find the best people for your company. Reflect on your hiring practices at this thought-provoking breakfast, where you will be challenged to consider whether you are using best practices aimed at getting the right people into your company. Speakers: Abigail Scott, Hay Group - Abigail is a UK registered psychologist with 10 years of international experience in the design and delivery of talent frameworks and assessments. She has delivered innovative assessment programmes across a range of organisations to identify and develop leaders. She is experienced in advising and supporting clients through new initiatives using an evidence-based approach and has published a number of research papers on fairness and predictive validity in assessment. Karin Hawkins, NeuroLeadership Institute - Karin is the Regional Director of the NeuroLeadership Institute's Asia-Pacific region. She brings over 20 years' experience in the financial services sector, delivering cultural and commercial results across a variety of organisations and functions. As a leadership risk specialist, Karin understands the challenge of building deep bench strength in teams, and she is able to bring evidence, insight, and experience to support executives in meeting today's challenges. Robert Datson, Oracle - Robert is a Human Capital Management specialist at Oracle, with several years as a practicing manager at IBM, learning and implementing the latest management techniques for hiring, deploying and developing staff. At Oracle he works with clients to enable best practices for HR departments, drawing the linkages between HR initiatives and bottom-line improvements. Agenda: 07:30 a.m. Breakfast and Registrations; 08:00 a.m. Welcome and Introductions; 08:05 a.m. Breaking Bias in leadership decisions - Karin Hawkins; 08:30 a.m. Identifying and developing leaders - Abigail Scott; 08:55 a.m. Finding leaders, the social way - Robert Datson; 09:20 a.m. Q&A and Closing Remarks; 09:30 a.m. Event concludes. If you are an employee or official of a government organisation, please click here for important ethics information regarding this event.
To register for Perth, Tuesday 22nd July, please click HERE. To register for Adelaide, Thursday 24th July, please click HERE. Contact: To register, or if you have questions on the event, contact Aaron Tait on +61 2 9491 1404

    Read the article

  • Get Ready for Anytime, Anywhere Engagement

    - by Christie Flanagan
    Are you ready for 2015? According to IDC, 2015 is the year when more users are projected to access the internet using mobile devices than with PCs or other wired devices. There's no doubt that mobile devices are a critical means of communication today, and they are on track to become increasingly more important in the coming years. However, device formats are so varied that delivering a mobile web experience that will engage site visitors and enhance your brand can be a daunting task. Solutions that empower organizations to easily extend their web presence to the mobile channel, while saving significant time and effort in managing mobile sites, are now essential in our ever connected mobile world. So what are some of the things organizations should look for in such a solution? Mobile device form factors, networks, protocols, and browsers vary widely, and reformatting web content for thousands of different device and software combinations is a prohibitive task. An effective mobile solution can make this process seamless by automatically formatting designated web content for mobile delivery. By automatically detecting a site visitor's device configuration, the selected web content can be sized and formatted for optimal display on that particular device. This can save tremendous time involved in building, formatting, and maintaining individual websites or mobile applications for different mobile devices. It's not enough to simply support the thousands of different mobile device types that are out there. It's also critical to make it easy for marketers and other business users to manage mobile sites and mobile content. Those responsible for maintaining an organization's web and mobile experiences need the ability to edit content using rich text editor tools and then preview that content directly in the context of the mobile website and the traditional website, ideally from the same business user interface. Powerful capabilities such as these make managing the web experience for mobile devices easy, even with frequently changing content, across a multitude of different devices. When content or business needs change, the business user needs only to change site content once, and it is seamlessly deployed to the web and all mobile channels. Geo-location is another critical input to making the online experience engaging and relevant for web visitors who are increasingly mobile. A mobile solution should enable use of device GPS data to deliver location-based content and services to mobile website visitors.
Organizations can provide mobile site visitors with location-sensitive search results, location-based offers and recommendations, integration of maps and directions into site content, and much more – all critical for meeting the needs of those on the go. To hear more about how mobile is changing the game, check out our recent webcast with Ted Schadler, Vice President, Principal Analyst, Forrester, where he discussed why mobile is the new face of engagement, or learn more about how to extend your web presence to the mobile channel with Oracle WebCenter Sites and Oracle WebCenter Sites Mobility Server.

    Read the article

  • Oracle at ARM TechCon

    - by Tori Wieldt
    ARM TechCon is a technical conference for hardware and software engineers, Oct. 30 - Nov. 1 in Santa Clara, California. Days two and three of the conference will be geared towards systems designers and software developers - those interested in building ARM processor-based modules, boards, and systems. It will cover hardware, software, and tools, ranging from low-power design and networking and connectivity to open source software and security. Oracle is a sponsor of ARM TechCon and will present three Java sessions and a hands-on lab: "Do You Like Coffee with Your Dessert? Java and the Raspberry Pi" - The Raspberry Pi, an ARM-powered single board computer running a full Linux distro off an SD card, has caused a huge wave of interest among developers. This session looks at how Java can be used on a device such as this. Using Java SE for embedded devices and a port of JavaFX, the presentation includes a variety of demonstrations of what the Raspberry Pi is capable of. The Raspberry Pi also provides GPIO line access, and the session covers how this can be used from Java applications. Prepare to be amazed at what this tiny board can do. (Angela Caicedo, Java Evangelist) "Modernizing the Explosion of Advanced Microcontrollers with Embedded Java" - This session explains why Oracle Java ME Embedded is the right choice for building small, connected, and intelligent embedded solutions, such as industrial control applications, smart sensing, wireless connectivity, e-health, or general machine-to-machine (M2M) functionality - extending your business to new areas, driving efficiency, and reducing cost. The new Oracle Java ME Embedded product brings the benefits of Java technology to microcontroller platforms. It is a full-featured, complete, compliant software runtime with value-add features targeted to the embedded space, and it has the ability to interface with additional hardware components, remote manageability, and over-the-air software updates. It is accompanied by a feature-rich set of tools free of charge. (Fareed Suliman, Java Product Manager) "Embedded Java in Smart Energy and Healthcare" - This session covers embedded Java products and technologies that enable smart and connected devices in the Smart Energy and Healthcare/Medical industries. (speaker Kevin Lee) "Java SE Embedded Development on ARM Made Easy" - This hands-on lab aims to show that developers already familiar with the Java develop/debug/deploy lifecycle can apply those same skills to develop Java applications, using Java SE Embedded, on embedded devices. (speaker Jim Connors) In the Oracle booth #603, you can see the following demos: Industry Solutions with Java - This exhibit consists of a number of industry solutions and how they can be powered by Java technology deployed on embedded systems. Examples in consumer devices, home gateways, mobile health, smart energy, industrial control, and tablets, all powered by applications running on the Java platform, are shown. Some of the solutions demonstrate the ability of Java to connect intelligent devices at the edge of the network to the datacenter or the cloud as a total end-to-end platform. Java in M2M with Qualcomm - This station will exhibit a new M2M solutions platform co-developed by Oracle and Qualcomm that enables wireless communications for embedded smart devices powered by Java, and share the types of industry solutions that are possible.
In addition, a new platform for wearable devices based on the ARM Cortex M3 platform is exhibited. Why Java for Embedded? - Demonstration platforms will show how traditional development environments, tools, and Java programming skills can be used to create applications for embedded devices. The advantages that Java provides because of the runtime's abstraction of software from hardware, modularity and scalability, security, and application portability and manageability are shared with attendees. Drop by and see why Java is an optimal applications platform for embedded systems.

    Read the article

  • Top 5 characteristics Recruiters are looking for

    - by Maria Sandu
    Of course, many of the skills and characteristics recruiters are looking for are job-specific. But whether you are a graduate fresh out of college or seasoned in the workplace, recruiters are also looking for generic skills and attitude to see whether you are a good fit for the company. So make sure you prepare and show through examples that you have these skills. 1. Drive/passion: Liking the job you are applying for is paramount and something recruiters are always looking for. Show and prove your drive for the role and/or the field you are applying for. Always be prepared to pitch yourself; this shows your drive for the role you are applying for. 2. Communication skills: People often make the mistake of thinking this skill is only about how well they are able to talk about their background and expertise. This is important, but at least as important is that you listen well to the questions that are asked. Make sure you answer to the point and ask for clarification if a question is unclear. This shows your interest in the role and your ability to communicate clearly. It also helps you build trust with the recruiter every time you speak to him or her. 3. Confidence: Recruiters are looking for the best candidate for the job. So if you don't think you are the best candidate, why should the recruiter? Show with confidence, without being arrogant (think about building trust), why you are the right person for the job. Confidence also shows in your answers to difficult questions. Be confident enough to explain why some experiences went wrong and how you learnt from them. If you don't have a direct answer to a question, it is better to ask for a second to think than to give a random answer. 4. Vision: For many companies, the main reason to hire graduates is that graduates are perceived to be flexible. The organisation will train and upskill you in the direction best suited to the organisation. However, the most intense learning path is realised when you also know where you want to go. Companies are often happy to support you with training and development, but if you don't have a clear vision of what you want to achieve for yourself and what value you bring to the company, recruiters can decide you are not the right candidate, as they are afraid you aren't going to stay with the company. 5. Business awareness: For every job you apply for, you will be challenged on your knowledge of and interest in the market and business the company is in. All companies add value in different ways in their respective markets. So make sure you are aware of what a company is doing, what its goal is, why and how it exists, and how you can add value for the company in the role you are applying for.

    Read the article

  • Asynchrony in C# 5 (Part II)

    - by javarg
    This article is a continuation of the series on the asynchronous features included in the new Async CTP preview for the next versions of C# and VB. Check out Part I for more information. The series covers: asynchronous functions, TPL Dataflow, and the task-based asynchronous pattern. So, let's continue with Part II: TPL Dataflow. Definition (quoting the Async CTP doc): "TPL Dataflow (TDF) is a new .NET library for building concurrent applications. It promotes actor/agent-oriented designs through primitives for in-process message passing, dataflow, and pipelining. TDF builds upon the APIs and scheduling infrastructure provided by the Task Parallel Library (TPL) in .NET 4, and integrates with the language support for asynchrony provided by C#, Visual Basic, and F#." This means: data manipulation processed asynchronously. "TPL Dataflow is focused on providing building blocks for message passing and parallelizing CPU- and I/O-intensive applications". Data manipulation is another hot area when designing asynchronous and parallel applications: how do you sync data access in a parallel environment? How do you avoid concurrency issues? How do you notify when data is available? How do you control how much data is waiting to be consumed? Etc. Dataflow Blocks: TDF provides data and action processing blocks. Imagine having preconfigured data processing pipelines to choose from, depending on the type of behavior you want. The most basic block is the BufferBlock<T>, which provides storage for some kind of data (instances of <T>). So, let's review the data processing blocks available. Blocks are categorized into three groups: Buffering Blocks, Executor Blocks, and Joining Blocks. Think of them as electronic circuitry components :) 1. BufferBlock<T>: a FIFO (First In, First Out) queue. You can Post data to it and then Receive it synchronously or asynchronously. It synchronizes data consumption for only one receiver at a time (you can have many receivers, but only one will actually process each item). 2. BroadcastBlock<T>: the same FIFO queue for messages (instances of <T>), but it links the receiving event to all consumers (it makes the data available for consumption to N consumers). The developer can provide a function to make a copy of the data if necessary. 3. WriteOnceBlock<T>: it stores only one value, and once it's been set, it can never be replaced or overwritten again (immutable after being set). As with BroadcastBlock<T>, all consumers can obtain a copy of the value. 4. ActionBlock<TInput>: this executor block allows us to define an operation to be executed when posting data to the queue. Thus, we must pass in a delegate/lambda when creating the block. Posting data will result in an execution of the delegate for each item in the queue. You can also specify how many parallel executions to allow (the degree of parallelism). 5. TransformBlock<TInput, TOutput>: this is an executor block designed to transform each input; that is why it defines an output parameter. It ensures messages are processed and delivered in order. 6. TransformManyBlock<TInput, TOutput>: similar to TransformBlock, but produces one or more outputs from each input. 7. BatchBlock<T>: combines N single items into one batch item (it buffers and batches inputs). 8. JoinBlock<T1, T2, …>: it generates tuples from all inputs (it aggregates inputs). Inputs could be of any type you want (T1, T2, etc.). 9. BatchJoinBlock<T1, T2, …>: aggregates tuples of collections.
It generates collections for each type of input and then creates a tuple to contain each collection (Tuple<IList<T1>, IList<T2>>). Next time I will show some examples of usage for each TDF block. * Images taken from Microsoft’s Async CTP documentation.
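    To make the block descriptions above a little more concrete while waiting for those examples, here is a minimal sketch of my own (not taken from the article or the CTP documentation) of a two-stage pipeline built from a TransformBlock and an ActionBlock. It assumes the released System.Threading.Tasks.Dataflow library is referenced; the exact API in the CTP preview may differ slightly.

    using System;
    using System.Threading.Tasks.Dataflow;

    class PipelineSketch
    {
        static void Main()
        {
            // TransformBlock: takes an int in, produces a string out.
            var toText = new TransformBlock<int, string>(n => "item " + n);

            // ActionBlock: consumes each string, here by printing it.
            var print = new ActionBlock<string>(s => Console.WriteLine(s));

            // Wire the blocks together and propagate completion downstream.
            toText.LinkTo(print, new DataflowLinkOptions { PropagateCompletion = true });

            // Post data into the head of the pipeline.
            for (int i = 0; i < 5; i++)
                toText.Post(i);

            toText.Complete();          // signal that no more input is coming
            print.Completion.Wait();    // wait for the pipeline to drain
        }
    }

    Because completion is propagated through the link, completing the TransformBlock is enough to let the ActionBlock finish processing the buffered items and complete on its own.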

    Read the article

  • Finalists for Community Manager of the Year Announced

    - by Mike Stiles
    For as long as brand social has been around, there's still an amazing disparity from company to company on the role of Community Manager. At some brands, they are the lead social innovators. At others, the task has been relegated to interns who are at the company temporarily. Some have total autonomy and trust. Others must get chain-of-command permission each time they engage. So what does a premiere "worth their weight in gold" Community Manager look like? More than anyone else in the building, they have the most intimate knowledge of who the customer is. They live on the front lines and are the first to detect problems and opportunities. They are sincere, raving fans of the brand themselves and are trusted advocates for the others. They're fun to be around. They aren't salespeople. Give me one Community Manager who's been at the job 6 months over 5 focus groups any day. Because, not unlike in speed dating, they must immediately learn how to make a positive, lasting impression on fans so they'll want to return and keep the relationship going. They're informers and entertainers, with a true belief in the value of the brand's proposition. Internally, they live at the mercy of the resources allocated toward social. Many, whose managers don't understand the time involved in properly curating a community, are tasked with 2 or 3 too many of them. 63% of CMs will spend over 30 hours a week on one community. They come to intuitively know the value of the relationships they're building, even if that value can't always be shown in a bar graph to the C-suite. Many must communicate how the customer feels to executives that simply don't seem to want to hear it. Some can get the answers fans want quickly; others are frustrated in their ability to respond within an impressive timeframe. In short, in a corporate world coping with sweeping technological changes, amidst business school doublespeak, pie charts, decks, strat sessions and data points, the role of the Community Manager is the most…human. They are the true emotional connection to the real-life customer. Which is why we sought to find a way to recognize and honor who they are, what they do, and how well they have defined the position as social grows and integrates into the larger organization. Meet our 3 finalists for Community Manager of the Year. Jeff Esposito with Vistaprint: Jeff manages and heads up content strategy for all social networks and blogs. He also crafts company-wide policies surrounding the social space. Vistaprint won the NEDMA Gold Award for Twitter Strategy in 2010 and 2011, and a Bronze in 2011 for Social Media Strategy. Prior to Vistaprint, Jeff was Media Relations Manager with the Long Island Ducks. He graduated from Seton Hall University with a BA in English and a minor in Classical Studies. Stacey Acevero with Vocus: In addition to social management, Stacey blogs at Vocus on influential marketing and social media, and blogs at PRWeb on public relations and SEO. She's been named one of the #Nifty50 Women in Tech on Twitter 2 years in a row, as well as included in the 15 up-and-coming PR pros to watch in 2012. Carly Severn with the San Francisco Ballet: Carly drives engagement, widens the fanbase and generates digital content for America's oldest professional ballet company. Managed properties include Facebook, Twitter, Tumblr, Pinterest, Instagram, YouTube and G+. Prior to joining the SF Ballet, Carly was Marketing & Press Coordinator at The Fitzwilliam Museum at Cambridge, where she graduated with a degree in English.
We invite you to join us at the first annual Oracle Social Media Summit November 14 and 15 at the Wynn in Las Vegas where our finalists will be featured. Over 300 top brand marketers, agency executives, and social leaders & innovators will be exploring how social is transforming business. Space is limited and the information valuable, so get more info and get registered as soon as possible at the event site.

    Read the article

  • Delegates: A Practical Understanding

    - by samerpaul
    It's been a while since I have written on this blog, and I'm planning on reviving it this summer, since I have more time to do so again. I've also recently started working on the iPhone platform, so I haven't been as busy in .NET as before. In either case, today's blog post applies to both C# and Objective-C, because it's more about a practical understanding of delegates than it is about code. When I was learning coding, I felt like delegates were one of the hardest things to conceptually understand, and a lot of books don't really do a good job (in my opinion) of explaining them. So here's my stab at it. A Real Life Example of Delegates: Let's say there are three of you: you, your friend, and your brother. You're each in a different room in your house, so you can't hear each other, even if you shout. 1) You are playing a computer game. 2) Your friend is building a puzzle. 3) Your brother is napping. Now, the three of you are going to stay in your rooms, but you want to be informed if anything interesting is happening to one of you. Let's say you (playing the computer game) want to know when your brother wakes up. You could keep walking to his room, checking to see if he's napping, and then walking back to your room. But that would waste a lot of time / resources, and what if you miss the moment when he's awake before he goes back to sleep? That would be bad. Instead, you hand him a 2-way radio that works between your room and his room. And you inform him that when he wakes up, he should press a button on the radio and say "I'm awake". You are going to be listening to that radio, waiting for him to say he's awake. This, in essence, is how a delegate works. You're creating an "object" (the radio) that allows you to listen in on an event you specify. You don't want him to send any other messages to you right now, except when he wakes up. And you want to know immediately when he does, so you can go over to his room and say hi (the methods that are called when a delegate event fires). You're also currently specifying that only you are listening on his radio. Let's say you want your friend to come into the room at the same time as you, and do something else entirely, like fluff your brother's pillow. You then give him an identical radio that also hooks into your brother's radio, and inform him to wait and listen for the "I'm awake" signal. Then, when your brother wakes up, he says "I'm awake!" and both you and your friend walk into the room. You say hi, and your friend fluffs the pillow, then you both exit. Later, if you decide you don't care to say hi anymore, you turn off your radio. Now you have no idea when your brother is awake or not, because you aren't listening anymore. So again, the three of you are each classes in this example, and each of you has your own methods. You're playing a computer game (PlayComputerGame()), your friend is building a puzzle (BuildPuzzle()) and your brother is napping (Napping()). You create a delegate (ImAwake) that your brother fires when he wakes up. You listen in on that delegate (giving yourself a radio and turning it on), and when you receive the message, you fire a method called SayHi(). Your friend is also wired up to the same delegate (using an identical radio) and fires the method FluffPillow(). Hopefully this makes sense, and helps shed some light on how delegates operate. Let me know! Feel free to drop me a line at Twitter (preferred method of contact) here: samerabousalbi
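    As a rough C# rendering of the radio analogy above - a sketch of mine, not code from the post, with the Brother/ImAwake/SayHi/FluffPillow names reused purely for illustration - it might look like this:

    using System;

    class Brother
    {
        // The "radio": anyone holding a reference to Brother can subscribe to this event.
        public event Action ImAwake;

        public void WakeUp()
        {
            Console.WriteLine("Brother: I'm awake!");
            var listeners = ImAwake;
            if (listeners != null)
                listeners();   // broadcast to every listener currently subscribed
        }
    }

    class Program
    {
        static void SayHi()       { Console.WriteLine("You: walk in and say hi"); }
        static void FluffPillow() { Console.WriteLine("Friend: fluff the pillow"); }

        static void Main()
        {
            var brother = new Brother();

            brother.ImAwake += SayHi;        // you turn your radio on
            brother.ImAwake += FluffPillow;  // your friend turns his radio on

            brother.WakeUp();                // both SayHi and FluffPillow run

            brother.ImAwake -= SayHi;        // you turn your radio off
            brother.WakeUp();                // only FluffPillow runs now
        }
    }

    Strictly speaking this uses an event built on a delegate type (Action), which is the usual C# idiom for the subscribe/unsubscribe behavior the radios describe; the Objective-C equivalent would typically be a delegate protocol instead.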

    Read the article

  • What do you need to know to be a world-class master software developer? [closed]

    - by glitch
    I wanted to bring up this question to you folks and see what you think, and hopefully get your advice on the matter: let's say you had 30 years of learning and practicing software development in front of you - how would you dedicate your time so that you'd get the biggest bang for your buck? What would you both learn and work on to be a world-class software developer who would make a large impact on the industry and leave behind a legacy? I think that most great developers end up being both broad generalists and specialists in one or two areas of interest. I'm thinking Bill Joy, John Carmack, Linus Torvalds, K&R and so on. I'm thinking that perhaps one approach would be to break things down by categories and establish a base minimum of "software development" greatness. I'm thinking: Operating Systems: completely internalize the core concepts of operating systems, and perhaps gain a lot of familiarity with an OSS one such as Linux. Anything from memory management to device drivers has to be completely second nature. Programming Languages: this is one of those topics that imho has to be fully grokked even if it might take many years. I don't think there's quite anything like going through the process of developing your own compiler, understanding language design trade-offs and so on. Programming Language Pragmatics is actually one of my favorite books; I think you want to have that internalized back to back, and that's just the start. You could go significantly deeper, but I think it's time well spent, because it's such a crucial building block. As a subset of that, you want to really understand the different programming paradigms out there: imperative, declarative, logic, functional and so on. You should be at the very least comfortable writing in anything from assembly to LISP. Contexts: I believe one should have experience working in different contexts to truly be able to appreciate the trade-offs that are being made every day. Embedded, web development, mobile development, UX development, distributed, cloud computing and so on. Hardware: I'm somewhat conflicted about this one. I think you want some understanding of computer architecture at a low level, but I feel like the concepts that will truly matter will be slightly higher level, such as CPU caching / memory hierarchy, ILP, and so on. Networking: we live in a completely network-dependent era. Having a good understanding of the OSI model, knowing how the Web works, how HTTP works and so on is pretty much a prerequisite these days. Distributed systems: once again, everything's distributed these days, and it's getting progressively harder to ignore this reality. Slightly related, perhaps add a solid understanding of how browsers work to that, since the world seems to be moving so much toward interfacing with everything through a browser. Tools: have a really broad toolset that you're familiar with, one that continuously expands throughout the years. Communication: I think being a great writer, an effective communicator and a phenomenal team player is pretty much a prerequisite for a lot of a software developer's greatness. It can't be overstated. Software engineering: understanding the process of building software, team dynamics, the requirements of the business side, and all the pitfalls. You want to deeply understand where what you're writing fits from a market perspective. The better you understand all of this, the more of your work will actually see the light of day. This is really just a starting list; I'm confident that there's a ton of other material that you need to master.
As I mentioned, you most likely end up specializing in a bunch of these areas as you go along, but I was trying to come up with a baseline. Any thoughts, suggestions and words of wisdom from the grizzled veterans out there who would like to share their experiences with this? I'd really love to know what you think!

    Read the article

  • Can't install ruby-debug-ide on Windows 7

    - by Jack Allan
    I'm running netbeans 6.8 on windows 7 pro (x64) with the bitnami stack and I'm using ruby 1.8.7-p72. Note: I can't change the version of ruby I am using because I am working with a team, this is a college project and we have only 3 weeks left before we have to hand it in. Changing the version of ruby at this time would be too much work I think. I can't debug my code with the IDE. It says I must have the fast-debugger installed but I cannot install it. When I try through the gui I get the following message: Building native extensions. This could take a while... ERROR: Error installing ruby-debug-ide: ERROR: Failed to build gem native extension. "C:/Program Files/BitNami RubyStack/ruby/bin/ruby.exe" mkrf_conf.rb Building native extensions. This could take a while... Gem files will remain installed in C:/Program Files/BitNami RubyStack/ruby/lib/ruby/gems/1.8/gems/ruby-debug-ide-0.4.8 for inspection. Results logged to C:/Program Files/BitNami RubyStack/ruby/lib/ruby/gems/1.8/gems/ruby-debug-ide-0.4.8/ext/gem_make.out I have tracked the problem down to a gcc not being installed... I have installed cygwin but I'm not sure what I am doing and it's still not working... Anyone know how to fix this problem? (BTW- I have already done a lot of googling on this)

    Read the article

< Previous Page | 90 91 92 93 94 95 96 97 98 99 100 101  | Next Page >