Search Results

Search found 16971 results on 679 pages for 'blogs'.


  • Monitoring with Oracle Enterprise Manager

    - by fernando.galdino
    The figure below gives an overview of the monitoring capabilities provided by Oracle Enterprise Manager (OEM), a tool for managing a company's IT infrastructure. An important component of the solution is called OEM Grid Control. This component lets you manage, view and monitor many different elements from a single console. And which elements can be monitored? In OEM terminology, the elements that can be monitored are called Targets, and these targets cover the monitoring of hosts (Windows, Linux, Solaris), databases, middleware, web applications, services that can be customized by the administrator, systems and groups of targets, as well as Oracle applications. Each monitored element is enabled through management packs. In other words, there is a series of packs that can be purchased as needed to enable monitoring from OEM Grid Control itself. There are special monitoring packs for Oracle databases, and monitoring packs for Tomcat, JBoss, WebLogic, SOA Suite and Identity Management. The list is quite extensive and I will give more details in a future post. But if you want to take a look now, see: http://download.oracle.com/docs/cd/B16240_01/doc/nav/overview.htm Besides the monitoring packs, there are also plug-ins and connectors. Plug-ins allow the management of additional elements, such as network devices, servers, third-party databases (DB2, SQL Server), VMware, and so on. Connectors provide integration with other software, such as help desk ticketing systems, so that alerts generated by the tool can create tickets in products like CA Service Desk, BMC Remedy and others. The range of extensions is truly vast. In an upcoming post I will talk about Ops Center, a new component that arrived with the Sun acquisition. Besides Grid Control and Ops Center, there are other very interesting components. The figure below illustrates the various layers where the Oracle tooling can be used for monitoring. There is a pack that lets you manage service levels across all of the illustrated layers: given a request, the SLA data can be broken down per layer. There is also Real User Monitoring, which measures the end-user experience. I will cover it in another post, but in short the tool tracks all of the network traffic generated from the end users to the web servers, making it possible to trace how each user uses the application, how long they browse the site, whether they ran into any kind of problem, and whether any request failed to complete because of an infrastructure issue. It is a very interesting tool and I will say more about it later. And of course, there are also components for functional and load testing. Coming soon, here on the blog :)

    Read the article

  • Don’t miss the Receiving Webcast on November 20th

    - by user793553
    This one-hour session is recommended for technical and functional users who want to learn about Receiving transactions and their debugging techniques. TOPICS WILL INCLUDE: Using generic diagnostic scripts. How to read debug logs in Receiving. Data flow for various document types (PO, RMA, ISO, IOT) to help debug issues. The Receiving Transaction Processor. Generic datafixes. See Doc ID 1456150.1 to sign up now!

    Read the article

  • Control Panel display as a menu does not work upon initial install of a Windows 2008 R2 server

    - by Kevin Shyr
    I've seen this a couple of times now. Upon initial installation of Microsoft Windows Server 2008 R2, the Control Panel appears in the Start menu as a plain button. But if you go to Taskbar properties >> Start Menu tab >> Customize, you will see that the option is already set to "Display as a menu". To fix it, select "Display as a link" and press OK, then come back to the same area and select "Display as a menu" again; this solves the problem.

    Read the article

  • Easy and Rapid Deployment of Application Workloads with Oracle VM

    - by Antoinette O'Sullivan
    Oracle VM is designed for easy and rapid deployment of application workloads. In addition to allowing for rapid deployment of an entire application stack, Oracle VM now gives administrators more fine-grained control of the application payloads inside the virtual machine. To get started on Oracle VM Server for x86 or Oracle VM Server for SPARC, what better solution than to take the corresponding training course. You can take this training from your own desk, by choosing from a selection of live virtual events already on the schedule on the Oracle University Portal. Alternatively, you can travel to an education center to take these courses. Below is a selection of in-class events already on the schedule for each course:

    Oracle VM Administration: Oracle VM Server for x86

    Location | Date | Delivery Language
    Paris, France | 11 December 2013 | French
    Rome, Italy | 22 April 2014 | Italian
    Budapest, Hungary | 4 November 2013 | Hungarian
    Riga, Latvia | 3 February 2014 | Latvian
    Oslo, Norway | 9 December 2013 | English
    Warsaw, Poland | 12 February 2014 | Polish
    Ljubljana, Slovenia | 25 November 2013 | Slovenian
    Barcelona, Spain | 29 October 2013 | Spanish
    Istanbul, Turkey | 23 December 2013 | Turkish
    Cairo, Egypt | 1 December 2013 | Arabic
    Johannesburg, South Africa | 9 December 2013 | English
    Melbourne, Australia | 12 February 2014 | English
    Sydney, Australia | 25 November 2013 | English
    Singapore | 27 November 2013 | English
    Montreal, Canada | 18 February 2014 | English
    Ottawa, Canada | 18 February 2014 | English
    Toronto, Canada | 18 February 2014 | English
    Phoenix, AZ, United States | 18 February 2014 | English
    Sacramento, CA, United States | 18 February 2014 | English
    San Francisco, CA, United States | 18 February 2014 | English
    San Jose, CA, United States | 18 February 2014 | English
    Denver, CO, United States | 22 January 2014 | English
    Roseville, MN, United States | 10 February 2014 | English
    Edison, NJ, United States | 18 February 2014 | English
    King of Prussia, PA, United States | 18 February 2014 | English
    Reston, VA, United States | 26 March 2014 | English

    Oracle VM Server for SPARC: Installation and Configuration

    Location | Date | Delivery Language
    Prague, Czech Republic | 2 December 2013 | Czech
    Paris, France | 9 December 2013 | French
    Utrecht, Netherlands | 9 December 2013 | Dutch
    Madrid, Spain | 28 November 2013 | Spanish
    Dubai, United Arab Emirates | 5 February 2014 | English
    Melbourne, Australia | 31 October 2013 | English
    Sydney, Australia | 10 February 2014 | English
    Tokyo, Japan | 6 February 2014 | Japanese
    Petaling Jaya, Malaysia | 23 December 2013 | English
    Auckland, New Zealand | 21 November 2013 | English
    Singapore | 7 November 2013 | English
    Toronto, Canada | 25 November 2013 | English
    Sacramento, CA, United States | 2 December 2013 | English
    San Francisco, CA, United States | 2 December 2013 | English
    San Jose, CA, United States | 2 December 2013 | English
    Caracas, Venezuela | 5 November 2013 | Spanish

    Read the article

  • Integrating NetBeans for Raspberry Pi Java Development

    - by speakjava
    Raspberry Pi IDE Java Development The Raspberry Pi is an incredible device for building embedded Java applications but, despite being able to run an IDE on the Pi it really pushes things to the limit.  It's much better to use a PC or laptop to develop the code and then deploy and test on the Pi.  What I thought I'd do in this blog entry was to run through the steps necessary to set up NetBeans on a PC for Java code development, with automatic deployment to the Raspberry Pi as part of the build process. I will assume that your starting point is a Raspberry Pi with an SD card that has one of the latest Raspbian images on it.  This is good because this now includes the JDK 7 as part of the distro, so no need to download and install a separate JDK.  I will also assume that you have installed the JDK and NetBeans on your PC.  These can be downloaded here. There are numerous approaches you can take to this including mounting the file system from the Raspberry Pi remotely on your development machine.  I tried this and I found that NetBeans got rather upset if the file system disappeared either through network interruption or the Raspberry Pi being turned off.  The following method uses copying over SSH, which will fail more gracefully if the Pi is not responding. Step 1: Enable SSH on the Raspberry Pi To run the Java applications you create you will need to start Java on the Raspberry Pi with the appropriate class name, classpath and parameters.  For non-JavaFX applications you can either do this from the Raspberry Pi desktop or, if you do not have a monitor connected through a remote command line.  To execute the remote command line you need to enable SSH (a secure shell login over the network) and connect using an application like PuTTY. You can enable SSH when you first boot the Raspberry Pi, as the raspi-config program runs automatically.  You can also run it at any time afterwards by running the command: sudo raspi-config This will bring up a menu of options.  Select '8 Advanced Options' and on the next screen select 'A$ SSH'.  Select 'Enable' and the task is complete. Step 2: Configure Raspberry Pi Networking By default, the Raspbian distribution configures the ethernet connection to use DHCP rather than a static IP address.  You can continue to use DHCP if you want, but to avoid having to potentially change settings whenever you reboot the Pi using a static IP address is simpler. To configure this on the Pi you need to edit the /etc/network/interfaces file.  You will need to do this as root using the sudo command, so something like sudo vi /etc/network/interfaces.  In this file you will see this line: iface eth0 inet dhcp This needs to be changed to the following: iface eth0 inet static     address 10.0.0.2     gateway 10.0.0.254     netmask 255.255.255.0 You will need to change the values in red to an appropriate IP address and to match the address of your gateway. Step 3: Create a Public-Private Key Pair On Your Development Machine How you do this will depend on which Operating system you are using: Mac OSX or Linux Run the command: ssh-keygen -t rsa Press ENTER/RETURN to accept the default destination for saving the key.  We do not need a passphrase so simply press ENTER/RETURN for an empty one and once more to confirm. The key will be created in the file .ssh/id_rsa.pub in your home directory.  Display the contents of this file using the cat command: cat ~/.ssh/id_rsa.pub Open a window, SSH to the Raspberry Pi and login.  
Change directory to .ssh and edit the authorized_keys file (don't worry if the file does not exist).  Copy and paste the contents of the id_rsa.pub file to the authorized_keys file and save it. Windows Since Windows is not a UNIX derivative operating system it does not include the necessary key generating software by default.  To generate the key I used puttygen.exe which is available from the same site that provides the PuTTY application, here. Download this and run it on your Windows machine.  Follow the instructions to generate a key.  I remove the key comment, but you can leave that if you want. Click "Save private key", confirm that you don't want to use a passphrase and select a filename and location for the key. Copy the public key from the part of the window marked, "Public key for pasting into OpenSSH authorized_keys file".  Use PuTTY to connect to the Raspberry Pi and login.  Change directory to .ssh and edit the authorized_keys file (don't worry if this does not exist).  Paste the key information at the end of this file and save it. Logout and then start PuTTY again.  This time we need to create a saved session using the private key.  Type in the IP address of the Raspberry Pi in the "Hostname (or IP address)" field and expand "SSH" under the "Connection" category.  Select "Auth" (see the screen shot below). Click the "Browse" button under "Private key file for authentication" and select the file you saved from puttygen. Go back to the "Session" category and enter a short name in the saved sessions field, as shown below.  Click "Save" to save the session. Step 4: Test The Configuration You should now have the ability to use scp (Mac/Linux) or pscp.exe (Windows) to copy files from your development machine to the Raspberry Pi without needing to authenticate by typing in a password (so we can automate the process in NetBeans).  It's a good idea to test this using something like: scp /tmp/foo [email protected]:/tmp on Linux or Mac or pscp.exe foo pi@raspi:/tmp on Windows (Note that we use the saved configuration name instead of the IP address or hostname so the public key is picked up). pscp.exe is another tool available from the creators of PuTTY. Step 5: Configure the NetBeans Build Script Start NetBeans and create a new project (or open an existing one that you want to deploy automatically to the Raspberry Pi). Select the Files tab in the explorer window and expand your project.  You will see a build.xml file.  Double click this to edit it. This file will mostly be comments.  At the end (but within the </project> tag) add the XML for <target name="-post-jar">, shown below Here's the code again in case you want to use cut-and-paste: <target name="-post-jar">   <echo level="info" message="Copying dist directory to remote Pi"/>   <exec executable="scp" dir="${basedir}">     <arg line="-r"/>     <arg value="dist"/>     <arg value="[email protected]:NetBeans/CopyTest"/>   </exec>  </target> For Windows it will be slightly different: <target name="-post-jar">   <echo level="info" message="Copying dist directory to remote Pi"/>   <exec executable="C:\pi\putty\pscp.exe" dir="${basedir}">     <arg line="-r"/>     <arg value="dist"/>     <arg value="pi@raspi:NetBeans/CopyTest"/>   </exec> </target> You will also need to ensure that pscp.exe is in your PATH (or specify a fully qualified pathname). From now on when you clean and build the project the dist directory will automatically be copied to the Raspberry Pi ready for testing.
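
    Once the build copies the dist directory across, it helps to have something trivial to run on the Pi to prove the round trip works. The class below is not part of the original walkthrough - the class name and jar path are hypothetical - it just prints a few system properties so you can see at a glance that it is executing on the Pi's JVM rather than on your development machine.

        // Hypothetical smoke-test class. Build it into your NetBeans project, let the
        // -post-jar target copy the dist directory to the Pi, then on the Pi run e.g.:
        //   java -cp NetBeans/CopyTest/dist/CopyTest.jar HelloPi
        public class HelloPi {
            public static void main(String[] args) {
                // On the Raspberry Pi these should report an ARM JVM running on Linux
                System.out.println("java.version = " + System.getProperty("java.version"));
                System.out.println("os.name      = " + System.getProperty("os.name"));
                System.out.println("os.arch      = " + System.getProperty("os.arch"));
                System.out.println("user.dir     = " + System.getProperty("user.dir"));
            }
        }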

    Read the article

  • Data Governance 2010 Conference in San Diego

    - by Tony Ouk
    The Data Governance Annual Conference is one of the world's most authoritative, vendor-neutral events on data governance and data quality. The conference will focus on the "how-tos," from starting a data governance and stewardship program to attaining data governance maturity, with specific topics on MDM. This year's event will be hosted June 7 through June 10 in San Diego, California. For more information, including registration details, visit the Data Governance 2010 Conference website.

    Read the article

  • Oracle Announces the Availability of Oracle Knowledge 8.5

    - by Noelia Gomez
    Oracle's most comprehensive knowledge management release helps organizations deliver the right answers at the right time to agents and customers.
    Continuing its commitment to helping organizations deliver the best customer experience through the exploration of enterprise data, Oracle announced Oracle Knowledge 8.5, the industry-leading knowledge software that supports web self-service, agent-assisted service and customer communities. Oracle Knowledge 8.5 is the most comprehensive release since the acquisition of InQuira in October 2011. It introduces significant product enhancements, improved analytics, and advances in performance and scalability. Organizations today understand the pressing need to deliver consistent, high-quality customer service across different channels, and they recognize the need for information to support that process. Oracle Knowledge 8.5 proactively delivers relevant, contextual knowledge at the point of interaction to agents, knowledge workers and customers - helping increase customer loyalty and reduce costs. By enabling search across a wide variety of sources, Oracle Knowledge 8.5 broadens access to knowledge previously hidden in the many systems, applications and databases used to store enterprise content. New capabilities: AnswerFlow: Oracle Knowledge 8.5 introduces AnswerFlow, a new application for guided troubleshooting, designed to improve efficiency, reduce service costs and deliver personalized customer service experiences. Improved knowledge analytics: standardized on Oracle Business Intelligence Enterprise Edition, the industry-leading business intelligence solution, Oracle Knowledge 8.5 provides robust analytics functionality that can be tailored to the unique needs of each business. Improved language support: with the enhanced multi-language capabilities available in Oracle Knowledge 8.5, including Natural Language Search support in 16 languages and improved keyword search for most other languages, companies can quickly reach new customers while reducing the cost of doing so. Improved iConnect functionality: Oracle Knowledge 8.5 offers an enhanced iConnect for greater ease of use and performance. 
    This specialized knowledge application proactively delivers contextual knowledge directly inside CRM applications, allowing employees to reduce the effort required to serve customers. Platform and technology standardization: Oracle Knowledge 8.5 has been certified on Oracle technologies, including Oracle WebLogic Server, Oracle Business Intelligence, Oracle Exadata Database Machine and Oracle Exalogic Elastic Cloud, reducing the cost and complexity for customers of managing assets across all platforms. "We have made significant development investments since our acquisition of InQuira, and we are now very proud to announce the availability of these efforts with the release of Oracle Knowledge 8.5, making it easier for organizations to gain a differentiated advantage at a significantly lower total cost of ownership," said David Vap, Vice President, Products, Oracle. You can find more information here: Oracle Knowledge 8.5 Oracle Customer Experience Oracle WebLogic Server Oracle Business Intelligence Enterprise Edition

    Read the article

  • DialogFX: A New Approach to JavaFX Dialogs

    - by HecklerMark
    How would you like a quick and easy drop-in dialog box capability for JavaFX? That's what I was thinking when a weekend presented itself. And never being one to waste a good weekend...  :-) After doing some "roll-your-own" basic dialog building for a JavaFX app, I recently stumbled across Anton Smirnov's work on GitHub. It was a good start, but it wasn't exactly what I was after, and ideas just kept popping up of things I'd do differently. I wanted something a bit more streamlined, a bit easier to just "drop in and use". And so DialogFX was born. DialogFX wasn't intended to be overly fancy, overly clever - just useful and robust. Here were my goals: Easy to use. A dialog "system" should be so simple to use a new developer can drop it in quickly with nearly no learning curve. A seasoned developer shouldn't even have to think, just tap in a few lines and go. Why should dialogs slow "actual development"?  :-) Defaults. If you don't specify something (dialog type, buttons, etc.), a good dialog system should still work. It may not be pretty, but it shouldn't throw gears. Sharable. It's all open source. Even the icons are in the commons, so they can be reused at will. Let's take a look at some screen captures and the code used to produce them.   DialogFX INFO dialog Screen captures Windows Mac  Sample code         DialogFX dialog = new DialogFX();        dialog.setTitleText("Info Dialog Box Example");        dialog.setMessage("This is an example of an INFO dialog box, created using DialogFX.");        dialog.showDialog(); DialogFX ERROR dialog Screen captures Windows Mac  Sample code         DialogFX dialog = new DialogFX(Type.ERROR);        dialog.setTitleText("Error Dialog Box Example");        dialog.setMessage("This is an example of an ERROR dialog box, created using DialogFX.");        dialog.showDialog(); DialogFX ACCEPT dialog Screen captures Windows Mac  Sample code         DialogFX dialog = new DialogFX(Type.ACCEPT);        dialog.setTitleText("Accept Dialog Box Example");        dialog.setMessage("This is an example of an ACCEPT dialog box, created using DialogFX.");        dialog.showDialog(); DialogFX Question dialog (Yes/No) Screen captures Windows Mac  Sample code         DialogFX dialog = new DialogFX(Type.QUESTION);        dialog.setTitleText("Question Dialog Box Example");        dialog.setMessage("This is an example of an QUESTION dialog box, created using DialogFX. Would you like to continue?");        dialog.showDialog(); DialogFX Question dialog (custom buttons) Screen captures Windows Mac  Sample code         List<String> buttonLabels = new ArrayList<>(2);        buttonLabels.add("Affirmative");        buttonLabels.add("Negative");         DialogFX dialog = new DialogFX(Type.QUESTION);        dialog.setTitleText("Question Dialog Box Example");        dialog.setMessage("This is an example of an QUESTION dialog box, created using DialogFX. This also demonstrates the automatic wrapping of text in DialogFX. Would you like to continue?");        dialog.addButtons(buttonLabels, 0, 1);        dialog.showDialog(); A couple of things to note You may have noticed in that last example the addButtons(buttonLabels, 0, 1) call. You can pass custom button labels in and designate the index of the default button (responding to the ENTER key) and the cancel button (for ESCAPE). Optional parameters, of course, but nice when you may want them. Also, the showDialog() method actually returns the index of the button pressed. 
Rather than create EventHandlers in the dialog that really have little to do with the dialog itself, you can respond to the user's choice within the calling object. Or not. Again, it's your choice.  :-) And finally, I've Javadoc'ed the code in the main places. Hopefully, this will make it easy to get up and running quickly and with a minimum of fuss. How Do I Get (Git?) It? To try out DialogFX, just point your browser here to the DialogFX GitHub repository and download away! Please take a look, try it out, and let me know what you think. All feedback welcome! All the best, Mark 
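
    Since showDialog() returns the index of the button pressed, the calling code can branch on that value directly instead of wiring up EventHandlers. Here is a minimal sketch of that pattern, built only from the API calls shown in the samples above (DialogFX, Type.QUESTION, setTitleText, setMessage, addButtons, showDialog); the surrounding class, method and button labels are made up for illustration, and the DialogFX imports will depend on the package you pulled the library into.

        import java.util.ArrayList;
        import java.util.List;

        // DialogFX and Type come from the DialogFX library itself; import them
        // from whatever package you placed the project in.
        public class DialogFXChoiceExample {

            // Hypothetical caller that reacts to the index returned by showDialog()
            public boolean confirmOverwrite() {
                List<String> buttonLabels = new ArrayList<>(2);
                buttonLabels.add("Overwrite");   // index 0 - default button (ENTER)
                buttonLabels.add("Cancel");      // index 1 - cancel button (ESCAPE)

                DialogFX dialog = new DialogFX(Type.QUESTION);
                dialog.setTitleText("Overwrite File?");
                dialog.setMessage("The file already exists. Would you like to overwrite it?");
                dialog.addButtons(buttonLabels, 0, 1);

                // No EventHandlers needed: the decision is handled right here in the caller
                return dialog.showDialog() == 0;
            }
        }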

    Read the article

  • TFS 2010 SDK: Smart Merge - Programmatically Create your own Merge Tool

    - by Tarun Arora
    Technorati Tags: Team Foundation Server 2010,TFS SDK,TFS API,TFS Merge Programmatically,TFS Work Items Programmatically,TFS Administration Console,ALM   The information available in the Merge window in Team Foundation Server 2010 is very important in the decision making during the merging process. However, at present the merge window shows very limited information, more that often you are interested to know the work item, files modified, code reviewer notes, policies overridden, etc associated with the change set. Our friends at Microsoft are working hard to change the game again with vNext, but because at present the merge window is a model window you have to cancel the merge process and go back one after the other to check the additional information you need. If you can relate to what i am saying, you will enjoy this blog post! I will show you how to programmatically create your own merging window using the TFS 2010 API. A few screen shots of the WPF TFS 2010 API – Custom Merging Application that we will be creating programmatically, Excited??? Let’s start coding… 1. Get All Team Project Collections for the TFS Server You can read more on connecting to TFS programmatically on my blog post => How to connect to TFS Programmatically 1: public static ReadOnlyCollection<CatalogNode> GetAllTeamProjectCollections() 2: { 3: TfsConfigurationServer configurationServer = 4: TfsConfigurationServerFactory. 5: GetConfigurationServer(new Uri("http://xxx:8080/tfs/")); 6: 7: CatalogNode catalogNode = configurationServer.CatalogNode; 8: return catalogNode.QueryChildren(new Guid[] 9: { CatalogResourceTypes.ProjectCollection }, 10: false, CatalogQueryOptions.None); 11: } 2. Get All Team Projects for the selected Team Project Collection You can read more on connecting to TFS programmatically on my blog post => How to connect to TFS Programmatically 1: public static ReadOnlyCollection<CatalogNode> GetTeamProjects(string instanceId) 2: { 3: ReadOnlyCollection<CatalogNode> teamProjects = null; 4: 5: TfsConfigurationServer configurationServer = 6: TfsConfigurationServerFactory.GetConfigurationServer(new Uri("http://xxx:8080/tfs/")); 7: 8: CatalogNode catalogNode = configurationServer.CatalogNode; 9: var teamProjectCollections = catalogNode.QueryChildren(new Guid[] {CatalogResourceTypes.ProjectCollection }, 10: false, CatalogQueryOptions.None); 11: 12: foreach (var teamProjectCollection in teamProjectCollections) 13: { 14: if (string.Compare(teamProjectCollection.Resource.Properties["InstanceId"], instanceId, true) == 0) 15: { 16: teamProjects = teamProjectCollection.QueryChildren(new Guid[] { CatalogResourceTypes.TeamProject }, false, 17: CatalogQueryOptions.None); 18: } 19: } 20: 21: return teamProjects; 22: } 3. Get All Branches with in a Team Project programmatically I will be passing the name of the Team Project for which i want to retrieve all the branches. When consuming the ‘Version Control Service’ you have the method QueryRootBranchObjects, you need to pass the recursion type => none, one, full. Full implies you are interested in all branches under that root branch. 
1: public static List<BranchObject> GetParentBranch(string projectName) 2: { 3: var branches = new List<BranchObject>(); 4: 5: var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri("http://<ServerName>:8080/tfs/<teamProjectName>")); 6: var versionControl = tfs.GetService<VersionControlServer>(); 7: 8: var allBranches = versionControl.QueryRootBranchObjects(RecursionType.Full); 9: 10: foreach (var branchObject in allBranches) 11: { 12: if (branchObject.Properties.RootItem.Item.ToUpper().Contains(projectName.ToUpper())) 13: { 14: branches.Add(branchObject); 15: } 16: } 17: 18: return branches; 19: } 4. Get All Branches associated to the Parent Branch Programmatically Now that we have the parent branch, it is important to retrieve all child branches of that parent branch. Lets see how we can achieve this using the TFS API. 1: public static List<ItemIdentifier> GetChildBranch(string parentBranch) 2: { 3: var branches = new List<ItemIdentifier>(); 4: 5: var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri("http://<ServerName>:8080/tfs/<CollectionName>")); 6: var versionControl = tfs.GetService<VersionControlServer>(); 7: 8: var i = new ItemIdentifier(parentBranch); 9: var allBranches = 10: versionControl.QueryBranchObjects(i, RecursionType.None); 11: 12: foreach (var branchObject in allBranches) 13: { 14: foreach (var childBranche in branchObject.ChildBranches) 15: { 16: branches.Add(childBranche); 17: } 18: } 19: 20: return branches; 21: } 5. Get Merge candidates between two branches Programmatically Now that we have the parent and the child branch that we are interested to perform a merge between we will use the method ‘GetMergeCandidates’ in the namespace ‘Microsoft.TeamFoundation.VersionControl.Client’ => http://msdn.microsoft.com/en-us/library/bb138934(v=VS.100).aspx 1: public static MergeCandidate[] GetMergeCandidates(string fromBranch, string toBranch) 2: { 3: var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri("http://<ServerName>:8080/tfs/<CollectionName>")); 4: var versionControl = tfs.GetService<VersionControlServer>(); 5: 6: return versionControl.GetMergeCandidates(fromBranch, toBranch, RecursionType.Full); 7: } 6. Get changeset details Programatically Now that we have the changeset id that we are interested in, we can get details of the changeset. The Changeset object contains the properties => http://msdn.microsoft.com/en-us/library/microsoft.teamfoundation.versioncontrol.client.changeset.aspx - Changes: Gets or sets an array of Change objects that comprise this changeset. - CheckinNote: Gets or sets the check-in note of the changeset. - Comment: Gets or sets the comment of the changeset. - PolicyOverride: Gets or sets the policy override information of this changeset. - WorkItems: Gets an array of work items that are associated with this changeset. 1: public static Changeset GetChangeSetDetails(int changeSetId) 2: { 3: var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri("http://<ServerName>:8080/tfs/<CollectionName>")); 4: var versionControl = tfs.GetService<VersionControlServer>(); 5: 6: return versionControl.GetChangeset(changeSetId); 7: } 7. Possibilities In future posts i will try and extend this idea to explore further possibilities, but few features that i am sure will further help during the merge decision making process would be, - View changed files - Compare modified file with current/previous version - Merge Preview - Last Merge date Any other features that you can think of?

    Read the article

  • TDD and WCF behavior

    - by Frederic Hautecoeur
    Some weeks ago I wanted to develop a WCF behavior using TDD. I have lost some time trying to use mocks. After a while i decided to just use a host and a client. I don’t like this approach but so far I haven’t found a good and fast solution to use Unit Test for testing a WCF behavior. To Implement my solution I had to : Create a Dummy Service Definition; Create the Dummy Service Implementation; Create a host; Create a client in my test; Create and Add the behavior; Dummy Service Definition This is just a simple service, composed of an Interface and a simple implementation. The structure is aimed to be easily customizable for my future needs.   Using Clauses : 1: using System.Runtime.Serialization; 2: using System.ServiceModel; 3: using System.ServiceModel.Channels; The DataContract: 1: [DataContract()] 2: public class MyMessage 3: { 4: [DataMember()] 5: public string MessageString; 6: } The request MessageContract: 1: [MessageContract()] 2: public class RequestMessage 3: { 4: [MessageHeader(Name = "MyHeader", Namespace = "http://dummyservice/header", Relay = true)] 5: public string myHeader; 6:  7: [MessageBodyMember()] 8: public MyMessage myRequest; 9: } The response MessageContract: 1: [MessageContract()] 2: public class ResponseMessage 3: { 4: [MessageHeader(Name = "MyHeader", Namespace = "http://dummyservice/header", Relay = true)] 5: public string myHeader; 6:  7: [MessageBodyMember()] 8: public MyMessage myResponse; 9: } The ServiceContract: 1: [ServiceContract(Name="DummyService", Namespace="http://dummyservice",SessionMode=SessionMode.Allowed )] 2: interface IDummyService 3: { 4: [OperationContract(Action="Perform", IsOneWay=false, ProtectionLevel=System.Net.Security.ProtectionLevel.None )] 5: ResponseMessage DoThis(RequestMessage request); 6: } Dummy Service Implementation 1: public class DummyService:IDummyService 2: { 3: #region IDummyService Members 4: public ResponseMessage DoThis(RequestMessage request) 5: { 6: ResponseMessage response = new ResponseMessage(); 7: response.myHeader = "Response"; 8: response.myResponse = new MyMessage(); 9: response.myResponse.MessageString = 10: string.Format("Header:<{0}> and Request was <{1}>", 11: request.myHeader, request.myRequest.MessageString); 12: return response; 13: } 14: #endregion 15: } Host Creation The most simple host implementation using a Named Pipe binding. The GetBinding method will create a binding for the host and can be used to create the same binding for the client. 1: public static class TestHost 2: { 3: 4: internal static string hostUri = "net.pipe://localhost/dummy"; 5:  6: // Create Host method. 7: internal static ServiceHost CreateHost() 8: { 9: ServiceHost host = new ServiceHost(typeof(DummyService)); 10:  11: // Creating Endpoint 12: Uri namedPipeAddress = new Uri(hostUri); 13: host.AddServiceEndpoint(typeof(IDummyService), GetBinding(), namedPipeAddress); 14:  15: return host; 16: } 17:  18: // Binding Creation method. 19: internal static Binding GetBinding() 20: { 21: NamedPipeTransportBindingElement namedPipeTransport = new NamedPipeTransportBindingElement(); 22: TextMessageEncodingBindingElement textEncoding = new TextMessageEncodingBindingElement(); 23:  24: return new CustomBinding(textEncoding, namedPipeTransport); 25: } 26:  27: // Close Method. 28: internal static void Close(ServiceHost host) 29: { 30: if (null != host) 31: { 32: host.Close(); 33: host = null; 34: } 35: } 36: } Checking the service A simple test tool check the plumbing. 
1: [TestMethod] 2: public void TestService() 3: { 4: using (ServiceHost host = TestHost.CreateHost()) 5: { 6: host.Open(); 7:  8: using (ChannelFactory<IDummyService> channel = 9: new ChannelFactory<IDummyService>(TestHost.GetBinding() 10: , new EndpointAddress(TestHost.hostUri))) 11: { 12: IDummyService svc = channel.CreateChannel(); 13: try 14: { 15: RequestMessage request = new RequestMessage(); 16: request.myHeader = Guid.NewGuid().ToString(); 17: request.myRequest = new MyMessage(); 18: request.myRequest.MessageString = "I want some beer."; 19:  20: ResponseMessage response = svc.DoThis(request); 21: } 22: catch (Exception ex) 23: { 24: Assert.Fail(ex.Message); 25: } 26: } 27: host.Close(); 28: } 29: } Running the service should show that the client and the host are running fine. So far so good. Adding the Behavior Add a reference to the Behavior project and add the using entry in the test class. We just need to add the behavior to the service host : 1: [TestMethod] 2: public void TestService() 3: { 4: using (ServiceHost host = TestHost.CreateHost()) 5: { 6: host.Description.Behaviors.Add(new MyBehavior()); 7: host.Open();¨ 8: …  If you set a breakpoint in your behavior and run the test in debug mode, you will hit the breakpoint. In this case I used a ServiceBehavior. To add an Endpoint behavior you have to add it to the endpoints. 1: host.Description.Endpoints[0].Behaviors.Add(new MyEndpointBehavior()) To add a contract or an operation behavior a custom attribute should work on the service contract definition. I haven’t tried that yet.   All the code provided in this blog and in the following files are for sample use. Improvements I don’t like to instantiate a client and a service to test my behaviors. But so far I have' not found an easy way to do it. Today I am passing a type of endpoint to the host creator and it creates the right binding type. This allows me to easily switch between bindings at will. I have used the same approach to test Mex Endpoints, another post should come later for this. Enjoy !

    Read the article

  • Challenges and Opportunities to Drive Change in the Healthcare System Explored at America’s Health Insurance Plans Exchange Conference and Institute 2013

    - by elaine blog
    The program theme at the June America’s Health Insurance Plans (AHIP) Exchange Conference and AHIP’s Institute 2013 was Transforming Our Health Care System: Navigating and Succeeding in the New Marketplace.  Topics included care delivery transformation, innovation for a new healthcare eco system, Health Insurance Exchanges, the nexus of consumerism, retail and healthcare, driving value through improved operations and leveraging technology, data and innovation to transform care. Oracle participated as a sponsor of both conferences, signaling the significant investment and activity Oracle continues to make in helping health plans, providers and government agencies become more efficient and more relevant in the healthcare market place. AHIP is a national trade association representing the health insurance industry. AHIP’s members provide health and supplemental benefits to more than 200 million Americans through employer-sponsored coverage, the individual insurance market and public programs such as Medicare and Medicaid.   AHIP advocates for public policies that expand access to affordable health care. Health plans are focusing on the Health Insurance Exchanges and the opportunities they offer to provide better access and higher quality healthcare.  With the opportunities come operational challenges to implementation and innovative technology solutions to consider.   At the Exchange Conference, Oracle hosted a breakfast symposium on “Strategies for Success:  Driving Business Transformation in the Growing Health Insurance Exchange Market”. With Health Insurance Exchanges as catalysts for change, attendees learned about how to achieve integration within an Exchange and deploy new business strategies to support health reform initiatives. Discussion covered steps and processes to successfully establish and implement enrollment systems, quote to card activities, program pricing, claims billing, automated claims processing and new customer service tools. Piyush Pushkar, COO of Benefitalign, an Oracle partner that provides solutions to adopt innovative business models for retail, HIX, consumer-centric health plan and benefits administration, spoke on the state of the Exchanges in the U.S. and the activities health plans are engaged in to support individuals entering the healthcare system, including sales automation, member enrollment automation/portals and integration strategies with the Exchanges. The Oracle and Benefitalign partnership allows seamless integration between a health plan enrollment solution with the HIX individual market and allows for the health plan to customize and characterize the offerings available to the HIX that may or may not be available through other channels.  This approach can benefit the health plan through separation of interests, but also because some state-run HIXs require such separation. Janice W. Young, Program Director, Payer IT Strategies, IDC Health Insights, reviewed a survey of health plans on their investment priorities for this last year as well as this year.  She also identified the 2013-2015 strategies of go/get to market with front end and compliance investments; leveraging existing business processes and internal technologies; and establishing best practices.  Of key interest to the audience was a reform era payer solutions platform overview mapping technologies to support the business operations. 
David Bonham of the Oracle Health Insurance organization moderated the panel and spoke on Oracle’s presence in healthcare and products for payers to help them drive efficiencies and gain a competitive advantage in an ever changing market. Oracle serves healthcare stakeholders with applications such as billing, rating and underwriting, analytics, CRM, enrollment, and products for processing of health insurance claims including pricing and benefits administration, as well as payment of providers through alternative, non-fee for service reimbursement methods. Oracle in Healthcare….Did you know? More than 80 healthcare payers run Oracle applications. More than 300 leading healthcare providers run Oracle applications. 10 out of the top 12 fortune Global 500 healthcare organizations run Oracle applications. For more information on Oracle solutions for healthcare payers, please visit oracle.com/insurance or these individual solution pages: Oracle Health Insurance Components Oracle Insurance Insbridge Rating and Underwriting Oracle Insurance Revenue Management and Billing Oracle Documaker Oracle Healthcare Oracle CRM Related Resources Webcast On Demand: Strategies for Success: Driving Business Transformation in the Growing Health Insurance Exchange Market Strategy Brief: Executing on the Individual Mandate: Opportunities and Challenges for Healthcare Payers White Paper: White paper: Navigating Alternative Provider Reimbursement Models of the Future Strategy Brief: Enterprise Rating Agility Improves Payer Response to Healthcare Reform Podcast: Technology Implications of Healthcare Reform Don’t forget to keep up with us year-round: Facebook: www.facebook.com/oracleinsurance Twitter: www.twitter.com/oracleinsurance YouTube: www.youtube.com/oracleinsurance

    Read the article

  • Native packaging for JavaFX

    - by igor
    JavaFX 2.2 adds new packaging option for JavaFX applications, allowing you to package your application as a "native bundle". This gives your users a way to install and run your application without any external dependencies on a system JRE or FX SDK. I'd like to give you an overview of what is it, motivation behind it, and finally explain how to get started with it. Screenshots may give you some idea of user experience but first hand experience is always the best. Before we go into all of the boring details, here are few different flavors of Ensemble for you to try: exe, msi, dmg, rpm installers and zip of linux bundle for non-rpm aware systems. Alternatively, check out native packages for JFXtras 2. Whats wrong with existing deployment options? JavaFX 2 applications are easy to distribute as a standalone application or as an application deployed on the web (embedded in the web page or as link to launch application from the webpage). JavaFX packaging tools, such as ant tasks and javafxpackager utility, simplify the creation of deployment packages even further. Why add new deployment options? JavaFX applications have implicit dependency on the availability of Java and JavaFX runtimes, and while existing deployment methods provide a means to validate the system requirements are met -- and even guide user to perform required installation/upgrades -- they do not fully address all of the important scenarios. In particular, here are few examples: the user may not have admin permissions to install new system software if the application was certified to run in the specific environment (fixed version of Java and JavaFX) then it may be hard to ensure user has this environment due to an autoupdate of the system version of Java/JavaFX (to ensure they are secure). Potentially, other apps may have a requirement for a different JRE or FX version that your app is incompatible with. your distribution channel may disallow dependencies on external frameworks (e.g. Mac AppStore) What is a "native package" for JavaFX application? In short it is  A Wrapper for your JavaFX application that makes is into a platform-specific application bundle Each Bundle is self-contained and includes your application code and resources (same set as need to launch standalone application from jar) Java and JavaFX runtimes (private copies to be used by this application only) native application launcher  metadata (icons, etc.) No separate installation is needed for Java and JavaFX runtimes Can be distributed as .zip or packaged as platform-specific installer No application changes, the same jar app binaries can be deployed as a native bundle, double-clickable jar, applet, or web start app What is good about it: Easy deployment of your application on fresh systems, without admin permissions when using .zip or a user-level installer No-hassle compatibility.  Your application is using a private copy of Java and JavaFX. The developer (you!) controls when these are updated. Easily package your application for Mac AppStore (or Windows, or...) Process name of running application is named after your application (and not just java.exe)  Easily deploy your application using enterprise deployment tools (e.g. deploy as MSI) Support is built in into JDK 7u6 (that includes JavaFX 2.2) Is it a silver bullet for the deployment that other deployment options will be deprecated? No.  There are no plans to deprecate other deployment options supported by JavaFX, each approach addresses different needs. 
Deciding whether native packaging is a best way to deploy your application depends on your requirements. A few caveats to consider: "Download and run" user experienceUnlike web deployment, the user experience is not about "launch app from web". It is more of "download, install and run" process, and the user may need to go through additional steps to get application launched - e.g. accepting a browser security dialog or finding and launching the application installer from "downloads" folder. Larger download sizeIn general size of bundled application will be noticeably higher than size of unbundled app as a private copy of the JRE and JavaFX are included.  We're working to reduce the size through compression and customizable "trimming", but it will always be substantially larger than than an app that depends on a "system JRE". Bundle per target platformBundle formats are platform specific. Currently a native bundle can only be produced for the same system you are building on.  That is, if you want to deliver native app bundles on Windows, Linux and Mac you will have to build your project on all three platforms. Application updates are the responsibility of developerWeb deployed Java applications automatically download application updates from the web as soon as they are available. The Java Autoupdate mechanism takes care of updating the Java and JavaFX runtimes to latest secure version several times every year. There is no built in support for this in for bundled applications. It is possible to use 3rd party libraries (like Sparkle on Mac) to add autoupdate support at application level.  In a future version of JavaFX we may include built-in support for autoupdate (add yourself as watcher for RT-22211 if you are interested in this) Getting started with native bundles First, you need to get the latest JDK 7u6 beta build (build 14 or later is recommended). On Windows/Mac/Linux it comes with JavaFX 2.2 SDK as part of JDK installation and contains JavaFX packaging tools, including: bin/javafxpackagerCommand line utility to produce JavaFX packages. lib/ant-javafx.jar Set of ant tasks to produce JavaFX packages (most recommended way to deploy apps) For general information on how to use them refer to the Deploying JavaFX Application guide. Once you know how use these tools to package your JavaFX application for other deployment methods there are only a few minor tweaks necessary to produce native bundles: make sure java is used from JDK7u6 bundle you have installed adjust your PATH settings if needed  if you are using ant tasks add "nativeBundles=all" attribute to fx:deploy task if you are using javafxpackager pass "-native" option to deploy command or if you are using makeall command then it will try build native packages by default result bundles will be in the "bundles" folder next to other deployment artifacts Note that building some types of native packages (e.g. .exe or .msi) may require additional free 3rd party software to be installed and available on PATH. 
As of JDK 7u6 build 14 you could build following types of packages: Windows bundle image EXE Inno Setup 5 or later is required Result exe will perform user level installation (no admin permissions are required) At least one shortcut will be created (menu or desktop) Application will be launched at the end of install MSI WiX 3.0 or later is required Result MSI will perform user level installation (no admin permissions are required) At least one shortcut will be created (menu or desktop)  MacOS bundle image dmg (drag and drop) installer Linux bundle image rpm rpmbuild is required shortcut will be added to the programs menu If you are using Netbeans for producing the deployment packages then you will need to add custom build step to the build.xml to execute the fx:deploy task with native bundles enabled. Here is what we do for BrickBreaker sample: <target name="-post-jfx-deploy"> <fx:deploy width="${javafx.run.width}" height="${javafx.run.height}" nativeBundles="all" outdir="${basedir}/${dist.dir}" outfile="${application.title}"> <fx:application name="${application.title}" mainClass="${javafx.main.class}"> <fx:resources> <fx:fileset dir="${basedir}/${dist.dir}" includes="BrickBreaker.jar"/> </fx:resources> <info title="${application.title}" vendor="${application.vendor}"/> </fx:application> </fx:deploy> </target> This is pretty much regular use of fx:deploy task, the only special thing here is nativeBundles="all". Perhaps the easiest way to try building native bundles is to download the latest JavaFX samples bundle and build Ensemble, BrickBreaker or SwingInterop. Please give it a try and share your experience. We need your feedback! BTW, do not hesitate to file bugs and feature requests to JavaFX bug database! Wait! How can i ... This entry is not a comprehensive guide into native bundles, and we plan to post on this topic more. However, I am sure that once you play with native bundles you will have a lot of questions. We may not have all the answers, but please do not hesitate to ask! Knowing all of the questions is the first step to finding all of the answers.
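
    If you would rather experiment with the packaging flow on something smaller than Ensemble or BrickBreaker, any trivial JavaFX 2.2 application will do as a test subject. The class below is a self-contained candidate you could point the fx:deploy task (or javafxpackager) at; the class name and window text are invented for illustration and are not taken from the post.

        import javafx.application.Application;
        import javafx.scene.Scene;
        import javafx.scene.control.Label;
        import javafx.scene.layout.StackPane;
        import javafx.stage.Stage;

        // Minimal JavaFX 2.2 app to use as a native-bundle packaging candidate
        public class HelloBundle extends Application {

            @Override
            public void start(Stage stage) {
                StackPane root = new StackPane();
                root.getChildren().add(new Label("Hello from a native bundle!"));
                stage.setScene(new Scene(root, 320, 120));
                stage.setTitle("HelloBundle");
                stage.show();
            }

            public static void main(String[] args) {
                // main() is what the generated native launcher ultimately invokes
                launch(args);
            }
        }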

    Read the article

  • Back home :-)

    - by Mike Dietrich
    Wrote this entry last night on the ICE from Stuttgart to Munich, but the connection broke: a 28.5-hour journey - and close to home by now. Actually I would have been even closer if our TGV hadn't had brake problems as soon as we entered German territory. And you don't want a train that goes up to a speed of 200 mph having issues with its brakes, right? So we missed the connection in Stuttgart, but I caught the last train of the night towards Munich. Distance: approximately 1900 km altogether. Usually it takes 2.5 hours with a direct Aer Lingus flight from Munich, or a bit more when you go through Zurich or Frankfurt. But at least you meet more people and see a bit more of the landscapes passing by :-) Except for the brake problem everything worked out well so far (I'm now there, finally!). I had 4 hours in Paris to change from Gare du Nord to Gare de l'Est, and one thing I really have to point out: the people working for SNCF, the French National Railways, were so organized and helpful - purely amazing. I asked the man at the counter where I had to pick up my prepaid tickets for directions to Gare de l'Est, and after we had a chat about Marlene Dietrich he just grabbed his iPhone, started Google Earth and showed me the way to walk. I'm pretty sure it's a silly stereotype that people in Paris or France are unfriendly to foreigners who don't speak French. In my past three stays or travels to Paris over the past two years I have had only great experiences. And another thing I really enjoy when I'm in France: the food!!! The sandwich I had at the train station was packed with yummy goat cheese. And there's always Paul. You might ask yourself: who the heck is Paul? That's Paul - or actually their website. And at Paul's they usually serve excellent fruit tarts - and this time a nice gateau au chocolat. And very good café crème as well :-) That's actually the positive part of traveling this way: the food you get is much better than airline food - if your airline still serves something called food ...

    Read the article

  • Testing Routes in ASP.NET MVC with MvcContrib

    - by Guilherme Cardoso
    I've decided to write about unit testing over the next few weeks. If we decide to develop with the Test-Driven Development pattern, it's important not to forget the routes. This article shows how to test them. I'm importing my routes from the RegisterRoutes method in the Global.asax of the Project.Web project created by default (in SetUp). I'm using ShouldMapTo() from MvcContrib: http://mvccontrib.codeplex.com/ The controller is specified in the ShouldMapTo() signature, and we use lambda expressions for the action and parameters that are passed to that controller. [SetUp] public void Setup() { Project.Web.MvcApplication.RegisterRoutes(RouteTable.Routes); } [Test] public void Should_Route_HomeController() { "~/Home" .ShouldMapTo<HomeController>(action => action.Index()); } [Test] public void Should_Route_EventsController() { "~/Events" .ShouldMapTo<EventsController>(action => action.Index()); "~/Events/View/44/Concert-DevaMatri-22-January-" .ShouldMapTo<EventsController>(action => action.Read(44, "Concert-DevaMatri-22-January-")); // In this example, 44 is the Id for my Event and "Concert-DevaMatri-22-January-" is the title for that Event } [TearDown] public void teardown() { RouteTable.Routes.Clear(); }

    Read the article

  • OAF Page to Upload Files into Server from local Machine

    - by PRajkumar
    1. Create a New Workspace and Project File > New > General > Workspace Configured for Oracle Applications File Name – PrajkumarFileUploadDemo   Automatically a new OA Project will also be created   Project Name -- FileUploadDemo Default Package -- prajkumar.oracle.apps.fnd.fileuploaddemo   2. Create a New Application Module (AM) Right Click on FileUploadDemo > New > ADF Business Components > Application Module Name -- FileUploadAM Package -- prajkumar.oracle.apps.fnd.fileuploaddemo.server Check Application Module Class: FileUploadAMImpl Generate JavaFile(s)   3. Create a New Page Right click on FileUploadDemo > New > Web Tier > OA Components > Page Name -- FileUploadPG Package -- prajkumar.oracle.apps.fnd.fileuploaddemo.webui   4. Select the FileUploadPG and go to the strcuture pane where a default region has been created   5. Select region1 and set the following properties --     Attribute Property ID PageLayoutRN AM Definition prajkumar.oracle.apps.fnd.fileuploaddemo.server.FileUploadAM Window Title Uploading File into Server from Local Machine Demo Window Title Uploading File into Server from Local Machine Demo     6. Create Stack Layout Region Under Page Layout Region Right click PageLayoutRN > New > Region   Attribute Property ID MainRN AM Definition messageComponentLayout   7. Create a New Item messageFileUpload Bean under MainRN Right click on MainRN > New > messageFileUpload Set Following Properties for New Item --   Attribute Property ID MessageFileUpload Item Style messageFileUpload   8. Create a New Item Submit Button Bean under MainRN Right click on MainRN > New > messageLayout Set Following Properties for messageLayout --   Attribute Property ID ButtonLayout   Right Click on ButtonLayout > New > Item   Attribute Property ID Submit Item Style submitButton Attribute Set /oracle/apps/fnd/attributesets/Buttons/Go   9. 
Create Controller for page FileUploadPG Right Click on PageLayoutRN > Set New Controller Package Name: prajkumar.oracle.apps.fnd.fileuploaddemo.webui Class Name: FileUploadCO   Write Following Code in FileUploadCO processFormRequest   import oracle.cabo.ui.data.DataObject; import java.io.FileOutputStream; import java.io.InputStream; import oracle.jbo.domain.BlobDomain; import java.io.File; import oracle.apps.fnd.framework.OAException; public void processFormRequest(OAPageContext pageContext, OAWebBean webBean) { super.processFormRequest(pageContext, webBean);    if(pageContext.getParameter("Submit")!=null)  {   upLoadFile(pageContext,webBean);      } }   -- Use Following Code if want to Upload Files in Local Machine -- ----------------------------------------------------------------------------------- public void upLoadFile(OAPageContext pageContext,OAWebBean webBean) { String filePath = "D:\\PRajkumar";  System.out.println("Default File Path---->"+filePath);  String fileUrl = null;  try  {   DataObject fileUploadData =  pageContext.getNamedDataObject("MessageFileUpload"); //FileUploading is my MessageFileUpload Bean Id   if(fileUploadData!=null)   {    String uFileName = (String)fileUploadData.selectValue(null, "UPLOAD_FILE_NAME");  // include this line    String contentType = (String) fileUploadData.selectValue(null, "UPLOAD_FILE_MIME_TYPE");  // For Mime Type    System.out.println("User File Name---->"+uFileName);    FileOutputStream output = null;    InputStream input = null;    BlobDomain uploadedByteStream = (BlobDomain)fileUploadData.selectValue(null, uFileName);    System.out.println("uploadedByteStream---->"+uploadedByteStream);                               File file = new File("D:\\PRajkumar", uFileName);    System.out.println("File output---->"+file);    output = new FileOutputStream(file);    System.out.println("output----->"+output);    input = uploadedByteStream.getInputStream();    System.out.println("input---->"+input);    byte abyte0[] = new byte[0x19000];    int i;         while((i = input.read(abyte0)) > 0)    output.write(abyte0, 0, i);    output.close();    input.close();   }  }  catch(Exception ex)  {   throw new OAException(ex.getMessage(), OAException.ERROR);  }     }   -- Use Following Code if want to Upload File into Server -- ------------------------------------------------------------------------- public void upLoadFile(OAPageContext pageContext,OAWebBean webBean) { String filePath = "/u01/app/apnac03r12/PRajkumar/";  System.out.println("Default File Path---->"+filePath);  String fileUrl = null;  try  {   DataObject fileUploadData =  pageContext.getNamedDataObject("MessageFileUpload");  //FileUploading is my MessageFileUpload Bean Id     if(fileUploadData!=null)   {    String uFileName = (String)fileUploadData.selectValue(null, "UPLOAD_FILE_NAME");   // include this line    String contentType = (String) fileUploadData.selectValue(null, "UPLOAD_FILE_MIME_TYPE");   // For Mime Type    System.out.println("User File Name---->"+uFileName);    FileOutputStream output = null;    InputStream input = null;    BlobDomain uploadedByteStream = (BlobDomain)fileUploadData.selectValue(null, uFileName);    System.out.println("uploadedByteStream---->"+uploadedByteStream);                               File file = new File("/u01/app/apnac03r12/PRajkumar", uFileName);    System.out.println("File output---->"+file);    output = new FileOutputStream(file);    System.out.println("output----->"+output);    input = uploadedByteStream.getInputStream();    
   System.out.println("input---->"+input);
   byte abyte0[] = new byte[0x19000];
   int i;
   while ((i = input.read(abyte0)) > 0)
    output.write(abyte0, 0, i);
   output.close();
   input.close();
  }
 }
 catch (Exception ex)
 {
  throw new OAException(ex.getMessage(), OAException.ERROR);
 }
}

10. Congratulations, you have successfully finished. Run your page and test your work.

(Screenshots: the code used to upload files to the server, the page before and after uploading a file to the server, the code used to upload files to the local machine, and the page before and after uploading a file to the local machine.)

    Read the article

  • links for 2010-06-03

    - by Bob Rhubart
@rluttikhuizen: Fault handling in Oracle SOA Suite 11g "When it comes to technical faults," says Oracle ACE Ronald van Luttikhuizen, "you probably do not want to design error handling in the process itself." (tags: soa oracleace oracle otn) Adrian Campbell: Enterprise Architecture and Zombies EA blogger Adrian Campbell invokes Harry Potter, the Lord of the Rings, Black Adder, and "Pride and Prejudice and Zombies" in this interpretation of Gartner's 10 EA pitfalls. (tags: entarch zombies gartner) Nathalie Roman: Oracle Forms -- alive and kicking Oracle ACE Director Nathalie Roman offers details on a recent Oracle Forms Modernization seminar. (tags: oracle otn oracleace fusionmiddleware soa) Trond-Arne Undheim: Is Openness at the heart of the EU Digital Agenda? Trond-Arne Undheim shares some insight into the upcoming OpenForum Europe Summit 2010, to be held in Brussels. (tags: oracle otn entarch architect) Chris Raby: Oracle Financial Analytics Presentations and Photos Chris Raby shares details on Rittman Mead's series of seminars that combine the company's in-depth technical knowledge with a greater focus on the business perspective. (tags: entarch bi architect oracle otn) June Oracle Technology Network NEW Member Benefits - books books and more books!!! Details on how OTN members can get discounts on books from APress, CRC, Pearson, and Packt Publishing. (tags: oracle otn community books discounts) Manoj Neelapu: Oracle Service Bus + SOA in same server Manoj Neelapu's tutorial covers how to create a domain in which SOA and Oracle Service Bus run in a single JVM. (tags: oracle otn soa architect)

    Read the article

  • Different Means Better with the new Windows Phone Developer Experience

    - by Nikita Polyakov
If you are interested in building mobile applications, or have been in the past, you might want to check out this blog post: Charlie Kindel - Different Means Better with the new Windows Phone Developer Experience. What does this mean? Let me take some excerpts and highlight them for you. It won’t come as a surprise to many to learn that the Windows Phone 7 developer experience builds upon the following GIANTS (among others): .NET, Silverlight, the XNA platform, Microsoft’s developer tools, and Web 2.0 standards. To enable the fantastic user experiences you’ve seen in the Windows Phone 7 Series demos so far, we’ve had to break from the past. To deliver what developers expect in the developer platform, we’ve had to change how phone apps were written. One result of this is that previous Windows Mobile applications will not run on Windows Phone 7 Series. To be clear, we will continue to work with our partners to deliver new devices based on Windows Mobile 6.5 and will support those products for many years to come, so it’s not as though one line ends as soon as the other begins. Once again, more details at MIX10. Start watching the @WP7Dev twitter account for more info.

    Read the article

  • Building a Mafia&hellip;TechFest Style

    - by David Hoerster
It’s been a few months since I last blogged (not that I blog much to begin with), but things have been busy.  We all have a lot going on in our lives, but I’ve had one item that has taken up a surprising amount of time – Pittsburgh TechFest 2012.  After the event, I went through some minutes of the first meetings for TechFest, and I started to think about how it all came together.  I think what inspired me the most about TechFest was how people from various technical communities were able to come together and build and promote a common event.  As a result, I wanted to blog about this to show that people from different communities can work together to build something that benefits all communities.  (Hopefully I've got all my facts straight.)  TechFest started as an idea Eric Kepes and I had when we were planning our next Pittsburgh Code Camp, probably in the summer of 2011.  Our Spring 2011 Code Camp was a little different because we had a great infusion of some folks from the Pittsburgh Agile group (especially with a few speakers from LeanDog).  The line-up was great, but we felt our audience wasn’t as broad as it should have been.  We thought it would be great to somehow attract other user groups around town and have a big, polyglot conference. We started contacting leaders from Pittsburgh’s various user groups.  Eric and I split up the ones that we knew about, and we just started making contacts.  Most of the people we started contacting had never heard of us, nor we of them.  But we all had one thing in common – we ran user groups whose primary goal is educating our members to make them better at what they do. Amazingly, and I say this because I wasn’t sure what to expect, we started getting some interest from the various leaders.  One leader, Greg Akins, is, in my opinion, Pittsburgh’s poster boy for the polyglot programmer.  He’s helped us in the past with .NET Code Camps, is a Java developer (and leader in Pittsburgh’s Java User Group), works with Ruby, and I’m sure a handful of other languages.  He helped make some e-introductions to other user group leaders, and the whole thing just started to snowball. Once we realized we had enough interest with the user group leaders, we decided to not have a Fall Code Camp and instead focus on this new entity. Flash-forward to October of 2011.  With the help of Jeremy Jarrell (Pittsburgh Agile leader), I set up a meeting with the leaders of many of Pittsburgh's technical user groups.  We had representatives from 12 technical user groups (Python, JavaScript, Clojure, Ruby, PittAgile, jQuery, PHP, Perl, SQL, .NET, Java and PowerShell) – 14 people.  We likened it to a scene from a Godfather movie where the heads of all the families come together to make some deal.  As a result, the name “TechFest Mafia” was born and kind of stuck. Over the next 7 months or so, we had our starts and stops.  There were moments where I thought this event would not happen, either because we wouldn’t have the right mix of topics (was I off there!), or enough people register (OK, I was wrong there, too!), or find an appropriate venue (hmm…wrong there, too), or find enough sponsors to help support the event (wow…not doing so well).  Overall, everything fell into place with a lot of hard work from Eric, Jen, Greg, Jeremy, Sean, Nicholas, Gina and probably a few others that I’m forgetting.  We had a bit of luck, too.  
But in the end, the passion that we had to put together an event that was really about making ourselves better at what we do really paid off. I’ve never been more excited about a project coming together than I have been with Pittsburgh TechFest 2012.  From the moment the first person arrived at the event to the final minutes of my closing remarks (where I almost lost my voice – I ended up being diagnosed with bronchitis the next day!), it was an awesome event.  I’m glad to have been part of bringing something like this to Pittsburgh…and I’m looking forward to Pittsburgh TechFest 2013.  See you there!

    Read the article

  • Unit testing is… well, flawed.

    - by Dewald Galjaard
Hey, someone had to say it. I clearly recall my first IT job. I was appointed Systems Co-ordinator for a leading South African retailer at store level. Don’t get me wrong, there is absolutely nothing wrong with an honest day’s labor and in fact I highly recommend it; however, I’m obliged to refer to the designation cautiously; in reality all I had to do was monitor in-store prices and two UNIX front line controllers. If anything went wrong – I only had to phone it in… Luckily that wasn’t all I did. My duties extended to another interesting annual occurrence – stock take. Though a somewhat more curious affair, it was still a tedious process that took weeks of preparation and several nights to complete.  I also remember that no matter how elaborate our planning was, the entire exercise would be rendered useless if we couldn’t get the basics right – that being the act of counting. Sounds simple, right? Well, with a store which could potentially carry over tens of thousands of different items… well, let’s just say I believe that’s when I first became a coffee addict. In those days the act of counting stock was a very humble process. Nothing like we have today. A staff member would be assigned a bin or shelf filled with items he or she had to sort, then count. Thereafter they had to record their findings on a complementary piece of paper. Every night I would manage several teams. Each team was divided into two groups - counters and auditors. Both groups had the same task, only the auditors followed shortly on the heels of the counters, recounting stock levels, making sure the original count corresponded to their findings. It was a simple yet hugely responsible orchestration of people, and thankfully there was one fundamental and golden rule I could always abide by to ensure things ran smoothly – no-one was allowed to audit their own work. Nope, not even on nights when I didn’t have enough staff available. This meant I too at times had to get up there and get counting, or have the audit stand over until the next evening. The reason for this was obvious - late at night and with so much to do we were prone to make some mistakes; then on the recount, without a fresh set of eyes, we were likely to repeat the offence. Now, years later, this rule or guideline still holds true as we develop software (as far removed from counting stock as software development may be). For some reason it is a fundamental guideline we’re simply ignorant of. We write our code, we write our tests and thus commit the same horrendous offence. Yes, the procedure of writing unit tests as practiced in most development houses today – is flawed. Most if not all of the tests we write today exercise application logic – our logic. They are based on the way we believe an application or method should/may/will behave or function. As we write our tests, our unit tests mirror our best understanding of the inner workings of our application code. Unfortunately these tests will therefore also include (or be unaware of) any imperfections and errors on our part. If your logic is flawed as you write your initial code, chances are, without a fresh set of eyes, you will commit the same error the second time around too. Not even experience seems to be a suitable solution. It certainly helps to have deeper insight, but is that really the answer we should be looking for? Is that really failsafe? What about code review? Code review is certainly an answer. You could have one developer coding away and another (or a team) making sure the logic is sound. 
The practice however has its obvious drawbacks. Firstly and mainly it is resource intensive and from what I’ve seen in most development houses, given heavy deadlines, this guideline is seldom adhered to. Hardly ever do we have the resources, money or time readily available. So what other options are out there? A quest to find some solution revealed a project by Microsoft Research called PEX. PEX is a framework which creates several test scenarios for each method or class you write, automatically. Think of it as your own personal auditor. Within a few clicks the framework will auto generate several unit tests for a given class or method and save them to a single project. PEX help to audit your work. It lends a fresh set of eyes to any project you’re working on and best of all; it is cost effective and fast. Check them out at http://research.microsoft.com/en-us/projects/pex/ In upcoming posts we’ll dive deeper into how it works and how it can help you.   Certainly there are more similar frameworks out there and I would love to hear from you. Please share your experiences and insights.
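To make that concrete, here is a rough sketch of what a parameterized unit test for Pex might look like. The StockBin class and the test itself are invented for illustration; the [PexClass], [PexMethod] and PexAssume pieces are the attributes and helpers the framework is built around, and Pex explores the parameterized method to generate and save concrete test cases from it.

using Microsoft.Pex.Framework;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical class under test - stands in for your own application logic.
public class StockBin
{
    public int CountItems(int[] quantities)
    {
        int total = 0;
        foreach (int q in quantities)
            total += q;
        return total;
    }
}

[PexClass(typeof(StockBin))]
[TestClass]
public partial class StockBinTests
{
    // Pex explores this parameterized test, generating concrete inputs
    // (nulls, empty arrays, boundary values) and saving them as unit tests.
    [PexMethod]
    public void CountIsRepeatable(int[] quantities)
    {
        PexAssume.IsNotNull(quantities);

        var bin = new StockBin();
        int firstCount = bin.CountItems(quantities);
        int recount = bin.CountItems(quantities);

        // The "auditor" check: counting the same bin twice must give the same result.
        Assert.AreEqual(firstCount, recount);
    }
}

The point is that the concrete inputs are not hand-picked by the author of the code, which is exactly the fresh set of eyes this post is asking for.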

    Read the article

  • Issue 15: Introducing David Callaghan

    - by rituchhibber
        DAVID'S VIEW INTRODUCING DAVID CALLAGHAN David Callaghan Senior Vice President, Oracle EMEA Alliances and Channels David Callaghan is the Senior Vice President, Alliances & Channels, for Oracle EMEA. He is responsible for all elements of the Oracle Partner Network across the region and leads Oracle as it continues to deliver customer success through the alignment of Oracle's applications and hardware engineered to work together. As I reflect on our last quarter, I thank all our partners for your continued commitment and expertise in embracing the unique opportunity we have before us. The ability to engage with hardware, applications and technology is a real differentiator. We have been able to engage with deep specialization in individual products for some time, which has brought tremendous benefits. But now we can strengthen this further with the broad stack specialization that Oracle on Oracle brings. Now is the time to make that count. While customers are finishing spending this year's budget and planning their spend for the next calendar year, it is now that we need to build the quality opportunities and pipeline for the rest of the year. We have OpenWorld just around the corner with its compelling new product announcements and environment to engage customers at all levels. Make sure you use this event, and every opportunity it brings. In the next quarter you can expect to see targeted 'value creation' campaigns driven by Oracle, and I encourage you to exploit these where they will have greatest impact. My team will be engaging closely with their Oracle sales colleagues to help them leverage the tremendous value you bring, and to develop their ability to work effectively and independently with you, our partners. My team and I are all relentlessly committed to achieving partner, and customer, satisfaction to demonstrate the value of the Passion for Partnering that we all share. With best regards David Back to the welcome page

    Read the article

  • Silverlight Cream for January 01, 2011 -- #1020

    - by Dave Campbell
In this short New Year's Day 2011 issue, 3 Mikes: Mike Taulty, Mike Snow, and Mike Ormond. Above the Fold: Silverlight: "Native Extensions for Silverlight (NESL)?" Mike Taulty WP7: "Monitoring Memory Usage on Windows Phone 7" Mike Ormond From SilverlightCream.com: Native Extensions for Silverlight (NESL)? Mike Taulty has a really good write-up on Native Extensions for Silverlight... he describes what that project is about and gives guidance on best practices. Win7 Mobile: Uniquely Identifying a Device or User Mike Snow has a post up describing how to uniquely identify the phone or device your app is running on using the Microsoft.Phone.Info.DeviceExtendedProperties namespace. Monitoring Memory Usage on Windows Phone 7 Mike Ormond has a post up showing how to turn on and make use of the framerate counters in WP7. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10
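As a rough illustration of the DeviceExtendedProperties approach Mike describes (the helper name and the Base64 conversion are just one convenient choice, and the call requires the ID_CAP_IDENTITY_DEVICE capability in the app manifest):

using System;
using Microsoft.Phone.Info;

public static class DeviceIdHelper
{
    // Returns a per-device identifier as a string, or null if it is unavailable.
    public static string GetDeviceUniqueId()
    {
        object value;
        if (DeviceExtendedProperties.TryGetValue("DeviceUniqueId", out value))
        {
            return Convert.ToBase64String((byte[])value);
        }
        return null;
    }
}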

    Read the article

  • Professional Scrum Developer (.NET) Training in London

    - by Martin Hinshelwood
On the 26th - 30th July, in Microsoft’s offices in London, Adam Cogan from SSW will be presenting the first Professional Scrum Developer course in the UK. I will be teaching this course alongside Adam and it is a fantastic experience. You are split into teams and go head-to-head to deliver units of potentially shippable work in four two-hour sprints. The Professional Scrum Developer course is the only course endorsed by both Microsoft and Ken Schwaber, and they have worked together very effectively in bringing this course to fruition. This course is the brainchild of Richard Hundhausen, a Microsoft Regional Director, and both Adam and I attended the Trainer Prep in Sydney when he was there earlier this year. He is a fantastic trainer, and no matter where you do this course you can be safe in the knowledge that he has trained and vetted all of the teachers. A tools version of Ken, if you will. Find a course and register Download this syllabus Download the Scrum Guide What is the Professional Scrum Developer course all about? The Professional Scrum Developer course is a unique and intensive five-day experience for software developers. The course guides teams on how to turn product requirements into potentially shippable increments of software using the Scrum framework, Visual Studio 2010, and modern software engineering practices. Attendees will work in self-organizing, self-managing teams using a common instance of Team Foundation Server 2010. Who should attend this course? This course is suitable for any member of a software development team – architect, programmer, database developer, tester, etc. Entire teams are encouraged to attend and experience the course together, but individuals are welcome too. Attendees will self-organize to form cross-functional Scrum teams. These teams require an aggregate of skills specific to the selected case study. Please see the last page of this document for specific details. Product Owners, ScrumMasters, and other stakeholders are welcome too, but keep in mind that everyone who attends will be expected to commit to work and pull their weight on a Scrum team. What should you know by the end of the course? Scrum will be experienced through a combination of lecture, demonstration, discussion, and hands-on exercises. Attendees will learn how to do Scrum correctly while being coached and critiqued by the instructor, in the following topic areas: Form effective teams Explore and understand legacy “Brownfield” architecture Define quality attributes, acceptance criteria, and “done” Create automated builds How to handle software hotfixes Verify that bugs are identified and eliminated Plan releases and sprints Estimate product backlog items Create and manage a sprint backlog Hold an effective sprint review Improve your process by using retrospectives Use emergent architecture to avoid technical debt Use Test Driven Development as a design tool Set up and leverage continuous integration Use Test Impact Analysis to decrease testing times Manage SQL Server development in an Agile way Use .NET and T-SQL refactoring effectively Build, deploy, and test SQL Server databases Create and manage test plans and cases Create, run, record, and play back manual tests Set up a branching strategy and branch code Write more maintainable code Identify and eliminate people and process dysfunctions Inspect and improve your team’s software development process What does the week look like? 
This course is a mix of lecture, demonstration, group discussion, simulation, and hands-on software development. The bulk of the course will be spent working as a team on a case study application delivering increments of new functionality in mini-sprints. Here is the week at a glance: Monday morning and most of the day Friday will be spent with the computers powered off, so you can focus on sharpening your game of Scrum and avoiding the common pitfalls when implementing it. The Sprints Timeboxing is a critical concept in Scrum as well as in this course. We expect each team and student to understand and obey all of the timeboxes. The timebox duration will always be clearly displayed during each activity. Expect the instructor to enforce it. Each of the ½ day sprints will roughly follow this schedule: Component Description Minutes Instruction Presentation and demonstration of new and relevant tools & practices 60 Sprint planning meeting Product owner presents backlog; each team commits to delivering functionality 10 Sprint planning meeting Each team determines how to build the functionality 10 The Sprint The team self-organizes and self-manages to complete their tasks 120 Sprint Review meeting Each team will present their increment of functionality to the other teams = 30 Sprint Retrospective A group retrospective meeting will be held to inspect and adapt 10 Each team is expected to self-organize and manage their own work during the sprint. Pairing is highly encouraged. The instructor/product owner will be available if there are questions or impediments, but will be hands-off by default. You should be prepared to communicate and work with your team members in order to achieve your sprint goal. If you have development-related questions or get stuck, your partner or team should be your first level of support. Module 1: INTRODUCTION This module provides a chance for the attendees to get to know the instructors as well as each other. The Professional Scrum Developer program, as well as the day by day agenda, will be explained. Finally, the Scrum team will be selected and assembled so that the forming, storming, norming, and performing can begin. Trainer and student introductions Professional Scrum Developer program Agenda Logistics Team formation Retrospective Module 2: SCRUMDAMENTALS This module provides a level-setting understanding of the Scrum framework including the roles, timeboxes, and artifacts. The team will then experience Scrum firsthand by simulating a multi-day sprint of product development, including planning, review, and retrospective meetings. Scrum overview Scrum roles Scrum timeboxes (ceremonies) Scrum artifacts Simulation Retrospective It’s required that you read Ken Schwaber’s Scrum Guide in preparation for this module and course. MODULE 3: IMPLEMENTING SCRUM IN VISUAL STUDIO 2010 This module demonstrates how to implement Scrum in Visual Studio 2010 using a Scrum process template*. The team will learn the mapping between the Scrum concepts and how they are implemented in the tool. After connecting to the shared Team Foundation Server, the team members will then return to the simulation – this time using Visual Studio to manage their product development. Mapping Scrum to Visual Studio 2010 User Story work items Task work items Bug work items Demonstration Simulation Retrospective Module 4: THE CASE STUDY In this module the team is introduced to their problem domain for the week. 
A kickoff meeting by the Product Owner (the instructor) will set the stage for the why and what that will take during the upcoming sprints. The team will then define the quality attributes of the project and their definition of “done.” The legacy application code will be downloaded, built, and explored, so that any bugs can be discovered and reported. Introduction to the case study Download the source code, build, and explore the application Define the quality attributes for the project Define “done” How to file effective bugs in Visual Studio 2010 Retrospective Module 5: HOTFIX This module drops the team directly into a Brownfield (legacy) experience by forcing them to analyze the existing application’s architecture and code in order to locate and fix the Product Owner’s high-priority bug(s). The team will learn best practices around finding, testing, fixing, validating, and closing a bug. How to use Architecture Explorer to visualize and explore Create a unit test to validate the existence of a bug Find and fix the bug Validate and close the bug Retrospective Module 6: PLANNING This short module introduces the team to release and sprint planning within Visual Studio 2010. The team will define and capture their goals as well as other important planning information. Release vs. Sprint planning Release planning and the Product Backlog Product Backlog prioritization Acceptance criteria and tests Sprint planning and the Sprint Backlog Creating and linking Sprint tasks Retrospective At this point the team will have the knowledge of Scrum, Visual Studio 2010, and the case study application to begin developing increments of potentially shippable functionality that meet their definition of done. Module 7: EMERGENT ARCHITECTURE This module introduces the architectural practices and tools a team can use to develop a valid design on which to develop new functionality. The teams will learn how Scrum supports good architecture and design practices. After the discussion, the teams will be presented with the product owner’s prioritized backlog so that they may select and commit to the functionality they can deliver in this sprint. Architecture and Scrum Emergent architecture Principles, patterns, and practices Visual Studio 2010 modeling tools UML and layer diagrams SPRINT 1 Retrospective Module 8: TEST DRIVEN DEVELOPMENT This module introduces Test Driven Development as a design tool and how to implement it using Visual Studio 2010. To maximize productivity and quality, a Scrum team should setup Continuous Integration to regularly build every team member’s code changes and run regression tests. Refactoring will also be defined and demonstrated in combination with Visual Studio’s Test Impact Analysis to efficiently re-run just those tests which were impacted by refactoring. Continuous integration Team Foundation Build Test Driven Development (TDD) Refactoring Test Impact Analysis SPRINT 2 Retrospective Module 9: AGILE DATABASE DEVELOPMENT This module lets the SQL Server database developers in on a little secret – they can be agile too. By using the database projects in Visual Studio 2010, the database developers can join the rest of the team. The students will see how to apply Agile database techniques within Visual Studio to support the SQL Server 2005/2008/2008R2 development lifecycle. 
Agile database development Visual Studio database projects Importing schema and scripts Building and deploying Generating data Unit testing SPRINT 3 Retrospective Module 10: SHIP IT Teams need to know that just because they like the functionality doesn’t mean the Product Owner will. This module revisits acceptance criteria as it pertains to acceptance testing. By refining acceptance criteria into manual test steps, team members can execute the tests, recording the results and reporting bugs in a number of ways. Manual tests will be defined and executed using the Microsoft Test Manager tool. As the Sprint completes and an increment of functionality is delivered, the team will also learn why and when they should create a branch of the codeline. Acceptance criteria Testing in Visual Studio 2010 Microsoft Test Manager Writing and running manual tests Branching SPRINT 4 Retrospective Module 11: OVERCOMING DYSFUNCTION This module introduces the many types of people, process, and tool dysfunctions that teams face in the real world. Many dysfunctions and scenarios will be identified, along with ideas and discussion for how a team might mitigate them. This module will enable you and your team to move toward independence and improve your game of Scrum when you depart class. Scrum-butts and flaccid Scrum Best practices working as a team Team challenges ScrumMaster challenges Product Owner challenges Stakeholder challenges Course Retrospective What will be expected of you and your team? This is a unique course in that it’s technically-focused, team-based, and employs timeboxes. It demands that the members of the teams self-organize and self-manage their own work to collaboratively develop increments of software. All attendees must commit to: Pay attention to all lectures and demonstrations Participate in team and group discussions Work collaboratively with other team members Obey the timebox for each activity Commit to work and do your best to deliver All teams should have these skills: Understanding of Scrum Familiarity with Visual Studio 2010 C#, .NET 4.0 & ASP.NET 4.0 experience*  SQL Server 2008 development experience Software testing experience * Check with the instructor ahead of time for the exact technologies Self-organising teams Another unique attribute of this course is that it’s a technical training class being delivered to teams of developers, not pairs, and not individuals. Ideally, your actual software development team will attend the training to ensure that all necessary skills are covered. However, if you wish to attend an open enrolment course alone or with just a couple of colleagues, realize that you may be placed on a team with other attendees. The instructor will do his or her best to ensure that each team is cross-functional to tackle the case study, but there are no guarantees. You may be required to try a new role, learn a new skill, or pair with somebody unfamiliar to you. This is just good Scrum! Who should NOT take this course? 
Because of the nature of this course, as explained above, certain types of people should probably not attend this course: Students requiring command and control style instruction – there are no prescriptive/step-by-step (think traditional Microsoft Learning) labs in this course Students who are unwilling to work within a timebox Students who are unwilling to work collaboratively on a team Students who don’t have any skill in any of the software development disciplines Students who are unable to commit fully to their team – not only will this diminish the student’s learning experience, but it will also impact their team’s learning experience Find a course and register Download this syllabus Download the Scrum Guide Technorati Tags: Scrum,SSW,Pro Scrum Dev

    Read the article

  • Larry Ellison and Mark Hurd on Oracle Cloud

    - by arungupta
Oracle Cloud provides Java and Database as Platform Services and Customer Relationship Management, Human Capital Management, and Social Network as Application Services. Watch a live webcast with Larry Ellison and Mark Hurd on announcements about Oracle Cloud. Date: Wednesday, June 06, 2012. Time: 1:00 p.m. PT – 2:30 p.m. PT. Register here for the webinar. You can also attend the live event by registering here. Oracle Cloud is by invitation only at this time and you can register for access here.

    Read the article

  • The Inkremental Architect&acute;s Napkin - #4 - Make increments tangible

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/12/the-inkremental-architectacutes-napkin---4---make-increments-tangible.aspxThe driver of software development are increments, small increments, tiny increments. With an increment being a slice of the overall requirement scope thin enough to implement and get feedback from a product owner within 2 days max. Such an increment might concern Functionality or Quality.[1] To make such high frequency delivery of increments possible, the transition from talking to coding needs to be as easy as possible. A user story or some other documentation of what´s supposed to get implemented until tomorrow evening at latest is one side of the medal. The other is where to put the logic in all of the code base. To implement an increment, only logic statements are needed. Functionality like Quality are just about expressions and control flow statements. Think of Assembler code without the CALL/RET instructions. That´s all is needed. Forget about functions, forget about classes. To make a user happy none of that is really needed. It´s just about the right expressions and conditional executions paths plus some memory allocation. Automatic function inlining of compilers which makes it clear how unimportant functions are for delivering value to users at runtime. But why then are there functions? Because they were invented for optimization purposes. We need them for better Evolvability and Production Efficiency. Nothing more, nothing less. No software has become faster, more secure, more scalable, more functional because we gathered logic under the roof of a function or two or a thousand. Functions make logic easier to understand. Functions make us faster in producing logic. Functions make it easier to keep logic consistent. Functions help to conserve memory. That said, functions are important. They are even the pivotal element of software development. We can´t code without them - whether you write a function yourself or not. Because there´s always at least one function in play: the Entry Point of a program. In Ruby the simplest program looks like this:puts "Hello, world!" In C# more is necessary:class Program { public static void Main () { System.Console.Write("Hello, world!"); } } C# makes the Entry Point function explicit, not so Ruby. But still it´s there. So you can think of logic always running in some function. Which brings me back to increments: In order to make the transition from talking to code as easy as possible, it has to be crystal clear into which function you should put the logic. Product owners might be content once there is a sticky note a user story on the Scrum or Kanban board. But developers need an idea of what that sticky note means in term of functions. Because with a function in hand, with a signature to run tests against, they have something to focus on. All´s well once there is a function behind whose signature logic can be piled up. Then testing frameworks can be used to check if the logic is correct. Then practices like TDD can help to drive the implementation. That´s why most code katas define exactly how the API of a solution should look like. It´s a function, maybe two or three, not more. A requirement like “Write a function f which takes this as parameters and produces such and such output by doing x” makes a developer comfortable. Yes, there are all kinds of details to think about, like which algorithm or technology to use, or what kind of state and side effects to consider. 
Even a single function not only must deliver on Functionality, but also on Quality and Evolvability. Nevertheless, once it´s clear which function to put logic in, you have a tangible starting point. So, yes, what I´m suggesting is to find a single function to put all the logic in that´s necessary to deliver on a the requirements of an increment. Or to put it the other way around: Slice requirements in a way that each increment´s logic can be located under the roof of a single function. Entry points Of course, the logic of a software will always be spread across many, many functions. But there´s always an Entry Point. That´s the most important function for each increment, because that´s the root to put integration or even acceptance tests on. A batch program like the above hello-world application only has a single Entry Point. All logic is reached from there, regardless how deep it´s nested in classes. But a program with a user interface like this has at least two Entry Points: One is the main function called upon startup. The other is the button click event handler for “Show my score”. But maybe there are even more, like another Entry Point being a handler for the event fired when one of the choices gets selected; because then some logic could check if the button should be enabled because all questions got answered. Or another Entry Point for the logic to be executed when the program is close; because then the choices made should be persisted. You see, an Entry Point to me is a function which gets triggered by the user of a software. With batch programs that´s the main function. With GUI programs on the desktop that´s event handlers. With web programs that´s handlers for URL routes. And my basic suggestion to help you with slicing requirements for Spinning is: Slice them in a way so that each increment is related to only one Entry Point function.[2] Entry Points are the “outer functions” of a program. That´s where the environment triggers behavior. That´s where hardware meets software. Entry points always get called because something happened to hardware state, e.g. a key was pressed, a mouse button clicked, the system timer ticked, data arrived over a wire.[3] Viewed from the outside, software is just a collection of Entry Point functions made accessible via buttons to press, menu items to click, gestures, URLs to open, keys to enter. Collections of batch processors I´d thus say, we haven´t moved forward since the early days of software development. We´re still writing batch programs. Forget about “event-driven programming” with its fancy GUI applications. Software is just a collection of batch processors. Earlier it was just one per program, today it´s hundreds we bundle up into applications. Each batch processor is represented by an Entry Point as its root that works on a number of resources from which it reads data to process and to which it writes results. These resources can be the keyboard or main memory or a hard disk or a communication line or a display. Together many batch processors - large and small - form applications the user perceives as a single whole: Software development that way becomes quite simple: just implement one batch processor after another. Well, at least in principle ;-) Features Each batch processor entered through an Entry Point delivers value to the user. It´s an increment. Sometimes its logic is trivial, sometimes it´s very complex. Regardless, each Entry Point represents an increment. An Entry Point implemented thus is a step forward in terms of Agility. 
At the same time it´s a tangible unit for developers. Therefore, identifying the more or less numerous batch processors in a software system is a rewarding task for product owners and developers alike. That´s where user stories meet code. In this example the user story translates to the Entry Point triggered by clicking the login button on a dialog like this: The batch then retrieves what has been entered via keyboard, loads data from a user store, and finally outputs some kind of response on the screen, e.g. by displaying an error message or showing the next dialog. This is all very simple, but you see, there is not just one thing happening, but several. Get input (email address, password) Load user for email address If user not found report error Check password Hash password Compare hash to hash stored in user Show next dialog Viewed from 10,000 feet it´s all done by the Entry Point function. And of course that´s technically possible. It´s just a bunch of logic and calling a couple of API functions. However, I suggest to take these steps as distinct aspects of the overall requirement described by the user story. Such aspects of requirements I call Features. Features too are increments. Each provides some (small) value of its own to the user. Each can be checked individually by a product owner. Instead of implementing all the logic behind the Login() entry point at once you can move forward increment by increment, e.g. First implement the dialog, let the user enter any credentials, and log him/her in without any checks. Features 1 and 4. Then hard code a single user and check the email address. Features 2 and 2.1. Then check password without hashing it (or use a very simple hash like the length of the password). Features 3. and 3.2 Replace hard coded user with a persistent user directoy, but a very simple one, e.g. a CSV file. Refinement of feature 2. Calculate the real hash for the password. Feature 3.1. Switch to the final user directory technology. Each feature provides an opportunity to deliver results in a short amount of time and get feedback. If you´re in doubt whether you can implement the whole entry point function until tomorrow night, then just go for a couple of features or even just one. That´s also why I think, you should strive for wrapping feature logic into a function of its own. It´s a matter of Evolvability and Production Efficiency. A function per feature makes the code more readable, since the language of requirements analysis and design is carried over into implementation. It makes it easier to apply changes to features because it´s clear where their logic is located. And finally, of course, it lets you re-use features in different context (read: increments). Feature functions make it easier for you to think of features as Spinning increments, to implement them independently, to let the product owner check them for acceptance individually. Increments consist of features, entry point functions consist of feature functions. So you can view software as a hierarchy of requirements from broad to thin which map to a hierarchy of functions - with entry points at the top.   I like this image of software as a self-similar structure on many levels of abstraction where requirements and code match each other. That to me is true agile design: the core tenet of Agility to move forward in increments is carried over into implementation. Increments on paper are retained in code. This way developers can easily relate to product owners. Elusive and fuzzy requirements are not tangible. 
Software production is moving forward through requirements one increment at a time, and one function at a time. In closing Product owners and developers are different - but they need to work together towards a shared goal: working software. So their notions of software need to be made compatible, they need to be connected. The increments of the product owner - user stories and features - need to be mapped straightforwardly to something which is relevant to developers. To me that´s functions. Yes, functions, not classes nor components nor micro services. We´re talking about behavior, actions, activities, processes. Their natural representation is a function. Something has to be done. Logic has to be executed. That´s the purpose of functions. Later, classes and other containers are needed to stay on top of a growing amount of logic. But to connect developers and product owners functions are the appropriate glue. Functions which represent increments. Can such a small increment always be found to deliver by tomorrow evening? I boldly say yes. Yes, it´s always possible. But maybe you have to start thinking differently. Maybe the product owner needs to start thinking differently. Completion is not the goal anymore. Neither is checking the delivery of an increment through the user interface of a software. Product owners need to become comfortable using test beds for certain features. If it´s hard to slice requirements thin enough for Spinning, the reason is too little knowledge of something. Maybe you don´t yet understand the problem domain well enough? Maybe you don´t yet feel comfortable with some tool or technology? Then it´s time to acknowledge this fact. Be honest about your not knowing. And instead of trying to deliver as a craftsman, officially become a researcher. Research and check back with the product owner every day - until your understanding has grown to a level where you are able to define the next Spinning increment.
Sometimes even thin requirement slices will cover several Entry Points, like “Add validation of email addresses to all relevant dialogs.” Validation logic will then be put into a dozen functions. Still, though, it´s important to determine which Entry Points exactly get affected. That´s much easier if you strive for keeping the number of Entry Points per increment to 1.
If you like, call Entry Point functions event handlers, because that´s what they are. They all handle events of some kind, whether that´s palpable in your code or not. A public void btnSave_Click(object sender, EventArgs e) {…} might look like an event handler to you, but public static void Main() {…} is one also - for the event “program started”.
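To make the mapping tangible, here is a small C# sketch of how the Login entry point from the example above could delegate to one function per feature. All names and the trivial bodies are made up for illustration; only the decomposition itself follows the article.

// Entry point: triggered by the login button. It only orchestrates feature functions.
class LoginDialog
{
    public void Login(string email, string password)            // Feature 1: get input
    {
        var user = LoadUser(email);                              // Feature 2: load user for email address
        if (user == null) { ReportError("Unknown email address."); return; }  // Feature 2.1: report error

        if (!CheckPassword(user, password))                      // Feature 3: check password
        {
            ReportError("Wrong password.");
            return;
        }

        ShowNextDialog();                                        // Feature 4: show next dialog
    }

    // One function per feature; each can be implemented and accepted on its own.
    User LoadUser(string email) { /* e.g. look the user up in a CSV file first, a real store later */ return null; }

    bool CheckPassword(User user, string password)
    {
        var hash = HashPassword(password);                       // Feature 3.1: hash password
        return hash == user.PasswordHash;                        // Feature 3.2: compare hashes
    }

    string HashPassword(string password) { /* start simple, e.g. the length, refine later */ return password.Length.ToString(); }
    void ReportError(string message) { /* show the message on the dialog */ }
    void ShowNextDialog() { /* navigate on success */ }
}

class User
{
    public string Email;
    public string PasswordHash;
}

Each of these functions is a tangible place to grow an increment: start with a hard-coded user, then a CSV file, then the real hash, exactly as the feature list above suggests.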

    Read the article

  • Creating Wildcard Certificates with makecert.exe

    - by Shawn Cicoria
It'd be nice to be able to make wildcard certificates for use in development with makecert – turns out, it’s real easy.  Just ensure that your CN= is the wildcard string to use. The following sequence generates a CA cert, then the public/private key pair for a wildcard certificate:

REM make the CA
makecert -r -pe -n "CN=Contoso Test CA" -a sha1 -len 2048 -cy authority -sky signature -sv CA.pvk CA.cer

REM now make the server wildcard cert, signed by the CA
makecert -pe -n "CN=*.contosotest.com" -a sha1 -len 2048 -sky exchange -eku 1.3.6.1.5.5.7.3.1 -ic CA.cer -iv CA.pvk -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12 -sv wildcard.pvk wildcard.cer
pvk2pfx -pvk wildcard.pvk -spc wildcard.cer -pfx wildcard.pfx
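As a quick sanity check that the exported .pfx is usable from managed code, something along these lines should do. This is just an illustrative sketch; the empty password assumes you did not pass -po to pvk2pfx, and the file is assumed to sit next to the executable.

using System;
using System.Security.Cryptography.X509Certificates;

class LoadWildcardCert
{
    static void Main()
    {
        // Load the PFX produced by pvk2pfx above.
        var cert = new X509Certificate2("wildcard.pfx", string.Empty,
            X509KeyStorageFlags.MachineKeySet | X509KeyStorageFlags.PersistKeySet);

        Console.WriteLine(cert.Subject);        // CN=*.contosotest.com
        Console.WriteLine(cert.HasPrivateKey);  // True when the private key was exported
    }
}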

    Read the article

< Previous Page | 109 110 111 112 113 114 115 116 117 118 119 120  | Next Page >