Search Results

Search found 859 results on 35 pages for 'nibot lab'.

  • Devoxx 2011: Java EE 6 Hands-on Lab Delivered

    - by arungupta
    With Alexis's help, I delivered a Java EE 6 hands-on lab to a packed room of 40+ attendees at Devoxx 2011. The lab was derived from the OTN Developer Days 2012 version but added a lot more content to showcase several Java EE 6 technologies. The problem statement from the lab document states: This hands-on lab builds a typical 3-tier Java EE 6 Web application that retrieves customer information from a database and displays it in a Web page. The application also allows new customers to be added to the database. String-based and type-safe queries are used to query and add rows to the database. Each row in the database table is published as a RESTful resource and is then accessed programmatically. Typical design patterns required by a Web application, like validation, caching, observer, and partial page rendering, and cross-cutting concerns like logging, are explained and implemented using different Java EE 6 technologies. The lab covered Java Persistence API 2, Servlet 3, Enterprise JavaBeans 3.1, JavaServer Faces 2, Java API for RESTful Web Services 1.1, Contexts and Dependency Injection 1.0, and Bean Validation 1.0 over 47 pages of detailed self-paced instructions. The lab can be downloaded from here and requires only the NetBeans IDE "All" or "Java EE" version, which includes GlassFish. All the feedback received from the lab has been incorporated into the instructions and bugs filed (Updated 49559, 205232, 205248, 205256). 80% of the attendees could easily complete the lab, and some finished in much less than 3 hours. That indicates either that more content needs to be added to the lab or that the intellectual level of the attendees at the conference was pretty high. I think the lab has enough content for 3 hours, but we moved at a much faster pace, so I conclude the latter. Truly a joy to conduct a lab for 40 Devoxxians! Another related lab that might be handy for folks is "Develop, Deploy, and Monitor your Java EE 6 applications using GlassFish 3.1 Cluster". It explains how to: create a 2-instance GlassFish cluster; front-end it with a Web server and a load balancer; demonstrate session replication and failover; monitor the application using JavaScript. The complete lab instructions and source code are available and you can try them. I plan to continue evolving the contents of the Java EE 6 hands-on lab to cover more technologies and features and will announce them on this blog. Let me know what else you would like to see in future versions.
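
    For readers who want a feel for the style of code the lab walks through, here is a minimal, purely illustrative sketch; the class and field names are hypothetical, not taken from the lab document. It shows a JPA 2 entity published as a RESTful resource via an EJB 3.1 session bean acting as a JAX-RS 1.1 resource, in the spirit of the problem statement above (both classes are shown in one file for brevity):

        import java.util.List;
        import javax.ejb.Stateless;
        import javax.persistence.Entity;
        import javax.persistence.EntityManager;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;
        import javax.persistence.PersistenceContext;
        import javax.validation.constraints.NotNull;
        import javax.validation.constraints.Size;
        import javax.ws.rs.Consumes;
        import javax.ws.rs.GET;
        import javax.ws.rs.POST;
        import javax.ws.rs.Path;
        import javax.ws.rs.Produces;
        import javax.ws.rs.core.MediaType;
        import javax.xml.bind.annotation.XmlRootElement;

        // Hypothetical JPA 2 entity; the Bean Validation 1.0 constraints are
        // checked automatically when the entity is persisted.
        @Entity
        @XmlRootElement
        class Customer {
            @Id @GeneratedValue
            public Long id;
            @NotNull @Size(min = 1, max = 50)
            public String name;
        }

        // Hypothetical EJB 3.1 session bean doubling as a JAX-RS 1.1 resource,
        // publishing each Customer row as a RESTful resource.
        @Stateless
        @Path("customers")
        public class CustomerResource {

            @PersistenceContext
            private EntityManager em;

            @GET
            @Produces(MediaType.APPLICATION_XML)
            public List<Customer> all() {
                // String-based JPQL query; the lab also covers the
                // type-safe Criteria API for the same lookup.
                return em.createQuery("SELECT c FROM Customer c", Customer.class)
                         .getResultList();
            }

            @POST
            @Consumes(MediaType.APPLICATION_XML)
            public void add(Customer c) {
                em.persist(c);
            }
        }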

    Read the article

  • SharePoint 2010 MSDN Labs

    - by Kelly Jones
    Eric Ligman, from Microsoft, posted a great blog post this week listing all of the SharePoint 2010 Virtual Labs that are available from Microsoft.  His blog entry is here: http://blogs.msdn.com/b/mssmallbiz/archive/2012/03/13/sharepoint-server-2010-msdn-virtual-labs-available-to-you-online-plus-more-sharepoint-2010-resources.aspx He also posted other resources. I’ve copied his Virtual Lab links here:
    SharePoint Server 2010 Virtual Labs
    - MSDN Virtual Lab: SharePoint Server 2010: Introduction
    - MSDN Virtual Lab: Getting Started with SharePoint 2010
    - MSDN Virtual Lab: SharePoint 2010 User Interface Advancements
    - MSDN Virtual Lab: SharePoint Server 2010 Connectors & Using the Business Data Connectivity (BDC) Service
    - MSDN Virtual Lab: SharePoint Server 2010: Advanced Search Security
    - MSDN Virtual Lab: SharePoint Server 2010: Configuring Search UIs
    - MSDN Virtual Lab: SharePoint Server 2010: Content Processing and Property Extraction
    - MSDN Virtual Lab: SharePoint Server 2010: Developing a Custom Connector
    - MSDN Virtual Lab: SharePoint Server 2010: Fast Search Web Crawler
    - MSDN Virtual Lab: SharePoint Server 2010: Federated Search
    - MSDN Virtual Lab: SharePoint Server 2010: Linguistics
    - MSDN Virtual Lab: SharePoint Server 2010: People Search Administration and Management
    - MSDN Virtual Lab: SharePoint Server 2010: Relevancy and Ranking
    - MSDN Virtual Lab: Customizing MySites
    - MSDN Virtual Lab: Designing Lists and Schemas
    - MSDN Virtual Lab: Developing a BCS External Content Type with Visual Studio 2010
    - MSDN Virtual Lab: Developing a Sandboxed Solution with Web Parts
    - MSDN Virtual Lab: Developing a Visual Web Part in Visual Studio 2010
    - MSDN Virtual Lab: Developing Business Intelligence Applications
    - MSDN Virtual Lab: Enterprise Content Management
    - MSDN Virtual Lab: LINQ to SharePoint 2010
    - MSDN Virtual Lab: Visual Studio SharePoint Tools
    - MSDN Virtual Lab: Workflow
    In addition to the SharePoint Server 2010 Virtual Labs, here are a few other SharePoint 2010 resources that I thought you might also be interested in:
    - Technical reference for Microsoft SharePoint Server 2010
    - SharePoint 2010: IT Pro Evaluation Guide
    - Connecting SharePoint 2010 to Line-of-Business Systems to Deliver Business-Critical Solutions
    - Configure SharePoint Server 2010 as a Single Server with Microsoft SQL Server: Test Lab Guide
    - Microsoft SQL Server 2012 Reporting Services Add-in for Microsoft SharePoint Technologies 2010
    - Deploying FAST Search Server 2010 for SharePoint
    - FAST Search Server 2010 for SharePoint Add or Remove an Index Column
    - Upgrade worksheet for SharePoint Server 2010
    - Microsoft SharePoint Server 2010 Technical Library in Compiled Help format
    - Microsoft SharePoint Foundation 2010 Technical Library in Compiled Help format
    - Microsoft FAST Search Server 2010 for SharePoint Technical Library in Compiled Help format
    - Microsoft Reseller partner Learning Path
    - Microsoft solutions partners and ISVs Learning Path
    - Microsoft partner Practice Accelerator for SharePoint
    - Microsoft partner SharePoint 2010 Internal Use Licenses
    - SharePoint Case Studies
    - SharePoint MSDN Forums
    - SharePoint TechNet Forums
    - Microsoft SharePoint 2010 page on Microsoft Partner Network portal

    Read the article

  • 256 Worker Role 3D Rendering Demo is now a Lab on my Azure Course

    - by Alan Smith
    Ever since I came up with the crazy idea of creating an Azure application that would spin up 256 worker roles (please vote if you like it) to render a 3D animation created using the Kinect depth camera, I have been trying to think of something useful to do with it. I have also been busy developing training materials for a Windows Azure course that I will be delivering through a training partner in Stockholm, and for customers wanting to learn Windows Azure. I hit on the idea of combining the render demo and a course lab, creating a lab where the students would create and deploy their own mini render farms, which would participate in a single render job consisting of 2,000 frames. The architecture of the solution is shown below.
    As students would be creating and deploying their own applications, I thought it would be fun to introduce some competitiveness into the lab. In the 256 worker role demo I capture the rendering statistics for each role, so it was fairly simple to include the student's name in these statistics. This allowed the process monitor application to capture the number of frames each student had rendered and display a high-score table. When I demoed the application I deployed one instance that started rendering a frame every few minutes, and the challenge for the students was to deploy and scale their applications, and then overtake my single role instance by the end of the lab time. I had the process monitor running on the projector during the lab so the class could see the progress of their deployments, and how they were performing against my implementation and their classmates.
    When I tested the lab for the first time in Oslo last week it was a great success; the students were keen to be the first to build and deploy their solution and then watch the frames appear. As the students mostly had MSDN subscriptions, they were able to scale to the full 20 worker role instances, and before long we had over 100 worker roles working on the animation. There were, however, a couple of issues caused by the competitive nature of the lab. The first student to scale the application to 20 instances would render the most frames and win; there was no way for others to catch up. Also, as they were competing against each other, there was no incentive to help others on the course get their application up and running.
    I have now re-written the lab to divide the students into teams that will compete to render the most frames. This means that if one developer on a team can deploy and scale quickly, the other teams still have a chance to catch up. It also means that if a student finishes quickly and puts their team in the lead, they will have an incentive to help the other developers on their team get up and running. As I was using “Sharks with Lasers” for a lot of my demos, and reserved the sharkswithfreakinlasers namespaces for some of the Azure services (well, somebody had to do it), the students came up with some creative alternatives, like “Camels with Cannons” and “Honey Badgers with Homing Missiles”. That gave me the idea of having the teams choose a creative name involving animals and weapons. The team rendering architecture diagram is shown below.
    Render Challenge Rules
    In order to ensure fair play, a number of rules are imposed on the lab:
    - The class will be divided into teams; each team chooses a name.
    - The team name must consist of a ferocious animal combined with a hazardous weapon.
    - Teams can allocate as many worker roles as they can muster to the render job.
    - Frame processing statistics and rendered frames will be vigilantly monitored; any cheating, tampering, or other foul play will result in penalties.
    The screenshot below shows an example of the team render farm in action: Badgers with Bombs have taken a lead over Camels with Cannons, and both are leaving the Sharks with Lasers standing. If you are interested in attending a scheduled delivery of my Windows Azure or Windows Azure Service Bus courses, or would like on-site training, more details are here.

    Read the article

  • Creating a Training Lab on Windows Azure

    - by Michael Stephenson
    Originally posted on: http://geekswithblogs.net/michaelstephenson/archive/2013/06/17/153149.aspx
    This week we are preparing for a training course that Alan Smith will be running for the support teams at one of my customers around Windows Azure. In order to facilitate the training lab we have a few prerequisites we need to handle. One of the biggest is that although the support team all have MSDN accounts, the local desktops they work on are not ideal for running most of the labs, as we want to give them some additional developer background training around Azure. Some recent Azure announcements really help us in this area:
    - MSDN software can now be used on Azure VMs
    - You don't pay for Azure VMs when they are no longer used
    Since the support team only have limited experience of Windows Azure and the organisation also has an Enterprise Agreement, we decided it would be best value for money to spin up a training lab in a subscription on the EA; we can then turn the machines off when we are done. At the same time we would be able to spin them back up when the users need to do some additional lab work once the training course is completed. To achieve this I wanted to create a PowerShell script which would set up my training lab. The aim was to create 18 VMs based on a prebuilt template with Visual Studio and the Azure development tools. The script I used is described below.
    The Start & Variables
    The text below sets up the PowerShell environment and some variables which I will use elsewhere in the script. It also imports the Azure PowerShell cmdlets. You can see below that I will need to download my publisher settings file and know some details from my Azure account. At this point I will assume you have a basic understanding of Azure and PowerShell, so you already know how to do this.

        Set-ExecutionPolicy Unrestricted
        cls
        $startTime = get-date
        Import-Module "C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure\Azure.psd1"
        # Azure Publisher Settings
        $azurePublisherSettings = '<Your settings file>.publishsettings'
        # Subscription Details
        $subscriptionName = "<Your subscription name>"
        $defaultStorageAccount = "<Your default storage account>"
        # Affinity Group Details
        $affinityGroup = '<Your affinity group>'
        $dataCenter = 'West Europe' # From Get-AzureLocation
        # VM Details
        $baseVMName = 'TRN'
        $adminUserName = '<Your admin username>'
        $password = '<Your admin password>'
        $size = 'Medium'
        $vmTemplate = '<The name of your VM template image>'
        $rdpFilePath = '<File path to save RDP files to>'
        $machineSettingsPath = '<File path to save machine info to>'

    Functions
    In the next section of the script I have some functions which are used to perform certain actions. The first is called CreateVM. This will do the following actions:
    - If the VM already exists it will be deleted
    - Create the cloud service
    - Create the VM from the template I have created
    - Add an endpoint so we can RDP to them all over the same port
    - Download the RDP file so there is a shortcut the trainees can easily access the machine via
    - Write settings for the machine to a log file

        function CreateVM($machineNo)
        {
            # Specify a name for the new VM
            $machineName = "$baseVMName-$machineNo"
            Write-Host "Creating VM: $machineName"

            # Get the Azure VM Image
            $myImage = Get-AzureVMImage $vmTemplate

            # If the VM already exists delete and re-create it
            $existingVm = Get-AzureVM -Name $machineName -ServiceName $serviceName
            if($existingVm -ne $null)
            {
                Write-Host "VM already exists so deleting it"
                Remove-AzureVM -Name $machineName -ServiceName $serviceName
            }

            "Creating Service"
            $serviceName = "bupa-azure-train-$machineName"
            Remove-AzureService -Force -ServiceName $serviceName
            New-AzureService -Location $dataCenter -ServiceName $serviceName

            Write-Host "Creating VM: $machineName"
            New-AzureQuickVM -Windows -name $machineName -ServiceName $serviceName -ImageName $myImage.ImageName -InstanceSize $size -AdminUsername $adminUserName -Password $password

            Write-Host "Updating the RDP endpoint for $machineName"
            Get-AzureVM -name $machineName -ServiceName $serviceName `
                | Add-AzureEndpoint -Name RDP -Protocol TCP -LocalPort 3389 -PublicPort 550 `
                | Update-AzureVM

            Write-Host "Get the RDP File for machine $machineName"
            $machineRDPFilePath = "$rdpFilePath\$machineName.rdp"
            Get-AzureRemoteDesktopFile -name $machineName -ServiceName $serviceName -LocalPath "$machineRDPFilePath"

            WriteMachineSettings "$machineName" "$serviceName"
        }

    The DeleteMachineSettings function is used to delete the log file before we start re-running the process.

        function DeleteMachineSettings()
        {
            Write-Host "Deleting the machine settings output file"
            [System.IO.File]::Delete("$machineSettingsPath");
        }

    The WriteMachineSettings function will get the VM and then record its details to the log file. The importance of the log file is that I can easily provide the information for all of the VMs to our infrastructure team, so they can configure access to all of the VMs.

        function WriteMachineSettings([string]$vmName, [string]$vmServiceName)
        {
            Write-Host "Writing to the machine settings output file"

            $vm = Get-AzureVM -name $vmName -ServiceName $vmServiceName
            $vmEndpoint = Get-AzureEndpoint -VM $vm -Name RDP

            $sb = new-object System.Text.StringBuilder
            $sb.Append("Service Name: ");
            $sb.Append($vm.ServiceName);
            $sb.Append(", ");
            $sb.Append("VM: ");
            $sb.Append($vm.Name);
            $sb.Append(", ");
            $sb.Append("RDP Public Port: ");
            $sb.Append($vmEndpoint.Port);
            $sb.Append(", ");
            $sb.Append("Public DNS: ");
            $sb.Append($vmEndpoint.Vip);
            $sb.AppendLine("");
            [System.IO.File]::AppendAllText($machineSettingsPath, $sb.ToString());
        } # end functions

    Rest of Script
    The rest of the script is really just the bit that orchestrates the actions we want to happen. It will load the publisher settings, select the Azure subscription and then loop around the CreateVM function to create 16 VMs.

        Import-AzurePublishSettingsFile $azurePublisherSettings
        Set-AzureSubscription -SubscriptionName $subscriptionName -CurrentStorageAccount $defaultStorageAccount
        Select-AzureSubscription -SubscriptionName $subscriptionName

        DeleteMachineSettings

        "Starting creating Bupa International Azure Training Lab"
        $numberOfVMs = 16

        for ($index=1; $index -le $numberOfVMs; $index++)
        {
            $vmNo = "$index"
            CreateVM($vmNo);
        }

        "Finished creating Bupa International Azure Training Lab"

        # Give it a Minute
        Start-Sleep -s 60

        $endTime = get-date
        "Script run time " + ($endTime - $startTime)

    Conclusion
    As you can see there is nothing too fancy about this script, but in our case of creating a small isolated training lab which is not connected to our corporate network, we can easily use this to provision the lab. I'm sure if this is of use to anyone you can easily modify it to do other things with the lab environment too. A couple of points to note: there are some soft limits in Azure on the number of cores and services your subscription can use, and you may need to contact the Azure support team to increase this limit. In terms of the real business value of this approach, it was not possible to use the existing desktops to do the training on, and getting some internal virtual machines would have been relatively expensive and time-consuming for our ops team. With the Azure option we are able to spin these machines up for a temporary period during the training course and then throw them away when we are done. We expect the cost of this test lab to be very small, especially considering we have EA pricing. As a ballpark, I think my 18-VM lab training environment will cost in the region of $80 per day on our EA. This is a fraction of the cost of creating a single VM on premises.

    Read the article

  • How can I split Excel data from one row into multiple rows

    - by Lenny
    Good afternoon,
    Is there a way to split data from one row and store it in separate rows? I have a large file that contains scheduling information, and I'm trying to develop a list that comprises each combination of course, day, term and period per line. For example, I have a file similar to this:
    Crs:Sn Title Tchr TchrName Room Days Terms Periods
    7014:01 English I 678 JUNG 300 M,T,W,R,F 3,4 2,3
    1034:02 English II 123 MOORE 352 M,T,W,R,F 3 4
    7144:02 Algebra 238 VYSOTSKY 352 M,T,W,R,F 3,4 3,4
    0180:06 Pub Speaking 23 ROSEN 228 M,T,W,R,F 3,4 5
    7200:03 PE I 244 HARILAOU GYM 4 M,T,W,R,F 1,2,3 3
    2101:01 Physics/Lab 441 JONES 348 M,T,W,R,F 1,2,3,4 2,3
    It should extract to this in an Excel file:
    Crs:Sn Title Tchr# Tchr Room Days Terms Period
    7014:01 English I 678 JUNG 300 M 3 2
    7014:01 English I 678 JUNG 300 T 3 2
    7014:01 English I 678 JUNG 300 W 3 2
    7014:01 English I 678 JUNG 300 R 3 2
    7014:01 English I 678 JUNG 300 F 3 2
    7014:01 English I 678 JUNG 300 M 4 2
    7014:01 English I 678 JUNG 300 T 4 2
    7014:01 English I 678 JUNG 300 W 4 2
    7014:01 English I 678 JUNG 300 R 4 2
    7014:01 English I 678 JUNG 300 F 4 2
    7014:01 English I 678 JUNG 300 M 3 3
    7014:01 English I 678 JUNG 300 T 3 3
    7014:01 English I 678 JUNG 300 W 3 3
    7014:01 English I 678 JUNG 300 R 3 3
    7014:01 English I 678 JUNG 300 F 3 3
    7014:01 English I 678 JUNG 300 M 4 3
    7014:01 English I 678 JUNG 300 T 4 3
    7014:01 English I 678 JUNG 300 W 4 3
    7014:01 English I 678 JUNG 300 R 4 3
    7014:01 English I 678 JUNG 300 F 4 3
    1034:02 English II 123 MOORE 352 M 3 4
    1034:02 English II 123 MOORE 352 T 3 4
    1034:02 English II 123 MOORE 352 W 3 4
    1034:02 English II 123 MOORE 352 R 3 4
    1034:02 English II 123 MOORE 352 F 3 4
    7144:02 Algebra 238 VYSOTSKY 352 M 3 3
    7144:02 Algebra 238 VYSOTSKY 352 T 3 3
    7144:02 Algebra 238 VYSOTSKY 352 W 3 3
    7144:02 Algebra 238 VYSOTSKY 352 R 3 3
    7144:02 Algebra 238 VYSOTSKY 352 F 3 3
    7144:02 Algebra 238 VYSOTSKY 352 M 4 3
    7144:02 Algebra 238 VYSOTSKY 352 T 4 3
    7144:02 Algebra 238 VYSOTSKY 352 W 4 3
    7144:02 Algebra 238 VYSOTSKY 352 R 4 3
    7144:02 Algebra 238 VYSOTSKY 352 F 4 3
    7144:02 Algebra 238 VYSOTSKY 352 M 3 4
    7144:02 Algebra 238 VYSOTSKY 352 T 3 4
    7144:02 Algebra 238 VYSOTSKY 352 W 3 4
    7144:02 Algebra 238 VYSOTSKY 352 R 3 4
    7144:02 Algebra 238 VYSOTSKY 352 F 3 4
    7144:02 Algebra 238 VYSOTSKY 352 M 4 4
    7144:02 Algebra 238 VYSOTSKY 352 T 4 4
    7144:02 Algebra 238 VYSOTSKY 352 W 4 4
    7144:02 Algebra 238 VYSOTSKY 352 R 4 4
    7144:02 Algebra 238 VYSOTSKY 352 F 4 4
    0180:06 Pub Speaking 23 ROSEN 228 M 3 5
    0180:06 Pub Speaking 23 ROSEN 228 T 3 5
    0180:06 Pub Speaking 23 ROSEN 228 W 3 5
    0180:06 Pub Speaking 23 ROSEN 228 R 3 5
    0180:06 Pub Speaking 23 ROSEN 228 F 3 5
    0180:06 Pub Speaking 23 ROSEN 228 M 4 5
    0180:06 Pub Speaking 23 ROSEN 228 T 4 5
    0180:06 Pub Speaking 23 ROSEN 228 W 4 5
    0180:06 Pub Speaking 23 ROSEN 228 R 4 5
    0180:06 Pub Speaking 23 ROSEN 228 F 4 5
    7200:03 PE I 244 HARILAOU GYM 4 M 1 3
    7200:03 PE I 244 HARILAOU GYM 4 M 2 3
    7200:03 PE I 244 HARILAOU GYM 4 M 3 3
    7200:03 PE I 244 HARILAOU GYM 4 T 1 3
    7200:03 PE I 244 HARILAOU GYM 4 T 2 3
    7200:03 PE I 244 HARILAOU GYM 4 T 3 3
    7200:03 PE I 244 HARILAOU GYM 4 W 1 3
    7200:03 PE I 244 HARILAOU GYM 4 W 2 3
    7200:03 PE I 244 HARILAOU GYM 4 W 3 3
    7200:03 PE I 244 HARILAOU GYM 4 R 1 3
    7200:03 PE I 244 HARILAOU GYM 4 R 2 3
    7200:03 PE I 244 HARILAOU GYM 4 R 3 3
    7200:03 PE I 244 HARILAOU GYM 4 F 1 3
    7200:03 PE I 244 HARILAOU GYM 4 F 2 3
    7200:03 PE I 244 HARILAOU GYM 4 F 3 3
    2101:01 Physics/Lab 441 JONES 348 M 1 2
    2101:01 Physics/Lab 441 JONES 348 M 2 2
    2101:01 Physics/Lab 441 JONES 348 M 3 2
    2101:01 Physics/Lab 441 JONES 348 M 4 2
    2101:01 Physics/Lab 441 JONES 348 T 1 2
    2101:01 Physics/Lab 441 JONES 348 T 2 2
    2101:01 Physics/Lab 441 JONES 348 T 3 2
    2101:01 Physics/Lab 441 JONES 348 T 4 2
    2101:01 Physics/Lab 441 JONES 348 W 1 2
    2101:01 Physics/Lab 441 JONES 348 W 2 2
    2101:01 Physics/Lab 441 JONES 348 W 3 2
    2101:01 Physics/Lab 441 JONES 348 W 4 2
    2101:01 Physics/Lab 441 JONES 348 R 1 2
    2101:01 Physics/Lab 441 JONES 348 R 2 2
    2101:01 Physics/Lab 441 JONES 348 R 3 2
    2101:01 Physics/Lab 441 JONES 348 R 4 2
    2101:01 Physics/Lab 441 JONES 348 F 1 2
    2101:01 Physics/Lab 441 JONES 348 F 2 2
    2101:01 Physics/Lab 441 JONES 348 F 3 2
    2101:01 Physics/Lab 441 JONES 348 F 4 2
    2101:01 Physics/Lab 441 JONES 348 M 1 3
    2101:01 Physics/Lab 441 JONES 348 M 2 3
    2101:01 Physics/Lab 441 JONES 348 M 3 3
    2101:01 Physics/Lab 441 JONES 348 M 4 3
    2101:01 Physics/Lab 441 JONES 348 T 1 3
    2101:01 Physics/Lab 441 JONES 348 T 2 3
    2101:01 Physics/Lab 441 JONES 348 T 3 3
    2101:01 Physics/Lab 441 JONES 348 T 4 3
    2101:01 Physics/Lab 441 JONES 348 W 1 3
    2101:01 Physics/Lab 441 JONES 348 W 2 3
    2101:01 Physics/Lab 441 JONES 348 W 3 3
    2101:01 Physics/Lab 441 JONES 348 W 4 3
    2101:01 Physics/Lab 441 JONES 348 R 1 3
    2101:01 Physics/Lab 441 JONES 348 R 2 3
    2101:01 Physics/Lab 441 JONES 348 R 3 3
    2101:01 Physics/Lab 441 JONES 348 R 4 3
    2101:01 Physics/Lab 441 JONES 348 F 1 3
    2101:01 Physics/Lab 441 JONES 348 F 2 3
    2101:01 Physics/Lab 441 JONES 348 F 3 3
    2101:01 Physics/Lab 441 JONES 348 F 4 3
    I'm trying to avoid going line by line separating the data. I'm not well versed in the VBA functionality of Excel, but would like to get started using it. Any help would be greatly appreciated.
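
    For what it's worth, the transformation being asked for is a straightforward cartesian expansion over the three comma-separated columns (Days, Terms, Periods). Below is a hypothetical standalone sketch of that logic - in Java rather than the VBA the asker mentions, and assuming a tab-delimited input file named schedule.txt - writing a CSV that Excel can open; the same three nested loops translate directly into VBA over a worksheet's rows:

        import java.io.IOException;
        import java.io.PrintWriter;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.List;

        // Hypothetical sketch: expands each schedule row into one output row
        // per (day, term, period) combination, writing a CSV that Excel opens.
        public class ScheduleSplitter {
            public static void main(String[] args) throws IOException {
                List<String> rows = Files.readAllLines(
                        Paths.get("schedule.txt"), StandardCharsets.UTF_8);
                try (PrintWriter out = new PrintWriter("schedule_split.csv")) {
                    out.println("Crs:Sn,Title,Tchr#,Tchr,Room,Days,Terms,Period");
                    for (String row : rows.subList(1, rows.size())) { // skip header
                        String[] f = row.split("\t"); // assumed tab-delimited
                        // f[5]=Days, f[6]=Terms, f[7]=Periods, each comma-separated
                        for (String period : f[7].split(","))
                            for (String term : f[6].split(","))
                                for (String day : f[5].split(","))
                                    out.printf("%s,%s,%s,%s,%s,%s,%s,%s%n",
                                            f[0], f[1], f[2], f[3], f[4],
                                            day, term, period);
                    }
                }
            }
        }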

    Read the article

  • Oracle Solaris 11 ZFS Lab for Openworld 2012

    - by user12626122
    Preface
    This is the content from the Oracle OpenWorld 2012 ZFS lab. It was well attended - the feedback was that it was a little short - that's probably because, in writing it, I became very time-conscious after the ASM/ACFS on Solaris extravaganza I ran last year, which was almost too long for mortal man to finish in the 1-hour session. Enjoy.
    Table of Contents
    Exercise Z.1: ZFS Pools
    Exercise Z.2: ZFS File Systems
    Exercise Z.3: ZFS Compression
    Exercise Z.4: ZFS Deduplication
    Exercise Z.5: ZFS Encryption
    Exercise Z.6: Solaris 11 Shadow Migration
    Introduction
    This set of exercises is designed to briefly demonstrate new features in the Solaris 11 ZFS file system: Deduplication, Encryption and Shadow Migration. Also included is the creation of zpools and zfs file systems - the basic building blocks of the technology - and Compression, which is the complement of Deduplication. The exercises are just introductions - you are referred to the ZFS Administration Manual for further information. From Solaris 11 onward the online manual pages consist of zpool(1M) and zfs(1M) with further feature-specific information in zfs_allow(1M), zfs_encrypt(1M) and zfs_share(1M). The lab is easily carried out in a VirtualBox running Solaris 11 with 6 virtual 3 GB disks to play with.
    Exercise Z.1: ZFS Pools
    Task: You have several disks to use for your new file system. Create a new zpool and a file system within it.
    Lab: You will check the status of existing zpools, create your own pool and expand it.
    Your Solaris 11 installation already has a root ZFS pool. It contains the root file system. Check this:
    root@solaris:~# zpool list
    NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
    rpool 15.9G 6.62G 9.25G 41% 1.00x ONLINE -
    root@solaris:~# zpool status
    pool: rpool
    state: ONLINE
    scan: none requested
    config:
    NAME STATE READ WRITE CKSUM
    rpool ONLINE 0 0 0
    c3t0d0s0 ONLINE 0 0 0
    errors: No known data errors
    Note the disk device the root pool is on - c3t0d0s0.
    Now you will create your own ZFS pool. First you will check what disks are available:
    root@solaris:~# echo | format
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
    0. c3t0d0 <ATA-VBOX HARDDISK-1.0 cyl 2085 alt 2 hd 255 sec 63> /pci@0,0/pci8086,2829@d/disk@0,0
    1. c3t2d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@2,0
    2. c3t3d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@3,0
    3. c3t4d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@4,0
    4. c3t5d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@5,0
    5. c3t6d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@6,0
    6. c3t7d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@7,0
    Specify disk (enter its number):
    The root disk is numbered 0. The others are free for use.
    Try creating a simple pool and observe the error message:
    root@solaris:~# zpool create mypool c3t2d0 c3t3d0
    'mypool' successfully created, but with no redundancy; failure of one device will cause loss of the pool
    So destroy that pool and create a mirrored pool instead:
    root@solaris:~# zpool destroy mypool
    root@solaris:~# zpool create mypool mirror c3t2d0 c3t3d0
    root@solaris:~# zpool status mypool
    pool: mypool
    state: ONLINE
    scan: none requested
    config:
    NAME STATE READ WRITE CKSUM
    mypool ONLINE 0 0 0
    mirror-0 ONLINE 0 0 0
    c3t2d0 ONLINE 0 0 0
    c3t3d0 ONLINE 0 0 0
    errors: No known data errors
    Exercise Z.2: ZFS File Systems
    Task: You have to create file systems for later exercises.
    You can see that when a pool is created, a file system of the same name is created:
    root@solaris:~# zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    mypool 86.5K 2.94G 31K /mypool
    Create your filesystems and mountpoints as follows:
    root@solaris:~# zfs create -o mountpoint=/data1 mypool/mydata1
    The -o option sets the mount point and automatically creates the necessary directory.
    root@solaris:~# zfs list mypool/mydata1
    NAME USED AVAIL REFER MOUNTPOINT
    mypool/mydata1 31K 2.94G 31K /data1
    Exercise Z.3: ZFS Compression
    Task: Try out different forms of compression available in ZFS.
    Lab: Create a 2nd filesystem with compression, fill both file systems with the same data, observe results.
    You can see from the zfs(1) manual page that there are several types of compression available to you, set with the property=value syntax:
    compression=on | off | lzjb | gzip | gzip-N | zle
    Controls the compression algorithm used for this dataset. The lzjb compression algorithm is optimized for performance while providing decent data compression. Setting compression to on uses the lzjb compression algorithm. The gzip compression algorithm uses the same compression as the gzip(1) command. You can specify the gzip level by using the value gzip-N, where N is an integer from 1 (fastest) to 9 (best compression ratio). Currently, gzip is equivalent to gzip-6 (which is also the default for gzip(1)).
    Create a second filesystem with compression turned on. Note how you set and get your values separately:
    root@solaris:~# zfs create -o mountpoint=/data2 mypool/mydata2
    root@solaris:~# zfs set compression=gzip-9 mypool/mydata2
    root@solaris:~# zfs get compression mypool/mydata1
    NAME PROPERTY VALUE SOURCE
    mypool/mydata1 compression off default
    root@solaris:~# zfs get compression mypool/mydata2
    NAME PROPERTY VALUE SOURCE
    mypool/mydata2 compression gzip-9 local
    Now you can copy the contents of /usr/lib into both your normal and compressing filesystem and observe the results. Don't forget the dot or period (".") in the find(1) command below:
    root@solaris:~# cd /usr/lib
    root@solaris:/usr/lib# find . -print | cpio -pdv /data1
    root@solaris:/usr/lib# find . -print | cpio -pdv /data2
    The copy into the compressing file system takes longer - as it has to perform the compression - but the results show the effect:
    root@solaris:/usr/lib# zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    mypool 1.35G 1.59G 31K /mypool
    mypool/mydata1 1.01G 1.59G 1.01G /data1
    mypool/mydata2 341M 1.59G 341M /data2
    Note that the available space in the pool is shared amongst the file systems. This behavior can be modified using quotas and reservations, which are not covered in this lab but are covered extensively in the ZFS Administrators Guide.
    Exercise Z.4: ZFS Deduplication
    The deduplication property is used to remove redundant data from a ZFS file system. With the property enabled, duplicate data blocks are removed synchronously. The result is that only unique data is stored and common components are shared.
    Task: See how to implement deduplication and its effects.
    Lab: You will create a ZFS file system with deduplication turned on and see if it reduces the amount of physical storage needed when we again fill it with a copy of /usr/lib.
    root@solaris:/usr/lib# zfs destroy mypool/mydata2
    root@solaris:/usr/lib# zfs set dedup=on mypool/mydata1
    root@solaris:/usr/lib# rm -rf /data1/*
    root@solaris:/usr/lib# mkdir /data1/2nd-copy
    root@solaris:/usr/lib# zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    mypool 1.02M 2.94G 31K /mypool
    mypool/mydata1 43K 2.94G 43K /data1
    root@solaris:/usr/lib# find . -print | cpio -pd /data1
    2142768 blocks
    root@solaris:/usr/lib# zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    mypool 1.02G 1.99G 31K /mypool
    mypool/mydata1 1.01G 1.99G 1.01G /data1
    root@solaris:/usr/lib# find . -print | cpio -pd /data1/2nd-copy
    2142768 blocks
    root@solaris:/usr/lib# zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    mypool 1.99G 1.96G 31K /mypool
    mypool/mydata1 1.98G 1.96G 1.98G /data1
    You could go on creating copies for quite a while... but you get the idea. Note that deduplication and compression can be combined: the compression acts on metadata. Deduplication works across file systems in a pool, and there is a zpool-wide property dedupratio:
    root@solaris:/usr/lib# zpool get dedupratio mypool
    NAME PROPERTY VALUE SOURCE
    mypool dedupratio 4.30x -
    Deduplication can also be checked using "zpool list":
    root@solaris:/usr/lib# zpool list
    NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
    mypool 2.98G 1001M 2.01G 32% 4.30x ONLINE -
    rpool 15.9G 6.66G 9.21G 41% 1.00x ONLINE -
    Before moving on to the next topic, destroy that dataset and free up some space:
    root@solaris:~# zfs destroy mypool/mydata1
    Exercise Z.5: ZFS Encryption
    Task: Encrypt sensitive data.
    Lab: Explore basic ZFS encryption.
    This lab only covers the basics of ZFS encryption. In particular it does not cover various aspects of key management. Please see the ZFS Administration Manual and the zfs_encrypt(1M) manual page for more detail on this functionality.
    root@solaris:~# zfs create -o encryption=on mypool/data2
    Enter passphrase for 'mypool/data2': ********
    Enter again: ********
    root@solaris:~#
    Creation of a descendent dataset shows that encryption is inherited from the parent:
    root@solaris:~# zfs create mypool/data2/data3
    root@solaris:~# zfs get -r encryption,keysource,keystatus,checksum mypool/data2
    NAME PROPERTY VALUE SOURCE
    mypool/data2 encryption on local
    mypool/data2 keysource passphrase,prompt local
    mypool/data2 keystatus available -
    mypool/data2 checksum sha256-mac local
    mypool/data2/data3 encryption on inherited from mypool/data2
    mypool/data2/data3 keysource passphrase,prompt inherited from mypool/data2
    mypool/data2/data3 keystatus available -
    mypool/data2/data3 checksum sha256-mac inherited from mypool/data2
    You will find the online manual page zfs_encrypt(1M) contains examples. In particular, if time permits during this lab session you may wish to explore the changing of a key using "zfs key -c mypool/data2".
    Exercise Z.6: Shadow Migration
    Shadow Migration allows you to migrate data from an old file system to a new file system while simultaneously allowing access and modification of the new file system during the process. You can use Shadow Migration to migrate a local or remote UFS or ZFS file system to a local file system.
    Task: You wish to migrate data from one file system (UFS, ZFS, VxFS) to ZFS while maintaining access to it.
    Lab: Create the infrastructure for shadow migration and transfer one file system into another.
    First create the file system you want to migrate:
    root@solaris:~# zpool create oldstuff c3t4d0
    root@solaris:~# zfs create oldstuff/forgotten
    Then populate it with some files:
    root@solaris:~# cd /var/adm
    root@solaris:/var/adm# find . -print | cpio -pdv /oldstuff/forgotten
    You need the shadow-migration package installed:
    root@solaris:~# pkg install shadow-migration
    Packages to install: 1
    Create boot environment: No
    Create backup boot environment: No
    Services to change: 1
    DOWNLOAD PKGS FILES XFER (MB)
    Completed 1/1 14/14 0.2/0.2
    PHASE ACTIONS
    Install Phase 39/39
    PHASE ITEMS
    Package State Update Phase 1/1
    Image State Update Phase 2/2
    You then enable the shadowd service:
    root@solaris:~# svcadm enable shadowd
    root@solaris:~# svcs shadowd
    STATE STIME FMRI
    online 7:16:09 svc:/system/filesystem/shadowd:default
    Set the filesystem to be migrated to read-only:
    root@solaris:~# zfs set readonly=on oldstuff/forgotten
    Create a new zfs file system with the shadow property set to the file system to be migrated:
    root@solaris:~# zfs create -o shadow=file:///oldstuff/forgotten mypool/remembered
    Use the shadowstat(1M) command to see the progress of the migration:
    root@solaris:~# shadowstat
    EST BYTES BYTES ELAPSED
    DATASET XFRD LEFT ERRORS TIME
    mypool/remembered 92.5M - - 00:00:59
    mypool/remembered 99.1M 302M - 00:01:09
    mypool/remembered 109M 260M - 00:01:19
    mypool/remembered 133M 304M - 00:01:29
    mypool/remembered 149M 339M - 00:01:39
    mypool/remembered 156M 86.4M - 00:01:49
    mypool/remembered 156M 8E 29 (completed)
    Note that if you had created /mypool/remembered as encrypted, this would be the preferred method of encrypting existing data. Similarly for compressing or deduplicating existing data. The procedure for migrating a file system over NFS is similar - see the ZFS Administration manual.
    That concludes this lab session.

    Read the article

  • Virtual Lab part 2 – Templates, Patterns, Baselines

    - by Geoff N. Hiten
    Once you have a good virtualization platform chosen, whether it is a desktop, server or laptop environment, the temptation is to build “X”.  “X” may be a SharePoint lab, a Virtual Cluster, an AD test environment or some other cool project that you really need RIGHT NOW.  That would be doing it wrong.
    My grandfather taught woodworking and cabinetmaking for twenty-seven years at a trade school in Alabama.  He was the first instructor hired at that school and the only teacher for the first two years.  His students built tables, chairs, and workbenches so the school could start its HVAC courses.  Visiting as a child, I also noticed many extra “helper” stands, benches, holders, and gadgets all built from wood.  What does that have to do with a virtual lab, you ask?  Well, that is the same approach you should take.  Build stuff that you will use.  Not for solving a particular problem, but to let the virtual lab be part of your normal troubleshooting toolkit.
    Start with basic copies of various Operating Systems.  Load and patch server and desktop OS environments.  This also helps build your collection of ISO files, another essential element of a virtual lab.  Once you have these “baseline” images, you can use your virtualization software’s snapshot capability to freeze the image.  Clone the snapshot and you have a brand new fully patched machine in mere moments.  You may have to sysprep some of the Microsoft OS environments if you are going to create a domain environment or experiment with clustering.  That is still much faster than loading and patching from scratch.
    So once you have a stock of raw materials (baseline images in this case), where should you start?  Again, my grandfather’s workshop gives us the answer.  In the shop it was workbenches and tables to hold large workpieces that made the equipment more useful.  In a Windows environment the same role falls to the fundamental network services: DHCP, DNS, Active Directory, Routing, File Services, and Storage services.  Plan your internal network setup.  Build out an AD controller with all the features listed.  Make the actual domain an isolated domain so it will not care about where you take it.  Add the Microsoft iSCSI target.  Once you have this single system, you can leverage it for almost any network environment beyond a simple stand-alone system.
    Having these templates and fundamental infrastructure elements ready to run means I can build a quick lab in minutes instead of hours.  My solutions are well-tested, my processes fully documented with screenshots, and my plans validated well before I have to make any changes to client systems.  The work I put in is easily returned in increased value and client satisfaction.

    Read the article

  • Oracle Solaris Remote Lab (OSRL) Fact Sheet

    - by user13333379
    The Oracle Solaris Remote Lab allows independent software vendors (ISVs) to test and qualify their applications in a self-service Solaris cloud. ISVs who are Oracle Partner Network Gold members with a specialization in the Solaris knowledge zone can apply for free access in OPN. The lab offers the following features to its users:
    - Lifetime of project: 45 days (extensions granted on demand)
    - Up to 5 virtual machines in a private network
    - Virtual machine technology: Solaris zones
    - Resources per VM: processor support for SPARC or x86; OS version: Oracle Solaris 11.0; 4 GB physical memory; 4 GB swap space; 10 GB local filesystem storage; 10 GB network filesystem (NFS) mounted on all virtual machines
    - Networking configuration: the only external network routes are to the partner's other virtual machines; no network routing to the Internet; the SMB (CIFS) sharing protocol is not available between virtual machines
    - Device access: applications that assume the existence of /devices will not run in a virtual machine; applications that use eeprom to modify SPARC eeprom settings will not run in a virtual machine; the following utilities do not work properly in virtual machines: add_drv, disks, prtconf, prtdiag, rem_dev
    - Access technology: Secure Global Desktop, file upload and download, root access within the VM
    - Available VM templates (both processor architectures): Oracle Database 11g Release 2 (11.2.0.3) for Solaris with Oracle Enterprise Manager 11g; WebLogic 12c; SAMP: Apache HTTP server, PHP, MySQL, phpadmin; on all templates and images: Oracle Solaris Studio 12.3 for application development
    More resources: Online application for Oracle Solaris Remote Lab; Developer Webinar about the Oracle Solaris Remote Lab; Everything an Oracle Solaris Developer needs...

    Read the article

  • Adventures in Lab Management Configuration: Part 3 of 3

    - by Enrique Lima
    This is long overdue, but here it is. In the previous two sections I discussed how I got a CMMI v4.2 project to take on the same fields as v5 and therefore be able to communicate with MTM and Lab Manager.  And that was quite a success. Yet when I opened up Lab Management, while it was fully aware of the VMs being there, it refused to let me enroll them into an environment.  It kept stating there was no suitable host to deploy the VM to, error TF259115. This was an indication that something was not matching the expected network configuration between TFS and Hyper-V/SCVMM. So, here are a couple of things that took place:
    - Verified the network segment specified for network isolation matched what was configured physically for either DHCP or manually assigned IP addressing for the guest VMs
    - Made sure TFS was fully aware of the configuration settings for the network location name.  For that I issued:  tfsconfig lab /settings /networklocation:"<name of the network location configured in SCVMM>"
    On that last item, that was key to making sure Lab Management communicated with the VMs and for it to allow enrollment into the new Virtual Environment.

    Read the article

  • How to Set Up a Hadoop Cluster Using Oracle Solaris (Hands-On Lab)

    - by Orgad Kimchi
    Oracle Technology Network (OTN) published the "How to Set Up a Hadoop Cluster Using Oracle Solaris" OOW 2013 Hands-On Lab. This hands-on lab presents exercises that demonstrate how to set up an Apache Hadoop cluster using Oracle Solaris 11 technologies such as Oracle Solaris Zones, ZFS, and network virtualization. Key topics include the Hadoop Distributed File System (HDFS) and the Hadoop MapReduce programming model. We will also cover the Hadoop installation process and the cluster building blocks: NameNode, a secondary NameNode, and DataNodes. In addition, you will see how you can combine the Oracle Solaris 11 technologies for better scalability and data security, and you will learn how to load data into the Hadoop cluster and run a MapReduce job.
    Summary of Lab Exercises
    This hands-on lab consists of 13 exercises covering various Oracle Solaris and Apache Hadoop technologies:
    1. Install Hadoop.
    2. Edit the Hadoop configuration files.
    3. Configure the Network Time Protocol.
    4. Create the virtual network interfaces (VNICs).
    5. Create the NameNode and the secondary NameNode zones.
    6. Set up the DataNode zones.
    7. Configure the NameNode.
    8. Set up SSH.
    9. Format HDFS from the NameNode.
    10. Start the Hadoop cluster.
    11. Run a MapReduce job.
    12. Secure data at rest using ZFS encryption.
    13. Use Oracle Solaris DTrace for performance monitoring.
    Read it now
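
    The lab document itself contains the actual exercise steps; as a generic reminder of what the MapReduce programming model mentioned above looks like in code (this word-count mapper is illustrative and not taken from the lab), a mapper emits key-value pairs that the framework groups by key before the reduce phase:

        import java.io.IOException;
        import org.apache.hadoop.io.IntWritable;
        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Mapper;

        // Generic word-count mapper: for every line of input it emits one
        // (word, 1) pair; the framework then groups the pairs by word so a
        // reducer can sum the counts.
        public class WordCountMapper
                extends Mapper<LongWritable, Text, Text, IntWritable> {

            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            protected void map(LongWritable offset, Text line, Context context)
                    throws IOException, InterruptedException {
                for (String token : line.toString().split("\\s+")) {
                    if (!token.isEmpty()) {
                        word.set(token);
                        context.write(word, ONE);
                    }
                }
            }
        }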

    Read the article

  • Windows Phone Camp Hands On Accelerator Lab in Dallas

    - by Bill Osuch
    Microsoft is hosting another Windows Phone Accelerator lab this December 13-15 in Dallas:
    Do you have the next million dollar idea that you just can’t find the time to finish?  Do you already have an app for Android and iPhone that you want to expand into new markets?  It’s time to turn your napkin sketches and hard work into real, sellable apps for Windows Phone in ONE WEEK!  Join us for a special Windows Phone event you don’t want to miss - Windows Phone Accelerator!  In this 3-day developer retreat, we will have experts on hand to help you build, test, pitch, and deploy your app into the Windows Phone Marketplace.  You will have hands-on technical assistance, Marketplace subscriptions, and developer phones for testing.  It’s a great chance to get step-by-step advice from Microsoft and community experts, and all you have to do is bring your existing app or app idea that you are ready to build.  Seating is limited and registration is not guaranteed.  Get your spot today!
    Agenda:
    Tuesday, 9am-5pm: Kick-off; Open Lab; 1:1 Meetings
    Wednesday, 8:30am-6pm: Open Lab; 1:1 Meetings
    Thursday, 8:30am-1pm: Open Lab; 1:1 Meetings
    Thursday, 1pm-3pm: App pitches & Giveaways
    Register at this link

    Read the article

  • Oracle Solaris 11.1 Security Lab

    - by user12608073
    Recently I developed a set of lab exercises for an Oracle OpenWorld Hands-On Lab, entitled HOL10201, Reduce Risk with Oracle Solaris Access Control to Restrain Users and Isolate Applications. This explored the new Extended Policy for privilege assignments in Oracle Solaris 11.1.  Today, Oracle Solaris 11.1 has been officially released via the Package Repository. Today's release and branch are numbered 0.5.11-0.175.1.0.0.24.2, which means it is based on build 24b of 11.1, which is, in turn, based on build 175a of 11.0.  There is a good summary of new features available here: Oracle Solaris 11.1 - What's New. Pages 5 through 7 give an overview of some of the new security enhancements. There is much more information available in the newly published documentation for Oracle Solaris 11.1. I plan to explore some of these enhancements in a series of blog entries. Meanwhile, I've published a copy of the lab materials, which you can try out with this new release.

    Read the article

  • Partner Lab Program

    - by [email protected]
    On June 1 the new PARTNER LAB PROGRAM calendar was published. On the web site you will find big news: we have revised the agendas and contents of all the seminars and enriched our offering with many web seminars of about an hour and a half each. Don't miss the opportunity to stay up to date on all the products that make up Oracle's technology offering: visit the Partner Lab site now and sign up for the classroom seminars and web seminars that interest you.

    Read the article

  • UPK Basics Hands On Lab at Oracle Open World Latin America

    - by user581320
    Oracle OpenWorld Latin America 2012 will be in Sao Paulo, Brazil, December fourth through the sixth. There's so much to see and learn from at Oracle OpenWorld: keynotes, technical sessions, Oracle and partner demonstrations, hands-on labs, networking events, and more.  I will be presenting a hands-on lab at the show this year, Introduction to Oracle User Productivity Kit - Learn the Basics, in the afternoon on Tuesday, December 4th.  This nonstop one-hour lab covers topics from getting started with UPK to the basics of creating an outline and some typical content, concluding with publishing some of the many outputs UPK is capable of.  If you are planning on attending the show, come by the lab and see what UPK is all about.  I'll be in Sao Paulo all week to fulfill my need to extend California's summer by another week (trip bonus) and to meet and discuss all things UPK with our customers and partners.  If you're not registered for the show, there is still time. Check out the Oracle OpenWorld Latin America 2012 web site for all the details. I look forward to seeing you in Sao Paulo!
    Peter Maravelias
    Principal Product Strategy Manager, Oracle UPK

    Read the article

  • Oracle OpenWorld 2012 Hands-on Lab: “Leading Your Everyday Application Integration Projects with Enterprise SOA”

    - by Lionel Dubreuil
    Sharpen your Oracle skill sets and master Oracle technology in Oracle OpenWorld Hands-on Labs. In self-paced, practical learning sessions covering everything from business applications to middleware, database, storage, and enterprise management solutions, you'll discover new ways to derive maximum benefit from your Oracle hardware and software solutions. Oracle experts will be available in person to answer questions and guide you through each lab. Hands-on Labs fill up early, and seats are limited, so don’t be late. HOL10093 - Leading Your Everyday Application Integration Projects with Enterprise SOA is scheduled for:
    Date: Monday, Oct 1
    Time: 10:45 AM - 11:45 AM
    Location: Marriott Marquis - Salon 5/6
    In this hands-on lab, experience firsthand how Oracle Enterprise Repository, Oracle Application Integration Architecture (AIA) Foundation Pack, and Oracle SOA Suite work together to help you drive your enterprisewide integration projects. From asset management and discovery in Oracle Enterprise Repository to integration of content in Oracle AIA Foundation Pack operating on the Oracle SOA Suite platform, discover how you can develop integrations to support business agility. Take advantage of Oracle-delivered integration assets and validate your services for compliance within Oracle JDeveloper. You will get your hands on the tools and talk with Oracle experts in this hands-on lab. Objectives for this session are to:
    - Use Oracle Enterprise Repository to manage application interfaces, composite applications, and business processes
    - See how Oracle Enterprise Repository can benefit every service-based application integration project
    - Learn how to govern services through the software lifecycle and validate your services for compliance

    Read the article

  • Announcing the Mastering SharePoint 2013 Development lab

    - by Erwin van Hunen
    If you’re a seasoned SharePoint developer and you’d like to get up and running with all the new goodies that SharePoint 2013 is bringing, make sure you check out the Mastering SharePoint 2013 Development lab I’m giving at LabCenter in Stockholm, Sweden. 3 days of development heaven *and* you take away a brand new laptop, or an iPad, or some of the other perks you decide to go for. Check out: http://www.labcenter.se/Labs#lab=Mastering_Sharepoint_2013_Development
    The overview of the 3 days:
    Day 1
    Module 1: Comparing SharePoint 2013 to SharePoint 2010 - What’s new in SharePoint 2013
    Module 2: Installing your SharePoint 2013 development environment - How to successfully (and above all correctly) install SharePoint 2013
    Day 2
    Module 3: Apps, sandboxed or full trust? - What’s the difference between the deployment models; pros and cons; code or no-code solutions?
    Module 4: Search is the new black - Using the new out-of-the-box Search web parts; building a search-based solution
    Day 3
    Module 5: Workflows - Differences between SharePoint 2010 workflows and 2013 workflows; building a workflow using Visio and SharePoint Designer; building a workflow using Visual Studio
    Module 6: You’re the master of the design - The design manager; master pages; page layouts; CSS and HTML5

    Read the article

  • Adventures in Lab Management Configuration: CMMI Edition Part 1 of 3

    - by Enrique Lima
    I remember someone at one point telling me how close Migrate is to Migraine. This was a process in which an environment needed to be migrated from TFS 2008 to TFS 2010, process template included. Here we are talking about CMMI v4.2 to CMMI v5.0. Now, migrating the TFS infrastructure is one thing; migrating the process template is a different deal, not hard … just involved. I followed a combination of steps that came from a blog post as the main guidance and then MSDN (as suggested in the guidance post) to complement some tasks and steps. Again, the focus here is CMMI. The high-level steps taken to bring the migrated TFS 2008 CMMI v4.2 process template up to date on TFS 2010 were: 1) back up the collection, configuration, and warehouse databases; 2) download the process template using Visual Studio 2010; 3) export, modify, and import the Bug type definition; 4) export, modify, and import the Scenario or Requirement type definition; 5) create and import the bug field mappings. Now we can attempt to connect using Test Manager, and you should be able to get things going. After that was done, it was time to enroll VMs that already existed in the environment. This was a bit more challenging, but in the end it was a matter of analyzing the changes that had been made as a temporary workaround between the time we migrated and the time we converted the work items, and adding the fields that enable communication between the project and the Test and Lab Manager components. There are two more parts to this post: the second will describe the detailed steps taken to complete the process template update, and the third will talk about the gotchas and fixes for the Lab Management portion.
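    As a rough sketch of what steps 3 and 4 look like on the command line (the collection URL, project name, and file name here are hypothetical, not taken from the post), the TFS 2010 witadmin tool exports a work item type definition to XML so it can be edited and imported back:
        witadmin exportwitd /collection:http://tfsserver:8080/tfs/DefaultCollection /p:MyProject /n:Bug /f:Bug.xml
        (edit Bug.xml to apply the CMMI v5.0 changes, then re-import it)
        witadmin importwitd /collection:http://tfsserver:8080/tfs/DefaultCollection /p:MyProject /f:Bug.xml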

    Read the article

  • PC to USB transfer slow

    - by Vipin Ms
    I'm having trouble with transfers to USB sticks, not with an external hard disk. For example, transferring a 700 MB file starts at about 30 MB/s, then the rate falls to 0 towards the end and stays put for 3-4 minutes while the last bit finishes. I have tried different USB devices, but no luck. Is it a bug? Another important point: in Kubuntu there is no such issue, so is it something related to GNOME? I'm using Ubuntu 11.10 64-bit. Somebody please help; it's really annoying. Here are the details. On the PC all of my drives are ext4; on the USB devices I tried ext3, NTFS, and FAT32, all with the same problem. Here are my USB controller details:
    root@LAB:~# lspci|grep USB
    00:1a.0 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 03)
    00:1a.1 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 03)
    00:1a.2 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 03)
    00:1a.7 USB Controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 03)
    00:1d.0 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 03)
    00:1d.1 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 03)
    00:1d.2 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 03)
    00:1d.7 USB Controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 03)
    Here is an example of one transfer. I connected one of my 4 GB USB devices:
    Nov 24 12:01:25 LAB kernel: [ 1175.082175] userif-2: sent link up event.
    Nov 24 12:01:25 LAB kernel: [ 1695.684158] usb 2-2: new high speed USB device number 3 using ehci_hcd
    Nov 24 12:01:25 LAB mtp-probe: checking bus 2, device 3: "/sys/devices/pci0000:00/0000:00:1d.7/usb2/2-2"
    Nov 24 12:01:26 LAB mtp-probe: bus: 2, device: 3 was not an MTP device
    Nov 24 12:01:26 LAB kernel: [ 1696.132680] usbcore: registered new interface driver uas
    Nov 24 12:01:26 LAB kernel: [ 1696.142528] Initializing USB Mass Storage driver...
    Nov 24 12:01:26 LAB kernel: [ 1696.142919] scsi4 : usb-storage 2-2:1.0
    Nov 24 12:01:26 LAB kernel: [ 1696.143146] usbcore: registered new interface driver usb-storage
    Nov 24 12:01:26 LAB kernel: [ 1696.143150] USB Mass Storage support registered.
    Nov 24 12:01:27 LAB kernel: [ 1697.141657] scsi 4:0:0:0: Direct-Access SanDisk U3 Cruzer Micro 8.02 PQ: 0 ANSI: 0 CCS
    Nov 24 12:01:27 LAB kernel: [ 1697.168827] sd 4:0:0:0: Attached scsi generic sg2 type 0
    Nov 24 12:01:27 LAB kernel: [ 1697.169262] sd 4:0:0:0: [sdb] 7856127 512-byte logical blocks: (4.02 GB/3.74 GiB)
    Nov 24 12:01:27 LAB kernel: [ 1697.169762] sd 4:0:0:0: [sdb] Write Protect is off
    Nov 24 12:01:27 LAB kernel: [ 1697.169767] sd 4:0:0:0: [sdb] Mode Sense: 45 00 00 08
    Nov 24 12:01:27 LAB kernel: [ 1697.171386] sd 4:0:0:0: [sdb] No Caching mode page present
    Nov 24 12:01:27 LAB kernel: [ 1697.171391] sd 4:0:0:0: [sdb] Assuming drive cache: write through
    Nov 24 12:01:27 LAB kernel: [ 1697.173503] sd 4:0:0:0: [sdb] No Caching mode page present
    Nov 24 12:01:27 LAB kernel: [ 1697.173510] sd 4:0:0:0: [sdb] Assuming drive cache: write through
    Nov 24 12:01:27 LAB kernel: [ 1697.175337] sdb: sdb1
    After that I initiated one transfer.
    lsof -p 3575|tail -2
    mv 3575 root 3r REG 8,8 1719599104 4325379 /media/Misc/The Tree of Life (2011) DVDRip XviD-MAXSPEED/The Tree of Life (2011) DVDRip XviD-MAXSPEED www.torentz.3xforum.ro.avi
    mv 3575 root 4w REG 8,17 1046347776 15 /media/SREE/The Tree of Life (2011) DVDRip XviD-MAXSPEED/The Tree of Life (2011) DVDRip XviD-MAXSPEED www.torentz.3xforum.ro.avi
    Here is the total time spent on that transfer:
    root@LAB:/media/SREE# time mv /media/Misc/The\ Tree\ of\ Life\ \(2011\)\ DVDRip\ XviD-MAXSPEED/ /media/SREE/
    real 11m49.334s
    user 0m0.008s
    sys 0m5.260s
    root@LAB:/media/SREE# df -T|tail -2
    /dev/sdb1 vfat 3918344 1679308 2239036 43% /media/SREE
    /dev/sda8 ext4 110110576 60096904 50013672 55% /media/Misc
    Do you think this is normal? Approximately 12 minutes for a 1.6 GB transfer? Thanks.
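    A quick diagnostic worth trying (this suggestion is mine, not part of the original question): the initial 30 MB/s reading is typically the kernel accepting the copy into the page cache in RAM, and the long stall at the end is that cache being flushed to the much slower flash device. Timing a write that forces the data onto the device shows the stick's real sustained rate; the mount point and test size below are placeholders:
        # write 512 MB and flush it to the device before dd reports the rate
        dd if=/dev/zero of=/media/SREE/test.img bs=1M count=512 conv=fdatasync
        rm /media/SREE/test.img
    With conv=fdatasync, dd does not print its throughput figure until the data has physically reached the device, so that number, rather than the optimistic rate shown at the start of a copy, is the one to compare against the 11m49s above.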

    Read the article

  • Mirroring Ubuntu on several systems in a computer lab

    - by Harvey Steck
    I am working in a new refugee school where the only Internet service available is a slow satellite connection. We are about to set up a computer lab (we already have desktop systems, and I am about to install Ubuntu on them). I'm a newbie when it comes to Linux, but it seems a better alternative than pirated copies of Windows. I'd like to set up one Ubuntu system and then mirror that system on perhaps ten to twenty other systems, all of which would be on an Ethernet network. I expect to have an Internet connection on the one system that I set up, but there may not be enough bandwidth to go through all the same steps on the other systems. Can I set up the other ten or twenty computers to get all of their updates, upgrades, and configuration from one master system? Can I also set things up so that students cannot change the configuration, install new programs, etc.? Appreciate any help you can give. -- Harvey
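    A sketch of one common approach (apt-cacher-ng is my suggestion, not something named in the question): run a caching package proxy on the Internet-connected master so each package crosses the satellite link only once, and point every client at that cache. The address 192.168.0.1 below is a placeholder for the master's LAN IP:
        # on the master system
        sudo apt-get install apt-cacher-ng
        # on each client: route APT through the master's cache (3142 is the apt-cacher-ng default port)
        echo 'Acquire::http::Proxy "http://192.168.0.1:3142";' | sudo tee /etc/apt/apt.conf.d/01proxy
    For the configuration lockdown, keeping student accounts out of the admin group already prevents package installation and most system-wide changes.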

    Read the article

  • Computer Lab School for Orphans

    - by Brendon
    I am helping an NGO called Orphans Found Fund, here in Arusha, Tanzania, set up a computer lab to teach students about Ubuntu and open source applications. I have installed Ubuntu 10.10 on all the systems. What I'm wondering is how to tweak the systems so that the kids cannot: delete or alter system files; alter the system settings; add or remove applications; or exceed a time limit (like an Internet café). Also, as the administrator I would like to monitor usage from another system to make sure that no abuse of the network is taking place. Any advice is much appreciated. Brendon
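    A sketch of one way to start on the lockdown for Ubuntu 10.10, whose GNOME 2 desktop stores settings in GConf (the key below is only an example; which lockdown keys you need depends on what you want to block). Values written to the mandatory GConf source cannot be overridden by users, and keeping student accounts out of the admin group prevents adding or removing applications:
        # make a GNOME lockdown key mandatory for all users
        sudo gconftool-2 --direct --config-source xml:readwrite:/etc/gconf/gconf.mandatory \
          --type bool --set /desktop/gnome/lockdown/disable_command_line true
    Time limits can be enforced at login with pam_time (/etc/security/time.conf), though a purpose-built Internet-café package may be easier to manage for per-session limits.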

    Read the article

  • Parallel Computing Platform Developer Lab

    - by Daniel Moth
    This is an exciting announcement that I must share: "Microsoft Developer & Platform Evangelism, in collaboration with the Microsoft Parallel Computing Platform product team, is hosting a developer lab at the Platform Adoption Center on April 12-15, 2010. This event is for Microsoft Partners and Customers seeking to incorporate either .NET Framework 4 or Visual C++ 2010 parallelism features into their new or existing applications, and to gain expertise with new Visual Studio 2010 tools, including the Parallel Tasks and Parallel Stacks debugger tool windows and the Concurrency Visualizer in the profiler. Opportunities for attendees include: Gain expert design assistance with your Parallel Computing Platform-based solution. Develop a solution prototype in collaboration with Microsoft Software Engineers. Attend topical presentations and "chalk-talk" sessions. Your team will be assigned private, secure offices for confidential collaboration activities. The event has limited capacity, thus enrollment is based on an application process. Please download and complete the application form, then return it to the event management team per the instructions included within the form. Applications will be evaluated based upon the technical solution scenario along with the indicated project readiness timelines. Microsoft event management team members may contact you directly for additional clarification and discussion of your project scenario during the nomination process." Comments about this post are welcome at the original blog.

    Read the article

1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >