Search Results

Search found 22845 results on 914 pages for 'software deployment'.

  • What software should I use to work with Ruby?

    - by paranoid_pedlar
    I've only ever learned to "program" with Notepad, back when I was learning to create batch files in Windows. I would appreciate it if you could explain the benefits of working with an IDE over a text editor, or vice versa. Please don't give emotional responses or argue against someone else's suggestion/advice. I use Linux primarily. Also, I would like to be able to make programs that can be used in Windows but programmed in Linux. Thanks.

    Read the article

  • What is the perfect skill set for a software engineer? [closed]

    - by Sergey
    Of course, excluding the technology stack. I'm asking about more fundamental skills, such as design patterns or math.

    POSSIBLE DUPLICATES:
    http://stackoverflow.com/questions/76364/what-is-the-single-most-effective-thing-you-did-to-improve-your-programming-skill
    http://stackoverflow.com/questions/132798/what-should-every-programmer-know
    http://stackoverflow.com/questions/1177724/what-soft-skills-make-a-great-programmer

    Read the article

  • Why is UDP + a software reliable ordering system faster than TCP?

    - by Ricket
    Some games today use a network system that transmits messages over UDP, and ensures that the messages are reliable and ordered. For example, RakNet is a popular game network engine. It uses only UDP for its connections, and has a whole system to ensure that packets can be reliable and ordered if you so choose. My basic question is, what's up with that? Isn't TCP the same thing as ordered, reliable UDP? What makes it so much slower that people have to basically reinvent the wheel?
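
    For a rough picture of what such a layer does, here is a minimal sketch in Python. This is not RakNet's actual wire format or API; it assumes a hypothetical cooperating receiver that echoes the sequence number back as an ACK, purely for illustration:

        # Minimal sketch of "reliable UDP": prepend a sequence number to each
        # datagram and retransmit until the receiver acknowledges that number.
        import socket
        import struct

        HEADER = struct.Struct("!I")  # 4-byte big-endian sequence number

        def send_reliable(sock, addr, seq, payload, timeout=0.2, retries=5):
            """Send one datagram, then wait for an ACK carrying the same seq."""
            packet = HEADER.pack(seq) + payload
            sock.settimeout(timeout)
            for _ in range(retries):
                sock.sendto(packet, addr)
                try:
                    ack, _ = sock.recvfrom(HEADER.size)
                    if HEADER.unpack(ack)[0] == seq:
                        return True   # delivered and acknowledged
                except socket.timeout:
                    pass              # datagram or ACK lost: retransmit
            return False

    The usual answer to the question itself: TCP imposes a single, totally ordered byte stream, so one lost packet stalls everything queued behind it (head-of-line blocking). A scheme like the sketch above can be applied per message or per channel, so time-critical traffic that doesn't need reliability is never blocked.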

    Read the article

  • Installing WebDeploy Service

    - by TheBlueSky
    I'm struggling with installing the Web Deploy service on my development PC (Windows 7 Professional with IIS 7.5). Can someone give me the steps required to do that? Update: I installed the Web Deployment Tool from WPI and verified that the web deployment service is running (I believe it's the "Web Deployment Agent Service"); however, when I open IIS Manager I can't see the Management Service, which, as I understand it, is where web deployment is configured. If I'm missing something, maybe someone can tell me what it is and what to do next. Thanks in advance, TheBlueSky

    Read the article

  • R software: How to extract values from a RasterStack with x,y coordinates?

    - by Eddie
    I have a RasterStack (5 raster layers) that is effectively a time-series raster:

        r <- raster(nrow=20, ncol=200)
        s <- stack( sapply(1:5, function(i) setValues(r, rnorm(ncell(r), i, 3) )) )

        > s
        class       : RasterStack
        dimensions  : 20, 200, 4000, 5  (nrow, ncol, ncell, nlayers)
        resolution  : 1.8, 9  (x, y)
        extent      : -180, 180, -90, 90  (xmin, xmax, ymin, ymax)
        coord. ref. : +proj=longlat +datum=WGS84 +ellps=WGS84 +towgs84=0,0,0
        names       : layer.1, layer.2, layer.3, layer.4, layer.5
        min values  : -9.012146, -9.165947, -9.707269, -7.829763, -5.332007
        max values  : 11.32811, 11.97328, 15.99459, 15.66769, 16.72236

    My objective is to plot each pixel and explore its behavior over time. How can I extract each pixel, together with its x,y coordinates, and plot a time-series curve?
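
    The question is about R's raster package, but the underlying operation is just indexing a layers-by-rows-by-columns array. A language-agnostic sketch in Python/numpy (hypothetical stand-in data, not the raster API) of pulling one pixel's values across layers and recovering its map coordinates from the extent and resolution:

        import numpy as np

        # Stand-in for the 5-layer stack above: shape (layers, rows, cols)
        stack = np.random.normal(size=(5, 20, 200))

        row, col = 10, 50              # pixel of interest (array indices)
        series = stack[:, row, col]    # that pixel's value in every layer

        # Map coordinates of the cell centre, from the extent and resolution
        xmin, ymax, xres, yres = -180.0, 90.0, 1.8, 9.0
        x = xmin + (col + 0.5) * xres
        y = ymax - (row + 0.5) * yres
        print(x, y, series)            # ready to plot as a time-series curve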

    Read the article

  • What books should be read to become an excellent software architect?

    - by dev-cu
    I know there are good books to start with, but let's make a short list of 3-5 books; I don't think a single book can carry you all the way to becoming a good architect. I read some time ago that it takes an architect years to acquire the knowledge and the ability to make good decisions. Suppose someone has the potential: what is the path? Please answer with books that don't all repeat the same theme, so the learning process covers a wide range. One short list per answer. Vote up the lists you think are best. Thanks.

    Read the article

  • What software or service can I use to programmatically make phone calls?

    - by Jason
    I'm looking to programmatically make phone call reminders to customers based upon their opt-in requests. I am NOT a telemarketer. I need to make a phone call and play a message. I need to leave a message after the beep if an answering machine or voicemail is detected. I need to know if the message was successfully delivered. Ideally, I could gather user feedback by having them press a button and recording their selection. I prefer Windows and .NET but would consider anything. What do you suggest?
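
    The question doesn't name a product, but hosted telephony APIs are a common fit for this kind of requirement. Purely as a hedged illustration, here is a sketch assuming the twilio Python client; the credentials, numbers and TwiML URL are placeholders, and equivalent .NET SDKs exist:

        # Hypothetical sketch: place an outbound call whose behaviour (message
        # playback, keypress capture) is defined by the TwiML document at `url`.
        from twilio.rest import Client

        client = Client("ACCOUNT_SID", "AUTH_TOKEN")       # placeholder credentials

        call = client.calls.create(
            to="+15558675309",                             # opted-in customer
            from_="+15551234567",                          # your provisioned number
            url="https://example.com/reminder-script.xml"  # message/IVR definition
        )
        print(call.sid)  # keep this id to query delivery status later

    Answering-machine detection and delivery reporting are service-side features rather than client code, so check the provider's documentation for those specifics.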

    Read the article

  • Changing Endpoint URL for a Web Service Data Control

    - by vishal.s.jain(at)oracle.com
    When you move your application from Development to Production, there is, more often than not, a need to change the web service endpoint URL in your ADF application. If you are using a Web Service Data Control (WSDC), you can do this in more than one way. The following example illustrates the options.

    At Design Time
    If the application workspace is under your control, you can quickly change the endpoint by updating the definition in the DataControl.dcx file. Along with this, you will also need to change the endpoint in connections.xml: invoke the Edit Connections dialog and change the endpoint URL there.

    At Deployment
    Another option is to change the endpoint at the EAR level, during deployment. When you select Deploy -> Application Server at the application level, a Deployment Configuration dialog comes up in which you can edit the WSDL URL. Change the Port URL there as well.

    At Post Deployment
    If you need to change the endpoint after deployment, you can do it through Oracle Enterprise Manager. For this, your application needs to be configured with a writable MDS repository; it is recommended that you use a database MDS store during deployment. So have your application configured (by having an entry in adf-config.xml) and the server configured (by having an MDS store registered). Once that is done, you can configure the ADF connection in EM for the application: change the WSDL location via 'Edit', change the Port using Advanced Connection Configuration, and change the Endpoint Address there. Apply the changes and you are done!

    Read the article

  • Get Started using Build-Deploy-Test Workflow with TFS 2012

    - by Jakob Ehn
    TFS 2012 introduces a new type of Lab environment called a Standard Environment. This allows you to set up a full Build Deploy Test (BDT) workflow that will build your application, deploy it to your target machine(s) and then run a set of tests on that server to verify the deployment. In TFS 2010, you had to use System Center Virtual Machine Manager and involve half of your IT department to get going. Now all you need is a server (virtual or physical) where you want to deploy and test your application. You don't even have to install a test agent on the machine; TFS 2012 will do this for you!

    Although each step is rather simple, the entire setup consists of quite a few of them, so I thought it could be useful to run through a typical one. I will also link to some good guidance from MSDN on each topic.

    High Level Steps
    1. Install and configure Visual Studio 2012 Test Controller on the target server
    2. Create a Standard Environment
    3. Create a Test Plan with a Test Case
    4. Run the Test Case
    5. Create a Coded UI Test from the Test Case
    6. Associate the Coded UI Test with the Test Case
    7. Create a Build Definition using LabDefaultTemplate

    1. Install and Configure Visual Studio 2012 Test Controller on Target Server
    First of all, note that you do not have to have the Test Controller running on the target server. It can run on another server, as long as the Test Agent can communicate with the test controller and the test controller can communicate with the TFS server. If you have several machines in your environment (web server, database server, etc.), the test controller can be installed either on one of those machines or on a dedicated machine.

    To install the test controller, simply mount the Visual Studio Agents media on the server and browse to the vstf_controller.exe file located in the TestController folder. Run through the installation; you might need to reboot the server since it installs .NET 4.5. When the test controller is installed, the Test Controller configuration tool will launch automatically (if it doesn't, you can start it from the Start menu). Here you supply the credentials of the account running the test controller service. Note that this account will be given the necessary permissions in TFS during the configuration. Make sure that you have entered a valid account by pressing the Test link. Also, you have to register the test controller with the TFS collection where your test plan is located (and usually the code base, of course).

    When you press Apply Settings, all the configuration will be done. You might get some warnings at the end that might or might not cause a problem later; be sure to read them carefully. For more information about configuring your test controllers, see Setting Up Test Controllers and Test Agents to Manage Tests with Visual Studio.

    2. Create Standard Environment
    Now you need to create a Lab environment in Microsoft Test Manager. Since we are using an existing physical or virtual machine, we will create a Standard Environment.

    Open MTM and go to Lab Center. Click New to create a new environment and enter a name for it. Since this environment will only contain one machine, we will use the machine name for the environment (TargetServer in this case). On the next page, click Add to add a machine to the environment. Enter the name of the machine (TargetServer.Domain.Com) and give it the Web Server role. The name must be reachable both from your machine during configuration and from the TFS app tier server. You also need to supply an account that is a local administrator on the target server. This is needed in order to automatically install a test agent on the machine later.

    On the next page, you can add tags to the machine. This is not needed in this scenario, so go to the next page. Here you specify which test controller to use and that you want to run UI tests on this environment. This will result in a Test Agent being automatically installed and configured on the target server. The name of the machine where you installed the test controller should be available in the drop-down list (TargetServer in this sample). If you can't see it, you might have selected a different TFS project collection.

    Press Next twice and then Verify to verify all the settings, then press Finish. This will create and prepare the environment, which means that it will remote install a test agent on the machine. As part of this installation, the remote server will be restarted.

    3-5. Create Test Plan, Run Test Case, Create Coded UI Test
    I will not cover steps 3-5 here; there is plenty of information on how to create test plans and test cases and automate them using Coded UI Tests. In this example I have a test plan called My Application, and it contains, among other things, a test suite called Automated Tests where I plan to put test cases that should be automated and executed as part of the BDT workflow. For more information about Coded UI Tests, see Verifying Code by Using Coded User Interface Tests.

    6. Associate Coded UI Test with Test Case
    OK, so now we want to automate our Coded UI Test and have it run as part of the BDT workflow. You might think that your coded UI test is already automated, but the meaning of the term here is that you link your coded UI test to an existing Test Case, thereby making the Test Case automated. And the test case should be part of the test suite that we will run during the BDT.

    Open the solution that contains the coded UI test method. Open the Test Case work item that you want to automate. Go to the Associated Automation tab and click on the "…" button. Select the coded UI test that corresponds to the test case, press OK and save the test case. For more information about associating an automated test with a test case, see How to: Associate an Automated Test with a Test Case.

    7. Create Build Definition using LabDefaultTemplate
    Now we are ready to create a build definition that will implement the full BDT workflow. For this purpose we will use the LabDefaultTemplate.11.xaml that comes out of the box in TFS 2012. This build process template lets you take the output of another build and deploy it to each target machine. Since the deployment process will be running on the target server, you will have fewer problems with permissions and firewalls than if you were to remote deploy your solution. So, before creating a BDT workflow build definition, make sure that you have an existing build definition that produces a release build of your application.

    Go to the Builds hub in Team Explorer and select New Build Definition. Give the build definition a meaningful name; here I called it MyApplication.Deploy. Set the trigger to Manual. Define a workspace for the build definition. Note that a BDT build doesn't really need a workspace, since all it does is launch another build definition and deploy the output of that build, but TFS doesn't allow you to save a build definition without adding at least one mapping.

    On Build Defaults, select the build controller. Since this build won't actually produce any output, you can select the "This build does not copy output files to a drop folder" option. On the Process tab, select LabDefaultTemplate.11.xaml. This is usually located at $/TeamProject/BuildProcessTemplates/LabDefaultTemplate.11.xaml. To configure it, press the "…" button on the Lab Process Settings property.

    First, select the environment that you created before. Then select which build you want to deploy and test. The "Select an existing build" option is very useful when developing the BDT workflow, because you do not have to run through the target build every time; instead it basically just runs through the deployment and test steps, which speeds up the process. Here I have selected to queue a new build of the MyApplication.Test build definition.

    On the Deploy tab, you need to specify how the application should be installed on the target server. You can supply a list of deployment scripts with arguments that will be executed on the target server. In this example I execute the generated web deploy command file to deploy the solution. If, for example, you have databases, you can use sqlpackage.exe to deploy them. If you are producing MSI installers in your build, you can run them using msiexec.exe, and so on. A good practice is to create a batch file that contains the entire deployment and that you can run both locally and on the target server; then you would just execute that one deployment batch file here in a single step.

    The workflow defines some variables that are useful when running the deployments:
    - $(BuildLocation) - the full path to where your build files are located
    - $(InternalComputerName_<VM Name>) - the computer name of a virtual machine in an SCVMM environment
    - $(ComputerName_<VM Name>) - the fully qualified domain name of the virtual machine

    As you can see, I specify the path to the myapplication.deploy.cmd file using the $(BuildLocation) variable, which is the drop folder of the MyApplication.Test build. Note: the test agent account must have read permission on this drop location. You can find more information here on Building your Deployment Scripts.

    On the last tab, we specify which tests to run after deployment. Here I select the test plan and the Automated Tests test suite that we saw before. Note that I also selected the automated test settings (called TargetServer in this case) that I have defined for my test plan; there I define what data should be collected as part of the test run. For more information about test settings, see Specifying Test Settings for Microsoft Test Manager Tests.

    We are done! Queue your BDT build and wait for it to finish. If the build succeeds, your build summary should look something like this:

    Read the article

  • Windows Azure Horror Story: The Phantom App that Ate my Data

    - by jasont
    Originally posted on: http://geekswithblogs.net/jasont/archive/2013/10/31/windows-azure-horror-story-the-phantom-app-that-ate-my.aspx

    I've posted in the past about how my company, Stackify, makes some great tools that enhance the Windows Azure experience. By just using the Azure Management Portal, you don't get a lot of insight into what is happening inside your instances, which are just Windows VMs. This morning, I noticed something quite scary, and fittingly enough, it's Halloween.

    I deleted a deployment after doing a new deployment and VIP swap. But, after a few minutes, the instance still appeared in my Stackify dashboard. I thought this odd, as we monitor these operations and remove devices that have been deleted. Digging in showed that it wasn't a problem with Stackify: this instance was still there, and still sending data. You'll notice, though, that the "Status" check shows that the instance no longer exists.

    And it gets scarier. I quickly checked the running services, and sure enough, the Windows Azure Guest Agent (which runs worker roles) was still running. "Surely, this can't be," I'm thinking at this point. Knowing that this worker role is in our QA environment, it has debug logging turned on. And, using our log file tailing feature, I can see that, yes, indeed, this deleted deployment is still executing code. Pulling messages off queues. Sending alerts. Changing data.

    Fortunately, our tools give me the ability to manually disable the Windows service that runs this code. In the meantime, I have contacted everyone I know (and support) at Microsoft to address this issue. Needless to say, this is concerning. When you delete a deployment, it needs to actually delete, not continue to eat your data. As soon as I learn more, I'll be posting an update.

    Read the article

  • Update packages on a very old Ubuntu

    - by meewoK
    I want to add MySQLi support to a machine running:

        Server Version: Apache/2.2.4 (Ubuntu) PHP/5.2.3-1ubuntu6.3

    I would rather not update more things than I need to. I run the following:

        sudo apt-get install php5-mysql

    However, as the Ubuntu version is old, I get the following:

        WARNING: The following packages cannot be authenticated!
          php5-cli php5-mysql php5-mhash php5-xsl php5-pspell php5-snmp php5-curl
          php5-xmlrpc php5-sqlite php5-gd libapache2-mod-php5 php5-common
        Install these packages without verification [y/N]? Y
        Err http://gr.archive.ubuntu.com gutsy-updates/main php5-cli 5.2.3-1ubuntu6.4 404 Not Found
        Err http://security.ubuntu.com gutsy-security/main php5-cli 5.2.3-1ubuntu6.4 404 Not Found
        Err http://security.ubuntu.com gutsy-security/main php5-mysql 5.2.3-1ubuntu6.4 404 Not Found
        Err http://security.ubuntu.com gutsy-security/main php5-mhash 5.2.3-1ubuntu6.4 404 Not Found
        Err http://security.ubuntu.com gutsy-security/main php5-xsl 5.2.3-1ubuntu6.4 404 Not Found
        Err http://security.ubuntu.com gutsy-security/main php5-pspell 5.2.3-1ubuntu6.4 404 Not Found
        Err http://security.ubuntu.com gutsy-security/main php5-snmp 5.2.3-1ubuntu6.4 404 Not Found
        Err http://security.ubuntu.com gutsy-security/main php5-curl 5.2.3-1ubuntu6.4 404 Not Found
        Err http://security.ubuntu.com gutsy-security/main php5-xmlrpc 5.2.3-1ubuntu6.4 404 Not Found
        Err http://security.ubuntu.com gutsy-security/main php5-sqlite 5.2.3-1ubuntu6.4 404 Not Found
        Err http://security.ubuntu.com gutsy-security/main php5-gd 5.2.3-1ubuntu6.4 404 Not Found
        Err http://security.ubuntu.com gutsy-security/main libapache2-mod-php5 5.2.3-1ubuntu6.4 404 Not Found
        Err http://security.ubuntu.com gutsy-security/main php5-common 5.2.3-1ubuntu6.4 404 Not Found
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/p/php5/php5-cli_5.2.3-1ubuntu6.4_i386.deb 404 Not Found
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/p/php5/php5-mysql_5.2.3-1ubuntu6.4_i386.deb 404 Not Found
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/p/php5/php5-mhash_5.2.3-1ubuntu6.4_i386.deb 404 Not Found
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/p/php5/php5-xsl_5.2.3-1ubuntu6.4_i386.deb 404 Not Found
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/p/php5/php5-pspell_5.2.3-1ubuntu6.4_i386.deb 404 Not Found
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/p/php5/php5-snmp_5.2.3-1ubuntu6.4_i386.deb 404 Not Found
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/p/php5/php5-curl_5.2.3-1ubuntu6.4_i386.deb 404 Not Found
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/p/php5/php5-xmlrpc_5.2.3-1ubuntu6.4_i386.deb 404 Not Found
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/p/php5/php5-sqlite_5.2.3-1ubuntu6.4_i386.deb 404 Not Found
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/p/php5/php5-gd_5.2.3-1ubuntu6.4_i386.deb 404 Not Found
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/p/php5/libapache2-mod-php5_5.2.3-1ubuntu6.4_i386.deb 404 Not Found
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/p/php5/php5-common_5.2.3-1ubuntu6.4_i386.deb 404 Not Found
        E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

    Questions:
    - Can I add the MySQLi feature using another method instead of sudo apt-get?
    - Even if successful, can this break something on the system?

    Update: I have tried to add additional sources using the instructions from http://superuser.com/questions/339537/where-can-i-get-therepositories-for-old-ubuntu-versions and now have the following in /etc/apt/sources.list:

        # deb cdrom:[Ubuntu-Server 7.10 _Gutsy Gibbon_ - Release i386 (20071016)]/ gutsy main restricted
        #deb cdrom:[Ubuntu-Server 7.10 _Gutsy Gibbon_ - Release i386 (20071016)]/ gutsy main restricted

        # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
        # newer versions of the distribution.
        deb http://gr.archive.ubuntu.com/ubuntu/ gutsy main restricted universe multiverse
        deb http://gr.archive.ubuntu.com/ubuntu/ gutsy-backports main restricted universe multiverse
        deb-src http://gr.archive.ubuntu.com/ubuntu/ gutsy main restricted

        ## Major bug fix updates produced after the final release of the
        ## distribution.
        deb http://gr.archive.ubuntu.com/ubuntu/ gutsy-updates main restricted
        deb-src http://gr.archive.ubuntu.com/ubuntu/ gutsy-updates main restricted

        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team, and may not be under a free licence. Please satisfy yourself as to
        ## your rights to use the software. Also, please note that software in
        ## universe WILL NOT receive any review or updates from the Ubuntu security
        ## team.
        deb http://gr.archive.ubuntu.com/ubuntu/ gutsy-updates universe
        deb-src http://gr.archive.ubuntu.com/ubuntu/ gutsy-updates universe

        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team, and may not be under a free licence. Please satisfy yourself as to
        ## your rights to use the software. Also, please note that software in
        ## multiverse WILL NOT receive any review or updates from the Ubuntu
        ## security team.
        #deb http://gr.archive.ubuntu.com/ubuntu/ gutsy multiverse
        deb-src http://gr.archive.ubuntu.com/ubuntu/ gutsy multiverse
        deb http://gr.archive.ubuntu.com/ubuntu/ gutsy-updates multiverse
        deb-src http://gr.archive.ubuntu.com/ubuntu/ gutsy-updates multiverse

        ## Uncomment the following two lines to add software from the 'backports'
        ## repository.
        ## N.B. software from this repository may not have been tested as
        ## extensively as that contained in the main release, although it includes
        ## newer versions of some applications which may provide useful features.
        ## Also, please note that software in backports WILL NOT receive any review
        ## or updates from the Ubuntu security team.
        # deb http://gr.archive.ubuntu.com/ubuntu/ gutsy-backports main restricted universe multiverse
        # deb-src http://gr.archive.ubuntu.com/ubuntu/ gutsy-backports main restricted universe multiverse

        ## Uncomment the following two lines to add software from Canonical's
        ## 'partner' repository. This software is not part of Ubuntu, but is
        ## offered by Canonical and the respective vendors as a service to Ubuntu
        ## users.
        # deb http://archive.canonical.com/ubuntu gutsy partner
        # deb-src http://archive.canonical.com/ubuntu gutsy partner

        deb http://security.ubuntu.com/ubuntu gutsy-security main restricted
        deb-src http://security.ubuntu.com/ubuntu gutsy-security main restricted
        deb http://security.ubuntu.com/ubuntu gutsy-security universe
        deb-src http://security.ubuntu.com/ubuntu gutsy-security universe
        deb http://security.ubuntu.com/ubuntu gutsy-security multiverse
        deb-src http://security.ubuntu.com/ubuntu gutsy-security multiverse

        # Required
        deb http://old-releases.ubuntu.com/ubuntu/ gutsy main restricted universe multiverse
        deb http://old-releases.ubuntu.com/ubuntu/ gutsy-updates main restricted universe multiverse
        deb http://old-releases.ubuntu.com/ubuntu/ gutsy-security main restricted universe multiverse

    Read the article

  • Professional Scrum Developer (.NET) Training in London

    - by Martin Hinshelwood
    On the 26th - 30th July, in Microsoft's offices in London, Adam Cogan from SSW will be presenting the first Professional Scrum Developer course in the UK. I will be teaching this course alongside Adam, and it is a fantastic experience. You are split into teams and go head-to-head to deliver units of potentially shippable work in four two-hour sprints.

    The Professional Scrum Developer course is the only course endorsed by both Microsoft and Ken Schwaber, and they have worked together very effectively in bringing this course to fruition. This course is the brainchild of Richard Hundhausen, a Microsoft Regional Director, and both Adam and I attended the Trainer Prep in Sydney when he was there earlier this year. He is a fantastic trainer, and no matter where you do this course you can be safe in the knowledge that he has trained and vetted all of the teachers. A tools version of Ken, if you will.

    Find a course and register
    Download this syllabus
    Download the Scrum Guide

    What is the Professional Scrum Developer course all about?
    The Professional Scrum Developer course is a unique and intensive five-day experience for software developers. The course guides teams on how to turn product requirements into potentially shippable increments of software using the Scrum framework, Visual Studio 2010, and modern software engineering practices. Attendees will work in self-organizing, self-managing teams using a common instance of Team Foundation Server 2010.

    Who should attend this course?
    This course is suitable for any member of a software development team - architect, programmer, database developer, tester, etc. Entire teams are encouraged to attend and experience the course together, but individuals are welcome too. Attendees will self-organize to form cross-functional Scrum teams. These teams require an aggregate of skills specific to the selected case study; please see the last page of this document for specific details. Product Owners, ScrumMasters, and other stakeholders are welcome too, but keep in mind that everyone who attends will be expected to commit to work and pull their weight on a Scrum team.

    What should you know by the end of the course?
    Scrum will be experienced through a combination of lecture, demonstration, discussion, and hands-on exercises. Attendees will learn how to do Scrum correctly while being coached and critiqued by the instructor, in the following topic areas:
    - Form effective teams
    - Explore and understand legacy "Brownfield" architecture
    - Define quality attributes, acceptance criteria, and "done"
    - Create automated builds
    - How to handle software hotfixes
    - Verify that bugs are identified and eliminated
    - Plan releases and sprints
    - Estimate product backlog items
    - Create and manage a sprint backlog
    - Hold an effective sprint review
    - Improve your process by using retrospectives
    - Use emergent architecture to avoid technical debt
    - Use Test Driven Development as a design tool
    - Set up and leverage continuous integration
    - Use Test Impact Analysis to decrease testing times
    - Manage SQL Server development in an Agile way
    - Use .NET and T-SQL refactoring effectively
    - Build, deploy, and test SQL Server databases
    - Create and manage test plans and cases
    - Create, run, record, and play back manual tests
    - Set up a branching strategy and branch code
    - Write more maintainable code
    - Identify and eliminate people and process dysfunctions
    - Inspect and improve your team's software development process

    What does the week look like?
    This course is a mix of lecture, demonstration, group discussion, simulation, and hands-on software development. The bulk of the course will be spent working as a team on a case study application, delivering increments of new functionality in mini-sprints. Here is the week at a glance: Monday morning and most of the day Friday will be spent with the computers powered off, so you can focus on sharpening your game of Scrum and avoiding the common pitfalls when implementing it.

    The Sprints
    Timeboxing is a critical concept in Scrum as well as in this course. We expect each team and student to understand and obey all of the timeboxes. The timebox duration will always be clearly displayed during each activity. Expect the instructor to enforce it. Each of the half-day sprints will roughly follow this schedule:
    - Instruction (presentation and demonstration of new and relevant tools & practices) - 60 minutes
    - Sprint planning meeting (product owner presents the backlog; each team commits to delivering functionality) - 10 minutes
    - Sprint planning meeting (each team determines how to build the functionality) - 10 minutes
    - The Sprint (the team self-organizes and self-manages to complete their tasks) - 120 minutes
    - Sprint Review meeting (each team presents their increment of functionality to the other teams) - 30 minutes
    - Sprint Retrospective (a group retrospective meeting is held to inspect and adapt) - 10 minutes

    Each team is expected to self-organize and manage their own work during the sprint. Pairing is highly encouraged. The instructor/product owner will be available if there are questions or impediments, but will be hands-off by default. You should be prepared to communicate and work with your team members in order to achieve your sprint goal. If you have development-related questions or get stuck, your partner or team should be your first level of support.

    Module 1: Introduction
    This module provides a chance for the attendees to get to know the instructors as well as each other. The Professional Scrum Developer program, as well as the day-by-day agenda, will be explained. Finally, the Scrum team will be selected and assembled so that the forming, storming, norming, and performing can begin.
    - Trainer and student introductions
    - Professional Scrum Developer program
    - Agenda
    - Logistics
    - Team formation
    - Retrospective

    Module 2: Scrumdamentals
    This module provides a level-setting understanding of the Scrum framework, including the roles, timeboxes, and artifacts. The team will then experience Scrum firsthand by simulating a multi-day sprint of product development, including planning, review, and retrospective meetings.
    - Scrum overview
    - Scrum roles
    - Scrum timeboxes (ceremonies)
    - Scrum artifacts
    - Simulation
    - Retrospective
    It is required that you read Ken Schwaber's Scrum Guide in preparation for this module and course.

    Module 3: Implementing Scrum in Visual Studio 2010
    This module demonstrates how to implement Scrum in Visual Studio 2010 using a Scrum process template*. The team will learn the mapping between the Scrum concepts and how they are implemented in the tool. After connecting to the shared Team Foundation Server, the team members will then return to the simulation, this time using Visual Studio to manage their product development.
    - Mapping Scrum to Visual Studio 2010
    - User Story work items
    - Task work items
    - Bug work items
    - Demonstration
    - Simulation
    - Retrospective

    Module 4: The Case Study
    In this module the team is introduced to their problem domain for the week. A kickoff meeting by the Product Owner (the instructor) will set the stage for the why and what of the upcoming sprints. The team will then define the quality attributes of the project and their definition of "done." The legacy application code will be downloaded, built, and explored, so that any bugs can be discovered and reported.
    - Introduction to the case study
    - Download the source code, build, and explore the application
    - Define the quality attributes for the project
    - Define "done"
    - How to file effective bugs in Visual Studio 2010
    - Retrospective

    Module 5: Hotfix
    This module drops the team directly into a Brownfield (legacy) experience by forcing them to analyze the existing application's architecture and code in order to locate and fix the Product Owner's high-priority bug(s). The team will learn best practices around finding, testing, fixing, validating, and closing a bug.
    - How to use Architecture Explorer to visualize and explore
    - Create a unit test to validate the existence of a bug
    - Find and fix the bug
    - Validate and close the bug
    - Retrospective

    Module 6: Planning
    This short module introduces the team to release and sprint planning within Visual Studio 2010. The team will define and capture their goals as well as other important planning information.
    - Release vs. Sprint planning
    - Release planning and the Product Backlog
    - Product Backlog prioritization
    - Acceptance criteria and tests
    - Sprint planning and the Sprint Backlog
    - Creating and linking Sprint tasks
    - Retrospective
    At this point the team will have the knowledge of Scrum, Visual Studio 2010, and the case study application to begin developing increments of potentially shippable functionality that meet their definition of done.

    Module 7: Emergent Architecture
    This module introduces the architectural practices and tools a team can use to develop a valid design on which to build new functionality. The teams will learn how Scrum supports good architecture and design practices. After the discussion, the teams will be presented with the product owner's prioritized backlog so that they may select and commit to the functionality they can deliver in this sprint.
    - Architecture and Scrum
    - Emergent architecture
    - Principles, patterns, and practices
    - Visual Studio 2010 modeling tools
    - UML and layer diagrams
    - SPRINT 1
    - Retrospective

    Module 8: Test Driven Development
    This module introduces Test Driven Development as a design tool and how to implement it using Visual Studio 2010. To maximize productivity and quality, a Scrum team should set up Continuous Integration to regularly build every team member's code changes and run regression tests. Refactoring will also be defined and demonstrated in combination with Visual Studio's Test Impact Analysis to efficiently re-run just those tests which were impacted by refactoring.
    - Continuous integration
    - Team Foundation Build
    - Test Driven Development (TDD)
    - Refactoring
    - Test Impact Analysis
    - SPRINT 2
    - Retrospective

    Module 9: Agile Database Development
    This module lets the SQL Server database developers in on a little secret - they can be agile too. By using the database projects in Visual Studio 2010, the database developers can join the rest of the team. The students will see how to apply Agile database techniques within Visual Studio to support the SQL Server 2005/2008/2008R2 development lifecycle.
    - Agile database development
    - Visual Studio database projects
    - Importing schema and scripts
    - Building and deploying
    - Generating data
    - Unit testing
    - SPRINT 3
    - Retrospective

    Module 10: Ship It
    Teams need to know that just because they like the functionality doesn't mean the Product Owner will. This module revisits acceptance criteria as it pertains to acceptance testing. By refining acceptance criteria into manual test steps, team members can execute the tests, recording the results and reporting bugs in a number of ways. Manual tests will be defined and executed using the Microsoft Test Manager tool. As the Sprint completes and an increment of functionality is delivered, the team will also learn why and when they should create a branch of the codeline.
    - Acceptance criteria
    - Testing in Visual Studio 2010
    - Microsoft Test Manager
    - Writing and running manual tests
    - Branching
    - SPRINT 4
    - Retrospective

    Module 11: Overcoming Dysfunction
    This module introduces the many types of people, process, and tool dysfunctions that teams face in the real world. Many dysfunctions and scenarios will be identified, along with ideas and discussion for how a team might mitigate them. This module will enable you and your team to move toward independence and improve your game of Scrum when you depart class.
    - Scrum-butts and flaccid Scrum
    - Best practices working as a team
    - Team challenges
    - ScrumMaster challenges
    - Product Owner challenges
    - Stakeholder challenges
    - Course Retrospective

    What will be expected of you and your team?
    This is a unique course in that it is technically focused, team-based, and employs timeboxes. It demands that the members of the teams self-organize and self-manage their own work to collaboratively develop increments of software. All attendees must commit to:
    - Pay attention to all lectures and demonstrations
    - Participate in team and group discussions
    - Work collaboratively with other team members
    - Obey the timebox for each activity
    - Commit to work and do your best to deliver

    All teams should have these skills:
    - Understanding of Scrum
    - Familiarity with Visual Studio 2010
    - C#, .NET 4.0 & ASP.NET 4.0 experience*
    - SQL Server 2008 development experience
    - Software testing experience
    * Check with the instructor ahead of time for the exact technologies

    Self-organising teams
    Another unique attribute of this course is that it is a technical training class being delivered to teams of developers, not pairs, and not individuals. Ideally, your actual software development team will attend the training to ensure that all necessary skills are covered. However, if you wish to attend an open enrolment course alone or with just a couple of colleagues, realize that you may be placed on a team with other attendees. The instructor will do his or her best to ensure that each team is cross-functional to tackle the case study, but there are no guarantees. You may be required to try a new role, learn a new skill, or pair with somebody unfamiliar to you. This is just good Scrum!

    Who should NOT take this course?
    Because of the nature of this course, as explained above, certain types of people should probably not attend:
    - Students requiring command-and-control style instruction - there are no prescriptive/step-by-step (think traditional Microsoft Learning) labs in this course
    - Students who are unwilling to work within a timebox
    - Students who are unwilling to work collaboratively on a team
    - Students who don't have any skill in any of the software development disciplines
    - Students who are unable to commit fully to their team - not only will this diminish the student's learning experience, but it will also impact their team's learning experience

    Find a course and register
    Download this syllabus
    Download the Scrum Guide

    Technorati Tags: Scrum, SSW, Pro Scrum Dev

    Read the article

  • OFM 11g: Implementing OAM SSO with Forms

    - by olaf.heimburger
    There is some confusion about the integration of OFM 11g Forms with Oracle Access Manager 11g (OAM). Some say this does not work, some say it works, but... Actually, having implemented it many times, I belong to the latter group. Here is how.

    Caveat
    Before you start installing anything, take a step back and consider your current implementation and what you really need and want to achieve. The current integration of Forms 11g with OAM 11g does not support self-service account creation and password resets from the Forms application. If you really need these, you must use the existing Oracle AS 10.1.4.3 infrastructure. On the other hand, if your user population is pretty stable, you can enjoy the latest Forms 11g with OAM 11g.

    Assumptions
    The whole process should be done in one day. I assume that all domains and instances are started during setup; if you need to restart them on demand or on purpose, be sure to have proper start/stop scripts. I don't mention them here.

    Preparation
    It goes without saying that you should always do a proper backup before you change anything on your production environment. By proper backup, I also mean a tested and verified restore process. If you haven't dared to test it before, do it now. It pays off.

    Requirements
    For OAM 11g to work properly you need an LDAP repository. For the integration of Forms 11g you need an Oracle Internet Directory (OID) configured with the Oracle AS SSO LDAP extensions. For better support I usually give the latest version a try; in this case OID 11g is a good choice. During the Installation and Integration steps we use an upgrade wizard that needs the old OID configuration on the same host but in a different ORACLE_HOME.

    Installation vs Configuration
    With OFM 11g, Oracle introduced a clear separation between Installation of the binaries (the software) and Configuration of the instances (the runtime). This is really great, as you can install all the software once and create new instances when needed. In the following we adhere to this scheme: install the software first, then configure the instances later.

    Installation Steps
    The Oracle documentation contains all the necessary steps for the installation of all pieces of software, but some hints help to avoid traps and pitfalls.

    Step 1: The Database
    Start the installation with the database. It is quite obvious, but we need an Oracle database for all the other steps. If you have one at hand, fine. If not, install at least an Oracle 10.2.0.4 version. This database can be on a different host.

    Step 2: The Repository Creation Utility
    The next step should be to run the Repository Creation Utility (RCU). This is a client application that just needs to connect to your database. It can be run on any host that can reach the database and is a Windows or Linux 32-bit machine. When you run it, be sure to install the OID schema and the OAM schema. If you miss one of these, you can run the RCU again to install the missing schema.

    Step 3: The Foundation
    With OFM 11g, Oracle started to use WebLogic Server 11g (WLS) as the foundation for all OFM 11g installations, so we install it first. Depending on your operating system, it is possible that no native installer is available. My approach to this dilemma is to use the WLS Generic Installer for all my installations. It does not include a JDK either, but if you have both for your platform you are ready to go.

    Step 3a: The JDK
    To make things interesting, Oracle currently has two JDKs in its portfolio: the Sun JDK and the JRockit JDK. Both are available for a number of platforms. If you are lucky and both are available for your platform, install each in its own directory (and not in one of your ORACLE_HOMEs). You can then use whichever you like later.

    Step 3b: Install WLS for OID and OAM
    With the JDK installed, we start the generic installer with java -jar wls_generic.jar. STOP! Before you do this, check the Java version first. It should be 1.6.0_18 or later and not the GCC one (some Linux distros have it installed by default). To verify the version, issue a java -version command and make sure that the output does not contain the text gcj and that the version matches. If this does not work, use an absolute path like /opt/java/jdk1.6.0_23/bin/java to start the installer. The installer allows you to specify a path to install the software into, say /opt/oracle/iam/11.1.1.3 for the OID and OAM installation. We will call this IAM_HOME.

    Step 4: Install OID
    Now we are ready to install OID. Start the OID installer (in the Disk1 directory) and select the installation-only step. This installs the software only and does not configure the instance. Use the IAM_HOME as the target directory.

    Step 5: Install SOA Suite
    The IAM 11g Suite uses the BPEL component of the SOA Suite 11g for its workflows. This is a pretty closed environment and not to be used for SCA Composites. We install the SOA Suite in $IAM_HOME/soa. The installer only installs the binaries; configuration will be done later.

    Step 6: Install OAM
    Once the installation of OID and SOA is done, we are ready to install the OAM software in the same IAM_HOME. Make sure to install the OAM binaries in a directory different from the one you used during the OID and SOA installation. As before, we only install the software; the instance will be created later.

    Step 7: Backup the Installation
    At this point, I normally do a backup (or snapshot in a virtual image) of the installation. Good for when you need to go back to this point.

    Step 8: Configure OID
    The software is installed, and now we need instances to run it. This process is called configuration. For OID, use the config.sh found in $IAM_HOME/oid/bin to start the configuration wizard. Normally this runs smoothly; if you encounter issues, check the Oracle Support site for help. This configuration will also start the OID instance.

    Step 9: Install the Oracle AS SSO Schema
    Before we install the Forms software we need to install the Oracle AS SSO schema into the database and OID. This is a rather dangerous procedure, but fully documented in the IAM Installation Guide, Chapter 10. You should finish this in one go; do not reboot your host during the whole procedure. As a precaution, make a backup of the OID instance before you start. Once the backup is ready, read the chapter, including every note, carefully. You can avoid a number of issues by following all the steps, and you will succeed with a working solution.

    Step 10: Configure OAM
    Reached this step? Great. You are ready to create an OAM instance. Use the config.sh in $IAM_HOME/iam/common/bin for this. This opens the WLS Domain Creation Wizard and asks for the libraries to be installed. You should at least select the OAM with Database repository item. The configuration will also start the OAM instance.

    Step 11: Install WLS for Forms 11g
    It is quite tempting to install everything in one ORACLE_HOME. Unfortunately this does not work for all OFM packages, so we do another WLS installation in another ORACLE_HOME. The same considerations as in Step 3b apply. We call this one FORMS_HOME.

    Step 12: Install Forms
    In the FORMS_HOME we now install the binaries for the Forms 11g software. Again, this is an install-only step; configuration starts with the next step.

    Step 13: Configure Forms
    To configure Forms 11g, we start the Configuration Wizard (config.sh) in FORMS_HOME/bin. This wizard should create a new WebLogic Domain and an OHS instance! Do not extend existing domains or instances: Forms should run in its own instances! When all information is supplied, the wizard creates the domain and instance and starts them automatically.

    Step 14: Set up your Forms SSO Environment
    Once you have implemented and tested your Forms 11g instance, you can configure it for SSO. Yes, this requires the old Oracle AS SSO solution: OIDDAS for creating and assigning users, and SSO to set up your partner applications. In this step you should consider creating every user necessary for use within the environment. When done, do not forget to test it.

    Step 15: Migrate the SSO Repository
    Since the final goal is to get rid of the old SSO implementation, we need to migrate the old SSO repository into the new OID structure. Additionally, this step also migrates all partner application configurations into OAM 11g. Quite convenient. To do this step, start the upgrade agent (ua, ua.bat, or ua.cmd) at the operating system level in $IAM_HOME/bin. Once finished, this wizard will create new osso.conf files for each partner application in $IAM_HOME/upgrade/temp/oam/. Note: at the time of this writing, this step only works if everything is on the same host (i.e. OID, OAM, etc.). This restriction might be lifted in later releases.

    Step 16: Change your OHS sso.conf and Shut Down OC4J_SECURITY
    In Step 14 we verified that SSO for our Forms environment works fine. Now we shut the old system down and reconfigure the OHS that acts as the Forms entry point. First, go to the OHS configuration directory and rename the old osso.conf to osso.conf.10g. Next, change moduleconf/mod_osso.conf to point to the new osso.conf file. Copy the new osso.conf file from $IAM_HOME/upgrade/temp/oam/ to the OHS configuration directory. Restart OHS and test Forms using the same Forms links. OAM should now kick in and show the login dialog asking for your user credentials.

    Done. Your Forms environment is now successfully integrated with OAM 11g. Enjoy.

    What's Next?
    This rather lengthy setup is just the foundation for your growing environment of OAM 11g protections. In the next entry we will show that Forms 11g and ADF Faces 11g can use the same OAM installation and provide real single sign-on.

    References
    Nearly everything is documented. Use the documentation!
    - Oracle Fusion Middleware Installation Guide for Oracle Identity Management 11gR1
    - Oracle Fusion Middleware Installation Guide for Oracle Identity Management 11gR1, Chapters 11-14
    - Oracle Fusion Middleware Administrator's Guide for Oracle Access Manager 11gR1, Appendix B
    - Oracle Fusion Middleware Upgrade Guide for Oracle Identity Management 11gR1, Chapter 10
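
    The java -version sanity check in Step 3b is easy to script if you are preparing several hosts. A small hedged Python sketch, assuming only that java -version traditionally prints its banner to stderr:

        # Hedged helper: warn if the default JDK is GCJ, which the WLS
        # generic installer should not be run with (see Step 3b).
        import subprocess

        result = subprocess.run(["java", "-version"],
                                capture_output=True, text=True)
        banner = result.stderr or result.stdout

        if "gcj" in banner.lower():
            print("GCJ detected - use a Sun or JRockit JDK instead")
        else:
            print(banner.splitlines()[0])  # e.g. java version "1.6.0_23"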

    Read the article

  • Use Autoruns to Manually Clean an Infected PC

    - by Mark Virtue
    There are many anti-malware programs out there that will clean your system of nasties, but what happens if you're not able to use such a program? Autoruns, from SysInternals (recently acquired by Microsoft), is indispensable when removing malware manually.

    There are a few reasons why you may need to remove viruses and spyware manually:
    - Perhaps you can't abide running resource-hungry and invasive anti-malware programs on your PC
    - You might need to clean your mom's computer (or that of someone else who doesn't understand that a big flashing sign on a website saying "Your computer is infected with a virus - click HERE to remove it" is not a message that can necessarily be trusted)
    - The malware is so aggressive that it resists all attempts to automatically remove it, or won't even allow you to install anti-malware software
    - Part of your geek credo is the belief that anti-spyware utilities are for wimps

    Autoruns is an invaluable addition to any geek's software toolkit. It allows you to track and control all programs (and program components) that start automatically with Windows (or with Internet Explorer). Virtually all malware is designed to start automatically, so there's a very strong chance that it can be detected and removed with the help of Autoruns. We have covered how to use Autoruns in an earlier article, which you should read if you need to first familiarize yourself with the program.

    Autoruns is a standalone utility that does not need to be installed on your computer. It can simply be downloaded, unzipped and run (link below). This makes it ideally suited for adding to your portable utility collection on your flash drive.

    When you start Autoruns for the first time on a computer, you are presented with the license agreement. After agreeing to the terms, the main Autoruns window opens, showing you the complete list of all software that will run when your computer starts, when you log in, or when you open Internet Explorer.

    To temporarily disable a program from launching, uncheck the box next to its entry. Note: this does not terminate the program if it is running at the time; it merely prevents it from starting next time. To permanently prevent a program from launching, delete the entry altogether (use the Delete key, or right-click and choose Delete from the context menu). Note: this does not remove the program from your computer; to remove it completely you need to uninstall the program (or otherwise delete it from your hard disk).

    Suspicious Software
    It can take a fair bit of experience (read "trial and error") to become adept at identifying what is malware and what is not. Most of the entries presented in Autoruns are legitimate programs, even if their names are unfamiliar to you. Here are some tips to help you differentiate the malware from the legitimate software:
    - If an entry is digitally signed by a software publisher (i.e. there's an entry in the Publisher column) or has a "Description", then there's a good chance that it's legitimate
    - If you recognize the software's name, then it's usually okay. Note that occasionally malware will "impersonate" legitimate software by adopting a name that's identical or similar to software you're familiar with (e.g. "AcrobatLauncher" or "PhotoshopBrowser"). Also, be aware that many malware programs adopt generic or innocuous-sounding names, such as "Diskfix" or "SearchHelper" (both mentioned below)
    - Malware entries usually appear on the Logon tab of Autoruns (but not always!)
    - If you open the folder that contains the EXE or DLL file (more on this below) and examine the "last modified" date, the dates are often from the last few days (assuming that your infection is fairly recent)
    - Malware is often located in the C:\Windows folder or the C:\Windows\System32 folder
    - Malware often has only a generic icon (to the left of the name of the entry)
    - If in doubt, right-click the entry and select Search Online…

    Two suspicious-looking entries, Diskfix and SearchHelper, are fairly typical of malware infections:
    - They have neither descriptions nor publishers
    - They have generic names
    - The files are located in C:\Windows\System32
    - They have generic icons
    - The filenames are random strings of characters
    - If you look in the C:\Windows\System32 folder and locate the files, you'll see that they are some of the most recently modified files in the folder (see below)

    Double-clicking on the items will take you to their corresponding registry keys.

    Removing the Malware
    Once you've identified the entries you believe to be suspicious, you need to decide what to do with them. Your choices include:
    - Temporarily disable the Autorun entry
    - Permanently delete the Autorun entry
    - Locate the running process (using Task Manager or similar) and terminate it
    - Delete the EXE or DLL file from your disk (or at least move it to a folder where it won't be automatically started)
    or all of the above, depending upon how certain you are that the program is malware.

    To see if your changes succeeded, you will need to reboot your machine and check any or all of the following:
    - Autoruns - to see if the entry has returned
    - Task Manager (or similar) - to see if the program was started again after the reboot
    - The behavior that led you to believe that your PC was infected in the first place; if it's no longer happening, chances are that your PC is now clean

    Conclusion
    This solution isn't for everyone and is most likely geared to advanced users. Usually a quality antivirus application does the trick, but if not, Autoruns is a valuable tool in your anti-malware kit. Keep in mind that some malware is harder to remove than others. Sometimes you need several iterations of the steps above, with each iteration requiring you to look more carefully at each Autorun entry. Sometimes, the instant you remove the Autorun entry, the running malware replaces it. When this happens, we need to become more aggressive in our assassination of the malware, including terminating programs (even legitimate programs like Explorer.exe) that are infected with malware DLLs. Shortly we will be publishing an article on how to identify, locate and terminate processes that represent legitimate programs but are running infected DLLs, in order that those DLLs can be deleted from the system.
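
    As a companion to the "last modified" heuristic above, here is a small hypothetical Python sketch (any scripting language would do) that surfaces the most recently modified files in C:\Windows\System32, which is where the Diskfix/SearchHelper examples lived:

        # List the ten most recently modified files in System32; recently
        # dropped malware files tend to appear at the top of this list.
        import os
        import time

        folder = r"C:\Windows\System32"
        entries = []
        for name in os.listdir(folder):
            path = os.path.join(folder, name)
            if os.path.isfile(path):
                entries.append((os.path.getmtime(path), path))

        for mtime, path in sorted(entries, reverse=True)[:10]:
            print(time.ctime(mtime), path)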
    Download Autoruns from SysInternals

    Read the article

  • “It’s only test code…”

    - by Chris George
    "Let me hack this in, it's only test code", "Don't worry about getting it reviewed, it's only test code", "It doesn't have to be elegant or efficient, it's only test code"… do these phrases sound familiar? Chances are, if you've worked with test automation, at one point or another you will have heard these phrases; you have probably even used them yourself! What is certain is that code written under this "it's only test code" mantra will come back and bite you in the arse!

    I recently encountered a case where a test was giving a false positive, and therefore hiding a real product bug, because the test code was very badly written. It was difficult to understand what the test was actually trying to achieve, let alone how it was doing it, and this complexity masked a simple logic error. These issues are real and they do happen.

    Let's take a step back and look at what we are trying to do. We write test code that tests product code, and we do this to build a suite of tests that helps protect our software against regressions. This test code makes sure the product behaves as it should by employing some sort of expected-result verification. The simple cases are generally not a problem. However, automation allows us to explore more complex scenarios in many more permutations. As this complexity increases, so does the complexity of the test code, and it is at this point that code which has not been properly architected will cause problems.

    Keep your friends close…

    So, how do we make sure we are doing it right? The development teams I have worked on have always had Test Engineers working very closely with their Software Engineers, and this is something I have always tried to take full advantage of. They are coding experts! So run your ideas past them; ask for advice on how to structure your code and for help designing your data structures. This may require a shift in your team's viewpoint, as contrary to this section's title and folklore, Software Engineers are not actually the mortal enemy of Test Engineers. As time progresses, and test automation becomes more and more ingrained in what we do, the two roles are converging more than ever. Over the 16 years I have spent as a Test Engineer, I have seen the grey area between the two roles grow significantly larger, which strengthens the relationship and common bond between them and makes test code activities so much easier!

    Pair for the win

    Possibly the best thing you can do to write good test code is to pair program on the task. This serves several purposes:

    - you get the benefit of the Software Engineer's knowledge and experience
    - the Software Engineer gains knowledge of the testing process. Sharing the love is a wonderful thing!
    - two pairs of eyes are always better than one… and so are two brains. Between the two of you, I guarantee you will derive more useful test cases than either of you would alone.

    Code reviews

    Another policy that certainly pays dividends is the practice of code reviews. Having one of your peers review your code before you commit it serves two purposes. Firstly, it forces you to explain your code, and just the act of doing this will often uncover errors. Secondly, it gets yet another pair of eyes on your code! I cannot stress enough how important code reviews are. The benefits they offer apply as much to product code as to test code. In short, Software and Test Engineers should all be doing them!
    It can be extended even further by having test code reviewed by both a Software Engineer and a Test Engineer, and likewise product code. This keeps both functions in the loop with changes going on within your code base.

    Learn from your devs

    I briefly touched on this earlier, but I'd like to go into more detail here. Pairing with your Software Engineers when writing your test code is an amazing opportunity to improve your coding skills. As I sit here writing this article, waiting to be called into court for jury service, it reminds me that it takes a lot of patience to be a Test Engineer, almost as much as it takes to be a juror! However tempting it is to go rushing in and start writing your automated tests, resist that urge. Discuss what you want to achieve, then talk through the approach you're going to take, and then code it up together. I find it really enlightening to ask questions like "is there a better way to do this?" or "is this how you would code it?" The latter question, especially, is where I learn the most. I've found that most Software Engineers are reluctant to show you the "right way" to code something when writing tests, because they perceive the "right way" to be too complicated for the Test Engineer (e.g. not mentioning LINQ and instead doing something verbose). So by asking how THEY would code it, you unleash their true dev-ness, and advanced code usually ensues! I would point out, however, that you don't have to accept their method as the final answer. On numerous occasions I have opted for the more simple, verbose solution because I found the code written by the Software Engineer too advanced, and therefore unreadable when I return to it in a month's time! Always keep the target audience in mind when writing clever code; in my case that is mostly Test Engineers.
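    To make the LINQ point concrete, here is a small invented sketch (not from the original article) showing the verbose version a Test Engineer might favour next to the LINQ version a Software Engineer might suggest:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class LinqVersusLoop
    {
        static void Main()
        {
            // Hypothetical data: HTTP status codes captured by a test run.
            var results = new List<int> { 200, 404, 200, 500, 200 };

            // Verbose version: longer, but readable by everyone on the team.
            int failures = 0;
            foreach (int status in results)
            {
                if (status >= 400)
                {
                    failures++;
                }
            }

            // LINQ version: shorter, but only clearer if the whole team reads LINQ fluently.
            int failuresLinq = results.Count(status => status >= 400);

            Console.WriteLine("{0} vs {1}", failures, failuresLinq);
        }
    }

    Neither is wrong; the right choice is the one your team can still read in a month's time.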

    Read the article

  • Get Application Title from Windows Phone

    - by psheriff
    In a Windows Phone application that I am currently developing, I needed to retrieve the application title of the phone application. You can set the Deployment Title in the Properties of your Windows Phone application, but getting to this value programmatically can be a little tricky. This article assumes that you have Visual Studio 2010 and the Windows Phone tools installed along with it. The Windows Phone tools must be downloaded separately and installed with Visual Studio 2010. You may also download the free Visual Studio 2010 Express for Windows Phone developer environment.

    The WMAppManifest.xml File

    First off, you need to understand that when you set the Deployment Title in the Properties window of your Windows Phone application, this title actually gets stored in an XML file located under the \Properties folder of your application. This XML file is named WMAppManifest.xml. A portion of this file is shown in the following listing:

    <?xml version="1.0" encoding="utf-8"?>
    <Deployment
      xmlns="http://schemas.microsoft.com/windowsphone/2009/deployment"
      AppPlatformVersion="7.0">
      <App xmlns=""
           ProductID="{71d20842-9acc-4f2f-b0e0-8ef79842ea53}"
           Title="Mobile Time Track"
           RuntimeType="Silverlight"
           Version="1.0.0.0"
           Genre="apps.normal"
           Author="PDSA, Inc."
           Description="Mobile Time Track"
           Publisher="PDSA, Inc.">
        ...
      </App>
    </Deployment>

    Notice the "Title" attribute in the <App> element in the above XML document. This is the value that gets set when you modify the Deployment Title in the Properties window of your Phone project. The Title is the only value you can set from the Properties window; all of the other attributes you see here must be set by going into the XML file and modifying them directly. Note that this information duplicates some of the information that you can also set from the Assembly Information… button in the Properties window. Why Microsoft did not just use that information, I don't know.

    Reading Attributes from WMAppManifest

    I searched all over the namespaces and classes within the Windows Phone DLLs and could not find a way to read the attributes within the <App> element. Thus, I had to resort to good old-fashioned XML processing. First off, I created a WinPhoneCommon class and added two static methods, as shown in the snippet below:

    public class WinPhoneCommon
    {
      /// <summary>
      /// Returns the Application Title
      /// from the WMAppManifest.xml file
      /// </summary>
      /// <returns>The application title</returns>
      public static string GetApplicationTitle()
      {
        return GetWinPhoneAttribute("Title");
      }

      /// <summary>
      /// Returns the Application Description
      /// from the WMAppManifest.xml file
      /// </summary>
      /// <returns>The application description</returns>
      public static string GetApplicationDescription()
      {
        return GetWinPhoneAttribute("Description");
      }

      ... GetWinPhoneAttribute method here ...
    }

    In your Windows Phone application you can now simply call WinPhoneCommon.GetApplicationTitle() or WinPhoneCommon.GetApplicationDescription() to retrieve the Title or Description from the WMAppManifest.xml file, respectively. You will notice that each of these methods makes a call to the GetWinPhoneAttribute method.
    This method is shown in the following code snippet:

    /// <summary>
    /// Gets an attribute from the Windows Phone WMAppManifest.xml file.
    /// To use this method, add a reference to the System.Xml.Linq DLL.
    /// </summary>
    /// <param name="attributeName">The attribute to read</param>
    /// <returns>The attribute's value</returns>
    private static string GetWinPhoneAttribute(string attributeName)
    {
      string ret = string.Empty;

      try
      {
        XElement xe = XElement.Load("WMAppManifest.xml");
        var attr = (from manifest in xe.Descendants("App")
                    select manifest).SingleOrDefault();
        if (attr != null)
          ret = attr.Attribute(attributeName).Value;
      }
      catch
      {
        // Ignore errors in case this method is called
        // from design time in VS.NET
      }

      return ret;
    }

    I love using the new LINQ to XML classes contained in System.Xml.Linq.dll. When I did a Bing search, the only samples I found for reading attribute information from WMAppManifest.xml used either XmlReader or XmlReaderSettings objects. These are fine and they work, but they involve a little extra code. Instead of using these, I added a reference to System.Xml.Linq.dll, then added two using statements to the top of the WinPhoneCommon class:

    using System.Linq;
    using System.Xml.Linq;

    Now, with just a few lines of LINQ to XML code, you can navigate to the App element and extract the appropriate attribute that you pass into the GetWinPhoneAttribute method. Notice that I added a little bit of exception-handling code in this method. I ignore the exception in case you call this method from the Loaded event of a user control: at design time you cannot access the WMAppManifest file, and an exception would otherwise be thrown.

    Summary

    In this article you learned how to retrieve the attributes from the WMAppManifest.xml file. I use this technique to grab information that I would otherwise have to hard-code in my application. Getting the Title or Description for your Windows Phone application is easy with just a little bit of LINQ to XML code. A short usage sketch follows at the end of this article.

    NOTE: You can download the complete sample code at my website, http://www.pdsa.com/downloads. Choose Tips & Tricks, then "Get Application Title from Windows Phone" from the drop-down.

    Good Luck with your Coding,
    Paul Sheriff

    ** SPECIAL OFFER FOR MY BLOG READERS **
    Visit http://www.pdsa.com/Event/Blog for a free video on Silverlight entitled Silverlight XAML for the Complete Novice - Part 1.
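    As a usage sketch (assuming the WinPhoneCommon class above is in your project, and that your page uses the ApplicationTitle TextBlock from the default Windows Phone page template), the title can be dropped into a page like this:

    using Microsoft.Phone.Controls;

    public partial class MainPage : PhoneApplicationPage
    {
        public MainPage()
        {
            InitializeComponent();

            // Pull the Title straight from WMAppManifest.xml at runtime
            // instead of hard-coding it in XAML.
            ApplicationTitle.Text = WinPhoneCommon.GetApplicationTitle();
        }
    }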

    Read the article

  • Why should you choose Oracle WebLogic 12c instead of JBoss EAP 6?

    - by Ricardo Ferreira
    In this post, I will cover some technical differences between Oracle WebLogic 12c and JBoss EAP 6, which was released by Red Hat a couple of days ago. This article aims to help you evaluate the key points you should consider when choosing a Java EE application server. In the following sections, I will present some important aspects that most customers ask us about when they are seriously evaluating a middleware infrastructure, especially if they are considering JBoss for some reason. I suggest you keep the following question in mind while reading: "Why should I choose JBoss instead of WebLogic?"

    1) Multi Datacenter Deployment and Clustering

    - D/R ("Disaster & Recovery") architecture support is embedded in the WebLogic Server 12c product. JBoss EAP 6, on the other hand, has no direct D/R support included; Red Hat relies on third-party tools with higher prices. When you consider a middleware solution to host your business-critical applications, you should weigh every architectural aspect of the solution. Fail-over support is only one small aspect of a truly reliable solution; if you do not plan for D/R, your solution will not be reliable. With Red Hat and JBoss EAP 6, this extra cost increases the total cost of ownership considerably. As we commonly hear from analysts, open source is not so cheap once you see the big picture.

    - WebLogic Server 12c supports advanced LAN clustering, detection of dead servers and a common alert framework. JBoss EAP 6, on the other hand, has limited LAN clustering support with no detection of dead servers. It generates no alerts when servers go down (unless you buy JBoss ON, a separate technology that, as of this writing, does not support JBoss EAP 6), and manual intervention is required when servers fail. In most cases, administrators must rely on "kill -9", "tail -f someFile.log" and "ps ax | grep java" commands to manage failures and clustering anomalies.

    - WebLogic Server 12c supports the concept of the Node Manager, a separate process that runs on the physical or virtual servers and extends administration of the cluster to WebLogic managed servers that are often distributed across multiple machines and geographic locations. JBoss EAP 6, on the other hand, has no equivalent technology; whole server instances must be managed individually.

    - WebLogic Server 12c Node Manager supports Coherence to boost performance when managing servers. JBoss EAP 6, on the other hand, has no similar technology; there is no way to coordinate JBoss instances using high-throughput, low-latency protocols like InfiniBand. The Node Manager feature also enables another very important capability that JBoss EAP lacks: secure administration. When using WebLogic Node Manager, all administration tasks are sent to the managed servers in a secure tunnel protected by a certificate, which means the transport layer between the WebLogic administration console and the managed servers is secured by SSL.

    - WebLogic Server 12c is now integrated with OTD ("Oracle Traffic Director"), a web server technology derived from the former Sun iPlanet Web Server. This software complements the web server support offered by OHS ("Oracle HTTP Server").
    Using OTD, WebLogic instances are load-balanced by powerful software that knows how to handle SDP ("Sockets Direct Protocol") over InfiniBand, which boosts performance when used with engineered systems such as Oracle Exalogic Elastic Cloud. JBoss EAP 6, on the other hand, only offers support for the Apache Web Server with custom modules created to deal with JBoss clusters, and only across standard TCP/IP networks.

    2) Application and Runtime Diagnostics

    - WebLogic Server 12c has diagnostics capabilities embedded in the server, called WLDF ("WebLogic Diagnostic Framework"), so there is no need to rely on third-party tools. JBoss EAP 6, on the other hand, has no diagnostics capabilities; its only diagnostics tool is the application server log, and administrators are left to analyze thousands of log lines to find out what is going on.

    - WebLogic Server 12c complements WLDF with JRockit MC ("Mission Control"), which gives administrators and developers complete insight into JVM performance, behavior and possible bottlenecks. WebLogic Server 12c also has an embedded classloader analysis tool, and even a log analyzer that lets administrators and developers view the logs of multiple servers at the same time. JBoss EAP 6, on the other hand, relies on third-party tools for anything similar; again, only log searching is offered to find out what's going on.

    - WebLogic Server 12c offers end-to-end traceability and monitoring through Oracle EM ("Enterprise Manager"), including monitoring of business transactions flowing through web servers, ESBs, application servers and database servers, all with deep JVM analysis and diagnostics. JBoss EAP 6, on the other hand, even with JBoss ON ("Operations Network"), which is a separate technology, does not support those features. Red Hat relies on third-party tools to provide direct Oracle database traceability across JVMs. One such tool is Oracle EM for non-Oracle middleware, which manages JBoss, Tomcat, WebSphere and IIS transparently.

    - WebLogic Server 12c, through its JRockit support, offers a tool called JRockit Flight Recorder, which gives developers complete visibility into a period of production monitoring with zero extra overhead. This automatic recording lets you deeply analyze thread latency, memory leaks, thread contention, resource utilization, stack overflow damage and GC ("Garbage Collection") cycles; observe stop-the-world events in real time; and analyze generational, reference-count and parallel collections as well as mutator threads. JBoss EAP 6 cannot even dream of supporting something similar, not least because Red Hat does not have its own JVM.

    3) Application Server Administration

    - WebLogic Server 12c offers a complete administration console complemented with scripting and macro-like recording capabilities. A single WebLogic console can manage hundreds of WebLogic servers belonging to the same domain. JBoss EAP 6, on the other hand, has a limited console and provides XML-centric administration. After ten years, JBoss has only started developing a rudimentary centralized administration that still leaves many administration tasks aside, so administrators and developers must touch scripts and XML configuration files for most advanced, and even simple, administration tasks. This leads to error-prone and risky deployments.
    Even with JBoss ON, JBoss EAP cannot offer decent administration features; administrators must be highly skilled in JBoss's internal architecture and its management capabilities.

    - Oracle EM is available to manage multiple domains, databases, application servers, operating systems and virtualization, with complete end-to-end visibility. JBoss ON does not provide management capabilities across the complete architecture, only basic monitoring. Even deployment must be done outside JBoss ON, which does not integrate well with software other than JBoss. As of this writing, JBoss ON does not support JBoss EAP 6, so even this minimal support is unavailable, leaving customers uncovered and dependent on highly skilled JBoss administrators.

    - WebLogic Server 12c has the same administration model whatever topology the customer selects. JBoss EAP 6, on the other hand, differentiates between two operational models, standalone mode and domain mode, which are not consistent with each other; depending on the mode used, different administration skills are required.

    - WebLogic Server 12c has no single-point-of-failure processes and does not require any specialized servers to be defined. The domain model in WebLogic has been available for at least ten years and is production proven. JBoss EAP 6, on the other hand, needs special processes to guarantee JBoss's integrity: the PC ("Process Controller") and the HC ("Host Controller"). Unlike WebLogic's, the domain model in JBoss is quite new (a year old at most) and needs to mature considerably before it can do what the WebLogic domain model does.

    - WebLogic Server 12c supports a parallel deployment model that enables multiple artifacts to be deployed at the same time. JBoss EAP 6, on the other hand, has no similar feature; every deployment is done atomically in the containers. This means that if you deploy a huge EAR (say, 120 MB) onto JBoss EAP 6, it will take several minutes before it starts accepting requests. The same EAR deployed onto WebLogic Server 12c will cut the deployment time at least in half compared to JBoss.

    4) Support and Upgrades

    - WebLogic Server 12c has patch management available. JBoss EAP 6, on the other hand, has no patch management; each JBoss EAP instance must be patched manually. To get this capability, you need to buy a separate technology called JBoss ON ("Operations Network"), but as of this writing JBoss ON does not support JBoss EAP 6, so in practice JBoss EAP 6 does not have this feature.

    - WebLogic Server 12c supports previous WebLogic domains without any reconfiguration, since its kernel has been robust and mature since its creation in 1995. JBoss EAP 6, on the other hand, has a proven lack of compatibility between JBoss AS 4, 5, 6 and 7. Different kernels and messaging engines have been implemented in the JBoss stack over the last five years, revealing an inability to build a well-architected and proven middleware technology.

    - WebLogic Server 12c offers patch prescriptions based on customer configuration. JBoss EAP 6, on the other hand, has no such capability; customers need to open support tickets and have their installations reviewed by Red Hat support staff to get any patch prescription.

    - Oracle WebLogic Server, regardless of the version, has 8 years of support with new patches, and lifetime availability of existing patches beyond that.
    JBoss EAP 6, on the other hand, provides patches for a specific application server version for up to 5 years after the release date; JBoss EAP 4 and previous versions had only 4 years. A good question for Red Hat is: "what happens when you find issues after year 5?"

    5) RAC ("Real Application Clusters") Support

    - WebLogic Server 12c ships with a specific JDBC driver that leverages Oracle RAC clustering capabilities (Fast Application Notification, Transaction Affinity, Fast Connection Failover, etc.). The Oracle JDBC thin driver is also available. JBoss EAP 6, on the other hand, ships only the standard Oracle JDBC thin driver. Load balancing with Oracle RAC is not supported, manual intervention is necessary in case of planned or unplanned RAC downtime, and in JBoss EAP 6 the situation is not reestablished automatically after downtime.

    - WebLogic Server 12c has a feature called Active GridLink for Oracle RAC, which provides up to 3X performance for OLTP applications. This seamless integration between WebLogic and the Oracle database adds more value to critical business applications, leveraging investments in Oracle database technology and Oracle middleware. JBoss EAP 6, on the other hand, sees no performance gains at all, even when administrators apply connection-pool tuning.

    - WebLogic Server 12c also supports transaction and web session affinity to Oracle RAC, which provides additional performance gains. This is particularly interesting if you are building a reliable solution distributed not only within a LAN cluster but across data centers. JBoss EAP 6, on the other hand, has no such support.

    6) Standards and Technology Support

    - WebLogic Server 12c has been fully Java EE 6 compatible and production ready since December 2011. JBoss EAP 6, on the other hand, became fully Java EE 6 compatible in its community version only three months later, and production ready only a few days before this article was written, in June 2012. Red Hat says it is the master of innovation and technology proliferation, but compared with Oracle, and even other proprietary vendors like IBM, it has historically been slow to deliver the newest technologies and standards adherence.

    - Oracle is the steward of Java, driving innovation into the platform from commercial and open-source vendors. Red Hat, on the other hand, does not have its own JVM and relies on third-party JVMs to complete its application server offering. 95% of Red Hat customers use Oracle HotSpot as their JVM, which means that without Oracle's involvement, Red Hat's support is limited exclusively to the application server layer, and we all know that most problems happen in the JVM layer.

    - WebLogic Server 12c natively supports JDK 7, which empowers developers to get the most out of the Java platform's productivity when writing code. This differentiates WebLogic from other application servers (except GlassFish, which is also managed by Oracle), because JDK 7 introduces remarkable productivity features such as the try-with-resources enhancement, catching multiple exceptions with one try block, strings in switch statements, JVM improvements in JDBC, I/O, networking, security and concurrency, and of course the most important feature of Java 7: native support for multiple non-Java languages. More features of JDK 7 can be found here.
    JBoss EAP 6, on the other hand, does not officially support JDK 7; the community version merely comments that "Java SE 7 can be used with JBoss 7", which does not give you any guarantee of enterprise support for JDK 7.

    - Oracle WebLogic Server 12c supports integration with the Spring framework, allowing Spring applications to use WebLogic's transaction manager and exposing bean interfaces as WebLogic MBeans, so they can take advantage of all of WebLogic's monitoring and administration capabilities. JBoss EAP 6, on the other hand, has no special integration with Spring. In fact, Red Hat offers a questionable package called "JBoss Web Platform" that in theory supports Spring, but in practice offers no special integration; it is merely a way for Red Hat customers to get support for both JBoss and Spring technology through the same customer support channel.

    7) Lightweight Development

    - Oracle WebLogic Server 12c and Oracle GlassFish are completely integrated and can share applications without any modifications. Starting with the 12c version, WebLogic natively understands GlassFish deployment descriptors and specific configurations, offering a true and reliable migration path from a community Java EE application server to an enterprise middleware product like WebLogic. JBoss EAP 6, on the other hand, offers no support for natively reusing an existing (or in-development) application from the JBoss AS community server. JBoss users suffer from critical issues at deployment time, including: changing the application's libraries and dependencies, patching the DTD or XSD deployment descriptors, refactoring application layers due to classloading issues and anomalies, and rebuilding persistence, business and web layers over issues with "the certified version of a certain dependency" or "frameworks that Red Hat potentially does not recommend". If your culture or enterprise IT directive is to develop Java EE applications on community middleware and, at some future point, transition to vendor-supported enterprise middleware, Oracle WebLogic plus Oracle GlassFish offer you a more sustainable solution.

    - WebLogic Server 12c has a very light ZIP distribution (less than 165 MB). The JBoss EAP 6 ZIP is around 130 MB, but together with JBoss ON you have another 100 MB, resulting in a larger download footprint. This is particularly relevant if you plan to automate the setup of application server instances (for example, to rapidly set up a development or staging environment) using Maven or Hudson.

    - WebLogic Server 12c has complete Maven integration, allowing developers to set up WebLogic domains with a few commands. Tasks like downloading WebLogic, installation, domain creation and data source deployment are completely integrated. JBoss EAP 6, on the other hand, offers limited integration with those tools.

    - WebLogic Server 12c has a startup mode called WLX that turns off the EJB, JMS and JCA containers, leaving only the web container with the Java EE 6 Web Profile enabled. JBoss EAP 6, on the other hand, has no such feature; you have to manually disable the containers you do not want to use.

    - WebLogic Server 12c supports FastSwap, which lets you change classes without redeployment. This is particularly interesting if you are developing patches for an application that is already deployed and you do not want to redeploy the entire application.
    This is the same behavior most application servers offer for JSP pages, but with WebLogic Server 12c you get the same feature for Java classes in general. JBoss EAP 6, on the other hand, has no such support; even JBoss EAP 5 never supported this.

    8) JMS and Messaging

    - WebLogic Server 12c has had a proven and highly scalable JMS implementation since its initial release in 1995. JBoss EAP 6, on the other hand, has a still-immature technology called HornetQ, introduced in JBoss EAP 5 to replace everything implemented in previous versions. Red Hat loves to introduce new technologies across JBoss versions, playing around with customers and their investments, and when asked why they changed the implementation and caused such a mess, their answer is always: "the previous implementation was inadequate and not aligned with the community strategy, so we are creating a new and improved one". This practice leads to uncomfortable investments that in the near future (sometimes less than a year) will be affected in some way.

    - WebLogic Server 12c has troubleshooting and monitoring features included in the WebLogic console and WLDF. JBoss EAP 6, on the other hand, has no direct monitoring in the console; activity is reflected only in the logs, and no debug logs are available in case of JMS issues.

    - WebLogic Server 12c has extremely good performance and scalability. JBoss EAP 6, on the other hand, has a JMS storage mechanism that relies on an Oracle database or MySQL. This means that if an issue happens in production and Red Hat decides a performance issue is due to database problems, they will not support you on it; they will direct you to call Oracle instead.

    - WebLogic Server 12c supports enterprise messaging features like SAF ("Store and Forward"), distributed queues/topics and foreign JMS provider support, which leverage JMS implementations without compromising developer code, keeping things completely transparent. JBoss EAP 6 does not even come close to supporting such features.

    9) Caching and Grid

    - Coherence, the leading and most mature data grid technology from Oracle, has been available since the early 2000s and was integrated with WebLogic in 2009. Coherence and WebLogic clusters can both be managed from the WebLogic administrative console; even the Node Manager supports Coherence. JBoss, on the other hand, discontinued JBoss Cache, its caching implementation, just as it did with its messaging implementation (JBossMQ), which was an issue for long-term customers. JBoss EAP 6 ships Infinispan 1.0, which is immature and lacks a proven record of successful cases and reliability.

    - WebLogic Server 12c has a feature called ActiveCache, which uses Coherence to replicate HTTP sessions, without any code changes, from both WebLogic and other application servers like JBoss, Tomcat, WebSphere, GlassFish and even Microsoft IIS. JBoss EAP 6, on the other hand, has no such support, and even if it gains it in the future, it will probably support only its own application server.

    - Coherence can be used to manage both the L1 and L2 cache levels, providing support for Oracle TopLink and other JPA-compliant implementations, even Hibernate. JBoss EAP 6 and Infinispan, on the other hand, support only Hibernate. And most important of all: Infinispan has no successful case of L1 or L2 cache support using Hibernate, which leads us to question its viability.
    10) Performance

    - WebLogic Server 12c is certified with Oracle Exalogic Elastic Cloud and can run unchanged applications on this engineered system. This lets customers benefit from Exalogic optimizations at both the kernel and JVM layers, boosting performance by up to 10X for web, OLTP, JMS and grid applications. JBoss EAP 6, on the other hand, has no investment in engineered systems: customers have no option to deploy on an ultra-fast Java system if their project grows and performance issues are detected.

    - WebLogic Server 12c maintains a performance gain with each new release: since WebLogic 5.1, the overall performance gain has been close to 4X, or roughly a 20% gain release over release. JBoss, on the other hand, does not publish SPECjAppServer or SPECjEnterprise performance benchmarks. Its so-called "performance gains" remain hidden in customer environments, which leads us to wonder whether they are real, since we will never get access to those environments.

    - WebLogic Server 12c has industry performance benchmarks with submissions across platforms and configurations, leading SPECj results. Oracle WebLogic leads SPECjAppServer performance in multiple categories, covering all customer topologies: single-node, dual-node, multi-node and multi-node with RAC. JBoss, again, does not publish any SPECjAppServer performance benchmarks.

    - WebLogic Server 12c has a feature called work managers, which lets your application reach new performance levels based on CPU utilization. Work managers prioritize work and allocate threads based on an execution model that takes into account administrator-defined parameters as well as actual run-time performance and throughput. JBoss EAP 6, on the other hand, has no comparable feature and probably never will. Lacking something like work managers, JBoss EAP 6 forces administrators, and especially developers, to chase performance gains intrusively, by rewriting code and doing performance refactorings.

    11) Professional Services Support

    - WebLogic Server 12c, like any other technology sold by Oracle, gives customers the option of hiring OCS ("Oracle Consulting Services") to manage critical scenarios, assist with the deployment of new applications, and provide highly skilled consulting on architecture and best practices, with staff allocated alongside customer teams. All OCS services are available without restriction, whether the customer has already bought software from Oracle or is just starting an implementation before any purchase. Red Hat, on the other hand, only offers professional services if you buy subscriptions. If you are developing a new business-critical application and need Red Hat's help with a serious issue or an architecture decision, they will probably say: "OK... I can help you, but only after you buy subscriptions from me". Red Hat also does not allow its professional services consultants to manage environments that use community-based software; they will likely make you buy a subscription first, download their "enterprise" version and then, optionally, hire their consultants.

    - Oracle provides its university to educate your team in its technologies, including, of course, specialized training on the WebLogic application server. At any time and location, you can hire Oracle to train your team, so you get reliable knowledge tailored to your specific needs.
    Certifications for the products are also available if your technical people want to differentiate themselves as professionals. Red Hat, on the other hand, has a limited pool of resources to train your team in its technologies. It essentially sells training and certification for RHEL ("Red Hat Enterprise Linux"), but if you want more specialized training in JBoss middleware, it will probably refer you to localized training from a "certified" partner, since it is apparently discontinuing its education center, at least here in Brazil. Red Hat was unable to carry its RHEL education success over to its middleware division, since it must first sell you subscriptions before giving you specialized training. And again, specialized training is offered only on the enterprise version (EAP, in the case of JBoss), which means the courses will be quite outdated. There are reports of developers who took official Red Hat trainings this year (2012); in one advanced JBoss course, Red Hat supposedly covered JBossMQ as the messaging subsystem, and even the printed material was based on JBossMQ, since the training had been created for JBoss EAP 4.3.

    12) Encouraging Transparency without Ulterior Motives

    - WebLogic Server 12c, like any other software from Oracle, can be downloaded at any time from anywhere; you only need an OTN ("Oracle Technology Network") credential, and you can download any enterprise software as many times as you want. And it is not some kind of "trial" version: these are the official binaries that will run forever in your data center. Oracle does not push the use of "special versions" of its software; the binaries you buy from Oracle are the same binaries anyone in the world can download and use for testing and personal education. JBoss EAP 6, on the other hand, is not available for download unless you buy a subscription and get access to the Red Hat enterprise repositories. If you need to test, learn or just start building your application with Red Hat's middleware, you must download it from the community website. You are not allowed to download the enterprise version, which, according to Red Hat, is more secure, reliable and robust. But none of us wants to start developing software on insecure, unreliable and unscalable middleware, right? So what do you do? You are "invited" by Red Hat to buy subscriptions to get access to the "good" version of the software.

    - WebLogic Server 12c prices are publicly available on the Oracle website. If you want to know right now how much WebLogic will cost your organization, just click here to access our price list; in the case of WebLogic, check the "US Oracle Technology Commercial Price List". Oracle also encourages you to get in touch with a sales representative to discuss discounts that would make the investment in our technology possible, but you are not required to do so unless you are interested in buying our technology or want to discuss discount scenarios. JBoss EAP 6, on the other hand, does not have its cost publicly available on Red Hat's website or anywhere else; at the very least, such information is not easy to find. The only link you are likely to find on their website is a "Contact a Sales Representative" link. This is not a very good relationship between a customer and a vendor, and it is not an example of transparency, especially when the software is sold as open.
    In these situations, customers expect to see software prices publicly available, so they have the chance to decide, based on the software's existing features, whether the cost is fair.

    Conclusion

    Oracle WebLogic is the most mature, secure, reliable and scalable Java EE application server on the market, with a proven record of success around the globe to prove its maturity. Don't miss the chance to discover today how WebLogic could fit your needs and sustain your global IT middleware strategy, whether or not that strategy is completely based on the Cloud.

    Read the article

  • How to add an Explorer context menu for a file extension?

    - by asksuperuser
    I took inspiration from my other question here, http://superuser.com/questions/129755/how-to-get-open-command-window-here-while-clicking-on-exe-not-on-parent-folder, and tried this:

    Windows Registry Editor Version 5.00

    [HKEY_CURRENT_USER\Software\Classes\.sln]

    [HKEY_CURRENT_USER\Software\Classes\.sln\shell]

    [HKEY_CURRENT_USER\Software\Classes\.sln\shell\countlines]
    @="countlines"

    [HKEY_CURRENT_USER\Software\Classes\.sln\shell\countlines\command]
    @="c:\\countlines\\countlines.exe" %1

    but it doesn't work.
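    Editor's note, offered as an untested sketch: two likely problems with the attempt above are that in a .reg file the %1 must sit inside the single quoted default value, and that Explorer generally ignores verbs placed directly under an extension key when that extension already maps to a ProgID (as .sln does on machines with Visual Studio). Registering under SystemFileAssociations sidesteps the ProgID; the path and verb name below simply mirror the question:

    Windows Registry Editor Version 5.00

    ; The whole command line, including "%1", belongs inside one quoted value.
    [HKEY_CURRENT_USER\Software\Classes\SystemFileAssociations\.sln\shell\countlines]
    @="countlines"

    [HKEY_CURRENT_USER\Software\Classes\SystemFileAssociations\.sln\shell\countlines\command]
    @="\"c:\\countlines\\countlines.exe\" \"%1\""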

    Read the article

  • Smart Grid Gurus

    - by caroline.yu
    Join Paul Fetherland, AMI director at Hawaiian Electric Company (HECO), and Keith Sturkie, vice president of Information Technology at Mid-Carolina Electric Cooperative (MCEC), on Thursday, April 29 at 12 p.m. EDT for the free "Smart Grid Gurus" Webcast. In this Webcast, underwritten by Oracle Utilities, Intelligent Utility will profile Paul Fetherland and Keith Sturkie to examine how they ended up in their respective positions and how they are making smarter grids a reality at their companies. By attending, you will:

    - Gain insight from the paths taken and lessons learned by HECO and MCEC as these two utilities add more grid intelligence to their operations
    - Identify the keys to driving AMI deployment, increasing operational and productivity gains, and targeting new goals on the technology roadmap
    - Learn why HECO is taking a careful, measured approach to AMI deployment, and how Hawaii's established renewable portfolio standard of 40% and energy efficiency standard of 30%, both by 2030, impact its efforts
    - Discover how MCEC's 45,000-meter AMI deployment, completed in 2005, reduced field trips for high-usage complaints by 90% in the first year, and MCEC's immediate goals for future technology implementation

    To register, please follow this link.

    Read the article
