Search Results

Search found 11400 results on 456 pages for 'automated testing'.


  • What skills should a developer/tester learn in order to move into a permanent Systems Analysis role?

    - by shenaz
    I have been with a software services firm for 5 years and have fallen into a "jack of all trades" role, which I am looking to move out of. I've spent about 1 year each in programming (VB/VB.NET), application support, systems analysis, and most recently, software testing, which in my current position is all manual. I've really lost interest in the programming and testing roles; I would prefer a position where I get to work more with people, such as systems analysis. I even got a chance to be a trainer at the same company for a few months, a temporary position which I enjoyed very much. Given that most of my real experience is with software, support, and testing, what knowledge areas and skills should I focus on learning and mastering in order to make myself an attractive candidate for a permanent position as a business/systems analyst?

    Read the article

  • Test iPhone app on iPad mini?

    - by Devfly
    I have developed an iPhone app; right now I only need a device for testing. I have $300 and two choices: a second-hand iPhone 4 or a brand-new iPad mini. The better choice is obviously the iPad, but is it sufficient for testing iPhone apps on? On the iPad, iPhone apps run just fine in 2X mode, but are there any differences in app performance between iPhone and iPad (aside from the chipset)? Should I test my app on an actual iPhone, or will the iPad suffice? My app is an RSS reader, not a game, so I think everything will be fine with testing on the iPad mini. If I buy the iPad I will find a friend's iPhone 4/3GS running iOS 5.1 (because my app's deployment target is 5.1, and the iPad comes with 6.0), but of course I can't test extensively on that iPhone. Thank you!

    Read the article

  • Oracle University Partner Enablement Update (19th March)

    - by swalker
    Java SE 7 Certification News

    The following exam has recently gone into production:

    - Exam: Upgrade to Java SE 7 Programmer (1Z0-805)
    - Certification track: Oracle Certified Professional, Java SE 7 Programmer

    Full preparation details are available on the exam page, including prerequisites for this certification, exam topics and pricing. Exams can be taken at an Oracle Test Center near you or at any Pearson VUE Testing Center.

    The following exam has recently become available for beta testing:

    - Exam: Java SE 7 Programmer II (1Z1-804)
    - Certification track: Oracle Certified Professional, Java SE 7 Programmer

    Full preparation details are available on the exam page, including prerequisites for this certification, exam topics and pricing. A beta exam offers you two distinct advantages:

    - you will be one of the first to get certified
    - you pay a lower price

    Beta exams can be taken at any Pearson VUE Testing Center. Stay Connected to Oracle University: LinkedIn | OracleMix | Twitter | Facebook

    Read the article

  • What's the best way to move to Linux from Windows for web development?

    - by rajesh pillai
    I am primarily a programmer developing on Windows-based OSes using C# as my primary language. I am evaluating Ubuntu Linux as an alternative platform and would like to know the best stack for doing web development on it. I had gone through the following thread, Moving development from Windows to Linux, but it doesn't answer my questions fully. Some of the points I am interested in are outlined below:

    - PHP/Ruby/Python (what would you recommend?)
    - Is Mono mature enough for any large-scale development? Does anyone have real experience using Mono?
    - IDE (including debugging support, IntelliSense, source control integration, unit testing)
    - Unit testing framework based on the language recommended
    - Web framework, if any
    - Load testing tools
    - Web server (I know there are many web servers, but I would like to know which one is primarily used by most people)

    Your input is greatly appreciated. Thanks.

    Read the article

  • Automation at GUI or API Level in Scrum

    - by Sani Parwani
    I am an automation engineer. I use QTP for automation. I wanted to know a couple of things. In a Scrum project with two-week sprints, how can complete automation be done in that time frame (talking only about the GUI level)? Similarly, how can API-level automated testing be accomplished, especially inside a single sprint? And what exactly is API-level testing? How does one begin with API testing? I assume QTP is certainly not the tool here.
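    For context, API-level testing drives the application through its programmatic interfaces (for example, HTTP endpoints) instead of the GUI, which is usually far quicker to automate inside a sprint than GUI scripts. A minimal sketch in Python using the requests library; the service URL, endpoint, and fields are hypothetical:

        import requests

        BASE_URL = "http://localhost:8080/api"  # hypothetical service under test

        def test_create_and_fetch_order():
            # Drive the API directly -- no GUI involved.
            created = requests.post(BASE_URL + "/orders",
                                    json={"item": "widget", "qty": 2})
            assert created.status_code == 201
            order_id = created.json()["id"]

            fetched = requests.get("%s/orders/%s" % (BASE_URL, order_id))
            assert fetched.status_code == 200
            assert fetched.json()["qty"] == 2

    Because such tests run headlessly and in seconds, they are usually the layer teams automate first within a sprint, with a thinner set of GUI checks on top.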

    Read the article

  • Issue with dynamic Quicklist in Unity

    - by costales
    I would like to add a Quicklist to the Gufw app, but it isn't working. The code is here (you can install it by reading the INSTALL file): http://bazaar.launchpad.net/~gufw-developers/gui-ufw/testing/files/3 I added lines 52-54 to the view (a simple example) from the official API page: http://bazaar.launchpad.net/~gufw-developers/gui-ufw/testing/view/head:/gufw/view/gufw.py https://wiki.ubuntu.com/Unity/LauncherAPI

        self.launcher = Unity.LauncherEntry.get_for_desktop_id("gufw.desktop")
        self.launcher.set_property("progress", 0.42)
        self.launcher.set_property("progress_visible", True)

    But nothing happens. However, if I run this file while Gufw is running: http://bazaar.launchpad.net/~gufw-developers/gui-ufw/testing/view/head:/gufw/test_launcher.py

        $ python test_launcher.py

    the progress bar appears! :/ I don't know what I am missing. :P Any ideas? Thanks in advance! The environment is Ubuntu 13.04.

    Read the article

  • Utility Objects–Waitfor Delay Coordinator (SQL Server 2008+)

    - by drsql
    Finally… this took longer than I had expected when I wrote it a while back, but I had to move my website and get DNS moved before I could post code… When I write code, I do my best to test that code in as many ways as necessary. One of the last types of tests that is necessary is concurrency testing. Concurrency testing is one of the most difficult types of testing because it requires running multiple processes simultaneously and making sure that you get the correct answers multiple times. This is really...(read more)

    Read the article

  • September issue of the Enterprise Manager Indepth Newsletter

    - by Javier Puerta
    The September issue of the Enterprise Manager Indepth Newsletter is now available here. Featured articles include:

    Oracle OpenWorld Preview: Don't-Miss Sessions, Hands-on Labs, and More
    Because of the rapid and widespread adoption of Oracle Enterprise Manager 12c since its launch at Oracle OpenWorld 2011, conference organizers are expecting Oracle Enterprise Manager sessions to attract record crowds at Oracle OpenWorld 2012. Read More

    Oracle Cloud Builder Summit—Zero to Enterprise Cloud in Two Hours
    In August, Oracle launched the worldwide Oracle Cloud Builder Summit series, an event where attendees learn firsthand how to plan, deploy, and manage an enterprise private cloud using Oracle Enterprise Manager 12c—all in a few hours. Read More

    WEBCASTS

    Reduce Database Testing Efforts While Maximizing ROI
    Watch this on-demand Webcast demonstrating how to manage database and system changes with confidence using Oracle Real Application Testing. Viewers will be among the first to hear results from Forrester Consulting's commissioned, multicustomer study, “Total Economic Impact of Oracle Real Application Testing.”

    Read the article

  • Try/Catch or test parameters

    - by Ondra Morský
    I was recently in a job interview and was given a task to write a simple method in C# to calculate when two trains meet. The code was a simple mathematical equation. What I did was check all the parameters at the beginning of the method to make sure the code would not fail. My question is: is it better to check the parameters, or to use try/catch? Here are my thoughts:

    - Try/catch is shorter.
    - Try/catch will always work, even if you forget about some condition.
    - Catch is slow in .NET.
    - Testing parameters is probably cleaner code (exceptions should be exceptional).
    - Testing parameters gives you more control over return values.

    I would prefer testing parameters in methods longer than +/- 10 lines, but what do you think about using try/catch in simple methods just like this – i.e. return (a*b)/(c+d); There are many similar questions on Stack Exchange, but I am interested in this particular scenario.
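    To make the trade-off concrete, here is a minimal sketch (in Python rather than C#, purely to illustrate the contrast) of validating parameters up front versus catching the failure after the fact:

        def meet_time_checked(a, b, c, d):
            # Validate up front: explicit, and you control the return value.
            if c + d == 0:
                return None  # or raise a precise error with a clear message
            return (a * b) / (c + d)

        def meet_time_caught(a, b, c, d):
            # Catch after the fact: shorter, but the handler fires on every
            # bad input, which is slower and hides which argument was wrong.
            try:
                return (a * b) / (c + d)
            except ZeroDivisionError:
                return None

    For a one-line formula the up-front check costs almost nothing and documents the precondition, which is one reason many prefer it even in short methods.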

    Read the article

  • SQL Server Developer Tools – Codename Juneau vs. Red-Gate SQL Source Control

    - by Ajarn Mark Caldwell
    So how do the new SQL Server Developer Tools (previously code-named Juneau) stack up against SQL Source Control?  Read on to find out. At the PASS Community Summit a couple of weeks ago, it was announced that the previously code-named Juneau software would be released under the name of SQL Server Developer Tools with the release of SQL Server 2012.  This replacement for Database Projects in Visual Studio (also known in a former life as Data Dude) has some great new features.  I won’t attempt to describe them all here, but I will applaud Microsoft for making major improvements.  One of my favorite changes is the way database elements are broken down.  Previously every little thing was in its own file.  For example, indexes were each in their own file.  I always hated that.  Now, SSDT uses a pattern similar to Red-Gate’s and puts the indexes and keys into the same file as the overall table definition. Of course there are really cool features to keep your database model in sync with the actual source scripts, and the rename refactoring feature is now touted as being more than just a search and replace, but rather a “semantic-aware” search and replace.  Funny, it reminds me of SQL Prompt’s Smart Rename feature.  But I’m not writing this just to criticize Microsoft and argue that they are late to the party with this feature set.  Instead, I do see it as a viable alternative for folks who want all of their source code to be version controlled, but there are a couple of key trade-offs that you need to know about when you choose which tool set to use.

    First, the basics

    Both tool sets integrate with a wide variety of source control systems including the most popular: Subversion, GIT, Vault, and Team Foundation Server.  Both tools have integrated functionality to produce objects to upgrade your target database when you are ready (DACPACs in SSDT, integration with SQL Compare for SQL Source Control).  If you regularly live in Visual Studio or the Business Intelligence Development Studio (BIDS) then SSDT will likely be comfortable for you.  Like BIDS, SSDT is a Visual Studio Project Type that comes with SQL Server, and if you don’t already have Visual Studio installed, it will install the shell for you.  If you already have Visual Studio 2010 installed, then it will just add this as an available project type.  On the other hand, if you regularly live in SQL Server Management Studio (SSMS) then you will really enjoy the SQL Source Control integration from within SSMS.  Both tool sets store their database model in script files.  In SSDT, these are on your file system like other source files; in SQL Source Control, these are stored in the folder structure in your source control system, and you can always GET them to your file system if you want to browse them directly. For me, the key differentiating factors are 1) a single, unified check-in, and 2) migration scripts.  How you value those two features will likely make your decision for you.

    Unified Check-In

    If you do a continuous-integration (CI) style of development that triggers an automated build with unit testing on every check-in of source code, and you use Visual Studio for the rest of your development, then you will want to really consider SSDT.  Because it is just another project in Visual Studio, it can be added to your existing Solution, and you can then do a complete, or unified single check-in of all changes whether they are application or database changes.  This is simply not possible with SQL Source Control because it is in a different development tool (SSMS instead of Visual Studio) and there is no way to do one unified check-in between the two.  You CAN do really fast back-to-back check-ins, but there is the possibility that the automated build that is triggered from the first check-in will cause your unit tests to fail and the CI tool to report that you broke the build.  Of course, the automated build that is triggered from the second check-in which contains the “other half” of your changes should pass and so the amount of time that the build was broken may be very, very short, but if that is very, very important to you, then SQL Source Control just won’t work; you’ll have to use SSDT.

    Refactoring and Migrations

    If you work on a mature system, or on a not-so-mature but also not-so-well-designed system, where you want to refactor the database schema as you go along, but you can’t have data suddenly disappearing from your target system, then you’ll probably want to go with SQL Source Control.  As I wrote previously, there are a number of changes which you can make to your database that the comparison tools (both from Microsoft and Red Gate) simply cannot handle without the possibility (or probability) of data loss.  Currently, SSDT only offers you the ability to inject PRE and POST custom deployment scripts.  There is no way to insert your own script in the middle to override the default behavior of the tool.  In version 3.0 of SQL Source Control (Early Access version now available) you have that ability to create your own custom migration script to take the place of the commands that the tool would have done, and ensure the preservation of your data.  Or, even if the default tool behavior would have worked, but you simply know a better way then you can take control and do things your way instead of theirs.

    You Decide

    In the environment I work in, our automated builds are not triggered off of check-ins, but off of the clock (currently once per night) and so there is no point at which the automated build and unit tests will be triggered without having both sides of the development effort already checked-in.  Therefore having a unified check-in, while handy, is not critical for us.  As for migration scripts, these are critically important to us.  We do a lot of new development on systems that have already been in production for years, and it is not uncommon for us to need to do a refactoring of the database.  Because of the maturity of the existing system, that often involves data migrations or other additional SQL tasks that the comparison tools just can’t detect on their own.  Therefore, the ability to create a custom migration script to override the tool’s default behavior is very important to us.  And so, you can see why we will continue to use Red Gate SQL Source Control for the foreseeable future.

    Read the article

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure.  Today’s new capabilities include:

    - Storage: Import/Export Hard Disk Drives to your Storage Accounts
    - HDInsight: General Availability of our Hadoop Service in the cloud
    - Virtual Machines: New VM Gallery, ACL support for VIPs
    - Web Sites: WebSocket and Remote Debugging Support
    - Notification Hubs: Segmented customer push notification support with tag expressions
    - TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services
    - Developer Analytics: New Relic support for Web Sites + Mobile Services
    - Service Bus: Support for partitioned queues and topics
    - Billing: New Billing Alert Service that sends email notifications when your bill hits a threshold you define

    All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them.

    Storage: Import/Export Hard Disk Drives to Windows Azure

    I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we’ll automatically transfer the data to or from your Windows Azure Storage account.  This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth).

    Encrypted Transport

    Our Import/Export service provides built-in support for BitLocker disk encryption – which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it).  The drive preparation tool we are shipping today makes setting up BitLocker encryption on these hard drives easy.

    How to Import/Export your first Hard Drive of Data

    You can read our Getting Started Guide to learn more about how to begin using the import/export service.  You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs. It is really easy to create a new import or export job using the Windows Azure Management Portal.  Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don’t have this tab make sure to sign-up for the Import/Export preview). Then click the “Create Import Job” or “Create Export Job” commands at the bottom of it.  This will launch a wizard that easily walks you through the steps required. For more comprehensive information about Import/Export, refer to the Windows Azure Storage team blog.  You can also send questions and comments to the [email protected] email address. We think you’ll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects.  We hope you like it.

    HDInsight: 100% Compatible Hadoop Service in the Cloud

    Last week we announced the general availability release of Windows Azure HDInsight.
    HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure.  This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios. HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running this provides a super cost effective way to use them).  You can create Hadoop clusters using either the Windows Azure Management Portal or our PowerShell and Cross Platform Command line tools. The import/export hard drive support that came out today is a perfect companion service to use with HDInsight – the combination allows you to easily ingest, process and optionally export a limitless amount of data.  We’ve also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs.  You can find out more about how to get started with HDInsight here.

    Virtual Machines: VM Gallery Enhancements

    Today’s update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud.  You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal. The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use:

    - Search: You can now easily search and filter images using the search box in the top-right of the dialog.  For example, simply type “SQL” and we’ll filter to show those images in the gallery that contain that substring.
    - Category Tree-view: Each month we add more built-in VM images to the gallery.  You can continue to browse these using the “All” view within the VM Gallery – or now quickly filter them using the category tree-view on the left-hand side of the dialog.  For example, by selecting “Oracle” in the tree-view you can now quickly filter to see the official Oracle supplied images.
    - MSDN and Supported checkboxes: With today’s update we are also introducing filters that make it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing - you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners.
    - Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we’re providing some additional sort options, like “Newest,” to customize the image list for what suits you best.
    - Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery.

    The above improvements make it even easier to use the VM Gallery and quickly create, launch, and run Virtual Machines in the cloud.
    Virtual Machines: ACL Support for VIPs

    A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today’s release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can now do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance. This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM’s network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren’t on a virtual network you could alternatively limit traffic from public IPs that can access your workloads. Here are the default behaviors for ACLs in Windows Azure:

    - By default (i.e. no rules specified), all traffic is permitted.
    - When using only Permit rules, all other traffic is denied.
    - When using only Deny rules, all other traffic is permitted.
    - When there is a combination of Permit and Deny rules, all other traffic is denied.

    Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level.  So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately as well.

    Web Sites: Web Sockets Support

    With today’s release you can now use Web Sockets with Windows Azure Web Sites.  This feature enables you to easily integrate real-time communication scenarios within your web based applications, and is available at no extra charge (it even works with the free tier).  Higher level programming libraries like SignalR and socket.io are also now supported with it. You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to “on”. Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications.  Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it.

    Web Sites: Remote Debugging Support

    The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today’s Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud.

    Remote Debugging of a Windows Azure Web Site using VS 2013

    Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy.  Start by opening up your web application’s project within Visual Studio. Then navigate to the “Server Explorer” tab within Visual Studio, and click on the deployed web-site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer.  Then right-click and choose the “Attach Debugger” option on it. When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.
    The debugger will then stop the web site’s execution when it hits any break points that you have set within your web application’s project inside Visual Studio.  For example, below I set a breakpoint on the “ViewBag.Message” assignment statement within the HomeController of the standard ASP.NET MVC project template.  When I hit refresh on the “About” page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio. Note how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session above I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property.  When we click the “Continue” button (or press F5) the app will continue execution and the Web Site will render the content back to the browser.  This makes it super easy to debug web apps remotely.

    Tips for Better Debugging

    To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio’s Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site which will enable a richer debug experience within Visual Studio.  You can find this option on the Web Publish dialog on the Settings tab. When you ultimately deploy/run the application in production we recommend using the “Release” configuration setting – the release configuration is memory optimized and will provide the best production performance.  To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide.

    Notification Hubs: Segmented Push Notification support with tag expressions

    In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high volume push notifications with low latency from any mobile app back-end.  Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises. Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users as well as groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans with a single API call respectively.

    New support for using tag expressions to enable advanced customer segmentation

    With today’s release we are adding support for even more advanced customer targeting.  You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step farther and reach more granular segments. This opens up a variety of scenarios, for example:

    - Offers based on multiple preferences—e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian
    - Push content to multiple segments in a single message—e.g. rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinals fan
    - Avoid presenting subsets of a segment with irrelevant content—e.g. a season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder

    To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. “Loc:Boston”) and interest tags (e.g. “Follows:RedSox”, “Follows:Cardinals”), and then a notification can be sent by your back-end to “(Follows:RedSox || Follows:Cardinals) && Loc:Boston” in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server backend send logic using the code below:

        var notification = new WindowsNotification(messagePayload);
        hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston");

    In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!).  Some other cool use cases for tag expressions that are now supported include:

    - Social: To “all my group except me” - group:id && !user:id
    - Events: Touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 …
    - Hours: Send notifications at specific times. E.g. tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood
    - Versions and platforms: Send a reminder to people still using your first version for Android - version:1.0 && platform:Android

    For help on getting started with Notification Hubs, visit the Notification Hub documentation center.  Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions.  They are really powerful and enable a bunch of great new scenarios.

    TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services

    With today’s Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services.  Team Foundation Services is a cloud based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support.  It makes it really easy to setup a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio. With today’s Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services.  This enables a workflow where when code is checked in, built successfully on an automated build server, and all tests pass on it – I can automatically have the app deployed on Windows Azure with zero manual intervention or work required. Below is a walkthrough of how to quickly setup a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services.

    Enabling Continuous Delivery to Windows Azure with Team Foundation Services

    The project I’m going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I’m hosting using Team Foundation Services.
    I did this by creating a “SimpleContinuousDeploymentTest” repository there using Git – and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it. I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks).  Using VS 2013 I can also setup automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository. The cool thing about this is that I don’t have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings.  This build server (and automated testing) support now works with both TFS and Git based source control repositories.

    Connecting a Team Foundation Services project to Windows Azure

    Once I have a source repository hosted in Team Foundation Services with Automated Builds and Testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the Build + Tests pass).  Enabling this is now really easy.  To set this up with a Windows Azure Web Site simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal.  This will create a dialog; I gave the web site a name and then made sure the “Publish from source control” checkbox was selected. When we click next we’ll be prompted for the location of the source repository.  We’ll select “Team Foundation Services”. Once we do this we’ll be prompted for our Team Foundation Services account that our source repository is hosted under (in this case my TFS account is “scottguthrie”). When we click the “Authorize Now” button we’ll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account.  Once we do this we’ll be prompted to pick the source repository we want to connect to.  Starting with today’s Windows Azure release you can now connect to both TFS and Git based source repositories.  This new support allows me to connect to the “SimpleContinuousDeploymentTest” repository we created earlier. Clicking the finish button will then create the Web Site with the continuous delivery hooks setup with Team Foundation Services.  Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution, and if they pass the app will be automatically deployed to our Web Site in Windows Azure.  You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site. This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way.

    Developer Analytics: New Relic support for Web Sites + Mobile Services

    With today’s Windows Azure release we are making it really easy to enable Developer Analytics and Monitoring support with both Windows Azure Web Sites and Windows Azure Mobile Services.  We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this - and we have updated the Windows Azure Management Portal to make it really easy to configure.
    Enabling New Relic with a Windows Azure Web Site

    Enabling New Relic support with a Windows Azure Web Site is now really easy.  Simply navigate to the Configure tab of a Web Site and scroll down to the “developer analytics” section that is now within it. Clicking the “add-on” button will display some additional UI.  If you don’t already have a New Relic subscription, you can click the “view windows azure store” button to obtain a subscription (note: New Relic has a perpetually free tier so you can enable it even without paying anything). Clicking the “view windows azure store” button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal.  You can use this to browse from a variety of great add-on services – including New Relic. Select “New Relic” within the dialog, then click the next button, and you’ll be able to choose which type of New Relic subscription you wish to purchase.  For this demo we’ll simply select the “Free Standard Version” – which does not cost anything and can be used forever.  Once we’ve signed-up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site’s configuration tab and choose to use the New Relic add-on with our Windows Azure Web Site.  We can do this by simply selecting it from the “add-on” dropdown (it is automatically populated within it once we have a New Relic subscription in our account). Clicking the “Save” button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site.

    Deploying the New Relic Agent as part of a Web Site

    The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app.  We can do this within Visual Studio by right-clicking on our web project and selecting the “Manage NuGet Packages” context menu. This will bring up the NuGet package manager.  You can search for “New Relic” within it to find the New Relic agent.  Note that there is both a 32-bit and 64-bit edition of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site’s “Configuration” tab within the Windows Azure Management Portal). Once we install the NuGet package we are all set to go.  We’ll simply re-publish the web site again to Windows Azure and New Relic will now automatically start monitoring the application.

    Monitoring a Web Site using New Relic

    Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring the health of it.  We can do this by clicking on the “Add Ons” tab in the left-hand side of the Windows Azure Management Portal.  Then select the New Relic add-on we signed-up for within it.  The Windows Azure Management Portal will provide some default information about the add-on when we do this.  Clicking the “Manage” button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account. When we do this a new browser tab will launch with the New Relic admin tool loaded within it. We can now see insights into how our app is performing – without having to have written a single line of monitoring code.
    The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see:

    - Performance times (including browser rendering speed) for the overall site and individual pages.  You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify.
    - Information about where in the world your customers are hitting the site from (and how performance varies by region)
    - Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc)
    - Error information including call stack details for exceptions that have occurred at runtime
    - SQL Server profiling information – including which queries executed against your database and what their performance was
    - And a whole bunch more…

    The cool thing about New Relic is that you don’t need to write monitoring code within your application to get all of the above reports (plus a lot more).  The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to identify these.  This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort. If you haven’t tried New Relic out yet with Windows Azure I recommend you do so – I think you’ll find it helps you build even better cloud applications.  Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes.

    Service Bus: Support for partitioned queues and topics

    With today’s release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning enables you to achieve a higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic.  The multiple messaging stores will also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic. Read this article to learn more about partitioned queues and topics and how to take advantage of them today.

    Billing: New Billing Alert Service

    Today’s Windows Azure update enables a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure.  This makes it easier to manage your bill and avoid potential surprises at the end of the month. With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total.  To set up an alert first sign-up for the free Billing Alert Service Preview.  Then visit the account management page, click on a subscription you have setup, and then navigate to the new Alerts tab that is available. The alerts tab allows you to setup email alerts that will be sent automatically once a certain threshold is hit.  For example, by clicking the “add alert” button I can setup a rule to send myself email anytime my Windows Azure bill goes above $100 for the month. The Billing Alert Service will evolve to support additional aspects of your bill as well as support multiple forms of alerts such as SMS.  Try out the new Billing Alert Service Preview today and give us feedback.
    Summary

    Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Django "comment_was_flagged" signal

    - by tsoporan
    Hello, this is my first time working with Django signals. I would like to hook the "comment_was_flagged" signal provided by the comments app to notify me when a comment is flagged. This is my code, but it doesn't seem to work; am I missing something?

        from django.contrib.comments.signals import comment_was_flagged
        from django.core.mail import send_mail

        def comment_flagged_notification(sender, **kwargs):
            send_mail('testing moderation', 'testing', 'test@localhost',
                      ['[email protected]',])

        comment_was_flagged.connect(comment_flagged_notification)

    (I am just testing the email for now, but I have verified that the email sends properly.) Thanks!
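    One pitfall worth flagging (an assumption about the cause, not a confirmed diagnosis): the module containing the connect() call must actually be imported when Django starts, or the receiver is never registered. A minimal sketch that wires the handler up in a module Django always loads, with a dispatch_uid to guard against duplicate registration; the recipient address is hypothetical:

        # myapp/models.py -- models are imported at startup, so the
        # connect() call below is guaranteed to run.
        from django.contrib.comments.signals import comment_was_flagged
        from django.core.mail import send_mail

        def comment_flagged_notification(sender, comment=None, flag=None, **kwargs):
            send_mail('Comment flagged',
                      'A comment was flagged for moderation.',
                      'test@localhost',
                      ['moderator@localhost'])  # hypothetical recipient

        comment_was_flagged.connect(comment_flagged_notification,
                                    dispatch_uid='comment_flagged_email')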

    Read the article

  • Test Case Design and Responsibility

    - by Sakamoto Kazuma
    So it seems like a lot of people are playing the blame game around where I work, and it brings up an interesting question. Knowns:

    - The requirements team writes requirements for the product.
    - Developers create their own unit tests from the requirements.
    - The testing team creates its general tests from the requirements and past customer issues.
    - The product is released if and only if X% of the testing team's test cases pass.
    - The customer response team gets bugs from the field and lets the testing team know about these issues.

    Question: if the customer ends up filing a lot of defects, who is to blame? Is it the testing team for not covering those? Or is it the requirements team for not writing better requirements? And how does one improve the system?

    Read the article

  • CodeIgniter and OOP: general question regarding calling functions and the parent constructor

    - by thrice801
    OK, so I have some gaps in my understanding of PHP OOP: classes and functions specifically, and this whole constructor deal. I use both Zend and CI, but right now I'm trying to figure this out in CI as it is less complicated. All I'm trying to do is understand how to call a function from a view page in CodeIgniter. I understand that might go against MVC, but I'm working with an API and search results not from my database; basically, I want to define a function in my class that I am able to call in one of my view pages, and I keep getting "Fatal error: Call to undefined function functionname" no matter what I try. I thought I just had to declare

        public function testing() {
            echo "testing testing 123";
        }

    but calling that from the view I get that error. Then I read something about having to call parent::Controller(); in the index of the class where the testing function also resides? But that didn't work either. Anyway, can someone explain what I need to do in order to call the testing() function on one of my view pages? Clarification on the constructor and what exactly parent::Controller() even does would be much appreciated as well.
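    For reference, the usual CodeIgniter answer for view-callable logic is a helper rather than a controller method, since helpers are plain functions available anywhere once loaded. A minimal sketch (the helper file and function names here are hypothetical):

        <?php
        // application/helpers/site_helper.php -- hypothetical helper file.
        // CodeIgniter helpers are plain functions, so a view can call them
        // directly once the helper has been loaded.
        if ( ! function_exists('testing'))
        {
            function testing()
            {
                return "testing testing 123";
            }
        }

    Load it in the controller with $this->load->helper('site'); (or autoload it in application/config/autoload.php), and the view can then simply call echo testing();. Controller methods themselves are not callable from views, which is why the undefined-function error appears.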

    Read the article

  • Run a universal app as a 'legacy' iPhone app on an iPod

    - by Paul Alexander
    I do most development testing on my iPad. When I test an iPhone app, it runs in 'compatibility' mode where the little iPhone app runs in a small window or x2 magnification. Now that I've created a universal app it runs as a native iPad app. For testing I'd like to use the simulated iPhone when I don't have an iPhone handy for testing. How can I build the project so that the iPad will run the app in compatibility mode?

    Read the article

  • Outlook VBA - Find & Replace Incoming Emails

    - by user1912198
    Good morning everyone at Stack Overflow, I am trying to find a VBA script that finds and replaces certain text in incoming e-mails. So far I've been unable to find such a script that works. I found several scripts to find and replace things in an e-mail, but these don't work as a rule on incoming e-mails; they only work on outgoing e-mails as they are created. Does anyone have such a script that they can share with me? I would like to find & replace a certain text on every incoming e-mail. I would really appreciate it! Regards, Kris PS: I don't know how to program a whole VBA script, that's why I am asking here :) Current code:

        Sub testing(MyMail As MailItem)
            Dim mail As MailItem
            Dim Inbox As Outlook.Folder
            Set Inbox = Session.GetDefaultFolder(olFolderInbox)
            For Each mail In Inbox.Items
                'change subject
                mail.Subject = "TESTING"
                'replace body text
                If mail.BodyFormat = olFormatHTML Then
                    mail.HTMLBody = Replace(mail.HTMLBody, "Test 123", "TESTING")
                Else
                    mail.Body = Replace(mail.Body, "Test 123", "TESTING")
                End If
            Next mail
        End Sub
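    One common way to get rule-like behavior on incoming mail (a sketch of the standard Outlook VBA pattern, not tested against this exact setup) is an ItemAdd handler on the Inbox in ThisOutlookSession. Note also that changes to a MailItem are not persisted unless .Save is called:

        ' In ThisOutlookSession -- fires once for each item that
        ' arrives in the Inbox.
        Private WithEvents inboxItems As Outlook.Items

        Private Sub Application_Startup()
            Set inboxItems = Session.GetDefaultFolder(olFolderInbox).Items
        End Sub

        Private Sub inboxItems_ItemAdd(ByVal Item As Object)
            Dim mail As MailItem
            If TypeOf Item Is MailItem Then
                Set mail = Item
                If mail.BodyFormat = olFormatHTML Then
                    mail.HTMLBody = Replace(mail.HTMLBody, "Test 123", "TESTING")
                Else
                    mail.Body = Replace(mail.Body, "Test 123", "TESTING")
                End If
                mail.Save ' changes are lost without Save
            End If
        End Sub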

    Read the article

  • Java mail attachment not working on Tomcat

    - by losintikfos
    Hello guys, I have an application which e-mails confirmations. The email part utilises the Commons Mail API. The simple code which does the send is shown below:

        import org.apache.commons.mail.*;
        ...
        // Create the attachment
        EmailAttachment attachment = new EmailAttachment();
        attachment.setURL(new URL("http://cashew.org/doc.pdf"));
        attachment.setDisposition(EmailAttachment.ATTACHMENT);
        attachment.setDescription("Testing attach");
        attachment.setName("doc.pdf");

        // Create the email message
        MultiPartEmail email = new MultiPartEmail();
        email.setHostName("mail.cashew.com");
        email.addTo("[email protected]");
        email.setFrom("[email protected]");
        email.setSubject("Testing");
        email.setMsg("testing message");

        // add the attachment
        email.attach(attachment);

        // send the email
        email.send();

    My problem is: when I execute this application from Eclipse, the email is sent with the attachment without any issues. But when I deploy the application to Tomcat (I have tried both versions 5 and 6, no joy), the e-mail is sent with the content below:

        ------=_Part_0_25002283.1275298567928
        Content-Type: text/plain; charset=us-ascii
        Content-Transfer-Encoding: 7bit

        testing

        Regards,
        los

        ------=_Part_0_25002283.1275298567928
        Content-Type: application/pdf; name="doc.pdf"
        Content-Transfer-Encoding: base64
        Content-Disposition: attachment; filename="doc.pdf"
        Content-Description: Testing attach

        JVBERi0xLjQNJeLjz9MNCjYzIDAgb2JqDTw8L0xpbmVhcml6ZWQgMS9MIDMxMzE4Mi9PIDY1L0Ug
        Mjg2NjY5L04gMS9UIDMxMTgwMi9IIFsgMjgzNiAzNzZdPj4NZW5kb2JqDSAgICAgICAgICAgICAg
        DQp4cmVmDQo2MyAxMjcNCjAwMDAwMDAwMTYgMDAwMDAgbg0KMDAwMDAwMzM4MCAwMDAwMCBuDQow
        MDAwMDAzNTIzIDAwMDAwIG4NCjAwMDAwMDQzMDcgMDAwMDAgbg0KMDAwMDAwNTEwOSAwMDAwMCBu
        DQowMDAwMDA2Mjc5IDAwMDAwIG4NCjAwMDAwMDY0MTAgMDAwMDAgbg0KMDAwMDAwNjU0NiAwMDAw
        MCBuDQowMDAwMDA3OTY3IDAwMDAwIG4NCjAwMDAwMDkwMjMgMDAwMDAgbg0KMDAwMDAwOTk0OSAw
        MDAwMCBuDQowMDAwMDExMDAwIDAwMDAwIG4NCjAwMDAwMTIwNTkgMDAwMDAgbg0KMDAwMDAxMjky
        MCAwMDAwMCBuDQowMDAwMDEyOTU0IDAwMDAwIG4NCjAwMDAwMTI5ODIgMDAwMDAgbg0KMDAwMDAx
        .......
        CnN0YXJ0eHJlZg0KMTE2DQolJUVPRg0K
        ------=_Part_0_25002283.1275298567928--

    One thing I have also noticed is that the header information does not show the TO and Subject values. Pretty weird. I have to point out that the above is not DEBUG output; it is the actual message received in my Outlook client. Can someone help me, please! Does anyone know what's going on?

    Read the article

  • What is the "box model?"

    - by Chris
    During a recent interview for a front-end developer position I was asked what the box model was. I thought the interviewer was referring to testing (i.e. white box testing, black box testing). I was wrong. What is the box model, in reference to front-end development?
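    For reference, in CSS every element is laid out as a rectangular box built from four nested layers: content, padding, border, and margin. A small illustration (the class name is mine); under the default content-box sizing, the rendered width is content plus padding plus border:

        /* Hypothetical example: total rendered width is
           200px content + 2*10px padding + 2*2px border = 224px,
           with 2*5px of margin spacing outside the border. */
        .box {
          width: 200px;    /* content width */
          padding: 10px;   /* space inside the border */
          border: 2px solid #333;
          margin: 5px;     /* space outside the border */
        }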

    Read the article

  • Passing a string representing my format specifier into stringWithFormat and seeing issues

    - by Jeff
    If I call

        [NSString stringWithFormat:@"Testing \n %@", variableString];

    I get what I would expect: Testing, followed by a new line, then the contents of variableString. However, if I try

        NSString *testString = @"Testing \n %@"; // forgive the shorthand here
        [NSString stringWithFormat:testString, variableString];

    the output actually writes \n literally to the screen instead of a newline. Any workaround for this? Seems odd to me.

    Read the article

  • Many DIVs inside parent DIV, CSS height issue

    - by Benjamin
    Hi everyone, I am putting together a dynamic photo gallery and getting stuck trying to place thumbnails. Basically I am trying to place each thumbnail and caption in its own DIV, floated to the left. The thumbnails are working just as I want them to, but for some reason the parent DIV refuses to cover the height of the thumbnail area. Here is the CSS I am using:

        #galleryBox {
          width: 650px;
          background: #fff;
          margin: auto;
          padding: 5px;
          text-align: center;
        }
        .item {
          display: block;
          margin: 10px;
          padding: 10px;
          float: left;
          background: #353535;
          min-width: 120px;
        }
        .label {
          display: block;
          color: #fff;
        }

    I have tried height: auto and that hasn't done anything. Here is what I am trying to style (seven identical items in total):

        <div id="galleryBox" class="ui-corner-all">
          <div class="item ui-corner-all">
            <img src="http://tapp-essexvfd.org/images/ajax-loader.gif" alt="test"/><br/>
            <p><span class="label">Testing</span></p>
          </div>
          <div class="item ui-corner-all">
            <img src="http://tapp-essexvfd.org/images/ajax-loader.gif" alt="test"/><br/>
            <p><span class="label">Testing</span></p>
          </div>
          <div class="item ui-corner-all">
            <img src="http://tapp-essexvfd.org/images/ajax-loader.gif" alt="test"/><br/>
            <p><span class="label">Testing</span></p>
          </div>
          <div class="item ui-corner-all">
            <img src="http://tapp-essexvfd.org/images/ajax-loader.gif" alt="test"/><br/>
            <p><span class="label">Testing</span></p>
          </div>
          <div class="item ui-corner-all">
            <img src="http://tapp-essexvfd.org/images/ajax-loader.gif" alt="test"/><br/>
            <p><span class="label">Testing</span></p>
          </div>
          <div class="item ui-corner-all">
            <img src="http://tapp-essexvfd.org/images/ajax-loader.gif" alt="test"/><br/>
            <p><span class="label">Testing</span></p>
          </div>
          <div class="item ui-corner-all">
            <img src="http://tapp-essexvfd.org/images/ajax-loader.gif" alt="test"/><br/>
            <p><span class="label">Testing</span></p>
          </div>
        </div>

    Thanks!
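    For context, floated children are taken out of normal flow, so a parent does not stretch around them by default. One common fix (assuming nothing inside the gallery needs to overflow its box) is to make the parent establish a new block formatting context:

        /* A common float-containment fix: overflow creates a block
           formatting context, so #galleryBox stretches around its
           floated .item children. */
        #galleryBox {
          overflow: hidden;
        }

    A clearfix rule or an explicit clearing element after the last .item achieves the same containment if overflow clipping is not acceptable.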

    Read the article

  • Python logger dynamic filename

    - by sharjeel
    I want to configure my Python logger in such a way that each logger instance logs to a file having the same name as the logger itself, e.g.:

        log_hm = logging.getLogger('healthmonitor')
        log_hm.info("Testing Log")  # Should log to /some/path/healthmonitor.log

        log_sc = logging.getLogger('scripts')
        log_sc.debug("Testing Scripts")  # Should log to /some/path/scripts.log

        log_cr = logging.getLogger('cron')
        log_cr.info("Testing cron")  # Should log to /some/path/cron.log

    I want to keep it generic and don't want to hardcode all the logger names I can have. Is that possible?
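    The stock logging configuration does not do this automatically, but a small factory function can. A minimal sketch (the directory and format string are assumptions) that attaches a FileHandler named after the logger on first use:

        import logging
        import os

        LOG_DIR = '/some/path'  # assumption: base directory for all log files

        def get_file_logger(name, level=logging.DEBUG):
            """Return a logger that writes to <LOG_DIR>/<name>.log."""
            logger = logging.getLogger(name)
            if not logger.handlers:  # avoid adding duplicate handlers
                handler = logging.FileHandler(
                    os.path.join(LOG_DIR, '%s.log' % name))
                handler.setFormatter(logging.Formatter(
                    '%(asctime)s %(levelname)s %(message)s'))
                logger.addHandler(handler)
                logger.setLevel(level)
            return logger

        log_hm = get_file_logger('healthmonitor')
        log_hm.info("Testing Log")  # goes to /some/path/healthmonitor.log

    Any name passed in gets its own file, so no logger names need to be hardcoded.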

    Read the article

  • What do the dot and hash symbols mean in jQuery

    - by kwokwai
    Hi all, I feel confused by the dot and hash symbols in the following example:

        <DIV ID="row">
          <DIV ID="c1">
            <Input type="radio" name="testing" id="testing" VALUE="1">testing1
          </DIV>
        </DIV>

    Code 1:

        $('#row DIV').mouseover(function(){
            $('#row DIV').addClass('testing');
        });

    Code 2:

        $('.row div').mouseover(function(){
            $(this).addClass('testing');
        });

    Codes 1 and 2 look very similar, so it confuses me: when should I use ".row div" to refer to a specific DIV instead of "#row div"?

    Read the article

  • CM and Agile validation process of merging to the Trunk?

    - by LoneCM
    Hello all, we are a new Agile shop and we are encountering an issue that I hope others have seen. In our process, the Trunk is considered an integration branch; it does not have to be releasable, but it does have to be stable and functional for others to branch off of. We create feature branches off the Trunk for new development. All work and testing occurs in these branches. An individual branch pulls up as needed to stay integrated with the Trunk as other features are accepted and committed. But now we have numerous feature branches. Each is focused, has a short life cycle, and is pushed to the Trunk as it is completed, so we are not debating the need for the branches and are trying very much to be Agile. My issue comes in here: I require that the branches pull up from the Trunk at the end of their life cycle and complete validation, regression testing, and all configuration work before pushing to the Trunk. Once reintegrated into the Trunk, I ask for at least a build and an automated smoke test. However, I am now getting push-back on the Trunk validation. The argument is that the developers can merge the code and not need the QA validation steps because they already completed the work in the feature branch; therefore, the extra testing is not needed. I have attempted to remind management of the numerous times "brainless" merges have failed. Their solution is, instead of build and regression testing, to have the developer diff the feature branch and the newly merged Trunk. That process in their mind would replace the regression testing I asked for. So what do you require when you reintegrate back to the Trunk? What issues will we encounter if we remove this step and replace it with the diff? Is the cost of staying Agile the additional work of integrating the branches? Thanks for any input. LoneCM

    Read the article
