Search Results

Search found 7541 results on 302 pages for 'team spirit'.


  • VS 2010 ALM Whitepapers – link from Neno Loje

    - by johndoucette
    Overview of Visual Studio ALM Whitepapers by Microsoft

    Overview
    - Visual Studio 2010 Quick Reference Guidance

    Installation, Configuration & Administration
    - Team Foundation Installation Guide for Visual Studio Team System 2010
    - Administration Guide for Microsoft Visual Studio 2010 Team Foundation Server
    - Visual Studio 2010 TFS Upgrade Guide
    - Visual Studio 2010 and Team Foundation Server 2010 VM Factory
    - TFS Integration Platform
    - Visual Studio 2010 Licensing White Paper

    Requirements
    - Visual Studio 2010 Team Foundation Server Requirements
    - Requirements Management Guidance

    Version Control & Configuration Management
    - Visual Studio TFS Branching Guide 2010

    Some guide or whitepaper missing? Let me know!

    Read the article

  • Win an iPad Mini in December with the Oracle Partners Enablement team!

    - by Richard Lefebvre
    Oracle's December Calendar is available now! As an Oracle Partner, come back to the OPN Enablement Blog every day in December and open a door to answer a question about Oracle and its Partner Specializations. Every correct answer takes you one step closer to a chance of winning an iPad mini.** To get there, click on one of the following links:  https://blogs.oracle.com/opnenablement https://blogs.oracle.com/opnenablement/entry/oracle_december_calender **Participation is limited to Oracle Partner employees only.

    Read the article

  • What software do you use to help plan your team work, and why?

    - by Alex Feinman
    Planning is very difficult. We are not naturally good at estimating our own future, and many cognitive biases exacerbate the problem. Group planning is even harder. Incomplete information, inconsistent views of a situation, and communication problems compound the difficulty. Agile methods provide one framework for organizing group planning--making planning visible to everyone (user stories), breaking it into smaller chunks (sprints), and providing retrospective analysis so you get better at planning. But finding good tools to support these practices is proving tricky. What software tools do you use to achieve these goals? Why are you using that tool? What successes have you had with a particular tool?

    Read the article

  • Advice Required Regarding Creating a Self-Learning, Self-Organizing Programming Team

    - by tGilani
    Hello, I'm a senior student at my university and chairperson of the IEEE Student Branch there. Recently I've been thinking about ways to acquaint students with a professional environment, show them how software is produced in industry, and give them practical experience. Trips to software houses are clearly not enough, and we cannot provide that many internships, so the idea of simulating a software house within the university came up. The resources at my disposal are students with their own laptops, the university UPS and LAN with internet access, and a reasonably sized room with a whiteboard and three hours of free time daily. :) However, I have absolutely no idea where to begin. The milestones, for lack of a better term, are requirements document generation, sharing of resources, delegation of tasks, version control, etc. I'd really appreciate some advice on programming tools (for Java), communication tools, and the other things used in a decent software house. The technologies targeted may vary, possibly starting with Java EE (Spring, Hibernate) and later moving to .NET (C# and ASP.NET MVC) as well as Android or iPhone development.

    Read the article

  • If you were the manager of a team of 25 developers, how would you motivate them?

    - by Pierre 303
    Imagine you have been hired by a new startup backed by a few million from venture capitalists. Your mission: organize the development of the next killer app. Twenty-five developers is too many to look after individually, so what decisions would you make to motivate them? I will appreciate any answer, from stock options to free cookies ;) Of course the trick here (unless you really are the manager of such a startup) is to put yourself in the shoes of one of those programmers. EDIT: it's an imaginary scenario; the purpose of the story is to draw out what you would want. I want to capture what motivates developers.

    Read the article

  • If most of the team can't follow the architecture, what do you do?

    - by Chris
    Hi all, I'm working on a greenfield project with two other developers. We're all contractors; my colleague and I just started on the project, while the original developer has been doing most of the basic framework coding. In the past month, my fellow programmer and I have become frustrated by the design decisions made by our co-worker. Here's a little background information: at face value the application appeared to be your standard n-layered web application using C# on the 3.5 framework. We have a data layer, a business layer and a web interface. But as we got deeper into the project we found some very interesting things that have caused us trouble. There is a custom sqlHelper-style data access base which only accepts dictionary key/value entries and returns only DataTables. There are no entity objects, but there are some massive objects which do everything and are then tossed into session for persistence. The general idea is that the pages (.aspx) don't do anything, while the controls (.ascx) do everything. The general flow is that a client clicks a button, which goes to a user control base, which passes a process request to the 'BLL' class, which goes to the page processor, which then goes to a getControlProcessor, which at last actually processes the request. The request itself is a dictionary carrying a string-valued method name, a stored procedure name, a control name and possibly a value. All switching of the processing is done by comparing the string values of the control names and method names. Pages are linked together via a common header control that uses a combination of JavaScript and tables to create a hyperlink effect. And as I found out yesterday, a simple hyperlink from one page to another does not work, because quite a bit of information has to be in session to determine which control to display on a page. My fellow programmer and I both believe that this is a strange and uncommon approach to web application development. Both of us have been in this business for over five years and neither of us has seen this approach before. My question is this: how should we approach our co-worker to voice our concerns, and what should we do if he does not accept the criticism? We don't want to insult the work that has been done, but we feel that continuing down this path will create a nightmare for development. Thanks for your comments.

    Read the article

  • How to access / query Team Foundation Server 2012 with OData?

    - by cseder
    I've been trying to find a solution to this for hours now, and I keep ending up in the same place: instructions asking me to install a lot of Azure and other components, plus example projects whose .sln files I can't open with my 2012 version of Visual Studio. So I'm pretty much stuck, and have some fairly straightforward questions: Does TFS 2012 include the OData service in any form, so that I don't have to install it? If not, how can I install a native 2012 version of the OData service for TFS 2012? Is it possible that I'm aiming at the wrong target here? What I actually need is this: I have a TFS 2012 server on which I need to create work items programmatically, based on data from our help desk system. Then I need to query those work items for status changes since their creation, and update the help desk database. Am I better off using the "regular" TFS API? I had the feeling that the OData route was more future-proof, but I'm not sure...
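    For reference, the "regular" TFS API route covers both halves of that workflow. The sketch below is only a rough illustration assuming the TFS 2012 client object model is installed; the collection URL, project name, work item type and WIQL filter are placeholders, not values taken from the question.

    using System;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.WorkItemTracking.Client;

    class HelpDeskSync
    {
        static void Main()
        {
            // Placeholder collection URL and project name
            var collection = new TfsTeamProjectCollection(new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
            var store = collection.GetService<WorkItemStore>();

            // Create a work item from help desk data (work item type is a placeholder)
            var project = store.Projects["MyProject"];
            var workItem = new WorkItem(project.WorkItemTypes["Task"])
            {
                Title = "Help desk ticket 1234",
                Description = "Imported from the help desk system"
            };
            workItem.Save();

            // Later: find items whose state has moved on since creation, to sync back
            var changed = store.Query(
                "SELECT [System.Id], [System.Title], [System.State] FROM WorkItems " +
                "WHERE [System.TeamProject] = 'MyProject' AND [System.State] <> 'New'");
            foreach (WorkItem item in changed)
                Console.WriteLine("{0}: {1} ({2})", item.Id, item.Title, item.State);
        }
    }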

    Read the article

  • How can I make refactoring a priority for my team?

    - by Joseph Garland
    The codebase I work with daily has no automated tests and inconsistent naming, is full of comments like "Why is this here?", "Not sure if this is needed" or "This method isn't named right", and is littered with "changelogs" despite the fact that we use source control. Suffice it to say, our codebase could use refactoring. We always have tasks to fix bugs or add new features, so no time is set aside to refactor code to be better and more modular, and it doesn't seem to be a high priority. How can I demonstrate the value of refactoring such that it gets added to our task lists? Is it worth it to just refactor as I go, asking for forgiveness rather than permission?

    Read the article

  • The problem I reported is now known to the Visual Studio team

    - by anirudha
    Once upon a time I had a problem in Visual Studio, which I reported. At the time they couldn't understand what I wanted to say, and I couldn't explain it as clearly as they wanted. Today I got a mail from Visual Studio 2010 Connect telling me the "problem can be solved in a future version". That's a good thing, and I feel a little better: even if nobody believed me, I had told them many times, and at least now they have heard what I wanted to say. Still, I won't quite believe it until I actually get the new version of Visual Studio, because I know how slow they can be. Last time they said they would launch Visual Studio on 20 March, but it actually launched on 12 April, around the time Adobe launched the other products in their Creative series. The link to the Connect item is here.

    Read the article

  • Best books on Managing a Software Development Team? [closed]

    - by JohnFx
    The canon of books on software development is fairly well established. However, after reading through a dreadful book full of bad advice on managing programming teams this weekend, I am looking for recommendations for really good books that focus on the management side of programming (recruiting, performance measurement/management, motivation, best practices, organizational structure, etc.) and not so much on the construction of software itself. Any suggestions?

    Read the article

  • How to model a system to help my team grasp the project's bigger picture? [closed]

    - by user1796528
    From a software engineering point of view, I should model the system to make it easier for other people to understand what they are working on. To do so, I have used the Dia drawing program. But after having used Dia for some time, I find that it falls short in helping me correctly and efficiently model my project. How do you usually tackle this problem (modelling a project in the large), what tools would you recommend for the job, and why?

    Read the article

  • Agile team with no dedicated Tester members. Insane or efficient?

    - by MetaFight
    I'm a software developer. I've been thinking a lot about the efficiency of the Software Testers I've worked with so far in my career. In fact, I've been thinking a lot about the Software Tester role in general and have reached a potentially contentious conclusion: non-developer Software Testing staff are less efficient at software testing than developers. Now, before everyone gets upset, hear me out. This isn't mere opinion. Software Testing and Software Development both require a lot of skills in common: problem solving, thinking about corner cases, analytical skills, and the ability to define clear and concise step-by-step scenarios. What developers have in addition to this is the ability to automate their tests. Yes, I know non-dev testers can automate their tests too, but that often becomes a test maintenance issue. Because automating UI tests is essentially programming, non-dev members encounter all the same difficulties software developers encounter: copy-pasta, lack of code reusability/maintainability, etc. So, I was wondering: why not replace all non-dev roles with developer roles? Developers have the skills required to perform Software Testing tasks, and they have the skills to automate tests and keep them maintainable. Would the following work? Hire a bunch of developers and split them into two roles: software developers, and software developers doing testing (some manual, mostly automated by writing integration tests, unit tests, etc). (A third role, software developers doing application support, I've removed as it is probably a separate question altogether.) And, in our case, since we're doing Agile development, rotate the roles every sprint or two. Also, if at all possible, try to have people spend their developer stints and testing stints on different projects. Ideally you would want to reduce the turnover rate per rotation, so maybe you could have two groups and make sure their rotation cycles are staggered. For example, if each rotation was two sprints long, the two groups would have their rotations one sprint apart; that way there's only a 50% turnover rate per sprint. Am I crazy, or could this work? (Obviously a key component of this working is that all devs want to be in these roles. Let's assume I'm starting a new company and I can hire these ideal people.) EDIT: I've removed the phrase "QA", as apparently we are using it incorrectly where I work.

    Read the article

  • How should a team share/store game content during development?

    - by irwinb
    Other than Dropbox, what out there has been especially useful for storing and sharing game content like images during development (with a similar feature set to Dropbox: working offline, automatic syncing, and support for Windows/OS X)? We are looking into hosting our own SharePoint server, but it seems to be really focused on documents... Maybe Box.net would work? EDIT: For code, we are using Git. To be more precise, I was looking for an easy, automatic way for content produced by artists/audio engineers to be available to everyone. Features like approval of assets don't hurt either. Following the answer linked by Tetrad, Alienbrain looked pretty interesting but is way out of our budget (it may be something to invest in in the future). What we ended up doing: we were going to go with Box.net, but downloading the sync apps for desktop use required us to wait to be contacted by them for some reason. We did not have much time to wait, so we ended up going with Dropbox Teams. Box.net has a nice feature set, but we never really felt held back without it. Thanks for the help :).

    Read the article

  • .NET interview: code structure and design

    - by j_lewis
    I was given the .NET question below in an interview. I don't know why I got low marks; unfortunately I did not get any feedback.

    Question: The file hockey.csv contains the results from the Hockey Premier League. The columns 'For' and 'Against' contain the total number of goals scored for and against each team in that season (so Alabama scored 79 goals against opponents, and had 36 goals scored against them). Write a program to print the name of the team with the smallest difference in 'for' and 'against' goals. The structure of hockey.csv looks like this (it is a valid CSV file; I have just copied the values here to give an idea):

    Team - For - Against
    Alabama 79 36
    Washinton 67 30
    Indiana 87 45
    Newcastle 74 52
    Florida 53 37
    New York 46 47
    Sunderland 29 51
    Lova 41 64
    Nevada 33 63
    Boston 30 64
    Nevada 33 63
    Boston 30 64

    Solution:

    class Program
    {
        static void Main(string[] args)
        {
            string path = @"C:\Users\<valid csv path>";
            var resultEvaluator = new ResultEvaluator(string.Format(@"{0}\{1}", path, "hockey.csv"));
            var team = resultEvaluator.GetTeamSmallestDifferenceForAgainst();
            Console.WriteLine(string.Format(
                "Smallest difference in 'For' and 'Against' goals > TEAM: {0}, GOALS DIF: {1}",
                team.Name, team.Difference));
            Console.ReadLine();
        }
    }

    public interface IResultEvaluator
    {
        Team GetTeamSmallestDifferenceForAgainst();
    }

    public class ResultEvaluator : IResultEvaluator
    {
        private static DataTable leagueDataTable;
        private readonly string filePath;
        private readonly ICsvExtractor csvExtractor;

        public ResultEvaluator(string filePath)
        {
            this.filePath = filePath;
            csvExtractor = new CsvExtractor();
        }

        private DataTable LeagueDataTable
        {
            get
            {
                if (leagueDataTable == null)
                {
                    leagueDataTable = csvExtractor.GetDataTable(filePath);
                }
                return leagueDataTable;
            }
        }

        public Team GetTeamSmallestDifferenceForAgainst()
        {
            var teams = GetTeams();
            var lowestTeam = teams.OrderBy(p => p.Difference).First();
            return lowestTeam;
        }

        private IEnumerable<Team> GetTeams()
        {
            IList<Team> list = new List<Team>();
            foreach (DataRow row in LeagueDataTable.Rows)
            {
                var name = row["Team"].ToString();
                var @for = int.Parse(row["For"].ToString());
                var against = int.Parse(row["Against"].ToString());
                var team = new Team(name, against, @for);
                list.Add(team);
            }
            return list;
        }
    }

    public interface ICsvExtractor
    {
        DataTable GetDataTable(string csvFilePath);
    }

    public class CsvExtractor : ICsvExtractor
    {
        public DataTable GetDataTable(string csvFilePath)
        {
            var lines = File.ReadAllLines(csvFilePath);
            string[] fields = lines[0].Split(new[] { ',' });
            int columns = fields.GetLength(0);
            var dt = new DataTable();
            // always assume the 1st row contains the column names
            for (int i = 0; i < columns; i++)
            {
                dt.Columns.Add(fields[i].ToLower(), typeof(string));
            }
            DataRow row;
            for (int i = 1; i < lines.GetLength(0); i++)
            {
                fields = lines[i].Split(new char[] { ',' });
                row = dt.NewRow();
                for (int f = 0; f < columns; f++)
                    row[f] = fields[f];
                dt.Rows.Add(row);
            }
            return dt;
        }
    }

    public class Team
    {
        public Team(string name, int against, int @for)
        {
            Name = name;
            Against = against;
            For = @for;
        }

        public string Name { get; private set; }
        public int Against { get; private set; }
        public int For { get; private set; }
        public int Difference { get { return (For - Against); } }
    }

    Output: Smallest difference in 'For' and 'Against' goals > TEAM: Boston, GOALS DIF: -34

    Can someone please review my code and see if anything is obviously wrong here? They were only interested in the structure/design of the code and whether the program produces the correct result (i.e. the lowest difference).
    Much appreciated. P.S. Please correct me if the ".net-interview" tag is not the right tag to use.
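    For comparison, the same problem can be solved far more compactly; reviewers sometimes look for a solution that stays proportional to the problem. The snippet below is only an illustrative sketch that mirrors the semantics of the solution above (signed For - Against, ordered ascending) and assumes a clean CSV with a header row and no embedded commas; the file path is a placeholder.

    using System;
    using System.IO;
    using System.Linq;

    class CompactVersion
    {
        static void Main()
        {
            // Mirrors the original's ordering by the signed For - Against difference.
            // (Whether "smallest difference" means the signed or absolute value is worth clarifying.)
            var lowest = File.ReadAllLines(@"C:\Users\<valid csv path>\hockey.csv")
                .Skip(1)                                // skip the header row
                .Select(line => line.Split(','))
                .Select(f => new { Team = f[0], Diff = int.Parse(f[1]) - int.Parse(f[2]) })
                .OrderBy(x => x.Diff)
                .First();

            Console.WriteLine("Smallest difference: TEAM: {0}, GOALS DIF: {1}", lowest.Team, lowest.Diff);
        }
    }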

    Read the article

  • How can I obtain a list of modified files within a 'changeset' range from team foundation server thr

    - by Colin W
    I'm trying to create a tool which will help my team perform code reviews on a more regular basis without its usual massive overhead. At the moment the manual process involves using Team Foundation Sidekicks to identify the changesets and then exporting that list to Excel to filter the results and find which items need to be reviewed (e.g. code files). I've heard mention of using a TFS API, but found very little help online, possibly because I was asking Google 'the wrong questions'.
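    For what it's worth, the version-control side of the TFS client object model can produce exactly that list. The sketch below is only a rough, untested illustration assuming the classic TFS client API; the server URL, server path, changeset range and file-extension filter are placeholders.

    using System;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.VersionControl.Client;

    class ChangesetFileLister
    {
        static void Main()
        {
            var tfs = new TfsTeamProjectCollection(new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
            var vcs = tfs.GetService<VersionControlServer>();

            // List files changed between changesets 100 and 200 under a project path
            var history = vcs.QueryHistory(
                "$/MyProject", VersionSpec.Latest, 0, RecursionType.Full,
                null,                                   // any user
                new ChangesetVersionSpec(100),          // from
                new ChangesetVersionSpec(200),          // to
                int.MaxValue,
                true,                                   // includeChanges - needed to see the files
                false);

            foreach (Changeset changeset in history)
                foreach (Change change in changeset.Changes)
                    if (change.Item.ServerItem.EndsWith(".cs", StringComparison.OrdinalIgnoreCase))
                        Console.WriteLine("{0}  {1}", changeset.ChangesetId, change.Item.ServerItem);
        }
    }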

    Read the article

  • TFS API Add Favorites programmatically

    - by Tarun Arora
    01 – What are we trying to achieve?

    In this blog post I'll be showing you how to add work item queries as favorites; it is also possible to use the same technique to add a build definition as a favorite. Once a shared query or build definition has been added as a favorite, it will show up on team web access. In the absence of a proper API, what follows is a workaround for adding queries to team favorites.

    02 – Disclaimer

    There is no official API for adding favorites programmatically. In the workaround below I am using the Identity service to store this data in a property bag which is used during display of favorites on the team web site. This uses an internal data structure that could change over time; there is no guarantee about the key names or the content of the values. What is shown below is a workaround for a missing API.

    03 – Concept

    There is no direct API support for favorites, but you can work around it using the identity service in TFS. Favorites are stored in the property bag associated with the TeamFoundationIdentity (either the 'team' identity or the user's identity, depending on whether these are 'team' or 'my' favorites). The data is stored as JSON in the property bag of the identity, the key being prefixed by 'Microsoft.TeamFoundation.Framework.Server.IdentityFavorites'.

    References
    - Microsoft.TeamFoundation.WorkItemTracking.Client
    - using Microsoft.TeamFoundation.Client;
    - using Microsoft.TeamFoundation.Framework.Client;
    - using Microsoft.TeamFoundation.Framework.Common;
    - using Microsoft.TeamFoundation.ProcessConfiguration.Client;
    - using Microsoft.TeamFoundation.Server;
    - using Microsoft.TeamFoundation.WorkItemTracking.Client;

    Services
    - IIdentityManagementService2
    - TfsTeamService
    - WorkItemStore

    04 – Solution

    Let's start by connecting to TFS programmatically:

    // Create an instance of the services to be used during the program
    private static TfsTeamProjectCollection _tfs;
    private static ProjectInfo _selectedTeamProject;
    private static WorkItemStore _wis;
    private static TfsTeamService _tts;
    private static TeamSettingsConfigurationService _teamConfig;
    private static IIdentityManagementService2 _ids;

    // Connect to TFS programmatically
    public static bool ConnectToTfs()
    {
        var isSelected = false;
        var tfsPp = new TeamProjectPicker(TeamProjectPickerMode.SingleProject, false);
        tfsPp.ShowDialog();
        _tfs = tfsPp.SelectedTeamProjectCollection;
        if (tfsPp.SelectedProjects.Any())
        {
            _selectedTeamProject = tfsPp.SelectedProjects[0];
            isSelected = true;
        }
        return isSelected;
    }

    Let's get all the work item queries from the selected team project:

    static readonly Dictionary<string, string> QueryAndGuid = new Dictionary<string, string>();

    // Get all queries and query guids in the selected team project
    private static void GetQueryGuidList(IEnumerable<QueryItem> query)
    {
        foreach (QueryItem subQuery in query)
        {
            if (subQuery.GetType() == typeof(QueryFolder))
                GetQueryGuidList((QueryFolder)subQuery);
            else
            {
                QueryAndGuid.Add(subQuery.Name, subQuery.Id.ToString());
            }
        }
    }

    Pass in the name of a valid team in your team project and the name of a valid query in your team project. The team details will be extracted using the team name, and the query GUID will be extracted using the query name. These details will be used to construct the key and value that will be passed to the SetProperty method in the Identity service.

    Key:   "Microsoft.TeamFoundation.Framework.Server.IdentityFavorites..<TeamProjectURI>.<TeamId>.WorkItemTracking.Queries.<newGuid1>"
    Value: "{"data":"<QueryGuid>","id":"<NewGuid1>","name":"<QueryKey>","type":"Microsoft.TeamFoundation.WorkItemTracking.QueryItem"}"

    // Configure a Work Item Query as a favorite for the given team
    private static void ConfigureTeamFavorites(string teamName, string queryName)
    {
        _ids = _tfs.GetService<IIdentityManagementService2>();
        var g = Guid.NewGuid();
        var guid = string.Empty;
        var teamDetail = _tts.QueryTeams(_selectedTeamProject.Uri).FirstOrDefault(t => t.Name == teamName);
        foreach (var q in QueryAndGuid.Where(q => q.Key == queryName))
        {
            guid = q.Value;
        }
        if (guid == string.Empty)
        {
            Console.WriteLine("Query '{0}' - Not found!", queryName);
            return;
        }
        var key = string.Format(
            "Microsoft.TeamFoundation.Framework.Server.IdentityFavorites..{0}.{1}.WorkItemTracking.Queries{2}",
            new Uri(_selectedTeamProject.Uri).Segments.LastOrDefault(),
            teamDetail.Identity.TeamFoundationId,
            g);
        var value = string.Format(
            @"{0}""data"":""{1}"",""id"":""{2}"",""name"":""{3}"",""type"":""Microsoft.TeamFoundation.WorkItemTracking.QueryItem""{4}",
            "{", guid, g, QueryAndGuid.FirstOrDefault(q => q.Value == guid).Key, "}");
        teamDetail.Identity.SetProperty(IdentityPropertyScope.Local, key, value);
        _ids.UpdateExtendedProperties(teamDetail.Identity);
        Console.WriteLine("{0}Added Query '{1}' as Favorite", Environment.NewLine, queryName);
    }

    If you have any questions or suggestions leave a comment. Enjoy!
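    To tie the pieces above together, a hypothetical driver might look like the sketch below. It is not part of the original post: the team name and query name are placeholders and must exist in the selected team project.

    // Hypothetical driver for the methods above - "My Team" and "Active Bugs" are placeholders
    static void Main()
    {
        if (!ConnectToTfs())
            return;

        _tts = _tfs.GetService<TfsTeamService>();
        _wis = _tfs.GetService<WorkItemStore>();

        // Walk the query hierarchy of the selected project to populate QueryAndGuid
        GetQueryGuidList(_wis.Projects[_selectedTeamProject.Name].QueryHierarchy);

        ConfigureTeamFavorites("My Team", "Active Bugs");
    }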

    Read the article

  • What parser generator do you recommend?

    - by stefan.ciobaca
    I'm currently shopping for a FOSS parser generator for a project of mine. It has to support either C or C++. I've looked at bison/flex and at boost::spirit. I went from writing my own to spirit to bison to spirit to bison to spirit, each time hit by some feature I found unpleasant. The thing I hate most about bison/flex is that they actually generate C/C++ source for you. There are a number of disadvantages to this, e.g. debugging. I like spirit from this point of view, but I find it very very heavy on syntax. I am curious about what you are using, what you would recommend, and general thoughts about the state of the art in parser generators. I am also curious to hear about approaches being used in other languages for parsing problems.

    Read the article

  • Interviewer question: What is the greatest strength and weakness of your development team? Any downside to asking?

    - by epignosisx
    I was wondering if this was a good question to ask a possible employer: What is the greatest strength and weakness of your development team? We all get this question when we are in an interview. So why not ask them? It's not just to annoy them. I think it is a very good question. By asking this question to future employers we could find out about the team and how this strength or weakness could affect us. What do you think? Do you see any downside to asking this question?

    Read the article

  • TFS Shipping Cadence

    - by Tarun Arora
    Brian Harry has formally announced a change to the TFS shipping cadence, from the traditional two-to-three-year production cycle to a far more agile rhythm of at least one update every three weeks! The change didn't happen overnight; it was a gradual process, greatly influenced by moving TFS to the cloud. The thinking started with trying to figure out what the team wanted. Like people often do, the team started with what they knew and tried to evolve from there. The team spent a few months thinking through "What if we do major releases every year and minor releases every 6 months?", "Major releases every 6 months, patches once a month?", "What if we do quarterly releases – can we get the release cycle going that fast?", etc. The team also spent time debating what constitutes a major release vs a minor release, and how much churn customers are willing to tolerate. The team finally concluded: "When a change this big is necessary – forget where you are and just ask where you want to be and then ask what it would take to get there." Going forward you will see: Team Foundation Service updates every 3 weeks; Visual Studio client updates quarterly (limited to VS 2012 for now); and Team Foundation Server updates more frequently than every 2 years, with details still being worked out – the team will definitely deliver one this fall. Refer to the complete blog post from Brian Harry here.

    Read the article

  • Problems with Update Manager

    - by user65965
    Whenever I try to update with update manager I get the following errors: W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/precise/Release Unable to find expected entry 'commercial/source/Sources' in Release file (Wrong sources.list entry or malformed file) W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/precise-updates/Release Unable to find expected entry 'commercial/source/Sources' in Release file (Wrong sources.list entry or malformed file) W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/precise-backports/Release Unable to find expected entry 'commercial/source/Sources' in Release file (Wrong sources.list entry or malformed file) W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/precise-security/Release Unable to find expected entry 'commercial/source/Sources' in Release file (Wrong sources.list entry or malformed file) W:Failed to fetch http://ppa.launchpad.net/iefremov/ppa/ubuntu/dists/precise/main/source/Sources 404 Not Found W:Failed to fetch http://ppa.launchpad.net/iefremov/ppa/ubuntu/dists/precise/main/binary-amd64/Packages 404 Not Found W:Failed to fetch http://ppa.launchpad.net/iefremov/ppa/ubuntu/dists/precise/main/binary-i386/Packages 404 Not Found E:Some index files failed to download. They have been ignored, or old ones used instead. Any help would be much appreciated, thank you. Thank you very much Eliah. I'm still pretty new to Ubuntu. Here's the output I got from the terminal: No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 12.04 LTS Release: 12.04 Codename: precise # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to # newer versions of the distribution. deb http://archive.ubuntu.com/ubuntu precise main restricted commercial deb-src http://archive.ubuntu.com/ubuntu precise restricted main commercial multiverse universe #Added by software-properties ## Major bug fix updates produced after the final release of the ## distribution. deb http://archive.ubuntu.com/ubuntu precise-updates main restricted commercial deb-src http://archive.ubuntu.com/ubuntu precise-updates restricted main commercial multiverse universe #Added by software-properties ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team. Also, please note that software in universe WILL NOT receive any ## review or updates from the Ubuntu security team. deb http://archive.ubuntu.com/ubuntu precise universe deb http://archive.ubuntu.com/ubuntu precise-updates universe ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team, and may not be under a free licence. Please satisfy yourself as to ## your rights to use the software. Also, please note that software in ## multiverse WILL NOT receive any review or updates from the Ubuntu ## security team. deb http://archive.ubuntu.com/ubuntu precise multiverse deb http://archive.ubuntu.com/ubuntu precise-updates multiverse ## N.B. software from this repository may not have been tested as ## extensively as that contained in the main release, although it includes ## newer versions of some applications which may provide useful features. ## Also, please note that software in backports WILL NOT receive any review ## or updates from the Ubuntu security team. 
deb http://archive.ubuntu.com/ubuntu precise-backports main restricted universe multiverse commercial deb-src http://archive.ubuntu.com/ubuntu precise-backports main restricted universe multiverse commercial #Added by software-properties deb http://archive.ubuntu.com/ubuntu precise-security main restricted commercial deb-src http://archive.ubuntu.com/ubuntu precise-security restricted main commercial multiverse universe #Added by software-properties deb http://archive.ubuntu.com/ubuntu precise-security universe deb http://archive.ubuntu.com/ubuntu precise-security multiverse ## Uncomment the following two lines to add software from Canonical's ## 'partner' repository. ## This software is not part of Ubuntu, but is offered by Canonical and the ## respective vendors as a service to Ubuntu users. deb http://archive.canonical.com/ubuntu oneiric partner deb-src http://archive.canonical.com/ubuntu precise partner ## This software is not part of Ubuntu, but is offered by third-party ## developers who want to ship their latest software. deb http://extras.ubuntu.com/ubuntu precise main deb-src http://extras.ubuntu.com/ubuntu precise main ## This is a 3rd party script to install and update Oracle Java deb http://www.duinsoft.nl/pkg debs all ## Sun-Java6-JRE deb http://security.ubuntu.com/ubuntu hardy-security main multiverse ** /etc/apt/sources.list.d/askubuntu-tools-ppa-precise.list: deb http://ppa.launchpad.net/askubuntu-tools/ppa/ubuntu precise main deb-src http://ppa.launchpad.net/askubuntu-tools/ppa/ubuntu precise main ** /etc/apt/sources.list.d/askubuntu-tools-ppa-precise.list.save: deb http://ppa.launchpad.net/askubuntu-tools/ppa/ubuntu precise main deb-src http://ppa.launchpad.net/askubuntu-tools/ppa/ubuntu precise main ** /etc/apt/sources.list.d/effie-jayx-turpial-oneiric.list: deb http://ppa.launchpad.net/effie-jayx/turpial/ubuntu precise main # disabled on upgrade to precise deb-src http://ppa.launchpad.net/effie-jayx/turpial/ubuntu precise main # disabled on upgrade to precise ** /etc/apt/sources.list.d/effie-jayx-turpial-oneiric.list.distUpgrade: deb http://ppa.launchpad.net/effie-jayx/turpial/ubuntu oneiric main deb-src http://ppa.launchpad.net/effie-jayx/turpial/ubuntu oneiric main ** /etc/apt/sources.list.d/effie-jayx-turpial-oneiric.list.save: deb http://ppa.launchpad.net/effie-jayx/turpial/ubuntu precise main # disabled on upgrade to precise deb-src http://ppa.launchpad.net/effie-jayx/turpial/ubuntu precise main # disabled on upgrade to precise ** /etc/apt/sources.list.d/getdeb.list: # deb http://archive.getdeb.net/ubuntu oneiric-getdeb apps # disabled on upgrade to precise ** /etc/apt/sources.list.d/getdeb.list.distUpgrade: deb http://archive.getdeb.net/ubuntu oneiric-getdeb apps ** /etc/apt/sources.list.d/getdeb.list.save: # deb http://archive.getdeb.net/ubuntu oneiric-getdeb apps # disabled on upgrade to precise ** /etc/apt/sources.list.d/hotot-team-ppa-oneiric.list: deb http://ppa.launchpad.net/hotot-team/ppa/ubuntu precise main # disabled on upgrade to precise deb-src http://ppa.launchpad.net/hotot-team/ppa/ubuntu precise main # disabled on upgrade to precise ** /etc/apt/sources.list.d/hotot-team-ppa-oneiric.list.distUpgrade: deb http://ppa.launchpad.net/hotot-team/ppa/ubuntu oneiric main deb-src http://ppa.launchpad.net/hotot-team/ppa/ubuntu oneiric main ** /etc/apt/sources.list.d/hotot-team-ppa-oneiric.list.save: deb http://ppa.launchpad.net/hotot-team/ppa/ubuntu precise main # disabled on upgrade to precise deb-src http://ppa.launchpad.net/hotot-team/ppa/ubuntu precise main 
# disabled on upgrade to precise ** /etc/apt/sources.list.d/iefremov-ppa-precise.list: deb http://ppa.launchpad.net/iefremov/ppa/ubuntu precise main deb-src http://ppa.launchpad.net/iefremov/ppa/ubuntu precise main ** /etc/apt/sources.list.d/iefremov-ppa-precise.list.save: deb http://ppa.launchpad.net/iefremov/ppa/ubuntu precise main deb-src http://ppa.launchpad.net/iefremov/ppa/ubuntu precise main ** /etc/apt/sources.list.d/jockey.list: deb http://www.openprinting.org/download/printdriver/debian/ lsb3.2 main-nonfree # disabled on upgrade to precise ** /etc/apt/sources.list.d/jockey.list.distUpgrade: deb http://www.openprinting.org/download/printdriver/debian/ lsb3.2 main-nonfree ** /etc/apt/sources.list.d/jockey.list.save: deb http://www.openprinting.org/download/printdriver/debian/ lsb3.2 main-nonfree # disabled on upgrade to precise ** /etc/apt/sources.list.d/plexydesk-plexydesk-dailybuild-precise.list: deb http://ppa.launchpad.net/plexydesk/plexydesk-dailybuild/ubuntu precise main deb-src http://ppa.launchpad.net/plexydesk/plexydesk-dailybuild/ubuntu precise main ** /etc/apt/sources.list.d/plexydesk-plexydesk-dailybuild-precise.list.save: deb http://ppa.launchpad.net/plexydesk/plexydesk-dailybuild/ubuntu precise main deb-src http://ppa.launchpad.net/plexydesk/plexydesk-dailybuild/ubuntu precise main ** /etc/apt/sources.list.d/precise-partner.list: deb http://archive.canonical.com/ubuntu precise partner #Added by software-center ** /etc/apt/sources.list.d/precise-partner.list.save: deb http://archive.canonical.com/ubuntu precise partner #Added by software-center ** /etc/apt/sources.list.d/private-ppa.launchpad.net_commercial-ppa-uploaders_crossover-pro_ubuntu.list: # deb https://justin-dormandy:[email protected]/commercial-ppa-uploaders/crossover-pro/ubuntu precise main #Added by software-center disabled on upgrade to precise ** /etc/apt/sources.list.d/private-ppa.launchpad.net_commercial-ppa-uploaders_crossover-pro_ubuntu.list.distUpgrade: cat: /etc/apt/sources.list.d/private-ppa.launchpad.net_commercial-ppa-uploaders_crossover-pro_ubuntu.list.distUpgrade: Permission denied ** /etc/apt/sources.list.d/private-ppa.launchpad.net_commercial-ppa-uploaders_crossover-pro_ubuntu.list.save: cat: /etc/apt/sources.list.d/private-ppa.launchpad.net_commercial-ppa-uploaders_crossover-pro_ubuntu.list.save: Permission denied ** /etc/apt/sources.list.d/screenlets-ppa-precise.list: deb http://ppa.launchpad.net/screenlets/ppa/ubuntu precise main deb-src http://ppa.launchpad.net/screenlets/ppa/ubuntu precise main ** /etc/apt/sources.list.d/screenlets-ppa-precise.list.save: deb http://ppa.launchpad.net/screenlets/ppa/ubuntu precise main deb-src http://ppa.launchpad.net/screenlets/ppa/ubuntu precise main ** /etc/apt/sources.list.d/webupd8team-java-precise.list: deb http://ppa.launchpad.net/webupd8team/java/ubuntu precise main deb-src http://ppa.launchpad.net/webupd8team/java/ubuntu precise main ** /etc/apt/sources.list.d/webupd8team-java-precise.list.save: deb http://ppa.launchpad.net/webupd8team/java/ubuntu precise main deb-src http://ppa.launchpad.net/webupd8team/java/ubuntu precise main

    Read the article

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure.  Today’s new capabilities include: Storage: Import/Export Hard Disk Drives to your Storage Accounts HDInsight: General Availability of our Hadoop Service in the cloud Virtual Machines: New VM Gallery, ACL support for VIPs Web Sites: WebSocket and Remote Debugging Support Notification Hubs: Segmented customer push notification support with tag expressions TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services Developer Analytics: New Relic support for Web Sites + Mobile Services Service Bus: Support for partitioned queues and topics Billing: New Billing Alert Service that sends emails notifications when your bill hits a threshold you define All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them. Storage: Import/Export Hard Disk Drives to Windows Azure I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we’ll automatically transfer the data to or from your Windows Azure Storage account.  This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth). Encrypted Transport Our Import/Export service provides built-in support for BitLocker disk encryption – which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it).  The drive preparation tool we are shipping today makes setting up bitlocker encryption on these hard drives easy. How to Import/Export your first Hard Drive of Data You can read our Getting Started Guide to learn more about how to begin using the import/export service.  You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs. It is really easy to create a new import or export job using the Windows Azure Management Portal.  Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don’t have this tab make sure to sign-up for the Import/Export preview): Then click the “Create Import Job” or “Create Export Job” commands at the bottom of it.  This will launch a wizard that easily walks you through the steps required: For more comprehensive information about Import/Export, refer to Windows Azure Storage team blog.  You can also send questions and comments to the [email protected] email address. We think you’ll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects.  We hope you like it. HDInsight: 100% Compatible Hadoop Service in the Cloud Last week we announced the general availability release of Windows Azure HDInsight. 
HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure.  This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios. HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running this provides a super cost effective way to use them).  You can create Hadoop clusters using either the Windows Azure Management Portal (see below) or using our PowerShell and Cross Platform Command line tools: The import/export hard drive support that came out today is a perfect companion service to use with HDInsight – the combination allows you to easily ingest, process and optionally export a limitless amount of data.  We’ve also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs.  You can find out more about how to get started with HDInsight here. Virtual Machines: VM Gallery Enhancements Today’s update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud.  You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal: The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use: Search: You can now easily search and filter images using the search box in the top-right of the dialog.  For example, simply type “SQL” and we’ll filter to show those images in the gallery that contain that substring. Category Tree-view: Each month we add more built-in VM images to the gallery.  You can continue to browse these using the “All” view within the VM Gallery – or now quickly filter them using the category tree-view on the left-hand side of the dialog.  For example, by selecting “Oracle” in the tree-view you can now quickly filter to see the official Oracle supplied images. MSDN and Supported checkboxes: With today’s update we are also introducing filters that makes it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing - you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners. Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we’re providing some additional sort options, like “Newest,” to customize the image list for what suits you best. Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery. The above improvements make it even easier to use the VM Gallery and quickly create launch and run Virtual Machines in the cloud. 
Virtual Machines: ACL Support for VIPs A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today’s release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can now do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance: This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM’s network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren’t on a virtual network you could alternatively limit traffic from public IPs that can access your workloads: Here is the default behaviors for ACLs in Windows Azure: By default (i.e. no rules specified), all traffic is permitted. When using only Permit rules, all other traffic is denied. When using only Deny rules, all other traffic is permitted. When there is a combination of Permit and Deny rules, all other traffic is denied. Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level.  So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately as well. Web Sites: Web Sockets Support With today’s release you can now use Web Sockets with Windows Azure Web Sites.  This feature enables you to easily integrate real-time communication scenarios within your web based applications, and is available at no extra charge (it even works with the free tier).  Higher level programming libraries like SignalR and socket.io are also now supported with it. You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to “on”: Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications.  Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it. Web Sites: Remote Debugging Support The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today’s Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud. Remote Debugging of a Windows Azure Web Site using VS 2013 Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy.  Start by opening up your web application’s project within Visual Studio. Then navigate to the “Server Explorer” tab within Visual Studio, and click on the deployed web-site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer.  Then right-click and choose the “Attach Debugger” option on it: When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.  
The debugger will then stop the web site’s execution when it hits any break points that you have set within your web application’s project inside Visual Studio.  For example, below I set a breakpoint on the “ViewBag.Message” assignment statement within the HomeController of the standard ASP.NET MVC project template.  When I hit refresh on the “About” page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio: Note above how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session above I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property.  When we click the the “Continue” button (or press F5) the app will continue execution and the Web Site will render the content back to the browser.  This makes it super easy to debug web apps remotely. Tips for Better Debugging To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio’s Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site which will enable a richer debug experience within Visual Studio.  You can find this option on the Web Publish dialog on the Settings tab: When you ultimately deploy/run the application in production we recommend using the “Release” configuration setting – the release configuration is memory optimized and will provide the best production performance.  To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide. Notification Hubs: Segmented Push Notification support with tag expressions In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high volume push notifications with low latency from any mobile app back-end.  Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises. Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users as well as groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans with a single API call respectively. New support for using tag expressions to enable advanced customer segmentation With today’s release we are adding support for even more advanced customer targeting.  You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step farther and reach more granular segments. This opens up a variety of scenarios, for example: Offers based on multiple preferences—e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian Push content to multiple segments in a single message—e.g. 
rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinal fan Avoid presenting subsets of a segment with irrelevant content—e.g. season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. “Loc:Boston”) and interest tags (e.g. “Follows:RedSox”, “Follows:Cardinals”), and then a notification can be sent by your back-end to “(Follows:RedSox || Follows:Cardinals) && Loc:Boston” in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server backend send logic using the code below: var notification = new WindowsNotification(messagePayload); hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston"); In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!).  Some other cool use cases for tag expressions that are now supported include: Social: To “all my group except me” - group:id && !user:id Events: Touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 … Hours: Send notifications at specific times. E.g. Tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood Versions and platforms: Send a reminder to people still using your first version for Android - version:1.0 && platform:Android For help on getting started with Notification Hubs, visit the Notification Hub documentation center.  Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions.  They are really powerful and enable a bunch of great new scenarios. TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services With today’s Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services.  Team Foundation Services is a cloud based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support.  It makes it really easy to setup a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio. With today’s Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services.  This enables a workflow where when code is checked in, built successfully on an automated build server, and all tests pass on it – I can automatically have the app deployed on Windows Azure with zero manual intervention or work required. The below screen-shots demonstrate how to quickly setup a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services. Enabling Continuous Delivery to Windows Azure with Team Foundation Services The project I’m going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I’m hosting using Team Foundation Services.  
I did this by creating a “SimpleContinuousDeploymentTest” repository there using Git – and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it.  Below is a screen-shot of the Git repository hosted within Team Foundation Services: I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks).  Using VS 2013 I can also setup automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository: The cool thing about this is that I don’t have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings.  This build server (and automated testing) support now works with both TFS and Git based source control repositories. Connecting a Team Foundation Services project to Windows Azure Once I have a source repository hosted in Team Foundation Services with Automated Builds and Testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the Build + Tests pass).  Enabling this is now really easy.  To set this up with a Windows Azure Web Site simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal.  This will create a dialog like below.  I gave the web site a name and then made sure the “Publish from source control” checkbox was selected: When we click next we’ll be prompted for the location of the source repository.  We’ll select “Team Foundation Services”: Once we do this we’ll be prompted for our Team Foundation Services account that our source repository is hosted under (in this case my TFS account is “scottguthrie”): When we click the “Authorize Now” button we’ll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account.  Once we do this we’ll be prompted to pick the source repository we want to connect to.  Starting with today’s Windows Azure release you can now connect to both TFS and Git based source repositories.  This new support allows me to connect to the “SimpleContinuousDeploymentTest” respository we created earlier: Clicking the finish button will then create the Web Site with the continuous delivery hooks setup with Team Foundation Services.  Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution , and if they pass the app will be automatically deployed to our Web Site in Windows Azure.  You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site: This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way. Developer Analytics: New Relic support for Web Sites + Mobile Services With today’s Windows Azure release we are making it really easy to enable Developer Analytics and Monitoring support with both Windows Azure Web Site and Windows Azure Mobile Services.  We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this - and we have updated the Windows Azure Management Portal to make it really easy to configure. 
Enabling New Relic with a Windows Azure Web Site

Enabling New Relic support with a Windows Azure Web Site is now really easy. Simply navigate to the Configure tab of a Web Site and scroll down to the “developer analytics” section that is now within it:

Clicking the “add-on” button will display some additional UI. If you don’t already have a New Relic subscription, you can click the “view windows azure store” button to obtain one (note: New Relic has a perpetually free tier, so you can enable it without paying anything):

Clicking the “view windows azure store” button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal. You can use it to browse a variety of great add-on services – including New Relic:

Select “New Relic” within the dialog above, then click the next button, and you’ll be able to choose which type of New Relic subscription you wish to purchase. For this demo we’ll simply select the “Free Standard Version” – which does not cost anything and can be used forever:

Once we’ve signed up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site’s Configure tab and choose to use the New Relic add-on with our Windows Azure Web Site. We do this by simply selecting it from the “add-on” dropdown (it is automatically populated there once a New Relic subscription is in our account):

Clicking the “Save” button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings for our Web Site:

Deploying the New Relic Agent as part of a Web Site

The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app. We can do this within Visual Studio by right-clicking on our web project and selecting the “Manage NuGet Packages” context menu:

This will bring up the NuGet package manager. You can search for “New Relic” within it to find the New Relic agent. Note that there are both 32-bit and 64-bit editions of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site’s “Configuration” tab within the Windows Azure Management Portal):

Once we install the NuGet package we are all set to go. We simply re-publish the web site to Windows Azure and New Relic will automatically start monitoring the application.

Monitoring a Web Site using New Relic

Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring its health. We do this by clicking on the “Add Ons” tab on the left-hand side of the Windows Azure Management Portal and then selecting the New Relic add-on we signed up for. The Windows Azure Management Portal will show some default information about the add-on. Clicking the “Manage” button in the tray at the bottom will open a new browser tab and sign us straight into the New Relic monitoring portal associated with our account:

When we do this, a new browser tab will open with the New Relic admin tool loaded within it:

We can now see insights into how our app is performing – without having written a single line of monitoring code.
The New Relic service provides a ton of great built-in monitoring features, allowing us to quickly see:

Performance times (including browser rendering speed) for the overall site and for individual pages. You can optionally set alert thresholds that trigger if the speed falls below a level you specify.
Information about where in the world your customers are hitting the site from (and how performance varies by region).
Details on the latency of external services your web apps are using (for example: SQL, Storage, Twitter, etc.).
Error information, including call-stack details for exceptions that have occurred at runtime.
SQL Server profiling information – including which queries executed against your database and how they performed.
And a whole bunch more…

The cool thing about New Relic is that you don’t need to write monitoring code within your application to get any of the above reports (plus a lot more). The New Relic agent automatically enables the CLR profiler within your application and captures the information needed to produce these reports. This makes it super easy to get started and immediately have a rich developer analytics view of your solutions with very little effort.

If you haven’t tried New Relic with Windows Azure yet, I recommend you do so – I think you’ll find it helps you build even better cloud applications. Following the steps above will get you started and deliver a really good application monitoring solution in only minutes.

Service Bus: Support for partitioned queues and topics

With today’s release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning lets you achieve higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by backing each partitioned queue or topic with multiple message brokers, and the multiple messaging stores also provide higher availability.

You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic (a short code sketch of doing the same thing programmatically appears just before the summary below):

Read this article to learn more about partitioned queues and topics and how to take advantage of them today.

Billing: New Billing Alert Service

Today’s Windows Azure update enables a new Billing Alert Service Preview that sends you proactive email notifications when your Windows Azure bill goes above a monetary threshold that you configure. This makes it easier to manage your bill and avoid surprises at the end of the month.

With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total. To set up an alert, first sign up for the free Billing Alert Service Preview. Then visit the account management page, click on one of your subscriptions, and navigate to the new Alerts tab that is available:

The Alerts tab allows you to set up email alerts that will be sent automatically once a certain threshold is hit. For example, by clicking the “add alert” button above I can set up a rule to send myself email any time my Windows Azure bill goes above $100 for the month:

The Billing Alert Service will evolve to cover additional aspects of your bill, as well as additional forms of alerts such as SMS. Try out the new Billing Alert Service Preview today and give us feedback.
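As referenced in the Service Bus section above, here is a minimal sketch of creating a partitioned queue from code rather than from the portal. It is only a sketch and assumes the WindowsAzure.ServiceBus NuGet package; the queue name and connection string below are placeholders for illustration:

    using Microsoft.ServiceBus;
    using Microsoft.ServiceBus.Messaging;

    public class PartitionedQueueSetup
    {
        public static void EnsurePartitionedQueue()
        {
            // Placeholder namespace connection string -- replace with your own.
            var namespaceManager = NamespaceManager.CreateFromConnectionString(
                "Endpoint=sb://<your-namespace>.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=<key>");

            // EnablePartitioning spreads the queue across multiple message brokers
            // and messaging stores, for higher throughput and availability.
            var description = new QueueDescription("orders")
            {
                EnablePartitioning = true
            };

            if (!namespaceManager.QueueExists(description.Path))
            {
                namespaceManager.CreateQueue(description);
            }
        }
    }

TopicDescription exposes the same EnablePartitioning flag, so partitioned topics can be created the same way.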
Summary

Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier.

If you don’t already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • How do you manage your time as a team leader?

    - by Bryan Slatner
    Where I work, my role has been evolving from a pure development role to team leadership. I find that this suits me, and I'm generally enjoying it. One aspect of the job that continually vexes me, though, is time management. My day used to be pure coding. Now, I still have a largely full plate of coding duties, but I'm expected to mentor other developers, work on requirements, make design decisions for other developers, evaluate bug reports from users, assign them to developers, and so on. I find that my day has become one interruption after another, and the prolonged periods of sustained concentration needed to get any actual quality coding done are becoming rarer and rarer. Today, I finally grabbed my laptop and escaped to a coffee shop so I could get some actual work done. How do the team leads here manage their day -- or manage their workplace -- so they don't let their administrative tasks overwhelm them?

    Read the article
