Search Results



  • How to loop through an array returned from a MySQL query

    - by Jerry
    This might be easy for you guys but I couldn't get it. I have a PHP class that queries the database and returns the query result. I assign the result to an array and want to use it in my main.php script. I have tried echo $var[0] or echo $var[1], but the output is 'Array' instead of my value. Can anyone help me with this issue? Thanks a lot! My PHP class:

        <?php
        class teamQuery {
            function teamQuery() {
            }

            function getAllTeam() {
                $connection = mysql_connect(DB_SERVER, DB_USER, DB_PASS);
                if (!$connection) {
                    die("Database connection failed: " . mysql_error());
                }
                $db_select = mysql_select_db(DB_NAME, $connection);
                if (!$db_select) {
                    die("Database selection failed: " . mysql_error());
                }
                $teamQuery = mysql_query("SELECT * FROM team", $connection);
                if (!$teamQuery) {
                    die("database has errors: " . mysql_error());
                }
                $ret = array();
                while ($row = mysql_fetch_array($teamQuery)) {
                    $ret[] = $row;
                }
                mysql_free_result($teamQuery);
                return $ret;
            }
        }
        ?>

    My PHP in main.php:

        $getTeam = new teamQuery();
        $team = $getTeam->getAllTeam();
        // echo $team[0] or $team[1] outputs the 'Array' string!
        // while($team) { // do something } can't work either

    How to loop through the values?? Thanks!
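
    Why this happens: getAllTeam() returns an array of rows, and each row is itself an array (mysql_fetch_array() keys columns both by index and by name), so echo $team[0] prints 'Array'. A minimal sketch of looping over the result, assuming the class above; the name column is an assumed column of the team table:

        // $team is the array returned by getAllTeam(); each element is one row
        foreach ($team as $row) {
            // columns are available by name or by index, courtesy of mysql_fetch_array()
            echo $row['name'] . "\n";   // 'name' is an assumed column
        }

        // or with explicit indexes:
        for ($i = 0; $i < count($team); $i++) {
            echo $team[$i]['name'] . "\n";
        }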

    Read the article

  • ordering a collection by an association's property

    - by neiled
        class Person
          belongs_to :team
        end

        class Status
          # has a last_updated property
        end

        class Team
          has_many :members, :class_name => "Person"
        end

    OK, so I have a Team class which has many People in it, each of those people has a status, and each status has a last_updated property. I'm currently rendering a partial with a collection similar to:

        = render :partial => "user", :collection => current_user.team.members

    Now how do I go about sorting the collection by the last_updated property of the Status class? Thanks in advance! P.S. I've just written the Ruby code from memory; it's just an example, it's not meant to compile, but I hope you get the idea!
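
    A minimal sketch of two ways to do this, assuming each Person has_one :status backed by a statuses table (both the association name and the table name are assumptions):

        # Option 1: sort in Ruby after loading (fine for small collections)
        sorted_members = current_user.team.members.sort_by { |m| m.status.last_updated }.reverse

        # Option 2: let the database sort, joining the assumed statuses table
        sorted_members = current_user.team.members.all(
          :include => :status,
          :order   => "statuses.last_updated DESC")

    Either way, you then pass sorted_members as the :collection in the render call. Option 2 keeps the sorting in SQL, which matters once the team gets large.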

    Read the article

  • How to deal with 2 almost identical tables

    - by jgritty
    I have a table of baseball stats, something like this:

        CREATE TABLE batting_stats(
            ab INTEGER,
            pa INTEGER,
            r INTEGER,
            h INTEGER,
            hr INTEGER,
            rbi INTEGER,
            playerID INTEGER,
            FOREIGN KEY(playerID) REFERENCES player(playerID)
        );

    But then I have a table of stats that are basically exactly the same, but for a team:

        CREATE TABLE team_batting_stats(
            ab INTEGER,
            pa INTEGER,
            r INTEGER,
            h INTEGER,
            hr INTEGER,
            rbi INTEGER,
            teamID INTEGER,
            FOREIGN KEY(teamID) REFERENCES team(teamID)
        );

    My first instinct is to scrap the foreign key and generalize the ID, but I still have a problem: I have these 2 tables, and they can't have overlapping IDs:

        CREATE TABLE player(
            playerID INTEGER PRIMARY KEY,
            firstname TEXT,
            lastname TEXT,
            number INTEGER,
            teamID INTEGER,
            FOREIGN KEY(teamID) REFERENCES team(teamID)
        );

        CREATE TABLE team(
            teamID INTEGER PRIMARY KEY,
            name TEXT,
            city TEXT
        );

    I feel like I'm overlooking something obvious that could solve this problem and reduce the stats to a single table.
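
    One possible direction, sketched here under the assumption that team batting stats are simply the sum of the players' lines: store only the per-player table and derive the team numbers with a view, so there is a single stats table and nothing to keep in sync:

        -- keep only batting_stats and derive team numbers on demand
        CREATE VIEW team_batting_stats AS
        SELECT p.teamID,
               SUM(b.ab)  AS ab,
               SUM(b.pa)  AS pa,
               SUM(b.r)   AS r,
               SUM(b.h)   AS h,
               SUM(b.hr)  AS hr,
               SUM(b.rbi) AS rbi
          FROM batting_stats b
          JOIN player p ON p.playerID = b.playerID
         GROUP BY p.teamID;

    If team totals can legitimately diverge from the sum of the player rows (mid-season trades, team-only events), an alternative is a single stats table with its own primary key plus nullable playerID and teamID columns and a CHECK constraint that exactly one of the two is set.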

    Read the article

  • That Escalated Quickly

    - by Jesse Taber
    Originally posted on: http://geekswithblogs.net/GruffCode/archive/2014/05/17/that-escalated-quickly.aspx

    I have been working remotely out of my home for over 4 years now. All of my coworkers during that time have also worked remotely. Lots of folks have written about the challenges inherent in facilitating communication on remote teams and strategies for overcoming them. A popular theme around this topic is the notion of “escalating communication”. In this context “escalating” means taking a conversation from one mode of communication to a different, higher fidelity mode of communication. Here are the five modes of communication I use at work in order of increasing fidelity:

    1. Email – This is the “lowest fidelity” mode of communication that I use. I usually only check it a few times a day (and I’m trying to check it even less frequently than that) and I only keep items in my inbox if they represent an item I need to take action on that I haven’t tracked anywhere else.

    2. Forums / Message boards – Being a developer, I’ve gotten into the habit of having other people look over my code before it becomes part of the product I’m working on. These code reviews often happen in “real time” via screen sharing, but I also always have someone else give all of the changes another look using pull requests. A pull request takes my code and lets someone else see the changes I’ve made side-by-side with the existing code so they can see if I did anything dumb. Pull requests can facilitate a conversation about the code changes in an online-forum-like style. Some teams I’ve worked on also liked using tools like Trello or Google Groups to have on-going conversations about a topic or task that was being worked on.

    3. Chat & Instant Messaging – Chat and instant messaging are the real workhorses for communication on the remote teams I’ve been a part of. I know some teams that are co-located that also use it pretty extensively for quick messages that don’t warrant walking across the office to talk with someone but require more immediacy than an e-mail. For the purposes of this post I think it’s important to note that the terms “chat” and “instant messaging” might insinuate that the conversation is happening in real time, but that’s not always true. Modern chat and IM applications maintain a searchable history so people can easily see what might have been discussed while they were away from their computers.

    4. Voice, Video and Screen sharing – Everyone’s got a camera and microphone on their computers now, and there are an abundance of services that will let you use them to talk to other people who have cameras and microphones on their computers. I’m including screen sharing here as well because, in my experience, these discussions typically involve one or more people showing the other participants something that’s happening on their screen. Obviously, this mode of communication is much higher-fidelity than any of the ones listed above. Scheduled meetings are typically conducted using this mode of communication.

    5. In Person – No matter how great communication tools become, there’s no substitute for meeting with someone face-to-face. However, opportunities for this kind of communication are few and far between when you work on a remote team.

    When a conversation gets escalated that usually means it moves up one or more positions on this list. A lot of people advocate jumping to #4 sooner rather than later.
    Like them, I used to believe that, if it was possible, organizing a call with voice and video was automatically better than any kind of text-based communication could be. Lately, however, I’m becoming less convinced that escalating is always the right move.

    Working Asynchronously

    Last year I attended a talk at our local code camp given by Drew Miller. Drew works at GitHub and was talking about how they use GitHub internally. Many of the folks at GitHub work remotely, so communication was one of the main themes in Drew’s talk. During the talk Drew used the phrase “asynchronous communication” to describe their use of chat and pull request comments. That phrase stuck in my head because I hadn’t heard it before, but I think it perfectly describes the way in which remote teams often need to communicate. You don’t always know when your co-workers are at their computers or what hours (if any) they are working that day. In order to work this way you need to assume that the person you’re talking to might not respond right away. You can’t always afford to wait until everyone required is online and available to join a voice call, so you need to use text-based, persistent forms of communication so that people can receive and respond to messages when they are available. Going back to my list from the beginning of this post for a second, I characterize items #1-3 as being “asynchronous” modes of communication, while we could call items #4 and #5 “synchronous”. When communication gets escalated it’s almost always moving from an asynchronous mode of communication to a synchronous one. Now, to the point of this post: I’ve become increasingly reluctant to escalate from asynchronous to synchronous communication for two primary reasons:

    1 – You can often find a higher fidelity way to convey your message without holding a synchronous conversation
    2 – Asynchronous modes of communication are (usually) persistent and searchable.

    You Don’t Have to Broadcast Live

    Let’s start with the first reason I’ve listed. A lot of times you feel like you need to escalate to synchronous communication because you’re having difficulty describing something that you’re seeing in words. You want to provide the people you’re conversing with some audio-visual aids to help them understand the point that you’re trying to make and you think that getting on Skype and sharing your screen with them is the best way to do that. Firing up a screen sharing session does work well, but you can usually accomplish the same thing in an asynchronous manner. For example, you could take a screenshot and annotate it with some text and drawings to illustrate what it is you’re seeing. If a screenshot won’t work, take a short screen recording while you narrate over it and post the video to your forum or chat system along with a text-based description of what’s in the recording that can be searched for later; that can be a great way to communicate effectively with your team asynchronously.

    I Said What?!?

    Now for the second reason I listed: most asynchronous modes of communication provide a transcript of what was said and what decisions might have been made during the conversation. There have been many occasions where I’ve used the search feature of my team’s chat application to find a conversation that happened several weeks or months ago to remember what was decided.
    Unfortunately, I think the benefits associated with the persistence of communicating asynchronously often get overlooked when people decide to escalate to an in-person meeting or voice/video call. I’m becoming much more reluctant to suggest a voice or video call if I suspect that it might lead to codifying some kind of design decision, because everyone involved is going to hang up the call and immediately forget what was decided. I recognize that you can record and archive these types of interactions, but without being able to search them the recordings aren’t terribly useful.

    When and How To Escalate

    I don’t mean to imply that communicating via voice/video or in person is never a good idea. I probably jump on a Skype call with a co-worker at least once a day to quickly hash something out or show them a bit of code that I’m working on. Also, meeting in person periodically is really important for remote teams. There’s no way around the fact that sometimes it’s easier to jump on a call and show someone my screen so they can see what I’m seeing. So when is it right to escalate? I think the simplest way to answer that is: when the communication starts to feel painful. Everyone’s tolerance for that pain is different, but I think you need to let it hurt a little bit before jumping to synchronous communication. When you do escalate from asynchronous to synchronous communication, there are a couple of things you can do to maximize the effectiveness of the communication:

    Take notes – This is huge and yet I’ve found that a lot of teams don’t do this. If you’re holding a meeting with more than 2 people you should have someone taking notes. Taking notes while participating in a meeting can be difficult but there are a few strategies to deal with this challenge that probably deserve a short post of their own. After the meeting, make sure the notes are posted to a place where all concerned parties (including those that might not have attended the meeting) can review and search them.

    Persist decisions made ASAP – If any decisions were made during the meeting, persist those decisions to a searchable medium as soon as possible following the conversation. All the teams I’ve worked on used a web-based system for tracking the on-going work and a backlog of work to be done in the future. I always try to make sure that all of the cards/stories/tasks/whatever in these systems always reflect the latest decisions that were made as the work was being planned and executed. If you held a quick call with your team lead and decided that it wasn’t worth the effort to build real-time validation into that new UI you were working on, go and codify that decision in the story associated with that work immediately after you hang up. Even better, write it up in the story while you are both still on the phone. That way when the folks from your QA team pick up the story to test a few days later they’ll know why the real-time validation isn’t there without having to invoke yet another conversation about the work.

    Communicating Well is Hard

    At this point you might be thinking that communicating asynchronously is more difficult than having a live conversation. You’re right: it is more difficult. In order to communicate effectively this way you need to very carefully think about the message that you’re trying to convey and craft it in a way that’s easy for your audience to understand. This is almost always harder than just talking through a problem in real time with someone; this is why escalating communication is such a popular idea.
Why wouldn’t we want to do the thing that’s easier? Easier isn’t always better. If you and your team can get in the habit of communicating effectively in an asynchronous manner you’ll find that, over time, all of your communications get less painful because you don’t need to re-iterate previously made points over and over again. If you communicate right the first time, you often don’t need to rehash old conversations because you can go back and find the decisions that were made laid out in plain language. You’ll also find that you get better at doing things like writing useful comments in your code, creating written documentation about how the feature that you just built works, or persuading your team to do things in a certain way.

    Read the article

  • Convert ddply {plyr} to Oracle R Enterprise, or use with Embedded R Execution

    - by Mark Hornick
    The plyr package contains a set of tools for partitioning a problem into smaller sub-problems that can be more easily processed. One function within {plyr} is ddply, which allows you to specify subsets of a data.frame and then apply a function to each subset. The result is gathered into a single data.frame. Such a capability is very convenient. The function ddply also has a parallel option that, if TRUE, will apply the function in parallel, using the backend provided by foreach. This type of functionality is available through Oracle R Enterprise using the ore.groupApply function. In this blog post, we show a few examples from Sean Anderson's "A quick introduction to plyr" to illustrate the corresponding functionality using ore.groupApply. To get started, we'll create a demo data set and load the plyr package. set.seed(1) d <- data.frame(year = rep(2000:2014, each = 3),         count = round(runif(45, 0, 20))) dim(d) library(plyr) This first example takes the data frame, partitions it by year, and calculates the coefficient of variation of the count, returning a data frame. # Example 1 res <- ddply(d, "year", function(x) {   mean.count <- mean(x$count)   sd.count <- sd(x$count)   cv <- sd.count/mean.count   data.frame(cv.count = cv)   }) To illustrate the equivalent functionality in Oracle R Enterprise, using embedded R execution, we use the ore.groupApply function on the same data, but pushed to the database, creating an ore.frame. The function ore.push creates a temporary table in the database, returning a proxy object, the ore.frame. D <- ore.push(d) res <- ore.groupApply (D, D$year, function(x) {   mean.count <- mean(x$count)   sd.count <- sd(x$count)   cv <- sd.count/mean.count   data.frame(year=x$year[1], cv.count = cv)   }, FUN.VALUE=data.frame(year=1, cv.count=1)) You'll notice the similarities in the first three arguments. With ore.groupApply, we augment the function to return the specific data.frame we want. We also specify the argument FUN.VALUE, which describes the resulting data.frame. From our previous blog posts, you may recall that by default, ore.groupApply returns an ore.list containing the results of each function invocation. To get a data.frame, we specify the structure of the result. The results in both cases are the same; however, the ore.groupApply result is an ore.frame. In this case the data stays in the database until it's actually required. This can result in significant memory and time savings when data is large. R> class(res) [1] "ore.frame" attr(,"package") [1] "OREbase" R> head(res)    year cv.count 1 2000 0.3984848 2 2001 0.6062178 3 2002 0.2309401 4 2003 0.5773503 5 2004 0.3069680 6 2005 0.3431743 To make the ore.groupApply execute in parallel, you can specify the argument parallel with either TRUE, to use default database parallelism, or to a specific number, which serves as a hint to the database as to how many parallel R engines should be used. The next ddply example uses the summarise function, which creates a new data.frame. In ore.groupApply, the year column is passed in with the data. Since no automatic creation of columns takes place, we explicitly set the year column in the data.frame result to the value of the first row, since all rows received by the function have the same year. 
# Example 2 ddply(d, "year", summarise, mean.count = mean(count)) res <- ore.groupApply (D, D$year, function(x) {   mean.count <- mean(x$count)   data.frame(year=x$year[1], mean.count = mean.count)   }, FUN.VALUE=data.frame(year=1, mean.count=1)) R> head(res)    year mean.count 1 2000 7.666667 2 2001 13.333333 3 2002 15.000000 4 2003 3.000000 5 2004 12.333333 6 2005 14.666667 Example 3 uses the transform function with ddply, which modifies the existing data.frame. With ore.groupApply, we again construct the data.frame explicitly, which is returned as an ore.frame. # Example 3 ddply(d, "year", transform, total.count = sum(count)) res <- ore.groupApply (D, D$year, function(x) {   total.count <- sum(x$count)   data.frame(year=x$year[1], count=x$count, total.count = total.count)   }, FUN.VALUE=data.frame(year=1, count=1, total.count=1)) > head(res)    year count total.count 1 2000 5 23 2 2000 7 23 3 2000 11 23 4 2001 18 40 5 2001 4 40 6 2001 18 40 In Example 4, the mutate function with ddply enables you to define new columns that build on columns just defined. Since the construction of the data.frame using ore.groupApply is explicit, you always have complete control over when and how to use columns. # Example 4 ddply(d, "year", mutate, mu = mean(count), sigma = sd(count),       cv = sigma/mu) res <- ore.groupApply (D, D$year, function(x) {   mu <- mean(x$count)   sigma <- sd(x$count)   cv <- sigma/mu   data.frame(year=x$year[1], count=x$count, mu=mu, sigma=sigma, cv=cv)   }, FUN.VALUE=data.frame(year=1, count=1, mu=1,sigma=1,cv=1)) R> head(res)    year count mu sigma cv 1 2000 5 7.666667 3.055050 0.3984848 2 2000 7 7.666667 3.055050 0.3984848 3 2000 11 7.666667 3.055050 0.3984848 4 2001 18 13.333333 8.082904 0.6062178 5 2001 4 13.333333 8.082904 0.6062178 6 2001 18 13.333333 8.082904 0.6062178 In Example 5, ddply is used to partition data on multiple columns before constructing the result. Realizing this with ore.groupApply involves creating an index column out of the concatenation of the columns used for partitioning. This example also allows us to illustrate using the ORE transparency layer to subset the data. # Example 5 baseball.dat <- subset(baseball, year > 2000) # data from the plyr package x <- ddply(baseball.dat, c("year", "team"), summarize,            homeruns = sum(hr)) We first push the data set to the database to get an ore.frame. We then add the composite column and perform the subset, using the transparency layer. Since the results from database execution are unordered, we will explicitly sort these results and view the first 6 rows. BB.DAT <- ore.push(baseball) BB.DAT$index <- with(BB.DAT, paste(year, team, sep="+")) BB.DAT2 <- subset(BB.DAT, year > 2000) X <- ore.groupApply (BB.DAT2, BB.DAT2$index, function(x) {   data.frame(year=x$year[1], team=x$team[1], homeruns=sum(x$hr))   }, FUN.VALUE=data.frame(year=1, team="A", homeruns=1), parallel=FALSE) res <- ore.sort(X, by=c("year","team")) R> head(res)    year team homeruns 1 2001 ANA 4 2 2001 ARI 155 3 2001 ATL 63 4 2001 BAL 58 5 2001 BOS 77 6 2001 CHA 63 Our next example is derived from the ggplot function documentation. This illustrates the use of ddply within the ggplot2 workflow. We first create a data.frame with demo data and use ddply to create some statistics for each group (gp). We then use ggplot to produce the graph. We can take this same code, push the data.frame df to the database and invoke this on the database server. The graph will be returned to the client window, as depicted below. 
# Example 6 with ggplot2 library(ggplot2) df <- data.frame(gp = factor(rep(letters[1:3], each = 10)),                  y = rnorm(30)) # Compute sample mean and standard deviation in each group library(plyr) ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y)) # Set up a skeleton ggplot object and add layers: ggplot() +   geom_point(data = df, aes(x = gp, y = y)) +   geom_point(data = ds, aes(x = gp, y = mean),              colour = 'red', size = 3) +   geom_errorbar(data = ds, aes(x = gp, y = mean,                                ymin = mean - sd, ymax = mean + sd),              colour = 'red', width = 0.4) DF <- ore.push(df) ore.tableApply(DF, function(df) {   library(ggplot2)   library(plyr)   ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))   ggplot() +     geom_point(data = df, aes(x = gp, y = y)) +     geom_point(data = ds, aes(x = gp, y = mean),                colour = 'red', size = 3) +     geom_errorbar(data = ds, aes(x = gp, y = mean,                                  ymin = mean - sd, ymax = mean + sd),                   colour = 'red', width = 0.4) }) But let's take this one step further. Suppose we wanted to produce multiple graphs, partitioned on some index column. We replicate the data three times and add some noise to the y values, just to make the graphs a little different. We also create an index column to form our three partitions. Note that we've also specified that this should be executed in parallel, allowing Oracle Database to control and manage the server-side R engines. The result of ore.groupApply is an ore.list that contains the three graphs. Each graph can be viewed by printing the list element. df2 <- rbind(df,df,df) df2$y <- df2$y + rnorm(nrow(df2)) df2$index <- c(rep(1,30), rep(2,30), rep(3,30)) DF2 <- ore.push(df2) res <- ore.groupApply(DF2, DF2$index, function(df) {   df <- df[,1:2]   library(ggplot2)   library(plyr)   ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))   ggplot() +     geom_point(data = df, aes(x = gp, y = y)) +     geom_point(data = ds, aes(x = gp, y = mean),                colour = 'red', size = 3) +     geom_errorbar(data = ds, aes(x = gp, y = mean,                                  ymin = mean - sd, ymax = mean + sd),                   colour = 'red', width = 0.4)   }, parallel=TRUE) res[[1]] res[[2]] res[[3]] To recap, we've illustrated how various uses of ddply from the plyr package can be realized in ore.groupApply, which affords the user explicit control over the contents of the data.frame result in a straightforward manner. We've also highlighted how ddply can be used within an ore.groupApply call.

    Read the article

  • Agile Testing Days 2012 – Day 2 – Learn through disagreement

    - by Chris George
    I think I was in the right place! During Day 1 I kept on reading tweets about Lean Coffee that had happened earlier that morning. It intrigued me and I figured in for a penny, in for a pound, and set my alarm for 6:45am. Following the award night the night before, it was _really_ hard getting up when it went off, but I did, and after a very early breakfast, set off for the 10 min walk to the Dorint. With Lean Coffee due to start at 07:30, I arrived at the hotel and made my way to one of the hotel bars. I soon realised I was in the right place as, although the bar was empty, there was a table with post-its and pens! This MUST be the place! The premise of Lean Coffee is to have several small timeboxed discussions. Everyone writes down what they would like to discuss on post-its that are then briefly explained and submitted to the pile. Once everyone is done, the group dot-votes on the topics. The topics are then sorted by the dot vote counts and the discussions begin. Each discussion had 8 mins to start with, which prevented the discussions from getting too far off topic. After the time elapsed, the group voted on whether to extend the discussion by a further 4 mins or move on. Several discussions were had around training, soft skills etc. The conversations were really interesting and there were quite a few good ideas. Overall it was a very enjoyable experience, certainly worth the early start!

    Make Melly Happy

    Following Lean Coffee was real coffee, and much needed that was! The first keynote of the day was “Let’s help Melly (Changing Work into Life)” by Jurgen Appelo.

    Draw lines to track happiness

    This was a very interesting presentation, and set the day nicely. The theme of the keynote was that projects are about the people, more so than the actual tasks. So he started by showing a photo of an employee ‘Melly’ who looked happy enough. He then stated that she looked happy but actually hated her job. In fact 50% of Americans hate their jobs. He went on to say that, the world over, 50% of people hate their jobs. Jurgen talked about many ways to reduce the feedback cycle, not only of the project, but also of the people management. Ideas such as happiness doors, happiness tracking (drawing lines on a wall indicating your happiness for that day), kudo boxes (to compliment a colleague for good work). All of these (and more) ideas stimulate conversation amongst the team, lead to early detection of issues and investigation of solutions. I’ve massively simplified Jurgen’s keynote and have certainly not done it justice, so I will post a link to the video once it’s available. Following more coffee, the next talk was “How releasing faster changes testing” by Alexander Schwartz. This is a topic very close to our hearts at the moment, so I was eager to find out any juicy morsels that could help us achieve more frequent releases, and Alex did not disappoint. He started off by confirming something that I have been a firm believer in for a number of years now: adding more people can do more harm than good when trying to release. This is for a number of reasons, but just adding new people to a team at such a critical time can be more of a drain on resources than they add. The alternative is to have the whole team have shared responsibility for faster delivery. So the whole team is responsible for quality and testing. Obviously you will have the test engineers on the project who have the specialist skills, but there is no reason that the entire team cannot do exploratory testing on the product. 
    This links nicely with the Developer Exploratory testing presented by Sigge on Day 1, and it's certainly something that my team are really striving towards. Focus on cycle time: what can be done to reduce the time between dev cycles and release cycles? What stops a release? What delays a release? All good, solid questions that can be answered. Alex suggested that perhaps the product doesn’t need to be fully tested. Doing less testing will reduce the cycle time and therefore get the release out faster. He suggested a risk-based approach to planning what testing needs to happen. Reducing testing could have an impact on revenue if it causes harm to customers, so test the ‘right stuff’! Determine a set of tests that are ‘face saving’ or ‘smoke’ tests. These tests cover the core functionality of the product and aim to prevent major embarrassment if these areas were to fail! Amongst many other very good points, Alex suggested that a good approach would be to release after every new feature is added. So do a bit of work -> release, do some more work -> release. By releasing small increments of work, the impact on the customer of bugs being introduced is reduced.

    Red Pill, Blue Pill

    The second keynote of the day was “Adaptation and improvisation – but your weakness is not your technique” by Markus Gartner and proved to be another very good presentation. It started off quoting lines from the Matrix which relate to adapting, improvising, realisation and mastery. It had a lot of nerds in the room smiling! Markus went on to explain how through deliberate practice (and a lot of it!) you can achieve mastery, but then you never stop learning. Through methods such as code retreats, testing dojos and workshops you can continually improve and learn. The code retreat idea was one that interested me. It involved pairing to write an automated test for, say, 45 mins, then deleting all the code, finding a different partner and writing the same test again! This is another keynote where the video will speak louder than anything I can write here! Markus did elaborate on something that Lisa and Janet had touched on yesterday whilst busting the myth that “Testers Must Code”. Whilst it is true that to be a tester you don’t need to code, it is becoming more common that there is this crossover happening where more testers are coding and more programmers are testing. Markus made a special distinction between programmers and developers, as testers develop test code, so this helped to make that clear.

    “Extending Continuous Integration and TDD with Continuous Testing” by Jason Ayers was my next talk after lunch. We already do CI and a bit of TDD on my project team so I was interested to see what this continuous testing thing was all about and whether it would actually work for us. At the start of the presentation I was of the opinion that it just would not work for us because our tests are too slow, and that would be the case for many people. Jason started off by setting the scene and saying that those doing TDD spend between 10-15% of their time waiting for tests to run. This can be reduced by testing less often or reducing the test time, but this then increases the risk of introduced bugs not being spotted quickly. Therefore, in comes Continuous Testing (CT). CT systems run your unit tests whenever you save some code and run them in the background so you can continue working. This is a really nice idea, but to do this, your tests must be fast, independent and reliable. 
    The latter two should be the case anyway, and the first is ideal, but hard! Jason makes several suggestions to make tests fast. Firstly, keep the scope of the test small; secondly, spin off any expensive tests into a suite which is run, perhaps, overnight, or outside of the CT system at any rate. So this started to change my mind: perhaps we could re-engineer our tests and continuously run the quick ones to give an element of coverage. This talk was very interesting and I’ve already tried a couple of the tools mentioned on our product (Mighty Moose and NCrunch). Sadly, due to the way our solution is built, it currently doesn’t work, but we will look at whether we can make this work because this has the potential to be a mini-game-changer for us.

    Using the wrong data

    Gojko’s Hierarchy of Quality

    The final keynote of the day was “Reinventing software quality” by Gojko Adzic. He opened the talk with the statement “We’ve got quality wrong because we are using the wrong data”! Gojko then went on to explain that we should judge a bug by whether the customer cares about it, not by whether we think it’s important. Why spend time fixing issues that the customer just wouldn’t care about and releasing months later because of this? Surely it’s better to release now and get customer feedback? This was another reference to the idea of how it’s better to build the right thing wrong than the wrong thing right. Get feedback early to make sure you’re making the right thing. Gojko then showed something which was very analogous to Maslow’s hierarchy of needs:

    Successful – does it contribute to the business?
    Useful – does it do what the user wants?
    Usable – does it do what it’s supposed to without breaking?
    Performant/Secure – is it secure / is the performance acceptable?
    Deployable / Functionally OK – can it be deployed without breaking?

    He then explained that User Stories should focus on change. In other words, they should focus on the user's needs, not the user's process. Describe what the change will be, how that change will happen, then measure it!

    Networking and Beer

    Following the day’s closing keynote, there were drinks and nibbles for the ‘Networking’ evening. This was a great opportunity to talk to people. I find approaching strangers very uncomfortable but once again, when in Rome! Pete Walen and I had a long conversation about only fixing issues that the customer cares about versus fixing issues that make you proud of your software! Without saying much, and asking the right questions, Pete made me re-evaluate my thoughts on the matter. Clever, very clever! Oh, and he ‘bought’ me a beer!

    My Takeaway Triple from Day 2:

    Release small and release often to minimize issues creeping in and get faster feedback from ‘the real world’
    Focus on issues that the customers care about, not what we think is important
    It’s okay to disagree with someone, even if they are well respected agile testing gurus; that’s how discussion and learning happens!

    Read the article

  • Automatically create bug resolution task using the TFS 2010 API

    - by Bob Hardister
    My customer requires bug resolution to be approved and tracked.  To minimize the overhead for developers I implemented a TFS 2010 server-side plug-in to automatically create a child resolution task for the bug when the “CCB” field is set to approved. The CCB field is a custom field.  I also added the story points field to the bug WIT for sizing purposes. Redundant tasks will not be created unless the bug title is changed or the prior task is closed. The program writes an audit trail to a log file visible in the TFS Admin Console Log view. Here’s the code. BugAutoTask.cs /* SPECIFICATION * When the CCB field on the bug is set to approved, create a child task where the task: * name = Resolve bug [ID] - [Title of bug] * assigned to = same as assigned to field on the bug * same area path * same iteration path * activity = Bug Resolution * original estimate = bug points * * The source code is used to build a dll (Ows.TeamFoundation.BugAutoTaskCreation.PlugIns.dll), * which needs to be copied to * C:\Program Files\Microsoft Team Foundation Server 2010\Application Tier\Web Services\bin\Plugins * on ALL TFS application-tier servers. * * Author: Bob Hardister. */ using System; using System.Collections.Generic; using System.IO; using System.Xml; using System.Text; using System.Diagnostics; using System.Linq; using Microsoft.TeamFoundation.Common; using Microsoft.TeamFoundation.Framework.Server; using Microsoft.TeamFoundation.WorkItemTracking.Client; using Microsoft.TeamFoundation.WorkItemTracking.Server; using Microsoft.TeamFoundation.Client; using System.Collections; namespace BugAutoTaskCreation { public class BugAutoTask : ISubscriber { public EventNotificationStatus ProcessEvent(TeamFoundationRequestContext requestContext, NotificationType notificationType, object notificationEventArgs, out int statusCode, out string statusMessage, out ExceptionPropertyCollection properties) { statusCode = 0; properties = null; statusMessage = String.Empty; // Error message for for tracing last code executed and optional fields string lastStep = "No field values found or set "; try { if ((notificationType == NotificationType.Notification) && (notificationEventArgs.GetType() == typeof(WorkItemChangedEvent))) { WorkItemChangedEvent workItemChange = (WorkItemChangedEvent)notificationEventArgs; // see ConnectToTFS() method below to select which TFS instance/collection // to connect to TfsTeamProjectCollection tfs = ConnectToTFS(); WorkItemStore wiStore = tfs.GetService<WorkItemStore>(); lastStep = lastStep + ": connection to TFS successful "; // Get the work item that was just changed by the user. WorkItem witem = wiStore.GetWorkItem(workItemChange.CoreFields.IntegerFields[0].NewValue); lastStep = lastStep + ": retrieved changed work item, ID:" + witem.Id + " "; // Filter for Bug work items only if (witem.Type.Name == "Bug") { // DEBUG lastStep = lastStep + ": changed work item is a bug "; // Filter for CCB (i.e. 
Baseline Status) field set to approved only bool BaselineStatusChange = false; if (workItemChange.ChangedFields != null) { ProcessBugRevision(ref lastStep, workItemChange, wiStore, ref witem, ref BaselineStatusChange); } } } } catch (Exception e) { Trace.WriteLine(e.Message); Logger log = new Logger(); log.WriteLineToLog(MsgLevel.Error, "Application error: " + lastStep + " - " + e.Message + " - " + e.InnerException); } statusCode = 1; statusMessage = "Bug Auto Task Evaluation Completed"; properties = null; return EventNotificationStatus.ActionApproved; } // PRIVATE METHODS private static void ProcessBugRevision(ref string lastStep, WorkItemChangedEvent workItemChange, WorkItemStore wiStore, ref WorkItem witem, ref bool BaselineStatusChange) { foreach (StringField field in workItemChange.ChangedFields.StringFields) { // DEBUG lastStep = lastStep + ": last changed field is - " + field.Name + " "; if (field.Name == "Baseline Status") { lastStep = lastStep + ": retrieved bug baseline status field value, bug ID:" + witem.Id + " "; BaselineStatusChange = (field.NewValue != field.OldValue); if ((BaselineStatusChange) && (field.NewValue == "Approved")) { // Instanciate logger Logger log = new Logger(); // *** Create resolution task for this bug *** // ******************************************* // Get the team project and selected field values of the bug work item Project teamProject = witem.Project; int bugID = witem.Id; string bugTitle = witem.Fields["System.Title"].Value.ToString(); string bugAssignedTo = witem.Fields["System.AssignedTo"].Value.ToString(); string bugAreaPath = witem.Fields["System.AreaPath"].Value.ToString(); string bugIterationPath = witem.Fields["System.IterationPath"].Value.ToString(); string bugChangedBy = witem.Fields["System.ChangedBy"].OriginalValue.ToString(); string bugTeamProject = witem.Project.Name; lastStep = lastStep + ": all mandatory bug field values found "; // Optional fields Field bugPoints = witem.Fields["Microsoft.VSTS.Scheduling.StoryPoints"]; if (bugPoints.Value != null) { lastStep = lastStep + ": all mandatory and optional bug field values found "; } // Initialize child resolution task title string childTaskTitle = "Resolve bug " + bugID + " - " + bugTitle; // At this point I can check if a resolution task (of the same name) // for the bug already exist // If so, do not create a new resolution task bool createResolutionTask = true; WorkItem parentBug = wiStore.GetWorkItem(bugID); WorkItemLinkCollection links = parentBug.WorkItemLinks; foreach (WorkItemLink wil in links) { if (wil.LinkTypeEnd.Name == "Child") { WorkItem childTask = wiStore.GetWorkItem(wil.TargetId); if ((childTask.Title == childTaskTitle) && (childTask.State != "Closed")) { createResolutionTask = false; log.WriteLineToLog(MsgLevel.Info, "Team project " + bugTeamProject + ": " + bugChangedBy + " - set the CCB field to \"Approved\" for bug, ID: " + bugID + ". 
Task not created as open one of the same name already exist, ID:" + childTask.Id); } } } if (createResolutionTask) { // Define the work item type of the new work item WorkItemTypeCollection workItemTypes = wiStore.Projects[teamProject.Name].WorkItemTypes; WorkItemType wiType = workItemTypes["Task"]; // Setup the new task and assign field values witem = new WorkItem(wiType); witem.Fields["System.Title"].Value = "Resolve bug " + bugID + " - " + bugTitle; witem.Fields["System.AssignedTo"].Value = bugAssignedTo; witem.Fields["System.AreaPath"].Value = bugAreaPath; witem.Fields["System.IterationPath"].Value = bugIterationPath; witem.Fields["Microsoft.VSTS.Common.Activity"].Value = "Bug Resolution"; lastStep = lastStep + ": all mandatory task field values set "; // Optional fields if (bugPoints.Value != null) { witem.Fields["Microsoft.VSTS.Scheduling.OriginalEstimate"].Value = bugPoints.Value; lastStep = lastStep + ": all mandatory and optional task field values set "; } // Check for validation errors before saving the new task and linking it to the bug ArrayList validationErrors = witem.Validate(); if (validationErrors.Count == 0) { witem.Save(); // Link the new task (child) to the bug (parent) var linkType = wiStore.WorkItemLinkTypes[CoreLinkTypeReferenceNames.Hierarchy]; // Fetch the work items to be linked var parentWorkItem = wiStore.GetWorkItem(bugID); int taskID = witem.Id; var childWorkItem = wiStore.GetWorkItem(taskID); // Add a new link to the parent relating the child and save it parentWorkItem.Links.Add(new WorkItemLink(linkType.ForwardEnd, childWorkItem.Id)); parentWorkItem.Save(); log.WriteLineToLog(MsgLevel.Info, "Team project " + bugTeamProject + ": " + bugChangedBy + " - set the CCB field to \"Approved\" for bug, ID:" + bugID + ", which automatically created child resolution task, ID:" + taskID); } else { log.WriteLineToLog(MsgLevel.Error, "Error in creating bug resolution child task for bug ID:" + bugID); foreach (Field taskField in validationErrors) { log.WriteLineToLog(MsgLevel.Error, " - Validation Error in task field: " + taskField.ReferenceName); } } } } } } } private TfsTeamProjectCollection ConnectToTFS() { // Connect to TFS string tfsUri = string.Empty; // Production TFS instance production collection tfsUri = @"xxxx"; // Production TFS instance admin collection //tfsUri = @"xxxxx"; // Local TFS testing instance default collection //tfsUri = @"xxxxx"; TfsTeamProjectCollection tfs = new TfsTeamProjectCollection(new System.Uri(tfsUri)); tfs.EnsureAuthenticated(); return tfs; } // HELPERS public string Name { get { return "Bug Auto Task Creation Event Handler"; } } public SubscriberPriority Priority { get { return SubscriberPriority.Normal; } } public enum MsgLevel { Info, Warning, Error }; public Type[] SubscribedTypes() { return new Type[1] { typeof(WorkItemChangedEvent) }; } } } Logger.cs using System; using System.Collections.Generic; using System.IO; using System.Linq; using System.Text; using System.Windows.Forms; namespace BugAutoTaskCreation { class Logger { // fields private string _ApplicationDirectory = @"C:\ProgramData\Microsoft\Team Foundation\Server Configuration\Logs"; private string _LogFileName = @"\CFG_ACCT_AT_OWS_BugAutoTaskCreation.log"; private string _LogFile; private string _LogTimestamp = DateTime.Now.ToString("MM/dd/yyyy HH:mm:ss"); private string _MsgLevelText = string.Empty; // default constructor public Logger() { // check for a prior log file FileInfo logFile = new FileInfo(_ApplicationDirectory + _LogFileName); if (!logFile.Exists) { 
CreateNewLogFile(ref logFile); } } // properties public string ApplicationDirectory { get { return _ApplicationDirectory; } set { _ApplicationDirectory = value; } } public string LogFile { get { _LogFile = _ApplicationDirectory + _LogFileName; return _LogFile; } set { _LogFile = value; } } // PUBLIC METHODS public void WriteLineToLog(BugAutoTask.MsgLevel msgLevel, string logRecord) { try { // set msgLevel text if (msgLevel == BugAutoTask.MsgLevel.Info) { _MsgLevelText = "[Info @" + MsgTimeStamp() + "] "; } else if (msgLevel == BugAutoTask.MsgLevel.Warning) { _MsgLevelText = "[Warning @" + MsgTimeStamp() + "] "; } else if (msgLevel == BugAutoTask.MsgLevel.Error) { _MsgLevelText = "[Error @" + MsgTimeStamp() + "] "; } else { _MsgLevelText = "[Error: unsupported message level @" + MsgTimeStamp() + "] "; } // write a line to the log file StreamWriter logFile = new StreamWriter(_ApplicationDirectory + _LogFileName, true); logFile.WriteLine(_MsgLevelText + logRecord); logFile.Close(); } catch (Exception) { throw; } } // PRIVATE METHODS private void CreateNewLogFile(ref FileInfo logFile) { try { string logFilePath = logFile.FullName; // write the log file header _MsgLevelText = "[Info @" + MsgTimeStamp() + "] "; string cpu = string.Empty; if (Environment.Is64BitOperatingSystem) { cpu = " (x64)"; } StreamWriter newLog = new StreamWriter(logFilePath, false); newLog.Flush(); newLog.WriteLine(_MsgLevelText + "===================================================================="); newLog.WriteLine(_MsgLevelText + "Team Foundation Server Administration Log"); newLog.WriteLine(_MsgLevelText + "Version : " + "1.0.0 Author: Bob Hardister"); newLog.WriteLine(_MsgLevelText + "DateTime : " + _LogTimestamp); newLog.WriteLine(_MsgLevelText + "Type : " + "OWS Custom TFS API Plug-in"); newLog.WriteLine(_MsgLevelText + "Activity : " + "Bug Auto Task Creation for CCB Approved Bugs"); newLog.WriteLine(_MsgLevelText + "Area : " + "Build Explorer"); newLog.WriteLine(_MsgLevelText + "Assembly : " + "Ows.TeamFoundation.BugAutoTaskCreation.PlugIns.dll"); newLog.WriteLine(_MsgLevelText + "Location : " + @"C:\Program Files\Microsoft Team Foundation Server 2010\Application Tier\Web Services\bin\Plugins"); newLog.WriteLine(_MsgLevelText + "User : " + Environment.UserDomainName + @"\" + Environment.UserName); newLog.WriteLine(_MsgLevelText + "Machine : " + Environment.MachineName); newLog.WriteLine(_MsgLevelText + "System : " + Environment.OSVersion + cpu); newLog.WriteLine(_MsgLevelText + "===================================================================="); newLog.WriteLine(_MsgLevelText); newLog.Close(); } catch (Exception) { throw; } } private string MsgTimeStamp() { string msgTimestamp = string.Empty; return msgTimestamp = DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss:fff"); } } }

    Read the article

  • SSMS Tools Pack 2.5.3 is out with bug fixes and improved licensing

    - by Mladen Prajdic
    Licensing for SSMS Tools Pack 2.5 has been quite a hit and I received some awesome feedback. Version 2.5.3 contains a few bug fixes and desired licensing improvements. Changes include more licensing options, prices in Euros for bookkeeping reasons (don't you just love those :)) and a generally easier purchase and licensing process for users. Licensing now offers four options, plus a demo:

        Per machine license (€25): perfect if you do all your work from a single machine. Plus one OS reinstall activation.
        Personal license (€75): up to 4 machine activations. Plus 2 OS reinstall activations and any number of virtual machine activations.
        Team license (€240): up to 10 machine activations. Plus 4 OS reinstall activations and any number of virtual machine activations.
        Enterprise license (€350+): for more than 10 machine activations and any number of virtual machine activations.
        30-day license: a time-based demo license bound to a machine.

    You can view all the details on the Licensing page. If you want to receive email notifications when a new version of SSMS Tools Pack is out, you can do that on the Main page or on the Download page. Version 2.7 is expected in the first half of February and won't support SSMS 2005 and 2005 Express anymore. Enjoy it!

    Read the article

  • Visiting the Emtel Data Centre

    Back in February at the first event of the Emtel Knowledge Series (EKS) I spoke to various people at Emtel about their data centre here on the island. I was trying to see whether it would be possible to arrange a meeting over there for a selected group of our community members. Well, let's say it like this... My first approach wasn't that promising and was far from successful, but during the following months there were more and more occasions to get in touch with the "right" contact persons at Emtel to make it happen...

    Setting up an appointment and pre-requisites

    The major improvement came during a Boot Camp for Windows Phone 8.1 App development organised by Microsoft Indian Ocean Islands in cooperation with Emtel at the Emtel World, Ebene. Apart from learning bits and pieces regarding Universal Apps I took the opportunity to get in touch with Arvin Lockee, Sales Executive - Data, during our lunch break. And this really kicked off the whole procedure. Prior to getting access to the Emtel data centre you are requested to provide the full name and National ID of anyone going to visit. Also, it should be noted that there was only a limited number of seats available. Anyway, packed with this information I posted through the usual social media channels. Responses came in very quickly, and on a first-come, first-served (FCFS) basis I noted down the details and forwarded them to Emtel in order to fix a date and time for the visit. In preparation on our side, all attendees exchanged contact details and we organised transport options to go to the data centre in Arsenal. The day before and on the day of our meeting, Arvin sent me a reminder to check whether everything was still confirmed and ready to go... Of course, it was!

    Arriving at the Emtel Data Centre

    As I'm coming from Flic En Flac towards the North, we agreed that I would pick up a couple of young fellows near the old post office in Port Louis. All went well, except that Sean eventually might be living in another time zone compared to the rest of us. Anyway, after an extended stop our group was complete and arrived just in time in Arsenal to meet and greet with Ish and Veer. Again, Emtel takes access procedures to their data centre very seriously, and the gate stayed closed until all our IDs had been noted and compared to the list of registered attendees. Despite having a good laugh at the mixture of old and new ID cards, it was a straightforward process. The guard was very helpful and guided us to the waiting area at the entrance section of the building. Shortly after, we were welcomed by Kamlesh Bokhoree, the Data Centre Officer. He gave us a brief introduction to the rules and regulations during our visit, like no photography allowed, not touching the buttons, and following his instructions through the whole visit. Of course!

    Inside the data centre

    Next, he explained to us the multi-factor authentication system using a combination of biometric data, like a fingerprint reader, and a "classic" pin panel. The Emtel data centre provides multiple services, and next to co-location for your own hardware they also offer storage options for your backup and archive data in their massive, fire-resistant vault. Very impressive to get to know about the considerations that went into choosing the right location and how to set up the whole premises. It should also be noted that there is 24/7 CCTV surveillance inside and outside the buildings.

    Strengths of the Emtel TIER 3 Data Centre, Mauritius

    Finally, we were guided into the first server room. 
    And wow, the whole setup is cleverly planned and outlined in the architecture: from the false floor and ceilings in order to provide optimum air flow, over to the separation of cold and hot aisles between the full-size server racks, and of course the monitored air conditions in order to analyse and watch changes in temperature, smoke detection and other parameters. And, not surprisingly, everything has been implemented in two independent circuits. There is a standardised classification for the construction and operation of data centres world-wide, and Emtel's has been designed to be a TIER 4 building, but due to the lack of an alternative power supplier on the island it is officially registered as a TIER 3 compliant data centre. Maybe in the long run there might be a second supplier of energy next to CEB... time will tell. Luckily, the data centre is integrated into the National Fibre Optic Gigabit Ring, and Emtel already connects internationally through diverse undersea cable routes like SAFE & LION/LION2 out of Mauritius and through several other providers for onwards connectivity. Meanwhile, Arvin managed to join our little group of geeks and he supported Kamlesh in answering our technical questions regarding the capacities and general operation of the data centre. Visiting the NOC and its dedicated team of IT professionals was surely one of the visual highlights. Seeing their wall of screens to monitor any kind of activities on the data lines, the managed servers and the activity in and around the building was great. Even though I've been using a multi-head setup for years, I can't keep up with that setup... ;-) But I got a couple of ideas on how to improve my work spaces here at the office.

    Clear advantages of hosting your e-commerce and mobile backends locally

    After the completely isolated NOC area we continued our Q&A session with Kamlesh and Arvin in the second server room, which is dedicated to shared environments. First, it should be noted that there is lots of space for full-sized racks and therefore co-location of your own hardware. Actually, given the feedback that there will be upcoming price changes, the facilities at the Emtel data centre are getting more and more competitive and interesting for local companies, especially small and medium enterprises. After seeing this world-class infrastructure available on the island, I'm already considering moving one of my root servers from abroad to be co-located here on the island. This would provide an improved user experience in terms of site performance and latency, especially for upcoming e-commerce solutions for two of my local clients. Later on, we actually started a conversation about additional services that could be a catalyst for the local market in order to attract more small and medium companies to take the data centre into their evaluations regarding online activities. To date, Emtel does not provide virtualised server environments, but there might be plans to cover this field in the future as well. Emtel is a mobile operator and internet connectivity provider in the first place; entering the market of managed and virtualised server infrastructures, including cloud storage and computing capacities, is rather new ground, and there is a continuous learning curve at Emtel, too. You cannot just jump into a new market and see how it works out... 
    And I appreciate Emtel's approach of building a solid foundation first and then adding new services on top of that: Emtel as a future one-stop-shop service provider for all your internet and telecommunications needs.

    Emtel's promotional video about their TIER 3 data centre in Arsenal, Mauritius

    More details are thoroughly described in Emtel's data centre brochure. Check out their PDF document here.

    Thanks for this opportunity

    Visiting and walking through the Emtel data centre for more than 2 hours was a great experience. As a representative of the Mauritius Software Craftsmanship Community (MSCC) I would like to thank everyone at Emtel involved in the process of making it happen, and especially Arvin Lockee and Kamlesh Bokhoree for their time and patience in explaining the infrastructure and answering all the endless questions from our members. Thank you!

    Read the article

  • Emtel Knowledge Series - Q2/2014

    From Cyber Island to Smart Mauritius

    Cyber Island? Smart Mauritius? - What is Emtel talking about? "With the majority of the population living in urban environments today, the concept of "Smart Cities" has become an urgent necessity. "Smart Cities" refer to an urban transformation which, by using latest ICT technologies makes cities more efficient. Many Governments are setting out ambitious plans to build the cities of the future based on massive connectivity, high bandwidth communications, intelligent sensors and analysis of huge volumes of data. Various researches have shown four key enablers for smart city success - Government leadership, suitable technology infrastructure, solid public-private partnerships and engaged citizens. It is around these enabling factors that telecoms companies can play a vital role in assisting governments to deliver on the smart city vision." The Emtel Knowledge Series is part of Emtel's 25th anniversary celebrations throughout the year, and the master of ceremonies, Kim Andersen, mentioned that there will be more upcoming events on a quarterly basis. As a representative of the Mauritius Software Craftsmanship Community (MSCC), I had absolutely no hesitation about joining in again. Following my visit to the first Emtel Knowledge Series workshop back in February this year, it was great to have another opportunity to meet and exchange with technology experts. But quite frankly, what is it with those buzzwords... As far as I remember, and as it was mentioned, "Cyber Island" is an old initiative from around 2005/2006 which was refreshed in 2010. It implies the empowerment of Information & Communication Technologies (ICT) as an essential factor of growth by the government here in Mauritius. Actually, the first promotional period of Cyber Island brought me here, but that's another story.

    The venue and its own problems

    Like last time, the event was organised and held at the Conference Hall at Cyber Tower I in Ebene. As I've been working there for some years, I know about the frustrating situation of finding a proper parking space. So, does Smart Mauritius include better solutions for finding parking spaces? Maybe; let's see whether I will be able to answer that question at the end of the article. Anyway, after circling the tower almost twice, I finally got a decent space for the car without actually risking a ticket or damage.

    International speakers and their experience

    Once again, Emtel did a great job of getting international expertise onto the stage to share their experience and vision on this kind of endeavour. Personally, I really appreciated the fact that they were speakers of global reach who could share first-hand knowledge. Johan Gott spoke about the fundamental change that the Swedish government ignited in order to move their society and workers' environment away from heavy industry towards a knowledge-based approach. Additionally, we spoke about the effort and transformation of New York City into a greener and more efficient Smart City. Given modern technology, he also advised that any kind of available Big Data should be opened to the general public - this openness would provide a playground for anyone to garner new ideas and most probably solid solutions that no one else had thought of before.
Emtel Knowledge Series on moving from Cyber Island to Smart Mauritius

Later during the afternoon, that exact statement regarding openness to and transparency of government-owned Big Data was emphasised again by the Danish speaker Kim Andersen and his former colleague Mika Jantunen from Finland. Mika continued to underline the important role of the government in providing a solid foundation for a knowledge-based society and mentioned that Finnish citizens have a constitutional right to broadband connectivity. Next to free higher (tertiary) education, Finland has already produced a good number of innovations, among them:

- First country to grant voting rights to women
- Free higher education
- Constitutional right to broadband connectivity
- Nokia
- Linux
- Angry Birds
- Sauna
- and others...

General access to the internet via broadband and/or mobile connectivity is surely a key factor towards Smart Cities - or better said, Smart Mauritius, given the dimensions and population of the island. CTO Paul Valette gave the audience a brief overview of the essential role that Emtel will play in moving Mauritius forward towards a knowledge-based and innovation-driven environment for its citizens. What I have seen looks really promising, and with recently published information that Mauritius has 127% mobile penetration - meaning more than one mobile, smartphone or tablet per person - it will be crucial to have the right infrastructure for these connected devices.

How would it be possible to achieve a knowledge-based society? YouTube to the rescue!

Seriously, gaining more knowledge requires fast access to educational course material, as explained by Dr Kaviraj Sukon, General Director of the Open University of Mauritius. According to him, a good number of high-profile universities in the world have opened their course libraries to the general public, among them edX, Coursera and Open University. Nowadays, you are actually able to learn for and earn a BSc or even MSc certification at your own pace - no need to attend classes on campus. It was really impressive to see the number of available hours - more than enough for a life-long learning experience!

Networking in the name of the MSCC

As briefly mentioned above, I was combining two approaches for this workshop: of course, getting the latest information and updates on available Emtel services, especially for my business here on the west coast of the island, but also meeting and greeting new people for the MSCC. And I think it was very positive on both sides. Let me quickly describe some of the key things that happened during the day:

- Met with Arnaud Meslier and Kellie, both Microsoft, to swap the latest information on IT events. Hereby, I got an invite to the Microsoft Windows Phone 8.1 Dev Camp.
- Got in touch with Arvin Lockee, Emtel, to check our options to meet with the data team, seizing the opportunity to arrange a visiting tour at the Emtel Data Centre.
- Had a great chat with Avinash Meetoo, Knowledge 7, Kim Andersen and Mika Jantunen about the situation of teaching and learning in general, and specifically in the private sector here in Mauritius.
- Additionally, a number of various other interesting chats...

Once again, I'm catching up on a couple of business cards in order to provide more background information about the MSCC, and to create better awareness of the MSCC within local IT businesses. There is more to come soon!

Resume of the day

The number of attendees at this event doubled or even tripled this time.
The whole organisation was improved massively, and the combination of presentations and summarising panel discussions was better than at the previous workshop back in February. Overall, once again a well-organised workshop, and I'm already looking forward to joining the next one in Q3.

Update

At the end of July we finally managed to visit the Emtel Data Centre in Arsenal. It was an interesting opportunity for some of our MSCC members.

    Read the article

  • Mauritius Software Craftsmanship Community

    There we go! I finally managed to push myself forward and pick up an old - actually far too old - idea that I have had since I arrived here in Mauritius more than six years ago. I'm talking about a community for all kinds of ICT-connected people. In the past (back in Germany), I used to be involved in various community activities. For example, I was part of the Microsoft Community Leader/Influencer Program (CLIP) in Germany due to an FAQ on Visual FoxPro, actually Active FoxPro Pages (AFP) to be more precise. Then in 2003/2004 I approached the person responsible for the dFPUG user group in Speyer in order to assist him in organising monthly user group meetings. Well, he handed over management completely, and attended our meetings regularly.

Why did it take you so long?

Well, I don't want to bother you with the details, but the short version is that I was too busy with either work (building up new companies) or private life (got married and we have two lovely children, eh 'monsters') or even both. But now the time has come where I can look for new fields, given the fact that I have gained some spare time. My businesses are up and running, the kids are in school, and I am finally in a position where I can commit myself again to community activities. And I love doing that!

Why a new user group?

Good question... And 'easy' to answer. Back in 2007 I did my usual research, eh Google searches, to see whether there were existing user groups in Mauritius, and in which fields of interest. And yes, there are! If I recall correctly, there are communities for PHP, Drupal, Python (just recently), Oracle, and Linux (which used to be even two). But... either they do not exist anymore, they are dormant, or there is only a low heartbeat, frankly speaking. And yes, I went to meetings of the Linux User Group Meta (Mauritius) back in 2010/2011 and just recently. I really like the setup and the way the LUGM is organised. It's just that I have a slightly different point of view on how a user group or community should organise itself and how to approach future members. Don't get me wrong, I'm not criticising others doing a very good job; I'm only saying that I'd like to do it differently. The last meeting of the LUGM was awesome; read my feedback about it.

Ok, so what's up with the 'Mauritius Software Craftsmanship Community', or MSCC for short?

As I've already written in my article on 'Communities - The importance of exchange and discussion', I think it is essential in the world of IT to stay 'connected' with a good number of other people in the same field. There is so much dynamism and daily news that it is almost impossible to keep track of it all. The MSCC is going to provide a common platform to exchange experience and share knowledge with each other. You might be a newbie who wants to know what to expect working as a software developer, or as a database administrator, or maybe as an IT systems administrator; or you're an experienced geek who loves to share ideas or the solutions you implemented to solve a specific problem; or you're the business (or HR) guy who is looking for 'fresh' blood to reinforce your existing team. Or... you're just interested and you'd like to communicate with like-minded people.

Meetup of 26.06.2013 @ L'arabica: Of course there are laptops around. Free WiFi, power outlet, coffee, code and Linux in one go.

The MSCC is technology-agnostic and spans an umbrella over any kind of technology, simply because you can't ignore other technologies anymore in a connected IT world such as ours.
A front-end developer for iOS applications should have the chance to connect with a Python back-end coder and eventually with a DBA for MySQL or PostgreSQL and exchange experience. Furthermore, I'm a huge fan of cross-platform development, and it is very pleasant to have pure web developers - with all that HTML5, CSS3, JavaScript and JS libraries stuff - and passionate C# or Java coders at the same table. This diversity of knowledge can assist and boost your personal situation. And last but not least, there are projects and open positions 'flying' around... People might like to hear others' opinions about an employer, or get new impulses on how to tackle an issue at their workplace, etc. This is about community. And that's how I see the MSCC in general - free of any limitations, be it by programming language or technology. Having the chance to exchange experience and to discuss certain aspects of technology saves you time and money, and it's a pleasure to enjoy compared to dusty books and remote online resources. It's human!

Organising meetups (meetings, get-togethers, gatherings - you name it!)

As of writing this article, the MSCC is meeting every Wednesday for the weekly 'Code & Coffee' session at various locations (suggestions are welcome!) in Mauritius. This might change eventually, but especially at the beginning I think it is very important to create awareness in the Mauritian IT world. Yes, we are here! Come and join us! ;-)

The MSCC's main online presence is located at Meetup.com, because it allows me to handle the organisation of events and meeting appointments very easily, and any member can see who else is involved, so that an exchange of contacts is possible at any time. In combination with the other entities (G+ Communities, FB Pages or Groups) I advertise and manage all future activities here: Mauritius Software Craftsmanship Community.

This is a community for those who care about and are proud of what they do. For those developers, regardless of how experienced they are, who want to improve and master their craft. This is a community for those who believe that being average is just not good enough. I know, there are not many 'craftsmen' yet, but it's a start... Let's see how it looks by the end of the year.

There are free smartphone apps for Android and iOS from Meetup.com that allow you to keep track of meetings and stay informed on the latest updates. And last but not least, there is a Trello workspace to collect and share ideas and provide downloads of slides, etc. Trello is also available as a free smartphone app.

Sharing is caring!

As mentioned, the #MSCC is present in various social media networks in order to reach as many people as possible here in Mauritius. Following is an overview of the current networks:

- Twitter - Latest updates and quickies
- Google+ - Community channel
- Facebook - Community Page
- LinkedIn - Community Group
- Trello - Collaboration workspace to share and develop ideas

Hopefully, this covers the majority of computer-related people in Mauritius. Please spread the word about the #MSCC to your colleagues, your friends and other interested 'geeks'.

Your future looks bright

Running and participating in a user group or any kind of community usually provides quite a number of advantages for anyone.
On the one hand, it is very enjoyable for me to organise appointments and get in touch with people who might be interested in presenting a little demo of their projects or the recent problems they had to tackle; on the other hand, there are lots of companies that have various support programmes or sponsorships especially tailored for user groups. At the moment, I already have a couple of gimmicks that I would like to hand out in small contests or raffles during one of the upcoming meetings, and as said, companies provide all kinds of goodies, books free of charge, or sometimes even licenses for communities. Meeting other software developers or IT guys also opens up your view of the local market, and there might be interesting projects or job offers available, too. A community like the Mauritius Software Craftsmanship Community is great for freelancers, the self-employed, students and of course employees.

Meetings will be organised on a regular basis, and I'm open to all kinds of suggestions from you. Please leave a comment here in the blog or join the conversations in the above-mentioned social networks. Let's get this community up and running, my fellow Mauritians!

Recent updates

The MSCC is now officially participating in the O'Reilly UK User Group programme, and we are allowed to request review copies of recent titles. Additionally, we have a discount code for any books or ebooks that you might like to order on shop.oreilly.com. More applications for user group sponsorship programmes are pending, and I'm looking forward to a couple of announcements very soon. And... we need some kind of 'corporate identity' - over at the MSCC website there is a call for action (or better said, a contest with prizes) to create a unique design for the MSCC. This would include a decent colour palette, a logo, graphical banners for Meetup, Google+, Facebook, LinkedIn, etc., and of course badges for our craftsmen to add to their personal blogs and websites. Please spread the word and contribute. Thanks!

    Read the article

  • MSCC: Purpose and benefits of Version Control Systems (VCS)

    Unfortunately, there was no monthly meetup during May, which meant it was even more important and interesting to go forward with a great topic for this month. Earlier this year I had already spoken to Nayar Joolfoo about doing a presentation on version control systems (VCS), and he gladly agreed. It was just about finding the right date. Furthermore, it was a great coincidence that Avinash Meetoo announced on social media networks that Knowledge 7 is about to run a new training course on "Effective git" - which correlates with a book title Avinash is currently working on. All the best with your approach on this, and reach out to our MSCC craftsmen for reviews.

Once again a big thank you to Orange Ebene Accelerator for providing the venue for us, and to the MSCC members involved in securing the time slot for our event. Unfortunately, it's kind of tough to get an early confirmation for our meetups these days. I'll keep you posted on that one, as there are some interesting and exciting options coming up soon.

Okay, let's talk about the meeting and version control systems. As usual, here is my first impression of the meetup: "Absolutely great topic, questions and discussions on version control systems, like git or VSO. I was also highly pleased by the number of first-timers and female IT geeks. Hopefully, we will be able to keep this trend for future get-togethers."

And I really have to emphasise the amount of fresh blood coming to our gathering. Also, during the initial phase it was surprising to see that exactly those first-timers, most of them students at various campuses here on the island, had absolutely no idea about version control systems. More about that further down...

Reactions of other attendees

If I counted correctly, we had a total of 17 attendees this month, and I'd like to give you feedback from some of them:

"Inspiring. Helped me understand more about GIT." -- Sean on event comments

"Joined the meetup today with literally no idea what is a version control system. I have several reasons why I should be starting to use VCS as from NOW in my projects. Thanks Nayar, Jochen and other participants :)" -- Yudish on event comments

"Was present today and I'm very satisfied. I was not aware if there was a such tool like git available. Thanks to those who contributed for this meetup. It was great. Learned a lot from this meetup!!" -- Leonardo on event comments

"Seriously, I can see how it's going to ease my task and help me save time. Gone are the issues with files backups. And since I'll be doing my dissertation this year, using Git would help me a lot for my backups and I'm grateful to Nayar for the great explanation." -- Swan-Iyah on MSCC meetup : Version Controls

Hopefully, I'll be able to get some other sources - personal blogs preferred - on our meeting. Geeks, thank you so much for those encouraging comments. It's really great to experience that we, all members of the MSCC, are doing the right thing to get more IT information out, and to help each other improve and evolve in our professional careers.

Our agenda of the day

Honestly, we had a bumpy start... First, I was battling a little bit with the movable room divider in order to maximise the space. I mean, we had 24 RSVPs, and usually there might be additional people coming along. Then, for whatever reason, we faced power outages - twice within a short period. Not too good for the projector after all, but hey, it went smoothly for the rest of the session. And last but not least...
our first speaker Nayar got stuck somewhere on the road. ;-) Anyway, not a real show-stopper, and we used the time until Nayar's arrival to introduce ourselves a little bit. It is always important for me to get to know the "newbies" a little, and as a result we had lots of university students - first year, second year and recent graduates - among them. Surprisingly, none of them had ever been in contact with version control systems at all. I mean, this is a shocking discovery! Similar to the ability to touch-type, I'd say that being able to use (and master) any kind of version control system is compulsory for any job in the IT industry. Seriously, I'm wondering what is being taught in classes on campus. All of them have to work on semester assessments or final projects, even in small teams of 2-4 people. That's the perfect occasion to get started with VCS. Already in this phase, we had great input from more experienced VCS users, like Sean, Avinash and myself.

git - a modern approach to VCS - Nayar

What a tour! Nayar gave us the full round of git from start to finish, even touching on some more advanced techniques. First, he explained the importance of version control systems as an essential tool for software developers, even when working alone on a project, and the ability to have a kind of "time machine" that allows you to inspect and revert to a previous version of the source code at any time. Then he showed how easy it is to install git on an Ubuntu-based system, but also mentioned that git is available for literally any operating system, like Windows, Mac OS X and of course other Linux distributions. Next, he showed us how to set the initial configuration values of user name and email address, which simplifies the daily usage of the git client while working with your repositories. Then he initialised and added a new repository for some local development of blogging software. All commands were done using the command line interface (CLI) so that they could be repeated on any system as a reference. The syntax and the procedure are always the same, and Nayar clearly mentioned this to the attendees. Now, having a git repository in place, it was about time to work on some "important" changes to the blogging software - just for the sake of demonstrating the ease of use and power of git. One interesting question came very early: "How many commands do we have to learn? It looks quite difficult at the moment." - Well, rest assured that during daily development cycles you will need fewer than 10 git commands on a regular basis: git add, commit, push, pull, checkout, and merge. And Nayar demo'd all of them; a minimal sketch of that workflow is shown below. Much to the delight of everyone, he also showed gitk, which is the git repository browser. It's a UI tool to display changes in a repository or a selected set of commits. This includes visualising the commit graph, showing information related to each commit, and the files in the trees of each revision.

Using gitk to display and browse information of a local git repository

And last but not least, we took advantage of the internet connectivity and reached out to various online portals offering free git hosting. Nayar showed us how to push the local repository to a remote system on GitHub, walked through the web-based repository browser and history handling, and then explained and demo'd how to connect to existing online repositories in order to get access to either your own source code or other people's open source projects.
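For reference, here is a minimal sketch of the everyday workflow Nayar demonstrated. The repository name, commit message and remote URL are made-up placeholders for illustration, not the ones used during the session:

    # Identify yourself once per machine; git records this in every commit.
    git config --global user.name "Jane Doe"
    git config --global user.email "jane@example.com"

    # Create a local repository, record a first change and inspect the history.
    git init myblog
    cd myblog
    echo "# My blog engine" > README.md
    git add README.md
    git commit -m "Add initial README"
    git log --oneline

    # Publish the repository to a remote host such as GitHub (placeholder URL).
    git remote add origin https://github.com/janedoe/myblog.git
    git push -u origin master

From there, git pull fetches and merges changes from the remote, git checkout switches between branches, and git merge brings a finished branch back into your main line of development.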
Next to GitHub, we also spoke about Bitbucket and GitLab as potential online platforms for your projects. Have a look at the conditions and details of their free service packages and what you can get additionally as a paying customer. Usually, you already get a lot of services for up to five users for free, but there might be other important aspects that have an impact on your decision. Anyway, moving git-based repositories between systems is a piece of cake, and changing online platforms is possible at any stage of your development.

Visual Studio Online (VSO) - Jochen

Well, Nayar literally covered all elements of working with git during his session, including the use of external online platforms. So, what would be the advantage of talking about Visual Studio Online (VSO)? First of all, VSO is "just another" online platform for hosting and managing git repositories on remote systems, equivalent to GitHub, Bitbucket, or any other website. At the moment of writing, Microsoft also provides a free package of up to five users/developers on a git repository, but there is more in that package. Of course, it is geared towards software development on Windows systems, and the bonds are tightened towards the use of Visual Studio, but from experience you are absolutely not restricted to that. Connecting a Linux or Mac OS X machine with a git client or an integrated development environment (IDE) like Eclipse or Xcode works as smoothly as expected.

So, why should one opt in for VSO? Well, one of the main aspects that I would like to mention here is that VSO integrates Microsoft's Application Lifecycle Management (ALM) methodology in its platform. This means you get agile project management with backlogs, sprints and burn-down charts, as well as the ability to track tasks, bug reports and work items, next to collaborative team chats. It's the whole package of agile development you'll get. And, something I mentioned briefly at the beginning of our meeting, VSO gives you the possibility of an automated continuous integration (CI) process which builds and can run tests of your source code after each commit of changes. Having a proper CI strategy is also part of the Clean Code Developer practices - on Level Green, actually - and not only simplifies your life as a software developer but also reduces the sources of potential errors.

Seamless integration and automated deployment between Microsoft Azure Web Sites and a git repository

But my favourite feature is the seamless continuous deployment to Microsoft Azure. Especially while working on web projects, it's absolutely astonishing that as soon as you commit your changes it takes just a couple of seconds until your modifications are deployed and available on your Azure-hosted websites.

Upcoming events and networking

Due to the adjusted times, everybody was kind of hungry and we didn't follow up on networking or upcoming events - very unfortunate in my opinion, and this will have an impact on the future planning of our meetups. I would rather see more conversations during and at the end of our meetings than everyone just packing up their laptops, bags and accessories and rushing off to grab some food. I was hoping to get some information regarding this year's Code Challenge - supposedly to be organised during July? Maybe someone could leave a comment on that - but I couldn't get any updates. Well, I'll keep digging...
In case you would like to get more into git and how to use it effectively, please check out Knowledge 7's upcoming course on "Effective git". Thanks, Avinash, for your vital input into today's conversation; I'm looking forward to getting a grip on your book very soon.

My resume of the day

Do not work in IT without any kind of version control system! Seriously, without a VCS in place you're doing it wrong. It's like driving a car without seat belts attached or riding your bike without a safety helmet. You don't do that! End of discussion. ;-) Nowadays, with access to free (as in cost) tools to install on your machine and numerous online platforms that host your source code for free for up to five users, it's a no-brainer to get yourself familiar with VCS. Today's sessions gave a good overview of how to start using git and how to connect to various remote services like GitHub or VSO; a sketch of the Azure deployment flow mentioned above follows below.
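As a footnote to the continuous deployment feature mentioned above, here is a hedged sketch of how pushing to an Azure Web Site looked at the time. The site name, user name and URL are placeholders, and the sketch assumes Git deployment has already been enabled for the site in the Azure portal:

    # Add the Azure Web Site's Git endpoint as an additional remote (placeholder URL).
    git remote add azure https://janedoe@myblog.scm.azurewebsites.net:443/myblog.git

    # Each push triggers a build and deployment on the Azure side;
    # a few seconds later the changes are live on the site.
    git push azure master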

    Read the article

  • Clean Code Development & Flexible work environment - MSCC 26.10.2013

    Finally, some spare time to summarise my impressions and experiences of the recent meetup of the Mauritius Software Craftsmanship Community. I already posted my comment on the event and on our social media networks: "Professional - It's getting better with our meetups and I really appreciated that 'seniors' and 'juniors' were present today. Despite running a little bit out of time it was really great to see more students coming to the gathering."

This time we changed location for our Saturday meetup, and it worked out very well. A big thank you to Ebene Accelerator, namely Mrs Poonum, for letting us use their meeting rooms for our community get-together. Already some weeks ago I had a very pleasant conversation with her about the MSCC's aims, 'mission' and how we organise things. Additionally, I think that an environment like the Ebene Accelerator is a good choice, as it acts as an incubator for young developers and start-ups.

Reactions from other craftsmen

Before I put down my thoughts about our recent meeting, I'd like to mention and cross-link some of the other craftsmen who were present:

"MSCC meet up is a massive knowledge gaining strategies for students, future entrepreneurs, or for geeks all around. Knowledge sharing becomes a fun. For those who have not been able to made it do subscribe on our MSCC meet up group at meetup.com." -- Nitin on Learning is fun with #MSCC #Ebene Accelerator

"We then talked about the IT industry in Mauritius, salary issues in various field like system administration, software development etc. We analysed the reasons why people tend to hop from one company to another. That was a fun debate." -- Ish on MSCC meetup - Gang of Geeks

"Flexible Learning Environment was quite interesting since these lines struck cords : "You're not a secretary....9 to 5 shouldn't suit you"....This allowed reflection...deep reflection....especially regarding the local mindset...which should be changed in a way which would promote creativity rather than choking it till death..." -- Yannick on 2nd MSCC Monthly Meet-up

And others on Facebook... ;-) Visual impressions are available on our Meetup event page.

More first-time attendees

With great pleasure I noticed that we once again had more first-time visitors. A quick overview showed that the majority were UoM students in their first, second or final year. Some of them are already participating in the UoM Computer Club or have been nominated as members of the Microsoft Student Partner (MSP) programme. Personally, I really appreciate the fact that the MSCC is able to gather such a broad audience. And as I wrote initially, the MSCC is technology-agnostic; we want IT people from any segment of this business. Of course, students who are about to delve into the 'real world' of work are highly welcome, and I hope that they might pick up one or another glimpse of experience or advice from the employees present.

Sticking to the schedule? No, not really...

And honestly, it was a good choice to go a little off the beaten track. I mean, yes, we have a 'rough' agenda of topics that we would like to talk about or have a presentation on, but we keep it 'agile'. Due to the high number of new faces, we initiated another quick round of introductions, and I gave a really brief overview of the MSCC. Next, we started to reflect on Clean Code Developer (CCD) - Red Grade, which we introduced at the last meetup. Nirvan was the lucky one, and he did a good job of summarising the various abbreviations of the first level of being a CCD.
More interestingly, we exchanged experience about the principles and practices of Red Grade, and it was very informative to learn that Yann actually 'interviewed' a couple of friends, other students, locals working in IT companies, as well as some IT friends from India, in order to cross-check what he had learned first-hand about Clean Code. Currently, he is reading Robert C. Martin's book on that topic, and I'm looking forward to his review soon.

More output generates more input

What seems like a personal mantra has been working out pretty well for me since the beginning of this year. Being more active on social media networks, writing more articles on my blog, starting the Mauritius Software Craftsmanship Community, and contributing more to other online communities has helped me to receive more project requests, job offers and possibilities to expand my business at IOS Indian Ocean Software Ltd. Actually, it is not a coincidence that one of the questions new craftsmen should answer during registration asks about having a personal blog. Whether you are just curious about IT, right in the middle of your computer studies, or have already been working in software development or system administration for a while, you should consider advertising and marketing yourself online. The easiest ways to do this are to have online profiles on professional social media networks like LinkedIn, Xing, Twitter, and Google+ (Facebook should be considered private only), and to have a personal blog. Why? -- Be yourself, be proud of your work, and let other people know that you're passionate about your profession. Trust me, this is going to open up opportunities you might not have dreamt about...

Exchanging ideas about having a professional online presence - MSCC meetup on the 26th October 2013

Furthermore, consider putting your curriculum vitae online, too. There are quite a number of service providers, like 1ClickCV or Stack Overflow Careers 2.0, which give you the ability to keep an up-to-date CV online. At least put it on your site, next to your personal blog - similar to what you can see on my site here.

Cyber Island Mauritius - are we there?

A couple of weeks ago I got a 'cold' message on LinkedIn from someone living in the U.S. asking about the circumstances and conditions of the IT world in Mauritius. He has a great business idea and venture capital, and is currently looking for a team of software developers (mainly mobile - iOS) for a new startup here in Mauritius. Since then we have exchanged quite some details through private messages and Skype conversations, and I suggested that it might be a good chance to join our meetup through a conference call and see potential candidates for himself. For approximately 30 to 40 minutes the idea of the new startup was presented - very promising state-of-the-art technology aspects and integration of various public APIs - and we had a good Q&A session about it. Also, thanks to the excellent bandwidth provided by the Ebene Accelerator, the video conference between the three parties went absolutely well.

Clean Code Developer - Orange Grade

Hahaha - nice one... Being at the Orange Tower in Ebene and then talking about an Orange Grade of CCD.
Well, once again I provided an overview of the principles and practices at that rank of Clean Code, and similar to our last meetup we discussed the various aspects of each principle, whether someone had already come across it during studies or work, and how it could affect their future view of their source code. Following are the principles and practices of Clean Code Developer - Orange Grade:

CCD Orange Grade - Principles
- Single Level of Abstraction (SLA)
- Single Responsibility Principle (SRP)
- Separation of Concerns (SoC)
- Source Code conventions

CCD Orange Grade - Practices
- Issue Tracking
- Automated Integration Tests
- Reading, Reading, Reading
- Reviews

Especially the part on reading technical books got some extra attention. We quickly gathered our views on that and came up with results ranging between zero (0) and fifteen (15) book titles per year. Personally, I'm keeping my pace between six (6) and eight (8) titles per year, but at least one (1) per quarter. This is also connected to the fact that I'm participating in the O'Reilly Reader Review Program and have the additional benefit of access to free books simply by writing and publishing a review afterwards.

We also had a good exchange on the extended topic of 'Reviews' - which in my opinion is abnormally difficult here in Mauritius, for various reasons. As far as I can tell from my experience working with Mauritian software developers - either as colleagues, employees or during consulting services - there are unfortunately two dominant patterns on that topic:

- Keeping quiet
- Running away

Honestly, I have no evidence as to why these are the two 'solutions' to reviews, but that's the situation I have had to face over the last couple of years. Sitting together and talking about problematic issues, tackling the root causes of demotivating activities and working on general improvements doesn't seem to have any ground within the IT world of Mauritius.

Are you a typist or a creative software craftsman? - MSCC meetup on the 26th October 2013

One very good example that we talked about was 'job hoppers', as you can easily observe on someone's CV - those people change jobs every single year, for no obvious reason! Frankly speaking, I wouldn't even consider an IT person like that for an interview. As a company you're investing money and effort into the abilities of your employees. Hiring someone who won't stay for a longer period is out of the question. And sorry to say, this kind of IT guy seems fishy regarding their capabilities, and is more likely to cause problems than to actually produce productive results. One of the reasons why there is a probation period in an employment contract is to give you the liberty to leave as early as possible in case you don't like your new position. Don't fool yourself or waste other people's time and money by hanging around a full year only to snatch the bonus payment...

Future outlook: Developers Conference

Even though it is not official yet, I have already mentioned it several times during our weekly Code & Coffee sessions. The MSCC is looking forward to organising or contributing to an upcoming IT event. Currently, the rough schedule is set for April 2014, but this mainly depends on the availability of location(s), a decent time frame for preparations, and the underlying procedures with public bodies to have it approved, and so on.
As soon as the date and location have been fixed there will be a 'Call for Papers' period in order to attract local IT enthusiasts to apply for a session slot and talk about their field of work and their passion in IT. More to come for sure...

My resume of the day

It was a great gathering and I am very pleased that we had another 15 craftsmen (plus 2 businessmen on a conference call, plus 2 young apprentices) in the same room, talking about IT-related topics and sharing their experience as employees and students. Personally, I really appreciated the feedback from the students about their current view of their future careers, and I really hope that some of them are going to pursue their dreams. Start promoting yourself and it will happen... Looking forward to your blogs!

And last but not least, our numbers on Meetup and Facebook have increased as a direct consequence of this meetup. Please spread the word about the MSCC and get your friends and colleagues to join our official site. The more craftsmen we have, the better the chances that we achieve something great! Thanks!

    Read the article

  • Microsoft, please help me diagnose TFS Administration permission issues!

    - by Martin Hinshelwood
    I recently had a fun time trying to debug a permission issue I ran into using TFS 2010's TfsConfig.

Update 5th March 2010 – In its style of true excellence my company has added this rant to its "Suggestions for Better TFS".

<rant> I was trying to run the TfsConfig tool and I kept getting the message: "TF55038: You don't have sufficient privileges to run this tool. Contact your Team Foundation system administrator."

This message made me think that it was something to do with the install permissions, as it is always recommended to use a single account to do every install of TFS. I did not install the original TFS on our network, and my account was not used to do the TFS 2010 install. But I did do the upgrade from 2010 Beta 2 to 2010 RC with my current account. So I proceeded to do some checking:

- Am I in the administrators group on the server? Figure: Yes, I am in the administrators group on the server.
- Am I in the Administration Console users list? Figure: Yes, I am in the Administration Console users list.
- Have I reapplied the permissions in the Administration Console users list, ticking all the options? Figure: Make sure you check all of the boxes if you want to have all the admin options. Figure: Yes, I have made sure that all my options are correct.
- Am I in the Team Foundation administrators group? Figure: Yes, I am in the Team Foundation Administrators group.
- Is my account explicitly SysAdmin on the database server? Figure: Yes, I do have explicit SysAdmin on the database.

Can you guess what the problem was? The command line window was not running as the administrator! As with most other applications there should be an explicit error message that states: "You are not currently running in administrator mode; please restart the command line with elevated privileges!" This would have saved me 30 minutes, although I agree that I should change my name to Muppet and just be done with it. </rant>

Technorati Tags: Visual Studio ALM,Administration,Team Foundation Server Admin Console,TFS Admin Console
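A quick way to avoid the same 30-minute detour is to verify elevation before running TfsConfig at all. This is a hedged PowerShell sketch using the standard .NET security classes, not something from the original post:

    # Returns True only if the current shell runs with elevated (administrator) privileges.
    $identity  = [Security.Principal.WindowsIdentity]::GetCurrent()
    $principal = New-Object Security.Principal.WindowsPrincipal($identity)
    $principal.IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)

If this prints False, restart the console via "Run as administrator" before blaming TFS permissions.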

    Read the article

  • SharePoint – The Most Important Feature

    - by Bil Simser
    Watch Twitter and do a search for SharePoint and you'll see a lot (almost one every few minutes) of tweets about the top 10 new features in SharePoint. What answer do you get when you ask the question, "What's the most important feature in SharePoint?" Chances are the answer will vary. Some will say it's the collaboration aspect, others might say it's the new ribbon interface, multi-item editing, external content types, faceted search, large list support, document versioning, Silverlight, etc. The list goes on. However, I think most people might be missing the most important feature that's been sitting right under their noses all this time. The most important feature of SharePoint? It's called User Empowerment. Huh? What? Is that something I find in the Site Actions menu? Nope. It's something that's always been there in SharePoint; you just need to get the word out and support it.

How many times have you had a team ask you for a team site (assuming you had SharePoint up and running)? Or to create them a contact list? Or how long have you employed that guy in the corner who's been copying and pasting content from Corporate Communications into the web from a Word document? Let's stop the insanity. It doesn't have to be this way.

SharePoint's strongest feature isn't anything you can find in the Site Settings screen or Central Admin. It's all about empowering your users and letting them take control of their content. After all, SharePoint really is a bunch of tools to allow users to collaborate on content, isn't it? So why are you stepping in as IT and helping the user every moment along the way? It's like having to ask users to fill out a help desk ticket or call up the Windows team to create a folder on their desktop or rearrange their Start menu. This isn't something IT should be spending their time doing, nor is it something the users should be burdened with, having to wait until their friendly neighborhood tech guy (or gal) shows up to help them sort the icons on their desktop.

SharePoint IS all about empowerment. Site owners can create whatever lists and libraries they need for their team, and if the template isn't there they can always turn to my friend and yours, the Custom List. From that can spew forth approval tracking systems, new-hire checklists, and server inventories. You're only limited by your imagination and needs. Users should be able to create new sites as they need them. Want a blog to let everyone know what your team is up to? Go create one, here's how. What's a blog, you ask? Here's what it is and why you would use one. SharePoint is the shift in the balance of power, and you need, as an IT group, to let go of certain responsibilities and let your users run with the tools. A power user who knows how to create sites and what features are available to them can help a team go from the forming stage to the storming stage overnight. Again, this all hinges on you as an IT organization and what features you can and will empower your users with.

Running with tools is great if you know how to use them; running with scissors is not recommended unless you enjoy trips to the hospital. With Great Power comes Great Responsibility, so don't go out on Monday and send a memo to the organization saying "This Bil guy says you peeps can do anything so here it is, knock yourself out" (for one, they'll have *no* idea who this Bil guy is). This advice comes with the task of getting your users ready for empowerment.
Whether it's through internal training sessions; in-house documentation, videos, or blog posts on how to accomplish things in SharePoint; or full-blown one-on-one sit-downs with teams or individuals to help them through their problems - the work is up to you. Helping them along should also be part of your governance (you do have one, don't you?). Just because you have the InfoPath client deployed with your Office suite doesn't mean users should just start publishing forms all over your SharePoint farm. There should be some governance behind that, covering what you'll support and what is possible.

The other caveat to all this is that SharePoint is not everything for everyone. It can't cook you breakfast and impregnate your cat or solve world hunger. It also isn't suited for every IT solution out there. It's a horrible source control system (even though some people try to use it as such) and really can't do financials worth a darn. Again, governance is key here, and part of that governance - and your responsibility in setting up and unleashing SharePoint into your organization - is to provide users guidance on what should be in SharePoint and (more importantly) what should not be in SharePoint. There are boundaries you have to set where you don't want your end users going, as they might be treading into trouble. Again, it is up to you to set these constraints and help users understand why the pylons are there. If someone understands why they can't do something, they might have a better understanding of, and respect for, those who put the constraints there in the first place. Of course you'll always have the power users who want to go skiing down dead man's curve, so this doesn't work for everyone, but you can catch the majority of the newbs who don't wander aimlessly off the beaten path.

At the end of the day, when all things are going swimmingly, your end users should be empowered to solve the needs they have on a day-to-day basis without having to keep bugging the IT department to help them create a view showing only approved documents. I wouldn't go as far as having business users build out full-blown solutions - handing the keys to SharePoint Designer or (worse) Visual Studio to power users might not be a path you want to go down - but you also don't have to lock up the SharePoint system in a tight box where users can't use what's there. So stop focusing on the shiny things in SharePoint and maybe consider making a shift to what's really important: making your day job easier and letting users get the most out of your technology investment.

    Read the article

  • How to update all the SSIS packages’ Connection Managers in a BIDS project with PowerShell

    - by Luca Zavarella
    During the development of a BI solution, we all know that 80% of the time is spent in the ETL (Extract, Transform, Load) phase. If you use the BI stack provided by Microsoft SQL Server, this step is accomplished by developing a number of Integration Services (SSIS) packages. In general, the number of packages created in the ETL phase for a non-trivial BI solution is quite significant. An SSIS package, therefore, extracts data from a source, "hammers" :) the data and then transfers it to a specific destination. Very often the connection to the source data is the same for all packages. Using Integration Services, this results in having the same Connection Manager (perhaps with the same name) in all packages:

The source data of my BI solution comes from a Helper database (HLP); so, for each package that imports this data, I have the HLP Connection Manager (the use of a Shared Data Source is not recommended, because the Connection String is hard-wired, and therefore you have to open the SSIS project and use the proper wizard to change it...). In order to change the HLP Connection String at runtime, we could use Package Configurations, or we could run our packages with DTLoggedExec by Davide Mauri (a must-have if you are developing with SQL Server 2005/2008). But my need was to change all the HLP connections in all packages within the SSIS Visual Studio project, because I had to version them through Team Foundation Server (TFS). A good scribe with a lot of patience would have changed all the connections by hand, double-clicking the HLP Connection Manager of each package and then changing the referenced server/database:

Not being endowed with such virtues :) I took a little time to write a small script in PowerShell, using the fact that an SSIS package (a .dtsx file) is nothing but an XML file, and therefore can be changed quite easily. I'm not a guru of PowerShell, but I managed more or less to put together the following lines of code:

    # Delimiters that surround the database name inside the .dtsx Connection String.
    $LeftDelimiterString = "Initial Catalog="
    $RightDelimiterString = ";Provider="

    # Database name to search for, and its replacement.
    $ToBeReplacedString = "AstarteToBeReplaced"
    $ReplacingString = "AstarteReplacing"

    # Folder that contains the SSIS packages.
    $MainFolder = "C:\MySSISPackagesFolder"

    # Collect all .dtsx files in the folder (skipping sub-directories).
    $files = get-childitem "$MainFolder" *.dtsx `
        | Where-Object {!($_.PSIsContainer)}

    # Rewrite each package, replacing the catalog name between the two delimiters.
    foreach ($file in $files)
    {
        (Get-Content $file.FullName) `
            | % {$_ -replace "($LeftDelimiterString)($ToBeReplacedString)($RightDelimiterString)", "`$1$ReplacingString`$3"} `
            | Set-Content $file.FullName;
    }

The script above just opens every SSIS package (.dtsx) in the supplied folder and, for each of them, searches for the following text:

Initial Catalog=AstarteToBeReplaced;Provider=

and replaces it with this:

Initial Catalog=AstarteReplacing;Provider=

I won't go into the details of each cmdlet used; I leave that to the reader. Alternatively, you can use the specific object model exposed in some .NET assemblies provided by Integration Services, or you can use the Pacman utility. Enjoy! :)

P.S. Using TFS as the versioning system, before running the script I checked out the packages and, after the script executed successfully, I checked them in.
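Before rewriting anything, it can be worth doing a read-only pass to see which packages actually contain the old catalog name. This is a hedged sketch of such a dry run; the folder path is the same placeholder as above:

    # Dry run: list every .dtsx file that still references the old catalog name,
    # together with the line number of each match.
    Get-ChildItem "C:\MySSISPackagesFolder" -Filter *.dtsx |
        Select-String -Pattern "Initial Catalog=AstarteToBeReplaced" |
        Select-Object Path, LineNumber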

    Read the article

  • So, how is the Oracle HCM Cloud User Experience? In a word, smokin’!

    - by Edith Mireles-Oracle
    By Misha Vaughan, Oracle Applications User Experience

Oracle unveiled its game-changing cloud user experience strategy at Oracle OpenWorld 2013 (remember that?) with a new simplified user interface (UI) paradigm. The Oracle HCM Cloud user experience is about lightweight interaction, tailored to the task you are trying to accomplish, on the device you are comfortable working with. A key theme for the Oracle user experience is being able to move from smartphone to tablet to desktop, with all of your data in the cloud. The Oracle HCM Cloud user experience provides designs for better productivity, no matter when and how your employees need to work.

Release 8

Oracle recently demonstrated how fast it is moving development forward for our cloud applications, with the availability of Release 8. In Release 8, users will see expanded simplicity in the HCM Cloud user experience, for tasks such as filling out a time card and succession planning. Oracle has also expanded its mobile capabilities with task flows for payslips, managing absences, and advanced analytics. In addition, users will see expanded extensibility with the new structures editor for simplified pages, and with the user interface text editor, which allows you to update language throughout the UI from one place. If you don't like calling people who work for you "employees," you can use this tool to create a term that is suited to your business. Take a look yourself at what's available now.

What are people saying?

Debra Lilley (@debralilley), an Oracle ACE Director who has a long history with Oracle Applications, recently gave her perspective on Release 8: "Having had the privilege of seeing a preview of release 8, I am again impressed with the enhancements around simplified UI. Even more so, at a user group event in London this week, an existing Cloud HCM customer speaking publically about his implementation said he was very excited about release 8 as the absence functionality was so superior and simple to use."

In an interview with Lilley for a blog post by Dennis Howlett (@dahowlett), we probably couldn't have asked for a more even-handed look at the Oracle Applications Cloud and the impact of user experience. Take the time to watch all three videos and get the full picture. In closing, Howlett said: "There is always the caveat that getting from the past to Fusion [from the editor: Fusion is now called the Oracle Applications Cloud] is not quite as simple as may be painted, but the outcomes are much better than anticipated in large measure because the user experience is so much better than what went before."

Herman Slange, Technical Manager with Oracle Applications partner Profource, agrees with that comment. "We use on-premise Financials & HCM for internal use. Having a simple user interface that works on a desktop as well as a tablet for (very) non-technical users is a big relief. Coming from E-Business Suite, there is less training (none) required to access HCM content. From a technical point of view, having the abilities to tailor the simplified UI very easy makes it very efficient for us to adjust to specific customer needs. When we have a conversation about simplified UI, we just hand over a tablet and ask the customer to just use it.
No training and no explanation required."

Finally, in a story by Computer Weekly about Oracle customer BG Group, a natural gas exploration and production company based in the UK with a presence in 20 countries, the author states: "The new HR platform has proved to be easier and more intuitive for HR staff to use than the previous SAP-based technology."

What's next for Oracle's Applications Cloud user experiences?

This is the question that Steve Miranda, Oracle Executive Vice President, Applications Development, asks the Applications User Experience team, and we've been hard at work on "what's next" for some time now. I can't say too much about it, but I can tell you that we've started talking to customers and partners, under non-disclosure agreements, about user experience concepts that we are working on in order to get their feedback. We recently had a chance to talk about possibilities for the Oracle HCM Cloud user experience at an Oracle HCM Southern California Customer Success Summit. This was a fantastic event, hosted by Shane Bliss and Vance Morossi of the Oracle Client Success Team. We got to use the uber-slick facilities of our hosts Allergan (of Botox fame), headquartered in Irvine, Calif., with a presence in more than 100 countries.

Photo by Misha Vaughan, Oracle Applications User Experience: Vance Morossi, left, and Shane Bliss, of the Oracle Client Success Team, at an Oracle HCM Southern California Customer Success Summit.

We were treated to a few really excellent talks around human resources (HR). Alice White, VP Human Resources, discussed Allergan's process for global talent acquisition - how Allergan has designed and deployed a global process and global tools, along with Oracle and Cognizant, and is now at the end of a global implementation. She shared a couple of insights about the journey for Allergan: "One of the major areas for improvement was on role clarification within the company." She said the company is "empowering managers and deputizing them as recruiters. Now it is a global process that is nimble and efficient."

Deepak Rammohan, VP Product Management, HCM Cloud, Oracle, also took the stage to talk about pioneering modern HR. He reflected on modern HR problems of getting the right data about the workforce, the importance of getting the right talent as a key strategic initiative, and other workforce insights. "How do we design systems to deal with all of this?" he asked. "Make sure the systems are talent-centric. The next piece is collaborative, engaging, and mobile. A lot of this is influenced by what users see today. The last thing is around insight; insight at the point of decision-making." Rammohan showed off some killer HCM Cloud talent demos focused on simplicity and mobility that his team has been cooking up, and closed with a great line about the nature of modern recruiting: "Recruiting is a team sport."

Deepak Rammohan, left, and Jake Kuramoto, both of Oracle, debate the merits of a Google Glass concept demo for recruiters on-the-go.

Later, in an expo-style format, the Apps UX team showed several concepts for next-generation HCM Cloud user experiences, including demos shown by Jake Kuramoto (@jkuramoto) of The AppsLab, and Aylin Uysal (@aylinuysal), Director, HCM Cloud user experience. We even hauled out our eye-tracker, a research tool used to show where the eye is looking at a particular screen, thanks to teammate Michael LaDuke.
Dionne Healy, HCM Client Executive, and Aylin Uysal, Director, HCM Cloud user experiences, Oracle, take a look at new HCM Cloud UX concepts.

We closed the day with Jeremy Ashley (@jrwashley), VP, Applications User Experience, who brought it all back together by talking about the big picture for applications cloud user experiences. He covered the trends we are paying attention to now, what users will be expecting of their modern enterprise apps, and what Oracle's design strategy is around these ideas. The day ended with an excellent reception hosted by ADP Payroll Services at Bistango.

Want to read more?

Want to see where our cloud user experience is going next? Read more on the UsableApps website about our latest design initiative: "Glance, Scan, Commit." Or catch up on the back story by looking over our Applications Cloud user experience content on the UsableApps website. You can also find out where we'll be next on the Events page at UsableApps.

    Read the article

  • Preview of MSDN Library Changes

    - by ScottGu
    The MSDN team has been working on some potential changes to the online MSDN Library designed to help streamline the navigation experience and make it easier to find the .NET Framework information you need. To solicit feedback on the proposed changes while they are still in development, they've posted a preview version of some proposed changes to a new MSDN Library Preview site which you can check out. They've also created a survey that leads you through the ideas and asks for your opinions on some of the changes. We'd very much like to have as many people as possible take the survey and give us feedback.

Quick Preview of Some of the Changes

Below are some examples of a few of the changes being proposed:

Streamlined .NET Namespaces Navigation

The current MSDN Class Library lists all .NET namespaces in a flat namespace (sorted alphabetically). Two downsides of this approach are:

- Some of the least-used namespaces are listed first (like Microsoft.Aspnet.Snapin and Microsoft.Build.BuildEngine)
- All sub-namespaces are listed, which makes the list a little overwhelming and causes page-load times to be slow

The new MSDN Library Preview Site now lists "System" namespaces first (since those are the most used), and the home page lists just top-level namespace groups - which makes it easier to find things, and enables the page to load faster.

Class overview and members pages merged into a single topic about each class

Previously you had to navigate to several different pages to find member information about types. Links to these are still available in the MSDN Library Preview Site TOC - but the members are also now listed on the overview page, which makes it easy to quickly find everything in one place.

Commonly used things are nearer the top of the page

One of the other usability improvements with the new MSDN Library Preview Site is that common elements like "Code Examples" and "Inheritance Hierarchy" (for classes) are now listed near the top of the help page - making them easy to quickly find.

Give Us Feedback with a Survey

Above are just a few of the changes made with the new MSDN preview site - there are many other changes also rolled into it. The MSDN team is doing usability studies on the new layout and navigation right now, and would very much like feedback on it. If you have 15 minutes and want to help vote on which of these ideas make it into the production MSDN site, please visit this survey before June 30, play with the changes a bit, and let the MSDN team know what you think.

Important Note: the MSDN preview site is not a fully functional version of MSDN - it's really only there to preview the new ideas themselves, so please don't expect it to be integrated with the rest of MSDN, with search, etc. Once the MSDN team gets feedback on some of the changes being proposed, they will roll them into the live site for everyone to use.

Hope this helps,

Scott

    Read the article

  • UPK Customer Success Story: The City and County of San Francisco

    - by karen.rihs(at)oracle.com
    The value of UPK during an upgrade is a hot topic and was a primary focus during our latest customer roundtable featuring The City and County of San Francisco: Leveraging UPK to Accelerate Your PeopleSoft Upgrade. As the Change Management Analyst for their PeopleSoft 9.0 HCM project (Project eMerge), Jan Crosbie-Taylor provided a unique perspective on how they're utilizing UPK and UPK pre-built content early on to successfully manage change for thousands of city and county employees and retirees as they move to this new release. With the first phase of the project going live next September, it's important to the City and County of San Francisco to 1) ensure that the various constituents are brought along with the project team, and 2) focus on the end user aspects of the implementation, including training. Here are some highlights on how UPK and UPK pre-built content are helping them accomplish this: As a former documentation manager, Jan really appreciates the power of UPK as a single-source content creation tool. It saves them time by streamlining the documentation creation process, enabling them to record content once, then repurpose it multiple times. With regard to change management, UPK has enabled them to educate the project team and gain critical buy-in and support by familiarizing users with the application early on through User Experience Workshops and by promoting UPK at meetings whenever possible. UPK has helped create awareness for the project, making the project real to users. They are taking advantage of UPK pre-built content to: Educate the project team and subject matter experts on how PeopleSoft 9.0 works as delivered Create a guide/storyboard for their own recording Save time/effort and create consistency by enhancing their recorded content with text and conceptual information from the pre-built content Create PeopleSoft Help for their development databases by publishing and integrating the UPK pre-built content into the application help menu Look ahead to the next release of PeopleTools, comparing the differences to help the team evaluate which version to use with their implementation When it comes time for training, they will be utilizing UPK in the classroom, eliminating the time and cost of maintaining training databases. Instructors will be able to carry all training content on a thumb drive, allowing them to easily provide consistent training at their many locations, regardless of the environment. Post go-live, they will deploy the same UPK content to provide just-in-time, in-application support for the entire system via the PeopleSoft Help menu and their PeopleSoft Enterprise Portal. Users will already be comfortable with UPK as a source of help, having been exposed to it during classroom training. They are also using UPK for a non-Oracle application called JobAps, an online job application solution used by many government organizations. Jan found UPK's object recognition to be excellent, yet it's been incredibly easy for her to change text or a field name if needed. Please take time to listen to this recording. The City and County of San Francisco's UPK story is very exciting, and Jan shared so many great examples of how they're taking advantage of UPK and UPK pre-built content early on in their project. We hope others will be able to incorporate these into their projects. Many thanks to Jan for taking the time to share her experiences and creative uses of UPK with us! - Karen Rihs, Oracle UPK Outbound Product Management

    Read the article

  • Urban Turtle is such an awesome product!

    - by Vincent Grondin
    Mario Cardinal, the host of the Visual Studio Talk Show, is quite happy these days. He works with the Urban Turtle team and they received significant support from Microsoft. Brian Harry, who is the Product Unit Manager for Team Foundation Server, has published an outstanding blog post about Urban Turtle that says: "...awesome Scrum experience for TFS.” You can read Brian Harry's blog post at the following URL: http://urbanturtle.com/awesome.

    Read the article

  • Microsoft Technical Computing

    - by Daniel Moth
    In the past I have described the team I belong to here at Microsoft (Parallel Computing Platform) in terms of contributing to Visual Studio and related products, e.g. the .NET Framework. To be more precise, our team is part of the Technical Computing group, which is still part of the Developer Division. This was officially announced externally earlier this month in an exec email (from Bob Muglia, the president of STB, to which DevDiv belongs). Here is an extract: "… As we build the Technical Computing initiative, we will invest in three core areas: 1. Technical computing to the cloud: Microsoft will play a leading role in bringing technical computing power to scientists, engineers and analysts through the cloud. Existing high-performance computing users will benefit from the ability to augment their on-premises systems with cloud resources that enable ‘just-in-time’ processing. This platform will help ensure processing resources are available whenever they are needed—reliably, consistently and quickly. 2. Simplify parallel development: Today, computers are shipping with more processing power than ever, including multiple cores, but most modern software only uses a small amount of the available processing power. Parallel programs are extremely difficult to write, test and troubleshoot. However, a consistent model for parallel programming can help more developers unlock the tremendous power in today’s modern computers and enable a new generation of technical computing. We are delivering new tools to automate and simplify writing software through parallel processing from the desktop… to the cluster… to the cloud. 3. Develop powerful new technical computing tools and applications: We know scientists, engineers and analysts are pushing common tools (i.e., spreadsheets and databases) to the limits with complex, data-intensive models. They need easy access to more computing power and simplified tools to increase the speed of their work. We are building a platform to do this. Our development efforts will yield new, easy-to-use tools and applications that automate data acquisition, modeling, simulation, visualization, workflow and collaboration. This will allow them to spend more time on their work and less time wrestling with complicated technology. …" Our Parallel Computing Platform team is directly responsible for item #2, and we work very closely with the teams delivering items #1 and #3. At the same time as the exec email, our marketing team unveiled a website with interviews that I invite you to check out: Modeling the World. Comments about this post welcome at the original blog.
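As an aside on item #2 above: for developers, the concrete shape this work took is the Task Parallel Library (TPL) and PLINQ that shipped with .NET 4. Here is a minimal sketch, with an illustrative workload of my own choosing, of how the same computation can move from sequential to parallel with those APIs:

    using System;
    using System.Linq;
    using System.Threading;
    using System.Threading.Tasks;

    class ParallelSketch
    {
        static void Main()
        {
            int[] data = Enumerable.Range(1, 1000000).ToArray();

            // Sequential baseline: sum of squares.
            long sequential = data.Sum(i => (long)i * i);

            // PLINQ: the same query, spread across all available cores.
            long plinq = data.AsParallel().Sum(i => (long)i * i);

            // Parallel.For with thread-local partial sums, merged at the end.
            long total = 0;
            Parallel.For(0, data.Length,
                () => 0L,                                      // per-thread seed
                (i, state, local) => local + (long)data[i] * data[i],
                local => Interlocked.Add(ref total, local));   // merge partials

            Console.WriteLine("{0} {1} {2}", sequential, plinq, total);
        }
    }

Even in this toy the point of the model is visible: the structure of the code barely changes between the sequential and parallel versions, which is exactly the "simplify parallel development" goal described above.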

    Read the article

  • App Engine & Cloud SQL

    App Engine & Cloud SQL We'll quickly review Cloud SQL and chat with members of the Cloud SQL team about the newest features / tips & tricks. There will also be a Q&A session so please enter any questions you might have for the team in the moderator list for this session at www.google.com From: GoogleDevelopers Views: 1501 26 ratings Time: 36:26 More in Science & Technology

    Read the article

  • Archbeat Link-O-Rama Top 10 Facebook Faves - June 23-29, 2013

    - by Bob Rhubart
    2,947 people now follow OTN ArchBeat on Facebook. Here are the Top 10 items shared on that page for June 23-29, 2013. Podcast Show Notes: DevOps, Cloud, and Role Creep After some confusion (my bad) all three CORRECT parts of this podcast are now available. The panelists for this discussion are all Oracle ACE Directors: Ron Batra, Basheer Khan, and Cary Millsap. SOA Suite 11g Developers Cookbook Published | Antony Reynolds "The book focuses on areas that we felt we had neglected in the Developers Guide," says co-author Antony Reynolds. "There is more about Java integration and OSB, both of which we see a lot of questions about when working with customers." Using Oracle TimesTen With Oracle BI Applications (Part 2) | Peter Scott Peter Scott follows up an earlier post with a look at some of the OBIA structures and a discussion of some of the features of TimesTen. Linux-Containers — Part 1: Overview | Lenz Grimmer OTN Garage blogger Lenz Grimmer kicks off a series and expands your mind with deep detail on Linux Containers. Slides from my ODTUG Kscope13 Presentation | Zeeshan Baig Oracle ACE Zeeshan Baig shares the slides from his KScope13 presentation, "Build Your Business Services Using ADF Task Flows." Fun with Enterprise Manager | Rene van Wijk Oracle ACE Rene van Wijk shares some background and some tuning and other tech tips for working with Oracle Enterprise Manager. Using VirtualBox to test drive Windows Blue | The Fat Bloke The Fat Bloke shares a tech tip for those interested in giving Windows Blue a try on VirtualBox. Podcast Show Notes: The Fusion Middleware A-Team and the Chronicles of Architecture In this three-part series Oracle Fusion Middleware A-Team members Jennifer Briscoe, Clifford Musante, Mikael Ottosson, and Pardha Reddy talk about the origins and mission of the FMW A-Team and about the great technical content you'll find on the recently launched Oracle A-Team blog. Part one is now available. 5 Best Practices - Laying the Foundation for WebCenter Projects | John Brunswick Oracle WebCenter expert John Brunswick shares best practices that "enable the creation of portal solutions with minimal resource overhead, while offering the greatest flexibility for progressive elaboration." Oracle Magazine - July/Aug 2013 The digital edition of the July/August edition of Oracle Magazine is now available. This issue includes my architect community column, "The CX Factor," which features insight from community members on "why and how CX has become a significant factor in enterprise IT."

    Read the article

  • MSCC: Global Windows Azure Bootcamp

    Mauritius participated and contributed to the Global Windows Azure Bootcamp 2014 (GWAB). Again! And this time stronger than ever, together with 137 other locations in 56 countries world-wide. We had 62 named registrations, 7 guest additions and approximately 10 offline participants prior to the event day. Most interestingly, organising the GWAB through the MSCC helped to increase the number of craftsmen. The Mauritius Software Craftsmanship Community currently has 138 registered members - in less than one year! Those numbers alone let us proudly say that all the preparations and hard work towards this event already paid off. Personally, I'm really grateful that we had this kind of response, and the feedback from some attendees confirmed that the MSCC is on the right track here on Cyber Island Mauritius. Inspired and motivated by the success of this event, rest assured that there will be more public events like the GWAB. This time it took some time to reflect on our meetup, following my first impression right on the spot: "Wow, what an experience to organise and participate in this global event. Overall, I've been very pleased with the preparations and the event itself. Surely, there have been some rough edges that we have to address and improve for future activities like this. Quite frankly, we are not professional event organisers (not yet) but we learned a lot over the past couple of days. A big Thank You to our event sponsors, namely Microsoft Indian Ocean Islands & French Pacific, Ceridian Mauritius and Emtel. Without them this event wouldn't have happened - Thank You! And to the cool team members of Microsoft Student Partners (MSPs): you geeks did a great job! Thanks!" So, how many attendees did we actually have? 61! - Awesome - 61 cloud computing instances to help on the research of diabetes. During Saturday afternoon there was even an online publication on L'Express: Les développeurs mauriciens se joignent au combat contre le diabète. Reactions of other attendees Don't just take my word for it... Here are some impressions and feedback from our participants: "Awesome event, really appreciated the presentations :-)" -- Kevin on event comments "very interesting and enriching." -- Diana on event comments "#gwab #gwabmru 2014 great success. Looking forward for gwab 2015" -- Wasiim on Twitter "Was there till the end. Awesome Event. I'll surely join upcoming meetup sessions :)" -- Luchmun on event comments "#gwabmru was not that cool. left early" -- Mohammad on Twitter The overall feedback is positive, but we are absolutely aware that there were quite a number of problems we had to face. We are already looking into them, with ideas and action plans on how to improve things for future events. The sessions We started the day with welcoming speeches by Thierry Coret, Sr. Marketing Manager of Microsoft Indian Ocean Islands & French Pacific, and Vidia Mooneegan, Managing Director and Sr. Vice President of Ceridian Mauritius. The clear emphasis was on the endless possibilities of cloud computing and how it can enable any kind of sector here in the country. Then it was about time to set up the cloud computing services in order to contribute each attendee's cloud computing resources to the global research on diabetes, a step-by-step guide presented by Arnaud Meslier, Technical Evangelist at Microsoft. Given a rendering package and a configuration file, it was very interesting to follow the single steps in Windows Azure.
Also, during the day we were not sure whether the setup had been done correctly, as Mauritius didn't show up on the results board - which should have been the case after approximately 20 to 30 minutes. Anyways, let the minions work... Next, Arnaud gave a brief overview of the variety of services Windows Azure has to offer. Whether you need a development environment for your websites or mobile apps, want to run a virtual machine with your existing applications, or simply want to put a SQL database online - no worries, Windows Azure has the right packages available, and the online management portal is really easy to handle. After this, we got a little bit more business oriented while Wasiim Hosenbocus, employee at Ceridian, took the attendees through the internals of a real-life application and demoed a couple of the existing features. He did a great job by showing how the different services of Windows Azure can be created and pulled together. After the lunch break it is always tough to keep the audience awake... And it was my turn. I gave a brief overview on operating and managing a SQL database on Windows Azure. Well, there are actually two options available, and depending on your individual requirements you should be aware of both. The simpler version is called SQL Database, and while provisioning only takes a couple of seconds, you should take into consideration that SQL Database has a number of constraints, like limitations on the actual database size - up to 5 GB as web edition and up to 150 GB maximum as business edition - among others. (A minimal connection sketch follows at the end of this recap.) Next, it was Chervine Bhiwoo's session on Windows Azure Mobile Services. It was absolutely amazing to see that the mobile services directly offer you various project templates, like Windows 8 Store App, Android app, iOS app, and even Xamarin cross-platform app development. So, within a couple of minutes you can have your first mobile app active and running on Windows Azure. Furthermore, Chervine showed the attendees that adding another user interface, like Web Sites running on ASP.NET MVC 4, only takes a couple of minutes, too. And last but not least, we rounded off the day with Windows Azure Websites and hosting of Virtual Machines, presented by some members of the local Microsoft Student Partners programme. Surely, one of the big advantages of using Windows Azure is the availability of pre-defined installation packages of known web applications, like WordPress, Joomla!, or Ghost. Compared to running your own web site with a traditional web hoster it is surely on par, and depending on your personal level of expertise, Windows Azure gives you more configuration freedom than a typical shared hosting environment. Running a pre-defined web application is one thing, but in case you would like to have more control over your hosting environment it is highly advised to opt for a virtual machine. Provisioning of an Ubuntu 12.04 LTS system was very simple to do, but it takes a few more minutes than you might expect. So, please be patient and take your time while Windows Azure gets everything in place for you. Afterwards, you can use a Secure Shell (SSH) client like PuTTY in the case of a Linux-based machine, or Remote Desktop Services when operating a Windows Server system, to log in to your virtual machine. At the end of the day we had a great Q&A session, and we finalised the event with our raffle of goodies. Participation in the raffle was tied to submission of the session survey, and most gratefully we had a give-away for everyone.
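To make the SQL Database part of that recap concrete: once a database is provisioned, it is reachable through the standard ADO.NET stack, and essentially only the connection string differs from an on-premises SQL Server. The following is a minimal sketch, not taken from the session material; the server name, database name and credentials are placeholders you would replace with your own values from the management portal:

    using System;
    using System.Data.SqlClient;

    class AzureSqlDemo
    {
        static void Main()
        {
            // Placeholder values -- substitute your own server, database
            // and credentials. Azure SQL requires encrypted connections.
            var builder = new SqlConnectionStringBuilder
            {
                DataSource = "tcp:yourserver.database.windows.net,1433",
                InitialCatalog = "demodb",
                UserID = "demouser@yourserver",  // Azure expects user@server
                Password = "your-password",
                Encrypt = true,
                TrustServerCertificate = false
            };

            using (var connection = new SqlConnection(builder.ConnectionString))
            {
                connection.Open();
                using (var command = new SqlCommand("SELECT @@VERSION", connection))
                {
                    // Prints the engine version, confirming the connection works.
                    Console.WriteLine(command.ExecuteScalar());
                }
            }
        }
    }

The same code runs against an on-premises SQL Server, which is precisely the appeal: an existing data access layer usually moves over with little more than a connection string change, as long as you keep the web/business edition size limits mentioned above in mind.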
The raffle was a nice coincidence to finish off the day. Note: All session slides (and demo code) will be made available on the MSCC event page. Please check the Files section over there. (Some) Visual impressions from the event Just to give you an idea of what happened during the GWAB 2014 at Ebene... Speakers and Microsoft Student Partners are getting ready for the Global Windows Azure Bootcamp 2014 GWAB 2014 attendees are fully integrated into the hands-on labs and setting up their individual cloud computing services 60 attendees at the GWAB 2014. Despite some technical difficulties we had a great time running the event GWAB 2014: Using the lunch break for networking and exchange of ideas - Great conversations and topics amongst attendees There are more pictures on the original event page. Questions & Answers Following are a couple of questions which were asked but didn't get an answer during the event: Q: Is it possible to upload static pages via FTP? A: Yes, you can. Have a look at the right side column on the dashboard of your website. There you'll find information about the FTP and SFTP host names. You can use any FTP client, like FileZilla, to log in. FTP also gives you access to your log files. Q: What are the size limitations on SQL Database? A: 5 GB on the web edition, and 150 GB on the business edition. A maximum of 150 databases (including 'master') per SQL Database server. More details here: General Guidelines and Limitations (Azure SQL Database) Q: What's the maximum size of a SQL Server running in a Virtual Machine? A: The maximum Windows Azure VM currently has 8 CPU cores, 14 or 56 GB of RAM and 16x 1 TB hard drives. More details here: Virtual Machine and Cloud Service Sizes for Azure Q: How can we register for Windows Azure? A: Mauritius is currently not listed for phone verification. Please get in touch with Arnaud Meslier at Microsoft IOI & FP. Q: Can I use my own domain name for Windows Azure websites? A: Yes, you can. But this might require upscaling your account to the Standard tier. In case I missed a question and answer, please use the comment section at the end of the article. Thanks! Final results Every participant was instructed during the hands-on lab session on how to set up a cloud computing service in their account. Of course, I won't keep the results from you... Global Azure Lab GWAB 2014: Our cloud computing contribution to the research of diabetes And I would say Mauritius did a good job! Upcoming Events What are the upcoming events here in Mauritius? So far, we have the following ones (incomplete list as usual) in chronological order: Launch of Microsoft SQL Server 2014 (15.4.2014) Corsair Hackers Reboot (19.4.2014) WebCup (TBA ~ June 2014) Developers Conference (TBA ~ July 2014) Linuxfest 2014 (TBA ~ November 2014) Hopefully, there will be more announcements during the next couple of weeks and months. If you know about any other event, like a bootcamp, a code challenge or a hackathon here in Mauritius, please drop me a note in the comment section below this article. Thanks! Networking and job/project opportunities Despite being centred on technical presentations about Windows Azure, an event like this always offers great opportunities to get in touch with new people in IT and to exchange experiences with other like-minded people.
As I already wrote in Communities - The importance of exchange and discussion, I had a great conversation with representatives of the University des Mascareignes, which is currently embracing cloud infrastructure and cloud computing for its various campuses in the Indian Ocean. As for the MSCC, it would be a great experience to stay in touch with them and to work on upcoming joint activities. Furthermore, I had a very good conversation with Thierry and Ludovic of Microsoft IOI & FP on the necessity of user groups and IT communities here on the island. It's great to see that the MSCC is currently on a roll and that local companies share our thoughts on promoting IT careers and the exchange of IT knowledge in such an open way. I'm also looking forward to participating in and contributing to more events in the near future. My recap of the day We learned a lot today, and there is always room for improvement! It was an awesome event, and quite frankly it was a pleasure to spend the day with so many enthusiastic IT people in the same room. It was a great experience to organise such an event locally and to participate on a global scale to support the GlyQ-IQ Technology in their research on diabetes. I was so pleased to see the involvement of new MSCC members in taking the opportunity to share and learn about the power of cloud computing. The Mauritius Software Craftsmanship Community is on the right track, and this year's bootcamp on Windows Azure only marked the beginning of our journey. Thank you to our sponsors and my kudos to the MSPs! Update: Media coverage The event has been reported in local media, too. Following are some resources: Orange - Local - Business: Le cloud, pour des recherches approfondies sur le diabète Maurice Info.mu: Le cloud, pour des recherches approfondies sur le diabète Le Quotidien Pg 2: Global Windows Azure Bootcamp 2014 - Le cloud pour des recherches approfondies sur le diabète The Observer Pg 12: Le cloud, pour des recherches approfondies sur le diabète

    Read the article

  • How do I (tactfully) tell my project manager or lead developer that the project's codebase needs serious work?

    - by Adam Maras
    I just joined a (relatively) small development team that's been working on a project for several months, if not a year. As with most developers joining a project, I spent my first couple of days reviewing the project's codebase. The project (a medium- to large-sized ASP.NET WebForms internal line-of-business application) is, for lack of a more descriptive term, a disaster. There are three immediately noticeable problems with the coding standards: The standard is very loose. It describes more of what not to do (don't use Hungarian notation, etc.) than what to do. The standard isn't always followed. There are inconsistencies with the code formatting everywhere. The standard doesn't follow Microsoft's style guidelines. In my opinion, there's no value in deviating from the guidelines that were set forth by the developer of the framework and the largest contributor to the language specification. As for point 3, perhaps it bothers me more because I've taken the time to get my MCPD with a focus on web applications (specifically, ASP.NET). I'm also the only Microsoft Certified Professional on the team. Because of what I learned in all of my schooling, self-teaching, and on-the-job learning (including my preparation for the certification exams), I've also spotted several instances in the project's code where things are simply not done in the best way. I've only been on this team for a week, but I see so many issues with their codebase that I imagine I'll be spending more time fighting with what's already written to do things in "their way" than I would if I were working on a project that, for example, followed more widely accepted coding standards, architecture patterns, and best practices. This brings me to my question: Should I (and if so, how do I) propose to my project manager and team lead that the project needs to be majorly renovated? I don't want to walk into their office, waving my MCTS and MCPD certificates around, saying that their project's codebase is crap. But I also don't want to have to stay silent and write kludgey code atop their kludgey code, because I actually want to write quality software and I want the end product to be stable and easily maintainable.
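To make the naming-convention point concrete, here is a hypothetical before/after in C#; the class and member names are invented for illustration and do not come from the post. The "before" shows Hungarian-style prefixes of the kind the poster's loose standard discourages, while the "after" follows Microsoft's published .NET design guidelines (PascalCase types and members, camelCase parameters, no type prefixes):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical "before": type prefixes and cryptic abbreviations.
    public class clsOrderMgr
    {
        public int intGetOrdCnt(string strCustId, List<string> lstOrders)
        {
            return lstOrders.Count(o => o.StartsWith(strCustId));
        }
    }

    // "After": framework-style naming -- PascalCase types and methods,
    // camelCase parameters, full words, no Hungarian notation.
    public class OrderManager
    {
        public int GetOrderCount(string customerId, List<string> orders)
        {
            return orders.Count(o => o.StartsWith(customerId));
        }
    }

    public static class NamingDemo
    {
        public static void Main()
        {
            var orders = new List<string> { "C42-001", "C42-002", "C7-001" };
            Console.WriteLine(new OrderManager().GetOrderCount("C42", orders)); // 2
        }
    }

Pointing at the framework vendor's own published guidelines, rather than personal preference, is also a tactful way to frame the conversation the poster is asking about.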

    Read the article

< Previous Page | 49 50 51 52 53 54 55 56 57 58 59 60  | Next Page >