Search Results

Search found 16094 results on 644 pages for 'it helpdesk team manager'.

  • Change of domain deleted data in Team Foundation Server?

    - by glumesc
    Dear All, Maybe my google-fu is failing me, but I cannot seem to find any information on the following: my Windows user account was recently moved, accidentally, to another domain in my company's Active Directory. While in the other domain, I was unable to access my data stored in TFS 2008 (e.g. shelvesets, pending changes, workspaces, etc.). I assume this was because the data was associated with my ORIGINALDOMAIN\userId account rather than my NEWDOMAIN\userId account.

    My account has now been moved back to ORIGINALDOMAIN; however, I still cannot see any of my data in TFS. In fact, it appears that all of my data (all my shelvesets!) has been deleted. It is almost as if TFS saw that my userId had disappeared from ORIGINALDOMAIN, assumed that I had been "deleted", and deleted all my data accordingly.

    Has anybody else encountered this? Is there hope for my data, or am I royally stuffed? Thanks in advance, Steve

    Read the article

  • Optimize slow ranking query

    - by Juan Pablo Califano
    I need to optimize a ranking query that is taking forever. (The query itself works, but I know it's awful, and when I tried it with a realistic number of records it timed out.)

    I'll briefly explain the model. I have three tables: player, team and player_team. Players are stored in player and teams in team. In my app, each player can switch teams at any time, and a log has to be maintained; however, a player is considered to belong to only one team at a given time. The current team of a player is the last one he has joined. The structure of player and team is not relevant, I think; I have an id PK column in each. In player_team I have:

        id (PK)
        player_id (FK -> player.id)
        team_id (FK -> team.id)

    Each team is assigned a point for each player that has joined it, and I want a ranking of the first N teams with the biggest number of players.

    My first idea was to get the current players from player_team (that is, at most one record per player: the record for the player's current team). I failed to find a simple way to do it (I tried GROUP BY player_team.player_id HAVING player_team.id = MAX(player_team.id), but that didn't cut it). After a number of queries that didn't work, I managed to get this working:

        SELECT COUNT(*) AS total, pt.team_id, p.facebook_uid AS owner_uid, t.color
        FROM player_team pt
        JOIN player p ON (p.id = pt.player_id)
        JOIN team t ON (t.id = pt.team_id)
        WHERE pt.id IN (
            SELECT MAX(J.id)
            FROM player_team J
            GROUP BY J.player_id
        )
        GROUP BY pt.team_id
        ORDER BY total DESC
        LIMIT 50

    As I said, it works, but it looks very bad and performs worse, so I'm sure there must be a better way to go. Does anyone have ideas for optimizing this? I'm using MySQL, by the way. Thanks in advance.

    Adding the EXPLAIN output:

        id  select_type         table  type    possible_keys                                 key               key_len  ref           rows    Extra
        1   PRIMARY             t      ALL     PRIMARY                                       NULL              NULL     NULL          5000    Using temporary; Using filesort
        1   PRIMARY             pt     ref     FKplayer_pt77082,FKplayer_pt265938,new_index  FKplayer_pt77082  4        t.id          30      Using where
        1   PRIMARY             p      eq_ref  PRIMARY                                       PRIMARY           4        pt.player_id  1
        2   DEPENDENT SUBQUERY  J      index   NULL                                          new_index         8        NULL          150000  Using index
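    One common rewrite for this pattern -- a sketch, not tested against this exact schema -- is to turn the IN (...) filter into a derived table that is materialized once and joined, so MySQL does not re-run it as a DEPENDENT SUBQUERY:

        SELECT COUNT(*) AS total, pt.team_id, p.facebook_uid AS owner_uid, t.color
        FROM (
            -- latest (= current) player_team row per player
            SELECT MAX(id) AS id
            FROM player_team
            GROUP BY player_id
        ) latest
        JOIN player_team pt ON pt.id = latest.id
        JOIN player p ON p.id = pt.player_id
        JOIN team t ON t.id = pt.team_id
        GROUP BY pt.team_id
        ORDER BY total DESC
        LIMIT 50

    A composite index on player_team (player_id, id) lets the inner GROUP BY be resolved from the index alone; this assumes MAX(id) is a reliable proxy for "most recent team", as the original query already does.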

    Read the article

  • Is there a server side API for Team Foundation Server?

    - by Ralph Shillington
    It would seem there is precious little documentation on programming against a TFS 2010 instance. The bits I have found offer next to nothing beyond a bare-bones listing of the client access classes and their members, most likely auto-generated from the code comments. As I'm interested in building a Silverlight client against TFS, I will need access to a server-side API. Ideally the Silverlight app will talk to my server (mainly for work items), and my server will in turn talk to the TFS server for the goods. Where is the documentation (if any) for this kind of TFS integration?
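    For the middle tier described above, the usual approach is the TFS client object model rather than a true server-side API; below is a minimal, untested sketch of a work item query (the server URL and project name are placeholders), assuming references to Microsoft.TeamFoundation.Client and Microsoft.TeamFoundation.WorkItemTracking.Client:

        using System;
        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.WorkItemTracking.Client;

        class TfsQueryDemo
        {
            static void Main()
            {
                // Connect to the project collection; this code would live in the
                // service the Silverlight app calls, not in the Silverlight client.
                var tpc = new TfsTeamProjectCollection(
                    new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
                var store = tpc.GetService<WorkItemStore>();

                // WIQL query for work items in a (hypothetical) project.
                WorkItemCollection items = store.Query(
                    "SELECT [System.Id], [System.Title] FROM WorkItems " +
                    "WHERE [System.TeamProject] = 'MyProject'");

                foreach (WorkItem item in items)
                    Console.WriteLine("{0}: {1}", item.Id, item.Title);
            }
        }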

    Read the article

  • How can I persuade our team to use Mac as our development platform?

    - by Allen Bargi
    I'm joining a start-up, and there's a debate over which platform we should choose for development: Mac, Linux or Windows? We're going to use mainly open-source tools and languages, like Ruby, Rails, PHP and MySQL, plus Photoshop (or something equivalent on Linux) for design. I suggested Mac, but they challenged me to be more persuasive. What's your take? Help me out with persuasive arguments, please.

    Read the article

  • How can I access the "through" object of a Django ManyToManyField?

    - by Macha
    I have the following models in my Django app. How can I, from the Team model, find all the User objects who have accepted set to True in the Membership model? I know I need to use Team.objects.filter(), but I'm not sure how to check the value of the accepted field.

        from django.db import models
        from django.contrib.auth.models import User

        class Team(models.Model):
            members = models.ManyToManyField(User, through="Membership")

        class Membership(models.Model):
            user = models.ForeignKey(User)
            team = models.ForeignKey(Team)
            accepted = models.BooleanField(default=False)
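    The usual answer for this shape of question -- a sketch, assuming the models above and Django's default reverse-relation naming -- is to filter through the Membership model rather than through members directly:

        from django.contrib.auth.models import User

        # 'membership' is the default lowercase reverse name for the
        # Membership foreign keys; some_team is a Team instance.
        accepted_users = User.objects.filter(
            membership__team=some_team,
            membership__accepted=True,
        )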

    Read the article

  • How do you manage the CSS of big websites in a team environment without making a mess?

    - by jitendra
    Multiple people work on the same CSS. Is it possible to follow semantic naming rules even on large websites? If I write all the main CSS with semantic names the first time, what guidelines or instructions should I give the other developers to maintain CSS readability and validation, and to see quickly where others are adding their own CSS when required? Right now everyone just scrolls to the bottom and writes the classes or IDs they need there, and most of the time they don't use semantic names.
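    One lightweight convention that addresses this -- a sketch, with section names invented for illustration -- is a commented table of contents with fixed sections, so new rules have an obvious home instead of piling up at the bottom of the file:

        /* ====================================================================
           Table of contents
           1. Base       (reset, typography)
           2. Layout     (grid, header, footer)
           3. Components (nav, forms, tables)
           4. Page overrides
           ==================================================================== */

        /* 2. Layout ------------------------------------------------------- */
        .site-header { min-height: 80px; }   /* semantic: names the role */
        .site-footer { clear: both; }

        /* Avoid presentational names that break when the design changes,
           e.g. .red-box or .left-column */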

    Read the article

  • Why do I get a Null Pointer Exception?

    - by Roman
    I have this code:

        Manager manager = new Manager("Name");
        MyWindowListener windowListener = new MyWindowListener();
        manager.addWindowListener(windowListener);

    Eclipse reports a NullPointerException on the last line. What could be the reason for that? I do have constructors in both Manager and MyWindowListener. If it's important, MyWindowListener implements WindowListener.
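    Since neither local variable can be null right after new, the exception almost certainly comes from inside addWindowListener; here is a hypothetical sketch of what Manager might look like, showing the most common cause (an uninitialized field):

        import java.awt.event.WindowListener;
        import java.util.ArrayList;
        import java.util.List;

        public class Manager {
            private List<WindowListener> listeners;   // never initialized -> null

            public Manager(String name) {
                // bug: forgot listeners = new ArrayList<WindowListener>();
            }

            public void addWindowListener(WindowListener l) {
                listeners.add(l);   // NullPointerException thrown here
            }
        }

    The fix in that case is to initialize the field, e.g. private final List<WindowListener> listeners = new ArrayList<WindowListener>();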

    Read the article

  • XML/jQuery: create listings based on user selection

    - by Sirius Mane
    Alright, so what I need to accomplish is a static web page that displays information pulled from an XML document, rendering it to the screen without refreshing -- basic AJAX stuff, I guess. The trick is that as I think this through, I keep running into "logical" barriers. Objectives:

    - Have a chart which displays baseball team names, wins, losses and ties. In my XML doc there is a "pending" status, so games not completed should not be counted. (Need help here.)
    - Have a selection list, populated from the XML doc, which allows you to select a team. (Done.)
    - Upon selecting a particular team from the aforementioned selection list, the page should display, in a separate area, all of the planned games for that team, including pending ones -- basically all of the games associated with that team, with their dates (which are included in the XML file). (Need help here.)

    What I have so far (HTML/JS):

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
            "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
          <link rel="stylesheet" href="batty.css" type="text/css" />
          <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
          <title>Little Batty League</title>
          <script type="text/javascript" src="library.js"></script>
          <script type="text/javascript" src="jquery.js"></script>
          <script type="text/javascript">
            var IE = window.ActiveXObject ? true : false;
            var MOZ = document.implementation.createDocument ? true : false;

            $(document).ready(function() {
              $.ajax({
                type: "GET",
                url: "schedule.xml",
                dataType: "xml",
                success: function(xml) {
                  var select = $('#mySelect');
                  $(xml).find('Teams').each(function() {
                    var title = $(this).find('Team').text();
                    select.append("<option class='ddheader'>" + title + "</option>");
                  });
                  select.children(":first").text("please make a selection").attr("selected", true);
                }
              });
            });
          </script>
        </head>
        <body onLoad="init()">
        <!-- container start -->
        <div id="container">
          <!-- banner start -->
          <div id="banner">
            <img src="images/mascot.jpg" width="324" height="112" alt="Mascot" />
            <!-- buttons start -->
            <table width="900" border="0" cellpadding="0" cellspacing="0">
              <tr>
                <td><div class="menuButton"><a href="index.html">Home</a></div></td>
                <td><div class="menuButton"><a href="schedule.html">Schedule</a></div></td>
                <td><div class="menuButton"><a href="contact.html">Contact</a></div></td>
                <td><div class="menuButton"><a href="about.html">About</a></div></td>
              </tr>
            </table>
            <!-- buttons end -->
          </div>
          <!-- banner end -->
          <!-- content start -->
          <div id="content">
            <br />
            <form>
              <select id="mySelect">
                <option>please make a selection</option>
              </select>
            </form>
          </div>
          <!-- content end -->
          <!-- footer start -->
          <div id="footer">
            &copy; 2012 Batty League
          </div>
          <!-- footer end -->
        </div>
        <!-- container end -->
        </body>
        </html>

    And the XML is:

        <?xml version="1.0" encoding="utf-8"?>
        <Schedule season="1">
          <Teams><Team>Bluejays</Team></Teams>
          <Teams><Team>Chickens</Team></Teams>
          <Teams><Team>Lions</Team></Teams>
          <Teams><Team>Pixies</Team></Teams>
          <Teams><Team>Zombies</Team></Teams>
          <Teams><Team>Wombats</Team></Teams>
          <Game status="Played"><Home_Team>Chickens</Home_Team><Away_Team>Bluejays</Away_Team><Date>2012-01-10T09:00:00</Date></Game>
          <Game status="Pending"><Home_Team>Bluejays</Home_Team><Away_Team>Chickens</Away_Team><Date>2012-01-11T09:00:00</Date></Game>
          <Game status="Played"><Home_Team>Bluejays</Home_Team><Away_Team>Lions</Away_Team><Date>2012-01-18T09:00:00</Date></Game>
          <Game status="Played"><Home_Team>Lions</Home_Team><Away_Team>Bluejays</Away_Team><Date>2012-01-19T09:00:00</Date></Game>
          <Game status="Played"><Home_Team>Bluejays</Home_Team><Away_Team>Pixies</Away_Team><Date>2012-01-21T09:00:00</Date></Game>
          <Game status="Played"><Home_Team>Pixies</Home_Team><Away_Team>Bluejays</Away_Team><Date>2012-01-23T09:00:00</Date></Game>
          <Game status="Played"><Home_Team>Bluejays</Home_Team><Away_Team>Zombies</Away_Team><Date>2012-01-25T09:00:00</Date></Game>
          <Game status="Pending"><Home_Team>Zombies</Home_Team><Away_Team>Bluejays</Away_Team><Date>2012-01-27T09:00:00</Date></Game>
          <Game status="Pending"><Home_Team>Bluejays</Home_Team><Away_Team>Wombats</Away_Team><Date>2012-01-28T09:00:00</Date></Game>
          <Game status="Played"><Home_Team>Wombats</Home_Team><Away_Team>Bluejays</Away_Team><Date>2012-01-30T09:00:00</Date></Game>
          <Game status="Played"><Home_Team>Chickens</Home_Team><Away_Team>Lions</Away_Team><Date>2012-01-31T09:00:00</Date></Game>
          <Game status="Played"><Home_Team>Lions</Home_Team><Away_Team>Chickens</Away_Team><Date>2012-02-04T09:00:00</Date></Game>
          <Game status="Played"><Home_Team>Chickens</Home_Team><Away_Team>Pixies</Away_Team><Date>2012-02-05T09:00:00</Date></Game>
          <Game status="Played"><Home_Team>Pixies</Home_Team><Away_Team>Chickens</Away_Team><Date>2012-02-07T09:00:00</Date></Game>
          <Game status="Played"><Home_Team>Chickens</Home_Team><Away_Team>Zombies</Away_Team><Date>2012-02-08T09:00:00</Date></Game>
          <Game status="Played"><Home_Team>Zombies</Home_Team><Away_Team>Chickens</Away_Team><Date>2012-02-10T09:00:00</Date></Game>
          <Game status="Played"><Home_Team>Lions</Home_Team><Away_Team>Pixies</Away_Team><Date>2012-02-12T09:00:00</Date></Game>
          <Game status="Played"><Home_Team>Pixies</Home_Team><Away_Team>Lions</Away_Team><Date>2012-02-14T09:00:00</Date></Game>
          <Game status="Played"><Home_Team>Lions</Home_Team><Away_Team>Zombies</Away_Team><Date>2012-02-15T09:00:00</Date></Game>
          <Game status="Played"><Home_Team>Zombies</Home_Team><Away_Team>Lions</Away_Team><Date>2012-02-16T09:00:00</Date></Game>
          <Game status="Played"><Home_Team>Lions</Home_Team><Away_Team>Wombats</Away_Team><Date>2012-01-23T09:00:00</Date></Game>
          <Game status="Played"><Home_Team>Wombats</Home_Team><Away_Team>Lions</Away_Team><Date>2012-02-24T09:00:00</Date></Game>
          <Game status="Pending"><Home_Team>Pixies</Home_Team><Away_Team>Zombies</Away_Team><Date>2012-02-25T09:00:00</Date></Game>
          <Game status="Pending"><Home_Team>Zombies</Home_Team><Away_Team>Pixies</Away_Team><Date>2012-02-26T09:00:00</Date></Game>
          <Game status="Played"><Home_Team>Pixies</Home_Team><Away_Team>Wombats</Away_Team><Date>2012-02-27T09:00:00</Date></Game>
          <Game status="Played"><Home_Team>Wombats</Home_Team><Away_Team>Pixies</Away_Team><Date>2012-02-28T09:00:00</Date></Game>
          <Game status="Played"><Home_Team>Zombies</Home_Team><Away_Team>Wombats</Away_Team><Date>2012-02-04T09:00:00</Date></Game>
          <Game status="Played"><Home_Team>Wombats</Home_Team><Away_Team>Zombies</Away_Team><Date>2012-02-05T09:00:00</Date></Game>
          <Game status="Played"><Home_Team>Wombats</Home_Team><Away_Team>Chickens</Away_Team><Date>2012-02-07T09:00:00</Date></Game>
          <Game status="Played"><Home_Team>Chickens</Home_Team><Away_Team>Wombats</Away_Team><Date>2012-02-08T09:00:00</Date></Game>
        </Schedule>

    If anybody can point me to jQuery code or plugins that would help with this, I'd appreciate it. Any help right now would be great; I'm just banging my head against a wall. I'm trying to avoid XSLT transforms because I absolutely despise XML and I'm not good with it, so I'd like to just use JavaScript/PHP/etc., with only a sprinkling of the necessary XML where possible.
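    A sketch of the two missing pieces (untested, and assuming the schedule.xml structure above plus a hypothetical results table with id="games"): jQuery's attribute selectors work on XML documents, so completed games can be selected with Game[status="Played"], and a change handler can filter games for the selected team. Note that this XML carries no scores, so a true win/loss/tie chart needs score data added; the sketch below can only count completed games per team:

        var xmlDoc;   // saved from the $.ajax success callback

        // Count completed (non-pending) games per team.
        function countPlayedGames(xml) {
          var counts = {};
          $(xml).find('Teams Team').each(function() {
            counts[$.trim($(this).text())] = 0;
          });
          $(xml).find('Game[status="Played"]').each(function() {
            counts[$.trim($(this).find('Home_Team').text())]++;
            counts[$.trim($(this).find('Away_Team').text())]++;
          });
          return counts;   // render into the chart as needed
        }

        // Show every game (including Pending) for the selected team.
        $('#mySelect').change(function() {
          var team = $(this).val();
          var rows = [];
          $(xmlDoc).find('Game').each(function() {
            var home = $.trim($(this).find('Home_Team').text());
            var away = $.trim($(this).find('Away_Team').text());
            if (home === team || away === team) {
              rows.push('<tr><td>' + home + '</td><td>' + away + '</td><td>' +
                        $(this).find('Date').text() + '</td><td>' +
                        $(this).attr('status') + '</td></tr>');
            }
          });
          $('#games tbody').html(rows.join(''));
        });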

    Read the article

  • Boot From a USB Drive Even if your BIOS Won’t Let You

    - by Trevor Bekolay
    You’ve always got a trusty bootable USB flash drive with you to solve computer problems, but what if a PC’s BIOS won’t let you boot from USB? We’ll show you how to make a CD or floppy disk that will let you boot from your USB drive.

    The boot menu shown in our screenshot, like many created before USB drives became cheap and commonplace, does not include an option to boot from a USB drive. A piece of freeware called PLoP Boot Manager solves this problem, offering an image that can be burned to a CD or put on a floppy disk, and that enables you to boot to a variety of devices, including USB drives.

    Put PLoP on a CD

    PLoP comes as a zip file, which includes a variety of files. To put PLoP on a CD, you will need either plpbt.iso or plpbtnoemul.iso from that zip file. Either disc image should work on most computers, though if in doubt, plpbtnoemul.iso should work "everywhere," according to the readme included with PLoP Boot Manager. Burn plpbtnoemul.iso or plpbt.iso to a CD and then skip to the "Booting PLoP Boot Manager" section.

    Put PLoP on a Floppy Disk

    If your computer is old enough to still have a floppy drive, then you will need to put the contents of the plpbt.img image file, found in PLoP's zip file, on a floppy disk. To do this, we'll use a freeware utility called RawWrite for Windows. We aren't fortunate enough to have a floppy drive installed, but if you do, it should be listed in the Floppy drive drop-down box. Select your floppy drive, then click the "…" button and browse to plpbt.img. Press the Write button to write PLoP Boot Manager to your floppy disk.

    Booting PLoP Boot Manager

    To boot PLoP, you will need to have your CD or floppy drive boot with higher precedence than your hard drive. In many cases, especially with floppy disks, this is the default. If the CD or floppy drive is not set to boot first, then you will need to access your BIOS's boot menu or setup menu. The exact steps vary depending on your BIOS; for a detailed description of the process, search for your motherboard's manual (or your laptop's manual if you're working with a laptop). In general, however, as the computer boots up, the important keystrokes are noted somewhere prominent on the screen. In our case, we press Escape to bring up the boot menu.

    Previously, we burned a CD with PLoP Boot Manager on it, so we select the CD-ROM Drive option and hit Enter. If your BIOS does not have a boot menu, then you will need to access the Setup menu and change the boot order to give the floppy disk or CD-ROM drive higher precedence than the hard drive. Usually this setting is found in the "Boot" or "Advanced" section of the Setup menu.

    If done correctly, PLoP Boot Manager will load up, giving a number of boot options. Highlight USB and press Enter. PLoP begins loading from the USB drive. Despite our BIOS not having the option, we're now booting using the USB drive, which in our case holds an Ubuntu Live CD!

    This is a pretty geeky way to get your PC to boot from a USB, provided your computer still has a floppy drive. Of course, if your BIOS won't boot from a USB, it probably has one... or you really need to update it.
    Download PLoP Boot Manager
    Download RawWrite for Windows

    Read the article

  • Combining Scrum, TFS2010 and Email to keep everyone in the loop

    - by Martin Hinshelwood
    Often you will receive rich information from your Product Owner (Customer) about tasks. That information can be in the form of Word documents, HTML emails and pictures, but you generally receive it in the context of an email. You need to keep these so your team can refer to them later, and so you can send a "done" when the task has been completed. This preserves the "history" of the task and allows you to keep relevant parties included in any future conversation.

    At SSW we keep the original email so that we can reply "done" and delete the email. But keeping it in your email does not help other members of the team if they complete the task and need to send the "done". Worse yet, the description field in Team Foundation Server 2010 (TFS 2010) does not support HTML and images, nor does the default task template support an "interested parties" or CC field. You can attach this content manually, but it can be time consuming.

    Figure: Description only supports plain text, and History supports HTML with no images

    What should we do? At SSW we always follow the rules, and it just so happens that we have rules to both achieve this and to make it easier. You should follow the existing Rules to Better Project Management and attach the email to your task so you can refer to and reply to it later when you close the task:

    - Do you know what Outlook add-ins you need?
    - Describe the work item request in an email
    - Use the Outlook add-in to move the email to a TFS work item

    When replying to an email with "done" you should follow:

    - Do you update the Team Companion template, so the email "subject" doesn't change?
    - Do you update the Team Companion template, so you can generate a proper "done" mail?

    Following these simple rules will help your Product Owner understand you better, and allow your team to collaborate more effectively with each other. An added bonus is that we are keeping the email history in sync with TFS: when you "reply all" to the email, all of the interested parties on the task are also included. This notifies those who may be blocked by your task, keeping them up to date with its status. This has been published as "Do you know to ensure that relevant emails are attached to tasks" in our Rules to Better Scrum using TFS.

    What could we do better? I would like to see this process automated, so that we capture the information correctly in the task without the need to use email. This would require a change to the process template in Team Foundation Server to add an "Interested Parties" field. Each reply to the email would need to be automatically processed into a work item. This could be done by adding a task identifier as the first item in the "Relates to" email header and copying in an email address that you watch. This would then parse out the relevant information, add the new message to the history, update the "Interested Parties" field and attach the images. Upon reflection, it may even be possible, though more difficult, to do this using ONLY the History field, including some of the header information there to build a "done" email with history. This would not currently deal with email "forks" well, but I think it would be adequate for our needs.

    It would be nice if we could find time to implement this, but currently it is but a pipe dream. Maybe Microsoft could implement something in the next version of Team Foundation Server; in the meantime we have a process that works well. Technorati Tags: Scrum, SSW Rules, TFS 2010, TFS 2008

    Read the article

  • Who Makes a Good Product Owner

    - by Robert May
    In general, the best product owners are those that care passionately about the customer of the product. Note that I didn't say about the product itself. Actually, people that only care about the product generally do not make good product owners. Products only matter in relationship to their customers. If a product doesn't provide value to the customer, then the product has no value, no matter what a person might think of the product, and no matter what cool technologies exist inside of it.

    A good product owner is also a good negotiator. They recognize that many different priorities exist inside of a corporation, but that there can be only one list that developers work from. A good product owner recognizes that it's their job to help others around them prioritize (perhaps with a Product Council), but also understands that they alone have the final say about priorities and must be willing to make the tough decisions required. Deciding the priority between two perfectly valid stories is very difficult, especially when the stories are from two different departments!

    A good product owner is deeply interested in helping the team be successful. They don't seek to control the team, but instead seek to understand what the team can do and then work with the team to get the best product possible for the customer. A good product owner is never denigrating to team members, ever. They recognize that such behavior would damage the trust that needs to be present between team members and product owners, and will avoid it at all costs.

    In general, technical people (i.e. former or current developers) make poor product owners. In their minds, they can't separate implementation details from user functionality, so their stories end up sounding like implementation details. For example, "The user enters their username on the password screen" is something that a technical product owner would write. The proper wording for that story is "A user supplies the system with their credentials." Because technical people think differently from the rest of the population, they are generally not a good fit.

    A good product owner is also a good writer. Writing good stories demands good writing: the art of persuasion, descriptiveness and just plain good grammar are all required. A good product owner must also be well spoken, since most of what will be conveyed will be conveyed with the spoken word, not just the written word.

    A good product owner is a "people person." They like talking to people and are very patient. They don't mind having questions repeated or fielding many questions, because they want to make sure that the ideas they're conveying are properly understood so the customer gets the best product possible. They are happy to answer any questions a team member may have and invite feedback and criticism of designs and stories, since they want a good product. They really have little ego that gets in the way of building a great product.

    All of these qualities can be hard to find, but if you look closely enough, you'll find the right person in your organization. Product owners can be found anywhere, not just in upper management. Some of the best product owners are those that are very close to the customer. In fact, check your customer support staff; I'd bet that several great product owners are lurking there.

    A final note about what makes a good product owner: you're probably NOT going to find a good product owner in a manager, especially one who considers themselves a "Manager." Product owners don't manage anything but the backlog, so be especially careful if the person you're selecting for product owner is a manager.

    Up next, "Messing with the Team." Technorati Tags: Scrum, Product Owner

    Read the article

  • 256 Worker Role 3D Rendering Demo is now a Lab on my Azure Course

    - by Alan Smith
    Ever since I came up with the crazy idea of creating an Azure application that would spin up 256 worker roles (please vote if you like it) to render a 3D animation created using the Kinect depth camera, I have been trying to think of something useful to do with it. I have also been busy developing training materials for a Windows Azure course that I will be delivering through a training partner in Stockholm, and for customers wanting to learn Windows Azure. I hit on the idea of combining the render demo and a course lab, creating a lab where the students would create and deploy their own mini render farms, which would participate in a single render job consisting of 2,000 frames. The architecture of the solution is shown below.

    As students would be creating and deploying their own applications, I thought it would be fun to introduce some competitiveness into the lab. In the 256 worker role demo I capture the rendering statistics for each role, so it was fairly simple to include the student's name in these statistics. This allowed the process monitor application to capture the number of frames each student had rendered and display a high-score table. When I demoed the application, I deployed one instance that started rendering a frame every few minutes, and the challenge for the students was to deploy and scale their applications and overtake my single role instance by the end of the lab time. I had the process monitor running on the projector during the lab so the class could see the progress of their deployments, and how they were performing against my implementation and their classmates.

    When I tested the lab for the first time in Oslo last week it was a great success. The students were keen to be the first to build and deploy their solution and then watch the frames appear. As the students mostly had MSDN subscriptions, they were able to scale to the full 20 worker role instances, and before long we had over 100 worker roles working on the animation.

    There were, however, a couple of issues caused by the competitive nature of the lab. The first student to scale the application to 20 instances would render the most frames and win; there was no way for others to catch up. Also, as they were competing against each other, there was no incentive to help others on the course get their applications up and running.

    I have now re-written the lab to divide the students into teams that will compete to render the most frames. This means that if one developer on a team can deploy and scale quickly, the other teams still have a chance to catch up. It also means that if a student finishes quickly and puts their team in the lead, they will have an incentive to help the other developers on their team get up and running.

    As I was using "Sharks with Lasers" for a lot of my demos, and reserved the sharkswithfreakinlasers namespaces for some of the Azure services (well, somebody had to do it), the students came up with some creative alternatives, like "Camels with Cannons" and "Honey Badgers with Homing Missiles". That gave me the idea of having the teams choose a creative name involving animals and weapons. The team rendering architecture diagram is shown below.

    Render Challenge Rules

    In order to ensure fair play, a number of rules are imposed on the lab:

    · The class will be divided into teams; each team chooses a name.
    · The team name must consist of a ferocious animal combined with a hazardous weapon.
    · Teams can allocate as many worker roles as they can muster to the render job.
    · Frame processing statistics and rendered frames will be vigilantly monitored; any cheating, tampering, and other foul play will result in penalties.

    The screenshot below shows an example of the team render farm in action: Badgers with Bombs have taken a lead over Camels with Cannons, and both are leaving the Sharks with Lasers standing. If you are interested in attending a scheduled delivery of my Windows Azure or Windows Azure Service Bus courses, or would like on-site training, more details are here.

    Read the article

  • How do you educate your teammates without seeming condescending or superior?

    - by Dan Tao
    I work with three other guys; I'll call them Adam, Brian, and Chris. Adam and Brian are bright guys. Give them a problem; they will figure out a way to solve it. When it comes to OOP, though, they know very little about it and aren't particularly interested in learning. Pure procedural code is their MO.

    Chris, on the other hand, is an OOP guy all the way -- and a cocky, condescending one at that. He is constantly criticizing the work Adam and Brian do and talking to me as if I must share his disdain for the two of them. When I say that Adam and Brian aren't interested in learning about OOP, I suspect Chris is the primary reason. This hasn't bothered me too much for the most part, but there have been times when, looking at some code Adam or Brian wrote, it has pained me to think about how a problem could have been solved so simply using inheritance or some other OOP concept instead of the unmaintainable mess of 1,000 lines of code that ended up being written instead. And now that the company is starting a rather ambitious new project, with Adam assigned to the task of getting the core functionality in place, I fear the result.

    Really, I just want to help these guys out. But I know that if I come across as just another holier-than-thou developer like Chris, it's going to be massively counterproductive. I've considered:

    - Team code reviews -- everybody reviews everybody's code. This way no one person is really in a position to look down on anyone else; besides, I know I could learn plenty from the other members on the team as well. But this would be time-consuming, and with such a small team, I have trouble picturing it gaining much traction as a team practice.
    - Periodic e-mails to the team -- this would entail me sending out an e-mail every now and then discussing some concept that, based on my observation, at least one team member would benefit from learning about. The downside to this approach is I do think it could easily make me come across as a self-appointed expert.
    - Keeping a blog -- I already do this, actually; but so far my blog has been more about esoteric little programming tidbits than straightforward practical advice. And anyway, I suspect it would get old pretty fast if I were constantly telling my coworkers, "Hey guys, remember to check out my new blog post!"

    This question doesn't need to be specifically about OOP or any particular programming paradigm or technology. I just want to know: how have you found success in teaching new concepts to your coworkers without seeming like a condescending know-it-all? It's pretty clear to me there isn't going to be a sure-fire answer, but any helpful advice (including methods that have worked as well as those that have proved ineffective or even backfired) would be greatly appreciated.

    UPDATE: I am not the Team Lead on this team. Chris is.

    UPDATE 2: Made community wiki to accord with the general sentiment of the community (fancy that).

    Read the article

  • Easy Made Easier - Networking

    - by dragonfly
    In my last post, I highlighted the feature of the Appliance Manager Configurator that auto-fills some fields based on previous field values, including host names based on the System Name and sequential IP addresses from the first IP address entered. This can make configuration a little faster and a little less subject to data entry errors, particularly if you are doing the configuration on the Oracle Database Appliance itself.

    The Oracle Database Appliance Appliance Manager Configurator is available for download here. But why would you download it, if it comes pre-installed on the Oracle Database Appliance? A common reason for customers interested in this new Engineered System is to get a good idea of how easy it is to configure. Beyond that, you can save the resulting configuration as a file and use it on an Oracle Database Appliance. This allows you to verify the data entered in advance, in the comfort of your office. In addition, the topic of this post is another strong reason to download and use the Appliance Manager Configurator prior to deploying your Oracle Database Appliance.

    The most common source of hiccups in deploying an Oracle Database Appliance, based on my experiences with a variety of customers, involves the network configuration. It is during Step 11, when network validation occurs, that these come to light -- almost half way through the 24 total steps -- and they can be frustrating, whether caused by a typo, a DNS mis-configuration or an IP address already in use. This is why I recommend, as a best practice, taking advantage of the Appliance Manager Configurator before deploying an Oracle Database Appliance.

    Why? Not only do you get the benefit of being able to double-check your entries before you even start on the Oracle Database Appliance, you can also take advantage of the Network Validation step. This is the final step before you review all the data and can save it to a text file. It can be skipped if you aren't ready or are not connected to the network that the Oracle Database Appliance will be on. My recommendation, though, is to run the Appliance Manager Configurator on your laptop, enter the data or re-load a previously saved file of the data, and then connect to the network that the Oracle Database Appliance will be on. Now run the Network Validation. It will check that the host names you entered are in DNS and resolve to the IP addresses you specified. It will also ping the IP addresses you specified, so that you can verify that no other machine is already using them (yes, that has happened at customer sites).

    After you have completed the validation, as seen in the screen shot below, you can review the results and move on to saving your settings to a file for use on your Oracle Database Appliance; or, if there are errors, you can use the Back button to return to the appropriate screen and correct the data. Once you are satisfied with the Network Validation, just check the Skip/Ignore Network Validation checkbox at the top of the screen, then click Next. Is the Network Validation in the Appliance Manager Configurator required? No, but it can save you time later. I should also note that the Network Validation screen is not part of the Appliance Manager Configurator that currently ships on the Oracle Database Appliance, so this is the easiest way to verify your network configuration.

    I hope you are finding this series of posts useful. My next post will cover some aspects of the windowing environment that gets run by the 'startx' command on the Oracle Database Appliance, since this is needed to run the Appliance Manager Configurator via a directly connected monitor, keyboard and mouse, or via the ILOM. If it's been a while since you've used an OpenWindows environment, you'll want to check it out.

    Read the article

  • Confused about modifying the sprint backlog during a sprint

    - by Maltiriel
    I've been reading a lot about Scrum lately, and I've found what seem to me to be conflicting claims about whether or not it's OK to change the sprint backlog during a sprint. The Wikipedia article on Scrum says it's not OK, and various other articles say this as well; my software development professor taught the same thing during an overview of Scrum. However, I read Scrum and XP from the Trenches, and that describes a section for unplanned items on the taskboard. So then I looked up the Scrum Guide, and it says that during the sprint "No changes are made that would affect the Sprint Goal", and, in the discussion of the Sprint Goal, "If the work turns out to be different than the Development Team expected, then they collaborate with the Product Owner to negotiate the scope of Sprint Backlog within the Sprint." It goes on to say in the discussion of the Sprint Backlog:

    "The Sprint Backlog is a plan with enough detail that changes in progress can be understood in the Daily Scrum. The Development Team modifies Sprint Backlog throughout the Sprint, and the Sprint Backlog emerges during the Sprint. This emergence occurs as the Development Team works through the plan and learns more about the work needed to achieve the Sprint Goal. As new work is required, the Development Team adds it to the Sprint Backlog. As work is performed or completed, the estimated remaining work is updated. When elements of the plan are deemed unnecessary, they are removed. Only the Development Team can change its Sprint Backlog during a Sprint. The Sprint Backlog is a highly visible, real-time picture of the work that the Development Team plans to accomplish during the Sprint, and it belongs solely to the Development Team."

    So at this point I'm altogether confused. Thinking about it, it makes more sense to me to take the second approach. The individual, specific items in the backlog don't seem to me to be the most important thing; the sprint goal is. So not changing the sprint goal, but being able to change the backlog, makes sense. For instance, if both the product owner and the team thought they were on the same page about a story, but as the sprint progressed they figured out there was a misunderstanding, it seems sensible to change the tasks that make up that story accordingly. Or if there was some story or task that was forgotten about, but is required to reach the sprint goal, I would think it would be best to add the story or task to the backlog during the sprint.

    However, there are a lot of people who seem quite adamant that any change to the sprint backlog is not OK. Am I misunderstanding that position somehow? Are those folks defining the sprint backlog differently somehow? My understanding of the sprint backlog is that it consists of both the stories and the tasks they're broken down into.

    Anyway, I would really appreciate input on this issue. I'm trying to figure out both what the idealistic Scrum approach to changing the sprint backlog during a sprint is, and whether people who use Scrum successfully for development allow changing the sprint backlog during a sprint.

    Read the article

  • Default Outlook calendar update with holidays

    - by New IT Manager
    We are currently on SBS 2003 with Exchange, with Outlook 2010 clients. We have developers working across three countries, all connected to the same domain, and I would like all three countries' holidays to appear in everyone's default calendar. I did try several options for synchronizing the holidays back to everyone's calendar, but it was not very helpful. Any thoughts would be much appreciated. Thanks in advance, folks. :)

    Read the article

  • The Winds of Change are a Blowin’

    - by Ajarn Mark Caldwell
    For six years I have been an avid and outspoken fan and paying customer of SourceGear products -- from Vault to Dragnet to Fortress and on to Vault Professional -- but that is all changing now. Not the fan part, but the paying customer part. I'm still a huge fan. I think that SourceGear does a great job with their product, and support has been fantastic when needed (which is not very often). I think that Eric Sink has done a fine job building a quality company and products, and I appreciate his contributions to the tech community through his blogging and books. I still think their products are high quality and do a fantastic job of what they do. But there's the rub: what they do is no longer enough for me.

    As I have rebuilt our development team over the last couple of years, and we have begun to investigate Scrum and Kanban, I realize that I need more visibility into the progress of the team. I need better project management tools, and this is where Vault Professional lags behind several other tools. Granted, in the latest release (Vault 6.0) they added a nice time tracking feature, but I want more. (Note: I did contact SourceGear about my quest for more, but apparently the rest of their customer base has not been clamoring for this, and so they have not built it. Granted, I wasn't clamoring for it either until just recently, but unfortunately for SourceGear, I want it now and don't want to wait for them to build it into their system.)

    Ironically, it was SourceGear themselves who started to turn me on to the possibilities of other tools. They built a limited integration with Axosoft OnTime, which I read about several times on their support site (I used to regularly read and occasionally comment on their Support Forum). I decided to check out OnTime and was very impressed with the tool for work item tracking and project management (not to mention their great Scrum Master in 10 Minutes video). I fell in love with the capabilities of OnTime. Unfortunately, the integration with Vault for source control management was, as I mentioned, limited. I could have forfeited the integration between work items and source code, but there is too much benefit to linking check-ins to work items for me to give that up. So then I did what was previously unthinkable for me: I considered switching not just the work tracking tool, but also the source code management tool. This was really stepping outside my comfort zone, because source code is gold, and not to be trifled with. When you find a good weapon to protect your gold, stick with it.

    I looked at Git and TortoiseSVN, but the integration methods for those were pretty rough compared to what I was used to. The recommended tool from Axosoft's point of view appeared to be RocketSVN, but I really wasn't sure I wanted to go the "flavor of Subversion" route. Then I started thinking about that other tool I liked back when I first chose to go with Vault, but couldn't afford: Team Foundation Server. And what do you know... Microsoft has not only radically improved it over that version from back in 2006, but they also came to their senses about how it should be licensed, and it is much more affordable now. So I started looking into the latest capabilities in the 2012 version, and I fell in love all over again.

    I really went deep on checking out the tools. I watched numerous webcasts from Microsoft partners, went to a beta preview on Microsoft's campus, and watched a lot of Channel 9 videos on the new ALM features (oooh... shiny). Frankly, I was very impressed with the capabilities of the newest version, and figured this was probably our direction. As an interesting twist of fate, one of my employees crossed paths with an ALM consultant from Northwest Cadence, a local Microsoft partner and one of the companies that produced several of the webcasts I had been watching. So I gave Bryon a call and started grilling him to see if he really knew anything or was just another guy who couldn't find a job so he called himself a consultant. It turns out Bryon actually knows a lot, especially in an area that was becoming a frustration point for us: branching strategies and automated builds (that's probably a whole separate blog entry). As we talked, Bryon suggested we look into doing a DTDPS (Developer Tools Deployment Planning Services) session with his company. This is a service that can be paid for with Microsoft Enterprise Agreement planning services credits or SA training benefits, and, again coincidentally, we had several that were just about to expire, so I put them to good use.

    The DTDPS sessions were great, and Bryon, Rick, and the rest of the folks at Northwest Cadence have been a pleasure to work with. We have just purchased a new server for our TFS rollout and are planning the steps and options right now. There is still a big project ahead of us, not only to install and configure TFS, but also to load all of our source code (many different systems, not just one program) and transition to the new way of life with TFS, but I am convinced that it is the right move for my team at this point in time. We need the new capabilities that align with Scrum and Kanban methodologies in order to more efficiently manage all the different projects that we have going on at one time.

    I would still wholeheartedly endorse SourceGear's products and Axosoft's OnTime for those whose needs are met by those tools, but for me and my team, I think that TFS is the right fit, and I am looking forward to the change.

    Read the article

  • A Rose by Any Other Name..

    - by Geoff N. Hiten
    It is always a good start when you can steal a title line from one of the best writers in the English language. Let's hope I can make the rest of this post live up to the opening. One recurring problem with SQL Server is moving databases to new servers. Client applications use a variety of ways to resolve SQL Server names, some of which are not changed easily <cough SharePoint /cough>. If you happen to be using default instances on both the source and target SQL Server, then the solution is pretty simple. You create (or bug the network admin until she creates) two DNS "A" records. One points the old name to the new IP address. The other creates a new alias for the old server, since the original system name is now redirected. Note this will redirect ALL traffic from the old server to the new server, including RDP and file share connection attempts.

    Figure 1 – Microsoft DNS MMC Snap-In

    Figure 2 – DNS New Host Dialog Box

    Both records are necessary so you can still access the old server via an alternate name.

        Server Role   IP Address     Name    Alias
        Source        10.97.230.60   SQL01   SQL01_Old
        Target        10.97.230.80   SQL02   SQL01

    Table 1 – Alias List

    If you or somebody set up connections via IP address, you deserve to have to go to each app and fix it by hand. That is the only way to fix that particular foul-up.

    If you have to deal with named instances either as a source or a target, then it gets more complicated. The standard fix is to use the SQL Server Configuration Manager (or one of its earlier incarnations) to create a SQL client alias to redirect the connection. This can be a pain to install and configure on multiple client servers. The good news is that SQL Server Configuration Manager, and all of its earlier versions, simply writes a few registry keys. Extracting the keys into a .reg file makes centralized automated deployment a snap.

    If the client is a 32-bit system, you have to extract the native key. If it is 64-bit, you have to extract the native key and the WoW (32-bit on 64-bit host) key.

    First, pick a development system to create the actual registry key. If you do this repeatedly, you can simply edit an existing registry file. Create the entry using the SQL Server Configuration Manager. You must use a 64-bit system to create the WoW key. The following example redirects the named instance "SQL01\SQLUtility" to a default instance on "SQL02".

    Figure 3 – SQL Server Configuration Manager - Native

    Figure 3 shows the native key listing.

    Figure 4 – SQL Server Configuration Manager – WoW

    If you think you don't need the WoW key because your app is 64-bit, think again. SQL Server Management Studio is a 32-bit app, as are most SQL test utilities. Always create both keys for 64-bit target systems.

    Now that the keys exist, we can extract them into a .reg file. Fire up REGEDIT and browse to the following location: HKLM\Software\Microsoft\MSSQLServer\Client\ConnectTo. You can also search the registry for the string value of one of the server names (old or new). Right click on the "ConnectTo" label and choose "Export". Save with an appropriate name and location. The resulting file should look something like this:

    Figure 5 – SQL01_Alias.reg

    Repeat the process with the location: HKLM\Software\Wow6432Node\Microsoft\MSSQLServer\Client\ConnectTo. Note that if you have multiple alias entries, ALL of the entries will be exported. In that case, you can edit the file and remove the extra aliases. You can edit the files together into a single file. Just leave a blank line between new keys, like this:

    Figure 6 – SQL01_Alias_All.reg

    Of course, if you have an automatic way to deploy, it makes sense to have an automatic way to un-deploy. To delete a registry key, simply edit the .reg file and replace the target with a "-" sign, like so:

    Figure 7 – SQL01_Alias_UNDO.reg

    Now we have the ability to move any database to any server without having to install or change any applications on any client server. The whole process should be transparent to the applications, which makes planning and coordinating database moves a far simpler task.
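    Since the figures are screenshots, here is a sketch of what such a .reg file's contents typically look like (alias and server names taken from the example above; "DBMSSOCN" specifies the TCP/IP protocol):

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo]
        "SQL01\\SQLUtility"="DBMSSOCN,SQL02"

        [HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\MSSQLServer\Client\ConnectTo]
        "SQL01\\SQLUtility"="DBMSSOCN,SQL02"

    The value name is the alias clients connect to; the data is the protocol followed by the real server (and optionally a port, e.g. "DBMSSOCN,SQL02,1433").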

    Read the article

  • Using Definition of Done to Drive Agile Maturity

    - by Dylan Smith
    I've been an Agile Coach at a lot of different clients over the years, and I want to share an approach I use to help them adopt and mature over time. It's important to realize that "Agile" is not a black/white, yes/no thing. Teams can be agile to varying degrees; I think of this as their agile maturity level. When I coach teams I want them to start out being a little agile, and get more agile as they mature. The approach I teach them is to use the Definition of Done as a technique to continuously improve their agile maturity over time.

    We're probably all familiar with the concept of "Done Done" that represents what *actually* being done with a feature means -- not just when a developer says he's done, right after he writes the last line of code that makes the feature kind-of work. Done Done means the coding is done, it's been tested, installers and deployment packages have been created, user manuals have been updated, architecture docs have been updated, etc. To enable teams to internalize the concept of "Done Done", they usually get together and come up with their Definition of Done (DoD) that defines all the activities that need to be completed before a feature is considered Done Done.

    The Done Done technique is typically applied only to features (aka User Stories). What I do is extend this to apply to several concepts, such as User Stories, Sprints, Releases (and sometimes Check-Ins). During project kick-off I'll usually sit down with the team and go through an exercise of creating DoD's for each of these concepts (Stories/Sprints/Releases). We'll usually start by just brainstorming a bunch of activities that could end up in these various DoD's. Here are some examples:

    - Code Reviews
    - StyleCop
    - FxCop
    - User Manuals Updated
    - Architecture Docs Updated
    - Tested by QA
    - Tested by UAT
    - Installers Created
    - Support Knowledge Base Updated
    - Deployment Instructions (for Ops) Written
    - Automated Unit Tests Run
    - Automated Integration Tests Run

    Then we start by arranging these activities into the place they occur today (e.g. do you do UAT testing only once per release? every sprint? every feature?). If the team was previously waterfall, most of these activities probably end up in the Release DoD. An extremely mature agile team would probably have most of these activities in the DoD for the User Stories (because an extremely mature agile team will probably do continuous deployment and release every story).

    So what we need to do as a team is work to move these activities from their current home (Release DoD) down into the Sprint DoD and eventually into the User Story DoD (and maybe into the lower-level Check-In DoD if we decide to use that). We don't have to move them all down immediately; as a team we figure out what we think we're capable of moving down to the Sprint cycle and Story cycle immediately, and that becomes our starting DoD's. Over time the team makes an effort to continue moving activities down from Release to Sprint to Story as they become more agile and more mature. I try to encourage them to envision a world in which they deploy to production as each User Story is completed. They would need to be updating user manuals, creating installers, and doing UAT testing (typical Release-cycle activities) on every single User Story. They may never actually reach that point, but they should envision it, and strive to keep driving the activities down closer to the User Story cycle as they mature.

    This is a great technique to give a team an easy-to-follow roadmap for maturing their agile practices over time. Sure, there are other aspects to maturity outside of this, but it's a great technique, easy to visualize, for driving agility into the team. Just keep moving those activities (aka "gates") down the board from Release to Sprint to Story.

    Here is an example of what a recent client of mine had for their DoD's (this is from memory, so probably not 100% accurate):

    Release
    - Create/update deployment instructions for Ops
    - Instructional videos updated
    - Run manual regression test suite
    - UAT testing (in this case that meant deploying to an environment shared across the enterprise that mirrored production, and asking other business groups to test their own apps to ensure we didn't break anything outside our system)

    Sprint
    - Deploy to UAT environment (but not necessarily actually request that UAT testing occur)
    - User guides updated
    - Sprint features video created (in this case we decided to create a video each sprint showing off the progress -- a video version of the Sprint Demo)

    User Story
    - Manual test scripts developed and run
    - Tested by BA
    - Deployed in shared QA environment using the automated deployment process
    - Peer code review

    Check-In
    - Compiled (warning-free)
    - Passes StyleCop
    - Passes FxCop
    - Create installer packages
    - Run automated unit tests
    - Run automated integration tests

    PS – One of my clients had a great question when we went through this activity: if a Sprint is by definition done when the end date rolls around (time-boxed), isn't a DoD on a Sprint meaningless, since it's done on the end date regardless of whether those other activities are complete or not? My answer is that while that statement is true, if the DoD activities haven't been completed I would consider the Sprint a failure (similar to not completing what was committed/planned; failure may be too strong a word, but you get the idea). In the Retrospective, that becomes an agenda item: to discuss and understand why we weren't able to complete the activities we agreed would need to be completed each Sprint.

    Read the article

  • Importing OWL files

    - by Mikae Combarado
    Hello, I have a problem importing OWL files using the OWL API in Java. I can successfully import two OWL files. However, a problem occurs when I try to import three or more OWL files that are related to each other. E.g.:

        Base.owl        -- base ontology
        Electronics.owl -- electronics ontology, which imports Base.owl
        Telephone.owl   -- telephone ontology, which imports Base.owl and Electronics.owl

    When I just map Base.owl and load Electronic.owl, it works smoothly. The code is given below:

        File fileBase = new File("filepath/Base.owl");
        File fileElectronic = new File("filePath/Electronic.owl");
        SimpleIRIMapper iriMapper = new SimpleIRIMapper(
            IRI.create("url/Base.owl"), IRI.create(fileBase));
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        manager.addIRIMapper(iriMapper);
        OWLOntology ont = manager.loadOntologyFromOntologyDocument(fileElectronic);

    However, when I want to load Telephone.owl, I just create an additional IRI mapper and add it to the manager; the added lines are marked with // new:

        File fileBase = new File("filepath/Base.owl");
        File fileElectronic = new File("filePath/Electronic.owl");
        File fileTelephone = new File("filePath/Telephone.owl");              // new
        SimpleIRIMapper iriMapper = new SimpleIRIMapper(
            IRI.create("url/Base.owl"), IRI.create(fileBase));
        SimpleIRIMapper iriMapper2 = new SimpleIRIMapper(
            IRI.create("url/Electronic.owl"), IRI.create(fileElectronic));    // new
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        manager.addIRIMapper(iriMapper);
        manager.addIRIMapper(iriMapper2);                                     // new
        OWLOntology ont = manager.loadOntologyFromOntologyDocument(fileTelephone);

    The code shown above gives this error:

        Could not load import: Import(url/Electronic.owl>)
        Reason: Could not loaded imported ontology: <url/Base.owl> Cause: null

    It would be really appreciated if someone could give me a hand... Thanks in advance...
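    One thing worth trying here -- a sketch, assuming all three files live in one directory -- is the OWL API's AutoIRIMapper, which scans a folder and registers a mapping for every ontology it finds, so each import can be resolved locally no matter how many files are involved:

        import java.io.File;
        import org.semanticweb.owlapi.apibinding.OWLManager;
        import org.semanticweb.owlapi.model.OWLOntology;
        import org.semanticweb.owlapi.model.OWLOntologyCreationException;
        import org.semanticweb.owlapi.model.OWLOntologyManager;
        import org.semanticweb.owlapi.util.AutoIRIMapper;

        public class LoadOntologies {
            public static void main(String[] args) throws OWLOntologyCreationException {
                OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
                // Scan the directory (true = recurse into subdirectories) and map
                // each ontology IRI found there to its local file.
                manager.addIRIMapper(new AutoIRIMapper(new File("filePath"), true));
                OWLOntology ont = manager.loadOntologyFromOntologyDocument(
                    new File("filePath/Telephone.owl"));
            }
        }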

    Read the article

  • Active Record two belongs_to calls or single table inheritance

    - by ethyreal
    In linking a sports event to two teams, at first this seemed to make sense:

        events
        - id: integer
        - home_team_id: integer
        - away_team_id: integer

        teams
        - id: integer
        - name: string

    However, I am troubled by how I would link that up in the Active Record model:

        class Event
          belongs_to :home_team, :class_name => 'Team', :foreign_key => "home_team_id"
          belongs_to :away_team, :class_name => 'Team', :foreign_key => "away_team_id"
        end

    Is that the best solution? In an answer to a similar question I was pointed to single table inheritance, and then later found polymorphic associations; neither seemed to fit this association. Perhaps I am looking at this wrong, but I see no need to subclass a team into home and away teams, since the distinction is only in where the game is played. If I did go with single table inheritance, I wouldn't want each team to belong_to an event, so would this work?

        # app/models/event.rb
        class Event < ActiveRecord::Base
          belongs_to :home_team
          belongs_to :away_team
        end

        # app/models/team.rb
        class Team < ActiveRecord::Base
          has_many :teams
        end

        # app/models/home_team.rb
        class HomeTeam < Team
        end

        # app/models/away_team.rb
        class AwayTeam < Team
        end

    I also thought about a has_many :through association, but that seems too much, as I will only ever need two teams, and those two teams don't belong to any one event:

        event_teams
        - event_id: integer
        - team_id: integer
        - is_home: boolean

    Is there a cleaner, more semantic way to make these associations in Active Record? Or is one of these solutions the best choice? Thanks
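    For what it's worth, the two belongs_to associations the question starts with are generally the cleaner fit here, since "home" and "away" describe the event, not the team. A sketch along those lines, in the classic hash-style Rails syntax to match the question (the inverse association names on Team are my own suggestion, not from the original):

        # app/models/event.rb
        class Event < ActiveRecord::Base
          belongs_to :home_team, :class_name => 'Team', :foreign_key => 'home_team_id'
          belongs_to :away_team, :class_name => 'Team', :foreign_key => 'away_team_id'
        end

        # app/models/team.rb
        class Team < ActiveRecord::Base
          has_many :home_events, :class_name => 'Event', :foreign_key => 'home_team_id'
          has_many :away_events, :class_name => 'Event', :foreign_key => 'away_team_id'

          # All events this team played in, home or away.
          def events
            Event.all(:conditions => ['home_team_id = ? OR away_team_id = ?', id, id])
          end
        end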

    Read the article

  • How to insert an Array/Object into SQL (best practice)

    - by Jason
    I need to store three items as an array in a single column and be able to quickly/easily modify that data in later functions.

    [---YOU CAN SKIP THIS PART IF YOU TRUST ME---] To be clear, I love and use x_ref tables all the time, but an x_ref doesn't work here because this is not a one-to-many relationship. I am making a project management tool that, among other things, assigns a user to a project and assigns hours to that project on a weekly basis, per user, sometimes many weeks into the future. Of course there are many projects, a project can have many team members, and a team member can be involved with many projects at one time; BUT it's not one-to-many, because a team member can work many weeks on the same project with different hours for different weeks. In other words, each object really is unique. Also/finally, this data can be changed at any time by any team member, hence it needs to be easy to manipulate. Basically, I need to handle three values (the team member, the week we're talking about, and how many hours) dropped into a project row in the projects table (under the column for project team members) and treated as one item - a team member - that will actually be part of a larger array of all the team members involved in the project. [---END SKIP, START READING HERE :)---]

    So, assuming that the application's general schema and relation tables aren't total crap, and that we are in fact up against a wall in this one case to use an array/object as a value for this column: is there a best practice for that? Like a particular SQL data type? A particular object/array format? CSV? JSON? XML? Most of the app is in C#, but (for very odd reasons that I won't explain) we could really use any environment if there is a particular one that handles this well. For the moment, I am thinking either (web service + JS/JSON) or PHP serialize/unserialize (but I am a bit sketched out by the PHP solution because it seems a bit cumbersome when using Ajax). Thoughts, anyone?
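    A hedged aside rather than a direct answer: if JSON in a TEXT column turns out to be too awkward to query or update safely, the three values described (member, week, hours) also fit a plain allocation table with a composite key, which keeps every weekly figure independently updatable. All table and column names below are hypothetical, not from the question:

        -- One row per (project, member, week); hours can differ week to week.
        CREATE TABLE project_allocations (
            project_id INT NOT NULL,
            member_id  INT NOT NULL,
            week_start DATE NOT NULL,          -- e.g. the Monday of the planned week
            hours      DECIMAL(5,2) NOT NULL,
            PRIMARY KEY (project_id, member_id, week_start)
        );

        -- Upsert a member's hours for one week (MySQL syntax):
        INSERT INTO project_allocations (project_id, member_id, week_start, hours)
        VALUES (42, 7, '2010-05-10', 12.5)
        ON DUPLICATE KEY UPDATE hours = VALUES(hours);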

    Read the article

  • MySQL: Complex Join Statement involving two tables and a third correlation table

    - by Stephen
    I have two tables that were built for two disparate systems. I have records in one table (called "leads") that represent customers, and records in another table (called "manager") that are the exact same customers, but "manager" uses different fields (for example, "leads" contains an email address, and "manager" contains two fields for two different emails, either of which might be the email from "leads"). So, I've created a correlation table that contains the lead_id and manager_id. Currently this correlation table is empty. I'm trying to query the "leads" table to give me records that match either "manager" email field with the single "leads" email field, while at the same time ignoring records that have already been added to the correlation table (this way I can see how many matching leads have not yet been correlated). Here's my current, invalid SQL attempt:

        SELECT leads.id, manager.id
        FROM leads, manager
        LEFT OUTER JOIN correlation ON correlation.lead_id = leads.id
        WHERE correlation.id IS NULL
          AND leads.project != "someproject"
          AND (manager.orig_email = leads.email OR manager.dest_email = leads.email)
          AND leads.created BETWEEN '1999-01-01 00:00:00' AND '2010-05-10 23:59:59'
        ORDER BY leads.created ASC;

    I get the error:

        Unknown column 'leads.id' in 'on clause'

    Before you wonder: there are records in the "leads" table where leads.project != "someproject" and leads.created falls between those dates. I've included those additional parameters for completeness.
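    For what it's worth, this error usually comes from mixing the comma join with an explicit JOIN: since MySQL 5.0 the comma operator binds more loosely than LEFT JOIN, so the ON clause can no longer see "leads". A sketch of the usual rewrite, assuming the schema exactly as described in the question:

        -- Every join is now explicit, so both ON clauses can reference leads.
        SELECT leads.id, manager.id
        FROM leads
        INNER JOIN manager
                ON manager.orig_email = leads.email
                OR manager.dest_email = leads.email
        LEFT OUTER JOIN correlation
                ON correlation.lead_id = leads.id
        WHERE correlation.id IS NULL
          AND leads.project != 'someproject'
          AND leads.created BETWEEN '1999-01-01 00:00:00' AND '2010-05-10 23:59:59'
        ORDER BY leads.created ASC;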

    Read the article

  • How to get to the key name of a referenced entity property from an entity instance without a datastore read in Google App Engine?

    - by Sumeet Pareek
    Consider I have the following models:

        class Team(db.Model):    # say I have just 5 teams
            name = db.StringProperty()

        class Player(db.Model):  # say I have thousands of players
            name = db.StringProperty()
            team = db.ReferenceProperty(Team, collection_name="player_set")

    The key name for each Team entity = 'team_', and for each Player entity = 'player_'. By some prior arrangement I have each Team entity's (key_name, name) mapping available to me, for example (team_01, United States Of America), (team_02, Russia), etc. I have to show all the players and their teams on a page. One way of doing this would be:

        players = Player.all().fetch(1000)             # This is 1 DB read
        for player in players:                         # This will iterate 1000 times
            self.response.out.write(player.name)       # This is obviously not a DB read
            self.response.out.write(player.team.name)  # This is a total of 1 x 1000 = 1000 DB reads

    That is 1001 DB reads for a silly thing. The interesting part is that when I do a db.to_dict() on players, it shows that for every player in that list the 'name' of the player is available, and the 'key_name' of the team is available too. So how can I do the below?

        players = Player.all().fetch(1000)        # This is 1 DB read
        for player in players:                    # This will iterate 1000 times
            self.response.out.write(player.name)  # This is obviously not a DB read
            self.response.out.write(team_list[player.<SOME WAY OF GETTING TEAM KEY NAME>])
            # Here 'team_list' already has (key_name, name) for all 5 teams

    I have been struggling with this for a long time and have read every available piece of documentation. I could just hug the person that can help me here :-)

    Disclaimer: The above problem description is not a real scenario. It is a simplified arrangement that represents my problem exactly. I have run into it in a rather complex and big GAE application.
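    If I remember the old db API correctly, this is exactly what Property.get_value_for_datastore() is for: called on the class-level ReferenceProperty, it returns the raw db.Key stored on the entity without dereferencing the reference, so the team's key_name costs no extra datastore reads. A sketch under that assumption (team_list is the asker's in-memory key_name -> name mapping):

        players = Player.all().fetch(1000)     # still the only DB read
        for player in players:                 # 1000 iterations, 0 extra reads
            # get_value_for_datastore() returns the stored db.Key, not the Team entity
            team_key = Player.team.get_value_for_datastore(player)
            self.response.out.write(player.name)
            self.response.out.write(team_list[team_key.name()])  # e.g. 'team_01'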

    Read the article
