Search Results

Search found 15266 results on 611 pages for 'young people'.


  • mysql querying age from dob

    - by Confused
    My MySQL database stores dates of birth in the format yyyy-mm-dd. How do I query the database - without using PHP code - to show people born within x years of now, and/or people who are y years of age? The table is called users and the column is called dob. I've spent a while reading the MySQL manual but I can't work out precisely how to form my query. So far I have select * from users where dob... but I don't think that's the way to go.
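    For illustration, here is one way such queries could look in MySQL, using only the users table and dob column named in the question (5 and 30 are example values for x and y):

        -- people born within 5 years of now (x = 5)
        SELECT * FROM users
        WHERE dob >= DATE_SUB(CURDATE(), INTERVAL 5 YEAR);

        -- people who are currently 30 years old (y = 30)
        SELECT * FROM users
        WHERE TIMESTAMPDIFF(YEAR, dob, CURDATE()) = 30;

    TIMESTAMPDIFF(YEAR, ...) counts completed years between two dates, which matches the usual meaning of "age".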

    Read the article

  • C# design question (Connections)

    - by David
    Hello, I would like to hear your suggestions on a design problem I have in C#. I am making a program where people can meet and draw in the same window over the internet or LAN. I draw into a bitmap and then assign it to a PictureBox component. I am having a hard time deciding how to send updates to each user. Should I send the mouse coordinates and do the drawing on each user's screen, or stream the image to everyone? Maybe you know a better solution to keep it synchronized and efficient. Thank you.
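    For a sense of why the coordinate approach is usually cheaper: a stroke is a handful of integers, while a re-sent bitmap is kilobytes per update. A rough sketch - the DrawStroke type and its fields are hypothetical, not from the question:

        using System;
        using System.Drawing;

        // Hypothetical message describing one drawing step; serialize this
        // and broadcast it to every client instead of sending the bitmap.
        [Serializable]
        public class DrawStroke
        {
            public int FromX, FromY;   // previous mouse position
            public int ToX, ToY;       // current mouse position
            public int ArgbColor;      // pen color as a packed ARGB value
            public float Width;        // pen width in pixels
        }

        public static class StrokePainter
        {
            // Each client applies the stroke to its own local bitmap.
            public static void Apply(Bitmap bitmap, DrawStroke s)
            {
                using (var g = Graphics.FromImage(bitmap))
                using (var pen = new Pen(Color.FromArgb(s.ArgbColor), s.Width))
                {
                    g.DrawLine(pen, s.FromX, s.FromY, s.ToX, s.ToY);
                }
            }
        }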

    Read the article

  • start javascript code with $(function, etc

    - by YAmikep
    I am studying Backbone and the todo example apps from http://todomvc.com/. I have noticed there are three different ways of starting the code in the files: $(function() { // code here });, $(function( $ ) { // code here }); and (function() { // code here }());. I do not understand the differences or when I should use one over the other. I also saw some people start their code with $(document).ready(function(){ // code here });. From what I have seen, that is the full way of writing it, right? More generally, should I always wrap my JavaScript code in something like that in each file? Thanks for your advice.
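    For reference, a short sketch of how these patterns differ (a hedged summary, not from the original post):

        // 1. Shorthand for jQuery's DOM-ready handler; runs once the DOM is parsed.
        $(function () { /* code here */ });

        // 2. Same as #1, but jQuery passes itself in as the first argument,
        //    so $ is safe inside even if jQuery.noConflict() freed the global $.
        $(function ($) { /* code here */ });

        // 3. An immediately-invoked function expression (IIFE). This runs
        //    right away, NOT on DOM ready; it only creates a private scope.
        (function () { /* code here */ }());

        // 4. The long form of #1; the behavior is identical.
        $(document).ready(function () { /* code here */ });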

    Read the article

  • Are instance initializers good or bad?

    - by berry120
    I personally quite like instance initializers - I use them to assign default values to things such as collections, so when writing constructors I don't have to remember to assign the same default values each time. It seems quite elegant to me: it avoids annoying NPEs popping up and avoids duplicate code. A private method doesn't seem as nice because a) it can't assign values to final fields, b) it could be run elsewhere in the code, and c) it still needs to be explicitly called at the start of each constructor. However, the flip side, from others I have spoken to, is that they're confusing: some people reading the code might not understand what they do or when they're called, and thus they could cause more problems than they solve. Is proper use of these initializers something to be encouraged or avoided? Or is it an "each to their own" case?
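    For readers who haven't seen the construct, a minimal sketch of an instance initializer (the class and field names are illustrative only):

        import java.util.ArrayList;
        import java.util.List;

        public class Order {
            private final List<String> items;

            // Instance initializer: runs before every constructor body,
            // so both constructors see 'items' already assigned. Unlike a
            // private helper method, it may assign a final field.
            {
                items = new ArrayList<>();
            }

            public Order() { }

            public Order(String firstItem) {
                items.add(firstItem);
            }
        }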

    Read the article

  • What would be the most efficient way to do this search (mysql or text)?

    - by alex
    Suppose I have 500 rows of data, each with a paragraph of text (like this paragraph). That's it. I want to do a search that is not just word-based (%LIKE%, not FULLTEXT). What would be faster? SELECT * FROM ... WHERE ... LIKE "%query%"; - this would put the load on the database server. Or: select all rows, then go through each one in application code and test whether the paragraph contains the query string - this would put the load on the web server. This is a website, and people will be searching frequently.
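    A sketch of the database-side version, assuming a hypothetical table articles with a text column body (no names are given in the question):

        SELECT * FROM articles
        WHERE body LIKE CONCAT('%', 'query', '%');

    For only 500 short rows, a LIKE scan is typically cheap, and filtering in the database avoids shipping every row to the web server on each search.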

    Read the article

  • .net File.Copy very slow when copying many small files (not over network)

    - by Guavaman
    I'm making a simple folder sync backup tool for myself and ran into quite a roadblock using File.Copy. Doing tests copying a folder of ~44,000 small files (Windows mail folders) to another drive in my system, I found that File.Copy was over 3x slower than running xcopy from the command line to copy the same files/folders. My C# version takes over 16 minutes to copy the files, whereas xcopy takes only 5 minutes. I've tried searching for help on this topic, but all I find is people complaining about slow copying of large files over a network. This is neither a large-file problem nor a network-copying problem. I found an interesting article about a better File.Copy replacement, but the code as posted has some errors which cause problems with the stack, and I am nowhere near knowledgeable enough to fix the problems in his code. Are there any common or easy ways to replace File.Copy with something faster?
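    One common thing to try is a plain stream-to-stream copy with a large buffer; whether it actually beats File.Copy for thousands of tiny files is not guaranteed, so treat this as a sketch to benchmark rather than a drop-in fix:

        using System.IO;

        static class FastCopy
        {
            // Copies one file using explicit streams and a 1 MB buffer.
            // Stream.CopyTo requires .NET 4; on older versions, loop
            // over Read/Write with your own byte[] buffer instead.
            public static void Copy(string source, string dest)
            {
                const int BufferSize = 1 << 20; // 1 MB
                using (var src = new FileStream(source, FileMode.Open,
                           FileAccess.Read, FileShare.Read, BufferSize))
                using (var dst = new FileStream(dest, FileMode.Create,
                           FileAccess.Write, FileShare.None, BufferSize))
                {
                    src.CopyTo(dst, BufferSize);
                }
            }
        }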

    Read the article

  • Use of .apply() with 'new' operator. Is this possible?

    - by Premasagar
    In JavaScript, I want to create an object instance (via the new operator), but pass an arbitrary number of arguments to the constructor. Is this possible? What I want to do is something like this (but the code below does not work):

        function Something() {
            // init stuff
        }
        function createSomething() {
            return new Something.apply(null, arguments);
        }
        var s = createSomething(a, b, c); // 's' is an instance of Something

    The answer: from the responses here, it became clear that there's no built-in way to call .apply() with the new operator. However, people suggested a number of really interesting solutions to the problem. My preferred solution was this one from Matthew Crumley (I've modified it to pass the arguments object):

        var createSomething = (function () {
            function F(args) {
                return Something.apply(this, args);
            }
            F.prototype = Something.prototype;
            return function () {
                return new F(arguments);
            };
        })();

    Read the article

  • How to resubmit PHP form with javascript

    - by user866339
    I am wondering if it is possible to resubmit a form with a button click that calls a JavaScript command. This is basically what I'm trying to do - page 1: a form with action = page2.php; page 2: generates a randomized list according to the parameters set on page 1. I would like to place a button on page 2 so that on click, it would be as if the user had hit F5, and a new list would be generated with the same parameters. I found a lot of help on Google from people trying NOT to make this happen, but I'm not sure how to actually make it happen. Thank you!
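    A minimal sketch of the reload-button idea (this re-issues the last request, so for a POSTed form most browsers will ask the user to confirm resubmitting the data):

        <!-- on page2.php -->
        <button type="button" onclick="location.reload();">
            Generate a new list
        </button>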

    Read the article

  • Subtyping and assignment in Java

    - by Danrex
    Arghh, I just know people are going to hate me for asking this... I was playing around with inheritance and noticed you can declare a subclass object in one of two ways. So I wondered whether there is any functional difference between the two. In the code below, do both lines produce the exact same result - a MountainBike object - or is there some difference I should know about? Bicycle is the superclass in this example. Whether I write Bicycle bike or MountainBike bike, am I effectively making a MountainBike because of new MountainBike()? Is the difference just semantics at this point?

        Bicycle bike = new MountainBike();
        MountainBike bike = new MountainBike();
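    A sketch of the practical difference: both lines create a MountainBike at runtime, but the variable's compile-time (static) type controls which members you may call. The setSuspension method here is hypothetical, assuming MountainBike declares it and Bicycle does not:

        Bicycle bike1 = new MountainBike();
        MountainBike bike2 = new MountainBike();

        bike2.setSuspension(5);           // fine: static type is MountainBike
        // bike1.setSuspension(5);        // compile error: not declared on Bicycle
        ((MountainBike) bike1).setSuspension(5); // works with an explicit cast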

    Read the article

  • Sharing Bandwidth and Prioritizing Realtime Traffic via HTB, Which Scenario Works Better?

    - by Mecki
    I would like to add some kind of traffic management to our Internet line. After reading a lot of documentation, I think HFSC is too complicated for me (I don't understand all the curves stuff, and I'm afraid I will never get it right), CBQ is not recommended, and basically HTB is the way to go for most people. Our internal network has three "segments" and I'd like to share bandwidth more or less equally between those (at least in the beginning). Further, I must prioritize traffic into at least three classes (realtime traffic, standard traffic, and bulk traffic). The bandwidth sharing is not as important as the fact that realtime traffic should always be treated as premium traffic whenever possible, but of course no other traffic class may starve either. The question is, which makes more sense and also guarantees better realtime throughput:

    1. Creating one class per segment, each having the same rate (priority doesn't matter for classes that are not leaves, according to the HTB developer), where each of these classes has three sub-classes (leaves) for the three priority levels (with different priorities and different rates).

    2. Having one class per priority level on top, each having a different rate (again, priority won't matter), where each has three sub-classes, one per segment, such that all three segment classes in the realtime class have the highest prio, those in the bulk class the lowest prio, and so on.

    I'll try to make this clearer with the following ASCII art:

        Case 1:

        root --+--> Segment A
               |        +--> High Prio
               |        +--> Normal Prio
               |        +--> Low Prio
               |
               +--> Segment B
               |        +--> High Prio
               |        +--> Normal Prio
               |        +--> Low Prio
               |
               +--> Segment C
                        +--> High Prio
                        +--> Normal Prio
                        +--> Low Prio

        Case 2:

        root --+--> High Prio
               |        +--> Segment A
               |        +--> Segment B
               |        +--> Segment C
               |
               +--> Normal Prio
               |        +--> Segment A
               |        +--> Segment B
               |        +--> Segment C
               |
               +--> Low Prio
                        +--> Segment A
                        +--> Segment B
                        +--> Segment C

    Case 1 seems like the way most people would do it, but unless I am misreading the HTB implementation details, Case 2 may offer better prioritizing. The HTB manual says that if a class has hit its rate, it may borrow from its parent, and that when borrowing, classes with higher priority get bandwidth offered first. However, it also says that classes having bandwidth available on a lower tree level are always preferred over those on a higher tree level, regardless of priority.

    Let's assume the following situation: Segment C is not sending any traffic. Segment A is only sending realtime traffic, as fast as it can (enough to saturate the link alone), and Segment B is only sending bulk traffic, as fast as it can (again, enough to saturate the full link alone). What will happen?

    Case 1: Segment A-High Prio and Segment B-Low Prio both have packets to send; since A-High Prio has the higher priority, it will always be scheduled first, until it hits its rate. Then it tries to borrow from Segment A, but since Segment A is on a higher level and Segment B-Low Prio has not yet hit its rate, that class is served first, until it also hits its rate and wants to borrow from Segment B. Once both have hit their rates, both are on the same level again, and now Segment A-High Prio wins again, until it hits the rate of Segment A. Then it tries to borrow from root (which has plenty of bandwidth spare, as Segment C is not using any of its guaranteed share), but again, it has to wait for Segment B-Low Prio to also reach the root level. Once that happens, priority is taken into account again, and this time Segment A-High Prio will get all the bandwidth left over from Segment C.

    Case 2: High Prio-Segment A and Low Prio-Segment B both have packets to send; again, High Prio-Segment A wins, as it has the higher priority. Once it hits its rate, it tries to borrow from High Prio, which has bandwidth spare, but being on a higher level, it has to wait for Low Prio-Segment B to also hit its rate. Once both have hit their rates and both have to borrow, High Prio-Segment A wins again, until it hits the rate of the High Prio class. Once that happens, it tries to borrow from root, which again has plenty of bandwidth left (all the bandwidth of Normal Prio is unused at the moment), but it has to wait until Low Prio-Segment B hits the rate limit of the Low Prio class and also tries to borrow from root. Finally, both classes try to borrow from root, priority is taken into account, and High Prio-Segment A gets all the bandwidth root has left over.

    Both cases seem sub-optimal, as either way realtime traffic sometimes has to wait for bulk traffic, even though there is plenty of bandwidth left that it could borrow. However, in Case 2 the realtime traffic seems to have to wait less than in Case 1, since it only has to wait until the bulk traffic rate is hit, which is most likely less than the rate of a whole segment (and in Case 1, that is the rate it has to wait for). Or am I totally wrong here?

    I thought about even simpler setups, using a priority qdisc. But priority queues have the big problem that they cause starvation if they are not limited somehow, and starvation is not acceptable. Of course, one can put a TBF (Token Bucket Filter) into each priority class to limit the rate and thus avoid starvation, but then a single priority class can no longer saturate the link on its own; even if all other priority classes are empty, the TBF will prevent that from happening. And this is also sub-optimal, since why shouldn't a class get 100% of the line's bandwidth if no other class needs any of it at the moment?

    Any comments or ideas regarding this setup? It seems so hard to do using standard tc qdiscs. As a programmer, it would be such an easy task if I could simply write my own scheduler (which I'm not allowed to do).
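    For concreteness, a sketch of how Case 2 might be expressed with tc, assuming a hypothetical 9mbit link on eth0 (the device, rates, and class ids are all made up; the structure mirrors the Case 2 diagram above):

        # root qdisc and one top class covering the whole link
        tc qdisc add dev eth0 root handle 1: htb default 30
        tc class add dev eth0 parent 1:  classid 1:1  htb rate 9mbit ceil 9mbit

        # one inner class per priority tier (prio is ignored on non-leaves)
        tc class add dev eth0 parent 1:1 classid 1:10 htb rate 3mbit ceil 9mbit
        tc class add dev eth0 parent 1:1 classid 1:20 htb rate 3mbit ceil 9mbit
        tc class add dev eth0 parent 1:1 classid 1:30 htb rate 3mbit ceil 9mbit

        # leaves: one class per segment inside the high-prio tier;
        # repeat similarly under 1:20 (prio 1) and 1:30 (prio 2)
        tc class add dev eth0 parent 1:10 classid 1:11 htb rate 1mbit ceil 9mbit prio 0
        tc class add dev eth0 parent 1:10 classid 1:12 htb rate 1mbit ceil 9mbit prio 0
        tc class add dev eth0 parent 1:10 classid 1:13 htb rate 1mbit ceil 9mbit prio 0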

    Read the article

  • How to get full control of umask/PAM/permissions?

    - by plua
    OUR SITUATION: Several people from our company log in to a server and upload files. They all need to be able to upload and overwrite the same files. They have different usernames, but are all part of the same group. However, this is an internet server, so the "other" users should have (in general) just read-only access. So what I want to have are these standard permissions:

        files: 664
        directories: 771

    My goal is that no user needs to worry about permissions. The server should be configured in such a way that these permissions apply to all files and directories, whether newly created, copied, or overwritten. Only when we need some special permissions would we change this manually. We upload files to the server by SFTP-ing in Nautilus, by mounting the server using sshfs and accessing it in Nautilus as if it were a local folder, and by SCP-ing on the command line. That basically covers our situation and what we aim to do. Now, I have read many things about the beautiful umask functionality. From what I understand, umask (together with PAM) should allow me to do exactly what I want: set standard permissions for new files and directories. However, after many, many hours of reading and trial-and-error, I still cannot get this to work; I get many unexpected results. I would really like to get a solid grasp of umask, and I have many unanswered questions. I will post these questions below, together with my findings and an explanation of the trials that led to them. Given that many things appear to go wrong, I think I am doing several things wrong, so there are many questions.

    NOTE: I am using Ubuntu 9.10 and therefore cannot change sshd_config to set the umask for the SFTP server (installed SSH: OpenSSH_5.1p1 Debian-6ubuntu2 < required OpenSSH 5.4p1). So here go the questions.

    1. DO I NEED TO RESTART FOR PAM CHANGES TO TAKE EFFECT?

    Let's start with this. There were so many files involved that I was unable to figure out what does and does not affect things, partly because I did not know whether I have to restart the whole system for PAM changes to take effect. I did do so after not seeing the expected results, but is this really necessary? Or can I just log out from the server and log back in, and should the new PAM policies be effective? Or is there some PAM program to reload?

    2. IS THERE ONE SINGLE FILE TO CHANGE THAT AFFECTS ALL USERS FOR ALL SESSIONS?

    So I ended up changing MANY files, as I read MANY different things. I ended up setting the umask in the following files:

        ~/.profile                  -> umask=0002
        ~/.bashrc                   -> umask=0002
        /etc/profile                -> umask=0002
        /etc/pam.d/common-session   -> umask=0002
        /etc/pam.d/sshd             -> umask=0002
        /etc/pam.d/login            -> umask=0002

    I want this change to apply to all users, so some sort of system-wide change would be best. Can it be achieved?

    3. AFTER ALL, DOES THIS UMASK THING WORK?

    So after changing the umask to 0002 in every possible place, I ran tests.
    TEST 1 (SCP):

        $ scp testfile server:/home/    # testfile has 777 permissions for testing purposes
        testfile                 100%    4     0.0KB/s   00:00

    Let's check permissions:

        user@server:/home$ ls -l
        total 4
        -rwx--x--x 1 user uploaders 4 2011-02-05 17:59 testfile      (711)

    TEST 2 (SSH):

        $ ssh server
        user@server:/home$ touch anotherfile
        user@server:/home$ ls -l
        total 4
        -rw-rw-r-- 1 user uploaders 0 2011-02-05 18:03 anotherfile   (664)

    TEST 3 (SFTP): in Nautilus, open sftp://server/home/ and copy and paste newfile from the client to the server (777 on the client):

        user@server:/home$ ls -l
        total 4
        -rwxrwxrwx 1 user uploaders 3 2011-02-05 18:05 newfile       (777)

    TEST 4: create a new file through Nautilus, then check the file permissions in a terminal:

        user@server:/home$ ls -l
        total 4
        -rw------- 1 user uploaders 0 2011-02-05 18:06 newfile       (600)

    I mean... WHAT just happened here?! We should get 664 every single time. Instead I get 711, 777, 600, and only once 664. And the 664 is only achieved when creating a new, blank file through SSH, which is the least probable scenario. So I am asking: does umask/PAM work after all?

    4. SO WHAT DOES IT MEAN TO UMASK SSHFS?

    Sometimes we mount the server locally, using sshfs. Very useful. But again, we have permission issues. Here is how we mount:

        sshfs -o idmap=user -o umask=0113 user@server:/home/ /mnt

    NOTE: we use umask=0113 because apparently sshfs starts from 777 instead of 666, so with 113 we get 664, which is the desired file permission. But what now happens is that we see all files and directories as if they were 664. We browse in Nautilus to /mnt and:

        Right click - New File (newfile)                   --- TEST 5
        Right click - New Folder (newfolder)               --- TEST 6
        Copy and paste a 777 file from our local client    --- TEST 7

    So let's check on the command line:

        user@client:/mnt$ ls -l
        total 8
        -rw-rw-r-- 1 user 1007    3 Feb  5 18:05 copyfile      (664)
        -rw-rw-r-- 1 user 1007    0 Feb  5 18:15 newfile       (664)
        drw-rw-r-- 1 user 1007 4096 Feb  5 18:15 newfolder     (664)

    But hey, let's check this same folder on the server side:

        user@server:/home$ ls -l
        total 8
        -rwxrwxrwx 1 user uploaders    3 2011-02-05 18:05 copyfile      (777)
        -rw------- 1 user uploaders    0 2011-02-05 18:15 newfile       (600)
        drwx--x--x 2 user uploaders 4096 2011-02-05 18:15 newfolder     (711)

    What?! The REAL file permissions are very different from what we see in Nautilus. So does this umask on sshfs just create a 'filter' that shows unreal file permissions? I also tried to open a file from another user in the same group that had real 600 permissions but 644 'fake' permissions, and I still could not read it, so what good is this filter?

    5. UMASK IS ALL ABOUT FILES. BUT WHAT ABOUT DIRECTORIES?

    From my tests I can see that the umask being applied also somehow influences the directory permissions. However, I want my files to be 664 (002) and my directories to be 771 (006). So is it possible to have a different umask for directories?

    6. PERHAPS UMASK/PAM IS REALLY COOL, BUT UBUNTU IS JUST BUGGY?

    On the one hand, I have read topics by people who have had success with PAM/umask on Ubuntu. On the other hand, I have found many older and newer bugs regarding umask/PAM/fuse on Ubuntu:

        https://bugs.launchpad.net/ubuntu/+source/gdm/+bug/241198
        https://bugs.launchpad.net/ubuntu/+source/fuse/+bug/239792
        https://bugs.launchpad.net/ubuntu/+source/pam/+bug/253096
        https://bugs.launchpad.net/ubuntu/+source/sudo/+bug/549172
        http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=314796

    So I do not know what to believe anymore. Should I just give up? Would ACLs solve all my problems?
    Or will I again have problems using Ubuntu? One word of caution with backups using tar: Red Hat/CentOS distributions support ACLs in the tar program, but Ubuntu does not support ACLs when backing up. This means that all ACLs will be lost when you create a backup. I am very willing to upgrade to Ubuntu 10.04 if that would solve my problems too, but first I want to understand what is happening.
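    For reference, the usual single place to set this on a PAM-based system is the pam_umask module; a sketch of the line follows (whether it covers every login path on Ubuntu 9.10 is exactly what this question is probing):

        # /etc/pam.d/common-session
        session optional pam_umask.so umask=0002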

    Read the article

  • urgent help needed to convert arabic html to pdf

    - by Mariam
    <div>
      <table border="1" width="500px">
        <tr>
          <td colspan="2">aspdotnetcodebook ????? ???????</td>
        </tr>
        <tr>
          <td>cell1</td>
          <td>cell2</td>
        </tr>
        <tr>
          <td colspan="2">
            <asp:Label ID="lblLabel" runat="server" Text=""></asp:Label>
            <img alt="" src="logo.gif" style="width: 174px; height: 40px" />
          </td>
        </tr>
        <tr>
          <td colspan="2" dir="rtl">
            <h1>
              <img alt="" height="168" src="http://a.cksource.com/c/1/inc/img/demo-little-red.jpg"
                   style="margin-left: 10px; margin-right: 10px; float: left;" width="120" />????? ????? ??? ??? ?? ?? ??</h1>
            <p>
              &quot;<b>Little Red Riding Hood</b>&quot; is a famous
              <a href="http://en.wikipedia.org/wiki/Fairy_tale" title="Fairy tale">fairy tale</a>
              about a young girl&#39;s encounter with a wolf. The story has been changed considerably
              in its history and subject to numerous modern adaptations and readings.</p>
            <table align="right" border="1" cellpadding="1" cellspacing="1" style="width: 200px;">
              <caption><strong>International Names</strong></caption>
              <tr>
                <td>????? ???????</td>
                <td>&nbsp;</td>
              </tr>
              <tr>
                <td>Italian</td>
                <td><i>Cappuccetto Rosso</i></td>
              </tr>
              <tr>
                <td>Spanish</td>
                <td><i>Caperucita Roja</i></td>
              </tr>
            </table>
            <p>
              The version most widely known today is based on the
              <a href="http://en.wikipedia.org/wiki/Brothers_Grimm" title="Brothers Grimm">Brothers Grimm</a>
              variant. It is about a girl called Little Red Riding Hood, after the red
              <a href="http://en.wikipedia.org/wiki/Hood_(headgear%2529" title="Hood (headgear)">hooded</a>
              <a href="http://en.wikipedia.org/wiki/Cape" title="Cape">cape</a> or
              <a href="http://en.wikipedia.org/wiki/Cloak" title="Cloak">cloak</a> she wears.
              The girl walks through the woods to deliver food to her sick grandmother.</p>
            <p>
              A wolf wants to eat the girl but is afraid to do so in public. He approaches the girl,
              and she naïvely tells him where she is going. He suggests the girl pick some flowers,
              which she does. In the meantime, he goes to the grandmother&#39;s house and gains entry
              by pretending to be the girl. He swallows the grandmother whole, and waits for the girl,
              disguised as the grandmother.</p>
            <p>
              When the girl arrives, she notices he looks very strange to be her grandma. In most
              retellings, this eventually culminates with Little Red Riding Hood saying,
              &quot;My, what big teeth you have!&quot;<br />
              To which the wolf replies, &quot;The better to eat you with,&quot; and swallows her whole, too.</p>
            <p>
              A <a href="http://en.wikipedia.org/wiki/Hunter" title="Hunter">hunter</a>, however, comes
              to the rescue and cuts the wolf open. Little Red Riding Hood and her grandmother emerge
              unharmed. They fill the wolf&#39;s body with heavy stones, which drown him when he falls
              into a well. Other versions of the story have had the grandmother shut in the closet
              instead of eaten, and some have Little Red Riding Hood saved by the hunter as the wolf
              advances on her rather than after she is eaten.</p>
            <p>
              The tale makes the clearest contrast between the safe world of the village and the dangers
              of the <a href="http://en.wikipedia.org/wiki/Enchanted_forest" title="Enchanted forest">forest</a>,
              conventional antitheses that are essentially medieval, though no written versions are as
              old as that.</p>
          </td>
        </tr>
      </table>
    </div>

    I use iTextSharp to convert this content, which is stored in the DB, into a PDF file to be downloaded to the user, but I can't achieve this.
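    As a pointer in the direction usually taken for Arabic with iTextSharp: right-to-left text generally only renders correctly through ColumnText with an embedded Unicode font; it does not work with the default fonts or plain paragraph flow. A sketch, assuming an already-open Document/PdfWriter pair named writer, and with a hypothetical font path:

        // Arabic needs an embedded Unicode font and an RTL run direction.
        BaseFont bf = BaseFont.CreateFont(@"C:\Windows\Fonts\arial.ttf",
                                          BaseFont.IDENTITY_H, BaseFont.EMBEDDED);
        Font font = new Font(bf, 12f);

        ColumnText ct = new ColumnText(writer.DirectContent);
        ct.RunDirection = PdfWriter.RUN_DIRECTION_RTL;
        ct.SetSimpleColumn(36f, 36f, 559f, 806f);
        ct.AddElement(new Paragraph("...Arabic text here...", font));
        ct.Go();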

    Read the article

  • Array help: Index out of range exception was unhandled

    - by Michael Quiles
    I am trying to populate combo boxes from a text file using comma as a delimiter everything was working fine, but now when I debug I get the "Index out of range exeption was unhandled" warning. I guess I need a fresh pair of eyes to see where I went wrong, I commented on the line that gets the error //Fname = fields[1]; using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Drawing.Printing; using System.Linq; using System.Text; using System.Windows.Forms; using System.IO; namespace Sullivan_Payroll { public partial class xEmpForm : Form { bool complete = false; public xEmpForm() { InitializeComponent(); } private void xEmpForm_Resize(object sender, EventArgs e) { this.xCenterPanel.Left = Convert.ToInt16((this.Width - this.xCenterPanel.Width) / 2); this.xCenterPanel.Top = Convert.ToInt16((this.Height - this.xCenterPanel.Height) / 2); Refresh(); } private void exitToolStripMenuItem_Click(object sender, EventArgs e) { //Exits the application this.Close(); } private void xEmpForm_FormClosing(object sender, FormClosingEventArgs e) //use this on xtrip calculator { DialogResult Response; if (complete == true) { Application.Exit(); } else { Response = MessageBox.Show("Are you sure you want to Exit?", "Exit", MessageBoxButtons.YesNo, MessageBoxIcon.Question, MessageBoxDefaultButton.Button2); if (Response == DialogResult.No) { complete = false; e.Cancel = true; } else { complete = true; Application.Exit(); } } } private void xEmpForm_Load(object sender, EventArgs e) { //file sources string fileDept = "source\\Department.txt"; string fileSex = "source\\Sex.txt"; string fileStatus = "source\\Status.txt"; if (File.Exists(fileDept)) { using (System.IO.StreamReader sr = System.IO.File.OpenText(fileDept)) { string dept = ""; while ((dept = sr.ReadLine()) != null) { this.xDeptComboBox.Items.Add(dept); } } } else { MessageBox.Show("The Department file can not be found.", "Error", MessageBoxButtons.OK, MessageBoxIcon.Error); } if (File.Exists(fileSex)) { using (System.IO.StreamReader sr = System.IO.File.OpenText(fileSex)) { string sex = ""; while ((sex = sr.ReadLine()) != null) { this.xSexComboBox.Items.Add(sex); } } } else { MessageBox.Show("The Sex file can not be found.", "Error", MessageBoxButtons.OK, MessageBoxIcon.Error); } if (File.Exists(fileStatus)) { using (System.IO.StreamReader sr = System.IO.File.OpenText(fileStatus)) { string status = ""; while ((status = sr.ReadLine()) != null) { this.xStatusComboBox.Items.Add(status); } } } else { MessageBox.Show("The Status file can not be found.", "Error", MessageBoxButtons.OK, MessageBoxIcon.Error); } } private void xFileSaveMenuItem_Click(object sender, EventArgs e) { { const string fileNew = "source\\New Staff.txt"; string recordIn; FileStream outFile = new FileStream(fileNew, FileMode.Create, FileAccess.Write); StreamWriter writer = new StreamWriter(outFile); for (int count = 0; count <= this.xEmployeeListBox.Items.Count - 1; count++) { this.xEmployeeListBox.SelectedIndex = count; recordIn = this.xEmployeeListBox.SelectedItem.ToString(); writer.WriteLine(recordIn); } writer.Close(); outFile.Close(); this.xDeptComboBox.SelectedIndex = -1; this.xStatusComboBox.SelectedIndex = -1; this.xSexComboBox.SelectedIndex = -1; MessageBox.Show("your file is saved"); } } private void xViewFacultyMenuItem_Click(object sender, EventArgs e) { const string fileStaff = "source\\Staff.txt"; const char DELIM = ','; string Lname, Fname, Depart, Stat, Sex, Salary, cDept, cStat, cSex; double 
Gtotal; string recordIn; string[] fields; cDept = this.xDeptComboBox.SelectedItem.ToString(); cStat = this.xStatusComboBox.SelectedItem.ToString(); cSex = this.xSexComboBox.SelectedItem.ToString(); FileStream inFile = new FileStream(fileStaff, FileMode.Open, FileAccess.Read); StreamReader reader = new StreamReader(inFile); recordIn = reader.ReadLine(); while (recordIn != null) { fields = recordIn.Split(DELIM); Lname = fields[0]; Fname = fields[1]; // this is where the error appears Depart = fields[2]; Stat = fields[3]; Sex = fields[4]; Salary = fields[5]; Fname = fields[1].TrimStart(null); Depart = fields[2].TrimStart(null); Stat = fields[3].TrimStart(null); Sex = fields[4].TrimStart(null); Salary = fields[5].TrimStart(null); Gtotal = double.Parse(Salary); if (Depart == cDept && cStat == Stat && cSex == Sex) { this.xEmployeeListBox.Items.Add(recordIn); } recordIn = reader.ReadLine(); } reader.Close(); inFile.Close(); if (this.xEmployeeListBox.Items.Count >= 1) { this.xFileSaveMenuItem.Enabled = true; this.xFilePrintMenuItem.Enabled = true; this.xEditClearMenuItem.Enabled = true; } else { this.xFileSaveMenuItem.Enabled = false; this.xFilePrintMenuItem.Enabled = false; this.xEditClearMenuItem.Enabled = false; MessageBox.Show("Records not found"); } } private void xEditClearMenuItem_Click(object sender, EventArgs e) { this.xEmployeeListBox.Items.Clear(); this.xDeptComboBox.SelectedIndex = -1; this.xStatusComboBox.SelectedIndex = -1; this.xSexComboBox.SelectedIndex = -1; this.xFileSaveMenuItem.Enabled = false; this.xFilePrintMenuItem.Enabled = false; this.xEditClearMenuItem.Enabled = false; } } } Source file -- Anderson, Kristen, Accounting, Assistant, Female, 43155 Ball, Robin, Accounting, Instructor, Female, 42723 Chin, Roger, Accounting, Full, Male,59281 Coats, William, Accounting, Assistant, Male, 45371 Doepke, Cheryl, Accounting, Full, Female, 52105 Downs, Clifton, Accounting, Associate, Male, 46887 Garafano, Karen, Finance, Associate, Female, 49000 Hill, Trevor, Management, Instructor, Male, 38590 Jackson, Carole, Accounting, Instructor, Female, 38781 Jacobson, Andrew, Management, Full, Male, 56281 Lewis, Karl, Management, Associate, Male, 48387 Mack, Kevin, Management, Assistant, Male, 45000 McKaye, Susan, Management, Instructor, Female, 43979 Nelsen, Beth, Finance, Full, Female, 52339 Nelson, Dale, Accounting, Full, Male, 54578 Palermo, Sheryl, Accounting, Associate, Female, 45617 Rais, Mary, Finance, Instructor, Female, 27000 Scheib, Earl, Management, Instructor, Male, 37389 Smith, Tom, Finance, Full, Male, 57167 Smythe, Janice, Management, Associate, Female, 46887 True, David, Accounting, Full, Male, 53181 Young, Jeff, Management, Assistant, Male, 43513
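    One likely culprit, offered as a guess: recordIn.Split(DELIM) only yields six fields for well-formed lines, so a trailing blank line (or any short line) in Staff.txt makes fields[1] throw. A defensive guard around the parsing loop would look something like this:

        recordIn = reader.ReadLine();
        while (recordIn != null)
        {
            string[] fields = recordIn.Split(DELIM);

            // Skip blank or malformed lines instead of indexing past the end.
            if (fields.Length < 6)
            {
                recordIn = reader.ReadLine();
                continue;
            }

            // ... existing field assignments and filtering ...

            recordIn = reader.ReadLine();
        }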

    Read the article

  • SQL SERVER – Mirroring Configured Without Domain – The server network address TCP://SQLServerName:50

    - by pinaldave
    Regular readers of my blog will be aware of my friend who called me a few days ago with a very funny SQL problem: SQL SERVER – SSMS Query Command(s) completed successfully without ANY Results. This time, it did not take long before he called me up with another interesting problem. Although the issue he was facing this time was not that interesting and was very specific to him, he insisted that I share it with all of you. Let us first understand his situation. My friend is preparing for the DBA exam Exam 70-450: PRO: Designing, Optimizing and Maintaining a Database Server Infrastructure using Microsoft SQL Server 2008, and for the same, he was trying to set up mirroring on his local laptop. He had installed two different instances of SQL Server on his computer, and every time he started the mirroring, it failed with a common error message: The server network address "TCP://SQLServer:5023" cannot be reached or does not exist. Check the network address name and that the ports for the local and remote endpoints are operational. (Microsoft SQL Server, Error: 1418). Well, before he contacted me, he searched online and checked my article written on this mirroring error. However, he tried all four suggestions, but they did not solve his problem. He called me at a reasonable time in the late evening (unlike last time, which was midnight!). I even tried all seven different suggestions myself, as previously proposed in my article; however, none of them worked. While looking closely at the services, I noticed something very simple: he was running all the instances as 'Network Service'. In fact, his computer was a stand-alone computer. There was no network at all. Also, there was no domain or any other advanced network setup. I just changed the services' log-on account from 'Network Service' to 'Local System', as his SQL Server was running on his local system and there were no network services. This required restarting the services. As this was his development machine and not a production server, we restarted the services on the laptop (do not restart services on a production server without proper planning). After changing the log-on account to Local System, when he attempted to reconfigure the mirroring, it worked right away. Since production servers usually have proper domains configured and advanced network concepts implemented, I had never faced this type of problem before. My friend insisted that I post this solution to his situation, wherein there was no domain configured and setting up mirroring was throwing an error. According to him, this is bound to help people like him who are preparing for certification using a single system. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Certifications, SQL Mirroring

    Read the article

  • Sending mail with Gmail Account using System.Net.Mail in ASP.NET

    - by Jalpesh P. Vadgama
    Any web application is incomplete without mail functionality; sooner or later you will have to write code that sends email. For example, in a shopping cart application, when an order is created you need to send an email to the administrator of the website as an order notification, and an email receipt of the order to the customer. So no web application is complete without sending email. This post is all about sending email: I will explain how we can send emails from a Gmail account without purchasing an SMTP server. There are some limitations when sending email from a Gmail account. Please note the following:

    • Gmail has a fixed quota for sending emails per day, so you cannot send more than that number of emails in a day.
    • Your 'from' email address will always be the account email address you are using to send the email.
    • You cannot send an email to an unlimited number of people; Gmail's anti-spamming policy restricts this.
    • Gmail provides both POP and SMTP settings; both should be active in the account you are testing with. You can enable them by clicking the Settings link in your Gmail account and going to Forwarding and POP/IMAP.

    So if you are using mail functionality for a limited number of emails, then Gmail is the best option. But if you are sending thousands of emails daily, it will not be a good idea. Here is the code for sending mail from a Gmail account:

        using System.Net.Mail;

        namespace Experiement
        {
            public partial class WebForm1 : System.Web.UI.Page
            {
                protected void Page_Load(object sender, System.EventArgs e)
                {
                    MailMessage mailMessage = new MailMessage(
                        new MailAddress("[email protected]"),
                        new MailAddress("[email protected]"));
                    mailMessage.Subject = "Sending mail through gmail account";
                    mailMessage.IsBodyHtml = true;
                    mailMessage.Body = "<B>Sending mail through gmail from asp.net</B>";

                    System.Net.NetworkCredential networkCredentials =
                        new System.Net.NetworkCredential("[email protected]", "yourpassword");

                    SmtpClient smtpClient = new SmtpClient();
                    smtpClient.EnableSsl = true;
                    smtpClient.UseDefaultCredentials = false;
                    smtpClient.Credentials = networkCredentials;
                    smtpClient.Host = "smtp.gmail.com";
                    smtpClient.Port = 587;
                    smtpClient.Send(mailMessage);

                    Response.Write("Mail Successfully sent");
                }
            }
        }

    Now run this application, and you will receive the message in your account. Technorati Tags: Gmail, System.Net.Mail, ASP.NET

    Read the article

  • SQL Authority News – Download Microsoft SQL Server 2014 Feature Pack and Microsoft SQL Server Developer’s Edition

    - by Pinal Dave
    Yesterday I attended the SQL Server Community Launch in Bangalore and presented on performing an effective presentation. It was a fun presentation and people received it very well. No matter what subject I present on, I always end up talking about SQL. Here are two of the questions I received during the event. Q1) I want to install SQL Server on my development server; where can I get it for free or at an economical price (I do not have MSDN)? A1) If you are not going to use your server in a production environment, you can just get SQL Server Developer's Edition, and you can read more about it over here. Here is another favorite question which I keep receiving at events. Q2) I already have SQL Server installed on my machine. Which feature packs should I install, and where can I get them? A2) Just download and install the Microsoft SQL Server 2014 Feature Pack. Here is the link for downloading it. The Microsoft SQL Server 2014 Feature Pack is a collection of stand-alone packages which provide additional value for Microsoft SQL Server. It includes tools and components for Microsoft SQL Server 2014 and add-on providers for Microsoft SQL Server 2014. Here is the list of components this product contains:

    • Microsoft SQL Server Backup to Windows Azure Tool
    • Microsoft SQL Server Cloud Adapter
    • Microsoft Kerberos Configuration Manager for Microsoft SQL Server
    • Microsoft SQL Server 2014 Semantic Language Statistics
    • Microsoft SQL Server Data-Tier Application Framework
    • Microsoft SQL Server 2014 Transact-SQL Language Service
    • Microsoft Windows PowerShell Extensions for Microsoft SQL Server 2014
    • Microsoft SQL Server 2014 Shared Management Objects
    • Microsoft Command Line Utilities 11 for Microsoft SQL Server
    • Microsoft ODBC Driver 11 for Microsoft SQL Server – Windows
    • Microsoft JDBC Driver 4.0 for Microsoft SQL Server
    • Microsoft Drivers 3.0 for PHP for Microsoft SQL Server
    • Microsoft SQL Server 2014 Transact-SQL ScriptDom
    • Microsoft SQL Server 2014 Transact-SQL Compiler Service
    • Microsoft System CLR Types for Microsoft SQL Server 2014
    • Microsoft SQL Server 2014 Remote Blob Store (see the SQL RBS codeplex samples page and the SQL Server Remote Blob Store blogs)
    • Microsoft SQL Server Service Broker External Activator for Microsoft SQL Server 2014
    • Microsoft OData Source for Microsoft SQL Server 2014
    • Microsoft Balanced Data Distributor for Microsoft SQL Server 2014
    • Microsoft Change Data Capture Designer and Service for Oracle by Attunity for Microsoft SQL Server 2014
    • Microsoft SQL Server 2014 Master Data Service Add-in for Microsoft Excel
    • Microsoft SQL Server StreamInsight
    • Microsoft Connector for SAP BW for Microsoft SQL Server 2014
    • Microsoft SQL Server Migration Assistant
    • Microsoft SQL Server 2014 Upgrade Advisor
    • Microsoft OLEDB Provider for DB2 v5.0 for Microsoft SQL Server 2014
    • Microsoft SQL Server 2014 PowerPivot for Microsoft SharePoint 2013
    • Microsoft SQL Server 2014 ADOMD.NET
    • Microsoft Analysis Services OLE DB Provider for Microsoft SQL Server 2014
    • Microsoft SQL Server 2014 Analysis Management Objects
    • Microsoft SQL Server Report Builder for Microsoft SQL Server 2014
    • Microsoft SQL Server 2014 Reporting Services Add-in for Microsoft SharePoint

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL

    Read the article

  • Special Activities in the OTN Lounge

    - by Bob Rhubart
    What is the OTN Lounge? It's the place for Oracle OpenWorld and JavaOne attendees to hang out, get off your feet, rest up between sessions, recharge your laptop, tablet, or phone, connect with other community members, pick the brains of subject matter experts and community leaders, enjoy some refreshments (coffee and soft drinks in the morning, beer in the afternoon), and avoid the crowds by watching keynote presentations on a plasma screen. But in addition to general chillaxin', the OTN Lounge also hosts several special activities throughout the week…

    OTN Lounge Special Activities

    Sunday
    • Oracle Social Network Developer Challenge Kick-off (7:00pm - 8:30pm): Want to learn more about Oracle Social Network? Love working with APIs? Enter the Oracle Social Network Developer Challenge and build your dream integration with Oracle's secure, purposeful social network for business. Demonstrate your skills, work with the latest and greatest, and compete for $500 in Amazon gift cards. Go to theappslab.com/osnregisterr, read and agree to the terms and rules, register yourself with your name, corporate email address, and company, and watch your inbox for a confirmation email from Oracle Social Network. Then start coding (individuals or teams welcome) and show off your work to the judges in the OTN Lounge, Wednesday, 4:00pm - 6:00pm.

    Monday (Lounge hours: 8:00am - 7:00pm)
    • RAC Attack (9:00am - 1:00pm): Learn about Oracle Real Application Clusters (RAC) in this collaborative event. You'll work with experts from the IOUG RAC SIG to get an Oracle Database 11gR2 RAC cluster running inside a virtual machine. For more information: RAC attack at Oracle Open World (Pythian Blog); RAC Attack - Oracle Cluster Database at Home/Events (WikiBooks).
    • Oracle Social Network Developer Challenge Office Hours (4:00pm - 8:00pm): Meet the people behind Oracle Social Network.

    Tuesday (Lounge hours: 8:00am - 7:00pm)
    • RAC Attack (9:00am - 1:00pm)
    • Oracle Social Network Developer Challenge Office Hours (4:30pm - 8:00pm)
    • Oracle Database / Oracle Fusion Middleware Tweet Meet (4:30pm - 6:00pm): Free as in beer! Oracle Database and Oracle Fusion Middleware tweeters, gather in the OTN Lounge for refreshments and conversation with fellow tweeters and Oracle Database and Middleware experts.

    Wednesday (Lounge hours: 8:00am - 6:00pm)
    • RAC Attack (9:00am - 1:00pm)
    • Oracle Social Network Developer Challenge Judging (4:00pm - 6:00pm)
    • Oracle ADF / Oracle Fusion Middleware Meet-up (4:30pm - 5:30pm): Join other Oracle ADF and Oracle Fusion Middleware developers and meet the product managers and engineers behind Oracle ADF, ADF Mobile, and ADF Essentials. Did we mention free beer?

    Thursday (Lounge hours: 8:00am - 2:00pm)
    • RAC Attack (9:00am - 1:00pm)

    The OTN Lounge is located in the Howard St. tent, located by no small coincidence on Howard St. between 3rd and 4th, directly between Moscone North and Moscone South. An Oracle OpenWorld or JavaOne conference badge is required for access to the OTN Lounge.

    Read the article

  • jQuery Time Entry with Time Navigation Keys

    - by Rick Strahl
    So, how do you display time values in your Web applications? Displaying date AND time values in applications is lot less standardized than date display only. While date input has become fairly universal with various date picker controls available, time entry continues to be a bit of a non-standardized. In my own applications I tend to use the jQuery UI DatePicker control for date entries and it works well for that. Here's an example: The date entry portion is well defined and it makes perfect sense to have a calendar pop up so you can pick a date from a rich UI when necessary. However, time values are much less obvious when it comes to displaying a UI or even just making time entries more useful. There are a slew of time picker controls available but other than adding some visual glitz, they are not really making time entry any easier. Part of the reason for this is that time entry is usually pretty simple. Clicking on a dropdown of any sort and selecting a value from a long scrolling list tends to take more user interaction than just typing 5 characters (7 if am/pm is used). Keystrokes can make Time Entry easier Time entry maybe pretty simple, but I find that adding a few hotkeys to handle date navigation can make it much easier. Specifically it'd be nice to have keys to: Jump to the current time (Now) Increase/decrease minutes Increase/decrease hours The timeKeys jQuery PlugIn Some time ago I created a small plugin to handle this scenario. It's non-visual other than tooltip that pops up when you press ? to display the hotkeys that are available: Try it Online The keys loosely follow the ancient Quicken convention of using the first and last letters of what you're increasing decreasing (ie. H to decrease, R to increase hours and + and - for the base unit or minutes here). All navigation happens via the keystrokes shown above, so it's all non-visual, which I think is the most efficient way to deal with dates. To hook up the plug-in, start with the textbox:<input type="text" id="txtTime" name="txtTime" value="12:05 pm" title="press ? for time options" /> Note the title which might be useful to alert people using the field that additional functionality is available. To hook up the plugin code is as simple as:$("#txtTime").timeKeys(); You essentially tie the plugin to any text box control. OptionsThe syntax for timeKeys allows for an options map parameter:$(selector).timeKeys(options); Options are passed as a parameter map object which can have the following properties: timeFormatYou can pass in a format string that allows you to format the date. The default is "hh:mm t" which is US time format that shows a 12 hour clock with am/pm. Alternately you can pass in "HH:mm" which uses 24 hour time. HH, hh, mm and t are translated in the format string - you can arrange the format as you see fit. callbackYou can also specify a callback function that is called when the date value has been set. This allows you to either re-format the date or perform post processing (such as displaying highlight if it's after a certain hour for example). Here's another example that uses both options:$("#txtTime").timeKeys({ timeFormat: "HH:mm", callback: function (time) { showStatus("new time is: " + time.toString() + " " + $(this).val() ); } }); The plugin code itself is fairly simple. It hooks the keydown event and checks for the various keys that affect time navigation which is straight forward. 
The bulk of the code however deals with parsing the time value and formatting the output using a Time class that implements parsing, formatting and time navigation methods. Here's the code for the timeKeys jQuery plug-in:/// <reference path="jquery.js" /> /// <reference path="ww.jquery.js" /> (function ($) { $.fn.timeKeys = function (options) { /// <summary> /// Attaches a set of hotkeys to time fields /// + Add minute - subtract minute /// H Subtract Hour R Add houR /// ? Show keys /// </summary> /// <param name="options" type="object"> /// Options: /// timeFormat: "hh:mm t" by default HH:mm alternate /// callback: callback handler after time assignment /// </param> /// <example> /// var proxy = new ServiceProxy("JsonStockService.svc/"); /// proxy.invoke("GetStockQuote",{symbol:"msft"},function(quote) { alert(result.LastPrice); },onPageError); ///</example> if (this.length < 1) return this; var opt = { timeFormat: "hh:mm t", callback: null } $.extend(opt, options); return this.keydown(function (e) { var $el = $(this); var time = new Time($el.val()); //alert($(this).val() + " " + time.toString() + " " + time.date.toString()); switch (e.keyCode) { case 78: // [N]ow time = new Time(new Date()); break; case 109: case 189: // - time.addMinutes(-1); break; case 107: case 187: // + time.addMinutes(1); break; case 72: //H time.addHours(-1); break; case 82: //R time.addHours(1); break; case 191: // ? if (e.shiftKey) $(this).tooltip("<b>N</b> Now<br/><b>+</b> add minute<br /><b>-</b> subtract minute<br /><b>H</b> Subtract Hour<br /><b>R</b> add hour", 4000, { isHtml: true }); return false; default: return true; } $el.val(time.toString(opt.timeFormat)); if (opt.callback) { // call async and set context in this element setTimeout(function () { opt.callback.call($el.get(0), time) }, 1); } return false; }); } Time = function (time, format) { /// <summary> /// Time object that can parse and format /// a time values. /// </summary> /// <param name="time" type="object"> /// A time value as a string (12:15pm or 23:01), a Date object /// or time value. /// /// </param> /// <param name="format" type="string"> /// Time format string: /// HH:mm (23:01) /// hh:mm t (11:01 pm) /// </param> /// <example> /// var time = new Time( new Date()); /// time.addHours(5); /// time.addMinutes(10); /// var s = time.toString(); /// /// var time2 = new Time(s); // parse with constructor /// var t = time2.parse("10:15 pm"); // parse with .parse() method /// alert( t.hours + " " + t.mins + " " + t.ampm + " " + t.hours25) ///</example> var _I = this; this.date = new Date(); this.timeFormat = "hh:mm t"; if (format) this.timeFormat = format; this.parse = function (time) { /// <summary> /// Parses time value from a Date object, or string in format of: /// 12:12pm or 23:01 /// </summary> /// <param name="time" type="any"> /// A time value as a string (12:15pm or 23:01), a Date object /// or time value. 
/// /// </param> if (!time) return null; // Date if (time.getDate) { var t = {}; var d = time; t.hours24 = d.getHours(); t.mins = d.getMinutes(); t.ampm = "am"; if (t.hours24 > 11) { t.ampm = "pm"; if (t.hours24 > 12) t.hours = t.hours24 - 12; } time = t; } if (typeof (time) == "string") { var parts = time.split(":"); if (parts < 2) return null; var time = {}; time.hours = parts[0] * 1; time.hours24 = time.hours; time.mins = parts[1].toLowerCase(); if (time.mins.indexOf("am") > -1) { time.ampm = "am"; time.mins = time.mins.replace("am", ""); if (time.hours == 12) time.hours24 = 0; } else if (time.mins.indexOf("pm") > -1) { time.ampm = "pm"; time.mins = time.mins.replace("pm", ""); if (time.hours < 12) time.hours24 = time.hours + 12; } time.mins = time.mins * 1; } _I.date.setMinutes(time.mins); _I.date.setHours(time.hours24); return time; }; this.addMinutes = function (mins) { /// <summary> /// adds minutes to the internally stored time value. /// </summary> /// <param name="mins" type="number"> /// number of minutes to add to the date /// </param> _I.date.setMinutes(_I.date.getMinutes() + mins); } this.addHours = function (hours) { /// <summary> /// adds hours the internally stored time value. /// </summary> /// <param name="hours" type="number"> /// number of hours to add to the date /// </param> _I.date.setHours(_I.date.getHours() + hours); } this.getTime = function () { /// <summary> /// returns a time structure from the currently /// stored time value. /// Properties: hours, hours24, mins, ampm /// </summary> return new Time(new Date()); h } this.toString = function (format) { /// <summary> /// returns a short time string for the internal date /// formats: 12:12 pm or 23:12 /// </summary> /// <param name="format" type="string"> /// optional format string for date /// HH:mm, hh:mm t /// </param> if (!format) format = _I.timeFormat; var hours = _I.date.getHours(); if (format.indexOf("t") > -1) { if (hours > 11) format = format.replace("t", "pm") else format = format.replace("t", "am") } if (format.indexOf("HH") > -1) format = format.replace("HH", hours.toString().padL(2, "0")); if (format.indexOf("hh") > -1) { if (hours > 12) hours -= 12; if (hours == 0) hours = 12; format = format.replace("hh", hours.toString().padL(2, "0")); } if (format.indexOf("mm") > -1) format = format.replace("mm", _I.date.getMinutes().toString().padL(2, "0")); return format; } // construction if (time) this.time = this.parse(time); } String.prototype.padL = function (width, pad) { if (!width || width < 1) return this; if (!pad) pad = " "; var length = width - this.length if (length < 1) return this.substr(0, width); return (String.repeat(pad, length) + this).substr(0, width); } String.repeat = function (chr, count) { var str = ""; for (var x = 0; x < count; x++) { str += chr }; return str; } })(jQuery); The plugin consists of the actual plugin and the Time class which handles parsing and formatting of the time value via the .parse() and .toString() methods. Code like this always ends up taking up more effort than the actual logic unfortunately. There are libraries out there that can handle this like datejs or even ww.jquery.js (which is what I use) but to keep the code self contained for this post the plugin doesn't rely on external code. There's one optional exception: The code as is has one dependency on ww.jquery.js  for the tooltip plugin that provides the small popup for all the hotkeys available. 
You can replace that code with some other mechanism to display hotkeys or simply remove it since that behavior is optional. While we're at it: A jQuery dateKeys plugIn Although date entry tends to be much better served with drop down calendars to pick dates from, often it's also easier to pick dates using a few simple hotkeys. Navigation that uses + - for days and M and H for MontH navigation, Y and R for YeaR navigation are a quick way to enter dates without having to resort to using a mouse and clicking around to what you want to find. Note that this plugin does have a dependency on ww.jquery.js for the date formatting functionality.$.fn.dateKeys = function (options) { /// <summary> /// Attaches a set of hotkeys to date 'fields' /// + Add day - subtract day /// M Subtract Month H Add montH /// Y Subtract Year R Add yeaR /// ? Show keys /// </summary> /// <param name="options" type="object"> /// Options: /// dateFormat: "MM/dd/yyyy" by default "MMM dd, yyyy /// callback: callback handler after date assignment /// </param> /// <example> /// var proxy = new ServiceProxy("JsonStockService.svc/"); /// proxy.invoke("GetStockQuote",{symbol:"msft"},function(quote) { alert(result.LastPrice); },onPageError); ///</example> if (this.length < 1) return this; var opt = { dateFormat: "MM/dd/yyyy", callback: null }; $.extend(opt, options); return this.keydown(function (e) { var $el = $(this); var d = new Date($el.val()); if (!d) d = new Date(1900, 0, 1, 1, 1); var month = d.getMonth(); var year = d.getFullYear(); var day = d.getDate(); switch (e.keyCode) { case 84: // [T]oday d = new Date(); break; case 109: case 189: d = new Date(year, month, day - 1); break; case 107: case 187: d = new Date(year, month, day + 1); break; case 77: //M d = new Date(year, month - 1, day); break; case 72: //H d = new Date(year, month + 1, day); break; case 191: // ? if (e.shiftKey) $el.tooltip("<b>T</b> Today<br/><b>+</b> add day<br /><b>-</b> subtract day<br /><b>M</b> subtract Month<br /><b>H</b> add montH<br/><b>Y</b> subtract Year<br/><b>R</b> add yeaR", 5000, { isHtml: true }); return false; default: return true; } $el.val(d.formatDate(opt.dateFormat)); if (opt.callback) // call async setTimeout(function () { opt.callback.call($el.get(0),d); }, 10); return false; }); } The logic for this plugin is similar to the timeKeys plugin, but it's a little simpler as it tries to directly parse the date value from a string via new Date(inputString). As mentioned it also uses a helper function from ww.jquery.js to format dates which removes the logic to perform date formatting manually which again reduces the size of the code. And the Key is… I've been using both of these plugins in combination with the jQuery UI datepicker for datetime values and I've found that I rarely actually pop up the date picker any more. It's just so much more efficient to use the hotkeys to navigate dates. It's still nice to have the picker around though - it provides the expected behavior for date entry. For time values however I can't justify the UI overhead of a picker that doesn't make it any easier to pick a time. Most people know how to type in a time value and if they want shortcuts keystrokes easily beat out any pop up UI. Hopefully you'll find this as useful as I have found it for my code. 
    Resources: Online Sample | Download Sample Project

    © Rick Strahl, West Wind Technologies, 2005-2011
    Posted in jQuery, HTML

    Read the article

  • Using Durandal to Create Single Page Apps

    - by Stephen.Walther
    A few days ago, I gave a talk on building Single Page Apps on the Microsoft Stack. In that talk, I recommended that people use Knockout, Sammy, and RequireJS to build their presentation layer and use the ASP.NET Web API to expose data from their server. After I gave the talk, several people contacted me and suggested that I investigate a new open-source JavaScript library named Durandal. Durandal stitches together Knockout, Sammy, and RequireJS to make it easier to use these technologies together. In this blog entry, I want to provide a brief walkthrough of using Durandal to create a simple Single Page App. I am going to demonstrate how you can create a simple Movies App which contains (virtual) pages for viewing a list of movies, adding new movies, and viewing movie details. The goal of this blog entry is to give you a sense of what it is like to build apps with Durandal. Installing Durandal First things first. How do you get Durandal? The GitHub project for Durandal is located here: https://github.com/BlueSpire/Durandal The Wiki — located at the GitHub project — contains all of the current documentation for Durandal. Currently, the documentation is a little sparse, but it is enough to get you started. Instead of downloading the Durandal source from GitHub, a better option for getting started with Durandal is to install one of the Durandal NuGet packages. I built the Movies App described in this blog entry by first creating a new ASP.NET MVC 4 Web Application with the Basic Template. Next, I executed the following command from the Package Manager Console: Install-Package Durandal.StarterKit As you can see from the screenshot of the Package Manager Console above, the Durandal Starter Kit package has several dependencies including: · jQuery · Knockout · Sammy · Twitter Bootstrap The Durandal Starter Kit package includes a sample Durandal application. You can get to the Starter Kit app by navigating to the Durandal controller. Unfortunately, when I first tried to run the Starter Kit app, I got an error because the Starter Kit is hard-coded to use a particular version of jQuery which is already out of date. You can fix this issue by modifying the App_Start\DurandalBundleConfig.cs file so it is jQuery version agnostic like this: bundles.Add( new ScriptBundle("~/scripts/vendor") .Include("~/Scripts/jquery-{version}.js") .Include("~/Scripts/knockout-{version}.js") .Include("~/Scripts/sammy-{version}.js") // .Include("~/Scripts/jquery-1.9.0.min.js") // .Include("~/Scripts/knockout-2.2.1.js") // .Include("~/Scripts/sammy-0.7.4.min.js") .Include("~/Scripts/bootstrap.min.js") ); The recommendation is that you create a Durandal app in a folder off your project root named App. The App folder in the Starter Kit contains the following subfolders and files: · durandal – This folder contains the actual durandal JavaScript library. · viewmodels – This folder contains all of your application’s view models. · views – This folder contains all of your application’s views. · main.js — This file contains all of the JavaScript startup code for your app including the client-side routing configuration. · main-built.js – This file contains an optimized version of your application. You need to build this file by using the RequireJS optimizer (unfortunately, before you can run the optimizer, you must first install NodeJS). 
For the purpose of this blog entry, I wanted to start from scratch when building the Movies app, so I deleted all of these files and folders except for the durandal folder which contains the durandal library. Creating the ASP.NET MVC Controller and View A Durandal app is built using a single server-side ASP.NET MVC controller and ASP.NET MVC view. A Durandal app is a Single Page App. When you navigate between pages, you are not navigating to new pages on the server. Instead, you are loading new virtual pages into the one-and-only-one server-side view. For the Movies app, I created the following ASP.NET MVC Home controller: public class HomeController : Controller { public ActionResult Index() { return View(); } } There is nothing special about the Home controller – it is as basic as it gets. Next, I created the following server-side ASP.NET view. This is the one-and-only server-side view used by the Movies app: @{ Layout = null; } <!DOCTYPE html> <html> <head> <title>Index</title> </head> <body> <div id="applicationHost"> Loading app.... </div> @Scripts.Render("~/scripts/vendor") <script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script> </body> </html> Notice that I set the Layout property for the view to the value null. If you neglect to do this, then the default ASP.NET MVC layout will be applied to the view and you will get the <!DOCTYPE> and opening and closing <html> tags twice. Next, notice that the view contains a DIV element with the Id applicationHost. This marks the area where virtual pages are loaded. When you navigate from page to page in a Durandal app, HTML page fragments are retrieved from the server and stuck in the applicationHost DIV element. Inside the applicationHost element, you can place any content which you want to display when a Durandal app is starting up. For example, you can create a fancy splash screen. I opted for simply displaying the text “Loading app…”: Next, notice the view above includes a call to the Scripts.Render() helper. This helper renders out all of the JavaScript files required by the Durandal library such as jQuery and Knockout. Remember to fix the App_Start\DurandalBundleConfig.cs as described above or Durandal will attempt to load an old version of jQuery and throw a JavaScript exception and stop working. Your application JavaScript code is not included in the scripts rendered by the Scripts.Render helper. Your application code is loaded dynamically by RequireJS with the help of the following SCRIPT element located at the bottom of the view: <script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script> The data-main attribute on the SCRIPT element causes RequireJS to load your /app/main.js JavaScript file to kick-off your Durandal app. Creating the Durandal Main.js File The Durandal Main.js JavaScript file, located in your App folder, contains all of the code required to configure the behavior of Durandal. Here’s what the Main.js file looks like in the case of the Movies app: require.config({ paths: { 'text': 'durandal/amd/text' } }); define(function (require) { var app = require('durandal/app'), viewLocator = require('durandal/viewLocator'), system = require('durandal/system'), router = require('durandal/plugins/router'); //>>excludeStart("build", true); system.debug(true); //>>excludeEnd("build"); app.start().then(function () { //Replace 'viewmodels' in the moduleId with 'views' to locate the view. //Look for partial views in a 'views' folder in the root. 
viewLocator.useConvention(); //configure routing router.useConvention(); router.mapNav("movies/show"); router.mapNav("movies/add"); router.mapNav("movies/details/:id"); app.adaptToDevice(); //Show the app by setting the root view model for our application with a transition. app.setRoot('viewmodels/shell', 'entrance'); }); }); There are three important things to notice about the main.js file above. First, notice that it contains a section which enables debugging, which looks like this: //>>excludeStart("build", true); system.debug(true); //>>excludeEnd("build"); This code enables debugging for your Durandal app, which is very useful when things go wrong. When you call system.debug(true), Durandal writes out debugging information to your browser JavaScript console. For example, you can use the debugging information to diagnose issues with your client-side routes. (The funny looking //> symbols around the system.debug() call are RequireJS optimizer pragmas.) The main.js file is also the place where you configure your client-side routes. In the case of the Movies app, the main.js file is used to configure routes for three pages: the movies show, add, and details pages. //configure routing router.useConvention(); router.mapNav("movies/show"); router.mapNav("movies/add"); router.mapNav("movies/details/:id"); The route for movie details includes a route parameter named id. Later, we will use the id parameter to look up and display the details for the right movie. Finally, the main.js file above contains the following line of code: //Show the app by setting the root view model for our application with a transition. app.setRoot('viewmodels/shell', 'entrance'); This line of code causes Durandal to load up a JavaScript file named shell.js and an HTML fragment named shell.html. I'll discuss the shell in the next section. Creating the Durandal Shell You can think of the Durandal shell as the layout or master page for a Durandal app. The shell is where you put all of the content which you want to remain constant as a user navigates from virtual page to virtual page. For example, the shell is a great place to put your website logo and navigation links. The Durandal shell is composed of two parts: a JavaScript file and an HTML file. Here's what the HTML file looks like for the Movies app: <h1>Movies App</h1> <div class="container-fluid page-host"> <!--ko compose: { model: router.activeItem, //wiring the router afterCompose: router.afterCompose, //wiring the router transition:'entrance', //use the 'entrance' transition when switching views cacheViews:true //telling composition to keep views in the dom, and reuse them (only a good idea with singleton view models) }--><!--/ko--> </div> And here is what the JavaScript file looks like: define(function (require) { var router = require('durandal/plugins/router'); return { router: router, activate: function () { return router.activate('movies/show'); } }; }); The JavaScript file contains the view model for the shell. This view model returns the Durandal router so you can access the list of configured routes from your shell. Notice that the JavaScript file includes a function named activate(). This function loads the movies/show page as the first page in the Movies app. If you want a different default Durandal page, then pass the name of a different page to the router.activate() method. Creating the Movies Show Page Durandal pages are created out of a view model and a view. The view model contains all of the data and view logic required for the view.
The view contains all of the HTML markup for rendering the view model. Let’s start with the movies show page. The movies show page displays a list of movies. The view model for the show page looks like this: define(function (require) { var moviesRepository = require("repositories/moviesRepository"); return { movies: ko.observable(), activate: function() { this.movies(moviesRepository.listMovies()); } }; }); You create a view model by defining a new RequireJS module (see http://requirejs.org). You create a RequireJS module by placing all of your JavaScript code into an anonymous function passed to the RequireJS define() method. A RequireJS module has two parts. You retrieve all of the modules which your module requires at the top of your module. The code above depends on another RequireJS module named repositories/moviesRepository. Next, you return the implementation of your module. The code above returns a JavaScript object which contains a property named movies and a method named activate. The activate() method is a magic method which Durandal calls whenever it activates your view model. Your view model is activated whenever you navigate to a page which uses it. In the code above, the activate() method is used to get the list of movies from the movies repository and assign the list to the view model movies property. The HTML for the movies show page looks like this: <table> <thead> <tr> <th>Title</th><th>Director</th> </tr> </thead> <tbody data-bind="foreach:movies"> <tr> <td data-bind="text:title"></td> <td data-bind="text:director"></td> <td><a data-bind="attr:{href:'#/movies/details/'+id}">Details</a></td> </tr> </tbody> </table> <a href="#/movies/add">Add Movie</a> Notice that this is an HTML fragment. This fragment will be stuffed into the page-host DIV element in the shell.html file which is stuffed, in turn, into the applicationHost DIV element in the server-side MVC view. The HTML markup above contains data-bind attributes used by Knockout to display the list of movies (To learn more about Knockout, visit http://knockoutjs.com). The list of movies from the view model is displayed in an HTML table. Notice that the page includes a link to a page for adding a new movie. The link uses the following URL which starts with a hash: #/movies/add. Because the link starts with a hash, clicking the link does not cause a request back to the server. Instead, you navigate to the movies/add page virtually. Creating the Movies Add Page The movies add page also consists of a view model and view. The add page enables you to add a new movie to the movie database. Here’s the view model for the add page: define(function (require) { var app = require('durandal/app'); var router = require('durandal/plugins/router'); var moviesRepository = require("repositories/moviesRepository"); return { movieToAdd: { title: ko.observable(), director: ko.observable() }, activate: function () { this.movieToAdd.title(""); this.movieToAdd.director(""); this._movieAdded = false; }, canDeactivate: function () { if (this._movieAdded == false) { return app.showMessage('Are you sure you want to leave this page?', 'Navigate', ['Yes', 'No']); } else { return true; } }, addMovie: function () { // Add movie to db moviesRepository.addMovie(ko.toJS(this.movieToAdd)); // flag new movie this._movieAdded = true; // return to list of movies router.navigateTo("#/movies/show"); } }; }); The view model contains one property named movieToAdd which is bound to the add movie form. The view model also has the following three methods: 1. 
activate() – This method is called by Durandal when you navigate to the add movie page. The activate() method resets the add movie form by clearing out the movie title and director properties. 2. canDeactivate() – This method is called by Durandal when you attempt to navigate away from the add movie page. If you return false then navigation is cancelled. 3. addMovie() – This method executes when the add movie form is submitted. This code adds the new movie to the movie repository. I really like the Durandal canDeactivate() method. In the code above, I use the canDeactivate() method to show a warning to a user if they navigate away from the add movie page – either by clicking the Cancel button or by hitting the browser back button – before submitting the add movie form: The view for the add movie page looks like this: <form data-bind="submit:addMovie"> <fieldset> <legend>Add Movie</legend> <div> <label> Title: <input data-bind="value:movieToAdd.title" required /> </label> </div> <div> <label> Director: <input data-bind="value:movieToAdd.director" required /> </label> </div> <div> <input type="submit" value="Add" /> <a href="#/movies/show">Cancel</a> </div> </fieldset> </form> I am using Knockout to bind the movieToAdd property from the view model to the INPUT elements of the HTML form. Notice that the FORM element includes a data-bind attribute which invokes the addMovie() method from the view model when the HTML form is submitted. Creating the Movies Details Page You navigate to the movies details Page by clicking the Details link which appears next to each movie in the movies show page: The Details links pass the movie ids to the details page: #/movies/details/0 #/movies/details/1 #/movies/details/2 Here’s what the view model for the movies details page looks like: define(function (require) { var router = require('durandal/plugins/router'); var moviesRepository = require("repositories/moviesRepository"); return { movieToShow: { title: ko.observable(), director: ko.observable() }, activate: function (context) { // Grab movie from repository var movie = moviesRepository.getMovie(context.id); // Add to view model this.movieToShow.title(movie.title); this.movieToShow.director(movie.director); } }; }); Notice that the view model activate() method accepts a parameter named context. You can take advantage of the context parameter to retrieve route parameters such as the movie Id. In the code above, the context.id property is used to retrieve the correct movie from the movie repository and the movie is assigned to a property named movieToShow exposed by the view model. The movie details view displays the movieToShow property by taking advantage of Knockout bindings: <div> <h2 data-bind="text:movieToShow.title"></h2> directed by <span data-bind="text:movieToShow.director"></span> </div> Summary The goal of this blog entry was to walkthrough building a simple Single Page App using Durandal and to get a feel for what it is like to use this library. I really like how Durandal stitches together Knockout, Sammy, and RequireJS and establishes patterns for using these libraries to build Single Page Apps. Having a standard pattern which developers on a team can use to build new pages is super valuable. Once you get the hang of it, using Durandal to create new virtual pages is dead simple. Just define a new route, view model, and view and you are done. 
I also appreciate the fact that Durandal did not attempt to re-invent the wheel and that Durandal leverages existing JavaScript libraries such as Knockout, RequireJS, and Sammy. These existing libraries are powerful libraries and I have already invested a considerable amount of time in learning how to use them. Durandal makes it easier to use these libraries together without losing any of their power. Durandal has some additional interesting features which I have not had a chance to play with yet. For example, you can use the RequireJS optimizer to combine and minify all of a Durandal app’s code. Also, Durandal supports a way to create custom widgets (client-side controls) by composing widgets from a controller and view. You can download the code for the Movies app by clicking the following link (this is a Visual Studio 2012 project): Durandal Movie App
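One module the walkthrough relies on but never shows is repositories/moviesRepository. As a purely illustrative sketch, my own guess at its shape rather than code from the original download, an in-memory version consistent with the listMovies(), getMovie(id), and addMovie() calls above could look like this:

define(function (require) {
    // Hypothetical in-memory store; a real app would call the ASP.NET Web API instead.
    var movies = [
        { id: 0, title: "Star Wars", director: "George Lucas" },
        { id: 1, title: "Jaws", director: "Steven Spielberg" },
        { id: 2, title: "Memento", director: "Christopher Nolan" }
    ];

    return {
        listMovies: function () {
            return movies;
        },
        getMovie: function (id) {
            // Route parameters arrive as strings, so compare loosely.
            return movies.filter(function (m) { return m.id == id; })[0];
        },
        addMovie: function (movie) {
            movie.id = movies.length; // naive id assignment, fine for a demo
            movies.push(movie);
        }
    };
});

Because the module returns a single object, every view model that requires it shares the same movie list, which is what lets the show page see a movie added from the add page.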

    Read the article

  • Forbes Announcing The World’s Top 20 Billionaires

    - by Suganya
Forbes recently conducted its annual survey to figure out the world's billionaires and has released the list, naming the top 20. The company says that for the third time in the last three years the world has a new richest man this year. That means Bill Gates was beaten by someone else in the world. Who is the new richest man in the world? Forbes.com announced the richest man in the world, and this time it is not Bill Gates. It is Carlos Slim Helu, who is in the telecom industry. Carlos lives in Mexico and held the third richest man's place last year. Having shown a net worth of $53.5 billion, Carlos has added $18.5 billion in a year. Carlos swooped on the privatization of Mexico's national telephone service during the last decade and has now become the world's richest man. Following Carlos, in second place, is Bill Gates with a net worth of $53 billion. As Bill Gates requires no great introduction, let's move on to the next place. Third place is occupied by Warren Buffett, followed by Mukesh Ambani and Lakshmi Mittal in fourth and fifth places respectively. The top 20 names of the world's richest people, their sources of income, and the net worth they hold are:

S.No  Name                               Net Worth (in $ Billion)  Source of Income
1     Carlos Slim Helu                   53.5                      Telecom
2     Bill Gates                         53                        Microsoft
3     Warren Buffett                     47                        Investments
4     Mukesh Ambani                      29                        Petrochemical, Oil and Gas
5     Lakshmi Mittal                     28.7                      Steel
6     Lawrence Ellison                   28                        Oracle
7     Bernard Arnault                    27.5                      Luxury Goods
8     Eike Batista                       27                        Mining, Oil
9     Amancio Ortega                     25                        Fashion, Retail
10    Karl Albrecht                      23.5                      Supermarkets
11    Ingvar Kamprad and Family          23                        IKEA
12    Christy Walton and Family          22.5                      Wal-Mart
13    Stefan Persson                     22.4                      H & M
14    Li Ka-shing                        21                        Diversified
15    Jim C. Walton                      20.7                      Wal-Mart
16    Alice Walton                       20.6                      Wal-Mart
17    Liliane Bettencourt                20                        L'Oreal
18    S. Robson Walton                   19.8                      Wal-Mart
19    Prince Alwaleed bin Talal Alsaud   19.4                      Diversified
20    David Thomson and Family           19                        Thomson Reuters

Source: Forbes. Image Credit: kevindooley. Join us on Facebook to read all our stories right inside your Facebook news feed.

    Read the article

  • Guidance: A Branching strategy for Scrum Teams

    - by Martin Hinshelwood
Having a good branching strategy will save your bacon, or at least your code. Be careful when deviating from your branching strategy, because if you do, you may be worse off than when you started! This is one possible branching strategy for Scrum teams. I will not be going in depth with Scrum, but you can find out more about Scrum by reading the Scrum Guide, and you can even assess your Scrum knowledge by having a go at the Scrum Open Assessment. You can also read SSW's Rules to Better Scrum using TFS, which have been developed during our own Scrum implementations.
Acknowledgements
Bill Heys – Bill offered some good feedback on this post and helped soften the language. Note: Bill is a VS ALM Ranger and co-wrote the Branching Guidance for TFS 2010.
Willy-Peter Schaub – Willy-Peter is an ex Visual Studio ALM MVP turned blue badge and has been involved in most of the guidance, including the Branching Guidance for TFS 2010.
Chris Birmele – Chris wrote some of the early TFS Branching and Merging Guidance.
Dr Paul Neumeyer, Ph.D Parallel Processes, ScrumMaster and SSW Solution Architect – Paul wanted to have feature branches coming from the release branch as well. We agreed that this is really a spin-off that needs its own project, backlog, budget and Team.
Scenario: A product is developed. RTM 1.0 is released and gets great sales. Extra features are demanded, but the new version will be double the price to recover costs; the work is approved by the guys with the budget, and a few sprints later RTM 2.0 is released. Sales are very low due to the pricing strategy. There are lots of clients on RTM 1.0 calling out for patches.
As I keep getting Reverse Integration and Forward Integration mixed up and Bill keeps slapping my wrists, I thought I should have a reminder: You still seemed to use reverse and/or forward integration in the wrong context. I would recommend reviewing your document at the end to ensure that it agrees with the common understanding of these terms: merge (forward integration) from parent to child (same direction as the branch), and merge (reverse integration) from child to parent (the reverse direction of the branch). - one of my many slaps on the wrist from Bill Heys.
As I mentioned previously, we are using a single feature branching strategy in our current project. The single biggest mistake developers make is developing against the "Main" or "Trunk" line. This ultimately leads to messy code as things are added and never finished. Your only alternative is to NEVER check in unless your code is 100%, but this does not work in practice, even with a single developer. Your ADD will kick in and your half-finished code will be finished enough to pass the build and the tests. You do use builds, don't you? Sadly, this is a very common scenario and I have had people argue that branching merely adds complexity. Then again I have seen the other side of the universe ... branching structures from he... We should somehow convince everyone that there is a happy medium between no-branching and too-much-branching. - Willy-Peter Schaub, VS ALM Ranger, Microsoft
A key benefit of branching for development is to isolate changes from the stable Main branch. Branching adds sanity more than it adds complexity. We do try to stress in our guidance that it is important to justify a branch by doing a cost benefit analysis. The primary cost is the effort to do merges and resolve conflicts. A key benefit is that you have a stable code base in Main and accept changes into Main only after they pass quality gates, etc.
- Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft
The second biggest mistake developers make is branching anything other than the WHOLE "Main" line. If you branch parts of your code and not others, it gets out of sync and can make integration a nightmare. You should have your Source, Assets, Build scripts, deployment scripts and dependencies inside the "Main" folder and branch the whole thing. Some departments within MSFT even go as far as to add the environments used to develop the product in there as well, although I would not recommend that unless you have a massive SQL cluster to house your source code.
We tried the "add environment" back in South-Africa and while it was "phenomenal", especially when having to switch between environments, the disk storage and processing requirements killed us. We opted for virtualization to skin this cat of keeping a ready-to-go environment handy. - Willy-Peter Schaub, VS ALM Ranger, Microsoft
I think people often think that you should have separate branches for separate environments (e.g. Dev, Test, Integration Test, QA, etc.). I prefer to think of deploying to environments (such as from Main to QA) rather than branching for QA. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft
You can read about SSW's Rules to better Source Control for some additional information on what Source Control to use and how to use it. There are also a number of branching Anti-Patterns that should be avoided at all costs. You know you are on the wrong track if you experience one or more of the following symptoms in your development environment:
Merge Paranoia—avoiding merging at all cost, usually because of a fear of the consequences.
Merge Mania—spending too much time merging software assets instead of developing them.
Big Bang Merge—deferring branch merging to the end of the development effort and attempting to merge all branches simultaneously.
Never-Ending Merge—continuous merging activity because there is always more to merge.
Wrong-Way Merge—merging a software asset version with an earlier version.
Branch Mania—creating many branches for no apparent reason.
Cascading Branches—branching but never merging back to the main line.
Mysterious Branches—branching for no apparent reason.
Temporary Branches—branching for changing reasons, so the branch becomes a permanent temporary workspace.
Volatile Branches—branching with unstable software assets shared by other branches or merged into another branch. Note: Branches are volatile most of the time while they exist as independent branches. That is the point of having them. The difference is that you should not share or merge branches while they are in an unstable state.
Development Freeze—stopping all development activities while branching, merging, and building new base lines.
Berlin Wall—using branches to divide the development team members, instead of dividing the work they are performing.
- Branching and Merging Primer by Chris Birmele, Developer Tools Technical Specialist at Microsoft Pty Ltd in Australia
In fact, this can result in a merge exercise no-one wants to be involved in, merging hundreds of thousands of change sets and trying to get a consolidated build. Again, we need to find a happy medium. - Willy-Peter Schaub on Merge Paranoia
Merge conflicts are generally the result of making changes to the same file in both the target and source branch. If you create merge conflicts, you will eventually need to resolve them. Often the resolution is manual.
Merging more frequently allows you to resolve these conflicts close to when they happen, making the resolution clearer. Waiting weeks or months to resolve them, the Big Bang approach, means you are more likely to resolve conflicts incorrectly. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft
Figure: Main line, this is where your stable code lives and where any build has known entities, always passes and has a happy test that passes as well.
Many development projects consist of a single "Main" line of source and artifacts. This is good; at least there is source control. There are, however, a couple of issues that need to be considered. What happens if:
you and your team are working on a new set of features and the customer wants a change to his current version?
you are working on two features and the customer decides to abandon one of them?
you have two teams working on different feature sets and their changes start interfering with each other?
I just use labels instead of branches?
That's a lot of "what if's", but there is a simple way of preventing this. Branching…
In TFS, labels are not immutable. This does not mean they are not useful. But labels do not provide a very good development isolation mechanism. Branching allows separate code sets to evolve separately (e.g. Current with hotfixes, and vNext with new development). I don't see how labels work here. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft
Figure: Creating a single feature branch means you can isolate the development work on that branch.
It's standard practice for large projects with lots of developers to use Feature branching, and you can check the Branching Guidance for the latest recommendations from the Visual Studio ALM Rangers for other methods. In the diagram above you can see my recommendation for branching when using Scrum development with TFS 2010. It consists of a single Sprint branch to contain all the changes for the current sprint. The main branch has its permissions changed so contributors to the project can only Branch and Merge with "Main". This will prevent accidental check-ins or checkouts of the "Main" line that would contaminate the code. The developers continue to develop on sprint one until the completion of the sprint. Note: In the real world, starting a new Greenfield project, this process starts at Sprint 2, as at the start of Sprint 1 you would have artifacts in version control and no need for isolation.
Figure: Once the sprint is complete the Sprint 1 code can then be merged back into the Main line.
There are always good practices to follow, and one is to always do a Forward Integration from Main into Sprint 1 before you do a Reverse Integration from Sprint 1 back into Main. In this case it may seem superfluous, but this builds good muscle memory into your developers' work ethic and means that no bad habits are learned that would interfere with additional Scrum Teams being added to the Product. The process of completing your sprint development:
1. The Team completes their work according to their definition of done.
2. Merge from "Main" into "Sprint1" (Forward Integration). Stabilize your code with any changes coming from other Scrum Teams working on the same product. If you have one Scrum Team this should be quick, but there may have been bug fixes in the Release branches (we will talk about release branches later).
3. Merge from "Sprint1" into "Main" to commit your changes
(Reverse Integration).
4. Check-in.
5. Delete the Sprint1 branch. Note: The Sprint 1 branch is no longer required, as its useful life has been concluded.
6. Check-in.
7. Done.
But you are not yet done with the Sprint. The goal in Scrum is to have a "potentially shippable product" at the end of every Sprint, and we do not have that yet; we only have finished code.
Figure: With Sprint 1 merged you can create a Release branch and run your final packaging and testing.
In 99% of all projects I have been involved in or watched, a "shippable product" only happens towards the end of the overall lifecycle, especially when sprints are short. The in-between releases are great demonstration releases, but not shippable. Perhaps it comes from my 80's brain washing that we only ship when we reach the agreed quality and business feature bar. - Willy-Peter Schaub, VS ALM Ranger, Microsoft
Although you should have been testing and packaging your code all the way through your Sprint 1 development, preferably using an automated process, you still need to test and package with stable, unchanging code. This is where you do what at SSW we call a "Test Please". This is first an internal test of the product to make sure it meets the needs of the customer, and you generally use a resource external to your Team. Then a "Test Please" is conducted with the Product Owner to make sure he is happy with the output. You can read about how to conduct a Test Please on our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?
Figure: If you find a deviation from the expected result you fix it on the Release branch.
If during your final testing or your "Test Please" you find there are issues or bugs, then you should fix them on the release branch. If you can't fix them within the time box of your Sprint, then you will need to create a Bug and put it onto the backlog for prioritization by the Product Owner. Make sure you leave plenty of time after your merge from the development branch to find and fix any problems that are uncovered. This process is commonly called Stabilization and should always be conducted once you have completed all of your User Stories and integrated all of your branches.
Even once you have stabilized and released, you should not delete the release branch as you would with the Sprint branch. It has a usefulness for servicing that may extend well beyond the limited life you expect of it. Note: Don't get forced by the business into adding features to a Release branch; instead, that indicates the unspoken requirement is that they are asking for a product spin-off. In this case you can create a new Team Project and branch from the required Release branch to create a new Main branch for that product, and you create a whole new backlog to work from.
Figure: When the Team decides it is happy with the product you can create an RTM branch.
Once you have fixed all the bugs you can, and added any you can't to the Product Backlog, and your Team is happy with the result, you can create a Release. This would consist of doing the final Build and Packaging it up ready for your Sprint Review meeting. You would then create a read-only branch that represents the code you "shipped". This is really an Audit trail branch that is optional, but it is good practice. You could use a Label, but Labels are not Auditable, and if a dispute was raised by the customer you can produce a verifiable version of the source code for an independent party to check.
Rare, I know, but you do not want to be at the wrong end of a legal battle. Like the Release branch, the RTM branch should never be deleted, or only deleted according to your company's legal policy, which in the UK is usually 7 years.
Figure: If you have made any changes in the Release you will need to merge back up to Main in order to finalise the changes. Nothing is really ever done until it is in Main.
The same rules apply when merging any fixes in the Release branch back into Main: you should do a reverse merge before a forward merge, again for the muscle memory more than necessity at this stage. Your Sprint is now nearly complete, and you can have a Sprint Review meeting knowing that you have made every effort and taken every precaution to protect your customer's investment. Note: In order to really achieve protection for both you and your client you would add Automated Builds, Automated Tests, Automated Acceptance tests, Acceptance test tracking, Unit Tests, Load tests, Web tests and all the other good engineering practices that help produce reliable software.
Figure: After the Sprint Planning meeting the process begins again.
Where the Sprint Review and Retrospective meetings mark the end of the Sprint, the Sprint Planning meeting marks the beginning. After you have completed your Sprint Planning and you know what you are trying to achieve in Sprint 2, you can create your new Branch to develop in. How do we handle bug(s) in production that can't wait? Although in Scrum the only work done should be on the backlog, there should be a little buffer added to the Sprint Planning for contingencies. One of these contingencies is a bug in the current release that can't wait for the Sprint to finish. But how do you handle that? Willy-Peter Schaub asked an excellent question on the release activities: In reality Sprint 2 starts when sprint 1 ends + weekend. Should we not cater for a possible parallelism between Sprint 2 and the release activities of sprint 1? It would introduce FI's from main to sprint 2, I guess. Your "Figure: Merging Sprint 2 back into Main." covers what I tend to believe to be reality in most cases. - Willy-Peter Schaub, VS ALM Ranger, Microsoft
I agree, and if you have a single Scrum team then your resources are limited. The Scrum Team is responsible for packaging and release, so at least one run at stabilization, package and release should be included in the Sprint time box. If more are needed on the current production release during the Sprint 2 time box, then resources need to be pulled from Sprint 2. The Product Owner and the Team have four choices (in order of disruption/cost):
1. Backlog: Add the bug to the backlog and fix it in the next Sprint.
2. Buffer Time: Use any buffer time included in the current Sprint to fix the bug quickly.
3. Make time: Remove a Story from the current Sprint that is of equal value to the time lost fixing the bug(s) and releasing. Note: The Team must agree that it can still meet the Sprint Goal.
4. Cancel Sprint: Cancel the sprint and concentrate all resources on fixing the bug(s). Note: This can be very costly if the current sprint has already had a lot of work completed, as it will be lost.
The choice will depend on the complexity and severity of the bug(s), and both the Product Owner and the Team need to agree. In this case we will go with option #2 or #3, as they are uncomplicated but severe bugs.
Figure: Real world issue where a bug needs to be fixed in the current release.
If the bug(s) is urgent enough then your only option is to fix it in place. You can edit the release branch to find and fix the bug, hopefully creating a test so it can't happen again. Follow the prior process and conduct an internal and customer "Test Please" before releasing. You can read about how to conduct a Test Please on our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?
Figure: After you have fixed the bug you need to ship again. You then need to again create an RTM branch to hold the version of the code you released in escrow.
Figure: Main is now out of sync with your Release.
We now need to get these new changes back up into the Main branch. Do a reverse and then forward merge again to get the new code into Main. But what about the branch; are developers not working on Sprint 2? Does Sprint 2 now have changes that are not in Main, and does Main now have changes that are not in Sprint 2? Well, yes… and this is part of the hit you take doing branching. But would this scenario even have been possible without branching?
Figure: Getting the changes in Main into Sprint 2 is very important.
The Team now needs to do a Forward Integration merge into their Sprint and resolve any conflicts that occur. Maybe the bug has already been fixed in Sprint 2; maybe the bug no longer exists! This needs to be identified and resolved by the developers before they continue, so they do not get further out of sync with Main. Note: Avoid the "Big bang merge" at all costs.
Figure: Merging Sprint 2 back into Main, the Forward Integration, and R0 terminates.
Sprint 2 now merges (Reverse Integration) back into Main following the procedures we have already established.
Figure: The logical conclusion.
This then allows the creation of the next release. By now you should be getting the big picture and hopefully you learned something useful from this post. I know I have enjoyed writing it, as I find these exploratory posts coupled with real world experience really help harden my understanding. Branching is a tool; it is not a silver bullet. Don't overuse it, and avoid "Anti-Patterns" where possible. Although the diagram above looks complicated, I hope showing you how it is formed simplifies it as much as possible.
Technorati Tags: Branching,Scrum,VS ALM,TFS 2010,VS2010
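To make the Forward and Reverse Integration steps concrete, here is a minimal sketch of the sequence using the tf.exe version control command line. The $/Product server paths and check-in comments are my own assumptions for illustration; your Team Project layout will differ:

REM Create the sprint branch from Main and commit it
tf branch $/Product/Main $/Product/Sprint1
tf checkin /comment:"Create Sprint1 branch"

REM Forward Integration: merge Main down into the sprint branch
tf merge $/Product/Main $/Product/Sprint1 /recursive
tf checkin /comment:"FI: Main into Sprint1"

REM Reverse Integration: merge the sprint branch back up into Main
tf merge $/Product/Sprint1 $/Product/Main /recursive
tf checkin /comment:"RI: Sprint1 into Main"

REM Retire the sprint branch once its useful life has concluded
tf delete $/Product/Sprint1
tf checkin /comment:"Delete Sprint1 branch"

Note that tf merge only creates pending changes, which is why each merge is followed by a check-in; doing the Forward Integration first means any conflicts are resolved inside the sprint branch rather than in Main.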

    Read the article

  • SQL SERVER – Difference Between DATETIME and DATETIME2

    - by pinaldave
Yesterday I wrote a very quick blog post on SQL SERVER – Difference Between GETDATE and SYSDATETIME and I got a tremendous response to it. I suggest you read that blog post before continuing with this one. I had asked people to honestly take part and share their views about the two system functions above. There are a few emails as well as a few comments on the blog post asking how I came to know the difference between the two. The answer is real-world issues. I was called in for a performance tuning consultancy where I was asked a very strange question by one developer. Here is the situation he was facing. The system had a single table with two different datetime columns. One column was datelastmodified and the second column was datefirstmodified. One of the columns was DATETIME and the other was DATETIME2. The developer was populating both with SYSDATETIME. He always thought that the values inserted in the table would be the same. This table was only accessed by INSERT statements and there were no updates done to it in the application. One fine day he ran DISTINCT on both of these columns and was in for a surprise. He always thought that both of the columns would have the same data, but in fact they had very different data. He presented this scenario to me. I said this could not be possible, but when I looked at the resultset, I had to agree with him. Here is a simple script generated to demonstrate the problem he was facing. This is just a sample of the original table.

DECLARE @Intveral INT
SET @Intveral = 10000
CREATE TABLE #TimeTable (FirstDate DATETIME, LastDate DATETIME2)
WHILE (@Intveral > 0)
BEGIN
INSERT #TimeTable (FirstDate, LastDate)
VALUES (SYSDATETIME(), SYSDATETIME())
SET @Intveral = @Intveral - 1
END
GO
SELECT COUNT(DISTINCT FirstDate) D_GETDATE,
COUNT(DISTINCT LastDate) D_SYSGETDATE
FROM #TimeTable
GO
SELECT DISTINCT a.FirstDate, b.LastDate
FROM #TimeTable a
INNER JOIN #TimeTable b ON a.FirstDate = b.LastDate
GO
SELECT * FROM #TimeTable
GO
DROP TABLE #TimeTable
GO

Let us see the resultset. You can clearly see from the result that SYSDATETIME() does not populate the same value in both of the fields. In fact the value is either rounded down or rounded up in the field which is DATETIME. Even though we are populating the same value, the values are totally different in both columns, resulting in the SELF JOIN failing and displaying different DISTINCT values. The best policy is: if you are using DATETIME use GETDATE(), and if you are using DATETIME2 use SYSDATETIME() to populate them with the current date and time, to accurately address the precision. As DATETIME2 was introduced in SQL Server 2008, the above script will only work with SQL Server 2008 and later versions. I hope I have answered a few of the questions asked yesterday. Reference: Pinal Dave (http://www.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL DateTime, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
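If you just want to see the rounding without building a table, here is a quick one-off check (my own sketch, not part of the original script):

SELECT SYSDATETIME() AS FullPrecisionDatetime2,
       CAST(SYSDATETIME() AS DATETIME) AS RoundedToDatetime;
-- DATETIME is only accurate to increments of .000, .003 and .007 seconds,
-- so the CAST rounds the DATETIME2 value and the two columns will usually differ.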

    Read the article

  • Why do apache2 upgrades remove and not re-install libapache2-mod-php5?

    - by nutznboltz
We repeatedly see that when an apache2 update arrives and is installed, it causes the libapache2-mod-php5 package to be removed and does not subsequently re-install it automatically. We must then re-install libapache2-mod-php5 manually in order to restore functionality to our web server. Please see the following GitHub gist; it is a contiguous section of our server's dpkg.log showing the November 14, 2011 update to apache2: https://gist.github.com/1368361 It includes: 2011-11-14 11:22:18 remove libapache2-mod-php5 5.3.2-1ubuntu4.10 5.3.2-1ubuntu4.10 Is this a known issue? Do other people see this too? I could not find any Launchpad bug reports about it. Platform details:

$ lsb_release -ds
Ubuntu 10.04.3 LTS
$ uname -srvm
Linux 2.6.38-12-virtual #51~lucid1-Ubuntu SMP Thu Sep 29 20:27:50 UTC 2011 x86_64
$ dpkg -l | awk '/ii.*apache/ {print $2 " " $3 }'
apache2 2.2.14-5ubuntu8.7
apache2-mpm-prefork 2.2.14-5ubuntu8.7
apache2-utils 2.2.14-5ubuntu8.7
apache2.2-bin 2.2.14-5ubuntu8.7
apache2.2-common 2.2.14-5ubuntu8.7
libapache2-mod-authnz-external 3.2.4-2+squeeze1build0.10.04.1
libapache2-mod-php5 5.3.2-1ubuntu4.10

Thanks

At a high level, the update process looks like:

package package_name do
  action :upgrade
  case node[:platform]
  when 'centos', 'redhat', 'scientific'
    options '--disableplugin=fastestmirror'
  when 'ubuntu'
    options '-o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold"'
  end
end

But at a lower level:

def install_package(name, version)
  run_command_with_systems_locale(
    :command => "apt-get -q -y#{expand_options(@new_resource.options)} install #{name}=#{version}",
    :environment => { "DEBIAN_FRONTEND" => "noninteractive" }
  )
end

def upgrade_package(name, version)
  install_package(name, version)
end

So Chef is using "install" to do "update". This sort of moves the question around to: how does "apt-get safe-upgrade" remember to re-install libapache2-mod-php5? The exact sequence of packages that triggered this was: apache2, apache2-mpm-prefork, apache2-mpm-worker, apache2-utils, apache2.2-bin, apache2.2-common. But the code is attempting to run checks to make sure the packages in that list are already installed before attempting to "upgrade" them.

case node[:platform]
when 'debian', 'centos', 'fedora', 'redhat', 'scientific', 'ubuntu'
  # first primitive way is to define the updates in the recipe
  # data bags will be used later
  %w/ apache2 apache2-mpm-prefork apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common /.each{ |package_name|
    Chef::Log.debug("is #{package_name} among local packages available for changes?")
    next unless node[:packages][:changes].keys.include?(package_name)
    Chef::Log.debug("is #{package_name} available for upgrade?")
    next unless node[:packages][:changes][package_name][:action] == 'upgrade'
    package package_name do
      action :upgrade
      case node[:platform]
      when 'centos', 'redhat', 'scientific'
        options '--disableplugin=fastestmirror'
      when 'ubuntu'
        options '-o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold"'
      end
    end
    tag('upgraded')
  }
  # after upgrading everything, run yum cache updater
  if tagged?('upgraded')
    # Remove old orphaned dependencies and kernel images and kernel headers etc.
    # Remove cached deb files.
    case node[:platform]
    when 'ubuntu'
      execute 'apt-get -y autoremove'
      execute 'apt-get clean'
    # Re-check what updates are available soon.
    when 'centos', 'fedora', 'redhat', 'scientific'
      node[:packages][:last_time_we_looked_at_yum] = 0
    end
    untag('upgraded')
  end
end

But it's clear that it fails, since the dpkg.log has: 2011-11-14 11:22:25 install apache2-mpm-worker 2.2.14-5ubuntu8.7 on a system which does not currently have apache2-mpm-worker. I will have to discuss this with the author, thanks again.
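As a possible workaround from the Chef side, and this is only a sketch of an approach rather than a confirmed fix, you could make the PHP module re-install itself whenever the apache2 package is acted on. The notification below assumes a Chef client recent enough to support the string resource-notification syntax:

# Sketch: re-install the PHP Apache module immediately after any apache2 change.
package 'apache2' do
  action :upgrade
  notifies :install, 'package[libapache2-mod-php5]', :immediately
end

package 'libapache2-mod-php5' do
  action :nothing   # only runs when notified by the apache2 resource
end

Alternatively, at the dpkg level, putting the package on hold (echo "libapache2-mod-php5 hold" | dpkg --set-selections) stops apt-get from acting on it during upgrades, at the cost of also blocking its own updates.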

    Read the article

  • The best, in the West

    - by Fatherjack
As many of you know, I run the SQL South West user group and we are currently in full flow preparing to stage the UK's second SQL Saturday. The SQL Saturday spotlight is going to fall on Exeter in March 2013. We have full-day sessions on Friday 8th with some truly amazing speakers giving their insights and experience into some vital areas of working with SQL Server:
Dave Ballantyne and Dave Morrison – TSQL and internals
Christian Bolton and Gavin Payne – Mission critical data platforms on Windows Server 2012
Denny Cherry – SQL Server Security
André Kamman – Powershell 3.0 for SQL Server Administrators and Developers
Mladen Prajdic – From SQL Traces to Extended Events – The next big switch
A number of people have claimed that the choice is too good and they'd have trouble selecting just one session to attend. I can see how this is a problem but hope that they make their minds up quickly. The venue is a bespoke conference suite in the centre of Exeter but has limited capacity, so we are working on a first-come first-served basis. All the session details and booking and travel information can be found on our user group website. The Saturday will be a day of free, 50-minute sessions on all aspects of SQL Server from almost 30 different speakers. If you would like to submit a session then get a move on, as submissions close on 8th January 2013 (that's less than a month away). We are really interested in getting new speakers started, so we have a lightning talk session where you can come along and give a small talk (anywhere from 5 to 15 minutes long) about anything connected with SQL Server as a way to introduce you to what it's like to be a speaker at an event. Details on registering to attend and to submit a session (lightning talks need to be submitted too, please) can be found on our SQL Saturday pages. This is going to be the biggest and best bespoke SQL Server conference to ever take place this far South West in the UK, and we aim to give everyone who comes to either day a real experience of the South West, so we have a few surprises for you on the day.

    Read the article

  • Is Your ASP.NET Development Server Not Working?

    - by Paulo Morgado
Since Visual Studio 2005, Visual Studio has come with a development web server: the ASP.NET Development Server. I've been using this web server for simple test projects since then, with Visual Studio 2005 and Visual Studio 2008 on Windows XP Professional on my work laptop, and on Windows XP Professional, Windows Vista 64bit Ultimate and Windows 7 64bit Ultimate on my home desktop, without any problems (apart from the known custom identity problem, that is). When I received my new work laptop, I installed Windows Vista 64bit Enterprise and Visual Studio 2008 and, to my surprise, the ASP.NET Development Server wasn't working. I started looking for differences between the laptop environment and the desktop environment, and the most noticeable differences were:

                     Laptop                           Desktop
SKU                  Windows Vista 64bit Enterprise   Windows Vista 64bit Ultimate
Joined to a Domain   Yes                              No
Anti-Virus           McAfee                           ESET

After asserting that no domain policies were being applied to my laptop and domain user, and that nothing was being logged by the anti-virus, my suspicions turned to the fact that the laptop was running an Enterprise SKU and the desktop was running an Ultimate SKU. After having problems with other applications I was sure the problem was the Enterprise SKU, but I never found a solution to the problem. Because I wasn't doing any web development at the time, I left it alone. After upgrading to Windows 7, the problem persisted but, because I wasn't doing any web development at the time, once again, I left it alone. Now that I have installed Visual Studio 2010 I had to solve this. After searching around forums and blogs that either didn't offer an answer or offered very complicated workarounds that sometimes involved messing with the registry, I came to the conclusion that the solution is, in fact, very simple. When Windows Vista is installed, the hosts file, according to this, contains this definition:

127.0.0.1 localhost
::1 localhost

This was not what I had in my laptop's hosts file. What I had was this:

#127.0.0.1 localhost
#::1 localhost

I might have changed it myself, but from the number of people I found complaining about this problem on Windows Vista, this was probably the way it was. The installation of Windows 7 leaves the hosts file like this:

#127.0.0.1 localhost
#::1 localhost

And although the ASP.NET Development Server works fine on Windows 7 64bit Ultimate, on Windows 7 64bit Enterprise it needs to be changed to this:

127.0.0.1 localhost
::1 localhost

And I suspect it's the same with Windows Vista 64bit Enterprise.
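After editing the hosts file, a quick sanity check from a command prompt (my own suggestion, not from the original post) confirms what localhost resolves to for IPv4 and IPv6:

ping -4 localhost
ping -6 localhost

If the hosts entries are active, the first should reply from 127.0.0.1 and the second from ::1.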

    Read the article

< Previous Page | 451 452 453 454 455 456 457 458 459 460 461 462  | Next Page >