Search Results

Search found 5183 results on 208 pages for 'programmer skills'.

  • jQuery tablesorter - Keeping grouped subheaders together, but still sorted

    - by hfidgen
    Hiya, I'm not really a Javascript programmer, so I'm struggling with this! I'm using the tablesorter plugin along with the Tablegroup plugin, which work very nicely to group the table rows by a parent, and then sort the parents. My problem is though, that I'd also like the child rows to be sorted whilst within the parent group I've done my best to get this working but I'm afraid I've hit a wall. Can anyone suggest a new starter for 10? The example below is working fine - There are 2x groups here: Nordics (Norway and Denmark) DACH (Germany and Austria) If I click on the header row, the groups are sorted, but the child rows within the group are not sorted. <script type="text/javascript"> $(document).ready(function () { $(".tablesorter") .tablesorter({ // set default sort column sortList: [[0,0]], // don't sort by first column headers: {0: {sorter: false}} , onRenderHeader: function (){ this.wrapInner("<span></span>"); } , debug: true }) }); </script> <table id="results-header" class="grid tablesorter table-header" cellpadding="0" cellspacing="0" border="0"> <thead> <tr class="title"> <th class="countries">&nbsp;</th> <th>% market share</th> <th>% increase in mkt share</th> <th>Target achieved</th> <th>% targets</th> <th>% sales inc. M-o-M</th> <th>% sales inc. M-o-M for country</th> <th>% training</th> </tr> </thead> <tbody> <tr id="Nord" class="collapsible parent parent-even collapsed"> <td class="countries">Nordics</td> <td>39.5</td> <td>49</td> <td>69.8</td> <td>51.8</td> <td>43</td> <td>42.5</td> <td>38</td> </tr> <tr id="row-Norway" class="expand-child child child-Nord"> <td class="countries">Norway</td> <td>6</td> <td>45</td> <td>101</td> <td>10</td> <td>20</td> <td>40</td> <td>30</td> </tr> <tr id="row-Denmark" class="expand-child child child-Nord"> <td class="countries">Denmark</td> <td>10</td> <td>20</td> <td>3</td> <td>40</td> <td>50</td> <td>25</td> <td>8</td> </tr> <tr id="DACH" class="collapsible parent parent-odd collapsed"> <td class="countries">DACH</td> <td>77</td> <td>61</td> <td>43</td> <td>98</td> <td>65</td> <td>92.5</td> <td>59.5</td> </tr> <tr id="row-Germany" class="expand-child child child-DACH"> <td class="countries">Germany</td> <td>56</td> <td>24</td> <td>84</td> <td>98</td> <td>32</td> <td>87</td> <td>21</td> </tr> <tr id="row-Austria" class="expand-child child child-DACH"> <td class="countries">Austria</td> <td>98</td> <td>98</td> <td>2</td> <td>98</td> <td>98</td> <td>98</td> <td>98</td> </tr> </tbody> </table>

    Read the article

  • Should this immutable struct be a mutable class?

    - by ChaosPandion
    I showed this struct to a fellow programmer and they felt that it should be a mutable class. They felt it is inconvenient not to have null references and the ability to alter the object as required. I would really like to know if there are any other reasons to make this a mutable class. [Serializable] public struct PhoneNumber : ICloneable, IEquatable<PhoneNumber> { private const int AreaCodeShift = 54; private const int CentralOfficeCodeShift = 44; private const int SubscriberNumberShift = 30; private const int CentralOfficeCodeMask = 0x000003FF; private const int SubscriberNumberMask = 0x00003FFF; private const int ExtensionMask = 0x3FFFFFFF; private readonly ulong value; public int AreaCode { get { return UnmaskAreaCode(value); } } public int CentralOfficeCode { get { return UnmaskCentralOfficeCode(value); } } public int SubscriberNumber { get { return UnmaskSubscriberNumber(value); } } public int Extension { get { return UnmaskExtension(value); } } public PhoneNumber(ulong value) : this(UnmaskAreaCode(value), UnmaskCentralOfficeCode(value), UnmaskSubscriberNumber(value), UnmaskExtension(value), true) { } public PhoneNumber(int areaCode, int centralOfficeCode, int subscriberNumber) : this(areaCode, centralOfficeCode, subscriberNumber, 0, true) { } public PhoneNumber(int areaCode, int centralOfficeCode, int subscriberNumber, int extension) : this(areaCode, centralOfficeCode, subscriberNumber, extension, true) { } private PhoneNumber(int areaCode, int centralOfficeCode, int subscriberNumber, int extension, bool throwException) { value = 0; if (areaCode < 200 || areaCode > 989) { if (!throwException) return; throw new ArgumentOutOfRangeException("areaCode", areaCode, @"The area code portion must fall between 200 and 989."); } else if (centralOfficeCode < 200 || centralOfficeCode > 999) { if (!throwException) return; throw new ArgumentOutOfRangeException("centralOfficeCode", centralOfficeCode, @"The central office code portion must fall between 200 and 999."); } else if (subscriberNumber < 0 || subscriberNumber > 9999) { if (!throwException) return; throw new ArgumentOutOfRangeException("subscriberNumber", subscriberNumber, @"The subscriber number portion must fall between 0 and 9999."); } else if (extension < 0 || extension > 1073741824) { if (!throwException) return; throw new ArgumentOutOfRangeException("extension", extension, @"The extension portion must fall between 0 and 1073741824."); } else if (areaCode.ToString()[1] - 48 > 8) { if (!throwException) return; throw new ArgumentOutOfRangeException("areaCode", areaCode, @"The second digit of the area code cannot be greater than 8."); } else { value |= ((ulong)(uint)areaCode << AreaCodeShift); value |= ((ulong)(uint)centralOfficeCode << CentralOfficeCodeShift); value |= ((ulong)(uint)subscriberNumber << SubscriberNumberShift); value |= ((ulong)(uint)extension); } } public object Clone() { return this; } public override bool Equals(object obj) { return obj != null && obj.GetType() == typeof(PhoneNumber) && Equals((PhoneNumber)obj); } public bool Equals(PhoneNumber other) { return this.value == other.value; } public override int GetHashCode() { return value.GetHashCode(); } public override string ToString() { return ToString(PhoneNumberFormat.Separated); } public string ToString(PhoneNumberFormat format) { switch (format) { case PhoneNumberFormat.Plain: return string.Format(@"{0:D3}{1:D3}{2:D4} {3:#}", AreaCode, CentralOfficeCode, SubscriberNumber, Extension).Trim(); case PhoneNumberFormat.Separated: return 
string.Format(@"{0:D3}-{1:D3}-{2:D4} {3:#}", AreaCode, CentralOfficeCode, SubscriberNumber, Extension).Trim(); default: throw new ArgumentOutOfRangeException("format"); } } public ulong ToUInt64() { return value; } public static PhoneNumber Parse(string value) { var result = default(PhoneNumber); if (!TryParse(value, out result)) { throw new FormatException(string.Format(@"The string ""{0}"" could not be parsed as a phone number.", value)); } return result; } public static bool TryParse(string value, out PhoneNumber result) { result = default(PhoneNumber); if (string.IsNullOrEmpty(value)) { return false; } var index = 0; var numericPieces = new char[value.Length]; foreach (var c in value) { if (char.IsNumber(c)) { numericPieces[index++] = c; } } if (index < 9) { return false; } var numericString = new string(numericPieces); var areaCode = int.Parse(numericString.Substring(0, 3)); var centralOfficeCode = int.Parse(numericString.Substring(3, 3)); var subscriberNumber = int.Parse(numericString.Substring(6, 4)); var extension = 0; if (numericString.Length > 10) { extension = int.Parse(numericString.Substring(10)); } result = new PhoneNumber( areaCode, centralOfficeCode, subscriberNumber, extension, false ); return result.value == 0; } public static bool operator ==(PhoneNumber left, PhoneNumber right) { return left.Equals(right); } public static bool operator !=(PhoneNumber left, PhoneNumber right) { return !left.Equals(right); } private static int UnmaskAreaCode(ulong value) { return (int)(value >> AreaCodeShift); } private static int UnmaskCentralOfficeCode(ulong value) { return (int)((value >> CentralOfficeCodeShift) & CentralOfficeCodeMask); } private static int UnmaskSubscriberNumber(ulong value) { return (int)((value >> SubscriberNumberShift) & SubscriberNumberMask); } private static int UnmaskExtension(ulong value) { return (int)(value & ExtensionMask); } } public enum PhoneNumberFormat { Plain, Separated }
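
    One practical upside of keeping PhoneNumber an immutable struct is that it behaves as a pure value: copies are independent, equality is by value, and there is no null reference to guard against. A short, hypothetical caller (using only the members shown above) makes that concrete:

    ```csharp
    class PhoneNumberDemo
    {
        static void Main()
        {
            // 212-555-1234 x42 -- all parts pass the range checks in the constructor above.
            var office = new PhoneNumber(212, 555, 1234, 42);

            // Value semantics: 'copy' is an independent copy, not an aliased reference.
            PhoneNumber copy = office;

            // "Changing" a number means building a new value from the old parts;
            // existing copies are never affected.
            var reception = new PhoneNumber(office.AreaCode,
                                            office.CentralOfficeCode,
                                            office.SubscriberNumber,
                                            0);

            System.Console.WriteLine(office.ToString(PhoneNumberFormat.Separated)); // 212-555-1234 42
            System.Console.WriteLine(copy == office);      // True  - equality is by the packed value
            System.Console.WriteLine(reception == office); // False - different extension
            System.Console.WriteLine(office.ToUInt64());   // the single ulong the whole number packs into
        }
    }
    ```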

    Read the article

  • "Randomly" occurring errors...

    - by ClarkeyBoy
    Hi, my website has a setup whereby, when the application starts, a module called SiteContent is "created". This runs a cleanup function which basically deletes any irrelevant data from the database, in case any has been left in there by previously run functions. The module has instances of Manager classes - namely RangeManager, CollectionManager and DesignManager. There are others, but I will just use these as an example. Each Manager class contains an array of items - items may be of type Range, Collection or Design, whichever one is relevant. Data for each range is then read into an instance of Range, Collection or Design. I know this is basically duplicating data - not very efficient - but it's my final-year project at the moment, so I can always change it to use LINQ or something similar later, when I am not pressured by the one-month deadline.

    I have a form which, on clicking the Save button, saves data by calling SiteContent.RangeManager.Create(vars) or SiteContent.RangeManager.Update(Range As Range, vars) (or the equivalent for other manager classes, whichever one happens to be relevant). These functions call a stored procedure to insert or update in the relevant table. Classes Range, Collection and Design all have attributes such as Name, Description, Display and several others. When the Create or Update function is called, the Manager loops through all the other items to check if an item with the same name already exists. The Update function ensures that it does not compare the item being updated to itself. A custom exception (ItemAlreadyExistsException) is thrown if another item with the same name is found.

    For some weird reason, if I go into a Range, Collection or Design in edit mode, change something and try to update it, it occasionally doesn't update the item. When I say occasionally I mean every 3-4 page loads, sometimes more. I see absolutely no pattern in when or why it occurs. I have a try-catch statement which catches ItemAlreadyExistsException and outputs "An item with this name already exists" when caught. Occasionally it will output this; other times it will not. Does anyone have any idea why this could happen? Maybe a mistake which someone has made and solved before? I used to have regular expressions in place that the names were compared to - I believe it was [a-zA-Z]{1, 100} (between 1 and 100 lower- or upper-case characters). For some reason the customer I am developing the site for used to get errors saying the input was not in the correct format, yet he could try the same text 5 minutes later and it would work fine. I am thinking this could well be the same problem, since both problems occur at random. Many thanks in advance. Regards, Richard Clarke

    Edit: After much time spent narrowing down the code, I have decided to wait until my brother, who has been a programmer for at least 8 years longer than I have, comes down over Easter, and get him to have a look at it. If he can't solve it then I will zip the files up and put them somewhere for people to access and have a go at. I narrowed it down literally to the minimum number of files possible, and it still occurs. It seems to be about every 10th time. Having said that, I force the manager classes to refresh every 10 page loads or 5 minutes (whichever comes sooner). I may look into this - it could be causing the problem. Basically, each Manager contains an array of an object. This array is populated using data from the database. The Update function takes an instance of the item and the new values to be set for the object. If it happens to be a page load where the array is reset (i.e. the data is loaded freshly from the database), then the object instance with the same ID won't be the same instance as the one being passed in. This explains why it throws an ItemAlreadyExistsException now and then. It all makes sense the more I think about it. If I were to pass in the ID of the object to be altered, rather than the object itself, then it should work perfectly. I will answer the question if I solve it.
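
    For reference, the ID-based update the poster arrives at would look roughly like the C# sketch below. The names (RangeManager, Range, ItemAlreadyExistsException) mirror the ones mentioned in the post, but the real project appears to be VB.NET, so treat this as the shape of the fix rather than actual project code: the manager looks the item up by ID after every refresh and excludes that ID - not a particular object reference - from the duplicate-name check.

    ```csharp
    // Illustrative sketch only: names mirror the post, details are made up.
    using System;
    using System.Collections.Generic;
    using System.Linq;

    public class Range
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Description { get; set; }
    }

    public class ItemAlreadyExistsException : Exception { }

    public class RangeManager
    {
        private readonly List<Range> items = new List<Range>();

        // Update by ID: survives the periodic refresh, because we never rely on
        // the caller holding the *same instance* that is currently in the array.
        public void Update(int id, string newName, string newDescription)
        {
            Range target = items.FirstOrDefault(r => r.Id == id);
            if (target == null)
                throw new ArgumentException("No item with ID " + id);

            // The duplicate-name check excludes the item being updated by ID,
            // not by reference equality.
            bool nameTaken = items.Any(r => r.Id != id &&
                string.Equals(r.Name, newName, StringComparison.OrdinalIgnoreCase));
            if (nameTaken)
                throw new ItemAlreadyExistsException();

            target.Name = newName;
            target.Description = newDescription;
            // ... call the stored procedure here to persist the change ...
        }
    }
    ```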

    Read the article

  • What's the best name for a non-mutating "add" method on an immutable collection?

    - by Jon Skeet
    Sorry for the waffly title - if I could come up with a concise title, I wouldn't have to ask the question. Suppose I have an immutable list type. It has an operation Foo(x) which returns a new immutable list with the specified argument as an extra element at the end. So to build up a list of strings with values "Hello", "immutable", "world" you could write: var empty = new ImmutableList<string>(); var list1 = empty.Foo("Hello"); var list2 = list1.Foo("immutable"); var list3 = list2.Foo("word"); (This is C# code, and I'm most interested in a C# suggestion if you feel the language is important. It's not fundamentally a language question, but the idioms of the language may be important.) The important thing is that the existing lists are not altered by Foo - so empty.Count would still return 0. Another (more idiomatic) way of getting to the end result would be: var list = new ImmutableList<string>().Foo("Hello"); .Foo("immutable"); .Foo("word"); My question is: what's the best name for Foo? EDIT 3: As I reveal later on, the name of the type might not actually be ImmutableList<T>, which makes the position clear. Imagine instead that it's TestSuite and that it's immutable because the whole of the framework it's a part of is immutable... (End of edit 3) Options I've come up with so far: Add: common in .NET, but implies mutation of the original list Cons: I believe this is the normal name in functional languages, but meaningless to those without experience in such languages Plus: my favourite so far, it doesn't imply mutation to me. Apparently this is also used in Haskell but with slightly different expectations (a Haskell programmer might expect it to add two lists together rather than adding a single value to the other list). With: consistent with some other immutable conventions, but doesn't have quite the same "additionness" to it IMO. And: not very descriptive. Operator overload for + : I really don't like this much; I generally think operators should only be applied to lower level types. I'm willing to be persuaded though! The criteria I'm using for choosing are: Gives the correct impression of the result of the method call (i.e. that it's the original list with an extra element) Makes it as clear as possible that it doesn't mutate the existing list Sounds reasonable when chained together as in the second example above Please ask for more details if I'm not making myself clear enough... EDIT 1: Here's my reasoning for preferring Plus to Add. Consider these two lines of code: list.Add(foo); list.Plus(foo); In my view (and this is a personal thing) the latter is clearly buggy - it's like writing "x + 5;" as a statement on its own. The first line looks like it's okay, until you remember that it's immutable. In fact, the way that the plus operator on its own doesn't mutate its operands is another reason why Plus is my favourite. Without the slight ickiness of operator overloading, it still gives the same connotations, which include (for me) not mutating the operands (or method target in this case). EDIT 2: Reasons for not liking Add. Various answers are effectively: "Go with Add. That's what DateTime does, and String has Replace methods etc which don't make the immutability obvious." I agree - there's precedence here. However, I've seen plenty of people call DateTime.Add or String.Replace and expect mutation. 
There are loads of newsgroup questions (and probably SO ones if I dig around) which are answered by "You're ignoring the return value of String.Replace; strings are immutable, a new string gets returned." Now, I should reveal a subtlety to the question - the type might not actually be an immutable list, but a different immutable type. In particular, I'm working on a benchmarking framework where you add tests to a suite, and that creates a new suite. It might be obvious that: var list = new ImmutableList<string>(); list.Add("foo"); isn't going to accomplish anything, but it becomes a lot murkier when you change it to: var suite = new TestSuite<string, int>(); suite.Add(x => x.Length); That looks like it should be okay. Whereas this, to me, makes the mistake clearer: var suite = new TestSuite<string, int>(); suite.Plus(x => x.Length); That's just begging to be: var suite = new TestSuite<string, int>().Plus(x => x.Length); Ideally, I would like my users not to have to be told that the test suite is immutable. I want them to fall into the pit of success. This may not be possible, but I'd like to try. I apologise for over-simplifying the original question by talking only about an immutable list type. Not all collections are quite as self-descriptive as ImmutableList<T> :)
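
    To make the candidates concrete, here is a minimal, deliberately naive (array-copying) C# sketch of a type whose non-mutating "add" is spelled Plus. It is purely illustrative - the real type under discussion may be a TestSuite rather than a list - but it shows the calling patterns from the question:

    ```csharp
    using System;

    // Minimal illustrative immutable list. "Plus" is just one of the candidate
    // names (Add / Cons / With / And / operator+ are the others).
    public sealed class ImmutableList<T>
    {
        private readonly T[] items;

        public ImmutableList() : this(new T[0]) { }

        private ImmutableList(T[] items) { this.items = items; }

        public int Count { get { return items.Length; } }

        public T this[int index] { get { return items[index]; } }

        // Non-mutating "add": returns a new list, leaves the receiver untouched.
        public ImmutableList<T> Plus(T item)
        {
            T[] copy = new T[items.Length + 1];
            Array.Copy(items, copy, items.Length);
            copy[items.Length] = item;
            return new ImmutableList<T>(copy);
        }
    }

    class Demo
    {
        static void Main()
        {
            var empty = new ImmutableList<string>();
            var list = empty.Plus("Hello").Plus("immutable").Plus("world");

            Console.WriteLine(empty.Count); // 0 - the original was not altered
            Console.WriteLine(list.Count);  // 3
        }
    }
    ```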

    Read the article

  • MySQL Binary Storage using BLOB VS OS File System: large files, large quantities, large problems.

    - by Quantico773
    Hi Guys, Versions I am running (basically latest of everything): PHP: 5.3.1 MySQL: 5.1.41 Apache: 2.2.14 OS: CentOS (latest) Here is the situation. I have thousands of very important documents, ranging from customer contracts to voice signatures (recordings of customer authorisation for contracts), with file types including, but not limited to jpg, gif, png, tiff, doc, docx, xls, wav, mp3, pdf, etc. All of these documents are currently stored on several servers including Windows 32 bit, CentOS and Mac, among others. Some files are also stored on employees desktop computers and laptops, and some are still hard copies stored in hundreds of boxes and filing cabinets. Now because customers or lawyers could demand evidence of contracts at any time, my company has to be able to search and locate the correct document(s) effectively, for this reason ALL of these files have to be digitised (if not already) and correlated into some sort of order for searching and accessing. As the programmer, I have created a full Customer Relations Management tool that the whole company uses. This includes Customer Profiles management, Order and job Tracking tools, Job/sale creation and management modules, etc, and at the moment any file that is needed at a customer profile level (drivers licence, credit authority, etc) or at a job/sale level (contracts, voice signatures, etc) can be uploaded to the server and sits in a parent/child hierarchy structure, just like Windows Explorer or any other typical file managment model. The structure appears as such: drivers_license |- DL_123.jpg voice_signatures |- VS_123.wav |- VS_4567.wav contracts So the files are uplaoded using PHP and Apache, and are stored in the file system of the OS. At the time of uploading, certain information about the file(s) is stored in a MySQL database. Some of the information stored is: TABLE: FileUploads FileID CustomerID (the customer id that the file belongs to, they all have this.) JobID/SaleID (the id of the job/sale associated, if any.) FileSize FileType UploadedDateTime UploadedBy FilePath (the directory path the file is stored in.) FileName (current file name of uploaded file, combination of CustomerID and JobID/SaleID if applicable.) FileDescription OriginalFileName (original name of the source file when uploaded, including extension.) So as you can see, the file is linked to the database by the File Name. When I want to provide a customers' files for download to a user all I have to do is "SELECT * FROM FileUploads WHERE CustomerID = 123 OR JobID = 2345;" and this will output all the file details I require, and with the FilePath and FileName I can provide the link for download. http... server / FilePath / FileName There are a number of problems with this method: Storing files in this "database unconcious" environment means data integrity is not kept. If a record is deleted, the file may not be deleted also, or vice versa. Files are strewn all over the place, different servers, computers, etc. The file name is the ONLY thing matching the binary to the database and customer profile and customer records. etc, etc. There are so many reasons, some of which are described here: http://www.dreamwerx.net/site/article01 . Also there is an interesting article here too: sietch.net/ViewNewsItem.aspx?NewsItemID=124 . SO, after much research I have pretty much decided I am going to store ALL of these files in the database, as a BLOB or LONGBLOB, but there are still many considerations before I do this. 
    I know that storing them in the database is a viable option; however, there are a number of methods of storing them. I also know storing them is one thing; correlating and accessing them in a manageable way is another thing entirely. The article provided at this link: dreamwerx.net/site/article01 describes a way of splitting the uploaded binary files into 64 KB chunks, storing each chunk with the FileID, and then streaming the actual binary file to the client using headers. This is a really cool idea since it alleviates pressure on the server's memory; instead of loading an entire 100 MB file into RAM and then sending it to the client, it does it 64 KB at a time. I have tried this (and updated his scripts) and it is totally successful, within a very small frame of testing. So if you agree that this method is a viable, stable and robust long-term option for storing moderately large files (1 KB to a couple of hundred megabytes), and large quantities of these files, let me know what other considerations or ideas you have. Also, I am considering taking a current "File Management" PHP script that gives an interface for managing files stored in the file system and converting it to manage files stored in the database. If there is already any software out there that does this, please let me know. I guess there are many questions I could ask, and all the information is up there ^^ so please, discuss all aspects of this and we can pass ideas back and forth and teach each other. Cheers, Quantico773
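
    As a rough illustration of the chunking approach described above - sketched here in C# against generic ADO.NET types rather than the PHP from the linked article, with a hypothetical file_chunks(file_id, seq, chunk_data) table - each upload is split into 64 KB pieces keyed by FileID plus a sequence number, and a download streams the pieces back in order, so no more than 64 KB of a file is held in memory at once:

    ```csharp
    // Illustrative only: table/column names and parameter syntax are assumptions,
    // not taken from the project. The point is the 64 KB split/reassemble pattern.
    using System;
    using System.Data.Common;
    using System.IO;

    public static class ChunkedBlobStore
    {
        private const int ChunkSize = 64 * 1024; // 64 KB per row, as in the article

        public static void Save(DbConnection conn, long fileId, Stream upload)
        {
            var buffer = new byte[ChunkSize];
            int seq = 0;
            int read;
            while ((read = upload.Read(buffer, 0, buffer.Length)) > 0)
            {
                using (DbCommand cmd = conn.CreateCommand())
                {
                    cmd.CommandText =
                        "INSERT INTO file_chunks (file_id, seq, chunk_data) " +
                        "VALUES (@fileId, @seq, @data)";
                    AddParam(cmd, "@fileId", fileId);
                    AddParam(cmd, "@seq", seq++);

                    var data = new byte[read];
                    Array.Copy(buffer, data, read);
                    AddParam(cmd, "@data", data);

                    cmd.ExecuteNonQuery();
                }
            }
        }

        // Streams the chunks back in order; at most 64 KB is in memory at a time.
        public static void Load(DbConnection conn, long fileId, Stream output)
        {
            using (DbCommand cmd = conn.CreateCommand())
            {
                cmd.CommandText =
                    "SELECT chunk_data FROM file_chunks WHERE file_id = @fileId ORDER BY seq";
                AddParam(cmd, "@fileId", fileId);
                using (DbDataReader reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        var chunk = (byte[])reader[0];
                        output.Write(chunk, 0, chunk.Length);
                    }
                }
            }
        }

        private static void AddParam(DbCommand cmd, string name, object value)
        {
            DbParameter p = cmd.CreateParameter();
            p.ParameterName = name;
            p.Value = value;
            cmd.Parameters.Add(p);
        }
    }
    ```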

    Read the article

  • Sharing Bandwidth and Prioritizing Realtime Traffic via HTB, Which Scenario Works Better?

    - by Mecki
    I would like to add some kind of traffic management to our Internet line. After reading a lot of documentation, I think HFSC is too complicated for me (I don't understand all the curves stuff, I'm afraid I will never get it right), CBQ is not recommend, and basically HTB is the way to go for most people. Our internal network has three "segments" and I'd like to share bandwidth more or less equally between those (at least in the beginning). Further I must prioritize traffic according to at least three kinds of traffic (realtime traffic, standard traffic, and bulk traffic). The bandwidth sharing is not as important as the fact that realtime traffic should always be treated as premium traffic whenever possible, but of course no other traffic class may starve either. The question is, what makes more sense and also guarantees better realtime throughput: Creating one class per segment, each having the same rate (priority doesn't matter for classes that are no leaves according to HTB developer) and each of these classes has three sub-classes (leaves) for the 3 priority levels (with different priorities and different rates). Having one class per priority level on top, each having a different rate (again priority won't matter) and each having 3 sub-classes, one per segment, whereas all 3 in the realtime class have highest prio, lowest prio in the bulk class, and so on. I'll try to make this more clear with the following ASCII art image: Case 1: root --+--> Segment A | +--> High Prio | +--> Normal Prio | +--> Low Prio | +--> Segment B | +--> High Prio | +--> Normal Prio | +--> Low Prio | +--> Segment C +--> High Prio +--> Normal Prio +--> Low Prio Case 2: root --+--> High Prio | +--> Segment A | +--> Segment B | +--> Segment C | +--> Normal Prio | +--> Segment A | +--> Segment B | +--> Segment C | +--> Low Prio +--> Segment A +--> Segment B +--> Segment C Case 1 Seems like the way most people would do it, but unless I don't read the HTB implementation details correctly, Case 2 may offer better prioritizing. The HTB manual says, that if a class has hit its rate, it may borrow from its parent and when borrowing, classes with higher priority always get bandwidth offered first. However, it also says that classes having bandwidth available on a lower tree-level are always preferred to those on a higher tree level, regardless of priority. Let's assume the following situation: Segment C is not sending any traffic. Segment A is only sending realtime traffic, as fast as it can (enough to saturate the link alone) and Segment B is only sending bulk traffic, as fast as it can (again, enough to saturate the full link alone). What will happen? Case 1: Segment A-High Prio and Segment B-Low Prio both have packets to send, since A-High Prio has the higher priority, it will always be scheduled first, till it hits its rate. Now it tries to borrow from Segment A, but since Segment A is on a higher level and Segment B-Low Prio has not yet hit its rate, this class is now served first, till it also hits the rate and wants to borrow from Segment B. Once both have hit their rates, both are on the same level again and now Segment A-High Prio is going to win again, until it hits the rate of Segment A. Now it tries to borrow from root (which has plenty of traffic spare, as Segment C is not using any of its guaranteed traffic), but again, it has to wait for Segment B-Low Prio to also reach the root level. 
Once that happens, priority is taken into account again and this time Segment A-High Prio will get all the bandwidth left over from Segment C. Case 2: High Prio-Segment A and Low Prio-Segment B both have packets to send, again High Prio-Segment A is going to win as it has the higher priority. Once it hits its rate, it tries to borrow from High Prio, which has bandwidth spare, but being on a higher level, it has to wait for Low Prio-Segment B again to also hit its rate. Once both have hit their rate and both have to borrow, High Prio-Segment A will win again until it hits the rate of the High Prio class. Once that happens, it tries to borrow from root, which has again plenty of bandwidth left (all bandwidth of Normal Prio is unused at the moment), but it has to wait again until Low Prio-Segment B hits the rate limit of the Low Prio class and also tries to borrow from root. Finally both classes try to borrow from root, priority is taken into account, and High Prio-Segment A gets all bandwidth root has left over. Both cases seem sub-optimal, as either way realtime traffic sometimes has to wait for bulk traffic, even though there is plenty of bandwidth left it could borrow. However, in case 2 it seems like the realtime traffic has to wait less than in case 1, since it only has to wait till the bulk traffic rate is hit, which is most likely less than the rate of a whole segment (and in case 1 that is the rate it has to wait for). Or am I totally wrong here? I thought about even simpler setups, using a priority qdisc. But priority queues have the big problem that they cause starvation if they are not somehow limited. Starvation is not acceptable. Of course one can put a TBF (Token Bucket Filter) into each priority class to limit the rate and thus avoid starvation, but when doing so, a single priority class cannot saturate the link on its own any longer, even if all other priority classes are empty, the TBF will prevent that from happening. And this is also sub-optimal, since why wouldn't a class get 100% of the line's bandwidth if no other class needs any of it at the moment? Any comments or ideas regarding this setup? It seems so hard to do using standard tc qdiscs. As a programmer it was such an easy task if I could simply write my own scheduler (which I'm not allowed to do).

    Read the article

  • I never really understood: what is CGI?

    - by claws
    CGI stands for Common Gateway Interface. As the name says, it is a "common" gateway interface for everything. The name makes it sound trivial and obvious, and every time I encounter the word I feel that I understand it - but frankly, I don't. I'm still confused. I am a PHP programmer and I have done a lot of web development: user (client) requests page --- webserver (with embedded PHP interpreter) --- server-side (PHP) script --- MySQL server. Now say my PHP script can fetch results from the MySQL server && a MATLAB server && some other server. So is the PHP script now the CGI, because it is the interface between the web server and all the other servers? I don't know. Sometimes people call CGI a technology, and other times they call CGI a program or some other server. What exactly is CGI?

    What's the big deal with /cgi-bin/*.cgi? What's up with that? I don't know what the cgi-bin directory on the server is for, or why the files have *.cgi extensions. And why does Perl always come into it? CGI & Perl (the language) - I also don't know what's up with these two; I keep hearing them almost exclusively in combination, "CGI & Perl". This book is another great example: CGI Programming with Perl. Why not "CGI Programming with PHP/JSP/ASP"? I have never seen such titles. And "CGI Programming in C" confuses me a lot. In C?? Seriously?? I don't know what to say; I'm just confused. "In C" changes everything: a program needs to be compiled and executed. This entirely changes my view of web programming. When do I compile? How does the program get executed (because it will be machine code, so it must run as an independent process)? How does it communicate with the web server - IPC? And does interfacing with all the servers (in my example, MATLAB & MySQL) use socket programming? I'm lost!! They say that CGI is deprecated and no longer in use. Is that so? What is its latest update?

    Once, I ran into a situation where I had to give HTTP PUT request access to a web server (Apache HTTPD). It was a long time back. As far as I remember, this is what I did: edited the configuration file of Apache HTTPD to tell the web server to pass all HTTP PUT requests to some put.php (I had to write this PHP script), then implemented put.php to handle the request (save the file to the location mentioned). People said that I wrote a CGI script. Seriously, I didn't have a clue what they were talking about. Did I really write a CGI script? I hope you understand what my confusion is (because I myself don't know where I'm confused). I request you guys to keep your answers as simple as possible; I really can't understand any fancy technical terminology, at least not in this case.

    EDIT: I found this amazing tutorial, "CGI Programming Is Simple!" - CGI Tutorial - which explains the concepts in the simplest possible way. I have only one complaint about it: to make his explanation complete, he should have shown the C code he used for generating the responses to those GET/POST requests. I've also added a link to this tutorial to Wikipedia's article: http://en.wikipedia.org/wiki/Common_Gateway_Interface
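
    For what it's worth, the mechanism itself is small: with CGI the web server sets a handful of environment variables (REQUEST_METHOD, QUERY_STRING, CONTENT_LENGTH, ...), runs your program as an ordinary process, feeds any request body to its standard input, and returns whatever the program writes to standard output (headers, a blank line, then the body) to the browser. That is why it can be done in C, Perl, PHP or anything else executable. A minimal hedged sketch, written here in C# purely for illustration:

    ```csharp
    // A minimal CGI-style program: just an executable that reads the environment
    // variables and stdin set up by the web server, and writes headers, a blank
    // line and a body to stdout. The same pattern works in C, Perl, PHP, etc.
    using System;

    class MinimalCgi
    {
        static void Main()
        {
            string method = Environment.GetEnvironmentVariable("REQUEST_METHOD") ?? "GET";
            string query  = Environment.GetEnvironmentVariable("QUERY_STRING") ?? "";

            string body = "";
            if (method == "POST")
            {
                // POST data arrives on standard input; CONTENT_LENGTH says how much.
                body = Console.In.ReadToEnd();
            }

            // Headers first, then a blank line, then the document.
            // (No HTML escaping here - this is only a sketch.)
            Console.Write("Content-Type: text/html\r\n\r\n");
            Console.Write("<html><body>");
            Console.Write("<p>Method: " + method + "</p>");
            Console.Write("<p>Query string: " + query + "</p>");
            Console.Write("<p>POST body: " + body + "</p>");
            Console.Write("</body></html>");
        }
    }
    ```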

    Read the article

  • Troubleshooting latency spikes on ESXi NFS datastores

    - by exo_cw
    I'm experiencing fsync latencies of around five seconds on NFS datastores in ESXi, triggered by certain VMs. I suspect this might be caused by VMs using NCQ/TCQ, as this does not happen with virtual IDE drives. This can be reproduced using fsync-tester (by Ted Ts'o) and ioping. For example using a Grml live system with a 8GB disk: Linux 2.6.33-grml64: root@dynip211 /mnt/sda # ./fsync-tester fsync time: 5.0391 fsync time: 5.0438 fsync time: 5.0300 fsync time: 0.0231 fsync time: 0.0243 fsync time: 5.0382 fsync time: 5.0400 [... goes on like this ...] That is 5 seconds, not milliseconds. This is even creating IO-latencies on a different VM running on the same host and datastore: root@grml /mnt/sda/ioping-0.5 # ./ioping -i 0.3 -p 20 . 4096 bytes from . (reiserfs /dev/sda): request=1 time=7.2 ms 4096 bytes from . (reiserfs /dev/sda): request=2 time=0.9 ms 4096 bytes from . (reiserfs /dev/sda): request=3 time=0.9 ms 4096 bytes from . (reiserfs /dev/sda): request=4 time=0.9 ms 4096 bytes from . (reiserfs /dev/sda): request=5 time=4809.0 ms 4096 bytes from . (reiserfs /dev/sda): request=6 time=1.0 ms 4096 bytes from . (reiserfs /dev/sda): request=7 time=1.2 ms 4096 bytes from . (reiserfs /dev/sda): request=8 time=1.1 ms 4096 bytes from . (reiserfs /dev/sda): request=9 time=1.3 ms 4096 bytes from . (reiserfs /dev/sda): request=10 time=1.2 ms 4096 bytes from . (reiserfs /dev/sda): request=11 time=1.0 ms 4096 bytes from . (reiserfs /dev/sda): request=12 time=4950.0 ms When I move the first VM to local storage it looks perfectly normal: root@dynip211 /mnt/sda # ./fsync-tester fsync time: 0.0191 fsync time: 0.0201 fsync time: 0.0203 fsync time: 0.0206 fsync time: 0.0192 fsync time: 0.0231 fsync time: 0.0201 [... tried that for one hour: no spike ...] Things I've tried that made no difference: Tested several ESXi Builds: 381591, 348481, 260247 Tested on different hardware, different Intel and AMD boxes Tested with different NFS servers, all show the same behavior: OpenIndiana b147 (ZFS sync always or disabled: no difference) OpenIndiana b148 (ZFS sync always or disabled: no difference) Linux 2.6.32 (sync or async: no difference) It makes no difference if the NFS server is on the same machine (as a virtual storage appliance) or on a different host Guest OS tested, showing problems: Windows 7 64 Bit (using CrystalDiskMark, latency spikes happen mostly during preparing phase) Linux 2.6.32 (fsync-tester + ioping) Linux 2.6.38 (fsync-tester + ioping) I could not reproduce this problem on Linux 2.6.18 VMs. Another workaround is to use virtual IDE disks (vs SCSI/SAS), but that is limiting performance and the number of drives per VM. Update 2011-06-30: The latency spikes seem to happen more often if the application writes in multiple small blocks before fsync. For example fsync-tester does this (strace output): pwrite(3, "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"..., 1048576, 0) = 1048576 fsync(3) = 0 ioping does this while preparing the file: [lots of pwrites] pwrite(3, "********************************"..., 4096, 1036288) = 4096 pwrite(3, "********************************"..., 4096, 1040384) = 4096 pwrite(3, "********************************"..., 4096, 1044480) = 4096 fsync(3) = 0 The setup phase of ioping almost always hangs, while fsync-tester sometimes works fine. Is someone capable of updating fsync-tester to write multiple small blocks? My C skills suck ;) Update 2011-07-02: This problem does not occur with iSCSI. I tried this with the OpenIndiana COMSTAR iSCSI server. 
But iSCSI does not give you easy access to the VMDK files so you can move them between hosts with snapshots and rsync. Update 2011-07-06: This is part of a wireshark capture, captured by a third VM on the same vSwitch. This all happens on the same host, no physical network involved. I've started ioping around time 20. There were no packets sent until the five second delay was over: No. Time Source Destination Protocol Info 1082 16.164096 192.168.250.10 192.168.250.20 NFS V3 WRITE Call (Reply In 1085), FH:0x3eb56466 Offset:0 Len:84 FILE_SYNC 1083 16.164112 192.168.250.10 192.168.250.20 NFS V3 WRITE Call (Reply In 1086), FH:0x3eb56f66 Offset:0 Len:84 FILE_SYNC 1084 16.166060 192.168.250.20 192.168.250.10 TCP nfs > iclcnet-locate [ACK] Seq=445 Ack=1057 Win=32806 Len=0 TSV=432016 TSER=769110 1085 16.167678 192.168.250.20 192.168.250.10 NFS V3 WRITE Reply (Call In 1082) Len:84 FILE_SYNC 1086 16.168280 192.168.250.20 192.168.250.10 NFS V3 WRITE Reply (Call In 1083) Len:84 FILE_SYNC 1087 16.168417 192.168.250.10 192.168.250.20 TCP iclcnet-locate > nfs [ACK] Seq=1057 Ack=773 Win=4163 Len=0 TSV=769110 TSER=432016 1088 23.163028 192.168.250.10 192.168.250.20 NFS V3 GETATTR Call (Reply In 1089), FH:0x0bb04963 1089 23.164541 192.168.250.20 192.168.250.10 NFS V3 GETATTR Reply (Call In 1088) Directory mode:0777 uid:0 gid:0 1090 23.274252 192.168.250.10 192.168.250.20 TCP iclcnet-locate > nfs [ACK] Seq=1185 Ack=889 Win=4163 Len=0 TSV=769821 TSER=432716 1091 24.924188 192.168.250.10 192.168.250.20 RPC Continuation 1092 24.924210 192.168.250.10 192.168.250.20 RPC Continuation 1093 24.924216 192.168.250.10 192.168.250.20 RPC Continuation 1094 24.924225 192.168.250.10 192.168.250.20 RPC Continuation 1095 24.924555 192.168.250.20 192.168.250.10 TCP nfs > iclcnet_svinfo [ACK] Seq=6893 Ack=1118613 Win=32625 Len=0 TSV=432892 TSER=769986 1096 24.924626 192.168.250.10 192.168.250.20 RPC Continuation 1097 24.924635 192.168.250.10 192.168.250.20 RPC Continuation 1098 24.924643 192.168.250.10 192.168.250.20 RPC Continuation 1099 24.924649 192.168.250.10 192.168.250.20 RPC Continuation 1100 24.924653 192.168.250.10 192.168.250.20 RPC Continuation 2nd Update 2011-07-06: There seems to be some influence from TCP window sizes. I was not able to reproduce this problem using FreeNAS (based on FreeBSD) as a NFS server. The wireshark captures showed TCP window updates to 29127 bytes in regular intervals. I did not see them with OpenIndiana, which uses larger window sizes by default. I can no longer reproduce this problem if I set the following options in OpenIndiana and restart the NFS server: ndd -set /dev/tcp tcp_recv_hiwat 8192 # default is 128000 ndd -set /dev/tcp tcp_max_buf 1048575 # default is 1048576 But this kills performance: Writing from /dev/zero to a file with dd_rescue goes from 170MB/s to 80MB/s. Update 2011-07-07: I've uploaded this tcpdump capture (can be analyzed with wireshark). In this case 192.168.250.2 is the NFS server (OpenIndiana b148) and 192.168.250.10 is the ESXi host. Things I've tested during this capture: Started "ioping -w 5 -i 0.2 ." at time 30, 5 second hang in setup, completed at time 40. Started "ioping -w 5 -i 0.2 ." at time 60, 5 second hang in setup, completed at time 70. 
Started "fsync-tester" at time 90, with the following output, stopped at time 120: fsync time: 0.0248 fsync time: 5.0197 fsync time: 5.0287 fsync time: 5.0242 fsync time: 5.0225 fsync time: 0.0209 2nd Update 2011-07-07: Tested another NFS server VM, this time NexentaStor 3.0.5 community edition: Shows the same problems. Update 2011-07-31: I can also reproduce this problem on the new ESXi build 4.1.0.433742.

    Read the article

  • PHP Email Form Sending Random Text

    - by Doug
    Hi, I did a webpage for a client that involved a series of text boxes asking for specific information such as a person's name, e-mail address, company, etc. Along with a button that would e-mail the information to my client. Whenever I tested the button it seemed to work perfectly, I uploaded the page and thought I was done. But, the other day my client got this email from the site: Name: rfhopzdgmx rfhopzdgmx Email: [email protected] Company: zUDXatAfoDvQrdH Mailing Address: AaSsXklqpHIsoCNcei gXsimMPRBYZqq vGLvZraZNdpOAV, ChsmuibE PoKzaSCubXPRI Home Phone: CIJbIfjMfjIaTqAlD Work Phone: JFLZBOvru Cell Phone: XlFJTTFGiTTiiFQfy Fax: UEJMOVZodWPkKxew Comments: sPvSCE hgetwoguderu,* [url=http://atyktjlxcznl.com/]atyktjlxcznl[/url], [link=http://nudvfcehwpyg.com/]nudvfcehwpyg[/link], http://lvvwkbzbhnzp.com/ Note: The * line contained HTML link code, I just don't know how to get this site to show it. Here is the PHP code in the site for the e-mail button. <?php //This Sends A Formatted Text Email Using The Text Boxes if ($_POST['submit']){ //This Gets The Form Data $fname = $_POST['fName']; $lname = $_POST['lName']; $email = $_POST['email']; $company = $_POST['co']; $address1 = $_POST['address1']; $address2 = $_POST['address2']; $city = $_POST['city']; $state = $_POST['state']; $zip = $_POST['zip']; $homep = $_POST['homeP']; $workp = $_POST['workP']; $cellp = $_POST['cellP']; $fax = $_POST['fax']; $comments = $_POST['txaOutputField']; //echo "<script language = 'javascript'>alert('YAY');</script>"; if ($fname && $lname && $email && $comments){ //Check If Required Fields Are Filled //This Sets The SMTP Configuration In php.ini ini_set("SMTP", "smtp.2ndsourcewire.com"); //This Replaces Any Blank Fields With 'None's if ($company == ""){ $company = "None"; } if ($address1 == ""){ $address1 = "None"; } if ($city == ""){ $city = "None"; } if ($state == ""){ $state = "None"; } if ($zip == ""){ $zip = "None"; } if ($homep == ""){ $homep = "None"; } if ($workp == ""){ $workp = "None"; } if ($cellp == ""){ $cellp = "None"; } if ($fax == ""){ $fax = "None"; } //This Creates The Variables Necessary For The Email $to = "CLIENT EMAIL WHICH I'M CENSORING"; $subject = "Email from 2ndSourceWire.com"; $from = "From: [email protected]"; $secondEmail = "MY EMAIL WHICH I'M ALSO CENSORING"; if ($address2 == ""){ $body = "Name: $fname $lname\n". "Email: $email\n". "Company: $company\n\n". "Mailing Address:\n". "$address1\n". "$city, $state $zip\n\n". "Home Phone: $homep\n". "Work Phone: $workp\n". "Cell Phone: $cellp\n". "Fax: $fax\n\n". "Comments:\n". "$comments"; } else { $body = "Name: $fname $lname\n". "Email: $email\n". "Company: $company\n\n". "Mailing Address:\n". "$address1\n". "$address2\n". "$city, $state $zip\n\n". "Home Phone: $homep\n". "Work Phone: $workp\n". "Cell Phone: $cellp\n". "Fax: $fax\n\n". "Comments:\n". "$comments"; } //This Sends The Email mail($to, $subject, $body, $from); mail($secondEmail, $subject, $body, $from); echo "<script language = 'javascript'>alert('The email was sent successfully.');</script>"; } else { //The Required Fields Are Not Filled echo "<script language = 'javascript'>alert('Please fill your first name, last name, email address, and your comment or question.');</script>"; } } ? I'm a little dumbfounded on how this happened, the client mentioned a couple e-mails of this, so I don't think it is a random glitch. Also, the e-mail address was formatted like an e-mail address, so someone or some program was interpreting the labels next to each text box. 
    I also noticed that the first and last names entered are the same word, even though they were in different text boxes. I'm thinking it's some spam program, but wouldn't they try to advertise something and make money, rather than just spouting out random text? Also, the comments section makes no sense to me at all: the links go nowhere, yet they're all perfectly formatted. A random person just screwing around wouldn't know those tags, a programmer doing it wouldn't bother with it, and neither would a program. I have no idea what caused this or how to fix it - I'm drawing a blank here. Anyone have any ideas?

    Read the article

  • How to link with the static MySQL C library in Visual Studio 2008?

    - by Jean-Denis Muys
    Hi, My project is running fine, but its requirement for some DLLs means it cannot be simply dragged and dropped by the end user. The DLLs are not loaded when put side by side with my executable, because my executable is not an application, and its location is not in the few locations where Windows looks for DLL. I already asked a question about how to make their loading happen. None of the suggestions worked (see the question at http://stackoverflow.com/questions/2637499/how-can-a-win32-app-plugin-load-its-dll-in-its-own-directory) So I am now exploring another way: get rid of the DLLs altogether, and link with static versions of them. This is failing for the last of those DLLs. So I am at this point where all but one of the libraries are statically linked, and everything is fine. The last library is the standard C library for mySQL, aka Connector/C. The problem I have may or may not be related with that origin. Whenever I switched to the static library in the linker additional dependency, I get the following errors (log at the end): 1- about 40 duplicate symbols (e.g. _toupper) mutually between LIBCMT.lib and MSVCRT.lib. Interestingly, I can't control the inclusion of these two libraries: they are from Visual Studio and automatically included. So why are these symbol duplicate when I include mySQL's static lib, but not its DLL? Searching C:\Program Files\Microsoft Visual Studio 9.0\VC\lib\MSVCRT.lib: Searching C:\Program Files\Microsoft Visual Studio 9.0\VC\lib\OLDNAMES.lib: Searching C:\Program Files\Microsoft Visual Studio 9.0\VC\lib\msvcprt.lib: Searching C:\Program Files\Microsoft Visual Studio 9.0\VC\lib\LIBCMT.lib: LIBCMT.lib(setlocal.obj) : error LNK2005: _setlocale already defined in MSVCRT.lib(MSVCR90.dll) Searching C:\Program Files\Microsoft Visual Studio 9.0\VC\lib\MSVCRT.lib: MSVCRT.lib(MSVCR90.dll) : error LNK2005: _toupper already defined in LIBCMT.lib(toupper.obj) 2- two warnings that MSVCRT and LIBCMT conflicts with use of other libs, with a suggestion to use /NODEFAULTLIB:library:. I don't understand that suggestion: what am I supposed to do and how? LINK : warning LNK4098: defaultlib 'MSVCRT' conflicts with use of other libs; use /NODEFAULTLIB:library LINK : warning LNK4098: defaultlib 'LIBCMT' conflicts with use of other libs; use /NODEFAULTLIB:library 3- an external symbol is undefined: _main. So does that mean that the static mySQL lib (but not the DLL) references a _main symbol? For the sake of it, I tried to define an empty function named _main() in my code, with no difference. LIBCMT.lib(crt0.obj) : error LNK2001: unresolved external symbol _main As mentioned in my first question, my code is a port of a fully working Mac version of the code. Its a plugin for a host application that I don't control. The port currently works, albeit with installation issues due to that lone remaining DLL. As a Mac programmer I am rather disoriented with Visual Studio and Windows which I find confusing, poorly designed and documented, with error messages that are very difficult to grasp and act upon. So I will be very grateful for any help. 
Here is the full set of errors: 1 Searching C:\Program Files\Microsoft Visual Studio 9.0\VC\lib\MSVCRT.lib: 1 Searching C:\Program Files\Microsoft Visual Studio 9.0\VC\lib\OLDNAMES.lib: 1 Searching C:\Program Files\Microsoft Visual Studio 9.0\VC\lib\msvcprt.lib: 1 Searching C:\Program Files\Microsoft Visual Studio 9.0\VC\lib\LIBCMT.lib: 1LIBCMT.lib(setlocal.obj) : error LNK2005: _setlocale already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(tidtable.obj) : error LNK2005: __encode_pointer already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(tidtable.obj) : error LNK2005: __encoded_null already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(tidtable.obj) : error LNK2005: __decode_pointer already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(tolower.obj) : error LNK2005: _tolower already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(invarg.obj) : error LNK2005: __set_invalid_parameter_handler already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(invarg.obj) : error LNK2005: __invalid_parameter_noinfo already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(crt0dat.obj) : error LNK2005: __amsg_exit already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(crt0dat.obj) : error LNK2005: __initterm_e already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(crt0dat.obj) : error LNK2005: _exit already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(crtheap.obj) : error LNK2005: __malloc_crt already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(dosmap.obj) : error LNK2005: __errno already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(file.obj) : error LNK2005: __iob_func already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(mlock.obj) : error LNK2005: __unlock already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(mlock.obj) : error LNK2005: _lock already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(winxfltr.obj) : error LNK2005: __CppXcptFilter already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(crt0init.obj) : error LNK2005: ___xi_a already defined in MSVCRT.lib(cinitexe.obj) 1LIBCMT.lib(crt0init.obj) : error LNK2005: ___xi_z already defined in MSVCRT.lib(cinitexe.obj) 1LIBCMT.lib(crt0init.obj) : error LNK2005: ___xc_a already defined in MSVCRT.lib(cinitexe.obj) 1LIBCMT.lib(crt0init.obj) : error LNK2005: ___xc_z already defined in MSVCRT.lib(cinitexe.obj) 1LIBCMT.lib(hooks.obj) : error LNK2005: "void __cdecl terminate(void)" (?terminate@@YAXXZ) already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(winsig.obj) : error LNK2005: _signal already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(fflush.obj) : error LNK2005: _fflush already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(tzset.obj) : error LNK2005: __tzset already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(_ctype.obj) : error LNK2005: _isspace already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(_ctype.obj) : error LNK2005: _iscntrl already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(getenv.obj) : error LNK2005: _getenv already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(strnicmp.obj) : error LNK2005: __strnicmp already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(osfinfo.obj) : error LNK2005: __get_osfhandle already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(osfinfo.obj) : error LNK2005: __open_osfhandle already defined in MSVCRT.lib(MSVCR90.dll) [...] 
1 Searching C:\Program Files\Microsoft Visual Studio 9.0\VC\lib\MSVCRT.lib: 1MSVCRT.lib(MSVCR90.dll) : error LNK2005: _toupper already defined in LIBCMT.lib(toupper.obj) 1MSVCRT.lib(MSVCR90.dll) : error LNK2005: _isalpha already defined in LIBCMT.lib(_ctype.obj) 1MSVCRT.lib(MSVCR90.dll) : error LNK2005: _wcschr already defined in LIBCMT.lib(wcschr.obj) 1MSVCRT.lib(MSVCR90.dll) : error LNK2005: _isdigit already defined in LIBCMT.lib(_ctype.obj) 1MSVCRT.lib(MSVCR90.dll) : error LNK2005: _islower already defined in LIBCMT.lib(ctype.obj) 1MSVCRT.lib(MSVCR90.dll) : error LNK2005: __doserrno already defined in LIBCMT.lib(dosmap.obj) 1MSVCRT.lib(MSVCR90.dll) : error LNK2005: _strftime already defined in LIBCMT.lib(strftime.obj) 1MSVCRT.lib(MSVCR90.dll) : error LNK2005: _isupper already defined in LIBCMT.lib(_ctype.obj) [...] 1Finished searching libraries 1 Creating library z:\PCdev\Test\RK_Demo_2004\plugins\Test.bundle\contents\windows\Test.lib and object z:\PCdev\Test\RK_Demo_2004\plugins\Test.bundle\contents\windows\Test.exp 1Searching libraries [...] 1Finished searching libraries 1LINK : warning LNK4098: defaultlib 'MSVCRT' conflicts with use of other libs; use /NODEFAULTLIB:library 1LINK : warning LNK4098: defaultlib 'LIBCMT' conflicts with use of other libs; use /NODEFAULTLIB:library 1LIBCMT.lib(crt0.obj) : error LNK2001: unresolved external symbol _main

    Read the article

  • Printing an array in a method, from a different class?

    - by O.Lodhi
    Hello All, I'm a fairly inexperienced programmer, and i'm currently working on a Console Application project. It's basically a little 'mathematics game'; the application generates two random numbers, that have either been added, subtracted, multiplied or divided against each other randomly. The answer is shown on screen and the user has to pick from the menu which is the right mathematical operator, once the correct answer is picked the application then displays on screen how long it took for the user in milliseconds to input the correct answer. Now I want to save the times of the players in an array that can be called up later with all the scores. I need to include a method in this programme and I figured a method to save the times into an array would be suitable. I seem to have stumbled across a little problem though. I'm not quite sure what's wrong: using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace Mathgame { class Program { } class arrayclass { public static void saveInArray(int duration) { int[] TopTenScores = {000,1000,2000,3000,4000,5000,6000,7000,8000,9000}; if (duration < 1000) { duration = TopTenScores[000]; } else if ((duration >= 1000) && (duration <= 1999)) { duration = TopTenScores[1000]; } else if ((duration >= 2000) && (duration <= 2999)) { duration = TopTenScores[2000]; } else if ((duration >= 3000) && (duration <= 3999)) { duration = TopTenScores[3000]; } else if ((duration >= 4000) && (duration <= 4999)) { duration = TopTenScores[4000]; } else if ((duration >= 5000) && (duration <= 5999)) { duration = TopTenScores[5000]; } else if ((duration >= 6000) && (duration <= 6999)) { duration = TopTenScores[6000]; } else if ((duration >= 7000) && (duration <= 7999)) { duration = TopTenScores[7000]; } else if ((duration >= 8000) && (duration <= 8999)) { duration = TopTenScores[8000]; } else if ((duration >= 9000) && (duration <= 9999)) { duration = TopTenScores[9000]; } Console.WriteLine(TopTenScores); } static void Main(string[] args) { int intInput, num1, num2, incorrect, array1; float answer; string input; System.Random randNum = new System.Random(); Console.WriteLine("Welcome to the Maths game!"); Console.WriteLine("(Apologies for the glitchiness!)"); Console.WriteLine(); Console.WriteLine("Please choose from the following options:"); Console.WriteLine(); retry: Console.WriteLine("1 - Test your Maths against the clock!"); Console.WriteLine("2 - Exit the application."); Console.WriteLine("3 - Top scores"); Console.WriteLine(); input = Console.ReadLine(); intInput = int.Parse(input); if (intInput == 1) { goto start; } else if (intInput == 2) { goto fin; } else if (intInput == 3) { array1 = array1.saveInArray; goto retry; } Now, in the last 'else if' statement in the code, you can see my variable array1 trying to call the method, but no matter what I do I keep getting errors. This is the only error I have at the moment, but I have a feeling soon as I resolve that error, another will come up. For now i'm just determined to get past this error: 'int' does not contain a definition for 'saveInArray' and no extension method 'saveInArray' accepting a first argument of type 'int' could be found (are you missing a using directive or an assembly reference?). Any help would be kindly appreciated, apologies in advanced for my ugly written code! And thank you to any help that I receive! Regards, Omar.
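
    The compile error comes from array1 = array1.saveInArray; - array1 is declared as an int, while saveInArray is a static method of arrayclass, so it has to be called through the class name (and, as written, it returns nothing to assign). One hedged way to restructure it is sketched below; the names and details are illustrative, not part of the original assignment:

    ```csharp
    using System;

    // Illustrative restructuring: a static score board that Main can call
    // directly. The original saveInArray overwrote its parameter instead of
    // storing the score; here the duration is written into the next free slot.
    static class ScoreBoard
    {
        private static readonly int[] topTenScores = new int[10];
        private static int count = 0;

        public static void Save(int durationMs)
        {
            if (count < topTenScores.Length)
            {
                topTenScores[count++] = durationMs;
            }
        }

        public static void Print()
        {
            // Console.WriteLine(array) would only print the type name
            // ("System.Int32[]"), so print the elements one by one.
            for (int i = 0; i < count; i++)
            {
                Console.WriteLine("Score {0}: {1} ms", i + 1, topTenScores[i]);
            }
        }
    }

    class Demo
    {
        static void Main()
        {
            ScoreBoard.Save(1234);   // called through the class name, not a variable
            ScoreBoard.Save(842);
            ScoreBoard.Print();
        }
    }
    ```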

    Read the article

  • Does there exist an "idea checkout system" on the Internet?

    - by TimeSpace Traveller
    Greetings. I would like to ask the following question: is there anything on the Internet like an "idea checkout" system? Situation: I'm a software developer. When my current job started two years ago, my mentor at the time pointed me to the open-source world. I have put only a little time into looking at some open-source projects, let alone contributing to any. However, I would like to start developing something outside of work. There is just one little problem: I don't know what to develop! It is not a matter of technical knowledge; the problem is that I am not a creative person. I am very good at analytical thinking and debugging. When my work partners ask me to develop a solution, I can get it done without a problem. Outside of work, however, I have no idea what to build. When I look around the Internet, it seems that so many people are already working on so much interesting stuff that I wonder what I could develop without reinventing something that already exists. That got me wondering: is there anything on the Internet like an "idea checkout" system or community? For example, people would submit software ideas and the system would keep them as an "inventory"; later, a potential developer would "check out" an idea, just as one would check out a book from the library. Then the developer would check the "idea" back in with some kind of work in progress or finished software, which would become an open-source project. I have just noticed that here at stackoverflow there is a "Project-Ideas" tag, so perhaps that can give me some ideas on what to develop; still, what I am wondering about is a system where some people provide ideas and other people check those ideas out to develop and implement them into actual solutions. Does such a system or community exist anywhere on the Internet? Any input is welcome! Thank you very much. Update: Thank you to everyone who has answered my question. Certainly, "getting ideas" is part of my problem; as a software developer, however, I am concerned with more than just "getting ideas". What concerns me more, as I have commented, is the existence of an idea-exchanging ecosystem capable of initiating open-source projects. I'll put an example here. Say person A has an idea for a music search program, one that searches not by the attributes of the music (composer, singer, publisher, lyrics, etc.) but by melody: he wants a program (and a database) that can find a piece of music from its melody. Very often, people remember a piece of music only by its melody, not even by its name (e.g. the music he wants was heard only once in a bookstore, but the melody is stuck in his head!). To find that piece he would normally have to search blindly and spend a long time doing so; a search by melody would let person A find the piece much more quickly. However, he does not want to work on it personally, not only because of the complexity (he is not a musician and/or programmer, and knows almost nothing about music systems on computers, search algorithms, etc.) but also because of legal issues (RIAA??), so he would just like to park the idea somewhere and let other people work on it. Now, a developer (person B) may be at the same stage I am right now, wishing to find something to develop but not having an idea. With the idea-exchanging ecosystem, person B would search, somehow discover person A's music search idea, and feel interested enough to work on it.
So he "checks out" the idea, starts working on it (at least a skeleton), and checks back in with his progress. An open-source project starts from there, fulfilling person A's wish and person B's desire to program. The above is just an example (programs like that may well already exist on the Internet), but it illustrates the kind of idea exchange system I have in mind. My main concern is an idea-exchanging ecosystem, not at a personal, unorganized level, but as a semi-organized protocol specifically for software developers, with actual projects coming out as the fruit. It is not about "projects", but about "ideas and the products of ideas". Hopefully that clears up some of the original intent of this question. Any input is welcome; in fact, I would like to hear how as many people as possible think about this. Thank you very much!

    Read the article

  • Making swap faster, easier to use and exception-safe

    - by FredOverflow
    I could not sleep last night and started thinking about std::swap. Here is the familiar C++98 version: template <typename T> void swap(T& a, T& b) { T c(a); a = b; b = c; } If a user-defined class Foo uses external ressources, this is inefficient. The common idiom is to provide a method void Foo::swap(Foo& other) and a specialization of std::swap<Foo>. Note that this does not work with class templates since you cannot partially specialize a function template, and overloading names in the std namespace is illegal. The solution is to write a template function in one's own namespace and rely on argument dependent lookup to find it. This depends critically on the client to follow the "using std::swap idiom" instead of calling std::swap directly. Very brittle. In C++0x, if Foo has a user-defined move constructor and a move assignment operator, providing a custom swap method and a std::swap<Foo> specialization has little to no performance benefit, because the C++0x version of std::swap uses efficient moves instead of copies: #include <utility> template <typename T> void swap(T& a, T& b) { T c(std::move(a)); a = std::move(b); b = std::move(c); } Not having to fiddle with swap anymore already takes a lot of burden away from the programmer. Current compilers do not generate move constructors and move assignment operators automatically yet, but as far as I know, this will change. The only problem left then is exception-safety, because in general, move operations are allowed to throw, and this opens up a whole can of worms. The question "What exactly is the state of a moved-from object?" complicates things further. Then I was thinking, what exactly are the semantics of std::swap in C++0x if everything goes fine? What is the state of the objects before and after the swap? Typically, swapping via move operations does not touch external resources, only the "flat" object representations themselves. So why not simply write a swap template that does exactly that: swap the object representations? #include <cstring> template <typename T> void swap(T& a, T& b) { unsigned char c[sizeof(T)]; memcpy( c, &a, sizeof(T)); memcpy(&a, &b, sizeof(T)); memcpy(&b, c, sizeof(T)); } This is as efficient as it gets: it simply blasts through raw memory. It does not require any intervention from the user: no special swap methods or move operations have to be defined. This means that it even works in C++98 (which does not have rvalue references, mind you). But even more importantly, we can now forget about the exception-safety issues, because memcpy never throws. I can see two potential problems with this approach: First, not all objects are meant to be swapped. If a class designer hides the copy constructor or the copy assignment operator, trying to swap objects of the class should fail at compile-time. We can simply introduce some dead code that checks whether copying and assignment are legal on the type: template <typename T> void swap(T& a, T& b) { if (false) // dead code, never executed { T c(a); // copy-constructible? a = b; // assignable? } unsigned char c[sizeof(T)]; std::memcpy( c, &a, sizeof(T)); std::memcpy(&a, &b, sizeof(T)); std::memcpy(&b, c, sizeof(T)); } Any decent compiler can trivially get rid of the dead code. (There are probably better ways to check the "swap conformance", but that is not the point. What matters is that it's possible). Second, some types might perform "unusual" actions in the copy constructor and copy assignment operator. For example, they might notify observers of their change. 
I deem this a minor issue, because such kinds of objects probably should not have provided copy operations in the first place. Please let me know what you think of this approach to swapping. Would it work in practice? Would you use it? Can you identify library types where this would break? Do you see additional problems? Discuss!
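For readers who want to experiment with the approach described above, here is a minimal, self-contained sketch of the memcpy-based swap with the "swap conformance" check expressed as compile-time traits instead of dead code. The bitwise_swap name and the use of <type_traits> (so a C++11-capable compiler is assumed) are editorial additions, not part of the original question; the question's open issues about types for which raw byte swapping might misbehave still apply.

```cpp
#include <cstring>
#include <type_traits>

// Sketch of the bitwise swap from the question, with the "is this type
// swappable?" check done via type traits rather than an if (false) block.
template <typename T>
void bitwise_swap(T& a, T& b)
{
    static_assert(std::is_copy_constructible<T>::value,
                  "T must be copy-constructible to be swapped");
    static_assert(std::is_copy_assignable<T>::value,
                  "T must be copy-assignable to be swapped");
    // A stricter requirement would be std::is_trivially_copyable<T>::value,
    // which is what memcpy formally needs.

    unsigned char c[sizeof(T)];
    std::memcpy( c, &a, sizeof(T));
    std::memcpy(&a, &b, sizeof(T));
    std::memcpy(&b,  c, sizeof(T));
}

// Minimal usage example with a plain aggregate type.
struct Point { int x, y; };

int main()
{
    Point p = {1, 2}, q = {3, 4};
    bitwise_swap(p, q);   // p is now {3, 4} and q is {1, 2}
    return 0;
}
```

Whether the trait-based check is actually better than the dead-code trick is exactly the kind of point the author invites readers to discuss.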

    Read the article

  • Arraylist is null; I cannot access books in the arraylist

    - by user3701380
    I am a beginner-intermediate java programmer and I am getting a null pointer exception from my arraylist. I am writing a bookstore program for APCS and when i add the book, it is supposed to add to the arraylist in the inventory class. But when i call a method to search for a book (e.g. by title), it shows that there isn't anything in the arraylist. //Here is my inventory class -- it has all methods for adding the book or searching for one The searching methods are in getBookByTitle, getBookByAuthor, and getBookByISBN and the method for adding a book is addBook package webbazonab; //Inventory Class //Bharath Senthil //Ansh Sikka import java.util.ArrayList; public class Inventory{ private ArrayList<Book> allBooks = new ArrayList<Book>(); private String bookTitles; private String bookAuthors; private String bookPrices; private String bookCopies; private String ISBNs; public Inventory() { } //@param double price, int copies, String bookTitle, String Author, String isbnNumber public void addBooks(Book addedBook){ allBooks.add(addedBook); } public boolean isAvailable(){ for(Book myBook : allBooks){ if(myBook.copiesLeft() == 0) return false; } return true; } public String populateTitle(){ for (Book titleBooks : allBooks){ bookTitles = titleBooks.getTitle() + "\n"; return bookTitles; } return bookTitles; } public String populateAuthor(){ for(Book authorBooks : allBooks){ bookAuthors = authorBooks.getAuthor() + "\n"; return bookAuthors; } return bookAuthors; } public String populatePrice(){ for (Book pricedBooks : allBooks){ bookPrices = String.valueOf(pricedBooks.getPrice()) + "\n"; } return "$" + bookPrices; } /** * * @return */ public String populateCopies(){ for (Book amtBooks : allBooks){ bookCopies = String.valueOf(amtBooks.copiesLeft()) + "\n"; return bookCopies; } return bookCopies; } public String populateISBN(){ for (Book isbnNums : allBooks){ ISBNs = isbnNums.getIsbn() + "\n"; return ISBNs; } return ISBNs; } @SuppressWarnings("empty-statement") public Book getBookByTitle(String titleSearch) { for(Book titleBook : allBooks) { if (titleBook.getTitle().equals(titleSearch)) { return titleBook; } } return null; } public Book getBookByISBN(String isbnSearch){ for(Book isbnBookSearches : allBooks){ if(isbnBookSearches.getIsbn().equals(isbnSearch)){ return isbnBookSearches; } } return null; } public Book getBookByAuthor(String authorSearch){ for(Book authorBookSearches : allBooks){ if(authorBookSearches.getAuthor().equals(authorSearch)){ return authorBookSearches; } } return null; } public void sort(){ for(int i = 0; i < allBooks.size(); i++) { for(int k = 0; k < allBooks.size(); k++) { if(((Book) allBooks.get(i)).getIsbn().compareTo(((Book) allBooks.get(k)).getIsbn()) < 1) { Book temp = (Book) allBooks.get(k); allBooks.set(k, allBooks.get(i)); allBooks.set(i, temp); } else if(((Book) allBooks.get(i)).getIsbn().compareTo(((Book) allBooks.get(k)).getIsbn()) > 1) { Book temp = (Book) allBooks.get(i); allBooks.set(i, allBooks.get(k)); allBooks.set(k, temp); } } } } public ArrayList<Book> getBooks(){ return allBooks; } } //The exception occurs when i call the method here (in another class): Inventory lib = new Inventory(); jTextField12.setText(lib.getBookByAuthor(authorSearch).getTitle()); Here is my book class if you need it package webbazonab; //Webbazon AB //Project By: Ansh Sikka and Bharath Senthil public class Book { private double myPrice; private String myTitle; private String bookAuthor; private String isbn; private int myCopies; public Book(double price, int copies, String bookTitle, 
String Author, String isbnNumber) { myPrice = price; myCopies = copies; myTitle = bookTitle; bookAuthor = Author; isbn = isbnNumber; } public double getPrice() { return myPrice; } public String getIsbn() { return isbn; } public String getTitle() { return myTitle; } public String getAuthor() { return bookAuthor; } public int copiesLeft(){ return myCopies; } public String notFound(){ return "The book you searched for could not be found!"; } public String toString() { return "Title: " + getTitle() + "\nAuthor: " + getAuthor() + "\nNumber of Available Books: " + copiesLeft() + "\nPrice: $" + getPrice(); } } Thanks!

    Read the article

  • Ruby number_to_currency displays a totally wrong number!

    - by SueP
    Greetings! I’m an old Delphi programmer making the leap to the Mac, Ruby, Rails, and web programming in general. I’m signed up for the Advanced Rails workshop at the end of the month. In the meantime, I’ve been working on porting a mission-critical (of course) app from Delphi to Rails. It feels like I’ve spent most of the past year with my head buried in a book or podcast. Right now, I’ve hit a major issue and I’m tearing my hair out. I literally don’t know where to go with this, I desperately don’t want to deploy with this bug, and I’m feeling a bit frantic. (The company database is currently running on an ancient XP box that’s looking rustier by the day.) So, I set up a test database that shows the problem. I’m running: OS X 10.6.3, Rails 2.3.5, ruby 1.8.7 (2009-06-08 patchlevel 173) [universal-darwin10.0], MySQL 5.1.38-log via socket, MySQL Client Version 5.1.8 ActiveRecord::Schema.define(:version => 20100406222528) do create_table "money", :force => true do |t| t.decimal "amount_due", :precision => 10, :scale => 2, :default => 0.0 t.decimal "balance", :precision => 10, :scale => 2, :default => 0.0 t.text "memofield" t.datetime "created_at" t.datetime "updated_at" end The index view is right out of the generator, slightly modified to add the formatting that's breaking on me. Listing money <table> <tr> <th>Amount</th> <th>Amount to_s</th> <th>Balance to $</th> <th>Balance with_precision</th> <th>Memofield</th> </tr> <% @money.each do |money| %> <tr> <td><%=h money.amount_due %></td> <td><%=h money.amount_due.to_s('F') %></td> <td><%=h number_to_currency(money.balance) %></td> <td><%=h number_with_precision(money.balance, :precision => 2) %></td> <td><%=h money.memofield %></td> <td><%= link_to 'Show', money %></td> <td><%= link_to 'Edit', edit_money_path(money) %></td> <td><%= link_to 'Destroy', money, :confirm => 'Are you sure?', :method => :delete %></td> </tr> <% end %> </table> <%= link_to 'New money', new_money_path %> This seemed to work pretty well. Then I started testing with production data and hit a major problem with number_to_currency. The number in the database is 10542.28; I verified it with the MySQL Query Browser. Rails will display this as 10542.28, but when I call number_to_currency, the number is displayed as $15422.80. The error seems to happen with any number between 10,000.00 and 10,999.99. So far, I haven’t seen it outside of that range, but I obviously haven’t tested everything. I guess my workaround is to remove number_to_currency, but that leaves the views looking really sloppy and unprofessional. The formatting is messed up, things don’t line up properly, and I can’t force the display to 2 decimal places. I’m seriously hoping there is an easy fix for this. I can’t imagine this being a widespread problem; it would affect so many people that someone would have fixed it! But I don’t know where to go from here. I’d desperately like some help. (Later: number_with_precision fails the same way number_to_currency does.) Sue Petersen

    Read the article

  • Assembly - Read next sector of a virtual disk

    - by ali
    As any programmer in the world at least once in his/her life, I am trying to create my "revolutionary", the new and only one operating system. :D Well, I am using a virtual emulator (Oracle VM Virtual Box), for which I create a new unknwon operating system, with a vmdk disk. I like vmdk because they are just plain files, so I can paste my boot-loader over the first 512 bytes of the virtual hard disk. Now, I am trying to read the next sector of this virtual disk, on which I would paste a simple kernel that would display a message. I have two questions: Am I reading the second segment (the first -512 bytes- is occupied by the bootloader) correctly? CODE: CitesteDisc: mov bx, 0x8000 ; segment mov es, bx mov bx, 0x0000 ; offset mov ah, 0x02 ; read function mov al, 0x01 ; sectors - this might be wrong, trying to read from hd mov ch, 0x00 ; cylinder mov cl, 0x02 ; sector mov dh, 0x00 ; head mov dl, 0x80 ; drive - trying to read from hd int 0x13 ; disk int mov si, ErrorMessage ; - This will display an error message jc ShowMessage jmp [es:bx] ; buffer Here, I get the error message, after checking CF. However, if I use INT 13, 1 to get last status message, AL is 0 - so no error is saved. Am I pasting my simple kernel in the correct place inside the vmdk? What I do is pasting it after the 512th byte of the file, the first 512 bytes, as I said, are the boot-loader. The file would look like this: BE 45 7C E8 16 00 EB FE B4 0E B7 00 B3 07 CD 10 <- First sector C3 AC 08 C0 74 05 E8 EF FF EB F6 C3 B4 00 B2 80 CD 13 BE 5D 7C 72 F5 BB 00 80 8E C3 BB 00 00 B4 02 B0 06 B5 00 B1 01 B6 00 B2 07 CD 13 BE 4E 7C 72 CF 26 FF 27 57 65 6C 63 6F 6D 65 21 00 52 65 61 64 69 6E 67 20 65 72 72 6F 72 21 00 52 65 73 65 74 74 69 6E 67 20 65 72 72 6F 72 21 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 55 AA <- Boot-loader signature B4 0E B0 2E CD 10 EB FE 00 00 00 00 00 00 00 00 <- Start of the second sector 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 So, this is the way I am trying to add the kernel to the second sector. What do you think is wrong with this? Thanks!

    Read the article

  • Multiset of shared_ptrs as a dynamic priority queue: Concept and practice

    - by Sarah
    I was using a vector-based priority queue typedef std::priority_queue< Event, vector< Event >, std::greater< Event > > EventPQ; to manage my Event objects. Now my simulation has to be able to find and delete certain Event objects not at the top of the queue. I'd like to know if my planned work-around can do what I need it to, and if I have the syntax right. I'd also like to know if dramatically better solutions exist. My plan is to make EventPQ a multiset of smart pointers to Event objects: typedef std::multi_set< boost::shared_ptr< Event > > EventPQ; I'm borrowing functions of the Event class from a related post on a multimap priority queue. // Event.h #include <cstdlib> using namespace std; #include <set> #include <boost/shared_ptr.hpp> class Event; typedef std::multi_set< boost::shared_ptr< Event > > EventPQ; class Event { public: Event( double t, int eid, int hid ); ~Event(); void add( EventPQ& q ); void remove(); bool operator < ( const Event & rhs ) const { return ( time < rhs.time ); } bool operator > ( const Event & rhs ) const { return ( time > rhs.time ); } double time; int eventID; int hostID; EventPQ* mq; EventPQ::iterator mIt; }; // Event.cpp Event::Event( double t, int eid, int hid ) { time = t; eventID = eid; hostID = hid; } Event::~Event() {} void Event::add( EventPQ& q ) { mq = &q; mIt = q.insert( boost::shared_ptr<Event>(this) ); } void Event::remove() { mq.erase( mIt ); mq = 0; mIt = EventPQ::iterator(); } I was hoping that by making EventPQ a container of pointers, I could avoid wasting time copying Events into the container and avoid accidentally editing the wrong copy. Would it be dramatically easier to store the Events themselves in EventPQ instead? Does it make more sense to remove the time keys from Event objects and use them instead as keys in a multimap? Assuming the current implementation seems okay, my questions are: Do I need to specify how to sort on the pointers, rather than the objects, or does the multiset automatically know to sort on the objects pointed to? If I have a shared_ptr ptr1 to an Event that also has a pointer in the EventPQ container, how do I find and delete the corresponding pointer in EventPQ? Is it enough to .find( ptr1 ), or do I instead have to find by the key (time)? Is the Event::remove() sufficient for removing the pointer in the EventPQ container? There's a small chance multiple events could be created with the same time (obviously implied in the use of multiset). If the find() works on event times, to avoid accidentally deleting the wrong event, I was planning to throw in a further check on eventID and hostID. Does this seem reasonable? (Dumb syntax question) In Event.h, is the declaration of dummy class Event;, then the EventPQ typedef, and then the real class Event declaration appropriate? I'm obviously an inexperienced programmer with very spotty background--this isn't for homework. Would love suggestions and explanations. Please let me know if any part of this is confusing. Thanks.
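On question 1 above, one point can be answered with a short sketch: a multiset of shared_ptrs does not automatically order by the pointed-to Events; with the default comparator it orders by the pointers (or their ownership), not by Event::time, so a custom comparator is needed. The snippet below is an editorial illustration, not the poster's code: it uses std::shared_ptr and a stripped-down Event so it stays self-contained, and the EventPtrLess name is invented for the example; boost::shared_ptr from the question can be dropped in the same way.

```cpp
#include <iostream>
#include <memory>   // std::shared_ptr; boost::shared_ptr works the same way here
#include <set>

// Stripped-down Event, just enough for a self-contained example.
struct Event {
    Event(double t, int eid, int hid) : time(t), eventID(eid), hostID(hid) {}
    double time;
    int eventID;
    int hostID;
};

// Comparator so the multiset orders elements by the Events' times,
// not by the shared_ptr values themselves.
struct EventPtrLess {
    bool operator()(const std::shared_ptr<Event>& lhs,
                    const std::shared_ptr<Event>& rhs) const
    {
        return lhs->time < rhs->time;
    }
};

typedef std::multiset<std::shared_ptr<Event>, EventPtrLess> EventPQ;

int main()
{
    EventPQ q;
    q.insert(std::make_shared<Event>(2.5, 1, 10));
    EventPQ::iterator it = q.insert(std::make_shared<Event>(1.0, 2, 11));
    q.insert(std::make_shared<Event>(3.0, 3, 12));

    // The "top" of the queue is simply the first element.
    std::cout << "next event at t=" << (*q.begin())->time << "\n";   // 1.0

    // Any event can be erased in O(log n) if its iterator was remembered
    // at insertion time (the role Event::mIt plays in the question).
    q.erase(it);
    std::cout << "next event at t=" << (*q.begin())->time << "\n";   // 2.5
    return 0;
}
```

Erasing by a stored iterator also sidesteps the find-by-time ambiguity worried about in point 4, since no lookup by key is needed.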

    Read the article

  • how to define div or table cell height depending on the height of other divs or cells

    - by John
    I want to have a web page that contains 3 parts: A header at the top of the page , a footer (both of which having specific height in px)and the main part of the page which should be a div or table cell with the appropriate height attribute in order to take all the available space between them. I want the page to take 100% of the browser window height, trying to avoid scrollbars. The problems I have are the following: USING DIVs a) If I set the maindiv height to 100%, the page overflows and I get a vertical scrolbar. (the maindiv's height is set to the 100% of the browser window) <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Untitled Document</title> </head> <style type="text/css"> <!-- body, html{ height: 100%; max-height:100%; width: 100%; margin:0; padding:0; } div{padding:0;margin:0;} #containerdiv{height:100%;width:100%;background-color:#FF9;border:0;} #headerdiv{height:150px;width:100%;background-color:#0F0;border:0;} #footerdiv{height:50px;width:100%;background-color:#00F;border:0; } #maindiv{ background-color:#F00; height:100%; } div{border:#000 medium solid;border:0;} </style> <body> <div id="containerdiv"> <div id="headerdiv">headerdiv</div> <div id="maindiv">maindiv</div> <div id="footerdiv">footerdiv</div> </div> </body> </html> b) If I set the maindiv height to auto, the maindiv height is depending on it's content, which is not what I want. USING tables a) If I set the main cell height to 100% it works fine with Firefox but in Internet Explorer 8 I get a vertical scrollbar (you can use the next code block using th style="height:100%" instead of "auto" to see this.) b) If I set the main cell to auto it seems to be working both in IE and FF but then I have the problem that anything I put inside the maincell (table or div) cannot get maincell's full height in IE. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Untitled Document</title> </head> <style type="text/css"> <!-- body, html, table{ height: 100%; width: 100%; margin:0; padding:0; } table{border:#000 0px solid} </style> <body> <table style="background:#063" cellpadding="0" cellspacing="0" border="0"> <tr><th style="height:150px;background-color:#FF0"></th></tr> <tr> <th style="height:auto"><table style="background:#0FF;" cellpadding="0" cellspacing="0" border="0"><tr><th style="height:auto">nested cell</th></tr></table> </th> </tr> <tr><th style="height:50px;background-color:#FF0"></th></tr> </table> </body> </html> </html> Any ideas? Maybe there is an easier way to define the size of the main part of the page in px using javascript? (my javascript skills are pretty poor so any help with this is welcome!)

    Read the article

  • Does this language feature already exist?

    - by Pindatjuh
    I'm currently developing a new language for programming in a continuous environment (compare it to electrical engineering), and I've got some ideas on a certain language construction. Let me explain the feature by explanation and then by definition: x = a U b; Where x is a variable and a and b are other variables (or static values). This works like a union between a and b; no duplicates and no specific order. with(x) { // regular 'with' usage; using the global interpretation of "x" x = 5; // will replace the original definition of "x = a U b;" } with(x = a) { // this code block is executed when the "x" variable // has the "a" variable assigned. All references in // this code-block to "x" are references to "a". So saying: x = 5; // would only change the variable "a". If the variable "a" // later on changes, x still equals to 5, in this fashion: // 'x = a U b U 5;' // '[currentscope] = 5;' // thus, 'a = 5;' } with(x = b) { // same but with "b" } with(x != a) { // here the "x" variable refers to any variable // but "a"; thus saying x = 5; // is equal to the rewriting of // 'x = a U b U 5;' // 'b = 5;' (since it was the scope of this block) } with(x = (a U b)) { // guaranteed that "x" is 'a U b'; interacting with "x" // will interact with both "a" and "b". x = 5; // makes both "a" and "b" equal to 5; also the "x" variable // is updated to contain: // 'x = a U b U 5;' // '[currentscope] = 5;' // 'a U b = 5;' // and thus: 'a = 5; b = 5;'. } // etc. In the above, all code-blocks are executed, but the "scope" changes in each block how x is interpreted. In the first block, x is guaranteed to be a: thus interacting with x inside that block will interact on a. The second and the third code-block are only equal in this situation (because not a: then there only remains b). The last block guarantees that x is at least a or b. Further more; U is not the "bitwise or operator", but I've called it the "and/or"-operator. Its definition is: "U" = "and" U "or" (On my blog, http://cplang.wordpress.com/2009/12/19/binop-and-or/, there is more (mathematical) background information on this operator. It's loosely based on sets. Using different syntax, changed it in this question.) Update: more examples. print = "Hello world!" U "How are you?"; // this will print // both values, but the // order doesn't matter. // 'userkey' is a variable containing a key. with(userkey = "a") { print = userkey; // will only print "a". } with(userkey = ("shift" U "a")) { // pressed both "shift" and the "a" key. print = userkey; // will "print" shift and "a", even // if the user also pressed "ctrl": // the interpretation of "userkey" is changed, // such that it only contains the matched cases. } with((userkey = "shift") U (userkey = "a")) { // same as if-statement above this one, showing the distributivity. } x = 5 U 6 U 7; y = x + x; // will be: // y = (5 U 6 U 7) + (5 U 6 U 7) // = 10 U 11 U 12 U 13 U 14 somewantedkey = "ctrl" U "alt" U "space" with(userkey = somewantedkey) { // must match all elements of "somewantedkey" // (distributed the Boolean equals operated) // thus only executed when all the defined keys are pressed } with(somewantedkey = userkey) { // matches only one of the provided "somewantedkey" // thus when only "space" is pressed, this block is executed. } Update2: more examples and some more context. 
with(x = (a U b)) { // this } // can be written as with((x = a) U (x = b)) { // this: changing the variable like x = 5; // will be rewritten as: // a = 5 and b = 5 } Some background information: I'm building a language which is "time-independent", like Java is "platform-independent". Everything stated in the language is "as is", and is continuously and actively executed. This means that the programmer does not know in which order elements are (unless explicitly stated using constructs), nor when statements are executed. The language is completely separated from the "time" concept, i.e. it's continuously executed: with(a < 5) { a++; } // this is a loop-structure; // how and when it's executed isn't known however. with(a) { // every time the "a" variable changes, this code-block is executed. b = 4; with(b < 3) { // runs only three times. } with(b > 0) { b = b - 1; // runs four times } } Update 3: After pondering the nature of this language feature, I think it closely resembles the NetBeans Platform's Lookup, where each "with"-statement is a synchronized agent working on its specific "filter" of objects. Instead of being type-based, this is variable-based (fundamentally quite the same; just a different way of identifying objects). I greatly thank all of you for providing me with very insightful information and links/hints to great topics I can research. Thanks. I do not know if this construction already exists, so that's my question: does this language feature already exist?
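Purely as an illustration of the semantics sketched above, and not as a proposed implementation of the language, the following toy snippet models a U-value as the set of its possibilities and reproduces the question's x = 5 U 6 U 7; y = x + x; example, where y becomes 10 U 11 U 12 U 13 U 14. The UVal name is invented for the example.

```cpp
#include <iostream>
#include <set>

// A value that is "a U b U ...": simply the set of its possibilities.
typedef std::set<int> UVal;

// Adding two U-values distributes over all combinations, matching the
// question's example: (5 U 6 U 7) + (5 U 6 U 7) = 10 U 11 U 12 U 13 U 14.
UVal operator+(const UVal& a, const UVal& b)
{
    UVal result;
    for (std::set<int>::const_iterator i = a.begin(); i != a.end(); ++i)
        for (std::set<int>::const_iterator j = b.begin(); j != b.end(); ++j)
            result.insert(*i + *j);
    return result;
}

int main()
{
    UVal x;
    x.insert(5); x.insert(6); x.insert(7);   // x = 5 U 6 U 7

    UVal y = x + x;                          // y = 10 U 11 U 12 U 13 U 14
    for (std::set<int>::const_iterator it = y.begin(); it != y.end(); ++it)
        std::cout << *it << ' ';
    std::cout << '\n';
    return 0;
}
```

This only captures the value-set side of the proposal, of course; the with-block filtering and the continuous execution model are exactly the parts a conventional language does not give you for free.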

    Read the article

  • HTG Reviews the CODE Keyboard: Old School Construction Meets Modern Amenities

    - by Jason Fitzpatrick
    There’s nothing quite as satisfying as the smooth and crisp action of a well built keyboard. If you’re tired of  mushy keys and cheap feeling keyboards, a well-constructed mechanical keyboard is a welcome respite from the $10 keyboard that came with your computer. Read on as we put the CODE mechanical keyboard through the paces. What is the CODE Keyboard? The CODE keyboard is a collaboration between manufacturer WASD Keyboards and Jeff Atwood of Coding Horror (the guy behind the Stack Exchange network and Discourse forum software). Atwood’s focus was incorporating the best of traditional mechanical keyboards and the best of modern keyboard usability improvements. In his own words: The world is awash in terrible, crappy, no name how-cheap-can-we-make-it keyboards. There are a few dozen better mechanical keyboard options out there. I’ve owned and used at least six different expensive mechanical keyboards, but I wasn’t satisfied with any of them, either: they didn’t have backlighting, were ugly, had terrible design, or were missing basic functions like media keys. That’s why I originally contacted Weyman Kwong of WASD Keyboards way back in early 2012. I told him that the state of keyboards was unacceptable to me as a geek, and I proposed a partnership wherein I was willing to work with him to do whatever it takes to produce a truly great mechanical keyboard. Even the ardent skeptic who questions whether Atwood has indeed created a truly great mechanical keyboard certainly can’t argue with the position he starts from: there are so many agonizingly crappy keyboards out there. Even worse, in our opinion, is that unless you’re a typist of a certain vintage there’s a good chance you’ve never actually typed on a really nice keyboard. Those that didn’t start using computers until the mid-to-late 1990s most likely have always typed on modern mushy-key keyboards and never known the joy of typing on a really responsive and crisp mechanical keyboard. Is our preference for and love of mechanical keyboards shining through here? Good. We’re not even going to try and hide it. So where does the CODE keyboard stack up in pantheon of keyboards? Read on as we walk you through the simple setup and our experience using the CODE. Setting Up the CODE Keyboard Although the setup of the CODE keyboard is essentially plug and play, there are two distinct setup steps that you likely haven’t had to perform on a previous keyboard. Both highlight the degree of care put into the keyboard and the amount of customization available. Inside the box you’ll find the keyboard, a micro USB cable, a USB-to-PS2 adapter, and a tool which you may be unfamiliar with: a key puller. We’ll return to the key puller in a moment. Unlike the majority of keyboards on the market, the cord isn’t permanently affixed to the keyboard. What does this mean for you? Aside from the obvious need to plug it in yourself, it makes it dead simple to repair your own keyboard cord if it gets attacked by a pet, mangled in a mechanism on your desk, or otherwise damaged. It also makes it easy to take advantage of the cable routing channels in on the underside of the keyboard to  route your cable exactly where you want it. While we’re staring at the underside of the keyboard, check out those beefy rubber feet. By peripherals standards they’re huge (and there is six instead of the usual four). Once you plunk the keyboard down where you want it, it might as well be glued down the rubber feet work so well. 
After you’ve secured the cable and adjusted it to your liking, there is one more task  before plug the keyboard into the computer. On the bottom left-hand side of the keyboard, you’ll find a small recess in the plastic with some dip switches inside: The dip switches are there to switch hardware functions for various operating systems, keyboard layouts, and to enable/disable function keys. By toggling the dip switches you can change the keyboard from QWERTY mode to Dvorak mode and Colemak mode, the two most popular alternative keyboard configurations. You can also use the switches to enable Mac-functionality (for Command/Option keys). One of our favorite little toggles is the SW3 dip switch: you can disable the Caps Lock key; goodbye accidentally pressing Caps when you mean to press Shift. You can review the entire dip switch configuration chart here. The quick-start for Windows users is simple: double check that all the switches are in the off position (as seen in the photo above) and then simply toggle SW6 on to enable the media and backlighting function keys (this turns the menu key on the keyboard into a function key as typically found on laptop keyboards). After adjusting the dip switches to your liking, plug the keyboard into an open USB port on your computer (or into your PS/2 port using the included adapter). Design, Layout, and Backlighting The CODE keyboard comes in two flavors, a traditional 87-key layout (no number pad) and a traditional 104-key layout (number pad on the right hand side). We identify the layout as traditional because, despite some modern trapping and sneaky shortcuts, the actual form factor of the keyboard from the shape of the keys to the spacing and position is as classic as it comes. You won’t have to learn a new keyboard layout and spend weeks conditioning yourself to a smaller than normal backspace key or a PgUp/PgDn pair in an unconventional location. Just because the keyboard is very conventional in layout, however, doesn’t mean you’ll be missing modern amenities like media-control keys. The following additional functions are hidden in the F11, F12, Pause button, and the 2×6 grid formed by the Insert and Delete rows: keyboard illumination brightness, keyboard illumination on/off, mute, and then the typical play/pause, forward/backward, stop, and volume +/- in Insert and Delete rows, respectively. While we weren’t sure what we’d think of the function-key system at first (especially after retiring a Microsoft Sidewinder keyboard with a huge and easily accessible volume knob on it), it took less than a day for us to adapt to using the Fn key, located next to the right Ctrl key, to adjust our media playback on the fly. Keyboard backlighting is a largely hit-or-miss undertaking but the CODE keyboard nails it. Not only does it have pleasant and easily adjustable through-the-keys lighting but the key switches the keys themselves are attached to are mounted to a steel plate with white paint. Enough of the light reflects off the interior cavity of the keys and then diffuses across the white plate to provide nice even illumination in between the keys. Highlighting the steel plate beneath the keys brings us to the actual construction of the keyboard. It’s rock solid. The 87-key model, the one we tested, is 2.0 pounds. The 104-key is nearly a half pound heavier at 2.42 pounds. Between the steel plate, the extra-thick PCB board beneath the steel plate, and the thick ABS plastic housing, the keyboard has very solid feel to it. 
Combine that heft with the previously mentioned thick rubber feet and you have a tank-like keyboard that won’t budge a millimeter during normal use. Examining The Keys This is the section of the review the hardcore typists and keyboard ninjas have been waiting for. We’ve looked at the layout of the keyboard, we’ve looked at the general construction of it, but what about the actual keys? There are a wide variety of keyboard construction techniques, but the vast majority of modern keyboards use a rubber-dome construction. The key is floated in a plastic frame over a rubber membrane that has a little rubber dome for each key. The press of the physical key compresses the rubber dome downwards, and a little bit of conductive material on the inside of the dome’s apex connects with the circuit board. Despite the near ubiquity of the design, many people dislike it. The principal complaint is that dome keyboards require a complete compression to register a keystroke; keyboard designers and enthusiasts refer to this as “bottoming out”. In other words, to register the “b” key, you need to completely press that key down. As such it slows you down and requires additional pressure and movement that, over the course of tens of thousands of keystrokes, adds up to a whole lot of wasted time and fatigue. The CODE keyboard features key switches manufactured by Cherry, a company that has manufactured key switches since the 1960s. Specifically, the CODE features Cherry MX Clear switches. These switches feature the same classic design as the other Cherry switches (such as the MX Blue and Brown switch lineups) but are significantly quieter (yes, this is a mechanical keyboard, but no, your neighbors won’t think you’re firing off a machine gun), as they lack the audible click found in most Cherry switches. This isn’t to say that the keyboard doesn’t have a nice audible key press sound when the key is fully depressed, but the key mechanism itself doesn’t create a loud click when triggered. One of the great features of the Cherry MX Clear is a tactile “bump” that indicates the key has been compressed enough to register the stroke. For touch typists the very subtle tactile feedback is a great indicator that you can move on to the next stroke, and it provides a welcome speed boost. Even if you’re not trying to break any word-per-minute records, that little bump when pressing the key is satisfying. The Cherry key switches, in addition to providing a much more pleasant typing experience, are also significantly more durable than dome-style key switches. Rubber-dome membrane keyboards are typically rated for 5-10 million contacts, whereas the Cherry mechanical switches are rated for 50 million contacts. You’d have to write the next War and Peace, follow that up with A Tale of Two Cities: Zombie Edition, and then turn around and transcribe them both into a dozen different languages to even begin putting a tiny dent in the lifecycle of this keyboard. So what do the switches look like under the classically styled keys? You can take a look yourself with the included key puller. Slide the loop between the keys and then gently beneath the key you wish to remove: Wiggle the key puller gently back and forth while exerting gentle upward pressure to pop the key off. You can repeat the process for every key if you ever find yourself needing to extract piles of cat hair, Cheeto dust, or other foreign objects from your keyboard. 
There it is, the naked switch, the source of that wonderful crisp action with the tactile bump on each keystroke. The last feature worthy of a mention is the N-key rollover functionality of the keyboard. This is a feature you simply won’t find on non-mechanical keyboards, and even gaming keyboards typically only have any sort of key rollover on the high-frequency keys like WASD. So what is N-key rollover and why do you care? On a typical mass-produced rubber-dome keyboard you cannot simultaneously press more than two keys, as the third one doesn’t register. PS/2 keyboards allow for unlimited rollover (in other words you can’t out-type the keyboard, as all of your keystrokes, no matter how fast, will register); if you use the CODE keyboard with the PS/2 adapter you gain this ability. If you don’t use the PS/2 adapter and use the native USB, you still get 6-key rollover (and CTRL, ALT, and SHIFT don’t count towards the 6), so realistically you still won’t be able to out-type the computer, as even the more finger-twisting keyboard combos and high-speed typing will still fall well within the 6-key rollover. The rollover absolutely doesn’t matter if you’re a slow hunt-and-peck typist, but if you’ve read this far into a keyboard review there’s a good chance that you’re a serious typist, and that kind of quality construction and high-number key rollover is a fantastic feature. The Good, The Bad, and the Verdict We’ve put the CODE keyboard through its paces: we’ve played games with it, typed articles with it, left lengthy comments on Reddit, and otherwise used and abused it like we would any other keyboard. The Good: The construction is rock solid. In an emergency, we’re confident we could use the keyboard as a blunt weapon (and then resume using it later in the day with no ill effect on the keyboard). The Cherry switches are an absolute pleasure to type on; the Clear variety found in the CODE keyboard offers a really nice middle ground between the gun-shot clack of a louder mechanical switch and the quietness of a lesser-quality dome keyboard, without sacrificing quality. Touch typists will love the subtle tactile bump feedback. The dip switch system makes it very easy for users on different systems and with different keyboard layout needs to switch between operating systems and keyboard layouts. If you’re investing a chunk of change in a keyboard it’s nice to know you can take it with you to a different operating system or “upgrade” it to a new layout if you decide to take up Dvorak-style typing. The backlighting is perfect. You can adjust it from a barely visible glow to a blazing light-up-the-room brightness. Whatever your intensity preference, the white-coated steel backplate does a great job diffusing the light between the keys. You can easily remove the keys for cleaning (or to rearrange the letters to support a new keyboard layout). The weight of the unit, combined with the extra-thick rubber feet, keeps it planted exactly where you place it on the desk. The Bad: While you’re getting your money’s worth, the $150 price tag is a shock when compared to the $20-60 price tags you find on lower-end keyboards. People used to large dedicated media keys independent of the traditional key layout (such as the large buttons and volume controls found on many modern keyboards) might be put off by the Fn-key style media controls on the CODE. The Verdict: The keyboard is clearly and heavily influenced by the needs of serious typists. 
Whether you’re a programmer, a transcriptionist, or just somebody who wants to leave the lengthiest article comments the Internet has ever seen, the CODE keyboard offers a rock solid typing experience. Yes, $150 isn’t pocket change, but the quality of the CODE keyboard is so high and the typing experience is so enjoyable that you’re easily getting ten times the value you’d get out of purchasing a lesser keyboard. Even compared to other mechanical keyboards on the market, like the Das Keyboard, you’re still getting more for your money, as other mechanical keyboards don’t come with the lovely-to-type-on Cherry MX Clear switches, backlighting, and hardware-based operating system and keyboard layout switching. If it’s in your budget to upgrade your keyboard (especially if you’ve been slogging along with a low-end rubber-dome keyboard), there’s no good reason not to pick up a CODE keyboard. Key animation courtesy of Geekhack.org user Lethal Squirrel.

    Read the article

  • SQLAuthority News – SQLPASS Nov 8-11, 2010-Seattle – An Alternative Look at Experience

    - by pinaldave
    I recently attended most prestigious SQL Server event SQLPASS between Nov 8-11, 2010 at Seattle. I have only one expression for the event - Best Summit Ever This year the summit was at its best. Instead of writing about my usual routine or the event, I am going to write about the interesting things I did and how I felt about it! Best Summit Ever Trip to Seattle! This was my second trip to Seattle this year and the journey is always long. Here is the travel stats on how long it takes to get to Seattle: 24 hours official air time 36 hours total travel time (connection waits and airport commute) Every time I travel to USA I gain a day and when I travel back to home, I lose a day. However, the total traveling time is around 3 days. The journey is long and very exhausting. However, it is all worth it when you’re attending an event like SQLPASS. Here are few things I carry when I travel for a long journey: Dry Snack packs – I like to have some good Indian Dry Snacks along with me in my backpack so I can have my own snack when I want Amazon Kindle – Loaded with 80+ books A physical book – This is usually a very easy to read book I do not watch movies on the plane and usually spend my time reading something quick and easy. If I can go to sleep, I go for it. I prefer to not to spend time in conversation with the guy sitting next to me because usually I end up listening to their biography, which I cannot blog about. Sheraton Seattle SQLPASS In any case, I love to go to Seattle as the city is great and has everything a brilliant metropolis has to offer. The new Light Train is extremely convenient, and I can take it directly from the airport to the city center. My hotel, the Sheraton, was only few meters (in the USA people count in blocks – 3 blocks) away from the train station. This time I saved USD 40 each round trip due to the Light Train. Sessions I attended! Well, I really wanted to attend most of the sessions but there was great dilemma of which ones to choose. There were many, many sessions to be attended and at any given time there was more than one good session being presented. I had decided to attend sessions in area performance tuning and I attended quite a few sessions this year, compared to what I was able to do last year. Here are few names of the speakers whose sessions I attended (please note, following great speakers are not listed in any order. I loved them and I enjoyed their sessions): Conor Cunningham Rushabh Mehta Buck Woody Brent Ozar Jonathan Kehayias Chris Leonard Bob Ward Grant Fritchey I had great fun attending their sessions. The sessions were meaningful and enlightening. It is hard to rate any session but I have found that the insights learned in Conor Cunningham’s sessions are the highlight of the PASS Summit. Rushabh Mehta at Keynote SQLPASS   Bucky Woody and Brent Ozar I always like the sessions where the speaker is much closer to the audience and has real world experience. I think speakers who have worked in the real world deliver the best content and most useful information. Sessions I did not like! Indeed there were few sessions I did not like it and I am not going to name them here. However, there were strong reasons I did not like their sessions, and here is why: Sessions were all theory and had no real world connections. 
All technical questions ended with confusing answers (lots of “I will get back to you on it,” “it depends,” “let us take this offline” and many more…) “I am God” kind of attitude in the speakers For example, I attended a session of one very well known speaker who is a specialist for one particular area. I was bit late for the session and was surprised to see that in a room that could hold 350 people there were only 30 attendees. After sitting there for 15 minutes, I realized why lots of people left. Very soon I found I preferred to stare out the window instead of listening to that particular speaker. One on One Talk! Many times people ask me what I really like about PASS. I always say the experience of meeting SQL legends and spending time with them one on one and LEARNING! Here is the quick list of the people I met during this event and spent more than 30 minutes with each of them talking about various subjects: Pinal Dave and Brad Shulz Pinal Dave and Rushabh Mehta Michael Coles and Pinal Dave Rushabh Mehta – It is always pleasure to meet with him. He is a man with lots of energy and a passion for community. He recently told me that he really wanted to turn PASS into resource for learning for every SQL Server Developer and Administrator in the world. I had great in-depth discussion regarding how a single person can contribute to a community. Michael Coles – I consider him my best friend. It is always fun to meet him. He is funny and very knowledgeable. I think there are very few people who are as expert as he is in encryption and spatial databases. Worth meeting him every single time. Glenn Berry – A real friend of everybody. He is very a simple person and very true to his heart. I think there is not a single person in whole community who does not like him. He is a friends of all and everybody likes him very much. I once again had time to sit with him and learn so much from him. As he is known as Dr. DMV, I can be his nurse in the area of DMV. Brad Schulz – I always wanted to meet him but never got chance until today. I had great time meeting him in person and we have spent considerable amount of time together discussing various T-SQL tricks and tips. I do not know where he comes up with all the different ideas but I enjoy reading his blog and sharing his wisdom with me. Jonathan Kehayias – He is drill sergeant in US army. If you get the impression that he is a giant with very strong personality – you are wrong. He is very kind and soft spoken DBA with strong performance tuning skills. I asked him how he has kept his two jobs separate and I got very good answer – just work hard and have passion for what you do. I attended his sessions and his presentation style is very unique.  I feel like he is speaking in a language I understand. Louis Davidson – I had never had a chance to sit with him and talk about technology before. He has so much wisdom and he is very kind. During the dinner, I had talked with him for long time and without hesitation he started to draw a schema for me on the menu. It was a wonderful experience to learn from a master at the dinner table. He explained to me the real and practical differences between third normal form and forth normal form. Honestly I did not know earlier, but now I do. Erland Sommarskog – This man needs no introduction, he is very well known and very clear in conveying his ideas. I learned a lot from him during the course of year. Every time I meet him, I learn something new and this time was no exception. 
Joe Webb – Joey is all about community and people, we had interesting conversation about community, MVP and how one can be helpful to community without losing passion for long time. It is always pleasant to talk to him and of course, I had fun time. Ross Mistry – I call him my brother many times because he indeed looks like my cousin. He provided me lots of insight of how one can write book and how he keeps his books simple to appeal to all the readers. A wonderful person and great friend. Ola Hallgren - I did not know he was coming to the summit. I had great time meeting him and had a wonderful conversation with him regarding his scripts and future community activities. Blythe Morrow – She used to be integrated part of SQL Server Community and PASS HQ. It was wonderful to meet her again and re-connect. She is wonderful person and I had a great time talking to her. Solid Quality Mentors – It is difficult to decide who to mention here. Instead of writing all the names, I am going to include a photo of our meeting. I had great fun meeting various members of our global branches. This year I was sitting with my Spanish speaking friends and had great fun as Javier Loria from Solid Quality translated lots of things for me. Party, Party and Parties Every evening there were various parties. I did attend almost all of them. Every party had different theme but the goal of all the parties the same – networking. Here are the few parties where I had lots of fun: Dell Reception Party Exhibitor Party Solid Quality Fun Party Red Gate Friends Party MVP Dinner Microsoft Party MVP Dinner Quest Party Gameworks PASS Party Volunteer Party at Garage Solid Quality Mentors (10 Members out of 120) They were all great networking opportunities and lots of fun. I really had great time meeting people at the various parties. There were few people everywhere – well, I will say I am among them – who hopped parties. NDA – Not Decided Agenda During the event there were few meetings marked “NDA.” Someone asked me “why are these things NDA?”  My response was simple: because they are not sure themselves. NDA stands for Not Decided Agenda. Toys, Giveaways and Luggage I admit, I was like child in Gameworks and was playing to win soft toys. I was doing it for my daughter. I must thank all of the people who gave me their cards to try my luck. I won 4 soft-toys for my daughter and it was fun. Also, thanks to Angel who did a final toy swap with me to get the desired toy for my daughter. I also collected ducks from Idera, as my daughter really loves them. Solid Quality Booth Each of the exhibitors was giving away something and I got so much stuff that my luggage got quite a bit bigger when I returned. Best Exhibitor Idera had SQLDoctor (a real magician and fun guy) to promote their new tool SQLDoctor. I really had a great time participating in the magic myself. At one point, the magician made my watch disappear.  I have seen better magic before, but this time it caught me unexpectedly and I was taken by surprise. I won many ducks again. The Common Question I heard the following common questions: I have seen you somewhere – who are you? – I am Pinal Dave. I did not know that Pinal is your first name and Dave is your last name, how do you pronounce your last name again? – Da-way How old are you? – I am as old as I can be. Are you an Indian because you look like one? – I did not answer this one. Where are you from? This question was usually asked after looking at my badge which says India. So did you really fly from India? 
– Yes, because I have seasickness so I do not prefer the sea journey. How long was the journey? – 24/36/12 (air travel time/total travel time/time zone difference) Why do you write on SQLAuthority.com? – Because I want to. I remember your daughter looks like you. – Is this even a question? Of course, she is daddy’s little girl. There were so many other questions, I will have to write another blog post about it. SQLPASS Again, Best Summit Ever! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: About Me, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL, Technology Tagged: SQLPASS

    Read the article

  • Working with PivotTables in Excel

    - by Mark Virtue
    PivotTables are one of the most powerful features of Microsoft Excel.  They allow large amounts of data to be analyzed and summarized in just a few mouse clicks. In this article, we explore PivotTables, understand what they are, and learn how to create and customize them. Note:  This article is written using Excel 2010 (Beta).  The concept of a PivotTable has changed little over the years, but the method of creating one has changed in nearly every iteration of Excel.  If you are using a version of Excel that is not 2010, expect different screens from the ones you see in this article. A Little History In the early days of spreadsheet programs, Lotus 1-2-3 ruled the roost.  Its dominance was so complete that people thought it was a waste of time for Microsoft to bother developing their own spreadsheet software (Excel) to compete with Lotus.  Flash-forward to 2010, and Excel’s dominance of the spreadsheet market is greater than Lotus’s ever was, while the number of users still running Lotus 1-2-3 is approaching zero.  How did this happen?  What caused such a dramatic reversal of fortunes? Industry analysts put it down to two factors:  Firstly, Lotus decided that this fancy new GUI platform called “Windows” was a passing fad that would never take off.  They declined to create a Windows version of Lotus 1-2-3 (for a few years, anyway), predicting that their DOS version of the software was all anyone would ever need.  Microsoft, naturally, developed Excel exclusively for Windows.  Secondly, Microsoft developed a feature for Excel that Lotus didn’t provide in 1-2-3, namely PivotTables.  The PivotTables feature, exclusive to Excel, was deemed so staggeringly useful that people were willing to learn an entire new software package (Excel) rather than stick with a program (1-2-3) that didn’t have it.  This one feature, along with the misjudgment of the success of Windows, was the death-knell for Lotus 1-2-3, and the beginning of the success of Microsoft Excel. Understanding PivotTables So what is a PivotTable, exactly? Put simply, a PivotTable is a summary of some data, created to allow easy analysis of said data.  But unlike a manually created summary, Excel PivotTables are interactive.  Once you have created one, you can easily change it if it doesn’t offer the exact insights into your data that you were hoping for.  In a couple of clicks the summary can be “pivoted” – rotated in such a way that the column headings become row headings, and vice versa.  There’s a lot more that can be done, too.  Rather than try to describe all the features of PivotTables, we’ll simply demonstrate them… The data that you analyze using a PivotTable can’t be just any data – it has to be raw data, previously unprocessed (unsummarized) – typically a list of some sort.  An example of this might be the list of sales transactions in a company for the past six months. Examine the data shown below: Notice that this is not raw data.  In fact, it is already a summary of some sort.  In cell B3 we can see $30,000, which apparently is the total of James Cook’s sales for the month of January.  So where is the raw data?  How did we arrive at the figure of $30,000?  Where is the original list of sales transactions that this figure was generated from?  It’s clear that somewhere, someone must have gone to the trouble of collating all of the sales transactions for the past six months into the summary we see above.  How long do you suppose this took?  An hour?  Ten?  Probably. 
If we were to track down the original list of sales transactions, it might look something like this: You may be surprised to learn that, using the PivotTable feature of Excel, we can create a monthly sales summary similar to the one above in a few seconds, with only a few mouse clicks.  We can do this – and a lot more too! How to Create a PivotTable First, ensure that you have some raw data in a worksheet in Excel.  A list of financial transactions is typical, but it can be a list of just about anything:  Employee contact details, your CD collection, or fuel consumption figures for your company’s fleet of cars. So we start Excel… …and we load such a list… Once we have the list open in Excel, we’re ready to start creating the PivotTable. Click on any one single cell within the list: Then, from the Insert tab, click the PivotTable icon: The Create PivotTable box appears, asking you two questions:  What data should your new PivotTable be based on, and where should it be created?  Because we already clicked on a cell within the list (in the step above), the entire list surrounding that cell is already selected for us ($A$1:$G$88 on the Payments sheet, in this example).  Note that we could select a list in any other region of any other worksheet, or even some external data source, such as an Access database table, or even a MS-SQL Server database table.  We also need to select whether we want our new PivotTable to be created on a new worksheet, or on an existing one.  In this example we will select a new one: The new worksheet is created for us, and a blank PivotTable is created on that worksheet: Another box also appears:  The PivotTable Field List.  This field list will be shown whenever we click on any cell within the PivotTable (above): The list of fields in the top part of the box is actually the collection of column headings from the original raw data worksheet.  The four blank boxes in the lower part of the screen allow us to choose the way we would like our PivotTable to summarize the raw data.  So far, there is nothing in those boxes, so the PivotTable is blank.  All we need to do is drag fields down from the list above and drop them in the lower boxes.  A PivotTable is then automatically created to match our instructions.  If we get it wrong, we only need to drag the fields back to where they came from and/or drag new fields down to replace them. The Values box is arguably the most important of the four.  The field that is dragged into this box represents the data that needs to be summarized in some way (by summing, averaging, finding the maximum, minimum, etc).  It is almost always numerical data.  A perfect candidate for this box in our sample data is the “Amount” field/column.  Let’s drag that field into the Values box: Notice that (a) the “Amount” field in the list of fields is now ticked, and “Sum of Amount” has been added to the Values box, indicating that the amount column has been summed. If we examine the PivotTable itself, we indeed find the sum of all the “Amount” values from the raw data worksheet: We’ve created our first PivotTable!  Handy, but not particularly impressive.  It’s likely that we need a little more insight into our data than that. Referring to our sample data, we need to identify one or more column headings that we could conceivably use to split this total.  For example, we may decide that we would like to see a summary of our data where we have a row heading for each of the different salespersons in our company, and a total for each.  
To achieve this, all we need to do is to drag the “Salesperson” field into the Row Labels box. Now, finally, things start to get interesting! Our PivotTable starts to take shape… With a couple of clicks we have created a table that would have taken a long time to do manually.

So what else can we do? Well, in one sense our PivotTable is complete. We’ve created a useful summary of our source data. The important stuff is already learned! For the rest of the article, we will examine some ways that more complex PivotTables can be created, and ways that those PivotTables can be customized.

First, we can create a two-dimensional table. Let’s do that by using “Payment Method” as a column heading. Simply drag the “Payment Method” heading to the Column Labels box. Starting to get very cool!

Let’s make it a three-dimensional table. What could such a table possibly look like? Well, let’s see… Drag the “Package” column/heading to the Report Filter box, and notice where it ends up. This allows us to filter our report based on which “holiday package” was being purchased. For example, we can see the breakdown of salesperson vs. payment method for all packages, or, with a couple of clicks, change it to show the same breakdown for the “Sunseekers” package. And so, if you think about it the right way, our PivotTable is now three-dimensional.

Let’s keep customizing… If it turns out, say, that we only want to see cheque and credit card transactions (i.e. no cash transactions), then we can deselect the “Cash” item from the column headings. Click the drop-down arrow next to Column Labels, and untick “Cash”. As you can see, “Cash” is gone.

Formatting

This is obviously a very powerful system, but so far the results look very plain and boring. For a start, the numbers that we’re summing do not look like dollar amounts – just plain old numbers. Let’s rectify that.

A temptation might be to do what we’re used to doing in such circumstances and simply select the whole table (or the whole worksheet) and use the standard number formatting buttons on the toolbar to complete the formatting. The problem with that approach is that if you ever change the structure of the PivotTable in the future (which is 99% likely), then those number formats will be lost. We need a way that will make them (semi-)permanent.

First, we locate the “Sum of Amount” entry in the Values box, and click on it. A menu appears. We select Value Field Settings… from the menu. The Value Field Settings box appears. Click the Number Format button, and the standard Format Cells box appears. From the Category list, select (say) Accounting, and drop the number of decimal places to 0. Click OK a few times to get back to the PivotTable… As you can see, the numbers have been correctly formatted as dollar amounts.

While we’re on the subject of formatting, let’s format the entire PivotTable. There are a few ways to do this. Let’s use a simple one… Click the PivotTable Tools/Design tab, then drop down the arrow in the bottom-right of the PivotTable Styles list to see a vast collection of built-in styles. Choose any one that appeals, and look at the result in your PivotTable.

Other Options

We can work with dates as well. Now usually, there are many, many dates in a transaction list such as the one we started with. But Excel provides the option to group data items together by day, week, month, year, etc. Let’s see how this is done.
First, let’s remove the “Payment Method” column from the Column Labels box (simply drag it back up to the field list), and replace it with the “Date Booked” column. As you can see, this makes our PivotTable instantly useless, giving us one column for each date that a transaction occurred on – a very wide table! To fix this, right-click on any date and select Group… from the context menu. The grouping box appears. We select Months and click OK. Voila! A much more useful table. (Incidentally, this table is virtually identical to the one shown at the beginning of this article – the original sales summary that was created manually.)

Another cool thing to be aware of is that you can have more than one set of row headings. You can do a similar thing with column headings (or even report filters).

Keeping things simple again, let’s see how to show averaged values, rather than summed values. First, click on “Sum of Amount”, and select Value Field Settings… from the context menu that appears. In the Summarize value field by list in the Value Field Settings box, select Average. While we’re here, let’s change the Custom Name from “Average of Amount” to something a little more concise. Type in something like “Avg”. Click OK, and see what it looks like. Notice that all the values change from summed totals to averages, and the table title (top-left cell) has changed to “Avg”.

If we like, we can even have sums, averages and counts (counts = how many sales there were) all on the same PivotTable! Here are the steps to get something like that in place (starting from a blank PivotTable):

- Drag “Salesperson” into the Column Labels
- Drag the “Amount” field down into the Values box three times
- For the first “Amount” field, change its custom name to “Total” and its number format to Accounting (0 decimal places)
- For the second “Amount” field, change its custom name to “Average”, its function to Average and its number format to Accounting (0 decimal places)
- For the third “Amount” field, change its name to “Count” and its function to Count
- Drag the automatically created field from Column Labels to Row Labels

Here’s what we end up with: total, average and count on the same PivotTable!

Conclusion

There are many, many more features and options for PivotTables created by Microsoft Excel – far too many to list in an article like this. To fully cover the potential of PivotTables, a small book (or a large website) would be required. Brave and/or geeky readers can explore PivotTables further quite easily: simply right-click on just about everything, and see what options become available to you. There are also the two ribbon tabs: PivotTable Tools/Options and Design. It doesn’t matter if you make a mistake – it’s easy to delete the PivotTable and start again – a possibility old DOS users of Lotus 1-2-3 never had. We’ve included an Excel file that should work with most versions of Excel, so you can download it to practice your PivotTable skills.
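If you ever want to sanity-check the figures a PivotTable produces, the same total/average/count summary can be reproduced outside Excel. The sketch below is an illustration only, reusing the hypothetical payments.csv export and the Salesperson/Amount column names assumed earlier rather than anything shipped with the article:

# Minimal sketch: total, average and count of Amount per salesperson,
# mirroring the three-value PivotTable built above.
Import-Csv .\payments.csv |
    Group-Object Salesperson |
    ForEach-Object {
        $stats = $_.Group | Measure-Object Amount -Sum -Average
        [pscustomobject]@{
            Salesperson = $_.Name
            Total       = '{0:C0}' -f $stats.Sum       # currency, 0 decimals, like the Accounting format
            Average     = '{0:C0}' -f $stats.Average
            Count       = $stats.Count
        }
    } |
    Format-Table -AutoSize

A mismatch with the PivotTable usually just means a report filter or a deselected column item is still in effect.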
Download Our Practice Excel File

    Read the article

  • Why didn't 12.04 install?

    - by Josephisscrewed
    Ok, so I've installed Ubuntu many times on my computer.. Normally on the same partition, and WIndows would always delete Ubuntu(I don't know how.. it just happens) if i go away from keyboard during boot and it chooses Windows automatically because I took to long. So i tried to reinstall again, but after the fifth time it wouldn't let me, and told me to check "wubi-12.04-rev266.log". It took a while to find, but when i found it, I had no idea what any of it meant, as I'm no programmer.I first tried this the day Precise Pangolin came out. SO skip ahead 2.5 months, when I finally found this file, and i then got the idea of making a new partition to install Ubuntu on, but I used wubi, like I always did. It didn't look like it would f anything up, so I did it. it went through all the downloads, extracting, etc. Which took about 40 minutes total, then ended with an error message saying to check "wubi-12.04-rev266.log". i did. Here's what it says: 07-10 23:33 INFO root: === wubi 12.04 rev266 === 07-10 23:33 DEBUG root: Logfile is c:\users\joseph\appdata\local\temp\wubi-12.04-rev266.log 07-10 23:33 DEBUG root: sys.argv = ['main.pyo', '--exefile="C:\\Users\\Joseph\\Downloads\\wubi.exe"'] 07-10 23:33 DEBUG CommonBackend: data_dir=C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\data 07-10 23:33 DEBUG WindowsBackend: 7z=C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\bin\7z.exe 07-10 23:33 DEBUG WindowsBackend: startup_folder=C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Startup 07-10 23:33 DEBUG CommonBackend: Fetching basic info... 07-10 23:33 DEBUG CommonBackend: original_exe=C:\Users\Joseph\Downloads\wubi.exe 07-10 23:33 DEBUG CommonBackend: platform=win32 07-10 23:33 DEBUG CommonBackend: osname=nt 07-10 23:33 DEBUG CommonBackend: language=en_US 07-10 23:33 DEBUG CommonBackend: encoding=cp1252 07-10 23:33 DEBUG WindowsBackend: arch=amd64 07-10 23:33 DEBUG CommonBackend: Parsing isolist=C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\data\isolist.ini 07-10 23:33 DEBUG CommonBackend: Adding distro Xubuntu-i386 07-10 23:33 DEBUG CommonBackend: Adding distro Edubuntu-i386 07-10 23:33 DEBUG CommonBackend: Adding distro Xubuntu-amd64 07-10 23:33 DEBUG CommonBackend: Adding distro Kubuntu-amd64 07-10 23:33 DEBUG CommonBackend: Adding distro Mythbuntu-i386 07-10 23:33 DEBUG CommonBackend: Adding distro Edubuntu-amd64 07-10 23:33 DEBUG CommonBackend: Adding distro Ubuntu-amd64 07-10 23:33 DEBUG CommonBackend: Adding distro Lubuntu-i386 07-10 23:33 DEBUG CommonBackend: Adding distro Ubuntu-i386 07-10 23:33 DEBUG CommonBackend: Adding distro Mythbuntu-amd64 07-10 23:33 DEBUG CommonBackend: Adding distro Kubuntu-i386 07-10 23:33 DEBUG CommonBackend: Adding distro Lubuntu-amd64 07-10 23:33 DEBUG WindowsBackend: Fetching host info... 
07-10 23:33 DEBUG WindowsBackend: registry_key=Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi 07-10 23:33 DEBUG WindowsBackend: windows version=vista 07-10 23:33 DEBUG WindowsBackend: windows_version2=Windows 7 Home Premium 07-10 23:33 DEBUG WindowsBackend: windows_sp=None 07-10 23:33 DEBUG WindowsBackend: windows_build=7600 07-10 23:33 DEBUG WindowsBackend: gmt=-8 07-10 23:33 DEBUG WindowsBackend: country=US 07-10 23:33 DEBUG WindowsBackend: timezone=America/Los_Angeles 07-10 23:33 DEBUG WindowsBackend: windows_username=Joseph 07-10 23:33 DEBUG WindowsBackend: user_full_name=Joseph 07-10 23:33 DEBUG WindowsBackend: user_directory=C:\Users\Joseph 07-10 23:33 DEBUG WindowsBackend: windows_language_code=1033 07-10 23:33 DEBUG WindowsBackend: windows_language=English 07-10 23:33 DEBUG WindowsBackend: processor_name=Intel(R) Core(TM) i3 CPU M 370 @ 2.40GHz 07-10 23:33 DEBUG WindowsBackend: bootloader=vista 07-10 23:33 DEBUG WindowsBackend: system_drive=Drive(C: hd 78696.8203125 mb free ntfs) 07-10 23:33 DEBUG WindowsBackend: drive=Drive(C: hd 78696.8203125 mb free ntfs) 07-10 23:33 DEBUG WindowsBackend: drive=Drive(D: hd 4303.48046875 mb free ntfs) 07-10 23:33 DEBUG WindowsBackend: drive=Drive(E: cd 0.0 mb free udf) 07-10 23:33 DEBUG WindowsBackend: drive=Drive(U: hd 79907.8320313 mb free ntfs) 07-10 23:33 DEBUG WindowsBackend: uninstaller_path=None 07-10 23:33 DEBUG WindowsBackend: previous_target_dir=None 07-10 23:33 DEBUG WindowsBackend: previous_distro_name=None 07-10 23:33 DEBUG WindowsBackend: keyboard_id=67699721 07-10 23:33 DEBUG WindowsBackend: keyboard_layout=us 07-10 23:33 DEBUG WindowsBackend: keyboard_variant= 07-10 23:33 DEBUG CommonBackend: python locale=('en_US', 'cp1252') 07-10 23:33 DEBUG CommonBackend: locale=en_US.UTF-8 07-10 23:33 DEBUG WindowsBackend: total_memory_mb=3893.859375 07-10 23:33 DEBUG CommonBackend: Searching ISOs on USB devices 07-10 23:33 DEBUG CommonBackend: Searching for local CDs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Ubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Ubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Kubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Kubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Xubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Xubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Mythbuntu CD 07-10 23:33 DEBUG Distro: does not contain 
C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Mythbuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Edubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Edubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Lubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Lubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Ubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Ubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Kubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Kubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Xubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Xubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Mythbuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Mythbuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Edubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Edubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Lubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Lubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Ubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Ubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Kubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Kubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid 
Xubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Xubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Mythbuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Mythbuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Edubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Edubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Lubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Lubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Ubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Ubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Kubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Kubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Xubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Xubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Mythbuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Mythbuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Edubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Edubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Lubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Lubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 INFO root: Running the installer... 07-10 23:33 DEBUG WindowsFrontend: __init__... 07-10 23:33 DEBUG WindowsFrontend: on_init... 
07-10 23:33 INFO WinuiPage: appname=wubi, localedir=C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\translations, languages=['en_US', 'en'] 07-10 23:33 INFO WinuiPage: appname=wubi, localedir=C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\translations, languages=['en_US', 'en'] 07-10 23:35 DEBUG WinuiInstallationPage: target_drive=U:, installation_size=30000MB, distro_name=Ubuntu, language=en_US, locale=en_US.UTF-8, username=joseph 07-10 23:35 INFO root: Received settings 07-10 23:35 DEBUG CommonBackend: Searching for local CD 07-10 23:35 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Ubuntu CD 07-10 23:35 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:35 DEBUG Distro: checking whether D:\ is a valid Ubuntu CD 07-10 23:35 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:35 DEBUG Distro: checking whether E:\ is a valid Ubuntu CD 07-10 23:35 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:35 DEBUG Distro: checking whether U:\ is a valid Ubuntu CD 07-10 23:35 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:35 DEBUG CommonBackend: Searching for local ISO 07-10 23:35 INFO WinuiPage: appname=wubi, localedir=C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\translations, languages=['en_US', 'en'] 07-10 23:35 DEBUG TaskList: # Running tasklist... 07-10 23:35 DEBUG TaskList: ## Running select_target_dir... 07-10 23:35 INFO WindowsBackend: Installing into U:\ubuntu 07-10 23:35 DEBUG TaskList: ## Finished select_target_dir 07-10 23:35 DEBUG TaskList: ## Running create_dir_structure... 07-10 23:35 DEBUG CommonBackend: Creating dir U:\ubuntu 07-10 23:35 DEBUG CommonBackend: Creating dir U:\ubuntu\disks 07-10 23:35 DEBUG CommonBackend: Creating dir U:\ubuntu\install 07-10 23:35 DEBUG CommonBackend: Creating dir U:\ubuntu\install\boot 07-10 23:35 DEBUG CommonBackend: Creating dir U:\ubuntu\disks\boot 07-10 23:35 DEBUG CommonBackend: Creating dir U:\ubuntu\disks\boot\grub 07-10 23:35 DEBUG CommonBackend: Creating dir U:\ubuntu\install\boot\grub 07-10 23:35 DEBUG TaskList: ## Finished create_dir_structure 07-10 23:35 DEBUG TaskList: ## Running create_uninstaller... 
07-10 23:35 DEBUG WindowsBackend: Copying uninstaller C:\Users\Joseph\Downloads\wubi.exe -> U:\ubuntu\uninstall-wubi.exe 07-10 23:35 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi UninstallString U:\ubuntu\uninstall-wubi.exe 07-10 23:35 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi InstallationDir U:\ubuntu 07-10 23:35 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi DisplayName Ubuntu 07-10 23:35 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi DisplayIcon U:\ubuntu\Ubuntu.ico 07-10 23:35 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi DisplayVersion 12.04-rev266 07-10 23:35 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi Publisher Ubuntu 07-10 23:35 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi URLInfoAbout http://www.ubuntu.com 07-10 23:35 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi HelpLink http://www.ubuntu.com/support 07-10 23:35 DEBUG TaskList: ## Finished create_uninstaller 07-10 23:35 DEBUG TaskList: ## Running create_preseed_diskimage... 07-10 23:35 DEBUG TaskList: ## Finished create_preseed_diskimage 07-10 23:35 DEBUG TaskList: ## Running get_diskimage... 07-10 23:35 DEBUG TaskList: New task download 07-10 23:35 DEBUG TaskList: ### Running download... 07-10 23:35 DEBUG downloader: downloading http://releases.ubuntu.com/12.04/ubuntu-12.04-wubi-amd64.tar.xz > U:\ubuntu\disks\ubuntu-12.04-wubi-amd64.tar.xz 07-10 23:35 DEBUG downloader: Download start filename=U:\ubuntu\disks\ubuntu-12.04-wubi-amd64.tar.xz, url=http://releases.ubuntu.com/12.04/ubuntu-12.04-wubi-amd64.tar.xz, basename=ubuntu-12.04-wubi-amd64.tar.xz, length=512730488, text=None 07-11 00:00 DEBUG TaskList: ### Finished download 07-11 00:00 DEBUG downloader: download finished (read 512730488 bytes) 07-11 00:00 DEBUG TaskList: ## Finished get_diskimage 07-11 00:00 DEBUG TaskList: ## Running extract_diskimage... 07-11 00:03 DEBUG TaskList: ## Finished extract_diskimage 07-11 00:03 DEBUG TaskList: ## Running choose_disk_sizes... 07-11 00:03 DEBUG WindowsBackend: total size=30000 root=29744 swap=256 home=0 usr=0 07-11 00:03 DEBUG TaskList: ## Finished choose_disk_sizes 07-11 00:03 DEBUG TaskList: ## Running expand_diskimage... 07-11 00:05 DEBUG TaskList: ## Finished expand_diskimage 07-11 00:05 DEBUG TaskList: ## Running create_swap_diskimage... 07-11 00:05 DEBUG TaskList: ## Finished create_swap_diskimage 07-11 00:05 DEBUG TaskList: ## Running modify_bootloader... 07-11 00:05 DEBUG TaskList: New task modify_bcd 07-11 00:05 DEBUG TaskList: ### Running modify_bcd... 07-11 00:05 DEBUG WindowsBackend: modify_bcd Drive(C: hd 78696.8203125 mb free ntfs) 07-11 00:05 ERROR TaskList: Error executing command >>command=C:\Windows\sysnative\bcdedit.exe /set {970e3d1b-e019-11df-a016-81045c79c1f9} device partition=U: >>retval=1 >>stderr=An error has occurred setting the element data. The request is not supported. 
>>stdout= Traceback (most recent call last): File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__ File "\lib\wubi\backends\win32\backend.py", line 697, in modify_bcd File "\lib\wubi\backends\common\utils.py", line 66, in run_command Exception: Error executing command >>command=C:\Windows\sysnative\bcdedit.exe /set {970e3d1b-e019-11df-a016-81045c79c1f9} device partition=U: >>retval=1 >>stderr=An error has occurred setting the element data. The request is not supported. >>stdout= 07-11 00:05 DEBUG TaskList: # Cancelling tasklist 07-11 00:05 DEBUG TaskList: New task modify_bcd 07-11 00:05 ERROR root: Error executing command >>command=C:\Windows\sysnative\bcdedit.exe /set {970e3d1b-e019-11df-a016-81045c79c1f9} device partition=U: >>retval=1 >>stderr=An error has occurred setting the element data. The request is not supported. >>stdout= Traceback (most recent call last): File "\lib\wubi\application.py", line 58, in run File "\lib\wubi\application.py", line 132, in select_task File "\lib\wubi\application.py", line 158, in run_installer File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__ File "\lib\wubi\backends\win32\backend.py", line 697, in modify_bcd File "\lib\wubi\backends\common\utils.py", line 66, in run_command Exception: Error executing command >>command=C:\Windows\sysnative\bcdedit.exe /set {970e3d1b-e019-11df-a016-81045c79c1f9} device partition=U: >>retval=1 >>stderr=An error has occurred setting the element data. The request is not supported. >>stdout= 07-11 00:05 DEBUG TaskList: New task modify_bcd 07-11 00:05 DEBUG TaskList: ## Finished modify_bootloader 07-11 00:05 DEBUG TaskList: # Finished tasklist What have I done wrong? What can I do? If I turn off my laptop, will I actually be able to turn it back on? If you want me to post the log from the first day it happened, i'd be glad to in the comments, in the main body it made it over 30000 characters.

    Read the article

  • Auto blocking attacking IP address

    - by dong
    This is to share my PowerShell code online. I original asked this question on MSDN forum (or TechNet?) here: http://social.technet.microsoft.com/Forums/en-US/winserversecurity/thread/f950686e-e3f8-4cf2-b8ec-2685c1ed7a77 In short, this is trying to find attacking IP address then add it into Firewall block rule. So I suppose: 1, You are running a Windows Server 2008 facing the Internet. 2, You need to have some port open for service, e.g. TCP 21 for FTP; TCP 3389 for Remote Desktop. You can see in my code I’m only dealing with these two since that’s what I opened. You can add further port number if you like, but the way to process might be different with these two. 3, I strongly suggest you use STRONG password and follow all security best practices, this ps1 code is NOT for adding security to your server, but reduce the nuisance from brute force attack, and make sys admin’s life easier: i.e. your FTP log won’t hold megabytes of nonsense, your Windows system log will not roll back and only can tell you what happened last month. 4, You are comfortable with setting up Windows Firewall rules, in my code, my rule has a name of “MY BLACKLIST”, you need to setup a similar one, and set it to BLOCK everything. 5, My rule is dangerous because it has the risk to block myself out as well. I do have a backup plan i.e. the DELL DRAC5 so that if that happens, I still can remote console to my server and reset the firewall. 6, By no means the code is perfect, the coding style, the use of PowerShell skills, the hard coded part, all can be improved, it’s just that it’s good enough for me already. It has been running on my server for more than 7 MONTHS. 7, Current code still has problem, I didn’t solve it yet, further on this point after the code. :)    #Dong Xie, March 2012  #my simple code to monitor attack and deal with it  #Windows Server 2008 Logon Type  #8: NetworkCleartext, i.e. FTP  #10: RemoteInteractive, i.e. RDP    $tick = 0;  "Start to run at: " + (get-date);    $regex1 = [regex] "192\.168\.100\.(?:101|102):3389\s+(\d+\.\d+\.\d+\.\d+)";  $regex2 = [regex] "Source Network Address:\t(\d+\.\d+\.\d+\.\d+)";    while($True) {   $blacklist = @();     "Running... (tick:" + $tick + ")"; $tick+=1;    #Port 3389  $a = @()  netstat -no | Select-String ":3389" | ? { $m = $regex1.Match($_); `    $ip = $m.Groups[1].Value; if ($m.Success -and $ip -ne "10.0.0.1") {$a = $a + $ip;} }  if ($a.count -gt 0) {    $ips = get-eventlog Security -Newest 1000 | Where-Object {$_.EventID -eq 4625 -and $_.Message -match "Logon Type:\s+10"} | foreach { `      $m = $regex2.Match($_.Message); $ip = $m.Groups[1].Value; $ip; } | Sort-Object | Tee-Object -Variable list | Get-Unique    foreach ($ip in $a) { if ($ips -contains $ip) {      if (-not ($blacklist -contains $ip)) {        $attack_count = ($list | Select-String $ip -SimpleMatch | Measure-Object).count;        "Found attacking IP on 3389: " + $ip + ", with count: " + $attack_count;        if ($attack_count -ge 20) {$blacklist = $blacklist + $ip;}      }      }    }  }      #FTP  $now = (Get-Date).AddMinutes(-5); #check only last 5 mins.     #Get-EventLog has built-in switch for EventID, Message, Time, etc. but using any of these it will be VERY slow.  
$count = (Get-EventLog Security -Newest 1000 | Where-Object {$_.EventID -eq 4625 -and $_.Message -match "Logon Type:\s+8" -and `              $_.TimeGenerated.CompareTo($now) -gt 0} | Measure-Object).count;  if ($count -gt 50) #threshold  {     $ips = @();     $ips1 = dir "C:\inetpub\logs\LogFiles\FPTSVC2" | Sort-Object -Property LastWriteTime -Descending `       | select -First 1 | gc | select -Last 200 | where {$_ -match "An\+error\+occured\+during\+the\+authentication\+process."} `        | Select-String -Pattern "(\d+\.\d+\.\d+\.\d+)" | select -ExpandProperty Matches | select -ExpandProperty value | Group-Object `        | where {$_.Count -ge 10} | select -ExpandProperty Name;       $ips2 = dir "C:\inetpub\logs\LogFiles\FTPSVC3" | Sort-Object -Property LastWriteTime -Descending `       | select -First 1 | gc | select -Last 200 | where {$_ -match "An\+error\+occured\+during\+the\+authentication\+process."} `        | Select-String -Pattern "(\d+\.\d+\.\d+\.\d+)" | select -ExpandProperty Matches | select -ExpandProperty value | Group-Object `        | where {$_.Count -ge 10} | select -ExpandProperty Name;     $ips += $ips1; $ips += $ips2; $ips = $ips | where {$_ -ne "10.0.0.1"} | Sort-Object | Get-Unique;         foreach ($ip in $ips) {       if (-not ($blacklist -contains $ip)) {        "Found attacking IP on FTP: " + $ip;        $blacklist = $blacklist + $ip;       }     }  }        #Firewall change <# $current = (netsh advfirewall firewall show rule name="MY BLACKLIST" | where {$_ -match "RemoteIP"}).replace("RemoteIP:", "").replace(" ","").replace("/255.255.255.255",""); #inside $current there is no \r or \n need remove. foreach ($ip in $blacklist) { if (-not ($current -match $ip) -and -not ($ip -like "10.0.0.*")) {"Adding this IP into firewall blocklist: " + $ip; $c= 'netsh advfirewall firewall set rule name="MY BLACKLIST" new RemoteIP="{0},{1}"' -f $ip, $current; Invoke-Expression $c; } } #>    foreach ($ip in $blacklist) {    $fw=New-object –comObject HNetCfg.FwPolicy2; # http://blogs.technet.com/b/jamesone/archive/2009/02/18/how-to-manage-the-windows-firewall-settings-with-powershell.aspx    $myrule = $fw.Rules | where {$_.Name -eq "MY BLACKLIST"} | select -First 1; # Potential bug here?    if (-not ($myrule.RemoteAddresses -match $ip) -and -not ($ip -like "10.0.0.*"))      {"Adding this IP into firewall blocklist: " + $ip;         $myrule.RemoteAddresses+=(","+$ip);      }  }    Wait-Event -Timeout 30 #pause 30 secs    } # end of top while loop.   Further points: 1, I suppose the server is listening on port 3389 on server IP: 192.168.100.101 and 192.168.100.102, you need to replace that with your real IP. 2, I suppose you are Remote Desktop to this server from a workstation with IP: 10.0.0.1. Please replace as well. 3, The threshold for 3389 attack is 20, you don’t want to block yourself just because you typed your password wrong 3 times, you can change this threshold by your own reasoning. 4, FTP is checking the log for attack only to the last 5 mins, you can change that as well. 5, I suppose the server is serving FTP on both IP address and their LOG path are C:\inetpub\logs\LogFiles\FPTSVC2 and C:\inetpub\logs\LogFiles\FPTSVC3. Change accordingly. 6, FTP checking code is only asking for the last 200 lines of log, and the threshold is 10, change as you wish. 7, the code runs in a loop, you can set the loop time at the last line. 
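One practical prerequisite before the script’s first run: the “MY BLACKLIST” rule mentioned in the prerequisites at the top of the post has to exist already, since the script only ever edits that rule’s RemoteAddresses list. The one-off setup command below is a sketch, not part of the original post; the placeholder address is an assumption, and it is only there because omitting remoteip would make the rule block every address:

# Run once from an elevated prompt. 255.255.255.254 is a throwaway placeholder entry;
# without a remoteip value the rule would default to blocking all inbound traffic.
netsh advfirewall firewall add rule name="MY BLACKLIST" dir=in action=block remoteip=255.255.255.254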
To run this code, copy and paste it into your editor, finish all the editing, get it onto your server, open a CMD window, and type powershell.exe –file your_powershell_file_name.ps1. It will start running, and you can press Ctrl-C to break out of it. This is what you see when it’s running. This is what it looks like when it detects an attack and adds the firewall rule.

Regarding the design of the code:
1, There are many ways you can detect an attack, but adding an IP into a block rule is no small thing; you need to think hard before doing it. Reasons for that include: you don’t want to block yourself, and you don’t want to block your customers/users, i.e. the good guys.
2, Thus for each service/port, I double-check. For 3389, the address first needs to show up in netstat.exe, then in the Event log; for FTP, I first check the Event log, then the FTP log files.
3, At three places I need to make sure I’m not adding myself into the block rule: –ne with a single IP, –like with a subnet.

Now the final bit:
1, The code will stop working after a while (depending on how heavily you are attacked, that could be weeks, months, or days?!). It will throw a red error message in CMD. Don’t panic, it does no harm, but it also no longer blocks new attacks. THE REASON is not confirmed with MS people: the COM object that manages the firewall will only accept a list of IP addresses up to a length of around 32KB, I think, and once it reaches that limit you get the error message.
2, This is in fact my second solution, using the COM object; the first solution is still in the comment block for your reference. It uses netsh, and it fails because, being run from CMD, you can only pass it a list of IPs up to about 8KB.
3, I haven’t built the workaround yet. Some ideas include: wrap that RemoteAddresses setting line with error checking, and once it reaches the limit, use the newly detected IPs as the list instead of appending to it (a rough sketch of this idea appears right after this post). This basically resets your block rule to ground zero and loses the previous bad IPs. That is less harmful than it sounds, because if, after a certain period has passed, any of these bad IPs still hasn’t repented and keeps attacking you, it only gets 30 seconds or 20 guesses at your password before you block it again. And there is the benefit that a bad IP may return to good hands, so you are not blocking a potential customer or your CEO’s home PC forever just because it was once a zombie. Thus the Zen of blocking: never block any IP for too long.
4, But if you insist on blocking the ugly forever, my other ideas include: call MS support and ask them how to set an arbitrary-length list of IP addresses in a rule; at least in my experience on the forum, they don’t know and they don’t care, because they think dynamic blocking should be done by some expensive hardware. Or, from a programming perspective, you can create a new rule once the old one is full, so you end up with MY BLACKLIST1, MY BLACKLIST2, MY BLACKLIST3, … etc. Once in a while you can compile them together and start a business selling your blacklist on the market!

Enjoy the code! p.s. (PowerShell is REALLY REALLY GREAT!)
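A rough sketch of workaround idea 3 above, under the assumption that hitting the size limit surfaces as a catchable exception on the RemoteAddresses assignment; treat it as a starting point rather than a tested fix:

# Sketch of workaround idea 3: append as before, but if the rule looks full,
# reset RemoteAddresses to the freshly detected IPs instead of appending.
# $blacklist comes from the surrounding script loop.
$fw     = New-Object -ComObject HNetCfg.FwPolicy2
$myrule = $fw.Rules | Where-Object { $_.Name -eq "MY BLACKLIST" } | Select-Object -First 1
foreach ($ip in $blacklist) {
    if (($myrule.RemoteAddresses -match [regex]::Escape($ip)) -or ($ip -like "10.0.0.*")) { continue }
    try {
        $myrule.RemoteAddresses += ("," + $ip)
    }
    catch {
        "Block rule appears to be full; resetting it to the current batch: " + $ip
        $myrule.RemoteAddresses = $ip    # lose the old list, keep blocking the fresh attackers
    }
}

As the post says, this trades history for simplicity: old entries are dropped, but any IP that keeps attacking re-blocks itself within a loop or two.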

    Read the article

  • CodePlex Daily Summary for Friday, May 28, 2010

    CodePlex Daily Summary for Friday, May 28, 2010New ProjectsBang: BangBox Office: Event Management for Community Theater Groups: Box Office is an event management web application to help theater groups manage & promote their shows. Manage performance schedules, sell tickets, ...CellsOnWeb: El espacio de las células del Programa Académico Microsoft en Argentina. CRM 4.0 Plugin Queue Item Counter: This is a crm 4.0 plugin to count queue items in each folder and display the number at the end of the name. For example, if the queue name is "Tes...Date Calculator: Date Calculator is a small desktop utility developed using Windows Forms .NET technology. This utility is analogous to the "Date calculation" modul...Enterprise Library Investigate: Enterprise Library Investigate ProjecteProject Management: Ứng dụng nền tảng web hỗ trợ quản lí và giám sát tiến độ dự án của tổ chức doanh nghiệp.Fiddler TreeView Panel Extension: Extension for Fiddler, to display the session information in a TreeView panel instead of the default ListBox, so it groups the information logicall...Git Source Control Provider: Git Source Control Provider is a Visual Studio Plug-in that integrates Git with Visual Studio.InspurProjects: Project on Inspur Co.Kryptonite: The Kryptonite project aims to improve development of websites based on the Kentico CMS. MLang .NET Wrapper: Detect the encoding of a text without BOM (Byte Order Mask) and choose the best Encoding for persistence or network transport of textMondaze: Proof of concept using Windows Azure.MultipointControls: A collection of controls that applied Windows Multipoint Mouse SDK. Windows Multipoint Mouse SDK enable app to have multiple mice interact simultan...Mundo De Bloques: "Mundo de bloques" makes it easier for analists to find the shortest way between two states in a problem using an heuristic function for Artificial...MyRPGtests: Just some tests :)OffInvoice Add-in for MS Office 2010: Project Description: The project it's based in the ability to extend funtionality in the Microsoft Office 2010 suite.OpenGraph .NET: A C# client for the Facebook Graph API. Supports desktop, web, ASP.NET MVC, and Silverlight connections and real-time updates. PLEASE NOTE: I dis...Portable Extensible Metadata (PEM) Data Annotation Generator: This project intends to help developers who uses PEM - Portable Extensible Metadata for Entity Framework generating Data Annotation information fro...Production and sale of plastic window systems: Automation company produces window design, production and sale of plastic window systems, management of sales contracts and their execution, print ...Renjian Storm (Renjian Image Viewer Uploader): Renjian Image Viewer UploaderShark Web Intelligence CMS: Shark Web Intelligence Inc. Content Management System.Shuffleboard Game for Windows Phone 7: This is a sample Shuffleboard game written in Silverlight for Windows Phone 7. It demonstrates physics, procedural animation, perspective transform...Silverlight Property Grid: Visual Studio Style PropertyGrid for Silverlight.SvnToTfs: Simple tool that migrates every Subversion revision toward Team Foundation Server 2010. It is developed in C# witn a WPF front-end.Tamias: Basic Cms Mvc Contrib Portable Area: The goal of this project is to have a easy-to-integrate basic cms for ASP.NET MVC applications based on MVC Contrib Portable Areas.TwitBy: TwitBy is a Twitter client for anyone who uses Twitter. It's easy to use and all of the major features are there. More features to come. 
H...Under Construction: A simple site that can be used as a splash for sites being upgraded or developed. UO Editor: The Owner & Organisation Editor makes it easy to view and edit the names of the registered owner and registered organization for your Windows OS. N...webform2010: this is the test projectWireless Network: ssWiX Toolset: The Windows Installer XML (WiX) is a toolset that builds Windows installation packages from XML source code. The toolset supports a command line en...Xna.Extend: A collection of easy to use Xna components for aiding a game programmer in developing thee next big thing. I plan on using the components from this...New ReleasesA Guide to Parallel Programming: Drop 4 - Guide Preface, Chapters 1 - 5, and code: This is Drop 4 with Guide Preface, Chapters 1 - 5, and References, and the accompanying code samples. This drop requires Visual Studio 2010 Beta 2 ...Ajax Toolkit for ASP.NET MVC: MAT 1.1: MAT 1.1Community Forums NNTP bridge: Community Forums NNTP Bridge V09: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release solves ...Community Forums NNTP bridge: Community Forums NNTP Bridge V10: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release has add...Community Forums NNTP bridge: Community Forums NNTP Bridge V11: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release has add...CSS 360 Planetary Calendar: Beta Release: =============================================================================== Beta Release Version: 0.2 Description: This is the beta release de...Date Calculator: DateCalculator v1.0: This is the first release and as far as I know this is a stable version.eComic: eComic 2010.0.0.4: Version 2010.0.0.4 Change LogFixed issues in the "Full Screen Control Panel" causing it to lack translucence Added loupe magnification control ...Expression Encoder Batch Processor: Runtime Application v0.2: New in this version: Added more error handling if files not exist. Added button/feature to quit after current encoding job. Added code to handl...Fiddler TreeView Panel Extension: FiddlerTreeViewPanel 0.7: Initial compiled version of the assembly, ready to use. Please refer to http://fiddlertreeviewpanel.codeplex.com/ for instructions and installation.Gardens Point LEX: Gardens Point LEX v1.1.4: The main distribution is a zip file. This contains the binary executable, documentation, source code and the examples. ChangesVersion 1.1.4 corre...Gardens Point Parser Generator: Gardens Point Parser Generator v1.4.1: Version 1.4.1 differs from version 1.4.0 only in containing a corrected version of a previously undocumented feature which allows the generation of...IsWiX: IsWiX 1.0.264.0: Build 1.0.264.0 - built against Fireworks 1.0.264.0. 
Adds support for autogenerating the SourceDir prepreprocessor variable and gives user choice t...Matrix: Matrix 0.5.2: Updated licenseMesopotamia Experiment: Mesopotamia 1.2.90: Release Notes - Ugraded to Microsoft Robotics Developer Studio 2008 R3 Bug Fixes - Fix to keep any sole organisms that penetrate to the next fitne...Microsoft Crm 4.0 Filtered Lookup: Microsoft Crm 4.0 Filtered Lookup: How to use: Allow passing custom querystring values: Create a DWORD registry key named [DisableParameterFilter] under [HKEY_LOCAL_MACHINE\SOFTWAR...MSBuild Extension Pack: May 2010: The MSBuild Extension Pack May 2010 release provides a collection of over 340 MSBuild tasks. A high level summary of what the tasks currently cover...MultiPoint Vote: MultiPointVote v.1: This accepts user inputs: number of participants, poll/survey title and the list of options A text file containing the items listed line per line...Mundo De Bloques: Mundo de Bloques, Release 1: "Mundo de bloques" makes it easier for analists to find the shortest way between two states in a problem using an heuristic function for Artificial...OffInvoice Add-in for MS Office 2010: OffInvoice for Office 2010 V1.0 Installer: Add-in for MS Word 2010 or MS Excel 2010 to allow the management (issuing, visualization and reception) of electronic invoices, based in the XML fo...OpenGraph .NET: 0.9.1 Beta: This is the first public release of OpenGraph .NET.patterns & practices: Composite WPF and Silverlight: Prism v2.2 - May 2010 Release: Composite Application Guidance for WPF and Silverlight - May 2010 Release (Prism V2.2) The Composite Application Guidance for WPF and Silverlight ...Portable Extensible Metadata (PEM) Data Annotation Generator: Release 49376: First release.Production and sale of plastic window systems: Yanuary 2009: NOTEBefore loading program, make sure you have installed MySQL and created DataBase that store in Source Code (look at below) Where Is The Source?...PROGRAMMABLE SOFTWARE DEVELOPMENT ENVIRONMENT: PROGRAMMABLE SOFTWARE DEVELOPMENT ENVIRONMENT--3.2: The current version of the Programmable Software Development Environment has the capability of reading an optional text file in each source develop...Rapidshare Episode Downloader: RED 0.8.6: - Fixed Edit form to actually save the data - Added Bypass Validation to enable future episodes - Added Search parameter to Edit form - Added refr...Renjian Storm (Renjian Image Viewer Uploader): Renjian Storm 0.6: 人间风暴 v0.6 稳定版sELedit: sELedit v1.1b: + Fixed: when export and import items to text files, there was a bug with "NULL" bytes in the unicode stringShake - C# Make: Shake v0.1.21: Changes: FileTask CopyDir method modified, see documentationSharePoint Labs: SPLab7001A-ENU-Level100: SPLab7001A-ENU-Level100 This SharePoint Lab will teach how to analyze and audit WSP files. WSP files are somewhere in a no man's land between ITPro...SharePoint Rsync List: 1.0.0.3: Fix spcontext dispose bug in menu try and run jobs only on central admin server mark a single file failure if file not copied don't delete destinat...Shuffleboard Game for Windows Phone 7: Shuffleboard 1.0.0.1: Source code, solution files, and assets.Software Is Hardwork: Sw. Is Hw. Lib. 3.0.0.x+04: Sw. Is Hw. Lib. 3.0.0.x+04SoulHackers Demon Unite(Chinese version): WPFClient pre alpha: can unite 2, 3 or more demons. can un-unite 1 demon to 2 demon (no triple un-unite yet).Team Deploy: Team Deploy 2010 R1: This is the initial release for Team Deploy 2010 for TFS Team Build 2010. 
All features from Team Build 2.x are functional in this version. Comple...Under Construction: Under Construction: All Files required to show under construction page. The Page will pull through the Domain name that the site is being run on this allows you to use...Unit Driven: Version 0.0.5: - Tests nested by namespace parts. - Run buttons properly disabled based on currently running tests. - Timeouts for async tests enabled.UO Editor: UO Editor v1.0: Initial ReleaseVCC: Latest build, v2.1.30527.0: Automatic drop of latest buildWeb Service Software Factory Contrib: Import WSDL 2010: Generate Service Contract models from existing WSDL documents for Web Service Software Factory 2010. Usage: Install the vsix and right click on a S...Most Popular ProjectsRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryMicrosoft SQL Server Community & SamplesPHPExcelASP.NETMost Active ProjectsAStar.netpatterns & practices – Enterprise LibraryGMap.NET - Great Maps for Windows Forms & PresentationSqlServerExtensionsBlogEngine.NETRawrpatterns & practices: Windows Azure Security GuidanceCodeReviewCustomer Portal Accelerator for Microsoft Dynamics CRMIonics Isapi Rewrite Filter

    Read the article

< Previous Page | 201 202 203 204 205 206 207 208  | Next Page >