Search Results

Search found 1158 results on 47 pages for 'rahul g i hate unicorns'.


  • An "elegant" way of identifying a field?

    - by Alix
    Hi. I'm writing a system that underlies programmer applications and that needs to detect their access to certain data. I can mostly do so with properties, like this: public class NiceClass { public int x { get; set; } } Then I go in and tweak the get and set accessors so that they handle the accesses appropriately. However this requires that the users (application programmers) define all of their data as properties. If the users want to use pre-existing classes that have "normal" fields (as opposed to properties), I cannot detect those accesses. Example: public class NotSoNiceClass { public int y; } I cannot detect accesses to y. However, I want to allow the use of pre-existing classes. As a compromise the users are responsible for notifying me whenever an access to that kind of data occurs. For example: NotSoNiceClass notSoNice; ... Write(notSoNice.y, 0); // (as opposed to notSoNice.y = 0;) Something like that. Believe me, I've researched this very thoroughly and even directly analysing the bytecode to detect accesses isn't reliable due to possible indirections, etc. I really do need the users to notify me. And now my question: could you recommend an "elegant" way to perform these notifications? (Yes, I know this whole situation isn't "elegant" to begin with; I'm trying not to make it worse ;) ). How would you do it? This is a problem for me because actually the situation is like this: I have the following class: public class SemiNiceClass { public NotSoNiceClass notSoNice { get; set; } public int z { get; set; } } If the user wants to do this: SemiNiceClass semiNice; ... semiNice.notSoNice.y = 0; They must instead do something like this: semiNice.Write("notSoNice").y = 0; Where Write will return a clone of notSoNice, which is what I wanted the set accessor to do anyway. However, using a string is pretty ugly: if later they refactor the field they'll have to go over their Write("notSoNice") accesses and change the string. How can we identify the field? I can only think of strings, ints and enums (i.e., ints again). But: We've already discussed the problem with strings. Ints are a pain. They're even worse because the user needs to remember which int corresponds to which field. Refactoring is equally difficult. Enums (such as NOT_SO_NICE and Z, i.e., the fields of SemiNiceClass) ease refactoring, but they require the user to write an enum per class (SemiNiceClass, etc), with a value per field of the class. It's annoying. I don't want them to hate me ;) So why, I hear you ask, can we not do this (below)? semiNice.Write(semiNice.notSoNice).y = 0; Because I need to know what field is being accessed, and semiNice.notSoNice doesn't identify a field. It's the value of the field, not the field itself. Sigh. I know this is ugly. Believe me ;) I'll greatly appreciate suggestions. Thanks in advance! (Also, I couldn't come up with good tags for this question. Please let me know if you have better ideas, and I'll edit them)
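    One commonly suggested way to keep such notifications refactoring-safe (offered here as a sketch, not something from the original post) is to identify the member with a lambda and let an expression tree recover its name, so the compiler and refactoring tools track the field even though Write() still receives a string:

        using System;
        using System.Linq.Expressions;

        public static class FieldRef
        {
            // Recovers the member name from a lambda such as s => s.notSoNice,
            // so callers never spell the field name as a string literal.
            public static string NameOf<T, TMember>(Expression<Func<T, TMember>> selector)
            {
                return ((MemberExpression)selector.Body).Member.Name;
            }
        }

        // Hypothetical usage, refactor-safe where Write("notSoNice") is not:
        // semiNice.Write(FieldRef.NameOf((SemiNiceClass s) => s.notSoNice)).y = 0;

    Renaming the field then updates the lambda automatically, at the cost of building a small expression tree per call.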

    Read the article

  • C#/.NET Little Wonders: Tuples and Tuple Factory Methods

    - by James Michael Hare
Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can really help improve your code by making it easier to write and maintain.  This week, we look at the System.Tuple class and the handy factory methods for creating a Tuple by inferring the types. What is a Tuple? The System.Tuple is a class that tends to inspire a reaction in one of two ways: love or hate.  Simply put, a Tuple is a data structure that holds a specific number of items of a specific type in a specific order.  That is, a Tuple<int, string, int> is a tuple that contains exactly three items: an int, followed by a string, followed by an int.  The sequence is important not only to distinguish between two members of the tuple with the same type, but also for comparisons between tuples.  Some people tend to love tuples because they give you a quick way to combine multiple values into one result.  This can be handy for returning more than one value from a method (without using out or ref parameters), or for creating a compound key to a Dictionary, or any other purpose you can think of.  They can be especially handy when passing a series of items into a call that only takes one object parameter, such as passing an argument to a thread's startup routine.  In these cases, you do not need to define a class; simply create a tuple containing the types you wish to return, and you are ready to go! On the other hand, there are some people who see tuples as a crutch in object-oriented design.  They may view the tuple as a very watered-down class with very little inherent semantic meaning.  As an example, what if you saw this in a piece of code: 1: var x = new Tuple<int, int>(2, 5); What are the contents of this tuple?  If the tuple isn't named appropriately, and if the contents of each member are not self-evident from the type, this can be a confusing question.  The people who tend to be against tuples would rather you explicitly code a class to contain the values, such as: 1: public sealed class RetrySettings 2: { 3: public int TimeoutSeconds { get; set; } 4: public int MaxRetries { get; set; } 5: } Here, the meaning of each int in the class is much clearer, but it's a bit more work to create the class and can clutter a solution with extra classes. So, what's the correct way to go?  That's a tough call.  You will have people who will argue quite well for one or the other.  For me, I consider the Tuple to be a tool that makes it easy to collect values together.  There are times when I just need to combine items for a key or a result, in which case the tuple is short-lived and so the meaning isn't easily lost, and I feel this is a good compromise.  If the scope of the collection of items, though, is more application-wide, I tend to favor creating a full class. Finally, it should be noted that tuples are immutable.  That means they are assigned a value at construction, and that value cannot be changed.  Of course, if the tuple contains an item of a reference type, this means that the reference is immutable and not the item referred to. Tuples from 1 to N Tuples come in all sizes: you can have as few as one element in your tuple, or as many as you like.  However, since C# generics can't have an infinite generic type parameter list, any items after 7 have to be collapsed into another tuple, as we'll show shortly. 
So when you declare your tuple from sizes 1 (a 1-tuple or singleton) to 7 (a 7-tuple or septuple), simply include the appropriate number of type arguments: 1: // a singleton tuple of integer 2: Tuple<int> x; 3:  4: // or more 5: Tuple<int, double> y; 6:  7: // up to seven 8: Tuple<int, double, char, double, int, string, uint> z; Anything eight and above, and we have to nest tuples inside of tuples.  The last element of the 8-tuple is the generic type parameter Rest; this is special in that the Tuple checks to make sure at runtime that the type is a Tuple.  This means that a simple 8-tuple must nest a singleton tuple (one of the good uses for a singleton tuple, by the way) for the Rest property. 1: // an 8-tuple 2: Tuple<int, int, int, int, int, double, char, Tuple<string>> t8; 3:  4: // a 9-tuple 5: Tuple<int, int, int, int, double, int, char, Tuple<string, DateTime>> t9; 6:  7: // a 16-tuple 8: Tuple<int, int, int, int, int, int, int, Tuple<int, int, int, int, int, int, int, Tuple<int,int>>> t16; Notice that on the 16-tuple we had to have a nested tuple in the nested tuple.  Since the tuple can only support up to seven items, and then a rest element, that means that if the nested tuple needs more than seven items you must nest in it as well.  Constructing tuples Constructing tuples is just as straightforward as declaring them.  That said, you have two distinct ways to do it.  The first is to construct the tuple explicitly yourself: 1: var t3 = new Tuple<int, string, double>(1, "Hello", 3.1415927); This creates a triple that has an int, string, and double and assigns the values 1, "Hello", and 3.1415927 respectively.  Make sure the order of the arguments supplied matches the order of the types!  Also notice that we can't half-assign a tuple or create a default tuple.  Tuples are immutable (you can't change the values once constructed), so you must provide all values at construction time. Another way to easily create tuples is to do it implicitly using the System.Tuple static class's Create() factory methods.  These methods (much like C++'s std::make_pair function) will infer the types from the method call so you don't have to type them in.  This can dramatically reduce the amount of typing required, especially for complex tuples! 1: // this 4-tuple is typed Tuple<int, double, string, char> 2: var t4 = Tuple.Create(42, 3.1415927, "Love", 'X'); Notice how much easier it is to use the factory methods and infer the types?  This can cut down on typing quite a bit when constructing tuples.  The Create() factory method can construct from a 1-tuple (singleton) to an 8-tuple (octuple), which of course will be an octuple where the last item is a singleton, as we described before in nested tuples. Accessing tuple members Accessing a tuple's members is simplicity itself… mostly.  The properties for accessing up to the first seven items are Item1, Item2, …, Item7.  If you have an octuple or beyond, the final property is Rest, which will give you the nested tuple, which you can then access in a similar manner.  Once again, keep in mind that these are read-only properties and cannot be changed. 
1: // for septuples and below, use the Item properties 2: var t1 = Tuple.Create(42, 3.14); 3:  4: Console.WriteLine("First item is {0} and second is {1}", 5: t1.Item1, t1.Item2); 6:  7: // for octuples and above, use Rest to retrieve nested tuple 8: var t9 = new Tuple<int, int, int, int, int, int, int, 9: Tuple<int, int>>(1,2,3,4,5,6,7,Tuple.Create(8,9)); 10:  11: Console.WriteLine("The 8th item is {0}", t9.Rest.Item1); Tuples are IStructuralComparable and IStructuralEquatable Most of you know about IComparable and IEquatable; what you may not know is that there are two sister interfaces to these that were added in .NET 4.0 to help support tuples.  These, IStructuralComparable and IStructuralEquatable, make it easy to compare two tuples for equality and ordering.  This is invaluable for sorting, and makes it easy to use tuples as a compound key to a dictionary (one of my favorite uses)! Why is this so important?  Remember when we said that some folks think tuples are too generic and you should define a custom class?  This is all well and good, but if you want to design a custom class that can automatically order itself based on its members and build a hash code for itself based on its members, it is no longer a trivial task!  Thankfully the tuple does this all for you through the explicit implementations of these interfaces. For equality, two tuples are equal if all elements are equal between the two tuples, that is, if t1.Item1 == t2.Item1 and t1.Item2 == t2.Item2, and so on.  For ordering, it's a little more complex in that it compares the two tuples one item at a time starting at Item1, and sees which one has a smaller Item1.  If one has a smaller Item1, it is the smaller tuple.  However, if both Item1 are the same, it compares Item2 and so on. For example: 1: var t1 = Tuple.Create(1, 3.14, "Hi"); 2: var t2 = Tuple.Create(1, 3.14, "Hi"); 3: var t3 = Tuple.Create(2, 2.72, "Bye"); 4:  5: // true, t1 == t2 because all items are == 6: Console.WriteLine("t1 == t2 : " + t1.Equals(t2)); 7:  8: // false, t2 != t3 because at least one item is different 9: Console.WriteLine("t2 == t3 : " + t2.Equals(t3)); The actual implementation of IComparable, IEquatable, IStructuralComparable, and IStructuralEquatable is explicit, so if you want to invoke the methods defined there you'll have to manually cast to the appropriate interface: 1: // true because t1.Item1 < t3.Item1; if they had been the same it would check Item2 and so on 2: Console.WriteLine("t1 < t3 : " + (((IComparable)t1).CompareTo(t3) < 0)); So, as I mentioned, the fact that tuples are automatically equatable and comparable (provided the types you use define equality and comparability as needed) means that we can use tuples for compound keys in hashing and ordering containers like Dictionary and SortedList: 1: var tupleDict = new Dictionary<Tuple<int, double, string>, string>(); 2:  3: tupleDict.Add(t1, "First tuple"); 4: tupleDict.Add(t3, "Another tuple"); // note: adding t2 would throw, since it equals t1 and keys must be unique Because IStructuralEquatable defines GetHashCode(IEqualityComparer), and Tuple's implementation creates this hash code by combining the hash codes of the members, this makes using the tuple as a complex key quite easy!  
For example, let's say you are creating account charts for a financial application, and you want to cache those charts in a Dictionary based on the account number and the number of days of chart data (for example, a 1-day chart, 1-week chart, etc.): 1: // the account number (string) and number of days (int) are key to get cached chart 2: var chartCache = new Dictionary<Tuple<string, int>, IChart>(); Summary The System.Tuple, like any tool, is best used where it will achieve a greater benefit.  I wouldn't advise overusing them on objects with a large scope, or it can become difficult to maintain.  However, when used properly in a well-defined scope they can make your code cleaner and easier to maintain by removing the need for extraneous POCOs and custom property hashing and ordering. They are especially useful in defining compound keys to IDictionary implementations and for returning multiple values from methods, or passing multiple values to a single object parameter.
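    As a quick illustration of that cache in use (my own example, not from the article), a lookup just rebuilds the same tuple key:

        // look up the one-week chart for a (hypothetical) account number
        IChart chart;
        if (chartCache.TryGetValue(Tuple.Create("ACCT-1234", 7), out chart))
        {
            // cache hit: reuse the cached chart rather than rebuilding it
        }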

    Read the article

  • uiscrollview not switching image subviews

    - by nickthedude
I'm building a comic viewer app that consists of two view controllers: the root view controller displays a view where a user decides which comic they want to read by pressing a button, and the second view controller actually displays the comic as a uiscrollview with a toolbar and a title at the top. The problem I am having is that the comic image panels never change from whichever comic you viewed first, even if you select another comic afterwards. The way I set it up (and I admit it's not exactly MVC, so please don't hate) is that each comic uiscrollview consists of x number of jpg images, where each comic set's image names share a common prefix followed by a number, like 'funny1.jpg', 'funny2.jpg', 'funny3.jpg' and 'soda1.jpg', 'soda2.jpg', 'soda3.jpg', etc. So when a user selects a comic to view in the root controller, it makes a call to the delegate and sets ivars on the comicviewcontroller instance that belongs to the delegate (mainDelegate.comicViewController.property): I set the number of panels in that comic, the comic name for the title label, and the image prefix. The number of images changes (or at least the number that you can scroll through), and the title changes, but for some reason the images are the same ones as whatever comic you clicked on initially. I'm basing this whole app off of the 'scrolling' code sample from Apple. I thought that if I added a viewWillAppear:(BOOL)animated call to the comicViewController every time the user clicked the button it would fix it, but it didn't; after all, that is where the scrollview is laid out. Anyway, here is some code from each of the two controllers: RootController: -(IBAction) launchComic2{ AppDelegate *mainDelegate = [(AppDelegate *) [UIApplication sharedApplication] delegate]; mainDelegate.myViewController.comicPageCount = 3; mainDelegate.myViewController.comicTitle.text = @"\"Death by ETOH\""; mainDelegate.myViewController.comicImagePrefix = @"etoh"; [mainDelegate.myViewController viewWillAppear:YES]; [mainDelegate.window addSubview: mainDelegate.myViewController.view]; comicViewController: -(void) viewWillAppear:(BOOL)animated { self.view.backgroundColor = [UIColor viewFlipsideBackgroundColor]; // 1. setup the scrollview for multiple images and add it to the view controller // // note: the following can be done in Interface Builder, but we show this in code for clarity [scrollView1 setBackgroundColor:[UIColor whiteColor]]; [scrollView1 setCanCancelContentTouches:NO]; scrollView1.indicatorStyle = UIScrollViewIndicatorStyleWhite; scrollView1.clipsToBounds = YES; // default is NO, we want to restrict drawing within our scrollview scrollView1.scrollEnabled = YES; // pagingEnabled property default is NO, if set the scroller will stop or snap at each photo // if you want free-flowing scroll, don't set this property. 
scrollView1.pagingEnabled = YES; // load all the images from our bundle and add them to the scroll view NSUInteger i; for (i = 1; i <= self.comicPageCount; i++) { NSString *imageName = [NSString stringWithFormat:@"%@%d.jpg", self.comicImagePrefix, i]; NSLog(@"%@%d.jpg", self.comicImagePrefix, i); UIImage *image = [UIImage imageNamed:imageName]; UIImageView *imageView = [[UIImageView alloc] initWithImage:image]; // setup each frame to a default height and width, it will be properly placed when we call "updateScrollList" CGRect rect = imageView.frame; rect.size.height = kScrollObjHeight; rect.size.width = kScrollObjWidth; imageView.frame = rect; imageView.tag = i; // tag our images for later use when we place them in serial fashion [scrollView1 addSubview:imageView]; [imageView release]; } [self layoutScrollImages]; // now place the photos in serial layout within the scrollview } - (void)layoutScrollImages { UIImageView *view = nil; NSArray *subviews = [scrollView1 subviews]; // reposition all image subviews in a horizontal serial fashion CGFloat curXLoc = 0; for (view in subviews) { if ([view isKindOfClass:[UIImageView class]] && view.tag > 0) { CGRect frame = view.frame; frame.origin = CGPointMake(curXLoc, 0); view.frame = frame; curXLoc += (kScrollObjWidth); } } // set the content size so it can be scrollable [scrollView1 setContentSize:CGSizeMake((self.comicPageCount * kScrollObjWidth), [scrollView1 bounds].size.height)]; } Any help would be appreciated on this. Nick
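    One likely culprit (a guess from reading the code above, not a confirmed fix) is that viewWillAppear: only ever adds image views, so the first comic's panels are still sitting in the scroll view when the second comic's views are laid out. A minimal sketch of clearing them at the top of viewWillAppear:, before the loop runs:

        // remove the previous comic's panels before adding the new ones
        for (UIView *subview in [scrollView1 subviews]) {
            if ([subview isKindOfClass:[UIImageView class]]) {
                [subview removeFromSuperview];
            }
        }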

    Read the article

  • Chaining CSS classes in IE6 - Trying to find a jQuery solution?

    - by Mike Baxter
Right, perhaps I ask the impossible? I consider myself fairly new to JavaScript and jQuery, but that being said, I have written some fairly complex code recently, so I am definitely getting there... however I am now faced with a rather interesting issue at my current freelance contract. The previous web coder has taken a Grid-960 approach to the HTML and as a result has used chained classes to style many of the elements. The example below is typical of what can be found in the code: <div class='blocks four-col-1 orange highlight'>Some content</div> And in the css there will be different declarations for: (not actual css... but close enough) .blocks {margin-right:10px;} .orange {background-image:url(someimage.jpg);} .highlight {font-weight:bold;} .four-col-1 {width:300px;} and to make matters worse... this is in the CSS: .blocks.orange.highlight {background-color:#dd00ff;} Anyone not familiar with this particular bug can read more on it here: http://www.ryanbrill.com/archives/multiple-classes-in-ie/ It is very real and very annoying. Without wanting to go into the merits of not chaining classes (I told them this, but it is no longer feasible to change their approach... 100 hand-coded pages into a 150-page website, no CMS... sigh) and without the luxury of being able to change the way these blocks are styled... can anyone advise me on the complexity and benefits of my proposed approaches below, or on possible other options that would adequately solve this problem? Potential Solution 1 Using conditional comments, I am considering loading a jQuery script only for IE6 that: reads the class of all divs in a certain section of the page and pushes it to an array; creates empty boxes off screen with only one of the classes applied at a time; reads the applied CSS values for each box; re-applies these styles to the individual box, somehow bearing in mind the order in which they are called and overwriting conflicting instructions as required. Potential Solution 2 Read the class of all divs in a certain section of the page and push to an array; scan the document for links to style sheets; Ajax-grab the stylesheets and traverse them looking for names matching those in the class array; apply styles as needed. Potential Solution 3 Create an IE6-only stylesheet containing the exact style to be applied under a unique name (ie: class='blocks orange highlight' becomes class='blocks-orange-highlight'); traverse the document in IE6 and convert all spaces in class declarations to hyphens, reapplying classes based on the new style name (a rough sketch of this step appears at the end of this post). Summary: Solution 1 allows the people at this company to apply any styles in the future and the script will adjust as needed. However it does not allow for the chained style to be added, only the individual styles... it is also processor-intensive and time-consuming, but also the most likely to be converted into a plugin that could be used the world over. Solution 2 is a potential nightmare to code, but again will allow for an endless number of updates without breaking. Solution 3 will require someone at the company to hardcode the new styles every time they make a change, and if they don't, IE6 will break. Ironically the site, whilst needing to conform to IE6 in a limited manner, does not need to run without JavaScript (they've made the call... have JS or go away), so consider all jQuery and JS solutions to be 'game on'. Did I mention how much I hate IE6? Anyway... any thoughts or comments would be appreciated. 
I will continue to develop my own solution and if I discover one that can be turned into a jQuery plugin I will post it here in the comments. Regards, Mike.
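    A rough sketch of the class-renaming step from Solution 3 (untested against the actual site, and assuming jQuery is already loaded):

        // IE6 only: for each div with multiple classes, add a single hyphenated
        // class so rules like .blocks-orange-highlight in an IE6 stylesheet can match.
        $(function () {
            $('div[class*=" "]').each(function () {
                $(this).addClass(this.className.replace(/\s+/g, '-'));
            });
        });

    Using addClass rather than overwriting className keeps the original single-class rules working alongside the new combined one.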

    Read the article

  • PROJECT HELP NEEDED. SOME BASIC CONCEPTS GREAT CONFUSION BECAUSE OF LACK OF PROPER MATERIAL PLEASE H

    - by user287745
Task: ATTENDANCE RECORDER AND MANAGEMENT SYSTEM IN A DISTRIBUTED ENVIRONMENT ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ An example implementation is needed: a main server in each lab where the operator punches in the attendance of the students. =========================================================== Scenario: a college with 10 departments; all departments have a computer lab with 60-100 computers; the computers within each lab are interconnected, and all computers in any department have to dial a particular number (THE NUMBER GIVEN BY THE COLLEGE INTERNET DEPARTMENT) to get connected to the internet. It is therefore safe to assume that there is a central location to which all the computers in the college are connected. There is a 'student attendance portal' which can be accessed using Internet Explorer; students enter their id and get their attendance record for the labs only. A description of the workflow: 1) the user selects which department and which year has arrived at the lab; 2) the selection returns all the students' names and roll numbers belonging to that department, with a check box to "TICK IF THE STUDENT IS PRESENT"; 3) a SUBMIT BUTTON, when pressed, reads the 'id' of each checkbox to determine the "particular count number of the student"; from that, an id of the student is constructed and that id is inserted with a 'present' mark. (There are also date, time, and much more to normalize the db, avoid conflicts, keep historic records, etc., but that you will have to assume.) Steps taken to this date (please note we are not computer students; we had to select something from some other line as a project! As you will read in my many posts, I have designed small websites just out of liking, and have never done anything official like this): * made the database fully normalized; * made the website which performs the required functions on the database. Testing: deployed the db and site on a free aspspider server and it worked; tested from several computers. Now the problem, please help, thank you! A practical demonstration has to be done within the college network. No internet! We have been assigned a lab - 60 computers - to demonstrate. (Please don't reply that 60 computers is not a big deal and one CPU can manage it; I know that. IT IS A HYPOTHETICAL SITUATION WHERE WE ASSUME THAT 60 IS NOT 60 BUT MORE LIKE 60,000 COMPUTERS.) 1a) Make a web server: yes, IIS; put the files in the www folder and configure the server to run aspx files (although a link to a step-by-step guide will be appreciated). Which version of Windows should I ask for: XP, or Windows Server 2000-something? 2a) Make a database server. (Well, yes, install SQL Server 2005 - okay, but then what? Just put the database file on a PC, share it, and append the connection string to the share?) 3a) Make the site accessible from the remaining computers? http://localhost/sitename ? All users, being operators of the particular lab, have the right to edit, write, or delete (in dispute); thereby any "users" who hate our program can make the database inconsistent by accessing the same record, doing different edits, and then complaining. So how do we prevent this - you know, something like: while the db table is being written to, others can only read but not write? (See the sketch at the end of this post.) One big confusion: IN A DISTRIBUTED ENVIRONMENT - "how to implement this"? Where does "distributed environment" come in! 
Meaning: alright, the labs are in different departments, but the "database server will be one" and the "web server will be one", so what's distributed!?
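    On the "others can only read but not write" worry, one common pattern (sketched here with hypothetical table and column names, since the actual schema isn't shown) is optimistic concurrency with a rowversion column, so conflicting edits fail loudly instead of silently overwriting each other:

        -- Assumes a hypothetical Attendance table with a rowversion column RowVer.
        UPDATE Attendance
        SET    IsPresent = @IsPresent
        WHERE  StudentId = @StudentId
          AND  LabDate   = @LabDate
          AND  RowVer    = @OriginalRowVer;  -- no match if another operator changed the row first

        IF @@ROWCOUNT = 0
            RAISERROR('Record was modified by another operator; reload and retry.', 16, 1);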

    Read the article

  • Different Flavors of Leases Back On

    - by Theresa Hickman
Given the continued interest in the proposed changes to Lease Accounting, I decided to write another entry on this controversial topic with colorful commentary from our resident accounting expert, Seamus Moran. Background (A History Lesson) Back in 1976, the FASB issued FAS 13, “Accounting for Leases,” which permitted a lease to be either an operating lease or a capital (finance) lease. In substance, operating leases are a form of off-balance sheet financing. According to Seamus, operating leases date back to the launch of the Boeing 707 in the 1950s.  Because the aircraft was so much more expensive than previous aircraft, the industry came up with the operating lease concept to accommodate these jet liners that dominated air transport.  How it worked was the bank would buy the plane and lease it to the airline.  Because the bank never controlled or flew the plane, they never placed the asset on their balance sheet, and because the airline never owned the plane, they didn’t place it on their balance sheet either. They simply treated the monthly lease payments as rental expenses on the P&L.   August 2010 Original Lease Accounting Changes In August 2010, FASB and IASB decided to overhaul lease accounting as part of their joint commitment “to insure that investors and other users of financial statements are provided useful, transparent, and complete information about leasing transactions in the financial statements.”  Some say that the current lease accounting standards are broken because they keep assets off the balance sheet, hidden from investors’ view. The original proposal abolished operating leases and only permitted capital leases, where all leases would be recorded on the balance sheet as assets and liabilities. The asset side would reflect the right to use the asset for the leased term, and the liability side would reflect the obligation to make lease payments.   Why Companies Were Freaking Out According to the SEC, the financial impact of the aforementioned lease changes was estimated to add more than $1.3 trillion of operating lease obligations to corporate balance sheets. Many companies in various industries, especially retail, are concerned because the changes are significant and will impact existing leases, with no grandfather clause for existing operating leases. Of course, the banks and airlines I mentioned earlier really hate this because neither wants to report the airplane (now costing around $60 M) as an asset. Regular companies were concerned that they would have to report routine short-term leases of real estate or equipment as fixed assets, even though they were really just longer-term rentals.  One company we spoke to leased roadside billboards, and really did not consider them to be fixed assets in any way. Obviously, these changes would have had a profound and lasting effect on a company’s financial and real estate strategies and significantly impact its financial statements.  Financial statements would show higher depreciation and interest expense with significantly higher total assets and debt. In terms of financial metrics, companies would be negatively impacted: the changes would raise a company’s debt-to-capital ratio to reflect the higher debt compared to equity, hurt return on assets because companies would appear more asset-intensive, and decrease EPS, lowering shareholder ROI. Feb. 2011 Recent Update The comment period on leases closed in December 2010. 
The FASB and the IASB have met several times since then and published their initial responses to the input they received from the various interested parties.  They are “redeliberating” the principles involved in Lease Accounting.  Some of the issues they are looking at include: The core definition of a lease.  This will articulate principles on what is a lease and what is “not-a-lease.” One theory or supposition is that they might define a lease as the transfer of certain but not all major ownership attributes for a certain period of time.  So a year’s lease of an aircraft might be a “lease,” but a year’s lease of half a floor in an office building would be “not-a-lease.”  The ownership attributes transferred from the core owner to the user are different; the airline must maintain, paint, and do whatever it needs to do on the aircraft. However, the office renter will have strictly limited rights in respect to the rented space. The differences between a lease contract and a service contract.  Even if they call them “leases” for the purpose of commercial law, a service contract might not be accounted for as a lease. The accounting to be done by the lessee.  They would define when the bank or landlord would retain the asset on their balance sheet, and perhaps by implication, when the lessee would not need to include the asset on theirs.  So if the finance house keeps the airplane or office on their balance sheet, the tenant doesn’t need to.  I’m not sure that I can draw the opposite conclusion where the finance house doesn’t report but the tenant must. The difference, if any, between a financing lease and other leases, and the implications to the accounting. The present value calculation when renewable terms exist. They have reduced the circumstances in which one must look at the renewable terms of a lease in calculating the present value.  In most circumstances, you will use the lease term rather than the potential renewable term. They held their latest discussion this past week, but the contents of that discussion were not available at the time of my writing this entry.  For more details, the results of the discussions are posted on both the FASB and the IASB websites. Implied Software Changes Whatever the final rules turn out to be, all ERP systems, such as Oracle E-Business Suite, PeopleSoft Enterprise, JD Edwards, and Oracle Hyperion, will need to change their software to accommodate the new rules. The following lists some changes that might have to be made to accounting software depending on what the final standards will be in June 2011: Lease tracking may require modifications to track additional lease details, which might require a centralized repository to maintain. Accounting may need to be modified, as there are many changes to how capital leases and the new “other than finance” leases are accounted for on both the lessee and lessor side.  For example, valuation, amortization, and disclosure will be considerably different, requiring different types of data to be captured. 
Companies may need to modify their chart of accounts depending on how they want to track leases, which could then impact financial reporting and consolidation Business processes may require changes which could then impact internal controls Software applications may need to perform more advanced computations on leases Reports and KPIs may need to reflect new operating metrics Hold Onto Your Seats           Before you redo all your lease agreements and call your software vendors asking when the changes to the software will be made, remember that the rules are not finalized yet, and from appearances, will not reflect the proposals in the exposure draft.  Not only are there objections to putting the operating lease assets on anyone’s balance sheet, there are lots of objections to subjectivity and the data required for the valuation.  According to Seamus, there is huge opposition from New York bankers, the airlines, the EU, the Communist Party of China (since it impacts their exporting business), and Republicans (hearing complaints from small and large businesses). Even if everyone can agree on the proposed changes, 2013 might be the earliest that companies would need to change how they report leases. The Boards will finish their deliberations in April, May or June 2011.  As we’ve seen with other Exposure Drafts, if the changes are minor and the principles met the General Acceptance consensus criteria, the Standard could be finalized at that time.  However, if substantial changes are made, a fresh exposure draft, comment period, and review period might be involved, too. Seamus added an interesting perspective. Even if the proposed changes do pass, don’t you think our customers, such as Boeing, GE Capital, United Airlines, etc. will be clever enough to come up with a new kind of financing arrangement that complies with the new accounting? How about the large retail customers, such as Best Buy and Macerich? Don’t you think they might simply cut deals around retail locations with new contracts that prevent their leases from being capital leases? Instead of blindly adapting the software to meet the principles outlined in the final standard, our software needs to accommodate how businesses will respond to the new rules. We cannot know our customers’ responses until the rules are finalized. Oracle is aware of the potential changes and is staying abreast of the developments through our domain expertise staff, our relationship with customers, our market awareness, and, of course, our relationships with the Big 4. This is part of our normal process with respect to worldwide regulatory compliance. Oracle products have been IFRS and GAAP compliant for years and we will continue to maintain those standards going forward.

    Read the article

  • Our winners- and some BBQ for everyone

    - by Steve Tunstall
Congrats to our two winners for the first two comments on my last entry. Steve from Australia and John Lemon. Steve won since he was the first person over the International Date Line to see the post I made so late after a workday on Friday. So not only does he get to live in a country with the 2nd most beautiful women in the world, but now he gets some cool Oracle Swag, too. (Yes, I live on the beach in southern California, so you can guess where 1st place is for that other contest…Now if Steve happens to live in Manly, we may actually have a tie going…) OK, ok, for everyone else, you can be winners, too. How, you ask? I will make you the envy of every guy and gal in your neighborhood or campsite. What follows is the way to smoke the best ribs you or anyone you know have ever tasted. Follow my instructions and give it a try. People at your party/cookout/campsite will tell you that they’re the best ribs they’ve ever had, and I will let you take all the credit. Yes, I fully realize this post is going to be longer than any post I’ve done yet. But let’s get serious here. Smoking meat is much more important, agreed? :) In all honesty, this is a repeat of another blog I did, so I’m just copying and pasting. Step 1. Get some ribs. I actually really like Costco’s pack. They have both St. Louis and Baby Back. (They are the same ribs, but cut in half down the sides. St. Louis style is the ‘front’ of the ribs closest to the stomach, and ‘Baby back’ is the part of the ribs where it connects to the backbone). I like them both, so here you see I got one pack of each. About 4 racks to a pack. So these two packs for $25 each will feed about 16-20 of my guests. So around 3 bucks a person is a pretty good deal for the best ribs you’ll ever have. Step 2. Prep the ribs the night before you’re going to smoke. You need to trim them to fit your smoker racks, and also take off the membrane and add your rub. Then cover and set in fridge overnight. Here’s how to take off the membrane, which will not break down with heat and smoke like the rest of the meat, so must be removed. Use a butter knife to work in a ways between the membrane and the white bone. Just enough to make room for your finger. Try really hard not to poke through the membrane; you want to keep it whole. See how my gloved fingers can now start to lift up and pull off the membrane? This is what you are trying to do. It’s awesome when the whole thing can come off at once. This one is going great, maybe the best one I’ve ever done. Sometimes, it falls apart and doesn't come off in one nice piece. I hate when that happens. Now, add your rub and pat it down once into the meat with your other hand. My rub is not secret. I got it from my mentor, a competitive BBQ chef who is currently ranked #1 in California and #3 in the nation on the BBQ circuit. He does full-day classes in southern California if anyone is interested in taking his class. Go to www.slapyodaddybbq.com to check him out. I tweaked his rub recipe a tad and made my own. It’s one part Lawry’s, one part sugar, one part Montreal Steak Seasoning, one part garlic powder, one-half part red chili powder, one-half part paprika, and then 1/20th part cayenne. You can adjust that last ingredient, or leave it out. Real cheap stuff you can get at Costco. This lets you make enough rub to last about a year or two. Don’t make it all at once; make a shaker’s worth and use it up before you make more. Place it all in a bowl, mix well, and then add to a shaker like you see here. 
You can get a shaker with medium-sized holes on it at any restaurant supply store or Smart & Final. The kind you see at pizza places for their red pepper flakes works best. Now cover and place in fridge overnight. Step 3. The next day. Ok, I’m ready to go. Get your stuff together. You will need your smoker, some good foil, a can of peach nectar, a bottle of Agave syrup, and a package of brown sugar. You will need this stuff later. I also use a clean spray bottle, and apple juice. Step 4. Make your fire, or turn on your electric smoker. In this example I’m using my portable charcoal smoker. I got this for only $40. I then modified it to be useful. Once modified, these guys actually work very well. Trust me, your food DOES NOT KNOW how expensive your smoker is. Someone who tells you that you need to spend a bunch of money on a smoker is an idiot. I also have an electric smoker that stays in my backyard. It’s cleaner and larger so I can smoke more food. But this little $40 one works great for going camping. Here is what my fire-bowl looks like. I leave a space in the middle open, and place cold charcoal and wood chunks in a circle going outwards. This makes it so when I dump the hot coals down the middle, they will slowly burn outwards, hitting different wood chunks at different times, allowing me to go 4-5 hours without having to even touch my fire. For ribs, I use apple and pecan wood. Pecan works for anything. Apple or any fruit wood is excellent for pork. So now I make my hot charcoal with a chimney only about half-full. I found a great use for that side-burner on my grill that I never use. It makes a fantastic chimney starter. You never use fluids of any kind, nor ever use that stupid charcoal that has lighter fluid built into it. Never, ever, ever. Step 5. Smoke. Add your ribs in the racks and stack them up in your smoker. I have a digital thermometer on a probe that I use to keep track of the temp in the smoker. I just lay the probe on the top rack and shut the lid. This cheap guy is a little harder to hold at the right temperature of around 225 F, so I do have to keep my eye on it more than my electric one or a more expensive charcoal one with the cool gadgets that regulate your temp for you. Every hour, spray apple juice all over your ribs using that spray bottle. After about 3 hours, you should have a very good crust (called the Bark) on your ribs. Once you have the Bark where you want it, carefully remove your ribs and place them in a tray. We are now ready for a very important part to make the flavor. Get a large piece of foil and place one rib section on it. Splash some of the peach nectar on it, and then a drizzle of the Agave syrup. Then, use your gloved hand to pack on some brown sugar. Do this on BOTH sides, and then completely wrap it up TIGHT in the foil. Do this for each rib section, and then place all the wrapped sections back into the smoker for another 4 to 6 hours. This is where the meat will get tender and flavorful. The first three hours is only to make the smoke bark. You don’t need smoke anymore; since the ribs are wrapped, you only need to keep the heat around 225 for the next 4-6 hours. Obviously you don’t spray anymore. Just time and slow heat. Be patient. It’s actually really hard to overdo it. You can let them go longer, and all that will happen is they will get even MORE tender!!! If you take them out too soon, they will be tough. How do you know? Take out one package (use long tongs) and open it up. 
If you grab a bone with your tongs and it just falls apart and breaks away from the rest of the meat, you are done!!! Enjoy!!! Step 6. Eat. It pulls apart like this when it’s done. By the way, smoking tri-tip is way easier. Just rub it with the same rub, and put in your smoker for about 2.5 hours at 250 F. That’s it. Low-maintenance. It comes out like this, with a fantastic smoke ring and amazing flavor. Thanks, and I will put up another good tip, about the ZFSSA, around the end of November. Steve 

    Read the article

  • Rx IObservable buffering to smooth out bursts of events

    - by Dan
I have an Observable sequence that produces events in rapid bursts (ie: five events one right after another, then a long delay, then another quick burst of events, etc.). I want to smooth out these bursts by inserting a short delay between events. Imagine the following diagram as an example: Raw: --oooo--------------ooooo-----oo----------------ooo| Buffered: --o--o--o--o--------o--o--o--o--o--o--o---------o--o--o| My current approach is to generate a metronome-like timer via Observable.Interval() that signals when it's ok to pull another event from the raw stream. The problem is that I can't figure out how to then combine that timer with my raw unbuffered observable sequence. IObservable.Zip() is close to doing what I want, but it only works so long as the raw stream is producing events faster than the timer. As soon as there is a significant lull in the raw stream, the timer builds up a series of unwanted events that then immediately pair up with the next burst of events from the raw stream. Ideally, I want an IObservable extension method with the following function signature that produces the behavior I've outlined above. Now, come to my rescue, StackOverflow :) public static IObservable<T> Buffered<T>(this IObservable<T> src, TimeSpan minDelay) PS. I'm brand new to Rx, so my apologies if this is a trivially simple question... 1. Simple yet flawed approach Here's my initial naive and simplistic solution that has quite a few problems: public static IObservable<T> Buffered<T>(this IObservable<T> source, TimeSpan minDelay) { Queue<T> q = new Queue<T>(); source.Subscribe(x => q.Enqueue(x)); return Observable.Interval(minDelay).Where(_ => q.Count > 0).Select(_ => q.Dequeue()); } The first obvious problem with this is that the IDisposable returned by the inner subscription to the raw source is lost and therefore the subscription can't be terminated. Calling Dispose on the IDisposable returned by this method kills the timer, but not the underlying raw event feed that is now needlessly filling the queue with nobody left to pull events from the queue. The second problem is that there's no way for exceptions or end-of-stream notifications to be propagated through from the raw event stream to the buffered stream - they are simply ignored when subscribing to the raw source. And last but not least, now I've got code that wakes up periodically regardless of whether there is actually any work to do, which I'd prefer to avoid in this wonderful new reactive world. 2. Way overly complex approach To solve the problems encountered in my initial simplistic approach, I wrote a much more complicated function that behaves much like IObservable.Delay() (I used .NET Reflector to read that code and used it as the basis of my function). Unfortunately, a lot of the boilerplate logic such as AnonymousObservable is not publicly accessible outside the System.Reactive code, so I had to copy and paste a lot of code. This solution appears to work, but given its complexity, I'm less confident that it's bug-free. I just can't believe that there isn't a way to accomplish this using some combination of the standard Reactive extensions. I hate feeling like I'm needlessly reinventing the wheel, and the pattern I'm trying to build seems like a fairly standard one.
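    One composition that produces this kind of pacing, built only from standard Rx operators (treat the namespace as an assumption for your Rx version), is to wrap each element in a tiny observable that yields immediately and completes minDelay later, then Concat the results so the gap is enforced while errors and completion still flow through:

        using System;
        using System.Reactive.Linq; // namespace in later Rx releases; early drops differed

        public static class BufferedObservableExtensions
        {
            public static IObservable<T> Buffered<T>(this IObservable<T> source, TimeSpan minDelay)
            {
                // Each element becomes a sequence that emits at once and completes
                // minDelay later; Concat subscribes to them one at a time, which
                // enforces at least minDelay between consecutive elements.
                return source
                    .Select(x => Observable.Empty<T>()
                                           .Delay(minDelay)
                                           .StartWith(x))
                    .Concat();
            }
        }

    Because Delay time-shifts the completion of the empty sequence, each inner observable lasts exactly minDelay, and Concat provides the single-subscription, error-propagating plumbing that the queue-plus-timer version was missing.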

    Read the article

  • Common Live Upgrade problems

    - by user12611829
As I have worked with customers deploying Live Upgrade in their environments, several problems seem to surface over and over. With this blog article, I will try to collect these troubles, as well as suggest some workarounds. If this sounds like the beginnings of a Wiki, you would be right. At present, there is not enough material for one, so we will use this blog for the time being. I do expect new material to be posted on occasion, so if you wish to bookmark it for future reference, a permanent link can be found here. Live Upgrade copies over ZFS root clone This was introduced in Solaris 10 10/09 (u8) and the root of the problem is a duplicate entry in the source boot environment's ICF configuration file. Prior to u8, a ZFS root file system was not included in /etc/vfstab, since the mount is implicit at boot time. Starting with u8, the root file system is included in /etc/vfstab, and when the boot environment is scanned to create the ICF file, a duplicate entry is recorded. Here's what the error looks like. # lucreate -n s10u9-baseline Checking GRUB menu... System has findroot enabled GRUB Analyzing system configuration. Comparing source boot environment file systems with the file system(s) you specified for the new boot environment. Determining which file systems should be in the new boot environment. Updating boot environment description database on all BEs. Updating system configuration files. Creating configuration for boot environment . Source boot environment is . Creating boot environment . Creating file systems on boot environment . Creating file system for in zone on . The error indicator ----- /usr/lib/lu/lumkfs: test: unknown operator zfs Populating file systems on boot environment . Checking selection integrity. Integrity check OK. Populating contents of mount point . This should not happen ------ Copying. Ctrl-C and cleanup If you weren't paying close attention, you might not even know this is an error. The symptoms are lucreate times that are way too long due to the extraneous copy, or the one that alerted me to the problem: the root file system filling up - again thanks to a redundant copy. This problem has already been identified and corrected, and a patch (121431-58 or later for x86, 121430-57 for SPARC) is available. Unfortunately, this patch has not yet made it into the Solaris 10 Recommended Patch Cluster. Applying the prerequisite patches from the latest cluster is a recommendation from the Live Upgrade Survival Guide blog, so an additional step will be required until the patch is included. Let's see how this works. # patchadd -p | grep 121431 Patch: 121429-13 Obsoletes: Requires: 120236-01 121431-16 Incompatibles: Packages: SUNWluzone Patch: 121431-54 Obsoletes: 121436-05 121438-02 Requires: Incompatibles: Packages: SUNWlucfg SUNWluu SUNWlur # unzip 121431-58 # patchadd 121431-58 Validating patches... Loading patches installed on the system... Done! Loading patches requested to install. Done! Checking patches that you specified for installation. Done! Approved patches will be installed in this order: 121431-58 Checking installed patches... Executing prepatch script... Installing patch packages... Patch 121431-58 has been successfully installed. See /var/sadm/patch/121431-58/log for details Executing postpatch script... Patch packages installed: SUNWlucfg SUNWlur SUNWluu # lucreate -n s10u9-baseline Checking GRUB menu... System has findroot enabled GRUB Analyzing system configuration. INFORMATION: Unable to determine size or capacity of slice . 
Comparing source boot environment file systems with the file system(s) you specified for the new boot environment. Determining which file systems should be in the new boot environment. INFORMATION: Unable to determine size or capacity of slice . Updating boot environment description database on all BEs. Updating system configuration files. Creating configuration for boot environment . Source boot environment is . Creating boot environment . Cloning file systems from boot environment to create boot environment . Creating snapshot for on . Creating clone for on . Setting canmount=noauto for in zone on . Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev. Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev. Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev. File propagation successful Copied GRUB menu from PBE to ABE No entry for BE in GRUB menu Population of boot environment successful. Creation of boot environment successful. This time it took just a few seconds. A cursory examination of the offending ICF file (/etc/lu/ICF.3 in this case) shows that the duplicate root file system entry is now gone. # cat /etc/lu/ICF.3 s10u8-baseline:-:/dev/zvol/dsk/panroot/swap:swap:8388608 s10u8-baseline:/:panroot/ROOT/s10u8-baseline:zfs:0 s10u8-baseline:/vbox:pandora/vbox:zfs:0 s10u8-baseline:/setup:pandora/setup:zfs:0 s10u8-baseline:/export:pandora/export:zfs:0 s10u8-baseline:/pandora:pandora:zfs:0 s10u8-baseline:/panroot:panroot:zfs:0 s10u8-baseline:/workshop:pandora/workshop:zfs:0 s10u8-baseline:/export/iso:pandora/iso:zfs:0 s10u8-baseline:/export/home:pandora/home:zfs:0 s10u8-baseline:/vbox/HardDisks:pandora/vbox/HardDisks:zfs:0 s10u8-baseline:/vbox/HardDisks/WinXP:pandora/vbox/HardDisks/WinXP:zfs:0 Solaris 10 9/10 introduces new autoregistration file This one is actually mentioned in the Oracle Solaris 9/10 release notes. I know, I hate it when that happens too. Here's what the "error" looks like. # luupgrade -u -s /mnt -n s10u9-baseline System has findroot enabled GRUB No entry for BE in GRUB menu Copying failsafe kernel from media. 61364 blocks miniroot filesystem is Mounting miniroot at ERROR: The auto registration file does not exist or incomplete. The auto registration file is mandatory for this upgrade. Use -k argument along with luupgrade command. autoreg_file is path to auto registration information file. See sysidcfg(4) for a list of valid keywords for use in this file. The format of the file is as follows. oracle_user=xxxx oracle_pw=xxxx http_proxy_host=xxxx http_proxy_port=xxxx http_proxy_user=xxxx http_proxy_pw=xxxx For more details refer "Oracle Solaris 10 9/10 Installation Guide: Planning for Installation and Upgrade". As with the previous problem, this is also easy to work around. Assuming that you don't want to use the auto-registration feature at upgrade time, create a file that contains just autoreg=disable and pass the filename on to luupgrade. Here is an example. # echo "autoreg=disable" > /var/tmp/no-autoreg # luupgrade -u -s /mnt -k /var/tmp/no-autoreg -n s10u9-baseline System has findroot enabled GRUB No entry for BE in GRUB menu Copying failsafe kernel from media. 61364 blocks miniroot filesystem is Mounting miniroot at ####################################################################### NOTE: To improve products and services, Oracle Solaris communicates configuration data to Oracle after rebooting. 
You can register your version of Oracle Solaris to capture this data for your use, or the data is sent anonymously. For information about what configuration data is communicated and how to control this facility, see the Release Notes or www.oracle.com/goto/solarisautoreg. INFORMATION: After activated and booted into new BE , Auto Registration happens automatically with the following Information autoreg=disable ####################################################################### Validating the contents of the media . The media is a standard Solaris media. The media contains an operating system upgrade image. The media contains version . Constructing upgrade profile to use. Locating the operating system upgrade program. Checking for existence of previously scheduled Live Upgrade requests. Creating upgrade profile for BE . Checking for GRUB menu on ABE . Saving GRUB menu on ABE . Checking for x86 boot partition on ABE. Determining packages to install or upgrade for BE . Performing the operating system upgrade of the BE . CAUTION: Interrupting this process may leave the boot environment unstable or unbootable. The Live Upgrade operation now proceeds as expected. Once the system upgrade is complete, we can manually register the system. If you want to do a hands off registration during the upgrade, see the Oracle Solaris Auto Registration section of the Oracle Solaris Release Notes for instructions on how to do that.

    Read the article

  • Paging, sorting and filtering in a stored procedure (SQL Server)

    - by Fruitbat
I was looking at different ways of writing a stored procedure to return a "page" of data. This was for use with the ASP.NET ObjectDataSource, but it could be considered a more general problem. The requirement is to return a subset of the data based on the usual paging parameters, startRowIndex and maximumRows, but also a sortBy parameter to allow the data to be sorted. Also there are some parameters passed in to filter the data on various conditions. One common way to do this seems to be something like this: [Method 1] ;WITH stuff AS ( SELECT CASE WHEN @SortBy = 'Name' THEN ROW_NUMBER() OVER (ORDER BY Name) WHEN @SortBy = 'Name DESC' THEN ROW_NUMBER() OVER (ORDER BY Name DESC) WHEN @SortBy = ... ELSE ROW_NUMBER() OVER (ORDER BY whatever) END AS Row, ., ., ., FROM Table1 INNER JOIN Table2 ... LEFT JOIN Table3 ... WHERE ... (lots of things to check) ) SELECT * FROM stuff WHERE (Row > @startRowIndex) AND (Row <= @startRowIndex + @maximumRows OR @maximumRows <= 0) ORDER BY Row One problem with this is that it doesn't give the total count, and generally we need another stored procedure for that. This second stored procedure has to replicate the parameter list and the complex WHERE clause. Not nice. One solution is to append an extra column to the final select list, (SELECT COUNT(*) FROM stuff) AS TotalRows. This gives us the total but repeats it for every row in the result set, which is not ideal. [Method 2] An interesting alternative is given here (http://www.4guysfromrolla.com/articles/032206-1.aspx) using dynamic SQL. He reckons that the performance is better because the CASE statement in the first solution drags things down. Fair enough, and this solution makes it easy to get the totalRows and slap it into an output parameter. But I hate coding dynamic SQL. All that 'bit of SQL ' + STR(@parm1) +' bit more SQL' gubbins. [Method 3] The only way I can find to get what I want, without repeating code which would have to be synchronised, and keeping things reasonably readable, is to go back to the "old way" of using a table variable: DECLARE @stuff TABLE (Row INT, ...) INSERT INTO @stuff SELECT CASE WHEN @SortBy = 'Name' THEN ROW_NUMBER() OVER (ORDER BY Name) WHEN @SortBy = 'Name DESC' THEN ROW_NUMBER() OVER (ORDER BY Name DESC) WHEN @SortBy = ... ELSE ROW_NUMBER() OVER (ORDER BY whatever) END AS Row, ., ., ., FROM Table1 INNER JOIN Table2 ... LEFT JOIN Table3 ... WHERE ... (lots of things to check) SELECT * FROM @stuff WHERE (Row > @startRowIndex) AND (Row <= @startRowIndex + @maximumRows OR @maximumRows <= 0) ORDER BY Row (Or a similar method using an IDENTITY column on the table variable). Here I can just add a SELECT COUNT on the table variable to get the totalRows and put it into an output parameter. I did some tests, and with a fairly simple version of the query (no sortBy and no filter), method 1 seems to come up on top (almost twice as quick as the other 2). Then I decided to test something closer to the real requirement: I needed the complexity, and I needed the SQL to be in stored procedures. With this I get method 1 taking nearly twice as long as the other 2 methods, which seems strange. Is there any good reason why I shouldn't spurn CTEs and stick with method 3? UPDATE - 15 March 2012 I tried adapting Method 1 to dump the page from the CTE into a temporary table so that I could extract the TotalRows and then select just the relevant columns for the resultset. This seemed to add significantly to the time (more than I expected). 
I should add that I'm running this on a laptop with SQL Server Express 2008 (all that I have available) but still the comparison should be valid. I looked again at the dynamic SQL method. It turns out I wasn't really doing it properly (just concatenating strings together). I set it up as in the documentation for sp_executesql (with a parameter description string and parameter list) and it's much more readable. Also this method runs fastest in my environment. Why that should be still baffles me, but I guess the answer is hinted at in Hogan's comment.
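    For what it's worth, here is a minimal sketch of how a paging procedure with a total-count output parameter might be consumed from ADO.NET; the procedure name, parameter names, and connection string are assumptions for illustration only, not taken from the post:

    using System;
    using System.Data;
    using System.Data.SqlClient;

    class PagingDemo
    {
        static void Main()
        {
            using (var conn = new SqlConnection(@"Server=.\SQLEXPRESS;Database=MyDb;Integrated Security=true"))
            using (var cmd = new SqlCommand("dbo.GetPagedStuff", conn)) // hypothetical proc name
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@startRowIndex", 0);
                cmd.Parameters.AddWithValue("@maximumRows", 25);
                cmd.Parameters.AddWithValue("@SortBy", "Name");

                // The output parameter carries the total count back without a second query.
                var total = cmd.Parameters.Add("@TotalRows", SqlDbType.Int);
                total.Direction = ParameterDirection.Output;

                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // ... consume the page of rows ...
                    }
                }
                // Output parameters are only populated after the reader is closed.
                Console.WriteLine("Total rows: {0}", (int)total.Value);
            }
        }
    }

    This keeps the paged result set and the count in a single round trip, which is what the output-parameter approach in Methods 2 and 3 is after.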

    Read the article

  • Why does this valid Tkinter code crash when mixed with a bit of PyWin32?

    - by Erlog
    So I'm making a very small program for personal use in tkinter, and I've run into a really strange wall. I'm mixing tkinter with the pywin32 bindings because I really hate everything to do with the syntax and naming conventions of pywin32, and it feels like tkinter gets more done with far less code. The strangeness is happening in the transition between the pywin32 clipboard watching and my program's reaction to it in tkinter. My window and all its controls are handled in tkinter. The pywin32 bindings do the clipboard watching and clipboard access when the clipboard changes. From what I've gathered, the clipboard-watching pieces of pywin32 can work with any toolkit as long as you provide pywin32 with the hwnd value of your window. I'm doing that part, and it works when the program first starts; it just doesn't work when the clipboard changes. When the program launches, it grabs the clipboard and puts it into the search box and edit box just fine. When the clipboard is modified, the event I want to fire off does fire - except the same handler that worked at launch now causes a weird hang instead of doing what it's supposed to do. I can print the clipboard contents to stdout all I want when the clipboard changes, but I can't put that same data into a tkinter widget. The handler only hangs when it interacts with one of my tkinter widgets after being fired off by a clipboard change notification. It feels like there's some pywin32 etiquette I've missed in adapting the clipboard-watching sample code to my tkinter program. Tkinter apparently doesn't like to produce stack traces or error messages, and I don't really know what to look for when trying to debug this with pdb.
    Here's the code:

    #coding: utf-8
    #Clipboard watching cribbed from
    ## {{{ http://code.activestate.com/recipes/355593/ (r1)

    import pdb
    from Tkinter import *
    import win32clipboard
    import win32api
    import win32gui
    import win32con

    def force_unicode(object, encoding="utf-8"):
        if isinstance(object, basestring) and not isinstance(object, unicode):
            object = unicode(object, encoding)
        return object

    class Application(Frame):
        def __init__(self, master=None):
            self.master = master
            Frame.__init__(self, master)
            self.pack()
            self.createWidgets()
            self.hwnd = self.winfo_id()
            self.nextWnd = None
            self.first = True
            self.oldWndProc = win32gui.SetWindowLong(self.hwnd, win32con.GWL_WNDPROC, self.MyWndProc)
            try:
                self.nextWnd = win32clipboard.SetClipboardViewer(self.hwnd)
            except win32api.error:
                if win32api.GetLastError() == 0:
                    # information that there is no other window in chain
                    pass
                else:
                    raise
            self.update_search_box()
            self.word_search()

        def word_search(self):
            #pdb.set_trace()
            term = self.searchbox.get()
            self.resultsbox.insert(END, term)

        def update_search_box(self):
            clipboardtext = ""
            if win32clipboard.IsClipboardFormatAvailable(win32clipboard.CF_TEXT):
                win32clipboard.OpenClipboard()
                clipboardtext = win32clipboard.GetClipboardData()
                win32clipboard.CloseClipboard()
            if clipboardtext != "":
                self.searchbox.delete(0, END)
                clipboardtext = force_unicode(clipboardtext)
                self.searchbox.insert(0, clipboardtext)

        def createWidgets(self):
            self.button = Button(self)
            self.button["text"] = "Search"
            self.button["command"] = self.word_search
            self.searchbox = Entry(self)
            self.resultsbox = Text(self)
            #Pack everything down here for "easy" layout changes later
            self.searchbox.pack()
            self.button.pack()
            self.resultsbox.pack()

        def MyWndProc(self, hWnd, msg, wParam, lParam):
            if msg == win32con.WM_CHANGECBCHAIN:
                self.OnChangeCBChain(msg, wParam, lParam)
            elif msg == win32con.WM_DRAWCLIPBOARD:
                self.OnDrawClipboard(msg, wParam, lParam)
            # Restore the old WndProc. Notice the use of win32api
            # instead of win32gui here. This is to avoid an error due to
            # not passing a callable object.
            if msg == win32con.WM_DESTROY:
                if self.nextWnd:
                    win32clipboard.ChangeClipboardChain(self.hwnd, self.nextWnd)
                else:
                    win32clipboard.ChangeClipboardChain(self.hwnd, 0)
                win32api.SetWindowLong(self.hwnd, win32con.GWL_WNDPROC, self.oldWndProc)
            # Pass all messages (in this case, yours may be different) on
            # to the original WndProc
            return win32gui.CallWindowProc(self.oldWndProc, hWnd, msg, wParam, lParam)

        def OnChangeCBChain(self, msg, wParam, lParam):
            if self.nextWnd == wParam:
                # repair the chain
                self.nextWnd = lParam
            if self.nextWnd:
                # pass the message to the next window in chain
                win32api.SendMessage(self.nextWnd, msg, wParam, lParam)

        def OnDrawClipboard(self, msg, wParam, lParam):
            if self.first:
                self.first = False
            else:
                #print "changed"
                self.word_search()
                #self.word_search()
            if self.nextWnd:
                # pass the message to the next window in chain
                win32api.SendMessage(self.nextWnd, msg, wParam, lParam)

    if __name__ == "__main__":
        root = Tk()
        app = Application(master=root)
        app.mainloop()
        root.destroy()

    Read the article

  • Hardware/Software inventory open source projects

    - by Dick dastardly
    Dear Stackoverflowers, I would like to develop a Network Inventory application that works on any operating system, reports on every possible resource attached to a network, and reports all pertinent details of hardware and software. That's (and I hate to use the phrase) my "End Game". However, I am running before I can crawl here. I have no experience of this type of development, e.g. discovering a computer's hardware and software settings. I've spent almost two weeks googling and come up short! :-( So I am turning to you to ask these questions: My first step is to find an existing open source project I can incorporate into my own code that extracts the fine-grained details I am after, e.g. EVERYTHING there is to know about the hardware and software on a single machine. Does this project exist, or do I have to develop that first? Have I got to write all this in C? I am guessing that getting this information about a computer is going to be easier than for printers, scanners, routers etc... e.g. everything else you would find attached to a network. Once I have access to a single computer's details, I then need to investigate how I can traverse an entire network of printers, scanners, routers, load balancers, switches, firewalls, workstations, servers, storage devices, laptops, monitors; the list goes on and on. One problem I have is that I don't have a 1000-machine network to play on! Is there any such resource available on the internet? (Is that a silly question?) Anywho, if you don't ask you won't find out! One aspect I am really looking forward to is finding out how to traverse the entire network; should I be using TCP/IP for this? What's a good site, blog, usergroup, or book for TCP/IP development? How do I go about getting through firewalls? How many questions can I ask in one go? :-) My previous question on this topic ended up with PYTHON being championed as the language/script to develop this application in. Having looked at a few PYTHON examples, they all seemed to be related to WINDOWS networks and interrogating Windows Management Instrumentation (WMI). I had the feeling you can't rely on what's in WMI, and even if you can, that's no good for UNIX networks. Surely there exists common code for extracting hardware and software details from a computer? Why can't I find it on the internet? Please help? There are no prizes though :-( Thanks in advance. I would like to apologise if I have broken forum rules or not tried hard enough on my own before asking for assistance. I just would like to start moving forward with this, as it's one of the best projects I have been involved with. I am inspired by the many different challenges involved, and if I manage to produce a useful application at the end of it, it would hopefully be extremely helpful to many people. That's it. Thanks in advance, DD
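    Since the post brings up WMI as one route to machine details (Windows-only, so only part of the answer), here is a minimal C# sketch of that approach using System.Management; the WMI class and property names used are standard ones, but treat this as an illustration rather than a cross-platform recommendation:

    using System;
    using System.Management; // add a reference to System.Management.dll

    class WmiInventory
    {
        static void Main()
        {
            // Basic operating system details via WMI.
            var osSearcher = new ManagementObjectSearcher(
                "SELECT Caption, Version FROM Win32_OperatingSystem");
            foreach (ManagementObject os in osSearcher.Get())
            {
                Console.WriteLine("OS: {0} ({1})", os["Caption"], os["Version"]);
            }

            // Installed processors.
            var cpuSearcher = new ManagementObjectSearcher(
                "SELECT Name FROM Win32_Processor");
            foreach (ManagementObject cpu in cpuSearcher.Get())
            {
                Console.WriteLine("CPU: {0}", cpu["Name"]);
            }
        }
    }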

    Read the article

  • Masters vs. PhD - long [closed]

    - by Sterling
    I'm 21 years old and a first-year master's computer science student. Whether or not to continue on to a PhD has been plaguing me for the past few months. I can't stop thinking about it and am extremely torn on the issue. I have read http://www.cs.unc.edu/~azuma/hitch4.html and many, many other masters vs. PhD articles on the web. Unfortunately, I have not yet come to a conclusion. I was hoping that I could post my ideas about the issue on here in hopes of 1) getting some extra insight into the issue and 2) making sure that my assumptions are correct. Hopefully people who have experience in the respective fields can tell me if I am wrong, so I don't base my decision on false ideas. Okay, to get this topic out of the way - money. Money isn't the most important thing to me, but it is still important. It's always been a goal of mine to make 6 figures, but I realize that will probably take me a long time with either path. According to most online salary calculating sites, the average starting salary for a software engineer is ~60-70k. The PhD program here is 5 years, so that's about 300k I am missing out on by not going into the workforce with a masters. I have only ever had ~1k at one time in my life, so 300k is something I can't even really accurately imagine. I know I wouldn't have it all at once, obviously, but just knowing I would be earning it is kinda crazy to me. I feel like I would be living quite comfortably by the time I'm 30 years old (but risk being too content too soon). I would definitely love to have at least a few years of my 20s to spend with that kind of money before I have a family to spend it all on. I didn't grow up very financially stable, so it would be so nice to just spend some money…get a nice car, buy a new guitar or two, eat some good food, and just be financially comfortable. I have always felt like I deserved to make good money in my life, even as a kid growing up, and I just want to have it be a reality. I know that either path I take will make good money by the time I'm ~40-45 years old, but I guess I'm just sick of not making money and am getting impatient about it. However, a big idea pushing me towards a PhD is that I feel the masters path would give me a feeling of selling out if I have the capability to solve real questions in the computer science world. (Pretty straightforward - not much to elaborate on, but this is a big deal.) Now onto other aspects of the decision. I originally got into computer science because of programming. I started in high school and knew very soon that it was what I wanted to do for a career. I feel like getting a masters and being a software engineer in the industry gives me much more time to program in my career. In research, I feel like I would spend more time reading, writing, and trying to get grant money than I would coding. A guy I work with in the lab just recently published a paper. He showed it to me and I was shocked by it. The first two pages were littered with equations and formulas, and the next page or so followed with more equations and formulas derived from the previous ones. That was his work - breaking down and creating all of these formulas for robotic arm movement. And whenever I read computer science papers, they all seem to follow this pattern. I always pictured myself coding all day long…not proving equations and things of that nature. I know that's only one part of computer science research, but that part bores me.
    A couple of cons on each side. PhD: I don't really enjoy writing, or feel like I'm that great at technical writing. Whenever I'm in groups to make something, I'm always the one who does the large majority of the work and then gives it to my team members to write up in a report. Presenting is different, though - I don't mind presenting at all as long as I have a good grasp on what I am presenting. But writing papers seems like such a chore to me, and because of this, the "publish or perish" phrase really turns me off from research. Another bad thing: I feel like if I were doing research, most of it would be done alone. I work best in small groups. I like to have at least one person to bounce ideas off of when I am brainstorming. The idea of being a part of some small elite group that builds things sounds ideal to me, so being able to work in small groups for the majority of my career is a definite plus. I don't feel like I would get this doing research. Masters: I read a lot online that most people come in as engineers and eventually move into management positions. As of now, I don't see myself wanting to be a part of management. Let's say my company wanted to make some new product or system - I would get much more pride, enjoyment, and overall satisfaction from saying "I made this" rather than "I managed a group of people that made this." I want to be a big part of the development process. I want to make things. I think it would be great to be more specialized than other people. I would rather know everything about something than something about everything. I have always been that way - I was a great pitcher during my baseball years but not so good at everything else, great at certain classes in school but not so good at others, etc. To think that my career would be the same way sounds okay to me. Getting a PhD would point me in this direction. It would be great to be someone people look to and come ask for help because of being such an important contributor to a very specific field, such as artificial neural networks or robotic haptic perception. From what I gather about the software industry, being specialized can be a very bad thing because of the speed at which new technology moves. When it comes to being employed, I have pretty conservative views. I don't want to change companies every 5 years. Maybe this is something everyone wishes, but I would love to just be an important person in one company for 10+ (maybe 20-25+ if I'm lucky!) years if the working conditions were acceptable. I feel like that is more possible as a PhD though, being a professor or researcher. The more I read about people in the software industry, the more it seems like most software engineers bounce from company to company at a rapid pace. Some even work like hired guns from project to project, which is NOT what I want AT ALL. But finding a place to make great and important software would be great, if that actually happens in the real world. I'm a very competitive person. I thrive on competition. I don't really know why, but I have always been that way, even as a kid growing up. Competition always gave me a reason to practice that little extra every night, always push my limits, etc. It seems to me like there is no competition in the research world. It seems like everyone is very relaxed as long as research is being conducted. The only competition is if someone is researching the same thing as you, and it's whoever can finish and publish first (but everyone seems careful to check for that circumstance).
    The only noticeable competition to me is with yourself and your own discipline. I like the idea that in the industry there is real competition between companies to put out the best product or be put out of business. I feel like this would constantly push me to be better at what I do. One thing that is really pushing me towards a PhD is the lifetime of the things you make. I feel like if you make something truly innovative in the industry…just some really great new application or system…there is a shelf life of about 5-10 years before someone just does it faster and more efficiently. But with research work, you could create an idea or algorithm that lasts decades. For instance, the A* search algorithm was described in 1968 and is still widely used today. That is amazing to me. In the words of Palahniuk, "The goal isn't to live forever, it's to create something that will." Over anything, I just want to do something that matters. I want my work to help and progress society. Seriously, if I'm stuck programming GUIs for the next 40 years…I might shoot myself in the face. But then again, I hate the idea that less than 1% of the population will come into contact with my work, and even fewer will understand its importance. So if anything I have said is false, then please inform me. If you think I come off as a masters or a PhD type, inform me. If you want to give me some extra insight or add on to any point I made, please do. Thank you so much to anyone for any help.

    Read the article

  • Two-way databinding of a custom templated control. Eval works, but not Bind.

    - by Jason
    I hate long code snippets and I'm sorry about this one, but it turns out that this asp.net stuff can't get much shorter, and it's so specific that I haven't been able to generalize it without a full code listing. I just want simple two-way, declarative databinding to a single instance of an object. Not a list of objects of a type with a bunch of NotImplementedExceptions for Add, Delete, and Select, but just a single view-state-persisted object. This is certainly something that can be done, but I've struggled with an implementation for years. This newest, closest implementation was inspired by this article from 4-Guys-From-Rolla: http://msdn.microsoft.com/en-us/library/aa478964.aspx. Unfortunately, after implementing it, I'm getting the following error and I don't know what I'm missing:

    System.InvalidOperationException: Databinding methods such as Eval(), XPath(), and Bind() can only be used in the context of a databound control.

    If I don't use Bind(), and only use Eval() functionality, it works. In that way, the error is especially confusing. Here's the simplified codeset that still produces the error:

    using System.ComponentModel;

    namespace System.Web.UI.WebControls.Special
    {
        public class SampleFormData
        {
            public string SampleString = "Sample String Data";
            public int SampleInt = -1;
        }

        [ToolboxItem(false)]
        public class SampleSpecificFormDataContainer : WebControl, INamingContainer
        {
            SampleSpecificEntryForm entryForm;

            internal SampleSpecificEntryForm EntryForm { get { return entryForm; } }

            [Bindable(true), Category("Data")]
            public string SampleString
            {
                get { return entryForm.FormData.SampleString; }
                set { entryForm.FormData.SampleString = value; }
            }

            [Bindable(true), Category("Data")]
            public int SampleInt
            {
                get { return entryForm.FormData.SampleInt; }
                set { entryForm.FormData.SampleInt = value; }
            }

            internal SampleSpecificFormDataContainer(SampleSpecificEntryForm entryForm)
            {
                this.entryForm = entryForm;
            }
        }

        public class SampleSpecificEntryForm : WebControl, INamingContainer
        {
            #region Template
            private IBindableTemplate formTemplate = null;

            [Browsable(false), DefaultValue(null),
             TemplateContainer(typeof(SampleSpecificFormDataContainer), ComponentModel.BindingDirection.TwoWay),
             PersistenceMode(PersistenceMode.InnerProperty)]
            public virtual IBindableTemplate FormTemplate
            {
                get { return formTemplate; }
                set { formTemplate = value; }
            }
            #endregion

            #region Viewstate
            SampleFormData FormDataVS
            {
                get { return (ViewState["FormData"] as SampleFormData) ?? new SampleFormData(); }
                set { ViewState["FormData"] = value; SaveViewState(); }
            }
            #endregion

            public override ControlCollection Controls
            {
                get { EnsureChildControls(); return base.Controls; }
            }

            private SampleSpecificFormDataContainer formDataContainer = null;

            [Browsable(false), DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
            public SampleSpecificFormDataContainer FormDataContainer
            {
                get { EnsureChildControls(); return formDataContainer; }
            }

            [Bindable(true), Browsable(false)]
            public SampleFormData FormData
            {
                get { return FormDataVS; }
                set { FormDataVS = value; }
            }

            protected override void CreateChildControls()
            {
                if (!this.ChildControlsCreated)
                {
                    Controls.Clear();
                    formDataContainer = new SampleSpecificFormDataContainer(this);
                    Controls.Add(formDataContainer);
                    FormTemplate.InstantiateIn(formDataContainer);
                    this.ChildControlsCreated = true;
                }
            }

            public override void DataBind()
            {
                CreateChildControls();
                base.DataBind();
            }
        }
    }

    With an ASP.NET page like the following:

    <%@ Page Title="Home Page" Language="C#" MasterPageFile="~/Site.master" AutoEventWireup="true"
        CodeBehind="Default2.aspx.cs" Inherits="EntryFormTest._Default2" EnableEventValidation="false" %>
    <%@ Register Assembly="EntryForm" Namespace="System.Web.UI.WebControls.Special" TagPrefix="cc1" %>
    <asp:Content ID="HeaderContent" runat="server" ContentPlaceHolderID="HeadContent">
    </asp:Content>
    <asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent">
        <h2>
            Welcome to ASP.NET!
        </h2>
        <cc1:SampleSpecificEntryForm ID="EntryForm1" runat="server">
            <FormTemplate>
                <asp:TextBox ID="TextBox1" runat="server" Text='<%# Bind("SampleString") %>'></asp:TextBox><br />
                <h3>(<%# Container.SampleString %>)</h3><br />
                <asp:Button ID="Button1" runat="server" Text="Button" />
            </FormTemplate>
        </cc1:SampleSpecificEntryForm>
    </asp:Content>

    Default2.aspx.cs:

    using System;

    namespace EntryFormTest
    {
        public partial class _Default2 : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                EntryForm1.DataBind();
            }
        }
    }

    Thanks for any help!
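    A note on the two-way plumbing involved here, which may help anyone comparing implementations: an IBindableTemplate exposes an ExtractValues method that the owning control is expected to call on postback to harvest the values written with Bind(). The following fragment is a sketch of that pattern, as a method that could live inside a control like SampleSpecificEntryForm above; it is illustrative, not the poster's code:

    // IOrderedDictionary is in System.Collections.Specialized;
    // DictionaryEntry is in System.Collections.
    protected void ExtractTemplateValues()
    {
        if (FormTemplate != null)
        {
            // Pull the values the user typed into Bind()-bound controls.
            IOrderedDictionary values = FormTemplate.ExtractValues(FormDataContainer);
            foreach (DictionaryEntry entry in values)
            {
                // entry.Key is the bound field name (e.g. "SampleString");
                // copy entry.Value back into FormData here.
            }
        }
    }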

    Read the article

  • Source code versioning with comments (organizational practice) - leave or remove?

    - by ADTC
    Before you start admonishing me with "DON'T DO IT," "BAD PRACTICE!" and "Learn to use proper source code control", please hear me out first. I am fully aware that the practice of commenting out old code and leaving it there forever is very bad, and I hate such practice myself. But here's the situation I'm in. A few months ago I joined a company as a software developer. I had worked at the company for a few months as an intern, about a year before joining recently. Our company uses source code version control (CVS), but not properly. Here's what happened both in my internship and in my current permanent position. Each time I was assigned to work on a project (legacy, about 8-10 years old), instead of creating a CVS account and letting me check out code and check in changes, a senior colleague exported the code from CVS, zipped it up and passed it to me. While this colleague checks in all changes in bulk every few weeks, our usual practice is to do fine-grained versioning in the actual source code itself (each file increments in versions independently of the rest). Whenever a change is made to a file, the old code is commented out, the new code entered below it, and this whole section is marked with a version number. Finally a note about the changes is placed at the top of the file in a section called Modification History, and the changed files are placed in a shared folder, ready and waiting for the bulk check-in.

    /*
     * Copyright notice blah blah
     * Some details about file (project name, file name etc)
     * Modification History:
     * Date        Version  Modified By  Description
     * 2012-10-15  1.0      Joey         Initial creation
     * 2012-10-22  1.1      Chandler     Replaced old code with new code
     */
    code ....
    //v1.1 start
    //old code
    new code
    //v1.1 end
    code ....

    Now the problem is this. In the project I'm working on, I needed to copy some new source code files from another project (new in the sense that they didn't exist in the destination project before). These files have a lot of historical commented-out code and comment-based versioning, including usually long or very long Modification History sections. Since the files are new to this project, I decided to clean them up, remove unnecessary code including historical code, and start fresh at version 1.0. (I still have to continue the practice of comment-based versioning despite hating it. And don't ask why not start at version 0.1...) I did something similar during my internship and no one said anything. My supervisor has seen the work a few times and didn't say I shouldn't do such clean-up (if it was noticed at all). But a same-level colleague saw this and said it's not recommended, as it may cause downtime in the future and increase maintenance costs. An example is when changes are made in another project on the original files and these changes need to be propagated to this project. With the code files drastically different, it could cause confusion for an employee doing the propagation. It makes sense to me, and is a valid point. I couldn't find any reason to do my clean-up other than the inconvenience of ridiculously messy code. So, long story short: Given the practice in our company, should I not do such clean-up when copying new files from project to project? Is it better to make changes on the (copy of) original code with full history in comments? Or what justification can I give for doing the clean-up? PS to mods: Hope you allow this question some time even if for any reason you determine it to be unfit for SO. I apologize in advance if anything is inappropriate, including tags.

    Read the article

  • input in table > td, But yet extra bottom spacing between rows! Internet Explorer

    - by phpExe
    I'm using the Meyer CSS reset, but I have a problem with inputs in a table. There is extra space between rows:

    <table class="table" cellpadding="0" cellspacing="0" border="0">
        <tr>
            <td>&nbsp;</td>
            <td>1</td> <td>2</td> <td>3</td> <td>4</td> <td>5</td>
            <td>6</td> <td>7</td> <td>8</td> <td>9</td> <td>10</td>
        </tr>
        <tr>
            <td>1</td>
            <td><input type="text"/></td>
            <td><input type="text"/></td>
            <td><input type="text"/></td>
            <td><input type="text"/></td>
            <td><input type="text"/></td>
            <td><input type="text"/></td>
            <td><input type="text" class="black"/></td>
            <td><input type="text"/></td>
            <td><input type="text"/></td>
            <td><input type="text"/></td>
        </tr>
        <tr>
            <td>2</td>
            <td><input type="text" /></td>
            <td><input type="text"/></td>
            <td><input type="text"/></td>
            <td><input type="text"/></td>
            <td><input type="text"/></td>
            <td><input type="text"/></td>
            <td><input type="text"/></td>
            <td><input type="text" class="black"/></td>
            <td><input type="text"/></td>
            <td><input type="text"/></td>
        </tr>
    </table>

    and the CSS:

    .table {
        border-collapse: collapse;
        border-spacing: 0px;
    }
    .table tr {
        margin-bottom: 0;
        overflow: hidden;
        height: 25px;
        width: 100%;
        padding: 0;
    }
    .table input {
        width: 25px;
        height: 25px;
        border: 1px solid #000;
        text-align: center;
    }
    .black {
        background: #000;
    }

    Why is there extra bottom spacing in Internet Explorer (I hate IE :(()? Thanks a lot

    Read the article

  • How you remember by default functionality/class name etc of the platform

    - by piemesons
    Hello everyone. I am an 8-months-experienced guy (B.Tech in computer science). In my college time I used to create simple programs in C/C++/Java - simple programs like linked lists and binary trees. Frankly, those were college bullshit exercises. (I am from India, and Engg colleges in India suck, except a few like the IITs.) In my college time, apart from the college exercises, I created some better programs/games like Arkanoid and Snake. We had a 6-month internship in our college curriculum; I worked on asp.net, and basically the work was to create a website with some random functionality. After that, in my job, I worked on php and successfully deployed 4 projects, all having a lot of functionality, and I was the only team member on all of them. Now I am learning Ruby on Rails, as I switched to a new firm. I also have to work on Android or iPhone, depending on which mobile technology I choose, or I can work on both. My project manager says: take your time to learn things, we are not in a hurry to place you in any project, work on things by yourself, take 3-4 months to learn. But I am not getting up to a good pace. I am quite confident with php/asp etc., but I am not able to grasp things in Android, even though my C/C++ background is quite good and I have a good logical mind. Even learning some basics of Rails, I found myself thinking wtf. Why do I have to make the model name singular and the table name plural? By default, the action name and the name of the file in the view are the same. I just hate the word MAGIC being mentioned more than 100 times in the book (agile-web-development-with-rails). (I am talking about default functionality. I know I can override it, so please don't debate on that.) I am not saying I am not getting the things; my point is that remembering the default functionality is pissing me off. Lots of classes, lots of files: specify this thing here, that thing there. All these things (remembering which class does what) require some time, or I am missing something. From my point of view, I am having all these problems because previously I never used an object-oriented programming approach in php. (I NEVER USED IT; I AM NOT SAYING PHP DOESN'T HAVE IT.) How do you people explain it? What do you suggest I do? I am looking for suggestions from seniors - including seniors in my office. They say: you're good, dude. But I don't know; I am not gaining confidence in these things. When they ask me anything about the topics I have covered, I give them good answers. But when I discuss this problem with them, they say there is no problem, just keep on working. And sorry for my poor English.

    Read the article

  • Loading multiple copies of a group of DLLs in the same process

    - by george
    Background: I'm maintaining a plugin for an application, using Visual C++ 2003. The plugin is composed of several DLLs - there's the main DLL, the one that the application loads using LoadLibrary, and there are several utility DLLs that are used by the main DLL and by each other. Dependencies generally look like this:

    plugin.dll -> utilA.dll, utilB.dll
    utilA.dll -> utilB.dll
    utilB.dll -> utilA.dll, utilC.dll

    You get the picture. Some of the dependencies between the DLLs are load-time and some run-time. All the DLL files are stored in the executable's directory (not a requirement, just how it works now).

    The problem: There's a new requirement - running multiple instances of the plugin within the application. The application runs each instance of the plugin in its own thread, i.e. each thread calls into the plugin's code. That code, however, is anything but thread-safe - lots of global variables etc. Unfortunately, fixing the whole thing isn't currently an option, so I need a way to load multiple (at most 3) copies of the plugin's DLLs in the same process.

    Option 1: The distinct names approach. Create 3 copies of each DLL file, so that each file has a distinct name, e.g. plugin1.dll, plugin2.dll, plugin3.dll, utilA1.dll, utilA2.dll, utilA3.dll, utilB1.dll, etc. The application will load plugin1.dll, plugin2.dll and plugin3.dll. The files will be in the executable's directory. For each group of DLLs to know each other by name (so the inter-dependencies work), the names need to be known at compilation time - meaning the DLLs need to be compiled multiple times, each time with different output file names. Not very complicated, but I'd hate having 3 copies of the VS project files, and I don't like having to compile the same files over and over.

    Option 2: The side-by-side assemblies approach. Create 3 copies of the DLL files, each group in its own directory, and define each group as an assembly by putting an assembly manifest file in the directory, listing the plugin's DLLs. Each DLL will have an application manifest pointing to the assembly, so that the loader finds the copies of the utility DLLs that reside in the same directory. The manifest needs to be embedded for it to be found when a DLL is loaded using LoadLibrary. I'll use mt.exe from a later VS version for the job, since VS2003 has no built-in manifest embedding support. I've tried this approach with partial success - dependencies are found during load-time of the DLLs, but not when a DLL function is called that loads another DLL. This seems to be the expected behavior according to this article - a DLL's activation context is only used at the DLL's load-time, and afterwards it's deactivated and the process's activation context is used. I haven't yet tried working around this using ISOLATION_AWARE_ENABLED.

    Questions: Got any other options? Any quick & dirty solution will do. :-) Will ISOLATION_AWARE_ENABLED even work with VS2003? Comments will be greatly appreciated. Thanks!

    Read the article

  • Is this a reasonable way to handle getters/setters in a PHP class?

    - by Mark Biek
    I'm going to try something with the format of this question, and I'm very open to suggestions about a better way to handle it. I didn't want to just dump a bunch of code in the question, so I've posted the code for the class on refactormycode: base-class-for-easy-class-property-handling. My thought was that people can either post code snippets here or make changes on refactormycode and post links back to their refactorings. I'll make upvotes and accept an answer (assuming there's a clear "winner") based on that. At any rate, on to the class itself: I see a lot of debate about getter/setter class methods - is it better to just access simple property variables directly, or should every class have explicit get/set methods defined, blah blah blah. I like the idea of having explicit methods in case you have to add more logic later; then you don't have to modify any code that uses the class. However, I hate having a million functions that look like this:

    public function getFirstName()
    {
        return $this->firstName;
    }

    public function setFirstName($firstName)
    {
        $this->firstName = $firstName;
    }

    Now I'm sure I'm not the first person to do this (and I'm hoping there's a better way of doing it that someone can suggest to me). Basically, the PropertyHandler class has a __call magic method. Any methods that come through __call that start with "get" or "set" are then routed to functions that set or retrieve values in an associative array. The key into the array is the name of the calling method after get or set. So, if the method coming into __call is "getFirstName", the array key is "FirstName". I liked using __call because it will automatically take care of the case where the subclass already has a "getFirstName" method defined. My impression (and I may be wrong) is that the __get and __set magic methods don't do that. So here's an example of how it would work:

    class PropTest extends PropertyHandler
    {
        public function __construct()
        {
            parent::__construct();
        }
    }

    $props = new PropTest();
    $props->setFirstName("Mark");
    echo $props->getFirstName();

    Notice that PropTest doesn't actually have "setFirstName" or "getFirstName" methods, and neither does PropertyHandler. All that's doing is manipulating array values. The other case would be where your subclass is already extending something else. Since you can't have true multiple inheritance in PHP, you can make your subclass have a PropertyHandler instance as a private variable. You have to add one more function, but then things behave in exactly the same way:

    class PropTest2
    {
        private $props;

        public function __construct()
        {
            $this->props = new PropertyHandler();
        }

        public function __call($method, $arguments)
        {
            return $this->props->__call($method, $arguments);
        }
    }

    $props2 = new PropTest2();
    $props2->setFirstName('Mark');
    echo $props2->getFirstName();

    Notice how the subclass has a __call method that just passes everything along to the PropertyHandler __call method. Another good argument against handling getters and setters this way is that it makes it really hard to document. In fact, it's basically impossible to use any sort of documentation generation tool, since the explicit methods to be documented don't exist. I've pretty much abandoned this approach for now. It was an interesting learning exercise, but I think it sacrifices too much clarity.

    Read the article

  • CodePlex Daily Summary for Friday, March 30, 2012

    CodePlex Daily Summary for Friday, March 30, 2012

    Popular Releases

    SIPSorcery: SIPSorcery Softphone v1.0.0: The SIPSorcery softphone is a demo (note the "demo") application to prototype using .Net as a suitable runtime environment for a SIP softphone application requiring deterministic audio sampling and playback (it's not), and also to prototype placing calls via Google Voice's XMPP gateway (this works well).

    ScriptIDE: Release 4.4: ...

    Media Companion: MC 3.434b Release: General: This release should be the last beta for 3.4xx. If there are no major problems, by the end of the week it will be upgraded to 3.500 Stable! The latest mc_com.exe should be included too! TV: Bug fix - crash when using XBMC scraper for TV episodes. Bug fix - episode count update when adding new episodes. Bug fix - crash when actor's name was missing. Enhanced TV scrape progress text. Enhancements made to missing episodes display. Movies: Bug fix - hide "Play Trailer" when multisaev...

    Better Explorer: Better Explorer 2.0.0.831 Alpha: A new release with many bugfixes, a changed icon, code for more failsafe registry usage on x64 systems (the regfix is no longer needed), ribbon shortcut keys, and other fixes. Note: If you have problems opening system libraries, a suggestion was given to copy all of these libraries and then delete the originals. Thanks to Gaugamela for that! (see discussion here: 349015) Note 2: I uploaded the setup again due to a missing file!

    XAML Dialect Comparer Tool: Beta 1: This is a first beta version of this tool (as shown at DevConnections in Vegas, March 2012). Community participation and suggestions are appreciated.

    LINQ Extensions Library: 1.0.2.7: Append and Prepend extensions (1.0.2.7). IndexOf extensions (1.0.2.7). New Align/Match extensions (1.0.2.6). Ready-to-use stable code with comprehensive unit tests and samples. New Pivot extensions. New Filter extensions.

    StarTrinity.com Silverlight realtime multiple face and feature points detector: Version 1.2: Added public methods to start and stop capturing. Added public access to the captured snapshot; applications can crop faces out of the snapshot image.

    MonoGame - Write Once, Play Everywhere: MonoGame 2.5: The MonoGame team are pleased to announce that MonoGame v2.5 has been released. This release contains important bug fixes, implements optimisations and adds key features. MonoGame now has the capability to use OpenGLES 2.0 on Android and iOS devices, meaning it now supports custom shaders across mobile and desktop platforms. Also included in this release are native orientation animations on iOS devices and better Orientation support for Android. There have also been a lot of bug fixes since t...

    callisto: callisto 2.0.23: Patched Script static class and peak user count bug fix.

    Circuit Diagram: Circuit Diagram 2.0 Alpha 3: New in this release: Added components: Microcontroller, Demultiplexer. Flip & rotate components. Open XML files from older versions of Circuit Diagram. Text formatting for components. New CDDX syntax. Other fixes.

    Umbraco CMS: Umbraco 5.1 CMS (Beta): Beta build for testing - please report issues at issues.umbraco.org (Latest uploaded: 5.1.0.123). What's new in 5.1? The full list of changes is on our http://progress.umbraco.org task tracking page. It shows items complete for 5.1, and 5.1 includes items for 5.0.1 and 5.0.2 listed there too. Here are two headline acts: Members: 5.1 adds support for backoffice editing of Members. We support the pairing up of our content type system in Hive with regular ASP.NET Membership providers (we ship a def...

    51Degrees.mobi - Mobile Device Detection and Redirection: 2.1.2.11: One Click Install from NuGet. Changes to Version 2.1.2.11: Code Changes: 1. The project is now licenced under the Mozilla Public Licence 2. 2. User interface control and associated data access layer classes have been added to aid developers integrating 51Degrees.mobi into wider projects such as content management systems or web hosting management solutions. Use the following in a web form or user control to access these new UI components: <%@ Register Assembly="FiftyOne.Foundation" Namespace="...

    JSON Toolkit: JSON Toolkit 3.1: Slight performance improvement (5% - 10%). New JsonException class.

    Picturethrill: Version 2.3.28.0: Straightforward image selection. New clean UI look. Super stable. Simplified user experience.

    SQL Monitor - managing sql server performance: SQL Monitor 4.2 alpha 16: 1. Finally fixed a problem with a logic fault checking for the temporary table name... I really mean finally ...

    ScintillaNET: ScintillaNET 2.5: A slew of bug fixes with a few new features sprinkled in. This release also upgrades the SciLexer and SciLexer64 DLLs to version 3.0.4. The official stuff: issues 32402, 27137, 31548, 30179, 24932, 29701, 31238, 26875, 30052.

    Harness: Harness 2.0.2: Change to .NET Framework Client Profile. Bug fix for the download dialog auto answer. Bug fix for the setFocus command. Added "SendKeys" command. Removed "closeAll" command. Minor bugs fixed.

    BugNET Issue Tracker: BugNET 0.9.161: Below is a list of fixes in this release. Bug: BGN-2092 - Link in Email "visit your profile" not functional. BGN-2083 - Manager of bugnet can not edit project when it is not public. BGN-2080 - Clicking on a link in the project summary causes error (0.9.152.0). BGN-2070 - Missing Functionality On Feed.aspx. BGN-2069 - Calendar View does not work. BGN-2068 - Time tracking totals not ok. BGN-2067 - Issues List Page Size Bug: Index was out of range. Must be non-negative and less than the si...

    YAF.NET (aka Yet Another Forum.NET): v1.9.6.1 RTW: v1.9.6.1 FINAL is .NET v4.0 ONLY. v1.9.6.1 has: Performance improvements. .NET v4.0 improvements. Improved FaceBook integration. KNOWN ISSUES WITH THIS RELEASE: ON INSTALL PLEASE DON'T CHECK "Upgrade BBCode Extensions...". More complete change list and discussion here: http://forum.yetanotherforum.net/yaf_postst14201_v1-9-6-1-RTW-Dated--3-26-2012.aspx

    Craig's Utility Library: Craig's Utility Library 3.1: This update adds about 60 new extension methods, a couple of new classes, and a number of fixes, including: Additions: Added DateSpan class. Added GenericDelimited class. Random additions: Added a static thread-friendly version of Random.Next called ThreadSafeNext. AOP Manager additions: Added Destroy function to AOPManager (clears out all data so the system can be recreated; really only useful for testing...). ORM additions: Added PagedCommand and PageCount functions to ObjectBaseClass (same as M...

    New Projects

    Big-Tuto DirectX Site du Zéro: Source code repository for the DirectX "big tuto" on Site du Zéro.

    Blob Drop: Azure Blob Drop watches a local file folder (or several) and passes on any file changes (creates, updates, deletes) to a blob container in a Windows Azure Storage account. Just "dump" files into a folder (such as content for a web site) and they will get uploaded to the cloud.

    C++ AMP Algorithms Library: C++ AMP Algorithms Library is a library of Parallel Patterns that C++ AMP developers can freely use in their own projects.

    C++ AMP BLAS Library: C++ AMP BLAS Library is a library of Basic Linear Algebra Subroutines that C++ AMP developers can freely use in their own projects.

    C++ AMP RNG Library: C++ AMP RNG Library is a library of Random Number Generators that C++ AMP developers can freely use in their own projects.

    cphobby: This is a project for high performance computing on Windows.

    CSharp Tesseract OCR GUI UTB Spring 2012: n/a

    echo test project: echo test project

    Facebook Connection Manager: Facebook Application written in Asp.Net. Monitors the list of contacts (for its users), revealing when they or someone they know has lost a contact (e.g. deleted their profile or linkage). Works by storing all the auth tickets, extending them, and using them to regularly poll the facebook API.

    Fast Excel Spreadsheet Writer: The Fast Excel Spreadsheet Writer makes it easier for developers to write 2007 / 2010 Excel spreadsheets containing large sets of raw data. It can write hundreds of thousands of records in seconds. It is developed in C# and uses the Packaging namespace and Xml writers.

    fieldGames: Demonstrates wp7 gps features.

    fucksmzdm: fuck smzdm project

    Geo.LibraryManage: Geo.LibraryManage

    Graph my Code: Silverlight graph control and tools to visualize a .net assembly in a human readable way.

    Iron Server: Control Your PC

    IUBookStore: Build an online system that supports the process of renting and purchasing books from HCMC International University's Post-graduation Center.

    kage: how to make a kage

    LCDSmartie dll to display MediaPortal status: MP.dll allows the display of MediaPortal data on an LCDSmartie driven display. Uses WifiConnect and MPExtended to gain access to the MediaPortal data. Written in C.

    MaxiService: Integrated Service and Maintenance Control.

    NetduinoBot: This project is to have fun and learn more about .Net. The plan is to improve a simple netduino based robot (2 powered wheels using stepper engines and a support wheel). Next steps: Adding BT communication. Adding interactive driving. Adding distance measurement. Adding routing.

    NovaUmbracoDemo: This is a demo site for Nova in the Umbraco business domain.

    Orchard Members Only: This is a simple module to protect anonymous users from accessing specified content. To utilize the feature, simply add the Members Only content part to any content type to make that content type available to authenticated users only.

    POC's project around web and azure related development: POC's project around web and azure related development.

    Practice: none

    promising: asp.net mvc jqueryeasyui rbac

    RRPandora for RideRunner: RRPandora is a Pandora plugin for the RideRunner front end. Play your Pandora stations, create new ones, and love/hate your tracks all from within your car PC front end of choice. Discussion of the RRPandora project can be found here: http://www.mp3car.com/rr-plugins/150505-so-have-you-guys-ever-heard-of-this-new-thing-called-pandora.html

    Sharepoint & CRM Toolkit: Reusable components and tools to speed up Sharepoint and Dynamics CRM development, e.g. Site Definitions, Web Parts, Event Handlers, Workflows and Management Tools for Sharepoint, and Plugins, Workflows and Custom Reports for MS CRM.

    Sharepoint QuotaCheck Webpart: A Sharepoint webpart which shows a progress bar with the current percentage used of the maximum quota. Also shows the warning quota, max quota and the current usage in MB. Changes color if the warning quota is reached.

    Silverlight MultiSelectComboBox: A multi-select combobox for Silverlight (hope this gets included in the toolkit at some time).

    Simple OWL.Api: Simple OWL.Api is a .Net OWL Api wrapper library.

    Taxi Please!: This is the Taxi Please! windows phone application.

    testtom03292012hg02: testtom03292012hg02

    testtom03292012tfs01: testtom03292012tfs01

    theBrent StartPage: A simple personal start page portal.

    Virtualization Automation via Powershell: Scratch project to enable virtualization automation via PowerShell.

    XAML Dialect Comparer Tool: This tool allows for comparison of different XAML dialects and utilized framework namespaces. Want to know if your Silverlight project will translate well to Windows 8 Metro? And whether your Metro assets can be reused in your Windows Phone app? And how about that WPF app? This helpful tool provides some interesting metrics. Note that it only compares types/classes and all their members. It makes no comparison of behavior differences between classes and members of identical names.

    Read the article

  • From HttpRuntime.Cache to Windows Azure Caching (Preview)

    - by Jeff
    I don’t know about you, but the announcement of Windows Azure Caching (Preview) (yes, the parentheses are apparently part of the interim name) made me a lot more excited about using Azure. Why? Because one of the great performance tricks of any Web app is to cache frequently used data in memory, so it doesn’t have to hit the database, a service, or whatever. When you run your Web app on one box, HttpRuntime.Cache is a sweet and stupid-simple solution. Somewhere in the data fetching pieces of your app, you can see if an object is available in cache, and return that instead of hitting the data store. I did this quite a bit in POP Forums, and it dramatically cuts down on the database chatter. The problem is that it falls apart if you run the app on many servers, in a Web farm, where one server may initiate a change to that data, and the others will have no knowledge of the change, making it stale. Of course, if you have the infrastructure to do so, you can use something like memcached or AppFabric to do a distributed cache, and achieve the caching flavor you desire. You could do the same thing in Azure before, but it would cost more because you’d need to pay for another role or VM or something to host the cache. Now, you can use a portion of the memory from each instance of a Web role to act as that cache, with no additional cost. That’s huge. So if you’re using a percentage of memory that comes out to 100 MB, and you have three instances running, that’s 300 MB available for caching. For the uninitiated, a Web role in Azure is essentially a VM that runs a Web app (worker roles are the same idea, only without the IIS part). You can spin up many instances of the role, and traffic is load balanced to the various instances. It’s like adding or removing servers to a Web farm all willy-nilly and at your discretion, and it’s what the cloud is all about. I’d say it’s my favorite thing about Windows Azure. The slightly annoying thing about developing for a Web role in Azure is that the local emulator that’s launched by Visual Studio is a little on the slow side. If you’re used to using the built-in Web server, you’re used to building and then alt-tabbing to your browser and refreshing a page. If you’re just changing an MVC view, you’re not even doing the building part. Spinning up the simulated Azure environment is too slow for this, but ideally you want to code your app to use this fantastic distributed cache mechanism. So first off, here’s the link to the page showing how to code using the caching feature. If you’re used to using HttpRuntime.Cache, this should be pretty familiar to you. Let’s say that you want to use the Azure cache preview when you’re running in Azure, but HttpRuntime.Cache if you’re running local, or in a regular IIS server environment. Through the magic of dependency injection, we can get there pretty quickly. First, design an interface to handle the cache insertion, fetching and removal. 
    Mine looks like this:

    public interface ICacheProvider
    {
        void Add(string key, object item, int duration);
        T Get<T>(string key) where T : class;
        void Remove(string key);
    }

    Now we'll create two implementations of this interface... one for Azure cache, one for HttpRuntime:

    public class AzureCacheProvider : ICacheProvider
    {
        public AzureCacheProvider()
        {
            _cache = new DataCache("default"); // in Microsoft.ApplicationServer.Caching, see how-to
        }

        private readonly DataCache _cache;

        public void Add(string key, object item, int duration)
        {
            _cache.Add(key, item, new TimeSpan(0, 0, 0, 0, duration));
        }

        public T Get<T>(string key) where T : class
        {
            return _cache.Get(key) as T;
        }

        public void Remove(string key)
        {
            _cache.Remove(key);
        }
    }

    public class LocalCacheProvider : ICacheProvider
    {
        public LocalCacheProvider()
        {
            _cache = HttpRuntime.Cache;
        }

        private readonly System.Web.Caching.Cache _cache;

        public void Add(string key, object item, int duration)
        {
            _cache.Insert(key, item, null, DateTime.UtcNow.AddMilliseconds(duration),
                System.Web.Caching.Cache.NoSlidingExpiration);
        }

        public T Get<T>(string key) where T : class
        {
            return _cache[key] as T;
        }

        public void Remove(string key)
        {
            _cache.Remove(key);
        }
    }

    Feel free to expand these to use whatever cache features you want. I'm not going to go over dependency injection here, but I assume that if you're using ASP.NET MVC, you're using it. Somewhere in your app, you set up the DI container that resolves interfaces to concrete implementations (Ninject calls it a "kernel" instead of a container). For this example, I'll show you how StructureMap does it. It uses a convention-based scheme, where if you need to get an instance of IFoo, it looks for a class named Foo. You can also do this mapping explicitly. The initialization of the container looks something like this:

    ObjectFactory.Initialize(x =>
    {
        x.Scan(scan =>
        {
            scan.AssembliesFromApplicationBaseDirectory();
            scan.WithDefaultConventions();
        });
        if (Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.IsAvailable)
            x.For<ICacheProvider>().Use<AzureCacheProvider>();
        else
            x.For<ICacheProvider>().Use<LocalCacheProvider>();
    });

    If you use Ninject or Windsor or something else, that's OK. Conceptually they're all about the same. The important part is the conditional statement that checks to see if the app is running in Azure. If it is, it maps ICacheProvider to AzureCacheProvider; otherwise it maps to LocalCacheProvider. Now when a request comes into your MVC app, and the chain of dependency resolution occurs, you can see to it that the right caching code is called. A typical design may have a call stack that goes: Controller -> BusinessLogicClass -> Repository. Let's say your repository class looks like this:

    public class MyRepo : IMyRepo
    {
        public MyRepo(ICacheProvider cacheProvider)
        {
            _context = new MyDataContext();
            _cache = cacheProvider;
        }

        private readonly MyDataContext _context;
        private readonly ICacheProvider _cache;

        public SomeType Get(int someTypeID)
        {
            var key = "somename-" + someTypeID;
            var cachedObject = _cache.Get<SomeType>(key);
            if (cachedObject != null)
            {
                _context.SomeTypes.Attach(cachedObject);
                return cachedObject;
            }
            var someType = _context.SomeTypes.SingleOrDefault(p => p.SomeTypeID == someTypeID);
            _cache.Add(key, someType, 60000);
            return someType;
        }

        // ... more stuff to update, delete or whatever, being sure to remove
        // from cache when you do so
    }

    When the DI container gets an instance of the repo, it passes an instance of ICacheProvider to the constructor, which in this case will be whatever implementation was specified when the container was initialized. The Get method first tries to hit the cache, and of course doesn't care what the underlying implementation is: Azure, HttpRuntime, or otherwise. If it finds the object, it returns it right then. If not, it hits the database (this example is using Entity Framework), and inserts the object into the cache before returning it. The important thing not pictured here is that other methods in the repo class will construct the key for the cached object, in this case "somename-" plus the ID of the object, and then remove it from cache in any method that alters or deletes the object. That way, no matter which instance of the role is processing the request, it won't find the object if it has been made stale, that is, updated or outright deleted, forcing it to attempt to hit the database. So is this good technique? Well, sort of. It depends on how you use it, and what your testing looks like around it. Because of differences in behavior and execution of the two caching providers, for example, you could see some strange errors. For example, I immediately got an error indicating there was no parameterless constructor for an MVC controller, because the DI resolver failed to create instances for the dependencies it had. In reality, the NuGet-packaged DI resolver for StructureMap was eating an exception thrown by the Azure components that said my configuration, outlined in that how-to article, was wrong. That error wouldn't occur when using the HttpRuntime. That's something a lot of people debate about when using different components like that, and how you configure them. I kinda hate XML config files, and like the idea of the code-based approach above, but you should be darn sure that your unit and integration testing can account for the differences.
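    To round out the Controller -> BusinessLogicClass -> Repository chain described above, here is a minimal sketch of what a consumer might look like; the controller and action names are placeholders of mine, not from the post:

    using System.Web.Mvc;

    public class SomeTypeController : Controller
    {
        private readonly IMyRepo _repo;

        // The DI container supplies the repo, which in turn received whichever
        // ICacheProvider implementation was mapped at startup.
        public SomeTypeController(IMyRepo repo)
        {
            _repo = repo;
        }

        public ActionResult Details(int id)
        {
            var someType = _repo.Get(id); // hits the cache first, then the database
            return View(someType);
        }
    }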

    Read the article

  • SQL CLR Assembly Error 80131051 when late binding to a registered C# COM .dll

    - by Shanubus
    I must have hit an unusual one, because I can't find any reference to this specific failure anywhere... Scenario: I have a legacy SQL function used to transform (encrypt) data. This function is called from within many stored procedures used by multiple applications. I say this because the obvious answer of 'just call it from your code' is not really an option (or at least one I'd prefer not to explore). The legacy function used sp_OA with an ActiveX dll on SQL2000 to perform its work. The new function is targeted at SQL2008 x64. I am ditching the sp_OA call in favor of a CLR assembly, and am getting rid of the ActiveX dll and using a COM+ .dll (3rd party) to perform the same work. This 3rd party COM+ is required to be used based on the spec given to me, so I can't get rid of this piece either.

    Problem: After multiple attempts at getting this to work, I have eliminated the following approaches:

    1) Create a Sql Assembly to call the local COM+ directly -- Can't do this, as it requires a reference to System.EnterpriseServices. Including this requires that a whole slew of unsupported assemblies be registered, which I don't want. The COM+ requires its methods to be accessed via an Interface, so my attempts at late binding to it directly have not been successful (late binding would allow me to drop the unsupported references).

    2) Create a Sql Assembly which references a C# class library that then calls the COM+. -- Same issue as #1; since the referenced dll uses System.EnterpriseServices and will be added as a dependency when referenced in the Sql Assembly, it again tries to load all the unsupported libraries.

    3) Create a Sql Assembly which late binds to an ActiveX COM dll that calls the COM+. -- Worked in my dev environment, but I can't go to x64 in production with ActiveX dll's written in VB6 (not to mention I hate backtracking anyway)... again, failure...

    I am now onto an approach that is almost working, with of course one last hangup. I now have a Sql Assembly that late binds to a C# COM dll, eliminating the need for including System.EnterpriseServices and eliminating the need to reference the C# COM in the SqlAssembly itself. The C# COM does reference System.EnterpriseServices to call the COM+, but since I am late binding to it from the SqlAssembly, I bypass the need for Sql to actually load them as referenced assemblies.

    - Works in the debugger
    - Works on my dev box when the SqlAssembly dll is referenced in a test console app and called directly
    - Installs to Sql2008 just fine
    - Executing the actual UDF works, but returns no data due to a failure reported from the late bound dll!

    So the SqlAssembly is instantiated just fine. It actually fails on its late binding to the C# COM, which works from a test console app on the same machine. It appears to be a difference in behavior based on whether it's called from within the SQL UDF or not. Since it works on the same box from my console app, I am assuming it's on the SQL side. My steps to install were:
    --Install the COM+ dll and ensure it can be called successfully (as from within the console app)
    --Register the C# COM dll (which calls the COM+) and get it into the GAC (again proven to be working from the console app)
    --Create my Asymmetric Key:

        CREATE ASYMMETRIC KEY SqlExKey FROM EXECUTABLE FILE = 'D:\SqlEx.dll'
        CREATE LOGIN SqlExLogin FROM ASYMMETRIC KEY SqlExKey
        GRANT UNSAFE ASSEMBLY TO SqlExLogin
        GO

    --Add the assembly:

        CREATE ASSEMBLY SqlEx FROM 'D:\SqlEx.dll'
        WITH PERMISSION_SET = UNSAFE;
        GO

    --Create the function:

        CREATE FUNCTION dbo.f_SqlEx( @clearText [nvarchar](512) )
        RETURNS nvarchar(512)
        WITH EXECUTE AS CALLER
        AS EXTERNAL NAME SqlEx.[SqlEx.SqlEx].Ex
        GO

    With all that done, I can now call my function:

        SELECT dbo.f_SqlEx('test')

    But I get this error in the event log... Retrieving the COM class factory for component with CLSID {F69D6320-5884-323F-936A-7657946604BE} failed due to the following error: 80131051. I can't really provide direct code examples, due to internal security implications; but all the code itself seems to work, so I am suspecting perms or something of the like... I just find it odd that I can't find any reference to error 80131051. If someone out there believes some 'indirect' code samples will help, I will be happy to provide them. Any assistance is appreciated.
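    For reference, the "late binds to a C# COM dll" step described above would normally be done with reflection, along these lines. This is only a sketch of the general technique, not the asker's actual code; the ProgID ("MyCompany.Crypto") and method name ("Transform") are hypothetical placeholders:

        using System;
        using System.Reflection;

        public static class SqlExSketch
        {
            // Late-bind to a registered COM-visible class by ProgID, so the
            // SQL assembly carries no compile-time reference to it (and thus
            // no registered dependency on System.EnterpriseServices).
            public static string Ex(string clearText)
            {
                Type comType = Type.GetTypeFromProgID("MyCompany.Crypto", true); // throws if not registered
                object instance = Activator.CreateInstance(comType);

                // Invoke the worker method by name via reflection.
                return (string)comType.InvokeMember(
                    "Transform",
                    BindingFlags.InvokeMethod,
                    null, instance, new object[] { clearText });
            }
        }

    A "Retrieving the COM class factory" failure like the one above means the object creation step itself is failing inside SQL Server's hosted environment, even though the same call succeeds in a console app.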

    Read the article

  • Two-way databinding of a custom templated asp.net control

    - by Jason
    I hate long code snippets and I'm sorry about this one, but it turns out that this asp.net stuff can't get much shorter, and it's so specific that I haven't been able to generalize it without a full code listing. I just want simple two-way, declarative, edit-only databinding to a single instance of an object. Not a list of objects of a type with a bunch of NotImplementedExceptions for Add, Delete, and Select, but just a single view-state persisted object. This is certainly something that can be done, but I've struggled with an implementation for years. This newest, closest implementation was inspired by this article from 4-Guys-From-Rolla. Unfortunately, after implementing, I'm getting the following error and I don't know what I'm missing:

    System.InvalidOperationException: Databinding methods such as Eval(), XPath(), and Bind() can only be used in the context of a databound control.

    If I don't use Bind(), and only use Eval() functionality, it works. In that way, the error is especially confusing. Update: Actually, using Eval() does NOT work, but using <%# Container.SampleString %> works. However, Eval("SampleString") gives the same error. That leads me back to this article I found earlier but had discarded. Now I believe it might be related, though I haven't cracked it yet... Here's the simplified codeset that still produces the error:

        using System.ComponentModel;

        namespace System.Web.UI.WebControls.Special
        {
            public class SampleFormData
            {
                public string SampleString = "Sample String Data";
                public int SampleInt = -1;
            }

            [ToolboxItem(false)]
            public class SampleSpecificFormDataContainer : DataBoundControl, INamingContainer
            {
                SampleSpecificEntryForm entryForm;

                internal SampleSpecificEntryForm EntryForm { get { return entryForm; } }

                [Bindable(true), Category("Data")]
                public string SampleString
                {
                    get { return entryForm.FormData.SampleString; }
                    set { entryForm.FormData.SampleString = value; }
                }

                [Bindable(true), Category("Data")]
                public int SampleInt
                {
                    get { return entryForm.FormData.SampleInt; }
                    set { entryForm.FormData.SampleInt = value; }
                }

                internal SampleSpecificFormDataContainer(SampleSpecificEntryForm entryForm)
                {
                    this.entryForm = entryForm;
                }
            }

            public class SampleSpecificEntryForm : WebControl, INamingContainer
            {
                #region Template
                private IBindableTemplate formTemplate = null;

                [Browsable(false), DefaultValue(null),
                 TemplateContainer(typeof(SampleSpecificFormDataContainer), ComponentModel.BindingDirection.TwoWay),
                 PersistenceMode(PersistenceMode.InnerProperty)]
                public virtual IBindableTemplate FormTemplate
                {
                    get { return formTemplate; }
                    set { formTemplate = value; }
                }
                #endregion

                #region Viewstate
                SampleFormData FormDataVS
                {
                    get { return (ViewState["FormData"] as SampleFormData) ?? new SampleFormData(); }
                    set { ViewState["FormData"] = value; SaveViewState(); }
                }
                #endregion

                public override ControlCollection Controls
                {
                    get { EnsureChildControls(); return base.Controls; }
                }

                private SampleSpecificFormDataContainer formDataContainer = null;

                [Browsable(false), DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
                public SampleSpecificFormDataContainer FormDataContainer
                {
                    get { EnsureChildControls(); return formDataContainer; }
                }

                [Bindable(true), Browsable(false)]
                public SampleFormData FormData
                {
                    get { return FormDataVS; }
                    set { FormDataVS = value; }
                }

                protected override void CreateChildControls()
                {
                    if (!this.ChildControlsCreated)
                    {
                        Controls.Clear();
                        formDataContainer = new SampleSpecificFormDataContainer(this);
                        Controls.Add(formDataContainer);
                        FormTemplate.InstantiateIn(formDataContainer);
                        this.ChildControlsCreated = true;
                    }
                }

                public override void DataBind()
                {
                    CreateChildControls();
                    base.DataBind();
                }
            }
        }

    With the following ASP.NET page:

        <%@ Page Title="Home Page" Language="C#" MasterPageFile="~/Site.master" AutoEventWireup="true"
            CodeBehind="Default2.aspx.cs" Inherits="EntryFormTest._Default2" EnableEventValidation="false" %>
        <%@ Register Assembly="EntryForm" Namespace="System.Web.UI.WebControls.Special" TagPrefix="cc1" %>
        <asp:Content ID="HeaderContent" runat="server" ContentPlaceHolderID="HeadContent">
        </asp:Content>
        <asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent">
            <h2>Welcome to ASP.NET!</h2>
            <cc1:SampleSpecificEntryForm ID="EntryForm1" runat="server">
                <FormTemplate>
                    <asp:TextBox ID="TextBox1" runat="server" Text='<%# Bind("SampleString") %>'></asp:TextBox><br />
                    <h3>(<%# Container.SampleString %>)</h3><br />
                    <asp:Button ID="Button1" runat="server" Text="Button" />
                </FormTemplate>
            </cc1:SampleSpecificEntryForm>
        </asp:Content>

    Default2.aspx.cs:

        using System;

        namespace EntryFormTest
        {
            public partial class _Default2 : System.Web.UI.Page
            {
                protected void Page_Load(object sender, EventArgs e)
                {
                    EntryForm1.DataBind();
                }
            }
        }

    Thanks for any help!
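    For what it's worth, the "two-way" half of an IBindableTemplate is harvested with ExtractValues(), which returns the name/value pairs collected by the Bind() expressions. A control like the one above would eventually need something along these lines once the binding error is resolved; this is only a sketch, and the ExtractFormValues method name is hypothetical:

        using System.Collections.Specialized;

        // Hypothetical addition to SampleSpecificEntryForm: pull the values
        // that <%# Bind("SampleString") %> collected back out of the template.
        public void ExtractFormValues()
        {
            IOrderedDictionary values = FormTemplate.ExtractValues(FormDataContainer);
            if (values.Contains("SampleString"))
            {
                FormDataContainer.SampleString = (string)values["SampleString"];
            }
        }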

    Read the article

  • Am I crazy? (How) should I create a jQuery content editor?

    - by Brendon Muir
    Ok, so I created a CMS mainly aimed at Primary Schools. It's getting fairly popular in New Zealand, but the one thing I hate with a passion is the largely bad quality of in-browser WYSIWYG editors. I've been using KTML (made by InterAKT, which was purchased by Adobe a few years ago). In my opinion this editor does a lot of great things (image editing/management, thumbnailing and pretty good content editing). Unfortunately time has had its nasty way with this product, and new browsers are beginning to break features and generally degrade the performance of this tool. It's also quite scary basing my livelihood on a defunct product! I've been hunting, in fact I regularly hunt around, to see if anything has changed in the WYSIWYG arena. The closest thing I've seen that excites me is the WYSIHAT framework, but they've decided to ignore a pretty relevant editing paradigm, which I'm going to outline below.

    This is the idea for my proposed editor, and I don't know of any existing products that can do this properly. Right, so the traditional model for editing, let's say a Page in a CMS, is to log into a 'back end' and click edit on the page. This will then load another screen with the editor in it and perhaps a few other fields. More advanced CMS's will maybe have several editing boxes that are for different portions of the page. Anyway, the big problem with this way of doing things is that the user is editing a document outside of the final context it will appear in. In the simplest terms, this means the page template. Many things can be wrong, e.g. the width of the editing area might be different from the width of the actual template area. The height is nearly always fixed, because existing editors always seem to use IFRAMEs for backward compatibility. And there are plenty of other beefs which I'm sure you're quite aware of if you're in this development area.

    Here's my editor utopia:

    - You click 'Edit Page': the actual page (with its actual template) displays. Portions of the page have been marked as editable via a class name.
    - You click on one of these areas (in my case it'd just be the big 'body' area in the middle of the template) and an editing bar drops down from the top of the screen with all your standard controls (bold, italic, insert image etc...).
    - Iframes are never used; instead we rely on setting contentEditable to true on the DIVs in question. Firefox 2 and IE6 can go away, let's move on.
    - You can edit the page knowing exactly how it will look when you save it. Because all the styles for this template are loaded, your headings will look correct, everything will be just dandy.

    Is this such a radical concept? Why are we still content with TinyMCE and that other editor that is too embarrassing to use because it sounds like a swear word!? Let's face the facts: I'm a JavaScript novice. I did once play around in this area using the JavaScript Anthology from SitePoint as a guide. It was quite a cool learning experience, but they of course used the IFRAME to make their lives easier. I tried to go a different route and just use contentEditable, and even tried to sidestep the native content editing routines (execCommand) and instead wrote my own. They kind of worked, but there were always issues. Now we have jQuery, and a few libraries that abstract things like IE's lack of Range support. I'm wondering, am I crazy, or is it actually a good idea to try and build an editor around this editing paradigm, using jQuery and relevant plugins to make the job easier?

    My actual questions:

    - Where would you start?
    - What plugins do you know of that would help the most?
    - Is it worth it, or is there a magical project that already exists that I should join in on?
    - What are the biggest hurdles to overcome in a project like this?
    - Am I crazy?

    I hope this question has been posted on the right board. I figured it is a technical question, as I'm wanting to know specific hurdles and pitfalls to watch out for, and also if it is technically feasible with today's technology. Looking forward to hearing people's thoughts and opinions.

    Read the article

  • Top things web developers should know about the Visual Studio 2013 release

    - by Jon Galloway
    Summary for lazy readers:

    - Visual Studio 2013 is now available for download on the Visual Studio site and on MSDN subscriber downloads
    - Visual Studio 2013 installs side by side with Visual Studio 2012 and supports round-tripping between Visual Studio versions, so you can try it out without committing to a switch
    - Visual Studio 2013 ships with the new version of ASP.NET, which includes ASP.NET MVC 5, ASP.NET Web API 2, Razor 3, Entity Framework 6 and SignalR 2.0
    - The new ASP.NET release focuses on One ASP.NET, so core features and web tools work the same across the platform (e.g. adding ASP.NET MVC controllers to a Web Forms application)
    - New core features include new templates based on Bootstrap, a new scaffolding system, and a new identity system
    - Visual Studio 2013 is an incredible editor for web files, including HTML, CSS, JavaScript, Markdown, LESS, CoffeeScript, Handlebars, Angular, Ember, Knockout, etc.

    Top links:

    - Visual Studio 2013 content on the ASP.NET site is in the standard new releases area: http://www.asp.net/vnext
    - ASP.NET and Web Tools for Visual Studio 2013 Release Notes
    - Short intro videos on the new Visual Studio web editor features from Scott Hanselman and Mads Kristensen
    - Announcing release of ASP.NET and Web Tools for Visual Studio 2013 post on the official .NET Web Development and Tools Blog
    - Scott Guthrie's post: Announcing the Release of Visual Studio 2013 and Great Improvements to ASP.NET and Entity Framework

    Okay, for those of you who are still with me, let's dig in a bit.

    Quick web dev notes on downloading and installing Visual Studio 2013

    I found Visual Studio 2013 to be a pretty fast install. According to Brian Harry's release post, installing over pre-release versions of Visual Studio is supported. I've installed the release version over pre-release versions, and it worked fine. If you're only going to be doing web development, you can speed up the install if you just select Web Developer tools. Of course, as a good Microsoft employee, I'll mention that you might also want to install some of those other features, like the Store apps for Windows 8 and the Windows Phone 8.0 SDK, but they do download and install a lot of other stuff (e.g. the Windows Phone SDK sets up Hyper-V and downloads several GBs of VMs). So if you're planning just to do web development for now, you can pick just the Web Developer Tools and install the other stuff later. If you've got a fast internet connection, I recommend using the web installer instead of downloading the ISO. The ISO includes all the features, whereas the web installer just downloads what you're installing.

    Visual Studio 2013 development settings and color theme

    When you start up Visual Studio, it'll prompt you to pick some defaults. These are totally up to you - whatever suits your development style - and you can change them later. As I said, these are completely up to you. I recommend either the Web Development or Web Development (Code Only) settings. The only real difference is that Code Only hides the toolbars, and you can switch between them using Tools / Import and Export Settings / Reset.

    Web Development settings
    Web Development (code only) settings

    Usually I've just gone with Web Development (code only) in the past, because I just want to focus on the code, although the Standard toolbar does make it easier to switch default web browsers. More on that later.

    Color theme

    Sigh.
    Okay, everyone's got their favorite colors. I alternate between Light and Dark depending on my mood, and I personally like how the low contrast on the window chrome in those themes puts the emphasis on my code rather than the tabs and toolbars. I know some people got pretty worked up over that, though, and wanted the blue theme back. I personally don't like it - it reminds me of ancient versions of Visual Studio that I don't want to think about anymore. So here's the thing: if you install Visual Studio Ultimate, it defaults to Blue. The other versions default to Light. If you use Blue, I won't criticize you - out loud, that is. You can change themes really easily - either Tools / Options / Environment / General, or the smart way: ctrl+q for quick launch, then type Theme and hit enter.

    Signing in

    During the first run, you'll be prompted to sign in. You don't have to - you can click the "Not now, maybe later" link at the bottom of that dialog. I recommend signing in, though. It's not hooked in with licensing or tracking the kind of code you write to sell you components. It is doing good things, like syncing your Visual Studio settings between computers. More about that here. So, you don't have to, but I sure do.

    Overview of shiny new things in ASP.NET land

    There are a lot of good new things in ASP.NET. I'll list some of my favorites here, but you can read more on the ASP.NET site.

    One ASP.NET

    You've heard us talk about this for a while. The idea is that options are good, but choice can be a burden. When you start a new ASP.NET project, why should you have to make a tough decision - with long-term consequences - about how your application will work? If you want to use ASP.NET Web Forms, but have the option of adding in ASP.NET MVC later, why should that be hard? It's all ASP.NET, right? Ideally, you'd just decide that you want to use ASP.NET to build sites and services, and you could use the appropriate tools (the green blocks below) as you needed them. So, here it is. When you create a new ASP.NET application, you just create an ASP.NET application. Next, you can pick from some templates to get you started... but these are different. They're not "painful decision" templates, they're just some starting pieces. And, most importantly, you can mix and match. I can pick a "mostly" Web Forms template, but include MVC and Web API folders and core references. If you've tried to mix and match in the past, you're probably aware that it was possible, but not pleasant. ASP.NET MVC project files contained special project type GUIDs, so you'd only get controller scaffolding support in a Web Forms project if you manually edited the csproj file. Features in one stack didn't work in others. Project templates were painful choices. That's no longer the case. Hooray! I just did a demo in a presentation last week where I created a new Web Forms + MVC + Web API site, built a model, scaffolded MVC and Web API controllers with EF Code First, added data in the MVC view, viewed it in Web API, then added a GridView to the Web Forms Default.aspx page and bound it to the Model. In about 5 minutes. Sure, it's a simple example, but it's great to be able to share code and features across the whole ASP.NET family.

    Authentication

    In the past, authentication was built into the templates. So, for instance, there was an ASP.NET MVC 4 Intranet Project template which created a new ASP.NET MVC 4 application that was preconfigured for Windows Authentication.
    All of that authentication stuff was built into each template, so it varied between the stacks, and you couldn't reuse it. You didn't see a lot of changes to the authentication options, since they required big changes to a bunch of project templates. Now, the new project dialog includes a common authentication experience. When you hit the Change Authentication button, you get some common options that work the same way regardless of the template or reference settings you've made. These options work on all ASP.NET frameworks, and all hosting environments (IIS, IIS Express, or OWIN for self-host). The default is Individual User Accounts: this is the standard "create a local account, using username / password or OAuth" thing; however, it's all built on the new Identity system. More on that in a second. The one setting that has some configuration to it is Organizational Accounts, which lets you configure authentication using Active Directory, Windows Azure Active Directory, or Office 365.

    Identity

    There's a new identity system. We've taken the best parts of the previous ASP.NET Membership and Simple Identity systems, rolled in a lot of feedback and made big enhancements to support important developer concerns like unit testing and extensibility. I've written long posts about ASP.NET identity, and I'll do it again. Soon. This is not that post. The short version is that I think we've finally got just the right Identity system. Some of my favorite features:

    - There are simple, sensible defaults that work well - you can File / New / Run / Register / Login, and everything works.
    - It supports standard username / password as well as external authentication (OAuth, etc.).
    - It's easy to customize without having to re-implement an entire provider. It's built using pluggable pieces, rather than one large monolithic system.
    - It's built using interfaces like IUser and IRole that allow for unit testing, dependency injection, etc.
    - You can easily add user profile data (e.g. URL, twitter handle, birthday). You just add properties to your ApplicationUser model and they'll automatically be persisted (see the sketch after this list).
    - Complete control over how the identity data is persisted. By default, everything works with Entity Framework Code First, but it's built to support changes from small (modify the schema) to big (use another ORM, store your data in a document database or in the cloud or in XML or in the EXIF data of your desktop background or whatever).
    - It's configured via OWIN. More on OWIN and Katana later, but the fact that it's built using OWIN means it's portable.

    You can find out more in the Authentication and Identity section of the ASP.NET site (and lots more content will be going up there soon).
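    Since the profile-data point above is the one people ask about most, here's a minimal sketch of it, assuming the default template's ApplicationUser class (which derives from IdentityUser in Microsoft.AspNet.Identity.EntityFramework). The three property names are just examples:

        using System;
        using Microsoft.AspNet.Identity.EntityFramework;

        // Sketch: extra profile properties on the template's user class.
        // With the default Entity Framework Code First setup, these become
        // columns and are persisted automatically - no provider to rewrite.
        public class ApplicationUser : IdentityUser
        {
            public string Url { get; set; }
            public string TwitterHandle { get; set; }
            public DateTime? Birthday { get; set; }
        }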
    New Bootstrap based project templates

    The new project templates are built using Bootstrap 3. Bootstrap (formerly Twitter Bootstrap) is a front-end framework that brings a lot of nice benefits:

    - It's responsive, so your projects will automatically scale to device width using CSS media queries. For example, menus are full size on a desktop browser, but on narrower screens you automatically get a mobile-friendly menu.
    - The built-in Bootstrap styles make your standard page elements (headers, footers, buttons, form inputs, tables etc.) look nice and modern.
    - Bootstrap is themeable, so you can reskin your whole site by dropping in a new Bootstrap theme. Since Bootstrap is pretty popular across the web development community, this gives you a large and rapidly growing variety of templates (free and paid) to choose from.
    - Bootstrap also includes a lot of very useful things: components (like progress bars and badges), useful glyphicons, and some jQuery plugins for tooltips, dropdowns, carousels, etc.

    Here's a look at how the responsive part works. When the page is full screen, the menu and header are optimized for a wide screen display. When I shrink the page down (this is all based on page width, not useragent sniffing), the menu turns into a nice mobile-friendly dropdown. For a quick example, I grabbed a new free theme off bootswatch.com. For simple themes, you just need to download the bootstrap.css file and replace the /content/bootstrap.css file in your project. Now when I refresh the page, I've got a new theme.

    Scaffolding

    The big change in scaffolding is that it's one system that works across ASP.NET. You can create a new Empty Web project or Web Forms project and you'll get the Scaffold context menus. For release, we've got MVC 5 and Web API 2 controllers. We had a preview of Web Forms scaffolding in the preview releases, but they weren't fully baked for RTM. Look for them in a future update, expected pretty soon. This scaffolding system wasn't just changed to work across the ASP.NET frameworks, it's also built to enable future extensibility. That's not in this release, but should also hopefully be out soon.

    Project Readme page

    This is a small thing, but I really like it. When you create a new project, you get a Project_Readme.html page that's added to the root of your project and opens in the Visual Studio built-in browser. I love it. A long time ago, when you created a new project we just dumped it on you and left you scratching your head about what to do next. Not ideal. Then we started adding a bunch of Getting Started information to the new project templates. That told you what to do next, but you had to delete all of that stuff out of your website. It doesn't belong there. Not ideal. This is a simple HTML file that's not integrated into your project code at all. You can delete it if you want. But, it shows a lot of helpful links that are current for the project you just created. In the future, if we add new wacky project types, they can create readme docs with specific information on how to do appropriately wacky things. Side note: I really like that they used the internal browser in Visual Studio to show this content rather than popping open an HTML page in the default browser. I hate that. It's annoying. If you're doing that, I hope you'll stop. What if some unnamed person has 40 or 90 tabs saved in their browser session? When you pop open your "Thanks for installing my Visual Studio extension!" page, all eleventy billion tabs start up and I wish I'd never installed your thing. Be like these guys and pop Visual Studio specific HTML docs in the Visual Studio browser.

    ASP.NET MVC 5

    The biggest change with ASP.NET MVC 5 is that it's no longer a separate project type. It integrates well with the rest of ASP.NET. In addition to that and the other common features we've already looked at (Bootstrap templates, Identity, authentication), here's what's new for ASP.NET MVC.

    Attribute routing

    ASP.NET MVC now supports attribute routing, thanks to a contribution by Tim McCall, the author of http://attributerouting.net. With attribute routing you can specify your routes by annotating your actions and controllers. This supports some pretty complex, customized routing scenarios, and it allows you to keep your route information right with your controller actions if you'd like.
    Here's a controller that includes an action whose method name is Hiding, but I've used attribute routing to configure it to /spaghetti/with-nesting/where-is-waldo:

        public class SampleController : Controller
        {
            [Route("spaghetti/with-nesting/where-is-waldo")]
            public string Hiding()
            {
                return "You found me!";
            }
        }

    I enable that in my RouteConfig.cs, and I can use it in conjunction with my other MVC routes like this:

        public class RouteConfig
        {
            public static void RegisterRoutes(RouteCollection routes)
            {
                routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
                routes.MapMvcAttributeRoutes();
                routes.MapRoute(
                    name: "Default",
                    url: "{controller}/{action}/{id}",
                    defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
                );
            }
        }

    You can read more about Attribute Routing in ASP.NET MVC 5 here.

    Filter enhancements

    There are two new additions to filters: Authentication Filters and Filter Overrides. Authentication filters are a new kind of filter in ASP.NET MVC that run prior to authorization filters in the ASP.NET MVC pipeline and allow you to specify authentication logic per-action, per-controller, or globally for all controllers. Authentication filters process credentials in the request and provide a corresponding principal. Authentication filters can also add authentication challenges in response to unauthorized requests. Override filters let you change which filters apply to a given action method or controller. Override filters specify a set of filter types that should not be run for a given scope (action or controller). This allows you to configure filters that apply globally but then exclude certain global filters from applying to specific actions or controllers.

    ASP.NET Web API 2

    ASP.NET Web API 2 includes a lot of new features.

    Attribute Routing

    ASP.NET Web API supports the same attribute routing system that's in ASP.NET MVC 5. You can read more about the Attribute Routing features in Web API in this article.

    OAuth 2.0

    ASP.NET Web API picks up OAuth 2.0 support, using security middleware running on OWIN (discussed below). This is great for features like authenticated Single Page Applications.

    OData Improvements

    ASP.NET Web API now has full OData support. That required adding in some of the most powerful operators: $select, $expand, $batch and $value. You can read more about OData operator support in this article by Mike Wasson.

    Lots more

    There's a huge list of other features, including CORS (cross-origin resource sharing), IHttpActionResult, IHttpRequestContext, and more. I think the best overview is in the release notes.

    OWIN and Katana

    I've written about OWIN and Katana recently. I'm a big fan. OWIN is the Open Web Interface for .NET. It's a spec, like HTML or HTTP, so you can't install OWIN. The benefit of OWIN is that it's a community specification, so anyone who implements it can plug into the ASP.NET stack, either as middleware or as a host. Katana is the Microsoft implementation of OWIN. It leverages OWIN to wire up things like authentication, handlers, modules, IIS hosting, etc., so ASP.NET can host OWIN components and Katana components can run in someone else's OWIN implementation. Howard Dierking just wrote a cool article in MSDN magazine describing Katana in depth: Getting Started with the Katana Project. He had an interesting example showing an OWIN based pipeline which leveraged SignalR, ASP.NET Web API and NancyFx components in the same stack. If this kind of thing makes sense to you, that's great. If it doesn't, don't worry, but keep an eye on it.
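    If you want to see how small an OWIN pipeline can be, here's a minimal sketch of a self-hosted Katana app. It assumes the Microsoft.Owin.SelfHost NuGet package, and the port number is arbitrary:

        using System;
        using Microsoft.Owin.Hosting;
        using Owin;

        class Program
        {
            static void Main()
            {
                // Spin up the Katana HTTP listener outside of IIS.
                using (WebApp.Start<Startup>("http://localhost:8080"))
                {
                    Console.WriteLine("Listening on http://localhost:8080 - press Enter to quit.");
                    Console.ReadLine();
                }
            }
        }

        public class Startup
        {
            public void Configuration(IAppBuilder app)
            {
                // A single piece of OWIN middleware: answer every request
                // with a plain-text response.
                app.Run(async context =>
                {
                    context.Response.ContentType = "text/plain";
                    await context.Response.WriteAsync("Hello from an OWIN pipeline!");
                });
            }
        }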
    You're going to see some cool things happen as a result of ASP.NET becoming more and more pluggable.

    Visual Studio Web Tools

    Okay, this stuff's just crazy. Visual Studio has been adding some nice web dev features over the past few years, but they've really cranked it up for this release. Visual Studio is by far my favorite code editor for all web files: CSS, HTML, JavaScript, and lots of popular libraries. Stop thinking of Visual Studio as a big editor that you only use to write back-end code. Stop editing HTML and CSS in Notepad (or Sublime, Notepad++, etc.). Visual Studio starts up in under 2 seconds on a modern computer with an SSD. Misspelling HTML attributes or your CSS classes or jQuery or Angular syntax is stupid. It doesn't make you a better developer, it makes you a silly person who wastes time.

    Browser Link

    Browser Link is a real-time, two-way connection between Visual Studio and all connected browsers. It's only attached when you're running locally, in debug, but it applies to any and all connected browsers, including emulators. You may have seen demos that showed the browsers refreshing based on changes in the editor, and I'll agree that's pretty cool. But it's really just the start. It's a two-way connection, and it's built for extensibility. That means you can write extensions that push information from your running application (in IE, Chrome, a mobile emulator, etc.) back to Visual Studio. Mads and team have shown off some demonstrations where they enabled edit mode in the browser, which updated the source HTML back in Visual Studio. It's also possible to look at how the rendered HTML performs, check for compatibility issues, watch for unused CSS classes, the sky's the limit.

    New HTML editor

    The previous HTML editor had a lot of old code that didn't allow for improvements. The team rewrote the HTML editor to take advantage of the new(ish) extensibility features in Visual Studio, which then allowed them to add in all kinds of features - things like CSS Class and ID IntelliSense (so you type style="" and get a list of classes and IDs for your project), smart indent based on how your document is formatted, JavaScript reference auto-sync, etc. Here's a 3 minute tour from Mads Kristensen.

    Lots more Visual Studio web dev features

    That's just a sampling - there's a ton of great features for JavaScript editing, CSS editing, publishing, and Page Inspector (which shows real-time rendering of your page inside Visual Studio). Here are some more short videos showing those features.

    Lots, lots more

    Okay, that's just a summary, and it's still quite a bit. Head on over to http://asp.net/vnext for more information, and download Visual Studio 2013 now to get started!

    Read the article

< Previous Page | 42 43 44 45 46 47  | Next Page >