Search Results

Search found 1524 results on 61 pages for 'elegant'.


  • Anti-Forgery Request in ASP.NET MVC and AJAX

    - by Dixin
    Background

    To secure websites against cross-site request forgery (CSRF, or XSRF) attacks, ASP.NET MVC provides an excellent mechanism: the server prints tokens to a cookie and inside the form; when the form is submitted, both the cookie token and the form token are sent with the HTTP request; the server then validates them. To print the tokens to the browser, just invoke HtmlHelper.AntiForgeryToken():

        <% using (Html.BeginForm()) { %>
            <%: this.Html.AntiForgeryToken(Constants.AntiForgeryTokenSalt) %>
            <%-- Other fields. --%>
            <input type="submit" value="Submit" />
        <% } %>

    which writes the token to the form:

        <form action="..." method="post">
            <input name="__RequestVerificationToken" type="hidden" value="J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP" />
            <!-- Other fields. -->
            <input type="submit" value="Submit" />
        </form>

    and to the cookie:

        __RequestVerificationToken_Lw__=J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP

    When the above form is submitted, both tokens are sent to the server. The [ValidateAntiForgeryToken] attribute is used to specify the controllers or actions that validate them:

        [HttpPost]
        [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)]
        public ActionResult Action(/* ... */)
        {
            // ...
        }

    This is very productive for form scenarios. But recently, when resolving security vulnerabilities for Web products, I encountered two problems:

    1. It would be nice to add [ValidateAntiForgeryToken] once per controller, but actually I have to add it to each POST action, which is a little crazy.
    2. After anti-forgery validation is turned on server side, AJAX POST requests consistently fail.

    Specify validation on the controller (not on each action)

    Problem

    Usually a controller contains actions for both HTTP GET and HTTP POST requests, and usually validation is only wanted for the POST requests. So if [ValidateAntiForgeryToken] is declared on the controller, the HTTP GET requests can never pass validation:

        [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)]
        public class SomeController : Controller
        {
            [HttpGet]
            public ActionResult Index() // Index page cannot work at all.
            {
                // ...
            }

            [HttpPost]
            public ActionResult PostAction1(/* ... */)
            {
                // ...
            }

            [HttpPost]
            public ActionResult PostAction2(/* ... */)
            {
                // ...
            }

            // ...
        }

    If a user sends an HTTP GET request from a link like http://Site/Some/Index, validation definitely fails, because no token is provided. So the result is that the [ValidateAntiForgeryToken] attribute must be distributed to every HTTP POST action in the application:

        public class SomeController : Controller
        {
            [HttpGet]
            public ActionResult Index() // Works.
            {
                // ...
            }

            [HttpPost]
            [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)]
            public ActionResult PostAction1(/* ... */)
            {
                // ...
            }

            [HttpPost]
            [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)]
            public ActionResult PostAction2(/* ... */)
            {
                // ...
            }

            // ...
        }

    Solution

    To avoid a large number of [ValidateAntiForgeryToken] attributes (one per HTTP POST action), I created a wrapper class around ValidateAntiForgeryTokenAttribute, where the HTTP verbs to validate can be specified:

        [AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = false, Inherited = true)]
        public class ValidateAntiForgeryTokenWrapperAttribute : FilterAttribute, IAuthorizationFilter
        {
            private readonly ValidateAntiForgeryTokenAttribute _validator;
            private readonly AcceptVerbsAttribute _verbs;

            public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs)
                : this(verbs, null)
            {
            }

            public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs, string salt)
            {
                this._verbs = new AcceptVerbsAttribute(verbs);
                this._validator = new ValidateAntiForgeryTokenAttribute() { Salt = salt };
            }

            public void OnAuthorization(AuthorizationContext filterContext)
            {
                string httpMethodOverride = filterContext.HttpContext.Request.GetHttpMethodOverride();
                if (this._verbs.Verbs.Contains(httpMethodOverride, StringComparer.OrdinalIgnoreCase))
                {
                    this._validator.OnAuthorization(filterContext);
                }
            }
        }

    When this attribute is declared on a controller, only HTTP requests with the specified verbs are validated:

        [ValidateAntiForgeryTokenWrapper(HttpVerbs.Post, Constants.AntiForgeryTokenSalt)]
        public class SomeController : Controller
        {
            // Actions for HTTP GET requests are not affected.
            // Only HTTP POST requests are validated.
        }

    Now one single attribute on the controller turns on validation for all of its HTTP POST actions.

    Submit the token via AJAX

    Problem

    In AJAX scenarios the request is sent by JavaScript instead of a form:

        $.post(url, {
            productName: "Tofu",
            categoryId: 1
            // Token is not posted.
        }, callback);

    This kind of AJAX POST request will always be invalid, because the server-side code cannot find the token in the posted data.

    Solution

    The token must be printed to the browser and then submitted back to the server. So first of all, HtmlHelper.AntiForgeryToken() must be called in the page from which the AJAX POST will be sent. Then jQuery must find the printed token in the page and post it:

        $.post(url, {
            productName: "Tofu",
            categoryId: 1,
            __RequestVerificationToken: getToken() // Token is posted.
        }, callback);

    To be reusable, this can be encapsulated in a tiny jQuery plugin:

        (function ($) {
            $.getAntiForgeryToken = function () {
                // HtmlHelper.AntiForgeryToken() must be invoked to print the token.
                return $("input[type='hidden'][name='__RequestVerificationToken']").val();
            };

            var addToken = function (data) {
                // Converts data if not already a string.
                if (data && typeof data !== "string") {
                    data = $.param(data);
                }
                data = data ? data + "&" : "";
                return data + "__RequestVerificationToken=" + encodeURIComponent($.getAntiForgeryToken());
            };

            $.postAntiForgery = function (url, data, callback, type) {
                return $.post(url, addToken(data), callback, type);
            };

            $.ajaxAntiForgery = function (settings) {
                settings.data = addToken(settings.data);
                return $.ajax(settings);
            };
        })(jQuery);

    Then in the application, just replace $.post() invocations with $.postAntiForgery(), and replace $.ajax() with $.ajaxAntiForgery():

        $.postAntiForgery(url, {
            productName: "Tofu",
            categoryId: 1
        }, callback); // Token is posted.

    This solution looks hard-coded and stupid. If you have a more elegant solution, please do tell me.
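    For the $.ajax() wrapper, a usage sketch (not from the original post) would pass the usual settings object; $.ajaxAntiForgery() simply appends the token to settings.data before delegating to $.ajax():

        // Hypothetical usage of the plugin defined above.
        $.ajaxAntiForgery({
            type: "POST",
            url: url,
            data: { productName: "Tofu", categoryId: 1 },
            success: callback // Token is appended automatically.
        });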

    Read the article

  • Mocking successive calls of similar type via sequential mocking

    - by mehfuzh
    In this post, I show how you can benefit from the sequential mocking feature (in JustMock) to set up expectations for successive calls of the same type. To start, let's consider the following dummy database interface and entity class:

        public class Person
        {
            public virtual string Name { get; set; }
            public virtual int Age { get; set; }
        }

        public interface IDataBase
        {
            T Get<T>();
        }

    Our test goal is to return a different entity for each successive call to IDataBase.Get<T>(). By default, the behavior in JustMock is to override, which is similar to other popular mocking tools: the tool always honors the latest user setup. Therefore, the first example will return the latest entity every time and will fail at the first assert:

        Person person1 = new Person { Age = 30, Name = "Kosev" };
        Person person2 = new Person { Age = 80, Name = "Mihail" };

        var database = Mock.Create<IDataBase>();

        Queue<Person> queue = new Queue<Person>();

        Mock.Arrange(() => database.Get<Person>()).Returns(() => queue.Dequeue());
        Mock.Arrange(() => database.Get<Person>()).Returns(person2);

        // this will fail
        Assert.Equal(person1.GetHashCode(), database.Get<Person>().GetHashCode());

        Assert.Equal(person2.GetHashCode(), database.Get<Person>().GetHashCode());

    We can solve this with a Queue that hands out the next item on each call:

        Person person1 = new Person { Age = 30, Name = "Kosev" };
        Person person2 = new Person { Age = 80, Name = "Mihail" };

        var database = Mock.Create<IDataBase>();

        Queue<Person> queue = new Queue<Person>();

        queue.Enqueue(person1);
        queue.Enqueue(person2);

        Mock.Arrange(() => database.Get<Person>()).Returns(() => queue.Dequeue());

        Assert.Equal(person1.GetHashCode(), database.Get<Person>().GetHashCode());
        Assert.Equal(person2.GetHashCode(), database.Get<Person>().GetHashCode());

    (Note the lambda in Returns: dequeuing must happen per call, not once at arrange time.) This ensures the right entity is returned, but it is not an elegant solution. So in JustMock we introduced a new option that lets you set up your expectations sequentially:

        Person person1 = new Person { Age = 30, Name = "Kosev" };
        Person person2 = new Person { Age = 80, Name = "Mihail" };

        var database = Mock.Create<IDataBase>();

        Mock.Arrange(() => database.Get<Person>()).Returns(person1).InSequence();
        Mock.Arrange(() => database.Get<Person>()).Returns(person2).InSequence();

        Assert.Equal(person1.GetHashCode(), database.Get<Person>().GetHashCode());
        Assert.Equal(person2.GetHashCode(), database.Get<Person>().GetHashCode());

    The "InSequence" modifier tells the mocking tool to return the expected results in the order the user specified them. The solution is pretty simple but neat (to me), and far simpler than using a collection for this kind of case. Hope that helps.

    P.S. The example shown in my blog uses an interface, so it doesn't require a profiler — you can write it in Notepad, build it referencing Telerik.JustMock.dll, run it with GUI tools and it will work. The feature also applies to concrete methods, which does require the JustMock profiler, and can be used in more complex scenarios.
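    For completeness, the same call-counting idea can be expressed without a queue at all by closing over a counter — a sketch of an alternative, not from the original post, relying only on the Returns(Func<T>) overload shown above:

        // Hypothetical alternative: pick the entity based on how often the mock was hit.
        int calls = 0;
        Mock.Arrange(() => database.Get<Person>())
            .Returns(() => calls++ == 0 ? person1 : person2);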

    Read the article

  • SQLAuthority News – 7th Anniversary of Blog – A Personal Note

    - by Pinal Dave
    Special Day

    Today is a very special day – seven years ago I blogged for the very first time. Seven years ago, I didn't know what I was doing; I didn't know how to blog, or even what a blog was or what to write. I was working as a DBA, and I was trying to solve a problem – at my job, there were a few issues I had to fix again and again and again. There were days when I was rewriting the same solution over and over, and there were times when I would get very frustrated because I could not write the same elegant solution that I had written before. I came up with a solution to this problem – posting these solutions online, where I could access them whenever I needed them. At that point, I had no idea what a blog was, or even how the internet worked; I had no idea that a blog would be visible to others. Can you believe it?

    Google it on Yahoo!

    After a few posts on this "blog," there was a surprise for me – an e-mail saying that someone had left me a comment. I was surprised, because I didn't even know you could comment on a blog! I logged on and read my comment. It said: "I like your script, but there is a small bug. If you fix it, it will run on multiple other versions of SQL Server." I was like, "wow, someone figured out how to find my blog, and they figured out how to fix my script!" I found the bug, I fixed the script, and I wrote a thank-you note to the guy. My first question for him was: how did you figure it out – not the script, but how to find my blog? He said he found it from Yahoo Search (this was in the time before Google, believe it or not).

    From that day, my life changed. I wrote a few more posts, I got a few more comments, and I started to watch my traffic. People were reading, commenting, and giving feedback. At the end of the day, people enjoyed what I was writing. This was a fantastic feeling! I never thought I would be writing for others. Even today, I don't feel like I am writing for others, but that I am simply posting what I am learning every day. From that very first day, I decided that I would not change my intent or my blog's purpose.

    72 Million Views – 2,600 Posts – 57,000 Comments – 10 Books – 9 Courses

    Today, this blog is my habit, my addiction, my baby. Every day I try to learn something new, and that lesson gets posted on the blog. Lately there have been days where I am traveling for a full 24 hours, but even on those days I try to learn something new, and later, when I have free time, I still post it to the blog. Because of this habit, this blog has over 72 million views, I have written more than 2,600 posts, and there are 57,000 comments and counting. I have also written 10 books and 9 courses, and learned so many things. This blog has given me back so much more than I ever put into it. It gave me an education, a reason to learn something new every day, and a way to connect with people. I like to think of it as a learning chain, a relay where we all pass knowledge from one to another.

    Never Ending Journey

    When I started the blog, I thought I would write for a few days and stop, but now, after seven years, I haven't stopped and I have no intention of stopping! However, change happens, and for this blog it starts today. This blog started as a single resource for SQL Server, but it has now grown beyond that, to SharePoint, personal development, developer training, MySQL, Big Data, and lots of other things. Truly speaking, this blog is more than just SQL Server, and that was always my intention. I named it "SQL Authority," not "SQL Server Authority"! Loudly and clearly, I would like to announce that I am going to go back to my roots and start writing more about SQL, more about big data, and more about other technologies like relational databases, MySQL, Oracle, and others. My goal is not to become a comprehensive resource for every technology; my goal is to learn something new every day – and now it can be so much more than just SQL Server. I will learn it, and post it here for you.

    I have written a very long post on this anniversary, but here is the summary: Thank You. You all have been wonderful. Seven years is a long journey, and it makes me emotional. I have been with this blog since before I met my wife, before we had our daughter. This blog is like a fourth member of the family. Keep reading, keep commenting, keep supporting. Thank you all.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: About Me, MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL

    Read the article

  • Modifying Service URLs with LINQ to Twitter

    - by Joe Mayo
    It's funny that two posts so close together speak about flexibility in the LINQ to Twitter provider. There are certain things you learn from experience about when to make software more flexible and when to save time. This is another one of those times when I got lucky and made the right choice up front. I'm talking about the ability to switch URLs.

    It only makes sense that Twitter should begin versioning their API as it matures. In fact, almost the entire API has moved to the v1 URL at "https://api.twitter.com/1/", except for Search and Trends. Recently, Twitter introduced the available and local trends, but hung them off the new v1 URL and left the rest of the Trends API on the old one. To implement this, I muscled my way into the expression tree during CreateRequestProcessor to figure out which trend I was dealing with; perhaps not elegant, but the code is in the right place, and that's what factories are for. Anyway, the point is that I wouldn't have to do this kind of stuff (as much fun as it is) if Twitter had more consistency. Having gone to Chirp last week and seen the evolution of the API, it looks like my wish is coming true. …now if they would just get their stuff together on the mess they made with geo-location and places… but again, that's all transparent if you're using LINQ to Twitter, because I pulled all of that together in a consistent way so that you don't have to.

    Normally, when Twitter makes a change, code breaks and I have to scramble to get fixes in place. This time, in the case of a URL change, the adjustment is easy and no one has to wait for me. Essentially, all you need to do is change the URL passed to the TwitterContext constructor. Here's an example of instantiating a TwitterContext now:

        using (var twitterCtx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://search.twitter.com/"))

    The third constructor parameter is the SearchUrl, which is used for the Search and Trend APIs. You probably know what's coming next: the same constructor, but with the SearchUrl parameter set to the new URL, as follows:

        using (var twitterCtx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://api.twitter.com/1/"))

    One consequence of setting the URL this way is that you set the URL for both Trends and Search. Since Search is still using the old URL, this is going to break Search queries. You could always instantiate a special TwitterContext instance for Search queries, with the old URL set. Alternatively, you can use the TwitterContext's SearchUrl property. Here's an example:

        twitterCtx.SearchUrl = "https://api.twitter.com/1/";

        var trends =
            (from trend in twitterCtx.Trends
             where trend.Type == TrendType.Daily &&
                   trend.Date == DateTime.Now.AddDays(-2).Date
             select trend)
            .ToList();

    Notice how I set the SearchUrl property just in time for the query. This allows you to target the URL for each specific query. Whichever way you prefer to configure the URL, it's your choice.

    So now you know how to set the URL to be used for Trend queries and how to avoid whacking your Search queries. I'll be updating the Trend API to use the same URL as all the other APIs soon, so the only API left using the SearchUrl will be Search; but for the short term, it's Trends and Search. Until I make this change, you have a viable work-around by setting the URL yourself, as explained above.

    These were the Search and Trend URLs, but you might be curious about the second parameter of the TwitterContext constructor; that's the URL for all other APIs (the BaseUrl), except for Trend and Search. Similarly, you can use the TwitterContext's BaseUrl property to set the BaseUrl. Setting the BaseUrl can be useful when communicating with other services. In addition to Twitter changing URLs, the Twitter API has been adopted by other companies, such as Identi.ca, Tumblr, and WordPress. This capability lets you use LINQ to Twitter with any of these services, and it is a testament to the success and popularity of the Twitter API.

    No doubt we'll have hills and valleys to traverse as the Twitter API matures, but hopefully there will be enough flexibility in LINQ to Twitter to make these changes as transparent as possible for you.

    @JoeMayo
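    If you'd rather not flip SearchUrl back and forth, the "special instance" approach mentioned above might look like this — a sketch, assuming the same auth object and the constructor shown earlier:

        // Hypothetical: keep one context per service URL instead of mutating SearchUrl.
        using (var searchCtx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://search.twitter.com/"))
        using (var trendCtx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://api.twitter.com/1/"))
        {
            // Run Search queries against searchCtx and Trend queries against trendCtx.
        }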

    Read the article

  • ASP.NET: Using pickup directory for outgoing e-mails

    - by DigiMortal
    Sending e-mails out from web applications is a very common task. When we are working on or testing our systems with real e-mail addresses, we don't want the recipients to receive those e-mails (especially if we are using some subset of real data). In this posting I will show you how to make the ASP.NET SMTP client write e-mails to disk instead of sending them out.

    SMTP settings for web application

    I have seen the code many times where all SMTP information is kept in appSettings, just to be read in code and handed to the SMTP client. This is not necessary, because we can define all these settings under the system.net => mailSettings node. If you are using web.config to keep the SMTP settings, then all you have to do in your code is create an SmtpClient with the empty constructor:

        var smtpClient = new SmtpClient();

    The empty constructor means that all settings are read from the web.config file.

    What is a pickup directory?

    If you want to drastically raise the e-mail throughput of your SMTP server, then it is not a very wise plan to communicate with it over the SMTP protocol; it only adds overhead to your network and SMTP server. Clients make connections and send messages out, and that too is overhead we can avoid. If clients instead write their e-mails to some folder that the SMTP server can access, then the SMTP server has e-mail forwarding as its only resource-intensive task. File operations are way faster than communication over the SMTP protocol. The directory where clients write their e-mails as files is called the pickup directory. Exchange Server, for example, has support for pickup directories. And as there are applications with a lot of users who want e-mail notifications, the .NET SMTP client supports writing e-mails to a pickup directory instead of sending them out.

    How to configure ASP.NET SMTP to use a pickup directory?

    Let's say, it is more than easy. It is very easy. This is all you need:

        <system.net>
          <mailSettings>
            <smtp deliveryMethod="SpecifiedPickupDirectory">
              <specifiedPickupDirectory pickupDirectoryLocation="c:\temp\maildrop\"/>
            </smtp>
          </mailSettings>
        </system.net>

    Now make sure you don't miss some points:

    - The pickup directory must physically exist, because it is not created automatically.
    - IIS (or Cassini) must have write permissions to the pickup directory.
    - Go through your code and look for hard-coded SMTP settings. Also take a look at all the places in your code where you send out e-mails, and make sure no custom SMTP settings are used there!

    Also don't forget that your mails are now written to the pickup directory and are not sent out to recipients anymore.

    Advanced scenario: configuring the SMTP client in code

    In some advanced scenarios you may need to support multiple SMTP servers. If the configuration is dynamic, or is not kept in web.config, you need to initialize your SmtpClient in code. This is all you need to do:

        var smtpClient = new SmtpClient();
        smtpClient.DeliveryMethod = SmtpDeliveryMethod.SpecifiedPickupDirectory;
        smtpClient.PickupDirectoryLocation = pickupFolder;

    Easy, isn't it? I like it when advanced scenarios end up with simple and elegant solutions, not with rocket science.

    Note for the IIS SMTP service

    The SMTP service of IIS is also able to use a pickup directory. If you have set up IIS with the SMTP service, you can configure your ASP.NET application to use the IIS pickup folder. In this case you have to use the following delivery method:

        SmtpDeliveryMethod.PickupDirectoryFromIis

    You can set this in the web.config file, too:

        <system.net>
          <mailSettings>
            <smtp deliveryMethod="PickupDirectoryFromIis" />
          </mailSettings>
        </system.net>

    Conclusion

    Whoever was still using other methods to avoid sending e-mails out in development or testing environments can now remove all that bad code from the application and live on the mail settings of ASP.NET. It is easy to configure, and you have less code to support e-mails when you use the built-in e-mail features wisely.
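    To see the whole flow end to end, here is a minimal sending sketch (the addresses are placeholders): with the SpecifiedPickupDirectory configuration above, Send() writes an .eml file into the pickup folder instead of opening an SMTP connection.

        // Settings come from web.config; no SMTP connection is made.
        var smtpClient = new SmtpClient();
        var message = new MailMessage(
            "noreply@example.com",      // from (placeholder)
            "recipient@example.com",    // to (placeholder)
            "Test subject",
            "Test body");
        smtpClient.Send(message);       // lands in c:\temp\maildrop\ as an .eml file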

    Read the article

  • Best way to store a large amount of game objects and update the ones onscreen

    - by user3002473
    Good afternoon guys! I'm a young beginner game developer working on my first large-scale game project, and I've run into a situation where I'm not quite sure what the best solution may be (if there is a single one). The question may be vague (if anyone can think of a better title after having read the question, please edit it) or broad, but I'm not quite sure what to do, and I thought it would help just to discuss the problem with people more educated in the field.

    Before we get started, here are some of the questions I've looked at for help in the past:

    - Best way to keep track of game objects
    - Elegant way to simulate large amounts of entities within a game world
    - What is the most efficient container to store dynamic game objects in?

    I've also read articles about different data structures commonly used in games to store game objects, such as this one about slot maps, but none of them are really what I'm looking for. Also, if it helps at all, I'm using Python 3 to design the game. It has to be Python 3; if I could, I would use C++ or UnityScript or something else, but I'm restricted to Python 3.

    My game will be a form of side-scroller shooter. In it the player will traverse large rooms with large amounts of enemies and other game objects to update (think some of the larger areas in Cave Story or Iji). The player obviously can't see the entire room at once, so there is a viewport that follows the player around and renders only a section of the room and the game objects it contains. This is not a foreign concept. The part that's getting me confused has to do with how certain game objects are updated. Some of them are to be updated constantly, regardless of whether or not they can be seen. Other objects, however, are only to be updated when they are onscreen (for example, an enemy would only be updated to react to the player when it is onscreen or within a certain range of the screen). Another requirement is that game objects must be easily referable by other game objects; something that happens in the player's update() method may affect another object in the world. Collision detection in games is always a serious problem: I need a way of containing the game objects that minimizes the number of pairs to test for collisions against one another. The final problem is that of creating and destroying game objects; I think this one is pretty self-explanatory.

    To store the game objects, I've considered a number of different methods. The original method was to simply store all the objects in a hash table by an id. This method is simple and decently fast, as it allows all the objects to be looked up in O(1) complexity, and also allows them to be deleted fairly easily. Hash collisions would not be a major problem; I wasn't originally planning on using computer-generated ids to store the game objects — I was going to rely on ids given to them by the game designer (names like 'Player' or 'EnemyWeapon4') — and even if I did use computer-generated ids, with a decent hashing algorithm the chance of a collision would be around 1 in 4 billion. The problem with a hash table, however, is that it is inefficient at checking which objects are in range of the viewport. Considering the fact that certain game objects move (as well as the viewport itself), the only solution I could think of in order to update only the objects in the viewport would be to iterate through every object in the hash table and check whether it is in the viewport or not, updating only the ones in the valid area. This would be incredibly slow in scenarios where the number of game objects exceeds 500, or even 200.

    The second solution was to store everything in a 2-d list. The world is partitioned into cells (a tilemap, essentially), where each cell or tile is square and the same size. Each cell would contain a list of the game objects currently occupying it (each game object is assigned to a cell based on the center of its collision mask). A 2-d list would allow me to take the top-left and bottom-right corners of the viewport and easily grab a rectangular area of the grid containing only the cells with entities in valid range to be updated. This method also solves the collision-detection problem: for any entity I can find the cell it is currently in, then check only against entities in that cell and the 8 cells around it. One problem with this system, however, is that it prohibits easy lookup of game objects. One solution would be to simultaneously keep a hash table containing the positions of the objects in the 2-d list, indexed by object id. The major problem with the 2-d list is that it would need to be rebuilt every single game frame (along with the hash table of object positions), which may be a serious detriment to game speed.

    Both systems have ups and downs, and each seems to solve some of the other's problems; however, using them both together doesn't seem like the best solution either. If anyone has any thoughts, ideas, suggestions, comments, opinions or solutions on new data structures or better implementations of the existing data structures I have in mind, please post; any and all criticism and help is welcome. Thanks in advance!

    EDIT: Please don't close the question because it has a bad title, I'm just bad with names!
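    One detail worth noting on the 2-d list idea above: the grid doesn't have to be rebuilt every frame if moves are applied incrementally — an object only changes buckets when it crosses a cell boundary. A minimal Python 3 sketch of that idea (names are illustrative, not from any particular engine):

        class SpatialGrid:
            """Uniform grid: cell -> set of object ids, plus object id -> cell."""

            def __init__(self, cell_size):
                self.cell_size = cell_size
                self.cells = {}        # (cx, cy) -> set of ids
                self.cell_of_obj = {}  # id -> (cx, cy)

            def _cell(self, x, y):
                return (int(x // self.cell_size), int(y // self.cell_size))

            def move(self, obj_id, x, y):
                """Insert obj_id or update its cell; cheap when the cell is unchanged."""
                new_cell = self._cell(x, y)
                old_cell = self.cell_of_obj.get(obj_id)
                if old_cell == new_cell:
                    return
                if old_cell is not None:
                    self.cells[old_cell].discard(obj_id)
                self.cells.setdefault(new_cell, set()).add(obj_id)
                self.cell_of_obj[obj_id] = new_cell

            def remove(self, obj_id):
                cell = self.cell_of_obj.pop(obj_id, None)
                if cell is not None:
                    self.cells[cell].discard(obj_id)

            def in_rect(self, left, top, right, bottom):
                """Ids whose cells overlap a rectangle (e.g. the viewport)."""
                cx0, cy0 = self._cell(left, top)
                cx1, cy1 = self._cell(right, bottom)
                found = set()
                for cx in range(cx0, cx1 + 1):
                    for cy in range(cy0, cy1 + 1):
                        found |= self.cells.get((cx, cy), set())
                return found

            def near(self, obj_id):
                """Ids in the object's cell and the 8 neighbours, for collision tests."""
                cx, cy = self.cell_of_obj[obj_id]
                found = set()
                for dx in (-1, 0, 1):
                    for dy in (-1, 0, 1):
                        found |= self.cells.get((cx + dx, cy + dy), set())
                found.discard(obj_id)
                return found

    Kept alongside the id -> object hash table described earlier, this gives O(1) lookup by id and cheap viewport/neighbour queries without a per-frame rebuild.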

    Read the article

  • AppHarbor - Azure Done Right AKA Heroku for .NET

    - by Robz / Fervent Coder
    Easy and instant deployments and instant scale for .NET? A while back a few of us were looking at RubyGems as the answer to package management for .NET. The gems platform supported the concept of DLLs as packages, although some changes would have needed to happen for long-term use by the entire community. From that we formed a partnership with some folks at Microsoft to make v2 into something that would see wider adoption across the community, which people now call NuGet. So now we have the concept of package management. What comes next?

    Heroku

    Instant deployments and instant scaling. Stupid simple API. This is Heroku. It doesn't sound like much, but when you think of how fast you can go from an idea to having someone else tinker with it, you start to see its power. In literally seconds you can be looking at your Rails application deployed and online. Then when you are ready to scale, you can do that. This is power. Some may call this "cloud computing" or PaaS (Platform as a Service). I first ran into Heroku back in July when I met Nick of RubyGems.org. At the time there was no alternative in the .NET-o-sphere. I don't count Windows Azure, mostly because it is not simple and I don't believe there is a free version. Heroku itself would not lend itself well to .NET due to the nature of platforms and each language's specific needs (solution stack). So I tucked the idea in the back of my head and moved on.

    AppHarbor Enters The Scene

    I'm not sure when I first heard about AppHarbor as a possible .NET version of Heroku. It may have been in November, but I didn't actually try it until January. I was instantly hooked. AppHarbor is awesome! It still has a ways to go to be considered Heroku for .NET, but it already has a growing community. I created a video series (at the bottom of this post) that really highlights how fast you can get a product onto the web and really shows the power and simplicity of AppHarbor. Deploying is as simple as a git/hg push to AppHarbor. From there they build your code, run any unit tests you have, and deploy it if everything succeeds. The screen on the right shows a simple and elegant UI for getting things done.

    The folks at AppHarbor graciously gave me a limited number of invites to hand out. If you are itching to try AppHarbor then navigate to: https://appharbor.com/account/new?inviteCode=ferventcoder After playing with it, send feedback if you want more features. Go vote up two features I want that will make it more like Heroku.

    Disclaimer: I am in no way affiliated with AppHarbor and have not received any funds or favors from anyone at AppHarbor. I just think it is awesome and I want others to know about it.

    From Zero To Deployed in 15 Minutes (Or Less)

    Now I have a challenge for you. I created a video series showing how fast I could go from nothing to a deployed application. It could have been from zero to deployed in less than 5 minutes, but I wanted to show you the tools a little more and give you an opportunity to beat my time. And that's the challenge: beat my time and show it in a video response. The video series is below (at least one of the videos has to be watched on YouTube). The person with the best time by March 15th @ 11:59PM CST will receive a prize.

    Ground rules:

    - .NET application with a valid database connection
    - Start from zero
    - Deployed with AppHarbor or an alternative
    - A timer displayed in the video that runs during the entire process
    - Video response published on YouTube or an acceptable alternative
    - Video(s) must be published by March 15th at 11:59PM CST. Either post the link here as a comment or on YouTube as a response (also by 11:59PM CST March 15th)

    From Zero To Deployed In 15 Minutes (Or Less) Part 1
    From Zero To Deployed In 15 Minutes (Or Less) Part 2
    From Zero To Deployed In 15 Minutes (Or Less) Part 3
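    For reference, the push-to-deploy flow mentioned above boils down to a handful of git commands — a sketch, assuming the remote URL comes from your AppHarbor application page:

        # one-time setup; the URL is whatever AppHarbor shows for your application
        git remote add appharbor <your-appharbor-repository-url>

        # each deploy: commit and push; AppHarbor builds, tests, and deploys
        git add -A
        git commit -m "Deploy"
        git push appharbor master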

    Read the article

  • yield – Just yet another sexy c# keyword?

    - by George Mamaladze
    yield (see the MSDN C# reference) came, I guess, with .NET 2.0, and my feeling is that it's not as widely used as it could (or should) be. I am not going to talk here about the necessity and advantages of using the iterator pattern when accessing custom sequences (just google it). Let's look at it from the clean-code point of view, and see if it really helps us keep our code understandable, reusable and testable.

    Let's say we want to iterate a tree and do something with its nodes, for instance calculate the sum of their values. The compact way would be a recursive method performing a classic depth traversal and returning the sum:

        private int CalculateTreeSum(Node top)
        {
            int sumOfChildNodes = 0;
            foreach (Node childNode in top.ChildNodes)
            {
                sumOfChildNodes += CalculateTreeSum(childNode);
            }
            return top.Value + sumOfChildNodes;
        }

    "Do One Thing"

    Nevertheless, it violates one of the most important rules: "Do One Thing". Our method CalculateTreeSum does two things at the same time: it traverses the tree and it performs a computation – in this case, calculating the sum. Doing two things in one method is definitely a bad thing, for several reasons:

    - Understandability: readability / refactoring.
    - Reusability: when overriding, there is no way to override the computation without copying the iteration code, and vice versa.
    - Testability: you cannot test the computation without constructing the tree, and you cannot test the correctness of the tree iteration on its own.

    I want to spend some more words on this last issue. How do you test the method CalculateTreeSum when it contains two in one: computation and iteration? The only chance is to construct a test tree and assert the result of the method call – in our case, the sum – against your expectation. And if the test fails, you do not know whether the computation algorithm was wrong or the iteration. On top of that, I tell you: according to Murphy's Law, the iteration will have a bug, as well as the calculation. Both bugs in combination will cause the sum to be accidentally exactly the one you expect, and the test will PASS. :)

    Ok, let's use yield!

    That's why it is generally a very good idea not to mix but to isolate "things". Ok, let's use yield!

        private int CalculateTreeSumClean(Node top)
        {
            IEnumerable<Node> treeNodes = GetTreeNodes(top);
            return CalculateSum(treeNodes);
        }

        private int CalculateSum(IEnumerable<Node> nodes)
        {
            int sumOfNodes = 0;
            foreach (Node node in nodes)
            {
                sumOfNodes += node.Value;
            }
            return sumOfNodes;
        }

        private IEnumerable<Node> GetTreeNodes(Node top)
        {
            yield return top;
            foreach (Node childNode in top.ChildNodes)
            {
                foreach (Node currentNode in GetTreeNodes(childNode))
                {
                    yield return currentNode;
                }
            }
        }

    The two methods don't know anything about each other. One contains the calculation logic, the other just the iteration logic. You can replace the tree iteration algorithm — switch from depth traversal to breadth traversal, or use a stack or the visitor pattern instead of recursion — and this will not influence your calculation logic. And vice versa: you can replace the sum with a product or do whatever you want with the node values; the calculation algorithm is not aware of working on some tree or graph.

    How about not using yield?

    Now let's ask the question: what if we did not have the yield operator? A brief look at the generated code gives us the answer. The compiler generates a roughly 150-line class to implement the iteration logic:

        [CompilerGenerated]
        private sealed class <GetTreeNodes>d__0 : IEnumerable<Node>, IEnumerable, IEnumerator<Node>, IEnumerator, IDisposable
        {
            ...
            150 lines of generated code
            ...
        }

    Often we compromise code readability, cleanness, testability, etc. to reduce the number of classes, code lines, keystrokes and mouse clicks. This is human nature – we are lazy. Knowing and using a sexy construct like yield allows us to be lazy, write very few lines of code, and at the same time stay clean and do one thing per method. That's why I generally welcome using stuff like that.

    Note: The recursive depth traversal used above is possibly the most compact algorithm, but not the best one from the performance and memory-utilization point of view. It was chosen to emphasize the other, primary aspects of this post.
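    For the performance-minded, the note above hints at the alternative: an iterative traversal avoids nesting one compiler-generated iterator inside another at every recursion level. A sketch of the stack-based variant the post mentions (same Node type as above):

        private IEnumerable<Node> GetTreeNodesIterative(Node top)
        {
            var stack = new Stack<Node>();
            stack.Push(top);
            while (stack.Count > 0)
            {
                Node current = stack.Pop();
                yield return current; // single iterator, no nested state machines
                foreach (Node child in current.ChildNodes)
                {
                    stack.Push(child);
                }
            }
        }

    The calculation code is untouched — CalculateSum() happily consumes either implementation.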

    Read the article

  • EM CLI, diving in and beyond!

    - by Maureen Byrne
    Doing more in less time… isn't that what we all strive to do? With this in mind, I put together two screen watches on the Oracle Enterprise Manager 12c command line interface, or EM CLI as it is also known. There is a wealth of information on any topic that you choose to read about, from manual pages to coding documents… might I even say blog posts? In our busy lives it is so nice to just sit back with a short video, watch, and learn enough to dive in.

    Doing more in less time is the essence of EM CLI. It enables you to script fundamental and complex administrative tasks in an elegant way, thanks to the Jython scripting language. Repetitive tasks can be scripted and reused again and again. Sure, a graphical user interface provides a more intuitive step-by-step approach to tasks, and it provides a way of quickly becoming familiar with a product and its many features; it is definitely the way to go when viewing performance data and historical trending… but for repetitive and complex tasks, scripting is the way to go!

    Let us take the everyday task of creating an administrator. Using EM CLI in interactive mode, the command could look like this:

        emcli> create_user(name='jan.doe', type='EXTERNAL_USER')

    This command creates an administrator called jan.doe which is an externally authenticated user, possibly LDAP or SSO, defined by the EXTERNAL_USER tag. The create_user procedure takes many arguments; see the documentation for more information.

    Now, here is where EM CLI really shines and shows its power: creating multiple users. Regardless of the number, tens or thousands, the effort is the same. With a standard programming construct, a loop, you can place your create_user() procedure inside it. Using a loop allows you to iterate through a previously created list, creating new users until the list is complete. In EM CLI script mode, your Jython loop would look something like this:

        for user in list_of_users:
            create_user(name=user, expire='true', password='welcome123')

    This Jython snippet iterates through a previously defined list of names, list_of_users, taking each name (user, in this case) and creating an administrator with the password welcome123, forcing the user to reset it when they first log in. This is only one of over four hundred procedures created to expose Oracle Enterprise Manager 12c functionality in a powerful and programmatic way.

    It is a few months since we released EM CLI with the scripting option, and we are seeing many users adapt to this fun and powerful way of using Oracle Enterprise Manager 12c. What are the first steps? Watch these screen watches, and dive in. The first screen watch steps you through where and how to download and install, and how to run your first few commands. The second steps you through a few scripts. Next time, I am going to show you the basic building blocks for writing a Jython script to perform Oracle Enterprise Manager 12c administrative tasks. Join this growing group of EM CLI users…. Dive in!
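    As a concrete illustration of the loop above, list_of_users could be read from a plain text file — a sketch, where users.txt (one user name per line) is a hypothetical input file and create_user() is the EM CLI procedure shown earlier:

        # Hypothetical: build list_of_users from a file, then create each administrator.
        f = open('users.txt')
        try:
            list_of_users = [line.strip() for line in f if line.strip()]
        finally:
            f.close()

        for user in list_of_users:
            create_user(name=user, expire='true', password='welcome123')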

    Read the article

  • ADF version of "Modern" dialog windows

    - by Martin Deh
    It is no surprise, given the popularity of the i-devices (iPhone, iPad), that much of the iOS UI-based LnF (look and feel) has begun to inspire web designers to incorporate the same LnF into their web sites. Take, for example, a normal dialog popup. In the iOS world, the LnF becomes a bit more elegant by adding an element as simple as a "floating" close button. In this blog post, I will describe how this can be accomplished using OOTB ADF components and CSS3 style elements.

    There are two ways this can be achieved. The easiest is to simply replace the default close image and adjust the af|panelWindow::close-icon-style skin selector. The CSS code to produce this effect is pretty straightforward:

        af|panelWindow.test::close-icon-style {
            background-image: url("../popClose.gif");
            line-height: 10px;
            position: absolute;
            right: -10px;
            top: -10px;
            height: 38px;
            width: 38px;
            outline: none;
        }

    You can see from the CSS that the region holding the image is relocated via the position-based attributes. The "outline" attribute removes the border that would otherwise be visible in Chrome and IE.

    The second approach does not rely on an image to produce the close button. Like the previous sample, I will use the OOTB panelWindow; however, this time I will use an OOTB commandButton to replace the image. The commandButton is positioned first in the component hierarchy, making the re-positioning easier. The commandButton also needs a style class assigned to it (i.e. closeButton), which allows for the positioning and for overriding the default skin attributes of a regular button. In addition, the closeIconVisible property is set to false, since the default icon is no longer needed. Once this is done, the rest is in the CSS. Here is the sample I created, which was used for an actual customer POC:

        af|commandButton.closeButton,
        af|commandButton.closeButton af|commandButton:text-only {
            line-height: 10px;
            position: absolute;
            right: -10px;
            top: -10px;
            -webkit-border-radius: 70px;
            -moz-border-radius: 70px;
            -ms-border-radius: 70px;
            border-radius: 70px;
            background-image: none;
            border: #828c95 1px solid;
            background-color: black;
            font-weight: bold;
            text-align: center;
            text-decoration: none;
            color: white;
            height: 30px;
            width: 30px;
            outline: none;
        }

    The CSS uses border-radius to create the round effect on the button (in IE 8, since border-radius is not supported, this only works with some added code). I also add the box-shadow attribute to the panelWindow style class to give it a nice shadowing effect.
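    The post mentions the box-shadow on the panelWindow but doesn't show the rule; a sketch of what it might look like (the offsets and color are assumptions, not the POC's actual values):

        af|panelWindow.test {
            -webkit-box-shadow: 0 4px 12px rgba(0, 0, 0, 0.4);
            -moz-box-shadow: 0 4px 12px rgba(0, 0, 0, 0.4);
            box-shadow: 0 4px 12px rgba(0, 0, 0, 0.4);
        }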

    Read the article

  • Rebuilding CoasterBuzz, Part II: Hot data objects

    - by Jeff
    This is the second post, originally from my personal blog, in a series about rebuilding one of my Web sites, which has been around for 12 years. More: Part I: Evolution, and death to WCF

    After the rush to get moving on stuff, I temporarily lost interest. I went almost two weeks without touching the project, in part because the next thing on my backlog was doing up a bunch of administrative pages. So boring. Unfortunately, because most of the site's content is user-generated, you need some facilities for editing data. CoasterBuzz has a database full of amusement parks and roller coasters. The entities enjoy the relationships you would expect, though they're further defined by "instances" of a coaster, so that one that has moved between parks can be treated as a single coaster with different names and operational dates. And of course, there are pictures and news items, too. It's not horribly complex, except when you have to account for a name change and display just the newest name.

    In all previous versions, data access was straight SQL. As so much of the old code was rooted in 2003, with some changes in 2008, there wasn't much in the way of ORM frameworks going on then. Let me rephrase that: I mostly wasn't interested in ORM's. Since that time, I used a little LINQ to SQL in some projects, and a whole bunch of nHibernate while at Microsoft. Through all of that experience, I have to admit that these frameworks are often a bigger pain in the ass than not. They're great for basic CRUD operations, but when you start having all kinds of exotic relationships, they get difficult, and generate all kinds of weird SQL under the covers. The black box can quickly turn into a black hole. Sometimes you end up having to build all kinds of new expertise to do things "right" with a framework.

    Still, despite my reservations, I used the newer version of Entity Framework, with the "code first" modeling, in a science project, and I really liked it. Since it's just a right-click away with NuGet, I figured I'd give it a shot here. My initial effort was spent defining the context class, which requires a bit of work because I deviate quite a bit from the conventions that EF uses, starting with table names. Then throw in some partial querying of certain tables (where you'll find image data), and you're splitting tables across several objects (navigation properties). I won't go into the details, because these are all things that are well documented around the Internet, but there was a minor learning curve there.

    The basics of reading data using EF are fantastic. For example, a roller coaster object has a park associated with it, as well as a number of instances (if it was ever relocated), and there also might be a big banner image for it. This is stupid easy to use, because it takes one line of code in your repository class, and by the time you pass it to the view, you have a rich object graph that has everything you need to display stuff. Likewise, editing simple data is also, well, simple. For this goodness, thank the ASP.NET MVC framework. The UpdateModel() method on the controllers is very elegant. Remember the old days of assigning all kinds of properties to objects in your Webforms code-behind? What a time-consuming mess that used to be. Even if you're not using an ORM tool, having hydrated objects come off the wire is such a time saver.

    Not everything is easy, though. When you have to persist a complex graph of objects, particularly if they were composed in the user interface with all kinds of AJAX elements and list boxes, it's not just a simple matter of submitting the form. There were a few instances where I ended up going back to "old-fashioned" SQL, just in the interest of time. It's not that I couldn't do what I needed with EF; it's just that the efficiency, both my own and that of the generated SQL, wasn't good. Since EF context objects expose a database connection object, you can use that to do the old-school ADO.NET stuff you've done for a decade. Using various extension methods from POP Forums' data project, it was a breeze. You just have to stick to your decision, in this case. When you start messing with SQL directly, you can't go back in the same code to messing with entities, because EF doesn't know what you're changing. Not really a big deal.

    There are a number of take-aways from using EF. The first is that you write a lot less code, which has always been a desired outcome of ORM's. The other lesson, and I particularly learned this the hard way working on the MSDN forums back in the day, is that trying to retrofit an ORM framework onto an existing schema isn't fun at all. The CoasterBuzz database isn't bad, but there are design decisions I'd make differently if I were starting from scratch.

    Now that I have some of this stuff done, I feel like I can start to move on to the more interesting things on the backlog. There's a lot to do, but at least it's fun stuff, and not more forms that will be used infrequently.
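    To make the "one line of code in your repository class" concrete, a read like the one described might look as follows — a sketch only; the entity and property names here are hypothetical, not CoasterBuzz's actual schema:

        // Hypothetical repository read: pull a coaster with its park,
        // instances, and banner image in one EF query.
        public RollerCoaster GetCoaster(int coasterId)
        {
            return _context.RollerCoasters
                .Include("Park")
                .Include("Instances")
                .Include("BannerImage")
                .SingleOrDefault(c => c.CoasterId == coasterId);
        }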

    Read the article

  • LINQ and ArcObjects

    - by Marko Apfel
    Motivation

    LINQ (language integrated query) has been a component of the Microsoft .NET Framework since version 3.5. It allows SQL-like queries against various data sources such as SQL, XML, etc. Like SQL, LINQ provides a declarative notation for problem solving – i.e. you don't need to describe in detail how a task is to be solved; you describe what is to be solved. This frees the developer from error-prone iterator constructs.

    Ideally, of course, we would access features this way. Then this construct would be conceivable:

        var largeFeatures =
            from feature in features
            where (feature.GetValue("SHAPE_Area").ToDouble() > 3000)
            select feature;

    or its equivalent as a lambda expression:

        var largeFeatures = features.Where(feature =>
            (feature.GetValue("SHAPE_Area").ToDouble() > 3000));

    This requires an appropriate provider, which manages the corresponding iterator logic. That is easier than you might think at first sight: you only have to deliver the desired entities as IEnumerable<IFeature>. LINQ automatically establishes a state machine in the background whose execution is delayed (deferred execution) – only when you actually request entities (foreach, Count(), ToList(), ...) does the processing take place, although it was set up at a completely different place. Especially with multiple iterations over the entities, in your first debugging sessions you will rub your eyes when the execution pointer magically jumps back into the iterator logic.

    Realization

    A very concise way to construct an IEnumerable<IFeature> is to run through an IFeatureCursor and return each feature via yield. For easier usage I have put the logic in an extension method GetFeatures() for IFeatureClass:

        public static IEnumerable<IFeature> GetFeatures(this IFeatureClass featureClass,
            IQueryFilter queryFilter, RecyclingPolicy policy)
        {
            IFeatureCursor featureCursor =
                featureClass.Search(queryFilter, RecyclingPolicy.Recycle == policy);
            IFeature feature;
            while (null != (feature = featureCursor.NextFeature()))
            {
                yield return feature;
            }

            // this is skipped in unit tests with a cursor mock
            if (Marshal.IsComObject(featureCursor))
            {
                Marshal.ReleaseComObject(featureCursor);
            }
        }

    So you can now easily generate the IEnumerable<IFeature>:

        IEnumerable<IFeature> features =
            _featureClass.GetFeatures(null, RecyclingPolicy.DoNotRecycle);

    You have to be careful with the recycling cursor. After a deferred execution in the same context, it is not a good idea to iterate over the features a second time. In that case only the content of the last (recycled) feature is provided, and all the features in the second pass are the same. Therefore, this expression would be critical:

        largeFeatures.ToList().ForEach(feature => Debug.WriteLine(feature.OID));

    because ToList() iterates once through the list, so the cursor has already been moved through the features, and the extension method ForEach() then always delivers the same feature. In such situations you must not use a recycling cursor. Repeated executions of ForEach() are not a problem, however, because each time the state machine is re-instantiated and the cursor runs again – that's the magic already mentioned above.

    Perspective

    You can also go one step further and write your own implementation of the interface IEnumerable<IFeature>. This requires only the method and property for accessing the enumerator to be programmed. In the enumerator itself, the Reset() method re-executes the search. This can be achieved with an appropriate delegate in the constructor:

        new FeatureEnumerator<IFeatureClass>(
            _featureClass,
            featureClass => featureClass.Search(_filter, isRecyclingCursor));

    which is called in Reset():

        public void Reset()
        {
            _featureCursor = _resetCursor(_t);
        }

    In this manner, enumerators for completely different scenarios can be implemented and used identically on the client side, as described above. Thus cursors, selection sets, etc. merge into a single concept, and the reusability of code increases immensely. On top of that, in automated unit tests an IEnumerable can be mocked very easily – a major step towards better software quality.

    Conclusion

    Nevertheless, caution should be exercised with these constructs in performance-relevant queries. Because of the state machine managed in the background, a lot of overhead is created, and the processing costs additional time – about 20 to 100 percent. In addition, working without a recycling cursor can quickly become a performance gap. However, declarative LINQ code is much more elegant, less error-prone and easier to maintain than manually iterating, comparing and building a list of results. In my experience, the code size is reduced by an average of 75 to 90 percent! So I am happy to wait a few milliseconds longer. As so often, it has to be balanced between maintainability and performance – and for me, maintainability is gaining in priority. In times of multi-core processors, the processing time of most business processes is not dominated by code execution anyway, but by waiting for user input.

    Demo source code

    The source code for this prototype, with several unit tests, can be downloaded here: https://github.com/esride-apf/Linq2ArcObjects
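    Putting the pieces together, the introduction's query can be run directly against the extension method — a usage sketch, where GetValue()/ToDouble() are the post's own helper extensions and the null filter selects all features:

        // Deferred: nothing runs until the foreach below pulls features.
        var largeFeatures = _featureClass
            .GetFeatures(null, RecyclingPolicy.DoNotRecycle)
            .Where(feature => feature.GetValue("SHAPE_Area").ToDouble() > 3000);

        foreach (var feature in largeFeatures)
        {
            Debug.WriteLine(feature.OID);
        }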

    Read the article

  • Trac vs. Redmine vs. JIRA vs. FogBugz for one-man shop?

    - by kizzx2
    Background

    I am a one-man freelancer looking for project management software that can meet the following requirements. I have used Trac for about a year now, tried Redmine and FogBugz on Demand for a couple of weeks, and have never tried JIRA. Basically, I'm looking for a piece of software that:

    - Facilitates developer-client communication/collaboration
    - Does time tracking

    Requirements

    - Records time estimates / time tracking
    - Clients must be able to create/edit their own tickets/cases
    - Clients must not see developer-created (internal) tickets/cases
    - Affordable (price) with multiple clients

    Nice-to-haves

    - Supports multiple projects in one installation
    - Free Eclipse integration (Mylyn)
    - Easy time tracking without using the Web UI (Trac's post-commit hook or Redmine's commit-message scanning)
    - Clients can access the wiki
    - Exports data to standard formats

    My evaluation

    Trac can fulfill most of the above requirements, but only with lots of customizations and plug-ins, to the point that it doesn't feel clean. One downside is that the main trunk (0.11) has been around for a year or more, and I still haven't seen much sign of an upgrade coming.

    Redmine has the cleanest Web UI. Its design philosophy seems the most elegant, with its innovative commit-message scanning and the like. However, the current version doesn't seem very mature or stable yet. It doesn't support internal (private) tickets, and the time-tracking commit-message patch doesn't support the trunk version. The good thing is that the main trunk still seems actively developed.

    FogBugz is a very well-written piece of software. However, the idea of paying $25/month for each client to be able to log in to the system seems a little too far off for an individual developer. The free version lets clients create/view their own cases by email, which is a sub-optimal alternative to a full-fledged list of the user's own cases. It also means clients can't read/write wiki pages. Its time-tracking approach is innovative and good, though. All the Eclipse integrations (Bugclipse, Foglyn) are commercial — yet more investment before I can use my bug tracker! And if I revert to the Web UI, it's not a particularly fast-rendering web service. On the other hand, the built-in report functions are excellent (e.g. evidence-based scheduling).

    JIRA is something I have zero experience with. Can someone with JIRA experience recommend why it might be a good fit for this particular situation?

    Question

    Can we share experience on this? Which specific plugins/customizations would best suit the requirements of this case?

    Read the article

  • [C#] Convert string to double with 2 digits after the decimal separator

    - by st.stoqnov
    It all began with these simple lines of code:

        string s = "16.9";
        double d = Convert.ToDouble(s);
        d *= 100;

    The result should be 1690.0, but it isn't: d equals 1689.9999999999998. All I want to do is round a double to a value with 2 digits after the decimal separator. Here is my function:

        private double RoundFloat(double Value)
        {
            float sign = (Value < 0) ? -0.01f : 0.01f;
            if (Math.Abs(Value) < 0.00001) Value = 0;

            string SVal = Value.ToString();
            string DecimalSeparator = System.Globalization.CultureInfo.CurrentCulture.NumberFormat.CurrencyDecimalSeparator;
            int i = SVal.IndexOf(DecimalSeparator);
            if (i > 0)
            {
                int SRnd;
                try
                {
                    // Take the third digit after the decimal separator to decide the rounding direction.
                    SRnd = Convert.ToInt32(SVal.Substring(i + 3, 1));
                }
                catch
                {
                    SRnd = 0;
                }
                if (SVal.Length > i + 3) SVal = SVal.Substring(0, i + 3);
                //SVal += "00001";
                try
                {
                    double result = (SRnd >= 5) ? Convert.ToDouble(SVal) + sign : Convert.ToDouble(SVal);
                    //result = Math.Round(result, 2);
                    return result;
                }
                catch
                {
                    return 0;
                }
            }
            else
            {
                return Value;
            }
        }

    But again the same problem: converting from string to double does not work the way I want. A workaround is to concatenate "00001" to the string and then use the Math.Round function (commented out in the example above). This double value, multiplied by 100 (as an integer), is sent to a device (a cash register), and these values must be correct. I am using VS2005 + .NET CF 2.0. Is there another, more "elegant" solution? I am not happy with this one.
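    A commonly suggested alternative for this kind of device-bound arithmetic is to avoid double entirely and parse into decimal, which can represent 16.9 exactly. A minimal sketch, assuming the Compact Framework profile in use includes these overloads:

        // Parse with the invariant culture into decimal, which stores 16.9 exactly,
        // then round to 2 places and scale to the integer the cash register expects.
        string s = "16.9";
        decimal d = decimal.Parse(s, System.Globalization.CultureInfo.InvariantCulture);
        decimal rounded = Math.Round(d, 2);       // 16.90 (banker's rounding by default)
        int deviceValue = (int)(rounded * 100);   // 1690, exact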

    Read the article

  • Convert IEnumerable to EntitySet

    - by Gregorius
    Hey all, hoping somebody can shed some light, and perhaps a possible solution, on this issue I'm having. I have used LINQ to SQL to pull some data from a database into local entities. They are products from a shopping cart system. A product can contain a collection of KitGroups (which are stored in a System.Data.Linq.EntitySet). KitGroups contain collections of KitItems, and KitItems can contain nested Products (which link back up to the original Product type - so it's recursive). From these entities I'm building XML using LINQ to XML - all good here - my XML looks beautiful, calling a "GenerateProductElement" function which calls itself recursively to generate the nested products. Wonderful stuff. However, here's where I'm stuck: I'm now trying to deserialize that XML back into the original objects (all autogenerated by LINQ to SQL), and herein lies the problem. LINQ to SQL expects my collections to be EntitySet collections, but LINQ to XML (which I'm trying to use to deserialize) returns IEnumerable. I've experimented with a few ways of casting between the two, but nothing seems to work. I'm starting to think I should just deserialize manually (with some funky loops and conditionals to determine which KitGroup the KitItems belong to, etc.), but that's really quite tricky, and the code is likely to be quite ugly, so I'd love to find a more elegant solution to this problem. Any suggestions? Here's a code snippet:

        private Product GenerateProductFromXML(XDocument inDoc)
        {
            var prod = from p in inDoc.Descendants("Product")
                       select new Product
                       {
                           ProductID = (int)p.Attribute("ID"),
                           ProductGUID = (Guid)p.Attribute("GUID"),
                           Name = (string)p.Element("Name"),
                           Summary = (string)p.Element("Summary"),
                           Description = (string)p.Element("Description"),
                           SEName = (string)p.Element("SEName"),
                           SETitle = (string)p.Element("SETitle"),
                           XmlPackage = (string)p.Element("XmlPackage"),
                           IsAKit = (byte)(int)p.Element("IsAKit"),
                           ExtensionData = (string)p.Element("ExtensionData"),
                       };

            // TODO: UUGGGGGGG Converting b/w IEnumerable & EntitySet
            var kitGroups = from kg in inDoc.Descendants("KitGroups").Elements("KitGroup")
                            select new KitGroup
                            {
                                KitGroupID = (int)kg.Attribute("ID"),
                                KitGroupGUID = (Guid)kg.Attribute("GUID"),
                                Name = (string)kg.Element("Name"),
                                // THIS IS WHERE IT FAILS:
                                // "Cannot convert source type IEnumerable to target type EntitySet..."
                                KitItems = from ki in kg.Descendants("KitItems").Elements("KitItem")
                                           select new KitItem
                                           {
                                               KitItemID = (int)ki.Attribute("ID"),
                                               KitItemGUID = (Guid)ki.Attribute("GUID")
                                           }
                            };

            Product ImportedProduct = prod.First();
            ImportedProduct.KitGroups = new EntitySet<KitGroup>();
            ImportedProduct.KitGroups.AddRange(kitGroups);
            return ImportedProduct;
        }
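    One way to smooth over the mismatch is a small extension method that materializes any IEnumerable<T> into an EntitySet<T>. A minimal sketch - the helper name is mine, not part of LINQ to SQL, but EntitySet<T>.AddRange is a real method:

        using System.Collections.Generic;
        using System.Data.Linq;

        public static class EntitySetExtensions
        {
            // Copies any sequence into a LINQ to SQL EntitySet<T> so it can be
            // assigned to a generated association property.
            public static EntitySet<T> ToEntitySet<T>(this IEnumerable<T> source) where T : class
            {
                var set = new EntitySet<T>();
                set.AddRange(source);   // EntitySet<T> exposes AddRange(IEnumerable<T>)
                return set;
            }
        }

    With that in scope, the failing assignment becomes KitItems = (from ki in ... select new KitItem { ... }).ToEntitySet(), mirroring what the method already does for ImportedProduct.KitGroups at the end.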

    Read the article

  • ASP.NET DataList - defining "columns/rows" when repeating horizontal and using flow layout

    - by Ian Robinson
    Here is my DataList:

        <asp:DataList id="DataList" Visible="false" RepeatDirection="Horizontal"
            Width="100%" HorizontalAlign="Justify" RepeatLayout="Flow" runat="server">
            [Contents Removed]
        </asp:DataList>

    This generates markup that has each item wrapped in a span. From there, I'd like to break each of these spans out into rows of three columns. Ideally I would like something like this:

        <div>
            <span>Item 1</span>
            <span>Item 2</span>
            <span>Item 3</span>
        </div>
        <div>
            <span>Item 4</span>
            <span>Item 5</span>
            <span>Item 6</span>
        </div>
        [etc]

    The closest I can get to this is to set RepeatColumns to "3", and then a <br> is inserted after every three items in the DataList:

        <span>Item 1</span>
        <span>Item 2</span>
        <span>Item 3</span>
        <br>
        <span>Item 4</span>
        <span>Item 5</span>
        <span>Item 6</span>
        <br>

    This gets me kind of close, but really doesn't do the trick - I still can't control the layout the way I'd like to be able to. Can anyone suggest a way to make this better? If I could implement the above example, that would be perfect; however, I'd accept a less elegant solution as well, as long as it's more flexible than <br> (such as inserting a <span class="clear"></span> instead of <br>).
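    If moving off DataList is an option, the ListView control introduced in ASP.NET 3.5 was designed for exactly this kind of grouped output. A minimal sketch - the control ID and the bound field name "Name" are illustrative, while groupPlaceholder and itemPlaceholder are the IDs ListView requires:

        <asp:ListView ID="ItemsListView" runat="server" GroupItemCount="3">
            <LayoutTemplate>
                <%-- ListView emits one GroupTemplate per three items here --%>
                <asp:PlaceHolder ID="groupPlaceholder" runat="server" />
            </LayoutTemplate>
            <GroupTemplate>
                <div>
                    <asp:PlaceHolder ID="itemPlaceholder" runat="server" />
                </div>
            </GroupTemplate>
            <ItemTemplate>
                <span><%# Eval("Name") %></span>
            </ItemTemplate>
        </asp:ListView>

    This yields the div-per-three-spans structure directly, with no <br> hack.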

    Read the article

  • Have Button re-appear immediately after clicking button in ListView row

    - by Soeren
    I have 4 buttons on a page. Each button opens a modal window and lets the user input data in a form. When the user hits the save button in the modal, a ListView appears on the page with the submitted data. The button the user clicked to open the modal window is set to Visible="false", so it's gone once the row is added to the ListView. Now there are 3 buttons, and the same goes for those: when the user hits a button, a modal appears, and when the modal form is submitted, the button disappears and a row is added to the ListView. In the ListView row there is a delete button. When this button is clicked, the row is deleted, and the button that was initially clicked to add this row (and open the modal) SHOULD reappear - but it doesn't. The row disappears, but I have to refresh the page before the button comes back. There is a ScriptManager on the master page, so I guess this is an AJAX partial-refresh issue. I tried adding different events, but I can't find the one that fires at the right time. I use an ObjectDataSource to fill the ListView, and the data comes from a database, wrapped in a business object. This code loads the business objects into a List<> and checks whether the user has inserted an item of a specific type. If he has, the button he used to open the modal is hidden. This works fine (maybe not the most elegant):

        _goals = GoalManager.GetGoalsByUser(UserID);
        if (_goals != null)
        {
            foreach (Goal _goalinlist in _goals)
            {
                if (_goalinlist.GoalType == 1) { Button1.Visible = false; goalid1 = true; }
                if (_goalinlist.GoalType == 2) { Button2.Visible = false; goalid2 = true; }
                if (_goalinlist.GoalType == 3) { Button3.Visible = false; goalid3 = true; }
                if (_goalinlist.GoalType == 4) { Button4.Visible = false; goalid4 = true; }
            }
        }

    As you can see, I tried setting a boolean and then checking it when the page is reloaded. But the problem (I guess) is that the whole page isn't refreshed when the delete button in the ListView is clicked. This is the delete button in the ListView:

        <asp:ImageButton ID="ImageButton2" runat="server" CommandName="Delete"
            CausesValidation="false" ToolTip="Delete" CommandArgument='<%#Eval("GoalID")%>'
            ImageUrl="delete.gif" OnClientClick="return confirm('Delete this post?');"
            CssClass="button"/>

    I guess the question is: how do I make the button reappear right after the ListView delete button is clicked?
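    One event that fires at the right moment is the ListView's ItemDeleted event, which runs after the ObjectDataSource has removed the row. A hedged sketch of wiring it up - the handler, UpdatePanel and RefreshGoalButtons names are illustrative; only GoalManager, Goal and the four buttons come from the question:

        // Attach via OnItemDeleted="GoalsListView_ItemDeleted" on the ListView.
        protected void GoalsListView_ItemDeleted(object sender, ListViewDeletedEventArgs e)
        {
            RefreshGoalButtons();
            // Assumes the buttons sit in their own UpdatePanel with UpdateMode="Conditional",
            // so they re-render as part of the same partial postback.
            ButtonsUpdatePanel.Update();
        }

        private void RefreshGoalButtons()
        {
            // Show everything, then hide buttons whose goal type still exists.
            Button1.Visible = Button2.Visible = Button3.Visible = Button4.Visible = true;
            var goals = GoalManager.GetGoalsByUser(UserID);
            if (goals == null) return;
            foreach (Goal goal in goals)
            {
                if (goal.GoalType == 1) Button1.Visible = false;
                if (goal.GoalType == 2) Button2.Visible = false;
                if (goal.GoalType == 3) Button3.Visible = false;
                if (goal.GoalType == 4) Button4.Visible = false;
            }
        }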

    Read the article

  • How to run a powershell script within a DOS batch file

    - by Don Vince
    How do I have a PowerShell script embedded within the same file as a DOS batch script? I know this kind of thing is possible in other scenarios:

    - Embedding SQL in a DOS batch script, using sqlcmd and a clever arrangement of gotos and comments at the beginning of the file
    - In a *nix environment, having the name of the program you wish to run the script with commented out on the first line of the script, e.g. #!/usr/local/bin/python

    There may not be a way to do this - in which case I will have to call a separate PowerShell script from the launching DOS script. One possible solution I've considered is to echo out the PowerShell script and then run it. A good reason not to do this is that part of the point of the exercise is to use the advantages of the PowerShell environment without the pain of, for example, DOS escape characters. I have some unusual constraints and would like to find an elegant solution. I suspect this question may be baiting responses of the variety "why don't you try and solve this different problem instead". Suffice to say these are my constraints, sorry about that. Any ideas? Is there a suitable combination of clever comments and escape characters that will enable me to achieve this? Some thoughts on how to achieve it:

    - A caret ^ at the end of a line in DOS is a continuation, like an underscore in VB
    - An ampersand & in DOS typically separates commands: echo Hello & echo World results in two echoes on separate lines
    - %0 will give you the script that's currently running

    So something like this (if I could make it work) would be good:

        # & call powershell -psconsolefile %0
        # & goto :EOF
        /* From here on in we're running nice juicy powershell code */
        Write-Output "Hello World"

    Except... it doesn't work, because the extension of the file isn't to PowerShell's liking:

        Windows PowerShell console file "insideout.bat" extension is not psc1.
        Windows PowerShell console file extension must be psc1.

    DOS isn't really altogether happy with the situation either, although it does stumble on:

        '#' is not recognized as an internal or external command,
        operable program or batch file.

    Read the article

  • Opinions on sensor / reading / alert database design

    - by Mark
    I've asked a few questions lately regarding database design, probably too many ;-) However, I believe I'm slowly getting to the heart of the matter with my design and am slowly boiling it down. I'm still wrestling with a couple of decisions regarding how "alerts" are stored in the database. In this system, an alert is an entity that must be acknowledged, acted upon, etc. Initially I related readings to alerts like this (very cut down):

        [Location]
            LocationId

        [Sensor]
            SensorId
            LocationId
            UpperLimitValue
            LowerLimitValue

        [SensorReading]
            SensorReadingId
            Value
            Status
            Timestamp

        [SensorAlert]
            SensorAlertId

        [SensorAlertReading]
            SensorAlertId
            SensorReadingId

    The last table associates readings with the alert, because it is the readings that dictate whether the sensor is in alert or not. The problem with this design is that it allows readings from many sensors to be associated with a single alert - whereas each alert is for a single sensor only and should only have readings for that sensor associated with it (should I be bothered that the DB allows this, though?). I thought, to simplify things, why even bother with the SensorAlertReading table? Instead I could do this:

        [Location]
            LocationId

        [Sensor]
            SensorId
            LocationId

        [SensorReading]
            SensorReadingId
            SensorId
            Value
            Status
            Timestamp

        [SensorAlert]
            SensorAlertId
            SensorId
            Timestamp

        [SensorAlertEnd]
            SensorAlertId
            Timestamp

    Basically I'm no longer associating readings with the alert - instead I just know that an alert was active between a start and end time for a particular sensor, and if I want to look up the readings for that alert, I can. Obviously the downside is that I no longer have any constraint stopping me from deleting readings that occurred during the alert, but I'm not sure that the constraint is necessary. Now, looking in from the outside as a developer / DBA, would that make you want to be sick, or does it seem reasonable? Is there perhaps another way of doing this that I may be missing? Thanks.

    EDIT: Here's another idea - it works in a different way. It stores each sensor state change, going from normal to alert, in a table, and then readings are simply associated with a particular state. This seems to solve all the problems - what d'ya think? (The only thing I'm not sure about is calling the table "SensorState"; I can't help thinking there's a better name (maybe SensorReadingGroup?)):

        [Location]
            LocationId

        [Sensor]
            SensorId
            LocationId

        [SensorState]
            SensorStateId
            SensorId
            Timestamp
            Status
            IsInAlert

        [SensorReading]
            SensorReadingId
            SensorStateId
            Value
            Timestamp

    There must be an elegant solution to this!

    Read the article

  • Is MVVM pointless?

    - by joebeazelman
    Is orthodox MVVM implementation pointless? I am creating a new application, and I considered Windows Forms and WPF. I chose WPF because it's future-proof and offers lots of flexibility: there is less code, and it's easier to make significant changes to your UI using XAML. Since the choice of WPF was obvious, I figured I might as well go all the way and use MVVM as my application architecture, since it offers blendability, separation of concerns and unit testability. Theoretically, it seems beautiful - like the holy grail of UI programming. This brief adventure, however, has turned into a real headache. As expected in practice, I'm finding that I've traded one problem for another. I tend to be an obsessive programmer, in that I want to do things the right way so that I can get the right results and possibly become a better programmer. The MVVM pattern just flunked my productivity test and turned into a big yucky hack! The clear case in point is adding support for a modal dialog box. The correct way is to put up a dialog box and tie it to a view model. Getting this to work is difficult. In order to benefit from the MVVM pattern, you have to distribute code in several places throughout the layers of your application. You also have to use esoteric programming constructs like templates and lambda expressions - stuff that makes you stare at the screen scratching your head. This makes maintenance and debugging a nightmare waiting to happen, as I recently discovered. I had an about box working fine until I got an exception the second time I invoked it, saying that the dialog box couldn't be shown again once it was closed. I had to add an event handler for the close functionality to the dialog window, another one in my IDialogView implementation, and finally another in IDialogViewModel. I thought MVVM would save us from such extravagant hackery! There are several folks out there with competing solutions to this problem, and they are all hacks that don't provide a clean, easily reusable, elegant solution. Most of the MVVM toolkits gloss over dialogs, and when they do address them, it's just alert boxes that don't require custom interfaces or view models. I'm planning on giving up on the MVVM pattern, at least its orthodox implementation. What do you think? Has it been worth the trouble for you, if you had any? Am I just an incompetent programmer, or is MVVM not what it's hyped up to be?
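    For what it's worth, the exception described above usually comes from WPF itself: a Window instance cannot be shown again after it has been closed. A hedged sketch of the common workaround is a small dialog service that creates a fresh Window per invocation, so the view model never references WPF types directly (all names here are illustrative, not from any particular toolkit):

        // Injected into view models so they can request a dialog without touching WPF.
        public interface IDialogService
        {
            bool? ShowDialog(object dialogViewModel);
        }

        public class WindowDialogService : IDialogService
        {
            public bool? ShowDialog(object dialogViewModel)
            {
                // A new Window per call sidesteps the "cannot reopen a closed Window" exception.
                var window = new System.Windows.Window
                {
                    Content = dialogViewModel,   // a DataTemplate maps the VM type to its view
                    Owner = System.Windows.Application.Current.MainWindow,
                    SizeToContent = System.Windows.SizeToContent.WidthAndHeight
                };
                return window.ShowDialog();
            }
        }

    The view model then calls _dialogService.ShowDialog(aboutViewModel) and stays fully unit-testable, since the interface can be mocked.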

    Read the article

  • Using LINQ to filter a ComboBox.DataSource?

    - by Pesche Helfer
    Hi board, in another topic I stumbled over this very elegant solution by Darin Dimitrov for filtering the DataSource of one ComboBox with the selection of another ComboBox (how to filter combobox in combobox using c#):

        combo2.DataSource = ((IEnumerable<string>)c.DataSource)
            .Where(x => x == (string)combo1.SelectedValue);

    I would like to do a similar thing, but instead of filtering by a second ComboBox, I would like to filter by the text of a TextBox. (Basically, instead of choosing from a second ComboBox, the user simply enters his filter into a TextBox.) However, it turned out to be not as straightforward as I had hoped. I tried stuff like the following, but failed miserably:

        cbWohndresse.DataSource = ((IEnumerable<DataSet>)ds)
            .Where(x => x.Tables["Adresse"].Select("AdrLabel LIKE '%TEST%'"));
        cbWohndresse.DisplayMember = "Adresse.AdrLabel";
        cbWohndresse.ValueMember = "Adresse.adress_id";

    ds is the DataSet which I would like to use as the filtered DataSource. "Adresse" is one DataTable in this DataSet, and it contains a DataColumn "AdrLabel". Now I would like to display only those "AdrLabel" values which contain the string from the user input. (Currently, %TEST% stands in for the TextBox.Text.) The above code fails because the lambda expression does not return bool. But I am sure there are other problems too (which type should I use for the IEnumerable? Now it's DataSet, but Darin used string - and how could I convert a DataSet to a string?). Yes, I am as newbyish as it gets; my experience is "void", and publicly so. So please forgive me my rather stupid questions. Your help is greatly appreciated, because I can't solve this on my own (I tried hard already). Thank you very much! Pesche

    P.S. I am only using LINQ to achieve an uncomplicated filter for the ComboBox (avoiding a View). The rest is not based on LINQ but on old-style ADO.NET (ds is filled by a SqlDataAdapter), if that's of any importance.
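    For a DataTable inside a DataSet, LINQ to DataSet is one way to keep the filter declarative: enumerate the rows rather than the DataSet itself. A minimal sketch, assuming a TextBox named txtFilter (a hypothetical name) and a reference to System.Data.DataSetExtensions for AsEnumerable()/CopyToDataTable():

        var matches = ds.Tables["Adresse"].AsEnumerable()
            .Where(r => r.Field<string>("AdrLabel") != null &&
                        r.Field<string>("AdrLabel").IndexOf(txtFilter.Text,
                            StringComparison.OrdinalIgnoreCase) >= 0);

        cbWohndresse.DataSource = matches.Any()
            ? matches.CopyToDataTable()           // CopyToDataTable throws on an empty sequence,
            : ds.Tables["Adresse"].Clone();       // so fall back to an empty table with the same schema
        cbWohndresse.DisplayMember = "AdrLabel";  // plain column names once a single table is bound
        cbWohndresse.ValueMember = "adress_id";

    Note that the lambda now returns bool per row, which is what Where expects, and the "Adresse." prefix is dropped from DisplayMember/ValueMember because the DataSource is the table itself rather than the DataSet.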

    Read the article

  • Autoloading Development or Production configs (best practices)

    - by Xeoncross
    When programming sites you usually have one set of config files for the development environment and another set for the production server (or one file with both settings). I am assuming all projects should be handled by version control like git or svn; manual file transfers (like FTP) are wrong on so many levels. The problem for me is how you enable/disable the correct settings (so that your system knows which ones to use). Each system I work on just kind of jimmy-rigs a solution. Below are the 3 methods I know of, and I am hoping that someone can suggest something more elegant.

    1) File based. The system loads a folder structure based on the URL requested:

        /site.com
        /site.fakeTLD
        /lib
        index.php

    For example, if the URL is http://site.com then the system loads the production config files located in the site.com folder. However, if I'm working on the site locally, I visit http://site.fakeTLD to work on the local copy of the site. To set this up I edit my hosts file, add site.fakeTLD pointing to my own computer (127.0.0.1/localhost), and then create a vhost in Apache. So now I can work on the codebase locally and then push to the server without any trouble. The problem is that this is susceptible to a "host" injection attack: someone requesting site.com could set the Host header to site.fakeTLD, and the system would then load my development config files instead of production.

    2) Config based. The config file contains one section for development and one for production. The problem is that each time you go to push your changes to the repo, you have to edit the file to specify which set of config options should be used:

        $use = 'production'; //'development';

    This leaves the repo open to human error, should one of the developers forget to enable the right setting.

    3) Filesystem-check based. All the development machines have an extra empty file called "development.txt" or something similar. Each time the system loads, it checks for this file: if found, it knows it is in development mode; if missing, it knows it is in production mode. Since the file is NEVER ADDED to the repo, it will never be pushed (and checked out) on the production machine. However, this just doesn't feel right, and it causes a slight slowdown, since all filesystem checks are slow.

    Is there any way the server can auto-detect whether to use the development or the production configs?

    Read the article

  • Composite widget - what is the preferred way?

    - by Aleksander Gralak
    I want to build a reusable widget. It should be an ordinary composite of standard elements. What is the best approach? Below is the code sample I currently use; however, there might be something more elegant. Furthermore, the sample below runs perfectly at runtime, but the visual editor in Eclipse throws exceptions (which is not a problem for me at this time). Is there any recommended way of creating composites? Should I use a fragment?

        public class MyComposite extends LinearLayout {

            private ImageView m_a1;
            private ImageView m_a2;
            private ImageView m_w1;
            private ImageView m_w2;
            private ImageView m_w3;
            private ImageView m_w4;

            public MyComposite(final Context context) {
                this(context, null);
            }

            public MyComposite(final Context context, final AttributeSet attrs) {
                super(context, attrs);
                LayoutInflater inflater = (LayoutInflater) context.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
                inflater.inflate(R.layout.my_composite, this, true);
                setupViewItems();
            }

            private void setupViewItems() {
                m_a1 = (ImageView) findViewById(R.id.A1Img);
                m_w1 = (ImageView) findViewById(R.id.Wave1Img);
                m_w2 = (ImageView) findViewById(R.id.Wave2Img);
                m_w3 = (ImageView) findViewById(R.id.Wave3Img);
                m_w4 = (ImageView) findViewById(R.id.Wave4Img);
                m_a2 = (ImageView) findViewById(R.id.A2Img);
                resetView();
            }

            private void resetView() {
                m_w1.setAlpha(0);
                m_w2.setAlpha(0);
                m_w3.setAlpha(0);
                m_w4.setAlpha(0);
            }
        }

    Layout XML:

        <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:id="@+id/MyComposite" ... >
            <ImageView android:id="@+id/A1Img" android:src="@drawable/a1" ... />
            <ImageView android:id="@+id/Wave1Img" ... android:src="@drawable/wave1" />
            <ImageView android:id="@+id/Wave2Img" ... android:src="@drawable/wave2" />
            <ImageView android:id="@+id/Wave3Img" ... android:src="@drawable/wave3" />
            <ImageView android:id="@+id/Wave4Img" ... android:src="@drawable/wave4" />
            <ImageView android:id="@+id/A2Img" ... android:src="@drawable/a2" />
        </LinearLayout>

    Then you can use it in other layouts like this:

        ...
        <your.package.MyComposite android:id="@+id/mc1" ... />
        ...

    and use it from Java code as well, as an instance of the MyComposite class.

    Read the article

  • user generated / user specific functions

    - by pedalpete
    I'm looking for the most elegant and secure method to do the following. I have a calendar and groups of users. Users can add events to specific days on the calendar and specify how long each event lasts. I've had a few requests from users to add the ability for them to define that events of a specific length include a break of a certain amount of time, or to require that a specific amount of time be left between events. For example: if an event is 2 hours, include a 20-minute break; for each event, require 30 minutes before the start of the next event. The same group that has asked for a 2-hour event to include a 20-minute break could also require that a 3-hour event include a 30-minute break. In the end, what the users are trying to get is an elapsed time excluding breaks, calculated for them. Currently I provide them with a total elapsed time, but they are looking for a running time. However, each of these requests is different for each group: where one group may want a 30-minute break during a 2-hour event, another may want only 10 minutes for each 3-hour event. I was kind of thinking I could write the functions into a PHP file per group, then include that file, do the calculations via PHP, and return a calculated total to the user - but something about that doesn't sit right with me. Another option is to output the group's functions to JavaScript and run the calculation client-side, as I'm already returning the duration of the event; but where a user is part of more than one group with different rules, this seems like it could get rather messy. I currently store the start and end times in the database, but no durations, and I don't think I should be storing the calculated totals in the db, because if a group decides to change their calculations, I'd need to change them throughout the db. Is there a better way of doing this? I would just store the variables in MySQL, but I don't see how I can then tell MySQL to calculate based on those variables. I'm REALLY lost here. Any suggestions? I'm hoping somebody has done something similar and can provide some insight into the best direction. If it helps, my table contains: eventid, user, group, startDate, startTime, endDate, endTime, type. The JSON for the event which I return to the user is:

        {"eventid":"'.$eventId.'", "user":"'.$userId.'", "group":"'.$groupId.'",
         "type":"'.$type.'", "startDate":"'.$startDate.'", "startTime":"'.$startTime.'",
         "endDate":"'.$endDate.'", "endTime":"'.$endTime.'",
         "durationLength":"'.$duration.'", "durationHrs":"'.$durationHrs.'"}

    where, for example, durationLength is 2.5 and durationHrs is 2:30.

    Read the article

  • Approaches to create a nested tree structure of NSDictionaries?

    - by d11wtq
    I'm parsing some input which produces a tree structure containing NSDictionary instances on the branches and NSString instances at the nodes. After parsing, the whole structure should be immutable. I feel like I'm jumping through hoops to create the structure and then make sure it's immutable when it's returned from my method. We can probably all relate to the input I'm parsing, since it's a query string from a URL. In a string like this:

        a=foo&b=bar&a=zip

    We expect a structure like this:

        NSDictionary {
            "a" => NSDictionary {
                0 => "foo",
                1 => "zip"
            },
            "b" => "bar"
        }

    I'm keeping it just two-dimensional in this example for brevity, though in the real world we sometimes see var[key1][key2]=value&var[key1][key3]=value2 type structures. The code hasn't evolved that far just yet. Currently I do this:

        - (NSDictionary *)parseQuery:(NSString *)queryString
        {
            NSMutableDictionary *params = [NSMutableDictionary dictionary];
            NSArray *pairs = [queryString componentsSeparatedByString:@"&"];
            for (NSString *pair in pairs) {
                NSRange eqRange = [pair rangeOfString:@"="];
                NSString *key;
                id value;

                // If the parameter is a key without a specified value
                if (eqRange.location == NSNotFound) {
                    key = [pair stringByReplacingPercentEscapesUsingEncoding:NSASCIIStringEncoding];
                    value = @"";
                } else {
                    // Else determine both key and value
                    key = [[pair substringToIndex:eqRange.location] stringByReplacingPercentEscapesUsingEncoding:NSASCIIStringEncoding];
                    if ([pair length] > eqRange.location + 1) {
                        value = [[pair substringFromIndex:eqRange.location + 1] stringByReplacingPercentEscapesUsingEncoding:NSASCIIStringEncoding];
                    } else {
                        value = @"";
                    }
                }

                // Parameter already exists, it must be a dictionary
                if (nil != [params objectForKey:key]) {
                    id existingValue = [params objectForKey:key];
                    if (![existingValue isKindOfClass:[NSDictionary class]]) {
                        value = [NSDictionary dictionaryWithObjectsAndKeys:
                                 existingValue, [NSNumber numberWithInt:0],
                                 value, [NSNumber numberWithInt:1], nil];
                    } else {
                        // FIXME: There must be a more elegant way to build a nested dictionary
                        // where the end result is immutable?
                        NSMutableDictionary *newValue = [NSMutableDictionary dictionaryWithDictionary:existingValue];
                        [newValue setObject:value forKey:[NSNumber numberWithInt:[newValue count]]];
                        value = [NSDictionary dictionaryWithDictionary:newValue];
                    }
                }

                [params setObject:value forKey:key];
            }

            return [NSDictionary dictionaryWithDictionary:params];
        }

    If you look at the bit where I've added FIXME, it feels awfully clumsy: pulling out the existing dictionary, creating a mutable copy of it, adding the new value, then creating an immutable dictionary from that to set back in place. Expensive and unnecessary? I'm not sure if there are any Cocoa-specific design patterns I can follow here?

    Read the article

< Previous Page | 52 53 54 55 56 57 58 59 60 61  | Next Page >