Search Results

Search found 17487 results on 700 pages for 'static members'.


  • SmtpClient.SendAsync code review

    - by Lieven Cardoen
    I don't know if this is the way it should be done:

        {
            ...
            var client = new SmtpClient { Host = _smtpServer };
            client.SendCompleted += SendCompletedCallback;
            var userState = mailMessage;
            client.SendAsync(mailMessage, userState);
            ...
        }

        private static void SendCompletedCallback(object sender, AsyncCompletedEventArgs e)
        {
            // Get the unique identifier for this asynchronous operation.
            var mailMessage = (MailMessage)e.UserState;

            if (e.Cancelled)
            {
                Log.Info(String.Format("[{0}] Send canceled.", mailMessage));
            }
            if (e.Error != null)
            {
                Log.Error(String.Format("[{0}] {1}", mailMessage, e.Error));
            }
            else
            {
                Log.Info("Message sent.");
            }

            mailMessage.Dispose();
        }

    Disposing the mailMessage right after client.SendAsync(...) throws an error, so I need to dispose of it in the callback handler instead.
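    For what it's worth, disposing in the SendCompleted callback is the standard pattern for the event-based SendAsync. A hedged alternative sketch, assuming .NET 4.5 or later is available: SendMailAsync returns a Task, so the send can be awaited and the message disposed deterministically. The _smtpServer field and Log class are carried over from the snippet above; the method name is illustrative.

        private async Task SendMessageAsync(MailMessage mailMessage)
        {
            using (var client = new SmtpClient { Host = _smtpServer })
            using (mailMessage)  // disposed only after the awaited send completes
            {
                await client.SendMailAsync(mailMessage);
                Log.Info("Message sent.");
            }
        }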


  • How can I extend DynamicQuery.cs to implement a .Single method?

    - by Yoenhofen
    I need to write some dynamic queries for a project I'm working on. I'm finding that a significant amount of time is being spent by my program in the Count and First methods, so I started to change to .Single, only to find out that there is no such method. The code below was my first attempt at creating one (mostly copied from the Where method), but it's not working. Help?

        public static object Single(this IQueryable source, string predicate, params object[] values)
        {
            if (source == null) throw new ArgumentNullException("source");
            if (predicate == null) throw new ArgumentNullException("predicate");

            LambdaExpression lambda = DynamicExpression.ParseLambda(source.ElementType, typeof(bool), predicate, values);

            return source.Provider.CreateQuery(
                Expression.Call(
                    typeof(Queryable), "Single",
                    new Type[] { source.ElementType },
                    source.Expression,
                    Expression.Quote(lambda)));
        }
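    A hedged guess at the fix: unlike Where, Single returns a single element rather than another query, so the expression has to go through IQueryProvider.Execute instead of CreateQuery. A sketch of that change (untested against the poster's project):

        public static object Single(this IQueryable source, string predicate, params object[] values)
        {
            if (source == null) throw new ArgumentNullException("source");
            if (predicate == null) throw new ArgumentNullException("predicate");

            LambdaExpression lambda = DynamicExpression.ParseLambda(source.ElementType, typeof(bool), predicate, values);

            // Single yields a scalar result, not an IQueryable, so Execute is the right entry point.
            return source.Provider.Execute(
                Expression.Call(
                    typeof(Queryable), "Single",
                    new Type[] { source.ElementType },
                    source.Expression,
                    Expression.Quote(lambda)));
        }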


  • Getting ANT to scp only new/changed files

    - by Artem
    I would like to optimize my scp deployment, which currently copies all files, so that it only copies files that have changed since the last build. I believe it should be possible with the current setup somehow, but I don't know how to do this. I have the following:

        Project/src/blah/blah/   <- files I am editing (mostly PHP in this case, some static assets)
        Project/build            <- a local build step copies the files to here

    I have an scp task right now that copies all of Project/build out to a remote server when I need it. Is it possible to somehow take advantage of this extra "build" directory to accomplish what I want - meaning I only want to upload the "diff" between src/** and build/**? Is it possible to somehow retrieve this as a fileset in ANT and then scp that? I do realize that this means that if I somehow delete or mess around with files on the server in between, the ANT script would not notice, but for me this is okay.


  • Authorize.Net, Silent Posts, and URL Rewriting Don't Mix

    The too long, didn't read synopsis: If you use Authorize.Net and its silent post feature and it stops working, make sure that if your website uses URL rewriting to strip or add a www to the domain name, the URL you specify for the silent post matches the URL rewriting rule, because Authorize.Net's silent post feature won't resubmit the post request to the URL specified via the redirect response.

    I have a client that uses Authorize.Net to manage and bill customers. Like many payment gateways, Authorize.Net supports recurring payments. For example, a website may charge members a monthly fee to access their services. With Authorize.Net you can provide the billing amount and schedule, and at each interval Authorize.Net will automatically charge the customer's credit card and deposit the funds to your account.

    You may want to do something whenever Authorize.Net performs a recurring payment. For instance, if the recurring payment charge was a success you would extend the customer's service; if the transaction was denied then you would cancel their service (or whatever). To accommodate this, Authorize.Net offers a silent post feature. Properly configured, Authorize.Net will send an HTTP request that contains details of the recurring payment transaction to a URL that you specify. This URL could be an ASP.NET page on your server that then parses the data from Authorize.Net and updates the specified customer's account accordingly. (Of course, you can always view the history of recurring payments through the reporting interface on Authorize.Net's website; the silent post feature gives you a way to programmatically respond to a recurring payment.)

    Recently, this client of mine that uses Authorize.Net informed me that several paying customers were telling him that their access to the site had been cut off even though their credit cards had been recently billed. Looking through our logs, I noticed that we had not shown any recurring payment log activity for over a month. I figured one of two things must be going on: either Authorize.Net wasn't sending us the silent post requests anymore, or the page that was processing them wasn't doing so correctly.

    I started by verifying that our Authorize.Net account was properly set up to use the silent post feature and that it was pointing to the correct URL. Authorize.Net's site indicated the silent post was configured and that recurring payment transaction details were being sent to http://example.com/AuthorizeNetProcessingPage.aspx.

    Next, I wanted to determine what information was getting sent to that URL. The application was set up to log the parsed results of the Authorize.Net request, such as what customer the recurring payment applied to; however, we were not logging the actual HTTP request coming from Authorize.Net. I contacted Authorize.Net's support to inquire whether they logged the HTTP requests sent via the silent post feature and was told that they did not. I decided to add a bit of code to log the incoming HTTP request, which you can do by using the Request object's SaveAs method. This allowed me to save every incoming HTTP request to the silent post page to a text file on the server.
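    For reference, such logging only takes a line or two; a minimal sketch (the file path and page are illustrative, but HttpRequest.SaveAs is the real API, and its second parameter controls whether headers are included):

        protected void Page_Load(object sender, EventArgs e)
        {
            // Dump the raw incoming request, headers included, for troubleshooting.
            Request.SaveAs(Server.MapPath("~/App_Data/LastSilentPost.txt"), true);

            // ... normal silent post processing continues here ...
        }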
    Upon the next recurring payment, I was able to see the HTTP request being received by the page:

        GET /AuthorizeNetProcessingPage.aspx HTTP/1.1
        Connection: Close
        Accept: */*
        Host: www.example.com

    That was it. Two things alarmed me: first, the request was obviously a GET and not a POST; second, there was no POST body (obviously), which is where Authorize.Net passes along the details of the recurring payment transaction. What stuck out was the Host header, which differed slightly from the silent post URL configured in Authorize.Net. Specifically, the Host header in the above logged request pointed to www.example.com, whereas the Authorize.Net configuration used example.com (no www).

    About a month ago - the same time these recurring payment transaction details stopped being processed by our ASP.NET page - we had implemented IIS 7's URL rewriting feature to permanently redirect all traffic for example.com to www.example.com. Could that be the problem?

    I contacted Authorize.Net's support again and asked them if their silent post algorithm would follow the 301 HTTP response and repost the recurring payment transaction details. They said yes, the silent post would follow redirects. Their reports didn't jibe with my observations, so I went ahead and updated our Authorize.Net configuration to point to http://www.example.com/AuthorizeNetProcessingPage.aspx instead of http://example.com/AuthorizeNetProcessingPage.aspx. And, I'm happy to report, recurring payments are correctly being processed again!

    If you use Authorize.Net and the silent post feature, and you notice that your processing page is no longer working, make sure you are not using any URL rewriting rules that may conflict with the silent post URL configuration. Hope this saves someone the time it took me to get to the bottom of this. Happy Programming!


  • ASP.NET MVC 3: Layouts and Sections with Razor

    - by ScottGu
    This is another in a series of posts I’m doing that cover some of the new ASP.NET MVC 3 features:

    - Introducing Razor (July 2nd)
    - New @model keyword in Razor (Oct 19th)
    - Layouts with Razor (Oct 22nd)
    - Server-Side Comments with Razor (Nov 12th)
    - Razor’s @: and <text> syntax (Dec 15th)
    - Implicit and Explicit code nuggets with Razor (Dec 16th)
    - Layouts and Sections with Razor (Today)

    In today’s post I’m going to go into more details about how Layout pages work with Razor.  In particular, I’m going to cover how you can have multiple, non-contiguous, replaceable “sections” within a layout file – and enable views based on layouts to optionally “fill in” these different sections at runtime.  The Razor syntax for doing this is clean and concise. I’ll also show how you can dynamically check at runtime whether a particular layout section has been defined, and how you can provide alternate content (or even an alternate layout) in the event that a section isn’t specified within a view template.  This provides a powerful and easy way to customize the UI of your site and make it clean and DRY from an implementation perspective.

    What are Layouts?

    You typically want to maintain a consistent look and feel across all of the pages within your web-site/application.  ASP.NET 2.0 introduced the concept of “master pages” which helps enable this when using .aspx based pages or templates.  Razor also supports this concept with a feature called “layouts” – which allow you to define a common site template, and then inherit its look and feel across all the views/pages on your site. I previously discussed the basics of how layout files work with Razor in my ASP.NET MVC 3: Layouts with Razor blog post.  Today’s post will go deeper and discuss how you can define multiple, non-contiguous, replaceable regions within a layout file that you can then optionally “fill in” at runtime.

    Site Layout Scenario

    Let’s look at how we can implement a common site layout scenario with ASP.NET MVC 3 and Razor.  Specifically, we’ll implement some site UI where we have a common header and footer on all of our pages.  We’ll also add a “sidebar” section to the right of our common site layout.  On some pages we’ll customize the SideBar to contain content specific to the page it is included on, and on other pages (that do not have custom sidebar content) we will fall back and provide some “default content” for the sidebar. We’ll use ASP.NET MVC 3 and Razor to enable this customization in a nice, clean way.  Below are some step-by-step tutorial instructions on how to build the above site with ASP.NET MVC 3 and Razor.

    Part 1: Create a New Project with a Layout for the “Body” section

    We’ll begin by using the “File->New Project” menu command within Visual Studio to create a new ASP.NET MVC 3 Project.  We’ll create the new project using the “Empty” template option. This will create a new project that has no default controllers in it.

    Creating a HomeController

    We will then right-click on the “Controllers” folder of our newly created project and choose the “Add->Controller” context menu command.  This will bring up the “Add Controller” dialog. We’ll name the new controller we create “HomeController”.  When we click the “Add” button, Visual Studio will add a HomeController class to our project with a default “Index” action method that returns a view. We won’t need to write any Controller logic to implement this sample – so we’ll leave the default code as-is.
    Creating a View Template

    Our next step will be to implement the view template associated with the HomeController’s Index action method.  To implement the view template, we will right-click within the “HomeController.Index()” method and select the “Add View” command to create a view template for our home page. This will bring up the “Add View” dialog within Visual Studio.  We do not need to change any of the default settings within the dialog (the name of the template was auto-populated to Index because we invoked the “Add View” context menu command within the Index method).  When we click the “Add” Button within the dialog, a Razor-based “Index.cshtml” view template will be added to the \Views\Home\ folder within our project, and we can add some simple default static content to it. Note that we don’t have an <html> or <body> section defined within our view template.  This is because we are going to rely on a layout template to supply these elements and use it to define the common site layout and structure for our site (ensuring that it is consistent across all pages and URLs within the site).

    Customizing our Layout File

    Let’s open and customize the default “_Layout.cshtml” file that was automatically added to the \Views\Shared folder when we created our new project. The default layout file is pretty basic and simply outputs a title (if specified in either the Controller or the View template) and adds links to a stylesheet and jQuery.  The call to “RenderBody()” indicates where the main body content of our Index.cshtml file will be merged into the output sent back to the browser. Let’s modify the Layout template to add a common header, footer and sidebar to the site.  We’ll then edit the “Site.css” file within the \Content folder of our project and add 4 CSS rules to it. And now when we run the project and browse to the home “/” URL of our project, the content of the HomeController’s Index view template and the site’s Shared Layout template are merged together into a single HTML response sent back from the server.

    Part 2: Adding a “SideBar” Section

    Our site so far has a layout template that has only one “section” in it – what we call the main “body” section of the response.  Razor also supports the ability to add additional “named sections” to layout templates as well.  These sections can be defined anywhere in the layout file (including within the <head> section of the HTML), and allow you to output dynamic content to multiple, non-contiguous, regions of the final response.

    Defining the “SideBar” section in our Layout

    Let’s update our Layout template to define an additional “SideBar” section of content that will be rendered within the <div id="sidebar"> region of our HTML.  We can do this by calling the RenderSection(string sectionName, bool required) helper method within our Layout.cshtml file, as shown in the sketch below. The first parameter to the “RenderSection()” helper method specifies the name of the section we want to render at that location in the layout template.  The second parameter is optional, and allows us to define whether the section we are rendering is required or not.  If a section is “required”, then Razor will throw an error at runtime if that section is not implemented within a view template that is based on the layout file (which can make it easier to track down content errors).  If a section is not required, then its presence within a view template is optional, and the RenderSection() call will render nothing at runtime if it isn’t defined.
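    The screenshots that illustrated the original post did not survive the conversion; a minimal sketch of the layout call being described (the div id follows the walkthrough, the surrounding markup is illustrative):

        <div id="sidebar">
            @RenderSection("SideBar", false)
        </div>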
    Now that we’ve made the above change to our layout file, let’s hit refresh in our browser and see what our Home page now looks like. We currently have no content within our SideBar <div> – that is because the Index.cshtml view template doesn’t implement our new “SideBar” section yet.

    Implementing the “SideBar” Section in our View Template

    Let’s change our home-page so that it has a SideBar section that outputs some custom content.  We can do that by opening up the Index.cshtml view template, and by adding a new “SideBar” section to it.  We’ll do this using Razor’s @section SectionName { } syntax. We could have put our SideBar @section declaration anywhere within the view template.  I think it looks cleaner when defined at the top or bottom of the file – but that is simply personal preference.  You can include any content or code you want within @section declarations.  In this case I have a C# code nugget that outputs the current time at the bottom of the SideBar section.  I could have also written code that used ASP.NET MVC’s HTML/AJAX helper methods and/or accessed any strongly-typed model objects passed to the Index.cshtml view template. Now that we’ve made the above template changes, when we hit refresh in our browser again we’ll see that our SideBar content – which is specific to the Home Page of our site – is now included in the page response sent back from the server, merged into the proper location of the HTML response.

    Part 3: Conditionally Detecting if a Layout Section Has Been Implemented

    Razor provides the ability for you to conditionally check (from within a layout file) whether a section has been defined within a view template, and enables you to output an alternative response in the event that the section has not been defined.  This provides a convenient way to specify default UI for optional layout sections.  Let’s modify our Layout file to take advantage of this capability.  We conditionally check whether the “SideBar” section has been defined within the view template being rendered (using the IsSectionDefined() method), and if so we render the section.  If the section has not been defined, then we instead render some default content for the SideBar.

    Note: You want to make sure you prefix calls to the RenderSection() helper method with a @ character – which will tell Razor to execute the HelperResult it returns and merge in the section content in the appropriate place of the output.  Notice that we write @RenderSection("SideBar") instead of just RenderSection("SideBar").  Otherwise you’ll get an error.

    In the fallback case we are simply rendering an inline static string (<p>Default SideBar Content</p>) if the section is not defined.  A real-world site would more likely refactor this default content to be stored within a separate partial template (which we’d render using the Html.RenderPartial() helper method within the else block) or alternatively use the Html.Action() helper method within the else block to encapsulate both the logic and rendering of the default sidebar.

    When we hit refresh on our home-page, we will still see the same custom SideBar content we had before.  This is because we implemented the SideBar section within our Index.cshtml view template (and so our Layout rendered it).
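    Again, the original screenshots are gone; hedged sketches of the two pieces of syntax described above (the content strings are illustrative). The view template’s section declaration:

        @section SideBar {
            <p>Custom sidebar content for this page.</p>
            <p>Current time: @DateTime.Now</p>
        }

    And the conditional fallback in the layout:

        @if (IsSectionDefined("SideBar")) {
            @RenderSection("SideBar")
        }
        else {
            <p>Default SideBar Content</p>
        }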
    Let’s now implement a “/Home/About” URL for our site by adding a new “About” action method to our HomeController. The About() action method simply renders a view back to the client when invoked.  We can implement the corresponding view template for this action by right-clicking within the “About()” method and using the “Add View” menu command (like before) to create a new About.cshtml view template.  We’ll implement the About.cshtml view template without defining a “SideBar” section within it. When we browse the /Home/About URL we’ll see the content we supplied in the main body section of our response, and the default SideBar content will be rendered.  The layout file determined at runtime that a custom SideBar section wasn’t present in the About.cshtml view template, and instead rendered the default sidebar content.

    One Last Tweak…

    Let’s suppose that at a later point we decide that instead of rendering default side-bar content, we just want to hide the side-bar entirely from pages that don’t have any custom sidebar content defined.  We could implement this change simply by making a small modification to our layout so that the sidebar content (and its surrounding HTML chrome) is only rendered if the SideBar section is defined. Razor is flexible enough that we can make changes like this and not have to modify any of our view templates (nor make any Controller logic changes) to accommodate this.  We can instead make just this one modification to our Layout file and the rest happens cleanly.  This type of flexibility makes Razor incredibly powerful and productive.

    Summary

    Razor’s layout capability enables you to define a common site template, and then inherit its look and feel across all the views/pages on your site. Razor enables you to define multiple, non-contiguous, “sections” within layout templates that can be “filled-in” by view templates.  The @section {} syntax for doing this is clean and concise.  Razor also supports the ability to dynamically check at runtime whether a particular section has been defined, and to provide alternate content (or even an alternate layout) in the event that it isn’t specified.  This provides a powerful and easy way to customize the UI of your site - and make it clean and DRY from an implementation perspective.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu


  • Linq-to-SQL: Ignore null parameters from WHERE clause

    - by Peter Bridger
    The query below should return records that either have a matching Id supplied in ownerGroupIds or that match ownerUserId. However, if ownerUserId is null, I want this part of the query to be ignored.

        public static int NumberUnderReview(int? ownerUserId, List<int> ownerGroupIds)
        {
            return (
                from c in db.Contacts
                where c.Active == true
                    && c.LastReviewedOn <= DateTime.Now.AddDays(-365)
                    && (
                        // Owned by user
                        !ownerUserId.HasValue || c.OwnerUserId.Value == ownerUserId.Value
                    )
                    && (
                        // Owned by group
                        ownerGroupIds.Count == 0 || ownerGroupIds.Contains(c.OwnerGroupId.Value)
                    )
                select c
            ).Count();
        }

    However, when a null is passed in for ownerUserId I get the following error: "Nullable object must have a value." I get a tingling feeling that I may have to use a lambda expression in this instance?
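    A hedged suggestion (not from the original thread): the error is consistent with the provider evaluating ownerUserId.Value even when HasValue is false. Comparing the nullables directly usually avoids forcing .Value on a null; a sketch of the changed condition:

        where c.Active == true
            && c.LastReviewedOn <= DateTime.Now.AddDays(-365)
            // When ownerUserId is null the left side short-circuits,
            // and .Value is never called on either nullable.
            && (ownerUserId == null || c.OwnerUserId == ownerUserId)
            && (ownerGroupIds.Count == 0 || ownerGroupIds.Contains(c.OwnerGroupId.Value))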


  • how to parse a Date string to java.Date

    - by hguser
    Hi: I have a date string and I want to parse it to a normal date using the Java Date API. The following is my code:

        public static void main(String[] args) {
            String date = "2010-10-02T12:23:23Z";
            String pattern = "yyyy-MM-ddThh:mm:ssZ";
            SimpleDateFormat sdf = new SimpleDateFormat(pattern);
            try {
                Date d = sdf.parse(date);
                System.out.println(d.getYear());
            } catch (ParseException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        }

    However I got an exception: java.lang.IllegalArgumentException: Illegal pattern character 'T'. So I wonder if I have to split the string and parse it manually? BTW, I have tried adding a single quote character on either side of the T:

        String pattern = "yyyy-MM-dd'T'hh:mm:ssZ";

    It also does not work.


  • Comments on Comments

    - by Joe Mayo
    I almost tweeted a reply to Capar Kleijne's question about comments on Twitter, but realized that my opinion exceeded 140 characters. The following is based upon my experience with extremes and approaches that I find useful in code comments.

    There are a couple of extremes that I've seen, and reasons why people go the distance with each approach. The most common extreme is no comments in the code at all. A few bad reasons why this happens are that a developer is in a hurry, sloppy, or interested in job preservation. The unfortunate result is that the code is difficult to understand and hard to maintain. The drawbacks of uncommented code are a primary reason why teachers drill the need for commenting code into our heads. This viewpoint assumes the lack of comments is bad because the code is bad, but there is another reason for not commenting that is gaining more popularity.

    I've heard and read that code should be self-documenting. Following this thought pattern, if code is well written with meaningful names, there should not be a reason for comments. An addendum to this argument is that comments are often neglected and get out-of-date, but the code is what is kept up-to-date. Presumably, if code contained very good naming, it would be easy to maintain. This is a noble perspective and I like the practice of meaningful naming of identifiers. However, I think it's also an extreme approach that doesn't cover important cases. i.e. If an identifier is named badly (subjective differences in opinion) or not changed appropriately during maintenance, then the badly named identifier is no more useful than a stale comment.

    These were the two no-comment extremes, so let's look at the too-many-comments extreme. On a regular basis, I'll see cases where the code is over-commented; not nearly as often as the no-comment scenarios, but still prevalent. These are examples where every single line in the code is commented. These comments make the code harder to read because they get in the way of the algorithm. In most cases, the comments parrot what each line of code does. If a developer understands the language, then most statements are immediately intuitive. i.e. What use is it to say that I'm assigning foo to bar when it's clear what the code is doing?

    I think that over-commenting code is a waste of time that slows down initial development and maintenance. Understandably, the developer's intentions are admirable because they've had it beaten into their heads that they must comment. However, I think it's an extreme, and I prefer a more moderate approach.

    I don't think the extremes do justice to code because each can make maintenance harder. No comments on bad code is obviously a problem, but the other two extremes are subtle and require qualification to address properly. The problem I see with the code-as-documentation approach is that it doesn't lift the developer out of the algorithm to identify dependencies, intentions, and hacks. Any developer can read code and follow an algorithm, but they still need to know where it fits into the big picture of the application. Because of indirections with language features like interfaces, delegates, and virtual members, code can become complex. Occasionally, it's useful to point out a nuance or reason why a piece of code is there. i.e. If you're building an app that communicates via HTTP, you'll have certain headers to include for the endpoint, and it could be useful to point out why the code for setting those header values is there and how they affect the application.

    An argument against this could be that you should extract that code into a separate method with a meaningful name to describe the scenario. My problem with such an approach would be that your code base becomes even more difficult to navigate and work with, because you have all of this extra code just to make the code more meaningful. My opinion is that a simple and well-stated comment stating the reasons and intention for the code is more natural and convenient to the initial developer and maintainer. I just don't agree with the approach of going out of the way to avoid making a comment. I'm also concerned that some developers would take this approach as an excuse to not comment their bad code.

    Another area where I like comments is documentation comments. Java has them, and so do C# and VB. They're convenient because we can build automated tools that extract these comments. These extracted comments are often much better than no documentation at all. The "go read the code" answer doesn't always fulfill the need for a quick summary of an API.

    To summarize, I think that the extremes of no comments and too many comments are less than desirable approaches. I prefer documentation comments to explain each class and member (API level) and code comments as necessary to supplement well-written code.

    Joe


  • Unable to use XSSF with Excel 2007

    - by Tarun
    Hello All, I am having a tough time getting to read data from Excel 2007. I am using XSSF to read data from a specific cell of the Excel file but keep getting this error:

        Exception in thread "main" java.lang.NoSuchMethodError: org.apache.xmlbeans.XmlOptions.setSaveAggressiveNamespaces()Lorg/apache/xmlbeans/XmlOptions;
            at org.apache.poi.POIXMLDocumentPart.<init>(POIXMLDocumentPart.java:46)

    This is my piece of code:

        public static void main(String[] args) throws IOException {
            InputStream ins = new FileInputStream("C:\\Users\\Tarun3Kumar\\Desktop\\test.xlsx");
            XSSFWorkbook xwb = new XSSFWorkbook(ins);
            XSSFSheet sheet = xwb.getSheetAt(0);
            Row row = sheet.getRow(1);
            Cell cell = row.getCell(0);
            System.out.println(cell.getStringCellValue());
            System.out.println("a");
        }

    I have the following jars added to the build path:

        poi-3.6
        poi-ooxml-3.6
        poi-ooxml-schemas-3.6
        x-bean.jar

    I could only figure out that "setSaveAggressiveNamespaces" has replaced "setSaveAggresiveNamespaces"....


  • C#: Fill DataGridView From Anonymous Linq Query

    - by mdvaldosta
        // From my form
        BindingSource bs = new BindingSource();

        private void fillStudentGrid()
        {
            bs.DataSource = Admin.GetStudents();
            dgViewStudents.DataSource = bs;
        }

        // From the Admin class
        public static List<Student> GetStudents()
        {
            DojoDBDataContext conn = new DojoDBDataContext();
            var query = (from s in conn.Students
                         select new Student
                         {
                             ID = s.ID,
                             FirstName = s.FirstName,
                             LastName = s.LastName,
                             Belt = s.Belt
                         }).ToList();
            return query;
        }

    I'm trying to fill a DataGridView control in WinForms, and I only want a few of the values. The code compiles, but throws a runtime error: "Explicit construction of entity type 'DojoManagement.Student' in query is not allowed." Is there a way to get it working in this manner?
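    A hedged workaround (a common pattern for this LINQ to SQL error, untested here): project into a plain class that is not a mapped entity. StudentDto below is a hypothetical name, and the property types are assumed from the query:

        public class StudentDto
        {
            public int ID { get; set; }
            public string FirstName { get; set; }
            public string LastName { get; set; }
            public string Belt { get; set; }
        }

        public static List<StudentDto> GetStudents()
        {
            using (var conn = new DojoDBDataContext())
            {
                // Constructing a non-entity type inside the query is allowed.
                return (from s in conn.Students
                        select new StudentDto
                        {
                            ID = s.ID,
                            FirstName = s.FirstName,
                            LastName = s.LastName,
                            Belt = s.Belt
                        }).ToList();
            }
        }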


  • C# - Pass by value & Pass by Reference

    - by Lijo
    Hi Team, could you please explain the following behavior of a C# class? I expect classResult to be "Class Lijo", but the actual value is "Changed". We're making a copy of the reference. Though the copy is pointing to the same address, the method receiving the argument cannot change the original. So why does the value get changed?

        public partial class _Default : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                Person p = new Person();
                p.Name = "Class Lijo";
                Utilityclass.TestMethod(p);
                string classResult = p.Name;
                Response.Write(classResult);
            }
        }

        public class Utilityclass
        {
            public static void TestMethod(Person k)
            {
                k.Name = "Changed";
            }
        }

        public class Person
        {
            private string name;

            public string Name
            {
                get { return name; }
                set { name = value; }
            }
        }

    Thanks, Lijo
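    For context, the distinction the poster is running into can be sketched like this (an illustration, not from the original thread): passing by value copies the reference, so mutating the object through the copy is visible to the caller, while reassigning the parameter is not.

        public static void TestMethod(Person k)
        {
            k.Name = "Changed";     // mutates the single shared Person object; the caller sees this
            k = new Person();       // rebinds only the local copy of the reference
            k.Name = "Invisible";   // the caller's variable still points at the original object
        }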


  • MVC2 Binding isn't working for Html.DropDownListFor<>

    - by devlife
    I'm trying to use the Html.DropDownListFor<> HtmlHelper and am having a little trouble binding on post. The HTML renders properly, but I never get a "selected" value when submitting.

        <%= Html.DropDownListFor(
                m => m.TimeZones,
                Model.TimeZones,
                new { @class = "SecureDropDown", name = "SelectedTimeZone" }) %>

        [Bind(Exclude = "TimeZones")]
        public class SettingsViewModel : ProfileBaseModel
        {
            public IEnumerable<SelectListItem> TimeZones { get; set; }
            public string TimeZone { get; set; }

            public SettingsViewModel()
            {
                TimeZones = GetTimeZones();
                TimeZone = string.Empty;
            }

            private static IEnumerable<SelectListItem> GetTimeZones()
            {
                var timeZones = TimeZoneInfo.GetSystemTimeZones().ToList();
                return timeZones.Select(t => new SelectListItem { Text = t.DisplayName, Value = t.Id });
            }
        }

    I've tried a few different things and am sure I am doing something stupid... just not sure what it is :)
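    A hedged guess at the cause (a common one for this symptom): the expression points at the collection itself, so there is no scalar property for the posted value to bind to. A sketch of binding to the TimeZone property instead, with the list passed as the second argument:

        <%= Html.DropDownListFor(
                m => m.TimeZone,     // scalar property that receives the selected value
                Model.TimeZones,
                new { @class = "SecureDropDown" }) %>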


  • WPF - List View Row Index and Validation

    - by abhishek
    Hi, I have a ListView with TextBoxes in the second column. I want to validate that my TextBox does not contain a number if the third column (data_type) is "Text". I am unable to do the validation. I have tried a few approaches. In one approach I handle the MouseDown event and try to get the row number, so that I can get the data_type value of that row; I want to use this value in the Validate method. I have been struggling for a week now. Would appreciate it if anybody could help.

        <ControlTemplate x:Key="validationTemplate">
            <DockPanel>
                <TextBlock Foreground="Red" FontSize="20">!</TextBlock>
                <AdornedElementPlaceholder/>
            </DockPanel>
        </ControlTemplate>

        <Style x:Key="textBoxInError" TargetType="{x:Type TextBox}">
            <Style.Triggers>
                <Trigger Property="Validation.HasError" Value="true">
                    <Setter Property="ToolTip"
                            Value="{Binding RelativeSource={x:Static RelativeSource.Self}, Path=(Validation.Errors)[0].ErrorContent}"/>
                </Trigger>
            </Style.Triggers>
        </Style>

        <DataTemplate x:Key="textTemplate">
            <TextBox HorizontalAlignment="Stretch"
                     IsEnabled="{Binding XPath=./@isenabled}"
                     Validation.ErrorTemplate="{StaticResource validationTemplate}"
                     Style="{StaticResource textBoxInError}">
                <TextBox.Text>
                    <Binding XPath="./@value" UpdateSourceTrigger="PropertyChanged">
                        <Binding.ValidationRules>
                            <local:TextBoxMinMaxValidation>
                                <local:TextBoxMinMaxValidation.DataType>
                                    <local:DataTypeCheck Datatype="{Binding Source={StaticResource dataProvider}, XPath='/[@id=CustomerServiceQueueName]'}"/>
                                </local:TextBoxMinMaxValidation.DataType>
                                <local:TextBoxMinMaxValidation.ValidRange>
                                    <local:Int32RangeChecker
                                        Minimum="{Binding Source={StaticResource dataProvider}, XPath=./@min}"
                                        Maximum="{Binding Source={StaticResource dataProvider}, XPath=./@max}"/>
                                </local:TextBoxMinMaxValidation.ValidRange>
                            </local:TextBoxMinMaxValidation>
                        </Binding.ValidationRules>
                    </Binding>
                </TextBox.Text>
            </TextBox>
        </DataTemplate>

        <DataTemplate x:Key="dropDownTemplate">
            <ComboBox Name="cmbBox" HorizontalAlignment="Stretch"
                      SelectedIndex="{Binding XPath=./@value}"
                      ItemsSource="{Binding XPath=.//OPTION/@value}"
                      IsEnabled="{Binding XPath=./@isenabled}"/>
        </DataTemplate>

        <DataTemplate x:Key="booldropDownTemplate">
            <ComboBox Name="cmbBox" HorizontalAlignment="Stretch"
                      SelectedIndex="{Binding XPath=./@value, Converter={StaticResource boolconvert}}">
                <ComboBoxItem>True</ComboBoxItem>
                <ComboBoxItem>False</ComboBoxItem>
            </ComboBox>
        </DataTemplate>

        <local:ControlTemplateSelector x:Key="myControlTemplateSelector"/>

        <Style x:Key="StretchedContainerStyle" TargetType="{x:Type ListViewItem}">
            <Setter Property="HorizontalContentAlignment" Value="Stretch"/>
            <Setter Property="Template" Value="{DynamicResource ListBoxItemControlTemplate1}"/>
        </Style>

        <ControlTemplate x:Key="ListBoxItemControlTemplate1" TargetType="{x:Type ListBoxItem}">
            <Border SnapsToDevicePixels="true" x:Name="Bd"
                    Background="{TemplateBinding Background}"
                    BorderBrush="{DynamicResource {x:Static SystemColors.ActiveBorderBrushKey}}"
                    Padding="{TemplateBinding Padding}"
                    BorderThickness="0,0.5,0,0.5">
                <GridViewRowPresenter SnapsToDevicePixels="{TemplateBinding SnapsToDevicePixels}"
                                      VerticalAlignment="{TemplateBinding VerticalContentAlignment}"/>
            </Border>
        </ControlTemplate>

        <Style x:Key="CustomHeaderStyle" TargetType="{x:Type GridViewColumnHeader}">
            <Setter Property="Background" Value="LightGray"/>
            <Setter Property="FontWeight" Value="Bold"/>
            <Setter Property="FontFamily" Value="Arial"/>
            <Setter Property="HorizontalContentAlignment" Value="Left"/>
            <Setter Property="Padding" Value="2,0,2,0"/>
        </Style>
        </UserControl.Resources>

        <Grid x:Name="GridViewControl" Height="Auto">
            <Grid.RowDefinitions>
                <RowDefinition Height="*"/>
                <RowDefinition Height="34"/>
            </Grid.RowDefinitions>
            <ListView x:Name="ListViewControl" Grid.Row="0"
                      ItemContainerStyle="{DynamicResource StretchedContainerStyle}"
                      ItemTemplateSelector="{DynamicResource myControlTemplateSelector}"
                      IsSynchronizedWithCurrentItem="True"
                      ItemsSource="{Binding Source={StaticResource dataProvider}, XPath=//CONFIGURATION}">
                <ListView.View>
                    <GridView>
                        <GridViewColumn Header="ID" HeaderContainerStyle="{StaticResource CustomHeaderStyle}" DisplayMemberBinding="{Binding XPath=./@id}"/>
                        <GridViewColumn Header="VALUE" HeaderContainerStyle="{StaticResource CustomHeaderStyle}" CellTemplateSelector="{DynamicResource myControlTemplateSelector}"/>
                        <GridViewColumn Header="DATATYPE" HeaderContainerStyle="{StaticResource CustomHeaderStyle}" DisplayMemberBinding="{Binding XPath=./@data_type}"/>
                        <GridViewColumn Header="DESCRIPTION" HeaderContainerStyle="{StaticResource CustomHeaderStyle}" DisplayMemberBinding="{Binding XPath=./@description}" Width="{Binding ElementName=ListViewControl, Path=ActualWidth}"/>
                    </GridView>
                </ListView.View>
            </ListView>
            <StackPanel Grid.Row="1">
                <Button Grid.Row="1" HorizontalAlignment="Stretch" Height="34" HorizontalContentAlignment="Stretch">
                    <StackPanel HorizontalAlignment="Stretch" VerticalAlignment="Center" Orientation="Horizontal" FlowDirection="RightToLeft" Height="30">
                        <Button Grid.Row="1" Content="Apply" Padding="0,0,0,0" Margin="6,2,0,2" Name="btn_Apply"
                                HorizontalAlignment="Right" VerticalContentAlignment="Center" HorizontalContentAlignment="Center"
                                Width="132" IsTabStop="True" Click="btn_ApplyClick" Height="24"/>
                    </StackPanel>
                </Button>
            </StackPanel>
        </Grid>


  • how to instantiate template of template

    - by aaa
    hello, I have a template that looks like this:

        100 template<size_t A0, size_t A1, size_t A2, size_t A3>
        101 struct mask {
        103     template<size_t B0, size_t B1, size_t B2, size_t B3>
        104     struct compare {
        105         static const bool value = (A0 == B0 && A1 == B1 && A2 == B2 && A3 == B3);
        106     };
        107 };
        ...
        120 const typename boost::enable_if_c<
        121     mask<a,b,c,d>::compare<2,3,0,1>::value || ...>::type

    I am trying to instantiate the compare structure. How do I get the value in line 121?


  • SendMessage vs. WndProc

    - by Poma
    I'm trying to extend the TextBox control to add watermarking functionality. The example I've found on CodeProject uses the imported SendMessage function:

        [DllImport("user32.dll", CharSet = CharSet.Auto, SetLastError = false)]
        static extern IntPtr SendMessage(IntPtr hWnd, uint Msg, uint wParam,
            [MarshalAs(UnmanagedType.LPWStr)] string lParam);

        void SetWatermark()
        {
            SendMessage(this.Handle, 0x1501, 0, "Sample");
        }

    I'm wondering why not use the protected WndProc instead:

        void SetWatermark()
        {
            var m = new Message()
            {
                HWnd = this.Handle,
                Msg = 0x1501,
                WParam = (IntPtr)0,
                LParam = Marshal.StringToHGlobalUni("Sample")
            };
            WndProc(ref m);
        }

    Both seem to work fine. Almost all examples I've seen on the internet use the SendMessage function. Why is that? Isn't the WndProc function designed to replace SendMessage?

    P.S. I don't know the right way to convert a string to an IntPtr, and found that Marshal.StringToHGlobalUni works OK. Is it the right function for this?
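    One detail worth flagging regardless of which route is chosen (an observation about the snippet above, not from the original thread): Marshal.StringToHGlobalUni allocates unmanaged memory that is never reclaimed automatically, so the WndProc variant as written leaks it on every call. A sketch of the fix:

        void SetWatermark()
        {
            IntPtr lParam = Marshal.StringToHGlobalUni("Sample");
            try
            {
                var m = new Message
                {
                    HWnd = this.Handle,
                    Msg = 0x1501,
                    WParam = IntPtr.Zero,
                    LParam = lParam
                };
                WndProc(ref m);
            }
            finally
            {
                // StringToHGlobalUni allocates from the unmanaged heap; free it explicitly.
                Marshal.FreeHGlobal(lParam);
            }
        }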


  • "not well-formed" warning when loading client-side JSON in Firefox via jQuery.ajax

    - by Zhami
    I am using jQuery's ajax method to acquire a static JSON file. The data is loaded from the local file system, hence there is no server, so I can't change the MIME type. This works fine in Safari, but Firefox (3.6.3) reports the file to be "not well-formed". I am aware of, and have reviewed, a similar post here on Stack Overflow: http://stackoverflow.com/questions/677902/not-well-formed-error-in-firefox-when-loading-json-file-with-xmlhttprequest

    I believe my JSON is well-formed:

        { "_": ["appl", "goog", "yhoo", "vz", "t"] }

    My ajax call is straightforward:

        $.ajax({
            url: 'data/tickers.json',
            dataType: 'json',
            async: true,
            data: null,
            success: function(data, textStatus, request) {
                callback(data);
            }
        });

    If I wrap the JSON with a document tag (<document>JSON data</document>), as was mentioned in the above referenced post, the ajax call fails with a parsererror. So: is there a way to avoid the Firefox warning when reading in client-side JSON files?


  • Conversion of bytes into a type without changing the application (ie storing the conversion method i

    - by geoaxis
    Is there a way to store a conversion strategy (for converting some bytes) in a database and then execute it at run time? If one were to store a complete Java file, you would need to compile it, store the class, and somehow inject it into the already running system; I am not sure how this would be possible. But using some kind of dynamic language on the JVM would be nice. I see an example of executing Groovy from within a Spring context here: http://www.devx.com/tips/Tip/42789 but this is still static in nature, as the application context contains the reference to the implementation and cannot be changed from the database. Perhaps with JavaConfig-style context configuration it is possible. I am exploring options now, specifically with Spring 3.0. Your suggestions in any direction would be welcome.


  • CLSF & CLK 2013 Trip Report by Jeff Liu

    - by jamesmorris
    This is a contributed post from Jeff Liu, lead XFS developer for the Oracle mainline Linux kernel team.

    Recently, I attended both the China Linux Storage and Filesystem workshop (CLSF) and the China Linux Kernel conference (CLK), which were held in Shanghai. Here are the highlights from both events.

    CLSF - 17th October

    XFS update (led by Jeff Liu)

    XFS keeps making rapid progress, with a lot of changes focused on infrastructure/performance improvements as well as new feature development. This is reflected in sample statistics comparing XFS, Ext4+JBD2 and Btrfs:

        # git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/xfs|fs/ext4+fs/jbd2|fs/btrfs

        XFS:       141 files changed, 27598 insertions(+), 19113 deletions(-)
        Ext4+JBD2:  39 files changed, 10487 insertions(+),  5454 deletions(-)
        Btrfs:      70 files changed, 19875 insertions(+),  8130 deletions(-)

    What made up those changes in XFS?

    - Self-describing metadata (CRC32c). This is a new feature and it contributed about 70% of the code changes; it can be enabled via `mkfs.xfs -m crc=1 /dev/xxx` for the v5 superblock.
    - Transaction log space reservation improvements. With this change, we can calculate the log space reservation at mount time rather than at runtime, reducing CPU overhead.
    - User namespace support, so both XFS and USERNS can be enabled in the kernel configuration beginning with Linux 3.10. Thanks to Dwight Engen for his efforts on this.
    - Split project/group quota inodes. Originally, project quota could not be enabled together with group quota because they shared the same quota file inode; now it works, but only for the v5 superblock (i.e. CRC enabled).
    - CONFIG_XFS_WARN, a new lightweight runtime debugger which can be deployed in production environments.
    - Readahead log object recovery; this change can speed up log replay significantly.
    - Speculative preallocation inode tracking, clearing and throttling. The main purpose is to deal with inodes with post-EOF space due to speculative preallocation, and to support improved quota management that frees up a significant amount of unwritten space when at or near EDQUOT. It supports background scanning on a longish interval (5 minutes by default, tunable) and on-demand scanning/trimming via ioctl(2).

    Bitter arguments ensued from this session, especially around the comparison between Ext4 and Btrfs in different areas; I spent the whole morning of the first day answering those questions. We basically agreed that XFS is the best choice in Linux nowadays because:

    - Stable: XFS has had a good stability record over the past 10 years. Fengguang Wu, who leads the 0-day kernel test project, also said that he has observed fewer errors in XFS than in other filesystems over the past year or more. I owe this to the XFS upstream code reviewers, who always perform serious code review as well as testing.
    - Good performance for large and small files. "XFS does not work well for small files" has long since become an old story.
    - Best choice (maybe) for distributed PB-scale filesystems. E.g. Ceph recommends deploying the OSD daemon on XFS because Ext4 has a limited xattr size.
    - Best choice for large storage (>16TB). Ext4 does not support a single file larger than around 15.95TB.
    - Scalability: any objection to XFS being the best on this point? :)
    - XFS deals with transaction concurrency better than Ext4. Why? The maximum size of the log in XFS is 2038MB, compared to 128MB in Ext4.

    Misc: Ext4 is widely used and has been proved fast and stable in various loads and scenarios; XFS just needs more customers, and Btrfs is still on the road to manhood.
    Ceph Introduction (led by Li Wang)

    This was a hot topic. Li gave us a nice introduction to the design as well as their current work. Actually, the Ceph client has been included in the Linux kernel since 2.6.34 and supported by OpenStack since Folsom, but it seems that it has not yet been widely deployed in production environments.

    Their major work focuses on inline data support: separating metadata and data storage to reduce file access time. A file access needs two communications (fetch the metadata from the MDS, then get the data from the OSD), and small-file access is limited by the network latency. The solution is that for small files they would like to store the data with the metadata, so that when accessing a small file the metadata server can push both metadata and data to the client at the same time. In this way, they can reduce the overhead of calculating the data offset and save the communication with the OSD.

    For this feature, they have only run some small-scale testing, but they really saw noticeable improvements. Test environment: Intel 2 CPU 12 Core, 64GB RAM, Ubuntu 12.04, Ceph 0.56.6 with 200GB SATA disk, 15 OSD, 1 MDS, 1 MON. The sequential read performance for 1K-size files improved about 50%.

    I asked Li and Zheng Yan (the core developer of Ceph, who also worked on Btrfs) whether Ceph is really stable and can be deployed in production environments for large-scale PB-level storage, but they could not give a positive answer; it looks like Ceph is not even spread across Dreamhost (subject to confirmation). According to Li, they have only deployed Ceph for small-scale storage (32 nodes), although they would like to try 6000 nodes in the future.

    Improve Linux swap for Flash storage (led by Shaohua Li)

    Because of its high density, low power, and low price, flash storage (SSD) is a good candidate to partially replace DRAM. A quick answer for this is using SSD as swap. But Linux swap is designed for slow hard disk storage, so there are a lot of challenges to efficiently using SSD for swap.

    SWAPOUT:

    - swap_map scan: swap_map is the in-memory data structure used to track swap disk usage, but it is a slow linear scan. It becomes a bottleneck when finding many adjacent pages on SSD. Shaohua Li changed it to a cluster (128K) list, resulting in an O(1) algorithm. However, this approach needs restrictive cluster alignment and is only enabled for SSD.
    - IO pattern: In most cases, swap IO comes in an interleaved pattern because of multiple reclaimers, or because a free cluster is shared by all reclaimers. Even though the block layer can merge interleaved IO to some extent, we cannot count on it completely. Hence a per-cpu cluster was added on top of the previous change; it helps each reclaimer do sequential IO, which the block layer can merge more easily.
    - TLB flush: If we're reclaiming one active page, we should first move the page from the active LRU list to the inactive LRU list, and then reclaim the page from the inactive LRU to swap it out. During the process we need to clear the PTE twice: first the 'A' (ACCESS) bit, second the 'P' (PRESENT) bit. Processors need to send lots of IPIs, which makes the TLB flush really expensive. Some work has been done to improve this, including reworking smp_call_function_many() or removing the first TLB flush on x86, but there are still some arguments here and only part of the work has been pushed to mainline.

    SWAPIN: A page fault does iodepth=1 sync IO, but it's a little wasteful to issue only a page-sized IO. The obvious solution is swap readahead.
    But the current in-kernel swap readahead is arbitrary (always 8 pages), and it doesn't always perform well for both random and sequential access workloads. Shaohua introduced a new flag for madvise(MADV_WILLNEED) to do swap prefetch, so the changes happen in the userspace API and leave the in-kernel readahead unchanged (though I think some improvement could also be done there).

    SWAP discard: As we know, discard is important for SSD write throughput, but the current swap discard implementation is synchronous. He changed it to async discard, which allows discard and write to run at the same time. Meanwhile, the unit of discard was also optimized to the cluster.

    Misc: lock contention. With many concurrent swapouts and swapins, lock contention on things like anon_vma or swap_lock is high, so he changed swap_lock to a per-swap lock. But there is still some lock contention on very fast SSDs because of the swapcache address_space lock.

    Zproject (led by Bob Liu)

    Bob gave us a very nice introduction to the current memory compression status. There are now three projects (zswap/zram/zcache) which all aim to smooth out swap IO storms and improve performance, but they each have their own pros and cons.

    ZSWAP: It is implemented on the frontswap API and uses a dynamic allocator named zbud to allocate free pages. Zbud means pairs of zpages are "buddied", and it can store at most two compressed pages in one page frame, so the maximum compression ratio is 50%. Each page frame is LRU-linked and can be shrunk under memory pressure. If the compressed memory pool reaches its limit, shrinking or reclaim happens: a page frame is decompressed into two newly allocated pages, which are then written to the real swap device, but this can fail when allocating the two pages.

    ZRAM: Acts as a compressed ramdisk used as a swap device, and uses zsmalloc as its allocator, which has high density but may have fragmentation issues. Besides, page reclaim is hard, since it needs more pages to uncompress in order to free just one page. ZRAM is preferred by embedded systems, which may not have any real swap device. Both ZRAM and ZSWAP are currently in the driver/staging tree, and in the mm community there have been some discussions about merging ZRAM into ZSWAP or vice versa, but no agreement yet.

    ZCACHE: Handles file page compression, but it was removed from staging recently.

    From industry (led by Tang Jie, LSI)

    An LSI engineer introduced several new products to us. The first was RAID5/6 cards that use full stripe writes to improve performance. The second he introduced was the SandForce flash controller, which can understand data file types (data entropy) to reduce write amplification (WA) for nearly all writes. It's called DuraWrite, and the typical WA is 0.5. What's more, if its Dynamic Logical Capacity function module is enabled, the controller can do data compression that is transparent to the upper layer. LSI testing shows that with this virtual capacity enabled, a 1x TB drive can support up to 2x TB capacity, but the application must monitor free flash space to maintain optimal performance and to guard against free flash space exhaustion. He said the most useful application is databases.

    Another thing worth mentioning is the NV-DRAM memory in NMR/Raptor, which is directly exposed to the host system. Applications can directly access the NV-DRAM via a memory address, using the standard system call mmap(). He said that it is very useful for database logging now.
    This kind of NVM product has begun to appear in recent years, and it is said that Samsung is building a research center in China for related products. IMHO, NVM will have an effect on the current OS layers, especially on file systems; e.g. journaling may need to be redesigned to fully utilize these nonvolatile memories.

    OCFS2 (led by Canquan Shen)

    Without a doubt, HuaWei is the biggest contributor to OCFS2 in the past two years. They have posted 46 upstream patches, and 39 patches have been merged. Their current project is based on 32/64-node clusters, but they have also tried 128 nodes at the experimental stage. The major work they are doing is to support ATS (atomic test and set); it can work with DLM at the same time. It looks like this idea was inspired by the VMware VMFS locking, i.e. http://blogs.vmware.com/vsphere/2012/05/vmfs-locking-uncovered.html

    CLK - 18th October 2013

    Improving Linux Development with Better Tools (Andi Kleen)

    This talk focused on how to find and solve bugs as Linux's complexity grows. Generally, we can do this with the following kinds of tools:

    - Static code checkers, e.g. sparse, smatch, coccinelle, clang checker, checkpatch, gcc -W/LTO, stanse. These can check a lot of things, from simple mistakes to complex problems, but the challenges are: some are very slow; false positives; and it may need a concentrated effort to get the false positives down. In particular, no static checker he has found can follow indirect calls ("OO in C", common in the kernel):

          struct foo_ops {
              int (*do_foo)(struct foo *obj);
          };
          foo->do_foo(foo);

    - Dynamic runtime checkers, e.g. thread checkers, kmemcheck, lockdep. Ideally all kernel code would come with a test suite; then someone could run all the dynamic checkers.
    - Fuzzers/test suites. E.g. Trinity is a great tool; it finds many bugs, but needs a manual model for each syscall. Modern fuzzers use automatic feedback, but not for the kernel yet: http://taviso.decsystem.org/making_software_dumber.pdf
    - Debuggers/tracers to understand code, e.g. ftrace. They can dump on events/oops/custom triggers, but in many cases there is still too much overhead to run them at all times during debugging.
    - Tools to read and understand source, e.g. grep/cscope. They work great for many cases, but do not understand indirect pointers (the OO-in-C model used in the kernel). Give us all "do_foo" instances:

          struct foo_ops {
              int (*do_foo)(struct foo *obj);
          } = { .do_foo = my_foo };
          foo->do_foo(foo);

      It would be great to have a cscope-like tool that understands this based on types/initializers.

    XFS: The High Performance Enterprise File System (Jeff Liu) [slides]

    I gave a talk introducing the disk layout and unique features, as well as the recent changes. The slides include some charts reflecting the performance of XFS/Btrfs/Ext4 for small files. About a dozen attendees raised their hands when I asked who had experience with XFS. I remember that when I asked the same question at LinuxCon/Japan, only 3 people raised their hands, but they were Chris Mason, Ric Wheeler, and another attendee. The attendee questions were mainly focused on stability and comparison with other file systems.

    Linux Containers (Feng Gao)

    The speaker introduced the purpose of the various kinds of namespaces - mount/UTS/IPC/Network/Pid/User - as well as the system API/ABI. For the userspace tools, he mainly focused on libvirt LXC rather than LXC itself.
    Libvirt LXC is another userspace container management tool, implemented as one type of libvirt driver; it can manage containers, create namespaces, create a private filesystem layout for a container, create devices for a container, and set up resource controllers via cgroups.

    In this talk, Feng also mentioned two other possible new namespaces for the future. The first is audit, though it is not yet clear whether it should be assigned to the user namespace or not. The other is syslog, but the question is: do we really need it?

    In-memory Compression (Bob Liu)

    Same as at CLSF, a nice introduction that I have already mentioned above.

    Misc

    There were some other talks related to ACPI-based memory hotplug, smart wake-affinity in the scheduler, etc., but my head is not big enough to have recorded all those things.

    -- Jeff Liu


  • Fluent NHibernate and PostgreSQL, SchemaMetadataUpdater.QuoteTableAndColumns - System.NotSupportedEx

    - by Vyacheslav
    Hello! I'm using Fluent NHibernate (the latest version) with PostgreSQL. The PostgreSQL version is 8.4. My code for creating the ISessionFactory:

        public static ISessionFactory CreateSessionFactory()
        {
            string connectionString = ConfigurationManager.ConnectionStrings["PostgreConnectionString"].ConnectionString;

            IPersistenceConfigurer config = PostgreSQLConfiguration.PostgreSQL82.ConnectionString(connectionString);

            FluentConfiguration configuration = Fluently
                .Configure()
                .Database(config)
                .Mappings(m => m.FluentMappings.Add(typeof(ResourceMap))
                                               .Add(typeof(TaskMap))
                                               .Add(typeof(PluginMap)));

            var nhibConfig = configuration.BuildConfiguration();
            SchemaMetadataUpdater.QuoteTableAndColumns(nhibConfig);

            return configuration.BuildSessionFactory();
        }

    When I execute the code, the line SchemaMetadataUpdater.QuoteTableAndColumns(nhibConfig); throws an error: System.NotSupportedException: Specified method is not supported. Help me, please! I really need a solution. Best regards


  • Transaction timeout expired while using Linq2Sql DataContext.SubmitChanges()

    - by user68923
    Hi guys, please help me resolve this problem: There is an ambient MSMQ transaction. I'm trying to use a new transaction for logging, but I get the following error while attempting to submit changes: "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding." Here is the code:

        public static void SaveTransaction(InfoToLog info)
        {
            using (TransactionScope scope =
                new TransactionScope(TransactionScopeOption.RequiresNew))
            {
                using (TransactionLogDataContext transactionDC =
                    new TransactionLogDataContext())
                {
                    transactionDC.MyInfo.InsertOnSubmit(info);
                    transactionDC.SubmitChanges();
                }
                scope.Complete();
            }
        }

    Please help me. Thx.
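    Not a confirmed fix, but two things commonly suggested for this symptom are worth sketching: give the independent transaction an explicit (longer) timeout, or, if the log write must be fully independent of the ambient MSMQ transaction anyway, suppress the ambient transaction instead of creating a new one. The one-minute value below is arbitrary:

        // Option 1: an explicit timeout on the independent transaction.
        TransactionOptions options = new TransactionOptions { Timeout = TimeSpan.FromMinutes(1) };
        using (TransactionScope scope = new TransactionScope(TransactionScopeOption.RequiresNew, options))
        {
            // ... InsertOnSubmit/SubmitChanges as above ...
            scope.Complete();
        }

        // Option 2: run the logging write outside any transaction.
        using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Suppress))
        {
            // ... InsertOnSubmit/SubmitChanges as above ...
            scope.Complete();
        }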

    Read the article

  • Convert XAML to FlowDocument to display in RichTextBox in WPF

    - by Erika
    I have some HTML which I am converting to XAML using the library provided by Microsoft: string t = HtmlToXamlConverter.ConvertHtmlToXaml(mail.HtmlDataString, true); Now, from http://stackoverflow.com/questions/1449121/how-to-insert-xaml-into-richtextbox, I am using the following:

        private static FlowDocument SetRTF(string xamlString)
        {
            StringReader stringReader = new StringReader(xamlString);
            System.Xml.XmlReader xmlReader = System.Xml.XmlReader.Create(stringReader);
            Section sec = XamlReader.Load(xmlReader) as Section;
            FlowDocument doc = new FlowDocument();
            while (sec.Blocks.Count > 0)
                doc.Blocks.Add(sec.Blocks.FirstBlock);
            return doc;
        }

    This, however, keeps crashing =/ Does anyone have any clue on how to display XAML text in a RichTextBox, please?
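    One plausible cause (an inference from the code shown, not a confirmed diagnosis): with the second argument true, ConvertHtmlToXaml produces XAML whose root is a FlowDocument rather than a Section, so the as Section cast yields null and sec.Blocks throws a NullReferenceException. A sketch that tolerates either root type:

        using System.IO;
        using System.Windows.Documents;
        using System.Windows.Markup;

        public static class XamlFlowDocumentLoader
        {
            public static FlowDocument SetRTF(string xamlString)
            {
                using (var stringReader = new StringReader(xamlString))
                using (var xmlReader = System.Xml.XmlReader.Create(stringReader))
                {
                    object parsed = XamlReader.Load(xmlReader);

                    // Root was already a FlowDocument (asFlowDocument: true).
                    var doc = parsed as FlowDocument;
                    if (doc != null)
                        return doc;

                    // Otherwise assume a Section root and move its blocks over.
                    var sec = parsed as Section;
                    doc = new FlowDocument();
                    if (sec != null)
                        while (sec.Blocks.Count > 0)
                            doc.Blocks.Add(sec.Blocks.FirstBlock);
                    return doc;
                }
            }
        }

    Usage would then be, e.g., myRichTextBox.Document = XamlFlowDocumentLoader.SetRTF(t);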

    Read the article

  • To ORM or Not to ORM. That is the question…

    - by Patrick Liekhus
    UPDATE: Thanks for the feedback and comments. I have adjusted my table below with your recommendations; I had missed a point or two. I wanted to do a series on creating an entire project using the EDMX XAF code generation and the SpecFlow BDD Easy Test tools discussed in my earlier posts, but I thought it would be appropriate to start with a simple comparison and the reasoning behind my choice of tools. Let's start by defining the term ORM, or Object-Relational Mapping. According to Wikipedia it is defined as follows: Object-relational mapping (ORM, O/RM, and O/R mapping) in computer software is a programming technique for converting data between incompatible type systems in object-oriented programming languages. This creates, in effect, a "virtual object database" that can be used from within the programming language. Why should you care? Basically, it allows you to map your business objects in code to the persistence layer behind them. And better yet, why would you want to do this? Let me outline it in the following points: Development speed. No more mapping repetitive query results to object members by hand; once the map is created, the code is rendered for you. Persistence portability. The ORM knows how to map SQL syntax specific to the persistence engine you choose; it does not matter whether it is SQL Server, Oracle, or another database of your choosing. Standard/boilerplate code is simplified. The basic CRUD operations are consistent and can use database metadata for basic operations. So how does this help? Well, let's compare some of the ORM tools that I have used and/or researched. I have been interested in ORMs for some time now. My ORM of choice for a long time was NHibernate, and I still believe it has a strong case in some business situations. However, you have to take business considerations into account, along with the law of diminishing returns. Because of these two factors, my recent activity and experience have been around DevExpress eXpress Persistent Objects (XPO). The primary reason is that DevExpress offers the eXpress Application Framework (XAF), which sits on top of XPO. With this added value, the data model can be created (either database first or code first) and the Web and Windows clients can be generated from these maps. While out of the box they provide some simple list and detail screens, you can very easily extend and modify these to your liking. DevExpress has done a tremendous job of providing enough framework while also staying out of the way when you need to extend it. This sounds worse than it really is. What I mean is that if you choose to follow DevExpress coding style and recommendations, the hooks and extension points provided allow you to do some pretty heavy lifting while not worrying about the basics. I have put together a list of the top features that I have used to compare the (admittedly limited) list of ORMs I have exposure to. Again, the biggest selling point in my opinion is that XPO is just as solid as any of the other ORMs, but with the added layer of XAF it becomes unstoppable. Couple that with the EDMX modeling tools and code generation, and it becomes a no-brainer.
    Designer Features                      | Entity Framework | NHibernate | Fluent w/ NHibernate | Telerik OpenAccess | DevExpress XPO | DevExpress XPO/XAF plus Liekhus Tools
    Uses XML to map relationships          | -                | Yes        | -                    | -                  | -              | -
    Visual class designer interface        | Yes              | -          | -                    | -                  | -              | Yes
    Management integrated w/ Visual Studio | Yes              | -          | -                    | Yes                | -              | Yes
    Supports schema first approach         | Yes              | -          | -                    | Yes                | -              | Yes
    Supports model first approach          | Yes              | -          | -                    | Yes                | Yes            | Yes
    Supports code first approach           | Yes              | Yes        | Yes                  | Yes                | Yes            | Yes
    Attribute driven coding style          | Yes              | -          | Yes                  | -                  | Yes            | Yes

    I have a very small team and limited resources with a lot of responsibilities. In order to keep up with our customers, we must rely on tools like these. We use the EDMX tool so that we can create a visual representation of the applications with our customers. Second, we rely on the code generation so that we can focus on the business problems at hand and not on whether a field is mapped correctly. This keeps us from requiring as many junior-level developers on our team. I have also worked on multiple teams that believed in writing their own "framework". In my experience and opinion, this is not the route to take unless you have a team dedicated to supporting just the framework. Each time I have worked on a custom framework, the framework eventually becomes old, outdated, and full of "performance" enhancements specific to one or two requirements. With an ORM, there are a lot of people smarter than me working on the bigger issues of persistence and performance. Again, my recommendation is to use an available framework and get to work on your business domain problems. If your coding is not making money for you, why are you working on it? Do you really need to be writing query-to-object-member code again and again? Thanks
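    To make the "development speed" point concrete, here is a sketch of the kind of hand-rolled reader-to-object mapping an ORM renders unnecessary; the Customer class and Customers table are hypothetical:

        using System.Collections.Generic;
        using System.Data.SqlClient;

        public class Customer            // hypothetical entity
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public static class CustomerBoilerplate
        {
            // The repetitive query-to-object-member code an ORM generates for you.
            public static List<Customer> Load(string connectionString)
            {
                var customers = new List<Customer>();
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand("SELECT Id, Name FROM Customers", conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            customers.Add(new Customer
                            {
                                Id = reader.GetInt32(0),
                                Name = reader.GetString(1)
                            });
                        }
                    }
                }
                return customers;
            }
        }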

    Read the article

  • Guest Post: Christian Finn: Is Facebook About to Become a Victim of its Own Success?

    - by Michael Snow
    Since we have a number of new members on the WebCenter Evangelist team, I thought it would be appropriate to close the week with the newest hire and leader of the global WebCenter Evangelists, Christian Finn, who has just joined the Red team after many years with the small technology company up in Redmond, WA. He gave an intro to himself in an earlier post this morning, but his post below is a great example of how customer engagement takes on a life of its own in our globally connected, social digital ecosystem. Is Facebook About to Become a Victim of its Own Success? What if I told you that your brand could advertise so successfully, you wouldn't have to pay for the ads? A recent campaign by Ford Motor Company for the Ford Focus featuring Doug the spokespuppet (I am not making this up) did just that, and it raises some interesting issues for marketers and social media alike in the brave new world of customer engagement that is the Social Web. Allow me to elaborate. An article in the Wall Street Journal last week, "Big Brands Like Facebook, But They Don't Like to Pay," tells the story of Ford's recently concluded online campaign for the 2012 Ford Focus. (Ford, by the way, under the leadership of people such as Scott Monty, has been a pioneer of effective social campaigns.) The centerpiece of the campaign was the aforementioned Doug, who appeared as a character on Facebook in videos and via chat. (If you are not familiar with Doug, you can see him in action here, and read the WSJ story here.) You may be thinking puppet ads are a sign of Internet Bubble 2.0 and want to stop now, but bear with me. The Journal reported that Ford spent about $95M on its overall Ford Focus campaign, with TV accounting for over $60M of that spend. The Internet buy for the campaign was just over $10M, which included ad buys to drive traffic to Facebook for people to meet and 'Like' Doug, plus some amount on Facebook ads to promote Doug and, by extension, the Ford Focus. So far, a fairly straightforward consumer marketing story in the Internet Era. Yet here's the curious thing: once Doug reached 10,000 fans on Facebook, Ford stopped paying for Facebook ads. Doug had gone viral, with people sharing his videos with one another; once critical mass was reached, there was no need to buy more ads on Facebook. Doug went on to be Liked by over 43,000 people, and 61% of his fans said they would be more likely to consider buying a Focus. According to the article, Ford says Focus sales are up this year, and increasing sales is every marketer's goal. And so, in effect, Ford found its Facebook campaign so successful that it could stop paying for it, instead letting its target consumers communicate its messages for fun, and for free. Not only did they get a 3X increase in fans beyond their paid campaign, they had thousands of customers sharing their messages in video form for months.
    Since free advertising is the Holy Grail of marketing, both old and new, and it appears social networks have an advantage in generating that buzz, it seems reasonable to ask: what would happen to brands' advertising strategies, and the media they use to engage customers, if this success were repeated at scale? It seems logical to conclude that, at least initially, more ad dollars would be spent with social networks like Facebook as brands attempt to replicate Ford's success. Certainly Facebook ad revenues are on the rise: eMarketer expects Facebook's ad revenues to quintuple by 2012 compared with 2009 levels, to nearly $2.9B. That's bad news for TV and the already battered print media, and good news for Facebook. But perhaps not so over the longer run. With TV buys, you have to keep paying to generate impressions. If Doug the spokespuppet is any guide, however, that may not be true for social media campaigns. After an initial outlay, if a social campaign takes off, the audience will generate more impressions on its own. Thus a social medium like Facebook could become the victim of its own success when it comes to ad revenue. There may be an inherent limiting factor in the ad spend it can capture, as exemplified by Ford's experience with Doug and the Focus. And brands may spend much less overall on advertising, with as good or better results, than they ever have in the past. How will these trends evolve? Can brands create social campaigns that repeat Ford's formula for the Focus with effective results? Can social networks find ways to capture more spend and overcome their potential tendency to make further spend unnecessary? And will consumers become tired of and insulated from social campaigns, much as they have with traditional advertising channels? These are the questions CMOs and Facebook execs alike will be asking themselves in the brave new world of customer engagement. As always, your thoughts and comments are most welcome.

    Read the article

  • Dependency Injection Constructor Madness

    - by JP
    I find that my constructors are starting to look like this: public MyClass(Container con, SomeClass1 obj1, SomeClass2 obj2, ...) with an ever-increasing parameter list. Since "Container" is my dependency injection container, why can't I just do this: public MyClass(Container con) for every class? What are the downsides? If I do this, it feels like I'm using a glorified static. Please share your thoughts on IoC and Dependency Injection madness. Thanks in advance. -JP
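    One commonly suggested remedy (a sketch under assumed names, not the only answer): keep constructor injection, but group related dependencies into an aggregate service so the constructor stays explicit about what the class needs without smuggling in the whole container. Injecting the container itself hides the real dependencies (the Service Locator anti-pattern) and makes the class harder to test:

        // IValidator, IRepository, and IOrderServices are hypothetical;
        // the point is the shape, not the names.
        public interface IValidator { }
        public interface IRepository { }

        // Aggregate of dependencies that tend to travel together.
        public interface IOrderServices
        {
            IValidator Validator { get; }
            IRepository Repository { get; }
        }

        public class MyClass
        {
            private readonly IOrderServices _services;

            public MyClass(IOrderServices services)
            {
                _services = services;   // one honest dependency, not the container
            }
        }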

    Read the article

  • Looping to provide multiple lines in linechart (django-googlecharts)

    - by mighty_bombero
    Hi, I'm trying to generate some charts using django-googlecharts. This works fine for fairly static data, but in one case I would like to render a varying number of lines, based on a variable. I tried this: {% chart %} {% for line in line_data %} {% chart-data line %} {% endfor %} {% chart-size "390x200" %} {% chart-type "line" %} {% chart-labels days %} {% endchart %} line_data is a list containing lists. The template code fails with "Caught an exception while rendering: max() arg is an empty sequence". I guess the problem is that I'm trying to loop over template tags. What approach could be used here? Or am I completely missing something? Is this doable using inclusion tags? Thanks for your help.

    Read the article
