Search Results

Search found 2286 results on 92 pages for 'benefits'.

Page 78/92 | < Previous Page | 74 75 76 77 78 79 80 81 82 83 84 85  | Next Page >

  • How to mimic built-in .NET serialization idioms?

    - by Matt Enright
    I have a library (written in C#) for which I need to read/write representations of my objects to disk (or to any Stream) in a particular binary format (to ensure compatibility with C/Java library implementations). The format requires a fair amount of bit-packing and some DEFLATE'd bytestreams. I would like my library to be as idiomatic .NET as possible, however, and so would like to provide an API as close as possible to the normal binary serialization process. I'm aware of the ability to implement the IFormatter interface, but given that I am unable to reuse any part of the built-in serialization stack, is it worth doing this, or will it just bring unnecessary overhead? In other words: implement IFormatter and co., or just provide "Serialize"/"Deserialize" methods that act on a Stream? A good point was brought up below about needing the serialization semantics for any case involving Remoting. In a case where using MarshalByRef objects is feasible, I'm pretty sure that this won't be an issue, so leaving that aside: are there any benefits or drawbacks to using ISerializable/IFormatter versus a custom stack (or am I misunderstanding remoting)?
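
    A minimal sketch of the plain Stream-based alternative, assuming illustrative type names (Widget is a stand-in for the library's real objects, and the bit-packing is elided):

        using System.IO;
        using System.IO.Compression;

        public sealed class Widget
        {
            public int Id;
            public string Name;
        }

        public sealed class WidgetSerializer
        {
            // Serialize: write the object's binary form (DEFLATE-compressed
            // here) to whatever destination Stream the caller supplies.
            public void Serialize(Stream output, Widget value)
            {
                using (var deflate = new DeflateStream(output, CompressionMode.Compress, true))
                using (var writer = new BinaryWriter(deflate))
                {
                    writer.Write(value.Id);
                    writer.Write(value.Name);
                }
            }

            // Deserialize: the mirror image, reading from any source Stream.
            public Widget Deserialize(Stream input)
            {
                using (var deflate = new DeflateStream(input, CompressionMode.Decompress, true))
                using (var reader = new BinaryReader(deflate))
                {
                    return new Widget { Id = reader.ReadInt32(), Name = reader.ReadString() };
                }
            }
        }

    Nothing in this shape precludes wrapping it in an IFormatter later if Remoting support turns out to be needed.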

    Read the article

  • From ASPX to WCF

    - by Barguast
    I'm hoping someone can advise me on how to solve my networking scenario. Both the client and server are to be C# / .NET based. I basically want to invoke some kind of web service from my client in order to retrieve both binary data (e.g. files) and serialised objects and lists of objects (e.g. database query results). At the moment, I'm using ASPX pages, using the query string to provide parameters, and I get back either the binary data or the binary data of the serialised messages. This affords me a lot of flexibility: I can choose how to transmit the data, perform simultaneous requests, cancel ongoing requests, etc. Since I can control the serialised format, I can also deserialise lists of objects as they are received, which is crucial. My problem isn't a problem as such, but this feels a little hack-ish and I can't help but wonder if there are better ways to go about it. I'm considering moving on to WCF, or perhaps another technology, to see if it helps. However, I need to know if it helps with my scenarios above, that is: Can a WCF method return a list of objects, and can the client receive the items of this list as they arrive, as opposed to getting the entire list on completion (i.e. streaming)? Does anyone know of any examples of this? Am I likely to get any performance benefits from this? I don't know how well ASPX pages are tuned for this, as it surely isn't their primary purpose. Are there any other approaches I should consider? Thanks for your time spent reading this. I hope you can help.
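
    For reference, WCF does support streamed transfer; a minimal sketch of a contract whose response is streamed (the names are invented, and the binding on both sides must opt in to streaming):

        using System.IO;
        using System.ServiceModel;

        [ServiceContract]
        public interface IResultService
        {
            // With a streamed transfer mode, the client can begin reading
            // this Stream before the server has finished writing it.
            [OperationContract]
            Stream GetQueryResults(string query);
        }

        // Binding setup (must match on server and client):
        //   var binding = new BasicHttpBinding();
        //   binding.TransferMode = TransferMode.StreamedResponse;
        //   binding.MaxReceivedMessageSize = long.MaxValue;

    Note that a streamed operation is restricted to a single Stream body member, so receiving the items of a list as they arrive still means framing them yourself inside that Stream, much as the ASPX approach already does.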

    Read the article

  • MVVM: Thin ViewModels and Rich Models

    - by Dan Bryant
    I'm continuing to struggle with the MVVM pattern and, in attempting to create a practical design for a small/medium project, have run into a number of challenges. One of these challenges is figuring out how to get the benefits of decoupling with this pattern without creating a lot of repetitive, hard-to-maintain code. My current strategy has been to create 'rich' Model classes. They are fully aware that they will be consumed by an MVVM pattern and implement INotifyPropertyChanged, allow their collections to be observed and remain cognizant that they may always be under observation. My ViewModel classes tend to be thin, only exposing properties when data actually needs to be transformed, with the bulk of their code being RelayCommand handlers. Views happily bind to either ViewModels or Models directly, depending on whether any data transformation is required. I use AOP (via PostSharp) to ease the pain of INotifyPropertyChanged, making it easy to make all of my Model classes 'rich' in this way. Are there significant disadvantages to using this approach? Can I assume that the ViewModel and View are so tightly coupled that if I need new data transformation for the View, I can simply add it to the ViewModel as needed?
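
    A minimal sketch of the split described (class names invented; the PostSharp aspect is omitted, so INotifyPropertyChanged is written out by hand here):

        using System.ComponentModel;

        // 'Rich' model: raises change notifications itself, so Views can
        // bind to it directly when no data transformation is needed.
        public class Customer : INotifyPropertyChanged
        {
            private string _name;
            public string Name
            {
                get { return _name; }
                set { _name = value; OnPropertyChanged("Name"); }
            }

            public event PropertyChangedEventHandler PropertyChanged;
            protected void OnPropertyChanged(string propertyName)
            {
                var handler = PropertyChanged;
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs(propertyName));
            }
        }

        // Thin ViewModel: exposes only what the View needs beyond the
        // model, i.e. transformed properties and command handlers.
        public class CustomerViewModel
        {
            private readonly Customer _model;
            public CustomerViewModel(Customer model) { _model = model; }

            public Customer Model { get { return _model; } }

            // Exists only because the View needs a transformed value.
            public string DisplayName
            {
                get { return _model.Name == null ? "(unnamed)" : _model.Name.Trim(); }
            }
        }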

    Read the article

  • How to use the Request URL/URL Rewriting For Localization in ASP.NET - Using an HTTP Module or Global.asax

    - by LocalizedUrlDMan
    I wanted to see if there is a way to use the request URL/URL rewriting to set the language a page is rendered in by examining a portion of the URL in ASP.NET. We have a site that already works with ASP.NET’s resource localization, and users can change the language that they see pages/resources on the site in; however, the current mechanism is not very search engine friendly, since the language variations for each language all appear as one page. It would be much better if we could have pages like www.site.com/en-mx/realfolder/realpage.aspx that allow linking to culture-specific versions of a page. I know lots of people have likely done localization through URL structures before, and I wanted to know if one of you could share how to do this in the Global.asax file or with an HTTP Module (pointers to blog postings would be great too). We have a restriction that the site is based on ASP.NET 2.0 (so we can't use the 3.5+ features yet). Here is the example scenario: A real page exists at www.site.com/realfolder/realpage.aspx. The page has a mechanism for the user to change the language it is displayed in via a dropdown. There are search engine optimization and link-sharing benefits to doing this, since people can link directly to a page that has content that is applicable to a certain language (this could also include right-to-left layouts for languages like Arabic or Hebrew). I would like to use an HTTP module to see if the first part of the URL after www.site.com, site.com, subdomain.site.com, etc. contains a valid culture code (e.g. en-us, es-mx), then use that value to set the localization culture of the page/resources based on that URL. So if the user accesses the URL www.site.com/en-MX/realfolder/realpage.aspx, the page will render in Mexico’s variant of Spanish. If the user goes to www.site.com/realfolder/realpage.aspx directly, the page would just use their browser’s language settings.
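
    A sketch of such a module, staying within the .NET 2.0 surface (the class name is invented, and production code would also re-apply the culture later in the pipeline, since ASP.NET may switch threads between events):

        using System;
        using System.Globalization;
        using System.Threading;
        using System.Web;

        public class CultureUrlModule : IHttpModule
        {
            public void Init(HttpApplication application)
            {
                application.BeginRequest += OnBeginRequest;
            }

            private static void OnBeginRequest(object sender, EventArgs e)
            {
                HttpContext context = ((HttpApplication)sender).Context;
                string path = context.Request.Path;   // e.g. /en-MX/realfolder/realpage.aspx
                string[] segments = path.Split(new char[] { '/' }, StringSplitOptions.RemoveEmptyEntries);
                if (segments.Length < 2)
                    return;

                CultureInfo culture;
                try
                {
                    culture = CultureInfo.GetCultureInfo(segments[0]);
                }
                catch (ArgumentException)
                {
                    return;   // first segment is not a culture code; leave the URL alone
                }

                Thread.CurrentThread.CurrentCulture = culture;
                Thread.CurrentThread.CurrentUICulture = culture;

                // Hand the request on to the real page, minus the culture segment.
                context.RewritePath(path.Substring(segments[0].Length + 1));
            }

            public void Dispose() { }
        }

    The module is registered under the <httpModules> section of web.config in ASP.NET 2.0.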

    Read the article

  • Mostly PJAX site with some AngularJS

    - by jrhicks
    I have a site using PJAX all over the place, and I have a few pages that are using AngularJS. For the AngularJS pages I would like to continue to use PJAX to get all the benefits associated with not reloading the entire HTML page, assets, etc. Unfortunately, PJAX just loads some HTML into the page and doesn't fire any javascript. This is okay, because I can fire the javascript manually on pjax success, but I can't quite figure out what makes AngularJS initialize. For a simple scenario, let's say I AJAX the following HTML into a page. Also assume the page already had Angular.js included. What could I call to have the following behave like an Angular app?

        <div>
          <label>Name:</label>
          <input type="text" ng-model="yourName" placeholder="Enter a name here">
          <hr>
          <h1>Hello {{yourName}}!</h1>
        </div>

    Thanks

    Read the article

  • Why use Django on Google App Engine?

    - by Travis Bradshaw
    When researching Google App Engine (GAE), it's clear that using Django is wildly popular for developing in Python on GAE. I've been scouring the web to find information on the costs and benefits of using Django, to find out why it's so popular. While I've been able to find a wide variety of sources on how to run Django on GAE and the various methods of doing so, I haven't found any comparative analysis on why Django is preferable to using the webapp framework provided by Google. To be clear, it's immediately apparent why using Django on GAE is useful for developers with an existing skillset in Django (a majority of Python web developers, no doubt) or existing code in Django (where using GAE is more of a porting exercise). My team, however, is evaluating GAE for use on an all-new project and our existing experience is with TurboGears, not Django. It's been quite difficult to determine why Django is beneficial to a development team when the BigTable libraries have replaced Django's ORM, sessions and authentication are necessarily changed, and Django's templating (if desirable) is available without using the entire Django stack. Finally, it's clear that using Django does have the advantage of providing an "exit strategy" if we later wanted to move away from GAE and need a platform to target for the exodus. I'd be extremely appreciative of help in pointing out why using Django is better than using webapp on GAE. I'm also completely inexperienced with Django, so elaboration on smaller features and/or conveniences that work on GAE is also valuable to me. Thanks in advance for your time!

    Read the article

  • Dependency Injection and decoupling of software layers

    - by cs31415
    I am trying to implement Dependency Injection to make my app tester-friendly. I have a rather basic doubt. The data layer uses a SqlConnection object to connect to a SQL Server database, so the SqlConnection object is a dependency of the data access layer. In accordance with the laws of dependency injection, we must not new() dependencies, but rather accept them through constructor arguments. Not wanting to upset the DI gods, I dutifully create a constructor in my DAL that takes in a SqlConnection. The business layer calls the DAL, so the business layer must pass in the SqlConnection. The presentation layer calls the business layer, hence it too must pass the SqlConnection to the business layer. This is great for class isolation and testability. But didn't we just couple the UI and business layers to a specific implementation of the data layer, which happens to use a relational database? Why do the presentation and business layers need to know that the underlying data store is SQL? What if the app needs to support data sources other than SQL Server (such as XML files, comma-delimited files, etc.)? Furthermore, what if I add another object on which my data layer depends (say, a second database)? Now I have to modify the upper layers to pass in this new object. How can I avoid this merry-go-round and reap all the benefits of DI without the pain?
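
    One standard answer, sketched below under invented names: the upper layers depend only on an abstraction, and a single composition root at the application entry point is the one place that knows the concrete store.

        using System.Data;
        using System.Data.SqlClient;

        // The business layer depends only on this abstraction; it never
        // mentions SqlConnection or any other concrete store.
        public interface ICustomerRepository
        {
            string GetCustomerName(int id);
        }

        public class SqlCustomerRepository : ICustomerRepository
        {
            private readonly string _connectionString;
            public SqlCustomerRepository(string connectionString)
            {
                _connectionString = connectionString;
            }

            public string GetCustomerName(int id)
            {
                using (var connection = new SqlConnection(_connectionString))
                using (var command = connection.CreateCommand())
                {
                    command.CommandType = CommandType.StoredProcedure;
                    command.CommandText = "dbo.GetCustomerName";
                    command.Parameters.AddWithValue("@id", id);
                    connection.Open();
                    return (string)command.ExecuteScalar();
                }
            }
        }

        public class CustomerService
        {
            private readonly ICustomerRepository _repository;
            public CustomerService(ICustomerRepository repository)
            {
                _repository = repository;   // no knowledge of SQL here
            }
            public string Describe(int id)
            {
                return "Customer: " + _repository.GetCustomerName(id);
            }
        }

        // Composition root (e.g. Main or Global.asax): the only place
        // that knows the concrete data store.
        //   var service = new CustomerService(new SqlCustomerRepository(connStr));

    Swapping to XML or CSV storage, or adding a second database, then changes only the repository implementations and the composition root; the business and presentation layers never see it.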

    Read the article

  • NHibernate: how to handle entity-based validation using session-per-request pattern, without controller knowledge of the session

    - by Seth Petry-Johnson
    What is the best way to do entity-based validation (each entity class has an IsValid() method that validates its internal members) in ASP.NET MVC, with a "session-per-request" model, where the controller has zero (or limited) knowledge of the ISession? Here's the pattern I'm using: Get an entity by ID, using an IFooRepository that wraps the current NH session. This returns a connected entity instance. Load the entity with potentially invalid data, coming from the form post. Validate the entity by calling its IsValid() method. If valid, call IFooRepository.Save(entity). Otherwise, display an error message. The session is currently opened when the request begins and flushed when the request ends. Since my entity is connected to a session, flushing the session attempts to save the changes even if the object is invalid. What's the best way to keep validation logic in the entity class, limit controller knowledge of NH, and avoid saving invalid changes at the end of a request?

    Option 1: Explicitly evict on validation failure, implicitly flush. If the validation fails, I could manually evict the invalid object in the action method. If successful, I do nothing and the session is automatically flushed. Con: error-prone and counter-intuitive ("I didn't call .Save(), why are my invalid changes being saved anyway?").

    Option 2: Explicitly flush, do nothing by default. By default I can dispose of the session on request end, only flushing if the controller indicates success. I'd probably create a SaveChanges() method in my base controller that sets a flag indicating success, and then query this flag when closing the session at request end. Pro: more intuitive to troubleshoot if a dev forgets this step (relative to option 1). Con: I have to call both IRepository.Save(entity) and SaveChanges().

    Option 3: Always work with disconnected objects. I could modify my repositories to return disconnected/transient objects, and modify the Repo.Save() method to re-attach them. Pro: most intuitive, given that controllers don't know about NH. Con: does this defeat many of the benefits I'd get from NH?
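
    A minimal sketch of option 2 (SessionManager is a hypothetical accessor for whatever session-per-request plumbing already exists; it assumes the session is opened with a manual flush mode so nothing is written implicitly):

        using System.Web.Mvc;
        using NHibernate;

        public abstract class NHibernateController : Controller
        {
            private bool _commitRequested;

            // Actions call this only after IsValid() and Save() succeed.
            protected void SaveChanges()
            {
                _commitRequested = true;
            }

            protected override void OnResultExecuted(ResultExecutedContext filterContext)
            {
                ISession session = SessionManager.CurrentSession;  // hypothetical
                try
                {
                    if (_commitRequested)
                        session.Flush();   // persist only when the action opted in
                }
                finally
                {
                    session.Close();       // otherwise changes are simply discarded
                }
                base.OnResultExecuted(filterContext);
            }
        }

    An action then reads: if (entity.IsValid()) { _repository.Save(entity); SaveChanges(); }.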

    Read the article

  • Creating an SCM browser for an RCP application

    - by mdamman
    I have an RCP app that saves its project as an XML file; currently the user just selects a directory to save that file and then uses the open file dialog to open the project. We are thinking about enhancing it to allow users to check in/out from a source code manager. This will make it easier for users to share their projects with each other, with all the benefits of an SCM. I need something similar to Subclipse, but I was thinking of using the Maven SCM plugin so that it is more flexible about which SCM is used. It would probably be better to keep it simple, because most users won't have a clue what an SCM is. Ideal would be just having a Checkout menu option which opens a dialog similar to the Open File dialog. I was wondering if anyone had an example of how to use the Maven SCM API: what calls to make to set the SCM location and to get the file? Or is there a better way of going about this? Thanks!

    Read the article

  • How to design Models the correct way: Object-oriented or "Package"-oriented?

    - by ajsie
    I know that in OOP you want every object (from a class) to be a "thing", e.g. user, validator, etc. I know the basics of MVC and how the different parts interact with each other. However, I wonder if the models in MVC should be designed according to traditional OOP design, that is to say, should every model be a database/table/row (solution 2)? Or is the intention more to collect methods that affect the same table or a bunch of related tables (solution 1)? Example for an Address book module in CodeIgniter, where I want to be able to "CRUD" a Contact and add/remove it to/from a CRUD-able Contact Group.

    Models solution 1: bunching all related methods together (not a real object, rather a "package"):

        class Contacts extends Model {
            function create_contact() {}
            function read_contact() {}
            function update_contact() {}
            function delete_contact() {}
            function add_contact_to_group() {}
            function delete_contact_from_group() {}
            function create_group() {}
            function read_group() {}
            function update_group() {}
            function delete_group() {}
        }

    Models solution 2: the OOP way (one class per file):

        class Contact extends Model {
            private $name = '';
            private $id = '';
            function create_contact() {}
            function read_contact() {}
            function update_contact() {}
            function delete_contact() {}
        }

        class ContactGroup extends Model {
            private $name = '';
            private $id = '';
            function add_contact_to_group() {}
            function delete_contact_from_group() {}
            function create_group() {}
            function read_group() {}
            function update_group() {}
            function delete_group() {}
        }

    I don't know how to think when I want to create the models, and the above examples are my real tasks for creating an Address book. Should I just bunch all functions together in one class? Then the class contains different logic (contact and group), so it cannot hold properties that are specific to either one of them. Solution 2 works according to OOP, but I don't know why I should make such a division. What would the benefits be of having a Contact object, for example? It's surely not a User object, so why should a Contact "live" with its own state (properties and methods)? You experienced guys with OOP/MVC, please shed some light on how one should think in this very concrete task.

    Read the article

  • Unit testing opaque structure based C API

    - by Nicolas Goy
    I have a library I wrote with an API based on opaque structures. Using opaque structures has a lot of benefits and I am very happy with it. Now that my API is stable in terms of specification, I'd like to write a complete battery of unit tests to ensure a solid base before releasing it. My concern is simple: how do you unit test an API based on opaque structures, where the main goal is to hide the internal logic? For example, let's take a very simple object, an array, with a very simple test:

        WSArray a = WSArrayCreate();
        int foo = 5;
        WSArrayAppendValue(a, &foo);
        int *bar = WSArrayGetValueAtIndex(a, 0);
        if (&foo != bar)
            printf("Erroneous value returned\n");
        else
            printf("Good value returned\n");
        WSRelease(a);

    Of course, this tests some facts, like that the array actually acts as wanted with one value, but when I write unit tests, at least in C, I usually compare the memory footprint of my data structures with a known state. In my example, I don't know if some internal state of the array is broken. How would you handle that? I'd really like to avoid adding code to the implementation files only for unit testing; I really emphasize loose coupling of modules, and injecting unit tests into the implementation would seem rather invasive to me. My first thought was to include the implementation file in my unit test, linking my unit test statically to my library. For example:

        #include <WS/WS.h>
        #include <WS/Collection/Array.c>

        static void TestArray(void)
        {
            WSArray a = WSArrayCreate();
            /* Structure members are available because we included Array.c */
            printf("%d\n", a->count);
        }

    Is that a good idea? Of course, the unit tests won't benefit from encapsulation, but they are there to ensure it's actually working.

    Read the article

  • Format for storing contacts in a database

    - by Gart
    I'm thinking of the best way to store personal contacts in a database for a business application. The traditional and straightforward approach would be to create a table with columns for each element, i.e. Name, Telephone Number, Job Title, Address, etc. However, there are known industry standards for this kind of data, like for example vCard, or hCard, or vCard-RDF/XML, or even the Windows Contacts XML Schema. Utilizing a standard format would offer some benefits, like interoperability with other systems. But how can I decide which method to use? The requirements are mainly to store the data. Search and ordering queries are highly unlikely but possible. The volume of the data is 100,000 records at maximum. My database engine supports native XML columns. I have been thinking of using some XML-based format to store the personal contacts. Then it would be possible to utilize XML indexes on this data, if searching and ordering is needed. Is this a good approach? Which contacts format and schema would you recommend for this?

    Read the article

  • PHP: Aggregate Model Classes or Uber Model Classes?

    - by sunwukung
    In many of the discussions regarding the M in MVC (sidestepping ORM controversies for a moment), I commonly see Model classes described as object representations of table data (be that an Active Record, Table Gateway, Row Gateway or Domain Model/Mapper). Martin Fowler warns against the development of an anemic domain model, i.e. a class that is nothing more than a wrapper for CRUD functionality. I've been working on an MVC application for a couple of months now. The DBAL in the application I'm working on started out simple (on account of my understanding - oh, the benefits of hindsight), and is organised so that Controllers invoke Business Logic classes, which in turn access the database via DAO/Transaction Script classes pertinent to the task at hand. There are a few "Entity" classes that aggregate these DAO objects to provide a convenient CRUD wrapper, but also embody some of the "behaviour" of that domain concept (for example, a user - since it's easy to isolate). Taking a look at some of the code, and thinking about refactoring some of it into a Rich Domain Model, it occurred to me that were I to try and wrap the CRUD routines and behaviour of, say, a Company into a single "Model" class, that would be a sizeable class. So, my question is this: do Models represent domain objects, business logic, service layers, or all of the above combined? How do you go about defining the responsibilities of these components?

    Read the article

  • Browser-based application or stand-alone GUI app?

    - by crystalattice
    I'm sure this has been asked before, but I can't find it. What are the benefits/limitations of using a browser-based interface for a stand-alone application vs. using a normal GUI framework? I'm working on a Python program currently implemented with wxPython for the GUI. The application is simply user-entry forms and dialogs. I am considering moving to PyQt because of the widgets it has (for future expansion), then I realized I could probably just use a browser to do much of the same stuff. The application currently doesn't require Internet access, though it's a possibility in the future. I was thinking of using Karrigell for the web framework if I go browser-based. Edit: For clarification, as of right now the application would be browser-based, not web-based. All the information would be stored locally on the client computer; no server calls would need to be made and no Internet access required (it may come later, though). It would simply be a browser GUI instead of a wxPython/PyQt GUI. Hope that makes sense.

    Read the article

  • Handling learning curve for new developers

    - by pete the pagan-gerbil
    Our company likes to hire new developers with no experience. We have a core set of skills that we try to get them up to speed with, like ASP.NET and WinForms - to teach basic programming, the .NET languages, and the things they'll need to maintain and write. We also try to mentor them through early projects, so they can learn from someone more experienced. Recently, we've been seeing the benefits of new frameworks like MVC and ideas like unit testing and TDD (by extension, dependency injection and IoC), and we'd like to start using these in the team. However, this increases the time a junior needs before they can get started on a new project - because doing something like unit tests wrong could cause major headaches months or years later in maintenance, especially if we believe the unit tests to be comprehensive. How do you handle the huge amount of things that a junior will need to take on, acknowledging that the business wants them working independently as soon as possible? Is it acceptable to tell them not to unit test until a while after they are independent (and give them small, simpler projects in the meantime) before taking them to 'level 2' of the core skills?

    Read the article

  • How would the 'Model' in a Rails-type webapp be implemented in a functional programming language?

    - by ceptorial
    In MVC web development frameworks such as Ruby on Rails, Django, and CakePHP, HTTP requests are routed to controllers, which fetch objects that are usually persisted to a backend database store. These objects represent things like users, blog posts, etc., and often contain logic within their methods for permissions, fetching and/or mutating other objects, validation, etc. These frameworks are all very much object oriented. I've been reading up recently on functional programming and it seems to tout tremendous benefits such as testability, conciseness, modularity, etc. However, most of the examples I've seen of functional programming implement trivial functionality like quicksort or the Fibonacci sequence, not complex webapps. I've looked at a few 'functional' web frameworks, and they all seem to implement the view and controller just fine, but largely skip over the whole 'model' and 'persistence' part. (I'm talking more about frameworks like Compojure, which are supposed to be purely functional, versus something like Lift, which conveniently seems to use the OO part of Scala for the model - but correct me if I'm wrong here.) I haven't seen a good explanation of how functional programming can provide the metaphor that OO programming provides, i.e. tables map to objects, and objects can have methods which provide powerful, encapsulated logic such as permissioning and validation. Also, the whole concept of using SQL queries to persist data seems to violate the 'no side effects' principle. Could someone provide an explanation of how the 'model' layer would be implemented in a functionally programmed web framework?

    Read the article

  • Can Core Data be used for objects with variable schemas?

    - by glenc
    I'm implementing a new iPhone app and am relatively new to Cocoa development overall. I am at the stage of choosing how the persistence layer of this app will work, and it looks like I'm basically choosing between Core Data and sqlite3. The persisted models in this app are intended to have a schema that is loaded at runtime (from some kind of definition file, probably XML). By which I mean: this app is intended to have objects that are user-definable to some extent, e.g. the Customer type (which has certain built-in fields like "name" and "email") can be modified to have extra fields based on the user's specific needs (e.g. a user might want to add a "favourite fruit" field to their Customer type). Having said that, will Core Data work for an app with a non-baked-in data model like this? I've just started playing around with the Core Data model designer in Xcode, and it seems like it wants to work with entities whose fields are fixed at compile time. I'm definitely trying to take the path of least resistance here, and I can see the benefits of using an Apple-supplied data framework, but I don't want to start down that path if it's going to lock me into a data model that's defined at compile time.

    Read the article

  • Architecture for new ASP.NET web application

    - by Anders Abel
    I'm maintaining an application which currently is just a web service (built with WCF) and a database backend. The web service is built in layers, with a linq-to-sql data access part and core functionality in its own assembly, and on top of that the web service assembly which contains the WCF code. The core assembly also handles all business logic rules (very few, actually). The customer now wants a web interface for the application instead of just accessing it through other applications which consume the web service. I'm quite lost on modern web application design, so I would like some advice on what architecture and frameworks to use for the web application. The web application will be using the same core assembly with business rules and the linq-to-sql data access layer as the web service. Some concepts I've thought about are: ASP.NET MVC; WebForms; AJAX controls, possibly letting the AJAX controls access the existing web service through JSON. Are there any more concepts I should look into? Which one is the best for a fresh project? The development tools are Visual Studio 2008 Team Edition for Developers, targeting .NET 3.5. An upgrade to Visual Studio 2010 Premium (or maybe even Ultimate) is possible if it gives any benefits.

    Read the article

  • How to distinguish composition and self-typing use cases

    - by ayvango
    Scala has two instruments for expressing object composition: the original self-type concept and well-known trivial composition. I'm curious in which situations I should use each. There are obvious differences in their applicability. Self-types require you to use traits. Object composition allows you to change extensions at run-time with a var declaration. Leaving technical details behind, I can figure two indicators to help with classification of use cases. If some object is used as a combinator for a complex structure such as a tree, or just has several similarly typed parts (1 car to 4 wheels relation), then it should use composition. There is an extreme opposite use case: let's assume one trait becomes too big to observe clearly and it gets split. It is quite natural that you should use self-types for this case. Those rules are not absolute. You may do extra work to convert code between these techniques, e.g. you may replace 4-wheels composition with self-typing over Product4. You may use Cake[T <: MyType] {part : MyType} instead of Cake { this : MyType => } for cake pattern dependencies. But both cases seem counterintuitive and give you extra work. There are plenty of boundary use cases, though. One-to-one relations are very hard to decide about. Is there any simple rule to decide which kind of technique is preferable? Self-types make your classes abstract; composition makes your code verbose. Self-types give you problems with blending namespaces, and also give you extra typing for free (you get not just a cocktail of two elements but a gasoline-motor-oil cocktail known as a petrol bomb). How can I choose between them? What hints are there? Update: let us discuss the following example: the Adapter pattern. What benefits does it have with the self-typing and composition approaches respectively?

    Read the article

  • Using DTOs and BOs

    - by ryanzec
    One area of question for me about DTOs/BOs is when to pass/return the DTOs and when to pass/return the BOs. My gut reaction tells me to always map NHibernate to the DTOs, not the BOs, and always pass/return the DTOs. Then, whenever I needed to perform business logic, I would convert my DTO into a BO. The way I would do this is that my BO would have a constructor whose only argument is typed to an interface (defining the required fields/properties) that both my DTO and BO implement. Then I would be able to create my BO by passing it the DTO in the constructor (since both implement the same interface, they both have the same properties) and then be able to perform my business logic with that BO. I would then also have a way to convert a BO back to a DTO. However, I have also seen designs where people seem to work only with BOs, and DTOs only appear in the background, so that to the user it looks like there are no DTOs at all. What benefits/downfalls are there with this architecture vs. always using BOs? Should I always be passing/returning either DTOs or BOs, or mix and match (it seems like mixing and matching could get confusing)?
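
    The constructor-from-interface conversion described above would look roughly like this (all names are invented for illustration):

        // Shared contract: the fields both representations must expose.
        public interface ICustomer
        {
            int Id { get; set; }
            string Name { get; set; }
        }

        // Plain data carrier, the thing NHibernate would map.
        public class CustomerDto : ICustomer
        {
            public virtual int Id { get; set; }
            public virtual string Name { get; set; }
        }

        // Business object: constructible from anything implementing the contract.
        public class Customer : ICustomer
        {
            public Customer(ICustomer source)
            {
                Id = source.Id;
                Name = source.Name;
            }

            public int Id { get; set; }
            public string Name { get; set; }

            // Business logic lives here, not on the DTO.
            public bool CanPlaceOrders()
            {
                return !string.IsNullOrEmpty(Name);
            }

            // The conversion back, for hand-off to the persistence side.
            public CustomerDto ToDto()
            {
                return new CustomerDto { Id = Id, Name = Name };
            }
        }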

    Read the article

  • Data Annotations on ViewModels or Domain Objects

    - by Ahmad
    Where would data annotations be more suitable: ViewModels, Domain Objects, or both? I am struggling to decide where these would be better suited. I have not as yet fully utilized them, but this question came to mind. From most of the examples I have seen, they are generally placed on Models and simply use the required attributes for validation, using ModelState.IsValid. I have also seen another question on SO where it was argued that data annotations alone are not sufficient. Option 1 - I will still need to validate again in my service layer. (I think that my service layer should be complete, and this includes validation, since it's planned to be used elsewhere.) Option 2 - How will I then get the benefits of the built-in validation, both client-side and server-side? Option 3 - There will be repetition of validation logic; however, I was wondering if one could use a MetaData class approach that can be used for both ViewModels and Domain Objects. (This is completely off the top of my head, so it may be nonsensical.) I wonder if this question even makes sense. If not, can someone please help me understand this better? Have I completely misunderstood the use of data annotations?
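
    For what it's worth, option 3 maps onto the framework's MetadataTypeAttribute; a sketch with one annotation class shared by a domain object and a ViewModel (names invented, and the property names must match across the classes):

        using System.ComponentModel.DataAnnotations;

        // Single home for the validation rules.
        public class CustomerMetadata
        {
            [Required]
            [StringLength(100)]
            public string Name { get; set; }
        }

        [MetadataType(typeof(CustomerMetadata))]
        public class Customer               // domain object
        {
            public string Name { get; set; }
        }

        [MetadataType(typeof(CustomerMetadata))]
        public class CustomerViewModel      // view model
        {
            public string Name { get; set; }
        }

    ASP.NET MVC reads the metadata automatically; a service layer validating outside MVC would first need to register an AssociatedMetadataTypeTypeDescriptionProvider for each buddied type before Validator sees the attributes.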

    Read the article

  • tipfy for Google App Engine: Is it stable? Can auth/session components of tipfy be used with webapp?

    - by cv12
    I am building a web application on Google App Engine that requires users to register with the application and subsequently authenticate with it and maintain sessions. I don't want to force users to have Google accounts. Also, the target audience for the application is the average non-geek, so I'm not very keen on using OpenID or OAuth. I need something simple like: a user registers with an e-mail and password, and then can log back in with those credentials. I understand that this approach does not provide the security benefits of Google or OpenID authentication, but I am prepared to trade foolproof security for end-user convenience and a hassle-free experience. I explored Django, but decided that the consecutive deprecations from appengine-helper to app-engine-patch to django-nonrel may signal that that path is a bit risky in the long term. I'd like to use a code base that is likely to be maintained consistently. I also explored standalone session/auth packages like gaeutilities and suas. GAEUtilities looked a bit immature (e.g., the code wasn't pythonic in places, in my opinion) and SUAS did not give me a lot of comfort with its cookie-only sessions. I could be wrong in my assessment of these two, so I would appreciate input on them (or on others that may serve my objective). Finally, I recently came across tipfy. It appears to be based on Werkzeug, and Alex Martelli spoke highly of it here on stackoverflow. I have two primary questions related to tipfy: As a framework, is it as mature as webapp? Is it stable and likely to be maintained for some time? Since my primary interest is the auth/session components, can those components of the tipfy framework be used with webapp, independent of the broader tipfy framework? If yes, I would appreciate a few pointers on how I could go about doing that.

    Read the article

  • Backbone.js or Knockout.js as a web framework with jQuery Mobile

    - by Dan
    Without trying to cause a mass discussion, I would like some advice from fellow users of Stack Overflow. I am about to start building a mobile website that gets its data as JSON from a PHP REST API. I have looked into different mobile frameworks and feel that jQuery Mobile (JQM) will work best for us, since we already know jQuery, even though it is a little large. Currently at work we are using jQuery for all our sites, and I realise that now we are building a mobile website I need to think about javascript frameworks that will move us onto a more MV* approach, which I understand the benefits of and which will bring much-needed structure to this mobile site and future web applications. I have made a comparison table where I have managed to bring the selection down to two: Backbone and Knockout. I have been looking around the web and it seems that there is more support for Backbone in general, and maybe even more support for Backbone with JQM: http://jquerymobile.com/test/docs/pages/backbone-require.html One thing I have noticed, however, is that Backbone doesn't support declarative view bindings whereas Knockout does - is this a massive bonus? One of the main reasons for using an MV* library for us is to get more structure, so I would like to use the library that will integrate best with jQuery and especially jQuery Mobile; neither of them seems to have a particularly similar syntax... Thanks

    Read the article

  • Database Functional Programming in Clojure

    - by Ralph
    "It is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail." - Abraham Maslow I need to write a tool to dump a large hierarchical (SQL) database to XML. The hierarchy consists of a Person table with subsidiary Address, Phone, etc. tables. I have to dump thousands of rows, so I would like to do so incrementally and not keep the whole XML file in memory. I would like to isolate non-pure function code to a small portion of the application. I am thinking that this might be a good opportunity to explore FP and concurrency in Clojure. I can also show the benefits of immutable data and multi-core utilization to my skeptical co-workers. I'm not sure how the overall architecture of the application should be. I am thinking that I can use an impure function to retrieve the database rows and return a lazy sequence that can then be processed by a pure function that returns an XML fragment. For each Person row, I can create a Future and have several processed in parallel (the output order does not matter). As each Person is processed, the task will retrieve the appropriate rows from the Address, Phone, etc. tables and generate the nested XML. I can use a a generic function to process most of the tables, relying on database meta-data to get the column information, with special functions for the few tables that need custom processing. These functions could be listed in a map(table name -> function). Am I going about this in the right way? I can easily fall back to doing it in OO using Java, but that would be no fun. BTW, are there any good books on FP patterns or architecture? I have several good books on Clojure, Scala, and F#, but although each covers the language well, none look at the "big picture" of function programming design.

    Read the article

  • Project management: Implementing custom errors in VS compilation process

    - by David Lively
    Like many architects, I've developed coding standards through years of experience which I expect my developers to adhere to. This is especially a problem with the crowd that believes three or four years of experience makes you a senior-level developer. Approaching this as a training and code review issue has generated limited success. So, I was thinking that it would be great to be able to add custom compile-time errors to the build process to enforce this and other guidelines more strictly. For instance, we use stored procedures for ALL database access, which provides procedure-level security, db encapsulation (the table structure is hidden from the app), and other benefits. (Note: I am not interested in starting a debate about this.) Some developers prefer inline SQL or parametrized queries, and that's fine - on their own time and their own projects. I'd like a way to add a compilation check that finds, say, anything that looks like string sql = "insert into some_table (col1,col2) values (@col1, @col2);" and generates an error or, in certain circumstances, a warning, with a message like "Inline SQL and parametrized queries are not permitted." Or, if they use the var keyword var x = new MyClass(); "Variable definitions must be explicitly typed." Do Visual Studio and MSBuild provide a way to add this functionality? I'm thinking that I could use a regular expression to find unacceptable code and generate the correct error, but I'm not sure what, from a performance standpoint, is the best way to integrate this into the build process. We could add a pre- or post-build step to run a custom EXE, but how can I return line- and file-specific errors? Also, I'd like this to run after compilation of each file, rather than post-link. Is a regex the best way to perform this type of pattern matching, or should I go crazy and run the code through a C# parser, which would allow node-level validation via the parse tree? I'd appreciate suggestions and tales of prior experience.
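
    One low-tech version of the regex approach (a sketch, not a recommendation over a real parser): a small console tool run as a pre- or post-build step, writing diagnostics in the canonical "file(line): error CODE: message" format, which Visual Studio and MSBuild parse into clickable, file- and line-specific Error List entries.

        using System;
        using System.IO;
        using System.Text.RegularExpressions;

        // Hypothetical checker: scans source files for banned constructs
        // and reports them in MSBuild's canonical diagnostic format.
        class CodingStandardsCheck
        {
            static readonly Regex InlineSql = new Regex(
                @"""\s*(insert\s+into|delete\s+from|update\s+\w+\s+set|select\s+)",
                RegexOptions.IgnoreCase);

            static int Main(string[] args)
            {
                int violations = 0;
                foreach (string file in Directory.GetFiles(args[0], "*.cs", SearchOption.AllDirectories))
                {
                    string[] lines = File.ReadAllLines(file);
                    for (int i = 0; i < lines.Length; i++)
                    {
                        if (InlineSql.IsMatch(lines[i]))
                        {
                            // "path(line): error CODE: message" is what VS/MSBuild parse.
                            Console.WriteLine(
                                "{0}({1}): error SC0001: Inline SQL is not permitted; use a stored procedure.",
                                file, i + 1);
                            violations++;
                        }
                    }
                }
                return violations == 0 ? 0 : 1;   // a non-zero exit code fails the build
            }
        }

    It would be wired in from the project file with something like <Target Name="BeforeBuild"><Exec Command="StandardsCheck.exe $(ProjectDir)" /></Target>. For true node-level validation, custom FxCop or StyleCop rules are the heavier but more robust route.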

    Read the article
