Search Results

Search found 17610 results on 705 pages for 'specific'.


  • Desktop Applications Versus Web Applications

    Up until the advent of the internet, programmers really only developed one type of application used by end-users. This type of application was called a desktop application. As the name implies, these applications ran strictly from a desktop computer, and were limited by the resources available to that computer. Initially, this type of application did not need resources outside the scope of the computer on which it was installed. The problem with this type of application is that if multiple end-users need to access the same desktop application, then the application must be installed on each end-user’s computer. In this age of software development, security was not as big a concern as it is today with other types of applications. This is primarily due to the fact that an end-user must have access to the computer where the software is installed in order to access the application. In addition, developers could also password protect the application just in case an unauthorized end-user was able to gain access to the computer. With the birth of the internet, a second form of application emerged as developers tried to solve the inherent issues with the preexisting desktop application. One of the solutions to overcome some of the shortcomings of desktop applications is the web application. Web applications are hosted on a centralized server, and clients only need network access and a web browser in order to access the application. Because a web application can be installed on a remote server, it removes the need for individual installations of the same application on each end-user’s computer. The main benefit of an application being hosted on a server is increased accessibility, because nothing has to be installed on a desktop computer for an end-user to be able to access the application. In addition, web applications are much easier to maintain because any change to the application is applied on the server and is inherently applied to any end-user trying to use the application. This removes the time needed to install and maintain individual installations of a desktop application. However, with the increased accessibility come additional costs compared to a desktop application, because of the cost and maintenance of the server hosting the application. Typically, after a desktop application is purchased there are no additional recurring fees associated with the application. When developing a web-based application there are additional considerations that must be addressed compared to a desktop application. The added benefit of increased accessibility also adds a new failure point when trying to gain access to an application: an end-user must now have network connectivity in order to access the application. This issue is not a concern for desktop applications because their resources are typically bound to the computer on which they run. Since the availability of an application is increased with the use of the client-server model in a web-based application, additional security concerns now come into play. As stated before, a desktop application is bound by the end-user’s access to the computer on which the application is installed. This is not the case with web-based applications because they can potentially be accessed from anywhere with the proper internet/network connection. Additional security steps are required to ensure the integrity of the application and its data.
Examples of these steps include, but are not limited to, the following:
Restricted/Password Areas: used when specific information can only be accessed by end-users based on a set of accessibility rules.
IP Restrictions: used when only specific locations need to access an application; this form of security is applied from within the web server or a firewall.
Network Restrictions (Firewalls): used to contain access to an application within a specific subset of a network.
Data Encryption: used to transform personally identifiable information into something unreadable so that it can be stored for future use.
Encrypted Protocols (HTTPS): used to prevent others from reading messages being sent between applications over a network.
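    As a rough illustration of the data-encryption step above (not part of the original article), the following C# sketch uses the .NET framework's built-in AES support; the sample value and key handling are assumptions, and real key management is out of scope:

        using System;
        using System.IO;
        using System.Security.Cryptography;

        class PiiProtection
        {
            // Encrypt a piece of personally identifiable information so it can be
            // stored in an unreadable form and decrypted later when it is needed.
            static byte[] Encrypt(string plainText, byte[] key, byte[] iv)
            {
                using (Aes aes = Aes.Create())
                {
                    aes.Key = key;
                    aes.IV = iv;
                    using (MemoryStream ms = new MemoryStream())
                    {
                        using (CryptoStream cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
                        using (StreamWriter writer = new StreamWriter(cs))
                        {
                            writer.Write(plainText);
                        }
                        return ms.ToArray();
                    }
                }
            }

            static void Main()
            {
                using (Aes aes = Aes.Create())
                {
                    // Key and IV are generated automatically here; both would have to be
                    // stored securely so the data can be decrypted again later.
                    byte[] cipher = Encrypt("123-45-6789", aes.Key, aes.IV);
                    Console.WriteLine(Convert.ToBase64String(cipher));
                }
            }
        }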

    Read the article

  • Iterative and Incremental Principle Series 4: Iteration Planning – (a.k.a What should I do today?)

    - by llowitz
    Welcome back to the fourth of a five part series on applying the Iteration and Incremental principle.  During the last segment, we discussed how the Implementation Plan includes the number of the iterations for a project, but not the specifics about what will occur during each iteration.  Today, we will explore Iteration Planning and discuss how and when to plan your iterations. As mentioned yesterday, OUM prescribes initially planning your project approach at a high level by creating an Implementation Plan.  As the project moves through the lifecycle, the plan is progressively refined.  Specifically, the details of each iteration is planned prior to the iteration start. The Iteration Plan starts by identifying the iteration goal.  An example of an iteration goal during the OUM Elaboration Phase may be to complete the RD.140.2 Create Requirements Specification for a specific set of requirements.  Another project may determine that their iteration goal is to focus on a smaller set of requirements, but to complete both the RD.140.2 Create Requirements Specification and the AN.100.1 Prepare Analysis Specification.  In an OUM project, the Iteration Plan needs to identify both the iteration goal – how far along the implementation lifecycle you plan to be, and the scope of work for the iteration.  Since each iteration typically ranges from 2 weeks to 6 weeks, it is important to identify a scope of work that is achievable, yet challenging, given the iteration goal and timeframe.  OUM provides specific guidelines and techniques to help prioritize the scope of work based on criteria such as risk, complexity, customer priority and dependency.  In OUM, this prioritization helps focus early iterations on the high risk, architecturally significant items helping to mitigate overall project risk.  Central to the prioritization is the MoSCoW (Must Have, Should Have, Could Have, and Won’t Have) list.   The result of the MoSCoW prioritization is an Iteration Group.  This is a scope of work to be worked on as a group during one or more iterations.  As I mentioned during yesterday’s blog, it is pointless to plan my daily exercise in advance since several factors, including the weather, influence what exercise I perform each day.  Therefore, every morning I perform Iteration Planning.   My “Iteration Plan” includes the type of exercise for the day (run, bike, elliptical), whether I will exercise outside or at the gym, and how many interval sets I plan to complete.    I use several factors to prioritize the type of exercise that I perform each day.  Since running outside is my highest priority, I try to complete it early in the week to minimize the risk of not meeting my overall goal of doing it twice each week.  Regardless of the specific exercise I select, I follow the guidelines in my Implementation Plan by applying the 6-minute interval sets.  Just as in OUM, the iteration goal should be in context of the overall Implementation Plan, and the iteration goal should move the project closer to achieving the phase milestone goals. Having an Implementation Plan details the strategy of what I plan to do and keeps me on track, while the Iteration Plan affords me the flexibility to juggle what I do each day based on external influences thus maximizing my overall success. Tomorrow I’ll conclude the series on applying the Iterative and Incremental approach by discussing how to manage the iteration duration and highlighting some benefits of applying this principle.

    Read the article

  • Selecting the correct installer to install Oracle Weblogic Server

    - by PratikS -- Oracle
    Whenever we start learning about a software product, the first step is to get the software installer and install it. Before we start with "How to install Oracle WebLogic Server?", let's understand the different kinds of installers available for Oracle WebLogic Server and select the correct one. There are three different kinds of WebLogic Server installers: the Package Installer, the Development-only and supplemental installers, and the Upgrade Installer.
    1) Package Installer: If you have never installed Oracle WebLogic Server and this is the first time you are installing it, then what you need is an Oracle WebLogic Server Package Installer. Again, there are two different kinds of Package Installers:
    a) Generic Package Installer: It does not include a Java runtime (when using the Generic Package Installer, it is a prerequisite that a supported JDK is already installed). If you want to install WebLogic Server with a 64-bit JVM, you have to use the Generic Package Installer. It is platform independent and can be used to install WebLogic Server on any supported 32-bit or 64-bit platform.
    b) OS-specific Package Installer: As the name suggests, this installer is platform specific. It is meant for installation with a 32-bit JVM only. Both the SUN and JRockit 32-bit JDKs come bundled with the OS-specific Package Installer, so there is no need to install the JDK in advance.
    2) Development-only and supplemental installers: If you have no plans to use Oracle WebLogic Server in production and need a simple installer for testing purposes only, then use this installer. Download the zip distribution, unzip it, and it's ready to use.
    3) Upgrade Installer: The Upgrade Installer is used to upgrade an Oracle WebLogic Server installation from one minor version to a higher minor version. There are no installers available to upgrade an Oracle WebLogic Server installation from one major version to another, though Domain Upgrade is always available.
    Note: the different versions of Oracle WebLogic Server in ascending order (excluding versions before WLS 9.2) are WLS 9.2.x, WLS 10.0.x, WLS 10.3.x and WLS 12.1.x, where "x" denotes the minor version and 9.2, 10.0, 10.3 and 12.1 are the major versions. So you may use the Upgrade Installer to upgrade from WLS 10.3.1 to 10.3.6, or from 10.0.1 to 10.0.2, etc.
    Important links to refer to: Oracle WebLogic Server Documentation, Supported Configurations, Installation Guide for Oracle WebLogic Server

    Read the article

  • Ubuntu 12.04 host – Virtualbox 4.1.12 Guest=Windows 7 – Network will not connect

    - by user287529
    Ubuntu 12.04 host – Virtualbox 4.1.12 Guest=Windows 7 – Network will not connect. I'm using Ubuntu 12.04 on an Acer Aspire 5742-7645 laptop with 4GB memory, Intel Core i3 processor, Intel HD Graphics, DVD drive, 802.1 b/g/n, and 500 GB HD. I connect to my router via a wireless connection. I have installed Virutalbox 4.1.12 from the Ubuntu Software Center and installed Guest additions 4.1.12 in the Windows 7 guest session. I have Windows XP and Windows 7 installed as guests in Virtual box The network settings are different for XP and 7 – see below. Network Settings XP guest = Adapter 1: PCnet-FAST III (NAT) - Network works perfectly and has worked well for several years. Network Settings Win 7 = Adapter 1: Intel PRO/1000 MT Desktop (Bridged adapter, eth1) Promiscuous Mode = allow all Cable connected = checked When I originally installed Windows 7, I tried NAT and the guest network would not connect. Once I changed the setting to the above (Bridged) the Network worked perfectly. However, what I believe is after updates (not sure if it was an Ubuntu or Windows update) the guest network stopped working and I can not get it to connect. Interfaces file content auto lo iface lo inet loopback Ifconfig yields lou@lou-Aspire-5742:~$ ifconfig eth0 Link encap:Ethernet HWaddr 1c:75:08:09:f6:5c UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:16 eth1 Link encap:Ethernet HWaddr 4c:0f:6e:7c:9f:01 inet addr:192.168.1.104 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::4e0f:6eff:fe7c:9f01/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:18095 errors:2 dropped:0 overruns:0 frame:24344 TX packets:9281 errors:47 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:5301926 (5.3 MB) TX bytes:1441885 (1.4 MB) Interrupt:17 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:3208 errors:0 dropped:0 overruns:0 frame:0 TX packets:3208 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:294088 (294.0 KB) TX bytes:294088 (294.0 KB) Ipconfig yields the following: Windows IP Configuration Ethernet adapter Local Area Connection: Connection-specific DNS Suffix . : Link-local IPv6 Address . . . . . : fe80::38ba:dbca:a21d:c3d1%13 Autoconfiguration IPv4 Address. . : 169.254.195.209 Subnet Mask . . . . . . . . . . . : 255.255.0.0 Default Gateway . . . . . . . . . : Tunnel adapter isatap.{B292E440-679D-4FC5-8E34-77D6804669C8}: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Tunnel adapter Local Area Connection* 11: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : I'm not sure what else to do. Can someone provide the troubleshooting steps to determine what the problem is and possible solution?

    Read the article

  • Designing An ACL Based Permission System

    - by ryanzec
    I am trying to create a permissions system where everything is going to be stored in MySQL (or some database) and pulled using PHP for a project management system I am building. Right now I am trying to do it in an ACL kind of way. There are a number of key features I want to be able to support:
    1. Being able to assign permissions without being tied to a specific object. The reason for this is that I want to be able to selectively show/hide elements of the UI based on permissions at a point where I am not directly looking at a domain object instance. For instance, a button to create a new project should only be shown to users that have the pm.project.create permission, but obviously you can't assign a create permission to a domain object instance (as it is already created).
    2. Not having to assign permissions for every single object. Obviously creating permission entries for every single object (projects, tickets, comments, etc…) would become a nightmare to maintain, so I want to have some level of permission inheritance.
    3. Being able to filter queries based on permissions. This would be a really nice to have, but I am not sure if it is possible. What I mean by this is, say I have a page that lists all projects. I want the query that pulls all projects to incorporate the ACL so that it would not show projects that the current user does not have pm.project.read access to. This would have to be incorporated into the main query, because if it is a process that is done after the main query (which I know I could do), certain features like pagination become much more difficult.
    Right now this is my basic design for the tables:
    AclEntities: id - the primary key; key - the unique identifier for the domain object (usually the primary key of that object); parentId - the parent of the domain object (like the project object if this was a ticket object); aclDomainObjectId - metadata about the domain object.
    AclDomainObjects: id - primary key; title - simple string to uniquely identify the domain object (i.e. project, ticket, comment, etc…); fullyQualifiedClassName - the fully qualified class name for use in code (I am using namespaces).
    There would also be tables mapping AclEntities to Users and UserGroups. I also have this interface that all acl entity based objects have to implement:
    IAclEntity: getAclKey() - get the unique key for this specific instance of the acl domain object (generally return the primary key or a concatenated string of a composite primary key); getAclTitle() - get the unique title for the domain object (generally just returning a static string); getAclDisplayString() - get the string that represents this entity (generally one or more fields on the object); getAclParentEntity() - get the parent acl entity object (or null if no parent); getAclEntity() - get the acl entity object for this instance of the domain object (or null if one has not been created yet); hasPermission($permissionString, $user = null) - whether or not the user has the permission for this instance of the domain object; static getFromAclEntityId($aclEntityId) - get a specific instance of the domain object from an acl entity id.
    Do any of these features I am looking for seem hard to support, or are they just way off base? Am I missing or not taking into account anything in my implementation? Is performance something I should keep in mind?

    Read the article

  • Don't Throw Duplicate Exceptions

    In your code, you'll sometimes have to write code that validates input using a variety of checks.  Assuming you haven't embraced AOP and done everything with attributes, it's likely that your defensive coding is going to look something like this: public void Foo(SomeClass someArgument) { if(someArgument == null) { throw new InvalidArgumentException("someArgument"); } if(!someArgument.IsValid()) { throw new InvalidArgumentException("someArgument"); }   // Do Real Work } Do you see a problem here?  Here's the deal: exceptions should be meaningful.  They have value at a number of levels: in the code, throwing an exception lets the developer know that there is an unsupported condition here; in calling code, different types of exceptions may be handled differently; at runtime, logging of exceptions provides a valuable diagnostic tool. It's this last reason I want to focus on.  If you find yourself literally throwing the exact same exception in more than one location within a given method, stop.  The stack trace for such an exception is likely going to be identical regardless of which path of execution led to the exception being thrown.  When that happens, you or whomever is debugging the problem will have to guess which exception was thrown.  Guessing is a great way to introduce additional problems and/or greatly increase the amount of time required to properly diagnose and correct any bugs related to this behavior. Don't Guess, Be Specific. When throwing an exception from multiple code paths within the code, be specific.  Virtually every exception allows a custom message; use it and ensure each case is unique.  If the exception might be handled differently by the caller, then consider implementing a new custom exception type.  Also, don't automatically think that you can improve the code by collapsing the if-then logic into a single call with short-circuiting (e.g. if(x == null || !x.IsValid())); that will guarantee that you can't throw different information into the message as easily as constructing the exception separately in each case. The code above might be refactored like so: public void Foo(SomeClass someArgument) { if(someArgument == null) { throw new ArgumentNullException("someArgument"); } if(!someArgument.IsValid()) { throw new InvalidArgumentException("someArgument"); }   // Do Real Work } In this case it's taking advantage of the fact that there is already an ArgumentNullException in the framework, but if you didn't have an IsValid() method and were doing validation on your own, it might look like this: public void Foo(SomeClass someArgument) { if(someArgument.Quantity < 0) { throw new InvalidArgumentException("someArgument", "Quantity cannot be less than 0. Quantity: " + someArgument.Quantity); } if(someArgument.Quantity > 100) { throw new InvalidArgumentException("someArgument", "SomeArgument.Quantity cannot exceed 100. Quantity: " + someArgument.Quantity); }   // Do Real Work } Note that in this last example, I'm throwing the same exception type in each case, but with different Message values.  I'm also making sure to include the value that resulted in the exception, as this can be extremely useful for debugging.  (How many times have you wished NullReferenceException would tell you the name of the variable it was trying to reference?) Don't add work to those who will follow after you to maintain your application (especially since it's likely to be you).
Be specific with your exception messages and follow DRY when throwing exceptions within a given method by throwing unique exceptions for each interesting case of invalid state.
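    Building on the custom exception type suggestion above, here is a rough C# sketch (an assumption, not from the original post) of an InvalidArgumentException that carries the offending value as a property, so logs and handlers can read it without parsing the message text:

        using System;

        // Hypothetical custom exception: it keeps the parameter name and the
        // offending value so callers and log handlers can inspect them directly.
        public class InvalidArgumentException : ArgumentException
        {
            public object OffendingValue { get; private set; }

            public InvalidArgumentException(string paramName, string message, object offendingValue)
                : base(message, paramName)
            {
                OffendingValue = offendingValue;
            }
        }

        public class SomeClass
        {
            public int Quantity { get; set; }
            public bool IsValid() { return Quantity >= 0 && Quantity <= 100; }
        }

        public static class Example
        {
            public static void Foo(SomeClass someArgument)
            {
                if (someArgument == null)
                {
                    throw new ArgumentNullException("someArgument");
                }
                if (someArgument.Quantity < 0)
                {
                    throw new InvalidArgumentException(
                        "someArgument",
                        "Quantity cannot be less than 0.",
                        someArgument.Quantity);
                }
                // Do Real Work
            }
        }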

    Read the article

  • WiX, MSDeploy and an appealing configuration/deployment paradigm

    - by alexhildyard
    I do a lot of application and server configuration; I've done this for many years and have tended to view the complexity of this strictly in terms of the complexity of the ultimate configuration to be deployed. For example, specific APIs aside, I would tend to regard installing a server certificate as a more complex activity than, say, copying a file or adding a Registry entry. My prejudice revolved around the idea of a sequential deployment script that not only had the explicit prescription to apply a specific server configuration, but also made the implicit presumption that the server in question was in a good known state. Scripts like this fail for hundreds of reasons -- the Default Website didn't exist; the application had already been deployed; the application had already been partially deployed and failed to roll back fully, and so on. And so the problem is that the more complex the configuration activity, the more scope for error in any individual part of that activity, and therefore the greater the chance the server in question will not end up at exactly the desired configuration level. Recently I was introduced to a completely different mindset, which, for want of a better turn of phrase, I will call the "make it so" mindset. It's extremely simple both to explain and to implement. In place of the head-down, imperative script you used to use, you substitute a set of checks -- much like exception handlers -- around each configuration activity, starting with a check of the current system state. Thus the configuration logic becomes: "IF these services aren't started then start them, and IF XYZ website doesn't exist then create it, and IF these shares don't exist then create them, and IF these shares aren't permissioned in some particular way, then permission them so." This works. Really well, in my experience.
    Scenario 1: You want to get a system into a good known state; it's already in a good known state; you quickly realise there is nothing to do.
    Scenario 2: You want to get the system into a good known state; your script is flawed or the system is bust; it cannot be put into that state. You know exactly where (at least part of) the problem is and why.
    Scenario 3: You want to get the system into a good known state; people are fiddling around with the system just now. That's fine. You do what you can, and later you come back and try it again.
    Scenario 4: No one wants to deploy anything; they want you to prove that the previous deployment was successful. So you re-run the deployment script with the "-WhatIf" flag. It reports that there was nothing to change. There's your proof.
    I mentioned two technologies in the title -- MSI and MSDeploy. I am thinking specifically of the conversation that took place here. Having worked with both technologies, I think Rob Mensching's response is appropriately nuanced, and in essence the difference is this: sometimes your target is either to achieve a specific new server state, or to roll back to a known good one. Then again, your target may be to configure what you can, and to understand what you can't. Implicitly MSDeploy's "rollback" is simply to redeploy the previous version, whereas a well-crafted MSI will actively put your system into that state without further intervention. Either way, if all goes well it will leave you with a system in one of two states, whereas MSDeploy could leave your system in one of many states.
The key is that MSDeploy and MSI are complementary technologies; which suits you best depends as much on Operational guidance as on your Configuration remit. What I wanted to say was that I have always been for atomic, transaction-based configuration, but having worked with the "make it so" paradigm, I have been favourably impressed by the actual results. I'm tempted to put a more technical post up on this in due course.
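    As a rough C# sketch of the "make it so" pattern described above (my own illustration, with a hypothetical service name and folder), each step checks the current state first and only acts when the state differs, so re-running the whole script is always safe:

        using System.IO;
        using System.ServiceProcess;

        class MakeItSo
        {
            // "Make it so": inspect the current state and change only what differs
            // from the desired state, so the script is safe to run repeatedly.
            static void EnsureServiceRunning(string serviceName)
            {
                using (ServiceController sc = new ServiceController(serviceName))
                {
                    if (sc.Status != ServiceControllerStatus.Running)
                    {
                        sc.Start();
                        sc.WaitForStatus(ServiceControllerStatus.Running);
                    }
                }
            }

            static void EnsureDirectoryExists(string path)
            {
                if (!Directory.Exists(path))
                {
                    Directory.CreateDirectory(path);
                }
            }

            static void Main()
            {
                EnsureServiceRunning("W3SVC");              // hypothetical target service
                EnsureDirectoryExists(@"C:\Sites\XYZSite"); // hypothetical website root
            }
        }

    The real thing would extend the same check-then-act shape to websites, shares and permissions, and a reporting-only mode (like the "-WhatIf" run above) simply prints what it would have changed instead of changing it.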

    Read the article

  • What common interface would be appropriate for these game object classes?

    - by Jefffrey
    Question A component based system's goal is to solve the problems that derives from inheritance: for example the fact that some parts of the code (that are called components) are reused by very different classes that, hypothetically, would lie in a very different branch of the inheritance tree. That's a very nice concept, but I've found out that CBS is often hard to accomplish without using ugly hacks. Implementations of this system are often far from clean. But I don't want to discuss this any further. My question is: how can I solve the same problems a CBS try to solve with a very clean interface? (possibly with examples, there are a lot of abstract talks about the "perfect" design already). Context Here's an example I was going for before realizing I was just reinventing inheritance again: class Human { public: Position position; Movement movement; Sprite sprite; // other human specific components }; class Zombie { Position position; Movement movement; Sprite sprite; // other zombie specific components }; After writing that I realized I needed an interface, otherwise I would have needed N containers for N different types of objects (or to use boost::variant to gather them all together). So I've thought of polymorphism (move what systems do in a CBS design into class specific functions): class Entity { public: virtual void on_event(Event) {} // not pure virtual on purpose virtual void on_update(World) {} virtual void on_draw(Window) {} }; class Human : public Entity { private: Position position; Movement movement; Sprite sprite; public: virtual void on_event(Event) { ... } virtual void on_update(World) { ... } virtual void on_draw(Window) { ... } }; class Zombie : public Entity { private: Position position; Movement movement; Sprite sprite; public: virtual void on_event(Event) { ... } virtual void on_update(World) { ... } virtual void on_draw(Window) { ... } }; Which was nice, except for the fact that now the outside world would not even be able to know where a Human is positioned (it does not have access to its position member). That would be useful to track the player position for collision detection or if on_update the Zombie would want to track down its nearest human to move towards him. So I added const Position& get_position() const; to both the Zombie and Human classes. And then I realized that both functionality were shared, so it should have gone to the common base class: Entity. Do you notice anything? Yes, with that methodology I would have a god Entity class full of common functionality (which is the thing I was trying to avoid in the first place). Meaning of "hacks" in the implementation I'm referring to I'm talking about the implementations that defines Entities as simple IDs to which components are dynamically attached. Their implementation can vary from C-stylish: int last_id; Position* positions[MAX_ENTITIES]; Movement* movements[MAX_ENTITIES]; Where positions[i], movements[i], component[i], ... make up the entity. Or to more C++-style: int last_id; std::map<int, Position> positions; std::map<int, Movement> movements; From which systems can detect if an entity/id can have attached components.

    Read the article

  • Cream of the Crop

    - by KemButller
    JD Edwards has been working hard to ensure that you shouldn't have to work so hard! Yet there are still JD Edwards customers that may not be up to speed on all the new and/or improved tools and utilities we have delivered, all designed to make your life easier. So today, I want to share what I consider to be the cream of the crop… those items that every customer should know about and leverage to make ERP life just a little bit (or A LOT) easier! These are my top picks, the cream of a very good crop! Explore and enjoy, and gain some of your time back to do with as you please.
    · www.runjde.com: It’s where to go when you need to know! www.runjde.com provides comprehensive Resource Kits (guides) by user type. The guides provide brief descriptions of the wide array of resources that are available to the JD Edwards ecosystem and links to each of those resources.
    · My Oracle Support (MOS) Information Centers: This link will take you to an index that is designed to provide you with simple and quick navigation to the available EnterpriseOne Information Centers. This index provides links to the EnterpriseOne Application-specific Information Centers, the EnterpriseOne Tools and Technology Information Centers, the EnterpriseOne Performance Information Center, and the EnterpriseOne 9.1 and 9.0 Information Centers. Information Centers give Oracle the ability to aggregate content for a given focus area and present this content in categories for easy browsing by our customers. Information Centers offer a variety of focused dynamic content organized around one or more of the following tasks: Overview, Use, Troubleshooting, Patching and Maintenance, Install and Configure, Upgrade, Optimize Performance, Security, and Certify.
    · JD Edwards Newsletters: Be in the know by reading the Global Customer Support Product Newsletters. They are PACKED with news and information covering a wide range of topics. It is a must read if you want to know what’s happening in the JD Edwards universe! Read the latest EnterpriseOne newsletter, read the latest World newsletter, and learn how to receive notification when a new newsletter edition is published.
    · Oracle Learning Library (OLL): The Oracle Learning Library is the place to go for easy access to JD Edwards Application and Tools training. For a comprehensive view of the training available for a specific product/functional area, explore the Knowledge Paths. For Net Change (new feature) training, explore the TOI sessions (TOI stands for Transfer Of Information). Tip: be sure to experiment with the search filters!
    · www.upgradejde.com: The site designed to help customers and partners with the process of upgrading JD Edwards. The site is a wealth of information, tools and resources designed to assist in the evaluation, planning and execution steps required when upgrading. Of note is the wildly successful upgrade strategy known as “The Art of the Possible”, wherein JD Edwards and many of our partners hold free workshops to teach customers how to conduct upgrades in 100 days or less. Equally important is the fact that on www.upgradejde.com, customers can gain visibility into planned enhancements using the Product and Technology Feature Catalogs. The catalogs are great for creating customer-specific reports about the net change between older releases and current or planned releases.
Examples of other key resources on www.upgradejde.com are the product database changes between releases, extensibility guides (formerly known as programmer’s guides), whitepapers, ROI calculators and much more!

    Read the article

  • Customer Loyalty vs. Customer Engagement: Who Cares?

    - by Jeb Dasteel-Oracle
    Have you read the recent Forbes OracleVoice blog titled Customer Loyalty is Dead. Long Live Engagement!? If you haven’t, take a look. This article prompted lots of conversation in the social realm. Many who read the article voiced their reactions to the headline and now I’m jumping in to add my view. Customer loyalty is still key. It’s the effect and engagement is the cause. We at least know that to be true for our customers. We are in an age where customers are demanding to be heard. We need them to be actively involved – or engaged – as well. Greater levels of customer engagement, properly targeted, positively correlate with satisfaction. Our data has shown us this over and over. Satisfied customers are more loyal and more willing to vocalize their satisfaction through referencing, and are more likely to purchase again, all of which in turn drives incremental revenue – from the customer doing the referencing AND the customer on the receiving end of that reference. Turning this around completely, if we begin to see the level of a customer’s engagement start to wane, this is an indicator that their satisfaction, loyalty, and future revenue are likely at risk. At Oracle, we’ve put in place many programs to target, encourage, and then track engagement, allowing us to measure engagement as a determinant of loyalty. Some of these programs include our Key Accounts, solution design and architectural, Executive Sponsorship, as well as executive advisory boards. Specific programs allow us to engage specific contacts within specific customer organizations (based on role) and then systematically track their engagement activities over time, along side of tracking customer satisfaction, loyalty, referenceability, and incremental revenue contribution. Continuous measurement of engagement allows us to better understand customer views of what it means to partner with a provider and adjust program participation to better meet the needs of the partnership. We can also track across customer segments, and design new programs that are even more effective than the ones we have in place today. In case you missed any of my previous Forbes articles, I’ve included links below for easy access. Award-Winning Companies Put Customers First The Power of Peer Networks: 5 Reasons to Get (and Stay) Involved Technology At Work: Traveling In Style Customer Central: 8 Strategies for Putting Customers at the Core of Your Business Technology at Work: Five Companies Doing IT Right

    Read the article

  • Appropriate design / technologies to handle dynamic string formatting?

    - by Mark W
    Recently I was tasked with implementing a way of adding support for versioning of hardware packet specifications to one of our libraries. First a bit of information about the project. We have a hardware library which has classes for each of the various commands we support sending to our hardware. These hardware modules are essentially just lights with a few buttons, and a 2 or 4 digit display. The packets typically follow the format {SOH}AADD{ETX}, where AA is our sentinel action code, and DD is the device ID. These packet specs are different from one command to the next obviously, and the different firmware versions we have support different specifications. For example, on version 1 an action code of 14 may have a spec of {SOH}AADDTEXT{ETX} which would be AA = 14 literal, DD = device ID, TEXT = literal text to display on the device. Then we come out with a revision which adds an extended byte (or bytes) onto the end of the packet like this {SOH}AADDTEXTE{ETX}. Assume the TEXT field is fixed width for this example. We have now added a new field onto the end which could be used to, say, specify the color or flash rate of the text/buttons. Currently this Java library only supports one version of the commands, the latest. In our hardware library we would have a class for this command, say a DisplayTextArgs.java. That class would have fields for the device ID, the text, and the extended byte. The command class would expose a method which generates the string ("{SOH}AADDTEXTE{ETX}") using the values from the class. In practice we would create the Args class as needed, populate the fields, call the method to get our packet string, then ship that down across the CAN. Some of our other commands' specifications can vary for the same command, on the same version, depending on some runtime state. For example, another command for version 1 may be {SOH}AA{ETX}, where this action code clears all of the modules behind a specific controller device of their text. We may overload this packet to have option fields with multiple meanings, like {SOH}AAOC{ETX} where OC is literal text, which tells the controller to only clear text on a specific module type and to leave the others alone, or the spec could also have an option format of {SOH}AADD{ETX} to clear the text off a specific device. Currently, in the method which generates the packet string, we would evaluate fields on the args class to determine which spec we will be using when formatting the packet. For this example, it would be along the lines of: if m_DeviceID != null then use {SOH}AADD{ETX}, else if m_ClearOCs == true then use {SOH}AAOC{ETX}, else use {SOH}AA{ETX}. I had considered using XML, or a database, to store String.format format strings, which were linked to firmware version numbers in some table. We would load them up at startup, and pass in the version number of the hardware's firmware we are currently using (I can query the devices for their firmware version, but the version is not included in all packets as part of the spec). This breaks down pretty quickly because of the dynamic nature of how we select which version of the command to use. I then considered using a rule engine to possibly build out expressions which could be interpreted at runtime, to evaluate the args class's state and from that select the appropriate format string to use, but my brief look at rule engines for Java scared me away with their complexity. While it seems like it might be a viable solution, it seems overly complex. So this is why I am here.
I wouldn't say design is my strongest skill, and I'm having trouble figuring out the best way to approach this problem. I probably won't be able to radically change the args classes, but if the trade-off was good enough, I may be able to convince my boss that the change is appropriate. What I would like from the community is some feedback on best practices, design methodologies, APIs or other resources which I could use to accomplish the following: logic to determine which set of commands to use for a given firmware version; of those commands, which version of each command to use (based on the args class's state); keeping the rules logic decoupled from the application so as to avoid needing releases for every firmware version; and being simple enough so I don't need weeks of study and trial and error to implement effectively.

    Read the article

  • Best way to load application settings

    - by enzom83
    A simple way to keep the settings of a Java application is a text file with a ".properties" extension containing the identifier of each setting associated with a specific value (this value may be a number, string, date, etc.). C# uses a similar approach, but the text file must be named "App.config". In both cases, in source code you must initialize a specific class for reading settings: this class has a method that returns the value (as string) associated with the specified setting identifier. // Java example Properties config = new Properties(); config.load(...); String valueStr = config.getProperty("listening-port"); // ... // C# example NameValueCollection setting = ConfigurationManager.AppSettings; string valueStr = setting["listening-port"]; // ... In both cases we should parse strings loaded from the configuration file and assign the converted values to the related typed objects (parsing errors could occur during this phase). After the parsing step, we must check that the setting values belong to a specific domain of validity: for example, the maximum size of a queue should be a positive value, some values may be related (example: min < max), and so on. Suppose that the application should load the settings as soon as it starts: in other words, the first operation performed by the application is to load the settings. Any invalid values for the settings must be replaced automatically with default values: if this happens to a group of related settings, those settings are all set with default values. The easiest way to perform these operations is to create a method that first parses all the settings, then checks the loaded values and finally sets any default values. However, maintenance is difficult if you use this approach: as the number of settings increases while developing the application, it becomes increasingly difficult to update the code. In order to solve this problem, I had thought of using the Template Method pattern, as follows. public abstract class Setting { protected abstract bool TryParseValues(); protected abstract bool CheckValues(); public abstract void SetDefaultValues(); /// <summary> /// Template Method /// </summary> public bool TrySetValuesOrDefault() { if (!TryParseValues() || !CheckValues()) { // parsing error or domain error SetDefaultValues(); return false; } return true; } } public class RangeSetting : Setting { private string minStr, maxStr; private byte min, max; public RangeSetting(string minStr, string maxStr) { this.minStr = minStr; this.maxStr = maxStr; } protected override bool TryParseValues() { return (byte.TryParse(minStr, out min) && byte.TryParse(maxStr, out max)); } protected override bool CheckValues() { return (0 < min && min < max); } public override void SetDefaultValues() { min = 5; max = 10; } } The problem is that in this way we need to create a new class for each setting, even for a single value. Are there other solutions to this kind of problem? In summary: Easy maintenance: for example, the addition of one or more parameters. Extensibility: a first version of the application could read a single configuration file, but later versions may give the possibility of a multi-user setup (admin sets up a basic configuration, users can set only certain settings, etc.). Object oriented design.
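    As a short usage sketch of the Template Method approach shown above (my own illustration, with a hypothetical AppConfiguration class), every setting is parsed, checked and defaulted through the same TrySetValuesOrDefault() call, so adding a setting only means adding one more Setting subclass instance to the list:

        using System.Collections.Generic;

        public class AppConfiguration
        {
            private readonly List<Setting> settings = new List<Setting>();

            public AppConfiguration(string minStr, string maxStr)
            {
                settings.Add(new RangeSetting(minStr, maxStr));
                // settings.Add(new TimeoutSetting(...)); // further settings would go here
            }

            // Returns false if any setting had to fall back to its defaults.
            public bool LoadAll()
            {
                bool allValid = true;
                foreach (Setting setting in settings)
                {
                    allValid &= setting.TrySetValuesOrDefault();
                }
                return allValid;
            }
        }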

    Read the article

  • Running MSBuild fails to read SDKToolsPath

    - by Scott Mayfield
    Howdy, I'm having a bit of an issue running a NAnt script that used to properly build my .Net 2.0 based website, when compiling with VS2008 and its associated tools. I've recently upgraded all the project/solution files to VS2010, and now my build fails with the following error: [exec] C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Microsoft.Common.targets(2249,9): error MSB3086: Task could not find "sgen.exe" using the SdkToolsPath "" or the registry key "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SDKs\Windows\v7.0A". Make sure the SdkToolsPath is set and the tool exists in the correct processor specific location under the SdkToolsPath and that the Microsoft Windows SDK is installed Now, I DO have prior versions (.Net 3.5) of the Windows SDK installed on the build server, and the full .Net 4.0 framework is installed, but I've not run across a .Net 4.0 specific version of the Windows SDK. After a bit of experimentation and research, I finally just set up a new environment variable "SDKToolsPath" and pointed it to the copy of sgen.exe in my Windows 6.0 SDK folder. This generated the same error, but it got me to notice that even though the SDKToolsPath environment variable IS set (confirmed that I can "echo" it at the command line and it has the expected value), the error message seems to indicate that it's not being read (note the empty quotes). Most of the information I've found is .Net 3.5 (or earlier) specific. Not much 4.0 related out there yet. Searching for error code MSB3086 generated nothing useful either. Any idea what this might be? Scott

    Read the article

  • MS Access 2003: Can data disappear from records and how do I test for this and prevent it?

    - by user328960
    Problem and about the database: Data from a record in Access 2003 database has disappeared. This database has 1 backend and 3 frontends, multiple users and is hosted on Citrix. Within this database, we have records of all clients served, ranging in the 1000s. Background info: The form for client data entry is set up with various subforms, including both a "programs enrolled" subform and a "services" subform. A client can be enrolled in multiple programs. Once enrolled in a program, services can be entered for that program area using the services subform. There are multiple fields in the services subform, one of which is a drop-down field allowing you to choose from the programs a client has been enrolled in (the list is updated for that client whenever he is enrolled in a new program). The problem details: For one specific record and one specific program area, the program has disappeared from the "programs enrolled" subform and all of the related services have disappeared from the "services" subform for a period of 3 months of data entry. However, other programs and services for this record did not disappear. Questions: Is the disappearance of data a common Access 2003 problem? Are there tests in place that can be run to see if data is disappearing and catch that data? If so, what are they? If there is specific code involved, what is it? What can be done to prevent the disappearing of data (other than using a different database)?

    Read the article

  • How to catch a Microsoft.SharePoint.SoapServer.SoapServerException?

    - by JL
    I am a bit perplexed on how to catch a specific error type of Microsoft.SharePoint.SoapServer.SoapServerException, I'll explain why, and I've included a code sample below for you guys to see. As you know there are 2 ways to interact with MOSS. The object model (only runs on MOSS Server) Web Services (can be run on a remote machine querying the MOSS server) So as per code sample I'm using web services to query MOSS, because of this I don't have sharepoint installed on the remote server running these web services and without MOSS installed its impossible to reference the SharePoint DLL to get the specific error type : Microsoft.SharePoint.SoapServer.SoapServerException. If I can't reference the DLL then how the heck am I supposed to catch this specific error type? System.Xml.XmlNode ndListView = wsLists.GetListAndView(ListName, ""); string strListID = ndListView.ChildNodes[0].Attributes["Name"].Value; string strViewID = ndListView.ChildNodes[1].Attributes["Name"].Value; System.Xml.XmlDocument doc = new System.Xml.XmlDocument(); System.Xml.XmlElement batchElement = doc.CreateElement("Batch"); batchElement.SetAttribute("OnError", "Continue"); batchElement.SetAttribute("ListVersion", "1"); batchElement.SetAttribute("ViewName", strViewID); batchElement.InnerXml = "<Method ID='1' Cmd='Update'>" + "<Field Name='DeliveryStatus'>" + newStatus.ToString() + "</Field>" + "<Where><Eq><FieldRef Name='ID' /><Value Type='Text'>" + id + "</Value></Eq></Where></Method>"; try { wsLists.UpdateListItems(strListID, batchElement); return true; } catch (Microsoft.SharePoint.SoapServer.SoapServerException ex) { }

    Read the article

  • Dynamic programming in VB

    - by Rahul Jain
    Hello Everybody, We develop applications for SAP using their SDK. SAP provides a SDK for changing and handling events occuring in the user interface. For example, with this SDK we can catch a click on a button and do something on the click. This programming can be done either VB or C#. This can also be used to create new fields on the pre-existing form. We have developed a specific application which allows users to store the definition required for new field in a database table and the fields are created at the run time. So far, this is good. What we require now is that the user should be able to store the validation code for the field in the database and the same should be executed on the run time. Following is an example of such an event: Private Sub SBO_Application_ItemEvent(ByVal FormUID As String, ByRef pVal As SAPbouiCOM.ItemEvent, ByRef BubbleEvent As Boolean) Handles SBO_Application.ItemEvent Dim oForm As SAPbouiCOM.Form If pVal.FormTypeEx = "ACC_QPLAN" Then If pVal.EventType = SAPbouiCOM.BoEventTypes.et_LOST_FOCUS And pVal.BeforeAction = False Then oProdRec.ItemPressEvent(pVal) End If End If End Sub Public Sub ItemPressEvent(ByRef pVal As SAPbouiCOM.ItemEvent) Dim oForm As SAPbouiCOM.Form oForm = oSuyash.SBO_Application.Forms.GetForm(pVal.FormTypeEx, pVal.FormTypeCount) If pVal.EventType = SAPbouiCOM.BoEventTypes.et_LOST_FOCUS And pVal.BeforeAction = False Then If pVal.ItemUID = "AC_TXT5" Then Dim CardCode, ItemCode As String ItemCode = oForm.Items.Item("AC_TXT2").Specific.Value CardCode = oForm.Items.Item("AC_TXT0").Specific.Value UpdateQty(oForm, CardCode, ItemCode) End If End If End Sub So, what we need in this case is to store the code given in the ItemPressEvent in a database, and execute this in runtime. I know this is not straight forward thing. But I presume there must be some ways of getting these kind of things done. The SDK is made up of COM components. Thanks & Regards, Rahul Jain

    Read the article

  • How important is PhD research topic to getting a job?

    - by thornate
    EDIT: This has been closed and I realise that I may not have been specific enough with the original title. I ask two questions here: The general one (Does a PhD help get a job?) which has been asked elsewhere, and the specific one (Is it possible to get work outside of the specific research field?). Assume I've already decided going to do the phd. I'm just stressing about the research topic. Well, I'm one year out of university (Mechatronics engineering and Software Eng double bachelors), worked for a few months then got retrenched (yay economy!). It's looking less and less likely that I'll get a job worth having with the job market as it is, so I'm thinking about going back to uni to do a PhD. I figure that by the time I'm done, the job market will have improved and hopefully I'll have something on my resume that is more attractive than spending three years doing customer support for accounting software. So, my question is to people who've done PhD's. Would you say that they were worth the effort? How important is the research topic to future job-seeking success? The idea I have is a computer-sciencey/neural-networks/data-mining thing which I think is very interesting, but not a field I want to be in forever. My potential supervisor claims that employers don't care so much about the topic of the research but rather the peripheral skills that are developed through a PhD; time managment, self-restraint, planning and whatnot. How does this mesh with people's real world experience? I'd appreciate any advice before signing my life on the line for the next three years. See also: Should developers go to grad school? Best reason not to hire a PhD? How to find an entry-level job after you already have a graduate degree?

    Read the article

  • Ideas on C# DSL syntax

    - by Dmitriy Nagirnyak
    Hi, I am thinking about implementing a templating engine using only the plain C#/.NET 4 syntax with the benefit of static typing. Then on top of that templating language we could create Domain Specific Languages (let's say HTML4, XHTML, HTML5, RSS, Atom, Multipart Emails and so on). One of the best DSLs in .NET 4 (if not only one) is SharpDOM. It implements HTML-specific DSL. Looking at SharpDOM, I am really impressed of what you can do using .NET (4). So I believe there are some not-so-well-known ways for implementing custom DSLs in .NET 4. Possibly not as well as in Ruby, but still. So my the question would be: what are the C# (4) specific syntax features that can be used for implementing custom DSLs? Examples I can think of right now: // HTML - doesn't look tooo readable :) div(clas: "head", ul(clas: "menu", id: "main-menu", () => { foreach(var item in allItems) { li(item.Name) } }) // See how much noise it has with all the closing brackets? ) // Plain text (Email or something) - probably too simple Line("Dear {0}", user.Name); Line("You have been kicked off from this site"); For me it is really hard to come up with the syntax with least amount of noise. Please NOTE that I am not talking about another language (Boo, IronRuby etc), neither I am not talking about different templating engines (NHaml, Spark, StringTemplate etc). Thanks, Dmitriy.

    Read the article

  • What Is The Proper Location For One-Offs In VCS Repos?

    - by Joe Clark
    I have recently started using Mercurial as our VCS. Over the years, I have used RCS, CVS, and - for the last 5 years - SVN. Back 13 years ago, when I primarily used CVS and RCS, large projects went into CVS and one-offs were edited in place on the specific server and stored in RCS. This worked well as the one-offs were usually specific to the server and the servers were backed up nightly. Jump forward a decade and a lot of the one-off scripts became less centralized - they might be needed on any server at some random time. This was also OK, because now I was a begrudging SVN user. Everything (except for docs) got dumped into one repo. Jump to 2010. Now I am using Mercurial and am putting large projects in their own repo again. But what to do with the one-offs? The options as I see them:
    1. A repo for each script. It seems a bit cluttered to create a repo for every one-page script that might get run once a year.
    2. RCS. Not an option. There are many possible servers that might need a specific script.
    3. Continuing to use SVN just for one-offs. No. There is no advantage I see over the next option.
    4. Create a repo in Mercurial named "one-offs". This seems the most workable.
    The last option seems the best to me - however, is there a best practice regarding this? You also might be wondering if these scripts are truly one-offs if they will be reused. Some of them may be reused 6 months or a year from now - some, never. However, nearly all of them involve several man-hours of work due to either complex logic or extensive error checking. Simply discarding them is not efficient.

    Read the article

  • How can I write reusable Javascript?

    - by RenderIn
    I've started to wrap my functions inside of Objects, e.g.: var Search = { carSearch: function(color) { }, peopleSearch: function(name) { }, ... } This helps a lot with readability, but I continue to have issues with reusability. To be more specific, the difficulty is in two areas: Receiving parameters. A lot of times I will have a search screen with multiple input fields and a button that calls the JavaScript search function. I have to either put a bunch of code in the onclick of the button to retrieve and then marshal the values from the input fields into the function call, or I have to hardcode the HTML input field names/IDs so that I can subsequently retrieve them with JavaScript. The solution I've settled on for this is to pass the field names/IDs into the function, which it then uses to retrieve the values from the input fields. This is simple but really seems improper. Returning values. The effect of most JavaScript calls tends to be one in which some visual on the screen changes directly, or as a result of another action performed in the call. Reusability is toast when I put these screen-altering effects at the end of a function. For example, after a search is completed I need to display the results on the screen. How do others handle these issues? Putting my thinking cap on leads me to believe that I need to have a page-specific layer of JavaScript between each use in my application and the generic methods I create which are to be used application-wide. Using the previous example, I would have a search button whose onclick calls a myPageSpecificSearchFunction, in which the search field IDs/names are hardcoded, which marshals the parameters and calls the generic search function. The generic function would return data/objects/variables only, and would not directly read from or make any changes to the DOM. The page-specific search function would then receive this data back and alter the DOM appropriately. Am I on the right path or is there a better pattern to handle the reuse of JavaScript objects/methods?

    Read the article

  • Custom .NET DSL

    - by Dmitriy Nagirnyak
    Hi, I am thinking about implementing a templating engine using only plain C#/.NET 4 syntax, with the benefit of static typing. On top of that templating language we could then create domain-specific languages (say HTML4, XHTML, HTML5, RSS, Atom, multipart emails and so on). One of the best DSLs in .NET 4 (if not the only one) is SharpDOM, which implements an HTML-specific DSL. Looking at SharpDOM, I am really impressed by what you can do using .NET 4, so I believe there are some not-so-well-known ways of implementing custom DSLs in .NET 4 - possibly not as nicely as in Ruby, but still. So my question would be: what are the C# 4 specific syntax features that can be used for implementing custom DSLs? Examples I can think of right now:

        // HTML - doesn't look too readable :)
        div(clas: "head",
            ul(clas: "menu", id: "main-menu", () => {
                foreach(var item in allItems) { li(item.Name) }
            })
            // See how much noise it has with all the closing brackets?
        )

        // Plain text (Email or something) - probably too simple
        Line("Dear {0}", user.Name);
        Line("You have been kicked off from this site");

    For me it is really hard to come up with a syntax that has the least amount of noise. Please NOTE that I am not talking about another language (Boo, IronRuby etc), nor am I talking about different templating engines (NHaml, Spark, StringTemplate etc). Thanks, Dmitriy.
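    As a rough illustration of the kind of C# 4 features in question, here is a minimal, hypothetical sketch (not SharpDOM's actual API - all names here are made up) showing how optional parameters, named arguments, and lambdas can be combined into an HTML-flavoured DSL:

        using System;
        using System.Linq;

        static class Html
        {
            // Generic element builder: optional named attributes, lambda for nested content.
            public static string Tag(string name, string cls = null, string id = null,
                                     Func<string> content = null)
            {
                string attrs = (cls != null ? " class=\"" + cls + "\"" : "")
                             + (id != null ? " id=\"" + id + "\"" : "");
                string inner = content != null ? content() : "";
                return "<" + name + attrs + ">" + inner + "</" + name + ">";
            }

            public static string Div(string cls = null, string id = null, Func<string> content = null)
            { return Tag("div", cls, id, content); }

            public static string Ul(string cls = null, string id = null, Func<string> content = null)
            { return Tag("ul", cls, id, content); }

            public static string Li(string text)
            { return Tag("li", content: () => text); }
        }

        class Demo
        {
            static void Main()
            {
                var allItems = new[] { "Home", "About", "Contact" };

                // Named arguments keep the call site close to the HTML it produces.
                string html = Html.Div(cls: "head", content: () =>
                    Html.Ul(cls: "menu", id: "main-menu", content: () =>
                        string.Concat(allItems.Select(Html.Li))));

                Console.WriteLine(html);
                // <div class="head"><ul class="menu" id="main-menu"><li>Home</li>...</ul></div>
            }
        }

    Using a Func<string> for nested content sidesteps C# 4's restriction on positional arguments following named ones, at the cost of one lambda per nesting level; the trailing-bracket noise the question complains about is still present.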

    Read the article

  • MD5 Hashing Given a Key in C#

    - by Jared
    I've been looking for a way to hash a given string in C# that uses a predetermined key. On my adventures through the internet trying to find an example, I have seen lots of MD5CryptoServiceProvider examples which seem to use a default key for the machine, but none that apply a specific key. I need a specific key to encode data so as to synchronize it with someone else's server. I hand them a hashed string and an ID number, and they use those to analyze the data and return a similar set to me. So is there any way to get MD5 to hash via a specific key that would be consistent for both of us? I would prefer this to be done in C#, but if it's not possible with the libraries, can it be done in a web language such as PHP or ASP? Edit: I misunderstood the scenario I was thrown into, and after a little sitting and thinking about why they would have me use a key, it appears they want the key appended to the end of the string and then hashed. That way the server can append the key it holds to the data passed in, to verify it came from a valid accessing computer. Anyways... thanks all ^_^ Edit2: As my comment below says, it was the term 'salting' I was oblivious to. Oh, the joys of getting thrown into something new with no directions.
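    A minimal sketch of that append-the-key-then-hash (salting) approach in C#; the data and key values below are placeholders, and both sides must agree on the same encoding and concatenation order:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        class SaltedMd5Demo
        {
            // Hash "data + key" with MD5 and return a lowercase hex string.
            static string Md5WithKey(string data, string key)
            {
                using (var md5 = MD5.Create())
                {
                    byte[] input = Encoding.UTF8.GetBytes(data + key);   // key appended as a salt
                    byte[] hash = md5.ComputeHash(input);
                    return BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant();
                }
            }

            static void Main()
            {
                Console.WriteLine(Md5WithKey("order-12345", "shared-secret-key"));
            }
        }

    For a hash that is genuinely keyed rather than merely salted, HMACMD5 in the same System.Security.Cryptography namespace takes the key as a constructor argument - provided the other server computes an HMAC as well.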

    Read the article

  • Adding database results to array

    - by Jason
    Hi all, I am having a mental blank here and cannot for the life of me figure out a solution. My scenario is that I am programming in PHP and MySQL. I have a database table returning the results for a specific orderid. The query can return a maximum of 4 rows per order and a minimum of 1 row. Here is an image of how I want to return the results. I have all the order details (name, address, etc.) stored in a table named "orders". I have all the packages for that order stored in a table named "packages". What I need to do, using a loop, is get access to each specific element of the database results (i.e. package1, itemtype1, package2, itemtype2, etc.). I am using a query like this to try and get hold of just the number of items:

        $sql = "SELECT * FROM bookings_onetime_packages WHERE orderid = '".$orderid."' ORDER BY packageid DESC";
        $total = $db->database_num_rows($db->database_query($sql));
        $query = $db->database_query($sql);
        $noitems = '';
        while($info = $db->database_fetch_assoc($query)){
            $numberitems = $info['numberofitems'];
            for($i = 0; $i < $total; $i++){
                $noitems .= $numberitems[$i];
            }
        }
        print $noitems;

    I need access to each specific element because I then need to fill out a PDF template using FPDF. I hope this makes sense. Any direction would be greatly appreciated.

    Read the article

  • Setting database-agnostic default column timestamp using Hibernate

    - by unsquared
    I'm working on a Java project full of Hibernate (3.3.1) mapping files that have the following sort of declaration for most domain objects:

        <property name="dateCreated" generated="insert">
            <column name="date_created" default="getdate()" />
        </property>

    The problem here is that getdate() is an MSSQL-specific function, and when I'm using something like H2 to test subsections of the project, H2 screams that getdate() isn't a recognized function; its own timestamping function is current_timestamp(). I'd like to be able to keep working with H2 for testing, and wanted to know whether there is a way of telling Hibernate "use this database's own mechanism for retrieving the current timestamp". With H2, I've come up with the following solution:

        CREATE ALIAS getdate AS $$ java.util.Date now() { return new java.util.Date(); } $$;
        CALL getdate();

    It works, but is obviously H2-specific. I've tried extending H2Dialect and registering the function getdate(), but that doesn't seem to be invoked when Hibernate is creating tables. Is it possible to abstract the idea of a default timestamp away from the specific database engine?

    Read the article

  • Where to put data management rules for complex data validation in ASP.NET MVC?

    - by TheRHCP
    Hello, I am currently working on an ASP.NET MVC 2 project. This is the first time I have worked on a real MVC web application. The ASP.NET MVC website really helped me get started quickly, but my knowledge of data model validation is still patchy. My problem is that I do not really know where to manage my filled data model when it comes to complex validation rules. For example, validating a string field with a regex is easy: I just decorate the field with a specific attribute, so that rule lives in the model. But if I have multiple fields that need to be validated against each other - for example, several datetime fields that must respect a specific ordering rule - where should I validate them? I know that I could create my own validation attributes, but sometimes validation requires a specific validation path that is too complex to express with attributes. This first question leads to a related one: is it acceptable to validate a model in the controller? For the moment that is the only way I have found for complex validation, but it feels a bit dirty, does not really fit the controller's role, and is much harder to test (multiple code paths). Thanks.
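    One way to keep a cross-field rule in the model rather than the controller is a class-level validation attribute. The sketch below is illustrative only - the Booking type, its properties, and the rule are made up, and it assumes the framework's default model binder evaluates class-level DataAnnotations attributes:

        using System;
        using System.ComponentModel.DataAnnotations;

        // Class-level attribute: it receives the whole model, so it can compare fields.
        [AttributeUsage(AttributeTargets.Class)]
        public class DatesInOrderAttribute : ValidationAttribute
        {
            public override bool IsValid(object value)
            {
                var booking = value as Booking;
                if (booking == null) return true;             // nothing to check
                return booking.StartDate <= booking.EndDate;  // the cross-field rule
            }
        }

        [DatesInOrder(ErrorMessage = "The start date must not be later than the end date.")]
        public class Booking
        {
            public DateTime StartDate { get; set; }
            public DateTime EndDate { get; set; }
        }

        class Demo
        {
            static void Main()
            {
                var bad = new Booking { StartDate = DateTime.Today, EndDate = DateTime.Today.AddDays(-1) };
                Console.WriteLine(new DatesInOrderAttribute().IsValid(bad)); // False
            }
        }

    Validation paths too tangled even for a class-level attribute are often pushed into a separate service or domain layer that the controller merely invokes, which keeps the controller thin and the rule unit-testable.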

    Read the article
