Search Results

Search found 6723 results on 269 pages for 'django models'.


  • How can I add two models in one form, where one model is a has_many :through?

    - by Angela
    How do I model having multiple Addresses for a Company and assign a single Address to a Contact? Contacts belong_to a Company. A Company has_many Contacts. A Company also has_many Addresses. And each Contact has_one Address. I want to be able, whenever I create a new Contact, to access all the addresses in all Contacts that belong to the Company. Here are my models:

        class Company < ActiveRecord::Base
          attr_accessible :name, :phone, :addresses
          has_many :contacts
          has_many :addresses, :through => :contacts
        end

        class Contact < ActiveRecord::Base
          attr_accessible :first_name, :last_name, :title, :phone, :fax, :email, :company, :date_entered, :campaign_id, :company_name, :address
          belongs_to :company
          has_one :address
          accepts_nested_attributes_for :address
        end

        class Address < ActiveRecord::Base
          attr_accessible :street1
          has_many :contacts
        end

    How do I create the view in the _form for Contacts so that I can 1) add an Address when creating a Contact, and 2) display the options of the Address? Here is how I am doing step 1, which is just to add a new address for a new contact:

        <% f.fields_for :addresses do |builder| %>
          <p>
            <%= builder.label :street1, "Street 1" %></br>
            <%= builder.text_field :street1 %>
          </p>
        <% end %>

    Right now, what I have doesn't work. The console says I cannot mass-assign addresses when I hit "submit" on this new Contact form.
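
    A hedged sketch of one way to untangle this (untested Rails 2-era code; assumes an addresses.company_id and a contacts.address_id column): let the Company own the address book directly and point each Contact at one Address. The mass-assignment error usually means the nested key (:address_attributes) is missing from attr_accessible.

        class Company < ActiveRecord::Base
          has_many :contacts
          has_many :addresses            # addresses table carries company_id
        end

        class Contact < ActiveRecord::Base
          belongs_to :company
          belongs_to :address            # contacts table carries address_id
          accepts_nested_attributes_for :address
          attr_accessible :first_name, :last_name, :address_attributes
        end

        class Address < ActiveRecord::Base
          attr_accessible :street1
          belongs_to :company
          has_many :contacts
        end

    With a singular association, the builder call should also be the singular f.fields_for :address, and company.addresses then gives the pool of existing addresses to offer when creating a new Contact.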

    Read the article

  • How important is it that models be consistent across project components?

    - by RonLugge
    I have a project with two components, a server-side component and a client-side component. For various reasons, the client-side device doesn't carry a full copy of the database around. How important is it that my models have a 1:1 correlation between the two sides? And, to extend the question to my bigger concern, are there any time-bombs I'm going to run into down the line if they don't? I'm not talking about having different information on each side, but rather that the way the information is encapsulated will vary. (Obviously, storage mechanisms will also vary.) The server side will store each user, each review, and each 'item' in separate tables, and create links between them to gather data as necessary. The client side shouldn't have a complete user database, however, so rather than link against the user for gathering things like 'name', I'd store that on the review. In other words...

        --- Server Side ---
        Item:
          +id            // Store stuff about the item
        User:
          +id
          +Name
          -Password
        Review:
          +id
          +itemId
          +rating
          +text
          +userId

        --- Device Side ---
        Item:
          +id
          +AverageRating
        Review:
          +id
          +rating
          +text
          +userId
          +name
        User:
          +id
          +Name          // Stuff

    The basic idea is that certain 'critical' information gets moved one level 'up'. A user gets the list of 'items' relevant to their query, with certain review-oriented data moved up (i.e. average rating). If they want more info, they query the detail view for the item, and the actual reviews get queried and added to the dataset (and displayed). If they query the actual review, the review gets queried and they pick up some additional user info along the way (maybe; I'm not sure the user would have any use for the additional user information). My basic concern is that I don't want to glut the user's bandwidth or local storage with a huge variety of information that they just don't need, even if proper database normalization suggests that information REALLY should be stored at a 'lower' level. I've phrased this as a fairly low-level conceptual issue because that's the level I'm trying to think / worry over, but if it matters I'm creating a PHP / MySQL server that provides data for an iOS / CoreData client.

    Read the article

  • How can I have a single helper work on different models passed to it?

    - by Angela
    I am probably going to need to refactor in two steps, since I'm still developing the project and learning the use-cases as I go along (it is to scratch my own itch). I have three models: Letters, Calls, Emails. They have some similarity, but I anticipate they will also have some different attributes, as you can tell from their descriptions. Ideally I could refactor them as Events, with types Letters, Calls, and Emails, but I didn't know how to extend subclasses. My immediate need is this: I have a helper which checks on the status of whether an email (for example) was sent to a specific contact:

        def show_email_status(contact, email)
          @contact_email = ContactEmail.find(:first, :conditions => {:contact_id => contact.id, :email_id => email.id })
          if ! @contact_email.nil?
            return @contact_email.status
          end
        end

    I realized that I, of course, want to know the status of whether a call was made to a contact as well, so I wrote:

        def show_call_status(contact, call)
          @contact_call = ContactCall.find(:first, :conditions => {:contact_id => contact.id, :call_id => call.id })
          if ! @contact_call.nil?
            return @contact_call.status
          end
        end

    I would love to be able to just have a single helper show_status where I can say show_status(contact, call) or show_status(contact, email) and it would know whether to look for the object @contact_call or @contact_email. Yes, it would be easier if it were just @contact_event, but I want to do a small refactoring while I get the program up and running, and this would make the ability to do a history for a given contact much easier. Thanks!
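
    A hedged sketch of a generic helper (untested; relies on the ContactEmail/ContactCall naming convention shown above):

        # Derive the join model and foreign key from the passed object's
        # class, so one helper serves calls, emails, and letters alike.
        def show_status(contact, event)
          klass_name  = event.class.name                      # e.g. "Call"
          join_model  = "Contact#{klass_name}".constantize    # e.g. ContactCall
          foreign_key = "#{klass_name.underscore}_id".to_sym  # e.g. :call_id
          record = join_model.find(:first, :conditions => { :contact_id => contact.id, foreign_key => event.id })
          record.status unless record.nil?
        end

    With that in place, show_status(contact, call) and show_status(contact, email) both work without separate helpers.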

    Read the article

  • Why do I get 'Connection refused - connect(2)' for some models?

    - by Will
    I have a Rails application that had been running for the past 90 days and suddenly stopped working. Debugging the problem, I found that I can read from the DB but not write to it. At least for certain models: there is one model that I can save, whereas all others return Connection refused - connect(2) when I attempt to save them. They all used to work fine last month. I have no idea how to determine what the problem may be. Unfortunately I do not have access to the actual server remotely right now, so I am limited in my debugging ability. I was able to get some non-tech people to run simple commands, though, that may help identify my problem. I will also be getting access tomorrow at some point.

    1. Check from the console:

        ./script/console
        >> a = Post.last.clone
        => #<Post id: nil, title: "test"...
        >> a.ex_id = 7
        >> a.save
        Connection refused - connect(2)
        ...

        >> b = Story.last.clone
        => #<Story id: nil, title: "test"...
        >> b.ex_id = 7
        >> b.save
        => true

    I am not sure why this works for Story and not Post. This is consistent over many tests.

    2. Check from MySQL:

        ./script/dbconsole -p
        mysql> INSERT INTO Posts (`title`,`body`, `ex_id`) SELECT `title`, `body`, 7 FROM Posts WHERE ID = 1;
        Query OK, 1 row affected (0.01 sec)
        Records: 1  Duplicates: 0  Warnings: 0

    As you can see, I am able to write to the table with the same credentials that Rails uses. Does anyone know why I get connection refused in the console?
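
    A hedged debugging idea, not a verified diagnosis: since MySQL accepts the same INSERT directly, the refusal probably comes from something only certain models do on save, such as a callback or observer contacting an external service (a search indexer, cache, or message daemon). A quick way to compare the two models (paths assume a standard Rails layout):

        # list save-time hooks and plugin macros on the two models
        grep -n "before_save\|after_save\|acts_as" app/models/post.rb app/models/story.rb
        # observers are often registered centrally
        grep -rn "observer" config/environment.rb app/models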

    Read the article

  • Which Code Should Go Where in MVC Structure

    - by Oguz
    My problem is somewhere between the model and the controller. Everything works perfectly for me when I use MVC just for CRUD (create, read, update, delete). I have separate models for each database table. I access these models from the controller, to CRUD them. For example, in a contacts application, I have actions (create, read, update, delete) in the controller (contact) that use the model's (contact) methods (create, read, update, delete). The problem starts when I try to do something more complicated. There are some complex processes which I do not know where to put. For example, the user-registration process: I cannot just finish this process in the user model, because I have to use other models too (sending mails, creating other records for the user via other models) and do lots of complex validations via other models. For example, in some complex search processes, I have to access lots of models (articles, videos, images, etc.). Or, sometimes, I have to use APIs to decide what I will do next or which database model I will use to record data. So where is the place to do these complicated processes? I do not want to do them in controllers, because sometimes I need to use these processes in other controllers too. And I do not want to put these processes in models, because I use models as database access layers. Maybe I am wrong; I want to know. Thank you for your answer.
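
    One common resolution is a service layer between controllers and models; a sketch follows, written in PHP only because the question names no framework, with all class and method names illustrative:

        <?php
        // A service-layer sketch: the controller stays thin, and the
        // multi-model workflow is reusable from any controller.
        class UserRegistrationService
        {
            private $users;
            private $profiles;
            private $mailer;

            public function __construct(UserModel $users, ProfileModel $profiles, Mailer $mailer)
            {
                $this->users    = $users;
                $this->profiles = $profiles;
                $this->mailer   = $mailer;
            }

            public function register(array $input)
            {
                // cross-model validation, record creation, and side effects
                // live here instead of in any single controller action
                $user = $this->users->create($input);
                $this->profiles->createDefaultsFor($user);
                $this->mailer->sendWelcome($user);
                return $user;
            }
        }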

    Read the article

  • Python virtualenv questions

    - by orokusaki
    I'm using VirtualEnv on Windows XP. I'm wondering if I have my brain wrapped around it correctly. I ran virtualenv ENV and it created C:\WINDOWS\system32\ENV. I then changed my PATH variable to include C:\WINDOWS\system32\ENV\Scripts instead of C:\Python27\Scripts. Then, I checked out Django into C:\WINDOWS\system32\ENV\Lib\site-packages\django-trunk, updated my PYTHON_PATH variable to point to the new Django directory, and continued to easy_install other things (which of course go into my new C:\WINDOWS\system32\ENV\Lib\site-packages directory). I understand why I should use VirtualEnv so I can run multiple versions of Django and other libraries on the same machine, but does this mean that to switch between environments I have to basically change my PATH and PYTHON_PATH variables? So, I go from developing one Django project which uses Django 1.2 in an environment called ENV, and then change my PATH and such so that I can use an environment called ENV2 which has the dev version of Django? Is that basically it, or is there some better way to automatically do all this (I could update my path in Python code, but that would require me to write machine-specific code in my application)? Also, how does this process compare to using VirtualEnv on Linux (I'm quite the beginner at Linux)?
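
    For what it's worth, a sketch of the usual workflow (paths illustrative): each environment ships an activate script that prepends its Scripts directory to PATH for the current shell only, so the global PATH never needs hand-editing:

        C:\> C:\envs\ENV\Scripts\activate.bat
        (ENV) C:\> python -c "import django; print django.get_version()"
        (ENV) C:\> deactivate
        C:\> C:\envs\ENV2\Scripts\activate.bat
        (ENV2) C:\>

    On Linux the equivalent is source ENV/bin/activate; the mechanism is otherwise the same.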

    Read the article

  • How to use a nested form for multiple models in one form?

    - by Magicked
    I'm struggling to come up with the proper way to design a form that will allow me to input data for two different models. The form is for an 'Incident', which has the following relationships:

        belongs_to :customer
        belongs_to :user
        has_one :incident_status
        has_many :incident_notes
        accepts_nested_attributes_for :incident_notes, :allow_destroy => false

    So an incident is assigned to a 'Customer' and a 'User', and the user is able to add 'Notes' to the incident. I'm having trouble with the notes part of the form. Here is how the form is being submitted:

        {"commit"=>"Create",
         "authenticity_token"=>"ECH5Ziv7JAuzs53kt5m/njT9w39UJhfJEs2x0Ms2NA0=",
         "customer_id"=>"4",
         "incident"=>{"title"=>"Something bad",
                      "incident_status_id"=>"2",
                      "user_id"=>"2",
                      "other_id"=>"AAA01-042310-001",
                      "incident_note"=>{"note"=>"This is a note"}}}

    It appears to be attempting to add the incident_note as a field under 'Incident', rather than creating a new entry in the incident_note table with an incident_id foreign key linking back to the incident. Here is the 'IncidentNote' model:

        belongs_to :incident
        belongs_to :user

    Here is the form for 'Incident':

        <% form_for([@customer, @incident]) do |f| %>
          <%= f.error_messages %>
          <p>
            <%= f.label :other_id, "ID" %><br />
            <%= f.text_field :capc_id %>
          </p>
          <p>
            <%= f.label :title %><br />
            <%= f.text_field :title %>
          </p>
          <p>
            <%= label_tag 'user', 'Assign to user?' %>
            <%= f.select :user_id, @users.collect {|u| [u.name, u.id]} %>
          </p>
          <p>
            <%= f.label :incident_status, 'Status?' %>
            <%= f.select :incident_status_id, @statuses.collect {|s| [s.name, s.id]} %>
          </p>
          <p>
            <% f.fields_for :incident_note do |inote_form| %>
              <%= inote_form.label :note, 'Add a Note' %>
              <%= inote_form.text_area :note, :cols => 40, :rows => 20 %>
            <% end %>
          </p>
          <p>
            <%= f.submit "Create" %>
          </p>
        <% end %>

    And finally, here are the incident_controller entries for new and create. New:

        def new
          @customer = current_user.customer
          @incident = Incident.new
          @users = @customer.users
          @statuses = IncidentStatus.find(:all)
          @incident_note = IncidentNote.new
          respond_to do |format|
            format.html # new.html.erb
            format.xml  { render :xml => @incident }
          end
        end

    Create:

        def create
          @users = @customer.users
          @statuses = IncidentStatus.find(:all)
          @incident = Incident.new(params[:incident])
          @incident.customer = @customer
          @incident_note = @incident.incident_note.build(params[:incident_note])
          @incident_note.user = current_user
          respond_to do |format|
            if @incident.save
              flash[:notice] = 'Incident was successfully created.'
              format.html { redirect_to(@incident) }
              format.xml  { render :xml => @incident, :status => :created, :location => @incident }
            else
              format.html { render :action => "new" }
              format.xml  { render :xml => @incident.errors, :status => :unprocessable_entity }
            end
          end
        end

    I'm not really sure where to look at this point. I'm sure it's just a limitation of my current Rails skill (I don't know much), so if anyone can point me in the right direction I would be very appreciative. Please let me know if more information is needed. Thanks!
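
    A hedged sketch of the usual fix (untested): with has_many :incident_notes plus accepts_nested_attributes_for, the form should use the plural association, so Rails posts incident_notes_attributes inside params[:incident] and builds the child rows itself:

        # incidents controller, new action: seed one empty note for the form
        def new
          @incident = Incident.new
          @incident.incident_notes.build
          # ... users, statuses, respond_to as before ...
        end

        # _form view: plural fields_for, matching the association name
        # <% f.fields_for :incident_notes do |inote_form| %>
        #   <%= inote_form.label :note, 'Add a Note' %>
        #   <%= inote_form.text_area :note, :cols => 40, :rows => 20 %>
        # <% end %>

        # create action: the nested attributes arrive inside params[:incident],
        # so no manual build is needed
        def create
          @incident = Incident.new(params[:incident])
          @incident.customer = @customer
          @incident.incident_notes.each { |n| n.user = current_user }
          # ... save and respond_to as before ...
        end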

    Read the article

  • How can I keep my MVC Views, models, and model binders as clean as possible?

    - by MBonig
    I'm rather new to MVC, and as I'm getting into the whole framework more and more, I'm finding the model binders are becoming tough to maintain. Let me explain... I am writing a basic CRUD-over-database app. My domain models are going to be very rich. In an attempt to keep my controllers as thin as possible, I've set it up so that on Create/Edit commands the parameter for the action is a richly populated instance of my domain model. To do this I've implemented a custom model binder. As a result, though, this custom model binder is very specific to the view and the model. I've decided to just override the DefaultModelBinder that ships with MVC 2. In the case where the field being bound to my model is just a textbox (or something as simple), I just delegate to the base method. However, when I'm working with a dropdown or something more complex (the UI dictates that date and time are separate data-entry fields, but for the model it is one property), I have to perform some checks and some manual data munging. The end result of this is that I have some pretty tight ties between the view and the binder. I'm architecturally fine with this, but from a code-maintenance standpoint it's a nightmare. For example, the model I'm binding here is of type Log (this is the object I will get as a parameter on my action). ServiceStartTime is a property on Log. The form values "log.ServiceStartDate" and "log.ServiceStartTime" are totally arbitrary and come from two textboxes on the form (Html.TextBox("log.ServiceStartTime", ...)):

        protected override object GetPropertyValue(ControllerContext controllerContext,
            ModelBindingContext bindingContext, PropertyDescriptor propertyDescriptor,
            IModelBinder propertyBinder)
        {
            if (propertyDescriptor.Name == "ServiceStartTime")
            {
                string date = bindingContext.ValueProvider.GetValue("log.ServiceStartDate").ConvertTo(typeof(string)) as string;
                string time = bindingContext.ValueProvider.GetValue("log.ServiceStartTime").ConvertTo(typeof(string)) as string;
                DateTime dateTime = DateTime.Parse(date + " " + time);
                return dateTime;
            }
            if (propertyDescriptor.Name == "ServiceEndTime")
            {
                string date = bindingContext.ValueProvider.GetValue("log.ServiceEndDate").ConvertTo(typeof(string)) as string;
                string time = bindingContext.ValueProvider.GetValue("log.ServiceEndTime").ConvertTo(typeof(string)) as string;
                DateTime dateTime = DateTime.Parse(date + " " + time);
                return dateTime;
            }

    Log.ServiceEndTime is a similar field. This doesn't feel very DRY to me. First, if I refactor ServiceStartTime or ServiceEndTime into different field names, the text strings may get missed (although my refactoring tool of choice, R#, is pretty good at this sort of thing, it wouldn't cause a build-time failure and would only get caught by manual testing). Second, if I decided to arbitrarily change the descriptors "log.ServiceStartDate" and "log.ServiceStartTime", I would run into the same problem. To me, runtime silent errors are the worst kind of error out there. So, I see a couple of options to help here and would love to get some input from people who have come across some of these issues:

    1. Refactor any text strings in common between the views and model binders out into const strings attached to the ViewModel object I pass from controller to the aspx/ascx view. This pollutes the ViewModel object, though.
    2. Provide unit tests around all of the interactions. I'm a big proponent of unit tests and haven't started fleshing this option out, but I've got a gut feeling that it won't save me from foot-shootings.
If it matters, the Log and other entities in the system are persisted to the database using Fluent NHibernate. I really want to keep my controllers as thin as possible. So, any suggestions here are greatly welcomed! Thanks
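
    A hedged sketch of option 1 (all names illustrative): a single class owns each form-field key, and both the view and the binder read from it, so a rename is one edit instead of a scavenger hunt:

        public static class LogFormFields
        {
            public const string ServiceStartDate = "log.ServiceStartDate";
            public const string ServiceStartTime = "log.ServiceStartTime";
            public const string ServiceEndDate   = "log.ServiceEndDate";
            public const string ServiceEndTime   = "log.ServiceEndTime";
        }

        // In the view:   <%= Html.TextBox(LogFormFields.ServiceStartTime) %>
        // In the binder: bindingContext.ValueProvider.GetValue(LogFormFields.ServiceStartTime)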

    Read the article

  • Fraud Detection with the SQL Server Suite Part 1

    - by Dejan Sarka
    While working on different fraud detection projects, I developed my own approach to solving this problem. In my PASS Summit 2013 session I am introducing this approach. I also wrote a whitepaper on the same topic, which was generously reviewed by my friend Matija Lah. In order to spread this knowledge faster, I am starting a series of blog posts which will, in the end, make up the whole whitepaper.

    Abstract

    With the massive usage of credit cards and web applications for banking and payment processing, the number of fraudulent transactions is growing rapidly and on a global scale. Several fraud detection algorithms are available within a variety of different products. In this paper, we focus on using the Microsoft SQL Server suite for this purpose. In addition, we will explain our original approach to solving the problem by introducing a continuous learning procedure. Our preferred type of service is mentoring; it allows us to perform the work and consulting together with transferring the knowledge onto the customer, thus making it possible for a customer to continue to learn independently. This paper is based on practical experience with different projects covering online banking and credit card usage.

    Introduction

    A fraud is a criminal or deceptive activity with the intention of achieving financial or some other gain. Fraud can appear in multiple business areas. You can find a detailed overview of the business domains where fraud can take place in Sahin Y., & Duman E. (2011), Detecting Credit Card Fraud by Decision Trees and Support Vector Machines, Proceedings of the International MultiConference of Engineers and Computer Scientists 2011 Vol 1. Hong Kong: IMECS.

    Dealing with frauds includes fraud prevention and fraud detection. Fraud prevention is a proactive mechanism, which tries to disable frauds by using previous knowledge. Fraud detection is a reactive mechanism with the goal of detecting suspicious behavior when a fraudster surpasses the fraud prevention mechanism. A fraud detection mechanism checks every transaction and assigns it a weight, in terms of probability between 0 and 1, that represents a score for evaluating whether the transaction is fraudulent or not. A fraud detection mechanism cannot detect frauds with a probability of 100%; therefore, manual transaction checking must also be available. With fraud detection, this manual part can focus on the most suspicious transactions. This way, an unchanged number of supervisors can detect significantly more frauds than could be achieved with traditional methods of selecting which transactions to check, for example with random sampling.

    There are two principal data mining techniques available, both in general data mining and in specific fraud detection techniques: supervised (or directed) and unsupervised (or undirected). Supervised techniques or data mining models use previous knowledge. Typically, existing transactions are marked with a flag denoting whether a particular transaction is fraudulent or not. Customers at some point in time do report frauds, and the transactional system should be capable of accepting such a flag. Supervised data mining algorithms try to explain the value of this flag by using different input variables. When the patterns and rules that lead to frauds are learned through the model training process, they can be used for prediction of the fraud flag on new incoming transactions.
    Unsupervised techniques analyze data without prior knowledge, without the fraud flag; they try to find transactions which do not resemble other transactions, i.e. outliers. In both cases, there should be more frauds in the data set selected for checking by using the data mining knowledge compared to selecting the data set with simpler methods; this is known as the lift of a model. Typically, we compare the lift with random sampling. The supervised methods typically give a much better lift than the unsupervised ones. However, we must use the unsupervised ones when we do not have any previous knowledge. Furthermore, unsupervised methods are useful for controlling whether the supervised models are still efficient.

    Accuracy of the predictions drops over time. Patterns of credit card usage, for example, change over time. In addition, fraudsters continuously learn as well. Therefore, it is important to check the efficiency of the predictive models with the undirected ones. When the difference between the lift of the supervised models and the lift of the unsupervised models drops, it is time to refine the supervised models. However, the unsupervised models can become obsolete as well. It is also important to measure the overall efficiency of both supervised and unsupervised models over time. We can compare the number of predicted frauds with the total number of frauds, which includes predicted and reported occurrences. For measuring behavior across time, specific analytical databases called data warehouses (DW) and on-line analytical processing (OLAP) systems can be employed. By controlling the supervised models with unsupervised ones, and by using an OLAP system or DW reports to control both, a continuous learning infrastructure can be established.

    There are many difficulties in developing a fraud detection system. As has already been mentioned, fraudsters continuously learn, and the patterns change. The exchange of experiences and ideas can be very limited due to privacy concerns. In addition, both data sets and results might be censored, as the companies generally do not want to publicly expose actual fraudulent behaviors. Therefore it can be quite difficult, if not impossible, to cross-evaluate the models using data from different companies and different business areas. This fact stresses the importance of continuous learning even more. Finally, the number of frauds in the total number of transactions is small: typically much less than 1% of transactions are fraudulent. Some predictive data mining algorithms do not give good results when the target state is represented with a very low frequency. Data preparation techniques like oversampling and undersampling can help overcome the shortcomings of many algorithms.

    The SQL Server suite includes all of the software required to create, deploy, and maintain a fraud detection infrastructure. The Database Engine is the relational database management system (RDBMS), which supports all activity needed for data preparation and for data warehouses. SQL Server Analysis Services (SSAS) supports OLAP and data mining (in version 2012, you need to install SSAS in multidimensional and data mining mode; this was the only mode in previous versions of SSAS, while SSAS 2012 also supports the tabular mode, which does not include data mining). Additional products from the suite can be useful as well. SQL Server Integration Services (SSIS) is a tool for developing extract-transform-load (ETL) applications.
    SSIS is typically used for loading a DW, and in addition, it can use SSAS data mining models for building intelligent data flows. SQL Server Reporting Services (SSRS) is useful for presenting the results in a variety of reports. Data Quality Services (DQS) mitigates the occasional data cleansing process by maintaining a knowledge base. Master Data Services is an application that helps companies maintain a central, authoritative source of their master data, i.e. the most important data to any organization.

    For an overview of the SQL Server business intelligence (BI) part of the suite that includes the Database Engine, SSAS and SSRS, please refer to Veerman E., Lachev T., & Sarka D. (2009). MCTS Self-Paced Training Kit (Exam 70-448): Microsoft® SQL Server® 2008 Business Intelligence Development and Maintenance. MS Press. For an overview of the enterprise information management (EIM) part that includes SSIS, DQS and MDS, please refer to Sarka D., Lah M., & Jerkic G. (2012). Training Kit (Exam 70-463): Implementing a Data Warehouse with Microsoft® SQL Server® 2012. O'Reilly. For details about SSAS data mining, please refer to MacLennan J., Tang Z., & Crivat B. (2009). Data Mining with Microsoft SQL Server 2008. Wiley.

    SQL Server Data Mining Add-ins for Office, a free download for Office versions 2007, 2010 and 2013, bring the power of data mining to Excel, enabling advanced analytics in Excel. Together with PowerPivot for Excel, which is also freely downloadable for Excel 2010 and already included in Excel 2013, they bring OLAP functionality directly into Excel, making it possible for an advanced analyst to build a complete learning infrastructure using a familiar tool. This way, many more people, including employees in subsidiaries, can contribute to the learning process by examining local transactions and quickly identifying new patterns.
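
    One quantitative note on the lift mentioned above, added here as a hedged illustration (a standard formulation, not quoted from the whitepaper):

        \[
        \text{lift} = \frac{P(\text{fraud} \mid \text{selected by the model})}{P(\text{fraud})}
        \]

    For example, if 0.5% of all transactions are fraudulent but 5% of the transactions the model selects for manual checking turn out to be fraudulent, the lift is 0.05 / 0.005 = 10; random sampling has a lift of 1 by definition.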

    Read the article

  • 400 error with nginx subdomains over https

    - by aquavitae
    Not sure what I'm doing wrong, but I'm trying to get gunicorn/django through nginx using only https. Here is my nginx configuration:

        upstream app_server {
            server unix:/srv/django/app/run/gunicorn.sock fail_timeout=0;
        }

        server {
            listen 80;
            return 301 https://$host$request_uri;
        }

        server {
            listen 443;
            server_name app.mydomain.com;

            ssl on;
            ssl_certificate /etc/nginx/ssl/nginx.crt;
            ssl_certificate_key /etc/nginx/ssl/nginx.key;

            client_max_body_size 4G;
            access_log /srv/django/app/logs/nginx-access.log;
            error_log /srv/django/app/logs/nginx-error.log;

            location /static/ {
                alias /srv/django/app/data/static/;
            }

            location /media/ {
                alias /wrv/django/app/data/media/;
            }

            location / {
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto https;
                proxy_set_header Host $http_host;
                proxy_pass http://app_server;
            }
        }

    I get a 400 error on app.mydomain.com, but the app is published on mydomain.com. Is there an error in my configuration?
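
    A hedged guess at one likely culprit, assuming a Django version recent enough to enforce ALLOWED_HOSTS: Django itself answers 400 Bad Request when the request's Host header is not whitelisted, which would explain the proxy working on one name and failing on the subdomain.

        # settings.py (a sketch; domain names as in the question)
        ALLOWED_HOSTS = ['mydomain.com', 'app.mydomain.com']

    If the Django version predates ALLOWED_HOSTS, the /wrv/ vs. /srv/ path in the media alias above would be worth a second look as well, though that would surface as 404s on media rather than 400s.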

    Read the article

  • Is there a CPU that can be described as "Celeron D 4xx model"?

    - by romkyns
    The "D" letter after Celeron appears to only be used for processors numbered with 3xx. Celerons of the 4xx series do not seem to have the "D". And yet I am looking at a motherboard described as supporting these processors: Intel Celeron D 3xx and 4xx models Intel Pentium 4 5xx and 6xx models Intel Pentium D 8xx and 9xx models Intel Core 2 Duo models with LGA775 Is this compatible with a Celeron 450, sSpec SLAFZ, despite not having a "D" in its name?

    Read the article

  • How to implement Template Inheritance (like Django?) in PHP5

    - by anonymous coward
    Is there an existing good example, or how should one approach creating a basic Template system (thinking MVC) that supports "Template Inheritance" in PHP5? For an example of what I define as Template Inheritance, refer to the Django (a Python framework for web development) Templates documentation: http://docs.djangoproject.com/en/dev/topics/templates/#id1 I especially like the idea of PHP itself being the "template language", though it's not necessarily a requirement. If listing existing solutions that implement "Template Inheritance", please try to form answers as individual systems, for the benefit of 'popular vote'.
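
    Since PHP itself ships nothing like this, here is a minimal sketch of Django-style block inheritance using output buffering (untested; all function and file names are illustrative):

        <?php
        // helpers.php: a minimal block registry
        $GLOBALS['__blocks'] = array();
        $GLOBALS['__stack']  = array();

        function block($name)   // begin overriding a named block
        {
            $GLOBALS['__stack'][] = $name;
            ob_start();
        }

        function endblock()     // finish the override and store it
        {
            $name = array_pop($GLOBALS['__stack']);
            $GLOBALS['__blocks'][$name] = ob_get_clean();
        }

        function show($name, $default = '')   // called by the parent layout
        {
            $b = $GLOBALS['__blocks'];
            echo isset($b[$name]) ? $b[$name] : $default;
        }
        ?>

        <?php /* child.php: fill the blocks, then pull in the parent layout */ ?>
        <?php block('title'); ?>My Page<?php endblock(); ?>
        <?php block('content'); ?><p>Hello, world.</p><?php endblock(); ?>
        <?php include 'layout.php'; /* layout.php calls show('title') and show('content') in its markup */ ?>

    This keeps PHP itself as the template language, as the question prefers: a child template overrides named blocks and falls back to the parent's defaults otherwise.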

    Read the article

  • Grails benchmarks compared to other web MVC platform (Rails, Django, ASP MVC)?

    - by fabien7474
    I have been searching the web for recent benchmarks measuring Grails' overall performance compared to its competitors (Rails, Django, ASP.NET MVC...), but I didn't find anything more recent than a 3-year-old article using an obsolete Grails version (0.5). See here and here. So, starting from Grails 1.2, are there any more recent Grails benchmarks you are aware of? Or do you have your own performance tests for Grails (compared to others, if possible)?

    Read the article

  • Are there any tools to easily download or upload data from GAE?

    - by zjm1126
    I found this: http://aralbalkan.com/1784, but: "Gaebar is an easy-to-use, standalone Django application that you can plug in to your existing Google App Engine Django or app-engine-patch-based Django applications on Google App Engine to give them datastore backup and restore functionality." My app is not based on Django, so do you know of any tools that make this easy? Thanks.
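
    A hedged pointer (flags vary by SDK version, so treat this as a sketch): the App Engine SDK's own bulk loader talks to the datastore through remote_api and does not require Django at all. After enabling remote_api in app.yaml, something along these lines:

        appcfg.py download_data --url=http://yourapp.appspot.com/_ah/remote_api --filename=backup.data
        appcfg.py upload_data   --url=http://yourapp.appspot.com/_ah/remote_api --filename=backup.data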

    Read the article

  • Common DataAnnotations in ASP.Net MVC2

    - by Scott Mayfield
    Howdy, I have what should be a simple question. I have a set of validations that use System.ComponentModel.DataAnnotations. Some validations are specific to certain view models, so I'm comfortable with having that validation code in the same file as my models (as in the default AccountModels.cs file that ships with MVC2). But I have some common validations that apply to several models as well (valid email address format, for example). When I cut/paste that validation to the second model that needs it, of course I get a duplicate definition error because they're in the same namespace (ProjectName.Models). So I thought of moving the common validations to a separate class within the namespace, expecting that all of my view models would be able to access the validations from there. Unexpectedly, the validations are no longer accessible. I've verified that they are still in the same namespace, and they are all public. I wouldn't expect that I would have to have any specific reference to them (I tried adding a using statement for the same namespace, but that didn't resolve it, and via the Add References dialog a project can't reference itself, which makes sense). So any idea why public validations that have simply been moved to another file in the same namespace aren't visible to my models? CommonValidations.cs:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.ComponentModel;
        using System.ComponentModel.DataAnnotations;
        using System.Text.RegularExpressions;

        namespace ProjectName.Models
        {
            public class CommonValidations
            {
                [AttributeUsage(AttributeTargets.Field | AttributeTargets.Property, AllowMultiple = true, Inherited = true)]
                public sealed class EmailFormatValidAttribute : ValidationAttribute
                {
                    public override bool IsValid(object value)
                    {
                        if (value != null)
                        {
                            var expression = @"^[a-zA-Z][\w\.-]*[a-zA-Z0-9]@[a-zA-Z0-9][\w\.-]*[a-zA-Z0-9]\.[a-zA-Z][a-zA-Z\.]*[a-zA-Z]$";
                            return Regex.IsMatch(value.ToString(), expression);
                        }
                        else
                        {
                            return false;
                        }
                    }
                }
            }
        }

    And here's the code that I want to use the validation from:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.ComponentModel;
        using System.ComponentModel.DataAnnotations;
        using Growums.Models;

        namespace ProjectName.Models
        {
            public class PrivacyModel
            {
                [Required(ErrorMessage="Required")]
                [EmailFormatValid(ErrorMessage="Invalid Email")]
                public string Email { get; set; }
            }
        }
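
    A hedged reading of the code above: the attribute type is nested inside the CommonValidations class, so its unqualified name is invisible to other files; as a nested type it would have to be written as [CommonValidations.EmailFormatValid]. Declaring the attribute directly in the namespace makes the short name resolve. A sketch:

        using System;
        using System.ComponentModel.DataAnnotations;
        using System.Text.RegularExpressions;

        namespace ProjectName.Models
        {
            // No enclosing CommonValidations wrapper: the attribute now lives
            // directly in the namespace, so [EmailFormatValid] works from any
            // model file in (or importing) ProjectName.Models.
            [AttributeUsage(AttributeTargets.Field | AttributeTargets.Property, AllowMultiple = true, Inherited = true)]
            public sealed class EmailFormatValidAttribute : ValidationAttribute
            {
                public override bool IsValid(object value)
                {
                    if (value == null) return false;
                    var expression = @"^[a-zA-Z][\w\.-]*[a-zA-Z0-9]@[a-zA-Z0-9][\w\.-]*[a-zA-Z0-9]\.[a-zA-Z][a-zA-Z\.]*[a-zA-Z]$";
                    return Regex.IsMatch(value.ToString(), expression);
                }
            }
        }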

    Read the article

  • The model item passed into the dictionary is of type ‘mvc.Models.ModelA’ but this dictionary require

    - by Malcolm Frexner
    I have this annoying mistake in some of my builds. There is no error in the project, because if I build again, the problem disappears. The message only appears when the site is deployed to a Windows 2008 Server. I first thought that it might be an issue with temporary files, but that's not the case. I deployed the build to a different web site and the error still appears. The error appears on random actions of the site. Most of the time builds are OK, but each 3rd or 4th build produces runtime errors. I build using a WebDeploymentProject in release mode. Views are precompiled. It's not http://stackoverflow.com/questions/178194/in-asp-net-mvc-i-encounter-an-incorrect-type-error-when-rendering-a-page-with-the, because the views have totally different names. How can I debug this problem, or how can I get help with it? Here is my WebDeploymentProject:

        <!-- Microsoft Visual Studio 2008 Web Deployment Project
             http://go.microsoft.com/fwlink/?LinkID=104956 -->
        <Project ToolsVersion="3.5" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <PropertyGroup>
            <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
            <Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
            <ProductVersion>9.0.21022</ProductVersion>
            <SchemaVersion>2.0</SchemaVersion>
            <ProjectGuid>{E5E14CEB-0BCD-4203-9A5A-34ABA9C717EA}</ProjectGuid>
            <SourceWebPhysicalPath>..\B2CWeb</SourceWebPhysicalPath>
            <SourceWebProject>{3E632DB6-6DB3-4BD0-8CCA-12DE67165B48}|B2CWeb\B2CWeb.csproj</SourceWebProject>
            <SourceWebVirtualPath>/B2CWeb.csproj</SourceWebVirtualPath>
            <TargetFrameworkVersion>v3.5</TargetFrameworkVersion>
          </PropertyGroup>
          <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
            <DebugSymbols>true</DebugSymbols>
            <OutputPath>.\Debug</OutputPath>
            <EnableUpdateable>false</EnableUpdateable>
            <UseMerge>true</UseMerge>
            <SingleAssemblyName>B2CWeb_Build</SingleAssemblyName>
          </PropertyGroup>
          <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
            <DebugSymbols>false</DebugSymbols>
            <OutputPath>..\B2CWeb_Deploy\</OutputPath>
            <EnableUpdateable>false</EnableUpdateable>
            <UseMerge>true</UseMerge>
            <SingleAssemblyName>B2C_Web</SingleAssemblyName>
            <ContentAssemblyName>
            </ContentAssemblyName>
            <DeleteAppCodeCompiledFiles>false</DeleteAppCodeCompiledFiles>
          </PropertyGroup>
          <ItemGroup>
          </ItemGroup>
          <Import Project="$(MSBuildExtensionsPath)\Microsoft\WebDeployment\v9.0\Microsoft.WebDeployment.targets" />
          <!-- To modify your build process, add your task inside one of the targets below and uncomment it.
               Other similar extension points exist, see Microsoft.WebDeployment.targets.
          <Target Name="BeforeBuild">
          </Target>
          <Target Name="BeforeMerge">
          </Target>
          <Target Name="AfterMerge">
          </Target>
          <Target Name="AfterBuild">
          </Target>
          -->
        </Project>

    Read the article

  • How to "DRY up" C# attributes in Models and ViewModels?

    - by DanM
    This question was inspired by my struggles with ASP.NET MVC, but I think it applies to other situations as well. Let's say I have an ORM-generated Model and two ViewModels (one for a "details" view and one for an "edit" view).

    Model:

        public class FooModel // ORM generated
        {
            public int Id { get; set; }
            public string FirstName { get; set; }
            public string LastName { get; set; }
            public string EmailAddress { get; set; }
            public int Age { get; set; }
            public int CategoryId { get; set; }
        }

    Display ViewModel:

        public class FooDisplayViewModel // use for "details" view
        {
            [DisplayName("ID Number")]
            public int Id { get; set; }

            [DisplayName("First Name")]
            public string FirstName { get; set; }

            [DisplayName("Last Name")]
            public string LastName { get; set; }

            [DisplayName("Email Address")]
            [DataType("EmailAddress")]
            public string EmailAddress { get; set; }

            public int Age { get; set; }

            [DisplayName("Category")]
            public string CategoryName { get; set; }
        }

    Edit ViewModel:

        public class FooEditViewModel // use for "edit" view
        {
            [DisplayName("First Name")] // not DRY
            public string FirstName { get; set; }

            [DisplayName("Last Name")] // not DRY
            public string LastName { get; set; }

            [DisplayName("Email Address")] // not DRY
            [DataType("EmailAddress")] // not DRY
            public string EmailAddress { get; set; }

            public int Age { get; set; }

            [DisplayName("Category")] // not DRY
            public SelectList Categories { get; set; }
        }

    Note that the attributes on the ViewModels are not DRY: a lot of information is repeated. Now imagine this scenario multiplied by 10 or 100, and you can see that it can quickly become quite tedious and error prone to ensure consistency across ViewModels (and therefore across views). How can I "DRY up" this code? Before you answer, "Just put all the attributes on FooModel," I've tried that, but it didn't work because I need to keep my ViewModels "flat". In other words, I can't just compose each ViewModel with a Model: I need my ViewModel to have only the properties (and attributes) that should be consumed by the view, and the view can't burrow into sub-properties to get at the values.

    Update: LukLed's answer suggests using inheritance. This definitely reduces the amount of non-DRY code, but it doesn't eliminate it. Note that, in my example above, the DisplayName attribute for the Category property would need to be written twice, because the data type of the property differs between the display and edit ViewModels. This isn't going to be a big deal on a small scale, but as the size and complexity of a project scales up (imagine a lot more properties, more attributes per property, more views per model), there is still the potential for "repeating yourself" a fair amount. Perhaps I'm taking DRY too far here, but I'd still rather have all my "friendly names", data types, validation rules, etc. typed out only once.
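
    A hedged sketch extending the inheritance idea (untested; class names illustrative): shared properties move to a base class, and the one display name that cannot move, because the property types differ, becomes a shared constant, so it is still typed out only once:

        using System.ComponentModel;
        using System.ComponentModel.DataAnnotations;
        using System.Web.Mvc; // SelectList

        public abstract class FooViewModelBase
        {
            public const string CategoryDisplayName = "Category";

            [DisplayName("First Name")]
            public string FirstName { get; set; }

            [DisplayName("Last Name")]
            public string LastName { get; set; }

            [DisplayName("Email Address")]
            [DataType("EmailAddress")]
            public string EmailAddress { get; set; }

            public int Age { get; set; }
        }

        public class FooDisplayViewModel : FooViewModelBase
        {
            [DisplayName("ID Number")]
            public int Id { get; set; }

            [DisplayName(CategoryDisplayName)] // const is inherited, so this compiles
            public string CategoryName { get; set; }
        }

        public class FooEditViewModel : FooViewModelBase
        {
            [DisplayName(CategoryDisplayName)]
            public SelectList Categories { get; set; }
        }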

    Read the article

  • Recommend good shared hosting [closed]

    - by Django Reinhardt
    It seems that everyone has something bad to say about the "big" shared hosting sites like 1and1, HostGator, GoDaddy, etc., but which are the ones you've had GOOD experiences with? I'm going to focus this question on LAMP stacks, given that they're the most popular option for shared hosting, but feel free to answer if you have had an especially good experience with a different stack. Good shared hosting should be:

    - Competitively priced - but not at the expense of...
    - Fully featured - email, PHP, MySQL, but what else?
    - Highly customizable - do you have access to advanced features, like being able to deliver static content?
    - Up to date - do they run PHP4 as standard, or do they run the latest version?
    - Customer service - when you have a problem, are they rude and unhelpful? Do they take ages to reply?

    So how about it? Who have YOU had a good experience with?

    Read the article

  • Rails: Multiple "has_many through" for the two same models?

    - by neezer
    Can't wrap my head around this...

        class User < ActiveRecord::Base
          has_many :fantasies, :through => :fantasizings
          has_many :fantasizings, :dependent => :destroy
        end

        class Fantasy < ActiveRecord::Base
          has_many :users, :through => :fantasizings
          has_many :fantasizings, :dependent => :destroy
        end

        class Fantasizing < ActiveRecord::Base
          belongs_to :user
          belongs_to :fantasy
        end

    ... which works fine for my primary relationship, in that a User can have many Fantasies, and a Fantasy can belong to many Users. However, I need to add another relationship for liking (as in, a User "likes" a Fantasy rather than "has" it... think of Facebook and how you can "like" a wall-post, even though it doesn't "belong" to you... in fact, the Facebook example is almost exactly what I'm aiming for). I gathered that I should make another association, but I'm kinda confused as to how I might use it, or if this is even the right approach. I started by adding the following:

        class Fantasy < ActiveRecord::Base
          ...
          has_many :users, :through => :approvals
          has_many :approvals, :dependent => :destroy
        end

        class User < ActiveRecord::Base
          ...
          has_many :fantasies, :through => :approvals
          has_many :approvals, :dependent => :destroy
        end

        class Approval < ActiveRecord::Base
          belongs_to :user
          belongs_to :fantasy
        end

    ... but how do I create the association through Approval rather than through Fantasizing? If someone could set me straight on this, I'd be much obliged!
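
    A hedged sketch of the usual fix (association names are illustrative): the second has_many needs its own name, since a class can't define has_many :users twice, and :source tells Rails which end of the join model to follow:

        class Fantasy < ActiveRecord::Base
          has_many :fantasizings, :dependent => :destroy
          has_many :users, :through => :fantasizings

          has_many :approvals, :dependent => :destroy
          has_many :approvers, :through => :approvals, :source => :user
        end

        class User < ActiveRecord::Base
          has_many :fantasizings, :dependent => :destroy
          has_many :fantasies, :through => :fantasizings

          has_many :approvals, :dependent => :destroy
          has_many :liked_fantasies, :through => :approvals, :source => :fantasy
        end

        # Usage: user.liked_fantasies << fantasy; fantasy.approvers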

    Read the article

  • Fraud Detection with the SQL Server Suite Part 2

    - by Dejan Sarka
    This is the second part of the fraud detection whitepaper. You can find the first part in my previous blog post about this topic.

    My Approach to Data Mining Projects

    It is impossible to evaluate the time and money needed for a complete fraud detection infrastructure in advance. Personally, I do not know the customer's data in advance. I don't know whether there is already an existing infrastructure, like a data warehouse, in place, or whether we would need to build one from scratch. Therefore, I always suggest starting with a proof-of-concept (POC) project. A POC takes something between 5 and 10 working days, and involves personnel from the customer's site, either employees or outsourced consultants. The team should include a subject matter expert (SME) and at least one information technology (IT) expert. The SME must be familiar with both the domain in question and the meaning of the data at hand, while the IT expert should be familiar with the structure of the data, how to access it, and have some programming (preferably Transact-SQL) knowledge. With more than one IT expert, the most time-consuming work, namely data preparation and overview, can be completed sooner. I assume that the relevant data is already extracted and available at the very beginning of the POC project.

    If a customer wants to have their people involved in the project directly and requests the transfer of knowledge, the project begins with training. I strongly advise this approach, as it offers the establishment of a common background for all people involved, the understanding of how the algorithms work and how the results should be interpreted, a way of becoming familiar with the SQL Server suite, and more. Once the data has been extracted, the customer's SME (i.e. the analyst) and the IT expert assigned to the project will learn how to prepare the data in an efficient manner. Together with me, knowledge and expertise allow us to focus immediately on the most interesting attributes and identify any additional, calculated ones soon after. By employing our programming knowledge, we can, for example, prepare tens of derived variables, detect outliers, identify the relationships between pairs of input variables, and more, in only two or three days, depending on the quantity and the quality of the input data.

    I favor the customer's decision to assign additional personnel to the project. For example, I actually prefer to work with two teams simultaneously. I demonstrate and explain the subject matter by applying techniques directly on the data managed by each team, and then both teams continue to work on the data overview and data preparation under our supervision. I explain to the teams what kind of results we expect, the reasons why they are needed, and how to achieve them. Afterwards we review and explain the results, and continue with new instructions, until we resolve all known problems. Simultaneously with the data preparation, the data overview is performed. The logic behind this task is the same: again I show the teams involved the expected results, how to achieve them, and what they mean. This is also done in multiple cycles, as is the case with data preparation, because, quite frankly, both tasks are completely interleaved. A specific objective of the data overview is of principal importance: it is represented by a simple star schema and a simple OLAP cube that will first of all simplify data discovery and interpretation of the results, and will also prove useful in the following tasks.
    The presence of the customer's SME is the key to resolving possible issues with the actual meaning of the data. We can always replace the IT part of the team with another database developer; however, we cannot conduct this kind of a project without the customer's SME.

    After the data preparation, and when the data overview is available, we begin the scientific part of the project. I assist the team in developing a variety of models, and in interpreting the results. The results are presented graphically, in an intuitive way. While it is possible to interpret the results on the fly, a much more appropriate alternative is possible if the initial training was also performed, because it allows the customer's personnel to interpret the results by themselves, with only some guidance from me. The models are evaluated immediately by using several different techniques. One of the techniques includes evaluation over time, where we use an OLAP cube. After evaluating the models, we select the most appropriate model to be deployed for a production test; this allows the team to understand the deployment process. There are many possibilities of deploying data mining models into production; at the POC stage, we select the one that can be completed quickly. Typically, this means that we add the mining model as an additional dimension to an existing DW or OLAP cube, or to the OLAP cube developed during the data overview phase. Finally, we spend some time presenting the results of the POC project to the stakeholders and managers.

    Even from a POC, the customer will receive lots of benefits, all at the sole risk of spending money and time for a single 5 to 10 day project:

    - The customer learns the basic patterns of frauds and fraud detection
    - The customer learns how to do the entire cycle with their own people, only relying on me for the most complex problems
    - The customer's analysts learn how to perform much more in-depth analyses than they ever thought possible
    - The customer's IT experts learn how to perform data extraction and preparation much more efficiently than they did before
    - All of the attendees of this training learn how to use their own creativity to implement further improvements of the process and procedures, even after the solution has been deployed to production
    - The POC output for a smaller company or for a subsidiary of a larger company can actually be considered a finished, production-ready solution
    - It is possible to utilize the results of the POC project at subsidiary level, as a finished POC project for the entire enterprise

    Typically, the project results in several important "side effects":

    - Improved data quality
    - Improved employee job satisfaction, as they are able to proactively contribute to the central knowledge about fraud patterns in the organization
    - Because eventually more minds get to be involved in the enterprise, the company should expect more and better fraud detection patterns

    After the POC project is completed as described above, the actual project would not need months of engagement from my side. This is possible due to our preference to transfer the knowledge onto the customer's employees: typically, the customer will use the results of the POC project for some time, and only engage me again to complete the project, or to ask for additional expertise if the complexity of the problem increases significantly.
    I usually expect to perform the following tasks:

    - Establish the final infrastructure to measure the efficiency of the deployed models
    - Deploy the models in additional scenarios
      - Through reports
      - By including Data Mining Extensions (DMX) queries in OLTP applications to support real-time early warnings
    - Include data mining models as dimensions in OLAP cubes, if this was not done already during the POC project
    - Create smart ETL applications that divert suspicious data for immediate or later inspection
    - I would also offer to investigate how the outcome could be transferred automatically to the central system; for instance, if the POC project was performed in a subsidiary whereas a central system is available as well
    - Of course, for the actual project, I would repeat the data and model preparation as needed

    It is virtually impossible to tell in advance how much time the deployment would take, before we decide together with the customer what exactly the deployment process should cover. Without considering the deployment part, and with the POC project conducted as suggested above (including the transfer of knowledge), the actual project should still only take an additional 5 to 10 days.

    The approximate timeline for the POC project is as follows:

    - 1-2 days of training
    - 2-3 days for data preparation and data overview
    - 2 days for creating and evaluating the models
    - 1 day for initial preparation of the continuous learning infrastructure
    - 1 day for presentation of the results and discussion of further actions

    Quite frequently I receive the following question: are we going to find the best possible model during the POC project, or during the actual project? My answer is always quite simple: I do not know. Maybe, if we spent just one hour more on data preparation, or created just one more model, we could get better patterns and predictions. However, we simply must stop somewhere, and the best possible way to do this, according to my experience, is to restrict the time spent on the project in advance, after an agreement with the customer. You must also never forget that, because we build the complete learning infrastructure and transfer the knowledge, the customer will be capable of doing further investigations independently and improving the models and predictions over time, without the need for a constant engagement with me.

    Read the article

  • 1and1: Unable to host an external domain

    - by Django Reinhardt
    I'm sorry if this isn't the right place for this question, but I'm presently having difficulties with my hosting provider (1and1). Two weeks ago, two of my clients bought hosting from them on my recommendation, but as it turned out, 1and1 are having severe technical difficulties. Right now none of their hosting packages are able to accept ANY external domains, so either you pay the costs of transferring your domain's registration, or you use the ugly 1and1 domain name. Not any good for a hosting company of 1and1's reputation! They have been promising me for two weeks that they're going to fix the problem, but as you have probably guessed by now, that hasn't been the case. I would like to know a) whether anyone else is in the same boat as me, and b) whether there are other comparably reputable hosting providers that I should consider moving to instead? Very disappointing! :( Note: This is for 1and1 in the UK. I imagine it isn't affecting users in other countries(?) Clarification: 1and1 are unable to accept ANY external domains. That means that even if you update your DNS details on your domain, their system cannot be updated to add your external domain to your account.

    Read the article

  • What are some arguments AGAINST using EntityFramework?

    - by Rachel
    The application I am currently building has been using stored procedures and hand-crafted class models to represent database objects. Some people have suggested using Entity Framework, and I am considering switching to it, since I am not that far into the project. My problem is, I feel the people arguing for EF are only telling me the good side of things, not the bad side :) My main concerns are:

    - We want client-side validation using DataAnnotations, and it sounds like I have to create the client-side models anyway, so I am not sure that EF would save that much coding time.
    - We would like to keep the classes as small as possible when going over the network, and I have read that using EF often includes extra data that is not needed.
    - We have a complex database layer which crosses multiple databases, and I am not sure EF can handle this. We have one common database with things like Users, StatusCodes, Types, etc., and multiple instances of our main databases for different instances of the application. SELECT queries can and will query across all instances of the databases; however, users can only modify objects that are in the database they are currently working on. They can switch databases without reloading the application.
    - Object models are very complex, and there are often quite a few joins involved.

    Arguments for EF are:

    - Concurrency. I wouldn't have to code in checks to see if the record was updated before each save.
    - Code generation. EF can generate partial class models and POCOs for me; however, I am not positive this would really save me that much time, since I think we would still need to create the client-side models for validation and some custom parsing methods.
    - Speed of development, since we wouldn't need to create the CRUD stored procedures for every database object.

    Our current architecture consists of a WCF service which handles database calls via parameterized stored procedures, POCO objects that go to/from the WCF service and the WPF client, and the WPF client itself, which transforms POCOs into class models for the purpose of validation and data binding.

    Read the article

  • Should I have different models and views for no user data than for some user data?

    - by Sam Holder
    I'm just starting to learn asp.net mvc and I'm not sure what the right thing to do is. I have a user, and a user has a collection of (0 or more) reminders. I have a controller for the user which gets the reminders for the currently logged-in user from a reminder service. It populates a model which holds some information about the user and the collection of reminders. My question is: should I have 2 different views, one for when there are no reminders and one for when there are some reminders? Or should I have 1 view which checks the number of reminders and displays different things? Having one view seems wrong, as then I end up with code in my view which says if (Model.Reminders.Count == 0) {//do something} else {//do something else}, and having logic in the view feels wrong. But if I want to have 2 different views, then I'd like to have some code like this in my controller:

        [Authorize]
        public ActionResult Index()
        {
            MembershipUser currentUser = m_membershipService.GetUser();
            IList<Reminder> reminders = m_reminderRepository.GetReminders(currentUser);
            if (reminders.Count == 0)
            {
                var model = new EmptyReminderModel
                {
                    Email = currentUser.Email,
                    UserName = currentUser.UserName
                };
                return View(model);
            }
            else
            {
                var model = new ReminderModel
                {
                    Email = currentUser.Email,
                    UserName = currentUser.UserName,
                    Reminders = reminders
                };
                return View(model);
            }
        }

    ... but obviously this doesn't compile, as the View can't take both different types. So if I'm going to do this, should I return a specific named view from my controller, depending on the emptiness of the reminders, or should my Index() method redirect to other actions, like so:

        [Authorize]
        public ActionResult Index()
        {
            MembershipUser currentUser = m_membershipService.GetUser();
            IList<Reminder> reminders = m_reminderRepository.GetReminders(currentUser);
            if (reminders.Count == 0)
            {
                return RedirectToAction("ShowEmptyReminders");
            }
            else
            {
                return RedirectToAction("ShowReminders");
            }
        }

    ... which seems nicer, but then I need to re-query the reminders for the current user in the ShowReminders action. Or should I be doing something else entirely?
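
    One possible middle ground, as a hedged sketch (the "EmptyReminders" view name is illustrative): keep a single model type and return a named view, so neither view needs branching logic and nothing is re-queried:

        [Authorize]
        public ActionResult Index()
        {
            MembershipUser currentUser = m_membershipService.GetUser();
            IList<Reminder> reminders = m_reminderRepository.GetReminders(currentUser);

            var model = new ReminderModel
            {
                Email = currentUser.Email,
                UserName = currentUser.UserName,
                Reminders = reminders
            };

            // Both views are typed to ReminderModel; "EmptyReminders" just
            // renders the no-data message, so no logic leaks into the markup.
            return reminders.Count == 0 ? View("EmptyReminders", model) : View(model);
        }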

    Read the article
