Search Results

Search found 12952 results on 519 pages for 'model'.

  • Can one extract model fit parameters after a ggplot stat_smooth call?

    - by Alex Holcombe
    Using stat_smooth, I can fit models to data. E.g.

        g = ggplot(tips, aes(x=tip, y=as.numeric(unclass(factor(tips$sex))-1))) + facet_grid(time~.)
        g = g + stat_summary(fun.y=mean, geom="point")
        g = g + stat_smooth(method="glm", family="binomial")

    I would like to know the coefficients of the glm binomial fits. I could re-do the fit with dlply and get the coefficients with ldply, but I'd like to avoid such duplication. Calling str(g) reveals the hierarchy of objects that ggplot creates; perhaps there's some way to get to the coefficients through that?

  • How to efficiently get all instances from deeper level in Cocoa model?

    - by Johan Kool
    In my Cocoa Mac app I have an instance A which contains an unordered set of instances B, which in turn have an ordered set of instances C. An instance of C can only be in one instance B, and a B only in one A. I would like to have an unordered set of all instances C available on instance A. I could enumerate over all instances B each time, but that seems expensive for something I need to do often. However, I am a bit worried that keeping track of instances C in A could become cumbersome and be the cause of inconsistencies, for example if an instance C gets removed from B but not from A.

    Solution 1: Use an NSMutableSet in A and add or remove C instances whenever I do the same operation in B.
    Solution 2: Use a weak-referenced NSHashTable in A. When deleting a C from B, it should disappear from A as well.
    Solution 3: Use key-value observing in A to keep track of changes in B, and update an NSMutableSet in A accordingly.
    Solution 4: Simply iterate over all instances B to create the set whenever I need it.

    Which way is best? Are there any other approaches that I missed? NB: I don't and won't use Core Data for this app.
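
    The cost/consistency trade-off here is independent of Cocoa. A minimal sketch of solution 4 (recompute on demand) in Python, purely for illustration — class names A, B, C mirror the question, the attribute names are made up:

        class C:
            pass

        class B:
            def __init__(self):
                self.cs = []            # ordered collection of C instances

        class A:
            def __init__(self):
                self.bs = set()         # unordered set of B instances

            @property
            def all_cs(self):
                # Recomputed on every access: O(total number of C), but it
                # can never disagree with the Bs, which is exactly the
                # inconsistency a cached set (solution 1) risks.
                return {c for b in self.bs for c in b.cs}

    Caching (solutions 1-3) only pays off once profiling shows this recomputation is actually a bottleneck.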

  • What is a good RPC model for building an AJAX web app using PHP & JS?

    - by user366152
    I'm new to writing AJAX applications. I plan on using jQuery on the client side and PHP on the server side. I want to use something like XML-RPC to simplify my effort in calling server-side code. Ideally, I wouldn't care whether the transport layer uses XML or JSON or a format more optimized for the wire. If I were writing a console app I'd use some tool to generate function stubs which I would then implement on the RPC server, while the client would natively call into those stubs. This provides a clean separation. Is there something similar available in the AJAX world? While on this topic, how would I proceed with session management? I would want it to be as transparent as possible. For example, if I try to hit an RPC end-point which needs a valid session, it should reject the request if the client doesn't pass a valid session cookie. This would really ease my application development: I'd then have to simply handle the frontend using native JS functions, while on the backend I can simply implement the RPC functions. BTW, I don't wish to use Google Web Toolkit. My app won't be extremely heavy on AJAX.
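
    For comparison, the stub/dispatch arrangement described here maps directly onto stock XML-RPC tooling. A minimal sketch using Python's standard library — the method name and session check are illustrative placeholders, and a real PHP endpoint would mirror them:

        # Minimal XML-RPC endpoint sketch; the token check stands in for
        # whatever cookie/session scheme the server actually uses.
        from xmlrpc.server import SimpleXMLRPCServer

        VALID_TOKENS = {"secret-token"}     # illustrative session store

        def save_note(token, text):
            if token not in VALID_TOKENS:   # reject calls without a valid session
                raise ValueError("invalid session")
            return {"saved": text}

        server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
        server.register_function(save_note, "save_note")
        server.serve_forever()              # clients invoke server-side stubs by name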

  • Do you have to use the zf tool when creating controllers, models, actions, etc. in Zend Framework?

    - by Andy
    I am using Zend Framework 1.10. I use the zf tool to create controllers, actions and everything else. It is handy, but I am now seeing that when it modifies existing controller files to add new actions it realigns my code and removes some function closing brackets. I then see all these errors in Eclipse. I also see that every time I issue a zf command it modifies the .zfproject file. Is this file critical at all? I want to be able to create whatever I want by myself, without the zf tool, and without worrying about that .zfproject file.

  • Capture Stored Procedure print output in .NET (Different model!)

    - by Workshop Alex
    Basically, this question with a difference... Is it possible to capture print output from a T-SQL stored procedure in .NET, using the Entity Framework? The solution in the other question doesn't work for me: it works with the connection type from System.Data.SqlClient, but I'm using the one from System.Data.EntityClient, which does not have an InfoMessage event. (Of course, I could just create an SQL connection based on the Entity connection settings, but I'd prefer to do it directly.)

  • How can I access the "through" object of a Django ManyToManyField?

    - by Macha
    I have the following models in my Django app. How can I, from the Team model, find all the User objects who have accepted set to True in the Membership model? I know I need to use Team.objects.filter(), but I'm not sure how to check the value of the accepted field.

        from django.contrib.auth.models import User

        class Team(models.Model):
            members = models.ManyToManyField(User, through="Membership")

        class Membership(models.Model):
            user = models.ForeignKey(User)
            team = models.ForeignKey(Team)
            accepted = models.BooleanField(default=False)
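
    For what it's worth, given exactly the models above, the query can traverse the through model's reverse relation. A sketch, where some_team stands in for a Team instance already in hand:

        # "membership" is the default related query name for Membership.user,
        # so the filter walks User -> Membership -> Team.
        accepted_users = User.objects.filter(
            membership__team=some_team,
            membership__accepted=True,
        )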

  • How do I load a DirectX .x 3D model with the iPhone SDK?

    - by Alex
    I have been searching the internet for the last few days trying to figure this out. My goal is to draw a textured and animated .x file exported from a 3D program. I found a tutorial of how to load and draw a .obj file, which I understand, but the tutorial doesn't say how to texture it, and .obj doesn't support animation. The .x file structure is human readable just like .obj, but I have no clue how to texture it, and I might be able to figure out how to animate it, but I would prefer to be instructed on that. Any help would be GREATLY appreciated.

  • [CakePHP] What is the best way to access another Model in a Controller?

    - by kwokwai
    Hi all, say I got two controllers, Table1sController and Table2sController, with corresponding models Table1sModel and Table2sModel. In Table1sController, I got this:

        $this->Table1sModel->action();

    Say I want to access some data in Table2sModel. How is it possible to do something like this in Table1sController? I have tried this in Table1sController:

        $this->Table2sModel->action();

    But I received an error message like this:

        Undefined property: Table1sController::$Table2sModel

  • Is the canvas security model ignoring Access-Control-Allow-Origin headers?

    - by luklatlug
    It seems that even if you set the Access-Control-Allow-Origin header to allow access from mydomain.org to an image hosted on example.org, the canvas' origin-clean flag gets set to false, and trying to manipulate that image's pixel data will trigger a security exception. Shouldn't the canvas obey the Access-Control-Allow-Origin header and allow access to the image's data without throwing an exception?
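
    As a concrete illustration of the server side of this setup, here is a minimal sketch in Python that serves files with the header in question; the origin and port are placeholders, and note that browsers of the era also required the image element itself to opt in (e.g. a crossOrigin attribute) before honouring the header for canvas use:

        # Minimal static file server that attaches Access-Control-Allow-Origin
        # to every response; origin/port below are illustrative only.
        from http.server import HTTPServer, SimpleHTTPRequestHandler

        class CORSImageHandler(SimpleHTTPRequestHandler):
            def end_headers(self):
                self.send_header("Access-Control-Allow-Origin", "http://mydomain.org")
                super().end_headers()

        HTTPServer(("", 8000), CORSImageHandler).serve_forever()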

  • How can I configure Devise for Ruby on Rails to store the emails and passwords somewhere other than in the user model?

    - by TLK
    I'd like to store emails in a separate table and allow users to save multiple emails and log in with any of them. I'd also like to store passwords in a different table. How can I configure Devise to store authentication info elsewhere? Worst case scenario, if I just have to hack into it, is there a generator to just port everything over to the app? I noticed there was a generator for the views. Thanks.

  • How do I do a .count on the model an object belongs_to in Rails?

    - by Angela
    I have @contacts_added defined as follows:

        @contacts_added = Contact.all(:conditions => ["date_entered > ?", 5.days.ago.to_date])

    Each Contact belongs_to a Company. I want to be able to count the number of distinct Companies that the @contacts_added belong to. @contacts_added will have many contacts that belong to a single company, accessible through a virtual attribute contacts_added.company_name. How do I do that?

  • Do I need to be worried about these SMART drive temperatures?

    - by Steve Lorimer
    I have 5 hard drives in a machine sitting in a cupboard. /dev/sda is a 500GB Seagate drive and is the boot disk; /dev/sd{b,c,d,e} are 2TB drives in a raid6 configuration. smartctl is showing significantly higher temperatures (like ~140 degrees Celsius) on the raid drives than on the boot drive. Do I need to be worried? /dev/sdb and /dev/sde are new Western Digital Black drives (new = 1 week); /dev/sdc and /dev/sdd are 5 year old Hitachi drives.

        /dev/sda [SAT], Temperature_Celsius changed from 40 to 39
        /dev/sdc [SAT], Temperature_Celsius changed from 142 to 146
        /dev/sdc [SAT], Temperature_Celsius changed from 146 to 142
        /dev/sdd [SAT], Temperature_Celsius changed from 142 to 146
        /dev/sda [SAT], Airflow_Temperature_Cel changed from 61 to 62
        /dev/sda [SAT], Temperature_Celsius changed from 39 to 38
        /dev/sde [SAT], Temperature_Celsius changed from 107 to 108
        /dev/sdb [SAT], Temperature_Celsius changed from 108 to 109
        /dev/sdc [SAT], Temperature_Celsius changed from 146 to 150
        /dev/sdc [SAT], Temperature_Celsius changed from 146 to 150
        /dev/sda [SAT], Airflow_Temperature_Cel changed from 62 to 61
        /dev/sda [SAT], Temperature_Celsius changed from 38 to 39

    Update: Adding detailed drive information as per request:

        /dev/sda ===========================
        smartctl 6.0 2012-10-10 r3643 [x86_64-linux-3.9.10-100.fc17.x86_64] (local build)
        Copyright (C) 2002-12, Bruce Allen, Christian Franke, www.smartmontools.org
        === START OF INFORMATION SECTION ===
        Model Family:     Seagate Pipeline HD 5900.2
        Device Model:     ST3500312CS
        Serial Number:    5VV47HXA
        LU WWN Device Id: 5 000c50 02aad5ad6
        Firmware Version: SC13
        User Capacity:    500,107,862,016 bytes [500 GB]
        Sector Size:      512 bytes logical/physical
        Rotation Rate:    5900 rpm
        Device is:        In smartctl database [for details use: -P show]
        ATA Version is:   ATA8-ACS T13/1699-D revision 4
        SATA Version is:  SATA 2.6, 1.5 Gb/s (current: 1.5 Gb/s)
        Local Time is:    Tue Jun 3 10:54:11 2014 EST
        SMART support is: Available - device has SMART capability.
        SMART support is: Enabled

        /dev/sdb ===========================
        smartctl 6.0 2012-10-10 r3643 [x86_64-linux-3.9.10-100.fc17.x86_64] (local build)
        Copyright (C) 2002-12, Bruce Allen, Christian Franke, www.smartmontools.org
        === START OF INFORMATION SECTION ===
        Device Model:     WDC WD2003FZEX-00Z4SA0
        Serial Number:    WD-WMC1F1398726
        LU WWN Device Id: 5 0014ee 003b8bd25
        Firmware Version: 01.01A01
        User Capacity:    2,000,398,934,016 bytes [2.00 TB]
        Sector Sizes:     512 bytes logical, 4096 bytes physical
        Rotation Rate:    7200 rpm
        Device is:        Not in smartctl database [for details use: -P showall]
        ATA Version is:   ACS-2 (minor revision not indicated)
        SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)
        Local Time is:    Tue Jun 3 10:54:11 2014 EST
        SMART support is: Available - device has SMART capability.
        SMART support is: Enabled

        /dev/sdc ===========================
        smartctl 6.0 2012-10-10 r3643 [x86_64-linux-3.9.10-100.fc17.x86_64] (local build)
        Copyright (C) 2002-12, Bruce Allen, Christian Franke, www.smartmontools.org
        === START OF INFORMATION SECTION ===
        Model Family:     Hitachi Deskstar 7K3000
        Device Model:     Hitachi HDS723020BLA642
        Serial Number:    MN1220F30WSTUD
        LU WWN Device Id: 5 000cca 369cc9f5d
        Firmware Version: MN6OA580
        User Capacity:    2,000,398,934,016 bytes [2.00 TB]
        Sector Size:      512 bytes logical/physical
        Rotation Rate:    7200 rpm
        Device is:        In smartctl database [for details use: -P show]
        ATA Version is:   ATA8-ACS T13/1699-D revision 4
        SATA Version is:  SATA 2.6, 6.0 Gb/s (current: 3.0 Gb/s)
        Local Time is:    Tue Jun 3 10:54:11 2014 EST
        SMART support is: Available - device has SMART capability.
        SMART support is: Enabled

        /dev/sdd ===========================
        smartctl 6.0 2012-10-10 r3643 [x86_64-linux-3.9.10-100.fc17.x86_64] (local build)
        Copyright (C) 2002-12, Bruce Allen, Christian Franke, www.smartmontools.org
        === START OF INFORMATION SECTION ===
        Model Family:     Hitachi Deskstar 7K3000
        Device Model:     Hitachi HDS723020BLA642
        Serial Number:    MN1220F30WST4D
        LU WWN Device Id: 5 000cca 369cc9f48
        Firmware Version: MN6OA580
        User Capacity:    2,000,398,934,016 bytes [2.00 TB]
        Sector Size:      512 bytes logical/physical
        Rotation Rate:    7200 rpm
        Device is:        In smartctl database [for details use: -P show]
        ATA Version is:   ATA8-ACS T13/1699-D revision 4
        SATA Version is:  SATA 2.6, 6.0 Gb/s (current: 1.5 Gb/s)
        Local Time is:    Tue Jun 3 10:54:11 2014 EST
        SMART support is: Available - device has SMART capability.
        SMART support is: Enabled

        /dev/sde ===========================
        smartctl 6.0 2012-10-10 r3643 [x86_64-linux-3.9.10-100.fc17.x86_64] (local build)
        Copyright (C) 2002-12, Bruce Allen, Christian Franke, www.smartmontools.org
        === START OF INFORMATION SECTION ===
        Device Model:     WDC WD2003FZEX-00Z4SA0
        Serial Number:    WD-WMC1F1483782
        LU WWN Device Id: 5 0014ee 3002d235c
        Firmware Version: 01.01A01
        User Capacity:    2,000,398,934,016 bytes [2.00 TB]
        Sector Sizes:     512 bytes logical, 4096 bytes physical
        Rotation Rate:    7200 rpm
        Device is:        Not in smartctl database [for details use: -P showall]
        ATA Version is:   ACS-2 (minor revision not indicated)
        SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 1.5 Gb/s)
        Local Time is:    Tue Jun 3 10:54:11 2014 EST
        SMART support is: Available - device has SMART capability.
        SMART support is: Enabled

  • Looking into Enum Support in Entity Framework 5.0 Code First

    - by nikolaosk
    In this post I will show you, with a hands-on demo, the enum support that is available in Visual Studio 2012, .NET Framework 4.5 and Entity Framework 5.0. You can have a look at this post to learn about the support for multiple diagrams per model that exists in Entity Framework 5.0. We will demonstrate this with a step by step example. I will use Visual Studio 2012 Ultimate; you can also use Visual Studio 2012 Express Edition. Before I move on to the actual demo I must say that in EF 5.0 an enumeration can have the following types: Byte, Int16, Int32, Int64, SByte.

    Obviously I cannot go into much detail on what EF is and what it does, so I will give a short introduction again. The .NET Framework provides support for Object Relational Mapping through EF. So EF is an ORM tool, and it is now the main data access technology that Microsoft works on. I use it quite extensively in my projects. Through EF we have many things provided for us out of the box: the automatic generation of SQL code, the mapping of relational data to strongly typed objects, and all the changes made to the objects in memory being persisted in a transactional way back to the data store.

    You can find in this post an example of how to use the Entity Framework to retrieve data from a SQL Server database using the "Database/Schema First" approach. In this approach we make all the changes at the database level and then we update the model with those changes. In this post you can see an example of how to use the "Model First" approach when working with ASP.NET and the Entity Framework. This model was first introduced in EF version 4.0: we could start with a blank model and then create a database from that model, and when we made changes to the model, we could recreate the database from the new model. You can search in my blog, because I have posted many posts regarding ASP.NET and EF. I assume you have a working knowledge of C# and know a few things about EF.

    The Code First approach is the most code-centric of the three. Basically we write POCO classes and then we persist them to a database using something called DbContext. Code First relies on DbContext. We create two or three classes (e.g. Person, Product) with properties, and then these classes interact with the DbContext class. We can create a new database based upon our POCO classes and have tables generated from those classes. We do not have an .edmx file in this approach. By using this approach we can write much easier unit tests. DbContext is a new context class and is a smaller, lightweight wrapper for the main context class, which is ObjectContext (Schema First and Model First).

    Let's begin building our sample application.

    1) Launch Visual Studio. Create an ASP.NET Empty Web application. Choose an appropriate name for your application.

    2) Add a web form, a default.aspx page, to the application.

    3) Now we need to make sure the Entity Framework is included in our project. Go to Solution Explorer and right-click on the project name. Then select Manage NuGet Packages... In the Manage NuGet Packages dialog, select the Online tab and choose the EntityFramework package. Finally click Install. Have a look at the picture below.

    4) Create a new folder. Name it CodeFirst.

    5) Add a new item to your application, a class file. Name it Footballer.cs. This is going to be a simple POCO class. Place it in the CodeFirst folder.
    The code follows:

        public class Footballer
        {
            public int FootballerID { get; set; }
            public string FirstName { get; set; }
            public string LastName { get; set; }
            public double Weight { get; set; }
            public double Height { get; set; }
            public DateTime JoinedTheClub { get; set; }
            public int Age { get; set; }
            public List<Training> Trainings { get; set; }
            public FootballPositions Positions { get; set; }
        }

    Now I am going to define my enum values in the same class file, Footballer.cs:

        public enum FootballPositions
        {
            Defender,
            Midfielder,
            Striker
        }

    6) Now we need to create the Training class. Add a new class to your application and place it in the CodeFirst folder. The code for the class follows.

        public class Training
        {
            public int TrainingID { get; set; }
            public int TrainingDuration { get; set; }
            public string TrainingLocation { get; set; }
        }

    7) Then we need to create a context class that inherits from DbContext. Add a new class to the CodeFirst folder and name it FootballerDBContext. Now that we have the entity classes created, we must let the model know; I will have to use the DbSet<T> property. The code for this class follows.

        public class FootballerDBContext : DbContext
        {
            public DbSet<Footballer> Footballers { get; set; }
            public DbSet<Training> Trainings { get; set; }
        }

    Do not forget to add (using System.Data.Entity;) at the beginning of the class file.

    8) We must take care of the connection string. It is very easy to create one in the web.config. It does not matter that we do not have a database yet: when we run the DbContext and query against it, it will use the connection string in the web.config and will create the database based on the classes. In my case the connection string inside the web.config looks like this:

        <connectionStrings>
          <add name="CodeFirstDBContext"
               connectionString="server=.\SqlExpress;integrated security=true;"
               providerName="System.Data.SqlClient"/>
        </connectionStrings>

    9) Now it is time to create LINQ to Entities queries to retrieve data from the database. Add a new class to your application in the CodeFirst folder and name the file DALfootballer.cs. We will create a simple public method to retrieve the footballers. The code for the class follows.

        public class DALfootballer
        {
            FootballerDBContext ctx = new FootballerDBContext();

            public List<Footballer> GetFootballers()
            {
                var query = from player in ctx.Footballers where player.FirstName == "Jamie" select player;
                return query.ToList();
            }
        }

    10) Place a GridView control on the Default.aspx page and leave the default name. Add an ObjectDataSource control on the Default.aspx page and leave the default name. Set the DataSourceID property of the GridView control to the ID of the ObjectDataSource control (DataSourceID="ObjectDataSource1"). Let's configure the ObjectDataSource control: click on the smart tag item of the ObjectDataSource control and select Configure Data Source. In the wizard that pops up, select the DALfootballer class and then in the next step choose the GetFootballers() method. Click Finish to complete the steps of the wizard. Build your application.

    11) Let's create an Insert method in order to insert data into the tables. I will create an Insert() method and, for simplicity reasons, I will place it in the Default.aspx.cs file.
        private void Insert()
        {
            var footballers = new List<Footballer>
            {
                new Footballer
                {
                    FirstName = "Steven", LastName = "Gerrard", Height = 1.85, Weight = 85, Age = 32,
                    JoinedTheClub = DateTime.Parse("12/12/1999"), Positions = FootballPositions.Midfielder,
                    Trainings = new List<Training>
                    {
                        new Training { TrainingDuration = 3, TrainingLocation = "MelWood" },
                        new Training { TrainingDuration = 2, TrainingLocation = "Anfield" },
                        new Training { TrainingDuration = 2, TrainingLocation = "MelWood" },
                    }
                },
                new Footballer
                {
                    FirstName = "Jamie", LastName = "Garragher", Height = 1.89, Weight = 89, Age = 34,
                    JoinedTheClub = DateTime.Parse("12/02/2000"), Positions = FootballPositions.Defender,
                    Trainings = new List<Training>
                    {
                        new Training { TrainingDuration = 3, TrainingLocation = "MelWood" },
                        new Training { TrainingDuration = 5, TrainingLocation = "Anfield" },
                        new Training { TrainingDuration = 6, TrainingLocation = "Anfield" },
                    }
                }
            };

            footballers.ForEach(foot => ctx.Footballers.Add(foot));
            ctx.SaveChanges();
        }

    12) In the Page_Load() event handling routine I called the Insert() method.

        protected void Page_Load(object sender, EventArgs e)
        {
            Insert();
        }

    13) Run your application and you will hopefully see the following result. You can see clearly that the data is returned along with the enum value.

    14) You must also have a look at the database. Launch SSMS and see the database and its objects (data) created by EF Code First. Have a look at the picture below.

    Hopefully now you have seen the support that exists in EF 5.0 for enums. Hope it helps!!!

  • Refactoring multiple interfaces to a common interface using MVVM, MEF and Silverlight4

    - by Brian
    I am just learning MVVM with MEF and already see the benefits, but I am a little confused about some implementation details. The app I am building has several Models that do the same thing with different entities (WCF RIA Services exposing an Entity Framework object) and I would like to avoid implementing a similar interface/model for each view I need; the following is what I have come up with, though it currently doesn't work. The common interface has a new completed event for each model that implements the base model; this was the easiest way I could implement a common class, as the compiler did not like casting from a child to the base type. The code as it currently sits compiles and runs, but a null IModel is being passed into the [ImportingConstructor] for the FaqViewModel class.

    I have a common interface (simplified for posting) defined as follows; this should look familiar to those who have seen Shawn Wildermuth's RIAXboxGames sample.

        public interface IModel
        {
            void GetItemsAsync();
            event EventHandler<EntityResultsArgs<faq>> GetFaqsComplete;
        }

    A base class that implements the interface:

        public class ModelBase : IModel
        {
            public virtual void GetItemsAsync() { }
            public virtual event EventHandler<EntityResultsArgs<faq>> GetFaqsComplete;

            protected void PerformQuery<T>(EntityQuery<T> qry, EventHandler<EntityResultsArgs<T>> evt) where T : Entity
            {
                Context.Load(qry, r =>
                {
                    if (evt == null) return;
                    try
                    {
                        if (r.HasError)
                        {
                            evt(this, new EntityResultsArgs<T>(r.Error));
                        }
                        else if (r.Entities.Count() > 0)
                        {
                            evt(this, new EntityResultsArgs<T>(r.Entities));
                        }
                    }
                    catch (Exception ex)
                    {
                        evt(this, new EntityResultsArgs<T>(ex));
                    }
                }, null);
            }

            private DomainContext _domainContext;

            protected DomainContext Context
            {
                get
                {
                    if (_domainContext == null)
                    {
                        _domainContext = new DomainContext();
                        _domainContext.PropertyChanged += DomainContext_PropertyChanged;
                    }
                    return _domainContext;
                }
            }

            void DomainContext_PropertyChanged(object sender, System.ComponentModel.PropertyChangedEventArgs e)
            {
                switch (e.PropertyName)
                {
                    case "IsLoading":
                        AppMessages.IsBusyMessage.Send(_domainContext.IsLoading);
                        break;
                    case "IsSubmitting":
                        AppMessages.IsBusyMessage.Send(_domainContext.IsSubmitting);
                        break;
                }
            }
        }

    A model that implements the base model:

        [Export(ViewModelTypes.FaqViewModel, typeof(IModel))]
        public class FaqModel : ModelBase
        {
            public override void GetItemsAsync()
            {
                PerformQuery(Context.GetFaqsQuery(), GetFaqsComplete);
            }

            public override event EventHandler<EntityResultsArgs<faq>> GetFaqsComplete;
        }

    A view model:

        [PartCreationPolicy(CreationPolicy.NonShared)]
        [Export(ViewModelTypes.FaqViewModel)]
        public class FaqViewModel : MyViewModelBase
        {
            private readonly IModel _model;

            [ImportingConstructor]
            public FaqViewModel(IModel model)
            {
                _model = model;
                _model.GetFaqsComplete += Model_GetFaqsComplete;
                _model.GetItemsAsync(); // Load FAQS on creation
            }

            private IEnumerable<faq> _faqs;
            public IEnumerable<faq> Faqs
            {
                get { return _faqs; }
                private set
                {
                    if (value == _faqs) return;
                    _faqs = value;
                    RaisePropertyChanged("Faqs");
                }
            }

            private faq _currentFaq;
            public faq CurrentFaq
            {
                get { return _currentFaq; }
                set
                {
                    if (value == _currentFaq) return;
                    _currentFaq = value;
                    RaisePropertyChanged("CurrentFaq");
                }
            }

            public void GetFaqsAsync()
            {
                _model.GetItemsAsync();
            }

            void Model_GetFaqsComplete(object sender, EntityResultsArgs<faq> e)
            {
                if (e.Error != null)
                {
                    ErrorMessage = e.Error.Message;
                }
                else
                {
                    Faqs = e.Results;
                }
            }
        }

    And then finally the Silverlight view itself:

        public partial class FrequentlyAskedQuestions
        {
            public FrequentlyAskedQuestions()
            {
                InitializeComponent();
                if (!ViewModelBase.IsInDesignModeStatic)
                {
                    // Use MEF to load the View Model
                    CompositionInitializer.SatisfyImports(this);
                }
            }

            [Import(ViewModelTypes.FaqViewModel)]
            public object ViewModel
            {
                set { DataContext = value; }
            }
        }

  • Business rule validation of hierarchical list of objects ASP.NET MVC

    - by SergeanT
    I have a list of objects that are organized in a tree using a Depth property:

        public class Quota
        {
            [Range(0, int.MaxValue, ErrorMessage = "Please enter an amount above zero.")]
            public int Amount { get; set; }

            public int Depth { get; set; }

            [Required]
            [RegularExpression("^[a-zA-Z]+$")]
            public string Origin { get; set; }

            // ... other properties with validation attributes
        }

    For example, with this data (amount - origin):

        100 originA
        200 originB
         50 originC
        150 originD

    the model data looks like:

        IList<Quota> model = new List<Quota>();
        model.Add(new Quota { Amount = 100, Depth = 0, Origin = "originA" });
        model.Add(new Quota { Amount = 200, Depth = 0, Origin = "originB" });
        model.Add(new Quota { Amount = 50, Depth = 1, Origin = "originC" });
        model.Add(new Quota { Amount = 150, Depth = 1, Origin = "originD" });

    Editing the list: I then use "Editing a variable length list, ASP.NET MVC 2-style" to implement editing of the list.

    Controller actions, QuotaController.cs:

        public class QuotaController : Controller
        {
            // GET: /Quota/EditList
            public ActionResult EditList()
            {
                IList<Quota> model = // ... assignments as in the example above
                return View(model);
            }

            // POST: /Quota/EditList
            [HttpPost]
            public ActionResult EditList(IList<Quota> quotas)
            {
                if (ModelState.IsValid)
                {
                    // ... save logic
                    return RedirectToAction("Details");
                }
                return View(quotas); // Redisplay the form with errors
            }

            // ... other controller actions
        }

    View EditList.aspx:

        <%@ Page Title="" Language="C#" ... Inherits="System.Web.Mvc.ViewPage<IList<Quota>>" %>
        ...
        <h2>Edit Quotas</h2>
        <%=Html.ValidationSummary("Fix errors:") %>
        <% using (Html.BeginForm()) { %>
            <% foreach (var quota in Model) { Html.RenderPartial("QuotaEditorRow", quota); } %>
        <% } %>
        ...

    Partial view QuotaEditorRow.ascx:

        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<Quota>" %>
        <div class="quotas" style="margin-left: <%=Model.Depth*45 %>px">
        <% using (Html.BeginCollectionItem("Quotas")) { %>
            <%=Html.HiddenFor(m=>m.Id) %>
            <%=Html.HiddenFor(m=>m.Depth) %>
            <%=Html.TextBoxFor(m=>m.Amount, new {@class = "number", size = 5})%>
            <%=Html.ValidationMessageFor(m=>m.Amount) %>
            Origin: <%=Html.TextBoxFor(m=>m.Origin)%>
            <%=Html.ValidationMessageFor(m=>m.Origin) %>
            ...
        <% } %>
        </div>

    Business rule validation: how do I implement validation of the business rule that the amount of a quota must equal the sum of the amounts of its nested quotas (e.g. 200 = 50 + 150 in the example)? I want the appropriate Html.TextBoxFor(m=>m.Amount) inputs to be highlighted red if the rule is broken; in the example, if the user enters 201 rather than 200, the input should be red on submit. Using server validation only. Thanks a lot for any advice.
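
    The rule itself is easy to state over the flattened (amount, depth) list the form posts back. A minimal sketch of just the check, written in Python for illustration (mapping failures back to ModelState keys is left aside):

        # Validate that each quota's amount equals the sum of its immediate
        # children: the entries that follow it at depth + 1, until the list
        # pops back to the node's own depth or higher.
        def invalid_quota_indexes(quotas):
            """quotas: list of (amount, depth) tuples in document order."""
            bad = []
            for i, (amount, depth) in enumerate(quotas):
                child_sum, has_children = 0, False
                for later_amount, later_depth in quotas[i + 1:]:
                    if later_depth <= depth:
                        break                      # left this node's subtree
                    if later_depth == depth + 1:
                        has_children = True
                        child_sum += later_amount
                if has_children and child_sum != amount:
                    bad.append(i)
            return bad

        # The question's data: 200 = 50 + 150, so nothing is flagged.
        print(invalid_quota_indexes([(100, 0), (200, 0), (50, 1), (150, 1)]))  # []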

  • How do I use constructor dependency injection to supply Models from a collection to their ViewModels

    - by GraemeF
    I'm using constructor dependency injection in my WPF application and I keep running into the following pattern, so I would like to get other people's opinions on it and hear about alternative solutions. The goal is to wire up a hierarchy of ViewModels to a similar hierarchy of Models, so that the responsibility for presenting the information in each model lies with its own ViewModel implementation. (The pattern also crops up under other circumstances, but MVVM should make for a good example.) Here's a simplified example. Given that I have a model that has a collection of further models:

        public interface IPerson
        {
            IEnumerable<IAddress> Addresses { get; }
        }

        public interface IAddress
        {
        }

    I would like to mirror this hierarchy in the ViewModels so that I can bind a ListBox (or whatever) to a collection in the Person ViewModel:

        public interface IPersonViewModel
        {
            ObservableCollection<IAddressViewModel> Addresses { get; }
            void Initialize();
        }

        public interface IAddressViewModel
        {
        }

    The child ViewModel needs to present the information from the child Model, so it's injected via the constructor:

        public class AddressViewModel : IAddressViewModel
        {
            private readonly IAddress _address;

            public AddressViewModel(IAddress address)
            {
                _address = address;
            }
        }

    The question is, what is the best way to supply the child Model to the corresponding child ViewModel? The example is trivial, but in a typical real case the ViewModels have more dependencies, each of which has its own dependencies (and so on). I'm using Unity 1.2 (although I think the question is relevant across the other IoC containers), and I am using Caliburn's view strategies to automatically find and wire up the appropriate View to a ViewModel.

    Here is my current solution. The parent ViewModel needs to create a child ViewModel for each child Model, so it has a factory method added to its constructor which it uses during initialization:

        public class PersonViewModel : IPersonViewModel
        {
            private readonly Func<IAddress, IAddressViewModel> _addressViewModelFactory;
            private readonly IPerson _person;

            public PersonViewModel(IPerson person, Func<IAddress, IAddressViewModel> addressViewModelFactory)
            {
                _addressViewModelFactory = addressViewModelFactory;
                _person = person;
                Addresses = new ObservableCollection<IAddressViewModel>();
            }

            public ObservableCollection<IAddressViewModel> Addresses { get; private set; }

            public void Initialize()
            {
                foreach (IAddress address in _person.Addresses)
                    Addresses.Add(_addressViewModelFactory(address));
            }
        }

    A factory method that satisfies the Func<IAddress, IAddressViewModel> interface is registered with the main UnityContainer. The factory method uses a child container to register the IAddress dependency that is required by the ViewModel and then resolves the child ViewModel:

        public class Factory
        {
            private readonly IUnityContainer _container;

            public Factory(IUnityContainer container)
            {
                _container = container;
            }

            public void RegisterStuff()
            {
                _container.RegisterInstance<Func<IAddress, IAddressViewModel>>(CreateAddressViewModel);
            }

            private IAddressViewModel CreateAddressViewModel(IAddress model)
            {
                IUnityContainer childContainer = _container.CreateChildContainer();
                childContainer.RegisterInstance(model);
                return childContainer.Resolve<IAddressViewModel>();
            }
        }

    Now, when the PersonViewModel is initialized, it loops through each Address in the Model and calls CreateAddressViewModel() (which was injected via the Func<IAddress, IAddressViewModel> argument). CreateAddressViewModel() creates a temporary child container and registers the IAddress model, so that when it resolves the IAddressViewModel from the child container, the AddressViewModel gets the correct instance injected via its constructor.

    This seems to be a good solution to me, as the dependencies of the ViewModels are very clear and they are easily testable and unaware of the IoC container. On the other hand, performance is OK but not great, as a lot of temporary child containers can be created, and I end up with a lot of very similar factory methods. Is this the best way to inject the child Models into the child ViewModels with Unity? Is there a better (or faster) way to do it in other IoC containers, e.g. Autofac? How would this problem be tackled with MEF, given that it is not a traditional IoC container but is still used to compose objects?

  • What is wrong with my SQL syntax here?

    - by CT
    I'm trying to create an IT asset database with a web front end. I've gathered some data from forms using POST, as well as one variable that had already been written to a cookie. This is the first time I have tried to enter the data into the database. Here is the code:

        <?php
        //get data
        $id = $_POST['id'];
        $company = $_POST['company'];
        $location = $_POST['location'];
        $purchase_date = $_POST['purchase_date'];
        $purchase_order = $_POST['purchase_order'];
        $value = $_POST['value'];
        $type = $_COOKIE["type"];
        $notes = $_POST['notes'];
        $manufacturer = $_POST['manufacturer'];
        $model = $_POST['model'];
        $warranty = $_POST['warranty'];

        //set cookies
        setcookie('id', $id);
        setcookie('company', $company);
        setcookie('location', $location);
        setcookie('purchase_date', $purchase_date);
        setcookie('purchase_order', $purchase_order);
        setcookie('value', $value);
        setcookie('type', $type);
        setcookie('notes', $notes);
        setcookie('manufacturer', $manufacturer);
        setcookie('model', $model);
        setcookie('warranty', $warranty);

        //checkdata
        //start database interactions
        // connect to mysql server and database "asset_db"
        mysql_connect("localhost", "asset_db", "asset_db") or die(mysql_error());
        mysql_select_db("asset_db") or die(mysql_error());

        // Insert a row of information into the table "asset"
        mysql_query("INSERT INTO asset (id, company, location, purchase_date, purchase_order, value, type, notes)
        VALUES('$id', '$company', '$location', '$purchase_date', $purchase_order', '$value', '$type', '$notes') ")
        or die(mysql_error());
        echo "Asset Added";

        // Insert a row of information into the table "server"
        mysql_query("INSERT INTO server (id, manufacturer, model, warranty)
        VALUES('$id', '$manufacturer', '$model', '$warranty') ")
        or die(mysql_error());
        echo "Server Added";

        //destination url
        //header("Location: verify_submit_server.php");
        ?>

    The error I get is:

        You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '', '678 ', 'Server', '789')' at line 2

    That data is just test data I was trying to throw in there, but the problem looks to be at the $value, $type, $notes part.

    Here are the table create statements if they help:

        <?php
        // connect to mysql server and database "asset_db"
        mysql_connect("localhost", "asset_db", "asset_db") or die(mysql_error());
        mysql_select_db("asset_db") or die(mysql_error());

        // create asset table
        mysql_query("CREATE TABLE asset(
        id VARCHAR(50) PRIMARY KEY,
        company VARCHAR(50),
        location VARCHAR(50),
        purchase_date VARCHAR(50),
        purchase_order VARCHAR(50),
        value VARCHAR(50),
        type VARCHAR(50),
        notes VARCHAR(200))")
        or die(mysql_error());
        echo "Asset Table Created.</br />";

        // create software table
        mysql_query("CREATE TABLE software(
        id VARCHAR(50) PRIMARY KEY,
        software VARCHAR(50),
        license VARCHAR(50))")
        or die(mysql_error());
        echo "Software Table Created.</br />";

        // create laptop table
        mysql_query("CREATE TABLE laptop(
        id VARCHAR(50) PRIMARY KEY,
        manufacturer VARCHAR(50),
        model VARCHAR(50),
        serial_number VARCHAR(50),
        esc VARCHAR(50),
        user VARCHAR(50),
        prev_user VARCHAR(50),
        warranty VARCHAR(50))")
        or die(mysql_error());
        echo "Laptop Table Created.</br />";

        // create desktop table
        mysql_query("CREATE TABLE desktop(
        id VARCHAR(50) PRIMARY KEY,
        manufacturer VARCHAR(50),
        model VARCHAR(50),
        serial_number VARCHAR(50),
        esc VARCHAR(50),
        user VARCHAR(50),
        prev_user VARCHAR(50),
        warranty VARCHAR(50))")
        or die(mysql_error());
        echo "Desktop Table Created.</br />";

        // create server table
        mysql_query("CREATE TABLE server(
        id VARCHAR(50) PRIMARY KEY,
        manufacturer VARCHAR(50),
        model VARCHAR(50),
        warranty VARCHAR(50))")
        or die(mysql_error());
        echo "Server Table Created.</br />";
        ?>

    Running a standard LAMP stack on Ubuntu 10.04. Thank you.
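
    As an aside, mismatched hand-written quotes like the one producing this error are exactly what bound parameters avoid. A minimal sketch of the idea in Python with the standard library's sqlite3 (MySQL drivers expose the same placeholder mechanism; the table simply mirrors the question's asset schema):

        # Placeholders let the driver handle quoting/escaping, so a stray
        # or missing quote in the SQL string can't happen per-value.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE asset (
            id TEXT PRIMARY KEY, company TEXT, location TEXT, purchase_date TEXT,
            purchase_order TEXT, value TEXT, type TEXT, notes TEXT)""")

        row = ("A-1001", "Acme", "HQ", "2010-04-01", "678", "567", "Server", "789")
        conn.execute(
            "INSERT INTO asset (id, company, location, purchase_date,"
            " purchase_order, value, type, notes) VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
            row,
        )
        conn.commit()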

  • Can I retrieve objects from a complex query that limits results to fields from a single table?

    - by Sean Redmond
    I have a model whose rows I always want to sort based on the values in another associated model, and I was thinking that the way to implement this would be to use set_dataset in the model. This is causing query results to be returned as hashes rather than objects, though, so none of the methods from the class can be used when iterating over the dataset. I basically have two classes:

        class SortFields < Sequel::Model(:sort_fields)
          set_primary_key :objectid
        end

        class Items < Sequel::Model(:items)
          set_primary_key :objectid
          one_to_one :sort_fields, :class => SortFields, :key => :objectid
        end

    Some backstory: the data is imported from a legacy system into mysql. The values in sort_fields are calculated from multiple other associated tables (some one-to-many, some many-to-many) according to some complicated rules. The likely solution will be to just add the values in sort_fields to items (I want to keep the imported data separate from the calculated data, but I don't have to). First, though, I just want to understand how far you can go with a dataset and still get objects rather than hashes.

    If I set the dataset to sort on a field in items like so:

        class Items < Sequel::Model(:items)
          set_primary_key :objectid
          one_to_one :sort_fields, :class => SortFields, :key => :objectid
          set_dataset(order(:sortnumber))
        end

    then the expected clause is added to the generated SQL, e.g.:

        >> Items.limit(1).sql
        => "SELECT * FROM `items` ORDER BY `sortnumber` LIMIT 1"

    and queries still return objects:

        >> Items.limit(1).first.class
        => Items

    If I order it by the associated fields, though...

        class Items < Sequel::Model(:items)
          set_primary_key :objectid
          one_to_one :sort_fields, :class => SortFields, :key => :objectid
          set_dataset(
            eager_graph(:sort_fields).
            order(:sort1, :sort2, :sort3)
          )
        end

    ...I get hashes:

        >> Items.limit(1).first.class
        => Hash

    My first thought was that this happens because all fields from sort_fields are included in the results, and maybe if I selected only the fields from items I would get Items objects again:

        class Items < Sequel::Model(:items)
          set_primary_key :objectid
          one_to_one :sort_fields, :class => SortFields, :key => :objectid
          set_dataset(
            eager_graph(:sort_fields).
            select(:items.*).
            order(:sort1, :sort2, :sort3)
          )
        end

    The generated SQL is what I would expect:

        >> Items.limit(1).sql
        => "SELECT `items`.* FROM `items` LEFT OUTER JOIN `sort_fields` ON (`sort_fields`.`objectid` = `items`.`objectid`) ORDER BY `sort1`, `sort2`, `sort3` LIMIT 1"

    It returns the same rows as the set_dataset(order(:sortnumber)) version, but it still doesn't work:

        >> Items.limit(1).first.class
        => Hash

    Before I add the sort fields to the items table so that they can all live happily in the same model: is there a way to tell Sequel to return an object when it wants to return a hash?

  • Node.js Adventure - Node.js on Windows

    - by Shaun
    Two weeks ago I had a talk with Wang Tao, a C# MVP in China who is currently running his startup company and its product, named worktile. He asked me to figure out a synchronization solution to help his product in the future, and he preferred that I implement the service in Node.js, since worktile is written in Node.js. Even though I have some experience in ASP.NET MVC, HTML, CSS and JavaScript, I don't think I'm an expert in JavaScript; in fact I'm very new to it. So it scared me a bit when he asked me to use Node.js. But after about one week of investigation I have to say Node.js is very easy to learn, use and deploy, even if you have very limited JavaScript skill. And I think I have come to love Node.js. Hence I decided to start a series named "Node.js Adventure", where I will tell the story of learning and using Node.js on Windows and Windows Azure. This is the first one.

    (Brief) Introduction of Node.js

    I don't want to give a fully detailed introduction of Node.js; there are many resources we can find on the internet, and the best one is its homepage. Node.js was created by Ryan Dahl and is sponsored by Joyent. It consists of about 80% C/C++ for the core and 20% JavaScript for the API, and it utilizes CommonJS as its module system, which we will explain later. The official definition of Node.js is:

        Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.

    First of all, Node.js utilizes JavaScript as its development language and runs on top of the V8 engine, which is used by Chrome. It brings JavaScript, a client-side language, into the backend service world. So many people say, even though not entirely accurately, "Node.js is server-side JavaScript". Additionally, Node.js uses an event-driven, non-blocking IO model. This means that in Node.js there's no way to block the currently working thread: every operation in Node.js is executed asynchronously. This is a huge benefit, especially if our code needs IO operations such as reading disks, connecting to a database, consuming a web service, etc.

    Unlike IIS or Apache, Node.js doesn't utilize the multi-thread model. In Node.js there's only one working thread that serves all user requests and resource responses, shown as the ST star in the figure below. And there is a POSIX async threads pool in Node.js which contains many async threads (AT stars) for IO operations. When a user has an IO request, the ST serves it, but it does not perform the IO operation itself. Instead, the ST goes to the POSIX async threads pool, picks up an AT, passes the operation to it, and then goes back to serve any other requests. The AT performs the IO operation asynchronously. Suppose another user comes in before the AT completes the IO operation: the ST serves this new request, picks up another AT from the pool, and goes back again. When the previous AT finishes its IO operation, it takes the result back and waits for the ST; the ST takes the response, returns the AT to the pool, and responds to the user. If the second AT finishes its job, the ST responds back to the second user in the same way. As you can see, in Node.js there's only one thread serving clients' requests and POSIX results. This thread loops between the users and POSIX, passing the data back and forth. The async jobs are handled by POSIX.
    This is the event-driven, non-blocking IO model. The performance of this model is much better than that of the multi-threaded blocking model. For example, Apache is built on the multi-threaded blocking model while Nginx is built on the event-driven non-blocking model; below are the performance and memory usage comparisons between them. These charts are captured from the video NodeJS Basics: An Introductory Training, presented by a Cloud Foundry Developer Advocate.

    Node.js on Windows

    Executing a Node.js application on Windows is very simple. First we need to download the latest Node.js platform from its website. After it is installed, it registers its folder in the system path variable so that we can execute Node.js from anywhere. To confirm the Node.js installation, just open a command window and type "node"; it will show the Node.js console. As you can see, this is a JavaScript interactive console where we can type simple JavaScript code and commands.

    To run a Node.js JavaScript application, just specify the source code file name as the argument of the "node" command. For example, let's create a Node.js source code file named "helloworld.js" and copy a sample from the Node.js website:

        var http = require("http");

        http.createServer(function (req, res) {
            res.writeHead(200, {"Content-Type": "text/plain"});
            res.end("Hello World\n");
        }).listen(1337, "127.0.0.1");

        console.log("Server running at http://127.0.0.1:1337/");

    This code creates a web server listening on port 1337 that returns "Hello World" for any request. Run it in the command window, then open a browser and navigate to http://localhost:1337/. As you can see, when using Node.js we are not so much creating a web application as creating a web server: we need to deal with the request, the response and the related headers, status codes, etc. And this is one of the benefits of using Node.js: it is lightweight and straightforward. But creating a website from scratch again and again is not acceptable. The good news is that Node.js utilizes CommonJS as its module system, so we can leverage modules to simplify our job. Furthermore, there are about ten thousand modules available on the internet, covering almost all areas of server-side application development.

    NPM and Node.js Modules

    Node.js utilizes CommonJS as its module system. A module is a set of JavaScript files. In Node.js, if we have an entry file named "index.js", then all modules it needs will be located in the "node_modules" folder, and in "index.js" we can import modules by specifying the module name. For example, in the code we've just created we imported a module named "http", which is a built-in module installed along with Node.js, so we can use the code in this "http" module.

    Besides the built-in modules there are many modules available on the NPM website. Thousands of developers are contributing and downloading modules there; hence this is another benefit of using Node.js. There are many modules we can use, the number of modules increases very fast, and we can also publish our own modules to the community. When I wrote this post there were 14,608 modules in total on NPM and about 10 thousand downloads per day.

    Installing a module is very simple. Let's go back to our command window and type the command "npm install express". This command installs a module named "express", which is an MVC framework on top of Node.js.
    Let's create another JavaScript file named "helloweb.js" and copy the code below into it. I imported the "express" module. When the user browses the home page it responds with a text message. If the incoming URL matches "/Echo/:value", where "value" is whatever the user specified, it passes the value back along with the current date and time in JSON format. Finally, the website listens on port 12345.

        var express = require("express");
        var app = express();

        app.get("/", function(req, res) {
            res.send("Hello Node.js and Express.");
        });

        app.get("/Echo/:value", function(req, res) {
            var value = req.params.value;
            res.json({
                "Value" : value,
                "Time" : new Date()
            });
        });

        console.log("Web application opened.");
        app.listen(12345);

    For more information and the API of "express", please have a look here. Start our application from the command window with "node helloweb.js", and then navigate to the home page to see the response in the browser. And if we go to, for example, http://localhost:12345/Echo/Hello Shaun, we can see the JSON result.

    The "express" module is very popular on NPM. It makes the job simple when we need to build an MVC website. There are many other useful modules on NPM, for example:

    - underscore: a utility module covering many common functionalities such as each, map, reduce, select, etc.
    - request: a very simple HTTP request client.
    - async: a library to coordinate async operations.
    - wind: a library which enables us to control flow with plain JavaScript for asynchronous programming (and more) without additional pre-compiling steps.

    Node.js and IIS

    I demonstrated how to run a Node.js application from the console. Since we are on Windows, another common requirement would be: "Can I host Node.js in IIS?" The answer is "Yes". Tomasz Janczuk created a project, IISNode, which we can find at his GitHub space, and Scott Hanselman published a blog post introducing it.

    Summary

    In this post I provided a very brief introduction to Node.js, including its official definition, its architecture and how it implements the event-driven, non-blocking model. I then described how to install and run a Node.js application on the Windows console. I also described the Node.js module system and the NPM command. At the end I referred to some links about IISNode, an IIS extension that allows Node.js applications to run on IIS. Node.js has become a very popular server-side application platform, especially this year. By leveraging its non-blocking IO model and async features, it is very useful for building highly scalable, asynchronous services. I think Node.js will be used widely in cloud application development in the near future.

    In the next post I will explain how to use SQL Server from Node.js.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

  • Windows 7 Phone Database – Querying with Views and Filters

    - by SeanMcAlinden
    I’ve just added a feature to Rapid Repository to greatly improve how the Windows 7 Phone database is queried for performance (this is in the trunk, not in release V1.0). The main concept behind it is to create a View Model class which holds only the minimum data you need for a page. This View Model is then stored and retrieved rather than the whole list of entities. Another feature of the views is that they can be pre-filtered to improve query performance even further. You can download the source from the Microsoft Codeplex site http://rapidrepository.codeplex.com/.

    Setting up a view

    Let's say you have an entity that stores lots of data about a game result, for example:

        public class GameScore : IRapidEntity
        {
            public Guid Id { get; set; }
            public string GamerId { get; set; }
            public string Name { get; set; }
            public Double Score { get; set; }
            public Byte[] ThumbnailAvatar { get; set; }
            public DateTime DateAdded { get; set; }
        }

    On your page you want to display a list of scores, but you only want to display the score and the date added, so you create a View Model for displaying just those properties:

        public class GameScoreView : IRapidView
        {
            public Guid Id { get; set; }
            public Double Score { get; set; }
            public DateTime DateAdded { get; set; }
        }

    Now that you have the view model, the first thing to do is set up the view at application start up. This is done using the following syntax:

        public MainPage()
        {
            RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score });
        }

    As you can see, using a little bit of lambda syntax, you put in the code for constructing a single view; this is used internally for mapping an entity to a view. *Note* you do not need to map the Id property; this is done automatically. A view model id will always be the same as that of its corresponding entity.

    Adding Filters

    One of the cool features of the view is that you can add filters to limit the amount of data stored in the view, which will dramatically improve performance. You can add multiple filters using the fluent syntax if required. In this example, let's say that you will only ever show the scores for the last 10 days. You could add a filter like the following:

        public MainPage()
        {
            RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score })
                .AddFilter(x => x.DateAdded > DateTime.Now.AddDays(-10));
        }

    If you wanted to limit the data further, you could also say only scores above 100:

        public MainPage()
        {
            RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score })
                .AddFilter(x => x.DateAdded > DateTime.Now.AddDays(-10))
                .AddFilter(x => x.Score > 100);
        }

    Querying the view model

    So the important part is how to query the data. This is done using the repository's Query method, which accepts the type of view as a generic parameter (you can have multiple View Model types per entity type). You can either use the result of the Query method directly or perform further querying on the result if required.

        public void DisplayScores()
        {
            RapidRepository<GameScore> repository = new RapidRepository<GameScore>();
            List<GameScoreView> scores = repository.Query<GameScoreView>();

            // display logic
        }

    Further filtering:

        public void TodaysScores()
        {
            RapidRepository<GameScore> repository = new RapidRepository<GameScore>();
            List<GameScoreView> todaysScores = repository.Query<GameScoreView>().Where(x => x.DateAdded > DateTime.Now.AddDays(-1)).ToList();

            // display logic
        }

    Retrieving the actual entity

    Retrieving the actual entity can be done easily by using the GetById method on the repository. Say, for example, you allow the user to click on a specific score to get further information: you can take the Id populated in the returned GameScoreView and use it directly on the repository to retrieve the full entity.

        public void GetFullEntity(Guid gameScoreViewId)
        {
            RapidRepository<GameScore> repository = new RapidRepository<GameScore>();
            GameScore fullEntity = repository.GetById(gameScoreViewId);

            // display logic
        }

    Synchronising the view

    If you are upgrading from Rapid Repository V1.0 and are likely to have data in the repository already, you will need to perform a synchronisation to ensure the views and entities are fully in sync. You can either do this as a one-off during the application upgrade or, if you are a little more cautious, run it at each application start up:

        public void MyUpgradeTasks()
        {
            RapidRepository<GameScore>.SynchroniseView<GameScoreView>();
        }

    It's worth noting that in normal operation the view keeps itself in sync with the entities, so this is only really required if you are upgrading from V1.0 to V2.0 when it gets released shortly.

    Summary

    I really hope you like this feature. It will be great for performance, and I believe it supports good practice by promoting the use of View Models for specific pages. I'm hoping to produce a beta for this over the next few days; I just want to add some more tests and hopefully iron out any bugs. I would really appreciate any thoughts on this feature and would really love to know of any bugs you find. You can download the source from the following: http://rapidrepository.codeplex.com/.

    Kind Regards,
    Sean McAlinden.

  • Installing Lubuntu 14.04.1 forcepae fails

    - by Rantanplan
I tried to install Lubuntu 14.04.1 from a CD. First, I chose "Try Lubuntu without installing", which gave:

ERROR: PAE is disabled on this Pentium M (PAE can potentially be enabled with kernel parameter "forcepae" ...

Following the description on https://help.ubuntu.com/community/PAE, I used forcepae and tried "Try Lubuntu without installing" again. That worked fine. dmesg | grep -i pae showed:

[ 0.000000] Kernel command line: file=/cdrom/preseed/lubuntu.seed boot=casper initrd=/casper/initrd.lz quiet splash -- forcepae
[ 0.008118] PAE forced!

In the live-CD session, I tried installing Lubuntu by double-clicking the install button on the desktop. Here, the CD starts running but then stops, and nothing happens. Next, I rebooted and tried installing Lubuntu directly from the boot menu screen, again using forcepae. After a while, I received the following error message:

The installer encountered an unrecoverable error. A desktop session will now be run so that you may investigate the problem or try installing again.

Hitting Enter brings me to the desktop. For what errors should I search? And how? Finally, I rebooted once more and tried "Check disc for defects" with the forcepae option; no errors were found.

Now I am wondering how to find the error, or whether it would be better to follow advice c in https://help.ubuntu.com/community/PAE: "Move the hard disk to a computer on which the processor has PAE capability and PAE flag (that is, almost everything else than a Banias). Install the system as usual but don't add restricted drivers. After the install move the disk back." Thanks for some hints!

Perhaps some of the following can help.

On Lubuntu 12.04:

cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 13
model name      : Intel(R) Pentium(R) M processor 1.50GHz
stepping        : 6
microcode       : 0x17
cpu MHz         : 600.000
cache size      : 2048 KB
fdiv_bug        : no
hlt_bug         : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 2
wp              : yes
flags           : fpu vme de pse tsc msr mce cx8 mtrr pge mca cmov clflush dts acpi mmx fxsr sse sse2 ss tm pbe up bts est tm2
bogomips        : 1284.76
clflush size    : 64
cache_alignment : 64
address sizes   : 32 bits physical, 32 bits virtual
power management:

uname -a
Linux humboldt 3.2.0-67-generic #101-Ubuntu SMP Tue Jul 15 17:45:51 UTC 2014 i686 i686 i386 GNU/Linux

lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 12.04.5 LTS
Release:        12.04
Codename:       precise

cpuid
eax in    eax      ebx      ecx      edx
00000000  00000002 756e6547 6c65746e 49656e69
00000001  000006d6 00000816 00000180 afe9f9bf
00000002  02b3b001 000000f0 00000000 2c04307d
80000000  80000004 00000000 00000000 00000000
80000001  00000000 00000000 00000000 00000000
80000002  20202020 20202020 65746e49 2952286c
80000003  6e655020 6d756974 20295228 7270204d
80000004  7365636f 20726f73 30352e31 007a4847

Vendor ID: "GenuineIntel"; CPUID level 2

Intel-specific functions:
Version 000006d6:
Type 0 - Original OEM
Family 6 - Pentium Pro
Model 13 - Stepping 6
Reserved 0

Brand index: 22 [not in table]
Extended brand string: " Intel(R) Pentium(R) M processor 1.50GHz"
CLFLUSH instruction cache line size: 8

Feature flags afe9f9bf:
FPU    Floating Point Unit
VME    Virtual 8086 Mode Enhancements
DE     Debugging Extensions
PSE    Page Size Extensions
TSC    Time Stamp Counter
MSR    Model Specific Registers
MCE    Machine Check Exception
CX8    COMPXCHG8B Instruction
SEP    Fast System Call
MTRR   Memory Type Range Registers
PGE    PTE Global Flag
MCA    Machine Check Architecture
CMOV   Conditional Move and Compare Instructions
FGPAT  Page Attribute Table
CLFSH  CFLUSH instruction
DS     Debug store
ACPI   Thermal Monitor and Clock Ctrl
MMX    MMX instruction set
FXSR   Fast FP/MMX Streaming SIMD Extensions save/restore
SSE    Streaming SIMD Extensions instruction set
SSE2   SSE2 extensions
SS     Self Snoop
TM     Thermal monitor
31     reserved

TLB and cache info:
b0: unknown TLB/cache descriptor
b3: unknown TLB/cache descriptor
02: Instruction TLB: 4MB pages, 4-way set assoc, 2 entries
f0: unknown TLB/cache descriptor
7d: unknown TLB/cache descriptor
30: unknown TLB/cache descriptor
04: Data TLB: 4MB pages, 4-way set assoc, 8 entries
2c: unknown TLB/cache descriptor

On Lubuntu 14.04.1 live-CD with forcepae:

cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 13
model name      : Intel(R) Pentium(R) M processor 1.50GHz
stepping        : 6
microcode       : 0x17
cpu MHz         : 600.000
cache size      : 2048 KB
physical id     : 0
siblings        : 1
core id         : 0
cpu cores       : 1
apicid          : 0
initial apicid  : 0
fdiv_bug        : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 2
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 sep mtrr pge mca cmov clflush dts acpi mmx fxsr sse sse2 ss tm pbe bts est tm2
bogomips        : 1284.68
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 32 bits virtual
power management:

uname -a
Linux lubuntu 3.13.0-32-generic #57-Ubuntu SMP Tue Jul 15 03:51:12 UTC 2014 i686 i686 i686 GNU/Linux

lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 14.04.1 LTS
Release:        14.04
Codename:       trusty

cpuid
CPU 0:
   vendor_id = "GenuineIntel"
   version information (1/eax):
      processor type  = primary processor (0)
      family          = Intel Pentium Pro/II/III/Celeron/Core/Core 2/Atom, AMD Athlon/Duron, Cyrix M2, VIA C3 (6)
      model           = 0xd (13)
      stepping id     = 0x6 (6)
      extended family = 0x0 (0)
      extended model  = 0x0 (0)
      (simple synth)  = Intel Pentium M (Dothan B1) / Celeron M (Dothan B1), 90nm
   miscellaneous (1/ebx):
      process local APIC physical ID = 0x0 (0)
      cpu count                      = 0x0 (0)
      CLFLUSH line size              = 0x8 (8)
      brand index                    = 0x16 (22)
   brand id = 0x16 (22): Intel Pentium M, .13um
   feature information (1/edx):
      x87 FPU on chip                        = true
      virtual-8086 mode enhancement          = true
      debugging extensions                   = true
      page size extensions                   = true
      time stamp counter                     = true
      RDMSR and WRMSR support                = true
      physical address extensions            = false
      machine check exception                = true
      CMPXCHG8B inst.                        = true
      APIC on chip                           = false
      SYSENTER and SYSEXIT                   = true
      memory type range registers            = true
      PTE global bit                         = true
      machine check architecture             = true
      conditional move/compare instruction   = true
      page attribute table                   = true
      page size extension                    = false
      processor serial number                = false
      CLFLUSH instruction                    = true
      debug store                            = true
      thermal monitor and clock ctrl         = true
      MMX Technology                         = true
      FXSAVE/FXRSTOR                         = true
      SSE extensions                         = true
      SSE2 extensions                        = true
      self snoop                             = true
      hyper-threading / multi-core supported = false
      therm. monitor                         = true
      IA64                                   = false
      pending break event                    = true
   feature information (1/ecx):
      PNI/SSE3: Prescott New Instructions     = false
      PCLMULDQ instruction                    = false
      64-bit debug store                      = false
      MONITOR/MWAIT                           = false
      CPL-qualified debug store               = false
      VMX: virtual machine extensions         = false
      SMX: safer mode extensions              = false
      Enhanced Intel SpeedStep Technology     = true
      thermal monitor 2                       = true
      SSSE3 extensions                        = false
      context ID: adaptive or shared L1 data  = false
      FMA instruction                         = false
      CMPXCHG16B instruction                  = false
      xTPR disable                            = false
      perfmon and debug                       = false
      process context identifiers             = false
      direct cache access                     = false
      SSE4.1 extensions                       = false
      SSE4.2 extensions                       = false
      extended xAPIC support                  = false
      MOVBE instruction                       = false
      POPCNT instruction                      = false
      time stamp counter deadline             = false
      AES instruction                         = false
      XSAVE/XSTOR states                      = false
      OS-enabled XSAVE/XSTOR                  = false
      AVX: advanced vector extensions         = false
      F16C half-precision convert instruction = false
      RDRAND instruction                      = false
      hypervisor guest status                 = false
   cache and TLB information (2):
      0xb0: instruction TLB: 4K, 4-way, 128 entries
      0xb3: data TLB: 4K, 4-way, 128 entries
      0x02: instruction TLB: 4M pages, 4-way, 2 entries
      0xf0: 64 byte prefetching
      0x7d: L2 cache: 2M, 8-way, sectored, 64 byte lines
      0x30: L1 cache: 32K, 8-way, 64 byte lines
      0x04: data TLB: 4M pages, 4-way, 8 entries
      0x2c: L1 data cache: 32K, 8-way, 64 byte lines
   extended feature flags (0x80000001/edx):
      SYSCALL and SYSRET instructions        = false
      execution disable                      = false
      1-GB large page support                = false
      RDTSCP                                 = false
      64-bit extensions technology available = false
   Intel feature flags (0x80000001/ecx):
      LAHF/SAHF supported in 64-bit mode     = false
      LZCNT advanced bit manipulation        = false
      3DNow! PREFETCH/PREFETCHW instructions = false
   brand = " Intel(R) Pentium(R) M processor 1.50GHz"
   (multi-processing synth): none
   (multi-processing method): Intel leaf 1
   (synth) = Intel Pentium M (Dothan B1), 90nm
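As a starting point for the "for what errors should I search?" question, a sketch of where to look; the log paths and the GRUB edit below assume a standard Ubiquity installer and GRUB 2 layout and are not taken from this machine:

# In the live session, the installer's logs are usually the first place to look
# (standard Ubiquity locations; adjust if your image differs):
less /var/log/syslog
less /var/log/installer/debug

# Confirm the kernel really booted with PAE forced:
dmesg | grep -i pae
grep -i pae /proc/cpuinfo

# After a successful install, forcepae can be kept permanently via GRUB 2:
# edit /etc/default/grub so that
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash forcepae"
# then regenerate the boot configuration:
sudo update-grub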

    Read the article
