Search Results

Search found 67075 results on 2683 pages for 'data model'.

  • RabbitMQ as a proxy between a data store and a producer ?

    - by hyperboreean
    I have some code that produces lots of data that should be stored in the database. The problem is that the database can't keep up with the rate at which the data is produced. So I am wondering whether some kind of queuing mechanism would help in this situation - I am thinking in particular of RabbitMQ, and whether it is feasible to have the data stored in its queues until some consumer gets the data out and pushes it to the database. Also, I am not particularly interested in whether the data made it to the database or not, because pretty soon the same data will be updated anyway.
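
    For what it's worth, here is a minimal sketch of that buffering setup using the RabbitMQ .NET client (the queue name and host are made up, and the consumer would normally run as a separate process that drains the queue into the database at whatever rate the database can sustain):

        using System;
        using System.Text;
        using RabbitMQ.Client;
        using RabbitMQ.Client.Events;

        class QueueBufferSketch
        {
            static void Main()
            {
                var factory = new ConnectionFactory { HostName = "localhost" }; // assumed broker location
                using (var connection = factory.CreateConnection())
                using (var channel = connection.CreateModel())
                {
                    // durable queue so buffered rows survive a broker restart
                    channel.QueueDeclare("rows-to-store", durable: true, exclusive: false,
                                         autoDelete: false, arguments: null);

                    // producer side: publish each produced row instead of writing to the DB directly
                    var body = Encoding.UTF8.GetBytes("serialized row goes here");
                    channel.BasicPublish(exchange: "", routingKey: "rows-to-store",
                                         basicProperties: null, body: body);

                    // consumer side (normally a separate process): drain into the database
                    var consumer = new EventingBasicConsumer(channel);
                    consumer.Received += (sender, ea) =>
                    {
                        // insert ea.Body into the database here; ack only after the insert succeeds
                        channel.BasicAck(ea.DeliveryTag, multiple: false);
                    };
                    channel.BasicConsume(queue: "rows-to-store", autoAck: false, consumer: consumer);

                    Console.ReadLine(); // keep the toy consumer alive
                }
            }
        }

    Since losing the occasional row is acceptable here (it will be overwritten soon anyway), the queue could even be made non-durable and bounded so it sheds load instead of growing without limit.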

    Read the article

  • MVC 3 AdditionalMetadata Attribute with ViewBag to Render Dynamic UI

    - by Steve Michelotti
    A few months ago I blogged about using Model metadata to render a dynamic UI in MVC 2. The scenario in the post was that we might have a view model where the questions are conditionally displayed and therefore a dynamic UI is needed. To recap the previous post, the solution was to use a custom attribute called [QuestionId] in conjunction with an “ApplicableQuestions” collection to identify whether each question should be displayed. This allowed me to have a view model that looked like this:

        [UIHint("ScalarQuestion")]
        [DisplayName("First Name")]
        [QuestionId("NB0021")]
        public string FirstName { get; set; }

        [UIHint("ScalarQuestion")]
        [DisplayName("Last Name")]
        [QuestionId("NB0022")]
        public string LastName { get; set; }

        [UIHint("ScalarQuestion")]
        [QuestionId("NB0023")]
        public int Age { get; set; }

        public IEnumerable<string> ApplicableQuestions { get; set; }

    At the same time, I was able to avoid repetitive IF statements for every single question in my view:

        <%: Html.EditorFor(m => m.FirstName, new { applicableQuestions = Model.ApplicableQuestions })%>
        <%: Html.EditorFor(m => m.LastName, new { applicableQuestions = Model.ApplicableQuestions })%>
        <%: Html.EditorFor(m => m.Age, new { applicableQuestions = Model.ApplicableQuestions })%>

    by creating an Editor Template called “ScalarQuestion” that encapsulated the IF statement:

        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %>
        <%@ Import Namespace="DynamicQuestions.Models" %>
        <%@ Import Namespace="System.Linq" %>
        <%
            var applicableQuestions = this.ViewData["applicableQuestions"] as IEnumerable<string>;
            var questionAttr = this.ViewData.ModelMetadata.ContainerType.GetProperty(this.ViewData.ModelMetadata.PropertyName).GetCustomAttributes(typeof(QuestionIdAttribute), true) as QuestionIdAttribute[];
            string questionId = null;
            if (questionAttr.Length > 0)
            {
                questionId = questionAttr[0].Id;
            }
            if (questionId != null && applicableQuestions.Contains(questionId)) { %>
        <div>
            <%: Html.Label("") %>
            <%: Html.TextBox("", this.Model)%>
        </div>
        <% } %>

    You might want to go back and read the full post in order to get the full context. MVC 3 offers a couple of new features that make this scenario more elegant to implement. The first step is to use the new [AdditionalMetadata] attribute which, so far, appears to be an under-appreciated new feature of MVC 3. With this attribute, I don’t need my custom [QuestionId] attribute anymore - now I can just write my view model like this:

        [UIHint("ScalarQuestion")]
        [DisplayName("First Name")]
        [AdditionalMetadata("QuestionId", "NB0021")]
        public string FirstName { get; set; }

        [UIHint("ScalarQuestion")]
        [DisplayName("Last Name")]
        [AdditionalMetadata("QuestionId", "NB0022")]
        public string LastName { get; set; }

        [UIHint("ScalarQuestion")]
        [AdditionalMetadata("QuestionId", "NB0023")]
        public int Age { get; set; }

    Thus far, the documentation seems to be pretty sparse on the AdditionalMetadata attribute. It’s buried in the Other New Features section of the MVC 3 home page and, after showing the attribute on a view model property, it just says, “This metadata is made available to any display or editor template when a product view model is rendered. It is up to you to interpret the metadata information.” But what exactly does it look like for me to “interpret the metadata information”? Well, it turns out it makes the view much easier to work with. Here is the re-implemented ScalarQuestion template updated for MVC 3 and Razor:

        @{
            object questionId;
            ViewData.ModelMetadata.AdditionalValues.TryGetValue("QuestionId", out questionId);
            if (ViewBag.applicableQuestions.Contains((string)questionId)) {
                <div>
                    @Html.LabelFor(m => m)
                    @Html.TextBoxFor(m => m)
                </div>
            }
        }

    So we’ve gone from 17 lines of code (in the MVC 2 version) to about 7-8 lines of code here. The first thing to notice is that in MVC 3 we now have a property called “AdditionalValues” that hangs off of the ModelMetadata property. This is automatically populated by any [AdditionalMetadata] attributes on the property. There is no more need for me to explicitly write Reflection code to GetCustomAttributes() and then check to see if those attributes were present. I can just call TryGetValue() on the dictionary to see if they were present. Secondly, consider the “applicableQuestions” anonymous type that I passed in from the calling view – in MVC 3 I now have a dynamic ViewBag property where I can just “dot into” applicableQuestions with a nicer syntax than the dictionary square-bracket syntax. And there’s no problem calling the Contains() method on this dynamic object, because at runtime the DLR has resolved that it is a generic List<string>. At this point you might be saying that, yes, the view got much nicer than the MVC 2 version, but my view model got slightly worse. In the previous version I had a nice [QuestionId] attribute but now, with the [AdditionalMetadata] attribute, I have to type the string “QuestionId” for every single property and hope that I don’t make a typo. Well, the good news is that it’s easy to create your own attributes that can participate in the metadata’s additional values. The key is that the attribute must implement the IMetadataAware interface and populate the AdditionalValues dictionary in the OnMetadataCreated() method:

        public class QuestionIdAttribute : Attribute, IMetadataAware
        {
            public string Id { get; set; }

            public QuestionIdAttribute(string id)
            {
                this.Id = id;
            }

            public void OnMetadataCreated(ModelMetadata metadata)
            {
                metadata.AdditionalValues["QuestionId"] = this.Id;
            }
        }

    This now allows me to encapsulate my “QuestionId” string in just one place and get back to my original attribute, which can be used like this: [QuestionId("NB0021")]. The [AdditionalMetadata] attribute is a powerful and under-appreciated new feature of MVC 3. Combined with the dynamic ViewBag property, you can do some really interesting things with your applications with less code and ceremony.

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx] I’m wrapping up a bit of the work we’ve been doing on data movement optimizations for cloud computing, and the latest set of data yielded some interesting points I thought I’d share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting.

    Summary: for those who don’t like to read detailed posts or don’t have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains… upwards of 10x-24x and a reduction in overall file transfer time of upwards of 90% (e.g., uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards).

    Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central, as it is physically closest to us) and do not represent intra-cloud results (we have performed intra-cloud tests and the overall results are similar in notion, but the data rates are significantly different, as are the tipping points for the various block sizes… this will be detailed separately).

    We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the Azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files of the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here.

    The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially, thereby spreading the effects of periodic Internet delays across the collection of results. We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps will yield a sufficiently balanced set of results.

    Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then to upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see the linked source for specific implementation details in Program.cs, line 173 and following… less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process is as follows:

    We then tested the effects of blocking & parallelizing the transfers by running the updated application against the same source set and doing a parameter sweep on the block size, including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn’t worth the trouble and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers, and the results were encouraging. The Excel version of the results is available here.

    Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size, you will end up with a “negative optimization” due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and as supported by the raw data provided in the linked worksheet), the charts and discussion below ignore source file sizes less than 1MB.

    (click chart for full size image) The chart above illustrates some interesting points about the results:
    - When the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size)
    - For some of the moderately-sized source files, small blocks (256KB) are best
    - As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, increased number of individual transfer requests, and reassembly/committal costs)
    - Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant
    - The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file.

    (click chart for full size image) The above is another view of the same data as the prior chart, just with the axes changed (the x-axis represents file size and the plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size, but also highlights the benefits of some of the other block sizes at different source file sizes.

    This last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here, other than that this view of the data highlights the negative effects of poorly choosing a block size for smaller files.

    Summary: What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.

    Related Resources:
    - Source code for upload test application
    - Source code for random file generator
    - OData feed of raw data from non-optimized transfer tests (Experiment Metadata, Experiment Datasets, 2KB Uploads, 32KB Uploads, 64KB Uploads, 128KB Uploads, 256KB Uploads, 512KB Uploads, 1MB Uploads, 5MB Uploads, 10MB Uploads, 25MB Uploads, 50MB Uploads, 100MB Uploads, 250MB Uploads, 500MB Uploads, 750MB Uploads, 1GB Uploads, Raw Data)
    - OData feeds of raw data from blocked/parallelized transfer tests (Experiment Metadata, Experiment Datasets, Raw Data, 256KB Blocks, 512KB Blocks, 1MB Blocks, 2MB Blocks, 4MB Blocks)
    - Excel worksheet showing summarizations and comparisons
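
    For readers who don't want to dig into the linked source, here is a rough, simplified sketch of the blocked/parallel upload idea described above, written against the 1.x StorageClient library the post mentions (the block-id scheme, buffer handling, and error handling are all simplified, and the MD5 check is omitted; the real test harness is in the linked code):

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Threading.Tasks;
        using Microsoft.WindowsAzure.StorageClient;

        static class BlockUploadSketch
        {
            public static void UploadInBlocks(CloudBlockBlob blob, string path, int blockSize)
            {
                long fileLength = new FileInfo(path).Length;
                int blockCount = (int)Math.Ceiling((double)fileLength / blockSize);

                // block ids must be base64 strings of equal length
                var blockIds = new List<string>();
                for (int i = 0; i < blockCount; i++)
                    blockIds.Add(Convert.ToBase64String(BitConverter.GetBytes(i)));

                // upload the blocks in parallel, each from its own stream over the source file
                Parallel.For(0, blockCount, i =>
                {
                    using (var fs = File.OpenRead(path))
                    {
                        fs.Seek((long)i * blockSize, SeekOrigin.Begin);
                        int size = (int)Math.Min((long)blockSize, fileLength - (long)i * blockSize);
                        var buffer = new byte[size];
                        fs.Read(buffer, 0, size);
                        blob.PutBlock(blockIds[i], new MemoryStream(buffer), null); // null = no MD5 check here
                    }
                });

                // commit the block list to assemble the final blob
                blob.PutBlockList(blockIds);
            }
        }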

    Read the article

  • Oracle OpenWorld 2013 – Wrap up by Sven Bernhardt

    - by JuergenKress
    OOW 2013 is over and we’re heading home, so it is time to lean back and reflect on the impressions we have from the conference. First of all: OOW was great! It was a pleasure to be a part of it. As already mentioned in our last blog article: it was the biggest OOW ever. In parallel to the conference, the America’s Cup took place in San Francisco and Oracle Team America won. Amazing job by the team, and again congratulations from our side. Back to the conference. The main topics for us were: Oracle SOA / BPM Suite 12c, Adaptive Case Management (ACM), Big Data, Fast Data, Cloud, and Mobile. Below we will go into a little more detail on the key takeaways regarding these points.

    Oracle SOA / BPM Suite 12c: During the five days at OOW, first details of the upcoming major release of Oracle SOA Suite 12c and Oracle BPM Suite 12c were introduced. Some new key features are: Managed File Transfer (MFT) for transferring big files from a source to a target location; enhanced REST support through a new REST binding; introduction of a generic cloud adapter, which can be used to connect to different cloud providers, like Salesforce; enhanced analytics with BAM, which has been totally reengineered (the BAM Console now also runs in Firefox!); introduction of templates (OSB pipelines, component templates, BPEL activity templates); EM as a single monitoring console; OSB design-time integration into JDeveloper (really great!); and enterprise modeling capabilities in BPM Composer. These are only a few points from what is coming with 12c. We are really looking forward to the new release coming out, because this seems to be really great stuff. The suite becomes more and more integrated. From 10g to 11g it was an evolution in terms of developing SOA-based applications. With 12c, Oracle continues on its way – very impressive.

    Adaptive Case Management: Another fantastic topic was Adaptive Case Management (ACM). The Oracle PMs did a great job, especially at the demo grounds, in showing the upcoming Case Management UI (which will be available in 11g with the next BPM Suite MLR patch), the roadmap, and the differences from traditional business process modeling. They have been very busy during the conference because a lot of partners and customers have been interested.

    Big Data: Big Data is one of the current hype themes. Because of huge data amounts from different internal or external sources, the handling of this data becomes more and more challenging. Companies have a need for analyzing the data to optimize their business. The challenge is: the amount of data is growing daily! To store and analyze the data efficiently, it is necessary to have a scalable and flexible infrastructure. Here it is important that hardware and software are engineered to work together. Therefore several new features of the Oracle Database 12c, like the new in-memory option, were presented by Larry Ellison himself. On the hardware side, new server machines like the Fujitsu M10 and new processors, such as Oracle’s new M6-32, were announced. The performance improvements when using one of these hardware components in connection with the improved software solutions were really impressive. For more details about this, please take a look at our previous blog post.

    Regarding Big Data, Oracle also introduced their Big Data architecture, which consists of: the Oracle Big Data Appliance, which comes preconfigured with Hadoop; Oracle Exadata, which stores a huge amount of data efficiently to achieve optimal query performance; and Oracle Exalytics as a fast and scalable business analytics system. Analysis of the stored data can be performed using SQL, by streaming the data directly from Hadoop to an Oracle Database 12c. Alternatively, the analysis can be implemented directly in Hadoop using “R”. In addition, the Oracle BI tools can be used to analyze the data.

    Fast Data: Fast Data is a complementary approach to Big Data. A huge amount of mostly unstructured data comes in via different channels with a high frequency. The analysis of these data streams is also important for companies, because the incoming data has to be analyzed for business-relevant patterns in real time. Therefore these patterns must be identified efficiently and with good performance. To that end, in-memory grid solutions in combination with Oracle Coherence and Oracle Event Processing demonstrated very impressively how efficient real-time data processing can be. One example of a Fast Data solution shown during OOW was the analysis of Twitter streams regarding customer satisfaction. The feeds with negative words like “bad” or “worse” were filtered, and after a defined threshold was reached within a certain timeframe, a business event was triggered.

    Cloud: Another key trend in the IT market is, of course, Cloud Computing and what it means for companies and their businesses. Oracle announced their Cloud strategy and vision – companies can focus on their real business while all of the applications are available via the Cloud. This also includes Oracle Database and Oracle WebLogic, so that companies can also build, deploy and run their own applications within the cloud. Three different approaches have been introduced: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). Using the IaaS approach, only the infrastructure components are managed in the Cloud. Customers are very flexible regarding memory, storage or the number of CPUs, because those parameters can be adjusted elastically. The PaaS approach means that, besides the infrastructure, the platforms (such as databases or application servers) necessary for running applications are also provided within the Cloud. Here customers can also decide whether installation and management of these infrastructure components should be done by Oracle. The SaaS approach is the most complete one: all applications a company uses are managed in the Cloud. Oracle is planning to provide all of their applications, like ERP systems or HR applications, as Cloud services. In conclusion, this seems to be a very forward-thinking strategy, which opens up new possibilities for customers to manage their infrastructure and applications in a flexible, scalable and future-oriented manner.

    As you can see, our OOW days have been very interesting. We collected a lot of helpful information for our projects. The innovations presented at the conference are great, and being part of this was even greater! We are looking forward to next year’s conference!
Links: http://www.oracle.com/openworld/index.html http://thecattlecrew.wordpress.com/2013/09/23/first-impressions-from-oracle-open-world-2013 SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: cattleCrew,Sven Bernhard,OOW2013,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Validate MVC 2 form using Data annotations and Linq-to-SQL, before the model binder kicks in (with D

    - by Stefanvds
    I'm using LINQ to SQL and MVC 2 with data annotations, and I'm having some problems with validation of certain types. For example:

        [DisplayName("Geplande sessies")]
        [PositiefGeheelGetal(ErrorMessage = "Ongeldige ingave. Positief geheel getal verwacht")]
        public string Proj_GeplandeSessies { get; set; }

    This is an integer, and I'm validating that the form submits a positive number:

        public class PositiefGeheelGetalAttribute : RegularExpressionAttribute
        {
            public PositiefGeheelGetalAttribute() : base(@"\d{1,7}") { }
        }

    Now the problem is that when I type text into the input, I don't get to see THIS error; instead I get the error message from the model binder saying "The value 'Tomorrow' is not valid for Geplande sessies." The code in the controller:

        [HttpPost]
        public ActionResult Create(Projecten p)
        {
            if (ModelState.IsValid)
            {
                _db.Projectens.InsertOnSubmit(p);
                _db.SubmitChanges();
                return RedirectToAction("Index");
            }
            else
            {
                SelectList s = new SelectList(_db.Verbonds, "Verb_ID", "Verb_Naam");
                ViewData["Verbonden"] = s;
            }
            return View();
        }

    What I want is to be able to run the data annotations before the model binder, but that sounds pretty much impossible. What I really want is for my self-written error messages to show up on the screen. I have the same problem with a DateTime, which I want users to enter in the specific format 'dd/MM/yyyy', and I have a regex for that. But again, by the time the data annotations do their job, all I get is a DateTime object, not the original string. So if the input is not a date, the regex does not even run, because the data annotations just get a null, because the model binder couldn't convert the input to a DateTime. Does anyone have an idea how to make this work?
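
    A common way around this (not from the original post, just a frequently used workaround) is to bind the POST to a separate edit model whose properties are plain strings, so the regex annotations run against the raw input before any type conversion, and only then map the values onto the LINQ-to-SQL entity. A rough sketch, with illustrative names:

        using System;
        using System.ComponentModel;
        using System.ComponentModel.DataAnnotations;
        using System.Globalization;
        using System.Web.Mvc;

        public class ProjectEditModel
        {
            [DisplayName("Geplande sessies")]
            [RegularExpression(@"\d{1,7}", ErrorMessage = "Ongeldige ingave. Positief geheel getal verwacht")]
            public string GeplandeSessies { get; set; }

            [RegularExpression(@"\d{2}/\d{2}/\d{4}", ErrorMessage = "Ongeldige datum, dd/MM/yyyy verwacht")]
            public string StartDatum { get; set; }
        }

        public class ProjectEditController : Controller
        {
            [HttpPost]
            public ActionResult Create(ProjectEditModel form)
            {
                if (!ModelState.IsValid)
                    return View(form); // the custom regex messages show up here

                // safe to convert now: the strings already matched the expected formats
                var geplandeSessies = int.Parse(form.GeplandeSessies);
                var startDatum = DateTime.ParseExact(form.StartDatum, "dd/MM/yyyy", CultureInfo.InvariantCulture);

                // map onto the LINQ-to-SQL entity and save it here

                return RedirectToAction("Index");
            }
        }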

    Read the article

  • How to use NInject (or other DI / IoC container) with the model binder in ASP.NET MVC 2 ?

    - by Andrei Rinea
    Let's say I have a User entity and I want to set its CreationTime property in the constructor to DateTime.Now. But being a unit test adopter, I don't want to access DateTime.Now directly but use an ITimeProvider:

        public class User
        {
            public User(ITimeProvider timeProvider)
            {
                // ...
                this.CreationTime = timeProvider.Now;
            }
            // .....
        }

        public interface ITimeProvider
        {
            DateTime Now { get; }
        }

        public class TimeProvider : ITimeProvider
        {
            public DateTime Now { get { return DateTime.Now; } }
        }

    I am using NInject 2 in my ASP.NET MVC 2.0 application. I have a UserController and two Create methods (one for GET and one for POST). The one for GET is straightforward, but the one for POST is not so straight and not so forward :P because I need to mess with the model binder to tell it to get a reference to an implementation of ITimeProvider in order to be able to construct a User instance.

        public class UserController : Controller
        {
            [HttpGet]
            public ViewResult Create()
            {
                return View();
            }

            [HttpPost]
            public ActionResult Create(User user)
            {
                // ...
            }
        }

    I would also like to be able to keep all the features of the default model binder. Any chance to solve this simply/elegantly/etc? :D
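
    One possible approach (a sketch, not an official NInject feature beyond kernel.Get/TryGet): replace the default model binder with one that asks the kernel to construct the model type, and let DefaultModelBinder do everything else. The NinjectModelBinder class name is made up here:

        using System;
        using System.Web.Mvc;
        using Ninject;

        public class NinjectModelBinder : DefaultModelBinder
        {
            private readonly IKernel kernel;

            public NinjectModelBinder(IKernel kernel)
            {
                this.kernel = kernel;
            }

            protected override object CreateModel(ControllerContext controllerContext,
                                                  ModelBindingContext bindingContext,
                                                  Type modelType)
            {
                // ask Ninject first so constructor dependencies like ITimeProvider get injected;
                // fall back to the default behaviour for anything the kernel can't build
                return kernel.TryGet(modelType)
                       ?? base.CreateModel(controllerContext, bindingContext, modelType);
            }
        }

        // registered once at application start, e.g. in Global.asax:
        // ModelBinders.Binders.DefaultBinder = new NinjectModelBinder(kernel);

    Because only CreateModel is overridden, all the property binding and validation behaviour of the default model binder is kept.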

    Read the article

  • How to tell what name RIA Services/EF Model uses for Associations?

    - by Nick Gotch
    Hi, I'm working on a C#.NET 3.5 WCF RIA Services app and having an issue with my Entity Framework model. My entity Foo is mapped to a DB table and has a primary key called FooId. My Bar is mapped to a DB view. I've selectively designed this view to generate a composite key in the EF using two of the columns (by making sure they were non-nullable and the others are all nullable. This was done using NULLIF and ISNULL in the view design.) I'm able to add this view to the model with no problem but I keep running into an issue when I try to map an association between the two. Foo should contain many Bars but I keep getting the following error when I add the association: Unable to retrieve AssociationType for association 'FK_Bar_Foo' According to this page, it looks like this might work if I can properly name the association (since RIA Services looks for specific names.) I've tried several variants of names that match the pattern of other associations with no success. Does anyone know if there's a place I can look to find out what name it's looking for? Thanks,

    Read the article

  • Combine 3 select fields and validate as one in my User model in ruby on rails 3

    - by Psychonetics
    OK, I have 3 select boxes for selecting a date of birth, and I have constants set up in my User model to provide the months, years, etc. I can successfully validate these select boxes separately. What I want to do is combine :day, :month and :year, store the result in :birthday, and validate the whole date as one, so I can return 1 error rather than 3 separate ones. Also, doing this will make it easier to store the validated date in the birthday field in my database.

    Part of my form:

        <td> <%= f.input :day, :required => false, :label => "Birthday: ", :prompt => "Day", :collection => User::DAYS %> </td>
        <td> <%= f.input :month, :label => false, :prompt => "Month", :collection => User::MONTHS %> </td>
        <td> <%= f.input :year, :label => false, :prompt => "Year", :collection => User::YEAR_RANGE %> </td>

    Part of my User model:

        MONTHS = ["January", 1], ["February", 2], ["March", 3], ["April", 4], ["May", 5], ["June", 6], ["July", 7], ["August", 8], ["September", 9], ["October", 10], ["November", 11], ["December", 12] # finish this
        DAYS = 1..31 # finish this
        START_YEAR = Time.now.year - 106
        END_YEAR = Time.now.year
        YEAR_RANGE = START_YEAR..END_YEAR

        class User < ActiveRecord::Base
          attr_accessor :day, :month, :year

          validates_presence_of :day, :message => 'What day in a month was you born?'
          validates_presence_of :month, :message => 'What month was you born?'
          validates_presence_of :year, :message => 'What is your year of birth?'
        end

    Read the article

  • How do I get the collection of Model State Errors in ASP.NET MVC?

    - by Ryan Montgomery
    How do I get the collection of errors in a view? I don't want to use the Html Helper Validation Summary or Validation Message. Instead I want to check for errors and, if there are any, display them in a specific format. Also, on the input controls I want to check for a specific property error and add a class to the input. P.S. I'm using the Spark View Engine but the idea should be the same. So I figured I could do something like...

        <if condition="${ModelState.Errors.Count > 0}">
            DisplayErrorSummary()
        </if>

    ....and also...

        <input type="text" value="${Model.Name}" class="?{ModelState.Errors["Name"] != string.empty} error" />

    .... Or something like that.

    UPDATE: My final solution looked like this:

        <input type="text" value="${ViewData.Model.Name}" class="text error?{!ViewData.ModelState.IsValid && ViewData.ModelState["Name"].Errors.Count() > 0}" id="Name" name="Name" />

    This only adds the error css class if this property has an error.
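
    In plain C# terms (independent of the Spark syntax above), the two checks the view needs look roughly like this; the same expressions can be dropped into the ${...} and ?{...} bindings:

        using System.Collections.Generic;
        using System.Linq;
        using System.Web.Mvc;

        public static class ModelStateInspection
        {
            // all error messages, e.g. for a hand-rolled error summary
            public static IEnumerable<string> AllErrors(ModelStateDictionary modelState)
            {
                return modelState.Values
                                 .SelectMany(v => v.Errors)
                                 .Select(e => e.ErrorMessage);
            }

            // per-field check, e.g. to decide whether to append an "error" css class to an input
            public static bool HasError(ModelStateDictionary modelState, string key)
            {
                return modelState.ContainsKey(key) && modelState[key].Errors.Count > 0;
            }
        }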

    Read the article

  • Can I give a different id to define Cakephp model associations?

    - by gacrux
    I have a one-to-many association in which a Thing can have many Statuses, defined as below.

    Status model:

        class Status extends AppModel {
            var $name = 'Status';
            var $belongsTo = array(
                'Thing' => array(
                    'className'  => 'Thing',
                    'foreignKey' => 'thing_id',
                )
            );
        }

    Thing model:

        class Thing extends AppModel {
            var $name = 'Thing';
            var $belongsTo = array(
                // other associations
            );
            var $hasMany = array(
                'Status' => array(
                    'className'  => 'Status',
                    'foreignKey' => 'thing_id',
                    'dependent'  => false,
                    'order'      => 'datetime DESC',
                    'limit'      => '10',
                ),
                // other associations
            );
        }

    This works OK, but I would like Thing to use a different id to connect to Status. E.g. Thing would use 'id' for all of its other associations but use 'thing_status_id' for its Status association. How can I best do this?

    Read the article

  • How do I do a select input for an STI column in a Rails model?

    - by James A. Rosen
    I have a model with single-table inheritance on the type column: class Pet < ActiveRecord::Base TYPES = [Dog, Cat, Hamster] validates_presence_of :name end I want to offer a <select> dropdown on the new and edit pages: <% form_for @model do |f| %> <%= f.label :name %> <%= f.text_input :name %> <%= f.label :type %> <%= f.select :type, Pet::TYPES.map { |t| [t.human_name, t.to_s] } %> <% end %> That gives me the following error: ActionView::TemplateError (wrong argument type String (expected Module)) I read a suggestion to use an alias for the field #type since Ruby considers that a reserved word that's the same as #class. I tried both class Pet < ActiveRecord::Base ... alias_attribute :klass, :type end and class Pet < ActiveRecord::Base ... def klass self.type end def klass=(k) self.type = k end end Neither worked. Any suggestions? Oddly, it works fine on my machine (MRI 1.8.6 on RVM), but fails on the staging server (MRI 1.8.7 not on RVM).

    Read the article

  • Should I remove all inheritance from my model in order to work with ria services?

    - by TimothyP
    I've posted some questions on this before, but this one is different. Consider a small portion of our model: Person, Customer, Employee, Spouse. Person is the base class, and the other 3 classes inherit from it. These 4 are very central in our design and link to many other entities. I could solve all the problems I'm experiencing with RIA Services by removing the inheritance, but that would really increase the complexity of the model. The first problem I experienced was that I couldn't query for Customers, Employees or Spouses, but someone gave me a solution, which was to add something like this to the DomainService:

        public IQueryable<Employee> GetEmployees()
        {
            return this.ObjectContext.People.OfType<Employee>();
        }

        public IQueryable<Customer> GetCustomers()
        {
            return this.ObjectContext.People.OfType<Customer>();
        }

        public IQueryable<Spouse> GetSpouses()
        {
            return this.ObjectContext.People.OfType<Spouse>();
        }

    Next I tried something that seemed very normal to me:

        var employee = new Employee()
        {
            //.... left out to reduce the length of this question
        };
        var spouse = new Spouse()
        {
            //.... left out to reduce the length of this question
        };
        employee.Spouse = spouse;

        context.People.Add(spouse);
        context.People.Add(employee);
        context.SubmitChanges();

    Then I get the following exception: "Cannot retrieve an entity set for the derived entity type 'Spouse'. Use EntityContainer.GetEntitySet(Type) to get the entity set for the base entity type 'Person'." Even when the spouse is already in the database and I retrieve it first, I get similar exceptions. Also note that for some reason in some places "Persons" is used instead of "People"... So how do I solve this problem, what am I doing wrong, and will I keep running into walls when using RIA Services with inheritance? I found some references on the web, all saying it works, followed by some DomainService code in which they supposedly changed something, but no details... I'm using VS2010 RC1 + Silverlight 4

    Read the article

  • How do I make a grouped select box grouped by a column for a given model in Formtastic for Rails?

    - by jklina
    In my Rails project I'm using Formtastic to manage my forms. I have a model, Tags, with a column, "group". The group column is just a simple hardcoded way to organize my tags. I will post my Tag model class so you can see how it's organized class Tag < ActiveRecord::Base class Group BRAND = 1 SEASON = 2 OCCASION = 3 CONDITION = 4 SUBCATEGORY = 5 end has_many :taggings, :dependent => :destroy has_many :plaggs, :through => :taggings has_many :monitorings, :as => :monitorizable validates_presence_of :name, :group validates_uniqueness_of :name, :case_sensitive => false def self.brands(options = {}) self.all({ :conditions => { :group => Group::BRAND } }.merge(options)) end def self.seasons(options = {}) self.all({ :conditions => { :group => Group::SEASON } }.merge(options)) end def self.occasions(options = {}) self.all({ :conditions => { :group => Group::OCCASION } }.merge(options)) end def self.conditions(options = {}) self.all({ :conditions => { :group => Group::CONDITION } }.merge(options)) end def self.subcategories(options = {}) self.all({ :conditions => { :group => Group::SUBCATEGORY } }.merge(options)) end def self.non_brands(options = {}) self.all({ :conditions => [ "`group` != ? AND `group` != ?", Tag::Group::SUBCATEGORY, Tag::Group::BRAND] }.merge(options)) end end My goal is to use Formtastic to provide a grouped multiselect box, grouped by the column, "group" with the tags that are returned from the non_brands method. I have tried the following: = f.input :tags, :required => false, :as => :select, :input_html => { :multiple => true }, :collection => tags, :selected => sel_tags, :group_by => :group, :prompt => false But I receive the following error: (undefined method `klass' for nil:NilClass) Any ideas where I'm going wrong? Thanks for looking :]

    Read the article

  • How to convert model into url properly in asp.net MVC?

    - by 4eburek
    From an SEO standpoint it is nice to have URLs in a format that explains what is located on a page. Let's look at an example situation: we need to display a page about some user and decided to use this URL template for that page: /user/{UserId}/{UserCountry}/{UserLogin}, and created this model for the purpose:

        public class UserUrlInfo
        {
            public int UserId { get; set; }
            public string UserCountry { get; set; }
            public string UserLogin { get; set; }
        }

    I want to create a controller method where I pass a UserUrlInfo object rather than all the required fields separately. The classic controller method for the URL template shown above is the following:

        public ActionResult Index(int UserId, string UserCountry, string UserLogin)
        {
            return View();
        }

    and we need to call it like this:

        Html.ActionLink<UserController>(x => Index(user.UserId, user.UserCountry, user.UserLogin), "See user page")

    I want to create a controller method like this instead:

        public ActionResult Index(UserUrlInfo userInfo)
        {
            return View();
        }

    and call it like this:

        Html.ActionLink<UserController>(x => Index(user), "See user page")

    Actually, it works when we add one more route pointing to the same controller method, so the routing becomes:

        /user/{userInfo}
        /user/{UserId}/{UserCountry}/{UserLogin}

    In this situation the routing engine calls the ToString() method of our model (which needs to be overridden) and it works ALMOST always. But sometimes it fails and produces a URL like /page/?userInfo=/US/John, so my workaround does not always work properly. Does anybody know how to handle URLs this way?
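
    One more predictable alternative (just a sketch, not from the original post) is to keep the three-segment route and expand the object into route values in a single helper, so the generated URL never falls back to the /page/?userInfo=... query-string form:

        using System.Web.Mvc;
        using System.Web.Mvc.Html;

        public static class UserLinkExtensions
        {
            // builds /user/{UserId}/{UserCountry}/{UserLogin} from the model in one place
            public static MvcHtmlString UserPageLink(this HtmlHelper html, UserUrlInfo user, string linkText)
            {
                return html.ActionLink(linkText, "Index", "User", new
                {
                    UserId = user.UserId,
                    UserCountry = user.UserCountry,
                    UserLogin = user.UserLogin
                }, null);
            }
        }

    A view would then call Html.UserPageLink(user, "See user page") and could still hit the Index(UserUrlInfo userInfo) overload, provided the route maps the three segments onto the model's properties.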

    Read the article

  • Should my validator have access to my entire model?

    - by wb
    As the title states, I'm wondering if it's a good idea for my validation class to have access to all the properties of my model. Ideally, I would like to do that because some fields require 10+ other fields to verify whether they are valid or not. I could, but would rather not, have functions with 10+ parameters. Or would that make the model and validator too coupled with one another? Here is a little example of what I mean. This code, however, does not work because it gives an infinite loop!

        Class User
            Private m_UserID
            Private m_Validator

            Public Sub Class_Initialize()
            End Sub

            Public Property Let Validator(value)
                Set m_Validator = value
                m_Validator.Initialize(Me)
            End Property

            Public Property Get Validator()
                Validator = m_Validator
            End Property

            Public Property Let UserID(value)
                m_UserID = value
            End Property

            Public Property Get UserID()
                UserID = m_Validator.IsUserIDValid()
            End Property
        End Class

        Class Validator
            Private m_User

            Public Sub Class_Initialize()
            End Sub

            Public Sub Initialize(value)
                Set m_User = value
            End Sub

            Public Function IsUserIDValid()
                IsUserIDValid = m_User.UserID > 13
            End Function
        End Class

        Dim mike : Set mike = New User
        mike.UserID = 123456
        mike.Validator = New Validator
        Response.Write mike.UserID

    If I'm right and it is a good idea, how can I go ahead and fix the infinite loop with the Get property UserID? Thank you.

    Read the article

  • WPF - How do I use the UserControl with a dependency property and view model?

    - by user320849
    Hello, My goal is to have a user select a year and a month. Translate the selection into a date and have the user control send the date back to my view model. That part works for me....However, I cannot get the ViewModel's initial date to set those drop downs. public static readonly DependencyProperty Date = DependencyProperty.Register("ReturnDate", typeof(DateTime), typeof(DatePicker), new FrameworkPropertyMetadata{BindsTwoWayByDefault = true,}); public DateTime ReturnDate { get { return Convert.ToDateTime(GetValue(Date)); } set { SetDropDowns(value); SetValue(Date, value); } } The SetDropDowns(value) just sets the selected items on the combo boxes, however, the program never makes it to that method. On the view I am using: <cc1:DatePicker ReturnDate="{Binding Path=StartDate, Mode=TwoWay}" IsStart="True" /> If this has been answered, then my bad. I looked around and didn't see anything that worked for me. Thus, when the program loads how do I get the value from the view model to a method in order to set the combo boxes? Thanks, -Scott
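
    The likely culprit (a general WPF behaviour, not specific to this control): the binding engine writes dependency properties through SetValue() directly and never goes through the CLR property wrapper, so SetDropDowns(value) in the setter is never called. A sketch of the usual fix is to move that work into a property-changed callback registered in the metadata (the default value and callback name below are illustrative):

        using System;
        using System.Windows;
        using System.Windows.Controls;

        public partial class DatePicker : UserControl
        {
            public static readonly DependencyProperty Date =
                DependencyProperty.Register(
                    "ReturnDate", typeof(DateTime), typeof(DatePicker),
                    new FrameworkPropertyMetadata(
                        default(DateTime),
                        FrameworkPropertyMetadataOptions.BindsTwoWayByDefault,
                        OnReturnDateChanged));

            public DateTime ReturnDate
            {
                get { return (DateTime)GetValue(Date); }
                set { SetValue(Date, value); } // wrapper stays side-effect free
            }

            // runs whether the value arrives through the binding or through code
            private static void OnReturnDateChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
            {
                ((DatePicker)d).SetDropDowns((DateTime)e.NewValue);
            }

            private void SetDropDowns(DateTime value)
            {
                // set the SelectedItem of the year/month combo boxes here
            }
        }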

    Read the article

  • Resizing a GLJPanel with JOGL causes my model to disappear.

    - by badcodenotreat
    I switched over to using a GLJPanel from a GLCanvas to avoid certain flickering issues, however this has created several unintended consequences of its own. From what I've gleaned so far, GLJPanel calls GLEventListener.init() every time it's resized, which either resets various OpenGL functions I've enabled in init() (depth test, lighting, etc...) if I'm lucky, or completely obliterates my model if I'm not. I've tried debugging it but I'm not able to correct this behavior. This is my init() function:

        gl.glShadeModel( GL.GL_SMOOTH );
        gl.glEnable( GL.GL_DEPTH_TEST );
        gl.glDepthFunc( GL.GL_LEQUAL );
        gl.glDepthRange( zNear, zFar );
        gl.glDisable( GL.GL_LINE_SMOOTH );
        gl.glEnable( GL.GL_NORMALIZE );
        gl.glEnable( GL.GL_BLEND );
        gl.glBlendFunc( GL.GL_SRC_ALPHA, GL.GL_ONE_MINUS_SRC_ALPHA );

        // set up the background color
        gl.glClearColor( ((float)backColor.getRed  () / 255.0f),
                         ((float)backColor.getGreen() / 255.0f),
                         ((float)backColor.getBlue () / 255.0f),
                         1.0f);

        gl.glEnable ( GL.GL_LIGHTING );
        gl.glLightfv( GL.GL_LIGHT0, GL.GL_AMBIENT, Constants.AMBIENT_LIGHT, 0 );
        gl.glLightfv( GL.GL_LIGHT0, GL.GL_DIFFUSE, Constants.DIFFUSE_LIGHT, 0 );
        gl.glEnable ( GL.GL_LIGHT0 );

        gl.glTexEnvf( GL.GL_TEXTURE_ENV, GL.GL_TEXTURE_ENV_MODE, GL.GL_MODULATE );
        gl.glHint( GL.GL_PERSPECTIVE_CORRECTION_HINT, GL.GL_NICEST );

        // code to generate model

    Is there any way around this other than removing everything from init() and adding it to my display() function?

    Read the article

  • SPSS - sum of squares change radically with slight model changes in ANOVA??

    - by Pat
    I have noticed that the sum of squares in my models can change fairly radically with even the slightest adjustment to my models. Is this normal? I'm using SPSS 16, and both models presented below used the same data and variables, with only one small change - categorizing one of the variables as either a 2-level or 3-level variable.

    Details - using a 2 x 2 x 6 mixed model ANOVA with the 6 being the repeated measure, I get the following in the between-group analysis:

        ------------------------------------------------------------
        Source    | Type III SS | df | MS      | F      | Sig
        ------------------------------------------------------------
        intercept | 4086.46     | 1  | 4086.46 | 104.93 | .000
        X         | 224.61      | 1  | 224.61  | 5.77   | .019
        Y         | 2.60        | 1  | 2.60    | .07    | .80
        X by Y    | 19.25       | 1  | 19.25   | .49    | .49
        Error     | 2570.40     | 66 | 38.95   |        |
        ------------------------------------------------------------

    Then, when I use the exact same data but a slightly different model in which variable Y has 3 levels instead of 2, I get the following:

        ------------------------------------------------------------
        Source    | Type III SS | df | MS      | F      | Sig
        ------------------------------------------------------------
        intercept | 3603.88     | 1  | 3603.88 | 90.89  | .000
        X         | 171.89      | 1  | 171.89  | 4.34   | .041
        Y         | 19.23       | 2  | 9.62    | .24    | .79
        X by Y    | 17.90       | 2  | 17.90   | .80    | .80
        Error     | 2537.76     | 64 | 39.65   |        |
        ------------------------------------------------------------

    I don't understand why variable X would have a different sum of squares simply because variable Y gets divided up into 3 levels instead of 2. This is also the case in the within-groups analysis. Please help me understand :D Thank you in advance Pat

    Read the article

  • rails + paperclip: Is a generic "Attachment" model a good idea?

    - by egarcia
    In my application I've got several things with attachments on them, using Paperclip. Clients have one logo. Stores can have one or more pictures. These pictures, in addition, can have other information such as the date on which they were taken. Products can have one or more pictures of them, categorized (from the front, from the back, etc). For now, each of my models has its own "paperclip fields" (Client has_attached_file) or has_many models that have attached files (Store has_many StorePictures, Product has_many ProductPictures). My client has also told me that in the future we might be adding more attachments to the system (i.e. PDF documents for the clients to download). My application has a rather complex authorization system implemented with declarative_authorization. One cannot, for example, download pictures from a product he's not allowed to 'see'. I'm considering refactoring my code so I can have a generic "Attachment" model, so that any model can has_many :attachments. With this context, does it sound like a good idea? Or should I continue making Foos and FooPictures?

    Read the article

  • XSL(like) declarative language as MVC view over strongtyped model?

    - by Martin Kool
    As a huge XSL fan, I am very happy to use xsl as the view in our proprietary MVC framework on ASP.NET. Objects in the model are serialized under the hood using .NET's xml serializer, and we use quite atomic xsl templates to declare how each object or property should transform. For example: <xsl:template match="/Article"> <html> <body> <div class="article"> <xsl:apply-templates /> </div> </body> </html> </xsl:template> <xsl:template match="Article/Title"> <h1> <xsl:apply-templates /> </h1> </xsl:template> <xsl:template match="@*|text()"> <xsl:copy /> </xsl:template> This mechanism allows us to quickly override default matching templates, like having a template matching on the last item in a list, or the selected one, etc. Also, xsl extension objects in .NET allow us just the bit of extra grip that we need. Common shared templates can be split up and included. However Even though I can ignore the verbosity downside of xsl (because Visual Studio schema intellisense + snippets really is slick, praise to the VS-team), the downside of not having intellisense over strongtyped objects in the model is really something that's bugging me. I've seen ASP.NET MVC + user controls in action and really starting to love it, but I wonder; Is there a way of getting some sort of intellisense over XML that we're iterating over, or do you know of a language that offers the freedom and declarativeness of XSL but has the strongtype/intellisense benefits of say webforms/usercontrols/asp.net.mvc-view? (I probably know the answer: "no", and I'll find myself using Phil Haack's utterly cool mvc shizzle soon...)

    Read the article

  • Rails - HABTM Relationship -- How Can I Find A Record Based On An Attribute Of The Associated Model

    - by ChrisWesAllen
    I have set up this HABTM relationship in the past and it's worked before... now it isn't working, and I'm at my wit's end trying to figure out what's wrong. I've been looking through the Rails guides all day and can't seem to figure out what I'm doing wrong, so help would really be appreciated. I have 2 models connected through a join table, and I'm trying to find records based on an attribute of the associated model.

        # Event.rb
        has_and_belongs_to_many :interests

        # Interest.rb
        has_and_belongs_to_many :events

    and a join table migration that was created like:

        create_table 'events_interests', :id => false do |t|
          t.column :event_id, :integer
          t.column :interest_id, :integer
        end

    I tried

        @events = Event.all(:include => :interest, :conditions => [" interest.id = ?", 4 ])

    but got the error "Association named 'interest' was not found; perhaps you misspelled it?"... which I didn't, of course. I then tried

        @events = Event.interests.find(:all, :conditions => [" interest.id = ?", 4 ])

    but got the error "undefined method `interests' for #Class:0x4383348". How can I find the Events that have an interest id of 4? I'm definitely going bald from this lol

    Read the article

  • E: Sub-process /usr/bin/dpkg returned an error code (1) seems to be choking on kde-runtime-data version issue

    - by BMT
    12.04 LTS, on a dell mini 10. Install stable until about a week ago. Updated about 1x a week, sometimes more often. Several days ago, I booted up and the system was no longer working correctly. All these symptoms occurred simultaneously: Cannot run (exit on opening, every time): Update manager, software center, ubuntuOne, libreOffice. Vinagre autostarts on boot, no explanation, not set to startup with Ubuntu. Using apt-get to fix install results in the following: maura@pandora:~$ sudo apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following package was automatically installed and is no longer required: libtelepathy-farstream2 Use 'apt-get autoremove' to remove them. The following extra packages will be installed: gwibber gwibber-service kde-runtime-data software-center Suggested packages: gwibber-service-flickr gwibber-service-digg gwibber-service-statusnet gwibber-service-foursquare gwibber-service-friendfeed gwibber-service-pingfm gwibber-service-qaiku unity-lens-gwibber The following packages will be upgraded: gwibber gwibber-service kde-runtime-data software-center 4 upgraded, 0 newly installed, 0 to remove and 39 not upgraded. 20 not fully installed or removed. Need to get 0 B/5,682 kB of archives. After this operation, 177 kB of additional disk space will be used. Do you want to continue [Y/n]? debconf: Perl may be unconfigured (Can't locate Scalar/Util.pm in @INC (@INC contains: /etc/perl /usr/local/lib/perl/5.14.2 /usr/local/share/perl/5.14.2 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.14 /usr/share/perl/5.14 /usr/local/lib/site_perl .) at /usr/lib/perl/5.14/Hash/Util.pm line 9. BEGIN failed--compilation aborted at /usr/lib/perl/5.14/Hash/Util.pm line 9. Compilation failed in require at /usr/share/perl/5.14/fields.pm line 122. Compilation failed in require at /usr/share/perl5/Debconf/Log.pm line 10. Compilation failed in require at (eval 1) line 4. BEGIN failed--compilation aborted at (eval 1) line 4. ) -- aborting (Reading database ... 242672 files and directories currently installed.) Preparing to replace gwibber 3.4.1-0ubuntu1 (using .../gwibber_3.4.2-0ubuntu1_i386.deb) ... Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... 
Could not find platform dependent libraries <exec_prefix>
Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>]
Traceback (most recent call last):
  File "/usr/bin/pyclean", line 25, in <module>
    import logging
ImportError: No module named logging
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook
    from apport.fileutils import likely_packaged, get_recent_crashes
  File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module>
    from apport.report import Report
  File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module>
    from xml.parsers.expat import ExpatError
  File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module>
    from pyexpat import *
ImportError: No module named pyexpat
Original exception was:
Traceback (most recent call last):
  File "/usr/bin/pyclean", line 25, in <module>
    import logging
ImportError: No module named logging
(The same pyclean/pycompile tracebacks repeat for every package below.)
dpkg: error processing /var/cache/apt/archives/gwibber_3.4.2-0ubuntu1_i386.deb (--unpack):
 subprocess new pre-removal script returned error exit status 1
dpkg: error while cleaning up:
 subprocess installed post-installation script returned error exit status 1
Preparing to replace gwibber-service 3.4.1-0ubuntu1 (using .../gwibber-service_3.4.2-0ubuntu1_all.deb) ...
dpkg: warning: subprocess old pre-removal script returned error exit status 1
dpkg - trying script from the new package instead ...
dpkg: error processing /var/cache/apt/archives/gwibber-service_3.4.2-0ubuntu1_all.deb (--unpack):
 subprocess new pre-removal script returned error exit status 1
Preparing to replace kde-runtime-data 4:4.8.3-0ubuntu0.1 (using .../kde-runtime-data_4%3a4.8.4-0ubuntu0.1_all.deb) ...
Unpacking replacement kde-runtime-data ...
dpkg: error processing /var/cache/apt/archives/kde-runtime-data_4%3a4.8.4-0ubuntu0.1_all.deb (--unpack):
 trying to overwrite '/usr/share/sounds', which is also in package sound-theme-freedesktop 0.7.pristine-2
dpkg-deb (subprocess): subprocess data was killed by signal (Broken pipe)
dpkg-deb: error: subprocess <decompress> returned error exit status 2
Preparing to replace python-crypto 2.4.1-1 (using .../python-crypto_2.4.1-1_i386.deb) ...
dpkg: error processing /var/cache/apt/archives/python-crypto_2.4.1-1_i386.deb (--unpack):
 subprocess new pre-removal script returned error exit status 1
No apport report written because MaxReports is reached already
Preparing to replace software-center 5.2.2.2 (using .../software-center_5.2.4_all.deb) ...
dpkg: error processing /var/cache/apt/archives/software-center_5.2.4_all.deb (--unpack):
 subprocess new pre-removal script returned error exit status 1
No apport report written because MaxReports is reached already
Preparing to replace xdiagnose 2.5 (using .../archives/xdiagnose_2.5_all.deb) ...
dpkg: error processing /var/cache/apt/archives/xdiagnose_2.5_all.deb (--unpack):
 subprocess new pre-removal script returned error exit status 1
No apport report written because MaxReports is reached already
Errors were encountered while processing:
 /var/cache/apt/archives/gwibber_3.4.2-0ubuntu1_i386.deb
 /var/cache/apt/archives/gwibber-service_3.4.2-0ubuntu1_all.deb
 /var/cache/apt/archives/kde-runtime-data_4%3a4.8.4-0ubuntu0.1_all.deb
 /var/cache/apt/archives/python-crypto_2.4.1-1_i386.deb
 /var/cache/apt/archives/software-center_5.2.4_all.deb
 /var/cache/apt/archives/xdiagnose_2.5_all.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • Pie Charts Just Don't Work When Comparing Data - Number 10 of Top 10 Reasons to Never Ever Use a Pie

    - by Tony Wolfram
    When comparing data, which is what a pie chart is for, people have a hard time judging the angles and areas of the multiple pie slices in order to calculate how much bigger one slice is than the others. Pie Charts Don't Work A slice of pie is good for serving up a portion of dessert. It's not good for making a judgement about how big the slice is, what percentage of 100 it is, or how it compares to other slices. People have trouble comparing angles and areas to each other. Controlled studies show that people will overestimate the percentage that a pie slice area represents. This is because we have trouble calculating the area based on the space between the two angles that define the slice. This picture shows how a pie chart is useless in determining the largest value when you have to compare pie slices. You can't compare angles and slice areas to each other. Human perception and cognition are poor when viewing angles and areas and trying to make a mental comparison. Pie charts overload the working memory, forcing the person to make complicated calculations, and at the same time make a decision based on those comparisons. What's the point of showing a pie chart when you want to compare data, except to say, "well, the slices are almost the same, but I'm not really sure which one is bigger, or by how much, or what order they are from largest to smallest. But the colors sure are pretty. Plus, I like round things. Oh, was I supposed to make some important business decision? Sorry." Bad Choices and Bad Decisions Interaction Designers, Graphic Artists, Report Builders, Software Developers, and Executives have all made the decision to use pie charts in their reports, software applications, and dashboards. It was a bad decision. It was a poor choice. There are always better options and choices, yet the designer still made the decision to use a pie chart. I'll explore why people make such poor choices in my upcoming blog entries. (Hint: It has more to do with emotions than with analytical thinking.) I've outlined my opinions and arguments about the evils of using pie charts in "Countdown of Top 10 Reasons to Never Ever Use a Pie Chart." Each of my next 10 blog entries will support these arguments with illustrations, examples, and references to studies. But my goal is not to continuously and endlessly rage against the evils of using pie charts. This blog is not about pie charts. This blog is about understanding why designers choose to use a pie chart. Why, when given better alternatives, and acknowledging the shortcomings of pie charts, do designers over and over again still freely choose to place a pie chart in a report? As an extra treat and parting shot, check out the nice pie chart that Wikipedia uses to illustrate the United States population by state. Remember, somebody chose to use this pie chart, with all its glorious colors, and post it on Wikipedia for all the world to see. My next blog will give you a better alternative for displaying comparable data - the sorted bar chart.

    Read the article

  • To SYNC or not to SYNC – Part 4

    - by AshishRay
    This is Part 4 of a multi-part blog article where we are discussing various aspects of setting up Data Guard synchronous redo transport (SYNC). In Part 1 of this article, I debunked the myth that Data Guard SYNC is similar to a two-phase commit operation. In Part 2, I discussed the various ways that network latency may or may not impact a Data Guard SYNC configuration. In Part 3, I talked in detail about why Data Guard SYNC is a good thing, and the distance implications you have to keep in mind. In this final article of the series, I will talk about how you can nicely complement Data Guard SYNC with the ability to fail over in seconds. Wait - Did I Say “Seconds”? Did I just say that some customers do Data Guard failover in seconds? Yes, Virginia, there is a Santa Claus. Data Guard has an automatic failover capability, aptly called Fast-Start Failover. Initially available with Oracle Database 10g Release 2 for Data Guard SYNC transport mode (and enhanced in Oracle Database 11g to support Data Guard ASYNC transport mode), this capability, managed by Data Guard Broker, lets your Data Guard configuration automatically fail over to a designated standby database. Yes, this means no human intervention is required to do the failover. This process is controlled by a low-footprint Data Guard Broker client called Observer, which makes sure that the primary database and the designated standby database are behaving like good kids. If something bad were to happen to the primary database, the Observer, after a configurable threshold period, tells that standby, “Your time has come, you are the chosen one!” The standby dutifully follows the Observer's directives by assuming the role of the new primary database. The DBA or the Sys Admin doesn’t need to be involved. And - in case you are following this discussion very closely, and are wondering … “Hmmm … what if the old primary is not really dead, but just network isolated from the Observer or the standby - won’t this lead to a split-brain situation?” The answer is No - It Doesn’t. With respect to why-it-doesn’t, I am sure there are some smart DBAs in the audience who can explain the technical reasons. Otherwise - that will be the material for a future blog post. So - this combination of SYNC and Fast-Start Failover is the nirvana of lights-out, integrated HA and DR, as practiced by some of our advanced customers. They have observed failover times (with no data loss) ranging from single-digit seconds to tens of seconds. With this, they support operations in industry verticals such as manufacturing, retail, telecom, Internet, etc. that have the most demanding availability requirements. One of our leading customers with massive cloud deployment initiatives tells us that they know about server failures only after Data Guard has automatically completed the failover process and the app is back up and running! Needless to say, Data Guard Broker has the integration hooks for interfaces such as JDBC and OCI, or even for custom apps, to ensure the application gets automatically rerouted to the new primary database after the database-level failover completes. Net Net? To sum up this multi-part blog article, Data Guard with SYNC redo transport mode, plus Fast-Start Failover, gives you the ideal triple-combo - that is, it gives you the assurance that for critical outages, you can fail over your Oracle databases: very fast without human intervention, and without losing any data. In short, it takes the element of risk out of critical IT operations. 
It does require you to be more careful with your network and systems planning, but as far as HA is concerned, the benefits outweigh the investment costs. So, this is what we in the MAA Development Team believe in. What do you think? How has your deployment experience been? We look forward to hearing from you!
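    For readers who want to see roughly what this looks like in practice, here is a minimal sketch of enabling SYNC transport plus Fast-Start Failover from DGMGRL, the Data Guard Broker command-line client. The database names 'chicago' (primary) and 'boston' (standby) and the 30-second threshold are illustrative assumptions, not values taken from this article; adapt them to your own Broker configuration.

        DGMGRL> EDIT DATABASE 'chicago' SET PROPERTY LogXptMode='SYNC';
        DGMGRL> EDIT DATABASE 'boston' SET PROPERTY LogXptMode='SYNC';
        DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MaxAvailability;
        DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverThreshold = 30;
        DGMGRL> ENABLE FAST_START FAILOVER;
        DGMGRL> SHOW FAST_START FAILOVER;
        DGMGRL> START OBSERVER;

    The first two commands put both databases in SYNC redo transport mode, the protection mode is raised to MaxAvailability so the failover can be zero data loss, and the threshold is the number of seconds the Observer waits after losing contact with the primary before it initiates the automatic failover. START OBSERVER should be issued from a DGMGRL session on a third host, separate from both the primary and the standby, so the Observer can arbitrate between them rather than go down with either one.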

    Read the article

< Previous Page | 493 494 495 496 497 498 499 500 501 502 503 504  | Next Page >