Search Results

Search found 14816 results on 593 pages for 'logical model'.


  • Dealing with missing messages in JavaScript when using BOSH

    - by JamieD
    We recently went into private beta on our flagship product and had a small launch event. Unfortunately the venue had a terrible wireless connection and packets were being dropped left, right and centre, causing havoc with our system — basically it wasn't able to work at all! Luckily we were able to switch to a different network and rescue the demo. This highlighted something that I knew was already an issue, but I hadn't appreciated quite how much of an issue it could be.

    Our system relies heavily on BOSH and has a rather large JavaScript code base which now works rather well under good network conditions. However, we need to make it work well under bad network conditions too. Because XMPP is a fire-and-forget protocol, it's not easy to tell whether a message you sent, or were supposed to receive, was actually sent or received. For instance, we have an offer system: one user sends an offer to another over BOSH. When this message is received by the server, a message is published to the offering user's offers_sent PEP node and a similar message to the receiving user's offers_received PEP node. While the sending user can tell (relatively) easily whether their offer was sent, if the notification to the receiving user is never received, that user will never know they missed a message.

    A little about our JavaScript setup — it has 4 main layers:

    - StropheJS
    - An MVC framework for dealing with low-level tasks and to build on top of
    - An application layer which contains the app logic (routes, controllers, models etc.) as well as a browser cache of the model data
    - A UI layer that receives events from and publishes events to the application layer

    One way to solve the missing-messages issue would be to periodically check the PEP nodes for new data that the browser doesn't know about. If a new message was discovered, the browser's cache would be invalidated and all new data would be requested from the server. I'm not sure this is the best way to go, and it also doesn't cover all situations. We certainly don't want to get into the situation where we send messages to confirm the previous message was received at its destination, as this would double the network traffic.

    With the number of real-time websites growing daily, this is an issue that must have been encountered by other developers; it would be interesting to see how others have solved it. As far as I can see, there are two situations in which messages go missing:

    - On poor connections, messages are not sent or received because packets are dropped.
    - When navigating between pages: a message is received by the browser but is not fully processed and stored in the local cache before the page is unloaded, or a message is added to the send queue but never sent before the page is unloaded.

    I suspect the hardest issue to solve will be number 2. Any thoughts on the subject would be much appreciated.

    Read the article

  • java.util.Map with HtmlDataTable

    - by gerry
    Hi, I'm developing an application on GlassFish v3 which uses Sun's RI of Java EE 6 and JSF 2.0, etc. The bad thing is that no switch away from Sun's RI can be made (to MyFaces or something like that). Now, the problem is that I want to build an HtmlDataTable by hand (in Java code). The datatable should represent a java.util.Map, where the first column displays the keys and the second the values of the map. I've successfully built a PanelGrid from a java.util.List, using the "setValueExpression" methods of UIComponent each time to bind the UI to the underlying List. But this doesn't work with the Map. Here is a snippet of my code:

        public HtmlDataTable getEntityDetailsDataTable() {
            ...
            Application app = FacesContext.getCurrentInstance().getApplication();
            HtmlDataTable component = (HtmlDataTable) app.createComponent(HtmlDataTable.COMPONENT_TYPE);
            component.setValueExpression("value", ExpressionUtil.createValueExpression(
                    "#{entityTree.entity." + fieldName + ".entrySet()}", Map.class));
            component.setVar("param");

            UIColumn column = new UIColumn();
            UIOutput label1 = DynamicHtmlComponentCreator.createHtmlOutputText("#{param[key]}", String.class);
            column.getChildren().add(label1);
            UIOutput label2 = DynamicHtmlComponentCreator.createHtmlOutputText("#{param[value]}", String.class);
            column.getChildren().add(label2);
            component.getChildren().add(column);
            ...
            return component;
        }

    Further, the problem is that this code only prints out the content of the Map. On another page I need the values displayed in HtmlInputText elements, and the whole map updated when the user clicks e.g. a "Save" button. If there is a workaround to represent the Map as two Lists, please help me, because for this (map as 2 lists) I have no idea how the underlying map/database model could be updated again. Hopefully, someone can help me...
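
    One workaround, sketched here under the assumption that the backing bean can expose a derived property (the bean and property names below are hypothetical, not from the post): JSF's UIData iterates Lists, arrays and DataModels but not Maps or Sets, so binding entrySet() directly never behaves like a table. Exposing the entries as a List gives the datatable something it can iterate, and Map.Entry.setValue() writes straight through to the underlying map:

        import java.util.ArrayList;
        import java.util.LinkedHashMap;
        import java.util.List;
        import java.util.Map;

        public class EntityTreeBean {

            // Backing map; LinkedHashMap keeps a stable row order for the table.
            private final Map<String, String> entity = new LinkedHashMap<String, String>();

            // UIData cannot iterate a Map or a Set, so expose the entries
            // as a List. Map.Entry.setValue() writes through to the map,
            // which is what makes the HtmlInputText case work on save.
            public List<Map.Entry<String, String>> getEntityEntries() {
                return new ArrayList<Map.Entry<String, String>>(entity.entrySet());
            }

            public Map<String, String> getEntity() {
                return entity;
            }
        }

    The datatable's value expression would then point at something like #{entityTree.entityEntries}, with columns bound to #{param.key} and #{param.value} (the latter as HtmlInputText on the editable page), so a "Save" action sees the updated map without converting it to two Lists.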

    Read the article

  • Rails Joins and include columns from joins table

    - by seth.vargo
    I don't understand how to get the columns I want from Rails. I have two models — a User and a Profile. A User has_many :profiles (because users can revert back to an earlier version of their profile):

        > DESCRIBE users;
        +----------------+--------------+------+-----+---------+----------------+
        | Field          | Type         | Null | Key | Default | Extra          |
        +----------------+--------------+------+-----+---------+----------------+
        | id             | int(11)      | NO   | PRI | NULL    | auto_increment |
        | username       | varchar(255) | NO   | UNI | NULL    |                |
        | password       | varchar(255) | NO   |     | NULL    |                |
        | last_login     | datetime     | YES  |     | NULL    |                |
        +----------------+--------------+------+-----+---------+----------------+

        > DESCRIBE profiles;
        +----------------+--------------+------+-----+---------+----------------+
        | Field          | Type         | Null | Key | Default | Extra          |
        +----------------+--------------+------+-----+---------+----------------+
        | id             | int(11)      | NO   | PRI | NULL    | auto_increment |
        | user_id        | int(11)      | NO   | MUL | NULL    |                |
        | first_name     | varchar(255) | NO   |     | NULL    |                |
        | last_name      | varchar(255) | NO   |     | NULL    |                |
        | . . .          | . . .        |      |     |         |                |
        +----------------+--------------+------+-----+---------+----------------+

    In SQL, I can run the query:

        > SELECT * FROM profiles JOIN users ON profiles.user_id = users.id LIMIT 1;
        +----+-----------+----------+---------------------+---------+---------------+-----+
        | id | username  | password | last_login          | user_id | first_name    | ... |
        +----+-----------+----------+---------------------+---------+---------------+-----+
        | 1  | john      | ******   | 2010-12-30 18:04:28 | 1       | John          | ... |
        +----+-----------+----------+---------------------+---------+---------------+-----+

    See how I get all the columns for BOTH tables JOINED together? However, when I run this same query in Rails, I don't get all the columns I want — I only get those from Profile:

        # in rails console
        >> p = Profile.joins(:user).limit(1)
        >> [#<Profile ...>]
        >> p.first_name
        >> NoMethodError: undefined method `first_name' for #<ActiveRecord::Relation:0x102b521d0>
             from /Library/Ruby/Gems/1.8/gems/activerecord-3.0.1/lib/active_record/relation.rb:373:in `method_missing'
             from (irb):8

        # I do NOT want to do this (AKA I do NOT want to use "includes")
        >> p.user
        >> NoMethodError: undefined method `user' for #<ActiveRecord::Relation:0x102b521d0>
             from /Library/Ruby/Gems/1.8/gems/activerecord-3.0.1/lib/active_record/relation.rb:373:in `method_missing'
             from (irb):9

    I want to (efficiently) return an object that has all the properties of Profile and User together. I don't want to :include the user because it doesn't make sense. The user should always be part of the most recent profile, as if its fields were part of the Profile model. How do I accomplish this?

    Read the article

  • Setting default radio button on edit

    - by DTown
    So I'm trying to set up scaffolding to use radio buttons for the format field. Adding a new entry and editing both work; the problem is that when I go to edit an entry, the correct radio button isn't selected by default.

        <% form_for(@cinema) do |f| %>
          <%= f.error_messages %>
          <p>
            <%= f.label :title %><br />
            <%= f.text_field :title %>
          </p>
          <p>
            <%= f.label :director %><br />
            <%= f.text_field :director %>
          </p>
          <p>
            <%= f.label :release_date %><br />
            <%= f.date_select :release_date, :start_year => 1900, :end_year => 2010 %>
          </p>
          <p>
            <%= f.label :running_time %><br />
            <%= f.text_field :running_time %>
          </p>
          <p>
            <%= f.label :format %><br />
            <%= f.radio_button :format, "black & white" %> <%= label :format_bw, "Black & White" %>
            <%= f.radio_button :format, "color" %> <%= label :format_color, "Color" %>
          </p>
          <p>
            <%= f.submit 'Create' %>
          </p>
        <% end %>

    Controller:

        def edit
          @cinema = Cinema.find(params[:id])
        end

    Model:

        class Cinema < ActiveRecord::Base
          validates_presence_of :title, :on => :create
          validates_presence_of :title, :on => :update
          #
          validates_presence_of :director, :on => :create
          validates_presence_of :director, :on => :update
          #
          validates_presence_of :release_date, :on => :create
          validates_presence_of :release_date, :on => :update
          #
          validates_presence_of :format, :on => :create
          validates_presence_of :format, :on => :update
          #
          validates_presence_of :running_time, :on => :create
          validates_presence_of :running_time, :on => :update
          validates_numericality_of :running_time, :on => :create, :on => :update,
            :less_than_or_equal_to => 300, :greater_than => 0
        end

    Read the article

  • Initialization of ComboBox in datagrid, Silverlight 4.0

    - by Budda
    I have a datagrid whose ItemsSource is bound to a list of MyPlayer objects. There are ComboBoxes inside the grid that are bound to a list on an inner object, and the binding works correctly: when I select one of the items, its value is pushed to the data model and updated appropriately in the other places where it is used. The only problem: the initial selections are not displayed in my ComboBoxes. I don't know why..? An instance of the ViewModel is assigned to the view's DataContext. Here is the grid with the ComboBoxes (the grid is bound to the SquadPlayers property of the ViewModel):

        <data:DataGrid ="True" AutoGenerateColumns="False" ItemsSource="{Binding SquadPlayers}">
          <data:DataGrid.Columns>
            <data:DataGridTemplateColumn Header="Rig." Width="50">
              <data:DataGridTemplateColumn.CellTemplate>
                <DataTemplate>
                  <ComboBox SelectedItem="{Binding Rigid, Mode=TwoWay}"
                            ItemsSource="{Binding IntLevels, Mode=TwoWay}"/>
                </DataTemplate>
              </data:DataGridTemplateColumn.CellTemplate>
            </data:DataGridTemplateColumn>
          </data:DataGrid.Columns>
        </data:DataGrid>

    Here is the ViewModel class (the '_model_DataReceivedEvent' method is called asynchronously, when data are received from the server):

        public class SquadViewModel : ViewModelBase<SquadModel>
        {
            public SquadViewModel()
            {
                SquadPlayers = new ObservableCollection<SquadPlayer>();
            }

            private void _model_DataReceivedEvent(List<SostavPlayerData> allReadyPlayers)
            {
                TeamTask task = new TeamTask { Rigid = 1 };
                foreach (SostavPlayerData spd in allReadyPlayers)
                {
                    SquadPlayer sp = new SquadPlayer(spd, task);
                    SquadPlayers.Add(sp);
                }
                RaisePropertyChanged("SquadPlayers");
            }
        }

    And here is the SquadPlayer class (its objects are bound to the grid rows):

        public class SquadPlayer : INotifyPropertyChanged
        {
            public SquadPlayer(SostavPlayerData spd)
            {
                _spd = spd;
                Rigid = 2;
            }

            public event PropertyChangedEventHandler PropertyChanged;

            private int _rigid;
            public int Rigid
            {
                get { return _rigid; }
                set
                {
                    _rigid = value;
                    if (PropertyChanged != null)
                    {
                        PropertyChanged(this, new PropertyChangedEventArgs("Rigid"));
                    }
                }
            }

            private readonly ObservableCollection<int> _statIntLevels =
                new ObservableCollection<int> { 1, 2, 3, 4, 5 };
            public ObservableCollection<int> IntLevels
            {
                get { return _statIntLevels; }
            }
        }

    All "Rigid" ComboBoxes are expected to have the value "2" selected, but they are not (the items are in the drop-down list, and if any value is selected, it does reach the ViewModel). What is wrong with this example? Any help will be welcome. Thanks.

    Read the article

  • Any way to speed up this hierarchical query?

    - by RenderIn
    I've got a serious performance problem with a hierarchical query that I can't seem to fix. I am modeling several organization charts in my database, each representing a virtual organization within our company. For example, we have several temporary committees that are created from time to time; there may be a Committee Organizer role at the top of such a virtual hierarchy, with several people assigned to the Committee Member role beneath the organizer. Some of our virtual organizations have many levels and several branches at each level.

    I have a single table in which I represent all the role assignments, i.e. a ROLE_ID column and a PARENT_ROLE_ID column which is a foreign key to the ROLE_ID column. For each assignment we also store, as a column, the location in the company where this person has the assignment. For example, the Committee Organizer would have a company-level/CEO assignment, while the committee members would have department-level assignments such as ACCOUNTING, MARKETING, etc. So to model the organizer/member relationship for two individuals we would have:

        ROLE_ID = 4
        PARENT_ROLE_ID = NULL
        EMPLOYEE_NUMBER = 213423
        COMPANY_LOCATION = CEO

        ROLE_ID = 5
        PARENT_ROLE_ID = 4
        EMPLOYEE_NUMBER = 838221
        COMPANY_LOCATION = ACCOUNTING

    Here's where things get tricky. I have an application that every person in the organization can log in to. When they log in, they should be able to view all the virtual organizations in our company — e.g. the committee members should be able to see the committee organizer and vice versa. However, only the committee organizer should be able to edit the committee members. The difficulty is in determining whether an individual (who can have multiple role assignments) has edit access for each other assignment. While this seems simple in the example, consider a virtual organization with a President at the top, 5 departments directly beneath him, and 2 subdepartments below each department. We only want people in the Accounting department to be able to edit individuals in the subdepartments belonging to the Accounting department. They should not have edit access to anybody in the Marketing department or its subdepartments.

    To determine edit access when a user views a virtual organization in our company, I run a query that executes two inline views: A) hierarchically query for all assignments in this virtual organization, using SYS_CONNECT_BY_PATH to store the entire path to each user/role/company_location, and B) hierarchically retrieve all the assignments the logged-in individual has, again using SYS_CONNECT_BY_PATH to store the entire path to each of these assignments. The result of the query is all the records from A), plus a boolean — determined by joining with B) — which flags whether the logged-in user has edit access for each record.

    Indexes don't seem to be helping... it simply appears that there is too much processing going on to separate all the records and then determine edit access. One issue is that I can't store the SYS_CONNECT_BY_PATH and index it... determining whether an individual record has edit access consists of comparing:

        test_record_sys_path LIKE individual_record_sys_path || '%'

    Is a materialized view the answer?

    Read the article

  • Html.RadioButtonListFor problem

    - by ognjenb
        <% using (Html.BeginForm("Numbers", "Numbers", FormMethod.Post)) { %>
          <table id="numbers">
            <tr>
              <th>prvi_br</th>
              <th>drugi_br</th>
              <th>treci_br</th>
            </tr>
            <% int rb = 1; %>
            <% foreach (var item in Model) { %>
              <tr>
                <td>
                  <%= Html.Encode(item.prvi_br) %>
                  <input type="radio" name="<%= Html.Encode(rb) %>" value="<%= Html.Encode(rb) %>" />
                </td>
                <td>
                  <%= Html.Encode(item.drugi_br) %>
                  <input type="radio" name="<%= Html.Encode(rb) %>" value="<%= Html.Encode(rb) %>" />
                </td>
                <td>
                  <%= Html.Encode(item.treci_br) %>
                  <input type="radio" name="<%= Html.Encode(rb) %>" value="<%= Html.Encode(rb) %>" />
                </td>
              </tr>
              <% rb++; %>
            <% } %>
          </table>
          <p>
            <input type="submit" value="Save" />
          </p>
        <% } %>

    How do I post this form with only one checked radio button? In my case it is possible to check all 3 radio buttons. How do I restrict it so that only one radio can be checked? In this article I found good solutions, but they cannot be applied because I have a table.

    Read the article

  • How do I serialize a large graph of .NET objects into a SQL Server BLOB without creating a large buffer?

    - by Ian Ringrose
    We have code like:

        ms = New IO.MemoryStream
        bin = New System.Runtime.Serialization.Formatters.Binary.BinaryFormatter
        bin.Serialize(ms, largeGraphOfObjects)
        dataToSaveToDatabase = ms.ToArray()
        ' put dataToSaveToDatabase in a SQL Server BLOB

    But the memory stream allocates a large buffer from the large memory heap, and that is giving us problems. So how can we stream the data without needing enough free memory to hold the serialized objects? I am looking for a way to get a Stream from SQL Server that can then be passed to bin.Serialize(), so avoiding keeping all the data in my process's memory. Likewise for reading the data back...

    Some more background: this is part of a complex numerical processing system that processes data in near real time, looking for equipment problems etc. The serialization is done to allow a restart when there is a problem with data quality from a data feed etc. (We store the data feeds and can rerun them after the operator has edited out bad values.) Therefore we serialize the object a lot more often than we de-serialize it.

    The objects we are serializing include very large arrays, mostly of doubles, as well as a lot of small "more normal" objects. We are pushing the memory limit on a 32-bit system and making the garbage collector work very hard. (Efforts are being made elsewhere in the system to improve this, e.g. reusing large arrays rather than creating new ones.) Often the serialization of the state is the last straw that causes an out-of-memory exception; our peak memory usage occurs while this serialization is being done. I think we get large memory pool fragmentation when we de-serialize the object; I expect there are also other problems with large memory pool fragmentation given the size of the arrays. (This has not yet been investigated, as the person who first looked at this is a numerical processing expert, not a memory management expert.)

    Our customers use a mix of SQL Server 2000, 2005 and 2008, and we would rather not have different code paths for each version of SQL Server if possible. We can have many active models at a time (in different processes, across many machines); each model can have many saved states. Hence the saved state is stored in a database BLOB rather than a file. As the speed of saving the state is important, I would rather not serialize the object to a file and then put the file in a BLOB one block at a time.

    Other related questions I have asked:

    - How to Stream data from/to SQL Server BLOB fields?
    - Is there a SqlFileStream like class that works with Sql Server 2005?

    Read the article

  • How to convert a 32bpp image to an indexed format?

    - by Ed Swangren
    So here are the details (I am using C#, BTW): I receive a 32bpp image (JPEG compressed) from a server. At some point, I would like to use the Palette property of a bitmap to color over-saturated pixels (brightness > 240) red. To do so, I need to get the image into an indexed format. I have tried converting the image to a GIF, but I get quality loss. I have tried creating a new bitmap in an indexed format by these methods:

        // causes a "Parameter not valid" error
        Bitmap indexed = new Bitmap(orig.Width, orig.Height, PixelFormat.Indexed);

        // no error, but the resulting image is black
        // due to information loss, I assume
        Bitmap indexed = new Bitmap(orig.Width, orig.Height, PixelFormat.Format8bppIndexed);

    I am at a loss now. The data in this image is changed constantly by the user, so I don't want to manually set pixels that have a brightness > 240 if I can avoid it. If I can set the palette once when the image is created, my work is done. If I am going about this the wrong way to begin with, please let me know.

    EDIT: Thanks guys, here is some more detail on what I am attempting to accomplish. We are scanning a tissue slide at high resolution (pathology application). I write the interface to the actual scanner. We use a line-scan camera. To test the line rate of the camera, the user scans a very small portion and looks at the image. The image is displayed next to a track bar. When the user moves the track bar (adjusting the line rate), I change the overall intensity of the image in an attempt to model what it would look like at the new line rate. I currently do this using an ImageAttributes and a ColorMatrix object: when the user adjusts the track bar, I adjust the matrix. This does not give me per-pixel information, but the performance is very nice. I could use LockBits and some unsafe code here, but I would rather not rewrite it if possible. When the new image is created, I would like all pixels with a brightness value over 240 to be colored red. I was thinking that defining a palette for the bitmap up front would be a clean way of doing this.

    Read the article

  • R glm standard error estimate differences to SAS PROC GENMOD

    - by Michelle
    I am converting a SAS PROC GENMOD example into R, using glm in R. The SAS code was:

        proc genmod data=data0 namelen=30;
          model boxcoxy=boxcoxxy ~ AGEGRP4 + AGEGRP5 + AGEGRP6 + AGEGRP7 + AGEGRP8
                + RACE1 + RACE3 + WEEKEND + SEQ / dist=normal;
          FREQ REPLICATE_VAR;
        run;

    My R code is:

        parmsg2 <- glm(boxcoxxy ~ AGEGRP4 + AGEGRP5 + AGEGRP6 + AGEGRP7 + AGEGRP8
                       + RACE1 + RACE3 + WEEKEND + SEQ,
                       data = data0, family = gaussian, weights = REPLICATE_VAR)

    When I use summary(parmsg2) I get the same coefficient estimates as in SAS, but my standard errors are wildly different. The summary output from SAS is:

        Name      df Estimate   StdErr    LowerWaldCL UpperWaldCL ChiSq     ProbChiSq
        Intercept 1   6.5007436 .00078884  6.4991975   6.5022897  67911982  0
        agegrp4   1   .64607262 .00105425  .64400633   .64813891  375556.79 0
        agegrp5   1   .4191395  .00089722  .41738099   .42089802  218233.76 0
        agegrp6   1  -.22518765 .00083118 -.22681672  -.22355857  73401.113 0
        agegrp7   1  -1.7445189 .00087569 -1.7462352  -1.7428026  3968762.2 0
        agegrp8   1  -2.2908855 .00109766 -2.2930369  -2.2887342  4355849.4 0
        race1     1  -.13454883 .00080672 -.13612997  -.13296769  27817.29  0
        race3     1  -.20607036 .00070966 -.20746127  -.20467944  84319.131 0
        weekend   1   .0327884  .00044731  .0319117    .03366511  5373.1931 0
        seq2      1  -.47509583 .00047337 -.47602363  -.47416804  1007291.3 0
        Scale     1   2.9328613 .00015586  2.9325559   2.9331668  -127

    The summary output from R is:

        Coefficients:
                     Estimate Std. Error t value  Pr(>|t|)
        (Intercept)  6.50074  0.10354     62.785  < 2e-16
        AGEGRP4      0.64607  0.13838      4.669  3.07e-06
        AGEGRP5      0.41914  0.11776      3.559  0.000374
        AGEGRP6     -0.22519  0.10910     -2.064  0.039031
        AGEGRP7     -1.74452  0.11494    -15.178  < 2e-16
        AGEGRP8     -2.29089  0.14407    -15.901  < 2e-16
        RACE1       -0.13455  0.10589     -1.271  0.203865
        RACE3       -0.20607  0.09315     -2.212  0.026967
        WEEKEND      0.03279  0.05871      0.558  0.576535
        SEQ         -0.47510  0.06213     -7.646  2.25e-14

    The importance of the difference in the standard errors is that the SAS coefficients are all statistically significant, but the RACE1 and WEEKEND coefficients in the R output are not. I have found a formula to calculate the Wald confidence intervals in R, but this is pointless given the difference in the standard errors, as I will not get the same results. Apparently SAS uses a ridge-stabilized Newton-Raphson algorithm for its estimates, which are ML. The information I read about the glm function in R is that the results should be equivalent to ML. What can I do to change my estimation procedure in R so that I get the equivalent coefficients and standard error estimates that were produced in SAS?

    To update: thanks to Spacedman's answer, I used weights because the data are from individuals in a dietary survey, and REPLICATE_VAR is a balanced repeated replication weight that is an integer (and quite large, on the order of 1000s or 10000s). The website that describes the weight is here. I don't know why the FREQ rather than the WEIGHT command was used in SAS. I will now test by expanding the number of observations using REPLICATE_VAR and rerunning the analysis.

    Read the article

  • Web-Frameworks for Education Management Systems?

    - by Indebi
    So, I'm working on an idea — I'll give a brief overview of it below — but my question is: what are some good web frameworks for this situation? I have some experience in the following languages:

    - C#
    - Python

    I have considerably more experience in C# than Python; however, I expect to learn new things.

    My idea is this: a completely web-based, community-oriented Education Management System that focuses on making students' and teachers' day-to-day lives easier. For students it will provide a centralized place to do homework, study for tests, and reinforce concepts learned previously in class. For teachers it will give a centralized place to handle assignments, attendance, homework, tests, and all other major parts of classroom management.

    All of that, but in a community-oriented fashion. Everything a teacher does is shared and open to constructive criticism, allowing other teachers to use their assignments/tests and letting students or other teachers comment on, rate and critique their assignments. This encourages an environment of openness that will allow teachers to focus on teaching and students to focus on learning. And that community wouldn't be limited to one school or school district — this system would be completely school-independent.

    Please note that I have no problem hearing constructive criticism of this idea; however, I would prefer this post to be more focused on my question. I have somewhat explored the following options:

    1. Django
    2. ASP.NET
    3. Ruby on Rails
    4. Silverlight

    (1) I have Django installed and played with it a little. I really like how easy it makes setting up databases, and how it handles the database completely for you. I don't really know how to use it very well yet, and I don't quite understand its Model-View-Controller paradigm, but I haven't thought about it much. I also like the fact that it uses Python.

    (2) I don't really like Visual Studio for developing in ASP.NET — I hate the way the web designer works; it just feels clunky and old. I like the server-side development part, though. I don't like how expensive ASP.NET and Visual Studio in general are, even if I do get them for free for now using DreamSpark.

    (3) I haven't been able to explore this much; I could not get Rails (or maybe Ruby) properly installed. I first installed it within RadRails and that didn't work, so I uninstalled RadRails and installed the latest version of Ruby from the official Windows installer, then installed Ruby on Rails through gem — and even after all that it still didn't work. So I installed NetBeans and attempted to use it there, but it still did not work.

    (4) I like Silverlight to some extent; I've played with this one the most. It's very similar to WPF (which I've used the most) in a lot of ways, but I don't like how database connectivity works, at least in comparison to Django. I also dislike how expensive everything from Microsoft is, even if I get it for free for now with DreamSpark.

    I would like to hear some suggestions from experienced web developers as to what I should use and why, or at least what some good options are for my scenario. Your help would be very appreciated.

    Read the article

  • How to diagnose failing 6Gbps SATA connection?

    - by whitequark
    I have a Samsung RC530 notebook and an OCZ Vertex 3 6Gbps SATA SSD working in AHCI mode.

        # dmesg | grep DMI
        SAMSUNG ELECTRONICS CO., LTD. RC530/RC730/RC530/RC730, BIOS 03WD.M008.20110927.PSA 09/27/2011
        # lspci -nn
        00:1f.2 SATA controller [0106]: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller [8086:1c03] (rev 04)
        # sdparm -a /dev/sda
        /dev/sda: ATA OCZ-VERTEX3 2.15

    At boot, the following messages are present in dmesg (I am running Debian wheezy @ Linux 3.2.8):

        # dmesg | grep -iE '(ata|ahci)'
        [    5.179783] ahci 0000:00:1f.2: version 3.0
        [    5.179802] ahci 0000:00:1f.2: PCI INT B -> GSI 19 (level, low) -> IRQ 19
        [    5.179864] ahci 0000:00:1f.2: irq 42 for MSI/MSI-X
        [    5.195424] ahci 0000:00:1f.2: AHCI 0001.0300 32 slots 6 ports 6 Gbps 0x5 impl SATA mode
        [    5.195429] ahci 0000:00:1f.2: flags: 64bit ncq sntf pm led clo pio slum part ems apst
        [    5.195436] ahci 0000:00:1f.2: setting latency timer to 64
        [    5.204035] scsi0 : ahci
        [    5.204301] scsi1 : ahci
        [    5.204447] scsi2 : ahci
        [    5.204592] scsi3 : ahci
        [    5.204682] scsi4 : ahci
        [    5.204799] scsi5 : ahci
        [    5.204917] ata1: SATA max UDMA/133 abar m2048@0xf7c06000 port 0xf7c06100 irq 42
        [    5.204920] ata2: DUMMY
        [    5.204923] ata3: SATA max UDMA/133 abar m2048@0xf7c06000 port 0xf7c06200 irq 42
        [    5.204924] ata4: DUMMY
        [    5.204926] ata5: DUMMY
        [    5.204927] ata6: DUMMY
        [    5.523039] ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
        [    5.525911] ata3.00: ATAPI: TSSTcorp CDDVDW SN-208BB, SC00, max UDMA/100
        [    5.531006] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
        [    5.533703] ata3.00: configured for UDMA/100
        [    5.542790] ata1.00: ATA-8: OCZ-VERTEX3, 2.15, max UDMA/133
        [    5.542800] ata1.00: 117231408 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
        [    5.552751] ata1.00: configured for UDMA/133
        [    5.553050] scsi 0:0:0:0: Direct-Access ATA OCZ-VERTEX3 2.15 PQ: 0 ANSI: 5
        [    5.559621] scsi 2:0:0:0: CD-ROM TSSTcorp CDDVDW SN-208BB SC00 PQ: 0 ANSI: 5
        [    5.564059] sd 0:0:0:0: [sda] 117231408 512-byte logical blocks: (60.0 GB/55.8 GiB)
        [    5.564127] sd 0:0:0:0: [sda] Write Protect is off
        [    5.564131] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
        [    5.564158] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
        [    5.564582] sda: sda1
        [    5.564810] sd 0:0:0:0: [sda] Attached SCSI disk
        [    5.572006] sr0: scsi3-mmc drive: 16x/24x writer dvd-ram cd/rw xa/form2 cdda tray
        [    5.572010] cdrom: Uniform CD-ROM driver Revision: 3.20
        [    5.572189] sr 2:0:0:0: Attached scsi CD-ROM sr0
        [    6.717181] ata1.00: exception Emask 0x50 SAct 0x1 SErr 0x280900 action 0x6 frozen
        [    6.717238] ata1.00: irq_stat 0x08000000, interface fatal error
        [    6.717291] ata1: SError: { UnrecovData HostInt 10B8B BadCRC }
        [    6.717342] ata1.00: failed command: READ FPDMA QUEUED
        [    6.717395] ata1.00: cmd 60/50:00:20:39:58/00:00:00:00:00/40 tag 0 ncq 40960 in
        [    6.717396]          res 40/00:00:20:39:58/00:00:00:00:00/40 Emask 0x50 (ATA bus error)
        [    6.717503] ata1.00: status: { DRDY }
        [    6.717553] ata1: hard resetting link
        [    7.033417] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
        [    7.055234] ata1.00: configured for UDMA/133
        [    7.055262] ata1: EH complete
        [    7.147280] ata1.00: exception Emask 0x10 SAct 0xf8 SErr 0x280100 action 0x6 frozen
        [    7.147340] ata1.00: irq_stat 0x08000000, interface fatal error
        [    7.147393] ata1: SError: { UnrecovData 10B8B BadCRC }
        [    7.147460] ata1.00: failed command: READ FPDMA QUEUED
        [    7.147529] ata1.00: cmd 60/08:18:88:17:41/00:00:02:00:00/40 tag 3 ncq 4096 in
        [    7.147531]          res 40/00:38:50:99:64/00:00:02:00:00/40 Emask 0x10 (ATA bus error)
        [    7.147691] ata1.00: status: { DRDY }
        [    7.147754] ata1.00: failed command: READ FPDMA QUEUED
        [    7.147821] ata1.00: cmd 60/00:20:f8:42:4c/01:00:02:00:00/40 tag 4 ncq 131072 in
        [    7.147822]          res 40/00:38:50:99:64/00:00:02:00:00/40 Emask 0x10 (ATA bus error)
        [    7.147977] ata1.00: status: { DRDY }
        [    7.148036] ata1.00: failed command: READ FPDMA QUEUED
        [    7.148100] ata1.00: cmd 60/50:28:f8:43:4c/00:00:02:00:00/40 tag 5 ncq 40960 in
        [    7.148101]          res 40/00:38:50:99:64/00:00:02:00:00/40 Emask 0x10 (ATA bus error)
        [    7.148255] ata1.00: status: { DRDY }
        [    7.148315] ata1.00: failed command: READ FPDMA QUEUED
        [    7.148379] ata1.00: cmd 60/00:30:50:98:64/01:00:02:00:00/40 tag 6 ncq 131072 in
        [    7.148380]          res 40/00:38:50:99:64/00:00:02:00:00/40 Emask 0x10 (ATA bus error)
        [    7.148534] ata1.00: status: { DRDY }
        [    7.148593] ata1.00: failed command: READ FPDMA QUEUED
        [    7.148657] ata1.00: cmd 60/00:38:50:99:64/01:00:02:00:00/40 tag 7 ncq 131072 in
        [    7.148658]          res 40/00:38:50:99:64/00:00:02:00:00/40 Emask 0x10 (ATA bus error)
        [    7.148813] ata1.00: status: { DRDY }
        [    7.148875] ata1: hard resetting link
        [    7.464842] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
        [    7.486794] ata1.00: configured for UDMA/133
        [    7.486822] ata1: EH complete
        [    7.546395] ata1.00: exception Emask 0x10 SAct 0x2f SErr 0x280100 action 0x6 frozen
        [    7.546470] ata1.00: irq_stat 0x08000000, interface fatal error
        [    7.546531] ata1: SError: { UnrecovData 10B8B BadCRC }
        [    7.546588] ata1.00: failed command: READ FPDMA QUEUED
        [    7.546648] ata1.00: cmd 60/00:00:e0:4b:61/01:00:02:00:00/40 tag 0 ncq 131072 in
        [    7.546649]          res 40/00:28:e0:4c:61/00:00:02:00:00/40 Emask 0x10 (ATA bus error)
        [    7.546794] ata1.00: status: { DRDY }
        [    7.546847] ata1.00: failed command: READ FPDMA QUEUED
        [    7.546906] ata1.00: cmd 60/00:08:90:2f:48/01:00:02:00:00/40 tag 1 ncq 131072 in
        [    7.546907]          res 40/00:28:e0:4c:61/00:00:02:00:00/40 Emask 0x10 (ATA bus error)
        [    7.547053] ata1.00: status: { DRDY }
        [    7.547106] ata1.00: failed command: READ FPDMA QUEUED
        [    7.547165] ata1.00: cmd 60/00:10:90:30:48/01:00:02:00:00/40 tag 2 ncq 131072 in
        [    7.547166]          res 40/00:28:e0:4c:61/00:00:02:00:00/40 Emask 0x10 (ATA bus error)
        [    7.547310] ata1.00: status: { DRDY }
        [    7.547363] ata1.00: failed command: READ FPDMA QUEUED
        [    7.547422] ata1.00: cmd 60/00:18:50:c7:64/01:00:02:00:00/40 tag 3 ncq 131072 in
        [    7.547423]          res 40/00:28:e0:4c:61/00:00:02:00:00/40 Emask 0x10 (ATA bus error)
        [    7.547568] ata1.00: status: { DRDY }
        [    7.547621] ata1.00: failed command: READ FPDMA QUEUED
        [    7.547681] ata1.00: cmd 60/00:28:e0:4c:61/01:00:02:00:00/40 tag 5 ncq 131072 in
        [    7.547682]          res 40/00:28:e0:4c:61/00:00:02:00:00/40 Emask 0x10 (ATA bus error)
        [    7.547825] ata1.00: status: { DRDY }
        [    7.547882] ata1: hard resetting link
        [    7.864408] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
        [    7.886351] ata1.00: configured for UDMA/133
        [    7.886375] ata1: EH complete
        [    7.890012] ata1: limiting SATA link speed to 3.0 Gbps
        [    7.890016] ata1.00: exception Emask 0x10 SAct 0x7 SErr 0x280100 action 0x6 frozen
        [    7.890093] ata1.00: irq_stat 0x08000000, interface fatal error
        [    7.890152] ata1: SError: { UnrecovData 10B8B BadCRC }
        [    7.890210] ata1.00: failed command: READ FPDMA QUEUED
        [    7.890272] ata1.00: cmd 60/00:00:90:33:48/01:00:02:00:00/40 tag 0 ncq 131072 in
        [    7.890273]          res 40/00:10:e0:4f:61/00:00:02:00:00/40 Emask 0x10 (ATA bus error)
        [    7.890418] ata1.00: status: { DRDY }
        [    7.890472] ata1.00: failed command: READ FPDMA QUEUED
        [    7.890530] ata1.00: cmd 60/00:08:90:34:48/01:00:02:00:00/40 tag 1 ncq 131072 in
        [    7.890531]          res 40/00:10:e0:4f:61/00:00:02:00:00/40 Emask 0x10 (ATA bus error)
        [    7.890672] ata1.00: status: { DRDY }
        [    7.890724] ata1.00: failed command: READ FPDMA QUEUED
        [    7.890781] ata1.00: cmd 60/78:10:e0:4f:61/00:00:02:00:00/40 tag 2 ncq 61440 in
        [    7.890782]          res 40/00:10:e0:4f:61/00:00:02:00:00/40 Emask 0x10 (ATA bus error)
        [    7.890925] ata1.00: status: { DRDY }
        [    7.890981] ata1: hard resetting link
        [    8.208021] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
        [    8.230100] ata1.00: configured for UDMA/133
        [    8.230124] ata1: EH complete

    It looks like the SATA interface tries to use the 6Gbps link, then fails miserably, and Linux falls back to 3Gbps. This is somewhat fine for me, as the system boots successfully each time and works under high load (cd linux-3.2.8; make -j16). I've also run memtest86+ and it did not find any errors.

    What concerns me more is that GRUB sometimes takes a long time to load the images and/or fails to load itself completely. The failure is consistent and probabilistic: that is, each time I boot, there is a certain chance it will fail. Actually, I have a slight suspicion about the cause of the failure. Look at the cabling: what kind of engineer does it this way? Even 1Gbps Ethernet hardly tolerates cables bent at a sharp angle, and there you have 6Gbps SATA.

    How could I determine and fix the cause of the errors, and/or switch the link to 3Gbps mode permanently?

    Read the article

  • hibernate3-maven-plugin: entities in different Maven projects, hbm2ddl fails

    - by Mike
    I'm trying to put an entity in a different Maven project. In the current project I have:

        @Entity
        public class User {
            ...
            private FacebookUser facebookUser;
            ...
            public FacebookUser getFacebookUser() {
                return facebookUser;
            }
            ...
            public void setFacebookUser(FacebookUser facebookUser) {
                this.facebookUser = facebookUser;
            }
        }

    Then FacebookUser (in a different Maven project, which is a dependency of the current project) is defined as:

        @Entity
        public class FacebookUser {
            ...
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            public Long getId() {
                return id;
            }
        }

    Here is my hibernate3-maven-plugin configuration:

        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>hibernate3-maven-plugin</artifactId>
          <version>2.2</version>
          <executions>
            <execution>
              <phase>process-classes</phase>
              <goals>
                <goal>hbm2ddl</goal>
              </goals>
            </execution>
          </executions>
          <configuration>
            <components>
              <component>
                <name>hbm2ddl</name>
                <implementation>jpaconfiguration</implementation>
              </component>
            </components>
            <componentProperties>
              <ejb3>false</ejb3>
              <persistenceunit>Default</persistenceunit>
              <outputfilename>schema.ddl</outputfilename>
              <drop>false</drop>
              <create>true</create>
              <export>false</export>
              <format>true</format>
            </componentProperties>
          </configuration>
        </plugin>

    Here is the error I'm getting:

        org.hibernate.MappingException: Could not determine type for:
        com.xxx.facebook.model.FacebookUser, at table: user,
        for columns: [org.hibernate.mapping.Column(facebook_user)]

    I know that FacebookUser is on the classpath, because the project compiles fine if I make facebookUser transient:

        @Transient
        public FacebookUser getFacebookUser() {
            return facebookUser;
        }
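
    In case it helps narrow this down: "Could not determine type for" is usually Hibernate's way of saying the property looks like a plain value rather than an association, because no relationship annotation is present on the getter. A minimal sketch of a mapping that hbm2ddl can handle — the @ManyToOne choice and the column name are assumptions, not from the original post:

        import javax.persistence.Entity;
        import javax.persistence.FetchType;
        import javax.persistence.JoinColumn;
        import javax.persistence.ManyToOne;

        @Entity
        public class User {

            // id and other fields elided, as in the post.
            private FacebookUser facebookUser;

            // Declared as an association, so hbm2ddl can emit a foreign-key
            // column instead of failing to guess a basic type for the
            // FacebookUser class.
            @ManyToOne(fetch = FetchType.LAZY)
            @JoinColumn(name = "facebook_user_id")
            public FacebookUser getFacebookUser() {
                return facebookUser;
            }

            public void setFacebookUser(FacebookUser facebookUser) {
                this.facebookUser = facebookUser;
            }
        }

    Whether @ManyToOne or @OneToOne is right depends on the intended cardinality; the split across Maven projects only matters in that both entity classes must be visible to the persistence unit the plugin scans.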

    Read the article

  • Understanding SingleTableEntityPersister and QueryLoader

    - by Iapilgrim
    Hi, I have this Hibernate model:

        @Cache(usage = CacheConcurrencyStrategy.NONE, region = SitesConstants.CACHE_REGION)
        public class Node extends StatefulEntity implements Inheritable, Cloneable {
            private Node _parent;
            private List<Node> _childNodes;
            ..
        }

        @Cache(usage = CacheConcurrencyStrategy.NONE, region = SitesConstants.CACHE_REGION)
        public class Page extends Node implements Defaultable, Securable {
            private RootZone _rootZone;
            ......

            @OneToOne(fetch = FetchType.LAZY)
            @JoinColumn(name = "root_zone_id", insertable = false, updatable = false)
            public RootZone getRootZone() {
                return _rootZone;
            }

            public void setRootZone(RootZone rootZone) {
                if (rootZone != null) {
                    rootZone.setPageId(this.getId());
                    _rootZone = rootZone;
                }
            }
        }

    I want to get all pages (call getSiteTree), so I use this query:

        String hpql = "SELECT n FROM Node n ";

    In the stack trace I find:

        Page.setRootZone(RootZone) line: 155
        NativeMethodAccessorImpl.invoke0(Method, Object, Object[]) line: not available [native method]
        NativeMethodAccessorImpl.invoke(Object, Object[]) line: 39
        DelegatingMethodAccessorImpl.invoke(Object, Object[]) line: 25
        Method.invoke(Object, Object...) line: 597
        BasicPropertyAccessor$BasicSetter.set(Object, Object, SessionFactoryImplementor) line: 66
        PojoEntityTuplizer(AbstractEntityTuplizer).setPropertyValues(Object, Object[]) line: 352
        PojoEntityTuplizer.setPropertyValues(Object, Object[]) line: 232
        SingleTableEntityPersister(AbstractEntityPersister).setPropertyValues(Object, Object[], EntityMode) line: 3580
        TwoPhaseLoad.initializeEntity(Object, boolean, SessionImplementor, PreLoadEvent, PostLoadEvent) line: 152
        QueryLoader(Loader).initializeEntitiesAndCollections(List, Object, SessionImplementor, boolean) line: 877
        QueryLoader(Loader).doQuery(SessionImplementor, QueryParameters, boolean) line: 752
        QueryLoader(Loader).doQueryAndInitializeNonLazyCollections(SessionImplementor, QueryParameters, boolean) line: 259
        QueryLoader(Loader).doList(SessionImplementor, QueryParameters) line: 2232
        QueryLoader(Loader).listIgnoreQueryCache(SessionImplementor, QueryParameters) line: 2129
        QueryLoader(Loader).list(SessionImplementor, QueryParameters, Set, Type[]) line: 2124
        QueryLoader.list(SessionImplementor, QueryParameters) line: 401
        QueryTranslatorImpl.list(SessionImplementor, QueryParameters) line: 363
        HQLQueryPlan.performList(QueryParameters, SessionImplementor) line: 196
        SessionImpl.list(String, QueryParameters) line: 1149
        QueryImpl.list() line: 102
        QueryImpl.getResultList() line: 67
        NodeDaoImpl.getSiteTree(long) line: 358
        PageNodeServiceImpl.getSiteTree(long) line: 797
        NativeMethodAccessorImpl.invoke0(Method, Object, Object[]) line: not available [native method]
        NativeMethodAccessorImpl.invoke(Object, Object[]) line: 39
        DelegatingMethodAccessorImpl.invoke(Object, Object[]) line: 25
        Method.invoke(Object, Object...) line: 597
        AopUtils.invokeJoinpointUsingReflection(Object, Method, Object[]) line: 307
        JdkDynamicAopProxy.invoke(Object, Method, Object[]) line: 198
        $Proxy100.getSiteTree(long) line: not available

    Calling setRootZone in Page makes Hibernate issue a query against the database, which I don't want. So my questions are:

    - Why does the query "SELECT n FROM Node n" produce the unexpected trace above?
    - Why doesn't the query "SELECT n.nodename FROM Node n"?
    - What is the mechanism behind this?

    Note: I'm using Hibernate level-2 caching. In case I don't want to see those trace logs — I mean, I just want to get the Node data only — how do I do that? Thanks for your help. Sorry for my bad English :( Van
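
    A plausible reading of the trace, offered as a sketch rather than a confirmed answer: "SELECT n FROM Node n" hydrates whole entities, so Hibernate calls every property setter while populating each Page (the TwoPhaseLoad/setPropertyValues frames) — including setRootZone, whose call to rootZone.setPageId(getId()) touches the lazy RootZone proxy and forces the extra SELECT. Projection queries like "SELECT n.nodename FROM Node n" return scalars and never run setters. One way around it, assuming field access is acceptable for this entity:

        import javax.persistence.Access;
        import javax.persistence.AccessType;
        import javax.persistence.Entity;
        import javax.persistence.FetchType;
        import javax.persistence.JoinColumn;
        import javax.persistence.OneToOne;

        @Entity
        @Access(AccessType.FIELD) // Hibernate writes the field directly on load,
                                  // bypassing the side-effecting setter below
        public class Page extends Node {

            @OneToOne(fetch = FetchType.LAZY)
            @JoinColumn(name = "root_zone_id", insertable = false, updatable = false)
            private RootZone _rootZone;

            public RootZone getRootZone() {
                return _rootZone;
            }

            // Now runs only when application code calls it, not during hydration.
            public void setRootZone(RootZone rootZone) {
                if (rootZone != null) {
                    rootZone.setPageId(getId());
                    _rootZone = rootZone;
                }
            }
        }

    With property access (annotations on getters, as in the original mapping), Hibernate has no choice but to run the side-effecting setter for every row it loads.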

    Read the article

  • No exception, no error, but still I don't receive the JSON object from my HTTP POST

    - by user2978538
    My source code:

        final Thread t = new Thread() {
            public void run() {
                Looper.prepare();
                HttpClient client = new DefaultHttpClient();
                HttpConnectionParams.setConnectionTimeout(client.getParams(), 10000);
                HttpResponse response;
                JSONObject obj = new JSONObject();
                try {
                    HttpPost post = new HttpPost("http://pc.dyndns-office.com/mobile.asp");
                    obj.put("Model", ReadIn1);
                    obj.put("Product", ReadIn2);
                    obj.put("Manufacturer", ReadIn3);
                    obj.put("RELEASE", ReadIn4);
                    obj.put("SERIAL", ReadIn5);
                    obj.put("ID", ReadIn6);
                    obj.put("ANDROID_ID", ReadIn7);
                    obj.put("Language", ReadIn8);
                    obj.put("BOARD", ReadIn9);
                    obj.put("BOOTLOADER", ReadIn10);
                    obj.put("BRAND", ReadIn11);
                    obj.put("CPU_API", ReadIn12);
                    obj.put("DISPLAY", ReadIn13);
                    obj.put("FINGERPRINT", ReadIn14);
                    obj.put("HARDWARE", ReadIn15);
                    obj.put("UUID", ReadIn16);

                    StringEntity se = new StringEntity(obj.toString());
                    se.setContentType(new BasicHeader(HTTP.CONTENT_TYPE, "application/json"));
                    post.setEntity(se);
                    post.setHeader("host", "http://pc.dyndns-office.com/mobile.asp");
                    response = client.execute(post);
                    if (response != null) {
                        InputStream in = response.getEntity().getContent();
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
                Looper.loop();
            }
        };
        t.start();

    I want to send a JSON object to a website. As far as I can see, I set the header, but I still get this exception — can someone help me? (I'm using Android Studio.)

    Edit: I don't get any exceptions anymore, but I still don't receive the JSON packet. When I call the website manually, I get a log-file entry. Does anyone know what's wrong?

    Edit 2: When I debug, I get the response "HTTP/1.1 400 Bad Request". I'm sure it's not a permissions problem. Any ideas?
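
    A guess at the 400, sketched below rather than verified against this server: the Host header is supposed to carry only a host name (plus optional port), and Apache HttpClient already fills it in from the request URI, so overriding it with a full URL is a classic way to earn "HTTP/1.1 400 Bad Request". The helper name postJson is made up for the sketch:

        import java.io.IOException;

        import org.apache.http.HttpResponse;
        import org.apache.http.client.HttpClient;
        import org.apache.http.client.methods.HttpPost;
        import org.apache.http.entity.StringEntity;
        import org.apache.http.impl.client.DefaultHttpClient;
        import org.apache.http.message.BasicHeader;
        import org.apache.http.protocol.HTTP;
        import org.apache.http.util.EntityUtils;
        import org.json.JSONObject;

        class JsonPoster {
            // obj is the JSONObject built in the original run() method.
            static String postJson(JSONObject obj) throws IOException {
                HttpClient client = new DefaultHttpClient();
                HttpPost post = new HttpPost("http://pc.dyndns-office.com/mobile.asp");

                StringEntity se = new StringEntity(obj.toString(), "UTF-8");
                se.setContentType(new BasicHeader(HTTP.CONTENT_TYPE, "application/json"));
                post.setEntity(se);
                // No post.setHeader("host", ...): HttpClient derives
                // "Host: pc.dyndns-office.com" from the request URI itself.

                HttpResponse response = client.execute(post);
                return EntityUtils.toString(response.getEntity()); // the server's reply
            }
        }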

    Read the article

  • Is it possible to gzip and upload this string to Amazon S3 without ever writing it to disk?

    - by BigJoe714
    I know this is probably possible using streams, but I wasn't sure of the correct syntax. I would like to pass a string to the Save method and have it gzip the string and upload it to Amazon S3 without it ever being written to disk. The current method inefficiently reads/writes to disk in between. The S3 PutObjectRequest has a constructor with an InputStream input as an option.

        import java.io.*;
        import java.util.zip.GZIPOutputStream;

        import com.amazonaws.auth.PropertiesCredentials;
        import com.amazonaws.services.s3.AmazonS3;
        import com.amazonaws.services.s3.AmazonS3Client;
        import com.amazonaws.services.s3.model.PutObjectRequest;

        public class FileStore {

            public static void Save(String data) throws IOException {
                File file = File.createTempFile("filemaster-", ".htm");
                file.deleteOnExit();
                Writer writer = new OutputStreamWriter(new FileOutputStream(file));
                writer.write(data);
                writer.flush();
                writer.close();

                String zippedFilename = gzipFile(file.getAbsolutePath());
                File zippedFile = new File(zippedFilename);
                zippedFile.deleteOnExit();

                AmazonS3 s3 = new AmazonS3Client(new PropertiesCredentials(
                        new FileInputStream("AwsCredentials.properties")));
                String bucketName = "mybucket";
                String key = "test/" + zippedFile.getName();
                s3.putObject(new PutObjectRequest(bucketName, key, zippedFile));
            }

            public static String gzipFile(String filename) throws IOException {
                try {
                    // Create the GZIP output stream
                    String outFilename = filename + ".gz";
                    GZIPOutputStream out = new GZIPOutputStream(new FileOutputStream(outFilename));

                    // Open the input file
                    FileInputStream in = new FileInputStream(filename);

                    // Transfer bytes from the input file to the GZIP output stream
                    byte[] buf = new byte[1024];
                    int len;
                    while ((len = in.read(buf)) > 0) {
                        out.write(buf, 0, len);
                    }
                    in.close();

                    // Complete the GZIP file
                    out.finish();
                    out.close();
                    return outFilename;
                } catch (IOException e) {
                    throw e;
                }
            }
        }
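
    One way to keep the whole round trip in memory — a sketch, not tested against the poster's setup — is to gzip into a ByteArrayOutputStream and hand S3 the resulting bytes through the InputStream-based PutObjectRequest constructor, supplying the length via ObjectMetadata (S3 needs it up front when given a stream):

        import java.io.ByteArrayInputStream;
        import java.io.ByteArrayOutputStream;
        import java.io.IOException;
        import java.io.OutputStreamWriter;
        import java.io.Writer;
        import java.util.zip.GZIPOutputStream;

        import com.amazonaws.services.s3.AmazonS3;
        import com.amazonaws.services.s3.model.ObjectMetadata;
        import com.amazonaws.services.s3.model.PutObjectRequest;

        public class InMemoryFileStore {

            public static void save(AmazonS3 s3, String bucketName, String key, String data)
                    throws IOException {
                // Compress the string straight into an in-memory buffer.
                ByteArrayOutputStream buffer = new ByteArrayOutputStream();
                Writer writer = new OutputStreamWriter(new GZIPOutputStream(buffer), "UTF-8");
                writer.write(data);
                writer.close(); // flushes the writer and finishes the gzip stream

                byte[] bytes = buffer.toByteArray();
                ObjectMetadata meta = new ObjectMetadata();
                meta.setContentLength(bytes.length); // mandatory for stream uploads
                meta.setContentEncoding("gzip");

                s3.putObject(new PutObjectRequest(
                        bucketName, key, new ByteArrayInputStream(bytes), meta));
            }
        }

    The trade-off is that the compressed copy still lives in memory, but for a single string that is usually fine — and nothing ever touches the disk.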

    Read the article

  • Rails Google Maps integration Javascript problem

    - by JZ
    I'm working on Rails 3.0.0.beta2, following Advanced Rails Recipes (Recipe #32, "Mark locations on a Google Map"), and I hit a roadblock: I do not see a Google map. My adds view uses @adds.to_json to connect the Google Maps API with my model. My database contains "latitude" and "longitude" as floating-point columns, and the entire project can be accessed at GitHub. Can you see where I'm not connecting the to_json output with the JavaScript correctly? Can you see other glaring errors in my JavaScript? Thanks in advance!

    My application.js file:

        function initialize() {
          if (GBrowserIsCompatible() && typeof adds != 'undefined') {
            var map = new GMap2(document.getElementById("map"));
            map.setCenter(new GLatLng(37.4419, -122.1419), 13);
            map.addControl(new GLargeMapControl());

            function createMarker(latlng, add) {
              var marker = new GMarker(latlng);
              var html = "<strong>" + add.first_name + "</strong><br />" + add.address;
              GEvent.addListener(marker, "click", function() {
                map.openInfoWindowHtml(latlng, html);
              });
              return marker;
            }

            var bounds = new GLatLngBounds;
            for (var i = 0; i < adds.length; i++) {
              var latlng = new GLatLng(adds[i].latitude, adds[i].longitude)
              bounds.extend(latlng);
              map.addOverlay(createMarker(latlng, adds[i]));
            }
            map.setCenter(bounds.getCenter(), map.getBoundsZoomLevel(bounds));
          }
        }
        window.onload = initialize;
        window.onunload = GUnload;

    Layouts/adds.html.erb:

        <script src="http://maps.google.com/maps?file=api&amp;v=2&amp;sensor=true_or_false&amp;key=ABQIAAAAeH4ThRuftWNHlwYdvcK1QBTJQa0g3IQ9GZqIMmInSLzwtGDKaBQvZChl_y5OHf0juslJRNx7TbxK3Q"
                type="text/javascript"></script>
        <% if @adds -%>
          <script type="text/javascript">
            var maps = <%= @adds.to_json %>;
          </script>
        <% end -%>

    Read the article

  • Understanding Mongoid's stored_as array option

    - by Gagan
    Hello friends, this is not a problem report — I just want to understand Mongoid's :stored_as => :array option better. I have the following code in my models:

        class Company
          include Mongoid::Document
          include Mongoid::Timestamps
          references_many :people, :stored_as => :array, :inverse_of => :companies
        end

        class Person
          include Mongoid::Document
          include Sunspot::Mongoid
          references_many :companies, :stored_as => :array, :inverse_of => :people
        end

    As a result of :stored_as => :array we get a person_ids field on the Company object and a company_ids field on the Person object. Initially I inserted lots of people into a company, so the person_ids field grew huge. Then I deleted most of the people from the company, down to 8. But I don't understand why the person_ids field of the Company object still stores all the ids of the deleted people. My console snapshot follows:

        ruby-1.9.2-head
        Company.first.person_ids = [BSON::ObjectId('4d12d2907adf350695000025'), BSON::ObjectId('4d12d2907adf35069500002c'), BSON::ObjectId('4d12d2907adf350695000035'), BSON::ObjectId('4d12d2907adf35069500003f'), BSON::ObjectId('4d12d2907adf350695000048'), BSON::ObjectId('4d12d2907adf350695000052'), BSON::ObjectId('4d12d2907adf350695000059'), BSON::ObjectId('4d12d2907adf350695000062'), BSON::ObjectId('4d12d4017adf35069500008d'), BSON::ObjectId('4d12d4017adf350695000094'), BSON::ObjectId('4d12d4017adf35069500009d'), BSON::ObjectId('4d12d4017adf3506950000a7'), BSON::ObjectId('4d12d4017adf3506950000b0'), BSON::ObjectId('4d12d4017adf3506950000ba'), BSON::ObjectId('4d12d4017adf3506950000c1'), BSON::ObjectId('4d12d4017adf3506950000ca'), BSON::ObjectId('4d12d48a7adf3506950000f5'), BSON::ObjectId('4d12d48a7adf3506950000fc'), BSON::ObjectId('4d12d48a7adf350695000108'), BSON::ObjectId('4d12d48b7adf350695000115'), BSON::ObjectId('4d12d48b7adf350695000121'), BSON::ObjectId('4d12d48b7adf35069500012e'), BSON::ObjectId('4d12d48b7adf350695000135'), BSON::ObjectId('4d12d48b7adf350695000141'), BSON::ObjectId('4d12d53e7adf35069500016f'), BSON::ObjectId('4d12d53e7adf350695000176'), BSON::ObjectId('4d12d53e7adf350695000182'), BSON::ObjectId('4d12d53e7adf35069500018f'), BSON::ObjectId('4d12d53e7adf35069500019b'), BSON::ObjectId('4d12d53f7adf3506950001a8'), BSON::ObjectId('4d12d53f7adf3506950001af'), BSON::ObjectId('4d12d53f7adf3506950001bb'), BSON::ObjectId('4d12d8587adf3506950001e9'), BSON::ObjectId('4d12d8587adf3506950001f0'), BSON::ObjectId('4d12d8587adf3506950001ff'), BSON::ObjectId('4d12d8597adf35069500020f'), BSON::ObjectId('4d12d8597adf35069500021e'), BSON::ObjectId('4d12d8597adf35069500022e'), BSON::ObjectId('4d12d8597adf350695000235'), BSON::ObjectId('4d12d85a7adf350695000244'), BSON::ObjectId('4d12d9587adf35069500025b'), BSON::ObjectId('4d12db8b7adf35069500026a'), BSON::ObjectId('4d12de6f7adf3509c9000024'), BSON::ObjectId('4d12de6f7adf3509c900002b'), BSON::ObjectId('4d12de6f7adf3509c900003a'), BSON::ObjectId('4d12de707adf3509c900004a'), BSON::ObjectId('4d12de707adf3509c9000059'), BSON::ObjectId('4d12de707adf3509c9000069'), BSON::ObjectId('4d12de707adf3509c9000070'), BSON::ObjectId('4d12de717adf3509c900007f'), BSON::ObjectId('4d12e7f27adf350bd2000009'), BSON::ObjectId('4d12e81f7adf350bd2000015'), BSON::ObjectId('4d12e87f7adf350bd2000024'), BSON::ObjectId('4d12e8b87adf350bd200004c'), BSON::ObjectId('4d12e8b97adf350bd2000053'), BSON::ObjectId('4d12e8b97adf350bd200005c'), BSON::ObjectId('4d12e8b97adf350bd2000066'), BSON::ObjectId('4d12e8b97adf350bd200006f'), BSON::ObjectId('4d12e8b97adf350bd2000079'), BSON::ObjectId('4d12e8ba7adf350bd2000080'), BSON::ObjectId('4d12e8ba7adf350bd2000089'), BSON::ObjectId('4d12ee6b7adf350bd2000198'), BSON::ObjectId('4d12ee6b7adf350bd200019f'), BSON::ObjectId('4d12ee6c7adf350bd20001a5'), BSON::ObjectId('4d12ee6c7adf350bd20001ac'), BSON::ObjectId('4d12ee6c7adf350bd20001b2'), BSON::ObjectId('4d12ee6c7adf350bd20001b9'), BSON::ObjectId('4d12ee6c7adf350bd20001c0'), BSON::ObjectId('4d12ee6c7adf350bd20001c6'), BSON::ObjectId('4d141ca57adf35033e00006e'), BSON::ObjectId('4d141ca57adf35033e000075'), BSON::ObjectId('4d1420aa7adf350705000003'), BSON::ObjectId('4d1420aa7adf35070500000a'), BSON::ObjectId('4d1420f47adf350705000011'), BSON::ObjectId('4d1420f57adf350705000015'), BSON::ObjectId('4d1420f57adf350705000018'), BSON::ObjectId('4d1420f57adf35070500001c'), BSON::ObjectId('4d1420f57adf350705000023'), BSON::ObjectId('4d1420f57adf350705000026'), BSON::ObjectId('4d14215f7adf35070500004b'), BSON::ObjectId('4d14215f7adf350705000052'), BSON::ObjectId('4d14215f7adf350705000055'), BSON::ObjectId('4d14215f7adf350705000059'), BSON::ObjectId('4d14215f7adf35070500005c'), BSON::ObjectId('4d14215f7adf350705000060'), BSON::ObjectId('4d14215f7adf350705000067'), BSON::ObjectId('4d14215f7adf35070500006a')]

        Company.first.people.collect(&:id) = [BSON::ObjectId('4d14215f7adf35070500004b'), BSON::ObjectId('4d14215f7adf350705000052'), BSON::ObjectId('4d14215f7adf350705000055'), BSON::ObjectId('4d14215f7adf350705000059'), BSON::ObjectId('4d14215f7adf35070500005c'), BSON::ObjectId('4d14215f7adf350705000060'), BSON::ObjectId('4d14215f7adf350705000067'), BSON::ObjectId('4d14215f7adf35070500006a')]

    Shouldn't the Company.first.person_ids array store only the ids shown by Company.first.people.collect(&:id)? It would also be helpful if someone told me when it is best to use the :stored_as => :array method. Does :stored_as => :array increase query performance? Thanks.

    Read the article

  • Why is this Rails association loading individually after an eager load?

    - by codeman73
    I'm trying to avoid the N+1 queries problem with eager loading, but it's not working: the associated models are still being loaded individually. Here are the relevant ActiveRecord models and their relationships:

        class Player < ActiveRecord::Base
          has_one :tableau
        end

        class Tableau < ActiveRecord::Base
          belongs_to :player
          has_many :tableau_cards
          has_many :deck_cards, :through => :tableau_cards
        end

        class TableauCard < ActiveRecord::Base
          belongs_to :tableau
          belongs_to :deck_card, :include => :card
        end

        class DeckCard < ActiveRecord::Base
          belongs_to :card
          has_many :tableaus, :through => :tableau_cards
        end

        class Card < ActiveRecord::Base
          has_many :deck_cards
        end

    And the query I'm using is inside this method of Player:

        def tableau_contains(card_id)
          self.tableau.tableau_cards = TableauCard.find :all,
            :include => [{ :deck_card => (:card) }],
            :conditions => ['tableau_cards.tableau_id = ?', self.tableau.id]
          contains = false
          for tableau_card in self.tableau.tableau_cards
            # my logic here, looking at attributes of the Card model, with
            # tableau_card.deck_card.card;
            # individual loads of related Card models are done here
          end
          return contains
        end

    Does it have to do with scope? This tableau_contains method is a few method calls down inside a larger loop, where I originally tried doing the eager loading, because there are several places where these same objects are looped through and examined. Then I eventually tried the code as it is above, with the load just before the loop, and I'm still seeing the individual SELECT queries for Card inside the tableau_cards loop in the log. I can see the eager-loading query with the IN clause just before the tableau_cards loop as well.

    EDIT: additional info below with the larger, outer loop. It is inside an observer's after_save:

        def after_save(pa)
          @game = Game.find(turn.game_id, :include => :goals)
          @game.players = Player.find :all,
            :include => [{ :tableau => (:tableau_cards) }, :player_goals],
            :conditions => ['players.game_id = ?', @game.id]
          for player in @game.players
            player.tableau.tableau_cards = TableauCard.find :all,
              :include => [{ :deck_card => (:card) }],
              :conditions => ['tableau_cards.tableau_id = ?', player.tableau.id]
            if player.tableau_contains(card)
              ...
            end
          end
        end

    Read the article

  • Use jQuery to toggle disabled state with a radio button

    - by hbowman
    I want to toggle two radio buttons and select fields based on which radio button is selected. I have the jQuery working, but want to know if there is a way to make it more efficient — it seems like quite a few lines for the simple goal I am trying to achieve. Here are the requirements: when the page loads, #aircraftType should be checked and #aircraftModelSelect should be grayed out (right now, the "checked" is being ignored by Firefox). If the user clicks either #aircraftType or #aircraftModel, the opposite select field should become disabled (if #aircraftModel is checked, #aircraftTypeSelect should be disabled, and vice versa). Any help optimizing this code is appreciated. The code is up on jsFiddle too: http://jsfiddle.net/JuRKn/

        $("#aircraftType").attr("checked");
        $("#aircraftModel").removeAttr("checked");
        $("#aircraftModelSelect").attr("disabled","disabled").addClass("disabled");

        $("#aircraftType").click(function(){
          $("#aircraftModelSelect").attr("disabled","disabled").addClass("disabled");
          $("#aircraftTypeSelect").removeAttr("disabled").removeClass("disabled");
        });

        $("#aircraftModel").click(function(){
          $("#aircraftTypeSelect").attr("disabled","disabled").addClass("disabled");
          $("#aircraftModelSelect").removeAttr("disabled").removeClass("disabled");
        });

    HTML:

        <div class="aircraftType">
          <input type="radio" id="aircraftType" name="aircraft" checked />
          <label for="aircraftType">Aircraft Type</label>
          <select size="6" multiple="multiple" id="aircraftTypeSelect" name="aircraftType">
            <option value="">Light Jet</option>
            <option value="">Mid-Size Jet</option>
            <option value="">Super-Mid Jet</option>
            <option value="">Heavy Jet</option>
            <option value="">Turbo-Prop</option>
          </select>
        </div>
        <div class="aircraftModel">
          <input type="radio" id="aircraftModel" name="aircraft" />
          <label for="aircraftModel">Aircraft Model</label>
          <select size="6" multiple="multiple" id="aircraftModelSelect" name="aircraftModel">
            <option value="">Astra SP</option>
            <option value="">Beechjet 400</option>
            <option value="">Beechjet 400A</option>
            <option value="">Challenger 300</option>
            <option value="">Challenger 600</option>
            <option value="">Challenger 603</option>
            <option value="">Challenger 604</option>
            <option value="">Challenger 605</option>
            <option value="">Citation Bravo</option>
          </select>
        </div>

    Read the article

  • How to combine designable components with dependency injection

    - by Wim Coenen
    When creating a designable .NET component, you are required to provide a default constructor. From the IComponent documentation: "To be a component, a class must implement the IComponent interface and provide a basic constructor that requires no parameters or a single parameter of type IContainer." This makes it impossible to do dependency injection via constructor arguments. (Extra constructors could be provided, but the designer would ignore them.) Some alternatives we're considering:

    Service locator: don't use dependency injection; instead, use the service locator pattern to acquire dependencies. This seems to be what IComponent.Site.GetService is for. I guess we could create a reusable ISite implementation (ConfigurableServiceLocator?) which can be configured with the necessary dependencies. But how does this work in a designer context?

    Dependency injection via properties: inject dependencies via properties. Provide default instances if they are necessary to show the component in a designer. Document which properties need to be injected. A sketch of this alternative follows below.

    Inject dependencies with an Initialize method: this is much like injection via properties, but it keeps the list of dependencies that need to be injected in one place. This way the list of required dependencies is documented implicitly, and the compiler will assist you with errors when the list changes.

    Any idea what the best practice is here? How do you do it?

    edit: I have removed "(e.g. a WinForms UserControl)" since I intended the question to be about components in general. Components are all about inversion of control (see section 8.3.1 of the UMLv2 specification), so I don't think that "you shouldn't inject any services" is a good answer.

    edit 2: It took some playing with WPF and the MVVM pattern to finally "get" Mark's answer. I see now that visual controls are indeed a special case. As for using non-visual components on designer surfaces, I think the .NET component model is fundamentally incompatible with dependency injection. It appears to be designed around the service locator pattern instead. Maybe this will start to change with the infrastructure that was added in .NET 4.0 in the System.ComponentModel.Composition namespace.
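    For the property-injection alternative, a minimal C# sketch of what that could look like; ILogger, NullLogger, and MyComponent are hypothetical names introduced for illustration, not from the original post:

        using System.ComponentModel;

        public interface ILogger
        {
            void Log(string message);
        }

        // Safe default so the component still works on a designer surface.
        public sealed class NullLogger : ILogger
        {
            public void Log(string message) { /* intentionally empty */ }
        }

        public class MyComponent : Component
        {
            // Parameterless constructor stays designer-friendly because the
            // dependency is defaulted rather than required.
            private ILogger logger = new NullLogger();

            // Hidden from the property grid and designer serialization; the
            // container (or calling code) overwrites the default at runtime.
            [Browsable(false)]
            [DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
            public ILogger Logger
            {
                get { return logger; }
                set { logger = value ?? new NullLogger(); }
            }

            public void DoWork()
            {
                logger.Log("working");
            }
        }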

    Read the article

  • Foreign Key Relationships and "belongs to many"

    - by jan
    I have the following model: S belongs to T; T has many S. A, B, C, D, E (etc.) each have one T, so the T should belong to each of A, B, C, D, E (etc.). At first I set up my foreign keys so that in A, fk_a_t would be the foreign key on A.t to T(id); in B it'd be fk_b_t, etc. Everything looks fine in my UML (using MySQLWorkBench), but generating the Yii models results in Yii thinking that T has many A, B, C, D (etc.), which to me is the reverse. It sounds like I may need A_T, B_T, C_T (etc.) join tables, but this would be a pain, as there are a lot of tables with this relationship. I've also read that the better way to do this would be some sort of behavior, such that A, B, C, D (etc.) can behave as a T, but I'm not clear on exactly how to do this (I will continue to google more on this). What do you think is the better solution? Here's the DDL (auto generated). Just pretend that there are more than 3 tables referencing T.

        -- -----------------------------------------------------
        -- Table `mydb`.`T`
        -- -----------------------------------------------------
        CREATE TABLE IF NOT EXISTS `mydb`.`T` (
          `id` INT NOT NULL AUTO_INCREMENT,
          PRIMARY KEY (`id`)
        ) ENGINE = InnoDB;

        -- -----------------------------------------------------
        -- Table `mydb`.`S`
        -- -----------------------------------------------------
        CREATE TABLE IF NOT EXISTS `mydb`.`S` (
          `id` INT NOT NULL AUTO_INCREMENT,
          `thing` VARCHAR(45) NULL,
          `t` INT NOT NULL,
          PRIMARY KEY (`id`),
          INDEX `fk_S_T` (`t` ASC),
          CONSTRAINT `fk_S_T`
            FOREIGN KEY (`t`)
            REFERENCES `mydb`.`T` (`id`)
            ON DELETE NO ACTION
            ON UPDATE NO ACTION
        ) ENGINE = InnoDB;

        -- -----------------------------------------------------
        -- Table `mydb`.`A`
        -- -----------------------------------------------------
        CREATE TABLE IF NOT EXISTS `mydb`.`A` (
          `id` INT NOT NULL AUTO_INCREMENT,
          `T` INT NOT NULL,
          `stuff` VARCHAR(45) NULL,
          `bar` VARCHAR(45) NULL,
          `foo` VARCHAR(45) NULL,
          PRIMARY KEY (`id`),
          INDEX `fk_A_T` (`T` ASC),
          CONSTRAINT `fk_A_T`
            FOREIGN KEY (`T`)
            REFERENCES `mydb`.`T` (`id`)
            ON DELETE NO ACTION
            ON UPDATE NO ACTION
        ) ENGINE = InnoDB;

        -- -----------------------------------------------------
        -- Table `mydb`.`B`
        -- -----------------------------------------------------
        CREATE TABLE IF NOT EXISTS `mydb`.`B` (
          `id` INT NOT NULL AUTO_INCREMENT,
          `T` INT NOT NULL,
          `stuff2` VARCHAR(45) NULL,
          `foobar` VARCHAR(45) NULL,
          `other` VARCHAR(45) NULL,
          PRIMARY KEY (`id`),
          INDEX `fk_B_T` (`T` ASC),
          CONSTRAINT `fk_B_T`
            FOREIGN KEY (`T`)
            REFERENCES `mydb`.`T` (`id`)
            ON DELETE NO ACTION
            ON UPDATE NO ACTION
        ) ENGINE = InnoDB;

        -- -----------------------------------------------------
        -- Table `mydb`.`C`
        -- -----------------------------------------------------
        CREATE TABLE IF NOT EXISTS `mydb`.`C` (
          `id` INT NOT NULL AUTO_INCREMENT,
          `T` INT NOT NULL,
          `stuff3` VARCHAR(45) NULL,
          `foobar2` VARCHAR(45) NULL,
          `other4` VARCHAR(45) NULL,
          PRIMARY KEY (`id`),
          INDEX `fk_C_T` (`T` ASC),
          CONSTRAINT `fk_C_T`
            FOREIGN KEY (`T`)
            REFERENCES `mydb`.`T` (`id`)
            ON DELETE NO ACTION
            ON UPDATE NO ACTION
        ) ENGINE = InnoDB;
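    For reference, the join-table alternative mentioned above would look something like this; a hypothetical sketch only, since it needs one such table per owning entity, which is exactly the pain described:

        -- Hypothetical join table linking A to T; B_T, C_T, etc. would follow the same shape.
        CREATE TABLE IF NOT EXISTS `mydb`.`A_T` (
          `a_id` INT NOT NULL,
          `t_id` INT NOT NULL,
          PRIMARY KEY (`a_id`, `t_id`),
          CONSTRAINT `fk_a_t_a` FOREIGN KEY (`a_id`) REFERENCES `mydb`.`A` (`id`),
          CONSTRAINT `fk_a_t_t` FOREIGN KEY (`t_id`) REFERENCES `mydb`.`T` (`id`)
        ) ENGINE = InnoDB;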

    Read the article

  • References between Spring beans when using a NameSpaceHandler

    - by teabot
    I'm trying to use a Spring context namespace to build some existing configuration objects in an application. I have defined a context and pretty much have it working satisfactorily; however, I'd like one bean defined by my namespace to implicitly reference another. Consider the class named Node:

        public class Node {
            private String aField;
            private Node nextNode;

            public Node(String aField, Node nextNode) { ... }
        }

    Now in my Spring context I have something like so:

        <myns:container>
            <myns:node aField="nodeOne"/>
            <myns:node aField="nodeTwo"/>
        </myns:container>

    Now I'd like nodeOne.getNextNode() == nodeTwo to be true, so that nodeOne.getNextNode() and nodeTwo refer to the same bean instance. These are pretty much the relevant parts I have in my AbstractBeanDefinitionParser:

        public AbstractBeanDefinition parseInternal(Element element, ParserContext parserContext) {
            ...
            BeanDefinitionBuilder containerFactory =
                BeanDefinitionBuilder.rootBeanDefinition(ContainerFactoryBean.class);
            List<BeanDefinition> containerNodes = Lists.newArrayList();
            String previousNodeBeanName = null;

            // iterate backwards over the 'node' elements
            for (int i = nodeElements.size() - 1; i >= 0; --i) {
                BeanDefinitionBuilder node = BeanDefinitionBuilder.rootBeanDefinition(Node.class);
                node.setScope(BeanDefinition.SCOPE_SINGLETON);

                String nodeField = nodeElements.get(i).getAttribute("aField");
                node.addConstructorArgValue(nodeField);
                if (previousNodeBeanName != null) {
                    node.addConstructorArgValue(new RuntimeBeanReference(previousNodeBeanName));
                } else {
                    node.addConstructorArgValue(null);
                }

                BeanDefinition nodeDefinition = node.getBeanDefinition();
                previousNodeBeanName = "inner-node-" + nodeField;
                parserContext.getRegistry().registerBeanDefinition(previousNodeBeanName, nodeDefinition);
                containerNodes.add(nodeDefinition);
            }
            containerFactory.addPropertyValue("nodes", containerNodes);
        }

    When the application context is created, my Node instances are created and recognized as singletons. Furthermore, the nextNode property is populated with a Node instance carrying the previous node's configuration; however, it isn't the same instance. If I output a log message in Node's constructor, I see two instances created for each node bean definition. I can think of a few workarounds myself, but I'm keen to keep the existing model. So can anyone tell me how I can pass these runtime bean references so that I get the correct singleton behaviour for my Node instances?
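    One pattern that may resolve this, sketched under the assumption that ContainerFactoryBean can accept bean references in its "nodes" property: register each definition once, then put a RuntimeBeanReference (not the definition itself) into the list handed to the container factory. Adding the raw BeanDefinition to the list makes Spring treat it as an inner bean and instantiate it a second time, separately from the registered singleton. Wrapping the list in Spring's ManagedList lets the references be resolved at wiring time (generics on ManagedList assume Spring 3.x; the raw type works the same on 2.5). The loop below mirrors the parser code above:

        import org.springframework.beans.factory.config.BeanDefinition;
        import org.springframework.beans.factory.config.RuntimeBeanReference;
        import org.springframework.beans.factory.support.BeanDefinitionBuilder;
        import org.springframework.beans.factory.support.ManagedList;

        // Inside parseInternal, replacing the plain ArrayList of definitions:
        ManagedList<RuntimeBeanReference> containerNodes = new ManagedList<RuntimeBeanReference>();
        String previousNodeBeanName = null;

        for (int i = nodeElements.size() - 1; i >= 0; --i) {
            BeanDefinitionBuilder node = BeanDefinitionBuilder.rootBeanDefinition(Node.class);
            node.setScope(BeanDefinition.SCOPE_SINGLETON);

            String nodeField = nodeElements.get(i).getAttribute("aField");
            node.addConstructorArgValue(nodeField);
            node.addConstructorArgValue(previousNodeBeanName == null
                    ? null : new RuntimeBeanReference(previousNodeBeanName));

            previousNodeBeanName = "inner-node-" + nodeField;
            parserContext.getRegistry().registerBeanDefinition(
                    previousNodeBeanName, node.getBeanDefinition());

            // A reference by name, so the container resolves the one registered
            // singleton instead of instantiating an inner copy.
            containerNodes.add(new RuntimeBeanReference(previousNodeBeanName));
        }
        containerFactory.addPropertyValue("nodes", containerNodes);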

    Read the article

  • Do Websites need Local Databases Anymore?

    - by viatropos
    If there's a better place to ask this, please let me know. Every time I build a new website/blog/shopping-cart/etc., I keep trying to do the following: extract common functionality into reusable code (Rubygems and jQuery plugins mostly) and, if possible, convert that gem into a small service so I never have to deal with a database for the objects involved (by service, I mean something lean and mean, usually built with the Sinatra Web Framework with a few core models). My assumption is that if I can remove dependencies on local databases, that will make things easier and more scalable in the long run (scalable in terms of reusability and manageability, not necessarily database performance). I'm not sure if that's a good or bad assumption yet. What do you think?

    I've made this assumption because most serious database/model functionality has already been built somewhere on the internet. Just to name a few:

        Social Network API: Facebook
        Messaging API: Twitter
        Mailing API: Google
        Event API: Eventbrite
        Shopping API: Shopify
        Comment API: Disqus
        Form API: Wufoo
        Image API: Picasa
        Video API: Youtube

    Each of those things is fairly complicated to build from scratch and to make as optimized, simple, and easy to use as those companies have made them. So if I build an app that shows pictures (Picasa) on an event page (Eventbrite), where you can see who joined the event (Facebook events), send them emails (Google Apps API), have them fill out monthly surveys (Wufoo), and watch a video when they're done (YouTube), all integrated into a custom, easy-to-use website, and I can do that without ever creating a local database, is that a good thing?

    I ask because there are two things missing from the puzzle that keep forcing me to create that local database: a Post API and a RESTful/pretty-URL API. While there are plenty of blogging systems and APIs for them, there is no one place where you can just write content and have it be part of some massive thing. For every app, I have to write code for creating pretty/RESTful URLs and for saving posts. But it seems like that should be a service! Question is, is that what the website is: that place to integrate the world's services for my specific cause... and, sigh, to store the posts that only my site has access to? Will everyone always need "their own blog"? Why not just have a profile and write lots of content on an established platform like StackOverflow or Facebook? That way I could write apps entirely without a database and know that I'm doing it right.

    Note: of course at some point you'd need a database if you were doing something unique or new. But for the case where you're just rewiring information or creating things like videos, events, and products, is it really necessary anymore?

    Read the article

  • Would a Centralized Blogging Service Work?

    - by viatropos
    If there's a better place to ask this, please let me know. Every time I build a new website/blog/shopping-cart/etc., I keep trying to do the following: extract common functionality into reusable code (Rubygems and jQuery plugins mostly) and, if possible, convert that gem into a small service so I never have to deal with a database for the objects involved (by service, I mean something lean and mean, usually built with the Sinatra Web Framework with a few core models). My assumption is that if I can remove dependencies on local databases, that will make things easier and more scalable in the long run (scalable in terms of reusability and manageability, not necessarily database performance). I'm not sure if that's a good or bad assumption yet. What do you think?

    I've made this assumption because most serious database/model functionality has already been built somewhere on the internet. Just to name a few:

        Social Network API: Facebook
        Messaging API: Twitter
        Mailing API: Google
        Event API: Eventbrite
        Shopping API: Shopify
        Comment API: Disqus
        Form API: Wufoo
        Image API: Picasa
        Video API: Youtube

    Each of those things is fairly complicated to build from scratch and to make as optimized, simple, and easy to use as those companies have. So if I build an app that shows pictures (Picasa) on an event page (Eventbrite), where you can see who joined the event (Facebook events), send them emails (Google Apps API), have them fill out monthly surveys (Wufoo), and watch a video when they're done (YouTube), all integrated into a custom, easy-to-use website, and I can do that without ever creating a local database, is that a good thing?

    I ask because there are two things missing from the puzzle that keep forcing me to create that local database: a Post API and a RESTful/pretty-URL API. While there are plenty of blogging systems and APIs for them, there is no one place where you can just write content and have it be part of some massive thing. For every app, I have to write code for creating pretty/RESTful URLs and for saving posts. But it seems like that should be a service! Question is, is that the main point of a website? Will everyone always need "their own blog"? Why not just have a profile and write lots of content on an established platform like StackOverflow or Facebook?

    Read the article

< Previous Page | 557 558 559 560 561 562 563 564 565 566 567 568  | Next Page >