Search Results

Search found 384 results on 16 pages for 'eager'.

  • return only the last select results from stored procedure

    - by Madalina Dragomir
    The requirement says: a stored procedure meant to search data, based on 5 identifiers. If there is an exact match, return ONLY the exact match; if not, but there is an exact match on the non-null parameters, return ONLY those results; otherwise return any match on any 4 non-null parameters... and so on. My (simplified) code looks like:

        create procedure xxxSearch @a nvarchar(80), @b nvarchar(80)...
        as
        begin
            select whatever from MyTable t
            where ((@a is null and t.a is null) or (@a = t.a))
              and ((@b is null and t.b is null) or (@b = t.b))...
            if @@ROWCOUNT = 0
            begin
                select whatever from MyTable t
                where ((@a is null) or (@a = t.a))
                  and ((@b is null) or (@b = t.b))...
                if @@ROWCOUNT = 0
                begin
                    ...
                end
            end
        end

    As a result, more than one result set can be returned, the first ones empty, and I only need the last one. I know that it is easy to take only the last result set on the application side, but all our stored procedure calls go through a framework that expects the significant results in the first table, and I'm not eager to change it and retest all the existing SPs.

    Is there a way to return only the last select's results from a stored procedure? Is there a better way to do this task?

  • What is best practice for a one-to-many association in Hibernate?

    - by Patrick
    Hi all, I believe this is a common scenario. Say I have a one-to-many mapping in Hibernate: Category has many Items.

    Category:

        @OneToMany(cascade = {CascadeType.ALL}, fetch = FetchType.LAZY)
        @JoinColumn(name = "category_id")
        @Cascade(value = org.hibernate.annotations.CascadeType.DELETE_ORPHAN)
        private List<Item> items;

    Item:

        @ManyToOne(targetEntity = Category.class, fetch = FetchType.EAGER)
        @JoinColumn(name = "category_id", insertable = false, updatable = false)
        private Category category;

    All works fine, and I use Category to fully control Item's life cycle. But when I write code to update a Category, the problem appears: first I get the Category out of the DB, then pass it to the UI; the user fills in altered values for the Category and passes it back. Because I only pass around the Category information, not the Items, the item collection will be empty, and when I call saveOrUpdate it will clean out all the associations.

    Any suggestion on what's best to address this? I think the advantage of having Category control Item is that it is easy to maintain the order of Items without the confusion of a bidirectional mapping. But what about the situation where you do want to update just the Category itself? Load it first and merge? Thank you.
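
    One commonly suggested remedy, sketched below (not from the original thread; method and field names are hypothetical), is to load the persistent Category inside the transaction and copy only the scalar fields the UI edited onto it, so the items collection Hibernate loaded stays intact:

        // Sketch: update the managed instance instead of saving the
        // detached, item-less Category that came back from the UI.
        public void updateCategory(Long id, Category edited) {
            Session session = sessionFactory.getCurrentSession();
            Category managed = (Category) session.get(Category.class, id);
            managed.setName(edited.getName()); // copy only UI-editable fields
            // managed's items list was loaded from the DB and is untouched,
            // so flushing will not orphan-delete anything.
        }

    The "load it first and merge" idea from the question is essentially this; the important part is that the collection on the instance being flushed reflects the database rather than an empty list, because with DELETE_ORPHAN an empty list reads as "delete all items".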

  • Tree deletion with NHibernate

    - by Tigraine
    Hi, I'm struggling with a little problem and starting to arrive at the conclusion that it's simply not possible.

    I have a table called Group. As with most of these systems, Group has a ParentGroup and a Children collection, so the Group table looks like this:

        Group
        - ID (PK)
        - Name
        - ParentId (FK)

    I did my mappings using FNH AutoMappings, but I had to override the defaults for this:

        p.References(x => x.Parent)
            .Column("ParentId")
            .Cascade.All();
        p.HasMany(x => x.Children)
            .KeyColumn("ParentId")
            .ForeignKeyCascadeOnDelete()
            .Cascade.AllDeleteOrphan()
            .Inverse();

    Now, the general idea was to be able to delete a node and have all of its children deleted too by NH, so deleting the only root node should basically clear the whole table.

    I tried first with Cascade.AllDeleteOrphan, but that works only for deletion of items from the Children collection, not deletion of the parent. Next I tried ForeignKeyCascadeOnDelete so the operation gets delegated to the database through ON DELETE CASCADE. But once I do that, MSSQL 2008 does not allow me to create the constraint, failing with:

        Introducing FOREIGN KEY constraint 'FKBA21C18E87B9D9F7' on table 'Group' may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints.

    Well, and that's it for me. I guess I'll just loop through the children and delete them one by one, thus doing an N+1. If anyone has a suggestion on how to do this more elegantly, I'd be eager to hear it.

  • Specifying different initial values for fields in inherited models (django)

    - by Shawn Chin
    Question: What is the recommended way to specify an initial value for fields if one uses model inheritance and each child model needs different default values when rendering a ModelForm?

    Take for example the following models, where CompileCommand and TestCommand both need different initial values when rendered as a ModelForm:

        # ------ models.py
        class ShellCommand(models.Model):
            command = models.CharField(_("command"), max_length=100)
            arguments = models.CharField(_("arguments"), max_length=100)

        class CompileCommand(ShellCommand):
            # ... default command should be "make"
            pass

        class TestCommand(ShellCommand):
            # ... default: command = "make", arguments = "test"
            pass

    I am aware that one can use the initial={...} argument when instantiating the form, but I would rather store the initial values within the context of the model (or at least within the associated ModelForm).

    My current approach: What I'm doing at the moment is storing an initial-values dict within Meta and checking for it in my views.

        # ----- forms.py
        class CompileCommandForm(forms.ModelForm):
            class Meta:
                model = CompileCommand
                initial_values = {"command": "make"}

        class TestCommandForm(forms.ModelForm):
            class Meta:
                model = TestCommand
                initial_values = {"command": "make", "arguments": "test"}

        # ------ in views
        FORM_LOOKUP = {"compile": CompileCommandForm, "test": TestCommandForm}
        CmdForm = FORM_LOOKUP.get(command_type, None)
        # ...
        initial = getattr(CmdForm.Meta, "initial_values", {})
        form = CmdForm(initial=initial)

    This feels too much like a hack. I am eager for a more generic / better way to achieve this. Suggestions appreciated.

    Other attempts: I have toyed around with overriding the constructor of the submodels:

        class CompileCommand(ShellCommand):
            def __init__(self, *args, **kwargs):
                kwargs.setdefault('command', "make")
                super(CompileCommand, self).__init__(*args, **kwargs)

    and this works when I try to create an object from the shell:

        >>> c = CompileCommand(name="xyz")
        >>> c.save()
        <CompileCommand: 123>
        >>> c.command
        'make'

    However, this does not set the default value when the associated ModelForm is rendered, which unfortunately is what I'm trying to achieve.

    Update 2 (looks promising): I now have the following in forms.py, which lets me set Meta.default_initial_values without needing extra code in views:

        class ModelFormWithDefaults(forms.ModelForm):
            def __init__(self, *args, **kwargs):
                if hasattr(self.Meta, "default_initial_values"):
                    kwargs.setdefault("initial", self.Meta.default_initial_values)
                super(ModelFormWithDefaults, self).__init__(*args, **kwargs)

        class TestCommandForm(ModelFormWithDefaults):
            class Meta:
                model = TestCommand
                default_initial_values = {"command": "make", "arguments": "test"}

  • JPA/Hibernate Parent/Child relationship

    - by NubieJ
    Hi, I am quite new to JPA/Hibernate (and Java in general), so my question is as follows (note: I have searched far and wide and have not come across an answer to this). I have two entities: Parent and Child (naming changed). Parent contains a list of Children, and Child refers back to its Parent, e.g.:

        @Entity
        public class Parent {
            @Id
            @Basic
            @GeneratedValue(strategy = GenerationType.IDENTITY)
            @Column(name = "PARENT_ID", insertable = false, updatable = false)
            private int id;

            /* ..... */

            @OneToMany(cascade = {CascadeType.ALL}, fetch = FetchType.LAZY)
            @JoinColumn(name = "PARENT_ID", referencedColumnName = "PARENT_ID", nullable = true)
            private Set<Child> children;

            /* ..... */
        }

        @Entity
        public class Child {
            @Id
            @Basic
            @GeneratedValue(strategy = GenerationType.IDENTITY)
            @Column(name = "CHILD_ID", insertable = false, updatable = false)
            private int id;

            /* ..... */

            @ManyToOne(cascade = {CascadeType.REFRESH}, fetch = FetchType.EAGER, optional = false)
            @JoinColumn(name = "PARENT_ID", referencedColumnName = "PARENT_ID")
            private Parent parent;

            /* ..... */
        }

    I want to be able to do the following:

    - Retrieve a Parent entity containing a list of all its children; however, when listing Parents (getting a List<Parent>), the children should of course be omitted from the results, hence FetchType.LAZY.
    - Retrieve a Child entity containing an instance of its Parent entity.

    Using the code above (or similar) results in two exceptions:

    - Retrieving a Parent: "A cycle is detected in the object graph. This will cause infinitely deep XML..."
    - Retrieving a Child: "org.hibernate.LazyInitializationException: failed to lazily initialize a collection of role: xxxxxxxxxxx, no session or session was closed"

    When retrieving the Parent entity, I am using a named query (i.e., calling it specifically):

        @NamedQuery(name = "Parent.findByParentId",
                    query = "SELECT p FROM Parent AS p LEFT JOIN FETCH p.children WHERE p.id = :id")

    Code to get the Parent (i.e., the service layer):

        public Parent findByParentId(int parentId) {
            Query query = em.createNamedQuery("Parent.findByParentId");
            query.setParameter("id", parentId);
            return (Parent) query.getSingleResult();
        }

    Why am I getting a LazyInitializationException when retrieving the Child entity, even though the children property on the Parent entity is set as lazy?
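
    A hedged reading (not from the thread): these look like two separate problems. The cycle error comes from the XML marshaller walking Parent -> children -> parent forever, and the LazyInitializationException comes from it touching the eagerly loaded parent's LAZY children list after the persistence context has closed. One common remedy is to force the collection while the session is still open, e.g. in the service layer (a sketch; the getter names are assumptions):

        // Sketch: touch the lazy collection before the session closes, so
        // serialization outside the transaction cannot hit a closed session.
        public Child findByChildId(int childId) {
            Child child = em.find(Child.class, childId);
            child.getParent().getChildren().size(); // forces initialization
            return child;
        }

    The cycle then still needs one direction of the association excluded from serialization, for instance by annotating one side's getter with @XmlTransient (from javax.xml.bind.annotation) or by mapping to DTOs.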

  • Not loading associations without proxies in NHibernate

    - by Alice
    I don't like the idea of proxies and lazy loading. I don't need that. I want pure POCOs, and I want to control loading associations explicitly when I need to. Here is the entity:

        public class Post
        {
            public long Id { get; set; }
            public long OwnerId { get; set; }
            public string Content { get; set; }
            public User Owner { get; set; }
        }

    and the mapping:

        <class name="Post">
            <id name="Id" />
            <property name="OwnerId" />
            <property name="Content" />
            <many-to-one name="Owner" column="OwnerId" />
        </class>

    However, if I specify lazy="false" in the mapping, Owner is always eagerly fetched. I can't remove the many-to-one mapping, because that would also disable explicit loading, as well as queries like:

        from x in session.Query<Comment>()
        where x.Owner.Title == "hello"
        select x;

    I specified lazy="true" and set the use_proxy_validator property to false, but that also eager-loads Owner. Is there any way to load only the Post entity?

  • Have you dealt with space hardening?

    - by Tim Post
    I am very eager to study best practices when it comes to space hardening. For instance, I've read (though I can't find the article any longer) that some core parts of the Mars rovers did not use dynamic memory allocation; in fact, it was forbidden. I've also read that old-fashioned core memory may be preferable in space.

    I was looking at some of the projects associated with the Google Lunar Challenge and wondering what it would feel like to get code on the moon, or even just into space. I know that space-hardened boards offer some sanity in such a harsh environment; however, I'm wondering (as a C programmer) how I would need to adjust my thinking and code if I were writing something that would run in space?

    I think the next few years might show more growth in private space companies, and I'd really like to be at least somewhat knowledgeable regarding best practices. Can anyone recommend some books, offer links to papers on the topic, or (gasp) even a simulator that shows you what happens to a program if radiation, cold, or heat bombards a board that has sustained damage to its insulation?

    I think the goal is keeping humans inside the spacecraft (as far as fixing or swapping stuff goes) and avoiding missions to fix things. Furthermore, if the board maintains some critical system, early warnings seem paramount.

  • Entity Framework + MySQL - Why is the performance so terrible?

    - by Cyril Gupta
    When I decided to use an OR/M (Entity Framework for MySQL this time) for my new project, I was hoping it would save me time, but I seem to have failed at it (for the second time now). Take this simple SQL query:

        SELECT * FROM POST ORDER BY addedOn DESC LIMIT 0, 50

    It executes and gives me results in less than a second, as it should (the table has about 60,000 rows). Here's the equivalent LINQ to Entities query that I wrote for this:

        var q = (from p in db.post
                 orderby p.addedOn descending
                 select p).Take(50);
        var q1 = q.ToList(); // This is where the query is fetched, and it times out

    But this query never even executes: it ALWAYS times out (without the orderby it takes 5 seconds to run). My timeout is set to 12 seconds, so you can imagine it is taking much more than that.

    Why is this happening? Is there a way I can see the actual SQL query that Entity Framework is sending to the db? Should I give up on EF+MySQL and move to plain SQL before I lose all eternity trying to make it work? I've recalibrated my indexes and tried eager loading (which actually makes it fail even without the orderby clause). Please help; I am about to give up on OR/M for MySQL as a lost cause.

  • Hibernate: fetching multiple bags efficiently

    - by Jens Jansson
    Hi! I'm developing a multilingual application. For this reason, many objects have, in their name and description fields, collections of something I call LocalizedStrings instead of plain strings. Every LocalizedString is basically a pair of a locale and a string localized to that locale. Let's take as an example an entity, say a Book object:

        public class Book {
            @OneToMany
            private List<LocalizedString> names;
            @OneToMany
            private List<LocalizedString> description;
            // and so on...
        }

    When a user asks for a list of books, the application queries for all the books, fetches the name and description of every book in the locale the user has selected to run the app in, and displays them back to the user. This works, but it is a major performance issue. At the moment Hibernate makes one query to fetch all the books, and after that it goes through every single object and asks Hibernate for the localized strings for that specific object, resulting in the "n+1 selects" problem. Fetching a list of 50 entities produces about 6000 rows of SQL commands in my server log.

    I tried making the collections eager, but that led me to the "cannot simultaneously fetch multiple bags" issue. Then I tried setting the fetch strategy on the collections to subselect, hoping that it would do one query for all books and, after that, one query fetching all LocalizedStrings for all the books. Subselect didn't work in this case the way I had hoped; it basically did exactly the same as my first case.

    I'm starting to run out of ideas on how to optimize this. So, in short: what fetch-strategy alternatives are there when you are fetching a collection whose every element has one or more collections of its own, which have to be fetched simultaneously?
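
    One pattern that keeps the query count constant (a sketch, not from the thread) is to leave both collections lazy and run one fetch-join query per bag inside the same Session; the persistence context attaches both initialized bags to the same Book instances, so 50 books cost 2 queries instead of 1 + 2N:

        // Sketch: one fetch join per bag, in the same Session.
        // "distinct" folds away the duplicate Book rows the join produces.
        List<Book> books = session.createQuery(
                "select distinct b from Book b left join fetch b.names")
                .list();
        session.createQuery(
                "select distinct b from Book b left join fetch b.description")
                .list(); // same persistent instances, second bag now filled

    The "cannot simultaneously fetch multiple bags" error only applies to a single query fetching two List mappings at once; issuing the fetches as separate queries (or remapping the collections as Set) sidesteps it.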

  • What is the difference between these two linq implementations?

    - by Mahesh Velaga
    I was going through Jon Skeet's "Reimplementing LINQ to Objects" series. In the article on implementing Where, I found the following snippets, but I don't get what advantage we gain by splitting the original method into two.

    Original method:

        // Naive validation - broken!
        public static IEnumerable<TSource> Where<TSource>(
            this IEnumerable<TSource> source,
            Func<TSource, bool> predicate)
        {
            if (source == null)
            {
                throw new ArgumentNullException("source");
            }
            if (predicate == null)
            {
                throw new ArgumentNullException("predicate");
            }
            foreach (TSource item in source)
            {
                if (predicate(item))
                {
                    yield return item;
                }
            }
        }

    Refactored method:

        public static IEnumerable<TSource> Where<TSource>(
            this IEnumerable<TSource> source,
            Func<TSource, bool> predicate)
        {
            if (source == null)
            {
                throw new ArgumentNullException("source");
            }
            if (predicate == null)
            {
                throw new ArgumentNullException("predicate");
            }
            return WhereImpl(source, predicate);
        }

        private static IEnumerable<TSource> WhereImpl<TSource>(
            this IEnumerable<TSource> source,
            Func<TSource, bool> predicate)
        {
            foreach (TSource item in source)
            {
                if (predicate(item))
                {
                    yield return item;
                }
            }
        }

    Jon says it's for eager validation while deferring the rest of the work, but I don't get it. Could someone please explain in a little more detail what the difference between these two functions is, and why the validation is performed eagerly in one and not in the other?

    Conclusion/solution: I got confused due to my lack of understanding of which methods are compiled as iterator blocks. I assumed it was based on a method's signature, such as returning IEnumerable<T>. Based on the answers, now I get it: a method is an iterator block if it uses yield statements.
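
    The point of the split: a C# method whose body contains yield is compiled into a state machine, so nothing in its body, including the argument checks, runs until the first MoveNext(); the refactor moves validation into a plain method that runs immediately and delegates the deferred part to the iterator block. The same split can be illustrated in Java, where there are no iterator blocks but the laziness lives in the returned Iterable (a sketch for illustration only; 'where' and 'Sequences' are hypothetical names):

        import java.util.Objects;
        import java.util.function.Predicate;
        import java.util.stream.StreamSupport;

        final class Sequences {
            // Validation runs eagerly, at call time...
            static <T> Iterable<T> where(Iterable<T> source, Predicate<T> predicate) {
                Objects.requireNonNull(source, "source");
                Objects.requireNonNull(predicate, "predicate");
                // ...while filtering runs lazily, each time iterator() is requested.
                return () -> StreamSupport.stream(source.spliterator(), false)
                                          .filter(predicate)
                                          .iterator();
            }
        }

    Had the null checks lived inside the lambda, a call like Sequences.where(null, x -> true) would not fail until someone started iterating, which is exactly the broken behaviour of the naive C# version.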

  • Play framework sends 2 queries for a fetch query

    - by MRu
    I currently have problems with JPA in the Play framework 1.2.4. I need to have a UserOptions model in a separate database and want to join it lazily, because it's only needed in one query. In this one query I want to load the options eagerly, and by searching I found out that this can only be done with a join query. If I used eager instead of lazy, everything would be fine using User.findById(): the options and the user are found in one query. But Play sends two queries when I use a 'left join fetch' query.

    So here's the query:

        User.find("SELECT user FROM User user LEFT JOIN FETCH user.options options WHERE user.id = ?",
                Long.parseLong(id)).first();

    And here are the models:

        @Entity
        public class User extends Model {
            @OneToOne(mappedBy = "user", fetch = FetchType.LAZY)
            public UserOptions options;
            // ...
        }

        @Entity
        public class UserOptions extends Model {
            @OneToOne(fetch = FetchType.LAZY)
            public User user;
        }

    The question is: why does Play send two queries for the fetch query? Thanks in advance.
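
    A plausible explanation (hedged; inferred from Hibernate's documented behaviour rather than from the thread): User.options is the inverse (mappedBy) side of the one-to-one, and Hibernate cannot lazy-proxy the inverse side of a @OneToOne, because it has to query the other table anyway just to decide between null and a proxy; that extra select can show up even when the JPQL already fetch-joins. One workaround, sketched below, is to make User the owning side so the FK sits in the user table (a schema change, so treat all names as assumptions):

        // Sketch: an owning-side @OneToOne can be proxied, so LAZY is
        // honoured and "left join fetch" collapses to a single SQL query.
        @Entity
        public class User extends Model {
            @OneToOne(fetch = FetchType.LAZY, cascade = CascadeType.ALL)
            @JoinColumn(name = "options_id") // hypothetical FK column on the user table
            public UserOptions options;
        }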

  • How do I determine whether calculation was completed, or detect interrupted calculation?

    - by BenTobin
    I have a rather large workbook that takes a really long time to calculate. It used to be quite a challenge to get it to calculate all the way, since Excel is so eager to silently abort calculation if you so much as look at it. To help alleviate the problem, I created some VBA code to initiate the calculation, launched from a form; the result is that it is not quite as easy to interrupt the calculation process, but it is still possible. (I can easily do this by clicking the close X on the form, but I imagine there are other ways.)

    Rather than taking more steps to try to make calculation harder to interrupt, I'd like the code to detect whether calculation is complete, so it can notify the user instead of just blindly forging on into the rest of the steps in my code. So far, I can't find any way to do that.

    I've seen references to Application.CalculationState, but the value is xlDone after I interrupt calculation, even if I interrupt it after a few seconds (it normally takes around an hour). I can't think of a way to do this by checking the value of cells, since I don't know which one is calculated last. I see that there is a way to mark cells as "dirty", but I haven't been able to find a way to check the dirtiness of a cell, and I don't know if that's even the right path to take, since I'd likely have to check every cell in every sheet. The act of interrupting calculation does not raise an error, so my On Error handler doesn't get triggered.

    Is there anything I'm missing? Any ideas?

  • Keeping Linq to SQL alive when using ViewModels (ASP.NET MVC)

    - by Kohan
    I have recently started using custom ViewModels (for example, CustomerViewModel):

        public class CustomerViewModel
        {
            public IList<Customer> Customers { get; set; }
            public int ProductId { get; set; }

            public CustomerViewModel(IList<Customer> customers, int productId)
            {
                this.Customers = customers;
                this.ProductId = productId;
            }

            public CustomerViewModel() { }
        }

    ... and am now passing them to my view instead of the entities themselves (for example, var Custs = repository.getAllCusts(id)), as it seems good practice to do so.

    The problem I have encountered is that when using ViewModels, by the time the data has got to the view, I have lost the ability to lazy-load the customers. I do not believe this was the case before using them.

    Is it possible to retain the ability to lazy load while still using ViewModels, or do I have to eager-load with this method? Thanks, Kohan.

  • Parallelizing L2S Entity Retrieval

    - by MarkB
    Assuming a typical domain-entity approach with SQL Server and a dbml/L2S DAL, with a logic layer on top of that: in situations where lazy loading is not an option, I have settled on a convention where getting a list of entities does not also get each item's child entities (no loading), but getting a single entity does (eager loading).

    Since getting a single entity also gets its children, it causes a cascading effect in which each child then gets its children too. This sounds bad, but as long as the model is not too deep, I usually don't see performance problems that outweigh the benefits of the ease of use. So if I want a list in which each of the items is fully hydrated with children, I combine the GetList and GetItem methods: I get a list and then loop through it, getting each item with the full cascade.

    Even this is generally acceptable in many of the projects I've worked on, but I have recently encountered situations with larger models and/or more data in which it needs to be more efficient. I've found that partitioning the loop and executing it on multiple threads yields excellent results. In my first experiment, with a list of 50 items from one particular project, I did 5 threads of 10 items each and got a 3X improvement in time. Of course, the mileage will vary depending on the project, but all else being equal this is clearly a big opportunity.

    However, before I go further, I was wondering what others who have already been through this have done. What are some good approaches to parallelizing this type of thing?
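
    The pattern described, partitioning the list and hydrating each partition on its own thread, is language-agnostic; here is a sketch of the shape in Java, purely illustrative since the question is about L2S (fetchItemWithChildren is a hypothetical stand-in for the GetItem call; imports from java.util and java.util.concurrent assumed):

        // Sketch: hydrate items in parallel with a fixed-size pool.
        List<Item> loadAll(List<Long> itemIds) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(5);
            List<Future<Item>> futures = new ArrayList<Future<Item>>();
            for (final Long id : itemIds) {
                futures.add(pool.submit(new Callable<Item>() {
                    public Item call() {
                        // each worker needs its own connection/context
                        return fetchItemWithChildren(id);
                    }
                }));
            }
            List<Item> items = new ArrayList<Item>();
            for (Future<Item> f : futures) {
                items.add(f.get()); // blocks; preserves original order
            }
            pool.shutdown();
            return items;
        }

    The usual caveats carry over: one data context/connection per worker, and a pool size tuned against what the database can handle rather than the CPU count.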

  • php transform array into multidim array

    - by fverswijver
    So I'm working on a website with Doctrine as ORM, and I get the following array back as a result:

        Array
        (
            [0] => Array
                (
                    [c_cat_id] => 1
                    [c_title] => Programas e projetos
                    [p_menu] => PBA BR 163
                    [p_page_id] => 1
                )
            [1] => Array
                (
                    [c_cat_id] => 1
                    [c_title] => Programas e projetos
                    [p_menu] => Outros projetos
                    [p_page_id] => 3
                )
        )

    Is it possible to transform this array (in PHP) into something like this?

        Array
        (
            [0] => Array
                (
                    [c_cat_id] => 1
                    [c_title] => Programas e projetos
                    [pages] => Array
                        (
                            [p_page_id] => 1 [p_menu] => PBA BR 163,
                            [p_page_id] => 3 [p_menu] => Outros projetos
                        )
                )
        )

    Thanks for your help; always eager to learn new ways of doing things, and that's why I love StackOverflow ;)

  • What job title would be most suitable for my resume objective, and what salary range should I expect?

    - by user354177
    I was a classic ASP developer in 2000. After a year of full-time employment, I left the field. I found a part-time position as an ASP developer again in 2005 and taught myself VB.NET. In 2007, I got my current full-time job as an ASP.NET web developer. I taught myself C#, LINQ to SQL, web services, AJAX, and creating all kinds of reports with Reporting Services. One and a half years ago, I enrolled in a part-time graduate program in Database and Web Systems. I have two semesters to go, and so far my GPA is 4.0/4.0.

    My job responsibility is to collect business requirements from other departments, design the database, write stored procedures, create aspx pages, and create reports. I love what I do and want to advance my career to the next level. What I enjoy most is designing the relational database. I would want to become a .NET architect eventually.

    I got an interview. They were looking for an ASP.NET web developer, but I was surprised and disappointed that the position would only create aspx pages. I would not even have the opportunity to write stored procedures, let alone design the database (those would be provided by another group). Furthermore, they asked me some detailed questions about web forms, some of which I did not know the answers to; they might have been disappointed as well.

    I am eager to learn and can apply what I learn to real projects right away. I believe that no matter what specific skills I am lacking for a new position, I can catch up quickly. I am looking for a job in the $70k range. The objective in my resume is "Experienced C# Web Application Developer". Given the experience from that last interview, I wonder if the objective is really what I want. Could somebody answer my questions? Thank you.

  • Using the JPA Criteria API, can you do a fetch join that results in only one join?

    - by Shaun
    Using JPA 2.0: it seems that by default (with no explicit fetch), @OneToOne(fetch = FetchType.EAGER) fields are fetched in 1 + N queries, where N is the number of results containing an entity that defines the relationship to a distinct related entity. Using the Criteria API, I might try to avoid that as follows:

        CriteriaBuilder builder = entityManager.getCriteriaBuilder();
        CriteriaQuery<MyEntity> query = builder.createQuery(MyEntity.class);
        Root<MyEntity> root = query.from(MyEntity.class);
        Join<MyEntity, RelatedEntity> join = root.join("relatedEntity");
        root.fetch("relatedEntity");
        query.select(root).where(builder.equal(join.get("id"), 3));

    The above should ideally be equivalent to the following:

        SELECT m FROM MyEntity m JOIN FETCH m.relatedEntity r WHERE r.id = 3

    However, the criteria query results in the root table needlessly being joined to the related entity table twice: once for the fetch and once for the where predicate. The resulting SQL looks something like this:

        SELECT myentity.id, myentity.attribute, relatedentity2.id, relatedentity2.attribute
        FROM my_entity myentity
        INNER JOIN related_entity relatedentity1 ON myentity.related_id = relatedentity1.id
        INNER JOIN related_entity relatedentity2 ON myentity.related_id = relatedentity2.id
        WHERE relatedentity1.id = 3

    Alas, if I only do the fetch, then I don't have an expression to use in the where clause. Am I missing something, or is this a limitation of the Criteria API? If it's the latter, is it being remedied in JPA 2.1, or are there any vendor-specific enhancements? Otherwise, it seems better to just give up compile-time type checking (I realize my example doesn't use the metamodel) and use dynamic JPQL TypedQueries.
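
    A widely used workaround (an unchecked cast that Hibernate and EclipseLink both tolerate, though the JPA 2.0 spec does not promise it): in those providers the object returned by fetch() also implements Join, so the fetch itself can drive the predicate, producing a single join:

        // Sketch: reuse the fetch as the join instead of creating a second one.
        Root<MyEntity> root = query.from(MyEntity.class);
        @SuppressWarnings("unchecked")
        Join<MyEntity, RelatedEntity> join =
                (Join<MyEntity, RelatedEntity>) root.<MyEntity, RelatedEntity>fetch("relatedEntity");
        query.select(root).where(builder.equal(join.get("id"), 3));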

  • Mapping many-to-many association table with extra column(s)

    - by user635524
    My database contains 3 tables: the User and Service entities have a many-to-many relationship and are joined by the SERVICE_USER table, as follows:

        USERS - SERVICE_USER - SERVICES

    The SERVICE_USER table contains an additional BLOCKED column. What is the best way to perform such a mapping? These are my entity classes:

        @Entity
        @Table(name = "USERS")
        public class User implements java.io.Serializable {
            private String userid;
            private String email;

            @Id
            @Column(name = "USERID", unique = true, nullable = false)
            public String getUserid() {
                return this.userid;
            }
            // ... some get/set methods
        }

        @Entity
        @Table(name = "SERVICES")
        public class CmsService implements java.io.Serializable {
            private String serviceCode;

            @Id
            @Column(name = "SERVICE_CODE", unique = true, nullable = false, length = 100)
            public String getServiceCode() {
                return this.serviceCode;
            }
            // ... some additional fields and get/set methods
        }

    I followed this example: http://giannigar.wordpress.com/2009/09/04/m ... using-jpa/ Here is some test code:

        User user = new User();
        user.setEmail("e2");
        user.setUserid("ui2");
        user.setPassword("p2");

        CmsService service = new CmsService("cd2", "name2");

        List<UserService> userServiceList = new ArrayList<UserService>();
        UserService userService = new UserService();
        userService.setService(service);
        userService.setUser(user);
        userService.setBlocked(true);
        service.getUserServices().add(userService);

        userDAO.save(user);

    The problem is that Hibernate persists the User object and the UserService one, but I have no success with the CmsService object. I tried using EAGER fetch; no progress.

    Is it possible to achieve the behaviour I'm expecting with the mapping provided above? Maybe there is some more elegant way of mapping a many-to-many join table with additional columns?
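
    For reference, the usual shape of the association entity in this pattern (a sketch with hypothetical names, not necessarily the poster's actual UserService): the join table becomes an entity with a composite key made of the two foreign keys, and BLOCKED becomes an ordinary column:

        // Sketch: SERVICE_USER mapped as an entity with a composite id.
        @Entity
        @Table(name = "SERVICE_USER")
        @IdClass(UserServiceId.class) // id class with two fields matching these attributes
        public class UserService implements java.io.Serializable {
            @Id @ManyToOne
            @JoinColumn(name = "USERID")
            private User user;

            @Id @ManyToOne
            @JoinColumn(name = "SERVICE_CODE")
            private CmsService service;

            @Column(name = "BLOCKED")
            private boolean blocked;
            // getters/setters omitted
        }

    As for the symptom (User and UserService saved, CmsService not): nothing in the shown test code persists the service itself, so unless a cascade reaches it, saving it explicitly before userDAO.save(user), or adding a cascade on the UserService.service side, would be the first thing to try.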

  • Hibernate OneToMany and ManyToOne confusion! Null List!

    - by squizz
    I have two tables... for example, Company and Employee (let's keep this real simple):

        Company(id, name);
        Employee(id, company_id);

    Employee.company_id is a foreign key. My entity model looks like this:

    Employee:

        @ManyToOne(cascade = CascadeType.PERSIST)
        @JoinColumn(name = "company_id")
        Company company;

    Company:

        @OneToMany(cascade = CascadeType.ALL, fetch = FetchType.EAGER)
        @JoinColumn(name = "company_id")
        List<Employee> employeeList = new ArrayList<Employee>();

    So, yeah, I want a list of employees for a company. The following works fine:

        Employee e = new Employee();
        e.setCompany(c); // c is a Company that is already in the database.
        DAO.insertEmployee(e);

    But if I then get my Company object, its list is empty! I've tried endless different ways from the Hibernate documentation; obviously not the correct one yet! I just want the list to be populated for me, or to find a sensible alternative. Help would be greatly appreciated, thanks!
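
    What's likely missing (hedged; this is the standard explanation rather than a diagnosis of this exact setup): persisting the Employee writes company_id to the database, but nothing updates the Company instance already held in memory, and with both sides mapped to the same join column (no mappedBy) the two mappings are treated independently. A sketch of the usual shape:

        // Sketch: make the collection the inverse side and keep the
        // in-memory graph in sync by hand when adding an employee.
        @OneToMany(mappedBy = "company", cascade = CascadeType.ALL,
                   fetch = FetchType.EAGER)
        List<Employee> employeeList = new ArrayList<Employee>();

        public void addEmployee(Employee e) { // convenience method on Company
            e.setCompany(this);  // owning side: writes company_id
            employeeList.add(e); // inverse side: the list is populated now
        }

    A Company freshly loaded in a new session would also see the list filled, since the EAGER fetch reads it back from the database.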

  • Yii 'limit' on related model's scope

    - by pethee
    I have a model called Guesses that has many Comments (one-to-many on one side, belongs-to on the other; the relations are set between the two models and they are correct). I'm making eager queries on this to then pass the result on as JSON in response to an API call. I added a scope to Comments called 'api', like this:

        public function scopes()
        {
            return array(
                'api' => array(
                    'select' => 'id, comment, date',
                    'limit' => 3,
                    'order' => 'date DESC',
                    'together' => true,
                ),
            );
        }

    And I'm running the following one-liner query:

        $data = Guesses::model()->with('comments:api')->findAll();

    The issue here is that when calling the 'api' scope via with('relation'), the limit property simply doesn't apply. I added 'together' => true there for another type of scope, plus I hear it might help; it doesn't make a difference.

    I don't need all the comments of all guesses; I want the top 3 (or 5). I am also trying to keep the one-liner call intact and simple, managing everything through scopes, relations and parameterized functions, so that the API call itself stays clean and simple. Any advice?

  • JPA entity relations are not populated after .persist()

    - by Tomik
    Hello, this is a sample of my two entities:

        @Entity
        public class Post implements Serializable {
            @OneToMany(mappedBy = "post", fetch = javax.persistence.FetchType.EAGER)
            @OrderBy("revision DESC")
            public List<PostRevision> revisions;
        }

        @Entity(name = "post_revision")
        public class PostRevision implements Serializable {
            @ManyToOne
            public Post post;

            private Integer revision;

            @PrePersist
            private void prePersist() {
                List<PostRevision> list = post.revisions;
                if (list.size() >= 1)
                    revision = list.get(list.size() - 1).revision + 1;
                else
                    revision = 1;
            }
        }

    So, there's a "post", and it can have several revisions. While a revision is being persisted, the entity looks at the list of the existing revisions and finds the next revision number. The problem is that Post.revisions is NULL, but I think it should be automatically populated. I guess there's some kind of problem in my source code, but I don't know where. Here's my "persistence" code:

        Post post = new Post();
        PostRevision revision = new PostRevision();
        revision.post = post;
        em.persist(post);
        em.persist(revision);
        em.flush();

    I think that after persisting "post", it becomes "managed" and all the relations should be populated from then on. Thanks for the help! (Note: public attributes are just for demonstration.)
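
    The likely explanation (hedged, but this is how JPA is specified to behave): persist() makes an instance managed; it does not populate inverse relationships. The in-memory object graph is the application's responsibility, and the revisions list is only filled from the database on a load or refresh. A sketch of the persistence code with both sides wired by hand, which also gives the @PrePersist callback a non-null list to inspect:

        // Sketch: maintain both directions of the association in memory.
        Post post = new Post();
        post.revisions = new ArrayList<PostRevision>();

        PostRevision revision = new PostRevision();
        revision.post = post;
        post.revisions.add(revision); // inverse side, set by hand

        em.persist(post);
        em.persist(revision);
        em.flush();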

  • How to add IP range to a server?

    - by Efe Cakinberk
    Hello. First of all, I must say that I'm a very inexperienced server and network user, but I rented an unmanaged dedicated server. Well, I didn't know what unmanaged really means; I learned it when I needed support. I must do everything by myself.

    I have a problem. I already had 4 IPs on my server when I rented it, but then I needed more IPs, and my provider assigned me 32 IPs, of which I can only use 27. 85.25.230.0 - 85.25.230.31 is my IP range, and they say the following IPs must not be configured on the server:

        85.25.230.0  - network address
        85.25.230.1  - gateway address
        85.25.230.2  - router redundancy
        85.25.230.3  - router redundancy
        85.25.230.31 - broadcast address

    But the problem is: the IPs are assigned to me, yet they are not set up on my server. How do I set up the IPs to work on my server? After my research on Google, I ran this command at the command prompt:

        route add 85.25.230.0 mask 255.255.255.224 85.25.230.1 metric 1 if

    It then said OK!, and I thought the IPs should be working. (By the way, the mask was given to me by my ISP; I don't know what "metric 1" and "if" mean, I just saw them on the net and typed them in.)

    I set up my domains using the Plesk control panel. So I added one domain and assigned one of my new IPs, 85.25.230.5, to it. But no, it is not working: when I visit the domain in a browser, a Plesk page appears saying this domain is not configured on the server. Then I changed the domain's IP to one of my old IPs, which were given to me with the server and which I have been using for my other domains for a long time. OK, within a second the domain started working. I set it back to my new IP, and the domain did not work.

    As I said, I'm not an expert and do not know the logic, but I'm eager to learn. Can you tell me what might cause the problem, and whether I did something wrong while setting up the IP range on my server? If so, how should I set it up? Thank you, Efe

  • Local dns for testing websites using mobile devices

    - by Morpheu5
    Hi. I have no idea where to start, so sorry in advance if this topic has already been discussed.

    I usually develop web sites using my laptop as a development server, and recently I needed to test a web site using various mobile devices that can connect via wifi. Having no real AP, I set up an ad-hoc network using my laptop's wireless card, and the devices can correctly browse the Internet and access the laptop's web server. The setup is as follows:

        subnet: 192.168.1.0/24
        gateway to the Internet (wired ADSL router/modem): 192.168.1.1
        laptop: 192.168.1.64 (eth0, wired, connected to the gateway) and 192.168.1.32 (eth1, wifi, somewhat bridged to eth0)
        mobile devices (same for all; I only use one of them at any time, for simplicity): 192.168.1.11, with default gw 192.168.1.1

    Now, if I open either 192.168.1.32 or 192.168.1.64 from the mobile devices, I correctly get the default host of my Apache configuration. However, I usually work with virtual hosts, for many practical reasons, one of which is Drupal's peculiar implementation of multi-sites. For those who don't know how this works: Drupal takes the request's hostname and searches its sites/ subdirectories for an appropriate configuration file. So, for example, suppose I request www.example.com; Drupal would search for a config file in the following directories:

        sites/www.example.com/
        sites/example.com/
        sites/com/
        sites/default/

    So I decided to adopt the following style of virtual hosts: if the website I'm working on will be accessible at www.example.com, I set up a sites/www.example.com/ directory and create a virtual host for local.www.example.com, so Drupal has no trouble finding it.

    I've been told this is suboptimal from a DNS point of view, since I'd have to create an authoritative entry for example.com and turn BIND on only when I'm supposed to access the local copy, which is weird. However, if this is the only path I can follow, I still have some problems with BIND's configuration, as I couldn't find any guide that tells me, in a clear, noob-friendly way, how to set up such an entry.

    On the other hand, I was wondering if I could set up an authoritative entry for local, so I could access www.example.com.local and tell Apache in some way (which I don't even know is possible) to put www.example.com instead of www.example.com.local in the relevant environment variable.

    Anyway, I have one last problem, sort of: when I launch BIND in debug mode with high verbosity and make 192.168.1.32 the primary DNS for the devices, the output doesn't say anything about requests being made from the devices to BIND, so I'm not even sure it comes into play.

    As you can see, I'm a complete noob at these matters, but I'm eager to learn, so any help/pointers will be appreciated.

  • SQL SERVER – Retrieve and Explore Database Backup without Restoring Database – Idera virtual database

    - by pinaldave
    I recently downloaded Idera's SQL virtual database and tested it. There are a few things about this tool which caught my attention.

    My Scenario

    It is quite common in real life that observing or retrieving older data is sometimes necessary, even though that data has changed as time passed. Our full database backup was 40 GB in size, and restoring it on our production server usually takes around 16 to 22 minutes, depending on the load present on the server; this range varies from one server to another as per the configuration of the computer. Some other issues we used to have:

    - When restoring a large 40 GB database, we needed at least that much free space on our production server.
    - Once in a while, we even had to make changes in the restored database and use the changed, restored database for our purpose, making the process even more time-consuming.

    My Solution

    I had heard a lot about Idera's SQL virtual database tool, and right after we started to test it, we found out that it really delivers what it promises. Using this software was very easy, and we were able to restore our database from backup in less than 2 minutes, sparing us the usual 16-22 minutes; the needful was finished in a total of 10 minutes. Another interesting observation is that no additional space is needed for restoring the database: for a complete database restoration, not a single additional MB on the drive is required. We can use the database in the same way as our regular database, and there is no need for any additional configuration and setup.

    Let us look at the most relevant points of this product, based on my initial experience:

    - Quick restoration of the database backup
    - No additional space required for database restoration
    - The virtual database has no physical .MDF or .LDF
    - The database which is restored is, in fact, the backup file converted into a virtual database
    - DDL and DML queries can be executed against this virtually restored database
    - Regular backup operations can be run against the virtual database, creating a physical .bak file that can be used later
    - No observed degradation in performance, on the original database or on the restored virtual database
    - Additional T-SQL queries can be let off on the virtual database

    Well, this summarizes my quick review. As I was saying, I am very impressed with the product and plan to explore it more. There are many features I noticed in this tool which I think can be very useful if properly understood. I took a few screenshots using my demo database afterwards; let us see what other things this tool can do besides the mentioned activities. I am surprised by its performance, so I want to know how exactly this feature works, specifically why it does not create any additional files and yet still allows updates on the virtually restored database. I guess I will have to send an e-mail to the developers at Idera and try to figure this out from them. I think this tool is very useful, and it delivers a level of performance well beyond what I expected. Soon, I will write a review of additional uses of SQL virtual database. If you are using SQL virtual database in your production environment, I am eager to learn more about it and your experience while using it.

    The 'Virtual' Part of virtual database

    When I set out to test this software, I thought virtual database had something to do with Hyper-V or virtualization. In fact, the virtual database is a kind of database which shows up in your SQL Server Management Studio without actually being restored or even created: this tool creates a database in SSMS from the backup of that same database. The backup, however, works virtually the same way as the original database.

    Potential Usage of virtual database

    As soon as I described this tool to my teammate, his very first reaction was, "hey, if we have this, then there is no need for log shipping." I find his comment very interesting, as log shipping is something where logs are moved to another server; in fact, there are no updates on the database from the log, so I would rather compare it with snapshot replication: whatever a snapshot-replicated database can be used for can be similarly configured with virtual database. I totally believe that we can use it for reporting purposes. In fact, after this database was configured, I think the uses of this tool are unlimited. I will have to spend some more time studying it and will get back to you.

    Screenshots from the original article (captions only; click on the images there to see larger versions):

        virtual database Console
        Harddrive Space before virtual database Setup
        Attach Full Backup Screen
        Backup on Harddrive
        Attach Full Backup Screen with Settings
        virtual database Setup – less than 60 sec
        virtual database Setup – Online
        Harddrive Space after virtual database Setup
        Point in Time Recovery Option – Timeline View
        virtual database Summary
        No Performance Difference between Regular DB vs Virtual DB

    Please note that all SQL Server MVPs get a free license of this software.

    Reference: Pinal Dave (http://blog.SQLAuthority.com), Idera (virtual database)

  • mysql not starting

    - by Eiriks
    I have a server running on rackspace.com. It has been running for about a year (collecting data for a project) with no problems. Now it seems MySQL froze: I could not connect through the command line over SSH, through a remote app (Sequel Pro), or via the web (pages using the db just froze). I got a bit eager to fix this quickly and rebooted the virtual server, which runs Ubuntu 10.10. It is a small virtual LAMP server (10 GB storage, of which I'm using only 1; 256 MB RAM, which has not been a problem).

    Now, after the reboot, I cannot get MySQL to start again.

        $ service mysql status
        mysql stop/waiting

    I believe this just means MySQL is not running. How do I get it running again?

        $ service mysql start
        start: Job failed to start

    No. Just typing 'mysql' gives:

        $ mysql
        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111)

    There is a .sock file in this folder; 'ls -l' gives:

        srwxrwxrwx 1 mysql mysql 0 2012-12-01 17:20 mysqld.sock

    From googling this for a while now, I see that many talk about the log file and my.cnf.

    Logs: Not sure which ones I should look at. This log file is empty: /var/log/mysql/error.log; so are /var/log/mysql.err and /var/log/mysql.log.

    my.cnf is located in /etc/mysql and looks like this. I can't see anything clearly wrong with it either.

        #
        # The MySQL database server configuration file.
        #
        # You can copy this to one of:
        # - "/etc/mysql/my.cnf" to set global options,
        # - "~/.my.cnf" to set user-specific options.
        #
        # One can use all long options that the program supports.
        # Run program with --help to get a list of available options and with
        # --print-defaults to see which it would actually understand and use.
        #
        # For explanations see
        # http://dev.mysql.com/doc/mysql/en/server-system-variables.html

        # This will be passed to all mysql clients
        # It has been reported that passwords should be enclosed with ticks/quotes
        # escpecially if they contain "#" chars...
        # Remember to edit /etc/mysql/debian.cnf when changing the socket location.
        [client]
        port = 3306
        socket = /var/run/mysqld/mysqld.sock

        # Here is entries for some specific programs
        # The following values assume you have at least 32M ram

        # This was formally known as [safe_mysqld]. Both versions are currently parsed.
        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice = 0

        [mysqld]
        #
        # * Basic Settings
        #
        # * IMPORTANT
        # If you make changes to these settings and your system uses apparmor, you may
        # also need to also adjust /etc/apparmor.d/usr.sbin.mysqld.
        #
        user = mysql
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        skip-external-locking
        #
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        bind-address = 127.0.0.1
        #
        # * Fine Tuning
        #
        key_buffer = 16M
        max_allowed_packet = 16M
        thread_stack = 192K
        thread_cache_size = 8
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover = BACKUP
        #max_connections = 100
        #table_cache = 64
        #thread_concurrency = 10
        #
        # * Query Cache Configuration
        #
        query_cache_limit = 1M
        query_cache_size = 16M
        #
        # * Logging and Replication
        #
        # Both location gets rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        # As of 5.1 you can enable the log at runtime!
        #general_log_file = /var/log/mysql/mysql.log
        #general_log = 1

        log_error = /var/log/mysql/error.log

        # Here you can see queries with especially long duration
        #log_slow_queries = /var/log/mysql/mysql-slow.log
        #long_query_time = 2
        #log-queries-not-using-indexes
        #
        # The following can be used as easy to replay backup logs or for replication.
        # note: if you are setting up a replication slave, see README.Debian about
        # other settings you may need to change.
        #server-id = 1
        #log_bin = /var/log/mysql/mysql-bin.log
        expire_logs_days = 10
        max_binlog_size = 100M
        #binlog_do_db = include_database_name
        #binlog_ignore_db = include_database_name
        #
        # * InnoDB
        #
        # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
        # Read the manual for more InnoDB related options. There are many!
        #
        # * Security Features
        #
        # Read the manual, too, if you want chroot!
        # chroot = /var/lib/mysql/
        #
        # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
        #
        # ssl-ca=/etc/mysql/cacert.pem
        # ssl-cert=/etc/mysql/server-cert.pem
        # ssl-key=/etc/mysql/server-key.pem

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completition

        [isamchk]
        key_buffer = 16M

        #
        # * IMPORTANT: Additional settings that can override those from this file!
        # The files must end with '.cnf', otherwise they'll be ignored.
        #
        !includedir /etc/mysql/conf.d/

    I need the data in the database (so I'd like to avoid reinstalling), and I need it back up and running again. All hints, tips and solutions are welcomed and appreciated.
