Search Results

Search found 4640 results on 186 pages for 'unique'.

Page 143 of 186

  • How can I remove duplicate nodes in XQuery?

    - by Brabster
    I have an XML document I generate on the fly, and I need a function to eliminate any duplicate nodes from it. My function looks like:

        declare function local:start2() {
          let $data := local:scan_books()
          return <books>{$data}</books>
        };

    Sample output is:

        <books>
          <book>
            <title>XML in 24 hours</title>
            <author>Some Guy</author>
          </book>
          <book>
            <title>XML in 24 hours</title>
            <author>Some Guy</author>
          </book>
        </books>

    I want just one entry in my books root tag, and there are other tags in there too, such as pamphlet, that need their duplicates removed. Any ideas? Updated following comments: by unique nodes, I mean removing multiple occurrences of nodes that have exactly the same content and structure.

    Read the article

  • Simple, fast SQL queries for flat files.

    - by plinehan
    Does anyone know of any tools to provide simple, fast queries of flat files using a SQL-like declarative query language? I'd rather not pay the overhead of loading the file into a DB, since the input data is typically thrown out almost immediately after the query is run. Consider the data file "animals.txt":

        dog 15
        cat 20
        dog 10
        cat 30
        dog 5
        cat 40

    Suppose I want to extract the highest value for each unique animal. I would like to write something like:

        cat animals.txt | foo "select $1, max(convert($2 using decimal)) group by $1"

    I can get nearly the same result using sort:

        cat animals.txt | sort -t " " -k1,1 -k2,2nr

    And I can always drop into awk from there, but this all feels a bit awkward (couldn't resist) when a SQL-like language would seem to solve the problem so cleanly. I've considered writing a wrapper for SQLite that would automatically create a table based on the input data, and I've looked into using Hive in single-processor mode, but I can't help but feel this problem has been solved before. Am I missing something? Is this functionality already implemented by another standard tool? Halp!
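    For what it's worth, the SQLite-wrapper idea from the question can be sketched in a few lines of Python using an in-memory database, so nothing persistent is created. The table/column names and the whitespace-delimited two-column format are assumptions about the input:

        # Minimal sketch: load the flat file into an in-memory SQLite table and
        # run the aggregate query. Column names and the two-column whitespace
        # format are assumptions about the input file.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE animals (animal TEXT, value INTEGER)")

        with open("animals.txt") as f:
            rows = (line.split() for line in f if line.strip())
            conn.executemany("INSERT INTO animals VALUES (?, ?)", rows)

        for animal, max_value in conn.execute(
                "SELECT animal, MAX(value) FROM animals GROUP BY animal"):
            print "%s %s" % (animal, max_value)

        conn.close()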

    Read the article

  • Using the groupby method in Python, example included

    - by randombits
    Trying to work with groupby so that I can group together files that were created on the same day. When I say same day in this case, I mean the dd part in mm/dd/yyyy. So if a file was created on March 1 and April 1, they should be grouped together because the "1" matches. Here's the code I have so far:

        #!/usr/bin/python
        import os
        import datetime
        from itertools import groupby

        def created_ymd(fn):
            ts = os.stat(fn).st_ctime
            dt = datetime.date.fromtimestamp(ts)
            return dt.year, dt.month, dt.day

        def get_files():
            files = []
            for f in os.listdir(os.getcwd()):
                if not os.path.isfile(f):
                    continue
                y, m, d = created_ymd(f)
                files.append((f, d))
            return files

        files = get_files()
        for key, group in groupby(files, lambda x: x[1]):
            for file in group:
                print "file: %s, date: %s" % (file[0], key)
            print " "

    The problem is, I get lots of files that get grouped together based on the day. But then I'll see multiple groups with the same day. Meaning I might have 4 files grouped that were created on the 17th. Later on I'll see another unique set of 2 files that are also created on the 17th. Where am I going wrong?
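    One thing worth noting here: itertools.groupby only groups consecutive items that share a key, so the list has to be sorted by that same key first. A minimal sketch with made-up (name, day) pairs:

        # Minimal sketch: sort by the grouping key before calling groupby, since
        # groupby only merges *consecutive* items. The file names and days below
        # are made up for illustration.
        from itertools import groupby

        files = [("a.txt", 17), ("b.txt", 3), ("c.txt", 17)]

        day = lambda pair: pair[1]
        for key, group in groupby(sorted(files, key=day), key=day):
            for name, _ in group:
                print "file: %s, date: %s" % (name, key)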

    Read the article

  • When does it make sense to use a map?

    - by kiwicptn
    I am trying to round up cases when it makes sense to use a map (a set of key-value entries). So far I have five categories (see below). Assuming more exist, what are they? Please limit each answer to one unique category, put up an example, and vote up the fascinating ones.

        Property values (like a bean):   age -> 30, sex -> male, loc -> calgary
        Histograms:                      peter -> 1, john -> 7, paul -> 5
        Presence, with O(1) performance: peter -> 1, john -> 1, paul -> 1
        Functions:                       peter -> treatPeter(), john -> dealWithJohn(), paul -> managePaul()
        Conversion:                      peter -> pierre, john -> jean, paul -> paul
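    For illustration, the Functions category above is essentially a dispatch table; a tiny Python sketch with made-up handler names:

        # A small illustration of the "Functions" category: a dispatch table
        # mapping keys to callables. The handler names are made up.
        def treat_peter():
            return "treated Peter"

        def deal_with_john():
            return "dealt with John"

        handlers = {"peter": treat_peter, "john": deal_with_john}

        print handlers["peter"]()   # -> treated Peter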

    Read the article

  • jQuery ajax form submit - multiple post/get requests caused when validation fails

    - by kenny99
    Hi, I have a problem where, if a form submission (set up to submit via AJAX) fails validation, the next time the form is submitted it doubles the number of POST requests - which is definitely not what I want to happen. I'm using the jQuery ValidationEngine plugin to submit forms and bind validation messages to my fields. My code is below. I think my problem may lie in the fact that I'm retrieving the form action attribute via $('.form_container form').attr('action') - I had to do it this way because $(this).attr was picking up the very first loaded form's action attribute; when I was loading new forms into the page it wouldn't pick up the new form's action correctly until I removed the $(this) selector and referenced it using the class. Can anyone see what I might be doing wrong here? (Note I have 2 similar form handlers which are loaded on domready; that could also be an issue.)

        $('input#eligibility').live("click", function(){
            $(".form_container form").validationEngine({
                ajaxSubmit: true,
                ajaxSubmitFile: $('.form_container form').attr('action'),
                success : function() {
                    var url = $('input.next').attr('rel');
                    ajaxFormStage(url);
                },
                failure : function() {
                    // unique stuff for this form
                }
            });
        });

        // generic form handler - all form submissions not flagged with the #eligibility id
        $('input.next:not(#eligibility)').live("click", function(){
            $(".form_container form").validationEngine({
                ajaxSubmit: true,
                ajaxSubmitFile: $('.form_container form').attr('action'),
                success : function() {
                    var url = $('input.next').attr('rel');
                    ajaxFormStage(url);
                },
                failure : function() {
                }
            });
        });

    Read the article

  • Duplicate entries on mysql on insert using doctrine

    - by Nikos Galis
    Hi all! I am facing a very weird problem with MySQL and Doctrine [with help of CodeIgniter]. I am trying to make a simple migration script that takes all records from one table and, after a little processing, saves them to another. However, on my laptop [running Windows and WAMP] I get double the number of original table records copied to the destination table. On my colleagues' laptops, everything works fine! We are all using MySQL 5.0.86 [plus Windows plus WAMP]. Here is the code:

        function buggy_function(){
            $this->db(); // get db connection
            $q = Doctrine_Query::create()->from('Oldtable r');
            $oldrecords = $q->fetchArray();
            $count = 0;
            foreach ($oldrecords as $oldrecord){
                $newrecord = new NewTableClass();
                $newrecord->password = md5($oldrecord['password']);
                $newrecord->save();
                echo $newrecord->id . ' Id -> saved.';
            }
        }

    Simple as that! I have 39 records in the old table and I am getting 78 records in the new table, which are exactly the same records except for the unique primary key. It seems as if the script runs twice. But the output of the script is the following:

        1 Id -> saved.
        2 Id -> saved.
        ...
        39 Id -> saved.

    Do you have any idea why this is happening? Any known bug for MySQL? Thank you in advance!

    Read the article

  • Non-blocking TCP buffer issues.

    - by Poni
    Hi! I think I have a problem. I have two TCP apps connected to each other which use Winsock I/O completion ports to send/receive data (non-blocking sockets). Everything works just fine until there's a data transfer burst: the sender starts sending incorrect/malformed data. I allocate the buffers I'm sending on the stack, and if I understand correctly, that's the wrong thing to do, because these buffers should remain as I sent them until I get the "write complete" notification from IOCP. Take this for example:

        void some_function()
        {
            char cBuff[1024];
            // filling cBuff with some data
            WSASend(...); // sending cBuff, non-blocking mode
            // filling cBuff with other data
            WSASend(...); // again, sending cBuff
            // ..... and so forth!
        }

    If I understand correctly, each of these WSASend() calls should have its own unique buffer, and that buffer can be reused only when the send completes. Correct? Now, what strategies can I implement in order to maintain a big sack of such buffers? How should I handle them, how can I avoid a performance penalty, etc.? And, if I am to use such buffers, that means I should copy the data to be sent from the source buffer to the temporary one; thus, I'd set SO_SNDBUF on each socket to zero, so the system will not re-copy what I already copied. Are you with me? Please let me know if I wasn't clear.

    Read the article

  • looking for a license key algorithm.

    - by giulio
    There are a lot of questions relating to license keys on Stack Overflow, but they don't answer this question. Can anyone provide a simple license key algorithm that is technology independent and doesn't require a diploma in mathematics to understand? The license key algorithm is similar to public key encryption. I just need something simple that can be implemented on any platform (.NET/Java) and uses simple data like characters. Pseudocode is perfect. So if a person presents a string, a complementary string can be generated that is the authorisation code. Below is a common scenario it would be used for. Customer downloads s/w which generates a unique key upon initial startup/installation. S/w runs during trial period. At end of trial period an authorisation key is required. Customer goes to designated web site, enters their code and gets the authorisation code to enable s/w, after paying :) Don't be afraid to describe your answer as though you're talking to a 5-year-old, as I am not a mathematician.
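    Not from the question, but as one concrete illustration of the scenario: a simple scheme is to derive the authorisation code from the installation key with a keyed hash (HMAC), so only the party holding the secret can produce valid codes. A rough Python sketch, with arbitrary choices for the secret and code length:

        # Rough sketch of one possible scheme: derive the activation code from the
        # customer's installation key with a keyed hash (HMAC). The secret, the
        # digest truncation and the sample key are arbitrary illustrative choices,
        # not a vetted licensing scheme. Unlike a public-key scheme, the shipped
        # software must embed the same secret to verify codes, which is the main
        # weakness of this approach.
        import hmac
        import hashlib

        SECRET = "vendor-private-secret"

        def activation_code(installation_key):
            digest = hmac.new(SECRET, installation_key, hashlib.sha256).hexdigest()
            return digest[:16].upper()   # shortened so a customer can type it

        print activation_code("ABCD-1234-EF56")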

    Read the article

  • Speed of CSS

    - by Ólafur Waage
    This is just a question to help me understand CSS rendering better. Let's say we have a million lines of this:

        <div class="first">
          <div class="second">
            <span class="third">Hello World</span>
          </div>
        </div>

    Which would be the fastest way to change the color of Hello World to red?

        .third { color: red; }
        div.third { color: red; }
        div.second div.third { color: red; }
        div.first div.second div.third { color: red; }

    Also, what if there was a tag in the middle that had a unique id of "foo" - which of the CSS methods above would be the fastest then? I know why these methods are used, etc.; I'm just trying to get a better grasp of how browsers render this, and I have no idea how to make a test that times it. UPDATE: Nice answer, Gumbo. From the looks of it, on a regular site it would be quicker to write the full definition of a tag, since it finds the parents and narrows the search for every parent found. That could be bad in the sense that you'd end up with a pretty large CSS file, though.

    Read the article

  • Filter entities that match all pairs

    - by Jon
    I have an entity (let's say Person) with a set of arbitrary attributes with a known subset of values. I need to search for all of these entities that match all my filter conditions. For example, my table structures look like this:

        Person:
        id | name
        1  | John Doe
        2  | Jane Roe
        3  | John Smith

        Attribute:
        id | attr_name
        1  | Sex
        2  | Eye Color

        ValidValue:
        id | attr_id | value_name
        1  | 1       | Male
        2  | 1       | Female
        3  | 2       | Blue
        4  | 2       | Green
        5  | 2       | Brown

        PersonAttributes:
        id | person_id | attr_id | value_id
        1  | 1         | 1       | 1
        2  | 1         | 2       | 3
        3  | 2         | 1       | 2
        4  | 2         | 2       | 4
        5  | 3         | 1       | 1
        6  | 3         | 2       | 4

    In JPA, I have entities built for all of these tables. What I'd like to do is perform a search for all entities matching a given set of attribute-value pairs. For instance, I'd like to be able to find all males (John Doe and John Smith), all people with green eyes (Jane Roe or John Smith), or all females with green eyes (Jane Roe). I see that I can already take advantage of the fact that I only really need to match on value_id, since that's already unique and tied to the attr_id. But where can I go from there?

    Read the article

  • Force Oracle error on fetch

    - by Dan
    I am trying to debug a strange behavior in my application. In order to do so, I need to reproduce a scenario where an SQL SELECT query will throw an error, but only while actually fetching from the cursor, not while executing the query itself. Can this be done? Any error will do, but ORA-01722: invalid number seems like the obvious one to try. I created a table with the following:

        KEYCOL   INTEGER PRIMARY KEY
        OTHERCOL VARCHAR2(100)

    I then created a few hundred rows with unique values for the primary key and the value 1 for OTHERCOL. I then ran a SELECT * query, picked a row somewhere in the middle, and updated it to the string abcd. I ran the query

        SELECT KEYCOL, TO_NUMBER(OTHERCOL) FROM SOMETABLE

    hoping to get some rows of good data and then an error later. But I keep getting ORA-01722: invalid number on the execute step itself. I have gotten this behavior programmatically using ADO (with a server-side cursor) and JDBC, as well as from PL/SQL Developer. How can I get the result I'm looking for? Thanks. Edit - meant to add: when using ADO, I am only calling Command.Execute; I am not creating or opening a Recordset.

    Read the article

  • django: grouping in an order_by query?

    - by AP257
    Hi all, I want to allocate rankings to users, based on a points field. Easy enough, you'd think, with an order_by query. But how do I deal with the situation where two users have the same number of points and need to share the same ranking? Should I use annotate to find users with the same number of points? My current code, and a pseudocode description of what I'd like to do, are below.

        top_users = User.objects.filter(problem_user=False).order_by('-points_total')

        # Wrong - in pseudocode, this should be
        # Get the highest points_total, find all the users with that points_total,
        # if there is more than one user, set status to 'Joint first prize',
        # otherwise set status to 'First prize'
        top_users[0].status = "First prize"
        if (top_users[1]):
            top_users[1].status = "Second prize"
        if (top_users[2]):
            top_users[2].status = "Third prize"
        if (top_users[3]):
            top_users[3:].status = "Highly commended"

    The code above doesn't deal with the situation where two users have the same number of points and need to share second prize. I guess I need to create a query that looks for unique values of points_total and does some kind of nested ranking? It also doesn't cope with the fact that sometimes there are fewer than 4 users - does anyone know how I can do (in pseudocode) 'if top_users[1] is not null...' in Python?
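    For what it's worth, one way to handle the ties (a sketch, not tested against a real queryset): walk the ordered users grouped by points_total with itertools.groupby and hand out one prize label per distinct total. The field name follows the question; the label list and the "Highly commended" fallback are illustrative choices.

        # Sketch: group the ordered users by points_total and assign one label per
        # group; groups past the first three all get "Highly commended".
        from itertools import groupby

        def assign_prizes(top_users):
            labels = ["First prize", "Second prize", "Third prize"]
            groups = groupby(top_users, key=lambda u: u.points_total)
            for rank, (points, group) in enumerate(groups):
                users = list(group)   # consume the group before advancing groupby
                if rank < len(labels):
                    label = labels[rank]
                    if len(users) > 1:
                        label = "Joint " + label.lower()   # e.g. "Joint first prize"
                else:
                    label = "Highly commended"
                for user in users:
                    user.status = label

    Calling assign_prizes(top_users) on the queryset from the question would also cope naturally with fewer than four users, since it simply labels whatever groups exist.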

    Read the article

  • SQL INSTR() using CSV. Need exact match rather than part

    - by Alastair Pitts
    This is a follow-up issue relating to the answer for http://stackoverflow.com/questions/2445029/sql-placeholder-in-where-in-issue-inserted-strings-fail Quick background: we have a SQL query that uses a placeholder value to accept a string, which represents a unique tag/id. Usually this is only a single tag, but we needed the ability to use a CSV string for multiple tags, returning a combined result. In the answer we received from the vendor, they suggested the use of the INSTR function:

        select * from pitotal
        where tag IN (SELECT tag from pipoint WHERE INSTR(?, tag) <> 0)
        and time between 'y' and 't'

    This works perfectly well 99% of the time; the issue is when one tag is also a substring of another entry in the CSV string. E.g. the placeholder value is 'northdom,southdom,eastdom,westdom' and possible tags include north or northdom. What happens, as north is a substring of northdom, is that both tags are returned instead of just northdom, which is actually what we want. I'm not strong on SQL, so I couldn't work out how to make the match exact or how to split the CSV string, so help would be appreciated. Is there a way to split the CSV string or make it look for an exact match?

    Read the article

  • Programming Technique: How to create a simple card game

    - by Shyam
    Hi, as I am learning the Ruby language, I am getting closer to actual programming, so I was thinking of creating a simple card game. My question isn't Ruby oriented, but I do want to learn how to solve this problem with a genuine OOP approach. In my card game I want to have four players, using a standard deck with 52 cards, no jokers/wildcards. In the game I won't use the Ace as a dual card; it is always the highest card. So, the programming problems I wonder about are the following:

        1. How can I sort/randomize the deck of cards? There are four suits, each having 13 values. In the end every card must be unique, so picking random values could generate duplicates.
        2. How can I implement a simple AI? As there are tons of card games, someone will have figured this part out already, so references would be great.

    I am a truly Ruby nuby, and my goal here is to learn to solve problems, so pseudocode would be great, just to understand how to solve the problem programmatically. I apologize for my grammar and writing style if it's unclear, for it is not my native language. Also, pointers to sites where such challenges are explained would be a great resource! Thank you for your comments, answers and feedback!
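    For the first question, the usual answer is to build all 52 suit/rank combinations once and shuffle the whole deck, so duplicates can't occur by construction. A small sketch in Python rather than Ruby, purely for illustration:

        # Sketch: enumerate every rank/suit pair once (52 unique cards), shuffle
        # the deck in place (random.shuffle is a Fisher-Yates shuffle), then deal
        # 13 cards to each of the four players.
        import random

        SUITS = ["clubs", "diamonds", "hearts", "spades"]
        RANKS = ["2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K", "A"]

        deck = [(rank, suit) for suit in SUITS for rank in RANKS]
        random.shuffle(deck)

        hands = [deck[i::4] for i in range(4)]   # four hands of 13 cards each
        print hands[0]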

    Read the article

  • CharField values disappearing after save (readonly field)

    - by jamida
    I'm implementing a simple "grade book" application where the teacher should be able to update the grades without being allowed to change the students' names (at least not on the update-grade page). To do this I'm using one of the read-only tricks, the simplest one. The problem is that after the submit, the view is re-displayed with blank values for the students. I'd like the students' names to reappear. Below is the simplest example that exhibits this problem. (This is poor DB design, I know; I've extracted just the relevant parts of the code to showcase the problem. In the real example, student is in its own table, but the problem still exists there.)

    models.py:

        class Grade1(models.Model):
            student = models.CharField(max_length=50, unique=True)
            finalGrade = models.CharField(max_length=3)

        class Grade1OForm(ModelForm):
            student = forms.CharField(max_length=50, required=False)

            def __init__(self, *args, **kwargs):
                super(Grade1OForm, self).__init__(*args, **kwargs)
                instance = getattr(self, 'instance', None)
                if instance and instance.id:
                    self.fields['student'].widget.attrs['readonly'] = True
                    self.fields['student'].widget.attrs['disabled'] = 'disabled'

            def clean_student(self):
                instance = getattr(self, 'instance', None)
                if instance:
                    return instance.student
                else:
                    return self.cleaned_data.get('student', None)

            class Meta:
                model = Grade1

    views.py:

        from django.forms.models import modelformset_factory

        def modifyAllGrades1(request):
            gradeFormSetFactory = modelformset_factory(Grade1, form=Grade1OForm, extra=0)
            studentQueryset = Grade1.objects.all()
            if request.method == 'POST':
                myGradeFormSet = gradeFormSetFactory(request.POST, queryset=studentQueryset)
                if myGradeFormSet.is_valid():
                    myGradeFormSet.save()
                    info = "successfully modified"
            else:
                myGradeFormSet = gradeFormSetFactory(queryset=studentQueryset)
            return render_to_response('grades/modifyAllGrades.html', locals())

    template:

        <p>{{ info }}</p>
        <form method="POST" action="">
          <table>
            {{ myGradeFormSet.management_form }}
            {% for myform in myGradeFormSet.forms %}
              {# myform.as_table #}
              <tr>
                {% for field in myform %}
                  <td> {{ field }} {{ field.errors }} </td>
                {% endfor %}
              </tr>
            {% endfor %}
          </table>
          <input type="submit" value="Submit">
        </form>
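    One possible explanation and fix, sketched below: disabled inputs are never submitted by the browser, so the POST-bound formset re-renders the student field from an empty submitted value. Rebuilding the formset from the database after a successful save (the only change to the view from the question) should make the names reappear.

        # Sketch of the view with one change: after saving, re-query and rebuild
        # the formset so the read-only student names are rendered from the DB
        # rather than from the (empty) submitted values.
        def modifyAllGrades1(request):
            gradeFormSetFactory = modelformset_factory(Grade1, form=Grade1OForm, extra=0)
            studentQueryset = Grade1.objects.all()
            if request.method == 'POST':
                myGradeFormSet = gradeFormSetFactory(request.POST, queryset=studentQueryset)
                if myGradeFormSet.is_valid():
                    myGradeFormSet.save()
                    info = "successfully modified"
                    myGradeFormSet = gradeFormSetFactory(queryset=Grade1.objects.all())
            else:
                myGradeFormSet = gradeFormSetFactory(queryset=studentQueryset)
            return render_to_response('grades/modifyAllGrades.html', locals())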

    Read the article

  • R + Bioconductor: combining probesets in an ExpressionSet

    - by Mike Dewar
    Hi, first off, this may be the wrong forum for this question, as it's pretty darn R+Bioconductor specific. Here's what I have:

        library('GEOquery')
        GDS = getGEO('GDS785')
        cd4T = GDS2eSet(GDS)
        cd4T <- cd4T[!fData(cd4T)$symbol == "",]

    Now cd4T is an ExpressionSet object which wraps a big matrix with 19794 rows (probesets) and 15 columns (samples). The final line gets rid of all probesets that do not have corresponding gene symbols. Now the trouble is that most genes in this set are assigned to more than one probeset. You can see this by doing:

        gene_symbols = factor(fData(cd4T)$Gene.symbol)
        length(gene_symbols)-length(levels(gene_symbols))
        [1] 6897

    So only 6897 of my 19794 probesets have unique probeset-gene mappings. I'd like to somehow combine the expression levels of each probeset associated with each gene. I don't care much about the actual probe id for each probe. I'd like very much to end up with an ExpressionSet containing the merged information, as all of my downstream analysis is designed to work with this class. I think I can write some code that will do this by hand and make a new expression set from scratch. However, I'm assuming this can't be a new problem and that code exists to do it, using a statistically sound method to combine the gene expression levels. I'm guessing there's a proper name for this also, but my googles aren't showing up much of use. Can anyone help?

    Read the article

  • Hibernate one-to-one mapping

    - by Andrey Yaskulsky
    I have a one-to-one Hibernate mapping between class Student and class Points:

        @Entity
        @Table(name = "Users")
        public class Student implements IUser {
            @Id
            @Column(name = "id")
            private int id;

            @Column(name = "name")
            private String name;

            @Column(name = "password")
            private String password;

            @OneToOne(fetch = FetchType.EAGER, mappedBy = "student")
            private Points points;

            @Column(name = "type")
            private int type = getType();

            // gets and sets...
        }

        @Entity
        @Table(name = "Points")
        public class Points {
            @GenericGenerator(name = "generator", strategy = "foreign",
                parameters = @Parameter(name = "property", value = "student"))
            @Id
            @GeneratedValue(generator = "generator")
            @Column(name = "id", unique = true, nullable = false)
            private int Id;

            @OneToOne
            @PrimaryKeyJoinColumn
            private Student student;

            // gets and sets
        }

    And then I do:

        Student student = new Student();
        student.setId(1);
        student.setName("Andrew");
        student.setPassword("Password");

        Points points = new Points();
        points.setPoints(0.99);
        student.setPoints(points);
        points.setStudent(student);

        Session session = HibernateUtil.getSessionFactory().getCurrentSession();
        session.beginTransaction();
        session.save(student);
        session.getTransaction().commit();

    Hibernate saves the student in the table but does not save the corresponding points. Is this OK? Should I save the points separately?

    Read the article

  • Beginner SQL question: querying gold and silver tag badges in Stack Exchange Data Explorer

    - by polygenelubricants
    I'm using the Stack Exchange Data Explorer to learn SQL, but I think the fundamentals of the question are applicable to other databases. I'm trying to query the Badges table, which according to Stexdex (that's what I'm going to call it from now on) has the following schema:

        Badges
          Id
          UserId
          Name
          Date

    This works well for badges like [Epic] and [Legendary], which have unique names, but the silver and gold tag-specific badges seem to be mixed in together by having the same exact name. Here's an example query I wrote for the [mysql] tag:

        SELECT UserId as [User Link], Date
        FROM Badges
        WHERE Name = 'mysql'
        ORDER BY Date ASC

    The (slightly annotated) output, as seen on Stexdex, is:

        User Link         Date
        ----------------  -------------------   // all for silver except where noted
        Bill Karwin       2009-02-20 11:00:25
        Quassnoi          2009-06-01 10:00:16
        Greg              2009-10-22 10:00:25
        Quassnoi          2009-10-31 10:00:24   // for gold
        Bill Karwin       2009-11-23 11:00:30   // for gold
        cletus            2010-01-01 11:00:23
        OMG Ponies        2010-01-03 11:00:48
        Pascal MARTIN     2010-02-17 11:00:29
        Mark Byers        2010-04-07 10:00:35
        Daniel Vassallo   2010-05-14 10:00:38

    This is consistent with the current list of silver and gold earners at the moment of this writing, but to speak in more timeless terms, as of the end of May 2010 only 2 users have earned the gold [mysql] tag: Quassnoi and Bill Karwin, as evidenced in the above result by their names being the only ones that appear twice. So this is the way I understand it: the first time an Id appears (in chronological order) is for the silver badge; the second time is for the gold. Now, the above result mixes the silver and gold entries together. My questions are:

        1. Is this a typical design, or are there much friendlier schema/normalization/whatever you call it?
        2. In the current design, how would you query the silver and gold badges separately? GROUP BY Id and picking the min/max or first/second by the Date somehow?
        3. How can you write a query that lists all the silver badges first and then all the gold badges? Imagine also that the "real" query may be more complicated, i.e. not just listing by date. How would you write it so that it doesn't have too much repetition between the silver and gold subqueries? Is it perhaps more typical to do two totally separate queries instead?
        4. What is this idiom called? A row "partitioning" query to put them into "buckets" or something?

    Read the article

  • RESTfully Nesting Resource Routes with Single Identifiers

    - by Craig Walker
    In my Rails app I have a fairly standard has_many relationship between two entities. A Foo has zero or more Bars; a Bar belongs to exactly one Foo. Both Foo and Bar are identified by a single integer ID value. These values are unique across all of their respective instances. Bar is existence-dependent on Foo: it makes no sense to have a Bar without a Foo. There are two ways to RESTfully reference instances of these classes. Given a Foo.id of "100" and a Bar.id of "200":

        1. Reference each Foo and Bar through their own "top-level" URL routes, like so:

            /foo/100
            /bar/200

        2. Reference Bar as a nested resource through its instance of Foo:

            /foo/100
            /foo/100/bar/200

    I like the nested routes in #2, as they more closely represent the actual dependency relationship between the entities. However, it does seem to involve a lot of extra work for very little gain. Assuming that I know about a particular Bar, I don't need to be told about a particular Foo; I can derive that from the Bar itself. In fact, I probably should be validating the routed Foo everywhere I go (so that you couldn't do /foo/150/bar/200, assuming Bar 200 is not assigned to Foo 150). Ultimately, I don't see what this brings me. So, are there any other arguments for or against these two routing schemes?

    Read the article

  • Thumbnails fadein fadeout specific div fade issues

    - by Omikron
    I am using this code to hide and show a div based on which thumbnail you roll over:

        $(document).ready(function(){
            $('div.infodiv').hide();
            $(".website_thumbs a").hover(
                function(){
                    var name = $(this).attr("name");
                    $(".infodiv").stop();
                    $("."+name).fadeIn();
                },
                function(){
                    var name = $(this).attr("name");
                    $("."+name).fadeTo(7000,1).fadeOut();
                });
        });

    The script gets the name attribute from the thumbnail and displays the div with the corresponding class. Each div shares the .infodiv class but also has a class unique to each thumbnail. The functionality is basically where I want it, but when you scroll over the thumbnails fast, some of the divs get stuck in a kind of half faded-in state and stop working unless I roll over them once - then they slowly fade in and are usable again. I am a bit new to jQuery and would appreciate any help.

    Read the article

  • How to query collections in NHibernate

    - by user305813
    Hi, I have a class:

        public class User
        {
            public virtual int Id { get; set; }
            public virtual string Name { get; set; }
            public virtual IDictionary<string, string> Attributes { get; set; }
        }

    and a mapping file:

        <class name="User" table="Users">
          <id name="Id">
            <generator class="hilo"/>
          </id>
          <property name="Name"/>
          <map name="Attributes" table="UserAttributes">
            <key column="UserId"/>
            <index column="AttributeName" type="System.String"/>
            <element column="Attributevalue" type="System.String"/>
          </map>
        </class>

    So now I can add many attributes and values to a User. How can I query those attributes so I can, for example, get all the users where the attribute name is "Age" and the attribute value is "20"? I don't want to do this in a foreach, because I may have millions of users, each having their own unique attributes. Please help.

    Read the article

  • Why is the 'Named Domain Class' tool missing in the DSL Designer category in the toolbox?

    - by Gishu
    I have the Domain-Specific Development with VS DSL Tools book by Cook, Jones, et al. The book and various tutorials online mention a NamedDomainClass tool that should be present in the DSL Designer toolbox. I have installed VS 2010 Beta 2 on Windows XP; however, this tool is missing from the toolbox. I've created a project using the Minimal project template as mentioned in the book, and I have 12 tools showing up, including the Domain Class tool. I've searched online and apparently no one else has this problem. Can someone confirm that it's missing in VS 2010 Beta 2? If not, how can I get it to show up? Is there any way in which I can add a Domain Class instance and tweak it so that it becomes a Named Domain Class? The book mentions that there are some must-be-unique validation and serialization changes that are done by the NamedDomainClass tool. I've tried the 'Choose Items' context menu on the DSL Designer category, but these tools are apparently added dynamically and do not show up in the lists on the dialog that comes up.

    Read the article

  • Django Cannot set values on a ManyToManyField which specifies an intermediary model

    - by dana
    I am using an m2m and a through table, and when I was trying to save, my error was: "Cannot set values on a ManyToManyField which specifies an intermediary model". So I've modified my code so that when I save the form, it also inserts data into the 'through' table. But now I'm having another error. (I've marked the lines where I think I am wrong.) I have in models.py:

        class Classroom(models.Model):
            user = models.ForeignKey(User, related_name='classroom_creator')
            classname = models.CharField(max_length=140, unique=True)
            date = models.DateTimeField(auto_now=True)
            open_class = models.BooleanField(default=True)
            members = models.ManyToManyField(User, related_name="list of invited members", through='Membership')

        class Membership(models.Model):
            accept = models.BooleanField(User)
            date = models.DateTimeField(auto_now=True)
            classroom = models.ForeignKey(Classroom, related_name='classroom_membership')
            member = models.ForeignKey(User, related_name='user_membership')

    and in the view:

        def save_classroom(request):
            if request.method == 'POST':
                form = ClassroomForm(request.POST, request.FILES, user=request.user)
                classroom_instance = Classroom        # <-- marked in the original
                member_instance = Membership          # <-- marked in the original
                if form.is_valid():
                    new_obj = form.save(commit=False)
                    new_obj.user = request.user
                    r = Relations.objects.filter(initiated_by=request.user)
                    membership = Membership.objects.create(classroom=classroom_instance,   # <-- marked
                                                           member=member_instance,
                                                           date=datetime.datetime.now())
                    new_obj.save()
                    form.save_m2m()
                    return HttpResponseRedirect('/classroom/classroom_view/{{user}}/')
            else:
                form = ClassroomForm(user=request.user)
            return render_to_response('classroom/classroom_form.html', {
                'form': form,
            }, context_instance=RequestContext(request))

    but I don't seem to be initialising classroom_instance and member_instance correctly. My error is: Cannot assign "": "Membership.classroom" must be a "Classroom" instance. Thanks!
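    For what it's worth, that error usually means the model classes themselves (Classroom, Membership) are being passed where saved instances are expected. A sketch of the shape the create call likely needs, with request.user as the member purely as an illustrative assumption:

        # Sketch: pass saved model *instances*, not the model classes. The choice
        # of request.user as the member is an assumption for illustration; the
        # date field is filled automatically because of auto_now=True.
        if form.is_valid():
            new_obj = form.save(commit=False)
            new_obj.user = request.user
            new_obj.save()                    # save the Classroom first so it exists in the DB
            Membership.objects.create(
                classroom=new_obj,            # a Classroom instance, not the Classroom class
                member=request.user,          # a User instance
            )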

    Read the article

  • How do I implement repository pattern and unit of work when dealing with multiple data stores?

    - by Jason
    I have a unique situation where I am building a DDD-based system that needs to access both Active Directory and a SQL database as persistence. Initially this wasn't a problem, because our design was set up so that we had a unit of work that looked like this:

        public interface IUnitOfWork
        {
            void BeginTransaction();
            void Commit();
        }

    and our repositories looked like this:

        public interface IRepository<T>
        {
            T GetByID();
            void Save(T entity);
            void Delete(T entity);
        }

    In this setup, our load and save would handle the mapping between both data stores because we wrote it ourselves. The unit of work would handle transactions and would contain the LINQ to SQL data context that the repositories would use for persistence. The Active Directory part was handled by a domain service implemented in infrastructure and consumed by the repositories in each Save() method. Save() was responsible for interacting with the data context to do all the database operations. Now we are trying to adapt it to Entity Framework and take advantage of POCO. Ideally we would not need the Save() method, because the domain objects are being tracked by the object context; we would just need to add a Save() method on the unit of work to have the object context save the changes, and a way to register new objects with the context. The new proposed design looks more like this:

        public interface IUnitOfWork
        {
            void BeginTransaction();
            void Save();
            void Commit();
        }

        public interface IRepository<T>
        {
            T GetByID();
            void Add(T entity);
            void Delete(T entity);
        }

    This solves the data access problem with Entity Framework, but does not solve the problem with our Active Directory integration. Before, it was in the Save() method on the repository, but now it has no home; the unit of work knows nothing other than the Entity Framework data context. Where should this logic go? I argue this design only works if you have a single data store using Entity Framework. Any ideas how best to approach this issue? Where should I put this logic?

    Read the article

  • Fast JSON serialization (and comparison with Pickle) for cluster computing in Python?

    - by user248237
    I have a set of data points, each described by a dictionary. The processing of each data point is independent, and I submit each one as a separate job to a cluster. Each data point has a unique name, and my cluster submission wrapper simply calls a script that takes a data point's name and a file describing all the data points. That script then accesses the data point from the file and performs the computation. Since each job has to load the set of all points only to retrieve the point to be run, I wanted to optimize this step by serializing the file describing the set of points into an easily retrievable format. I tried using jsonpickle, using the following method, to serialize a dictionary describing all the data points to file:

        def json_serialize(obj, filename, use_jsonpickle=True):
            f = open(filename, 'w')
            if use_jsonpickle:
                import jsonpickle
                json_obj = jsonpickle.encode(obj)
                f.write(json_obj)
            else:
                simplejson.dump(obj, f, indent=1)
            f.close()

    The dictionary contains very simple objects (lists, strings, floats, etc.) and has a total of 54,000 keys. The JSON file is ~20 megabytes in size. It takes ~20 seconds to load this file into memory, which seems very slow to me. I switched to using pickle with the same exact object, and found that it generates a file that's about 7.8 megabytes in size and can be loaded in ~1-2 seconds. This is a significant improvement, but it still seems like loading of a small object (less than 100,000 entries) should be faster. Aside from that, pickle is not human readable, which was the big advantage of JSON for me. Is there a way to use JSON to get similar or better speedups? If not, do you have other ideas on structuring this? (Is the right solution to simply "slice" the file describing each event into a separate file and pass that on to the script that runs a data point in a cluster job? It seems like that could lead to a proliferation of files.) Thanks.
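    One direction worth measuring (an illustrative sketch, not a benchmark of the real data): encode the plain dict directly with simplejson instead of going through jsonpickle's object-graph encoding, and compare it against cPickle with a binary protocol (protocol 2), which is compact and fast to load in Python 2.

        # Timing sketch on a synthetic dictionary (54,000 keys of simple values,
        # standing in for the real data). Numbers will vary by machine; the point
        # is only to compare plain simplejson against binary cPickle directly.
        import time
        import cPickle
        import simplejson

        data = dict(("point%d" % i, {"name": "point%d" % i, "value": i * 0.5})
                    for i in xrange(54000))

        with open("points.json", "w") as f:
            simplejson.dump(data, f)
        with open("points.pickle", "wb") as f:
            cPickle.dump(data, f, 2)          # protocol 2 = compact binary pickle

        for label, path, loader in [("simplejson", "points.json", simplejson.load),
                                    ("cPickle", "points.pickle", cPickle.load)]:
            start = time.time()
            with open(path, "rb") as f:
                loader(f)
            print "%s load took %.2f s" % (label, time.time() - start)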

    Read the article
