Search Results

Search found 819 results on 33 pages for 'tagged corpus'.

Page 24/33

  • Using a "white list" for extracting terms for Text Mining

    - by [email protected]
    In Part 1 of my post on "Generating cluster names from a document clustering model" (part 1, part 2, part 3), I showed how to build a clustering model from text documents using Oracle Data Miner, which automates preparing data for text mining. In this process we specified a custom stoplist and lexer and relied on Oracle Text to identify important terms. However, there is an alternative approach, the white list, which uses a thesaurus object with the Oracle Text CTXRULE index to allow you to specify the important terms.

    INTRODUCTION
    A stoplist is used to exclude, i.e., black list, specific words in your documents from being indexed. For example, words like a, if, and, or, and but normally add no value when text mining. Other words can also be excluded if they do not help to differentiate documents, e.g., the word Oracle is ubiquitous in the Oracle product literature. One problem with stoplists is determining which words to specify. This usually requires inspecting the terms that are extracted, manually identifying which ones you don't want, and then re-indexing the documents to determine if you missed any. Since a corpus of documents could contain thousands of words, this could be a tedious exercise. Moreover, since every word is considered as an individual token, a term excluded in one context may be needed to help identify a term in another context. For example, in our Oracle product literature example, the words "Oracle Data Mining" taken individually are not particularly helpful. The term "Oracle" may be found in nearly all documents, as with the term "Data." The term "Mining" is more distinctive, but could also refer to the mining industry. If we exclude "Oracle" and "Data" by specifying them in the stoplist, we lose valuable information. But if we include them, they may introduce too much noise. Still, when you have a broad vocabulary or don't have a list of specific terms of interest, you rely on the text engine to identify important terms, often by computing the term frequency - inverse document frequency (tf-idf) metric. (This is effectively a weight associated with each term indicating its relative importance in a document within a collection of documents. We'll revisit this later.) The results using this technique are often quite valuable.

    As noted above, an alternative to the subtractive nature of the stoplist is to specify a white list, or a list of terms--perhaps multi-word--that we want to extract and use for data mining. The obvious downside to this approach is the need to specify the set of terms of interest. However, this may not be as daunting a task as it seems. For example, in a given domain (Oracle product literature), there is often a recognized glossary, or a list of keywords and phrases (Oracle product names, industry names, product categories, etc.). Being able to identify multi-word terms, e.g., "Oracle Data Mining" or "Customer Relationship Management", as a single token can greatly increase the quality of the data mining results. The remainder of this post and subsequent posts will focus on how to produce a dataset that contains white list terms, suitable for mining.

    CREATING A WHITE LIST
    We'll leverage the thesaurus capability of Oracle Text. Using a thesaurus, we create a set of rules that are in effect our mapping from single and multi-word terms to the tokens used to represent those terms. For example, "Oracle Data Mining" becomes "ORACLEDATAMINING." First, we'll create and populate a mapping table called my_term_token_map.
    All text has been converted to upper case and values in the TERM column are intended to be mapped to the token in the TOKEN column.

        TERM                                TOKEN
        DATA MINING                         DATAMINING
        ORACLE DATA MINING                  ORACLEDATAMINING
        11G                                 ORACLE11G
        JAVA                                JAVA
        CRM                                 CRM
        CUSTOMER RELATIONSHIP MANAGEMENT    CRM
        ...

    Next, we'll create a thesaurus object my_thesaurus and a rules table my_thesaurus_rules:

        CTX_THES.CREATE_THESAURUS('my_thesaurus', FALSE);

        CREATE TABLE my_thesaurus_rules (main_term     VARCHAR2(100),
                                         query_string  VARCHAR2(400));

    We next populate the thesaurus object and rules table using the term token map. A cursor is defined over my_term_token_map. As we iterate over the rows, we insert a synonym relationship 'SYN' into the thesaurus. We also insert into the table my_thesaurus_rules the main term, and the corresponding query string, which specifies synonyms for the token in the thesaurus.

        DECLARE
          cursor c2 is
            select token, term
            from my_term_token_map;
        BEGIN
          for r_c2 in c2 loop
            CTX_THES.CREATE_RELATION('my_thesaurus',r_c2.token,'SYN',r_c2.term);
            EXECUTE IMMEDIATE 'insert into my_thesaurus_rules values
                               (:1,''SYN(' || r_c2.token || ', my_thesaurus)'')'
            using r_c2.token;
          end loop;
        END;

    We are effectively inserting the token to return and the corresponding query that will look up synonyms in our thesaurus into the my_thesaurus_rules table, for example:

        'ORACLEDATAMINING'        SYN ('ORACLEDATAMINING', my_thesaurus)

    At this point, we create a CTXRULE index on the my_thesaurus_rules table:

        create index my_thesaurus_rules_idx on
               my_thesaurus_rules(query_string)
               indextype is ctxsys.ctxrule;

    In my next post, this index will be used to extract the tokens that match each of the rules specified. We'll then compute the tf-idf weights for each of the terms and create a nested table suitable for mining.
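
    For reference, the term frequency - inverse document frequency (tf-idf) weighting mentioned in this excerpt is most commonly computed (in one of several closely related variants) as

        tfidf(t, d) = tf(t, d) * log(N / df(t))

    where tf(t, d) is how often term t occurs in document d, N is the number of documents in the collection, and df(t) is the number of documents containing t. A term scores highly when it is frequent in one document but rare across the corpus.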

    Read the article

  • IEnumerable<T> and reflection

    - by Aren B
    Background Working in .NET 2.0 Here, reflecting lists in general. I was originally using t.IsAssignableFrom(typeof(IEnumerable)) to detect if a Property I was traversing supported the IEnumerable Interface. (And thus I could cast the object to it safely) However this code was not evaluating to True when the object is a BindingList<T>. Next I tried to use t.IsSubclassOf(typeof(IEnumerable)) and didn't have any luck either. Code /// <summary> /// Reflects an enumerable (not a list, bad name should be fixed later maybe?) /// </summary> /// <param name="o">The Object the property resides on.</param> /// <param name="p">The Property We're reflecting on</param> /// <param name="rla">The Attribute tagged to this property</param> public void ReflectList(object o, PropertyInfo p, ReflectedListAttribute rla) { Type t = p.PropertyType; //if (t.IsAssignableFrom(typeof(IEnumerable))) if (t.IsSubclassOf(typeof(IEnumerable))) { IEnumerable e = p.GetValue(o, null) as IEnumerable; int count = 0; if (e != null) { foreach (object lo in e) { if (count >= rla.MaxRows) break; ReflectObject(lo, count); count++; } } } } The Intent I want to basically tag lists I want to reflect through with the ReflectedListAttribute and call this function on the properties that have it. (Already Working) Once inside this function, given the object the property resides on, and the PropertyInfo related, get the value of the property, cast it to an IEnumerable (assuming it's possible) and then iterate through each child and call ReflectObject(...) on the child with the count variable.
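
    A likely explanation, for reference: Type.IsAssignableFrom asks "can an instance of the argument type be assigned to a variable of this type?", so the interface needs to be on the left-hand side, and IsSubclassOf only walks the class hierarchy, so it never matches interfaces. A minimal standalone C# sketch (not taken from the question) showing the three checks:

        using System;
        using System.Collections;
        using System.ComponentModel;

        class AssignabilityDemo
        {
            static void Main()
            {
                Type t = typeof(BindingList<int>);

                // Backwards: asks whether an IEnumerable could be stored in a BindingList<int> variable.
                Console.WriteLine(t.IsAssignableFrom(typeof(IEnumerable)));    // False

                // Only considers base classes, so an interface never matches.
                Console.WriteLine(t.IsSubclassOf(typeof(IEnumerable)));        // False

                // Interface on the left: does BindingList<int> implement IEnumerable?
                Console.WriteLine(typeof(IEnumerable).IsAssignableFrom(t));    // True
            }
        }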

    Read the article

  • Overwrite clean method in Django Custom Forms

    - by John
    Hi, I have written a custom widget class AutoCompleteWidget(widgets.TextInput): """ widget to show an autocomplete box which returns a list of nodes available to be tagged """ def render(self, name, value, attrs=None): final_attrs = self.build_attrs(attrs, name=name) if not self.attrs.has_key('id'): final_attrs['id'] = 'id_%s' % name if not value: value = '[]' jquery = u""" <script type="text/javascript"> $("#%s").tokenInput('%s', { hintText: "Enter the word", noResultsText: "No results", prePopulate: %s, searchingText: "Searching..." }); $("body").focus(); </script> """ % (final_attrs['id'], reverse('ajax_autocomplete'), value) output = super(AutoCompleteWidget, self).render(name, "", attrs) return output + mark_safe(jquery) class MyForm(forms.Form): AutoComplete = forms.CharField(widget=AutoCompleteWidget) This widget uses a jQuery function which autocompletes a word based on entries from the database. You can preset its initial values by setting prePopulate to a json string in the form ['name': 'some name', 'id': 'some id'] I do this by setting the initial value of the form field to this json string jquery_string = ['name': 'some name', 'id': 'some id'] form = MyForm(initial={'AutoComplete':jquery_string}) When submitting the form the value of AutoComplete is returned as a comma-separated list of the selected ids, e.g. 12,45,43,66, which is what I want. However if there is an error in the form, for example a required field has not been entered, the value of the AutoComplete field is now 12,45,43,66 and not the json string which it requires. What is the best way to solve this? I was thinking about overwriting the clean method in the form class but I'm not sure how to find out if any other element has returned an error, e.g. if form.errors: form.cleaned_data['autocomplete'] = json string return form.cleaned_data Thanks

    Read the article

  • Drupal image management

    - by vian
    Please suggest how I should approach these requirements. What ready-to-use solutions (modules) are best suited to achieve something like this: What I need is an image library that has searchable, tagged images that are already resized when we publish them. If the author searches the library and the image he needs isn’t there, he can upload one and have it added to the index. The important thing is that images in the library can be sorted into three categories: News images, top story images and feature images so that, over time, we don’t end up with hundreds of images crammed into one folder, thus making browsing a pain (and to prevent something like: searching for a keyword to find an image for the news, picking an image, and then seeing it’s 1600 x 1200). Also, I need something which will assemble thumbnail galleries easily. I don’t want to have to go to the image library, get a URL, go back, paste it in, etc. I should be able to pick, say, 8 images and say “create gallery”. How this objective is achieved is flexible, but I am looking for a shortcut to get around assembling screenshot galleries by hand.

    Read the article

  • using ghostscript in server mode to convert pdfs to pngs

    - by emh
    While I am able to convert a specific page of a PDF to a PNG like so: gs -dSAFER -dBATCH -dNOPAUSE -sDEVICE=png16m -dGraphicsAlphaBits=4 -sOutputFile=gymnastics-20.png -dFirstPage=20 -dLastPage=20 gymnastics.pdf I am wondering if I can somehow use Ghostscript's JOBSERVER mode to process several conversions without having to incur the cost of starting up Ghostscript each time. From: http://pages.cs.wisc.edu/~ghost/doc/svn/Use.htm -dJOBSERVER Define \004 (^D) to start a new encapsulated job used for compatibility with Adobe PS Interpreters that ordinarily run under a job server. The -dNOOUTERSAVE switch is ignored if -dJOBSERVER is specified since job servers always execute the input PostScript under a save level, although the exitserver operator can be used to escape from the encapsulated job and execute as if the -dNOOUTERSAVE was specified. This also requires that the input be from stdin, otherwise an error will result (Error: /invalidrestore in --restore--). Example usage is: gs ... -dJOBSERVER - < inputfile.ps -or- cat inputfile.ps | gs ... -dJOBSERVER - Note: The ^D does not result in an end-of-file action on stdin as it may on some PostScript printers that rely on TBCP (Tagged Binary Communication Protocol) to cause an out-of-band ^D to signal EOF in a stream input data. This means that direct file actions on stdin such as flushfile and closefile will affect processing of data beyond the ^D in the stream. The idea is to run Ghostscript in-process. The script would receive a request for a particular page of a PDF and would use Ghostscript to generate the specified image. I'd rather not start up a new Ghostscript process every time.

    Read the article

  • Making REST request using LWP::Simple

    - by Alienfluid
    I am trying to use LWP::Simple to make a GET request to a REST service. Here's the simple code: use LWP::Simple; $uri = "http://api.stackoverflow.com/0.8/questions/tagged/php"; $jsonresponse= get $uri; print $jsonresponse; On my local machine, running Ubuntu 10.4, and Perl version 5.10.1: farhan@farhan-lnx:~$ perl --version This is perl, v5.10.1 (*) built for x86_64-linux-gnu-thread-multi I can get the correct response and have it printed on the screen. E.g.: farhan@farhan-lnx:~$ head -10 output.txt { "total": 1000, "page": 1, "pagesize": 30, "questions": [ { "tags": [ "php", "arrays", "coding-style" (... snipped ...) But on my host's machine, which I SSH into, I get garbage printed on the screen for the same exact code. I am assuming it has something to do with the encoding, but the REST service does not return the character set type in the response, so how do I force LWP::Simple to use the correct encoding? Any ideas what may be going on here? Here's the version of Perl on my host's machine: [dredd]$ perl --version This is perl, v5.8.8 built for x86_64-linux-gnu-thread-multi

    Read the article

  • Why does my REST request return garbage data?

    - by Alienfluid
    I am trying to use LWP::Simple to make a GET request to a REST service. Here's the simple code: use LWP::Simple; $uri = "http://api.stackoverflow.com/0.8/questions/tagged/php"; $jsonresponse= get $uri; print $jsonresponse; On my local machine, running Ubuntu 10.4, and Perl version 5.10.1: farhan@farhan-lnx:~$ perl --version This is perl, v5.10.1 (*) built for x86_64-linux-gnu-thread-multi I can get the correct response and have it printed on the screen. E.g.: farhan@farhan-lnx:~$ head -10 output.txt { "total": 1000, "page": 1, "pagesize": 30, "questions": [ { "tags": [ "php", "arrays", "coding-style" (... snipped ...) But on my host's machine, which I SSH into, I get garbage printed on the screen for the same exact code. I am assuming it has something to do with the encoding, but the REST service does not return the character set type in the response, so how do I force LWP::Simple to use the correct encoding? Any ideas what may be going on here? Here's the version of Perl on my host's machine: [dredd]$ perl --version This is perl, v5.8.8 built for x86_64-linux-gnu-thread-multi

    Read the article

  • How to call a C# DLL from InstallerClass overrides?

    - by simpleton
    I have a setup and deployment project. In it I have an Installer.cs class, created from the basic Installer Class template. In the Install override, I have this code taken from http://msdn.microsoft.com/en-us/library/d9k65z2d.aspx. public override void Install(IDictionary stateSaver) { base.Install(stateSaver); System.Diagnostics.Process.Start("http://www.microsoft.com"); } Everything works when I install the application, but the problem arises when I try to call anything from an external C# class library that I also wrote. Example: public override void Install(IDictionary stateSaver) { base.Install(stateSaver); System.Diagnostics.Process.Start("http://www.microsoft.com"); MyLibrary.SomeMethod(); } The problem is, I don't believe the library call is actually happening. I tagged the entire process line by line, with EventLog.WriteEvent()s, and each line in the Install override is seemingly 'hit', putting entries in the event log, but I do not receive any exceptions for the library call SomeMethod(), yet the EventLog entries and other actions inside SomeMethod() are never created, nor are any of its actions taken. Any ideas?
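
    One way to narrow this down (a diagnostic sketch only; "MyInstaller" is an invented event-log source and MyLibrary.SomeMethod is the call from the question) is to wrap the library call and log the full exception explicitly, since custom actions run inside the installer host process, where failures such as an unresolvable dependency assembly are easy to miss:

        using System;
        using System.Collections;
        using System.ComponentModel;
        using System.Configuration.Install;
        using System.Diagnostics;

        [RunInstaller(true)]
        public class DiagnosticInstaller : Installer
        {
            public override void Install(IDictionary stateSaver)
            {
                base.Install(stateSaver);
                try
                {
                    MyLibrary.SomeMethod();   // the call that appears to do nothing
                }
                catch (Exception ex)
                {
                    // Surface the real failure in the event log rather than losing it.
                    EventLog.WriteEntry("MyInstaller", ex.ToString(), EventLogEntryType.Error);
                    throw;   // rethrow if the install should fail when the call fails
                }
            }
        }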

    Read the article

  • Accessing deleted rows from a DataTable

    - by Ken
    Hello: I have a parent WinForm that has a MyDataTable _dt as a member. The MyDataTable type was created in the "typed dataset" designer tool in Visual Studio 2005 (MyDataTable inherits from DataTable). _dt gets populated from a db via ADO.NET. Based on changes from user interaction in the form, I delete a row from the table like so: _dt.FindBySomeKey(_someKey).Delete(); Later on, _dt is passed by value to a dialog form. From there, I need to scan through all the rows to build a string: foreach (myDataTableRow row in _dt) { sbFilter.Append("'" + row.info + "',"); } The problem is that upon doing this after a delete, the following exception is thrown: DeletedRowInaccessibleException: Deleted row information cannot be accessed through the row. The workaround that I am currently using (which feels like a hack) is the following: foreach (myDataTableRow row in _dt) { if (row.RowState != DataRowState.Deleted && row.RowState != DataRowState.Detached) { sbFilter.Append("'" + row.info + "',"); } } My question: Is this the proper way to do this? Why would the foreach loop access rows that have been tagged via the Delete() method?
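
    For reference, the enumeration behaves this way because Delete() only marks a row as Deleted; the row stays in the Rows collection until AcceptChanges() runs, so foreach still visits it and reading its current values throws. Filtering by row state is the intended pattern, and DataTable.Select can do it without hand-written RowState checks. A sketch assuming a plain DataTable with an "info" string column (a typed MyDataTable behaves the same way):

        using System.Data;
        using System.Text;

        static class FilterBuilder
        {
            public static string Build(DataTable dt)
            {
                StringBuilder sbFilter = new StringBuilder();

                // CurrentRows excludes rows in the Deleted state; Detached rows are not
                // in the table at all, so they never show up here either.
                foreach (DataRow row in dt.Select("", "", DataViewRowState.CurrentRows))
                {
                    sbFilter.Append("'" + row["info"] + "',");
                }

                // If a deleted row's values are still needed, they remain readable through
                // the Original version: row["info", DataRowVersion.Original].
                return sbFilter.ToString();
            }
        }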

    Read the article

  • iPhone Landscape FAQ and Solutions

    - by Johannes Rudolph
    There has been a lot of confusion and a corresponding set of questions here on SO about how iPhone applications with proper handling for Landscape/Portrait mode autorotation can be implemented. It is especially difficult to implement such an application when starting in landscape mode is desired. The most commonly observed effects are scrambled layouts and areas of the screen where touches are no longer recognized. A simple search for questions tagged iphone and landscape reveals these issues, which occur under certain scenarios: Landscape only iPhone app with multiple nibs: App started in Landscape mode, view from first nib is rendered fine, every view loaded from a different nib is not displayed correctly. Iphone Landscape mode switching to Portraite mode on loading new controller: Self explanatory iPhone: In landscape-only, after first addSubview, UITableViewController doesn’t rotate properly: Same issue as above. iPhone Landscape-Only Utility-Template Application: Layout errors, controller does not seem to recognize the view should be rotated but displays a clipped portrait view in landscape mode, causing half of the screen to stay blank. presentModalViewController in landscape after portrait viewController: Modal views are not correctly rendered either. A number of different solutions have been presented, some of them including completely custom animation via CoreGraphics, while others build on the observation that the first view controller loaded from the main nib is always displayed correctly. I have spent a significant amount of time investigating this issue and finally found a solution that is not just a partial fix but should work under all these circumstances. It is my intent with this CW post to provide a sort of FAQ for others having issues with UIViewControllers in Landscape mode. Please provide feedback and help improve the quality of this post by incorporating any related observations. Feel free to edit and post other/better answers if you know of any.

    Read the article

  • Generating a static website from a set of content data (possibly with webgen, webby or a similar tool)

    - by Darel
    My company (an engineering firm) is looking to redesign their website with some dynamic content. We have a nice portfolio of projects that we'd like to present on our site by category. To elaborate, I'd like to have a "Projects Category" menu, where you can choose a sub-project category (such as churches, schools, etc) which links to a page with images of all projects which have been tagged with that category attribute. Clicking on an image would then take you to a detailed page for that project. I have done a good bit of asp and jsp page development, but I've always worked on the front end in an enterprise environment - I've never built a production site from the back end. The advice I've gotten so far is that a full-blown CMS solution would be somewhat overkill, as we won't have a large hit count, and we'll be displaying a few hundred projects at most. One big-picture choice I appear to have - whether to dynamically generate the pages (with asp or jsp) or to use a tool to generate a set of static html pages. The tool would build the menus, project summary pages, and individual project pages based on a set of data I could provide (in the form of a database or text file.) I'm leaning towards trying to use a tool like webgen or webby to statically generate the site due to our current web hosting situation. Any thoughts on which approach is more appropriate? Is webgen or webby capable of doing what I am trying to do? Or can anyone recommend other web authoring tools better equipped to accomplish this? Thanks for any feedback!

    Read the article

  • Template with constant expression: error C2975 with VC++2008

    - by Arman
    Hello, I am trying to use elements of meta programming, but hit the wall with the first trial. I would like to have a comparator structure which can be used as follows:

        intersect_by<ID>(L1.data, L2.data, "By ID: ");
        intersect_by<IDf>(L1.data, L2.data, "By IDf: ");

    Where:

        struct ID{};  // Tag used for original IDs
        struct IDf{}; // Tag used for the file position

        // following Boost.MultiIndex examples
        template<typename Tag, typename MultiIndexContainer>
        void intersect_by(const MultiIndexContainer& L1, const MultiIndexContainer& L2, std::string msg,
                          Tag* = 0 /* fixes a MSVC++ 6.0 bug with implicit template function parms */)
        {
            /* obtain a reference to the index tagged by Tag */
            const typename boost::multi_index::index<MultiIndexContainer,Tag>::type& L1_ID_index = get<Tag>(L1);
            const typename boost::multi_index::index<MultiIndexContainer,Tag>::type& L2_ID_index = get<Tag>(L2);
            std::set_intersection(
                L1_ID_index.begin(), L1_ID_index.end(),
                L2_ID_index.begin(), L2_ID_index.end(),
                std::inserter(s, s.begin()),
                strComparator() // Here I get the C2975 error
            );
        }

        template<int N> struct strComparator;

        template<> struct strComparator<0> {
            bool operator () (const particleID& id1, const particleID& id2) const { return id1.ID < id2.ID; }
        };

        template<> struct strComparator<1> {
            bool operator () (const particleID& id1, const particleID& id2) const { return id1.IDf < id2.IDf; }
        };

    What am I missing? kind regards Arman.

    Read the article

  • Design pattern to use instead of multiple inheritance

    - by mizipzor
    Coming from a C++ background, I'm used to multiple inheritance. I like the feeling of a shotgun squarely aimed at my foot. Nowadays, I work more in C# and Java, where you can only inherit one baseclass but implement any number of interfaces (did I get the terminology right?). For example, let's consider two classes that implement a common interface but different (yet required) baseclasses: public class TypeA : CustomButtonUserControl, IMagician { public void DoMagic() { // ... } } public class TypeB : CustomTextUserControl, IMagician { public void DoMagic() { // ... } } Both classes are UserControls so I can't substitute the base class. Both need to implement the DoMagic function. My problem now is that both implementations of the function are identical. And I hate copy-and-paste code. The (possible) solutions: I naturally want TypeA and TypeB to share a common baseclass, where I can write that identical function definition just once. However, due to having the limit of just one baseclass, I can't find a place along the hierarchy where it fits. One could also try to implement a sort of composite pattern. Putting the DoMagic function in a separate helper class, but the function here needs (and modifies) quite a lot of internal variables/fields. Sending them all as (reference) parameters would just look bad. My gut tells me that the adapter pattern could have a place here, some class to convert between the two when necessary. But it also feels hacky. I tagged this with language-agnostic since it applies to all languages that use this one-baseclass-many-interfaces approach. Also, please point out if I seem to have misunderstood any of the patterns I named. In C++ I would just make a class with the private fields, that function implementation and put it in the inheritance list. What's the proper approach in C#/Java and the like?
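
    One common way out is composition, sketched roughly below in C# (the interface and member names are invented for illustration; CustomButtonUserControl and CustomTextUserControl are the base classes from the question): the duplicated DoMagic body moves into a single collaborator that works against a narrow interface exposing just the state it needs, and each control forwards to it. This avoids passing a long list of ref parameters, because the helper reaches the fields through the interface:

        public interface IMagician
        {
            void DoMagic();
        }

        // Just the state DoMagic needs, so the helper never sees the whole control.
        public interface IMagicState
        {
            string Incantation { get; set; }
            int Charges { get; set; }
        }

        // The one shared implementation.
        public class MagicHelper
        {
            public void DoMagic(IMagicState state)
            {
                // ... shared logic, reading and mutating state through the interface ...
                state.Charges = state.Charges - 1;
            }
        }

        public class TypeA : CustomButtonUserControl, IMagician, IMagicState
        {
            private readonly MagicHelper _magic = new MagicHelper();
            public string Incantation { get; set; }
            public int Charges { get; set; }
            public void DoMagic() { _magic.DoMagic(this); }
        }

        public class TypeB : CustomTextUserControl, IMagician, IMagicState
        {
            private readonly MagicHelper _magic = new MagicHelper();
            public string Incantation { get; set; }
            public int Charges { get; set; }
            public void DoMagic() { _magic.DoMagic(this); }
        }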

    Read the article

  • Fluent NHibernate automap a HasManyToMany using a generic type

    - by zulkamal
    I have a bunch of domain entities that can be keyword tagged (a Tag is also an entity.) I want to do a normal many-to-many (Tag - TagReview <- Review) table relationship but I don't want to have to create a new concrete relationship on both the Entity and Tag every single time I add a new entity. I was hoping to do a generic based Tag and do this: // Tag public class Tag<T> { public virtual int Id { get; private set; } public virtual string Name { get; set; } public virtual IList<T> Entities { get; set; } public Tag() { Entities = new List<T>(); } } // Review public class Review { public virtual string Id { get; private set; } public virtual string Title { get; set; } public virtual string Content { get; set; } public virtual IList<Tag<Review>> Tags { get; set; } public Review() { Tags = new List<Tag<Review>>(); } } Unfortunately I get an exception: ----> System.ArgumentException : Cannot create an instance of FluentNHibernate.Automapping.AutoMapping`1[Example.Entities.Tag`1[T]] because Type.ContainsGenericParameters is true. I anticipate there will be maybe 5-10 entities so mapping normally would be ok but is there a way to do something like this?

    Read the article

  • How to output JSON from within Django and call it with jQuery from a cross domain?

    - by Emre Sevinç
    For a bookmarklet project I'm trying to get JSON data using jQuery from my server (which is naturally on a different domain) running a Django powered system. According to jQuery docs: "As of jQuery 1.2, you can load JSON data located on another domain if you specify a JSONP callback, which can be done like so: "myurl?callback=?". jQuery automatically replaces the ? with the correct method name to call, calling your specified callback." And for example I can test it successfully in my Firebug console using the following snippet: $.getJSON("http://api.flickr.com/services/feeds/photos_public.gne?tags=cat&tagmode=any& format=json&jsoncallback=?", function(data){ alert(data.title); }); It prints the returned data in an alert window, e.g. 'Recent uploads tagged cat'. However when I try the similar code with my server I don't get anything at all: $.getJSON("http://mydjango.yafz.org/randomTest?jsoncallback=?", function(data){ alert(data.title); }); There are no alert windows and the Firebug status bar says "Transferring data from mydjango.yafz.org..." and keeps on waiting. On the server side I have this: def randomTest(request): somelist = ['title', 'This is a constant result'] encoded = json.dumps(somelist) response = HttpResponse(encoded, mimetype = "application/json") return response I also tried this without any success: def randomTest(request): if request.is_ajax() == True: req = {} req ['title'] = 'This is a constant result.' response = json.dumps(req) return HttpResponse(response, mimetype = "application/json") So to cut a long story short: what is the suggested method of returning a piece of data from within a Django view and retrieve it using jQuery in a cross domain fashion? What are my mistakes above?

    Read the article

  • BindingList<T> and reflection!

    - by Aren B
    Background Working in .NET 2.0 Here, reflecting lists in general. I was originally using t.IsAssignableFrom(typeof(IEnumerable)) to detect if a Property I was traversing supported the IEnumerable Interface. (And thus I could cast the object to it safely) However this code was not evaluating to True when the object is a BindingList<T>. Next I tried to use t.IsSubclassOf(typeof(IEnumerable)) and didn't have any luck either. Code /// <summary> /// Reflects an enumerable (not a list, bad name should be fixed later maybe?) /// </summary> /// <param name="o">The Object the property resides on.</param> /// <param name="p">The Property We're reflecting on</param> /// <param name="rla">The Attribute tagged to this property</param> public void ReflectList(object o, PropertyInfo p, ReflectedListAttribute rla) { Type t = p.PropertyType; //if (t.IsAssignableFrom(typeof(IEnumerable))) if (t.IsSubclassOf(typeof(IEnumerable))) { IEnumerable e = p.GetValue(o, null) as IEnumerable; int count = 0; if (e != null) { foreach (object lo in e) { if (count >= rla.MaxRows) break; ReflectObject(lo, count); count++; } } } } The Intent I want to basically tag lists i want to reflect through with the ReflectedListAttribute and call this function on the properties that has it. (Already Working) Once inside this function, given the object the property resides on, and the PropertyInfo related, get the value of the property, cast it to an IEnumerable (assuming it's possible) and then iterate through each child and call ReflectObject(...) on the child with the count variable.

    Read the article

  • Rails preventing duplicates in polymorphic has_many :through associations

    - by seaneshbaugh
    Is there an easy or at least elegant way to prevent duplicate entries in polymorphic has_many through associations? I've got two models, stories and links that can be tagged. I'm making a conscious decision to not use a plugin here. I want to actually understand everything that's going on and not be dependent on someone else's code that I don't fully grasp. To see what my question is getting at, if I run the following in the console (assuming the story and tag objects exist in the database already) s = Story.find_by_id(1) t = Tag.find_by_id(1) s.tags << t s.tags << t My taggings join table will have two entries added to it, each with the same exact data (tag_id = 1, taggable_id = 1, taggable_type = "Story"). That just doesn't seem very proper to me. So in an attempt to prevent this from happening I added the following to my Tagging model: before_validation :validate_uniqueness def validate_uniqueness taggings = Tagging.find(:all, :conditions => { :tag_id => self.tag_id, :taggable_id => self.taggable_id, :taggable_type => self.taggable_type }) if !taggings.empty? return false end return true end And it works almost as intended, but if I attempt to add a duplicate tag to a story or link I get an ActiveRecord::RecordInvalid: Validation failed exception. It seems that when you add an association to a list it calls the save! (rather than save sans !) method which raises exceptions if something goes wrong rather than just returning false. That isn't quite what I want to happen. I suppose I can surround any attempts to add new tags with a try/catch but that goes against the idea that you shouldn't expect your exceptions and this is something I fully expect to happen. Is there a better way of doing this that won't raise exceptions when all I want to do is just silently not save the object to the database because a duplicate exists?

    Read the article

  • Pronunciation of programming structures (particularly in c#)

    - by Andrzej Nosal
    As a non-English speaking person I often have problems pronouncing certain programming structures and abbreviations. I've been watching some video tutorials and listening to podcasts as well, though I couldn't catch them all. My question is what is the common or correct pronunciation of the following code snippets? Generics, like IEnumerable<int> or in a method void Swap<T>(T lhs, T rhs) Collections indexing and indexer access e.g. garage[i], rectangular arrays myArray[2,1] or jagged[1][2][3] Lambda operator =>, e.g. in a where extension method .Where(animal => animal.Color == Color.Brown) or in an anonymous method () => { return false;} Inheritance class Derived : Base (extends?) class SomeClass : IDisposable (implements?) Arithmetic operators += -= *= /= %= ! Are += and -= pronounced the same for events? Collections initializers new int[] { 4, 5, 8, 9, 12, 13, 16, 17 }; Casting MyEnum foo = (MyEnum)(int)yourFloat; (as?) Nullables DateTime? dt = new DateTime?(); I tagged the question with C# as some of them are specific to C# only.

    Read the article

  • svn dev cycle. howto lots minor "features" pending for approval.

    - by Julian Davchev
    Hi, I've read similar questions regarding that but still feel the need to ask a question. I have a scenario where we have lots of tiny "features" pending for approval. I generally see two approaches. 1. Keep trunk solid and have tons of branches for each tiny "feature". Basically every new thingy is a branch. Cons: - Might become a nightmare to support so many branches no matter how small a change. Keeping all branches in sync etc etc. - Worst con I see in this is setup of the test system so one can easily examine changes to approve (basically need to support all branches, which seems insane). Pros: - Seemingly easy, once approved, for a branch to be merged back to trunk and a new release to be tagged and deployed. 2. For big features a branch is released and for small changes all goes in trunk (relatively stable) directly. Pros: - Easier to set up the test system as most of the time all will be directly visible. For big features it should be easy to maintain a separate branch on test. Cons: - Don't really see how the release will go. I will not be able to release just one part of trunk. This would involve cherry-picking, which is crazy to follow. The other approach is I just enforce that after some time (a week or so) all small features need to be approved so they can be deployed before giving new tasks. I just create a release branch and either all or none of the small features go live. This will be some fun discussion with the head people. I guess having lots of small pending stuff is very problematic to follow technically.

    Read the article

  • ASP.NET MVC application architecture "guidelines"

    - by Joe Future
    I'm looking for some feedback on my ASP.NET MVC based CMS application architecture. Domain Model - depends on nothing but the System classes to define types. For now, mostly anemic. Repository Layer - abstracted data access, only called by the services layer Services Layer - performs business logic on domain models. Exposes view models to the controllers. ViewModelMapper - service that translates back and forth between view models and domain models Controllers - super thin "traffic cop" style functionality that interacts with the service layer and only talks in terms of view models, never domain models My domain model is mostly used as data transfer (DTO) objects and has minimal logic at the moment. I'm finding this is nice because it depends on nothing (not even classes in the services layer). The services layer is a bit tricky... I only want the controllers to have access to viewmodels for ease of GUI programming. However, some of the services need to talk to each other. For example, I have an eventing service that notifies other listener services when content is tagged, when blog posts are created, etc. Currently, the methods that take domain models as inputs or return them are marked internal so they can't be used by the controllers. Sounds like overkill? Not enough abstraction? I'm mainly doing this as a learning exercise in being strict about architecture, not for an actual product, so please no feedback along the lines of "right depends on what you want to do". thanks! Jason
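
    A small illustrative sketch of that internal/public split (all type names invented here, not from the post): the service's view-model-facing members are public for controllers, while the members that exchange domain models stay internal, so controllers living in another assembly cannot reach the domain model at all:

        // Domain model and view model kept deliberately tiny for the example.
        public class Review { public int Id; public string Title; }
        public class ReviewViewModel { public int Id; public string Title; }

        public class ReviewService
        {
            // Controllers call this and only ever see view models.
            public ReviewViewModel GetReview(int id)
            {
                Review domain = LoadDomainReview(id);
                return new ReviewViewModel { Id = domain.Id, Title = domain.Title };
            }

            // Other services (e.g. the eventing service) collaborate through this;
            // it is internal, so it is invisible outside the services assembly.
            internal Review LoadDomainReview(int id)
            {
                return new Review { Id = id, Title = "stub" };
            }
        }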

    Read the article

  • HATEOAS - Discovery and URI Templating

    - by Paul Kirby
    I'm designing a HATEOAS API for internal data at my company, but have been having troubles with the discovery of links. Consider the following set of steps for someone to retrieve information about a specific employee in this system: User sends GET to http://coredata/ to get all available resources, returns a number of links including one tagged as rel = "http://coredata/rels/employees" User follows HREF on the rel from the first request, performing a GET at (for example) http://coredata/employees The data returned from this last call is my conundrum and a situation where I've heard mixed suggestions. Here are some of them: That GET will return all employees (with perhaps truncated data), and the client would be responsible for picking the one it wants from that list. That GET would return a number of URI templated links describing how to query / get one employee / get all employees. Something like: "_links": { "http://coredata/rels/employees#RetrieveOne": { "href": "http://coredata/employees/{id}" }, "http://coredata/rels/employees#Query": { "href": "http://coredata/employees{?login,firstName,lastName}" }, "http://coredata/rels/employees#All": { "href": "http://coredata/employees/all" } } I'm a little stuck here with what remains closest to HATEOAS. For option 1, I really do not want to make my clients retrieve all employees every time for the sake of navigation, but I can see how using URI templating in example two introduces some out-of-band knowledge. My other thought was to use the RetrieveOne, Query, and All operations as my cool URLs, but that seems to violate the concept that you should be able to navigate to the resources you want from one base URI. Has anyone else managed to come up with a good way to handle this? Navigation is dead simple once you've retrieved one resource or a set of resources, but it seems very difficult to use for discovery.

    Read the article

  • Warshall Algorithm Logic - Stuck

    - by stan
    I am trying to use this logic to understand what is going on with the adjacency matrix, but I am massively confused by the part where it talks about interspacing for a, b, c, d, ... Could anyone explain what is going on here? Thank you. (Tagged as Java as it's the language this was demonstrated to us in, so if anyone posts any code examples they could be in that language.) http://compprog.wordpress.com/2007/11/15/all-sources-shortest-path-the-floyd-warshall-algorithm/ Here is the code:

        for (k = 0; k < n; ++k) {
            for (i = 0; i < n; ++i)
                for (j = 0; j < n; ++j)
                    /* If i and j are different nodes and if the paths
                       between i and k and between k and j exist, do */
                    if ((dist[i][k] * dist[k][j] != 0) && (i != j))
                        /* See if you can't get a shorter path between i and j by
                           interspacing k somewhere along the current path */
                        if ((dist[i][k] + dist[k][j] < dist[i][j]) || (dist[i][j] == 0))
                            dist[i][j] = dist[i][k] + dist[k][j];
        }
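
    For what the "interspacing" comment means, here is the same triple loop restated (a C# sketch, essentially identical in Java; dist is assumed to be an n-by-n matrix where 0 means "no edge / no path yet", as in the linked article): for every candidate intermediate node k, the inner loops check whether the route i -> k -> j beats the best i -> j route found so far, and keep it if so.

        static void FloydWarshall(int[,] dist)
        {
            int n = dist.GetLength(0);

            for (int k = 0; k < n; ++k)          // candidate node to go "via"
                for (int i = 0; i < n; ++i)      // start node
                    for (int j = 0; j < n; ++j)  // end node
                    {
                        bool bothLegsExist = dist[i, k] != 0 && dist[k, j] != 0;
                        if (i != j && bothLegsExist)
                        {
                            int viaK = dist[i, k] + dist[k, j];
                            // Take the detour through k if there is no i -> j path yet,
                            // or if going via k is shorter than the current best.
                            if (dist[i, j] == 0 || viaK < dist[i, j])
                                dist[i, j] = viaK;
                        }
                    }
        }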

    Read the article

  • How to hand-over a TCP listening socket with minimal downtime?

    - by Shtééf
    While this question is tagged EventMachine, generic BSD-socket solutions in any language are much appreciated too. Some background: I have an application listening on a TCP socket. It is started and shut down with a regular System V style init script. My problem is that it needs some time to start up before it is ready to service the TCP socket. It's not too long, perhaps only 5 seconds, but that's 5 seconds too long when a restart needs to be performed during a workday. It's also crucial that existing connections remain open and are finished normally. Reasons for a restart of the application are patches, upgrades, and the like. I unfortunately find myself in the position that, every once in a while, I need to do this kind of thing in production. The question: I'm looking for a way to do a neat hand-over of the TCP listening socket, from one process to another, and as a result get only a split second of downtime. I'd like existing connections / sockets to remain open and finish processing in the old process, while the new process starts servicing new connections. Is there some proven method of doing this using BSD-sockets? (Bonus points for an EventMachine solution.) Are there perhaps open-source libraries out there implementing this, that I can use as is, or use as a reference? (Again, non-Ruby and non-EventMachine solutions are appreciated too!)

    Read the article

  • How to prevent PHP variables from being arrays?

    - by MJB
    I think that (the title) is the problem I am having. I set up a MySQL connection, I read an XML file, and then I insert those values into a table by looping through the elements. The problem is, instead of inserting only 1 record, sometimes I insert 2 or 3 or 4. It seems to depend on the previous values I have read. I think I am reinitializing the variables, but I guess I am missing something -- hopefully something simple. Here is my code. I originally had about 20 columns, but I shortened the included version to make it easier to read. $ctr = 0; $sql = "insert into csd (id,type,nickname,hostname,username,password) ". "values (?,?,?,?,?,?)"; $cur = $db->prepare($sql); for ($ctr = 0; $ctr < $expected_count; $ctr++) { list ( $lbl, $type, $nickname, $hostname, $username, $password) = ""; $bind_vars = array(); $lbl = "csd_{$ctr}"; $type = $ref->itm->csds->$lbl->type; $nickname = $ref->itm->csds->$lbl->nickname; $hostname = $ref->itm->csds->$lbl->hostname; $username = $ref->itm->csds->$lbl->username; $password = $ref->itm->csds->$lbl->password; $bind_vars = array($id,$type,$nickname,$hostname,$username,$password); $res = $db->execute($cur, $bind_vars); # this is a separate function which works, but which only # does SELECTS and cannot be the problem. I include it because I # want to count the total rows. printf ("%d CSDs on that ITEM now.\n", CountCSDs($id_to_sync)); } P.S. I also tagged this SimpleXML because that is how I am reading the file, though that code is not included above. It looks like this: $Ref = simplexml_load_file($file);

    Read the article

  • Trying to replace contents of a Div, with no luck

    - by bluedaniel
    I've tried to use the DOM model with no bloody luck; getElementById just doesn't work for me. I'm loath to resort to a regex but not sure what else to do. The idea is to replace a <div id="content_div"> all sorts </div> with a new <div id="content_div"> NEW ALL SORTS HERE </div> and keep anything that was before or after it in the string. The string is a partial HTML string and more specifically out of the WordPress posts DB. Any ideas? UPDATE: I tagged this question PHP but probably should have mentioned I'm looking for a PHP solution only. Update: Code Example $content = ($wpdb->get_var( "SELECT `post_content` FROM $wpdb->posts WHERE ID = {$article[post_id]}" )); $doc = new DOMDocument(); $doc->validateOnParse = true; $doc->loadHTMLFile($content); $element = $doc->getElementById('div_to_edit'); So I've tried a whole lot of code and this is what I've got so far; probably not right, but I've been hacking at it for a little while now.

    Read the article
