Search Results

Search found 4116 results on 165 pages for 'baron throw'.


  • Where is a list of SharePoint 2010 wiki improvements?

    - by Perry
    Do you know where there is a list of SharePoint 2010 wiki improvements? I'm really hoping they added something equivalent to Wikipedia categories -- so that, e.g., we can throw a category marker such as [Category:TODO] into a page and have it automagically show up on a list of TODO items as long as that category is present.

    Read the article

  • Which folder lock application will meet this requirement?

    - by user1540992
    I want a third-party folder lock application (commercial or open source) that password-protects files without hiding them. The locked folder should be displayed to the user as a password-protected folder; when the user tries to open it, it should prompt for a password. If the user enters the correct password it should open, otherwise it should throw an error. Is there any application with these qualities? I have searched a lot, but I could not find any.

    Read the article

  • Sharepoint : Access denied when editing a page (because of page layout) or list item

    - by tinky05
    I'm logged in as the System Account, so it's probably not a "real access denied"!

    What I've done:
    - A custom master page
    - A custom page layout from a custom content type (with custom fields)

    If I add a custom field (aka "content field" in the tools in SPD) to my page layout, I get an access denied when I try to edit a page that comes from that page layout. So, for example, if I add a line for the field "test" in an "asp:content" tag in my page layout, I get an access denied. If I remove it, everything is fine. (The field "test" is a field that comes from the content type.) Any idea?

    UPDATE: Well, I tried in a blank site and it worked fine, so there must be something wrong with my web application :(

    UPDATE #2: Looks like this line in the master page gives me the access denied:

    <SharePoint:DelegateControl runat="server" ControlId="PublishingConsole" Visible="false"
        PrefixHtml="&lt;tr&gt;&lt;td colspan=&quot;0&quot; id=&quot;mpdmconsole&quot; class=&quot;s2i-consolemptablerow&quot;&gt;"
        SuffixHtml="&lt;/td&gt;&lt;/tr&gt;"></SharePoint:DelegateControl>

    UPDATE #3: I found http://odole.wordpress.com/2009/01/30/access-denied-error-message-while-editing-properties-of-any-document-in-a-moss-document-library/ which looks like a similar issue. But our SharePoint versions have the latest updates. I'll try to use the code that's supposed to fix the lists and post another update.

    UPDATE #4: OK... I tried the code that I found on the page above (see link) and it seems to fix the thing. I haven't tested the solution 100%, but so far, so good. Here's the code I made for a feature receiver (based on the code posted at the link above):

    using System;
    using System.Collections.Generic;
    using System.Text;
    using System.Xml;
    using Microsoft.SharePoint;

    namespace MyWebsite.FixAccessDenied
    {
        class FixAccessDenied : SPFeatureReceiver
        {
            public override void FeatureActivated(SPFeatureReceiverProperties properties)
            {
                FixWebField(SPContext.Current.Web);
            }

            // Remaining receiver methods intentionally left empty.
            public override void FeatureDeactivating(SPFeatureReceiverProperties properties) { }
            public override void FeatureInstalled(SPFeatureReceiverProperties properties) { }
            public override void FeatureUninstalling(SPFeatureReceiverProperties properties) { }

            static void FixWebField(SPWeb currentWeb)
            {
                string RenderXMLPattenAttribute = "RenderXMLUsingPattern";
                SPSite site = new SPSite(currentWeb.Url);
                SPWeb web = site.OpenWeb();
                web.AllowUnsafeUpdates = true;
                web.Update();

                SPField f = web.Fields.GetFieldByInternalName("PermMask");
                string s = f.SchemaXml;
                Console.WriteLine("schemaXml before: " + s);

                XmlDocument xd = new XmlDocument();
                xd.LoadXml(s);
                XmlElement xe = xd.DocumentElement;
                if (xe.Attributes[RenderXMLPattenAttribute] == null)
                {
                    XmlAttribute attr = xd.CreateAttribute(RenderXMLPattenAttribute);
                    attr.Value = "TRUE";
                    xe.Attributes.Append(attr);
                }
                string strXml = xe.OuterXml;
                Console.WriteLine("schemaXml after: " + strXml);
                f.SchemaXml = strXml;

                foreach (SPWeb sites in site.AllWebs)
                {
                    FixField(sites.Url);
                }
            }

            static void FixField(string weburl)
            {
                string RenderXMLPattenAttribute = "RenderXMLUsingPattern";
                SPSite site = new SPSite(weburl);
                SPWeb web = site.OpenWeb();
                web.AllowUnsafeUpdates = true;
                web.Update();

                IList<Guid> guidArrayList = new List<Guid>();
                foreach (SPList list in web.Lists)
                {
                    guidArrayList.Add(list.ID);
                }

                foreach (Guid guid in guidArrayList)
                {
                    SPList list = web.Lists[guid];
                    SPField f = list.Fields.GetFieldByInternalName("PermMask");
                    string s = f.SchemaXml;
                    Console.WriteLine("schemaXml before: " + s);

                    XmlDocument xd = new XmlDocument();
                    xd.LoadXml(s);
                    XmlElement xe = xd.DocumentElement;
                    if (xe.Attributes[RenderXMLPattenAttribute] == null)
                    {
                        XmlAttribute attr = xd.CreateAttribute(RenderXMLPattenAttribute);
                        attr.Value = "TRUE";
                        xe.Attributes.Append(attr);
                    }
                    string strXml = xe.OuterXml;
                    Console.WriteLine("schemaXml after: " + strXml);
                    f.SchemaXml = strXml;
                }
            }
        }
    }

    Just put that code in a feature receiver and activate it at the root site; it should loop through all the subsites and fix the lists.

    SUMMARY
    - You get an ACCESS DENIED when editing a PAGE or an ITEM.
    - You still get the error even if you're logged in as the Super Admin of the f****in world (sorry, I spent 3 days on that bug).
    - For me, it happened after an import from another site definition (a .cmp file).
    - Actually, it's supposed to be a known bug that was fixed in February 2009, but it looks like it wasn't. The code posted above should fix the thing.
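    One caveat worth flagging in the fix above: each helper opens an SPSite and an SPWeb and never disposes them, and both types implement IDisposable. The sketch below is a leak-free variant of FixField under that observation (same PermMask/RenderXMLUsingPattern logic, same usings as the receiver above); FixWebField can be wrapped the same way:

    static void FixField(string weburl)
    {
        // Dispose SPSite/SPWeb deterministically; leaking them exhausts resources on large farms.
        using (SPSite site = new SPSite(weburl))
        using (SPWeb web = site.OpenWeb())
        {
            web.AllowUnsafeUpdates = true;
            web.Update();

            // Snapshot the list IDs first, since we mutate fields while iterating.
            List<Guid> listIds = new List<Guid>();
            foreach (SPList list in web.Lists)
                listIds.Add(list.ID);

            foreach (Guid id in listIds)
            {
                SPField permMask = web.Lists[id].Fields.GetFieldByInternalName("PermMask");
                XmlDocument doc = new XmlDocument();
                doc.LoadXml(permMask.SchemaXml);
                XmlElement root = doc.DocumentElement;
                if (root.Attributes["RenderXMLUsingPattern"] == null)
                {
                    XmlAttribute attr = doc.CreateAttribute("RenderXMLUsingPattern");
                    attr.Value = "TRUE";
                    root.Attributes.Append(attr);
                }
                permMask.SchemaXml = root.OuterXml;
            }
        }
    }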

    Read the article

  • Is this fix to "PostSharp complains about CA1800:DoNotCastUnnecessarily" the best one?

    - by cad
    This question is about "is" and "as" casting and about the CA1800 PostSharp rule. I want to know whether the solution I came up with is the best one possible, or whether it has any problem I can't see.

    I have this code (named "Original Code" and reduced to the minimum relevant). The function ValidateSubscriptionLicenceProducts tries to validate a SubscriptionLicence (which can be one of three types: Standard, Credit and TimeLimited) by casting it and then checking some stuff (in //Do whatever). PostSharp complains about CA1800:DoNotCastUnnecessarily. The reason is that I cast the same object to the same type twice. In the best case this code casts 2 times (if it is a StandardLicence) and in the worst case 4 times (if it is a TimeLimited licence). I know it is possible to disable the rule (that was my first approach), since there is no big performance impact here, but I am trying for a better approach.

    // Version: Original Code
    // Min 2 casts, max 4 casts
    // PostSharp complains about CA1800:DoNotCastUnnecessarily
    private void ValidateSubscriptionLicenceProducts(SubscriptionLicence licence)
    {
        if (licence is StandardSubscriptionLicence)
        {
            // All products must have the same products purchased
            List<StandardSubscriptionLicenceProduct> standardProducts =
                ((StandardSubscriptionLicence)licence).SubscribedProducts;
            //Do whatever
        }
        else if (licence is CreditSubscriptionLicence)
        {
            // All products must have a valid Credit entitlement & Credit interval
            List<CreditSubscriptionLicenceProduct> creditProducts =
                ((CreditSubscriptionLicence)licence).SubscribedProducts;
            //Do whatever
        }
        else if (licence is TimeLimitedSubscriptionLicence)
        {
            // All products must have a valid Time entitlement
            List<TimeLimitedSubscriptionLicenceProduct> timeProducts =
                ((TimeLimitedSubscriptionLicence)licence).SubscribedProducts;
            //Do whatever
        }
        else
            throw new InvalidSubscriptionLicenceException("Invalid Licence type");
        //More code...
    }

    This is the "Improve 1" version using "as". It does not trigger CA1800, but the problem is that it always performs 3 casts up front (if in the future we have 30 or 40 licence types, it could perform badly):

    // Version: Improve 1
    // Always exactly 3 casts
    private void ValidateSubscriptionLicenceProducts(SubscriptionLicence licence)
    {
        StandardSubscriptionLicence standardLicence = licence as StandardSubscriptionLicence;
        CreditSubscriptionLicence creditLicence = licence as CreditSubscriptionLicence;
        TimeLimitedSubscriptionLicence timeLicence = licence as TimeLimitedSubscriptionLicence;

        if (standardLicence != null)
        {
            // All products must have the same products purchased
            List<StandardSubscriptionLicenceProduct> standardProducts = standardLicence.SubscribedProducts;
            //Do whatever
        }
        else if (creditLicence != null)
        {
            // All products must have a valid Credit entitlement & Credit interval
            List<CreditSubscriptionLicenceProduct> creditProducts = creditLicence.SubscribedProducts;
            //Do whatever
        }
        else if (timeLicence != null)
        {
            // All products must have a valid Time entitlement
            List<TimeLimitedSubscriptionLicenceProduct> timeProducts = timeLicence.SubscribedProducts;
            //Do whatever
        }
        else
            throw new InvalidSubscriptionLicenceException("Invalid Licence type");
        //More code...
    }

    But later I thought of a better one. This is the final version I am using:

    // Version: Improve 2
    // Min 1 cast, max 3 casts
    // Does not trigger CA1800:DoNotCastUnnecessarily
    private void ValidateSubscriptionLicenceProducts(SubscriptionLicence licence)
    {
        StandardSubscriptionLicence standardLicence = null;
        CreditSubscriptionLicence creditLicence = null;
        TimeLimitedSubscriptionLicence timeLicence = null;

        if (StandardSubscriptionLicence.TryParse(licence, out standardLicence))
        {
            // All products must have the same products purchased
            List<StandardSubscriptionLicenceProduct> standardProducts = standardLicence.SubscribedProducts;
            //Do whatever
        }
        else if (CreditSubscriptionLicence.TryParse(licence, out creditLicence))
        {
            // All products must have a valid Credit entitlement & Credit interval
            List<CreditSubscriptionLicenceProduct> creditProducts = creditLicence.SubscribedProducts;
            //Do whatever
        }
        else if (TimeLimitedSubscriptionLicence.TryParse(licence, out timeLicence))
        {
            // All products must have a valid Time entitlement
            List<TimeLimitedSubscriptionLicenceProduct> timeProducts = timeLicence.SubscribedProducts;
            //Do whatever
        }
        else
            throw new InvalidSubscriptionLicenceException("Invalid Licence type");
        //More code...
    }

    // Example of TryParse in CreditSubscriptionLicence
    public static bool TryParse(SubscriptionLicence baseLicence, out CreditSubscriptionLicence creditLicence)
    {
        creditLicence = baseLicence as CreditSubscriptionLicence;
        return creditLicence != null;
    }

    It requires a change in the classes StandardSubscriptionLicence, CreditSubscriptionLicence and TimeLimitedSubscriptionLicence to add a TryParse method (shown above in the code). I believe this version casts a minimum of one time and a maximum of three. What do you think about Improve 2? Is there a better way of doing it?
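    For what it's worth, there is a middle ground between Improve 1 and Improve 2 that keeps the plain "as" operator but performs each cast lazily, only after the previous branch has failed. A minimal sketch, reusing the type names from the question:

    // Sketch: one "as" cast per branch, attempted only when the previous branch fails.
    // Best case 1 cast, worst case 3, and no cast is ever repeated, so CA1800 stays quiet,
    // without adding a TryParse helper to every licence class.
    private void ValidateSubscriptionLicenceProducts(SubscriptionLicence licence)
    {
        StandardSubscriptionLicence standardLicence = licence as StandardSubscriptionLicence;
        if (standardLicence != null)
        {
            //Do whatever with standardLicence.SubscribedProducts
        }
        else
        {
            CreditSubscriptionLicence creditLicence = licence as CreditSubscriptionLicence;
            if (creditLicence != null)
            {
                //Do whatever with creditLicence.SubscribedProducts
            }
            else
            {
                TimeLimitedSubscriptionLicence timeLicence = licence as TimeLimitedSubscriptionLicence;
                if (timeLicence != null)
                {
                    //Do whatever with timeLicence.SubscribedProducts
                }
                else
                    throw new InvalidSubscriptionLicenceException("Invalid Licence type");
            }
        }
        //More code...
    }

    As a forward-looking note: on C# 7.0 or later the same dispatch collapses to if (licence is StandardSubscriptionLicence standardLicence) { ... }, which tests and casts in a single step, though that compiler postdates the tooling discussed in this question.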

    Read the article

  • How do I implement a remove by index method for a singly linked list in Java?

    - by Lars Flyger
    Hi, I'm a student in a programming class, and I need some help with this code I've written. So far I've written an entire linked list class (seen below), yet for some reason the "removeByIndex" method won't work. I can't seem to figure out why; the logic seems sound to me. Is there some problem I don't know about?

    public class List<T> {
        // private sub-class Link
        private class Link {
            private T value;
            private Link next;

            // constructors of Link:
            public Link(T val) {
                this.value = val;
                this.next = null;
            }

            public Link(T val, Link next) {
                this.value = val;
                this.next = next;
            }

            @SuppressWarnings("unused")
            public T getValue() {
                return value;
            }
        }

        private static final Exception NoSuchElementException = null;
        private static final Exception IndexOutOfBoundsException = null;

        private Link chain = null;

        // constructors of List:
        public List() {
            this.chain = null;
        }

        // methods of List:

        /**
         * Preconditions: none
         * Postconditions: returns true if list is empty
         */
        public boolean isEmpty() {
            return this.chain == null;
        }

        /**
         * Preconditions: none
         * Postconditions: A new Link is added via add_aux
         * @param element
         */
        public void add(T element) {
            this.add_aux(element, this.chain);
        }

        /**
         * Preconditions: none
         * Postconditions: A new Link is added to the current chain
         * @param element
         * @param chain
         */
        private void add_aux(T element, Link chain) {
            if (chain == null) {
                // if chain is null set chain to a new Link with a value of element
                this.chain = new Link(element);
            } else if (chain.next != null) {
                // if chain.next is not null, go to next item in chain and try to add element
                add_aux(element, chain.next);
            } else {
                // if chain.next is null, set chain.next equal to a new Link with value of element
                chain.next = new Link(element);
            }
        }

        /**
         * Preconditions: none
         * Postconditions: returns the link at the defined index via nthLink_aux
         * @param index
         */
        private Link nthLink(int index) {
            return nthLink_aux(index, this.chain);
        }

        /**
         * Preconditions: none
         * Postconditions: returns the link at the defined index in the specified chain
         * @param i
         * @param c
         */
        private Link nthLink_aux(int i, Link c) {
            if (i == 0) {
                return c;
            } else
                return nthLink_aux(i - 1, c.next);
        }

        /**
         * Preconditions: the specified element is present in the list
         * Postconditions: the specified element is removed from the list
         * @param element
         * @throws Exception
         */
        public void removeElement(T element) throws Exception {
            if (chain == null) {
                throw NoSuchElementException;
            }
            // while chain's next is not null and the value of chain.next is not equal to element,
            // set chain equal to chain.next
            // use this iteration to go through the linked list.
            else while ((chain.next != null) && !(chain.next.value.equals(element))) {
                Link testlink = chain.next;
                if (testlink.next.value.equals(element)) {
                    // if chain.next is equal to element, bypass the element.
                    chain.next.next = chain.next.next.next;
                } else if (testlink.next == null) {
                    throw NoSuchElementException;
                }
            }
        }

        /**
         * Preconditions: none
         * Postconditions: the Link at the specified index is removed
         * @param index
         * @throws Exception
         */
        public void removeByIndex(int index) throws Exception {
            if (index == 0) {
                // if index is 0, set chain equal to chain.next
                chain = chain.next;
            } else if (index > 0) {
                Link target = nthLink(index);
                while (target != null) {
                    if (target.next != null) {
                        target = target.next;
                    }
                    // if target.next is null, set target to null
                    else {
                        target = null;
                    }
                }
                return;
            } else
                throw IndexOutOfBoundsException;
        }

        /**
         * Preconditions: none
         * Postconditions: the specified link's value is printed
         * @param link
         */
        public void printLink(Link link) {
            if (link != null) {
                System.out.println(link.value.toString());
            }
        }

        /**
         * Preconditions: none
         * Postconditions: all of the links' values in the list are printed.
         */
        public void print() {
            // copy chain to a new variable
            Link head = this.chain;
            // while head is not null
            while (!(head == null)) {
                // print the current link
                this.printLink(head);
                // set head equal to the next link
                head = head.next;
            }
        }

        /**
         * Preconditions: none
         * Postconditions: The chain is set to null
         */
        public void clear() {
            this.chain = null;
        }

        /**
         * Preconditions: none
         * Postconditions: Places the defined link at the defined index of the list
         * @param index
         * @param val
         */
        public void splice(int index, T val) {
            // create a new link with value equal to val
            Link spliced = new Link(val);
            if (index <= 0) {
                // copy chain
                Link copy = chain;
                // set chain equal to spliced
                chain = spliced;
                // set chain.next equal to copy
                chain.next = copy;
            } else if (index > 0) {
                // create a target link equal to the link before the index
                Link target = nthLink(index - 1);
                // set the target's next equal to a new link with a next equal to the target's old next
                target.next = new Link(val, target.next);
            }
        }

        /**
         * Preconditions: none
         * Postconditions: Check to see if element is in the list, returns true
         * if it is and false if it isn't
         * @param element
         */
        public boolean Search(T element) {
            if (chain == null) {
                // return false if chain is null
                return false;
            }
            // while chain's next is not null and the value of chain.next is not equal to element,
            // set chain equal to chain.next
            // use this iteration to go through the linked list.
            else while ((chain.next != null) && !(chain.next.value.equals(element))) {
                Link testlink = chain.next;
                if (testlink.next.value.equals(element)) {
                    // if chain.next is equal to element, return true
                    return true;
                } else if (testlink.next == null) {
                    return false;
                }
            }
            return false;
        }

        /**
         * Preconditions: none
         * Postconditions: order of the links in the list is reversed.
         */
        public void reverse() {
            // copy chain
            Link current = chain;
            // set chain equal to null
            chain = null;
            while (current != null) {
                Link save = current;
                current = current.next;
                save.next = chain;
                chain = save;
            }
        }
    }

    Read the article

  • unable to update gridview

    - by bhakti
    Please help: I have added an update/edit command button to a GridView so I can update data in my SQL Server database, but I am unable to do it. The data is not updated in the database.

    === Code for OnRowUpdate ===

    protected void gRowUpdate(object sender, GridViewUpdateEventArgs e)
    {
        Books b = null;
        b = new Books();
        DataTable dt = null;
        GridView g = (GridView)sender;
        try
        {
            dt = new DataTable();
            b = new Books();
            b.author = Convert.ToString(g.Rows[e.RowIndex].FindControl("Author"));
            b.bookID = Convert.ToInt32(g.Rows[e.RowIndex].FindControl("BookID"));
            b.title = Convert.ToString(g.Rows[e.RowIndex].FindControl("Title"));
            b.price = Convert.ToDouble(g.Rows[e.RowIndex].FindControl("Price"));
            // b.rec = Convert.ToString(g.Rows[e.RowIndex].FindControl("Date_of_reciept"));
            b.ed = Convert.ToString(g.Rows[e.RowIndex].FindControl("Edition"));
            b.bill = Convert.ToString(g.Rows[e.RowIndex].FindControl("Edition"));
            b.cre_by = Convert.ToString(g.Rows[e.RowIndex].FindControl("Edition"));
            b.src = Convert.ToString(g.Rows[e.RowIndex].FindControl("Edition"));
            b.pages = Convert.ToInt32(g.Rows[e.RowIndex].FindControl("Edition"));
            b.pub = Convert.ToString(g.Rows[e.RowIndex].FindControl("Edition"));
            b.mod_by = Convert.ToString(g.Rows[e.RowIndex].FindControl("Edition"));
            b.remark = Convert.ToString(g.Rows[e.RowIndex].FindControl("Edition"));
            // b.year = Convert.ToString(g.Rows[e.RowIndex].FindControl("Edition"));
            b.updatebook(b);
            g.EditIndex = -1;
            dt = b.GetAllBooks();
            g.DataSource = dt;
            g.DataBind();
        }
        catch (Exception ex)
        {
            throw (ex);
        }
        finally
        {
            b = null;
        }
    }

    === My stored procedure for the update (it updates the database fine when executed in SQL Server Management Studio) ===

    set ANSI_NULLS ON
    set QUOTED_IDENTIFIER ON
    go
    ALTER PROCEDURE [dbo].[usp_updatebook]
        @bookid bigint,
        @author varchar(50),
        @title varchar(50),
        @price bigint,
        @src_equisition varchar(50),
        @bill_no varchar(50),
        @publisher varchar(50),
        @pages bigint,
        @remark varchar(50),
        @edition varchar(50),
        @created_by varchar(50),
        @modified_by varchar(50)
        /*@date_of_reciept datetime,
        @year_of_publication datetime*/
    AS
    declare @modified_on datetime
    set @modified_on = getdate()
    UPDATE books SET
        author = @author,
        title = @title,
        price = @price,
        src_equisition = @src_equisition,
        bill_no = @bill_no,
        publisher = @publisher,
        /*Date_of_reciept = @date_of_reciept,*/
        pages = @pages,
        remark = @remark,
        edition = @edition,
        /*Year_of_publication = @year_of_publication,*/
        created_by = @created_by,
        modified_on = @modified_on,
        modified_by = @modified_by
    WHERE bookid = @bookid

    === Class library function for the update ===

    public void updatebook(Books b)
    {
        DataAccess dbAccess = null;
        SqlCommand cmd = null;
        try
        {
            dbAccess = new DataAccess();
            cmd = dbAccess.GetSQLCommand("usp_updatebook", CommandType.StoredProcedure);
            cmd.Parameters.Add("@bookid", SqlDbType.VarChar, 50).Value = b.bookID;
            cmd.Parameters.Add("@author", SqlDbType.VarChar, 50).Value = b.author;
            cmd.Parameters.Add("@title", SqlDbType.VarChar, 50).Value = b.title;
            cmd.Parameters.Add("@price", SqlDbType.Money).Value = b.price;
            cmd.Parameters.Add("@publisher", SqlDbType.VarChar, 50).Value = b.pub;
            // cmd.Parameters.Add("@year_of_publication", SqlDbType.DateTime).Value = Convert.ToDateTime(b.year);
            cmd.Parameters.Add("@src_equisition", SqlDbType.VarChar, 50).Value = b.src;
            cmd.Parameters.Add("@bill_no", SqlDbType.VarChar, 50).Value = b.bill;
            cmd.Parameters.Add("@remark", SqlDbType.VarChar, 50).Value = b.remark;
            cmd.Parameters.Add("@pages", SqlDbType.Int).Value = b.pages;
            cmd.Parameters.Add("@edition", SqlDbType.VarChar, 50).Value = b.ed;
            // cmd.Parameters.Add("@date_of_reciept", SqlDbType.DateTime).Value = Convert.ToDateTime(b.rec);
            // cmd.Parameters.Add("@created_on", SqlDbType.DateTime).Value = Convert.ToDateTime(b.cre_on);
            cmd.Parameters.Add("@created_by", SqlDbType.VarChar, 50).Value = b.cre_by;
            // cmd.Parameters.Add("@modified_on", SqlDbType.DateTime).Value = Convert.ToDateTime(b.mod_on);
            cmd.Parameters.Add("@modified_by", SqlDbType.VarChar, 50).Value = b.mod_by;
            cmd.ExecuteNonQuery();
        }
        catch (Exception ex)
        {
            throw (ex);
        }
        finally
        {
            if (cmd.Connection != null && cmd.Connection.State == ConnectionState.Open)
                cmd.Connection.Close();
            dbAccess = null;
            cmd = null;
        }
    }

    I have also tried to do the update the following way:

    protected void gv1_updating(object sender, GridViewUpdateEventArgs e)
    {
        GridView g = (GridView)sender;
        abc a = new abc();
        DataTable dt = new DataTable();
        try
        {
            a.cd_Id = Convert.ToInt32(g.DataKeys[e.RowIndex].Values[0].ToString());
            // TextBox b = (TextBox)g.Rows[e.RowIndex].Cells[0].FindControl("cd_id");
            TextBox c = (TextBox)g.Rows[e.RowIndex].Cells[2].FindControl("cd_name");
            TextBox d = (TextBox)g.Rows[e.RowIndex].Cells[3].FindControl("version");
            TextBox f = (TextBox)g.Rows[e.RowIndex].Cells[4].FindControl("company");
            TextBox h = (TextBox)g.Rows[e.RowIndex].Cells[6].FindControl("created_by");
            TextBox i = (TextBox)g.Rows[e.RowIndex].Cells[8].FindControl("modified_by");
            // a.cd_Id = Convert.ToInt32(b.Text);
            a.cd_name = c.Text;
            a.ver = d.Text;
            a.comp = f.Text;
            a.cre_by = h.Text;
            a.mod_by = i.Text;
            a.updateDigi(a);
            g.EditIndex = -1;
            dt = a.GetAllDigi();
            g.DataSource = dt;
            g.DataBind();
        }
        catch (Exception ex)
        {
            throw (ex);
        }
        finally
        {
            dt = null;
            a = null;
            g = null;
        }
    }

    ...but this gives an "Index out of range" exception. Please do reply; thanks in advance.

    Read the article

  • Dynamic scoping in Clojure?

    - by j-g-faustus
    Hi, I'm looking for an idiomatic way to get dynamically scoped variables in Clojure (or a similar effect) for use in templates and such. Here is an example problem using a lookup table to translate tag attributes from some non-HTML format to HTML, where the table needs access to a set of variables supplied from elsewhere:

    (def *attr-table*
      ;; Key: [attr-key tag-name] or [boolean-function]
      ;; Value: [attr-key attr-value] (empty array to ignore)
      ;; Context: Variables "tagname", "akey", "aval"
      '(
        ;; translate :LINK attribute in <a> to :href
        [:LINK "a"] [:href aval]
        ;; translate :LINK attribute in <img> to :src
        [:LINK "img"] [:src aval]
        ;; throw exception if :LINK attribute in any other tag
        [:LINK] (throw (RuntimeException. (str "No match for " tagname)))
        ;; ... more rules
        ;; ignore string keys, used for internal bookkeeping
        [(string? akey)] [] ; ignore
        ))

    I want to be able to evaluate the rules (left hand side) as well as the result (right hand side), and I need some way to put the variables in scope at the location where the table is evaluated. I also want to keep the lookup and evaluation logic independent of any particular table or set of variables. I suppose there are similar issues involved in templates (for example for dynamic HTML), where you don't want to rewrite the template processing logic every time someone puts a new variable in a template.

    Here is one approach using global variables and bindings. I have included some logic for the table lookup:

    ;; Generic code, works with any table on the same format.
    (defn rule-match? [rule-val test-val]
      "true if a single rule matches a single argument value"
      (cond
        (not (coll? rule-val)) (= rule-val test-val) ; plain value
        (list? rule-val) (eval rule-val)             ; function call
        :else false))

    (defn rule-lookup [test-val rule-table]
      "looks up rule match for test-val. Returns result or nil."
      (loop [rules (partition 2 rule-table)]
        (when-not (empty? rules)
          (let [[select result] (first rules)]
            (if (every? #(boolean %) (map rule-match? select test-val))
              (eval result) ; evaluate and return result
              (recur (rest rules)))))))

    ;; Code specific to *attr-table*
    (def tagname) ; need these globals for the binding in html-attr
    (def akey)
    (def aval)

    (defn html-attr [tagname h-attr]
      "converts to html attributes"
      (apply hash-map
             (flatten
              (map (fn [[k v :as kv]]
                     (binding [tagname tagname akey k aval v]
                       (or (rule-lookup [k tagname] *attr-table*) kv)))
                   h-attr))))

    (defn test-attr []
      "test conversion"
      (prn "a" (html-attr "a" {:LINK "www.google.com"
                               "internal" 42
                               :title "A link"}))
      (prn "img" (html-attr "img" {:LINK "logo.png"})))

    user=> (test-attr)
    "a" {:href "www.google.com", :title "A link"}
    "img" {:src "logo.png"}

    This is nice in that the lookup logic is independent of the table, so it can be reused with other tables and different variables. (Plus, the general table approach is about a quarter of the size of the code I had when I did the translations "by hand" in a giant cond.) It is not so nice in that I need to declare every variable as a global for the binding to work.

    Here is another approach using a "semi-macro", a function with a syntax-quoted return value, that doesn't need globals:

    (defn attr-table [tagname akey aval]
      `([:LINK "a"] [:href ~aval]
        [:LINK "img"] [:src ~aval]
        [:LINK] (throw (RuntimeException. (str "No match for " tagname)))
        ;; ... more rules
        [(string? ~akey)] []))

    Only a couple of changes are needed to the rest of the code. In rule-match?, the function call is no longer a list when syntax-quoted:

    - (list? rule-val) (eval rule-val)
    + (seq? rule-val) (eval rule-val)

    In html-attr:

    - (binding [tagname tagname akey k aval v]
    -   (or (rule-lookup [k tagname] *attr-table*) kv)))
    + (or (rule-lookup [k tagname] (attr-table tagname k v)) kv)))

    And we get the same result without globals. (And without dynamic scoping.) Are there other alternatives to pass along sets of variable bindings declared elsewhere, without the globals required by Clojure's binding? Is there an idiomatic way of doing it, like Ruby's binding or JavaScript's function.apply(context)?

    Read the article

  • Access violation using LocalAlloc()

    - by PaulH
    I have a Visual Studio 2008 Windows Mobile 6 C++ application that is using an API that requires the use of LocalAlloc(). To make my life easier, I created an implementation of a standard allocator that uses LocalAlloc() internally:

    /// Standard library allocator implementation using LocalAlloc and LocalReAlloc
    /// to create a dynamically-sized array.
    /// Memory allocated by this allocator is never deallocated. That is up to the
    /// user.
    template< class T, int max_allocations >
    class LocalAllocator
    {
    public:
        typedef T value_type;
        typedef size_t size_type;
        typedef ptrdiff_t difference_type;
        typedef T* pointer;
        typedef const T* const_pointer;
        typedef T& reference;
        typedef const T& const_reference;

        pointer address( reference r ) const { return &r; };
        const_pointer address( const_reference r ) const { return &r; };

        LocalAllocator() throw() : c_( NULL ) { };

        /// Attempt to allocate a block of storage with enough space for n elements
        /// of type T. n>=1 && n<=max_allocations.
        /// If memory cannot be allocated, a std::bad_alloc() exception is thrown.
        pointer allocate( size_type n, const void* /*hint*/ = 0 )
        {
            if( NULL == c_ )
            {
                c_ = LocalAlloc( LPTR, sizeof( T ) * n );
            }
            else
            {
                HLOCAL c = LocalReAlloc( c_, sizeof( T ) * n, LHND );
                if( NULL == c )
                    LocalFree( c_ );
                c_ = c;
            }
            if( NULL == c_ )
                throw std::bad_alloc();
            return reinterpret_cast< T* >( c_ );
        };

        /// Normally, this would release a block of previously allocated storage.
        /// Since that's not what we want, this function does nothing.
        void deallocate( pointer /*p*/, size_type /*n*/ )
        {
            // no deallocation is performed. that is up to the user.
        };

        /// maximum number of elements that can be allocated
        size_type max_size() const throw() { return max_allocations; };

    private:
        /// current allocation point
        HLOCAL c_;
    }; // class LocalAllocator

    My application uses that allocator implementation in a std::vector:

    #define MAX_DIRECTORY_LISTING 512

    std::vector< WIN32_FIND_DATA,
                 LocalAllocator< WIN32_FIND_DATA, MAX_DIRECTORY_LISTING > > file_list;

    WIN32_FIND_DATA find_data = { 0 };
    HANDLE find_file = ::FindFirstFile( folder.c_str(), &find_data );
    if( NULL != find_file )
    {
        do
        {
            // access violation here on the 257th item.
            file_list.push_back( find_data );
        } while ( ::FindNextFile( find_file, &find_data ) );
        ::FindClose( find_file );
    }

    // data submitted to the API that requires LocalAlloc()'d array of WIN32_FIND_DATA structures
    SubmitData( &file_list.front() );

    On the 257th item added to the vector, the application crashes with an access violation:

    Data Abort: Thread=8e1b0400 Proc=8031c1b0 'rapiclnt'
    AKY=00008001 PC=03f9e3c8(coredll.dll+0x000543c8) RA=03f9ff04(coredll.dll+0x00055f04)
    BVA=21ae0020 FSR=00000007
    First-chance exception at 0x03f9e3c8 in rapiclnt.exe: 0xC0000005: Access violation reading location 0x01ae0020.

    LocalAllocator::allocate is called with n=512 and LocalReAlloc() succeeds. The actual access violation occurs within the std::vector code after the LocalAllocator::allocate call:

    0x03f9e3c8
    0x03f9ff04
    > MyLib.dll!stlp_std::priv::__copy_trivial(const void* __first = 0x01ae0020, const void* __last = 0x01b03020, void* __result = 0x01b10020) Line: 224, Byte Offsets: 0x3c C++
      MyLib.dll!stlp_std::vector<_WIN32_FIND_DATAW,LocalAllocator<_WIN32_FIND_DATAW,512> >::_M_insert_overflow(_WIN32_FIND_DATAW* __pos = 0x01b03020, _WIN32_FIND_DATAW& __x = {...}, stlp_std::__true_type& __formal = {...}, unsigned int __fill_len = 1, bool __atend = true) Line: 112, Byte Offsets: 0x5c C++
      MyLib.dll!stlp_std::vector<_WIN32_FIND_DATAW,LocalAllocator<_WIN32_FIND_DATAW,512> >::push_back(_WIN32_FIND_DATAW& __x = {...}) Line: 388, Byte Offsets: 0xa0 C++
      MyLib.dll!Foo(unsigned long int cbInput = 16, unsigned char* pInput = 0x01a45620, unsigned long int* pcbOutput = 0x1dabfbbc, unsigned char** ppOutput = 0x1dabfbc0, IRAPIStream* __formal = 0x00000000) Line: 66, Byte Offsets: 0x1e4 C++

    If anybody can point out what I may be doing wrong, I would appreciate it. Thanks, PaulH

    Read the article

  • Creating a file upload template in Doctrine ORM

    - by balupton
    Hey all. I'm using Doctrine 1.2 as my ORM for a Zend Framework project. I have defined the following model for a File:

    File:
      columns:
        id:
          primary: true
          type: integer(4)
          unsigned: true
        code:
          type: string(255)
          unique: true
          notblank: true
        path:
          type: string(255)
          notblank: true
        size:
          type: integer(4)
        type:
          type: enum
          values: [file,document,image,video,audio,web,application,archive]
          default: unknown
          notnull: true
        mimetype:
          type: string(20)
          notnull: true
        width:
          type: integer(2)
          unsigned: true
        height:
          type: integer(2)
          unsigned: true

    Now here is the File model PHP class (just skim through for now):

    <?php
    /**
     * File
     *
     * This class has been auto-generated by the Doctrine ORM Framework
     */
    class File extends BaseFile
    {
        public function setUp()
        {
            $this->hasMutator('file', 'setFile');
            parent::setUp();
        }

        public function setFile($file)
        {
            global $Application;

            // Configuration
            $config = array();
            $config['bal'] = $Application->getOption('bal');

            // Check the file
            if (!empty($file['error'])) {
                $error = $file['error'];
                switch ($file['error']) {
                    case UPLOAD_ERR_INI_SIZE:   $error = 'ini_size';   break;
                    case UPLOAD_ERR_FORM_SIZE:  $error = 'form_size';  break;
                    case UPLOAD_ERR_PARTIAL:    $error = 'partial';    break;
                    case UPLOAD_ERR_NO_FILE:    $error = 'no_file';    break;
                    case UPLOAD_ERR_NO_TMP_DIR: $error = 'no_tmp_dir'; break;
                    case UPLOAD_ERR_CANT_WRITE: $error = 'cant_write'; break;
                    default:                    $error = 'unknown';    break;
                }
                throw new Doctrine_Exception('error-application-file-' . $error);
                return false;
            }
            if (empty($file['tmp_name']) || !is_uploaded_file($file['tmp_name'])) {
                throw new Doctrine_Exception('error-application-file-invalid');
                return false;
            }

            // Prepare config
            $file_upload_path = realpath($config['bal']['files']['upload_path']) . DIRECTORY_SEPARATOR;

            // Prepare file
            $filename = $file['name'];
            $file_old_path = $file['tmp_name'];
            $file_new_path = $file_upload_path . $filename;
            $exist_attempt = 0;
            while (file_exists($file_new_path)) {
                // File already exists
                ++$exist_attempt;
                // Add the attempt to the end of the file
                $file_new_path = $file_upload_path . get_filename($filename, false)
                               . $exist_attempt . get_extension($filename);
            }

            // Move file
            $success = move_uploaded_file($file_old_path, $file_new_path);
            if (!$success) {
                throw new Doctrine_Exception('Unable to upload the file.');
                return false;
            }

            // Secure
            $file_path = realpath($file_new_path);
            $file_size = filesize($file_path);
            $file_mimetype = get_mime_type($file_path);
            $file_type = get_filetype($file_path);

            // Apply
            $this->path = $file_path;
            $this->size = $file_size;
            $this->mimetype = $file_mimetype;
            $this->type = $file_type;

            // Apply: Image
            if ($file_type === 'image') {
                $image_dimensions = image_dimensions($file_path);
                if (!empty($image_dimensions)) {
                    // It is not a image we can modify
                    $this->width = 0;
                    $this->height = 0;
                } else {
                    $this->width = $image_dimensions['width'];
                    $this->height = $image_dimensions['height'];
                }
            }

            // Done
            return true;
        }

        /**
         * Download the File
         */
        public function download()
        {
            global $Application;

            // File path
            $file_upload_path = realpath($config['bal']['files']['upload_path']) . DIRECTORY_SEPARATOR;
            $file_path = $file_upload_path . $this->file_path;

            // Output result and download
            become_file_download($file_path, null, null);
            die();
        }

        public function postDelete($Event)
        {
            global $Application;

            // Prepare
            $Invoker = $Event->getInvoker();

            // Configuration
            $config = array();
            $config['bal'] = $Application->getOption('bal');

            // File path
            $file_upload_path = realpath($config['bal']['files']['upload_path']) . DIRECTORY_SEPARATOR;
            $file_path = $file_upload_path . $this->file_path;

            // Delete the file
            unlink($file_path);

            // Done
            return true;
        }
    }

    What I am hoping to accomplish is for the custom functionality above to be turned into a validator, template, or something along those lines, so that I can do something like:

    File:
      actAs:
        BalFile:
      columns:
        id:
          primary: true
          type: integer(4)
          unsigned: true
        code:
          type: string(255)
          unique: true
          notblank: true
        path:
          type: string(255)
          notblank: true
        size:
          type: integer(4)
        type:
          type: enum
          values: [file,document,image,video,audio,web,application,archive]
          default: unknown
          notnull: true
        mimetype:
          type: string(20)
          notnull: true
        width:
          type: integer(2)
          unsigned: true
        height:
          type: integer(2)
          unsigned: true

    I'm hoping for a validator so that if I do $File->setFile($_FILE['uploaded_file']); it will provide a validation error. However, the Doctrine documentation says little about custom validators, especially in the context of "virtual" fields. So in summary, my question is: how can I go about making a template/extension to port this functionality? I have tried before with templates but always gave up after a day :/ If you could take the time to port the above I would greatly appreciate it.

    Read the article

  • SortList duplicated key, but it shouldn't

    - by Luca
    I have a class which implements the IList interface. I require a "sorted view" of this list, but without modifying it (I cannot sort the IList class directly). This view must be updated when the original list is modified, keeping items sorted. So I've introduced a SortList creation method which creates a SortedList that has a comparer for the specific object contained in the original list. Here is the snippet of code:

    public class MyList<T> : ICollection, IList<T>
    {
        ...

        public SortedList CreateSortView(string property)
        {
            try
            {
                Lock();
                SortListView sortView;
                if (mSortListViews.ContainsKey(property) == false)
                {
                    // Create sorted view
                    sortView = new SortListView(property, Count);
                    mSortListViews.Add(property, sortView);
                    foreach (T item in Items)
                        sortView.Add(item);
                }
                else
                    sortView = mSortListViews[property];
                sortView.ReferenceCount++;
                return (sortView);
            }
            finally
            {
                Unlock();
            }
        }

        public void DeleteSortView(string property)
        {
            try
            {
                Lock();
                // Unreference sorted view
                mSortListViews[property].ReferenceCount--;
                // Remove sorted view
                if (mSortListViews[property].ReferenceCount == 0)
                    mSortListViews.Remove(property);
            }
            finally
            {
                Unlock();
            }
        }

        protected class SortListView : SortedList
        {
            /// <summary>
            /// Construct a view sorted on the given property.
            /// </summary>
            /// <param name="property"></param>
            /// <param name="capacity"></param>
            public SortListView(string property, int capacity)
                : base(new GenericPropertyComparer(typeof(T).GetProperty(property, BindingFlags.Instance | BindingFlags.Public)), capacity)
            { }

            /// <summary>
            /// Reference count.
            /// </summary>
            public int ReferenceCount = 0;

            public void Add(T item)
            {
                Add(item, item);
            }

            public void Remove(T item)
            {
                // Base implementation
                base.Remove(item);
            }

            /// <summary>
            /// Compare objects on a generic property.
            /// </summary>
            class GenericPropertyComparer : IComparer
            {
                #region Constructors

                /// <summary>
                /// Construct a GenericPropertyComparer specifying the property to compare.
                /// </summary>
                /// <param name="property">
                /// A <see cref="PropertyInfo"/> which specifies the property to be compared.
                /// </param>
                /// <remarks>
                /// The <paramref name="property"/> parameter implies that the compared objects have
                /// the specified property. The property must be readable, and its type must
                /// implement the IComparable interface.
                /// </remarks>
                public GenericPropertyComparer(PropertyInfo property)
                {
                    if (property == null)
                        throw new ArgumentException("property doesn't specify a valid property");
                    if (property.CanRead == false)
                        throw new ArgumentException("property specifies a write-only property");
                    if (property.PropertyType.GetInterface("IComparable") == null)
                        throw new ArgumentException("property type doesn't implement IComparable");
                    mSortingProperty = property;
                }

                #endregion

                #region IComparer Implementation

                public int Compare(object x, object y)
                {
                    IComparable propX = (IComparable)mSortingProperty.GetValue(x, null);
                    IComparable propY = (IComparable)mSortingProperty.GetValue(y, null);
                    return (propX.CompareTo(propY));
                }

                /// <summary>
                /// Sorting property.
                /// </summary>
                private PropertyInfo mSortingProperty = null;

                #endregion
            }
        }

        /// <summary>
        /// Sorted views of this ReactList.
        /// </summary>
        private Dictionary<string, SortListView> mSortListViews = new Dictionary<string, SortListView>();
    }

    In practice, class users request the creation of a SortListView by specifying the name of the property which determines the sorting, and using reflection each SortListView defines an IComparer that keeps the items sorted. Whenever an item is added to or removed from the original list, every created SortListView is updated with the same operation. This seems fine at first glance, but it gives me the following exception when adding items to the SortedList:

    System.ArgumentException: Item has already been added.
    Key in dictionary: 'PowerShell_ISE [C:\Windows\sysWOW64\WindowsPowerShell\v1.0\PowerShell_ISE.exe]'
    Key being added: 'PowerShell_ISE [C:\Windows\system32\WindowsPowerShell\v1.0\PowerShell_ISE.exe]'

    As you can see from the exception message, thrown by SortListView.Add(object), the string representations of the two keys are different (note the path of the executable). Why does SortedList give me that exception? To solve this I tried to implement a GetHashCode override for the underlying object, but without success:

    public override int GetHashCode()
    {
        return (
            base.GetHashCode() ^
            mApplicationName.GetHashCode() ^
            mApplicationPath.GetHashCode() ^
            mCommandLine.GetHashCode() ^
            mWorkingDirectory.GetHashCode()
        );
    }
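    For what it's worth: SortedList decides key equality through its IComparer alone; GetHashCode and Equals are never consulted, which is why the override above has no effect. Two distinct items whose sort property values compare equal (here, apparently the application name) therefore collide as "duplicate keys" no matter how they hash. If equal property values must be allowed in a view, one workaround is a tie-breaker in the comparer so distinct objects never compare as equal. A sketch of a modified Compare under that assumption (the hash-code fallback is illustrative, needs using System.Runtime.CompilerServices, and any stable unique secondary ordering would do):

    public int Compare(object x, object y)
    {
        IComparable propX = (IComparable)mSortingProperty.GetValue(x, null);
        IComparable propY = (IComparable)mSortingProperty.GetValue(y, null);

        int result = propX.CompareTo(propY);
        if (result != 0)
            return result;

        // Tie-breaker: the sort properties compare equal, but the items are
        // distinct objects; fall back to an arbitrary-but-consistent ordering
        // so SortedList does not treat them as duplicate keys.
        if (ReferenceEquals(x, y))
            return 0;
        return RuntimeHelpers.GetHashCode(x).CompareTo(RuntimeHelpers.GetHashCode(y));
    }

    Note that hash codes can themselves collide, so a sturdier secondary key in production code would be something like a monotonically increasing insertion counter stored with each item.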

    Read the article

  • Integrating JavaScript Unit Tests with Visual Studio

    - by Stephen Walther
    Modern ASP.NET web applications take full advantage of client-side JavaScript to provide better interactivity and responsiveness. If you are building an ASP.NET application in the right way, you quickly end up with lots and lots of JavaScript code.

    When writing server code, you should be writing unit tests. One big advantage of unit tests is that they provide you with a safety net that enables you to safely modify your existing code – for example, fix bugs, add new features, and make performance enhancements – without breaking your existing code. Every time you modify your code, you can execute your unit tests to verify that you have not broken anything.

    For the same reason that you should write unit tests for your server code, you should write unit tests for your client code. JavaScript is just as susceptible to bugs as C#.

    There is no shortage of unit testing frameworks for JavaScript. Each of the major JavaScript libraries has its own unit testing framework. For example, jQuery has QUnit, Prototype has UnitTestJS, YUI has YUI Test, and Dojo has Dojo Objective Harness (DOH).

    The challenge is integrating a JavaScript unit testing framework with Visual Studio. Visual Studio and Visual Studio ALM provide fantastic support for server-side unit tests. You can easily view the results of running your unit tests in the Visual Studio Test Results window. You can set up a check-in policy which requires that all unit tests pass before your source code can be committed to the source code repository. In addition, you can set up Team Build to execute your unit tests automatically.

    Unfortunately, Visual Studio does not provide "out-of-the-box" support for JavaScript unit tests. MS Test, the unit testing framework included in Visual Studio, does not support JavaScript unit tests. As soon as you leave the server world, you are left on your own.

    The goal of this blog entry is to describe one approach to integrating JavaScript unit tests with MS Test so that you can execute your JavaScript unit tests side-by-side with your C# unit tests. The goal is to enable you to execute JavaScript unit tests in exactly the same way as server-side unit tests. You can download the source code described by this project by scrolling to the end of this blog entry.

    Rejected Approach: Browser Launchers

    One popular approach to executing JavaScript unit tests is to use a browser as a test-driver. When you use a browser as a test-driver, you open up a browser window to execute and view the results of executing your JavaScript unit tests. For example, QUnit – the unit testing framework for jQuery – takes this approach. The following HTML page illustrates how you can use QUnit to create a unit test for a function named addNumbers().
    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
    <html>
    <head>
        <title>Using QUnit</title>
        <link rel="stylesheet" href="http://github.com/jquery/qunit/raw/master/qunit/qunit.css" type="text/css" />
    </head>
    <body>
        <h1 id="qunit-header">QUnit example</h1>
        <h2 id="qunit-banner"></h2>
        <div id="qunit-testrunner-toolbar"></div>
        <h2 id="qunit-userAgent"></h2>
        <ol id="qunit-tests"></ol>
        <div id="qunit-fixture">test markup, will be hidden</div>
        <script type="text/javascript" src="http://code.jquery.com/jquery-latest.js"></script>
        <script type="text/javascript" src="http://github.com/jquery/qunit/raw/master/qunit/qunit.js"></script>
        <script type="text/javascript">
            // The function to test
            function addNumbers(a, b) {
                return a + b;
            }

            // The unit test
            test("Test of addNumbers", function () {
                equals(4, addNumbers(1, 3), "1+3 should be 4");
            });
        </script>
    </body>
    </html>

    This test verifies that calling addNumbers(1,3) returns the expected value 4. When you open this page in a browser, you can see that this test does, in fact, pass.

    The idea is that you can quickly refresh this QUnit HTML JavaScript test driver page in your browser whenever you modify your JavaScript code. In other words, you can keep a browser window open and keep refreshing it over and over while you are developing your application. That way, you can know very quickly whenever you have broken your JavaScript code.

    While easy to setup, there are several big disadvantages to this approach to executing JavaScript unit tests:

    - You must view your JavaScript unit test results in a different location than your server unit test results. The JavaScript unit test results appear in the browser and the server unit test results appear in the Visual Studio Test Results window. Because all of your unit test results don't appear in a single location, you are more likely to introduce bugs into your code without noticing it.
    - Because your unit tests are not integrated with Visual Studio – in particular, MS Test – you cannot easily include your JavaScript unit tests when setting up check-in policies or when performing automated builds with Team Build.

    A more sophisticated approach to using a browser as a test-driver is to automate the web browser. Instead of launching the browser and loading the test code yourself, you use a framework to automate this process. There are several different testing frameworks that support this approach:

    - Selenium – Selenium is a very powerful framework for automating browser tests. You can create your tests by recording a Firefox session or by writing the test driver code in server code such as C#. You can learn more about Selenium at http://seleniumhq.org/.
    - LTAF – The ASP.NET team uses the Lightweight Test Automation Framework to test JavaScript code in the ASP.NET framework. You can learn more about LTAF by visiting the project home at CodePlex: http://aspnet.codeplex.com/releases/view/35501
    - jsTestDriver – This framework uses Java to automate the browser. jsTestDriver creates a server which can be used to automate multiple browsers simultaneously. This project is located at http://code.google.com/p/js-test-driver/
    - TestSwarm – This framework, created by John Resig, uses PHP to automate the browser. Like jsTestDriver, the framework creates a test server. You can open multiple browsers that are automated by the test server. Learn more about TestSwarm by visiting the following address: https://github.com/jeresig/testswarm/wiki
    - Yeti – This is the framework introduced by Yahoo for automating browser tests. Yeti uses server-side JavaScript and depends on Node.js. Learn more about Yeti at http://www.yuiblog.com/blog/2010/08/25/introducing-yeti-the-yui-easy-testing-interface/

    All of these frameworks are great for integration tests – however, they are not the best frameworks to use for unit tests. In one way or another, all of these frameworks depend on executing tests within the context of a "living and breathing" browser.

    If you create an ASP.NET Unit Test then Visual Studio will launch a web server before executing the unit test. Why is launching a web server so bad? It is not the worst thing in the world. However, it does introduce dependencies that prevent your code from being tested in isolation. One of the defining features of a unit test – versus an integration test – is that a unit test tests code in isolation. Another problem with launching a web server when performing unit tests is that launching a web server can be slow. If you cannot execute your unit tests quickly, you are less likely to execute your unit tests each and every time you make a code change. You are much more likely to fall into the pit of failure.

    Launching a browser when performing a JavaScript unit test has all of the same disadvantages as launching a web server when performing an ASP.NET unit test. Instead of testing a unit of JavaScript code in isolation, you are testing JavaScript code within the context of a particular browser. Using the frameworks listed above for integration tests makes perfect sense. However, I want to consider a different approach for creating unit tests for JavaScript code.

    Using Server-Side JavaScript for JavaScript Unit Tests

    A completely different approach to executing JavaScript unit tests is to perform the tests outside of any browser. If you really want to test JavaScript then you should test JavaScript and leave the browser out of the testing process. There are several ways that you can execute JavaScript on the server outside the context of any browser:

    - Rhino – Rhino is an implementation of JavaScript written in Java. The Rhino project is maintained by the Mozilla project. Learn more about Rhino at http://www.mozilla.org/rhino/
    - V8 – V8 is the open-source Google JavaScript engine written in C++. This is the JavaScript engine used by the Chrome web browser. You can download V8 and embed it in your project by visiting http://code.google.com/p/v8/
    - JScript – JScript is the JavaScript Script Engine used by Internet Explorer (up to but not including Internet Explorer 9), Windows Script Host, and Active Server Pages.

    Internet Explorer is still the most popular web browser. Therefore, I decided to focus on using the JScript Script Engine to execute JavaScript unit tests.

    Using the Microsoft Script Control

    There are two basic ways that you can pass JavaScript to the JScript Script Engine and execute the code: use the Microsoft Windows Script Interfaces or use the Microsoft Script Control. The difficult and proper way to execute JavaScript using the JScript Script Engine is to use the Microsoft Windows Script Interfaces. You can learn more about the Script Interfaces by visiting http://msdn.microsoft.com/en-us/library/t9d4xf28(VS.85).aspx The main disadvantage of using the Script Interfaces is that they are difficult to use from .NET.
    There is a great series of articles on using the Script Interfaces from C# located at http://www.drdobbs.com/184406028. I picked the easier alternative and used the Microsoft Script Control. The Microsoft Script Control is an ActiveX control that provides a higher level abstraction over the Windows Script Interfaces. You can download the Microsoft Script Control from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d7e31492-2595-49e6-8c02-1426fec693ac

    After you download the Microsoft Script Control, you need to add a reference to it in your project. Select the Visual Studio menu option Project, Add Reference to open the Add Reference dialog. Select the COM tab and add Microsoft Script Control 1.0.

    Using the Script Control is easy. You call the Script Control AddCode() method to add JavaScript code to the Script Engine. Next, you call the Script Control Run() method to run a particular JavaScript function. The reference documentation for the Microsoft Script Control is located at the MSDN website: http://msdn.microsoft.com/en-us/library/aa227633%28v=vs.60%29.aspx

    Creating the JavaScript Code to Test

    To keep things simple, let's imagine that you want to test the following JavaScript function named addNumbers() which simply adds two numbers together:

    MvcApplication1\Scripts\Math.js

    function addNumbers(a, b) {
        return 5;
    }

    Notice that the addNumbers() method always returns the value 5. Right now, it will not pass a good unit test. Create this file and save it in your project with the name Math.js in your MVC project's Scripts folder (save the file in your actual MVC application and not your MVC test application).

    Creating the JavaScript Test Helper Class

    To make it easier to use the Microsoft Script Control in unit tests, we can create a helper class. This class contains two methods:

    - LoadFile() – Loads a JavaScript file. Use this method to load the JavaScript file being tested or the JavaScript file containing the unit tests.
    - ExecuteTest() – Executes the JavaScript code. Use this method to execute a JavaScript unit test.

    Here's the code for the JavaScriptTestHelper class:

    JavaScriptTestHelper.cs

    using System;
    using System.IO;
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using MSScriptControl;

    namespace MvcApplication1.Tests
    {
        public class JavaScriptTestHelper : IDisposable
        {
            private ScriptControl _sc;
            private TestContext _context;

            /// <summary>
            /// You need to use this helper with Unit Tests and not
            /// Basic Unit Tests because you need a Test Context
            /// </summary>
            /// <param name="testContext">Unit Test Test Context</param>
            public JavaScriptTestHelper(TestContext testContext)
            {
                if (testContext == null)
                {
                    throw new ArgumentNullException("TestContext");
                }
                _context = testContext;

                _sc = new ScriptControl();
                _sc.Language = "JScript";
                _sc.AllowUI = false;
            }

            /// <summary>
            /// Load the contents of a JavaScript file into the
            /// Script Engine.
            /// </summary>
            /// <param name="path">Path to JavaScript file</param>
            public void LoadFile(string path)
            {
                var fileContents = File.ReadAllText(path);
                _sc.AddCode(fileContents);
            }

            /// <summary>
            /// Pass the name of the test that you want to execute.
            /// </summary>
            /// <param name="testMethodName">JavaScript function name</param>
            public void ExecuteTest(string testMethodName)
            {
                dynamic result = null;
                try
                {
                    result = _sc.Run(testMethodName, new object[] { });
                }
                catch
                {
                    var error = ((IScriptControl)_sc).Error;
                    if (error != null)
                    {
                        var description = error.Description;
                        var line = error.Line;
                        var column = error.Column;
                        var text = error.Text;
                        var source = error.Source;
                        if (_context != null)
                        {
                            var details = String.Format("{0} \r\nLine: {1} Column: {2}", source, line, column);
                            _context.WriteLine(details);
                        }
                    }
                    throw new AssertFailedException(error.Description);
                }
            }

            public void Dispose()
            {
                _sc = null;
            }
        }
    }

    Notice that the JavaScriptTestHelper class requires a Test Context to be instantiated. For this reason, you can use the JavaScriptTestHelper only with a Visual Studio Unit Test and not a Basic Unit Test (these are two different types of Visual Studio project items). Add the JavaScriptTestHelper file to your MVC test application (for example, MvcApplication1.Tests).

    Creating the JavaScript Unit Test

    Next, we need to create the JavaScript unit test function that we will use to test the addNumbers() function. Create a folder in your MVC test project named JavaScriptTests and add the following JavaScript file to this folder:

    MvcApplication1.Tests\JavaScriptTests\MathTest.js

    /// <reference path="JavaScriptUnitTestFramework.js"/>
    function testAddNumbers() {
        // Act
        var result = addNumbers(1, 3);

        // Assert
        assert.areEqual(4, result, "addNumbers did not return right value!");
    }

    The testAddNumbers() function takes advantage of another JavaScript library named JavaScriptUnitTestFramework.js. This library contains all of the code necessary to make assertions. Add the following JavaScriptUnitTestFramework.js to the same folder as the MathTest.js file:

    MvcApplication1.Tests\JavaScriptTests\JavaScriptUnitTestFramework.js

    var assert = {
        areEqual: function (expected, actual, message) {
            if (expected !== actual) {
                throw new Error("Expected value " + expected +
                    " is not equal to " + actual + ". " + message);
            }
        }
    };

    There is only one type of assertion supported by this file: the areEqual() assertion. Most likely, you would want to add additional types of assertions to this file to make it easier to write your JavaScript unit tests.

    Deploying the JavaScript Test Files

    This step is non-intuitive. When you use Visual Studio to run unit tests, Visual Studio creates a new folder and executes a copy of the files in your project. After you run your unit tests, your Visual Studio Solution will contain a new folder named TestResults that includes a subfolder for each test run. You need to configure Visual Studio to deploy your JavaScript files to the test run folder or Visual Studio won't be able to find your JavaScript files when you execute your unit tests. You will get an error that looks something like this when you attempt to execute your unit tests:

    You can configure Visual Studio to deploy your JavaScript files by adding a Test Settings file to your Visual Studio Solution. It is important to understand that you need to add this file to your Visual Studio Solution and not to a particular Visual Studio project. Right-click your Solution in the Solution Explorer window and select the menu option Add, New Item. Select the Test Settings item and click the Add button. After you create a Test Settings file for your solution, you can indicate that you want a particular folder to be deployed whenever you perform a test run.
Select the menu option Test, Edit Test Settings to edit your test configuration file. Select the Deployment tab and select your MVC test project’s JavaScriptTests folder to deploy. Click the Apply button and the Close button to save the changes and close the dialog.
Creating the Visual Studio Unit Test
The very last step is to create the Visual Studio unit test (the MS Test unit test). Add a new unit test to your MVC test project by selecting the menu option Add, New Item and selecting the Unit Test project item (do not select the Basic Unit Test project item). The difference between a Basic Unit Test and a Unit Test is that a Unit Test includes a Test Context. We need this Test Context to use the JavaScriptTestHelper class that we created earlier. Enter the following test method for the new unit test:
[TestMethod]
public void TestAddNumbers()
{
    var jsHelper = new JavaScriptTestHelper(this.TestContext);
    // Load JavaScript files
    jsHelper.LoadFile("JavaScriptUnitTestFramework.js");
    jsHelper.LoadFile(@"..\..\..\MvcApplication1\Scripts\Math.js");
    jsHelper.LoadFile("MathTest.js");
    // Execute JavaScript Test
    jsHelper.ExecuteTest("testAddNumbers");
}
This code uses the JavaScriptTestHelper to load three files:
JavaScriptUnitTestFramework.js – Contains the assert functions.
Math.js – Contains the addNumbers() function from your MVC application which is being tested.
MathTest.js – Contains the JavaScript unit test function.
Next, the test method calls the JavaScriptTestHelper ExecuteTest() method to execute the testAddNumbers() JavaScript function.
Running the Visual Studio JavaScript Unit Test
After you complete all of the steps described above, you can execute the JavaScript unit test just like any other unit test. You can use the keyboard combination CTRL-R, CTRL-A to run all of the tests in the current Visual Studio Solution. Alternatively, you can use the buttons in the Visual Studio toolbar to run the tests. (Unfortunately, the Run All Impacted Tests button won’t work correctly because Visual Studio won’t detect that your JavaScript code has changed. Therefore, you should use either the Run Tests in Current Context or Run All Tests in Solution options instead.)
The results of running the JavaScript tests appear side-by-side with the results of running the server tests in the Test Results window. Notice that the TestAddNumbers() JavaScript test has failed. That is good because our addNumbers() function is hard-coded to always return the value 5. If you double-click the failing JavaScript test, you can view additional details such as the JavaScript error message and the line number of the JavaScript code that failed.
Summary
The goal of this blog entry was to explain an approach to creating JavaScript unit tests that can be easily integrated with Visual Studio and Visual Studio ALM. I described how you can use the Microsoft Script Control to execute JavaScript on the server. By taking advantage of the Microsoft Script Control, we were able to execute our JavaScript unit tests side-by-side with all of our other unit tests and view the results in the standard Visual Studio Test Results window.
You can download the code discussed in this blog entry from here: http://StephenWalther.com/downloads/Blog/JavaScriptUnitTesting/JavaScriptUnitTests.zip Before running this code, you need to first install the Microsoft Script Control which you can download from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d7e31492-2595-49e6-8c02-1426fec693ac
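If you end up writing many of these tests, the file-loading calls can be factored into a standard MSTest [TestInitialize] method so that each test method stays a one-liner. A minimal sketch, assuming the same project layout and JavaScriptTestHelper class shown above:
[TestClass]
public class MathJavaScriptTests
{
    private JavaScriptTestHelper _jsHelper;

    // MSTest injects the Test Context into this property automatically
    public TestContext TestContext { get; set; }

    [TestInitialize]
    public void Setup()
    {
        // Create the helper and load the shared files once per test
        _jsHelper = new JavaScriptTestHelper(this.TestContext);
        _jsHelper.LoadFile("JavaScriptUnitTestFramework.js");
        _jsHelper.LoadFile(@"..\..\..\MvcApplication1\Scripts\Math.js");
        _jsHelper.LoadFile("MathTest.js");
    }

    [TestCleanup]
    public void Teardown()
    {
        _jsHelper.Dispose();
    }

    [TestMethod]
    public void TestAddNumbers()
    {
        _jsHelper.ExecuteTest("testAddNumbers");
    }
}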

    Read the article

  • How to fix “The requested service, ‘net.pipe://localhost/SecurityTokenServiceApplication/appsts.svc’ could not be activated.”

    - by ybbest
Problem: When I tried to publish a SharePoint 2013 workflow, I received the error: The requested service, ‘net.pipe://localhost/SecurityTokenServiceApplication/appsts.svc’ could not be activated. After that, my workflow stopped working and every time I started a workflow I received the following error message: System.ApplicationException: PreconditionFailed ---> System.ApplicationException: Error in the application. --- End of inner exception stack trace --- at System.Activities.Statements.Throw.Execute(CodeActivityContext context) at System.Activities.CodeActivity.InternalExecute(ActivityInstance instance, ActivityExecutor executor, BookmarkManager bookmarkManager) at System.Activities.Runtime.ActivityExecutor.ExecuteActivityWorkItem.ExecuteBody(ActivityExecutor executor, BookmarkManager bookmarkManager, Location resultLocation) Analysis: I found the cause by visiting http://localhost:32843/SecurityTokenServiceApplication/securitytoken.svc; the error shown there (screenshot not reproduced here) pointed to insufficient memory. Solution: The solution is basically to give the server more memory. In a development environment, you can restart noderunner.exe or some other services to release some memory. To verify you have enough memory, browse to http://localhost:32843/SecurityTokenServiceApplication/securitytoken.svc; it should return the normal service information page (shown in a screenshot in the original post). Then you can republish your workflow and it will work like a charm.

    Read the article

  • Try and catch error trapping: why is it so significant?

    - by oded
    Sorry if it sounds dumb; I am in the process of learning Java and I am trying to understand why on earth I should "try" a method in order to catch errors. It looks to me like the concept is letting a process run while assuming it is not properly developed. Should I always assume that a program based on classes and inheritance is bound to hit unexpected errors, which I should handle with such heavy tools as try/catch/throw/throws? Should all Java programs be wrapped in a try/catch framework? Thanks, Oded.
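    For readers puzzling over the same question: the point of try/catch is not that your own code is assumed broken, but that some failures (a missing file, a dropped network connection) cannot be prevented by any amount of careful coding and must be handled somewhere. A minimal illustration, sketched in C# (the Java syntax is nearly identical; the file name here is just an example):
    using System;
    using System.IO;

    class Program
    {
        static void Main()
        {
            try
            {
                // This can fail for reasons outside the program's control,
                // no matter how well the code itself is written.
                string text = File.ReadAllText("settings.txt");
                Console.WriteLine(text);
            }
            catch (FileNotFoundException ex)
            {
                // Recover gracefully instead of crashing the whole program.
                Console.WriteLine("Settings file missing, using defaults: " + ex.Message);
            }
        }
    }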

    Read the article

  • Is this right in the use case of the exec method of child_process? Is there a way to copy the environment along with the require modules too?

    - by L2L2L
    I'm learning Node. I am using child_process to move data to another script to be executed, but it seems that it does not copy the whole environment, or I could be doing something wrong. How do I copy the whole environment (require'd modules too)? Or is that what spawn is for? I'm not clear on the differences between spawn, exec and execFile (although execFile looks like what I'm doing at the bottom, but with exec... right?). I would just love to have some clarity on this matter. Please, anyone? Thank you. parent.js - "use strict"; var fs, path, _err; fs = require("fs"), path = require("path"), _err = require("./err.js"); var url; url= process.argv[1]; var dirname, locate_r; dirname = path.dirname(url); locate_r = dirname + "/" + "test.json";//path.join(dirname,"/", "test.json"); var flag, str; flag = "r", str = ""; fs.open(locate_r, flag, function opd(error, fd){ if (error){_err(error, function(){ fs.close(fd,function(){ process.stderr.write("\n" + "In Finally Block: File Closed!" + "\n");});})} var readBuff, buffOffset, buffLength, filePos; readBuff = new Buffer(15), buffOffset = 0, buffLength = readBuff.length, filePos = 0; fs.read(fd, readBuff, buffOffset, buffLength, filePos, function rd(error, readBytes){ error&&_err(error, fd); str = readBuff.toString("utf8"); process.env.str = str; process.stdout.write("str: "+ str + "\n" + "readBuff: " + readBuff + "\n"); fs.close(fd, function(){process.stdout.write( "Read and Closed File." + "\n" )}); //write(str); //run test for process.exec** var env, varName, envCopy, exec; env = process.env, varName, envCopy = {}, exec = require("child_process").exec; for(varName in env){ envCopy[varName] = env[varName]; } process.env.fs = fs, process.env.path = path, process.env.dirname = dirname, process.env.flag = flag, process.env.str = str, process.env._err = _err; process.env.fd = fd; exec("node child.js", env, function(error, stdout, stderr){ if(error){throw (new Error(error));} }); }); }); child.js - "use strict"; var fs, path, _err; fs = require("fs"), path = require("path"), _err = require("./err.js"); var fd, fs, flag, path, dirname, str, _err; fd = process.env.fd, //fs = process.env.fs, //path = process.env.path, dirname = process.env.dirname, flag = process.env.flag, str = process.env.str, _err = process.env._err; var url; url= process.argv[1]; var locate_r; dirname = path.dirname(url); locate_r = dirname + "/" + "test.json";//path.join(dirname,"/", "test.json"); //function write(str){ var locate_a; locate_a = dirname + "/" + "test.json"; //path.join(dirname,"/", "test.json"); flag = "a"; fs.open(locate_a, flag, function opd(error, fd){ error&&_err(error, fs, fd); var writeBuff, buffPos, buffLgh, filePs; writeBuff = new Buffer(str), process.stdout.write( "writeBuff: " + writeBuff + "\n" + "str: " + str + "\n"), buffPos = 0, buffLgh = writeBuff.length, filePs = buffLgh;//null; fs.write(fd, writeBuff, buffPos, buffLgh, filePs-3, function(error, written){ error&&_err(error, function(){ fs.close(fd,function(){ process.stderr.write("\n" + "In Finally Block: File Closed!" + "\n"); }); }); fs.close(fd, function(){process.stdout.write( "Written and Closed File." + "\n");}); }); }); //} err.js - "use strict"; var fs; fs = require("fs"); module.exports = function _err(err, scp, cd){ try{ throw (new Error(err)); }catch(e){ process.stderr.write(e + "\n"); }finally{ cd; } }

    Read the article

  • How to remove CRUD operations from Entity Class

    - by GlutVonSmark
    Trying to get my head around removing datastore access from my entity classes. Let's say I have an AccountsGroup entity class. I put all the DB access into an AccountsGroupRepository class. Now, should I have a DeleteFromDB method in the AccountsGroup class that calls the repository? Public Sub DeleteFromDB Dim repository As New AccountsGroupRepository(Me) repository.DeleteFromDB End Sub Or should I just always use the repository whenever I need to delete an entity, and not have the CRUD methods in the entity class? What happens when there is some business logic validation that needs to be done before the delete can proceed? For example, if an AccountsGroup still has some Accounts in it, the delete method should throw an exception. Where do I put that?
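    One common arrangement matching the second option in the question is to keep the entity persistence-ignorant and put the business rule in front of the repository call, for example in a small domain service. A minimal sketch in C# rather than the question's VB; all names here are hypothetical:
    using System;
    using System.Collections.Generic;

    public class Account
    {
        public int Id { get; set; }
    }

    // Entity knows nothing about persistence
    public class AccountsGroup
    {
        public int Id { get; set; }
        public List<Account> Accounts { get; set; }
    }

    public interface IAccountsGroupRepository
    {
        void Delete(AccountsGroup group);
    }

    public class AccountsGroupService
    {
        private readonly IAccountsGroupRepository _repository;

        public AccountsGroupService(IAccountsGroupRepository repository)
        {
            _repository = repository;
        }

        public void Delete(AccountsGroup group)
        {
            // Business rule is checked before persistence is touched
            if (group.Accounts != null && group.Accounts.Count > 0)
                throw new InvalidOperationException(
                    "Cannot delete a group that still contains accounts.");

            _repository.Delete(group);
        }
    }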

    Read the article

  • Parallelism in .NET – Part 13, Introducing the Task class

    - by Reed
    Once we’ve used a task-based decomposition to decompose a problem, we need a clean abstraction usable to implement the resulting decomposition. Given that task decomposition is founded upon defining discrete tasks, .NET 4 has introduced a new API for dealing with task related issues, the aptly named Task class. The Task class is a wrapper for a delegate representing a single, discrete task within your decomposition. We will go into various methods of construction for tasks later, but, when reduced to its fundamentals, an instance of a Task is nothing more than a wrapper around a delegate with some utility functionality added. In order to fully understand the Task class within the new Task Parallel Library, it is important to realize that a task really is just a delegate – nothing more. In particular, note that I never mentioned threading or parallelism in my description of a Task. Although the Task class exists in the new System.Threading.Tasks namespace: Tasks are not directly related to threads or multithreading. Of course, Task instances will typically be used in our implementation of concurrency within an application, but the Task class itself does not provide the concurrency used. The Task API supports using Tasks in an entirely single threaded, synchronous manner. Tasks are very much like standard delegates. You can execute a task synchronously via Task.RunSynchronously(), or you can use Task.Start() to schedule a task to run, typically asynchronously. This is very similar to using delegate.Invoke to execute a delegate synchronously, or using delegate.BeginInvoke to execute it asynchronously. The Task class adds some nice functionality on top of a standard delegate which improves usability in both synchronous and multithreaded environments. The first addition provided by Task is a means of handling cancellation via the new unified cancellation mechanism of .NET 4. If the wrapped delegate within a Task raises an OperationCanceledException during its operation, which is typically generated via calling ThrowIfCancellationRequested on a CancellationToken, or if the CancellationToken used to construct a Task instance is flagged as canceled, the Task’s IsCanceled property will be set to true automatically. This provides a clean way to determine whether a Task has been canceled, often without requiring specific exception handling. Tasks also provide a clean API which can be used for waiting on a task. Although the Task class explicitly implements IAsyncResult, Tasks provide a nicer usage model than the traditional .NET Asynchronous Programming Model. Instead of needing to track an IAsyncResult handle, you can just directly call Task.Wait() to block until a Task has completed. Overloads exist for providing a timeout, a CancellationToken, or both to prevent waiting indefinitely. In addition, the Task class provides static methods for waiting on multiple tasks – Task.WaitAll and Task.WaitAny, again with overloads providing timeout options. This provides a very simple, clean API for waiting on single or multiple tasks. Finally, Tasks provide a much nicer model for Exception handling. If the delegate wrapped within a Task raises an exception, the exception will automatically get wrapped into an AggregateException and exposed via the Task.Exception property. This exception is stored with the Task directly, and does not tear down the application.
Later, when Task.Wait() (or Task.WaitAll or Task.WaitAny) is called on this task, an AggregateException will be raised at that point if any of the tasks raised an exception. For example, suppose we have the following code:
Task taskOne = new Task(() =>
{
    throw new ApplicationException("Random Exception!");
});
Task taskTwo = new Task(() =>
{
    throw new ArgumentException("Different exception here");
});
// Start the tasks
taskOne.Start();
taskTwo.Start();
try
{
    Task.WaitAll(new[] { taskOne, taskTwo });
}
catch (AggregateException e)
{
    Console.WriteLine(e.InnerExceptions.Count);
    foreach (var inner in e.InnerExceptions)
        Console.WriteLine(inner.Message);
}
Here, our routine will print:
2
Different exception here
Random Exception!
Note that we had two separate tasks, each of which raised a distinctly different type of exception. We can handle this cleanly, with very little code, in a much nicer manner than the Asynchronous Programming API. We no longer need to handle TargetInvocationException or worry about implementing the Event-based Asynchronous Pattern properly by setting the AsyncCompletedEventArgs.Error property. Instead, we just raise our exception as normal, and handle AggregateException in a single location in our calling code.
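The cancellation behavior described earlier can be seen in a similarly small sketch. This assumes the token passed to the Task constructor is the same one observed by ThrowIfCancellationRequested, so the task ends in the Canceled state rather than the Faulted state:
using System;
using System.Threading;
using System.Threading.Tasks;

class CancellationDemo
{
    static void Main()
    {
        var cts = new CancellationTokenSource();

        Task work = new Task(() =>
        {
            while (true)
            {
                // Raises OperationCanceledException once cancellation is requested
                cts.Token.ThrowIfCancellationRequested();
                Thread.Sleep(50); // simulate a unit of work
            }
        }, cts.Token); // passing the token marks the Task as Canceled, not Faulted

        work.Start();
        cts.Cancel();

        try
        {
            work.Wait();
        }
        catch (AggregateException)
        {
            // Wait surfaces the cancellation as an exception
        }

        Console.WriteLine(work.IsCanceled); // True
    }
}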

    Read the article

  • SQL SERVER – Puzzle – Challenge – Error While Converting Money to Decimal

    - by pinaldave
    Earlier I wrote SQL SERVER – Challenge – Puzzle – Usage of FAST Hint and I did receive some good comments. Here is another question to tease your mind. Run the following script and you will see that it throws an error. DECLARE @mymoney MONEY; SET @mymoney = 12345.67; SELECT CAST(@mymoney AS DECIMAL(5,2)) MoneyInt; GO The money datatype looks visually similar to decimal, so why does it throw the following error? Msg 8115, Level 16, State 8, Line 3 Arithmetic overflow error converting money to data type numeric. Please leave a comment with an explanation and I will post your answer on this blog with due credit. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Error Messages, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • AddThis - contains too many device filters error

    - by Yousef_Jadallah
    When using the AddThis service with ASP.NET, exceptions like these may be thrown: The string 'fb:like:layout' contains too many device filters. There can be only one. The string 'g:plusone:size' contains too many device filters. There can be only one. You can solve this by using inline server code. Step 1: Implement the following code in your code file: Protected Function GetFacebookAttribute() As String Return String.Format( "{0}=" "{1}" ""...(read more)

    Read the article

  • DIY Super Mario “Kite” Lights Up the Sky [Video]

    - by Jason Fitzpatrick
    Throw some LEDs in helium balloons, string them together in a pixel-style grid, and you’ve got yourself a massive and glowing 8-bit sprite (in this case, a giant Super Mario). Read on to watch the video and see how you can build your own. Check out the video notes for more information on constructing it, or hit up the link below for more projects by Mark Rober. Mark Rober’s Project Blog [Make]

    Read the article

  • Azure – Part 6 – Blob Storage Service

    - by Shaun
    When migrating your application onto Azure, one of the biggest concerns is the external files. In the original setup we understood and controlled which machine and folder our application (website or web service) was located in, so we could use MapPath or some other methods to read and write external files, for example images, text files or XML files, etc. But things have changed now that we deploy on Azure. Azure is not a server or a single machine; it’s a set of virtual server machines running under the Azure OS. And even worse, your application might be moved between these machines. So it’s impossible to read or write external files on Azure in that way. In order to resolve this issue Windows Azure provides another storage service – Blob – for us. Different from the table service, the blob service is used to store text and binary data rather than structured data. It provides two types of blobs: Block Blobs and Page Blobs. Block Blobs are optimized for streaming. They are comprised of blocks, each of which is identified by a block ID, and each block can be a maximum of 4 MB in size. Page Blobs are optimized for random read/write operations and provide the ability to write to a range of bytes in a blob. They are a collection of pages. The maximum size for a page blob is 1 TB. In the managed library the Azure SDK allows us to communicate with blobs through these classes: CloudBlobClient, CloudBlobContainer, CloudBlockBlob and CloudPageBlob. Similar to the table service managed library, the CloudBlobClient allows us to reach the blob service by passing our storage account information and is also responsible for creating the blob container if it does not exist. Then from the CloudBlobContainer we can save or load block blobs and page blobs via the CloudBlockBlob and CloudPageBlob classes. Let’s improve our example from the previous posts – add a service method that allows the user to upload a logo image. On the server side I created a method named UploadLogo with 2 parameters: email and image. Then I created the storage account from the config file. I also added validation to ensure that the email passed in is valid.
    var storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
    var accountContext = new DynamicDataContext<Account>(storageAccount);
    // validation
    var accountNumber = accountContext.Load()
        .Where(a => a.Email == email)
        .ToList()
        .Count;
    if (accountNumber <= 0)
    {
        throw new ApplicationException(string.Format("Cannot find the account with the email {0}.", email));
    }
    Then there are three steps for saving the image into the blob service. First, as with the table service, I created the container with a unique name, creating it if it does not exist.
    // create the blob container for account logos if not exist
    CloudBlobClient blobStorage = storageAccount.CreateCloudBlobClient();
    CloudBlobContainer container = blobStorage.GetContainerReference("account-logo");
    container.CreateIfNotExist();
    Then, since in this example I will just send the blob access URL back to the client, I need to open read permission on that container.
    // configure blob container for public access
    BlobContainerPermissions permissions = container.GetPermissions();
    permissions.PublicAccess = BlobContainerPublicAccessType.Container;
    container.SetPermissions(permissions);
    And at the end I combine the blob resource name from the input file name and a Guid, and then save it to the block blob by using the UploadByteArray method. Finally I return the URL of this blob back to the client side.
    // save the blob into the blob service
    string uniqueBlobName = string.Format("{0}_{1}.jpg", email, Guid.NewGuid().ToString());
    CloudBlockBlob blob = container.GetBlockBlobReference(uniqueBlobName);
    blob.UploadByteArray(image);
    return blob.Uri.ToString();
    Let’s update the client side application a bit and see the result. Here I just use my simple console application to let the user input the email and the file name of the image. If it’s OK it will show the URL of the blob on the server side so that we can see it through the web browser. Then we can see the logo I’ve just uploaded through the URL. You may notice that the blob URL is based on the container name and the blob unique name. In the documentation of the Azure SDK there’s a page on the rules for naming them, but I think the simple rule would be – they must be valid as a URL address. So you cannot name the container with a dot or slash as it will break the ADO.NET Data Service routing rule. For example, if you named the blob container Account.Logo then it will throw an exception saying 400 Bad Request.
    Summary
    In this short entry I covered the simple usage of the blob service to save images onto Azure. Since the Azure platform does not support the file system, we have to migrate our code for reading/writing files to the blob service before deploying to Azure. In order to reduce this effort Microsoft provided a new approach named Drive, which allows us to read and write NTFS files just like we did before. It’s built on top of the blob service but is more appropriate for file access. I will discuss more about it in the next post.
    Hope this helps, Shaun
    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
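    For completeness, reading the logo back out is symmetrical. A minimal sketch using the same StorageClient classes as above (DownloadByteArray is the counterpart of UploadByteArray in that SDK version, so verify it against the version you use; blobName is assumed to be the unique name produced during upload):
    // Retrieve the uploaded logo as a byte array
    var storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
    CloudBlobClient blobStorage = storageAccount.CreateCloudBlobClient();
    CloudBlobContainer container = blobStorage.GetContainerReference("account-logo");

    // blobName is the unique name returned from the upload step
    CloudBlockBlob blob = container.GetBlockBlobReference(blobName);
    byte[] image = blob.DownloadByteArray();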

    Read the article

  • Developing web apps using ASP.NET MVC 3, Razor and EF Code First - Part 1

    - by shiju
    In this post, I will demonstrate web application development using ASP.NET MVC 3, Razor and EF Code First. This post will also cover Dependency Injection using Unity 2.0 and a generic Repository and Unit of Work for EF Code First. The following frameworks will be used for this step by step tutorial: ASP.NET MVC 3, EF Code First CTP 5, Unity 2.0.
    Define Domain Model
    Let’s create the domain model for our simple web application.
    Category class
    public class Category
    {
        public int CategoryId { get; set; }
        [Required(ErrorMessage = "Name Required")]
        [StringLength(25, ErrorMessage = "Must be less than 25 characters")]
        public string Name { get; set; }
        public string Description { get; set; }
        public virtual ICollection<Expense> Expenses { get; set; }
    }
    Expense class
    public class Expense
    {
        public int ExpenseId { get; set; }
        public string Transaction { get; set; }
        public DateTime Date { get; set; }
        public double Amount { get; set; }
        public int CategoryId { get; set; }
        public virtual Category Category { get; set; }
    }
    We have two domain entities – Category and Expense. A single category contains a list of expense transactions and every expense transaction should have a Category. In this post, we will focus on CRUD operations for the entity Category and will work on the Expense entity with a View Model object in a later post. The source code for this application will be refactored over time. The above entities are very simple POCO (Plain Old CLR Object) classes and the entity Category is decorated with validation attributes from the System.ComponentModel.DataAnnotations namespace. Now we want to use these entities to define model objects for the Entity Framework 4. Using the Code First approach of Entity Framework, we can first define the entities by simply writing POCO classes without any coupling to any API or database library. This approach lets you focus on the domain model, which enables Domain-Driven Development for applications. EF Code First support is currently provided by a separate API that runs on top of Entity Framework 4. EF Code First had reached CTP 5 when I wrote this article.
    Creating a Context Class for Entity Framework
    We have created our domain model; now let’s create a class for working with EF Code First. For this, you have to download EF Code First CTP 5 and add a reference to the assembly EntityFramework.dll. You can also use NuGet to download and add a reference to EF Code First.
    public class MyFinanceContext : DbContext
    {
        public MyFinanceContext() : base("MyFinance") { }
        public DbSet<Category> Categories { get; set; }
        public DbSet<Expense> Expenses { get; set; }
    }
    The above class MyFinanceContext is derived from DbContext, which can connect your model classes to a database. The MyFinanceContext class maps our Category and Expense classes into the database tables Categories and Expenses using DbSet<TEntity>, where TEntity is any POCO class. When we run the application for the first time, it will automatically create the database. EF Code First looks for a connection string in web.config or app.config that has the same name as the DbContext class. If it does not find a connection string matching that convention, it will automatically create a database in the local SQL Express instance by default, and the name of the database will be the same as the DbContext class. You can also define the name of the database in the constructor of the DbContext class.
    Unlike NHibernate, we don’t have to use any XML-based mapping files or a fluent interface for mapping between our model and database. The model classes of Code First work on the basis of conventions, and we can also use a fluent API to refine our model. The convention for a primary key is ‘Id’ or ‘<class name>Id’. If primary key properties are detected with type ‘int’, ‘long’ or ‘short’, they will automatically be registered as identity columns in the database by default. Primary key detection is not case sensitive. We can define our model classes with validation attributes from the System.ComponentModel.DataAnnotations namespace and validation rules are automatically enforced when a model object is updated or saved.
    Generic Repository for EF Code First
    We have created the model classes and the DbContext class. Now we have to create a generic repository pattern for data persistence with EF Code First. If you don’t know the repository pattern, check out Martin Fowler’s article on Repository.
    Let’s create a generic repository for working with DbContext and DbSet generics.
    public interface IRepository<T> where T : class
    {
        void Add(T entity);
        void Delete(T entity);
        T GetById(long Id);
        IEnumerable<T> All();
    }
    RepositoryBase – generic repository class
    public abstract class RepositoryBase<T> where T : class
    {
        private MyFinanceContext database;
        private readonly IDbSet<T> dbset;
        protected RepositoryBase(IDatabaseFactory databaseFactory)
        {
            DatabaseFactory = databaseFactory;
            dbset = Database.Set<T>();
        }
        protected IDatabaseFactory DatabaseFactory
        {
            get; private set;
        }
        protected MyFinanceContext Database
        {
            get { return database ?? (database = DatabaseFactory.Get()); }
        }
        public virtual void Add(T entity)
        {
            dbset.Add(entity);
        }
        public virtual void Delete(T entity)
        {
            dbset.Remove(entity);
        }
        public virtual T GetById(long id)
        {
            return dbset.Find(id);
        }
        public virtual IEnumerable<T> All()
        {
            return dbset.ToList();
        }
    }
    DatabaseFactory class
    public class DatabaseFactory : Disposable, IDatabaseFactory
    {
        private MyFinanceContext database;
        public MyFinanceContext Get()
        {
            return database ?? (database = new MyFinanceContext());
        }
        protected override void DisposeCore()
        {
            if (database != null)
                database.Dispose();
        }
    }
    Unit of Work
    If you are new to the Unit of Work pattern, check out Fowler’s article on Unit of Work. According to Martin Fowler, the Unit of Work pattern "maintains a list of objects affected by a business transaction and coordinates the writing out of changes and the resolution of concurrency problems." Let’s create a class for handling Unit of Work.
    public interface IUnitOfWork
    {
        void Commit();
    }
    UnitOfWork class
    public class UnitOfWork : IUnitOfWork
    {
        private readonly IDatabaseFactory databaseFactory;
        private MyFinanceContext dataContext;
        public UnitOfWork(IDatabaseFactory databaseFactory)
        {
            this.databaseFactory = databaseFactory;
        }
        protected MyFinanceContext DataContext
        {
            get { return dataContext ?? (dataContext = databaseFactory.Get()); }
        }
        public void Commit()
        {
            DataContext.Commit();
        }
    }
    The Commit method of the UnitOfWork calls the Commit method of the MyFinanceContext class, which will execute the SaveChanges method of the DbContext class.
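    Note that the MyFinanceContext class shown earlier does not actually define the Commit method that UnitOfWork calls; presumably it simply forwards to SaveChanges, along the lines of this sketch:
    public class MyFinanceContext : DbContext
    {
        public MyFinanceContext() : base("MyFinance") { }
        public DbSet<Category> Categories { get; set; }
        public DbSet<Expense> Expenses { get; set; }

        // Assumed implementation: Commit just forwards to DbContext.SaveChanges
        public virtual void Commit()
        {
            base.SaveChanges();
        }
    }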
    Repository class for Category
    In this post we focus on persistence for the Category entity and will work on the other entities in a later post. Let’s create a repository for handling CRUD operations for Category, derived from the generic RepositoryBase<T>.
    public class CategoryRepository : RepositoryBase<Category>, ICategoryRepository
    {
        public CategoryRepository(IDatabaseFactory databaseFactory)
            : base(databaseFactory)
        {
        }
    }
    public interface ICategoryRepository : IRepository<Category>
    {
    }
    If the Category entity needs methods beyond the generic repository, we can define them in CategoryRepository.
    Dependency Injection using Unity 2.0
    If you are new to Inversion of Control / Dependency Injection or Unity, please have a look at my articles at http://weblogs.asp.net/shijuvarghese/archive/tags/IoC/default.aspx. I want to create a custom lifetime manager for Unity that stores instances in the current HttpContext.
    public class HttpContextLifetimeManager<T> : LifetimeManager, IDisposable
    {
        public override object GetValue()
        {
            return HttpContext.Current.Items[typeof(T).AssemblyQualifiedName];
        }
        public override void RemoveValue()
        {
            HttpContext.Current.Items.Remove(typeof(T).AssemblyQualifiedName);
        }
        public override void SetValue(object newValue)
        {
            HttpContext.Current.Items[typeof(T).AssemblyQualifiedName] = newValue;
        }
        public void Dispose()
        {
            RemoveValue();
        }
    }
    Let’s create a controller factory for Unity in the ASP.NET MVC 3 application.
    public class UnityControllerFactory : DefaultControllerFactory
    {
        IUnityContainer container;
        public UnityControllerFactory(IUnityContainer container)
        {
            this.container = container;
        }
        protected override IController GetControllerInstance(RequestContext reqContext, Type controllerType)
        {
            IController controller;
            if (controllerType == null)
                throw new HttpException(404, String.Format(
                    "The controller for path '{0}' could not be found or it does not implement IController.",
                    reqContext.HttpContext.Request.Path));
            if (!typeof(IController).IsAssignableFrom(controllerType))
                throw new ArgumentException(
                    string.Format("Type requested is not a controller: {0}", controllerType.Name),
                    "controllerType");
            try
            {
                controller = container.Resolve(controllerType) as IController;
            }
            catch (Exception ex)
            {
                throw new InvalidOperationException(String.Format(
                    "Error resolving controller {0}", controllerType.Name), ex);
            }
            return controller;
        }
    }
    Configure contract and concrete types in Unity
    Let’s configure our contract and concrete types in Unity for resolving our dependencies.
    private void ConfigureUnity()
    {
        //Create UnityContainer
        IUnityContainer container = new UnityContainer()
            .RegisterType<IDatabaseFactory, DatabaseFactory>(new HttpContextLifetimeManager<IDatabaseFactory>())
            .RegisterType<IUnitOfWork, UnitOfWork>(new HttpContextLifetimeManager<IUnitOfWork>())
            .RegisterType<ICategoryRepository, CategoryRepository>(new HttpContextLifetimeManager<ICategoryRepository>());
        //Set container for Controller Factory
        ControllerBuilder.Current.SetControllerFactory(
            new UnityControllerFactory(container));
    }
    In the above ConfigureUnity method, we register our types with the Unity container using the custom lifetime manager HttpContextLifetimeManager. Let’s call the ConfigureUnity method in Global.asax.cs to set the controller factory for Unity and configure the types.
    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();
        RegisterGlobalFilters(GlobalFilters.Filters);
        RegisterRoutes(RouteTable.Routes);
        ConfigureUnity();
    }
    Developing the web application using ASP.NET MVC 3
    We have created the domain model for our web application and have also created repositories and configured dependencies with the Unity container. Now we have to create controller classes and views for doing CRUD operations against the Category entity. Let’s create the controller class for Category.
    CategoryController
    public class CategoryController : Controller
    {
        private readonly ICategoryRepository categoryRepository;
        private readonly IUnitOfWork unitOfWork;
        public CategoryController(ICategoryRepository categoryRepository, IUnitOfWork unitOfWork)
        {
            this.categoryRepository = categoryRepository;
            this.unitOfWork = unitOfWork;
        }
        public ActionResult Index()
        {
            var categories = categoryRepository.All();
            return View(categories);
        }
        [HttpGet]
        public ActionResult Edit(int id)
        {
            var category = categoryRepository.GetById(id);
            return View(category);
        }
        [HttpPost]
        public ActionResult Edit(int id, FormCollection collection)
        {
            var category = categoryRepository.GetById(id);
            if (TryUpdateModel(category))
            {
                unitOfWork.Commit();
                return RedirectToAction("Index");
            }
            else return View(category);
        }
        [HttpGet]
        public ActionResult Create()
        {
            var category = new Category();
            return View(category);
        }
        [HttpPost]
        public ActionResult Create(Category category)
        {
            if (!ModelState.IsValid)
            {
                return View("Create", category);
            }
            categoryRepository.Add(category);
            unitOfWork.Commit();
            return RedirectToAction("Index");
        }
        [HttpPost]
        public ActionResult Delete(int id)
        {
            var category = categoryRepository.GetById(id);
            categoryRepository.Delete(category);
            unitOfWork.Commit();
            var categories = categoryRepository.All();
            return PartialView("CategoryList", categories);
        }
    }
    Creating Views in Razor
    Now we are going to create views in Razor for our ASP.NET MVC 3 application. Let’s create a partial view CategoryList.cshtml for listing category information and providing links for the Edit and Delete operations.
CategoryList.cshtml
@using MyFinance.Helpers;
@using MyFinance.Domain;
@model IEnumerable<Category>
<table>
    <tr>
        <th>Actions</th>
        <th>Name</th>
        <th>Description</th>
    </tr>
    @foreach (var item in Model) {
        <tr>
            <td>
                @Html.ActionLink("Edit", "Edit", new { id = item.CategoryId })
                @Ajax.ActionLink("Delete", "Delete", new { id = item.CategoryId }, new AjaxOptions { Confirm = "Delete Expense?", HttpMethod = "Post", UpdateTargetId = "divCategoryList" })
            </td>
            <td>
                @item.Name
            </td>
            <td>
                @item.Description
            </td>
        </tr>
    }
</table>
<p>
    @Html.ActionLink("Create New", "Create")
</p>
The Delete link provides Ajax functionality using Ajax.ActionLink. This makes an Ajax request to the Delete action method in the CategoryController class. The Delete action method returns the partial view CategoryList after deleting the record. We use the CategoryList view both for the Ajax functionality and for the Index view that displays the list of category information. Let’s create the Index view using the partial view CategoryList.
Index.cshtml
@model IEnumerable<MyFinance.Domain.Category>
@{
    ViewBag.Title = "Index";
}
<h2>Category List</h2>
<script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.min.js")" type="text/javascript"></script>
<div id="divCategoryList">
    @Html.Partial("CategoryList", Model)
</div>
We can call partial views using the Html.Partial helper method. Now we are going to create view pages for the insert and update functionality for the Category. Both view pages share a common user interface for entering the category information, so I want to create an EditorTemplate for the category information. We have to create the EditorTemplate with the same name as the entity object so that we can refer to it on view pages using @Html.EditorFor(model => model). So let’s create a template named Category.
Let’s create the view page for inserting Category information:
@model MyFinance.Domain.Category
@{
    ViewBag.Title = "Save";
}
<h2>Create</h2>
<script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script>
@using (Html.BeginForm()) {
    @Html.ValidationSummary(true)
    <fieldset>
        <legend>Category</legend>
        @Html.EditorFor(model => model)
        <p>
            <input type="submit" value="Create" />
        </p>
    </fieldset>
}
<div>
    @Html.ActionLink("Back to List", "Index")
</div>
ViewStart file
In Razor views, we can add a file named _viewstart.cshtml in the Views directory and it will be shared among all the views within the Views directory. The below code in the _viewstart.cshtml sets the Layout page for every view in the Views folder.
@{
    Layout = "~/Views/Shared/_Layout.cshtml";
}
Source Code
You can download the source code from http://efmvc.codeplex.com/. The source will be refactored over time.
Summary
In this post, we created a simple web application using ASP.NET MVC 3 and EF Code First. We discussed technologies and practices such as ASP.NET MVC 3, Razor, EF Code First, Unity 2, a generic Repository and Unit of Work. In later posts I will modify the application and discuss more things.
Stay tuned to my blog for more posts on step-by-step application building.

    Read the article


  • Rendering ASP.NET MVC Razor Views outside of MVC revisited

    - by Rick Strahl
    Last year I posted a detailed article on how to render Razor Views to string both inside of ASP.NET MVC and outside of it. In that article I showed several different approaches to capture the rendering output. The first and easiest is to use an existing MVC Controller Context to render a view by simply passing the controller context, which is fairly trivial, and I demonstrated a simple ViewRenderer class that simplified the process down to a couple of lines of code. However, if no Controller Context is available the process is not quite as straightforward, and I referenced an old, much more complex example that uses my RazorHosting library, which is a custom self-contained implementation of the Razor templating engine that can be hosted completely outside of ASP.NET. While it works inside of ASP.NET, it’s an awkward solution when running inside of ASP.NET, because it requires a bit of setup to run efficiently.
    Well, it turns out that I missed something in the original article, namely that it is possible to create a ControllerContext if you have a controller instance, even if MVC didn’t create that instance.
    Creating a Controller Instance outside of MVC
    The trick to make this work is to create an MVC Controller instance – any Controller instance – and then configure a ControllerContext through that instance. As long as an HttpContext.Current is available it’s possible to create a fully functional controller context, as Razor can get all the necessary context information from the HttpContextWrapper().
    The key to making this work is the following method:
    /// <summary>
    /// Creates an instance of an MVC controller from scratch
    /// when no existing ControllerContext is present
    /// </summary>
    /// <typeparam name="T">Type of the controller to create</typeparam>
    /// <returns>Controller Context for T</returns>
    /// <exception cref="InvalidOperationException">thrown if HttpContext not available</exception>
    public static T CreateController<T>(RouteData routeData = null) where T : Controller, new()
    {
        // create a disconnected controller instance
        T controller = new T();
        // get context wrapper from HttpContext if available
        HttpContextBase wrapper = null;
        if (HttpContext.Current != null)
            wrapper = new HttpContextWrapper(System.Web.HttpContext.Current);
        else
            throw new InvalidOperationException(
                "Can't create Controller Context if no active HttpContext instance is available.");
        if (routeData == null)
            routeData = new RouteData();
        // add the controller routing if not existing
        if (!routeData.Values.ContainsKey("controller") && !routeData.Values.ContainsKey("Controller"))
            routeData.Values.Add("controller", controller.GetType().Name
                .ToLower()
                .Replace("controller", ""));
        controller.ControllerContext = new ControllerContext(wrapper, routeData, controller);
        return controller;
    }
    This method creates an instance of a Controller class from an existing HttpContext, which means this code should work from anywhere within ASP.NET to create a controller instance that’s ready to be rendered. This means you can use this from within an Application_Error handler, as I needed to, or even from within a WebAPI controller as long as it’s running inside of ASP.NET (i.e. not self-hosted). Nice.
    So using the ViewRenderer class from the previous article I can now very easily render an MVC view outside of the context of MVC.
    Here’s what I ended up with in my application’s custom error HttpModule:
    protected override void OnDisplayError(WebErrorHandler errorHandler, ErrorViewModel model)
    {
        var Response = HttpContext.Current.Response;
        Response.ContentType = "text/html";
        Response.StatusCode = errorHandler.OriginalHttpStatusCode;
        var context = ViewRenderer.CreateController<ErrorController>().ControllerContext;
        var renderer = new ViewRenderer(context);
        string html = renderer.RenderView("~/Views/Shared/GenericError.cshtml", model);
        Response.Write(html);
    }
    That’s pretty sweet, because it’s now possible to use ViewRenderer just about anywhere in any ASP.NET application, not only inside of controller code. This also allows the constructor for the ViewRenderer from the last article to work without a controller context parameter, using a generic view as a base for the controller context when not passed:
    public ViewRenderer(ControllerContext controllerContext = null)
    {
        // Create a known controller from HttpContext if no context is passed
        if (controllerContext == null)
        {
            if (HttpContext.Current != null)
                controllerContext = CreateController<ErrorController>().ControllerContext;
            else
                throw new InvalidOperationException(
                    "ViewRenderer must run in the context of an ASP.NET " +
                    "Application and requires HttpContext.Current to be present.");
        }
        Context = controllerContext;
    }
    In this case I use the ErrorController class, which is a generic controller instance that exists in the same assembly as my ViewRenderer class, and that works just fine since ‘generically’ rendered views tend to not rely on anything from the controller other than the model, which is explicitly passed.
    While these days most of my apps use MVC, I do still have a number of generic pieces in most of these applications where Razor comes in handy. This includes modules like the above, which when they error often need to display error output. In other cases I need to generate string template output for emailing or logging data to disk. Being able to simply render an arbitrary View and pass in a model makes this super nice and easy, at least within the context of an ASP.NET application!
    You can check out the updated ViewRenderer class below to render your ‘generic views’ from anywhere within your ASP.NET applications. Hope some of you find this useful.
    Resources:
    ViewRenderer Class in Westwind.Web.Mvc Library (Github)
    Original ViewRenderer Article
    Razor Hosting Library (GitHub)
    Original Razor Hosting Article
    © Rick Strahl, West Wind Technologies, 2005-2013. Posted in ASP.NET, MVC
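    As a footnote, the same class covers the string-template scenarios mentioned above, such as generating an email body. A minimal usage sketch, where the view path and the order model are hypothetical examples rather than part of the library:
    // Render a Razor view to a string for emailing (hypothetical view and model)
    var renderer = new ViewRenderer(); // builds a generic ControllerContext internally
    string body = renderer.RenderView("~/Views/Email/OrderConfirmation.cshtml", order);

    // 'body' can now be handed to SmtpClient or written to a log file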

    Read the article

  • Searching for the last logon of users in Active Directory

    - by Robert May
    I needed to clean out a bunch of old accounts at Veracity Solutions, and wanted to delete those that hadn’t used their account in more than a year. I found that AD has a property on objects called lastLogonTimestamp. However, this value isn’t exposed to you in any useful fashion. Sure, you can pull up ADSI Edit and eventually get to it there, but it’s painful. I spent some time searching, and discovered that there’s not much out there to help, so I thought a blog post showing exactly how to get at this information would be in order. Basically, what you end up doing is using System.DirectoryServices to search for accounts and then filtering those for users, doing some conversion and such to make it happen. The end result is that you get a list of users with their logon information and you can then do with that what you will. I turned my list into an observable collection and bound it into a XAML form. One important note: you need to add a reference to the ActiveDs Type Library from the COM tab of the Add Reference dialog to get to LargeInteger. Here’s the class:
    namespace Veracity.Utilities
    {
        using System;
        using System.Collections.Generic;
        using System.DirectoryServices;
        using ActiveDs;
        using log4net;
        /// <summary>
        /// Finds users inside of the active directory system.
        /// </summary>
        public class UserFinder
        {
            /// <summary>
            /// Creates the default logger
            /// </summary>
            private static readonly ILog log = LogManager.GetLogger(typeof(UserFinder));
            /// <summary>
            /// Finds last logon information
            /// </summary>
            /// <param name="domain">The domain to search.</param>
            /// <param name="userName">The username for the query.</param>
            /// <param name="password">The password for the query.</param>
            /// <returns>A list of users with their last logon information.</returns>
            public IList<UserLoginInformation> GetLastLogonInformation(string domain, string userName, string password)
            {
                IList<UserLoginInformation> result = new List<UserLoginInformation>();
                DirectoryEntry entry = new DirectoryEntry(domain, userName, password, AuthenticationTypes.Secure);
                DirectorySearcher directorySearcher = new DirectorySearcher(entry);
                directorySearcher.PropertyNamesOnly = true;
                directorySearcher.PropertiesToLoad.Add("name");
                directorySearcher.PropertiesToLoad.Add("lastLogonTimeStamp");
                SearchResultCollection searchResults;
                try
                {
                    searchResults = directorySearcher.FindAll();
                }
                catch (System.Exception ex)
                {
                    log.Error("Failed to do a find all.", ex);
                    throw;
                }
                try
                {
                    foreach (SearchResult searchResult in searchResults)
                    {
                        DirectoryEntry resultEntry = searchResult.GetDirectoryEntry();
                        if (resultEntry.SchemaClassName == "user")
                        {
                            UserLoginInformation logon = new UserLoginInformation();
                            logon.Name = resultEntry.Name;
                            PropertyValueCollection timeStampObject = resultEntry.Properties["lastLogonTimeStamp"];
                            if (timeStampObject.Count > 0)
                            {
                                IADsLargeInteger logonTimeStamp = (IADsLargeInteger)timeStampObject[0];
                                long lastLogon = (long)((uint)logonTimeStamp.LowPart + (((long)logonTimeStamp.HighPart) << 32));
                                logon.LastLogonTime = DateTime.FromFileTime(lastLogon);
                            }
                            result.Add(logon);
                        }
                    }
                }
                catch (System.Exception ex)
                {
                    log.Error("Failed to iterate search results.", ex);
                    throw;
                }
                return result;
            }
        }
    }
    Some important things to note: Username and Password can be set to null, and if your computer is part of the domain this may still work. Domain should be set to something like LDAP://servername/CN=Users,DC=Domain,DC=com. You’re actually getting a COM object back, so that’s why the LargeInteger conversions are happening.
The class for UserLoginInformation looks like this:   namespace Veracity.Utilities { using System; /// <summary> /// Represents user login information. /// </summary> public class UserLoginInformation { /// <summary> /// Gets or sets Name /// </summary> public string Name { get; set; } /// <summary> /// Gets or sets LastLogonTime /// </summary> public DateTime LastLogonTime { get; set; } /// <summary> /// Gets the age of the account. /// </summary> public TimeSpan AccountAge { get { TimeSpan result = TimeSpan.Zero; if (this.LastLogonTime != DateTime.MinValue) { result = DateTime.Now.Subtract(this.LastLogonTime); } return result; } } } } I hope this is useful and instructive. Technorati Tags: Active Directory
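    Putting the class to work for the original goal, finding accounts idle for more than a year, might look like the following sketch (the LDAP path and credentials are placeholders for your own environment):
    var finder = new UserFinder();
    var logons = finder.GetLastLogonInformation(
        "LDAP://myserver/CN=Users,DC=example,DC=com", null, null);

    foreach (var user in logons)
    {
        // AccountAge is TimeSpan.Zero for accounts that have never
        // logged on, so those are skipped by this filter.
        if (user.AccountAge > TimeSpan.FromDays(365))
        {
            Console.WriteLine("{0} last logged on {1}", user.Name, user.LastLogonTime);
        }
    }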

    Read the article

  • SQL SERVER – Guest Post by Sandip Pani – SQL Server Statistics Name and Index Creation

    - by pinaldave
    Sometimes something very small, or a common error we observe in daily life, teaches us new things. SQL Server Expert Sandip Pani (winner of Joes 2 Pros Contests) has come across a similar experience. Sandip has written a guest post on an error he faced in his daily work. Sandip is working for QSI Healthcare as an Associate Technical Specialist and has more than 5 years of total experience. He blogs at SQLcommitted.com and contributes to various forums. His social media handles are LinkedIn, Facebook and Twitter. I once faced the following error when I was working on a performance tuning project and attempted to create an index:
    Msg 1913, Level 16, State 1, Line 1
    The operation failed because an index or statistics with name 'Ix_Table1_1' already exists on table 'Table1'.
    The immediate reaction to the error was that I might have created that index earlier, and when I researched it further I found that was the case: the index creation had indeed been run twice. This totally makes sense. This can happen for many reasons, for example if the user carelessly executes the same code twice, or attempts to create an index without checking whether one already exists on the object. However, when I paid attention to the details of the error, I realized that the error message also talks about statistics along with the index. I got curious whether the same would happen if I attempted to create an index with the same name as an existing statistic. A few other questions also came to mind. I decided to do a small demonstration of the subject and built the following demonstration script. The goal of my experiment is to find out the relationship between statistics and indexes. Statistics are one of the important inputs to the optimizer during the query optimization process. Only if the query is nontrivial does the optimizer use statistics to perform cost-based optimization and select a plan. For accuracy and further learning I suggest reading MSDN. Now let's find out the relationship between index and statistics. We will do the experiment in two parts: i) creating an index, and ii) creating statistics. We will be using the following T-SQL script for our example.
    IF (OBJECT_ID('Table1') IS NOT NULL)
        DROP TABLE Table1
    GO
    CREATE TABLE Table1 (Col1 INT NOT NULL, Col2 VARCHAR(20) NOT NULL)
    GO
    We will use the following two queries to check whether there are any indexes or statistics on our sample table Table1.
    -- Details of Index
    SELECT OBJECT_NAME(OBJECT_ID) AS TableName, Name AS IndexName, type_desc
    FROM sys.indexes
    WHERE OBJECT_NAME(OBJECT_ID) = 'table1'
    GO
    -- Details of Statistics
    SELECT OBJECT_NAME(OBJECT_ID) TableName, Name AS StatisticsName
    FROM sys.stats
    WHERE OBJECT_NAME(OBJECT_ID) = 'table1'
    GO
    When I ran the above two scripts right after the table was created, they returned no results, which was expected. Now let us begin our test.
    1) Create an index on the table. Create the following index:
    CREATE NONCLUSTERED INDEX Ix_Table1_1 ON Table1(Col1)
    GO
    Now let us run the above two check queries and see their results. We can see that when we created the index, statistics with the same name were created at the same time. Before continuing to the next demo, drop the table using the following script and re-create it using the script provided at the beginning of this post.
    DROP TABLE table1
    GO
    2) Create a statistic on the table. Create the following statistics:
    CREATE STATISTICS Ix_table1_1 ON Table1 (Col1)
    GO
    Now let us run the above two check queries and see their results.
    We can see that when we create statistics, no index is created. The behavior of this experiment is different from the earlier experiment. Clean up the table setup using the following script:
    DROP TABLE table1
    GO
    The above two experiments teach us a very valuable lesson: when we create indexes, SQL Server generates the index and statistics (with the same name as the index) together. Because of this, if statistics with the same name already exist but the index does not, we can still hit this error when creating the index, even though no index with that name exists.
    A Quick Check
    To validate that creating statistics first and then an index with the same name throws an error, let us run the following script in SSMS. Make sure to drop the table and clean up our sample table at the end of the experiment.
    -- Create sample table
    CREATE TABLE TestTable (Col1 INT NOT NULL, Col2 VARCHAR(20) NOT NULL)
    GO
    -- Create Statistics
    CREATE STATISTICS IX_TestTable_1 ON TestTable (Col1)
    GO
    -- Create Index
    CREATE NONCLUSTERED INDEX IX_TestTable_1 ON TestTable(Col1)
    GO
    -- Check error
    /* Msg 1913, Level 16, State 1, Line 2
    The operation failed because an index or statistics with name 'IX_TestTable_1' already exists on table 'TestTable'. */
    -- Clean up
    DROP TABLE TestTable
    GO
    Creating the index throws the error above because statistics with the same name already exist. In simple words, when we create an index, its name must be different from any existing index or statistic. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Index, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Statistics

    Read the article

< Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >