Search Results

Search found 3371 results on 135 pages for 'compare'.


  • LINQ and ArcObjects

    - by Marko Apfel
    Motivation LINQ (Language Integrated Query) has been a component of the Microsoft .NET Framework since version 3.5. It allows SQL-like queries against various data sources such as SQL, XML, etc. Like SQL, LINQ also provides a declarative notation for problem solving – i.e. you don't need to describe in detail how a task should be solved; you describe what should be solved. This frees the developer from error-prone iterator constructs. Ideally, of course, you would access features this way as well. Then this construct is conceivable:

        var largeFeatures = from feature in features
                            where (feature.GetValue("SHAPE_Area").ToDouble() > 3000)
                            select feature;

    or its equivalent as a lambda expression:

        var largeFeatures = features.Where(feature => (feature.GetValue("SHAPE_Area").ToDouble() > 3000));

    This requires an appropriate provider which manages the corresponding iterator logic. That is easier than you might think at first sight: you only have to deliver the desired entities as IEnumerable<IFeature>. LINQ automatically establishes a state machine in the background whose execution is delayed (deferred execution) – only when you actually request entities (foreach, Count(), ToList(), ...) does the processing take place, although the query was created at a completely different place. Especially with multiple iterations through the entities, you will rub your eyes in your first debugging sessions when the execution pointer magically jumps back into the iterator logic. Realization A very concise way of constructing an IEnumerable<IFeature> is to run through an IFeatureCursor and return each feature via yield. For easier usage I have put the logic into an extension method GetFeatures() for IFeatureClass:

        public static IEnumerable<IFeature> GetFeatures(this IFeatureClass featureClass, IQueryFilter queryFilter, RecyclingPolicy policy)
        {
            IFeatureCursor featureCursor = featureClass.Search(queryFilter, RecyclingPolicy.Recycle == policy);
            IFeature feature;
            while (null != (feature = featureCursor.NextFeature()))
            {
                yield return feature;
            }
            // this is skipped in unit tests with a cursor mock
            if (Marshal.IsComObject(featureCursor))
            {
                Marshal.ReleaseComObject(featureCursor);
            }
        }

    So you can now easily generate the IEnumerable<IFeature>:

        IEnumerable<IFeature> features = _featureClass.GetFeatures(queryFilter, RecyclingPolicy.DoNotRecycle);

    You have to be careful with a recycling cursor. After a deferred execution in the same context, it is not a good idea to re-iterate over the features: in that case only the content of the last (recycled) feature is provided, and all the features look the same in the second pass. Therefore this expression would be critical:

        largeFeatures.ToList().ForEach(feature => Debug.WriteLine(feature.OID));

    because ToList() iterates once through the list, so the cursor has already been moved through all the features, and the extension method ForEach() then always delivers the same feature. In such situations you must not use a recycling cursor. Repeated executions of ForEach() are not a problem, because each time the state machine is re-instantiated and the cursor runs again – that's the magic already mentioned above. Perspective You can also go one step further and write your own implementation of the interface IEnumerable<IFeature>. Only the method and property for accessing the enumerator have to be programmed. In the enumerator itself, the Reset() method organizes the re-execution of the search.
    This can be achieved with an appropriate delegate passed in the constructor:

        new FeatureEnumerator<IFeatureClass>(_featureClass, featureClass => featureClass.Search(_filter, isRecyclingCursor));

    which is called in Reset():

        public void Reset()
        {
            _featureCursor = _resetCursor(_t);
        }

    In this manner, enumerators for completely different scenarios can be implemented and used on the client side exactly as described above. Thus cursors, selection sets, etc. merge into a single concept, and the reusability of the code increases immensely. On top of that, an IEnumerable can be mocked very easily in automated unit tests – a major step towards better software quality. Conclusion Nevertheless, caution should be exercised with these constructs in performance-relevant queries. Because a state machine is managed in the background, a lot of overhead is created; the processing costs additional time, roughly 20 to 100 percent. In addition, working without a recycling cursor can quickly become a performance gap. However, declarative LINQ code is much more elegant, less error-prone, and easier to maintain than manually iterating, comparing, and building a list of results. In my experience the code size shrinks by an average of 75 to 90 percent! So I am happy to wait a few milliseconds longer. As so often, maintainability has to be balanced against performance, and for me maintainability is gaining in priority. In times of multi-core processors, the processing time of most business processes is not dominated by code execution anyway, but by waiting for user input. Demo source code The source code for this prototype, with several unit tests, can be downloaded here: https://github.com/esride-apf/Linq2ArcObjects.
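    The excerpt shows the constructor delegate and Reset(), but not the rest of the enumerator. A minimal sketch of what such a class could look like follows; this is a hypothetical reconstruction from the snippets above, assuming the ArcObjects interfaces IFeature and IFeatureCursor, and not the author's actual implementation:

        using System;
        using System.Collections;
        using System.Collections.Generic;
        using ESRI.ArcGIS.Geodatabase;

        // Wraps a cursor-producing delegate so that Reset() re-executes the search.
        public class FeatureEnumerator<T> : IEnumerator<IFeature>
        {
            private readonly T _t;                                   // e.g. the feature class to search
            private readonly Func<T, IFeatureCursor> _resetCursor;   // re-creates the cursor on demand
            private IFeatureCursor _featureCursor;

            public FeatureEnumerator(T t, Func<T, IFeatureCursor> resetCursor)
            {
                _t = t;
                _resetCursor = resetCursor;
                Reset();
            }

            public IFeature Current { get; private set; }

            object IEnumerator.Current
            {
                get { return Current; }
            }

            public bool MoveNext()
            {
                Current = _featureCursor.NextFeature();
                return null != Current;
            }

            public void Reset()
            {
                // re-execute the search, as described above
                _featureCursor = _resetCursor(_t);
            }

            public void Dispose()
            {
            }
        }

    An IEnumerable<IFeature> implementation would then simply hand out a new FeatureEnumerator<IFeatureClass> from its GetEnumerator() method, which gives the re-executing behavior described above for cursors, selection sets, and mocks alike.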

    Read the article

  • Joining on NULLs

    - by Dave Ballantyne
    A problem I see on a fairly regular basis is that of dealing with NULL values. Specifically here, where we are joining two tables on two columns, one of which is 'optional', i.e. nullable. So something like this: lookup where all the columns are equal, even when NULL. NULLs are a tricky thing to initially wrap your mind around. Statements like "NULL is not equal to NULL and neither is it not not equal to NULL, it's NULL" can cause a serious brain freeze and leave you a gibbering wreck and needing your mummy. Before we plod on, time to set up some data to demo against.

        Create table #SourceTable
        (
            Id integer not null,
            SubId integer null,
            AnotherCol char(255) not null
        )
        go
        create unique clustered index idxSourceTable on #SourceTable(id, subID)
        go
        with cteNums as
        (
            select top(1000) number
            from master..spt_values
            where type = 'P'
        )
        insert into #SourceTable
        select Num1.number, nullif(Num2.number, 0), 'SomeJunk'
        from cteNums num1
        cross join cteNums num2
        go
        Create table #LookupTable
        (
            Id integer not null,
            SubID integer null
        )
        go
        insert into #LookupTable
        Select top(100) id, subid
        from #SourceTable
        where subid is not null
        order by newid()
        go
        insert into #LookupTable
        Select top(3) id, subid
        from #SourceTable
        where subid is null
        order by newid()

    If that has run correctly, you will have 1 million rows in #SourceTable and 103 rows in #LookupTable. We now want to join one to the other. First attempt – let's just join:

        select *
        from #SourceTable
        join #LookupTable on #LookupTable.id = #SourceTable.id
                         and #LookupTable.SubID = #SourceTable.SubID

    OK, that's a fail. We had 100 rows back; we didn't correctly account for the 3 rows that have null values. Remember NULL <> NULL, and the join clause specifies SUBID = SUBID, which for those rows is not true. Second attempt – let's deal with those pesky NULLs:

        select *
        from #SourceTable
        join #LookupTable on #LookupTable.id = #SourceTable.id
                         and isnull(#LookupTable.SubID, 0) = isnull(#SourceTable.SubID, 0)

    OK, that's the right result, well done, and 99.9% of the time that is where it's left. It is a relatively trivial CPU overhead to wrap ISNULL around both columns and compare that result, so no problems. But although that's true, this is a relational database we are using here, not a procedural language. SQL is a declarative language: we are making a request to the engine to get the results we want. How we ask for them can make a ton of difference. Let's look at the plan for our second attempt, specifically the clustered index seek on the #SourceTable. There are two predicates, the 'seek predicate' and the 'predicate'. The 'seek predicate' describes how SQL Server has been able to use an index. Here, it has been able to navigate the index to resolve where ID = ID. So far so good, but what about the 'predicate' (aka residual probe)? This is a row-by-row operation: for each row found in the index matching the seek predicate, the leaf-level nodes have been scanned and tested using this logical condition. In this example [Expr1007] is the result of the IsNull operation on #LookupTable, and that is tested for equality with the IsNull operation on #SourceTable. This residual probe is quite a high overhead; we can avoid it if we express our statement slightly differently, taking full advantage of the index by making the test part of the 'seek predicate'.
    Third attempt – X is null and Y is null. So, let's state the query in a slightly different manner:

        select *
        from #SourceTable
        join #LookupTable on #LookupTable.id = #SourceTable.id
                         and ( #LookupTable.SubID = #SourceTable.SubID
                               or (#LookupTable.SubID is null and #SourceTable.SubId is null)
                             )

    So it's slightly wordier and may not be as clear in its intent to the human reader (that is what comments are for), but the key point is that it is now clearer to the query optimizer what our intention is. Let's look at the plan for that query, again specifically the index seek operation on #SourceTable. No 'predicate', just a 'seek predicate' against the index to resolve both ID and SubID. A subtle difference that can be easily overlooked. But has it made a difference to the performance? Well, yes, a perhaps surprisingly large one. Clever query optimizer; well done. If you are using a scalar function on a column, you are pretty much guaranteeing that a residual probe will be used. By re-wording the query you may well be able to avoid this and use the index completely to resolve lookups. In terms of performance and scalability your system will be in a much better position if you can.

    Read the article

  • Two Values Enter, One Value Leaves

    - by Bunch
    This is a fairly easy way to compare values for two different controls. In this example a user needs to enter a street address and zip code OR pick a county. After that the application will display location(s) based on the value. The application only wants a specific street/zip combination or a county, not both. This code shows how to check for that on an ASP.Net page using some JavaScript. The control code:

        <table>
            <tr>
                <td><label style="color: Red;">Required Fields</label></td>
                <td style="width: 300px;"><label style="color: Red; font-weight: bold;" id="reqAlert"></label></td>
            </tr>
            <tr>
                <td><asp:Label ID="Label3" runat="server" Text="Street Address"></asp:Label></td>
                <td style="width: 200px;"><input id="Street" type="text" style="width: 200px;" /></td>
            </tr>
            <tr>
                <td><asp:Label ID="Label5" runat="server" Text="Zip Code"></asp:Label>&nbsp;</td>
                <td style="width: 200px;"><input id="Zip" type="text" style="width: 200px;"/></td>
            </tr>
            <tr>
                <td><label style="color: Red; font-size: large;">-- OR --</label></td>
            </tr>
            <tr>
                <td><asp:Label ID="Label2" runat="server" Text="County"></asp:Label></td>
                <td style="width: 200px;">
                    <asp:DropDownList ID="ddlCounty" runat="server">
                        <asp:ListItem Value="0" Text="" />
                        <asp:ListItem Value="1" Text="County A" />
                        <asp:ListItem Value="2" Text="County B" />
                        <asp:ListItem Value="3" Text="County C" />
                    </asp:DropDownList>
                </td>
            </tr>
        </table>
        <input id="btnMapSearch" type="button" value="Search" onclick="requiredVal()" class="actionButton" />

    The onclick for the button runs the requiredVal JavaScript function. That is where the checks take place. If only one item (street/zip or county) has been entered, the application will carry on with its locateAddr function; otherwise it will show an error message in the label reqAlert. The JavaScript:

        function requiredVal() {
            var street = document.getElementById("Street").value;
            var zip = document.getElementById("Zip").value;
            var countyDdl = document.getElementById("ctl00_Content_ddlCounty");
            var county = countyDdl.options[countyDdl.selectedIndex].text;
            var reqAlert = document.getElementById("reqAlert");
            reqAlert.innerHTML = '';   // clears out any previous messages
            if (street != '' || zip != '') {
                if (county != '') {
                    reqAlert.innerHTML = 'Please select only one required option';  // values for both were entered
                }
                else {
                    locateAddr();
                }
            }
            else if (street == '' && zip == '' && county == '') {
                reqAlert.innerHTML = 'Please select a required option';  // no values entered
            }
            else {
                locateAddr();
            }
        }

    Technorati Tags: ASP.Net,JavaScript
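    Since client-side checks can be bypassed, the same rule is worth enforcing server-side as well. Below is a minimal code-behind sketch of the equivalent check; it is a hypothetical addition, not from the original post, and assumes the button is switched to a postback button, reqAlert is promoted to an asp:Label, and LocateAddr() is a server-side counterpart of the locateAddr JavaScript function. Street and Zip are plain HTML inputs above, so they are read from the posted form:

        protected void btnMapSearch_Click(object sender, EventArgs e)
        {
            // Street and Zip are plain <input> elements, so read them from Request.Form
            string street = (Request.Form["Street"] ?? string.Empty).Trim();
            string zip = (Request.Form["Zip"] ?? string.Empty).Trim();

            bool hasAddress = street.Length > 0 || zip.Length > 0;
            bool hasCounty = ddlCounty.SelectedValue != "0";   // "0" is the empty list item above

            if (hasAddress && hasCounty)
            {
                reqAlert.Text = "Please select only one required option";
            }
            else if (!hasAddress && !hasCounty)
            {
                reqAlert.Text = "Please select a required option";
            }
            else
            {
                LocateAddr();   // hypothetical server-side equivalent of locateAddr()
            }
        }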

    Read the article

  • Retrieving recent tweets using LINQ

    - by brian_ritchie
    There are a few different APIs for accessing Twitter from .NET. In this example, I'll use linq2twitter. Other APIs can be found on Twitter's development site. First off, we'll use the LINQ provider to pull in the recent tweets:

        public static Status[] GetLatestTweets(string screenName, int numTweets)
        {
            try
            {
                var twitterCtx = new LinqToTwitter.TwitterContext();
                var list = from tweet in twitterCtx.Status
                           where tweet.Type == StatusType.User &&
                                 tweet.ScreenName == screenName
                           orderby tweet.CreatedAt descending
                           select tweet;
                // using Take() on the array because it was failing against the provider
                var recentTweets = list.ToArray().Take(numTweets).ToArray();
                return recentTweets;
            }
            catch
            {
                return new Status[0];
            }
        }

    Once they have been retrieved, they would be placed inside an MVC model. Next, the tweets need to be formatted for display. I've defined an extension method to aid with date formatting:

        public static class DateTimeExtension
        {
            public static string ToAgo(this DateTime date2)
            {
                DateTime date1 = DateTime.Now;
                if (DateTime.Compare(date1, date2) >= 0)
                {
                    TimeSpan ts = date1.Subtract(date2);
                    if (ts.TotalDays >= 1)
                        return string.Format("{0} days", (int)ts.TotalDays);
                    else if (ts.Hours > 2)
                        return string.Format("{0} hours", ts.Hours);
                    else if (ts.Hours > 0)
                        return string.Format("{0} hours, {1} minutes", ts.Hours, ts.Minutes);
                    else if (ts.Minutes > 5)
                        return string.Format("{0} minutes", ts.Minutes);
                    else if (ts.Minutes > 0)
                        return string.Format("{0} minutes, {1} seconds", ts.Minutes, ts.Seconds);
                    else
                        return string.Format("{0} seconds", ts.Seconds);
                }
                else
                    return "Not valid";
            }
        }

    Finally, here is the piece of the view used to render the tweets:
        <ul class="tweets">
        <% foreach (var tweet in Model.Tweets) { %>
            <li class="tweets">
                <span class="tweetTime"><%=tweet.CreatedAt.ToAgo() %> ago</span>:
                <%=tweet.Text%>
            </li>
        <% } %>
        </ul>
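    The view assumes a model exposing a Tweets collection. A minimal sketch of such a model class might look like this (hypothetical name and shape; the excerpt doesn't show the author's actual model):

        using LinqToTwitter;

        public class TweetListModel
        {
            // Filled from GetLatestTweets() in the controller action
            public Status[] Tweets { get; set; }
        }

        // e.g. in the controller:
        // return View(new TweetListModel { Tweets = GetLatestTweets("someUser", 10) });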

    Read the article

  • Using DisplayTag library, I want to have the currently selected row have a unique custom class using

    - by Mary
    I have been trying to figure out how to highlight the selected row in a table. In my JSP I have a scriptlet that can get access to the id of the row the DisplayTag library is creating. I want to compare it to the id of the current row selected by the user, ${currentNoteId}. Right now, if the row id = 849 (hardcoded), the class "currentClass" is added to just that row of the table. I need to replace the 849 with ${currentNoteId} and I don't know how to do it. I am using Java and Spring MVC. The JSP:

        ...
        <%
        request.setAttribute("dyndecorator", new org.displaytag.decorator.TableDecorator() {
            public String addRowClass() {
                edu.ilstu.ais.advisorApps.business.Note row =
                    (edu.ilstu.ais.advisorApps.business.Note) getCurrentRowObject();
                String rowId = row.getId();
                if ( rowId.equals("849") ) {
                    return "currentClass";
                }
                return null;
            }
        });
        %>
        <c:set var="currentNoteId" value="${studentNotes.currentNote.id}"/>
        ...
        <display:table id="noteTable" name="${ studentNotes.studentList }" pagesize="20"
            requestURI="notesView.form.html" decorator="dyndecorator">
            <display:column title="Select" class="yui-button-match" href="/notesView.form.html"
                paramId="note.id" paramProperty="id">
                <input type="button" class="yui-button-match2" name="select" value="Select"/>
            </display:column>
            <display:column property="userName" title="Created By" sortable="true"/>
            <display:column property="createDate" title="Created On" sortable="true"
                format="{0,date,MM/dd/yy hh:mm:ss a}"/>
            <display:column property="detail" title="Detail" sortable="true"/>
        </display:table>
        ...

    This could also be done using JavaScript, and that might be best, but the documentation suggested this approach so I thought I would try it. I cannot find an example anywhere using addRowClass() unless the comparison is to a field already in the row (a dollar amount is used in the documentation example) or hardcoded in like the "849" id. Thanks for any help you can provide.

    Read the article

  • Sorting Android ListView

    - by aeoth
    I'm only starting with Android dev, and while the Milestone is a nice device, Java is not my natural language and I'm struggling with the Google docs on the Android SDK, Eclipse and Java itself. Anyway... I'm writing a microblog client for Android to go along with my Windows client (MahTweets). At the moment I've got tweets coming and going without fail; the problem is with the UI. The initial call will order items correctly (as in highest to lowest):

        3
        2
        1

    When a refresh is made:

        3
        2
        1
        6
        5
        4

    After the tweets are processed, I'm calling adapter.notifyDataSetChanged(). Initially I thought that getItem() on the Adapter needed to be sorted (and the code below is what I ended up with), but I'm still not having any luck.

        public class TweetAdapter extends BaseAdapter {
            private List<IStatusUpdate> elements;
            private Context c;

            public TweetAdapter(Context c, List<IStatusUpdate> Tweets) {
                this.elements = Tweets;
                this.c = c;
            }

            public int getCount() {
                return elements.size();
            }

            public Object getItem(int position) {
                Collections.sort(elements, new IStatusUpdateComparator());
                return elements.get(position);
            }

            public long getItemId(int id) {
                return id;
            }

            public void Remove(int id) {
                notifyDataSetChanged();
            }

            public View getView(int position, View convertView, ViewGroup parent) {
                RelativeLayout rowLayout;
                IStatusUpdate t = elements.get(position);
                rowLayout = t.GetParent().GetUI(t, parent, c);
                return rowLayout;
            }

            class IStatusUpdateComparator implements Comparator {
                public int compare(Object obj1, Object obj2) {
                    IStatusUpdate update1 = (IStatusUpdate) obj1;
                    IStatusUpdate update2 = (IStatusUpdate) obj2;
                    int result = update1.getID().compareTo(update2.getID());
                    if (result == -1)
                        return 1;
                    else if (result == 1)
                        return 0;
                    return result;
                }
            }
        }

    Is there a better way to go about sorting ListViews in Android, while still being able to use the LayoutInflater? (rowLayout = t.GetParent().GetUI(t, parent, c) expands the UI to the specific view which the microblog implementation can provide.)

    Read the article

  • C++ vs Matlab vs Python as a main language for Computer Vision Postgraduate

    - by Hough
    Hi all, Firstly, sorry for a somewhat long question, but I think that many people are in the same situation as me and hopefully they can also gain some benefit from this. I'll be starting my PhD very soon, which involves the fields of computer vision, pattern recognition and machine learning. Currently, I'm using the opencv (2.1) C++ interface and I especially like its powerful Mat class and the overloaded operations available for seamless matrix and image operations and transformations. I've also tried (and implemented many small vision projects) using the opencv python interface (new bindings; opencv 2.1) and I really enjoy python's ability to integrate opencv, numpy, scipy and matplotlib. But recently, I went back to the opencv C++ interface because I felt that the official new python bindings were not stable enough and no overloaded operations are available for matrices and images, not to mention the lack of machine learning modules and slow speeds in certain operations. I've also used Matlab extensively in the past and although I've used mex files and other means to speed up the program, I just felt that Matlab's performance was inadequate for real-time vision tasks, be it for fast prototyping or not. When the project becomes larger and larger, many tasks have to be re-written in C and compiled into Mex files increasingly, and Matlab becomes nothing more than a glue language. Here come the sub-questions:

    1. For postgrad studies in these fields (machine learning, vision, pattern recognition), what is your main or ideal programming language for rapid prototyping of ideas and testing algorithms contained in papers?
    2. For postgrad studies, can you list the pros and cons of using the following languages? C++ (with opencv + gsl + svmlib + other libraries) vs Matlab (with all its toolboxes) vs python (with the incomplete opencv bindings + numpy + scipy + matplotlib).
    3. Are there computer vision PhD/postgrad students here who are using only C++ (with all its available libraries including opencv) without even needing to resort to Matlab or python? In other words, given the current existing computer vision or machine learning libraries, is C++ alone sufficient for fast prototyping of ideas?
    4. If you're currently using Java or C# for your postgrad work, can you list the reasons why they should be used and how they compare to other languages in terms of available libraries?
    5. What is the de facto vision/machine learning programming language and its associated libraries used in your university research group?

    Thanks in advance.

    Read the article

  • Python - calculate multinomial probability density functions on large dataset?

    - by Seafoid
    Hi, I originally intended to use MATLAB to tackle this problem, but the inbuilt functions have limitations that do not suit my goal. The same limitation occurs in NumPy. I have two tab-delimited files. The first is a file showing amino acid residue, frequency and count for an in-house database of protein structures, i.e.

        A 0.25 1
        S 0.25 1
        T 0.25 1
        P 0.25 1

    The second file consists of quadruplets of amino acids and the number of times they occur, i.e.

        ASTP 1

    Note, there are 8,000 such quadruplets. Based on the background frequency of occurrence of each amino acid and the count of quadruplets, I aim to calculate the multinomial probability density function for each quadruplet and subsequently use it as the expected value in a maximum likelihood calculation. The multinomial distribution is as follows:

        f(x|n, p) = n!/(x1!*x2!*...*xk!) * ((p1^x1)*(p2^x2)*...*(pk^xk))

    where x is the number of each of k outcomes in n trials with fixed probabilities p. n is 4 in all cases in my calculation. I have created three functions to calculate this distribution.

        # functions for multinomial distribution
        def expected_quadruplets(x, y):
            expected = x*y
            return expected

        # calculates the probabilities of occurrence raised to the number of occurrences
        def prod_prob(p1, a, p2, b, p3, c, p4, d):
            prob_prod = (pow(p1, a))*(pow(p2, b))*(pow(p3, c))*(pow(p4, d))
            return prob_prod

        # factorial() and multinomial_coefficient() work in tandem to calculate C, the multinomial coefficient
        def factorial(n):
            if n <= 1:
                return 1
            return n*factorial(n-1)

        def multinomial_coefficient(a, b, c, d):
            n = 24.0
            multi_coeff = (n/(factorial(a) * factorial(b) * factorial(c) * factorial(d)))
            return multi_coeff

    The problem is how best to structure the data in order to tackle the calculation most efficiently, in a manner that I can read (you guys write some cryptic code :-)) and that will not create an overflow or runtime error. Currently my data is represented as nested lists.

        amino_acids = [['A', '0.25', '1'], ['S', '0.25', '1'], ['T', '0.25', '1'], ['P', '0.25', '1']]
        quadruplets = [['ASTP', '1']]

    I initially intended calling these functions within a nested for loop, but this resulted in runtime errors or overflow errors. I know that I can reset the recursion limit, but I would rather do this more elegantly. I had the following:

        for i in quadruplets:
            quad = i[0].split(' ')
            for j in amino_acids:
                for k in quadruplets:
                    for v in k:
                        if j[0] == v:
                            multinomial_coefficient(int(j[2]), int(j[2]), int(j[2]), int(j[2]))

    I haven't really gotten to how to incorporate the other functions yet. I think that my current nested list arrangement is sub-optimal. I wish to compare each letter within the string 'ASTP' with the first component of each sub-list in amino_acids. Where a match exists, I wish to pass the appropriate numeric values to the functions using indices. Is there a better way? Can I append the appropriate numbers for each amino acid and quadruplet to a temporary data structure within a loop, pass this to the functions and clear it for the next iteration? Thanks, S :-)

    Read the article

  • Entity Attribute Value Database vs. strict Relational Model Ecommerce question

    - by Dr. Zim
    It is safe to say that the EAV/CR database model is bad. That said, the question: what database model, technique, or pattern should be used to deal with "classes" of attributes describing e-commerce products which can be changed at run time? In a good e-commerce database, you will store classes of options (like TV resolution, then have a resolution for each TV; but the next product may not be a TV and not have "TV resolution"). How do you store them, search efficiently, and allow your users to set up product types with variable fields describing their products? If the search engine finds that customers typically search for TVs based on console depth, you could add console depth to your fields, then add a single depth for each TV product type at run time. There is a nice common feature among good e-commerce apps where they show a set of products, then have "drill down" side menus where you can see "TV Resolution" as a header, and the top five most common TV resolutions for the found set. You click one and it only shows TVs of that resolution, allowing you to further drill down by selecting other categories on the side menu. These options would be the dynamic product attributes added at run time. Further discussion: So, long story short, are there any links out on the Internet or model descriptions that could "academically" fix the following setup? I thank Noel Kennedy for suggesting a category table, but the need may be greater than that. I describe it a different way below, trying to highlight the significance. I may need a viewpoint correction to solve the problem, or I may need to go deeper into the EAV/CR. Love the positive response to the EAV/CR model. My fellow developers all say what Jeffrey Kemp touched on below: "new entities must be modeled and designed by a professional" (taken out of context, read his response below). The problem is:

    - entities add and remove attributes weekly (search keywords dictate future attributes)
    - new entities arrive weekly (products are assembled from parts)
    - old entities go away weekly (archived, less popular, seasonal)

    The customer wants to add attributes to the products for two reasons:

    - department / keyword search / comparison chart between like products
    - consumer product configuration before checkout

    The attributes must have significance, not just a keyword search. If they want to compare all cakes that have a "whipped cream frosting", they can click cakes, click birthday theme, click whipped cream frosting, then check all cakes that are interesting knowing they all have whipped cream frosting. This is not specific to cakes, just an example.

    Read the article

  • Featureful commercial text editors?

    - by wrp
    I'm willing to buy tools if they add genuine value over a FOSS equivalent. One thing I wouldn't mind having is an editor with the power of Emacs, but made more user-friendly. There seem to be several commercial editors out there, but I can't find much discussion of them online. Maybe it's because the kind of people who use commercial software don't have time to do much blogging. ;-) If you have used any, what was your evaluation? I'd especially like to hear how you would compare them to Emacs. I'm thinking of editors like VEDIT, Boxer, Crisp, UltraEdit, SlickEdit, etc. To get things started, I tried EditPad Pro because I needed something on a Win98SE box. I was attracted by its powerful support for regexps, but I didn't use it for long. One annoyance was that find-in-files was only available in a separate product you had to buy. The main problem, though, was stability. It sometimes hung, and I lost a few files because it corrupted them while editing. After a couple weeks, I found that I was avoiding using it, so I just uninstalled it. Edit: Ah... I need to remove some ambiguity. With reference to Emacs, "power" often means its potential for customization. This malleability comes from having an architecture in which most of the functionality is written in a scripting language that runs on a compiled core. Emacs (with elisp) is by far the most widely known such system among home users, but there have been other heavily used editors such as Freemacs (MINT), JED (S-Lang), XEDIT (Rexx), ADAM (TPU), and SlickEdit (Slick-C). In this case, by "power" I'm not referring to extensibility but to realized features. There are three main areas in which I think a commercial text editor might be an improvement over Emacs:

    Stability. The only apps I regularly use on Linux that give me flaky behavior are Emacs, Gedit, and Geany. On Windows, I like the look and features of Notepad++, but I find it extremely unstable, especially if I try to use the plugins. Whatever I happen to be doing, I'm using some text editor practically all day long. If I could switch to an editor that never gave me problems, it would definitely lower my stress level.

    Tools. When I started using Emacs, I searched the manual cover to cover to glean ideas for clever, useful things I could do with it. I'd like to see lots of useful features for editing code, based on detailed knowledge of what the system can do and the accumulated feedback of users.

    Polish. The rule of threes goes that if you develop something for yourself, it's three times harder to make it usable in-house, and three times harder again to make it a viable product for sale. It's understandable, but free software development doesn't seem to benefit from much usability testing.

    BTW, texteditors.org is a fantastic resource for researching text editors.

    Read the article

  • Exception using SQLiteDataReader

    - by galford13x
    I'm making a custom SQLite wrapper, meant to allow a persistent connection to a database. However, I receive an exception when calling this function twice:

        public Boolean DatabaseConnected(string databasePath)
        {
            bool exists = false;
            if (ConnectionOpen())
            {
                this.Command.CommandText = string.Format(DATABASE_QUERY);
                using (reader = this.Command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        if (string.Compare(reader[FILE_NAME_COL_HEADER].ToString(), databasePath, true) == 0)
                        {
                            exists = true;
                            break;
                        }
                    }
                    reader.Close();
                }
            }
            return exists;
        }

    I use the above function to check if the database is currently open before executing a command or trying to open a database. The first time I execute the function, it executes with no issue. After that, the reader = this.Command.ExecuteReader() call throws an exception: "Object reference not set to an instance of an object." StackTrace:

        at System.Data.SQLite.SQLiteStatement.Dispose()
        at System.Data.SQLite.SQLite3.Reset(SQLiteStatement stmt)
        at System.Data.SQLite.SQLite3.Step(SQLiteStatement stmt)
        at System.Data.SQLite.SQLiteDataReader.NextResult()
        at System.Data.SQLite.SQLiteDataReader..ctor(SQLiteCommand cmd, CommandBehavior behave)
        at System.Data.SQLite.SQLiteCommand.ExecuteReader(CommandBehavior behavior)
        at System.Data.SQLite.SQLiteCommand.ExecuteReader()
        at EveTraderApi.Database.SQLDatabase.DatabaseConnected(String databasePath) in C:\Documents and Settings\galford13x\My Documents\Visual Studio 2008\Projects\EveTrader\EveTraderApi\Database\Database.cs:line 579
        at EveTraderApi.Database.SQLDatabase.OpenSQLiteDB(String filename) in C:\Documents and Settings\galford13x\My Documents\Visual Studio 2008\Projects\EveTrader\EveTraderApi\Database\Database.cs:line 119
        at EveTraderApiExample.Form1.CreateTableDataTypes() in C:\Documents and Settings\galford13x\My Documents\Visual Studio 2008\Projects\EveTrader\EveTraderApiExample\Form1.cs:line 89
        at EveTraderApiExample.Form1.Button1_ExecuteCommand(Object sender, EventArgs e) in C:\Documents and Settings\galford13x\My Documents\Visual Studio 2008\Projects\EveTrader\EveTraderApiExample\Form1.cs:line 35
        at System.Windows.Forms.Control.OnClick(EventArgs e)
        at System.Windows.Forms.Button.OnClick(EventArgs e)
        at System.Windows.Forms.Button.OnMouseUp(MouseEventArgs mevent)
        at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks)
        at System.Windows.Forms.Control.WndProc(Message& m)
        at System.Windows.Forms.ButtonBase.WndProc(Message& m)
        at System.Windows.Forms.Button.WndProc(Message& m)
        at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m)
        at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
        at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
        at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg)
        at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(Int32 dwComponentID, Int32 reason, Int32 pvLoopData)
        at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context)
        at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context)
        at System.Windows.Forms.Application.Run(Form mainForm)
        at EveTraderApiExample.Program.Main() in C:\Documents and Settings\galford13x\My Documents\Visual Studio 2008\Projects\EveTrader\EveTraderApiExample\Program.cs:line 18
        at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args)
        at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
        at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
        at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
        at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
        at System.Threading.ThreadHelper.ThreadStart()
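    Reading the trace, the second ExecuteReader() appears to die while SQLite resets and disposes the statement left over from the first call on the same persistent SQLiteCommand. One pattern that sidesteps this, at the cost of the persistent command, is to create and dispose a fresh command per query. A minimal sketch, assuming System.Data.SQLite and an open SQLiteConnection exposed by the wrapper (a hypothetical rewrite, not the asker's code):

        public bool DatabaseConnected(string databasePath)
        {
            bool exists = false;
            if (ConnectionOpen())
            {
                // One short-lived command per query, so no half-disposed statement is reused
                using (SQLiteCommand command = this.Connection.CreateCommand())
                {
                    command.CommandText = DATABASE_QUERY;
                    using (SQLiteDataReader reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            if (string.Compare(reader[FILE_NAME_COL_HEADER].ToString(), databasePath, true) == 0)
                            {
                                exists = true;
                                break;
                            }
                        }
                    }
                }
            }
            return exists;
        }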

    Read the article

  • OpenGL FrameBuffer Objects weird behavior

    - by Ben Jones
    My algorithm is this:

    1. Render the scene to an FBO with shadow mapping from multiple locations
    2. Render the scene to the screen with shadow mapping
    3. ...black magic that I still have to implement...
    4. Combine the samples from step 1 with the image from step 2

    I'm trying to debug steps 1 and 2 and am coming across STRANGE behavior. My algorithm for each shadow-mapped pass is:

    - render the scene to an FBO connected to a depth array texture from the POV of each light
    - render the scene from the viewpoint and use vertex/frag shaders to compare the depths

    When I run my algorithm this way:

        render from point to FBO
        render from point to screen
        glutSwapBuffers()

    the normal vectors in the screen pass appear to be incorrect (inverted possibly). I'm pretty sure that's the issue because my diffuse lighting calculation is incorrect, but the material colors are correct, and the shadows appear in the correct places. So, it seems like the only thing that could be the culprit is the normals. However, if I do

        render from point to FBO
        render from point to screen
        glutSwapBuffers()   // wrong here
        render from point to screen
        glutSwapBuffers()

    the second pass is correct. I assume there's a problem with my framebuffer calls. Can anyone see what the problem is from the log below? It's from a bugle trace grepped for 'buffer', with a few edits to make it a little more clear. Thanks!

        [INFO] trace.call: glGenFramebuffersEXT(1, 0xdfeb90 - { 1 })
        [INFO] trace.call: glGenFramebuffersEXT(1, 0xdfebac - { 2 })
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 1)
        [INFO] trace.call: glDrawBuffer(GL_NONE)
        [INFO] trace.call: glReadBuffer(GL_NONE)
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 0)
        //start render to FBO
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 2)
        [INFO] trace.call: glReadBuffer(GL_NONE)
        [INFO] trace.call: glFramebufferTexture2DEXT(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, 2, 0)
        [INFO] trace.call: glFramebufferTexture2DEXT(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, 3, 0)
        [INFO] trace.call: glDrawBuffer(GL_COLOR_ATTACHMENT0)
        //bind to the FBO attached to a depth tex array for shadows
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 1)
        [INFO] trace.call: glFramebufferTextureLayerARB(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, 1, 0, 0)
        [INFO] trace.call: glClear(GL_DEPTH_BUFFER_BIT)
        //draw geometry
        //bind to the FBO I want the shadow mapped image rendered to
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 2)
        [INFO] trace.call: glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
        //draw geometry
        //draw to screen pass
        //again shadow mapping FBO
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 1)
        [INFO] trace.call: glFramebufferTextureLayerARB(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, 1, 0, 0)
        [INFO] trace.call: glClear(GL_DEPTH_BUFFER_BIT)
        //draw geometry
        //bind to the screen
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 0)
        [INFO] trace.call: glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
        //finished, swap buffers
        [INFO] trace.call: glXSwapBuffers(0xd5fc10, 0x05800002)
        //INCORRECT OUTPUT
        //second try at render to screen:
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 1)
        [INFO] trace.call: glFramebufferTextureLayerARB(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, 1, 0, 0)
        [INFO] trace.call: glClear(GL_DEPTH_BUFFER_BIT)
        //draw geometry
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 0)
        [INFO] trace.call: glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
        //draw geometry
        [INFO] trace.call: glXSwapBuffers(0xd5fc10, 0x05800002)
        //correct output

    Read the article

  • Automatic incremental SQL Script generation for incremental, nightly builds when using Team Build in

    - by Steve Johnson
    hi all, hope that everybody here is OK. We are using VS 2008 as a development tool and TFS 2008 for version control as well as build automation. Some of our developers use DB Pro for database changes and some use SQL Server Management Studio. I am trying to automate the build for a web application built using C# and VB.Net. Our scenario is such that we have a central database to which our web application connects. Whenever we supply our clients with new functionality or a bug fix, we supply them incremental builds. The SQL script is checked into source control for every incremental build once developers have made and tested their changes on our central DB server. I want to generate a differential script that can be run at the client as an incremental update script. Now, coming up with this is a problem. Sometimes our developers tend to forget the database change-sets, and the script in source control is missing an SP or two. Also, sometimes we need to insert default data into some of the tables that have strict, stringent values and not test values. Like a table that contains services provided by the panel: we add a new service name, signature, credentials and service address, etc. in the ServiceTable. Besides this, many other tables may have test data that may not be needed. If we use DataCompare, it will generate a changeset for the required data (important for the client to enable certain services) as well as for our test data that was added to the database as a result of our testing of the functionality or bug fix. Currently I am using SQLSchemaCompareTask (from the Visual Studio 2008 Team Database Professional Power Tools API) in the TFSBuild.proj file of the build definition for TFS 2008. With SQLSchemaCompareTask, the generated script contains database names like [dbo]. etc., which are not desired, as the script fails when run against SQL Server 2000 databases (some of our clients still use SQL Server 2000 as the backend of the application). Also, default data can't be generated by this process. To overcome this problem, I have to come up with a solution that can compare databases and generate a script automatically that does not have to be manually reviewed again before being sent to the client. Please suggest an effective methodology for such SQL script generation, and suggest whether two different databases may be used, or something else? Is there any toolkit or API that can enable build automation for SQL Server databases? Thank you all. Regards, Steve

    Read the article

  • Page expired issue with back button and wicket SortableDataProvider and DataTable

    - by David
    Hi, I've got an issue with SortableDataProvider and DataTable in Wicket. I've defined my DataTable as such:

        IColumn<Column>[] columns = new IColumn[9];
        // column values are mapped to the private attributes listed in ColumnImpl.java
        columns[0] = new PropertyColumn(new Model("#"), "columnPosition", "columnPosition");
        columns[1] = new PropertyColumn(new Model("Description"), "description");
        columns[2] = new PropertyColumn(new Model("Type"), "dataType", "dataType");

    Adding it to the table:

        DataTable<Column> dataTable = new DataTable<Column>("columnsTable", columns, provider, maxRowsPerPage) {
            @Override
            protected Item<Column> newRowItem(String id, int index, IModel<Column> model) {
                return new OddEvenItem<Column>(id, index, model);
            }
        };

    My data provider:

        public class ColumnSortableDataProvider extends SortableDataProvider<Column> {

            private static final long serialVersionUID = 1L;
            private List list = null;

            public ColumnSortableDataProvider(Table table, String sortProperty) {
                this.list = Arrays.asList(table.getColumns().toArray(new Column[0]));
                setSort(sortProperty, true);
            }

            public ColumnSortableDataProvider(List list, String sortProperty) {
                this.list = list;
                setSort(sortProperty, true);
            }

            @Override
            public Iterator iterator(int first, int count) {
                /* first - first row of data
                   count - minimum number of elements to retrieve
                   So this method returns an iterator capable of iterating over {first, first+count} items */
                Iterator iterator = null;
                try {
                    if (getSort() != null) {
                        Collections.sort(list, new Comparator<Column>() {
                            private static final long serialVersionUID = 1L;
                            @Override
                            public int compare(Column c1, Column c2) {
                                int result = 1;
                                PropertyModel<Comparable> model1 = new PropertyModel<Comparable>(c1, getSort().getProperty());
                                PropertyModel<Comparable> model2 = new PropertyModel<Comparable>(c2, getSort().getProperty());
                                if (model1.getObject() == null && model2.getObject() == null)
                                    result = 0;
                                else if (model1.getObject() == null)
                                    result = 1;
                                else if (model2.getObject() == null)
                                    result = -1;
                                else
                                    result = ((Comparable) model1.getObject()).compareTo(model2.getObject());
                                result = getSort().isAscending() ? result : -result;
                                return result;
                            }
                        });
                    }
                    if (list.size() > (first + count))
                        iterator = list.subList(first, first + count).iterator();
                    else
                        iterator = list.iterator();
                } catch (Exception e) {
                    e.printStackTrace();
                }
                return iterator;
            }
        }

    The problem is the following:

    - I click a column header to sort by that column.
    - I navigate to a different page.
    - I click Back (or Forward if I do the opposite scenario).
    - Page has expired.

    It'd be nice to generate the page using PageParameters, but I somehow need to intercept the sort event to do so. Any pointers would be greatly appreciated. Thanks a ton!! David

    Read the article

  • PHP: Can pcntl_alarm() and socket_select() peacefully exist in the same thread?

    - by DWilliams
    I have a PHP CLI script mostly written that functions as a chat server for chat clients to connect to (don't ask me why I'm doing it in PHP, that's another story haha). My script utilizes the socket_select() function to hang execution until something happens on a socket, at which point it wakes up, processes the event, and waits until the next event. Now, there are some routine tasks that I need performed every 30 seconds or so (check whether tempbanned users should be unbanned, save user databases, other assorted things). From what I can tell, PHP doesn't have very great multi-threading support at all. My first thought was to compare a timestamp every time the socket generates an event and gets the program flowing again, but this is very inconsistent, since the server could very well sit idle for hours and not have any of my cleanup routines executed. I came across the PHP pcntl extensions, which let me assign a time interval for SIGALRM to get sent and a function to get executed every time it's sent. This seems like the ideal solution to my problem; however, pcntl_alarm() and socket_select() clash with each other pretty badly. Every time SIGALRM is triggered, all sorts of crazy things happen to my socket control code. My program is fairly lengthy so I can't post it all here, but it shouldn't matter since I don't believe I'm doing anything wrong code-wise. My question is: Is there any way for a SIGALRM to be handled in the same thread as a waiting socket_select()? If so, how? If not, what are my alternatives here? Here's some output from my program. My alarm function simply outputs "Tick!" whenever it's called to make it easy to tell when stuff is happening. This is the output (including errors) after allowing it to tick 4 times (there were no actual attempts at connecting to the server despite what it says):

        [05-28-10 @ 20:01:05] Chat server started on 192.168.1.28 port 4050
        [05-28-10 @ 20:01:05] Loaded 2 users from file
        PHP Notice: Undefined offset: 0 in /home/danny/projects/PHPChatServ/ChatServ.php on line 112
        PHP Warning: socket_select(): unable to select [4]: Interrupted system call in /home/danny/projects/PHPChatServ/ChatServ.php on line 116
        [05-28-10 @ 20:01:15] Tick!
        PHP Warning: socket_accept(): unable to accept incoming connection [4]: Interrupted system call in /home/danny/projects/PHPChatServ/ChatServ.php on line 126
        [05-28-10 @ 20:01:25] Tick!
        PHP Warning: socket_getpeername() expects parameter 1 to be resource, boolean given in /home/danny/projects/PHPChatServ/ChatServ.php on line 129
        [05-28-10 @ 20:01:25] Accepting socket connection from
        PHP Notice: Undefined offset: 1 in /home/danny/projects/PHPChatServ/ChatServ.php on line 112
        PHP Warning: socket_select(): unable to select [4]: Interrupted system call in /home/danny/projects/PHPChatServ/ChatServ.php on line 116
        [05-28-10 @ 20:01:35] Tick!
        PHP Warning: socket_accept(): unable to accept incoming connection [4]: Interrupted system call in /home/danny/projects/PHPChatServ/ChatServ.php on line 126
        [05-28-10 @ 20:01:45] Tick!
        PHP Warning: socket_getpeername() expects parameter 1 to be resource, boolean given in /home/danny/projects/PHPChatServ/ChatServ.php on line 129
        [05-28-10 @ 20:01:45] Accepting socket connection from
        PHP Notice: Undefined offset: 2 in /home/danny/projects/PHPChatServ/ChatServ.php on line 112

    Read the article

  • How to better create stacked bar graphs with multiple variables from ggplot2?

    - by deoksu
    I often have to make stacked barplots to compare variables, and because I do all my stats in R, I prefer to do all my graphics in R with ggplot2. I would like to learn how to do two things: First, I would like to be able to add proper percentage tick marks for each variable rather than tick marks by count. Counts would be confusing, which is why I take out the axis labels completely. Second, there must be a simpler way to reorganize my data to make this happen. It seems like the sort of thing I should be able to do natively in ggplot2 with plyr, but the documentation for plyr is not very clear (and I have read both the ggplot2 book and the online plyr documentation). My best graph looks like this; the code to create it follows:

        library(epicalc)
        ### recode the variables to factors ###
        recode(c(int_newcoun, int_newneigh, int_neweur, int_newusa, int_neweco, int_newit, int_newen, int_newsp, int_newhr, int_newlit, int_newent, int_newrel, int_newhth, int_bapo, int_wopo, int_eupo, int_educ),
               c(1,2,3,4,5,6,7,8,9, NA),
               c('Very Interested','Somewhat Interested','Not Very Interested','Not At All interested',NA,NA,NA,NA,NA,NA))

        ### Combine recoded variables to a common vector ###
        Interest1 <- c(int_newcoun, int_newneigh, int_neweur, int_newusa, int_neweco, int_newit, int_newen, int_newsp, int_newhr, int_newlit, int_newent, int_newrel, int_newhth, int_bapo, int_wopo, int_eupo, int_educ)

        ### Create a second vector to label the first vector by original variable ###
        a1 <- rep("News about Bangladesh", length(int_newcoun))
        a2 <- rep("Neighboring Countries", length(int_newneigh))
        [...]
        a17 <- rep("Education", length(int_educ))
        Interest2 <- c(a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13, a14, a15, a16, a17)

        ### Create a weighting vector of the proper length ###
        Interest.weight <- rep(weight, 17)

        ### Make and save a new data frame from the three vectors ###
        Interest.df <- cbind(Interest1, Interest2, Interest.weight)
        Interest.df <- as.data.frame(Interest.df)
        write.csv(Interest.df, 'C:\\Documents and Settings\\[name]\\Desktop\\Sweave\\InterestBangladesh.csv')

        ### Sort the factor levels to display properly ###
        Interest.df$Interest1 <- relevel(Interest$Interest1, ref='Not Very Interested')
        Interest.df$Interest1 <- relevel(Interest$Interest1, ref='Somewhat Interested')
        Interest.df$Interest1 <- relevel(Interest$Interest1, ref='Very Interested')
        Interest.df$Interest2 <- relevel(Interest$Interest2, ref='News about Bangladesh')
        Interest.df$Interest2 <- relevel(Interest$Interest2, ref='Education')
        [...]
        Interest.df$Interest2 <- relevel(Interest$Interest2, ref='European Politics')
        detach(Interest)
        attach(Interest)

        ### Finally create the graph in ggplot2 ###
        library(ggplot2)
        p <- ggplot(Interest, aes(Interest2, ..count..))
        p <- p + geom_bar((aes(weight=Interest.weight, fill=Interest1)))
        p <- p + coord_flip()
        p <- p + scale_y_continuous("", breaks=NA)
        p <- p + scale_fill_manual(value = rev(brewer.pal(5, "Purples")))
        p
        update_labels(p, list(fill='', x='', y=''))

    I'd very much appreciate any tips, tricks or hints. Thanks.

    Read the article

  • Asp.net MVC VirtualPathProvider views parse error

    - by madcapnmckay
    Hi, I am working on a plugin system for ASP.NET MVC 2. I have a DLL containing controllers and views as embedded resources. I scan the plugin DLLs for controllers using StructureMap, and I can then pull them out and instantiate them when requested. This works fine. I then have a VirtualPathProvider which I adapted from this post:

        public class AssemblyResourceProvider : VirtualPathProvider {
            protected virtual string WidgetDirectory {
                get { return "~/bin"; }
            }

            private bool IsAppResourcePath(string virtualPath) {
                var checkPath = VirtualPathUtility.ToAppRelative(virtualPath);
                return checkPath.StartsWith(WidgetDirectory, StringComparison.InvariantCultureIgnoreCase);
            }

            public override bool FileExists(string virtualPath) {
                return (IsAppResourcePath(virtualPath) || base.FileExists(virtualPath));
            }

            public override VirtualFile GetFile(string virtualPath) {
                return IsAppResourcePath(virtualPath) ? new AssemblyResourceVirtualFile(virtualPath) : base.GetFile(virtualPath);
            }

            public override CacheDependency GetCacheDependency(string virtualPath, IEnumerable virtualPathDependencies, DateTime utcStart) {
                return IsAppResourcePath(virtualPath) ? null : base.GetCacheDependency(virtualPath, virtualPathDependencies, utcStart);
            }
        }

        internal class AssemblyResourceVirtualFile : VirtualFile {
            private readonly string path;

            public AssemblyResourceVirtualFile(string virtualPath) : base(virtualPath) {
                path = VirtualPathUtility.ToAppRelative(virtualPath);
            }

            public override Stream Open() {
                var parts = path.Split('/');
                var resourceName = Path.GetFileName(path);
                var apath = HttpContext.Current.Server.MapPath(Path.GetDirectoryName(path));
                var assembly = Assembly.LoadFile(apath);
                return assembly != null
                    ? assembly.GetManifestResourceStream(assembly.GetManifestResourceNames().SingleOrDefault(s => string.Compare(s, resourceName, true) == 0))
                    : null;
            }
        }

    The VPP seems to be working fine also. The view is found and is pulled out into a stream. I then receive a parse error: Could not load type 'System.Web.Mvc.ViewUserControl<dynamic>', which I can't find mentioned in any previous example of pluggable views. Why would my view not compile at this stage? Thanks for any help, Ian. EDIT: Getting closer to an answer but not quite clear why things aren't compiling. Based on the comments I checked the versions, and everything is in V2. I believe dynamic was brought in at V2, so this is fine. I don't even have V3 installed, so it can't be that. I have, however, got the view to render if I remove the <dynamic> altogether. So a VPP works, but only if the view is not strongly typed or dynamic. This makes sense for the strongly typed scenario, as the type is in the dynamically loaded DLL, so the view engine will not be aware of it, even though the DLL is in the bin. Is there a way to load types at app start? Considering having a go with MEF instead of my bespoke StructureMap solution. What do you think?
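    For context, a custom VirtualPathProvider like the one above only takes effect once it is registered with the hosting environment, typically at application start. A minimal sketch of that wiring follows (HostingEnvironment.RegisterVirtualPathProvider is the standard ASP.NET API; the Global.asax placement is an assumption, since the excerpt doesn't show where the author registers it):

        using System.Web.Hosting;

        public class MvcApplication : System.Web.HttpApplication
        {
            protected void Application_Start()
            {
                // Make the provider visible to the build manager and the view engine
                HostingEnvironment.RegisterVirtualPathProvider(new AssemblyResourceProvider());

                // ... route and StructureMap plugin registration would follow here
            }
        }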

    Read the article

  • Moving from Linear Probing to Quadratic Probing (hash collisons)

    - by Nazgulled
    Hi, my current implementation of a hash table uses linear probing, and now I want to move to quadratic probing (and later to chaining, and maybe double hashing too). I've read a few articles, tutorials, Wikipedia, etc... but I still don't know exactly what I should do. Linear probing, basically, has a step of 1 and that's easy to do. When searching, inserting or removing an element from the hash table, I need to calculate a hash, and for that I do this:

        index = hash_function(key) % table_size;

    Then, while searching, inserting or removing I loop through the table until I find a free bucket, like this:

        do {
            if (/* CHECK IF IT'S THE ELEMENT WE WANT */) {
                // FOUND ELEMENT
                return;
            } else {
                index = (index + 1) % table_size;
            }
        } while (/* LOOP UNTIL IT'S NECESSARY */);

    As for quadratic probing, I think what I need to do is change how the "index" step size is calculated, but that's the part I don't understand. I've seen various pieces of code, and all of them are somewhat different. Also, I've seen some implementations of quadratic probing where the hash function is changed to accommodate that (but not all of them). Is that change really needed, or can I avoid modifying the hash function and still use quadratic probing? EDIT: After reading everything pointed out by Eli Bendersky below, I think I got the general idea. Here's part of the code at http://eternallyconfuzzled.com/tuts/datastructures/jsw_tut_hashtable.aspx:

        for ( step = 1; table->table[h] != EMPTY; step++ ) {
            if ( compare ( key, table->table[h] ) == 0 )
                return 1;

            /* Move forward quadratically, wrap if necessary */
            h = ( h + ( step * step - step ) / 2 ) % table->size;
        }

    There's 2 things I don't get... They say that quadratic probing is usually done using c(i) = i^2. However, in the code above, it's doing something more like c(i) = (i^2 - i)/2. I was ready to implement this in my code, but I would simply do:

        index = (index + (index^index)) % table_size;

    ...and not:

        index = (index + (index^index - index)/2) % table_size;

    If anything, I would do:

        index = (index + (index^index)/2) % table_size;

    ...because I've seen other code examples dividing by two, although I don't understand why... 1) Why is it subtracting the step? 2) Why is it dividing it by 2?
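    One caution on the snippets above: in C-family languages ^ is bitwise XOR, not exponentiation, so index^index is always 0 and (index + (index^index)) % table_size would never advance the probe; squaring has to be written as step * step. For illustration, here is a minimal sketch of one common quadratic-probing variant, triangular-number probing with c(i) = (i*i + i)/2, which is known to visit every bucket when the table length is a power of two (a hypothetical helper written in C#, not from the question or the linked tutorial):

        using System;

        static class QuadraticProbe
        {
            // Probe offsets grow by +1, +2, +3, ... cumulatively, i.e. the i-th
            // cumulative offset is the triangular number c(i) = (i*i + i) / 2.
            public static int FindSlot<TKey>(TKey[] table, TKey key,
                                             Func<TKey, int> hash, Func<TKey, bool> isEmpty)
            {
                int capacity = table.Length;                 // assumed to be a power of two
                int index = (hash(key) & 0x7fffffff) % capacity;
                for (int step = 1; !isEmpty(table[index]); step++)
                {
                    if (Equals(table[index], key))
                        return index;                        // key already present
                    index = (index + step) % capacity;       // quadratic (triangular) step
                }
                return index;                                // first free bucket
            }
        }

    The incremental form index = (index + step) % capacity avoids recomputing step * step on every iteration while producing the same probe sequence, which is likely why tutorial code carries the division by 2 around: (step*step - step)/2 and the running +step increments are two ways of writing the same triangular offsets.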

    Read the article

  • C++ vs Matlab vs Python as a main language for Computer Vision Research

    - by Hough
    Hi all, Firstly, sorry for a somewhat long question, but I think many people are in the same situation as me and hopefully they can also gain some benefit from this. I'll be starting my PhD very soon, which involves the fields of computer vision, pattern recognition and machine learning. Currently, I'm using the opencv (2.1) C++ interface and I especially like its powerful Mat class, the overloaded operations available for matrix and image operations, and the seamless transformations. I've also tried (and implemented many small vision projects) using the opencv python interface (new bindings; opencv 2.1) and I really enjoy python's ability to integrate opencv, numpy, scipy and matplotlib. But recently, I went back to the opencv C++ interface because I felt that the official python new bindings were not stable enough and no overloaded operations are available for matrices and images, not to mention the lack of machine learning modules and slow speeds in certain operations. I've also used Matlab extensively in the past, and although I've used mex files and other means to speed up programs, I just felt that Matlab's performance was inadequate for real-time vision tasks, be it for fast prototyping or not. When a project grows larger and larger, more and more tasks have to be rewritten in C and compiled into mex files, and Matlab becomes nothing more than a glue language. Here come the sub-questions:

    1. For carrying out research in these fields (machine learning, vision, pattern recognition), what is your main or ideal programming language for rapid prototyping of ideas and testing algorithms contained in papers?

    2. For computer vision research work, can you list the pros and cons of using the following languages? C++ (with opencv + gsl + svmlib + other libraries) vs Matlab (with all its toolboxes) vs python (with the incomplete opencv bindings + numpy + scipy + matplotlib).

    3. Are there computer vision PhD/postgrad students here who are using only C++ (with all its available libraries, including opencv) without even needing to resort to Matlab or python? In other words, given the currently existing computer vision and machine learning libraries, is C++ alone sufficient for fast prototyping of ideas?

    4. If you're currently using Java or C# for your research, can you list the reasons why they should be used and how they compare to other languages in terms of available libraries?

    5. What is the de facto vision/machine learning programming language and its associated libraries used in your research group?

    Thanks in advance. Edit: As suggested, I've opened the question to both academic and non-academic computer vision/machine learning/pattern recognition researchers and groups.

    Read the article

  • How to pass the 'argument-line' of one PowerShell function to another?

    - by jwfearn
    I'm trying to write some PowerShell functions that do some stuff and then transparently call through to an existing built-in function. I want to pass along all the arguments untouched, and I don't want to have to know any details of the arguments. I tried using 'splat' to do this with @args, but that didn't work as I expected. In the example below, I've written a toy function called myls, which is supposed to print hello! and then call the same built-in function, Get-ChildItem, that the built-in alias ls calls, with the rest of the argument line intact. What I have so far works pretty well:

        function myls
        {
            Write-Output "hello!"
            Invoke-Expression ("Get-ChildItem " + $MyInvocation.UnboundArguments -join " ")
        }

    A correct version of myls should be able to handle being called with no arguments, with one argument, with named arguments, from a line containing multiple semicolon-delimited commands, and with variables in the arguments, including string variables containing spaces. The tests below compare myls and the built-in ls [NOTE: output elided and/or compacted to save space]:

        PS> md C:\p\d\x, C:\p\d\y, C:\p\d\"jay z"
        PS> cd C:\p\d
        PS> ls                              # no args
        PS> myls                            # pass
        PS> cd ..
        PS> ls d                            # one arg
        PS> myls d                          # pass
        PS> $a="A"; $z="Z"; $y="y"; $jz="jay z"
        PS> $a; ls d; $z                    # multiple statements
        PS> $a; myls d; $z                  # pass
        PS> $a; ls d -Exclude x; $z         # named args
        PS> $a; myls d -Exclude x; $z       # pass
        PS> $a; ls d -Exclude $y; $z        # variables in arg-line
        PS> $a; myls d -Exclude $y; $z      # pass
        PS> $a; ls d -Exclude $jz; $z       # variables containing spaces in arg-line
        PS> $a; myls d -Exclude $jz; $z     # FAIL!

    Is there a way I can rewrite myls to get the behavior I want?

    Read the article

  • Why does touching "d_name" make calls to readdir() fail?

    - by Sarah Mani
    Hi, I'm trying to write a little helper for Windows which will eventually accept a file extension as an argument and return the number of files of that kind in the current directory. To do so, I'm reading the file entries in the directories, and after getting the extension I'd like to convert it to lowercase to compare it with the yet-to-be-added argument. While converting the extension to lowercase, I found that touching even a duplicate of the d_name string causes strange behaviour: it looks as if no further calls to readdir() succeed. Here is the code I'm using right now (the commented code is preliminary), and the outputs for a given directory:

        #include <ctype.h>
        #include <dirent.h>
        #include <stdio.h>
        #include <string.h>

        char * strrch(char *string, size_t elements, char character)
        {
            char *reverse = string + elements;

            while (--reverse != string)
                if (*reverse == character)
                    return reverse;

            return NULL;
        }

        void test(char *string)
        {
            // Even being a duplicate will make it fail:
            char *str = strdup(string);

            printf("Strings: %s %s\n", string, str);
            *str = 'a';
            printf("Strings: %s %s\n", string, str);

            //unsigned short int i = 0;
            //for (; str[i] != '\0', str++; i++)
            //    str[i] = tolower((unsigned char) str[i]);
            //puts(str);
        }

        int main(int argc, char **argv)
        {
            DIR *directory;
            struct dirent *element;

            if (directory = opendir(".")) {
                while (element = readdir(directory))
                    test(strrch(element->d_name, element->d_namlen, '.'));

                closedir(directory);
                puts(NULL);
            }
            else
                puts("Couldn't open the directory.\n");
        }

    Output without modifying the duplicate (the modification and the second printf call commented out):

        Strings: (null) (null)
        Strings: . .
        Strings: .exe .exe
        Strings: .pdf .pdf
        Strings: .c .c
        Strings: .ini .ini
        Strings: .pdf .pdf
        Strings: .pdf .pdf
        Strings: .pdf .pdf
        Strings: .flac .flac
        Strings: .FLAC .FLAC
        Strings: .lnk .lnk
        Strings: .URL .URL

    Output for the same directory (with the code above, with the two printfs):

        Strings: (null) (null)

    Is there anything wrong? Is it a compiler issue? I'm using GCC 4.4.3 on Windows (MinGW) right now. Thank you very much for your help. By the way, is there any other way to work with files and directories in a Windows environment without using the POSIX functions?
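    On the closing aside about non-POSIX alternatives (an editorial note, not part of the original question): in managed code the whole helper collapses to a few lines. A hypothetical C# sketch of the same task, counting files with a given extension case-insensitively (the default ".pdf" is my own placeholder):

        // Count files in the current directory whose extension matches,
        // ignoring case; the extension is taken from the command line.
        using System;
        using System.IO;
        using System.Linq;

        class ExtensionCounter
        {
            static void Main(string[] args)
            {
                string ext = args.Length > 0 ? args[0] : ".pdf";
                int count = Directory.EnumerateFiles(".")
                    .Count(f => string.Equals(Path.GetExtension(f), ext,
                                              StringComparison.OrdinalIgnoreCase));
                Console.WriteLine("{0} file(s) with extension {1}", count, ext);
            }
        }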

    Read the article

  • PHP if clause inside foreach not retrieving data correctly

    - by Mike
    Here's my issue: in my controller, I want to grab user input from a form. I then parse the input and compare it to database values to ensure I'm grabbing the correct input. I simply want to match the user's answers to the question, grab the user ID and the question ID, and then determine whether the answer applies to a multiple-choice or checkbox question, or something else. I take those values and insert them into the answer table. Ignore the waiver stuff; I'll test that once I get the answers input correctly.

        // add answers and waiver consent records
        try {
            $answerArray = array();
            $waiverArray = array();

            // retrieve answers, waiver consents, and the question IDs from the form object
            foreach ($formData as $key => $value) {
                $parts = explode("_", $key);
                if ($parts[0] == 'question') {
                    array_push($answerArray, $value);
                }
                if ($parts[0] == 'waiverTitle') {
                    array_push($waiverArray, $value);
                }
            }

            $questions = new Model_DbTable_Questions();
            $questionResults = $questions->getQuestionResults($session->idEvent);

            foreach ($questionResults as $qr) {
                if ($qr['questionType'] == 'multipleChoice' || $qr['questionType'] == 'checkBox') {
                    foreach ($answerArray as $aa) {
                        $answerData = $answers->addAnswer($lastUserID, $qr['idQuestion'], null, $aa);
                        echo count($answerData) . ', ' . $qr['questionType'] . ', ' . $aa . '<br />';
                    }
                } else {
                    foreach ($answerArray as $aa) {
                        $answerData = $answers->addAnswer($lastUserID, $qr['idQuestion'], $aa, null);
                        echo count($answerData) . ', ' . $qr['questionType'] . ', ' . $aa . '<br />';
                    }
                }
            }
        } catch (Zend_Db_Statement_Exception $e) {
            $e->getMessage();
            throw $e;
        }

    From my test data, I expect to get 2 records that match the multiple-choice and checkbox criteria, and 1 record for text in the ELSE clause, like this:

        3, checkbox, 1
        3, multipleChoice, 1
        3, text, question_2

    What I get instead is a 3x3 Cartesian product (each of the 3 question elements paired with all 3 possible answers), as this output from the echo statements shows:

        4, checkBox, 1
        4, checkBox, 1
        4, checkBox, question_2
        4, multipleChoice, 1
        4, multipleChoice, 1
        4, multipleChoice, question_2
        4, text, 1
        4, text, 1
        4, text, question_2

    I've tried placing the IF clause inside the inner foreach, but I get the same results. I've been staring at this problem for way too long and cannot see what I'm doing wrong. Your kind assistance would be greatly appreciated. Please let me know if my request requires more clarification.

    Read the article

  • NoSQL DB for .Net document-based database (ECM)

    - by Dane
    I'm halfway through coding a basic multi-tenant SaaS ECM solution. Each client has its own instance of the database/datastore, but the .Net app is a single instance. The documents are pretty much read-only (i.e. an image archive of tiffs or PDFs). I've used MSSQL so far, but then started thinking this might be viable in a NoSQL DB (e.g. MongoDB, CouchDB). The basic premise is that it stores documents, each with their own particular indexes. Each tenant can have multiple document types, e.g. one tenant might have an invoice type, which has Customer ID, Invoice Number and Invoice Date. Another tenant might have an application form, which has Member Number, Application Number, Member Name, and Application Date. So far I've used the old method which SharePoint uses (used?), and created a document table which has int_field_1, int_field_2, date_field_1, date_field_2, etc. Then I've got a "mapping" table which stores the customer-specific index name and the database field it maps to. I've avoided the key-value-pair model in the DB due to the volume of documents. This way, we can support multiple document types in the one table, get reasonably high performance out of it, and allow for custom document-type searches (i.e. the user selects a document type, then is presented with a list of search fields).

    However, a NoSQL DB might make this a lot simpler, as I wouldn't need to worry about denormalizing the document. I do have concerns about the rest of the data around a document, though. We store an "action history" against the document. This tracks views, whether someone emails the document from within the system, and other "future" functionality (e.g. faxing). We have control over the document load process, so we can manipulate the data however it needs to be to get it into the document store (e.g. assign unique IDs). Users will not be adding their own documents, so we shouldn't need to worry about ACID compliance, as the documents are relatively static. So, my questions I guess:

    1. Is a NoSQL DB a good fit?
    2. Is MongoDB the best fit for Asp.Net (I saw Raven and Velocity, but they're still kinda beta)?
    3. Can I store a key for each document, and then store the action history in an MSSQL DB against this key? I don't need to do joins; it would only be used when a person clicks "View History" against a document.
    4. How would performance compare between the two (NoSQL DB vs the denormalized "document" table)?

    Volumes would be up to 200,000 new documents per month for a single tenant. My current scaling plan with the SQL DB involves moving the SQL DB into a cluster when certain thresholds are reached, and then reviewing partitioning and indexing structures.
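    To make the document-per-type idea concrete, here is a minimal hypothetical C# sketch (the names are mine, not the poster's) of what a schemaless document replaces: instead of int_field_1/date_field_1 columns plus a mapping table, each document carries its tenant-defined index fields directly:

        // Hypothetical document model: per-tenant index fields live on the
        // document itself, so no generic columns or mapping table are needed.
        using System;
        using System.Collections.Generic;

        class ArchiveDocument
        {
            public Guid Id { get; set; }               // assigned at load time
            public string TenantId { get; set; }
            public string DocumentType { get; set; }   // e.g. "Invoice"
            public Dictionary<string, object> IndexFields { get; set; }
        }

        class Demo
        {
            static void Main()
            {
                var doc = new ArchiveDocument
                {
                    Id = Guid.NewGuid(),
                    TenantId = "tenant-a",
                    DocumentType = "Invoice",
                    IndexFields = new Dictionary<string, object>
                    {
                        { "CustomerID", 1042 },
                        { "InvoiceNumber", "INV-0017" },
                        { "InvoiceDate", new DateTime(2010, 5, 1) },
                    },
                };
                // A document store would persist this as-is; the action history
                // could still live in MSSQL keyed on doc.Id, as suggested above.
                Console.WriteLine("{0}: {1}", doc.DocumentType, doc.Id);
            }
        }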

    Read the article

  • T-SQL - Left Outer Joins - Filters in the where clause versus the on clause.

    - by Greg Potter
    I am trying to compare two tables to find rows in each table that are not in the other. Table 1 has a groupby column to create 2 sets of data within table one:

        groupby     number
        ----------- -----------
        1           1
        1           2
        2           1
        2           2
        2           4

    Table 2 has only one column:

        number
        -----------
        1
        3
        4

    So Table 1 has the values 1, 2, 4 in group 2, and Table 2 has the values 1, 3, 4. I expect the following results when joining for group 2:

        Table 1 LEFT OUTER Join Table 2

        T1_Groupby  T1_Number   T2_Number
        ----------- ----------- -----------
        2           2           NULL

        Table 2 LEFT OUTER Join Table 1

        T1_Groupby  T1_Number   T2_Number
        ----------- ----------- -----------
        NULL        NULL        3

    The only way I can get this to work is if I put a WHERE clause for the first join:

        PRINT 'Table 1 LEFT OUTER Join Table 2, with WHERE clause'
        select table1.groupby as [T1_Groupby],
               table1.number as [T1_Number],
               table2.number as [T2_Number]
        from table1
        LEFT OUTER join table2
        --******************************
        on table1.number = table2.number
        --******************************
        WHERE table1.groupby = 2
          AND table2.number IS NULL

    and a filter in the ON clause for the second:

        PRINT 'Table 2 LEFT OUTER Join Table 1, with ON clause'
        select table1.groupby as [T1_Groupby],
               table1.number as [T1_Number],
               table2.number as [T2_Number]
        from table2
        LEFT OUTER join table1
        --******************************
        on table2.number = table1.number
        AND table1.groupby = 2
        --******************************
        WHERE table1.number IS NULL

    Can anyone come up with a way of not using the filter in the ON clause, but in the WHERE clause instead? The context of this is a staging area in a database, where I want to identify new records and records that have been deleted. The groupby field is the equivalent of a batch ID for an extract, and I am comparing the latest extract in a temp table to the batch from yesterday, which is stored in a partitioned table that also holds all the previously extracted batches.

    Code to create tables 1 and 2:

        create table table1 (number int, groupby int)
        create table table2 (number int)

        insert into table1 (number, groupby) values (1, 1)
        insert into table1 (number, groupby) values (2, 1)
        insert into table1 (number, groupby) values (1, 2)
        insert into table2 (number) values (1)
        insert into table1 (number, groupby) values (2, 2)
        insert into table2 (number) values (3)
        insert into table1 (number, groupby) values (4, 2)
        insert into table2 (number) values (4)

    EDIT: A bit more context: depending on where I put the filter, I get different results. As stated above, the WHERE clause gives me the correct result in one case and the ON clause in the other. I am looking for a consistent way of doing this.

    Where:

        select table1.groupby as [T1_Groupby],
               table1.number as [T1_Number],
               table2.number as [T2_Number]
        from table1
        LEFT OUTER join table2
        --******************************
        on table1.number = table2.number
        --******************************
        WHERE table1.groupby = 2
          AND table2.number IS NULL

    Result:

        T1_Groupby  T1_Number   T2_Number
        ----------- ----------- -----------
        2           2           NULL

    On:

        select table1.groupby as [T1_Groupby],
               table1.number as [T1_Number],
               table2.number as [T2_Number]
        from table1
        LEFT OUTER join table2
        --******************************
        on table1.number = table2.number
        AND table1.groupby = 2
        --******************************
        WHERE table2.number IS NULL

    Result:

        T1_Groupby  T1_Number   T2_Number
        ----------- ----------- -----------
        1           1           NULL
        2           2           NULL
        1           2           NULL

    Where (table 2 this time):

        select table1.groupby as [T1_Groupby],
               table1.number as [T1_Number],
               table2.number as [T2_Number]
        from table2
        LEFT OUTER join table1
        --******************************
        on table2.number = table1.number
        AND table1.groupby = 2
        --******************************
        WHERE table1.number IS NULL

    Result:

        T1_Groupby  T1_Number   T2_Number
        ----------- ----------- -----------
        NULL        NULL        3

    On:

        select table1.groupby as [T1_Groupby],
               table1.number as [T1_Number],
               table2.number as [T2_Number]
        from table2
        LEFT OUTER join table1
        --******************************
        on table2.number = table1.number
        --******************************
        WHERE table1.number IS NULL
          AND table1.groupby = 2

    Result:

        T1_Groupby  T1_Number   T2_Number
        ----------- ----------- -----------
        (0) rows returned

    Read the article

  • Release Process Improvements

    - by wallismark
    The process of creating a new build and releasing it to production is a critical step in the SDLC, but it is often left as an afterthought and varies greatly from one company to the next. I'm hoping people will share improvements they have made to this process in their organisation so we can all take steps to 'reduce the pain'. So the question is: name one painful/time-consuming part of your release process, and what did you do to improve it?

    My example: at a previous employer, all developers made database changes on one common development database. Then, when it came to release time, we used Redgate's SQL Compare to generate a huge script from the differences between the Dev and QA databases. This works reasonably well, but the problems with this approach are:

    1. ALL changes in the Dev database are included, some of which may still be 'works in progress'.
    2. Sometimes developers made conflicting changes (that were not noticed until the release was in production).
    3. It was a time-consuming and manual process to create and validate the script (by validate I mean trying to weed out issues like problems 1 and 2).
    4. When there were problems with the script (e.g. the order in which things were run, such as creating a record which relies on a foreign-key record that is in the script but not yet run), it took time to 'tweak' it so it ran smoothly.
    5. It's not an ideal scenario for Continuous Integration.

    So the solution was:

    1. Enforce a policy that all changes to the database must be scripted.
    2. Adopt a naming convention to ensure the correct running order of the scripts.
    3. Create/use a tool to run the scripts at release time (a minimal sketch of such a runner is below).
    4. Give developers their own copy of the database to develop against (so there was no more 'stepping on each other's toes').

    The next release after we started this process was much faster and had fewer problems; indeed, the only problems found were due to people 'breaking the rules', e.g. not creating a script. Once the issues with releasing to QA were fixed, the release to production was very smooth. We applied a few other changes (like introducing CI), but this was the most significant; overall we reduced release time from around 3 hours down to a maximum of 10-15 minutes.
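    For illustration only, a stripped-down hypothetical C# sketch of such a script runner; it assumes the naming convention makes an alphabetical sort equal the intended running order, and it ignores real-world details such as GO batch separators, transactions and logging:

        // Run all release scripts in name order against the target database.
        using System;
        using System.Data.SqlClient;
        using System.IO;
        using System.Linq;

        class ScriptRunner
        {
            static void Main(string[] args)
            {
                string folder = args[0];            // e.g. the release's script directory
                string connectionString = args[1];  // target QA/production database

                using (var connection = new SqlConnection(connectionString))
                {
                    connection.Open();
                    // The naming convention (001_..., 002_...) makes this the run order.
                    foreach (var script in Directory.GetFiles(folder, "*.sql").OrderBy(f => f))
                    {
                        Console.WriteLine("Running {0}", Path.GetFileName(script));
                        using (var command = new SqlCommand(File.ReadAllText(script), connection))
                        {
                            command.ExecuteNonQuery();
                        }
                    }
                }
            }
        }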

    Read the article
