Search Results

Search found 1022 results on 41 pages for 'pseudo'.


  • How do I do high quality scaling of an image?

    - by pbhogan
    I'm writing some code to scale a 32-bit RGBA image in C/C++. I have written a few attempts that have been somewhat successful, but they're slow and, most importantly, the quality of the resized image is not acceptable. I compared the same image scaled by OpenGL (i.e. my video card) and by my routine, and they're miles apart in quality. I've Google Code Searched and scoured the source trees of anything I thought would shed some light (SDL, Allegro, wxWidgets, CxImage, GD, ImageMagick, etc.), but usually their code is either convoluted and scattered all over the place or riddled with assembler and little or no comments. I've also read multiple articles on Wikipedia and elsewhere, and I'm just not finding a clear explanation of what I need. I understand the basic concepts of interpolation and sampling, but I'm struggling to get the algorithm right. I do NOT want to rely on an external library for one routine and have to convert to their image format and back. Besides, I'd like to know how to do it myself anyway. :) I have seen a similar question asked on Stack Overflow before, but it wasn't really answered in this way. I'm hoping there's someone out there who can nudge me in the right direction; maybe point me to some articles or pseudo-code, anything to help me learn and do.

    Here's what I'm looking for:

    1. No assembler (I'm writing very portable code for multiple processor types).
    2. No dependencies on external libraries.
    3. I am primarily concerned with scaling DOWN, but will also need to write a scale-up routine later.
    4. Quality of the result and clarity of the algorithm are most important (I can optimize it later).

    My routine essentially takes the following form:

        DrawScaled(uint32 *src, uint32 *dst,
                   src_x, src_y, src_w, src_h,
                   dst_x, dst_y, dst_w, dst_h);

    Thanks!

    UPDATE: To clarify, I need something more advanced than a box resample for downscaling, which blurs the image too much. I suspect what I want is some kind of bicubic (or other) filter that is somewhat the reverse of a bicubic upscaling algorithm (i.e. each destination pixel is computed from all contributing source pixels, combined with a weighting algorithm that keeps things sharp).

    EXAMPLE: Here's an example of what I'm getting from the wxWidgets BoxResample algorithm vs. what I want, on a 256x256 bitmap scaled to 55x55. And finally, the original 256x256 image.
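
    For the update: one standard, dependency-free way to keep downscales sharp is to use a real reconstruction filter but stretch its support by the scale ratio. Below is a hedged, single-channel sketch in C of that idea with a triangle (tent) kernel -- an illustration, not a drop-in DrawScaled. All names are invented. Run it once per row, then once per column, per channel for RGBA, and swap in a bicubic kernel for extra sharpness:

        /* Intended for minification (dw <= sw): the kernel support is
         * stretched by the scale ratio so every covered source pixel
         * contributes, with distance-based weights. */
        #include <math.h>

        static double tent(double x) {
            x = fabs(x);
            return x < 1.0 ? 1.0 - x : 0.0;
        }

        void scale_down_1d(const double *src, int sw, double *dst, int dw) {
            double ratio = (double)sw / dw;            /* > 1 when minifying */
            for (int d = 0; d < dw; d++) {
                double center = (d + 0.5) * ratio - 0.5;
                int lo = (int)floor(center - ratio);
                int hi = (int)ceil(center + ratio);
                double sum = 0.0, wsum = 0.0;
                for (int s = lo; s <= hi; s++) {
                    int sc = s < 0 ? 0 : (s >= sw ? sw - 1 : s); /* clamp edges */
                    double w = tent((s - center) / ratio);
                    sum  += w * src[sc];
                    wsum += w;
                }
                dst[d] = wsum > 0.0 ? sum / wsum : 0.0;
            }
        }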

  • SQL Server CTE referenced in self-joins is slow

    - by Kharlos Dominguez
    Hello, I have written a table-valued UDF that starts with a CTE to return a subset of the rows from a large table. There are several joins in the CTE: a couple of inner joins and one left join to other tables, which don't contain a lot of rows. The CTE has a WHERE clause that restricts the rows to a date range, in order to return only the rows needed. I'm then referencing this CTE in four self left-joins, in order to build subtotals using different criteria. The query is quite complex, but here is a simplified pseudo-version of it:

        WITH DataCTE as (
            SELECT [columns] FROM table
            INNER JOIN table2 ON [...]
            INNER JOIN table3 ON [...]
            LEFT JOIN table3 ON [...]
        )
        SELECT [aggregates_columns of each subset]
        FROM DataCTE Main
        LEFT JOIN DataCTE BananasSubset
            ON [...]
            AND Product = 'Bananas'
            AND Quality = 100
        LEFT JOIN DataCTE DamagedBananasSubset
            ON [...]
            AND Product = 'Bananas'
            AND Quality < 20
        LEFT JOIN DataCTE MangosSubset ON [...]
        GROUP BY [

    I have the feeling that SQL Server gets confused and calls the CTE for each self-join, which seems confirmed by looking at the execution plan, although I confess to not being an expert at reading those. I would have assumed SQL Server to be smart enough to perform the data retrieval from the CTE only once, rather than do it several times.

    I have tried the same approach, but rather than using a CTE to get the subset of the data, I used the same SELECT query as in the CTE and made it output to a temp table instead. The version referencing the CTE takes 40 seconds. The version referencing the temp table takes between 1 and 2 seconds.

    Why isn't SQL Server smart enough to keep the CTE results in memory? I like CTEs, especially in this case, as my UDF is a table-valued one, so it allowed me to keep everything in a single statement. To use a temp table, I would need to write a multi-statement table-valued UDF, which I find a slightly less elegant solution. Have any of you had this kind of performance issue with CTEs, and if so, how did you get it sorted? Thanks, Kharlos
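
    In case it helps frame answers, here is a self-contained toy version (invented table and data) of the temp-table rewrite described above. A non-recursive CTE in SQL Server is expanded into each reference, so the four self-joins each re-run the underlying query, whereas SELECT ... INTO materializes the rows once:

        -- Toy data standing in for the real query behind DataCTE
        CREATE TABLE #Sales (Product varchar(20), Quality int, Amount money);
        INSERT INTO #Sales VALUES ('Bananas', 100, 10.0);
        INSERT INTO #Sales VALUES ('Bananas', 15,  2.0);
        INSERT INTO #Sales VALUES ('Mangos',  90,  5.0);

        -- Materialize the subset once...
        SELECT Product, Quality, Amount INTO #Data FROM #Sales;

        -- ...then self-join the temp table instead of re-expanding a CTE
        SELECT m.Product,
               SUM(b.Amount)  AS BananasTotal,
               SUM(db.Amount) AS DamagedBananasTotal
        FROM #Data m
        LEFT JOIN #Data b  ON b.Product = m.Product
                          AND b.Product = 'Bananas' AND b.Quality = 100
        LEFT JOIN #Data db ON db.Product = m.Product
                          AND db.Product = 'Bananas' AND db.Quality < 20
        GROUP BY m.Product;

        DROP TABLE #Data;
        DROP TABLE #Sales;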

  • Is this a legitimate implementation of a 'remember me' function for my web app?

    - by user246114
    Hi, I'm trying to add a "remember me" feature to my web app to let a user stay logged in between browser restarts. I think I've got the bulk of it. I'm using Google App Engine for the backend, which lets me use Java servlets. Here is some pseudo-code to demonstrate:

        public class MyServlet {
            public void handleRequest() {
                if (getThreadLocalRequest().getSession().getAttribute("user") != null) {
                    // User already has a session running for them.
                } else {
                    // No session, but check if they chose 'remember me' during
                    // their initial login; if so we can have them 'auto log in' now.
                    Cookie[] cookies = getThreadLocalRequest().getCookies();
                    if (cookies.find("rememberMePlz").exists()) {
                        // The value of this cookie is the cookie id, which is a
                        // unique string that is in no way based upon the user's
                        // name/email/id, and is hard to randomly generate.
                        String cookieid = cookies.find("rememberMePlz").value();

                        // Get the user object associated with this cookie id from
                        // the data store; would probably be a two-step process like:
                        //
                        //   select * from cookies where cookieid = 'cookieid';
                        //   select * from users where userid = 'userid fetched from above select';
                        User user = DataStore.getUserByCookieId(cookieid);

                        if (user != null) {
                            // Start a session for them.
                            getThreadLocalRequest().getSession().setAttribute("user", user);
                        } else {
                            // Either couldn't find a matching cookie with the
                            // supplied id, or maybe we expired the cookie on
                            // our side or blocked it.
                        }
                    }
                }
            }
        }

        // On first login, if the user wanted us to remember them, we'd generate
        // an instance of this object for them in the data store. We send the
        // cookieid value down to the client and they persist it on their side
        // in the "rememberMePlz" cookie.
        public class CookieLong {
            private String mCookieId;
            private String mUserId;
            private long mExpirationDate;
        }

    Alright, this all makes sense. The only frightening thing is: what happens if someone finds out the value of the cookie? A malicious individual could set that cookie in their browser, access my site, and essentially be logged in as the user associated with it! On the same note, I guess this is why the cookie ids must be difficult to randomly generate, because a malicious user doesn't have to steal someone's cookie; they could just randomly assign cookie values and start logging in as whichever user happens to be associated with that cookie, if any, right?

    Scary stuff. I feel like I should at least include the username in the client cookie, such that when it presents itself to the server, I won't auto-login unless the username and cookieid both match in the DataStore. Any comments would be great; I'm new to this and trying to figure out best practice. I'm not writing a site which contains any sensitive personal information, but I'd like to minimize any potential for abuse all the same. Thanks
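
    On the "hard to randomly generate" point, here is a minimal sketch of generating such a token with java.security.SecureRandom (the class and method names are invented for illustration). With 256 random bits, guessing a valid cookie id by brute force is not practical:

        import java.security.SecureRandom;

        public class CookieIds {
            private static final SecureRandom RNG = new SecureRandom();

            /** Returns a 64-character hex string backed by 256 random bits. */
            public static String newCookieId() {
                byte[] bytes = new byte[32];
                RNG.nextBytes(bytes);
                StringBuilder sb = new StringBuilder(bytes.length * 2);
                for (byte b : bytes) {
                    sb.append(String.format("%02x", b));
                }
                return sb.toString();
            }
        }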

  • Multi-threaded Pooled Allocators

    - by Darren Engwirda
    I'm having some issues using pooled memory allocators for std::list objects in a multi-threaded application. The part of the code I'm concerned with runs each thread function in isolation (i.e. there is no communication or synchronization between threads), and therefore I'd like to set up separate memory pools for each thread, where each pool is not thread-safe (and hence fast). I've tried using a shared thread-safe singleton memory pool and found the performance to be poor, as expected.

    This is a heavily simplified version of the kind of thing I'm trying to do. A lot has been included in a pseudo-code kind of way; sorry if it's confusing.

        /* The thread functor - one instance of MAKE_QUADTREE created for each thread */
        class make_quadtree {
        private:
            /* A non-thread-safe memory pool for int linked list items; let's say that it's
             * something along the lines of BOOST::OBJECT_POOL */
            pooled_allocator<int> item_pool;

            /* The problem! - a local class that would be constructed within each std::list as the
             * allocator but really just delegates to ITEM_POOL */
            class local_alloc {
            public:
                //!! I understand that I can't access ITEM_POOL from within a nested class like
                //!! this - that's really my question: can I get something along these lines to
                //!! work??
                pointer allocate(size_t n) { return ( item_pool.allocate(n) ); }
            };

        public:
            make_quadtree (): item_pool() // only construct 1 instance of ITEM_POOL per
                                          // MAKE_QUADTREE object
            {
                /* The kind of data structures - vectors of linked lists.
                 * The idea is that all of the linked lists should share a local pooled allocator */
                std::vector<std::list<int, local_alloc>> lists;

                /* The actual operations - too complicated to show, but in general:
                 *
                 * - The vector LISTS is grown as a quadtree is built; its size is the number of
                 *   quadtree "boxes"
                 *
                 * - Each element of LISTS (each linked list) represents the IDs of items
                 *   contained within each quadtree box (say they're xy points); as the quadtree
                 *   is grown a lot of ID pop/push-ing between lists occurs, hence the memory pool
                 *   is important for performance */
            }
        };

    So really my problem is that I'd like to have one memory pool instance per thread functor instance, but within each thread functor share the pool between multiple std::list objects.
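
    For what it's worth, a hedged sketch of how this is often resolved: give the allocator a pointer to the pool as state, instead of trying to reach the enclosing class from a nested one. The pool API below (allocate/deallocate by byte count) is a placeholder, not Boost's actual object_pool interface, and note that containers only reliably honor stateful allocators from C++11 on:

        #include <cstddef>
        #include <list>

        // A stateful allocator that delegates to a pool it does not own.
        template <typename T, typename Pool>
        struct shared_pool_alloc {
            typedef T value_type;
            Pool* pool; // the per-thread pool, owned by make_quadtree

            explicit shared_pool_alloc(Pool* p) : pool(p) {}
            template <typename U>
            shared_pool_alloc(const shared_pool_alloc<U, Pool>& o) : pool(o.pool) {}

            T* allocate(std::size_t n) {
                return static_cast<T*>(pool->allocate(n * sizeof(T)));
            }
            void deallocate(T* p, std::size_t n) {
                pool->deallocate(p, n * sizeof(T));
            }
        };

        template <typename T, typename U, typename Pool>
        bool operator==(const shared_pool_alloc<T, Pool>& a,
                        const shared_pool_alloc<U, Pool>& b) { return a.pool == b.pool; }
        template <typename T, typename U, typename Pool>
        bool operator!=(const shared_pool_alloc<T, Pool>& a,
                        const shared_pool_alloc<U, Pool>& b) { return !(a == b); }

        // Usage inside the thread functor (MyPool is whatever pool type you use):
        //   MyPool item_pool;
        //   std::list<int, shared_pool_alloc<int, MyPool> >
        //       l(shared_pool_alloc<int, MyPool>(&item_pool));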

  • Algorithm to split an article without breaking the reading flow or HTML code

    - by Victor Stanciu
    Hello, I have a very large database of articles of varying lengths. The articles have HTML elements in them. I have to insert some ads (simple <script> elements) in the body of each article when it is displayed (I know, I hate ads that interrupt my reading too). Now, the problem is that each ad must be inserted at about the same position in each article. The simplest solution is to split the article at a fixed number of characters (without breaking words) and insert the ad code. This, however, runs the risk of inserting the ad in the middle of an HTML tag. I could go the regex way, but I was thinking about the following solution, using JS:

    1. Establish a character count threshold. For example, "the ad should be inserted at about 200 characters".
    2. Set accepted deviations in each direction, say -20, +20 characters.
    3. Loop through each text node inside the article and, while doing so, keep count of the total number of characters so far.
    4. Once the count exceeds the threshold, make the following decision:
       4.1. If the count exceeds the threshold by a value lower than the positive accepted deviation (for example, 17 characters), insert the ad code just after the current text node.
       4.2. If the count is greater than the sum of the threshold and the deviation, roll back to the previous text node and make the same decision, only this time use the previous count and check if it's lower than the difference between the threshold and the deviation; if not, insert the ad between the current node and the previous one.
       4.3. If 4.1 and 4.2 fail (which means that the previous node reached too low a character count and the current node too high a one), insert the ad after whatever character count is needed inside the current element.

    I know it's convoluted, but it's the first thing off the top of my head, and it has the advantage that, by trying to insert the ad between text nodes, perhaps it will not break the flow of the article as badly as it would if I just stuck it in (like the final 4.3 case). Here is some pseudo-code I put together; I don't trust my English-explaining skills:

        threshold = 200
        deviation = 20
        current_count = 0

        for each node in article_nodes {
            previous_count = current_count
            current_count = current_count + node.length

            if current_count < threshold {
                continue // next iteration
            }

            if current_count > threshold + deviation {
                if previous_count < threshold - deviation {
                    // insert ad in current node
                } else {
                    // insert ad between the current and previous nodes
                }
            } else {
                // insert ad after the current node
            }

            break;
        }

    Am I over-complicating things, or am I missing a simpler, more elegant solution?
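
    For what it's worth, a runnable take on the same decision rule against real DOM text nodes (the function name is mine, and it only walks direct children for brevity; a production version would recurse through nested elements):

        function insertAd(article, adNode, threshold, deviation) {
          var count = 0, prev = 0;
          var nodes = article.childNodes;
          for (var i = 0; i < nodes.length; i++) {
            var node = nodes[i];
            if (node.nodeType !== 3) continue;       // text nodes only
            prev = count;
            count += node.length;
            if (count < threshold) continue;
            if (count > threshold + deviation) {
              if (prev < threshold - deviation) {
                // 4.3: split inside the current node at the exact character count
                node.splitText(threshold - prev);
                article.insertBefore(adNode, node.nextSibling);
              } else {
                // 4.2: between the previous and current nodes
                article.insertBefore(adNode, node);
              }
            } else {
              // 4.1: just after the current node
              article.insertBefore(adNode, node.nextSibling);
            }
            return;
          }
          article.appendChild(adNode); // article shorter than the threshold
        }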

  • How to salvage SQL server 2008 query from KILLED/ROLLBACK state without waiting half a day?

    - by littlegreen
    I have a stored procedure that inserts batches of millions of rows, emerging from a certain query, into an SQL database. It has one parameter selecting the batch; when this parameter is omitted, it will gather a list of batches and recursively call itself, in order to iterate over batches. In (pseudo-)code, it looks something like this:

        CREATE PROCEDURE spProcedure
        AS
        BEGIN
            IF @code = 0
            BEGIN
                ...
                WHILE @@Fetch_Status=0
                BEGIN
                    EXEC spProcedure @code
                    FETCH NEXT ... INTO @code
                END
            END
            ELSE
            BEGIN
                -- Disable indexes
                ...
                INSERT INTO table
                SELECT (...)
                -- Enable indexes
                ...

    Now it can happen that this procedure is slow, for whatever reason: it can't get a lock, or one of the indexes it uses is misdefined or disabled. In that case, I want to be able to kill the procedure, truncate and recreate the resulting table, and try again. However, when I try to kill the procedure, the process frequently oozes into a KILLED/ROLLBACK state from which there seems to be no return.

    From Google I have learned to do an sp_lock, find the spid, and then kill it with KILL <spid>. But when I try to kill it, it tells me:

        SPID 75: transaction rollback in progress. Estimated rollback completion: 0%. Estimated time remaining: 554 seconds.

    I did find a forum message hinting that another spid should be killed before the first one can start a rollback. But that didn't work for me either, plus I do not understand why that would be the case... could it be because I am recursively calling my own stored procedure? (But it should be having the same spid, right?)

    In any case, my process is just sitting there, being dead, not responding to kills, and locking the table. This is very frustrating, as I want to go on developing my queries, not wait hours while my server sits dead pretending to finish a supposed rollback. Is there some way in which I can tell the server not to store any rollback information for my query? Or not to allow any other queries to interfere with the rollback, so that it will not take so long? Or how to rewrite my query in a better way, or how to kill the process successfully without restarting the server?
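
    One small, hedged aside that helps while waiting: KILL with the STATUSONLY option only reports progress; it doesn't issue another kill, so it is safe to poll:

        -- reports rollback progress for an already-killed spid without
        -- affecting it (spid 75 is the one from the message above)
        KILL 75 WITH STATUSONLY;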

  • Extending AdapterView

    - by Ander Webbs
    Hi, I'm trying to make (for learning purposes) my own implementation of a simple AdapterView where items come from a basic Adapter (ImageAdapter from the SDK samples). The actual code is like this:

        public class MyAdapterView extends AdapterView<ImageAdapter>
                implements AdapterView.OnItemClickListener {

            private ImageAdapter mAdapter;

            public MyAdapterView(Context context, AttributeSet attrs, int defStyle) {
                super(context, attrs, defStyle);
                initThings();
            }

            private void initThings() {
                setOnItemClickListener(this);
            }

            @Override
            public ImageAdapter getAdapter() {
                return mAdapter;
            }

            @Override
            public void setAdapter(ImageAdapter adapter) {
                mAdapter = adapter;
                requestLayout();
            }

            View obtainView(int position) {
                View child = mAdapter.getView(position, null, this);
                return child;
            }

            @Override
            protected void onLayout(boolean changed, int l, int t, int r, int b) {
                super.onLayout(changed, l, t, r, b);
                for (int i = 0; i < mAdapter.getCount(); i++) {
                    View child = obtainView(i);
                    child.layout(10, 70 * i, 70, 70);
                    addViewInLayout(child, i, null, true);
                }
                this.invalidate();
            }

            @Override
            public void onItemClick(AdapterView<?> parent, View v, int position, long id) {
                Log.d("MYEXAMPLES", "Clicked an item!");
            }
        }

    This isn't a coding masterpiece; it just displays a pseudo-ListView with pictures. I know I could have used ListView, GridView, Spinner, etc., but I'm relatively new to Android and I'm trying to figure some things out.

    Well, the question here is: why is my onItemClick not firing? Using the same ImageAdapter with a GridView, everything works OK, but when I use it with the above class, I get nothing. Inside AdapterView.java there is code for those click, longclick, etc. events... so why can't I just fire them? Maybe I'm misunderstanding basic things about how AdapterView works? Should I extend other base classes instead? And why? Hoping to find more experienced guidance here, thanks in advance.
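
    A hedged guess at the cause, since the GridView works: AdapterView only supplies performItemClick() as a dispatcher, and nothing in the base class makes the generated children clickable (ListView does that in its own touch handling). Wiring each child up inside onLayout, roughly as below, should make the listener fire:

        for (int i = 0; i < mAdapter.getCount(); i++) {
            View child = obtainView(i);
            child.layout(10, 70 * i, 70, 70);
            addViewInLayout(child, i, null, true);

            final int position = i;
            child.setOnClickListener(new View.OnClickListener() {
                @Override
                public void onClick(View v) {
                    // Routes through AdapterView's machinery, which invokes
                    // the registered OnItemClickListener.
                    performItemClick(v, position, mAdapter.getItemId(position));
                }
            });
        }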

  • Efficient algorithm to distribute work?

    - by Zwei Steinen
    It's a bit complicated to explain, but here we go. We have problems like this (the code is pseudo-code and is only for illustrating the problem; sorry it's in Java -- if you don't understand, I'd be glad to explain):

        class Problem {
            final Set<Integer> allSectionIds = { 1,2,4,6,7,8,10 };
            final Data data = //Some data
        }

    And a subproblem is:

        class SubProblem {
            final Set<Integer> targetedSectionIds;
            final Data data;

            SubProblem(Set<Integer> targetedSectionIds, Data data){
                this.targetedSectionIds = targetedSectionIds;
                this.data = data;
            }
        }

    Work will look like this, then:

        class Work implements Runnable {
            final Set<Section> subSections;
            final Data data;
            final Result result;

            Work(Set<Section> subSections, Data data) {
                this.subSections = subSections;
                this.data = data;
            }

            @Override
            public void run(){
                for(Section section : subSections){
                    result.addUp(compute(data, section));
                }
            }
        }

    Now we have instances of 'Worker' that have their own state, the sections they hold:

        class Worker implements ExecutorService {
            final Map<Integer,Section> sectionsIHave;
            {
                sectionsIHave = { 1:section1, 5:section5, 8:section8 };
            }

            final ExecutorService executor = //some executor.

            @Override
            public void execute(SubProblem problem){
                Set<Section> sectionsNeeded = fetchSections(problem.targetedSectionIds);
                super.execute(new Work(sectionsNeeded, problem.data));
            }
        }

    Phew. So, we have a lot of Problems, and Workers are constantly asking for more SubProblems. My task is to break Problems up into SubProblems and give them out. The difficulty, however, is that I later have to collect all the results for the SubProblems and merge (reduce) them into a Result for the whole Problem. This is costly, so I want to give the workers "chunks" that are as big as possible (that have as many targetedSections as possible).

    It doesn't have to be perfect (mathematically as efficient as possible or anything). I mean, I guess it is impossible to have a perfect solution, because you can't predict how long each computation will take, etc. But is there a good heuristic solution for this? Or maybe some resources I can read before I go into designing? Any advice is highly appreciated!
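
    As a starting-point heuristic (hedged; all names below are invented), greedily fill each chunk with section ids the requesting worker already holds, and only then pad with remote ones -- big chunks amortize the reduce step, and local sections avoid fetches. The caller would remove the returned chunk from the remaining set before handing out the next one:

        import java.util.LinkedHashSet;
        import java.util.Set;

        class Chunker {
            static Set<Integer> pickChunk(Set<Integer> remaining,
                                          Set<Integer> workerHas,
                                          int maxChunk) {
                Set<Integer> chunk = new LinkedHashSet<Integer>();
                for (Integer id : remaining) {          // local sections first
                    if (workerHas.contains(id)) {
                        chunk.add(id);
                        if (chunk.size() == maxChunk) return chunk;
                    }
                }
                for (Integer id : remaining) {          // then anything left
                    chunk.add(id);
                    if (chunk.size() == maxChunk) break;
                }
                return chunk;
            }
        }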

  • How do I pass the value of the previous form element into an "onchange" javascript function?

    - by Jen
    Hello, I want to make some UI improvements to a page I am developing. Specifically, I need to add another drop-down menu to allow the user to filter results. This is my current code.

    HTML file:

        <select name="test_id" onchange="showGrid(this.name, this.value, 'gettestgrid')">
            <option selected>Select a test--></option>
            <option value=1>Test 1</option>
            <option value=2>Test 2</option>
            <option value=3>Test 3</option>
        </select>

    This is pseudo-code for what I want to happen:

        <select name="test_id">
            <option selected>Select a test--></option>
            <option value=1>Test 1</option>
            <option value=2>Test 2</option>
            <option value=3>Test 3</option>
        </select>

        <select name="statistics" onchange="showGrid(PREVIOUS.name, PREVIOUS.VALUE, THIS.value)">
            <option selected>Select a data display--></option>
            <option value='gettestgrid'>Show averages by student</option>
            <option value='gethomeroomgrid'>Show averages by homeroom</option>
            <option value='getschoolgrid'>Show averages by school</option>
        </select>

    How do I access the previous field's name and value? Any help much appreciated, thanks! Also, the JS function for reference:

        function showGrid(name, value, phpfile)
        {
            xmlhttp = GetXmlHttpObject();
            if (xmlhttp == null)
            {
                alert("Browser does not support HTTP Request");
                return;
            }
            var url = phpfile + ".php";
            url = url + "?" + name + "=" + value;
            url = url + "&sid=" + Math.random();
            xmlhttp.onreadystatechange = stateChanged;
            xmlhttp.open("GET", url, true);
            xmlhttp.send(null);
        }
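
    One detail that may help (a hedged sketch, assuming both selects sit inside the same <form>): every form control exposes a .form property, and named sibling controls are reachable through it, so the second select can read the first directly:

        <select name="statistics"
                onchange="showGrid(this.form.test_id.name,
                                   this.form.test_id.value,
                                   this.value)">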

  • How to use a list of choices to determine which SQL statement to run

    - by Larry
    I have a MySQL db and am using PHP 5.2. What I am trying to do is offer a list of options for a person to select (only one). The chosen option will cause a SELECT, UPDATE, or DELETE statement to be run. The results of the statement do not need to be shown, although showing the old and then the new would be nice (no problems with that part, though).

    Pseudo-code:

        Assign $choice = 0
        Check the value of $choice  // This way, if it = 100, we do a break

        Select a Choice:
        1. Adjust Status Value (+60)   // $choice = 1
        2. Show all Ships              // $choice = 2
        3. Show Ships in Port          // $choice = 3
        ...
        0. $choice = 100               // if the value = 100, quit this part

        Use either case (switch) or if/else statements to run the user's choice.
        If the choice is 1, then run the SELECT statement with the variable $sql1 -- "SELECT ...."
        If the choice is 2, then run the SELECT statement with the variable $sql2 -- SELECT * FROM Ships
        If the choice is 3, then run the SELECT statement with the variable $sql3 -- ....
        If the choice is 0, then we are done.

    I figured the three statements would be assigned in PHP as:

        $sql1 = "...";
        $sql2 = "SELECT * FROM Ships";
        $sql3 = "SELECT * FROM Ships WHERE nPort = 1";

    My idea was to use the switch statement, but I got lost on it. :( I would like the options to be available over and over again, until a quit value ($choice) is selected, in which case this particular page is done and goes back to the "Main Menu". The coding and display, if I use it, I can do; I'm just not sure how to write the part that selects which one I want. It is possible that I would run all of the queries, and other times only one, so that is why I would like the choice. An area I get confused in is the proper forms to use, such as ' ', " ", and ...?? I'm not sure of the number of options I will end up with, but it will be more than 5 and less than 20 per page. So if I get the system down for 2-3 choices, I can replicate it for as many as I may need. And, as always, if a better way exists, I am willing to try it. Thanks again... Larry
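
    A minimal sketch of the switch dispatch described above, in PHP 5.2-era style (the UPDATE's column name nStatus is invented; the two SELECTs come from the question):

        <?php
        $choice = isset($_POST['choice']) ? (int)$_POST['choice'] : 0;
        $sql = null;

        switch ($choice) {
            case 1:
                $sql = "UPDATE Ships SET nStatus = nStatus + 60";
                break;
            case 2:
                $sql = "SELECT * FROM Ships";
                break;
            case 3:
                $sql = "SELECT * FROM Ships WHERE nPort = 1";
                break;
            case 100:
            default:
                $sql = null;   // done -- back to the main menu
        }

        if ($sql !== null) {
            $result = mysql_query($sql) or die(mysql_error());
            // ... display $result, then re-show the option list
        }
        ?>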

  • Algorithm: Determine shape of two sectors delineated by an arbitrary path, and then fill one.

    - by Arseniy Banayev
    NOTE: This is a challenging problem for anybody who likes logic problems, etc.

    Consider a rectangular two-dimensional grid of height H and width W. Every space on the grid has a value, either 0, 1, or 2. Initially, every space on the grid is a 0, except for the spaces along each of the four edges, which are initially a 2.

    Then consider an arbitrary path of adjacent (horizontally or vertically) grid spaces. The path begins on a 2 and ends on a different 2. Every space along the path is a 1. The path divides the grid into two "sectors" of 0 spaces. There is an object that rests on an unspecified 0 space. The "sector" that does NOT contain the object must be filled completely with 2.

    Define an algorithm that determines the spaces that must become 2 from 0, given an array (list) of values (0, 1, or 2) that correspond to the values in the grid, going from top to bottom and then from left to right. In other words, the element at index 0 in the array contains the value of the top-left space in the grid (initially a 2). The element at index 1 contains the value of the space in the grid that is in the left column, second from the top, and so forth. The element at index H contains the value of the space in the grid that is in the top row but second from the left, and so forth.

    Once the algorithm finishes and the empty "sector" is filled completely with 2s, the SAME algorithm must be sufficient to do the same process again. The second (and subsequent) times, the path is still drawn from a 2 to a different 2, across spaces of 0, but the "grid" is smaller, because the 2s that are surrounded by other 2s cannot be touched by the path (since the path is along spaces of 0).

    I thank whoever is able to figure this out for me, very very much. This does not have to be in a particular programming language; in fact, pseudo-code or just English is sufficient. Thanks again! If you have any questions, just leave a comment and I'll specify what needs to be specified.
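
    Since any language is welcome, here is a hedged Java sketch of the usual approach: flood-fill the 0-sector containing the object with a temporary mark, turn every remaining 0 into a 2, then restore the marked spaces. It assumes the column-major indexing described above (index = x*H + y) and that the object's coordinates are known:

        import java.util.ArrayDeque;
        import java.util.Deque;

        class SectorFill {
            static void fillOtherSector(int[] grid, int W, int H, int objX, int objY) {
                Deque<int[]> stack = new ArrayDeque<int[]>();
                stack.push(new int[] { objX, objY });
                while (!stack.isEmpty()) {
                    int[] p = stack.pop();
                    int x = p[0], y = p[1];
                    if (x < 0 || x >= W || y < 0 || y >= H) continue;
                    if (grid[x * H + y] != 0) continue;  // path (1) and edges (2) stop the fill
                    grid[x * H + y] = 3;                 // temporary mark: object's sector
                    stack.push(new int[] { x + 1, y });
                    stack.push(new int[] { x - 1, y });
                    stack.push(new int[] { x, y + 1 });
                    stack.push(new int[] { x, y - 1 });
                }
                for (int i = 0; i < W * H; i++) {
                    if (grid[i] == 0) grid[i] = 2;       // the sector without the object
                    else if (grid[i] == 3) grid[i] = 0;  // restore the object's sector
                }
            }
        }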

  • Is using a Singleton to pass credentials in a multi-tenant application a code smell?

    - by Hans Gruber
    Currently I'm working on a multi-tenant application that employs the Shared DB/Shared Schema approach. IOW, we enforce tenant data segregation by defining a TenantID column on all tables. By convention, all SQL reads/writes must include a WHERE TenantID = '?' clause. Not an ideal solution, but hindsight is 20/20.

    Anyway, since virtually every page/workflow in our app must display tenant-specific data, I made the (poor) decision at the project's outset to employ a singleton to encapsulate the current user credentials (i.e. TenantID and UserID). My thinking at the time was that I didn't want to add a TenantID parameter to each and every method signature in my data layer. Here's what the basic pseudo-code looks like:

        public class UserIdentity
        {
            public UserIdentity(int tenantID, int userID)
            {
                TenantID = tenantID;
                UserID = userID;
            }

            public int TenantID { get; private set; }
            public int UserID { get; private set; }
        }

        public class AuthenticationModule : IHttpModule
        {
            public void Init(HttpApplication context)
            {
                context.AuthenticateRequest +=
                    new EventHandler(context_AuthenticateRequest);
            }

            private void context_AuthenticateRequest(object sender, EventArgs e)
            {
                var userIdentity = _authenticationService.AuthenticateUser(sender);

                if (userIdentity == null)
                {
                    // authentication failed, so redirect to login page, etc.
                }
                else
                {
                    // put the userIdentity into the HttpContext object so that
                    // it's only valid for the lifetime of a single request
                    HttpContext.Current.Items["UserIdentity"] = userIdentity;
                }
            }
        }

        public static class CurrentUser
        {
            public static UserIdentity Instance
            {
                get { return (UserIdentity)HttpContext.Current.Items["UserIdentity"]; }
            }
        }

        public class WidgetRepository : IWidgetRepository
        {
            public IEnumerable<Widget> ListWidgets()
            {
                var tenantId = CurrentUser.Instance.TenantID;
                // call sproc with tenantId parameter
            }
        }

    As you can see, there are several code smells here. This is a singleton, so it's already not unit-test friendly. On top of that, you have very tight coupling between CurrentUser and the HttpContext object. By extension, this also means that I have a reference to System.Web in my data layer (shudder).

    I want to pay down some technical debt this sprint by getting rid of this singleton, for the reasons mentioned above. I have a few thoughts on what a better implementation might be, but if anyone has any guidance or lessons learned they could share, I would be much obliged.
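
    For discussion's sake, a hedged sketch of one common refactoring (names invented): hide the lookup behind an interface that gets constructor-injected, so the data layer depends on an abstraction and only one small class references System.Web:

        public interface IUserContext
        {
            UserIdentity CurrentUser { get; }
        }

        // Lives in the web project; the only place touching HttpContext.
        public class HttpUserContext : IUserContext
        {
            public UserIdentity CurrentUser
            {
                get { return (UserIdentity)HttpContext.Current.Items["UserIdentity"]; }
            }
        }

        public class WidgetRepository : IWidgetRepository
        {
            private readonly IUserContext _userContext;

            public WidgetRepository(IUserContext userContext)
            {
                _userContext = userContext;
            }

            public IEnumerable<Widget> ListWidgets()
            {
                var tenantId = _userContext.CurrentUser.TenantID;
                // call sproc with tenantId parameter; tests can pass a fake IUserContext
            }
        }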

  • Sitecore E-Commerce Module - Discount/Promotional Codes

    - by Zachary Kniebel
    I am working on a project for which I must use Sitecore's E-Commerce Module (and Sitecore 6.5 rev. 120706, aka 'Update 5') to create a web store. One of the features that I am trying to implement is a generic promotional/discount code system: the customer enters a code at checkout which grants a discount like 'free shipping', '20% off', etc. At the moment, I am looking for some guidance (a high-level solution, a few pseudo-ideas, some references to review, etc.) as to how this can be accomplished.

    Summary: What I am looking for is a way to detect whether or not the user entered a promo code at a previous stage in the checkout line, and to determine what that promo code is, if they did.

    Progress Thus Far: I have thoroughly reviewed all of the Sitecore E-Commerce Services (SES) documentation, especially the "SES Order Line Extension" documentation (which I believe will have to be modified/extended in order to accomplish this task). Additionally, I have thoroughly reviewed the Sitecore Community article "Extending Sitecore E-Commerce - Pricing" and believe that it may be a useful guide for applying a discount statically, but it does not say much in the way of applying a discount dynamically. After reviewing these documents, I have come up with the following possible high-level solution to start from:

    1. I create a template to represent a promotional code, which holds all data relevant to the promotion (percent off, free shipping, code, etc.).
    2. I then create another template (based on the Product Search Group template) that holds a link to an item within a global "Promotional Codes" items folder.
    3. Next, I use the Product Search Group features of my new template to choose which products to apply the discount to.
    4. In the source code for the checkout, I create a class that checks if a code has been entered and, if so, somehow carries it through the rest of the checkout process. This is where I get stuck.

    More Details:

    - No using cookies
    - No GET requests
    - No changing/creating/deleting items in the Sitecore database during the checkout process (e.g., no manipulation of the fields of a discount item during checkout to signal that the discount has been applied)
    - Must stay within the scope of C#

    Last Notes: I will update this post with any more information that I find or progress that I make. I will upvote all answers that are relevant and detailed, thought-provoking, or otherwise useful to me and potentially useful to others, in addition to any high-level answers that serve as a feasible solution to this problem; even if your idea doesn't help me, if I think it will help someone else, I will still upvote it. Thanks, in advance, for all your help! :)

  • Find the row with the highest number of cells in a table with colspans

    - by user593029
    What is an efficient way to find the highest number of cells in any row of a big table with numerous colspans? Merged cells (colspan) should be ignored, so in the example below the highest number of cells is 4, in the first row. Is it JS/jQuery with a regular expression, or just a loop with bubble sorting? I found the link below explaining the use of a regex for a colspan patch: is that the ideal way? Can someone suggest pseudo-code for this?

    High cpu consumption due to a jquery regex patch

        <table width="156" height="84" border="0">
            <tbody>
                <tr style="height:10px">
                    <td width="10" style="width:10px; height:10px"/>
                    <td width="10" style="width:10px; height:10px"/>
                    <td width="10" style="width:10px; height:10px"/>
                    <td width="10" style="width:10px; height:10px"/>
                </tr>
                <tr style="height:10px">
                    <td width="10" style="width:10px; height:10px" colspan="2"/>
                    <td width="10" style="width:10px; height:10px"/>
                    <td width="10" style="width:10px; height:10px"/>
                </tr>
                <tr style="height:10px">
                    <td width="10" style="width:10px; height:10px"/>
                    <td width="10" style="width:10px; height:10px"/>
                    <td width="10" style="width:10px; height:10px" colspan="2"/>
                </tr>
                <tr style="height:10px">
                    <td width="10" style="width:10px; height:10px" colspan="2"/>
                    <td width="10" style="width:10px; height:10px"/>
                    <td width="10" style="width:10px; height:10px"/>
                </tr>
            </tbody>
        </table>
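
    For what it's worth, a minimal jQuery sketch (the selector is a placeholder): since a merged cell is still a single <td>, simply counting the <td> elements per row ignores colspan automatically, so no regex is needed:

        var max = 0;
        $('table tr').each(function () {
            var cells = $(this).children('td').length; // colspan adds nothing
            if (cells > max) { max = cells; }
        });
        // max is now 4 for the table above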

  • Looping through an array in PHP to post several multipart form-data requests

    - by Léon Pelletier
    I'm trying, in an ASP web application, to code a function that would loop through the list of files in a multiple-upload form and send them one by one. Is this something that can be done in ASP? I've read some posts about how to attach several files together, but saw nothing about looping through the files. I can easily imagine it in C# via HttpWebRequest or with a socket, but in PHP I guess there are already functions designed to handle it?

        // This is false/pseudo-code :)
        for (int index = 0; index < number_of_files; index++) {
            postfile(file[index]);
        }

    And in each iteration, it should send a multipart form-data POST. postfile(TheFileInfos) should make a POST like this:

        POST /afs.aspx?fn=upload HTTP/1.1
        [Header stuff]
        Content-Type: multipart/form-data; boundary=----------Ef1Ef1cH2Ij5GI3ae0gL6KM7GI3GI3
        [Header stuff]

        ------------Ef1Ef1cH2Ij5GI3ae0gL6KM7GI3GI3
        Content-Disposition: form-data; name="Filename"

        myimage1.png
        ------------Ef1Ef1cH2Ij5GI3ae0gL6KM7GI3GI3
        Content-Disposition: form-data; name="fileid"

        58e21ede4ead43a5201206101806420000007667212251
        ------------Ef1Ef1cH2Ij5GI3ae0gL6KM7GI3GI3
        Content-Disposition: form-data; name="Filedata"; filename="myimage1.png"
        Content-Type: application/octet-stream

        [Octet Stream]

    [Edit] I'll try it:

        <html>
        <head>
        <title>Untitled Document</title>
        <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
        </head>

        <body>
        <form name="form1" enctype="multipart/form-data" method="post" action="processFiles.php">
          <p>
          <?
          // start of dynamic form
          $uploadNeed = $_POST['uploadNeed'];
          for($x=0;$x<$uploadNeed;$x++){
          ?>
            <input name="uploadFile<? echo $x;?>" type="file" id="uploadFile<? echo $x;?>">
          </p>
          <?
          // end of for loop
          }
          ?>
          <p><input name="uploadNeed" type="hidden" value="<? echo $uploadNeed;?>">
            <input type="submit" name="Submit" value="Submit">
          </p>
        </form>
        </body>
        </html>
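
    A hedged sketch of the receiving side (processFiles.php, matching the field names in the form above; the destination path is a placeholder):

        <?php
        $uploadNeed = (int)$_POST['uploadNeed'];

        for ($x = 0; $x < $uploadNeed; $x++) {
            $field = 'uploadFile' . $x;
            if (isset($_FILES[$field]) && $_FILES[$field]['error'] === UPLOAD_ERR_OK) {
                move_uploaded_file(
                    $_FILES[$field]['tmp_name'],
                    '/path/to/uploads/' . basename($_FILES[$field]['name'])
                );
            }
        }
        ?>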

  • How to query across many-to-many association in NHibernate?

    - by Splash
    I have two entities, Post and Tag. The Post entity has a collection of Tags, which represents a many-to-many join between the two (that is, each post can have any number of tags, and each tag can be associated with any number of posts). I am trying to retrieve all Posts which have a given tag. However, I seem to be unable to get this query right. I essentially want something which means the same as the following pseudo-HQL:

        from Posts p
        where p.Tags contains (from Tags t where t.Name = :tagName)
        order by p.DateTime

    The only thing I've found which even approaches this is a post by Ayende. However, his approach requires the entity on the other side (in my case, Tag) to have a collection representing the other end of the many-to-many. I don't have this and don't really wish to have it. I find it hard to believe this can't be done. What am I missing? My entities and mappings look like this (simplified):

        public class Post
        {
            public virtual int Id { get; set; }
            public virtual string Title { get; set; }

            private IList<Tag> tags = new List<Tag>();
            public virtual IEnumerable<Tag> Tags { get { return tags; } }

            public virtual void AddTag(Tag tag)
            {
                this.tags.Add(tag);
            }
        }

        public class PostMap : ClassMap<Post>
        {
            public PostMap()
            {
                Id(x => x.Id).GeneratedBy.HiLo("99");
                Map(x => x.Title);
                HasManyToMany(x => x.Tags);
            }
        }

        // ----

        public class Tag
        {
            public virtual int Id { get; set; }
            public virtual string Name { get; set; }
        }

        public class TagMap : ClassMap<Tag>
        {
            public TagMap()
            {
                Id(x => x.Id).GeneratedBy.HiLo("99");
                Map(x => x.Name).Unique();
            }
        }
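
    For reference, a hedged HQL sketch (via NHibernate's CreateQuery API) that joins through the Post side only, so no inverse collection on Tag is required; it assumes Post has the DateTime property used in the pseudo-HQL:

        var posts = session
            .CreateQuery(@"select p from Post p
                           join p.Tags t
                           where t.Name = :tagName
                           order by p.DateTime")
            .SetParameter("tagName", tagName)
            .List<Post>();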

  • How do I make a GUI that behaves like this?

    - by Karl Knechtel
    This is difficult to explain without illustration, so: behold, an illustration, cobbled together from screenshots of a few hello-world examples and a lot of Paint work.

    I have started out using Windows Forms on .NET (via IronPython, but that shouldn't be important) and haven't been able to figure out very much. GUI libraries in general are very intimidating, simply because every class has so many possible attributes. Documentation is good at explaining what everything does, but not so good at helping you figure out what you need. I will be assembling the GUI dynamically, but I'm not expecting that to be the hard part. The sticking points for me right now are (see the sketch after this list):

    1. How do I get text labels to size themselves automatically to the width of the contained text (so that the text doesn't clip, and I also don't reserve unnecessary space for them when resizing the window)?
    2. How do I make the vertical scrollbar always appear? Setting the VScroll property (why is this protected when AutoScroll is public, BTW?) doesn't seem to do anything.
    3. How come the horizontal scrollbar is not added by AutoScroll when contents are laid out vertically (via Dock = DockStyle.Top)? I can use a minimum size for panels to prevent the label and corresponding control from overlapping when the window is shrunk horizontally, but then the scrollbar doesn't appear and the control is inaccessible.
    4. How can I put limits on window resizing (e.g. set a minimum width) without disabling it completely? (Just set minimum/maximum sizes for the Form?) Related to that, is there any way to set minimum/maximum widths or heights without setting a minimum/maximum size (i.e. can I constrain the size in only one dimension)?
    5. Is there a built-in control suitable for hex editing, or am I going to have to build something myself?

    ... And should I be using something else (perhaps something more capable)? I've heard WPF mentioned, but I understand that this involves XML, and I really don't want to build a GUI from XML - I already have data in an object graph, and doing some kind of weird XML pseudo-serialization (in Python, no less!) in order to create a GUI seems incredibly roundabout.
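
    On points 1, 2/3, and 4, a hedged IronPython fragment (control names are placeholders, and the one-dimension MinimumSize trick is an assumption worth verifying): AutoSize handles label fitting, AutoScroll adds scrollbars on demand, and a MinimumSize with a zero component should constrain only the non-zero dimension:

        import clr
        clr.AddReference("System.Drawing")
        from System.Drawing import Size

        label.AutoSize = True             # 1: label grows/shrinks to fit its text
        panel.AutoScroll = True           # 2/3: scrollbars appear as needed
        form.MinimumSize = Size(400, 0)   # 4: minimum width only; height stays free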

  • Publishing/subscribing multiple subsets of the same server collection

    - by matb33
    How does one go about publishing different subsets (or "views") of a single collection on the server as multiple collections on the client? Here is some pseudo-code to help illustrate my question.

    The items collection on the server:

    Assume that I have an items collection on the server with millions of records. Let's also assume that 50 records have the enabled property set to true, and 100 records have the processed property set to true. All others are set to false.

        items:
        { "_id": "uniqueid1", "title": "item #1", "enabled": false, "processed": false },
        { "_id": "uniqueid2", "title": "item #2", "enabled": false, "processed": true },
        ...
        { "_id": "uniqueid458734958", "title": "item #458734958", "enabled": true, "processed": true }

    Server code:

    Let's publish two "views" of the same server collection. One will send down a cursor with 50 records, and the other will send down a cursor with 100 records. There are over 458 million records in this fictitious server-side database, and the client does not need to know about all of those (in fact, sending them all down would probably take several hours in this example):

        var Items = new Meteor.Collection("items");

        Meteor.publish("enabled_items", function () {
            // Only 50 "Items" have enabled set to true
            return Items.find({enabled: true});
        });

        Meteor.publish("processed_items", function () {
            // Only 100 "Items" have processed set to true
            return Items.find({processed: true});
        });

    Client code:

    In order to support the latency compensation technique, we are forced to declare a single collection Items on the client. It should become apparent where the flaw is: how does one differentiate between Items for enabled_items and Items for processed_items?

        var Items = new Meteor.Collection("items");

        Meteor.subscribe("enabled_items", function () {
            // This will output 50, fine
            console.log(Items.find().count());
        });

        Meteor.subscribe("processed_items", function () {
            // This will also output 50, since we have no choice but to use
            // the same "Items" collection.
            console.log(Items.find().count());
        });

    My current solution involves monkey-patching _publishCursor to allow the subscription name to be used instead of the collection name. But that won't do any latency compensation; every write has to round-trip to the server:

        // On the client:
        var EnabledItems = new Meteor.Collection("enabled_items");
        var ProcessedItems = new Meteor.Collection("processed_items");

    With the monkey-patch in place, this will work. But go into offline mode and changes won't appear on the client right away -- we'll need to be connected to the server to see changes. What's the correct approach?
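
    One approach worth exploring, as a hedged sketch built on Meteor's documented low-level publish functions (whether it fits the latency-compensation requirement is a separate question, since writes would still target the real items collection): a publish handler can push documents into a client-side collection whose name differs from the server collection, avoiding the monkey-patch:

        // Server: re-publish the cursor under a different client-side name
        Meteor.publish("enabled_items", function () {
            var self = this;
            var handle = Items.find({enabled: true}).observeChanges({
                added: function (id, fields) {
                    self.added("enabled_items", id, fields);
                },
                changed: function (id, fields) {
                    self.changed("enabled_items", id, fields);
                },
                removed: function (id) {
                    self.removed("enabled_items", id);
                }
            });
            self.ready();
            self.onStop(function () { handle.stop(); });
        });

        // Client:
        var EnabledItems = new Meteor.Collection("enabled_items");
        Meteor.subscribe("enabled_items");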

  • JNA array structure

    - by Burny
    I want to use a DLL (an IEC driver) in Java; for that I am using JNA. The problem, in pseudo-code:

    1. Start the server.
    2. Allocate new memory for an array (JNA).
    3. A client connects.
    4. Values are written from an array to the memory.
    5. The array is sent to the client.
    6. The client disconnects.
    7. A new client connects.
    8. Allocate new memory for an array (JNA) - JVM crash (EXCEPTION_ACCESS_VIOLATION).

    The JVM does not crash with primitive data types, or if the values are not written from the array to the memory.

    The code in C:

        struct DataAttributeData CrvPtsArrayDAData = {0};
        CrvPtsArrayDAData.ucType = DATATYPE_ARRAY;
        CrvPtsArrayDAData.pvData = XYValDAData;

        XYValDAData[0].ucType = FLOAT;
        XYValDAData[0].uiBitLength = sizeof(Float32)*8;
        XYValDAData[0].pvData = &(inUpdateValue.xVal);
        XYValDAData[1].ucType = FLOAT;
        XYValDAData[1].uiBitLength = sizeof(Float32)*8;
        XYValDAData[1].pvData = &(inUpdateValue.yVal);

        Send(&CrvPtsArrayDAData, 1);

    The code in Java:

        DataAttributeData[] data_array = (DataAttributeData[]) new DataAttributeData()
                .toArray(d.bitLength);

        for (DataAttributeData d_temp : data_array) {
            d_temp.data = new Memory(size / 8);
            d_temp.type = type_iec;
            d_temp.bitLength = size;
            d_temp.write();
        }
        d.data = data_array[0].getPointer();

    And then the values are written with this code:

        for (int i = 0; i < arraySize; i++) {
            DataAttributeData dataAttr = new DataAttributeData(d.data.share(i * d.size()));
            dataAttr.read();
            dataAttr.data.setFloat(0, f[i]);
            dataAttr.write();
        }

    The struct in C:

        struct DataAttributeData {
            unsigned char ucType;
            int iArrayIndex;
            unsigned int uiBitLength;
            void * pvData;
        };

    The struct in Java:

        public static class DataAttributeData extends Structure {
            public DataAttributeData(Pointer p) {
                super(p);
            }

            public DataAttributeData() {
                super();
            }

            public byte type;
            public int iArrayIndex;
            public int bitLength;
            public Pointer data;

            @Override
            protected List<String> getFieldOrder() {
                return Arrays.asList(new String[] { "type", "iArrayIndex", "bitLength", "data" });
            }
        }

    Can anybody help me?
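
    One hedged guess at the EXCEPTION_ACCESS_VIOLATION, based on the code shown: JNA frees a Memory block's native allocation once the Java object becomes unreachable, and after d.data = data_array[0].getPointer() only a raw Pointer survives, so the buffers can be reclaimed while the native side still touches them. A sketch of the usual fix, keeping strong references alongside the allocation loop above (field and variable names invented):

        import com.sun.jna.Memory;
        import java.util.ArrayList;
        import java.util.List;

        // Hold strong references for as long as native code may use the
        // buffers; letting them go unreferenced allows JNA to free them.
        List<Memory> keepAlive = new ArrayList<Memory>();

        for (DataAttributeData dTemp : dataArray) {
            Memory buf = new Memory(size / 8);
            keepAlive.add(buf);          // prevents premature finalization
            dTemp.data = buf;
            dTemp.type = typeIec;
            dTemp.bitLength = size;
            dTemp.write();
        }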

  • How to get all captures of subgroup matches with preg_match_all()?

    - by hakre
    Update/Note: I think what I'm probably looking for is how to get the captures of a group in PHP. Referenced: PCRE regular expressions using named pattern subroutines.

    (Read carefully:) I have a string that contains a variable number of segments (simplified):

        $subject = 'AA BB DD '; // could be 'AA BB DD CC EE ' as well

    I would now like to match the segments and return them via the matches array:

        $pattern = '/^(([a-z]+) )+$/i';
        $result = preg_match_all($pattern, $subject, $matches);

    This will only return the last match for capture group 2: DD. Is there a way that I can retrieve all subpattern captures (AA, BB, DD) with one regex execution? Isn't preg_match_all suitable for this?

    This question is a generalization; both the $subject and $pattern are simplified. Naturally, with such a pattern the general list of AA, BB, ... is much easier to extract with other functions (e.g. explode) or with a variation of the $pattern. But I'm specifically asking how to return all of the subgroup matches with the preg_... family of functions.

    For a real-life case, imagine you have multiple (nested) levels of a variant number of subpattern matches.

    Example: This is an example in pseudo-code to describe a bit of the background. Imagine the following regular definitions of tokens:

        CHARS := [a-z]+
        PUNCT := [.,!?]
        WS    := [ ]

    $subject gets tokenized based on these. The tokenization is stored inside an array of tokens (type, offset, ...). That array is then transformed into a string, containing one character per token:

        CHARS -> "c"
        PUNCT -> "p"
        WS    -> "s"

    So it's now possible to run regular expressions based on tokens (and not character classes etc.) against the token stream string index. E.g. the regex (cs)?cp expresses one or more groups of chars followed by a punctuation mark. As I can now express self-defined tokens as a regex, the next step was to build the grammar. This is only an example, in a sort of ABNF style:

        words       = word | (word space)+ word
        word        = CHARS+
        space       = WS
        punctuation = PUNCT

    If I now compile the grammar for words into a (token) regex, I would naturally like to have all subgroup matches of each word:

        words = (CHARS+) | ( (CHARS+) WS )+ (CHARS+)   # words resolved to tokens
        words = (c+)|((c+)s)+c+                        # words resolved to regex

    I could code up to this point. Then I ran into the problem that the subgroup matches only contained their last match. So I have the option of either creating an automaton for the grammar on my own (which I would like to avoid, to keep the grammar expressions generic) or somehow making preg_match work for me so I can spare that.

    That's basically all. Probably now it's understandable why I simplified the question.

    Related: the pcrepattern man page; Get repeated matches with preg_match_all()
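
    For the simplified case, a hedged workaround in PHP: PCRE reports only the last capture of a repeated group, so instead of repeating the group, match the repeatable unit itself and anchor with \G so the matches must be contiguous from the start of the string:

        <?php
        $subject = 'AA BB DD ';
        preg_match_all('/\G([a-z]+) /i', $subject, $matches);
        print_r($matches[1]);   // Array ( [0] => AA [1] => BB [2] => DD )
        ?>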

  • How to bridge Debian guest VM to VPN via Cisco AnyConnect Client running on Windows Vista host

    - by bgoodr
    I am running Cisco Anyconnect VPN Client version 2.5.3054 on a laptop running Windows Vista Home Premium (version 6.0.6002) Service Pack 2. I am running the VMware Player version 4.0.2 build-591240. The host operating system running under VMware Player is Debian 6.0.2.1 i386. The laptop is connected to a wireless connection, and I can browse the web from Windows Vista using Firefox just fine. I am able to boot into the Debian VM and open up a browser and access websites on the WAN from within the VM just fine. I can ping real Linux hosts on my LAN via: ping <lan_system>.local where <lan_system> is the hostname returned from uname -a on that system on my LAN. From a DOS CMD shell, I am able to ping hosts that exist on the remote network served by the Cisco AnyConnect Client's VPN network (and without the .local suffix applied as above): ping <remote_system> However, from within the Debian VM, I expect to be able to also ping those same remote hosts (<remote_system>) that are tunnelled over the VPN set up by Cisco AnyConnect Client. Let's say that I can ping a <remote_system> called flubber from a DOS CMD prompt just fine. When I execute Linux ping command from inside the Debian VM via: ping flubber It returns immediately with this output: ping: unknown host flubber For reference since I suspect it will be useful, here is the output of the route print command from the DOS CMD prompt: route print =========================================================================== Interface List 30 ...00 05 9a 3c 7a 00 ...... Cisco AnyConnect VPN Virtual Miniport Adapter for Windows 11 ...00 1b 9e c4 de e5 ...... Atheros AR5007EG Wireless Network Adapter 26 ...00 50 56 c0 00 01 ...... VMware Virtual Ethernet Adapter for VMnet1 28 ...00 50 56 c0 00 08 ...... VMware Virtual Ethernet Adapter for VMnet8 1 ........................... Software Loopback Interface 1 12 ...02 00 54 55 4e 01 ...... Teredo Tunneling Pseudo-Interface 13 ...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #3 32 ...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #4 27 ...00 00 00 00 00 00 00 e0 isatap.{E5292CF6-4FBB-4320-806D-A6B366769255} 17 ...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #2 20 ...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #8 22 ...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #10 24 ...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #11 25 ...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #12 29 ...00 00 00 00 00 00 00 e0 isatap.{C3852986-5053-4E2E-BE60-52EA2FCF5899} 41 ...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #14 =========================================================================== At the top window border of the VM, clicking on Virtual Machine, then clicking on Virtual Machine Settings, then clicking on Network Adapter, I have these two options checked: [X] Bridged: Connected directly to the physical Network [X] Replicate physical network connection state [ ] NAT: used to share the hosts's IP address [ ] Host-only: A private network shared with the host [ ] LAN segment: [ ] <LAN Segments...> <Advanced> I've toyed with the other options such as NAT and Host-only but that had no effect. Is there some way to allow the VM to access those <remote_system>'s?

    Read the article

  • Capistrano asks for SSH password when deploying from local machine to server

    - by GhostRider
    When I try to ssh to a server, I'm able to do it, as my id_rsa.pub key is added to the authorized keys on the server. Now when I try to deploy my code via Capistrano to the server from my local project folder, the server asks for a password. I'm unable to understand what the issue could be if I'm able to ssh to, yet unable to deploy to, the same server.

        $ cap deploy:setup
        "no seed data"
          triggering start callbacks for `deploy:setup'
        * 13:42:18 == Currently executing `multistage:ensure'
        *** Defaulting to `development'
        * 13:42:18 == Currently executing `development'
        * 13:42:18 == Currently executing `deploy:setup'
          triggering before callbacks for `deploy:setup'
        * 13:42:18 == Currently executing `db:configure_mongoid'
        * executing "mkdir -p /home/deploy/apps/development/flyingbird/shared/config"
          servers: ["dev1.noob.com", "176.9.24.217"]
        Password:

    Cap script:

        # gem install capistrano capistrano-ext capistrano_colors
        begin; require 'capistrano_colors'; rescue LoadError; end
        require "bundler/capistrano"

        # RVM bootstrap
        # $:.unshift(File.expand_path('./lib', ENV['rvm_path']))
        require 'rvm/capistrano'
        set :rvm_ruby_string, 'ruby-1.9.2-p290'
        set :rvm_type, :user # or :user

        # Application setup
        default_run_options[:pty] = true   # allow pseudo-terminals
        ssh_options[:forward_agent] = true # forward SSH keys (this will use your SSH key to get the code from the git repository)
        ssh_options[:port] = 22

        set :ip, "dev1.noob.com"
        set :application, "flyingbird"
        set :repository, "repo-path"
        set :scm, :git
        set :branch, fetch(:branch, "master")
        set :deploy_via, :remote_cache
        set :rails_env, "production"
        set :use_sudo, false
        set :scm_username, "user"
        set :user, "user1"

        set(:database_username) { application }
        set(:production_database) { application + "_production" }
        set(:staging_database) { application + "_staging" }
        set(:development_database) { application + "_development" }

        role :web, ip                  # Your HTTP server, Apache/etc
        role :app, ip                  # This may be the same as your `Web` server
        role :db, ip, :primary => true # This is where Rails migrations will run

        # Use multi-staging
        require "capistrano/ext/multistage"
        set :stages, ["development", "staging", "production"]
        set :default_stage, rails_env

        before "deploy:setup", "db:configure_mongoid"

        # Uncomment if you use any of these databases
        after "deploy:update_code", "db:symlink_mongoid"
        after "deploy:update_code", "uploads:configure_shared"
        after "uploads:configure_shared", "uploads:symlink"
        after 'deploy:update_code', 'bundler:symlink_bundled_gems'
        after 'deploy:update_code', 'bundler:install'
        after "deploy:update_code", "rvm:trust_rvmrc"

        # Use this to update crontab if you use the 'whenever' gem
        # after "deploy:symlink", "deploy:update_crontab"

        if ARGV.include?("seed_data")
          after "deploy", "db:seed"
        else
          p "no seed data"
        end

        # Custom tasks to handle resque and redis restart
        before "deploy", "deploy:stop_workers"
        after "deploy", "deploy:restart_redis"
        after "deploy", "deploy:start_workers"
        after "deploy", "deploy:cleanup"

        # Create symlink for public uploads
        namespace :uploads do
          task :symlink do
            run <<-CMD
              rm -rf #{release_path}/public/uploads &&
              mkdir -p #{release_path}/public &&
              ln -nfs #{shared_path}/public/uploads #{release_path}/public/uploads
            CMD
          end

          task :configure_shared do
            run "mkdir -p #{shared_path}/public"
            run "mkdir -p #{shared_path}/public/uploads"
          end
        end

        namespace :rvm do
          desc 'Trust rvmrc file'
          task :trust_rvmrc do
            run "rvm rvmrc trust #{current_release}"
          end
        end

        namespace :db do
          desc "Create mongoid.yml in shared path"
          task :configure_mongoid do
            db_config = <<-EOF
        defaults: &defaults
          host: localhost

        production:
          <<: *defaults
          database: #{production_database}

        staging:
          <<: *defaults
          database: #{staging_database}
        EOF

            run "mkdir -p #{shared_path}/config"
            put db_config, "#{shared_path}/config/mongoid.yml"
          end

          desc "Make symlink for mongoid.yml"
          task :symlink_mongoid do
            run "ln -nfs #{shared_path}/config/mongoid.yml #{release_path}/config/mongoid.yml"
          end

          desc "Fill the database with seed data"
          task :seed do
            run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake db:seed"
          end
        end

        namespace :bundler do
          desc "Symlink bundled gems on each release"
          task :symlink_bundled_gems, :roles => :app do
            run "mkdir -p #{shared_path}/bundled_gems"
            run "ln -nfs #{shared_path}/bundled_gems #{release_path}/vendor/bundle"
          end

          desc "Install bundled gems"
          task :install, :roles => :app do
            run "cd #{release_path} && bundle install --deployment"
          end
        end

        namespace :deploy do
          task :start, :roles => :app do
            run "touch #{current_path}/tmp/restart.txt"
          end

          desc "Restart the app"
          task :restart, :roles => :app do
            run "touch #{current_path}/tmp/restart.txt"
          end

          desc "Stop the workers"
          task :stop_workers do
            run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake resque:stop_workers"
          end

          desc "Restart Redis server"
          task :restart_redis do
            run "/etc/init.d/redis-server restart"
          end

          desc "Start the workers"
          task :start_workers do
            run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake resque:start_workers"
          end
        end
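
    One hedged thing to verify before digging into the script: Capistrano connects as the :user set in the script ("user1"), which may not be the account your interactive ssh uses, and a password prompt is exactly what a failed key exchange falls back to. From a local shell:

        # confirm the key works for the exact user/host pair Capistrano uses
        ssh user1@dev1.noob.com

        # confirm the agent actually holds a key (needed for forward_agent)
        ssh-add -L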

  • RAID and Partitions: Guidance Needed

    - by beauregarde
    Alright, I have:

    - a Biostar TA790GX3A2+ mobo
    - 2x Seagate 750GB hard drives (with 2 different speeds)
    - an X4 9750
    - a GeForce 9800GT
    - 2GB RAM

    I want to configure my computer with partitions in various RAID arrays.

    The partitions I know I want (disk letters are mostly for reference here):

    - C: XP Boot
    - D: XP Swap
    - E: XP Run
    - F: Games
    - G: Data

    The partitions I think I want (repeat caveat):

    - H: small FAT for Win Legacy and DOS
    - I: Linux
    - J: Linux Swap
    - K-?M?: Other Linux/whatever partitions
    - N & O: Attic for D1 and D2

    What I'd like to do is have C: written on disk 1 (D1); D: on D2; E: and F: striped on D1 & D2; G: mirrored on D1 & D2; I: on D2 (so I can just switch disk boot priority to open in Ubuntu); J: on D1; and H: somewhere low on D1.

    I am inexperienced with VMs, so I am unsure as to whether those run out of XP, or whether I need to reserve a primary partition for them. However, I think they would be preferable to scheduling a partition for the same purpose of testing new OSes. I'm also not married to XP, but -64 IS pretty important to me.

    Question time:

    1. Ignoring the irrationality of it all, is such a configuration possible? If not, can some pseudo-approximation be achieved?
    2. My RAID is software, isn't it?
    3. How much should I short a 750GB HD? And should I use that space for my attics, or for my attics and something else, or for something else (.isos perhaps?)?
    4. If XP is striped on D1 & D2, will that interfere egregiously with my swap writes on D2? If so, would striping both XP and swap relieve (or at least mitigate) that issue? Should XP and swap just be written normally on 2 different HDs?
    5. Should I keep downloads and drivers on E: (XP Run), F: (Games), or elsewhere?
    6. Is 4GB enough for C:?
    7. Is 30GB enough (or too much) for E:?
    8. How much should I reserve for the Linux and sub-Linux partitions? Also, where on the platter do you think I should put them?
    9. Am I a fool to use FAT16 instead of FAT32 for H: because I'd rather run 95 than 98SE? If not, do you think 2GB or 4GB?
    10. I can't predict what my max commit charge will be, so any recommendations for pagefile size? 5GB? 12GB?
    11. VMs: where do I run them? Do they exacerbate anything? Would it be better to just emulate Linux, 95, and DOS?

    EC) What haven't I considered that I really should have?

    Notes: the computer is mostly for playing games and watching media, though I wouldn't rule out the use of particularly blah-intensive anything.

  • C#: LINQ vs foreach - Round 1.

    - by James Michael Hare
    So I was reading Peter Kellner's blog entry on Resharper 5.0 and its LINQ refactoring and thought that was very cool.  But that raised a point I had always been curious about in my head -- which is a better choice: manual foreach loops or LINQ?    The answer is not really clear-cut.  There are two sides to any code cost arguments: performance and maintainability.  The first of these is obvious and quantifiable.  Given any two pieces of code that perform the same function, you can run them side-by-side and see which piece of code performs better.   Unfortunately, this is not always a good measure.  Well written assembly language outperforms well written C++ code, but you lose a lot in maintainability which creates a big techncial debt load that is hard to offset as the application ages.  In contrast, higher level constructs make the code more brief and easier to understand, hence reducing technical cost.   Now, obviously in this case we're not talking two separate languages, we're comparing doing something manually in the language versus using a higher-order set of IEnumerable extensions that are in the System.Linq library.   Well, before we discuss any further, let's look at some sample code and the numbers.  First, let's take a look at the for loop and the LINQ expression.  This is just a simple find comparison:       // find implemented via LINQ     public static bool FindViaLinq(IEnumerable<int> list, int target)     {         return list.Any(item => item == target);     }         // find implemented via standard iteration     public static bool FindViaIteration(IEnumerable<int> list, int target)     {         foreach (var i in list)         {             if (i == target)             {                 return true;             }         }           return false;     }   Okay, looking at this from a maintainability point of view, the Linq expression is definitely more concise (8 lines down to 1) and is very readable in intention.  You don't have to actually analyze the behavior of the loop to determine what it's doing.   So let's take a look at performance metrics from 100,000 iterations of these methods on a List<int> of varying sizes filled with random data.  For this test, we fill a target array with 100,000 random integers and then run the exact same pseudo-random targets through both searches.                       List<T> On 100,000 Iterations     Method      Size     Total (ms)  Per Iteration (ms)  % Slower     Any         10       26          0.00046             30.00%     Iteration   10       20          0.00023             -     Any         100      116         0.00201             18.37%     Iteration   100      98          0.00118             -     Any         1000     1058        0.01853             16.78%     Iteration   1000     906         0.01155             -     Any         10,000   10,383      0.18189             17.41%     Iteration   10,000   8843        0.11362             -     Any         100,000  104,004     1.8297              18.27%     Iteration   100,000  87,941      1.13163             -   The LINQ expression is running about 17% slower for average size collections and worse for smaller collections.  Presumably, this is due to the overhead of the state machine used to track the iterators for the yield returns in the LINQ expressions, which seems about right in a tight loop such as this.   So what about other LINQ expressions?  After all, Any() is one of the more trivial ones.  
I decided to try the TakeWhile() algorithm, using a Count() to get the position where it stopped, like the sample Pete was using in his blog that Resharper refactored for him into LINQ:

    // LINQ form
    public static int GetTargetPosition1(IEnumerable<int> list, int target)
    {
        return list.TakeWhile(item => item != target).Count();
    }

    // traditionally iterative form
    public static int GetTargetPosition2(IEnumerable<int> list, int target)
    {
        int count = 0;

        foreach (var i in list)
        {
            if (i == target)
            {
                break;
            }

            ++count;
        }

        return count;
    }

Once again, the LINQ expression is much shorter, easier to read, and should be easier to maintain over time, reducing the cost of technical debt.  So I ran these through the same test data:

    List<T> On 100,000 Iterations

    Method      Size     Total (ms)  Per Iteration (ms)  % Slower
    TakeWhile   10       41          0.00041             128%
    Iteration   10       18          0.00018             -
    TakeWhile   100      171         0.00171             88%
    Iteration   100      91          0.00091             -
    TakeWhile   1000     1604        0.01604             94%
    Iteration   1000     825         0.00825             -
    TakeWhile   10,000   15765       0.15765             92%
    Iteration   10,000   8204        0.08204             -
    TakeWhile   100,000  156950      1.5695              92%
    Iteration   100,000  81635       0.81635             -

Wow!  I expected some overhead due to the state machines iterators produce, but 90% slower?  That seems a little heavy to me.  So then I thought: what if TakeWhile() is not the right tool for the job?  The problem is that TakeWhile() returns each item for processing using yield return, whereas our loop really doesn't care about the item beyond using it as a stop condition.

So what if that back-and-forth with the iterator state machine is the problem?  Well, we can quickly create an (albeit ugly) lambda that uses Any() along with a count in a closure (if a LINQ guru knows a better way, PLEASE let me know!).  After all, this is more consistent with what we're trying to do: we're trying to find the first occurrence of an item and halt once we find it; we just happen to be counting on the way.  This mostly matches Any().

    // a new method that uses LINQ but evaluates the count in a closure.
    public static int TakeWhileViaLinq2(IEnumerable<int> list, int target)
    {
        int count = 0;

        list.Any(item =>
            {
                if (item == target)
                {
                    return true;
                }

                ++count;
                return false;
            });

        return count;
    }

Now how does this one compare?
    List<T> On 100,000 Iterations

    Method         Size     Total (ms)  Per Iteration (ms)  % Slower
    TakeWhile      10       41          0.00041             128%
    Any w/Closure  10       23          0.00023             28%
    Iteration      10       18          0.00018             -
    TakeWhile      100      171         0.00171             88%
    Any w/Closure  100      116         0.00116             27%
    Iteration      100      91          0.00091             -
    TakeWhile      1000     1604        0.01604             94%
    Any w/Closure  1000     1101        0.01101             33%
    Iteration      1000     825         0.00825             -
    TakeWhile      10,000   15765       0.15765             92%
    Any w/Closure  10,000   10802       0.10802             32%
    Iteration      10,000   8204        0.08204             -
    TakeWhile      100,000  156950      1.5695              92%
    Any w/Closure  100,000  108378      1.08378             33%
    Iteration      100,000  81635       0.81635             -

Much better!  It seems that the overhead of TakeWhile() returning each item and updating the state in the state machine is drastically reduced by using Any(), since Any() simply iterates forward until it finds the value we're looking for.

So the lesson there is: make sure that when you use a LINQ expression, you're choosing the best expression for the job, because if you're doing more work than you really need, you'll have a slower algorithm.  But this is true of any choice of algorithm or collection in general.

Even with the Any() with the count in the closure, it is still about 30% slower, but let's consider that angle carefully.  For a list of 100,000 items, it was the difference between roughly 1.08 ms and 0.82 ms per iteration in a List<T>.  That's really not bad at all in the grand scheme of things.  Even running 90% slower with TakeWhile(), for the vast majority of my projects an extra millisecond to save potential errors in the long term and improve maintainability is a small price to pay.  And if your typical list is 1000 items or less, we're talking only microseconds worth of difference.

It's like they say: 90% of your performance bottlenecks are in 2% of your code, so over-optimizing almost never pays off.  So personally, I'll take the LINQ expression wherever I can, because it will be easier to read and maintain (thus reducing technical debt) and I can rely on Microsoft to have coded and unit tested those algorithms fully for me, instead of relying on a developer to code the loop logic correctly.

If something's 90% slower, yes, it's worth keeping in mind, but it's really not until you start getting orders of magnitude slower (10x, 100x, 1000x) that alarm bells should really go off.  And if I ever do need that last millisecond of performance?  Well, then I'll optimize JUST THAT problem spot.  To me it's worth it for the readability, speed-to-market, and maintainability.
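(As an aside on the plea above for a better way: one hedged alternative, ours rather than the original post's, is to hide the counting loop behind a small extension method.  The name CountUntil is hypothetical; the point is that call sites keep LINQ-style readability while the body runs at plain-loop speed:)

    using System;
    using System.Collections.Generic;

    public static class EnumerableExtensions
    {
        // Returns the number of elements before the first occurrence of target
        // (i.e. its position), or the total element count if target is absent --
        // the same contract as list.TakeWhile(item => item != target).Count().
        public static int CountUntil<T>(this IEnumerable<T> source, T target)
            where T : IEquatable<T>
        {
            int count = 0;

            foreach (var item in source)
            {
                if (item.Equals(target)) { break; }
                ++count;
            }

            return count;
        }
    }

A call site then reads as simply as the LINQ form: int position = list.CountUntil(target);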

    Read the article

  • I see no LOBs!

    - by Paul White
Is it possible to see LOB (large object) logical reads from STATISTICS IO output on a table with no LOB columns? I was asked this question today by someone who had spent a good fraction of their afternoon trying to work out why this was occurring – even going so far as to re-run DBCC CHECKDB to see if any corruption had taken place.  The table in question wasn’t particularly pretty – it had grown somewhat organically over time, with new columns being added every so often as the need arose.  Nevertheless, it remained a simple structure with no LOB columns – no TEXT or IMAGE, no XML, no MAX types – nothing aside from ordinary INT, MONEY, VARCHAR, and DATETIME types.  To add to the air of mystery, not every query that ran against the table would report LOB logical reads – just sometimes – but when it did, the query often took much longer to execute.

Ok, enough of the preamble.  I can’t reproduce the exact structure here, but the following script creates a table that will serve to demonstrate the effect:

    IF OBJECT_ID(N'dbo.Test', N'U') IS NOT NULL
        DROP TABLE dbo.Test
    GO

    CREATE TABLE dbo.Test
    (
        row_id NUMERIC IDENTITY NOT NULL,
        col01  NVARCHAR(450) NOT NULL,
        col02  NVARCHAR(450) NOT NULL,
        col03  NVARCHAR(450) NOT NULL,
        col04  NVARCHAR(450) NOT NULL,
        col05  NVARCHAR(450) NOT NULL,
        col06  NVARCHAR(450) NOT NULL,
        col07  NVARCHAR(450) NOT NULL,
        col08  NVARCHAR(450) NOT NULL,
        col09  NVARCHAR(450) NOT NULL,
        col10  NVARCHAR(450) NOT NULL,
        CONSTRAINT [PK dbo.Test row_id]
            PRIMARY KEY CLUSTERED (row_id)
    );

The next script loads the ten variable-length character columns with one-character strings in the first row, two-character strings in the second row, and so on down to the 450th row:

    WITH Numbers AS
    (
        -- Generates numbers 1 - 450 inclusive
        SELECT TOP (450)
            n = ROW_NUMBER() OVER (ORDER BY (SELECT 0))
        FROM master.sys.columns C1,
             master.sys.columns C2,
             master.sys.columns C3
        ORDER BY n ASC
    )
    INSERT dbo.Test WITH (TABLOCKX)
    SELECT
        REPLICATE(N'A', N.n), REPLICATE(N'B', N.n),
        REPLICATE(N'C', N.n), REPLICATE(N'D', N.n),
        REPLICATE(N'E', N.n), REPLICATE(N'F', N.n),
        REPLICATE(N'G', N.n), REPLICATE(N'H', N.n),
        REPLICATE(N'I', N.n), REPLICATE(N'J', N.n)
    FROM Numbers AS N
    ORDER BY N.n ASC;

Once those two scripts have run, the table contains 450 rows, with ten variable-length columns whose values grow by one character per row.  Most of the time, when we query data from this table, we don’t see any LOB logical reads, for example:

    -- Find the maximum length of the data in
    -- column 5 for a range of rows
    SELECT result = MAX(DATALENGTH(T.col05))
    FROM dbo.Test AS T
    WHERE row_id BETWEEN 50 AND 100;

But with a different query…

    -- Read all the data in column 1
    SELECT result = MAX(DATALENGTH(T.col01))
    FROM dbo.Test AS T;

…suddenly we have 49 LOB logical reads, as well as the ‘normal’ logical reads we would expect.
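(To reproduce the observation yourself – a sketch assuming the dbo.Test table built by the scripts above – turn on I/O statistics and compare the messages output for the two queries; this repro framing is our addition, not part of the original article:)

    SET STATISTICS IO ON;

    -- typically reports lob logical reads 0:
    SELECT result = MAX(DATALENGTH(T.col05))
    FROM dbo.Test AS T
    WHERE row_id BETWEEN 50 AND 100;

    -- reports lob logical reads > 0 once col01 values have moved off-row:
    SELECT result = MAX(DATALENGTH(T.col01))
    FROM dbo.Test AS T;

    SET STATISTICS IO OFF;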
The Explanation

If we had tried to create this table in SQL Server 2000, we would have received a warning message to say that future INSERT or UPDATE operations on the table might fail if the resulting row exceeded the in-row storage limit of 8060 bytes.  If we needed to store more data than would fit in an 8060-byte row (including internal overhead), we had to use a LOB column – TEXT, NTEXT, or IMAGE.  These special data types store the large data values in a separate structure, with just a small pointer left in the original row.

Row Overflow

SQL Server 2005 introduced a feature called row overflow, which allows one or more variable-length columns in a row to move to off-row storage if the data in a particular row would otherwise exceed 8060 bytes.  You no longer receive a warning when creating (or altering) a table that might need more than 8060 bytes of in-row storage; if SQL Server finds that it can no longer fit a variable-length column in a particular row, it will silently move one or more of these columns off the row into a separate allocation unit.

Only variable-length columns can be moved in this way (for example the (N)VARCHAR, VARBINARY, and SQL_VARIANT types).  Fixed-length columns (like INTEGER and DATETIME, for example) never move into ‘row overflow’ storage.  The decision to move a column off-row is made on a row-by-row basis – so data in a particular column might be stored in-row for some table records, and off-row for others.  In general, if SQL Server finds that it needs to move a column into row-overflow storage, it moves the largest variable-length column value for that row.  Note that in the case of an UPDATE statement that results in the 8060-byte limit being exceeded, it might not be the column that grew that is moved!

Sneaky LOBs

Anyway, that’s all very interesting, but I don’t want to get too carried away with the intricacies of row-overflow storage internals.  The point is that it is now possible to define a table with non-LOB columns that will silently exceed the old row-size limit and result in ordinary variable-length columns being moved to off-row storage.  Adding new columns to a table, expanding an existing column definition, or simply storing more data in a column than you used to – all these things can result in one or more variable-length columns being moved off the row.

Note that row-overflow storage is logically quite different from old-style LOB and new-style MAX data type storage – individual variable-length columns are still limited to 8000 bytes each – you can just have more of them now.  Having said that, the physical mechanisms involved are very similar to full LOB storage – a column moved to row-overflow leaves a 24-byte pointer record in the row, and the ‘separate storage’ I have been talking about is structured very similarly to both old-style LOBs and new-style MAX types.  The disadvantages are also the same: when SQL Server needs a row-overflow column value, it has to follow the in-row pointer and navigate another chain of pages, just like retrieving a traditional LOB.

And Finally…

In the example script presented above, the rows with row_id values from 402 to 450 inclusive all exceed the total in-row storage limit of 8060 bytes.  A SELECT that references a column in one of those rows that has moved to off-row storage will incur one or more lob logical reads as the storage engine locates the data.  The results on your system might vary slightly depending on your settings, of course; but in my tests only column 1 in rows 402-450 moved off-row.  You might like to play around with the script – updating columns, changing data type lengths, and so on – to see the effect on lob logical reads and which columns get moved when.  You might even see row-overflow columns moving back in-row if they are updated to be smaller (hint: reduce the size of a column entry by at least 1000 bytes if you hope to see this).
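(A sketch of one way to see the off-row storage directly, using the standard catalog views; this query is our addition, not part of the original article:)

    -- any ROW_OVERFLOW_DATA pages confirm that ordinary variable-length
    -- columns have been pushed off the row
    SELECT
        au.type_desc,   -- IN_ROW_DATA or ROW_OVERFLOW_DATA
        au.total_pages,
        au.used_pages,
        au.data_pages
    FROM sys.partitions AS p
    JOIN sys.allocation_units AS au
        ON au.container_id = p.hobt_id  -- hobt_id maps in-row and row-overflow units
    WHERE p.[object_id] = OBJECT_ID(N'dbo.Test', N'U');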
Be aware that SQL Server will not warn you when it moves ‘ordinary’ variable-length columns into overflow storage, and it can have dramatic effects on performance.  It makes more sense than ever to choose column data types sensibly.  If you make every column a VARCHAR(8000) or NVARCHAR(4000), and someone stores data that results in a row needing more than 8060 bytes, SQL Server might turn some of your column data into pseudo-LOBs – all without saying a word.

Finally, some people make a distinction between ordinary LOBs (those that can hold up to 2GB of data) and the LOB-like structures created by row-overflow (where columns are still limited to 8000 bytes) by referring to row-overflow LOBs as SLOBs.  I find that quite appealing, but the ‘S’ stands for ‘small’, which makes expanding the whole acronym a little daft-sounding… small large objects, anyone?

© Paul White 2011
email: [email protected]
twitter: @SQL_Kiwi

    Read the article
