Search Results

Search found 1546 results on 62 pages for 'lazy sequences'.

Page 15 of 62

  • Lazy Registration: How to let a guest user start their workflow and prompt registration when they try to save

    - by Brandon Cordell
    I'm wondering what I would do to go about letting a guest use my web application without registering, then if they attempt to save their work they are prompted with a registration. This will be in a Rails application, by the way. Can I just allow public access to part of the workflow, then when they save, check whether they're a registered user (by session value, or cookie)? If they aren't a registered user, save all their work into the session and let them fill out a sign-up form. On successful registration, automatically log them in and initiate the create on the db?
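    That flow can work. A minimal sketch of the idea, assuming hypothetical names throughout (WorksController, Work, a current_user helper); not drop-in code:

        # app/controllers/works_controller.rb (all names are assumptions)
        class WorksController < ApplicationController
          def create
            if current_user
              current_user.works.create!(params[:work])  # registered: persist
              redirect_to works_path
            else
              session[:pending_work] = params[:work]     # guest: park the draft
              redirect_to new_user_path                  # ...and prompt sign-up
            end
          end
        end

        # after a successful registration, replay the parked draft:
        # Work.create!(session.delete(:pending_work).merge(:user_id => @user.id))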

    Read the article

  • Spooling in SQL execution plans

    - by Rob Farley
    Sewing has never been my thing. I barely even know the terminology, and when discussing this with American friends, I even found out that half the words that Americans use are different to the words that English and Australian people use. That said – let’s talk about spools! In particular, the Spool operators that you find in some SQL execution plans. This post is for T-SQL Tuesday, hosted this month by me!

    I’ve chosen to write about spools because they seem to get a bad rap (even in my song I used the line “There’s spooling from a CTE, they’ve got recursion needlessly”). I figured it was worth covering some of what spools are about, and hopefully explain why they are remarkably necessary, and generally very useful.

    If you have a look at the Books Online page about Plan Operators, at http://msdn.microsoft.com/en-us/library/ms191158.aspx, and do a search for the word ‘spool’, you’ll notice it says there are 46 matches. 46! Yeah, that’s what I thought too... Spooling is mentioned in several operators: Eager Spool, Lazy Spool, Index Spool (sometimes called a Nonclustered Index Spool), Row Count Spool, Spool, Table Spool, and Window Spool (oh, and Cache, which is a special kind of spool for a single row, but as it isn’t used in SQL 2012, I won’t describe it any further here). Spool, Table Spool, Index Spool, Window Spool and Row Count Spool are all physical operators, whereas Eager Spool and Lazy Spool are logical operators, describing the way that the other spools work. For example, you might see a Table Spool which is either Eager or Lazy. A Window Spool can actually act as both, as I’ll mention in a moment.

    In sewing, cotton is put onto a spool to make it more useful. You might buy it in bulk on a cone, but if you’re going to be using a sewing machine, then you quite probably want to have it on a spool or bobbin, which allows it to be used in a more effective way. This is the picture that I want you to think about in relation to your data. I’m sure you use spools every time you use your sewing machine. I know I do. I can’t think of a time when I’ve got out my sewing machine to do some sewing and haven’t used a spool. However, I often run SQL queries that don’t use spools. You see, the data that is consumed by my query is typically in a useful state without a spool. It’s like I can just sew with my cotton despite it not being on a spool!

    Many of my favourite features in T-SQL do like to use spools though. This looks like a very similar query to before, but includes an OVER clause to return a column telling me the number of rows in my data set. I’ll describe what’s going on in a few paragraphs’ time.
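    A hedged sketch of the kind of query just described (the query itself isn't included in this excerpt, so the AdventureWorks table names below are my assumption):

        SELECT p.ProductID,
               p.Name,
               COUNT(*) OVER () AS NumRows  -- same total repeated on every row
        FROM Production.Product AS p
        JOIN Production.ProductSubcategory AS s
            ON s.ProductSubcategoryID = p.ProductSubcategoryID;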
    So what does a Spool operator actually do? The spool operator consumes a set of data, and stores it in a temporary structure, in the tempdb database. This structure is typically either a Table (ie, a heap), or an Index (ie, a b-tree). If no data is actually needed from it, then it could also be a Row Count Spool, which only stores the number of rows that the spool operator consumes. A Window Spool is another option if the data being consumed is tightly linked to windows of data, such as when the ROWS/RANGE clause of the OVER clause is being used. You could maybe think about the type of spool being like whether the cotton is going onto a small bobbin to fit in the base of the sewing machine, or whether it’s a larger spool for the top.

    A Table or Index Spool is either Eager or Lazy in nature. Eager and Lazy are logical operators, which talk more about the behaviour, rather than the physical operation. If I’m sewing, I can either be all enthusiastic and get all my cotton onto the spool before I start, or I can do it as I need it. “Lazy” might not be the best word to describe a person – in the SQL world it describes the idea of either fetching all the rows to build up the whole spool when the operator is called (Eager), or populating the spool only as it’s needed (Lazy). Window Spools are both physical and logical. They’re eager on a per-window basis, but lazy between windows.

    And when is it needed? The way I see it, spools are needed for two reasons. 1 – When data is going to be needed AGAIN. 2 – When data needs to be kept away from the original source.

    If you’re someone that writes long stored procedures, you are probably quite aware of the second scenario. I see plenty of stored procedures being written this way – where the query writer populates a temporary table, so that they can make updates to it without risking the original table. SQL does this too. Imagine I’m updating my contact list, and some of my changes move data to later in the book. If I’m not careful, I might update the same row a second time (or even enter an infinite loop, updating it over and over). A spool can make sure that I don’t, by using a copy of the data. This problem is known as the Halloween Effect (not because it’s spooky, but because it was discovered in late October one year). As I’m sure you can imagine, the kind of spool you’d need to protect against the Halloween Effect would be eager, because if you’re only handling one row at a time, then you’re not providing the protection... An eager spool will block the flow of data, waiting until it has fetched all the data before serving it up to the operator that called it. In the query below I’m forcing the Query Optimizer to use an index which would be upset if the Name column values got changed, and we see that before any data is fetched, a spool is created to load the data into. This doesn’t stop the index being maintained, but it does mean that the index is protected from the changes that are being done.

    There are plenty of times, though, when you need data repeatedly. Consider the query I put above. A simple join, but then counting the number of rows that came through. The way that this has executed (be it ideal or not), is to ask that a Table Spool be populated. That’s the Table Spool operator on the top row. That spool can produce the same set of rows repeatedly. This is the behaviour that we see in the bottom half of the plan. In the bottom half of the plan, we see that a join is being done between the rows that are being sourced from the spool – one being aggregated and one not – producing the columns that we need for the query.

    Table v Index

    When considering whether to use a Table Spool or an Index Spool, the question that the Query Optimizer needs to answer is whether there is sufficient benefit to storing the data in a b-tree. The idea of having data in indexes is great, but of course there is a cost to maintaining them. Here we’re creating a temporary structure for data, and there is a cost associated with populating each row into its correct position according to a b-tree, as opposed to simply adding it to the end of the list of rows in a heap. Using a b-tree could even result in page-splits as the b-tree is populated, so there had better be a reason to use that kind of structure.
    That all depends on how the data is going to be used in other parts of the plan. If you’ve ever thought that you could use a temporary index for a particular query, well this is it – and the Query Optimizer can do that if it thinks it’s worthwhile. It’s worth noting that just because a Spool is populated using an Index Spool, it can still be fetched using a Table Spool. Whether a Spool used as a source shows as a Table Spool or an Index Spool is more about whether a Seek predicate is used than about the underlying structure.

    Recursive CTE

    I’ve already shown you an example of spooling when the OVER clause is used. You might see them being used whenever you have data that is needed multiple times, and CTEs are quite common here. With the definition of a set of data described in a CTE, if the query writer is leveraging this by referring to the CTE multiple times, and there’s no simplification to be leveraged, a spool could theoretically be used to avoid reapplying the CTE’s logic. Annoyingly, this doesn’t happen. Consider this query, which really looks like it’s using the same data twice. I’m creating a set of data (which is completely deterministic, by the way), and then joining it back to itself. There seems to be no reason why it shouldn’t use a spool for the set described by the CTE, but it doesn’t. On the other hand, if we don’t pull as many columns back, we might see a very different plan. You see, CTEs, like all sub-queries, are simplified out to figure out the best way of executing the whole query. My example is somewhat contrived, and although there are plenty of cases when it’s nice to give the Query Optimizer hints about how to execute queries, it usually doesn’t do a bad job, even without spooling (and you can always use a temporary table).

    When recursion is used, though, spooling should be expected. Consider what we’re asking for in a recursive CTE. We’re telling the system to construct a set of data using an initial query, and then use that set as a source for another query, piping this back into the same set and back around. It’s very much a spool. The analogy of cotton is long gone here, as the idea of having a continual loop of cotton feeding onto a spool and off again doesn’t quite fit, but that’s what we have here. Data is being fed onto the spool, and getting pulled out a second time when the spool is used as a source. (This query is running on AdventureWorks, which has a ManagerID column in HumanResources.Employee, not AdventureWorks2012.)

    The Index Spool operator is sucking rows into it – lazily. It has to be lazy, because at the start, there’s only one row to be had. However, as rows get populated onto the spool, the Table Spool operator on the right can return rows when asked, ending up with more rows (potentially) getting back onto the spool, ready for the next round. (The Assert operator is merely checking to see if we’ve reached the MAXRECURSION point – it vanishes if you use OPTION (MAXRECURSION 0), which you can try yourself if you like.)
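    The recursive query being described isn't included in the excerpt either; a hedged reconstruction, assuming the pre-2008 AdventureWorks schema the post mentions (a ManagerID column on HumanResources.Employee):

        WITH EmpCTE AS
        (
            -- anchor: at the start, only one row to be had
            SELECT EmployeeID, ManagerID, 0 AS Lvl
            FROM HumanResources.Employee
            WHERE ManagerID IS NULL

            UNION ALL

            -- recursive part: rows pulled back off the spool feed the next round
            SELECT e.EmployeeID, e.ManagerID, c.Lvl + 1
            FROM HumanResources.Employee AS e
            JOIN EmpCTE AS c ON e.ManagerID = c.EmployeeID
        )
        SELECT EmployeeID, ManagerID, Lvl
        FROM EmpCTE
        OPTION (MAXRECURSION 0);  -- also removes the Assert operator mentioned above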
    Spools are useful. Don’t lose sight of that. Every time you use temporary tables or table variables in a stored procedure, you’re essentially doing the same – don’t get upset at the Query Optimizer for doing so, even if you think the spool looks like an expensive part of the query.

    I hope you’re enjoying this T-SQL Tuesday. Why not head over to my post that is hosting it this month to read about some other plan operators? At some point I’ll write a summary post – once I have, you should find a comment below pointing at it. @rob_farley

    Read the article

  • Some unclear PHP "symbolics"

    - by serhio
    I am a PHP beginner and saw on a forum a PHP expression like this:

        $regex = <<<'END'
        /
          ( [\x00-\x7F]                # single-byte sequences   0xxxxxxx
          | [\xC0-\xDF][\x80-\xBF]     # double-byte sequences   110xxxxx 10xxxxxx
          | [\xE0-\xEF][\x80-\xBF]{2}  # triple-byte sequences   1110xxxx 10xxxxxx * 2
          | [\xF0-\xF7][\x80-\xBF]{3}  # quadruple-byte sequence 11110xxx 10xxxxxx * 3
          )
        | ( [\x80-\xBF] )              # invalid byte in range 10000000 - 10111111
        | ( [\xC0-\xFF] )              # invalid byte in range 11000000 - 11111111
        /x
        END;

    Is this code correct? And what do the constructions that are strange (for me) mean, like <<<'END', the / and /x, and END;? Thanks
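    For reference, a small sketch of my own (not from the original thread) showing what those pieces do:

        <?php
        // <<<'END' starts a nowdoc (PHP 5.3+): like a single-quoted string,
        // no variable interpolation and no escape processing, running until
        // the line reading END;  The / ... /x around the pattern are the
        // regex delimiters plus the PCRE "extended" modifier, which ignores
        // unescaped whitespace in the pattern and treats # as a comment.
        $s = <<<'END'
        $notExpanded \n stays exactly as written
        END;
        echo $s;  // prints: $notExpanded \n stays exactly as written

        // the x modifier at work: whitespace and # comments vanish,
        // so this pattern is simply "abc"
        var_dump(preg_match('/a b c  # just abc
                             /x', 'abc'));  // int(1)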

    Read the article

  • Double-Escaped Unicode Javascript Issue

    - by Jeffrey Winter
    I am having a problem displaying a JavaScript string with embedded Unicode character escape sequences (\uXXXX) where the initial "\" character is itself escaped as "&#92;". What do I need to do to transform the string so that it properly evaluates the escape sequences and produces output with the correct Unicode characters? For example, I am dealing with input such as: "this is a &#92;u201ctest&#92;u201d". Attempting to decode the "&#92;" using a regex, e.g.:

        var out = text.replace(/&#92;/g, '\\');

    results in the output text: "this is a \u201ctest\u201d"; that is, the Unicode escape sequences are displayed as actual escape sequences, not the double quote characters I would like.
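    A hedged sketch of one way through (the function name is mine): restore the backslashes, then decode each \uXXXX with String.fromCharCode:

        // "&#92;" -> "\", then "\uXXXX" -> the actual character
        function decodeEscaped(text) {
            return text
                .replace(/&#92;/g, '\\')
                .replace(/\\u([0-9a-fA-F]{4})/g, function (m, hex) {
                    return String.fromCharCode(parseInt(hex, 16));
                });
        }

        decodeEscaped('this is a &#92;u201ctest&#92;u201d');
        // -> 'this is a “test”'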

    Read the article

  • Emacs: print key binding for a command or list all key bindings

    - by Yktula
    In Emacs (GNU Emacs 23.2, *nix), how can I:

    1. List the key sequences bound to a particular command? For example, how can we list all the key sequences that execute save-buffers-kill-emacs? Assuming we can do this, listing the key sequences bound to goto-line should print: M-g g on a default install.

    2. List all key bindings? Does C-h b do this? Would it print my own bindings?

    I am aware that executing a command directly can print a key sequence it can be activated with, but it doesn't always do so, and a few things happen, including: (1) the output doesn't remain for long, (2) the command is executed. I want a command that lists for me (preferably all) the bindings attached to a given command, without executing the command, or something like that.
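    For reference, a sketch of the built-in help commands that cover both asks (standard bindings; not specifically verified against 23.2):

        C-h w save-buffers-kill-emacs RET   ; where-is: shows the keys running a
                                            ;   command, without executing it
        C-h b                               ; describe-bindings: every active
                                            ;   binding, including your own
        ;; the same lookup from Emacs Lisp:
        (where-is 'save-buffers-kill-emacs) ; echoes e.g. "C-x C-c"

    C-h w prints to the echo area and stays out of the way, and the *Help* buffer produced by C-h b is persistent and searchable.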

    Read the article

  • Utility Queries–Structure of Tables with Identity Column

    - by drsql
    I have been doing a presentation on sequences of late (the last planned version of that presentation was last week, but you should be able to get the gist of things from the slides and the code posted here on my presentation page), and as part of that, I started writing some queries to interrogate the structure of tables. I started with tables using an identity column for some purpose because they are considerably easier to do than sequences, specifically because the limitations of identity columns make...(read more)

    Read the article

  • How to crawl a web page with dynamic content added by JavaScript

    - by blunderboy
    I hear there is news that Google bots have the capability to understand our JavaScript code. That means it is possible to fully crawl a web page which has a lazy loading feature enabled. I am using Apache Nutch to crawl websites, but I don't think it has the capability to fetch the URLs being injected into the HTML page by JavaScript when the page is scrolled down. I see a lot of websites doing lazy loading for performance reasons. So can somebody please explain how I can crawl the data which arrives in the HTML page on lazy load (on scrolling the page down)?

    Read the article

  • Are there any surveys on to what degree developers like or hate scrum?

    - by dparnas
    Background: During a conference an analyst pointed out in a tweet that developers hate scrum. Myself and another person responded that this was not the case, and we started discussing different scenarios for why developers would dislike scrum. One of the scenarios was that lazy developers are not able to hide in a scrum project; they are constantly challenged by the team to contribute. This discussion resulted in a blog post and video: http://elsewhat.com/2010/05/20/lazy-developers-hate-agile-and%C2%A0scrum/ I've gotten three comments which I've tried to answer in a neutral way, but the comments do point out that there are some people who loathe scrum (and I am always 100% certain they are not lazy developers). Question: Has there ever been a survey among developers on to what degree they like or hate scrum?

    Read the article

  • How would one run a task sequence within a task sequence in SCCM 2012 SP1

    - by BigHomie
    A Shining Example: Inside all of my task sequences I have a group that installs driver packages conditionally based on computer model, and of course this list does nothing but grow. The fact that it grows isn't a big deal; what is a big deal is that every time it changes I have to manually copy and paste those changes across every task sequence I have, which of course leaves huge room for human error. The same goes for other groups of tasks that are common across task sequences. I'm looking for a solution where I could centrally manage these tasks, be it linking other task sequences to a group within another task sequence, or creating a separate task sequence and linking to that. I came across a solution by John Marcum (SCCM MVP) that mentioned this ability, but that was a while ago and I can't find the link to it anymore to see if it's even still being updated/maintained. I'm looking for more of a free solution, though even using PowerShell or the ConfigMgr SDK is fine with me; I'm no stranger to either. Update: getting close: http://msdn.microsoft.com/en-us/library/jj217869.aspx

    Read the article

  • Types of quotes for an HTML templating language

    - by Ralph
    I'm developing a templating language, and now I'm trying to decide what I should do with quotes. I'm thinking about having 3 different types of quotes which are all handled differently:

                            backtick `   double quote "   single quote '
        expand variables    ?            yes              no
        escape sequences    no           yes              ?
        escape html         no           yes              yes

    Backticks
    Backticks are meant to be used for outputting JavaScript or unescaped HTML. It's often handy to be able to pass variables into JS, but it could also cause issues with things being treated as variables that shouldn't be. My variables are PHP-style ($var), so I'm thinking that might mess with jQuery pretty badly... but if I disable variable expansion with backticks, then I'm not sure how you would insert a variable into a JS code block?

    Single Quotes
    Not sure if escape sequences like \n should be treated as literals or converted. I find it pretty rare that I want to disable escape sequences, but if you do, you could use backticks. So I'm leaning towards "yes" for this one, but that would be contrary to how PHP does it.

    Double Quotes
    Pretty certain I want everything enabled for this one.

    Modifiers
    I'm also thinking about adding modifiers like @ or r in front of the string that would change some of these options to enable a few more combinations. I would need 9 different quotes, or 3 quotes and 2 modifiers, to get every combination, wouldn't I? My language also supports "filters" which can be applied against any "term" (number, variable, string), so you could always write something like:

        "blah blah $var blah"|expandvars

    or

        "my string"|escapehtml

    Thoughts? What would you prefer? What would be least confusing/most intuitive?

    Read the article

  • How to compute a unicode string whose bidirectional representation is specified?

    - by valdo
    Hello, fellows. I have a rather perverse question. Please forgive me :) There's an official algorithm that describes how bidirectional unicode text should be presented: http://www.unicode.org/reports/tr9/tr9-15.html

    I receive a string (from some 3rd-party source) which contains latin/hebrew characters, as well as digits, white-spaces, punctuation symbols, etc. The problem is that the string that I receive is already in the representation form. I.e. the sequence of characters that I receive should just be presented from left to right. Now, my goal is to find the unicode string whose representation is exactly the same. Meaning: I need to pass that string to another entity; it would then render this string according to the official algorithm, and the result should be the same.

    Assume the following:

    - The default text direction (of the rendering entity) is RTL.
    - I don't want to inject "special unicode characters" that explicitly override the text direction (such as RLO, RLE, etc.).

    I suspect there may exist several solutions. If so, I'd like to preserve the RTL look of the string as much as possible. The string usually consists mostly of hebrew words. I'd like to preserve the correct order of those words, and of the characters inside those words, whereas other character sequences may (and should) be transposed.

    One naive way to solve this is just to reverse the whole string (this takes care of the hebrew words), and then reverse the sequences of non-hebrew characters inside it. This however doesn't always produce correct results, because the actual rules of representation are rather complex.

    The only comprehensive algorithm that I see so far is a brute-force check. The string can be divided into sequences of same-class characters. Those sequences may be joined in random order, plus any of them may be reversed. I can check all those combinations to obtain the correct result. Plus this technique may be optimized: for instance the order of hebrew words is known, so we only have to check different combinations of their "joining" sequences.

    Any better ideas? If you have an idea, not necessarily the whole solution, that's ok. I'll appreciate any idea. Thanks in advance.

    Read the article

  • In Objective C, what's the best way to extract multiple substrings of text around multiple patterns?

    - by Matt
    For one NSString, I have N pattern strings. I'd like to extract substrings "around" the pattern matches. So, if I have "the quick brown fox jumped over the lazy dog" and my patterns are "brown" and "lazy", I would like to get "quick brown fox" and "the lazy dog". However, the substrings don't necessarily need to be delimited by whitespace. I have a hunch that there's a very easy solution to this, but I admit a disturbing lack of knowledge of Objective-C string functions.
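    A hedged sketch with NSRegularExpression, grabbing one word of context on each side (which matches the examples; how much surrounding text to keep is the open design question):

        #import <Foundation/Foundation.h>

        int main(void) {
            @autoreleasepool {
                NSString *text = @"the quick brown fox jumped over the lazy dog";
                for (NSString *word in @[@"brown", @"lazy"]) {
                    // \S+\s+<word>\s+\S+ : the match plus one word either side;
                    // use +escapedPatternForString: if a pattern may contain
                    // regex metacharacters
                    NSString *pat = [NSString stringWithFormat:@"\\S+\\s+%@\\s+\\S+", word];
                    NSRegularExpression *re =
                        [NSRegularExpression regularExpressionWithPattern:pat
                                                                  options:0
                                                                    error:NULL];
                    NSTextCheckingResult *m =
                        [re firstMatchInString:text
                                       options:0
                                         range:NSMakeRange(0, text.length)];
                    if (m) NSLog(@"%@", [text substringWithRange:m.range]);
                    // -> "quick brown fox", then "the lazy dog"
                }
            }
            return 0;
        }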

    Read the article

  • How to make every word in a text clickable and send it to a script

    - by fakson
    I have a text, for example: "The quick brown fox jumps over the lazy dog. The quick brown fox jumps over the lazy dog. The quick brown fox jumps over the lazy dog. The quick brown fox jumps over the lazy dog." When I click on a word I must get data from XML or from MySQL about this word. How can I make every word active for clicking and send it to another script? For example: I click on "dog", and in a new window I get information about dog; on "fox", about fox. Every word must be clickable. Any ideas, links or examples? Using PHP, MySQL, jQuery, AJAX.
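    A minimal jQuery sketch (the selector and endpoint are assumptions; .on() needs jQuery 1.7+, older versions would use .delegate()):

        // wrap every word of the paragraph in a clickable <span>
        var $p = $('#text');
        $p.html($p.text().replace(/(\w+)/g, '<span class="w">$1</span>'));

        // one delegated handler; word.php is a hypothetical lookup script
        $p.on('click', 'span.w', function () {
            window.open('word.php?q=' + encodeURIComponent($(this).text()));
        });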

    Read the article

  • Deferred execution and eager evaluation

    - by babu M
    Hi, could you please give me an example of deferred execution with eager evaluation in C#? I read on MSDN that deferred execution in LINQ can be implemented with either lazy or eager evaluation. I could find examples on the internet for deferred execution with lazy evaluation; however, I could not find any example of deferred execution with eager evaluation. Please help me, it's urgent. Moreover, how does deferred execution differ from lazy evaluation? From my point of view, both look the same. Could you please provide an example for this too?
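    A hedged illustration (my own example, not from MSDN): both queries below are deferred, but OrderBy is "eager" in the sense that the first step of enumeration consumes and sorts the entire source, whereas Select yields one element at a time:

        using System;
        using System.Linq;

        class Demo
        {
            static void Main()
            {
                int[] numbers = { 3, 1, 2 };

                // deferred + lazy: elements are produced one at a time, on demand
                var doubled = numbers.Select(n => n * 2);

                // deferred + eager: nothing has executed yet here either...
                var sorted = numbers.OrderBy(n => n);

                // ...but its first MoveNext() reads and sorts the WHOLE source
                foreach (var n in sorted) Console.WriteLine(n);
            }
        }

    Deferred execution is about when the query starts running (at enumeration, not at definition); lazy versus eager evaluation is about how much of the source is consumed once it does.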

    Read the article

  • How to compare 2 similar strings letter by letter and highlight the differences?

    - by PowerUser
    We have 2 databases that should have matching tables. I have an (in-production) report that compares these fields and displays them to the user in an MS Access form (continuous-form style) for correction. This is all well and good, except it can be difficult to find the differences. How can I format these fields to bold/italicize/color the differences?

        "The lazy dog jumped over a brown fox."
        "The lazy dog jumped over the brown fox."

    (It's easier to see the differences between 2 similar text fields once they are highlighted in some way.)

        "The lazy dog jumped over a brown fox."
        "The lazy dog jumped over the brown fox. "

    Since we're talking about a form in MS Access, I don't have high hopes. But I know I'm not the first person to have this problem. Suggestions?
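    One building block for this, as a minimal VBA sketch (the function name is mine): find the position where the two strings first diverge, which the form could then use (e.g. with Access 2007+ rich-text fields) to wrap the differing tail in highlighting tags:

        ' Returns the 1-based position of the first differing character,
        ' or 0 if the two strings are identical.
        Public Function FirstDiff(s1 As String, s2 As String) As Long
            Dim i As Long
            Dim minLen As Long
            minLen = IIf(Len(s1) < Len(s2), Len(s1), Len(s2))
            For i = 1 To minLen
                If Mid$(s1, i, 1) <> Mid$(s2, i, 1) Then
                    FirstDiff = i
                    Exit Function
                End If
            Next i
            ' same prefix: they differ only if one string is longer
            If Len(s1) <> Len(s2) Then FirstDiff = minLen + 1 Else FirstDiff = 0
        End Function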

    Read the article

  • Linux tail only whole words

    - by Andrew
    Hello, I need to print the last 20 characters of a string, but only whole words. The delimiter is a space " ". Let's consider this example:

        string="The quick brown fox jumps over the lazy dog"
        echo $string | tail -c20

    returns "s over the lazy dog", and I need it to return "over the lazy dog" instead. Do you know how to accomplish that? Thanks!
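    One hedged approach (byte-based, so multi-byte characters would need more care): take the last 20 bytes as before, then strip any leading partial word with sed:

        string="The quick brown fox jumps over the lazy dog"
        echo "$string" | tail -c20 | sed 's/^[^ ]* //'
        # -> over the lazy dog

    Caveat: if the 20-byte cut happens to land exactly after a space, sed would also drop the first (whole) word; testing for a leading space before stripping avoids that edge case.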

    Read the article

  • How to improve performance of non-scalar aggregations on denormalized tables

    - by The Lazy DBA
    Suppose we have a denormalized table with about 80 columns, which grows at the rate of ~10 million rows (about 5GB) per month. We currently have 3 1/2 years of data (~400M rows, ~200GB). We create a clustered index, to best suit retrieving data from the table, on the following columns that serve as our primary key:

        [FileDate] ASC, [Region] ASC, [KeyValue1] ASC, [KeyValue2] ASC

    ...because when we query the table, we always have the entire primary key. So these queries always result in clustered index seeks and are therefore very fast, and fragmentation is kept to a minimum. However, we do have a situation where we want to get the most recent FileDate for every Region, typically for reports, i.e.

        SELECT [Region], MAX([FileDate]) AS [FileDate]
        FROM HugeTable
        GROUP BY [Region]

    The "best" solution I can come up with for this is to create a non-clustered index on Region. Although it means an additional insert on the table during loads, the hit is minimal (we load 4 times per day, so fewer than 100,000 additional index inserts per load). Since the table is also partitioned by FileDate, results to our query come back quickly enough (200ms or so), and that result set is cached until the next load. However, I'm guessing that someone with more data warehousing experience might have a solution that's more optimal, as this, for some reason, doesn't "feel right".
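    The non-clustered index described might look like this sketch (the index name is mine, and adding [FileDate] to the key is my addition, so the MAX can be answered from the index alone):

        CREATE NONCLUSTERED INDEX IX_HugeTable_Region
        ON HugeTable ([Region] ASC, [FileDate] ASC);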

    Read the article

  • How to index a table with a Type 2 slowly changing dimension for optimal performance

    - by The Lazy DBA
    Suppose you have a table with a Type 2 slowly-changing dimension. Let's express this table as follows, with the following columns:

        * [Key]
        * [Value1]
        * ...
        * [ValueN]
        * [StartDate]
        * [ExpiryDate]

    In this example, let's suppose that [StartDate] is effectively the date on which the values for a given [Key] become known to the system. So our primary key would be composed of both [StartDate] and [Key]. When a new set of values arrives for a given [Key], we assign [ExpiryDate] to some pre-defined high surrogate value such as '12/31/9999'. We then set the existing "most recent" records for that [Key] to have an [ExpiryDate] that is equal to the [StartDate] of the new value. A simple update based on a join.

    So if we always wanted to get the most recent records for a given [Key], we know we could create a clustered index that is:

        * [ExpiryDate] ASC
        * [Key] ASC

    Although the keyspace may be very wide (say, a million keys), we can minimize the number of pages between reads by initially ordering them by [ExpiryDate]. And since we know the most recent record for a given key will always have an [ExpiryDate] of '12/31/9999', we can use that to our advantage.

    However... what if we want to get a point-in-time snapshot of all [Key]s at a given time? Theoretically, the entirety of the keyspace isn't all being updated at the same time. Therefore for a given point-in-time, the window between [StartDate] and [ExpiryDate] is variable, so ordering by either [StartDate] or [ExpiryDate] would never yield a result in which all the records you're looking for are contiguous. Granted, you can immediately throw out all records in which the [StartDate] is greater than your defined point-in-time.

    In essence, in a typical RDBMS, what indexing strategy affords the best way to minimize the number of reads to retrieve the values for all keys for a given point-in-time? I realize I can at least maximize IO by partitioning the table by [Key], however this certainly isn't ideal. Alternatively, is there a different type of slowly-changing dimension that solves this problem in a more performant manner?
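    For concreteness, a sketch of the point-in-time snapshot query being discussed (the table name is a placeholder):

        DECLARE @PointInTime datetime = '20100101';

        SELECT [Key], [Value1] -- ... [ValueN]
        FROM SlowlyChangingTable
        WHERE [StartDate] <= @PointInTime
          AND [ExpiryDate] >  @PointInTime;  -- current rows carry '12/31/9999'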

    Read the article

  • sybase - values from one table that aren't on another, on opposite ends of a 3-table join

    - by Lazy Bob
    Hypothetical situation: I work for a custom sign-making company, and some of our clients have submitted more sign designs than they're currently using. I want to know what signs have never been used. 3 tables involved:

    table A - signs for a company

        sign_pk (unique) | company_pk | sign_description
        1                | 1          | small
        2                | 1          | large
        3                | 2          | medium
        4                | 2          | jumbo
        5                | 3          | banner

    table B - company locations

        company_pk | company_location (unique)
        1          | 987
        1          | 876
        2          | 456
        2          | 123

    table C - signs at locations (it's a bit of a stretch, but each row can have 2 signs, and it's a one-to-many relationship from company location to signs at locations)

        company_location | front_sign | back_sign
        987              | 1          | 2
        987              | 2          | 1
        876              | 2          | 1
        456              | 3          | 4
        123              | 4          | 3

    So, a.company_pk = b.company_pk and b.company_location = c.company_location. What I want to try and find is how to query and get back that sign_pk 5 isn't at any location. Querying each sign_pk against all of the front_sign and back_sign values is a little impractical, since all the tables have millions of rows. Table A is indexed on sign_pk and company_pk, table B on both fields, and table C only on company locations. The way I'm trying to write it is along the lines of "each sign belongs to a company, so find the signs that are not the front or back sign at any of the locations that belong to the company tied to that sign." My original plan was:

        Select a.sign_pk
        from a, b, c
        where a.company_pk = b.company_pk
          and b.company_location = c.company_location
          and a.sign_pk *= c.front_sign
        group by a.sign_pk
        having count(c.front_sign) = 0

    just to do the front sign, and then repeat for the back, but that won't run because c is an inner member of an outer join, and also in an inner join. This whole thing is fairly convoluted, but if anyone can make sense of it, I'll be your best friend.
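    A hedged sketch (not from the original thread) of one way around that restriction: ANSI join syntax lets the outer join to c carry both predicates and check front and back signs in one pass, and LEFT JOINing b as well keeps signs whose company has no locations at all (like sign_pk 5) in the result:

        SELECT a.sign_pk
        FROM a
        LEFT JOIN b
          ON a.company_pk = b.company_pk
        LEFT JOIN c
          ON  b.company_location = c.company_location
          AND a.sign_pk IN (c.front_sign, c.back_sign)
        GROUP BY a.sign_pk
        HAVING COUNT(c.company_location) = 0  -- never matched as front OR back

    Recent Sybase versions accept ANSI joins; whether the optimizer handles this well at millions of rows is worth testing.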

    Read the article

  • Making a jQuery selection in IE on html added via .load()

    - by Joel Crawford-Smith
    Scenario: I am using jQuery to lazy load some HTML and change the relative href attributes of all the anchors to absolute links. The loading function adds the HTML in all browsers. The URL rewrite function works on the original DOM in all browsers. But in IE7/IE8 I can't run that same function on the new lazy-loaded HTML in the DOM.

        //lazy load a part of a file
        $(document).ready(function() {
            $('#tab1-cont').load('/web_Content.htm #tab1-cont');
            return false;
        });

        //convert relative links to absolute links
        $("#tab1-cont a[href^=/]").each(function() {
            var hrefValue = $(this).attr("href");
            $(this)
                .attr("href", "http://www.web.org" + hrefValue)
                .css('border', 'solid 1px green');
            return false;
        });

    I think my question is: what's the trick to getting IE to make selections on DOM that is lazy loaded with jQuery? This is my first post. Be gentle :-) Thanks, Joel
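    A hedged observation rather than an IE-specific trick: .load() is asynchronous, so the rewrite loop runs before the new HTML exists (the return false inside .each() would also stop after the first anchor, so it's dropped here). Moving the rewrite into load()'s complete callback makes it run against the loaded content; quoting the attribute selector is also safer across jQuery versions:

        $(document).ready(function () {
            $('#tab1-cont').load('/web_Content.htm #tab1-cont', function () {
                // runs only after the fragment has been inserted
                $('#tab1-cont a[href^="/"]').each(function () {
                    var hrefValue = $(this).attr('href');
                    $(this)
                        .attr('href', 'http://www.web.org' + hrefValue)
                        .css('border', 'solid 1px green');
                });
            });
        });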

    Read the article

  • Using locks inside a loop

    - by Xaqron
        // Member variable
        private readonly object _syncLock = new object();

        // Now inside a static method
        foreach (var lazyObject in plugins)
        {
            if ((string)lazyObject.Metadata["key"] == "something")
            {
                lock (_syncLock)
                {
                    if (!lazyObject.IsValueCreated)
                        lazyObject.Value.DoSomething();
                }
                return lazyObject.Value;
            }
        }

    Here I need synchronized access per loop. There are many threads iterating this loop, and based on the key they are looking for, a lazy instance is created and returned. lazyObject should not be created more than one time. Although the Lazy class is for doing exactly that, and despite the lock used, under high threading I get more than one instance created (I track this with an Interlocked.Increment on a volatile shared int and log it somewhere). The problem is I don't have access to the definition of Lazy, and MEF defines how the Lazy class creates objects. My questions: 1) Why doesn't the lock work? 2) Should I use an array of locks instead of one lock for performance improvement?
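    A hedged note on question 1: as posted, _syncLock is an instance field referenced from a static method, which wouldn't compile; if the real code keeps one lock per instance, different threads can lock different objects and the critical section isn't exclusive. A static lock fixes that, and a Lazy<T> built with ExecutionAndPublication makes the external lock unnecessary (sketch with hypothetical names; MEF constructs its own Lazy instances, so if you can't influence that, the static lock is the practical route):

        using System;
        using System.Threading;

        class PluginHost
        {
            // shared by all threads AND all instances; a per-instance lock
            // would let two threads enter the critical section at once
            private static readonly object SyncLock = new object();

            // or let Lazy<T> synchronize itself: with ExecutionAndPublication
            // the value factory is guaranteed to run on exactly one thread
            private readonly Lazy<string> _plugin =
                new Lazy<string>(() => "created once",
                                 LazyThreadSafetyMode.ExecutionAndPublication);

            public string GetPlugin()
            {
                return _plugin.Value;  // thread-safe without an external lock
            }
        }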

    Read the article

  • How To Safely Eject Your USB Devices From the Desktop Context Menu

    - by Taylor Gibb
    If you are one of those people who don't safely remove their USB devices just because you're lazy, here's a neat trick to do it from the context menu on your desktop. Even if you are not lazy and just forget, the icon will serve as a mental reminder. So let's take a look.

    Read the article

  • NHibernate Pitfalls Index

    - by Ricardo Peres
    These are the posts on NHibernate pitfalls I've written so far. This post will be updated whenever there are more.

    - The SaveOrUpdate Event
    - Collection Restrictions
    - Specifying Event Listeners in XML Configuration
    - Many to Many and Inverse
    - Bags and Join
    - Lazy Properties in Non-Lazy Entities
    - Adding to a Bag Causes Loading
    - Flushing Changes
    - Private Setter on Id Property

    Read the article

  • NHibernate Session Load vs Get when using Table per Hierarchy. Always use ISession.Get<T> for TPH to work.

    - by Rohit Gupta
    Originally posted on: http://geekswithblogs.net/rgupta/archive/2014/06/01/nhibernate-session-load-vs-get-when-using-table-per-hierarchy.aspx

    NHibernate ISession has two methods on it: Load and Get. Load allows the entity to be loaded lazily, meaning the actual call to the database is made only when properties on the entity being loaded are first accessed. Additionally, if the entity has already been loaded into the NHibernate cache, then the entity is loaded directly from the cache instead of querying the underlying database. ISession.Get<T> instead makes the call to the database every time it is invoked. With this background, it is obvious that we would prefer ISession.Load<T> over ISession.Get<T> most of the time, for performance reasons, to avoid making the expensive call to the database.

    Let us consider the impact of using ISession.Load<T> when we are using the Table per Hierarchy implementation of NHibernate. Thus we have a base class/table Animal, and there is a derived class named Snake with the discriminator column being Type, which in this case is "Snake". If we load this Snake entity using the repository for Animal, we would have an entity loaded, as shown below:

        public T GetByKey(object key, bool lazy = false)
        {
            if (lazy)
                return CurrentSession.Load<T>(key);
            return CurrentSession.Get<T>(key);
        }

        var tRepo = new NHibernateReadWriteRepository<TPHAnimal>();
        var animal = tRepo.GetByKey(new Guid("602DAB56-D1BD-4ECC-B4BB-1C14BF87F47B"), true);
        var snake = animal as Snake;
        // snake is null

    As you can see, the animal entity retrieved from the database cannot be cast to Snake even though the entity is actually a snake. The reason being that ISession.Load prevents the entity from being cast to Snake, and will throw the following exception:

        System.InvalidCastException: Unable to cast object of type 'TPHAnimalProxy' to type 'NHibernateChecker.Model.Snake'.

    Thus we can see that if we lazy load the entity using ISession.Load<TPHAnimal>, then we get a TPHAnimalProxy and not a snake.

    However, if we do not lazy load, the same cast works perfectly fine, since we are loading the entity from the database and the entity being loaded is not a proxy. Thus the following code does not throw any exceptions; in fact the snake variable is not null:

        var tRepo = new NHibernateReadWriteRepository<TPHAnimal>();
        var animal = tRepo.GetByKey(new Guid("602DAB56-D1BD-4ECC-B4BB-1C14BF87F47B"), false);
        var snake = animal as Snake;
        if (snake == null)
        {
            var snake22 = (Snake) animal;
        }

    Read the article
