Search Results

Search found 1032 results on 42 pages for 'repeated'.

Page 30/42

  • Return and Save XML Object From Sharepoint List Web Service

    - by HurnsMobile
    I am trying to populate a variable with an XML response from an ajax call on page load so that on keyup I can filter through that list without making repeated get requests (think very rudimentary autocomplete). The trouble that I am having seems to be potentially related to variable scoping, but I am fairly new to js/jQuery so I am not quite certain.

    The following code doesn't do anything on keyup. Adding alerts to it tells me that it is executing leadResults() on keyup and that the variable is returning an XML response object, but it appears to be empty. The strange bit is that if I move the leadResults() call into the getResults() function, the UL is populated with the results correctly. I'm beating my head against the wall on this one, please help!

        var resultsXml;

        $(document).ready( function() {
            var leadLookupCaml = "<Query> \
                <Where> \
                    <Eq> \
                        <FieldRef Name=\"Lead_x0020_Status\"/> \
                        <Value Type=\"Text\">Active</Value> \
                    </Eq> \
                </Where> \
            </Query>"

            $().SPServices({
                operation: "GetListItems",
                webURL: "http://sharepoint/departments/sales",
                listName: "Leads",
                CAMLQuery: leadLookupCaml,
                CAMLRowLimit: 0,
                completefunc: getResults
            });
        })

        $("#lead_search").keyup( function(e) {
            leadResults();
        })

        function getResults(xData, status) {
            resultsXml = xData;
        }

        function leadResults() {
            xData = resultsXml;
            $("#lead_results li").remove();
            $(xData.responseXML).find("z\\:row").each(function() {
                var selectHtml = "<li>" +
                    "<a href=\"http://sharepoint/departments/sales/Lists/Lead%20Groups/DispForm.aspx?ID=" +
                    $(this).attr("ows_ID") + ">" +
                    $(this).attr("ows_Title") + " : " +
                    $(this).attr("ows_Phone") + "</a></li>";
                $("#lead_results").append(selectHtml);
            });
        }
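
    One way to avoid the timing and scoping trouble entirely is to treat this as a plain "fetch once, then filter in memory on keyup" problem, and to wire up the keyup handler only after the data has arrived. A minimal TypeScript/DOM sketch of that pattern follows - the /api/leads endpoint is a made-up stand-in for the SPServices call, not a real SharePoint URL:

        // Cache the list once, filter it on every keystroke.
        let leads: { id: string; title: string; phone: string }[] = [];

        async function loadLeads(): Promise<void> {
          const res = await fetch("/api/leads?status=Active");   // hypothetical endpoint
          leads = await res.json();
        }

        function renderMatches(term: string): void {
          const list = document.getElementById("lead_results")!;
          list.innerHTML = "";
          for (const lead of leads.filter(l => l.title.toLowerCase().includes(term.toLowerCase()))) {
            const li = document.createElement("li");
            li.textContent = lead.title + " : " + lead.phone;
            list.appendChild(li);
          }
        }

        // Bind keyup only after the cache is populated, so filtering never runs against an empty result.
        loadLeads().then(() => {
          const box = document.getElementById("lead_search") as HTMLInputElement;
          box.addEventListener("keyup", () => renderMatches(box.value));
        });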


  • How do I represent concurrent actions in jBPM, any of which can end a process?

    - by tpdi
    An example: a permit must be examined by two lawyers and one engineer. If any of those three reject it, the process enters a "rejected" end state. If all three grant the permit, it enters a "granted" end state. All three examiners may examine simultaneously, or in any order. Once one engineer has granted it, it shouldn't be available to be examined by an engineer; once two lawyers have examined it, it shouldn't be available to lawyers; once one engineer and two lawyers have examined it, it should go to the granted end state.

    My initial thinking is that either I have an overly complicated state transition diagram, with "the same" intermediate states multiply repeated, or I carry (external) state with the process:

        { bool rejected; int engineerSignoffId; int lawyer1SignoffId; int lawyer2SignoffId }

    Or something like this? If so, how does the engineer's rejection terminate the subprocess that is in "Lawyers"?

        START -> FORK -> Engineer -> Granted? ----------------> Y -> JOIN --> Granted
                    |
                    |--> Lawyers --> Granted? -> by 2 lawyers? -> Y ---^
                              |
                              |-------------------------- N

    What's the canonical jBPM answer to this? Can you point me to examples or documentation of such answers? Thanks.


  • Generate a set of strings with maximum edit distance

    - by Kevin Jacobs
    Problem 1: I'd like to generate a set of n strings of fixed length m from alphabet s such that the minimum Levenshtein distance (edit distance) between any two strings is greater than some constant c. Obviously, I can use randomization methods (e.g., a genetic algorithm), but was hoping that this may be a well-studied problem in computer science or mathematics with some informative literature and an efficient algorithm or three.

    Problem 2: Same as above except that adjacent characters cannot repeat; the i'th character in each string may not be equal to the i+1'th character. E.g., 'CAT', 'AGA' and 'TAG' are allowed, 'GAA', 'AAT', and 'AAA' are not.

    Background: The basis for this problem is bioinformatic and involves designing unique DNA tags that can be attached to biologically derived DNA fragments and then sequenced using a fancy second generation sequencer. The goal is to be able to recognize each tag, allowing for random insertion, deletion, and substitution errors. The specific DNA sequencing technology has a relatively low error rate per base (~1%), but is less precise when a single base is repeated 2 or more times (motivating the additional constraints imposed in problem 2).
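
    For what it's worth, the brute-force baseline the question alludes to - rejection sampling against a plain Levenshtein check - is short to write down. A TypeScript sketch, purely illustrative: it makes no termination guarantee if n, m, c and the alphabet make the constraints too tight, and it is not the efficient constructive algorithm being asked for.

        // Plain dynamic-programming Levenshtein distance.
        function levenshtein(a: string, b: string): number {
          const dp = Array.from({ length: a.length + 1 }, (_, i) =>
            Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
          );
          for (let i = 1; i <= a.length; i++) {
            for (let j = 1; j <= b.length; j++) {
              dp[i][j] = Math.min(
                dp[i - 1][j] + 1,                                   // deletion
                dp[i][j - 1] + 1,                                   // insertion
                dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)  // substitution
              );
            }
          }
          return dp[a.length][b.length];
        }

        // Keep a random candidate only if it stays at distance > c from everything accepted so far
        // (and, for problem 2, never repeats a character in adjacent positions).
        function generateTags(n: number, m: number, alphabet: string, c: number): string[] {
          const tags: string[] = [];
          while (tags.length < n) {
            let candidate = "";
            for (let i = 0; i < m; i++) {
              let ch: string;
              do {
                ch = alphabet[Math.floor(Math.random() * alphabet.length)];
              } while (i > 0 && ch === candidate[i - 1]);           // no adjacent repeats
              candidate += ch;
            }
            if (tags.every(t => levenshtein(t, candidate) > c)) tags.push(candidate);
          }
          return tags;
        }

        generateTags(10, 8, "ACGT", 3);   // e.g. 10 tags of length 8 over the DNA alphabet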


  • how can I speed up insertion of many rows to a table via ADO.NET?

    - by jcollum
    I have a table that has 5 columns: AcctId (int), Address1 (varchar), Address2 (varchar), Person1 (varchar), Person2 (varchar). I'm generating random data to insert into this table via a C# console application. I've tried doing this random data insert via SQL Server and decided it was not a good solution -- SQL is not good at random on an each-row basis. Generating the random data -- 975k rows of it -- takes a minimal amount of time. It's in a List of custom objects.

    I need to take this random data and update many rows in the database with the new random data. I tried updating the rows one at a time; very slow because of the repeated searching of the List object in code. So I think the best approach is to put all the randomized data into a table in the database, then update all the other tables that use this data. I.e.

        UPDATE t SET t.Address1 = d.Address1
        FROM Table1 t
        INNER JOIN RandomizedData d ON d.AcctId = t.Acct_ID

    The database is very un-normalized, so this Acct data is sprinkled all over the place. I've got no control of the normalization. So, having decided to insert all of the randomized data into a single table, I set out to create insert scripts:

        USE TheDatabase
        Insert tmp_RandomizedData
        SELECT 1,'4392 EIGHTH AVE','','JENNIFER CARTER','BARBARA CARTER' UNION ALL
        SELECT 2,'2168 MAIN ST','HNGR F','DANIEL HERNANDEZ','SUSAN MARTIN'
        -- etc, another 98 times... (FYI, this is not real data!)

    I'm building this INSERT script in batches of 100. It's taking on average 175 ms to run each insert. Does this seem like a long time? It's going to take about 35 mins to run the whole insert. The table doesn't have a primary key or any indexes. I was planning on adding those after all the data is inserted (thinking that that would be faster). Is there a better way to do this?


  • How to restrict access to a class's data based on state?

    - by Marcus Swope
    In an ETL application I am working on, we have three basic processes:

    1. Validate and parse an XML file of customer information from a third party
    2. Match values received in the file to values in our system
    3. Load customer data in our system

    The issue here is that we may need to display the customer information from any or all of the above states to an internal user, and there is data in our customer class that will never be populated before the values have been matched in our system (step 2). For this reason, I would like the values to not even be available to be accessed when the customer is in this state, and I would like to avoid repeating logic everywhere like:

        if (customer.IsMatched)
            DisplayTextOnWeb(customer.SomeMatchedValue);

    My first thought for this was to add a couple interfaces on top of Customer that would only expose the properties and behaviors of the current state, and then only deal with those interfaces. The problem with this approach is that there seems to be no good way to move from an ICustomerWithNoMatchedValues to an ICustomerWithMatchedValues without doing direct casts, etc... (or at least I can't find one).

    I can't be the first to have come across this; how do you normally approach this? As a last caveat, I would like for this solution to play nice with FluentNHibernate :) Thanks in advance...
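
    The general idea of letting types, rather than runtime checks, gate access to state-dependent data can be sketched outside of C#/NHibernate. A TypeScript illustration with invented names - only the matching step can produce the richer type, so downstream code never needs an "is matched" guard:

        // The unmatched view simply doesn't expose matched-only data.
        interface UnmatchedCustomer {
          id: number;
          rawXml: string;
        }

        interface MatchedCustomer extends UnmatchedCustomer {
          someMatchedValue: string;
        }

        // The matching step is the only way to obtain a MatchedCustomer.
        function match(c: UnmatchedCustomer, valueFromOurSystem: string): MatchedCustomer {
          return { ...c, someMatchedValue: valueFromOurSystem };
        }

        // Display code takes the matched type, so the compiler guarantees the value exists.
        function displayOnWeb(c: MatchedCustomer): void {
          console.log(c.someMatchedValue);
        }

    How well the equivalent C# interfaces map onto an ORM like (Fluent)NHibernate is a separate question; this only shows the shape of the state-as-type idea.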


  • How can I do rapid application development with ASP.NET MVC?

    - by Erik Forbes
    I've been given a short amount of time (~80 hours to start with) to replace an existing Access database with a full-blown SQL + Web system, and I'm enumerating my options. I would like to use ASP.NET MVC, but I'm unsure of how to use it effectively with my short timetable. For the database backend I'll be using Linq to SQL, as it's a product I already know and can get something working with quickly. Does anyone have any experience with using ASP.NET MVC in this way and can share some insight?

    Edit: The reason I've been interested in ASP.NET MVC is because I know (100% confirmed) that there will be more work to do after this first round, and I'd like my maintenance work to be as easy as possible. In my experience Webforms applications tend to break down over repeated maintenance, despite discipline. Maybe there's a middle ground? How difficult would it be for me to, say, build the app with Webforms, then migrate it to MVC later when I have more time budgeted to the project?

    Edit 2: Further background: the Access application I'm replacing is used in some capacity by everyone in the building, and since it was upgraded from Access 98 to 2003 it's been crashing daily, causing hours of lost productivity as people have to re-enter data since the last backup. This is the reason for the short amount of time - this is a critical business function, and they can't afford to keep re-entering data on a daily basis.


  • How to properly log out of facebook

    - by Gublooo
    This is a repeated question and I have followed both the suggestions provided in these StackOverflow links:

        How to log-out users using FaceBook connect in php and zend
        Trouble logging out of a FaceBook connect site and destroying sessions

    The issue is - the code works 90% of the time. That's the weird part. Out of the 100 times I've logged in and out, I've experienced this problem 5-6 times, and 2 of my beta test users have reported the same issue. So when it works, if you click the logout link you get the Facebook popup saying you're being logged out. When it doesn't work, absolutely nothing happens - the page does not refresh - it just sits on that page doing nothing.

    This is the javascript code that gets called on clicking logout:

        function logout() {
            FB.Connect.get_status().waitUntilReady(function(status) {
                switch(status) {
                    case FB.ConnectState.connected:
                        FB.Connect.logoutAndRedirect("http://www.example.com/login/logout");
                        break;
                    case FB.ConnectState.userNotLoggedIn:
                        window.location = "http://www.example.com/login/logout";
                        break;
                }
            });
            return false;
        }

    This is the php code:

        $this->_auth->clearIdentity();
        $face = Zend_Registry::get('facebook');
        $fb = new Facebook($face['appapikey'], $face['appsecret']);
        //$fb->clear_cookie_state();
        $fb->expire_session();

    Anyone experienced such sporadic issues? Thanks


  • Targeted Simplify in Mathematica

    - by Timo
    I generate very long and complex analytic expressions of the general form:

        (...something not so complex...)(...ditto...)(...ditto...)...lots...

    When I try to use Simplify, Mathematica grinds to a halt, I am assuming due to the fact that it tries to expand the brackets and/or simplify across different brackets. The brackets, while containing long expressions, are easily simplified by Mathematica on their own. Is there some way I can limit the scope of Simplify to a single bracket at a time?

    Edit: Some additional info and progress. So using the advice from you guys I have now started using something in the vein of

        In[1]:= trouble = Log[(x + I y) (x - I y) + Sqrt[(a + I b) (a - I b)]];

        In[2]:= Replace[trouble, form_ /; (Head[form] == Times) :> Simplify[form], {3}]
        Out[2]= Log[Sqrt[a^2 + b^2] + (x - I y) (x + I y)]

    Changing Times to an appropriate head like Plus or Power makes it possible to target the simplification quite accurately. The problem / question that remains, though, is the following: Simplify will still descend deeper than the level specified to Replace, e.g.

        In[3]:= Replace[trouble, form_ /; (Head[form] == Plus) :> Simplify[form], {1}]
        Out[3]= Log[Sqrt[a^2 + b^2] + x^2 + y^2]

    simplifies the square root as well. My plan was to iteratively use Replace from the bottom up, one level at a time, but this clearly will result in a vast amount of repeated work by Simplify and ultimately result in the exact same bogging down of Mathematica I experienced at the outset. Is there a way to restrict Simplify to a certain level(s)? I realize that this sort of restriction may not produce optimal results, but the idea here is getting something that is "good enough".


  • Can't Display Bitmap of Higher Resolution than CDC area

    - by T. Jones
    Hi there dear gurus and expert coders. I am not gonna start with "I'm a newbie and don't know much about image programming" but unfortunately those are the facts :(

    I am trying to display an image from a bitmap pointer *ImageData which is of resolution 1392x1032. I am trying to draw it in an area of resolution (or size) 627x474. However, repeated trying doesn't seem to work. It works when I change the width and height of the bitmap image I create from *ImageData to a resolution or size of around 627x474.

    I really do not know how to solve this after trying all possible solutions from various forums and Google. pDC is a CDC* and memDC is a CDC initialized in an earlier method; anything uninitialized here was initialized in other methods. Here is my code, dear humble gurus. Do provide me with the guidance Yoda and Obi-Wan provided to Luke Skywalker.

        void DemoControl::ShowImage( void *ImageData )
        {
            int Width;                     // Width of Image From Camera
            int Height;                    // Height of Image From Camera
            int m_DisplayWidth = 627;      // width of rectangle area to display
            int m_DisplayHeight = 474;     // height of rectangle area to display

            GetImageSize( &Width, &Height );   // this will return Width = 1392, Height = 1032

            CBitmap bitmap;
            bitmap.CreateBitmap(Width, Height, 32, 1, ImageData);

            CBitmap* pOldBitmap = memDC.SelectObject((CBitmap*)&bitmap);
            pDC->BitBlt(22, 24, 627, 474, &memDC, 0, 0, SRCCOPY);
            memDC.SelectObject((CBitmap*)pOldBitmap);
            ReleaseDC(pDC);
        }


  • jQuery - how to repeat a function within itself to include nested files

    - by brandonjp
    I'm not sure what to call this question, since it involves a variety of things, but we'll start with the first issue...

    I've been trying to write a simple way to use jQuery for includes (similar to php or ssi) on static html sites. Whenever it finds div.jqinclude it gets attr('title') (which is my external html file), then uses load() to include the external html file. Afterwards it changes the class of the div to jqincluded. So my main index.html might contain several lines like so:

        <div class="jqinclude" title="menu.html"></div>

    However, within menu.html there might be other includes nested. So it needs to make the first level of includes, then perform the same action on the newly included content. So far it works fine, but it's very verbose and only goes a couple levels deep. How would I make the following repeated function continually loop until no more class="jqinclude" elements are left on the page? I've tried arguments.callee and some other convoluted wacky attempts to no avail. I'm also interested to know if there's another completely different way I should be doing this.

        $('div.jqinclude').each(function() {   // begin repeat here
            var file = $(this).attr('title');
            $(this).load(file, function() {
                $(this).removeClass('jqinclude').addClass('jqincluded');
                $(this).find('div.jqinclude').each(function() {   // end repeat here
                    var file = $(this).attr('title');
                    $(this).load(file, function() {
                        $(this).removeClass('jqinclude').addClass('jqincluded');
                        $(this).find('div.jqinclude').each(function() {
                            var file = $(this).attr('title');
                            $(this).load(file, function() {
                                $(this).removeClass('jqinclude').addClass('jqincluded');
                            });
                        });
                    });
                });
            });
        });
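
    One way to get unbounded depth is to give the routine a name and have it call itself until no div.jqinclude elements remain, instead of unrolling the nesting by hand. A sketch of that idea in TypeScript using fetch and plain DOM rather than jQuery's load(), so the "wait for the content, then recurse into it" step is explicit; the selector and class names are the ones from the question:

        // Recursively resolve <div class="jqinclude" title="fragment.html"> includes.
        async function processIncludes(root: ParentNode = document): Promise<void> {
          const targets = Array.from(root.querySelectorAll<HTMLElement>("div.jqinclude"));
          if (targets.length === 0) return;             // base case: nothing left to include
          for (const el of targets) {
            const file = el.getAttribute("title");      // the external fragment to pull in
            if (!file) continue;
            el.innerHTML = await (await fetch(file)).text();
            el.classList.remove("jqinclude");
            el.classList.add("jqincluded");
            await processIncludes(el);                  // recurse into the newly loaded markup
          }
        }

        processIncludes();                              // kick off from the whole document

    The same shape works with jQuery by putting the load() call inside the named function and invoking the function again from load()'s callback.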


  • VB.NET switching from ADO.NET to LINQ

    - by Cj Anderson
    I'm VERY new to Linq. I have an application I wrote that is in VB.NET 2.0. Works great, but I'd like to switch this application to Linq. I use ADO.NET to load XML into a DataTable. The XML file has about 90,000 records in it. I then use DataTable.Select to perform searches against that DataTable. The search control is a free-form textbox, so if the user types in terms it searches instantly. Any further terms that are typed in continue to restrict the results. So you can type in "Bob", or "Bob Barker", or "Bob Barker Price is Right". The more criteria typed in, the more narrowed your result. I bind the results to a GridView.

    Moving forward, what all do I need to do? From a high level, I assume I need to:

    1) Go to Project Properties -- Advanced Compiler Settings and change the Target framework to 3.5 from 2.0.
    2) Add the reference to System.Xml.Linq and add the Imports statement to the classes.

    So I'm not sure what the best approach is going forward after that. I assume I use XDocument.Load, then my search subroutine runs against the XDocument. Do I just do the standard Linq query for this sort of repeated search? Like so:

        var people = from phonebook in doc.Root.Elements("phonebook")
                     where phonebook.Element("userid") = "whatever"
                     select phonebook;

    Any tips on how to best implement?


  • Display image background fully for last repeat in a div

    - by Stiggler
    I have a 700x300 background repeating seamlessly under the main content div. Now I'd like to attach a div at the bottom of the content div, containing a continuation-to-end of the background image, connecting seamlessly with the background above it. Due to the nature of the pattern, unless the full 300px height of the background image is visible in the last repeat of the content div's background, the background in the div below won't seamlessly connect. Basically, I need the content div's height to be a multiple of 300px under all circumstances. What's a good approach to this sort of problem?

    I've tried resizing the content div on loading the page, but this only works as long as the content div doesn't contain any resizing, dynamic content, which is not my case:

        function adjustContentHeight() {
            // Setting content div's height to nearest upper multiple of column background's height,
            // forcing it not to be cut off when repeated.
            var contentBgHeight = 300;
            var contentHeight = $("#content").height();
            var adjustedHeight = Math.ceil(contentHeight / contentBgHeight);
            $("#content").height(adjustedHeight * contentBgHeight);
        }
        $(document).ready(adjustContentHeight);

    What I'm looking for here is a way to respond to a div resizing event, but there doesn't seem to be such a thing. Also, please assume I have no access to the JS controlling the resizing of content in the content div, though this is potentially a way of solving the problem. Another potential solution I was thinking of was to offset the background image in the bottom div by a certain amount depending on the height of the content div. Again, the missing piece seems to be the ability to respond to a resize event.
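
    Browsers do now expose a per-element resize notification via ResizeObserver, which is one way to re-snap the height whenever the content div changes size, regardless of which script caused the change. A rough TypeScript sketch, using min-height so content is never clipped:

        const bgHeight = 300;                                        // height of one background tile
        const content = document.getElementById("content")!;

        const observer = new ResizeObserver(() => {
          content.style.minHeight = "0";                             // measure the natural content height
          const snapped = Math.ceil(content.scrollHeight / bgHeight) * bgHeight;
          content.style.minHeight = snapped + "px";                  // pad up to the next 300px multiple
        });
        observer.observe(content);

    Once the callback settles on a multiple of 300px the element stops changing size, so the observer goes quiet until the content actually changes again.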


  • spring - constructor injection and overriding parent definition of nested bean

    - by mdma
    I've read the Spring 3 reference on inheriting bean definitions, but I'm confused about what is possible and not possible. For example, a bean that takes a collaborator bean, configured with the value 12:

        <bean name="beanService12" class="SomeSevice">
            <constructor-arg index="0">
                <bean name="beanBaseNested" class="SomeCollaborator">
                    <constructor-arg index="0" value="12"/>
                </bean>
            </constructor-arg>
        </bean>

    I'd then like to be able to create similar beans, with slightly different configured collaborators. Can I do something like

        <bean name="beanService13" parent="beanService12">
            <constructor-arg index="0">
                <bean>
                    <constructor-arg index="0" value="13"/>
                </bean>
            </constructor-arg>
        </bean>

    I'm not sure this is possible and, even if it were, it feels a bit clunky. Is there a nicer way to override small parts of a large nested bean definition? It seems the child bean has to know quite a lot about the parent, e.g. the constructor index. I'd prefer not to change the structure - the parent beans use collaborators to perform their function, but I can add properties and use property injection if that helps. This is a repeated pattern; would creating a custom schema help? Thanks for any advice!


  • problem writing xml to file with .net mvc - timeout?

    - by Mark
    Hey, so having an issue with writing out to an xml file. Works fine for single requests via the browser, but when I use something like Charles to perform 5-10 repeated requests concurrently, several of them will fail. The trace simply shows a 500 error with no content inside; basically I think they start timing out waiting for write access or something... This method is inside my repository class. I have also tried making the repository instance a singleton, but it doesn't appear to make any difference. Any help would be much appreciated. Cheers

        public void Add(Request request)
        {
            try
            {
                XDocument requests;
                XmlReader xmlReader;
                using (xmlReader = XmlReader.Create(_requestsFilePath))
                {
                    requests = XDocument.Load(xmlReader);
                    XElement xmlRequest = new XElement("request",
                        new XElement("code", request.code),
                        new XElement("date", request.date),
                        new XElement("email", new XCData(request.email)),
                        new XElement("name", new XCData(request.name)),
                        new XElement("recieveOffers", request.recieveOffers)
                    );
                    requests.Root.Element("requests").Add(xmlRequest);
                    xmlReader.Close();
                }
                requests.Save(_requestsFilePath);
            }
            catch (Exception ex)
            {
                HttpContext.Current.Trace.Warn("Error writing to file: " + ex);
            }
        }
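
    Whatever the storage, the usual cure for this symptom is to make sure only one request at a time performs the read-modify-write on the shared file (in .NET that would typically mean a static lock or a Mutex around the load/save, or moving the data into a database). The same serialisation idea expressed as a Node/TypeScript sketch, where every writer is chained onto a single promise queue; the file splicing itself is deliberately naive and only illustrative:

        import { promises as fs } from "fs";

        // All writers chain onto this promise, so the read-modify-write never runs concurrently.
        let writeQueue: Promise<void> = Promise.resolve();

        function appendRequest(path: string, entryXml: string): Promise<void> {
          writeQueue = writeQueue
            .catch(() => { /* keep the chain alive after a failed write */ })
            .then(async () => {
              const doc = await fs.readFile(path, "utf8");
              const updated = doc.replace("</requests>", entryXml + "\n</requests>");
              await fs.writeFile(path, updated);
            });
          return writeQueue;
        }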


  • How to handle dates that repeat indefinitely

    - by Addsy
    I am implementing a fairly simple calendar on a website using PHP and MySQL. I want to be able to handle dates that repeat indefinitely and am not sure of the best way to do it. For a time-limited repeating event it seems to make sense to just add each event within the timeframe into my db table and group them with some form of recursion id. But when there is no limit to how often the event repeats, is it better to:

    a) put records in the db for a specific time frame (eg the next 2 years) and then periodically check and add new records as time goes by - the problem with this is that if someone is looking 3 years ahead, the event won't show up

    b) not actually have records for each event but instead, when I check in my php code for events within a specified time period, calculate whether a repeated event will occur within this time period - the problem with this is that it means there isn't a specific record for each event, which I can see being a pain when I then want to associate other info (attendance etc) with that event. It also seems like it might be a bit slow.

    Has anyone tried either of these methods? If so how did it work out? Or is there some other ingenious crafty method I'm missing?
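
    Option b) tends to be workable because the occurrences inside a viewing window can be computed directly from the recurrence rule rather than stored. A TypeScript sketch for a simple "every N days" rule - real calendars have to cope with months and DST, so treat this purely as the shape of the calculation:

        interface RecurringEvent {
          start: Date;            // first occurrence
          intervalDays: number;   // e.g. 7 for weekly
        }

        // All occurrences of an open-ended repeating event that fall inside [from, to].
        function occurrencesInRange(ev: RecurringEvent, from: Date, to: Date): Date[] {
          const stepMs = ev.intervalDays * 24 * 60 * 60 * 1000;
          // index of the first occurrence at or after `from`
          const k = Math.max(0, Math.ceil((from.getTime() - ev.start.getTime()) / stepMs));
          const out: Date[] = [];
          for (let t = ev.start.getTime() + k * stepMs; t <= to.getTime(); t += stepMs) {
            out.push(new Date(t));
          }
          return out;
        }

    For attaching attendance or other data to one occurrence, the (event id, computed occurrence date) pair can act as the key, with a concrete row created lazily the first time somebody actually attaches something to that occurrence.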


  • PHP: Next Available Value in an Array, starting with a non-indexed value

    - by Erik Smith
    I've been stumped on this PHP issue for about a day now. Basically, we have an array of hours formatted in 24-hour format, and an arbitrary value ($hour), also 24-hour. The problem is, we need to take $hour and get the next available value in the array, starting with the value that immediately follows $hour. The array might look something like:

        $goodHours = array('8,9,10,11,12,19,20,21');

    Then the hour value might be:

        $hour = 14;

    So, we need some way to know that 19 is the next best time. Additionally, we might also need to get the second, third, or fourth (etc) available value. The issue seems to be that because 14 isn't a value in the array, there is no index to reference that would let us increment to the next value.

    To make things simpler, I've taken $goodHours and repeated the values several times just so I don't have to deal with going back to the start (maybe not the best way to do it, but a quick fix). I have a feeling this is something simple I'm missing, but I would be so appreciative if anyone could shed some light. Erik
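
    One way that sidesteps the missing index completely: sort the hours, find the first entry that is not earlier than $hour (wrapping to the start of the next day if there is none), and step forward from there for the second, third, etc. value. A TypeScript sketch of the idea - it ports directly to PHP with sort() and a loop, and it assumes the hours are plain integers rather than the quoted string in the snippet above:

        const goodHours = [8, 9, 10, 11, 12, 19, 20, 21];

        // n-th available hour at or after `hour` (offset 0 = the next one), wrapping past midnight.
        function nextGoodHour(hour: number, offset = 0): number {
          const sorted = [...goodHours].sort((a, b) => a - b);
          let i = sorted.findIndex(h => h >= hour);   // first slot not earlier than `hour`
          if (i === -1) i = 0;                        // nothing left today: wrap to tomorrow
          return sorted[(i + offset) % sorted.length];
        }

        nextGoodHour(14);      // 19
        nextGoodHour(14, 1);   // 20
        nextGoodHour(22);      // 8  (the following day)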


  • Condensing repeating JQuery code

    - by Craig Ward
    I have a page that has a large image on it with a number of thumbnails. When you mouse over a thumbnail, the main image changes to the image you have just rolled your mouse over. The problem is, the more thumbnails I have, the more repeated code I have. How could I reduce it? The jQuery code is as follows:

        <script type="text/javascript">
        $('#thumb1').mouseover(function(){
            $('#big_img').fadeOut('slow', function(){
                $('#big_img').attr('src', '0001.jpg');
                $('#big_img').fadeIn('slow');
            });
        });
        $('#thumb2').mouseover(function(){
            $('#big_img').fadeOut('slow', function(){
                $('#big_img').attr('src', 'p_0002.jpg');
                $('#big_img').fadeIn('slow');
            });
        });
        $('#thumb3').mouseover(function(){
            $('#big_img').fadeOut('slow', function(){
                $('#big_img').attr('src', '_img/p_0003.jpg');
                $('#big_img').fadeIn('slow');
            });
        });
        $('#thumb4').mouseover(function(){
            $('#big_img').fadeOut('slow', function(){
                $('#big_img').attr('src', '0004.jpg');
                $('#big_img').fadeIn('slow');
            });
        });
        </script>

    #big_img = the ID of the full size image
    #thumb1, #thumb2, #thumb3, #thumb4 = the IDs of the thumbnails

    The main code for the page is PHP, if that helps.
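
    The usual way to collapse this sort of repetition is to drive a single handler from a map of thumbnail id to image path, so adding a thumbnail means adding one line of data rather than another copy of the handler. A TypeScript sketch using plain DOM events (the ids and paths are the ones from the question; the fade can remain a jQuery or CSS-transition concern):

        // thumbnail id -> full-size image path
        const thumbSources: Record<string, string> = {
          thumb1: "0001.jpg",
          thumb2: "p_0002.jpg",
          thumb3: "_img/p_0003.jpg",
          thumb4: "0004.jpg",
        };

        const bigImg = document.getElementById("big_img") as HTMLImageElement;

        for (const [id, src] of Object.entries(thumbSources)) {
          document.getElementById(id)?.addEventListener("mouseover", () => {
            bigImg.src = src;   // swap the large image
          });
        }

    An even lighter variant is to put the full-size path in a data attribute on each thumbnail and bind one handler to all of them, reading the path from the element that fired the event.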


  • Play youtube video in full screen ?

    - by Madhup
    Hi all,

    First of all, sorry if somebody finds this question is repeated (haven't found any by myself). I am developing an iPad application and trying to play youtube videos using this code:

        NSString *embedHTML = @"\
            <html><head>\
            <style type=\"text/css\">\
            body {\
                background-color: transparent;\
                color: white;\
            }\
            </style>\
            </head><body style=\"margin:0\">\
            <embed id=\"yt\" src=\"%@\" type=\"application/x-shockwave-flash\" \
            width=\"%0.0f\" height=\"%0.0f\"></embed>\
            </body></html>";

        NSString *html = [NSString stringWithFormat:embedHTML, youTubeUrl, 142.0, 129.5];
        [wbView loadHTMLString:html baseURL:nil];

    The code works fine when used in an iPhone application (i.e. you touch the webview and it starts playing the youtube video in fullscreen). But when used on the iPad, tapping the web view starts playing the video in the web view itself and shows options to go to full screen, while I want the playback to start in full screen from the beginning, like it does on the iPhone. Anybody having some ideas or people who have done it before, please help.

    Thanks, Madhup


  • Extension methods for encapsulation and reusability

    - by tzaman
    In C++ programming, it's generally considered good practice to "prefer non-member non-friend functions" instead of instance methods. This has been recommended by Scott Meyers in this classic Dr. Dobbs article, and repeated by Herb Sutter and Andrei Alexandrescu in C++ Coding Standards (item 44); the general argument being that if a function can do its job solely by relying on the public interface exposed by the class, it actually increases encapsulation to have it be external. While this confuses the "packaging" of the class to some extent, the benefits are generally considered worth it.

    Now, ever since I've started programming in C#, I've had a feeling that here is the ultimate expression of the concept that they're trying to achieve with "non-member, non-friend functions that are part of a class interface". C# adds two crucial components to the mix - the first being interfaces, and the second extension methods:

    - Interfaces allow a class to formally specify their public contract, the methods and properties that they're exposing to the world. Any other class can choose to implement the same interface and fulfill that same contract.
    - Extension methods can be defined on an interface, providing any functionality that can be implemented via the interface to all implementers automatically. And best of all, because of the "instance syntax" sugar and IDE support, they can be called the same way as any other instance method, eliminating the cognitive overhead!

    So you get the encapsulation benefits of "non-member, non-friend" functions with the convenience of members. Seems like the best of both worlds to me; the .NET library itself provides a shining example in LINQ. However, everywhere I look I see people warning against extension method overuse; even the MSDN page itself states:

        In general, we recommend that you implement extension methods sparingly and only when you have to.

    So what's the verdict? Are extension methods the acme of encapsulation and code reuse, or am I just deluding myself?


  • Speeding up inner-joins and subqueries while restricting row size and table membership

    - by hiffy
    I'm developing an rss feed reader that uses a bayesian filter to filter out boring blog posts. The Stream table is meant to act as a FIFO buffer from which the webapp will consume 'entries'. I use it to store the temporary relationship between entries, users and bayesian filter classifications. After a user marks an entry as read, it will be added to the metadata table (so that a user isn't presented with material they have already read), and deleted from the stream table. Every three minutes, a background process will repopulate the Stream table with new entries (i.e. whenever the daemon adds new entries after it checks the rss feeds for updates).

    Problem: The query I came up with is hella slow. More importantly, the Stream table only needs to hold one hundred unread entries at a time; it'll reduce duplication, make processing faster and give me some flexibility with how I display the entries.

    The query (takes about 9 seconds on 3600 items with no indexes):

        insert into stream(entry_id, user_id)
        select entries.id, subscriptions_users.user_id
        from entries
        inner join subscriptions_users on subscriptions_users.subscription_id = entries.subscription_id
        where subscriptions_users.user_id = 1
          and entries.id not in (select entry_id from metadata where metadata.user_id = 1)
          and entries.id not in (select entry_id from stream where user_id = 1);

    The query explained: insert into stream all of the entries from a user's subscription list (subscriptions_users) that the user has not read (i.e. do not exist in metadata) and which do not already exist in the stream.

    Attempted solution: adding limit 100 to the end speeds up the query considerably, but upon repeated executions it will keep on adding a different set of 100 entries that do not already exist in the table (with each successive query taking longer and longer). This is close but not quite what I wanted to do. Does anyone have any advice (nosql?) or know a more efficient way of composing the query?


  • Perl's Devel::LeakTrace::Fast Pointing to blank files and evals

    - by kt
    I am using Devel::LeakTrace::Fast to debug a memory leak in a perl script designed as a daemon which runs an infinite loop with sleeps until interrupted. I am having some trouble both reading the output and finding documentation to help me understand the output. The perldoc doesn't contain much information on the output. Most of it makes sense, such as pointing to globals in DBI. Intermingled with the output, however, are several lines of the form

        leaked SV(<LOCATION>) from (eval #) line #

    where the numbers are numbers and <LOCATION> is a location in memory. The script itself is not using eval at any point - I have not investigated each used module to see if evals are present. Mostly what I want to know is how to find these evals (if possible). I also find the following entry repeated over and over again:

        leaked SV(<LOCATION>) from line #

    where "line #" is always the same number. Not very helpful in tracking down what file that line is in.


  • ASP.NET MVC jQuery autocomplete with url.action helper in a script included in a page.

    - by Boob
    I have been building my first ASP.NET MVC web app. I have been using the jQuery autocomplete widget in a number of places like this:

        <head>
            $("#model").autocomplete({ source: '<%= Url.Action("Model", "AutoComplete") %>' });
        </head>

    The thing is, I have this jQuery code in a number of different places throughout my web app. So I thought I would create a separate javascript file (site.js) where I could put this code and then just include it in the master page. Then I can put all these repeated pieces of code in that script and just call them where I need to. So I did this. My code is shown below.

    In the site.js script I put this function:

        function doAutoComplete() {
            $("#model").autocomplete({ source: '<%= Url.Action("Model", "AutoComplete") %>' });
        }

    On the page I have:

        <head>
            <script src="../../Scripts/site.js" type="text/javascript"></script>
            doAutoComplete();
        </head>

    But when I do this I get an Invalid Argument exception and the autocomplete doesn't work. What am I doing wrong? Any ideas? Do I need to pass something to the doAutoComplete function?
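
    The <%= %> block is only evaluated when it sits inside a view, so a static .js file can never contain it; a common fix is to have the shared function accept the server-generated URL as a parameter and let each view pass it in. A sketch of that shape in TypeScript (the declare line just tells the compiler that jQuery is loaded globally by the page):

        // site.ts / site.js - no server-side markup in here
        declare const $: any;   // jQuery is assumed to be provided by the page

        function doAutoComplete(selector: string, sourceUrl: string): void {
          $(selector).autocomplete({ source: sourceUrl });
        }

    The view then calls doAutoComplete("#model", '<%= Url.Action("Model", "AutoComplete") %>') from an inline script tag, ideally inside $(document).ready so the #model element exists when the call runs.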


  • Custom model validation of dependent properties using Data Annotations

    - by Darin Dimitrov
    Until now I've used the excellent FluentValidation library to validate my model classes. In web applications I use it in conjunction with the jquery.validate plugin to perform client side validation as well. One drawback is that much of the validation logic is repeated on the client side and is no longer centralized in a single place. For this reason I'm looking for an alternative.

    There are many examples out there showing the usage of data annotations to perform model validation. It looks very promising. One thing I couldn't find out is how to validate a property that depends on another property value. Let's take for example the following model:

        public class Event
        {
            [Required]
            public DateTime? StartDate { get; set; }

            [Required]
            public DateTime? EndDate { get; set; }
        }

    I would like to ensure that EndDate is greater than StartDate. I could write a custom validation attribute extending ValidationAttribute in order to perform custom validation logic. Unfortunately I couldn't find a way to obtain the model instance:

        public class CustomValidationAttribute : ValidationAttribute
        {
            public override bool IsValid(object value)
            {
                // value represents the property value on which this attribute is applied,
                // but how do I obtain the object instance to which this property belongs?
                return true;
            }
        }

    I found that the CustomValidationAttribute seems to do the job because it has a ValidationContext property that contains the object instance being validated. Unfortunately this attribute has been added only in .NET 4.0. So my question is: can I achieve the same functionality in .NET 3.5 SP1?

    UPDATE: It seems that FluentValidation already supports client-side validation and metadata in ASP.NET MVC 2. Still, it would be good to know whether data annotations could be used to validate dependent properties.


  • String operations

    - by NEO
    I am trying to find the most repeated word in a string. My code is as follows:

        public class Word {
            private String toWord;
            private int Count;

            public Word(int count, String word) {
                toWord = word;
                Count = count;
            }

            public static void main(String args[]) {
                String str = "my name is neo and my other name is also neo because I am neo";
                String[] str1 = str.split(" ");
                Word w1 = new Word(0, str1[0]);
                LinkedList<Word> list = new LinkedList<Word>();
                list.add(w1);
                ListIterator itr = list.listIterator();
                for (int i = 1; i < str1.length; i++) {
                    while (itr.hasNext()) {
                        if (str1[i].equalsTO(????));
                        else
                            list.add(new Word(0, str1[i]));
                    }

    How do I compare the string from string array str1 to the string stored in the linked list, and then how do I increase the respective count? I will then print the string with the highest count; I don't know how to do that either.
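
    For what it's worth, the usual shape for this problem is a map from word to count, followed by a single pass to pick the maximum - in Java that would be a HashMap<String, Integer>. A TypeScript sketch of the same idea:

        // Count occurrences of each word, then take the maximum.
        function mostRepeatedWord(text: string): { word: string; count: number } {
          const counts = new Map<string, number>();
          for (const w of text.split(/\s+/)) {
            counts.set(w, (counts.get(w) ?? 0) + 1);
          }
          let best = { word: "", count: 0 };
          for (const [word, count] of counts) {
            if (count > best.count) best = { word, count };
          }
          return best;
        }

        mostRepeatedWord("my name is neo and my other name is also neo because I am neo");
        // -> { word: "neo", count: 3 }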


  • Should I use two queries, or is there a way to JOIN this in MySQL/PHP?

    - by Jack W-H
    Morning y'all!

    Basically, I'm using a table to store my main data - called 'Code' - a table called 'Tags' to store the tags for each code entry, and a table called 'code_tags' to intersect it. There's also a table called 'users' which stores information about the users who submitted each bit of code.

    On my homepage, I want 5 results returned from the database. Each returned result needs to list the code's title, summary, and then fetch the author's firstname based on the ID of the person who submitted it. I've managed to achieve this much so far (woot!). My problem lies when I try to collect all the tags as well. At the moment this is a pretty big query and it's scaring me a little. Here's my problematic query:

        SELECT code.*, code_tags.*, tags.*, users.firstname AS authorname, users.id AS authorid
        FROM code, code_tags, tags, users
        WHERE users.id = code.author
          AND code_tags.code_id = code.id
          AND tags.id = code_tags.tag_id
        ORDER BY date DESC
        LIMIT 0, 5

    What it returns is correct-looking data, but several repeated rows for each tag. So for example if a Code entry has 3 tags, it will return an identical row 3 times - except in each of the three returned rows, the tag changes. Does that make sense? How would I go about changing this?

    Thanks! Jack

