Search Results

Search found 8219 results on 329 pages for 'less'.

Page 264 of 329

  • Point covering problem

    - by Sean
    I recently had this problem on a test: given a set m of points (all on the x-axis) and a set n of lines with endpoints [l, r] (again on the x-axis), find the minimum subset of n such that all points are covered by a line. Prove that your solution always finds the minimum subset. The algorithm I wrote was something to this effect (lines are stored as arrays with the left endpoint in position 0 and the right in position 1):

        algorithm coverPoints(set[] m, set[][] n):
            chosenLines = []
            while m is not empty:
                minX = min(m)
                bestLine = n[0]
                for i = 1 to length of n:
                    if n[i][0] <= minX and n[i][1] > bestLine[1] then
                        bestLine = n[i]
                add bestLine to chosenLines
                for i = 0 to length of m:
                    if m[i] <= bestLine[1] then
                        delete m[i] from m
            return chosenLines

    I'm just not sure if this always finds the minimum solution. It's a simple greedy algorithm, so my gut tells me it won't, but a friend who is much better at this than I am says that for this problem a greedy algorithm like this always finds the minimal solution. For the proof, I did a very hand-wavy proof by contradiction where I made an assumption that probably isn't true at all; I forget exactly what I did. If this isn't a minimal solution, is there a way to do it in less than something like O(n!) time? Thanks
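
    For what it's worth, this greedy is the textbook one and it is provably minimal (exchange argument: the line reaching farthest right from the leftmost uncovered point can replace whatever an optimal solution uses there). A hedged Python sketch with sorting to avoid rescanning all lines each round -- the names are mine, not from the original:

        def cover_points(points, lines):
            """Greedy interval point cover: for the leftmost uncovered point,
            take the line that starts at or before it and reaches farthest
            right. Returns None if some point cannot be covered."""
            points = sorted(points)
            lines = sorted(lines)          # by left endpoint
            chosen, i, j = [], 0, 0
            while i < len(points):
                best = None
                # consider every line starting at or before the current point
                while j < len(lines) and lines[j][0] <= points[i]:
                    if best is None or lines[j][1] > best[1]:
                        best = lines[j]
                    j += 1
                if best is None or best[1] < points[i]:
                    return None            # uncoverable point
                chosen.append(best)
                while i < len(points) and points[i] <= best[1]:
                    i += 1                 # skip everything this line covers
            return chosen

    Lines skipped over in earlier rounds can never cover a later point (their right ends sit at or left of the chosen line's), which is why a single forward pass works: O(m log m + n log n) overall.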

    Read the article

  • Is there unresizable space in LaTeX? Pictures in a good-looking grid.

    - by drasto
    I've created a LaTeX macro to typeset guitar chord diagrams (using the picture environment). Now I want diagrams of different chords to appear in a good-looking grid when typeset one next to another, as the picture shows: The picture. (On the picture: labeled "First", a bad layout of diagrams; labeled "Second", the correct layout when there is an equal number of diagrams per line.) I'm using \hspace to put some space between diagrams, otherwise they would be too close to each other. As you can see, in the second case, when LaTeX arranges the pictures so that there is the same number of them in each line, it works. However, if there are fewer pictures in the last line, they become "shifted" to the right. I don't want this. I guess it is because LaTeX makes the space between diagrams in the first line a little longer so that the line exactly fits the page width. How do I tell LaTeX not to resize the spaces created by \hspace? Or is there any other way? I guess I cannot use tables because I don't know how many diagrams will fit in one line... This is the current state of the code:

        \newcommand{\spaceForChord}{1.7cm}
        \newcommand{\chordChart}[1]{%
          % calculate dimensions xdim and ydim according to settings
          \begin{picture}(xdim, ydim)%
            % draw the diagram inside the defined area
          \end{picture}%
          \hspace*{\spaceForChord}%
          \hspace*{-\xdim}%
        }%
        % end preamble and begin document
        \begin{document}
        First:\\*
        \\*
        \chordChart{...some arguments to change the diagram's look...}
        \chordChart{...}
        \chordChart{...}
        \chordChart{...}
        \chordChart{...}
        % ...the above line is repeated 12 more times to produce the result shown in the picture
        \end{document}

    Thanks for any help.
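
    In case it helps, a hedged guess at the mechanism: \hspace*{1.7cm} is fixed glue and is never stretched; what stretches is the interword glue that justification inserts between the charts. Turning justification off for the block keeps every gap identical and leaves a short last line flush left -- a minimal sketch:

        \begingroup
        \raggedright        % no justification, so no stretched gaps
        \chordChart{...}%
        \chordChart{...}%
        % ...LaTeX still breaks lines automatically when a chart no longer fits
        \par                % end the paragraph while \raggedright is in force
        \endgroup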

    Read the article

  • Communicating with a running python daemon

    - by hanksims
    I wrote a small Python application that runs as a daemon. It utilizes threading and queues. I'm looking for general approaches to altering this application so that I can communicate with it while it's running. Mostly I'd like to be able to monitor its health. In a nutshell, I'd like to be able to do something like this:

        python application.py start  # launches the daemon

    Later, I'd like to be able to come along and do something like:

        python application.py check_queue_size  # return info from the daemonized process

    To be clear, I don't have any problem implementing the Django-inspired syntax. What I don't have any idea how to do is send signals to the daemonized process (start), or how to write the daemon to handle and respond to such signals. Like I said above, I'm looking for general approaches. The only one I can see right now is telling the daemon to constantly log everything that might be needed to a file, but I hope there's a less messy way to go about it. UPDATE: Wow, a lot of great answers. Thanks so much. I think I'll look at both Pyro and the web.py/Werkzeug approaches, since Twisted is a little more than I want to bite off at this point. The next conceptual challenge, I suppose, is how to go about talking to my worker threads without hanging them up. Thanks again.
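
    One general approach, sketched in Python 3 (the socket path and names are mine; this is a sketch, not a drop-in): run a small control server inside the daemon on a Unix domain socket, so the command-line invocation just connects, sends the command, and prints the reply.

        import json
        import os
        import socket
        import threading

        SOCK_PATH = "/tmp/application.sock"  # hypothetical path

        def serve_status(work_queue):
            """Answer 'check_queue_size' requests from a control client."""
            if os.path.exists(SOCK_PATH):
                os.unlink(SOCK_PATH)         # clear a stale socket file
            srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            srv.bind(SOCK_PATH)
            srv.listen(1)
            while True:
                conn, _ = srv.accept()
                cmd = conn.recv(1024).decode().strip()
                if cmd == "check_queue_size":
                    reply = {"queue_size": work_queue.qsize()}
                    conn.sendall(json.dumps(reply).encode())
                conn.close()

        # in the daemon's startup path:
        # threading.Thread(target=serve_status, args=(q,), daemon=True).start()

    Reading qsize() only takes the queue's lock momentarily, so the worker threads are essentially undisturbed -- which bears on the update's follow-on question.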

    Read the article

  • Should I *always* import my file references into the database in Drupal?

    - by sprugman
    I have a CCK type with an image field and a unique_id text field. The file name of the image is based on the unique_id. All of the content, including the image itself, is being generated automatically via another process, and I'm parsing what that generates into nodes. Rather than creating separate fields for the id and the image, and doing an official import of the image into the files table, I'm tempted to only create the id field and create the file reference in the theme layer. I can think of pros and cons:

    1) Theme layer approach. Pros:
       - makes the import process much less complex
       - don't have to worry about syncing the db with the file system as things change
       - more flexible -- I can move my images around more easily if I want
    Cons:
       - maybe not as much The Drupal Way™
       - not as pure -- I'll wind up with more logic on the theme side

    2) Import approach. Pros:
       - the import method is required if we ever wanted to make the files private (we won't)
       - safer? Maybe I'll know there's a problem with an image at import time, rather than at view time. Since I'll be bulk importing, that might make a difference.
       - if I delete a node through the admin interface, Drupal might be able to delete the file for me as well
    Con:
       - a more complex import and maintenance

    All else being equal, simpler is always better, so I'm leaning toward #1. Are there any other issues I'm missing? (Since this is an open-ended question, I guess I'll make it a community wiki, whatever that means.)

    Read the article

  • How to remove words based on a word count

    - by Chris
    Here is what I'm trying to accomplish. I have an object coming back from the database with a string description. This description can be up to 1000 characters long, but we only want to display a short view of it. So I coded up the following, but I'm having trouble actually removing the words beyond the count once the regular expression has found the total number of words. Does anyone have a good way of displaying only the words up to the Regex.Matches count? Thanks!

        if (!string.IsNullOrEmpty(myObject.Description))
        {
            string original = myObject.Description;
            MatchCollection wordColl = Regex.Matches(original, @"[\S]+");
            if (wordColl.Count < 70) // 70 words?
            {
                uxDescriptionDisplay.Text = string.Format("<p>{0}</p>", myObject.Description);
            }
            else
            {
                string shortenedText = original.Remove(200); // 200 characters?
                uxDescriptionDisplay.Text = string.Format("<p>{0}</p>", shortenedText);
            }
        }
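
    One hedged way to fill in that else branch (assuming System.Text.RegularExpressions is imported; the names mirror the original): use the position of the 70th match instead of a fixed 200 characters, so the text is never cut mid-word.

        MatchCollection wordColl = Regex.Matches(original, @"\S+");
        if (wordColl.Count <= 70)
        {
            uxDescriptionDisplay.Text = string.Format("<p>{0}</p>", original);
        }
        else
        {
            Match last = wordColl[69];  // the 70th word (0-based index)
            string shortenedText = original.Substring(0, last.Index + last.Length);
            uxDescriptionDisplay.Text = string.Format("<p>{0}&hellip;</p>", shortenedText);
        }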

    Read the article

  • File IO with Streams - Best Memory Buffer Size

    - by AJ
    I am writing a small IO library to assist with a larger (hobby) project. A part of this library performs various functions on a file, which is read/written via the FileStream object. On each StreamReader.Read(...) pass, I fire off an event which will be used in the main app to display progress information. The processing that goes on in the loop is varied, but is not too time-consuming (it could just be a simple file copy, for example, or may involve encryption...). My main question is: what is the best memory buffer size to use? Thinking about physical disk layouts, I could pick 2 KB, which would cover a CD sector size and is a nice multiple of a 512-byte hard disk sector. Higher up the abstraction tree, you could go for a larger buffer which could read an entire FAT cluster at a time. I realise that with today's PCs I could go for a more memory-hungry option (a couple of MiB, for example), but then I increase the time between UI updates and the user perceives a less responsive app. As an aside, I'm eventually hoping to provide a similar interface to files hosted on FTP/HTTP servers (over a local network / fast-ish DSL). What would be the best memory buffer size for those (again, the best trade-off between perceived responsiveness and performance)?
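
    Not an authoritative number, but a sketch of how this is usually parameterized in C# (the paths and the progress callback are my placeholders; FileStream's own internal buffer defaults to 4 KB, and sizes from 4 KB up to around 80 KB tend to measure about the same on local disks):

        const int BufferSize = 32 * 1024;  // assumption: 32 KiB as a starting point

        using (FileStream input = File.OpenRead(sourcePath))
        using (FileStream output = File.Create(destPath))
        {
            byte[] buffer = new byte[BufferSize];
            long total = 0;
            int read;
            while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
            {
                output.Write(buffer, 0, read);
                total += read;
                OnProgress(total, input.Length);  // hypothetical progress event
            }
        }

    Decoupling the two concerns -- a larger I/O buffer plus a timer-driven UI update (say 10 per second) instead of one event per Read -- avoids trading throughput against perceived responsiveness at all.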

    Read the article

  • Fixing an SQL WHERE statement that is ugly and confusing

    - by Mike Wills
    I am directly querying the back-end MS SQL Server for a software package. The key field (vehicle number) is defined as alpha, though we enter numeric values in the field. There is only one exception: we place an "R" before the number when the vehicle is being retired (which means we sold it or the vehicle is junked). Assuming the users do this right, we should not run into a problem with this method. (Right or wrong isn't the issue here.) Fast forward to now. I am trying to query a subset of these vehicle numbers (800 - 899) for some special processing. By doing a range of '800' to '899' we also get 80, 81, etc. If I cast the vehicle number to an INT, I should be able to get the right range, except that these "R" vehicles are kicking me in the butt now. I have tried

        where vehicleId not like 'R%' and cast(vehicleId as int) between 800 and 899

    however, I get a casting error on one of these "R" vehicles. What does work is

        where vehicleId not between '800' and '899' and cast(vehicleId as int) between 800 and 899

    but I feel there has to be a better and less confusing way. I have also tried other variations with HAVING and a subquery, all producing a casting error.
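
    One standard workaround, assuming SQL Server (a sketch; the table name is mine): AND gives no evaluation-order guarantee, so the CAST may run on an 'R...' row before the LIKE has filtered it out, whereas CASE does guarantee the WHEN is tested before the THEN:

        SELECT *
        FROM Vehicles  -- hypothetical table name
        WHERE CASE WHEN vehicleId NOT LIKE 'R%'
                   THEN CAST(vehicleId AS int)
              END BETWEEN 800 AND 899

    Retired vehicles yield NULL from the CASE, and NULL BETWEEN 800 AND 899 is simply not true, so they drop out without ever being cast.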

    Read the article

  • Get info from multiple files, match it and then display to end user, what is fastest?

    - by Patrick
    Hi, I need to build a website where we display data that is refreshed every 5 minutes in a text file with a | separator. I currently use Java to do this. What I do now: I grab the text file for every request through the website, process it, and then display the data to the end user. This works fine, since Java can go through 5000 lines of data fast, and when I filter it, it is still extremely fast. However, now management wants the following: they added 3 more text files with the | separator, and now want me to also read those files and match the information on certain fields, and if there is a match, also display that information to the end user. I think soon enough, although Java is fast, I will run into trouble when 10 people want that information and I have to run through 4 total files matching the information. What can I do to make this process super fast? My creative solutions so far:

    - Leave it this way, since Java is fast and end users can wait (probably less than 1 second).
    - Have a background process that dumps new data into a MySQL database every 5 minutes, since databases are extremely good at getting the same data from multiple tables.

    Thank you!
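
    A sketch of a cheaper cousin of the second idea, in Java (all names are mine): index each pipe-delimited file into a map keyed on the matching field once per 5-minute refresh, so each request is a handful of hash lookups instead of a scan of four files.

        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.HashMap;
        import java.util.Map;

        static Map<String, String[]> indexFile(Path file, int keyColumn) throws IOException {
            Map<String, String[]> byKey = new HashMap<>();
            for (String line : Files.readAllLines(file, StandardCharsets.UTF_8)) {
                String[] cols = line.split("\\|", -1);  // -1 keeps trailing empty fields
                byKey.put(cols[keyColumn], cols);
            }
            return byKey;
        }

    Rebuild the maps on a background timer and swap the references when done (e.g. via a volatile field); ten concurrent users then cost ten lookups, not ten four-file scans, and MySQL stays optional.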

    Read the article

  • Help me sort programming languages a bit

    - by b-gen-jack-o-neill
    Hi, so I asked here a few days ago about C# and its principles. Now, if I may, I have some additional general questions about some languages, because for a novice like me it seems a bit confusing. To be exact, I want to ask more about languages' functional capabilities than about syntax and so on. To be honest, it's just these special functions that bother me and make me so confused. For example, C has its printf(), Pascal has writeln(), and so on. I know that in assembler the output of these functions would be similar; every language has, more or less, its special functions for console output, for file manipulation, etc. But all these functions are de facto part of the OS API, so why is it that, for example, in C a distinction is made between C standard library functions and (on Windows) WinAPI functions, when even printf() has to use some Windows feature -- call some of its functions -- to actually show the desired text in the console window, because the actual "showing" is done by the OS? Where is the line between language functions and the system API? Now, the languages I don't quite understand -- Python, Ruby and similar. To be more specific, I know they are similar to Java and C# in that they are compiled into bytecode. But I do not understand what their capabilities are in terms of building GUI applications. I saw a tutorial for using Ruby to program GUI applications on Linux and Windows. But isn't that just some kind of upgrade? I mean, from other tutorials it seemed like these languages were first intended for small scripts rather than for building big applications. I hope you understand why I am confused. If you do, please help me sort it out a bit; I have no one else to ask.

    Read the article

  • Flex Sprite xy Coordinates

    - by Ian
    Hi, I have a drawing that looks more or less like the attached image. The orange square is the currently selected sprite. The sprites are all drawn from coordinates that I receive from XML:

        var sprObject:Sprite = new Sprite();
        sprObject.graphics.beginFill(itemList.c.toString());
        sprObject.name = strName;
        sprObject.graphics.moveTo(iX, iY);
        sprObject.graphics.lineTo(iX2, iY2);
        sprObject.graphics.lineTo(iX3, iY3);
        sprObject.graphics.lineTo(iX4, iY4);
        sprObject.graphics.lineTo(iX, iY);  // close the shape back at the start point
        sprObject.graphics.endFill();
        mainUI.addChild(sprObject);  // mainUI is a mx:UIComponent
        g_Sprite.push(sprObject);    // array of sprites

    What I want to do is the following: if I'm currently on the orange square and I use my keyboard direction buttons (up/down/left/right), I want to deselect the current sprite and select the next sprite in the appropriate direction. The problem I'm having is that I cannot get the x and y coordinates of the drawn sprites. If I look in the array, the x and y coordinates of the sprites are all 0. If I can retrieve them, I can write an algorithm to determine the next sprite to select. Any help would be appreciated.

    Read the article

  • Design for fastest page download

    - by mexxican
    I have a file with millions of URLs/IPs and have to write a program to download the pages really fast. The connection rate should be at least 6000/s and the file download rate at least 2000/s, with an average file size of 15 KB. The network bandwidth is 1 Gbps. My approach so far has been: creating 600 socket threads, each with 60 sockets, and using WSAEventSelect to wait for data to read. As soon as a file download is complete, I add that memory address (of the downloaded file) to a pipeline (a simple vector) and fire another request. When the total downloaded exceeds 50 MB across all socket threads, I write all the downloaded files to disk and free the memory. So far, this approach has not been very successful: the connection rate I could hit never shot beyond 2900/s, and the download rate was even lower. Can somebody suggest an alternative approach which could give me better stats? Also, I am working on a Windows Server 2008 machine with 8 GB of memory. Do we need to hack the kernel so we can use more threads and memory? Currently I can create a maximum of 1500 threads, and memory usage doesn't go beyond 2 GB [which technically should be much more, as this is a 64-bit machine]. And IOCP is out of the question, as I have no experience with it so far and have to fix this application today. Thanks guys!

    Read the article

  • How often should network traffic/collisions cause SNMP Sets to fail?

    - by A. Levy
    My team has a situation where an SNMP SET will fail once every two weeks or so. Since this SET happens automatically, we don't necessarily notice immediately when it fails, and this can result in an inconsistent configuration and the associated wailing and gnashing of teeth. The plan is to fix this by having our software automatically retry the SET when it fails. The problem is, we aren't sure why the failure is happening. My (extremely limited) knowledge of SNMP isn't particularly helpful in diagnosing this problem, so I thought I'd ask Stack Overflow for some advice. We think that every so often a spike in network traffic causes the SET to fail. Since SNMP uses UDP for communication, I would think it would be relatively easy for a command to be drowned out if traffic was high for a short period of time. However, I have no idea how common this is. We have a small network with a single Cisco router, and there are fewer than a dozen SNMP-controlled devices on that network. In addition to the SNMP traffic, there are some status web pages being loaded from the various devices. In case it makes a difference, I believe we are using the AdventNet SNMP API version 4.0.4 for Java. Does it sound reasonable that some SET commands will be dropped occasionally, or should we be looking for other causes?

    Read the article

  • Where to start with a map application

    - by rfders
    Hi there, I'm trying to design a new application which allows users to see their current location on a custom map (an office, a university campus, etc.), but I have a couple of questions in my mind (I haven't designed this kind of application before). I'm wondering:

    - How can I draw my own maps? What is the best option for it? Is there any format I have to take care of, or any specification for it?
    - Once I have my custom map, how do I map global positions (GPS) to local positions on it?
    - What are the tricks behind zooming on maps? Just different layers with more or less information, with the layers changing on the user's demand?
    - If I want to mark some specific points on the map, like a cafeteria, the boss's office, etc., how can I do that?

    Sorry if my questions are too generic and dumb, but I really need some clues about this topic because I don't have any idea how to design this kind of application as well as possible, and we don't want to reinvent the wheel. I will appreciate any help that you can provide.

    Read the article

  • strtotime not working with TIME?

    - by Prashant
    My MySQL column has this datetime value: 2011-04-11 11:00:00. When I apply strtotime to it, it returns a date less than today, whereas it should be greater than today. Also, when I try strtotime(date('d/m/Y h:i A')), it returns wrong values. Is there any problem with giving a TIME to strtotime? Basically, what I want to do is compare my MySQL column's date with today's date; if it's in the future, show "Upcoming", else show nothing. Please help and advise what I should do. Edited code:

        $_startdatetime = $rs['startdatetime'];
        $_isUpcoming = false;
        if (!empty($_startdatetime)) {
            $TEMP_strtime = strtotime($_startdatetime);
            $TEMP_strtime_today = strtotime(date('d/m/Y h:i A'));
            if ($TEMP_strtime_today < $TEMP_strtime) {
                $_isUpcoming = true;
                $_startdatetime = date('l, d F, Y h:i A', $TEMP_strtime);
            }
        }

    The value in $rs['startdatetime'] is 2011-04-11 11:00:00, and with this value I am getting the following output:

        $TEMP_strtime - 1302519600
        $TEMP_strtime_today - 1314908160
        $_startdatetime - 2011-04-11 11:00:00

    $_startdatetime is not formatted because the upcoming condition is false, so it returns the MySQL value as is.
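
    A hedged guess from the output shown: strtotime() reads slash-separated dates as US month/day/year, so round-tripping "today" through date('d/m/Y h:i A') can silently swap day and month. The MySQL 'Y-m-d H:i:s' string parses fine on its own, so comparing it straight against time() sidesteps the reformatting -- a sketch:

        $isUpcoming = false;
        if (!empty($rs['startdatetime'])) {
            $start = strtotime($rs['startdatetime']);   // '2011-04-11 11:00:00'
            if ($start !== false && $start > time()) {  // genuinely in the future?
                $isUpcoming = true;
                $startFormatted = date('l, d F, Y h:i A', $start);
            }
        }

    Also note that with the numbers shown the comparison may be legitimately false: 1302519600 (11 Apr 2011) really is earlier than 1314908160 (1 Sep 2011), i.e. the event is already past.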

    Read the article

  • Tree iterator, can you optimize this any further?

    - by Ron
    As a follow-up to my original question about a small piece of this code, I decided to ask a follow-up to see if you can do better than what we came up with so far. The code below iterates over a binary tree (left/right = child/next). I do believe there is room for one less conditional in here (the down boolean). The fastest answer wins!

    - The cnt statement can be multiple statements, so let's make sure it appears only once.
    - The child() and next() member functions are about 30x as slow as the hasChild() and hasNext() operations.
    - Keep it iterative <-- dropped this requirement, as the recursive solution presented was faster.
    - This is C++ code.
    - The visit order of the nodes must stay as it is in the example below (hit parents first, then the children, then the 'next' nodes).
    - BaseNodePtr is a boost::shared_ptr, and thus assignments are slow; avoid any temporary BaseNodePtr variables.

    Currently this code takes 5897 ms to visit 62,200,000 nodes in a test tree, calling this function 200,000 times.

        void processTree (BaseNodePtr current, unsigned int & cnt )
        {
            bool down = true;
            while ( true )
            {
                if ( down )
                {
                    while (true) {
                        cnt++; // this can/will be multiple statements
                        if (!current->hasChild()) break;
                        current = current->child();
                    }
                }
                if ( current->hasNext() )
                {
                    down = true;
                    current = current->next();
                }
                else
                {
                    down = false;
                    current = current->parent();
                    if (!current) return; // done.
                }
            }
        }
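
    For reference, a sketch of the recursive shape the update alludes to (same visit order -- node, then children, then siblings -- with no down flag; the shared_ptr goes by const reference so no temporary copies touch the refcount):

        void processTree( const BaseNodePtr & current, unsigned int & cnt )
        {
            cnt++;                                    // this can/will be multiple statements
            if ( current->hasChild() )
                processTree( current->child(), cnt ); // then its subtree
            if ( current->hasNext() )
                processTree( current->next(), cnt );  // then its next sibling
        }

    The sibling recursion is effectively a tail call, but if the compiler doesn't eliminate it, very long next chains will deepen the stack -- worth checking against the test tree.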

    Read the article

  • Algorithm to determine if array contains n...n+m?

    - by Kyle Cronin
    I saw this question on Reddit, and no positive solutions were presented, so I thought it would be a perfect question to ask here. This was in a thread about interview questions: Write a method that takes an int array of size m and returns (true/false) whether the array consists of the numbers n...n+m-1 -- all numbers in that range and only numbers in that range. The array is not guaranteed to be sorted. (For instance, {2,3,4} would return true; {1,3,1} would return false; {1,2,4} would return false.) The problem I had with this one is that my interviewer kept asking me to optimize (faster O(n), less memory, etc.), to the point where he claimed you could do it in one pass of the array using a constant amount of memory. Never figured that one out. Along with your solutions, please indicate whether they assume that the array contains unique items. Also indicate whether your solution assumes the sequence starts at 1. (I've modified the question slightly to allow cases where it goes 2, 3, 4...) Edit: I am now of the opinion that there does not exist an algorithm linear in time and constant in space that handles duplicates. Can anyone verify this? The duplicate problem boils down to testing whether the array contains duplicates in O(n) time and O(1) space. If this can be done, you can simply test first, and if there are no duplicates, run the algorithms posted. So can you test for dupes in O(n) time and O(1) space?
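
    A sketch of the two regimes in Python (my code, not from the thread):

        def is_consecutive_unique(arr):
            """Assumes all items are distinct: m distinct values fit inside a
            span of m-1 only if they are exactly n..n+m-1. O(n) time, O(1) space."""
            return not arr or max(arr) - min(arr) == len(arr) - 1

        def is_consecutive(arr):
            """No uniqueness assumption: O(n) time but O(n) space for the set.
            Whether the set can be dropped is exactly the open question above."""
            if not arr:
                return True
            lo, m, seen = min(arr), len(arr), set()
            for x in arr:
                if x < lo or x >= lo + m or x in seen:
                    return False
                seen.add(x)
            return True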

    Read the article

  • Syntax for finding structs in multisets - C++

    - by Sarah
    I can't seem to figure out the syntax for finding structs in containers. I have a multiset of Event structs, and I'm trying to find one of these structs by searching on its key. I get the compiler error commented below.

        struct Event {
        public:
            bool operator < ( const Event & rhs ) const { return ( time < rhs.time ); }
            bool operator > ( const Event & rhs ) const { return ( time > rhs.time ); }
            bool operator == ( const Event & rhs ) const { return ( time == rhs.time ); }

            double time;
            int eventID;
            int hostID;
            int s;
        };

        typedef std::multiset< Event, std::less< Event > > EventPQ;
        EventPQ currentEvents;

        double oldRecTime = 20.0;
        EventPQ::iterator ceItr = currentEvents.find( EventPQ::key_type( oldRecTime ) ); // no matching function call

    I've tried a few permutations to no avail. I thought defining the equality operator was going to be enough.
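
    A hedged reading of the error: EventPQ::key_type is Event itself, and Event has no constructor taking a double, hence "no matching function call". A probe value sidesteps it, since the comparator (operator<) only ever looks at time:

        Event probe = Event();  // value-initialize every member
        probe.time = oldRecTime;
        EventPQ::iterator ceItr = currentEvents.find( probe );

    Note that operator== plays no part here: multiset::find uses only the ordering, treating two events as equivalent when neither is less than the other.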

    Read the article

  • Asynchronous Controller is blocking requests in ASP.NET MVC through jQuery

    - by Jason
    I have just started using the AsyncController in my project to take care of some long-running reports. It seemed ideal at the time, since I could kick off the report and then perform a few other actions while waiting for it to come back and populate elements on the screen. My controller looks a bit like this. I tried to use a thread to perform the long task, which I'd hoped would free up the controller to take more requests:

        public class ReportsController : AsyncController
        {
            public void LongRunningActionAsync()
            {
                AsyncManager.OutstandingOperations.Increment();
                var newThread = new Thread(LongTask);
                newThread.Start();
            }

            private void LongTask()
            {
                // Do something that takes a really long time
                //.......
                AsyncManager.OutstandingOperations.Decrement();
            }

            public ActionResult LongRunningActionCompleted(string message)
            {
                // Set some data up on the view or something...
                return View();
            }

            public JsonResult AnotherControllerAction()
            {
                // Do a quick task...
                return Json("...");
            }
        }

    But what I am finding is that when I call LongRunningAction using the jQuery ajax request, any further requests I make after that queue up behind it and are not processed until LongRunningAction completes. For example, call LongRunningAction, which takes 10 seconds, and then call AnotherControllerAction, which takes less than a second: AnotherControllerAction simply waits until LongRunningAction completes before returning a result. I've also checked the jQuery code, but this still happens if I specifically set "async: true":

        $.ajax({
            async: true,
            type: "POST",
            url: "/Reports.aspx/LongRunningAction",
            dataType: "html",
            success: function(data, textStatus, XMLHttpRequest) {
                // ...
            },
            error: function(XMLHttpRequest, textStatus, errorThrown) {
                // ...
            }
        });

    At the moment I just have to assume that I'm using it incorrectly, but I'm hoping one of you guys can clear my mental block!

    Read the article

  • How to make it easy for users to register at my site?

    - by rob
    I want to make it dirt simple for users coming to my site to register so they can post comments, vote on things, etc. I would like for them to be able to use their Facebook ID, Twitter ID, Yahoo Mail ID, Gmail ID, AIM ID, MSN ID, or whatever else people are likely to have (not necessarily all of those, but the more the better). I want my mom to be able to do it in 30 seconds or less (that is, no "enter your OpenID URL here" type thing that would confuse her). I prefer they not have to pick a unique name, as that gets annoying as the site gets more users and it gets hard to find one that is unique. What is the best option here? I'm not quite sure about OpenID vs. OAuth, and whether there are other options. And I'd like it to be as simple for me, the developer, as possible (of course!). I don't want to spend forever learning some protocol, nor have to structure my whole app around this. It would be great if there were a site with sample code that is pretty easy to drop in. BTW, Stack Overflow is a good example of a site that was easy for me to register for.

    Read the article

  • HTML columns or rows for form layout?

    - by Valera
    I'm building a bunch of forms that have labels and corresponding fields (input elements or more complex elements). Labels go on the left, fields go on the right. Labels in a given form should all be a specific width so that the fields all line up vertically. There are two ways (maybe more?) of achieving this:

    1) Rows: Float each label and each field left. Put each label and field in a field-row div/container. Set the label width to some specific number. With this approach, labels on different forms will have different widths, because they'll depend on the width of the text in the longest label.

    2) Columns: Put all labels in one div/container that's floated left; put all fields in another floated-left container with padding-left set. This way the labels -- and even the label container -- don't need to have their widths set, because the column layout and the padding-left uniformly take care of vertically lining up all the fields.

    So approach #2 seems to be easier to implement (because the widths don't need to be set all the time), but I think it's also less object-oriented, because a label and the field that goes with it are not grouped together, as they are in approach #1. Also, if building forms dynamically, approach #2 doesn't work as well with functions like addRow(label, field), since addRow would have to know about the label and field containers, instead of just creating/adding one field-row element. Which approach do you think is better? Is there another, better approach than these two?

    Read the article

  • iPhone AES encryption issue

    - by Dilshan
    Hi, I use the following code to encrypt using AES:

        - (NSData*)AES256EncryptWithKey:(NSString*)key theMsg:(NSData *)myMessage
        {
            // 'key' should be 32 bytes for AES256, will be null-padded otherwise
            char keyPtr[kCCKeySizeAES256 + 1]; // room for terminator (unused)
            bzero(keyPtr, sizeof(keyPtr));     // fill with zeroes (for padding)

            // fetch key data
            [key getCString:keyPtr maxLength:sizeof(keyPtr) encoding:NSUTF8StringEncoding];

            NSUInteger dataLength = [myMessage length];

            // See the doc: For block ciphers, the output size will always be less than or
            // equal to the input size plus the size of one block.
            // That's why we need to add the size of one block here.
            size_t bufferSize = dataLength + kCCBlockSizeAES128;
            void* buffer = malloc(bufferSize);

            size_t numBytesEncrypted = 0;
            CCCryptorStatus cryptStatus = CCCrypt(kCCEncrypt, kCCAlgorithmAES128, kCCOptionPKCS7Padding,
                                                  keyPtr, kCCKeySizeAES256,
                                                  NULL /* initialization vector (optional) */,
                                                  [myMessage bytes], dataLength, /* input */
                                                  buffer, bufferSize,            /* output */
                                                  &numBytesEncrypted);
            if (cryptStatus == kCCSuccess) {
                // the returned NSData takes ownership of the buffer and will free it on deallocation
                return [NSData dataWithBytesNoCopy:buffer length:numBytesEncrypted];
            }

            free(buffer); // free the buffer
            return nil;
        }

    However, the following code chunk returns null if I try to print the encryptmessage variable. The same thing applies to decryption as well. What am I doing wrong here?

        NSData *encrData = [self AES256EncryptWithKey:theKey theMsg:myMessage];
        NSString *encryptmessage = [[NSString alloc] initWithData:encrData encoding:NSUTF8StringEncoding];

    Thank you

    Read the article

  • Question on SQL Grouping

    - by Lijo
    Hi team, I am trying to achieve the following without using a subquery. For a funding, I would like to select the latest Letter created date and the 'earliest worklist created since letter created' date.

        FundingId  Letter
        (1, 1/1/2009) (1, 5/5/2009) (1, 8/8/2009) (2, 3/3/2009)

        FundingId  WorkList
        (1, 5/5/2009) (1, 9/9/2009) (1, 10/10/2009) (2, 2/2/2009)

        Expected result -
        FundingId  Letter    WorkList
        (1, 8/8/2009, 9/9/2009)

    I wrote a query as follows. It has a bug: it will omit those FundingIds for which the minimum WorkList date is less than the latest Letter date (even though they have another worklist with a date greater than the letter created date).

        CREATE TABLE #Funding(
            [Funding_ID] [int] IDENTITY(1,1) NOT NULL,
            [Funding_No] [int] NOT NULL,
            CONSTRAINT [PK_Center_Center_ID] PRIMARY KEY NONCLUSTERED ([Funding_ID] ASC)
        ) ON [PRIMARY]

        CREATE TABLE #Letter(
            [Letter_ID] [int] IDENTITY(1,1) NOT NULL,
            [Funding_ID] [int] NOT NULL,
            [CreatedDt] [SMALLDATETIME],
            CONSTRAINT [PK_Letter_Letter_ID] PRIMARY KEY NONCLUSTERED ([Letter_ID] ASC)
        ) ON [PRIMARY]

        CREATE TABLE #WorkList(
            [WorkList_ID] [int] IDENTITY(1,1) NOT NULL,
            [Funding_ID] [int] NOT NULL,
            [CreatedDt] [SMALLDATETIME],
            CONSTRAINT [PK_WorkList_WorkList_ID] PRIMARY KEY NONCLUSTERED ([WorkList_ID] ASC)
        ) ON [PRIMARY]

        SELECT F.Funding_ID, Funding_No, MAX(L.CreatedDt), MIN(W.CreatedDt)
        FROM #Funding F
        INNER JOIN #Letter L ON L.Funding_ID = F.Funding_ID
        LEFT OUTER JOIN #WorkList W ON W.Funding_ID = F.Funding_ID
        GROUP BY F.Funding_ID, Funding_No
        HAVING MIN(W.CreatedDt) > MAX(L.CreatedDt)

    How can I write a correct query without using a subquery? Please help. Thanks, Lijo

    Read the article

  • Visual C++ function suddenly 170 ms slower (4x longer)

    - by Mikael
    For the past few months I've been working on a Visual C++ project to take images from cameras and process them. Up until today this has taken about 65 ms per data update, but now it has suddenly increased significantly. What happens is: I launch my program, and for the first 30 or so iterations it performs as expected; then suddenly the loop time increases from 65 ms to 250 ms. The odd thing is, after timing each function I found that the part of the code causing the slowdown is fairly basic and has not been modified in over a month. The data which goes into it is unchanged and identical every iteration, but the execution time, which is initially less than 1 ms, suddenly increases to 170 ms, while the rest of the code still performs as expected (time-wise). Basically, I am calling the same function over and over; for the first 30 calls it performs as it should, and after that it slows down for no apparent reason. It might also be worth noting that it is a sudden change in execution time, not a gradual increase. What could be causing this? The code is leaking some memory (~50 KB/s), but not nearly enough to warrant a sudden 4x slowdown. If anyone has any ideas, I'd love to hear them!

    Read the article

  • Need help debugging a huge chunk of JSON data...

    - by meder
    I have a huge chunk of JSON, so large that I can't manually edit the file and need to read it in and do regex operations to see what's wrong. Basically, my server runs PHP 5.1.6 and I can't update it. It features an older json_decode which is less featured than the 5.2/5.3 versions. json_decode returns NULL, and json_last_error is being invoked, but that function doesn't exist except in PHP 5.3, so I'm manually trying to see what's wrong.

        $regex = '#[^0-9"$a-zA-Z{:}().]#';
        $json = preg_replace( $regex, '', $json );
        $tree = json_decode ( $json, true );
        var_dump($tree); // NULL

    A snippet of the JSON, somewhere in the middle:

        {"109":0,"103":1,"102":59,"101":70,"100":4299,"94":0,"50":51,"46":0,"45":0,"44":0,"43":0,"42":0,"23":0,"22":0,"18":0,"17":1,"16":1,"13":160,"8":4298}},"2":{"d":{"109":0,"103":92,"102":54,"101":53,"100":4301,"94":0,"50":4278,"49":328,"46":1,"45":0,"44":1,"43":0,"42":0,"26":0,"23":0,"22":0,"18":0,"17":1,"16":1,"8":4300},"m":{"94":1,"100":1,"26":1,"50":1,"8":1,"49":1,"18":1,"43":1,"42":1,"109":1},"c":{"\/":{"d":{"109":0,"100":4301,"94":0,"50":4278,"49":328,"43":0,"42":0,"26":0,"18":0,"8":4300}},"G":{"d":{"109":1,"100":4303,"94":1,"68":17,"50":64,"49":53,"43":1,"42":1,"34":0,"18":1,"13":2216,"11":0,"8":4302}}}},"3":

    The }}}} is suspicious, but it probably just closes 4 nested object literals. I would appreciate any insight.
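
    One thing worth flagging about the cleanup itself (a sketch; the character class is my guess at the intent): the original class strips commas, square brackets, minus signs and backslashes, which no JSON document survives -- so the preg_replace can produce a NULL decode even from well-formed input. Keeping JSON's structural characters looks like:

        $regex = '#[^0-9A-Za-z"$(){}\[\]:,.\-+_/\\\\ ]#';
        $json  = preg_replace( $regex, '', $json );
        $tree  = json_decode( $json, true );

    With that in place, a decode that still returns NULL points at the data rather than at the scrubbing.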

    Read the article

  • Finding long repeated substrings in a massive string

    - by Will
    I naively imagined that I could build a suffix trie where I keep a visit-count for each node, and then the deepest nodes with counts greater than one are the result set I'm looking for. I have a really really long string (hundreds of megabytes). I have about 1 GB of RAM. This is why building a suffix trie with counting data is too inefficient space-wise to work for me. To quote Wikipedia's Suffix tree: storing a string's suffix tree typically requires significantly more space than storing the string itself. The large amount of information in each edge and node makes the suffix tree very expensive, consuming about ten to twenty times the memory size of the source text in good implementations. The suffix array reduces this requirement to a factor of four, and researchers have continued to find smaller indexing structures. And that was wikipedia's comments on the tree, not trie. How can I find long repeated sequences in such a large amount of data, and in a reasonable amount of time (e.g. less than an hour on a modern desktop machine)? (Some wikipedia links to avoid people posting them as the 'answer': Algorithms on strings and especially Longest repeated substring problem ;-) )

    Read the article
