Search Results

Search found 65999 results on 2640 pages for 'large data volumes'.


  • Store the cache data locally

    - by Lu Lu
    Hello, I am developing a C# WinForms application; it is a client that connects to a web service to get data. The data returned by the web service is a DataTable, which the client displays in a DataGridView. My problem is that the client takes a long time to get all the data from the server (the web service is not local to the client), so I have to use a thread to get the data. This is my model: the client creates a thread to get the data - the thread completes and sends an event to the client - the client displays the data in the DataGridView on a form. However, when the user closes the form and opens it again later, the client must fetch the data all over again, which makes the client slow. So I am thinking about a data cache: Client <---get/add/edit/delete--- Cached Data ---get/add/edit/delete--- Server (web service). Please give me some suggestions. For example: should the cached data live in a separate application on the same host as the client, or run inside the client itself? Please suggest some techniques for implementing this solution, and examples if you have any. Thanks.
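    For what it's worth, a minimal sketch of the cache layer being described - in Python rather than C# to keep it brief, with every name (CachedDataSource, fetch_remote, ttl_seconds) invented for illustration. The form asks the cache; only the cache talks to the web service:

```python
import time

class CachedDataSource:
    """Minimal sketch of a client-side cache between a UI form and a remote service.
    All names here are illustrative, not from the original post."""

    def __init__(self, fetch_remote, ttl_seconds=300):
        self.fetch_remote = fetch_remote   # callable that actually hits the web service
        self.ttl_seconds = ttl_seconds     # how long cached data counts as fresh
        self._cache = {}                   # key -> (timestamp, data)

    def get(self, key):
        entry = self._cache.get(key)
        if entry is not None:
            fetched_at, data = entry
            if time.time() - fetched_at < self.ttl_seconds:
                return data                # serve from cache, no network round trip
        data = self.fetch_remote(key)      # slow call, ideally done on a worker thread
        self._cache[key] = (time.time(), data)
        return data

    def invalidate(self, key):
        # call this after an add/edit/delete has been pushed to the server
        self._cache.pop(key, None)
```

    With this shape the form can be closed and reopened without another slow service call, and an add/edit/delete simply invalidates the affected key so the next read refreshes it.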

    Read the article

  • Objective-C code to handle a large amount of data processing on the iPhone

    - by user167662
    I had the following code that takes in 14 MB or more of image data encoded as base64 strings and converts them to JPEG before writing to a file on the iPhone. It crashes my program with the following error: Program received signal: “0”. warning: check_safe_call: could not restore current frame. I tweaked my program and it can process a few more images before the error appears again. My code is as follows: // parameters is an array where the fourth element contains a list of images as base64-encoded strings NSMutableArray *imageStrList = (NSMutableArray*) [parameters objectAtIndex:5]; while (imageStrList.count != 0) { NSString *imgString = [imageStrList objectAtIndex:0]; // Create a file name using my own Utility class NSString *fileName = [Utility generateFileNName]; NSData *restoredImg = [NSData decodeWebSafeBase64ForString:imgString]; UIImage *img = [UIImage imageWithData: restoredImg]; NSData *imgJPEG = UIImageJPEGRepresentation(img, 0.4f); [imgJPEG writeToFile:fileName atomically:YES]; [imageStrList removeObjectAtIndex:0]; } I tried playing around with UIImageJPEGRepresentation and found that the lower the value, the more images it can process, but this should not be the way. I am wondering if there is any way to free up the memory used by each entry of imageStrList immediately after processing it, so that it is available for the next one in line.

    Read the article

  • Non-standard interaction between two tables to avoid a very large merge

    - by riko
    Suppose I have two tables A and B. Table A has a multi-level index (a, b) and one column (ts). b uniquely determines ts. A = pd.DataFrame( [('a', 'x', 4), ('a', 'y', 6), ('a', 'z', 5), ('b', 'x', 4), ('b', 'z', 5), ('c', 'y', 6)], columns=['a', 'b', 'ts']).set_index(['a', 'b']) AA = A.reset_index() Table B is another one-column (ts) table with a non-unique index (a). The ts's are sorted "inside" each group, i.e., B.ix[x] is sorted for each x. Moreover, there is always a value in B.ix[x] that is greater than or equal to the values in A. B = pd.DataFrame( dict(a=list('aaaaabbcccccc'), ts=[1, 2, 4, 5, 7, 7, 8, 1, 2, 4, 5, 8, 9])).set_index('a') The semantics here are that B contains observations of occurrences of an event of the type indicated by the index. I would like to find from B the timestamp of the first occurrence of each event type at or after the timestamp specified in A for each value of b. In other words, I would like to get a table with the same shape as A that, instead of ts, contains the "minimum value occurring after ts" as specified by table B. So, my goal would be: C: ('a', 'x') 4 ('a', 'y') 7 ('a', 'z') 5 ('b', 'x') 7 ('b', 'z') 7 ('c', 'y') 8 I have some working code, but it is terribly slow. C = AA.apply(lambda row: ( row[0], row[1], B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2]))), axis=1).set_index(['a', 'b']) Profiling shows the culprit is obviously B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2])). However, standard solutions using merge/join would take too much RAM in the long run. Consider that right now I have 1,000 a's; assume a constant average number of b's per a (probably 100-200), and consider that the number of observations per a is probably on the order of 300. In production I will have 1,000 times more a's. 1,000,000 x 200 x 300 = 60,000,000,000 rows may be a bit too much to keep in RAM, especially considering that the data I need is perfectly described by a C like the one I discussed above. How would I improve the performance?
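    One possible way to attack the slowness, sketched against a current pandas API (to_numpy) rather than the older .ix/.irow calls in the post, and offered only as an illustration: extract each group of B into a plain sorted NumPy array once, then do a single vectorized searchsorted per group of A instead of an .ix lookup per row.

```python
import numpy as np
import pandas as pd

# Rebuild the small example from the question.
A = pd.DataFrame(
    [('a', 'x', 4), ('a', 'y', 6), ('a', 'z', 5),
     ('b', 'x', 4), ('b', 'z', 5), ('c', 'y', 6)],
    columns=['a', 'b', 'ts']).set_index(['a', 'b'])
B = pd.DataFrame(
    dict(a=list('aaaaabbcccccc'),
         ts=[1, 2, 4, 5, 7, 7, 8, 1, 2, 4, 5, 8, 9])).set_index('a')

# Pull each group's (already sorted) timestamps out of B once, as plain arrays.
b_ts = {key: grp['ts'].to_numpy() for key, grp in B.groupby(level=0)}

AA = A.reset_index()
parts = []
for a_val, block in AA.groupby('a'):
    ts_sorted = b_ts[a_val]
    # One binary search per row of this block, all done inside NumPy.
    pos = np.searchsorted(ts_sorted, block['ts'].to_numpy(), side='left')
    parts.append(pd.Series(ts_sorted[pos], index=block.index))

C = AA.assign(ts=pd.concat(parts)).set_index(['a', 'b'])
print(C)
```

    On the small example this reproduces the C shown above, and memory stays proportional to A plus B, since nothing is ever merged row by row.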

    Read the article

  • Hang during databinding of large amount of data to WPF DataGrid

    - by nihi_l_ist
    I'm using the WPFToolkit DataGrid control and do the binding this way: <WpfToolkit:DataGrid x:Name="dgGeneral" SelectionMode="Single" SelectionUnit="FullRow" AutoGenerateColumns="False" CanUserAddRows="False" CanUserDeleteRows="False" Grid.Row="1" ItemsSource="{Binding Path=Conversations}" > public List<CONVERSATION> Conversations { get { return conversations; } set { if (conversations != value) { conversations = value; NotifyPropertyChanged("Conversations"); } } } public event PropertyChangedEventHandler PropertyChanged; public void NotifyPropertyChanged(string propertyName) { if (PropertyChanged != null) { PropertyChanged(this, new PropertyChangedEventArgs(propertyName)); } } public void GenerateData() { BackgroundWorker bw = new BackgroundWorker(); bw.WorkerSupportsCancellation = bw.WorkerReportsProgress = true; List<CONVERSATION> list = new List<CONVERSATION>(); bw.DoWork += delegate { list = RefreshGeneralData(); }; bw.RunWorkerCompleted += delegate { try { Conversations = list; } catch (Exception ex) { CustomException.ExceptionLogCustomMessage(ex); } }; bw.RunWorkerAsync(); } And then in the main window I call GenerateData() after setting the DataContext of the window to an instance of the class containing GenerateData(). RefreshGeneralData() returns the list of data I want, and it returns it quickly. Overall there are nearly 2000 records and 6 columns (I'm not posting the code I used during the grid's initialization, because I don't think it can be the reason) and the grid hangs for almost 10 seconds!

    Read the article

  • jQuery fails to retrieve accurate data from sibling field

    - by i need help
    wonder what's wrong <table id=tblDomainVersion> <tr> <td>Version</td> <td>No of sites</td> </tr> <tr> <td class=clsversion>1.25</td> <td><a id=expanddomain>3 sites</a><span id=spanshowall></span></td> </tr> <tr> <td class=clsversion>1.37</td> <td><a id=expanddomain>7 sites</a><span id=spanshowall></span></td> </tr> </table> $('#expanddomain').click(function() { //the siblings result incorrect //select first row will work //select second row will no response var versionforselected= $('#expanddomain').parent().siblings("td.clsversion").text(); alert(versionforselected); $.ajax({ url: "ajaxquery.php", type: "POST", data: 'version='+versionforselected, timeout: 900000, success: function(output) { output= jQuery.trim(output); $('#spanshowall').html(output); }, }); });

    Read the article

  • Is there a more memory efficient way to search through a Core Data database?

    - by Kristian K
    I need to see if an object that I have obtained from a CSV file with a unique identifier exists in my Core Data Database, and this is the code I deemed suitable for this task: NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; NSEntityDescription *entity; entity = [NSEntityDescription entityForName:@"ICD9" inManagedObjectContext:passedContext]; [fetchRequest setEntity:entity]; NSPredicate *pred = [NSPredicate predicateWithFormat:@"uniqueID like %@", uniqueIdentifier]; [fetchRequest setPredicate:pred]; NSError *err; NSArray* icd9s = [passedContext executeFetchRequest:fetchRequest error:&err]; [fetchRequest release]; if ([icd9s count] > 0) { for (int i = 0; i < [icd9s count]; i++) { NSAutoreleasePool *pool = [[NSAutoreleasePool alloc]init]; NSString *name = [[icd9s objectAtIndex:i] valueForKey:@"uniqueID"]; if ([name caseInsensitiveCompare:uniqueIdentifier] == NSOrderedSame && name != nil) { [pool release]; return [icd9s objectAtIndex:i]; } [pool release]; } } return nil; After more thorough testing it appears that this code is responsible for a huge amount of leaking in the app I'm writing (it crashes on a 3GS before making it 20 percent through the 1459 items). I feel like this isn't the most efficient way to do this, any suggestions for a more memory efficient way? Thanks in advance!

    Read the article

  • How to manage large numbers of delegates and user callbacks in a C# async HTTP library

    - by Tyler
    I'm coding a .NET library in C# for communicating with XBMC via its JSON RPC interface using HTTP. I coded and released a preliminary version but everything is done synchronously. I then recoded the library to be asynchronous for my own purposes as I was/am building an XBMC remote for WP7. I now want to release the new async library but want to make sure it's nice and tidy before I do. Due to the async nature a user initiates a request, supplies a callback method that matches my delegate and then handles the response once it's been received. The problem I have is that within the library I track a RequestState object for the lifetime of the request, it contains the http request/response as well as the user callback etc. as member variables, this would be fine if only one type of object was coming back but depending on what the user calls they may be returned a list of songs or a list of movies etc. My implementation at the moment uses a single delegate ResponseDataRecieved which has a single parameter which is a simple Object - As this has only be used by me I know which methods return what and when I handle the response I cast said object to the type I know it really is - List, List etc. A third party shouldn't have to do this though - The delegate signature should contain the correct type of object. So then I need a delegate for every type of response data that can be returned to the third party - The specific problem is, how do I handle this gracefully internally - Do I have a bunch of different RequestState objects that each have a different member variable for the different delegates? That doesn't "feel" right. I just don't know how to do this gracefully and cleanly.

    Read the article

  • Silverlight 4 caching issue?

    - by DavidS
    I am currently experiencing a weird caching problem it would seem. When I load my data intially, I return all the data within given dates and my graph looks as follows: Then I filter the data to return a subset of the original data for the same date range (not that it matters) and I get the following view of my data: However, I intermittently get the following when I refresh the same filterd view of the data: One can see that not all the data gets cached but only some of it i.e. for 12 Dec 2010 and 5 dec 2010(not shown here). I've looked at my queries and the correct data is getting pulled out. It is only on the presentation layer i.e. on Mainpage.xaml.cs that this erroneous data seems to exist. I've stepped through the code and the data is corect through all the layers except on the presentation layer. Has anyone experienced this before? Is there some sort of caching going in the background that is keeping that data in the background as I've got browser caching off? I am using the LoadOperation in the callback method within the Load method of the DomainContext if that helps...

    Read the article

  • How can I synchronize one set of data with another?

    - by RenderIn
    I have an old database and a new database. The old records were converted to the new database recently. All our old applications continue to point to the old database, but the new applications point to the new database. Currently the old database is the only one being updated, so throughout the day the new database becomes out of sync. It is acceptable for the new database to be out of sync for a day, so until all our applications are pointed to the new database I just need to write a nightly cron job that will bring it up to date. I do not want to purge the new database and run the complete conversion script each night, as that would reduce uptime and would create a mess in our auditing of that table. I'm thinking about selecting all the data from the old database, converting it to the new database structure in memory, and then checking for the existence of each record before inserting it in the new database. After that's done, I'd select everything from the new database and check if it exists in the old one, and if not delete it. Is this the simplest way to do this?
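    A bare-bones sketch of the nightly job described above, written in Python against sqlite3-style connections purely for illustration - the table names, the id/payload columns, and the convert_row hook (standing in for the old-to-new structure conversion) are all hypothetical:

```python
def sync_nightly(old_conn, new_conn, convert_row=lambda payload: payload):
    """One-way nightly sync sketch (sqlite3-style connections).
    Table and column names are hypothetical; convert_row stands in for the
    old-to-new structure conversion mentioned in the question."""
    old_rows = old_conn.execute("SELECT id, payload FROM old_table").fetchall()
    converted = {row_id: convert_row(payload) for row_id, payload in old_rows}

    existing = {row_id for (row_id,) in
                new_conn.execute("SELECT id FROM new_table").fetchall()}

    # Insert records the new database is missing.
    for row_id, payload in converted.items():
        if row_id not in existing:
            new_conn.execute(
                "INSERT INTO new_table (id, payload) VALUES (?, ?)",
                (row_id, payload))

    # Delete records that no longer exist in the old database.
    for row_id in existing - converted.keys():
        new_conn.execute("DELETE FROM new_table WHERE id = ?", (row_id,))

    new_conn.commit()
```

    Rows changed in place in the old database are not handled here, since the question only describes inserting missing records and deleting removed ones; an UPDATE pass could be added on top of the same comparison.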

    Read the article

  • Best way to get distinct values from large table

    - by derivation
    I have a db table with about 10 or so columns, two of which are month and year. The table has about 250k rows now, and we expect it to grow by about 100-150k records a month. A lot of queries involve the month and year column (ex, all records from march 2010), and so we frequently need to get the available month and year combinations (ie do we have records for april 2010?). A coworker thinks that we should have a separate table from our main one that only contains the months and years we have data for. We only add records to our main table once a month, so it would just be a small update on the end of our scripts to add the new entry to this second table. This second table would be queried whenever we need to find the available month/year entries on the first table. This solution feels kludgy to me and a violation of DRY. What do you think is the correct way of solving this problem? Is there a better way than having two tables?

    Read the article

  • Dealing with large directories in a checkout

    - by Eric
    I am trying to come up with a version control process for a web app that I work on. Currently, my major stumbling blocks are two directories that are huge (both over 4GB). Only a few people need to work on things within the huge directories; most people don't even need to see what's in them. Our directory structure looks something like: / --file.aspx --anotherFile.aspx --/coolThings ----coolThing.aspx --/bigFolder ----someHugeMovie.mov ----someHugeSound.mp3 --/anotherBigFolder ----... I'm sure you get the picture. It's hard to justify a checkout that has to pull down 8GB of data that's likely useless to a developer. I know, it's only once, but even once could be really frustrating for someone (and will make it harder for me to convince everyone to use source control). (Plus, clean checkouts will be painfully slow.) These folders do have to be available in the web application. What can I do? I've thought about separate repositories for the big folders. That way, you only download if you need it; but then how do I manage checking these out onto our development server? I've also thought about not trying to version control those folders: just update them directly on the web server... but I am not enamored of this idea. Is there some magic way to simply exclude directories from a checkout that I haven't found? (Pretty sure there is not.) Of course, there's always the option to just give up, bite the bullet, and accept downloading 8 useless GB. What say you? Have you encountered this problem before? How did you solve it?

    Read the article

  • Can't Deploy or Upload Large SSRS 2008 Report from VS or IE

    - by Bratch
    So far in this project I have two reports in VS2008/BIDS. The first one contains 1 tablix and is about 100k. The second one contains 3 tablixes (tablices?) and is about 257k. I can successfully deploy the smaller report from VS and I can upload it from the Report Manager in IE. I can view/run it from Report Manager and I can get to the Report Server (web service) URL from my browser just fine. Everything is done over HTTPS and there is nothing wrong with the certificates. With the larger report, the error I get in VS is "The operation has timed out" after about 100 seconds. The error when I upload from IE is "The underlying connection was closed: An unexpected error occurred on a send" after about 130 seconds. In the RSReportServer.config file I tried changing Authentication/EnableAuthPersistence from true to false and restarting the service, but still get the error. I have the key "SecureConnectionLevel" set to 2. Changing this to 0 and turning off SSL is not going to be an option. I added a registry key named "MaxRequestBytes" to HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\HTTP\Parameters and set it to 5242880 (5MB) and restarted the HTTP and SRS services as suggested in a forum post by Jin Chen of MSFT. I still cannot upload the larger report. This is on MS SQL 2008 and WS 2003. Below is part of a log file entry from ...\Reporting Services\LogFiles when I attempted to upload from IE. library!WindowsService_0!89c!02/10/2010-07:57:57:: i INFO: Call to CleanBatch() ends ui!ReportManager_0-1!438!02/10/2010-07:59:33:: e ERROR: The underlying connection was closed: An unexpected error occurred on a send. ui!ReportManager_0-1!438!02/10/2010-07:59:34:: e ERROR: HTTP status code -- 500 -------Details-------- System.Net.WebException: The underlying connection was closed: An unexpected error occurred on a send. --- System.IO.IOException: Unable to write data to the transport connection: An established connection was aborted by the software in your host machine. --- System.Net.Sockets.SocketException: An established connection was aborted by the software in your host machine at System.Net.Sockets.Socket.MultipleSend(BufferOffsetSize[] buffers, SocketFlags socketFlags) at System.Net.Sockets.NetworkStream.MultipleWrite(BufferOffsetSize[] buffers) --- End of inner exception stack trace --- ...

    Read the article

  • ASP.NET: Large number of Session_Start with same session id

    - by Jaap
    I'm running a ASP.NET website on my development box (.NET 2.0 on Vista/IIS7). The Session_Start method in global.asax.cs logs every call to a file (log4net). The Session_End method also logs every call. I'm using InProc session state, and set the session timeout to 5 mins (to avoid waiting for 20 mins). I hit the website, wait for 5 minutes unit I see the Session_End logging. Then I F5 the website. The browsers still has the session cookie and sends it to the server. Session_Start is called and a new session is created using the same session id (btw: I need this to be the same session id, because it is used to store data in database). Result: Every time I hit F5 on a previously ended session, the Session_Start method is called. When I open a different browser, the Session_Start method is called just once. Then after 5 minutes the Session_End each F5 causes the Session_Start method to execute. Can anyone explain why this is happening? Update: After the Session timeout, all subsequent requests have a session start & session end. So in the end my question is: why are the sessions on these subsequent request closed immediatly? 2010-02-09 14:49:08,754 INFO Global.asax[7486] [(null)] - Session started. SID=nzponumvf1hbaniverffp4mq host=127.0.0.1 2010-02-09 14:49:08,754 INFO Global.asax[7486] [nzponumvf1hbaniverffp4mq] - Request start: GET http://localhost:80/js/settings.js 2010-02-09 14:49:08,756 INFO Global.asax[7486] [(null)] - Session ended. SID=nzponumvf1hbaniverffp4mq 2010-02-09 14:49:08,760 INFO Global.asax[7486] [(null)] - Session started. SID=nzponumvf1hbaniverffp4mq host=127.0.0.1 2010-02-09 14:49:08,760 INFO Global.asax[7486] [nzponumvf1hbaniverffp4mq] - Request start: GET /css/package.aspx?name=core 2010-02-09 14:49:08,761 INFO Global.asax[7486] [(null)] - Session ended. SID=nzponumvf1hbaniverffp4mq 2010-02-09 14:49:08,762 INFO Global.asax[7486] [(null)] - Session started. SID=nzponumvf1hbaniverffp4mq host=127.0.0.1 2010-02-09 14:49:08,762 INFO Global.asax[7486] [nzponumvf1hbaniverffp4mq] - Request start: GET /js/package.aspx?name=all 2010-02-09 14:49:08,763 INFO Global.asax[7486] [(null)] - Session ended. SID=nzponumvf1hbaniverffp4mq 2010-02-09 14:49:08,763 INFO Global.asax[7486] [(null)] - Session started. SID=nzponumvf1hbaniverffp4mq host=127.0.0.1 2010-02-09 14:49:08,763 INFO Global.asax[7486] [nzponumvf1hbaniverffp4mq] - Request start: GET /css/package.aspx?name=rest 2010-02-09 14:49:08,764 INFO Global.asax[7486] [(null)] - Session ended. SID=nzponumvf1hbaniverffp4mq 2010-02-09 14:49:08,764 INFO Global.asax[7486] [(null)] - Session started. SID=nzponumvf1hbaniverffp4mq host=127.0.0.1 2010-02-09 14:49:08,765 INFO Global.asax[7486] [nzponumvf1hbaniverffp4mq] - Request start: GET /css/package.aspx?name=vacation 2010-02-09 14:49:08,765 INFO Global.asax[7486] [(null)] - Session ended. SID=nzponumvf1hbaniverffp4mq web.config relevant section: <system.web> <compilation debug="true" /> <sessionState timeout="2" regenerateExpiredSessionId="false" /> </system.web>

    Read the article

  • Is there a fast way to jump to element using XMLReader?

    - by Derk
    I am using XMLReader to read a large XML file with about 1 million elements on the level I am reading from. However, I've calculated that it will take over 10 seconds to jump to - for instance - element 500,000 using XMLReader::next([ string $localname ]) or XMLReader::read(void). This is not very usable. Is there a faster way to do this?
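    XMLReader itself only moves forward, so one common trick is to pay the scan cost once: record the byte offset of, say, every 10,000th element in a side index, then on later lookups seek to the nearest recorded offset and read forward from there. A rough sketch of the indexing pass in Python - the tag matching is deliberately naive (it assumes the element name is not a prefix of another element's name), and the function and parameter names are made up for illustration:

```python
def build_offset_index(path, tag, every=10000):
    """One pass over the file: remember the byte offset of every `every`-th
    occurrence of <tag ...>, keyed by its ordinal position."""
    needle = b"<" + tag.encode()
    index = {}          # element ordinal -> byte offset of its opening tag
    count = 0
    offset = 0
    with open(path, "rb") as fh:
        for line in fh:
            pos = line.find(needle)
            while pos != -1:
                if count % every == 0:
                    index[count] = offset + pos
                count += 1
                pos = line.find(needle, pos + len(needle))
            offset += len(line)
    return index
```

    To reach element 500,000 you would then seek to the nearest indexed offset at or below it and parse forward from that point, instead of stepping through half a million elements from the top on every request.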

    Read the article

  • Importing Wikipedia database dump - kills Navicat - anyone got any ideas?

    - by Ali
    Ok guys, I've downloaded the Wikipedia XML dump and it's a whopping 12 GB of data :\ for one table, and I wanted to import it into a MySQL database on my localhost - however it's a humongous 12 GB file and obviously Navicat is taking its sweet time importing it, or more likely it has hung :(. Is there a way to import this dump, at least partially, bit by bit? Let me correct that: it's 21 GB of data - not that it helps :\ - does anyone have any ideas for importing humongous files like this into a MySQL database?

    Read the article

  • What method should be used for searching this mysql dataset?

    - by GeoffreyF67
    I've got a mysql dataset that contains 86 million rows. I need to have a relatively fast search through this data. The data I'll be searching through is all strings. I also need to do partial matches. Now, if I have 'foobar' and search for '%oob%' I know it'll be really slow - it has to look at every row to see if there is a match. What methods can be used to speed queries like this up? G-Man
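    Leading-wildcard patterns like '%oob%' defeat ordinary B-tree indexes, so the usual remedies are word-level full-text indexes when word matching is enough, or an n-gram (trigram) index when genuine substring matching is needed (PostgreSQL ships this as pg_trgm). A toy Python sketch of the trigram idea - not MySQL syntax, and all names invented - shows why it prunes the search:

```python
from collections import defaultdict

def trigrams(text):
    text = text.lower()
    return {text[i:i + 3] for i in range(len(text) - 2)}

def build_index(rows):
    """rows: iterable of (row_id, text). Returns trigram -> set of row ids."""
    index = defaultdict(set)
    for row_id, text in rows:
        for gram in trigrams(text):
            index[gram].add(row_id)
    return index

def candidate_rows(index, needle):
    """Rows that might contain `needle`; each still needs a real substring check."""
    grams = trigrams(needle)
    if not grams:
        return None                      # needle shorter than 3 chars: full scan
    return set.intersection(*(index[g] for g in grams))
```

    A query for '%oob%' then only has to verify the rows whose text contains the trigram 'oob', which is exactly the pruning a trigram/n-gram index does inside the database instead of scanning all 86 million rows.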

    Read the article

  • Efficient file buffering & scanning methods for large files in python

    - by eblume
    The description of the problem I am having is a bit complicated, and I will err on the side of providing more complete information. For the impatient, here is the briefest way I can summarize it: What is the fastest (least execution time) way to split a text file into ALL (overlapping) substrings of size N (bounded N, e.g. 36) while throwing out newline characters? I am writing a module which parses files in the FASTA ascii-based genome format. These files comprise what is known as the 'hg18' human reference genome, which you can download from the UCSC genome browser (go slugs!) if you like. As you will notice, the genome files are composed of chr[1..22].fa and chr[XY].fa, as well as a set of other small files which are not used in this module. Several modules already exist for parsing FASTA files, such as BioPython's SeqIO. (Sorry, I'd post a link, but I don't have the points to do so yet.) Unfortunately, none of the modules I've been able to find does the specific operation I am trying to do. My module needs to split the genome data ('CAGTACGTCAGACTATACGGAGCTA' could be a line, for instance) into every single overlapping N-length substring. Let me give an example using a very small file (the actual chromosome files are between 355 and 20 million characters long) and N=8 import cStringIO example_file = cStringIO.StringIO("""\ >header CAGTcag TFgcACF """) for read in parse(example_file): ... print read ... CAGTCAGTF AGTCAGTFG GTCAGTFGC TCAGTFGCA CAGTFGCAC AGTFGCACF The function that I found had the absolute best performance of the methods I could think of is this: def parse(file): size = 8 # of course in my code this is a function argument file.readline() # skip past the header buffer = '' for line in file: buffer += line.rstrip().upper() while len(buffer) >= size: yield buffer[:size] buffer = buffer[1:] This works, but unfortunately it still takes about 1.5 hours (see note below) to parse the human genome this way. Perhaps this is the very best I am going to see with this method (a complete code refactor might be in order, but I'd like to avoid it as this approach has some very specific advantages in other areas of the code), but I thought I would turn this over to the community. Thanks! Note, this time includes a lot of extra calculation, such as computing the opposing strand read and doing hashtable lookups on a hash of approximately 5G in size. Post-answer conclusion: It turns out that using fileobj.read() and then manipulating the resulting string (string.replace(), etc.) took relatively little time and memory compared to the remainder of the program, and so I used that approach. Thanks everyone!
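    For reference, a minimal sketch of the whole-file approach the post-answer conclusion settles on (one read(), strip the header and newlines with plain string operations, then slice out each overlapping window); the file-handling details are illustrative:

```python
def parse_whole_file(path, size=36):
    """Yield every overlapping window of length `size` from one FASTA file,
    reading the file in a single call and discarding newlines up front."""
    with open(path) as fh:
        fh.readline()                              # skip the '>header' line
        seq = fh.read().replace("\n", "").upper()  # one big in-memory string
    for i in range(len(seq) - size + 1):
        yield seq[i:i + size]
```

    This trades roughly one chromosome's worth of memory for all the per-line buffer shuffling in the original loop, which matches the conclusion that read() plus string manipulation was the cheap part of the run.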

    Read the article

  • How to recover deleted files on ext3 fs

    - by Mike
    I have a drive which was using the ext3 filesystem. I am told that about 10 GB of data was deleted off the drive (probably via rm). The drive is currently mounted as read-only to preserve all data. Does anyone know of a method to restore some or all of the data? Also, if it helps, the OS was Fedora. I've also been told that the data is mostly ASCII Fortran source code and Matlab files. Conclusion: I have finally managed to get the data back, and with the simplest means ever! After weeks of trying and failing to bring back much of any data, I brought someone in today to take a look at it and offer suggestions; he simply cd'd to the directory and everything was there! It was never lost in the first place!!! Needless to say I feel really dumb right now, but I learned quite a lot from this whole fiasco. At any rate, while I was looking through data forensics solutions, I found that Autopsy, or more specifically the SleuthKit, was the most helpful. So I will accept that as the final answer. I would also like to note for anyone that comes across this later on that the most up-voted (currently) answer by sekenre was also helpful and I learned a lot, but ultimately it did not help with the type (very many, and some being very large) of files I was dealing with. So thanks to all of you who provided suggestions, and I wish you all the best!

    Read the article

  • Using a large list of terms, search through page text and replace words with links

    - by dunc
    A while ago I posted this question asking if it's possible to convert text to HTML links if they match a list of terms from my database. I have a fairly huge list of terms - around 6000. The accepted answer on that question was superb, but having never used XPath, I was at a loss when problems started occurring. At one point, after fiddling with code, I somehow managed to add over 40,000 random characters to our database - the majority of which required manual removal. Since then I've lost faith in that idea and the more simple PHP solutions simply weren't efficient enough to deal with the amount of data and the quantity of terms. My next attempt at a solution is to write a JS script which, once the page has loaded, retrieves the terms and matches them against the text on a page. This answer has an idea which I'd like to attempt. I would use AJAX to retrieve the terms from the database, to build an object such as this: var words = [ { word: 'Something', link: 'http://www.something.com' }, { word: 'Something Else', link: 'http://www.something.com/else' } ]; When the object has been built, I'd use this kind of code: //for each array element $.each(words, function() { //store it ("this" is gonna become the dom element in the next function) var search = this; $('.message').each( function() { //if it's exactly the same if ($(this).text() === search.word) { //do your magic tricks $(this).html('<a href="' + search.link + '">' + search.link + '</a>'); } } ); } ); Now, at first sight, there is a major issue here: with 6,000 terms, will this code be in any way efficient enough to do what I'm trying to do?. One option would possibly be to perform some of the overhead within the PHP script that the AJAX communicates with. For instance, I could send the ID of the post and then the PHP script could use SQL statements to retrieve all of the information from the post and match it against all 6,000 terms.. then the return call to the JavaScript could simply be the matching terms, which would significantly reduce the number of matches the above jQuery would make (around 50 at most). I have no problem with the script taking a few seconds to "load" on the user's browser, as long as it isn't impacting their CPU usage or anything like that. So, two questions in one: Can I make this work? What steps can I take to make it as efficient as possible? Thanks in advance,
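    A rough sketch of the server-side pre-filtering step proposed above - written in Python for brevity, though the same logic ports directly to the PHP script the AJAX call hits, and all names are invented: given one post's text and the full list of ~6,000 terms, return only the terms that actually appear, so the browser-side loop has perhaps 50 entries to work with instead of 6,000.

```python
import re

def matching_terms(post_text, terms):
    """terms: iterable of (word, link) pairs, e.g. loaded from the database.
    Returns only the pairs whose word appears as a whole word in the post,
    which is all the page's JavaScript then needs to receive."""
    text = post_text.lower()
    hits = []
    for word, link in terms:
        # Whole-word, case-insensitive match; escape the term in case it
        # contains regex metacharacters.
        if re.search(r"\b" + re.escape(word.lower()) + r"\b", text):
            hits.append({"word": word, "link": link})
    return hits
```

    The JavaScript would then build its words array from this filtered response, and the per-element comparison stays cheap even on long pages.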

    Read the article

  • ZFS Data Loss Scenarios

    - by Obtuse
    I'm looking toward building a largish ZFS Pool (150TB+), and I'd like to hear people's experiences about data loss scenarios due to failed hardware, in particular, distinguishing between instances where just some data is lost vs. the whole filesystem (or if there even is such a distinction in ZFS). For example: let's say a vdev is lost due to a failure like an external drive enclosure losing power, or a controller card failing. From what I've read the pool should go into a faulted mode, but if the vdev is returned the pool should recover? Or not? Or if the vdev is partially damaged, does one lose the whole pool, some files, etc.? What happens if a ZIL device fails? Or just one of several ZILs? Truly any and all anecdotes or hypothetical scenarios backed by deep technical knowledge are appreciated! Thanks! Update: We're doing this on the cheap since we are a small business (9 people or so) but we generate a fair amount of imaging data. The data is mostly smallish files, by my count about 500k files per TB. The data is important but not uber-critical. We are planning to use the ZFS pool to mirror a 48TB "live" data array (in use for 3 years or so), and use the rest of the storage for 'archived' data. The pool will be shared using NFS. The rack is supposedly on a building backup generator line, and we have two APC UPSes capable of powering the rack at full load for 5 mins or so.

    Read the article

  • Parsing Huge XML Files in PHP

    - by Ian
    I'm trying to parse the dmoz content/structures xml files into mysql, but all existing scripts to do this are very old and don't work well. How can I go about opening a large (+1GB) xml file in php for parsing?
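    For a file this size the usual approach is a streaming (pull) parser - in PHP that is XMLReader - so memory stays flat regardless of how large the dump is. A sketch of the same streaming idea using Python's ElementTree iterparse, offered only as an illustration of the pattern; the repeating element name is a placeholder, not the verified dmoz tag:

```python
import xml.etree.ElementTree as ET

def stream_records(path, tag="Topic"):
    """Walk a multi-gigabyte XML file record by record with flat memory use.
    `tag` is the repeating element to hand back; everything already processed
    is discarded by clearing the root as we go."""
    context = ET.iterparse(path, events=("start", "end"))
    _, root = next(context)                 # grab the document root first
    for event, elem in context:
        if event == "end" and elem.tag == tag:
            yield elem                      # caller inserts this record into MySQL
            root.clear()                    # free everything parsed so far
```

    PHP's XMLReader gives the same shape of loop: open the stream, advance to each record, read just that node, insert it, and move on, never holding the whole document in memory.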

    Read the article

  • Large Data Table with first column fixed

    - by bhavya_w
    I have a structure as shown in the fiddle http://jsfiddle.net/5LN7U/. <section class="container"> <section class="field"> <ul> <li> Question 1 </li> <li> question 2 </li> <li> question 3 </li> <li> question 4 </li> <li> question 5 </li> <li> question 6 </li> <li> question 7 </li> </ul> </section> <section class="datawrap"> <section class="datawrapinner"> <ul> <li><b>Answer 1 :</b></li> <li><b>Answer 2 :</b></li> <li><b>Answer 3 :</b></li> <li><b>Answer 4 :</b></li> <li><b>Answer 5 :</b></li> <li><b>Answer 6 :</b></li> <li><b>Answer 7 :</b></li> </ul> </section> </section> </section> Basically it's a table structure made using divs. The first column contains a long list of questions and the second column contains answers/multiple answers, which can be quite big (there has to be horizontal scrolling in the second column). The problem I am facing is that when I scroll downwards, the second column, which has the horizontal scroll bar, also scrolls downward. I want the horizontal scrollbar to stay fixed there - it should always stay fixed no matter how much I scroll vertically. Much like Google Spreadsheets, where the first column stays fixed and there's horizontal scrolling on the rest of the columns, with overall vertical scrolling for the whole of the data. I cannot use position: fixed on the second column. P.S.: please no lectures on using divs for making a table structure; I have my own reasons, and it's kinda urgent. Thanks in advance.

    Read the article

  • Building an ASP.Net 4.5 Web forms application - part 4

    - by nikolaosk
    This is the fourth post in a series of posts on how to design and implement an ASP.NET 4.5 Web Forms store that sells posters online. There are 3 more posts in this series of posts. Please make sure you read them first. You can find the first post here. You can find the second post here. You can find the third post here. In this new post we will build on the previous posts and we will demonstrate how to display the posters per category. We will add a ListView control on the PosterList.aspx page and will bind data from the database. We will use the various templates. Then we will write code in the PosterList.aspx.cs to fetch data from the database.

    1) Launch Visual Studio and open your solution where your project lives.

    2) Open the PosterList.aspx page. We will add some markup in this page. Have a look at the code below:

    <section class="posters-featured">
        <ul>
            <asp:ListView ID="posterList" runat="server"
                DataKeyNames="PosterID"
                GroupItemCount="3" ItemType="PostersOnLine.DAL.Poster" SelectMethod="GetPosters">
                <EmptyDataTemplate>
                    <table id="Table1" runat="server">
                        <tr>
                            <td>We have no data.</td>
                        </tr>
                    </table>
                </EmptyDataTemplate>
                <EmptyItemTemplate>
                    <td id="Td1" runat="server" />
                </EmptyItemTemplate>
                <GroupTemplate>
                    <tr ID="itemPlaceholderContainer" runat="server">
                        <td ID="itemPlaceholder" runat="server"></td>
                    </tr>
                </GroupTemplate>
                <ItemTemplate>
                    <td id="Td2" runat="server">
                        <table>
                            <tr>
                                <td>&nbsp;</td>
                                <td>
                                    <a href="PosterDetails.aspx?posterID=<%#:Item.PosterID%>">
                                        <img src="<%#:Item.PosterImgpath%>"
                                            width="100" height="75" border="1"/></a>
                                </td>
                                <td>
                                    <a href="PosterDetails.aspx?posterID=<%#:Item.PosterID%>">
                                        <span class="PosterName">
                                            <%#:Item.PosterName%>
                                        </span>
                                    </a>
                                    <br />
                                    <span class="PosterPrice">
                                        <b>Price: </b><%#:String.Format("{0:c}", Item.PosterPrice)%>
                                    </span>
                                    <br />
                                </td>
                            </tr>
                        </table>
                    </td>
                </ItemTemplate>
                <LayoutTemplate>
                    <table id="Table2" runat="server">
                        <tr id="Tr1" runat="server">
                            <td id="Td3" runat="server">
                                <table ID="groupPlaceholderContainer" runat="server">
                                    <tr ID="groupPlaceholder" runat="server"></tr>
                                </table>
                            </td>
                        </tr>
                        <tr id="Tr2" runat="server"><td id="Td4" runat="server"></td></tr>
                    </table>
                </LayoutTemplate>
            </asp:ListView>
        </ul>
    </section>

    3) We have a ListView control on the page called PosterList. I set the ItemType property to the Poster class and then the SelectMethod to the GetPosters method. I will create this method later on. (ItemType="PostersOnLine.DAL.Poster" SelectMethod="GetPosters") Then, in the code below, I have the data-binding expression Item available and the control becomes strongly typed. So when the user clicks on the link of the poster's category, the relevant information will be displayed (photo, name and price):

    <td>
        <a href="PosterDetails.aspx?posterID=<%#:Item.PosterID%>">
            <img src="<%#:Item.PosterImgpath%>"
                width="100" height="75" border="1"/></a>
    </td>

    4) Now we need to write the simple method that populates the ListView control. It is called the GetPosters method. The code follows:

    public IQueryable<Poster> GetPosters([QueryString("id")] int? PosterCatID)
    {
        PosterContext ctx = new PosterContext();
        IQueryable<Poster> query = ctx.Posters;
        if (PosterCatID.HasValue && PosterCatID > 0)
        {
            query = query.Where(p=>p.PosterCategoryID==PosterCatID);
        }
        return query;
    }

    This is a very simple method that returns information about posters related to the PosterCatID passed to it. I bind the value from the query string to the PosterCatID parameter at run time. This is all possible due to the QueryStringAttribute class that lives inside System.Web.ModelBinding and gets the value of the query string variable id.

    5) I run my application and then click on the "Midfilders" link. Have a look at the picture below to see the results. In the Site.css file I added some new CSS rules to make everything more presentable:

    .posters-featured {
        width:840px;
        background-color:#efefef;
    }
    .posters-featured a:link, a:visited, a:active, a:hover {
        color: #000033;
    }
    .posters-featured a:hover {
        background-color: #85c465;
    }

    6) I run the application again and this time I do not choose any category; I simply navigate to the PosterList.aspx page. I see all the posters since no query string was passed as a parameter. Have a look at the picture below.

    Make sure you place breakpoints in the code so you can see what is really going on. In the next post I will show you how to display poster details. Hope it helps!!!

    Read the article

  • Oracle data warehouse design - fact table acting as a dimension?

    - by Elizabeth
    THANKS: Both answers here are very helpful, but I could only pick one. I really appreciate the advice! our datawarehouse will be used more for workflow reports than traditional analytical reports. Our users care about "current picture" far more than history. (though history matters, too.) We are a government entity that does not have costs or related calculations. Mostly just counts of people within given locations and with related history. We are using Oracle, and I have found distinct advantage in using the star join whenever possible and would like to rearchitect everything to as closely resemble the star schema as is reasonable for our business uses. Speed in this DW is vital, and a number of tests have already proven the star schema approach to me. Our "person" table is key - it contains over 4 million records and will be the most frequently used source in queries. It can be seen at the center of a star with multiple dimensions (like age, gender, affiliation, location, etc.). It is a very LONG table, particularly when I join it to the address and contact information. However, it is more like a dimension table when we start looking at history. For example, there are two different history tables that have a person key pointing to the person table. One has over 20 million records and the other has almost 50 million and grows daily. Is this table a fact table or a dimension table? Can one work as both? If so, is that going to be a big performance problem? Is it common to query more off of a dimension than a fact? What happens if a DIFFERENT fact table that uses the person table as a dimension is actually only 60,000 records (much smaller.). I think my problem is that our data and use of it does not fit with the commonly use examples of star schemas. CLARIFICATION: Some good thoughts have been added below, but perhaps I left too much out to really explain well. Here's some more info: We handle a voter database. We don't have any measures except voter counts by various groups: voter counts by party, by age, by location; voter counts by ballot type and election, by ballot status and election, etc. We do have a "voting history" log as well as an activity audit log (change of address, party, etc.). We have information on which voters are election workers and all that related information. I figure I'll get to the peripheral stuff later. For now I'm focusing on our two major "business processes": voter registration(which IS a voter.) and election turnout. In the first, voter is a fact. In the second, voter is a dimension, along with party, election, and type of ballot. (and in case anyone is worried - no we don't know HOW people vote. Just that they do. LOL ) I hope that clarifies things a bit.

    Read the article
