Search Results

Search found 7324 results on 293 pages for 'operations research'.


  • Improving I/O performance in C++ programs [external merge sort]

    - by Ajay
    I am currently working on a project involving external merge sort using replacement selection and k-way merge. I have implemented the project in C++ (it runs on Linux). It is very simple and right now deals only with fixed-size records. For reading and writing I use the (i/o)fstream classes. After running the program for a few iterations, I noticed that I/O reads block for requests larger than 4K (the typical block size); in fact, giving buffer sizes greater than 4K causes performance to decrease. The output side does not seem to need buffering, as Linux appears to buffer writes itself, so I issue a write(record) instead of maintaining a special write buffer and flushing it out at once with write(records[]). But the performance of the application still does not seem to be great. How can I improve it? Should I maintain dedicated I/O threads to read blocks ahead, or are there existing C++ classes that already provide this abstraction (something like BufferedInputStream in Java)?
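
    One direction worth sketching (my own illustration, not from the post): read each run in large chunks and hand out fixed-size records from memory, much like Java's BufferedInputStream. The class name and the 64 KiB default are assumptions:

        #include <cstddef>
        #include <cstring>
        #include <fstream>
        #include <vector>

        // Buffered reader sketch: refills a large buffer with one read()
        // call and serves fixed-size records out of it.
        class BufferedRecordReader {
        public:
            BufferedRecordReader(const char* path, std::size_t record_size,
                                 std::size_t buf_size = 64 * 1024)
                : in_(path, std::ios::binary), record_size_(record_size),
                  buf_(buf_size), pos_(0), end_(0) {}

            // Copies the next record into out; returns false at end of file.
            bool next(char* out) {
                if (end_ - pos_ < record_size_ && !refill())
                    return false;
                std::memcpy(out, &buf_[pos_], record_size_);
                pos_ += record_size_;
                return true;
            }

        private:
            bool refill() {
                // Keep any partial record, then top the buffer up in one read.
                std::size_t leftover = end_ - pos_;
                std::memmove(&buf_[0], &buf_[pos_], leftover);
                in_.read(&buf_[leftover], buf_.size() - leftover);
                pos_ = 0;
                end_ = leftover + static_cast<std::size_t>(in_.gcount());
                return end_ >= record_size_;
            }

            std::ifstream in_;
            std::size_t record_size_;
            std::vector<char> buf_;
            std::size_t pos_, end_;
        };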

  • jQuery: one-liner for removing all children but one

    - by user1352530
    HTML code is:

        <li>
            <a href="#" />
            <ul>
                <li>
                    <a href="#" />
                </li>
            </ul>
        </li>

    I want to delete any node under $(this) = li that is not a link. If there is more than one link, just the first one is kept, and links inside the ul tag are not included because they are descendants, not children. Something like this (I am cloning and outputting the html):

        $(this).clone().remove('not(a:first)').html(); // Assuming remove looks for "a" as a first child

    does not work, so I try this:

        $(this).clone().children().remove('not(a:first)').parent().html();

    That does not work either, so I try this:

        $(this).clone().remove('not(>a:first)').html();

    I need the original element as a reference after removal for chaining more operations.

    EDIT: Ok! The second one works with a little syntax correction, as some pointed out (:not( instead of not(), but I would like to know if there is another solution similar to the third line so I don't have to do children().remove().parent(). Thank you!
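
    One alternative shape (my own sketch, not from the thread) avoids the .parent() hop by keeping a reference to the clone instead of relying on the chain:

        // Hold the clone, drop every child except the first direct <a>,
        // then keep chaining on `clone`.
        var clone = $(this).clone();
        clone.children().not(clone.children('a').first()).remove();
        var html = clone.html();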

  • Is there a best practice for concatenating MP3 Files, adjusting sample rates to match, while preserving original files?

    - by Scott
    Hello overflow community! Does anyone know if there is a "best practice" for concatenating MP3 files to create new files, while preserving the original files? I am working on a CentOS Linux machine, on the command line. I will eventually call the command line from a PHP script. I have been doing research and I have come up with a process that I think could work. It combines general advice from different forums, blogs, and sources like this one. So here I go:

    1. Create a temporary folder.
    2. Loop through the files to create a new, converted copy of each file in a "raw" format (which one, I don't know. I didn't know "raw" files existed until recently. I could use some suggestions on this).
    3. Store the paths to the temporary files in the temporary folder, then loop through the files to concatenate them and put the new merged file in the final "processed" directory.
    4. Delete the contents of the temporary folder, with the temporary raw files inside.
    5. Convert the final file from "raw" to MP3 and enjoy the finished result.

    I'm thinking that this course of action might be best because I can't necessarily control the quality of the original "source" MP3s. The only other option I could think of would be a script that performs a similar process when files are added to the system, leaving only files in the "proper" format and removing the original "erroneous" file. Hopefully you can see that I have put some thought into this and that I'm trying to leverage the collective knowledge of this community to choose the best direction. Perhaps there is a better path that I could take? By concatenate, I mean to join together in sequence, creating one new audio file from the concatenated files.
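
    As a hedged sketch of that pipeline (my own illustration, assuming ffmpeg is installed; filenames and the 44.1 kHz stereo parameters are made up):

        # 1-2. Decode each MP3 to headerless 16-bit PCM at one shared
        #      sample rate, so mismatched sources line up.
        mkdir -p /tmp/mp3work
        ffmpeg -i input1.mp3 -f s16le -ar 44100 -ac 2 /tmp/mp3work/input1.raw
        ffmpeg -i input2.mp3 -f s16le -ar 44100 -ac 2 /tmp/mp3work/input2.raw
        # 3. Raw PCM has no headers, so concatenation is a plain byte append.
        cat /tmp/mp3work/input1.raw /tmp/mp3work/input2.raw > /tmp/mp3work/merged.raw
        # 5. Re-encode the merged PCM back to MP3, then clean up (step 4).
        ffmpeg -f s16le -ar 44100 -ac 2 -i /tmp/mp3work/merged.raw merged.mp3
        rm -r /tmp/mp3work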

  • Still Confused About Identifying vs. Non-Identifying Relationships

    - by Jason
    So, I've been reading up on identifying vs. non-identifying relationships in my database design, and a number of the answers on SO seem contradictory to me. Here are the two questions I am looking at: What's the Difference Between Identifying and Non-Identifying Relationships; Trouble Deciding on Identifying or Non-Identifying Relationship. Looking at the top answers from each question, I appear to get two different ideas of what an identifying relationship is. The first question's response says that an identifying relationship "describes a situation in which the existence of a row in the child table depends on a row in the parent table." An example given is, "An author can write many books (1-to-n relationship), but a book cannot exist without an author." That makes sense to me. However, when I read the response to question two, I get confused, as it says, "if a child identifies its parent, it is an identifying relationship." The answer then goes on to give examples such as an SSN (which is identifying of a Person), whereas an address is not (because many people can live at one address). To me, this sounds more like a choice between a primary key and a non-primary key. My own gut feeling (and additional research on other sites) points to the first question's response being correct. However, I wanted to verify before continuing, as I don't want to learn something wrong while I am working to understand database design. Thanks in advance.
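
    For a concrete picture of the first definition (my own illustration, not from either linked answer): in an identifying relationship the parent's key becomes part of the child's primary key, while in a non-identifying one it is just a plain foreign key:

        -- Identifying: a book row cannot exist without its author, and
        -- the author's key is part of the book's own primary key.
        CREATE TABLE author (
            author_id INT PRIMARY KEY
        );
        CREATE TABLE book (
            author_id INT NOT NULL REFERENCES author (author_id),
            book_no   INT NOT NULL,
            title     VARCHAR(200),
            PRIMARY KEY (author_id, book_no)
        );
        -- Non-identifying: a person is identified without reference to
        -- where they live; address_id is an ordinary foreign key.
        CREATE TABLE address (
            address_id INT PRIMARY KEY
        );
        CREATE TABLE person (
            person_id  INT PRIMARY KEY,
            address_id INT REFERENCES address (address_id)
        );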

  • Providing custom database functionality to custom asp.net membership provider

    - by IrfanRaza
    Hello friends, I am creating a custom membership provider for my ASP.NET application. I have also created a separate class, "DBConnect", that provides database functionality such as executing SQL statements, executing SPs, and executing SPs or queries and returning a SqlDataReader, and so on. I create an instance of the DBConnect class within Session_Start in Global.asax and store it in a session. Later, using a static class, I provide the database functionality throughout the application from that same single session. In short, I provide a single point for all database operations from any ASP.NET page. I know that I can write my own code to connect/disconnect the database and execute SPs within the methods I need to override. Please look at the code below:

        public class SGI_MembershipProvider : MembershipProvider
        {
            ......
            public override bool ChangePassword(string username, string oldPassword, string newPassword)
            {
                if (!ValidateUser(username, oldPassword))
                    return false;

                ValidatePasswordEventArgs args =
                    new ValidatePasswordEventArgs(username, newPassword, true);
                OnValidatingPassword(args);
                if (args.Cancel)
                {
                    if (args.FailureInformation != null)
                    {
                        throw args.FailureInformation;
                    }
                    else
                    {
                        throw new Exception("Change password canceled due to new password validation failure.");
                    }
                }
                .....
                // Database connectivity and code execution to change password.
            }
            ....
        }

    MY PROBLEM - Now what I need is to execute the database part within all these overridden methods from the same database point described at the top. That is, I have to pass the instance of DBConnect existing in the session to this class, so that I can access its methods. Could anyone provide a solution for this? There might be some better techniques I am not aware of; the approach I am using might be wrong. Your suggestions are always welcome. Thanks for sharing your valuable time.
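
    One possible shape for that (a sketch of mine, not the poster's code; note that provider methods can also run outside a web request, so the HttpContext access is an assumption):

        // Static accessor exposing the per-session DBConnect, so the
        // provider (which ASP.NET instantiates itself) can reach it.
        public static class DbSession
        {
            public static DBConnect Current
            {
                get
                {
                    var session = System.Web.HttpContext.Current.Session;
                    if (session["DBConnect"] == null)
                        session["DBConnect"] = new DBConnect();
                    return (DBConnect)session["DBConnect"];
                }
            }
        }

        // Inside any overridden provider method:
        // DBConnect db = DbSession.Current;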

  • Rails paginate existing array of ActiveRecord results

    - by SaoiseK
    Hello, I generally use will_paginate for pagination in my app, but I have hit a stumbling block with my search feature. I'm using Thinking Sphinx for full-text search, which returns results already paginated. The problem I'm having is that after I've received the results from Thinking Sphinx, I need to merge them with some other results and re-order them. Once I've finished processing them, I have an Array of results that is very different from the original from TS. As there could be 1000+ results in this Array, pagination is a necessity. The problem is that I can't figure out how to get will_paginate to work with an existing array. I've done some research, and it seems the only solutions to this problem are from several years ago and are based on the old built-in Paginator class. The most recent one I could find that makes use of will_paginate was from devchix in mid-2007: http://www.devchix.com/2007/07/23/will_paginate-array/comment-page-1/ - I've given this a go but it doesn't seem to do anything for me. Are there any current methods for applying pagination (preferably via will_paginate) to existing arrays of AR results?
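
    For reference, one commonly used shape (my own sketch; verify against your will_paginate version, and `merged_results` plus the per-page count are illustrative) wraps the plain Array in a WillPaginate::Collection so the view helpers can page it:

        @results = WillPaginate::Collection.create(params[:page] || 1, 20,
                                                   merged_results.size) do |pager|
          # Hand the collection just the slice for the current page.
          pager.replace(merged_results[pager.offset, pager.per_page] || [])
        end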

  • MySQL Performance Question - Essentially about normalizing efficiency

    - by freqmode
    Hi there. Just a quick question about database performance. I'll outline my site's purpose below as background. I'm creating a dictionary site that saves the words users define to a database. What I'm wondering is whether to create a words table for each user or to keep one massive words table. This site will be used by entire schools, so the single words table would be massive! The database structure is as follows.

    A user table with:

        User_ID PRIMARY KEY
        Username
        First
        Last
        Password
        Email
        Country
        Research
        Standings
        SendInfo
        Donated
        JoinedOn
        LastLogin
        Logins
        Correct
        Attempts
        Admin
        Active

    And one word table with:

        User_ID PRIMARY KEY
        Word
        Vocab
        Spell
        Defined
        DefinedAttempted
        Spelled
        SpelledAttempted
        Sentenced
        SentencedAttempted

    So what I'm asking is, performance-wise, should I create a new table for each user when they join the site - each user could have hundreds or thousands of words over time? Or is it better to have one massive table with thousands and thousands of records and filter by User_ID? I don't think I'll perform many table joins. My gut feeling is to create a new table for each user, but I thought I'd ask for expert advice! Thanks in advance.
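
    A sketch of the single-table option (my own illustration; column types are assumptions): with a composite index leading on User_ID, filtering stays an index range scan no matter how many users share the table:

        CREATE TABLE word (
            User_ID INT         NOT NULL,
            Word    VARCHAR(64) NOT NULL,
            Vocab   TINYINT,
            Spell   TINYINT,
            PRIMARY KEY (User_ID, Word)  -- narrows straight to one user's rows
        );
        -- One user's words; no full-table scan involved:
        SELECT Word FROM word WHERE User_ID = 42;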

  • Plist data OK in iPhone simulator but disappears after installation onto device

    - by user183804
    I am testing an application on a first-generation iPod running OS 3.1.3, with a table view populated from a plist. The application works well in the simulator. Initially, after installation onto the iPod for testing, the table view was blank (scrollable lines present, but empty rows). After searching this site, I found one issue to be a case-sensitivity problem in the name of the plist file ("Data.plist" in Resources; "data.plist" in code). When the problem persisted, I again found on this site the advice to "Clean all Targets" prior to installation. I have now found that after performing a "Clean all Targets" operation twice, quitting Xcode in between, the problem resolves and the table view appears properly populated after installation. I am wondering whether this issue signifies a corrupted plist file, prior to making the application available on the App Store. Re-creating the plist file from scratch and then testing it would likely take several hours of work, so I do not want to do so unless absolutely necessary. Thanks in advance for any help.

  • WPF and LINQ/SQL - how and where to keep track of changes?

    - by Groky
    I have a WPF application built using the MVVM pattern:

    - My Models come from LINQ to SQL.
    - I use the Repository Pattern to abstract away the DataContext.
    - My ViewModels have a reference to a Model.
    - Setting a property on the ViewModel causes that value to be written through to the Model.

    As you can see, my data is stored in my Model, and changes are therefore tracked by my DataContext. However, in this question I read:

        The guidelines from the MSDN documentation on the DataContext class
        are what I would recommend following: In general, a DataContext
        instance is designed to last for one "unit of work" however your
        application defines that term. A DataContext is lightweight and is
        not expensive to create. A typical LINQ to SQL application creates
        DataContext instances at method scope or as a member of short-lived
        classes that represent a logical set of related database operations.

    How do you track your changes? In your DataContext? In your ViewModel? Elsewhere?
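
    To make the "unit of work" guideline concrete, a minimal sketch (mine; ShopDataContext and the Customer property names are hypothetical), where the context - and thus its change tracking - lives only for one operation:

        using System.Linq;

        public class CustomerService
        {
            public void RenameCustomer(int customerId, string newName)
            {
                using (var ctx = new ShopDataContext())  // hypothetical LINQ to SQL context
                {
                    var customer = ctx.Customers.Single(c => c.CustomerID == customerId);
                    customer.Name = newName;             // tracked by this context only
                    ctx.SubmitChanges();                 // unit of work ends; changes flushed
                }
            }
        }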

  • Why doesn't java.lang.Number implement Comparable?

    - by Julien Chastang
    Does anyone know why java.lang.Number does not implement Comparable? This means that you cannot sort Numbers with Collections.sort, which seems to me a little strange. Post-discussion update: Thanks for all the helpful responses. I ended up doing some more research about this topic. The simplest explanation for why java.lang.Number does not implement Comparable is rooted in mutability concerns. For a bit of review, java.lang.Number is the abstract super-type of AtomicInteger, AtomicLong, BigDecimal, BigInteger, Byte, Double, Float, Integer, Long and Short. On that list, AtomicInteger and AtomicLong do not implement Comparable. Digging around, I discovered that it is not good practice to implement Comparable on mutable types, because the objects can change during or after comparison, rendering the result of the comparison useless. Both AtomicLong and AtomicInteger are mutable. The API designers had the forethought not to have Number implement Comparable because it would have constrained the implementation of future subtypes. Indeed, AtomicLong and AtomicInteger were added in Java 1.5, long after java.lang.Number was initially implemented. Apart from mutability, there are probably other considerations here too. A compareTo implementation in Number would have to promote all numeric values to BigDecimal, because that is capable of accommodating all the Number subtypes. The implication of that promotion in terms of mathematics and performance is a bit unclear to me, but my intuition finds that solution kludgy.
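
    As an illustration of that promotion idea (my own sketch, not from the discussion), a comparator over arbitrary Numbers via BigDecimal - with the caveat that special values such as NaN or the infinities would still need separate handling:

        import java.math.BigDecimal;
        import java.util.Comparator;

        public class NumberComparator implements Comparator<Number> {
            @Override
            public int compare(Number a, Number b) {
                // Promote both operands through their decimal string form;
                // throws NumberFormatException for NaN or infinite doubles.
                return new BigDecimal(a.toString())
                        .compareTo(new BigDecimal(b.toString()));
            }
        }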

  • What makes an effective UI for displaying versioning of structured hierarchical data

    - by Fadrian Sudaman
    Traditional version control systems display versioning information by grouping Projects-Folders-Files in a tree view on the left, with a details view on the right; you then click on each item to look at the revision history for that configuration item. Assuming that I have all the historical versioning information for a project available from an object-oriented model perspective (e.g. classes - methods - parameters and so on), what do you think would be the most effective way to present such information in a UI, so that you can easily navigate and access both the snapshot view of the project and the historical versioning information? Put yourself in the position of using a tool like this every day in your job, as you currently use SVN, SS, Perforce or any VCS; what would contribute to the usability, productivity and effectiveness of the tool? I personally find the classical way of displaying folders and files described above very restrictive and less effective for displaying deeply nested logical models. Assuming this is a greenfield project, not restricted to a specific technology, how do you think I should best approach this? I am looking for ideas and input here to add value to my research project. Feel free to make any suggestions that you think are valuable. Thanks to anyone who shares their thoughts.

  • Database structure for ecommerce site

    - by imanc
    Hey Guys, I have been tasked with designing an ecommerce solution. The aspect that is causing me the most problems is the database. Currently the site consists of 10+ country-based shops, each with its own database (all residing on the same MySQL instance). For the new site I'd rather all these shop databases were merged into one database, so that all tables (products, orders, customers etc.) have a shop_id field. From a programming perspective this seems to make the most sense, as we won't have to manage data across multiple databases. Currently the entire site generates about 120k orders a year, but it is experiencing fairly heavy growth and we need to design a solution that will scale. In 5 years there may be more than a million orders per year and a database that contains 5 years' order history (archiving may be a solution here). The question is - do we use a single database, or do we keep the database-per-shop structure? I am currently trying to find supporting evidence for either avenue. The company I am designing the solution for prefers the per-shop database structure, because they believe it will allow the sites to scale. But my argument is that the shops' databases probably won't get so busy over the next few years that they exceed the capacity of a MySQL database on a "no expenses spared" hardware set-up. I am wondering if anyone has any advice either way? Does anyone have experience with websites / ecommerce sites that have tables containing millions of records? I know there is probably not a clear answer here, but at what stage do we have too many records or too-large table files to have a fast-loading site? Also, if anyone has any advice on sources of information - books, websites, etc. where I can do further research, it would be highly appreciated! Cheers, imanc
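
    For what the merged layout might look like (a sketch of my own, with assumed columns): every shop-scoped table carries shop_id, and the hot indexes lead with it, so per-shop queries stay cheap as the shared tables grow:

        CREATE TABLE orders (
            order_id    BIGINT   NOT NULL AUTO_INCREMENT,
            shop_id     INT      NOT NULL,
            customer_id BIGINT   NOT NULL,
            created_at  DATETIME NOT NULL,
            PRIMARY KEY (order_id),
            KEY idx_shop_created (shop_id, created_at)  -- per-shop queries stay indexed
        );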

  • HTTP POST with URL query parameters -- good idea or not?

    - by Steven Huwig
    I'm designing an API to go over HTTP, and I am wondering if using the HTTP POST method, but with URL query parameters only and no request body, is a good way to go.

    Considerations:

    - "Good Web design" requires non-idempotent actions to be sent via POST. This is a non-idempotent action.
    - It is easier to develop and debug this app when the request parameters are present in the URL.
    - The API is not intended for widespread use.
    - It seems like making a POST request with no body will take a bit more work, e.g. a Content-Length: 0 header must be explicitly added.
    - It also seems to me that a POST with no body is a bit counter to most developers' and HTTP frameworks' expectations.

    Are there any more pitfalls or advantages to sending parameters on a POST request via the URL query rather than the request body?

    EDIT: The reason this is under consideration is that the operations are not idempotent and have side effects other than retrieval. See the HTTP spec:

        In particular, the convention has been established that the GET and
        HEAD methods SHOULD NOT have the significance of taking an action
        other than retrieval. These methods ought to be considered "safe".
        This allows user agents to represent other methods, such as POST,
        PUT and DELETE, in a special way, so that the user is made aware of
        the fact that a possibly unsafe action is being requested. ...
        Methods can also have the property of "idempotence" in that (aside
        from error or expiration issues) the side-effects of N > 0 identical
        requests is the same as for a single request. The methods GET, HEAD,
        PUT and DELETE share this property. Also, the methods OPTIONS and
        TRACE SHOULD NOT have side effects, and so are inherently idempotent.
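
    An illustrative raw request in this style (endpoint and parameter invented for the example):

        POST /api/orders/42/cancel?reason=customer_request HTTP/1.1
        Host: api.example.com
        Content-Length: 0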

  • What is the fastest way to do division in C for 8-bit MCUs?

    - by Jordan S
    I am working on the firmware for a device that uses an 8-bit MCU (8051 architecture). I am using SDCC (Small Device C Compiler). I have a function that I use to set the speed of a stepper motor that my circuit is driving. The speed is set by loading a desired value into the reload register for a timer. I have a variable, MotorSpeed, that is in the range of 0 to 1200 and represents pulses per second to the motor. My function to convert MotorSpeed to the correct 16-bit reload value is shown below. I know that floating-point operations are pretty slow, and I am wondering if there is a faster way of doing this...

        void SetSpeed()
        {
            float t = MotorSpeed;
            unsigned int j = 0;
            t = 1 / t;
            t = t / 0.000001;
            j = MaxInt - t;
            TMR3RL = j;    // Set reload register for desired freq
            return;
        }
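
    One faster direction to sketch (my own, not from the post): the three float operations reduce to a single integer divide, since 1/speed seconds is 1000000/speed microseconds. MaxInt and TMR3RL are the poster's names; the zero guard is an added assumption:

        void SetSpeed(void)
        {
            unsigned long ticks;
            if (MotorSpeed == 0)
                return;                          /* avoid divide-by-zero */
            ticks = 1000000UL / MotorSpeed;      /* period in us, one integer divide */
            TMR3RL = MaxInt - (unsigned int)ticks;
        }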

  • Significance in R

    - by Gemsie
    Ok, this is quite hard to explain, but I'm at a complete loss as to what to do. I'm a relative newcomer to R, and although I can completely admire how powerful it is, I'm not too good at actually using it... Basically, I have some very contrived data that I need to analyse (it wasn't me who chose this, I can assure you!). I have the right- and left-hand lengths of lots of people, as well as some numeric data that shows their sociability. Now I would like to know if people who have significantly different hand lengths are more or less sociable than those whose hands are the same length (leading into the research that 'symmetrical' people are more sociable and intelligent, etc.). I have got as far as loading the data into R; then I have no idea where to go. How on Earth do I separate those who are close to symmetrical from those who aren't, to then start the analysis?

    Ok, using Sasha's great advice, I did the cor.test and got the following:

        Pearson's product-moment correlation

        data:  measurements$l.hand - measurements$r.hand and measurements$sociable
        t = 0.2148, df = 150, p-value = 0.8302
        alternative hypothesis: true correlation is not equal to 0
        95 percent confidence interval:
         -0.1420623  0.1762437
        sample estimates:
               cor
        0.01753501

    I have never used this test before, so I am unsure how to interpret it... you wouldn't think I was on my fourth scientific degree, would you?! :(
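
    One way to make that split explicit (my own sketch; the median threshold is an arbitrary assumption) is to derive an asymmetry score and compare sociability across the two groups:

        # Absolute hand-length difference as an asymmetry score.
        asym <- abs(measurements$l.hand - measurements$r.hand)
        symmetric <- asym <= median(asym)   # threshold choice is arbitrary
        # Two-sample test: does sociability differ between the groups?
        t.test(measurements$sociable[symmetric],
               measurements$sociable[!symmetric])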

  • Generics vs inheritance (when no collection classes are involved)

    - by Ram
    This is an extension of this question, and it might even be a duplicate of some other question (if so, please forgive me). I see from MSDN that generics are usually used with collections:

        The most common use for generic classes is with collections like
        linked lists, hash tables, stacks, queues, trees and so on, where
        operations such as adding and removing items from the collection
        are performed in much the same way regardless of the type of data
        being stored.

    The examples I have seen also validate the above statement. Can someone give a valid use of generics in a real-life scenario which does not involve any collections? Pedantically, I was thinking about making an example which does not involve collections:

        public class Animal<T>
        {
            public void Speak()
            {
                Console.WriteLine("I am an Animal and my type is " + typeof(T).ToString());
            }

            public void Eat()
            {
                // Eat food
            }
        }

        public class Dog
        {
            public void WhoAmI()
            {
                Console.WriteLine(this.GetType().ToString());
            }
        }

    and "an Animal of type Dog" would be:

        Animal<Dog> magic = new Animal<Dog>();

    It is entirely possible to have Dog inherit from Animal (assuming a non-generic version of Animal): Dog : Animal. Therefore Dog is an Animal. Another example I was thinking of was a BankAccount: it can be BankAccount<Checking> or BankAccount<Savings>. This could just as well be Checking : BankAccount and Savings : BankAccount. Are there any best practices to determine if we should go with generics or with inheritance?
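
    One commonly cited non-collection use (a sketch of mine, not from the post) is a generic repository, where the type parameter expresses "works uniformly for any entity type" rather than an is-a relationship:

        // The operations are identical for every entity type, which is
        // what generics model; inheritance would instead claim that a
        // Customer "is a" repository.
        public interface IRepository<T> where T : class
        {
            T FindById(int id);
            void Save(T entity);
        }

        public class Customer { public int Id; public string Name; }

        // Usage: IRepository<Customer> repo = ...; repo.Save(new Customer());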

  • Entity Framework autoincrement key

    - by Tommy Ong
    I'm facing an issue of duplicated incremental values in a concurrent scenario. I'm using EF as the ORM tool, attempting to insert an entity with a field that acts as an incremental INT field. This field is called "SequenceNumber": before each insert, the code reads the database using MAX to get the last SequenceNumber, appends +1 to it, and saves the changes. Between getting the last SequenceNumber and saving, that's where the concurrency happens. I'm not using the ID for SequenceNumber, as it is not a unique constraint and may reset under certain conditions, such as monthly or yearly:

        InvoiceNumber  | SequenceNumber | DateCreated
        INV00001_08_14 | 1              | 25/08/2014
        INV00001_08_14 | 1              | 25/08/2014  <= (concurrency creates two SeqNo 1)
        INV00002_08_14 | 2              | 25/08/2014
        INV00003_08_14 | 3              | 26/08/2014
        INV00004_08_14 | 4              | 27/08/2014
        INV00005_08_14 | 5              | 29/08/2014
        INV00001_09_14 | 1              | 01/09/2014  <= (sequence number reset)

    The invoice number is formatted based on the SequenceNumber. After some research I've ended up with these possible solutions, but want to know the best practice:

    1. Pessimistic locking: lock the table against any reads until the current transaction completes (I'm not fond of this idea, as I guess the performance impact would be great).
    2. Create a stored procedure solely for this purpose, doing the select and insert in a single statement so the concurrency window is minimal (I would prefer an EF-based approach if possible).
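
    A sketch of option 2's single statement (my own illustration; table and column names are assumptions), using lock hints so two concurrent inserts cannot read the same MAX:

        -- UPDLOCK + HOLDLOCK serialize the MAX read against competing
        -- inserts for the same month, closing the race window.
        INSERT INTO Invoice (SequenceNumber, DateCreated)
        SELECT COALESCE(MAX(SequenceNumber), 0) + 1, GETDATE()
        FROM Invoice WITH (UPDLOCK, HOLDLOCK)
        WHERE YEAR(DateCreated)  = YEAR(GETDATE())
          AND MONTH(DateCreated) = MONTH(GETDATE());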

  • WCF Custom SOAP Header Issues

    - by WayneC
    I'm trying to implement an endpoint behavior which injects a custom SOAP header into all messages to and from a service. I've gotten pretty close by implementing the approach from the accepted answer of this question: http://stackoverflow.com/questions/986455/wcf-wsdl-soap-header-on-all-operations/995951#995951

    After implementing that solution, my custom SOAP header does indeed show up in the WSDL; however, when I try to call the methods on my service, I get the following exception/fault:

        <ExceptionDetail xmlns="http://schemas.datacontract.org/2004/07/System.ServiceModel"
                         xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
          <HelpLink i:nil="true" />
          <InnerException i:nil="true" />
          <Message>Index was outside the bounds of the array.</Message>
          <StackTrace>
            at System.ServiceModel.Dispatcher.DataContractSerializerOperationFormatter.AddHeadersToMessage(Message message, MessageDescription messageDescription, Object[] parameters, Boolean isRequest)
            at System.ServiceModel.Dispatcher.OperationFormatter.SerializeReply(MessageVersion messageVersion, Object[] parameters, Object result)
            at System.ServiceModel.Dispatcher.DispatchOperationRuntime.SerializeOutputs(MessageRpc&amp; rpc)
            at System.ServiceModel.Dispatcher.DispatchOperationRuntime.InvokeBegin(MessageRpc&amp; rpc)
            at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage5(MessageRpc&amp; rpc)
            at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage4(MessageRpc&amp; rpc)
            at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage3(MessageRpc&amp; rpc)
            at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage2(MessageRpc&amp; rpc)
            at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage1(MessageRpc&amp; rpc)
            at System.ServiceModel.Dispatcher.MessageRpc.Process(Boolean isOperationContextSet)
          </StackTrace>
          <Type>System.IndexOutOfRangeException</Type>
        </ExceptionDetail>

    Looking in Reflector at the DataContractSerializerOperationFormatter.AddHeadersToMessage method that's throwing the exception leads me to believe that the following snippet is causing the problem... but I'm not sure why:

        MessageHeaderDescription description = (MessageHeaderDescription)headerPart.Description;
        object parameterValue = parameters[description.Index];

    I think the last line above is throwing the exception. The parameters variable comes from IDispatchFormatter.SerializeReply. What's going on?!?!! Any help would be greatly appreciated...

  • What should I do to accommodate large-scale data storage and retrieval?

    - by kailashbuki
    There are two columns in a table inside a MySQL database. The first column contains the fingerprint, while the second one contains the list of documents which have that fingerprint. It's much like an inverted index built by search engines. An instance of a record inside the table is shown below:

        34    "doc1, doc2, doc45"

    The number of fingerprints is very large (it can range up to trillions). There are basically two operations on the database: inserting/updating a record, and retrieving the record according to a fingerprint match. The table-definition Python snippet is:

        self.cursor.execute("CREATE TABLE IF NOT EXISTS `fingerprint` (fp BIGINT, documents TEXT)")

    And the snippet for the insert/update operation is:

        if self.cursor.execute("UPDATE `fingerprint` SET documents=CONCAT(documents,%s) WHERE fp=%s",
                               ("," + newDocId, thisFP)) == 0L:
            self.cursor.execute("INSERT INTO `fingerprint` VALUES (%s, %s)", (thisFP, newDocId))

    The only bottleneck I have observed so far is the query time in MySQL. My whole application is web based, so time is a critical factor. I have also thought of using Cassandra, but have little knowledge of it. Please suggest a better way to tackle this problem.
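
    One incremental improvement to sketch (mine, not from the post): give fp a primary key so the WHERE fp=... lookups are indexed, and collapse the two statements into a single MySQL upsert:

        # Assumes the MySQLdb-style cursor from the post; the PRIMARY KEY
        # on fp is what actually speeds up lookups by fingerprint.
        self.cursor.execute("CREATE TABLE IF NOT EXISTS `fingerprint` "
                            "(fp BIGINT PRIMARY KEY, documents TEXT)")
        self.cursor.execute(
            "INSERT INTO `fingerprint` VALUES (%s, %s) "
            "ON DUPLICATE KEY UPDATE documents=CONCAT(documents, %s)",
            (thisFP, newDocId, "," + newDocId))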

  • Shrinking the transaction log of a mirrored SQL Server 2005 database

    - by Peter Di Cecco
    I've been looking all over the internet and I can't find an acceptable solution to my problem; I'm wondering if there even is a solution without a compromise... I'm not a DBA, but I'm a one-man team working on a huge web site with no extra funding for extra bodies, so I'm doing the best I can. Our backup plan sucks, and I'm having a really hard time improving it. Currently, there are two servers running SQL Server 2005. I have a mirrored database (no witness) that seems to be working well. I do a full backup at noon and at midnight. These get backed up to tape by our service provider nightly, and I burn the backup files to DVD weekly to keep old records on hand. Eventually I'd like to switch to log shipping, since mirroring seems kind of pointless without a witness server. The issue is that the transaction log is growing non-stop. From the research I've done, it seems that I can't truncate the log file of a mirrored database. So how do I stop the file from growing!? Based on this web page, I tried this:

        USE dbname
        GO
        CHECKPOINT
        GO
        BACKUP LOG dbname TO DISK='NULL'
            WITH NOFORMAT, INIT, NAME = N'dbnameLog Backup', SKIP, NOREWIND, NOUNLOAD
        GO
        DBCC SHRINKFILE('dbname_Log', 2048)
        GO

    But that didn't work. Everything else I've found says I need to disable the mirror before running the backup log command in order for it to work.

    My Question (TL;DR): How can I shrink my transaction log file without disabling the mirror?
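
    For context, a sketch of the standard alternative (my own, with an illustrative path): a real log backup to disk - rather than to 'NULL' - marks log space as reusable even while mirroring is on, and repeating it on a schedule is what stops the growth:

        BACKUP LOG dbname TO DISK = N'D:\Backups\dbname_log.trn';
        -- Once space inside the log is reusable, the file can be shrunk:
        DBCC SHRINKFILE ('dbname_Log', 2048);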

  • How do I build a filtered_streambuf based on basic_streambuf?

    - by swestrup
    I have a project that requires me to insert a filter into a stream so that outgoing data will be modified according to the filter. After some research, it seems that what I want to do is create a filtered_streambuf, like this:

        template <class StreamBuf>
        class filtered_streambuf : public StreamBuf
        {
            ...
        };

    and then insert a filtered_streambuf<> into whichever stream I need to be filtered. My problem is that I don't know what invariants I need to maintain while filtering a stream, in order to ensure that derived classes can work as expected. In particular, I may find I have filtered_streambufs built over other filtered_streambufs. All the various stream inserters, extractors and manipulators should work as expected. The trouble is that I just can't seem to work out what the minimal interface is that I need to supply in order to guarantee that an iostream will have what it needs to work correctly. In particular, do I need to fake the movement of the protected pointer variables, or not? Do I need a fake data buffer, or not? Can I just override the public functions, rewriting them in terms of the base streambuf, or is that too simplistic?
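
    For reference, a minimal sketch (my own, not the poster's design) of an output filter that wraps another streambuf and overrides only the protected virtuals overflow() and sync(), leaving the put area unbuffered:

        #include <cctype>
        #include <iostream>
        #include <streambuf>

        class upper_filter_streambuf : public std::streambuf {
        public:
            explicit upper_filter_streambuf(std::streambuf* dest) : dest_(dest) {}
        protected:
            // No put area is set up, so every character arrives here.
            int_type overflow(int_type ch)
            {
                if (traits_type::eq_int_type(ch, traits_type::eof()))
                    return traits_type::not_eof(ch);
                // The filter step -- uppercasing stands in for a real transform.
                char c = static_cast<char>(std::toupper(traits_type::to_char_type(ch)));
                return dest_->sputc(c);
            }
            int sync() { return dest_->pubsync(); }
        private:
            std::streambuf* dest_;
        };

        // Usage: upper_filter_streambuf buf(std::cout.rdbuf());
        //        std::ostream out(&buf); out << "filtered" << std::flush;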

  • R: Are there any alternatives to loops for subsetting from an optimization standpoint?

    - by Adam
    A recurring analysis paradigm I encounter in my research is the need to subset based on all the different group id values, performing statistical analysis on each group in turn and putting the results in an output matrix for further processing/summarizing. How I typically do this in R is something like the following:

        data.mat <- read.csv("...")
        groupids <- unique(data.mat$ID)  # Assume there are then 100 unique groups
        results <- matrix(rep("NA", 300), ncol = 3, nrow = 100)
        for (i in 1:100) {
          tempmat <- subset(data.mat, ID == groupids[i])
          # Run various stats on tempmat (correlations, regressions, etc.),
          # checking to make sure this specific group doesn't have NAs in
          # the variables I'm using, and assign results to x, y, and z.
          results[i, 1] <- x
          results[i, 2] <- y
          results[i, 3] <- z
        }

    This ends up working for me, but depending on the size of the data and the number of groups I'm working with, it can take up to three days. Besides branching out into parallel processing, is there any "trick" for making something like this run faster? For instance, converting the loops into something else (such as an apply call with a function containing the stats I want to run inside the loop), or eliminating the need to actually assign the subset of data to a variable?
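
    One such apply-style rewrite as a sketch (mine; the statistics and the column name `score` are placeholders): split() partitions the data frame once, so the repeated subset() scans disappear:

        # Each list element is one group's rows; sapply collects a row of
        # statistics per group into a matrix (transposed to groups-as-rows).
        results <- t(sapply(split(data.mat, data.mat$ID), function(g) {
          c(x = mean(g$score), y = sd(g$score), z = nrow(g))
        }))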

  • Ember Data Sync - LocalStorage+REST+RealTime+Online/Offline

    - by Miguel Madero
    We have a combination of requirements in terms of data access:

    - Pre-load some reference data. We need reference data to survive browser restarts instead of just living in memory, to avoid loading it all the time. I'm currently using the LocalStorageAdapter for that. Once we have it, we would like to sync changes (polling, or using Socket.IO in the background and updating the LocalStorage, could do the trick).
    - There are other models that are more transactional, where we would need to go directly to the server to get/save them. It would be nice to use something like the RESTAdapter for that.
    - Lastly, there are some operations that should work offline, with the changes synced later.

    To make it more concrete:

    - We pre-load vendor and "favorite products" data into Local Storage. We work offline with those.
    - We need to sync server changes to vendor and product information.
    - If users search the full catalog, that requires them to be online.
    - When offline, we need to allow users to add something to their cart or even submit an order. We would like to queue this action and submit it when they have an Internet connection.

    So a few questions derive from this:

    - Is there a way to use the RESTAdapter in combination with LocalStorage?
    - Is there some Socket.IO support? (Happy to do this part manually.)
    - Is there queueing support? Ideally at the Ember-Data level.

    I know we will have to do a lot of this manually and pull the different Lego pieces together, but I wanted to ask for some perspective from experienced Ember devs.

  • Recursive Iterators

    - by soandos
    I am having some trouble making an iterator that can traverse the following type of data structure. I have a class called Expression, which has one data member, a List<object>. This list can have any number of children, and some of those children might be other Expression objects. I want to traverse this structure and print out every non-list object (but I do want to print out the elements of the lists, of course); before entering a list I want to return "begin nest", and just after exiting a list I want to return "end nest". I was able to do this if I ignored the class wherever possible and just had List<object> objects with List<object> items where I wanted a sub-expression, but I would rather do away with this and instead have Expressions as the sub-lists (it would make it easier to do operations on the object). I am aware that I could use extension methods on the List<object>, but it would not be appropriate (who wants an Evaluate method on their list that takes no arguments?). The code that I used for the original iterator (which works) is:

        public IEnumerator GetEnumerator()
        {
            return theIterator(expr).GetEnumerator();
        }

        private IEnumerable theIterator(object root)
        {
            if (root is List<object>)
            {
                yield return " begin nest ";
                foreach (var item in (List<object>)root)
                {
                    foreach (var item2 in theIterator(item))
                    {
                        yield return item2;
                    }
                }
                yield return " end nest ";
            }
            else
                yield return root;
        }

    A type swap of List<object> for Expression did not work, and led to a StackOverflowException. How should the iterator be implemented?

    Update: Here is the swapped code:

        public IEnumerator GetEnumerator()
        {
            return this.GetEnumerator();
        }

        private IEnumerable theIterator(object root)
        {
            if (root is Expression)
            {
                yield return " begin nest ";
                foreach (var item in (Expression)root)
                {
                    foreach (var item2 in theIterator(item))
                        yield return item2;
                }
                yield return " end nest ";
            }
            else
                yield return root;
        }
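
    A sketch of one likely fix (my reading of the code, not an answer from the thread): the swapped GetEnumerator() calls itself, and even when routed through theIterator, enumerating an Expression re-enters GetEnumerator. Recursing over the underlying List<object> member (expr, in the original code) breaks the cycle:

        public IEnumerator GetEnumerator()
        {
            return theIterator(this).GetEnumerator();
        }

        private IEnumerable theIterator(object root)
        {
            if (root is Expression)
            {
                yield return " begin nest ";
                // Walk the raw list, not the Expression itself, so the
                // recursion bottoms out at non-Expression items.
                foreach (var item in ((Expression)root).expr)
                    foreach (var item2 in theIterator(item))
                        yield return item2;
                yield return " end nest ";
            }
            else
            {
                yield return root;
            }
        }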

  • .NET 3.5 C# does not offer what I need for locking: Count async saves until 0 again.

    - by Frank Michael Kraft
    I have some records that I want to save to a database asynchronously. I organize them into batches, then send them. As time passes, the batches are processed; in the meantime the user can keep working. There are some critical operations that I want to lock him out of while any save batch is still running asynchronously. The save is done using a TableServiceContext and the method .BeginSave() - but I think this should be irrelevant. What I want to do is: whenever an async save is started, increase a lock count, and when it completes, decrease the lock count, so that it will be zero as soon as all have finished. I want to lock out the critical operation as long as the count is not zero. Furthermore, I want to qualify the lock - by business object, for example. I did not find a .NET 3.5 C# locking primitive that fulfils this requirement. A semaphore does not contain a method to check whether the count is 0; otherwise a semaphore with an unlimited max count would do.
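
    A minimal sketch of such a qualified countdown (my own, for .NET 3.5, which predates CountdownEvent; the string key stands in for the business object):

        using System.Collections.Generic;
        using System.Threading;

        // One counter per business-object key. BeginSave/EndSave bracket
        // each async save; WaitUntilIdle blocks the critical operation
        // until that key's count is back to zero.
        public class SaveTracker
        {
            private readonly Dictionary<string, int> counts = new Dictionary<string, int>();
            private readonly object gate = new object();

            public void BeginSave(string key)
            {
                lock (gate)
                {
                    int n;
                    counts.TryGetValue(key, out n);
                    counts[key] = n + 1;
                }
            }

            public void EndSave(string key)   // call from the BeginSave completion callback
            {
                lock (gate)
                {
                    int n = counts[key] - 1;
                    if (n == 0) counts.Remove(key); else counts[key] = n;
                    Monitor.PulseAll(gate);   // wake waiters so they can re-check
                }
            }

            public void WaitUntilIdle(string key)
            {
                lock (gate)
                {
                    while (counts.ContainsKey(key))
                        Monitor.Wait(gate);
                }
            }
        }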
