Search Results

Search found 6392 results on 256 pages for 'reduce duplicate'.

  • Lookup table size reduction

    - by Ryan
    Hello: I have an application in which I have to store a couple of million integers in a lookup table. Obviously I cannot hold that much data in memory, and my requirements are very tight: this runs on an embedded system, so space is extremely limited. I would like to ask about recommended methods for reducing the size of the lookup table. I cannot use function approximation such as neural networks; the values need to be in a table. The range of the integers is not known at the moment. When I say integers I mean 32-bit values. Basically the idea is to use some compression method to reduce the amount of memory used, but without losing much precision. This needs to run in hardware, so the computation overhead cannot be very high. In my algorithm I have to access one value of the table, do some operations with it, and afterwards update the value. In the end what I should have is a function to which I pass an index and get back a value, and another function to write a value into the table. I found one method called tile coding (http://www.cs.ualberta.ca/~sutton/book/8/node6.html), which is based on several lookup tables. Does anyone know any other method? Thanks.
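    (One direction worth a sketch, purely as an illustration rather than an answer from the thread: quantize each 32-bit value down to 16 bits with a fixed shift, halving the table at the cost of a bounded precision loss. The table size and shift below are hypothetical, and the scheme assumes the low bits are expendable.)

        #include <stdint.h>

        #define LUT_SIZE  (1u << 20)    /* hypothetical: ~1M entries */
        #define LUT_SHIFT 8             /* hypothetical: drop the 8 low bits */

        static uint16_t lut[LUT_SIZE];  /* 2 MB instead of 4 MB */

        /* Read one entry: expand the stored 16 bits back toward 32 bits. */
        uint32_t lut_get(uint32_t index) {
            return (uint32_t)lut[index] << LUT_SHIFT;
        }

        /* Write one entry: round to the nearest representable value, saturating
           because 16 stored bits + 8 shifted bits only cover a 24-bit range. */
        void lut_set(uint32_t index, uint32_t value) {
            uint32_t q = (value + (1u << (LUT_SHIFT - 1))) >> LUT_SHIFT;
            if (q > 0xFFFFu) q = 0xFFFFu;
            lut[index] = (uint16_t)q;
        }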

  • GROUP BY ID range?

    - by d0ugal
    Given a data set like this:

        +-----+---------------------+--------+
        | id  | date                | result |
        +-----+---------------------+--------+
        | 121 | 2009-07-11 13:23:24 |     -1 |
        | 122 | 2009-07-11 13:23:24 |     -1 |
        | 123 | 2009-07-11 13:23:24 |     -1 |
        | 124 | 2009-07-11 13:23:24 |     -1 |
        | 125 | 2009-07-11 13:23:24 |     -1 |
        | 126 | 2009-07-11 13:23:24 |     -1 |
        | 127 | 2009-07-11 13:23:24 |     -1 |
        | 128 | 2009-07-11 13:23:24 |     -1 |
        | 129 | 2009-07-11 13:23:24 |     -1 |
        | 130 | 2009-07-11 13:23:24 |     -1 |
        | 131 | 2009-07-11 13:23:24 |     -1 |
        | 132 | 2009-07-11 13:23:24 |     -1 |
        | 133 | 2009-07-11 13:23:24 |     -1 |
        | 134 | 2009-07-11 13:23:24 |     -1 |
        | 135 | 2009-07-11 13:23:24 |     -1 |
        | 136 | 2009-07-11 13:23:24 |     -1 |
        | 137 | 2009-07-11 13:23:24 |     -1 |
        | 138 | 2009-07-11 13:23:24 |      1 |
        | 139 | 2009-07-11 13:23:24 |      0 |
        | 140 | 2009-07-11 13:23:24 |     -1 |
        +-----+---------------------+--------+

    How would I go about grouping the results five records at a time? The above is part of the live data; there are over 100,000 result rows in the table and it's growing. Basically I want to measure the change over time, so I want to take a SUM of the result every X records. In the real data I'll be doing it every 100 or 1,000, but for the data above perhaps every 5. If I could just group by the date I would do something like this:

        SELECT DATE_FORMAT(date, '%h%i') ym,
               COUNT(result) 'Total Games',
               SUM(result) as 'Score'
        FROM nn_log
        GROUP BY ym;

    I can't figure out a way of doing something similar with numbers. The order is sorted by the date, but I want to split the data up every X results. It's safe to assume there are no blank rows. With the data above you could do multiple selects like:

        SELECT SUM(result) FROM table LIMIT 0,5;
        SELECT SUM(result) FROM table LIMIT 5,5;
        SELECT SUM(result) FROM table LIMIT 10,5;

    That's obviously not a very good way to scale up to a bigger problem. I could just write a loop, but I'd like to reduce the number of queries.
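    (A sketch of one way to do this, offered as an assumption rather than the thread's answer: if the ids are contiguous with no gaps, integer division turns them into bucket numbers that GROUP BY can use directly. The offset 121 is just the first id in the sample above; with gaps, a row-number variable would be needed instead.)

        -- bucket rows five at a time by integer-dividing the id
        SELECT FLOOR((id - 121) / 5) AS bucket,
               COUNT(result) AS 'Total Games',
               SUM(result)   AS 'Score'
        FROM nn_log
        GROUP BY bucket;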

  • live.com setting can't be changed

    - by M M
    I'm on mail.live.com where, in the upper left corner, it says "Windows Live™" and to the right of that it says "Hotmail([number])", "Messenger", "SkyDrive", "|", "MSN." Directly under the "Windows Live™" there is a square, bluish-gray avatar (a generic, rotund person shape with a head, trunk, and arms). To the right of that there is a field (with a subtle, barely perceptible speech-bubble-like arrow emanating from the avatar). But there's a word inside that field that I cannot get rid of. Coincidentally, I think it's the same word I used as a search term a while back, having meant to put it in the "Search email and more" Bing field on the other side of the screen. (Even that would have been a mistake, because I had been aiming for the e-mail search field.) But it remains in the field connected to the avatar, and moreover the field is editable only to the limited extent that a cursor can be placed into it with a mouse click; the word just can't be deleted. I don't know if the avatar should be there either, but I'd rather have just that than the word next to it, continuously there for time immemorial. If I click into the field hoping to delete the word, I'm confronted with options along the bottom of the same field (now expanded by my mouse click): "Add: Photo Link Document", a button that says "Share", and an [X] to collapse the field back to its default state, which still contains the word I'm trying to delete.

  • Why is debugging better in an IDE?

    - by Bill Karwin
    I've been a software developer for over twenty years, programming in C, Perl, SQL, Java, PHP, JavaScript, and recently Python. I've never had a problem I could not debug using some careful thought, and well-placed debugging print statements. I respect that many people say that my techniques are primitive, and using a real debugger in an IDE is much better. Yet from my observation, IDE users don't appear to debug faster or more successfully than I can, using my stone knives and bear skins. I'm sincerely open to learning the right tools, I've just never been shown a compelling advantage to using visual debuggers. Moreover, I have never read a tutorial or book that showed how to debug effectively using an IDE, beyond the basics of how to set breakpoints and display the contents of variables. What am I missing? What makes IDE debugging tools so much more effective than thoughtful use of diagnostic print statements? Can you suggest resources (tutorials, books, screencasts) that show the finer techniques of IDE debugging? Sweet answers! Thanks much to everyone for taking the time. Very illuminating. I voted up many, and voted none down. Some notable points: Debuggers can help me do ad hoc inspection or alteration of variables, code, or any other aspect of the runtime environment, whereas manual debugging requires me to stop, edit, and re-execute the application (possibly requiring recompilation). Debuggers can attach to a running process or use a crash dump, whereas with manual debugging, "steps to reproduce" a defect are necessary. Debuggers can display complex data structures, multi-threaded environments, or full runtime stacks easily and in a more readable manner. Debuggers offer many ways to reduce the time and repetitive work to do almost any debugging tasks. Visual debuggers and console debuggers are both useful, and have many features in common. A visual debugger integrated into an IDE also gives you convenient access to smart editing and all the other features of the IDE, in a single integrated development environment (hence the name).

  • Find all the child nodes of a specific value in C#

    - by sara brown
        <main>
          <myself>
            <pid>1</pid>
            <name>abc</name>
          </myself>
          <myself>
            <pid>2</pid>
            <name>efg</name>
          </myself>
        </main>

    That is my XML file, named simpan. I have two buttons, next and previous. What I want is for all the info to show up in the TextBox when the user clicks a button. The search is based on the pid. The next button adds 1 to the pid (say pid=2), searches for the node that has that pid, and shows its name (here name=efg). Same goes for the previous button, which reduces the pid by 1 (pid=1). Does anybody know how to do this?

    EDIT: Thanks to L.B, I'm trying to use his code. However, I got an error. Is my implementation of the code correct?

        private void previousList_Click(object sender, EventArgs e)
        {
            pid = 14;
            XDocument xDoc = XDocument.Parse("C:\\Users\\HDAdmin\\Documents\\Fatty\\SliceEngine\\SliceEngine\\bin\\Debug\\simpan.xml");
            var name = xDoc.Descendants("myself")
                           .First(m => (int)m.Element("PatientID") == pid)
                           .Value;
            textETA.Text = name;
        }
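    (Two details in that snippet stand out, noted as observations rather than the thread's answer: XDocument.Parse expects a string of XML, so loading from a file path takes XDocument.Load, and the file's element is named pid, not PatientID. A minimal sketch under those assumptions:)

        using System;
        using System.Linq;
        using System.Xml.Linq;

        static class SimpanLookup
        {
            // Sketch: find the <name> belonging to a given <pid> in simpan.xml.
            public static string NameForPid(string path, int pid)
            {
                XDocument doc = XDocument.Load(path);  // Load reads a file; Parse parses XML text
                return doc.Descendants("myself")
                          .Where(m => (int)m.Element("pid") == pid)
                          .Select(m => (string)m.Element("name"))
                          .FirstOrDefault();           // null when no such pid exists
            }
        }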

  • What is the best practice in regards to building composite dtos off of an aggregate root with domain

    - by Chance
    I'm trying to figure out the best approach/practice for assembling a composite data transfer object off of an aggregate root, and I would love to hear people's thoughts on this. For example, let's say I have a root that has a few domain objects as children. I want to assemble a specific view DTO, based on some business logic, that has either attributes or full DTOs of its objects. What I'm struggling with is figuring out where that assembly should happen. I can see it going on the domain object of the aggregate root, as there is some business logic associated with it. The benefit of this approach, from what I've deduced thus far, is that it should keep the inevitable business logic from bleeding outside of the domain object. It also allows for private methods that take care of tasks that would become more complex in an external builder. The downsides are that the domain object becomes much more entrenched in the application's workflow and represents much more than just the domain; it could also become very large in the scenario where you need multiple composite DTOs. Alternatively, I could see this belonging to some form of transfer object assembler, where there is a builder for each domain object. The domain objects would still be responsible for GetDto() and UpdateFromDto(dto); outside of that, the builder would handle the construction and deconstruction of composite DTOs (a sketch of this option follows below). The downside is the one mentioned above: I fear this will easily lead developers unfamiliar with DDD to bleed a ton of business logic into the assembler, which is what I desperately want to avoid. Any thoughts would be greatly appreciated.
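    (Purely as an illustration of the assembler option, with hypothetical names throughout: the assembler only composes, and anything rule-like stays behind a method on the aggregate root.)

        // Hypothetical names; a sketch of the assembler option, not a prescription.
        public class Customer { public string Name { get; set; } }

        public class Order                                     // aggregate root
        {
            public Customer Customer { get; set; }
            public string DescribeStatus() { return "open"; }  // stand-in for domain logic
        }

        public class OrderDto
        {
            public string Status { get; set; }
            public string CustomerName { get; set; }
        }

        public class OrderDtoAssembler
        {
            // Composition lives here; business rules stay on the aggregate root.
            public OrderDto ToViewDto(Order order)
            {
                return new OrderDto
                {
                    Status = order.DescribeStatus(),
                    CustomerName = order.Customer.Name
                };
            }
        }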

  • Synchronizing in SQL Replication works when manually syncing, but not automatically

    - by Dominic Zukiewicz
    I'm using SQL Server 2005 to create a replicated copy of the main databases, so that reports can point to the replica instead of locking our main databases. I have set up the 3 databases as publications and then 3 subscribers, moving the transactions over to the subscribers instantaneously, I hope! What seems to be happening is that when using the "Insert Tracer" function, replication from publisher to distributor takes < 2 seconds, but replicating on to the subscribers can take over 7 minutes (and these are local databases on a SAN). This could be for 2 reasons: 1. The SQL statements used to query the database are obtaining locks which stop the transactions from updating the subscribers. 2. The subscribers are just too busy for the replication to apply the changes. What troubles me more is that although the Replication Monitor / Insert Tracer show these statistics, if you use "View Subscription Details" and then click Start, it will sync within seconds. My goal is to have the data syncing (ideally) continuously, or every minute; perhaps I should reduce the batch size of the transactions? What am I doing wrong? [Note that the -Continuous flag is set!]

  • Any merit to a lazy-ish juxt function?

    - by NielsK
    In answering a question about a function that maps over multiple functions with the same arguments (A: juxt), I came up with a function that basically took the same form as juxt, but used map:

        (defn could-be-lazy-juxt [& funs]
          (fn [& args]
            (map #(apply %1 %2) funs (repeat args))))

        => ((juxt inc dec str) 1)
        [2 0 "1"]
        => ((could-be-lazy-juxt inc dec str) 1)
        (2 0 "1")
        => ((juxt * / -) 6 2)
        [12 3 4]
        => ((could-be-lazy-juxt * / -) 6 2)
        (12 3 4)

    As posted in the original question, I have little clue about its laziness or performance, but timing in the REPL does suggest something lazy-ish is going on:

        => (time (apply (juxt + -) (range 1 100)))
        "Elapsed time: 0.097198 msecs"
        [4950 -4948]
        => (time (apply (could-be-lazy-juxt + -) (range 1 100)))
        "Elapsed time: 0.074558 msecs"
        (4950 -4948)
        => (time (apply (juxt + -) (range 10000000)))
        "Elapsed time: 1019.317913 msecs"
        [49999995000000 -49999995000000]
        => (time (apply (could-be-lazy-juxt + -) (range 10000000)))
        "Elapsed time: 0.070332 msecs"
        (49999995000000 -49999995000000)

    I'm sure this function is not really that quick (printing the outcome 'feels' about as long in both). Doing a 'take x' on the function only limits the number of functions evaluated, which is probably limited in its applicability, and limiting the other parameters by 'take' should be just as lazy in normal juxt. Is this juxt really lazy? Would a lazy juxt bring anything useful to the table, for instance as a compositing step between other lazy functions? What are the performance (mem / cpu / object count / compilation) implications? Is that why the Clojure juxt implementation is done with a reduce and returns a vector? Edit: Somehow things can always be done more simply in Clojure:

        (defn could-be-lazy-juxt [& funs]
          (fn [& args]
            (map #(apply % args) funs)))

  • Harvesting Dynamic HTTP Content to produce Replicating HTTP Static Content

    - by Neil Pitman
    I have a slowly evolving dynamic website served from J2EE. The response time and load capacity of the server are inadequate for client needs. Moreover, ad hoc requests can unexpectedly affect other services running on the same application server/database. I know the reasons and can't address them in the short term. I understand HTTP caching hints (expiry, etags, ...) and, for the purpose of this question, please assume that I have maxed out the opportunities to reduce load. I am thinking of doing a brute-force traversal of all URLs in the system to prime a cache, and then copying the cache contents to geo-dispersed cache servers near the clients. I'm thinking of Squid or Apache HTTPD mod_disk_cache. I want to prime one copy and (manually) replicate the cache contents; I don't need a federation or intelligence amongst the slaves. When the data changes, invalidating the cache, I will refresh my master cache and update the slave versions, probably once a night. Has anyone done this? Is it a good idea? Are there other technologies that I should investigate? I could program this, but I would prefer a solution built from a configuration of open-source technologies. Thanks.

  • Can a Generic Method handle both Reference and Nullable Value types?

    - by Adam Lassek
    I have a series of extension methods to help with null-checking on IDataRecord objects, which I'm currently implementing like this:

        public static int? GetNullableInt32(this IDataRecord dr, int ordinal)
        {
            int? nullInt = null;
            return dr.IsDBNull(ordinal) ? nullInt : dr.GetInt32(ordinal);
        }

        public static int? GetNullableInt32(this IDataRecord dr, string fieldname)
        {
            int ordinal = dr.GetOrdinal(fieldname);
            return dr.GetNullableInt32(ordinal);
        }

    and so on, for each type I need to deal with. I'd like to reimplement these as a generic method, partly to reduce redundancy and partly to learn how to write generic methods in general. I've written this:

        public static Nullable<T> GetNullable<T>(this IDataRecord dr, int ordinal)
        {
            Nullable<T> nullValue = null;
            return dr.IsDBNull(ordinal) ? nullValue : (Nullable<T>)dr.GetValue(ordinal);
        }

    which works as long as T is a value type, but not if T is a reference type. The method would need to return a Nullable<T> if T is a value type, and default(T) otherwise. How would I implement this behavior?
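    (A sketch of one common workaround, stated as an assumption rather than the thread's answer: C# won't accept two overloads whose signatures differ only by a struct/class constraint, but a single unconstrained method works if the caller names the nullable type explicitly.)

        using System.Data;

        public static class DataRecordExtensions
        {
            // Callers pick the type themselves: dr.GetNullable<int?>(0) for values,
            // dr.GetNullable<string>(1) for references. DBNull maps to null either way.
            public static T GetNullable<T>(this IDataRecord dr, int ordinal)
            {
                return dr.IsDBNull(ordinal)
                    ? default(T)                  // null for int? and for reference types
                    : (T)dr.GetValue(ordinal);    // unboxing to Nullable<T> is allowed
            }
        }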

  • How Can I Compress Texture in OpenGL on iPhone/iPad?

    - by nonamelive
    Hi, I'm making an iPad app which needs OpenGL to do a flip animation. I have a front image texture and a back image texture. Both textures are screenshots:

        // Capture an image of the screen
        UIGraphicsBeginImageContext(view.bounds.size);
        [view.layer renderInContext:UIGraphicsGetCurrentContext()];
        image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        // Allocate some memory for the texture
        GLubyte *textureData = (GLubyte*)calloc(maxTextureSize*4, maxTextureSize);

        // Create a drawing context to draw image into texture memory
        CGContextRef textureContext = CGBitmapContextCreate(textureData,
            maxTextureSize, maxTextureSize, 8, maxTextureSize*4,
            CGImageGetColorSpace(image.CGImage), kCGImageAlphaPremultipliedLast);
        CGContextDrawImage(textureContext,
            CGRectMake(0, maxTextureSize-size.height, size.width, size.height),
            image.CGImage);
        CGContextRelease(textureContext);

        // ...done creating the texture data
        [EAGLContext setCurrentContext:context];
        glGenTextures(1, &textureToView);
        glBindTexture(GL_TEXTURE_2D, textureToView);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, maxTextureSize, maxTextureSize,
            0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);

        // free texture data which is by now copied into the GL context
        free(textureData);

    Each texture takes up about 8 MB of memory, which is unacceptable for an iPhone/iPad app. Could anyone tell me how I can compress the textures to reduce the memory? I'm a complete newbie to OpenGL. Any help would be appreciated!
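    (One cheap option, sketched below as an assumption rather than thread guidance: screenshots that don't need alpha can be uploaded as 16-bit RGB565, halving the GL memory. PVRTC compression would shrink it much further, but requires converting the image data offline with Apple's texturetool.)

        // Sketch: repack the RGBA8888 buffer to RGB565 and upload the 16-bit version.
        GLushort *packed = (GLushort *)malloc(maxTextureSize * maxTextureSize * 2);
        for (int i = 0; i < maxTextureSize * maxTextureSize; i++) {
            GLubyte r = textureData[i * 4 + 0];
            GLubyte g = textureData[i * 4 + 1];
            GLubyte b = textureData[i * 4 + 2];
            packed[i] = ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3);
        }
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, maxTextureSize, maxTextureSize,
                     0, GL_RGB, GL_UNSIGNED_SHORT_5_6_5, packed);
        free(packed);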

  • Convert image color space and output separate channels in OpenCV

    - by Victor May
    I'm trying to reduce the runtime of a routine that converts an RGB image to a YCbCr image. My code looks like this:

        cv::Mat input(BGR->m_height, BGR->m_width, CV_8UC3, BGR->m_imageData);
        cv::Mat output(BGR->m_height, BGR->m_width, CV_8UC3);
        cv::cvtColor(input, output, CV_BGR2YCrCb);

        cv::Mat outputArr[3];
        outputArr[0] = cv::Mat(BGR->m_height, BGR->m_width, CV_8UC1, Y->m_imageData);
        outputArr[1] = cv::Mat(BGR->m_height, BGR->m_width, CV_8UC1, Cr->m_imageData);
        outputArr[2] = cv::Mat(BGR->m_height, BGR->m_width, CV_8UC1, Cb->m_imageData);
        split(output, outputArr);

    But this code is slow, because there is a redundant split operation which copies the interleaved RGB image into the separate channel images. Is there a way to make the cvtColor function create an output that is already split into channel images? I tried to use constructors of the _OutputArray class that accept a vector or array of matrices as input, but it didn't work.

  • Using pointers to adjust global objects in objective-c

    - by Rob
    Ok, so I am working with two sets of data that are extremely similar; both are global NSMutableArrays within the object:

        data_set_one = [[NSMutableArray alloc] init];
        data_set_two = [[NSMutableArray alloc] init];

    Two new NSMutableArrays are loaded, which need to be added to the old, existing data. These arrays are also global:

        xml_dataset_one = [[NSMutableArray alloc] init];
        xml_dataset_two = [[NSMutableArray alloc] init];

    To reduce code duplication (and because these data sets are so similar) I wrote a void method within the class to handle the data combination process for both arrays:

        -(void)constructData:(NSMutableArray *)data fromDownloadArray:(NSMutableArray *)down withMatchSelector:(NSString *)sel_str

    Now, I have a decent understanding of object-oriented programming, so I was thinking that if I were to invoke the method with the global arrays, like so...

        [self constructData:data_set_one fromDownloadArray:xml_dataset_one withMatchSelector:@"id"];

    ...then the global NSMutableArray (data_set_one) would reflect the changes that happen to the array within the method. Sadly, this is not the case: data_set_one doesn't reflect the changes (e.g. new objects within the array) outside of the method. Here is a code snippet of the problem:

        // data_set_one is empty
        // xml_dataset_one has a few objects
        [constructData:(NSMutableArray *)data_set_one fromDownloadArray:(NSMutableArray *)xml_dataset_one withMatchSelector:(NSString *)@"id"];
        // data_set_one should now be xml_dataset_one, but when echoed to screen, it appears to remain empty

    And here is the gist of the code for the method; any help is appreciated:

        -(void)constructData:(NSMutableArray *)data fromDownloadArray:(NSMutableArray *)down withMatchSelector:(NSString *)sel_str
        {
            if ([data count] == 0) {
                data = down; // set data equal to downloaded data
            } else if ([down count] == 0) {
                // download yields no results, do nothing
            } else {
                // combine the two arrays here
            }
        }

    This project is not ARC enabled. Thanks for the help guys! Rob
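    (One relevant language detail, sketched below as an observation rather than the thread's answer: the parameter data is a local copy of the pointer, so the assignment data = down only rebinds that local copy and leaves the global untouched; mutating the array object the pointer refers to is what propagates to the caller.)

        // Sketch: mutate the caller's array in place instead of reassigning the pointer.
        -(void)constructData:(NSMutableArray *)data fromDownloadArray:(NSMutableArray *)down withMatchSelector:(NSString *)sel_str
        {
            if ([data count] == 0) {
                [data setArray:down];   // copies down's elements into the same object
            } else if ([down count] == 0) {
                // download yields no results, do nothing
            } else {
                // combine the two arrays here, again by mutating data in place
            }
        }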

  • Spring Security ACL: NotFoundException from JDBCMutableAclService.createAcl

    - by user340202
    Hello, I've been working on this task for too long to abandon the idea of using Spring Security to achieve it, but I wish the community would provide some support that will help reduce the regret I have for choosing Spring Security. Enough ranting; let's get to the point. I'm trying to create an ACL by using JDBCMutableAclService.createAcl as follows:

        public void addPermission(IWFArtifact securedObject, Sid recipient, Permission permission, Class clazz) {
            ObjectIdentity oid = new ObjectIdentityImpl(clazz.getCanonicalName(), securedObject.getId());
            this.addPermission(oid, recipient, permission);
        }

        @Override
        @Transactional(propagation = Propagation.REQUIRED, isolation = Isolation.READ_UNCOMMITTED, readOnly = false)
        public void addPermission(ObjectIdentity oid, Sid recipient, Permission permission) {
            SpringSecurityUtils.assureThreadLocalAuthSet();
            MutableAcl acl;
            try {
                acl = this.mutableAclService.createAcl(oid);
            } catch (AlreadyExistsException e) {
                acl = (MutableAcl) this.mutableAclService.readAclById(oid);
            }
            // try {
            //     acl = (MutableAcl) this.mutableAclService.readAclById(oid);
            // } catch (NotFoundException nfe) {
            //     acl = this.mutableAclService.createAcl(oid);
            // }
            acl.insertAce(acl.getEntries().length, permission, recipient, true);
            this.mutableAclService.updateAcl(acl);
        }

    The call throws a NotFoundException from the line:

        // Retrieve the ACL via superclass (ensures cache registration, proper retrieval etc)
        Acl acl = readAclById(objectIdentity);

    I believe this is caused by something related to @Transactional, which is why I have tested with many TransactionDefinition attributes. I have also doubted the annotation and tried declarative transaction definition, but still no luck. One important point: I took the statement that inserts the oid into the database earlier in the method and ran it directly against the database, and it worked; it then threw a unique-constraint exception when the method tried to insert it again. I'm using Spring Security 2.0.8 and IceFaces 1.8 (which doesn't support Spring 3.0 but definitely supports 2.0.x, especially when I keep calling SpringSecurityUtils.assureThreadLocalAuthSet()). My app server is Tomcat 6.0, and my DB server is MySQL 6.0. I hope to get a reply soon, because I need to get this task out of my way.

  • How do I detect proximity of the mouse pointer to a line in Flex?

    - by Hanno Fietz
    I'm working on a charting UI in Flex. One of the features I want to implement is "snapping" of the mouse pointer to the data points in the diagram, i.e., if the user hovers the mouse pointer over a line diagram and gets close to a data point, I want the pointer to move to the exact coordinates and show a marker there. Currently, the lines are drawn on a Shape using the Graphics API. The Shape is a child DisplayObject of a custom UIComponent subclass with the exact same dimensions. This means I already get mouseOver events on the parent of the diagram's canvas. Now I need a way to detect whether the pointer is close to one of the data points, i.e., I need an answer to the question "Which data points lie within a radius of x pixels from my current position, and which of them is closest?" upon each move of the mouse. I can think of the following possibilities:

    - Draw the lines not as simple lines in the Graphics API, but as more advanced objects that can have their own mouseOver events. However, I want the snapping to trigger before the mouse is actually over the line.
    - Check the original data for possible candidates upon each mouse movement. Using binary search, I might be able to reduce the number of items I have to compare sufficiently (see the sketch below).
    - Prepare some kind of new data structure from the raw data that makes the above search more efficient. I don't know what that would look like.

    I'm guessing this is a pretty standard problem for a number of applications, but probably the actual code usually lives inside some framework. Is there anything I can read about this topic?
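    (A sketch of the binary-search idea, under two assumptions not stated in the question: the data points are already sorted by x, and everything is in pixel coordinates.)

        import flash.geom.Point;

        // Binary-search the insertion index for mouse.x, then scan outwards in both
        // directions until the x-distance alone exceeds the snap radius.
        function nearestPoint(points:Vector.<Point>, mouse:Point, radius:Number):Point {
            var lo:int = 0, hi:int = points.length;
            while (lo < hi) {
                var mid:int = (lo + hi) >> 1;
                if (points[mid].x < mouse.x) lo = mid + 1; else hi = mid;
            }
            var best:Point = null;
            var bestD:Number = radius;
            for (var r:int = lo; r < points.length && points[r].x - mouse.x <= radius; r++) {
                var dR:Number = Point.distance(points[r], mouse);
                if (dR < bestD) { bestD = dR; best = points[r]; }
            }
            for (var l:int = lo - 1; l >= 0 && mouse.x - points[l].x <= radius; l--) {
                var dL:Number = Point.distance(points[l], mouse);
                if (dL < bestD) { bestD = dL; best = points[l]; }
            }
            return best; // null when no point is within the radius
        }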

  • Enum exceeding the 65535 bytes limit of static initializer... what's best to do?

    - by Daniel Bleisteiner
    I've started a rather large Enum of so-called Descriptors that I wanted to use as a reference list in my model. But now I've run into a compiler/VM limit for the first time, and I'm looking for the best way to handle it. Here is my error:

        The code for the static initializer is exceeding the 65535 bytes limit

    It is clear where this comes from: my Enum simply has far too many elements. But I need those elements; there is no way to reduce that set. Initially I planned to use a single Enum because I want to make sure that all elements within it are unique. It is used in a Hibernate persistence context, where the reference to the Enum is stored as a String value in the database, so it must be unique! The content of my Enum can be divided into several groups of elements that belong together, but splitting the Enum would remove the uniqueness safety I get at compile time. Or can this be achieved with multiple Enums in some way? My only current idea is to define an interface called Descriptor and code several Enums implementing it (a sketch follows below). This way I hope to be able to use the Hibernate Enum mapping as if it were a single Enum. But I'm not even sure this will work, and I lose the uniqueness safety. Any ideas how to handle this case?
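    (A sketch of the interface-plus-several-enums idea, with hypothetical names: each enum gets its own static initializer, which is what sidesteps the 64 KB limit, and a one-time registration check restores the uniqueness guarantee at class-load time instead of compile time.)

        import java.util.HashMap;
        import java.util.Map;

        interface Descriptor {
            String name();   // every enum constant already provides this
        }

        enum GroupA implements Descriptor { ALPHA, BETA }
        enum GroupB implements Descriptor { GAMMA, DELTA }

        final class Descriptors {
            private static final Map<String, Descriptor> ALL = new HashMap<String, Descriptor>();
            static {
                register(GroupA.values());
                register(GroupB.values());
            }
            private static void register(Descriptor[] values) {
                for (Descriptor d : values) {
                    // fails fast on duplicates across all groups
                    if (ALL.put(d.name(), d) != null) {
                        throw new IllegalStateException("duplicate descriptor: " + d.name());
                    }
                }
            }
            static Descriptor valueOf(String name) { return ALL.get(name); }
        }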

  • Creation of Objects: Constructors or Static Factory Methods

    - by Rachel
    I am going through Effective Java, and some of the things I consider standard are not what the book suggests. For instance, with object creation I was under the impression that constructors are the best way of doing it, but the book says we should make use of static factory methods. I am not able to see some of the advantages and disadvantages, so I am asking this question. Here are the benefits of using them:

    Advantages:
    - One advantage of static factory methods is that, unlike constructors, they have names.
    - A second advantage of static factory methods is that, unlike constructors, they are not required to create a new object each time they're invoked.
    - A third advantage of static factory methods is that, unlike constructors, they can return an object of any subtype of their return type.
    - A fourth advantage of static factory methods is that they reduce the verbosity of creating parameterized type instances. I am not able to understand this advantage and would appreciate it if someone could explain this point (a sketch follows below).

    Disadvantages:
    - The main disadvantage of providing only static factory methods is that classes without public or protected constructors cannot be subclassed.
    - A second disadvantage of static factory methods is that they are not readily distinguishable from other static methods. I am not getting this point, so I would really appreciate some explanation.

    Reference: Effective Java, Joshua Bloch, Edition 2, pg. 5-10.

    Also, how do I decide whether to go for a constructor or a static factory method for object creation?
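    (On the fourth advantage, a sketch grounded in the book's own example: before Java 7's diamond operator, a constructor call forced you to repeat the type parameters, while a generic static factory lets the compiler infer them. The newHashMap factory below is hypothetical, not part of the JDK.)

        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        final class Maps {
            // Hypothetical factory: the compiler infers K and V from the call site.
            static <K, V> Map<K, V> newHashMap() {
                return new HashMap<K, V>();
            }
        }

        class Demo {
            void example() {
                // With the constructor, the type parameters must be written twice:
                Map<String, List<String>> m1 = new HashMap<String, List<String>>();
                // With the generic factory, they are inferred:
                Map<String, List<String>> m2 = Maps.newHashMap();
            }
        }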

  • How can I eliminate latency in quicktime streamed video

    - by JJFeiler
    I'm prototyping a client that displays streaming video from a HaiVision Barracuda through a QuickTime client. I've been unable to reduce the buffer size below 3.0 seconds. For this application we need as low a latency as the network allows, and we prefer video dropouts to delay. I'm doing the following:

        - (void)applicationDidFinishLaunching:(NSNotification *)aNotification
        {
            NSString *path = [[NSBundle mainBundle] pathForResource:@"haivision" ofType:@"sdp"];
            NSError *error = nil;
            QTMovie *qtmovie = [QTMovie movieWithFile:path error:&error];
            if (error != nil) {
                NSLog(@"error: %@", [error localizedDescription]);
            }
            Movie movie = [qtmovie quickTimeMovie];
            long trackCount = GetMovieTrackCount(movie);
            Track theTrack = GetMovieTrack(movie, 1);
            Media theMedia = GetTrackMedia(theTrack);
            MediaHandler theMediaHandler = GetMediaHandler(theMedia);
            QTSMediaPresentationParams myPres;
            ComponentResult c = QTSMediaGetIndStreamInfo(theMediaHandler, 1, kQTSMediaPresentationInfo, &myPres);
            Fixed shortdelay = 1 << 15;
            OSErr theErr = QTSPresSetInfo(myPres.presentationID, kQTSAllStreams, kQTSTargetBufferDurationInfo, &shortdelay);
            NSLog(@"OSErr %d", theErr);
            [movieView setMovie:qtmovie];
            [movieView play:self];
        }

    I seem to be getting valid objects/structures all the way down to the QTS presentation, though the ComponentResult and OSErr both return -50 (paramErr). The streaming video plays fine, but the buffer is still 3.0 seconds. Any help/insight appreciated. J

  • MySQL Database Design with Internationalization

    - by Some name
    Hello, I'm going to start work on a medium-sized application, and I'm planning its DB design. One thing I'm not sure about is this: I will have many tables which need internationalization, such as membership_options, gender_options, language_options, etc. Each of these tables will share common i18n fields, like title, alternative_title, short_description, description. In your opinion, which is the best way to do it? Have an i18n table with the same fields for each of the tables that need them? Or do something like this:

        Membership table              Gender table
        ----------------              ------------
        id | created_at               id | created_at
        1  | 22.03.2001               1  | 14.08.2002
        2  | 22.03.2001               2  | 14.08.2002

        General translation table
        -------------------------
        record_id | table_name | string_name | alternative_title | .... | id_language
        1         | membership | regular     | null              |      | 1 (english)
        1         | membership | normale     | null              |      | 2 (italian)
        1         | gender     | man         | null              |      | 1 (english)
        1         | gender     | uomo        | null              |      | 2 (italian)

    This would avoid me repeating something like:

        membership_translation table
        -----------------------------
        membership_id | name    | alternative_title | id_lang
        1             | regular | null              | 1
        1             | normale | null              | 2

        gender_translation table
        -------------------------
        gender_id | name | alternative_title | id_lang
        1         | man  | null              | 1
        1         | uomo | null              | 2

    and so on, so I would probably reduce the number of DB tables, but I'm not sure about performance. I'm not much of a DB designer, so please let me know.

  • How do I add code automatically to a derived function in C++

    - by Ian
    I have code that's meant to manage operations on both a networked client and a server, since there is significant overlap between the two. However, there are a few functions here and there that are meant to be exclusively called by the client or server, and accidentally calling a client function on the server (or vice versa) is a significant source of bugs. To reduce these sorts of programming errors, I'm trying to tag functions so that they'll raise a ruckus if they're misused. My current solution is a simple macro at the start of each function that calls an assert if the client or server accesses members they shouldn't. However, this runs into problems when there are multiple derived instances of classes, in that I have to tag the implementation as client or server side in EVERY child class. What I'd like to be able to do is put a tag in the virtual member's signature in the base class, so that I only have to tag it once and not run into errors by forgetting to do it repeatedly. I've considered putting a check in a base class implementation and then referring to it with something like base::functionName, but that runs into the same issue as far as needing to manually add the function call to every implementation. Ideally, I'd be able to have parent versions of the function called automatically like default constructors do. Does anybody know how to achieve something like this in C++? Is there an alternate approach I should be considering? Thanks!
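    (One idiom that gives exactly this "parent version called automatically" behavior is the non-virtual interface pattern, sketched below with hypothetical names: the base class exposes a public non-virtual function that performs the side check once, then forwards to a private virtual that derived classes override, so the check never has to be repeated in child classes.)

        #include <cassert>

        enum Side { CLIENT, SERVER };

        class NetworkedObject {
        public:
            // Public, non-virtual: every call goes through the check exactly once,
            // no matter which derived class implements the behavior.
            void SendChatMessage() {
                assert(side_ == CLIENT && "SendChatMessage is client-only");
                DoSendChatMessage();   // forward to the derived implementation
            }

        protected:
            explicit NetworkedObject(Side side) : side_(side) {}
            virtual ~NetworkedObject() {}

        private:
            virtual void DoSendChatMessage() = 0;  // derived classes override this
            Side side_;
        };

        class ChatClient : public NetworkedObject {
        public:
            ChatClient() : NetworkedObject(CLIENT) {}
        private:
            void DoSendChatMessage() {
                // actual client-side send logic goes here
            }
        };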

  • What database strategy to choose for a large web application

    - by Snoopy
    I have to rewrite a large database application, running on 32 servers. The hardware is up to date; each machine has two quad-core Xeons and 32 GB of RAM. The database is multi-tenant; each customer has his own file, around 5 to 10 GB each. I run around 50 databases on this hardware. The app is open to the web, so I have no control over the load. There are no really complex queries, so SQL is not required if there is a better solution. The databases get updated via FTP every day at midnight. The database is read-only. C# is my favourite language and I want to use ASP.NET MVC. I thought about the following options:

    - Use two big SQL servers running SQL Server 2012 to serve the 32 servers with data, with the 32 servers running IIS and providing REST services.
    - Denormalize the database and use Redis on each web server, with BookSleeve as the Redis client.
    - Use a combination of SQL Server and Redis.
    - Use SQL Server 2012 together with Hadoop.
    - Use Hadoop without SQL Server.

    What is the best way for a read-only database to get the best performance without losing maintainability? Does map-reduce make sense at all in such a scenario? The reason for the rewrite is that the old app, written in C++ with ISAM technology, is too slow, and the interfaces are old-fashioned and not nice to use from a website, especially when using Ajax. The app uses a relational data model with many tables, but it is possible to write one accelerator table that all queries can be performed on, with all other information from the other tables available by a simple key lookup.

  • Segmentation fault in Qt application framework

    - by yan bellavance
    This generates a segmentation fault because of "QColor colorMap[9];". If I remove colorMap, the segmentation fault goes away. If I put it back, it comes back. If I do a clean-all and then a build-all, it goes away. If I increase the array size it comes back; on the other hand, if I reduce it, it doesn't come back. I tried adding this array to another project as well. What could be happening? I am really curious to know. I have removed everything else in that class. This widget subclass is used to promote a widget in a QMainWindow:

        class LevelIndicator : public QWidget
        {
        public:
            LevelIndicator(QWidget *parent);
            void paintEvent(QPaintEvent *event);

            float percent;
            QColor colorMap[9];
            int NUM_GRADS;
        };

    The error happens inside ui_mainwindow.h at one of these lines:

        hpaFwdPwrLvl->setObjectName(QString::fromUtf8("hpaFwdPwrLvl"));
        verticalLayout->addWidget(hpaFwdPwrLvl);

    I know I am not providing much, but I will give a link to the app. I'm trying to see if anyone has a quick answer for this.

  • Pass scalar/list context to called subroutine

    - by Will
    I'm trying to write a sub that takes a coderef parameter. My sub does some initialization, calls the coderef, then does some cleanup. I need to call the coderef using the same context (scalar, list, or void) that my sub was called in. The only way I can think of is something like this:

        sub perform {
            my ($self, $code) = @_;

            # do some initialization...

            my @ret;
            my $ret;
            if (not defined wantarray) {
                $code->();
            } elsif (wantarray) {
                @ret = $code->();
            } else {
                $ret = $code->();
            }

            # do some cleanup...

            if (not defined wantarray) {
                return;
            } elsif (wantarray) {
                return @ret;
            } else {
                return $ret;
            }
        }

    Obviously there's a good deal of redundancy in this code. Is there any way to reduce or eliminate any of it?

    EDIT: I later realized that I need to run $code->() in an eval block so that the cleanup runs even if the code dies. Adding eval support, and combining the suggestions of user502515 and cjm, here's what I've come up with:

        sub perform {
            my ($self, $code) = @_;

            # do some initialization...

            my $w = wantarray;
            return sub {
                my $error = $@;
                # do some cleanup...
                die $error if $error;  # propagate exception
                return $w ? @_ : $_[0];
            }->(eval { $w ? $code->() : scalar($code->()) });
        }

    This gets rid of the redundancy, though unfortunately now the control flow is a little harder to follow.

  • How to adjust font size with Cufon?

    - by user77413
    I am using a condensed font in Cufon. When the page loads, my menu is too wide and wraps. Then Cufon replaces the font and it looks fine. To reduce the visual distraction, I want to set the font size smaller and then have Cufon change the font size when it displays. Currently the font size is set by the div containing the menu. Here is the CSS for the menu container:

        .header_menu_block {
            display: block;
            margin: 0px 0px 0px 0px;
            padding: 0px 0px 0px 0px;
            margin-top: 3px;
            /*margin-left: 238px; ie 6 can't handle, see margin block below*/
            float: left;
            text-align: left;
            font-weight: normal;
            font-size: 14px;
            color: #FFFFFF;
            height: 41px;
            width: 991px;
        }

    The Cufon replacement code looks like this:

        <script type="text/javascript">
            Cufon.replace('.header_menu_block_col_menu ', {
                color: '#ffffff',
                hover: { color: '#204966' }
            });
        </script>

    I've tried setting the CSS font size to 12px and then using the following Cufon code, but it does not work:

        <script type="text/javascript">
            Cufon.replace('.header_menu_block_col_menu ', {
                color: '#ffffff',
                hover: { color: '#204966' },
                font-size: '14px'
            });
        </script>

    Does anyone know how to do this?

  • Performing an SVD on tweets: memory problem

    - by plotti
    I have generated a huge CSV file as output from my POS tagging and stemming. It looks like this:

                 word1, word2, word3, ..., word14400
        person1      1      2      0             1
        person2      0      0      1             0
        ...
        person650

    It contains the word counts for each person, which gives me a characteristic vector per person. I want to run an SVD on this beast, but it seems the matrix is too big to be held in memory to perform the operation. My question is: should I reduce the column count by removing words which have a column sum of, for example, 1, meaning they have been used only once? Do I bias the data too much with this attempt? I tried the RapidMiner approach of loading the CSV into a DB and then sequentially reading it in batches for processing, as RapidMiner proposes, but MySQL can't store that many columns in a table. If I transpose the data and then re-transpose it on import, it also takes ages. So in general I am asking for advice on how to perform an SVD on such a corpus.
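    (A sketch of one standard approach outside the tools mentioned above, offered as an assumption rather than the thread's answer: since most counts are zero, the matrix can be held sparse, and a truncated SVD never materializes the dense 650 x 14400 matrix. The file name and layout below are hypothetical.)

        # Sparse truncated SVD with SciPy.
        from scipy.sparse import csr_matrix
        from scipy.sparse.linalg import svds

        rows, cols, vals = [], [], []
        with open("counts.csv") as f:          # hypothetical file name
            next(f)                            # skip the header row of words
            for i, line in enumerate(f):
                counts = line.split()[1:]      # drop the "personN" label
                for j, c in enumerate(counts):
                    if c != "0":               # keep only nonzero counts
                        rows.append(i); cols.append(j); vals.append(float(c))

        m = csr_matrix((vals, (rows, cols)), shape=(650, 14400))
        u, s, vt = svds(m, k=50)               # only the 50 strongest singular triplets
        print(s[::-1])                         # svds returns singular values ascending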
