Search Results

Search found 5861 results on 235 pages for 'rich pixel vector'.


  • Is Domain Anaemia appropriate in a Service Oriented Architecture?

    - by Stimul8d
    I want to be clear on this. When I say domain anaemia, I mean intentional domain anaemia, not accidental. In a world where most of our business logic is hidden away behind a bunch of services, is a full domain model really necessary? This is the question I've had to ask myself recently since working on a project where the "domain" model is in reality a persistence model; none of the domain objects contain any methods, and this is a very intentional decision. Initially, I shuddered when I saw a library full of what are essentially type-safe data containers, but after some thought it struck me that this particular system doesn't do much beyond basic CRUD operations, so maybe in this case this is a good choice. My problem, I guess, is that my experience so far has been very much focussed on a rich domain model, so it threw me a little. The remainder of the domain logic is hidden away in a group of helpers, facades and factories which live in a separate assembly. I'm keen to hear what people's thoughts are on this. Obviously, the considerations for reuse of these classes are much simpler, but is that really such a great benefit?

    Read the article

  • Best (Java) book for understanding 'under the bonnet' for programming?

    - by Ben
    What would you say is the best book to buy to understand exactly how programming works under the hood, in order to increase performance? I've coded in assembly at university, I studied computer architecture and I've obviously done high-level programming, but what I really don't understand is things like:
    - What is happening when I perform a cast?
    - What's the difference in performance if I declare something global as opposed to local?
    - How does the memory layout for an ArrayList compare with a Vector or LinkedList?
    - What's the overhead of pointers?
    - Are locks more efficient than using synchronized?
    - Would creating my own array using int[] be faster than using ArrayList?
    - Advantages/disadvantages of declaring a variable volatile?
    I have got a copy of Java Performance Tuning but it doesn't go down very low, and it contains rather obvious things like suggesting a HashMap instead of an ArrayList as you can map the keys to memory addresses, etc. I want something a bit more computer-sciencey, linking the programming language to what happens with the assembler/hardware. The reason I'm asking is that I have an interview coming up for a job in High Frequency Trading and everything has to be as efficient as possible, yet I can't remember every single possible efficiency saving, so I'd just like to learn the fundamentals. Thanks in advance

    Read the article

  • Where to place interactive objects in JavaScript?

    - by Chris
    I'm creating a web-based application (i.e. JavaScript with jQuery and lots of SVG) where the user interacts with "objects" on the screen (think of DIVs that can be dragged around, resized and connected by arrows - like a vector drawing program or a graphical programming language). As each "object" contains individual information but always belongs to a "class" of elements, it's obvious that this application should be programmed using an OOP approach. But where do I store the "objects" best? Should I create a global structure ("registry") with all (JS native) objects and tell them "draw yourself on the DOM"? Or should I avoid such a structure and think of the (relevant) DOM nodes as my objects and attach the relevant data as .data() to them? The first approach is very MVC - but I guess the attachment of all the event handlers will be non-trivial. The second approach will handle the events in a trivial way and it doesn't create a duplicate structure, but I guess the usual OO stuff (like methods) will be more complex. What do you recommend? I think the answer will be JavaScript- and SVG-specific, as "usual" programming languages don't have such a highly organized output "canvas".

    Read the article

  • Using boost::iterator_adaptor

    - by Neil G
    I wrote a sparse vector class (see #1, #2.) I would like to provide two kinds of iterators. The first set, the regular iterators, can point to any element, whether set or unset. If they are read from, they return either the set value or value_type(); if they are written to, they create the element and return the lvalue reference. Thus, they are: Random Access Traversal Iterator and Readable and Writable Iterator. The second set, the sparse iterators, iterate over only the set elements. Since they don't need to lazily create elements that are written to, they are: Random Access Traversal Iterator and Readable and Writable and Lvalue Iterator. I also need const versions of both, which are not writable. I can fill in the blanks, but I'm not sure how to use boost::iterator_adaptor to start out. Here's what I have so far:

        class iterator : public boost::iterator_adaptor<
            iterator                               // Derived
          , value_type*                            // Base
          , boost::use_default                     // Value
          , boost::??????                          // CategoryOrTraversal
        >

        class sparse_iterator : public boost::iterator_adaptor<
            sparse_iterator                        // Derived
          , value_type*                            // Base
          , boost::use_default                     // Value
          , boost::random_access_traversal_tag?    // CategoryOrTraversal
        >
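
    For reference, a minimal sketch (not the poster's code; value_type and the storage layout are assumptions) of how the sparse, lvalue-capable iterator could be declared. Because the Base is a raw pointer, it already models readable, writable, lvalue and random access traversal, so the remaining slots can stay boost::use_default:

        // Sketch only: assumes value_type is double and the set elements are stored contiguously.
        #include <boost/iterator/iterator_adaptor.hpp>

        class sparse_iterator
            : public boost::iterator_adaptor<
                  sparse_iterator       // Derived
                , double*               // Base (value_type* in the real class)
                , boost::use_default    // Value
                , boost::use_default >  // CategoryOrTraversal: deduced as random access
        {
        public:
            sparse_iterator() : sparse_iterator::iterator_adaptor_(0) {}
            explicit sparse_iterator(double* p) : sparse_iterator::iterator_adaptor_(p) {}
        private:
            friend class boost::iterator_core_access;
        };

    The lazily-creating iterator cannot return a true lvalue, so it would name boost::random_access_traversal_tag explicitly, supply a proxy type as the Reference argument, and override dereference() to create the element on write.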

    Read the article

  • Triangulation & Direct linear transform

    - by srand
    Following Hartley/Zisserman's Multiview Geometry, Algorithm 12 (the optimal triangulation method, p. 318), I got the corresponding image points xhat1 and xhat2 (step 10). In step 11, one needs to compute the 3D point Xhat. One such method is the Direct Linear Transform (DLT), mentioned in 12.2 (p. 312) and 4.1 (p. 88). The homogeneous method (DLT), pp. 312-313, states that it finds a solution as the unit singular vector corresponding to the smallest singular value of A; thus:

        A = [xhat1(1) * P1(3,:)' - P1(1,:)' ;
             xhat1(2) * P1(3,:)' - P1(2,:)' ;
             xhat2(1) * P2(3,:)' - P2(1,:)' ;
             xhat2(2) * P2(3,:)' - P2(2,:)' ];
        [Ua Ea Va] = svd(A);
        Xhat = Va(:,end);
        plot3(Xhat(1),Xhat(2),Xhat(3), 'r.');

    However, A is a 16x1 matrix, resulting in a Va that is 1x1. What am I doing wrong (and what is the fix) in getting the 3D point? For what it's worth, sample data:

        xhat1 = 1.0e+009 *
            4.9973   -0.2024    0.0027

        xhat2 = 1.0e+011 *
            2.0729    2.6624    0.0098

        P1 =
          699.6674         0  392.1170         0
                 0  701.6136  304.0275         0
                 0         0    1.0000         0

        P2 = 1.0e+003 *
           -0.7845    0.0508   -0.1592    1.8619
           -0.1379    0.7338    0.1649    0.6825
           -0.0006    0.0001    0.0008    0.0010

        A =   <- my computation
          1.0e+011 *
           -0.0000
                 0
            0.0500
                 0
                 0
           -0.0000
           -0.0020
                 0
           -1.3369
            0.2563
            1.5634
            2.0729
           -1.7170
            0.3292
            2.0079
            2.6624

    Read the article

  • Conditionally colour data points outside of confidence bands in R

    - by D W
    I need to colour datapoints that are outside of the confidence bands on the plot below differently from those within the bands. Should I add a separate column to my dataset to record whether the data points are within the confidence bands? Can you provide an example please? Example dataset:

        ## Dataset from http://www.apsnet.org/education/advancedplantpath/topics/RModules/doc1/04_Linear_regression.html
        ## Disease severity as a function of temperature

        # Response variable, disease severity
        diseasesev<-c(1.9,3.1,3.3,4.8,5.3,6.1,6.4,7.6,9.8,12.4)

        # Predictor variable, (Centigrade)
        temperature<-c(2,1,5,5,20,20,23,10,30,25)

        ## For convenience, the data may be formatted into a dataframe
        severity <- as.data.frame(cbind(diseasesev,temperature))

        ## Fit a linear model for the data and summarize the output from function lm()
        severity.lm <- lm(diseasesev~temperature,data=severity)
        jpeg('~/Desktop/test1.jpg')

        # Take a look at the data
        plot(
            diseasesev~temperature,
            data=severity,
            xlab="Temperature",
            ylab="% Disease Severity",
            pch=16,
            pty="s",
            xlim=c(0,30),
            ylim=c(0,30)
        )
        title(main="Graph of % Disease Severity vs Temperature")
        par(new=TRUE) # don't start a new plot

        ## Get datapoints predicted by best fit line and confidence bands
        ## at every 0.01 interval
        xRange=data.frame(temperature=seq(min(temperature),max(temperature),0.01))
        pred4plot <- predict(
            lm(diseasesev~temperature),
            xRange,
            level=0.95,
            interval="confidence"
        )

        ## Plot lines derived from best fit line and confidence band datapoints
        matplot(
            xRange,
            pred4plot,
            lty=c(1,2,2),    # vector of line types and widths
            type="l",        # type of plot for each column of y
            xlim=c(0,30),
            ylim=c(0,30),
            xlab="",
            ylab=""
        )

    Read the article

  • Speed/expense of SQLite query vs. List.contains() for "in-set" icon on list rows

    - by kpdvx
    An application I'm developing requires that the app maintain a local list of things, let's say books, in a local "library." Users can access their local library of books and search for books using a remote web service. The app will be aware of other users of the app through this web service, and users can browse other users' lists of books in their library. Each book is identified by a unique bookId (represented as an int). When viewing books returned through a search result or when viewing another user's book library, the individual list row cells need to visually represent whether the book is in the user's local library or not. A user can have at most 5,000 books in the library, stored in SQLite on the device (and synchronized with the remote web service). My question is, to determine if the book shown in the list row is in the user's library, would it be better to directly ask SQLite (via SELECT COUNT(*)...) or to maintain, in memory, a List or int[] array of some sort containing the unique bookIds. So, on each row display, do I query SQLite or check if the List or int[] array contains the unique bookId? Because the user can have at most 5,000 books, each bookId occupies 4 bytes, so at most this would use ~20 kB. In thinking about this, and in typing this out, it seems obvious to me that it would be far better for performance if I maintained a list or int[] array of in-library bookIds vs. querying SQLite (the only caveat to maintaining an int[] array is that if books are added or removed I'll need to grow or shrink the array by hand, so with this option I'll most likely use an ArrayList or Vector, though I'm not sure of the additional memory overhead of using Integer objects as opposed to primitives). Opinions, thoughts, suggestions?

    Read the article

  • lapply slower than for-loop when used for a BiomaRt query. Is that expected?

    - by ptocquin
    I would like to query a database using the biomaRt package. I have loci and want to retrieve some related information, let's say the description. I first tried to use lapply but was surprised by the time needed for the task to be performed. I then tried a more basic for-loop and got a faster result. Is that expected, or is something wrong with my code or with my understanding of apply? I read other posts dealing with *apply vs for-loop performance (Here, for example) and I was aware that improved performance should not be expected, but I don't understand why performance here is actually lower. Here is a reproducible example.

    1) Loading the library and selecting the database:

        library("biomaRt")
        athaliana <- useMart("plants_mart_14")
        athaliana <- useDataset("athaliana_eg_gene",mart=athaliana)

    2) Querying the database:

        loci <- c("at1g01300", "at1g01800", "at1g01900", "at1g02335", "at1g02790",
                  "at1g03220", "at1g03230", "at1g04040", "at1g04110", "at1g05240")

    I create a function for use in lapply:

        foo <- function(loci) {
            getBM("description","tair_locus",loci,athaliana)
        }

    When I use this function on the first element:

        > system.time(foo(cwp_loci[1]))
        utilisateur     système      écoulé
              0.020       0.004       1.599

    When I use lapply to retrieve the data for all values:

        > system.time(lapply(loci, foo))
        utilisateur     système      écoulé
              0.220       0.000      16.376

    I then created a new function, adding a for-loop:

        foo2 <- function(loci) {
            for (i in loci) {
                getBM("description","tair_locus",loci[i],athaliana)
            }
        }

    Here is the result:

        > system.time(foo2(loci))
        utilisateur     système      écoulé
              0.204       0.004      10.919

    Of course, this will be applied to a big list of loci, so the best performing option is needed. Thank you for your assistance.

    EDIT: Following the recommendation of @MartinMorgan: simply passing the vector loci to getBM greatly improves the query efficiency. Simpler is better.

        > system.time(lapply(loci, foo))
        utilisateur     système      écoulé
              0.236       0.024     110.512
        > system.time(foo2(loci))
        utilisateur     système      écoulé
              0.208       0.040     116.099
        > system.time(foo(loci))
        utilisateur     système      écoulé
              0.028       0.000       6.193

    Read the article

  • Use Linq to SQL to generate sales report

    - by Richard Reddy
    I currently have the following code to generate a sales report over the last 30 days. I'd like to know if it would be possible to use LINQ to generate this report in one step instead of the rather basic loop I have here. For my requirement, every day needs to return a value to me, so if there are no sales for any day then a 0 is returned. None of the Sum LINQ examples out there explain how it would be possible to include a where filter, so I am confused about how to get the total amount per day, or a 0 if there are no sales, for the days I pass through. Thanks for your help, Rich

        //setup date ranges to use
        DateTime startDate = DateTime.Now.AddDays(-29);
        DateTime endDate = DateTime.Now.AddDays(1);
        TimeSpan startTS = new TimeSpan(0, 0, 0);
        TimeSpan endTS = new TimeSpan(23, 59, 59);

        using (var dc = new DataContext())
        {
            //get database sales from 29 days ago at midnight to the end of today
            var salesForDay = dc.Orders.Where(b => b.OrderDateTime > Convert.ToDateTime(startDate.Date + startTS)
                && b.OrderDateTime <= Convert.ToDateTime(endDate.Date + endTS));

            //loop through each day and sum up the total orders, if none then set to 0
            while (startDate != endDate)
            {
                decimal totalSales = 0m;
                DateTime startDay = startDate.Date + startTS;
                DateTime endDay = startDate.Date + endTS;
                foreach (var sale in salesForDay.Where(b => b.OrderDateTime > startDay && b.OrderDateTime <= endDay))
                {
                    totalSales += (decimal)sale.OrderPrice;
                }
                Response.Write("From Date: " + startDay + " - To Date: " + endDay + ". Sales: " + String.Format("{0:0.00}", totalSales) + "<br>");
                //move to next day
                startDate = startDate.AddDays(1);
            }
        }

    Read the article

  • find consecutive nonzero values

    - by thymeandspace
    I am trying to write a simple MATLAB program that will find the first chain (more than 70) of consecutive nonzero values and return the starting value of that consecutive chain. I am working with movement data from a joystick and there are a few thousand rows of data with a mix of zeros and nonzero values before the actual trial begins (coming from subjects slightly moving the joystick before the trial actually started). I need to get rid of these rows before I can start analyzing the movement from the trials. I am sure this is a relatively simple thing to do, so I was hoping someone could offer insight. Thank you in advance -Lilly

    EDIT: Here's what I tried:

        s = zeros(size(x1));
        for i=2:length(x1)
            if(x1(i-1) ~= 0)
                s(i) = 1 + s(i-1);
            end
        end
        display(S);

    for a vector x1 which has a max chain of 72, but I don't know how to find the max chain and return its first value, so I know where to trim. I also really don't think this is the best strategy, since the max chain in my data will be tens of thousands of values. Thanks for helping me edit, Steve. :)

    Read the article

  • Design for fastest page download

    - by mexxican
    I have a file with millions of URLs/IPs and have to write a program to download the pages really fast. The connection rate should be at least 6000/s and file download speed at least 2000 with avg. 15kb file size. The network bandwidth is 1 Gbps. My approach so far has been: Creating 600 socket threads with each having 60 sockets and using WSAEventSelect to wait for data to read. As soon as a file download is complete, add that memory address(of the downloaded file) to a pipeline( a simple vector ) and fire another request. When the total download is more than 50Mb among all socket threads, write all the files downloaded to the disk and free the memory. So far, this approach has been not very successful with the rate at which I could hit not shooting beyond 2900 connections/s and downloaded data rate even less. Can somebody suggest an alternative approach which could give me better stats. Also I am working windows server 2008 machine with 8 Gig of memory. Also, do we need to hack the kernel so as we could use more threads and memory. Currently I can create a max. of 1500 threads and memory usage not going beyond 2 gigs [ which technically should be much more as this is a 64-bit machine ]. And IOCP is out of question as I have no experience in that so far and have to fix this application today. Thanks Guys!

    Read the article

  • How to check if a position inside a std::string exists? (C++)

    - by yox
    Hello, I have a long string variable and I want to search in it for specific words and limit the text according to those words. Say I have the following text: "This amazing new wearable audio solution features a working speaker embedded into the front of the shirt and can play music or sound effects appropriate for any situation. It's just like starring in your own movie" and the words: "solution", "movie". I want to extract from the big string (like Google does on its results page): "...new wearable audio solution features a working speaker embedded..." and "...just like starring in your own movie". For that I'm using the code:

        for (std::vector<string>::iterator it = words.begin(); it != words.end(); ++it) {
            int loc1 = (int)desc.find( *it, 0 );
            if( loc1 != string::npos ) {
                while(desc.at(loc1-i) && i<=80){
                    i++;
                    from=loc1-i;
                    if(i==80) fromdots=true;
                }
                i=0;
                while(desc.at(loc1+(int)(*it).size()+i) && i<=80){
                    i++;
                    to=loc1+(int)(*it).size()+i;
                    if(i==80) todots=true;
                }
                for(int i=from;i<=to;i++){
                    if(fromdots) mini+="...";
                    mini+=desc.at(i);
                    if(todots) mini+="...";
                }
            }

    but desc.at(loc1-i) causes an out_of_range exception... I don't know how to check whether that position exists without causing an exception! Help please!
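
    For reference, a hedged sketch of the snippet-extraction idea that avoids probing desc.at() until it throws: clamp the window to the string's bounds before reading. The function and variable names are illustrative, not the poster's:

        #include <algorithm>
        #include <string>

        std::string snippet(const std::string& desc, const std::string& word,
                            std::size_t context = 80)
        {
            const std::size_t loc = desc.find(word);
            if (loc == std::string::npos)
                return std::string();                          // word not present

            const std::size_t from = loc > context ? loc - context : 0;
            const std::size_t end  = std::min(desc.size(), loc + word.size() + context);

            std::string out;
            if (from > 0)          out += "...";
            out += desc.substr(from, end - from);              // always within bounds
            if (end < desc.size()) out += "...";
            return out;
        }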

    Read the article

  • Custom bean instantiation logic in Spring MVC

    - by Michal Bachman
    I have a Spring MVC application trying to use a rich domain model, with the following mapping in the Controller class: @RequestMapping(value = "/entity", method = RequestMethod.POST) public String create(@Valid Entity entity, BindingResult result, ModelMap modelMap) { if (entity== null) throw new IllegalArgumentException("An entity is required"); if (result.hasErrors()) { modelMap.addAttribute("entity", entity); return "entity/create"; } entity.persist(); return "redirect:/entity/" + entity.getId(); } Before this method gets executed, Spring uses BeanUtils to instantiate a new Entity and populate its fields. It uses this: ... ReflectionUtils.makeAccessible(ctor); return ctor.newInstance(args); Here's the problem: My entities are Spring managed beans. The reason for this is to inject DAOs on them. Instead of calling new, I use EntityFactory.createEntity(). When they're retrieved from the database, I have an interceptor that overrides the public Object instantiate(String entityName, EntityMode entityMode, Serializable id) method and hooks the factories into that. So the last piece of the puzzle missing here is how to force Spring to use the factory rather than its own BeanUtils reflective approach? Any suggestions for a clean solution? Thanks very much in advance.

    Read the article

  • Strange Access Denied warning when running the simplest C++ program.

    - by DaveJohnston
    I am just starting to learn C++ (coming from a Java background) and I have come across something that I can't explain. I am working through the C++ Primer book and doing the exercises. Every time I get to a new exercise I create a new .cpp file and set it up with the main method (and any includes I think I will need), e.g.:

        #include <list>
        #include <vector>

        int main(int argc, char **args) {
        }

    and just to make sure, I go to the command prompt and compile and run:

        g++ whatever.cpp
        a.exe

    Normally this works just fine and I start working on the exercise, but I just did it and got a strange error. It compiles fine, but when I run it it says Access Denied and AVG pops up telling me that a threat has been detected: 'Trojan Horse Generic 17.CKZT'. I tried compiling again using the Microsoft compiler (cl.exe) and it runs fine. So I went back and added:

        #include <iostream>

    compiled using g++ and ran. This time it worked fine. So can anyone tell me why AVG would report an empty main method as a trojan horse, but if the iostream header is included it doesn't?

    Read the article

  • Exemplars of large document-centric applications with COM/XPCOM/.NET interfaces.

    - by Warren P
    I am looking for exemplars (design examples) showing the use of interfaces (aka 'protocols' for you smalltalkers) to design a document management architecture in a large Word Processor, Spreadsheet, vector graphic or publishing package, or office-productivity (non-database) application with support for as many of the following as possible: any open source project, will be ideal, and language of implementation is unimportant since I am looking for design examples, however an object oriented language with support for "interfaces" is a must. I know at least a dozen languages, and I'm willing to study any application's source. use of "interface" could loosely be applied to either XPCOM or COM interfaces, or .NET interfaces, or even the use of pure-virtual (virtual+abstract) base-classes for OOP languages that lack the ability to declare an interface distinct from a class. I am mostly looking for a robust, thorough and flexible implementation for a document, IDocument, various document views (IDocumentView), and whatever operations make sense in that case. I am particular interested in cases where the product in question is a real-world product. For example, if anybody familiar with OpenOffice can tell me if the code contains a good sample design. I am looking for design documentation that outlines the design of the interfaces for such an application. So for example, if the openoffice spreadsheet has such an interface design, then that might be the best case, because it is a widely used real-world design, with millions of users, rather than a textbook example, which is minimal, and contrived. I know that the Mozilla platform uses XPCOM, and its design is heavily "interface" oriented, but I am looking more for a "word processor" or "spreadsheet" type of document design, rather than a web-browser. I am particularly interested in the interfaces used to access to data and meta-data such as markup (attributes like bold, and italics, and font size), and the ability to search and look up named entities within a document.

    Read the article

  • C++: Constructor/destructor unresolved when not inline?

    - by Anamon
    In a plugin-based C++ project, I have a TmpClass that is used to exchange data between the main application and the plugins. Therefore the respective TmpClass.h is included in the abstract plugin interface class that is included by the main application project, and implemented by each plugin. As the plugins work on STL vectors of TmpClass instances, there needs to be a default constructor and destructor for TmpClass. I had declared these in TmpClass.h:

        class TmpClass {
            TmpClass();
            ~TmpClass();
        };

    and implemented them in TmpClass.cpp:

        TmpClass::~TmpClass() {}
        TmpClass::TmpClass() {}

    However, when compiling plugins this leads to the linker complaining about two unresolved externals - the default constructor and destructor of TmpClass as required by the std::vector<TmpClass> template instantiation - even though all other functions I declare in TmpClass.h and implement in TmpClass.cpp work. As soon as I remove the (empty) default constructor and destructor from the .cpp file and inline them into the class declaration in the .h file, the plugins compile and work. Why is it that the default constructor and destructor have to be inline for this code to compile? Why does it even matter? (I'm using MSVC++8).
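
    For context, a hedged sketch of the two usual ways this gets resolved on MSVC; the macro and define names are illustrative assumptions, not from the poster's project:

        // Option 1: keep the empty definitions in the header. Defined inside the class
        // body they are implicitly inline, so every translation unit (including each
        // plugin) that instantiates std::vector<TmpClass> emits its own copy and the
        // linker has nothing left to resolve.
        class TmpClass {
        public:
            TmpClass() {}
            ~TmpClass() {}
        };

        // Option 2: keep the out-of-line definitions, but export them from the module
        // that compiles TmpClass.cpp so plugin DLLs can link against it.
        #ifdef TMPCLASS_EXPORTS
        #  define TMPCLASS_API __declspec(dllexport)   // building the module that owns TmpClass.cpp
        #else
        #  define TMPCLASS_API __declspec(dllimport)   // consuming it from a plugin
        #endif

        class TMPCLASS_API TmpClass {
        public:
            TmpClass();
            ~TmpClass();
        };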

    Read the article

  • Very fast document similarity

    - by peyton
    Hello, I am trying to determine document similarity between a single document and each of a large number of documents (n ~= 1 million) as quickly as possible. More specifically, the documents I'm comparing are e-mails; they are grouped (i.e., there are folders or tags) and I'd like to determine which group is most appropriate for a new e-mail. Fast performance is critical. My a priori assumption is that the cosine similarity between term vectors is appropriate for this application; please comment on whether this is a good measure to use or not! I have already taken into account the following possibilities for speeding up performance: Pre-normalize all the term vectors Calculate a term vector for each group (n ~= 10,000) rather than each e-mail (n ~= 1,000,000); this would probably be acceptable for my application, but if you can think of a reason not to do it, let me know! I have a few questions: If a new e-mail has a new term never before seen in any of the previous e-mails, does that mean I need to re-compute all of my term vectors? This seems expensive. Is there some clever way to only consider vectors which are likely to be close to the query document? Is there some way to be more frugal about the amount of memory I'm using for all these vectors? Thanks!
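
    For reference, a small sketch of the pre-normalized cosine similarity the poster has in mind, computed over sparse term vectors; the TermVector representation and weighting scheme are illustrative assumptions:

        #include <map>
        #include <string>

        using TermVector = std::map<std::string, double>;   // assumed to be L2-normalized

        double cosine_similarity(const TermVector& a, const TermVector& b)
        {
            // Walk the smaller map and look each term up in the larger one.
            const TermVector& small = a.size() < b.size() ? a : b;
            const TermVector& large = a.size() < b.size() ? b : a;

            double dot = 0.0;
            for (const auto& entry : small) {
                const auto it = large.find(entry.first);
                if (it != large.end())
                    dot += entry.second * it->second;
            }
            return dot;   // equals cos(theta) because both inputs are unit vectors
        }

    With weights that do not depend on collection statistics (unlike IDF), new vocabulary in an incoming e-mail only adds dimensions that are zero in every stored vector, so the stored vectors and their norms stay valid; only the new e-mail's own vector needs to include the new terms.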

    Read the article

  • How to reliably get size of C-style array?

    - by Frank
    How do I reliably get the size of a C-style array? The method often recommended seems to be to use sizeof, but it doesn't work in the foo function, where x is passed in:

        #include <iostream>

        void foo(int x[]) {
            std::cerr << (sizeof(x) / sizeof(int)); // 2
        }

        int main(){
            int x[] = {1,2,3,4,5};
            std::cerr << (sizeof(x) / sizeof(int)); // 5
            foo(x);
            return 0;
        }

    Answers to this question recommend sizeof but they don't say that it (apparently?) doesn't work if you pass the array around. So, do I have to use a sentinel instead? (I don't think the users of my foo function can always be trusted to put a sentinel at the end. Of course, I could use std::vector, but then I don't get the nice shorthand syntax {1,2,3,4,5}.)
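
    A sketch of the usual fix: pass the array by reference so its extent stays part of the type instead of letting it decay to a pointer (C++17's std::size is shown as well, assuming a C++17 compiler is available):

        #include <cstddef>
        #include <iostream>
        #include <iterator>

        template <std::size_t N>
        void foo(int (&x)[N])
        {
            std::cerr << (sizeof(x) / sizeof(int)) << "\n";   // 5: no decay, N is known here
        }

        int main()
        {
            int x[] = {1, 2, 3, 4, 5};
            std::cerr << std::size(x) << "\n";                // 5 (C++17)
            foo(x);                                           // prints 5 inside foo as well
            return 0;
        }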

    Read the article

  • R optimization: How can I avoid a for loop in this situation?

    - by chrisamiller
    I'm trying to do a simple genomic track intersection in R, and running into major performance problems, probably related to my use of for loops. In this situation, I have pre-defined windows at intervals of 100bp and I'm trying to calculate how much of each window is covered by the annotations in mylist. Graphically, it looks something like this:

                  0     100   200   300   400   500   600
        windows:  |-----|-----|-----|-----|-----|-----|
        mylist:   |-|    |-----------|

    So I wrote some code to do just that, but it's fairly slow and has become a bottleneck in my code:

        ##window for each 100-bp segment
        windows <- numeric(6)

        ##second track
        mylist = vector("list")
        mylist[[1]] = c(1,20)
        mylist[[2]] = c(120,320)

        ##do the intersection
        for(i in 1:length(mylist)){
            st <- floor(mylist[[i]][1]/100)+1
            sp <- floor(mylist[[i]][2]/100)+1
            for(j in st:sp){
                b <- max((j-1)*100, mylist[[i]][1])
                e <- min(j*100, mylist[[i]][2])
                windows[j] <- windows[j] + e - b + 1
            }
        }
        print(windows)
        [1]  20  81 101  21   0   0

    Naturally, this is being used on data sets that are much larger than the example I provide here. Through some profiling, I can see that the bottleneck is in the for loops, but my clumsy attempt to vectorize it using *apply functions resulted in code that runs an order of magnitude more slowly. I suppose I could write something in C, but I'd like to avoid that if possible. Can anyone suggest another approach that will speed this calculation up?

    Read the article

  • Automatically Organize Tags in Tax/Folksonomy

    - by Rob Wilkerson
    I'm working on a process that will perform natural language processing (NLP) on one--and potentially several--of our content rich sites. What I'd like to do once the NLP is complete is to automatically organize the output (generally a set of terms that you might think of as tags given the prevalence of that metaphor) into some kind of standard or generally accepted organizational structure. In a perfect world, I'd really like this to be crowd sourced under the folksonomy concept (as opposed to a taxonomy) since the ultimate goal is to target/appeal to real people rather than "domain experts", but I'm open to ideas and best practices. For the obvious purpose of scalability, I'd like to automate the population of this tax/folksonomy so that "some guy" in the team/organization isn't responsible for looking at a bunch of words (with or without context) and arbitrarily fleshing out the contextual components of the tree. I have a few ideas for doing this that require some research to establish viability, but I have exactly zero practical experience with this sort of thing so the ideas really just boil down to stuff I made up that might perform some role in accomplishing the task. Imagining that others have vastly more experience with this sort of thing, I'm hoping that I can stand on your shoulders. Thanks for your thoughts and insights.

    Read the article

  • Is it not possible to make a C++ application "Crash Proof"?

    - by Enno Shioji
    Let's say we have an SDK in C++ that accepts some binary data (like a picture) and does something. Is it not possible to make this SDK "crash-proof"? By crash I primarily mean forceful termination by the OS upon memory access violation, due to invalid input passed by the user (like abnormally short junk data). I have no experience with C++, but when I googled, I found several means that sounded like a solution (use a vector instead of an array, configure the compiler so that an automatic bounds check is performed, etc.). When I presented this to the developer, he said it is still not possible. Not that I don't believe him, but if so, how does a language like Java handle this? I thought the JVM performs a bounds check every time. If so, why can't one do the same thing in C++ manually? UPDATE: By "crash proof" I don't mean that the application does not terminate. I mean it should not abruptly terminate without information about what happened (I mean it will dump core etc., but is it not possible to display a message like "Argument x was not valid" etc.?)
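
    For reference, a hedged sketch of the "validate and translate" pattern usually suggested for this: bounds-checked access plus a catch-all at the SDK boundary turns malformed input into an error code instead of an access violation. The API below is hypothetical, and this only covers what the library itself can check; it does not make arbitrary memory corruption recoverable, which is likely the developer's point:

        #include <exception>
        #include <vector>

        enum SdkStatus { SDK_OK, SDK_INVALID_INPUT, SDK_INTERNAL_ERROR };

        SdkStatus decode_picture(const std::vector<unsigned char>& data)
        {
            try {
                if (data.size() < 8)                  // explicit length check up front
                    return SDK_INVALID_INPUT;
                unsigned char magic = data.at(0);     // at() throws instead of faulting
                (void)magic;
                // ... real decoding work, using checked containers throughout ...
                return SDK_OK;
            } catch (const std::exception&) {
                return SDK_INTERNAL_ERROR;            // report the failure, don't terminate
            }
        }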

    Read the article

  • Is there a Java GUI for dummies?

    - by MHero
    Hello, first I want to apologize for my English; it may not be clear enough. Well, I'm new to Java programming, and I searched over the web about the use of Swing, AWT, 2D, etc., but I didn't get the answer I was looking for. I want to know about a book or method to learn Java GUI programming (I'm not even sure this is a proper term). Previous answers guided me to Filthy Rich Clients by Romain Guy and also to the Swing Tutorial on Sun's web page. No offense, but... the first one seems too complex and the second one a bit disorganized. So I'm asking about a more "for dummies" method. Thanks. EDIT: Thanks everybody, you're very kind and serious. I want to clarify some things that I didn't state, this being my first question. I don't want to use autogenerated code (I don't want to say why; please just focus on my question, for consistency). Also, I've read Deitel & Deitel and it's a good beginners' book, but it seems to me that it doesn't cover layout (and other details). Finally, I tried to read NetBeans-generated code, but it's a mess to follow method by method and function by function the way the IDE does it. I hope this edit helps to clarify my question.

    Read the article

  • refactoring my code. My headers (Header Guard Issues)

    - by numerical25
    I had a post similar to this a while ago based on an error I was getting. I was able to fix it, but since then I've been having trouble doing things because headers keep blocking other headers from using code. Honestly, these headers are confusing me, and if anyone has any resources that address these types of issues, that would be helpful. What I essentially want to do is be able to have rModel.h included inside RenderEngine.h. Every time I add rModel.h to RenderEngine.h, rModel.h is no longer able to use RenderEngine.h (rModel.h has a #include of RenderEngine.h as well). So in a nutshell, RenderEngine and rModel need to use each other's functionality. On top of all this confusion, Main.cpp needs to use RenderEngine.

    stdafx.h

        #include "targetver.h"
        #define WIN32_LEAN_AND_MEAN  // Exclude rarely-used stuff from Windows headers
        // Windows Header Files:
        #include <windows.h>
        // C RunTime Header Files
        #include <stdlib.h>
        #include <malloc.h>
        #include <memory.h>
        #include <tchar.h>
        #include "resource.h"

    main.cpp

        #include "stdafx.h"
        #include "RenderEngine.h"
        #include "rModel.h"
        // Global Variables:
        RenderEngine go;
        rModel *g_pModel;
        ...code...........

    rModel.h

        #ifndef _MODEL_H
        #define _MODEL_H
        #include "stdafx.h"
        #include <vector>
        #include <string>
        #include "rTri.h"
        #include "RenderEngine.h"
        ........Code

    RenderEngine.h

        #pragma once
        #include "stdafx.h"
        #include "d3d10.h"
        #include "d3dx10.h"
        #include "dinput.h"
        #include "rModel.h"
        .......Code......
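
    For reference, a minimal sketch of the usual way to break such a cycle: forward-declare the other class in each header and include the full header only in the .cpp files. The class members here are placeholders, not the real RenderEngine/rModel interfaces:

        // ---- RenderEngine.h ----
        #pragma once
        class rModel;                      // forward declaration instead of #include "rModel.h"

        class RenderEngine {
        public:
            void Draw(rModel& model);      // pointers/references only need the declaration
        };

        // ---- rModel.h ----
        #ifndef _MODEL_H
        #define _MODEL_H
        class RenderEngine;                // forward declaration instead of #include "RenderEngine.h"

        class rModel {
        public:
            void Upload(RenderEngine& engine);
        };
        #endif

        // ---- rModel.cpp ----
        // #include "rModel.h"
        // #include "RenderEngine.h"       // the full definition is only needed where members are used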

    Read the article

  • Failed to specialize function template

    - by citizencane
    This is homework, although it's already submitted with a different approach. I'm getting the following from Visual Studio 2008:

        error C2893: Failed to specialize function template 'void std::sort(_RanIt,_RanIt,_Pr)'

    The code is as follows:

    main.cpp

        Database<> db;
        db.loadDatabase();
        db.sortDatabase(sort_by_title());

    Database.cpp

        void Database<C>::sortDatabase(const sort_by &s) {
            std::sort(db_.begin(), db_.end(), s);
        }

    And the function objects are defined as

        struct sort_by : public std::binary_function<const Media *, const Media *, bool> {
            virtual bool operator()(const Media *l, const Media *r) const = 0;
        };

        struct sort_by_title : public sort_by {
            bool operator()(const Media *l, const Media *r) const { ... }
        };
        ...

    What's the cure here?

    [Edit] Sorry, maybe I should have made the inheritance clear:

        template <typename C = std::vector<Media *> >
        class Database : public IDatabase<C>
    [/Edit]
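
    For reference, one common resolution, sketched against the declarations shown in the question (so Media, sort_by and db_ are assumed as above): std::sort takes its predicate by value, so deducing _Pr as the abstract sort_by cannot be instantiated. A small copyable functor that holds a pointer to the abstract comparator sidesteps the failed specialization:

        struct sort_by_ref {
            const sort_by *cmp;                        // non-owning, cheap to copy
            explicit sort_by_ref(const sort_by &s) : cmp(&s) {}
            bool operator()(const Media *l, const Media *r) const {
                return (*cmp)(l, r);                   // forwards to the virtual comparator
            }
        };

        template <typename C>
        void Database<C>::sortDatabase(const sort_by &s) {
            std::sort(db_.begin(), db_.end(), sort_by_ref(s));
        }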

    Read the article

  • Event Dispatching, void pointer and alternatives

    - by PeeS
    I have my event dispatching / handling functionality working fine, but there is one issue that I need to resolve. Long story short, here are the details.

        // The event structure
        struct tEventMessage
        {
            // Type of the event
            int Type;

            // (void*) Allows those to be casted into per-Type objects
            void *pArgument1;
            void *pArgument2;
        };

    I am sending events from different modules in my engine by using the above structure, which requires a pointer to an argument. All messages are queued, and then dispatched on the next ::Tick(). It works fine until I try to send something that no longer exists on the next ::Tick(), for example: when a mouse click is being handled, it calculates the click coordinates in world space. This is sent with a pointer to a vector representing that position, but after my program quits that method, this pointer obviously becomes invalid, because the local CVector3 is destructed:

        CVector2 vScreenSpacePosition = vAt;
        CVector3 v3DPositionA = CVector3(0,0,0);
        CVector3 v3DPositionB = CVector3(0,0,0);

        // Screen space to World space calculation for depth zNear
        v3DPositionA = CMath::UnProject(vScreenSpacePosition,
                                        m_vScreenSize,
                                        m_Level.GetCurrentCamera()->getViewMatrix(),
                                        m_Level.GetCurrentCamera()->getProjectionMatrix(),
                                        -1.0);

        // Screen space to World space calculation for depth zFar
        v3DPositionB = CMath::UnProject(vScreenSpacePosition,
                                        m_vScreenSize,
                                        m_Level.GetCurrentCamera()->getViewMatrix(),
                                        m_Level.GetCurrentCamera()->getProjectionMatrix(),
                                        1.0);

        // Send zFar position and ScreenSpace position to the handlers
        // Obviously both vectors won't be valid after this method quits..
        CEventDispatcher::Get()->SendEvent(CIEventHandler::EVENT_SYSTEM_FINGER_DOWN,
                                           static_cast<void*>(&v3DPositionB),
                                           static_cast<void*>(&vScreenSpacePosition));

    What I want to ask is whether there is any chance I could make my tEventMessage more 'template', so I can handle sending objects like in the above situation and reuse what is already implemented? I can't figure it out at the moment. What else can be done here to allow me to pass some locally available data? Can somebody please shed some light on this?
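
    For reference, a hedged sketch of one alternative (assuming a C++17 toolchain, which the engine may not have): store the queued arguments by value in std::any instead of as raw void* pointers to stack locals, so the payload outlives the sending function:

        #include <any>

        struct tEventMessage
        {
            int Type;
            std::any Arg1;    // owns a copy of whatever was passed in
            std::any Arg2;
        };

        // Queuing copies the vectors, so they remain valid after the sender returns:
        //     tEventMessage m;
        //     m.Type = CIEventHandler::EVENT_SYSTEM_FINGER_DOWN;
        //     m.Arg1 = v3DPositionB;            // copies the CVector3
        //     m.Arg2 = vScreenSpacePosition;    // copies the CVector2
        //     queue.push_back(m);
        //
        // A handler recovers the typed values with std::any_cast:
        //     const CVector3 &pos = std::any_cast<const CVector3&>(m.Arg1);

    On an older toolchain, boost::any or small per-type event structs holding copies (or shared_ptr-owned payloads) achieve the same thing; the key change is that the queue owns the data rather than pointing at the sender's stack.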

    Read the article
