Search Results

Search found 3456 results on 139 pages for 'vector art'.

Page 127 of 139

  • How do I overload the () operator with two parameters, like (3,5)?

    - by hkBattousai
    I have a mathematical matrix class. It contains a member function which is used to set any element of the matrix:

        template <class T>
        class Matrix
        {
        public:
            // ...
            void SetElement(T dbElement, uint64_t unRow, uint64_t unCol);
            // ...
        };

        template <class T>
        void Matrix<T>::SetElement(T Element, uint64_t unRow, uint64_t unCol)
        {
            try
            {
                // "TheMatrix" is defined as "std::vector<T> TheMatrix"
                TheMatrix.at(m_unColSize * unRow + unCol) = Element;
            }
            catch (std::out_of_range& e)
            {
                // Do error handling here
            }
        }

    I'm using this method in my code like this:

        // create a matrix with 2 rows and 3 columns whose elements are double
        Matrix<double> matrix(2, 3);
        // change the value of the element at 1st row and 2nd column to 6.78
        matrix.SetElement(6.78, 1, 2);

    This works well, but I want to use operator overloading to simplify things, like below:

        Matrix<double> matrix(2, 3);
        matrix(1, 2) = 6.78; // HOW DO I DO THIS?
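
    One way this is commonly done (a sketch written against the names in the question: m_unColSize and TheMatrix come from the post, while the constructor and m_unRowSize are assumptions) is to overload operator() with two parameters and return a reference, so the result can be assigned to:

        #include <cstdint>
        #include <vector>

        template <class T>
        class Matrix
        {
        public:
            Matrix(uint64_t unRows, uint64_t unCols)
                : m_unRowSize(unRows), m_unColSize(unCols), TheMatrix(unRows * unCols) {}

            // Non-const overload: returns an lvalue, so "matrix(1, 2) = 6.78;" compiles.
            T& operator()(uint64_t unRow, uint64_t unCol)
            {
                // at() still throws std::out_of_range on a bad index
                return TheMatrix.at(m_unColSize * unRow + unCol);
            }

            // Const overload for read-only access.
            const T& operator()(uint64_t unRow, uint64_t unCol) const
            {
                return TheMatrix.at(m_unColSize * unRow + unCol);
            }

        private:
            uint64_t m_unRowSize; // assumed member, not shown in the question
            uint64_t m_unColSize;
            std::vector<T> TheMatrix;
        };

    With that in place, matrix(1, 2) = 6.78; works as written in the question; the only behavioural difference from SetElement is that an out-of-range index now throws to the caller instead of being swallowed inside the class.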

    Read the article

  • Template trick to optimize out allocations

    - by anon
    I have:

        struct DoubleVec {
            std::vector<double> data;
        };

        DoubleVec operator+(const DoubleVec& lhs, const DoubleVec& rhs) {
            DoubleVec ans(lhs.size());
            for (int i = 0; i < lhs.size(); ++i) {
                ans[i] = lhs[i] + rhs[i]; // assume lhs.size() == rhs.size()
            }
            return ans;
        }

        DoubleVec someFunc(DoubleVec a, DoubleVec b, DoubleVec c, DoubleVec d) {
            DoubleVec ans = a + b + c + d;
        }

    Now, in the above, the "a + b + c + d" will cause the creation of 3 temporary DoubleVecs -- is there a way to optimize this away with some kind of template magic, i.e. to optimize it down to something equivalent to:

        DoubleVec ans(a.size());
        for (int i = 0; i < ans.size(); i++)
            ans[i] = a[i] + b[i] + c[i] + d[i];

    You can assume all DoubleVecs have the same number of elements. The high-level idea is to do some kind of templated magic on "+" which delays the computation until the "=", at which point it looks into itself, goes "hmm, I'm just adding these numbers", and synthesizes a[i] + b[i] + c[i] + d[i] instead of all the temporaries. Thanks!
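
    What is being described is the expression-template idiom. A minimal sketch of it follows (the AddExpr type and the converting constructor are inventions here, and a production version would constrain the operator+ template so it does not match unrelated types):

        #include <cstddef>
        #include <vector>

        struct DoubleVec;

        // Represents "lhs + rhs" without evaluating it; holds references only.
        template <typename L, typename R>
        struct AddExpr {
            const L& lhs;
            const R& rhs;
            AddExpr(const L& l, const R& r) : lhs(l), rhs(r) {}
            double operator[](std::size_t i) const { return lhs[i] + rhs[i]; }
            std::size_t size() const { return lhs.size(); }
        };

        // "+" only builds a lightweight expression node; no loops, no DoubleVec temporaries.
        template <typename L, typename R>
        AddExpr<L, R> operator+(const L& lhs, const R& rhs) {
            return AddExpr<L, R>(lhs, rhs);
        }

        struct DoubleVec {
            std::vector<double> data;

            explicit DoubleVec(std::size_t n = 0) : data(n) {}

            // The whole expression tree is evaluated here, element by element, in one pass.
            template <typename Expr>
            DoubleVec(const Expr& e) : data(e.size()) {
                for (std::size_t i = 0; i < e.size(); ++i)
                    data[i] = e[i];
            }

            double  operator[](std::size_t i) const { return data[i]; }
            double& operator[](std::size_t i)       { return data[i]; }
            std::size_t size() const { return data.size(); }
        };

        DoubleVec someFunc(const DoubleVec& a, const DoubleVec& b,
                           const DoubleVec& c, const DoubleVec& d) {
            DoubleVec ans = a + b + c + d; // one loop over the elements, no intermediate DoubleVecs
            return ans;
        }

    The caveat with this sketch is that the AddExpr nodes hold references, so an expression has to be consumed in the same statement that builds it (as above); stashing an AddExpr away for later use would leave dangling references.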

    Read the article

  • How to create C++ istringstream from a char array with null(0) characters?

    - by Morpheus
    I have a char array which contains null characters at random locations. I tried to create an istringstream from this array (encodedData_arr) as below. I use this istringstream to insert binary data (image data of an IplImage) into a MySQL database blob field (using MySQL Connector/C++'s setBlob(istream *is)), but it only stores the characters up to the first null character. Is there a way to create an istringstream from a char array with null characters?

        unsigned char *encodedData_arr = new unsigned char[data_vector_uchar->size()];

        // Assign the data of vector<unsigned char> to the encodedData_arr
        for (int i = 0; i < vec_size; ++i)
        {
            cout << data_vector_uchar->at(i) << " : " << encodedData_arr[i] << endl;
        }
        // Here the content of the encodedData_arr is the same as data_vector_uchar,
        // so the char array is initializing fine.

        istream *is = new istringstream((char*)encodedData_arr, istringstream::in | istringstream::binary);

        prepStmt_insertImage->setBlob(1, is);
        // Here only part of the data is stored in the database blob field
        // (up to the first null character)
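
    The behaviour comes from the istringstream constructor taking a std::string: building that string from a bare char* stops at the first '\0'. Constructing the string with an explicit length keeps the embedded nulls. A small sketch (the setBlob call is Connector/C++'s, as in the question; the helper name is made up):

        #include <cstddef>
        #include <sstream>
        #include <string>

        // Builds a stream that preserves embedded '\0' bytes by passing the length explicitly.
        std::istringstream* makeBlobStream(const unsigned char* data, std::size_t size)
        {
            std::string bytes(reinterpret_cast<const char*>(data), size); // length-aware constructor
            return new std::istringstream(bytes, std::ios::in | std::ios::binary);
        }

    The istringstream keeps its own copy of the bytes, so usage would look like prepStmt_insertImage->setBlob(1, makeBlobStream(encodedData_arr, data_vector_uchar->size())); and the original buffer can be freed afterwards (assuming the connector reads the stream while it is still alive, which is an assumption here).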

    Read the article

  • Looking for Go equivalent of scanf.

    - by Stephen Hsu
    I'm looking for the Go equivalent of scanf(). I tried the following code:

        package main

        import (
            "scanner"
            "os"
            "fmt"
        )

        func main() {
            var s scanner.Scanner
            s.Init(os.Stdin)
            s.Mode = scanner.ScanInts
            tok := s.Scan()
            for tok != scanner.EOF {
                fmt.Printf("%d ", tok)
                tok = s.Scan()
            }
            fmt.Println()
        }

    I run it with input from a text file containing a line of integers, but it always outputs -3 -3 ... And how do I scan a line composed of a string and some integers? Do I change the mode whenever I encounter a new data type? The package documentation says "A general-purpose scanner for UTF-8 encoded text", but it seems that the scanner is not for general use. Updated code:

        func main() {
            n := scanf()
            fmt.Println(n)
            fmt.Println(len(n))
        }

        func scanf() []int {
            nums := new(vector.IntVector)
            reader := bufio.NewReader(os.Stdin)
            str, err := reader.ReadString('\n')
            for err != os.EOF {
                fields := strings.Fields(str)
                for _, f := range fields {
                    i, _ := strconv.Atoi(f)
                    nums.Push(i)
                }
                str, err = reader.ReadString('\n')
            }
            r := make([]int, nums.Len())
            for i := 0; i < nums.Len(); i++ {
                r[i] = nums.At(i)
            }
            return r
        }

    Read the article

  • no instance of overloaded function getline c++

    - by Dave
    I'm a bit confused as to what I have wrong in my code that is causing this error. I have a function which fills a map with the game settings, but the compiler doesn't like my getline. I should also mention these are the headers I have included for it:

        #include <fstream>
        #include <cctype>
        #include <map>
        #include <iostream>
        #include <string>
        #include <algorithm>
        #include <vector>
        using namespace std;

    This is what I have:

        std::map<string, string> loadSettings(std::string file){
            ifstream file(file);
            string line;
            std::map<string, string> config;
            while(std::getline(file, line)) {
                int pos = line.find('=');
                if(pos != string::npos) {
                    string key = line.substr(0, pos);
                    string value = line.substr(pos + 1);
                    config[trim(key)] = trim(value);
                }
            }
            return (config);
        }

    The function is called like this from my main.cpp:

        //load settings for game
        std::map<string, string> config = loadSettings("settings.txt");
        //load theme for game
        std::map<string, string> theme = loadSettings("theme.txt");

    Where did I go wrong? Please help! The error:

        settings.h(61): error C2784: 'std::basic_istream<_Elem,_Traits> &std::getline(std::basic_istream<_Elem,_Traits> &&,std::basic_string<_Elem,_Traits,_Alloc> &)' : could not deduce template argument for 'std::basic_istream<_Elem,_Traits> &&' from 'std::string'
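
    For comparison, here is a version of the same function that compiles cleanly with std::getline. The one structural change is that the ifstream gets a name distinct from the std::string parameter, so there is no question of which "file" the getline call sees (whether that name collision is what trips the compiler in the original build is an assumption); a throwaway trim is included just to keep the sketch self-contained:

        #include <fstream>
        #include <map>
        #include <string>

        static std::string trim(const std::string& s)
        {
            const char* ws = " \t\r\n";
            std::string::size_type b = s.find_first_not_of(ws);
            std::string::size_type e = s.find_last_not_of(ws);
            return (b == std::string::npos) ? std::string() : s.substr(b, e - b + 1);
        }

        std::map<std::string, std::string> loadSettings(const std::string& filename)
        {
            std::ifstream in(filename.c_str()); // distinct name: 'filename' stays a string
            std::map<std::string, std::string> config;
            std::string line;

            while (std::getline(in, line))      // getline(istream&, string&)
            {
                std::string::size_type pos = line.find('=');
                if (pos != std::string::npos)
                {
                    config[trim(line.substr(0, pos))] = trim(line.substr(pos + 1));
                }
            }
            return config;
        }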

    Read the article

  • Rendering maps from raw SVG data in Java

    - by Lunikon
    In an application of mine I have to display locations and great-circle paths on a map which is rendered to PNG and then displayed on the web. For this I simply use a world map (NASA's Blue Marble in fact) scaled to various "zoom levels" as the base image, and only display the part of it matching the final image size and fitting all items to be displayed. Straightforward so far. Now I came across Wikipedia's awesome blank SVG maps, which contain all the country codes for easy reference, and I was wondering whether it was possible to use those to have more customized colors and to highlight countries etc. So I did a bit of googling and was looking for Java libraries which would enable me to:

    - load the blank SVG map into memory
    - easily reference/select certain paths
    - manipulate coloring, stroke widths etc.
    - render to a buffered image as the background for the great-circle paths/nodes

    What I came across quite often was Batik, but it looks like a really heavy framework and I'm not quite sure whether it is what I'm looking for. I have recently played around with Raphaël a bit and I like the way it handles working with vector graphics in code. If the Java framework for my purpose featured a similar interface, that would be a nice-to-have. Any recommendations on what toolset would be appropriate for my purposes?

    Read the article

  • How to Refresh / Reload a KML layer in OpenLayers. Dynamic KML Layer.

    - by Ozaki
    TLDR: See my answer below on how to refresh the layer. So far I have tried an action function as follows:

        function RefreshKMLData(layer) {
            layer.loaded = false;
            layer.setVisibility(true);
            layer.redraw({ force: true });
        }

    Set the interval for the function:

        window.setInterval(RefreshKMLData, 5000, KMLLAYER);

    The layer itself:

        var KMLLAYER = new OpenLayers.Layer.Vector("MYKMLLAYER", {
            projection: new OpenLayers.Projection("EPSG:4326"),
            strategies: [new OpenLayers.Strategy.Fixed()],
            protocol: new OpenLayers.Protocol.HTTP({
                url: MYKMLURL,
                format: new OpenLayers.Format.KML({
                    extractStyles: true,
                    extractAttributes: true
                })
            })
        });

    The URL for KMLLAYER, with Math.random so it doesn't cache:

        var MYKMLURL = var currentanchorpositionurl = 'http://' + host + '/data?_salt=' + Math.random();

    I would have thought that this would refresh the layer: setting loaded to false unloads it, setting visibility to true reloads it, and with Math.random it shouldn't be able to cache. So has anyone done this before, or does anyone know how I can get this to work?

    Read the article

  • How does ifstream eof() work?

    - by Chan
    Hello everyone,

        #include <vector>
        #include <list>
        #include <map>
        #include <set>
        #include <deque>
        #include <stack>
        #include <bitset>
        #include <algorithm>
        #include <functional>
        #include <numeric>
        #include <utility>
        #include <sstream>
        #include <iostream>
        #include <iomanip>
        #include <cstdio>
        #include <cmath>
        #include <cstdlib>
        #include <ctime>
        #include <cctype>
        #include <fstream>
        using namespace std;

        int main()
        {
            fstream inf( "ex.txt", ios::in );
            while( !inf.eof() )
            {
                std::cout << inf.get() << "\n";
            }
            inf.close();
            inf.clear();
            inf.open( "ex.txt", ios::in );
            char c;
            while( inf >> c )
            {
                std::cout << c << "\n";
            }
            return 0;
        }

    I'm really confused about the eof() function. Suppose my ex.txt's content is:

        abc

    Why does the first loop, the one that tests eof(), always read an extra character and show -1? But the "inf >> c" loop gave the correct output, which was 'abc'. Can anyone help me explain this? Best regards, Chan Nguyen
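
    The short answer is that eof() only turns true after a read has already failed, so testing it before calling get() lets exactly one failed read through, and that failed read is what prints -1 (the EOF value). A small sketch of the usual patterns, where the read itself is the loop condition:

        #include <fstream>
        #include <iostream>
        #include <string>

        int main()
        {
            std::ifstream inf("ex.txt");

            // get() returns the EOF value only on the read that fails; checking the
            // returned value means the failed read is never printed.
            int ch;
            while ((ch = inf.get()) != std::char_traits<char>::eof())
                std::cout << static_cast<char>(ch) << "\n";

            // Same idea with formatted input: the stream itself is the condition,
            // which is why the "inf >> c" loop in the question behaves correctly.
            std::ifstream inf2("ex.txt");
            char c;
            while (inf2 >> c)
                std::cout << c << "\n";

            return 0;
        }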

    Read the article

  • Solving quadratic programming using R

    - by user702846
    I would like to solve the following quadratic programming problem using the ipop function from kernlab:

        min 0.5*x'*H*x + f'*x
        subject to:  A*x <= b
                     Aeq*x = beq
                     LB <= x <= UB

    where in our example H is a 3x3 matrix, f is 3x1, A is 2x3, b is 2x1, and LB and UB are both 3x1. edit 1: My R code is:

        library(kernlab)
        H <- rbind(c(1,0,0),c(0,1,0),c(0,0,1))
        f = rbind(0,0,0)
        A = rbind(c(1,1,1), c(-1,-1,-1))
        b = rbind(4.26, -1.73)
        LB = rbind(0,0,0)
        UB = rbind(100,100,100)

        > ipop(f,H,A,b,LB,UB,0)
        Error in crossprod(r, q) : non-conformable arguments

    I know that in MATLAB it is something like this:

        H = eye(3);
        f = [0,0,0];
        nsamples=3;
        eps = (sqrt(nsamples)-1)/sqrt(nsamples);
        A=ones(1,nsamples);
        A(2,:)=-ones(1,nsamples);
        b=[nsamples*(eps+1); nsamples*(eps-1)];
        Aeq = [];
        beq = [];
        LB = zeros(nsamples,1);
        UB = ones(nsamples,1).*1000;
        [beta,FVAL,EXITFLAG] = quadprog(H,f,A,b,Aeq,beq,LB,UB);

    and the answer is a 3x1 vector equal to [0.57, 0.57, 0.57]. However, when I try it in R, using the ipop function from the kernlab library, ipop(f,H,A,b,LB,UB,0), I am facing

        Error in crossprod(r, q) : non-conformable arguments

    I appreciate any comments.

    Read the article

  • Counting in R data.table

    - by Simon Z.
    I have the following data.table:

        set.seed(1)
        DT <- data.table(VAL = sample(c(1, 2, 3), 10, replace = TRUE))

            VAL
         1:   1
         2:   2
         3:   2
         4:   3
         5:   1
         6:   3
         7:   3
         8:   2
         9:   2
        10:   1

    Now I want to perform two tasks:

    1. Count the occurrences of numbers in VAL.
    2. Count within all rows with the same value VAL (first, second, third occurrence).

    At the end I want the result:

            VAL COUNT IDX
         1:   1     3   1
         2:   2     4   1
         3:   2     4   2
         4:   3     3   1
         5:   1     3   2
         6:   3     3   2
         7:   3     3   3
         8:   2     4   3
         9:   2     4   4
        10:   1     3   3

    where COUNT is the result of task 1 and IDX of task 2. I tried to work with which and length using .I:

        DT[, list(COUNT = length(VAL == VAL[.I]), IDX = which(which(VAL == VAL[.I]) == .I))]

    but this does not work, as .I refers to a vector with the index, so I guess one must use .I[]. Though inside .I[] I again face the problem that I do not have the row index, and I do know (from reading the data.table FAQ and following the posts here) that looping through rows should be avoided if possible. So, what's the data.table way?

    Read the article

  • How to write an R function that evaluates an expression within a data-frame

    - by Prasad Chalasani
    Puzzle for the R cognoscenti: Say we have a data-frame: df <- data.frame( a = 1:5, b = 1:5 ) I know we can do things like with(df, a) to get a vector of results. But how do I write a function that takes an expression (such as a or a > 3) and does the same thing inside. I.e. I want to write a function fn that takes a data-frame and an expression as arguments and returns the result of evaluating the expression "within" the data-frame as an environment. Never mind that this sounds contrived (I could just use with as above), but this is just a simplified version of a more complex function I am writing. I tried several variants ( using eval, with, envir, substitute, local, etc) but none of them work. For example if I define fn like so: fn <- function(dat, expr) { eval(expr, envir = dat) } I get this error: > fn( df, a ) Error in eval(expr, envir = dat) : object 'a' not found Clearly I am missing something subtle about environments and evaluation. Is there a way to define such a function?

    Read the article

  • Delphi: EInvalidOp in neural network class (TD-lambda)

    - by user89818
    I have the following draft for a neural network class. This neural network should learn with TD-lambda. It is started by calling the getRating() function. But unfortunately, there is an EInvalidOp (invalid floading point operation) error after about 1000 iterations in the following lines: neuronsHidden[j] := neuronsHidden[j]+neuronsInput[t][i]*weightsInput[i][j]; // input -> hidden weightsHidden[j][k] := weightsHidden[j][k]+LEARNING_RATE_HIDDEN*tdError[k]*eligibilityTraceOutput[j][k]; // adjust hidden->output weights according to TD-lambda Why is this error? I can't find the mistake in my code :( Can you help me? Thank you very much in advance! unit uNeuronalesNetz; interface uses Windows, Messages, SysUtils, Variants, Classes, Graphics, Controls, Forms, Dialogs, ExtCtrls, StdCtrls, Grids, Menus, Math; const NEURONS_INPUT = 43; // number of neurons in the input layer NEURONS_HIDDEN = 60; // number of neurons in the hidden layer NEURONS_OUTPUT = 1; // number of neurons in the output layer NEURONS_TOTAL = NEURONS_INPUT+NEURONS_HIDDEN+NEURONS_OUTPUT; // total number of neurons in the network MAX_TIMESTEPS = 42; // maximum number of timesteps possible (after 42 moves: board is full) LEARNING_RATE_INPUT = 0.25; // in ideal case: decrease gradually in course of training LEARNING_RATE_HIDDEN = 0.15; // in ideal case: decrease gradually in course of training GAMMA = 0.9; LAMBDA = 0.7; // decay parameter for eligibility traces type TFeatureVector = Array[1..43] of SmallInt; // definition of the array type TFeatureVector TArtificialNeuralNetwork = class // definition of the class TArtificialNeuralNetwork private // GENERAL SETTINGS START learningMode: Boolean; // does the network learn and change its weights? // GENERAL SETTINGS END // NETWORK CONFIGURATION START neuronsInput: Array[1..MAX_TIMESTEPS] of Array[1..NEURONS_INPUT] of Extended; // array of all input neurons (their values) for every timestep neuronsHidden: Array[1..NEURONS_HIDDEN] of Extended; // array of all hidden neurons (their values) neuronsOutput: Array[1..NEURONS_OUTPUT] of Extended; // array of output neurons (their values) weightsInput: Array[1..NEURONS_INPUT] of Array[1..NEURONS_HIDDEN] of Extended; // array of weights: input->hidden weightsHidden: Array[1..NEURONS_HIDDEN] of Array[1..NEURONS_OUTPUT] of Extended; // array of weights: hidden->output // NETWORK CONFIGURATION END // LEARNING SETTINGS START outputBefore: Array[1..NEURONS_OUTPUT] of Extended; // the network's output value in the last timestep (the one before) eligibilityTraceHidden: Array[1..NEURONS_INPUT] of Array[1..NEURONS_HIDDEN] of Array[1..NEURONS_OUTPUT] of Extended; // array of eligibility traces: hidden layer eligibilityTraceOutput: Array[1..NEURONS_TOTAL] of Array[1..NEURONS_TOTAL] of Extended; // array of eligibility traces: output layer reward: Array[1..MAX_TIMESTEPS] of Array[1..NEURONS_OUTPUT] of Extended; // the reward value for all output neurons in every timestep tdError: Array[1..NEURONS_OUTPUT] of Extended; // the network's error value for every single output neuron t: Byte; // current timestep cyclesTrained: Integer; // number of cycles trained so far (learning rates could be decreased accordingly) last50errors: Array[1..50] of Extended; // LEARNING SETTINGS END public constructor Create; // create the network object and do the initialization procedure UpdateEligibilityTraces; // update the eligibility traces for the hidden and output layer procedure tdLearning; // learning algorithm: adjust the network's weights procedure ForwardPropagation; 
// propagate the input values through the network to the output layer function getRating(state: TFeatureVector; explorative: Boolean): Extended; // get the rating for a given state (feature vector) function HyperbolicTangent(x: Extended): Extended; // calculate the hyperbolic tangent [-1;1] procedure StartNewCycle; // start a new cycle with everything set to default except for the weights procedure setLearningMode(activated: Boolean=TRUE); // switch the learning mode on/off procedure setInputs(state: TFeatureVector); // transfer the given feature vector to the input layer (set input neurons' values) procedure setReward(currentReward: SmallInt); // set the reward for the current timestep (with learning then or without) procedure nextTimeStep; // increase timestep t function getCyclesTrained(): Integer; // get the number of cycles trained so far procedure Visualize(imgHidden: Pointer); // visualize the neural network's hidden layer end; implementation procedure TArtificialNeuralNetwork.UpdateEligibilityTraces; var i, j, k: Integer; begin // how worthy is a weight to be adjusted? for j := 1 to NEURONS_HIDDEN do begin for k := 1 to NEURONS_OUTPUT do begin eligibilityTraceOutput[j][k] := LAMBDA*eligibilityTraceOutput[j][k]+(neuronsOutput[k]*(1-neuronsOutput[k]))*neuronsHidden[j]; for i := 1 to NEURONS_INPUT do begin eligibilityTraceHidden[i][j][k] := LAMBDA*eligibilityTraceHidden[i][j][k]+(neuronsOutput[k]*(1-neuronsOutput[k]))*weightsHidden[j][k]*neuronsHidden[j]*(1-neuronsHidden[j])*neuronsInput[t][i]; end; end; end; end; procedure TArtificialNeuralNetwork.setReward; VAR i: Integer; begin for i := 1 to NEURONS_OUTPUT do begin // +1 = player A wins // 0 = draw // -1 = player B wins reward[t][i] := currentReward; end; end; procedure TArtificialNeuralNetwork.tdLearning; var i, j, k: Integer; begin if learningMode then begin for k := 1 to NEURONS_OUTPUT do begin if reward[t][k] = 0 then begin tdError[k] := GAMMA*neuronsOutput[k]-outputBefore[k]; // network's error value when reward is 0 end else begin tdError[k] := reward[t][k]-outputBefore[k]; // network's error value in the final state (reward received) end; for j := 1 to NEURONS_HIDDEN do begin weightsHidden[j][k] := weightsHidden[j][k]+LEARNING_RATE_HIDDEN*tdError[k]*eligibilityTraceOutput[j][k]; // adjust hidden->output weights according to TD-lambda for i := 1 to NEURONS_INPUT do begin weightsInput[i][j] := weightsInput[i][j]+LEARNING_RATE_INPUT*tdError[k]*eligibilityTraceHidden[i][j][k]; // adjust input->hidden weights according to TD-lambda end; end; end; end; end; procedure TArtificialNeuralNetwork.ForwardPropagation; var i, j, k: Integer; begin for j := 1 to NEURONS_HIDDEN do begin neuronsHidden[j] := 0; for i := 1 to NEURONS_INPUT do begin neuronsHidden[j] := neuronsHidden[j]+neuronsInput[t][i]*weightsInput[i][j]; // input -> hidden end; neuronsHidden[j] := HyperbolicTangent(neuronsHidden[j]); // activation of hidden neuron j end; for k := 1 to NEURONS_OUTPUT do begin neuronsOutput[k] := 0; for j := 1 to NEURONS_HIDDEN do begin neuronsOutput[k] := neuronsOutput[k]+neuronsHidden[j]*weightsHidden[j][k]; // hidden -> output end; neuronsOutput[k] := HyperbolicTangent(neuronsOutput[k]); // activation of output neuron k end; end; procedure TArtificialNeuralNetwork.setLearningMode; begin learningMode := activated; end; constructor TArtificialNeuralNetwork.Create; var i, j, k: Integer; begin inherited Create; Randomize; // initialize random numbers generator learningMode := TRUE; cyclesTrained := -2; // only set to -2 because it will be 
increased twice in the beginning StartNewCycle; for j := 1 to NEURONS_HIDDEN do begin for k := 1 to NEURONS_OUTPUT do begin weightsHidden[j][k] := abs(Random-0.5); // initialize weights: 0 <= random < 0.5 end; for i := 1 to NEURONS_INPUT do begin weightsInput[i][j] := abs(Random-0.5); // initialize weights: 0 <= random < 0.5 end; end; for i := 1 to 50 do begin last50errors[i] := 0; end; end; procedure TArtificialNeuralNetwork.nextTimeStep; begin t := t+1; end; procedure TArtificialNeuralNetwork.StartNewCycle; var i, j, k, m: Integer; begin t := 1; // start in timestep 1 cyclesTrained := cyclesTrained+1; // increase the number of cycles trained so far for j := 1 to NEURONS_HIDDEN do begin neuronsHidden[j] := 0; for k := 1 to NEURONS_OUTPUT do begin eligibilityTraceOutput[j][k] := 0; outputBefore[k] := 0; neuronsOutput[k] := 0; for m := 1 to MAX_TIMESTEPS do begin reward[m][k] := 0; end; end; for i := 1 to NEURONS_INPUT do begin for k := 1 to NEURONS_OUTPUT do begin eligibilityTraceHidden[i][j][k] := 0; end; end; end; end; function TArtificialNeuralNetwork.getCyclesTrained; begin result := cyclesTrained; end; procedure TArtificialNeuralNetwork.setInputs; var k: Integer; begin for k := 1 to NEURONS_INPUT do begin neuronsInput[t][k] := state[k]; end; end; function TArtificialNeuralNetwork.getRating; begin setInputs(state); ForwardPropagation; result := neuronsOutput[1]; if not explorative then begin tdLearning; // adjust the weights according to TD-lambda ForwardPropagation; // calculate the network's output again outputBefore[1] := neuronsOutput[1]; // set outputBefore which will then be used in the next timestep UpdateEligibilityTraces; // update the eligibility traces for the next timestep nextTimeStep; // go to the next timestep end; end; function TArtificialNeuralNetwork.HyperbolicTangent; begin if x > 5500 then // prevent overflow result := 1 else result := (Exp(2*x)-1)/(Exp(2*x)+1); end; end.

    Read the article

  • Best (Java) book for understanding 'under the bonnet' for programming?

    - by Ben
    What would you say is the best book to buy to understand exactly how programming works under the hood, in order to increase performance? I've coded in assembly at university, I studied computer architecture, and I obviously did high-level programming, but what I really don't understand is things like:

    - what is happening when I perform a cast
    - what's the difference in performance if I declare something global as opposed to local?
    - how does the memory layout for an ArrayList compare with a Vector or LinkedList?
    - what's the overhead with pointers?
    - are locks more efficient than using synchronized?
    - would creating my own array using int[] be faster than using ArrayList?
    - advantages/disadvantages of declaring a variable volatile

    I have got a copy of Java Performance Tuning, but it doesn't go down very low and it contains rather obvious things like suggesting a HashMap instead of an ArrayList as you can map the keys to memory addresses etc. I want something a bit more Computer Science-y, linking the programming language to what happens with the assembler/hardware. The reason I'm asking is that I have an interview coming up for a job in High Frequency Trading and everything has to be as efficient as possible, yet I can't remember every single possible efficiency saving, so I'd just like to learn the fundamentals. Thanks in advance

    Read the article

  • Get type of the parameter from list of objects, templates, C++

    - by CrocodileDundee
    This question follows from my previous question, "Get type of the parameter, templates, C++". There is the following data structure:

    Object1.h

        template <class T>
        class Object1
        {
        private:
            T a1;
            T a2;
        public:
            T getA1() {return a1;}
            typedef T type;
        };

    Object2.h

        template <class T>
        class Object2: public Object1 <T>
        {
        private:
            T b1;
            T b2;
        public:
            T getB1() {return b1;}
        };

    List.h

        template <typename Item>
        struct TList
        {
            typedef std::vector <Item> Type;
        };

        template <typename Item>
        class List
        {
        private:
            typename TList <Item>::Type items;
        };

    Is there any way to get the type T of an object from the list of objects (i.e. Object is not a direct parameter of the function but a template parameter)?

        template <class Object>
        void process (List <Object> *objects)
        {
            typename Object::type a1 = objects[0].getA1();
            // g++ error: 'Object1<double>*' is not a class, struct, or union type
        }

    But this construction works (i.e. Object represents a parameter of the function):

        template <class Object>
        void process (Object *o1)
        {
            typename Object::type a1 = o1.getA1(); // OK
        }
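
    For what it's worth, the nested-typedef route itself does work whenever the deduced Object really is the class; a compilable sketch (using std::vector directly instead of the List wrapper, since List as posted doesn't expose element access; that substitution is an assumption about the intent):

        #include <vector>

        template <class T>
        class Object1
        {
        public:
            Object1() : a1(), a2() {}
            T getA1() const { return a1; }
            typedef T type;
        private:
            T a1;
            T a2;
        };

        template <class Object>
        void process(const std::vector<Object>& objects)
        {
            // Object is Object1<double> here, so Object::type is double.
            typename Object::type a1 = objects[0].getA1();
            (void)a1;
        }

        int main()
        {
            std::vector<Object1<double> > objects(3);
            process(objects);
        }

    The error quoted in the question mentions 'Object1<double>*', i.e. a pointer type; if the deduced Object ends up being a pointer, the nested typedef lookup fails exactly like that, and the type would have to be pulled off the pointee instead (for example via a dereference or a remove-pointer trait), but that is only a guess at the original setup.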

    Read the article

  • Where to place interactive objects in JavaScript?

    - by Chris
    I'm creating a web-based application (i.e. JavaScript with jQuery and lots of SVG) where the user interacts with "objects" on the screen (think of DIVs that can be dragged around, resized and connected by arrows, like a vector drawing program or a graphical programming language). As each "object" contains individual information but always belongs to a "class" of elements, it's obvious that this application should be programmed using an OOP approach. But where do I best store the "objects"? Should I create a global structure ("registry") with all the (JS native) objects and tell them "draw yourself on the DOM"? Or should I avoid such a structure and think of the (relevant) DOM nodes as my objects and attach the relevant data to them with .data()? The first approach is very MVC, but I guess attaching all the event handlers will be non-trivial. The second approach handles the events in a trivial way and doesn't create a duplicate structure, but I guess the usual OO stuff (like methods) will be more complex. What do you recommend? I think the answer will be JavaScript- and SVG-specific, as "usual" programming languages don't have such a highly organized output "canvas".

    Read the article

  • Using boost::iterator_adaptor

    - by Neil G
    I wrote a sparse vector class (see #1, #2.) I would like to provide two kinds of iterators: The first set, the regular iterators, can point any element, whether set or unset. If they are read from, they return either the set value or value_type(), if they are written to, they create the element and return the lvalue reference. Thus, they are: Random Access Traversal Iterator and Readable and Writable Iterator The second set, the sparse iterators, iterate over only the set elements. Since they don't need to lazily create elements that are written to, they are: Random Access Traversal Iterator and Readable and Writable and Lvalue Iterator I also need const versions of both, which are not writable. I can fill in the blanks, but not sure how to use boost::iterator_adaptor to start out. Here's what I have so far: class iterator : public boost::iterator_adaptor< iterator // Derived , value_type* // Base , boost::use_default // Value , boost::?????? // CategoryOrTraversal > class sparse_iterator : public boost::iterator_adaptor< iterator // Derived , value_type* // Base , boost::use_default // Value , boost::random_access_traversal_tag? // CategoryOrTraversal >
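
    For the plumbing itself, a minimal compilable skeleton of one of the four (a non-const sparse iterator over the packed storage) might look like the following; the Reference parameter is where the lazy, element-creating iterator would differ, since it has to hand out a proxy rather than a plain lvalue, and that part is only described here, not shown:

        #include <boost/iterator/iterator_adaptor.hpp>

        typedef double value_type;   // stand-in for the sparse vector's element type

        class sparse_iterator
            : public boost::iterator_adaptor<
                  sparse_iterator,                      // Derived
                  value_type*,                          // Base: walks the packed set elements
                  boost::use_default,                   // Value
                  boost::random_access_traversal_tag>   // CategoryOrTraversal
        {
        public:
            sparse_iterator() : sparse_iterator::iterator_adaptor_(0) {}
            explicit sparse_iterator(value_type* p) : sparse_iterator::iterator_adaptor_(p) {}

        private:
            friend class boost::iterator_core_access;
            // With a plain pointer as Base, the dereference/increment/advance defaults
            // supplied by iterator_adaptor already do the right thing, so nothing is overridden.
        };

    The const variants would use const value_type* as the Base; the lazily-creating "regular" iterator would keep a pointer back to the container plus an index as its state instead of a raw pointer, which is a different Base choice than shown above.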

    Read the article

  • C++: inheritance problem

    - by Helltone
    It's quite hard to explain what I'm trying to do, I'll try: Imagine a base class A which contains some variables, and a set of classes deriving from A which all implement some method bool test() that operates on the variables inherited from A. class A { protected: int somevar; // ... }; class B : public A { public: bool test() { return (somevar == 42); } }; class C : public A { public: bool test() { return (somevar > 23); } }; // ... more classes deriving from A Now I have an instance of class A and I have set the value of somevar. int main(int, char* []) { A a; a.somevar = 42; Now, I need some kind of container that allows me to iterate over the elements i of this container, calling i::test() in the context of a... that is: std::vector<...> vec; // push B and C into vec, this is pseudo-code vec.push_back(&B); vec.push_back(&C); bool ret = true; for(i = vec.begin(); i != vec.end(); ++i) { // call B::test(), C::test(), setting *this to a ret &= ( a .* (&(*i)::test) )(); } return ret; } How can I do this? I've tried two methods: forcing a cast from B::* to A::*, adapting a pointer to call a method of a type on an object of a different type (works, but seems to be bad); using std::bind + the solution above, ugly hack; changing the signature of bool test() so that it takes an argument of type const A& instead of inheriting from A, I don't really like this solution because somevar must be public.
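
    A different way to structure this, sketched below, is to drop the B/C-as-subclasses idea and store polymorphic predicate objects that are handed a const A& to inspect (so this is a named alternative to the member-pointer cast, not the same technique); somevar can stay non-public by befriending the predicate types, and A is given a constructor here only so the sample is self-contained:

        #include <vector>

        class A
        {
        public:
            explicit A(int v) : somevar(v) {}
        protected:
            int somevar;
            // the checks may read somevar without it becoming public
            friend struct CheckB;
            friend struct CheckC;
        };

        struct Check                            // common interface for all checks
        {
            virtual ~Check() {}
            virtual bool test(const A& a) const = 0;
        };

        struct CheckB : Check
        {
            bool test(const A& a) const { return a.somevar == 42; }
        };

        struct CheckC : Check
        {
            bool test(const A& a) const { return a.somevar > 23; }
        };

        int main()
        {
            A a(42);

            CheckB b;
            CheckC c;
            std::vector<const Check*> vec;
            vec.push_back(&b);
            vec.push_back(&c);

            bool ret = true;
            for (std::vector<const Check*>::const_iterator i = vec.begin(); i != vec.end(); ++i)
                ret &= (*i)->test(a);           // each check runs "in the context of a"

            return ret ? 0 : 1;
        }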

    Read the article

  • Triangulation & Direct linear transform

    - by srand
    Following Hartley/Zisserman's Multiview Geometery, Algorithm 12: The optimal triangulation method (p318), I got the corresponding image points xhat1 and xhat2 (step 10). In step 11, one needs to compute the 3D point Xhat. One such method is Direct Linear Transform (DLT), mentioned in 12.2 (p312) and 4.1 (p88). The homogenous method (DLT), p312-313, states that it finds a solution as the unit singular vector corresponding to the smallest singular value of A, thus, A = [xhat1(1) * P1(3,:)' - P1(1,:)' ; xhat1(2) * P1(3,:)' - P1(2,:)' ; xhat2(1) * P2(3,:)' - P2(1,:)' ; xhat2(2) * P2(3,:)' - P2(2,:)' ]; [Ua Ea Va] = svd(A); Xhat = Va(:,end); plot3(Xhat(1),Xhat(2),Xhat(3), 'r.'); However, A is a 16x1 matrix, resulting in a Va that is 1x1. What am I doing wrong (and a fix) in getting the 3D point? For what its worth sample data: xhat1 = 1.0e+009 * 4.9973 -0.2024 0.0027 xhat2 = 1.0e+011 * 2.0729 2.6624 0.0098 P1 = 699.6674 0 392.1170 0 0 701.6136 304.0275 0 0 0 1.0000 0 P2 = 1.0e+003 * -0.7845 0.0508 -0.1592 1.8619 -0.1379 0.7338 0.1649 0.6825 -0.0006 0.0001 0.0008 0.0010 A = <- my computation 1.0e+011 * -0.0000 0 0.0500 0 0 -0.0000 -0.0020 0 -1.3369 0.2563 1.5634 2.0729 -1.7170 0.3292 2.0079 2.6624

    Read the article

  • Design for fastest page download

    - by mexxican
    I have a file with millions of URLs/IPs and have to write a program to download the pages really fast. The connection rate should be at least 6000/s and the file download rate at least 2000/s, with an average file size of 15 kB. The network bandwidth is 1 Gbps. My approach so far has been:

    - Create 600 socket threads, each with 60 sockets, and use WSAEventSelect to wait for data to read.
    - As soon as a file download is complete, add the memory address (of the downloaded file) to a pipeline (a simple vector) and fire another request.
    - When the total downloaded among all socket threads is more than 50 MB, write all the downloaded files to disk and free the memory.

    So far this approach has not been very successful, with the connection rate not shooting beyond 2900/s and the downloaded data rate even less. Can somebody suggest an alternative approach which could give me better stats? Also, I am working on a Windows Server 2008 machine with 8 GB of memory. Do we need to hack the kernel so that we can use more threads and memory? Currently I can create a max of 1500 threads, and memory usage does not go beyond 2 GB (which technically should be much more, as this is a 64-bit machine). And IOCP is out of the question, as I have no experience with it so far and have to fix this application today. Thanks guys!

    Read the article

  • Conditionally colour data points outside of confidence bands in R

    - by D W
    I need to colour datapoints that are outside of the the confidence bands on the plot below differently from those within the bands. Should I add a separate column to my dataset to record whether the data points are within the confidence bands? Can you provide an example please? Example dataset: ## Dataset from http://www.apsnet.org/education/advancedplantpath/topics/RModules/doc1/04_Linear_regression.html ## Disease severity as a function of temperature # Response variable, disease severity diseasesev<-c(1.9,3.1,3.3,4.8,5.3,6.1,6.4,7.6,9.8,12.4) # Predictor variable, (Centigrade) temperature<-c(2,1,5,5,20,20,23,10,30,25) ## For convenience, the data may be formatted into a dataframe severity <- as.data.frame(cbind(diseasesev,temperature)) ## Fit a linear model for the data and summarize the output from function lm() severity.lm <- lm(diseasesev~temperature,data=severity) jpeg('~/Desktop/test1.jpg') # Take a look at the data plot( diseasesev~temperature, data=severity, xlab="Temperature", ylab="% Disease Severity", pch=16, pty="s", xlim=c(0,30), ylim=c(0,30) ) title(main="Graph of % Disease Severity vs Temperature") par(new=TRUE) # don't start a new plot ## Get datapoints predicted by best fit line and confidence bands ## at every 0.01 interval xRange=data.frame(temperature=seq(min(temperature),max(temperature),0.01)) pred4plot <- predict( lm(diseasesev~temperature), xRange, level=0.95, interval="confidence" ) ## Plot lines derrived from best fit line and confidence band datapoints matplot( xRange, pred4plot, lty=c(1,2,2), #vector of line types and widths type="l", #type of plot for each column of y xlim=c(0,30), ylim=c(0,30), xlab="", ylab="" )

    Read the article

  • find consecutive nonzero values

    - by thymeandspace
    I am trying to write a simple MATLAB program that will find the first chain of more than 70 consecutive nonzero values and return the starting value of that chain. I am working with movement data from a joystick, and there are a few thousand rows of data with a mix of zeros and nonzero values before the actual trial begins (coming from subjects slightly moving the joystick before the trial actually started). I need to get rid of these rows before I can start analyzing the movement from the trials. I am sure this is a relatively simple thing to do, so I was hoping someone could offer insight. Thank you in advance -Lilly

    EDIT: Here's what I tried:

        s = zeros(size(x1));
        for i=2:length(x1)
            if(x1(i-1) ~= 0)
                s(i) = 1 + s(i-1);
            end
        end
        display(s);

    for a vector x1 which has a max chain of 72, but I don't know how to find the max chain and return its first value, so I know where to trim. I also really don't think this is the best strategy, since the max chain in my data will be tens of thousands of values. Thanks for helping me edit, Steve. :)

    Read the article

  • Speed/expense of SQLite query vs. List.contains() for "in-set" icon on list rows

    - by kpdvx
    An application I'm developing requires that the app maintain a local list of things, let's say books, in a local "library." Users can access their local library of books and search for books using a remote web service. The app will be aware of other users of the app through this web service, and users can browse other users' lists of books in their libraries. Each book is identified by a unique bookId (represented as an int). When viewing books returned through a search result or when viewing another user's book library, the individual list row cells need to visually represent whether the book is in the user's local library or not. A user can have at most 5,000 books in the library, stored in SQLite on the device (and synchronized with the remote web service). My question is: to determine if the book shown in the list row is in the user's library, would it be better to ask SQLite directly (via SELECT COUNT(*)...) or to maintain, in memory, a List or int[] array of some sort containing the unique bookIds? So, on each row display, do I query SQLite or check if the List or int[] array contains the unique bookId? Because the user can have at most 5,000 books, and each bookId occupies 4 bytes, at most this would use ~20 kB. In thinking about this, and in typing this out, it seems obvious to me that it would be far better for performance if I maintained a list or int[] array of in-library bookIds vs. querying SQLite (the only caveat to maintaining an int[] array is that if books are added or removed I'll need to grow or shrink the array by hand, so with this option I'll most likely use an ArrayList or Vector, though I'm not sure of the additional memory overhead of using Integer objects as opposed to primitives). Opinions, thoughts, suggestions?

    Read the article

  • Exemplars of large document-centric applications with COM/XPCOM/.NET interfaces.

    - by Warren P
    I am looking for exemplars (design examples) showing the use of interfaces (aka 'protocols' for you smalltalkers) to design a document management architecture in a large Word Processor, Spreadsheet, vector graphic or publishing package, or office-productivity (non-database) application with support for as many of the following as possible: any open source project, will be ideal, and language of implementation is unimportant since I am looking for design examples, however an object oriented language with support for "interfaces" is a must. I know at least a dozen languages, and I'm willing to study any application's source. use of "interface" could loosely be applied to either XPCOM or COM interfaces, or .NET interfaces, or even the use of pure-virtual (virtual+abstract) base-classes for OOP languages that lack the ability to declare an interface distinct from a class. I am mostly looking for a robust, thorough and flexible implementation for a document, IDocument, various document views (IDocumentView), and whatever operations make sense in that case. I am particular interested in cases where the product in question is a real-world product. For example, if anybody familiar with OpenOffice can tell me if the code contains a good sample design. I am looking for design documentation that outlines the design of the interfaces for such an application. So for example, if the openoffice spreadsheet has such an interface design, then that might be the best case, because it is a widely used real-world design, with millions of users, rather than a textbook example, which is minimal, and contrived. I know that the Mozilla platform uses XPCOM, and its design is heavily "interface" oriented, but I am looking more for a "word processor" or "spreadsheet" type of document design, rather than a web-browser. I am particularly interested in the interfaces used to access to data and meta-data such as markup (attributes like bold, and italics, and font size), and the ability to search and look up named entities within a document.

    Read the article

  • lapply slower than for-loop when used for a BiomaRt query. Is that expected?

    - by ptocquin
    I would like to query a database using BiomaRt package. I have loci and want to retrieve some related information, let say description. I first try to use lapply but was surprise by the time needed for the task to be performed. I thus tried a more basic for-loop and get a faster result. Is that expected or is something wrong with my code or with my understanding of apply ? I read other posts dealing with *apply vs for-loop performance (Here, for example) and I was aware that improved performance should not be expected but I don't understand why performance here is actually lower. Here is a reproducible example. 1) Loading the library and selecting the database : library("biomaRt") athaliana <- useMart("plants_mart_14") athaliana <- useDataset("athaliana_eg_gene",mart=athaliana) 2) Querying the database : loci <- c("at1g01300", "at1g01800", "at1g01900", "at1g02335", "at1g02790", "at1g03220", "at1g03230", "at1g04040", "at1g04110", "at1g05240" ) I create a function for the use in lapply : foo <- function(loci) { getBM("description","tair_locus",loci,athaliana) } When I use this function on the first element : > system.time(foo(cwp_loci[1])) utilisateur système écoulé 0.020 0.004 1.599 When I use lapply to retrieve the data for all values : > system.time(lapply(loci, foo)) utilisateur système écoulé 0.220 0.000 16.376 I then created a new function, adding a for-loop : foo2 <- function(loci) { for (i in loci) { getBM("description","tair_locus",loci[i],athaliana) } } Here is the result : > system.time(foo2(loci)) utilisateur système écoulé 0.204 0.004 10.919 Of course, this will be applied to a big list of loci, so the best performing option is needed. I thank you for assistance. EDIT Following recommendation of @MartinMorgan Simply passing the vector loci to getBM greatly improves the query efficiency. Simpler is better. > system.time(lapply(loci, foo)) utilisateur système écoulé 0.236 0.024 110.512 > system.time(foo2(loci)) utilisateur système écoulé 0.208 0.040 116.099 > system.time(foo(loci)) utilisateur système écoulé 0.028 0.000 6.193

    Read the article

  • How to reliably get size of C-style array?

    - by Frank
    How do I reliably get the size of a C-style array? The method often recommended seems to be to use sizeof, but it doesn't work in the foo function, where x is passed in: #include <iostream> void foo(int x[]) { std::cerr << (sizeof(x) / sizeof(int)); // 2 } int main(){ int x[] = {1,2,3,4,5}; std::cerr << (sizeof(x) / sizeof(int)); // 5 foo(x); return 0; } Answers to this question recommend sizeof but they don't say that it (apparently?) doesn't work if you pass the array around. So, do I have to use a sentinel instead? (I don't think the users of my foo function can always be trusted to put a sentinel at the end. Of course, I could use std::vector, but then I don't get the nice shorthand syntax {1,2,3,4,5}.)
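
    The usual library-free answer is that the element count is only recoverable where the array type itself is still visible, so the function has to take the array by reference rather than letting it decay to a pointer; a small sketch:

        #include <cstddef>
        #include <iostream>

        // N is deduced from the array type, so this only compiles when a real
        // array (not a decayed pointer) is passed.
        template <typename T, std::size_t N>
        std::size_t arraySize(T (&)[N])
        {
            return N;
        }

        template <typename T, std::size_t N>
        void foo(T (&x)[N])
        {
            std::cerr << arraySize(x) << "\n";   // 5: the array type survives the call
        }

        int main()
        {
            int x[] = {1, 2, 3, 4, 5};
            std::cerr << arraySize(x) << "\n";   // 5
            foo(x);
            return 0;
        }

    The trade-off is that foo becomes a template instantiated per array length; the {1,2,3,4,5} initializer syntax still works, and no sentinel is needed.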

    Read the article
