Search Results

Search found 3677 results on 148 pages for 'concurrent vector'.


  • Calculating collision for a moving circle, without overlapping the boundaries

    - by Robert Vella
    Let's say I have a circle bouncing around inside a rectangular area. At some point this circle will collide with one of the surfaces of the rectangle and reflect back. The usual way I'd do this would be to let the circle overlap that boundary and then reflect the velocity vector. The fact that the circle actually overlaps the boundary isn't usually a problem, nor really noticeable at low velocity. At high velocity it becomes quite clear that the circle is doing something it shouldn't. What I'd like to do is to programmatically take reflection into account and place the circle at its proper position before displaying it on the screen. This means that I have to calculate the point where it hits the boundary between its current position and its future position -- rather than calculating its new position and then checking if it has hit the boundary. This is a little bit more complicated than the usual circle/rectangle collision problem. I have a vague idea of how I should do it -- basically create a bounding rectangle between the current position and the new position, which brings up a slew of problems of its own (since the rectangle is rotated according to the direction of the circle's velocity). However, I'm thinking that this is a common problem, and that a common solution already exists. Is there a common solution to this kind of problem? Perhaps some basic theories which I should look into?
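
    This is usually solved as a swept (continuous) collision test. Against axis-aligned walls, the circle can be treated as a point moving inside a rectangle shrunk by the radius, so the time of impact falls out of a few divisions. Below is a minimal C++ sketch under that assumption; the names are illustrative, not from the question:

        #include <algorithm>

        struct Vec2 { double x, y; };

        // Fraction t in [0,1] of this step at which a circle of radius r
        // first touches a wall of [xmin,xmax]x[ymin,ymax]; 1.0 if it stays
        // inside for the whole step. The caller moves by t*vel, reflects the
        // offending velocity component, then advances the remaining (1-t)
        // of the step (possibly handling a second wall the same way).
        double timeOfImpact(Vec2 pos, Vec2 vel, double r,
                            double xmin, double xmax,
                            double ymin, double ymax)
        {
            double t = 1.0;
            if (vel.x < 0.0) t = std::min(t, (xmin + r - pos.x) / vel.x);
            if (vel.x > 0.0) t = std::min(t, (xmax - r - pos.x) / vel.x);
            if (vel.y < 0.0) t = std::min(t, (ymin + r - pos.y) / vel.y);
            if (vel.y > 0.0) t = std::min(t, (ymax - r - pos.y) / vel.y);
            return std::max(0.0, t);
        }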


  • Dynamic loading a class in java with a different package name

    - by C. Ross
    Is it possible to load a class in Java and 'fake' the package name/canonical name of a class? I tried doing this the obvious way, but I get a "class name doesn't match" message in a ClassDefNotFoundException. The reason I'm doing this is that I'm trying to load an API that was written in the default package, so that I can use it directly without using reflection. The code will compile against the class in a folder structure representing the package, with a package-name import, i.e.: ./com/DefaultPackageClass.class // ... import com.DefaultPackageClass; import java.util.Vector; // ... My current code is as follows: public Class loadClass(String name) throws ClassNotFoundException { if(!CLASS_NAME.equals(name)) return super.loadClass(name); try { URL myUrl = new URL(fileUrl); URLConnection connection = myUrl.openConnection(); InputStream input = connection.getInputStream(); ByteArrayOutputStream buffer = new ByteArrayOutputStream(); int data = input.read(); while(data != -1){ buffer.write(data); data = input.read(); } input.close(); byte[] classData = buffer.toByteArray(); return defineClass(CLASS_NAME, classData, 0, classData.length); } catch (MalformedURLException e) { throw new UndeclaredThrowableException(e); } catch (IOException e) { throw new UndeclaredThrowableException(e); } }


  • C++: Reference and Pointer question (example regarding OpenGL)

    - by Jay
    I would like to load textures, and then have them be used by multiple objects. Would this work? class Sprite { GLuint* mTextures; // do I need this to also be a reference? Sprite( GLuint* textures ) // do I need this to also be a reference? { mTextures = textures; } void Draw( int textureNumber ) { glBindTexture( GL_TEXTURE_2D, mTextures[ textureNumber ] ); // drawing code } }; // normally these variables would be passed in, but I did this for simplicity. const int NUMBER_OF_TEXTURES = 40; const int WHICH_TEXTURE = 10; int main() { std::vector<GLuint> the_textures; the_textures.resize( NUMBER_OF_TEXTURES ); glGenTextures( NUMBER_OF_TEXTURES, &the_textures[0] ); // texture loading code Sprite the_sprite( &the_textures[0] ); the_sprite.Draw( WHICH_TEXTURE ); } And is there a different way I should do this, even if it would work? Thanks.
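
    In outline, yes: the pointer stays valid as long as the vector is neither resized nor destroyed while any Sprite uses it. A minimal sketch of the sharing pattern (GL calls stubbed out, and GLuint replaced by its usual underlying type so the sketch compiles standalone):

        #include <cstddef>
        #include <vector>

        class Sprite {
        public:
            // Non-owning pointer to storage shared by all sprites. Holding a
            // reference to the vector itself would also work and cannot be null;
            // either way, the vector must outlive the sprites and not be resized.
            explicit Sprite(const std::vector<unsigned int>& textures)
                : mTextures(&textures) {}

            unsigned int TextureId(std::size_t i) const { return (*mTextures)[i]; }

        private:
            const std::vector<unsigned int>* mTextures;
        };

        int main() {
            std::vector<unsigned int> the_textures(40); // filled by glGenTextures in real code
            Sprite the_sprite(the_textures);            // every sprite shares one set of ids
            return the_sprite.TextureId(10) == 0 ? 0 : 1;
        }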


  • Ping broadcast on Win XP SP3

    - by PaulH
    I'm trying to ping the broadcast address 255.255.255.255 on WinXP SP3. If I use the command line, I get a host error: C:\>ping 255.255.255.255 Ping request could not find host 255.255.255.255. Please check the name and try again. If I try a C++ program using the iphlpapi, IcmpSendEcho() fails and GetLastError returns 11010 IP_REQ_TIMED_OUT. HANDLE h = ::IcmpCreateFile(); IPAddr broadcast = inet_addr( "255.255.255.255" ); BYTE payload[ 32 ] = { 0 }; IP_OPTION_INFORMATION option = { 255, 0, 0, 0, 0 }; // a buffer with room for 32 replies each containing the full payload std::vector< BYTE > replies( 32 * ( sizeof( ICMP_ECHO_REPLY ) + 32 ) ); DWORD res = ::IcmpSendEcho( h, broadcast, payload, sizeof( payload ), &option, &replies[ 0 ], replies.size(), 1000 ); ::IcmpCloseHandle( h ); I can ping the local broadcast 192.168.0.255 with no problem. What do I need to do to ping the global broadcast? Thanks, PaulH
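
    For what it's worth, the limited broadcast address is typically not reachable via ICMP echo on Windows, so no IcmpSendEcho option may make this work. A commonly suggested alternative is a UDP datagram sent with SO_BROADCAST enabled; a minimal Winsock sketch (the port number is a hypothetical placeholder):

        #include <winsock2.h>
        #pragma comment(lib, "ws2_32.lib")

        int main() {
            WSADATA wsa;
            WSAStartup(MAKEWORD(2, 2), &wsa);

            SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
            BOOL yes = TRUE;                        // opt in to broadcast sends
            setsockopt(s, SOL_SOCKET, SO_BROADCAST,
                       reinterpret_cast<char*>(&yes), sizeof(yes));

            sockaddr_in to = {};
            to.sin_family = AF_INET;
            to.sin_port = htons(12345);             // hypothetical port
            to.sin_addr.s_addr = INADDR_BROADCAST;  // 255.255.255.255

            const char payload[] = "ping";
            sendto(s, payload, sizeof(payload), 0,
                   reinterpret_cast<sockaddr*>(&to), sizeof(to));

            closesocket(s);
            WSACleanup();
            return 0;
        }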


  • Variable length argument list - How to understand we retrieved the last argument?

    - by hkBattousai
    I have a Polynomial class which holds the coefficients of a given polynomial. One of its overloaded constructors is supposed to receive these coefficients via a variable argument list. template <class T> Polynomial<T>::Polynomial(T FirstCoefficient, ...) { va_list ArgumentPointer; va_start(ArgumentPointer, FirstCoefficient); T NextCoefficient = FirstCoefficient; std::vector<T> Coefficients; while (true) { Coefficients.push_back(NextCoefficient); NextCoefficient = va_arg(ArgumentPointer, T); if (/* did we retrieve all the arguments */) // How do I implement this? { break; } } m_Coefficients = Coefficients; } I know that we usually pass an extra parameter which tells the recipient method the total number of parameters, or we pass a sentinel ending parameter. But in order to keep things short and clean, I'd prefer not to pass an extra parameter. Is there any way of doing this without modifying the method signature in the example?
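
    For the record, va_arg has no way to tell that the last argument has been read; the callee must be told via a count, a sentinel value, or a container. A minimal C++11 alternative that keeps the call site short without adding a count parameter is std::initializer_list; a sketch, not the poster's exact class:

        #include <cstddef>
        #include <initializer_list>
        #include <vector>

        template <class T>
        class Polynomial {
        public:
            // The list carries its own size, so no sentinel or count is needed.
            Polynomial(std::initializer_list<T> coefficients)
                : m_Coefficients(coefficients) {}

            std::size_t degree() const { return m_Coefficients.size() - 1; }

        private:
            std::vector<T> m_Coefficients;
        };

        int main() {
            Polynomial<double> p{1.0, -2.5, 0.0, 4.0}; // all coefficients in one list
            return p.degree() == 3 ? 0 : 1;
        }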


  • Template trick to optimize out allocations

    - by anon
    I have: struct DoubleVec { std::vector<double> data; }; DoubleVec operator+(const DoubleVec& lhs, const DoubleVec& rhs) { DoubleVec ans; ans.data.resize(lhs.data.size()); for(int i = 0; i < lhs.data.size(); ++i) { ans.data[i] = lhs.data[i] + rhs.data[i]; // assume lhs.size() == rhs.size() } return ans; } DoubleVec someFunc(DoubleVec a, DoubleVec b, DoubleVec c, DoubleVec d) { return a + b + c + d; } Now, in the above, "a + b + c + d" will cause the creation of 3 temporary DoubleVecs -- is there a way to optimize this away with some type of template magic, i.e. to optimize it down to something equivalent to: DoubleVec ans; ans.data.resize(a.data.size()); for(int i = 0; i < ans.data.size(); i++) ans.data[i] = a.data[i] + b.data[i] + c.data[i] + d.data[i]; You can assume all DoubleVecs have the same number of elements. The high-level idea is to do some type of templated magic on "+" which "delays the computation" until the =, at which point it looks into itself, goes hmm ... I'm just adding these numbers, and synthesizes a[i] + b[i] + c[i] + d[i] ... instead of all the temporaries. Thanks!
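
    What's being described is the classic "expression templates" technique: operator+ returns a lightweight node that records its operands, and a templated operator= runs the fused loop once. A bare-bones sketch; a production version would constrain operator+ so it only matches vector expressions (e.g. via a CRTP base class):

        #include <cstddef>
        #include <vector>

        template <class L, class R>
        struct AddExpr {                 // lazy node: holds references, allocates nothing
            const L& l; const R& r;
            double operator[](std::size_t i) const { return l[i] + r[i]; }
            std::size_t size() const { return l.size(); }
        };

        struct DoubleVec {
            std::vector<double> data;

            double operator[](std::size_t i) const { return data[i]; }
            std::size_t size() const { return data.size(); }

            template <class E>
            DoubleVec& operator=(const E& e) {   // the single fused loop
                data.resize(e.size());
                for (std::size_t i = 0; i < e.size(); ++i) data[i] = e[i];
                return *this;
            }
        };

        // Deliberately over-broad for brevity; a real version would accept
        // only DoubleVec/AddExpr operands.
        template <class L, class R>
        AddExpr<L, R> operator+(const L& l, const R& r) { return AddExpr<L, R>{l, r}; }

        int main() {
            DoubleVec a{{1, 2}}, b{{3, 4}}, c{{5, 6}}, d{{7, 8}}, ans;
            ans = a + b + c + d;    // one pass, no DoubleVec temporaries
            return ans[0] == 16.0 ? 0 : 1;
        }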


  • How do I overload the () operator with two parameters, like (3,5)?

    - by hkBattousai
    I have a mathematical matrix class. It contains a member function which is used to access any element of the matrix. template <class T> class Matrix { public: // ... void SetElement(T dbElement, uint64_t unRow, uint64_t unCol); // ... }; template <class T> void Matrix<T>::SetElement(T Element, uint64_t unRow, uint64_t unCol) { try { // "TheMatrix" is defined as "std::vector<T> TheMatrix" TheMatrix.at(m_unColSize * unRow + unCol) = Element; } catch(std::out_of_range & e) { // Do error handling here } } I'm using this method in my code like this: // create a matrix with 2 rows and 3 columns whose elements are double Matrix<double> matrix(2, 3); // change the value of the element at 1st row and 2nd column to 6.78 matrix.SetElement(6.78, 1, 2); This works well, but I want to use operator overloading to simplify things, like below: Matrix<double> matrix(2, 3); matrix(1, 2) = 6.78; // HOW DO I DO THIS?
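
    The usual answer is to overload operator() with two parameters and return a reference, so assignment writes straight through it; a const overload covers read-only access. A minimal sketch reusing the question's member names:

        #include <cstdint>
        #include <vector>

        template <class T>
        class Matrix {
        public:
            Matrix(std::uint64_t rows, std::uint64_t cols)
                : m_unColSize(cols), TheMatrix(rows * cols) {}

            // Returning T& lets "matrix(1, 2) = 6.78" assign in place.
            T& operator()(std::uint64_t unRow, std::uint64_t unCol) {
                return TheMatrix.at(m_unColSize * unRow + unCol); // throws std::out_of_range
            }
            const T& operator()(std::uint64_t unRow, std::uint64_t unCol) const {
                return TheMatrix.at(m_unColSize * unRow + unCol);
            }

        private:
            std::uint64_t m_unColSize;
            std::vector<T> TheMatrix;
        };

        int main() {
            Matrix<double> matrix(2, 3);
            matrix(1, 2) = 6.78;   // writes via the non-const overload
            return matrix(1, 2) > 0 ? 0 : 1;
        }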


  • Looking for the Go equivalent of scanf.

    - by Stephen Hsu
    I'm looking for the Go equivalent of scanf(). I tried with the following code: package main import ( "scanner" "os" "fmt" ) func main() { var s scanner.Scanner s.Init(os.Stdin) s.Mode = scanner.ScanInts tok := s.Scan() for tok != scanner.EOF { fmt.Printf("%d ", tok) tok = s.Scan() } fmt.Println() } I run it with input from a text file with a line of integers. But it always outputs -3 -3 ... And how do I scan a line composed of a string and some integers? By changing the mode whenever I encounter a new data type? The package documentation says: "Package scanner: A general-purpose scanner for UTF-8 encoded text." But it seems that the scanner is not for general use. Updated code: func main() { n := scanf() fmt.Println(n) fmt.Println(len(n)) } func scanf() []int { nums := new(vector.IntVector) reader := bufio.NewReader(os.Stdin) str, err := reader.ReadString('\n') for err != os.EOF { fields := strings.Fields(str) for _, f := range fields { i, _ := strconv.Atoi(f) nums.Push(i) } str, err = reader.ReadString('\n') } r := make([]int, nums.Len()) for i := 0; i < nums.Len(); i++ { r[i] = nums.At(i) } return r }


  • How to create C++ istringstream from a char array with null(0) characters?

    - by Morpheus
    I have a char array which contains null characters at random locations. I tried to create an istringstream using this array (encodedData_arr) as below. I use this istringstream to insert binary data (image data of an IplImage) into a MySQL database blob field (using MySQL Connector/C++'s setBlob(istream *is)), but it only stores the characters up to the first null character. Is there a way to create an istringstream from a char array with null characters? unsigned char *encodedData_arr = new unsigned char[data_vector_uchar->size()]; // Assign the data of vector<unsigned char> to the encodedData_arr for (int i = 0; i < vec_size; ++i) { cout<< data_vector_uchar->at(i)<< " : "<< encodedData_arr[i]<<endl; } // Here the content of the encodedData_arr is same as the data_vector_uchar // So char array is initializing fine. istream *is = new istringstream((char*)encodedData_arr, istringstream::in || istringstream::binary); prepStmt_insertImage->setBlob(1, is); // Here only part of the data is stored in the database blob field (up to the first null character)
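
    The likely culprit: passing a char* to istringstream goes through the std::string(const char*) constructor, which stops at the first '\0' (and the mode flags should be combined with | rather than ||). Constructing the string with an explicit length keeps every byte; a small self-contained sketch:

        #include <cassert>
        #include <iterator>
        #include <sstream>
        #include <string>

        int main() {
            const unsigned char raw[] = { 'a', 0, 'b', 0, 'c' };
            const std::size_t len = sizeof(raw);

            // std::string(ptr, len) preserves embedded nulls.
            std::istringstream is(
                std::string(reinterpret_cast<const char*>(raw), len),
                std::istringstream::in | std::istringstream::binary); // bitwise |, not ||

            std::string contents((std::istreambuf_iterator<char>(is)),
                                 std::istreambuf_iterator<char>());
            assert(contents.size() == len);  // all 5 bytes survive, nulls included
            return 0;
        }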


  • no instance of overloaded function getline c++

    - by Dave
    I'm a bit confused as to what I have wrong in my code that is causing this error. I have a function which fills a map with game settings, but the compiler doesn't like my getline. Also, I should mention these are the files I have included for it: #include <fstream> #include <cctype> #include <map> #include <iostream> #include <string> #include <algorithm> #include <vector> using namespace std; This is what I have: std::map<string, string> loadSettings(std::string file){ ifstream file(file); string line; std::map<string, string> config; while(std::getline(file, line)) { int pos = line.find('='); if(pos != string::npos) { string key = line.substr(0, pos); string value = line.substr(pos + 1); config[trim(key)] = trim(value); } } return (config); } The function is called like this from my main.cpp: //load settings for game std::map<string, string> config = loadSettings("settings.txt"); //load theme for game std::map<string, string> theme = loadSettings("theme.txt"); Where did I go wrong? Please help! The error: settings.h(61): error C2784: 'std::basic_istream<_Elem,_Traits> &std::getline(std::basic_istream<_Elem,_Traits> &&,std::basic_string<_Elem,_Traits,_Alloc> &)' : could not deduce template argument for 'std::basic_istream<_Elem,_Traits> &&' from 'std::string'
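
    The error message is the giveaway: getline is receiving a std::string where a stream is expected, almost certainly because the parameter and the stream share the name "file", so the declaration ifstream file(file) shadows the parameter. A hedged sketch with the stream renamed (trim() omitted, since it is the poster's own helper), opening via c_str() so it also builds pre-C++11:

        #include <fstream>
        #include <map>
        #include <string>

        std::map<std::string, std::string> loadSettings(const std::string& path) {
            std::ifstream in(path.c_str());        // distinct name, no shadowing
            std::map<std::string, std::string> config;
            std::string line;
            while (std::getline(in, line)) {       // now deduces basic_istream correctly
                std::string::size_type pos = line.find('=');
                if (pos != std::string::npos)
                    config[line.substr(0, pos)] = line.substr(pos + 1);
            }
            return config;
        }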


  • Rendering maps from raw SVG data in Java

    - by Lunikon
    In an application of mine I have to display locations and great-circle paths on a map which is rendered to PNG and then displayed on the web. For this I simply use a world map (NASA's Blue Marble, in fact) scaled to various "zoom levels" as the base image, and only display the part of it matching the final image size and fitting all items to be displayed. Straightforward so far. Now I came across Wikipedia's awesome blank SVG maps which contain all the country codes for easy reference, and I was wondering whether it was possible to use those to have more customized colors, to highlight countries, etc. So I did a bit of googling and was looking for Java libraries which would enable me to: load the blank SVG map into memory; allow for easy reference/selection of certain paths; do manipulations of coloring, stroke widths, etc.; and render to a buffered image as the background for the great-circle paths/nodes. What I came across quite often was Batik, but it looks like a really heavy framework and I'm not quite sure whether it is what I'm looking for. I have recently played around with Raphaël a bit and I like the way it handles working with vector graphics in code. If the Java framework for my purpose would feature a similar interface, that would be a nice-to-have. Any recommendations on what toolset would be appropriate for my purposes?


  • How to Refresh / Reload a KML layer in OpenLayers. Dynamic KML Layer.

    - by Ozaki
    TLDR: See my answer below on how to refresh the layer. So far I have tried a refresh function as follows: function RefreshKMLData(layer) { layer.loaded = false; layer.setVisibility(true); layer.redraw({ force: true }); } Then I set an interval for the function: window.setInterval(RefreshKMLData, 5000, KMLLAYER); The layer itself: var KMLLAYER = new OpenLayers.Layer.Vector("MYKMLLAYER", { projection: new OpenLayers.Projection("EPSG:4326"), strategies: [new OpenLayers.Strategy.Fixed()], protocol: new OpenLayers.Protocol.HTTP({ url: MYKMLURL, format: new OpenLayers.Format.KML({ extractStyles: true, extractAttributes: true }) }) }); The URL for KMLLAYER, with Math.random so it doesn't cache: var MYKMLURL = 'http://' + host + '/data?_salt=' + Math.random(); I would have thought that this would refresh the layer: setting loaded to false unloads it, setting visibility to true reloads it, and with the Math.random salt it shouldn't cache. So has anyone done this before, or does anyone know how I can get this to work?


  • How does ifstream eof() work?

    - by Chan
    Hello everyone, #include <vector> #include <list> #include <map> #include <set> #include <deque> #include <stack> #include <bitset> #include <algorithm> #include <functional> #include <numeric> #include <utility> #include <sstream> #include <iostream> #include <iomanip> #include <cstdio> #include <cmath> #include <cstdlib> #include <ctime> #include <cctype> #include <fstream> using namespace std; int main() { fstream inf( "ex.txt", ios::in ); while( !inf.eof() ) { std::cout << inf.get() << "\n"; } inf.close(); inf.clear(); inf.open( "ex.txt", ios::in ); char c; while( inf >> c ) { std::cout << c << "\n"; } return 0; } I'm really confused about the eof() function. Suppose my ex.txt's content is: abc Why does the first loop always read an extra character and show -1 at the end when reading using eof()? The inf >> c loop gave the correct output, which was 'abc'. Can anyone help me explain this? Best regards, Chan Nguyen
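
    In short: eof() only becomes true after a read has already failed, so the !inf.eof() loop runs one extra iteration and prints the -1 that get() returns at end of file. Testing the read operation itself avoids the extra pass; a small sketch:

        #include <fstream>
        #include <iostream>

        int main() {
            std::ifstream inf("ex.txt");
            int ch;
            // get() returns an int so it can signal EOF out of band;
            // checking its result means only successful reads are printed.
            while ((ch = inf.get()) != std::ifstream::traits_type::eof()) {
                std::cout << static_cast<char>(ch) << "\n";
            }
            return 0;
        }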


  • How to write an R function that evaluates an expression within a data-frame

    - by Prasad Chalasani
    Puzzle for the R cognoscenti: Say we have a data frame: df <- data.frame( a = 1:5, b = 1:5 ) I know we can do things like with(df, a) to get a vector of results. But how do I write a function that takes an expression (such as a or a > 3) and does the same thing inside? I.e., I want to write a function fn that takes a data frame and an expression as arguments and returns the result of evaluating the expression "within" the data frame as an environment. Never mind that this sounds contrived (I could just use with as above); this is just a simplified version of a more complex function I am writing. I tried several variants (using eval, with, envir, substitute, local, etc.) but none of them work. For example, if I define fn like so: fn <- function(dat, expr) { eval(expr, envir = dat) } I get this error: > fn( df, a ) Error in eval(expr, envir = dat) : object 'a' not found Clearly I am missing something subtle about environments and evaluation. Is there a way to define such a function?


  • Solving quadratic programming using R

    - by user702846
    I would like to solve the following quadratic programming problem using the ipop function from kernlab: min 0.5*x'*H*x + f'*x subject to: A*x <= b, Aeq*x = beq, LB <= x <= UB, where in our example H is a 3x3 matrix, f is 3x1, A is 2x3, b is 2x1, and LB and UB are both 3x1. Edit 1: My R code is: library(kernlab) H <- rbind(c(1,0,0),c(0,1,0),c(0,0,1)) f = rbind(0,0,0) A = rbind(c(1,1,1), c(-1,-1,-1)) b = rbind(4.26, -1.73) LB = rbind(0,0,0) UB = rbind(100,100,100) > ipop(f,H,A,b,LB,UB,0) Error in crossprod(r, q) : non-conformable arguments In MATLAB it is something like this: H = eye(3); f = [0,0,0]; nsamples=3; eps = (sqrt(nsamples)-1)/sqrt(nsamples); A=ones(1,nsamples); A(2,:)=-ones(1,nsamples); b=[nsamples*(eps+1); nsamples*(eps-1)]; Aeq = []; beq = []; LB = zeros(nsamples,1); UB = ones(nsamples,1).*1000; [beta,FVAL,EXITFLAG] = quadprog(H,f,A,b,Aeq,beq,LB,UB); and the answer is a 3x1 vector equal to [0.57,0.57,0.57]. However, when I try it in R using the ipop function from the kernlab library, ipop(f,H,A,b,LB,UB,0), I am facing: Error in crossprod(r, q) : non-conformable arguments. I appreciate any comments.


  • Counting in R data.table

    - by Simon Z.
    I have the following data.table: set.seed(1) DT <- data.table(VAL = sample(c(1, 2, 3), 10, replace = TRUE)) VAL 1: 1 2: 2 3: 2 4: 3 5: 1 6: 3 7: 3 8: 2 9: 2 10: 1 Now I want to perform two tasks: Count the occurrences of numbers in VAL. Count within all rows with the same value VAL (first, second, third occurrence). At the end I want the result VAL COUNT IDX 1: 1 3 1 2: 2 4 1 3: 2 4 2 4: 3 3 1 5: 1 3 2 6: 3 3 2 7: 3 3 3 8: 2 4 3 9: 2 4 4 10: 1 3 3 where COUNT defines task 1 and IDX task 2. I tried to work with which and length using .I: DT[, list(COUNT = length(VAL == VAL[.I]), IDX = which(which(VAL == VAL[.I]) == .I))] but this does not work, as .I refers to a vector with the index, so I guess one must use .I[]. Though inside .I[] I again face the problem that I do not have the row index, and I know (from reading the data.table FAQ and following the posts here) that looping through rows should be avoided if possible. So, what's the data.table way?


  • Delphi: EInvalidOp in neural network class (TD-lambda)

    - by user89818
    I have the following draft for a neural network class. This neural network should learn with TD-lambda. It is started by calling the getRating() function. But unfortunately, there is an EInvalidOp (invalid floating point operation) error after about 1000 iterations in the following lines: neuronsHidden[j] := neuronsHidden[j]+neuronsInput[t][i]*weightsInput[i][j]; // input -> hidden weightsHidden[j][k] := weightsHidden[j][k]+LEARNING_RATE_HIDDEN*tdError[k]*eligibilityTraceOutput[j][k]; // adjust hidden->output weights according to TD-lambda Why does this error occur? I can't find the mistake in my code :( Can you help me? Thank you very much in advance!
    unit uNeuronalesNetz; interface uses Windows, Messages, SysUtils, Variants, Classes, Graphics, Controls, Forms, Dialogs, ExtCtrls, StdCtrls, Grids, Menus, Math; const NEURONS_INPUT = 43; // number of neurons in the input layer NEURONS_HIDDEN = 60; // number of neurons in the hidden layer NEURONS_OUTPUT = 1; // number of neurons in the output layer NEURONS_TOTAL = NEURONS_INPUT+NEURONS_HIDDEN+NEURONS_OUTPUT; // total number of neurons in the network MAX_TIMESTEPS = 42; // maximum number of timesteps possible (after 42 moves: board is full) LEARNING_RATE_INPUT = 0.25; // in ideal case: decrease gradually in course of training LEARNING_RATE_HIDDEN = 0.15; // in ideal case: decrease gradually in course of training GAMMA = 0.9; LAMBDA = 0.7; // decay parameter for eligibility traces type TFeatureVector = Array[1..43] of SmallInt; // definition of the array type TFeatureVector TArtificialNeuralNetwork = class // definition of the class TArtificialNeuralNetwork private // GENERAL SETTINGS START learningMode: Boolean; // does the network learn and change its weights? // GENERAL SETTINGS END // NETWORK CONFIGURATION START neuronsInput: Array[1..MAX_TIMESTEPS] of Array[1..NEURONS_INPUT] of Extended; // array of all input neurons (their values) for every timestep neuronsHidden: Array[1..NEURONS_HIDDEN] of Extended; // array of all hidden neurons (their values) neuronsOutput: Array[1..NEURONS_OUTPUT] of Extended; // array of output neurons (their values) weightsInput: Array[1..NEURONS_INPUT] of Array[1..NEURONS_HIDDEN] of Extended; // array of weights: input->hidden weightsHidden: Array[1..NEURONS_HIDDEN] of Array[1..NEURONS_OUTPUT] of Extended; // array of weights: hidden->output // NETWORK CONFIGURATION END // LEARNING SETTINGS START outputBefore: Array[1..NEURONS_OUTPUT] of Extended; // the network's output value in the last timestep (the one before) eligibilityTraceHidden: Array[1..NEURONS_INPUT] of Array[1..NEURONS_HIDDEN] of Array[1..NEURONS_OUTPUT] of Extended; // array of eligibility traces: hidden layer eligibilityTraceOutput: Array[1..NEURONS_TOTAL] of Array[1..NEURONS_TOTAL] of Extended; // array of eligibility traces: output layer reward: Array[1..MAX_TIMESTEPS] of Array[1..NEURONS_OUTPUT] of Extended; // the reward value for all output neurons in every timestep tdError: Array[1..NEURONS_OUTPUT] of Extended; // the network's error value for every single output neuron t: Byte; // current timestep cyclesTrained: Integer; // number of cycles trained so far (learning rates could be decreased accordingly) last50errors: Array[1..50] of Extended; // LEARNING SETTINGS END public constructor Create; // create the network object and do the initialization procedure UpdateEligibilityTraces; // update the eligibility traces for the hidden and output layer procedure tdLearning; // learning algorithm: adjust the network's weights procedure ForwardPropagation; // propagate the input values through the network to the output layer function getRating(state: TFeatureVector; explorative: Boolean): Extended; // get the rating for a given state (feature vector) function HyperbolicTangent(x: Extended): Extended; // calculate the hyperbolic tangent [-1;1] procedure StartNewCycle; // start a new cycle with everything set to default except for the weights procedure setLearningMode(activated: Boolean=TRUE); // switch the learning mode on/off procedure setInputs(state: TFeatureVector); // transfer the given feature vector to the input layer (set input neurons' values) procedure setReward(currentReward: SmallInt); // set the reward for the current timestep (with learning then or without) procedure nextTimeStep; // increase timestep t function getCyclesTrained(): Integer; // get the number of cycles trained so far procedure Visualize(imgHidden: Pointer); // visualize the neural network's hidden layer end; implementation procedure TArtificialNeuralNetwork.UpdateEligibilityTraces; var i, j, k: Integer; begin // how worthy is a weight to be adjusted? for j := 1 to NEURONS_HIDDEN do begin for k := 1 to NEURONS_OUTPUT do begin eligibilityTraceOutput[j][k] := LAMBDA*eligibilityTraceOutput[j][k]+(neuronsOutput[k]*(1-neuronsOutput[k]))*neuronsHidden[j]; for i := 1 to NEURONS_INPUT do begin eligibilityTraceHidden[i][j][k] := LAMBDA*eligibilityTraceHidden[i][j][k]+(neuronsOutput[k]*(1-neuronsOutput[k]))*weightsHidden[j][k]*neuronsHidden[j]*(1-neuronsHidden[j])*neuronsInput[t][i]; end; end; end; end; procedure TArtificialNeuralNetwork.setReward; VAR i: Integer; begin for i := 1 to NEURONS_OUTPUT do begin // +1 = player A wins // 0 = draw // -1 = player B wins reward[t][i] := currentReward; end; end; procedure TArtificialNeuralNetwork.tdLearning; var i, j, k: Integer; begin if learningMode then begin for k := 1 to NEURONS_OUTPUT do begin if reward[t][k] = 0 then begin tdError[k] := GAMMA*neuronsOutput[k]-outputBefore[k]; // network's error value when reward is 0 end else begin tdError[k] := reward[t][k]-outputBefore[k]; // network's error value in the final state (reward received) end; for j := 1 to NEURONS_HIDDEN do begin weightsHidden[j][k] := weightsHidden[j][k]+LEARNING_RATE_HIDDEN*tdError[k]*eligibilityTraceOutput[j][k]; // adjust hidden->output weights according to TD-lambda for i := 1 to NEURONS_INPUT do begin weightsInput[i][j] := weightsInput[i][j]+LEARNING_RATE_INPUT*tdError[k]*eligibilityTraceHidden[i][j][k]; // adjust input->hidden weights according to TD-lambda end; end; end; end; end; procedure TArtificialNeuralNetwork.ForwardPropagation; var i, j, k: Integer; begin for j := 1 to NEURONS_HIDDEN do begin neuronsHidden[j] := 0; for i := 1 to NEURONS_INPUT do begin neuronsHidden[j] := neuronsHidden[j]+neuronsInput[t][i]*weightsInput[i][j]; // input -> hidden end; neuronsHidden[j] := HyperbolicTangent(neuronsHidden[j]); // activation of hidden neuron j end; for k := 1 to NEURONS_OUTPUT do begin neuronsOutput[k] := 0; for j := 1 to NEURONS_HIDDEN do begin neuronsOutput[k] := neuronsOutput[k]+neuronsHidden[j]*weightsHidden[j][k]; // hidden -> output end; neuronsOutput[k] := HyperbolicTangent(neuronsOutput[k]); // activation of output neuron k end; end; procedure TArtificialNeuralNetwork.setLearningMode; begin learningMode := activated; end; constructor TArtificialNeuralNetwork.Create; var i, j, k: Integer; begin inherited Create; Randomize; // initialize random numbers generator learningMode := TRUE; cyclesTrained := -2; // only set to -2 because it will be increased twice in the beginning StartNewCycle; for j := 1 to NEURONS_HIDDEN do begin for k := 1 to NEURONS_OUTPUT do begin weightsHidden[j][k] := abs(Random-0.5); // initialize weights: 0 <= random < 0.5 end; for i := 1 to NEURONS_INPUT do begin weightsInput[i][j] := abs(Random-0.5); // initialize weights: 0 <= random < 0.5 end; end; for i := 1 to 50 do begin last50errors[i] := 0; end; end; procedure TArtificialNeuralNetwork.nextTimeStep; begin t := t+1; end; procedure TArtificialNeuralNetwork.StartNewCycle; var i, j, k, m: Integer; begin t := 1; // start in timestep 1 cyclesTrained := cyclesTrained+1; // increase the number of cycles trained so far for j := 1 to NEURONS_HIDDEN do begin neuronsHidden[j] := 0; for k := 1 to NEURONS_OUTPUT do begin eligibilityTraceOutput[j][k] := 0; outputBefore[k] := 0; neuronsOutput[k] := 0; for m := 1 to MAX_TIMESTEPS do begin reward[m][k] := 0; end; end; for i := 1 to NEURONS_INPUT do begin for k := 1 to NEURONS_OUTPUT do begin eligibilityTraceHidden[i][j][k] := 0; end; end; end; end; function TArtificialNeuralNetwork.getCyclesTrained; begin result := cyclesTrained; end; procedure TArtificialNeuralNetwork.setInputs; var k: Integer; begin for k := 1 to NEURONS_INPUT do begin neuronsInput[t][k] := state[k]; end; end; function TArtificialNeuralNetwork.getRating; begin setInputs(state); ForwardPropagation; result := neuronsOutput[1]; if not explorative then begin tdLearning; // adjust the weights according to TD-lambda ForwardPropagation; // calculate the network's output again outputBefore[1] := neuronsOutput[1]; // set outputBefore which will then be used in the next timestep UpdateEligibilityTraces; // update the eligibility traces for the next timestep nextTimeStep; // go to the next timestep end; end; function TArtificialNeuralNetwork.HyperbolicTangent; begin if x > 5500 then // prevent overflow result := 1 else result := (Exp(2*x)-1)/(Exp(2*x)+1); end; end.


  • Best (Java) book for understanding 'under the bonnet' for programming?

    - by Ben
    What would you say is the best book to buy to understand exactly how programming works under the hood, in order to increase performance? I've coded in assembly at university, I studied computer architecture, and I obviously did high-level programming, but what I really don't understand is things like:
    - what is happening when I perform a cast
    - what's the difference in performance if I declare something global as opposed to local?
    - How does the memory layout for an ArrayList compare with a Vector or LinkedList?
    - What's the overhead with pointers?
    - Are locks more efficient than using synchronized?
    - Would creating my own array using int[] be faster than using ArrayList?
    - Advantages/disadvantages of declaring a variable volatile
    I have got a copy of Java Performance Tuning but it doesn't go down very low, and it contains rather obvious things like suggesting a HashMap instead of an ArrayList as you can map the keys to memory addresses. I want something a bit more Computer Science-y, linking the programming language to what happens with the assembler/hardware. The reason I'm asking is that I have an interview coming up for a job in high-frequency trading and everything has to be as efficient as possible, yet I can't remember every single possible efficiency saving, so I'd just like to learn the fundamentals. Thanks in advance


  • Using boost::iterator_adaptor

    - by Neil G
    I wrote a sparse vector class (see #1, #2.) I would like to provide two kinds of iterators: The first set, the regular iterators, can point at any element, whether set or unset. If they are read from, they return either the set value or value_type(); if they are written to, they create the element and return the lvalue reference. Thus, they are: Random Access Traversal Iterator and Readable and Writable Iterator. The second set, the sparse iterators, iterate over only the set elements. Since they don't need to lazily create elements that are written to, they are: Random Access Traversal Iterator and Readable and Writable and Lvalue Iterator. I also need const versions of both, which are not writable. I can fill in the blanks, but I'm not sure how to start with boost::iterator_adaptor. Here's what I have so far: class iterator : public boost::iterator_adaptor< iterator // Derived , value_type* // Base , boost::use_default // Value , boost::?????? // CategoryOrTraversal > class sparse_iterator : public boost::iterator_adaptor< iterator // Derived , value_type* // Base , boost::use_default // Value , boost::random_access_traversal_tag? // CategoryOrTraversal >
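
    One way to look at it: both iterators can use boost::random_access_traversal_tag, but the lazy, element-creating iterator cannot be an lvalue iterator, so its Reference parameter must be a proxy class rather than double&. A hedged sketch follows; it swaps in boost::iterator_facade, since the lazy iterator has no single underlying pointer for iterator_adaptor to adapt, and all names besides the boost ones are illustrative:

        #include <boost/iterator/iterator_facade.hpp>
        #include <cstddef>
        #include <map>

        // Toy sparse vector: only set elements occupy storage.
        struct SparseVec {
            std::map<std::size_t, double> elems;
        };

        // Proxy returned by the lazy iterator: reading yields the stored
        // value or 0.0; assigning creates the element on demand.
        struct ElementProxy {
            SparseVec* v; std::size_t i;
            operator double() const {
                std::map<std::size_t, double>::const_iterator it = v->elems.find(i);
                return it == v->elems.end() ? 0.0 : it->second;
            }
            ElementProxy& operator=(double x) { v->elems[i] = x; return *this; }
        };

        class lazy_iterator : public boost::iterator_facade<
            lazy_iterator, double,
            boost::random_access_traversal_tag, // the CategoryOrTraversal blank
            ElementProxy> {                     // proxy Reference => not an lvalue iterator
        public:
            lazy_iterator(SparseVec* v, std::size_t i) : v_(v), i_(i) {}
        private:
            friend class boost::iterator_core_access;
            ElementProxy dereference() const { ElementProxy p = {v_, i_}; return p; }
            bool equal(const lazy_iterator& o) const { return i_ == o.i_; } // sketch: same container assumed
            void increment() { ++i_; }
            void decrement() { --i_; }
            void advance(std::ptrdiff_t n) { i_ += static_cast<std::size_t>(n); }
            std::ptrdiff_t distance_to(const lazy_iterator& o) const
                { return static_cast<std::ptrdiff_t>(o.i_) - static_cast<std::ptrdiff_t>(i_); }
            SparseVec* v_; std::size_t i_;
        };

        int main() {
            SparseVec v;
            lazy_iterator it(&v, 3);
            *it = 42.0;                        // writing through the proxy creates element 3
            return double(*it) == 42.0 ? 0 : 1;
        }

    The sparse iterator over set elements, by contrast, can wrap the underlying map iterator directly (that is where iterator_adaptor fits naturally) and keep real lvalue references.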


  • C++: inheritance problem

    - by Helltone
    It's quite hard to explain what I'm trying to do; I'll try. Imagine a base class A which contains some variables, and a set of classes deriving from A which all implement some method bool test() that operates on the variables inherited from A. class A { protected: int somevar; // ... }; class B : public A { public: bool test() { return (somevar == 42); } }; class C : public A { public: bool test() { return (somevar > 23); } }; // ... more classes deriving from A Now I have an instance of class A and I have set the value of somevar. int main(int, char* []) { A a; a.somevar = 42; Now, I need some kind of container that allows me to iterate over the elements i of this container, calling i::test() in the context of a... that is: std::vector<...> vec; // push B and C into vec, this is pseudo-code vec.push_back(&B); vec.push_back(&C); bool ret = true; for(i = vec.begin(); i != vec.end(); ++i) { // call B::test(), C::test(), setting *this to a ret &= ( a .* (&(*i)::test) )(); } return ret; } How can I do this? I've tried a few things: forcing a cast from B::* to A::*, adapting a pointer to call a method of a type on an object of a different type (works, but seems to be bad); using std::bind + the solution above, an ugly hack; changing the signature of bool test() so that it takes an argument of type const A& instead of inheriting from A; I don't really like this solution because somevar must then be public.
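
    For what it's worth, a minimal sketch of the third option with the public-member objection addressed: the checks take const A& and read somevar through a getter (they could equally be declared friends of A), so no pointer-to-member casting or std::bind tricks are needed. Names are illustrative:

        #include <cstddef>
        #include <vector>

        class A {
        public:
            explicit A(int v) : somevar(v) {}
            int getSomevar() const { return somevar; }  // keeps somevar non-public
        protected:
            int somevar;
        };

        struct Test { virtual ~Test() {} virtual bool run(const A&) const = 0; };
        struct B : Test { bool run(const A& a) const { return a.getSomevar() == 42; } };
        struct C : Test { bool run(const A& a) const { return a.getSomevar() > 23; } };

        int main() {
            A a(42);

            std::vector<const Test*> vec;  // the checks to evaluate against a
            B b; C c;
            vec.push_back(&b);
            vec.push_back(&c);

            bool ret = true;
            for (std::size_t i = 0; i < vec.size(); ++i)
                ret &= vec[i]->run(a);     // each test reads a's state via the getter
            return ret ? 0 : 1;
        }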


  • Get type of the parameter from list of objects, templates, C++

    - by CrocodileDundee
    This question follows on from my previous question Get type of the parameter, templates, C++. There is the following data structure: Object1.h template <class T> class Object1 { private: T a1; T a2; public: T getA1() {return a1;} typedef T type; }; Object2.h template <class T> class Object2: public Object1 <T> { private: T b1; T b2; public: T getB1() {return b1;} } List.h template <typename Item> struct TList { typedef std::vector <Item> Type; }; template <typename Item> class List { private: typename TList <Item>::Type items; }; Is there any way to get type T of an object from the list of objects (i.e. Object is not a direct parameter of the function but a template parameter)? template <class Object> void process (List <Object> *objects) { typename Object::type a1 = objects[0].getA1(); // g++ error: 'Object1<double>*' is not a class, struct, or union type } But this construction works (i.e. Object represents a parameter of the function): template <class Object> void process (Object *o1) { typename Object::type a1 = o1->getA1(); // OK }
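
    The error suggests the list actually holds pointers, so Object deduces to Object1<double>* and a pointer has no nested typedef. A hedged sketch of a small trait that strips the pointer before reaching for ::type (std::vector stands in for the List wrapper, and StripPointer is an illustrative name):

        #include <vector>

        template <class T>
        class Object1 {
        public:
            typedef T type;
            T getA1() const { return a1; }
        private:
            T a1;
        };

        template <class Object>
        struct StripPointer { typedef Object Type; };          // plain objects
        template <class Object>
        struct StripPointer<Object*> { typedef Object Type; }; // pointers to objects

        template <class Object>
        void process(std::vector<Object>* objects) {
            typedef typename StripPointer<Object>::Type Obj;
            typename Obj::type a1 = typename Obj::type();  // T is reachable either way
            (void)a1; (void)objects;
        }

        int main() {
            std::vector<Object1<double>*> ptrs;
            process(&ptrs);   // Object = Object1<double>*, trait strips the pointer
            std::vector<Object1<double> > vals;
            process(&vals);   // Object = Object1<double>
            return 0;
        }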


  • Where to place interactive objects in JavaScript?

    - by Chris
    I'm creating a web-based application (i.e. JavaScript with jQuery and lots of SVG) where the user interacts with "objects" on the screen (think of DIVs that can be dragged around, resized, and connected by arrows - like a vector drawing program or a graphical programming language). As each "object" contains individual information but always belongs to a "class" of elements, it's obvious that this application should be programmed using an OOP approach. But where do I store the "objects" best? Should I create a global structure ("registry") with all (JS native) objects and tell them "draw yourself on the DOM"? Or should I avoid such a structure and think of the (relevant) DOM nodes as my objects and attach the relevant data as .data() to them? The first approach is very MVC - but I guess the attachment of all the event handlers will be non-trivial. The second approach will handle the events in a trivial way and it doesn't create a duplicate structure, but I guess the usual OO stuff (like methods) will be more complex. What do you recommend? I think the answer will be JavaScript- and SVG-specific, as "usual" programming languages don't have such a highly organized output "canvas".


  • Design for fastest page download

    - by mexxican
    I have a file with millions of URLs/IPs and have to write a program to download the pages really fast. The connection rate should be at least 6000/s and the file download rate at least 2000/s, with an average file size of 15 KB. The network bandwidth is 1 Gbps. My approach so far has been: creating 600 socket threads, each having 60 sockets, and using WSAEventSelect to wait for data to read. As soon as a file download is complete, I add that memory address (of the downloaded file) to a pipeline (a simple vector) and fire another request. When the total downloaded data is more than 50 MB among all socket threads, I write all the downloaded files to disk and free the memory. So far, this approach has not been very successful, with the connection rate not shooting beyond 2900 connections/s and the downloaded data rate even less. Can somebody suggest an alternative approach which could give me better stats? Also, I am working on a Windows Server 2008 machine with 8 GB of memory. Do we need to hack the kernel so that we can use more threads and memory? Currently I can create a max of 1500 threads, and memory usage does not go beyond 2 GB [which technically should be much more, as this is a 64-bit machine]. And IOCP is out of the question, as I have no experience with it so far and have to fix this application today. Thanks, guys!


  • Triangulation & Direct linear transform

    - by srand
    Following Hartley/Zisserman's Multiple View Geometry, Algorithm 12: The optimal triangulation method (p318), I got the corresponding image points xhat1 and xhat2 (step 10). In step 11, one needs to compute the 3D point Xhat. One such method is the Direct Linear Transform (DLT), mentioned in 12.2 (p312) and 4.1 (p88). The homogeneous method (DLT), p312-313, states that it finds a solution as the unit singular vector corresponding to the smallest singular value of A, thus: A = [xhat1(1) * P1(3,:)' - P1(1,:)' ; xhat1(2) * P1(3,:)' - P1(2,:)' ; xhat2(1) * P2(3,:)' - P2(1,:)' ; xhat2(2) * P2(3,:)' - P2(2,:)' ]; [Ua Ea Va] = svd(A); Xhat = Va(:,end); plot3(Xhat(1),Xhat(2),Xhat(3), 'r.'); However, A is a 16x1 matrix, resulting in a Va that is 1x1. What am I doing wrong in getting the 3D point (and what is the fix)? For what it's worth, sample data: xhat1 = 1.0e+009 * 4.9973 -0.2024 0.0027 xhat2 = 1.0e+011 * 2.0729 2.6624 0.0098 P1 = 699.6674 0 392.1170 0 0 701.6136 304.0275 0 0 0 1.0000 0 P2 = 1.0e+003 * -0.7845 0.0508 -0.1592 1.8619 -0.1379 0.7338 0.1649 0.6825 -0.0006 0.0001 0.0008 0.0010 A = <- my computation 1.0e+011 * -0.0000 0 0.0500 0 0 -0.0000 -0.0020 0 -1.3369 0.2563 1.5634 2.0729 -1.7170 0.3292 2.0079 2.6624
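
    For reference (assuming H&Z's formulation of the homogeneous DLT), each row of A is built from a row of a camera matrix and must stay a 1x4 row vector, and the image coordinates entering those rows are inhomogeneous, so each xhat should first be divided by its third component. In the MATLAB snippet that means dropping the trailing ' from each term, which turns the 16x1 stack into the expected 4x4 system:

        $$
        A = \begin{bmatrix}
        x_1\,\mathbf{p}_1^{3T} - \mathbf{p}_1^{1T} \\
        y_1\,\mathbf{p}_1^{3T} - \mathbf{p}_1^{2T} \\
        x_2\,\mathbf{p}_2^{3T} - \mathbf{p}_2^{1T} \\
        y_2\,\mathbf{p}_2^{3T} - \mathbf{p}_2^{2T}
        \end{bmatrix}, \qquad A\,\hat{X} = 0,
        $$

    where p^{iT} denotes the i-th row of the camera matrix and (x, y) are the normalized image coordinates; Xhat is then the singular vector for the smallest singular value of the 4x4 A.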


  • Conditionally colour data points outside of confidence bands in R

    - by D W
    I need to colour datapoints that are outside of the confidence bands on the plot below differently from those within the bands. Should I add a separate column to my dataset to record whether the data points are within the confidence bands? Can you provide an example please? Example dataset: ## Dataset from http://www.apsnet.org/education/advancedplantpath/topics/RModules/doc1/04_Linear_regression.html ## Disease severity as a function of temperature # Response variable, disease severity diseasesev<-c(1.9,3.1,3.3,4.8,5.3,6.1,6.4,7.6,9.8,12.4) # Predictor variable, (Centigrade) temperature<-c(2,1,5,5,20,20,23,10,30,25) ## For convenience, the data may be formatted into a dataframe severity <- as.data.frame(cbind(diseasesev,temperature)) ## Fit a linear model for the data and summarize the output from function lm() severity.lm <- lm(diseasesev~temperature,data=severity) jpeg('~/Desktop/test1.jpg') # Take a look at the data plot( diseasesev~temperature, data=severity, xlab="Temperature", ylab="% Disease Severity", pch=16, pty="s", xlim=c(0,30), ylim=c(0,30) ) title(main="Graph of % Disease Severity vs Temperature") par(new=TRUE) # don't start a new plot ## Get datapoints predicted by best fit line and confidence bands ## at every 0.01 interval xRange=data.frame(temperature=seq(min(temperature),max(temperature),0.01)) pred4plot <- predict( lm(diseasesev~temperature), xRange, level=0.95, interval="confidence" ) ## Plot lines derived from best fit line and confidence band datapoints matplot( xRange, pred4plot, lty=c(1,2,2), #vector of line types and widths type="l", #type of plot for each column of y xlim=c(0,30), ylim=c(0,30), xlab="", ylab="" )

