Search Results

Search found 1725 results on 69 pages for 'compute shader'.


  • rotate opengl mesh relative to camera

    - by shuall
    I have a cube in OpenGL. Its position is determined by multiplying its specific model matrix, the view matrix, and the projection matrix and then passing that to the shader, as per this tutorial (http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/). I want to rotate it relative to the camera. The only way I can think of to get the correct axis is by multiplying the inverse of the model matrix (because that's where all the previous rotations and transforms are stored) times the view matrix times the axis of rotation (x or y). I feel like there's got to be a better way to do this, like using something other than model, view and projection matrices, or maybe I'm doing something wrong. That's what all the tutorials I've seen use. PS: I'm also trying to stick with OpenGL 4 core features. Edit: If quaternions would fix my problems, could someone point me to a good tutorial/example for switching from 4x4 matrices to quaternions? I'm a little daunted by the task.
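
    A minimal sketch of that axis transform, assuming GLM (the library and the helper function below are not part of the original question): the camera-space axis is mapped into the model's local space with the inverse of view * model, then an ordinary glm::rotate is applied about it.

      #include <glm/glm.hpp>
      #include <glm/gtc/matrix_transform.hpp>

      // Rotate `model` about an axis expressed in camera (view) space, e.g. (0,1,0).
      // `angle` is in radians.
      glm::mat4 rotateRelativeToCamera(const glm::mat4& model, const glm::mat4& view,
                                       float angle, const glm::vec3& cameraAxis)
      {
          // Directions use w = 0 so the translation part of the matrices is ignored.
          glm::vec3 localAxis = glm::vec3(glm::inverse(view * model) * glm::vec4(cameraAxis, 0.0f));
          return glm::rotate(model, angle, glm::normalize(localAxis));
      }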

    Read the article

  • Microsoft XNA code sample won't work with Blender model

    - by FreakinaBox
    I downloaded this code sample and integrated it into my game: http://xbox.create.msdn.com/en-US/education/catalog/sample/mesh_instancing It works with the model that they supplied, but throws an exception whenever I use one of my models: "The current vertex declaration does not include all the elements required by the current vertex shader. TextureCoordinate0 is missing." I tried plugging my model into their original source code and the same thing happens. My model is an FBX from Blender and has a texture. This is the function that throws the error: GraphicsDevice.DrawInstancedPrimitives( PrimitiveType.TriangleList, 0, 0, meshPart.NumVertices, meshPart.StartIndex, meshPart.PrimitiveCount, instances.Length );

    Read the article

  • OpenGL ES 2.0. Sprite Sheet Animation

    - by Project Dumbo Dev
    I've found a bunch of tutorials on how to make this work on OpenGL ES 1.0 & 1.1, but I can't find it for 2.0. I would work it out by loading the texture and using a matrix in the vertex shader to move through the sprite sheet. I'm looking for the most efficient way to do it. I've read that when you do what I'm proposing you are constantly changing the VBOs, and that that is not good. Edit: Been doing some research myself. Came upon these two: Updating Texture, and the one before it about PBOs. I can't use PBOs since I'm using the ES version of OpenGL, so I suppose the best way is to use FBOs, but what I still don't get is whether I should create a sprite atlas/batch and make an FBO/load a texture for each frame, or if I should load every frame into the buffer and change just the texture coordinates.
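
    One way to avoid touching the VBO at all, sketched below under the assumption of a regular grid atlas (the function and uniform names are hypothetical, not from the question): keep a single static quad and pass a per-frame UV offset/scale as a uniform that the vertex shader applies to the texture coordinates.

      struct UvTransform { float offsetU, offsetV, scaleU, scaleV; };

      // Compute the UV window of one cell in a columns x rows sprite sheet.
      UvTransform frameUv(int frame, int columns, int rows)
      {
          UvTransform t;
          t.scaleU  = 1.0f / columns;
          t.scaleV  = 1.0f / rows;
          t.offsetU = (frame % columns) * t.scaleU;
          t.offsetV = (frame / columns) * t.scaleV;
          return t;
      }

      // Per frame (uUvTransform is a hypothetical vec4 uniform location):
      //   UvTransform t = frameUv(currentFrame, 8, 4);
      //   glUniform4f(uUvTransform, t.offsetU, t.offsetV, t.scaleU, t.scaleV);
      // Vertex shader: vTexCoord = aTexCoord * uUvTransform.zw + uUvTransform.xy;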

    Read the article

  • What is an achievable way of setting content budgets (e.g. polygon count) for level content in a 3D title?

    - by MrCranky
    In answering this question for swquinn, the answer raised a more pertinent question that I'd like to hear answers to. I'll post our own strategy (I promise I won't accept it as the answer), but I'd like to hear others. Specifically: how do you go about setting a sensible budget for your content team? Usually one of the very first questions asked in development is: what's our polygon budget? Of course, these days it's rare that vertex/poly count alone is the limiting factor; instead shader complexity, fill-rate and lighting complexity all come into play. What the content team wants are some hard numbers/limits to work to, such that they have a reasonable expectation that their content, once it actually gets into the engine, will not be too heavy. Given that 'it depends' isn't a particularly useful answer, I'd like to hear a strategy that allows me to give them workable limits without being a) misleading, or b) wrong.

    Read the article

  • How can I achieve a smooth 2D lighting effect?

    - by Cyral
    I'm making a tile-based game in XNA. Currently my lighting looks like this: How can I get it to look like this? Instead of each block having its own tint, it has a smooth overlay. I'm assuming some sort of shader is needed, plus some way to tell it the lighting and blur it somehow, but I'm not an expert with shaders. My current lighting calculates the light, then passes it to a SpriteBatch and draws with a color parameter. EDIT: It no longer uses the SpriteBatch tint; I was testing, and now I pass parameters to set the light values. But I'm still looking for a way to smooth it.

    Read the article

  • Write depth buffer to texture

    - by innochenti
    I need to read the depth buffer from the GPU and write it to a texture. How can this be done? Here is how the texture for the depth buffer is created: depthBufferDesc.Width = screenWidth; depthBufferDesc.Height = screenHeight; depthBufferDesc.MipLevels = 1; depthBufferDesc.ArraySize = 1; depthBufferDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT; depthBufferDesc.SampleDesc.Count = 1; depthBufferDesc.SampleDesc.Quality = 0; depthBufferDesc.Usage = D3D10_USAGE_DEFAULT; depthBufferDesc.BindFlags = D3D10_BIND_DEPTH_STENCIL; depthBufferDesc.CPUAccessFlags = 0; depthBufferDesc.MiscFlags = 0; m_device->CreateTexture2D(&depthBufferDesc, NULL, m_depthStencilBuffer); Also, I've got another question: is it possible to bind the depth buffer texture as a sampler for the pixel shader?
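
    A sketch of the usual D3D10 approach, assuming the goal is to sample the depth buffer from the pixel shader (the view variables below are hypothetical; only m_device comes from the question): create the texture with a typeless format and both bind flags, then give the depth-stencil view and the shader-resource view compatible typed formats.

      #include <d3d10.h>

      D3D10_TEXTURE2D_DESC desc = {};
      desc.Width            = screenWidth;
      desc.Height           = screenHeight;
      desc.MipLevels        = 1;
      desc.ArraySize        = 1;
      desc.Format           = DXGI_FORMAT_R24G8_TYPELESS;   // typeless so two views can alias it
      desc.SampleDesc.Count = 1;
      desc.Usage            = D3D10_USAGE_DEFAULT;
      desc.BindFlags        = D3D10_BIND_DEPTH_STENCIL | D3D10_BIND_SHADER_RESOURCE;

      ID3D10Texture2D* depthTex = NULL;
      m_device->CreateTexture2D(&desc, NULL, &depthTex);

      D3D10_DEPTH_STENCIL_VIEW_DESC dsvDesc = {};
      dsvDesc.Format        = DXGI_FORMAT_D24_UNORM_S8_UINT;        // written as depth/stencil
      dsvDesc.ViewDimension = D3D10_DSV_DIMENSION_TEXTURE2D;
      ID3D10DepthStencilView* depthDSV = NULL;
      m_device->CreateDepthStencilView(depthTex, &dsvDesc, &depthDSV);

      D3D10_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
      srvDesc.Format              = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;  // depth readable in the red channel
      srvDesc.ViewDimension       = D3D10_SRV_DIMENSION_TEXTURE2D;
      srvDesc.Texture2D.MipLevels = 1;
      ID3D10ShaderResourceView* depthSRV = NULL;
      m_device->CreateShaderResourceView(depthTex, &srvDesc, &depthSRV);
      // depthSRV can now be bound with PSSetShaderResources and sampled in the pixel shader
      // (unbind it as the active depth-stencil view first; it cannot be both at once).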

    Read the article

  • Multiply mode in SpriteBatch

    - by ashes999
    I have a "lighting" texture (black background with white or colours for lights) that I want to draw as a multiplcation operation. SpriteBatch.Begin can specify BlendState.Additive, but there's no BlendState.Multiplicative. I also tried the solution in this answer, but it didn't work -- even when I (incorrectly?) changed the code to work with XNA 4 style ColorDestinationBlend, I ended up with the final solution being inverted (black area where the light is, everything else is visible). I initially thought of a shader, but I couldn't get shaders to work with MonoGame, so I'm falling back to SpriteBatch.

    Read the article

  • Huge 2d pixelized world

    - by aspcartman
    I would like the game field in an indie 2D strategy game to look something like this popular picture: http://0.static.wix.com/media/6a83ae_cd307e45ffd9c6b145237263ac1a86be.jpg_1024 So every "pixel" (block) changes its color slowly, sometimes a bright color wave happens, etc., but the spaces between these pixels should stay dark (not counting shades, lighting and other stuff going on). Units will be "pixelized" in the same way and should position themselves according to those blocks. I have some experience in game development, but this task doesn't seem trivial to me. What approaches (shader, tons of sprites, code rendering; I don't know) would you recommend I follow? (I'm thinking of making this game using the Unity engine.) Thanks everyone! :)

    Read the article

  • In Java Concurrency In Practice by Brian Goetz, why is the Memoizer class not annotated with @ThreadSafe?

    - by dig_dug
    Java Concurrency In Practice by Brian Goetz provides an example of an efficient, scalable cache for concurrent use. The final version of the example, showing the implementation of class Memoizer (pg 108), shows such a cache. I am wondering why the class is not annotated with @ThreadSafe. The client of the cache, class Factorizer, is properly annotated with @ThreadSafe. The appendix states that if a class is not annotated with either @ThreadSafe or @Immutable, it should be assumed that it isn't thread-safe. Memoizer seems thread-safe, though. Here is the code for Memoizer: public class Memoizer<A, V> implements Computable<A, V> { private final ConcurrentMap<A, Future<V>> cache = new ConcurrentHashMap<A, Future<V>>(); private final Computable<A, V> c; public Memoizer(Computable<A, V> c) { this.c = c; } public V compute(final A arg) throws InterruptedException { while (true) { Future<V> f = cache.get(arg); if (f == null) { Callable<V> eval = new Callable<V>() { public V call() throws InterruptedException { return c.compute(arg); } }; FutureTask<V> ft = new FutureTask<V>(eval); f = cache.putIfAbsent(arg, ft); if (f == null) { f = ft; ft.run(); } } try { return f.get(); } catch (CancellationException e) { cache.remove(arg, f); } catch (ExecutionException e) { throw launderThrowable(e.getCause()); } } } }

    Read the article

  • Hadoop safemode recovery - taking lot of time

    - by Algorist
    Hi, We are running our cluster on Amazon EC2. We are using Cloudera scripts to set up Hadoop. On the master node, we start the services below. 609 $AS_HADOOP '"$HADOOP_HOME"/bin/hadoop-daemon.sh start namenode' 610 $AS_HADOOP '"$HADOOP_HOME"/bin/hadoop-daemon.sh start secondarynamenode' 611 $AS_HADOOP '"$HADOOP_HOME"/bin/hadoop-daemon.sh start jobtracker' 612 613 $AS_HADOOP '"$HADOOP_HOME"/bin/hadoop dfsadmin -safemode wait' On the slave machines, we run the services below. 625 $AS_HADOOP '"$HADOOP_HOME"/bin/hadoop-daemon.sh start datanode' 626 $AS_HADOOP '"$HADOOP_HOME"/bin/hadoop-daemon.sh start tasktracker' The main problem we are facing is that HDFS safe mode recovery is taking more than an hour, and this is causing delays in our job completion. Below are the main log messages. 1. domU-12-31-39-0A-34-61.compute-1.internal 10/05/05 20:44:19 INFO ipc.Client: Retrying connect to server: ec2-184-73-64-64.compute-1.amazonaws.com/10.192.11.240:8020. Already tried 21 time(s). 2. The reported blocks 283634 needs additional 322258 blocks to reach the threshold 0.9990 of total blocks 606499. Safe mode will be turned off automatically. The first message is thrown in the task trackers' logs because the job tracker is not started; the job tracker didn't start because of HDFS safe mode recovery. The second message is thrown during the recovery process. Is there something I am doing wrong? How much time does normal HDFS safe mode recovery take? Would there be any speedup from not starting the task trackers until the job tracker is started? Are there any known Hadoop problems on Amazon clusters? Thanks for your help. Regards, Bala Mudiam

    Read the article

  • Amazon EC2 and jbossws

    - by avjaz
    Hi - I've deployed a web service to a JBoss instance running on Amazon EC2. The web service works fine locally, but when I deploy it on EC2 and go to the /jbossws/services page, the Endpoint Address for the web service is the private DNS of the EC2 instance (domU-X-X-X-X etc...), not the public DNS (which I would like it to be). I've tried loading the WSDL by changing the private hostname to the public IP; that works, but when I try to call any of the operations I get a HostNotFoundException, I'm guessing because the generated WSDL has the stanza: <service name='XXXService'> <port binding='tns:XXXBinding' name='XXXPort'> <soap:address location='http://domU-XX-XX-XX-XX-XX-XX.compute-1.internal:8080/xx/xx/xx'/> </port> </service> where http://domU-XX-XX-XX-XX-XX-XX.compute-1.internal is the internal DNS of the EC2 instance. The WSDL is auto-generated - is there a JAXB annotation I can use to force the generated WSDL to use the public DNS of the EC2 instance? Many thanks -

    Read the article

  • Recursively adding threads to a Java thread pool

    - by Leith
    I am working on a tutorial for my Java concurrency course. The objective is to use thread pools to compute prime numbers in parallel. The design is based on the Sieve of Eratosthenes. It has an array of n bools, where n is the largest integer you are checking, and each element in the array represents one integer. True is prime, false is non-prime, and the array is initially all true. A thread pool is used with a fixed number of threads (we are supposed to experiment with the number of threads in the pool and observe the performance). A thread is given an integer multiple to process. The thread then finds the first true element in the array that is not a multiple of the thread's integer. The thread then creates a new thread on the thread pool, which is given the found number. After the new thread is formed, the existing thread continues to set all multiples of its integer in the array to false. The main program thread starts the first thread with the integer '2', and then waits for all spawned threads to finish. It then spits out the prime numbers and the time taken to compute them. The issue I have is that the more threads there are in the thread pool, the slower it runs, with 1 thread being the fastest. It should be getting faster, not slower! All the examples on the internet about Java thread pools create n worker threads and then have the main thread wait for them all to finish. The method I use is recursive, as a worker can spawn more worker threads. I would like to know what is going wrong, and whether Java thread pools can be used recursively.

    Read the article

  • How should I go about implementing a points-to analysis in Maude?

    - by reprogrammer
    I'm going to implement a points-to analysis algorithm. I'd like to base this analysis mainly on the algorithm by Whaley and Lam. Whaley and Lam use a BDD-based implementation of Datalog to represent and compute the points-to analysis relations. The following lists some of the relations that are used in a typical points-to analysis. Note that D(w, z) :- A(w, x), B(x, y), C(y, z) means D(w, z) is true if A(w, x), B(x, y), and C(y, z) are all true. BDD is the data structure used to represent these relations. Relations input vP0 (variable : V, heap : H) input store (base : V, field : F, source : V) input load (base : V, field : F, dest : V) input assign (dest : V, source : V) output vP (variable : V, heap : H) output hP (base : H, field : F, target : H) Rules vP(v, h) :- vP0(v, h) vP(v1, h) :- assign(v1, v2), vP(v2, h) hP(h1, f, h2) :- store(v1, f, v2), vP(v1, h1), vP(v2, h2) vP(v2, h2) :- load(v1, f, v2), vP(v1, h1), hP(h1, f, h2) I need to understand whether Maude is a good environment for implementing a points-to analysis. I noticed that Maude uses a BDD library called BuDDy, but it looks like Maude uses BDDs for a different purpose, i.e. unification. So I thought I might be able to use Maude instead of a Datalog engine to compute the relations of my points-to analysis. I assume Maude propagates independent information concurrently, and this concurrency could potentially make my points-to analysis faster than sequential processing of the rules. But I don't know the best way to represent my relations in Maude. Should I implement BDDs in Maude myself, or does Maude's internal BDD-based unification have the same effect?

    Read the article

  • Issue with getting 2 chars from string using indexer

    - by Learner
    I am facing an issue with reading char values. See my program below. I want to evaluate an infix expression. As you can see, I want to read '10', '*', '20' and then use them... but if I use the string indexer, s[0] will be '1' and not '10', and hence I am not able to get the expected result. Can you suggest something? The code is in C#: class Program { static void Main(string[] args) { string infix = "10*2+20-20+3"; float result = EvaluateInfix(infix); Console.WriteLine(result); Console.ReadKey(); } public static float EvaluateInfix(string s) { Stack<float> operand = new Stack<float>(); Stack<char> operator1 = new Stack<char>(); int len = s.Length; for (int i = 0; i < len; i++) { if (isOperator(s[i])) // I am having an issue here as s[i] gives each character and I want the number 10 operator1.Push(s[i]); else { operand.Push(s[i]); if (operand.Count == 2) Compute(operand, operator1); } } return operand.Pop(); } public static void Compute(Stack<float> operand, Stack<char> operator1) { float operand1 = operand.Pop(); float operand2 = operand.Pop(); char op = operator1.Pop(); if (op == '+') operand.Push(operand1 + operand2); else if(op=='-') operand.Push(operand1 - operand2); else if(op=='*') operand.Push(operand1 * operand2); else if(op=='/') operand.Push(operand1 / operand2); } public static bool isOperator(char c) { bool result = false; if (c == '+' || c == '-' || c == '*' || c == '/') result = true; return result; } } }
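
    A sketch of the usual fix for the tokenizing part, written here in C++ rather than the question's C# and not taken from the original code: walk the string with an index and consume a whole run of digits at once, so "10" becomes a single token instead of '1' followed by '0'.

      #include <cctype>
      #include <string>
      #include <vector>

      std::vector<std::string> tokenize(const std::string& s)
      {
          std::vector<std::string> tokens;
          for (std::size_t i = 0; i < s.size(); ) {
              if (std::isdigit(static_cast<unsigned char>(s[i]))) {
                  std::size_t start = i;
                  while (i < s.size() && std::isdigit(static_cast<unsigned char>(s[i])))
                      ++i;                                        // consume the whole number
                  tokens.push_back(s.substr(start, i - start));   // e.g. "10"
              } else {
                  tokens.push_back(std::string(1, s[i]));         // single-character operator
                  ++i;
              }
          }
          return tokens;
      }
      // tokenize("10*2+20-20+3") -> {"10", "*", "2", "+", "20", "-", "20", "+", "3"}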

    Read the article

  • malloc works, cudaHostAlloc segfaults?

    - by Mikhail
    I am new to CUDA and I want to use cudaHostAlloc. I was able to isolate my problem to the following code. Using malloc for host allocation works; using cudaHostAlloc results in a segfault, possibly because the allocated area is invalid? When I dump the pointer in both cases it is not null, so cudaHostAlloc returns something... works in_h = (int*) malloc(length*sizeof(int)); //works for (int i = 0;i<length;i++) in_h[i]=2; doesn't work cudaHostAlloc((void**)&in_h,length*sizeof(int),cudaHostAllocDefault); for (int i = 0;i<length;i++) in_h[i]=2; //segfaults Standalone Code #include <stdio.h> void checkDevice() { cudaDeviceProp info; int deviceName; cudaGetDevice(&deviceName); cudaGetDeviceProperties(&info,deviceName); if (!info.deviceOverlap) { printf("Compute device can't use streams and should be discared."); exit(EXIT_FAILURE); } } int main() { checkDevice(); int *in_h; const int length = 10000; cudaHostAlloc((void**)&in_h,length*sizeof(int),cudaHostAllocDefault); printf("segfault comming %d\n",in_h); for (int i = 0;i<length;i++) { in_h[i]=2; } free(in_h); return EXIT_SUCCESS; } ~ Invocation [id129]$ nvcc fun.cu [id129]$ ./a.out segfault comming 327641824 Segmentation fault (core dumped) Details The program is run in interactive mode on a cluster. I was told that an invocation of the program from the compute node pushes it onto the cluster. I have not had any trouble with other home-made toy CUDA code.
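
    A sketch of the first diagnostic step, assuming only the standard CUDA runtime API (not taken from the question): check the cudaError_t that cudaHostAlloc returns before touching the pointer, and release pinned memory with cudaFreeHost rather than free.

      #include <cstdio>
      #include <cstdlib>
      #include <cuda_runtime.h>

      int main()
      {
          int* in_h = NULL;
          const int length = 10000;

          cudaError_t err = cudaHostAlloc((void**)&in_h, length * sizeof(int), cudaHostAllocDefault);
          if (err != cudaSuccess) {
              std::fprintf(stderr, "cudaHostAlloc failed: %s\n", cudaGetErrorString(err));
              return EXIT_FAILURE;   // e.g. no usable device/context on the node the job actually runs on
          }

          for (int i = 0; i < length; ++i)
              in_h[i] = 2;

          cudaFreeHost(in_h);        // pinned memory must not be released with free()
          return EXIT_SUCCESS;
      }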

    Read the article

  • OpenGL particle system

    - by allan
    I'm really new to OpenGL, so bear with me. I'm trying to simulate a particle system using OpenGL but I can't get it to work; this is what I have so far: #include <GL/glut.h> int main (int argc, char **argv){ // data allocation, various non opengl stuff ............ glutInit(&argc, argv); glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE ); glutInitWindowPosition(100,100); glutInitWindowSize(size, size); glPointSize (4); glutCreateWindow("test gl"); ............ // initial state, not opengl ............ glViewport(0,0,size,size); glutDisplayFunc(display); glutIdleFunc(compute); glutMainLoop(); } void compute (void) { // change state not opengl glutPostRedisplay(); } void display (void) { glClear(GL_COLOR_BUFFER_BIT); glBegin(GL_POINTS); for(i = 0; i<nparticles; i++) { // two types of particles if (TYPE(particle[i]) == 1) glColor3f(1,0,0); else glColor3f(0,0,1); glVertex2f(X(particle[i]),Y(particle[i])); } glEnd(); glFlush(); glutSwapBuffers(); } I get a black window after a couple of seconds (the window has just the title bar before that). Where do I go wrong? Any help would be very much appreciated. Thanks. Later edit: the x and y coordinates of each particle are within the interval (0,size)
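
    For reference, a sketch of two common reasons a GLUT window stays black with this kind of setup (an assumption, not a diagnosis of the exact code above): GL calls such as glPointSize need a current context, i.e. they belong after glutCreateWindow, and coordinates in (0, size) need a matching orthographic projection, because the default clip volume only covers (-1, 1).

      #include <GL/glut.h>

      void initGL(int size)
      {
          glMatrixMode(GL_PROJECTION);
          glLoadIdentity();
          glOrtho(0.0, size, 0.0, size, -1.0, 1.0);   // map (0, size) onto the viewport
          glMatrixMode(GL_MODELVIEW);
          glLoadIdentity();
          glPointSize(4.0f);                          // valid now that a context exists
      }

      // In main(), after glutCreateWindow("test gl"):
      //   glViewport(0, 0, size, size);
      //   initGL(size);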

    Read the article

  • CMYK + CMYK = ? CMYK / 2 = ?

    - by Pete
    Suppose there are two colors defined in CMYK: color1 = 30, 40, 50, 60 color2 = 50, 60, 70, 80 If they were to be printed, what values would the resulting color have? color_new = min(cyan1 + cyan2, 100), min(magenta1 + magenta2, 100), min(yellow1 + yellow2, 100), min(black1 + black2, 100)? Suppose there is a color defined in CMYK: color = 40, 30, 30, 100 It is possible to print a color at partial intensity, i.e. as a tint. What values would a 50% tint of that color have? color_new = cyan / 2, magenta / 2, yellow / 2, black / 2? I'm asking this to better understand the "tintTransform" function in PDF Reference 1.7, 4.5.5 Special Color Spaces, DeviceN Color Spaces. Update: To clarify: I'm not entirely concerned with human perception or how the CMYK dyes react to the paper. If someone specifies a 90% tint which, when printed, looks like a full-intensity colorant, that's OK. In other words, if I'm asking how to compute 50% of cmyk(40, 30, 30, 100), I'm asking how to compute the new values, regardless of whether the result looks half-dark or not. Update 2: I'm confused now. I checked this in InDesign and Acrobat. For example, Pantone 3005 has CMYK 100, 34, 0, 2, and its 25% tint has CMYK 25, 8.5, 0, 0.5. Does it mean I can "monkey around in a linear way"?
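
    For what it's worth, the Update 2 numbers are consistent with the purely linear, per-component tint the question proposes (a worked form of that arithmetic, not a statement about what a particular tintTransform must do):

      \mathrm{tint}_t(c, m, y, k) = (t\,c,\ t\,m,\ t\,y,\ t\,k)
      \mathrm{tint}_{0.25}(100, 34, 0, 2) = (25,\ 8.5,\ 0,\ 0.5)
      \mathrm{tint}_{0.5}(40, 30, 30, 100) = (20,\ 15,\ 15,\ 50)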

    Read the article

  • mvn deploy to AWS (ssh via distributionManagement)

    - by Dexter
    I am working on deploying the WAR file to AWS using Maven. I am planning to use 'mvn deploy' for this, which would copy the WAR file to AWS over SSH. I am following http://maven.apache.org/plugins/maven-deploy-plugin/examples/deploy-ssh-external.html. This is my POM file: <project> ... <distributionManagement> <repository> <id>ssh-aws</id> <url>scpexe://<ec2 instance>.compute-1.amazonaws.com</url> </repository> </distributionManagement> <build> <extensions> <!-- Enabling the use of FTP --> <extension> <groupId>org.apache.maven.wagon</groupId> <artifactId>wagon-ssh-external</artifactId> <version>1.0-beta-6</version> </extension> </extensions> </build> .. </project> This is my settings.xml: <server> <id>ssh-aws</id> <username>aws-user</username> </server> The only issue is that I am unable to figure out the URL in the distributionManagement node of pom.xml. I am able to SSH into the AWS server with the following: ssh -i ~/pemfile/pemfile-key.pem aws-user@<ec2 instance>.compute-1.amazonaws.com But when I run mvn clean deploy, I receive this: Exit code: 1 - Permission denied (publickey). -> [Help 1] Thanks in advance.

    Read the article

  • Running an RMI registry on one computer and connecting to it from another

    - by Hippopotamus
    Hello. I am doing a student assignment using Java RMI. I've programmed a simple RMI server application that provides a method which returns some strings to the client. When I start the server on localhost and connect to it with the client on the same computer, everything goes well. However, I am not able to do this between two computers on my home network. Both computers have no trouble connecting via a simple C program with similar functionality, so I guess the problem is with the JVM here. I am binding the class to rmiregistry with try{ ComputeImpl R = new ComputeImpl(); Naming.rebind("rmi://localhost/ComputeService",R); } catch(Exception e){ System.out.println("Trouble: " +e); } and I'm doing the RMI registry lookup in the client application, with the address provided as an argument when launching the application: StringBuffer rmi_address = new StringBuffer(); rmi_address.append("rmi://").append(args[1]).append("/ComputeService"); Compute R = (Compute) Naming.lookup(rmi_address.toString()); Is the problem with my code or with the JVM? Thanks in advance.

    Read the article

  • 2D platformer gravity physics with slow-motion

    - by DD
    Hi all, I fine-tuned my 2D platformer physics, and when I added slow-motion I realized that it is messed up. The problem is that for some reason the physics still depends on framerate. When I scale down the elapsed time, every force is scaled down as well: the jump force is scaled down, meaning that in slow-motion the character jumps to a smaller vertical height, and the gravity force is scaled down as well, so the character travels further through the air without falling. I'm posting my update function in the hope that someone can help me out here (I separated the vertical (jump, gravity) and walking (arbitrary walking direction on a platform - platforms can be at any angle) vectors): characterUpdate:(float)dt { //Compute walking velocity walkingAcceleration = direction of platform * walking acceleration constant * dt; initialWalkingVelocity = walkingVelocity; if( isWalking ) { if( !isJumping ) walkingVelocity = walkingVelocity + walkingAcceleration; else walkingVelocity = walkingVelocity + Vector( walking acceleration constant * dt, 0 ); } // Compute jump/fall velocity if( !isOnPlatform ) { initialVerticalVelocity = verticalVelocity; verticalVelocity = verticalVelocity + verticalAcceleration * dt; } // Add walking velocity position = position + ( walkingVelocity + initialWalkingVelocity ) * 0.5 * dt; //Add jump/fall velocity if not on a platform if( !isOnPlatform ) position = position + ( verticalVelocity + initialVerticalVelocity ) * 0.5 * dt; verticalAcceleration.y = Gravity * dt; }
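
    One likely culprit in the listing above is that the acceleration is itself pre-multiplied by dt (verticalAcceleration.y = Gravity * dt) and then multiplied by dt again in the velocity update, so the motion scales with dt squared. A minimal sketch of framerate-independent (and therefore slow-motion-friendly) integration, with hypothetical names and a made-up gravity constant:

      struct Body {
          float y  = 0.0f;    // vertical position
          float vy = 0.0f;    // vertical velocity (units per second)
      };

      const float kGravity = -980.0f;   // units per second^2 (hypothetical value)

      void step(Body& b, float dt)      // dt already scaled by the slow-motion factor
      {
          b.vy += kGravity * dt;        // v = v + a*dt; a is NOT pre-multiplied by dt
          b.y  += b.vy * dt;            // semi-implicit Euler: use the updated velocity
      }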

    Read the article

  • Instrumenting a string

    - by George Polevoy
    Back in my C++ days I crafted a library which enabled a string representation of the computation history. Given a math expression like: TScalar Compute(TScalar a, TScalar b, TScalar c) { return ( a + b ) * c; } I could render its string representation: r = Compute(VerbalScalar("a", 1), VerbalScalar("b", 2), VerbalScalar("c", 3)); Assert.AreEqual(9, r.Value); Assert.AreEqual("(a+b)*c==(1+2)*3", r.History ); C++ operator overloading allowed the substitution of a simple type with a complex self-tracking entity holding an internal tree representation of everything that happens to the objects. Now I would like to have the same possibility for .NET strings, only instead of variable names I would like to see stack traces of all the places in code which affected a string. I want it to work with existing code and existing compiled assemblies. I also want all this to hook into the Visual Studio debugger, so I could set a breakpoint and see everything that happened to a string. Which technology would allow this kind of thing? I know it sounds like a utopia, but I think Visual Studio's code coverage tools do the same kind of job when instrumenting assemblies.
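
    For reference, a minimal C++ sketch of the operator-overloading technique described above (an assumption about how such a library could look, not the original code): each operator returns both the numeric value and a textual history of the expression that produced it.

      #include <cassert>
      #include <string>

      struct VerbalScalar {
          double value;
          std::string history;
          VerbalScalar(std::string name, double v) : value(v), history(std::move(name)) {}
          VerbalScalar(double v, std::string expr) : value(v), history(std::move(expr)) {}
      };

      VerbalScalar operator+(const VerbalScalar& a, const VerbalScalar& b) {
          return {a.value + b.value, "(" + a.history + "+" + b.history + ")"};
      }
      VerbalScalar operator*(const VerbalScalar& a, const VerbalScalar& b) {
          return {a.value * b.value, a.history + "*" + b.history};
      }

      int main() {
          VerbalScalar r = (VerbalScalar("a", 1) + VerbalScalar("b", 2)) * VerbalScalar("c", 3);
          assert(r.value == 9);
          assert(r.history == "(a+b)*c");   // a simplified history, without the "==(1+2)*3" part
      }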

    Read the article

  • C++ variable to const expression

    - by user1344784
    template <typename Real> class A{ }; template <typename Real> class B{ }; //... a few dozen more similar template classes class Computer{ public slots: void setFrom(int from){ from_ = from; } void setTo(int to){ to_ = to; } private: template <int F, int T> void compute(){ using boost::fusion::vector; using boost::fusion::at_c; vector<A<float>, B<float>, ...> v; at_c<from_>(v).operator()(at_c<to_>(v)); //error; needs to be const-expression. }; This question isn't about Qt, but there is a line of Qt code in my example. The setFrom() and setTo() functions are called based on user selection via the GUI widget. The root of my problem is that 'from' and 'to' are variables. In my compute member function I need to pick a type (A, B, etc.) based on the values of 'from' and 'to'. The only way I know how to do what I need is to use switch statements, but that's extremely tedious in my case and I would like to avoid it. Is there any way to convert the error line to a constant expression?
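
    A sketch of one common workaround, shown here with std::tuple and C++14 index sequences instead of boost::fusion (all names below are hypothetical, not from the question): build a table with one instantiation per possible index and select the entry at runtime, which replaces the hand-written switch.

      #include <array>
      #include <cstddef>
      #include <cstdio>
      #include <tuple>
      #include <utility>

      std::tuple<int, double, char> v{1, 2.5, 'x'};

      template <std::size_t I>
      void useElement() {                        // the "compile-time" body
          std::printf("element %zu selected\n", I);
          (void)std::get<I>(v);                  // I is a constant expression here
      }

      template <std::size_t... Is>
      void dispatch(std::size_t runtimeIndex, std::index_sequence<Is...>) {
          using Fn = void (*)();
          static const std::array<Fn, sizeof...(Is)> table{ {&useElement<Is>...} };
          table[runtimeIndex]();                 // pick the instantiation at runtime
      }

      int main() {
          std::size_t from = 1;                  // imagine this came from the GUI
          dispatch(from, std::make_index_sequence<3>{});
      }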

    Read the article

  • Computing, storing, and retrieving values to and from an N-Dimensional matrix

    - by Adam S
    This question is probably quite different from what you are used to reading here - I hope it can provide a fun challenge. Essentially I have an algorithm that uses 5 (or more) variables to compute a single value, called outcome. Now I have to implement this algorithm on an embedded device which has no memory limitations, but has very harsh processing constraints. Because of this, I would like to run a calculation engine which computes outcome for, say, 20 different values of each variable and stores this information in a file. You may think of this as a 5 (or more)-dimensional matrix or 5 (or more)-dimensional array, each dimension being 20 entries long. In any modern language, filling this array is as simple as having 5 (or more) nested for loops. The tricky part is that I need to dump these values into a file that can then be placed onto the embedded device so that the device can use it as a lookup table. The questions now are: What format(s) might be acceptable for storing the data? What programs (MATLAB, C#, etc.) might be best suited to compute the data? C# must be used to import the data on the device - is this possible given your answer to #1?
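
    For illustration, a sketch (in C++, purely an assumption about how the table could be laid out; the 0..1 grid mapping is made up) of the flat indexing that makes both the offline fill and the on-device lookup cheap -- the same index arithmetic is used when generating the file and when reading it back:

      #include <cstddef>
      #include <vector>

      const std::size_t N = 20;   // samples per variable

      inline std::size_t flatIndex(std::size_t a, std::size_t b, std::size_t c,
                                   std::size_t d, std::size_t e)
      {
          return (((a * N + b) * N + c) * N + d) * N + e;
      }

      // Offline: fill the table with the expensive algorithm, then write
      // table.data() (N^5 doubles) to a binary file for the device.
      std::vector<double> buildTable(double (*outcome)(double, double, double, double, double))
      {
          std::vector<double> table(N * N * N * N * N);
          for (std::size_t a = 0; a < N; ++a)
            for (std::size_t b = 0; b < N; ++b)
              for (std::size_t c = 0; c < N; ++c)
                for (std::size_t d = 0; d < N; ++d)
                  for (std::size_t e = 0; e < N; ++e)
                    table[flatIndex(a, b, c, d, e)] =
                        outcome(a / (N - 1.0), b / (N - 1.0), c / (N - 1.0),
                                d / (N - 1.0), e / (N - 1.0));
          return table;
      }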

    Read the article

  • How to mock/stub a directory of files and their contents using RSpec?

    - by John Topley
    A while ago I asked "How to test obtaining a list of files within a directory using RSpec?" and although I got a couple of useful answers, I'm still stuck, hence a new question with some more detail about what I'm trying to do. I'm writing my first RubyGem. It has a module that contains a class method that returns an array containing a list of non-hidden files within a specified directory. Like this: files = Foo.bar :directory => './public' The array also contains an element that represents metadata about the files. This is actually a hash of hashes generated from the contents of the files, the idea being that changing even a single file changes the hash. I've written my pending RSpec examples, but I really have no idea how to implement them: it "should compute a hash of the files within the specified directory" it "shouldn't include hidden files or directories within the specified directory" it "should compute a different hash if the content of a file changes" I really don't want to have the tests dependent on real files acting as fixtures. How can I mock or stub the files and their contents? The gem implementation will use Find.find, but as one of the answers to my other question said, I don't need to test the library. I really have no idea how to write these specs, so any help much appreciated!

    Read the article

  • R Random Data Sets within loops

    - by jugossery
    Here is what I want to do: I have a time series data frame with, let us say, 100 time series of length 600 - each in one column of the data frame. I want to pick 4 of the time series randomly and then assign them random weights that sum up to one (i.e. 0.1, 0.5, 0.3, 0.1). Using those, I want to compute the mean of the sum of the 4 weighted time series variables (i.e. a convex combination). I want to do this, let us say, 100k times and store each result in the form ts1.name, ts2.name, ts3.name, ts4.name, weight1, weight2, weight3, weight4, mean so that I get a 9*100k df. I tried some things already, but R is very bad with loops and I know vector-oriented solutions are better because of R's design. Thanks. Here is what I did, and I know it is horrible. The df is in the form v1,v2,v2.....v100 1,5,6,.......9 2,4,6,.......10 3,5,8,.......6 2,2,8,.......2 etc e=NULL for (x in 1:100000) { s=sample(1:100,4)#pick 4 variables randomly a=sample(seq(0,1,0.01),1) b=sample(seq(0,1-a,0.01),1) c=sample(seq(0,(1-a-b),0.01),1) d=1-a-b-c e=c(a,b,c,d)#4 random weights average=mean(timeseries.df[,s]%*%t(e)) e=rbind(e,s,average)#in the end i get the 9*100k df } The procedure runs way too slow. EDIT: Thanks for the help so far; I am not used to thinking in R, and I am not very used to translating every problem into a matrix algebra equation, which is what you need in R. The problem becomes a little more complex if I want to calculate the standard deviation: I need the covariance matrix, and I am not sure if/how I can pick random elements for each sample from the original timeseries.df covariance matrix and then compute the sample variance (t(sampleweights)%*%sample_cov.mat%*%sampleweights) to get, in the end, the ts.weighted_standard_dev matrix. Last question: what is the best way to proceed if I want to bootstrap the original df x times and then apply the same computations to test the robustness of my data? Thanks.

    Read the article
