Search Results

Search found 6411 results on 257 pages for 'binary vector'.


  • Fast language for problem solving? [closed]

    - by Friend of Kim
    I learned PHP to make websites. After some years I've started using programming to solve tasks that are difficult for my level. Now I want to make a program that solves equations. (I want to write it myself, not use an API, because I'm doing this for the sake of the challenge, not for the result.) Because of this, I'm going to learn a new and faster/better language. It's going to be C++, Java, Python or C#. What are the benefits of each language, and which offers the best balance of execution speed, speed of writing and readability? Using C would be lightning fast, but the lack of OO makes for more complex code and reduces readability, for example.

    Read the article

  • C# Process Binary File, Multi-Thread Processing

    - by washtik
    I have the following code that processes a binary file. I want to split the processing workload by using threads, assigning each line of the binary file to threads in the ThreadPool. Processing time for each line is only small, but when dealing with files that might contain hundreds of lines, it makes sense to split the workload. My question is regarding the BinaryReader and thread safety. First of all, is what I am doing below acceptable? I have a feeling it would be better to pass only the binary data for each line to the PROCESS_Binary_Return_lineData method. Please note the code below is conceptual; I'm looking for a bit of guidance on this, as my knowledge of multi-threading is in its infancy. Perhaps there is a better way to achieve the same result, i.e. split the processing of each binary line.

        var dic = new Dictionary<DateTime, Data>();
        var resetEvent = new ManualResetEvent(false);
        using (var b = new BinaryReader(File.Open(Constants.dataFile, FileMode.Open, FileAccess.Read, FileShare.Read)))
        {
            var lByte = b.BaseStream.Length;
            var toProcess = 0;
            while (lByte >= DATALENGTH)
            {
                b.BaseStream.Position = lByte;
                lByte = lByte - AB_DATALENGTH;
                ThreadPool.QueueUserWorkItem(delegate
                {
                    Interlocked.Increment(ref toProcess);
                    var lineData = PROCESS_Binary_Return_lineData(b);
                    lock (dic)
                    {
                        if (!dic.ContainsKey(lineData.DateTime))
                        {
                            dic.Add(lineData.DateTime, lineData);
                        }
                    }
                    if (Interlocked.Decrement(ref toProcess) == 0)
                        resetEvent.Set();
                }, null);
            }
        }
        resetEvent.WaitOne();
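
    A common way around the shared-reader problem is to do all reads on one thread and hand each worker a private copy of its record's bytes, so no reader is ever shared. A minimal sketch of that pattern (in C++ for concreteness; RECORD_LEN, the file name, and processRecord are hypothetical stand-ins):

        #include <cstdio>
        #include <future>
        #include <map>
        #include <mutex>
        #include <vector>

        constexpr size_t RECORD_LEN = 64;                // hypothetical record size

        std::map<long, std::vector<char>> parsed;        // shared results
        std::mutex parsedMutex;

        // Each worker gets its own private copy of the record's bytes.
        void processRecord(long index, std::vector<char> bytes) {
            // ... parse bytes here ...
            std::lock_guard<std::mutex> lock(parsedMutex);
            parsed.emplace(index, std::move(bytes));
        }

        int main() {
            std::FILE* f = std::fopen("data.bin", "rb"); // placeholder file name
            if (!f) return 1;
            std::vector<std::future<void>> tasks;
            std::vector<char> buf(RECORD_LEN);
            long index = 0;
            // Only this thread ever touches the file; no shared reader exists.
            while (std::fread(buf.data(), 1, RECORD_LEN, f) == RECORD_LEN)
                tasks.push_back(std::async(std::launch::async,
                                           processRecord, index++, buf));
            for (auto& t : tasks) t.wait();              // wait for all workers
            std::fclose(f);
            return 0;
        }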

    Read the article

  • Casting Type array to Generic array?

    - by George R
    The short version of the question - why can't I do this? I'm restricted to .NET 3.5.

        T[] genericArray;
        // Obviously T should be float!
        genericArray = new T[3] { 1.0f, 2.0f, 0.0f };
        // Can't do this either, why the hell not
        genericArray = new float[3] { 1.0f, 2.0f, 0.0f };

    Longer version - I'm working with the Unity engine here, although that's not important. What is - I'm trying to throw in conversion between its fixed Vector2 (2 floats) and Vector3 (3 floats) and my generic Vector<T> class. I can't cast types directly to a generic array.

        using UnityEngine;

        public struct Vector<T>
        {
            private readonly T[] _axes;

            #region Constructors
            public Vector(int axisCount)
            {
                this._axes = new T[axisCount];
            }

            public Vector(T x, T y)
            {
                this._axes = new T[2] { x, y };
            }

            public Vector(T x, T y, T z)
            {
                this._axes = new T[3] { x, y, z };
            }

            public Vector(Vector2 vector2)
            {
                // This doesn't work
                this._axes = new T[2] { vector2.x, vector2.y };
            }

            public Vector(Vector3 vector3)
            {
                // Nor does this
                this._axes = new T[3] { vector3.x, vector3.y, vector3.z };
            }
            #endregion

            #region Properties
            public T this[int i]
            {
                get { return _axes[i]; }
                set { _axes[i] = value; }
            }

            public T X
            {
                get { return _axes[0]; }
                set { _axes[0] = value; }
            }

            public T Y
            {
                get { return _axes[1]; }
                set { _axes[1] = value; }
            }

            public T Z
            {
                get { return _axes[2]; }
                set { _axes[2] = value; }
            }
            #endregion

            #region Operators
            public static explicit operator Vector<T>(Vector2 vector2)
            {
                return new Vector<T>(vector2);
            }

            public static explicit operator Vector<T>(Vector3 vector3)
            {
                return new Vector<T>(vector3);
            }
            #endregion
        }

    Read the article

  • Which git binary am I using?

    - by Kuroki Kaze
    I just installed git 1.6.0 from source, but a strange thing is now happening to me:

        debian:~/git# git version
        git version 1.5.6.5
        debian:~/git# which git
        /usr/local/bin/git
        debian:~/git# /usr/local/bin/git version
        git version 1.6.0

    How can I make the 1.6.0 binary the default? The system is Debian Lenny. Git was installed with a simple ./configure && make && make all.

    Read the article

  • Using logarithms to normalize a vector to avoid overflow

    - by muscicapa
    http://stackoverflow.com/questions/2293762/problem-with-arithmetic-using-logarithms-to-avoid-numerical-underflow-take-2 Having seen the above, and having seen softmax normalization, I was trying to normalize a vector (x1, x2, x3, x4, ..., xn) while avoiding overflow - the normalized form, for me, has the sum of squares equal to 1.0. So what I thought of doing is s = (2*log(x1) + 2*log(x2) + ... + 2*log(xn))/2, so that the factor of 2 can be taken out, and finally the normalized vector is exp(log(x1)-s), ..., exp(log(xn)-s). But I am evidently doing something wrong here - what?
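
    The pitfall in the derivation above is that the norm needs the log of a sum, while a sum of logs gives the log of a product; the standard tool for the former is the log-sum-exp trick. A minimal C++ sketch of that trick applied to sum-of-squares normalization (the sample values are hypothetical):

        #include <algorithm>
        #include <cmath>
        #include <cstdio>
        #include <vector>

        int main() {
            // Hypothetical values whose squares would overflow a double.
            std::vector<double> x = {1e200, 2e200, 3e200};
            std::vector<double> a(x.size());
            for (size_t i = 0; i < x.size(); ++i)
                a[i] = 2.0 * std::log(std::fabs(x[i]));   // a_i = log(x_i^2)
            // log(sum exp(a_i)) = m + log(sum exp(a_i - m)) with m = max a_i,
            // so every exponent fed to exp() is <= 0 and cannot overflow.
            double m = *std::max_element(a.begin(), a.end());
            double s = 0.0;
            for (double ai : a) s += std::exp(ai - m);
            double logNorm = 0.5 * (m + std::log(s));     // log of sqrt(sum x_i^2)
            for (size_t i = 0; i < x.size(); ++i) {
                double sign = (x[i] < 0.0) ? -1.0 : 1.0;
                std::printf("%g\n", sign * std::exp(std::log(std::fabs(x[i])) - logNorm));
            }
            return 0;                                     // prints ~0.267, 0.535, 0.802
        }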

    Read the article

  • Numerical Integration of a vector in matlab?

    - by clowny
    Hello all! Well, I have an issue. I have a vector of 358 numbers. I'd like to perform numerical integration on this vector, but I don't know the function behind it. I found that we can use trapz or quad, but I don't really understand how to integrate without the function. Thanks for any help!
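
    trapz needs no formula: it treats the vector as samples of the function and sums the trapezoid areas between consecutive samples (quad, by contrast, does require a function handle). A minimal sketch of the same rule in C++, assuming uniformly spaced samples with spacing h:

        #include <cstdio>
        #include <vector>

        // Trapezoidal rule over sampled values y[0..n-1] with uniform spacing h;
        // this is what MATLAB's trapz does -- no formula for y is needed.
        double trapezoid(const std::vector<double>& y, double h) {
            double sum = 0.0;
            for (size_t i = 1; i < y.size(); ++i)
                sum += 0.5 * (y[i - 1] + y[i]) * h;    // area of one trapezoid
            return sum;
        }

        int main() {
            std::vector<double> y = {0.0, 1.0, 4.0, 9.0};  // hypothetical samples
            std::printf("%g\n", trapezoid(y, 1.0));        // 9.5 with unit spacing
        }

    In MATLAB itself this is trapz(y) for unit spacing, or trapz(x, y) when the sample points are known.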

    Read the article

  • Initialization vector - DES/triple-des algorithm

    - by user312373
    Hi, I had assumed that in order to generate encrypted data, defining a Key would suffice. But in .NET, DESCryptoServiceProvider requires both a Key and an Initialization Vector to generate the encrypted data. In this regard, I would like to know the importance of, and the benefit gained by, defining this initialization vector field. Is it mandatory when encrypting with the DES algorithm? Please share your thoughts on the same. Regards, Balu
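
    Some background that may help: in CBC mode (the default for DESCryptoServiceProvider), each plaintext block is XORed with the previous ciphertext block before encryption, and the IV stands in as the "previous block" for the first one, so the same plaintext encrypts differently under different IVs. A toy C++ illustration of the chaining - not real cryptography, the one-byte "block cipher" is a stand-in:

        #include <cstdint>
        #include <cstdio>
        #include <vector>

        // Stand-in for a real block cipher such as DES -- illustration only.
        uint8_t toyEncryptBlock(uint8_t block, uint8_t key) { return block ^ key; }

        std::vector<uint8_t> cbcEncrypt(const std::vector<uint8_t>& plain,
                                        uint8_t key, uint8_t iv) {
            std::vector<uint8_t> cipher;
            uint8_t prev = iv;                           // IV seeds the chain
            for (uint8_t p : plain) {
                uint8_t c = toyEncryptBlock(p ^ prev, key);  // chain previous block
                cipher.push_back(c);
                prev = c;
            }
            return cipher;
        }

        int main() {
            std::vector<uint8_t> msg = {'H', 'E', 'L', 'L', 'O'};
            auto c1 = cbcEncrypt(msg, 0x5A, 0x01);       // same key, different IVs...
            auto c2 = cbcEncrypt(msg, 0x5A, 0x02);
            std::printf("%02X vs %02X\n", c1[0], c2[0]); // ...different ciphertexts
        }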

    Read the article

  • Checking for duplicates in a vector

    - by xbonez
    I have to check a vector for duplicates. My current approach: take the first element and compare it against all other elements in the vector, then take the next element and do the same, and so on. Is this the best way to do it, or is there a more efficient way to check for dups?
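
    The pairwise scan is O(n^2); a common O(n log n) alternative is to sort a copy, after which any duplicate must sit next to its twin. A minimal C++ sketch, assuming the elements are comparable:

        #include <algorithm>
        #include <cstdio>
        #include <vector>

        bool hasDuplicates(std::vector<int> v) {   // taken by value: sorts a copy
            std::sort(v.begin(), v.end());
            return std::adjacent_find(v.begin(), v.end()) != v.end();
        }

        int main() {
            std::vector<int> v = {3, 1, 4, 1, 5};
            std::printf("%s\n", hasDuplicates(v) ? "has duplicates" : "all unique");
        }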

    Read the article

  • Save all file names in a directory to a vector

    - by Bobby
    I need to save all ".xml" file names in a directory to a vector. To make a long story short, I cannot use the dirent API. It seems as if C++ does not have any concept of "directories". Once I have the filenames in a vector, I can iterate through and "fopen" these files. Is there an easy way to get these filenames at runtime?
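
    For what it's worth, standard C++ eventually gained exactly this in C++17's <filesystem> (long after this question was asked). A minimal sketch collecting the ".xml" names from a directory:

        #include <cstdio>
        #include <filesystem>
        #include <string>
        #include <vector>

        std::vector<std::string> xmlFilesIn(const std::string& dir) {
            std::vector<std::string> names;
            for (const auto& entry : std::filesystem::directory_iterator(dir)) {
                // Keep regular files whose extension is exactly ".xml".
                if (entry.is_regular_file() && entry.path().extension() == ".xml")
                    names.push_back(entry.path().filename().string());
            }
            return names;
        }

        int main() {
            for (const auto& name : xmlFilesIn("."))   // current directory as an example
                std::puts(name.c_str());
        }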

    Read the article

  • Converting a ts object to a plain old vector in R

    - by taskforce145
    I need to apply a function to a vector, but the function does not accept ts objects. I'm trying to convert the ts object to a plain old vector, but I just can't seem to figure it out. I googled around, but mostly people are trying to convert data into ts objects - I want to go the other way. Any help would be appreciated.

    Read the article

  • Sort std::vector by an element inside?

    - by user146780
    I currently have a std::vector<std::vector<double>>, and I want to sort it by the second element of each inner double vector. For example: instead of sorting by myVec[0] or myVec[1] as whole objects, I want myVec[0] and myVec[1] ordered based on myVec[0][1] and myVec[1][1]. Basically, sort by a contained value, not by the objects themselves - so if myVec[0][1] is less than myVec[1][1], then myVec[0] and myVec[1] will swap. Thanks
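
    std::sort takes a custom comparator, which is the usual way to order by a contained value. A minimal sketch (C++11 lambda, ascending by each inner vector's second element):

        #include <algorithm>
        #include <cstdio>
        #include <vector>

        int main() {
            std::vector<std::vector<double>> v = {{5.0, 2.0}, {1.0, 9.0}, {3.0, 0.5}};
            std::sort(v.begin(), v.end(),
                      [](const std::vector<double>& a, const std::vector<double>& b) {
                          return a[1] < b[1];          // compare by second element
                      });
            for (const auto& row : v)
                std::printf("%g %g\n", row[0], row[1]);  // 3 0.5 / 5 2 / 1 9
        }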

    Read the article

  • C++ int vector to C#

    - by Stefan Koenen
    I'm doing a C# project and I want to call next_permutation from the algorithm library in C++. I found the way to call C++ functions in C#, but I don't know how to get vectors from C++ and use them in C# (because next_permutation requires an int vector...). This is what I'm trying at the moment:

        extern void NextPermutation(vector<int>& permutation)
        {
            next_permutation(permutation.begin(), permutation.end());
        }

        [DllImport("PEDLL.dll", CallingConvention = CallingConvention.Cdecl)]
        private static extern void NextPermutation(IntPtr test);
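
    std::vector has no ABI that P/Invoke can see, so the usual pattern is to export a plain C function taking a raw pointer and a length, and wrap next_permutation inside it. A sketch of the C++ side under that assumption (__declspec(dllexport) is MSVC syntax for the DLL build):

        // std::vector cannot cross the P/Invoke boundary, but int* + length can.
        #include <algorithm>

        extern "C" __declspec(dllexport)
        void NextPermutation(int* data, int length)
        {
            std::next_permutation(data, data + length);  // permutes in place
        }

    On the C# side the import would then declare an int[] plus an int length instead of an IntPtr; the default marshaller pins the array for the duration of the call.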

    Read the article

  • Split a binary file into chunks c++

    - by L4nce0
    I've been bashing my head against trying to first divide up a file into chunks, for the purpose of sending it over sockets. I can read/write a file easily without splitting it into chunks. The code below runs, works, kinda: it will write a text file, but with a garbage character - which, if this were just for text, would be no problem, but JPEGs aren't working with said garbage. I've been at it for a few days, so I've done my research, and it's time to get some help. I do want to stick strictly to binary readers, as this needs to handle any file. I've seen a lot of slick examples out there (none of them worked for me with JPGs), mostly something along the lines of while(file)... I subscribe to the "if you know the size, use a for-loop, not a while-loop" camp. Thank you for the help!!

        vector<char*> readFile(const char* fn)
        {
            vector<char*> v;
            ifstream::pos_type size;
            char* memblock;
            ifstream file;
            file.open(fn, ios::in | ios::binary | ios::ate);
            if (file.is_open())
            {
                size = fileS(fn);
                file.seekg(0, ios::beg);
                int bs = size / 3; // arbitrary. Actual program will use the socket send size
                int ws = 0;
                int i = 0;
                for (i = 0; i < size; i += bs)
                {
                    if (i + bs > size)
                        ws = size % bs;
                    else
                        ws = bs;
                    memblock = new char[ws];
                    file.read(memblock, ws);
                    v.push_back(memblock);
                }
            }
            else
            {
                exit(-4);
            }
            return v;
        }

        int main(int argc, char** argv)
        {
            vector<char*> v = readFile("foo.txt");
            ofstream myFile("bar.txt", ios::out | ios::binary);
            for (vector<char*>::iterator it = v.begin(); it != v.end(); ++it)
            {
                myFile.write(*it, strlen(*it));
            }
        }
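
    The giveaway is the strlen() in main: binary data can contain zero bytes, so strlen() cannot recover a chunk's length - the size has to travel with the bytes. A minimal sketch of that fix using one std::vector<char> per chunk (the chunk size and file names are placeholders):

        #include <fstream>
        #include <vector>

        std::vector<std::vector<char>> readChunks(const char* fn, size_t chunkSize) {
            std::vector<std::vector<char>> chunks;
            std::ifstream file(fn, std::ios::binary);
            std::vector<char> buf(chunkSize);
            // gcount() tells us how many bytes the last read produced,
            // so the final, shorter chunk is handled too.
            while (file.read(buf.data(), buf.size()) || file.gcount() > 0)
                chunks.emplace_back(buf.begin(), buf.begin() + file.gcount());
            return chunks;
        }

        int main() {
            auto chunks = readChunks("foo.jpg", 4096);
            std::ofstream out("bar.jpg", std::ios::binary);
            for (const auto& c : chunks)
                out.write(c.data(), c.size());   // size is stored, not strlen()'d
        }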

    Read the article

  • Creating a series of vectors from a vector

    - by bluetongue
    I have a simple two-vector dataframe (length=30) that looks something like this:

        > mDF
          Param1 w.IL.L
        1 AuZgFw    0.5
        2 AuZfFw      2
        3 AuZgVw   74.3
        4 AuZfVw  20.52
        5 AuTgIL   80.9
        6 AuTfIL  193.3
        7 AuCgFL    0.2
        8 ...

    I'd like to use each of the rows to form 30 single-value numeric vectors, with the name of each vector taken from mDF$Param1, so that:

        > AuZgFw
        [1] 0.5

    etc. I've tried melting and casting, but I suspect there may be an easier way?? Thanks in advance BT

    Read the article

  • Why Does Piping Binary Text to the Screen Often Hork a Terminal

    - by Alan Storm
    Imaginary situation: you've used mysqldump to create a backup of a MySQL database. This database has columns that are blobs. That means your "text" dump file contains both strings and binary data (binary data stored as strings?). If you cat this file to the screen

        $ cat dump.mysql

    you'll often get unexpected results. The terminal will start beeping, and when the output finishes scrolling by, you'll often have garbage characters entered on your terminal as though you'd typed them - and sometimes your prompt, and anything you type, will be garbage characters. Why does this happen? Put another way, I think I'm looking for an overview of what's actually happening when you store binary strings in a file, when you cat that file, when the results of the cat are reported to the terminal, and any other steps I'm missing.
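
    The gist of an answer: a terminal interprets certain bytes instead of printing them - 0x07 rings the bell, and ESC-prefixed sequences change terminal state such as the active character set, which is what leaves the prompt garbled afterwards. A tiny C++ demonstration (VT100/xterm-style sequences; other terminals vary):

        #include <cstdio>

        int main() {
            std::fputc(0x07, stdout);        // BEL: the terminal beeps
            std::fputs("\x1B(0", stdout);    // ESC ( 0: many terminals switch to the
                                             // DEC line-drawing set -- text turns to
            std::puts("hello");              // symbols, like in a corrupted prompt
            std::fputs("\x1B(B", stdout);    // ESC ( B: switch back to ASCII
        }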

    Read the article

  • Converting convex hull to binary mask

    - by Jonas
    I want to generate a binary mask that has ones for all pixels inside and zeros for all pixels outside a volume. The volume is defined by the convex hull around a set of 3D coordinates (fewer than 100; some of the coordinates are inside the volume). I can get the convex hull using CONVHULLN, but how do I convert that into a binary mask? In case there is no good way to go via the convex hull, do you have any other idea for how I could create the binary mask?
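
    One workable route: a convex hull is an intersection of half-spaces, so a grid point is inside exactly when it lies on the inner side of every face plane, and evaluating that test at each voxel yields the mask. A C++ sketch of the per-point test, assuming each hull face is given with an outward normal (which would have to be derived from the CONVHULLN facets):

        #include <array>
        #include <cstdio>
        #include <vector>

        // A hull face: outward normal plus any vertex on the face plane.
        struct Face {
            std::array<double, 3> normal;
            std::array<double, 3> point;
        };

        // Inside (or on) the hull iff on the inner side of every face plane.
        bool insideHull(const std::vector<Face>& faces, const std::array<double, 3>& p) {
            for (const auto& f : faces) {
                double d = 0.0;
                for (int i = 0; i < 3; ++i)
                    d += f.normal[i] * (p[i] - f.point[i]);  // signed distance (scaled)
                if (d > 1e-12) return false;                 // outside this plane
            }
            return true;
        }

        int main() {
            // Hypothetical unit cube as six axis-aligned faces.
            std::vector<Face> cube = {
                {{{1, 0, 0}}, {{1, 0, 0}}}, {{{-1, 0, 0}}, {{0, 0, 0}}},
                {{{0, 1, 0}}, {{0, 1, 0}}}, {{{0, -1, 0}}, {{0, 0, 0}}},
                {{{0, 0, 1}}, {{0, 0, 1}}}, {{{0, 0, -1}}, {{0, 0, 0}}},
            };
            std::printf("%d\n", insideHull(cube, {{0.5, 0.5, 0.5}}));  // 1: inside
        }

    Within MATLAB, tsearchn over delaunayn(X) is a common shortcut, since it returns NaN for query points outside the hull.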

    Read the article

  • Problem calling stored procedure with a fixed length binary parameter using Entity Framework

    - by Dave
    I have a problem calling stored procedures with a fixed-length binary parameter using Entity Framework. The stored procedure ends up being called with 8000 bytes of data no matter what size byte array I use to call the function import. To give an example, this is the code I am using:

        byte[] cookie = new byte[32];
        byte[] data = new byte[2];
        entities.Insert("param1", "param2", cookie, data);

    The parameters are nvarchar(50), nvarchar(50), binary(32), varbinary(2000). When I run the code through SQL Profiler, I get this result:

        exec [dbo].[Insert]
            @param1=N'param1',
            @param2=N'param2',
            @cookie=0x0000000000000000000000000000000000000000000000000000000000000000... [SNIP because of 16000 zeros],
            @data=0x0000

    All parameters went through OK other than the binary(32) cookie; the varbinary(2000) seemed to work fine, and the correct length was maintained. Is there a way to prevent the extra data being sent to SQL Server? This seems like a big waste of network resources.

    Read the article
