Search Results

Search found 9387 results on 376 pages for 'double byte'.


  • Why do you have to mark a class with the attribute [Serializable]?

    - by Blankman
    Seeing as you can convert any document to a byte array, save it to disk, and then rebuild the file in its original form (as long as you have metadata for its filename etc.), why do you have to mark a class with [Serializable] etc.? Is that just the same idea, "metadata"-type information, so that when you cast the object back to its class, things are mapped properly?
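
    Broadly, yes: the attribute is an opt-in marker, and the serializer then records type metadata so object graphs (strings, references, nested objects) can be rebuilt. A raw byte dump only round-trips flat data; here is a hedged C++ analogy (illustrative struct, not the .NET mechanism itself) of why pointers break the byte-array approach:

        #include <cstdio>

        struct Flat { int id; double value; };    // plain bytes: safe to dump

        int main()
        {
            Flat out = {42, 3.14}, in = {0, 0.0};

            FILE *f = std::fopen("flat.bin", "wb");
            std::fwrite(&out, sizeof out, 1, f);  // "byte array" style save
            std::fclose(f);

            f = std::fopen("flat.bin", "rb");
            std::fread(&in, sizeof in, 1, f);     // "byte array" style rebuild
            std::fclose(f);
            std::printf("%d %f\n", in.id, in.value);

            // A member like char* would dump the pointer value, not the text
            // it points to; reloading it in another process yields garbage.
            // That is what per-type metadata in a real serializer avoids.
            return 0;
        }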

    Read the article

  • C++ DLL creation for C# project - No functions exported

    - by Yeti
    I am working on a project that requires some image processing. The front end of the program is C# (because the guys thought it is a lot simpler to build the UI with it). However, as the image processing needs a lot of CPU juice, I am writing that part in C++. The idea is to link it to the C# project and just call a function from a DLL to do the image processing, then let the C# side process the data afterwards. Now the only problem is that it seems I am not able to build the DLL. Simply put, the compiler refuses to put any function into the DLL that I compile. Because the project requires testing during development, I created two projects in one C++ solution: one for the DLL and another for a console application. The console project holds all the files, and I just include the corresponding header into my DLL project file. I thought the compiler would take the functions that I marked for export and build the DLL from them. Nevertheless, this does not happen. Here is how I declared the functions in the header:

        extern "C" __declspec(dllexport) void _stdcall RobotData(BYTE* buf,
            int** pToNewBackgroundImage, int* pToBackgroundImage, bool InitFlag,
            ObjectInformation* robot1, ObjectInformation* robot2,
            ObjectInformation* robot3, ObjectInformation* robot4,
            ObjectInformation* puck);

        extern "C" __declspec(dllexport) CvPoint _stdcall RefPointFinder(
            IplImage* imgInput, CvRect &imgROI,
            CvScalar &refHSVColorLow, CvScalar &refHSVColorHi);

    Followed by the implementation in the cpp file:

        extern "C" __declspec(dllexport) CvPoint _stdcall RefPointFinder(
            IplImage* imgInput, CvRect &imgROI,
            CvScalar &refHSVColorLow, CvScalar &refHSVColorHi)
        {
            // ...
            return cvPoint((int)(M10/M00) + imgROI.x, (int)(M01/M00) + imgROI.y);
        }

        extern "C" __declspec(dllexport) void _stdcall RobotData(BYTE* buf,
            int** pToNewBackgroundImage, int* pToBackgroundImage, bool InitFlag,
            ObjectInformation* robot1, ObjectInformation* robot2,
            ObjectInformation* robot3, ObjectInformation* robot4,
            ObjectInformation* puck)
        {
            // ...
        }

    And my main file for the DLL project looks like:

        #ifdef _MANAGED
        #pragma managed(push, off)
        #endif

        /// <summary> Include files. </summary>
        #include "..\ImageProcessingDebug\ImageProcessingTest.h"
        #include "..\ImageProcessingDebug\ImageProcessing.h"

        BOOL APIENTRY DllMain(HMODULE hModule, DWORD ul_reason_for_call, LPVOID lpReserved)
        {
            return TRUE;
        }

        #ifdef _MANAGED
        #pragma managed(pop)
        #endif

    Needless to say, it does not work. A quick look with DLL Export Viewer 1.36 reveals that no function is inside the library. I don't get it. What am I doing wrong? As a side note, I am using C++ objects (and here is the C++ DLL part) such as the vector, but only for internal usage; they do not appear in the headers of either function, as you can see from the previous code snippets. Any ideas? Thx, Bernat
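
    For comparison, here is a hedged sketch of a minimal export that does show up in a DLL, assuming a Visual C++ DLL project (names illustrative). The usual pitfall with this two-project layout is that including a header from the console project does not compile its .cpp into the DLL: the defining source files must be members of the DLL project itself (or the DLL must link their object files), or nothing ends up exported.

        // mathdll.h -- illustrative names
        #ifdef MATHDLL_EXPORTS
          #define MATH_API extern "C" __declspec(dllexport)
        #else
          #define MATH_API extern "C" __declspec(dllimport)
        #endif

        MATH_API int __stdcall AddPixels(int a, int b);

        // mathdll.cpp -- must be compiled as part of the DLL project,
        // not merely #included from another project
        #define MATHDLL_EXPORTS
        #include "mathdll.h"

        MATH_API int __stdcall AddPixels(int a, int b)
        {
            return a + b;   // stand-in for the real image-processing work
        }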

    Read the article

  • Why won't the following Haskell code compile?

    - by voxcogitatio
    I'm in the process of writing a small Lisp interpreter in Haskell. Along the way I defined this datatype, to get a loosely typed number:

        data Number = _Int Integer
                    | _Rational Rational
                    | _Float Double
                    deriving (Eq, Show)

    Compiling this fails with the following error:

        ERROR "types.hs":16 - Syntax error in data type declaration (unexpected `|')

    Line 16 is the line with the first '|' in the code above.

    Read the article

  • Why are so many new languages written for the Java VM?

    - by sdudo
    There are more and more programming languages (Scala, Clojure, ...) coming out that are made for the Java VM and are therefore compatible with Java bytecode. I'm beginning to ask myself: why the Java VM? What makes it so powerful or popular that new programming languages, which seem to be gaining popularity too, are created for it? Why don't they write a new VM for a new language?

    Read the article

  • Communicate between separate MPI-Programs

    - by Fyg
    I have the following problem: program 1 has a huge amount of data, say 10 GB, consisting of large integer and double arrays. Program 2 has 1..n MPI processes that use tiles of this data to compute results. How can I send the data from program 1 to the MPI processes? Using file I/O is out of the question. The compute node has sufficient RAM.
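
    One hedged option, assuming both programs run on an MPI-2-capable implementation: the port/connect mechanism lets two independently started MPI programs form an intercommunicator and stream the arrays directly, with no files involved. A minimal sketch of the side that owns the data (names and tile size illustrative):

        #include <mpi.h>
        #include <vector>

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);

            char port[MPI_MAX_PORT_NAME];
            MPI_Comm client;
            MPI_Open_port(MPI_INFO_NULL, port);   // publish this string out of band
            MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &client);

            std::vector<double> tile(1 << 20);    // one tile of the large data set
            MPI_Send(tile.data(), (int)tile.size(), MPI_DOUBLE, 0, 0, client);

            MPI_Comm_disconnect(&client);
            MPI_Close_port(port);
            MPI_Finalize();
            return 0;
        }

    Program 2 would call MPI_Comm_connect with the same port string and MPI_Recv on the resulting intercommunicator, one tile per worker rank.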

    Read the article

  • Warcraft 3 packet information [closed]

    - by ajay009ajay
    Hello all, I have made a program that captures the data going from the game to the server and from the server to the game. I want to keep a record of it in a file, but my problem is that the raw data is not in a format I can read easily: I am reading it all as bytes (in Java). Can anybody explain the header or data layout of a packet, so I can read it in a human-friendly manner? Thanks.
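
    As a first step toward human-readable logs, here is a hedged sketch (in C++; the same structure ports to Java) of a classic hex dump, printing offset, hex bytes, and ASCII, which makes recurring header fields easy to spot by eye without assuming anything about the game's actual protocol:

        #include <cctype>
        #include <cstdint>
        #include <cstdio>
        #include <vector>

        // Print a captured packet as offset / hex / ASCII, 16 bytes per row.
        void hex_dump(const std::vector<uint8_t> &buf)
        {
            for (size_t i = 0; i < buf.size(); i += 16) {
                std::printf("%04zx  ", i);
                for (size_t j = 0; j < 16; ++j) {
                    if (i + j < buf.size())
                        std::printf("%02x ", buf[i + j]);
                    else
                        std::printf("   ");   // pad the last row
                }
                std::printf(" ");
                for (size_t j = 0; j < 16 && i + j < buf.size(); ++j)
                    std::printf("%c", std::isprint(buf[i + j]) ? buf[i + j] : '.');
                std::printf("\n");
            }
        }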

    Read the article

  • Fundamental types

    - by smerlin
    I always thought the following types are "fundamental types", so I thought my answer to this question would be correct, but surprisingly it got downvoted... Searching the web, I found this. So IBM also says those types are fundamental types. Well, how do you interpret the Standard: are the following types (and similar types) "fundamental types" according to the C++ standard?

        unsigned int
        signed char
        long double
        long long
        long long int
        unsigned long long int
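
    For what it's worth, the type-traits machinery agrees with that reading; a small sketch (assumes a C++11 compiler for <type_traits> and static_assert):

        #include <type_traits>

        // Each assertion fails to compile if the type is not fundamental.
        static_assert(std::is_fundamental<unsigned int>::value, "");
        static_assert(std::is_fundamental<signed char>::value, "");
        static_assert(std::is_fundamental<long double>::value, "");
        static_assert(std::is_fundamental<long long>::value, "");
        static_assert(std::is_fundamental<long long int>::value, "");
        static_assert(std::is_fundamental<unsigned long long int>::value, "");

        int main() { return 0; }   // compiles only if all assertions hold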

    Read the article

  • Jumping into argv?

    - by jth
    Hi, I'm experimenting with shellcode and stumbled upon the NOP-slide technique. I wrote a little tool that takes a buffer size as a parameter and constructs a buffer like this: [ NOP | SC | RET ], with the NOPs taking half of the buffer, followed by the shellcode and the rest filled with the (guessed) return address. It's very similar to the tool aleph1 described in his famous paper. My vulnerable test app is the same as in his paper:

        #include <string.h>

        int main(int argc, char **argv)
        {
            char little_array[512];
            if (argc > 1)
                strcpy(little_array, argv[1]);
            return 0;
        }

    I tested it and, well, it works:

        jth@insecure:~/no_nx_no_aslr$ ./victim $(./exploit 604 0)
        $ exit

    But honestly, I have no idea why. Okay, the saved eip was overwritten as intended, but instead of jumping somewhere into the buffer, it jumped into argv, I think. gdb showed the following addresses before strcpy() was called:

        (gdb) i f
        Stack level 0, frame at 0xbffff1f0:
         eip = 0x80483ed in main (victim.c:7); saved eip 0x154b56
         source language c.
         Arglist at 0xbffff1e8, args: argc=2, argv=0xbffff294
         Locals at 0xbffff1e8, Previous frame's sp is 0xbffff1f0
         Saved registers:
          ebp at 0xbffff1e8, eip at 0xbffff1ec

    Address of little_array:

        (gdb) print &little_array[0]
        $1 = 0xbfffefe8 "\020"

    After strcpy():

        (gdb) i f
        Stack level 0, frame at 0xbffff1f0:
         eip = 0x804840d in main (victim.c:10); saved eip 0xbffff458
         source language c.
         Arglist at 0xbffff1e8, args: argc=-1073744808, argv=0xbffff458
         Locals at 0xbffff1e8, Previous frame's sp is 0xbffff1f0
         Saved registers:
          ebp at 0xbffff1e8, eip at 0xbffff1ec

    So, what happened here? I used a 604-byte buffer to overflow little_array, so it certainly overwrote the saved ebp, the saved eip, argc, and also argv with the guessed address 0xbffff458. Then, after returning, EIP pointed at 0xbffff458. But little_array resides at 0xbfffefe8; that's a difference of 1136 bytes, so it certainly isn't executing little_array. I followed execution with the stepi command and, well, at 0xbffff458 and onwards it executes NOPs and reaches the shellcode. I'm not quite sure why this is happening. First of all, am I correct that it executes my shellcode in argv, not little_array? And where does the loader(?) place argv on the stack? I thought it follows immediately after argc, but between argc and 0xbffff458 there is a gap of 620 bytes. How is it possible that it successfully "lands" in the NOP pad at address 0xbffff458, which is way above the saved eip at 0xbffff1ec? Can someone clarify this? I actually have no idea why this is working. My test machine is an Ubuntu 9.10 32-bit machine without ASLR. victim has an executable stack, set with execstack -s. Thanks in advance.
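
    A hedged way to see this on a similar 32-bit system (a sketch, not part of the original exploit tooling): print where a local buffer and argv[1] actually live. The kernel copies the argv strings to the very top of the stack at exec time, above main's frame, so the second, higher copy of the exploit buffer is the one being executed at 0xbffff458.

        #include <stdio.h>

        int main(int argc, char **argv)
        {
            char local_buf[512];

            /* Locals live inside main's stack frame... */
            printf("local_buf: %p\n", (void *)local_buf);
            /* ...while the argv strings sit near the top of the stack. */
            if (argc > 1)
                printf("argv[1]:   %p\n", (void *)argv[1]);
            return 0;
        }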

    Read the article

  • How do I get the file size of a large (> 4 GB) file?

    - by endeavormac
    How can I get the size of a file in C when the file is larger than 4 GB? ftell returns a 4-byte signed long, limiting it to files of at most 2 GB. stat has a field of type off_t, which is also 4 bytes on my system (I'm not sure whether it is signed), so at most it can report the size of a 4 GB file. What if the file is larger than 4 GB?
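
    A hedged POSIX sketch: defining _FILE_OFFSET_BITS to 64 before any include makes off_t 64-bit, so stat (or ftello/fseeko in place of ftell/fseek) can report sizes past 4 GB. On Windows the analogous calls are _ftelli64 and _stat64.

        #define _FILE_OFFSET_BITS 64    /* must come before the includes */
        #include <stdio.h>
        #include <sys/stat.h>

        long long file_size(const char *path)
        {
            struct stat st;
            if (stat(path, &st) != 0)
                return -1;                    /* could not stat the file */
            return (long long)st.st_size;     /* off_t is 64-bit here */
        }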

    Read the article

  • How to handle when SSRS does not automatically update fields based on database query?

    - by badpanda
    I am trying to change the number of fields in my dataset in SSRS, and the refresh button is not picking up the added field from the SQL Server. The query definitely returns the correct data, as I have double-checked in the server engine itself. I have also tried manually adding the field through the SSRS menu, but as soon as I execute the query, it disappears. Any suggestions or similar experiences?

    Read the article

  • XmlSerializer.Deserialize blocks over NetworkStream

    - by Luca
    I'm trying to send XML-serializable objects over a network stream. I've already used this on a UDP broadcast server, where it receives UDP messages from the local network. Here is a snippet of the server side:

        while (mServiceStopFlag == false) {
            if (mSocket.Available > 0) {
                IPEndPoint ipEndPoint = new IPEndPoint(IPAddress.Any, DiscoveryPort);
                byte[] bData;

                // Receive discovery message
                bData = mSocket.Receive(ref ipEndPoint);
                // Handle discovery message
                HandleDiscoveryMessage(ipEndPoint.Address, bData);
                ...

    And this is the client side:

        IPEndPoint ipEndPoint = new IPEndPoint(IPAddress.Broadcast, DiscoveryPort);
        MemoryStream mStream = new MemoryStream();
        byte[] bData;

        // Create broadcast UDP server
        mSocket = new UdpClient();
        mSocket.EnableBroadcast = true;

        // Create datagram data
        foreach (NetService s in ctx.Services)
            XmlHelper.SerializeClass<NetService>(mStream, s);
        bData = mStream.GetBuffer();

        // Notify the services
        while (mServiceStopFlag == false) {
            mSocket.Send(bData, (int)mStream.Length, ipEndPoint);
            Thread.Sleep(DefaultServiceLatency);
        }

    It works very well. But now I'm trying to get the same result over a TcpClient socket, directly using an XmlSerializer instance. On the server side:

        TcpClient sSocket = k.Key;
        ServiceContext sContext = k.Value;
        Message msg = new Message();

        while (sSocket.Connected == true) {
            if (sSocket.Available > 0) {
                StreamReader tr = new StreamReader(sSocket.GetStream());
                msg = (Message)mXmlSerialize.Deserialize(tr);

                // Handle message
                msg = sContext.Handler(msg);
                // Reply with another message
                if (msg != null)
                    mXmlSerialize.Serialize(sSocket.GetStream(), msg);
            } else
                Thread.Sleep(40);
        }

    And on the client side:

        NetworkStream mSocketStream;
        Message rMessage;

        // Network stream
        mSocketStream = mSocket.GetStream();
        // Send the message
        mXmlSerialize.Serialize(mSocketStream, msg);
        // Receive the answer
        rMessage = (Message)mXmlSerialize.Deserialize(mSocketStream);

        return (rMessage);

    The data is sent (the Available property is greater than 0), but the mXmlSerialize.Deserialize call (which should deserialize the Message class) blocks. What am I missing?
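
    A likely culprit, stated as a guess: XmlSerializer.Deserialize keeps reading until the underlying stream signals end-of-data, and an open TCP connection never does, so the call blocks waiting for more bytes. The usual remedy is to frame each message so the receiver knows exactly where it ends. A minimal sketch of length-prefix framing (shown in C++ with illustrative names; the same idea carries over to the C# stream code):

        #include <cstdint>
        #include <cstring>
        #include <string>
        #include <vector>

        // Prepend a 4-byte little-endian length so the receiver can read the
        // header, then exactly that many payload bytes, and only then hand the
        // complete message to the deserializer.
        std::vector<uint8_t> frame(const std::string &payload)
        {
            uint32_t len = (uint32_t)payload.size();
            std::vector<uint8_t> out(4 + payload.size());
            for (int i = 0; i < 4; ++i)
                out[i] = (uint8_t)(len >> (8 * i));
            std::memcpy(out.data() + 4, payload.data(), payload.size());
            return out;
        }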

    Read the article

  • SDL_GL_SwapBuffers Segfault

    - by RyanG
    I'm getting a segfault that GDB says is coming from SDL_GL_SwapBuffers. However, I can't imagine why. The SDL documentation mentions no specific preconditions for calling SDL_GL_SwapBuffers except that double buffering be enabled. Is this an option I have to turn on while initializing OpenGL, or is it a hardware capability thing? My code: http://pastie.org/859721 (ignore the unused variables, strange comments and other things; I haven't prettied this up at all. :P)
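
    If this is SDL 1.2 (as SDL_GL_SwapBuffers suggests), here is a hedged checklist in code form: double buffering is requested with SDL_GL_SetAttribute before the video mode is set, and swapping is only valid once an SDL_OPENGL surface actually exists, so a failed SDL_SetVideoMode followed by a swap is a classic segfault.

        #include <SDL/SDL.h>

        int main(int argc, char *argv[])
        {
            if (SDL_Init(SDL_INIT_VIDEO) != 0)
                return 1;

            SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);    // request it first
            if (SDL_SetVideoMode(640, 480, 32, SDL_OPENGL) == NULL)
                return 1;                                   // no GL surface: never swap

            /* ... render with OpenGL ... */
            SDL_GL_SwapBuffers();                           // safe only after the above
            SDL_Quit();
            return 0;
        }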

    Read the article

  • Text from a file turned into a variable?

    - by b3y4z1d
    If I made a program that stores strings in a text file using the "list" container (#include <list>), and then I want to copy all of the text from that file and give it a single name (so I can tell the program to type all of the copied text somewhere by referring to that one variable), what do I declare that chunk of text as: a string, a double, an int, or something else? I'm writing the program in C++ as a simple console application.
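
    A hedged sketch of the usual answer: the chunk of text is a std::string, and the whole file can be slurped into one in a few lines (file name illustrative):

        #include <fstream>
        #include <iostream>
        #include <sstream>
        #include <string>

        int main()
        {
            std::ifstream in("notes.txt");     // the text file written earlier
            std::stringstream buffer;
            buffer << in.rdbuf();              // read the entire file
            std::string text = buffer.str();   // one variable holding all the text
            std::cout << text;                 // "type it in" somewhere
            return 0;
        }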

    Read the article
