Search Results

Search found 5946 results on 238 pages for 'heavy bytes'.


  • Can't serve HTML5 video through PHP on Safari/Mac (5.0)

    - by JKS
    I'm encountering a strange bug in Safari where, when I serve MP4 video through PHP (to obfuscate the file beneath the document root with a token-based authentication system), Safari for some reason fires the <video>'s onerror event, and the video never loads (I can't get any useful information out of the event object sent to onerror — everything is undefined). When I access the PHP script directly (i.e., the video is not embedded in a page), the video controls appear momentarily before flashing to a QuickTime question mark. When I access the MP4 file directly, it works as expected. What's bizarre is that the embedded video works perfectly in the latest version of Chrome for Mac.

    Here are the headers when accessed through PHP:

        Connection: Keep-Alive
        Content-Disposition: inline; filename="test.mp4"
        Content-Length: 5558749
        Content-Type: video/mp4
        Date: Tue, 22 Jun 2010 01:24:25 GMT
        Keep-Alive: timeout=10, max=29
        Server: Apache/2.2.15 (CentOS) mod_ssl/2.2.15 0.9.8l DAV/2 mod_auth_passthrough/2.1 FrontPage/5.0.2.2635
        X-Powered-By: PHP/5.2.13

    And here are the headers when test.mp4 is accessed directly:

        Accept-Ranges: bytes
        Connection: Keep-Alive
        Content-Length: 5558749
        Content-Type: video/mp4
        Date: Tue, 22 Jun 2010 01:26:45 GMT
        Etag: "1c04757-54d1dd-489944c5a6400"
        Keep-Alive: timeout=10, max=30
        Last-Modified: Tue, 22 Jun 2010 01:25:36 GMT
        Server: Apache/2.2.15 (CentOS) mod_ssl/2.2.15 0.9.8l DAV/2 mod_auth_passthrough/2.1 FrontPage/5.0.2.2635

    The only differing headers are: Accept-Ranges (which I don't think is necessary), Etag, Last-Modified, Content-Disposition, and X-Powered-By. Not only can Chrome handle the PHP-served video fine, but when I use the same script to load the MP4 through a Flash player, it also works fine. I just can't figure out what Safari is choking on.

    EDIT: Also, when I change the content disposition to "attachment", Safari will download the MP4 file just fine.
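
    A likely suspect is byte-range support: the directly served file advertises Accept-Ranges while the PHP response does not, and Safari/QuickTime leans on Range requests when streaming MP4. A minimal, hedged sketch of honoring Range in a PHP passthrough (the path and buffering strategy are illustrative, not from the original script):

        <?php
        // Hedged sketch: serve an MP4 from below the document root with byte-range support.
        $path  = '/path/below/docroot/test.mp4';   // hypothetical location
        $size  = filesize($path);
        $start = 0;
        $end   = $size - 1;
        if (isset($_SERVER['HTTP_RANGE']) &&
            preg_match('/bytes=(\d+)-(\d*)/', $_SERVER['HTTP_RANGE'], $m)) {
            $start = (int) $m[1];
            if ($m[2] !== '') { $end = (int) $m[2]; }
            header('HTTP/1.1 206 Partial Content');
            header("Content-Range: bytes $start-$end/$size");
        }
        header('Content-Type: video/mp4');
        header('Accept-Ranges: bytes');
        header('Content-Length: ' . ($end - $start + 1));
        $fp = fopen($path, 'rb');
        fseek($fp, $start);
        echo fread($fp, $end - $start + 1);   // fine for a sketch; chunk the read for very large files
        fclose($fp);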

  • Why does using the Asynchronous Programming Model in .Net not lead to StackOverflow exceptions?

    - by uriDium
    For example, we call BeginReceive and have the callback method that BeginReceive executes when it has completed. If that callback method once again calls BeginReceive, in my mind it would be very similar to recursion. How is it that this does not cause a StackOverflowException? Example code from MSDN:

        private static void Receive(Socket client) {
            try {
                // Create the state object.
                StateObject state = new StateObject();
                state.workSocket = client;

                // Begin receiving the data from the remote device.
                client.BeginReceive(state.buffer, 0, StateObject.BufferSize, 0,
                    new AsyncCallback(ReceiveCallback), state);
            } catch (Exception e) {
                Console.WriteLine(e.ToString());
            }
        }

        private static void ReceiveCallback(IAsyncResult ar) {
            try {
                // Retrieve the state object and the client socket
                // from the asynchronous state object.
                StateObject state = (StateObject) ar.AsyncState;
                Socket client = state.workSocket;

                // Read data from the remote device.
                int bytesRead = client.EndReceive(ar);
                if (bytesRead > 0) {
                    // There might be more data, so store the data received so far.
                    state.sb.Append(Encoding.ASCII.GetString(state.buffer, 0, bytesRead));

                    // Get the rest of the data.
                    client.BeginReceive(state.buffer, 0, StateObject.BufferSize, 0,
                        new AsyncCallback(ReceiveCallback), state);
                } else {
                    // All the data has arrived; put it in response.
                    if (state.sb.Length > 1) {
                        response = state.sb.ToString();
                    }
                    // Signal that all bytes have been received.
                    receiveDone.Set();
                }
            } catch (Exception e) {
                Console.WriteLine(e.ToString());
            }
        }
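
    The short version of why the stack survives (hedged, since it is not spelled out in the excerpt): when BeginReceive completes asynchronously, the callback is dispatched on an I/O thread-pool thread with a fresh stack, so the chain of callbacks never nests. Only a synchronous completion runs the callback on the caller's stack, which is why defensive implementations check for it, roughly like this:

        private static void ReceiveCallbackGuarded(IAsyncResult ar)   // illustrative name, not part of the MSDN sample
        {
            if (ar.CompletedSynchronously)
            {
                // Completed on the thread that called BeginReceive: returning here and
                // handling the result in a loop at the call site (or queuing it to the
                // thread pool) avoids nesting stack frames in this rare case.
                return;
            }
            // Asynchronous completion: this frame sits at the bottom of a fresh
            // thread-pool stack, so calling BeginReceive again is not recursion.
        }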

  • PHP Outputting File Attachments with Headers

    - by OneNerd
    After reading a few posts here I formulated this function, which is sort of a mishmash of a bunch of others:

        function outputFile( $filePath, $fileName, $mimeType = '' ) {
            // Setup
            $mimeTypes = array(
                'pdf'  => 'application/pdf',
                'txt'  => 'text/plain',
                'html' => 'text/html',
                'exe'  => 'application/octet-stream',
                'zip'  => 'application/zip',
                'doc'  => 'application/msword',
                'xls'  => 'application/vnd.ms-excel',
                'ppt'  => 'application/vnd.ms-powerpoint',
                'gif'  => 'image/gif',
                'png'  => 'image/png',
                'jpeg' => 'image/jpg',
                'jpg'  => 'image/jpg',
                'php'  => 'text/plain'
            );
            // Send Headers
            //-- next line fixed as per suggestion --
            header('Content-Type: ' . $mimeTypes[$mimeType]);
            header('Content-Disposition: attachment; filename="' . $fileName . '"');
            header('Content-Transfer-Encoding: binary');
            header('Accept-Ranges: bytes');
            header('Cache-Control: private');
            header('Pragma: private');
            header('Expires: Mon, 26 Jul 1997 05:00:00 GMT');
            readfile($filePath);
        }

    I have a php page (file.php) which does something like this (lots of other code stripped out):

        // I run this thru a safe function not shown here
        $safe_filename = $_GET['filename'];
        outputFile("/the/file/path/{$safe_filename}", $safe_filename, substr($safe_filename, -3));

    Seems like it should work, and it almost does, but I am having the following issues:
      • When it's a text file, I am getting a strange symbol as the first letter in the text document.
      • When it's a Word doc, it is corrupt (presumably that same first bit or byte throwing things off).
      • I presume all other file types will be corrupt - have not even tried them.
    Any ideas on what I am doing wrong? Thanks - UPDATE: changed line of code as suggested - still same issue.
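
    One hedged guess about the stray first character: it is often a UTF-8 BOM or stray whitespace emitted before the file body (text before <?php, or after a closing ?> in an included file), which corrupts binary formats like .doc while a browser shrugs it off in HTML. Discarding any buffered output just before streaming, and stopping the script right after, rules that out:

        header('Content-Length: ' . filesize($filePath));  // also lets download managers show progress
        while (ob_get_level() > 0) {
            ob_end_clean();          // drop anything already echoed or buffered (BOM, whitespace)
        }
        readfile($filePath);
        exit;                        // make sure nothing gets appended after the payload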

  • Error in C++, what to do?: could not find a match for ostream::write(long *, unsigned int)

    - by Shantanu Gupta
    I am trying to write data to a binary file using Turbo C++, but it shows me the error "could not find a match for ostream::write(long *, unsigned int)". I want to write 4 bytes of long data into that file. When I write the data using a char pointer, it runs successfully, but I want to store a large value, e.g. 2454545454, which can be stored only in a long. I don't know how to convert 1 byte into bits; I have 1 byte of data as a character. Moreover, what I am trying to do is to convert 4 chars into a long and store data into it, and on the other side I want to reverse this so as to retrieve how many bytes of data I have written.

        long *lmem;
        lmem = new long;
        *lmem = Tsize;
        fo.write(lmem, sizeof(long)); // error occurs here
        delete lmem;

    I am implementing steganography and have successfully stored a txt file into an image, but am now trying to retrieve that file data.
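
    For what it's worth, the usual way around that error (hedged, since Turbo C++'s iostreams are pre-standard) is that ostream::write() takes a char pointer, so the long has to be handed over as raw bytes; a value such as 2454545454 also needs unsigned long, since it does not fit in a signed 32-bit long:

        unsigned long value = Tsize;
        fo.write((char *) &value, sizeof(value));    // write the 4 bytes of the long

        // ...and the reverse when extracting (fi here is a hypothetical input stream):
        unsigned long readBack;
        fi.read((char *) &readBack, sizeof(readBack));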

  • How to properly recreate BITMAP, that was previously shared by CreateFileMapping()?

    - by zim22
    Dear friends, I need your help. I need to send a .bmp file to another process (a dialog box) and display it there, using an MMF (Memory Mapped File). But the problem is that the image displays in reversed colors and upside down.

    In the first application I open a picture from the HDD and link it to the named MMF "Gigabyte_picture":

        HANDLE hFile = CreateFile("123.bmp", GENERIC_READ, FILE_SHARE_READ, NULL,
                                  OPEN_EXISTING, 0, NULL);
        CreateFileMapping(hFile, NULL, PAGE_READONLY, 0, 0, "Gigabyte_picture");

    In the second application I open the mapped bmp file and at the end display m_HBitmap on the static component, using the SendMessage function:

        HANDLE hMappedFile = OpenFileMapping(FILE_MAP_READ, FALSE, "Gigabyte_picture");
        PBYTE pbData = (PBYTE) MapViewOfFile(hMappedFile, FILE_MAP_READ, 0, 0, 0);

        BITMAPINFO bmpInfo = { 0 };
        LONG lBmpSize = 60608; // size of the bmp file in bytes
        bmpInfo.bmiHeader.biBitCount = 32;
        bmpInfo.bmiHeader.biHeight = 174;
        bmpInfo.bmiHeader.biWidth = 87;
        bmpInfo.bmiHeader.biPlanes = 1;
        bmpInfo.bmiHeader.biSizeImage = lBmpSize;
        bmpInfo.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);

        UINT * pPixels = 0;
        HDC hDC = CreateCompatibleDC(NULL);
        HBITMAP m_HBitmap = CreateDIBSection(hDC, &bmpInfo, DIB_RGB_COLORS,
                                             (void **)& pPixels, NULL, 0);
        SetBitmapBits(m_HBitmap, lBmpSize, pbData);
        SendMessage(gStaticBox, STM_SETIMAGE, (WPARAM)IMAGE_BITMAP, (LPARAM)m_HBitmap);

        /////////////
        HWND gStaticBox = CreateWindowEx(0, "STATIC", "",
            SS_CENTERIMAGE | SS_REALSIZEIMAGE | SS_BITMAP | WS_CHILD | WS_VISIBLE,
            10, 10, 380, 380, myDialog, (HMENU)-1, NULL, NULL);

  • C to Assembly code - what does it mean

    - by Smith
    I'm trying to figure out exactly what is going on with the following assembly code. Can someone go down line by line and explain what is happening? I input what I think is happening (see comments) but need clarification.

            .file   "testcalc.c"
            .section .rodata.str1.1,"aMS",@progbits,1
        .LC0:
            .string "x=%d, y=%d, z=%d, result=%d\n"
            .text
        .globl main
            .type   main, @function
        main:
            leal    4(%esp), %ecx      // establish stack frame
            andl    $-16, %esp         // decrement %esp by 16, align stack
            pushl   -4(%ecx)           // push original stack pointer
            pushl   %ebp               // save base pointer
            movl    %esp, %ebp         // establish stack frame
            pushl   %ecx               // save to ecx
            subl    $36, %esp          // alloc 36 bytes for local vars
            movl    $11, 8(%esp)       // store 11 in z
            movl    $6, 4(%esp)        // store 6 in y
            movl    $2, (%esp)         // store 2 in x
            call    calc               // function call to calc
            movl    %eax, 20(%esp)     // %esp + 20 into %eax
            movl    $11, 16(%esp)      // WHAT
            movl    $6, 12(%esp)       // WHAT
            movl    $2, 8(%esp)        // WHAT
            movl    $.LC0, 4(%esp)     // WHAT?!?!
            movl    $1, (%esp)         // move result into address of %esp
            call    __printf_chk       // call printf function
            addl    $36, %esp          // WHAT?
            popl    %ecx
            popl    %ebp
            leal    -4(%ecx), %esp
            ret
            .size   main, .-main
            .ident  "GCC: (Ubuntu 4.3.3-5ubuntu4) 4.3.3"
            .section .note.GNU-stack,"",@progbits

    Original code:

        #include <stdio.h>

        int calc(int x, int y, int z);

        int main()
        {
            int x = 2;
            int y = 6;
            int z = 11;
            int result;

            result = calc(x, y, z);
            printf("x=%d, y=%d, z=%d, result=%d\n", x, y, z, result);
        }
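
    A hedged reading of the lines marked WHAT: after calc returns, the block simply lays out the six arguments for the fortified printf at increasing offsets from %esp (pushed right-to-left), and the final addl tears the 36-byte scratch area back down. In C terms the call corresponds roughly to:

        /*  0(%esp) = 1        flag consumed by __printf_chk (_FORTIFY_SOURCE)
            4(%esp) = $.LC0    address of the format string
            8(%esp) = 2        x
           12(%esp) = 6        y
           16(%esp) = 11       z
           20(%esp) = %eax     value returned by calc(), i.e. result        */
        __printf_chk(1, "x=%d, y=%d, z=%d, result=%d\n", x, y, z, result);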

  • Void* array casting to float, int32, int16, etc.

    - by Griffin
    Hey guys, I've got an array of PCM data, it could be 16 bit, 24 bit packed, 32 bit, etc.. It could be signed, or unsigned, and it could be 32 or 64 bit floating point. It is currently stored as a "void**" matrix, indexed by channel, then by frame. The goal is to allow my library to take in any PCM format and buffer it, without requiring manipulation of the data to fit a designated structure. If the A/D converter spits out 24 bit packed arrays of interleaved PCM, I need to accept it gracefully. I also need to support 16 bit non interleaved, as well as any permutation of the above formats. I know the bit depth and other information at runtime, and I'm trying to code efficiently while not duplicating code. What I need is an effective way to cast the matrix, put PCM data into the matrix, and then pull it out later. I can cast the matrix to int32_t, or int16_t for the 32 and 16 bit signed PCM respectively, I'll probably have to store the 24 bit PCM in an int32_t for 32 bit, 8 bit byte systems as well. Can anyone recommend a good way to put data into this array, and pull it out later? I'd like to avoid large sections of code which look like: switch( mFormat ) { case 1: // unsigned 8 bit for( int i = 0; i < mChannels; i++ ) framesArray = (uint8_t*)pcm[i]; break; case 2: // signed 8 bit for( int i = 0; i < mChannels; i++ ) framesArray = (int8_t*)pcm[i]; break; case 3: // unsigned 16 bit ... Limitations: I'm working in C/C++, no templates, no RTTI, no STL. Think embedded. Things get trickier when I have to port this to a DSP with 16 bit bytes. Does anybody have any useful macros they might be willing to share? Thanks, -Griff

  • FileConnection Blackberry memory usage

    - by Dean
    Hello, I'm writing a blackberry application that reads ints and strings out of a database. This is my first time dealing with reading/writing on the blackberry, so forgive me if this is a dumb question. The database file I'm reading is only about 4kB I open the file with the following code fconn = (FileConnection) Connector.open("file_path_here", Connector.READ); if(fconn.exists()==false){fconn.close();return;} is = fconn.openDataInputStream(); while(!eof){ //etc... } is.close(); fconn.close(); The problem is, this code appears to be eating a LOT of memory. Using breakpoints and the "Memory Statistics" view, I determined the following: calling "Connector.open" creates 71 objects and changes "RAM Bytes in use" by 5376 calling "fconn.openDataInputStream();" increases RAM usage by a whopping 75920 Is this normal? Or am I doing something wrong? And how can I fix this? 75MB of RAM is a LOT of memory to waste on a handheld device, especially when the file I'm reading is only 4kB and I haven't even begun reading any data! How is this even possible?

  • GWT with JPA - no persistence provider...

    - by meliniak
    GWT with JPA There are two projects in my eclipse workspace, let's name them: -JPAProject -GWTProject JPAProject contains JPA configuration stuff (persistence.xml, entity classes and so on). GWTProject is an examplary GWT project (taken from official GWT tutorial). Both projects work fine alone. That is, I can create EMF (EntityManagerFactory) in JPAProject and get entities from the database. GWTProject works fine too, I can run it, fill the field text in the browser and get the response. My goal is to call JPAProject from GWTProject to get entities. But the problem is that when calling DAO, I get the following exception: [WARN] Server class 'com.emergit.service.dao.profile.ProfileDaoService' could not be found in the web app, but was found on the system classpath [WARN] Adding classpath entry 'file:/home/maliniak/workspace/emergit/build/classes/' to the web app classpath for this session [WARN] /gwttest/greet javax.persistence.PersistenceException: No Persistence provider for EntityManager named emergitPU at javax.persistence.Persistence.createEntityManagerFactory(Unknown Source) at javax.persistence.Persistence.createEntityManagerFactory(Unknown Source) at com.emergit.service.dao.profile.JpaProfileDaoService.<init>(JpaProfileDaoService.java:19) at pl.maliniak.server.GreetingServiceImpl.<init>(GreetingServiceImpl.java:21) . . . at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:380) at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:395) at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:488) [ERROR] 500 - POST /gwttest/greet (127.0.0.1) 3812 bytes I guess that the warnings at the beginning can be omitted for now. Do you have any ideas? I guess I am missing some basic point. All hints are highly apprecieable.

  • On C++ global operator new: why it can be replaced

    - by Jimmy
    I wrote a small program in VS2005 to test whether C++ global operator new can be overloaded. It can. #include "stdafx.h" #include "iostream" #include "iomanip" #include "string" #include "new" using namespace std; class C { public: C() { cout<<"CTOR"<<endl; } }; void * operator new(size_t size) { cout<<"my overload of global plain old new"<<endl; // try to allocate size bytes void *p = malloc(size); return (p); } int main() { C* pc1 = new C; cin.get(); return 0; } In the above, my definition of operator new is called. If I remove that function from the code, then operator new in C:\Program Files (x86)\Microsoft Visual Studio 8\VC\crt\src\new.cpp gets called. All is good. However, in my opinion, my implementations of operator new does NOT overload the new in new.cpp, it CONFLICTS with it and violates the one-definition rule. Why doesn't the compiler complain about it? Or does the standard say since operator new is so special, one-definition rule does not apply here? Thanks.

  • MMGR Questions, code use and thread-safety

    - by chadb
    1) Is MMGR thread safe?

    2) I was hoping someone could help me understand some code. I am looking at something where a macro is used, but I don't understand the macro. I know it contains a function call and an if check; however, the function is a void function. How does wrapping "(m_setOwner(__FILE__,__LINE__,__FUNCTION__),false)" ever change return types?

        #define someMacro (m_setOwner(__FILE__,__LINE__,__FUNCTION__),false) ? NULL : new ...
        void m_setOwner(const char *file, const unsigned int line, const char *func);

    3) What is the point of the reservoir?

    4) On line 770 ("void *operator new(size_t reportedSize)") there is the line "// ANSI says: allocation requests of 0 bytes will still return a valid value". Who/what is ANSI in this context? Do they mean the standards?

    5) This is more of a C++ standards question, but where does "reportedSize" come from for "void *operator new(size_t reportedSize)"?

    6) Is this the code that is actually doing the allocation needed? "au->actualAddress = malloc(au->actualSize);"
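
    On question 2, the trick is most likely the comma operator rather than anything about return types: the parenthesised expression evaluates m_setOwner() purely for its side effect and then yields false, so the conditional always falls through to the new branch. A small stand-alone illustration (simplified, not MMGR's actual code):

        #include <iostream>

        static void record(const char *file, int line) {
            std::cout << "owner: " << file << ":" << line << "\n";   // side effect only
        }

        int main() {
            // (record(...), false) has the value false, so the ': new' branch is always taken.
            int *p = (record(__FILE__, __LINE__), false) ? (int *) 0 : new int(42);
            std::cout << *p << "\n";
            delete p;
            return 0;
        }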

  • Providing downloads on ASP.net website

    - by Dave
    I need to provide downloads of large files (upwards of 2 GB) on an ASP.net website. It has been some time since I've done something like this (I've been in the thick-client world for awhile now), and was wondering on current best practices for this. Ideally, I would like: To be able to track download statistics: # of downloads is essential; actual bytes sent would be nice. To provide downloads in a way that "plays nice" with third-party download managers. Many of our users have unreliable internet connections, and being able to resume a download is a must. To allow multiple users to download the same file simultaneously. My download files are not security-sensitive, so providing a direct link ("right-click to download...") is a possibility. Is just providing a direct link sufficient, letting IIS handle it, and then using some log analyzer service (any recommendations?) to compile and report the statistics? Or do I need to intercept the download request, store some info in a database, then send a custom Response? Or is there an ASP.net user control (built-in or third party) that does this? I appreciate all suggestions.

  • Committed memory goes to physical RAM or reserves space in the paging file?

    - by Sil
    When I do VirtualAlloc with MEM_COMMIT, this "Allocates physical storage in memory or in the paging file on disk for the specified reserved memory pages" (quote from the MSDN article http://msdn.microsoft.com/en-us/library/aa366887%28VS.85%29.aspx). All is fine up until now, BUT: the description of the Committed Bytes counter says that "Committed memory is the physical memory which has space reserved on the disk paging file(s)." I also read "Windows via C/C++, 5th edition", and this book says that committing memory means reserving space in the page file... The last two cases don't make sense to me. If you commit memory, doesn't that mean that you commit to physical storage (RAM), the page file being there for swapping out currently unused pages of memory in case memory gets low? The book says that when you commit memory you actually reserve space in the paging file. If this were true, then that would mean that for a committed page there is space reserved in the paging file and a page frame in physical memory... so twice as much space is needed?! Isn't the page file's purpose to make the total physical memory larger than it actually is? If I have 1 GB of RAM with a 1 GB page file = 2 GB of usable "physical memory" (the book also states this, but right after that it says what I described at point 2). What am I missing? Thanks.
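
    A hedged way to reconcile the three descriptions: committing does not pick RAM or the page file up front, it charges the pages against the system commit limit (RAM plus paging files) so that a physical frame or page-file slot is guaranteed to exist when the page is first touched. A tiny sketch of the three stages:

        #include <windows.h>
        #include <stdio.h>

        int main(void) {
            /* 1. reserve: address space only - no commit charge, no RAM, no page file */
            char *p = (char *) VirtualAlloc(NULL, 1 << 20, MEM_RESERVE, PAGE_NOACCESS);

            /* 2. commit: counted against the commit limit, still no physical page assigned */
            VirtualAlloc(p, 1 << 20, MEM_COMMIT, PAGE_READWRITE);

            /* 3. first touch: the page fault here is what finally assigns a RAM frame */
            p[0] = 1;

            printf("reserved/committed at %p\n", p);
            VirtualFree(p, 0, MEM_RELEASE);
            return 0;
        }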

  • Cannot connect to Github?

    - by user2973438
    So I tried to push some updates onto my repo on GitHub via terminal on Mac OS X 10.8.4 and it doesn't work. I've been getting the same error many times:

        Lillys-MacBook-Air:Yuewei Lilly$ git push origin master
        error: Failed connect to github.com:443; Operation timed out while accessing https://github.com/lillybeans/Yuewei.git/info/refs?service=git-receive-pack
        fatal: HTTP request failed

    Some background: I've pushed many projects onto GitHub before using terminal (when I was in Canada). I am currently in Shanghai, China - could it be the GFW? But when I was in Beijing, I was able to push projects onto GitHub still.

    When I do ping github.com:

        Lillys-MacBook-Air:Yuewei Lilly$ ping github.com
        PING github.com (192.30.252.131): 56 data bytes
        Request timeout for icmp_seq 0
        Request timeout for icmp_seq 1
        ping: sendto: No route to host
        Request timeout for icmp_seq 2
        ping: sendto: Host is down
        Request timeout for icmp_seq 3
        ping: sendto: Host is down
        Request timeout for icmp_seq 4
        ping: sendto: Host is down
        Request timeout for icmp_seq 5
        ping: sendto: Host is down
        Request timeout for icmp_seq 6
        ping: sendto: Host is down
        Request timeout for icmp_seq 7
        ^C
        --- github.com ping statistics ---
        9 packets transmitted, 0 packets received, 100.0% packet loss
        Lillys-MacBook-Air:Yuewei Lilly$

    I have ShadowSocks (a proxy) turned on. Without it I can't access github.com via browser; with it, I can. Also, when I do "git remote -v" I see both my pull and push remote repos correctly listed. Thank you in advance!
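
    One hedged suggestion, given that the browser only works through ShadowSocks: git's HTTPS transport does not pick up the system proxy automatically, so pointing it at the local SOCKS listener (commonly 127.0.0.1:1080 - the port is an assumption, check the ShadowSocks settings) often gets the push through:

        # route git's HTTP(S) traffic through the local SOCKS5 proxy
        git config --global http.proxy socks5://127.0.0.1:1080

        # or just for a single push, without changing global config:
        git -c http.proxy=socks5://127.0.0.1:1080 push origin master

        # undo later:
        git config --global --unset http.proxy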

  • Creating Binary Block from struct

    - by MOnsDaR
    I hope the title is describing the problem; I'll change it if anyone has a better idea. I'm storing information in a struct like this:

        struct AnyStruct
        {
            AnyStruct() :
                testInt(20),
                testDouble(100.01),
                testBool1(true),
                testBool2(false),
                testBool3(true),
                testChar('x')
            {}

            int testInt;
            double testDouble;
            bool testBool1;
            bool testBool2;
            bool testBool3;
            char testChar;

            std::vector<char> getBinaryBlock()
            {
                // how to build that?
            }
        }

    The struct should be sent via network in a binary byte-buffer with the following structure:

        Bit 00- 31: testInt
        Bit 32- 61: testDouble most significant portion
        Bit 62- 93: testDouble least significant portion
        Bit 94:     testBool1
        Bit 95:     testBool2
        Bit 96:     testBool3
        Bit 97-104: testChar

    According to this definition the resulting std::vector should have a size of 13 bytes (char == byte). My question now is how I can form such a packet out of the different datatypes I've got. I've already read through a lot of pages and found datatypes like std::bitset or boost::dynamic_bitset, but neither seems to solve my problem. I think it is easy to see that the above code is just an example; the original standard is far more complex and contains more different datatypes. Solving the above example should solve my problems with the complex structures too, I think. One last point: the problem should be solved just by using standard, portable language features of C++ like STL or Boost (

  • What's the best way to communicate the purpose of a string parameter in a public API?

    - by Dave
    According to the guidance published in New Recommendations for Using Strings in Microsoft .NET 2.0, the data in a string may exhibit one of the following types of behavior:
      • A non-linguistic identifier, where bytes match exactly.
      • A non-linguistic identifier, where case is irrelevant, especially a piece of data stored in most Microsoft Windows system services.
      • Culturally-agnostic data, which still is linguistically relevant.
      • Data that requires local linguistic customs.
    Given that, I'd like to know the best way to communicate which behavior is expected of a string parameter in a public API. I wasn't able to find an answer in the Framework Design Guidelines. Consider the following methods:

        f(string this_is_a_linguistic_string)
        g(string this_is_a_symbolic_identifier_so_use_ordinal_compares)

    Is variable naming and XML documentation the best I can do? Could I use attributes in some way to mark the requirements of the string? Now consider the following case:

        h(Dictionary<string, object> dictionary)

    Note that the dictionary instance is created by the caller. How do I communicate that the callee expects the IEqualityComparer<string> object held by the dictionary to perform, for example, a case-insensitive ordinal comparison?
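
    For the dictionary case specifically, one hedged option is to make the contract executable instead of documentary: Dictionary<TKey, TValue> exposes the comparer it was built with, so the callee can verify it (or re-wrap the data) rather than trusting a parameter name. The callee body below is illustrative, not from the article:

        // Caller side: state the intent by construction.
        var dictionary = new Dictionary<string, object>(StringComparer.OrdinalIgnoreCase);

        // Callee side: check or normalise the comparer before relying on it.
        static void h(Dictionary<string, object> dictionary)
        {
            if (!Equals(dictionary.Comparer, StringComparer.OrdinalIgnoreCase))
            {
                dictionary = new Dictionary<string, object>(
                    dictionary, StringComparer.OrdinalIgnoreCase);
            }
            // lookups from here on are case-insensitive ordinal
        }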

  • Information about PTE's (Page Table Entries) in Windows

    - by Patrick
    In order to find more easily buffer overflows I am changing our custom memory allocator so that it allocates a full 4KB page instead of only the wanted number of bytes. Then I change the page protection and size so that if the caller writes before or after its allocated piece of memory, the application immediately crashes. Problem is that although I have enough memory, the application never starts up completely because it runs out of memory. This has two causes: since every allocation needs 4 KB, we probably reach the 2 GB limit very soon. This problem could be solved if I would make a 64-bit executable (didn't try it yet). even when I only need a few hundreds of megabytes, the allocations fail at a certain moment. The second problem is the biggest one, and I think it's related to the maximum number of PTE's (page table entries, which store information on how Virtual Memory is mapped to physical memory, and whether pages should be read-only or not) you can have in a process. My questions (or a cry-for-tips): Where can I find information about the maximum number of PTE's in a process? Is this different (higher) for 64-bit systems/applications or not? Can the number of PTE's be configured in the application or in Windows? Thanks, Patrick PS. note for those who will try to argument that you shouldn't write your own memory manager: My application is rather specific so I really want full control over memory management (can't give any more details) Last week we had a memory overwrite which we couldn't find using the standard C++ allocator and the debugging functionality of the C/C++ run time (it only said "block corrupt" minutes after the actual corruption") We also tried standard Windows utilities (like GFLAGS, ...) but they slowed down the application by a factor of 100, and couldn't find the exact position of the overwrite either We also tried the "Full Page Heap" functionality of Application Verifier, but then the application doesn't start up either (probably also running out of PTE's)

  • Questions regarding detouring by modifying the virtual table

    - by Elliott Darfink
    I've been practicing detours using the same approach as Microsoft Detours (replace the first five bytes with a jmp and an address). More recently I've been reading about detouring by modifying the virtual table. I would appreciate if someone could shed some light on the subject by mentioning a few pros and cons with this method compared to the one previously mentioned! I'd also like to ask about patched vtables and objects on the stack. Consider the following situation: // Class definition struct Foo { virtual void Call(void) { std::cout << "FooCall\n"; } }; // If it's GCC, 'this' is passed as the first parameter void MyCall(Foo * object) { std::cout << "MyCall\n"; } // In some function Foo * foo = new Foo; // Allocated on the heap Foo foo2; // Created on the stack // Arguments: void ** vtable, uint offset, void * replacement PatchVTable(*reinterpret_cast<void***>(foo), 0, MyCall); // Call the methods foo->Call(); // Outputs: 'MyCall' foo2.Call(); // Outputs: 'FooCall' In this case foo->Call() would end up calling MyCall(Foo * object) whilst foo2.Call() call the original function (i.e Foo::Call(void) method). This is because the compiler will try to decide any virtual calls during compile time if possible (correct me if I'm wrong). Does that mean it does not matter if you patch the virtual table or not, as long as you use objects on the stack (not heap allocated)?

  • Conditional deserialization

    - by Yordan Pavlov
    I am still not sure whether the title of my question is correct and it most probably is not. However I have spent some time searching both the net and stackoverflow and I can not find a good description of the issue I am facing. Basically what I want to achieve is the ability to read some raw bytes and based on the value of some of them to interpret the rest in different ways. This is how TLV works in a way, you check the tag and depending on it - interpret the result. Of course I can always keep that logic in my C++ code, however I am looking for a solution which would move the logic out of the source code (maybe to some xml description). This would allow me to describe different encodings (protocols) more easily. I am familiar with Protocol Buffers and some other serialization libraries, however all of them solve different issue. They assume they are on both ends of the communication, while I want to describe the communication (sort of). Is such solution available, if no - why not? Are there some inherent difficulties I will face trying to implement it.

  • How to use linux csplit to chop up massive XML file?

    - by Fred
    Hi everyone, I have a gigantic (4GB) XML file that I am currently breaking into chunks with linux "split" function (every 25,000 lines - not by bytes). This usually works great (I end up with about 50 files), except some of the data descriptions have line breaks, and so frequently the chunk files do not have the proper closing tags - and my parser chokes halfway through processing. Example file: (note: normally each "listing" xml node is supposed to be on its own line) <?xml version="1.0" encoding="UTF-8"?> <listings> <listing><date>2009-09-22</date><desc>This is a description WITHOUT line breaks and works fine with split</desc><more_tags>stuff</more_tags></listing> <listing><date>2009-09-22</date><desc>This is a really annoying description field WITH line breaks that screw the split function</desc><more_tags>stuff</more_tags></listing> </listings> Then sometimes my split ends up like <?xml version="1.0" encoding="UTF-8"?> <listings> <listing><date>2009-09-22</date><desc>This is a description WITHOUT line breaks and works fine with split</desc><more_tags>stuff</more_tags></listing> <listing><date>2009-09-22</date><desc>This is a really annoying description field WITH line breaks ... EOF So - I have been reading about "csplit" and it sounds like it might work to solve this issue. I cant seem to get the regular expression right... Basically I want the same output of ~50ish files Something like: *csplit -k myfile.xml '/</listing>/' 25000 {50} Any help would be great Thanks!

  • MD5CryptoServiceProvider ComputeHash Issues between VS 2003 and VS 2008

    - by owensoroke
    I have a database application that generates an MD5 hash and compares the hash value to a value in our DB (SQL 2K). The original application was written in Visual Studio 2003 and a deployed version has been working for years. Recently, some new machines on the .NET Framework 3.5 have been having unrelated issues with our runtime. This has forced us to port our code path from Visual Studio 2003 to Visual Studio 2008. Since that time the hash produced by the code is different from the values in the database. The original call to the function posted in code is:

        RemoveInvalidPasswordCharactersFromHashedPassword(Text_Scrub(GenerateMD5Hash(strPSW)))

    I am looking for expert guidance as to whether or not the MD5 methods have changed since VS 2003 (causing this point of failure), or where other possible problems may be originating from. I realize this may not be the best method to hash, but ultimately any changes to the MD5 code would force us to change some 300 values in our DB table and would cost us a lot of time. In addition, I am trying to avoid having to redeploy all of the functioning versions of this application. I am more than happy to post other code, including the RemoveInvalidPasswordCharactersFromHashedPassword function or our Text_Scrub, if it is necessary to receive appropriate feedback. Thank you in advance for your input.

        Public Function GenerateMD5Hash(ByVal strInput As String) As String
            Dim md5Provider As MD5
            ' generate bytes for the input string
            Dim inputData() As Byte = ASCIIEncoding.ASCII.GetBytes(strInput)
            ' compute MD5 hash
            md5Provider = New MD5CryptoServiceProvider
            Dim hashResult() As Byte = md5Provider.ComputeHash(inputData)
            Return ASCIIEncoding.ASCII.GetString(hashResult)
        End Function
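
    A hedged observation rather than a diagnosis: ASCIIEncoding.GetString is a lossy way to stringify a digest (bytes above 127 have no ASCII mapping, and how they are substituted is exactly the kind of detail that shifted between framework versions), so the hash text can legitimately differ even though ComputeHash itself has not changed. A round-trip-safe variant looks like the following sketch, though as noted above it would not match the 300 existing rows without a one-off migration:

        Public Function GenerateMD5HashHex(ByVal strInput As String) As String
            ' hex-encode the digest instead of forcing it through an ASCII string
            Dim md5Provider As MD5 = New MD5CryptoServiceProvider()
            Dim inputData() As Byte = ASCIIEncoding.ASCII.GetBytes(strInput)
            Dim hashResult() As Byte = md5Provider.ComputeHash(inputData)
            Return BitConverter.ToString(hashResult).Replace("-", "")
        End Function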

  • How do I see if an established socket is stuck on a server that's expecting input?

    - by Parker
    I have a script that scans ports for open proxy servers. Problem is if it encounters a login program (specifically telnet) then it hangs there forever since it doesn't know what to do and eventually the server closes the connection. The simple solution would be to create a bunch of cases. If telnet, do this. If SSH, do that. If something else, blah blah blah. I'd like an umbrella solution since the script is not a high priority for me. The script, as it is now, is available at http://parkrrr.net/socks/scan.phps On a small scale (the page maybe averages 15 hits/day) it's fine but on a larger scale I'd be worried about a lot of open zombie sockets. Swapping the !$strpos doesn't work since servers can return more information than what you requested (headers, ads, etc). Only accepting a fixed number of bytes (as opposed to appending until EOF, which it does now) from the $fgets also does not seem to work. I am sure this is where it gets stuck: while (!feof($fp)) { $data.=fgets($fp,512); } But what can I do? Any other suggestions/warnings would also be welcomed.
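
    A hedged umbrella fix, rather than per-protocol cases: give the stream a read timeout and a hard cap on how much it will accept, so a silent login prompt simply times out instead of holding the socket open (the functions are standard PHP; the limits are illustrative):

        $fp = @fsockopen($host, $port, $errno, $errstr, 5);   // 5-second connect timeout
        if ($fp) {
            stream_set_timeout($fp, 5);                        // 5-second read timeout
            $data = '';
            while (!feof($fp) && strlen($data) < 65536) {      // cap the total bytes read
                $chunk = fgets($fp, 512);
                $meta  = stream_get_meta_data($fp);
                if ($chunk === false || $meta['timed_out']) {
                    break;                                     // server went quiet - give up
                }
                $data .= $chunk;
            }
            fclose($fp);
        }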

  • Read varbinary data in java

    - by masoud farahani
    I made a Java application which reads some files from SQL Server. Those files are saved to a varbinary(MAX) field in SQL Server by a third-party web service. My problem is that when I read those files with my Java application, the binary data shows different content. In fact, I read the data byte by byte and figured out that some bytes did not show the real values that were saved in the database. I found out what the problem is, but I couldn't find a solution yet: in the web service, every piece of varbinary data is saved to the database as byte data (in .NET each byte takes 0 to 255), but when I read the binary data in Java it takes different values and causes an exception with some values, because in Java a byte takes -128 to 127. In my Java application I want to write that data to a file with the OutputStream.write(byte[]) method. How can I solve this problem? I think I have to find a way to convert a C# byte[] to a Java byte[] (or binary data), but how can I do that?
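
    A hedged note on the likely fix: the bytes themselves are identical on both sides, only their interpretation differs (Java's byte is signed), so no conversion is needed to write them straight back out; masking with 0xFF is only required when a byte has to be treated as the 0-255 value .NET stored. The variable and column names below are illustrative:

        byte b = (byte) 0xC8;            // stored as 200 on the .NET side
        int unsigned = b & 0xFF;         // 200 again, instead of -56

        // reading the varbinary column and writing it to a file needs no conversion at all
        byte[] data = resultSet.getBytes("fileColumn");
        outputStream.write(data);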

  • Querying the size of a drive on a remote server is giving me an error

    - by Testifier
    Here's what I have: public static bool DriveHasLessThanTenPercentFreeSpace(string server) { long driveSize = 0; long freeSpace = 0; var oConn = new ConnectionOptions {Username = "username", Password = Settings.Default.SQLServerAdminPassword}; var scope = new ManagementScope("\\\\" + server + "\\root\\CIMV2", oConn); //connect to the scope scope.Connect(); //construct the query var query = new ObjectQuery("SELECT FreeSpace FROM Win32_LogicalDisk where DeviceID = 'D:'"); //this class retrieves the collection of objects returned by the query var searcher = new ManagementObjectSearcher(scope, query); //this is similar to using a data adapter to fill a data set - use an instance of the ManagementObjectSearcher to get the collection from the query and store it ManagementObjectCollection queryCollection = searcher.Get(); //iterate through the object collection and get what you're looking for - in my case I want the FreeSpace of the D drive foreach (ManagementObject m in queryCollection) { //the FreeSpace value is in bytes freeSpace = Convert.ToInt64(m["FreeSpace"]); //error happens here! driveSize = Convert.ToInt64(m["Size"]); } long percentFree = ((freeSpace / driveSize) * 100); if (percentFree < 10) { return true; } return false; } This line of code is giving me an error: driveSize = Convert.ToInt64(m["Size"]); The error says: ManagementException was unhandled by user code Not found I'm assuming the query to get the drive size is wrong. Please note that I AM getting the freeSpace value at the line: freeSpace = Convert.ToInt64(m["FreeSpace"]); So I know the query IS working for freeSpace. Can anyone give me a hand?
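
    A hedged pointer at two likely culprits: the WQL statement only selects FreeSpace, so m["Size"] has nothing to read (hence the "Not found" ManagementException), and the percentage is computed with integer division, which truncates to 0. Both are small changes, sketched here:

        // select both properties so m["Size"] is populated
        var query = new ObjectQuery(
            "SELECT FreeSpace, Size FROM Win32_LogicalDisk WHERE DeviceID = 'D:'");

        // ...

        freeSpace = Convert.ToInt64(m["FreeSpace"]);
        driveSize = Convert.ToInt64(m["Size"]);

        // do the percentage in floating point; (freeSpace / driveSize) on longs is always 0
        double percentFree = (double) freeSpace / driveSize * 100.0;
        return percentFree < 10;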

  • Protocol specification in XML

    - by Mathijs
    Is there a way to specify a packet-based protocol in XML, so (de)serialization can happen automatically? The context is as follows. I have a device that communicates through a serial port. It sends and receives a byte stream consisting of 'packets'. A packet is a collection of elementary data types and (sometimes) other packets. Some elements of packets are conditional; their inclusion depends on earlier elements. I have a C# application that communicates with this device. Naturally, I don't want to work on a byte-level throughout my application; I want to separate the protocol from my application code. Therefore I need to translate the byte stream to structures (classes). Currently I have implemented the protocol in C# by defining a class for each packet. These classes define the order and type of elements for each packet. Making class members conditional is difficult, so protocol information ends up in functions. I imagine XML that looks like this (note that my experience designing XML is limited): <packet> <field name="Author" type="int32" /> <field name="Nickname" type="bytes" size="4"> <condition type="range"> <field>Author</field> <min>3</min> <max>6</min> </condition> </field> </packet> .NET has something called a 'binary serializer', but I don't think that's what I'm looking for. Is there a way to separate protocol and code, even if packets 'include' other packets and have conditional elements?
