Search Results

Search found 4304 results on 173 pages for 'bytes'.

Page 150/173 | < Previous Page | 146 147 148 149 150 151 152 153 154 155 156 157  | Next Page >

  • How to dynamically load a progressive jpeg/jpg in ActionScript 3 using Flash and know its width/height

    - by didibus
    Hi, I am trying to dynamically load a progressive jpeg using ActionScript 3. To do so, I have created a class called Progressiveloader that creates a URLStream and uses it to stream the progressive jpeg bytes into a ByteArray. Every time the ByteArray grows, I use a Loader to loadBytes the ByteArray. This works, to some extent: if I addChild the Loader, I can see the jpeg as it is streamed, but I am unable to access the Loader's content and, most importantly, I cannot change the width and height of the Loader. After a lot of testing, the cause of the problem seems to be that until the Loader has completely loaded the jpg, meaning until it actually sees the end byte of the jpg, it does not know the width and height and does not create a content DisplayObject to be associated with the Loader's content. My question is, is there a way to know the width and height of the jpeg before it is fully loaded? P.S.: I believe this should be possible because of the nature of a progressive jpeg: it is loaded at its full size, but with less detail, so the size should be known. Even when loading a normal jpeg this way, the size is shown on screen, except the pixels which are not loaded yet show as gray. Thank you.
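
    The dimensions do in fact live near the start of the stream: every JPEG carries an SOFn marker segment whose payload holds the height and width, and for a progressive jpeg that marker (SOF2) typically appears within the first kilobyte or so. Below is a minimal sketch, in C rather than ActionScript, of scanning a partially received buffer for that marker. The function name is invented for illustration, and it assumes every marker between SOI and SOF carries a 2-byte length field (standalone markers are not handled).

      #include <stdint.h>
      #include <stddef.h>

      /* Scan a JPEG byte buffer for an SOFn marker and report width/height.
       * Returns 0 on success, -1 if no SOF marker is present in the bytes given. */
      static int jpeg_dimensions(const uint8_t *buf, size_t len, int *width, int *height)
      {
          size_t i = 2;                      /* skip the SOI marker (FF D8) */
          while (i + 9 < len) {
              if (buf[i] != 0xFF) return -1; /* not positioned on a marker */
              uint8_t marker = buf[i + 1];
              /* SOF0..SOF15 except DHT (C4), JPG (C8), DAC (CC) carry the frame size */
              if (marker >= 0xC0 && marker <= 0xCF &&
                  marker != 0xC4 && marker != 0xC8 && marker != 0xCC) {
                  *height = (buf[i + 5] << 8) | buf[i + 6];
                  *width  = (buf[i + 7] << 8) | buf[i + 8];
                  return 0;
              }
              /* every other marker segment has a 2-byte big-endian length */
              size_t seglen = (size_t)((buf[i + 2] << 8) | buf[i + 3]);
              i += 2 + seglen;
          }
          return -1;
      }

    The same scan could be done on the ByteArray as it arrives, so the size is usually known long before the Loader finishes.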

    Read the article

  • Problem in using a second call to send() in C

    - by Paulo Victor
    Hello. Right now I'm working on a simple server that receives from the client a code referring to a certain operation. The server receives this data and sends back a signal that it's waiting for the proper data.

      /* Server side */
      if (codigoOperacao == 0) {
          printf("A escolha foi 0\n");
          int bytesSent = SOCKET_ERROR;
          char sendBuff[1080] = "0";
          /* Here "send" returns an error message while trying to send back the signal */
          bytesSent = send(socketEscuta, sendBuff, 1080, 0);
          if (bytesSent == SOCKET_ERROR) {
              printf("Erro ao enviar");
              return 0;
          } else {
              printf("Bytes enviados : %d\n", bytesSent);
              char structDesmontada[1080] = "";
              bytesRecv = recebeMensagem(socketEscuta, structDesmontada);
              printf("structDesmontada : %s", structDesmontada);
          }
      }

    Following is the client code responsible for sending the operation code and receiving the signal:

      char sendMsg[1080] = "0";
      char recvMsg[1080] = "";
      bytesSent = send(socketCliente, sendMsg, sizeof(sendMsg), 0);
      printf("Enviei o codigo (%d)\n", bytesSent);
      /* Here the program blocks in an infinite loop since the server never sends anything */
      while (bytesRecv == SOCKET_ERROR) {
          bytesRecv = recv(socketCliente, recvMsg, 1080, 0);
          if (bytesRecv > 0) {
              printf("Recebeu\n");
          }
      }

    Why is this happening only on the second attempt to send data? The first call to send() works fine. Hope someone can help! Thanks
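
    One thing worth checking on both ends is that send() is not guaranteed to transmit the whole buffer in one call, and recv() may return fewer bytes than requested, so fixed-size 1080-byte messages need a loop on both sides. A minimal sketch of a send-all helper (POSIX-flavored; on Winsock the socket type and error checks differ slightly):

      #include <sys/types.h>
      #include <sys/socket.h>
      #include <stddef.h>

      /* Keep calling send() until the whole buffer has gone out, or an error occurs.
       * Returns 0 on success, -1 on error. */
      static int send_all(int sock, const char *buf, size_t len)
      {
          size_t sent = 0;
          while (sent < len) {
              ssize_t n = send(sock, buf + sent, len - sent, 0);
              if (n <= 0)
                  return -1;          /* error, or the peer closed the connection */
              sent += (size_t)n;
          }
          return 0;
      }

    A matching receive loop that reads until it has collected the full 1080 bytes avoids the client treating a short read as a complete message.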

    Read the article

  • How to store unlimited characters in Oracle 11g?

    - by vicky21
    We have a table in Oracle 11g with a varchar2 column. We use a proprietary programming language where this column is defined as string. At most we can store 2000 characters (4000 bytes) in this column. Now the requirement is such that the column needs to store more than 2000 characters (in fact unlimited characters). The DBAs don't like BLOB or LONG datatypes for maintenance reasons. The solution that I can think of is to remove this column from the original table, have a separate table for this column, and then store each chunk of characters as a row, in order to get unlimited characters. This table will be joined with the original table for queries. Is there any better solution to this problem? UPDATE: The proprietary programming language allows variables of type string and blob to be defined; there is no option of CLOB. I understand the responses given, but I cannot take on the DBAs. I understand that deviating from BLOB or LONG will be a developers' nightmare, but I still cannot help it.

    Read the article

  • Object Oriented Perl interface to read from and write to a socket

    - by user654967
    I need a Perl client-server implementation as a wrapper for a server in C#. A Perl script passes the server address, port number and an input string to a module; this module has to create the socket and send the input string to the server. The data sent has to follow ISO-8859-1 encoding. On receiving the information, the client has to first read 3 bytes, then the next 8 bytes, which hold the length of the data that has to be received next; based on that length the client has to read the remaining data. Each piece of data that is read has to be stored in a variable and sent to another module for further processing. Currently this is what my Perl client looks like, which I'm sure isn't right. Could someone tell me how to do this and set me on the right direction?

      sub WriteInfo {
          my ($addr, $port, $Input) = @_;
          $socket = IO::Socket::INET->new(
              Proto    => "tcp",
              PeerAddr => $addr,
              PeerPort => $port,
          );
          unless ($socket) { die "cannot connect to remote" }
          while (1) {
              $socket->send($Input);
          }
      }

      sub ReadData {
          while (1) {
              my $ExecutionResult = $socket->recv( $recv_data, 3);
              my $DataLength = $socket->recv( $recv_data, 8);
              $DataLength =~ s/^0+// ;
              my $decval = hex($DataLength);
              my $Data = $socket->recv( $recv_data, $decval);
              return($Data);
          }
      }

    Thanks a lot, Archer
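
    A detail worth keeping in mind for a framed protocol like this is that a single recv can return fewer bytes than requested, so each field (3-byte status, 8-byte length, payload) usually needs a read-exactly-N-bytes loop. A language-neutral sketch of that framing logic in C (plain POSIX file descriptors; the helper names and the hex interpretation of the length field are illustrative assumptions):

      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>
      #include <unistd.h>

      /* Read exactly len bytes from fd, looping because a single read()
       * may return less than requested. Returns 0 on success, -1 on error/EOF. */
      static int read_exact(int fd, char *buf, size_t len)
      {
          size_t got = 0;
          while (got < len) {
              ssize_t n = read(fd, buf + got, len - got);
              if (n <= 0)
                  return -1;
              got += (size_t)n;
          }
          return 0;
      }

      /* Framing as described: 3-byte status, 8 ASCII digits of payload length
       * (treated as hex, mirroring the hex() call above), then the payload.
       * Caller frees the returned buffer. */
      static char *read_frame(int fd, char status[3])
      {
          char lenbuf[9] = {0};
          if (read_exact(fd, status, 3) < 0) return NULL;
          if (read_exact(fd, lenbuf, 8) < 0) return NULL;
          long len = strtol(lenbuf, NULL, 16);
          char *payload = malloc((size_t)len + 1);
          if (!payload) return NULL;
          if (read_exact(fd, payload, (size_t)len) < 0) { free(payload); return NULL; }
          payload[len] = '\0';
          return payload;
      }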

    Read the article

  • Ruby On Rails with HTML5 offline apps - Firefox does not cache the application.manifest but Safari does

    - by hoitomt
    I'm working off of this Railscast tutorial: episode 247 I’m up to this point in the tutorial: added the rack-offline gem, added the application.manifest route, and added a reference to the manifest in the html tag. Right before it starts talking about problems with caching. Safari works as intended – When the server is running the page is served. From the server logs I can see that Safari is making a single request to the server every time for the items page. When I turn off the server the page displays as well, even after shutting down the browser and restarting. It appears to be pulling from the application.manifest (cache manifest). Firefox does not work as intended – When accessing the page for the first time, Firefox lets me know that the web page wants to store something locally, I allow. After clicking on allow, Firefox makes 5 requests to the server for the page (from the server log). The hash is different in every request. Is it is possible that the changing hash is triggering Firefox to keep trying to get the new manifest until it reaches some maximum (5 attempts)? Then, after the server is stopped, Firefox will not show the page at all. It looks like it isn’t caching the application.manifest. Firefox also gives you a way to see what sites are storing stuff locally by going to Tools/Options/Advanced/Network (Firefox/Preferences/Advanced/Network on Apple). I see localhost there but the size is 0 bytes. So for some reason, Firefox is not downloading my application.manifest along with the files

    Read the article

  • What's the recommended implementation for hashing OLE Variants?

    - by Barry Kelly
    OLE Variants, as used by older versions of Visual Basic and pervasively in COM Automation, can store lots of different types: basic types like integers and floats, more complicated types like strings and arrays, and all the way up to IDispatch implementations and pointers in the form of ByRef variants. Variants are also weakly typed: they convert the value to another type without warning depending on which operator you apply and what the current types are of the values passed to the operator. For example, comparing two variants, one containing the integer 1 and another containing the string "1", for equality will return True. So assuming that I'm working with variants at the underlying data level (e.g. VARIANT in C++ or TVarData in Delphi - i.e. the big union of different possible values), how should I hash variants consistently so that they obey the right rules?

    Rules:
    - Variants that hash unequally should compare as unequal, both in sorting and direct equality
    - Variants that compare as equal for both sorting and direct equality should hash as equal
    - It's OK if I have to use different sorting and direct comparison rules in order to make the hashing fit.

    The way I'm currently working is I'm normalizing the variants to strings (if they fit), and treating them as strings, otherwise I'm working with the variant data as if it was an opaque blob, and hashing and comparing its raw bytes. That has some limitations, of course: numbers 1..10 sort as [1, 10, 2, ... 9] etc. This is mildly annoying, but it is consistent and it is very little work. However, I do wonder if there is an accepted practice for this problem.

    Read the article

  • Boost ASIO async_write "Vector iterator not dereferencable"

    - by xeross
    Hey, I've been working on an async Boost server program, and so far I've got it to connect. However I'm now getting a "Vector iterator not dereferencable" error. I suspect the vector gets destroyed or goes out of scope before the packet gets sent, thus causing the error.

      void start()
      {
          Packet packet;
          packet.setOpcode(SMSG_PING);
          send(packet);
      }

      void send(Packet packet)
      {
          cout << "DEBUG> Transferring packet with opcode " << packet.GetOpcode() << endl;
          async_write(m_socket, buffer(packet.write()),
                      boost::bind(&Session::writeHandler, shared_from_this(),
                                  placeholders::error, placeholders::bytes_transferred));
      }

      void writeHandler(const boost::system::error_code& errorCode, size_t bytesTransferred)
      {
          cout << "DEBUG> Transfered " << bytesTransferred << " bytes to "
               << m_socket.remote_endpoint().address().to_string() << endl;
      }

    start() gets called once a connection is made. packet.write() returns a uint8_t vector. Would it matter if I changed void send(Packet packet) to void send(Packet& packet)? Not in relation to this problem, but performance-wise.

    Read the article

  • Are programming languages and methods inefficient? (assembler and C knowledge needed)

    - by b-gen-jack-o-neill
    Hi, for a long time I have been thinking about and studying the output of the C compiler in assembler form, as well as CPU architecture. I know this may be silly to you, but it seems to me that something is very ineffective. Please don't be angry if I am wrong and there is some reason I do not see for all these principles. I will be very glad if you tell me why it is designed this way. I actually truly believe I am wrong; I know the genius minds of the people who put PCs together knew a reason to do so. What exactly, do you ask? I'll tell you right away, using C as an example:

    1: Stack local scope memory allocation: Typical local memory allocation uses the stack. Just copy esp to ebp and then allocate all the memory via ebp. OK, I would understand this if you explicitly needed to allocate RAM by default stack values, but if I understand it correctly, modern OSes use paging as a translation layer between the application and physical RAM, where the address you request is translated further before reaching an actual RAM byte. So why not just say 0x00000000 is int a, 0x00000004 is int b and so on, and access them just by mov 0x00000000, #10? You won't actually access memory blocks 0x00000000 and 0x00000004 anyway, but those your OS set the paging tables to. Actually, since memory allocation via ebp and esp uses indirect addressing, "my" way would be even faster.

    2: Variable allocation duplicity: When you run an application, the loader loads its code into RAM. When you create a variable or a string, the compiler generates code that pushes these values onto the top of the stack when created in main. So there is an actual instruction for doing so, and that actual number in memory. So there are 2 copies of the same value in RAM: one in the form of an instruction, the second in the form of the actual bytes in RAM. But why? Why not, when declaring the variable, just work out which memory block it will occupy, and then, when it is used, just insert that memory location?

    Read the article

  • Something about Stream

    - by sforester
    I've been working on something that makes use of streams, and I found myself unclear about some stream concepts (you can also view another question posted by me at http://stackoverflow.com/questions/2933923/about-redirected-stdout-in-system-diagnostics-process).

    1. How do you indicate that you have finished writing to a stream - by writing something like an EOF?
    2. Following the previous question: if I have written an EOF (or something like that) to a stream but didn't close the stream, and then want to write something else to the same stream, can I just start writing to it with no more setup required?
    3. If a procedure tries to read a stream (like stdin) that no one has written anything to yet, the reading procedure will block; eventually some data arrives and the procedure will read until the writing is done, which is indicated by getting a return of 0 bytes read rather than blocking. Now, if the procedure issues another read on the same stream, it will still get a count of 0 and return immediately, while I was expecting it to block since no one is writing to the stream now. So does the stream hold different states when it has been opened but not yet written to, versus when someone has finished a writing session?

    I'm using Windows and the .NET Framework, if anything is platform-specific. Thanks a lot!

    Read the article

  • Formulae for U and V buffer offset

    - by Abhi
    Hi all! What should the buffer offset values for U and V be for the YUV444 format? For example, if I am using the YV12 format the values are as follows:

      ppData.inputIDMAChannel.UBufOffset = iInputHeight * iInputWidth + (iInputHeight * iInputWidth)/4;
      ppData.inputIDMAChannel.VBufOffset = iInputHeight * iInputWidth;

    with iInputHeight = 160 and iInputWidth = 112. ppData is an object of the following structure:

      typedef struct ppConfigDataStruct
      {
          //---------------------------------------------------------------
          // General controls
          //---------------------------------------------------------------
          UINT8 IntType;              // FIRSTMODULE_INTERRUPT: the interrupt will be
                                      // raised once the first sub-module finished its job.
                                      // FRAME_INTERRUPT: the interrupt will be raised
                                      // after all sub-modules finished their jobs.

          //---------------------------------------------------------------
          // Format controls
          //---------------------------------------------------------------
          // For input
          idmaChannel inputIDMAChannel;
          BOOL        bCombineEnable;
          idmaChannel inputcombIDMAChannel;
          UINT8       inputcombAlpha;
          UINT32      inputcombColorkey;
          icAlphaType alphaType;

          // For output
          idmaChannel outputIDMAChannel;
          CSCEQUATION CSCEquation;    // Selects R2Y or Y2R CSC Equation
          icCSCCoeffs CSCCoeffs;      // Selects R2Y or Y2R CSC Equation
          icFlipRot   FlipRot;        // Flip/Rotate controls for VF

          BOOL        allowNopPP;     // flag to indicate we need a NOP PP processing
      } *pPpConfigData, ppConfigData;

    and the idmaChannel structure is as follows:

      typedef struct idmaChannelStruct
      {
          icFormat      FrameFormat;  // YUV or RGB
          icFrameSize   FrameSize;    // frame size
          UINT32        LineStride;   // stride in bytes
          icPixelFormat PixelFormat;  // Input frame RGB format, set NULL to use standard settings
          icDataWidth   DataWidth;    // Bits per pixel for RGB format
          UINT32        UBufOffset;   // offset of U buffer from Y buffer start address
                                      // ignored if non-planar image format
          UINT32        VBufOffset;   // offset of V buffer from Y buffer start address
                                      // ignored if non-planar image format
      } idmaChannel, *pIdmaChannel;

    I want the formulae for ppData.inputIDMAChannel.UBufOffset and ppData.inputIDMAChannel.VBufOffset for YUV444. Thanks in advance
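
    A short sketch of the arithmetic, assuming YUV444 is stored planar with full-resolution chroma (every plane is width*height bytes) and the same V-before-U plane order that the YV12 example implies; if the hardware expects U before V, the two offsets simply swap, so treat this as size arithmetic only, not a statement about the IPU's required plane order:

      #include <stdio.h>

      int main(void)
      {
          unsigned iInputWidth  = 112;            /* values from the question */
          unsigned iInputHeight = 160;
          unsigned planeSize = iInputWidth * iInputHeight;

          /* YV12: chroma planes are quarter size, V plane stored before U plane */
          unsigned yv12_VBufOffset = planeSize;
          unsigned yv12_UBufOffset = planeSize + planeSize / 4;

          /* YUV444 planar (assumed): chroma planes are full size, same plane order */
          unsigned yuv444_VBufOffset = planeSize;         /* end of the Y plane      */
          unsigned yuv444_UBufOffset = 2 * planeSize;     /* end of the second plane */

          printf("YV12   : V=%u U=%u\n", yv12_VBufOffset, yv12_UBufOffset);
          printf("YUV444 : V=%u U=%u\n", yuv444_VBufOffset, yuv444_UBufOffset);
          return 0;
      }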

    Read the article

  • Integer array or struct array - which is better?

    - by MusiGenesis
    In my app, I'm storing Bitmap data in a two-dimensional integer array (int[,]). To access the R, G and B values I use something like this:

      // read:
      int i = _data[x, y];
      byte B = (byte)(i >> 0);
      byte G = (byte)(i >> 8);
      byte R = (byte)(i >> 16);

      // write:
      _data[x, y] = BitConverter.ToInt32(new byte[] { B, G, R, 0 }, 0);

    I'm using integer arrays instead of an actual System.Drawing.Bitmap because my app runs on Windows Mobile devices where the memory available for creating bitmaps is severely limited. I'm wondering, though, if it would make more sense to declare a structure like this:

      public struct RGB
      {
          public byte R;
          public byte G;
          public byte B;
      }

    ... and then use an array of RGB instead of an array of int. This way I could easily read and write the separate R, G and B values without having to do bit-shifting and BitConverter-ing. I vaguely remember something from days of yore about byte variables being block-aligned on 32-bit systems, so that a byte actually takes up 4 bytes of memory instead of just 1 (but maybe this was just a Visual Basic thing). Would using an array of structs (like the RGB example above) be faster than using an array of ints, and would it use 3/4 the memory or 3 times the memory of ints?
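
    As a point of reference on the layout half of the question, here is a small hypothetical sketch in C (so it only illustrates the raw memory picture, not what the .NET Compact Framework runtime does with a struct array): a 3-byte struct is not padded out to 4 bytes inside an array, and the shift-based pack/unpack is the same arithmetic the question already uses.

      #include <stdio.h>
      #include <stdint.h>

      /* Hypothetical 3-byte pixel struct, analogous to the RGB struct in the question. */
      struct rgb { uint8_t r, g, b; };

      /* Pack/unpack helpers equivalent to the shift-based code above. */
      static uint32_t pack(struct rgb p)
      {
          return ((uint32_t)p.r << 16) | ((uint32_t)p.g << 8) | p.b;
      }
      static struct rgb unpack(uint32_t i)
      {
          struct rgb p = { (uint8_t)(i >> 16), (uint8_t)(i >> 8), (uint8_t)i };
          return p;
      }

      int main(void)
      {
          static struct rgb img[240 * 320];                       /* a QVGA frame of 3-byte structs */
          printf("sizeof(struct rgb) = %zu\n", sizeof(struct rgb)); /* 3 on typical compilers */
          printf("array of %d pixels = %zu bytes\n", 240 * 320, sizeof img); /* stride 3, no padding */

          img[0] = unpack(0x00FF8040u);
          printf("R=%u G=%u B=%u packed back to 0x%08X\n",
                 img[0].r, img[0].g, img[0].b, pack(img[0]));
          return 0;
      }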

    Read the article

  • Bluetooth txt file byte count increases?

    - by cheesebunz
    Hi everyone, I am able to transfer files from one mobile device to another. When the sender sends a text file of 8 bytes, it arrives at the receiver end as a 256-byte txt file, and when I open the contents of the txt file, there is my info plus a lot of square boxes. Here is my code from the sender:

      string fileName = @"SendTest.txt";
      System.Uri uri = new Uri("obex://" + selectedAddr + "/" + System.IO.Path.GetFileName(fileName));
      ObexWebRequest request = new ObexWebRequest(uri);
      Stream requestStream = request.GetRequestStream();
      FileStream fs = File.OpenRead(fileName);
      byte[] buffer = new byte[1024];
      int readBytes = 1;
      while (readBytes != 0)
      {
          readBytes = fs.Read(buffer, 0, buffer.Length);
          requestStream.Write(buffer, 0, readBytes);
      }
      requestStream.Close();
      ObexWebResponse response = (ObexWebResponse)request.GetResponse();
      MessageBox.Show(response.StatusCode.ToString());
      response.Close();

    Does anyone know how I can solve this?

    Read the article

  • C++ casted realloc causing memory leak

    - by wyatt
    I'm using a function I found here to save a webpage to memory with cURL:

      struct WebpageData {
          char *pageData;
          size_t size;
      };

      size_t storePage(void *input, size_t size, size_t nmemb, void *output) {
          size_t realsize = size * nmemb;
          struct WebpageData *page = (struct WebpageData *)output;
          page->pageData = (char *)realloc(page->pageData, page->size + realsize + 1);
          if(page->pageData) {
              memcpy(&(page->pageData[page->size]), input, realsize);
              page->size += realsize;
              page->pageData[page->size] = 0;
          }
          return realsize;
      }

    and find the line:

      page->pageData = (char *)realloc(page->pageData, page->size + realsize + 1);

    is causing a memory leak of a few hundred bytes per call. The only real change I've made from the original source is casting the line in question to a (char *), which my compiler (gcc, g++ specifically if it's a c/c++ issue, but gcc also wouldn't compile with the uncast statement) insisted upon, but I assume this is the source of the leak. Can anyone elucidate? Thanks
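
    The cast itself does not allocate or leak anything; the usual suspects with this pattern are (a) the final buffer never being freed once the transfer is finished, and (b) a failed realloc overwriting the only pointer to the old block. A minimal sketch of the safer shape, under the assumption the struct is used exactly as shown above:

      #include <stdlib.h>
      #include <string.h>

      struct WebpageData {
          char  *pageData;
          size_t size;
      };

      size_t storePage(void *input, size_t size, size_t nmemb, void *output)
      {
          size_t realsize = size * nmemb;
          struct WebpageData *page = (struct WebpageData *)output;

          /* realloc into a temporary so the old block is not orphaned on failure */
          char *grown = (char *)realloc(page->pageData, page->size + realsize + 1);
          if (!grown)
              return 0;                     /* returning a short count tells cURL to abort */
          page->pageData = grown;

          memcpy(page->pageData + page->size, input, realsize);
          page->size += realsize;
          page->pageData[page->size] = '\0';
          return realsize;
      }

      /* whoever owns the struct must release the buffer once the page is no longer needed */
      void freePage(struct WebpageData *page)
      {
          free(page->pageData);
          page->pageData = NULL;
          page->size = 0;
      }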

    Read the article

  • C to Assembly code - what does it mean

    - by Smith
    I'm trying to figure out exactly what is going on with the following assembly code. Can someone go down line by line and explain what is happening? I input what I think is happening (see comments) but need clarification.

      .file   "testcalc.c"
      .section .rodata.str1.1,"aMS",@progbits,1
      .LC0:
      .string "x=%d, y=%d, z=%d, result=%d\n"
      .text
      .globl  main
      .type   main, @function
      main:
          leal    4(%esp), %ecx      // establish stack frame
          andl    $-16, %esp         // decrement %esp by 16, align stack
          pushl   -4(%ecx)           // push original stack pointer
          pushl   %ebp               // save base pointer
          movl    %esp, %ebp         // establish stack frame
          pushl   %ecx               // save to ecx
          subl    $36, %esp          // alloc 36 bytes for local vars
          movl    $11, 8(%esp)       // store 11 in z
          movl    $6, 4(%esp)        // store 6 in y
          movl    $2, (%esp)         // store 2 in x
          call    calc               // function call to calc
          movl    %eax, 20(%esp)     // %esp + 20 into %eax
          movl    $11, 16(%esp)      // WHAT
          movl    $6, 12(%esp)       // WHAT
          movl    $2, 8(%esp)        // WHAT
          movl    $.LC0, 4(%esp)     // WHAT?!?!
          movl    $1, (%esp)         // move result into address of %esp
          call    __printf_chk       // call printf function
          addl    $36, %esp          // WHAT?
          popl    %ecx
          popl    %ebp
          leal    -4(%ecx), %esp
          ret
      .size   main, .-main
      .ident  "GCC: (Ubuntu 4.3.3-5ubuntu4) 4.3.3"
      .section .note.GNU-stack,"",@progbits

    Original code:

      #include <stdio.h>

      int calc(int x, int y, int z);

      int main()
      {
          int x = 2;
          int y = 6;
          int z = 11;
          int result;

          result = calc(x, y, z);
          printf("x=%d, y=%d, z=%d, result=%d\n", x, y, z, result);
      }

    Read the article

  • Native Endians and Auto Conversion

    - by KnickerKicker
    So the following converts big-endian values to little-endian ones:

      uint32_t ntoh32(uint32_t v)
      {
          return (v << 24) | ((v & 0x0000ff00) << 8) |
                 ((v & 0x00ff0000) >> 8) | (v >> 24);
      }

    It works. Like a charm. I read 4 bytes from a big-endian file into char v[4] and pass it into the above function as

      ntoh32(* reinterpret_cast<uint32_t *>(v))

    That doesn't work, because my compiler (VS 2005) automatically converts the big-endian char[4] into a little-endian uint32_t when I do the cast. AFAIK, this automatic conversion will not be portable, so I use

      uint32_t ntoh_4b(char v[])
      {
          uint32_t a = 0;
          a |= (unsigned char)v[0]; a <<= 8;
          a |= (unsigned char)v[1]; a <<= 8;
          a |= (unsigned char)v[2]; a <<= 8;
          a |= (unsigned char)v[3];
          return a;
      }

    Yes, the (unsigned char) cast is necessary. Yes, it is dog slow. There must be a better way. Anyone?
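
    A common portable idiom, sketched below, is to assemble the value from unsigned chars exactly as ntoh_4b does (optimizers often reduce this to a single bswap), or to memcpy the four bytes into an aligned uint32_t and byte-swap afterwards, which avoids the aliasing and alignment questions raised by the reinterpret_cast:

      #include <stdint.h>
      #include <string.h>
      #include <stdio.h>

      /* Read a big-endian 32-bit value from a 4-byte buffer, independent of host endianness. */
      static uint32_t load_be32(const void *p)
      {
          const unsigned char *b = (const unsigned char *)p;
          return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
                 ((uint32_t)b[2] <<  8) |  (uint32_t)b[3];
      }

      /* Alternative: copy into an aligned integer first, then byte-swap on little-endian hosts. */
      static uint32_t load_native32(const void *p)
      {
          uint32_t v;
          memcpy(&v, p, sizeof v);   /* well-defined, unlike casting a char buffer to uint32_t* */
          return v;
      }

      int main(void)
      {
          const char bytes[4] = { 0x12, 0x34, 0x56, 0x78 };   /* big-endian 0x12345678 */
          printf("%08X\n", load_be32(bytes));
          (void)load_native32;
          return 0;
      }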

    Read the article

  • jQuery ajax post of jpg image to .net webservice. Image results corrupted

    - by sosergio
    I have a phonegap jquery app that opens the camera and takes a picture. I then POST this picture to a .net webservice, which I've coded. I can't use phonegap FileTransfer because it isn't supported by Bada OS, which is a requirement. I believe I've successfully loaded the image from the phonegap FileSystem API, I've attached it to an $.ajax POST, and I've even received it on the .net side, but when .net saves the image on the server, the image ends up corrupted. It seems to me that the two sides of the communication use different data types. Does anyone have experience with this? Any help will be appreciated. This is my code:

      // PHONEGAP CAMERA ACCESS (summed up)
      navigator.camera.getPicture(onGetPictureSuccess, onGetPictureFail,
          { quality: 50, destinationType: Camera.DestinationType.FILE_URI });
      window.resolveLocalFileSystemURI(imageURI, onResolveFileSystemURISuccess, onResolveFileSystemURIError);
      fileEntry.file(gotFileSuccess, gotFileError);
      new FileReader().readAsDataURL(file);

      // UPLOAD FILE
      function onDataReadSuccess(evt) {
          var image_data = evt.target.result;
          var filename = unique_id();
          var filext = "jpg";
          $.ajax({
              type: 'POST',
              url: SERVICE_BASE_URL + "/fotos/" + filename + "?ext=" + filext,
              cache: false,
              timeout: 100000,
              processData: false,
              data: image_data,
              contentType: 'image/jpeg',
              success: function(data) {
                  console.log("Data Uploaded with success. Message: " + data);
                  $.mobile.hidePageLoadingMsg();
                  $.mobile.changePage("ok.html");
              }
          });
      }

    On my .net web service this is the method that gets invoked:

      public string FotoSave(string filename, string extension, Stream fileContent)
      {
          string filePath = HttpContext.Current.Server.MapPath("~/foto_data/") + "\\" + filename;
          FileStream writeStream = new FileStream(filePath, FileMode.OpenOrCreate, FileAccess.Write);
          int Length = 256;
          Byte[] buffer = new Byte[Length];
          int bytesRead = readStream.Read(buffer, 0, Length);
          // write the required bytes
          while (bytesRead > 0)
          {
              writeStream.Write(buffer, 0, bytesRead);
              bytesRead = readStream.Read(buffer, 0, Length);
          }
          readStream.Close();
          writeStream.Close();
      }

    Read the article

  • can't read XML via PHP

    - by jasmine
    I can't find the reason; I only see the following error message:

      Input is not proper UTF-8, indicate encoding ! Bytes: 0x00 0x5D 0x5D 0x3E

    The following is my PHP code:

      $reader2 = new XMLReader();
      $reader2->XML($xmlstring);
      $user_data = "";
      while ($reader2->read()) {
          if ($reader2->name == "user_id" && $reader2->nodeType == XMLReader::ELEMENT) {
              $reader2->read();
              $user_data .= $reader2->value;
          }
      }
      $reader2->close();

    The following is the XML data:

      <?xml version="1.0" encoding="UTF-8" ?>
      <SOAP:Envelope xmlns:SOAP="http://www.w3.org/2003/05/soap-envelope" >
        <SOAP:Body >
          <user_id>1234567890</user_id>
          <greeting_name><![CDATA[ABCDEF ..yl/?]]></greeting_name>
        </SOAP:Body>
      </SOAP:Envelope>

    I have tried a lot of ways but still can't find a solution. The greeting tag value may be Chinese or English words.

    Read the article

  • pointer in c and the c program

    - by sandy101
    Hello, I am studying pointers and I came across this program:

      #include <stdio.h>

      void swap(int *, int *);

      int main()
      {
          int a = 10;
          int b = 20;
          swap(&a, &b);
          printf("the value is %d and %d", a, b);
          return 0;
      }

      void swap(int *a, int *b)
      {
          int t;
          t = *a;
          *a = *b;
          *b = t;
          printf("%d and %d\n", *a, *b);
      }

    Can anyone tell me why the values come back swapped in main? What I understood until now is that a function call in C does not affect the calling function and its values. I also want to know how much space a pointer variable occupies (the way an integer occupies 2 bytes), and the various applications and advantages of pointers. Please, can anyone help?
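
    The short answer to the first part is that swap receives the addresses of a and b and writes through them, so it modifies the caller's own variables rather than copies; and a pointer's size is whatever the platform uses for an address (commonly 4 bytes on 32-bit targets, 8 on 64-bit), which sizeof will report. A tiny sketch:

      #include <stdio.h>

      int main(void)
      {
          int a = 10;
          int *pa = &a;     /* pa holds the address of a */

          *pa = 99;         /* writing through the pointer changes a itself */
          printf("a = %d\n", a);                         /* prints 99 */

          /* sizes are implementation-defined; print what this platform uses */
          printf("sizeof(int) = %zu, sizeof(int *) = %zu\n",
                 sizeof(int), sizeof(int *));
          return 0;
      }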

    Read the article

  • On C++ global operator new: why it can be replaced

    - by Jimmy
    I wrote a small program in VS2005 to test whether C++ global operator new can be overloaded. It can.

      #include "stdafx.h"
      #include "iostream"
      #include "iomanip"
      #include "string"
      #include "new"
      using namespace std;

      class C
      {
      public:
          C() { cout << "CTOR" << endl; }
      };

      void * operator new(size_t size)
      {
          cout << "my overload of global plain old new" << endl;
          // try to allocate size bytes
          void *p = malloc(size);
          return (p);
      }

      int main()
      {
          C* pc1 = new C;
          cin.get();
          return 0;
      }

    In the above, my definition of operator new is called. If I remove that function from the code, then the operator new in C:\Program Files (x86)\Microsoft Visual Studio 8\VC\crt\src\new.cpp gets called. All is good. However, in my opinion, my implementation of operator new does NOT overload the new in new.cpp; it CONFLICTS with it and violates the one-definition rule. Why doesn't the compiler complain about it? Or does the standard say that since operator new is so special, the one-definition rule does not apply here? Thanks.

    Read the article

  • Strange results from OdbcDataReader reading Sqlite DB

    - by stout
    This method returns some strange results, and I was wondering if someone could explain why this is happening, and possibly suggest a solution to get my desired results.

    Results:
    - FileName = what I'd expect
    - FileSize = what I'd expect
    - Buffer = all bytes = 0
    - BytesRead = 0
    - BlobString = string of binary data
    - FieldType = BLOB (what I'd expect)
    - ColumnType = System.String

    Furthermore, if the file is greater than a few KB, the reader throws an exception stating that the StringBuilder capacity argument must be greater than zero (presumably because the size is greater than Int32.MaxValue). I guess my question is: how does one properly read large BLOBs from an OdbcDataReader?

      public static String SaveBinaryFile(String Key)
      {
          try
          {
              Connect();
              OdbcCommand Command = new OdbcCommand("SELECT [_filename_],[_filesize_],[_content_] FROM [_sys_content] WHERE [_key_] = '" + Key + "';", Connection);
              OdbcDataReader Reader = Command.ExecuteReader(CommandBehavior.SequentialAccess);
              if (Reader.HasRows == false) return null;

              String FileName = Reader.GetString(0);
              int FileSize = int.Parse(Reader.GetString(1));
              byte[] Buffer = new byte[FileSize];
              long BytesRead = Reader.GetBytes(2, 0, Buffer, 0, FileSize);
              String BlobString = (String)Reader["_content_"];
              String FieldType = Reader.GetDataTypeName(2);
              Type ColumnType = Reader.GetFieldType(2);

              return null;
          }
          catch (Exception ex)
          {
              Tools.ErrorHandler.Catch(ex);
              return null;
          }
      }

    Read the article

  • What exactly is the GNU tar ././@LongLink "trick"?

    - by Cheeso
    I read that a tar entry type of 'L' (76) is used by gnu tar and gnu-compliant tar utilities to indicate that the next entry in the archive has a "long" name. In this case the header block with the entry type of 'L' usually encodes the name ././@LongLink . My question is: where is the format of the next block described? The format of a tar archive is very simple: it is just a series of 512-byte blocks. In the normal case, each file in a tar archive is represented as a series of blocks. The first block is a header block, containing the file name, entry type, modified time, and other metadata. Then the raw file data follows, using as many 512-byte blocks as required. Then the next entry. If the filename is longer than will fit in the space allocated in the header block, gnu tar apparently uses what's known as "the ././@LongLink trick". I can't find a precise description for it. When the entry type is 'L', how do I know how long the "long" filename is? Is the long name limited to 512 bytes, in other words, whatever fits in one block? Most importantly: where is this documented?

    Read the article

  • asp file system object

    - by sushant
    I am using this code to access files and folders:

      <%@ Language=VBScript %>
      <%
      option explicit
      dim sRoot, sDir, sParent, objFSO, objFolder, objFile, objSubFolder, sSize
      %>
      <%
      sRoot = "D:Raghu"
      sDir = Request("Dir")
      sDir = sDir & "\"
      Response.Write "" & sDir & "" & vbCRLF
      Set objFSO = CreateObject("Scripting.FileSystemObject")
      on error resume next
      Set objFolder = objFSO.GetFolder(sRoot & sDir)
      if err.number < 0 then
          Response.Write "Could not open folder"
          Response.End
      end if
      on error goto 0
      sParent = objFSO.GetParentFolderName(objFolder.Path)
      ' Remove the contents of sRoot from the front. This gives us the parent
      ' path relative to the root folder
      ' eg. if parent folder is "c:webfilessubfolder1subfolder2" then we just want "subfolder1subfolder2"
      sParent = mid(sParent, len(sRoot) + 1)
      Response.Write ""
      ' Give a link to the parent folder. This is just a link to this page only passing in
      ' the new folder as a parameter
      Response.Write "Parent folder" & vbCRLF
      ' Now we want to loop through the subfolders in this folder
      For Each objSubFolder In objFolder.SubFolders
          ' And provide a link to them
          Response.Write "" & objSubFolder.Name & "" & vbCRLF
      Next
      ' Now we want to loop through the files in this folder
      For Each objFile In objFolder.Files
          if Clng(objFile.Size) < 1024 then
              sSize = objFile.Size & " bytes"
          else
              sSize = Clng(objFile.Size / 1024) & " KB"
          end if
          ' And provide a link to view them. This is a link to show.asp passing in the directory and the file
          ' as parameters
          Response.Write "" & objFile.Name & "" & sSize & "" & objFile.Type & "" & vbCRLF
      Next
      Response.Write ""
      %>

    It works fine, but when I try to access something on a shared path like "\cvrdd0110:share" it gives an error. How do I access these files?

    Read the article

  • python crc32 woes

    - by lazyr
    I'm writing a python program to extract data from the middle of a 6 GB bz2 file. A bzip2 file is made up of independently decompressible blocks of data, so I only need to find a block (they are delimited by magic bits), then create a temporary one-block bzip2 file from it in memory, and finally pass that to the bz2.decompress function. Easy, no? The bzip2 format has a crc32 checksum for the file at the end. No problem, binascii.crc32 to the rescue. But wait. The data to be checksummed does not necessarily end on a byte boundary, and the crc32 function operates on a whole number of bytes. My plan: use the binascii.crc32 function on all but the last byte, and then a function of my own to update the computed crc with the last 1-7 bits. But hours of coding and testing have left me bewildered, and my puzzlement can be boiled down to this question: how come crc32("\x00") is not 0x00000000? Shouldn't it be, according to the wikipedia article? You start with 0b00000000 and pad with 32 0's, then do polynomial division with 0x04C11DB7 until there are no ones left in the first 8 bits, which is immediately. Your last 32 bits is the checksum, and how can that not be all zeroes? I've searched google for answers and looked at the code of several crc32 implementations without finding any clue to why this is so.
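
    The short answer to the last question is that the CRC-32 implemented by zlib/binascii is not the bare polynomial division of the Wikipedia construction: the register is preset to 0xFFFFFFFF, the bits are processed LSB-first (the "reflected" variant, polynomial 0xEDB88320), and the result is inverted at the end, so a single zero byte does not hash to zero. A bit-by-bit sketch in C of that variant, which should match binascii.crc32 for whole bytes:

      #include <stdio.h>
      #include <stdint.h>
      #include <stddef.h>

      /* Reflected CRC-32 (polynomial 0xEDB88320), preset 0xFFFFFFFF, final XOR 0xFFFFFFFF. */
      static uint32_t crc32_bitwise(const unsigned char *data, size_t len)
      {
          uint32_t crc = 0xFFFFFFFFu;               /* the preset is why crc32("\x00") != 0 */
          for (size_t i = 0; i < len; i++) {
              crc ^= data[i];
              for (int bit = 0; bit < 8; bit++)
                  crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)-(int32_t)(crc & 1));
          }
          return crc ^ 0xFFFFFFFFu;                 /* final inversion */
      }

      int main(void)
      {
          unsigned char zero = 0x00;
          printf("crc32(\"\\x00\") = 0x%08X\n", crc32_bitwise(&zero, 1));
          printf("crc32(\"\")     = 0x%08X\n", crc32_bitwise(&zero, 0));
          return 0;
      }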

    Read the article

  • One bidirectional TCP socket or two unidirectional ones? (Linux, high volume, low latency)

    - by osgx
    Hello. I need to send (interchange) a high volume of data periodically with the lowest possible latency between 2 machines. The network is rather fast (e.g. 1Gbit or even 2G+). The OS is Linux. Is it faster to use 1 TCP socket (for both send and recv) or 2 unidirectional TCP sockets? The test for this task is much like the NetPIPE network benchmark: measure latency and bandwidth for sizes from 2^1 up to 2^13 bytes, with each size sent and received at least 3 times (in the real task the number of sends is greater; both processes will be sending and receiving, ping-pong style). The supposed benefit of 2 unidirectional connections comes from Linux: http://lxr.linux.no/linux+v2.6.18/net/ipv4/tcp_input.c#L3847

      3847 /*
      3848  * TCP receive function for the ESTABLISHED state.
      3849  *
      3850  * It is split into a fast path and a slow path. The fast path is
      3851  *      disabled when:
      ...
      3859  *      - Data is sent in both directions. Fast path only supports pure senders
      3860  *        or pure receivers (this means either the sequence number or the ack
      3861  *        value must stay constant)
      ...
      3863  *
      3864  * When these conditions are not satisfied it drops into a standard
      3865  * receive procedure patterned after RFC793 to handle all cases.
      3866  * The first three cases are guaranteed by proper pred_flags setting,
      3867  * the rest is checked inline. Fast processing is turned on in
      3868  * tcp_data_queue when everything is OK.

    All the other conditions for disabling the fast path are false, and only a non-unidirectional socket stops the kernel from taking the fast path on receive.

    Read the article

  • Static Vs Non-Static Method Performance C#

    - by dotnetguts
    Hello all, I have a few global methods declared in a public class in my ASP.NET web application. I have a habit of declaring all global methods in a public class in the following format:

      public static string MethodName(parameters)
      {
      }

    I want to know how this impacts performance.

    1) Which one is better: a static method or a non-static method?
    2) Why is it better?

    The following link claims non-static methods are good because static methods use locks to be thread-safe; they always do a Monitor.Enter() and Monitor.Exit() internally to ensure thread safety.
    http://bytes.com/topic/c-sharp/answers/231701-static-vs-non-static-function-performance

    And the following link claims static methods are good: static methods are normally faster to invoke on the call stack than instance methods. There are several reasons for this in the C# programming language. Instance methods actually use the 'this' instance pointer as the first parameter, so an instance method will always have that overhead. Instance methods are also implemented with the callvirt instruction in the intermediate language, which imposes a slight overhead. Please note that changing your methods to static methods is unlikely to help much on ambitious performance goals, but it can help a tiny bit and possibly lead to further reductions.
    http://dotnetperls.com/static-method

    I am a little confused about which one to use. Thanks

    Read the article
