Search Results

Search found 3953 results on 159 pages for 'byte slave'.

Page 69/159 | < Previous Page | 65 66 67 68 69 70 71 72 73 74 75 76  | Next Page >

  • j2me or android file upload to jsp

    - by user313613
    Hi, I am new to mobile development. I would like to upload a file from BlackBerry and Android; how do I develop the mobile side for this JSP page? The server-side JSP below is taken from roseindia.net.

      <%@ page import="java.io.*" %>
      <%
      // get the content type from the JSP request header
      String contentType = request.getContentType();
      // check that the content type is not null and that the request is multipart/form-data
      if ((contentType != null) && (contentType.indexOf("multipart/form-data") >= 0)) {
          DataInputStream in = new DataInputStream(request.getInputStream());
          // length of the posted data
          int formDataLength = request.getContentLength();
          byte dataBytes[] = new byte[formDataLength];
          int byteRead = 0;
          int totalBytesRead = 0;
          // read the uploaded data into the byte array
          while (totalBytesRead < formDataLength) {
              byteRead = in.read(dataBytes, totalBytesRead, formDataLength);
              totalBytesRead += byteRead;
          }
          String file = new String(dataBytes);
          // extract the file name
          String saveFile = file.substring(file.indexOf("filename=\"") + 10);
          saveFile = saveFile.substring(0, saveFile.indexOf("\n"));
          saveFile = saveFile.substring(saveFile.lastIndexOf("\\") + 1, saveFile.indexOf("\""));
          int lastIndex = contentType.lastIndexOf("=");
          String boundary = contentType.substring(lastIndex + 1, contentType.length());
          int pos;
          // locate the start of the file content
          pos = file.indexOf("filename=\"");
          pos = file.indexOf("\n", pos) + 1;
          pos = file.indexOf("\n", pos) + 1;
          pos = file.indexOf("\n", pos) + 1;
          int boundaryLocation = file.indexOf(boundary, pos) - 4;
          int startPos = ((file.substring(0, pos)).getBytes()).length;
          int endPos = ((file.substring(0, boundaryLocation)).getBytes()).length;
          // create a new file with the same name and write the content into it
          FileOutputStream fileOut = new FileOutputStream(saveFile);
          fileOut.write(dataBytes, startPos, (endPos - startPos));
          fileOut.flush();
          fileOut.close();
      %>
      <br><table border="2"><tr><td><b>You have successfully uploaded the file by the name of:
      <% out.println(saveFile); %>
      <% } %>
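
    For the mobile side, a minimal sketch of a client that could POST a file to such a page over plain HttpURLConnection follows (Android-flavoured Java; on Android this would have to run off the UI thread, and on BlackBerry/J2ME the javax.microedition.io classes would replace java.net). The URL, form field name and boundary are assumptions, not anything the JSP above requires by name:

      import java.io.DataOutputStream;
      import java.io.File;
      import java.io.FileInputStream;
      import java.io.IOException;
      import java.net.HttpURLConnection;
      import java.net.URL;

      public class UploadClient {
          // hypothetical helper: jspUrl, the "file" field name and the boundary are assumptions
          public static int upload(String jspUrl, File localFile) throws IOException {
              String boundary = "----upload" + System.currentTimeMillis();
              HttpURLConnection conn = (HttpURLConnection) new URL(jspUrl).openConnection();
              conn.setDoOutput(true);
              conn.setRequestMethod("POST");
              conn.setRequestProperty("Content-Type", "multipart/form-data; boundary=" + boundary);

              DataOutputStream out = new DataOutputStream(conn.getOutputStream());
              out.writeBytes("--" + boundary + "\r\n");
              out.writeBytes("Content-Disposition: form-data; name=\"file\"; filename=\""
                      + localFile.getName() + "\"\r\n\r\n");

              FileInputStream in = new FileInputStream(localFile);  // copy the file into the request body
              byte[] buf = new byte[4096];
              int n;
              while ((n = in.read(buf)) != -1) {
                  out.write(buf, 0, n);
              }
              in.close();

              out.writeBytes("\r\n--" + boundary + "--\r\n");       // closing boundary
              out.flush();
              out.close();
              return conn.getResponseCode();                        // 200 if the JSP ran
          }
      }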

    Read the article

  • How do I Convert ARGB value from string to colour?

    - by James
    I am trying to use the MakeColor method in the GDIPAPI unit, but the conversion from int to byte is not returning the correct value. Example:

      var
        argbStr: string;
        A, R, G, B: Byte;
      begin
        argbStr := 'ffffcc88';
        A := StrToInt('$' + Copy(AValue, 1, 2));
        R := StrToInt('$' + Copy(AValue, 3, 2));
        G := StrToInt('$' + Copy(AValue, 5, 2));
        B := StrToInt('$' + Copy(AValue, 7, 2));
        Result := MakeColor(A, R, G, B);
      end;

    What am I doing wrong?

    Read the article

  • python input UnicodeDecodeError:

    - by The man on the Clapham omnibus
    Python 3.x:

      >>> a = input()
      hope
      >>> a
      'hope'
      >>> b = input()
      håpe
      >>> b
      'håpe'
      >>> c = input()    # start typing hå..., delete using backspace, and change to hope
      Traceback (most recent call last):
        File "<stdin>", line 1, in <module>
      UnicodeDecodeError: 'utf8' codec can't decode byte 0xc3 in position 1: invalid continuation byte
      >>>

    The situation is not terrible and I am working around it, but I find it strange that the bytes get messed up when deleting. Has anyone else experienced this? The terminal history shows the entry as h?ope. Any ideas? In the script that uses this, I do import readline to provide command-line history.

    Read the article

  • delete & new in c++

    - by singh
    Hi, this may be a very simple question, but please help me. I want to know what exactly happens when I call new and delete. For example, in the code below:

      char *ptr = new char[10];
      delete [] ptr;

    The call to new returns a memory address. Does it allocate exactly 10 bytes on the heap, and where is the information about the size stored? When I call delete on the same pointer, I see in the debugger that a lot of bytes change before and after the 10 bytes. Is there a header for each new that contains the number of bytes allocated by new? Thanks a lot.

    Read the article

  • What is the best way to convert this Java code into Objective-C code?

    - by LCYSoft
      public byte[] toBytes() {
          size = 12;
          ByteBuffer buf = ByteBuffer.allocate(size);
          buf.putInt(type.ordinal()); // type is an enum
          buf.putInt(id);
          buf.putInt(size);
          return buf.array();
      }

      @Override
      public void fromBytes(byte[] data) {
          ByteBuffer buf = ByteBuffer.allocate(data.length);
          buf.put(data);
          buf.rewind();
          type = MessageType.values()[buf.getInt()];
          id = buf.getInt();
          size = buf.getInt();
      }

    Thanks in advance :)

    Read the article

  • C# Array number sorting

    - by athgap
    Hi, I have an array of numbers jumbled up from 0-9. How do I sort them in ascending order? Array.Sort doesn't work for me. Is there another way to do it? Thanks in advance.

    EDIT: Array.Sort gives me this error:

      Argument 1: cannot convert from 'string' to 'System.Array'

    Right now it gives me this output:

      0) VersionInfo.xml
      2) luizafroes_singapore2951478702.xml
      3) virua837890738.xml
      4) darkwizar9102314425644.xml
      5) snarterz_584609551.xml
      6) alysiayeo594136055.xml
      1) z-a-n-n2306499277.xml
      7) zhangliyi_memories932668799030.xml
      8) andy_tan911368887723.xml
      9) config.xml

    k takes the numbers from 0-9:

      string[] valnames = rk2.GetValueNames();
      foreach (string k in valnames)
      {
          if (k == "MRUListEx")
          {
              continue;
          }
          Byte[] byteValue = (Byte[])rk2.GetValue(k);
          UnicodeEncoding unicode = new UnicodeEncoding();
          string val = unicode.GetString(byteValue);
          Array.Sort(k); // Error here
          richTextBoxRecentDoc.AppendText("\n" + k + ") " + val + "\n");
      }

    Read the article

  • how to retrieve String from DatagramPacket

    - by sajith
    The following code prints [B@40545a60,[B@40545a60abc exp, but I want it to print abc so that I can retrieve the correct message on the receiving system.

      public class Operation {
          InetAddress ip;
          DatagramSocket dsock;
          DatagramPacket pack1;
          byte[] bin, bout;

          WifyOperation(InetAddress Systemip) {
              ip = Systemip;
              try {
                  dsock = new DatagramSocket();
              } catch (SocketException e) {
                  // TODO Auto-generated catch block
                  e.printStackTrace();
              }
          }

          void sendbyte() {
              String senddata = "abc" + "123";
              bout = senddata.getBytes();
              pack1 = new DatagramPacket(bout, bout.length, ip, 3322);
              try {
                  dsock.send(pack1);
                  Log.d(pack1.getData().toString(), "abc exp");
              } catch (IOException e) {
                  // TODO Auto-generated catch block
                  e.printStackTrace();
              }
          }
      }

    How do I retrieve a String instead of bytes from the packet pack1?
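
    A likely explanation: pack1.getData() returns a byte[], and calling toString() on a Java array prints its identity ([B@...), not its contents. A minimal sketch of getting the text back out of a packet (buffer size and log tags are assumptions):

      // sender side: log the payload as text rather than the array reference
      Log.d("Sender", new String(pack1.getData(), 0, pack1.getLength()));

      // receiver side: only the first getLength() bytes of the buffer are valid
      byte[] buf = new byte[1024];
      DatagramPacket incoming = new DatagramPacket(buf, buf.length);
      dsock.receive(incoming);
      String msg = new String(incoming.getData(), incoming.getOffset(), incoming.getLength());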

    Read the article

  • reading csv file without for

    - by Abruzzo Forte e Gentile
    Hi all, I need to read a CSV file in Python. Since I receive a 'NULL byte' error for the last row, I would like to avoid the for keyword and use a while loop instead. Do you know how to do that?

      reader = csv.reader(file)
      for row in reader:   # I get an error at this line
          # do whatever with row

    I want to substitute the for-loop with a while-loop so that I can check whether the row is NULL or not. What is the function for reading a single row in the csv module? Thanks. P.S. The traceback is below:

      Traceback (most recent call last):
        File "FetchNeuro_TodayTrades.py", line 189, in <module>
          for row in reader:
      _csv.Error: line contains NULL byte

    Read the article

  • Which is faster in memory, ints or chars? And file-mapping or chunk reading?

    - by Nick
    Okay, so I've written a (rather unoptimized) program before to encode images to JPEGs; however, now I am working with MPEG-2 transport streams and the H.264-encoded video within them. Before I dive into programming all of this, I am curious what the fastest way to deal with the actual file is. Currently I am file-mapping the .mts file into memory to work on it, although I am not sure if it would be faster to (for example) read 100 MB of the file into memory in chunks and deal with it that way. These files require a lot of bit-shifting and such to read flags, so I am wondering whether, when I reference some of the memory, it is faster to read 4 bytes at once as an integer or 1 byte as a character. I thought I read somewhere that x86 processors are optimized to a 4-byte granularity, but I'm not sure if this is true... Thanks!

    Read the article

  • How to use Crypto++ to extract the textual information in a file?

    - by JL
    I have a file that is signed with a certificate located here. CryptoAPI has not worked out for me because of server differences between 2003 and 2008+, and different file inputs. I am now considering using Crypto++ to get the job done. Essentially, all I would like to do is extract the text information from this file, and others like it, and save it as XML. There are some bits in the XML that are marked as encoded data, but those sections are just base64 encoded, so before I can get to the XML envelope I need to deal with the certificate that is obfuscating the plain text. Does anyone with experience in Crypto++ know how this is done? With CryptoAPI, I was doing something like this:

      byte[] fileContents = File.ReadAllBytes(outFileName);
      var contentInfo = new ContentInfo(fileContents);
      var signedCms = new SignedCms(contentInfo);
      signedCms.Decode(fileContents);
      signedCms.RemoveSignature(0);
      byte[] outfileContent = signedCms.ContentInfo.Content;

    Read the article

  • c++ normalizing data sizes across systems

    - by Bocochoco
    I have a struct with three variables: two unsigned ints and an unsigned char. From my understanding, a C++ char is always 1 byte regardless of what operating system it is on. The same can't be said for other data types. I am looking for a way to normalize PODs so that, when saved into a binary file, the resulting file is readable on any operating system the code is compiled for. I changed my struct to use 1-byte alignment by adding #pragma as follows:

      #pragma pack(push, 1)
      struct test
      {
          int a;
      };
      #pragma pack(pop)

    but that doesn't necessarily mean that int a is exactly 4 bytes on every OS, I don't think? Is there a way to ensure that a file saved from my code will always be readable?

    Read the article

  • How to use SQL trigger to record the affected column's row number

    - by Freeman
    I want to have an 'updateinfo' table in order to record every update/insert/delete operation on another table. In Oracle I've written this:

      CREATE TABLE updateinfo
      (
        rnumber     NUMBER(10),
        tablename   VARCHAR2(100 BYTE),
        action      VARCHAR2(100 BYTE),
        update_date DATE
      );

      DROP TRIGGER TRI_TABLE;

      CREATE OR REPLACE TRIGGER TRI_TABLE
      AFTER DELETE OR INSERT OR UPDATE
      ON demo
      REFERENCING NEW AS NEW OLD AS OLD
      FOR EACH ROW
      BEGIN
        if inserting then
          insert into updateinfo(rnumber,tablename,action,update_date)
          values(rownum,'demo','insert',sysdate);
        elsif updating then
          insert into updateinfo(rnumber,tablename,action,update_date)
          values(rownum,'demo','update',sysdate);
        elsif deleting then
          insert into updateinfo(rnumber,tablename,action,update_date)
          values(rownum,'demo','delete',sysdate);
        end if;
        -- EXCEPTION
        --   WHEN OTHERS THEN
        --     Consider logging the error and then re-raise
        --     RAISE;
      END TRI_TABLE;

    But when checking updateinfo, the rnumber column is zero in every row. Is there any way to retrieve the correct row number?

    Read the article

  • Bitmap brightness issue in c++

    - by Suriyan Suresh
    I have used the following code to adjust the image brightness. I am testing this application on the Samsung bada platform and its SDK; when I run it in the bada simulator it never ends (it runs forever). Please point out the mistake in the code.

      int BitmapWidth = 0, BitmapHeight = 0;
      result r = E_SUCCESS;
      BufferInfo myBuffer;
      Osp::Media::Image *pImage = null;
      Osp::Graphics::Canvas *pCanvas = null;
      Osp::Graphics::Rectangle *pRect = null;
      String path("/Media/Images/tom1.jpg");

      pImage = new Osp::Media::Image();
      r = pImage->Construct();
      pBitmap2 = pImage->DecodeN(path, BITMAP_PIXEL_FORMAT_ARGB8888, LCD_WIDTH, LCD_HEIGHT);
      BitmapWidth = pBitmap2->GetWidth();
      BitmapHeight = pBitmap2->GetHeight();
      pBitmap2->Lock(myBuffer);

      int nVal = 0;
      int stride = myBuffer.pitch;
      byte *p = (byte *)(void *)myBuffer.pPixels;
      int nWidth = BitmapWidth * 3;
      int nOffset = stride - BitmapWidth * 4;
      for (int y = 0; y < BitmapHeight; ++y)
      {
          for (int x = 0; x < nWidth; ++x)
          {
              nVal = (int)(p[0] + nBrightness);
              if (nVal < 0) nVal = 0;
              if (nVal > 255) nVal = 255;
              p[0] = (byte)nVal;
              ++p;
          }
          p += nOffset;
      }
      pBitmap2->Unlock();

      pCanvas = GetCanvasN();
      // Step 3: Create Rectangle
      pRect = new Osp::Graphics::Rectangle(0, 0, LCD_WIDTH, LCD_HEIGHT);
      r = pCanvas->DrawBitmap(*pRect, *pBitmap2);
      pCanvas->Show();
      RequestRedraw(true);

      delete pBitmap2;
      delete pCanvas;
      delete pRect;

    Read the article

  • Simplest way to create a wrapper class around some strings for a WPF DataGrid?

    - by Joel
    I'm building a simple hex editor in C#, and I've decided to use each cell in a DataGrid to display a byte*. I know that DataGrid will take a list and display each object in the list as a row, and each of that object's properties as columns. I want to display rows of 16 bytes each, which will require a wrapper with 16 string properties. While doable, it's not the most elegant solution. Is there an easier way? I've already tried creating a wrapper around a public string array of size 16, but that doesn't seem to work. Thanks.

    *The rationale for this is that I can have spaces between each byte without having to strip them all out when I want to save my edited file. Also, it seems like it'll be easier to label the rows and columns.

    Read the article

  • Java serial comm notifyOnDataAvailable configure receive buffer size?

    - by fred basset
    Hi all, I have a Java serial driver that uses the notifyOnDataAvailable mode to enable asynchronous receive notification. I see an occasional problem where the SerialPortEvent.DATA_AVAILABLE event is not fired until a relatively large number of characters have been received (e.g. 34). The problem is that the sender sent a 20-byte packet, so the Java receiver did not send an ACK until the sender retried the 20-byte send. Is there any way in Java COMM to configure the size of the receive buffer?
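
    For reference, the javax.comm SerialPort API (and the RXTX gnu.io equivalent) exposes a couple of knobs that influence when DATA_AVAILABLE fires and how the receive buffer is sized; whether a given driver honours them varies, so this is only a sketch with assumed values, applied to an already-opened port:

      // port is an already-opened javax.comm.SerialPort (or gnu.io.SerialPort with RXTX)
      try {
          port.enableReceiveThreshold(1);   // ask for DATA_AVAILABLE as soon as 1 byte is buffered
          port.enableReceiveTimeout(10);    // ...or after 10 ms, whichever comes first
      } catch (UnsupportedCommOperationException e) {
          // not every driver supports these; fall back to polling reads in that case
      }
      port.setInputBufferSize(64);          // a hint for the driver's receive buffer size
      port.notifyOnDataAvailable(true);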

    Read the article

  • Change HttpContext.Request.InputStream

    - by user320478
    I am getting a lot of HttpRequestValidationException errors in my event log. Is it possible to HTML-encode all the inputs from an override of ProcessRequest on a web page? I have tried this, but context.Request.InputStream.CanWrite is always false. Is there any way to HTML-encode all the fields when the request is made?

      public override void ProcessRequest(HttpContext context)
      {
        if (context.Request.InputStream.CanRead)
        {
          IEnumerator en = HttpContext.Current.Request.Form.GetEnumerator();
          while (en.MoveNext())
          {
            //Response.Write(Server.HtmlEncode(en.Current + " = " +
            //HttpContext.Current.Request.Form[(string)en.Current]));
          }
          long nLen = context.Request.InputStream.Length;
          if (nLen > 0)
          {
            string strInputStream = string.Empty;
            context.Request.InputStream.Position = 0;
            byte[] bytes = new byte[nLen];
            context.Request.InputStream.Read(bytes, 0, Convert.ToInt32(nLen));
            strInputStream = Encoding.Default.GetString(bytes);
            if (!string.IsNullOrEmpty(strInputStream))
            {
              List<string> stream = strInputStream.Split('&').ToList<string>();
              Dictionary<int, string> data = new Dictionary<int, string>();
              if (stream != null && stream.Count > 0)
              {
                int index = 0;
                foreach (string str in stream)
                {
                  if (str.Length > 3 && str.Substring(0, 3) == "txt")
                  {
                    string textBoxData = str;
                    string temp = Server.HtmlEncode(str);
                    //stream[index] = temp;
                    data.Add(index, temp);
                    index++;
                  }
                }
                if (data.Count > 0)
                {
                  List<string> streamNew = stream;
                  foreach (KeyValuePair<int, string> kvp in data)
                  {
                    streamNew[kvp.Key] = kvp.Value;
                  }
                  string newStream = string.Join("", streamNew.ToArray());
                  byte[] bytesNew = Encoding.Default.GetBytes(newStream);
                  if (context.Request.InputStream.CanWrite)
                  {
                    context.Request.InputStream.Flush();
                    context.Request.InputStream.Position = 0;
                    context.Request.InputStream.Write(bytesNew, 0, bytesNew.Length);
                    //Request.InputStream.Close();
                    //Request.InputStream.Dispose();
                  }
                }
              }
            }
          }
        }
        base.ProcessRequest(context);
      }

    Read the article

  • Controlling read and write access width to memory mapped registers in C

    - by srking
    I'm using an x86-based core to manipulate a 32-bit memory-mapped register. My hardware behaves correctly only if the CPU generates 32-bit wide reads and writes to this register. The register is aligned on a 32-bit address and is not addressable at byte granularity. What can I do to guarantee that my C (or C99) compiler will only generate full 32-bit wide reads and writes in all cases? For example, if I do a read-modify-write operation like this:

      volatile uint32_t* p_reg = 0xCAFE0000;
      *p_reg |= 0x01;

    I don't want the compiler to get smart about the fact that only the bottom byte changes and generate 8-bit wide reads/writes. Since the machine code is often denser for 8-bit operations on x86, I'm afraid of unwanted optimizations. Disabling optimizations in general is not an option.

    Read the article

  • JSP: I am doing an application in which I have to download a ppt file.

    - by Sanjeev
    I am doing an application in which I have to download a ppt file using a JSP page. I am using the following code, but it's not working:

      <%
      try {
          String filename = "file/abc.ppt";
          // set the http content type to "APPLICATION/OCTET-STREAM"
          response.setContentType("APPLICATION/OCTET-STREAM");
          // initialize the http content-disposition header to indicate a
          // file attachment with the default filename "myFile.txt"
          String disHeader = "Attachment Filename=\"abc.ppt\"";
          response.setHeader("Content-Disposition", disHeader);
          // transfer the file byte-by-byte to the response object
          File fileToDownload = new File(filename);
          FileInputStream fileInputStream = new FileInputStream(fileToDownload);
          int i;
          while ((i = fileInputStream.read()) != -1) {
              out.write(i);
          }
          fileInputStream.close();
          out.close();
      } catch (Exception e) { // file IO errors
          e.printStackTrace();
      }
      %>

    Can anybody solve this problem?
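
    For comparison, a minimal sketch of the same download routed through the binary servlet output stream; the path and filename are taken from the question, the rest is an assumption. Binary content generally needs to go through response.getOutputStream() with a standard Content-Disposition value ("attachment; filename=..."), rather than through the page's character-oriented out writer:

      // hypothetical servlet doGet (or equivalent); buffer size is arbitrary
      File fileToDownload = new File("file/abc.ppt");
      response.setContentType("application/octet-stream");
      response.setHeader("Content-Disposition", "attachment; filename=\"abc.ppt\"");
      response.setContentLength((int) fileToDownload.length());

      FileInputStream in = new FileInputStream(fileToDownload);
      OutputStream os = response.getOutputStream();   // binary-safe, unlike the JSP's JspWriter
      byte[] buf = new byte[4096];
      int n;
      while ((n = in.read(buf)) != -1) {
          os.write(buf, 0, n);
      }
      in.close();
      os.flush();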

    Read the article

  • C++ object in memory

    - by Neo_b
    Hello. Is there a standard for storing C++ objects in memory? I wish to set a char* pointer to a certain address in memory so that I can read certain objects' variables directly from memory, byte by byte. When I use Dev-C++, the variables are stored one by one right at the memory address of an object, in the order in which they were defined. Can this be different with a different compiler (e.g. the variables in a different order, or somewhere else)? Thank you in advance. :-)

    Read the article

  • C# How to download files from FTP Server

    - by user3696888
    I'm trying to download a file (all kinds of files: exe, dll, txt, etc.). When I run it, an error comes up on this line:

      using (FileStream ws = new FileStream(destination, FileMode.Create))

    This is the error message: Access to the path 'C:\Riot Games\League of Legends\RADS\solutions \lol_game_client_sln\releases\0.0.1.41\deploy' (which is my destination, where I want to save it) is denied. Here is my code:

      void download(string url, string destination)
      {
          FtpWebRequest request = (FtpWebRequest)WebRequest.Create(url);
          request.Method = WebRequestMethods.Ftp.DownloadFile;
          request.Credentials = new NetworkCredential("user", "password");
          request.UseBinary = true;
          using (FtpWebResponse response = (FtpWebResponse)request.GetResponse())
          {
              using (Stream rs = response.GetResponseStream())
              {
                  using (FileStream ws = new FileStream(destination, FileMode.Create))
                  {
                      byte[] buffer = new byte[2048];
                      int bytesRead = rs.Read(buffer, 0, buffer.Length);
                      while (bytesRead > 0)
                      {
                          ws.Write(buffer, 0, bytesRead);
                          bytesRead = rs.Read(buffer, 0, buffer.Length);
                      }
                  }
              }
          }
      }

    Read the article

  • HttpSessionState Where, How, Advantages?

    - by blgnklc
    The code below shows how I use the session variable. My three questions are:

    1- Where are they stored? (server or client side)
    2- Are they unique for each web page visitor?
    3- Can I remove it using Ajax or simple JS code when I am done with it, or will it be removed automatically?

      sbyte[][] arrImages = svc.getImagesForFields(new String[] { "CustomerName", "CustomerSurName" });
      Dictionary<string, byte[]> smartImageData = new Dictionary<string, byte[]>();
      int i = 0;
      foreach (sbyte[] bytes in arrImages)
      {
          smartImageData.Add(fieldNames[i], ConvertToByte(bytes));
          i++;
      }
      Session.Add("SmartImageData", smartImageData);

    Read the article

  • How to change size of STL container in C++

    - by Jaime Pardos
    I have a piece of performance-critical code written with pointers and dynamic memory. I would like to rewrite it with STL containers, but I'm a bit concerned about performance. Is there a way to increase the size of a container without initializing the data? For example, instead of doing

      ptr = new BYTE[x];

    I want to do something like

      vec.insert(vec.begin(), x, 0);

    However, this initializes every byte to 0. Isn't there a way to just make the vector grow? I know about reserve(), but it just allocates memory; it doesn't change the size of the vector and doesn't allow me to access it until I have inserted valid data. Thank you everyone.

    Read the article

  • How to create a zip file and keep entries for directories?

    - by NathanZ
    I would like to create a zip archive from a folder and keep entries for (non-empty) directories. In the code below, FileInputStream throws a FileNotFoundException when a directory is passed to addToZip. I have tried to put a condition around the actual writing of bytes, but it makes the whole archive invalid. How can I add directory entries to the archive?

      public static void addToZip(File directoryToZip, File file, ZipOutputStream zos)
              throws FileNotFoundException, IOException {
          String zipFilePath = file.getCanonicalPath().substring(
                  directoryToZip.getCanonicalPath().length() + 1,
                  file.getCanonicalPath().length());
          System.out.println("Writing '" + zipFilePath + "' to zip file");
          ZipEntry zipEntry = new ZipEntry(zipFilePath);
          zos.putNextEntry(zipEntry);
          FileInputStream fis = new FileInputStream(file); // throws FileNotFoundException for a directory
          byte[] bytes = new byte[1024];
          int length;
          while ((length = fis.read(bytes)) >= 0) {
              zos.write(bytes, 0, length);
          }
          zos.closeEntry();
          fis.close();
      }
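
    A possible shape for the fix, sketched against the method body above (untested): ZipOutputStream treats an entry whose name ends with "/" as a directory entry and expects no data for it, so directories can be handled before the FileInputStream is ever opened:

      if (file.isDirectory()) {
          // directory entry: name must end with "/" and carries no data
          zos.putNextEntry(new ZipEntry(zipFilePath + "/"));
          zos.closeEntry();
          return;
      }
      ZipEntry zipEntry = new ZipEntry(zipFilePath);
      zos.putNextEntry(zipEntry);
      FileInputStream fis = new FileInputStream(file);    // safe now: file is a regular file
      byte[] bytes = new byte[1024];
      int length;
      while ((length = fis.read(bytes)) >= 0) {
          zos.write(bytes, 0, length);
      }
      zos.closeEntry();
      fis.close();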

    Read the article

  • converting from int to hex

    - by Catherine
    I want to convert some ints to hex, but I'm getting something like "?|???plL4?h??N{" from 12345. Why?

      int t = 12345;
      System.Security.Cryptography.MD5CryptoServiceProvider ano = new System.Security.Cryptography.MD5CryptoServiceProvider();
      byte[] d_ano = System.Text.Encoding.ASCII.GetBytes(t.ToString());
      byte[] d_d_ano = ano.ComputeHash(d_ano);
      string st_data1 = System.Text.Encoding.ASCII.GetString(d_d_ano);
      string st_data = st_data1.ToString();

    I'm using it in a Windows Forms app, not in a console.

    Read the article

  • Interconnecting Emulator Instances Android

    - by blah01
    Hi all, I want two Android emulators to communicate via DatagramSocket. Each of them is a node in a P2P system, so each has a server thread and a client thread (created per GUI event). This is how I create the server:

      public static final String SERVERIP = "10.0.2.15";
      //...
      run() {
          InetAddress serverAddr = InetAddress.getByName(SERVERIP);
          DatagramSocket socket = new DatagramSocket(SERVERPORT, serverAddr);
          while (true) {
              byte[] buf = new byte[29];
              DatagramPacket packet = new DatagramPacket(buf, buf.length);
              socket.receive(packet);
              //...
          }
      }

    The port is given by the user when the application is initialized. The client part (requesting some data):

      InetAddress serverAddr = InetAddress.getByName("10.0.2.2");
      //...
      Log.i("Requester", "Trying to connect to device port = " + target);
      DatagramSocket socketJ = new DatagramSocket();
      byte[] bufJ = Adaptor.createStringMsg(Adaptor.createJoingMsg(id, Location.getX(), Location.getY())).getBytes();
      DatagramPacket packetJ = new DatagramPacket(bufJ, bufJ.length, serverAddr, target);
      Log.i("Requester", "Sending: '" + new String(bufJ) + "'");
      socketJ.send(packetJ);
      Log.i("Requester", "Done.");

    Some additional info: Node1 (emulator A) has a server on port 8000 and Node2 (emulator B) has a server on port 8001. The target port for the client part is read properly. What I tried to do is set up the redirection like this:

      // emulator A
      redir add tcp:8000:8000
      // emulator B
      redir add tcp:8001:8001

    However, I cannot get any communication between the two emulators. As far as I understood the Android tutorial, it should work like this: redir add tcp:localhostPort:emulatorPort. I'm stuck with it :/. Can anyone point out the mistake or give some good advice? For the record, while I was testing communication on a single device (with the client faking the other node), everything worked, so I don't think there is a bug in the code. By the way, does anyone know how I can get two sets of logs for those two emulators (logA, logB)? It would help me a lot.
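
    One thing worth noting: DatagramSocket traffic is UDP, and a redirect installed with redir add tcp:... only forwards TCP, so the rules above would never carry these packets; the udp form of the same rules would be needed (entered in each emulator's console, e.g. udp:8000:8000 on A and udp:8001:8001 on B, the exact mapping being an assumption). A sketch of the client side under that assumption, sending from emulator A to emulator B:

      // assumes "redir add udp:8001:8001" has been run in emulator B's console,
      // so the host's UDP port 8001 forwards to B's port 8001
      InetAddress host = InetAddress.getByName("10.0.2.2");  // the development machine, as seen from inside an emulator
      DatagramSocket socket = new DatagramSocket();
      byte[] payload = "hello from A".getBytes();
      DatagramPacket packet = new DatagramPacket(payload, payload.length, host, 8001);
      socket.send(packet);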

    Read the article
