Search Results

Search found 3449 results on 138 pages for 'tranquil byte'.


  • Understanding run time code interpretation and execution

    - by Bob
    I'm creating a game in XNA and was thinking of creating my own scripting language (extremely simple, mind you). I know there are better ways to go about this (and that I'm reinventing the wheel), but I want the learning experience more than I want to be productive and fast. When confronted with code at run time, from what I understand, the usual approach is to parse it into machine code or byte code or something else that is actually executable, and then execute that, right? But, for instance, when Chrome first came out they said their JavaScript engine was fast because it compiles the JavaScript into machine code. This implies other engines weren't compiling into machine code. I'd prefer not to compile to a lower-level language, so are there any known modern techniques for parsing and executing code without compiling to a low-level form? Perhaps something like parsing the code into some sort of tree, branching through the tree, and calling some function that handles each symbol it encounters? (Wild guessing and stabbing in the dark.)
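
    The "parse into a tree and walk it" guess at the end is a real technique: a tree-walking interpreter. Below is a minimal C# sketch of the idea; the node types are invented for illustration and the tree is built by hand in place of a parser's output, but no byte code or machine code is involved.

        using System;

        // A minimal AST: each node evaluates itself directly.
        abstract class Node { public abstract double Eval(); }

        class Num : Node
        {
            public double Value;
            public override double Eval() { return Value; }
        }

        class BinOp : Node
        {
            public char Op;
            public Node Left, Right;
            public override double Eval()
            {
                // Walk the subtrees, then dispatch on the operator symbol.
                double l = Left.Eval(), r = Right.Eval();
                switch (Op)
                {
                    case '+': return l + r;
                    case '*': return l * r;
                    default: throw new InvalidOperationException("unknown op: " + Op);
                }
            }
        }

        static class Demo
        {
            static void Main()
            {
                // (2 + 3) * 4, hand-built here in place of a parser's output.
                Node tree = new BinOp
                {
                    Op = '*',
                    Left = new BinOp { Op = '+', Left = new Num { Value = 2 }, Right = new Num { Value = 3 } },
                    Right = new Num { Value = 4 }
                };
                Console.WriteLine(tree.Eval()); // prints 20
            }
        }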

    Read the article

  • Adding Message Part dynamically in Receive Pipeline

    - by Sean
    Hello, I tried to create a custom pipeline component that takes a message and attaches an additional part dynamically (during the Disassemble stage). I haven't set up a send port, so that I can see what BizTalk is trying to process. I can see only the body part; the additional part doesn't show up. This is the code I used: var part = pc.GetMessageFactory().CreateMessagePart(); part.Data = new MemoryStream(new byte[] {1, 2, 3, 4, 5}); inmsg.AddPart("another_part", part, false); Thank you.

    Read the article

  • InternetReadFile() corrupting downloads in C

    - by Lienau
    I'm able to download text documents (.html, .txt, etc.) but I can't download images or EXEs. I'm pretty sure this is because I'm using a char, and those files are binary. I know that in C# I would use a byte. But what data type would I use in this case? char buffer[1]; DWORD dwRead; FILE * pFile; pFile = fopen(file,"w"); while (InternetReadFile(hRequest, buffer, 1, &dwRead)) { if(dwRead != 1) break; fprintf(pFile,"%s",buffer); } fclose(pFile);
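
    The data type is the lesser problem here: in C the corruption comes from fprintf's "%s", which expects a NUL-terminated string and stops at the first zero byte, and from opening the file in text mode "w" rather than binary "wb"; fwrite(buffer, 1, dwRead, pFile) is the binary-safe call. For comparison, a hedged C# sketch of the same loop shape (the URL is hypothetical):

        using System.IO;
        using System.Net;

        class BinaryDownload
        {
            static void Main()
            {
                // Hypothetical URL; any binary resource works.
                WebRequest request = WebRequest.Create("http://example.com/image.png");
                using (WebResponse response = request.GetResponse())
                using (Stream src = response.GetResponseStream())
                using (FileStream dst = new FileStream("out.bin", FileMode.Create))
                {
                    byte[] buffer = new byte[4096];
                    int read;
                    // Write exactly the bytes read; no string formatting, so zero
                    // bytes in image/EXE data pass through untouched.
                    while ((read = src.Read(buffer, 0, buffer.Length)) > 0)
                        dst.Write(buffer, 0, read);
                }
            }
        }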

    Read the article

  • How to load secure S3 images into Flex with temporary URLs

    - by Yarin
    I have some secure images on S3 that I need to load into Flex. I was expecting to be able to do this using signed temporary URLs but can't get it working. I know the URLs I'm generating are correct, because they load fine in my browsers' address bar. Moreover, Flex has no problem loading my images with a non-signed url when they are public, but as soon as I try signing the urls all the images fail, whether public or not. I've tried image.source = signedURL, image.load(signedURL), etc. If I try loading the file with URLLoader/URLStream, it looks like I'm getting the data OK, but I'm not sure how to translate those results to an Image control. Is this just an issue with the Image control not being able to recognize signed urls? Do I have to load the image from a byte array? What would that look like?

    Read the article

  • C# web request with POST encoding question

    - by rlandster
    On the MSDN site there is an example of some C# code that shows how to make a web request with POSTed data. Here is an excerpt of that code: WebRequest request = WebRequest.Create ("http://www.contoso.com/PostAccepter.aspx "); request.Method = "POST"; string postData = "This is a test that posts this string to a Web server."; byte[] byteArray = Encoding.UTF8.GetBytes (postData); // (*) request.ContentType = "application/x-www-form-urlencoded"; request.ContentLength = byteArray.Length; Stream dataStream = request.GetRequestStream (); dataStream.Write (byteArray, 0, byteArray.Length); dataStream.Close (); WebResponse response = request.GetResponse (); ...more... The line marked (*) is the one that puzzles me. Shouldn't the data be encoded using the UrlEncode function rather than UTF8? Isn't that what application/x-www-form-urlencoded implies?
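
    A note on the two layers involved: application/x-www-form-urlencoded describes the layout of the body (key=value pairs joined with &, reserved characters percent-escaped), while Encoding.UTF8.GetBytes only turns the finished string into raw bytes for the wire. MSDN's sample string happens to contain no reserved characters, so it survives without escaping. A sketch of what a form-encoded body looks like when the values do need escaping (the field names are hypothetical):

        using System;
        using System.Text;

        class FormBodyDemo
        {
            static void Main()
            {
                // EscapeDataString percent-escapes each value; UTF8.GetBytes then
                // converts the finished, already-escaped body into raw bytes.
                string postData = "comment=" + Uri.EscapeDataString("this & that = 100%")
                                + "&user=" + Uri.EscapeDataString("rlandster");
                byte[] byteArray = Encoding.UTF8.GetBytes(postData);
                Console.WriteLine(postData); // comment=this%20%26%20that%20%3D%20100%25&user=rlandster
            }
        }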

    Read the article

  • Stream/string/bytearray transformations in Python 3

    - by Craig McQueen
    Python 3 cleans up Python's handling of Unicode strings. I assume that as part of this effort, the codecs in Python 3 have become more restrictive, judging by the Python 3 documentation compared to the Python 2 documentation. For example, codecs that conceptually convert a bytestream to a different form of bytestream have been removed: base64_codec bz2_codec hex_codec And codecs that conceptually convert Unicode to a different form of Unicode have also been removed (in Python 2 it actually went between Unicode and bytestream, but conceptually it's really Unicode to Unicode, I reckon): rot_13 My main question is: what is the "right way" in Python 3 to do what these removed codecs used to do? They're not codecs in the strict sense, but "transformations". But the interface and implementation would be very similar to codecs. I don't care about rot_13, but I'm interested to know what would be the "best way" to implement a transformation of line-ending styles (Unix line endings vs Windows line endings), which should really be a Unicode-to-Unicode transformation done before encoding to a byte stream, especially when UTF-16 is being used, as discussed in this other SO question.

    Read the article

  • FileContentResult and international characters

    - by suzi167
    Hello, I am using a FileContentResult to render a file to the browser. It works well, except that it throws an exception when the fileName contains international characters. I remember reading somewhere that this feature does not support international characters, but I am sure there must be a workaround or a best practice people follow in cases where the application needs to serve files in countries other than the US. Does anyone know of such a practice? Here is the ActionResult method: public ActionResult GetFile(byte[] value, string fileName) { string fileExtension = Path.GetExtension(fileName); string contentType = GetContentType(fileExtension); //gets the content Type return File(value, contentType, fileName); } Thanks in advance, Susan
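
    One common workaround, sketched here rather than prescribed: skip the fileName overload that throws, and emit the Content-Disposition header yourself with the name percent-encoded. GetContentType is the question's own helper; browser handling of encoded names varies (some want the RFC 5987 filename* form), so this is an assumption to test against the browsers you care about.

        public ActionResult GetFile(byte[] value, string fileName)
        {
            string contentType = GetContentType(Path.GetExtension(fileName));
            // Percent-encode the file name so non-ASCII characters survive the header.
            Response.AddHeader("Content-Disposition",
                "attachment; filename=" + Uri.EscapeDataString(fileName));
            return File(value, contentType); // no fileName argument, so no exception
        }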

    Read the article

  • What is the best type to use for returning an image in my C# library project?

    - by Sergio Tapia
    I'm making a project that will scrape information about a movie and return all sorts of goodies. This will be a .dll that other developers will use in their projects. For example, if they want to set a pictureBox image to the movie's poster, what would be the best way to handle my library returning the image? //Set the image to The Matrix poster. pictureBox1.Image = MyLibrary.GetPoster("The Matrix"); Should I return an Image? A byte[] array?
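
    One design worth considering (a sketch, with GetPosterBytes as a hypothetical byte[]-returning variant): return the raw bytes so the library carries no System.Drawing dependency, and let callers build the Image themselves. Note the GDI+ caveat that the stream behind Image.FromStream must remain open for the lifetime of the Image, so the MemoryStream is deliberately not disposed here.

        using System.Drawing;
        using System.IO;

        byte[] posterBytes = MyLibrary.GetPosterBytes("The Matrix"); // hypothetical variant
        // GDI+ requires the stream to stay open while the Image is in use.
        var ms = new MemoryStream(posterBytes);
        pictureBox1.Image = Image.FromStream(ms);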

    Read the article

  • format specifier for short integer

    - by cateof
    I'm not using the format specifiers in C correctly. A few lines of code: int main() { char dest[]="stack"; unsigned short val = 500; char c = 'a'; char* final = (char*) malloc(strlen(dest) + 6); snprintf(final, strlen(dest)+6, "%c%c%hd%c%c%s", c, c, val, c, c, dest); printf("%s\n", final); return 0; } I want my executable to print aa500aastack and not aa500aasta. Why am I losing 2 bytes? What is the correct format specifier for an unsigned short integer? Thanks.

    Read the article

  • problem with NSInputStream on real iPhone

    - by ThamThang
    Hi guys, I have a problem with NSInputStream. Here is my code: case NSStreamEventHasBytesAvailable: printf("BYTE AVAILABLE\n"); int len = 0; NSMutableData *data = [[NSMutableData alloc] init]; uint8_t buffer[32768]; if(stream == iStream) { printf("Receiving...\n"); len = [iStream read:buffer maxLength:32768]; [data appendBytes:buffer length:len]; } [iStream close]; When I read small data it works perfectly, both on the simulator and on a real iPhone. If I try to read large data (more than 4 kB or maybe 5 kB), the real iPhone reads just 2736 bytes and stops. Why is that? Help me, please! Thanks in advance!

    Read the article

  • Should I convert overlong UTF-8 strings to their shortest normal form?

    - by Grant McLean
    I've just been reworking my Encoding::FixLatin Perl module to handle overlong UTF-8 byte sequences and convert them to the shortest normal form. My question is quite simply "is this a bad idea"? A number of sources (including this RFC) suggest that any overlong UTF-8 should be treated as an error and rejected. They caution against "naive implementations" and leave me with the impression that these things are inherently unsafe. Since the whole purpose of my module is to clean up messy data files with mixed encodings and convert them to nice clean utf8, this seems like just one more thing I can clean up so the application layer doesn't have to deal with it. My code does not concern itself with any semantic meaning the resulting characters might have; it simply converts them into a normalised form. Am I missing something? Is there a hidden danger I haven't considered?

    Read the article

  • How to calculate the correct image size in the output PDF using iTextSharp?

    - by MK
    I'm trying to add an image to a PDF using iTextSharp; regardless of the image's size, it always appears mapped to a larger size inside the PDF. The image I add is 624x500 pixels (72 DPI), and a screenshot of the output PDF shows it noticeably larger. Here is how I created the document: Document document = new Document(); System.IO.MemoryStream stream = new MemoryStream(); PdfWriter writer = PdfWriter.GetInstance(document, stream); document.Open(); System.Drawing.Image pngImage = System.Drawing.Image.FromFile("test.png"); Image pdfImage = Image.GetInstance(pngImage, System.Drawing.Imaging.ImageFormat.Png); document.Add(pdfImage); document.Close(); byte[] buffer = stream.GetBuffer(); FileStream fs = new FileStream("test.pdf", FileMode.Create); fs.Write(buffer, 0, buffer.Length); fs.Close(); Any idea how to calculate the correct size?
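
    A likely explanation: PDF units are points (72 per inch), and iTextSharp places an image at one point per pixel, while a typical display renders around 96 pixels per inch, so the same 624 pixels come out about a third larger on the page than on screen. A hedged sketch, assuming a 96 DPI display is the size you want to match, slotted in before document.Add in the question's own code:

        // iTextSharp places images at 1 px = 1 pt (1/72 inch); a 96 DPI screen
        // shows 1 px = 1/96 inch, so scale by 72/96 to match the on-screen size.
        pdfImage.ScalePercent(72f / 96f * 100f);   // 75%
        // ...or pin an exact size in points:
        // pdfImage.ScaleAbsolute(468f, 375f);     // 624x500 px at 96 DPI
        document.Add(pdfImage);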

    Read the article

  • Inserting into a bitstream

    - by evilertoaster
    I'm looking for a way to efficiently insert bits into a bitstream and have it 'overflow', padding with 0s. So for example, if you had a byte array with 2 bytes, 231 and 109 (11100111 01101101), and did BitInsert(byteArray,4,00), it would insert two bits at bit offset 4, making 11100001 11011011 01000000 (225, 219, 64). It would be OK even if the method only allowed 1-bit insertions, e.g. BitInsert(byteArray,4,true) or BitInsert(byteArray,4,false). I have one method of doing it, but it has to walk the stream with a bitmask bit by bit, so I'm wondering if there's a simpler approach... Answers in assembly or a C derivative would be appreciated.
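
    Not assembly or C, but for reference, a byte-wise C# sketch of the single-bit form: copy the untouched prefix whole, splice the new bit into the byte it lands in, then ripple the displaced bit through the remaining bytes one shift at a time, so the stream is walked per byte rather than per bit.

        using System;

        static class BitStream
        {
            // Insert one bit at bitOffset (0 = MSB of byte 0); everything after it
            // shifts one position toward the end, padding with zeros.
            // Assumes 0 <= bitOffset <= data.Length * 8.
            public static byte[] BitInsert(byte[] data, int bitOffset, bool bit)
            {
                int n = data.Length;
                var result = new byte[n + 1];          // one spare byte for the overflow
                int bi = bitOffset / 8, off = bitOffset % 8;

                Array.Copy(data, result, bi);          // bytes before the insert point are untouched

                int carry = bit ? 1 : 0;
                if (bi < n)
                {
                    byte cur = data[bi];
                    // high bits kept | inserted bit | low bits shifted right by one
                    result[bi] = (byte)((cur & (0xFF << (8 - off)))
                                      | (carry << (7 - off))
                                      | ((cur & (0xFF >> off)) >> 1));
                    carry = cur & 1;                   // the bit pushed out of this byte
                    for (int i = bi + 1; i < n; i++)   // ripple it through the rest
                    {
                        result[i] = (byte)((carry << 7) | (data[i] >> 1));
                        carry = data[i] & 1;
                    }
                }
                result[n] = (byte)(carry << 7);        // overflow lands here, zero-padded
                return result;
            }
        }

    Applying it twice at offset 4 to { 231, 109 } gives { 225, 219, 64 }, i.e. 11100001 11011011 01000000, matching the bit pattern in the question.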

    Read the article

  • create zip files with arabic characters

    - by fatiDev
    I have the following situation: I have to modify existing files and return a zip containing the modified files (I'm in a web application context). What I've done up to now is: ///////////////// modify the existing file with the POI library FileInputStream inpoi = new FileInputStream("file_path"); POIFSFileSystem fs = new POIFSFileSystem(inpoi); HWPFDocument doc = new HWPFDocument(fs); Range r = doc.getRange(); r.replaceText("<nomPrenom>","test"); byte[] b = doc.getDataStream(); //////////////////////// create the zip file and copy the modified files into it ZipOutputStream out = new ZipOutputStream(new FileOutputStream("my.zip")); out.putNextEntry(new ZipEntry("file")); for (int j = 0; j < b.length; j++) { out.write(b[j]); } The created zip file can't be read correctly with Word, given that the original file is written in Arabic.

    Read the article

  • Python and hebrew encoding/decoding error

    - by user340495
    Hey, I have an SQLite database into which I would like to insert Hebrew values, but I keep getting the following error: UnicodeDecodeError: 'ascii' codec can't decode byte 0xd7 in position 0: ordinal not in range(128) My code is as follows: runsql(u'INSERT into personal values(%(ID)d,%(name)s)' % {'ID':1,'name':fabricate_hebrew_name()}) def fabricate_hebrew_name(): hebrew_names = [u'????',u'???',u'???',u'???',u'????',u'???',u'????',u'???',u'????',u'?????',u'????',u'???',u'????'] return random.sample(names,1)[0].encode('utf-8') Note: runsql executes the query on the SQLite database, and fabricate_hebrew_name() should return a string that can be used in my SQL query. Any help is much appreciated.

    Read the article

  • String codification to Twitter

    - by Miguel Ribeiro
    I'm developing a program that sends tweets. I have this piece of code: StringBuilder sb = new StringBuilder("Recomendo "); sb.append(lblName.getText()); sb.append(" no canal "+lblCanal.getText()); sb.append(" no dia "+date[2]+"/"+date[1]+"/"+date[0]); sb.append(" às "+time[0]+"h"+time[1]); byte[] defaultStrBytes = sb.toString().getBytes("ISO-8859-1"); String encodedString = new String(defaultStrBytes, "UTF-8"); But when I send the tweet I get the "?" symbol or other strange characters because of accents like "à". I've also tried with only String encodedString = new String(sb.toString().getBytes(), "UTF-8"); //also tried with ISO-8859-1 but the problem remains...

    Read the article

  • About enumerations in Delphi and c++ in 64-bit environments

    - by sum1stolemyname
    I recently had to work around the different default sizes used for enumerations in Delphi and C++, since I have to use a C++ DLL from a Delphi application. One function call returns an array of structs (or records, in Delphi), the first element of which is an enum. To make this work, I use packed records (or aligned(1) structs). However, since Delphi selects the size of an enum variable dynamically by default and uses the smallest data type possible (it was a byte in my case), but C++ uses an int for enums, my data was not interpreted correctly. Delphi offers a compiler switch to work around this, so the declaration of the enum becomes {$Z4} TTypeofLight = ( V3d_AMBIENT, V3d_DIRECTIONAL, V3d_POSITIONAL, V3d_SPOT ); {$Z1} My questions are: What will become of my structs when they are compiled on/for a 64-bit environment? Does the default C++ int grow to 8 bytes? Are there other memory alignment / data type size modifications (other than pointers)?

    Read the article

  • Https in java ends up with strange results

    - by Senne
    I'm trying to illustrate to students how HTTPS is used in Java, but I have the feeling my example is not really the best out there... The code works well on my Windows 7 machine: I start the server, go to https://localhost:8080/somefile.txt, get asked to trust the certificate, and all goes well. When I try over http (before or after accepting the certificate) I just get a blank page, which is fine by me. BUT when I try the exact same thing on my Windows XP machine: same thing, all goes well. But then (after accepting the certificate first), I'm also able to get all the files through http! (If I first try http before https, followed by accepting the certificate, I get no answer.) I tried refreshing and hard refreshing a million times, but this should not be working, right? Is there something wrong in my code? I'm not sure if I'm using the right approach to implement HTTPS here... package Security; import java.io.*; import java.net.*; import java.util.*; import java.util.concurrent.Executors; import java.security.*; import javax.net.ssl.*; import com.sun.net.httpserver.*; public class HTTPSServer { public static void main(String[] args) throws IOException { InetSocketAddress addr = new InetSocketAddress(8080); HttpsServer server = HttpsServer.create(addr, 0); try { System.out.println("\nInitializing context ...\n"); KeyStore ks = KeyStore.getInstance("JKS"); char[] password = "vwpolo".toCharArray(); ks.load(new FileInputStream("myKeys"), password); KeyManagerFactory kmf = KeyManagerFactory.getInstance("SunX509"); kmf.init(ks, password); SSLContext sslContext = SSLContext.getInstance("TLS"); sslContext.init(kmf.getKeyManagers(), null, null); // a HTTPS server must have a configurator for the SSL connections. server.setHttpsConfigurator (new HttpsConfigurator(sslContext) { // override configure to change default configuration. public void configure (HttpsParameters params) { try { // get SSL context for this configurator SSLContext c = getSSLContext(); // get the default settings for this SSL context SSLParameters sslparams = c.getDefaultSSLParameters(); // set parameters for the HTTPS connection.
params.setNeedClientAuth(true); params.setSSLParameters(sslparams); System.out.println("SSL context created ...\n"); } catch(Exception e2) { System.out.println("Invalid parameter ...\n"); e2.printStackTrace(); } } }); } catch(Exception e1) { e1.printStackTrace(); } server.createContext("/", new MyHandler1()); server.setExecutor(Executors.newCachedThreadPool()); server.start(); System.out.println("Server is listening on port 8080 ...\n"); } } class MyHandler implements HttpHandler { public void handle(HttpExchange exchange) throws IOException { String requestMethod = exchange.getRequestMethod(); if (requestMethod.equalsIgnoreCase("GET")) { Headers responseHeaders = exchange.getResponseHeaders(); responseHeaders.set("Content-Type", "text/plain"); exchange.sendResponseHeaders(200, 0); OutputStream responseBody = exchange.getResponseBody(); String response = "HTTP headers included in your request:\n\n"; responseBody.write(response.getBytes()); Headers requestHeaders = exchange.getRequestHeaders(); Set<String> keySet = requestHeaders.keySet(); Iterator<String> iter = keySet.iterator(); while (iter.hasNext()) { String key = iter.next(); List values = requestHeaders.get(key); response = key + " = " + values.toString() + "\n"; responseBody.write(response.getBytes()); System.out.print(response); } response = "\nHTTP request body: "; responseBody.write(response.getBytes()); InputStream requestBody = exchange.getRequestBody(); byte[] buffer = new byte[256]; if(requestBody.read(buffer) > 0) { responseBody.write(buffer); } else { responseBody.write("empty.".getBytes()); } URI requestURI = exchange.getRequestURI(); String file = requestURI.getPath().substring(1); response = "\n\nFile requested = " + file + "\n\n"; responseBody.write(response.getBytes()); responseBody.flush(); System.out.print(response); Scanner source = new Scanner(new File(file)); String text; while (source.hasNext()) { text = source.nextLine() + "\n"; responseBody.write(text.getBytes()); } source.close(); responseBody.close(); exchange.close(); } } }

    Read the article

  • SHA-256 encryption wrong result in Android

    - by user642966
    I am trying to hash 12345 with 1111 as the salt using SHA-256 (a hash, not encryption), and the answer I get is: 010def5ed854d162aa19309479f3ca44dc7563232ff072d1c87bd85943d0e930 which is not the same as the value returned by this site: http://hash.online-convert.com/sha256-generator Here's the code snippet: public String getHashValue(String entity, String salt){ byte[] hashValue = null; try { MessageDigest digest = MessageDigest.getInstance("SHA-256"); digest.update(entity.getBytes("UTF-8")); digest.update(salt.getBytes("UTF-8")); hashValue = digest.digest(); } catch (NoSuchAlgorithmException e) { Log.i(TAG, "Exception "+e.getMessage()); } catch (UnsupportedEncodingException e) { // TODO Auto-generated catch block e.printStackTrace(); } return BasicUtil.byteArrayToHexString(hashValue); } I have verified my printing method with a sample from SO and the result is fine. Can someone tell me what's wrong here? And just to clarify: when I hash the same value & salt in iOS code, the returned value is the same as the value given by the converting site.
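
    One thing worth checking before suspecting the Android code: the two digest.update calls hash the concatenation "123451111", while the linked generator hashes exactly the single string typed into it, so the values can only match if the concatenated string is what goes into the site. A C# sketch of the same concatenated digest for cross-checking (Sha256Hex is a hypothetical helper name):

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class HashDemo
        {
            // SHA-256 of entity + salt, hex-encoded (lowercase).
            static string Sha256Hex(string entity, string salt)
            {
                using (var sha = SHA256.Create())
                {
                    byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(entity + salt));
                    var sb = new StringBuilder(hash.Length * 2);
                    foreach (byte b in hash) sb.Append(b.ToString("x2"));
                    return sb.ToString();
                }
            }

            static void Main()
            {
                Console.WriteLine(Sha256Hex("12345", "1111"));
            }
        }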

    Read the article

  • Translate from Java to C#: simple code to re-encode a string

    - by Dr. Zim
    We were sent this code, written in Java, to encrypt a string: String myInput = "test1234"; MessageDigest md = MessageDigest.getInstance("SHA"); byte[] myD = md.digest(myInput.getBytes()); BASE64Encoder en64 = new BASE64Encoder(); String myOutput = new String ( java.net.URLEncoder.encode( en64.encode(myD))); // myOutput becomes "F009U%2Bx99bVTGwS3cQdHf%2BJcpCo%3D" Our attempt at writing this in C# is: System.Security.Cryptography.SHA1 sha1 = new System.Security.Cryptography.SHA1CryptoServiceProvider(); string myOutput = HttpUtility.UrlEncode( Convert.ToBase64String( sha1.ComputeHash( ASCIIEncoding.Default.GetBytes(myInput)))); However the output is nowhere near the same. It doesn't even have percent signs in it. Any chance anyone would know where we are going wrong?
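
    Two details worth ruling out (a sketch under those assumptions, not a guaranteed fix): ASCIIEncoding.Default actually resolves to Encoding.Default, the machine's ANSI code page, so Encoding.UTF8 is closer to Java's getBytes(); and Java's URLEncoder uppercases its hex escapes ("%2B") where HttpUtility.UrlEncode emits lowercase ("%2b"), so compare the two outputs case-insensitively.

        using System;
        using System.Security.Cryptography;
        using System.Text;
        using System.Web;

        class DigestDemo
        {
            static void Main()
            {
                string myInput = "test1234";
                byte[] digest;
                using (var sha1 = SHA1.Create())
                    digest = sha1.ComputeHash(Encoding.UTF8.GetBytes(myInput)); // mirrors getBytes()
                // .NET emits lowercase percent-escapes; Java emits uppercase.
                string myOutput = HttpUtility.UrlEncode(Convert.ToBase64String(digest));
                Console.WriteLine(myOutput);
            }
        }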

    Read the article

  • alternative to check whether a value is in a set

    - by stanleyxu2005
    Hi all, I have the following code. It looks ugly: if the value equals one of the listed values, then do something. var Value: Word; begin Value := 30000; if (Value = 30000) or (Value = 40000) or (Value = 1) then do_something; end; I want to refactor the code as follows: var Value: Word; begin Value := 30000; if (Value in [1, 30000, 40000]) then // Does not work do_something; end; However, the refactored code does not work. I assume that a valid set in Delphi accepts only elements of type Byte. Is there a good alternative for refactoring my original code (besides using case)?

    Read the article

  • C#: how to update winform from thread?

    - by JackN
    A C# thread (Read()) causes System.NotSupportedException when it tries to update a winform based on received content. The full error message is Read() System.NotSupportedException: An error message cannot be displayed because an optional resource assembly containing it cannot be found at Microsoft.AGL.Common.MISC.HandelAr() at System.Windows.Forms.ProgressBar._SetInfo() at System.Windows.Forms.ProgressBar.set_Value() at ...ProcessStatus() at ...Read() The Build/Target Environment is: Microsoft.NET\SDK\CompactFramework\v2.0\WindowsCE. Is the problem writing to the ProgressBar from a Thread? If so, what is the correct C#/winforms method to update a ProgressBar from a Thread? In this application the Read() Thread is continuous: it is started when the application starts and runs forever. void ProcessStatus(byte[] status) { Status.Speed = status[5]; var Speed = Status.Speed/GEAR_RATIO; Status.Speed = (int) Speed; progressBarSpeed.Value = Status.Speed; ...
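
    The usual fix is to marshal the update onto the UI thread with Control.Invoke rather than setting progressBarSpeed.Value directly from the Read() thread. A sketch reusing the question's own names (Status and GEAR_RATIO come from the surrounding code); on the Compact Framework the delegate passed to Invoke should be an EventHandler:

        void ProcessStatus(byte[] status)
        {
            var speed = status[5] / GEAR_RATIO;
            Status.Speed = (int)speed;
            // Marshal the UI update onto the thread that owns the control.
            progressBarSpeed.Invoke(new EventHandler(delegate(object s, EventArgs e)
            {
                progressBarSpeed.Value = Status.Speed;
            }));
        }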

    Read the article

  • Optimize C# Code Fragment

    - by Eric J.
    I'm profiling some C# code. The method below is one of the most expensive ones. For the purpose of this question, assume that micro-optimization is the right thing to do. Is there an approach to improve performance of this method? Changing the input parameter to p to ulong[] would create a macro inefficiency. static ulong Fetch64(byte[] p, int ofs = 0) { unchecked { ulong result = p[0 + ofs] + ((ulong)p[1 + ofs] << 8) + ((ulong)p[2 + ofs] << 16) + ((ulong)p[3 + ofs] << 24) + ((ulong)p[4 + ofs] << 32) + ((ulong)p[5 + ofs] << 40) + ((ulong)p[6 + ofs] << 48) + ((ulong)p[7 + ofs] << 56); return result; } }
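
    Since the shifts assemble exactly a little-endian 64-bit load, one candidate micro-optimization (a sketch; measure it, and check BitConverter.IsLittleEndian on your targets) is to let the runtime do the load in a single call, or to read through a pinned pointer where compiling with /unsafe is acceptable:

        using System;

        static class Fetch
        {
            // Same result as the original Fetch64 on a little-endian host.
            static ulong Fetch64(byte[] p, int ofs = 0)
            {
                return BitConverter.ToUInt64(p, ofs);
            }

            // Variant for projects compiled with /unsafe; x86/x64 permit unaligned reads.
            static unsafe ulong Fetch64Unsafe(byte[] p, int ofs = 0)
            {
                fixed (byte* ptr = &p[ofs])
                    return *(ulong*)ptr;
            }
        }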

    Read the article

  • Which header files are necessary to run this code snippet?

    - by httpinterpret
    It's from here, but it fails when compiling: int main(int argc, char **argv) { struct hostent { char *h_name; // main name char **h_aliases; // alternative names (aliases) int h_addrtype; // address type (usually AF_INET) int h_length; // length of address (in octets) char **h_addr_list; // alternate addresses (in Network Byte Order) }; #define h_addr h_addr_list[0] // First address of h_addr_list. struct hostent *info_stackoverflow; int i = 0; info_stackoverflow = gethostbyname( "www.stackoverflow.com" ); printf("The IP address of %s is %s", info_stackoverflow->h_name, inet_ntoa( * ((struct in_addr *)info_stackoverflow->h_addr ))); /* aliases */ while( *(pc_ip->h_aliases + i) != NULL ) { printf("\n\tAlias: %s", *(pc_ip->h_aliases + i) ); i++; } }

    Read the article
