Search Results

Search found 3449 results on 138 pages for 'tranquil byte'.

Page 116 of 138

  • Using RecordStore in Java J2ME

    - by me123
    Hi, I am currently doing some J2ME development. I am having a problem in that a user can add and remove elements in the record store, and if a record gets deleted, that record is left empty and the others don't move up one. I'm trying to come up with a loop that will check whether a record has anything in it (in case it has been deleted) and, if it does, add the contents of that record to a list. My code is similar to the following:

        for (int i = 1; i <= rs.getNumRecords(); i++) {
            // Re-allocate if necessary
            if (rs.getRecordSize(i) > recData.length)
                recData = new byte[rs.getRecordSize(i)];
            len = rs.getRecord(i, recData, 0);
            st = new String(recData, 0, len);
            System.out.println("Record #" + i + ": " + new String(recData, 0, len));
            System.out.println("------------------------------");
            if (st != null) {
                list.insert(i - 1, st, null);
            }
        }

    When it gets to rs.getRecordSize(i), I always get a "javax.microedition.rms.InvalidRecordIDException: error finding record". I know this is due to the record being empty, but I can't think of a way to get around this problem. Any help would be much appreciated. Thanks in advance.

    Read the article

  • How to best show progress info when using ADO.NET?

    - by Yadyn
    I want to show the user detailed progress information when performing potentially lengthy database operations. Specifically, when inserting/updating data that may be on the order of hundreds of KB or MB. Currently, I'm using in-memory DataTables and DataRows which are then synced with the database via TableAdapter.Update calls. This works fine and dandy, but the single call leaves little opportunity to glean any kind of progress info to show to the user. I have no idea how much data is passing through the network to the remote DB or what its progress is. Basically, all I know is when Update returns, and it is assumed complete (barring any errors or exceptions). But this means all I can show is 0%, then a pause, then 100%. I can count the number of rows, even going so far as to count how many are actually Modified or Added, and I could maybe even estimate each DataRow's size based on the data type of each column, using sizeof for value types like int and checking length for things like strings or byte arrays. With that, I could probably determine, before updating, an estimated total transfer size, but I'm still stuck without any progress info once Update is called on the TableAdapter. Am I stuck just using an indeterminate progress bar or mouse waiting cursor? Would I need to radically change our data access layer to be able to hook into this kind of information? Even if I can't get it down to the precise KB transferred (like a web browser file download progress bar), could I at least know when each DataRow/DataTable finishes or something? How do you best show this kind of progress info using ADO.NET?
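
    One hedged workaround (an assumption on my part, not something stated in the question): bypass the generated TableAdapter and push the changed rows through a plain data adapter one row at a time, reporting progress between rows. The adapter, table and onProgress callback below are illustrative placeholders.

        // Sketch only: per-row updates so progress can be reported between rows.
        // Trades one round trip per row for the ability to show a meaningful bar.
        // Requires System.Data and System.Data.SqlClient.
        static void UpdateWithProgress(SqlDataAdapter adapter, DataTable table, Action<int> onProgress)
        {
            DataRow[] changed = table.Select("", "",
                DataViewRowState.Added | DataViewRowState.ModifiedCurrent | DataViewRowState.Deleted);

            for (int i = 0; i < changed.Length; i++)
            {
                adapter.Update(new DataRow[] { changed[i] });   // one row per round trip
                onProgress((i + 1) * 100 / changed.Length);     // percent complete
            }
        }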

    Read the article

  • Where can I find a quick reference for standard Basic?

    - by Steve314
    The reason? Pure nostalgia. Anyway, there was a standard for Basic that was published in the late 80s or early 90s. It was probably ISO/IEC 10279:1991, but I don't have access to that and cannot be sure. Whatever this standard was, some of the syntax made its way into Borland's Turbo Basic and Microsoft's Visual Basic. I never learned any significant amount of VB, but Turbo Basic is one of those things I played with in my misspent youth. At one time, my main reference was an article published in one of the main programming periodicals - maybe Personal Computer World, maybe Byte. A scan of that article (if anyone can even identify it) would be great, but all I really want is a few-page quick reference for that standard syntax. It must be free (I'm not that nostalgic), but it must describe the standard syntax - the whole point is to sort out what is standard as opposed to VB or whatever. EDIT: The more I think about this, the more convinced I am that this standard was available around 1987 or 1988. Maybe it was the earlier non-full version of the standard above, or maybe it was pre-acceptance of the standard.

    Read the article

  • No feedback from Socket.SendAsync

    - by BowserKingKoopa
    I'm creating a socket and I'm trying to send data through it using SendAsync. My socket isn't connected to anything, so I expected to get an error of some sort. However, I get nothing. I get no indication that the send didn't work. If I use the synchronous Send method instead of the asynchronous SendAsync method, I get an Exception stating that the socket isn't connected to anything. That makes sense to me. When using SendAsync, the Completed event doesn't ever fire and I get no indication that the send didn't work. So basically my question is: how can I tell when SendAsync fails?

        Socket socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        SocketAsyncEventArgs args = new SocketAsyncEventArgs();
        args.SetBuffer(new byte[0], 0, 0);
        args.Completed += delegate(object sender, SocketAsyncEventArgs e)
        {
            Debug.WriteLine("async send complete");
            Debug.WriteLine("SOCKET ERROR: " + e.SocketError);
        };
        bool completedSynchronously = socket.SendAsync(args);
        if (completedSynchronously)
        {
            Debug.WriteLine("sync send complete");
            Debug.WriteLine("socket error: " + args.SocketError);
        }
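
    A hedged note on the snippet above: Socket.SendAsync returns false when the operation completed synchronously, and in that case the Completed event is not raised, so the place to inspect args.SocketError is the branch where the return value is false. A minimal sketch reusing the socket and args from the code above:

        // Sketch: SendAsync returning false means it already finished (or failed)
        // synchronously and Completed will NOT fire, so check the error here.
        bool pending = socket.SendAsync(args);
        if (!pending)
        {
            Debug.WriteLine("finished synchronously, socket error: " + args.SocketError);
        }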

    Read the article

  • How can I read a DBF file with incorrectly defined column data types using ADO.NET?

    - by Jason
    I have several DBF files generated by a third party that I need to be able to query. I am having trouble because all of the column types have been defined as characters, but the data within some of these fields actually contains binary data. If I try to read these fields using an OleDbDataReader as anything other than a string or character array, I get an InvalidCastException, but I need to be able to read them as a binary value or at least cast/convert them after they are read. The columns that actually DO contain text are being returned as expected. For example, the very first column is defined as a character field with a length of 2 bytes, but the field contains a 16-bit integer. I have written the following test code to read the first column and convert it to the appropriate data type, but the value is not coming out right. The first row of the database has a value of 17365 (0x43D5) in the first column. Running the following code, what I end up getting is 17215 (0x433F). I'm pretty sure it has to do with using the ASCII encoding to get the bytes from the string returned by the data reader, but I'm not sure of another way to get the value into the format that I need, other than to write my own DBF reader and bypass ADO.NET altogether, which I don't want to do unless I absolutely have to. Any help would be greatly appreciated.

        byte[] c0;
        int i0;
        string con = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\ASTM;Extended Properties=dBASE III;User ID=Admin;Password=;";
        using (OleDbConnection c = new OleDbConnection(con))
        {
            c.Open();
            OleDbCommand cmd = c.CreateCommand();
            cmd.CommandText = "SELECT * FROM astm2007";
            OleDbDataReader dr = cmd.ExecuteReader();
            while (dr.Read())
            {
                c0 = Encoding.ASCII.GetBytes(dr.GetValue(0).ToString());
                i0 = BitConverter.ToInt16(c0, 0);
            }
            dr.Dispose();
        }
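
    For what it's worth, the 0x43D5 to 0x433F corruption is consistent with the ASCII step: Encoding.ASCII replaces any character above 0x7F with '?' (0x3F). A hedged alternative, assuming the OLE DB provider decoded the column with a code page that round-trips under Latin-1, is to use ISO-8859-1, which maps all 256 byte values one-to-one (dr is the reader from the snippet above):

        // Sketch only: Latin-1 (ISO-8859-1) is a lossless single-byte round trip,
        // unlike ASCII, which turns every byte above 0x7F into '?' (0x3F).
        // Whether it recovers the original bytes depends on the provider's code page.
        Encoding latin1 = Encoding.GetEncoding("ISO-8859-1");
        byte[] raw = latin1.GetBytes(dr.GetValue(0).ToString());
        short value = BitConverter.ToInt16(raw, 0);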

    Read the article

  • How to display a BMP in an RTF control in VB.NET

    - by Gerolkae
    I started with this C# question. I'm trying to display a BMP image inside an RTF box for a bot program I'm making. This function is supposed to convert a bitmap to RTF code which is then inserted into another RTF-formatted string with additional text, kind of like smilies being used in a chat program. For some reason the output of this function gets rejected by the RTF box and vanishes completely. I'm not sure if it's the way I'm converting the BMP to a binary string or if it's tied in with the header tags.

        'returns the RTF string representation of our picture
        Public Shared Function PictureToRTF(ByVal Bmp As Bitmap) As String
            Dim stream As New MemoryStream()
            Bmp.Save(stream, System.Drawing.Imaging.ImageFormat.Bmp)
            Dim bytes As Byte() = stream.ToArray()
            Dim str As String = BitConverter.ToString(bytes, 0).Replace("-", String.Empty)
            'header to string we want to insert
            Using g As Graphics = Main.CreateGraphics()
                xDpi = g.DpiX
                yDpi = g.DpiY
            End Using
            Dim _rtf As New StringBuilder()
            ' Calculate the current width of the image in (0.01)mm
            Dim picw As Integer = CInt(Math.Round((Bmp.Width / xDpi) * HMM_PER_INCH))
            ' Calculate the current height of the image in (0.01)mm
            Dim pich As Integer = CInt(Math.Round((Bmp.Height / yDpi) * HMM_PER_INCH))
            ' Calculate the target width of the image in twips
            Dim picwgoal As Integer = CInt(Math.Round((Bmp.Width / xDpi) * TWIPS_PER_INCH))
            ' Calculate the target height of the image in twips
            Dim pichgoal As Integer = CInt(Math.Round((Bmp.Height / yDpi) * TWIPS_PER_INCH))
            ' Append values to RTF string
            _rtf.Append("{\pict\wbitmap0")
            _rtf.Append("\picw")
            _rtf.Append(Bmp.Width.ToString) ' _rtf.Append(picw.ToString)
            _rtf.Append("\pich")
            _rtf.Append(Bmp.Height.ToString) ' _rtf.Append(pich.ToString)
            _rtf.Append("\wbmbitspixel24\wbmplanes1")
            _rtf.Append("\wbmwidthbytes40")
            _rtf.Append("\picwgoal")
            _rtf.Append(picwgoal.ToString)
            _rtf.Append("\pichgoal")
            _rtf.Append(pichgoal.ToString)
            _rtf.Append("\bin ")
            _rtf.Append(str.ToLower & "}")
            Return _rtf.ToString
        End Function

    Read the article

  • Combining SQL Rows

    - by lumberjack4
    I've got a SQL Compact database that contains a table of IP packet headers. The table looks like this:

        Table: PacketHeaders

        ID  SrcAddress  SrcPort  DestAddress  DestPort  Bytes
        1   10.0.25.1   255      10.0.25.50   500       64
        2   10.0.25.50  500      10.0.25.1    255       80
        3   10.0.25.50  500      10.0.25.1    255       16
        4   75.48.0.25  387      74.26.9.40   198       72
        5   74.26.9.40  198      75.48.0.25   387       64
        6   10.0.25.1   255      10.0.25.50   500       48

    I need to perform a query to show 'conversations' going on across a local network. Packets going from A to B are part of the same conversation as packets going from B to A. Basically what I need returned is something that looks like this:

        SrcAddress  SrcPort  DestAddress  DestPort  TotalBytes  BytesA->B  BytesB->A
        10.0.25.1   255      10.0.25.50   500       208         112        96
        75.48.0.25  387      74.26.9.40   198       136         72         64

    As you can see, I need the query (or series of queries) to recognize that A-B is the same as B-A and break up the byte counts accordingly. I'm not a SQL guru by any means, but any help on this would be greatly appreciated.
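
    A hedged sketch of one way to do it, assuming your version of SQL Server Compact supports CASE expressions (if it doesn't, the same pairing can be done in C# after reading the rows back): normalize each row so the lexicographically smaller endpoint is always "A", then group on the normalized pair and sum each direction separately. The table and column names come from the question; connection is an illustrative SqlCeConnection, and rows where source and destination address are identical are not handled.

        // Sketch: collapse A->B and B->A into one conversation by ordering the
        // endpoint pair, then sum the bytes per direction.
        using (var cmd = connection.CreateCommand())
        {
            cmd.CommandText = @"
                SELECT
                    CASE WHEN SrcAddress < DestAddress THEN SrcAddress  ELSE DestAddress END AS AddrA,
                    CASE WHEN SrcAddress < DestAddress THEN SrcPort     ELSE DestPort    END AS PortA,
                    CASE WHEN SrcAddress < DestAddress THEN DestAddress ELSE SrcAddress  END AS AddrB,
                    CASE WHEN SrcAddress < DestAddress THEN DestPort    ELSE SrcPort     END AS PortB,
                    SUM(Bytes) AS TotalBytes,
                    SUM(CASE WHEN SrcAddress < DestAddress THEN Bytes ELSE 0 END) AS BytesAtoB,
                    SUM(CASE WHEN SrcAddress < DestAddress THEN 0 ELSE Bytes END) AS BytesBtoA
                FROM PacketHeaders
                GROUP BY
                    CASE WHEN SrcAddress < DestAddress THEN SrcAddress  ELSE DestAddress END,
                    CASE WHEN SrcAddress < DestAddress THEN SrcPort     ELSE DestPort    END,
                    CASE WHEN SrcAddress < DestAddress THEN DestAddress ELSE SrcAddress  END,
                    CASE WHEN SrcAddress < DestAddress THEN DestPort    ELSE SrcPort     END";
            // read the aggregated rows with cmd.ExecuteReader()
        }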

    Read the article

  • Looking for streaming xml pretty printer in C/C++ using expat or libxml2

    - by Mark Zeren
    I'm looking for a streaming XML pretty printer for C/C++ that's either self-contained or that uses libxml2 or expat. I've searched a bit and not found one. It seems like something that would be generally useful. Am I missing an obvious tool that does this? Background: I have a library that outputs XML without whitespace, all on one line. In some cases I'd like to pretty print that output. I'm looking for a BSD-ish licensed C/C++ library or sample code that will take a raw XML byte stream and pretty print it. Here's some pseudo code showing one way that I might use this functionality:

        void my_write(const char* buf, int len);

        PrettyPrinter pp(bind(&my_write));
        while (...) {
            // ... get some more xml ...
            const char* buf = xmlSource.get_buf();
            int len = xmlSource.get_buf_len();
            int written = pp.write(buf, len); // calls my_write with pretty printed xml
            // ... error handling, maybe call write again, etc. ...
        }

    I'd like to avoid instantiating a DOM representation. I already have dependencies on the expat and libxml2 shared libraries, and I'd rather not add any more shared library dependencies.

    Read the article

  • PostgreSQL: BYTEA vs OID+Large Object?

    - by mlaverd
    I started an application with Hibernate 3.2 and PostgreSQL 8.4. I have some byte[] fields that were mapped as @Basic (= PG bytea) and others that got mapped as @Lob (= PG Large Object). Why the inconsistency? Because I was a Hibernate noob. Now, those fields are at most 4 KB (but the average is 2-3 KB). The PostgreSQL documentation mentions that LOs are good when the fields are big, but I didn't see what 'big' meant. I have upgraded to PostgreSQL 9.0 with Hibernate 3.6, and I was forced to change the annotation to @Type(type="org.hibernate.type.PrimitiveByteArrayBlobType"). This bug has brought forward a potential compatibility issue, and I eventually found out that Large Objects are a pain to deal with, compared to a normal field. So I am thinking of changing all of it to bytea. But I am concerned that bytea fields are encoded in hex, so there is some overhead in encoding and decoding, and this would hurt performance. Are there good benchmarks about the performance of both of these? Has anybody made the switch and seen a difference?

    Read the article

  • Reading UTF-8 XML and writing it to a file with Python

    - by Harri
    I'm trying to parse a UTF-8 XML file and save some parts of it to another file. The problem is that this is my first Python script ever and I'm totally confused about the character encoding problems I'm finding. My script fails immediately when it tries to write a non-ASCII character to a file, but it can print it to the command prompt (at least to some degree). Here's the XML (the parts that matter, at least; it's a *.resx file which contains UI strings):

        <?xml version="1.0" encoding="utf-8"?>
        <root>
          <resheader name="foo">
            <value>bar</value>
          </resheader>
          <data name="lorem" xml:space="preserve">
            <value>ipsum öä</value>
          </data>
        </root>

    And here's my Python script:

        from xml.dom.minidom import parse

        names = []
        values = []

        def getStrings(path):
            dom = parse(path)
            data = dom.getElementsByTagName("data")
            for i in range(len(data)):
                name = data[i].getAttribute("name")
                names.append(name)
                value = data[i].getElementsByTagName("value")
                values.append(value[0].firstChild.nodeValue.encode("utf-8"))

        def writeToFile():
            with open("uiStrings-fi.py", "w") as f:
                for i in range(len(names)):
                    line = names[i] + '="'+ values[i] + '"' #varName='varValue'
                    f.write(line)
                    f.write("\n")

        getStrings("ResourceFile.fi-FI.resx")
        writeToFile()

    And here's the traceback:

        Traceback (most recent call last):
          File "GenerateLanguageFiles.py", line 24, in <module>
            writeToFile()
          File "GenerateLanguageFiles.py", line 19, in writeToFile
            line = names[i] + '="'+ values[i] + '"' #varName='varValue'
        UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 2: ordinal not in range(128)

    How should I fix my script so it reads and writes UTF-8 characters properly? The files I'm trying to generate would be used in test automation with Robot Framework.

    Read the article

  • Why use SQL database?

    - by martinthenext
    I'm not quite sure Stack Overflow is the place for such a general question, but let's give it a try. Faced with the need to store application data somewhere, I've always used MySQL or SQLite, just because it's always done like that, and it seems like the whole world is using these databases in most software products, frameworks, etc. It is rather hard for a beginning developer like me to ask the question: why? OK, say we have some object-oriented logic in our application, and objects are related to each other somehow. We need to map this logic to the storage logic, so we need relations between database objects too. This leads us to using a relational database, and I'm OK with that; to put it simply, our database rows will sometimes need to have references to other tables' rows. But why use the SQL language for interaction with such a database? An SQL query is a text message. I can understand that this is handy for actually understanding what it does, but isn't it silly to use text table and column names for a part of the application that no one ever sees after deployment? If you had to write a data storage from scratch, you would never have used this kind of solution. Personally, I would have used some 'compiled DB query' bytecode that would be assembled once inside the client application and passed to the database. And it would surely name tables and columns by ID numbers, not ASCII strings. In the case of changes in table structure, those byte queries could be recompiled according to the new DB schema, stored in XML or something like that. What are the problems with my idea? Is there any reason for me not to write it myself and to use an SQL database instead?

    Read the article

  • Q on Python serialization/deserialization

    - by neil
    What chances do I have to instantiate, keep and serialize/deserialize to/from binary data Python classes reflecting this pattern (adapted from RFC 2246 [TLS])?

        enum { apple, orange } VariantTag;

        struct {
            uint16 number;
            opaque string<0..10>; /* variable length */
        } V1;

        struct {
            uint32 number;
            opaque string[10];    /* fixed length */
        } V2;

        struct {
            select (VariantTag) { /* value of selector is implicit */
                case apple:  V1;  /* VariantBody, tag = apple */
                case orange: V2;  /* VariantBody, tag = orange */
            } variant_body;       /* optional label on variant */
        } VariantRecord;

    Basically I would have to define a (variant) class VariantRecord, which varies depending on the value of VariantTag. That's not that difficult. The challenge is to find the most generic way to build a class which serializes/deserializes to and from a byte stream... Pickle, Google protocol buffers and marshal are all not an option. I had little success with having an explicit "def serialize" in my class, but I'm not very happy with it, because it's not generic enough. I hope I could express the problem. My current solution, in case VariantTag = apple, would look like this, but I don't like it too much:

        import binascii
        import struct

        class VariantRecord(object):
            def __init__(self, number, opaque):
                self.number = number
                self.opaque = opaque

            def serialize(self):
                out = struct.pack('>HB%ds' % len(self.opaque),
                                  self.number, len(self.opaque), self.opaque)
                return out

        v = VariantRecord(10, 'Hello')
        print binascii.hexlify(v.serialize())
        >> 000a0548656c6c6f

    Regards

    Read the article

  • How to estimate size of data to transfer when using DbCommand.ExecuteXXX?

    - by Yadyn
    I want to show the user detailed progress information when performing potentially lengthy database operations. Specifically, when inserting/updating data that may be on the order of hundreds of KB or MB. Currently, I'm using in-memory DataTables and DataRows which are then synced with the database via TableAdapter.Update calls. This works fine and dandy, but the single call leaves little opportunity to glean any kind of progress info to show to the user. I have no idea how much data is passing through the network to the remote DB or what its progress is. Basically, all I know is when Update returns, and it is assumed complete (barring any errors or exceptions). But this means all I can show is 0%, then a pause, then 100%. I can count the number of rows, even going so far as to count how many are actually Modified or Added, and I could maybe even estimate each DataRow's size based on the data type of each column, using sizeof for value types like int and checking length for things like strings or byte arrays. With that, I could probably determine, before updating, an estimated total transfer size, but I'm still stuck without any progress info once Update is called on the TableAdapter. Am I stuck just using an indeterminate progress bar or mouse waiting cursor? Would I need to radically change our data access layer to be able to hook into this kind of information? Even if I can't get it down to the precise KB transferred (like a web browser file download progress bar), could I at least know when each DataRow/DataTable finishes or something? How do you best show this kind of progress info using ADO.NET?

    Read the article

  • Merging two Regular Expressions to Truncate Words in Strings

    - by Alix Axel
    I'm trying to come up with the following function that truncates a string to whole words (if possible, otherwise it should truncate to chars):

        function Text_Truncate($string, $limit, $more = '...') {
            $string = trim(html_entity_decode($string, ENT_QUOTES, 'UTF-8'));

            if (strlen(utf8_decode($string)) > $limit) {
                $string = preg_replace('~^(.{1,' . intval($limit) . '})(?:\s.*|$)~su', '$1', $string);

                if (strlen(utf8_decode($string)) > $limit) {
                    $string = preg_replace('~^(.{' . intval($limit) . '}).*~su', '$1', $string);
                }

                $string .= $more;
            }

            return trim(htmlentities($string, ENT_QUOTES, 'UTF-8', true));
        }

    Here are some tests:

        // Iñtërnâtiônàlizætiøn and then the quick brown fox... (49 + 3 chars)
        echo dyd_Text_Truncate('Iñtërnâtiônàlizætiøn and then the quick brown fox jumped overly the lazy dog and one day the lazy dog humped the poor fox down until she died.', 50, '...');

        // Iñtërnâtiônàlizætiøn_and_then_the_quick_brown_fox_... (50 + 3 chars)
        echo dyd_Text_Truncate('Iñtërnâtiônàlizætiøn_and_then_the_quick_brown_fox_jumped_overly_the_lazy_dog and one day the lazy dog humped the poor fox down until she died.', 50, '...');

    They both work as is; however, if I drop the second preg_replace() I get the following:

        Iñtërnâtiônàlizætiøn_and_then_the_quick_brown_fox_jumped_overly_the_lazy_dog and one day the lazy dog humped the poor fox down until she died....

    I can't use substr() because it only works at byte level, and I don't have access to mb_substr() ATM. I've made several attempts to join the second regex with the first one, but without success. Please help S.M.S., I've been struggling with this for almost an hour.

    EDIT: I'm sorry, I've been awake for 40 hours and I shamelessly missed this:

        $string = preg_replace('~^(.{1,' . intval($limit) . '})(?:\s.*|$)?~su', '$1', $string);

    Still, if someone has a more optimized regex (or one that ignores the trailing space), please share:

        "Iñtërnâtiônàlizætiøn and then "
        "Iñtërnâtiônàlizætiøn_and_then_"

    EDIT 2: I still can't get rid of the trailing whitespace, can someone help me out?

    Read the article

  • How to log in to craigslist using C#

    - by kosikiza
    I'm using the following code to log in to craigslist, but I haven't succeeded yet.

        string formParams = string.Format("inputEmailHandle={0}&inputPassword={1}", "[email protected]", "removed");
        //string postData = "[email protected]&inputPassword=removed";
        string uri = "https://accounts.craigslist.org/";
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
        request.KeepAlive = true;
        request.ProtocolVersion = HttpVersion.Version10;
        request.Method = "POST";
        byte[] postBytes = Encoding.ASCII.GetBytes(formParams);
        request.ContentType = "application/x-www-form-urlencoded";
        request.ContentLength = postBytes.Length;
        Stream requestStream = request.GetRequestStream();
        requestStream.Write(postBytes, 0, postBytes.Length);
        requestStream.Close();
        HttpWebResponse response = (HttpWebResponse)request.GetResponse();
        cookyHeader = response.Headers["Set-cookie"];

        string pageSource;
        string getUrl = "https://post.craigslist.org/del";
        WebRequest getRequest = WebRequest.Create(getUrl);
        getRequest.Headers.Add("Cookie", cookyHeader);
        WebResponse getResponse = getRequest.GetResponse();
        using (StreamReader sr = new StreamReader(getResponse.GetResponseStream()))
        {
            pageSource = sr.ReadToEnd();
        }
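
    A hedged suggestion rather than a confirmed fix: let HttpWebRequest track cookies with a shared CookieContainer instead of copying the Set-Cookie header by hand, since cookies set during redirects are easy to lose that way. The form fields and URLs come from the question; everything else is illustrative.

        // Sketch: share one CookieContainer between the login POST and later requests
        // so all cookies (including ones set on redirects) are carried automatically.
        CookieContainer cookies = new CookieContainer();

        HttpWebRequest login = (HttpWebRequest)WebRequest.Create("https://accounts.craigslist.org/");
        login.Method = "POST";
        login.ContentType = "application/x-www-form-urlencoded";
        login.CookieContainer = cookies;                    // cookies captured here
        byte[] body = Encoding.ASCII.GetBytes(formParams);  // formParams from the snippet above
        using (Stream s = login.GetRequestStream())
            s.Write(body, 0, body.Length);
        using (HttpWebResponse r = (HttpWebResponse)login.GetResponse()) { }

        HttpWebRequest next = (HttpWebRequest)WebRequest.Create("https://post.craigslist.org/del");
        next.CookieContainer = cookies;                     // ...and replayed here
        using (HttpWebResponse r = (HttpWebResponse)next.GetResponse())
        using (StreamReader sr = new StreamReader(r.GetResponseStream()))
        {
            string pageSource = sr.ReadToEnd();
        }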

    Read the article

  • C# Regex stops after first line matched

    - by JD Guzman
    Ok, so I have a regex and I need it to find matches in a multiline string. This is the string I am using:

        Device Identifier: disk0
        Device Node: /dev/disk0
        Part of Whole: disk0
        Device / Media Name: OCZ-VERTEX2 Media
        Volume Name: Not applicable (no file system)
        Mounted: Not applicable (no file system)
        File System: None
        Content (IOContent): GUID_partition_scheme
        OS Can Be Installed: No
        Media Type: Generic
        Protocol: SATA
        SMART Status: Verified
        Total Size: 240.1 GB (240057409536 Bytes) (exactly 468862128 512-Byte-Blocks)
        Volume Free Space: Not applicable (no file system)
        Device Block Size: 512 Bytes
        Read-Only Media: No
        Read-Only Volume: Not applicable (no file system)
        Ejectable: No
        Whole: Yes
        Internal: Yes
        Solid State: Yes
        OS 9 Drivers: No
        Low Level Format: Not supported

    Basically I need to separate each line into two groups, with the colon as the separator. The regex I am using is:

        @"([A-Za-z0-9\(\) \-\/]+):([A-Za-z0-9\(\) \-\/]+).*"

    It does work, but it only picks up the first line and separates it into the two groups like I want, then stops at that point. I have tried the Multiline option but it doesn't make any difference. I must admit I am new to the regex world. Any help is appreciated.
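
    A hedged sketch of what usually does the trick here (assuming the goal is one match per line): RegexOptions.Multiline makes ^ and $ match at each line break, and Regex.Matches (rather than Regex.Match) returns every match instead of stopping after the first. The variable text stands for the diskutil output above.

        // using System.Text.RegularExpressions;
        // Sketch: anchor the pattern per line and enumerate all matches.
        Regex r = new Regex(@"^\s*([^:\r\n]+):\s*(.+)$", RegexOptions.Multiline);
        foreach (Match m in r.Matches(text))
        {
            string key = m.Groups[1].Value.Trim();
            string value = m.Groups[2].Value.Trim();
            Console.WriteLine(key + " = " + value);
        }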

    Read the article

  • Bit shift and pointer oddities in C, looking for explanations

    - by foo
    Hi all, I discovered something odd that I can't explain. If someone here can see what or why this is happening, I'd like to know. What I'm doing is taking an unsigned short containing 12 bits aligned high, like this:

        1111 1111 1111 0000

    I then want to shift the bits so that each byte in the short holds 7 bits with the MSB as a pad. The result for what's presented above should look like this:

        0111 1111 0111 1100

    What I have done is this:

        unsigned short buf = 0xfff; //align high
        buf <<= 4;
        buf >>= 1;
        *((char*)&buf) >>= 1;

    This gives me something that looks like it's correct, but the result of the last shift leaves the bit set like this:

        0111 1111 1111 1100

    Very odd. If I use an unsigned char as temporary storage and shift that, then it works, like this:

        unsigned short buf = 0xfff;
        buf <<= 4;
        buf >>= 1;
        tmp = *((char*)&buf);
        *((char*)&buf) = tmp >> 1;

    The result of this is:

        0111 1111 0111 1100

    Any ideas what is going on here?

    Read the article

  • Java: Cipher package (encrypt and decrypt). invalid key error

    - by noinflection
    Hello folks, I am writing a class with static methods to encrypt and decrypt a message using javax.crypto. I have two static methods that use ecipher and dcipher; in order to do what they are supposed to do, I need to initialize some variables (which are also static). But when I try to use it, I get an InvalidKeyException for the parameters I give to ecipher.init(...). I can't find why. Here is the code:

        private static byte[] raw = {-31, 17, 7, -34, 59, -61, -60, -16, 26, 87, -35, 114, 0, -53, 99, -116,
                                     -82, -122, 68, 47, -3, -17, -21, -82, -50, 126, 119, -106, -119, -5, 109, 98};
        private static SecretKeySpec skeySpec;
        private static Cipher ecipher;
        private static Cipher dcipher;

        static {
            try {
                skeySpec = new SecretKeySpec(raw, "AES");
                // Instantiate the cipher
                ecipher = Cipher.getInstance("AES");
                dcipher = Cipher.getInstance("AES");
                ecipher.init(Cipher.ENCRYPT_MODE, skeySpec);
                dcipher.init(Cipher.DECRYPT_MODE, skeySpec);
            } catch (NoSuchAlgorithmException e) {
                throw new UnhandledException("No existe el algoritmo deseado", e);
            } catch (NoSuchPaddingException e) {
                throw new UnhandledException("No existe el padding deseado", e);
            } catch (InvalidKeyException e) {
                throw new UnhandledException("Clave invalida", e);
            }
        }

    Read the article

  • SEO: A whois server that works for .SE domains?

    - by Niels Bosma
    I'm developing a small domain checker and I can't get .SE to work:

        public string Lookup(string domain, RecordType recordType, SeoToolsSettings.Tld tld)
        {
            TcpClient tcp = new TcpClient();
            tcp.Connect(tld.WhoIsServer, 43);

            string strDomain = recordType.ToString() + " " + domain + "\r\n";
            byte[] bytDomain = Encoding.ASCII.GetBytes(strDomain.ToCharArray());

            Stream s = tcp.GetStream();
            s.Write(bytDomain, 0, strDomain.Length);

            StreamReader sr = new StreamReader(tcp.GetStream(), Encoding.ASCII);
            string strLine = "";
            StringBuilder builder = new StringBuilder();
            while (null != (strLine = sr.ReadLine()))
            {
                builder.AppendLine(strLine);
            }
            tcp.Close();

            if (tld.WhoIsDelayMs > 0) System.Threading.Thread.Sleep(tld.WhoIsDelayMs);

            return builder.ToString();
        }

    I've tried the whois servers whois.nic-se.se and whois.iis.se, but I keep getting:

        # Copyright (c) 1997- .SE (The Internet Infrastructure Foundation).
        # All rights reserved.
        # The information obtained through searches, or otherwise, is protected
        # by the Swedish Copyright Act (1960:729) and international conventions.
        # It is also subject to database protection according to the Swedish
        # Copyright Act.
        # Any use of this material to target advertising or
        # similar activities is forbidden and will be prosecuted.
        # If any of the information below is transferred to a third
        # party, it must be done in its entirety. This server must
        # not be used as a backend for a search engine.
        # Result of search for registered domain names under
        # the .SE top level domain.
        # The data is in the UTF-8 character set and the result is
        # printed with eight bits.

        "domain google.se" not found.

    Edit: I've tried changing to UTF-8 with no other result. When I try using whois from Sysinternals I get the correct result, but not with my code, not even using SE.whois-servers.net.
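
    A hedged observation rather than a confirmed fix: the server's reply quotes the whole query string ("domain google.se" not found), which suggests the "domain " prefix built from recordType.ToString() is being treated as part of the name. For a plain whois lookup, the query is normally just the name followed by CRLF, something like the following (reusing s and domain from the snippet above):

        // Sketch: send only the bare domain name to the whois server.
        // Whether the .SE server accepts a record-type prefix at all is the open question.
        string strDomain = domain + "\r\n";
        byte[] bytDomain = Encoding.ASCII.GetBytes(strDomain);
        s.Write(bytDomain, 0, bytDomain.Length);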

    Read the article

  • WCF and streaming requests and responses

    - by Cheeso
    Is it correct that in WCF, I cannot have a service write to a stream that is received by the client? My understanding is that streaming is supported in WCF for requests, responses, or both. Is it true that in all cases, the receiver of the stream must invoke Read? I would like to support a scenario where the receiver of the stream can Write on it. Is this supported? Let me show it this way. The simplest example of streaming in WCF is the service returning a FileStream to a client. This is a streamed response. The server code is like this:

        [ServiceContract]
        public interface IStreamService
        {
            [OperationContract]
            Stream GetData(string fileName);
        }

        public class StreamService : IStreamService
        {
            public Stream GetData(string filename)
            {
                FileStream fs = new FileStream(filename, FileMode.Open);
                return fs;
            }
        }

    And the client code is like this:

        StreamDemo.StreamServiceClient client = new WcfStreamDemoClient.StreamDemo.StreamServiceClient();
        Stream str = client.GetData(@"c:\path\to\myfile.dat");
        do
        {
            b = str.ReadByte(); //read next byte from stream
            ...
        } while (b != -1);

    (example taken from http://blog.joachim.at/?p=33)

    Clear, right? The server returns the Stream to the client, and the client invokes Read on it. Is it possible for the client to provide a Stream, and the server to invoke Write on it? In other words, rather than a pull model, where the client pulls data from the server, it is a push model, where the client provides the "sink" stream and the server writes into it. Is this possible in WCF, and if so, how? What are the config settings required for the binding, interface, etc.? The analogy is the Response.OutputStream from an ASP.NET request. In ASP.NET, any page can invoke Write on the output stream, and the content is received by the client. Can I do something similar in WCF? Thanks.
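
    Not an answer from the source, just a hedged sketch of the usual inversion for the upload direction: the client passes a readable Stream as a streamed request parameter and the service pulls from it, so data still flows client-to-server without buffering, even though the service never writes into a client-side sink. Names and config values below are illustrative.

        // Sketch: streamed request ("push" from the client's point of view).
        [ServiceContract]
        public interface IUploadService
        {
            [OperationContract]
            void Upload(Stream data);   // service reads; client supplies the stream
        }

        // client side
        using (FileStream fs = File.OpenRead(@"c:\path\to\myfile.dat"))
        {
            client.Upload(fs);          // WCF streams the file as the request body
        }

        // binding (app.config); transferMode is the key setting:
        // <basicHttpBinding>
        //   <binding transferMode="StreamedRequest" maxReceivedMessageSize="2147483647" />
        // </basicHttpBinding>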

    Read the article

  • Display last picture

    - by steve
    Hi, I'm inserting an image from the camera (taking a picture) into the MediaStore.Images.Media datastore. Does anyone know how I can go about displaying the last picture taken? I used

        Uri image = ContentUris.withAppendedId(externalContentUri, 45);

    to display an image from the datastore, but obviously 45 is not the correct image. I try to pass the information from the previous activity (camera) to the display activity, but I'm assuming that because the photo callback runs on its own thread, the value never gets set. The photo code is as follows:

        Camera.PictureCallback photoCallback = new Camera.PictureCallback() {
            public void onPictureTaken(byte[] data, Camera camera) {
                // TODO Auto-generated method stub
                FileOutputStream fos;
                try {
                    Bitmap bm = BitmapFactory.decodeByteArray(data, 0, data.length);
                    fileUrl = MediaStore.Images.Media.insertImage(getContentResolver(), bm, "LastTaken", "Picture");
                    if (fileUrl == null) {
                        Log.d("Still", "Image Insert Failed");
                        return;
                    } else {
                        picUri = Uri.parse(fileUrl);
                        sendBroadcast(new Intent(Intent.ACTION_MEDIA_SCANNER_SCAN_FILE, picUri));
                    }
                } catch (Exception e) {
                    Log.d("Picture", "Error Picture: ", e);
                }
                camera.startPreview();
            }
        };

    Read the article

  • Why is python decode replacing more than the invalid bytes from an encoded string?

    - by dangra
    Trying to decode an invalidly encoded UTF-8 HTML page gives different results in Python, Firefox and Chrome. The invalidly encoded fragment from the test page looks like 'PREFIX\xe3\xabSUFFIX':

        >>> fragment = 'PREFIX\xe3\xabSUFFIX'
        >>> fragment.decode('utf-8', 'strict')
        ...
        UnicodeDecodeError: 'utf8' codec can't decode bytes in position 6-8: invalid data

    What follows is a summary of the replacement policies used to handle decoding errors by Python, Firefox and Chrome. Note how the three differ, and especially how the Python builtin removes the valid S (plus the invalid sequence of bytes).

    by Python

    The builtin replace error handler replaces the invalid \xe3\xab plus the S from SUFFIX with U+FFFD:

        >>> fragment.decode('utf-8', 'replace')
        u'PREFIX\ufffdUFFIX'
        >>> print _
        PREFIX?UFFIX

    The Python implementation of the builtin replace error handler looks like:

        >>> python_replace = lambda exc: (u'\ufffd', exc.end)

    As expected, trying this gives the same result as the builtin:

        >>> codecs.register_error('python_replace', python_replace)
        >>> fragment.decode('utf-8', 'python_replace')
        u'PREFIX\ufffdUFFIX'
        >>> print _
        PREFIX?UFFIX

    by Firefox

    Firefox replaces each invalid byte with U+FFFD:

        >>> firefox_replace = lambda exc: (u'\ufffd', exc.start+1)
        >>> codecs.register_error('firefox_replace', firefox_replace)
        >>> test_string.decode('utf-8', 'firefox_replace')
        u'PREFIX\ufffd\ufffdSUFFIX'
        >>> print _
        PREFIX??SUFFIX

    by Chrome

    Chrome replaces each invalid sequence of bytes with U+FFFD:

        >>> chrome_replace = lambda exc: (u'\ufffd', exc.end-1)
        >>> codecs.register_error('chrome_replace', chrome_replace)
        >>> fragment.decode('utf-8', 'chrome_replace')
        u'PREFIX\ufffdSUFFIX'
        >>> print _
        PREFIX?SUFFIX

    The main question is why the builtin replace error handler for str.decode removes the S from SUFFIX. Also, is there an officially recommended Unicode way of handling decoding replacements?

    Read the article

  • Why is my GZipStream not writeable?

    - by Ozzah
    I have some GZ compressed resources in my program and I need to be able to write them out to temporary files for use. I wrote the following function to write the files out and return true on success or false on failure. In addition, I've put a try/catch in there which shows a MessageBox in the event of an error:

        private static bool extractCompressedResource(byte[] resource, string path)
        {
            try
            {
                using (MemoryStream ms = new MemoryStream(resource))
                {
                    using (FileStream fs = new FileStream(path, FileMode.Create, FileAccess.ReadWrite))
                    {
                        using (GZipStream zs = new GZipStream(fs, CompressionMode.Decompress))
                        {
                            ms.CopyTo(zs); // Throws exception
                            zs.Close();
                            ms.Close();
                        }
                    }
                }
            }
            catch (Exception ex)
            {
                MessageBox.Show(ex.Message); // Stream is not writeable
                return false;
            }
            return true;
        }

    I've put a comment on the line which throws the exception. If I put a breakpoint on that line and take a look inside the GZipStream, I can see that it's not writeable (which is what's causing the problem). Am I doing something wrong, or is this a limitation of the GZipStream class?
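
    For context, a hedged sketch of the usual arrangement: a GZipStream opened with CompressionMode.Decompress is a readable wrapper around the compressed source, so it goes around the MemoryStream and gets copied into the FileStream (the reverse of the snippet above).

        // Sketch: wrap the compressed bytes, read decompressed data out of the
        // GZipStream, and write it to the destination file.
        private static bool ExtractCompressedResource(byte[] resource, string path)
        {
            try
            {
                using (MemoryStream ms = new MemoryStream(resource))
                using (GZipStream zs = new GZipStream(ms, CompressionMode.Decompress))
                using (FileStream fs = new FileStream(path, FileMode.Create, FileAccess.Write))
                {
                    zs.CopyTo(fs); // decompressed bytes flow into the file
                }
                return true;
            }
            catch (Exception ex)
            {
                MessageBox.Show(ex.Message);
                return false;
            }
        }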

    Read the article

  • Create image from scratch with JMagick

    - by Michael IV
    I am using the Java port of ImageMagick called JMagick. I need to be able to create a new image and write an arbitrary chunk of text into it. The docs are very poor, and what I have managed to do so far is write text into an image that comes from IO. Also, in all the examples I have found, it seems like the very first operation, before writing new image data, is always loading an existing image into an ImageInfo instance. How do I create an image from scratch with JMagick and then write text into it? Here is what I do now:

        try {
            ImageInfo info = new ImageInfo();
            info.setSize("512x512");
            info.setUnits(ResolutionType.PixelsPerInchResolution);
            info.setColorspace(ColorspaceType.RGBColorspace);
            info.setBorderColor(PixelPacket.queryColorDatabase("red"));
            info.setDepth(8);

            BufferedImage img = new BufferedImage(512, 512, BufferedImage.TYPE_4BYTE_ABGR);
            byte[] imageBytes = ((DataBufferByte) img.getData().getDataBuffer()).getData();
            MagickImage mimage = new MagickImage(info, imageBytes);

            DrawInfo aInfo = new DrawInfo(info);
            aInfo.setFill(PixelPacket.queryColorDatabase("green"));
            aInfo.setUnderColor(PixelPacket.queryColorDatabase("yellow"));
            aInfo.setOpacity(0);
            aInfo.setPointsize(36);
            aInfo.setFont("Arial");
            aInfo.setTextAntialias(true);
            aInfo.setText("JMagick Tutorial");
            aInfo.setGeometry("+40+40");
            mimage.annotateImage(aInfo);

            mimage.setFileName("text.jpg");
            mimage.writeImage(info);
        } catch (MagickException ex) {
            Logger.getLogger(LWJGL_IDOMOO_SIMPLE_TEST.class.getName()).log(Level.SEVERE, null, ex);
        }

    It doesn't work; the JVM crashes with an access violation, as it probably expects the input image to come from IO.

    Read the article

  • UTF-8 to Unicode conversion

    - by sandeep
    Hi, I am having problems with converting UTF-8 to Unicode. Below is the code:

        int charset_convert(char *string, char *to_string, char *charset_from, char *charset_to)
        {
            char *from_buf, *to_buf, *pointer;
            size_t inbytesleft, outbytesleft, ret;
            size_t TotalLen;
            iconv_t cd;

            if (!charset_from || !charset_to || !string) /* sanity check */
                return -1;

            if (strlen(string) < 1)
                return 0; /* we are done, nothing to convert */

            cd = iconv_open(charset_to, charset_from);
            /* Did I succeed in getting a conversion descriptor ? */
            if (cd == (iconv_t)(-1)) {
                /* I guess not */
                printf("Failed to convert string from %s to %s ", charset_from, charset_to);
                return -1;
            }

            from_buf = string;
            inbytesleft = strlen(string);

            /* allocate max sized buffer, assuming target encoding may be 4 byte unicode */
            outbytesleft = inbytesleft * 4;
            pointer = to_buf = (char *)malloc(outbytesleft);
            memset(to_buf, 0, outbytesleft);
            memset(pointer, 0, outbytesleft);

            ret = iconv(cd, &from_buf, &inbytesleft, &pointer, &outbytesleft);

            memcpy(to_string, to_buf, pointer - to_buf);
        }

    main():

        int main()
        {
            char UTF[] = {'A', 'B'};
            char Unicode[1024] = {0};
            char *ptr;
            int x = 0;
            iconv_t cd;

            charset_convert(UTF, Unicode, "UTF-8", "UNICODE");

            ptr = Unicode;
            while (*ptr != '\0') {
                printf("Unicode %x \n", *ptr);
                ptr++;
            }
            return 0;
        }

    It should give A and B but I am getting:

        ffffffff
        fffffffe
        41

    Thanks,
    Sandeep

    Read the article
