Search Results

Search found 9387 results on 376 pages for 'double byte'.

Page 252/376

  • Unpacking Argument Lists and Instantiating WTForms objects from web.py

    - by Morris Cornell-Morgan
    After a bit of searching, I've found that it's possible to instantiate a WTForms object in web.py using the following code: form = my_form(**web.input()) web.input() returns a "dictionary-like" web.storage object, but without the double asterisks WTForms will raise an exception: TypeError: formdata should be a multidict-type wrapper that supports the 'getlist' method From the Python documentation I understand that the two asterisks are used to unpack a dictionary of named arguments. That said, I'm still a bit confused about exactly what is going on. What makes the web.storage object returned by web.input() "dictionary-like" enough that it can be unpacked by ** but not "dictionary-like" enough that it can be passed as-is to the WTForms constructor? I know that this is an extremely basic question, but any advice to help a novice programmer would be greatly appreciated!

    Read the article

  • Some special characters defined in "ISO-8859-1" can't be shown when encoding with "UTF-8"

    - by Mike.Huang
    I need to get a string from the browser's URL request and then render a text image from the requested text. I know the default encoding for Java network transmission is "ISO-8859-1", and everything works for characters defined in "ISO-8859-1". But when the request contains a multi-byte Unicode character (e.g. Chinese, or something like ¤?), I need to re-decode the bytes as "UTF-8". My code looks like: String reslut = new String(requestString.getBytes("ISO-8859-1"), "UTF-8"); That mostly works, but now some characters from ISO-8859-1 are no longer shown, namely the ones in the range 0x80 - 0xFF (as defined in "ISO-8859-1"), i.e. characters above 0x7F in "ISO-8859-1" are lost after converting from "ISO-8859-1" to "UTF-8". Is there another approach that handles both cases?

    Read the article

  • SQL Select Permissions

    - by Brandi
    I have a database that I need to connect to and select from, using a SQL login; let's call it myusername. When I use the following, no SELECT permission shows up:

        SELECT * FROM fn_my_permissions ('dbo.mytable', 'OBJECT')
        GO

    Several times I tried things like:

        USE mydatabase
        GO
        GRANT SELECT TO myusername
        GO
        GRANT SELECT ON DATABASE::mydatabase TO myusername
        GO
        GRANT SELECT ON mytable TO myusername
        GO

    It says the queries execute successfully, but the result of the first query never changes. What simple thing am I missing to grant database-level SELECT permissions? As a note, I made double sure it was the correct user and the correct database, and I have already tried granting table-level SELECT permissions. So far I keep getting the error: SELECT permission denied on object 'mytable', database 'mydatabase', schema 'dbo'. Any ideas what I'm missing? Thanks in advance.

    Read the article

  • How was non-decimal money represented in software?

    - by dan04
    A lot of the answers to the questions about the accuracy of float and double recommend the use of decimal for monetary amounts. This works because today all currencies are decimal except MGA and MRO, and those have subunits of 1/5 so are still decimal-friendly. But what about the software used in U.S. stock markets when prices were in 1/16ths of dollar? The accuracy of binary data types wouldn't have been an issue, right? Going further back, how did pre-1971 British accounting software deal with pounds, shillings, and pence? Did their versions of COBOL have a special PIC clause for it? Were all amounts stored in pence? How was decimalisation handled?
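
    For a concrete sense of the fixed-point option the question raises (storing all amounts in pence): since 1s = 12d and £1 = 20s, there are 240d to the pound, so pre-decimal sums reduce to plain integer arithmetic. A small illustrative C++ sketch of that idea, not a claim about how any historical system actually did it:

        #include <cstdio>

        // Pre-decimal British currency: 12 pence (d) per shilling (s),
        // 20 shillings per pound, so 240 pence per pound.
        long to_pence(long pounds, long shillings, long pence) {
            return pounds * 240 + shillings * 12 + pence;
        }

        void print_lsd(long total_pence) {
            long pounds    = total_pence / 240;
            long shillings = (total_pence % 240) / 12;
            long pence     = total_pence % 12;
            std::printf("%ld pounds %lds %ldd\n", pounds, shillings, pence);
        }

        int main() {
            long a = to_pence(2, 13, 7);   // 2 pounds 13s 7d  = 643d
            long b = to_pence(0, 6, 8);    //          6s 8d   =  80d
            print_lsd(a + b);              // prints: 3 pounds 0s 3d
        }

    Decimalisation then becomes a one-off conversion of the stored integer unit rather than a change to the arithmetic itself.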

    Read the article

  • Robots Crawling Across Namespace?

    - by Codex73
    I migrated a site from one domain to another and placed a permanent redirect on the old account. My stats logs are capturing this:

        Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
        /libro_metaboforte_chap5.php/members/members/file_chap6.php

    I added the robots.txt below, which wasn't present at the time of migration.

    robots.txt contents:

        User-agent: *
        Allow: /
        Disallow: /members/
        Disallow: /includes/

    .htaccess contents:

        DirectoryIndex index.php index.html
        Options +FollowSymlinks
        RewriteEngine On
        # Turn on the rewriting engine
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_URI} !^/store/?$
        RewriteCond %{QUERY_STRING} !.
        RewriteRule ^.+/?$ index.php [QSA,L]
        RewriteCond %{QUERY_STRING} ^curlang=([a-z]*)$
        RewriteRule ^.+/?$ index.php? [QSA,L]

    I will continue to log incoming bot requests. My .htaccess does rewrite, and I only just added the robots.txt file. The odd part is that the bot is requesting doubled directory paths. I don't know whether the problem was not having robots.txt in place, or the rewrites done by the .htaccess that is in place.

    Read the article

  • [OpenGL] I'm having an issue using GLshort to represent vertices and normals.

    - by Xylopia
    As my project gets close to the optimization stage, I've noticed that reducing vertex metadata could vastly improve 3D rendering performance. I've searched around and found the following advice on Stack Overflow: "Using GL_SHORT instead of GL_FLOAT in an OpenGL ES vertex array", "How do you represent a normal or texture coordinate using GLshorts?", "Advice on speeding up OpenGL ES 1.1 on the iPhone". Simple experiments show that switching from "FLOAT" to "SHORT" for vertices and normals isn't hard, but what troubles me is that when you scale vertices back to their original size (with glScalef), the normals get multiplied by the reciprocal of the scale. So how do you use "short" for both the vertex and the normal at the same time? I've been trying this and that for about a full day, but so far I could only get "float vertex w/ byte normal" or "short vertex w/ float normal" to work. Your help would be truly appreciated.
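
    For what it's worth, here is a minimal fixed-function sketch (desktop GL 1.x / OpenGL ES 1.1 style) of one way to combine the two: positions are stored as GLshort pre-multiplied by a power-of-two factor and scaled back with glScalef, while integer normals rely on the spec's mapping of GL_SHORT into [-1, 1] plus GL_NORMALIZE to stay unit length after the modelview scale. The header, scale factor and data are placeholders for the example:

        #include <GL/gl.h>   // on iPhone / OpenGL ES 1.1 use the GLES gl.h header instead

        // One triangle; positions were multiplied by 1024 before truncation to GLshort.
        static const GLshort verts[] = {
               0,    0, 0,
            1024,    0, 0,
               0, 1024, 0
        };
        // Unit normals scaled to the GLshort range (+Z for all three vertices here).
        static const GLshort norms[] = {
            0, 0, 32767,
            0, 0, 32767,
            0, 0, 32767
        };

        void drawShortMesh(void)
        {
            glEnableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_NORMAL_ARRAY);
            glVertexPointer(3, GL_SHORT, 0, verts);  // positions: plain integer values
            glNormalPointer(GL_SHORT, 0, norms);     // normals: mapped into [-1, 1]

            glPushMatrix();
            glScalef(1.0f / 1024.0f, 1.0f / 1024.0f, 1.0f / 1024.0f); // undo the pre-scale
            glEnable(GL_NORMALIZE);   // re-normalize normals after the modelview scale
            glDrawArrays(GL_TRIANGLES, 0, 3);
            glPopMatrix();

            glDisableClientState(GL_NORMAL_ARRAY);
            glDisableClientState(GL_VERTEX_ARRAY);
        }

    The idea is that glScalef only compensates for the vertex quantization; the normals are affected only through the normal matrix, and GL_NORMALIZE (or GL_RESCALE_NORMAL, for a uniform scale) corrects their length for lighting.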

    Read the article

  • How to check if two types can be compared, summed etc.?

    - by Marcus
    Hi, if given two types (Type a, Type b), is there any "nice" way to find out if those two can be compared, summed etc.? I was thinking if the types implement IConvertible, one could convert both to lets say decimal and perform a "Convert.ToDecimal(a) > Convert.ToDecimal(b)" ? I am building an expression evaluator and want to be able to work with any kind of object and thus need to know if a type can be compared to another type (it DOESN'T have to be the same types on both sides. eg. double > int)

    Read the article

  • Classloader problems with a Tomcat 6 javaagent

    - by alecswan
    I am using Salve Dependency Injection library that instruments the byte code of the web application. I specified -javaagent in Tomcat VM options and pointed it to the Salve agent jar. The agent jar gets loaded, but then it throws a java.lang.NoClassDefFoundError unable to find classes that are in other Salve jars which are located in WEB-INF/lib folder of my web app. I can solve this problem by putting those JARs in Tomcat/endorsed folder. However, some of those jars depend on third-party libraries, such as Spring and servlet-api.jar. Therefore, I am forced to put all these dependencies in Tomcat/endorsed as well. Could anybody suggest a better way for handling dependencies of a Tomcat javaagent? Thanks.

    Read the article

  • Microcontroller to Microcontroller SPI communication

    - by onaclov2000
    Hello again. I've been doing some reading and have even gotten a "master" SPI working on my microcontroller. Here is my question: if the master wants to initiate a write to the slave, we write to the SSPBUF, but how do we control what the slave responds with? The datasheet doesn't make the order of events in that case clear to me. I.e. the master puts a char into the SSPBUF, this causes the SPI module to send data to the slave, and during the shift the slave returns a byte. On the slave side, is there something that tells you that incoming data has arrived, so that you can write to your SSPBUF first and THEN accept the data? Or do you have to write the first "return value" you want sent back into the SSPBUF before the master even has an opportunity to initiate a transfer?
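
    To make the second option concrete: in SPI the byte a slave shifts out during a transfer is whatever was sitting in its buffer when the clocking started, so the reply to byte N normally goes out with byte N+1 unless it can be decided in advance. A rough sketch of a slave loop in the style of a PIC MSSP peripheral; SSPBUF/SSPSTAT and the BF flag are the usual names from the PIC datasheets, declared here as stand-in variables only so the sketch is self-contained, so check your part's datasheet:

        volatile unsigned char SSPBUF;                 // transmit/receive buffer stand-in
        volatile struct { unsigned BF : 1; } SSPSTAT;  // buffer-full flag stand-in

        // Placeholder for whatever the application does with an incoming byte.
        static unsigned char handle_command(unsigned char c)
        {
            return (unsigned char)(c + 1);   // dummy reply logic
        }

        void spi_slave_task(void)
        {
            unsigned char reply = 0x00;   // preload a default/status byte
            SSPBUF = reply;               // this is what the very first transfer returns
            for (;;) {
                while (!SSPSTAT.BF)       // wait until the master has clocked a byte in
                    ;
                unsigned char received = SSPBUF;     // reading the buffer clears BF
                reply = handle_command(received);    // decide the next reply
                SSPBUF = reply;                      // shifted out on the *next* transfer
            }
        }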

    Read the article

  • regex to match postgresql bytea

    - by filiprem
    In PostgreSQL, there is a BLOB datatype called bytea. It's just an array of bytes. bytea literals are output in the following way: '\\037\\213\\010\\010\\005`Us\\000\\0001.fp3\'\\223\\222%' (see the PostgreSQL docs for the full definition of the format). I'm trying to construct a Perl regular expression which will match any such string. It should also match standard ANSI SQL string literals, like 'Joe', 'Joe''s Mom', 'Fish Called ''Wendy''', as well as the backslash-escaped variant: 'Joe\'s Mom'. My first approach (shown below) works only for some bytea representations.

        s{
            '            # Opening apostrophe
            (?:          # Start group
                [^\\\']  # Anything but a backslash or an apostrophe
            |            # or
                \\ .     # Backslash and anything
            |            # or
                \'\'     # Double apostrophe
            )*           # End of group
            '            # Closing apostrophe
        }{LITERAL_REPLACED}xgo;

    For others (longer ones, with many escaped apostrophes), Perl gives this warning: Complex regular subexpression recursion limit (32766) exceeded at ./sqa.pl line 33, < line 1. So I am looking for a better (but still regex-based) solution; it probably requires some regex alchemy (avoiding backreferences and all).

    Read the article

  • question about fgets

    - by user105033
    Is this safe to do (does fgets terminate the buffer with a null), or should I be setting the 20th byte to null after the call to fgets, before I call clean?

        // strip new lines
        void clean(char *data)
        {
            while (*data) {
                if (*data == '\n' || *data == '\r')
                    *data = '\0';
                data++;
            }
        }

        // for this, assume that the file contains 1 line no longer than 19 bytes
        // buffer is freed elsewhere
        char *load_latest_info(char *file)
        {
            FILE *f;
            char *buffer = (char*) malloc(20);
            if (f = fopen(file, "r"))
                if (fgets(buffer, 20, f)) {
                    clean(buffer);
                    return buffer;
                }
            free(buffer);
            return NULL;
        }

    Read the article

  • C++ min heap with user-defined type.

    - by bsg
    Hi, I am trying to implement a min heap in C++ for a struct type that I created. I created a vector of the type, but it crashed when I used make_heap on it, which is understandable because it doesn't know how to compare the items in the heap. How do I create a min-heap (that is, one where the top element is always the smallest in the heap) for a struct type? The struct is:

        struct DOC {
            int docid;
            double rank;
        };

    I want to compare the DOC structures using the rank member. How would I do this? I tried using a priority queue with a comparator class, but that also crashed, and it also seems silly to use a data structure that has a heap as its underlying basis when what I really need is a heap anyway. Thank you very much, bsg
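
    For reference, a minimal sketch of the comparator approach: std::make_heap and friends build a max-heap with respect to the comparator you give them, so comparing rank with > puts the smallest rank on top. The sample data is made up:

        #include <algorithm>
        #include <cstdio>
        #include <vector>

        struct DOC {
            int docid;
            double rank;
        };

        // "a orders after b" when a.rank > b.rank, which makes the smallest
        // rank sit at the front of the heap (a min-heap).
        struct ByRankGreater {
            bool operator()(const DOC& a, const DOC& b) const {
                return a.rank > b.rank;
            }
        };

        int main() {
            std::vector<DOC> docs;
            DOC a = { 1, 0.9 }; docs.push_back(a);
            DOC b = { 2, 0.1 }; docs.push_back(b);
            DOC c = { 3, 0.5 }; docs.push_back(c);

            std::make_heap(docs.begin(), docs.end(), ByRankGreater());
            std::printf("top: docid=%d rank=%g\n",
                        docs.front().docid, docs.front().rank);   // docid=2 rank=0.1

            std::pop_heap(docs.begin(), docs.end(), ByRankGreater()); // smallest moves to back
            docs.pop_back();
        }

    The same comparator also works as the third template argument of std::priority_queue<DOC, std::vector<DOC>, ByRankGreater>, if a ready-made container is preferred.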

    Read the article

  • Playing around with Eclipse features - Project files are now hidden?

    - by Daddy Warbox
    I don't even remember how, but somehow I managed to make all of my project's source files hidden in Eclipse's Package and Project Explorer panels. Go figure. 'Show Filtered Children (alt+click)' temporarily reveals the files, and only in Package Explorer can I double-click to reopen them from this view. They go back into hiding after I select another item, though. Plus, now I'm getting other annoyances, such as all of the collapsed, non-hidden trees expanding at once when I click on any item, and the entire file folder tree of my project now being shown in these panels (including my .svn subversion folders... which shouldn't be any of Eclipse's business, presently). Long story short, my Package and Project Explorers just blew up on me, and I want to know how to fix this. Thanks in advance. P.S. What's a good guide I can use to learn my way around this silly contraption, anyway?

    Read the article

  • Hex to Decimal conversion in C

    - by darkie15
    Hi all, here is my code for converting hex to decimal. The hex values are stored in an unsigned char array:

        int liIndex;
        long hexToDec;
        unsigned char length[4];

        for (liIndex = 0; liIndex < 4; liIndex++)
        {
            length[liIndex] = (unsigned char) *content;
            printf("\n Hex value is %.2x", length[liIndex]);
            content++;
        }
        hexToDec = strtol(length, NULL, 16);

    Each array element contains 1 byte of information and I have read 4 bytes. When I execute it, here is the output that I get:

        Hex value is 00
        Hex value is 00
        Hex value is 00
        Hex value is 01
        Chunk length is 0

    Can anyone please help me understand the error here? The decimal value should have come out as 1 instead of 0. Regards, darkie
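
    A note on what the output hints at: strtol parses a string of ASCII hex-digit characters, but length[] holds raw byte values, and the leading 0x00 terminates that "string" immediately, which is why the result is 0. If the four bytes are a big-endian length field (as the "Chunk length" output suggests), shifting them together gives the value directly. A hedged sketch; the function name is made up:

        #include <cstdio>

        // Assemble four raw big-endian bytes into an unsigned 32-bit value.
        // This is arithmetic on the byte values themselves; no text parsing involved.
        unsigned long bytes_to_u32(const unsigned char b[4]) {
            return ((unsigned long)b[0] << 24) |
                   ((unsigned long)b[1] << 16) |
                   ((unsigned long)b[2] << 8)  |
                    (unsigned long)b[3];
        }

        int main() {
            unsigned char length[4] = { 0x00, 0x00, 0x00, 0x01 };
            std::printf("Chunk length is %lu\n", bytes_to_u32(length));   // prints 1
        }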

    Read the article

  • search & replace on 3000 row, 25 column spreadsheet

    - by Deca
    I'm attempting to clean up data in this (old) spreadsheet and need to remove things like single and double quotes, HTML tags and so on. Trouble is, it's a 3000 row file with 25 columns and every spreadsheet app I've tried (NeoOffice, MS Excel, Apple Numbers) chokes on it. Hard. Any ideas on how else I can clean this thing up for import to MySQL? Clearly I could go through each record manually, row by row, but would like to avoid that if at all possible. Likewise, I could write a PHP script to handle it on import, but don't want to put the server into a death spiral either.

    Read the article

  • C++: building iterator from bits

    - by gruszczy
    I have a bitmap and would like to return an iterator over the positions of its set bits. Right now I just walk the whole bitmap, and if a bit is set, I yield the next position. I believe this could be done more effectively: for example, statically build an array of set-bit positions for every possible combination of bits in a single byte and return the vector of positions. This can't be done for a whole int, because the array would be too big. But maybe there are better solutions? Do you know any smart algorithms for this?
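
    For illustration, a sketch of the per-byte table idea described above; the names and the LSB-first bit numbering are assumptions of the example:

        #include <cstddef>
        #include <cstdio>
        #include <vector>

        // positions[b] holds the offsets (0..7) of the set bits in byte value b.
        static std::vector<std::vector<int> > build_table() {
            std::vector<std::vector<int> > positions(256);
            for (int b = 0; b < 256; ++b)
                for (int bit = 0; bit < 8; ++bit)
                    if (b & (1 << bit))
                        positions[b].push_back(bit);
            return positions;
        }

        // Append the positions of all set bits in 'bitmap' to 'out'.
        void set_bit_positions(const unsigned char* bitmap, std::size_t nbytes,
                               std::vector<std::size_t>& out) {
            static const std::vector<std::vector<int> > table = build_table();
            for (std::size_t i = 0; i < nbytes; ++i) {
                const std::vector<int>& p = table[bitmap[i]];
                for (std::size_t j = 0; j < p.size(); ++j)
                    out.push_back(i * 8 + p[j]);
            }
        }

        int main() {
            unsigned char bm[2] = { 0x05, 0x80 };   // bits 0, 2 and 15 set
            std::vector<std::size_t> out;
            set_bit_positions(bm, 2, out);
            for (std::size_t i = 0; i < out.size(); ++i)
                std::printf("%lu ", (unsigned long)out[i]);   // prints: 0 2 15
            std::printf("\n");
        }

    A wrapper class exposing begin()/end() over the result (or computing positions lazily, byte by byte) would turn this into the iterator interface; the table itself stays tiny either way, at 256 entries of at most 8 positions.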

    Read the article

  • What is stopping data flow with .NET 3.5 asynchronous System.Net.Sockets.Socket?

    - by TonyG
    I have a .NET 3.5 client/server socket interface using the asynchronous methods. The client connects to the server and the connection should remain open until the app terminates. The protocol consists of the following pattern:

        send stx
        receive ack
        send data1
        receive ack
        send data2
        (repeat 5-6 while more data)
        receive ack
        send etx

    So a single transaction with two datablocks as above would consist of 4 sends from the client. After sending etx the client simply waits for more data to send out, then begins the next transmission with stx. I do not want to break the connection between individual exchanges or after each stx/data/etx payload. Right now, after connection, the client can send the first stx and get a single ack, but I can't put more data onto the wire after that. Neither side disconnects; the socket is still intact. The client code is seriously abbreviated as follows - I'm following the pattern commonly available in online code samples.

        private void SendReceive(string data)
        {
            // ...
            SocketAsyncEventArgs completeArgs;
            completeArgs.Completed += new EventHandler<SocketAsyncEventArgs>(OnSend);
            clientSocket.SendAsync(completeArgs);

            // two AutoResetEvents, one for send, one for receive
            if ( !AutoResetEvent.WaitAll(autoSendReceiveEvents , -1) )
                Log("failed");
            else
                Log("success");
            // ...
        }

        private void OnSend( object sender , SocketAsyncEventArgs e )
        {
            // ...
            Socket s = e.UserToken as Socket;
            byte[] receiveBuffer = new byte[ 4096 ];
            e.SetBuffer(receiveBuffer , 0 , receiveBuffer.Length);
            e.Completed += new EventHandler<SocketAsyncEventArgs>(OnReceive);
            s.ReceiveAsync(e);
            // ...
        }

        private void OnReceive( object sender , SocketAsyncEventArgs e )
        {
            // ...
            if ( e.BytesTransferred > 0 )
            {
                Int32 bytesTransferred = e.BytesTransferred;
                String received = Encoding.ASCII.GetString(e.Buffer , e.Offset , bytesTransferred);
                dataReceived += received;
            }
            autoSendReceiveEvents[ SendOperation ].Set();    // could be moved elsewhere
            autoSendReceiveEvents[ ReceiveOperation ].Set(); // releases mutexes
        }

    The code on the server is very similar except that it receives first and then sends a response - the server is not doing anything (that I can tell) to modify the connection after it sends a response. The problem is that the second time I hit SendReceive in the client, the connection is already in a weird state. Do I need to do something in the client to preserve the SocketAsyncEventArgs and re-use the same object for the lifetime of the socket/connection? I'm not sure which eventargs object should hang around during the life of the connection or a given exchange. Do I need to do something, or not do something, in the server to ensure it continues to receive data? The server setup and response processing look like this:

        void Start()
        {
            // ...
            listenSocket.Bind(...);
            listenSocket.Listen(0);
            StartAccept(null);   // note accept as soon as we start. OK?
            mutex.WaitOne();
        }

        void StartAccept(SocketAsyncEventArgs acceptEventArg)
        {
            if ( acceptEventArg == null )
            {
                acceptEventArg = new SocketAsyncEventArgs();
                acceptEventArg.Completed += new EventHandler<SocketAsyncEventArgs>(OnAcceptCompleted);
            }
            Boolean willRaiseEvent = this.listenSocket.AcceptAsync(acceptEventArg);
            if ( !willRaiseEvent )
                ProcessAccept(acceptEventArg);
            // ...
        }

        private void OnAcceptCompleted( object sender , SocketAsyncEventArgs e )
        {
            ProcessAccept(e);
        }

        private void ProcessAccept( SocketAsyncEventArgs e )
        {
            // ...
            SocketAsyncEventArgs readEventArgs = new SocketAsyncEventArgs();
            readEventArgs.SetBuffer(dataBuffer , 0 , Int16.MaxValue);
            readEventArgs.Completed += new EventHandler<SocketAsyncEventArgs>(OnIOCompleted);
            readEventArgs.UserToken = e.AcceptSocket;
            dataReceived = "";   // note server is degraded for single client/thread use

            // As soon as the client is connected, post a receive to the connection.
            Boolean willRaiseEvent = e.AcceptSocket.ReceiveAsync(readEventArgs);
            if ( !willRaiseEvent )
                this.ProcessReceive(readEventArgs);

            // Accept the next connection request.
            this.StartAccept(e);
        }

        private void OnIOCompleted( object sender , SocketAsyncEventArgs e )
        {
            // switch ( e.LastOperation )
            case SocketAsyncOperation.Receive:
                ProcessReceive(e);   // similar to client code
                                     // operate on dataReceived here
            case SocketAsyncOperation.Send:
                ProcessSend(e);      // similar to client code
        }

        // execute this when a data has been processed into a response (ack, etc)
        private SendResponseToClient(string response)
        {
            // create buffer with response
            // currentEventArgs has class scope and is re-used
            currentEventArgs.SetBuffer(sendBuffer , 0 , sendBuffer.Length);
            Boolean willRaiseEvent = currentClient.SendAsync(currentEventArgs);
            if ( !willRaiseEvent )
                ProcessSend(currentEventArgs);
        }

    A .NET trace shows the following when sending ABC\r\n:

        Socket#7588182::SendAsync()
        Socket#7588182::SendAsync(True#1)
        Data from Socket#7588182::FinishOperation(SendAsync)
        00000000 : 41 42 43 0D 0A
        Socket#7588182::ReceiveAsync()
        Exiting Socket#7588182::ReceiveAsync() - True#1

    And it stops there. It looks just like the first send from the client, but the server shows no activity. I think that could be info overload for now, but I'll be happy to provide more details as required. Thanks!

    Read the article

  • Thread safe lockfree mutual ByteArray queue

    - by user313421
    A byte stream needs to be transferred between one producer thread and one consumer thread. The producer is faster than the consumer most of the time, and I need enough buffered data for the QoS of my application. I've read about this problem and there are solutions like a shared buffer, the .NET PipeStream class, and so on. This class is going to be instantiated many times on the server, so I need an optimized solution. Is it a good idea to use a Queue of ByteArray? If yes, I'll use an optimization algorithm to guess the queue size and each ByteArray's capacity, and in theory that fits my case. If not, what's the best approach? Please let me know if there's a good lock-free, thread-safe implementation of a ByteArray queue in C# or VB. Thanks in advance

    Read the article

  • Open an attachment for editing and save changes made to it

    - by Seitaridis
    My Lotus Notes document has a rich text item that stores an attachment. I want to edit the attachment and afterwards save it back to the Lotus Notes document. This is the script that launches the attachment:

        @Command([EditGotoField];"Attachment");
        @Command([EditSelectAll]);
        @Command([AttachmentLaunch]);
        @Command([EditDeselectAll])

    This script opens the attachment, but the changes made are not reflected back to the Lotus Notes document. One way of solving this is to add AttachmentActionDefault=2 as an entry to notes.ini. That enables editing the attachment when double-clicking on it. Right-clicking the attachment and then choosing the edit action produces the same result. In both cases, after saving, the changes are reflected back to the Lotus Notes document. The problem is that I want to use a button to open the attachment.

    Read the article

  • jquery ajax image

    - by Nishima
    Hi all, I am sending an AJAX request on dblclick to create an image of the screen where the double-click happened. I am using PHP's imagegrabscreen() function to create the image, but instead of capturing the screen it creates a black image.

        dblclick(function (ev, ui) {
            var response = $.ajax({
                type: "POST",
                url: "grabImage.php",
                data: "name=John&location=Boston&function_name=img",
                complete: function(msg) {
                    var resp = msg.responseText;
                    if (msg && msg.readyState != 4) {
                        alert("Ready State :" + msg.readyState);
                        return;
                    } else {
                        //wb_load();
                        alert("Data Saved: " + resp);
                    }
                }
            });
        });

    GRAB IMAGE FUNCTION

        function img() {
            $im = imagegrabscreen();
            imagepng($im, "C:\myscreenshot.png");
            //echo $im;
            //imagedestroy($im);
            return $im;
            define('imge', $im);
        }

    Read the article

  • Handling over-long UTF-8 sequences

    - by Grant McLean
    I've just been reworking my Encoding::FixLatin Perl module to handle over-long utf8 byte sequences and convert them to the shortest normal form. My question is quite simply "is this a bad idea"? A number of sources (including this RFC) suggest that any over-long utf8 should be treated as an error and rejected. They caution against "naive implementations" and leave me with the impression that these things are inherently unsafe. Since the whole purpose of my module is to clean up messy data files with mixed encodings and convert them to nice clean utf8, this seems like just one more thing I can clean up so the application layer doesn't have to deal with it. My code does not concern itself with any semantic meaning the resulting characters might have; it simply converts them into a normalised form. Am I missing something? Is there a hidden danger I haven't considered?
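
    For concreteness: an over-long sequence spends more bytes than the code point needs, e.g. 0xC0 0xAF decodes to U+002F ('/'), whose shortest form is the single byte 0x2F, which is exactly the case validators are warned about. A small C++ illustration of normalising just the two-byte case, as an illustration of the idea rather than a drop-in for the module; the three- and four-byte cases follow the same pattern with lead bytes 0xE0 and 0xF0:

        #include <cstdio>
        #include <string>

        // Rewrite over-long two-byte UTF-8 sequences (lead byte 0xC0 or 0xC1) to their
        // shortest form; everything else is copied through untouched.  A sketch only --
        // it does not attempt full UTF-8 validation.
        std::string shorten_overlong_2byte(const std::string& in) {
            std::string out;
            for (std::string::size_type i = 0; i < in.size(); ++i) {
                unsigned char b0 = in[i];
                if ((b0 == 0xC0 || b0 == 0xC1) && i + 1 < in.size()) {
                    unsigned char b1 = in[i + 1];
                    if ((b1 & 0xC0) == 0x80) {               // valid continuation byte
                        unsigned cp = ((b0 & 0x1F) << 6) | (b1 & 0x3F);
                        out += (char)cp;                     // cp < 0x80 by construction
                        ++i;
                        continue;
                    }
                }
                out += (char)b0;
            }
            return out;
        }

        int main() {
            std::string s("a");
            s += (char)0xC0; s += (char)0xAF;   // over-long encoding of '/'
            std::printf("%s\n", shorten_overlong_2byte(s).c_str());   // prints "a/"
        }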

    Read the article

  • What happens to date-times and booleans when using DbLinq with SQLite?

    - by DanM
    I've been thinking about using SQLite for my next project, but I'm concerned that it seems to lack proper datetime and bit data types. If I use DbLinq (or some other ORM) to generate C# classes, will the data types of the properties be "dumbed down"? Will date-time data be placed in properties of type string or double? Will boolean data be placed in properties of type int? If yes, what are the implications? I'm imagining a scenario where I need to write a whole second layer of classes with more specific data types and do a bunch of transformations and casts, but maybe it's not so bad. If you have any experience with this or a similar scenario, what are your "lessons learned"?

    Read the article

  • Prototype Library use of !! operator

    - by Rajat
    Here is a snippet from the Prototype JavaScript library:

        Browser: (function(){
            var ua = navigator.userAgent;
            var isOpera = Object.prototype.toString.call(window.opera) == '[object Opera]';
            return {
                IE:           !!window.attachEvent && !isOpera,
                Opera:        isOpera,
                WebKit:       ua.indexOf('AppleWebKit/') > -1,
                Gecko:        ua.indexOf('Gecko') > -1 && ua.indexOf('KHTML') === -1,
                MobileSafari: /Apple.*Mobile/.test(ua)
            }
        })(),

    This is all good, and I understand the objective of creating a Browser object. One thing that caught my eye, and that I haven't been able to figure out, is the use of the double not operator !! in the IE property. If you read through the code you will find it in many other places. I don't understand what the difference is between !!window.attachEvent and just window.attachEvent. Is it just a convention, or is there more to it that's not obvious?

    Read the article

  • Write wave files to memory in Java

    - by Cliff
    I'm trying to figure out why my servlet code creates wave files with improper headers. I use:

        AudioSystem.write(
            new AudioInputStream(
                new ByteArrayInputStream(memoryBytes),
                new AudioFormat(22000, 16, 1, true, false),
                memoryBytes.length
            ),
            AudioFileFormat.Type.WAVE,
            servletOutputStream
        );

    taking a byte array from memory containing raw PCM samples and a servlet output stream that gets returned to the client. The result is a normal wave file, but with zeros in the chunk size fields. Is the API broken? I would think that the size could be filled in using the length passed to the AudioInputStream. But now, after typing this out, I'm thinking it's not making this info available to the outer write() method on AudioSystem. It seems like the AudioSystem.write call needs a size parameter unless it is able to pull the size from the stream... which wouldn't work with an arbitrarily sized stream. Does anyone know how to make this example work?

    Read the article

  • asp.net compare validators to allow comma and dot (both!) as decimal separator

    - by DanC
    I am using a compare validator, which validates that the entered number is a valid double and also validates it against a given value (greater than zero). I am validating money amounts. Because of the location where the app is used, the locale sets the comma as the decimal separator. The problem is that when a user enters the value using the numeric keypad, the number gets written with the dot as the decimal separator and is rejected by the validation. I'd like to have this validation done before triggering a postback (like a CustomValidator would), while accepting both separators. Any ideas? Thanks

    Read the article
