Search Results

Search found 1130 results on 46 pages for 'encode'.


  • To use an API or store a large dataset in a Rails app?

    - by Dave
    Hi all - I am working on a site that has the potential to need a LOT of space. Basically we hope to have every video game ever created stored in a database along with an image of the cover. There are some APIs out there that might be able to help, like GiantBomb's (www.giantbomb.com). We are trying to decide whether to store the data locally (and if so, where to find that comprehensive a list) or make calls to the API on demand. The problem with the latter is likely latency and also downtime. Assuming we want to store it locally, here are the questions: 1) Where can we find this kind of data (yes, I looked on Google, and no, I couldn't find anything :)) 2) What is the most efficient way to encode and store the images? Thanks!
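
    One way to approach question 2, sketched below in Python, is to keep the binary images out of the database entirely and store only a content-addressed file path plus metadata; the directory name and function are made up for illustration, and the same idea carries over to a Rails model that records an image path.

        import hashlib
        import os

        MEDIA_ROOT = "covers"   # hypothetical directory for cover images

        def store_cover(image_bytes):
            """Write the image once under a content hash and return the path to
            record in the database; identical covers are stored only once."""
            digest = hashlib.sha1(image_bytes).hexdigest()
            path = os.path.join(MEDIA_ROOT, digest[:2], digest + ".jpg")
            os.makedirs(os.path.dirname(path), exist_ok=True)
            if not os.path.exists(path):
                with open(path, "wb") as f:
                    f.write(image_bytes)
            return path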

    Read the article

  • Using PHP to create a password system with Chinese characters

    - by WillDonohoe
    Hi guys, I'm having an issue with validating Chinese characters against other Chinese characters. For example, I'm creating a simple password script which gets data from a database and gets the user input through GET. The issue is that, even though the characters look exactly the same when you echo them out, my if statement still thinks they are different. I have tried using the htmlentities() function to encode the characters; the password from the database encodes nicely, giving me a working '& #35441;' (I've put a space in it to stop it from converting to a Chinese character!). The user input value, however, gives me a load of funny characters. The only thing which I believe must be breaking it is that the input is encoded in a different way, and therefore PHP thinks they are two completely different strings. Does anybody have any ideas? Thanks in advance, Will
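
    The usual cause of "looks identical but compares unequal" is that the two strings reach the comparison in different byte encodings or different Unicode normalization forms. A minimal illustration of the fix in Python is below (in PHP the equivalent tools are mb_convert_encoding() and the intl extension's Normalizer::normalize()); the byte string shown is simply the UTF-8 encoding of the character from the question.

        import unicodedata

        def canonical(value, source_encoding="utf-8"):
            """Decode raw bytes to Unicode and apply NFC normalization so that
            visually identical strings compare equal."""
            if isinstance(value, bytes):
                value = value.decode(source_encoding)
            return unicodedata.normalize("NFC", value)

        stored_password = "\u8a71"        # value read from the database (&#35441;)
        submitted = b"\xe8\xa9\xb1"       # the same character arriving as UTF-8 bytes via GET

        print(canonical(stored_password) == canonical(submitted))   # True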

    Read the article

  • Trying to send XML via EMail and the XML includes a byte[]

    - by barbary
    Hello all, I want to send an email that has a machine-readable part you cut and paste into an ASP.NET page to get the information back. I have stored all the information in an object and then used an XmlSerializer to create some XML. It all worked fine until I added some images as byte[] to the object. If I dump the resulting string to disk I can recreate the object fine, but after it appears in the email client and I try to cut and paste it, it never works. Clearly there are non-standard characters coming out that email clients don't like. Is there some encoding I could apply to my XML that would make it display correctly in an email client? Then I could cut, paste, decode and deserialize to get my object back. An example of how to encode the string in C# would be great.
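
    A common way to make such a payload survive mail clients is to Base64-encode the whole serialized string so that only printable ASCII travels in the message body; in C# that would be Convert.ToBase64String()/Convert.FromBase64String() applied to the UTF-8 bytes of the XML. The round trip, sketched here in Python rather than C#, looks like this (the XML content is a stand-in):

        import base64

        xml_payload = "<Order><Id>42</Id><Image>AAECAwQF...</Image></Order>"   # stand-in XML

        # Encode: the result contains only printable ASCII, which mail clients leave alone.
        wire_text = base64.b64encode(xml_payload.encode("utf-8")).decode("ascii")

        # On the receiving page: decode, then hand the XML to the deserializer.
        restored_xml = base64.b64decode(wire_text).decode("utf-8")
        assert restored_xml == xml_payload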

    Read the article

  • How can I represent URL (possibly including query string) as a filename in Java without obscuring the original URL?

    - by jerluc
    Is there any real way to represent a URL (which more than likely will also have a query string) as a filename in Java without obscuring the original URL completely? My first approach was to simply escape invalid characters with arbitrary replacements (for example, replacing "/" with "_", etc.). The problem with that, as the underscore example shows, is that a URL such as "app/my_app" would become "app_my_app", making it impossible to recover the original URL. I have also attempted to encode all the special characters, but again, seeing crazy %3e %20 etc. is really not clear. Thank you for any suggestions.
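
    Percent-encoding has the advantage of being fully reversible, so even if the escaped form is ugly the original URL is never lost. The round trip sketched below uses Python's urllib.parse for brevity; java.net.URLEncoder and URLDecoder give the same behaviour in Java.

        from urllib.parse import quote, unquote

        url = "app/my_app?user=dave&tag=fun stuff"

        # Escape everything that is not unconditionally safe in a filename.
        filename = quote(url, safe="")
        # -> 'app%2Fmy_app%3Fuser%3Ddave%26tag%3Dfun%20stuff'

        # The mapping is unambiguous, so the original URL can always be recovered.
        assert unquote(filename) == url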

    Read the article

  • How can I write fast colored output to Console?

    - by Statement
    Hello world! I want to know if there is a faster way to output text to the console window in C#/.NET than the simple Write, BackgroundColor and ForegroundColor methods and properties. I learned that each cell has a background color and a foreground color, and I would like to cache/buffer/write faster than the mentioned methods allow. Maybe the Out buffer could help, but I don't know how to encode the colors into the stream, if that is where the color data resides. This is for a retro-style, text-based game I want to implement, where I make use of the standard colors and ASCII characters for laying out the game. Please help :)

    Read the article

  • Communicate with a process in UTF-8 on a cp1252 console

    - by Mapad
    I need to control a program by sending commands in UTF-8 encoding to its standard input. For this I run the program using subprocess.Popen(): proc = Popen("myexecutable.exe", shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE) proc.stdin.write(u'ééé'.encode('utf_8')) If I run this from a Cygwin UTF-8 console, it works. If I run it from a Windows console (encoding = 'cp1252') it doesn't work. Is there a way to make this work without having to install a Cygwin UTF-8 console on each computer I want to run it from? (NB: I don't need to output anything to the console.)
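
    The pipe itself carries raw bytes, so the console codepage should not matter as long as the bytes written really are UTF-8; one thing worth ruling out is how the u'ééé' literal is interpreted when the script is started from a cp1252 console. A hedged sketch is below - the executable name is taken from the question, the source coding declaration and the dropped shell=True are the only changes, and neither is guaranteed to be the actual culprit.

        # -*- coding: utf-8 -*-
        # The declaration above pins down how the u'...' literal in this source
        # file is decoded, independently of the console the script runs from.
        from subprocess import Popen, PIPE

        proc = Popen(["myexecutable.exe"],      # no shell=True: bypass cmd.exe entirely
                     stdin=PIPE, stdout=PIPE, stderr=PIPE)

        command = u'ééé\n'
        proc.stdin.write(command.encode('utf-8'))   # explicit UTF-8 bytes onto the pipe
        proc.stdin.flush()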

    Read the article

  • How to do hex8 encoding in C?

    - by Tech163
    I am trying to encode a string in hex8 using c. The script I have right now is: int hex8 (char str) { str = printf("%x", str); if(strlen(str) == 1) { return printf("%s", "0", str); } else { return str; } } In this function, I will need to add a 0 ahead of the string if the length is less than 1. I don't know why I'm getting: passing argument 1 of 'strlen' makes pointer from integer without a cast Does anyone know why?

    Read the article

  • MySQLdb Python escaping: ? or %s?

    - by asldkncvas
    Dear everyone, I am currently using MySQLdb. What is the correct way to escape strings in MySQLdb arguments? Note that E = lambda x: x.encode('utf-8') and my connection is set with charset='utf8'. These are the errors I am getting for these arguments: w1, w2 = u'??', u'??' 1) self.cur.execute("SELECT dist FROM distance WHERE w1=? AND w2=?", (E(w1), E(w2))) ret = self.cur.execute("SELECT dist FROM distance WHERE w1=? AND w2=?", (E(w1), E(w2)) ) File "build/bdist.linux-i686/egg/MySQLdb/cursors.py", line 158, in execute TypeError: not all arguments converted during string formatting 2) self.cur.execute("SELECT dist FROM distance WHERE w1=%s AND w2=%s", (E(w1), E(w2))) This works fine, but when w1 or w2 has \ inside, the escaping obviously fails. I personally know that %s is not a good method to pass in arguments due to injection attacks etc.
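
    MySQLdb's paramstyle is 'format', so %s is the placeholder the driver expects; the ?-style placeholders belong to other DB-API drivers, which is why version 1 raises the TypeError. As long as the values are passed as a separate tuple (rather than interpolated with the % operator), the driver escapes backslashes and quotes itself, so the injection concern does not apply. A short sketch using the table and column names from the question:

        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="me", passwd="secret",
                               db="words", charset="utf8", use_unicode=True)
        cur = conn.cursor()

        w1, w2 = u"foo\\bar", u"baz"    # backslashes are escaped by the driver

        # %s is MySQLdb's placeholder here, not Python string formatting:
        # the values travel as a separate tuple and are quoted for you.
        cur.execute("SELECT dist FROM distance WHERE w1=%s AND w2=%s", (w1, w2))
        row = cur.fetchone()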

    Read the article

  • WCF WS-Security and WSE Nonce Authentication

    - by Rick Strahl
    WCF makes it fairly easy to access WS-* Web Services, except when you run into a service format that it doesn't support. Even then WCF provides a huge amount of flexibility to make the service clients work, however the proper interfaces to make that happen are not easy to discover and are for the most part undocumented unless you're lucky enough to run into a blog, forum or StackOverflow post on the matter. This is definitely true for the Password Nonce as part of the WS-Security/WSE protocol, which is not natively supported in WCF. Specifically I had a need to create a WCF message on the client that includes a WS-Security header that looks like this from their spec document:<soapenv:Header> <wsse:Security soapenv:mustUnderstand="1" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> <wsse:UsernameToken wsu:Id="UsernameToken-8" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"> <wsse:Username>TeStUsErNaMe1</wsse:Username> <wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText" >TeStPaSsWoRd1</wsse:Password> <wsse:Nonce EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary" >f8nUe3YupTU5ISdCy3X9Gg==</wsse:Nonce> <wsu:Created>2011-05-04T19:01:40.981Z</wsu:Created> </wsse:UsernameToken> </wsse:Security> </soapenv:Header> Specifically, the Nonce and Created keys are what WCF doesn't create or have a built-in formatting for. Why is there a nonce? My first thought here was WTF? The username and password are there in clear text, what does the Nonce accomplish? The Nonce and Created keys are part of the WSE Security specification and are meant to allow the server to detect and prevent replay attacks. The hashed nonce should be unique per request; the server can store it and check for it before running another request, thus ensuring that a request is not replayed with exactly the same values. Basic ServiceUtl Import - not much Luck The first thing I did was to simply import the service as a Service Reference. The Add Service Reference import automatically detects that WS-Security is required and appropriately adds WS-Security to the basicHttpBinding in the config file:<?xml version="1.0" encoding="utf-8" ?> <configuration> <system.serviceModel> <bindings> <basicHttpBinding> <binding name="RealTimeOnlineSoapBinding"> <security mode="Transport" /> </binding> <binding name="RealTimeOnlineSoapBinding1" /> </basicHttpBinding> </bindings> <client> <endpoint address="https://notarealurl.com:443/services/RealTimeOnline" binding="basicHttpBinding" bindingConfiguration="RealTimeOnlineSoapBinding" contract="RealTimeOnline.RealTimeOnline" name="RealTimeOnline" /> </client> </system.serviceModel> </configuration> If I run this as-is using code like this:var client = new RealTimeOnlineClient(); client.ClientCredentials.UserName.UserName = "TheUsername"; client.ClientCredentials.UserName.Password = "ThePassword"; … I get nothing in terms of WS-Security headers. The request is sent, but the binding expects transport-level security to be applied, rather than message-level security.
To fix this so that a WS-Security message header is sent the security mode can be changed to: <security mode="TransportWithMessageCredential" /> Now if I re-run I at least get a WS-Security header which looks like this:<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"> <s:Header> <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> <u:Timestamp u:Id="_0"> <u:Created>2012-11-24T02:55:18.011Z</u:Created> <u:Expires>2012-11-24T03:00:18.011Z</u:Expires> </u:Timestamp> <o:UsernameToken u:Id="uuid-18c215d4-1106-40a5-8dd1-c81fdddf19d3-1"> <o:Username>TheUserName</o:Username> <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText" >ThePassword</o:Password> </o:UsernameToken> </o:Security> </s:Header> Closer! Now the WS-Security header is there along with a timestamp field (which might not be accepted by some WS-Security expecting services), but there's no Nonce or created timestamp as required by my original service. Using a CustomBinding instead My next try was to go with a CustomBinding instead of basicHttpBinding as it allows a bit more control over the protocol and transport configurations for the binding. Specifically I can explicitly specify the message protocol(s) used. Using configuration file settings here's what the config file looks like:<?xml version="1.0"?> <configuration> <system.serviceModel> <bindings> <customBinding> <binding name="CustomSoapBinding"> <security includeTimestamp="false" authenticationMode="UserNameOverTransport" defaultAlgorithmSuite="Basic256" requireDerivedKeys="false" messageSecurityVersion="WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10"> </security> <textMessageEncoding messageVersion="Soap11"></textMessageEncoding> <httpsTransport maxReceivedMessageSize="2000000000"/> </binding> </customBinding> </bindings> <client> <endpoint address="https://notrealurl.com:443/services/RealTimeOnline" binding="customBinding" bindingConfiguration="CustomSoapBinding" contract="RealTimeOnline.RealTimeOnline" name="RealTimeOnline" /> </client> </system.serviceModel> <startup> <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/> </startup> </configuration> This ends up creating a cleaner header that's missing the timestamp field which can cause some services problems. The WS-Security header output generated with the above looks like this:<s:Header> <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> <o:UsernameToken u:Id="uuid-291622ca-4c11-460f-9886-ac1c78813b24-1"> <o:Username>TheUsername</o:Username> <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText" >ThePassword</o:Password> </o:UsernameToken> </o:Security> </s:Header> This is closer as it includes only the username and password. The key here is the protocol for WS-Security:messageSecurityVersion="WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10" which explicitly specifies the protocol version. There are several variants of this specification but none of them seem to support the nonce unfortunately. 
This protocol does allow the Nonce and Created timestamp to be omitted (which effectively makes those keys optional). With some services I tried that requested a Nonce, just using this protocol actually worked where the default basicHttpBinding failed to connect, so this is a possible solution for access to some services. Unfortunately for my target service that was not an option. The nonce has to be there. Creating Custom ClientCredentials As it turns out WCF doesn't have support for the Digest Nonce as part of WS-Security, and so as far as I can tell there's no way to do it just with configuration settings. I did a bunch of research trying to find workarounds, and I did find a couple of entries on StackOverflow as well as on the MSDN forums. However, none of these are particularly clear and I ended up using bits and pieces of several of them to arrive at a working solution in the end. http://stackoverflow.com/questions/896901/wcf-adding-nonce-to-usernametoken http://social.msdn.microsoft.com/Forums/en-US/wcf/thread/4df3354f-0627-42d9-b5fb-6e880b60f8ee The latter forum message is the more useful of the two (the last message on the thread in particular) and it has most of the information required to make this work. But it took some experimentation for me to get this right, so I'll recount the process here maybe a bit more comprehensively. In order for this to work a number of classes have to be overridden: ClientCredentials ClientCredentialsSecurityTokenManager WSSecurityTokenizer The idea is that we need to create a custom ClientCredentials class to hold the custom properties so they can be set from the UI or via configuration settings. The TokenManager and Tokenizer are mainly required to allow the custom credentials class to flow through the WCF pipeline and eventually provide custom serialization.
Here are the three classes required and their full implementations:public class CustomCredentials : ClientCredentials { public CustomCredentials() { } protected CustomCredentials(CustomCredentials cc) : base(cc) { } public override System.IdentityModel.Selectors.SecurityTokenManager CreateSecurityTokenManager() { return new CustomSecurityTokenManager(this); } protected override ClientCredentials CloneCore() { return new CustomCredentials(this); } } public class CustomSecurityTokenManager : ClientCredentialsSecurityTokenManager { public CustomSecurityTokenManager(CustomCredentials cred) : base(cred) { } public override System.IdentityModel.Selectors.SecurityTokenSerializer CreateSecurityTokenSerializer(System.IdentityModel.Selectors.SecurityTokenVersion version) { return new CustomTokenSerializer(System.ServiceModel.Security.SecurityVersion.WSSecurity11); } } public class CustomTokenSerializer : WSSecurityTokenSerializer { public CustomTokenSerializer(SecurityVersion sv) : base(sv) { } protected override void WriteTokenCore(System.Xml.XmlWriter writer, System.IdentityModel.Tokens.SecurityToken token) { UserNameSecurityToken userToken = token as UserNameSecurityToken; string tokennamespace = "o"; DateTime created = DateTime.Now; string createdStr = created.ToString("yyyy-MM-ddThh:mm:ss.fffZ"); // unique Nonce value - encode with SHA-1 for 'randomness' // in theory the nonce could just be the GUID by itself string phrase = Guid.NewGuid().ToString(); var nonce = GetSHA1String(phrase); // in this case password is plain text // for digest mode password needs to be encoded as: // PasswordAsDigest = Base64(SHA-1(Nonce + Created + Password)) // and profile needs to change to //string password = GetSHA1String(nonce + createdStr + userToken.Password); string password = userToken.Password; writer.WriteRaw(string.Format( "<{0}:UsernameToken u:Id=\"" + token.Id + "\" xmlns:u=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\">" + "<{0}:Username>" + userToken.UserName + "</{0}:Username>" + "<{0}:Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText\">" + password + "</{0}:Password>" + "<{0}:Nonce EncodingType=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary\">" + nonce + "</{0}:Nonce>" + "<u:Created>" + createdStr + "</u:Created></{0}:UsernameToken>", tokennamespace)); } protected string GetSHA1String(string phrase) { SHA1CryptoServiceProvider sha1Hasher = new SHA1CryptoServiceProvider(); byte[] hashedDataBytes = sha1Hasher.ComputeHash(Encoding.UTF8.GetBytes(phrase)); return Convert.ToBase64String(hashedDataBytes); } } Realistically only the CustomTokenSerializer has any significant code in. The code there deals with actually serializing the custom credentials using low level XML semantics by writing output into an XML writer. I can't take credit for this code - most of the code comes from the MSDN forum post mentioned earlier - I made a few adjustments to simplify the nonce generation and also added some notes to allow for PasswordDigest generation. Per spec the nonce is nothing more than a unique value that's supposed to be 'random'. I'm thinking that this value can be any string that's unique and a GUID on its own probably would have sufficed. Comments on other posts that GUIDs can be potentially guessed are highly exaggerated to say the least IMHO. 
To satisfy even that aspect though I added the SHA1 encryption and binary decoding to give a more random value that would be impossible to 'guess'. The original example from the forum post used another level of encoding and decoding to string in between - but that really didn't accomplish anything but extra overhead. The header output generated from this looks like this:<s:Header> <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> <o:UsernameToken u:Id="uuid-f43d8b0d-0ebb-482e-998d-f544401a3c91-1" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"> <o:Username>TheUsername</o:Username> <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">ThePassword</o:Password> <o:Nonce EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary" >PjVE24TC6HtdAnsf3U9c5WMsECY=</o:Nonce> <u:Created>2012-11-23T07:10:04.670Z</u:Created> </o:UsernameToken> </o:Security> </s:Header> which is exactly as it should be. Password Digest? In my case the password is passed in plain text over an SSL connection, so there's no digest required so I was done with the code above. Since I don't have a service handy that requires a password digest,  I had no way of testing the code for the digest implementation, but here is how this is likely to work. If you need to pass a digest encoded password things are a little bit trickier. The password type namespace needs to change to: http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#Digest and then the password value needs to be encoded. The format for password digest encoding is this: Base64(SHA-1(Nonce + Created + Password)) and it can be handled in the code above with this code (that's commented in the snippet above): string password = GetSHA1String(nonce + createdStr + userToken.Password); The entire WriteTokenCore method for digest code looks like this:protected override void WriteTokenCore(System.Xml.XmlWriter writer, System.IdentityModel.Tokens.SecurityToken token) { UserNameSecurityToken userToken = token as UserNameSecurityToken; string tokennamespace = "o"; DateTime created = DateTime.Now; string createdStr = created.ToString("yyyy-MM-ddThh:mm:ss.fffZ"); // unique Nonce value - encode with SHA-1 for 'randomness' // in theory the nonce could just be the GUID by itself string phrase = Guid.NewGuid().ToString(); var nonce = GetSHA1String(phrase); string password = GetSHA1String(nonce + createdStr + userToken.Password); writer.WriteRaw(string.Format( "<{0}:UsernameToken u:Id=\"" + token.Id + "\" xmlns:u=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\">" + "<{0}:Username>" + userToken.UserName + "</{0}:Username>" + "<{0}:Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#Digest\">" + password + "</{0}:Password>" + "<{0}:Nonce EncodingType=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary\">" + nonce + "</{0}:Nonce>" + "<u:Created>" + createdStr + "</u:Created></{0}:UsernameToken>", tokennamespace)); } I had no service to connect to to try out Digest auth - if you end up needing it and get it to work please drop a comment… How to use the custom Credentials The easiest way to use the custom credentials is to create the client in code. 
Here's a factory method I use to create an instance of my service client:  public static RealTimeOnlineClient CreateRealTimeOnlineProxy(string url, string username, string password) { if (string.IsNullOrEmpty(url)) url = "https://notrealurl.com:443/cows/services/RealTimeOnline"; CustomBinding binding = new CustomBinding(); var security = TransportSecurityBindingElement.CreateUserNameOverTransportBindingElement(); security.IncludeTimestamp = false; security.DefaultAlgorithmSuite = SecurityAlgorithmSuite.Basic256; security.MessageSecurityVersion = MessageSecurityVersion.WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10; var encoding = new TextMessageEncodingBindingElement(); encoding.MessageVersion = MessageVersion.Soap11; var transport = new HttpsTransportBindingElement(); transport.MaxReceivedMessageSize = 20000000; // 20 megs binding.Elements.Add(security); binding.Elements.Add(encoding); binding.Elements.Add(transport); RealTimeOnlineClient client = new RealTimeOnlineClient(binding, new EndpointAddress(url)); // to use full client credential with Nonce uncomment this code: // it looks like this might not be required - the service seems to work without it client.ChannelFactory.Endpoint.Behaviors.Remove<System.ServiceModel.Description.ClientCredentials>(); client.ChannelFactory.Endpoint.Behaviors.Add(new CustomCredentials()); client.ClientCredentials.UserName.UserName = username; client.ClientCredentials.UserName.Password = password; return client; } This returns a service client that's ready to call other service methods. The key item in this code is the ChannelFactory endpoint behavior modification that first removes the original ClientCredentials and then adds the new one. The ClientCredentials property on the client is read-only and this is the way it has to be added.   Summary It's a bummer that WCF doesn't support WSE Security authentication with nonce values out of the box. From reading the comments in posts/articles while I was trying to find a solution, I found that this feature was omitted by design as this protocol is considered insecure. While I agree that plain text passwords are rarely a good idea even if they go over a secured SSL connection as WSE Security does, there are unfortunately quite a few services (mostly Java services, I suspect) that use this protocol. I've run into this twice now and trying to find a solution online I can see that this is not an isolated problem - many others seem to have struggled with this. It seems there are about a dozen questions about this on StackOverflow all with varying incomplete answers. Hopefully this post provides a little more coherent content in one place. Again I marvel at WCF and the breadth of support for protocol features it has in a single tool. And even when it can't handle something there are ways to get it working via extensibility. But at the same time I marvel at how freaking difficult it is to arrive at these solutions. I mean there's no way I could have ever figured this out on my own. It takes somebody working on the WCF team or at least being very, very intricately involved in the innards of WCF to figure out the interconnection of the various objects to do this from scratch. Luckily this is an older problem that has been discussed extensively online and I was able to cobble together a solution from the online content.
I'm glad it worked out that way, but it feels dirty and incomplete in that there's a whole learning path that was omitted to get here… Man, am I glad I'm not dealing with SOAP services much anymore. REST service security - even when using some sort of federation - is a piece of cake by comparison :-) I'm sure once standards bodies get involved we'll be right back in security standard hell… © Rick Strahl, West Wind Technologies, 2005-2012. Posted in WCF, Web Services

    Read the article

  • USB packets - receive wrong data

    - by regorianer
    i have a little python script which shows me the packets of an enocean device and does some events depending on the packet type. unfortunately it doesn't work because i'm getting wrong packets. Parts of the python script (used pySerial): Blockquote ser = serial.Serial('/dev/ttyUSB1',57600,bytesize = serial.EIGHTBITS,timeout = 1, parity = serial.PARITY_NONE , rtscts = 0) print 'clearing buffer' s = ser.read(10000) print 'start read' while 1: s = ser.read(1) for character in s: sys.stdout.write(" %s" % character.encode('hex')) print 'end' ser.close() output baudrate 57600: e0 e0 00 e0 00 e0 e0 e0 e0 e0 00 e0 e0 00 00 00 00 00 00 00 e0 e0 e0 00 00 00 00 e0 e0 e0 00 00 e0 e0 e0 e0 e0 00 e0 00 e0 e0 e0 e0 e0 00 e0 e0 00 00 00 00 00 00 e0 e0 e0 00 00 00 00 e0 e0 e0 00 00 e0 e0 e0 output baudrate 9600: a5 5a 0b 05 10 00 00 00 00 15 c4 56 20 6f a5 5a 0b 05 00 00 00 00 00 15 c4 56 20 5f linux terminal baudrate 57600: $stty -F /dev/ttyUSB1 57600 $stty < /dev/ttyUSB1 speed 57600 baud; line = 0; eof = ^A; min = 0; time = 0; -brkint -icrnl -imaxbel -opost -onlcr -isig -icanon -iexten -echo -echoe -echok -echoctl -echoke $while (true) do cat -A /dev/ttyUSB1 ; done myfile $hexdump -C myfile 00000000 4d 2d 60 4d 2d 60 5e 40 4d 2d 60 5e 40 4d 2d 60 |M-M-^@M-^@M-| 00000010 4d 2d 60 4d 2d 60 4d 2d 60 4d 2d 60 5e 40 4d 2d |M-M-M-M-^@M-| 00000020 60 4d 2d 60 5e 40 5e 40 5e 40 5e 40 5e 40 5e 40 |M-^@^@^@^@^@^@| 00000030 5e 40 4d 2d 60 4d 2d 60 4d 2d 60 5e 40 5e 40 5e |^@M-M-M-`^@^@^| 00000040 40 5e 40 4d 2d 60 4d 2d 60 4d 2d 60 |@^@M-M-M-`| 0000004c linux terminal baudrate 9600: $hexdump -C myfile2 00000000 5e 40 5e 55 4d 2d 44 56 30 4d 2d 3f 5e 40 5e 40 |^@^UM-DV0M-?^@^@| 00000010 5e 55 4d 2d 44 56 20 5f |^UM-DV _| 00000018 the specification says: 0x55 sync byte 1st 0xNNNN data length bytes (2 bytes) 0x07 opt length byte 0x01 type byte CRC, data, opt data und nochmal CRC but I'm not getting this packet structure. The output of the python script differs from the one I get via the terminal. I also wrote the python part with C, but the output is the same as with python As the USB receiver a BSC-BoR USB Receiver/Sender is used The EnOcean device is a simple button
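
    Two things may be worth checking here. First, the clean a5 5a frames captured at 9600 baud look like the older EnOcean serial protocol (ESP2), so it is worth confirming which protocol version and baud rate the receiver actually speaks before debugging further. Second, when reading byte-by-byte it helps to resynchronize on the 0x55 sync byte and read whole frames; the sketch below follows only the header layout quoted from the spec (sync, 2-byte data length, optional-data length, type, header CRC, then data, optional data and a final CRC) and leaves CRC verification and timeout handling out.

        import serial

        def read_frame(port):
            """Resynchronize on the 0x55 sync byte and return one raw frame
            (header bytes, payload bytes), per the layout in the spec excerpt."""
            while port.read(1) != b'\x55':          # skip noise until the sync byte
                pass
            header = port.read(5)                   # len_hi, len_lo, opt_len, type, header CRC
            data_len = (header[0] << 8) | header[1]
            opt_len = header[2]
            payload = port.read(data_len + opt_len + 1)   # data + optional data + data CRC
            return header, payload

        ser = serial.Serial('/dev/ttyUSB1', 57600, timeout=1)
        print(read_frame(ser))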

    Read the article

  • root issues in softwarecenter, synaptic and update manager

    - by user188977
    i have a notebook samsung ativ 2 and ubuntu 12.04 precise, cinnamon desktop. after logging in today my update manager, synaptic and ubuntu softwarecenter stopped working. synaptic i can only launch from terminal the others from panel.when choosing to update, nothing happens. same thing when trying to install programms from syn. or softw.center.when launching softwarec. from terminal i get: marcus@ddddddddd:~$ software-center 2013-11-10 22:30:46,206 - softwarecenter.ui.gtk3.app - INFO - setting up proxy 'None' 2013-11-10 22:30:46,217 - softwarecenter.db.database - INFO - open() database: path=None use_axi=True use_agent=True (software-center:4772): Gtk-WARNING **: Theme parsing error: softwarecenter.css:34:20: Not using units is deprecated. Assuming 'px'. (software-center:4772): Gtk-WARNING **: Theme parsing error: softwarecenter.css:34:22: Not using units is deprecated. Assuming 'px'. (software-center:4772): Gtk-WARNING **: Theme parsing error: softwarecenter.css:56:20: Not using units is deprecated. Assuming 'px'. (software-center:4772): Gtk-WARNING **: Theme parsing error: softwarecenter.css:56:22: Not using units is deprecated. Assuming 'px'. (software-center:4772): Gtk-WARNING **: Theme parsing error: softwarecenter.css:60:20: Not using units is deprecated. Assuming 'px'. (software-center:4772): Gtk-WARNING **: Theme parsing error: softwarecenter.css:60:22: Not using units is deprecated. Assuming 'px'. 2013-11-10 22:30:46,977 - softwarecenter.backend.reviews - WARNING - Could not get usefulness from server, no username in config file 2013-11-10 22:30:47,320 - softwarecenter.ui.gtk3.app - INFO - show_available_packages: search_text is '', app is None. 2013-11-10 22:30:48,057 - softwarecenter.db.pkginfo_impl.aptcache - INFO - aptcache.open() 2013-11-10 22:31:00,646 - softwarecenter.fixme - WARNING - logs to the root logger: '('/usr/share/software-center/softwarecenter/utils.py', 201, 'get_title_from_html')' 2013-11-10 22:31:00,645 - root - WARNING - failed to parse: '<div style="background-color: #161513; width:1680px; height:200px;">  <div style="background: url('/site_media/exhibits/2013/09/AAMFP_Leaderboard_700x200_1.jpg') top left no-repeat; width:700px; height:200px;"></div> </div>' ('ascii' codec can't encode character u'\xa0' in position 70: ordinal not in range(128)) 2013-11-10 22:31:02,268 - softwarecenter.db.update - INFO - skipping region restricted app: 'Comentarios Web' (not whitelisted) 2013-11-10 22:31:02,769 - softwarecenter.db.update - INFO - skipping region restricted app: 'reEarCandy' (not whitelisted) 2013-11-10 22:31:04,821 - softwarecenter.db.update - INFO - skipping region restricted app: 'Flaggame' (not whitelisted) 2013-11-10 22:31:05,622 - softwarecenter.db.update - INFO - skipping region restricted app: 'Bulleti d'esquerra de Calonge i Sant Antoni ' (not whitelisted) 2013-11-10 22:31:08,352 - softwarecenter.ui.gtk3.app - INFO - software-center-agent finished with status 0 2013-11-10 22:31:08,353 - softwarecenter.db.database - INFO - reopen() database 2013-11-10 22:31:08,353 - softwarecenter.db.database - INFO - open() database: path=None use_axi=True use_agent=True 2013-11-10 22:33:32,319 - softwarecenter.backend - WARNING - _on_trans_error: org.freedesktop.PolicyKit.Error.Failed: ('system-bus-name', {'name': ':1.72'}): org.debian.apt.install-or-remove-packages 2013-11-10 22:36:01,818 - softwarecenter.backend - WARNING - daemon dies, ignoring: <AptTransaction object at 0x48e4b40 (aptdaemon+client+AptTransaction at 0x645aaa0)> exit-failed 2013-11-10 
22:36:01,820 - softwarecenter.db.pkginfo_impl.aptcache - INFO - aptcache.open()

    Read the article

  • Interaction of a GUI-based App and Windows Service

    - by psubsee2003
    I am working on a personal project that will be designed to help manage my media library, specifically recordings created by Windows Media Center. So I am going to have the following parts to this application: A Windows Service that monitors the recording folder. Once a new recording is completed that meets specific criteria, it will call several 3rd party CLI applications to remove the commercials and re-encode the video into a more hard-drive friendly format. A controller GUI to be able to modify settings of the service, specifically add new shows to watch for, and to modify parameters for the CLI applications. A standalone (GUI-based) desktop application that can perform many of the same functions as the Windows Service, except manually on specific files instead of automatically based on specific criteria. (It should be mentioned that I have limited experience with an application of this complexity, and I have absolutely zero experience with Windows Services.) Since the 1st and 3rd bullets share similar functionality, my design plan is to pull the common functionality into a separate library shared by both applications, but these 2 components do not need to interact otherwise. The 2nd and 3rd bullets seem to share some common functionality: both will have a GUI, both will have to help define similar parameters (one to send to the service and the other to send directly to the CLI applications), so I can see some advantage to combining them into the same application. On the other hand, the standalone application (bullet #3) really does not need to interact with the service at all, except for possibly sharing a few common default parameters that can easily be put into an XML file in a common location, so it seems to make more sense to just keep everything separate. The controller GUI (2nd bullet) is where I am stuck at the moment. Do I just roll this functionality (allowing the user to interact with the service to update settings and criteria) into the standalone application? Or would it be a better design decision to keep them separate? Specifically, I'm worried about adding the complexity of communicating with the Windows Service to the standalone application when it doesn't need it. Is WCF the right approach to allow the controller GUI to interact with the Windows Service? Or is there a better alternative? At the moment, I don't envision a need for a significant amount of interaction, maybe just adding a new task once in a while and occasionally tweaking a parameter, but when something is changed, I do expect the Windows Service to immediately use the new settings.

    Read the article

  • Are separate business objects needed when persistent data can be stored in a usable format?

    - by Kylotan
    I have a system where data is stored in a persistent store and read by a server application. Some of this data is only ever seen by the server, but some of it is passed through unaltered to clients. So, there is a big temptation to persist data - whether whole rows/documents or individual fields/sub-documents - in the exact form that the client can use (eg. JSON), as this removes various layers of boilerplate, whether in the form of procedural SQL, an ORM, or any proxy structure which exists just to hold the values before having to re-encode them into a client-suitable form. This form can usually be used on the server too, though business logic may have to live outside of the object, On the other hand, this approach ends up leaking implementation details everywhere. 9 times out of 10 I'm happy just to read a JSON structure out of the DB and send it to the client, but 1 in every 10 times I have to know the details of that implicit structure (and be able to refactor access to it if the stored data ever changes). And this makes me think that maybe I should be pulling this data into separate business objects, so that business logic doesn't have to change when the data schema does. (Though you could argue this just moves the problem rather than solves it.) There is a complicating factor in that our data schema is constantly changing rapidly, to the point where we dropped our previous ORM/RDBMS system in favour of MongoDB and an implicit schema which was much easier to work with. So far I've not decided whether the rapid schema changes make me wish for separate business objects (so that server-side calculations need less refactoring, since all changes are restricted to the persistence layer) or for no separate business objects (because every change to the schema requires the business objects to change to stay in sync, even if the new sub-object or field is never used on the server except to pass verbatim to a client). So my question is whether it is sensible to store objects in the form they are usually going to be used, or if it's better to copy them into intermediate business objects to insulate both sides from each other (even when that isn't strictly necessary)? And I'd like to hear from anybody else who has had experience of a similar situation, perhaps choosing to persist XML or JSON instead of having an explicit schema which has to be assembled into a client format each time.

    Read the article

  • ASP.Net MVC2 CustomModelBinder not working... Changed from MVC1

    - by Ian
    (My apologies if this seems verbose - trying to provide all relevant code) I've just upgraded to VS2010, and am now having trouble trying to get a new CustomModelBinder working. In MVC1 I would have written something like public class AwardModelBinder: DefaultModelBinder { : public override object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext) { // do the base binding to bind all simple types Award award = base.BindModel(controllerContext, bindingContext) as Award; // Get complex values from ValueProvider dictionary award.EffectiveFrom = Convert.ToDateTime(bindingContext.ValueProvider["Model.EffectiveFrom"].AttemptedValue.ToString()); string sEffectiveTo = bindingContext.ValueProvider["Model.EffectiveTo"].AttemptedValue.ToString(); if (sEffectiveTo.Length > 0) award.EffectiveTo = Convert.ToDateTime(bindingContext.ValueProvider["Model.EffectiveTo"].AttemptedValue.ToString()); // etc return award; } } Of course I'd register the custom binder in Global.asax.cs: protected void Application_Start() { RegisterRoutes(RouteTable.Routes); // register custom model binders ModelBinders.Binders.Add(typeof(Voucher), new VoucherModelBinder(DaoFactory.UserInstance("EH1303"))); ModelBinders.Binders.Add(typeof(AwardCriterion), new AwardCriterionModelBinder(DaoFactory.UserInstance("EH1303"), new VOPSDaoFactory())); ModelBinders.Binders.Add(typeof(SelectedVoucher), new SelectedVoucherModelBinder(DaoFactory.UserInstance("IT0706B"))); ModelBinders.Binders.Add(typeof(Award), new AwardModelBinder(DaoFactory.UserInstance("IT0706B"))); } Now, in MVC2, I'm finding that my call to base.BindModel returns an object where everything is null, and I simply don't want to have to iterate all the form fields surfaced by the new ValueProvider.GetValue() function. Google finds no matches for this error, so I assume I'm doing something wrong. Here's my actual code: My domain object (infer what you like about the encapsulated child objects - I know I'll need custom binders for those too, but the three "simple" fields (ie. base types) Id, TradingName and BusinessIncorporated are also coming back null): public class Customer { /// <summary> /// Initializes a new instance of the Customer class. /// </summary> public Customer() { Applicant = new Person(); Contact = new Person(); BusinessContact = new ContactDetails(); BankAccount = new BankAccount(); } /// <summary> /// Gets or sets the unique customer identifier. /// </summary> public int Id { get; set; } /// <summary> /// Gets or sets the applicant details. /// </summary> public Person Applicant { get; set; } /// <summary> /// Gets or sets the customer's secondary contact. /// </summary> public Person Contact { get; set; } /// <summary> /// Gets or sets the trading name of the business. /// </summary> [Required(ErrorMessage = "Please enter your Business or Trading Name")] [StringLength(50, ErrorMessage = "A maximum of 50 characters is permitted")] public string TradingName { get; set; } /// <summary> /// Gets or sets the date the customer's business began trading. /// </summary> [Required(ErrorMessage = "You must supply the date your business started trading")] [DateRange("01/01/1900", "01/01/2020", ErrorMessage = "This date must be between {0} and {1}")] public DateTime BusinessIncorporated { get; set; } /// <summary> /// Gets or sets the contact details for the customer's business. /// </summary> public ContactDetails BusinessContact { get; set; } /// <summary> /// Gets or sets the customer's bank account details. 
/// </summary> public BankAccount BankAccount { get; set; } } My controller method: /// <summary> /// Saves a Customer object from the submitted application form. /// </summary> /// <param name="customer">A populate instance of the Customer class.</param> /// <returns>A partial view indicating success or failure.</returns> /// <httpmethod>POST</httpmethod> /// <url>/Customer/RegisterCustomerAccount</url> [HttpPost] [ValidateAntiForgeryToken] public ActionResult RegisterCustomerAccount(Customer customer) { if (ModelState.IsValid) { // save the Customer // return indication of success, or otherwise return PartialView(); } else { ViewData.Model = customer; // load necessary reference data into ViewData ViewData["PersonTitles"] = new SelectList(ReferenceDataCache.Get("PersonTitle"), "Id", "Name"); return PartialView("CustomerAccountRegistration", customer); } } My custom binder: public class CustomerModelBinder : DefaultModelBinder { public override object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext) { ValueProviderResult vpResult = bindingContext .ValueProvider.GetValue(bindingContext.ModelName); // vpResult is null // MVC2 - ValueProvider is now an IValueProvider, not dictionary based anymore if (bindingContext.ValueProvider.GetValue("Model.Applicant.Title") != null) { // works } Customer customer = base.BindModel(controllerContext, bindingContext) as Customer; // customer instanitated with null (etc) throughout return customer; } } My binder registration: /// <summary> /// Application_Start is called once when the web application is first accessed. /// </summary> protected void Application_Start() { RegisterRoutes(RouteTable.Routes); // register custom model binders ModelBinders.Binders.Add(typeof(Customer), new CustomerModelBinder()); ReferenceDataCache.Populate(); } ... and a snippet from my view (could this be a prefix problem?) <div class="inputContainer"> <label class="above" for="Model_Applicant_Title" accesskey="t"><span class="accesskey">T</span>itle<span class="mandatoryfield">*</span></label> <%= Html.DropDownList("Model.Applicant.Title", ViewData["PersonTitles"] as SelectList, "Select ...", new { @class = "validate[required]" })%> <% Html.ValidationMessageFor(model => model.Applicant.Title); %> </div> <div class="inputContainer"> <label class="above" for="Model_Applicant_Forename" accesskey="f"><span class="accesskey">F</span>orename / First name<span class="mandatoryfield">*</span></label> <%= Html.TextBox("Model.Applicant.Forename", Html.Encode(Model.Applicant.Forename), new { @class = "validate[required,custom[onlyLetter],length[2,20]]", title="Enter your forename", maxlength = 20, size = 20, autocomplete = "off", onkeypress = "return maskInput(event,re_mask_alpha);" })%> </div> <div class="inputContainer"> <label class="above" for="Model_Applicant_MiddleInitials" accesskey="i">Middle <span class="accesskey">I</span>nitial(s)</label> <%= Html.TextBox("Model.Applicant.MiddleInitials", Html.Encode(Model.Applicant.MiddleInitials), new { @class = "validate[optional,custom[onlyLetter],length[0,8]]", title = "Please enter your middle initial(s)", maxlength = 8, size = 8, autocomplete = "off", onkeypress = "return maskInput(event,re_mask_alpha);" })%> </div>

    Read the article

  • Embed album art in OGG through command line in linux

    - by teratomata
    I want to convert my music from FLAC to Ogg, and currently oggenc does that perfectly except for album art. Metaflac can output album art, however there seems to be no command line tool to embed album art into an ogg file. MP3Tag and EasyTag are able to do it, and there is a specification for it here which calls for the image to be base64 encoded. However, so far I have been unsuccessful in taking an image file, converting it to base64 and embedding it into an ogg file. If I take a base64 encoded image from an ogg file that already has the image embedded, I can easily embed it into another ogg file using vorbiscomment: vorbiscomment -l withimage.ogg > textfile vorbiscomment -c textfile noimage.ogg My problem is taking something like a JPEG and converting it to base64. Currently I have: base64 --wrap=0 ./image.jpg which gives me the image file converted to base64. Using vorbiscomment and following the tagging rules, I can embed that into an ogg file like so: echo "METADATA_BLOCK_PICTURE=$(base64 --wrap=0 ./image.jpg)" > ./folder.txt vorbiscomment -c textfile noimage.ogg However this gives me an ogg whose image does not work properly. I noticed when comparing the base64 strings that all properly embedded pictures have a header, but all the base64 strings I generate are lacking it. Further analysis of the header: od -c header.txt 0000000 \0 \0 \0 003 \0 \0 \0 \n i m a g e / j p 0000020 e g \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 0000040 \0 \0 \0 \0 \0 \0 \0 \0 035 332 0000052 Which follows the spec given above. Notice 003 corresponds to front cover and image/jpeg is the MIME type. So finally, my question is, how can I base64 encode a file and generate this header along with it for embedding into an ogg file?
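
    The missing header is the binary METADATA_BLOCK_PICTURE structure (the same layout as a FLAC PICTURE block): big-endian 32-bit picture type, MIME-type length and string, description length and string, width, height, colour depth, palette size, then the picture data length and the data - and it is that whole structure, not the bare JPEG, that gets base64-encoded. A sketch that builds it in Python is below; width, height, depth and colour count are written as 0 ("unknown") for simplicity, and the resulting file is then fed to vorbiscomment as in the question.

        import base64
        import struct

        def picture_block(image_path, mime="image/jpeg", description=u"", ptype=3):
            """Build a METADATA_BLOCK_PICTURE value (FLAC picture-block layout).
            ptype 3 = front cover."""
            with open(image_path, "rb") as f:
                data = f.read()
            desc = description.encode("utf-8")
            block = struct.pack(">I", ptype)
            block += struct.pack(">I", len(mime)) + mime.encode("ascii")
            block += struct.pack(">I", len(desc)) + desc
            block += struct.pack(">4I", 0, 0, 0, 0)         # width, height, depth, colours
            block += struct.pack(">I", len(data)) + data
            return base64.b64encode(block).decode("ascii")

        with open("folder.txt", "w") as out:
            out.write("METADATA_BLOCK_PICTURE=%s\n" % picture_block("image.jpg"))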

    Read the article

  • Ripping a home video VCD on Linux or Windows with VLC or otherwise

    - by user259774
    I have a VCD with 22 minutes of video on it. I would like to retain this footage and throw away the VCD. I can play the whole thing with VLC ("open disc - vcd - /dev/sr0 - play"): all 22 minutes of the main track. I don't believe there's any other content aside from the main track. I can seek to anywhere I want to within the 22 minute track. If I mount /dev/sr0 /media/vcd and then try to copy the only file from the MPEGAV folder, I get an I/O error, with an empty destination file. VLC has a "convert" option in addition to "play". When I use this I actually get a good OGG file back, after it runs through the video in painful real-time. I guess it dubs it frame-by-frame. But the file is only 10 minutes long, leaving 12 minutes off of the track. Handbrake doesn't detect it's track titles, unfortunately. I don't know if I should start getting involved with GNU ddrescue or if it's because VCDs somehow encode their data sectors differently. Anyway, I'm in way over my head and if anyone knows how I could get that video track off the thing, feel free to share! Edit: I should note that I also have access to a Windows computer

    Read the article

  • Choosing the right TV tuner - USB or PCI TV tuners, hardware/software, DVB? Hybrid/combo/analog?

    - by Nucleon
    Greetings, I'll start with some background information so you know what I'm trying to accomplish and then get to my question. I work at a television station in the US and we are working on setting up an online DVR/podcast system for all of our newscasts. So basically we would be recording every newscast in HD, encoding it to FLV/H.264 for viewing in a browser on Flash-compatible and iPhone/iPad devices, eventually migrating to WebM when it's browser compliant. This task is theoretically pretty simple, as all it involves is a TV tuner device and a program like VLC, MythTV or whatever to schedule and dump it to a file, encode it with VLC/FFmpeg and push it to the streaming server. Now to the hardware: in order to accomplish that task, should I use an internal PCI tuner or a USB 2.0 tuner? Is there a difference? The bus speeds of both are not too far apart, and is the bus speed really relevant in this case? Does it matter if the device has a hardware encoder or a software encoder? On many sites the USB was recommended for ease of setup and use, but would it overly task a processor, or is that not a concern as long as it's a decent PC (at least dual core, 6 GB RAM)? What's the difference between the USB sticks and the USB boxes? To my understanding analog is basically gone in the US, so we would want a hybrid or combo tuner, correct? How do those differ from DVB? Are there any other features or concepts which I am missing which may influence the recommended product? It would be ideal if the device could work in both Linux and Windows environments - to my knowledge most Hauppauge devices can? Example 1: PCI Hauppauge http://www.newegg.com/Product/Product.aspx?Item=N82E16815116033 Example 2: USB 2.0 Box http://www.newegg.com/Product/Product.aspx?Item=N82E16815116029 Example 3: USB 2.0 Stick http://www.newegg.com/Product/Product.aspx?Item=N82E16815116031 Any guidance from the Superusers would be much appreciated!

    Read the article

  • Convert DVD Movie to MPEG and view on PS3 via Windows Media Server 12

    - by Vidar
    I think Apollo spacecraft missions to the moon were easier than this! I have tried dozens of DVD ripping software and media servers and have had limited success in trying to convert all my DVDs into file format so they can be viewed on PS3. I have also been on dozens of forums and it's all getting a bit confusing, some advice is out of date, some software is no longer updated - updates have been applied to PS3 operating system and windows and so on and so on. There has to be a way to get all this knowledge and information in one place that's up to date so people can do the same thing as me. Can anyone give me some definitive software and/or advice to do the following: I have over 200 DVDs - I want to convert these to VOB files (rename to MPEG so WMS can stream them). Store on hard disk and view via Windows Media Server 12 (Windows 7). I will then be able to view these via my PS3 in my lounge and never have to get out another DVD case again. I don't want to encode to any other format like MP4 with H.264 because I will lose some of the original quality. So MPEG-2 is fine for me. Note: I have been using DVD Shrink but it gives odd results sometimes. The main problem being that once the DVD has been ripped - WMS shows the wrong playing length of the film, however if I use VLC Media Player it will play through the whole film OK. This is obviously no good when it comes to streaming on the PS3.

    Read the article

  • NoClassDefFoundError and Netty

    - by Dmytro Leonenko
    Hi. First to say I'm n00b in Java. I can understand most concepts but in my situation I want somebody to help me. I'm using JBoss Netty to handle simple http request and using MemCachedClient check existence of client ip in memcached. import org.jboss.netty.channel.ChannelHandler; import static org.jboss.netty.handler.codec.http.HttpHeaders.*; import static org.jboss.netty.handler.codec.http.HttpHeaders.Names.*; import static org.jboss.netty.handler.codec.http.HttpResponseStatus.*; import static org.jboss.netty.handler.codec.http.HttpVersion.*; import com.danga.MemCached.*; import java.util.List; import java.util.Map; import java.util.Map.Entry; import java.util.Set; import org.jboss.netty.buffer.ChannelBuffer; import org.jboss.netty.buffer.ChannelBuffers; import org.jboss.netty.channel.ChannelFuture; import org.jboss.netty.channel.ChannelFutureListener; import org.jboss.netty.channel.ChannelHandlerContext; import org.jboss.netty.channel.ExceptionEvent; import org.jboss.netty.channel.MessageEvent; import org.jboss.netty.channel.SimpleChannelUpstreamHandler; import org.jboss.netty.handler.codec.http.Cookie; import org.jboss.netty.handler.codec.http.CookieDecoder; import org.jboss.netty.handler.codec.http.CookieEncoder; import org.jboss.netty.handler.codec.http.DefaultHttpResponse; import org.jboss.netty.handler.codec.http.HttpChunk; import org.jboss.netty.handler.codec.http.HttpChunkTrailer; import org.jboss.netty.handler.codec.http.HttpRequest; import org.jboss.netty.handler.codec.http.HttpResponse; import org.jboss.netty.handler.codec.http.HttpResponseStatus; import org.jboss.netty.handler.codec.http.QueryStringDecoder; import org.jboss.netty.util.CharsetUtil; /** * @author <a href="http://www.jboss.org/netty/">The Netty Project</a> * @author Andy Taylor ([email protected]) * @author <a href="http://gleamynode.net/">Trustin Lee</a> * * @version $Rev: 2368 $, $Date: 2010-10-18 17:19:03 +0900 (Mon, 18 Oct 2010) $ */ @SuppressWarnings({"ALL"}) public class HttpRequestHandler extends SimpleChannelUpstreamHandler { private HttpRequest request; private boolean readingChunks; /** Buffer that stores the response content */ private final StringBuilder buf = new StringBuilder(); protected MemCachedClient mcc = new MemCachedClient(); private static SockIOPool poolInstance = null; static { // server list and weights String[] servers = { "lcalhost:11211" }; //Integer[] weights = { 3, 3, 2 }; Integer[] weights = {1}; // grab an instance of our connection pool SockIOPool pool = SockIOPool.getInstance(); // set the servers and the weights pool.setServers(servers); pool.setWeights(weights); // set some basic pool settings // 5 initial, 5 min, and 250 max conns // and set the max idle time for a conn // to 6 hours pool.setInitConn(5); pool.setMinConn(5); pool.setMaxConn(250); pool.setMaxIdle(21600000); //1000 * 60 * 60 * 6 // set the sleep for the maint thread // it will wake up every x seconds and // maintain the pool size pool.setMaintSleep(30); // set some TCP settings // disable nagle // set the read timeout to 3 secs // and don't set a connect timeout pool.setNagle(false); pool.setSocketTO(3000); pool.setSocketConnectTO(0); // initialize the connection pool pool.initialize(); // lets set some compression on for the client // compress anything larger than 64k //mcc.setCompressEnable(true); //mcc.setCompressThreshold(64 * 1024); } @Override public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception { HttpRequest request = this.request = (HttpRequest) e.getMessage(); 
if(mcc.get(request.getHeader("X-Real-Ip")) != null) { HttpResponse response = new DefaultHttpResponse(HTTP_1_1, OK); response.setHeader("X-Accel-Redirect", request.getUri()); ctx.getChannel().write(response).addListener(ChannelFutureListener.CLOSE); } else { sendError(ctx, NOT_FOUND); } } private void writeResponse(MessageEvent e) { // Decide whether to close the connection or not. boolean keepAlive = isKeepAlive(request); // Build the response object. HttpResponse response = new DefaultHttpResponse(HTTP_1_1, OK); response.setContent(ChannelBuffers.copiedBuffer(buf.toString(), CharsetUtil.UTF_8)); response.setHeader(CONTENT_TYPE, "text/plain; charset=UTF-8"); if (keepAlive) { // Add 'Content-Length' header only for a keep-alive connection. response.setHeader(CONTENT_LENGTH, response.getContent().readableBytes()); } // Encode the cookie. String cookieString = request.getHeader(COOKIE); if (cookieString != null) { CookieDecoder cookieDecoder = new CookieDecoder(); Set<Cookie> cookies = cookieDecoder.decode(cookieString); if(!cookies.isEmpty()) { // Reset the cookies if necessary. CookieEncoder cookieEncoder = new CookieEncoder(true); for (Cookie cookie : cookies) { cookieEncoder.addCookie(cookie); } response.addHeader(SET_COOKIE, cookieEncoder.encode()); } } // Write the response. ChannelFuture future = e.getChannel().write(response); // Close the non-keep-alive connection after the write operation is done. if (!keepAlive) { future.addListener(ChannelFutureListener.CLOSE); } } @Override public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) throws Exception { e.getCause().printStackTrace(); e.getChannel().close(); } private void sendError(ChannelHandlerContext ctx, HttpResponseStatus status) { HttpResponse response = new DefaultHttpResponse(HTTP_1_1, status); response.setHeader(CONTENT_TYPE, "text/plain; charset=UTF-8"); response.setContent(ChannelBuffers.copiedBuffer( "Failure: " + status.toString() + "\r\n", CharsetUtil.UTF_8)); // Close the connection as soon as the error message is sent. ctx.getChannel().write(response).addListener(ChannelFutureListener.CLOSE); } } When I try to send request like http://127.0.0.1:8090/1/2/3 I'm getting java.lang.NoClassDefFoundError: com/danga/MemCached/MemCachedClient at httpClientValidator.server.HttpRequestHandler.<clinit>(HttpRequestHandler.java:66) I believe it's not related to classpath. May be it's related to context in which mcc doesn't exist. Any help appreciated EDIT: Original code http://docs.jboss.org/netty/3.2/xref/org/jboss/netty/example/http/snoop/package-summary.html I've modified some parts to fit my needs.

    Read the article

  • New DVDs with 99 Title tracks, which one is the correct track?

    - by Mike Fielden
    I have embarked on the task of backing up my DVD collection. I have noticed that some of the newer movies I am attempting to rip contain 99 Title tracks, all with approximately equal overall run times. I use MacTheRipper to rip the DVDs and Handbrake to encode them. My question is: is there a site somewhere that has information regarding which Title track to select? Disclaimer: I cannot stress this enough, I legally own these DVDs. I am merely making a digital copy. Two examples of such DVDs are Star Trek and Carriers. UPDATE: Just an FYI, most of these 99 tracks appear to be full-length tracks. Their times look very similar to the overall movie run time (within a few seconds of each other), so using the run time isn't a valid way to tell which is the correct track. Opening the movie with VLC seems to be the best way to tell. Thank you all.
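
    If a command-line check helps, lsdvd (a small utility available on Linux and OS X) can list every title with its chapter count and duration, which often makes the real title stand out even when the durations all match, because the decoy titles tend to have an odd chapter layout. A hedged sketch, assuming lsdvd is installed and the disc is at /dev/sr0 (adjust the device path for a Mac):

        lsdvd -c /dev/sr0                                    # list all titles with chapters and durations
        mplayer dvd://37 -dvd-device /dev/sr0 -endpos 30     # spot-check a candidate title for 30 seconds

    Neither check is authoritative; as noted above, opening the disc in VLC and seeing which title it auto-selects is still the most reliable way to confirm.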

    Read the article

  • Join mp4 files in Linux

    - by Jose Armando
    I want to join two mp4 files to create a single one. The video streams are encoded in h264 and the audio in aac. I cannot re-encode the videos to another format for computational reasons. Also, I cannot use any GUI programs; all processing must be performed with Linux command line utilities. FFmpeg cannot do this for mpeg4 files, so instead I used MP4Box, e.g. MP4Box -add video1.mp4 -cat video2.mp4 newvideo.mp4 Unfortunately the audio gets all mixed up. I thought the problem was that the audio was in aac, so I transcoded it to mp3 and used MP4Box again. In this case the audio is fine for the first half of newvideo.mp4 (corresponding to video1.mp4), but then there is no audio and I also cannot seek in the video. My next thought was that the audio and video streams had some small discrepancies in their lengths that I should fix, so for each input video I split the video and audio streams and then joined them with the -shortest option in ffmpeg. Thus for the first video I ran avconv -y -i video1.mp4 -c copy -map 0:0 videostream1.mp4 avconv -y -i video1.mp4 -c copy -map 0:1 audiostream1.m4a avconv -y -i videostream1.mp4 -i audiostream1.m4a -c copy -shortest video1_aligned.mp4 and similarly for the second video, then used MP4Box as before. Unfortunately this didn't work either. The only success I had was when I joined the video streams separately (i.e. videostream1.mp4 and videostream2.mp4) and the audio streams (i.e. audiostream1.m4a and audiostream2.m4a) and then joined the video and audio in a final file. However, the synchronization is lost for the second half of the video. Concretely, there is a 1 sec delay between audio and video. Any suggestions are really welcome.
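
    For what it's worth, newer ffmpeg releases (1.1 and later) added a concat demuxer that works at the container level, so it can join mp4 files without re-encoding as long as both inputs share identical codec parameters (resolution, sample rate, and so on). A sketch, assuming a build that includes the demuxer:

        # list.txt contains one line per input file:
        #   file 'video1.mp4'
        #   file 'video2.mp4'
        ffmpeg -f concat -i list.txt -c copy newvideo.mp4

    Absolute paths in list.txt need -safe 0 on recent builds. If the two inputs don't share exactly the same audio parameters, the sync problem will likely reappear, in which case normalising both inputs to the same audio codec and sample rate first is the usual workaround.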

    Read the article

  • Memory not being freed, causing giant memory leak

    - by Delan Azabani
    In my Unicode library for C++, the ustring class has operator= functions set for char* values and other ustring values. When doing the simple memory leak test: #include <cstdio> #include "ucpp" main() { ustring a; for(;;)a="MEMORY"; } the memory used by the program grows uncontrollably (characteristic of a program with a big memory leak) even though I've added free() calls to both of the functions. I am unsure why this is ineffective (am I missing free() calls in other places?) This is the current library code: #include <cstdlib> #include <cstring> class ustring { int * values; long len; public: long length() { return len; } ustring() { len = 0; values = (int *) malloc(0); } ustring(const ustring &input) { len = input.len; values = (int *) malloc(sizeof(int) * len); for (long i = 0; i < len; i++) values[i] = input.values[i]; } ustring operator=(ustring input) { ustring result(input); free(values); len = input.len; values = input.values; return * this; } ustring(const char * input) { values = (int *) malloc(0); long s = 0; // s = number of parsed chars int a, b, c, d, contNeed = 0, cont = 0; for (long i = 0; input[i]; i++) if (input[i] < 0x80) { // ASCII, direct copy (00-7f) values = (int *) realloc(values, sizeof(int) * ++s); values[s - 1] = input[i]; } else if (input[i] < 0xc0) { // this is a continuation (80-bf) if (cont == contNeed) { // no need for continuation, use U+fffd values = (int *) realloc(values, sizeof(int) * ++s); values[s - 1] = 0xfffd; } cont = cont + 1; values[s - 1] = values[s - 1] | ((input[i] & 0x3f) << ((contNeed - cont) * 6)); if (cont == contNeed) cont = contNeed = 0; } else if (input[i] < 0xc2) { // invalid byte, use U+fffd (c0-c1) values = (int *) realloc(values, sizeof(int) * ++s); values[s - 1] = 0xfffd; } else if (input[i] < 0xe0) { // start of 2-byte sequence (c2-df) contNeed = 1; values = (int *) realloc(values, sizeof(int) * ++s); values[s - 1] = (input[i] & 0x1f) << 6; } else if (input[i] < 0xf0) { // start of 3-byte sequence (e0-ef) contNeed = 2; values = (int *) realloc(values, sizeof(int) * ++s); values[s - 1] = (input[i] & 0x0f) << 12; } else if (input[i] < 0xf5) { // start of 4-byte sequence (f0-f4) contNeed = 3; values = (int *) realloc(values, sizeof(int) * ++s); values[s - 1] = (input[i] & 0x07) << 18; } else { // restricted or invalid (f5-ff) values = (int *) realloc(values, sizeof(int) * ++s); values[s - 1] = 0xfffd; } len = s; } ustring operator=(const char * input) { ustring result(input); free(values); len = result.len; values = result.values; return * this; } ustring operator+(ustring input) { ustring result; result.len = len + input.len; result.values = (int *) malloc(sizeof(int) * result.len); for (long i = 0; i < len; i++) result.values[i] = values[i]; for (long i = 0; i < input.len; i++) result.values[i + len] = input.values[i]; return result; } ustring operator[](long index) { ustring result; result.len = 1; result.values = (int *) malloc(sizeof(int)); result.values[0] = values[index]; return result; } operator char * () { return this -> encode(); } char * encode() { char * r = (char *) malloc(0); long s = 0; for (long i = 0; i < len; i++) { if (values[i] < 0x80) r = (char *) realloc(r, s + 1), r[s + 0] = char(values[i]), s += 1; else if (values[i] < 0x800) r = (char *) realloc(r, s + 2), r[s + 0] = char(values[i] >> 6 | 0x60), r[s + 1] = char(values[i] & 0x3f | 0x80), s += 2; else if (values[i] < 0x10000) r = (char *) realloc(r, s + 3), r[s + 0] = char(values[i] >> 12 | 0xe0), r[s + 1] = char(values[i] >> 6 & 0x3f | 0x80), r[s + 2] = 
char(values[i] & 0x3f | 0x80), s += 3; else r = (char *) realloc(r, s + 4), r[s + 0] = char(values[i] >> 18 | 0xf0), r[s + 1] = char(values[i] >> 12 & 0x3f | 0x80), r[s + 2] = char(values[i] >> 6 & 0x3f | 0x80), r[s + 3] = char(values[i] & 0x3f | 0x80), s += 4; } return r; } };
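
    One likely culprit here: the free() calls added to the two operator= overloads aren't where the memory is going. The class never defines a destructor, so the temporary ustring returned by value from each operator= call (and every buffer handed out by encode()) is simply abandoned on every loop iteration. A minimal sketch of the rule-of-three pieces that should stop the growth, assuming the rest of the class stays as posted:

        ~ustring() { free(values); }                  // every ustring now releases its own buffer

        ustring & operator=(const ustring & input) {  // copy-assign: free the old buffer, deep-copy the new one
            if (this != &input) {
                free(values);
                len = input.len;
                values = (int *) malloc(sizeof(int) * len);
                for (long i = 0; i < len; i++) values[i] = input.values[i];
            }
            return *this;
        }

        ustring & operator=(const char * input) {     // char* assign: build a temporary, reuse the deep copy above
            ustring tmp(input);
            return *this = tmp;
        }

    Returning ustring & instead of ustring also avoids creating a throwaway copy per assignment. Separately, encode() hands ownership of a malloc'd char* to the caller (and operator char*() calls it implicitly), which is a smaller leak to decide on once the destructor is in place.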

    Read the article

  • How do I format a text file for IIS Mailroot Pickup so that it sends an e-mail with attachments?

    - by Ben McCormack
    How do I need to format a text file so that it can be read by an SMTP service to send an e-mail that has an attachment? We have a server where we are using II6 SMTP to send mail from a Pickup folder. The goal is to drop a properly formatted text file into Mailroot\Pickup and then the file will be automatically processed and sent to the correct SMTP recipient. For simple files, this works correctly. Here's an example of a simple file that works (domain names changed): From:[email protected] To:[email protected] Subject:Hello World! Test Body Of The E-mail When I drop a text file containing the above contents into the Mailroot\Pickup folder, it sends correctly. However, I haven't been able to figure out how to get an attachment to work. I found some material that explained how to encode an SMTP attachment and another tool for simple base64 encoding conversion. Using the information on those pages, I came up with the following text: From:[email protected] To:[email protected] Subject:Hello World! MIME-Version: 1.0 Content-Type: text/plain; boundary="Attached" Content-Disposition: inline; --Attached Content-Transfer-Encoding: base64 Content-Type: text/plain; name="attachment.txt" Content-Disposition: attachment; filenamename="attachment.txt" VGhpcyBpcyBhIHRlc3Qgb2Ygc29tZXRoaW5nIHRvIGVuY29kZS4NCk5ldyBsaW5lDQpOZXcgTGlu ZQ0KIkhlbGxvdyIgISEhDQo9PT09ICcgZnNkZnNkZiAxMjM1NDU2MzQzNA== --Attached-- However, when I place the above text in a file and drop it into Mailroot\Pickup, it doesn't send an attachment correctly. Instead, an e-mail shows up with the following in the body of the e-mail: MIME-Version: 1.0 Content-Type: text/plain; boundary="Attached" Content-Disposition: inline; --Attached Content-Transfer-Encoding: base64 Content-Type: text/plain; name="attachment.txt" Content-Disposition: attachment; filenamename="attachment.txt" VGhpcyBpcyBhIHRlc3Qgb2Ygc29tZXRoaW5nIHRvIGVuY29kZS4NCk5ldyBsaW5lDQpOZXcgTGlu ZQ0KIkhlbGxvdyIgISEhDQo9PT09ICcgZnNkZnNkZiAxMjM1NDU2MzQzNA== --Attached-- I can't figure out what I need to do to format the text file so that the SMTP service correctly sends attachments.
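
    The MIME structure ends up in the body because the pickup file's outer Content-Type is still text/plain and the blank lines that separate headers from content are missing, so the SMTP service treats everything after the Subject: line as plain body text. A hedged sketch of a pickup file that should be accepted (the addresses and boundary string are placeholders, the attachment reuses the base64 from above, and I can't vouch for IIS 6's parser beyond standard MIME rules; pickup files generally also want CRLF line endings):

        From:[email protected]
        To:[email protected]
        Subject:Hello World!
        MIME-Version: 1.0
        Content-Type: multipart/mixed; boundary="Attached"

        This is a multi-part message in MIME format.

        --Attached
        Content-Type: text/plain; charset="us-ascii"

        Test Body Of The E-mail

        --Attached
        Content-Type: text/plain; name="attachment.txt"
        Content-Transfer-Encoding: base64
        Content-Disposition: attachment; filename="attachment.txt"

        VGhpcyBpcyBhIHRlc3Qgb2Ygc29tZXRoaW5nIHRvIGVuY29kZS4NCk5ldyBsaW5lDQpOZXcgTGlu
        ZQ0KIkhlbGxvdyIgISEhDQo9PT09ICcgZnNkZnNkZiAxMjM1NDU2MzQzNA==
        --Attached--

    The other differences from the attempt above: the outer type is multipart/mixed rather than text/plain, a blank line follows the header block and each part's headers, and the disposition attribute is filename= (the original reads filenamename=).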

    Read the article

  • TOD to AVI, MPG, WMV: convert .tod (MPEG-2) to AVI, MPG, or WMV for Movie Maker.

    - by yearofhao
    Need to convert .tod (MPEG-2) files downloaded from a JVC Everio to a PC into AVI, MPG, or WMV, including uncompressed/raw AVI? Have a JVC Everio camcorder? Then you may encounter problems when saving the .tod files to your computer: Windows Movie Maker says it can't recognize them, so it can't edit them into videos. You can play them in a media player, but the problem is how to edit them. The bundled software, Power Cinema, can be annoying, since you can only edit while the camera is plugged into the PC - Power Cinema can't seem to edit from the saved clips alone. So how do you save the clips to the PC so that you can edit them without the camera, using Windows Movie Maker? The JVC Everio TOD to AVI/MPG/WMV converter helps you convert .tod files to AVI, MPG, WMV, YouTube FLV, MP4, DV, QuickTime MOV, or other common video formats quickly while keeping the original HD quality. High-definition TOD recordings from JVC camcorders play back fluidly, convert smoothly, and can be edited professionally with the iOrgsoft TOD file converter. The iOrgsoft TOD to AVI/MPG/WMV converter is mostly used on Windows 7 or Vista; once the .tod (MPEG-2) files are downloaded from the JVC Everio to the PC, it's best to convert them to AVI (including XviD/DivX or uncompressed/raw AVI), MPG, or WMV, which are the three formats Windows Movie Maker imports best. The converter is a competent video-editing program that also lets you clip/cut TOD video, crop the video before encoding, and transfer video to devices like the iPhone, iPod, or an HDTV connected to an Apple TV.
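
    If a free route is preferable, ffmpeg can usually read .tod files directly, since they are ordinary MPEG-2 streams in a different wrapper. A hedged sketch (the file names are placeholders, the bitrate/quality settings are assumptions, and the mp3 line needs an ffmpeg build that includes libmp3lame):

        ffmpeg -i MOV001.TOD -c:v wmv2 -b:v 6000k -c:a wmav2 output.wmv     # WMV for Windows Movie Maker
        ffmpeg -i MOV001.TOD -q:v 3 -c:a libmp3lame output.avi              # MPEG-4 AVI alternative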

    Read the article

  • Split MPEG video from command line?

    - by Tim
    I have a homemade DVD that I'm effectively trying to insert chapters into and rearrange - the original author burned it as one long chapter, and I'd like to rip it into smaller pieces and re-encode it into a new DVD. I ripped the DVD with the following command: mplayer dvd:// -dvd-device /dev/sr2 -dumpstream -dumpfile raw.vob I'm running Gentoo Linux with mplayer version 1.0-rc2_p20090731 (the latest available in Portage). I have a list of times that the chapters are supposed to span (for example 30:11-33:25), so my first thought was to rip the entire DVD and use mpgtx to cut out certain pieces of the file. My issue is that running mpgtx -i on the file reports quite a few timestamp jumps: Time stamps jumped from 59.753789 to 0.001622 at position 1d29800 Time stamps jumped from 204963823030450.343750 to 31.165900 at position 2d4f800 Time stamps jumped from 60.077878 to 0.001622 at position 43cc000 Time stamps jumped from 60.024233 to 0.001622 at position 65c5000 Time stamps jumped from 204963823068631.718750 to 52.549244 at position 7fd1000 I've tried to fix the indexes using: mencoder raw.vob -oac copy -ovc copy -forceidx -o fixed.vob -of mpeg But mpgtx will still report timestamp issues. My immediate question: is there a way to take the ripped movie I have and correct its timestamps so I can cut it with mpgtx? If I can get that one issue out of the way, building the rest of the DVD will be smooth sailing. If it's not possible to fix the timestamps on this file: is there a better way to rip small chunks of the DVD into separate files for recompilation later? I'd very much like this to be done on Linux, and it'd be even better if I could script it somehow (feed in a list of start and end positions, or start times and durations, and get out a series of ripped files). If need be, I also have a Mac OS X machine available, but no Windows. Edit: I wound up finding another solution involving HandBrake and ffmpeg (with help from this question), but the question stands. Edit again: Turns out my other solution didn't quite work - the audio desynchronized by about five seconds, in about half of my cut mpgs - so I'm back to square one. Anyone?
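
    If mpgtx keeps tripping over the broken timestamps, another route that scripts nicely is to cut each chapter straight out of raw.vob with ffmpeg, giving it a start time and a duration and copying the streams instead of re-encoding. A sketch using the 30:11-33:25 chapter from above (newer ffmpeg builds accept -c copy; older ones spell it -acodec copy -vcodec copy), with the caveat that stream copy can only cut on keyframes, so cut points may land a second or so off:

        ffmpeg -i raw.vob -ss 00:30:11 -t 00:03:14 -c copy chapter01.mpg

    A list of chapters can then be fed from a file:

        # chapters.txt holds one "start duration output" triple per line
        while read start length out; do
            ffmpeg -i raw.vob -ss "$start" -t "$length" -c copy "$out" < /dev/null
        done < chapters.txt

    The < /dev/null keeps ffmpeg from swallowing the loop's stdin.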

    Read the article

< Previous Page | 29 30 31 32 33 34 35 36 37 38 39 40  | Next Page >