Search Results

Search found 368 results on 15 pages for 'decoding'.


  • DotNetOpenAuth OpenID on ISA 2006 Reverse Proxy problem

    - by userb00
    I am trying to host a site that uses DotNetOpenAuth (OpenID) behind an ISA 2006 reverse proxy. After it authenticates with a provider (such as Google), the provider returns to a URL containing %253A, and the ISA HTTP filter rejects the request. What I had to do was: on the ISA web publishing rule, right-click, open the HTTP policy properties, and uncheck "Verify Normalization"; after that it worked.

    Is this a problem with ISA 2006 generally? Do other firewalls have similar problems? Or is it an OpenID or DotNetOpenAuth issue? And is it safe to disable normalization checking on ISA? According to MSDN: "Web servers receive requests that are URL encoded. This means that certain characters may be replaced with a percent sign (%) followed by a particular number. For example, %20 corresponds to a space, so a request for http://myserver/My%20Dir/My%20File.htm is the same as a request for http://myserver/My Dir/My File.htm. Normalization is the process of decoding URL-encoded requests. Because the % can be URL encoded, an attacker can submit a carefully crafted request to a server that is basically double-encoded. If this occurs, Internet Information Services (IIS) may accept a request that it would otherwise reject as not valid. When you select Verify Normalization, the HTTP filter normalizes the URL two times. If the URL after the first normalization is different from the URL after the second normalization, the filter rejects the request. This prevents attacks that rely on double-encoded requests. Note that while we recommend that you use the Verify Normalization function, it may also block legitimate requests that contain a %."
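    For context, %253A is exactly the double encoding the MSDN note describes: %25 is the percent sign itself, so one decoding pass yields %3A and a second yields ":". Here is a minimal Python sketch of the two-pass check (my reconstruction of the rule, not ISA's actual code):

        from urllib.parse import unquote

        def verify_normalization(url):
            # Reject when a second decoding pass changes the URL again,
            # i.e. the request was (at least) double-encoded.
            once = unquote(url)
            return unquote(once) == once

        print(verify_normalization('http://myserver/My%20Dir'))   # True: singly encoded
        print(verify_normalization('http://myserver/id%253Ax'))   # False: %253A -> %3A -> ':'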


  • How do I pass a cookie to a Sinatra app using curl?

    - by Brandon Toone
    I'm using the code from the example titled "A Slightly Bigger Example" in this tutorial http://rubylearning.com/blog/2009/09/30/cookie-based-sessions-in-sinatra/ to figure out how to send a cookie to a Sinatra application, but I can't work out how to construct the value correctly. When I set the name to "brandon" in the application, it creates a cookie whose value is BAh7BiIJdXNlciIMYnJhbmRvbg%3D%3D%0A, which is a URL encoding (http://ostermiller.org/calc/encode.html) of BAh7BiIJdXNlciIMYnJhbmRvbg==. Using that value I can send a cookie to the app correctly:

        curl -b "rack.session=BAh7BiIJdXNlciIMYnJhbmRvbg==" localhost:9393

    I'm fairly sure that value is a base64 encoding of the Ruby hash for the session, since the docs (http://rack.rubyforge.org/doc/classes/Rack/Session/Cookie.html) say: "The session is a Ruby Hash stored as base64 encoded marshalled data set to :key (default: rack.session)." I thought that meant all I had to do was base64-encode {"user"=>"brandon"} and use it in the curl command. Unfortunately that produces a different value than BAh7BiIJdXNlciIMYnJhbmRvbg==. Next I tried taking the base64-encoded value and running it through various online base64 decoders, but that results in strange characters (a box symbol and others), so I don't know how to recreate the value in order to encode it. So my question is: do you know what characters/format I need to get the proper base64 encoding, and/or do you know of another way to pass a value with curl such that it registers as a proper cookie for a Sinatra app?
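    The box symbols are the giveaway: the cookie is not base64 of the literal text {"user"=>"brandon"} but of Ruby's binary Marshal dump of that hash, which is why online decoders show unprintable bytes. A small Python sketch of the decoding chain (URL-decode, then base64-decode) that makes those Marshal bytes visible:

        import base64
        from urllib.parse import unquote

        cookie = 'BAh7BiIJdXNlciIMYnJhbmRvbg%3D%3D%0A'
        raw = base64.b64decode(unquote(cookie))
        print(raw)   # b'\x04\x08{\x06"\tuser"\x0cbrandon': Ruby Marshal data, not plain text

    So to hand-craft a matching cookie you would need Ruby's own Marshal.dump output, base64-encoded, rather than base64 of the hash's printed form.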


  • Beginner video capture and processing/Camera selection

    - by mattbauch
    I'll soon be undertaking a research project in real-time event recognition, but I have no experience with the programming side of video capture (I'm an upperclassman undergraduate in computer engineering). I want to start off on the right foot, so advice from anyone with experience would be great. The ultimate goal is to track events such as a person standing up/sitting down, entering/leaving a room, possibly even shrugging/slumping in posture, etc., from a security-camera-like vantage point.

    First of all, which cameras/companies would you recommend? I'm looking to spend ~$100, more if necessary but not much. Great resolution isn't a must, but is desirable if affordable. What about IP network cameras vs. a USB-type webcam? Webcams are less expensive, but IP cameras seem like they'd be much less work to deal with in software. What features should I look for in the camera?

    Once I've selected a camera, what does converting its output to a series of RGB bitmaps entail? I've never dealt with video encoding/decoding, so a starting point or a tutorial that would guide me up to this point would be great if anyone has suggestions.

    Finally, what is the best (least complicated/most efficient) way to display video from the camera, plus my own superimposed images (boxes around events in progress, for instance), in a GUI application? I can work on any operating system in any language. I have some experience with win32 GUIs and Java GUIs. The focus of the project is on the algorithm, so I'm trying to get the video capture/display portion of the app done cleanly and quickly. Thanks for any responses!
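    On the capture-to-RGB question: a library such as OpenCV hides the decoding entirely and hands you each frame as a pixel array, and it can draw overlays too. A minimal sketch in Python, assuming OpenCV's bindings and a camera on device 0:

        import cv2

        cap = cv2.VideoCapture(0)          # first attached camera (an IP camera stream URL also works)
        while True:
            ok, frame = cap.read()         # frame is a height x width x 3 array, BGR channel order
            if not ok:
                break
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)                 # RGB bitmap, if needed
            cv2.rectangle(frame, (10, 10), (120, 120), (0, 255, 0), 2)   # superimpose a box
            cv2.imshow('camera', frame)
            if cv2.waitKey(1) == 27:       # Esc quits
                break
        cap.release()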


  • Encode and Decode using UTF-8 in iphone

    - by Ekra
    Hi friends, I want an example in which I can encode and then decode the same string using UTF-8. "Encode and then decode" means I want to implement the method in two areas, where one side encodes the string and the other is able to decode it. I have looked at the API but without much success:

        stringWithCString:encoding:
        stringWithUTF8String:
        stringWithCString:(const char *)cString encoding:(NSStringEncoding)enc;

    =================EDITED=================
    I have the string "øæ-test-2.txt". When I encode it with

        char *s = "øæ-test-2.txt";
        NSString *enc = [NSString stringWithCString:s encoding:NSASCIIStringEncoding];

    I get "Ã¸Ã¦-test-2.txt" as output. Now I want to get the original string back, i.e. "øæ-test-2.txt".
    +++++++++EDITED+++++++++++++++++++
    I am getting "Ã¸Ã¦-test-2.txt" from the server and need "øæ-test-2.txt" after decoding it. I am able to get the right output from the link below: http://www.cafewebmaster.com/online_tools/utf_decode. Please try the link and you will understand my concern. I need a solution urgently; any hint or tutorial pointing in the right direction would be highly appreciated. Regards
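    For what it's worth, what that online tool undoes is classic mojibake: the UTF-8 bytes of "ø" and "æ" were read one byte at a time as Latin-1/ASCII, producing "Ã¸" and "Ã¦". Reversing it means re-encoding the garbled text as single-byte characters and decoding the resulting bytes as UTF-8. The round trip, sketched in Python:

        garbled = 'Ã¸Ã¦-test-2.txt'      # UTF-8 bytes that were misread as Latin-1
        fixed = garbled.encode('latin-1').decode('utf-8')
        print(fixed)                     # øæ-test-2.txt

    The Cocoa equivalent is the same idea: take the string's bytes in the Latin-1 encoding and rebuild the string with NSUTF8StringEncoding.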


  • What is the best way to parse Paypal NVP in PHP?

    - by xsaero00
    I need a function that will correctly parse NVP into a PHP array. I have been using code provided by PayPal, but it did not work when a string length was specified next to the name. Here is what I have so far:

        private function parseNVP($nvpstr)
        {
            $intial = 0;
            $nvpArray = array();
            while (strlen($nvpstr)) {
                // position of key
                $keypos = strpos($nvpstr, '=');
                // position of value
                $valuepos = strpos($nvpstr, '&') ? strpos($nvpstr, '&') : strlen($nvpstr);
                /* get the key and value and store them in an associative array */
                $keyval = substr($nvpstr, $intial, $keypos);
                $vallength = $valuepos - $keypos - 1;
                // check if the length is explicitly specified
                if ($braketpos = strpos($keyval, '[')) {
                    // override value length
                    $vallength = substr($keyval, $braketpos + 1, strlen($keyval) - $braketpos - 2);
                    // get rid of brackets from key name
                    $keyval = substr($keyval, 0, $braketpos);
                }
                $valval = substr($nvpstr, $keypos + 1, $vallength);
                // decoding the response
                if (isValidXMLString("<".urldecode($keyval).">".urldecode($valval)."</".urldecode($keyval).">"))
                    $nvpArray[urldecode($keyval)] = urldecode($valval);
                $nvpstr = substr($nvpstr, $keypos + $vallength + 2, strlen($nvpstr));
            }
            return $nvpArray;
        }

    This function works most of the time.
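    For comparison, here is the same length-aware parse as a small Python sketch. NVP looks like a query string, but a NAME[length]=value entry carries a raw value of exactly length characters, which may itself contain & or =, so it cannot be split naively:

        from urllib.parse import unquote

        def parse_nvp(nvpstr):
            """Parse PayPal NVP, honoring explicit NAME[length]=value entries."""
            result = {}
            i = 0
            while i < len(nvpstr):
                eq = nvpstr.index('=', i)
                key = nvpstr[i:eq]
                if key.endswith(']') and '[' in key:
                    key, _, length = key[:-1].partition('[')
                    vlen = int(length)              # take the next vlen chars verbatim
                    value = nvpstr[eq + 1:eq + 1 + vlen]
                    i = eq + 1 + vlen + 1           # skip the value plus the trailing '&'
                else:
                    amp = nvpstr.find('&', eq)
                    if amp == -1:
                        amp = len(nvpstr)
                    value = nvpstr[eq + 1:amp]
                    i = amp + 1
                result[unquote(key)] = unquote(value)
            return result

        print(parse_nvp('ACK=Success&NOTE[11]=hello&world'))
        # {'ACK': 'Success', 'NOTE': 'hello&world'}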


  • Moses v1.0 multi language ini file

    - by Milan Kocic
    I was working with mosesserver 0.91 and everything worked fine, but now there is version 1.0 and nothing is the same as before. Here is my situation: I want multi-language translation, from Arabic to English and from English to Arabic. All the data and the configuration file I have work with version 0.91 of mosesserver. Here is my config file:

        #########################
        ### MOSES CONFIG FILE ###
        #########################

        # D - decoding path, R - reordering model, L - language model
        [translation-systems]
        ar-en D 0 R 0 L 0
        en-ar D 1 R 1 L 1

        # input factors
        [input-factors]
        0

        # mapping steps
        [mapping]
        0 T 0
        1 T 1

        # translation tables: table type (hierarchical(0), textual (0), binary (1)), source-factors, target-factors, number of scores, file
        # OLD FORMAT is still handled for back-compatibility
        # OLD FORMAT translation tables: source-factors, target-factors, number of scores, file
        # OLD FORMAT a binary table type (1) is assumed
        [ttable-file]
        1 0 0 5 /mnt/models/ar-en/phrase-table/phrase-table
        1 0 0 5 /mnt/models/en-ar/phrase-table/phrase-table

        # no generation models, no generation-file section

        # language models: type(srilm/irstlm), factors, order, file
        [lmodel-file]
        1 0 5 /mnt/models/ar-en/language-model/en.qblm.mm
        1 0 5 /mnt/models/en-ar/language-model/ar.lm.d1.blm.mm

        # limit on how many phrase translations e for each phrase f are loaded
        # 0 = all elements loaded
        [ttable-limit]
        20

        # distortion (reordering) files
        [distortion-file]
        0-0 wbe-msd-bidirectional-fe-allff 6 /mnt/models/ar-en/reordering-table/reordering-table.wbe-msd-bidirectional-fe.gz
        0-0 wbe-msd-bidirectional-fe-allff 6 /mnt/models/en-ar/reordering-model/reordering-table.wbe-msd-bidirectional-fe.gz

        # distortion (reordering) weight
        [weight-d]
        0.3
        0.3

        # lexicalised distortion weights
        [weight-lr]
        0.3
        0.3
        0.3
        0.3
        0.3
        0.3
        0.3
        0.3
        0.3
        0.3
        0.3
        0.3

        # language model weights
        [weight-l]
        0.5000
        0.5000

        # translation model weights
        [weight-t]
        0.2
        0.2
        0.2
        0.2
        0.2
        0.2
        0.2
        0.2
        0.2
        0.2

        # no generation models, no weight-generation section

        # word penalty
        [weight-w]
        -1
        -1

        [distortion-limit]
        12

    So please, can someone help me rewrite this config file so that it works in version 1.0? I also need some Python sample code for translation. I am using XML-RPC from Python, and earlier I sent requests with:

        import xmlrpclib
        client = xmlrpclib.ServerProxy('http://localhost:8080')
        client.translate({'text': 'some text', 'system': 'en-ar'})

    but now it seems there is no 'system' parameter any more and Moses always uses the default settings.
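    I can't vouch for a v1.0 ini rewrite, but a common workaround once the 'system' parameter is gone is to run one mosesserver process per direction, each with its own ini file and port, and pick the server by language pair. A sketch, with hypothetical ports:

        import xmlrpclib

        # one mosesserver instance per translation direction (ports chosen arbitrarily)
        servers = {
            'ar-en': xmlrpclib.ServerProxy('http://localhost:8080/RPC2'),
            'en-ar': xmlrpclib.ServerProxy('http://localhost:8081/RPC2'),
        }

        def translate(text, direction):
            result = servers[direction].translate({'text': text})
            return result['text']

        print(translate('some text', 'en-ar'))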


  • UnicodeDecodeError on attempt to save file through django default filebased backend

    - by Ivan Kuznetsov
    When I attempt to add a file with Russian symbols in its name to a model instance through the default instance.file_field.save method, I get a UnicodeDecodeError (ascii decoding error, not in range(128)) from the storage backend (the stack trace ends in os.exist). If I write the file through plain Python file open/write, all goes well. All filenames are in UTF-8. I get this error only on a testing Gentoo box; on my Ubuntu workstation everything works fine.

        class Article(models.Model):
            file = models.FileField(null=True, blank=True, max_length=300,
                                    upload_to='articles_files/%Y/%m/%d/')

    Traceback:

        File "/usr/lib/python2.6/site-packages/django/core/handlers/base.py" in get_response
          100. response = callback(request, *callback_args, **callback_kwargs)
        File "/usr/lib/python2.6/site-packages/django/contrib/auth/decorators.py" in _wrapped_view
          24. return view_func(request, *args, **kwargs)
        File "/var/www/localhost/help/wiki/views.py" in edit_article
          338. new_article.file.save(fp, fi, save=True)
        File "/usr/lib/python2.6/site-packages/django/db/models/fields/files.py" in save
          92. self.name = self.storage.save(name, content)
        File "/usr/lib/python2.6/site-packages/django/core/files/storage.py" in save
          47. name = self.get_available_name(name)
        File "/usr/lib/python2.6/site-packages/django/core/files/storage.py" in get_available_name
          73. while self.exists(name):
        File "/usr/lib/python2.6/site-packages/django/core/files/storage.py" in exists
          196. return os.path.exists(self.path(name))
        File "/usr/lib/python2.6/genericpath.py" in exists
          18. st = os.stat(path)

        Exception Type: UnicodeEncodeError at /edit/
        Exception Value: ('ascii', u'/var/www/localhost/help/i/articles_files/2010/03/17/\u041f\u0440\u0438\u0432\u0435\u0442', 52, 58, 'ordinal not in range(128)')
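    The trace ends with os.stat encoding a unicode path using the ascii codec, which points at the process's filesystem encoding rather than at Django: the Gentoo box is probably running under a C/POSIX locale. A quick Python 2 check of that suspicion (the path here is just an example):

        import sys, os

        print(sys.getfilesystemencoding())  # 'ascii' here would explain the crash;
                                            # the Ubuntu box likely reports 'UTF-8'

        path = u'/tmp/\u041f\u0440\u0438\u0432\u0435\u0442'
        os.path.exists(path)                # raises UnicodeEncodeError under 'ascii'

    If it prints ascii, exporting a UTF-8 locale (e.g. LANG=en_US.UTF-8) in the environment of the server process is the usual remedy.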


  • PostgreSQL: BYTEA vs OID+Large Object?

    - by mlaverd
    I started an application with Hibernate 3.2 and PostgreSQL 8.4. I have some byte[] fields that were mapped as @Basic (= PG bytea) and others that got mapped as @Lob (= PG Large Object). Why the inconsistency? Because I was a Hibernate noob. Now, those fields are at most 4 KB (the average is 2-3 KB). The PostgreSQL documentation mentions that LOs are good when the fields are big, but I didn't see what 'big' means. I have upgraded to PostgreSQL 9.0 with Hibernate 3.6, and I was forced to change the annotation to @Type(type="org.hibernate.type.PrimitiveByteArrayBlobType"). This bug brought forward a potential compatibility issue, and I eventually found out that Large Objects are a pain to deal with compared to a normal field. So I am thinking of changing all of it to bytea. But I am concerned that bytea fields are encoded in hex, so there is some overhead in encoding and decoding, and this could hurt performance. Are there good benchmarks of the performance of both? Has anybody made the switch and seen a difference?


  • c# webbrowser | pushed realtime quotes

    - by Eric
    Hi, I am a programmer and share trader. A while ago I wrote a day-trading application. Until last week it was possible to fetch real-time quotes from http://aktien.boerse.de/aktien_startseite.php?view=2&order=name%20asc&liste=prime&page=0. Every time the site was reloaded the quotes had changed, and the HTML contents could then be decoded with regular expressions (regex).

    Problem: They stopped this service today. Now the quotes are not real-time when browsing the page; the only way to get stock quotes is to use pushed quotes (the "Push starten" button). However, I do not know how to fetch those in C#. When I create a WebBrowser element, the only way I know to get the pushed quotes out of it is to give the WebBrowser element the focus, send Ctrl+A and Ctrl+C, and paste the data somewhere for decoding. This is not desirable, since control is taken away from the user, and if some other control is clicked during the process this may result in unexpected behaviour.

    Question: Is there a proper way to decode pushed stock quotes in C#? Thanks a lot in advance, --eric


  • FIFOs implementation

    - by nunos
    Consider the following code:

    writer.c:

        mkfifo("/tmp/myfifo", 0660);
        int fd = open("/tmp/myfifo", O_WRONLY);
        char *foo, *bar;
        ...
        write(fd, foo, strlen(foo)*sizeof(char));
        write(fd, bar, strlen(bar)*sizeof(char));

    reader.c:

        int fd = open("/tmp/myfifo", O_RDONLY);
        char buf[100];
        read(fd, buf, ??);

    My question is: since it is not known beforehand how many bytes foo and bar will have, how can I know how many bytes to read in reader.c? If I, for example, read 10 bytes in the reader and foo and bar together are less than 10 bytes, I will have them both in the same variable, which I do not want. Ideally I would have one read call for every variable, but again I don't know beforehand how many bytes the data will have. I thought about adding another write in writer.c, between the writes for foo and bar, with a separator, and then I would have no problem decoding it in reader.c. Is this the way to go about it? Thanks.
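    A separator works until the separator can legally appear in the data; the more robust convention is to frame each message with a fixed-size length prefix, so the reader always knows exactly how many bytes to consume next. The idea, sketched in Python over the same FIFO:

        import os, struct

        def send_msg(fd, data):
            # Prefix every message with its length as a 4-byte big-endian integer.
            os.write(fd, struct.pack('!I', len(data)) + data)

        def read_exact(fd, n):
            buf = b''
            while len(buf) < n:               # read() may return short; loop until done
                chunk = os.read(fd, n - len(buf))
                if not chunk:
                    raise EOFError('writer closed the FIFO')
                buf += chunk
            return buf

        def recv_msg(fd):
            (n,) = struct.unpack('!I', read_exact(fd, 4))
            return read_exact(fd, n)          # exactly one message, nothing of the next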


  • Choosing the right web service

    - by Ratan Sharma
    My website currently runs on ASP.NET 1.1.

    Old process: In our database we store a huge amount of data used for decoding. We have to update this huge data table each week (the data is supplied by a vendor). Our ASP.NET 1.1 website queries the database to decode information.

    New process: Instead of storing the data in our database and querying it, we want to go through a web service, as the vendor now supplies a DLL that gives us the decoded information directly.

    Information on the DLL provided by the vendor: the DLL can only be added to .NET 4.0 sites, which also implies that I cannot add it directly to my 1.1 site. The DLL exposes certain methods; we simply have to add the DLL reference to our web service and call a method to fetch the needed information. Thus we will not have to store that information in our database.

    So which type of web service should I go for (asmx or WCF) that will use the vendor's DLL to fetch the decoded information? The flexibility I am looking for in the web service:
    - It can be consumed from the ASP.NET 1.1 site directly, and also via jQuery Ajax.
    - It can be consumed from other web services running on the server.
    - It can be consumed from Windows services running on the server.

    NOTE: We plan to migrate our website from ASP.NET 1.1 to 4.0 in the future, so it should support that upgrade.


  • How do I read binary C++ protobuf data using Python protobuf?

    - by nbolton
    The Python version of Google protobuf gives us only:

        SerializeAsString()

    whereas the C++ version gives us both:

        SerializeToArray(...)
        SerializeAsString()

    We're writing the file from C++ in binary format, and we'd like to keep it this way. That said, is there a way of reading the binary data into Python and parsing it as if it were a string? Is this the correct way of doing it?

        binary = get_binary_data()
        binary_size = get_binary_size()
        string = None
        for i in range(len(binary_size)):
            string += i
        message = new MyMessage()
        message.ParseFromString(string)

    Update: Here's a new example, and a problem:

        message_length = 512
        file = open('foobars.bin', 'rb')
        eof = False
        while not eof:
            data = file.read(message_length)
            eof = not data
            if not eof:
                foo_bar = FooBar()
                foo_bar.ParseFromString(data)

    When we get to the foo_bar.ParseFromString(data) line, I get this error:

        Exception Type: DecodeError
        Exception Value: Too many bytes when decoding varint.

    Update 2: It turns out that padding on the binary data was throwing protobuf off; too many bytes were being sent in, as the message suggests (in this case it was referring to the padding). This padding comes from using the C++ protobuf function SerializeToArray on a fixed-length buffer. To eliminate it, I have used this temporary code:

        message_length = 512
        file = open('foobars.bin', 'rb')
        eof = False
        while not eof:
            data = file.read(message_length)
            eof = not data
            string = ''
            for i in range(0, len(data)):
                byte = data[i]
                if byte != '\xcc':    # yuck!
                    string += data[i]
            if not eof:
                foo_bar = FooBar()
                foo_bar.ParseFromString(string)

    There is a design flaw here, I think. I will re-implement my C++ code so that it writes variable-length arrays to the binary file. As advised by the protobuf documentation, I will prefix each message with its binary size so that I know how much to read when I'm opening the file with Python.
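    The size-prefix plan at the end is the standard fix; here is a minimal sketch of the Python reading side, assuming each record is laid out as a 4-byte big-endian size followed by the serialized message (the C++ writer would emit the same prefix before the SerializeToArray bytes):

        import struct

        def read_messages(path, message_cls):
            """Yield protobuf messages from a file of [4-byte size][payload] records."""
            with open(path, 'rb') as f:
                while True:
                    header = f.read(4)
                    if len(header) < 4:                  # clean end of file
                        break
                    (size,) = struct.unpack('!I', header)
                    msg = message_cls()
                    msg.ParseFromString(f.read(size))    # exactly one message, no padding
                    yield msg

        # usage: for foo_bar in read_messages('foobars.bin', FooBar): ...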


  • Where'd my sounds go?

    - by Dane Man
    In my Row class I have the initWithCoder method, and everything restored in that method works fine, but only within the method. After the method is called I lose an array of sounds that is in my Row class. The sound class also has the initWithCoder method, and the sound plays fine, but only in the Row class's initWithCoder method. After decoding the Row object, the sound array disappears completely and cannot be accessed. Here is my source for initWithCoder:

        - (id)initWithCoder:(NSCoder *)coder {
            ...
            soundArray = [coder decodeObjectForKey:@"soundArray"];
            NSLog(@"%d", [soundArray count]);
            return self;
        }

    The log shows the count as 8, as it should (this is while unarchiving). Then the Row object I create gets assigned:

        [NSKeyedArchiver archiveRootObject:row toFile:@"DefaultRow"];
        ...
        row = [NSKeyedUnarchiver unarchiveObjectWithFile:@"DefaultRow"];

    And the resulting row object no longer has a soundArray. So now whenever I call the soundArray it crashes:

        // ERROR IS HERE
        NSLog(@"%d", [[row soundArray] count]);

    Help please (soundArray is an NSMutableArray).


  • Opening a Unicode file with Perl

    - by Jaco Pretorius
    I'm using osql to run several SQL scripts against a database, and then I need to look at the results file to check whether any errors occurred. The problem is that Perl doesn't seem to like the fact that the results files are Unicode. I wrote a little test script, and the output comes out all garbled:

        $file = shift;
        open OUTPUT, $file or die "Can't open $file: $!\n";
        while (<OUTPUT>) {
            print $_;
            if (/Invalid|invalid|Cannot|cannot/) {
                push(@invalids, $file);
                print "invalid file - $inputfile - schedule for retry\n";
                last;
            }
        }

    Any ideas? I've tried decoding using decode_utf8 but it makes no difference. I've also tried to set the encoding when opening the file. I think the problem might be that osql puts the result file in UTF-16 format, but I'm not sure; when I open the file in TextPad it just tells me 'Unicode'. Edit: using perl v5.8.8.


  • Reading numpy arrays outside of Python

    - by Abiel
    In a recent question I asked about the fastest way to convert a large numpy array to a delimited string. My reason for asking was that I wanted to take that plain-text string and transmit it (over HTTP, for instance) to clients written in other programming languages; a delimited string of numbers is obviously something that any client program can work with easily. However, it was suggested that because string conversion is slow, it would be faster on the Python side to base64-encode the array and send it as binary. This is indeed faster. My question now is: (1) how can I make sure my encoded numpy array travels well to clients on different operating systems and different hardware, and (2) how do I decode the binary data on the client side? For (1), my inclination is to do something like the following:

        import numpy as np
        import base64
        x = np.arange(100, dtype=np.float64)
        base64.b64encode(x.tostring())

    Is there anything else I need to do? For (2), I would be happy to have an example in any programming language, where the goal is to take the numpy array of floats and turn it into a similar native data structure. Assume we have already done the base64 decoding and have a byte array, and that we also know the numpy dtype, dimensions, and any other metadata that will be needed. Thanks.
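    On (1), the main portability hazard is byte order, so it helps to pin the array to an explicit little-endian dtype before serializing; (2) then reduces to base64-decoding and reinterpreting the bytes with the agreed dtype and shape. A sketch with Python playing both sender and client:

        import base64
        import numpy as np

        # Sender: force a fixed byte order so every client sees the same layout.
        x = np.arange(100, dtype=np.float64)
        payload = base64.b64encode(x.astype('<f8').tobytes())

        # Client: decode base64, then reinterpret the bytes using the agreed metadata.
        raw = base64.b64decode(payload)
        y = np.frombuffer(raw, dtype='<f8').reshape(100)
        assert (x == y).all()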


  • When is it better to use a method versus a property for a class definition?

    - by ccomet
    Partially related to an earlier question of mine: I have a system in which I must store complex data as a string. Instead of parsing these strings as all kinds of separate objects, I created one class that contains all of those objects, plus parser logic that encodes all the properties into a string, or decodes a string to populate those objects. That's all fine and good. This question is not about the parser itself, but about where I should house its logic. Is it a better choice to put it in a property, or in a method? In the case of a property, say public string DataAsString, the get accessor would house the logic to encode all of the data into a string, while the set accessor would decode the input value and set all of the data in the class instance. It seems convenient, because the input/output is indeed a string. In the case of a method, one method would be Encode(), which returns the encoded string. Then either the constructor itself would house the logic for decoding a string and require a string argument, or I would write a Decode(string str) method that is called separately. In either case, it would use methods instead of a property. So, is there a functional difference between these paths in terms of the actual running of the code? Or are they basically equivalent, so it boils down to personal preference or which one looks better? And in that kind of question... which would look cleaner anyway?


  • returning autorelease NSString still causes memory leaks

    - by hookjd
    I have a simple function that returns an NSString after decoding it. I use it a lot throughout my application, and it appears to create a memory leak (according to the Leaks tool) every time I use it. Leaks tells me the problem is on the line where I alloc the NSString that I am going to return, even though I autorelease it. Here is the function:

        - (NSString *)decodeValue {
            NSString *newString;
            newString = [self stringByReplacingOccurrencesOfString:@"#" withString:@"$"];
            NSData *stateData = [NSData dataWithBase64EncodedString:newString];
            NSString *convertState = [[[NSString alloc] initWithData:stateData
                                                            encoding:NSUTF8StringEncoding] autorelease];
            return convertState;
        }

    My understanding of autorelease is that it should be used in exactly this way: I want to hold onto the object just long enough to return it from my function, and then let the object be released later. So I believe I can use this function through code like the following without manually releasing anything:

        NSString *myDecodedString = [myString decodeValue];

    But this process is reporting leaks, and I don't understand how to change it to avoid them. What am I doing wrong?


  • BWToolkit inclusion crashing

    - by Schroedinger
    Hey guys, I'm using the latest version of Xcode (3.2.2) and I've linked the framework using the tutorial. I built my app, tested it, and I get a BWToolkit exception on decoding. I've included the framework in the Frameworks group and added it to the Copy Files stage. I even created a new dummy app including the framework, adding it to the Copy Files stage, and it still crashes when I try to run. Any ideas? Do I need to include it somewhere else in the app? I'm worried I've overlooked something really simple.

        2010-04-13 14:14:24.540 BWTestFramework[7504:a0f] An uncaught exception was raised
        2010-04-13 14:14:24.543 BWTestFramework[7504:a0f] *** -[NSKeyedUnarchiver decodeObjectForKey:]: cannot decode object of class (BWSplitView)
        2010-04-13 14:14:24.545 BWTestFramework[7504:a0f] *** Terminating app due to uncaught exception 'NSInvalidUnarchiveOperationException', reason: '*** -[NSKeyedUnarchiver decodeObjectForKey:]: cannot decode object of class (BWSplitView)'
        *** Call stack at first throw:
        (
            0   CoreFoundation    0x00007fff84a77d24 __exceptionPreprocess + 180
            1   libobjc.A.dylib   0x00007fff82ba00f3 objc_exception_throw + 45
            2   CoreFoundation    0x00007fff84a77b47 +[NSException raise:format:arguments:] + 103
            3   CoreFoundation    0x00007fff84a77ad4 +[NSException raise:format:] + 148
            4   Foundation        0x00007fff83804aa6 _decodeObjectBinary + 2427
            5   Foundation        0x00007fff83805825 -[NSKeyedUnarchiver _decodeArrayOfObjectsForKey:] + 1229
            6   Foundation        0x00007fff83805d65 -[NSArray(NSArray) initWithCoder:] + 462
            7   Foundation        0x00007fff83804b1f _decodeObjectBinary + 2548
            8   Foundation        0x00007fff83803f99 _decodeObject + 208
            9   AppKit            0x00007fff8069fbfb -[NSView initWithCoder:] + 362
            10  Foundation        0x00007fff83804b1f _decodeObjectBinary + 2548
            11  Foundation        0x00007fff83803f99 _decodeObject + 208
            12  AppKit            0x00007fff806adfbb -[NSWindowTemplate initWithCoder:] + 3824
            13  Foundation        0x00007fff83804b1f _decodeObjectBinary + 2548
            14  Foundation        0x00007fff83805825 -[NSKeyedUnarchiver _decodeArrayOfObjectsForKey:] + 1229
            15  Foundation        0x00007fff83805268 -[NSSet(NSSet) initWithCoder:] + 447
            16  Foundation        0x00007fff83804b1f _decodeObjectBinary + 2548
            17  Foundation        0x00007fff83803f99 _decodeObject + 208
            18  AppKit            0x00007fff8062fcde -[NSIBObjectData initWithCoder:] + 1983
            19  Foundation        0x00007fff83804b1f _decodeObjectBinary + 2548
            20  Foundation        0x00007fff83803f99 _decodeObject + 208
            21  AppKit            0x00007fff8062f40d loadNib + 146
            22  AppKit            0x00007fff8062e96d +[NSBundle(NSNibLoading) _loadNibFile:nameTable:withZone:ownerBundle:] + 248
            23  AppKit            0x00007fff8062e7a5 +[NSBundle(NSNibLoading) loadNibNamed:owner:] + 326
            24  AppKit            0x00007fff8062bd27 NSApplicationMain + 279
            25  BWTestFramework   0x0000000100001891 main + 33
            26  BWTestFramework   0x0000000100001868 start + 52
        )
        terminate called after throwing an instance of 'NSException'

    That's the crash report.


  • Passing a URL as a URL parameter

    - by Andrea
    I am implementing OpenID login in a CakePHP application. At a certain point I need to redirect to another action while preserving the information about the OpenID identity, which is itself a URL (with GET parameters), for instance:

        https://www.google.com/accounts/o8/id?id=31g2iy321i3y1idh43q7tyYgdsjhd863Es

    How do I pass this data? The first attempt would be:

        function openid() {
            ...
            $this->redirect(array('controller' => 'users', 'action' => 'openid_create', $openid));
        }

    but the obvious problem is that this completely messes up the way CakePHP parses URL parameters. I'd need to do either of the following: 1) encode the URL in a CakePHP-friendly manner for passing it, and decode it after that, or 2) pass the URL as a POST parameter, but I don't know how to do this.

    EDIT: In response to comments, I should be clearer. I am using the OpenID component, and I have a working OpenID implementation. What I need to do is link OpenID to an existing user system. When a new user logs in via OpenID, I ask for more details and then create a new user with this data. The problem is that I have to keep the OpenID URL throughout this process.
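    Option 1 comes down to percent-encoding the entire identity URL so that its own :, /, ? and & cannot collide with the framework's routing, then decoding it in the receiving action. The round trip, sketched in Python (the PHP counterparts would be rawurlencode and rawurldecode):

        from urllib.parse import quote, unquote

        openid = 'https://www.google.com/accounts/o8/id?id=31g2iy321i3y1idh43q7tyYgdsjhd863Es'

        token = quote(openid, safe='')   # encode everything, including ':' and '/'
        print(token)                     # https%3A%2F%2Fwww.google.com%2Faccounts%2Fo8%2Fid%3Fid%3D...

        assert unquote(token) == openid  # the receiving action decodes it back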


  • Protocol specific channel handlers

    - by Mickael Marrache
    I'm writing an application server that will receive SIP and DNS messages from the network. When I receive a message, I understand from the documentation that at first I get a ChannelBuffer. I would like to determine which kind of message has been received (SIP or DNS) and decode it. To determine the message type, I could dedicate a port to each type of message, but I would be interested to know whether another solution exists for that. My question is more about how to decode the ChannelBuffer: is there a ChannelHandler provided by Netty to decode SIP or DNS messages? If not, what would be the right place in the type hierarchy to write my custom ChannelHandler? To illustrate my question, take the HttpRequestDecoder as an example; its hierarchy is:

        java.lang.Object
          org.jboss.netty.channel.SimpleChannelUpstreamHandler
            org.jboss.netty.handler.codec.frame.FrameDecoder
              org.jboss.netty.handler.codec.replay.ReplayingDecoder<HttpMessageDecoder.State>
                org.jboss.netty.handler.codec.http.HttpMessageDecoder
                  org.jboss.netty.handler.codec.http.HttpRequestDecoder

    Also, do I need to use two different ChannelHandlers for decoding and encoding, or is there a possibility to use a single ChannelHandler for both? Thanks


  • Is it safe to use random Unicode for complex delimiter sequences in strings?

    - by ccomet
    Question: In terms of program stability and ensuring that the system will actually operate, how safe is it to use characters like ¦, § or ‡ for complex delimiter sequences in strings? Can I reliably believe that I won't run into any issues with a program reading these incorrectly?

    I am working in a system, using C# code, in which I have to store a fairly complex set of information within a single string. The readability of this string matters only on the computer side; end-users should only ever see the information after it has been parsed by the appropriate methods. Because some of the data in these strings will be collections of variable size, I use different delimiters to identify which parts of the string correspond to a certain tier of organization. There are enough cases that the standard set of ;, |, and similar ilk has been exhausted. I considered two-character delimiters, like ;# or ;|, but I felt that would be very inefficient. There probably isn't a large performance difference between storing one character and two, but when I have the option of picking the smaller one, picking the larger just feels wrong.

    So finally, I considered using characters like the double dagger and section sign. They only take up one char, and they are definitely not going to show up in the actual text I'll be storing, so they won't be confused with anything. But character encoding is finicky. While visibility to the end user is meaningless (since they won't, in fact, see it), I recently became concerned about how the programs in the system will read it. The string is stored in one database, while a separate program is responsible for both encoding and decoding the string into different object types for the rest of the application to work with. If something that is expected to be written one way is actually written another, the whole system could fail, and I can't really let that happen. So is it safe to use this kind of character for background delimiters?
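    The risk is concrete and easy to demonstrate: a character like ‡ is safe exactly as long as every component reads and writes the string with one agreed encoding. A small Python sketch of the happy path and the failure mode, assuming UTF-8 on the wire:

        record = '\u2021'.join(['foo', 'bar', 'baz'])   # '‡' as the field delimiter
        data = record.encode('utf-8')                   # what gets stored or transmitted

        print(data.decode('utf-8').split('\u2021'))     # ['foo', 'bar', 'baz']: round-trips fine
        print(data.decode('latin-1'))                   # 'fooâ\x80¡barâ\x80¡baz': delimiter mangled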


  • When does an ARM7 processor increase its PC register?

    - by Summer_More_More_Tea
    Hi everyone: I've been thinking about this question for a while: when does an ARM7 (with a 3-stage pipeline) processor increment its PC register? I originally thought that after an instruction has been executed, the processor first checks whether any exception occurred in the last execution, then increases PC by 2 or 4 depending on the current state. If an exception occurs, the ARM7 changes its running mode, stores PC in the LR of the current mode, and begins to process the exception without modifying the PC register. But this makes no sense when analyzing the return instructions. I cannot work out why PC is assigned LR when returning from an undefined-instruction exception, while it is assigned LR-4 when returning from a prefetch-abort exception; don't both of these exceptions happen at the decode stage? What's more, according to my textbook, PC is always assigned LR-4 when returning from a prefetch-abort exception, no matter what state the processor was in (ARM or Thumb) before the exception occurred. However, I think PC should be assigned LR-2 if the original state was Thumb, since a Thumb instruction is 2 bytes long instead of the 4 bytes of an ARM instruction, and we just want to roll back one instruction in the original state. Are there flaws in my reasoning, or is something wrong with the textbook? It seems a long question; I really hope someone can help me get the right answer. Thanks in advance.


  • What can cause my code to run slower when the server JIT is activated?

    - by durandai
    I am doing some optimizations on an MPEG decoder. To ensure my optimizations aren't breaking anything, I have a test suite that benchmarks the entire codebase (both optimized and original) and verifies that both produce identical results (basically just feeding a couple of different streams through the decoder and CRC32-ing the outputs). When using the "-server" option with Sun 1.6.0_18, the test suite runs about 12% slower on the optimized version after warmup (in comparison to the default "-client" setting), while the original codebase gets a good boost, running about twice as fast as in client mode. While at first this seemed to be simply a warmup issue, I added a loop to repeat the entire test suite multiple times. Execution times become constant for each pass starting at the third iteration of the test, yet the optimized version stays 12% slower than in client mode. I am also pretty sure it is not a garbage-collection issue, since the code involves absolutely no object allocations after startup. The code consists mainly of bit-manipulation operations (stream decoding) and lots of basic floating-point math (generating PCM audio). The only JDK classes involved are ByteArrayInputStream (which feeds the stream to the test and excludes disk I/O from the measurements) and CRC32 (to verify the result). I also observed the same behaviour with Sun JDK 1.7.0_b98 (only there it is 15% instead of 12%). Oh, and the tests were all done on the same machine (single core) with no other applications running (WinXP). While there is some inevitable variation in the measured execution times (using System.nanoTime, btw), the variation between different test runs with the same settings never exceeded 2%, and was usually less than 1% (after warmup), so I conclude the effect is real and not purely induced by the measuring mechanism/machine. Are there any known coding patterns that perform worse on the server JIT? Failing that, what options are available to "peek" under the hood and observe what the JIT is doing there?


  • Best practice for inserting large chunks of HTML into elements with JavaScript?

    - by hamstar
    Hey guys. I'm building a web application (using Prototype) that requires adding large chunks of HTML into the DOM. Most of these are rows that contain elements with all manner of attributes. Currently I keep a blank row of HTML in a variable:

        var blankRow = '<tr><td>'
            +'<a href="{LINK}" onclick="someFunc(\'{STRING}\');">{WORD}</a>'
            +'</td></tr>';

        function insertRow(o) {
            newRow = blankRow
                .sub('{LINK}', o.link)
                .sub('{STRING}', o.string)
                .sub('{WORD}', o.word);
            $('tbodyElem').insert(newRow);
        }

    Now that works all well and dandy, but is it best practice? I have to update the code in blankRow whenever I update the markup on the page, so the newly inserted elements stay the same. It gets painful when I have 40-odd lines of HTML to go into blankRow, and then I have to escape it too. Is there an easier way? I was thinking of URL-encoding it and then decoding it before insertion, but that would still mean a blankRow and lots of escaping. What would be neat is a heredoc syntax à la PHP et al.:

        $blankRow = <<<EOF
        text text
        EOF;

    That would mean no escaping, but it would still need a blankRow. What do you do in this situation?


  • Removing HTML entities while preserving line breaks with JSoup

    - by shrodes
    I have been using JSoup to parse lyrics, and it has been great until now, but I have run into a problem. I can use Node.html() to return the full HTML of the desired node, which retains line breaks, as such:

        Gl&oacute;andi augu, silfurn&aacute;tt <br />
        Bl&oacute;&eth; alv&ouml;ru, starir &aacute; <br />
        &Oacute;&eth;ur hundur er &iacute; v&iacute;gam&oacute;&eth;, &iacute; maga... m&eacute;r <br />
        <br />
        Kolni&eth;ur gref, kvik sem dreg h&eacute;r <br />
        Kolni&eth;ur svart, hvergi bjart n&eacute;

    but it has the unfortunate side effect, as you can see, of retaining HTML entities and tags. However, if I use Node.text(), I get a better-looking result, free of tags and entities:

        Glóandi augu, silfurnátt Blóð alvöru, starir á Óður hundur er í vígamóð, í maga... mér Kolniður gref, kvik sem dreg hér Kolniður svart,

    which has another unfortunate side effect: the line breaks are removed and everything is compressed onto a single line. Simply replacing <br /> in the node before calling Node.text() yields the same result; the method seems to compress the text onto a single line itself, ignoring newlines. Is it possible to have the best of both worlds, with tags and entities replaced correctly while preserving the line breaks? Or is there another method or way of decoding entities and removing tags without having to replace them manually?
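    For what it's worth, the generic trick in any HTML library is to replace the <br /> nodes with literal newlines before extracting the text, so entity decoding and tag stripping still happen while the line structure survives. A sketch of the idea in Python with BeautifulSoup (JSoup's node API should allow the same replace-then-text approach):

        from bs4 import BeautifulSoup

        html = ('Gl&oacute;andi augu, silfurn&aacute;tt <br />'
                'Bl&oacute;&eth; alv&ouml;ru, starir &aacute; <br />')

        soup = BeautifulSoup(html, 'html.parser')
        for br in soup.find_all('br'):
            br.replace_with('\n')     # turn each break into a real newline first

        print(soup.get_text())        # entities decoded, tags gone, line breaks intact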

