Search Results

Search found 1285 results on 52 pages for 'lossless compression'.

Page 45/52

  • moving audio over a local network using GStreamer

    - by James Turner
    I need to move realtime audio between two Linux machines, which are both running custom software (of mine) that builds on top of GStreamer. (The software already has other communication between the machines, over a separate TCP-based protocol - I mention this in case having reliable out-of-band data makes a difference to the solution.) The audio input will be a microphone / line-in on the sending machine, with normal audio output as the sink on the destination; alsasrc and alsasink are the most likely elements, though for testing I have been using audiotestsrc instead of a real microphone. GStreamer offers a multitude of ways to move data around over networks - RTP, RTSP, GDP payloading, UDP and TCP servers, clients and sockets, and so on. There are also many examples on the web of streaming both audio and video - but none of them seem to work for me in practice; either the destination pipeline fails to negotiate caps, or I hear a single packet and then the pipeline stalls, or the destination pipeline bails out immediately with no data available. In all cases, I'm testing on the command line with just gst-launch. No compression of the audio data is required - raw audio, or trivial WAV, uLaw or aLaw encoding is fine; what's more important is low-ish latency.
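    For reference, one sketch of the kind of pipeline that has worked for others (untested here; element names are from gst-plugins-good and the caps syntax is for the 0.10 series, so check availability with gst-inspect; host and port are placeholders): raw audio is mu-law encoded and carried as RTP over UDP, which keeps latency low and avoids TCP retransmission stalls.

        # Sender: capture, convert to 8 kHz mono, mu-law encode, payload as RTP PCMU, send over UDP.
        # Swap audiotestsrc for alsasrc once the pipeline negotiates.
        gst-launch audiotestsrc ! audioconvert ! audioresample \
            ! audio/x-raw-int,rate=8000,channels=1 \
            ! mulawenc ! rtppcmupay ! udpsink host=192.168.0.2 port=5004

        # Receiver: depayload RTP PCMU, decode, play. The caps must be stated explicitly,
        # because UDP carries no caps of its own.
        gst-launch udpsrc port=5004 \
            caps="application/x-rtp,media=audio,clock-rate=8000,encoding-name=PCMU,payload=0" \
            ! rtppcmudepay ! mulawdec ! audioconvert ! alsasink sync=false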

    Read the article

  • Fastest PNG decoder for .NET

    - by sboisse
    Our web server needs to process many compositions of large images together before sending the results to web clients. This process is performance critical because the server can receive several thousand requests per hour. Right now our solution loads PNG files (around 1 MB each) from the HD and sends them to the video card so the composition is done on the GPU. We first tried loading our images using the PNG decoder exposed by the XNA API and saw that the performance was not good. To determine whether the problem was loading from the HD or decoding the PNG, we changed the test to load the file into a memory stream first and then hand that memory stream to the .NET PNG decoder. The difference in performance between XNA and the System.Windows.Media.Imaging.PngBitmapDecoder class is not significant; we get roughly the same level of performance. Our benchmarks show the following results:
    Load images from disk: 37.76 ms (1%)
    Decode PNGs: 2816.97 ms (77%)
    Load images on video hardware: 196.67 ms (5%)
    Composition: 87.80 ms (2%)
    Get composition result from video hardware: 166.21 ms (5%)
    Encode to PNG: 318.13 ms (9%)
    Store to disk: 3.96 ms (0%)
    Clean up: 53.00 ms (1%)
    Total: 3680.50 ms (100%)
    From these results we see that the slowest part by far is decoding the PNGs. So we are wondering whether there is a faster PNG decoder we could use to reduce the decoding time. We also considered keeping the images uncompressed on the hard disk, but then each image would be 10 MB in size instead of 1 MB, and since there are several tens of thousands of these images stored on the hard disk, it is not possible to store them all without compression.
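    One cheap way to find out whether a different decoder helps is to time GDI+ (System.Drawing) against the WPF decoder on the same in-memory bytes. A rough sketch of such a timing probe (class name is illustrative; assumes references to System.Drawing, PresentationCore and WindowsBase), not production code:

        using System;
        using System.Diagnostics;
        using System.IO;
        using System.Windows.Media.Imaging;

        class PngDecodeTimer
        {
            static void Main(string[] args)
            {
                byte[] data = File.ReadAllBytes(args[0]);   // read once so only decoding is timed

                var sw = Stopwatch.StartNew();
                using (var ms = new MemoryStream(data))
                using (var bmp = new System.Drawing.Bitmap(ms))          // GDI+ decoder
                    Console.WriteLine("GDI+ decode: {0} ms ({1}x{2})",
                                      sw.ElapsedMilliseconds, bmp.Width, bmp.Height);

                sw = Stopwatch.StartNew();
                using (var ms = new MemoryStream(data))
                {
                    var decoder = new PngBitmapDecoder(ms,
                        BitmapCreateOptions.PreservePixelFormat, BitmapCacheOption.OnLoad);
                    BitmapFrame frame = decoder.Frames[0];               // OnLoad forces the decode here
                    Console.WriteLine("WPF decode: {0} ms ({1}x{2})",
                                      sw.ElapsedMilliseconds, frame.PixelWidth, frame.PixelHeight);
                }
            }
        }

    If one decoder wins by a wide margin, parallelizing the decodes across requests on the 8-core box is the other obvious lever.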

    Read the article

  • Altruistic network connection bandwidth estimation

    - by datenwolf
    Assume two peers Alice and Bob connected over an IP network. Alice and Bob are exchanging packets of lossy compressed data which are generated and consumed in real time (think a VoIP or video chat application). The service is designed to cope with very little available bandwidth, but it relies on low latency. Alice and Bob would mark their connection with an appropriate QoS profile. Alice and Bob want to use variable-bitrate compression and would like to consume all of the leftover bandwidth available on the connection between them, but would voluntarily reduce the consumed bitrate depending on the state of the network. However, they'd like to retain a stable link, i.e. avoid interruptions in their decoded data stream caused by congestion and by the delay until the bitrate gets adjusted. It is, however, perfectly acceptable for them to lose a few packets. TL;DR: Alice and Bob want to implement a VoIP protocol from scratch, and are curious about bandwidth and congestion control. What papers and resources do you suggest for Alice and Bob to read? Mainly in the area of bandwidth estimation and congestion control.

    Read the article

  • How to read/write high-resolution (24-bit, 8 channel) .wav files in Java?

    - by dB'
    I'm trying to write a Java application that manipulates high resolution .wav files. I'm having trouble importing the audio data, i.e. converting the .wav file into an array of doubles. When I use the standard approach, an exception is thrown:
    AudioFileFormat as = AudioSystem.getAudioFileFormat(new File("orig.wav"));
    --> javax.sound.sampled.UnsupportedAudioFileException: file is not a supported file type
    Here's the file format info according to soxi:
    dB$ soxi orig.wav
    soxi WARN wav: wave header missing FmtExt chunk
    Input File     : 'orig.wav'
    Channels       : 8
    Sample Rate    : 96000
    Precision      : 24-bit
    Duration       : 00:00:03.16 = 303526 samples ~ 237.13 CDDA sectors
    File Size      : 9.71M
    Bit Rate       : 24.6M
    Sample Encoding: 32-bit Floating Point PCM
    Can anyone suggest the simplest method for getting this audio into Java? I've tried a few techniques. As stated above, I've experimented with the Java AudioSystem (on both Mac and Windows). I've also tried using Andrew Greensted's WavFile class, but this also fails (WavFileException: Compression Code 3 not supported). One workaround is to convert the audio to 16 bits using sox (with the -b 16 flag), but this is suboptimal since it increases the noise floor. Incidentally, I've noticed that the file CAN be read by libsndfile. Is my best bet to write a JNI wrapper around libsndfile, or can you suggest something quicker? Note that I don't need to play the audio, I just need to analyze it, manipulate it, and then write it out to a new .wav file.
    UPDATE: I solved this problem by modifying Andrew Greensted's WavFile class. His original version only read files encoded as integer values ("format code 1"); my files were encoded as floats ("format code 3"), and that's what was causing the problem. I'll post the modified version of Greensted's code when I get a chance. In the meantime, if anyone wants it, send me a message.
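    For anyone hitting the same wall: the "Compression Code 3" in the WavFile error is the format tag in the WAV fmt chunk (1 = integer PCM, 3 = IEEE float, 6/7 = A-law/mu-law). A small hedged sketch for checking it, assuming the canonical layout where the fmt chunk starts right after the 12-byte RIFF header (which is common but not guaranteed for every file):

        import java.io.DataInputStream;
        import java.io.FileInputStream;
        import java.io.IOException;

        public class WavFormatCode {
            public static void main(String[] args) throws IOException {
                DataInputStream in = new DataInputStream(new FileInputStream(args[0]));
                byte[] header = new byte[22];
                in.readFully(header);   // RIFF(4) size(4) WAVE(4) "fmt "(4) chunkSize(4) formatTag(2)
                in.close();
                int formatTag = (header[20] & 0xFF) | ((header[21] & 0xFF) << 8);  // little-endian
                System.out.println("wave format tag = " + formatTag);              // 3 for these files
            }
        }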

    Read the article

  • With Go, how to append unknown number of byte into a vector and get a slice of bytes?

    - by Stephen Hsu
    I'm trying to encode a large number as a list of bytes (uint8 in Go). The number of bytes is unknown, so I'd like to use a vector. But Go doesn't provide a vector of byte; what can I do? And is it possible to get a slice of bytes out of such a vector? I intend to implement data compression: instead of storing small and large numbers with the same number of bytes, I'm implementing a variable-byte encoding that uses fewer bytes for small numbers and more bytes for large numbers. My code does not compile (invalid type assertion):
    package main

    import (
        //"fmt"
        "container/vector"
    )

    func vbEncodeNumber(n uint) []byte {
        bytes := new(vector.Vector)
        for {
            bytes.Push(n % 128)
            if n < 128 {
                break
            }
            n /= 128
        }
        bytes.Set(bytes.Len()-1, bytes.Last().(byte)+byte(128))
        return bytes.Data().([]byte) // <- invalid type assertion
    }

    func main() { vbEncodeNumber(10000) }
    I want to write a lot of such sequences into a binary file, so I'd like the function to return a byte array. I haven't found a code example using vector.
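    For reference, a minimal sketch of the slice-based equivalent: in current Go the idiomatic container for "an unknown number of bytes" is simply a []byte grown with append (container/vector has since been removed from the standard library), which also sidesteps the type assertion entirely.

        package main

        import "fmt"

        // vbEncodeNumber encodes n as a variable-byte sequence: 7 bits per byte,
        // with 128 added to the final byte to mark the end of the number.
        func vbEncodeNumber(n uint) []byte {
            var buf []byte
            for {
                buf = append(buf, byte(n%128))
                if n < 128 {
                    break
                }
                n /= 128
            }
            buf[len(buf)-1] += 128 // terminator bit on the last byte
            return buf
        }

        func main() {
            fmt.Println(vbEncodeNumber(10000)) // [16 206]
        }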

    Read the article

  • IIS6 compressing static files, but not JSON requests

    - by user500038
    I have IIS6 compression set up, and static content is being compressed correctly as per Coding Horror. Example of a static file:
    Response headers:
    Content-Length 55513
    Content-Type application/x-javascript
    Content-Encoding gzip
    Last-Modified Mon, 20 Dec 2010 15:31:58 GMT
    Accept-Ranges bytes
    Vary Accept-Encoding
    Server Microsoft-IIS/6.0
    X-Powered-By ASP.NET
    Date Wed, 29 Dec 2010 16:37:23 GMT
    Request headers:
    Accept */*
    Accept-Language en-us,en;q=0.5
    Accept-Encoding gzip,deflate
    Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7
    Keep-Alive 115
    Connection keep-alive
    As you can see, the response is correctly compressed. However, with a JSON call you'll see the request with the gzip value correctly set in Accept-Encoding:
    Accept application/json, text/javascript, */*; q=0.01
    Accept-Language en-us,en;q=0.5
    Accept-Encoding gzip,deflate
    Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7
    Keep-Alive 115
    Connection keep-alive
    X-Requested-With XMLHttpRequest
    Then for the response:
    Cache-Control private
    Content-Length 6811
    Content-Type application/json; charset=utf-8
    Server Microsoft-IIS/6.0
    X-Powered-By ASP.NET
    X-AspNet-Version 2.0.50727
    X-AspNetMvc-Version 2.0
    No gzip! I found this article by Rick Strahl, which outlines how to compress in your code, but I'd like IIS to handle this. Is this possible with IIS6?
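    Worth noting: IIS 6 treats .asmx/MVC output as dynamic content, which has its own metabase switch (HcDoDynamicCompression) separate from the static one, so gzipped static files say nothing about the JSON responses. If touching the metabase isn't an option, the usual fallback is a response filter in the application itself, along the lines of the approach in that article. A rough, untested sketch of the idea (register it under <httpModules> in web.config):

        using System;
        using System.IO.Compression;
        using System.Web;

        // Gzips responses when the client advertises support for it.
        public class CompressionModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.PreRequestHandlerExecute += (sender, e) =>
                {
                    HttpContext ctx = ((HttpApplication)sender).Context;
                    string accept = ctx.Request.Headers["Accept-Encoding"];
                    if (!string.IsNullOrEmpty(accept) && accept.Contains("gzip"))
                    {
                        ctx.Response.Filter = new GZipStream(ctx.Response.Filter, CompressionMode.Compress);
                        ctx.Response.AppendHeader("Content-Encoding", "gzip");
                    }
                };
            }

            public void Dispose() { }
        }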

    Read the article

  • Dynamically generated PDF files working in most readers except Adobe Reader

    - by Shane
    I'm trying to dynamically generate PDFs from user input: I basically print the user input and overlay it on an existing PDF that I did not create. It works, with one major exception: Adobe Reader doesn't read it properly, on Windows or on Linux. QuickOffice on my phone doesn't read it either. So I thought I'd trace the path by which the files are created:
    1 - Original background: PDF 1.2 made with Adobe Distiller with LZW encoding. I didn't make this.
    2 - Re-encoded background: PDF 1.4 made with Ghostscript. I used pdf2ps then ps2pdf on the above to strip the LZW so that the reportlab and pyPDF libraries would recognize it. Note that this file looks "fuzzy," like a bad scan, in Adobe Reader, but looks fine in other readers.
    3 - User-input text formatted to be combined with the background: PDF 1.3 made with Reportlab from user input. Opens properly and looks good in every reader I've tried.
    4 - Finished PDF: PDF 1.3 made with PyPDF's mergePage() function on 2 and 3. Does not open in: Adobe Reader for Windows, Adobe Reader for Linux, QuickOffice for Android. Opens perfectly in: Google Docs' PDF viewer on the web, evince for Linux, the Ghostscript viewer for Linux, Foxit Reader for Windows, Preview for Mac.
    Are there known issues that I should know about? I don't know exactly what "flate" is, but from the internet I gather that it's some sort of open-source alternative to LZW for PDF compression? Could that be causing my problem? If so, are there any libraries I could use to fix the cause in my code?

    Read the article

  • How to output KML by GAE

    - by Niklas R
    Hi, I use KML for a Google map where entities have a geopt.db coordinate, and the soft memory limit was exceeded with 213.465 MB after servicing 1 requests total. The log says:
    /list.kml 200 13130ms 10211cpu_ms 4238api_cpu_ms
    The file list.kml, whose output is about 455.7 KB, is rendered from the following template:
    <?xml version="1.0" encoding="UTF-8"?>
    <kml xmlns="http://www.opengis.net/kml/2.2"
         xmlns:gx="http://www.google.com/kml/ext/2.2"
         xmlns:kml="http://www.opengis.net/kml/2.2"
         xmlns:atom="http://www.w3.org/2005/Atom">
    <Document>{% for a in list %}
      <Placemark>
        <name> </name>
        <description>
          <![CDATA[<a href="http://{{host}}/{{a.key.id}}"> {{ a.title }} </a> <br/>{{a.text}}]]>
        </description>
        <Style>
          <IconStyle>
            <Icon>
              <href>http://www.google.com/intl/en_us/mapfiles/ms/icons/green-dot.png</href>
            </Icon>
          </IconStyle>
        </Style>
        <Point>
          <coordinates>{{a.geopt.lon|floatformat:2}},{{a.geopt.lat|floatformat:2}}</coordinates>
        </Point>
      </Placemark>
    {% endfor %}
    </Document>
    </kml>
    Is there a memory leak in the template or in the Python that passes the list variable? Can I improve things by using another template engine or a framework other than the default? Is KMZ compression a good idea in this case? Thanks in advance for any suggestion on where or how to change the code.

    Read the article

  • Gzip and subprocess' stdout in python

    - by pythonic metaphor
    I'm using Python 2.6.4 and discovered that I can't use gzip with subprocess the way I might hope. This illustrates the problem:
    May 17 18:05:36> python
    Python 2.6.4 (r264:75706, Mar 10 2010, 14:41:19)
    [GCC 4.1.2 20071124 (Red Hat 4.1.2-42)] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import gzip
    >>> import subprocess
    >>> fh = gzip.open("tmp","wb")
    >>> subprocess.Popen("echo HI", shell=True, stdout=fh).wait()
    0
    >>> fh.close()
    >>>
    [2]+ Stopped python
    May 17 18:17:49> file tmp
    tmp: data
    May 17 18:17:53> less tmp
    "tmp" may be a binary file. See it anyway?
    May 17 18:17:58> zcat tmp
    zcat: tmp: not in gzip format
    Here's what it looks like inside less:
    HI
    ^_<8B>^H^Hh<C0><F1>K^B<FF>tmp^@^C^@^@^@^@^@^@^@^@^@
    which looks like it put in the stdout as plain text and then appended an empty gzip file. Indeed, if I remove the "HI\n", then I get this:
    May 17 18:22:34> file tmp
    tmp: gzip compressed data, was "tmp", last modified: Mon May 17 18:17:12 2010, max compression
    What is going on here?
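    A hedged reading of what is happening: subprocess only uses fh.fileno(), i.e. the raw file descriptor underneath the GzipFile, so the child's output bypasses the gzip layer entirely and the compressed header/trailer end up written around it when the GzipFile is closed. One sketch of a workaround is to capture the child's stdout in Python and push it through the GzipFile yourself:

        import gzip
        import subprocess

        # Let Python, not the child process, write into the gzip stream,
        # so the bytes actually pass through the compressor.
        fh = gzip.open("tmp", "wb")
        p = subprocess.Popen("echo HI", shell=True, stdout=subprocess.PIPE)
        out, _ = p.communicate()
        fh.write(out)
        fh.close()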

    Read the article

  • Need to close a popup after javascript completes image upload then load new page

    - by Michael Robinson
    I've got a problem. I have an image upload script that runs in a popup: when you select the upload button, the upload runs, but after it completes, instead of closing, the popup stays on the uploader.html window. I need it to close and send the user to a new page. Here is some of the XML that the script uses. Can I change the "urlOnUploadSuccess" line to close the popup and load a new page in the original window I came from?
    .....
    <upload preload="Preloading images:" upload="Uploading images to server:" prepare="Processing and image compression:" of="of" cancel="Cancel" start="Start" warning_empty_required_field="Warning! One of the required fields is empty!" confirm="Will be uploaded:"/>
    <urls urlToUpload="upload.php?"
          urlOnUploadSuccess="http://www.baublet.com/purchase.html"
          urlOnUploadFail="http://www.baublet.com/tryagain.html"
          urlUpdateFlashPlayer="http://www.baublet.com/flashalternative.html"
          useMessageBoxesAfterUpload="false"
          messageOnUploadSuccess="Images were successfully uploaded!"
          messageOnUploadFail="Error! Images failed to upload!"
          jsFunctionNameOnUpload=""/>
    .....
    Thanks, Michael
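    One common pattern, assuming urlOnUploadSuccess can be pointed at a small page you control: have that success page redirect the opener window and then close the popup itself. A hedged sketch (the file name uploaded.html is hypothetical; the target URL is just the purchase page from the XML above):

        <!-- uploaded.html: loaded in the popup by urlOnUploadSuccess -->
        <script type="text/javascript">
          if (window.opener && !window.opener.closed) {
            window.opener.location.href = "http://www.baublet.com/purchase.html";
          }
          window.close(); // works because the popup was opened by script
        </script>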

    Read the article

  • iPhone AVAudioPlayer failed to find codec

    - by Anthony
    Hello, I am writing an app that downloads a wav file from a server and needs to play that file. The files use the mulaw codec with 2:1 compression. These wav files are dynamically created by a separate process, so there is no way for me to pre-convert them to a different format or codec; I need to be able to play them as-is. I am using an AVAudioPlayer instance initialized as follows:
    NSURL *audioURL = [[NSURL alloc] initWithString:@"http://xxx.../file.wav"];
    NSData *audioData = [[NSData alloc] initWithContentsOfURL:audioURL];
    AVAudioPlayer *audio = [[AVAudioPlayer alloc] initWithData:audioData error:nil];
    [audio play];
    However, when the play method executes, I get the following console output when running in the Simulator:
    AudioQueue codec policy 1: failed to find a codec of the requested type
    I also tried saving the downloaded data to a local file and using a file URL, but that yields the same result. The downloaded file does play fine in both Mac and Windows desktop media players. The SDK docs state that the mulaw codec is supported on the iPhone, so I am unsure why it is failing to find it. Any assistance would be greatly appreciated. Thanks.

    Read the article

  • PowerShell PSCX Read-Archive: Cannot bind parameter... problem

    - by Robert
    I'm running across a problem I can't seem to wrap my head around using the Read-Archive cmdlet available via PowerShell Community Extensions (v2.0.3782.38614). Here is a cut down sample used to exhibit the problem I'm running into: $mainPath = "p:\temp" $dest = Join-Path $mainPath "ps\CenCodes.zip" Read-Archive -Path $dest -Format zip Running the above produces the following error: Read-Archive : Cannot bind parameter 'Path'. Cannot convert the "p:\temp\ps\CenCodes.zip" value of type "System.String" to type "Pscx.IO.PscxPathInfo". At line:3 char:19 + Read-Archive -Path <<<< $dest -Format zip + CategoryInfo : InvalidArgument: (:) [Read-Archive], ParameterBindingException + FullyQualifiedErrorId : CannotConvertArgumentNoMessage,Pscx.Commands.IO.Compression.ReadArchiveCommand If I do not use Join-Path to build the path passed to Read-Archive it works, as in this example: $mainPath = "p:\temp" $path = $mainPath + "\ps\CenCodes.zip" Read-Archive -Path $path -Format zip Output from above: ZIP Folder: CenCodes.zip#\ Index LastWriteTime Size Ratio Name ----- ------------- ---- ----- ---- 0 6/17/2010 2:03 AM 3009106 24.53 % CenCodes.xls Even more confusing is if I compare the two variables passed as the Path argument in the two Read-Archive samples above, they seem identical: This... Write-Host "dest=$dest" Write-Host "path=$path" Write-Host ("path -eq dest is " + ($dest -eq $path).ToString()) Outputs... dest=p:\temp\ps\CenCodes.zip path=p:\temp\ps\CenCodes.zip path -eq dest is True Anyone have any ideas as to why the first sample gripes but the second one works fine?

    Read the article

  • substitution cypher with different alphabet length

    - by seanizer
    I would like to implement a simple substitution cypher to mask private IDs in URLs. I know what my IDs will look like (a combination of uppercase ASCII, digits and underscore), and they will be rather long, as they are composite keys. I would like to use a longer alphabet to shorten the resulting codes (I'd like to use upper- and lowercase ASCII letters, digits and nothing else). So my incoming alphabet would be [A-Z0-9_] (37 chars) and my outgoing alphabet would be [A-Za-z0-9] (62 chars), so a compression of almost 50% is available. Let's say my URLs look like this:
    /my/page/GFZHFFFZFZTFZTF_24_F34
    and I want them to look like this instead:
    /my/page/Ft32zfegZFV5
    Obviously both alphabets would be shuffled to bring in some random order. This does not have to be secure; if someone figures it out, fine, but I don't want the scheme to be obvious. My preferred solution would be to convert the string to an integer representation of radix 37, convert the radix to 62, and use the second alphabet to write out that number. Is there any sample code available that does something similar? Integer.parseInt (http://java.sun.com/javase/6/docs/api/java/lang/Integer.html#parseInt%28java.lang.String,%20int%29) has some similar logic, but it is hard-coded to use standard digit behavior. Any hints? I am using Java to implement this, but code or pseudo-code in any other language is of course also helpful.
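    A hedged sketch of the radix-37-to-radix-62 idea using BigInteger. The alphabets are shown unshuffled for clarity (shuffle both to taste), the class name is illustrative, and it assumes the ID only contains characters from the incoming alphabet; note that leading characters that map to digit 0 are lost, just like leading zeros in ordinary numbers.

        import java.math.BigInteger;

        public class IdCodec {
            static final String IN  = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_";                              // radix 37
            static final String OUT = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";     // radix 62

            // Interpret s as a number written in the given alphabet.
            static BigInteger decode(String s, String alphabet) {
                BigInteger radix = BigInteger.valueOf(alphabet.length());
                BigInteger n = BigInteger.ZERO;
                for (char c : s.toCharArray()) {
                    n = n.multiply(radix).add(BigInteger.valueOf(alphabet.indexOf(c)));
                }
                return n;
            }

            // Write n out using the given alphabet as digits.
            static String encode(BigInteger n, String alphabet) {
                if (n.signum() == 0) return alphabet.substring(0, 1);
                BigInteger radix = BigInteger.valueOf(alphabet.length());
                StringBuilder sb = new StringBuilder();
                while (n.signum() > 0) {
                    BigInteger[] qr = n.divideAndRemainder(radix);
                    sb.append(alphabet.charAt(qr[1].intValue()));
                    n = qr[0];
                }
                return sb.reverse().toString();
            }

            public static void main(String[] args) {
                String id = "GFZHFFFZFZTFZTF_24_F34";
                String shortId = encode(decode(id, IN), OUT);
                System.out.println(shortId);
                System.out.println(decode(shortId, OUT).equals(decode(id, IN))); // true: round-trips
            }
        }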

    Read the article

  • How can I overlay images over one another in Java?

    - by Sam
    So I have been posting all over and have yet to get a solid answer: I have created an image resizing class, with a crop method. The cropping works great. The issue that I am having is the background color that I specify in the drawImage function of Graphics is not working correctly. It defaults to black as the background regardless of what I supply (in this case Color.WHITE). Also, the overlaying image or top most image (comes from a file) is being inverted (I think it is) or otherwise discolored. Just so you can conceptualize this a little bit better, I am taking a jpeg and overlaying it on top of a new BufferedImage, the new buffered image's background is not being set. Here is the code below that I am working with: public void Crop(int Height, int Width, int SourceX, int SourceY) throws Exception { //output height and width int OutputWidth = this.OutputImage.getWidth(); int OutputHeight = this.OutputImage.getHeight(); //create output streams ByteArrayOutputStream MyByteArrayOutputStream = new ByteArrayOutputStream(); MemoryCacheImageOutputStream MyMemoryCacheImageOutputStream = new MemoryCacheImageOutputStream(MyByteArrayOutputStream); //Create a new BufferedImage BufferedImage NewImage = new BufferedImage(Width, Height, BufferedImage.TYPE_INT_RGB); Graphics MyGraphics = NewImage.createGraphics(); MyGraphics.drawImage(this.OutputImage, -SourceX, -SourceY, OutputWidth, OutputHeight, Color.WHITE, null); // Get Writer and set compression Iterator MyIterator = ImageIO.getImageWritersByFormatName("png"); if (MyIterator.hasNext()) { //get image writer ImageWriter MyImageWriter = (ImageWriter)MyIterator.next(); //get params ImageWriteParam MyImageWriteParam = MyImageWriter.getDefaultWriteParam(); //set outputstream MyImageWriter.setOutput(MyMemoryCacheImageOutputStream); //create new ioimage IIOImage MyIIOImage = new IIOImage(NewImage, null, null); //write new image MyImageWriter.write(null, MyIIOImage, MyImageWriteParam); } //convert output stream back to inputstream ByteArrayInputStream MyByteArrayInputStream = new ByteArrayInputStream(MyByteArrayOutputStream.toByteArray()); MemoryCacheImageInputStream MyMemoryCacheImageInputStream = new MemoryCacheImageInputStream(MyByteArrayInputStream); //resassign as a buffered image this.OutputImage = ImageIO.read(MyMemoryCacheImageInputStream); }
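    One thing that often explains the black background (offered as a hedged guess, not a verified diagnosis of this exact code): the bgcolor overload of drawImage only paints behind the rectangle the image is drawn into, so it is more predictable to fill the whole target yourself before compositing. A sketch of that approach, with the class and method names (CropHelper, cropOnWhite) invented for illustration:

        import java.awt.Color;
        import java.awt.Graphics2D;
        import java.awt.image.BufferedImage;

        public class CropHelper {
            // Fill the destination with white first, then draw the offset, scaled source on top.
            public static BufferedImage cropOnWhite(BufferedImage source, int width, int height,
                                                    int sourceX, int sourceY,
                                                    int outputWidth, int outputHeight) {
                BufferedImage result = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
                Graphics2D g = result.createGraphics();
                g.setColor(Color.WHITE);
                g.fillRect(0, 0, width, height);          // explicit white background
                g.drawImage(source, -sourceX, -sourceY, outputWidth, outputHeight, null);
                g.dispose();
                return result;
            }
        }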

    Read the article

  • Why does iOS 5 fail to connect to a server running JDK 1.6, but not JDK 1.5

    - by KC Baltz
    We have a Java Socket Server listening on an SSLSocket (port 443) and an iOS application that connects with it. When running on iOS 5.1, the application stopped working when we upgraded the Java version of the server from JDK 1.5 to 1.6 (or 1.7). The app connects just fine to JDK 5 and 6 when running on iOS 6. The iOS app is reporting an error: -9809 = errSSLCrypto. On the Java side, we get javax.net.ssl.SSLException: Received fatal alert: close_notify. On the Java server side, we have enabled all the available cipher suites. On the client side we have tested enabling several different suites, although we have yet to complete a test involving each one individually enabled. Right now, it is failing when we use TLS_DH_anon_WITH_AES_128_CBC_SHA although it has failed with others and we are starting to think it's not the suite. Here is the debug output. It makes it all the way to ServerHelloDone and then fails shortly thereafter: Is secure renegotiation: false [Raw read]: length = 5 0000: 16 03 03 00 41 ....A [Raw read]: length = 65 0000: 01 00 00 3D 03 03 50 83 1E 0B 56 19 25 65 C8 F2 ...=..P...V.%e.. 0010: AF 02 AD 48 FE E2 92 CF B8 D7 A6 A3 EA C5 FF 5D ...H...........] 0020: 74 0F 1B C1 99 18 00 00 08 00 FF 00 34 00 1B 00 t...........4... 0030: 18 01 00 00 0C 00 0D 00 08 00 06 05 01 04 01 02 ................ 0040: 01 . URT-, READ: Unknown-3.3 Handshake, length = 65 *** ClientHello, Unknown-3.3 RandomCookie: GMT: 1333992971 bytes = { 86, 25, 37, 101, 200, 242, 175, 2, 173, 72, 254, 226, 146, 207, 184, 215, 166, 163, 234, 197, 255, 93, 116, 15, 27, 193, 153, 24 } Session ID: {} Cipher Suites: [TLS_EMPTY_RENEGOTIATION_INFO_SCSV, TLS_DH_anon_WITH_AES_128_CBC_SHA, SSL_DH_anon_WITH_3DES_EDE_CBC_SHA, SSL_DH_anon_WITH_RC4_128_MD5] Compression Methods: { 0 } Unsupported extension signature_algorithms, data: 00:06:05:01:04:01:02:01 *** [read] MD5 and SHA1 hashes: len = 65 0000: 01 00 00 3D 03 03 50 83 1E 0B 56 19 25 65 C8 F2 ...=..P...V.%e.. 0010: AF 02 AD 48 FE E2 92 CF B8 D7 A6 A3 EA C5 FF 5D ...H...........] 0020: 74 0F 1B C1 99 18 00 00 08 00 FF 00 34 00 1B 00 t...........4... 0030: 18 01 00 00 0C 00 0D 00 08 00 06 05 01 04 01 02 ................ 0040: 01 . 
%% Created: [Session-1, TLS_DH_anon_WITH_AES_128_CBC_SHA] *** ServerHello, TLSv1 RandomCookie: GMT: 1333992972 bytes = { 100, 3, 56, 153, 7, 2, 251, 64, 41, 32, 66, 240, 227, 181, 55, 190, 2, 237, 146, 0, 73, 119, 70, 0, 160, 9, 28, 207 } Session ID: {80, 131, 30, 12, 241, 73, 52, 38, 46, 41, 237, 226, 199, 246, 156, 45, 3, 247, 182, 43, 223, 8, 49, 169, 188, 63, 160, 41, 102, 199, 50, 190} Cipher Suite: TLS_DH_anon_WITH_AES_128_CBC_SHA Compression Method: 0 Extension renegotiation_info, renegotiated_connection: <empty> *** Cipher suite: TLS_DH_anon_WITH_AES_128_CBC_SHA *** Diffie-Hellman ServerKeyExchange DH Modulus: { 233, 230, 66, 89, 157, 53, 95, 55, 201, 127, 253, 53, 103, 18, 11, 142, 37, 201, 205, 67, 233, 39, 179, 169, 103, 15, 190, 197, 216, 144, 20, 25, 34, 210, 195, 179, 173, 36, 128, 9, 55, 153, 134, 157, 30, 132, 106, 171, 73, 250, 176, 173, 38, 210, 206, 106, 34, 33, 157, 71, 11, 206, 125, 119, 125, 74, 33, 251, 233, 194, 112, 181, 127, 96, 112, 2, 243, 206, 248, 57, 54, 148, 207, 69, 238, 54, 136, 193, 26, 140, 86, 171, 18, 122, 61, 175 } DH Base: { 48, 71, 10, 213, 160, 5, 251, 20, 206, 45, 157, 205, 135, 227, 139, 199, 209, 177, 197, 250, 203, 174, 203, 233, 95, 25, 10, 167, 163, 29, 35, 196, 219, 188, 190, 6, 23, 69, 68, 64, 26, 91, 44, 2, 9, 101, 216, 194, 189, 33, 113, 211, 102, 132, 69, 119, 31, 116, 186, 8, 77, 32, 41, 216, 60, 28, 21, 133, 71, 243, 169, 241, 162, 113, 91, 226, 61, 81, 174, 77, 62, 90, 31, 106, 112, 100, 243, 22, 147, 58, 52, 109, 63, 82, 146, 82 } Server DH Public Key: { 8, 60, 59, 13, 224, 110, 32, 168, 116, 139, 246, 146, 15, 12, 216, 107, 82, 182, 140, 80, 193, 237, 159, 189, 87, 34, 18, 197, 181, 252, 26, 27, 94, 160, 188, 162, 30, 29, 165, 165, 68, 152, 11, 204, 251, 187, 14, 233, 239, 103, 134, 168, 181, 173, 206, 151, 197, 128, 65, 239, 233, 191, 29, 196, 93, 80, 217, 55, 81, 240, 101, 31, 119, 98, 188, 211, 52, 146, 168, 127, 127, 66, 63, 111, 198, 134, 70, 213, 31, 162, 146, 25, 178, 79, 56, 116 } Anonymous *** ServerHelloDone [write] MD5 and SHA1 hashes: len = 383 0000: 02 00 00 4D 03 01 50 83 1E 0C 64 03 38 99 07 02 ...M..P...d.8... 0010: FB 40 29 20 42 F0 E3 B5 37 BE 02 ED 92 00 49 77 .@) B...7.....Iw 0020: 46 00 A0 09 1C CF 20 50 83 1E 0C F1 49 34 26 2E F..... P....I4&. 0030: 29 ED E2 C7 F6 9C 2D 03 F7 B6 2B DF 08 31 A9 BC ).....-...+..1.. 0040: 3F A0 29 66 C7 32 BE 00 34 00 00 05 FF 01 00 01 ?.)f.2..4....... 0050: 00 0C 00 01 26 00 60 E9 E6 42 59 9D 35 5F 37 C9 ....&.`..BY.5_7. 0060: 7F FD 35 67 12 0B 8E 25 C9 CD 43 E9 27 B3 A9 67 ..5g...%..C.'..g 0070: 0F BE C5 D8 90 14 19 22 D2 C3 B3 AD 24 80 09 37 ......."....$..7 0080: 99 86 9D 1E 84 6A AB 49 FA B0 AD 26 D2 CE 6A 22 .....j.I...&..j" 0090: 21 9D 47 0B CE 7D 77 7D 4A 21 FB E9 C2 70 B5 7F !.G...w.J!...p.. 00A0: 60 70 02 F3 CE F8 39 36 94 CF 45 EE 36 88 C1 1A `p....96..E.6... 00B0: 8C 56 AB 12 7A 3D AF 00 60 30 47 0A D5 A0 05 FB .V..z=..`0G..... 00C0: 14 CE 2D 9D CD 87 E3 8B C7 D1 B1 C5 FA CB AE CB ..-............. 00D0: E9 5F 19 0A A7 A3 1D 23 C4 DB BC BE 06 17 45 44 ._.....#......ED 00E0: 40 1A 5B 2C 02 09 65 D8 C2 BD 21 71 D3 66 84 45 @.[,..e...!q.f.E 00F0: 77 1F 74 BA 08 4D 20 29 D8 3C 1C 15 85 47 F3 A9 w.t..M ).<...G.. 0100: F1 A2 71 5B E2 3D 51 AE 4D 3E 5A 1F 6A 70 64 F3 ..q[.=Q.M>Z.jpd. 0110: 16 93 3A 34 6D 3F 52 92 52 00 60 08 3C 3B 0D E0 ..:4m?R.R.`.<;.. 0120: 6E 20 A8 74 8B F6 92 0F 0C D8 6B 52 B6 8C 50 C1 n .t......kR..P. 0130: ED 9F BD 57 22 12 C5 B5 FC 1A 1B 5E A0 BC A2 1E ...W"......^.... 0140: 1D A5 A5 44 98 0B CC FB BB 0E E9 EF 67 86 A8 B5 ...D........g... 
0150: AD CE 97 C5 80 41 EF E9 BF 1D C4 5D 50 D9 37 51 .....A.....]P.7Q 0160: F0 65 1F 77 62 BC D3 34 92 A8 7F 7F 42 3F 6F C6 .e.wb..4....B?o. 0170: 86 46 D5 1F A2 92 19 B2 4F 38 74 0E 00 00 00 .F......O8t.... URT-, WRITE: TLSv1 Handshake, length = 383 [Raw write]: length = 388 0000: 16 03 01 01 7F 02 00 00 4D 03 01 50 83 1E 0C 64 ........M..P...d 0010: 03 38 99 07 02 FB 40 29 20 42 F0 E3 B5 37 BE 02 .8....@) B...7.. 0020: ED 92 00 49 77 46 00 A0 09 1C CF 20 50 83 1E 0C ...IwF..... P... 0030: F1 49 34 26 2E 29 ED E2 C7 F6 9C 2D 03 F7 B6 2B .I4&.).....-...+ 0040: DF 08 31 A9 BC 3F A0 29 66 C7 32 BE 00 34 00 00 ..1..?.)f.2..4.. 0050: 05 FF 01 00 01 00 0C 00 01 26 00 60 E9 E6 42 59 .........&.`..BY 0060: 9D 35 5F 37 C9 7F FD 35 67 12 0B 8E 25 C9 CD 43 .5_7...5g...%..C 0070: E9 27 B3 A9 67 0F BE C5 D8 90 14 19 22 D2 C3 B3 .'..g......."... 0080: AD 24 80 09 37 99 86 9D 1E 84 6A AB 49 FA B0 AD .$..7.....j.I... 0090: 26 D2 CE 6A 22 21 9D 47 0B CE 7D 77 7D 4A 21 FB &..j"!.G...w.J!. 00A0: E9 C2 70 B5 7F 60 70 02 F3 CE F8 39 36 94 CF 45 ..p..`p....96..E 00B0: EE 36 88 C1 1A 8C 56 AB 12 7A 3D AF 00 60 30 47 .6....V..z=..`0G 00C0: 0A D5 A0 05 FB 14 CE 2D 9D CD 87 E3 8B C7 D1 B1 .......-........ 00D0: C5 FA CB AE CB E9 5F 19 0A A7 A3 1D 23 C4 DB BC ......_.....#... 00E0: BE 06 17 45 44 40 1A 5B 2C 02 09 65 D8 C2 BD 21 ...ED@.[,..e...! 00F0: 71 D3 66 84 45 77 1F 74 BA 08 4D 20 29 D8 3C 1C q.f.Ew.t..M ).<. 0100: 15 85 47 F3 A9 F1 A2 71 5B E2 3D 51 AE 4D 3E 5A ..G....q[.=Q.M>Z 0110: 1F 6A 70 64 F3 16 93 3A 34 6D 3F 52 92 52 00 60 .jpd...:4m?R.R.` 0120: 08 3C 3B 0D E0 6E 20 A8 74 8B F6 92 0F 0C D8 6B .<;..n .t......k 0130: 52 B6 8C 50 C1 ED 9F BD 57 22 12 C5 B5 FC 1A 1B R..P....W"...... 0140: 5E A0 BC A2 1E 1D A5 A5 44 98 0B CC FB BB 0E E9 ^.......D....... 0150: EF 67 86 A8 B5 AD CE 97 C5 80 41 EF E9 BF 1D C4 .g........A..... 0160: 5D 50 D9 37 51 F0 65 1F 77 62 BC D3 34 92 A8 7F ]P.7Q.e.wb..4... 0170: 7F 42 3F 6F C6 86 46 D5 1F A2 92 19 B2 4F 38 74 .B?o..F......O8t 0180: 0E 00 00 00 .... [Raw read]: length = 5 0000: 15 03 01 00 02 ..... [Raw read]: length = 2 0000: 02 00 .. URT-, READ: TLSv1 Alert, length = 2 URT-, RECV TLSv1 ALERT: fatal, close_notify URT-, called closeSocket() URT-, handling exception: javax.net.ssl.SSLException: Received fatal alert: close_notify FYI, this works in iOS 6.0

    Read the article

  • What image processing Library should I use

    - by Swippen
    I have been reading "What is the best image manipulation library?" and have tried a few libraries; I'm now looking for input on what is best for our needs. I will start by describing our current setup and problems. We have a system that needs to resize and crop a large number of images from big originals. We handle 50,000+ images every day on 2 powerful servers. Today we use ImageGlue from WebSupergoo, but we don't like it at all: it is slow and hangs the service now and then (that's in another unanswered Stack Overflow question). We have a threaded Windows service that uses the Microsoft ThreadPool to resize as much as possible on the 8-core machines. I have tried AForge and it went very well; it was loads faster and never crashed. But I had quality problems on a few images. That is due to the algorithms I chose, of course, so it can be tweaked, but we want to widen our view and see whether that's the right way to go. So:
    It needs to be C# / .NET and run in a Windows service (since we won't change the rest of the service, only the image handling).
    It needs to handle a threaded environment well.
    We have a great need for speed, since today it's too slow.
    But we also want good quality and small file sizes, since the images are later displayed on a web page with loads of visitors and need to look good.
    So we have high demands on getting good quality at a fast pace, and also, secondarily, on keeping file sizes down, even if that can be adjusted a bit with compression. Any comments or suggestions on what library to use?
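    For the quality side of the comparison, it can help to pin down a GDI+ baseline with all the quality settings turned up. A sketch only (class and method names are illustrative, and System.Drawing is not officially supported in services, so treat it as a benchmark reference rather than a recommendation):

        using System.Drawing;
        using System.Drawing.Drawing2D;
        using System.Drawing.Imaging;

        static class Resizer
        {
            // High-quality GDI+ resize; each call creates its own Graphics, so it is safe per thread.
            public static Bitmap Resize(Image source, int width, int height)
            {
                var result = new Bitmap(width, height, PixelFormat.Format24bppRgb);
                using (var g = Graphics.FromImage(result))
                {
                    g.InterpolationMode = InterpolationMode.HighQualityBicubic;
                    g.SmoothingMode = SmoothingMode.HighQuality;
                    g.PixelOffsetMode = PixelOffsetMode.HighQuality;
                    g.DrawImage(source, 0, 0, width, height);
                }
                return result;
            }
        }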

    Read the article

  • Main Function Error C++

    - by Arjun Nayini
    I have this main function:
    #ifndef MAIN_CPP
    #define MAIN_CPP
    #include "dsets.h"
    using namespace std;

    int main(){
        DisjointSets s;
        s.uptree.addelements(4);
        for(int i=0; i<s.uptree.size(); i++)
            cout << uptree.at(i) << endl;
        return 0;
    }
    #endif
    And the following class:
    class DisjointSets {
      public:
        void addelements(int x);
        int find(int x);
        void setunion(int x, int y);
      private:
        vector<int> uptree;
    };
    #endif
    My implementation is this:
    void DisjointSets::addelements(int x){
        for(int i=0; i<x; i++)
            uptree.push_back(-1);
    }
    //Given an int this function finds the root associated with that node.
    int DisjointSets::find(int x){
        //need path compression
        if(uptree.at(x) < 0)
            return x;
        else
            return find(uptree.at(x));
    }
    //This function reorders the uptree in order to represent the union of two
    //subtrees
    void DisjointSets::setunion(int x, int y){
    }
    Upon compiling main.cpp (g++ main.cpp) I'm getting these errors:
    dsets.h: In function 'int main()':
    dsets.h:25: error: 'std::vector<int> DisjointSets::uptree' is private
    main.cpp:9: error: within this context
    main.cpp:9: error: 'class std::vector<int>' has no member named 'addelements'
    dsets.h:25: error: 'std::vector<int> DisjointSets::uptree' is private
    main.cpp:10: error: within this context
    main.cpp:11: error: 'uptree' was not declared in this scope
    I'm not sure exactly what's wrong. Any help would be appreciated.
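    Reading the errors: uptree is private to DisjointSets, and addelements is a member of DisjointSets itself, not of the vector, so main has to go through the public interface (and there is no public accessor for printing the raw vector). A sketch of a main that should compile against the class as declared; note the #ifndef guard isn't needed around a .cpp file, and <iostream> has to be included for cout:

        #include <iostream>
        #include "dsets.h"
        using namespace std;

        int main() {
            DisjointSets s;
            s.addelements(4);                // call the class's method, not the private vector's
            for (int i = 0; i < 4; i++)
                cout << s.find(i) << endl;   // every element is its own root: prints 0 1 2 3
            return 0;
        }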

    Read the article

  • jQuery/ajax working on IIS5.1 but not IIS6

    - by Mikejh99
    I'm running into a weird issue here. I have code that makes jQuery ajax calls to a web service and dynamically adds controls using jQuery. Everything works fine on my dev machine running IIS 5.1, but not when deployed to IIS 6. I'm using VS2010/ASP.NET 4.0, C#, jQuery 1.4.2 and jQuery UI 1.8.1, and the same browser in each case. It partially works, though: the code adds the controls to the page, but they aren't visible, even after I click them. I thought this was a CSS issue, but the styles are there too. The ajax calls look like this:
    $.ajax({
        url: "/WebServices/AssetManager.asmx/Assets",
        type: "POST",
        datatype: "json",
        async: false,
        data: "{'q':'" + req.term + "', 'type':'Condition'}",
        contentType: "application/javascript; charset=utf-8",
        success: function (data) {
            res($.map(data.d, function (item) {
                return { label: item.Name, value: item.Name, id: item.Id, datatype: item.DataType }
            }))
        }
    })
    Changing the content-type makes the autocomplete fail. I've quadruple-checked that all the paths are correct, there is no document footer enabled in IIS, and I'm not using IIS compression. Any idea why the page displays and works properly in IIS 5 but only partially in IIS 6? (If it failed completely, that'd make more sense!)

    Read the article

  • SSL over TDS, SQL Server 2005 Express

    - by reuvenab
    I capture packets sent/received by a Windows XP machine when connecting to SQL Server 2005 Express using TLS encryption. The server and client exchange Hello messages, then send the ChangeCipherSpec message, and then both the server and the client send a strange message that is not described in the TLS protocol. What is that message, and is SSL over TDS standards-compliant at all? Server-side capture:
    16 **SSL Handshake**
    03 01 00 4a 02 ServerHello
    00 00 46 03 01 4b dd 68 59 GMT
    33 13 37 98 10 5d 57 9d ff 71 70 dc d6 6f 9e 2c Random[00..13]
    cb 96 c0 2e b3 2f 9b 74 67 05 cc 96 Random[14..27]
    20 72 26 00 00 0f db 7f d9 b0 51 c2 4f cd 81 4c Session ID
    3f e3 d2 d1 da 55 c0 fe 9b 56 b7 6f 70 86 fe bb Session ID
    54 Session ID
    00 04 Cipher Suite
    00 Compression
    14 03 01 00 01 01 **ChangeCipherSpec**
    16 03 01 ???? Finished ???
    00 20 d0 da cc c4 36 11 43 ff 22 25 8a e1 38 2b ???? ???
    71 ce f3 59 9e 35 b0 be b2 4b 1d c5 21 21 ce 41 ???? ???
    8e 24

    Read the article

  • Ant: use include and exclude together

    - by Ken
    OK, this seems like it should be really simple. I'm using Apache Ant 1.8, and I have a target which does:
    <delete file="output/program.tar.bz2"/>
    <tar basedir="input" destfile="output/program.tar.bz2" compression="bzip2">
        <tarfileset dir="input">
            <include name="goodfolder1/**"/>
            <include name="goodfolder2/**"/>
            <exclude name="**/badfile"/>
            <exclude name="**/*.badext"/>
        </tarfileset>
    </tar>
    I want it to make a .tar.bz2 of input/goodfolder1 and input/goodfolder2, excluding files named "badfile" and files with extension ".badext". It's giving me a .tar.bz2, but it's including badfile and *.badext - the excludes seem to be ignored. The order of include/exclude doesn't seem to make a difference. I tried wrapping the includes/excludes in a <patternset> (the docs say it's implicit?), but it made no difference. I'm sure there's something simple I'm missing, since the manual has a very similar example, though in a somewhat different context.
    EDIT: It looks like it could be related to the dir="input" attribute: it's adding everything in "input", and then adding everything in the tarfileset to that. Files I want appear twice in program.tar.bz2, but files that are excluded only appear once. But dir is mandatory, and I don't see how this is different from the examples in the manual.
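    For what it's worth, the observation in the EDIT matches the symptoms: basedir on <tar> adds the whole input tree as an implicit fileset, and the nested <tarfileset> then adds its selection on top, so the excluded files still arrive via the implicit set. A sketch (untested) with the selection expressed only through the nested fileset, dropping basedir, which the tar task does not require when nested filesets are given:

        <delete file="output/program.tar.bz2"/>
        <tar destfile="output/program.tar.bz2" compression="bzip2">
            <tarfileset dir="input">
                <include name="goodfolder1/**"/>
                <include name="goodfolder2/**"/>
                <exclude name="**/badfile"/>
                <exclude name="**/*.badext"/>
            </tarfileset>
        </tar>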

    Read the article

  • Random-access archive for Unix use

    - by tylerl
    I'm looking for a good format for archiving entire file-systems of old Linux computers. TAR.GZ The tar.gz format is great for archiving files with UNIX-style attributes, but since the compression is applied across the entire archive, the design precludes random-access. Instead, if you want to access a file at the end of the archive, you have to start at the beginning and decompress the whole file (which could be several hundred GB) up to the point where you find the entry you're looking for. ZIP Conversely, one selling point of the ZIP format is that it stores an index of the archive: filenames are stored separately with pointers to the location within the archive were to find the data. If I want to extract a file at the end, I look up the position of that file by name, seek to the location, and extract the data. However, it doesn't store file attributes such as ownership, permissions, symbolic links, etc. Other options? I've tried using squashfs, but it's not really designed for this purpose. The file format is not consistent between versions, and building the archive takes a lot of time and space. What other options might suit this purpose better?

    Read the article

  • Packing an exe + dll into one executable (not .NET)

    - by Bluebird75
    Hi, is anybody aware of a program that can pack several DLLs and an .EXE into one executable? I am not talking about the .NET case here; I am talking about general DLLs, some of which I generate in C++, and some of which are external DLLs I have no control over. My specific case is a Python program packaged with py2exe, where I would like to "hide" the other DLLs by packing them. The question is general enough, though. The things I had a look at:
    ILMerge: specific to .NET
    NETZ: specific to .NET
    UPX: does DLL compression but not multiple DLL + EXE packing
    FileJoiner: almost got it. It can pack an executable + anything into one exe, but when opened, it will launch the default opener for every file that was packed. So, if the user has dlldepend installed, it will launch it (because that's the default DLL opener). Maybe that's not possible?
    Summary of the answers: DLL loading is managed by the OS, so packing DLLs into an executable means that at some point they need to be extracted to a place where the OS can find them. No magic bullet. So, what I want is not possible. Unless... we change something in the OS. Thanks Conrad for pointing me to ThinInstall, which virtualises the application and the OS loading mechanism. With ThinInstall, it is possible to pack everything into one exe (DLLs, registry settings, ...).

    Read the article

  • Auditing front end performance on web application

    - by user1018494
    I am currently trying to performance tune the UI of a company web application. The application is only ever going to be accessed by staff, so the speed of the connection between the server and client will always be considerably better than if it were on the internet. I have been using performance auditing tools such as YSlow! and Google Chrome's profiling tool to try and highlight areas that are worth targeting for investigation. However, these tools are written with the internet in mind. For example, the current suggestions from a Google Chrome audit of the application are as follows:
    Network Utilization
    - Combine external CSS (Red warning)
    - Combine external JavaScript (Red warning)
    - Enable gzip compression (Red warning)
    - Leverage browser caching (Red warning)
    - Leverage proxy caching (Amber warning)
    - Minimise cookie size (Amber warning)
    - Parallelize downloads across hostnames (Amber warning)
    - Serve static content from a cookieless domain (Amber warning)
    Web Page Performance
    - Remove unused CSS rules (Amber warning)
    - Use normal CSS property names instead of vendor-prefixed ones (Amber warning)
    Are any of these bits of advice totally redundant given the connection speed and usage pattern? The users will be using the application frequently throughout the day, so it doesn't matter if the initial hit is large (when they first visit the page and build their cache) so long as a minimal amount of work is done on future page views. For example, is it worth the effort of combining all of our CSS and JavaScript files? It may speed up the initial page view, but how much of a difference will it really make on subsequent page views throughout the working day? I've tried searching for this, but all I keep coming up with is the standard internet-facing performance advice. Any advice on what to focus my performance-tweaking efforts on in this scenario, or other auditing tool recommendations, would be much appreciated.

    Read the article

  • Symbols (pdb) for native dll are not loaded due to post build step

    - by sean e
    I have a native release DLL that is built with symbols. There is a post-build step that modifies the DLL; it does some compression and probably appends some data. The PDB file is still valid, however neither WinDbg nor Visual Studio 2008 will load the symbols for the DLL after the post-build step. What bits in either the PDB file or the DLL do we need to modify so that WinDbg or Visual Studio will load the symbols when it loads a dump in which our release DLL is referenced? Is it the file size that matters? A checksum or hash? A timestamp? Should we modify the dump? Or modify the PDB? Or modify the DLL before it is shipped? (We know the PDB is valid because we are able to use it to manually resolve symbol names for addresses in dump call stacks that reference the released DLL. It's just a total pain in the *ss to do it by hand for every address in every call stack in all the threads.)

    Read the article

  • Quantifying the amount of change in a git diff?

    - by Alex Feinman
    I use git for a slightly unusual purpose--it stores my text as I write fiction. (I know, I know...geeky.) I am trying to keep track of productivity, and want to measure the degree of difference between subsequent commits. The writer's proxy for "work" is "words written", at least during the creation stage. I can't use straight word count as it ignores editing and compression, both vital parts of writing. I think I want to track: (words added)+(words removed) which will double-count (words changed), but I'm okay with that. It'd be great to type some magic incantation and have git report this distance metric for any two revisions. However, git diffs are patches, which show entire lines even if you've only twiddled one character on the line; I don't want that, especially since my 'lines' are paragraphs. Ideally I'd even be able to specify what I mean by "word" (though \W+ would probably be acceptable). Is there a flag to git-diff to give diffs on a word-by-word basis? Alternately, is there a solution using standard command-line tools to compute the metric above?
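    A hedged suggestion (flag names are from git 1.7.2 and later, so worth checking against the installed version): git diff --word-diff=porcelain puts each run of added or removed words on its own line prefixed with + or -, which makes the added-plus-removed metric a one-line pipeline, and --word-diff-regex covers the "what counts as a word" part. REV1, REV2 and the *.txt path limit are placeholders:

        # Words added plus words removed between two revisions.
        # The grep keeps word lines (+foo / -bar) but drops the +++/--- file headers.
        git diff --word-diff=porcelain --word-diff-regex='[A-Za-z0-9]+' REV1 REV2 -- '*.txt' \
          | grep -E '^[+-][^+-]' \
          | wc -w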

    Read the article
