Search Results

Search found 19975 results on 799 pages for 'disk queue length'.


  • Windows 7 System Image Restore to SSD - SSD not detected?

    - by steviey
    I recently bought a new SSD (OCZ Vertex 2 120GB) as a replacement for my laptop HDD. I created a system image on an external drive using the Win 7 Backup and Restore tool and then restored this image to the SSD using the Win 7 disc. AHCI is enabled in Win 7 and the BIOS, but it seems that Windows is not detecting the disk as an SSD, because the disk is still available for defragging in the defragger, and Prefetch and Superfetch were also still enabled. I have manually disabled scheduled defragging, Prefetch, and Superfetch. I have also checked that TRIM is enabled using fsutil behavior query DisableDeleteNotify, and the result is '0' (so it is enabled). Performance seems OK, but not quite as fast as I expected. Was restoring from a Win 7-generated system image a mistake? Am I getting optimal SSD performance?

    Read the article

  • After deleting log files, Ubuntu server still says there is no space

    - by Mark
    My Ubuntu server has stopped due to a lack of disk space. I deleted some log files which had grown huge very quickly, but df -h still shows I have no space left. When I run du -sh /* I can see that I should have plenty of disk space left after deleting the logs. I ran lsof +L1 and it brought up two files: /var/log/mail.log and /var/log/mail.err. These are two logs I had deleted. I restarted apache, postfix and mysql (mysql won't restart because of the lack of disk space, I think), but df -h still shows no space.

    Read the article

  • Database implementation question? [closed]

    - by gundam
    Consider a disk with a sector size of 512 bytes, 2000 tracks/surface, 50 sectors/track, 5 double-sided platters, and an average seek time of 10 msec. Assume a block size of 1024 bytes is selected, and that a file containing 100,000 records of 100 bytes each is to be stored on the disk, where none of the records can span two blocks. How many blocks are needed to store the entire file? If the file is arranged sequentially on disk, how many surfaces are required? Now, I have calculated that 10,000 blocks are needed to store the 100,000 records, but I am not sure how to find the answer for the surfaces required. I have only calculated that the capacity of a track is 25 KB and the capacity of a surface is 50,000 KB, but I don't know how to calculate the number of surfaces. Could anyone help me get the answer? Thanks a lot!
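    A quick way to sanity-check the arithmetic is to work it out step by step. Below is a minimal C# sketch of that calculation; it assumes records cannot span blocks and blocks cannot span tracks, and that the file is laid out track by track, which matches the figures given above.

    using System;

    class DiskLayout
    {
        static void Main()
        {
            const int sectorSize = 512;          // bytes per sector
            const int sectorsPerTrack = 50;
            const int tracksPerSurface = 2000;
            const int blockSize = 1024;          // bytes per block (2 sectors)
            const int recordSize = 100;          // bytes per record
            const int recordCount = 100000;

            int recordsPerBlock = blockSize / recordSize;                                  // 10, since records may not span blocks
            int blocksNeeded = (recordCount + recordsPerBlock - 1) / recordsPerBlock;      // 10,000
            int blocksPerTrack = (sectorsPerTrack * sectorSize) / blockSize;               // 25
            int tracksNeeded = (blocksNeeded + blocksPerTrack - 1) / blocksPerTrack;       // 400
            int surfacesNeeded = (tracksNeeded + tracksPerSurface - 1) / tracksPerSurface; // 1

            Console.WriteLine($"blocks={blocksNeeded}, tracks={tracksNeeded}, surfaces={surfacesNeeded}");
        }
    }

    Under those assumptions the 10,000 blocks occupy 400 of the 2,000 tracks on one surface, so a single surface appears to be enough; the 50,000 KB-per-surface figure above leads to the same conclusion.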

    Read the article

  • Mimicking Google's Persistent Disks -- Is this a logical FreeBSD disaster recovery strategy?

    - by Casey Jordan
    I am looking into FreeBSD to provide a more comprehensive backup and disaster recovery strategy for database servers. Ideally I want to mimic what Google is doing with "Persistent Disks" https://developers.google.com/compute/docs/disks#snapshots I am hoping someone who knows more about FreeBSD can validate these ideas/questions: I have read that FreeBSD can take instant disk snapshots, so if our databases trigger a consistent state (block all writes and flush buffers to disk), I would assume I could take snapshots every hour without service interruption of more than a few seconds. Is this true? Is there a way to take snapshots and back them up offsite easily? Can this be done incrementally, so as to save on how much disk space is actually used? If a rollback needed to be done, how long does this typically take? Is a rollback also instantaneous? Thanks!

    Read the article

  • Trouble loading an ISO

    - by crocyson
    I've created a boot disk and ISO image with Paragon. I boot up the virtual PC with the disk in and I get to my recovery page, but when I am supposed to select an ISO image, my virtual PC doesn't acknowledge any of the physical hard drives on my host PC. I've also tried copying/pasting the files into my datastore and they do not show up. I've tried the option during setup to start with the ISO, but again I am not able to browse to the external hard drive that I have stored the ISO on. VMware will acknowledge it and say that it is connected, but I can't browse to it. Am I doing something wrong? I created the backup disk and ISO image on the external drive with Paragon.

    Read the article

  • LVM vs RAID0 vs RAID "linear" - Combine 2 disks as one, data recovery?

    - by leto
    Hi there, given two 2TB USB external disks that have to be combined into one 4TB volume and formatted with one big filesystem (XFS), I have a small question to ask. Does LVM provide better data recovery should one disk be unplugged or damaged, i.e. can the data on the still-working disk be recovered, or is everything lost? I would appreciate a solution where only the data of one disk is lost and I can recover the contents of the other with the usual filesystem/LVM/RAID tools. Is that possible with LVM or RAID "linear"? This is for storing unimportant files that can be retrieved from backup, but I want to save time :) Thank you in advance.

    Read the article

  • Recovering VM data from a crashed XenServer install

    - by user58983
    We're using XenServer 5.6 from Citrix to run our virtual machines. The box is old and creaky, and the hard drive went. (We had a super limited budget of $0 at the time.) It turns out the only machine on there we actually needed was the only one not backed up... Anyway, we've replaced the hard drive and reinstalled XenServer. However, while trying to find our old VM disk images, I have no idea where they sit, and I'm beginning to think they're on an LVM partition. Is there any way to access the disk images from the old VMs on this hard disk, which is attached to the XenServer machine via a USB-SATA converter?

    Read the article

  • Accessing a broken mdadm RAID

    - by CarstenCarsten
    Hi! I used a western digital mybookworld (SOHO NAS storage using Linux) as backup for my Linux box. Suddenly, the mybookworld does not boot up any more. So I opened the box, removed the hard disk and put the hard disk into an external USB HDD case, and connected it to my Linux box. [ 530.640301] usb 2-1: new high speed USB device using ehci_hcd and address 3 [ 530.797630] scsi7 : usb-storage 2-1:1.0 [ 531.794844] scsi 7:0:0:0: Direct-Access WDC WD75 00AAKS-00RBA0 PQ: 0 ANSI: 2 [ 531.796490] sd 7:0:0:0: Attached scsi generic sg3 type 0 [ 531.797966] sd 7:0:0:0: [sdc] 1465149168 512-byte logical blocks: (750 GB/698 GiB) [ 531.800317] sd 7:0:0:0: [sdc] Write Protect is off [ 531.800327] sd 7:0:0:0: [sdc] Mode Sense: 38 00 00 00 [ 531.800333] sd 7:0:0:0: [sdc] Assuming drive cache: write through [ 531.803821] sd 7:0:0:0: [sdc] Assuming drive cache: write through [ 531.803836] sdc: sdc1 sdc2 sdc3 sdc4 [ 531.815831] sd 7:0:0:0: [sdc] Assuming drive cache: write through [ 531.815842] sd 7:0:0:0: [sdc] Attached SCSI disk The dmesg output looks normal, but I was wondering why the hardisk was not mounted at all. And why there are 4 different partitions on it. fdisk showed the following: root@ubuntu:/home/ubuntu# fdisk /dev/sdc WARNING: DOS-compatible mode is deprecated. It's strongly recommended to switch off the mode (command 'c') and change display units to sectors (command 'u'). Command (m for help): p Disk /dev/sdc: 750.2 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00007c00 Device Boot Start End Blocks Id System /dev/sdc1 4 369 2939895 fd Linux raid autodetect /dev/sdc2 370 382 104422+ fd Linux raid autodetect /dev/sdc3 383 505 987997+ fd Linux raid autodetect /dev/sdc4 506 91201 728515620 fd Linux raid autodetect Oh no! Everything seems to be created as a mdadm software raid. Calling mdadm --examine with the different partitions seems to affirm that. I think the only partition I am interested in, is /dev/sdc4 (because it is the largest). But nevertheless I called mdadm --examine with every partition. 
root@ubuntu:/home/ubuntu# mdadm --examine /dev/sdc1 /dev/sdc1: Magic : a92b4efc Version : 00.90.00 UUID : 5626a2d8:070ad992:ef1c8d24:cd8e13e4 Creation Time : Wed Feb 20 00:57:49 2002 Raid Level : raid1 Used Dev Size : 2939776 (2.80 GiB 3.01 GB) Array Size : 2939776 (2.80 GiB 3.01 GB) Raid Devices : 2 Total Devices : 1 Preferred Minor : 1 Update Time : Sun Nov 21 11:05:27 2010 State : clean Active Devices : 1 Working Devices : 1 Failed Devices : 1 Spare Devices : 0 Checksum : 4c90bc55 - correct Events : 16682 Number Major Minor RaidDevice State this 0 8 1 0 active sync /dev/sda1 0 0 8 1 0 active sync /dev/sda1 1 1 0 0 1 faulty removed root@ubuntu:/home/ubuntu# mdadm --examine /dev/sdc2 /dev/sdc2: Magic : a92b4efc Version : 00.90.00 UUID : 9734b3ee:2d5af206:05fe3413:585f7f26 Creation Time : Wed Feb 20 00:57:54 2002 Raid Level : raid1 Used Dev Size : 104320 (101.89 MiB 106.82 MB) Array Size : 104320 (101.89 MiB 106.82 MB) Raid Devices : 2 Total Devices : 1 Preferred Minor : 2 Update Time : Wed Oct 27 20:19:08 2010 State : clean Active Devices : 1 Working Devices : 1 Failed Devices : 1 Spare Devices : 0 Checksum : 55560b40 - correct Events : 9884 Number Major Minor RaidDevice State this 0 8 2 0 active sync /dev/sda2 0 0 8 2 0 active sync /dev/sda2 1 1 0 0 1 faulty removed root@ubuntu:/home/ubuntu# mdadm --examine /dev/sdc3 /dev/sdc3: Magic : a92b4efc Version : 00.90.00 UUID : 08f30b4f:91cca15d:2332bfef:48e67824 Creation Time : Wed Feb 20 00:57:54 2002 Raid Level : raid1 Used Dev Size : 987904 (964.91 MiB 1011.61 MB) Array Size : 987904 (964.91 MiB 1011.61 MB) Raid Devices : 2 Total Devices : 1 Preferred Minor : 3 Update Time : Sun Nov 21 11:05:27 2010 State : clean Active Devices : 1 Working Devices : 1 Failed Devices : 1 Spare Devices : 0 Checksum : 39717874 - correct Events : 73678 Number Major Minor RaidDevice State this 0 8 3 0 active sync 0 0 8 3 0 active sync 1 1 0 0 1 faulty removed root@ubuntu:/home/ubuntu# mdadm --examine /dev/sdc4 /dev/sdc4: Magic : a92b4efc Version : 00.90.00 UUID : febb75ca:e9d1ce18:f14cc006:f759419a Creation Time : Wed Feb 20 00:57:55 2002 Raid Level : raid1 Used Dev Size : 728515520 (694.77 GiB 746.00 GB) Array Size : 728515520 (694.77 GiB 746.00 GB) Raid Devices : 2 Total Devices : 1 Preferred Minor : 4 Update Time : Sun Nov 21 11:05:27 2010 State : clean Active Devices : 1 Working Devices : 1 Failed Devices : 1 Spare Devices : 0 Checksum : 2f36a392 - correct Events : 519320 Number Major Minor RaidDevice State this 0 8 4 0 active sync 0 0 8 4 0 active sync 1 1 0 0 1 faulty removed If I read the output correctly everything was removed, because it was faulty. Is there ANY way to see the contents of the largest partition? Or seeing somehow which files are broken? I see that everything is raid1 which is only mirroring, so this should be a normal partition. I am anxious to do anything with mdadm, in fear that I destroy the data on the hard disk. I would be very thankful for any help.

    Read the article

  • GRUB boot failed after upgrading to Ubuntu 12.04 LTS

    - by user138021
    A "no such disk" error occurred. I tried to format and reinstall 12.04, but the problem remained. I also repaired with boot-repair; the problem remained. http://paste.ubuntu.com/1224005/ is the URL of the details. The server is a Dell R710, the BIOS is set to UEFI, and the disk is RAID0 with GPT. I typed commands in grub rescue: ls (hd0,gpt2) shows 'no such partition'; set shows prefix=((null),gpt2)/grub. I don't know why /efi/ubuntu/grubx64.efi doesn't recognize the disk. Another strange thing is that there is only one file in /efi/ubuntu.

    Read the article

  • OS X Leopard (10.5) Server -- Unwanted Emails

    - by David
    Our OS X Leopard (10.5) server is sending out an email EVERY DAY with a Disk Full notification for an external HDD we use for Time Machine backups (Time Machine manages its own disk space, so the disk-full notifications are useless). In the Server Admin app, under Settings > Notifications, the setting for this notification is clearly disabled. The email header and message: http://pastebin.com/V5PZ3Phk I've been told to check mail.log, but it's mostly empty (except for a few entries in 2011 saying "Postfix mail system is not running"). I've also been told to check the crontabs, but I'm not sure how to do that on OS X (or any OS for that matter). Can someone explain how to do this? Also, if there are any other suggestions on places/things to look at, my ears are all open!

    Read the article

  • Unable to read DVD on laptop

    - by usabilitest
    I have a DVD that I created on an HP laptop. I believe it was set up to be accessed as a USB device. I do not have that laptop any longer, and I would still like to access the files on that DVD from my current Dell laptop, but it cannot open the disc. My guess is that the session was not closed on the disc, and for some reason the new laptop cannot read it. My question is: is there any way I can access the files using the new laptop, or do I need to find a similar HP laptop to try to access the files and possibly close the session? Are there any utilities out there that could help me? Thanks.

    Read the article

  • Check SSD stability before returning it as broken?

    - by data_jepp
    I have the Transcend SSD 720 256GB 2.5" 7mm disk, and randomly I get disk I/O errors in uTorrent, and copying large files to an external drive will fail. I have removed it from my laptop since I need maximum stability; I use my laptop in exams. How can I really bench it / make it sweat while it is connected to my desktop? I want to re-create some of the failures, as I need proof if I am to return it. The firmware is updated, and the BIOS on the laptop is updated. I could just completely zero the disk and then check for bad sectors, but is there anything else I could use for testing? Edit: It has 7 bad sector replacement operations. What does that mean?

    Read the article

  • Trouble determining proper decoding of a REST response from an ArcGIS REST service using IHttpModule

    - by Ryan Taylor
    First a little background on what I am trying to achieve. I have an application that is utilizing REST services served by ArcGIS Server and IIS7. The REST services return data in one of several different formats. I am requesting a JSON response. I want to be able to modify the response (remove or add parameters) before the response is sent to the client. However, I am having difficulty converting the stream to a string that I can modify. To that end, I have implemented the following code in order to try to inspect the stream. SecureModule.cs using System; using System.Web; namespace SecureModuleTest { public class SecureModule : IHttpModule { public void Init(HttpApplication context) { context.BeginRequest += new EventHandler(OnBeginRequest); } public void Dispose() { } public void OnBeginRequest(object sender, EventArgs e) { HttpApplication application = (HttpApplication) sender; HttpContext context = application.Context; HttpRequest request = context.Request; HttpResponse response = context.Response; response.Filter = new ServicesFilter(response.Filter); } } } ServicesFilter.cs using System; using System.IO; using System.Text; namespace SecureModuleTest { class ServicesFilter : MemoryStream { private readonly Stream _outputStream; private StringBuilder _content; public ServicesFilter(Stream output) { _outputStream = output; _content = new StringBuilder(); } public override void Write(byte[] buffer, int offset, int count) { _content.Append(Encoding.UTF8.GetString(buffer, offset, count)); using (TextWriter textWriter = new StreamWriter(@"C:\temp\content.txt", true)) { textWriter.WriteLine(String.Format("Buffer: {0}", _content.ToString())); textWriter.WriteLine(String.Format("Length: {0}", buffer.Length)); textWriter.WriteLine(String.Format("Offset: {0}", offset)); textWriter.WriteLine(String.Format("Count: {0}", count)); textWriter.WriteLine(""); textWriter.Close(); } // Modify response _outputStream.Write(buffer, offset, count); } } } The module is installed in the /ArcGIS/rest/ virtual directory and is executed via the following GET request. http://localhost/ArcGIS/rest/services/?f=json&pretty=true The web page displays the expected response, however, the text file tells a very different (encoded?) story. Expect Response {"currentVersion" : "10.0", "folders" : [], "services" : [ ] } Text File Contents Buffer: ? ?`I?%&/m?{J?J??t??`$?@??????iG#)?*??eVe]f@????{???{???;?N'????\fdl??J??!????~|?"~?G?u]???'?)??G?????G??7N????W??{?????,??|?OR????q? Length: 4096 Offset: 0 Count: 168 Buffer: ? ?`I?%&/m?{J?J??t??`$?@??????iG#)?*??eVe]f@????{???{???;?N'????\fdl??J??!????~|?"~?G?u]???'?)??G?????G??7N????W??{?????,??|?OR????q?K???!P Length: 4096 Offset: 0 Count: 11 Interestingly, Fiddler depicts a similar picture. Fiddler Request GET http://localhost/ArcGIS/rest/services/?f=json&pretty=true HTTP/1.1 Host: localhost Connection: keep-alive User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/533.4 (KHTML, like Gecko) Chrome/5.0.375.70 Safari/533.4 Referer: http://localhost/ArcGIS/rest/services Cache-Control: no-cache Pragma: no-cache Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5 Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en;q=0.8 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 Cookie: a=mWz_JFOusuGPnS3w5xx1BSUuyKGB3YZo92Dy2SUntP2MFWa8MaVq6a4I_IYBLKuefXDZANQMeqvxdGBgQoqTKz__V5EQLHwxmKlUNsaK7do. 
Fiddler Response - Before Clicking Decode HTTP/1.1 200 OK Content-Type: text/plain;charset=utf-8 Content-Encoding: gzip ETag: 719143506 Server: Microsoft-IIS/7.5 X-Powered-By: ASP.NET Date: Thu, 10 Jun 2010 01:08:43 GMT Content-Length: 179 ????????`I?%&/m?{J?J??t??`$?@??????iG#)?*??eVe]f@????{???{???;?N'????\fdl??J??!????~|?"~?G?u]???'?)??G?????G??7N????W??{?????,??|?OR????q?K???! P??? Fiddler Response - After Clicking Decode HTTP/1.1 200 OK Content-Type: text/plain;charset=utf-8 ETag: 719143506 Server: Microsoft-IIS/7.5 X-Powered-By: ASP.NET Date: Thu, 10 Jun 2010 01:08:43 GMT Content-Length: 80 {"currentVersion" : "10.0", "folders" : [], "services" : [ ] } I think that the problem may be a result of compression and/or chunking of data (this might be why I am receiving two calls to ServicesFilter.Write(...), however, I have not yet been able to solve the issue. How might I decode, unzip, and otherwise convert the byte stream into the string I know it should be for modification by my filter?
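    A note on the garbled buffer: the Fiddler capture shows Content-Encoding: gzip, so the bytes reaching the filter's Write are the compressed body, not UTF-8 text. They have to be decompressed before the JSON can be edited (and re-compressed, or sent with the Content-Encoding header removed, on the way out). A minimal, hedged C# sketch of just the decompression step is below; buffering across multiple Write calls (chunking) and the re-encoding are left out, and the names are illustrative.

    using System.IO;
    using System.IO.Compression;
    using System.Text;

    static class GzipBody
    {
        // Decompress a gzip-encoded response body into a string (assumes UTF-8 text).
        public static string Decompress(byte[] compressed)
        {
            using (var input = new MemoryStream(compressed))
            using (var gzip = new GZipStream(input, CompressionMode.Decompress))
            using (var reader = new StreamReader(gzip, Encoding.UTF8))
            {
                return reader.ReadToEnd();
            }
        }
    }

    Because Write can be called more than once, a common pattern is to accumulate all incoming bytes in the filter and only decompress, modify, re-compress and forward them when the filter is flushed or closed.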

    Read the article

  • Why does async BeginReceiveFrom never time out on a raw socket?

    - by James Hugard
    Writing an asynchronous Ping using Raw Sockets in F#, to enable parallel requests using as few threads as possible. Not using "System.Net.NetworkInformation.Ping", because it appears to allocate one thread per request. Am also interested in using F# async workflows. The synchronous version below correctly times out when the target host does not exist/respond, but the asynchronous version hangs. Both work when the host does respond. Not sure if this is a .NET issue, or an F# one... Any ideas? (note: the process must run as Admin to allow Raw Socket access) This throws a timeout: let result = Ping.Ping ( IPAddress.Parse( "192.168.33.22" ), 1000 ) However, this hangs: let result = Ping.AsyncPing ( IPAddress.Parse( "192.168.33.22" ), 1000 ) |> Async.RunSynchronously Here's the code... module Ping open System open System.Net open System.Net.Sockets open System.Threading //---- ICMP Packet Classes type IcmpMessage (t : byte) = let mutable m_type = t let mutable m_code = 0uy let mutable m_checksum = 0us member this.Type with get() = m_type member this.Code with get() = m_code member this.Checksum = m_checksum abstract Bytes : byte array default this.Bytes with get() = [| m_type m_code byte(m_checksum) byte(m_checksum >>> 8) |] member this.GetChecksum() = let mutable sum = 0ul let bytes = this.Bytes let mutable i = 0 // Sum up uint16s while i < bytes.Length - 1 do sum <- sum + uint32(BitConverter.ToUInt16( bytes, i )) i <- i + 2 // Add in last byte, if an odd size buffer if i <> bytes.Length then sum <- sum + uint32(bytes.[i]) // Shuffle the bits sum <- (sum >>> 16) + (sum &&& 0xFFFFul) sum <- sum + (sum >>> 16) sum <- ~~~sum uint16(sum) member this.UpdateChecksum() = m_checksum <- this.GetChecksum() type InformationMessage (t : byte) = inherit IcmpMessage(t) let mutable m_identifier = 0us let mutable m_sequenceNumber = 0us member this.Identifier = m_identifier member this.SequenceNumber = m_sequenceNumber override this.Bytes with get() = Array.append (base.Bytes) [| byte(m_identifier) byte(m_identifier >>> 8) byte(m_sequenceNumber) byte(m_sequenceNumber >>> 8) |] type EchoMessage() = inherit InformationMessage( 8uy ) let mutable m_data = Array.create 32 32uy do base.UpdateChecksum() member this.Data with get() = m_data and set(d) = m_data <- d this.UpdateChecksum() override this.Bytes with get() = Array.append (base.Bytes) (this.Data) //---- Synchronous Ping let Ping (host : IPAddress, timeout : int ) = let mutable ep = new IPEndPoint( host, 0 ) let socket = new Socket( AddressFamily.InterNetwork, SocketType.Raw, ProtocolType.Icmp ) socket.SetSocketOption( SocketOptionLevel.Socket, SocketOptionName.SendTimeout, timeout ) socket.SetSocketOption( SocketOptionLevel.Socket, SocketOptionName.ReceiveTimeout, timeout ) let packet = EchoMessage() let mutable buffer = packet.Bytes try if socket.SendTo( buffer, ep ) <= 0 then raise (SocketException()) buffer <- Array.create (buffer.Length + 20) 0uy let mutable epr = ep :> EndPoint if socket.ReceiveFrom( buffer, &epr ) <= 0 then raise (SocketException()) finally socket.Close() buffer //---- Entensions to the F# Async class to allow up to 5 paramters (not just 3) type Async with static member FromBeginEnd(arg1,arg2,arg3,arg4,beginAction,endAction,?cancelAction): Async<'T> = Async.FromBeginEnd((fun (iar,state) -> beginAction(arg1,arg2,arg3,arg4,iar,state)), endAction, ?cancelAction=cancelAction) static member FromBeginEnd(arg1,arg2,arg3,arg4,arg5,beginAction,endAction,?cancelAction): Async<'T> = Async.FromBeginEnd((fun (iar,state) -> 
beginAction(arg1,arg2,arg3,arg4,arg5,iar,state)), endAction, ?cancelAction=cancelAction) //---- Extensions to the Socket class to provide async SendTo and ReceiveFrom type System.Net.Sockets.Socket with member this.AsyncSendTo( buffer, offset, size, socketFlags, remoteEP ) = Async.FromBeginEnd( buffer, offset, size, socketFlags, remoteEP, this.BeginSendTo, this.EndSendTo ) member this.AsyncReceiveFrom( buffer, offset, size, socketFlags, remoteEP ) = Async.FromBeginEnd( buffer, offset, size, socketFlags, remoteEP, this.BeginReceiveFrom, (fun asyncResult -> this.EndReceiveFrom(asyncResult, remoteEP) ) ) //---- Asynchronous Ping let AsyncPing (host : IPAddress, timeout : int ) = async { let ep = IPEndPoint( host, 0 ) use socket = new Socket( AddressFamily.InterNetwork, SocketType.Raw, ProtocolType.Icmp ) socket.SetSocketOption( SocketOptionLevel.Socket, SocketOptionName.SendTimeout, timeout ) socket.SetSocketOption( SocketOptionLevel.Socket, SocketOptionName.ReceiveTimeout, timeout ) let packet = EchoMessage() let outbuffer = packet.Bytes try let! result = socket.AsyncSendTo( outbuffer, 0, outbuffer.Length, SocketFlags.None, ep ) if result <= 0 then raise (SocketException()) let epr = ref (ep :> EndPoint) let inbuffer = Array.create (outbuffer.Length + 256) 0uy let! result = socket.AsyncReceiveFrom( inbuffer, 0, inbuffer.Length, SocketFlags.None, epr ) if result <= 0 then raise (SocketException()) return inbuffer finally socket.Close() }
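    For what it's worth, SendTimeout and ReceiveTimeout only apply to the synchronous Send*/Receive* calls; the Begin*/End* pair is not bounded by them, which matches the hang described above. Below is a hedged sketch (in C#, with illustrative names) of one common workaround: wait on the IAsyncResult's wait handle and close the socket on timeout.

    using System;
    using System.Net;
    using System.Net.Sockets;

    static class SocketTimeouts
    {
        // ReceiveFrom with an explicit timeout for the asynchronous path.
        public static int ReceiveFromWithTimeout(Socket socket, byte[] buffer,
                                                 ref EndPoint remoteEP, int timeoutMs)
        {
            IAsyncResult ar = socket.BeginReceiveFrom(buffer, 0, buffer.Length,
                                                      SocketFlags.None, ref remoteEP,
                                                      null, null);
            if (!ar.AsyncWaitHandle.WaitOne(timeoutMs))
            {
                socket.Close();   // aborts the pending receive
                throw new SocketException((int)SocketError.TimedOut);
            }
            return socket.EndReceiveFrom(ar, ref remoteEP);
        }
    }

    In the F# version above, Async.RunSynchronously also accepts an optional timeout argument, which is another place the overall limit could be enforced.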

    Read the article

  • How to detect a timeout when using asynchronous Socket.BeginReceive?

    - by James Hugard
    Writing an asynchronous Ping using Raw Sockets in F#, to enable parallel requests using as few threads as possible. Not using "System.Net.NetworkInformation.Ping", because it appears to allocate one thread per request. Am also interested in using F# async workflows. The synchronous version below correctly times out when the target host does not exist/respond, but the asynchronous version hangs. Both work when the host does respond. Not sure if this is a .NET issue, or an F# one... Any ideas? (note: the process must run as Admin to allow Raw Socket access) This throws a timeout: let result = Ping.Ping ( IPAddress.Parse( "192.168.33.22" ), 1000 ) However, this hangs: let result = Ping.AsyncPing ( IPAddress.Parse( "192.168.33.22" ), 1000 ) |> Async.RunSynchronously Here's the code... module Ping open System open System.Net open System.Net.Sockets open System.Threading //---- ICMP Packet Classes type IcmpMessage (t : byte) = let mutable m_type = t let mutable m_code = 0uy let mutable m_checksum = 0us member this.Type with get() = m_type member this.Code with get() = m_code member this.Checksum = m_checksum abstract Bytes : byte array default this.Bytes with get() = [| m_type m_code byte(m_checksum) byte(m_checksum >>> 8) |] member this.GetChecksum() = let mutable sum = 0ul let bytes = this.Bytes let mutable i = 0 // Sum up uint16s while i < bytes.Length - 1 do sum <- sum + uint32(BitConverter.ToUInt16( bytes, i )) i <- i + 2 // Add in last byte, if an odd size buffer if i <> bytes.Length then sum <- sum + uint32(bytes.[i]) // Shuffle the bits sum <- (sum >>> 16) + (sum &&& 0xFFFFul) sum <- sum + (sum >>> 16) sum <- ~~~sum uint16(sum) member this.UpdateChecksum() = m_checksum <- this.GetChecksum() type InformationMessage (t : byte) = inherit IcmpMessage(t) let mutable m_identifier = 0us let mutable m_sequenceNumber = 0us member this.Identifier = m_identifier member this.SequenceNumber = m_sequenceNumber override this.Bytes with get() = Array.append (base.Bytes) [| byte(m_identifier) byte(m_identifier >>> 8) byte(m_sequenceNumber) byte(m_sequenceNumber >>> 8) |] type EchoMessage() = inherit InformationMessage( 8uy ) let mutable m_data = Array.create 32 32uy do base.UpdateChecksum() member this.Data with get() = m_data and set(d) = m_data <- d this.UpdateChecksum() override this.Bytes with get() = Array.append (base.Bytes) (this.Data) //---- Synchronous Ping let Ping (host : IPAddress, timeout : int ) = let mutable ep = new IPEndPoint( host, 0 ) let socket = new Socket( AddressFamily.InterNetwork, SocketType.Raw, ProtocolType.Icmp ) socket.SetSocketOption( SocketOptionLevel.Socket, SocketOptionName.SendTimeout, timeout ) socket.SetSocketOption( SocketOptionLevel.Socket, SocketOptionName.ReceiveTimeout, timeout ) let packet = EchoMessage() let mutable buffer = packet.Bytes try if socket.SendTo( buffer, ep ) <= 0 then raise (SocketException()) buffer <- Array.create (buffer.Length + 20) 0uy let mutable epr = ep :> EndPoint if socket.ReceiveFrom( buffer, &epr ) <= 0 then raise (SocketException()) finally socket.Close() buffer //---- Entensions to the F# Async class to allow up to 5 paramters (not just 3) type Async with static member FromBeginEnd(arg1,arg2,arg3,arg4,beginAction,endAction,?cancelAction): Async<'T> = Async.FromBeginEnd((fun (iar,state) -> beginAction(arg1,arg2,arg3,arg4,iar,state)), endAction, ?cancelAction=cancelAction) static member FromBeginEnd(arg1,arg2,arg3,arg4,arg5,beginAction,endAction,?cancelAction): Async<'T> = Async.FromBeginEnd((fun (iar,state) -> 
beginAction(arg1,arg2,arg3,arg4,arg5,iar,state)), endAction, ?cancelAction=cancelAction) //---- Extensions to the Socket class to provide async SendTo and ReceiveFrom type System.Net.Sockets.Socket with member this.AsyncSendTo( buffer, offset, size, socketFlags, remoteEP ) = Async.FromBeginEnd( buffer, offset, size, socketFlags, remoteEP, this.BeginSendTo, this.EndSendTo ) member this.AsyncReceiveFrom( buffer, offset, size, socketFlags, remoteEP ) = Async.FromBeginEnd( buffer, offset, size, socketFlags, remoteEP, this.BeginReceiveFrom, (fun asyncResult -> this.EndReceiveFrom(asyncResult, remoteEP) ) ) //---- Asynchronous Ping let AsyncPing (host : IPAddress, timeout : int ) = async { let ep = IPEndPoint( host, 0 ) use socket = new Socket( AddressFamily.InterNetwork, SocketType.Raw, ProtocolType.Icmp ) socket.SetSocketOption( SocketOptionLevel.Socket, SocketOptionName.SendTimeout, timeout ) socket.SetSocketOption( SocketOptionLevel.Socket, SocketOptionName.ReceiveTimeout, timeout ) let packet = EchoMessage() let outbuffer = packet.Bytes try let! result = socket.AsyncSendTo( outbuffer, 0, outbuffer.Length, SocketFlags.None, ep ) if result <= 0 then raise (SocketException()) let epr = ref (ep :> EndPoint) let inbuffer = Array.create (outbuffer.Length + 256) 0uy let! result = socket.AsyncReceiveFrom( inbuffer, 0, inbuffer.Length, SocketFlags.None, epr ) if result <= 0 then raise (SocketException()) return inbuffer finally socket.Close() }

    Read the article

  • KeepAliveException when using HttpWebRequest.GetResponse

    - by Lucas
    I am trying to POST an attachment to CouchDB using the HttpWebRequest. However, when I attempt "response = (HttpWebResponse)httpWebRequest.GetResponse();" I receive a WebException with the message "The underlying connection was closed: A connection that was expected to be kept alive was closed by the server." I have found some articles stating that setting the keepalive to false and httpversion to 1.0 resolves the situation. I am finding that it does not yeilding the exact same error, plus I do not want to take that approach as I do not want to use the 1.0 version due to how it handles the connection. Any suggestions or ideas are welcome. I'll try them all until one works! public ServerResponse PostAttachment(Server server, Database db, Attachment attachment) { Stream dataStream; HttpWebResponse response = null; StreamReader sr = null; byte[] buffer; string json; string boundary = "----------------------------" + DateTime.Now.Ticks.ToString("x"); string headerTemplate = "Content-Disposition: form-data; name=\"_attachments\"; filename=\"" + attachment.Filename + "\"\r\n Content-Type: application/octet-stream\r\n\r\n"; byte[] headerbytes = System.Text.Encoding.UTF8.GetBytes(headerTemplate); byte[] boundarybytes = System.Text.Encoding.ASCII.GetBytes("\r\n--" + boundary + "\r\n"); HttpWebRequest httpWebRequest = (HttpWebRequest)WebRequest.Create("http://" + server.Host + ":" + server.Port.ToString() + "/" + db.Name + "/" + attachment.Document.Id); httpWebRequest.ContentType = "multipart/form-data; boundary=" + boundary; httpWebRequest.Method = "POST"; httpWebRequest.KeepAlive = true; httpWebRequest.ContentLength = attachment.Stream.Length + headerbytes.Length + boundarybytes.Length; if (!string.IsNullOrEmpty(server.EncodedCredentials)) httpWebRequest.Headers.Add("Authorization", server.EncodedCredentials); if (!attachment.Stream.CanRead) throw new System.NotSupportedException("The stream cannot be read."); // Get the request stream try { dataStream = httpWebRequest.GetRequestStream(); } catch (Exception e) { throw new WebException("Failed to get the request stream.", e); } buffer = new byte[server.BufferSize]; int bytesRead; dataStream.Write(headerbytes,0,headerbytes.Length); attachment.Stream.Position = 0; while ((bytesRead = attachment.Stream.Read(buffer, 0, buffer.Length)) > 0) { dataStream.Write(buffer, 0, bytesRead); } dataStream.Write(boundarybytes, 0, boundarybytes.Length); dataStream.Close(); // send the request and get the response try { response = (HttpWebResponse)httpWebRequest.GetResponse(); } catch (Exception e) { throw new WebException("Invalid response received from server.", e); } // get the server's response json try { dataStream = response.GetResponseStream(); sr = new StreamReader(dataStream); json = sr.ReadToEnd(); } catch (Exception e) { throw new WebException("Failed to access the response stream.", e); } // close up all our streams and response sr.Close(); dataStream.Close(); response.Close(); // Deserialize the server response return ConvertTo.JsonToServerResponse(json); }
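    Independent of the keep-alive setting, the multipart framing itself is worth double-checking: each part must be preceded by a "--boundary" line, the body must end with "--boundary--", and the declared Content-Length has to match exactly what is written; a mismatch there is one common cause of the "connection that was expected to be kept alive was closed" error. Below is a hedged sketch of building a single-part multipart/form-data body in memory; the names are illustrative and CouchDB-specific details are left out.

    using System.IO;
    using System.Text;

    static class Multipart
    {
        // Build a single-part multipart/form-data body around raw file bytes.
        public static byte[] Build(string boundary, string fieldName, string fileName, byte[] fileBytes)
        {
            string header = "--" + boundary + "\r\n" +
                            "Content-Disposition: form-data; name=\"" + fieldName + "\"; filename=\"" + fileName + "\"\r\n" +
                            "Content-Type: application/octet-stream\r\n\r\n";
            string footer = "\r\n--" + boundary + "--\r\n";

            using (var ms = new MemoryStream())
            {
                byte[] headerBytes = Encoding.UTF8.GetBytes(header);
                byte[] footerBytes = Encoding.ASCII.GetBytes(footer);
                ms.Write(headerBytes, 0, headerBytes.Length);
                ms.Write(fileBytes, 0, fileBytes.Length);
                ms.Write(footerBytes, 0, footerBytes.Length);
                return ms.ToArray();   // ContentLength can then be set to this array's length
            }
        }
    }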

    Read the article

  • Apple Push Notification feedback service not working

    - by Yassmeen
    Hi, I am developing an iPhone App that uses Apple Push Notifications. On the iPhone side everything is fine, on the server side I have a problem. Notifications are sent correctly however when I try to query the feedback service to obtain a list of devices from which the App has been uninstalled, I always get zero results. I know that I should obtain one result as the App has been uninstalled from one of my test devices. After 24 hours and more I still have no results from the feedback service.. Any ideas? Does anybody know how long it takes for the feedback service to recognize that my App has been uninstalled from my test device? Note: I have another push notification applications on the device so I know that my app is not the only app. The code - C#: public static string CheckFeedbackService(string certaName, string hostName) { SYLogger.Log("Check Feedback Service Started"); ServicePointManager.ServerCertificateValidationCallback = new RemoteCertificateValidationCallback(ValidateServerCertificate); // Create a TCP socket connection to the Apple server on port 2196 TcpClient tcpClientF = null; SslStream sslStreamF = null; string result = string.Empty; //Contect to APNS& Add the Apple cert to our collection X509Certificate2Collection certs = new X509Certificate2Collection { GetServerCert(certaName) }; //Set up byte[] buffer = new byte[38]; int recd = 0; DateTime minTimestamp = DateTime.Now.AddYears(-1); // Create a TCP socket connection to the Apple server on port 2196 try { using (tcpClientF = new TcpClient(hostName, 2196)) { SYLogger.Log("Client Connected ::" + tcpClientF.Connected); // Create a new SSL stream over the connection sslStreamF = new SslStream(tcpClientF.GetStream(), true,ValidateServerCertificate); // Authenticate using the Apple cert sslStreamF.AuthenticateAsClient(hostName, certs, SslProtocols.Default, false); SYLogger.Log("Stream Readable ::" + sslStreamF.CanRead); SYLogger.Log("Host Name ::"+hostName); SYLogger.Log("Cert Name ::" + certs[0].FriendlyName); if (sslStreamF != null) { SYLogger.Log("Connection Started"); //Get the first feedback recd = sslStreamF.Read(buffer, 0, buffer.Length); SYLogger.Log("Buffer length ::" + recd); //Continue while we have results and are not disposing while (recd > 0) { SYLogger.Log("Reading Started"); //Get our seconds since 1970 ? byte[] bSeconds = new byte[4]; byte[] bDeviceToken = new byte[32]; Array.Copy(buffer, 0, bSeconds, 0, 4); //Check endianness if (BitConverter.IsLittleEndian) Array.Reverse(bSeconds); int tSeconds = BitConverter.ToInt32(bSeconds, 0); //Add seconds since 1970 to that date, in UTC and then get it locally var Timestamp = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc).AddSeconds(tSeconds).ToLocalTime(); //Now copy out the device token Array.Copy(buffer, 6, bDeviceToken, 0, 32); string deviceToken = BitConverter.ToString(bDeviceToken).Replace("-", "").ToLower().Trim(); //Make sure we have a good feedback tuple if (deviceToken.Length == 64 && Timestamp > minTimestamp) { SYLogger.Log("Feedback " + deviceToken); result = deviceToken; } //Clear array to reuse it Array.Clear(buffer, 0, buffer.Length); //Read the next feedback recd = sslStreamF.Read(buffer, 0, buffer.Length); } SYLogger.Log("Reading Ended"); } } } catch (Exception e) { SYLogger.Log("Authentication failed - closing the connection::" + e); return "NOAUTH"; } finally { // The client stream will be closed with the sslStream // because we specified this behavior when creating the sslStream. 
if (sslStreamF != null) sslStreamF.Close(); if (tcpClientF != null) tcpClientF.Close(); //Clear array on error Array.Clear(buffer, 0, buffer.Length); } SYLogger.Log("Feedback ended "); return result; }

    Read the article

  • How to optimize this JSON/jQuery/JavaScript function in IE7/IE8?

    - by melaos
    hi guys, i'm using this function to parse this json data but i find the function to be really slow in IE7 and slightly slow in IE8. basically the first listbox generate the main product list, and upon selection of the main list, it will populate the second list. this is my data: [{"ProductCategoryId":209,"ProductCategoryName":"X-Fi","ProductSubCategoryId":668,"ProductSubCategoryName":"External Solutions","ProductId":15913,"ProductName":"Creative Xmod","ProductServiceLifeId":1},{"ProductCategoryId":209,"ProductCategoryName":"X-Fi","ProductSubCategoryId":668,"ProductSubCategoryName":"External Solutions","ProductId":15913,"ProductName":"Creative Xmod","ProductServiceLifeId":1},{"ProductCategoryId":209,"ProductCategoryName":"X-Fi","ProductSubCategoryId":668,"ProductSubCategoryName":"External Solutions","ProductId":18094,"ProductName":"Sound Blaster Wireless Receiver","ProductServiceLifeId":1},{"ProductCategoryId":209,"ProductCategoryName":"X-Fi","ProductSubCategoryId":668,"ProductSubCategoryName":"External Solutions","ProductId":16185,"ProductName":"Xdock Wireless","ProductServiceLifeId":1},{"ProductCategoryId":209,"ProductCategoryName":"X-Fi","ProductSubCategoryId":668,"ProductSubCategoryName":"External Solutions","ProductId":16186,"ProductName":"Xmod Wireless","ProductServiceLifeId":1}] and these are the functions that i'm using: //Three Product Panes function function populateMainPane() { $.getJSON('/Home/ThreePaneProductData/', function(data) { products = data; alert(JSON.stringify(products)); var prodCategory = {}; for (i = 0; i < products.length; i++) { prodCategory[products[i].ProductCategoryId] = products[i].ProductCategoryName; } //end for //take only unique product category to be used var id = 0; for (id in prodCategory) { if (prodCategory.hasOwnProperty(id)) { $(".LBox1").append("<option value='" + id + "'>" + prodCategory[id] + "</option>"); //alert(prodCategory[id]); } } var url = document.location.href; var parms = url.substring(url.indexOf("?") + 1).split("&"); for (var i = 0; i < parms.length; i++) { var parm = parms[i].split("="); if (parm[0].toLowerCase() == "pid") { $(".PanelProductReg").show(); var nProductIds = parm[1].split(","); for (var k = 0; k < nProductIds.length; k++) { var nProductId = parseInt(nProductIds[k], 10); for (var j = 0; j < products.length; j++) { if (nProductId == parseInt(products[j].ProductId, 10)) { addProductRow(nProductId, products[j].ProductName); j = products.length; } } //end for } } } }); } //end function function populateSubCategoryPane() { var subCategory = {}; for (var i = 0; i < products.length; i++) { if (products[i].ProductCategoryId == $('.LBox1').val()) subCategory[products[i].ProductSubCategoryId] = products[i].ProductSubCategoryName; } //end for //clear off the list box first $(".LBox2").html(""); var id = 0; for (id in subCategory) { if (subCategory.hasOwnProperty(id)) { $(".LBox2").append("<option value='" + id + "'>" + subCategory[id] + "</option>"); //alert(prodCategory[id]); } } } //end function is there anything i can do to optimize this or is this a known browser issue?

    Read the article

  • C# need help debugging SOCKS5 connection attempt

    - by Chuck
    Hi, I've written the following code to (successfully) connect to a socks5 proxy. I send a user/pw auth and get an OK reply (0x00), but as soon as I tell the proxy to connect to whichever ip:port, it gives me 0x01 (general error). Socket socket5_proxy = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); IPEndPoint proxyEndPoint = new IPEndPoint(IPAddress.Parse("111.111.111.111"), 1080); // proxy ip, port. fake for posting purposes. socket5_proxy.Connect(proxyEndPoint); byte[] init_socks_command = new byte[4]; init_socks_command[0] = 0x05; init_socks_command[1] = 0x02; init_socks_command[2] = 0x00; init_socks_command[3] = 0x02; socket5_proxy.Send(init_socks_command); byte[] socket_response = new byte[2]; int bytes_recieved = socket5_proxy.Receive(socket_response, 2, SocketFlags.None); if (socket_response[1] == 0x02) { byte[] temp_bytes; string socks5_user = "foo"; string socks5_pass = "bar"; byte[] auth_socks_command = new byte[3 + socks5_user.Length + socks5_pass.Length]; auth_socks_command[0] = 0x05; auth_socks_command[1] = Convert.ToByte(socks5_user.Length); temp_bytes = Encoding.Default.GetBytes(socks5_user); temp_bytes.CopyTo(auth_socks_command, 2); auth_socks_command[2 + socks5_user.Length] = Convert.ToByte(socks5_pass.Length); temp_bytes = Encoding.Default.GetBytes(socks5_pass); temp_bytes.CopyTo(auth_socks_command, 3 + socks5_user.Length); socket5_proxy.Send(auth_socks_command); socket5_proxy.Receive(socket_response, 2, SocketFlags.None); if (socket_response[1] != 0x00) return; byte[] connect_socks_command = new byte[10]; connect_socks_command[0] = 0x05; connect_socks_command[1] = 0x02; // streaming connect_socks_command[2] = 0x00; connect_socks_command[3] = 0x01; // ipv4 temp_bytes = IPAddress.Parse("222.222.222.222").GetAddressBytes(); // target connection. fake ip, obviously temp_bytes.CopyTo(connect_socks_command, 4); byte[] portBytes = BitConverter.GetBytes(8888); connect_socks_command[8] = portBytes[0]; connect_socks_command[9] = portBytes[1]; socket5_proxy.Send(connect_socks_command); socket5_proxy.Receive(socket_response); if (socket_response[1] != 0x00) MessageBox.Show("Damn it"); // I always end here, 0x01 I've used this as a reference: http://en.wikipedia.org/wiki/SOCKS#SOCKS_5 Have I completely misunderstood something here? How I see it, I can connect to the socks5 fine. I can authenticate fine. But I/the proxy can't "do" anything? Yes, I know the proxy works. Yes, the target ip is available and yes the target port is open/responsive. I get 0x01 no matter what I try to connect to. Any help is VERY MUCH appreciated! Thanks, Chuck
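    Two details of RFC 1928 are worth double-checking here: the command byte for an outbound connection is 0x01 (CONNECT; 0x02 is BIND), and the destination port is sent in network byte order (big-endian). Below is a hedged sketch of building the CONNECT request for an IPv4 target (illustrative names).

    using System;
    using System.Net;

    static class Socks5
    {
        // Build a SOCKS5 CONNECT request for an IPv4 address (RFC 1928, section 4).
        public static byte[] BuildConnect(IPAddress target, ushort port)
        {
            byte[] addr = target.GetAddressBytes();   // 4 bytes for IPv4
            byte[] request = new byte[10];
            request[0] = 0x05;                        // protocol version
            request[1] = 0x01;                        // CMD: CONNECT
            request[2] = 0x00;                        // reserved
            request[3] = 0x01;                        // ATYP: IPv4
            Array.Copy(addr, 0, request, 4, 4);
            request[8] = (byte)(port >> 8);           // port high byte first (network order)
            request[9] = (byte)(port & 0xFF);
            return request;
        }
    }

    On a little-endian machine, BitConverter.GetBytes(8888) puts the low byte first, so taking portBytes[0] and portBytes[1] as written sends the port in the wrong order; that alone could make the proxy answer with the generic 0x01 failure.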

    Read the article

  • Why does BeginReceiveFrom never time out?

    - by James Hugard
    I am writing an asynchronous Ping using Raw Sockets in F#, to enable parallel requests using as few threads as possible ("System.Net.NetworkInformation.Ping" appears to use one thread per request, but have not tested this... also am interested in using F# async workflows). The synchronous version below correctly times out when the target host does not exist/respond, but the asynchronous version hangs. Both work when the host does respond... Any ideas? (note: the process must run as Admin for this code to work) This throws a timeout: let result = Ping.Ping ( IPAddress.Parse( "192.168.33.22" ), 1000 ) However, this hangs: let result = Ping.PingAsync ( IPAddress.Parse( "192.168.33.22" ), 1000 ) |> Async.RunSynchronously Here's the code... module Ping open System open System.Net open System.Net.Sockets open System.Threading //---- ICMP Packet Classes type IcmpMessage (t : byte) = let mutable m_type = t let mutable m_code = 0uy let mutable m_checksum = 0us member this.Type with get() = m_type member this.Code with get() = m_code member this.Checksum = m_checksum abstract Bytes : byte array default this.Bytes with get() = [| m_type m_code byte(m_checksum) byte(m_checksum >>> 8) |] member this.GetChecksum() = let mutable sum = 0ul let bytes = this.Bytes let mutable i = 0 // Sum up uint16s while i < bytes.Length - 1 do sum <- sum + uint32(BitConverter.ToUInt16( bytes, i )) i <- i + 2 // Add in last byte, if an odd size buffer if i <> bytes.Length then sum <- sum + uint32(bytes.[i]) // Shuffle the bits sum <- (sum >>> 16) + (sum &&& 0xFFFFul) sum <- sum + (sum >>> 16) sum <- ~~~sum uint16(sum) member this.UpdateChecksum() = m_checksum <- this.GetChecksum() type InformationMessage (t : byte) = inherit IcmpMessage(t) let mutable m_identifier = 0us let mutable m_sequenceNumber = 0us member this.Identifier = m_identifier member this.SequenceNumber = m_sequenceNumber override this.Bytes with get() = Array.append (base.Bytes) [| byte(m_identifier) byte(m_identifier >>> 8) byte(m_sequenceNumber) byte(m_sequenceNumber >>> 8) |] type EchoMessage() = inherit InformationMessage( 8uy ) let mutable m_data = Array.create 32 32uy do base.UpdateChecksum() member this.Data with get() = m_data and set(d) = m_data <- d this.UpdateChecksum() override this.Bytes with get() = Array.append (base.Bytes) (this.Data) //---- Synchronous Ping let Ping (host : IPAddress, timeout : int ) = let mutable ep = new IPEndPoint( host, 0 ) let socket = new Socket( AddressFamily.InterNetwork, SocketType.Raw, ProtocolType.Icmp ) socket.SetSocketOption( SocketOptionLevel.Socket, SocketOptionName.SendTimeout, timeout ) socket.SetSocketOption( SocketOptionLevel.Socket, SocketOptionName.ReceiveTimeout, timeout ) let packet = EchoMessage() let mutable buffer = packet.Bytes try if socket.SendTo( buffer, ep ) <= 0 then raise (SocketException()) buffer <- Array.create (buffer.Length + 20) 0uy let mutable epr = ep :> EndPoint if socket.ReceiveFrom( buffer, &epr ) <= 0 then raise (SocketException()) finally socket.Close() buffer //---- Entensions to the F# Async class to allow up to 5 paramters (not just 3) type Async with static member FromBeginEnd(arg1,arg2,arg3,arg4,beginAction,endAction,?cancelAction): Async<'T> = Async.FromBeginEnd((fun (iar,state) -> beginAction(arg1,arg2,arg3,arg4,iar,state)), endAction, ?cancelAction=cancelAction) static member FromBeginEnd(arg1,arg2,arg3,arg4,arg5,beginAction,endAction,?cancelAction): Async<'T> = Async.FromBeginEnd((fun (iar,state) -> beginAction(arg1,arg2,arg3,arg4,arg5,iar,state)), endAction, 
?cancelAction=cancelAction) //---- Extensions to the Socket class to provide async SendTo and ReceiveFrom type System.Net.Sockets.Socket with member this.AsyncSendTo( buffer, offset, size, socketFlags, remoteEP ) = Async.FromBeginEnd( buffer, offset, size, socketFlags, remoteEP, this.BeginSendTo, this.EndSendTo ) member this.AsyncReceiveFrom( buffer, offset, size, socketFlags, remoteEP ) = Async.FromBeginEnd( buffer, offset, size, socketFlags, remoteEP, this.BeginReceiveFrom, (fun asyncResult -> this.EndReceiveFrom(asyncResult, remoteEP) ) ) //---- Asynchronous Ping let PingAsync (host : IPAddress, timeout : int ) = async { let ep = IPEndPoint( host, 0 ) use socket = new Socket( AddressFamily.InterNetwork, SocketType.Raw, ProtocolType.Icmp ) socket.SetSocketOption( SocketOptionLevel.Socket, SocketOptionName.SendTimeout, timeout ) socket.SetSocketOption( SocketOptionLevel.Socket, SocketOptionName.ReceiveTimeout, timeout ) let packet = EchoMessage() let outbuffer = packet.Bytes try let! result = socket.AsyncSendTo( outbuffer, 0, outbuffer.Length, SocketFlags.None, ep ) if result <= 0 then raise (SocketException()) let epr = ref (ep :> EndPoint) let inbuffer = Array.create (outbuffer.Length + 256) 0uy let! result = socket.AsyncReceiveFrom( inbuffer, 0, inbuffer.Length, SocketFlags.None, epr ) if result <= 0 then raise (SocketException()) return inbuffer finally socket.Close() }

    Read the article

  • How to create a large resumable download from a secured location in .NET

    - by Kelvin H
    I need to preface I'm not a .NET coder at all, but to get partial functionality, I modified a technet chunkedfilefetch.aspx script that uses chunked Data Reading and writing Streamed method of doing file transfer, to get me half-way. iStream = New System.IO.FileStream(path, System.IO.FileMode.Open, _ IO.FileAccess.Read, IO.FileShare.Read) dataToRead = iStream.Length Response.ContentType = "application/octet-stream" Response.AddHeader("Content-Length", file.Length.ToString()) Response.AddHeader("Content-Disposition", "attachment; filename=" & filedownload) ' Read and send the file 16,000 bytes at a time. ' While dataToRead 0 If Response.IsClientConnected Then length = iStream.Read(buffer, 0, 16000) Response.OutputStream.Write(buffer, 0, length) Response.Flush() ReDim buffer(16000) ' Clear the buffer ' dataToRead = dataToRead - length Else ' Prevent infinite loop if user disconnects ' dataToRead = -1 End If End While This works great on files up to 2GB and is fully functioning now.. But only one problem it doesn't allow for resume. I took the original code called it fetch.aspx and pass an orderNUM through the URL. fetch.aspx&ordernum=xxxxxxx It then reads the filename/location from the database occording to the ordernumber, and chunks it out from a secured location NOT under the webroot. I need a way to make this resumable, by the nature of the internet and large files people always get disconnected and would like to resume where they left off. But any resumable articles i've read, assume the file is within the webroot.. ie. http://www.devx.com/dotnet/Article/22533/1954 Great article and works well, but I need to stream from a secured location. I'm not a .NET coder at all, at best i can do a bit of coldfusion, if anyone could help me modify a handler to do this, i would really appreciate it. Requirements: I Have a working fetch.aspx script that functions well and uses the above code snippet as a base for the streamed downloading. Download files are large 600MB and are stored in a secured location outside of the webroot. Users click on the fetch.aspx to start the download, and would therefore be clicking it again if it was to fail. If the ext is a .ASPX and the file being sent is a AVI, clicking on it would completely bypass an IHTTP handler mapped to .AVI ext, so this confuses me From what I understand the browser will read and match etag value and file modified date to determine they are talking about the same file, then a subsequent accept-range is exchanged between the browser and IIS. Since this dialog happens with IIS, we need to use a handler to intercept and respond accordingly, but clicking on the link would send it to an ASPX file which the handeler needs to be on an AVI fiel.. Also confusing me. If there is a way to request the initial HTTP request header containing etag, accept-range into the normal .ASPX file, i could read those values and if the accept-range and etag exist, start chunking at that byte value somehow? but I couldn't find a way to transfer the http request headers since they seem to get lost at the IIS level. OrderNum which is passed in the URL string is unique and could be used as the ETag Response.AddHeader("ETag", request("ordernum")) Files need to be resumable and chunked out due to size. File extensions are .AVI so a handler could be written around it. 
    IIS 6.0 Web Server. Any help would really be appreciated; I've been reading and reading and downloading code, but none of the examples given meet my situation, with the original file being streamed from outside of the webroot. Please help me get a handle on these HTTP handlers :)
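    For the resume part specifically, the client sends a Range header such as "bytes=1000000-" (optionally with If-Range/ETag), and the server answers 206 Partial Content with a matching Content-Range while streaming only the requested slice; because the download URL is the .aspx page itself, that negotiation can happen in the same page or handler with no .AVI mapping involved. Below is a hedged C# sketch of that idea for a file stored outside the webroot -- the paths and names are illustrative, only the open-ended "bytes=N-" form is handled, and the order-number lookup is omitted.

    using System.IO;
    using System.Web;

    public static class ResumableDownload
    {
        // Sketch: stream a file from outside the webroot, honoring "bytes=N-" ranges.
        public static void Send(HttpContext ctx, string path, string etag)
        {
            HttpRequest request = ctx.Request;
            HttpResponse response = ctx.Response;
            using (var stream = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read))
            {
                long total = stream.Length;
                long start = 0;
                string range = request.Headers["Range"];            // e.g. "bytes=1000000-"
                if (!string.IsNullOrEmpty(range) && range.StartsWith("bytes="))
                {
                    start = long.Parse(range.Substring(6).Split('-')[0]);   // end bound ignored in this sketch
                    response.StatusCode = 206;                       // Partial Content
                    response.AppendHeader("Content-Range",
                        string.Format("bytes {0}-{1}/{2}", start, total - 1, total));
                }
                response.ContentType = "application/octet-stream";
                response.AppendHeader("Accept-Ranges", "bytes");
                response.AppendHeader("ETag", "\"" + etag + "\"");
                response.AppendHeader("Content-Length", (total - start).ToString());

                stream.Seek(start, SeekOrigin.Begin);
                byte[] buffer = new byte[16000];
                int read;
                while (response.IsClientConnected &&
                       (read = stream.Read(buffer, 0, buffer.Length)) > 0)
                {
                    response.OutputStream.Write(buffer, 0, read);
                    response.Flush();
                }
            }
        }
    }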

    Read the article

  • Compress file to bytes for uploading to SQL Server

    - by Chris
    I am trying to zip files to an SQL Server database table. I can't ensure that the user of the tool has write priveledges on the source file folder so I want to load the file into memory, compress it to an array of bytes and insert it into my database. This below does not work. class ZipFileToSql { public event MessageHandler Message; protected virtual void OnMessage(string msg) { if (Message != null) { MessageHandlerEventArgs args = new MessageHandlerEventArgs(); args.Message = msg; Message(this, args); } } private int sourceFileId; private SqlConnection Conn; private string PathToFile; private bool isExecuting; public bool IsExecuting { get { return isExecuting; } } public int SourceFileId { get { return sourceFileId; } } public ZipFileToSql(string pathToFile, SqlConnection conn) { isExecuting = false; PathToFile = pathToFile; Conn = conn; } public void Execute() { isExecuting = true; byte[] data; byte[] cmpData; //create temp zip file OnMessage("Reading file to memory"); FileStream fs = File.OpenRead(PathToFile); data = new byte[fs.Length]; ReadWholeArray(fs, data); OnMessage("Zipping file to memory"); MemoryStream ms = new MemoryStream(); GZipStream zip = new GZipStream(ms, CompressionMode.Compress, true); zip.Write(data, 0, data.Length); cmpData = new byte[ms.Length]; ReadWholeArray(ms, cmpData); OnMessage("Saving file to database"); using (SqlCommand cmd = Conn.CreateCommand()) { cmd.CommandText = @"MergeFileUploads"; cmd.CommandType = CommandType.StoredProcedure; //cmd.Parameters.Add("@File", SqlDbType.VarBinary).Value = data; cmd.Parameters.Add("@File", SqlDbType.VarBinary).Value = cmpData; SqlParameter p = new SqlParameter(); p.ParameterName = "@SourceFileId"; p.Direction = ParameterDirection.Output; p.SqlDbType = SqlDbType.Int; cmd.Parameters.Add(p); cmd.ExecuteNonQuery(); sourceFileId = (int)p.Value; } OnMessage("File Saved"); isExecuting = false; } private void ReadWholeArray(Stream stream, byte[] data) { int offset = 0; int remaining = data.Length; float Step = data.Length / 100; float NextStep = data.Length - Step; while (remaining > 0) { int read = stream.Read(data, offset, remaining); if (read <= 0) throw new EndOfStreamException (String.Format("End of stream reached with {0} bytes left to read", remaining)); remaining -= read; offset += read; if (remaining < NextStep) { NextStep -= Step; } } } }
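    One detail that commonly breaks this pattern: GZipStream buffers internally, so the compressed data is only complete once the GZipStream has been closed, and the MemoryStream is then easiest to read with ToArray() (reading it with Read from its current position, as above, starts at the end of the stream). Below is a hedged sketch of compressing a byte array fully in memory (illustrative names).

    using System.IO;
    using System.IO.Compression;

    static class GZipBytes
    {
        // Compress raw bytes in memory; ToArray() avoids any Position bookkeeping.
        public static byte[] Compress(byte[] data)
        {
            using (var ms = new MemoryStream())
            {
                using (var zip = new GZipStream(ms, CompressionMode.Compress, true))
                {
                    zip.Write(data, 0, data.Length);
                }   // disposing the GZipStream flushes the final compressed block
                return ms.ToArray();
            }
        }
    }

    The returned array can then be passed straight to the @File VarBinary parameter.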

    Read the article

  • 3-way quicksort, question

    - by peiska
    I am trying to understand the 3-way radix Quicksort, and i dont understand why the the CUTOFF variable there? and the insertion method? public class Quick3string { private static final int CUTOFF = 15; // cutoff to insertion sort // sort the array a[] of strings public static void sort(String[] a) { // StdRandom.shuffle(a); sort(a, 0, a.length-1, 0); assert isSorted(a); } // return the dth character of s, -1 if d = length of s private static int charAt(String s, int d) { assert d >= 0 && d <= s.length(); if (d == s.length()) return -1; return s.charAt(d); } // 3-way string quicksort a[lo..hi] starting at dth character private static void sort(String[] a, int lo, int hi, int d) { // cutoff to insertion sort for small subarrays if (hi <= lo + CUTOFF) { insertion(a, lo, hi, d); return; } int lt = lo, gt = hi; int v = charAt(a[lo], d); int i = lo + 1; while (i <= gt) { int t = charAt(a[i], d); if (t < v) exch(a, lt++, i++); else if (t > v) exch(a, i, gt--); else i++; } // a[lo..lt-1] < v = a[lt..gt] < a[gt+1..hi]. sort(a, lo, lt-1, d); if (v >= 0) sort(a, lt, gt, d+1); sort(a, gt+1, hi, d); } // sort from a[lo] to a[hi], starting at the dth character private static void insertion(String[] a, int lo, int hi, int d) { for (int i = lo; i <= hi; i++) for (int j = i; j > lo && less(a[j], a[j-1], d); j--) exch(a, j, j-1); } // exchange a[i] and a[j] private static void exch(String[] a, int i, int j) { String temp = a[i]; a[i] = a[j]; a[j] = temp; } // is v less than w, starting at character d private static boolean less(String v, String w, int d) { assert v.substring(0, d).equals(w.substring(0, d)); return v.substring(d).compareTo(w.substring(d)) < 0; } // is the array sorted private static boolean isSorted(String[] a) { for (int i = 1; i < a.length; i++) if (a[i].compareTo(a[i-1]) < 0) return false; return true; } public static void main(String[] args) { // read in the strings from standard input String[] a = StdIn.readAll().split("\\s+"); int N = a.length; // sort the strings sort(a); // print the results for (int i = 0; i < N; i++) StdOut.println(a[i]); } } from http://www.cs.princeton.edu/algs4/51radix/Quick3string.java.html

    Read the article

  • RSA encryption/decryption in a client-server application

    - by user308806
    Hi guys, probably missing something very straight forward on this, but please forgive me, I'm very naive! Have a client server application where the client identifies its self with an RSA encrypted username & password. Unfortunately I'm getting a "bad padding exception: data must start with zero" when i try to decrypt with the public key on the client side. I'm fairly sure the key is correct as I have tested encrypting with public key then decrypting with private key on the client side with no problems at all. Just seems when I transfer it over the connection it messses it up somehow?! Using PrintWriter & BufferedReader on the sockets if thats of importance. EncodeBASE64 & DecodeBASE64 encode byte[] to 64base and vice versa respectively. Any ideas guys?? Client side: Socket connectionToServer = new Socket("127.0.0.1", 7050); InputStream in = connectionToServer.getInputStream(); DataInputStream dis = new DataInputStream(in); int length = dis.readInt(); byte[] data = new byte[length]; // dis.readFully(data); dis.read(data); System.out.println("The received Data*****************************************"); System.out.println("The length of bits "+ length); System.out.println(data); System.out.println("***********************************************************"); Decryption d = new Decryption(); byte [] ttt = d.decrypt(data); System.out.print(data); String ss = new String(ttt); System.out.println("***********************"); System.out.println(ss); System.out.println("************************"); Server Side: in = connectionFromClient.getInputStream(); OutputStream out = connectionFromClient.getOutputStream(); DataOutputStream dataOut = new DataOutputStream(out); LicenseList licenses = new LicenseList(); String ValidIDs = licenses.getAllIDs(); System.out.println(ValidIDs); Encryption enc = new Encryption(); byte[] encrypted = enc.encrypt(ValidIDs); byte[] dd = enc.encrypt(ValidIDs); String tobesent = new String(dd); //byte[] rsult = enc.decrypt(dd); //String tt = String(rsult); System.out.println("The sent data**********************************************"); System.out.println(dd); String temp = new String(dd); System.out.println(temp); System.out.println("*************************************************************"); //BufferedWriter bf = new BufferedWriter(OutputStreamWriter(out)); //dataOut.write(ValidIDs.getBytes().length); dataOut.writeInt(ValidIDs.getBytes().length); dataOut.flush(); dataOut.write(encrypted); dataOut.flush(); System.out.println("********Testing**************"); System.out.println("Here are the ids:::"); System.out.println(licenses.getAllIDs()); System.out.println("**********************"); //bw.write("it is working well\n");

    Read the article

  • How to implement dynamic binary search for search and insert operations of n elements (C or C++)

    - by iecut
    The idea is to use multiple arrays, each of length 2^k, to store n elements, according to binary representation of n.Each array is sorted and different arrays are not ordered in any way. In the above mentioned data structure, SEARCH is carried out by a sequence of binary search on each array. INSERT is carried out by a sequence of merge of arrays of the same length until an empty array is reached. More Detail: Lets suppose we have a vertical array of length 2^k and to each node of that array there attached horizontal array of length 2^k. That is, to the first node of vertical array, a horizontal array of length 2^0=1 is connected,to the second node of vertical array, a horizontal array of length 2^1= 2 is connected and so on. So the insert is first carried out in the first horizontal array, for the second insert the first array becomes empty and second horizontal array is full with 2 elements, for the third insert 1st and 2nd array horiz. array are filled and so on. I implemented the normal binary search for search and insert as follows: int main() { int a[20]= {0}; int n, i, j, temp; int *beg, *end, *mid, target; printf(" enter the total integers you want to enter (make it less then 20):\n"); scanf("%d", &n); if (n = 20) return 0; printf(" enter the integer array elements:\n" ); for(i = 0; i < n; i++) { scanf("%d", &a[i]); } // sort the loaded array, binary search! for(i = 0; i < n-1; i++) { for(j = 0; j < n-i-1; j++) { if (a[j+1] < a[j]) { temp = a[j]; a[j] = a[j+1]; a[j+1] = temp; } } } printf(" the sorted numbers are:"); for(i = 0; i < n; i++) { printf("%d ", a[i]); } // point to beginning and end of the array beg = &a[0]; end = &a[n]; // use n = one element past the loaded array! // mid should point somewhere in the middle of these addresses mid = beg += n/2; printf("\n enter the number to be searched:"); scanf("%d",&target); // binary search, there is an AND in the middle of while()!!! while((beg <= end) && (*mid != target)) { // is the target in lower or upper half? if (target < *mid) { end = mid - 1; // new end n = n/2; mid = beg += n/2; // new middle } else { beg = mid + 1; // new beginning n = n/2; mid = beg += n/2; // new middle } } // find the target? if (*mid == target) { printf("\n %d found!", target); } else { printf("\n %d not found!", target); } getchar(); // trap enter getchar(); // wait return 0; } Could anyone please suggest how to modify this program or a new program to implement dynamic binary search that works as explained above!!
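    The structure described in the first paragraph is, in effect, a list of slots where slot k is either empty or holds a sorted array of length 2^k, mirroring the binary representation of n: insert behaves like binary addition with carries (merges of equal-length arrays), and search does one binary search per non-empty slot. Below is a hedged C# sketch of just that data structure -- it is not a translation of the C program above, and the names are illustrative.

    using System;
    using System.Collections.Generic;

    class DynamicBinarySearch
    {
        // slots[k] is either null or a sorted array of length 2^k.
        private readonly List<int[]> slots = new List<int[]>();

        public void Insert(int x)
        {
            int[] carry = { x };                      // sorted array of length 2^0
            for (int k = 0; ; k++)
            {
                if (k == slots.Count) slots.Add(null);
                if (slots[k] == null) { slots[k] = carry; return; }
                carry = Merge(slots[k], carry);       // two 2^k arrays -> one 2^(k+1) array
                slots[k] = null;
            }
        }

        public bool Contains(int x)
        {
            foreach (int[] a in slots)
                if (a != null && Array.BinarySearch(a, x) >= 0) return true;
            return false;
        }

        private static int[] Merge(int[] a, int[] b)
        {
            int[] r = new int[a.Length + b.Length];
            int i = 0, j = 0, k = 0;
            while (i < a.Length && j < b.Length) r[k++] = a[i] <= b[j] ? a[i++] : b[j++];
            while (i < a.Length) r[k++] = a[i++];
            while (j < b.Length) r[k++] = b[j++];
            return r;
        }
    }

    With this layout a search costs O(log^2 n) (one binary search in each of up to log n slots) and an insert costs O(log n) amortized, since each element takes part in at most log n merges.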

    Read the article
