Search Results

  • iCloud backup merges or overwrites?

    - by Joe McMahon
    The following happened today (it was six AM my time, so yeah, I was dumb and dropped stitches in this process): a friend had a problem with her iPhone and needed to reset it. Unfortunately she did the reset while connected to iTunes, and the restore process kicked in. In my sleepy state, I told her to go ahead. She did, and restored the most recent local (iTunes) backup, from July last year; she doesn't back up often, as she has an Air which is pretty full. During setup on the phone she was prompted to merge data with the iCloud copy, and did so. There was no "restore from iCloud" prompt. Obviously I should have made sure she was disconnected from iTunes before she did the reset, or had her set it up as a new device and then restore from iCloud, but that's water under the bridge now. (Side question: could I have had her disconnect and then restart the phone again, avoiding this whole process?)

    The question is: was the "merge" that happened in this process a true merge, or a replace? Her passwords for Mail were wrong, since they were the old ones from the old backup. If she wipes the data and restores from iCloud, will she get her old SMSes and calendar entries back? Or did the merge decide that the phone, despite being "old", was right, and therefore discard the SMSes, calendar entries, etc.?

    As a recovery option, I have a 4-day-old iTunes backup here from ~/Library/Application Support/MobileSync/Backup, but she and the phone are 3000 miles away, and it's 8 GB, so I can't easily restore it for her. I do have the option of encrypting it and mailing it on a data stick if the iCloud backup is now toast. Should she try the wipe and restore from the cloud (after backing up locally), or should I just put the more recent backup in the mail? My goal is to get everything (especially the SMSes) back to the most recent version possible.

  • Is there a utility to visualise / isolate and watch application calls

    - by MyStream
    Note: I'm not sure what to search for, so guidance on that may be just as valuable as an answer. I'm looking for a way to visually compare the activity of two applications (in this case a web server with PHP communicating with the system, MySQL, network devices, etc.) so that I can compare performance at a glance. I know there are tools to generate data dumps from benchmarks for Apache, and some are available for PHP tracing, that you can dump and analyse. What I'm looking for is something that can report performance metrics visually from data on calls (what called what, how long it took, how much memory it consumed, how that can be represented in a call stack) and present them graphically, as if it were a topology or layered visual, with different elements of the system occupying different layers. A typical visual might consist of the following layers (using swim-lane diagrams as just one analogy), with a request descending through them and the response climbing back out:

        Network   (details here relevant to network diagnostics)
        Linux     (details here related to firewall/routing diagnostics)
        Apache    (details here related to the web request)
        PHP       (plus other accesses to PHP files/resources)
        MySQL     (total time; each call listed + time + tables hit/records returned)

    My aim is to be able to 'inspect' a request, or a range of requests over a period of time, to see what constituted the activity at that point and trace it from beginning to end as a diagnostic tool. Is there any work in this direction? I realise it would be intensive on the server, but the intention is to benchmark and analyse processes against each other for both educational and professional reasons, and a visual aid is a great eye-opener compared to raw statistics or dozens of discrete activity-vs-time graphs. It's hard to show the full cycle. Any pointers welcome. Thanks!

    From the comments:

        XHProf in conjunction with other programs such as Percona Toolkit
        (percona.com/doc/percona-toolkit/2.0/pt-pmp.html) for MySQL; run Apache
        with httpd -X & (single-threaded debug mode, in the background), then
        attach with strace -> KCachegrind

  • Ubuntu + Win7: "Disk error. Press any key to restart"

    - by Siddharth
    Apparently, none of the solutions in other posts and forums worked for me. For some reason I decided to remove Ubuntu from my hard disk drive. My partition table (presently):

        /dev/sda1     fat32   900 MiB       (MBR boot files, I suppose)
        /dev/sda2     ntfs    70 GiB        (Windows 7)
        /dev/sda3     ntfs    314.88 GiB    (personal file storage)
        /dev/sda4     ext4    80 GiB        (Ubuntu 13.04)
        unallocated           1.31 MiB

    So, after moving (cut-paste) everything from the fat32 partition for backup using Win7, I booted into Ubuntu and copied off the remaining 3 files (hidden in the Win7 file explorer): bootmgr, bootsect.bak, and one more which I do not remember. Terrible mistake. After this I again booted into Windows, deleted the ext4 partition, formatted it to ntfs, and shut down the PC. Then I put in a Win7 bootable USB and, using the command prompt, entered bootrec /fixmbr and bootrec /fixboot. Restarting showed me GRUB; choosing Windows 7 showed me "Disk error. Press any key to restart."

    I also installed a fresh Win7 installation on the 80 GiB partition, expecting a Windows legacy bootloader with two Win7 options, but that did not work. Then I used an Ubuntu live USB to put everything back to the present configuration (above), since all methods to restore the MBR failed. I copied back the fat32 partition's backup files but couldn't copy those 3 files; somehow they had been recreated and were non-replaceable. I do not want to format the Win7 partition for a fresh install.

    I have used Boot-Repair. Its "Restore MBR" option brings back "Disk error..." without even going through GRUB, so I reinstalled GRUB and I'm able to boot into Ubuntu. The GRUB menu shows the Win7 option as "Windows 7 (loader) (on /dev/sda1)". Logs: paste.ubuntu.com/5753710 paste.ubuntu.com/5775999
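
    Since bootrec alone didn't help here, one hedged avenue is rebuilding the BCD store and the boot sector from the Win7 install USB's command prompt; a sketch, with the caveat that drive letters and the exact recovery layout are assumptions:

        rem scan the disks for Windows installations and rebuild the BCD store
        bootrec /scanos
        bootrec /rebuildbcd
        rem rewrite a BIOS/MBR boot sector on the system partition
        bootsect /nt60 SYS

    If Windows then boots directly, GRUB can be reinstalled afterwards from the Ubuntu live USB and should pick Windows up via os-prober.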

  • Block SMTP connections from mail domains which don't themselves accept SMTP connections

    - by bignose
    I'm administrating a mail service for a small business. Their mail host's internet connection is an ADSL service with a permanent IP address. Unfortunately, many misconfigured mail systems will happily deliver to this host, but, when the host attempts to send mail back (e.g. a bounce notice, or a normal response from someone), the original sender refuses to receive connections from this host. That misconfiguration makes their system a one-way mail sender, which is a problem. How can I configure Postfix on this customer's mail host to refuse SMTP sessions that declare a sender domain which itself refuses SMTP from this host? That is, if the SMTP client declares a domain that we can't make SMTP connections back to, then there's not much point accepting the incoming connection in the first place. I'm imagining a late check (after the low-cost checks to winnow most of the rubbish connections) that keeps the client on the other end while it attempts an SMTP client connection back to the declared domain of the sender. If that connection is rejected, the incoming one is also rejected. I'm also open to other suggestions for how this problem might be addressed (short of not using this mail host at all, which isn't an option).
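
    Postfix has a built-in feature very close to this: sender address verification, which probes the declared sender address's MX with a test SMTP connection and rejects the incoming session if the probe fails (results are cached, so the probe does not run for every message). A minimal main.cf sketch, with the expensive check placed after the cheap ones, as envisioned above:

        smtpd_sender_restrictions =
            permit_mynetworks,
            reject_unknown_sender_domain,
            reject_unverified_sender
        # default is 450 (retry later); switch to 550 only once the probes prove reliable
        unverified_sender_reject_code = 450

    Note that the probe verifies the full sender address rather than just the domain, which is usually what you want for the bounce-back scenario described.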

  • Unable to resize EC2 EBS root volume

    - by nathanjosiah
    I have followed many of the tutorials, which pretty much all say the same thing, basically:

    1. Stop the instance
    2. Detach the volume
    3. Create a snapshot of the volume
    4. Create a bigger volume from the snapshot
    5. Attach the new volume to the instance
    6. Start the instance back up
    7. Run resize2fs /dev/xxx

    However, step 7 is where the problems start. Running resize2fs always tells me the filesystem is already xxxxx blocks long and does nothing, even with -f passed. So I continued with tutorials which all basically say the same thing:

    1. Delete all partitions
    2. Recreate them as they were, except with bigger sizes
    3. Reboot the instance and run resize2fs

    (I have tried these steps both from the live instance and by attaching the volume to another instance and running the commands there.) The main problem is that the instance won't start back up again, and the system error log provided in the AWS console doesn't show any errors. (It does, however, stop at the GRUB bootloader, which to me indicates that it doesn't like the partitions; yes, the boot flag was toggled on the partition, with no effect.) The other thing that happens, regardless of what changes I make to the partitions, is that the instance the volume is attached to says the partition has an invalid magic number and the superblock is corrupt. However, if I make no changes and reattach the volume, the instance runs without a problem. Can anybody shed some light on what I could be doing wrong?

    Edit: On my new 20 GB volume with the 6 GB image, df -h says:

        Filesystem            Size  Used Avail Use% Mounted on
        /dev/xvde1            5.8G  877M  4.7G  16% /
        tmpfs                 836M     0  836M   0% /dev/shm

    And fdisk -l /dev/xvde says:

        Disk /dev/xvde: 21.5 GB, 21474836480 bytes
        255 heads, 63 sectors/track, 2610 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x7d833f39

            Device Boot      Start         End      Blocks   Id  System
        /dev/xvde1               1         766     6144000   83  Linux
        Partition 1 does not end on cylinder boundary.
        /dev/xvde2             766         784      146432   82  Linux swap / Solaris
        Partition 2 does not end on cylinder boundary.

    Also, sudo resize2fs /dev/xvde1 says:

        resize2fs 1.41.12 (17-May-2010)
        The filesystem is already 1536000 blocks long.  Nothing to do!
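
    The fdisk output explains the resize2fs message: /dev/xvde1 is still 6144000 blocks, so the filesystem has no room to grow; the partition must be enlarged first. A hedged sketch using growpart from the cloud-utils / cloud-guest-utils package (an assumption that it is available for this distribution), run while the volume is attached, unmounted, to a rescue instance:

        # grow partition 1 of /dev/xvde into the free space, preserving its
        # exact start sector (the step a manual delete/recreate tends to get wrong)
        sudo growpart /dev/xvde 1
        # check, then grow the filesystem to fill the enlarged partition
        sudo e2fsck -f /dev/xvde1
        sudo resize2fs /dev/xvde1

    A partition recreated with even a slightly different start sector would produce exactly the invalid-magic-number / corrupt-superblock symptom described, since the filesystem no longer begins where the partition does.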

  • Lost Windows 7 boot after EasyBCD with EFI

    - by drent
    I've got a Lenovo Y580 with a 64 GB SSD and a 1 TB HDD, set up using GPT and booting via (U)EFI. I was trying to get my Linux Mint installation onto the Windows boot manager using EasyBCD (I didn't realise it doesn't handle EFI), but it wiped my boot partition/loader, and I cannot seem to get Windows back (and I still can't get a bootable Linux Mint). Using the System Recovery utility, Startup Repair can't "see" Windows (it might be because I'm using a 7 Pro disc to recover Home Premium?). In the command prompt, the bootrec tools don't do anything, and bootsect won't run because it says it's for BIOS only and I've booted with EFI. I can see the EFI data on the 200 MB SSD partition using diskpart, but I don't know how to add Windows back onto whatever bootloader I have/need. At the moment the only options I can see are:

    1. Do a fresh install of Windows and hope the setup remains as fast as the default one (the SSD is some kind of cache for Windows, but I can't quite see how it works, given that the rest of the SSD is unpartitioned space). This seems like overkill, given that Windows was working fine until EasyBCD deleted it.
    2. Try forcing BIOS mode and see if that somehow magically fixes things.
    3. Try converting from GPT to MBR to try to use the bootrec/bootsect tools (and maybe back again), which seems like a really bad idea.

    Anyone have any ideas?
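
    Since the EFI system partition itself survived, a less drastic option than the three above is recreating just the Windows boot files on it with bcdboot from the recovery command prompt. A sketch; the disk/partition numbers are assumptions, as is the availability of bcdboot in this particular recovery environment:

        rem expose the 200 MB EFI system partition as S:
        diskpart
            select disk 0
            list partition
            select partition 1    (pick the 200 MB EFI System Partition here)
            assign letter=S
            exit
        rem copy fresh boot files from the Windows installation to the EFI partition
        bcdboot C:\Windows /s S: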

  • AT&T Filtering FTP traffic?

    - by xpda
    Using AT&T DSL, I cannot ftp-upload or ftp-download a few files out of a large set of 1500. The problem is the file name. I can change a few characters of the file name, and they upload fine. I can change the file names from upper to lower case and they upload fine. If I change back to the original file name, it will not upload again. When it doesn't upload, it starts, transfers about 5% of a 5-10 MB file, and then times out. I have uploaded one of the files under a different name, changed the name back to the original, and it will not download via ftp. It will download in a browser, and it will ftp-download just fine under a different name. It just will not transfer over ftp. I have reproduced this uploading to three different servers on 1and1 and Amazon EC2. When I try it from a non-AT&T ISP client, it works OK. Here is a file that did not upload until I had renamed it (I have since changed it back to the original name): "http://xpda.com/nautnew/11302 STOVER POINT TO PORT BROWNSVILLE SIDE A.png" The problem is unrelated to connection, speed, and file content. The only things I can see that make a difference are the file name and AT&T DSL. Does AT&T have some kind of ftp file filtering? Is there anything else that could cause this behavior?
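
    A symptom that tracks the file name but not the content points at something inspecting the FTP control/data channels along the path, such as an ALG in the DSL modem. One way to narrow it down is to repeat the same transfer in passive and active modes with curl (the host and path here are placeholders):

        # passive mode (curl's default)
        curl -v -T "11302 STOVER POINT TO PORT BROWNSVILLE SIDE A.png" ftp://example.com/nautnew/
        # active mode: the server opens the data connection back to the client
        curl -v --ftp-port - -T "11302 STOVER POINT TO PORT BROWNSVILLE SIDE A.png" ftp://example.com/nautnew/

    If one mode reliably completes and the other stalls around 5%, the modem's or ISP's FTP helper is the prime suspect; if both stall, run the same two commands from the non-AT&T client for comparison.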

  • Server 2008 R2 boot is at 2 hours and counting. What now?

    - by Jesse
    This morning, we rebooted our Server 2008 R2 box. No problem; it came right back up. Then we shut it down and let it install Windows updates. While it was off, we added some RAM. Then we turned it back on. The system came right up to the "press ctrl-alt-delete" screen; so far, so good. I logged in. The system got as far as "Applying Group Policy", then spent almost an hour applying drive mappings. It finally finished that, and has now spent 30 minutes waiting on the Event Notification Service. I still haven't been able to log in. The Remote Desktop service doesn't appear to be running yet. I tried viewing the event log from another machine. I can see that the box is writing to the Security log, but there are no events in System or Application in the last 45 minutes. Digging through the System log events from 45 minutes ago, I see a bunch of timeouts:

        A timeout (30000 milliseconds) was reached while waiting for a transaction response from the ShellHWDetection service. [lots of these]
        A timeout (30000 milliseconds) was reached while waiting for a transaction response from the wuauserv service.
        A timeout (30000 milliseconds) was reached while waiting for a transaction response from the SessionEnv service.
        A timeout (30000 milliseconds) was reached while waiting for a transaction response from the Schedule service.
        A timeout (30000 milliseconds) was reached while waiting for a transaction response from the CertPropSvc service.

    What can I do? Should I try shutting it down remotely, or will that do more damage?
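
    Before deciding between waiting and a remote shutdown, it can help to watch the stuck services from the machine that can already read the event log; a hedged sketch using the built-in sc.exe (SERVERNAME is a placeholder, and this assumes the usual remote admin/RPC access is still up):

        rem state of one of the services named in the timeouts
        sc \\SERVERNAME query ShellHWDetection
        rem dump the state of every service to a file for a broader look
        sc \\SERVERNAME query state= all > services.txt

    If services are still moving from START_PENDING to RUNNING between runs, the boot is slow but alive; if nothing changes for another half hour, a remote restart (shutdown /r /m \\SERVERNAME) is the usual next step, with the same risks any hard interruption of update processing carries.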

  • Mercurial confusion - commit / push, backouts

    - by Madmanguruman
    I'm trying to set up a repository on a shared filesystem. I'm using Mercurial 2.1.2 on a Windows-based architecture. I start with an empty folder on the shared filesystem and create a repository in it. After this, I dump in the baseline files, add them to versioning, and commit the changes. I then clone the repository to my local hard drive, make a change in my local repository, commit it, and push back to the shared filesystem repository. The shared repo graph I get in TortoiseHg looks strange to me. (Screenshots of the shared and local repo graphs were attached here.)

    In the shared repo, the working directory always shows up at the top, then the graph goes 'down' to rev. 0 and back 'up' again through various revisions. It looks to me like I have two different branches, even though everything is on the default branch. Also, that 'top' revision always says "* Working Directory * Not a head revision!" I noticed that in my local repository I don't get that dangling working directory at the top of the list; everything is in one branch. I also noticed that in my local repository I can back out the tip revision with no problem. In the shared filesystem repository I cannot, since I get an error ("Cannot backout change on a different branch"). How can this be? Aren't they supposed to be identical to each other? Am I fundamentally doing something wrong?
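
    A plausible explanation, for what it's worth: hg push only adds changesets to the remote repository's store; it never touches the remote working directory, which stays parented on whatever revision it had before the push. TortoiseHg then draws that stale working directory as a dangling node above the real history, and commands run inside the shared repo behave as if you were still standing on the old revision. A quick check-and-fix sketch, run inside the shared filesystem repo:

        # confirm there is really only one head in the history
        hg heads
        # show where the working directory is parented (the dangling node)
        hg parents
        # move the working directory to the new tip
        hg update

    The two repositories' histories are identical; only the working-directory state differs, which would also explain why the backout behaves differently in the two places.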

  • How should I configure backup of my server?

    - by ed209
    I have just rented a dedicated server. If it helps, this is the config I have:

        CPU1: Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz (8 cores)
        RAM: 15975 MB
        Disk /dev/sda: 120.0 GB (114 GiB)   - doesn't contain a valid partition table
        Disk /dev/sdb: 3000.6 GB (2861 GiB) - doesn't contain a valid partition table
        Disk /dev/sdc: 3000.6 GB (2861 GiB) - doesn't contain a valid partition table

    /dev/sda is a 120 GB SSD. This is where I have Ubuntu/LAMP installed; it's the drive that will run my site. The account came with two other drives of 3000 GB each, which I really don't need but got anyway. I figured I could use these to back up my main 120 GB drive. So a couple of things I wondered:

    1. Should I use these for backups?
    2. How should I back up?

    The data I want to back up is a user uploads directory full of images, plus the database. Everything else is either in a code repo or backed up some other way. For example, it would be nice to know there is a disk image of the 120 GB drive somewhere that I can copy over should there be any problems, but equally I don't mind doing a fresh install of all the software and copying over just the images and database dump. Thanks for your advice! (Also, I'm happy not to use the two other drives and back up elsewhere if that's more sensible.)
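
    For the uploads-plus-database case described, a minimal nightly job on one of the big drives covers it. A sketch, assuming one 3 TB drive is partitioned, formatted, and mounted at /mnt/backup, and that the uploads live under /var/www/uploads (both paths are assumptions):

        #!/bin/sh
        # consistent dump of all databases (InnoDB-friendly), compressed and dated
        mysqldump --single-transaction --all-databases | gzip > /mnt/backup/db-$(date +%F).sql.gz
        # mirror the uploads directory; --delete keeps the copy an exact mirror
        rsync -a --delete /var/www/uploads/ /mnt/backup/uploads/

    Run it from cron and prune old dumps as space demands. One caveat worth weighing: both big drives sit in the same chassis as the SSD, so an off-site copy of at least the database dumps still guards against losing the whole machine.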

  • Did Adobe Photoshop just kill my graphics card for good?

    - by user6004
    I was working with Adobe Photoshop on some regular work, and when I went to edit a PSD file and change the text of a layer, the PC suddenly froze. No mouse, frozen screen, keystrokes getting me nothing, no Task Manager, nada. So I rebooted my PC, and then something quite terrifying appeared before my eyes. It was not the chkdsk utility that launched that terrified me (by the way, that reboot damaged the partition table of an external HDD connected to my PC at the time, but that's another story); it was the screen itself. (A photo of the artifact-covered screen was attached here.)

    After chkdsk finished and Windows loaded, I noticed the resolution was not right: instead of the 1440x900 I had set, it was 1280x1024. When I went to change it back, I had no option for my old resolution, only 3 other generic resolutions, as if my video card (a GeForce 8800 GTS, by the way) was not recognized. And what do you know, in Device Manager it appeared with an exclamation mark. In the hardware properties, it said:

        Windows has stopped this device because it has reported problems. (Code 43)

    Uninstalling the drivers, downloading the newest drivers from NVIDIA, and installing them did not work; it always comes back to this. So, do you have any advice before I go out and buy a new graphics card? I thought this was the only option left, but maybe the experts at Super User can help me out. By the way, the dotted screen appears on every reboot; I see the dots as soon as the ASUS motherboard screen shows up at boot. Thanks in advance.

  • SSH with public/private key to iMac fails.

    - by bennedich
    I'm trying to connect to my iMac (server) from my MacBook (client) on my LAN. Both have Mac OS X 10.6.4; the server is a fresh, clean install of the OS. When I just activate Remote Login in System Preferences, everything works fine. But when I set up ssh to work only with a public/private key, I get the following error messages in the server log, depending on whether I use an RSA passphrase.

    With passphrase (case 1):

        PAM: user account has expired for <myServerUserName> from 192.168.X.X via 192.168.X.Y

    Without passphrase (case 2):

        Failed publickey for <myServerUserName> from 192.168.X.X port AAAAA ssh2

    This is my setup procedure:

    1. Create a private and public key on the client with ssh-keygen -t rsa. In case 1 I also set a passphrase.
    2. Move id_rsa.pub to the server path /Users/<myServerUserName>/.ssh/
    3. In this folder, execute cat id_rsa.pub > authorized_keys
    4. Making sure Remote Login isn't active, execute sudo /usr/sbin/sshd -d on the server.
    5. Back on the client, type ssh -v -v -v <myServerUserName>@192.168.X.Y and get prompted to accept an RSA key fingerprint. This is NOT the same fingerprint as the one from when I created the private/public key (should it be?). I accept.

    Then, depending on the case:

    Case 1: The client is prompted for a password, and the response is permission denied even though the correct password is given. In the server log I see the error stated above for case 1 (PAM: user account has expired...).

    Case 2: The client gets the message Connection closed by 192.168.X.Y. In the server log I see the error stated above for case 2 (Failed publickey...).

    What could possibly cause this?
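
    Two notes that may save time here. First, the fingerprint shown on first connect is the server's host key, not your user key, so it is expected to differ from the one ssh-keygen printed. Second, a bare "Failed publickey" very often means sshd skipped the key because of permissions: with StrictModes on (the default), the home directory, ~/.ssh, and authorized_keys must not be group- or world-writable. A hedged fix-up to run on the iMac as <myServerUserName>:

        # tighten the permissions sshd insists on before trusting authorized_keys
        chmod go-w ~
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys

    Then re-run sudo /usr/sbin/sshd -d and retry from the client; the debug output states explicitly when it refuses a key file over ownership or modes.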

  • Debian - Problems Unmounting External Hard Drives

    - by user331981
    I recently installed Debian testing on a new laptop, and I just noticed that I am having some issues with unmounting external hard drives. I am using MATE desktop 1.8.1. In each case I right-click on the drive and select "safely remove":

    - 1st drive: the drive unmounts, spins down, then immediately spins back up and remounts. Unable to unmount.
    - 2nd drive: the drive unmounts but does not spin down.
    - 3rd drive: the drive unmounts but does not spin down, immediately spins back up without remounting, and after 20 seconds spins down and stays that way.

    The behavior is the same on both USB 2.0 and USB 3.0 ports. On my last laptop, on which I also used Debian testing + MATE, safe removal of drives worked out of the box and I never had an issue with it: the drives would unmount, spin down, and stay that way, and remounting required unplugging the device and plugging it back in. I am unsure how to troubleshoot this, and I don't know whether it is merely a matter of installing a "missing" package or editing a config file. Thank you in advance.
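
    One way to narrow this down is to bypass the desktop's "safely remove" action and drive udisks2 directly (assuming the first drive appears as /dev/sdb with its filesystem on /dev/sdb1; adjust to match lsblk output):

        # unmount the filesystem, then ask udisks to power the drive down
        udisksctl unmount -b /dev/sdb1
        udisksctl power-off -b /dev/sdb

    If this sequence spins the drive down cleanly and it stays down, the fault lies in what the desktop's action requests from udisks rather than in the kernel or the drives; if the drive still spins back up, something else (an automounter or a polling service) is waking it.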

  • Directory service unavailable: new hardware, same settings

    - by Alex
    I'm working on a project with two sites connected by a VPN. Site 1 has the main server, and there is a secondary server at site 2 which I am trying to replace. The current setup works perfectly, but I can't for the life of me get the replacement server at site 2 up and running; I'm replacing like for like, just with upgraded hardware. I have installed the OS (all Windows Server 2003 Standard SP2) and used exactly the same settings as the old server (except the IP address and name), and I have set up Active Directory and the DNS, DHCP, and WINS servers. I can access Active Directory, but I can't do anything: add, edit, and delete all return "the directory service is unavailable". No one can log in on any of the computers at site 2, and the internet is down. Plugging the old server back in and connecting it to the network rectifies the issue (with both new and old connected at site 2): everyone can log in and the internet is back. (Curious, since the modem connects directly to the switch; even with the new server online I can reach the router by IP, but not the net.) I really don't have much experience, but I've been roped into doing this because my company is too cheap to hire a real network admin. Any suggestions on where to start troubleshooting? It's driving me crazy, and I only have a day before all the users are back on site.
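
    A reasonable first step is checking whether the new box ever became a working domain controller at all, since "the directory service is unavailable" on a freshly built machine usually means promotion or replication never completed. A hedged sketch using tools from the Windows Server 2003 Support Tools (an assumption that they are installed):

        rem overall health checks on the new domain controller
        dcdiag /v
        rem replication status against the site 1 server
        repadmin /showrepl

    DNS is the usual culprit in this scenario: during dcpromo and until replication is confirmed, the new server's NIC should point at an existing DC (the site 1 server) for DNS rather than at the router, or the directory never syncs.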

  • Non-volatile cache RAID controllers: what kind of protection is there against NVCACHE failure?

    - by astrostl
    The battery back-up (BBU) model:

    - the admin enables write-back cache with a BBU present
    - writes are cached in the RAID controller's RAM (major performance benefit)
    - the battery preserves uncommitted cached data in the event of a power loss (reliability)

    If I lose power and come back within a day or so, my data should be both complete and uncorrupted. The downside is that if the battery is dead or low, or even if it is in a relearn cycle (drain/charge loops to ensure the battery's health), the controller reverts to write-through mode and performance suffers. What's more, the relearn cycles are usually automated on a schedule which may or may not land in the middle of heavy traffic, so if that's a concern they have to be manually disabled and manually scheduled for off-hours. Annoying either way.

    NV caches instead have capacitors with a sufficient charge to commit any uncommitted data to flash. Not only is that more survivable in longer outages, but you don't have to concern yourself with battery death, wear-out, or relearning. All of that sounds great to me. What doesn't sound great is the prospect of that flash module having an issue. What if it's completely hosed? Partially hosed? A bit corrupted at the edges? Relearn cycles can tell when something like a simple battery is failing, but is there a similar process to verify that the flash is functional? I'm just far more trusting of a battery, warts and all. I know the card's RAM can fail and the card itself can fail, but that's common territory. In case you didn't guess: yeah, I've experienced a shocking-to-me amount of flash/SSD/etc. failure :)

  • Internet keeps connecting and disconnecting

    - by OmerPT
    I have a PC running 32-bit Windows 7 Professional. My wireless card is an Edimax EW-7128G; it sits inside the computer case and is connected to an antenna. My problem: I have a wireless connection to my router, and the signal is fine and not slow or anything, but it keeps disconnecting every 30 seconds or so, reconnecting, then disconnecting again, over and over. This is new: it only started about a week ago, and everything worked fine until then. I know it's not a problem with the router, because other computers (and my phone) connect to it easily and work just fine, with no disconnections. I know it's not a problem with the computer or the wireless card itself, because I have Ubuntu installed side by side with Windows, and when I switch to Ubuntu the internet works great. This leads me to think it's probably a problem with the wireless card drivers installed on Windows (Ubuntu auto-installed its own; I installed them manually on Windows). I tried uninstalling them and then installing them again; sadly, that didn't help. I'm kind of lost here. Can anyone think of anything else I should try? Thanks :)

  • Picasa duplicates everything into a `$My Pictures` folder.

    - by Barend
    My parents have a Windows XP laptop running Picasa, Dutch versions of both. It's configured to put imported photos in C:\Documents and Settings\user\Mijn documenten\Mijn afbeeldingen, which is Windows XP's default My Pictures folder localized to Dutch. A while back I noticed disk space was disappearing much faster than it should. It turned out there was a Documents and Settings\user\Mijn documenten\$My Pictures folder pretty much the same size as the original one. This was well over a year ago. I figured it was an internationalization bug; the $My Pictures name looks like a placeholder that didn't get resolved, and I assumed it would get fixed soon. It didn't. I threw out Picasa, tediously cleaned up the mess it left behind, and replaced it with Windows Live Photo Gallery. My parents found WLPG unworkable and asked to get Picasa back. I reinstalled it; by now it's at 3.8.0 (build 117.29, 0). Wouldn't you know it, the mystery $My Pictures folder is back, 25 GB of disk space has evaporated, and it's the same mess it used to be. What's going on here? How do I stop it doing this?

  • Mirth: transforming a response message in a separate channel and returning it to the original channel

    - by Ryan H
    I have a channel that takes an HL7 v2 message and converts it to HL7 v3. It invokes a SOAP web service and receives a response in HL7 v3. I need to convert that response back to HL7 v2. Currently I "Send Response to:" my second channel, which can convert the response back to HL7 v2 just fine, but it doesn't seem to return a response message to the caller. I want that second transformation to be the response delivered to the initiator, which is an LLP Listener.

  • RTSP client in Android

    - by Vinay
    I am writing an RTSP client in Android. I am able to receive responses for all the requests:

    - DESCRIBE: the server sends back 200 OK
    - SETUP with Transport: RTP/AVP;unicast;client_port=4568-4569: got the 200 OK message back
    - PLAY: got the OK message

    After that, how do I get the audio and video frames? I have searched blogs, but they all say to listen on client_port, and I am not receiving any packets. Please let me know whether I am doing this correctly.

  • C# remote web request certificate error

    - by Ben
    Hi. I am currently integrating a payment gateway into an application. Following a successful transaction, the remote payment gateway posts details of the transaction back to my application (ITN). It posts to an HttpHandler that is used to read and validate the data. Part of the validation performed is a POST made by the handler to a validation service provided by the payment gateway; this effectively posts some of the original form values back to the payment gateway to ensure they are valid. The URL I am posting back to is "https://sandbox.payfast.co.za/eng/query/validate", and the code I am using:

        /// <summary>
        /// Posts the data back to the payment processor to validate the data received
        /// </summary>
        public static bool ValidateITNRequestData(NameValueCollection formVariables)
        {
            bool isValid = true;
            try
            {
                using (WebClient client = new WebClient())
                {
                    string validateUrl = (UseSandBox) ? SandboxValidateUrl : LiveValidateUrl;
                    byte[] responseArray = client.UploadValues(validateUrl, "POST", formVariables);

                    // get the response and replace the line breaks with spaces
                    string result = Encoding.ASCII.GetString(responseArray);
                    result = result.Replace("\r\n", " ").Replace("\r", "").Replace("\n", " ");

                    if (result == null || !result.StartsWith("VALID"))
                    {
                        isValid = false;
                        LogManager.InsertLog(LogTypeEnum.OrderError, "PayFast ITN validation failed", "The validation response was not valid.");
                    }
                }
            }
            catch (Exception ex)
            {
                LogManager.InsertLog(LogTypeEnum.Unknown, "Unable to validate ITN data. Unknown exception", ex);
                isValid = false;
            }

            return isValid;
        }

    However, calling WebClient.UploadValues raises the following exception:

        System.Net.WebException: The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel. ---> System.Security.Authentication.AuthenticationException: The remote certificate is invalid according to the validation procedure.
           at System.Net.Security.SslState.StartSendAuthResetSignal(ProtocolToken message, AsyncProtocolRequest asyncRequest, Exception exception)
           at ...

    For the sake of brevity I haven't listed the full call stack (I can do so if anyone thinks it will help). The remote certificate does appear to be valid. To get around the problem, I tried adding a RemoteCertificateValidationCallback that always returns true, but just ended up with the following exception:

        System.Security.SecurityException: Request for the permission of type 'System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed.
           at System.Security.CodeAccessSecurityEngine.Check(Object demand, StackCrawlMark& stackMark, Boolean isPermSet)
           at System.Security.CodeAccessPermission.Demand()
           at System.Net.ServicePointManager.set_ServerCertificateValidationCallback(RemoteCertificateValidationCallback value)
           at NopSolutions.NopCommerce.Payment.Methods.PayFast.PayFastPaymentProcessor.ValidateITNRequestData(NameValueCollection formVariables)

        The action that failed was: Demand
        The type of the first permission that failed was: System.Security.Permissions.SecurityPermission
        The Zone of the assembly that failed was: MyComputer

    So I am not sure this will work in medium trust. Any help would be much appreciated. Thanks, Ben

  • jQuery AutoComplete select firing after change?

    - by Zarigani
    I'm using the jQuery UI AutoComplete control (just updated to jQuery UI 1.8.1). Whenever the user leaves the text box, I want to set the contents of the text box to a known-good value and set a hidden ID field to the value that was selected. Additionally, I want the page to post back when the contents of the text box change. Currently I implement this by having the autocomplete select event set the hidden ID, plus a change event on the text box which sets the textbox value and, if necessary, causes a postback. If the user only uses the keyboard, this works perfectly: you can type, use the up and down arrows to select a value, and then tab out; the select event fires, the ID is set, and then the change event fires and the page posts back. If the user starts typing and then uses the mouse to pick from the autocomplete options, though, the change event fires (as focus shifts to the autocomplete menu?) and the page posts back before the select event has a chance to set the ID. Is there a way to get the change event to fire only after the select event, even when the mouse is used?

        $(function() {
            var txtAutoComp_cache = {};
            var txtAutoComp_current = { label: $('#txtAutoComp').val(), id: $('#hiddenAutoComp_ID').val() };

            $('#txtAutoComp').change(function() {
                if (this.value == '') { txtAutoComp_current = null; }
                if (txtAutoComp_current) {
                    this.value = txtAutoComp_current.label ? txtAutoComp_current.label : txtAutoComp_current;
                    $('#hiddenAutoComp_ID').val(txtAutoComp_current.id ? txtAutoComp_current.id : txtAutoComp_current);
                } else {
                    this.value = '';
                    $('#hiddenAutoComp_ID').val('');
                }
                // Postback goes here
            });

            $('#txtAutoComp').autocomplete({
                source: function(request, response) {
                    var jsonReq = '{ "prefixText": "' + request.term.replace('\\', '\\\\').replace('"', '\\"') + '", "count": 0 }';
                    if (txtAutoComp_cache.req == jsonReq && txtAutoComp_cache.content) {
                        response(txtAutoComp_cache.content);
                        return;
                    }
                    $.ajax({
                        url: 'ajaxLookup.asmx/CatLookup',
                        contentType: 'application/json; charset=utf-8',
                        dataType: 'json',
                        data: jsonReq,
                        type: 'POST',
                        success: function(data) {
                            txtAutoComp_cache.req = jsonReq;
                            txtAutoComp_cache.content = data.d;
                            response(data.d);
                            if (data.d && data.d[0]) { txtAutoComp_current = data.d[0]; }
                        }
                    });
                },
                select: function(event, ui) {
                    if (ui.item) { txtAutoComp_current = ui.item; }
                    $('#hiddenAutoComp_ID').val(ui.item ? ui.item.id : '');
                }
            });
        });

  • Android Resume Activity

    - by George
    In my app I start an activity, and then on a button click I display an HTTP URL (using an Intent with VIEW_ACTION). So when, in the middle of the activity, the user clicks the button called "Google", it opens google.com in the browser. When I hit the back button, it comes back to my original activity screen. How can I get my activity to resume from where it left off? Thanks, George

  • Audio recording and playback in Silverlight

    - by Ramesh
    I have a Silverlight 4 application that records the user's voice through the mic. As soon as the recording is completed, I need to play the recorded voice back to the user before posting it to the server. Is it at all possible to play it back to the user without getting into format conversions, etc.? Any ideas are welcome. Thanks!

  • jQuery history: inspecting the history stack

    - by nickmorss
    I'm using this history jQuery plugin (http://www.mikage.to/jquery/jquery_history.html) and I'm trying to inspect the back stack. I can call $.historyCurrentHash to return the current hash, but I'm trying to figure out how to look one step back in the stack. If I try calling $.historyBackStack, I just get 'undefined'. Anyone got any ideas? I can see that it's not a public variable, but I'm wondering if I need to modify the library or just call it in a different way.
