Search Results

Search found 14131 results on 566 pages for 'of note'.


  • Unable to mount XP share using fs-cifs from Linux

    - by MetalSearGolid
    I have a head unit that runs Linux and is connected to my PC via an Ethernet cable. The head unit needs to mount a Windows XP share on this PC, but mounting with the following command fails. Here is the command, along with its verbose output:

        # fs-cifs -vvvvvvvvv -l //CUMBRIA-XP:192.168.1.2:/hnet /mnt/net
        cifs[2158679-1]: starting...
        cifs[2158679-1]: user is to input both name & passwd.
        cifs[2158679-1]: server [192.168.1.2] share [hnet] prefix [/mnt/net] user [null] passwd [null]
        Welcome: 192.168.1.2(:/hnet) -> /mnt/net
        Username:headunit
        cifs[2158679-1]: user name: headunit length 8
        cifs[2158679-1]: new server
        Password:
        cifs[2158679-1]: establishing connection to (192.168.1.2)CUMBRIA-XP
        cifs[2158679-1]: session request: 192.168.1.2:CUMBRIA-XP -> localhost
        cifs[2158679-1]: negotiating smb dialect
        cifs[2158679-1]: skey(idx=2): 00000000, challenge:(8), 6137bfa2 f2d7803b
        cifs[2158679-1]: negotiation: success with dialect=2
        cifs[2158679-1]: logging headunit on 192.168.1.2
        cifs[2158679-1]: new packet
        cifs[2158679-1]: returning: mid 0 status= 0
        cifs[2158679-1]: smb_logon successful: dialect 2 enpass 1
        cifs[2158679-1]: mounting 192.168.1.2:/hnet
        cifs[2158679-1]: returning: mid 1 status= 13
        cifs[2158679-1]: smb_mount: Bad file descriptor
        cifs[2158679-1]: try upper case share.
        cifs[2158679-1]: session request: 192.168.1.2:CUMBRIA-XP -> localhost
        cifs[2158679-1]: negotiating smb dialect
        cifs[2158679-1]: skey(idx=2): 00000000, challenge:(8), 2d3e910f e3e148c4
        cifs[2158679-1]: negotiation: success with dialect=2
        cifs[2158679-1]: logging headunit on 192.168.1.2
        cifs[2158679-1]: returning: mid 2 status= 0
        cifs[2158679-1]: smb_logon successful: dialect 2 enpass 1
        cifs[2158679-1]: mounting 192.168.1.2:/HNET
        cifs[2158679-1]: returning: mid 3 status= 13
        cifs[2158679-1]: smb_mount: Bad file descriptor
        cifs[2158679-1]: mount failed.
        cifs[2158679-1]: io_mount: smb_connection failed: Bad file descriptor
        io_mount: Bad file descriptor
        cifs[2158679-1]: user is to input both name & passwd.
        fs-cifs: missing arguments, or all mount attempts failed.
        run "use fs-cifs" or "fs-cifs -h" for help.

    Any ideas? It is worth noting that /mnt does not exist on the filesystem, but the company that supplied these units told me fs-cifs should create the /mnt/net folders automatically if they don't exist.
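
    If the missing mount point is the culprit, one quick experiment is to create it by hand before retrying; a minimal sketch, assuming a standard shell on the head unit and that fs-cifs accepts a trailing username argument as described in the QNX docs:

        # create the mount point manually, then retry the mount with the
        # username on the command line (the password will be prompted for)
        mkdir -p /mnt/net
        fs-cifs -l //CUMBRIA-XP:192.168.1.2:/hnet /mnt/net headunit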

    Read the article

  • Setting up a transparent SSL proxy

    - by badunk
    I've got a Linux box set up with 2 network cards to inspect traffic going through port 80. One card goes out to the internet; the other is hooked up to a network switch. The point is to be able to inspect all HTTP and HTTPS traffic on devices hooked up to that switch for debugging purposes. I've written the following rules for iptables:

        nat
        -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.2.1:1337
        -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 1337
        -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE

    On 192.168.2.1:1337, I've got a transparent HTTP proxy using Charles (http://www.charlesproxy.com/) for recording. Everything's fine for port 80, but when I add similar rules for port 443 (SSL) pointing to port 1337, I get an error about an invalid message through Charles. I've used SSL proxying on the same computer before with Charles (http://www.charlesproxy.com/documentation/proxying/ssl-proxying/), but have been unsuccessful doing it transparently for some reason. Some resources I've googled say it's not possible; I'm willing to accept that as an answer if someone can explain why. As a note, I have full access to the described setup, including all the clients hooked up to the subnet, so I can accept self-signed certs generated by Charles. The solution doesn't have to be Charles-specific, since in theory any transparent proxy will do. Thanks!

    Edit: After playing with it a little, I was able to get it working for a specific host. I modified my iptables rules to the following (and opened 1338 in Charles as a reverse proxy):

        nat
        -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.2.1:1337
        -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 1337
        -A PREROUTING -i eth1 -p tcp -m tcp --dport 443 -j DNAT --to-destination 192.168.2.1:1338
        -A PREROUTING -i eth1 -p tcp -m tcp --dport 443 -j REDIRECT --to-ports 1338
        -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE

    Now I get a response, but with no destination host. In the reverse proxy, if I just specify that everything from 1338 goes to one specific host that I want to hit, it performs the handshake properly and I can turn on SSL proxying to inspect the communication. The setup is less than ideal, because I don't want to assume everything on 1338 goes to that host. Any idea why the destination host is being stripped? Thanks again
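
    A hedged side note on the "stripped" destination: DNAT and REDIRECT rewrite the packet's destination before the proxy ever sees it, so the proxy only learns the original target if it explicitly asks the kernel for it (or reads the SNI name from the TLS handshake). One sketch that avoids the rewrite entirely is a TPROXY-based setup; this assumes a proxy with TPROXY support, which Charles may not have:

        # mark 443 traffic for local delivery without rewriting its destination
        iptables -t mangle -A PREROUTING -i eth1 -p tcp --dport 443 \
            -j TPROXY --on-port 1338 --tproxy-mark 0x1/0x1
        # route marked packets to the local proxy socket
        ip rule add fwmark 0x1 lookup 100
        ip route add local 0.0.0.0/0 dev lo table 100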

    Read the article

  • How to make Firefox use TCP for DNS

    - by miniBill
    I want to use TCP for DNS, to bypass my ISP's slow and broken DNS servers. I'm not using (and don't want to use) a proxy. Note: I want to use DNS over TCP because if I use it over UDP, no matter what server I set, I get answers from my ISP's DNS. Notice that I will fiercely downvote whoever suggests: programs to do TCP over DNS; the setting in about:config to make DNS go over the proxy too (I'm not using a proxy); using another DNS (I've already set up Google as my DNS, but I get intercepted). Example of what I mean by "intercepted":

        $ dig @8.8.8.8 thepiratebay.se

        ; <<>> DiG 9.8.1 <<>> @8.8.8.8 thepiratebay.se
        ; (1 server found)
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 24385
        ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;thepiratebay.se.               IN      A

        ;; ANSWER SECTION:
        thepiratebay.se.        28800   IN      A       83.224.65.41

        ;; Query time: 50 msec
        ;; SERVER: 8.8.8.8#53(8.8.8.8)
        ;; WHEN: Sun Sep 16 22:51:06 2012
        ;; MSG SIZE  rcvd: 49

        $ dig +tcp @8.8.8.8 thepiratebay.se

        ; <<>> DiG 9.8.1 <<>> +tcp @8.8.8.8 thepiratebay.se
        ; (1 server found)
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 15131
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;thepiratebay.se.               IN      A

        ;; ANSWER SECTION:
        thepiratebay.se.        436     IN      A       194.71.107.15

        ;; Query time: 61 msec
        ;; SERVER: 8.8.8.8#53(8.8.8.8)
        ;; WHEN: Sun Sep 16 22:51:10 2012
        ;; MSG SIZE  rcvd: 49

    If it matters, I'm using Firefox 14 on Gentoo Linux.
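
    Since Firefox itself has no switch for the DNS transport, one hedged workaround is a tiny local resolver that accepts normal UDP queries and forwards them upstream over TCP only, with the OS (and therefore Firefox) pointed at 127.0.0.1. A minimal sketch using Unbound, assuming it is installed and that the option names match your version's unbound.conf(5):

        # /etc/unbound/unbound.conf
        server:
            interface: 127.0.0.1
            # talk to upstream servers over TCP only
            tcp-upstream: yes
        forward-zone:
            name: "."
            forward-addr: 8.8.8.8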

    Read the article

  • How to change SMP affinity of an IRQ on Ubuntu domU inside Xen XCP?

    - by Alexander Gladysh
    I'd like to change IRQ SMP affinity for the reasons outlined in this question: CPU0 is swamped with eth1 interrupts. But I can't: I get an Input/output error when I try to write to /proc/irq/*/smp_affinity. Please point me to a HOWTO on the matter. (A formal reference on /proc/irq/*/ would be cool as well.) Gory details; note that this is a VM inside an Ubuntu-based Xen XCP host:

        $ uname -a
        Linux MYHOST 2.6.38-15-virtual #59-Ubuntu SMP Fri Apr 27 16:40:18 UTC 2012 i686 i686 i386 GNU/Linux

        $ lsb_release -a
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 11.04
        Release:        11.04
        Codename:       natty

        $ sudo cat /proc/irq/*/smp_affinity
        01 01 01 01 01 80 80 80 80 80 80 40 40 40 40 40 40 20 20 20 20 20 20
        10 10 10 10 10 10 08 08 08 08 08 08 04 04 04 04 04 04 02 02 02 02 02
        02 01 01 01 01 01 01

    Update. The error details:

        $ N=$(grep -c processor /proc/cpuinfo)
        $ echo $N
        8
        $ printf %x $((2**N-1))
        ff
        $ printf %x $((2**N-1)) | sudo tee /proc/irq/*/smp_affinity
        ff
        tee: /proc/irq/288/smp_affinity: Input/output error
        tee: /proc/irq/289/smp_affinity: Input/output error
        tee: /proc/irq/290/smp_affinity: Input/output error
        tee: /proc/irq/291/smp_affinity: Input/output error
        tee: /proc/irq/292/smp_affinity: Input/output error
        tee: /proc/irq/293/smp_affinity: Input/output error
        tee: /proc/irq/294/smp_affinity: Input/output error
        tee: /proc/irq/295/smp_affinity: Input/output error
        tee: /proc/irq/296/smp_affinity: Input/output error
        tee: /proc/irq/297/smp_affinity: Input/output error
        tee: /proc/irq/298/smp_affinity: Input/output error
        tee: /proc/irq/299/smp_affinity: Input/output error
        tee: /proc/irq/300/smp_affinity: Input/output error
        tee: /proc/irq/301/smp_affinity: Input/output error
        tee: /proc/irq/302/smp_affinity: Input/output error
        tee: /proc/irq/303/smp_affinity: Input/output error
        tee: /proc/irq/304/smp_affinity: Input/output error
        tee: /proc/irq/305/smp_affinity: Input/output error
        tee: /proc/irq/306/smp_affinity: Input/output error
        tee: /proc/irq/307/smp_affinity: Input/output error
        tee: /proc/irq/308/smp_affinity: Input/output error
        tee: /proc/irq/309/smp_affinity: Input/output error
        tee: /proc/irq/310/smp_affinity: Input/output error
        tee: /proc/irq/311/smp_affinity: Input/output error
        tee: /proc/irq/312/smp_affinity: Input/output error
        tee: /proc/irq/313/smp_affinity: Input/output error
        tee: /proc/irq/314/smp_affinity: Input/output error
        tee: /proc/irq/315/smp_affinity: Input/output error
        tee: /proc/irq/316/smp_affinity: Input/output error
        tee: /proc/irq/317/smp_affinity: Input/output error
        tee: /proc/irq/318/smp_affinity: Input/output error
        tee: /proc/irq/319/smp_affinity: Input/output error
        tee: /proc/irq/320/smp_affinity: Input/output error
        tee: /proc/irq/321/smp_affinity: Input/output error
        tee: /proc/irq/322/smp_affinity: Input/output error
        tee: /proc/irq/323/smp_affinity: Input/output error
        tee: /proc/irq/324/smp_affinity: Input/output error
        tee: /proc/irq/325/smp_affinity: Input/output error
        tee: /proc/irq/326/smp_affinity: Input/output error
        tee: /proc/irq/327/smp_affinity: Input/output error
        tee: /proc/irq/328/smp_affinity: Input/output error
        tee: /proc/irq/329/smp_affinity: Input/output error
        tee: /proc/irq/330/smp_affinity: Input/output error
        tee: /proc/irq/331/smp_affinity: Input/output error
        tee: /proc/irq/332/smp_affinity: Input/output error
        tee: /proc/irq/333/smp_affinity: Input/output error
        tee: /proc/irq/334/smp_affinity: Input/output error
        tee: /proc/irq/335/smp_affinity: Input/output error

    Update. irqbalance is running:

        $ sudo service irqbalance status
        irqbalance start/running, process 560
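
    A hedged aside on narrowing this down: in a Xen domU the visible IRQs are event channels, and the hypervisor (not the guest) decides whether an affinity write is honoured, so EIO on every write is plausibly a platform limitation rather than a permissions problem. A per-file loop makes it easy to see whether any IRQ accepts the mask at all; a minimal bash sketch reusing the N computed above:

        # try each IRQ individually and report which writes succeed
        N=$(grep -c processor /proc/cpuinfo)
        for f in /proc/irq/*/smp_affinity; do
            if printf %x $((2**N-1)) | sudo tee "$f" >/dev/null 2>&1; then
                echo "ok   $f"
            else
                echo "EIO  $f"
            fi
        done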

    Read the article

  • Will this RAID5 setup work (3TB Seagate Barracudas + Adaptec RAID 6405)?

    - by Slayer537
    As the title states, will this RAID combo work, and if not, what needs to be changed? Overall opinions would be most helpful. I currently run a small file server of about 5TB or so. I keep outgrowing my needs and need to build a RAID setup that will let me expand as needed. I am new to RAID setups, especially at the scale I have currently planned, but I have been doing research for the past couple of weeks and have come up with a build. Ideally I'd have the setup completely built, but I'd like to keep the total cost around $1k and can't afford to go above $1.5k, so unfortunately that's not an option. Two of my current drives are 2TB WD Caviar Blacks; however, I have recently learned that due to the lack of TLER those drives are awful for any RAID setup other than 0 or 1. That being said, my third drive is a 3TB Seagate Barracuda (ST3000DM001), and I have found a RAID controller that states it supports it, so I'd like to use this same type of drive if possible. Have any of you had experience using this drive, or a similar one, in a RAID5 configuration? The manufacturer states that it is supported, but knowing that it is not an enterprise drive, I am slightly concerned that it could drop out of the array. I would just go with enterprise drives, but those are about double the cost...

    Parts list:

        Storage rack: http://www.ebay.com/itm/SGI-3U-Media-Storage-Server-16-Hard-Drive-Bay-SATA-SAS-Expander-Omnistor-SE3016-/140735776937?pt=LH_DefaultDomain_0&hash=item20c48188a9
        3 more HDs (for now..): http://www.amazon.com/Seagate-Barracuda-3-5-Inch-Internal-ST3000DM001/dp/B005T3GRLY/ref=dp_return_2?ie=UTF8&n=172282&s=electronics
        Adaptec RAID 6405: http://www.newegg.com/Product/Product.aspx?Item=N82E16816103224
        Compatibility sheet, if that helps: http://download.adaptec.com/pdfs/compatibility_report/arc-sas_cr_03-27-12_series6.pdf
        SAS expander cable: http://www.pc-pitstop.com/sas_cables_adapters/8887-2M.asp

    My plan is to install the RAID card in my computer and then route the SAS cable to the rack, set up a RAID5 on 3 drives, transfer my data over from my other drive, and then add that drive to the array. Eventually I'd like to get a 2U unit, run the file server on that, and move the RAID card over to it, but that will have to happen later on. Side note: the computer the card will go into runs Windows 7 Pro with 24GB of DDR3-1600 and an i7-930.

    Read the article

  • IRP_MJ_WRITE latency up to 15 seconds

    - by racitup
    We have written an application that performs small (22kB) writes to multiple files at once (one thread performing asynchronous queued writes to multiple locations on behalf of other threads) on the same local volume (RAID1). 99.9% of the writes are low-latency, but occasionally (maybe every minute or two) we get one or two huge-latency writes (I have seen 10 seconds and above) without any real explanation. Platform: Windows Server 2003 with NTFS. Monitoring: Sysinternals Process Monitor (see link below) and our own application logging. We have tried multiple things, gleaned from a few websites, to try to solve this, e.g.:

        - Making the first part of file names unique to aid 8.3 name generation
        - Writing files to multiple directories
        - Changing Intel Disk Write Caching
        - Windows File/Printer Sharing settings: Minimize memory used / Balance / Maximize data throughput for file sharing / Maximize data throughput for network applications
        - System > Advanced > Performance > Advanced settings
        - NtfsDisableLastAccessUpdate (use "fsutil behavior set disablelastaccess 1")
        - Disabling 8.3 name generation (use "fsutil behavior set disable8dot3 1" + restart)
        - Enabling a large file system cache
        - Disabling paging of the kernel code
        - IO Page Lock Limit
        - Turning the Indexing Service off (or on)

    But nothing seems to make much difference. There's a whole host of things we haven't tried yet, but we wondered if anyone had come across the same problem, a reason, and a solution (programmatic or not). We can reproduce the problem using IOMeter and a simple setup; the fsutil commands above are also shown after these steps:

        1. Start IOMeter and remove all but the first worker thread in 'Topology' using the disconnect button.
        2. Select the worker thread, put a cross in the box next to the disk you want to use in the Disk Targets tab, and put '2000000' in Maximum Disk Size (NOTE: you must have at least 1GB free space; sector size is 512 bytes).
        3. Create a new access specification and add it to the worker thread: Transfer Request Size = 22kB, 100% Sequential, Percent of Access Spec = 100%, Percent Read/Write = 100% Write.
        4. Change Results Display Update Frequency to 5 seconds, Test Setup Run Time to 20 seconds, and both 'Number of Workers to Spawn Automatically' settings to zero.
        5. Select the worker thread in the Topology panel and hit the Duplicate Worker button 59 times to create 60 threads with identical settings.
        6. Hit the 'Go' button (green flag) and monitor the Results tab. The 'Maximum I/O Response Time (ms)' always hits at least 3500 on our machine.

    Our machine isn't exactly slow (8-core Xeon rack server with 4GB RAM and onboard RAID). I'd be interested to see what other people get. We have a feeling it might be something to do with the NTFS filesystem (ours is currently 75% full of fragmented files) and we are going to try a few things around this principle. But it is also related to disk performance, since we don't see it on a RAMDisk and it's not as severe on a RAID10 array. Any help is much appreciated. Richard

    Right-click and select 'Open Link in New Tab': ProcMon Result
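
    For reference, the two fsutil tweaks named in the list above, as they would be run from an elevated command prompt (these are the commands quoted in the question; the 8.3 change only affects files created after a restart):

        fsutil behavior set disablelastaccess 1
        fsutil behavior set disable8dot3 1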

    Read the article

  • Laptop LCD sometimes stops working on reboot. Please help.

    - by J Ringle
    I have a Gateway P-6831FX laptop with Vista Ultimate. The laptop LCD will sometimes not come on after I reboot the computer. I don't even close the lid, and it happens. It isn't dim; it doesn't come on at all. No POST screen (BIOS), nothing. Please note: this happens sometimes, not every time. Frustrating! When plugged into an external monitor, which works fine, Vista display properties can't even "sense" the laptop LCD. I try to enable the laptop LCD for dual display, turning on the laptop LCD, and it does nothing. It's like the laptop LCD is not even there. Manually holding a magnet to the laptop lid sensing switch (the sensor that turns off the display/enters sleep mode when you close the lid) sometimes causes the LCD backlight to "turn on" but not display any images. By "turn on" I mean I can see the screen backlight come on to a dark gray screen instead of pitch black. On the next reboot the laptop display is not working again! Here are the facts:

        - Only happens at random, and only after a reboot.
        - Waking from sleep mode isn't a problem.
        - Pressing the F4 function key for dual display does nothing when this happens.
        - Closing the lid doesn't seem to be related (unless it is only after reboot).
        - Using an external magnet on the laptop screen sensor sometimes triggers the backlight to turn on, but after a reboot it's back to square one with no LCD display.
        - An external display always works fine.
        - I have taken apart the LCD and checked all wires and ribbons for loose connections or damage.
        - I have replaced the inverter.
        - It doesn't seem to be heat related, as I can put it in sleep mode and resume fine when very hot (the external monitor works fine too).
        - Sometimes the screen works fine, as if there were no problem at all, even after a reboot... this is random.

    Any ideas out there? If it is a bad part, which one? The LCD seems to be fine. What are the odds of 2 bad inverters? The backlight is fine. The LCD wires/ribbons seem to be fine. I am at a loss. No warranty left, and Gateway tech support is clueless. Thanks for any feedback that might help.

    Read the article

  • Benchmarking hosting providers IO with Bonnie

    - by Derek Organ
    OK, because of a bunch of projects I'm working on, I have access to dedicated servers at 3 hosting providers. As an experiment, and for educational purposes, I decided to see if I could benchmark how good the IO is with each. A bit of research led me to Bonnie++, so I installed it on each server and ran this simple command:

        /usr/sbin/bonnie -d /tmp/foo

    The 3 machines at the different hosting providers are all sold as dedicated machines; one is a VPS, and the other two are on some cloud platform (e.g. VMware/Xen) using some kind of clustered SAN for storage. This might be a naive thing to do, but here are the results I found.

    HOST A

        Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                            -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
        Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
        xxxxxxxxxxxxxxxx 1G 45081  88 56244  14 19167   4 20965  40 67110   6  67.2   0
                            ------Sequential Create------ --------Random Create--------
                            -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                      files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                         16 15264  28 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
        xxxxxxxx,1G,45081,88,56244,14,19167,4,20965,40,67110,6,67.2,0,16,15264,28,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++

    HOST B

        Version 1.03d       ------Sequential Output------ --Sequential Input- --Random-
                            -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
        Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
        xxxxxxxxxxxx     4G 43070  91 64510  15 19092   0 29276  47 39169   0 448.2   0
                            ------Sequential Create------ --------Random Create--------
                            -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                      files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                         16 24799  52 +++++ +++ +++++ +++ 25443  54 +++++ +++ +++++ +++
        xxxxxxx,4G,43070,91,64510,15,19092,0,29276,47,39169,0,448.2,0,16,24799,52,+++++,+++,+++++,+++,25443,54,+++++,+++,+++++,+++

    HOST C

        Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                            -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
        Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
        xxxxxxxxxxxxx 1536M 15598  22 85698  13 258969 20 16194  22 723655 21 +++++ +++
                            ------Sequential Create------ --------Random Create--------
                            -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                      files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                         16 14142  22 +++++ +++ 18621  22 13544  22 +++++ +++ 17363  21
        xxxxxxxx,1536M,15598,22,85698,13,258969,20,16194,22,723655,21,+++++,+++,16,14142,22,+++++,+++,18621,22,13544,22,+++++,+++,17363,21

    OK, so first off, what is the best way to read these figures, and are there any issues with really comparing these numbers? Is this in any way a true representation of IO speed? If not, is there any way for me to test that? Note: these 3 machines are using either Ubuntu or Debian (I presume that doesn't really matter).
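
    For more comparable runs, a hedged sketch (flag meanings per bonnie++(8); the host label is a placeholder): pinning the test file size to at least twice each machine's RAM keeps the page cache from flattering the numbers, and labelling each run makes the CSV lines self-describing:

        # -s: file size in MB, -m: machine label in the output, -u: user to run as
        bonnie++ -d /tmp/foo -s 4096 -m hostA -u nobody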

    Read the article

  • Windows Question: RunOnce/Second Boot Issues

    - by Greg
    I am attempting to create a Windows XP SP3 image that will run my application on second boot. Here is the intended workflow:

        1) Run an image prep utility (I wrote) on Windows to add my RunOnce entries and clean a few things up.
        2) Reboot to Ghost and make the image file.
        3) Package it into my ISO and distribute.
        4) The system is imaged by the user.
        5) On first boot, about 5 things run, one of which is a driver updater (I wrote) for my own specific devices.
        6) One of the entries inside HKCU\...\RunOnce is a .reg file, which adds another key to HKLM\...\RunOnce. This is how second boot is acquired.
        7) As a result of the driver updater, the user is prompted to reboot.
        8) My application is then launched from HKLM\...\RunOnce on second boot.

    This workflow works perfectly, except on a select few legacy systems containing devices that cause the Add Hardware wizard to pop up; that is when I begin to see problems. It's important to note that if I manually inspect the registry after the Add Hardware wizard pops up, it appears as I would expect: all the first-boot scripts have run, and it's sitting in the state I would expect for a second-boot scenario. The problem comes when I click Next on the Add Hardware wizard: it seems to re-run the single entry I've added, re-executing the RunOnce scripts (only one script now, as it has already executed and cleared out the initial entries). This causes my application to open as if it were a second boot, but only when Next is clicked on the Add Hardware wizard. If I click Cancel and reboot, then it also works as expected. I don't care as much about other solutions, because I could design a system that doesn't fully rely on Microsoft's registry. I simply can't find any information as to WHY this is happening. I believe this is some type of Microsoft issue that's presenting itself as a result of an overstretched image that's expected to support too many legacy platforms, but any help that can be provided would be appreciated. Thanks,
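
    For concreteness, a hedged sketch of the chaining .reg file described in step 6 (the key path is the standard RunOnce location; the value name and application path are placeholders):

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnce]
        "SecondBootApp"="C:\\MyApp\\MyApp.exe"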

    Read the article

  • uploading via http post (multipart/form-data) silently fails with big files

    - by matteo
    When uploading multipart/form-data forms via an HTTP POST request to my Apache web server, very big files (i.e. 30MB) are silently discarded. On the server side, it looks as if the attached file was received with a size of 0 bytes. On the client side, it looks like the upload succeeded (it takes the expected long time, and the browser gives no error message). On the server, nothing is logged to the error log, and an entry is logged to the access log as if everything were OK (a POST request and a 200 OK response). These uploads are posted to a PHP script. In the PHP script, if I print_r $_FILES, I see the following information for the relevant file:

        [file5] => Array
            (
                [name] => MOV023.3gp
                [type] => video/3gpp
                [tmp_name] => /tmp/phpgOdvYQ
                [error] => 0
                [size] => 0
            )

    Note both [error] = 0 (which should mean no error) and [size] = 0 (as if the file were empty). My PHP script runs fine and receives all the rest of the data except these files. move_uploaded_file succeeds on these files and actually copies them as 0-byte files. I've already changed the PHP directives upload_max_filesize to 50M and post_max_size to 200M, so neither the single file nor the request exceeds any size limit. max_execution_time is not relevant, because the time taken to transfer the data does not count, and I've increased max_input_time to 1000 seconds, though this shouldn't be necessary since that is the time taken to parse the input data, not the time taken to upload it. Is there any Apache configuration, prior to PHP, that could cause these files to be discarded even before PHP executes? Some limit on size or upload time? I've read about a default 300-second timeout limit, but that should apply to the time the connection is idle, not the time spent actually transferring data, right? Needless to say, uploads under exactly identical conditions (including file format, client and everything) except a smaller file size work seamlessly, so the issue is clearly related to the file or request size, or to the time it takes to send it.
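
    Two Apache-side limits that act before PHP ever runs are worth a hedged check (the directive names are real; the config paths are assumptions about your layout): LimitRequestBody in core Apache, and, if mod_security is loaded, its own request body limit:

        # look for body-size limits in the Apache and mod_security configs
        grep -Ri "LimitRequestBody" /etc/apache2/ /etc/httpd/ 2>/dev/null
        grep -Ri "SecRequestBodyLimit" /etc/apache2/ /etc/httpd/ 2>/dev/null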

    Read the article

  • Windows 7 + Deep Freeze - I'm stuck in an endless reboot loop

    - by myermian
    I have the following setup: Windows 7 Ultimate with Deep Freeze. I "thawed" my machine last night and performed a Windows Update. The update is having issues (it gets stuck at 32%, fails, and restarts my machine). When the machine reboots, it attempts the update again, and again, and again, etc. (an endless loop). I looked online and found some solutions, but none of them seem to be working:

        - When I run Safe Mode, Safe Mode w/ Networking, or Safe Mode w/ Command Prompt, it attempts to revert the Windows Update changes. However, with Deep Freeze on (and now in "Frozen" mode), the reverted changes don't stick, and I'm back in the loop of death. Side note: "Safe Mode w/ Command Prompt" does not actually take me to a command prompt window, perhaps because it is attempting to complete the Windows Update changes first?
        - I have tried selecting the option NOT to restart when a Windows error occurs, but it still restarts.
        - I tried all the remaining options in the F8 screen.
        - The only other option left is to find my Windows 7 media disc (I can't find it right now) and use it to repair Windows (because for some reason the repair option does not show up in the F8 screen).

    Is there a way to keep Deep Freeze from loading? When I selected "Safe Mode w/ Command Prompt" I noticed that it loads the DpFrz.sys file. I know that when I'm in the Windows Boot Manager, if I press F10 instead of F8 (while highlighting Windows 7), it takes me to an "Edit Boot Options" screen:

        Edit Windows boot options for: Windows 7
        Path: \Windows\system32\winload.exe
        Partition: 2
        Hard Disk: 8e90e329
        [ /NOEXECUTE=OPTIN (I CAN EDIT THIS LINE) ]

    Update: I found my Windows 7 media disc and it did not help. The laptop had "System Restore" as a partition on the HDD. I later received (in the mail) a Windows 7 upgrade disc from Sony to upgrade my system from Windows Vista to Windows 7 Ultimate. I placed the disc in the DVD drive and it does not come up as a bootable disc. I'm going to try to find an alternative disc to see if I can get into a command prompt.

    Update 2: I got a Windows repair disc and got into a command prompt window. I went into the registry and disabled Deep Freeze. Also:

        - I renamed the Pending.xml file to Pending.old
        - I cleared out the Windows Temp directory

    I am still stuck in the loop (though it isn't an issue with Deep Freeze anymore, because I can make changes to the hard drive and they persist). Not sure what to do at this point.

    Update 3: I ran the repair option and it couldn't repair, but it did point me to something. It says the error was due to a driver that was failing. I have a feeling it is my UPEK fingerprint scanner.
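
    A hedged follow-up to Update 3: from the repair disc's command prompt, a failing driver in the offline installation can be disabled by loading its SYSTEM hive and setting the driver service's Start value to 4 (disabled). The hive path is standard; "FailingDriver" below is a placeholder for the actual service key name, which would need to be identified first:

        reg load HKLM\Offline C:\Windows\System32\config\SYSTEM
        reg add "HKLM\Offline\ControlSet001\Services\FailingDriver" /v Start /t REG_DWORD /d 4 /f
        reg unload HKLM\Offline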

    Read the article

  • gmail dkim=neutral (no signature)

    - by Bretticus
    After much testing and retracing of my steps, I still cannot get Google Mail to validate my DKIM signatures. My mail server is Debian 5.0 with Exim:

        Exim version 4.72 #1 built 31-Jul-2010 08:12:17
        Copyright (c) University of Cambridge, 1995 - 2007
        Berkeley DB: Berkeley DB 4.8.24: (August 14, 2009)
        Support for: crypteq iconv() IPv6 PAM Perl Expand_dlfunc GnuTLS move_frozen_messages Content_Scanning DKIM Old_Demime
        Lookups: lsearch wildlsearch nwildlsearch iplsearch cdb dbm dbmnz dnsdb dsearch ldap ldapdn ldapm mysql nis nis0 passwd pgsql sqlite
        Authenticators: cram_md5 cyrus_sasl dovecot plaintext spa
        Routers: accept dnslookup ipliteral iplookup manualroute queryprogram redirect
        Transports: appendfile/maildir/mailstore/mbx autoreply lmtp pipe smtp
        Fixed never_users: 0
        Size of off_t: 8
        GnuTLS compile-time version: 2.4.2
        GnuTLS runtime version: 2.4.2
        Configuration file is /var/lib/exim4/config.autogenerated

    My remote_smtp transport configuration:

        remote_smtp:
          debug_print = "T: remote_smtp for $local_part@$domain"
          driver = smtp
          helo_data = mailer.mydomain.com
          dkim_domain = mydomain.com
          dkim_selector = mailer
          dkim_private_key = /etc/exim4/dkim/mailer.mydomain.com.key
          dkim_canon = relaxed
          .ifdef REMOTE_SMTP_HOSTS_AVOID_TLS
          hosts_avoid_tls = REMOTE_SMTP_HOSTS_AVOID_TLS
          .endif
          .ifdef REMOTE_SMTP_HEADERS_REWRITE
          headers_rewrite = REMOTE_SMTP_HEADERS_REWRITE
          .endif
          .ifdef REMOTE_SMTP_RETURN_PATH
          return_path = REMOTE_SMTP_RETURN_PATH
          .endif
          .ifdef REMOTE_SMTP_HELO_FROM_DNS
          helo_data=REMOTE_SMTP_HELO_DATA
          .endif

    The path to my private key is correct. I see a DKIM header in my messages as they end up in my Gmail account:

        DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=mydomain.com; s=mailer;
          h=Content-Type:MIME-Version:Message-ID:Date:Subject:Reply-To:To:From;
          bh=nKgQAFyGv<snip>tg=;
          b=m84lyYvX6<snip>RBBqmW52m1ce2g=;

    However, Gmail's headers always report dkim=neutral (no signature):

        dkim=neutral (no signature) [email protected]

    My DNS results:

        $ dig +short txt mailer._domainkey.mydomain.com
        mailer._domainkey.mydomain.com descriptive text "v=DKIM1\; k=rsa\; t=y\; p=LS0tLS1CRUdJ<snip>M0RRRUJBUVV" "BQTRHTkFEQ0J<snip>GdLamdaaG" "JwaFZkai93b3<snip>laSCtCYmdsYlBrWkdqeVExN3gxN" "mpQTzF6OWJDN3hoY21LNFhaR0NjeENMR0FmOWI4Z<snip>tLQo="

    Note that the base64 public key is 364 characters long, so I had to break it up into multiple strings using BIND 9:

        $ORIGIN _domainkey.mydomain.com.
        mailer TXT ("v=DKIM1; k=rsa; t=y; p=LS0tLS1CRUdJTiBQVUJM<snip>U0liM0RRRUJBUVV"
                    "BQTRHTkFEQ0JpUUtCZ1<snip>15MGdLamdaaG"
                    "JwaFZkai93b3lDK21MR<snip>YlBrWkdqeVExN3gxN"
                    "mpQTzF6OWJDN3hoY21L<snip>Ci0tLS0tRU5E"
                    "IFBVQkxJQyBLRVktLS0tLQo=")

    Can anyone point me in the right direction? I would really appreciate it.
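
    One hedged observation worth checking: "LS0tLS1CRUdJ" is the base64 encoding of "-----BEGI", which suggests the published p= value is a base64-encoded PEM file rather than the raw base64 key material a DKIM record expects. Decoding what DNS actually serves makes this easy to confirm:

        # if this prints "-----BEGIN PUBLIC KEY-----", the record holds a whole
        # PEM file; republish only the base64 between the PEM markers instead
        dig +short txt mailer._domainkey.mydomain.com \
          | tr -d '" \\' | sed 's/.*p=//' | base64 -d | head -c 40; echo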

    Read the article

  • Basic Auth on DirectoryIndex Only

    - by Brad
    I am trying to configure basic auth for my index file, and only my index file. I have configured it like so:

        <Files index.htm>
            Order allow,deny
            Allow from all
            AuthType Basic
            AuthName "Some Auth"
            AuthUserFile "C:/path/to/my/.htpasswd"
            Require valid-user
        </Files>

    When I visit the page, 401 Authorization Required is returned as expected, but the browser doesn't prompt for the username/password. Further inspection revealed that Apache is not sending the WWW-Authenticate header:

        GET http://myhost/ HTTP/1.1
        Host: myhost
        Connection: keep-alive
        User-Agent: Mozilla/5.0 (Windows NT 5.1) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.100 Safari/534.30
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

        HTTP/1.1 401 Authorization Required
        Date: Tue, 21 Jun 2011 21:36:48 GMT
        Server: Apache/2.2.16 (Win32)
        Content-Length: 401
        Keep-Alive: timeout=5, max=100
        Connection: Keep-Alive
        Content-Type: text/html; charset=iso-8859-1

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>401 Authorization Required</title>
        </head><body>
        <h1>Authorization Required</h1>
        <p>This server could not verify that you are authorized to access the document requested. Either you supplied the wrong credentials (e.g., bad password), or your browser doesn't understand how to supply the credentials required.</p>
        </body></html>

    Why is Apache doing this? How can I configure it to send that header appropriately? It is worth noting that this exact same set of directives works fine if I set it for a whole directory; it is only when I apply it to a directory index that it does not work. This is how I know my .htpasswd and such are fine. I am using Apache 2.2 on Windows. On another note, I found this listed as a bug in Apache 1.3, which leads me to believe that this is actually a configuration problem on my end.
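
    One hedged workaround sketch: match on the request URL rather than the on-disk filename, so the bare "/" request (which mod_dir maps to index.htm via an internal subrequest) is covered by the same auth block. The directives are valid in Apache 2.2; whether it sidesteps this particular behaviour is an assumption to test:

        <LocationMatch "^/(index\.htm)?$">
            AuthType Basic
            AuthName "Some Auth"
            AuthUserFile "C:/path/to/my/.htpasswd"
            Require valid-user
        </LocationMatch>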

    Read the article

  • Unable to start SQL Server Instance 2008 R2 - DB file corrupt

    - by Velu
    I was not able to start our production SQL Server 2008 R2 DB instance. The error message in the log file is:

        The log scan number passed to log scan in database 'master' is not valid. This error may indicate data corruption or that the log file (.ldf) does not match the data file (.mdf). If this error occurred during replication, re-create the publication.

    After reading several posts, I realized that my master DB file is corrupted, so I followed these steps: copy the Master.mdf and Masterlog.ldf files from the template location to my database data folder, i.e. from

        C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn\Templates

    to

        D:\MSSQL\MSSQL10_50.MSSQLSERVER\MSSQL\DATA

    Note: the same error occurs when I copy all the DB files: Master, MasterLog, MSDBData, MSDBLog, Model and ModelLog. When I then run my MSSQLSERVER instance, a different problem occurs. My server has only C: and D: drives; there is no E: drive. How can I override the paths in the errors below?

    Error log:

        2012-10-24 02:51:12.79 spid5s Error: 17204, Severity: 16, State: 1.
        2012-10-24 02:51:12.79 spid5s FCB::Open failed: Could not open file e:\sql10_main_t.obj.x86fre\sql\mkmastr\databases\objfre\i386\MSDBData.mdf for file number 1. OS error: 3(The system cannot find the path specified.).
        2012-10-24 02:51:12.79 spid5s Error: 5120, Severity: 16, State: 101.
        2012-10-24 02:51:12.79 spid5s Unable to open the physical file "e:\sql10_main_t.obj.x86fre\sql\mkmastr\databases\objfre\i386\MSDBData.mdf". Operating system error 3: "3(The system cannot find the path specified.)".
        2012-10-24 02:51:12.79 spid5s Error: 17207, Severity: 16, State: 1.
        2012-10-24 02:51:12.79 spid5s FileMgr::StartLogFiles: Operating system error 2(The system cannot find the file specified.) occurred while creating or opening file 'e:\sql10_main_t.obj.x86fre\sql\mkmastr\databases\objfre\i386\MSDBLog.ldf'. Diagnose and correct the operating system error, and retry the operation.
        2012-10-24 02:51:12.79 spid5s File activation failure. The physical file name "e:\sql10_main_t.obj.x86fre\sql\mkmastr\databases\objfre\i386\MSDBLog.ldf" may be incorrect.
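
    For what it's worth, a hedged sketch of the documented alternative to hand-copying template files: rebuilding the system databases from the original installation media, which rewrites master/model/msdb with file paths that match the instance (parameters per Microsoft's "Rebuild System Databases" documentation; the instance name and admin account below are assumptions):

        setup.exe /QUIET /ACTION=REBUILDDATABASE /INSTANCENAME=MSSQLSERVER ^
            /SQLSYSADMINACCOUNTS=BUILTIN\Administrators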

    Read the article

  • Install Ubuntu 12.04 in UEFI mode on a HP Pavilion dv6-6c40ca

    - by Marlen T. B.
    I have recently (as of July 2012) bought an HP Pavilion dv6-6c40ca laptop. It came pre-installed with Windows 7 on an MBR disk. I installed Ubuntu 12.04 on a GPT partition in what I think is BIOS emulation mode, creating a BIOS-grub partition so the install wouldn't fail (that is what it is for, right?). Now I want to upgrade to UEFI mode. How would I install Ubuntu 12.04 in UEFI mode on an HP Pavilion dv6-6c40ca? Or is it impossible? My laptop, despite being new, may not be UEFI 2.0+ capable. If it isn't, how can I install a software UEFI (i.e. a DUET such as the one by TianoCore)? Or is this also impossible? A link to my laptop's specs: http://h10025.www1.hp.com/ewfrf/wc/document?docname=c03137924&tmp_task=prodinfoCategory&cc=ca&dlc=en&lang=en&lc=en&product=5218530. My laptop should have a UEFI, given this link from HP: http://h10025.www1.hp.com/ewfrf/wc/document?cc=us&lc=en&docname=c01442956#N218. From that link I draw a quote: "That means most notebooks distributed with Windows Vista, and all notebooks distributed with Windows 7, have the UEFI environment." My laptop had Windows 7 Home Premium pre-installed.

    OK. Following the comments so far (NOTE: I am trying to do this on an external drive so I can see if it works), I have:

        - Partitioned the drive as GPT using GParted.
        - Created a 200MB partition at the beginning of the drive with a FAT32 file system.
        - Given the 200MB partition a label of "EFI".
        - Set the boot flag on the 200MB partition.

    What should I do next to install Ubuntu 12.04? Given the link https://help.ubuntu.com/community/UEFIBooting#Selecting_the_.28U.29EFI_Graphic_Protocol, in my first read-through (just to see if I will understand everything before I start) I get to step 2.3, "Install GRUB2 in (U)EFI systems". Its first line is: "Boot into Linux (any live ISO) preferably in UEFI mode." Um... how do you tell what mode your live CD is in?! And how do you change it if the mode is wrong?
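
    On that last question, a widely used check from inside a running live session: the kernel exposes /sys/firmware/efi only when the session was booted via UEFI, so its presence answers the question directly.

        # prints "UEFI" if the live session booted in UEFI mode
        [ -d /sys/firmware/efi ] && echo UEFI || echo "BIOS/legacy"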

    Read the article

  • Using WSUS Admin Console from outside domain

    - by Nick
    Environment: I have a workstation on our primary domain. We have a primary WSUS server that is the upstream server for 8 different testing domains. The primary WSUS server is not part of any domain. Routing is configured between my workstation and the primary WSUS server; I can RDP to it without any problem, and the router is configured to forward any/any between my workstation and the server. This WSUS server cannot be part of a domain due to external requirements (I can't change them) on the lab I work in. The WSUS version is 3.0 SP2.

    What I want to do: connect to the WSUS server with the WSUS admin console from my local workstation. The end goal is to connect via PowerShell and manage it that way. I also need to take what I do here and port it to the 8 test domains so I can manage those WSUS servers. The routing is all in place so I can talk to the servers; it's just connecting to the WSUS console that is causing problems.

    The problem: I cannot get my workstation to connect to the WSUS console. I get one of the following errors, depending on the setup.

    1st error:

        Cannot connect to 'WSUS'. You do not have the permissions required to access this WSUS server. To connect to the server you must be a member of the WSUS Administrators or WSUS Reporters security groups

    I also get warning 7012 in the event log, which says the same thing.

    2nd error:

        Cannot connect to 'WSUS'. The server may be using another port or different Secure Sockets Layer setting.

    What I have tried: so far I have configured IIS for Anonymous Authentication on both the WSUS Administration site and ApiRemoting30, using an account I'll call WSUS_User. With this in place I get the 1st error, and the local WSUS console cannot be used either. Reverting to Windows Authentication only lets the local console work again, but the remote console then gives the 2nd error. I have confirmed the port, and that no SSL is in use (a policy pushed from above that I cannot affect). I have placed WSUS_User in the groups mentioned above, but it still does not connect. I made sure WSUS_User has full access on C:\Program Files\Update Services and C:\Program Files\Update Services\WebServices. I am not very familiar with the workings of WSUS or IIS, and have gone as far as I can figure out on my own. Googling these errors leads to the same steps about Anonymous Authentication and configuring permissions on folders. Note: I have cross-posted this to Stack Overflow as well.
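
    Once the connection problem is solved, the PowerShell end goal can be sketched like this (the assembly load and AdminProxy call are the standard WSUS 3.0 API; the server name is a placeholder, and port 80 assumes the default non-SSL WSUS website):

        [void][Reflection.Assembly]::LoadWithPartialName("Microsoft.UpdateServices.Administration")
        $wsus = [Microsoft.UpdateServices.Administration.AdminProxy]::GetUpdateServer("wsus-server", $false, 80)
        $wsus.GetStatus()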

    Read the article

  • Can't install new database in OpenLDAP 2.4 with BDB on Debian

    - by Timothy High
    I'm trying to install an OpenLDAP server (slapd) on a Debian EC2 instance. I have followed all the instructions I can find, and am using the recommended slapd-config approach to configuration. It all seems to be fine, except that for some reason it can't create my new database. ldap.conf.bak (renamed to ensure it's not being used):

        ##########
        # Basics #
        ##########
        include /etc/ldap/schema/core.schema
        include /etc/ldap/schema/cosine.schema
        include /etc/ldap/schema/nis.schema
        include /etc/ldap/schema/inetorgperson.schema

        pidfile /var/run/slapd/slapd.pid
        argsfile /var/run/slapd/slapd.args
        loglevel none
        modulepath /usr/lib/ldap
        # modulepath /usr/local/libexec/openldap
        moduleload back_bdb.la

        database config
        #rootdn "cn=admin,cn=config"
        rootpw secret

        database bdb
        suffix "dc=example,dc=com"
        rootdn "cn=manager,dc=example,dc=com"
        rootpw secret
        directory /usr/local/var/openldap-data

        ########
        # ACLs #
        ########
        access to attrs=userPassword
            by anonymous auth
            by self write
            by * none
        access to *
            by self write
            by * none

    When I run slaptest on it, it complains that it couldn't find the id2entry.bdb file:

        root@server:/etc/ldap# slaptest -f ldap.conf.bak -F slapd.d
        bdb_db_open: database "dc=example,dc=com": db_open(/usr/local/var/openldap-data/id2entry.bdb) failed: No such file or directory (2).
        backend_startup_one (type=bdb, suffix="dc=example,dc=com"): bi_db_open failed! (2)
        slap_startup failed (test would succeed using the -u switch)

    Using the -u switch it works, of course. But that merely creates the configuration; it doesn't resolve the underlying problem:

        root@server:/etc/ldap# slaptest -f ldap.conf.bak -F slapd.d -u
        config file testing succeeded

    Looking in the database directory, the basic files are there (with the right ownership, after a manual chown), but the bdb file wasn't created:

        root@server:/etc/ldap# ls -al /usr/local/var/openldap-data
        total 4328
        drwxr-sr-x 2 openldap openldap    4096 Mar 1 15:23 .
        drwxr-sr-x 4 root     staff       4096 Mar 1 13:50 ..
        -rw-r--r-- 1 openldap openldap    3080 Mar 1 14:35 DB_CONFIG
        -rw------- 1 openldap openldap   24576 Mar 1 15:23 __db.001
        -rw------- 1 openldap openldap  843776 Mar 1 15:23 __db.002
        -rw------- 1 openldap openldap 2629632 Mar 1 15:23 __db.003
        -rw------- 1 openldap openldap  655360 Mar 1 14:35 __db.004
        -rw------- 1 openldap openldap 4431872 Mar 1 15:23 __db.005
        -rw------- 1 openldap openldap   32768 Mar 1 15:23 __db.006
        -rw-r--r-- 1 openldap openldap    2048 Mar 1 15:23 alock

    (Note that, because I'm doing this as root, I also had to change ownership of some of the files created by slaptest.) Finally, I can start the slapd service, but it dies in the attempt (text from syslog):

        Mar 1 15:06:23 server slapd[21160]: @(#) $OpenLDAP: slapd 2.4.23 (Jun 15 2011 13:31:57) $#012#011@incagijs:/home/thijs/debian/p-u/openldap-2.4.23/debian/build/servers/slapd
        Mar 1 15:06:23 server slapd[21160]: config error processing olcDatabase={1}bdb,cn=config:
        Mar 1 15:06:23 server slapd[21160]: slapd stopped.
        Mar 1 15:06:23 server slapd[21160]: connections_destroy: nothing to destroy.

    I manually checked the olcDatabase={1}bdb file, and it looks fine to my amateur eye; all my specific configs are there. Unfortunately, syslog isn't reporting a specific error in this case (if it were a file permission error, it would say). I've tried uninstalling and reinstalling slapd, changing permissions, and Googling my wits out, but I'm tapped out. Any OpenLDAP genius out there would be greatly appreciated!
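
    One hedged way to force the BDB files into existence is to load a minimal root entry with slapadd, which creates id2entry.bdb as a side effect (the suffix and paths match the config above; run it while slapd is stopped, and fix ownership afterwards):

        cat > base.ldif <<'EOF'
        dn: dc=example,dc=com
        objectClass: dcObject
        objectClass: organization
        o: Example
        dc: example
        EOF
        slapadd -f ldap.conf.bak -b "dc=example,dc=com" -l base.ldif
        chown -R openldap:openldap /usr/local/var/openldap-data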

    Read the article

  • Error Installing ruby with RVM Single User mode on Arch Linux

    - by ChrisBurnor
    I've just installed RVM on Arch Linux x64 in single-user mode via the recommended install script:

        curl -L https://get.rvm.io | bash -s stable

    I've also installed all the requirements listed in rvm requirements. However, I'm having trouble actually installing any version of Ruby, and get the following error:

        arch:~ % rvm install 1.9.3
        No binary rubies available for: ///ruby-1.9.3-p194.
        Continuing with compilation. Please read 'rvm mount' to get more information on binary rubies.
        Fetching yaml-0.1.4.tar.gz to /home/christopher/.rvm/archives
          % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                         Dload  Upload   Total   Spent    Left  Speed
        100  460k  100  460k    0     0   702k      0 --:--:-- --:--:-- --:--:--  767k
        Extracting yaml-0.1.4.tar.gz to /home/christopher/.rvm/src
        Prepare yaml in /home/christopher/.rvm/src/yaml-0.1.4.
        Configuring yaml in /home/christopher/.rvm/src/yaml-0.1.4.
        Error running ' ./configure --prefix=/home/christopher/.rvm/usr ', please read /home/christopher/.rvm/log/ruby-1.9.3-p194/yaml/configure.log
        Compiling yaml in /home/christopher/.rvm/src/yaml-0.1.4.
        Error running 'make', please read /home/christopher/.rvm/log/ruby-1.9.3-p194/yaml/make.log
        Please note that it's required to reinstall all rubies: rvm reinstall all --force
        Installing Ruby from source to: /home/christopher/.rvm/rubies/ruby-1.9.3-p194, this may take a while depending on your cpu(s)...
        ruby-1.9.3-p194 - #downloading ruby-1.9.3-p194, this may take a while depending on your connection...
        ruby-1.9.3-p194 - #extracting ruby-1.9.3-p194 to /home/christopher/.rvm/src/ruby-1.9.3-p194
        ruby-1.9.3-p194 - #extracted to /home/christopher/.rvm/src/ruby-1.9.3-p194
        Skipping configure step, 'configure' does not exist, did autoreconf not run successfully?
        ruby-1.9.3-p194 - #compiling
        Error running 'make', please read /home/christopher/.rvm/log/ruby-1.9.3-p194/make.log
        There has been an error while running make. Halting the installation.

    The log files are as follows:

        arch:~ % cat ~/.rvm/log/ruby-1.9.3-p194/yaml/configure.log
        __rvm_log_command:32: permission denied:
        arch:~ % cat ~/.rvm/log/ruby-1.9.3-p194/yaml/make.log
        make: *** No targets specified and no makefile found.  Stop.
        arch:~ % cat ~/.rvm/log/ruby-1.9.3-p194/make.log
        make: *** No targets specified and no makefile found.  Stop.
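
    The "permission denied" when rvm runs ./configure is the lead worth chasing; two hedged checks are whether the build area sits on a filesystem mounted noexec, and whether the extracted configure script lost its exec bit:

        # is /home (or /tmp) mounted noexec?
        mount | grep -E '(/home|/tmp).*noexec'
        # does the extracted configure script have its exec bit?
        ls -l ~/.rvm/src/yaml-0.1.4/configure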

    Read the article

  • Magento - Users unable to login from corporate networks with Bluecoat / F5 Load balancers

    - by user1330440
    Hoping someone has come across this issue before with Magento and corporate clients. We have two clients for our Magento site whose internal networks are set up with Bluecoat security devices and F5 load balancers. Some users within these networks are unable to log in to Magento: it eventually sends a 302 redirect to /index.php/ when these users attempt to log in. In our testing, the problem appears to be isolated to this setup: we can log into the accounts in question from anywhere outside those networks without issue, and if the client accesses the site without going through the F5 load balancer, they are able to log in successfully. Strangely enough, the issue only started occurring for the two sites the day after we introduced a system upgrade that added a new site to the Magento installation. The upgrade should not have affected any standard login functionality, and, as said, the problem does not appear to lie with the users in question but with where they access the site from. Initially we thought the issue might have something to do with communication between the clients' networks and the network the server is hosted on, so we tried moving the server to different hosts, but this has not helped. I'm currently waiting for more info from the clients on the exact devices/models used in their network setups, and will update this post if more information becomes available. The Magento version is Enterprise Edition 1.9.0.0. Does anyone know of any tucked-away Magento settings that might cause this kind of behavior? Experience with this kind of setup and ideas for things to look at? All help and ideas for things to follow up on would be appreciated, as this is a current production issue for a large number of users. I will respond ASAP to any requests for additional information on the topic, but currently am not able to disclose any identifying information on the project and/or the clients experiencing issues. Thanks in advance for any assistance offered :)

    Note: this question has also been posted on the Magento forums (http://www.magentocommerce.com/boards/viewthread/277917/) and on Stack Overflow, from where it was moved here as a commenter thought this site may be better suited (http://stackoverflow.com/questions/10133978/magento-users-unable-to-login-from-corporate-networks-with-bluecoat-f5-load).
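
    One hedged place to look, given the symptom (logins silently bounced for users behind proxies): Magento can validate sessions against request properties such as REMOTE_ADDR, HTTP_VIA and X-Forwarded-For, and a Bluecoat/F5 pair that varies those per request can invalidate every session as soon as it is created. Whether that is what's happening here is an assumption, but the relevant flags are easy to inspect:

        -- session-validation overrides (rows exist only where a value
        -- has been changed from the default)
        SELECT path, value FROM core_config_data
        WHERE path LIKE 'web/session/validate%';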

    Read the article

  • Client A can ping server S, but client B cannot

    - by Soundar Rajan
    (I moved this question here from stackoverflow.com: http://stackoverflow.com/questions/2917569/unable-to-ping-server-from-client-b-but-able-to-ping-from-client-a-please-help)

    I am trying to reach an IIS 6.0/Windows Server 2003 web server hosting an ASP.NET application. When I try to ping the server from client computer A, I get the following:

        PING 74.208.192.xxx     ==> ping fails
        PING 74.208.192.xxx:80  ==> ping succeeds!

    From client computer B, BOTH pings fail:

        PING 74.208.192.xxx     ==> ping fails
        PING 74.208.192.xxx:80  ==> ping fails with "Ping request could not find host 74.208.192.xxx:80"

    Both clients A and B are on the same subnet. The server is outside (a virtual server hosted by an ISP). I have an ASP.NET application in a virtual directory on the server. In IE or Firefox, entering http://74.208.192.xxx/subdir/subdir/../Default.aspx works from both clients! The server has default firewall settings with the web server enabled (port 80 is open). From client A (note the different 'Reply from' address when the ping succeeds):

        C:\Program Files\Microsoft Visual Studio 9.0\VC>ping 74.208.192.xx

        Pinging 74.208.192.xx with 32 bytes of data:
        Request timed out.
        ...
        Request timed out.

        Ping statistics for 74.208.192.xx:
            Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),

        C:\Program Files\Microsoft Visual Studio 9.0\VC>ping 74.208.192.xx:80

        Pinging 74.208.192.xx:80 [208.67.216.xxx] with 32 bytes of data:
        Reply from 208.67.216.xxx: bytes=32 time=35ms TTL=54
        ...
        Reply from 208.67.216.xxx: bytes=32 time=33ms TTL=54

        Ping statistics for 208.67.216.xxx:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 32ms, Maximum = 54ms, Average = 38ms

    From client B:

        C:\Documents and Settings\user>ping 74.208.192.81

        Pinging 74.208.192.81 with 32 bytes of data:
        Request timed out.
        ...
        Request timed out.

        Ping statistics for 74.208.192.81:
            Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),

        C:\Documents and Settings\user>ping 74.208.192.81:80
        Ping request could not find host 74.208.192.81:80. Please check the name and try again.

    My main problem is that I have a web service (asmx) on the server, and the web service client program can access it from client A but not from client B. I am trying to find out why, and thought this ping issue might shed some light. I can ping yahoo.com from both computers.
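
    A hedged observation on the "successful" ping: ping has no notion of TCP ports, so "74.208.192.xx:80" is treated as a hostname, and the [208.67.216.xxx] it resolves to looks like OpenDNS address space, suggesting client A's resolver is redirecting an unresolvable name rather than actually reaching the server. To probe TCP port 80 itself from each client, something like this is more telling:

        telnet 74.208.192.xx 80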

    Read the article

  • Supervisord appears to be running, but monitored programs aren't launched

    - by Brad Montgomery
    I've got supervisord 3.0a8 installed from the system package on Ubuntu 10.04 (64-bit). The supervisor service appears to be running, but it's not launching the configured programs. Interestingly enough, this exact configuration is running on another system and works as expected. The main config file looks like this:

        ; /etc/supervisor/supervisord.conf
        [unix_http_server]
        chmod=0700
        file=/var/run/supervisor.sock

        [supervisord]
        logfile=/var/log/supervisor/supervisord.log
        childlogdir=/var/log/supervisor
        pidfile=/var/run/supervisord.pid

        [rpcinterface:supervisor]
        supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

        [supervisorctl]
        serverurl=unix:///var/run/supervisor.sock

        [include]
        files = /etc/supervisor/conf.d/*.conf

    A sample program config looks like this:

        ; /etc/supervisor/conf.d/sample.conf
        [program:sample]
        directory=/opt/sample
        command=/opt/sample/run.sh

    Where /opt/sample/run.sh is:

        #!/bin/bash
        while true; do
            T=`date`
            echo "[$T] Running!" >> /var/log/sample.log
            sleep 1
        done

    And here's some additional information regarding the running instance of supervisord:

        root@myhost:~# supervisorctl version
        3.0a8
        root@myhost:~# which supervisorctl
        /usr/bin/supervisorctl
        root@myhost:~# which supervisord
        /usr/bin/supervisord
        root@myhost:~# supervisorctl status
        # NOTE that there's no output!
        root@myhost:~# supervisorctl avail
        root@myhost:~# service supervisor status
        is running
        root@myhost:~# ps aux | grep supervisor
        root 21740 0.1 0.4 40772 10056 ? Ss 11:28 0:00 /usr/bin/python /usr/bin/supervisord
        root 21749 0.0 0.0 7624 932 pts/2 S+ 11:28 0:00 grep --color=auto supervisor
        root@myhost:~# cat /var/log/supervisor/supervisord.log
        2012-04-26 11:28:22,483 CRIT Supervisor running as root (no user in config file)
        2012-04-26 11:28:22,536 INFO RPC interface 'supervisor' initialized
        2012-04-26 11:28:22,536 WARN cElementTree not installed, using slower XML parser for XML-RPC
        2012-04-26 11:28:22,536 CRIT Server 'unix_http_server' running without any HTTP authentication checking
        2012-04-26 11:28:22,539 INFO daemonizing the supervisord process
        2012-04-26 11:28:22,539 INFO supervisord started with pid 21740
        root@myhost:~# ll /etc/supervisor/conf.d/
        total 28
        drwxr-xr-x 2 root root 4096 2012-04-26 11:31 ./
        drwxr-xr-x 3 root root 4096 2012-04-25 18:38 ../
        -rw-r--r-- 1 root root   66 2012-04-26 11:31 sample.conf
        root@myhost:~# ll /opt/sample/
        total 12
        drwxr-xr-x 2 root root 4096 2012-04-26 11:32 ./
        drwxr-xr-x 4 root root 4096 2012-04-26 11:31 ../
        -rwxr-xr-x 1 root root   97 2012-04-26 11:32 run.sh*
        root@myhost:~# python
        Python 2.6.5 (r265:79063, Apr 16 2010, 13:57:41)
        [GCC 4.4.3] on linux2
        Type "help", "copyright", "credits" or "license" for more information.
        >>>

    Any help is greatly appreciated!
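
    A hedged first step, since "supervisorctl avail" prints nothing: ask the running daemon to re-parse the include directory and report what it finds, rather than trusting the init script to have done so:

        supervisorctl reread   # re-parse config; lists changed/new program sections
        supervisorctl update   # apply the changes and start new programs
        supervisorctl avail    # should now list "sample"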

    Read the article

  • Why is vCenter 5.1u1 exiting hosts from maintenance mode?

    - by Shane Madden
    This vCenter server was just upgraded to 5.1 Update 1. I'm going through hosts, bringing firmware up to date, then upgrading them from various versions of 5.0 to 5.1u1. vCenter 5.1u1 seems to have an interesting new behavior: it's removing hosts from maintenance mode when they reconnect after being disconnected, but very inconsistently; I've seen it maybe 4 or 5 times in roughly 25-30 host reboots. I've only seen it happen on 5.0 hosts that have not yet been upgraded to 5.1. In the image, I placed the host in maintenance mode and rebooted it into the HP SPP DVD's automatic update mode. After its usual ~40 minute update process, the host came back online... and 7 seconds before even logging that the host had reconnected, vCenter had sent the host a task to exit maintenance mode. In my understanding, the only time vCenter should take a host out of maintenance mode is when vCenter put it into maintenance mode itself (such as for a VUM upgrade task). Why would this vCenter be unilaterally exiting a host from user-initiated maintenance mode?

    Edit, additional info: I ran the firmware upgrades on 5 more hosts, all at the same time. Two of them exited maintenance mode after reconnecting, three did not. The common factor among those exiting maintenance mode seemed to be how long they were offline; the two that took a few tries to boot to the virtual media are the two that got knocked out of maintenance mode:

        esx31 (image above): 45 minutes unresponsive
        esx19 (exited maint): 87 minutes unresponsive
        esx24 (stayed in maint): 32 minutes unresponsive
        esx29 (stayed in maint): 39 minutes unresponsive
        esx32 (stayed in maint): 30 minutes unresponsive
        esx34 (exited maint): 70 minutes unresponsive

    Edit: The disconnect-time idea seems to have been a red herring, as it's not happening consistently. Additionally, in the vpxd.log the initiation of the exit-maintenance-mode task always seems to immediately follow this vim.EnvironmentBrowser.queryProvisioningPolicy SOAP call. Here are the lines, slightly trimmed for clarity:

        15:27:49.535 [info 'vpxdvpxdVmomi'] [ClientAdapterBase::InvokeOnSoap] Invoke done (esx31, vim.EnvironmentBrowser.queryProvisioningPolicy)
        15:27:49.560 [info 'commonvpxLro'] [VpxLRO] -- BEGIN task -- esx31 -- HostSystem.exitMaintenanceMode --

    Note that on the hosts that don't get the exit task, the vim.EnvironmentBrowser.queryProvisioningPolicy event still occurs. I'm not seeing any other differences in events before or after this point in the reconnect process, aside from the extra events caused by exiting maintenance mode. Given the log's mention of provisioning policies, searching for Auto Deploy-related maintenance mode issues turns up complaints about similar behavior (though I'm not using Auto Deploy at all).
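
    For auditing who or what issued those exit tasks across the cluster, a hedged PowerCLI sketch (assumes a PowerCLI session connected to this vCenter; the event volume may call for a larger -MaxSamples):

        Get-VIEvent -MaxSamples 5000 |
            Where-Object { $_.FullFormattedMessage -match "maintenance mode" } |
            Select-Object CreatedTime, UserName, FullFormattedMessage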

    Read the article

  • Can't manage iPod from linux anymore

    - by kemp
    I used to be able to see and manage my iPod with several programs: Amarok, Rhythmbox, GtkPod. The device is a 4GB first-generation iPod nano. Currently it mounts normally and can be accessed through the filesystem, but I get this in dmesg:

        [ 1547.617891] scsi 11:0:0:0: Direct-Access Apple iPod 1.62 PQ: 0 ANSI: 0
        [ 1547.619103] sd 11:0:0:0: Attached scsi generic sg2 type 0
        [ 1547.620478] sd 11:0:0:0: [sdb] Adjusting the sector count from its reported value: 7999488
        [ 1547.620494] sd 11:0:0:0: [sdb] 7999487 512-byte hardware sectors: (4.09 GB/3.81 GiB)
        [ 1547.621718] sd 11:0:0:0: [sdb] Write Protect is off
        [ 1547.621726] sd 11:0:0:0: [sdb] Mode Sense: 68 00 00 08
        [ 1547.621732] sd 11:0:0:0: [sdb] Assuming drive cache: write through
        [ 1547.623591] sd 11:0:0:0: [sdb] Adjusting the sector count from its reported value: 7999488
        [ 1547.624993] sd 11:0:0:0: [sdb] Assuming drive cache: write through
        [ 1547.625003] sdb: sdb1 sdb2
        [ 1547.629686] sd 11:0:0:0: [sdb] Attached SCSI removable disk
        [ 1548.084026] FAT: utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
        [ 1548.369502] FAT: utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
        [ 1548.504358] FAT: invalid media value (0x2f)
        [ 1548.504363] VFS: Can't find a valid FAT filesystem on dev sdb1.
        [ 1548.945173] FAT: utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
        [ 1548.945179] FAT: invalid media value (0x2f)
        [ 1548.945182] VFS: Can't find a valid FAT filesystem on dev sdb1.
        [ 1610.092886] usb 2-6: USB disconnect, address 9

    The only application that can access it (partially) is Rhythmbox. I say partially because I can transfer files to the iPod but can't remove or modify them. Also, one transfer didn't finish, and only 9 of 16 songs made it to the device. All the other programs I tried (GtkPod, Amarok, Songbird) don't even detect it. What can I do to troubleshoot this?

    EDIT:

        # fdisk -l /dev/sdb

        Disk /dev/sdb: 4095 MB, 4095737344 bytes
        241 heads, 62 sectors/track, 535 cylinders
        Units = cylinders of 14942 * 512 = 7650304 bytes
        Disk identifier: 0x20202020

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1               1          11       80293+   0  Empty
        Partition 1 has different physical/logical beginnings (non-Linux?): phys=(0, 1, 1) logical=(0, 1, 2)
        Partition 1 has different physical/logical endings: phys=(9, 254, 63) logical=(10, 181, 8)
        Partition 1 does not end on cylinder boundary.
        /dev/sdb2              11         536     3919415+   b  W95 FAT32
        Partition 2 has different physical/logical beginnings (non-Linux?): phys=(10, 0, 7) logical=(10, 181, 15)
        Partition 2 has different physical/logical endings: phys=(497, 240, 62) logical=(535, 88, 61)

    EDIT 2: The "before" state is hard to pin down; it was a lot of updates ago. I haven't been using my iPod for a while, so I can't say exactly when it stopped working. I'm sure Amarok was still at version 1.x, but I can't remember when that was. My current system is Debian testing, fully updated.

    NOTE: I just noticed that if I mount the device manually instead of letting Nautilus automount it, I can see it again in GtkPod, but still not in Banshee, AND it has vanished from Rhythmbox...
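
    Since the manual mount behaves differently from the automount, it may help to record exactly which mount works; a hedged sketch of a typical manual vfat mount for the iPod's data partition (the mount point and options are assumptions):

        sudo mkdir -p /media/ipod
        sudo mount -t vfat -o uid=$(id -u),utf8 /dev/sdb2 /media/ipod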

    Read the article

  • radvd is not assigning prefix

    - by Samik
    I'm currently trying to set up IPv6 address autoconfiguration with the router advertisement daemon (radvd) on a virtual machine running CentOS 6.5, but the eth0 interface is not obtaining the prefix. I've obtained the ULA prefix from here.

    Contents of /etc/sysctl.conf:

    # Kernel sysctl configuration file for Red Hat Linux
    #
    # For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
    # sysctl.conf(5) for more details.

    # Controls IP packet forwarding
    net.ipv4.ip_forward = 0
    net.ipv6.conf.all.forwarding = 1

    # Controls source route verification
    net.ipv4.conf.default.rp_filter = 1

    # Do not accept source routing
    net.ipv4.conf.default.accept_source_route = 0

    # Controls the System Request debugging functionality of the kernel
    kernel.sysrq = 0

    # Controls whether core dumps will append the PID to the core filename.
    # Useful for debugging multi-threaded applications.
    kernel.core_uses_pid = 1

    # Controls the use of TCP syncookies
    net.ipv4.tcp_syncookies = 1

    # Disable netfilter on bridges.
    net.bridge.bridge-nf-call-ip6tables = 0
    net.bridge.bridge-nf-call-iptables = 0
    net.bridge.bridge-nf-call-arptables = 0

    # Controls the default maximum size of a message queue
    kernel.msgmnb = 65536

    # Controls the maximum size of a message, in bytes
    kernel.msgmax = 65536

    # Controls the maximum shared segment size, in bytes
    kernel.shmmax = 68719476736

    # Controls the maximum number of shared memory segments, in pages
    kernel.shmall = 4294967296

    Contents of /etc/radvd.conf:

    # NOTE: there is no such thing as a working "by-default" configuration file.
    # At least the prefix needs to be specified. Please consult the radvd.conf(5)
    # man page and/or /usr/share/doc/radvd-*/radvd.conf.example for help.
    #
    interface eth0
    {
        AdvSendAdvert on;
        MinRtrAdvInterval 3;
        MaxRtrAdvInterval 10;
        AdvDefaultPreference low;
        AdvHomeAgentFlag off;
        prefix fd8a:8d9d:808f:1::/64
        {
            AdvOnLink on;
            AdvAutonomous on;
            AdvRouterAddr on;
        };
    };

    Contents of /etc/sysconfig/network-scripts/ifcfg-eth0:

    DEVICE=eth0
    HWADDR=52:54:00:74:d7:46
    TYPE=Ethernet
    UUID=af5db1cb-e809-4098-be1a-5a74dbb767b1
    ONBOOT=yes
    NM_CONTROLLED=no
    BOOTPROTO=dhcp
    IPV6INIT=yes
    IPV6_AUTOCONF=yes

    I've also enabled radvd at startup through chkconfig, though I noticed that radvd starts after the interfaces are brought up. I've tried restarting the network service afterwards, but I still get only the following link-local address:

    # ip -6 addr show
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qlen 1000
        inet6 fe80::5054:ff:fe74:d746/64 scope link
           valid_lft forever preferred_lft forever

    Edit: Based on the answer given by Sander Steffann, I still need clarification on some points, but I'm posting here what worked.

    Contents of /etc/sysconfig/network:

    NETWORKING=yes
    HOSTNAME=syslog-ng-server
    NETWORKING_IPV6=yes
    IPV6FORWARDING=yes

    Contents of /etc/sysconfig/network-scripts/ifcfg-eth0:

    DEVICE=eth0
    HWADDR=52:54:00:74:d7:46
    TYPE=Ethernet
    UUID=af5db1cb-e809-4098-be1a-5a74dbb767b1
    ONBOOT=yes
    NM_CONTROLLED=no
    BOOTPROTO=dhcp
    IPV6INIT=yes
    IPV6_AUTOCONF=yes
    IPV6FORWARDING=no

    Removed the following line from /etc/sysctl.conf:

    net.ipv6.conf.all.forwarding = 1

    The contents of /etc/radvd.conf are unchanged from above.
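
    (A hedged note on the likely mechanism, not from the original post: Linux ignores router advertisements on an interface whose IPv6 forwarding is enabled, unless that interface's accept_ra sysctl is set to 2. Enabling forwarding globally in sysctl.conf therefore stops eth0 from autoconfiguring against its own radvd, which is consistent with the fix above. A quick way to verify, assuming radvd and the standard sysctl tooling are installed:)

    # Watch the router advertisements radvd actually emits on the wire
    radvdump
    # Check whether the kernel will honor RAs on eth0 (2 = accept even with forwarding on)
    sysctl net.ipv6.conf.eth0.accept_ra
    # If forwarding must stay enabled, this per-interface override lets SLAAC work anyway
    sysctl -w net.ipv6.conf.eth0.accept_ra=2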

    Read the article

  • Total newb having SSH tunnel and remote MySQL access problems

    - by kscott
    I don't often work with Linux or need to SSH tunnel into remote MySQL databases, so pardon my ignorance. I'm using Windows 7 and need to connect to a remote MySQL instance on a Linux server. For months I had been using the HeidiSQL client successfully. Today two things happened: the DB moved to a new server, and I updated HeidiSQL. Now I cannot log in to the MySQL server; when attempting, I get this message from Heidi:

    SQL Error (2003) in statement #0: Can't connect to MySQL server on 'localhost' (10061)

    If I use PuTTY, I can connect to the server and get MySQL access through the command line, including fetching data from the DB. I assume this means my credentials and address are correct, but I don't understand why putting those same details into HeidiSQL's SSH tunnel settings won't work. I also downloaded MySQL Workbench and attempted to set up a connection through that client, and got this message:

    Cannot Connect to Database Server
    Your connection attempt failed for user 'myusername' from your host to server at localhost:3306: Lost connection to MySQL server at 'reading initial communication packet', system error: 0
    Please:
    1. Check that mysql is running on server localhost
    2. Check that mysql is running on port 3306 (note: 3306 is the default, but this can be changed)
    3. Check that myusername has rights to connect to localhost from your address (mysql rights define which clients can connect to the server and from which machines)
    4. Make sure you are both providing a password if needed and using the correct password for localhost, connecting from the host address you're connecting from

    From Googling around I see that it could be related to the MySQL bind-address, but I am a third-party subcontractor with no access to the MySQL settings of this box, and the system admin is assuring me that I'm an idiot and need to figure it out on my end. This is completely possible, but I don't know what else to try.

    Edit 1 - The client settings I am using. In Heidi and MySQL Workbench I am using the following:

    SSH host + port: theHostnameOfTheRemoteServer.com:22  {this is the same host I can PuTTY to}
    SSH Username: mySSHusername  {the same user name I use for my PuTTY connection}
    SSH Password: mySSHpassword  {the same password for the PuTTY connection}
    Local port: 3307  {this is on the SSH settings tab and was defaulted to 3307 by Heidi; changing it to 3306 gives me a different error: SQL Error (1045) in statement #0: Access denied for user 'mySQLusername'@'localhost' (using password: YES)}
    MySQL host: theHostnameOfTheRemoteServer.com  {consensus seems to be I should use 'localhost' here}
    MySQL User: mySQLusername  {which I can connect with once in with PuTTY}
    MySQL Password: mySQLpassword  {which works once in with PuTTY}
    Port: 3306
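
    (A quick way to take the GUI clients out of the equation -- a sketch reusing the same placeholder hostnames and usernames from the settings above. Build the tunnel by hand, then point a command-line MySQL client at its local end; if this works, the tunnel itself is fine and the problem is in the client's settings:)

    # Forward local port 3307 to MySQL on the remote box; on Windows,
    # PuTTY's plink.exe accepts the same -L syntax as OpenSSH
    ssh -L 3307:localhost:3306 mySSHusername@theHostnameOfTheRemoteServer.com
    # In a second terminal, connect through the tunnel
    mysql -h 127.0.0.1 -P 3307 -u mySQLusername -p

    (Note that the MySQL host field in the GUI clients should then be 'localhost', since the connection arrives at the server over the tunnel, not from the Windows machine's address.)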

    Read the article

< Previous Page | 467 468 469 470 471 472 473 474 475 476 477 478  | Next Page >