Search Results

Search found 6258 results on 251 pages for 'trouble'.

Page 165/251 | < Previous Page | 161 162 163 164 165 166 167 168 169 170 171 172  | Next Page >

  • Why can't this user connect to domain share?

    - by Saariko
    As part of reorganizing credentials in the domain, I have created several users that will be used solely for services (backup, LDAP, etc.). The idea is that a system that needs specific access will use a dedicated service user that gives it exactly what it needs. However, I am having trouble setting up the required permissions. For this example, I have a NAS (ReadyNAS 1100 by Netgear) that runs its own backup jobs. The job reads from a domain share, \domain\qa, and copies all data to another location. When using domain\administrator everything works. When I enter the domain\srv.backup user I get an error connecting to the folder. srv.backup is part of the 'Domain Admins' group, which is a member of 'Administrators'. I thought there might be propagation issues, but even when srv.backup was a direct member of 'Administrators' the error still occurred. I have 2 DCs (W2K8R2 replicas) - I thought that could also be a cause, but AFAICT it's not the issue. Sharing permissions are open to Everyone, and the security on the folder and the test window from the NAS dashboard are shown in the screenshots. I double-checked that 'srv.domain' is part of the 'Domain Admins' group, and I also tried with a simple 1-9 password. What else do I need to check? Thanks.

    Read the article

  • Winamp question: Generating 'dynamic playlists' from file playlists -OR- mass-tagging by file playlist

    - by Daddy Warbox
    I'm trying to think of a way to do this. I sort my songs into a variety of playlists corresponding to different 'moods' I might be in as I listen to them, and some songs fit more than one mood (e.g. a jazz song might be 'stylish' and 'emotional', or something to that effect). I also give them star ratings as a general opinion of them. I want to be able to filter and sort my media library by the moods I want or don't want, as well as by star rating. Does anyone have a good way to do something like this? I can't seem to use Winamp's dynamic playlists to generate lists from existing filesystem playlists (e.g. the songs in a given .m3u file). Hand-tagging files with Winamp's tag editor is a royal pain; it's trouble enough just giving a star rating and sorting into playlists as it is. If there is a way to mass-tag the songs within each playlist with mood words, so that I can build dynamic playlists from the tags, I'd be fine (for now). It'd be nice if I could do this via some kind of hotkey for each song, too - I'm looking to see if I can use a macro program or something for that. Thanks in advance. P.S.: Alternatively, would something like Foobar have functions like this?

    Read the article

  • Very strange networking problem in all computers in my house

    - by Anthony
    I have three computers in my house: One desktop (wired), and two laptops (wireless). I'm using Cox Communications (yes they suck), and yesterday they had a major outage. I know it was them because I called them up when I started losing connection to the internet. All the computers can connect just fine, but they don't have internet access. It just says "local only". The weird thing is, some of them work occasionally. For the first day my laptop was working perfectly, while all the other computers couldn't connect. Later on in the day it got reversed, and the desktop was the one with internet access. By the second day the problem on Cox's end was fixed, but we still had no access. I called them up and they reset my modem, and did the usual troubleshooting stuff. It never fixed the problem, but we found out that the problem had to do with conflicting IP addresses. My router was a Linksys WRT54G and it was about 5 years old. I figured it might have gotten damaged from the outage since it was so old, and now it's having trouble "fixing itself" and giving out the proper IP addresses. So I bought a new router, a Cisco Linksys E1000. I set everything up, and still the same problem. My computer has access right now (that's how I'm writing this), but no other computers seem to be able to get access. Is there possible damage to the modem? Can someone help me please? Sorry for this being so long.

    Read the article

  • CentOS networking BNX2

    - by james moore
    Having some trouble with my NICs. The server starts fine and I can wget/ping etc. However, when I run /etc/init.d/networking restart I receive the following error:

        Bringing up interface eth0: bnx2: fw sync timeout, reset code = 1030009
        SIOCSIFFLAGS: Device or resource busy

    Consequently, the task fails. I have searched around on Google; users suggest disabling PnP in the BIOS, but I see no such option. Here is some system information:

        $ ethtool -i eth0
        driver: bnx2
        version: 2.0.8-rh
        firmware-version: bc 2.9.1

        $ uname -a
        Linux host 2.6.18-238.9.1.el5 #1 SMP Tue Apr 12 18:10:13 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux

        $ /sbin/lspci | grep Broadcom
        04:00.0 PCI bridge: Broadcom EPB PCI-Express to PCI-X Bridge (rev c3)
        05:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM5700 Gigabit Ethernet (rev 12)
        08:00.0 PCI bridge: Broadcom EPB PCI-Express to PCI-X Bridge (rev c3)
        09:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM5700 Gigabit Ethernet (rev 12)

        $ lsmod | grep bnx2
        bnx2i                  81704  0
        cnic                  109512  1 bnx2i
        libiscsi2              77765  6 be2iscsi,ib_iser,iscsi_tcp,bnx2i,cxgb3i,libiscsi_tcp
        scsi_transport_iscsi2  73945  8 be2iscsi,ib_iser,iscsi_tcp,bnx2i,cxgb3i,libiscsi2
        bnx2                  224780  0
        scsi_mod              199001  15 mpt2sas,scsi_transport_sas,mptctl,be2iscsi,ib_iser,iscsi_tcp,bnx2i,cxgb3i,libiscsi2,scsi_transport_iscsi2,scsi_dh,sg,libata,megaraid_sas,sd_mod

        $ rmmod bnx2; modprobe bnx2
        PCI: Enabling device 0000:05:00.0 (0158 -> 015a)
        PCI: Enabling device 0000:09:00.0 (0158 -> 015a)
        bnx2: fw sync timeout, reset code = 10300003

    Any help would be appreciated as I am at a loss.

    Read the article

  • Bluehost Emails Getting Blocked

    - by colithium
    A site for my client has the run-of-the-mill "website with users" email pattern: create an account, get an activation email; get an email when a subscription is expiring; and so on. The site is hosted on Bluehost and currently it uses PHP's mail() function. There isn't much configuration that is allowed (as far as I know). The trouble is, about a third of these emails disappear into the void. They aren't in spam or junk folders, there's no bounce message, they just cease to exist. I've read about Bluehost email troubles but I can't figure out what my options are for fixing it. These aren't marketing emails, i.e. they have user-specific information contained within them. I suppose if a solution offers a good templating system that would be fine. What are my options? Excerpt of the headers when delivered to a Gmail address:

        Received-SPF: neutral (google.com: 00.000.000.000 is neither permitted nor denied by best guess record for domain of domain@box###.bluehost.com) client-ip=00.000.000.000;
        DomainKey-Status: good
        Authentication-Results: mx.google.com; spf=neutral (google.com: 00.000.000.000 is neither permitted nor denied by best guess record for domain of domain@box###.bluehost.com) smtp.mail=domain@box###.bluehost.com; domainkeys=pass [email protected]

    Read the article

  • Multiple Internet connections, multiple networks and split access in Linux

    - by Swapneel Patnekar
    I am having trouble setting up multiple internet connections for split access in Linux. We have 3 internet connections from 3 different ISPs. We want to configure our Linux gateway machine so that our three internal networks 10.2.1.0/24, 192.168.20.0/24 and 192.168.2.0/24 use ISP1, ISP2 and ISP3 respectively, in a split-access manner. Outlined below are the layout and settings. Interfaces of the Linux gateway connected to the routers:

        eth0: 10.1.1.2     <----> 10.1.1.1     (internal interface of ADSL router) [ISP1]
        eth1: 192.168.15.2 <----> 192.168.15.1 (internal interface of 3G router)   [ISP2]
        eth3: 192.168.1.2  <----> 192.168.1.1  (internal interface of ADSL router) [ISP3]

    Kindly note that none of the interfaces on the Linux gateway has a public static IP address. The routers of ISP1 and ISP2 get assigned a dynamic public IP address when connected to the Internet; the router of ISP3 has been assigned a public static IP address. Interfaces of the Linux gateway connected to a switch:

        eth4:   10.2.1.1     (LAN interface for ISP1)
        eth4:0  192.168.20.1 (LAN interface for ISP2)
        eth4:1  192.168.2.1  (LAN interface for ISP3)

    eth4:0 and eth4:1 are virtual interfaces, with eth4 being the interface connected physically. Based on http://linux-ip.net/html/adv-multi-internet.html I've set the following routes:

        ip route flush table 4
        ip route show table main | grep -Ev ^default | while read ROUTE ; do
            ip route add table 4 $ROUTE
        done
        ip route add table 4 default via 192.168.15.1
        ip rule add fwmark 4 table 4
        ip route flush cache

    Additionally, I am using the following iptables rules to mark and route packets as per the guide mentioned above: http://pastebin.com/KzWHFGJA At this point, computers on the 192.168.2.0/24 network are able to reach the Internet through ISP3, but 10.2.1.0/24 and 192.168.20.0/24 are unable to access the Internet through ISP1 and ISP2 respectively. Any inputs will be much appreciated!
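
    For reference, the split-access approach in the linked guide needs one routing table, one fwmark rule and one NAT rule per ISP; only the table for the 3G router (table 4) is shown above. Below is a minimal sketch of what the missing pieces typically look like - the table numbers, marks and MASQUERADE rules are illustrative assumptions, not the poster's actual configuration (which is in the pastebin), and each table would normally also get a copy of the main table's local routes, as the guide describes.

        # Sketch only: one table + one mark per ISP, gateways as described above.
        ip route add table 1 default via 10.1.1.1    dev eth0   # ISP1 (ADSL)
        ip route add table 3 default via 192.168.1.1 dev eth3   # ISP3 (ADSL)

        # Send marked traffic to the matching table (table 4 / mark 4 already exists)
        ip rule add fwmark 1 table 1
        ip rule add fwmark 3 table 3

        # Mark traffic by source LAN so each internal network uses "its" ISP
        iptables -t mangle -A PREROUTING -s 10.2.1.0/24     -j MARK --set-mark 1
        iptables -t mangle -A PREROUTING -s 192.168.20.0/24 -j MARK --set-mark 4
        iptables -t mangle -A PREROUTING -s 192.168.2.0/24  -j MARK --set-mark 3

        # NAT out of whichever WAN interface the packet leaves on
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
        iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
        iptables -t nat -A POSTROUTING -o eth3 -j MASQUERADE

        ip route flush cache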

    Read the article

  • Linux Mounting Problem

    - by Sam
    I have an Iomega network attached storage device on my Windows network. I am trying to use a Clonezilla live USB flash drive to back up my netbook to that NAS. The Clonezilla USB flash drive runs Linux. I'm having trouble getting the NAS to mount using the following command:

        mount -t cifs -o username="myUsername" //192.168.1.100/backup /home/partimg

    The response from Linux is:

        [134.730738] CIFS VFS: cifs_mount failed w/return code = -6 retrying with upper case share name
        [134.788461] CIFS VFS: cifs_mount failed w/return code = -6
        mount error(6): No such device or address
        Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

    I also tried adding the domain to my username, username="myUsername,domain=workgroup", but that did not change the error. I am able to ping the NAS from Linux on my netbook. I also booted from a Slax live USB flash drive and Slax auto-mounted the NAS via Samba. Unfortunately, I don't believe I can run Clonezilla from inside the Slax installation. Does anyone have any insight into what is wrong with my mount statement? Or is there something peculiar about Iomega drives which makes this impossible?
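
    A few illustrative things to try - placeholders, not a confirmed fix. Error -6 from mount.cifs often means the share name as given was not found, so listing what the NAS actually exports is a useful first step (if smbclient happens to be on the live image), and passing the workgroup as its own option rather than inside username= is also worth a try.

        # List the shares the NAS exports (names may differ from what the web UI shows):
        smbclient -L //192.168.1.100 -U myUsername

        # Mount with the exact share name; domain= as a separate option, and nounix
        # works around older NAS firmware that mishandles the CIFS Unix extensions.
        mkdir -p /home/partimg
        mount -t cifs //192.168.1.100/backup /home/partimg \
            -o username=myUsername,domain=workgroup,nounix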

    Read the article

  • Asterisk, IAXModem & Hylafax how-tos?

    - by Brian Postow
    I'm trying to set up Asterisk and IAXModem to send faxes via T.38 (yes, I know I'm swatting a fly with a Buick...). However, since I'm trying to do something so small with a product so large, I'm having trouble finding samples or how-tos that show me how to set this up. I've got all three installed, and I THINK I have my IAXModem config correct. I'm pretty sure that I have HylaFAX correct (I've used it with T38Modem), so I need to know which of the Asterisk sample configs I need to use, and how to use them. I think I want some combination of iax.conf, iaxprov.conf, sip.conf and sip_notify.conf, but I'm not sure where to put them or what to change... I'm sure the answer is RTFM, but I'm not sure WHICH M, or where in it to R... thanks. EDIT: On a mailing list, someone told me that this actually WON'T WORK because IAX doesn't do T.38. So, is there some other way to get Asterisk to work with HylaFAX and send T.38? I know that Asterisk does T.38; the question is how to get the data from HylaFAX and back...

    Read the article

  • Debugging "clogged" TCP connections

    - by Nikratio
    I'm having trouble with an internet connection that seems to randomly "freeze" arbitrary TCP connections. The connections stay established, but no data comes through. When this happens, netstat still shows the connection status as ESTABLISHED on the local computer:

        Proto Recv-Q Send-Q Local Address          Foreign Address        State        PID/Program name  Timer
        tcp        0     53 192.168.0.10:41129     173.255.235.238:143    ESTABLISHED  8219/gnutls-cli    on (79.31/13/0)

    ...and on the remote server:

        Proto Recv-Q Send-Q Local Address          Foreign Address        State        PID/Program name  Timer
        tcp        0      0 173.255.235.238:143    68.5.174.98:41129      ESTABLISHED  5303/imapd         off (0.00/0/0)

    However, it seems that no data at all is transferred. If I run strace on the local and remote process, both just show a repeating sequence of select calls (with different fds of course), e.g.

        select(6, [0 5], NULL, NULL, {0, 50000}) = 0 (Timeout)
        select(6, [0 5], NULL, NULL, {0, 50000}) = 0 (Timeout)
        select(6, [0 5], NULL, NULL, {0, 50000}) = 0 (Timeout)

    The internet connection overall does not seem affected; I can still establish new connections to the same service on the same server without any problems. However, the affected local applications seem to be unaware of the problem and just hang. When I look at a packet capture of this connection on the client side, the last thing that happens is that the client transmits some data, then nothing happens for about 1100 seconds, and then several TCP retransmissions go out, with intervals increasing from 4 seconds to 130 seconds. No activity is captured after that. After about 10 minutes, the connection on the remote end disappears from netstat (I wasn't able to catch any intermediate state), but it still stays ESTABLISHED on the local end. Finally, after some more minutes, the local application aborts with a timeout and disappears from the local netstat output as well. Does anyone have a suggestion for how I could debug this further, to find out where the problem lies and how to fix it? Additionally, and/or as a temporary workaround: is there some way to globally reduce the timeout on the client and/or server, to shorten the time before the local application aborts?
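
    On the "globally reduce the timeout" part, these are the Linux knobs that shorten how long a dead-but-ESTABLISHED connection lingers. The values below are illustrative assumptions rather than recommendations, and the keepalive settings only affect applications that enable SO_KEEPALIVE on their sockets.

        # Start probing an idle connection after 2 minutes instead of 2 hours,
        # probe every 30 s, give up after 5 failed probes.
        sysctl -w net.ipv4.tcp_keepalive_time=120
        sysctl -w net.ipv4.tcp_keepalive_intvl=30
        sysctl -w net.ipv4.tcp_keepalive_probes=5

        # Give up retransmitting unacknowledged data sooner than the default of
        # 15 attempts, which is roughly the 15-30 minute hang described above.
        sysctl -w net.ipv4.tcp_retries2=8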

    Read the article

  • Cause of slow download speed on a particular EC2 instance?

    - by James
    I have a networking issue I'm trying to solve. I have two EC2 instances, same zone, same type. On one of the two (the 'bad' instance), the download speed is really poor (200k/s), while on the other (the 'good' instance), the download speed is fine (comfortably 30M/s and up). To clarify, I'm talking about downloading files to the EC2 instance while ssh'd into the server, e.g. running wget with a large file. I've tried different files, including S3 objects and a large Linux ISO from elsewhere. Running ethtool eth0 only returns 'Link detected: yes' for both. When running ifconfig, both return much the same, apart from the fact that the good instance shows no error packets while the bad instance shows many:

        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        RX packets:168372370 errors:5075643 dropped:0 overruns:0 frame:0
        TX packets:122116480 errors:0 dropped:0 overruns:0 carrier:0

    Both servers are configured the same, or at least were supposed to be. How can I go about diagnosing the cause of the slow download speed? Is there anything particular to EC2 instances that could cause this? I'm having trouble knowing where to start. Thanks for any help!

    Read the article

  • Compiled ruby fails to find curses

    - by Hamish Downer
    I'm attempting to install the sup MUA, but I'm having trouble. When I try to run it, it can't find curses:

        /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require': no such file to load -- curses (LoadError)
        ...

    I am installing on a server running CentOS 5. I have compiled ruby and rubygems from source, and then installed sup using rubygems. I followed this article to compile ruby. I have found reports of a similar problem on Ubuntu, where the suggested fix is to install libcurses-ruby, but I can't find a similarly named package in CentOS. I have installed the ncurses-devel package, as that was required for installing sup using gem. I have also installed the ncurses, cursesx and rbcurse gems, but none of these have fixed the problem. The article above about compiling ruby said you had to recompile the zlib extension, after doing:

        cd ext/zlib
        sudo ruby extconf.rb --with-zlib-include=/usr/include --with-zlib-lib=/usr/lib
        cd ../..
        sudo make
        sudo make install

    So I've tried a few variants in ext/curses. The top few lines of ext/curses/extconf.rb are:

        require 'mkmf'
        dir_config('curses')
        dir_config('ncurses')
        dir_config('termcap')

    So I've tried a few variants of setting paths:

        sudo ruby extconf.rb --with-curses-include=/usr/include --with-curses-lib=/usr/lib --with-ncurses-include=/usr/include --with-ncurses-lib=/usr/lib --with-termcap-lib=/lib
        sudo ruby extconf.rb --with-curses-include=/usr/include --with-curses-lib=/usr/lib --with-termcap-lib=/lib

    and re-done the make, but to no avail as yet. Any ideas to move this forward are welcome.
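
    For what it's worth, here is a sketch of rebuilding just the curses extension, mirroring the zlib recipe above. It assumes the Ruby source tree is still unpacked and that ncurses-devel is installed; the paths are illustrative. If extconf.rb still reports the library as missing, mkmf.log in that directory shows exactly which check failed.

        cd /path/to/ruby-1.8.x/ext/curses
        sudo ruby extconf.rb --with-curses-dir=/usr
        sudo make
        sudo make install

        # Verify from anywhere afterwards:
        ruby -rcurses -e 'puts "curses loaded"'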

    Read the article

  • Why is rdiff-backup not compatible with encfs --reverse?

    - by user330273
    I'm trying to use encfs with rdiff-backup to ensure that my backups to a remote server are encrypted. The easiest way to do this would be to use encfs --reverse, which means encfs will create a virtual encrypted view of the filesystem, which I can then back up using rdiff-backup. Except that it doesn't work: rdiff-backup fails every time with an "input/output error" on the encfs virtual filesystem. It seems I'm not the only one with this problem, but no one has said what the problem is: this person reported the same issue, but was just told to use sshfs instead (see below on that); in this question on Server Fault, one of the answers just states that "rdiff-backup seems to have trouble accessing the EncFS-reverse filesystem." There's an open bug report on the Debian bug tracker (bug 731413, I can't post the link) about this, but it's been open since December 2013 with no response. Does anyone know what the problem actually is? Is there a workaround? I can't use the two most commonly suggested alternatives - sshfs plus running encfs on top of that, or Duplicity - as both require a much higher bandwidth connection than I have access to (Duplicity requires regular full backups).

    Read the article

  • MSE updating fails, no warning or error message.

    - by WebDevHobo
    I'm running Windows 7 Ultimate, 32-bit. For the last couple of days, MSE has failed to update, remaining stuck at definition version 1.75.119. I presume that an error log or event log entry is created somewhere, but I don't know where to find it. It just says "connection failed". Tried it at home, at work and at friends' places, but it never works. Restarted the computer a lot of times now, and checked for Microsoft Updates in general, but nothing shows up. EDIT: I've opened a bounty for this, because I really don't know what to do anymore. The oldest answer (the long post) here did not work. Besides this problem, I'm having trouble using MSI installers too. I've had to add the SYSTEM group to a lot of folders and give it full control - but shouldn't SYSTEM already be there? Also, I had to remove the "read-only" attribute from the ProgramData and Users folders, add the SYSTEM group there too and give it full control. Only then will an MSI install work, and even then it says I don't have the rights to create a shortcut on the desktop; I don't know what I need to modify, and where, for that. I mention this because I don't know how MSE updates, but if it uses MSI files to do so, that might explain things. The SYSTEM group remains added, but every time I take away the read-only attribute, click OK and check the settings again, read-only is still active... That's all I know. Screenshot (all those updates were manual):

    Read the article

  • Deploying concrete5 on nginx

    - by Nithin
    I have a concrete5 site that works 'out of the box' on an Apache server. However, I am having a lot of trouble running it under nginx. The following is the nginx configuration I am using:

        server {
            root /home/test/public;
            index index.php;

            access_log /home/test/logs/access.log;
            error_log  /home/test/logs/error.log;

            location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to index.html
                try_files $uri $uri/ index.php;
                # Uncomment to enable naxsi on this location
                # include /etc/nginx/naxsi.rules
            }

            # pass the PHP scripts to FastCGI server listening on unix socket
            #
            location ~ \.php($|/) {
                fastcgi_pass unix:/tmp/phpfpm.sock;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_path_info;
                include fastcgi_params;
            }

            location ~ /\.ht {
                deny all;
            }
        }

    I am able to get the homepage, but I am having problems with the inner pages, which display "Access Denied". Possibly the rewrite is not working; in effect I think nginx is trying to execute the PHP files directly instead of going through the concrete5 dispatcher. I am totally lost here. Thank you for your help, in advance.
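
    One hedged tweak worth noting - an assumption, not a verified concrete5 recipe: the final argument of try_files is an internal redirect and needs a leading slash, and concrete5's dispatcher expects the requested path appended after /index.php (PATH_INFO style), which the \.php($|/) location above already splits out. Something along these lines is the commonly used pattern:

        location / {
            try_files $uri $uri/ /index.php$uri$is_args$args;
        }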

    Read the article

  • Dynamic forwarding with SOCKS5 proxy [on hold]

    - by bh3244
    I'm building my own SOCKS5 client and HTTP library and am having trouble figuring out how things work with dynamic port forwarding. So far I can connect successfully with my SOCKS5 client, but from there on I am stuck. I am using the ssh -D command. Say I have my local machine "home" and my server "server", and I want to use "server" as the proxy for all connections. I understand I would type ssh -D "localport" "serverhostname" on my local machine "home". As I understand it, this command makes ssh accept connections on that local port using the SOCKS5 protocol. So now if I want to connect to google.com (74.125.224.72:80) and issue a GET for the front page, I assume I would send the SOCKS5 client request, the server would respond with a 0x00 "succeeded" reply, and from then on I am connected and would send the HTTP GET request, with the server relaying back the response data. Now if I want to navigate to a different website, must I issue another SOCKS5 connection request for that site's IP/hostname? I'm confused whether this is how it is done, or whether there is a program listening on a local port on "server" that handles the outgoing and incoming data. To reiterate: do SOCKS5 proxies work by sending repeated SOCKS5 connection requests for different addresses, or is there just one connection to a local port on "server", with another program on "server" handling the outgoing connections to the internet and using that local port to send and receive data to/from "home"?
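
    To make the flow concrete, here is how the pieces fit together with stock OpenSSH and curl (the hostname and port are placeholders). There is a single SSH session, but every TCP connection a client makes performs its own SOCKS5 handshake - greeting plus a CONNECT for that destination - on a new connection to the local port; sshd on "server" then opens the outgoing socket itself, so no extra program on the server is involved.

        ssh -D 1080 -N user@serverhostname &      # SOCKS5 listener on 127.0.0.1:1080

        # Each curl run below is a separate SOCKS5 CONNECT through the same tunnel:
        curl --socks5-hostname 127.0.0.1:1080 http://www.google.com/
        curl --socks5-hostname 127.0.0.1:1080 http://example.com/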

    Read the article

  • Installing/enabling the PHP PECL intl extension on CentOS 5

    - by Marijn Huizendveld
    Original question: I'm having trouble installing the PHP PECL intl extension on my CentOS 5 machine. After installing both icu and libicu with the following commands:

        $ yum install icu
        $ yum install libicu

    I tried to install the intl extension like so:

        $ /usr/bin/pecl install intl

    I selected the default location for the ICU libraries and header files. It ends up failing like this:

        checking whether to enable internationalization support... yes, shared
        checking for icu-config... no
        checking for location of ICU headers and libraries... not found
        configure: error: Unable to detect ICU prefix or no failed. Please verify ICU install prefix and make sure icu-config works.
        ERROR: `/tmp/pear/temp/intl/configure --with-icu-dir=DEFAULT' failed

    Update: After successfully installing the development version of ICU as suggested by RusAlex (thanks RusAlex), like so:

        $ yum install libicu-devel

    I ran into a new problem, which I also encountered locally. The following command:

        $ /usr/bin/pecl install intl

    now produces this error:

        /private/tmp/pear/temp/intl/collator/collator_class.c:92: error: duplicate 'static'
        /private/tmp/pear/temp/intl/collator/collator_class.c:96: error: duplicate 'static'
        /private/tmp/pear/temp/intl/collator/collator_class.c:101: error: duplicate 'static'
        /private/tmp/pear/temp/intl/collator/collator_class.c:107: error: duplicate 'static'
        make: *** [collator/collator_class.lo] Error 1
        ERROR: `make' failed

    It appears to have something to do with PHP 5.3 already bundling intl. But how can I enable that extension? When I look at my phpinfo() output I cannot find any reference to it...

    Read the article

  • Home server hard drive: 186k start-stop cycles in 325 days?

    - by j-g-faustus
    I set up a home server about a year ago, using Ubuntu Server (10.04 LTS at the moment), four disks in RAID 5 for storage (WD Green 1.5 TB) and a laptop drive for the OS. Today the output of smartctl, a command line utility for checking the SMART attributes of a hard drive, tells me that the primary OS drive has had no less than 186,000 start-stop cycles in 325 days and may be nearing the end of its lifespan. The smartctl output is in "normalized values", in this case a number between 200 and 000, where 200 is "brand new" and 000 means "worn out". My disk gets 001. So I wonder what happened: 186k start/stop cycles in 7,820 hours is about one start/stop per 2.5 minutes, around the clock. This seems somewhat excessive for a computer that sees actual use once or twice per day. (The RAID disks are normal, averaging one start/stop per day, as expected.) Does anyone have similar experiences, or pointers to what might be the issue here? Specifically I'd like to know: Why the massive start/stop count? Do I have some sort of configuration issue? Could there be a background service that is causing trouble? Could having a laptop disk as the OS drive be part of the problem? Can anyone confirm or deny this? Here is the /etc/hdparm.conf configuration:

        /dev/sda {
            apm = 127
            spindown_time = 120
        }

    and the most relevant parts of smartctl --attributes /dev/sda:

        smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen
        === START OF READ SMART DATA SECTION ===
        SMART Attributes Data Structure revision number: 16
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
          1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
          4 Start_Stop_Count        0x0032   001   001   000    Old_age   Always       -       185875
          9 Power_On_Hours          0x0032   090   090   000    Old_age   Always       -       7820
         12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       109
        193 Load_Cycle_Count        0x0032   118   118   000    Old_age   Always       -       246833
        194 Temperature_Celsius     0x0022   107   098   000    Old_age   Always       -       36

    As I generally prefer my drives to last more than a year, any advice is appreciated.
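
    A frequently suggested mitigation for runaway start/stop and load-cycle counts on laptop-class drives is to raise or disable the drive's APM level so it stops parking and spinning down so aggressively. Whether that is the cause here is an assumption, and the exact values a drive honours vary by model.

        # Sketch: test interactively first, then persist the values in /etc/hdparm.conf.
        # -B 254 = maximum performance without aggressive power saving (255 disables
        # APM entirely where supported); -S 0 disables the standby/spindown timer.
        hdparm -B 254 -S 0 /dev/sda

        # Watch whether the counters keep climbing afterwards:
        smartctl -A /dev/sda | grep -E 'Start_Stop_Count|Load_Cycle_Count'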

    Read the article

  • Abysmal transfer speeds on gigabit network

    - by Vegard Larsen
    I am having trouble getting my gigabit network to work properly between my desktop computer and my Windows Home Server. When copying files to my server (connected through my switch), I am seeing file transfer speeds below 10 MB/s, sometimes even below 1 MB/s. The machine configurations are:

        Desktop
          Intel Core 2 Quad Q6600
          Windows 7 Ultimate x64
          2x WD Green 1TB drives in striped RAID
          4GB RAM
          AB9 QuadGT motherboard
          Realtek RTL8810SC network adapter

        Windows Home Server
          AMD Athlon 64 X2
          4GB RAM
          6x WD Green 1.5TB drives in storage pool
          Gigabyte GA-MA78GM-S2H motherboard
          Realtek 8111C network adapter

        Switch
          D-Link Green DGS-1008D 8-port

    Both machines report being connected at 1Gbps, and the switch lights up green for those two ports, indicating 1Gbps. When connecting the machines through the switch, I see insanely low speeds from the WHS to the desktop as measured with iperf: 10 Kbits/sec (WHS running iperf -c, desktop running iperf -s). Using iperf the other way (WHS running iperf -s, desktop running iperf -c), speeds are also bad (~20 Mbits/sec). Connecting the machines directly with a patch cable, I see much higher speeds from the desktop to the WHS (~300 Mbits/sec), but still around 10 Kbits/sec from the WHS to the desktop. File transfer speeds are also much quicker (in both directions). Log from the desktop for an iperf connection from the WHS (through the switch):

        C:\temp>iperf -s
        ------------------------------------------------------------
        Server listening on TCP port 5001
        TCP window size: 8.00 KByte (default)
        ------------------------------------------------------------
        [248] local 192.168.1.32 port 5001 connected with 192.168.1.20 port 3227
        [ ID]  Interval         Transfer     Bandwidth
        [248]  0.0-18.5 sec     24.0 KBytes  10.6 Kbits/sec

    Log from the desktop for an iperf connection to the WHS (through the switch):

        C:\temp>iperf -c 192.168.1.20
        ------------------------------------------------------------
        Client connecting to 192.168.1.20, TCP port 5001
        TCP window size: 8.00 KByte (default)
        ------------------------------------------------------------
        [148] local 192.168.1.32 port 57012 connected with 192.168.1.20 port 5001
        [ ID]  Interval         Transfer     Bandwidth
        [148]  0.0-10.3 sec     28.5 MBytes  23.3 Mbits/sec

    What is going on here? Unfortunately I don't have any other gigabit-capable devices to try with.
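
    Before blaming hardware, it is worth re-running the tests with a larger TCP window, since iperf's default 8 KByte window can by itself cap the reported throughput well below gigabit. The flags below are a sketch assuming iperf 2.x on both ends.

        # Server side (on the receiving machine):
        iperf -s -w 256k

        # Client side: larger window, 20 s run, 2 s interval reports, then the
        # reverse direction as well (-r) to compare both paths in one go.
        iperf -c 192.168.1.20 -w 256k -t 20 -i 2 -r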

    Read the article

  • mod_rewrite REQUEST_FILENAME doesn't contain absolute path

    - by Paul Dixon
    I have a problem with a file-test operation in a mod_rewrite RewriteCond entry which tests whether %{REQUEST_FILENAME} exists. It seems that rather than %{REQUEST_FILENAME} being an absolute path, I'm getting a path which is rooted at the DocumentRoot instead.

    Configuration: I have this inside a <VirtualHost> block in my Apache 2.2.9 configuration:

        RewriteEngine on
        RewriteLog /tmp/rewrite.log
        RewriteLogLevel 5

        #push virtually everything through our dispatcher script
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^/([^/]*)/?([^/]*) /dispatch.php?_c=$1&_m=$2 [qsa,L]

    Diagnostics attempted: That rule is a common enough idiom for routing requests for non-existent files or directories through a script. Trouble is, it's firing even if a file does exist. If I remove the rule, I can request normal files just fine. But with the rule in place, those requests get directed to dispatch.php.

    Rewrite log trace: Here's what I see in rewrite.log:

        init rewrite engine with requested uri /test.txt
        applying pattern '^/([^/]*)/?([^/]*)' to uri '/test.txt'
        RewriteCond: input='/test.txt' pattern='!-f' => matched
        RewriteCond: input='/test.txt' pattern='!-d' => matched
        rewrite '/test.txt' -> '/dispatch.php?_c=test.txt&_m='
        split uri=/dispatch.php?_c=test.txt&_m= -> uri=/dispatch.php, args=_c=test.txt&_m=
        local path result: /dispatch.php
        prefixed with document_root to /path/to/my/public_html/dispatch.php
        go-ahead with /path/to/my/public_html/dispatch.php [OK]

    So, it looks to me like REQUEST_FILENAME is being presented as a path from the document root, rather than from the filesystem root, which is presumably why the file-test operator fails. Any pointers for resolving this gratefully received...
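
    In virtual-host context the rewrite conditions run before the URI has been mapped to the filesystem, which matches the '/test.txt' input seen in the log. A commonly used workaround - an assumption here, not something from the original configuration - is to make the file tests against an absolute path by prepending the document root:

        # Sketch: same rule, but the file/dir tests use an absolute filesystem path.
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} !-f
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} !-d
        RewriteRule ^/([^/]*)/?([^/]*) /dispatch.php?_c=$1&_m=$2 [qsa,L]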

    Read the article

  • Postfix + procmail - delivery fails because "can't create user output file" - on CentOS 6.2

    - by jshin47
    I verified that my Postfix installation / relaying setup worked. Now I am having trouble with procmail. I have it wired to Postfix with the following setting:

        mailbox_command = /usr/bin/procmail -f -a "$USER"

    I have nothing in my procmail config but the following:

        LOGFILE=/var/procmailrc/log

    And I sent an email to a recipient that previously worked (before I attached procmail). Now it fails with this error:

        Apr  6 14:07:05 localhost postfix/qmgr[15194]: D0C3DFF6E1: from=<[email protected]>, size=938, nrcpt=1 (queue active)
        Apr  6 14:07:05 localhost postfix/local[1953]: D0C3DFF6E1: to=<[email protected]>, orig_to=<postmaster>, relay=local, delay=0.05, delays=0.02/0.01/0/0.02, dsn=5.2.0, status=bounced (can't create user output file. Command output: procmail: Couldn't create "/var/spool/mail/nobody" procmail: Couldn't read "//root" )
        Apr  6 14:07:05 localhost postfix/bounce[1955]: warning: D0C3DFF6E1: undeliverable postmaster notification discarded
        Apr  6 14:07:05 localhost postfix/qmgr[15194]: D0C3DFF6E1: removed

    It seems like there is some sort of permissions issue, but I do not know what the problem is, nor do I understand how I would go about diagnosing it further. The logfile that I specified is empty, by the way. How can I make procmail and Postfix work together?
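
    The bounce shows procmail being run without a usable local user (it fell back to /var/spool/mail/nobody and //root), so a few illustrative checks follow - the user name is a placeholder and none of this is a confirmed fix.

        # 1. See which local user "postmaster" is aliased to:
        grep '^postmaster' /etc/aliases

        # 2. Run procmail by hand as that user with verbose logging to watch the delivery:
        echo "test body" | su -s /bin/sh someuser -c \
            'procmail VERBOSE=on LOGFILE=/tmp/procmail-test.log'
        tail /tmp/procmail-test.log

        # 3. Check that the mail spool itself allows local delivery:
        ls -ld /var/spool/mail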

    Read the article

  • Verizon 4G LTE vs. a LAN

    - by n8wrl
    I have been having quite a bit of trouble getting my new Verizon 4G LTE service running on a Windows 7 desktop. My desktop is on a LAN here at home with two other PCs. We all share printers, files, media, etc. Until yesterday, we also shared a Verizon 3G modem via a NetGear 3G broadband WAP. That isn't compatible with the 4G modem, so now I am just trying to get the 4G modem working directly connected to one of the desktops. After some USB wrangling, it seems to work - except that every 7-10 minutes the connection would drop. After some time on the phone with a very nice Verizon technician, it seems to be staying up; it's been up for 20 minutes now. He told me that my LAN was causing the 4G to drop: traffic on my LAN, even though it is not destined for the internet (ICS isn't working yet), was causing the cell tower to detect an 'IP change' and a 'security violation' in my modem and drop my connection. Is this Verizon's way of forbidding more than one computer to share a modem? I have my computer running now without a LAN connection and the 4G is still up, but this isn't practical. Has anyone heard of this?

    Read the article

  • Overheating Toshiba Satellite L300

    - by ldigas
    A colleague of mine is having trouble with his new Toshiba Satellite L300. I took a look at it, and indeed it's hot as hell - I couldn't hold my hand on it for long. He says it also has a tendency to turn itself off with no forewarning (it's running WinXP 32-bit). That hasn't happened while I was using it, but that wasn't for long anyway. The first guess was that it was too dirty, but the problem is it's new: it came out of the package a quarter of a year ago, has been kept in a clean office environment, looks clean, and there's no dust in sight. The second guess was that the fan wasn't working properly, because it does alternate between running and not running, but when I listen to it, it sounds like normal usage. I took a SpeedFan measurement, and it reports temperatures up to 85 Celsius, which is definitely too high. Does anyone know what else I could check? It is under warranty and it will go in for service, but I thought I'd ask whether there is something we can do ourselves, to avoid carrying it there and being without it for a week...

    Read the article

  • Changes to GRUB in Ubuntu 10

    - by jdege
    I've been running CentOS 5 for some years. I've decided to upgrade to Ubuntu, and with 10.04 just out, this seemed like a good time. I'm a tad paranoid, so I started off with a new set of drives - one to install on, one to back up to, and one as a spare. I removed my existing CentOS 5 drives, did an install, and had no problems. I installed the server version, and used the default full-disk LVM installation. Next, I copied my backup scripts over, edited them to work with the new configuration, and did a test backup. That worked fine as well. Then comes the real test: could I restore the backup onto the spare drive? (I won't put anything of importance on a system that doesn't have a reliable backup, and if I've never done a restore, it's not reliable.) I booted from a System Rescue CD (ver 1.5.3), with the spare drive as /dev/sda and the backup drive as /dev/sdb. I had no trouble partitioning, configuring LVM, formatting, making swap, or restoring the file systems. But when I got to restoring GRUB to the MBR, I ran into problems. My restore instructions from CentOS 5 said to run grub, then enter two commands:

        root (hd0,0)
        setup (hd0)

    The first command exits with an error: "Checking if /boot/grub/stage1 exists ... no". I did some googling around, and found that the GRUB 2 included in recent Ubuntus is very different from the GRUB 0.97 included in CentOS 5. One site suggested I use:

        grub-install --root-dir=/mnt/restore /dev/sda

    That appeared to work, but when I booted from the drive, I ended up at a grub prompt. Any ideas as to what I need to do? It seems like a simple problem, but my attempts at searching out answers on the web are being swamped by references to the old version of GRUB. Help would be appreciated.
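
    Ending up at a bare grub prompt usually means the boot sector was written but GRUB 2 could not find its grub.cfg and modules on the restored filesystem. The usual recovery from a rescue environment is sketched below; the mount points are illustrative and the LVM names are assumptions, since the restored root lives inside the default Ubuntu LVM layout.

        # With the restored root mounted at /mnt/restore (adjust names to your layout):
        vgchange -ay                                   # activate the restored LVM volumes
        mount /dev/mapper/ubuntu-root /mnt/restore     # assumed VG/LV name
        mount /dev/sda1 /mnt/restore/boot              # separate /boot, if there is one

        # Bind the virtual filesystems and chroot so the GRUB tools see the real system:
        for fs in /dev /proc /sys; do mount --bind $fs /mnt/restore$fs; done
        chroot /mnt/restore

        grub-install /dev/sda    # reinstall GRUB 2 to the MBR of the restore target
        update-grub              # regenerate /boot/grub/grub.cfg for the restored system
        exit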

    Read the article

  • Robocopy silently missing files

    - by John Hunt
    I'm using Robocopy to sync data from our server's hard disk to an external disk as a backup. It's a pretty simple solution but pretty much the best/easiest one we could come up with - we use two external disks and rotate them offsite. Anyway, here's the script (with the comments taken out) that I'm using to do it. It works very well, it's quick and almost 100% complete - however it's acting pretty strange with a few files (note company name has been changed in paths to protect the innocent): @ECHO OFF set DATESTAMP=%DATE:~10,4%/%DATE:~4,2%/%DATE:~7,2% %TIME:~0,2%:%TIME:~3,2%:%TIME:~6,2% SET prefix="E:\backup_log-" SET source_dir="M:\Company Names Data\Working Folder\_ADMIN_BACKUP_FILES\COMPA AANY Business Folder_Backup_040407\COMPANY_sales order register\BACKUP CLIENT FOLDERS & CURRENT JOBS pre 270404\CLIENT SALES ORDER REGISTER" SET dest_dir="E:\dest" SET log_fname=%prefix%%date:~-4,4%%date:~-10,2%%date:~-7,2%.log SET what_to_copy=/COPY:DAT /MIR SET options=/R:0 /W:0 /LOG+:%log_fname% /NFL /NDL ROBOCOPY %source_dir% %dest_dir% %what_to_copy% %options% set DATESTAMP=%DATE:~10,4%/%DATE:~4,2%/%DATE:~7,2% %TIME:~0,2%:%TIME:~3,2%:%TIME:~6,2% cscript msg.vbs "Backup completed at %DATESTAMP% - Logs can be found on the E: drive." :END Normally the source would just be M:\Comapany name data\ but I altered the script a bit to test the problem. The following files in the source are not copied to the dest: Someclient\SONICP~1.DOC Someclient\SONICP~2.DOC Someclient\SONICP~3.DOC However, files in the same directory named: TIMESH~1.XLS TIMESH~2.XLS are copied. I'm able to open the files that aren't copied with no trouble at all, and they certainly weren't opened when I ran robocopy so it's not a locking issue. Robocopy is running as administrator so it's not a permissions issue. There's no trace these files were even attempted to be copied as there are no errors being output in the log or in my command prompt. Does anyone have any suggestions as to what this might be? Busted hard disk? Cheers, John.

    Read the article

  • How do I create a "here document" within a shell function?

    - by BenU
    I'm working my way through William Shotts Jr.'s great The Linux Command Line on my Mac OS X 10.7.5 system. 90% of the Linux that Shotts covers is close enough to Darwin that I can figure out, or GTEM my way to, what's going on. I've made it to chapter 27 on "Writing Shell Scripts" and am getting hung up creating "here documents" within a function. I get a "syntax error: unexpected end of file" error when I include the following function:

        report_uptime () {
            cat <<- _EOF_
                <H2>System Uptime</H2>
                <PRE>$(uptime)</PRE>
            _EOF_
            return
        }

    The error goes away if I use the following function placeholder:

        report_uptime () {
            return
        }

    Also, elsewhere in the script, outside of a function, I use the cat << _EOF_ form to create a "here document" with no trouble:

        cat << _EOF_
        <HTML>
                <HEAD>
                        <TITLE>$TITLE</TITLE>
                </HEAD>
                <BODY>
                        <H1>$TITLE</H1>
                        <P>$TIME_STAMP</P>
                        $(report_uptime)
                        $(report_disk_space)
                        $(report_home_space)
        </BODY>
        </HTML>
        _EOF_

    If anyone has any idea what I'm doing wrong I would be grateful!
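
    The usual trip-up here - an assumption about the cause, since the original indentation characters are not visible in the question: with <<- the shell strips leading tab characters only, so if the function body, and in particular the closing _EOF_, is indented with spaces, the terminator is never recognised and the parser runs off the end of the file, giving exactly this error. A version that sidesteps the issue entirely:

        #!/bin/bash
        # Plain << with the delimiter at the start of the line, so there is no
        # tab-vs-space ambiguity. (Shown indented for display; in the actual script
        # these lines start in column 1.) The <<- form also works, but only if the
        # body and the closing _EOF_ are indented with literal TABs, never spaces.
        report_uptime () {
        cat << _EOF_
        <H2>System Uptime</H2>
        <PRE>$(uptime)</PRE>
        _EOF_
        return
        }

        report_uptime   # prints the HTML fragment with the current uptime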

    Read the article

< Previous Page | 161 162 163 164 165 166 167 168 169 170 171 172  | Next Page >