Search Results

Search found 8185 results on 328 pages for 'technical tests'.


  • Windows Server 2008 network speed slow, Xen 3.4.3 HVM ISO

    - by Elliot.Bradshaw
    I've set up a VM running Windows Server 2008 on a host node running Xen 3.4.3-5 and the following kernel: 2.6.18-308.1.1.el5xen #1 SMP Wed Mar 7 05:38:01 EST 2012 i686 i686 i386 GNU/Linux. The network speed on the VM is very slow: using online speed tests I can only get it up to 8-9 Mbps. The line is 100 Mbps burstable and the host node has no problem achieving those speeds. If I set up a VM running CentOS, it too has no problem achieving those speeds. I've done some pretty exhaustive troubleshooting, but nothing has helped:
    - New VM installations of Win2k8 have the same network problem.
    - Upgrading to the most recent kernel-xen (2.6.18-308.1.1.el5xen) did not help.
    - Upgrading from Xen 3.4.0 to Xen 3.4.3-5 did not help.
    - Disabling the Windows firewall, etc. did not help.
    - Changing the network card device config from auto-negotiation to manual 100 Mbps full duplex did not help.
    - Changing the network receive buffer packet size did not help (tried all combos from 64k to 8k).
    At this point I'm pretty much out of ideas--any help would be appreciated!
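
    For anyone hitting the same symptoms, one cheap check before deeper debugging is to disable TX checksum offload on the guest's virtual interface on the host; broken offload on the vif is a commonly reported cause of poor Windows HVM throughput on Xen. This is a minimal sketch, not a confirmed fix for this exact setup, and the vif name is an assumption (it follows the guest's domain ID):

        # on the Xen host node; find the right vif with ifconfig or "xm list"
        ethtool -K vif1.0 tx off
        # then re-run the speed test from inside the Windows guest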

    Read the article

  • Setup of high-end web server and DB server cluster on Amazon EC2: Is this how it's done?

    - by user1086584
    Amazon is so technical, I want to confirm that my understanding is correct. We have a large 500 GB database (OrientDB) that will be mirrored between two nodes in the same Availability Zone, and we believe the database size will grow rapidly. The plan is:
    - Get 4 large instances of types compatible with Placement Groups (as well as, ideally, Enhanced Networking): 2 for web, 2 for DB.
    - Use EBS-backed instances to store our operating system. Discussion here: http://alestic.com/2012/01/ec2-ebs-boot-recommended
    - Set up ephemeral SSD instance storage as swap space. (But it is lost after even a reboot, and I hear it's hard to add ephemeral storage when booting from EBS, though possible.)
    - For offsite backup, take periodic snapshots and store them on S3. Obviously we need to ensure the database is in a safe state when that snapshot happens to avoid corruption. (Any hints here, aside from shutting down the DB?)
    - If the database gets too big, create a larger EBS volume. We can use RAID to break the 1 TB limit: http://alestic.com/2009/06/ec2-ebs-raid
    - Static assets on web servers will be stored on S3.
    Is that correct? Or am I missing something?
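
    On the snapshot-consistency question, a common pattern short of shutting the DB down is to flush/lock the database, freeze the filesystem, start the snapshot, then unfreeze; the snapshot then captures a consistent image while the DB keeps running. A minimal sketch, assuming the data lives on a separate ext4/XFS volume mounted at /data, the AWS CLI is configured, and the volume ID is a placeholder:

        sudo fsfreeze -f /data                         # quiesce writes to the data volume
        aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
            --description "orientdb-$(date +%F)"       # snapshot call returns immediately
        sudo fsfreeze -u /data                         # unfreeze as soon as the snapshot has started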

    Read the article

  • Installing FIREFOX with extensions/addons manually? (not really auto install)

    - by BrownChiLD
    I've been reading around with regards to creating Firefox installers, bundling them with addons, using scripts and CLI options, and a whole bunch of other approaches, but it seems that going this route is just too complicated and time consuming. Since I don't mind a bit of manual copying of files, I was planning to do the following on my test machine:
    1) install Firefox AND configure it the way I want it
    2) install addons AND set the configurations for them
    3) set advanced configurations for Firefox (about:config)
    Then once I'm all set, I simply copy the contents of the Firefox profiles folder (for this particular test it's ....\AppData\Local\Mozilla\Firefox\Profiles\6m0mef0s.default). For deployment, all I have to do is:
    1) install the same version (offline installer) of the Firefox I used
    2) overwrite the contents of the new profiles folder (randomly named by the Firefox installer as usual)
    This should set all my configs and addons, right? Or what other folders do I have to back up and copy manually into the new profiles folder? I don't think I need to tinker with any registry entries, right? Anyway, if this works, though it's a bit manual, it's a whole lot simpler and more straightforward than fiddling with installers and packages, etc.
    PS: I do this a lot with other simple (and some complex) software that I use and it has worked fine for years; I'm just not sure about Firefox and how it's structured.
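
    Copying the profile contents over a freshly created profile does generally carry the addons and about:config settings with it, since both live inside the profile folder (the extensions directory and prefs.js). A minimal sketch of the deployment step, assuming the profile location quoted above and treating both profile folder names and the staging path as placeholders:

        :: run after installing Firefox and launching it once so the new profile exists
        set SRC=C:\deploy\6m0mef0s.default
        set DST=%LOCALAPPDATA%\Mozilla\Firefox\Profiles\a1b2c3d4.default
        robocopy "%SRC%" "%DST%" /E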

    Read the article

  • VPN on OSX disconnects after precisely 2 minutes and 30 seconds on specific network

    - by Tyilo
    When connecting to my own VPN server on a specific network, called public-network, my Mac disconnects the VPN connection after 2 minutes and 30 seconds. I have performed several tests and this is the result:
    - It works fine until the 2:30 mark.
    - It doesn't matter which Mac I use, it still disconnects.
    - It doesn't matter which client I use; all of the following do the same: the OSX system client, HMA! Pro VPN and Shimo.
    - It doesn't matter which protocol I use; at least all of these do the same: PPTP, OpenVPN and L2TP over IPSec.
    - The same thing happens using my own VPN server and HMA!'s VPN server.
    - All other clients (Windows/iPhone) can use any of these VPN servers and protocols without problems on public-network.
    - On OSX, all the protocols, clients and servers work fine on any other network.
    So it seems that it is the combination of OSX, VPN and public-network that causes this. This is the syslog from my VPN server when the disconnection happens:
      Feb 2 12:04:32 raspberrypi pptpd[31400]: CTRL: EOF or bad error reading ctrl packet length.
      Feb 2 12:04:32 raspberrypi pptpd[31400]: CTRL: couldn't read packet header (exit)
      Feb 2 12:04:32 raspberrypi pptpd[31400]: CTRL: CTRL read failed
      Feb 2 12:04:32 raspberrypi pptpd[31400]: CTRL: Reaping child PPP[31401]
      Feb 2 12:04:32 raspberrypi pppd[31401]: Hangup (SIGHUP)
      Feb 2 12:04:32 raspberrypi pppd[31401]: Modem hangup
      Feb 2 12:04:32 raspberrypi pppd[31401]: Connect time 2.5 minutes.
      Feb 2 12:04:32 raspberrypi pppd[31401]: Sent 3963649 bytes, received 362775 bytes.
      Feb 2 12:04:32 raspberrypi pppd[31401]: MPPE disabled
      Feb 2 12:04:32 raspberrypi pppd[31401]: Connection terminated.
      Feb 2 12:04:32 raspberrypi pppd[31401]: Exit.
      Feb 2 12:04:32 raspberrypi pptpd[31400]: CTRL: Client <ip-address> control connection finished
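
    Since only this one network triggers the drop, a packet capture on the Mac around the 2:30 mark can show whether keepalives stop being answered or something on public-network injects a reset. A minimal sketch, assuming en0 is the active interface and the server address is a placeholder:

        sudo tcpdump -i en0 -w vpn-drop.pcap host vpn.example.com
        # connect the VPN, wait past 2:30, stop the capture, then inspect the
        # final packets (e.g. in Wireshark) to see which side gave up first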

    Read the article

  • How do I troubleshoot a slow hard drive?

    - by Bruce Connor
    My computer is suffering from slow-downs and I'm not surprised (it's around 6 years old). Here's what I've verified:
    - They are not very frequent (only a couple of times a day).
    - When they happen, a single application will hang for 10-60 seconds, while the rest don't hang but also get slow.
    - Even as it is happening, the CPU usage stays low.
    - It happens to applications such as a text editor, Firefox, Skype.
    - It never happens to some applications (such as games) which I use for hours under heavy CPU load.
    Also of note:
    - The graphics card and PSU are new (around a year).
    - Though I have a decent amount of software installed right now, this was happening even right after I reinstalled Windows.
    - This HDD has been through many partitioning schemes, and a few heavy operations (such as moving around 200GB of data).
    Because of the above, I am already 70% sure the problem is with the hard drive. Before I replace it, however, I want to rule out other less likely possibilities (such as RAM, software, or PSU). I don't have the money to replace the entire box right now, but I can easily replace one of the components. I've read several questions (such as this one) which give general guidance on troubleshooting an unknown issue; that is not what I'm looking for here. My main question is: what tests or benchmarks can I run to verify I have a problematic hard drive? I don't need to solve this problem, I am content with just making sure it's the hard drive. I could borrow a newer hard drive from a friend and see if it gets better. A positive result would rule out all other components, but it wouldn't rule out a software issue (since this new hard drive won't have any of the software I use daily). Running on Windows/Linux.
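
    Since the machine also runs Linux, the drive's own health counters and a raw throughput test are quick first checks; a drive that is remapping sectors usually shows it in SMART long before it fails outright. A minimal sketch, assuming the disk is /dev/sda and smartmontools/hdparm are installed:

        sudo smartctl -a /dev/sda          # check Reallocated_Sector_Ct, Current_Pending_Sector, UDMA_CRC_Error_Count
        sudo smartctl -t long /dev/sda     # start an extended self-test; read results later with -a
        sudo hdparm -tT /dev/sda           # sequential read benchmark, no data is written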

    Read the article

  • How can I remotely tell what brand/model internal SCSI card is installed in a machine?

    - by edmicman
    I am doing some consulting work for a previous employer, upgrading and migrating old servers to new hardware. There is an existing file server (HP ProLiant DL380) that has a tape backup drive connected; it is using a SCSI interface and I'm pretty sure it's using an internal SCSI card. They are upgrading to new server hardware (HP ProLiant DL160 G6). The old server is 2U, the new one 1U, and we want to move the tape drive to the new server, too. I'm trying to figure out whether the SCSI card in the old server can be installed in the new one or whether we'll need to source a new card; mostly I don't know for sure the height of the card and whether it's low-profile enough to fit in the new server. There is not much of a technical resource onsite and the old server is in use anyway, so I would like to avoid making a trip in myself or having someone onsite pop open the case and tell me what card is there. It's running Windows Server 2003 - is there a way to tell from, say, Device Manager what make and model the SCSI card might be? Or any other system diagnostic program or something that would give me hardware info like that? Thanks for any info!
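
    Since the server is reachable remotely, WMI can usually report the controller without anyone opening the case. A minimal sketch from a command prompt on (or remoted into) the old server, assuming WMIC is available (it ships with Server 2003):

        wmic path Win32_SCSIController get Name,Manufacturer,DeviceID
        rem the DeviceID includes the PCI vendor/device IDs (VEN_xxxx&DEV_xxxx),
        rem which can be looked up to identify the exact card model and form factor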

    Read the article

  • Easiest way to send encrypted email?

    - by johnnyb10
    To comply with Massachusetts's new personal information protection law, my company needs to (among other things) ensure that anytime personal information is sent via email, it's encrypted. What is the easiest way to do this? Basically, I'm looking for something that will require the least amount of effort on the part of the recipient. If at all possible, I really want to avoid them having to download a program or go through any steps to generate a key pair, etc. So command-line GPG-type stuff is not an option. We use Exchange Server and Outlook 2007 as our email system. Is there a program that we can use to easily encrypt an email and then fax or call the recipient with a key? (Or maybe our email can include a link to our website containing our public key, that the recipient can download to decrypt the mail?) We won't have to send many of these encrypted emails, but the people who will be sending them will not be particularly technical, so I want it to be as easy as possible. Any recs for good programs would be great. Thanks.

    Read the article

  • Is it ever good to share a userid?

    - by Ladlestein
    On Un*x, is it ever a good idea to have one userid that many different people log into when they do stuff? Often I'm installing software or something on a Linux or BSD system. I've developed software for 24 years now, so I know how to make the machine do what I want, but I've never had responsibility for maintaining a multi-user installation where anyone really cared about security, so my opinions feel untested. Now I'm at a company where there's a server that many people log into with a single userid and do stuff. I'm installing some software on it. It's not really a public-facing server, and is only accessible via VPN, but it's used by many people nonetheless, to run tests on custom software, things like that. It's a staging server. I'm thinking that at the very least, using a single user obscures an audit trail, and that's bad. And it's just inelegant, because people don't have their own spaces on the server. But then again, with more userids, maybe there's a greater chance that one can be compromised, allowing attackers to gain access?
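
    One common middle ground is to give everyone a personal login (which preserves the audit trail) plus a shared group and sudo for the shared work, so nobody shares a password but the shared files and services stay accessible. A minimal sketch, where the usernames, the 'staging' group and the svc_staging role account are all placeholders:

        sudo groupadd staging
        sudo useradd -m svc_staging                 # the shared role account
        sudo useradd -m -G staging alice
        sudo useradd -m -G staging bob
        # allow group members to act as the shared account; sudo logs who did what
        echo '%staging ALL=(svc_staging) ALL' | sudo tee /etc/sudoers.d/staging
        sudo chmod 0440 /etc/sudoers.d/staging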

    Read the article

  • Kerberos service on win2k dc will not start following disk failure

    - by iwilson68
    Hi, I have a Win2k (mixed mode) domain with 4 DCs. One of these also acts as an Exchange 2000 server which uses 2 logical volumes from an MSA 2000 array; AD etc. is stored on local drives. We experienced a problem last week when the RAID array fell back to a redundant controller, which temporarily meant that the two logical drives were not visible to the server for around 5 minutes and a couple of reboots. The log records these events as:
      Event Type: Warning
      Event Source: Disk
      Event Category: None
      Event ID: 51
      Date: 06/11/2009
      Time: 11:46:23
      User: N/A
      Computer: server1
      Description: An error was detected on device \Device\Harddisk1\DR1 during a paging operation.
    Following these problems, the server's "Kerberos Key Distribution" service refuses to start with "error 31: a device attached to the system is not functioning". All other automatic-start services (including Net Logon) are running and there are no DNS issues etc. All devices are also functioning, but the two logical MSA disks are now numbered in the Windows Disk Management MMC as 2 and 4, and I suspect that they may have previously been identified as disks 1 & 2 and perhaps Windows still sees this as an ongoing failure? Replication has not been affected, but obviously there are many audit failures in the security log relating to users and workstations, presumably linked to the Kerberos issue. Attempting to manually start the Kerberos service generates the following in the System log:
      Event Type: Error
      Event Source: Service Control Manager
      Event Category: None
      Event ID: 7023
      Date: 09/11/2009
      Time: 09:46:55
      User: N/A
      Computer: Server1
      Description: The Kerberos Key Distribution Center service terminated with the following error: A device attached to the system is not functioning.
    DCDIAG passes all tests except "Advertising" and "Services", which I believe relate directly to the failure of Kerberos only. Any advice would be appreciated.

    Read the article

  • Which IP should I use for my nameserver

    - by Luke Bream
    Sorry to ask what's probably a very obvious question. I have just got a new server that is fantastically cheap but unfortunately doesn't come with any technical support, and I'm very out of my depth! My hosting company has provided the following information:
    "Below you will find your additional IP addresses added to the server 5.9.36.51. Please note that you can use the subnet only for this server.
      IP: 5.9.225.64 /27
      Mask: 255.255.255.224
      Broadcast: 5.9.225.95
      Useable IP addresses: 5.9.225.65 to 5.9.225.94"
    It has cPanel with WHM and I'm going through the setup. My domain is purchased from GoDaddy and I want to use it for the nameservers. I have a number of questions:
    Question 1: Which IP or IPs do I enter into the GoDaddy interface for ns1.mydomain.com and ns2.mydomain.com?
    Question 2: In the WHM nameserver setup, what do I enter for "Please enter an IP address for each of your nameservers":
      ns1.mydomain.com ??????
      ns2.mydomain.com ??????
      Add "A Entries" for Hostname -- IP for Entry: ?????????
    Thanks for any help you can give. Luke
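
    The usual approach is to pick two different addresses from the usable range, register them at GoDaddy as host (glue) records for ns1/ns2, and enter the same two addresses in WHM's nameserver setup and "A Entries" fields. A minimal sketch of what the resulting records amount to, assuming 5.9.225.65 and 5.9.225.66 are the two addresses chosen (any two from the usable range would do):

        ns1.mydomain.com.   IN  A   5.9.225.65
        ns2.mydomain.com.   IN  A   5.9.225.66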

    Read the article

  • How do I diagnose a bottleneck in an Intel Atom based Ubuntu server?

    - by Jon Cage
    I have a small media server at home which has software RAID and a gigabit link to the rest of my network. For some reason, though, I only get ~10MB/s transfers when copying to/from the server. I use software RAID5 (mdadm) over four 1TB disks. On top of that I use LVM to give me a huge pool of disk space, which is then split up into multiple partitions that can be resized as and when they need it. I'm guessing this is most likely the cause, but I'd like to know for sure where the root cause is. So, how can I benchmark network throughput (Windows 7 desktop <-> Ubuntu server) and hard disk performance to try and identify where my bottleneck might be?
    [Edit] If anyone's interested, the motherboard is an Intel Desktop Board D945GCLF2. So that's a 300-series Atom processor with the Intel® 945GC Express Chipset.
    [Edit2] I feel like such a fool! I just checked my desktop and I had the slower of the two onboard NICs plugged in, so the server is probably not at fault here. Transferring a copy of Ubuntu off the server I get ~35-40MB/s according to Windows 7. I'll do those HD tests when I get a chance though (just for completeness).
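
    A reasonable way to split the problem is to measure the network path and the array separately. A minimal sketch, assuming iperf is installed on both ends, the RAID device is /dev/md0, and the test file path is a placeholder on an LVM-backed mount:

        # network only: run this on the Ubuntu server, then "iperf -c <server-ip>" from the Windows desktop
        iperf -s
        # disk only, bypassing the network entirely:
        sudo hdparm -tT /dev/md0                                           # raw sequential read
        dd if=/dev/zero of=/srv/testfile bs=1M count=1024 conv=fdatasync   # write through LVM/RAID, synced at the end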

    Read the article

  • sendmail relay status

    - by Andy
    Hello all, I have a RHEL3 server with sendmail configured to relay mail to:
      # "Smart" relay host (may be null)
      DSmailrelay
    This relay server is an Exchange server not administered by me. A few days ago its IP address was changed without my knowledge, so I've updated the correct IP in /etc/hosts for the mailrelay entry. Unfortunately no mail is currently going through and maillog reports:
      Oct 26 14:32:39 fsimag sendmail[12580]: n9Q3VxPA012580: from=root, size=3685, class=0, nrcpts=1, msgid=<~R.*.2009102614315955@*>, relay=root@localhost
      Oct 26 14:32:39 fsimag sendmail[12580]: n9Q3VxPA012580: to=wodwest@*.net, delay=00:00:40, mailer=esmtp, pri=33685, dsn=4.4.3, stat=queued
      Oct 26 14:36:09 fsimag sendmail[13670]: n9Q3ZTcf013670: from=root, size=5831, class=0, nrcpts=1, msgid=<~R.medicus.2009102614352914@*>, relay=root@localhost
      Oct 26 14:36:09 fsimag sendmail[13670]: n9Q3ZTcf013670: to=tsgastro@(.net, delay=00:00:40, mailer=esmtp, pri=35831, dsn=4.4.3, stat=queued
      Oct 26 14:36:50 fsimag sendmail[13882]: n9Q3aAxj013882: from=root, size=5830, class=0, nrcpts=1, msgid=<~C.medicus.2009102614361009@*>, relay=root@localhost
      Oct 26 14:36:50 fsimag sendmail[13882]: n9Q3aAxj013882: to=elmwood@*.net, delay=00:00:40, mailer=esmtp, pri=35830, dsn=4.4.3, stat=queued
    (With domains obscured.) The mailq command shows nothing, and I've also tried connecting to this new mail server via telnet and manually sending; the message reports as being queued but not sent. The administrator of this machine has put it back to me saying he sees no problems, and I just want to cover everything before passing it back to him. Are there any other tests/logs/reasons for sendmail to only report it as "stat=queued"? I've looked at previous logs; the relay is set to root@localhost in those too, but none were ever set to queued. Thanks for any help, Andy
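
    One more check worth doing from the RHEL3 box itself is to force a verbose delivery attempt and watch the SMTP dialogue with the smart host; dsn=4.4.3 with stat=queued often points at a name-resolution or connection problem reaching the relay, and the verbose run usually names the failing step. A minimal sketch using standard sendmail tooling (the relay hostname comes from the DS line above):

        ping -c 1 mailrelay              # confirm the host resolves to the new address you expect
        telnet mailrelay 25              # does the Exchange box answer from this host at all?
        mailq -v                         # queue contents with the per-message reason
        sendmail -q -v                   # force a queue run and watch the SMTP conversation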

    Read the article

  • Different versions of iperf for windows give totally different results

    - by Albert Mata
    Measuring TCP throughput from a Windows client to a Solaris server:
    - WXP SP3 with iperf 1.7.0 -- returns an average around 90 Mbit
    - Same client, same server, but iperf 2.0.5 for Windows -- returns an average of 8.5 Mbit
    Similar discrepancies have been observed connecting to other servers (W2008, W2003). It's difficult to come to any conclusions when different versions of the same tool provide vastly different results. Example below:
      C:\temp>iperf -v       (from iperf.fr)
      iperf version 2.0.5 (08 Jul 2010) pthreads
      C:\temp>iperf -c solaris10
      Client connecting to solaris10, TCP port 5001
      TCP window size: 64.0 KByte (default)
      [  3] local 10.172.181.159 port 2124 connected with 10.172.180.209 port 5001
      [ ID] Interval       Transfer     Bandwidth
      [  3]  0.0-10.2 sec  10.6 MBytes  8.74 Mbits/sec
    Abysmal performance, but now I test from the same host (Windows XP SP3 32bit and 100Mbit) to the same server (Solaris 10/sparc 64bit and 1Gbit running iperf 2.0.5 with default window of 48k) with the old iperf:
      C:\temp1>iperf -v
      iperf version 1.7.0 (13 Mar 2003) win32 threads
      C:\temp1>iperf.exe -c solaris10 -w64k
      Client connecting to solaris10, TCP port 5001
      TCP window size: 64.0 KByte
      [1208] local 10.172.181.159 port 2128 connected with 10.172.180.209 port 5001
      [ ID] Interval       Transfer     Bandwidth
      [1208]  0.0-10.0 sec  112 MBytes   94.0 Mbits/sec
    So one iperf with a 64k window says 8.74 Mbit and the old iperf with the same window size says 94.0 Mbit. These results are constant through repeated tests. From my testing, launching iperf (old) with window size "x" and iperf (new) with window size "x", instead of producing the same or very close results, produces totally different results. The only difference I see is that the old one is compiled as win32 threads vs. pthreads, but parallelism (-P 10) appears to work in both. Does anyone have a clue, or can recommend a tool that gives results I can trust?
    EDIT: Looking at traces from the (old) iperf, it sets the TCP Window Scale flag to 3 in the SYN packet; when I run the (new) iperf this is set to 0 in the initial packet. A quick analysis of the window size through the exchange shows the (old) iperf moving back and forth but mostly at 32k, while the (new) iperf mostly keeps at 64k. Maybe it will help somebody to connect the dots.

    Read the article

  • Why can't we treat SSL certs like PGP keys instead of trusting CAs?

    - by yarun can
    I am dumb and stupid and I do not know all the technical aspects of SSL and the server/client side implications and implementations. However, I understand them well enough from a user's point of view to use SSL and encryption daily. I was thinking about how silly it is to trust some unknown/known CAs when it comes to the certificates for our servers. There have been many cases of misconduct, misuse, compromise and theft of certificates/CA keys at those places. On top of those known issues we also have to pay these guys regularly. I am wondering why we can't use/treat web server certificates like we use our PGP keys. So I sign an SSL certificate and send it to a central server, and then each user accessing my site checks the validity and the keys against some central server (like PGP key servers). Is this a stupid idea? If so, what could be a better idea than the current system of issuing valid certificates? I am looking for a better, more secure idea. Naturally this is not a solution to an existing problem; rather it would be a hypothetical solution for some future implementation of a currently messed up web of trust on the internet, given recent news about the NSA and their criminal buddies around the world. Thanks

    Read the article

  • Improving abysmal 802.11n wireless network

    - by concept
    I am in desperate need of help to improve the abysmal performance of my 802.11n wireless network. At best I get 30Mb/s (this is an internet download) from a technology that boasts 300Mb/s; even worse is the LAN, where to date the best I have ever gotten is 1Mb/s. It is literally quicker to copy the file to a USB stick and walk it to the other computer. The infrastructure is this:
    - AP is 802.11n only, broadcasting at both 2.4GHz and 5GHz
    - Mac with 802.11a/b/g/n card is connected to the AP via 5GHz
    - Linux with 802.11a/b/g/n card is connected to the AP via 2.4GHz
    I have conducted the following tests (results at the end of the post):
    - Internet-based speed test, wired and wireless
    - LAN file copy, wired and wireless
    I have read:
    - http://nutsaboutnets.com/troubleshooting-wi-fi-problems/
    - http://www.smallnetbuilder.com/wireless/wireless-basics/30664-5-ways-to-fix-slow-80211n-speed
    - http://www.wi-fiplanet.com/tutorials/7-tips-to-increase-wi-fi-performance.html
    - Slow file transfer on network between two 802.11n laptops (connected directly together via access point)
    - Wireless Network Performance Issues
    - Slower than expected 802.11n wireless network speeds
    I have made the following optimizations:
    - AP broadcasts only 802.11n on both 2.4GHz and 5GHz frequencies
    - 2.4GHz is on the channel with the least interference (I live in an apartment with lots of APs); this did make a 10Mb/sec improvement
    - Our AP is the only one transmitting on the 5GHz frequency
    - Security: WPA Personal, WPA2 AES encryption
    - Bandwidth: 20MHz / 40MHz (I assume this to be channel bonding)
    I have tried the following with zero improvement:
    - Dropped the Fragment Threshold to 512
    - Dropped the Request To Send (RTS) Threshold to 512 and 1
    - Even thought of buying a frequency spectrum analyzer, until I saw the cost of them!
    Speed test results:
    - Linux wired: DOWNLOAD 128.40Mb/s, UPLOAD 10.62Mb/s (www.speedtest.net/my-result/2948381853)
    - Mac wired: DOWNLOAD 118.02Mb/s, UPLOAD 10.56Mb/s (www.speedtest.net/my-result/2948384406)
    - Linux wireless: DOWNLOAD 23.99Mb/s, UPLOAD 10.31Mb/s (www.speedtest.net/my-result/2948394990)
    - Mac wireless: DOWNLOAD 22.55Mb/s, UPLOAD 10.36Mb/s (www.speedtest.net/my-result/2948396489)
    LAN NFS copy of a 53,345,087 byte (51Mb) file:
    - Linux to Mac NFS wired: 65.6959 Mb/sec
    - Linux to Mac NFS wireless: .9443 Mb/sec
    All help is appreciated, even testing methods will be accepted.
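
    Given the wired numbers are fine, the first thing worth confirming is the actual negotiated wireless link rate on each client; if the radios only associate at legacy or low MCS rates, the 300Mb/s headline never comes into play. A minimal sketch on the Linux client, assuming the interface is wlan0 and the usual wireless tools are installed:

        iwconfig wlan0            # look at "Bit Rate" and "Link Quality"
        iw dev wlan0 link         # current rx/tx bitrate and MCS index, if the iw tool is available
        # on the Mac, Option-clicking the Wi-Fi menu icon shows the corresponding Tx rate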

    Read the article

  • How do I determine whether this email bounce is my fault?

    - by David Zaslavsky
    I use Google Apps to handle email for my personal website, so I have an email address [email protected] through that, and I also have a Gmail account [email protected]. Now, I've been trying to send emails to a particular recipient who shall be known as [email protected]. When I send the email from my Gmail account with the @gmail.com address, it works fine. However, when I send it from my Google Apps account with the @ellipsix.net address, I get a bounce message which includes the following text: Delivery to the following recipient failed permanently: [email protected] Technical details of permanent failure: Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 554 554 mail server permanently rejected message (#5.3.0) (state 17). The bounce message suggests that it is up to the mail administrator of the recipient domain example.com to fix the problem, whatever it is. But I would like to be as sure as possible that nothing needs to be fixed on my end. I already have DKIM signatures enabled for my domain, and I have published an SPF DNS record. Is there something else I should check or do, or can I be confident that it's up to the recipient to fix this issue? Does the "state 17" in the bounce message mean something relevant? I've included my domain name in the question so people who know more than me about this stuff can independently check the relevant DNS records or other information. This other question seems similar, but I've already investigated everything suggested in the answers there (except for contacting Google, which I don't want to do unless I suspect it's their issue to fix).
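
    One way to rule out the sending side is to confirm that the domain's SPF and DKIM records resolve as expected, and to compare the full headers of the accepted @gmail.com message with the rejected @ellipsix.net one. A minimal sketch of the DNS checks, assuming the DKIM selector is 'google' (a common Google Apps default; the real selector is whatever was configured in the Apps control panel):

        dig +short TXT ellipsix.net                      # should include the v=spf1 record
        dig +short TXT google._domainkey.ellipsix.net    # should return the v=DKIM1 public key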

    Read the article

  • Swapping Function (Fn) and Control (Ctrl) Keys on Lenovo ThinkPad W500

    - by Howiecamp
    I'd like to swap the Fn and Ctrl keys on my ThinkPad W500 (like many others! See: How can I switch the function and control keys on my laptop? and Intercepting the Fn key on laptops).
    1. Numerous folks indicate that Windows doesn't register the Fn key as a keypress, but using Mihov ASCII Master 2.0, which gives the ASCII value of a keypress, I see the Fn key returning FF (perhaps FF in this case means 'not registered'). I also see that keys like Ctrl register with one ASCII code when pressed alone and another when pressed in combination with another key. Fn will only register when pressed alone, so Windows definitely isn't seeing the combo. This took a solution like AutoHotkey off the table.
    2. I ran KeyTweak (which shows you the hardware scan codes of a keypress, and the Fn key registered as 57443). Using this program I remapped Fn to the Ctrl key; this worked perfectly. However, I suspect that because of the issue in #1, a combo such as Fn + C did not execute a copy.
    3. Short of retraining my pinky, I'm actually considering removing the keyboard and resoldering the connections to swap those keys.
    I'd love to get some input as to the root technical issue(s) and possible solutions here.

    Read the article

  • Perl EPIC Not recognising installed CPAN modules

    - by Recc
    Eclipse on a Mac; it was working fine adding new modules until I installed Text::CSV_XS, which Eclipse doesn't recognise as added to @INC. For instance:
      use strict;
      use SOAP::Transport::HTTP;
      SOAP::Transport::HTTP::CGI->dispatch_to('C2FService')->handle;
      BEGIN {
          package C2FService;
          use vars qw(@ISA);
          @ISA = qw(Exporter SOAP::Server::Parameters);
          use SOAP::Lite;
          sub c2f {
              my $self     = shift;
              my $envelope = pop;
              my $temp     = $envelope->dataof("//c2f/temperature");
              return SOAP::Data->name(
                  'convertedTemp' => ( ( ( 9 / 5 ) * ( $temp->value ) ) + 32 ) );
          }
      }
    Here "use SOAP::Transport::HTTP;" is marked as an error; if I comment it out, "use SOAP::Lite;" is in turn marked as an error -- not found etc., the usual when a module is not installed. Both are installed with CPAN, and:
      $ perl -c soap-test.pl
      post-code-check.pl syntax OK
    Perl is fine, the CPAN tests all pass, the code works; only EPIC lags behind.
      $ pwd && ls
      /opt/local/lib/perl5/site_perl/5.12.4/SOAP
      Client.pod        Lite            Server.pod
      Constants.pm      Lite.pm         Test.pm
      Data.pod          Packager.pm     Trace.pod
      Deserializer.pod  SOM.pod         Transport
      Fault.pod         Schema.pod      Transport.pod
      Header.pod        Serializer.pod  Utils.pod
    And if I have "use" errors at the start of my files, the rest of the source is not error-checked.
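
    As a sanity check that the modules are genuinely visible to the same perl that EPIC is configured to use, these one-liners are worth running outside Eclipse; if they pass, the gap is usually the Perl include path configured in EPIC (it needs to contain the site_perl directory shown above), not the installation itself. A minimal sketch, assuming the paths from the question:

        perl -MText::CSV_XS -e 'print "$Text::CSV_XS::VERSION\n"'   # loads fine from the command line?
        perl -MSOAP::Lite -e 'print "$SOAP::Lite::VERSION\n"'
        perl -e 'print join("\n", @INC), "\n"'                      # is /opt/local/lib/perl5/site_perl/5.12.4 in here?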

    Read the article

  • QNAP NAS 509 (LINUX) - how to unmount busy volume and find physical disk?

    - by Horst Walter
    On my QNAP TS-509 NAS I have a technical issue: I need to run e2fsck. This works fine for me on md0 (see below), but how can I unmount the busy devices md9 and sda4 in order to do the same? Whenever I try, I fail because the device is busy. [This part is solved, see below.] In order to further track down the issue, I'd need to sort out the physical-disk-to-device relationship. How can I find this out? E.g. md0 is a striped volume on 2 disks, but I need to find out which physical disks. Remark: as you can easily derive from my questions, I am not a Linux expert, but I manage to get along.
      /dev/ram0    124.0M   94.1M   29.8M  76%  /
      tmpfs         32.0M   80.0k   31.9M   0%  /tmp
      /dev/sda4    310.0M  103.9M  206.1M  34%  /mnt/ext
      /dev/md9     509.5M   39.2M  470.2M   8%  /mnt/HDA_ROOT
      /dev/md0       1.8T    1.4T  444.7G  76%  /share/MD0_DATA
      tmpfs         32.0M       0   32.0M   0%  /.eaccelerator.tmp
    -- Added --
    QNAP seems to be based on BusyBox. I do not find something like init / telinit / runlevel. The BusyBox docs say I need to run the below, but in /var/service, sv is not available. I want to go to single-user mode to unmount the devices.
      # cd /var/service
      # sv d *
      # sv u getty*
    -- Added, thanks A4L --
    This QNAP box runs a special flavor of Linux, so not all SOPs apply. In my particular case I found a services.sh script that stops all services; after that the drive could be unmounted. The information passed by A4L is valid and worth reading; maybe I'll profit from it next time. Links: http://unix.stackexchange.com/questions/19918/umount-device-is-busy and http://unix.stackexchange.com/questions/15024/umount-device-is-busy-why
    So the unmount issue is solved; I'm still looking for the best option to find the physical-to-volume mapping.
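
    For the physical-to-volume mapping, the md tools themselves can report which member devices each array uses, and those members map back to the physical drive bays. A minimal sketch, assuming mdadm is present in the QNAP firmware (it normally is on the TS-x09 series):

        cat /proc/mdstat              # lists every md device and its member partitions (e.g. sda3, sdb3)
        mdadm --detail /dev/md0       # exact member devices and their state
        mdadm --detail /dev/md9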

    Read the article

  • Intermittent CNAME forwarding

    - by Godric Seer
    I host a personal website on an old desktop that is LAMP based. Since I have a dynamic IP, I use no-ip to make sure I have a working domain name at all times. I also have a domain I have bought on GoDaddy where I have a CNAME record forwarding the www subdomain to my no-ip domain. At all times, I can connect to my website through the no-ip domain without issue. For the past several weeks, I never had an issue using the GoDaddy domain to connect (ssh or https). As of today, however, the GoDaddy domain only works for about 10 minutes at a time. I get server not found errors most of the time. Also, if I happen to be using the GoDaddy domain for an ssh connection, the connection will freeze. I have attempted to run tests using a couple of online DNS check websites, but have not gotten any errors at any time. I also contacted GoDaddy support but they had no issues connecting to the website, and therefore did not see any issues. I would like advice on how I could debug/resolve this issue. Since the problem appeared without me changing anything on my end, I hope it will resolve itself, but knowing the cause in case it happens again would be preferable. EDIT: I changed the configuration in GoDaddy to create an A (Host) that points at my current IP. This works fine, so I can access the site through the GoDaddy domain without the preceding www. I am currently waiting for a new CNAME record to propagate that points the www subdomain at the main host, rather than my no-ip domain.
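
    When the name only resolves intermittently, it helps to see what each resolver in the chain is returning at the moment of failure. A minimal sketch using dig, where the domain name is a placeholder for the real GoDaddy domain:

        dig www.example.com CNAME +short             # what the local resolver says right now
        dig @8.8.8.8 www.example.com CNAME +short    # what a public resolver says
        dig +trace www.example.com                   # walk from the roots to GoDaddy's authoritative servers

    If the authoritative answer stays correct while the local one comes and goes, the problem is on the caching/resolver side rather than in the GoDaddy record itself.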

    Read the article

  • Web hosting for multiple web sites providing system isolation

    - by Justin
    We have a small number of projects where we expect the client will not be maintaining the installed versions of the applications we install to power the site (such as Drupal). Given that an important part of security is keeping things updated, we don't want to host these projects on our Plesk-powered dedicated servers that currently host lots of our other clients' websites. Our goal is to find a host where we can deploy isolated instances (be these slices, virtual servers, grid servers, etc.) for each individual web site (or groups of 2-3) as we need them. These instances would be completely separate, so that if one web site were hacked it would not impact any other site. Typical hosting requirements:
    - Linux
    - Apache
    - PHP 5
    - MySQL
    - Supports Drupal
    - Ability to set up a cron task (but we don't need SSH access)
    - Daily backups
    - Virtualized/cloud hosting (we want to avoid shared)
    - Pricing per site around $25/month
    - OS is patched automatically
    Some options we have considered but won't work:
    - MediaTemple: two major data-center-wide security incidents and recent downtime foster doubt about this host's technical ability.
    - Slicehost: this would require us to manage the entire server, which we don't want to do.
    - Rackspace Cloud Sites (formerly Mosso): no backup options.
    Do you have any recommended hosting options given these requirements?

    Read the article

  • SpamAssassin bayesian score discrepancies

    - by CaptSaltyJack
    This makes my brain hurt. For some reason, SpamAssassin is giving high scores to certain emails, but when I test them on the command line, they get a low score. This one particular email has this in the header:
      X-Spam-Flag: YES
      X-Spam-Score: 8.521
      X-Spam-Level: ********
      X-Spam-Status: Yes, score=8.521 tagged_above=-9999 required=5
          tests=[BAYES_99=3.5, BAYES_999=0.2, HTML_MESSAGE=0.001, NO_RECEIVED=-0.001,
          NO_RELAYS=-0.001, RAZOR2_CF_RANGE_51_100=0.5, RAZOR2_CF_RANGE_E8_51_100=1.886,
          RAZOR2_CHECK=0.922, URIBL_RHS_DOB=1.514] autolearn=no
    Yet when I dump the raw email into a file msg and run sudo su amavis -c 'spamassassin -t msg', I get this output:
      Content analysis details:   (3.8 points, 5.0 required)
       pts rule name              description
      ---- ---------------------- --------------------------------------------------
       1.5 URIBL_RHS_DOB          Contains an URI of a new domain (Day Old Bread)
                                  [URIs: cliobeads.com]
      -1.0 ALL_TRUSTED            Passed through trusted hosts only via SMTP
       0.0 HTML_MESSAGE           BODY: HTML included in message
      -0.0 BAYES_20               BODY: Bayes spam probability is 5 to 20%
                                  [score: 0.1855]
       1.9 RAZOR2_CF_RANGE_E8_51_100 Razor2 gives engine 8 confidence level above 50%
                                  [cf: 100]
       0.5 RAZOR2_CF_RANGE_51_100 Razor2 gives confidence level above 50%
                                  [cf: 100]
       0.9 RAZOR2_CHECK           Listed in Razor2 (http://razor.sf.net/)
    I'm really confused as to why, when the email comes in, it gets a completely different score attached to it than when I run spamassassin -t. Is there some other way I should be testing emails? Also, my users have the ability to drag false positives into a folder called "False Positives," and every day a cron job fires off that runs this on every message in every user's folder:
      sa-learn --dbpath=/var/lib/amavis/.spamassassin --ham /tmp/*-*.eml >/dev/null
    I ran sudo locate bayes_toks and there's definitely only one Bayes DB on the system, in /var/lib/amavis/.spamassassin. I'm clueless; any help would be great and may help restore my sanity!
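
    One detail worth noticing in the two runs: the command-line test hits ALL_TRUSTED and BAYES_20 while the live run hit BAYES_99, which suggests the two paths may not be loading the same Bayes data or trust path even though only one bayes_toks exists. A minimal sketch for comparing what each path actually uses, assuming amavis invokes SpamAssassin with the dbpath shown above:

        sudo su amavis -c 'spamassassin -D bayes,config -t msg 2>&1 | grep -iE "bayes|dbpath"'
        sudo su amavis -c 'sa-learn --dbpath=/var/lib/amavis/.spamassassin --dump magic'   # token/ham/spam counts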

    Read the article

  • Apache times out after 2 minutes, when Apache TimeOut is 120

    - by Robert Gowland
    We have a setup where the browser makes an HTTP request to Box A, which in turn makes an HTTP request to Box B. What we're encountering is that Box A waits for Box B to respond for two minutes, then the user sees:
      The page cannot be displayed
      Explanation: There is a problem with the page you are trying to reach and it cannot be displayed.
      Try the following:
      * Refresh page: Search for the page again by clicking the Refresh button. The timeout may have occurred due to Internet congestion.
      * Check spelling: Check that you typed the Web page address correctly. The address may have been mistyped.
      * Access from a link: If there is a link to the page you are looking for, try accessing the page from that link.
      Technical Information (for support personnel)
      * Error Code: 404 Not Found. The requested item could not be located. (12028)
    Watching the logs on Box B, we see that it takes 5 minutes to do the work requested. The problem is that the Apache timeouts on both boxes are set to 1200 (20m), not 120 (2m). Any ideas where to look?
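
    The global TimeOut is not the only timer that can sit between two Apache boxes: if Box A forwards the request with mod_proxy, the proxy connection has its own timeout, and the 12028 code in that error page looks like a WinINet/ISA-style code, which hints a proxy layer may be the one generating the response. A minimal sketch of what to check in Box A's config, assuming mod_proxy is how A reaches B; the directives are standard Apache ones (2.2+), not taken from your actual config:

        # httpd.conf on Box A (only relevant if mod_proxy/ProxyPass is in use)
        Timeout 1200
        ProxyTimeout 1200
        ProxyPass /app http://boxb.example.com/app timeout=1200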

    Read the article

  • cPanel web server redundancy advice?

    - by crgnz
    At present I operate a (reasonably low volume) web-hosting service with a CentOS 5.3 server running cPanel/WHM. I would like to implement a level of redundancy such that in the event of server failure, I can restore service with a minimum of effort in less than 60 minutes. I also want to set up a secondary DNS that cPanel will replicate with. My current idea is to kill two birds with one stone:
    - My current server is called "www1".
    - Purchase an identical server (HP DL360 G4) with mirrored disks. Call this server "www2".
    - Install CentOS 5.4 (or perhaps I should install 5.3 to be identical to www1).
    - Install cPanel/WHM on this server and fully license it.
    - Set up www1 and www2 cPanel to replicate DNS with each other.
    - Set up a nightly replication script (see the sketch below) that does the following:
      a) rsyncs the /home directory from www1 to www2
      b) dumps all MySQL databases on www1 and copies them to a temp folder (with root access only) on www2
      c) triggers a script on www2 that restores the MySQL dumps
    Thus each night a fully working copy of all the websites and MySQL databases is copied to www2. I do not have enough knowledge of MySQL replication to understand whether it works safely and transparently with cPanel; thus I propose the MySQL dump/copy/restore due to not knowing any better! In the event that www1 dies a horrible death, I envisage that I could log into www2, change the IP addresses to those that www1 had, and presto, the websites are available again. The advantage of this idea is that it is fairly simple and "low tech" and thus does not require an expert sysadmin to set up and monitor (I am NOT an expert sysadmin). The disadvantage is that up to a full day's worth of data changes would be lost. I think this would be acceptable to the sorts of customers I host at the moment. The other disadvantage would be having to pay for a full cPanel license, but I am comfortable with that cost, so for now all I want to discuss are technical considerations. Is this a sound scheme?
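
    For the nightly step, this is a minimal sketch of what such a script could look like; it is a sketch only, the hostnames and paths are placeholders, it assumes root SSH keys between the boxes, and it streams the dumps over SSH rather than staging them in a temp folder as described above:

        #!/bin/sh
        # sync account home directories
        rsync -aH --delete /home/ root@www2:/home/
        # dump every database and load it straight into www2's MySQL
        # (--single-transaction gives a consistent dump for InnoDB tables)
        mysqldump --all-databases --single-transaction | ssh root@www2 mysql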

    Read the article

  • Resize a RAID 1 volume on OSX Snow Leopard - how? (Note: software raid)

    - by Emmel
    I've scoured the Internet in search of an answer to this question, and as usual with OSX-related topics, I often don't find any deep-dive technical explanations sufficient enough to feel confident doing dangerous things. Here is my question: I have a Mac Pro, running OSX 10.6.2. I have, as my main root/boot disk, a RAID 1 volume called "Mirror1". Mirror1 is comprised of two 1 TB disks. Mirror1, however, is fixed at 640 GB. That's because I originally took a 640GB disk, bought a terabyte disk, mirrored it (using diskutil appleraid enable...), and when it synced I removed the 640GB disk and replaced it with a second 1 TB disk, and synced again. Voila! A single 640 GB disk replaced by two 1 TB disks in a mirror. Actually, no. There's still something missing from the equation: Mirror1 needs to be expanded from 640GB to 1 TB to match the partition sizes on each of those disks. How do I do this? Perhaps the diskutil output will help:
      -> diskutil list
      /dev/disk0
         #:                     TYPE NAME        SIZE        IDENTIFIER
         0:    GUID_partition_scheme             *1.0 TB     disk0
         1:                      EFI             209.7 MB    disk0s1
         2:               Apple_RAID             999.9 GB    disk0s2
         3:               Apple_Boot Boot OSX    134.2 MB    disk0s3
      /dev/disk1
         #:                     TYPE NAME        SIZE        IDENTIFIER
         0:    GUID_partition_scheme             *1.0 TB     disk1
         1:                      EFI             209.7 MB    disk1s1
         2:               Apple_RAID             999.9 GB    disk1s2
         3:               Apple_Boot Boot OSX    134.2 MB    disk1s3
      /dev/disk2
         #:                     TYPE NAME        SIZE        IDENTIFIER
         0:    GUID_partition_scheme             *640.1 GB   disk2
         1:                      EFI             209.7 MB    disk2s1
         2:                Apple_HFS Mac Disk 2  536.7 GB    disk2s2
         3:     Microsoft Basic Data BOOTCAMP    103.1 GB    disk2s3
      /dev/disk3
         #:                     TYPE NAME        SIZE        IDENTIFIER
         0:                Apple_HFS Mirror1     *639.8 GB   disk3
      -> diskutil appleraid list
      AppleRAID sets (1 found)
      ===============================================================================
      Name:                 Macintosh HD
      Unique ID:            1953F864-B474-4EB6-8E69-41834EBD0247
      Type:                 Mirror
      Status:               Online
      Size:                 639.8 GB (639791038464 Bytes)
      Rebuild:              manual
      Device Node:          disk3
      -------------------------------------------------------------------------------
      #  Device Node   UUID                                   Status
      -------------------------------------------------------------------------------
      0  disk1s2       25109BAE-5697-40EA-B612-0217851444F7   Online
      1  disk0s2       11B83AB0-8148-4DB6-8761-DEF08C855F8D   Online
      ===============================================================================
    Thanks in advance.

    Read the article
