Search Results

Search found 13404 results on 537 pages for 'george host'.

  • Bandwidth Suggestion

    - by Campo
    I have been asked to analyze the bandwidth usage of a company and make a recommendation for upgrading their Internet connection(s). Here is the layout:
    3 DSL lines, each 6 Mbps down / 1 Mbps up (so roughly 18 Mbps down / 3 Mbps up aggregate), into a load balancer and out to the office's network.
    30 VOIP phones run on a T1 (1.5 Mbps down, 1.5 Mbps up).
    The users at the company are heavily uploading. My suspicion is that the slowdown is caused by multiple people uploading at once, leaving others unable to get even simple HTTP requests out. My initial idea is to get them a fiber line with 10 Mbps down and 10 Mbps up. What do others think of this plan? Will that be enough to carry their network traffic? What do I do about the VOIP line afterward? The fiber is expensive, and I know the T1 does a great job for their VOIP, so I do not want to suggest a DSL line because it may not be sufficient. I would also like to save them some money if I can, maybe even get a faster fiber line and forgo the T1, though I know their load balancer/switch can only handle 20 MB/s of throughput. Looking for some confirmation/suggestions on my plan. I am planning on going in to get some real diagnostic numbers. Any suggestions on software to use for that? Preferably Windows software.

  • CentOS 5.6: How to resolve php53 RPM dependency conflict with php-mcrypt and php-common?

    - by Stefan Lasiewski
    We are running a CentOS 5.6 system, and want to install php53 with php-mcrypt. However, this introduces a dependency conflict between php-common & php53-common. Does anyone have a good workaround for this problem?

      host # yum install php-mcrypt
      Loaded plugins: fastestmirror
      Loading mirror speeds from cached hostfile
       * epel: linux.mirrors.es.net
      Setting up Install Process
      Resolving Dependencies
      --> Running transaction check
      ---> Package php-mcrypt.x86_64 0:5.1.6-15.el5.centos.1 set to be updated
      --> Processing Dependency: php-api = 20041225 for package: php-mcrypt
      --> Processing Dependency: php >= 5.1.6 for package: php-mcrypt
      --> Running transaction check
      ---> Package php.x86_64 0:5.1.6-27.el5_5.3 set to be updated
      --> Processing Dependency: php-cli = 5.1.6-27.el5_5.3 for package: php
      ---> Package php-common.x86_64 0:5.1.6-27.el5_5.3 set to be updated
      --> Running transaction check
      ---> Package php-cli.x86_64 0:5.1.6-27.el5_5.3 set to be updated
      --> Processing Conflict: php53-common conflicts php-common
      --> Finished Dependency Resolution
      php53-common-5.3.3-1.el5_6.1.x86_64 from installed has depsolving problems
        --> php53-common conflicts with php-common
      Error: php53-common conflicts with php-common
       You could try using --skip-broken to work around the problem
       You could try running: package-cleanup --problems
                              package-cleanup --dupes
                              rpm -Va --nofiles --nodigest

    This is apparently a known problem (see php-devel, Bug 700179 and Bug 695708) and this post at the CentOS forums, but there is no official fix yet.
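
    For reference, a sketch of one possible workaround, assuming nothing else on the box pins the PHP 5.1 packages: swap the php-* stack for php53-* in a single yum transaction. Note that the stock repos have no 5.3 build of php-mcrypt, so the mcrypt extension itself would still have to come from a third-party repo or PECL; the package names below are illustrative.

      host # yum shell
      > remove php php-cli php-common
      > install php53 php53-cli php53-common
      > run
      > quit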

  • Free tiered storage automation in linux?

    - by NginUS
    I have a couple of virtualized fileservers running in QEMU/KVM on ProxmoxVE. The physical host has 4 storage tiers with significant performance variances. They're attached both locally and via NFS. These will be provided to the fileserver(s) as local disks, abstracted into pools, and handling multiple streams of data for the network. My aim is for this abstraction layer to intelligently pool the tiers. There's a similar post on the site here: Home-brew automatic tiered storage solutions with Linux? (Memory - SSD - HDD - remote storage), in which the accepted answer suggested abandoning a Linux solution for NexentaStor. I like the idea of running NexentaStor. It almost fits the bill. NexentaStor provides Hybrid Storage Pools, and I love the idea of checksumming. 16TB without incurring licensing fees is a huge plus as well. After the expense of the hardware, free is about all my budget can handle. I don't know if zfs pools are adaptive or dynamically allocated based on load, but it becomes irrelevant since NexentaStor doesn't support virtio network or block drivers, which is a must in my environment. Then I saw a commercial solution called SmartMove: http://www.enigmadata.com/smartmove.html, and it looks like a step in the right direction, but I'm so broke I'd be wasting their time to even ask for a quote, so I'm looking for another option. I'm after a Linux implementation that supports virtio drivers, and I'm at a loss as to which software is up to it.

  • Oracle logical standby fails with ORA-01919

    - by DCookie
    I have an Oracle logical standby database being managed via Data Guard. Just this morning the redo apply process began failing with an ORA-01919 error, indicating one of our application roles did not exist. However, I can see the role on both primary and standby databases. We also have a physical standby that has long since applied the redo where this is happening on the logical, without issue. I have opened an SR with Oracle. I was wondering if anyone out there has seen this before. I guess I should mention: Oracle 10.2.0.4, Win2003 Server SP2.
    UPDATE: So far, Oracle Support has not provided an answer. I thought I'd post here what I have learned so far. It appears that a grant of DBA to a role on the primary host works fine for users granted the role. It does not work on the logical standby. IOW:

      create role TEST;
      grant dba to TEST;
      grant TEST to auser;
      connect auser
      set role TEST;
      grant <existing role> to <existing user>;

    This works on the primary instance but fails on the logical. A workaround appears to be to grant each role on the primary to the role TEST with admin option on the logical standby:

      grant <existing role> to TEST with admin option;  <== do this on the logical standby

    Then the command works on the logical standby.

  • wget is working only when used with sudo

    - by Yusuf
    I've been seeing quite strange behavior from wget since yesterday. I can download files by using sudo wget, but when I try the same file with plain wget, I get this error:

      yusufh@ubuntu-yuh:~$ wget http://www.kegel.com/wine/winetricks
      --2010-12-17 09:34:11-- http://www.kegel.com/wine/winetricks
      Resolving www.kegel.com... failed: Name or service not known.
      wget: unable to resolve host address `www.kegel.com'

    and with sudo wget:

      yusufh@ubuntu-yuh:~$ sudo wget http://www.kegel.com/wine/winetricks
      --2010-12-17 09:35:37-- http://www.kegel.com/wine/winetricks
      Connecting to 127.0.0.1:5865... connected.
      Proxy request sent, awaiting response... 200 OK
      Length: 190672 (186K) [text/plain]
      Saving to: `winetricks'
      100%[==================================================================================================>] 190,672 --.-K/s in 0.03s
      2010-12-17 09:35:37 (6.92 MB/s) - `winetricks' saved [190672/190672]

    After the comments below, here is an update: I can use Google Chrome or Firefox perfectly without running them as root. I use ntlmaps to connect to the office proxy, so I need to use 127.0.0.1:5865 as the proxy for clients. Result of env | grep -i proxy:

      NO_PROXY=localhost,127.0.0.0/8,*.local,
      http_proxy=127.0.0.1:5865
      ftp_proxy=127.0.0.1:5865
      all_proxy=socks://127.0.0.1:5865/
      ALL_PROXY=socks://127.0.0.1:5865/
      https_proxy=127.0.0.1:5865
      no_proxy=localhost,127.0.0.0/8,*.local

    while sudo env | grep -i proxy is empty! HELP!
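
    For reference: in the working (sudo) run, wget clearly finds the proxy, while the plain run tries to resolve www.kegel.com directly, so the user-level proxy settings are seemingly not being honored. Two things worth checking, as assumptions to verify rather than a definitive fix: whether root has its own proxy config in /etc/wgetrc or /root/.wgetrc, and whether wget wants a scheme prefix on the proxy variables. A minimal ~/.wgetrc sketch:

      use_proxy = on
      http_proxy = http://127.0.0.1:5865/
      https_proxy = http://127.0.0.1:5865/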

  • Can Subject Alternative Name accommodate multiple virtual mail domains?

    - by Lawrence
    I am currently running a postfix server with self-signed certificates serving one mail domain, mycompany.com; the mail server is mail.mycompany.com and so is the CN of the certificate. Now I need to add a new domain, mycompany.net, to the same server. Since the users already have the root of the old certificate, I'd like to reuse that. However, I'd like to issue a new certificate so users pointing Outlook/Thunderbird SMTP at mail.mycompany.net do not get warnings. If I understand correctly, if I issue a new certificate with CN=mail.mycompany.com and a subjectAltName=DNS:mail.mycompany.net and have postfix serve this, the client will not complain either way about the CN not matching the target host name. Am I correct in this assumption, or am I misunderstanding the concept of Subject Alternative Name? Just to avoid conversation: I do not want users on mycompany.net addresses to use the mycompany.com server name, because I might (not a technical issue) have to split up into two different locations, and I want to produce an easily migratable setup.
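
    One point worth noting: clients that understand Subject Alternative Name generally ignore the CN entirely once a SAN extension is present, so it is safest to list both host names as SAN entries. A sketch of issuing such a certificate with the existing private CA (file names here are assumptions):

      # san.cnf
      [req]
      distinguished_name = dn
      req_extensions     = v3_req
      [dn]
      [v3_req]
      subjectAltName = DNS:mail.mycompany.com, DNS:mail.mycompany.net

      openssl req -new -key mail.key -out mail.csr \
        -subj "/CN=mail.mycompany.com" -config san.cnf
      openssl x509 -req -in mail.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
        -extfile san.cnf -extensions v3_req -days 730 -out mail.crt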

  • Degraded RAID-5 array with lvm2 lost superblock and partition table

    - by Fred Phillips
    I have a RAID-5 array of 4x1TB hard disks with one lvm2 partition on Ubuntu Linux 10.04 LTS. One of the disks has failed. I have re-assembled the array without this failed disk, but now mdadm --examine claims the array has no superblock and fdisk says it has no partition table. What can I do to recover the data?

      # mdadm -D /dev/md0
      /dev/md0:
              Version : 1.2
        Creation Time : Sat Mar 5 14:43:49 2011
           Raid Level : raid5
           Array Size : 2930276352 (2794.53 GiB 3000.60 GB)
        Used Dev Size : 976758784 (931.51 GiB 1000.20 GB)
         Raid Devices : 4
        Total Devices : 4
          Persistence : Superblock is persistent
          Update Time : Sat Mar 5 15:06:49 2011
                State : clean, degraded
       Active Devices : 3
      Working Devices : 3
       Failed Devices : 1
        Spare Devices : 0
               Layout : left-symmetric
           Chunk Size : 512K
                 Name : boba:1 (local to host boba)
                 UUID : 52eb4bc9:c3d8aab5:e0699505:e0e1aa05
               Events : 18

          Number   Major   Minor   RaidDevice State
             0       8        1        0      active sync   /dev/sda1
             1       8       65        1      active sync   /dev/sde1
             2       8       49        2      active sync   /dev/sdd1
             3       0        0        3      removed
             4       8       17        -      faulty spare  /dev/sdb1

      # mdadm --examine /dev/md0
      mdadm: No md superblock detected on /dev/md0.

      # fdisk -l /dev/md0
      Disk /dev/md0: 3000.6 GB, 3000602984448 bytes
      2 heads, 4 sectors/track, 732569088 cylinders
      Units = cylinders of 8 * 512 = 4096 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 524288 bytes / 1572864 bytes
      Disk identifier: 0x00000000

      Disk /dev/md0 doesn't contain a valid partition table

      # cat /proc/mdstat
      Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
      md0 : active raid5 sdb1[4](F) sda1[0] sdd1[2] sde1[1]
            2930276352 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]

      unused devices: <none>
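
    For what it's worth, both symptoms above may be expected rather than signs of data loss: mdadm --examine inspects md member superblocks, which live on the component partitions (/dev/sda1 etc.), not on the assembled /dev/md0; and an LVM physical volume written directly to the array has no partition table for fdisk to find. A hedged recovery sketch (volume group and LV names are placeholders):

      mdadm --examine /dev/sda1          # superblocks live on the members
      pvscan                             # look for the LVM PV on /dev/md0
      vgscan
      vgchange -ay                       # activate the volume group
      mount -o ro /dev/<vg>/<lv> /mnt    # mount read-only first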

  • Use Apache authentication to Segregate access to Subversion subdirectories

    - by Stefan Lasiewski
    I've inherited a Subversion repository, running on FreeBSD and using Apache 2.2. Currently, we have one project, which looks like this. We use both local files and LDAP for authentication.

      <Location />
        DAV svn
        SVNParentPath /var/svn
        AuthName "Staff only"
        AuthType Basic
        # Authentication through local file (mod_authn_file), then LDAP (mod_authnz_ldap)
        AuthBasicProvider file ldap
        # Allow some automated programs to check content into the repo
        # mod_authn_file
        AuthUserFile /usr/local/etc/apache22/htpasswd
        Require user robotA robotB
        # Allow any staff to access the repo
        # mod_authnz_ldap
        Require ldap-group cn=staff,ou=PosixGroup,ou=foo,ou=Host,o=ldapsvc,dc=example,dc=com
      </Location>

    We would like to allow customers to access certain subdirectories, without giving them global access to the entire repository. We would prefer to do this without migrating these subdirectories to their own repositories. Staff also need access to these subdirectories. Here's what I tried:

      <Location /www.customerA.com>
        DAV svn
        SVNParentPath /var/svn
        # mod_authn_file
        AuthType Basic
        AuthBasicProvider file
        AuthUserFile /usr/local/etc/apache22/htpasswd-customerA
        Require user customerA
      </Location>

      <Location /www.customerB.com>
        DAV svn
        SVNParentPath /var/svn
        # mod_authn_file
        AuthType Basic
        AuthBasicProvider file
        AuthUserFile /usr/local/etc/apache22/htpasswd-customerB
        Require user customerB
      </Location>

    Access to '/' works for staff. However, access to /www.customerA.com and /www.customerB.com does not work. It looks like Apache is trying to authenticate 'customerB' against LDAP, and doesn't try the local password file. The error is:

      [Mon May 03 15:27:45 2010] [warn] [client 192.168.8.13] [1595] auth_ldap authenticate: user stefantest authentication failed; URI /www.customerB.com [User not found][No such object]
      [Mon May 03 15:27:45 2010] [error] [client 192.168.8.13] user stefantest not found: /www.customerB.com

    What am I missing?
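
    For reference, a different approach that avoids per-path <Location> blocks entirely is mod_authz_svn's path-based rules: authenticate everyone at the top level and let an access file decide who can read or write each subdirectory. A sketch, assuming mod_authz_svn is loaded and the repository is named "repo" (both assumptions; user names are placeholders, and note that authz groups are defined in the file itself, since LDAP groups can't be referenced there directly):

      <Location />
        DAV svn
        SVNParentPath /var/svn
        AuthType Basic
        AuthBasicProvider file ldap
        AuthUserFile /usr/local/etc/apache22/htpasswd
        Require valid-user
        AuthzSVNAccessFile /usr/local/etc/apache22/svn-access
      </Location>

      # /usr/local/etc/apache22/svn-access
      [groups]
      staff = alice, bob, robotA, robotB
      [repo:/]
      @staff = rw
      [repo:/www.customerA.com]
      customerA = rw
      @staff = rw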

  • Configuring DNS and IIS for multiple domains on a single server

    - by RichardS
    I might be overcomplicating this, but... I am hosting several websites, and DNS for the domains, on a single server:
    domain1.net
    domain1.com
    domain2.net
    I have three items which I'm trying to work out whether to achieve via DNS, via IIS host names (bindings), or via IIS redirects.
    1. Where I have domain1.net and domain1.com, I want everything from both (all emails and web requests) to just point to domain1.net. Can I do this at the DNS level, or do I have to set up the email addresses as forwarders on the email server and the domain as a host name in IIS? For example: [email protected], [email protected], www.domain1.com, www.domain1.net.
    2. I want to make sure that requests for domain1.net and www.domain1.net both resolve to the same place. Should this be done with DNS, with multiple host names, or with IIS redirects?
    3. If I then want to have one webmail site serving all of the domains (webmail.domain1.net, webmail.domain2.net), is it best to do this with a CNAME in DNS or with host headers in IIS?
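
    For reference, a hedged sketch of how the DNS side of (1) and (3) could look; record names are assumptions, and note that DNS alone cannot alias mailbox addresses, so the mail server still has to accept domain1.com as an alias domain for domain1.net:

      ; domain1.com zone
      www      IN CNAME  www.domain1.net.
      @        IN MX 10  mail.domain1.net.

      ; domain2.net zone
      webmail  IN CNAME  webmail.domain1.net.

    On the IIS side, each extra name (www.domain1.com, webmail.domain2.net) would still need a binding/host header on the target site, since IIS routes by the Host header regardless of how DNS got the client there.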

  • open_basedir problems with APC and Symfony2

    - by Stephen Orr
    I'm currently setting up a shared staging environment for one of our applications, written in PHP 5.3 and using the Symfony2 framework. If I only host a single instance of the application per server, everything works as it should. However, if I then deploy additional instances of the application (which may or may not share the exact same code, depending on client customisations), I get errors like this:

      [Tue Nov 06 10:19:23 2012] [error] [client 127.0.0.1] PHP Warning: require(/var/www/vhosts/application1/httpdocs/vendor/doctrine-common/lib/Doctrine/Common/Annotations/AnnotationRegistry.php): failed to open stream: Operation not permitted in /var/www/vhosts/application2/httpdocs/app/bootstrap.php.cache on line 1193
      [Tue Nov 06 10:19:23 2012] [error] [client 127.0.0.1] PHP Fatal error: require(): Failed opening required '/var/www/vhosts/application1/httpdocs/app/../vendor/doctrine-common/lib/Doctrine/Common/Annotations/AnnotationRegistry.php' (include_path='.:/usr/share/pear:/usr/share/php') in /var/www/vhosts/application2/httpdocs/app/bootstrap.php.cache on line 1193

    Basically, the second site is trying to require files from the first site, but due to open_basedir restrictions it can't do that. I'm not willing to disable open_basedir, as that would only mask the problem instead of solving it, and would create a dependency between applications that should not be present. I initially believed this was related to a Symfony2 error, but I've now tracked it down to an issue with APC; disabling APC also solves the error, but I'm concerned about the performance impact of doing so. Does anyone have any suggestions on what I might be able to do?

  • SSD seems dead after wakeup from Windows Sleep, BIOS stalls but doesn't find it anymore

    - by Abel
    This morning, the following scary scenario happened:
    I woke up my Windows system.
    I typed in my username and got an error (something like "could not load security xxx", but I'm unsure of the exact wording).
    The system auto-restarted after clicking OK.
    It didn't boot up to the SSD with the Windows 7 OS anymore (I have another disk I can boot to, but that doesn't see the disk either).
    Obviously, this happened right after I initiated a backup procedure, which hasn't succeeded either. The BIOS can't find the drive when I connect it to SATA, and it can't find the drive when I connect it to SAS. I have a Dell Workstation T7400 with the most recent BIOS (version A06); the version of the SAS Host Bus Adapter BIOS (HBA) is MPTBIOS 6.14.10.00 (2007.09.29) from LSI Logic Corp. Other findings:
    When connecting to SATA, the Dell logo screen stays up really long (5 minutes), and then at the end of POST it says that a drive is not found.
    When connecting to SAS, the SAS HBA initializing phase takes long (2 minutes, against normally 15 seconds).
    When running Dell Diagnostics, it doesn't finish and gives the error: Exception occurred in module MPCACHE.MDM file "IOAPICSP.ASM" line 1645.
    I contacted Dell. On their advice I tried different slots and different cables, to no avail. I use an APC battery backup, so power spikes are unlikely. My conclusion so far: the disk is dead. I need this disk very badly because it contains the last few days of important development, of which not all code was checked in the moment this happened. Are there any ways to recover dead SSD drives? The drive is a new X25-M G2 160GB, model SSDSA2M160G2GC, 2.5", in an extension bay, and has been running without issues for 3 months on SAS.

  • Overriding vhost.conf to always allow PHP include access to directory

    - by Jeremy Dentel
    My predecessor in my job developed a simplistic newsletter system for our school's newspaper utilizing PEAR's Mail package. As I grow this system (and our site), we are constantly stuck with Plesk rewriting the vhost.conf file in which the PEAR include path has been manually entered. This has become an unwieldy thing to manage and keep running. There's been a "note" from both the previous developer and me to attempt to solve this problem, but we can't entirely figure it out. I'm attempting a move to cPanel through another host, so hopefully it'll go away there, but until then it can be tedious and extremely difficult to get a solid uptake of the system without constant "web-presence". I've searched around and haven't found a solution. I'm rather new to the server management scene (the command line was non-existent for me till around a year ago. =/), so I haven't found anything. Any help would be useful. "Similar Questions" popped this up, but it still seems to rely on vhost.conf, and will still allow changes within Plesk to overwrite the changes.
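
    For reference, one way to sidestep the regenerated vhost.conf entirely is to set the include path in the application itself rather than in the Apache config; a minimal sketch (the path is the usual PEAR default and may differ on your box):

      set_include_path('/usr/share/pear' . PATH_SEPARATOR . get_include_path());
      require_once 'Mail.php';

    Alternatively, if mod_php is in use and the override is permitted, an .htaccess line survives Plesk's rewrites:

      php_value include_path ".:/usr/share/pear"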

  • Send Apache Access Logs to syslog

    - by Seer
    We have IBM HTTP Servers (based on Apache 2.0) and want to send the access logs to syslog, in addition to the error logs, which do work. The config we are using is as follows:

      ErrorLog "|/HTTPServer/bin/rotatelogs /archive/http/error_log.%Y%m%d 86400 | /usr/bin/logger -t httpd -plocal6.err"
      LogLevel warn
      LogFormat "%h %{True-Client-IP}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %D \"%{Host}i\" %v" combined
      LogFormat "%h %l %u %t \"%r\" %>s %b" common
      LogFormat "%{Referer}i -> %U" referer
      LogFormat "%{User-agent}i" agent
      CustomLog "|exec /usr/bin/logger -t ptseelm-ax3004 -i -p local6.notice" combined

    But the log entries don't even appear in the local syslog.out. Here is what the processes look like:

      ps -ef | grep httpd
      apache  6226000 8388618 0 09:04:01 - 0:00 /HTTPServer/bin/httpd -d /HTTPServer -k start
      apache  6750220 8388618 0 09:04:01 - 0:00 /HTTPServer/bin/httpd -d /HTTPServer -k start
      apache  7602390 8388618 0 09:04:01 - 0:00 /HTTPServer/bin/httpd -d /HTTPServer -k start
      root    8388618       1 0 09:04:01 - 0:00 /HTTPServer/bin/httpd -d /HTTPServer -k start
      root    9044038 8388618 0 09:04:01 - 0:00 /usr/bin/logger -t httpd -plocal6.err

    So there is no logger attached to the child processes... is that the problem? Can someone help me out? :) We have the following in syslog.conf:

      local6.* @somerealipaddress
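
    Two hedged observations rather than a definitive fix: first, with local6.* pointed only at a remote host, matching entries would never appear in a local log, so the receiving host is where to look; second, the syslog path can be tested independently of Apache. A sketch:

      # test the local6 route by hand
      echo "test access log entry" | /usr/bin/logger -t httpd-access -p local6.notice

      # temporarily also log local6 locally for debugging (syslog.conf)
      local6.* /var/log/http-access.log

      # a plain piped access log, without 'exec'
      CustomLog "|/usr/bin/logger -t ptseelm-ax3004 -i -p local6.notice" combined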

  • Is there an SSL equivalent to an ssh agent?

    - by Matthew J Morrison
    Here is my situation: there are a number of developers who all need to be able to install Ruby gems and Python eggs from a remote source. Currently, we have a server inside our firewall that hosts the gems and eggs. We now want to be able to install things hosted on that server from outside of our firewall. Since some of the gems and eggs that we host are proprietary, I would like to lock access to that machine down somewhat, as unobtrusively as possible for the developers.
    My first thought was using something like ssh keys, so I spent some time looking at SSL mutual authentication. I was able to get everything set up and working correctly, testing with curl, but the unfortunate thing was that I had to pass extra arguments to curl so it knows about the certificate, key, and certificate authority. I was wondering if there is anything like the ssh agent that I can set up to provide that information automatically, so that I can push the certificates and keys to the developers' machines and the developers don't have to log in or provide keys each time they try to install something. Another thing that I want to avoid is having to modify the 'gem' command and the 'pip' command to provide keys when they make the HTTP connection. Any other suggestions that may solve this problem (not related to SSL mutual auth) are also welcome.
    EDIT: I've been continuing to research this and I came across stunnel. I think this may be what I'm looking for; any feedback regarding stunnel would also be great!
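
    For reference, a client-side stunnel sketch along those lines: stunnel holds the client certificate and presents it on every connection, so gem and pip just talk plain HTTP to a local port. File names, the port, and the host name are assumptions:

      ; /etc/stunnel/gems.conf
      client = yes
      cert   = /etc/stunnel/dev-client.pem   ; client cert + key pushed to each dev box
      CAfile = /etc/stunnel/ca.pem
      verify = 2

      [gems]
      accept  = 127.0.0.1:8808
      connect = gems.example.com:443

    The tools then point at the local end of the tunnel, e.g. gem sources -a http://127.0.0.1:8808/ or pip install --index-url http://127.0.0.1:8808/simple/ <package>.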

  • can't connect to vsftpd from outside network

    - by rick
    I know this has been asked many times before, but nothing seems to resolve my issue. I have vsftpd running on Ubuntu 10.04. I can connect with ftp localhost on the machine, and I can connect from another machine in my network. I just cannot connect from outside. The machine is behind an AirPort Extreme managed by AirPort Utility on a Mac. Port 21 is open as per nmap:

      macmini:~$ nmap localhost
      Starting Nmap 5.21 ( http://nmap.org ) at 2011-04-10 23:49 EDT
      Nmap scan report for localhost (127.0.0.1)
      Host is up (0.00045s latency).
      Hostname localhost resolves to 2 IPs. Only scanned 127.0.0.1
      rDNS record for 127.0.0.1: localhost.localdomain
      Not shown: 997 closed ports
      PORT    STATE SERVICE
      21/tcp  open  ftp
      22/tcp  open  ssh
      631/tcp open  ipp

    netstat says 21 is listening:

      macmini:~$ netstat -lep --tcp | grep ftp
      (Not all processes could be identified, non-owned process info
       will not be shown, you would have to be root to see it all.)
      tcp        0      0 *:ftp      *:*      LISTEN

    iptables:

      macmini:~$ sudo iptables -L
      Chain INPUT (policy ACCEPT)
      target     prot opt source               destination
      Chain FORWARD (policy ACCEPT)
      target     prot opt source               destination
      Chain OUTPUT (policy ACCEPT)
      target     prot opt source               destination

    When I try to connect from my external IP (or a dyndns name which resolves there), it times out ("control connection timed out"). As I know very little about networking, I feel like something may jump out as clearly wrong?
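
    A couple of hedged things to check: the nmap/netstat output above only proves the daemon listens locally, so the first suspect is the AirPort's port mapping for TCP 21 to this machine's internal IP. Even once the control connection works, FTP data connections in passive mode need their own forwarded range; a vsftpd.conf sketch (the port range and address are placeholders):

      pasv_enable=YES
      pasv_min_port=50000
      pasv_max_port=50050
      pasv_address=<your external IP>

    with TCP 50000-50050 mapped to the same host on the AirPort.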

  • SSH client not showing prompt after successful login

    - by user431949
    I'm having problems with my SSH client on Ubuntu 10.10. When I switch on my computer, open a Terminal and execute ssh user@host, it gives me a password prompt; after I enter the right password, I get a prompt to execute my commands on the remote computer. Now the problem: after a little while (probably around 10 minutes), the terminal window stops accepting commands (no matter what I type, nothing shows). Once this happens, I close the Terminal window and try to start all over again by opening another Terminal window. But this time around, after entering the right password, I don't get a welcome message or prompt; the cursor just keeps blinking on a new line. I ran the ssh command with the -v parameter, and the message I get after a successful login is:

      debug1: Authentication succeeded (password).
      debug1: channel 0: new [client-session]
      debug1: Entering interactive session.
      debug1: Sending environment.
      debug1: Sending env LANG = en_GB.utf8

    Still the cursor keeps blinking on a new line, without a prompt. However, the PuTTY SSH client works perfectly on the same machine. Thank you very much for your time. Your help would be greatly appreciated.
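
    The freeze after roughly 10 idle minutes looks like a stateful device dropping idle TCP sessions; whether that also explains the dead second login is less clear, but protocol-level keepalives are a cheap thing to try. A ~/.ssh/config sketch:

      Host *
          ServerAliveInterval 60
          ServerAliveCountMax 3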

  • Proxy settings in Java mail API

    - by coder
    I've written a piece of Java code where user1 sends email to user2. I'm behind a proxy and hence I'm getting a javax.mail.MessagingException. How do I solve this problem? Here is the code:

      import java.util.Properties;
      import javax.mail.Message;
      import javax.mail.MessagingException;
      import javax.mail.PasswordAuthentication;
      import javax.mail.Session;
      import javax.mail.Transport;
      import javax.mail.internet.InternetAddress;
      import javax.mail.internet.MimeMessage;

      public class Mail {
          public static void main(String[] args) {
              final String username = "[email protected]";
              final String password = "abc";

              Properties props = new Properties();
              props = System.getProperties();
              props.put("mail.smtp.auth", "true");
              props.put("mail.smtp.starttls.enable", "true");
              props.put("mail.smtp.host", "smtp.gmail.com");
              props.put("mail.smtp.port", "587");

              Session session = Session.getInstance(props, new javax.mail.Authenticator() {
                  protected PasswordAuthentication getPasswordAuthentication() {
                      return new PasswordAuthentication(username, password);
                  }
              });

              try {
                  Message message = new MimeMessage(session);
                  message.setFrom(new InternetAddress("[email protected]"));
                  message.setRecipients(Message.RecipientType.TO,
                          InternetAddress.parse("[email protected]"));
                  message.setSubject("Testing Subject");
                  message.setText("Dear Mail Crawler," + "\n\n No spam to my email, please!");
                  Transport.send(message);
                  System.out.println("Done");
              } catch (MessagingException e) {
                  throw new RuntimeException(e);
              }
          }
      }
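
    For reference: JavaMail speaks raw SMTP, so it can only traverse a proxy that can carry a TCP stream (SOCKS, or a dedicated tunnel); an HTTP-only proxy won't work directly. Assuming a SOCKS-capable proxy (the host and port below are placeholders), recent JavaMail (1.4.5+) supports per-session SOCKS properties:

      // route the SMTP connection through a SOCKS proxy (JavaMail 1.4.5+)
      props.put("mail.smtp.socks.host", "proxy.example.com");
      props.put("mail.smtp.socks.port", "1080");

      // or JVM-wide, for older JavaMail versions:
      System.setProperty("socksProxyHost", "proxy.example.com");
      System.setProperty("socksProxyPort", "1080");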

  • Setting up MongoDB in High Performance Computing LSF linux cluster

    - by Dnaiel
    I am trying to run mongo in an LSF cluster computing environment where I have no admin control. Our sysadmin installed mongodb, but it is not running. Any ideas on what I should ask the server admin to do for it to run? Or could I run it locally?

      [node1382]allelix> mongod --dbpath /users/dnaiel/ma/mongodb/
      Tue Oct 2 21:33:48 [initandlisten] MongoDB starting : pid=22436 port=27017 dbpath=/seq/epigenome01/allelix/ma/mongodb/ 64-bit host=node1382
      Tue Oct 2 21:33:48 [initandlisten]
      Tue Oct 2 21:33:48 [initandlisten] ** WARNING: You are running on a NUMA machine.
      Tue Oct 2 21:33:48 [initandlisten] **          We suggest launching mongod like this to avoid performance problems:
      Tue Oct 2 21:33:48 [initandlisten] **              numactl --interleave=all mongod [other options]
      Tue Oct 2 21:33:48 [initandlisten]
      Tue Oct 2 21:33:48 [initandlisten] db version v2.2.0, pdfile version 4.5
      Tue Oct 2 21:33:48 [initandlisten] git version: f5e83eae9cfbec7fb7a071321928f00d1b0c5207
      Tue Oct 2 21:33:48 [initandlisten] build info: Linux ip-10-2-29-40 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20 17:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_49
      Tue Oct 2 21:33:48 [initandlisten] options: { dbpath: "/users/dnaiel/ma/mongodb/" }
      Tue Oct 2 21:33:48 [initandlisten] journal dir=users/dnaiel/ma/mongodb/journal
      Tue Oct 2 21:33:48 [initandlisten] recover begin
      Tue Oct 2 21:33:48 [initandlisten] info no lsn file in journal/ directory
      Tue Oct 2 21:33:48 [initandlisten] recover lsn: 0
      Tue Oct 2 21:33:48 [initandlisten] recover /seq/epigenome01/allelix/ma/mongodb/journal/j._0
      Tue Oct 2 21:33:48 [initandlisten] recover cleaning up
      Tue Oct 2 21:33:48 [initandlisten] removeJournalFiles
      Tue Oct 2 21:33:48 [initandlisten] recover done
      Tue Oct 2 21:33:48 [websvr] admin web console waiting for connections on port 28017
      Tue Oct 2 21:33:48 [initandlisten] waiting for connections on port 27017

    It basically waits forever and cannot start mongodb. These servers are not webservers, but they do have network access; it's a cloud-computing LSF environment. Any advice would be welcome, thanks in advance.
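
    Worth noting: the last line, "waiting for connections on port 27017", suggests mongod actually started fine and is simply running in the foreground, which is its default behaviour. A sketch for backgrounding it instead (the log path is a placeholder):

      mongod --dbpath /users/dnaiel/ma/mongodb/ --fork --logpath /users/dnaiel/ma/mongod.log

    On a batch cluster, submitting that as a long-running LSF job (or running it inside screen/tmux on a node you're allowed to occupy) may be more appropriate than a daemon.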

  • How to tell statd to use portmap on a non-localhost IP address?

    - by jneves
    How can I make statd connect to an IP address other than 127.0.0.1? I have a server that is connected to 2 different networks (one public, one private), and I want it to provide an NFS share for only the private network. The host is Ubuntu 8.04; the private IP address is 192.168.1.202. I changed /etc/default/portmap to add:

      OPTIONS="-i 192.168.1.202"

    The command lsof -n | grep portmap returns:

      portmap 10252 daemon cwd  DIR  202,0     4096       2 /
      portmap 10252 daemon rtd  DIR  202,0     4096       2 /
      portmap 10252 daemon txt  REG  202,0    15248   13461 /sbin/portmap
      portmap 10252 daemon mem  REG  202,0    83708   32823 /lib/tls/i686/cmov/libnsl-2.7.so
      portmap 10252 daemon mem  REG  202,0  1364388   32817 /lib/tls/i686/cmov/libc-2.7.so
      portmap 10252 daemon mem  REG  202,0    31304   16588 /lib/libwrap.so.0.7.6
      portmap 10252 daemon mem  REG  202,0   109152   16955 /lib/ld-2.7.so
      portmap 10252 daemon 0u   CHR    1,3              960 /dev/null
      portmap 10252 daemon 1u   CHR    1,3              960 /dev/null
      portmap 10252 daemon 2u   CHR    1,3              960 /dev/null
      portmap 10252 daemon 3u  unix 0xecc8c3c0      4332992 socket
      portmap 10252 daemon 4u  IPv4 4332993              UDP 192.168.1.202:sunrpc
      portmap 10252 daemon 5u  IPv4 4332994              TCP 192.168.1.202:sunrpc (LISTEN)
      portmap 10252 daemon 6u   REG   0,12 289     3821511 /var/run/portmap_mapping

    I defined the following in /etc/hosts:

      192.168.1.202 server.local

    In /etc/default/nfs-common I changed STATDOPTS to:

      STATDOPTS="--name server.local"

    Yet when I run /etc/init.d/nfs-common start, it fails to start. The log shows:

      Jun 8 06:37:44 cookwork-web1 rpc.statd[9723]: Version 1.1.2 Starting
      Jun 8 06:37:44 cookwork-web1 rpc.statd[9723]: Flags:
      Jun 8 06:37:44 cookwork-web1 rpc.statd[9723]: unable to register (statd, 1, udp).

    An strace -f rpc.statd -n server.local results in a lot of lines, including this one:

      sendto(9, "\200]3\362\0\0\0\0\0\0\0\2\0\1\206\240\0\0\0\2\0\0\0\1"..., 56, 0, {sa_family=AF_INET, sin_port=htons(111), sin_addr=inet_addr("127.0.0.1")}, 16) = 56
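
    A hedged reading of that strace line: rpc.statd is sending its registration to the portmapper at 127.0.0.1:111, but portmap was bound only to 192.168.1.202, so the registration never arrives; that would match the "unable to register (statd, 1, udp)" failure. Checking what each address answers is a quick way to confirm:

      rpcinfo -p 192.168.1.202
      rpcinfo -p 127.0.0.1

    If that's the cause, the options are roughly: let portmap also listen on loopback, or firewall port 111 on the public interface instead of binding portmap to a single address.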

  • finding the best network latency between two countries

    - by Yoav Aner
    I know there are many tools to test for bandwidth and latency, but they all rely on having at least one host from which you can run those tests. I wonder whether there's an online source or some other way to guesstimate the latency or speed between two countries (in general). For example, would a customer in Japan get lower latency if the server is located in Singapore or Australia? Is a user in India likely to get a higher download speed from a server in the UK or in the US? Are there any online resources or clever ways to answer those questions with a reasonable degree of accuracy?
    [UPDATE]: Thanks for the great suggestions from Raffael Luthiger. I didn't know about those looking glass servers. The submarine cable maps were also really cool to discover (thanks to Jesper Mortensen). It also seems really wise to ask network professionals in the area about their experience, but obviously I don't have access to those. At least some of them are on SF :) However, I'm still a little unsure how to combine those resources to get some measurements. This is the information I have: two countries (A, B). I do have IP addresses of customers in country A (I can obtain those from the web server log files, for example). Presumably I can find some looking glass servers in country B and run a trace to those IPs. What are the best measurements to use? Are there any scripts that help automate at least some of this process?
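
    For what it's worth, a minimal sketch of the manual version of that process, assuming a shell on (or a looking glass in) country B and a handful of customer IPs from country A:

      # round-trip time and per-hop loss, averaged over 20 probes
      mtr --report --report-cycles 20 <customer-ip>

    Median round-trip time across a sample of customer IPs is probably the most comparable single number; throughput estimates are much shakier, since they depend on path capacity and TCP behaviour rather than distance alone.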

  • Hyper-V VM Lab + RRAS + RDP

    - by Dennis Evans
    My background is primarily .NET development with some system administration skills. I'm trying to set up a VM lab to test system applications I'm developing, but I've only ever done system administration in already set up environments; I've never set up my own. My current setup:
    Server 2008 R2 Hyper-V host on a physical machine (only role enabled) with two NICs.
    First NIC dedicated to management, with a DHCP address from the company's network.
    Second NIC dedicated to the RRAS VM, with a DHCP address from the company's network.
    The RRAS VM has two NICs: one is a virtual private internal-only NIC with a static entry; the other is the physical NIC mentioned above. I've joined it to my VMLab.net internal domain.
    My Active Directory Domain Controller server (ADCT) also runs DNS, DHCP, and Certificate Services, which I'm familiar with but don't understand completely.
    RRAS is already set up with NAT to provide the private internal network with Internet access.
    What I would like to do is be able to RDP into the servers/computers on the VMLab.net domain from my computer. Do I need to add the Remote Desktop Services role and enable the Remote Desktop Gateway service on RRAS in order to do this, or is there a way to set up port forwarding on RRAS to just allow a direct connection to the internal servers... or both? What would the best practices be here?
    Network diagram: http://i.stack.imgur.com/4qfnk.png

  • Port forwarding + shared connection with Ubuntu

    - by Joey Adams
    Because my wireless router's Ethernet ports are defective, I set up a shared wireless connection from my laptop (which has wifi) to my eMac (which does not) via a crossover Ethernet cable. The laptop is behind a router as 192.168.1.131, and the eMac is behind the laptop as 10.42.43.1. The laptop is running Ubuntu 9.10 (Karmic). I achieved the shared connection through NetworkManager Applet: I right-clicked on the network icon at the top right, went to Edit Connections, selected the wired connection named "Auto eth0", clicked "Edit...", went to the "IPv4 Settings" tab, and selected the method "Shared to other computers". The eMac can now access the Internet. Now I want to enable port forwarding. There's a game I want to play that needs port 6112 forwarded (both TCP and UDP) in order to host games. I set up the router to enable port forwarding for 192.168.1.131 (the laptop), but port forwarding still isn't available on the eMac. I suppose I need to pretend my laptop is a router and configure port forwarding on it, indicating that incoming connections to the laptop (192.168.1.131) should be forwarded to the eMac on the shared connection (10.42.43.1). Thus, packets coming into the router on port 6112 would be redirected to the laptop (by the router), then to the eMac (by the laptop). My question is: how would I do that on Ubuntu (in light of NetworkManager's presence)? Also, if I can't get this to work, does anyone mind hosting a comp stomp? :D
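
    A hedged iptables sketch of exactly that second hop (the interface name is an assumption; NetworkManager's connection sharing already sets up the MASQUERADE side, but may flush custom rules when the connection bounces, so these would need re-adding or a dispatcher-script hook):

      # on the laptop: forward incoming 6112 to the eMac
      sudo iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 6112 -j DNAT --to-destination 10.42.43.1
      sudo iptables -t nat -A PREROUTING -i wlan0 -p udp --dport 6112 -j DNAT --to-destination 10.42.43.1
      sudo iptables -A FORWARD -d 10.42.43.1 -p tcp --dport 6112 -j ACCEPT
      sudo iptables -A FORWARD -d 10.42.43.1 -p udp --dport 6112 -j ACCEPT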

  • SSD causing 100% CPU usage in Apache/PHP

    - by Tim Reynolds
    I wanted to increase the performance of my development laptop, so I added an Intel 320 Series SSD as my primary drive. Everything is amazingly fast, as expected, except Apache/PHP. I develop Magento using an Ubuntu 10.10 virtual machine. Information:
    Host OS: Win 7 Professional 64-bit
    Guest OS: Ubuntu 10.10 32-bit
    Processor: i7
    Chipset: QM55
    SSD: Intel 320 Series 160GB, 30% full
    HDD: Hitachi 320GB, 50% full (in side bay using an adapter)
    Laptop: Lenovo T510
    Using: shared folders
    Apache version: 2.2.16
    PHP version: 5.3.3-1
    APC version: 3.1.3p1
    APC memory: 128M
    Using tmpfs for cache, log, session directories in Magento
    In the VM running on the SSD (VM files and source files are on the same drive), loading a product page in the Admin takes on average 26.2 seconds and uses 100% CPU for nearly the entire time. In the VM running on the old HDD, loading the same page takes on average 4.4 seconds, mostly using around 40-50% of the CPU while rendering the page. I have read this post: Performance issues when using SSD for a developer notebook (WAMP/LAMP stack)? It says to change some settings in the BIOS. I have turned any and all power management features off in the BIOS. I can't for the life of me understand why this would be happening.
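
    One hedged thing to rule out, given the "shared folders" line above: if the Magento source tree is read through the hypervisor's shared-folder filesystem, APC's default apc.stat=1 makes PHP stat every file on every request through that slow layer, and the two disk setups may simply differ in how that cost shows up. Turning stat calls off is a quick test (with the caveat that the cache must then be cleared or Apache restarted after code changes):

      ; apc.ini
      apc.stat = 0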

  • How can I recover my system after running 'mkfs' on the system partition?

    - by Filip Podgórny
    I am not a Linux user, and while doing some homework I blindly typed sudo mkfs ext3 dev/sda2 (I had Ubuntu installed inside my Windows installation). I've done a few more things, and turned Ubuntu off to switch back to Windows. "No operating system installed" is the message I'm getting now. I plugged my HDD into another computer and all my files are still there. What should I do to get my Windows installation back?
    df -l (before mkfs):

      /dev/loop0   29G 2,0G  27G  8% /
      udev        3,0G 4,0K 3,0G  1% /dev
      tmpfs       1,2G 900K 1,2G  1% /run
      none        5,0M    0 5,0M  0% /run/lock
      none        3,0G 1,3M 3,0G  1% /run/shm
      /dev/sda3   455G 123G 333G 27% /host
      /dev/sdb1   1,9G 820M 1,1G 43% /media/PHONE CARD

    mkfs output (originally in Polish; translated):

      mke2fs 1.41.14 (22-Dec-2010)
      Filesystem label=
      OS type: Linux
      Block size=1024 (log=0)
      Fragment size=1024 (log=0)
      Stride=0 blocks, Stripe width=0 blocks
      25688 inodes, 102400 blocks
      5120 blocks (5.00%) reserved for the super user
      First data block=1
      Maximum filesystem blocks=67371008
      13 block groups
      8192 blocks per group, 8192 fragments per group
      1976 inodes per group
      Superblock backups stored on blocks:
              8193, 24577, 40961, 57345, 73729
      Writing inode tables: done
      Creating journal (4096 blocks): done
      Writing superblocks and filesystem accounting information: done
      This filesystem will be automatically checked every 30 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override.
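
    A hedged observation: the mkfs output shows a 102400 x 1 KiB (about 100 MiB) filesystem, which matches the size of the Windows 7 "System Reserved" boot partition; if that is what /dev/sda2 was, the data partition (/dev/sda3, mounted at /host above) is likely intact and only the boot files were destroyed. Under that assumption, booting the Windows 7 install DVD and running Startup Repair (possibly more than once) may rebuild the boot partition; from Linux, TestDisk can first confirm the partition layout:

      sudo apt-get install testdisk
      sudo testdisk /dev/sda   # inspect the partition table before changing anything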

  • Write Fedora.iso to USB and boot it from a Macbook

    - by MTilsted
    I have an .iso image of the full Fedora 16 install (downloaded from http://fedoraproject.org/en/get-fedora-options#formats as "Fedora 16 DVD"), and the question now is: how do I write it to a USB stick so I can install it on my MacBook? I tried using dd as the install guide said, and that gave me a USB stick which can boot on my PC, but it can't boot the Mac (the Mac start menu doesn't show it as a boot option). Edit: I downloaded a live install image and did this (sdd is my 4GB USB stick):

      /sbin/mkdosfs -F 32 -n usbdisk /dev/sdd1
      sudo livecd-iso-to-disk --format --reset-mbr --efi /tmp/download/Fedora-16-i686-Live-KDE.iso /dev/sdd1

    This produced an image which can boot on my PC but not on my Mac. This seems to indicate that --efi is not working, because if it really were EFI it would not boot on a normal PC, would it? I then tried this (the difference being that I write the image directly to /dev/sdd instead of /dev/sdd1), but it still will not boot on the Mac (it never shows up at the startup screen):

      sudo livecd-iso-to-disk --format --reset-mbr --efi /tmp/download/Fedora-

    PS: My host Linux is Fedora 13.
