Search Results

Search found 22078 results on 884 pages for 'composite primary key'.


  • What's the difference between a Volume and a Partition in Windows 7 diskpart

    - by user170232
    I was trying to follow the Intel guide for setting up iRST (Intel Rapid Start Technology) on my new laptop. The Intel manual says you need to create a *Volume* that is as big as or bigger than your available memory, set it to a specific id (id=84), then go into the iRST tool and adjust some settings. Looking at the disk manager on the laptop, I see there is already a *Partition* labeled "Hibernation Partition" which is a little bigger than the memory in my system. So it looks like iRST was already set up... BUT it's a Partition, not a Volume.

    Here's what the manual says to do (from http://download.intel.com/support/motherboards/desktop/sb/rapid_start_technology_user_guide.pdf):

        diskpart
        list disk
        select disk x                        (x is the disk to use; there's only one disk in this laptop)
        create partition primary size=X000   (X000 is the size to create)
        detail disk                          (lists details for the disk; this is where I get hung up)
        select volume Z                      (Z is the partition you created previously)
        set id=84 override
        exit

    The manual says the 'detail disk' command will list the volume number, but it doesn't: 'detail disk' only lists two volumes, Recovery and OS. If I do 'list partition', I see the 8 GB partition labeled "Hibernation Partition", so I can't continue with the 'set id=84 override' and 'exit' steps.

    The reason I went looking for the manual is that when iRST is enabled in the BIOS, the system won't resume from sleep. When it's disabled, it works fine, but the system goes into (legacy?) hibernation mode and takes a while to come out of it; iRST is supposed to resume from deep sleep very quickly.

    So, what's the difference between a Volume and a Partition? Should I delete the Hibernation Partition and create a Hibernation Volume? Anyone have any ideas? (If it matters, this is on a Dell XPS 13 with BIOS A08.) Thanks! J
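
    A minimal diskpart sketch of the kind of sequence being described, assuming the existing 8 GB hibernation partition is the one that needs the id, and using 'select partition' rather than 'select volume'. That substitution and the partition number are assumptions on my part, not something the Intel guide confirms:

        diskpart
        list disk
        select disk 0        (assumption: the laptop's single disk shows up as disk 0)
        list partition       (note the number of the 8 GB "Hibernation Partition")
        select partition N   (N is whatever number list partition reported)
        detail partition     (check the current type/id before changing anything)
        set id=84 override
        exit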

    Read the article

  • A complicated nginx/php-fpm chroot setup

    - by Rsaesha
    I'm running nginx and php-fpm, and I want to set up jails for each host. My setup is a little complicated, so following tutorials on the web gets me nowhere. Each site has a directory /var/www/domain.name/. Inside that directory there will be a public/ directory which will be the website root, a logs/ directory which will store nginx logs for that site specifically, and the chroot filesystem (etc/, usr/, etc.).

    The first problem I've run into is that no matter how I configure it, PHP-FPM cannot find the files that are passed to it via nginx. They result in a "Primary script unknown" error, and to make matters worse, the error messages from PHP-FPM are no more verbose than that, so I can't figure out what path is being passed by nginx.

    A php-fpm pool configuration for a host looks like this ('x' is incremented for each pool):

        [host]
        user = host
        group = www-data
        chroot = /var/www/domain.name
        chdir = /public
        listen = 127.0.0.1:900x

    The nginx config for this host looks like this:

        server {
            listen 80;
            server_name domain.name *.domain.name;
            root /var/www/domain.name/public;
            index index.php index.html index.html;

            location ~ \.php$ {
                expires epoch;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                include fastcgi_params;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass 127.0.0.1:9001;
            }
        }

    I'm guessing that the problem is the SCRIPT_FILENAME parameter, but I've changed it to just $fastcgi_script_name, and various other combinations, but to no avail. Can anyone help?
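
    For what it's worth, with chroot = /var/www/domain.name the pool resolves whatever path nginx hands it relative to that chroot, so a host-side path like /var/www/domain.name/public/index.php does not exist from PHP-FPM's point of view. A hedged sketch of the location block under that assumption (untested against this exact setup):

        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_index index.php;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            # path as seen from inside the chroot, not from the host
            fastcgi_param SCRIPT_FILENAME /public$fastcgi_script_name;
            fastcgi_pass 127.0.0.1:9001;
        }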

    Read the article

  • Setting up a Pagefile and Partition in Server 2008

    - by Brett Powell
    I am setting up 18 new machines for our company, and I have instructions from my new boss on setting up a pagefile and partitions. I have looked at their existing machines to base the new setups on, but there is no consistency between any 2 machines, which has left me extremely frustrated to say the least. My instructions are:

        1) Set a static pagefile (use the recommended value as max/min); put it on the SSD if an SSD is available.
        2) Make 3 partitions:
           C: is used for OS and install files
           D: is used for backups on machines with an SSD; on machines without an SSD, create a D: partition for the pagefile (2 * installed RAM for partition size)
           E: must be the partition hosting user files

    I have never messed with pagefiles before, and looking at their existing machines is offering no help. My questions are:

        1) As the machines I am setting up have no SSD (just 2 SATA drives), does it sound like the pagefile should be set up on the C: (primary) drive or the D:? The instructions are vague, so I have no idea.
        2) As C: and D: are both physical drives, does it sound like C: should be partitioned to create the E: drive, or D:?

    Thanks for any help I can get. I am extremely stressed out under a massive workload right now, and these vague instructions are quite infuriating.

    Read the article

  • iptables configuration to work with apache2 mod_proxy

    - by swdalex
    Hello! I have an iptables config like this:

        iptables -F INPUT
        iptables -F OUTPUT
        iptables -F FORWARD
        iptables -P INPUT DROP
        iptables -P OUTPUT DROP
        iptables -P FORWARD DROP
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT
        iptables -A OUTPUT -p tcp --sport 22 -j ACCEPT
        iptables -A INPUT -p tcp --dport 80 -j ACCEPT
        iptables -A OUTPUT -p tcp --sport 80 -j ACCEPT
        iptables -A INPUT -p tcp --dport 443 -j ACCEPT
        iptables -A OUTPUT -p tcp --sport 443 -j ACCEPT

    I also have an Apache virtual host:

        <VirtualHost *:80>
            ServerName wiki.myite.com
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPass / http://localhost:8901/
            ProxyPassReverse / http://localhost:8901/
            <Location />
                Order allow,deny
                Allow from all
            </Location>
        </VirtualHost>

    My primary domain www.mysite.com works well with this configuration (I don't use a proxy redirect on it), but my virtual host wiki.mysite.com is not responding. Please help me set up the iptables config so that wiki.mysite.com works too. I think I need to set up the iptables forwarding options, but I don't know how.

    Update: I have 1 server with 1 IP. On the server I have Apache 2.2 on port 80, and Tomcat 6 on port 8901. In Apache I set up forwarding of the domain wiki.mysite.com to Tomcat (mysite.com:8901). I want to secure my server by disabling all ports except 80, 22 and 443.
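
    A hedged guess at a missing piece rather than a confirmed diagnosis: with both the INPUT and OUTPUT policies set to DROP and no rule for the loopback interface, the proxied connection Apache makes to localhost:8901 is itself being dropped. A minimal sketch of the loopback rules that would cover that case:

        iptables -A INPUT -i lo -j ACCEPT
        iptables -A OUTPUT -o lo -j ACCEPT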

    Read the article

  • Setting "Register this connection's addresses in DNS" using GPO

    - by ChamaraG
    Hi All, I need to get the Windows XP client machines in my network to dynamically update their DNS A records. The network is an AD domain running on Windows Server 2003 R2 servers with Win XP SP3 clients. Some machines already have the "Register this connection's addresses in DNS" check box checked and successfully update the DNS server, but some machines do not have this check box set and I need to set it.

    I read that this is possible using a GPO, and I enabled the following under Computer Configuration - Administrative Templates - Network - DNS Client, entering the relevant parameters where required:

        Primary DNS Suffix
        Dynamic Update
        DNS Servers
        Connection-Specific DNS Suffix
        Register DNS records with connection-specific DNS suffix

    Running rsop.msc on the client machines shows that the GPO has been applied, and the client machines have been rebooted. The DNS server allows "Nonsecure and secure" dynamic updates and is only accessible from our internal network. But the "Register this connection's addresses in DNS" check box is still not set, and the hosts without it set are not updating their DNS A records. Per a suggestion on another web site, I tried running "ipconfig /registerdns", but it does not add the DNS A record.

    Any advice on what I am doing wrong here would be gratefully accepted :-) Thank you.

    Read the article

  • Fedora 17 - Dropping into debug shell after attempted partitioning

    - by i.h4d35
    So I tried creating a new partition on Fedora 17 using fdisk, as follows:

        Command (m for help): n
        Command action
           e   extended
           p   primary partition (1-4)
        p
        Partition number (1-4): 1
        First cylinder (2048-823215039, default 2048):
        Using default value 2048
        Last cylinder or +size or +sizeM or +sizeK (1-9039, default 9039): +15G

    Once this was done, instead of formatting the partition I created, I ran the partprobe command to write the changes to the partition table. On rebooting, the computer drops to the debug shell and gives me the following error:

        dracut warning: unable to process initqueue
        dracut warning: /dev/disk/by-uuid/vg_mymachine does not exist
        dropping to debug shell
        dracut:/#

    While trying to run fsck on the said partition from the debug shell, it says "etc/fstab not found", and inside /etc I see an fstab.empty file. Is it now possible to retrieve what I have on the computer? Any help would be appreciated. Thanks in advance.

    Edit: I've also tried the following steps for additional troubleshooting:

        1) I booted from the Fedora disc and tried rescue mode; it says no Linux partition was detected.
        2) I tried to create an fstab file by combining the entries from blkid and the /etc/mtab file, using the UUIDs from the mtab file. It didn't work: as soon as I rebooted the machine, it promptly dropped me into the debug shell again, and the fstab file I created wasn't there anymore in /etc (part of this solution).
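
    Since the device dracut is waiting for is named vg_mymachine, the root filesystem presumably lives on LVM, and the repartitioning may have disturbed whatever the volume group sat on. A minimal sketch of commands one might try from the dracut debug shell just to see what is still visible, assuming the lvm tool is included in the initramfs (usually true, but that's an assumption):

        blkid                (list block devices with their UUIDs and filesystem types)
        lvm pvscan           (look for LVM physical volumes)
        lvm vgscan           (look for volume groups)
        lvm vgchange -ay     (try to activate anything that was found)
        ls /dev/mapper/      (activated logical volumes should appear here)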

    Read the article

  • How do I get my ubuntu server to listen for database connections?

    - by Bob Flemming
    I am having problems connecting to my database outside of phpMyAdmin. I'm pretty sure this is because my server isn't listening on port 3306. When I run sudo netstat -ntlp on my OTHER, working server I can see the following line:

        tcp   0   0 0.0.0.0:3306   0.0.0.0:*   LISTEN   20445/mysqld

    However, this line does not appear on the server I am having difficulty with. How do I make my server listen for MySQL connections? Here is my my.cnf file:

        # The MySQL database server configuration file.
        #
        # You can copy this to one of:
        # - "/etc/mysql/my.cnf" to set global options,
        # - "~/.my.cnf" to set user-specific options.
        #
        # One can use all long options that the program supports.
        # Run program with --help to get a list of available options and with
        # --print-defaults to see which it would actually understand and use.
        #
        # For explanations see
        # http://dev.mysql.com/doc/mysql/en/server-system-variables.html

        # This will be passed to all mysql clients
        # It has been reported that passwords should be enclosed with ticks/quotes
        # escpecially if they contain "#" chars...
        # Remember to edit /etc/mysql/debian.cnf when changing the socket location.
        [client]
        port = 3306
        socket = /var/run/mysqld/mysqld.sock

        # Here is entries for some specific programs
        # The following values assume you have at least 32M ram

        # This was formally known as [safe_mysqld]. Both versions are currently parsed.
        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice = 0

        [mysqld]
        #
        # * Basic Settings
        #
        user = mysql
        pid-file = /var/run/mysqld/mysqld.pid
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        lc-messages-dir = /usr/share/mysql
        #skip-networking=off
        #skip_networking=off
        #skip-external-locking
        #
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        #bind-address = 0.0.0.0
        #
        # * Fine Tuning
        #
        key_buffer = 64M
        max_allowed_packet = 64M
        thread_stack = 650K
        thread_cache_size = 32
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover = BACKUP
        #max_connections = 100
        #table_cache = 64
        #thread_concurrency = 10
        #
        # * Query Cache Configuration
        #
        query_cache_limit = 2M
        query_cache_size = 32M
        #
        # * Logging and Replication
        #
        # Both location gets rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        # As of 5.1 you can enable the log at runtime!
        #general_log_file = /var/log/mysql/mysql.log
        #general_log = 1
        #
        # Error logging goes to syslog due to /etc/mysql/conf.d/mysqld_safe_syslog.cnf.
        #
        # Here you can see queries with especially long duration
        #log_slow_queries = /var/log/mysql/mysql-slow.log
        #long_query_time = 2
        #log-queries-not-using-indexes
        #
        # The following can be used as easy to replay backup logs or for replication.
        # note: if you are setting up a replication slave, see README.Debian about
        # other settings you may need to change.
        #server-id = 1
        #log_bin = /var/log/mysql/mysql-bin.log
        expire_logs_days = 10
        max_binlog_size = 100M
        #binlog_do_db = include_database_name
        #binlog_ignore_db = include_database_name
        #
        # * InnoDB
        #
        # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
        # Read the manual for more InnoDB related options. There are many!
        #
        # * Security Features
        #
        # Read the manual, too, if you want chroot!
        # chroot = /var/lib/mysql/
        #
        # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
        #
        # ssl-ca=/etc/mysql/cacert.pem
        # ssl-cert=/etc/mysql/server-cert.pem
        # ssl-key=/etc/mysql/server-key.pem

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 32M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completition

        [isamchk]
        key_buffer = 32M

        #
        # * IMPORTANT: Additional settings that can override those from this file!
        # The files must end with '.cnf', otherwise they'll be ignored.
        #
        !includedir /etc/mysql/conf.d/
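
    A hedged sketch of the usual next steps, assuming the goal is to listen on all interfaces and that nothing in /etc/mysql/conf.d/ is overriding the file above (that directory is included last, so it is worth checking):

        # in /etc/mysql/my.cnf, under [mysqld]: uncomment and bind to all interfaces
        bind-address = 0.0.0.0

        sudo service mysql restart
        sudo netstat -ntlp | grep 3306
        grep -R "bind-address\|skip-networking" /etc/mysql/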

    Read the article

  • S5000XVN Intel Motherboard - Only sees 12 out of 16GB of RAM

    - by Richie086
    So I have an older Intel S5000XVN motherboard; here are the specs:

        CPU Arch : 2 CPU - 2 Cores - 2 Threads
        CPU PSN : Intel Xeon CPU 5160 @ 3.00GHz
        CPU EXT : MMX, SSE (1, 2, 3, 3S), EM64T, VT-x
        CPUID : 6.F.6 / Extended : 6.F
        CPU Cache : L1 : 2 x 32 / 2 x 32 KB - L2 : 4096 KB
        Core : Woodcrest (65 nm) / Stepping : B2
        Freq : 2985.21 MHz (331.69 * 9)
        MB Brand : Intel
        MB Model : S5000XVN
        NB : Intel 5000X rev 31
        SB : Intel 6321ESB rev 09
        RAM : 16384 MB FB-DDR2
        RAM Speed : 331.7 MHz (1:1) @ N/A
        Slot 1 : 2048MB (5300)   Manufacturer : Kingston
        Slot 2 : 2048MB (5300)   Manufacturer : Qimonda
        Slot 3 : 2048MB (5300)   Manufacturer : Kingston
        Slot 4 : 2048MB (5300)   Manufacturer : Qimonda
        Slot 5 : 2048MB (5300)   Manufacturer : Kingston
        Slot 6 : 2048MB (5300)   Manufacturer : Qimonda

    As you can clearly see, CPU-Z says I have 16GB of PC2-5300 RAM installed, yet for some reason in both the BIOS and in Windows the maximum usable RAM is 12GB. I have a dedicated video card connected, so it can't be stealing RAM for the GPU (the S5000XVN does not have any onboard video). I am running Windows Server 2008 R2 x64 as the primary OS, so there should not be a memory limitation imposed by the OS.

    Has anyone experienced this before? Any ideas on how I can actually use all 16GB of RAM? I plan on using this machine as a Hyper-V server and have 2 quad-core Xeon CPUs on the way as I write this post.

    Read the article

  • Using mixed disks and OpenFiler to create RAID storage

    - by Cylindric
    I need to improve my home storage to add some resilience. I currently have four disks, as follows:

        D0: 500Gb (System, Boot)
        D1: 1Tb
        D2: 500Gb
        D3: 250Gb

    There's a mix of partitions on there, so it's not JBOD, but data is pretty spread out and not redundant. As this is my primary PC and I don't want to give up the entire OS to storage, my plan is to use OpenFiler in a VM to create a virtual SAN. I will also use Windows software RAID to mirror the OS. Partitions will be created as follows:

        D0 P1: 100Mb: System-Reserved Boot
        D0 P2: 50Gb: Virtual Machine VMDKs for OS
        D0 P3: 350Gb: Data
        D1 P1: 100Mb: System-Reserved Boot
        D1 P2: 50Gb: Virtual Machine VMDKs for OS
        D1 P3: 800Gb: Data
        D2 P1: 450Gb: Data
        D3 P1: 200Gb: Data

    This will result in a mirrored boot partition, a mirrored operating system, mirrored virtual machine OS disks, and four partitions for data. In the four data partitions I will create several large VMDK files, which I will "mount" into OpenFiler as block-storage devices, combined into three RAID arrays (due to the differing disk sizes). In effect, I'll end up with the following usable partitions:

        SYSTEM  100Mb  the small boot partition created by the Windows 7 installer (RAID-1)
        HOST    50Gb   the Windows 7 partition (RAID-1)
        GUESTS  50Gb   virtual machine guest VMDKs (RAID-1)
        VG1     900Gb  volume group consisting of a RAID-5 and two RAID-1s
        VG2     300Gb  volume group consisting of a single disk

    On VG1 I can dynamically assign storage for my media, photographs, documents, whatever, and it will be safe. On VG2 I can dynamically assign storage for data that is not critical and easily recoverable, as it is not safe. Are there any particular 'gotchas' when implementing a virtual OpenFiler like this? Is the recovery process for a failing disk going to be very problematic? Thanks.

    Read the article

  • Using 1920x1200 mode on SyncMaster T260HD in Linux

    - by dagorym
    I just got a Samsung SyncMaster T260HD monitor. It works straight out of the box with Windows, but I can't seem to get it to work with Linux, which is my primary OS for day-to-day work. The computer boots, but when going into graphical mode on Linux the monitor gives a "Mode not supported" error and doesn't display anything.

    I booted into Windows and, using PowerStrip, grabbed the exact modeline that should give the equivalent setting in Linux, then added it to my xorg.conf, but it doesn't seem to help. The modeline is:

        ModeLine "1920x1200" 153.9 1920 1984 2016 2080 1200 1203 1209 1235 +hsync -vsync

    This is the modeline for the working display settings in Windows, but it doesn't seem to work in Linux. My complete xorg.conf entry for the monitor is:

        Section "Monitor"
            Identifier   "Monitor0"
            ModelName    "SyncMaster"
            DisplaySize  518 324
            HorizSync    30.0 - 81.0
            VertRefresh  56.0 - 75.0
            Option       "dpms"
            ModeLine     "1920x1200" 153.9 1920 1984 2016 2080 1200 1203 1209 1235 +hsync -vsync
        EndSection

    I'm running Scientific Linux 5.4 (a clone of Red Hat Enterprise Linux 5.4), but I've tried booting a recent Linux Mint distro as well as Ubuntu 9.04 and had the same problem. Any suggestions on other things I should try or might be missing? If anyone's gotten this to work I'd love to know. Thanks.
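
    One thing not shown above is the Screen section; a ModeLine in the Monitor section is only used if the Screen section's Display subsection actually requests that mode. A hedged sketch of what that might look like, with the Device identifier assumed to match whatever the rest of the file uses:

        Section "Screen"
            Identifier   "Screen0"
            Device       "Videocard0"   # assumption: use the name of the existing Device section
            Monitor      "Monitor0"
            DefaultDepth 24
            SubSection "Display"
                Depth  24
                Modes  "1920x1200"
            EndSubSection
        EndSection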

    Read the article

  • Windows 7 DHCP Default Gateway not Overridden by manual Default Gateway

    - by dgwilson
    We have recently installed Windows 7 on the student computers. All student computers must be routed through our content filter, which is located at 192.168.0.63. In WinXP this was done by adding a Default Gateway in the network adapter settings (TCP/IP Properties - Advanced - Default Gateway); all teacher computers are routed through the DHCP-assigned default gateway of 192.168.0.1. In WinXP the DHCP default gateway was correctly overridden by this manual setting. In Win7 it appears that the DHCP default gateway is retained and the manual one is added to the list, so that there are two, with the DHCP one having the primary metric.

    I have tried several ways to remove the DHCP default gateway, such as running the command:

        route delete 0.0.0.0 192.168.0.1

    Doing this from an administrator command prompt works, but it just resets on reboot. I've tried adding this command to the registry's Run section, but it seems to run as a non-administrator and therefore will not complete successfully.

    Is there any way to prevent this and force the manual default gateway to override the DHCP one? Or to remove the DHCP-assigned one automatically on boot/login? HELP! We CANNOT allow student computers to connect to the internet without going through the content filter.
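
    A hedged sketch of one way to script what is described above (an assumption on my part, not a fix confirmed anywhere in this post): remove the DHCP-supplied gateway and add the filter as a persistent route, run from a scheduled task set to run at startup with highest privileges, since the Run key runs without administrator rights as noted:

        route delete 0.0.0.0 192.168.0.1
        route -p add 0.0.0.0 mask 0.0.0.0 192.168.0.63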

    Read the article

  • Windows Server 2008 32 bit & windows 7 professional SP1

    - by Harry
    I'm testing my new Windows Server 2008 32-bit edition (2 servers) as servers and Windows 7 Professional 32-bit as a client. Let's say one server is a primary domain controller (PDC) and the other is a backup domain controller (BDC), to borrow the old terms for simplicity. All setup was done on the PDC and just replicates to the BDC. I didn't configure anything special; I just installed the servers with AD, DNS and DHCP, that's all. Then I used my Windows 7 Pro 32-bit machine to join the domain, which worked.

    After that I tried to change the password of a user (not administrator), but it always failed, saying the password didn't meet the password complexity setup, while in fact there is no such setting at all in the account policy, the Default Domain Policy or even the local policy. I tried explicitly disabling password complexity in the Default Domain Policy instead of leaving it not configured, then tested again: still failed. I found a suggestion to set the minimum and maximum password age to 0, but that also failed. I restarted the server and the client and tried changing the password again; it still failed with the same error about not meeting the password complexity setup.

    I looked in rsop.msc but didn't find anything. In fact, on another system with Windows Server 2003 and Windows XP, rsop.msc shows a setting under Computer Configuration - Windows Settings - Security Settings - Account Policies - Password Policy. I also have a Windows 7 Pro 32-bit machine in a Windows Server 2003 32-bit environment; I am unable to find the same setting using rsop there either, yet that Windows 7 machine works fine.

    Can anyone suggest what the problem is and what to do so I can change my Windows 7 Pro laptop password in a Windows Server 2008 environment? Also, is it a correct assumption that we should be able to see all the policy settings in Windows 7, whether it's in a Windows Server 2003 or 2008 environment? Thanks.
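
    A hedged sketch of how one might check what the domain is actually enforcing, run on the Windows 7 client or on the DC (standard commands; whether they expose the culprit here is an assumption):

        gpupdate /force
        net accounts /domain     (shows the effective minimum password length and age values)
        gpresult /r              (lists which GPOs were actually applied to the machine)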

    Read the article

  • Issues with sustained traffic with PFSense

    - by Farseeker
    Last week we had to replace our PFSense firewall because it had a catastrophic hardware failure. All but one of the NICs were taken out of the old server and put into the new one. The one NIC that was not moved was the LAN NIC, as it is on-board. The other NICs are all WAN connections and they must all be present (i.e. I can't disable one just for the sake of testing).

    After re-installing PFSense and restoring our backup of the configuration, everything came back online just fine. However, on the new hardware any download that takes longer than about 10 seconds just times out in the middle:

        Example 1: downloading from Microsoft.com runs at about 900k/sec and times out after about 10 seconds (thus, just under 10Mb of content).
        Example 2: downloading from cnet.com runs at about 300k/sec and times out after about 10 seconds (thus, about 3Mb of content).

    By "times out" I mean that the download just stops, and you have to pause/resume to get the next part, rinse and repeat until the download is complete. It's not consistent either: sometimes it's 10 seconds, sometimes it's 4 seconds, and sometimes you can't even load a heavy HTML page because the page never finishes.

    I assume this is most likely because PFSense does not like the onboard NIC, as this is the primary difference between the two servers. It's recognised as NFE0, and there's no room in the server for any more NICs; I don't have any dual-port NICs handy to experiment with a different LAN connection either. I've never had to troubleshoot this sort of issue before. Can anyone give me some pointers about where to start? Linux is not my forte so please be kind!

    Read the article

  • BGP Multihomed/Multi-location best practice

    - by Tom O'Connor
    We're in the process of designing a new iteration of our network in which we improve resiliency by adding a second datacentre with an identical configuration of servers to our primary location. To achieve network connectivity, we're looking into a couple of possible methods; see the earlier questions http://serverfault.com/questions/86736/best-way-to-improve-resilience and http://serverfault.com/questions/101582/dns-round-robin-failover-and-load-balancing. I'm pretty convinced that BGP is the right way to go about this, and this question is not about RRDNS.

        1) If we have 2 locations, do we announce the same IP address block from both locations?
        2) If we did this, but had a management SSH interface on x.x.x.50 in datacentre A and on x.x.x.150 in datacentre B, what is the best-practice mechanism for achieving this? If I were nearest to A, all my traffic would go to x.50, but if I attempted to connect to x.150 I'd not be able to connect, because that address wouldn't be valid at A, only at B.

    Is the best solution to announce 2 different netblocks, one at each location (bringing back the need for RRDNS), or to announce a single block and run some form of VPN between the two sites for management traffic?

    Read the article

  • DNS Does Not Register at Off-site Locations

    - by Russ Warren
    First of all, let me give you the specifics of our setup:

        - Windows Small Business Server 2008 domain with all applicable updates on the DC
        - The DC does DHCP for the main site
        - The DC does DNS for all sites
        - 3 sites, including our headquarters where the DC is located
        - All sites are connected through OpenVPN SSL tunnels terminated by an Untangle box at each site
        - The 2 remote sites use the Untangle box as a DHCP server for their subnet, which assigns the DC as the primary DNS server
        - A collection of Windows XP and Windows 7 workstations connected to the domain

    Here's the issue: all of the workstations at the main site register with the DNS server on the domain controller fine; as they grab an IP from the DHCP server, it updates the DNS server with the new host record. I have 2 systems (each at a different remote site) that fail to register with the DNS server. I've attempted the following troubleshooting steps:

        - Confirmed the network adapter is using the DC as a DNS server
        - Confirmed 2-way traffic is possible between the DC and the workstation
        - Verified the "Register with DNS server" setting is checked in the adapter properties
        - Ran ipconfig /registerdns and received no errors

    For the time being, I have set up a DHCP reservation for these systems and manually created a host record. This seems to work fine, but I need a solution for any new systems that go out there.

    Read the article

  • Ubuntu Server: Networking fails with MODPROBE option in /etc/network/interfaces ... ??

    - by neezer
    For some reason (which I haven't been able to determine yet), yesterday morning the networking service on our web server (running Ubuntu 8.04.2 LTS, hardy) wouldn't start, and our website went down. I noticed the following error message when trying to restart it:

        * Reconfiguring network interfaces...
        /etc/network/interfaces:6: option with empty value
        ifup: couldn't read interfaces file "/etc/network/interfaces"
        ...fail!

    Line 6 of /etc/network/interfaces is a MODPROBE command, which (I believe) loaded the ip_conntrack_ftp module so that I could use PASV on my FTP server (vsftpd). Here is the file, with the breaking modprobe commands commented out:

        # Used by ifup(8) and ifdown(8). See the interfaces(5) manpage or
        # /usr/share/doc/ifupdown/examples for more information.

        # The loopback network interface
        auto lo
        iface lo inet loopback
        #MODPROBE=/sbin/modprobe
        #$MODPROBE ip_conntrack_ftp
        pre-up iptables-restore < /etc/iptables.up.rules

        # The primary network interface
        # Uncomment this and configure after the system has booted for the first time
        auto eth0
        iface eth0 inet static
            address xxx.xxx.xxx.xxx
            netmask 255.255.255.0
            gateway xxx.xxx.xxx.1
            dns-nameservers xxx.xxx.xxx.4 xxx.xxx.xxx.5

    I've verified that there is a file in /sbin called modprobe. Like I said earlier, this setup had been working flawlessly until yesterday morning (though my bosses say that the site actually went down at 11 PM EST the previous night). Can anyone shed some light on (A) why this broke, and (B) how I can re-enable the ip_conntrack_ftp module?
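
    For what it's worth, interfaces(5) has no general variable-assignment syntax, so a line like MODPROBE=/sbin/modprobe is parsed as a stanza option with an empty value, which matches the error above. A hedged sketch of the same intent written as pre-up commands instead (an assumption about what the original lines were meant to do, not a verified fix):

        auto lo
        iface lo inet loopback
            pre-up /sbin/modprobe ip_conntrack_ftp
            pre-up iptables-restore < /etc/iptables.up.rules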

    Read the article

  • vmware vmdk disk problem

    - by dmtr
    I have a VMware ESXi 4 server and 2 storage servers (mounted via NFS). Between the storage servers (Fedora 14) there is a DRBD cluster (dual primary) with an OCFS2 filesystem; each server also has a local partition with an ext4 filesystem. Both are mounted via NFS on the ESXi server.

    When I copied a virtual machine (naturally it was powered off) from the ext4 partition to the OCFS2 partition, the vmdk's total file size was different but the md5sum was the same.

    On the ext4 partition:

        # ls -la
        total 28492228
        -rw------- 1 root root 42949672960 Jan 14 14:46 disk-flat.vmdk
        # md5sum disk-flat.vmdk
        0eaebe3138beb32f54ea5de6dfe5a987

    On the OCFS2 partition:

        # ls -la
        total 13974660
        -rw------- 1 root root 42949672960 Jan 14 16:16 disk-flat.vmdk
        # md5sum disk-flat.vmdk
        0eaebe3138beb32f54ea5de6dfe5a987

    When I power on the virtual machine from the OCFS2 partition it doesn't work. The virtual machine runs Windows, and it freezes after the Windows logo. From the ext4 partition the virtual machine works. I tested with Linux (created and installed on the ext4 partition and then copied to OCFS2) and the same problem appears. When I create a virtual machine directly on the OCFS2 partition, there are no problems. I also tried copying via the vSphere client, with the same problem. Any suggestions?
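
    One observation from the listings above: ls reports the same byte size (42949672960) in both places while the 'total' block counts differ, which is what you would see if the copy on one filesystem is sparse (or thin) and the other is not. A small diagnostic sketch to compare apparent size with allocated blocks (it only confirms sparseness; it does not by itself explain the boot hang):

        ls -ls disk-flat.vmdk                  (first column is allocated blocks)
        du -h disk-flat.vmdk                   (space actually allocated on disk)
        du -h --apparent-size disk-flat.vmdk   (the nominal 40 GB size)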

    Read the article

  • Exchange 2010 Mail Enabled Public Folder Unable to Receive External (anon) e-mail.

    - by Alex
    Hello All, I am having issues with my "Public Folders" mail-enabled folders receiving e-mail from external senders. The folder is set up with three Accepted Domains (names changed for privacy reasons):

        1 - domain1.com (primary & Authoritative)
        2 - domain2.com (Authoritative)
        3 - domain3.com (Authoritative)

    When someone attempts to send an e-mail to [email protected] from inside the organization, the e-mail is received and placed in the appropriate folder. However, when someone tries to send an e-mail from outside the organization (such as a Gmail account), the following error message is received:

        "Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 554 554 Recipient address rejected: User unknown (state 14)."

    When I try to send an e-mail to the same folder, using the same e-mail address above ([email protected]) but with domain2.com instead of domain3.com, it works as intended (both internal and external). I have checked, double-checked, and triple-checked my DNS settings, comparing those for domain2 and domain3, and they appear identical. I have tried recreating the folders in question, with the same results. I have also run Get-PublicFolderClientPermission "\Web Programs\folder" with the following results for user Anonymous:

        RunspaceId   : 5ff99653-a8c3-4619-8eeb-abc723dc908b
        Identity     : \Web Programs\folder
        User         : Anonymous
        AccessRights : {CreateItems}

    domain2.com and domain3.com are duplicates of each other, but only domain2.com works as intended. All other Exchange functions are working properly. If anyone out there has any suggestions, I would love to hear them; I've just hit a brick wall. Thanks for all your help in advance! --Alex
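
    A hedged diagnostic sketch for the Exchange Management Shell, on the assumption that the rejection means the domain3.com address was never actually stamped on the folder (for example, if the e-mail address policy only covers domain1/domain2); the folder path is copied from above:

        Get-AcceptedDomain | Format-List Name,DomainName,DomainType
        Get-MailPublicFolder "\Web Programs\folder" | Format-List PrimarySmtpAddress,EmailAddresses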

    Read the article

  • Windows 7 Multi-NIC woes

    - by Eric
    I have Comcast business Internet here, which gives me 5 static IPs. Most of the machines in my house connect to a router like in every other household; it has a 192.168.117.x subnet, DHCP server, etc., and all is well. However, a second machine on MY desk has a live Internet IP. Up until yesterday this machine was running XP Pro: the primary NIC was manually set to 192.168.117.241 with no gateway, and the secondary NIC was manually set to 173.x.x.171 with a gateway of 173.x.x.174. This worked just fine for years.

    Yesterday I replaced that XP machine with a brand new Windows 7 x64 box and configured it the same way: the onboard NIC was given a static 192.168.117.x address with no gateway, and the secondary NIC was given a live Internet IP address with the proper router, etc.

    Two problems. First, the internal network (192.168.117.x) is listed as a public network because there's no gateway, which means no homegroup, no file sharing, none of that; and from what I'm reading I can't change it. Second, the machine reports the "router" IP address as its address, and not the address it's supposed to. I'm ready to tear my hair out over this. Any ideas?

    Read the article

  • FreeBSD Secondary Group not allowing folder deletion

    - by Jarrod Juleff
    TL;DR: I have a user who is a member of a group as a secondary group. This user can delete files with 664 permissions as a secondary-group member, but not directories with permissions of 775.

    Details: I have a user, let's call him ftpuser, that I use to upload and download files to my dev box. The user's primary group is "ftp" and he is also in the group "www" as a secondary group. My web server runs as user www and group www, and I have proftpd (running as www and www) configured to drop all files into the needed directories as www and www (for file ownership), with permissions 664 on files and 775 on directories.

    My problem is (tried with 2 FTP clients) that the FTP client can delete the files, but not the folders; FileZilla returns 550 permission denied. The owner-only-can-delete flag is not set, and I've triple-checked that the permissions are indeed 775. It's driving me nuts to have to log into the server to manually delete folders every time. Some of the folders and files are created by one of my PHP scripts, but the permissions are set properly when I check the files' properties. Directory and file creation works phenomenally well; I can delete files, just not directories.

    Environment: FreeBSD 9.0 running in VirtualBox (32-bit all the way around), proftpd (running as www and www) as the FTP server (tried both Dreamweaver and FileZilla as clients), and a basic AMP setup (Apache, MySQL, and PHP).
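
    A hedged sketch of what I'd check first, under the assumption that the failure comes from the parent directory rather than the directory being deleted (removing a directory entry needs write and execute permission on its parent). Paths here are placeholders, not ones taken from the post:

        id ftpuser                        (confirm www really appears as a secondary group)
        ls -ld /path/to/webroot           (mode, owner and group of the parent directory)
        ls -ld /path/to/webroot/somedir   (the directory that can't be deleted)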

    Read the article

  • How do I set up DNS with nic.io to point to an AWS EC2 server?

    - by Chad Johnson
    I purchased a domain one week ago via nic.io and elected to provide my own DNS (because they provided no other option). I'm trying to point my .io domain at my EC2 server instance. I've allocated an Elastic IP and associated it with the instance, and I can SSH into the instance and reach port 80 via the IP address just fine. The IP is 54.235.201.241.

    nic.io support said the following: "You have selected to provide your own DNS and therefore if there is an issue with the set-up of the name servers you will need to contact your DNS provider."

    So I created a Hosted Zone via Route 53 in AWS, which created NS and SOA records. I then set the primary and secondary servers on nic.io's domain admin page to the SOA record domains, and set the optional servers to the NS domains. I did this two days ago, and I still can't access the server via the domain. I ran a DNS check here and am still not sure what I need to do: http://mydnscheck.com/?domain=chadjohnson.io&ns1=&ns2=&ns3=&ns4=&ns5=&ns6=.

    I have no idea what I'm supposed to do. Does anyone have any ideas?
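
    A hedged sketch of how to check the delegation from the outside (the awsdns name below is a placeholder for whichever four name servers the Route 53 hosted zone actually lists in its NS record set):

        dig NS chadjohnson.io +trace                   (follow the delegation down from the root and .io servers)
        dig A chadjohnson.io @ns-1234.awsdns-56.org    (placeholder: ask one of the zone's own name servers directly)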

    Read the article

  • New router messed up server 2003 setup...

    - by Aceth
    Hey, we were sent a new 2wire router today and configured it as best we can to match the old BT Voyager. We've also got X static IPs. We've managed to get our web server onto one of the new IPs, public-facing. We then use a hardware firewall, which sits in a DMZ, again with a different static IP; this firewall is the gateway for our internal LAN, with a few servers etc. behind it.

    The problem we're having is that only our PDC (primary domain controller, which also has Exchange 2003 on it) can't ping externally, not even an external IP. We've connected laptops to the 2wire router, they obtain a private IP (192.168.1.x), and they work fine, can ping etc. Our other servers with internal IPs behind the firewall can ping out fine. We've connected to the firewall's logging console and the pings from the server are allowed through, so it's fine there.

    The server in question is Windows Server 2003 R2 Enterprise SP2 with Exchange 2003 Server. It doesn't have the Windows firewall turned on, it has a static private IP, its gateway is pointing to the right one, and its external static IP is routing fine inwards. We've run out of ideas... help??

    Read the article

  • Zabbix Proxy not collecting data

    - by syntaxcollector
    Hi All, I have a working Zabbix 1.8.2 server collecting data for our office and our colo facility. However, the link between the colo and the office is flaky. What I'm trying to do is set up a proxy on the colo side with a 1-hour cache that relays the data to our primary server at the office. Our Zabbix server is compiled from source and uses a MySQL database.

    I followed the instructions in the Zabbix documentation to compile the proxy using an SQLite3 database, and added the proxy to Zabbix under Administration - DM - Proxies. The Zabbix server "sees" the proxy, because the "last seen" field is always under 60s. However, when I assign a colo host to the proxy, I stop receiving data from it. The colo host's zabbix_agentd.log file says this:

        29343:20100622:124847 Timeout while answering request
        29343:20100622:124847 Getting list of active checks failed. Will retry after 60 seconds

    The zabbix_proxy.log says this:

        2041:20100622:123131.760 Deleted 0 records from history [0.000994 seconds]
        2028:20100622:124131.671 Error while receiving answer from server [ZBX_TCP_READ() failed

    I am also unable to receive any SNMP data, which is more important to me than the Zabbix agent data. Has anyone had this problem before?

        Zabbix server OS: CentOS 5.4
        Zabbix server build: 1.8.2 from source
        Zabbix proxy OS: CentOS 5.4
        Zabbix proxy build: 1.8.2 from source

    P.S. The SQLite database on the Zabbix proxy never gets any data written to it; it is identical to when I created it from the blank schema in zabbix-1.8.2/create/schema. (Yes, I've checked the permissions.)
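
    A hedged sketch of the config relationships involved, with parameter names taken from the stock 1.8 sample files; whether any of these is the actual problem here is an assumption. In particular, once a host is monitored by the proxy, its agent has to request active checks from the proxy rather than from the office server:

        # zabbix_agentd.conf on a colo host (IP is a placeholder for the proxy)
        Server=10.0.0.5            # active-check requests and passive polls now go via the proxy
        Hostname=colo-host-01      # must match the host name configured in the frontend

        # zabbix_proxy.conf on the colo proxy
        Server=office.example.com  # placeholder for the office Zabbix server's address
        Hostname=colo-proxy        # must match the proxy name under Administration - DM - Proxies
        ProxyOfflineBuffer=1       # keep up to 1 hour of data while the link is down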

    Read the article

  • W2K INACCESSIBLE_BOOT_DEVICE, with System Commander

    - by Gary Kephart
    I have a system that originally had Win NT on it. I added System Commander (SC7) and then added W2K. The relevant partitions are:

        0 - Primary - MultiFAT (has Win NT, mapped to C:)
        1 - Extended - with many logical partitions:
            1.1 - NTFS, which has W2K and is mapped to D:
            1.2 - other logical partitions which are irrelevant to this

    D: was getting full; it needed room for virus definitions and Windows upgrades. In the past I had simply used SC7 to resize D: without problems, so I did it again this time. However, upon finishing I got the message "Unable to create partition", and it marked the partition as unformatted. I checked that the files on the disk were still there using SC7's Partition Explorer, and they were. I continued, and the system managed to boot up fine anyway.

    Then I rebooted the system again. This time I got the message "INACCESSIBLE_BOOT_DEVICE". I went back into SC7 and Partition Commander, which still said the partition was unformatted, although Partition Explorer still showed the files on it. I finally decided to resize the partition again, figuring this would force a rewrite of the partition information. That seemed to work, until I had to reboot again. Now I can't see the files using Partition Explorer, and the Resize button is disabled. What now?

    Read the article
