Search Results

Search found 9980 results on 400 pages for 'kernel extension'.


  • Monitoring tools that can take high rate and high volume?

    - by Jon Watte
    We're using Cacti with RRDTool to monitor and graph about 100,000 counters spread across about 1,000 Linux-based nodes. However, our current setup generally only gives us 5-minute graphs (with some data being minute-based); we often make changes where seeing feedback in "near real time" would be of value. I'd like approximately a week of 5- or 10-second data, a year of 1-minute data, and 5 years of 10-minute data. I have SSD disks and a dual-hexa-core server to spare. I tried setting up a Graphite/carbon/whisper server, and had about 15 nodes pipe to it, but it only has "average" for the retention function when promoting to older buckets. This is almost useless -- I'd like min, max, average, standard deviation, and perhaps "total sum" and "number of samples" or perhaps "95th percentile" available. The developer claims there's a new back-end "in beta" that allows you to write your own function, but this appears to still only do 1:1 retention (when saving older data, you really want the statistics calculated into many streams from a single input. Also, "in beta" seems a little risky for this installation. If I'm wrong about this assumption, I'd be happy to be shown my error! I've heard Zabbix recommended, but it puts data into MySQL or some other SQL database. 100,000 counters on a 5 second interval means 20,000 tps, and while I have an SSD, I don't have an 8-way RAID-6 with battery backup cache, which I think I'd need for that to work out :-) Again, if that's actually something that's not a problem, I'd be happy to be shown the error of my ways. Also, can Zabbix do the single data stream - promote with statistics thing? Finally, Munin claims to have a new 2.0 coming out "in beta" right now, and it boasts custom retention plans. However, again, it's that "in beta" part -- has anyone used that for real, and at scale? How did it perform, if so? I'm almost thinking about using a graphing front-end (such as Graphite) and rolling my own retention backend with a simple layer on top of mmap() and some stats. That wouldn't be particularly hard, and would probably perform very well, letting the kernel figure out the balance between frequency of flushing to disk and process operations. Any other suggestions I should look into? Note: it has to have shown itself able to sustain the kinds of data loads I'm suggesting above; if you can point at the specific implementation you're referencing, so much the better!
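
    For what it's worth, the "statistics per bucket" idea is expressible in plain RRDtool, which the existing Cacti setup already sits on: each consolidation function gets its own RRA when promoting to older buckets. A sketch only; the data-source definition, heartbeat and row counts below are illustrative, not a sizing recommendation:

        # 10 s raw data for ~1 week, then 1-minute and 10-minute buckets kept as AVERAGE, MIN and MAX
        rrdtool create counter.rrd --step 10 \
          DS:value:GAUGE:30:U:U \
          RRA:AVERAGE:0.5:1:60480 \
          RRA:AVERAGE:0.5:6:525600  RRA:MIN:0.5:6:525600  RRA:MAX:0.5:6:525600 \
          RRA:AVERAGE:0.5:60:262800 RRA:MIN:0.5:60:262800 RRA:MAX:0.5:60:262800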

    Read the article

  • Linux filesystem with inodes close on the disk

    - by pts
    I'd like to make ls -laR /media/myfs on Linux as fast as possible. I'll have 1 million files on the filesystem, 2TB of total file size, and some directories containing as many as 10000 files. Which filesystem should I use and how should I configure it? As far as I understand, the reason ls -laR is slow is that it has to stat(2) each inode (i.e. 1 million stat(2)s), and since inodes are distributed randomly on the disk, each stat(2) needs one disk seek. Here are some solutions I had in mind, none of which I am satisfied with: Create the filesystem on an SSD, because the seek operations on SSDs are fast. This wouldn't work, because a 2TB SSD doesn't exist, or it's prohibitively expensive. Create a filesystem which spans two block devices: an SSD and a disk; the disk contains file data, and the SSD contains all the metadata (including directory entries, inodes and POSIX extended attributes). Is there a filesystem which supports this? Would it survive a system crash (power outage)? Use find /media/myfs on ext2, ext3 or ext4, instead of ls -laR /media/myfs, because the former can take advantage of the d_type field (see the getdents(2) man page), so it doesn't have to stat. Unfortunately, this doesn't meet my requirements, because I need all file sizes as well, which find /media/myfs doesn't print. Use a filesystem, such as VFAT, which stores inodes in the directory entries. I'd love this one, but VFAT is not reliable and flexible enough for me, and I don't know of any other filesystem which does that. Do you? Of course, storing inodes in the directory entries wouldn't work for files with a link count of more than 1, but that's not a problem, since I have only a few dozen such files in my use case. Adjust some settings in /proc or sysctl so that inodes are locked in system memory forever. This would not speed up the first ls -laR /media/myfs, but it would make all subsequent invocations amazingly fast. How can I do this? I don't like this idea, because it doesn't speed up the first invocation, which currently takes 30 minutes. Also I'd like to lock the POSIX extended attributes in memory as well. What do I have to do for that? Use a filesystem which has an online defragmentation tool, which can be instructed to relocate inodes to the beginning of the block device. Once the relocation is done, I can run dd if=/dev/sdb of=/dev/null bs=1M count=256 to get the beginning of the block device fetched into the kernel's in-memory cache without seeking, and then the stat(2) operations would be fast, because they read from the cache. Is there a way to lock those inodes and/or blocks into memory once they have been read? Which filesystem has such a defragmentation tool?
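
    The "keep inodes in memory" idea maps onto one existing knob; a sketch, not a tested recipe for a million inodes: vm.vfs_cache_pressure biases reclaim away from the dentry and inode caches, and 0 tells the kernel never to reclaim them (which can exhaust memory under pressure, so use with care):

        # bias the kernel toward retaining dentry/inode caches (0 = never reclaim)
        sysctl -w vm.vfs_cache_pressure=0
        # warm the caches once; subsequent runs should then be served from RAM
        ls -laR /media/myfs > /dev/null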

    Read the article

  • Broadcom HT1100 SATA controller not working properly with 1TB drives

    - by Jeff C
    I've been using RHEL distros for several years and always managed to find the answers until now. I know this is more of a hardware issue, but I've been working on this for over a week and trust Linux and the IT community to help more than HP. I have CentOS 6.3 installed on an HP ProLiant DL145 G3 server with the Broadcom HT1100 IO controller and ServerWorks SATA Controller MMIO BIOS v3.0.0015.6 firmware. This controller does not fully support large drives. Here's what I've tried and the results: Stock setup - freezes on the ServerWorks POST screen. I can't even enter CMOS without disconnecting the drives. If I simply disconnect the SATA cables before it gets to the ServerWorks screen and reconnect them afterwards, I can boot from a CD, USB or PXE fine. However, fiddling with cables at every boot isn't practical. If I enter the BIOS config I can set it not to try booting the drives but leave the controller enabled. This lets me boot normally, but the drives are not visible in the OS (live CDs or the USB-installed system). I used method #2 to install and update CentOS. I have the /boot partition on a USB drive (everything else is on the SATA drives in software RAID1), hoping that would work around the issue, but I get this: Kernel panic - not syncing:Attempted to kill init! Pid: 1, comm: init Not tainted 2.6.32-279.9.1.el6.x86_6 #1 Call Trace: [<ffffffff814fd6ba>] ? panic+0xa0/0x168 [<ffffffff81070c22>] ? do_exit+0x862/0x870 [<ffffffff8117cdb5>] ? fput+0x25/0x30 [<ffffffff81070c88>] ? do_group_exit+0x58/0xd0 [<ffffffff81070d17>] ? sys_exit_group+0x17/0x20 [<ffffffff8100b0f2>] ? system_call_fastpath+0x16/0x1b panic occured, switching back to text console I'm sure it should be possible to talk to the drives without the BIOS boot check, since the BIOS doesn't see them in method #2 either (they're disconnected when it checks), but Linux sees them fine. If anyone could help me figure out how, I would greatly appreciate it! The other possible option I've come across is a complex firmware update. Tyan has a few boards on their website with the HT1100 and a ServerWorks v3.0.0015.7 update which says "adds support for TB drives" in the release notes. If someone could help me get the Tyan SATA firmware into the HP ROM file so I could just reflash, that would also be very much appreciated. Thanks for any help you guys can offer!

    Read the article

  • Nvidia Drivers on Debian / Lenny (Stable) -> Installation successful -> Monitor goes black

    - by David
    I have successfully installed the proprietary drivers for my nvidia (geforce 7300 gt) graphics card on debian/lenny. I know its not the best way to chose for driver installation ( see this link: http://wiki.debian.org/NvidiaGraphicsDrivers#non-freedrivers ). but the two ways seem to be possible for me (nvidia-kernel module compilation). Now the problem is that the monitors gets black, the power light starts blinking after i launch the x-server. Have a short look a the logs (output truncated from /var/log/Xorg.0.log): (II) Setting vga for screen 0. (**) NVIDIA(0): Depth 24, (--) framebuffer bpp 32 (==) NVIDIA(0): RGB weight 888 (==) NVIDIA(0): Default visual is TrueColor (==) NVIDIA(0): Using gamma correction (1.0, 1.0, 1.0) (**) Jul 28 17:10:11 NVIDIA(0): Enabling RENDER acceleration (II) Jul 28 17:10:11 NVIDIA(0): Support for GLX with the Damage and Composite X extensions is (II) Jul 28 17:10:11 NVIDIA(0): enabled. (II) Jul 28 17:10:11 NVIDIA(0): NVIDIA GPU GeForce 7300 GT (G73) at PCI:1:0:0 (GPU-0) (--) Jul 28 17:10:11 NVIDIA(0): Memory: 262144 kBytes (--) Jul 28 17:10:11 NVIDIA(0): VideoBIOS: 05.73.22.25.00 (II) Jul 28 17:10:11 NVIDIA(0): Detected PCI Express Link width: 16X (--) Jul 28 17:10:11 NVIDIA(0): Interlaced video modes are supported on this GPU (--) Jul 28 17:10:11 NVIDIA(0): Connected display device(s) on GeForce 7300 GT at PCI:1:0:0: (--) Jul 28 17:10:11 NVIDIA(0): Samsung SyncMaster (CRT-0) (--) Jul 28 17:10:11 NVIDIA(0): Samsung SyncMaster (DFP-0) (--) Jul 28 17:10:11 NVIDIA(0): Samsung SyncMaster (CRT-0): 400.0 MHz maximum pixel clock (--) Jul 28 17:10:11 NVIDIA(0): Samsung SyncMaster (DFP-0): 165.0 MHz maximum pixel clock (--) Jul 28 17:10:11 NVIDIA(0): Samsung SyncMaster (DFP-0): Internal Single Link TMDS (II) Jul 28 17:10:11 NVIDIA(0): Assigned Display Device: CRT-0 (==) Jul 28 17:10:11 NVIDIA(0): (==) Jul 28 17:10:11 NVIDIA(0): No modes were requested; the default mode "nvidia-auto-select" (==) Jul 28 17:10:11 NVIDIA(0): will be used as the requested mode. (==) Jul 28 17:10:11 NVIDIA(0): (II) Jul 28 17:10:11 NVIDIA(0): Validated modes: (II) Jul 28 17:10:11 NVIDIA(0): "nvidia-auto-select" (II) Jul 28 17:10:11 NVIDIA(0): Virtual screen size determined to be 1280 x 1024 (--) Jul 28 17:10:11 NVIDIA(0): DPI set to (85, 86); computed from "UseEdidDpi" X config (--) Jul 28 17:10:11 NVIDIA(0): option (==) Jul 28 17:10:11 NVIDIA(0): Enabling 32-bit ARGB GLX visuals. (--) Depth 24 pixmap format is 32 bpp Here is the complete /etc/X11/xorg.conf file as generated by nvidia-xconfig: # nvidia-xconfig: X configuration file generated by nvidia-xconfig # nvidia-xconfig: version 256.35 (buildmeister@builder101) Wed Jun 16 19:25:59 PDT 2010 Section "ServerLayout" Identifier "Layout0" Screen 0 "Screen0" InputDevice "Keyboard0" "CoreKeyboard" InputDevice "Mouse0" "CorePointer" EndSection Section "Files" EndSection Section "Module" Load "dbe" Load "extmod" Load "type1" Load "freetype" Load "glx" EndSection Section "InputDevice" # generated from default Identifier "Mouse0" Driver "mouse" Option "Protocol" "auto" Option "Device" "/dev/psaux" Option "Emulate3Buttons" "no" Option "ZAxisMapping" "4 5" EndSection Section "InputDevice" # generated from default Identifier "Keyboard0" Driver "kbd" EndSection Section "Monitor" Identifier "Monitor0" VendorName "Unknown" ModelName "Unknown" Hor

    Read the article

  • How to configure ASP.NET MVC 3 on IIS 6 (Windows 2003 R2)

    - by Nedcode
    I am getting 403 Directory Listing Denied for the root and 404 for an action that I know should exist. Background: I built and deployed an ASP.NET MVC 2 application a long time ago. Later I upgraded it to MVC 3 and it is still working with no configuration changes. Setting it up on a Windows 2003 R2 (Standard) server initially was a pain, but after a couple of days (yes, days) of struggling it started working. Now I have to do the same with the same application on a different server (2003 R2 Standard again) on a different network. .NET 4 is installed and allowed, ASP.NET MVC 3 is also installed, and by default IIS is set to use .NET 4. I verified that the aspnet_isapi.dll used in the application extensions is from the version 4.0.30319 .NET assemblies folder. I also added the wildcard mapping to aspnet_isapi.dll and unchecked "Verify that file exists". Under Directory Security in Authentication Methods I have disabled anonymous access and enabled Integrated Windows authentication (the same as on the server where it works). I have copied the same web.config with the <authentication mode="Windows" /> <authorization> <deny users="?" /> </authorization> I have set Read & Execute, List Folder Contents, and Read for the NetworkService account (under which the app pool is running). I have also set the same for the Network account, IIS_WPG, ASPNET and IUSR_MachineName. I do not have an EnableExtensionlessUrls registry value, but even if I create it and set it to true or false it does not help. I also tried http://haacked.com/archive/2010/12/22/asp-net-mvc-3-extensionless-urls-on-iis-6.aspx and it did not help. But I kept getting 403 Directory Listing Denied for the root and 404 for an action that I know should exist. The Web Platform Installer was then used to re-install and possibly update .NET, ASP.NET etc. I then noticed IIS was reset to defaults, so I added the wildcard mapping again. No luck, still 403. I exported configuration files from the working server setup and created a new default app pool and a new default website using those configurations. Still I get 403 Directory Listing Denied for / and 404 for any action I try.

    Read the article

  • How did what appears to be a virus get on my computer? (explanation of situation enclosed)

    - by Massimo
    My system is Windows XP SP3, updated with the latest patches. The PC is connected to a Cisco 877 ADSL router, which does NAT from the internal network to its single static public IP address. There are no forwarded ports, and the router's management console can only be accessed from the inside. I was doing two things: working on a remote office machine via VPN and browsing some web pages on the Cisco web site. The remote network is absolutely safe (it's a lab network, four virtual servers, no publicly accessible services and no users at all; also, none of what I'm going to describe ever happened there). The Cisco web site... well, I suppose it is quite safe, too. Suddenly, something happened. Strange popups appeared everywhere; programs claiming to be "antimalware", "antispyware" and so on began auto-installing; fake Windows Update and Security Center icons popped up in the system tray. svchost.exe began crashing repeatedly. Then, finally, after some minutes of this... BSOD. And, upon rebooting, BSOD again. Even in safe mode. Ok, that was obviously some virus/trojan/whatever. I had to install a new copy of Windows on another partition to clean things up. I found strange executables, services and DLLs almost everywhere. Amongst other things, user32.dll and ndis.sys had been replaced. A fake program called "Antimalware Doctor" had been installed. There were services with completely random names or even GUIDs (!), and also ones called "IpSect" and "Darkness". There were executable files without an .exe extension. There were even two boot-class drivers, which I'm quite sure are the ones that finally caused the system to crash. A true massacre. Ok, now the questions: What the hell was that?!? It was something more than a simple virus! How did it manage to attack my computer, given that I am behind a firewall and was not doing anything even potentially harmful on the web at the time?

    Read the article

  • "log on as a batch job" user rights removed by what GPO?

    - by LarsH
    I am not much of a server administrator, but get my feet wet when I have to. Right now I'm running some COTS software on a Windows 2008 Server machine. The software installer creates a few user accounts for running its processes, and then gives those users the right to "log on as a batch job". Every so often (e.g. yesterday at 2:52pm and this morning at 7:50am), those rights disappear. The software then stops working. I can verify that the user rights are gone by using secedit /export /cfg e:\temp\uraExp.inf /areas USER_RIGHTS and I have a script that does this every 30 seconds and logs the results with a timestamp, so I know when the rights disappear. What I see from the export is that in the "good" state, i.e. after I install the software and it's working correctly, the line for SeBatchLogonRight from the secedit export includes the user accounts created by the software. But every few hours (sometimes more), those user accounts are removed from that line. The same thing can be seen by using the GUI tool Local Security Policy > Security Settings > Local Policies > User Rights Assignment > Log on as a batch job: in the "good" state, that policy includes the needed user accounts, and in the bad state, the policy does not. Based on the above-mentioned logging script and the timestamps at which the user rights are being removed, I can see clearly that some GPOs are causing the change. The GPO Operational log shows GPOs being processed at exactly the right times. E.g.: Starting Registry Extension Processing. List of applicable GPOs: (Changes were detected.) Local Group Policy I have run GPOs on demand using gpupdate /force, and was able to verify that this caused the User Rights to be removed. We have looked over local group policies till our eyes are crossed, trying to figure out which one might be stripping these User Rights to "log on as a batch job." We have not configured any local group policies on this machine, that we know of; so is there a default local group policy that might typically do such a thing? Are there typical domain policies that would do this? I have been working with our IT staff colleagues to troubleshoot the problem, but none of them are really GPO experts... They wear many hats, and they do what they need to do in order to keep most things running. Any suggestions would be greatly appreciated!

    Read the article

  • Second network card configuration not working.

    - by Sebas
    I have 4 servers running Centos 5. All of them have two ethernet network cards. I have configured 192.168.1.x IP addresses on their eth0 card. They are all connected to the same switch using their eth0 card and they are all working. I have configured 10.72.11.x IP addresses on their eth1 card.They are all connected to the same switch - a different one from the switch used with eth0 card - using their eth1 card and they are NOT all working. Their configuration files is like: DEVICE=eth1 BOOTPROTO=static IPADDR=10.72.11.236 BROADCAST=10.72.11.191 NETMASK=255.255.255.192 NETWORK=10.72.11.128 HWADDR=84:2B:2B:55:4B:98 IPV6INIT=yes IPV6_AUTOCONF=yes ONBOOT=yes The interfase is starting and configured as I need. [root@sql1 network-scripts]# ifconfig eth0 Link encap:Ethernet HWaddr 84:2B:2B:55:4B:97 inet addr:192.168.1.105 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::862b:2bff:fe55:4b97/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:2981 errors:0 dropped:0 overruns:0 frame:0 TX packets:319 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:386809 (377.7 KiB) TX bytes:66134 (64.5 KiB) Interrupt:36 Memory:da000000-da012800 eth1 Link encap:Ethernet HWaddr 84:2B:2B:55:4B:98 inet addr:10.72.11.236 Bcast:10.72.11.191 Mask:255.255.255.192 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) Interrupt:48 Memory:dc000000-dc012800 I also added a route-eth1 file that looks like: 10.0.0.0/8 via 10.72.11.254 Routing looks fine to me: [root@sql1 network-scripts]# netstat -rn Kernel IP routing table Destination Gateway Genmask Flags MSS Window irtt Iface 10.72.11.192 0.0.0.0 255.255.255.192 U 0 0 0 eth1 192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth1 10.0.0.0 10.72.11.254 255.0.0.0 UG 0 0 0 eth1 0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 eth0 But I cannot ping one server from the other. [root@sql1 network-scripts]# ping 10.72.11.235 PING 10.72.11.235 (10.72.11.235) 56(84) bytes of data. From 10.72.11.236 icmp_seq=1 Destination Host Unreachable From 10.72.11.236 icmp_seq=2 Destination Host Unreachable From 10.72.11.236 icmp_seq=3 Destination Host Unreachable From 10.72.11.236 icmp_seq=4 Destination Host Unreachable From 10.72.11.236 icmp_seq=5 Destination Host Unreachable From 10.72.11.236 icmp_seq=6 Destination Host Unreachable ^C --- 10.72.11.235 ping statistics --- 7 packets transmitted, 0 received, +6 errors, 100% packet loss, time 6033ms , pipe 3 What am I doing wrong?
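
    Note that the quoted ifcfg-eth1 is internally inconsistent: IPADDR=10.72.11.236 with NETMASK=255.255.255.192 puts the host in 10.72.11.192/26 (broadcast .255), while the file declares NETWORK=10.72.11.128 and BROADCAST=10.72.11.191, which describe a different /26. A self-consistent sketch for 10.72.11.236/26, assuming that /26 really is the intended subnet:

        # /etc/sysconfig/network-scripts/ifcfg-eth1 -- sketch, assuming 10.72.11.192/26 is the intended subnet
        DEVICE=eth1
        BOOTPROTO=static
        IPADDR=10.72.11.236
        NETMASK=255.255.255.192
        NETWORK=10.72.11.192
        BROADCAST=10.72.11.255
        ONBOOT=yes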

    Read the article

  • Error sending email to alias with Postfix

    - by Burning the Codeigniter
    I'm on Ubuntu 11.04 64bit. I'm trying to set up Postfix on my VPS, which has been configured but when I send an email to an alias e.g. [email protected] it will send it to [email protected]. Now when I sent the email from my GMail account, I got this returned: Delivery to the following recipient failed permanently: [email protected] Technical details of permanent failure: Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 550 550 #5.1.0 Address rejected [email protected] (state 14). ----- Original message ----- DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:date:message-id:subject:from:to:content-type; bh=R1WtjVRWywfkWCR2g4QKbSjAfUaU9DAAMKbg9UAWqvs=; b=FiSfdhEaV4pEq/76ENlH4tvOgm35Ow3ulRg06kDYrIQTaDf3eOEgfSEgH25PjZuAj/ 7Hg1CL++o6Rt/tl80ZiR2AWekhA0zIn2JkqE7KssMG7WbBmMmbf8V9KDo2jOw+mZv+C/ KDKsQ65AudBZ/NYLDDpTT7MkKf8DzqeGCKj9MAct6sHDoC0wCciXYxNfTf+MKxrZvRHQ oICTkH5LOugKW9wEjPF2AoO8X0qgYmTLYeSUtXxu46VeNKRBGmdRkkpPOoJlQN9ank7i SW6kU6M9bk2bYOgKwV/YPsaantmYlu1XdmYx+kWeJkNJAyYOfXfZZ8WUJhbbFFD9bZCi m/hw== MIME-Version: 1.0 Received: by 10.101.3.5 with SMTP id f5mr783908ani.86.1334247306547; Thu, 12 Apr 2012 09:15:06 -0700 (PDT) Received: by 10.236.73.136 with HTTP; Thu, 12 Apr 2012 09:15:06 -0700 (PDT) Date: Thu, 12 Apr 2012 17:15:06 +0100 Message-ID: <CAN+9S2aB=xjiDxVZx3qYZoBMFD4XuadUyR_3OYWaxw1ecrZmOQ@mail.gmail.com> Subject: Test Email From: My Name <[email protected]> To: [email protected] Content-Type: multipart/alternative; boundary=001636c597eabfd21504bd7da8fd Now that I don't understand why it isn't working, my aliases are set up correctly - I see no error messages being produced in /var/log/mail.log or any other mail logs, which makes it harder for me to debug. This is my postfix configuration (postconf -n): alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases append_dot_mydomain = no biff = no config_directory = /etc/postfix inet_interfaces = all mailbox_command = procmail -a "$EXTENSION" mailbox_size_limit = 0 mydestination = $mydomain, $myhostname, localhost, localhost.localdomain, localhost mydomain = domain.com myhostname = localhost mynetworks = 192.168.1.0/24 127.0.0.0/8 readme_directory = no recipient_delimiter = + smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtpd_use_tls = yes Does anyone know how to solve this specific issue?
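
    For reference, a minimal sketch of how local aliases are normally wired up with this kind of Postfix setup (alias_maps = hash:/etc/aliases); the alias and mailbox names here are made up:

        # /etc/aliases -- hypothetical entry mapping an alias to a real mailbox
        #   sales: realuser
        # rebuild the alias database and reload Postfix after editing
        newaliases
        postfix reload
        # check how Postfix resolves the alias
        postalias -q sales hash:/etc/aliases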

    Read the article

  • How to create NTFS partition in Linux to install Windows 7 from USB?

    - by Michal Stefanow
    I messed up my computer and need help. Generally: install Windows 7 from USB. Problem: "setup was unable to create a new system partition" When the first attempt to install Windows 7 failed I tried a Linux live USB, installed the distro to the HDD, and erased all the existing partitions. Current state (fdisk -l): [writing from other computer so no copy and paste] /dev/sda1 305GB Linux /dev/sda2 7GB Extended /dev/sda5 7GB Linux swap / Solaris To create a new NTFS partition: fdisk /dev/sda n (for new) p (for primary) 3 (for partition number) "No free sectors available" The whole HDD was formatted a couple of minutes before, so there is a lot of free space, but how do I resize a partition? I cannot find an option for resizing in man fdisk. Some people say I should use gparted, but my distro does not contain this package. And my distro doesn't include wireless drivers, so I have serious problems with downloading stuff. I also tried using cfdisk, but any command results in: "cfdisk bad primary partition 1 partition ends in the final partial cylinder" I also tried removing partition 1 and then creating a new one (so there is no "no free sectors"). I'm receiving a warning: "Re-reading the partition table failed with error 16: Device or resource busy. The kernel still uses the old table. The new table will be used at the next reboot." After restarting: "grub rescue, no known filesystem" It may indicate that some changes have been made, BUT when running the Windows 7 installer I got another error: "Windows cannot be installed to Disk 0 Partition 1" More detailed: "Windows cannot be installed to this hard disk space. Windows must be installed to a partition formatted as NTFS." So I formatted the drive using the Windows 7 installer, BUT this time yet another error: "Setup was unable to create a new system partition or locate an existing system partition. See the setup log files for more information" Apparently I cannot access the logs (how?) and I am back to the drawing board with my live USB (this time showing the partition as HPFS/NTFS). Any suggestions on how to install Windows 7? Should I reinstall Linux to the HDD, erase the existing partitions once again, and use parted rather than gparted (parted is included in the distro)? Or maybe I should create another bootable USB such as PartedMagic to painlessly create partitions? I just want to install Windows 7 from USB, my laptop is semi-operational and I am ready to receive some help regarding fdisk and creating NTFS partitions. UPDATE: I did as suggested (removed all the partitions) and tried to install into unallocated space. Tried to create a new partition and format it. Same error: "setup was unable to create a new system partition" Came to the conclusion it may have something to do with the TrueCrypt I recently installed. Right now I am trying to FIX MBR (as I have no way to create a rescue disc without an optical drive)
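
    For reference, a sketch of creating and formatting an NTFS partition from a live Linux environment with parted and mkfs.ntfs; the device name /dev/sda and the "use the whole disk" layout are assumptions, and this destroys everything on the disk:

        # assumes /dev/sda is the target disk and everything on it can be wiped
        parted /dev/sda mklabel msdos
        parted -a optimal /dev/sda mkpart primary ntfs 1MiB 100%
        mkfs.ntfs -f /dev/sda1        # -f: quick format (ntfsprogs/ntfs-3g)
        partprobe /dev/sda            # re-read the partition table without rebooting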

    Read the article

  • Primary/secondary ethernet interfaces via NetworkManager in Ubuntu 9.10

    - by Josh
    I have an Ubuntu 9.10 machine with three ethernet interfaces, eth0, eth1 and eth2. eth2 is connected to a private network. eth0 and eth1 are connected to two different LANs. Either one will provide access to the internet. All three networks have DHCP servers. Using Ubuntu's default settings (and GNOME), when I boot up, all the interfaces are active and my system gets three IP addresses. However, any attempt to access the internet results in connection timeouts and other weirdness. I suspect that traffic is going out on one NIC (like eth0) and coming back in on another (like eth1). I'm not sure what's going on. The only way I can access the internet at the moment is to bring two of the devices down with ifdown. How can I configure eth0 as my primary interface, so all traffic goes out by default on that interface, while keeping the other two active? Also, I want to make sure Avahi broadcasts properly on all three IPs so that the computers on the LAN of eth1 can still connect to myHostname.local... EDIT: Here's my routing table: Kernel IP routing table Destination Gateway Genmask Flags MSS Window irtt Iface 172.16.151.0 0.0.0.0 255.255.255.0 U 0 0 0 eth2 172.16.30.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 10.1.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth1 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth1 0.0.0.0 172.16.30.2 0.0.0.0 UG 0 0 0 eth0 0.0.0.0 10.1.0.1 0.0.0.0 UG 0 0 0 eth1 I want the 172.16.30.0 network to be the primary one and the 10.1.0.0 network to be the secondary one. EDIT2: My nameservers are also incorrect. It seems like Ubuntu is bringing the networks up in order, eth0, then 1, then 2, and the DHCP information from eth1 is overriding eth0, and eth2 is overriding eth1. How can I reverse this so the DHCP information from eth0 is the "master"? EDIT3: This seems to be an issue with Gnome's NetworkManager.
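
    A rough sketch of forcing eth0 to own the default route at runtime, using the gateway addresses visible in the routing table above; a persistent fix would belong in NetworkManager or /etc/network/interfaces rather than ad-hoc commands:

        # drop the default route DHCP installed on eth1, keep the eth0 one
        sudo ip route del default via 10.1.0.1 dev eth1
        # or keep both but prefer eth0 by giving eth1 a worse metric
        sudo ip route add default via 10.1.0.1 dev eth1 metric 100
        # confirm which default route remains active
        ip route | grep ^default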

    Read the article

  • NIC is receiving, but not transmitting at all?

    - by Shtééf
    I'm trying to fix a very strange problem remotely on a machine at a customer site. The machine is a Dell PowerEdge, I believe a 1950 (haven't verified, but the lspci output matches specs I found.) The machine has two similar NICs, identified as Broadcom Corporation NetXtreme II BCM5708 Gigabit Ethernet (rev 12) by lspci, and using the bnx2 driver. (I suspect these are on-board and on the same controller, which is what I'm accustomed to for this type of machine.) The primary interface eth0 works perfectly, and is in fact how I am ssh'd in. However, the secondary interface eth1 is not transmitting. I can see this in ifconfig output, for example, where the TX field is always 0. However, it is receiving, and tcpdump shows ARP requests coming from the ISP's gateway on the other side. The interface is physically connected to a Siemens BSTU4 modem, configured by the ISP. The link is properly set to 10MBps and full duplex, without negotation, as the ISP requested. A small /30 subnet is configured. For the sake of anonimity, let's say the machine is 3.3.3.2/30, and the ISP's gateway .1. The machine has no firewall settings whatsoever. Even running something like arping -I eth1 3.3.3.1, and running tcpdump alongside, shows no traffic whatsoever being transmitted on the interface. (But the other side keeps steadily sending ARP requests, and that is all that can be seen.) What could be causing this? Here's some output, anonymized, which may hopefully help: $ ethtool eth1 Settings for eth1: Supported ports: [ TP ] Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full Supports auto-negotiation: Yes Advertised link modes: Not reported Advertised auto-negotiation: No Speed: 10Mb/s Duplex: Full Port: Twisted Pair PHYAD: 1 Transceiver: internal Auto-negotiation: off Supports Wake-on: d Wake-on: d Link detected: yes $ ip link show eth1 3: eth1: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000 link/ether 00:15:c5:xx:xx:xx brd ff:ff:ff:ff:ff:ff $ ip -4 addr show eth1 3: eth1: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000 inet 3.3.3.2/30 brd 3.3.3.3 scope global eth1 $ ip -4 route show match 3.3.3.0/30 3.3.3.0/30 dev eth1 proto kernel scope link src 3.3.3.2 default via 10.0.0.5 dev eth0
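
    A few generic low-level checks that might narrow down where the transmissions die; these are standard ethtool/iproute2 calls, nothing specific to the bnx2 driver:

        # driver-level statistics: look for tx errors, discards or carrier problems
        ethtool -S eth1 | grep -iE 'tx|err|drop|carrier'
        # kernel-level interface counters, including detailed TX errors and drops
        ip -s -s link show eth1
        # confirm the route toward the gateway really points at eth1
        ip route get 3.3.3.1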

    Read the article

  • Routing table with two NIC adapters in libvirt/KVM

    - by lzap
    I created a virtual NAT network (192.168.100.0/24 network) in my libvirt and new guest with two interfaces - one in this network, one as bridged (10.34.1.0/24 network) to the local LAN. The reason for that is I need to have my own virtual network for my DHCP/TFTP/DNS testing and still want to access my guest externally from my LAN. On both networks I have working DHCP, both giving them IP addresses. When I setup NAT port forwarding (e.g. for ssh), I can connect to the eth0 (virtual network), everything is fine. But when I try to access the eth1 via bridged interface, I have no response. I guess I have problem with my routing table - outgoing packets are routed to the virtual NAT network (which has access to the machine I am connecting from - I can ping it). But I am not sure if this setup is correct. I think I need to add something to my routing table. # ifconfig eth0 Link encap:Ethernet HWaddr 52:54:00:B4:A7:5F inet addr:192.168.100.14 Bcast:192.168.100.255 Mask:255.255.255.0 inet6 addr: fe80::5054:ff:feb4:a75f/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:16468 errors:0 dropped:27 overruns:0 frame:0 TX packets:6081 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:22066140 (21.0 MiB) TX bytes:483249 (471.9 KiB) Interrupt:11 Base address:0x2000 eth1 Link encap:Ethernet HWaddr 52:54:00:DE:16:21 inet addr:10.34.1.111 Bcast:10.34.1.255 Mask:255.255.255.0 inet6 addr: fe80::5054:ff:fede:1621/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:34 errors:0 dropped:0 overruns:0 frame:0 TX packets:189 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:4911 (4.7 KiB) TX bytes:9 # route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 192.168.100.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 10.34.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 169.254.0.0 0.0.0.0 255.255.0.0 U 1002 0 0 eth0 169.254.0.0 0.0.0.0 255.255.0.0 U 1003 0 0 eth1 0.0.0.0 192.168.100.1 0.0.0.0 UG 0 0 0 eth0 Network I am trying to connect from is different than network the hypervisor is connected to: 10.36.0.0. But it is accessible from that network. So I tried to add new route rule: route add -net 10.36.0.0 netmask 255.255.0.0 dev eth1 And it is not working. I thought setting correct interface would be sufficient. What is needed to get my packets coming through?
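
    The usual pattern for "answer on the interface the packet came in on" is source-based policy routing rather than an extra plain route; a sketch only, where the table number 100 and the bridged-LAN gateway 10.34.1.254 are assumptions to be replaced with the real values:

        # route replies sourced from the bridged address through eth1's gateway
        ip route add default via 10.34.1.254 dev eth1 table 100   # 10.34.1.254 is an assumed gateway
        ip rule add from 10.34.1.111/32 table 100
        ip route flush cache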

    Read the article

  • Gentoo box can't cURL or ping after restarting net.eth1

    - by Curlybraces
    Hi all, the following is completely baffling me. We currently have a gentoo box which acts as our LAMP, DNS, DHCP server. This is assigned a static IP on the network. This server is connected directly to the internet via a BT BusinessHub Router. The server is also connected to a patch panel/switch port which connects the remaining office (around 10 PC's) to the server. Everything has been plain sailing until the other day when the server was restarted. For some reason now only portions of network accessibility is available depending on which ethernet device was last restarted. Restarting net.eth0 allows the office server to cURL, ping, etc but stops all networked PC's from accessing the internet. Then restarting net.eth1 restores all internet to the network but stops the server from curling, pinging, etc again. However, even when the server can't ping, curl, etc, I can still remote SSH and remote MySQL connect from the server command line to other external servers that we own. Here's my route map (router is 192.168.1.254): Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth1 127.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 lo 0.0.0.0 192.168.1.254 0.0.0.0 UG 0 0 0 eth1 Here's my /etc/conf.d/net: iface_eth0="192.168.1.99 broadcast 192.168.1.255 netmask 255.255.255.0" iface_eth1="dhcp" None of the above have ever been changed however. Things have just ceased to operate correctly, which makes me think it's a freshly added Iptables rule. Here's the Iptables Filter table: Chain INPUT (policy ACCEPT) target prot opt source destination DROP tcp -- ##.##.##.## anywhere tcp dpt:ssh ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED ACCEPT all -- anywhere anywhere ACCEPT tcp -- anywhere anywhere tcp dpt:2199 ACCEPT tcp -- anywhere anywhere tcp dpt:3199 ACCEPT tcp -- ##.###.###.## anywhere tcp dpt:http ACCEPT tcp -- ###.###.##.## anywhere tcp dpt:2199 ACCEPT tcp -- ##.###.###.### anywhere tcp dpt:http ACCEPT tcp -- ##.###.##.## anywhere tcp dpt:http ACCEPT tcp -- ##.###.###.### anywhere tcp dpt:3128 ACCEPT udp -- ##.###.###.### anywhere udp dpt:3128 ACCEPT tcp -- ##.###.###.### anywhere tcp dpt:http ACCEPT tcp -- ##.###.###.### anywhere tcp dpt:https Chain FORWARD (policy ACCEPT) target prot opt source destination ACCEPT all -- anywhere ##.###.###.## DROP all -- anywhere ##.###.###.## ACCEPT all -- anywhere anywhere state NEW,ESTABLISHED Chain OUTPUT (policy ACCEPT) target prot opt source destination ACCEPT udp -- anywhere anywhere udp spt:2199 ACCEPT udp -- anywhere anywhere udp spt:4817 ACCEPT udp -- anywhere anywhere udp spt:4819 ACCEPT udp -- anywhere anywhere udp spt:3199 Help gratefully appreciated.
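
    One common culprit when two NICs sit on the same 192.168.1.0/24 subnet is Linux's default ARP behaviour ("ARP flux"), which can produce exactly this kind of asymmetric breakage; a hedged sketch of things to inspect or tighten, not a guaranteed fix for this box:

        # which interface currently owns the default route?
        ip route | grep ^default
        # make each NIC answer ARP only for its own addresses/interface
        sysctl -w net.ipv4.conf.all.arp_filter=1
        sysctl -w net.ipv4.conf.all.arp_ignore=1
        sysctl -w net.ipv4.conf.all.arp_announce=2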

    Read the article

  • APC PHP cache size does not exceed 32MB, even though settings allow for more

    - by hardy101
    I am setting up APC (v 3.1.9) on a high-traffic WordPress installation on CentOS 6.0 64 bit. I have figured out many of the quirks with APC, but something is still not quite right. No matter what settings I change, APC never actually caches more than 32MB. I'm trying to bump it up to 256 MB. 32MB is a default amount for apc.shm_size, so I am wondering if it's stuck there somehow. I have run the following echo '2147483648' > /proc/sys/kernel/shmmax to increase my system's shared memory to 2G (half of my 4G box). Then ran ipcs -lm which returns ------ Shared Memory Limits -------- max number of segments = 4096 max seg size (kbytes) = 2097152 max total shared memory (kbytes) = 8388608 min seg size (bytes) = 1 Also made a change in /etc/sysctl.conf then ran sysctl -p to make the settings stick on the server. Rebooted, too, for good measure. In my APC settings, I have mmap enabled (which happens by default in recent versions of APC). php.ini looks like: apc.stat=0 apc.shm_size="256M" apc.max_file_size="10M" apc.mmap_file_mask="/tmp/apc.XXXXXX" apc.ttl="7200" I am aware that mmap mode will ignore references to apc.shm_segments, so I have left it out with default 1. phpinfo() indicates the following about APC: Version 3.1.9 APC Debugging Disabled MMAP Support Enabled MMAP File Mask /tmp/apc.bPS7rB Locking type pthread mutex Locks Serialization Support php Revision $Revision: 308812 $ Build Date Oct 11 2011 22:55:02 Directive Local Value apc.cache_by_default On apc.canonicalize O apc.coredump_unmap Off apc.enable_cli Off apc.enabled On On apc.file_md5 Off apc.file_update_protection 2 apc.filters no value apc.gc_ttl 3600 apc.include_once_override Off apc.lazy_classes Off apc.lazy_functions Off apc.max_file_size 10M apc.mmap_file_mask /tmp/apc.bPS7rB apc.num_files_hint 1000 apc.preload_path no value apc.report_autofilter Off apc.rfc1867 Off apc.rfc1867_freq 0 apc.rfc1867_name APC_UPLOAD_PROGRESS apc.rfc1867_prefix upload_ apc.rfc1867_ttl 3600 apc.serializer default apc.shm_segments 1 apc.shm_size 256M apc.slam_defense On apc.stat Off apc.stat_ctime Off apc.ttl 7200 apc.use_request_time On apc.user_entries_hint 4096 apc.user_ttl 0 apc.write_lock On apc.php reveals the following graph, no matter how long the server runs (cache size fluctuates and hovers at just under 32MB. See image http://i.stack.imgur.com/2bwMa.png You can see that the cache is trying to allocate 256MB, but the brown piece of the pie keeps getting recycled at 32MB. This is confirmed as refreshing the apc.php page shows cached file counts that move up and down (implying that the cache is not holding onto all of its files). Does anyone have an idea of how to get APC to use more than 32 MB for its cache size?? **Note that the identical behavior occurs for eaccelerator, xcache, and APC. I read here: http://www.litespeedtech.com/support/forum/archive/index.php/t-5072.html that suEXEC could cause this problem.
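
    One quick way to see whether the 256M value is actually reaching APC, or whether another ini file (for example /etc/php.d/apc.ini) is overriding it, is to ask PHP itself; a diagnostic sketch using the usual CentOS paths:

        # which ini files does PHP load, and what shm_size does APC end up with?
        php -i | grep -E 'Loaded Configuration File|Additional .ini files parsed'
        php -r 'var_dump(ini_get("apc.shm_size"));'
        # find every place the setting is declared (mod_php reads its own ini stack)
        grep -Rn 'apc.shm_size' /etc/php.ini /etc/php.d/ 2>/dev/null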

    Read the article

  • PHP 5.2 to 5.3 not upgrading, no errors

    - by Webnet
    I'm following this guide: http://atik97.wordpress.com/2010/06/12/how-to-upgrade-to-php-5-3-in-ubuntu-9-10/ I've done all the steps, but it's still showing php 5.2.6 - any ideas? I have also tried -cgi instead of -cli, neither have any effect. update I've tried rebooting the server to see if that would have any effect and unfortunately it didn't update Output of dpkg -l *php*: Desired=Unknown/Install/Remove/Purge/Hold | Status=Not/Inst/Cfg-files/Unpacked/Failed-cfg/Half-inst/trig-aWait/Trig-pend |/ Err?=(none)/Hold/Reinst-required/X=both-problems (Status,Err: uppercase=bad) ||/ Name Version Description +++-=============================================-=============================================-========================================================================================================== un libapache2-mod-php4 <none> (no description available) ii libapache2-mod-php5 5.2.6.dfsg.1-3ubuntu4.6 server-side, HTML-embedded scripting language (Apache 2 module) un libapache2-mod-php5filter <none> (no description available) ii php-pear 5.2.6.dfsg.1-3ubuntu4.6 PEAR - PHP Extension and Application Repository un php4-cli <none> (no description available) un php4-dev <none> (no description available) un php4-mysql <none> (no description available) un php4-pear <none> (no description available) ii php5 5.2.6.dfsg.1-3ubuntu4.6 server-side, HTML-embedded scripting language (metapackage) ii php5-cgi 5.2.6.dfsg.1-3ubuntu4.6 server-side, HTML-embedded scripting language (CGI binary) ii php5-cli 5.2.6.dfsg.1-3ubuntu4.6 command-line interpreter for the php5 scripting language ii php5-common 5.2.6.dfsg.1-3ubuntu4.6 Common files for packages built from the php5 source ii php5-curl 5.2.6.dfsg.1-3ubuntu4.6 CURL module for php5 un php5-dev <none> (no description available) ii php5-gd 5.2.6.dfsg.1-3ubuntu4.6 GD module for php5 ii php5-imap 5.2.6-0ubuntu5.1 IMAP module for php5 un php5-json <none> (no description available) ii php5-mcrypt 5.2.6-0ubuntu2 MCrypt module for php5 ii php5-mysql 5.2.6.dfsg.1-3ubuntu4.6 MySQL module for php5 un php5-mysqli <none> (no description available) ii php5-xsl 5.2.6.dfsg.1-3ubuntu4.6 XSL module for php5 un phpapi-20060613+lfs <none> (no description available) ii phpmyadmin 4:3.1.2-1ubuntu0.2 MySQL web administration tool update The following commands and their outputs: grep php53 /etc/apt/sources.list deb http://php53.dotdeb.org stable all deb-src http://php53.dotdeb.org stable all apt-cache search -f "libapache2-mod-php5" http://pastebin.com/XNXdsXYC update I've updated the question with more details on installed packages.
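
    A quick way to check whether the dotdeb 5.3 packages are even visible as upgrade candidates to apt; a diagnostic sketch only (dotdeb targets Debian, so on Ubuntu 9.10 the candidate version may simply never change):

        sudo apt-get update
        # shows installed vs. candidate version and which repository provides each
        apt-cache policy php5 php5-cli libapache2-mod-php5
        # confirm what the CLI binary actually reports after any upgrade
        php -v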

    Read the article

  • iTunes and Hulu Playback Choppy and Slow?

    - by Bart Silverstrim
    Specs: Windows XP, latest updates 1.7 ghz Pentium 4 1 gig ram DirectX 9.0c NVIDIA GeForce FX 5200 with 256 meg RAM OpenGL 2.1 The story: Okay, I had an older system laying around that I figured I would try turning into a mini-media system to connect to our TV. I put together a lot of older parts, got it into working order, etc. and hooked it up and voila'...slower, but usable system that displayed to the TV. It could run some things decently. I put in iTunes, it played video okay. Not great, but okay. Played Hulu and since we have a 1Mb download rate, the minimum for their site, there were some choppy moments when watching their shows, but I found that (sadly) changing resolution to 800x600 seemed to help with the issue when running full screen. I downloaded the application called Boxee and installed it. It wouldn't run; apparently the video card in the system supported OpenGL 1.2, and needed at least 1.4. I bought a cheap card, the 5200, with four times the memory in it and support for OpenGL 2.1. Installed, everything seemed fine. iTunes seemed to run fine, the video driver (PNY video card) came with OpenGL 2.1, and Boxee finally ran. I then upgraded to the latest drivers for the video card and ran the DirectX updater from MS. After that, the OpenGL Extension Viewer wouldn't run. It just stayed as an icon in the task bar. Also, any and all videos in iTunes stuttered and went out of sync horribly. Unwatchable. I tried watching Hulu video in Boxee, and it displayed video like it was a series of stills in a very bad powerpoint. Playing straightforward audio-only came through fine, no stutters no hiccups. I tried system restore to roll back updates to pre-directX updates (I thought that seemed to be the time that triggered the weird behavior), no joy. I tried uninstalling and reinstalling the video drivers. I installed updated audio drivers (ensoniq audiopci), nothing helped. I finally wiped the drive last night and tried reinstalling everything and restoring my iTunes content via an import from a backup. Fresh install, no updater on the video card or directx. the problem was still there although I haven't tested Hulu, the iTunes player is still stuttering like crazy if I play video, fine if I play audio. I know the processor isn't high in heft, but with one gig of RAM and the fact that it seemed to do okay before I thought that the problem must be software related. Has anyone else run into this sort of issue and have a solution other than "buy a new computer"? What specs seem to work with video at the low end for you? Right now the system is of little use other than keeping my music library and iTunes apps synced with my iPod.

    Read the article

  • My server's been hacked EMERGENCY

    - by Grant unwin
    I'm on my way into work at 9.30 p.m. on a Sunday because our server has been compromised somehow and was the source of a DoS attack on our provider. The server's access to the Internet has been shut down, which means over 500-600 of our clients' sites are now down. Now this could be an FTP hack, or some weakness in code somewhere. I'm not sure till I get there. How can I track this down quickly? We're in for a whole lot of litigation if I don't get the server back up ASAP. Any help is appreciated. UPDATE Thanks to everyone for your help. Luckily I WASN'T the only person responsible for this server, just the nearest. We managed to resolve this problem, although it may not apply to many others in a different situation. I'll detail what we did. We unplugged the server from the net. It was performing (attempting to perform) a denial of service attack on another server in Indonesia, and the guilty party was also based there. We first tried to identify where on the server this was coming from; considering we have over 500 sites on the server, we expected to be searching for some time. However, with SSH access still available, we ran a command to find all files edited or created around the time the attacks started. Luckily, the offending file was created over the winter holidays, which meant that not many other files were created on the server at that time. We were then able to identify the offending file, which was inside the uploaded images folder within a ZenCart website. After a short cigarette break we concluded that, due to the file's location, it must have been uploaded via a file upload facility that was inadequately secured. After some googling, we found that there was a security vulnerability that allowed files to be uploaded within the ZenCart admin panel, as a picture for a record company (a section we never really even used). Posting this form just uploaded any file: it did not check the extension of the file, and didn't even check whether the user was logged in. This meant that any file could be uploaded, including a PHP file for the attack. We secured the vulnerability with ZenCart on the infected site, and removed the offending files. The job was done, and I was home by 2 a.m. The moral: Always apply security patches for ZenCart, or any other CMS for that matter; when security updates are released, the whole world is made aware of the vulnerability. Always do backups, and back up your backups. Employ or arrange for someone who will be there in times like these, to prevent anyone from relying on a panicky post on Server Fault. Happy servering!
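
    For reference, a sketch of the kind of "what changed in that window" search described above, using GNU find; the path and the dates are made-up placeholders for the real web root and attack window:

        # files modified inside an assumed attack window, newest last
        find /var/www -type f -newermt '2011-12-20' ! -newermt '2012-01-05' -printf '%T+ %p\n' | sort
        # or, more bluntly: any PHP file touched recently under the web root
        find /var/www -type f -name '*.php' -mtime -30 -ls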

    Read the article

  • installed mongo using brew but stuck at prompt

    - by user50946
    I have installed mongo using brew on my mac. When I give mongo command I see this MongoDB shell version: 2.4.6 connecting to: test but it stays there and never give me command prompt back anyone else noticed something like this I have reinstalled with no luck. The issue is persistent thanks Logs ***** SERVER RESTARTED ***** Fri Oct 18 08:11:48.360 [initandlisten] MongoDB starting : pid=2081 port=27017 dbpath=/usr/local/var/mongodb 64-bit host=Asims-MacBook-Air.local Fri Oct 18 08:11:48.360 [initandlisten] db version v2.4.6 Fri Oct 18 08:11:48.360 [initandlisten] git version: nogitversion Fri Oct 18 08:11:48.360 [initandlisten] build info: Darwin minimountain.local 12.5.0 Darwin Kernel Version 12.5.0: Sun Sep 29 13:33:47 PDT 2013; root:xnu-2050.48.12~1/RELEASE_X86_64 x86_64 BOOST_LIB_VERSION=1_49 Fri Oct 18 08:11:48.360 [initandlisten] allocator: tcmalloc Fri Oct 18 08:11:48.360 [initandlisten] options: { bind_ip: "127.0.0.1", config: "/usr/local/etc/mongod.conf", dbpath: "/usr/local/var/mongodb", logappend: "true", logpath: "/usr/local/var/log/mongodb/mongo.log" } Fri Oct 18 08:11:48.361 [initandlisten] journal dir=/usr/local/var/mongodb/journal Fri Oct 18 08:11:48.361 [initandlisten] recover : no journal files present, no recovery needed Fri Oct 18 08:11:48.398 [websvr] admin web console waiting for connections on port 28017 Fri Oct 18 08:11:48.398 [initandlisten] waiting for connections on port 27017 Fri Oct 18 08:12:03.279 [signalProcessingThread] got signal 1 (Hangup: 1), will terminate after current cmd ends Fri Oct 18 08:12:03.279 [signalProcessingThread] now exiting Fri Oct 18 08:12:03.279 dbexit: Fri Oct 18 08:12:03.279 [signalProcessingThread] shutdown: going to close listening sockets... Fri Oct 18 08:12:03.279 [signalProcessingThread] closing listening socket: 9 Fri Oct 18 08:12:03.279 [signalProcessingThread] closing listening socket: 10 Fri Oct 18 08:12:03.280 [signalProcessingThread] closing listening socket: 11 Fri Oct 18 08:12:03.280 [signalProcessingThread] removing socket file: /tmp/mongodb-27017.sock Fri Oct 18 08:12:03.280 [signalProcessingThread] shutdown: going to flush diaglog... Fri Oct 18 08:12:03.280 [signalProcessingThread] shutdown: going to close sockets... Fri Oct 18 08:12:03.280 [signalProcessingThread] shutdown: waiting for fs preallocator... Fri Oct 18 08:12:03.280 [signalProcessingThread] shutdown: lock for final commit... Fri Oct 18 08:12:03.280 [signalProcessingThread] shutdown: final commit... Fri Oct 18 08:12:03.282 [signalProcessingThread] shutdown: closing all files... Fri Oct 18 08:12:03.282 [signalProcessingThread] closeAllFiles() finished
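
    The appended server log actually shows mongod shutting down after a SIGHUP rather than the shell misbehaving, so a couple of quick checks might help; paths below are the Homebrew defaults already visible in the log:

        # is a mongod actually running and listening?
        ps aux | grep '[m]ongod'
        nc -z 127.0.0.1 27017 && echo "mongod is listening"
        # run it in the foreground with the same config Homebrew uses and watch it start
        mongod --config /usr/local/etc/mongod.conf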

    Read the article

  • Windows 7 losing one of my displays after restart

    - by j_kubik
    I have an Intel DZ68BC motherboard with Intel HD graphics card using two monitors (on DVI and on HDMI » VGA). My friend asked me to test if his NVIDIA graphics card works well on my computer (at his it was doing some trouble), so I inserted it in my computer, installed the NVIDIA driver and it worked quite well. Then I removed it, uninstalled everything NVIDIA-related I could find and switched monitors back to my Intel card. Since then after every system start/restart, the system sees only monitor on HDMI » VGA connector, completely ignoring the DVI monitor. I noticed that installing the Intel video drivers causes the system to recognize the second monitor if I don't immediately reboot. After a reboot, the system recognizes only the HDMI » VGA monitor. I also tried starting in safe-mode and using DriveSweeper to remove the remains of NVIDIA drivers. While it seems that some drivers were removed, the situation didn't change. Now I am out of ideas and I really wouldn't like to reinstall the system (again...). I also tried restoring the system to the state before this whole story, but it also didn't change anything. EDIT: I am still trying to troubleshoot this problem. The only point that I could start was driver re-instalation. I traced down the part that restores right settings to a call: C:\Users\Jarek\Desktop\GFX_Win7_64_8.15.10.2696\x64\Drv64.exe -driverinf "C:\Users\Jarek\Desktop\GFX_Win7_64_8.15.10.2696\Graphics\igdlh64.inf" -flags 20 -keypath "Software\Intel\Difx64" This call fixes my displays, and as workaround, I will add it for now to my autorun. I am still looking for better solution anyway... EDIT2: Using DriverView i made a list of currently used drivers both before and after fixing my display using above command. Then i compared logs: No drivers were removed by fixing command. Drivers added by fixing command: MS Remote Access serial network driver (asyncmac.sys) security processor (spsys.sys) Drivers that changed base address (indicates driver-reload?) Canonical Display Driver (cdd.dll) Intel Graphics Kernel Mode Driver (igdkmd64.sys) Monitor Driver (monitor.sys) Added drivers seem rather unrelated to the problem to me, reloaded drivers are just a cnsequence of installing new driver file so there is not much to go here... I really cannot make heads or tails out of it...

    Read the article

  • Lots of dropped packages when tcpdumping on busy interface

    - by Frands Hansen
    My challenge I need to do tcpdumping of a lot of data - actually from 2 interfaces left in promiscuous mode that are able to see a lot of traffic. To sum it up Log all traffic in promiscuous mode from 2 interfaces Those interfaces are not assigned an IP address pcap files must be rotated per ~1G When 10 TB of files are stored, start truncating the oldest What I currently do Right now I use tcpdump like this: tcpdump -n -C 1000 -z /data/compress.sh -i any -w /data/livedump/capture.pcap $FILTER The $FILTER contains src/dst filters so that I can use -i any. The reason for this is, that I have two interfaces and I would like to run the dump in a single thread rather than two. compress.sh takes care of assigning tar to another CPU core, compress the data, give it a reasonable filename and move it to an archive location. I cannot specify two interfaces, thus I have chosen to use filters and dump from any interface. Right now, I do not do any housekeeping, but I plan on monitoring disk and when I have 100G left I will start wiping the oldest files - this should be fine. And now; my problem I see dropped packets. This is from a dump that has been running for a few hours and collected roughly 250 gigs of pcap files: 430083369 packets captured 430115470 packets received by filter 32057 packets dropped by kernel <-- This is my concern How can I avoid so many packets being dropped? These things I did already try or look at Changed the value of /proc/sys/net/core/rmem_max and /proc/sys/net/core/rmem_default which did indeed help - actually it took care of just around half of the dropped packets. I have also looked at gulp - the problem with gulp is, that it does not support multiple interfaces in one process and it gets angry if the interface does not have an IP address. Unfortunately, that is a deal breaker in my case. Next problem is, that when the traffic flows though a pipe, I cannot get the automatic rotation going. Getting one huge 10 TB file is not very efficient and I don't have a machine with 10TB+ RAM that I can run wireshark on, so that's out. Do you have any suggestions? Maybe even a better way of doing my traffic dump altogether.
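
    One tcpdump-level knob worth noting alongside the rmem sysctls: the capture buffer itself can be enlarged with -B (size in KiB); a hedged variant of the command quoted above:

        # same capture, but with a 256 MiB operating-system capture buffer (-B takes KiB)
        tcpdump -B 262144 -n -C 1000 -z /data/compress.sh -i any -w /data/livedump/capture.pcap $FILTER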

    Read the article

  • Should I choose KVM/XEN over OpenVZ or use them together?

    - by Krystian
    I've got a dual Xeon E5504 server, with [for now] only 8GB of RAM. Storage isn't impressive either: 3x 146GB SAS in RAID5 + 500GB SATA drives. Currently it works as a development server, but it's over-specced for our needs, and since our development methods have changed over the last 2 years we decided it will work as a production system for some of our applications, plus we would like to have a separate system for testing/research. Our apps are mainly web apps deployed on Tomcats [plural, as some of the apps require older versions] and connected to Postgres. I would like to have a production system where only httpd+tomcat+db are set up and nothing else runs there. A sterile system. Apart from that, I would like a test system, where I can play with different JVM settings, deploy my test apps, play with tomcat/httpd settings and restart them without interfering with the production system. Apart from that, I would like to be able to play with different Linux flavors, with newer kernels, to test how they work etc. I know this is not possible with OpenVZ, and I would have to choose KVM for that. I am thinking about merging the two: setting up KVM to be able to work with different systems [Linux only, to be frank] and using OpenVZ to set up separate machines for my development needs. I would simply go with that, but reading here and there about the performance impact full virtualization has compared to containers, and looking at the specs of my server, makes me think twice about it. I don't want to lose too much performance, especially because of the nature of my apps [a few JVMs running at the same time]. It will be my first time with virtualization, apart from using desktop virtualbox/vmserver. Although I am a fast learner, I don't want to mess with the main system so much that it will break the production apps or make them crawl. Although they are more or less internal apps and they don't produce much load, they need to be stable. I've read that a KVM host is a normal Linux installation and it allows normal processes to run on it. If that is so, does it allow OpenVZ to run as well? I mean... can I have KVM and OpenVZ running on the same system/kernel? Or do I have to set up another system to run OpenVZ containers? How much performance impact can this have for me? Will my hardware suffice? Oh, and one more thing... unfortunately I'm quite limited with funds... I'm looking for a free solution only :/
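
    Since KVM relies on hardware virtualization, one quick sanity check is worth doing before planning around it; the Xeon E5504 should expose VT-x, but verifying costs nothing:

        # non-zero output means the CPU advertises VT-x (vmx) or AMD-V (svm)
        egrep -c '(vmx|svm)' /proc/cpuinfo
        # after installing KVM, the modules and /dev/kvm should be present
        lsmod | grep kvm
        ls -l /dev/kvm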

    Read the article

  • Apache's htcacheclean doesn't scale: How to tame a huge Apache disk_cache?

    - by flight
    We have an Apache setup with a huge disk_cache (500,000 entries, 50 GB of disk space used). The cache grows by 16 GB every day. My problem is that the cache seems to be growing nearly as fast as it's possible to remove files and directories from the cache filesystem!
    The cache partition is an ext3 filesystem (100GB, "-t news") on iSCSI storage. The Apache server (which acts as a caching proxy) is a VM. The disk_cache is configured with CacheDirLevels=2 and CacheDirLength=1, and includes variants. A typical file path is "/htcache/B/x/i_iGfmmHhxJRheg8NHcQ.header.vary/A/W/oGX3MAV3q0bWl30YmA_A.header".
    When I call htcacheclean to tame the cache (non-daemon mode, "htcacheclean -t -p /htcache -l 15G"), IOwait goes through the roof for several hours, without any visible action. Only after hours does htcacheclean start to delete files from the cache partition, which takes a couple more hours. (A similar problem was brought up on the Apache mailing list in 2009, without a solution: http://www.mail-archive.com/[email protected]/msg42683.html) The high IOwait leads to problems with the stability of the web server (the bridge to the Tomcat backend server sometimes stalls).
    I came up with my own prune script, which removes files and directories from random subdirectories of the cache, only to find that the deletion rate of the script is just slightly higher than the cache growth rate. The script takes ~10 seconds to read a subdirectory (e.g. /htcache/B/x) and frees some 5 MB of disk space; in those 10 seconds the cache has grown by another 2 MB. As with htcacheclean, IOwait goes up to 25% when running the prune script continuously.
    Any ideas? Is this a problem specific to the (rather slow) iSCSI storage? Should I choose a different filesystem for a huge disk_cache? ext2? ext4? Are there any kernel parameter optimizations for this kind of scenario? (I already tried the deadline scheduler and a smaller read_ahead_kb, without effect.)
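
    One generic option worth mentioning (not a confirmed fix for this case) is htcacheclean's daemon mode, which spreads the pruning out continuously instead of one multi-hour batch run; a sketch reusing the path and limit from the question, with the 60-minute interval as an assumption:

        # Hypothetical invocation: run as a daemon, wake every 60 minutes, be "nice"
        # about I/O, delete empty directories, keep the cache under 15G.
        htcacheclean -d60 -n -t -i -p /htcache -l 15G

    With a cache growing 16 GB per day, the interval and limit would need tuning so that each pass has a realistic amount of work to do.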

    Read the article

  • Remote access to internal machine (ssh port-forwarding)

    - by MacUsers
    I have a server (serv05) at work with a public IP, hosting two KVM guests - vtest1 & vtest2 - in two different private networks - 192.168.122.0 & 192.168.100.0 respectively - set up this way:
        [root@serv05 ~]# ip -o addr show | grep -w inet
        1: lo       inet 127.0.0.1/8 scope host lo
        2: eth0     inet xxx.xxx.xx.197/24 brd xxx.xxx.xx.255 scope global eth0
        4: virbr1   inet 192.168.100.1/24 brd 192.168.100.255 scope global virbr1
        6: virbr0   inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
        [root@serv05 ~]# route -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.100.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr1
        xxx.xxx.xx.0    0.0.0.0         255.255.255.0   U     0      0        0 eth0
        192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
        169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
        0.0.0.0         xxx.xxx.xx.62   0.0.0.0         UG    0      0        0 eth0
    I've also set up IP forwarding and masquerading this way:
        iptables --table nat --append POSTROUTING --out-interface eth0 -j MASQUERADE
        iptables --append FORWARD --in-interface virbr0 -j ACCEPT
    All works up to this point. If I want to access vtest1 (or vtest2) remotely, I first ssh to serv05 and then from there ssh to vtest1. Is there a way to set up port forwarding so that vtest1 can be accessed directly from the outside world? This is what I probably need to set up:
        external_ip (tcp port 4444) -> DNAT -> 192.168.122.50 (tcp port 22)
    I know it's easily doable using a SOHO router, but I can't figure out how to do it on a Linux box. Any help from you guys?? Cheers!!
    Update 1: Now I've made sshd listen on both ports:
        [root@serv05 ssh]# netstat -tulpn | grep ssh
        tcp   0   0 xxx.xxx.xx.197:22     0.0.0.0:*   LISTEN   5092/sshd
        tcp   0   0 xxx.xxx.xx.197:4444   0.0.0.0:*   LISTEN   5092/sshd
    and port 4444 is allowed in the iptables rules:
        [root@serv05 sysconfig]# grep 4444 iptables
        -A PREROUTING -i eth0 -p tcp -m tcp --dport 4444 -j DNAT --to-destination 192.168.122.50:22
        -A INPUT -p tcp -m state --state NEW -m tcp --dport 4444 -j ACCEPT
        -A FORWARD -i eth0 -p tcp -m tcp --dport 4444 -j ACCEPT
    But I'm getting connection refused:
        maci:~ santa$ telnet serv05 4444
        Trying xxx.xxx.xx.197...
        telnet: connect to address xxx.xxx.xx.197: Connection refused
        telnet: Unable to connect to remote host
    Any idea what I'm still missing? Cheers!!
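
    For reference, a generic DNAT-to-guest sketch (not the confirmed solution from this thread) usually combines a nat-table PREROUTING rule, a matching FORWARD rule towards the bridge, and ip_forward; the interface and address values below are the question's placeholders:

        # Hypothetical rule set for forwarding external port 4444 to the guest's sshd.
        sysctl -w net.ipv4.ip_forward=1
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 4444 \
            -j DNAT --to-destination 192.168.122.50:22
        iptables -A FORWARD -i eth0 -o virbr0 -p tcp -d 192.168.122.50 --dport 22 \
            -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
        # In /etc/sysconfig/iptables the DNAT rule belongs under the *nat section;
        # whether it was actually loaded can be checked with: iptables -t nat -L -n -v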

    Read the article

  • What else can I do to secure my Linux server?

    - by eric01
    I want to put a web application on my Linux server. I will first explain what the web app will do, and then tell you what I have done so far to secure my brand new Linux system.
    The app will be a classified ads website (like gumtree.co.uk) where users can sell their items, upload images, and send to and receive emails from the admin. It will use SSL for some pages. I will need SSH. The machine's sole purpose will be hosting the website.
    So far, what I did to secure my stock Ubuntu (latest version) is the following. NOTE: I probably did some things that will prevent the application from doing all its tasks, so please let me know of that. (I put numbers on the points so you can refer to them more easily.)
    1) Firewall
    I installed Uncomplicated Firewall. Deny IN & OUT by default. Rules: allow IN & OUT for HTTP, IMAP, POP3, SMTP, SSH, UDP port 53 (DNS), UDP port 123 (SNTP), SSL/port 443. (The ones I didn't allow were FTP, NFS, Samba, VNC, CUPS.) When I install MySQL & Apache, I will open up port 3306 IN & OUT.
    2) Secure the partition
    In /etc/fstab, I added the following line at the end:
        tmpfs /dev/shm tmpfs defaults,rw 0 0
    Then in the console:
        mount -o remount /dev/shm
    3) Secure the kernel
    In the file /etc/sysctl.conf, there are a few different filters to uncomment. I didn't know which ones are relevant to web app hosting. Which ones should I activate? They are the following (a generic sketch of these settings appears after this list):
    A) Turn on source address verification in all interfaces to prevent spoofing attacks
    B) Uncomment the next line to enable packet forwarding for IPv4
    C) Uncomment the next line to enable packet forwarding for IPv6
    D) Do not accept ICMP redirects (we are not a router)
    E) Accept ICMP redirects only for gateways listed in our default gateway list
    F) Do not send ICMP redirects
    G) Do not accept IP source route packets (we are not a router)
    H) Log martian packets
    4) Configure the passwd file
    Replace "sh" with "false" for all accounts except the user account and root. I also did it for the account called sshd. I am not sure whether it will prevent SSH connections (which I want to use) or if it's something else.
    5) Configure the shadow file
    In the console: passwd -l to lock all accounts except the user account.
    6) Install rkhunter and chkrootkit
    7) Install Bum
    Disabled these services: "High performance mail server", "unreadable (kerneloops)", "unreadable (speech-dispatcher)", "Restores DNS" (should this one stay on?)
    8) Install apparmor-profiles
    9) Install clamav & freshclam (antivirus and updates)
    What did I do wrong, and what more should I do to secure this Linux machine? Thanks a lot in advance.
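
    As a generic illustration of point 3 (a sketch, not advice taken from the thread), the lettered comments map onto sysctl keys roughly like this for a host that does no routing; treat the exact values as assumptions to review:

        # Hypothetical /etc/sysctl.conf fragment for a non-routing web host; apply with "sysctl -p".
        net.ipv4.conf.all.rp_filter = 1            # (A) source address verification
        net.ipv4.ip_forward = 0                    # (B) leave IPv4 forwarding off on a plain web host
        net.ipv6.conf.all.forwarding = 0           # (C) likewise for IPv6
        net.ipv4.conf.all.accept_redirects = 0     # (D) do not accept ICMP redirects
        net.ipv4.conf.all.secure_redirects = 1     # (E) only relevant if redirects are accepted at all
        net.ipv4.conf.all.send_redirects = 0       # (F) do not send ICMP redirects
        net.ipv4.conf.all.accept_source_route = 0  # (G) drop source-routed packets
        net.ipv4.conf.all.log_martians = 1         # (H) log martian packets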

    Read the article
