Search Results

Search found 8649 results on 346 pages for '15 puzzle'.

Page 146/346 | < Previous Page | 142 143 144 145 146 147 148 149 150 151 152 153  | Next Page >

  • Second IP address on the same interface but on a different subnet

    - by fptstl
    Is it possible in CentOS 5.7 64-bit to have a second IP address on one interface (e.g. eth0) via an alias interface configuration, in a different subnet? Here is the original config for eth0 (/etc/sysconfig/network-scripts/ifcfg-eth0):

        # Broadcom Corporation NetXtreme BCM5721 Gigabit Ethernet PCI Express
        DEVICE=eth0
        BOOTPROTO=static
        BROADCAST=192.168.91.255
        HWADDR=00:1D:09:FE:DA:04
        IPADDR=192.168.91.250
        NETMASK=255.255.255.0
        NETWORK=192.168.91.0
        ONBOOT=yes

    And here is the config for eth0:0 (/etc/sysconfig/network-scripts/ifcfg-eth0:0):

        # Broadcom Corporation NetXtreme BCM5721 Gigabit Ethernet PCI Express
        DEVICE=eth0:0
        BOOTPROTO=static
        BROADCAST=10.10.191.255
        DNS1=10.10.15.161
        DNS2=10.10.18.36
        GATEWAY=10.10.191.254
        HWADDR=00:1D:09:FE:DA:04
        IPADDR=10.10.191.210
        NETMASK=255.255.255.0
        NETWORK=10.39.191.0
        ONPARENT=yes

    How should the resolv.conf file change, since there are two different gateways? Is any other change needed?
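
    Note that resolv.conf only lists nameservers, never gateways, so it can simply carry both DNS entries (nameserver 10.10.15.161 and nameserver 10.10.18.36); the two-gateway question is a routing matter. A hedged sketch of the usual source-routing approach, reusing the addresses above (the table name "subnet2" is illustrative):

        # Give replies sourced from the alias IP their own default route
        echo "200 subnet2" >> /etc/iproute2/rt_tables
        ip route add default via 10.10.191.254 dev eth0 table subnet2
        ip rule add from 10.10.191.210/32 table subnet2

    To survive reboots, equivalent lines can go into route-eth0 / rule-eth0 files under /etc/sysconfig/network-scripts/.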

    Read the article

  • Why would you ever set MaxKeepAliveRequests to anything but unlimited?

    - by Jonathon Reinhart
    Apache's KeepAliveTimeout exists to close a keep-alive connection if a new request is not issued within a given period of time. Provided the user does not close his browser/tab, this timeout (usually 5-15 seconds) is what eventually closes most keep-alive connections, and prevents server resources from being wasted by holding on to connections indefinitely. Now the MaxKeepAliveRequests directive puts a limit on the number of HTTP requests that a single TCP connection (left open due to KeepAlive) will serve. Setting this to 0 means an unlimited number of requests are allowed. Why would you ever set this to anything but "unlimited"? Provided a client is still actively making requests, what harm is there in letting them happen on the same keep-alive connection? Once the limit is reached, the requests still come in, just on a new connection. The way I see it, there is no point in ever limiting this. What am I missing?
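
    For reference, a minimal sketch of the directives in play (values illustrative):

        # httpd.conf -- keep-alive tuning
        KeepAlive On
        KeepAliveTimeout 5          # close an idle keep-alive connection after 5 s
        MaxKeepAliveRequests 0      # 0 = unlimited requests per connection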

    Read the article

  • Apache Sending "Content-Length: 0", How to Fix?

    - by ServerZilla
    Hi, I am using an Apache server and it is sending a Content-Length = 0 value, which is preventing file downloads; see http://www.youtubedroid.com/download2.php?v=%5F3XcMEKNws0&title=Akhila+%2CMumbai+reloaded%2CSuper+dancer+2&hq=0. Here is my .htaccess content:

        SetEnv no-gzip dont-vary

    Here are the headers sent by the server:

        HTTP/1.1 200 OK
        Date: Tue, 15 Dec 2009 06:12:11 GMT
        Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4
        X-Powered-By: PHP/5.2.11
        Content-Description: File Transfer
        Content-Disposition: attachment; filename="Akhila ,Mumbai reloaded,Super dancer 2.mp3"
        Content-Transfer-Encoding: binary
        Expires: 0
        Cache-Control: must-revalidate, post-check=0, pre-check=0
        Pragma: public
        X-Sendfile: ./tmp/64eb3b185e38af95c15405ffb0606e76.mp3
        Content-Length: 0
        Keep-Alive: timeout=5, max=95
        Connection: Keep-Alive
        Content-Type: application/octet-stream

    Please tell me how to fix this.
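
    The X-Sendfile header above suggests mod_xsendfile is expected to stream the file; if the module is absent, or its allowed-path list does not cover the (relative) ./tmp path, an empty body with Content-Length: 0 is a plausible outcome. A hedged configuration sketch (the absolute path is illustrative):

        # httpd.conf / vhost -- mod_xsendfile must be loaded and permitted to read the file
        LoadModule xsendfile_module modules/mod_xsendfile.so
        XSendFile On
        XSendFilePath /home/youtubedroid/public_html/tmp

    Passing an absolute path in the X-Sendfile header, rather than ./tmp/..., is generally the safer pattern.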

    Read the article

  • tar incremental backup backs everything up every time when used on the Dropbox directory

    - by Cyclic
    I made an incremental backup about 10 months ago (on Jan 27, 2013), creating a .snar metadata file. Now, when I try to make an incremental backup using

        tar --create --file=dropbox_incremental_1.tar --listed-incremental=dropbox_0.snar Dropbox

    the command just re-backs up everything. I'm not an expert on Unix timestamps, but I noticed that virtually all of my directory timestamps are way more recent than the last time they changed. My actual files look like this:

        Access: 2013-03-12 19:04:51.000000000 -0500
        Modify: 2012-09-30 15:10:47.000000000 -0500
        Change: 2013-03-12 19:04:51.306209672 -0500

    The Modify timestamp seems correct, but the files were definitely not changed (at least not by anything I know of) at the times shown, and these files still go into the incremental archive. What's happening here? Is there a way to tell tar to look at the Modify timestamp? Isn't that what it's supposed to be doing?
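
    One known cause with Dropbox specifically: GNU tar's snapshot file records device and inode numbers as well as timestamps, and Dropbox's sync rewrites files, so changed inodes (or a changed device number) can force a full re-dump even when mtime is untouched. A hedged check and workaround, reusing the question's file names ("somefile" is a placeholder; --no-check-device requires GNU tar 1.24 or later):

        # Inspect a file's inode alongside its timestamps
        stat Dropbox/somefile
        # Retry while ignoring device-number changes
        tar --create --file=dropbox_incremental_1.tar \
            --listed-incremental=dropbox_0.snar --no-check-device Dropbox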

    Read the article

  • MacBook Pro wakes up in my backpack

    - by J. Pablo Fernández
    This has happened to me twice: I press the power button on my MacBook Pro, choose sleep, close it, unplug everything, confirm that it is off (by pressing my ear to it), and put it in my bag. Some minutes later, the laptop wakes up by itself. Both times I caught it in time. The second time it was so hot I couldn't touch some parts; it refused to actually wake up, and the screen stayed blank, though restarting it worked. Any ideas what might be going on and/or how to prevent this? More details: it's a 15" unibody MacBook Pro from 2009.
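
    A hedged diagnostic/mitigation sketch for 2009-era hardware (the OS version is an assumption; womp and hibernatemode are standard pmset keys, and 25 means hibernate-to-disk):

        # Review power settings and find the wake reason after an incident
        pmset -g
        syslog | grep -i "Wake reason"
        # Disable wake-on-LAN and make sleep write RAM to disk and power off
        sudo pmset -a womp 0
        sudo pmset -a hibernatemode 25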

    Read the article

  • How to add Nvidia drivers after a previous failure with Linux Mint?

    - by LessThanMe
    Before today, I had perfectly good Nvidia drivers on my Linux Mint (15) box. I decided to update them because my performance in TF2 was less than stellar, and then things went south. I used Synaptic to install nvidia-331 and then rebooted, but when I selected Mint in GRUB I waited... and waited... and waited. Nothing happened, but the display stayed on (a completely black image was being output). So I went into recovery mode from GRUB, got root access, apt-get remove --purge nvidia*'d my way out of that mess, and installed nvidia-common. Now my performance in graphics-intensive stuff (read: games, Blender) suffers, so I've been through the same thing a few times trying to reinstall nvidia-current. I just want to get it back to how it was. Thanks for any help! Nvidia GTX 560.
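
    A hedged recovery sequence along the lines already described, collected in one place (package names as used in the question):

        # From a console or recovery shell: purge all nvidia packages, drop stale config
        sudo apt-get remove --purge 'nvidia*'
        sudo rm -f /etc/X11/xorg.conf        # only if it was generated by the nvidia installer
        sudo apt-get update
        sudo apt-get install nvidia-current  # or nvidia-331, as attempted above
        sudo reboot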

    Read the article

  • 32-bit domU on 64-bit dom0

    - by ModuleC
    I'm using a 64-bit CentOS dom0:

        2.6.18-164.15.1.el5xen #1 SMP Wed Mar 17 12:04:23 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux

    Recently I migrated some 32-bit CentOS domUs onto this node. According to the specs, 32-bit domUs should work on a 64-bit dom0. The domUs are paravirtualized, and everything works except the iptables limit match. Running csf on a domU produces the following messages in dmesg:

        ip_tables: (C) 2000-2006 Netfilter Core Team
        Netfilter messages via NETLINK v0.30.
        ip_conntrack version 2.4 (2080 buckets, 16640 max) - 304 bytes per conntrack
        ip_tables: limit match: invalid size 40 != 28
        ip_tables: limit match: invalid size 40 != 28
        (the last line repeats several more times)

    Running lsmod on both dom0 and domU lists all the required iptables modules as loaded. I found this thread: http://www.mail-archive.com/[email protected]/msg189433.html, but didn't find anything on this issue for CentOS. Am I missing something?
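
    The "invalid size 40 != 28" pattern is characteristic of a 32-bit xt_limit structure being handed to a 64-bit kernel without a compat translation. A hedged first check (assuming the kernel config file is installed):

        # On the 64-bit kernel the domUs run under
        grep -E 'CONFIG_COMPAT=' /boot/config-$(uname -r)
        # A rule that avoids '-m limit' should load cleanly as a control test
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT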

    Read the article

  • Meetings Disappearing from Outlook 2010 Public Calendar

    - by Neil
    We are experiencing a frustrating issue with our public calendars in Outlook 2010. Meetings that were scheduled months in advance go missing, but then reappear. If user A logs in at 9:30 and goes to the calendar, certain meetings will be missing; 15 minutes later, when user B logs in, the meetings are there. It is not tied to a particular user; I have seen the issue occur with the order of logging in reversed. These are meetings that were posted to the calendar months ago, so it should not be a matter of an item being updated. We have not upgraded our Exchange environment (still running 2003), but this is a new machine running Windows 7 Professional on a domain with Office 2010. Are there any quirks or settings that I am missing or not aware of?

    Read the article

  • MindTouch with fcgid (FastCGI Apache worker)

    - by Stephan Kristyn
    Has anyone got DekiWiki / MindTouch running with mod_fcgid? I get 504 and 500 errors all the time:

        mod_fcgid: can't apply process slot for /var/www/html/dekiwiki/index.php
        [Tue Dec 28 06:14:03 2010] [warn] (104)Connection reset by peer: mod_fcgid: read data from fastcgi server error.
        [Tue Dec 28 06:14:03 2010] [error] [client 92.75.107.53] Premature end of script headers: index.php

    I'm currently fiddling with SuExec and FastCGI wrapper directory permissions, because I also employ a chrooted SFTP jail. Sometimes the first line about the process slot does not appear now. I found a solution in German and will work through it now: http://debianforum.de/forum/viewtopic.php?f=8&t=122758&start=15
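
    "can't apply process slot" generally means mod_fcgid ran out of process slots or timed out spawning PHP; a hedged tuning sketch (directive names are the post-2.3.6 spellings; older releases use MaxProcessCount / IPCCommTimeout instead):

        # httpd.conf -- illustrative mod_fcgid limits
        FcgidMaxProcesses 64
        FcgidMaxProcessesPerClass 16
        FcgidConnectTimeout 30
        FcgidIOTimeout 120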

    Read the article

  • Upgrading my home network to Gigabit Ethernet and Wireless-N turns out slower than before

    - by Raheel Khan
    My home network has three desktops, three laptops and some NAS drives. All desktops and NAS drives support Gigabit LAN, and all laptops support Wireless-N, but I was running a 100BASE-T switch. I recently purchased a Gigabit Ethernet switch and a Wireless-N ADSL modem-router. After upgrading, I noticed that wireless file transfer speeds from laptop to NAS and vice versa became terribly slow, possibly even slower than before the upgrade. The transfer speeds from desktop to NAS (wired) have improved, though. As an example, copying a 50GB file from laptop to NAS was estimated at 15 hours! Is there something I can do to improve this? Also, should I consider buying a dedicated wireless access point for speed rather than using the wireless modem-router?
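
    A useful first step is separating raw link throughput from file-copy overhead; iperf is the usual tool (host name illustrative):

        # On the NAS or a wired desktop
        iperf -s
        # On the laptop, over wireless: 30-second TCP throughput test
        iperf -c nas.local -t 30

    If the raw number is also poor, the usual suspects are 2.4 GHz congestion and mixed-mode routers falling back to legacy rates.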

    Read the article

  • Dedicated server not responding. Malicious?

    - by user2801881
    My dedicated server dies for days on end. As soon as I reboot, it dies again after about 20 seconds; then it will just work again and be fine for another week or so. I'm convinced it's malicious. I'm not sure what results or readings I can give you, so just ask and I will provide whatever is needed. netstat (top 20 connections):

        7 79.142.88.250
        8 120.202.249.19
        8 159.226.21.62
        8 188.168.38.102
        8 202.114.6.37
        8 222.62.207.70
        9 60.191.35.42
        10 112.124.46.186
        10 116.228.55.184
        10 181.133.218.11
        10 222.90.111.146
        11 183.136.146.110
        12 124.127.51.135
        12 92.225.24.24
        13 221.176.23.242
        15 119.10.115.165
        16
        17 218.6.224.66
        21 116.228.55.217
        24 114.112.194.19

    Top CPU usage seems to add up to about 10%, and the mail queue is empty. Thanks in advance.
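
    For reference, a per-source connection count like the list above is typically generated with something along these lines (a sketch, not necessarily the exact command used):

        # Count connections per remote IP, busiest last
        netstat -ntu | awk 'NR>2 {print $5}' | cut -d: -f1 | sort | uniq -c | sort -n | tail -20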

    Read the article

  • IRC Server For 50 People?

    - by Ferman
    I've seen this question before and I am sure it will be fine, but I want assurance before I do it. I am looking at running an IRC server with 128MB of RAM and 500GB of bandwidth. The server will usually handle at least 15 people throughout the day, but there will be times with at least 50 people, maybe more. I might also have a few extra channels, probably at least 5, with the same people on one channel also in the others. I am also trying to decide on the software; I am looking at using ngIRCd (http://ngircd.barton.de/), but I am not sure. Does anyone have any recommendations? Thank you in advance to anyone who helps. :)
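
    For scale, 50-odd simultaneous clients is a trivial load for 128MB of RAM. A hedged minimal ngIRCd config sketch (option names from the sample ngircd.conf; placement has shifted between releases):

        # ngircd.conf -- illustrative minimal setup
        [Global]
            Name = irc.example.net
            Info = Small private IRC server
            # In newer releases MaxConnections lives under [Limits]
            MaxConnections = 200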

    Read the article

  • Hyper-V core NIC speeds and registry changes

    - by gary
    Good afternoon. On a Dell PE T610 I have Hyper-V core running, with 2 x Broadcom BCM5709C NetXtreme II GigE NICs installed. I have noticed that copying large files (17GB, for example) from a physical server on the network to the Hyper-V host's local drive [not a VM guest] is very slow compared to copying between physical servers: the 17GB file takes 30 minutes to the Hyper-V host, but 15 minutes physical to physical. Can someone tell me exactly which registry values I should disable on the Hyper-V NICs to improve performance? So far I have gone to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318} and set the following to 0 on both physical NICs:

        *LSOv1IPv4
        *LSOv2IPv6
        *TCPUDPChecksumOffloadIPv4
        *TCPUDPChecksumOffloadIPv6

    Should I also disable *TCPConnectionOffloadIPv4 and *TCPConnectionOffloadIPv6? Many thanks in advance.
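
    On 2008/R2-era hosts the corresponding global toggles can also be flipped with netsh instead of direct registry edits; a hedged sketch (change one thing at a time and re-test):

        rem Disable TCP Chimney offload, RSS and receive autotuning on the host
        netsh int tcp set global chimney=disabled
        netsh int tcp set global rss=disabled
        netsh int tcp set global autotuninglevel=disabled
        rem Verify the result
        netsh int tcp show global

    The *TCPConnectionOffload* values correspond to Chimney (TOE), which is a frequently reported culprit on BCM5709-family NICs, so disabling them is a reasonable experiment.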

    Read the article

  • External display connected to MacBook Pro won't sleep

    - by Moiz
    I have a MacBook Pro with OS X 10.6.8 connected to an external display (HP 2311x) using a Thunderbolt-to-HDMI cable. My MacBook's display goes to sleep after 15 minutes of inactivity, but the external display won't. It tries to go to sleep but wakes up immediately, in a never-ending loop, and I have to turn the external display off manually. Is there some setting I need to change to make the external display sleep when the MacBook's display does? Any suggestions? Thanks in advance!
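
    A hedged first diagnostic is asking the power manager what is holding or re-waking the display at the moment it bounces back:

        # List active power assertions; a display-sleep assertion names the culprit
        pmset -g assertions
        # Current display-sleep timer
        pmset -g | grep displaysleep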

    Read the article

  • What is the best way to register a domain name in China?

    - by Trevor Allred
    What is the best way (for cost and safety) to register a .cn domain name? We recently received two emails from companies in China (px-vps.org and one other) saying that another company was trying to steal/register our .com domain name in China (.cn). They then gave us a list of 15 domains, from China to India, that we should register through their company. Now they are saying we need to register for a 5-year minimum at $100 per domain; it's starting to sound like a $10,000 scam. We called 101domains, and they said it would be $30 for the registration fee and $30 for the law firm in Shanghai. Whom should I go through to avoid spending a lot of money and to be sure we don't get ripped off in the process?

    Read the article

  • How to proceed with setting up a secondary MySQL Linux slave?

    - by Algorist
    I have a MySQL master and slave in production, and I want to set up an additional slave. There is around 15 terabytes of data in the database, in a mix of MyISAM and InnoDB tables. I am thinking of the options below:

    1. Shut down the master database and copy the MySQL data folder to the secondary slave. Can InnoDB tables be copied like this?
    2. Run FLUSH TABLES WITH READ LOCK, scp the files to the new slave, and unlock the tables. This is possible for MyISAM tables; can I do the same for InnoDB tables too?

    Thanks for looking at the question.
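
    On the InnoDB question: FLUSH TABLES WITH READ LOCK is not sufficient to copy InnoDB files from a running server, because background threads keep writing; a cold copy (mysqld stopped) is safe. A hedged outline that uses the existing slave as the source so the master stays online (hosts and binlog coordinates are illustrative):

        # On the existing slave
        mysql -e "STOP SLAVE; SHOW SLAVE STATUS\G"   # note Relay_Master_Log_File / Exec_Master_Log_Pos
        /etc/init.d/mysqld stop
        rsync -a /var/lib/mysql/ newslave:/var/lib/mysql/
        # On the new slave, after starting mysqld
        mysql -e "CHANGE MASTER TO MASTER_HOST='master.example.com',
                  MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=456789; START SLAVE;"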

    Read the article

  • Authenticating Linux users against AD without Likewise Open

    - by Graeme Donaldson
    Has anyone got their Linux systems authenticating against Active Directory without using Likewise Open? We are close to implementing Likewise Open, but first we would need to rename roughly 70 of our 110 Linux servers so that their hostnames are no longer than 15 characters. This is required because Likewise Open actually joins the Linux computer to the domain, and it fails to do so if the hostname is too long, due to a legacy NetBIOS naming limitation. Is there a way to authenticate via AD using only LDAP, perhaps? What are the advantages and disadvantages of doing it that way versus just using Likewise?
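
    Plain LDAP binds against AD do work, provided POSIX attributes (uidNumber, gidNumber, etc.) are populated in AD, e.g. via Services for UNIX. A hedged nss_ldap/pam_ldap sketch (every DN and host below is illustrative):

        # /etc/ldap.conf
        uri ldap://dc1.example.com
        base dc=example,dc=com
        binddn cn=svc-ldap,cn=Users,dc=example,dc=com
        bindpw secret
        # AD uses sAMAccountName rather than uid
        pam_login_attribute sAMAccountName
        nss_base_passwd cn=Users,dc=example,dc=com?sub

    The trade-off versus Likewise is losing Kerberos sign-on and machine-account trust; the gain is keeping long hostnames.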

    Read the article

  • How long should diskpart take?

    - by sam
    I am using diskpart to extend a drive that is actually a VHD, and I have already extended the VHD. It's on Windows 2003; the C drive doesn't contain the swap file, and the available space is contiguous. However, I didn't see the note that the Resource Kit diskpart available for download is not for Windows 2003, so I did the extend using the Windows 2000 version. I'm not sure if this is the reason, but diskpart has now been sitting there for about 15 minutes, and it only has to extend by 10GB. Should it be taking this long? Am I asking for trouble now that I've used a Windows 2000 version of diskpart on a Windows 2003 machine (VM)?
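
    For comparison, the native sequence looks like this (volume number illustrative); extending an NTFS data volume is normally near-instant, since only filesystem metadata is updated, so 15 minutes for 10GB is a bad sign:

        diskpart
        DISKPART> list volume
        DISKPART> select volume 1
        DISKPART> extend

    One caveat worth knowing: the native Windows 2003 diskpart refuses to extend the running boot/system volume, which is presumably why a Resource Kit build was sought in the first place.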

    Read the article

  • Computer disappeared from Workgroup

    - by Moayad Mardini
    I have a LAN with 8 computers running Windows XP and one (computer A) running Windows Server 2003. When computer A boots, it shows up in the workgroup on all computers, but after 15-30 minutes it disappears from the workgroup on all of them, including itself. However, files can still be transferred to and from A via full-path shortcuts, and mapped network drives still work. I don't know whether this is related, but if that particular computer is turned on while the LAN cable is plugged in, it doesn't connect to the network; it does connect if the cable is plugged in after the computer has started. Any tips on why this is happening? Thanks.
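
    Symptoms like this usually involve the Computer Browser service and master-browser elections rather than real connectivity; hedged checks on the 2003 machine:

        rem Is the Browser service running?
        sc query browser
        rem Which NetBIOS names are registered?
        nbtstat -n
        rem Whether this host participates in browse-master elections (Yes/No/Auto)
        reg query "HKLM\SYSTEM\CurrentControlSet\Services\Browser\Parameters" /v MaintainServerList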

    Read the article

  • Do I need to rebuild the array after putting in a new hot spare?

    - by Shade34321
    My experience with RAID is minimal, so I figured I'd come and ask here. We have a 16-drive RAID system with 15 drives in RAID 5 and a hot spare left over. Recently one of the drives in the RAID was giving errors, so I cloned it over to the hot spare and put a new drive in its spot, making the new drive the hot spare as I was told. I was also told to rebuild the array after putting in the new drive as a hot spare, so I tried, but wasn't able to. So my question is: do I need to rebuild it, and if so, why did it tell me I couldn't? Thanks! UPDATE: I've come back to work and looked at the RAID; it pulled the hot spare into the array and kicked out another drive.

    Read the article

  • Computer won't power on

    - by briskmojo
    It was working fine for over a year, and now it won't boot. The LEDs on the graphics card and one on the motherboard labeled PWR glow when it's plugged in, but nothing happens when I push power, and shorting the switch pins does nothing either. If I pop out the CMOS battery, put it back in, and then try again, the fans lurch but nothing more happens. Shorting pins 15 and 16 turns the PSU on, and when the 24-pin connector is attached to the motherboard it starts up briefly then stops. If I plug in the CPU header it returns to what I described: no power, but a lurch after replacing the CMOS battery. Should I be shopping for a new PSU, or could there be another problem?

    Read the article

  • Getting "GRUB loading ... no such partition"?

    - by shameedp
    I am dual-booting Windows 7 and Linux. The drive has 20 GB, of which 5 GB was allocated to Windows 7 (the original install) and 15 GB to Linux. Since the space for Windows was very low, I used the EaseUS partition manager to delete my Linux OS and merge the unused space into my C: drive, so it is now 20 GB. The thing is, after the reboot I get:

        GRUB loading.
        Welcome to GRUB!
        error: no such partition.
        entering rescue mode...

    The problem I am facing is that I don't have a DVD drive to resolve it using recovery mode. In the ls command I have: (hd0) (hd0,msdos8) (hd0,msdos7) (hd0,msdos6) (hd0,msdos5) (hd0,msdos2) (hd0,msdos1). Kindly help me out; waiting for your reply.
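
    Since the partition that held GRUB's files is gone, the usual fix is restoring the Windows boot code in the MBR; without a DVD drive, a bootable Windows 7 USB installer/recovery drive serves the same purpose. A hedged sketch from its recovery Command Prompt:

        rem Repair your computer -> Command Prompt
        bootrec /fixmbr
        bootrec /fixboot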

    Read the article

  • No dual CPU support in VirtualBox with a CPU that doesn't support multicore?

    - by djangofan
    With VMware it works fine and I can run multiple cores in a VMware image; with Sun VirtualBox I can only give an image 1 CPU. It's annoying. Why does Sun VirtualBox not work the same as VMware in this respect? My CPU is:

        XEON 3.00GHz Intel 90nm 2MB Cache QUAD CPU x14 Socket 604 mPGA
        Family 15 Model 4(04) Stepping 3 Revision 05
        MMX SSE3 XD

    SIV.exe tells me:

        No virtual machine extensions
        x86 with 64-bit support, NO IA64 support
        MPS but with NO MCP
        2 physical processors, 2 cores, 4 logical processors
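
    The likely explanation: VirtualBox supports SMP guests only when hardware virtualization (VT-x/AMD-V) is present, and the "No virtual machine extensions" line above indicates this Xeon predates VT-x. On capable hardware the setting would be (VM name illustrative):

        # Requires VT-x/AMD-V; refused on CPUs without virtualization extensions
        VBoxManage modifyvm "MyVM" --cpus 2 --ioapic on --hwvirtex on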

    Read the article

  • Performance difference between compiled and binary Linux distributions/packages

    - by jozko
    I searched a lot on the internet and couldn't find an exact answer. There are distros like Gentoo (or FreeBSD) which do not ship binaries, only source code for packages (ports); the majority of distros use binary packages (Debian, etc.). First question: how much of a speed increase can I expect from compiled packages? How much can I get from real-world packages like Apache or MySQL, i.e. in queries per second? Second question: does a binary package mean it does not use any CPU instructions introduced after the first AMD 64-bit CPU? With 32-bit packages, does it mean the package will run on a 386 and basically not use most modern CPU instructions? Additional info:

    - I am not talking about desktops, but server environments.
    - I don't care about compile time.
    - I have a number of servers, so a speed increase of more than 15% would justify using source packages.
    - Please, no flame wars.

    Thank you very much.
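
    On the second question: binary packages are normally built for a conservative baseline (i686 for 32-bit, plain generic x86-64 for 64-bit), so newer instruction sets go unused unless the program does runtime CPU detection. Source-based builds can target the local CPU; a hedged Gentoo-style sketch:

        # /etc/portage/make.conf -- -march=native targets the build host's CPU
        CFLAGS="-O2 -pipe -march=native"
        CXXFLAGS="${CFLAGS}"
        MAKEOPTS="-j4"

    Reported real-world gains for I/O- and memory-bound servers like Apache or MySQL are usually small, well under the 15% threshold mentioned.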

    Read the article

  • Tweaks in nvidia-settings only work while the program is open

    - by Igoru
    I have two monitors. The primary one (17") is a year old, and the secondary (15") is really old, maybe four years. The old screen has problems displaying colors; they are a bit darker, which is a problem when I'm viewing pictures. I have a GeForce 9800, so I changed some settings in nvidia-settings that suit this second screen better, but those settings are only applied when I open nvidia-settings. The first time I configured this, it worked; then I turned the computer off, turned it on the next day, and the screen was dark again. As soon as I open nvidia-settings again, the screen gets lighter. How can I make those settings permanent and loaded at startup?
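
    nvidia-settings writes its tweaks to ~/.nvidia-settings-rc when it exits, but nothing reloads that file at login by default; the usual fix is adding this to the desktop's Startup Applications:

        # Reapply saved nvidia-settings values without opening the GUI
        nvidia-settings --load-config-only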

    Read the article
