Search Results

Search found 39456 results on 1579 pages for 'why do you'.


  • Shrink a Volume Group in LVM / Linux in order to install Windows on the freed space

    - by Stephan Kristyn
    I have a volume group with 40 GB of unused space, and I want to turn that space into a separate partition so I can install Microsoft Windows 7 on it. I do not have any extra space on the drive - that is why I want to shrink the VG! The volume group berta resides on sda2 and consists of lv_root, lv_swap and the unused space. I want it to end up with only lv_root and lv_swap, and have a separate partition made out of the unused space; Microsoft Windows 7 has to be installed on that partition. I do not understand why Linux made simple things complicated. I utterly hate LVM and think it's absolute bollocks.

    Useful source: http://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-system-config-lvm.html

    Edit: I found the answer. The necessary steps show how complicated LVM really is. In my opinion it is best to avoid LVM until pvresize matures as promised in its man page. Answer: http://fedorasolved.org/Members/zcat/shrink-lvm-for-new-partition

    If you run into problems when you want to remove lv_swap, even in rescue mode, then try:

        swapoff /dev/vg_1/lv_swap
        lvchange -an /dev/vg_1/lv_swap
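
    For reference, a rough sketch of the shrink-from-the-inside-out sequence the linked answer relies on. Volume names, sizes and the ext3 filesystem type are assumptions - adjust them to the real layout, run everything from rescue/live media with the filesystems unmounted, and back up first:

        e2fsck -f /dev/berta/lv_root              # check the filesystem before resizing
        resize2fs /dev/berta/lv_root 19G          # shrink the fs a little below the target LV size
        lvreduce -L 20G /dev/berta/lv_root        # shrink the LV, keeping it larger than the fs
        resize2fs /dev/berta/lv_root              # grow the fs back out to fill the 20G LV
        swapoff /dev/berta/lv_swap                # swap must be off before deactivating it
        lvchange -an /dev/berta/lv_swap
        pvresize --setphysicalvolumesize 25G /dev/sda2   # shrink the PV; the freed extents must sit at its end (pvmove them first if not)
        # finally shrink the sda2 partition itself with fdisk/parted and create a new
        # partition in the freed ~40 GB for the Windows 7 installer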


  • PHPMyAdmin: "General relation features: Disabled"

    - by Simón
    I've been looking around for something like this for a while, and I've found some tips on similar issues, but not exactly the same. I really don't know what to do. I downloaded and installed WAMP, and I have a MySQL and phpMyAdmin setup according to the common instructions that can be found everywhere (securing the MySQL root account, etc.). When I log into phpMyAdmin (either as root or as pma), I see the following message at the bottom of the page:

        The additional features for working with linked tables have been deactivated. To find out why click here.

    And when I follow the link, I get a page with the following:

        Server: localhost
        $cfg['Servers'][$i]['pmadb'] ... OK
        $cfg['Servers'][$i]['relation'] ... OK
        General relation features: Disabled
        $cfg['Servers'][$i]['table_info'] ... OK
        Display Features: Disabled
        $cfg['Servers'][$i]['table_coords'] ... OK
        $cfg['Servers'][$i]['pdf_pages'] ... OK
        Creation of PDFs: Disabled
        $cfg['Servers'][$i]['column_info'] ... OK
        Displaying Column Comments: Disabled
        Bookmarked SQL query: Disabled
        Browser transformation: Disabled
        $cfg['Servers'][$i]['history'] ... OK
        SQL history: Disabled
        $cfg['Servers'][$i]['designer_coords'] ... OK
        Designer: Disabled

    Somebody please explain to me: why, if all the settings are "OK", do the features remain "Disabled"?

    Note: at first all the settings were "not OK"; I managed to add the settings to config.inc.php and then created the tables using scripts/create_tables.php. Of course I have already tried restarting the server and clearing the browser cache (several times, so I am sure the problem comes from elsewhere).
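
    One thing worth checking (an assumption, not a confirmed diagnosis): the control user named in config.inc.php must be able to read and write the relation tables, and phpMyAdmin only re-evaluates the relation features on a fresh login. A sketch, assuming the control user is 'pma' and the relation database created by scripts/create_tables.php is 'phpmyadmin':

        mysql -u root -p -e "GRANT SELECT, INSERT, UPDATE, DELETE ON phpmyadmin.* TO 'pma'@'localhost' IDENTIFIED BY 'pmapass'; FLUSH PRIVILEGES;"

    After that, log out of phpMyAdmin and log back in so the feature check runs again.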


  • Can't add a SQL Server user: The login already has an account under a different user name.

    - by Zian Choy
    Environment: SQL Server 2005 Express, Windows 7.

    When I installed SQL Server, I followed the instructions at http://msdn.microsoft.com/en-us/library/aa905868.aspx to set my computer's admin account as the SQL Server admin. However, when I try to access a database on my computer through Visual Studio 2008, I get the following error message:

        Microsoft Visual Studio
        The database 'Parkinsons' does not exist or you do not have permission to see it.
        Would you like to attempt to create it?
        [Yes] [No]

    Then, if I go to SQL Server and add a user to that database, I get the following error message:

        TITLE: Microsoft SQL Server Management Studio Express
        Create failed for User 'zian'. (Microsoft.SqlServer.Express.Smo)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=9.00.2047.00&EvtSrc=Microsoft.SqlServer.Management.Smo.ExceptionTemplates.FailedOperationExceptionText&EvtID=Create+User&LinkId=20476

        ADDITIONAL INFORMATION:
        An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.Express.ConnectionInfo)
        The login already has an account under a different user name. (Microsoft SQL Server, Error: 15063)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=09.00.4053&EvtSrc=MSSQLServer&EvtID=15063&LinkId=20476

    Why doesn't VS piggyback on the dbo account? And if the dbo account is unusable, why won't SQL Server let me make an account so that I can access my own data?


  • Cannot access server shares over VPN

    - by DuncanDavies
    I've set up a single hosted server to use as a development environment for a web-based application. The web app is served up fine on port 80, but I'm struggling to get my VPN to behave as I'd expect, so the developers don't have the access they require.

    The VPN connects fine, and the clients can access the back-end database (SQL Server), which resides on the server, using the client tools on their laptops. However, they cannot access any shared folders. The server's local IP address is 10.x.x.x, and I've assigned a static IP address pool to RRAS (192.168.100.1 - 20). The clients pick up a valid IP address (e.g. 192.168.100.9) when they connect. There is no name resolution set up, neither DNS nor WINS.

    When connected via VPN the clients can ping the server (192.168.100.1) by IP address, but cannot map a drive to a shared folder (net use * \\192.168.100.1\xxxxx) - they get 'System error 53 has occurred. The network path was not found.' I don't understand why I can ping by the IP but not map by it.

    Some details: the server OS is Windows 2008 (Datacenter), the VPN is SSTP using RRAS, the clients are all Windows 7, and I've tried temporarily disabling the firewalls.

    So, why can we not access the file system when everything else (ping, RDP, SQL Server client tools) works? Thanks for your help. Duncan


  • OpenVZ with bridged interfaces and VLAN

    - by Deimosfr
    Hi, I've got a problem with OpenVZ and a bridged VLAN. Here is my configuration:

        +-------+    +-----------+      +---------+  br0  +------+
        |       |    | OpenBSD   |----->| Debian  |------>|VE101 |
        | WAN   |--->| Router    |      | OpenVZ  |       +------+
        |       |    | Firewall  |----->| br0 br1 |  br1  +------+
        +-------+    +-----------+      +---------+------>|VE102 |
                                            |br0          +------+
                                            |VLAN br0.110
                                            v
                                        +---------+
                                        |VE103.110|
                                        +---------+

    I can't make the VLAN work on br0 (br0.110) and I would like to understand why. I don't have any switch, so there is no problem with an unmanaged switch in between.

    I've configured a VLAN interface on OpenBSD in /etc/hostname.vlan110:

        inet 192.168.110.254 255.255.255.0 NONE vlan 110 vlandev sis1

    and it seems to be working fine. I've also adapted my PF configuration to work with the VLAN, but I don't see any incoming traffic.

    On my Debian Lenny box, here is my interfaces configuration:

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # br0
        auto br0
        iface br0 inet static
            address 192.168.100.1
            netmask 255.255.255.0
            gateway 192.168.100.254
            network 192.168.100.0
            broadcast 192.168.100.255
            bridge_ports eth0
            bridge_fd 9
            bridge_hello 2
            bridge_maxage 12
            bridge_stp off

        # VLAN 110
        auto br0.110
        iface br0.110 inet static
            address 192.168.110.1
            netmask 255.255.255.0
            network 192.168.110.0
            gateway 192.168.110.254
            broadcast 192.168.110.255
            pre-up vconfig add br0 110
            post-down vconfig rem br0.110

    It looks OK, but when I start my VE, here is the message:

        ...
        Configure veth devices: veth103.0
        Adding interface veth103.0 to bridge br0.110 on CT0 for VE103
        can't add veth103.0 to bridge br0.110: Operation not supported
        VE start in progress...

    So I've got one error here. I've followed this documentation: http://wiki.openvz.org/VLAN but it doesn't work. I've certainly missed something, but I don't know what. Could someone help me please? Thanks
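
    One workaround that is often suggested (a sketch only - interface names are assumptions, and it is not verified against this exact setup): a VLAN sub-interface of a bridge such as br0.110 is not itself a bridge, so veth ports cannot be enslaved to it; instead, tag the VLAN on the physical NIC and build a dedicated bridge on top of that, then point VE103 at the new bridge.

        vconfig add eth0 110          # creates eth0.110, carrying VLAN 110
        brctl addbr br110             # a real bridge for that VLAN
        brctl addif br110 eth0.110
        ip link set eth0.110 up
        ip link set br110 up
        # then configure VE103's veth to join br110 instead of br0.110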


  • "Error: Unknown error" when trying to start virtual machine from VMware server

    - by slhck
    Problem

    We are running VMware Server 2.0.0 build-116503 on an Ubuntu 10.04 LTS server. There is a virtual machine installed, running Lotus Domino on Windows Server 2003. Ever since a sudden power failure last week, the virtual machine won't start up properly. When I run the command:

        vmrun -T server -h https://127.0.0.1:8333/sdk -u root -p jk2x2208 start "[standard] lotus/test.vmx"

    ... after 30 seconds it displays:

        Error: Unknown error

    That's about everything I get. I know the command is right, since that's what we've used all along. This first happened last Saturday after a scheduled backup shutdown, and somehow I was able to start the VM again. This week it happened again, and I can't get it back up. Occasionally, I also get:

        Error: Cannot connect to the virtual machine

    When I get this and then re-run the start command, it seemingly works. Why is this so random? Which configuration could have been messed up?

    What I've tried / other info

    I already shut down VMware itself with /etc/init.d/vmware stop. This works. I tried to start VMware again with /etc/init.d/vmware start. It complains that it's "not configured", which is why I had to rm /etc/vmware/not_configured and then try to start again. There have been no software updates on the machine, and no configuration changes.
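
    A couple of hedged diagnostics that may narrow down the "Unknown error" (the log locations are assumptions for VMware Server 2.0 on Linux, not verified paths):

        # see what the host daemon thinks is registered and running before retrying the start
        vmrun -T server -h https://127.0.0.1:8333/sdk -u root -p '<password>' list
        # the hostd log and the per-VM log usually contain the real error text
        tail -n 50 /var/log/vmware/hostd.log
        tail -n 50 "/var/lib/vmware/Virtual Machines/lotus/vmware.log"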


  • MOSS2007 tries to use ActiveDirectory when I have configured an alternative membership provider

    - by glenatron
    I've got a MOSS site that I am trying to configure using forms authentication and absolutely any kind of membership provider whatsoever. Thus far Active Directory has proved obstructively difficult, so I've just whipped up a simple stub membership provider and put it in the GAC. It's a very basic and simple provider, but it works fine with an ASP.NET site; I just can't make it work with SharePoint.

    On SharePoint I get the following error when I look for StubProvider:Bob (or anything else, for that matter) from the "Policy for Web Application" people picker:

        Error in searching user 'StubProvider:bob' : System.ComponentModel.Win32Exception: Unable to contact the global catalog server
           at Microsoft.SharePoint.Utilities.SPActiveDirectoryDomain.GetDirectorySearcher()
           at Microsoft.SharePoint.WebControls.PeopleEditor.SearchFromGC(SPActiveDirectoryDomain domain, String strFilter, String[] rgstrProp, Int32 nTimeout, Int32 nSizeLimit, SPUserCollection spUsers, ArrayList& rgResults)
           at Microsoft.SharePoint.Utilities.SPUserUtility.SearchAgainstAD(String input, SPActiveDirectoryDomain domainController, SPPrincipalType scopes, SPUserCollection usersContainer, Int32 maxCount, String customQuery, TimeSpan searchTimeout, Boolean& reachMaxCount)
           at Microsoft.SharePoint.Utilities.SPActiveDirectoryPrincipalResolver.SearchPrincipals(String input, SPPrincipalType scopes, SPPrincipalSource sources, SPUserCollection usersContainer, Int32 maxCount, Boolean& reachMaxCount)
           at Microsoft.SharePoint.Utilities.SPUtility.SearchPrincipalFromResolvers(List`1 resolvers, String input, SPPrincipalType scopes, SPPrincipalSource sources, SPUserCollection usersContainer, Int32 maxCount, Boolean& reachMaxCount, Dictionary`2 usersDict).

    The provider is named as the authentication provider for the site collection in question. As far as I can tell, this means SharePoint is still trying to access Active Directory rather than talking to the provider I'm asking it to use.

    My SharePoint Central Administration configuration includes this:

        <membership>
          <providers>
            <add name="StubProvider" type="StubMembershipProvider.Provider, StubMembershipProvider, Version=1.0.0.0, Culture=neutral, PublicKeyToken=5bd7e2498c3e1a03" />
          </providers>
        </membership>

    And also:

        <PeoplePickerWildcards>
          <clear />
          <add key="StubProvider" value="%" />
        </PeoplePickerWildcards>

    Is there a clear reason why this would not be accessible from the people picker, or why it is still trying to use Active Directory? I've made sure I reset IIS and even restarted the server to see if either of those helped, but they made no difference.


  • Yum Error Installing Git from kernel.org Repo

    - by Lance
    I want to install the latest version of Git using yum and the RPM repository on kernel.org, but adding the repo to yum.repos.d causes yum to fail with checksum errors. The prevailing solution to this issue seems to be to simply use the repository at Webtatic, as answered here on superuser. I know I can also install an older version of Git using the EPEL repo, or compile from the latest source tarball, but honestly I want to understand why I'm having issues using the kernel.org repo. Here's the workflow, after a clean install of CentOS 5.5 and "yum update":

        [root]# wget -P /etc/yum.repos.d/ http://kernel.org/pub/software/scm/git/RPMS/git.repo
        [root]# yum clean all
        [root]# yum repolist
        Loaded plugins: fastestmirror
        Determining fastest mirrors
         * addons: mirrors.netdna.com
         * base: mirror.clarkson.edu
         * epel: serverbeach1.fedoraproject.org
         * extras: centos.mirror.nac.net
         * updates: mirror.cogentco.com
        addons                  |  951 B   00:00
        addons/primary          |  202 B   00:00
        base                    | 2.1 kB   00:00
        base/primary_db         | 1.6 MB   00:01
        epel                    | 3.7 kB   00:00
        epel/primary_db         | 2.8 MB   00:01
        extras                  | 2.1 kB   00:00
        extras/primary_db       | 188 kB   00:00
        git                     | 1.2 kB   00:00
        git/primary             | 155 kB   00:00
        http://www.kernel.org/pub/software/scm/git/RPMS/i386/repodata/primary.xml.gz: [Errno -3] Error performing checksum
        Trying other mirror.
        git/primary             | 155 kB   00:00
        http://www.kernel.org/pub/software/scm/git/RPMS/i386/repodata/primary.xml.gz: [Errno -3] Error performing checksum
        Trying other mirror.
        Error: failure: repodata/primary.xml.gz from git: [Errno 256] No more mirrors to try.

    Any suggestions as to a solution, or details on why the kernel.org repo has this issue? (Sorry I can't include more links to my references, but I don't have the reputation for that yet.)
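
    One cause commonly reported for "[Errno -3] Error performing checksum" on EL5 (an assumption here, not verified against this particular repo) is repository metadata generated with SHA-256 checksums, which the stock Python 2.4 on CentOS 5 cannot verify without the python-hashlib module from EPEL. A sketch:

        yum -y install python-hashlib
        yum clean all
        yum makecache
        yum install git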


  • Jobs with anacron won't run

    - by mareser
    I would like to run two bash scripts daily using anacron in order to back up some data. Unfortunately I can't figure out why said scripts are not executed. For test purposes I let cron execute the scripts and it worked fine.

    cat /etc/anacrontab gives:

        # /etc/anacrontab: configuration file for anacron
        # See anacron(8) and anacrontab(5) for details.

        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

        # These replace cron's entries
        1         5    cron.daily     nice run-parts --report /etc/cron.daily
        7         10   cron.weekly    nice run-parts --report /etc/cron.weekly
        @monthly  15   cron.monthly   nice run-parts --report /etc/cron.monthly
        1         5    TB_bak         /bin/sh /home/vasco2/Dropbox/Scripts/backup_TB.sh
        1         5    key_db_bak     /bin/sh /home/vasco2/Dropbox/Scripts/bak_key_db.sh

    The output of ls ~/Dropbox/Scripts/ is:

        backup_TB.sh  bak_key_db.sh

    I use Linux Mint Katya. uname -a gives:

        Linux vasco2 2.6.38-8-generic-pae #42-Ubuntu SMP Mon Apr 11 05:17:09 UTC 2011 i686 i686 i386 GNU/Linux

    I would be very happy if somebody could point me in the right direction on why those scripts won't get executed.

    P.S.: There is no anacron tag on superuser.com. Maybe somebody wants to change that.
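
    A small debugging sketch that usually helps with silent anacron jobs: force the jobs to run right now, in the foreground, with messages going to stderr instead of syslog, and check the per-job timestamp files.

        sudo anacron -d -f -n        # -d don't fork, -f ignore timestamps, -n ignore the start-up delay
        ls -l /var/spool/anacron/    # one timestamp file per job identifier (TB_bak, key_db_bak, ...)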


  • Debian: What are these files in /sys/devices/pci0000:00/ for?

    - by muhuk
    I am running Debian Squeeze on an MSI M670 laptop. I have the following files on my root drive, each 256 MB:

        file:///sys/devices/pci0000:00/0000:00:05.0/resource1
        file:///sys/devices/pci0000:00/0000:00:05.0/resource1_wc

    Here is my lspci output:

        muhuk@debian:~$ lspci
        00:00.0 RAM memory: nVidia Corporation C51 Host Bridge (rev a2)
        00:00.2 RAM memory: nVidia Corporation C51 Memory Controller 1 (rev a2)
        00:00.3 RAM memory: nVidia Corporation C51 Memory Controller 5 (rev a2)
        00:00.4 RAM memory: nVidia Corporation C51 Memory Controller 4 (rev a2)
        00:00.5 RAM memory: nVidia Corporation C51 Host Bridge (rev a2)
        00:00.6 RAM memory: nVidia Corporation C51 Memory Controller 3 (rev a2)
        00:00.7 RAM memory: nVidia Corporation C51 Memory Controller 2 (rev a2)
        00:03.0 PCI bridge: nVidia Corporation C51 PCI Express Bridge (rev a1)
        00:05.0 VGA compatible controller: nVidia Corporation C51 [GeForce Go 6100] (rev a2)
        00:09.0 RAM memory: nVidia Corporation MCP51 Host Bridge (rev a2)
        00:0a.0 ISA bridge: nVidia Corporation MCP51 LPC Bridge (rev a3)
        00:0a.1 SMBus: nVidia Corporation MCP51 SMBus (rev a3)
        00:0a.3 Co-processor: nVidia Corporation MCP51 PMU (rev a3)
        00:0b.0 USB Controller: nVidia Corporation MCP51 USB Controller (rev a3)
        00:0b.1 USB Controller: nVidia Corporation MCP51 USB Controller (rev a3)
        00:0d.0 IDE interface: nVidia Corporation MCP51 IDE (rev a1)
        00:0e.0 IDE interface: nVidia Corporation MCP51 Serial ATA Controller (rev a1)
        00:0f.0 IDE interface: nVidia Corporation MCP51 Serial ATA Controller (rev a1)
        00:10.0 PCI bridge: nVidia Corporation MCP51 PCI Bridge (rev a2)
        00:10.1 Audio device: nVidia Corporation MCP51 High Definition Audio (rev a2)
        00:14.0 Bridge: nVidia Corporation MCP51 Ethernet Controller (rev a3)
        00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration
        00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map
        00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller
        00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control
        04:04.0 FireWire (IEEE 1394): O2 Micro, Inc. Firewire (IEEE 1394) (rev 02)
        04:04.2 SD Host controller: O2 Micro, Inc. Integrated MMC/SD Controller (rev 01)
        04:04.3 Mass storage controller: O2 Micro, Inc. Integrated MS/xD Controller (rev 01)
        04:09.0 Network controller: RaLink RT2561/RT61 rev B 802.11g

    I am speculating that these have something to do with the shared RAM my GPU is using. But why a file on disk? And why two of them?
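
    A hedged way to see what those files correspond to: the resourceN / resourceN_wc entries are sysfs views of the device's PCI memory regions (here BAR1 of the GeForce Go 6100, the "_wc" one being the write-combining mapping), and sysfs is a virtual filesystem, so they take no space on the root drive. A quick check:

        lspci -v -s 00:05.0                                   # lists the device's memory regions and their sizes
        cat /sys/devices/pci0000:00/0000:00:05.0/resource     # start/end/flags of each BAR
        mount | grep sysfs                                    # /sys is virtual, not backed by the disk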


  • How to change libcurl SSL backend from gnutls to openssl on Ubuntu server

    - by Jayesh
    I am getting GnuTLS-specific errors in my Tornado web server while processing Google OpenID SSL responses. One of the suggestions I got from the Tornado mailing list is to try the OpenSSL backend instead of GnuTLS, but that doesn't seem to be straightforward on Ubuntu Server (11.10).

    On Ubuntu Server, GnuTLS support is provided by the libcurl3-gnutls package and OpenSSL curl support is provided by the libcurl4-openssl-dev package. (I don't know why the latter is named 4 and dev, but I couldn't find any other openssl+curl package in apt-cache search.) I had libcurl3-gnutls installed by default, but not libcurl4-openssl-dev, so I installed the latter and restarted the Tornado instances. That didn't seem to work; I still got the same GnuTLS errors.

    I found old discussions on the curl mailing lists regarding the problems of supporting different SSL backends in libcurl, but didn't find how exactly it is done today. So far my guess is that OpenSSL support is built into libcurl and GnuTLS is provided through a separate package (which would explain why there is no libcurl3-openssl). But how do I make libcurl pick up the OpenSSL backend and not GnuTLS? Is there some option in the libcurl/pycurl API to do this?

    I tried uninstalling libcurl3-gnutls, but apt-get prompted that it would also remove python-pycurl along with it, so that won't do.
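
    A sketch of the approach that is usually suggested for this (package names are the 11.10-era ones and should be treated as assumptions): the SSL backend is baked into whichever libcurl flavour pycurl was compiled against, so rebuild pycurl against the OpenSSL flavour and check which backend it reports.

        sudo apt-get install build-essential python-dev libcurl4-openssl-dev
        sudo pip install --upgrade pycurl                   # rebuilds the C extension against the OpenSSL libcurl
        python -c "import pycurl; print pycurl.version"     # should now mention OpenSSL rather than GnuTLS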


  • Can't connect to MySQL on CentOS 5 Error 13 - Permission Denied

    - by abszero
    OK, I have an install of CentOS 5 running as a guest OS in VirtualBox. The network card for the CentOS box is bridged with that of my host OS so that the boxes can see each other. CentOS has an IP of 192.168.1.108 and my host box has an IP of .104. Everything with regard to networking seems to be working properly, as I can access the Drupal install that is on the CentOS box from a web browser on my host box by navigating to http://192.168.1.108. However, when I try to configure the database for Drupal through the Drupal install interface, I get the "Can't connect to MySQL" error.

    First I thought this might have been a firewall issue, so I stopped iptables, but that had no effect. I thought maybe the user I had set up did not have access to the server, so I tried root, and that did not work either. Searching on the net said that I needed to provide a bind-address parameter in my.cnf, so I did that, with no change. (As a side note, the length of my my.cnf file was MUCH shorter than the ones presented online. In fact, under mysqld all I have are datadir, socket, user, and bind-address. Is this normal, or should the file be more verbose?)

    After a few hours of messing with permissions and such, I tried using 'localhost' as the value for the database server, from my host OS, and the Drupal install kicked off without a problem. So while my issue is resolved, I am curious as to why 'localhost' works and why 192.168.1.108 did not. Is there something I need to do to specifically access the MySQL box via the aforementioned IP? Thanks.
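
    For what it's worth, a hedged sketch of two things frequently behind "permission denied" when a web application on CentOS connects to MySQL by IP instead of through the local socket (whether either applies here is an assumption; the database and user names are placeholders):

        # 1. SELinux blocks Apache/PHP from making outbound network connections by default
        getsebool httpd_can_network_connect
        setsebool -P httpd_can_network_connect 1

        # 2. the MySQL account must be allowed to connect from that address, not only from localhost
        mysql -u root -p -e "GRANT ALL ON drupal.* TO 'drupal'@'192.168.1.%' IDENTIFIED BY 'secret';"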



  • RAM displayed is less than the actual amount in my Windows 7

    - by Prateek Somani
    I am using Windows 7 and Ubuntu on the same machine. Earlier I had 3 GB of RAM, but now Windows is displaying just 1 GB of RAM. Please also find the output of the free command in my Ubuntu:

                     total       used       free     shared    buffers     cached
        Mem:       1008208     904808     103400       5736      13516     239596
        -/+ buffers/cache:      651696     356512
        Swap:      3127292       10252    3117040

    Has the swap memory consumed my 2 GB of RAM? Will I be able to use the whole 3 GB of RAM in my Windows? Regards, Prateek

    Update: I tried to run the lshw command and I got the following output:

        *-memory
             description: System Memory
             physical id: 1b
             slot: System board or motherboard
             size: 1GiB
           *-bank:0
                description: SODIMM DDR3 Synchronous 1067 MHz (0.9 ns)
                product: HMT112S6BFR6C-H9
                vendor: Hynix
                physical id: 0
                serial: 2C71D069
                slot: Bottom - Slot 1
                size: 1GiB
                width: 64 bits
                clock: 1067MHz (0.9ns)
           *-bank:1
                description: SODIMM DDR3 Synchronous 1067 MHz (0.9 ns) [empty]
                product: 16JSF25664HZ-1G4F1
                vendor: Micron
                physical id: 1
                serial: FD421821
                slot: Bottom - Slot 2
                width: 64 bits
                clock: 1067MHz (0.9ns)

    Why is it able to detect the vendor/product name of the bank 1 module but not its size and other details? Has my RAM gone faulty?
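
    A hedged cross-check from the Ubuntu side, to see whether the firmware itself still reports both modules (if it does not, neither OS ever gets a chance to use the second one):

        sudo dmidecode -t memory | grep -E 'Size|Locator|Speed'   # what the BIOS reports per slot
        grep MemTotal /proc/meminfo                               # what the kernel actually sees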


  • ZFS with L2ARC (SSD) slower for random seeks than without L2ARC

    - by Florian Kruse
    I am currently testing ZFS (OpenSolaris 2009.06) on an older file server to evaluate its use for our needs. Our current setup is as follows:

        Dual core (2.4 GHz) with 4 GB RAM
        3x SATA controller with 11 HDDs (250 GB) and one SSD (OCZ Vertex 2 100 GB)

    We want to evaluate the use of a L2ARC, so the current ZPOOL is:

        $ zpool status
          pool: tank
         state: ONLINE
         scrub: none requested
        config:

                NAME        STATE     READ WRITE CKSUM
                afstank     ONLINE       0     0     0
                  raidz1    ONLINE       0     0     0
                    c11t0d0 ONLINE       0     0     0
                    c11t1d0 ONLINE       0     0     0
                    c11t2d0 ONLINE       0     0     0
                    c11t3d0 ONLINE       0     0     0
                  raidz1    ONLINE       0     0     0
                    c13t0d0 ONLINE       0     0     0
                    c13t1d0 ONLINE       0     0     0
                    c13t2d0 ONLINE       0     0     0
                    c13t3d0 ONLINE       0     0     0
                cache
                  c14t3d0   ONLINE       0     0     0

    where c14t3d0 is the SSD (of course). We run IO tests with bonnie++ 1.03d; size is set to 200 GB (-s 200g) so that the test sample will never be completely in ARC/L2ARC.

    The results without the SSD (average values over several runs, which show no differences):

        write_chr     write_blk     rewrite      read_chr     read_blk      random seeks
        101.998 kB/s  214.258 kB/s  96.673 kB/s  77.702 kB/s  254.695 kB/s  900 /s

    With the SSD it becomes interesting. My assumption was that the results should in the worst case be at least the same. While the write/read/rewrite rates are not different, the random seek rate differs significantly between individual bonnie++ runs (between 188 /s and 1333 /s so far); the average is 548 +- 200 /s, so below the value without the SSD.

    So, my questions are mainly:

    Why do the random seek rates differ so much? If the seeks are really random, they should not differ much (my assumption). So even if the SSD were impairing the performance, it should be the same in each bonnie++ run.

    Why is the random seek performance worse in most of the bonnie++ runs? I would assume that some part of the bonnie++ data is in the L2ARC and random seeks on this data perform better, while random seeks on other data just perform similarly to before.
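
    A hedged way to watch what the cache device is actually doing between bonnie++ runs (the kstat statistic names are from memory and should be treated as assumptions):

        kstat -p zfs:0:arcstats | egrep 'l2_hits|l2_misses|l2_size'   # L2ARC hit/miss/fill between runs
        iostat -xn 5                                                  # per-device service times: SSD vs. the raidz disks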


  • Add shortcuts to (Windows + X) context menu

    - by KasiyA
    I want to add services.msc to the Win+X context menu in Windows 8 (x64). I know there is a similar question here, but using the Win+X Editor is not good enough, because it doesn't add an underlined accelerator key to the shortcuts it creates, and without a quick underlined key it isn't convenient. So I want to do this manually.

    The context menu folder is:

        C:\Users\User_Name\AppData\Local\Microsoft\Windows\WinX

    And the hidden desktop.ini file (in ...\WinX\group2\desktop.ini) is as follows:

        [LocalizedFileNames]
        1 - Run.lnk=@%SystemRoot%\system32\shell32.dll,-12710
        4 - Control Panel.lnk=@%SystemRoot%\system32\shell32.dll,-4161
        5 - Task Manager.lnk=@%SystemRoot%\system32\authui.dll,-12139
        3 - Windows Explorer.lnk=@%SystemRoot%\system32\shell32.dll,-22067
        2 - Search.lnk=@%SystemRoot%\system32\shell32.dll,-30517

    I copied a services.msc shortcut into the group2 folder under the path above and added this line to the desktop.ini file:

        6 - Sevices.lnk=@%SystemRoot%\system32\sevices.msc,?????

    First question: I don't know whether the line I added is correct or not, and I don't know what to use instead of -?????.

    Last question: why are the desktop.ini contents not sorted? I tried to sort them manually, but when I restarted Explorer they were out of order again. Why?


  • Clarification on signals (sighup), jobs, and the controlling terminal

    - by asolberg
    So I've read two different perspectives and I'm trying to figure out which one is right.

    1) Some sources online say that signals sent from the controlling terminal are ONLY sent to the foreground process group. That means if you want a process to continue running in the background after you log out, it is sufficient to simply suspend the job (Ctrl-Z) and resume it in the background (bg). Then you can log out and it will continue to run, because SIGHUP is only sent to the foreground job. See http://blog.nelhage.com/2010/01/a-brief-introduction-to-termios-signaling-and-job-control/ - "...In addition, if any signal-generating character is read by a terminal, it generates the appropriate signal to the foreground process group..."

    2) Other sources claim you need to use the "nohup" command at the time the program is executed, or, failing that, issue a "disown" command during execution to remove it from the jobs table that listens for SIGHUP. They say that if you don't do this, your process will exit when you log out, even if it is running in a background process group. For example: http://docstore.mik.ua/orelly/unix3/upt/ch23_11.htm - "...If I log out anyway, the shell sends my background job a HUP signal..."

    In my own experiments with Ubuntu Linux it seems like 1) is correct. I executed the command "sleep 20 &", then logged out, logged back in and did a "ps aux". Sure enough, the sleep command was still running. So why do so many people seem to believe 2)? And if all you have to do is place a job in the background to keep it running, why do so many people use "nohup" and "disown"?
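
    Both behaviours can actually be reproduced from bash itself, which is part of the confusion: whether a login shell forwards SIGHUP to its background jobs on exit is controlled by a shell option. A sketch (long_task is a placeholder command, not a real program):

        shopt huponexit            # show the current setting; it is off by default on many distros
        shopt -s huponexit         # when set, bash sends SIGHUP to its jobs as the login shell exits

        # the belt-and-braces ways to survive logout regardless of that setting
        nohup long_task &          # ignore SIGHUP from the start; output goes to nohup.out
        long_task &
        disown -h %+               # keep the job running but tell bash not to HUP it
        setsid long_task &         # detach from the controlling terminal entirely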


  • Traffic shaping on Linux with HTB: weird results

    - by DADGAD
    I'm trying to set up some simple bandwidth throttling on a Linux server and I'm running into what seems to be very weird behaviour despite a seemingly trivial config. I want to shape traffic going to a specific client IP (10.41.240.240) to a hard maximum of 75 kbit/s. Here's how I set up the shaping:

        # tc qdisc add dev eth1 root handle 1: htb default 1 r2q 1
        # tc class add dev eth1 parent 1: classid 1:1 htb rate 75Kbit
        # tc class add dev eth1 parent 1:1 classid 1:10 htb rate 75kbit
        # tc filter add dev eth1 parent 1:0 protocol ip prio 1 u32 match ip dst 10.41.240.240 flowid 1:10

    To test, I start a file download over HTTP from the said client machine and measure the resulting speed by looking at kB/s in Firefox.

    Now, the behaviour is rather puzzling: the download starts at about 10 kbyte/s and proceeds to pick up speed until it stabilizes at about 75 kbytes/s (kilobytes, not kilobits as configured!). Then, if I start several parallel downloads of that very same file, each download stabilizes at about 45 kbytes/s; the combined speed of those downloads thus greatly exceeds the configured maximum.

    Here's what I get when probing tc for debug info:

        [root@kup-gw-02 /]# tc -s qdisc show dev eth1
        qdisc htb 1: r2q 1 default 1 direct_packets_stat 1
         Sent 17475717 bytes 1334 pkt (dropped 0, overlimits 2782 requeues 0)
         rate 0bit 0pps backlog 0b 12p requeues 0

        [root@kup-gw-02 /]# tc -s class show dev eth1
        class htb 1:1 root rate 75000bit ceil 75000bit burst 1608b cburst 1608b
         Sent 14369397 bytes 1124 pkt (dropped 0, overlimits 0 requeues 0)
         rate 577896bit 5pps backlog 0b 0p requeues 0
         lended: 1 borrowed: 0 giants: 1938
         tokens: -205561 ctokens: -205561

        class htb 1:10 parent 1:1 prio 0 rate 75000bit ceil 75000bit burst 1608b cburst 1608b
         Sent 14529077 bytes 1134 pkt (dropped 0, overlimits 0 requeues 0)
         rate 589888bit 5pps backlog 0b 11p requeues 0
         lended: 1123 borrowed: 0 giants: 1938
         tokens: -205561 ctokens: -205561

    What I can't for the life of me understand is this: how come I get "rate 589888bit 5pps" with a config of "rate 75000bit ceil 75000bit"? Why does the effective rate get so much higher than the configured rate? What am I doing wrong? Why is it behaving the way it is? Please help, I'm stumped. Thanks guys.
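
    For comparison, a sketch of a more conventional HTB layout (the rates, burst and the sfq leaf are illustrative choices, not a verified fix for the numbers above): give the shaped class its own leaf qdisc, and send unmatched traffic to a separate class so it is never counted against the 75 kbit one.

        tc qdisc del dev eth1 root 2>/dev/null
        tc qdisc add dev eth1 root handle 1: htb default 20
        tc class add dev eth1 parent 1:  classid 1:1  htb rate 100mbit
        tc class add dev eth1 parent 1:1 classid 1:10 htb rate 75kbit ceil 75kbit burst 6k
        tc class add dev eth1 parent 1:1 classid 1:20 htb rate 100mbit
        tc qdisc add dev eth1 parent 1:10 handle 10: sfq perturb 10
        tc filter add dev eth1 parent 1: protocol ip prio 1 u32 \
            match ip dst 10.41.240.240/32 flowid 1:10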


  • Backup Exec 10 - Network connection to the remote agent has been lost

    - by jherlitz
    Okay, so I have 4 remote offices, all running off of a 3 Mb Ethernet connection. Two sites are part of a WAN and 2 sites are using 3 Mb connections over a site-to-site tunnel. I am using Backup Exec 2010, and I have the remote agent installed on all the remote servers.

    For the past few weeks, the backups for the two sites running over the site-to-site tunnel have been failing with the following error message:

        The network connection to the Backup Exec Remote Agent has been lost. Check for network errors.

    We used to run the site-to-site tunnel over a DSL connection; now we have changed to the 3 Mb Ethernet connection. I still have to find out whether it has been failing ever since we changed, or only recently. Backup Exec support is telling me it is a network issue, but my connection to those servers is solid - we don't have any issues or outages. So I am baffled as to why this continues to fail, and why just those two sites. Any advice?


  • Memory modules not running at rated speeds.

    - by Wesley
    Hi all, I'm having some odd memory issues with my build. Here are my specs right now:

        QDI Superb 4 motherboard
        Intel Pentium 4 Northwood 2.4GHz (512KB L2, 533MHz FSB)
        3x 256MB PC2100 DDR266 RAM
        16MB NVIDIA TNT2 Pro AGP
        Seagate 80GB IDE HDD
        Generic USB 2.0 PCI card
        Generic PCI modem
        Bestec 250W PSU

    To be even more specific, here are the current brands and models of each module:

        Kingston KVR266X64C25/256
        Samsung PC2100U-25331-Z
        SMART SM5643285D4N0CHM0H

    Supposedly, they are all PC2100 266MHz modules with a latency of 2.5. Looking in Speccy, the Kingston module is somehow running at a speed of PC2300, ~284MHz. I've never overclocked RAM at all, as I don't know how to. However, when I first started the computer, I had the SMART module in place first and then reset the BIOS settings, including the integrated overclocking options. This still doesn't explain why the Kingston module runs at a higher speed than the SMART and Samsung modules. Why is it like this?

    On a side note, where could I find the motherboard manual for the QDI Superb 4? Thanks in advance.


  • Can I make TCP/IP sessions last less than 60 seconds?

    - by par
    Our server is overloaded with TCP/IP sessions; we have 1200 - 1500 of them, and most of them are hanging in the TIME_WAIT state. It turns out that a connection in TIME_WAIT occupies a socket until the 60-second timeout has elapsed. The problem is that the server becomes unresponsive and many clients are not getting served.

    I made a simple test: download an XML file from the server with Internet Explorer 8.0. The download finishes in a fraction of a second, but then I see that the TCP/IP connection hangs in the TIME_WAIT state for 60 seconds. Is there any way to get rid of the TIME_WAIT period, or make it shorter, so the socket is freed for new connections?

    I understand why a TCP/IP connection enters TIME_WAIT, but I don't understand why Internet Explorer does not close the connection once the XML file download is over.

    The details: our server runs a web service written in Perl (mod_perl). The service provides weather data to clients. The client is a Flash application (actually a Flash ActiveX control embedded in a Windows application). OS: Ubuntu. The Apache "Keep Alive" option is set to 0.
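
    A hedged sketch for sizing up the problem on the Ubuntu side. Sockets in TIME_WAIT are already closed and do not hold an Apache worker, so it is worth counting how many connections are in the other states; the sysctls listed are the usual suspects, but none of them shortens the 60-second TIME_WAIT interval itself, and tcp_tw_reuse only helps outgoing connections.

        netstat -ant | awk 'NR>2 {print $6}' | sort | uniq -c | sort -rn    # connections per TCP state
        sysctl net.ipv4.tcp_fin_timeout net.ipv4.tcp_max_tw_buckets net.ipv4.tcp_tw_reuse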


  • Vagrant doesn't detect chef-solo unless re-installed

    - by nightowl
    I am using Vagrant to test my Chef recipes in Amazon AWS, and I am encountering an irritating issue. I initially assumed that Vagrant would install Chef itself (as it does when using VirtualBox as the provider), but it seems that this needs to be done using the cloud-init script. However, even after I successfully installed the chef gem via cloud-init, I was still getting the following error:

        The chef binary (either chef-solo or chef-client) was not found

    A quick google of this error suggested three probable causes:

        1. Chef had failed to install
        2. It had installed, but the directory was not in the $PATH environment variable
        3. It had installed and was in the $PATH, but with incorrect permissions

    I logged in and double-checked: chef-solo and chef-client were installed, the path variable for the user, sudo and root all included /usr/local/bin, and permissions were all fine.

    I managed to solve this problem by uninstalling and reinstalling the gem using sudo gem install chef. I don't understand why this should resolve the issue, and it is a bit of a problem if I have to SSH into a test box and manually install the gem every time. Does anyone have any suggestions why this might be happening?
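
    A sketch of the kind of cloud-init user-data that sidesteps the question of which PATH Vagrant's SSH session gets (whether PATH is really the culprit here is an assumption; the gem flags are the old pre-2.0 RubyGems style):

        #!/bin/bash
        gem install chef --no-rdoc --no-ri
        # non-login SSH sessions, which is how Vagrant runs its commands, may not have
        # /usr/local/bin on PATH, so also expose the binaries in /usr/bin
        for bin in chef-solo chef-client; do
            src=$(command -v "$bin") && ln -sf "$src" "/usr/bin/$bin"
        done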


  • Unexplained cache RAM drops on Linux machine

    - by FunkyChicken
    I run a CentOS 5.7 64-bit machine with 24 GB of RAM, running kernel 2.6.18-274.12.1.el5. This machine runs only Nginx, php-fpm and Xcache as extra applications. For about 3 weeks the memory behaviour on this machine has changed and I cannot explain why. There are no crons running which flush anything like this, and there are no large numbers of files being deleted or changed during these drops.

    The 'cached' memory gets dropped about every few hours, but it's never a fixed interval between flushes, which indicates to me that some bottleneck is being reached instead. It also always seems to happen when total memory usage gets to about 18 GB, but never exactly 18 GB. This is a graph of my memory usage:

    As you can see in the graph, the 'buffers' always stay more or less the same; it is mainly the 'cache' that gets dropped. Running vmstat -m, I captured the memory usage just before and just after a drop. The output is here: http://pastebin.com/diff.php?i=hJqZqztm ('old version' being before, 'new version' being after a drop).

    About 3 weeks ago my server crashed during a heavy DDoS attack; after I rebooted the machine this odd behaviour started. I have checked a bunch of logs and restarted the machine again, but cannot find any indication of what changed. During these 'cache' memory drops, my inode usage drops at the same time. Does anyone have any idea what might be causing this behaviour? Clearly my RAM isn't full, so I am curious why this could be happening.
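
    A hedged sketch of what could be watched between two drops (these commands only observe; the knobs at the end are the usual suspects, not a diagnosis):

        watch -n 30 'grep -E "^(MemFree|Buffers|Cached|Slab)" /proc/meminfo'   # page cache vs. kernel slab over time
        slabtop -o | head -n 20                                 # which kernel caches (dentries/inodes) shrink during a drop
        sysctl vm.vfs_cache_pressure vm.min_free_kbytes         # tunables that influence cache and inode reclaim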


  • Possible boot conflict?

    - by Evan Kroske
    I was installing Ubuntu on a computer on which Windows XP was already installed. The computer has multiple hard drive bays, so I decided to remove the XP HDD and install Ubuntu on a blank HDD while it was the only HDD in the system. Unfortunately, if I now try to boot Ubuntu with the Windows XP drive in the second slot, nothing will boot. However, if Windows XP is in the first slot, it will boot fine. Can anybody explain why this happens?

    When I was checking the BIOS to see if something was messed up, I discovered that when Ubuntu is in the first slot, the BIOS doesn't recognize any HDDs. However, if XP is in the first slot, the BIOS recognizes both drives. Any hypotheses about why this happens?

    Edit: Here's the setup. I have an old server with seven SCSI HDD slots. I have five identical 68 GB SCSI drives, but I can keep only two plugged in. XP is still installed on the first drive, but I reinstalled Ubuntu on the second drive and had GRUB overwrite the XP bootloader on the first drive. Now the setup works fine, and I can use GRUB to load either XP or Ubuntu. However, if I plug another identical blank HDD into the third slot, the computer recognizes only the XP drive and doesn't boot: GRUB starts to load, then gives me a "disk not found" error. Running ls from the grub rescue prompt only shows one drive with two partitions. I guess this is a BIOS problem, but I'd still like to know what triggers it. What about a blank drive could cause the BIOS to freak out?
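
    If it turns out GRUB rather than the BIOS is what gets confused by the extra drive, a hedged sketch of re-anchoring it from the booted Ubuntu system with all disks attached (device names are assumptions):

        sudo fdisk -l                  # confirm which disk the BIOS presents first
        sudo grub-install /dev/sda     # reinstall GRUB to that disk's MBR
        sudo update-grub               # regenerate grub.cfg so both OSes are found again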


  • Apache can't be restarted after changes to the configuration file

    - by Sharifhs
    Hello, I can't successfully configure the Apache and PHP configuration files; can anybody help me with this?

    Apache 2.2.16 (win32-x86-no_ssl.msi) was installed into "C:\Apache2.2". Then PHP 5.3.3 (VC9 x86 Thread Safe) was downloaded as a zip file and extracted to "C:\php". In "C:\php" I renamed the "php.ini-development" file to "php.ini", opened it with Notepad, and modified it as follows:

        doc_root = "C:\Apache2.2\htdocs"
        extension_dir = "C:\php\ext"

    The following lines were added to Apache's configuration file "httpd.conf":

        LoadModule php5_module "C:/php/php5apache2_2.dll"
        AddType application/x-httpd-php .php
        PHPIniDir "C:/php"

    N.B.: Thanks all for the comments and answers, but I can't reply to any of your comments and I don't know why. Maybe I'm not privileged to post comments because I'm new here (is that the case?). That's why I'm editing my post to reply to you all. Tell me what I can do.

    @jer.salamon: do you want me to post the full httpd.conf file? It'll be longer then!

    @davr: the server started fine at first, but once I made these configuration changes it would not start again.

    @jer.salamon: did you mean keeping it this way:

        doc_root =
        extension_dir = "ext"

    It has not restarted yet.
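
    When Apache silently refuses to start after a config change, the quickest pointer is usually its own config test and error log (paths assume the install locations above); from a command prompt:

        C:\Apache2.2\bin\httpd.exe -t        # syntax check: prints the offending httpd.conf line, if any
        type C:\Apache2.2\logs\error.log     # the last lines usually name the directive or DLL at fault

    One mismatch frequently reported with this exact combination (an assumption here, not a confirmed diagnosis): the VC9 build of PHP 5.3 generally expects a VC9-compiled Apache, while the stock apache.org Windows installer is built with VC6.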

