Search Results

Search found 21963 results on 879 pages for 'lance may'.

Page 636/879

  • How to set only a specific nginx server block into maintenance mode programmatically

    - by Ville Mattila
    I am looking for a way to automate part of our application's deployment process. At the beginning of a deployment, I would like to programmatically set the specified server into maintenance mode, and once the deployment has completed, remove the maintenance mode flag from the nginx server. By maintenance mode, I mean that nginx should respond with HTTP response code 503 to all requests (with a possible custom page). I know how to make a server block respond with a 503 code (see http://www.cyberciti.biz/faq/custom-nginx-maintenance-page-with-http503/), but the question is how to do this programmatically and most efficiently. Two options have come to my mind.

    Option 1: At the beginning of the deployment process, write a maintenance file into the document root and conditionally check for the existence of that file in the nginx server config:

        server {
            if (-f $document_root/in_maintenance_mode) {
                return 503;
            }
        }

    This method carries a certain overhead, as the file's existence is checked on every request. Is it possible to check for the file only when the nginx config is loaded?

    Option 2: The deployment script replaces the whole nginx server configuration file with a maintenance version and swaps it back at the end of the deployment. If this method is used, I am concerned that other automation processes, such as Puppet, may override the maintenance configuration file.
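    A minimal sketch of the flag-file toggle from Option 1, assuming the document root is /var/www/app and that the deploy tooling can run shell commands on the web host (both are placeholders, not details from the question):

        #!/bin/sh
        # Enter maintenance mode: nginx returns 503 starting with the next
        # request, no reload required, since the -f check runs per request.
        touch /var/www/app/in_maintenance_mode

        # ... run the deployment here ...

        # Leave maintenance mode: normal traffic resumes immediately.
        rm -f /var/www/app/in_maintenance_mode

    The per-request check is the overhead the question worries about; the trade-off is that no reload or config swap is needed at all.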

    Read the article

  • How do I redirect my website from non-www to WWW using Apache2?

    - by Andrew
    I'm currently trying to set up my personal webpage. I am using a VPS and have manually installed WordPress, and everything seems to work... except that if I go to the non-www version of my website, it comes up with a page not found.

        www.andrewrockefeller.com   <-- Works
        andrewrockefeller.com       <-- Does not (and I want to redirect it to www.andrewrockefeller.com)

    I have tried adding RewriteEngine rules to my .htaccess, and that isn't working. I have also tried the 'most-voted' method of adding the following to my default file (which apache2.conf pulls from):

        <VirtualHost *>
            ServerName andrewrockefeller.com
            Redirect 301 / http://www.andrewrockefeller.com/
        </VirtualHost>

    Seeing how many people are able to get the above working, is there something else I may be missing to allow that to function? Thank you for your time!

    EDIT: My .htaccess file is as follows:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress

    The WordPress section was autocreated when I changed the permalink settings from ?p=1 (ugly links) to pretty links. Any proposed solutions I've found on here I've tried, restarting apache2 each time, and it hasn't worked.
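    For reference, a commonly used mod_rewrite block for the non-www to www redirect, placed above the WordPress rules in .htaccess - a sketch only, assuming mod_rewrite is enabled and AllowOverride permits .htaccess rewrites on this vhost:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^andrewrockefeller\.com$ [NC]
        RewriteRule ^(.*)$ http://www.andrewrockefeller.com/$1 [R=301,L]

    If neither this nor the VirtualHost redirect has any effect, that usually points at the request never reaching the intended vhost (e.g. no DNS A record for the bare domain, or a different vhost catching it) rather than at the rewrite syntax itself.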

    Read the article

  • USB Device Not Recognized

    - by Franky Chanyau
    OK, this one gets a little bit complicated, but bear with me. :D

    A client brought her computer in to be fixed about a week ago. She says she tried charging a new phone she bought from China, and immediately afterwards her USB keyboard and mouse stopped working (typical). I had a quick look at it, but because I did not have time, I did a simple System Restore and it seemed as if the issue was fixed. I promptly sent it back to her, but a few days back she called saying that the issue has returned.

    It turns out the computer was riddled with a virus that also corrupted her XP install, so I had to format the whole thing (yes, I tried repairing). I hoped that the format would fix the keyboard and mouse issue, but the whole thing has escalated and the computer now throws the "USB Device Not Recognized" error when I plug anything into any of its many USB ports. I have installed all the drivers (including the chipset drivers) for the PC and even tried the unplug-from-power-for-a-while trick, still no luck. I am fairly sure it is not a hardware issue, but I may be wrong. This is way over my head. Can anyone help?

    Computer: HP Compaq DC7100, Intel Pentium 4, 512MB RAM
    OS: Windows XP Professional SP2

    Read the article

  • VirtualBox communication from Linux to/from Windows 7

    - by J. Otto Tennant
    VirtualBox is running in Windows 7 as the host. VirtualBox has the two guest modifications installed (one is called Guest Additions; I don't remember the other). The virtual machine has "bridged" networking selected. I have Samba set up on the Linux guest machine (now, the problem may be here; it has been three or four years since I last did this). Neither guest nor host sees the other.

    From the Windows 7 command prompt, the IP address of the Linux guest pings. The IP address of another computer (a separate Windows 7 machine on the wireless network) pings from the Linux guest. (I have no idea what IP address the Windows 7 host itself has. The output of "netstat" does not seem to be useful.) So it seems to me that something should be working.

    The only workgroup on the LAN is inventively named WORKGROUP. SMB4K should be seeing something. There must be a simple setup step that I am missing. (FWIW, there are two processes running smbd, and no process is running nmbd. YaST says that nmbd is set to run. I am not sure what this means.)
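    Two quick checks that may help narrow this down (generic commands, not specific to this setup):

        C:\> ipconfig /all            (on the Windows 7 host: shows the host's own IP address)

        $ smbclient -L localhost -N   (on the Linux guest: lists the shares smbd is actually exporting)

    If smbclient lists the shares locally but nothing is visible from Windows, the missing nmbd process mentioned above is a likely suspect, since nmbd handles the NetBIOS name registration and browsing that workgroup discovery depends on.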

    Read the article

  • XP SP2 Event log not logging events

    - by Weedfreer
    I have a problem whereby a terminal appears not to be logging events correctly and occasionally appears to have problems communicating across the network. The terminal was previously infected with a virus which appears to have 'played' with the default group policy in the standard user profile. Although, outwardly, the terminal appears to be working normally, I still have a nagging feeling that it isn't quite back to the way it was. It was infected by a user plugging in a USB stick while the company was using the older version of the AV software... typically, a week or so before it was updated.

    I have configured the event logs to overwrite as required and to be 5056KB in maximum size. I have also attempted:

    - Disabling the Event Log service & restarting
    - Renewing the EVT files in the Windows\system32\config directory
    - Restarting the Event Log service and restarting the machine
    - Clearing the event log in the Services MMC
    - Resetting the filters to default in the Services MMC
    - Using the EVENTCREATE command remotely from a CMD window on the server to force an event creation

    So far the only operation to have any sort of success is the remote EVENTCREATE command from a CMD window on the server. As it stands, the only other time that the computer has managed to create events is while it is being restarted.

    Has anyone got any ideas on how to proceed? I'm thinking possibly a refresh of the 'Windows\system32\config\SystemProfile' folder. I'm also thinking about running a tool such as Malwarebytes, but this could be slightly controversial as the system needs to be running on 'up-time' for as long as possible. I'm also wondering whether anyone knows of any Windows admin tools that would let me control the event logging options or default security options, so that I could get it back to some sort of standard. What I'm trying to avoid is a complete re-imaging of the terminal. Although this is an option, I don't really want to have to take it if I don't need to. Many thanks in advance for any suggestions anyone may be able to provide.
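    For reference, the kind of EVENTCREATE invocation described above, run from the server against the terminal (the machine name, source and description here are hypothetical):

        C:\> eventcreate /S TERMINAL01 /T INFORMATION /ID 100 /L APPLICATION /SO TestSource /D "Remote test event"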

    Read the article

  • Inconsistent DHCP replies with Windows 2008R2 DHCP server

    - by verbalicious
    I've got a Windows 2008 R2 Standard server running DHCP services. We've noticed that certain clients are receiving inconsistent DHCP replies. We have over 175 Windows workstations in this VLAN that don't seem to have trouble getting DHCP leases. However, PXE-booting clients trying to reach our DHCP server only get a lease inconsistently.

    Additionally, we tried using the "dhcping" tool against our DHCP server and found that roughly two of every three requests time out with "no answer" -- and this holds true even when we set the timeout value on dhcping to 20 seconds. After a failed attempt, however, we may get a DHCP lease reply immediately with dhcping. This leads me to believe that the issue isn't confined to PXE-booting clients, but is something more systemic with my LAN's layer 2 or DHCP, and that possibly my 175 Windows clients are experiencing it in some form without my knowledge.

    We have over 30% of our scope available, so the addresses are there. I was unable to find anything in the Windows server "DHCP-Server" log. Of course, my goal is to have my DHCP server reply to every request that it receives on the LAN!
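    A sketch of the kind of dhcping probe described above (the server, client address and MAC are placeholders): -s names the DHCP server, -c and -h set the client IP and hardware address to probe with, -t is the seconds to wait, and -v adds verbose output:

        $ dhcping -v -s 10.0.0.10 -c 10.0.0.99 -h 00:11:22:33:44:55 -t 20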

    Read the article

  • Cannot browse remote networks even with WINS configured

    - by paradroid
    Because NetBIOS name-resolution broadcasts are not routed between subnets, WINS has been installed and configured on two domain controllers (each on a different network) in order to enable network browsing of remote networks. The WINS servers seem to be replicating with each other, and each has 127.0.0.1 set as the primary WINS server in its LAN interface properties, with nothing entered for the secondary WINS server. The DC which holds the PDC Emulator FSMO role has the Computer Browser service running and set to start automatically, and it has the WINS/NBT node type network setting at 0x8 (H-node - hybrid node).

    Remote network browsing does not work. Is the WINS/NBT node type correct for this scenario? The reason I think it may not be the right one is that I also set the DHCP server's 046 WINS/NBT node type option to 0x8, after which the DHCP clients started to disappear from the Network folders. When that option is not set, does it default to B-node (broadcast node)? Or could it be a problem with the WINS server setup?
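    A quick way to confirm what node type a client actually ended up with (generic checks, not specific to this network):

        C:\> ipconfig /all    (the "Node Type" line reports Broadcast, Peer-Peer, Mixed or Hybrid)
        C:\> nbtstat -n       (lists the local NetBIOS name table and whether names registered)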

    Read the article

  • dovecot/postfix: can send & receive via Webmin, however SquirrelMail and Outlook fail to connect

    - by Jonathan
    I just finished setting up Dovecot and Postfix on my server (CentOS 5.5/Apache) earlier today. So far I've been able to get email working through Webmin (can send/receive to and from external domains). However, attempting to telnet xxx.xxx.xx.xxx 110 returns the following errors:

        Connected to xxx.xxx.xx.xxx.
        Escape character is '^]'.
        +OK Dovecot ready.
        USER mailtest
        +OK
        PASS *********
        +OK Logged in.
        -ERR [IN-USE] Couldn't open INBOX: Internal error occurred. Refer to server log for more information. [2011-02-11 22:55:48]
        Connection closed by foreign host.

    which further logs the following errors:

        dovecot: Feb 11 21:32:48 Info: pop3-login: Login: user=, method=PLAIN, rip=::ffff:xxx.xxx.xx.xxx, lip=::ffff:xxx.xxx.xx.xxx, TLS
        dovecot: Feb 11 21:32:48 Error: POP3(mailtest): stat(/home/mailtest/MailDir/cur) failed: Permission denied
        dovecot: Feb 11 21:32:48 Error: POP3(mailtest): stat(/home/mailtest/MailDir/cur) failed: Permission denied
        dovecot: Feb 11 21:32:48 Error: POP3(mailtest): Couldn't open INBOX: Internal error occurred. Refer to server log for more information. [2011-02-11 21:32:48]
        dovecot: Feb 11 21:32:48 Info: POP3(mailtest): Couldn't open INBOX top=0/0, retr=0/0, del=0/0, size=0

    Also, when attempting to log in to SquirrelMail or access the account via Thunderbird/Live Mail etc., it obviously fails with a similar issue. Any suggestions or outside thinking on this would be a massive help! I've pretty much exhausted every resource and tried every suggestion for my dovecot.conf file, but so far nothing seems to work. :( I feel like it may be a permissions/ownership issue, but I'm lost as to specifics.
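    Given the "Permission denied" on stat(), a first check worth sketching - this assumes the Maildir path from the log and that mailtest should own its own mailbox:

        # Inspect ownership and modes along the mailbox path.
        ls -ld /home/mailtest /home/mailtest/MailDir /home/mailtest/MailDir/cur

        # A common fix when the tree is not owned by the mail user:
        chown -R mailtest:mailtest /home/mailtest/MailDir
        chmod -R u+rwX /home/mailtest/MailDir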

    Read the article

  • Self-Resetting Power Strips?

    - by Justin Scott
    We are about to deploy a number of secure kiosks into an environment where they may be prone to lightning strikes and power surges on a somewhat regular basis (southern Florida, in a place where the existing electrical infrastructure is, shall we say, a bit out of date). Ideally we would use battery backups on each system, but it's not in the budget. We plan to use a standard power strip with a built-in circuit breaker to protect the computers, but management has asked if there is a power strip that can reset itself after the breaker has been tripped.

    I've looked around and wasn't able to find such a beast, and it seems to me that it would probably be a safety issue for such a product to exist (e.g. if something plugged into the strip is drawing a lot of current and trips the breaker, you wouldn't want the strip resetting itself and possibly starting a fire). Nevertheless, if anyone has experience with such a product, or can point me in the direction of something that would allow the breakers to be reset automatically or remotely (we don't want to have to send someone to each kiosk every time there is a power surge), I would appreciate any tips.

    Read the article

  • How do you add an A record for a root domain?

    - by nbv4
    This seems really simple, but I can't figure it out. I'm using xname.org since it's free and I own a bunch of domains spread over a few different registrars. The setup I desire is very simple: one A record that points the bare domain name to my server, plus a wildcard CNAME record pointing all subdomains to the same server. So if the user goes to domain.com it will point them to 285.24.435.75, and if they go to www.domain.com, blah.domain.com, or any other subdomain, they'll get sent to 285.24.435.75. All the examples I read on the internet about setting up A records have the A record set to a subdomain such as www. WWW is deprecated, so I want to have nothing to do with it.

    Currently my xname.org zone looks like this:

        $TTL 86400 ; Default TTL
        domain.com. IN SOA ns0.xname.org. nbvfour.gmail.com. (
            2010052503 ; serial
            10800      ; Refresh period
            3600       ; Retry interval
            604800     ; Expire time
            10800      ; Negative caching TTL
        )
        $ORIGIN domain.com.
          IN NS ns2.xname.org.
          IN NS ns0.xname.org.
          IN NS ns1.xname.org.
        @ IN A 65.49.73.148
        * IN CNAME domain.com

    The '@' symbol is something that the GoDaddy domain interface uses to mean "this root domain", but that may have been specific to that interface and may have no meaning here. Before, I had a 'www' entry in the A records and it worked, in the sense that I could ping 'www.domain.com' and it returned a response, but pinging the root domain 'domain.com' returned "no host found".
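    For comparison, a minimal sketch of the intended records in standard zone-file syntax ('@' is ordinary BIND shorthand for the zone origin, not a GoDaddy invention). Note the trailing dot on the CNAME target; without it, the name is expanded relative to $ORIGIN, turning the target into domain.com.domain.com.:

        @ IN A     65.49.73.148
        * IN CNAME domain.com.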

    Read the article

  • System freezes during boot process

    - by slugster
    Hi everyone, I have a machine running Win7 Ultimate. It was running fine, then it just froze - all the stuff I was doing was still on the screen, but mouse and keyboard input was ignored, any animation that was happening on the screen stopped; the machine literally just froze. So I rebooted (power off button), but from then on, the machine will reboot and then ultimately freeze again. The point at which this happens varies - I have made it as far as the Windows login screen, but mostly it will do the POST, then give me the option to press F1 to continue or Del to enter BIOS settings (but of course pressing a key has no effect - it's frozen!).

    I have disconnected everything not necessary for the boot process; the only peripheral that remains attached is the keyboard (even the network cable is disconnected). Prior to this the machine was operating fine. The install of Win7 is only 2 days old, and it was a fresh reinstall (i.e. not an upgrade or repair). Can anyone give me an indication of what may be wrong here?

    I'm not sure if this question should be here or on SuperUser; please migrate it if I have chosen the wrong board.

    Read the article

  • Why is my DSDT table different from what I found online?

    - by Hao Shen
    I have found a field in the DSDT table that I want to modify, described here: http://www.ztex.de/misc/c2ctl.e.html

    Generally, I want to modify the _PSS field for the processor so that I can have more frequency levels available in the cpufreq driver interface. I used these commands to disassemble the DSDT table from my desktop (Linux 2.6.29, Intel Core 2):

        cat /proc/acpi/dsdt > dsdt.aml
        iasl -d dsdt.aml

    Then I have a file dsdt.dsl as follows (very long, so I just show the beginning of the file):

        /*
         * Intel ACPI Component Architecture
         * AML Disassembler version 20090123
         *
         * Disassembly of dsdt.aml, Mon May  6 20:41:40 2013
         *
         *
         * Original Table Header:
         *     Signature        "DSDT"
         *     Length           0x00003794 (14228)
         *     Revision         0x01 **** ACPI 1.0, no 64-bit math support
         *     Checksum         0x46
         *     OEM ID           "DELL"
         *     OEM Table ID     "dt_ex"
         *     OEM Revision     0x00001000 (4096)
         *     Compiler ID      "INTL"
         *     Compiler Version 0x20050624 (537200164)
         */
        DefinitionBlock ("dsdt.aml", "DSDT", 1, "DELL", "dt_ex", 0x00001000)
        {
            Method (DBIN, 0, NotSerialized)
            {
                Noop
            }

            Scope (\)
            {
                Device (_SB.VBTN)
        ...................

    But I cannot find the _PSS field as shown on the website linked above, and I do not know why. I am sure the current cpufreq driver shows 4 frequency levels available, so at least there should be something in the table reflecting this, right? Has anybody here played with the DSDT table before? Thanks.
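    One generic check worth sketching: search the disassembly directly, and keep in mind that on many systems the _PSS package lives in an SSDT rather than in the DSDT itself, which would explain its absence here:

        # Search the disassembled table for the performance-state package.
        grep -n "_PSS" dsdt.dsl

        # On newer kernels the SSDTs can be read from /sys/firmware/acpi/tables/SSDT*
        # and disassembled with iasl the same way (availability varies by kernel).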

    Read the article

  • SSO "Portal"

    - by Clinton Blackmore
    Pursuant to my question on alleviating the password explosion, I've contacted some of the services to whom we are paying money for access to their websites, to ask if we could authenticate our own users; some of them said yes and sent me specs on how to do so. (One of the sites called such a system a "portal"; I've never heard the term used in quite that way.)

    It is simple enough that I am tempted to roll my own. The largest complication is that one site wants us to store a key for every user in our database (and I think the LDAP database makes sense) after their initial login. So, non-trivial, but doable. The nature of these sorts of tasks, I expect, is that if they start out small and simple, they don't end that way. There must be some software that addresses this and that is readily extended, surely. In my searching, I've come across:

    - SimpleSAMLphp
    - JOSSO
    - RubyCAS-Server
    - Shibboleth
    - Pubcookie
    - OpenID

    [Wow, gee. I'd missed some of those in my previous searches! The Wikipedia page on Central Authentication Service is useful, and the section on alternatives to OpenID makes it look like there is a lot of choice.]

    Can anyone recommend any of these, or suggest ones to avoid? Internally, we are authenticating using Apple's Open Directory [ == OpenLDAP + Kerberos + Password Server (which, I believe, == SAML) ]. As far as extending/tweaking/advanced configuration of a system goes, I am able to program in Python and C++, can do some basic PHP, and may be able to remember some Java. Looks like I need to pick up Ruby at some point.

    Addendum: I would also like users to be able to change their passwords over the web (and for certain users to be able to change the passwords of other users).

    Read the article

  • mysql command line not working

    - by Sandeepan Nath
    I have MySQL running on my Fedora system. I have XAMPP set up on the system, and the PHP projects present in the webspace are working fine. phpMyAdmin is working fine. Echoing phpinfo() in a PHP script also shows MySQL enabled. But running the mysql command-line client,

        mysql -u[username] -p[password]

    gives this:

        bash: mysql: command not found

    How do I fix that? Any pointers? I guess I need to do some pointing (define some path in some file) so that my system knows that MySQL is installed. What exactly do I have to do?

    Additional details: This system was someone else's and he is not available here. Maybe PHP/MySQL was already set up on the system. I just freshly extracted XAMPP for Linux into /opt/lampp/ and put all the above-mentioned things (PHP projects and phpMyAdmin) there. After doing that I had a socket problem (phpMyAdmin was not working, and showing this):

        #2002 - The server is not responding (or the local MySQL server's socket is not correctly configured)

    I restarted lampp using ./lampp restart, but the problem remained. Then, after turning on the system today, I started lampp and everything worked just fine. No project issues anymore; only the command-line mysql client is not working.
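    A minimal sketch of the usual fix when the binary exists but is not on the shell's search path, assuming XAMPP's bundled client is at /opt/lampp/bin/mysql (worth confirming first):

        # Confirm the bundled client exists.
        ls /opt/lampp/bin/mysql

        # Use it directly, or add the directory to PATH (e.g. in ~/.bashrc).
        /opt/lampp/bin/mysql -u root -p
        export PATH=$PATH:/opt/lampp/bin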

    Read the article

  • How to set up a GRUB2 chainloader to other GRUBs (Fedora, Debian) on GPT

    - by basic6
    I'm trying to set up a dedicated GRUB2 which (chain-)loads another GRUB on a disk with a GPT partition table. Relevant partitions:

        /dev/sda1 BIOS_BOOT
        /dev/sda2 BOOT (ext2)
        /dev/sda3 FEDORA (ext4)
        /dev/sda6 DEBIAN (ext4)

    I installed Fedora first, using /dev/sda2 as the boot partition. Then I installed Debian. The Debian installer recognized the Fedora installation and added it as a boot entry, then installed its GRUB into the MBR. While this works for the moment, it's pretty messy, because every Debian update may change the boot config, removing the Fedora entry (tried it), and the other way around. That's why I want both systems to have their own boot loader, plus one main boot loader (which could reside on /dev/sda2) that loads one of them. This is what I've tried:

    - Moved everything from /dev/sda2 to /dev/sda3/boot
    - Removed the /boot mount point in Fedora (so /dev/sda2 isn't used anymore)
    - From a live Linux, installed GRUB2 to the MBR (grub-install --boot-directory=sda2 /dev/sda)
    - Wrote a menu.lst (and again for Debian):

          title Fedora
          root (hd0,2)
          chainloader +1

    - Converted that to a grub.cfg script (grub-menu2cfg or something like that)
    - When booting, actually got a GRUB2 menu with "Fedora" (and "Debian")
    - When selecting either of them: error: invalid signature
    - Issued "grub-install /dev/sda6" (and ...sda3) from all kinds of live Linux systems, all of which failed with one error message or another (in the case of the Debian installer, without any explanation at all)
    - Added --force to the chainloader line; now it says "loading", then reboots
    - Found dozens of howtos, none of which seem to work for me

    Since I get the self-made GRUB2 menu on bootup, I've at least successfully installed the first stage of GRUB, right? When trying to chainload, some signature is checked and seems to be wrong - how do I fix it? The boot menus (Fedora with its different kernel versions, and Debian with Debian and Fedora as well) are now on the system partitions (/dev/sda3, /dev/sda6); is there anything else to do on these partitions so they can be chainloaded? Any help is greatly appreciated.
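    One commonly suggested alternative on GPT disks, where a partition often has no embedded boot loader for chainloader to jump to: have the main GRUB2 read each distribution's own config directly instead of chainloading. A sketch, assuming Fedora's menu lives at /boot/grub2/grub.cfg on the third partition and Debian's at /boot/grub/grub.cfg on the sixth (paths vary by release):

        menuentry "Fedora" {
            insmod part_gpt
            insmod ext2
            set root=(hd0,gpt3)
            configfile /boot/grub2/grub.cfg
        }

        menuentry "Debian" {
            insmod part_gpt
            insmod ext2
            set root=(hd0,gpt6)
            configfile /boot/grub/grub.cfg
        }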

    Read the article

  • Trouble cloning a MacBook Pro hard drive

    - by Mirko Froehlich
    I am trying to upgrade the 250GB hard drive in my MacBook Pro (early 2008 model) to a 750GB drive. I have connected the new drive via an external USB enclosure. The drive is recognized fine; I can format it, etc. However, every time I try to clone the drive, I get input/output errors. Before the clone operation, I verified both the internal and the external drive using Disk Utility, and they both check out fine. After the clone operation, the external drive shows multiple "Invalid node structure" errors.

    I have tried two approaches for cloning the drive:

    - Using Disk Utility, by starting from the OS X install DVD
    - Using Carbon Copy Cloner

    The outcome is the same in both cases. The Carbon Copy Cloner logs show a handful of the following types of errors:

        rsync: mkstemp "<... an external filename ...>" failed: Input/output error (5)
        rsync: stat "<... an external filename ...>" failed: Input/output error (5)

    The actual files affected seem to be different across different runs of the application. Before the last run, I used Disk Utility to (once more) reformat the external drive and explicitly overwrite it with zeros, but this made no difference. I also tried running a surface scan in Tech Tool Pro overnight. It got about 2/3 of the way through before I had to disconnect the drive (had to take my MacBook Pro to work), but so far it didn't report any bad blocks. Assuming it scans the drive in the same order in which blocks would be allocated during actual use, it seems like, if bad blocks were to blame for the clone failures, they should have been found already (given that the source drive is only 250GB).

    As a last attempt, I may try SuperDuper as well, although my understanding is that it uses the same underlying rsync approach as Carbon Copy Cloner, so it's unlikely to perform any better. Are there any other things I should try before I send the drive in for a replacement? Could these problems be caused by my internal drive, even though it works fine and checks out fine in Disk Utility?

    Read the article

  • What is the fastest way to move 1 petabyte from one storage system to a new one?

    - by marc.riera
    First of all, thanks for reading, and sorry for asking something related to my job. I understand that this is something that I should solve by myself, but as you will see, it's something a bit difficult.

    A small description:

    Now:
    - Storage = 1PB using DDN S2A9900 storage for the OSTs, 4 OSS, 10GigE network (Lustre 1.6)
    - 100 compute nodes with 2x InfiniBand
    - 1 InfiniBand switch with 36 ports

    After:
    - Storage = previous storage + another 1PB using DDN S2A9900 or LSI E5400 (still to decide) (Lustre 2.0)
    - 8 OSS, 10GigE network
    - 100 compute nodes with 2x InfiniBand

    Previous experience: transferred 120TB in less than 3 days using the following command:

        tar -C /old --record-size 2048 -b 2048 -cf - dir | tar -C /new --record-size 2048 -b 2048 -xvf - 2>&1 | tee /tmp/dir.log

    So, big problem here: using big mathematical equations, I conclude that we are going to need 1 month to transfer the data from one side to the other. During this time the researchers will need to step back, and I'm personally not happy with this.

    I mention that we have InfiniBand connections because I think there may be a chance to use them to transfer the data, using 18 compute nodes (18 * 2 IB = 36 ports) to move it from one storage system to the other. I'm trying to figure out whether the IB switch will handle all the traffic, but even if it just burns up, it will still go faster than using 10GigE. Also, having Lustre 1.6 and 2.0 agents on the same server works quite well; with this there is no need to go via 1.8 to upgrade the metadata servers in two steps. Any ideas?

    Many thanks

    Note 1: Zoredache, we can divide it into two blocks: (A) 600TB and (B) 400TB. The idea is to move (A) to the new storage, which is Lustre 2.0 formatted, then reformat where (A) was with Lustre 2.0 and move (B) to that Lustre 2.0 block, extending it with the space where (B) was. This way we will end up with (A) and (B) on separate filesystems, with 1PB each.
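    A sketch of the parallel-streams idea from the question: split the tree into independent subdirectories and run one tar pipe per compute node, so the aggregate rate is no longer bound by a single stream. Hostnames, file names and the node count are placeholders, and this assumes both filesystems are mounted on every node:

        # nodes.txt: 18 node hostnames, one per line.
        # subdirs.txt: one top-level directory per line.
        # Round-robin the directories across the nodes; real use would
        # throttle to a few concurrent streams per node.
        i=0
        while read dir; do
            node=$(sed -n "$((i % 18 + 1))p" nodes.txt)
            ssh "$node" "tar -C /old -b 2048 -cf - '$dir' | tar -C /new -b 2048 -xf -" &
            i=$((i + 1))
        done < subdirs.txt
        wait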

    Read the article

  • Internet Explorer 8 Loses Cookies

    - by Mikeon
    I've been running Windows 7 for some time now and use Internet Explorer 8 as my main browser. What I've noticed is that it "loses" cookies A LOT! I mean it! Typical situation: I log in to a site, checking the "remember me" checkbox. I reboot the computer/restart the browser, go to the site, get logged in automatically - I'm happy. From time to time, however, I'm asked for the credentials. A normal situation, you would say. So would I, if it didn't happen a few times a week. Come on!

    With Internet Explorer 7 I didn't notice this as much; cookies were lost once a quarter or so. Note that I was using IE7Pro with my IE - I don't know whether that has anything to do with my current problem. Anyway, I wonder whether this behavior is "normal" or if it is only me.

    More info for people who suggest it may be normal (cookie expiry and such): when it happens, I lose all auth cookies - Gmail, Bloglines and whatnot!

    Read the article

  • ESXi - change to thin - virtual disk filesize is the same

    - by sven
    Running ESXi 5.5 here, with a datastore on a single SSD. I thought about changing from thick to thin disks and found that I could use a tool on the ESXi host to do that. However, the file size of the newly created virtual disk is not changing. I ran:

        vmkfstools -i loader.vmdk -d 'thin' thinloader.vmdk
        Destination disk format: VMFS thin-provisioned
        Cloning disk 'loader.vmdk'...
        Clone: 100% done.

    After that I compared the virtual disk sizes:

        ls -la *.vmdk
        -rw------- 1 root root 32212254720 Jun 10 08:25 loader-flat.vmdk
        -rw------- 1 root root         467 May 21 17:04 loader.vmdk
        -rw------- 1 root root 32212254720 Jun 10 08:27 thinloader-flat.vmdk
        -rw------- 1 root root         520 Jun 10 08:33 thinloader.vmdk

    Stats on the original file:

        stat loader.vmdk
          File: loader.vmdk
          Size: 467    Blocks: 0    IO Block: 131072 regular file
        Device: 8bf64d175e27544ch/10085333178302026828d    Inode: 419443780    Links: 1
        Access: (0600/-rw-------)  Uid: (0/root)  Gid: (0/root)
        Access: 2014-01-25 10:17:34.000000000
        Modify: 2014-05-21 17:04:06.000000000
        Change: 2014-05-21 17:04:06.000000000

    And on the thin file:

        stat thinloader.vmdk
          File: thinloader.vmdk
          Size: 520    Blocks: 0    IO Block: 131072 regular file
        Device: 8bf64d175e27544ch/10085333178302026828d    Inode: 432026692    Links: 1
        Access: (0600/-rw-------)  Uid: (0/root)  Gid: (0/root)
        Access: 2014-06-10 08:27:45.000000000
        Modify: 2014-06-10 08:33:30.000000000
        Change: 2014-06-10 08:33:30.000000000

    Anyone have an idea why the disk is not freeing up any more space (tried with multiple VMs already - all the same)? Also, I have noticed that the tool automatically appends "-flat" to the newly created disk... Thanks, Sven

    Update - diff of the vmdk configs:

        --- loader.vmdk
        +++ thinloader.vmdk
        @@ -7,15 +7,17 @@
         createType="vmfs"
        -RW 62914560 VMFS "loader-flat.vmdk"
        +RW 62914560 VMFS "thinloader-flat.vmdk"
         ddb.adapterType = "lsilogic"
        +ddb.deletable = "true"
         ddb.geometry.cylinders = "3916"
         ddb.geometry.heads = "255"
         ddb.geometry.sectors = "63"
         ddb.longContentID = "6d95855805dfa0079327dfee29b48dca"
        -ddb.uuid = "60 00 C2 98 d5 7d 17 bf-ac 54 70 b1 2d 39 43 d5"
        +ddb.thinProvisioned = "1"
        +ddb.uuid = "60 00 C2 93 c4 13 6c cf-bb 7b 34 c9 2c b4 dc 1e"
         ddb.virtualHWVersion = "8"
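    One thing worth checking here (a generic VMFS check, not specific to this datastore): ls -la reports the provisioned size of a -flat file, which is identical for thick and thin disks, while du reports the blocks actually allocated, which is where a thin disk should differ:

        # Provisioned size - expected to stay at 30GB either way:
        ls -la thinloader-flat.vmdk

        # Space actually allocated on the datastore (drop -h if the host's du lacks it):
        du -h thinloader-flat.vmdk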

    Read the article

  • Investigating a potential CPU failure

    - by Jernej
    On an Ubuntu server that I am using for computations, I have recently observed that some CPU-intensive programs (GUROBI, CPLEX) often segfault. Being in correspondence with tech support of the respective programs, it was suggested that this may be a hardware issue. The administrator of the server performed a detailed memtest, and it turned out that the RAM modules appear to be fine. Hence I used the tool mprime to test the CPU, and the following two lines appear multiple times during the execution of the stress tests:

        [Worker #4 Oct 18 18:47] FATAL ERROR: Rounding was 0.498046875, expected less than 0.4
        [Worker #4 Oct 18 18:47] Hardware failure detected, consult stress.txt file.

    The stress.txt file itself is not very verbose about what could be the cause of this error, so I would like to ask whether anyone here happens to know what could be causing this issue. Is there some other test I could perform to narrow the problem down further?

    The temperature of the system (and all cores) was fine during the entire stress test (+69.0°C (high = +80.0°C, crit = +98.0°C)). The CPU in question is an Intel Core i7-2600K CPU @ 3.40GHz and is not overclocked or modified in any way. Also interesting: if I run mprime to stress only the CPU, all tests pass fine. The error is only triggered when I let mprime stress the CPU+RAM.
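    Given that CPU-only runs pass while CPU+RAM runs fail, one way to separate the cores from the memory path (a sketch; both tools are assumed installed):

        # Terminal 1: torture-test the cores with small FFTs, which mostly stay
        # in cache and generate little RAM traffic (choose the small-FFT option
        # when mprime prompts).
        mprime -t

        # Terminal 2: hammer a few GB of RAM through the memory controller at
        # the same time.
        sudo memtester 4G 5

    If this combination reproduces the rounding errors while small FFTs alone do not, suspicion shifts toward the memory controller (on this CPU, part of the die) or the RAM under concurrent load, rather than the execution cores.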

    Read the article

  • Problem installing SQLite3 RubyGem on Ubuntu

    - by misbehavens
    I am having a problem trying to install the SQLite3 RubyGem. Here's what I'm doing:

        $ sudo gem install --remote sqlite3-ruby

    Here's the output:

        Building native extensions.  This could take a while...
        ERROR:  Error installing sqlite3-ruby:
                ERROR: Failed to build gem native extension.

        /usr/bin/ruby1.8 extconf.rb
        checking for fdatasync() in -lrt... yes
        checking for sqlite3.h... no
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of
        necessary libraries and/or headers.  Check the mkmf.log file for more
        details.  You may need configuration options.

        Provided configuration options:
            --with-opt-dir
            --without-opt-dir
            --with-opt-include
            --without-opt-include=${opt-dir}/include
            --with-opt-lib
            --without-opt-lib=${opt-dir}/lib
            --with-make-prog
            --without-make-prog
            --srcdir=.
            --curdir
            --ruby=/usr/bin/ruby1.8
            --with-sqlite3-dir
            --without-sqlite3-dir
            --with-sqlite3-include
            --without-sqlite3-include=${sqlite3-dir}/include
            --with-sqlite3-lib
            --without-sqlite3-lib=${sqlite3-dir}/lib
            --with-rtlib
            --without-rtlib

        Gem files will remain installed in /usr/lib/ruby/gems/1.8/gems/sqlite3-ruby-1.2.5 for inspection.
        Results logged to /usr/lib/ruby/gems/1.8/gems/sqlite3-ruby-1.2.5/ext/sqlite3_api/gem_make.out
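    The "checking for sqlite3.h... no" line says the SQLite development headers are missing. A sketch of the usual fix on Ubuntu (package name from the standard Ubuntu archive):

        sudo apt-get install libsqlite3-dev
        sudo gem install --remote sqlite3-ruby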

    Read the article

  • Wireless to Wireless Transfer Slow on a Linksys WRT54GL

    - by Kyle Brandt
    The situation: When I try to transfer a file from one computer to another, both connected via wireless to a WRT54GL (in an office) running dd-wrt firmware, I often get bad speeds - they generally average around 100 kilobytes per second. Either computer can download via wireless from the Internet at about 2 megabytes per second. The speed is slow even for the transfer of a single large file. There are about 20 other wireless networks that the computers can see, so there is a lot of noise, but I don't have the equipment to really monitor the frequencies well. Even so, that still seems pretty slow. I thought maybe it was the transmit power on each card, but even when the computers are 5 feet apart with a line of sight, I still get these speeds. According to Linux, both cards are operating at 54g.

    My questions:

    - Is this normal for this sort of consumer-level wireless equipment?
    - Is there anything I can do to improve it?
    - Why is wireless-to-wireless transfer slow when everything else isn't?
    - What steps might I take to figure out what is happening? For example, are lots of packets not making it to the access point, requiring retransmissions?

    Above all, I want to find out what the problem actually is. This may seem odd, but at this point I am more interested in understanding the problem than fixing it.

    What I have tried: I have messed with lots of settings - different channels, xmit power, G-only - none of which has made anything any better. I've also tried upgrading to a newer dd-wrt firmware version and doing a reset to wipe out the settings.
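    On the measurement question, one sketch worth trying: iperf isolates raw network throughput from file-transfer overhead, so comparing a client-to-client run against a client-to-Internet run shows where the rate collapses (the IP is a placeholder):

        # On the receiving computer:
        iperf -s

        # On the sending computer (60-second TCP test):
        iperf -c 192.168.1.101 -t 60

    It is also worth remembering that on a single access point, client-to-client traffic crosses the air twice (sender to AP, then AP to receiver), which alone roughly halves the achievable rate - though not down to 100 KB/s.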

    Read the article

  • Hudson deploy specific git revision

    - by brad
    I'm using Hudson to auto-deploy my Rails app to Heroku. In my main build job I pull from a Git repo (hosted using gitosis on the same machine), master branch, with the following:

        URL of repository: /home/git/repositories/my_app.git
        Name of repository: origin
        Refspec: +refs/heads/master:refs/remotes/origin/master
        Branches to build: master

    Then, assuming all tests pass, I want to kick off a new build that deploys to Heroku. I can't, however, figure out how to get that deploy build to check out the particular revision that this build used. I understand there's a parameterized trigger plugin that would allow me to pass this revision number, but I don't know how I can tell Hudson to check out that particular revision in the deploy build. I'm pretty sure this just has to do with my limited knowledge of git, but where in Hudson's git configs is there an option to check out a particular revision? Otherwise, many commits could happen while a build is running, and when it kicks off a deploy build, that deploy build would just check out the HEAD of the branch, which may not be the same as the code that triggered this build.

    I don't fully understand why I have a refspec in Hudson and then also specify a branch to build; I thought these were the same thing. Can the refspec somehow specify the revision number? How would this be referenced if it was passed through with the parameterized trigger plugin? (I've never used that plugin, but someone else recommended it as a way to pass vars to a new build; if there's another way, I'm all ears.)
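    A sketch of how the parameterized-trigger approach is usually wired up, assuming the Git plugin exposes the built commit as GIT_COMMIT and accepts a variable in "Branches to build" (both worth verifying against the plugin versions in use; the Heroku remote URL below is hypothetical):

        # Upstream job, post-build: trigger the deploy job with
        #   Predefined parameters: DEPLOY_SHA=$GIT_COMMIT
        #
        # Deploy job: define a string parameter DEPLOY_SHA and set
        #   Branches to build: ${DEPLOY_SHA}
        #
        # Or check out and push the exact commit from a shell build step:
        git checkout -f "$DEPLOY_SHA"
        git push -f git@heroku.com:my_app.git "$DEPLOY_SHA":master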

    Read the article

  • Is Unix not a PC Operating System?

    - by Corelgott
    I am doing my Bachelor's at a university. In a written assignment, the professor posed the task: "Name 3 PC operating systems". Well, I went ahead and included a variety of OSes (Linux, Windows, OS X), including Unix and Solaris. Today I received a mail from my prof saying:

        Unix is not a PC operating system. Many Unix variants are not PC-hardware compatible (like AIX & HP-UX. About Solaris: there was one PC-compatible version...)

    I am kind of surprised: even if many Unix variants run on PowerPC or other architectures with a different byte order, those machines don't stop being PCs, right? The question was given in a written assignment! It was not a question that came up during a lecture!

    Since the original task was in German, I'll include it just to make sure nobody suspects an error in the translation:

        Frage: Nennen Sie 3 PC-Betriebssysteme.
        Antwort: Unix ist kein PC-Betriebssystem, viele Unix-Varianten sind nicht auf PC-Hardware lauffähig (AIX, HP-UX). Von Solaris gab es mal eine PC-Variante.

    (Translation: Question: Name 3 PC operating systems. Answer: Unix is not a PC operating system; many Unix variants do not run on PC hardware (AIX, HP-UX). There was once a PC variant of Solaris.)

    Read the article

  • How To Find The Reasons Why A Site Goes Online/Offline

    - by HollerTrain
    A website I manage has been going online and offline throughout the entire day today. I have no idea what is causing the issue, so I am seeking guidance on where to start. It is a WordPress-based site. Here is what I DO know:

    - I use a program that pings the server every minute and emails me when the server is not responding, so I know exactly when the site is online and offline.
    - The site went up and down between 8pm and 12am on 12.28, and around the 1am hour early on 12.29 (New York City timezone; all times below are in the same timezone).
    - At the time of the ups/downs I see a lot of strain on the memory usage. Look at the load average when the site is going online/offline (http://screencast.com/t/BRlfXkqrbJII).
    - I then ran this command to restart httpd (http://screencast.com/t/usVtYWZ2Qi), and the memory usage went down to this (http://screencast.com/t/VdTIy3bgZiQB).
    - An hour after I restarted httpd, the site went offline/online again, so restarting httpd didn't help much.
    - While the site was going offline/online, I ran the top command and got this (http://screencast.com/t/zEwr7YQj3). Here is a top command when the site is at its lowest (http://screencast.com/t/eaMfha9lbT - so this would be dubbed "normal").
    - Here is a bandwidth report (http://screencast.com/t/AS0h2CH1Gypq). The traffic doesn't seem to be that much (http://screencast.com/t/s7hrWNNic1K), but looking at the times the site goes up/down, this may be one of the reasons?
    - I have the (dv) Nitro package at Media Temple (http://mediatemple.net/webhosting/nitro/).

    So at this point I would request some help in trying to figure out what the cause of this is, and how I can go about pinpointing the issue. ANY HELP is greatly appreciated.
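    For pinpointing the memory pressure over time, one low-effort sketch: log a timestamped snapshot of top and free every minute via cron, then line the log up against the downtime emails (the file paths are arbitrary):

        # /etc/cron.d/load-snapshots
        * * * * * root { date; top -b -n 1 | head -20; free -m; } >> /var/log/load-snapshots.log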

    Read the article
