Search Results

Search found 8984 results on 360 pages for 'space borg'.


  • zsh auto-complete event designator

    - by simont
    (See my previous question for additional context.) I'm migrating to zsh from bash, and using oh-my-zsh. Suppose my zsh history looks something like the following:

        git status
        git add -A
        git commit

    I want to be able to re-run git add -A. To do that, I could use !?git add, which should behave as documented:

        !?str[?]
        Refer to the most recent command containing str. The trailing '?' is necessary if this reference is to be followed by a modifier or followed by any text that is not to be considered part of str.

    The link for zsh event designators is here. Unfortunately, I can't do this: as I'm typing !?git add, the moment I hit the space, the shell auto-completes the command to the most recent command matching git (i.e., it auto-completes to git commit). I can't use the event designator properly because of this expansion on space. I assume this is an oh-my-zsh feature. I have no idea where to look, though - grepping for 'complet' in the oh-my-zsh source doesn't get me anywhere. My question: how do I turn off this feature? Or, if that's not something that's known, where should I be looking - if I were going to implement this auto-complete-on-whitespace behaviour, where would be a logical place to do so in the oh-my-zsh framework?
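
    What's described here sounds like zsh's magic-space widget, which performs history expansion as you type and which oh-my-zsh reportedly binds to the space key in its key-bindings library. A minimal sketch of checking and undoing that binding, assuming it is indeed the culprit (add the rebind to ~/.zshrc after oh-my-zsh is sourced):

        # Show what the space key is currently bound to;
        # "magic-space" would confirm the guess
        bindkey ' '

        # Rebind space to plain insertion so !?str is left
        # untouched until you actually hit Enter
        bindkey ' ' self-insert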


  • Vista Screen resolution Changes when Switching Users

    - by Benjol
    I regularly have a problem when switching between users in Vista - the screen resolution drops down to 800x600. If I try to set the resolution back to the maximum, it says nothing, but just keeps it at 800x600. I can set it back to an intermediate value; otherwise I have to either restart the PC, or sometimes, if I log off one of the users, I can then set the resolution back to max. Might it also have something to do with using sleep mode instead of performing regular shutdowns?

    I thought it might be related to the desktop background image taking up too much space, but even with plain colours the problem still occurs. There is an enormous thread on this here, but not really any answers. From what I can gather from that thread, it isn't related to any particular applications, nor limited to a particular make of graphics card or monitor, so I don't think that including hardware details is useful. This is a very annoying problem, as it screws up my desktop and screen layout every single time. Has anyone here experienced this problem or found a solution? I've noticed that Windows Update has tried to install nVidia updates and apparently they've failed on several occasions. Not sure if that is of any relevance or not.

    UPDATE - the last post on the thread:

        FWIW - I had this problem for about 2 years and wrote a number of posts in this thread in the past. It survived OS reinstallation and a change of practically all of my hardware, piece by piece (mobo, CPU, monitor, graphics card, memory, power supply...). I used to be affected by this annoying problem at least once every 24-48 hours. About 1.5 months ago I wiped out my 32-bit Vista Ultimate installation and installed Windows 7 Ultimate 64-bit from scratch, and never saw this problem again. GOOD RIDDANCE. Vista was a pathetic piece of __ that felt like a flashback to the old [horrible] NT4/Windows95 days. I was seriously considering switching over to Apple/Mac OS X if this problem persisted.


  • Free software for backing up an attached network drive

    - by Richard
    My wireless router comes with a USB connector which allows me to plug in an external hard drive, which then acts as Network Attached Storage. The problem is that I want to back up this hard drive to the external drive of another computer, so that if the NAS drive fails I don't lose everything. However, Windows 7 Backup refuses to include the NAS as a location to back up, and I can't fool it by mapping the NAS to a drive letter either. Google presents lots of pages on how to back up files to a NAS, but not the other way around. Can anyone advise me on free software which can do incremental backups of a NAS drive to an external drive attached to the computer it is running on? I'm aware of this question, but the top answers have one or more of the following issues: they aren't free; the free version cannot back up a NAS; they cannot do incremental backups; or they're just a script and therefore have limited other functionality (e.g. disk space management, scheduling, compression, etc.).
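
    For reference, the usual script-style answer (with the caveats about limited functionality noted above) is rsync pulling from the mounted NAS share; on Windows it would run under Cygwin or a port such as DeltaCopy. A minimal sketch, assuming the NAS share is mounted at /mnt/nas and the external drive at /mnt/backup - the paths are illustrative:

        #!/bin/sh
        # Incremental snapshot backup: unchanged files are hard-linked against
        # the previous run, so each snapshot costs only the changed data.
        SRC=/mnt/nas/
        DEST=/mnt/backup/$(date +%Y-%m-%d)
        LAST=/mnt/backup/latest

        rsync -av --delete --link-dest="$LAST" "$SRC" "$DEST" \
            && ln -sfn "$DEST" "$LAST"

    On the first run rsync warns that the --link-dest target doesn't exist yet and simply does a full copy; subsequent runs are incremental.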


  • CentOS vps is randomly rebooting

    - by develroot
    I have a CentOS VPS (Parallels Virtuozzo container) which has been running for months. However, a few days ago it started to randomly reboot itself, and I can't find out why. The biggest thing I don't understand is that it takes 40 minutes to come back up (as far as I can see in the logs):

        root ~ # cat /var/log/messages | grep shutdown
        Oct 11 13:52:11 vps27 shutdown[23968]: shutting down for system halt
        Oct 14 14:55:17 vps27 shutdown[30662]: shutting down for system halt
        Oct 15 06:21:23 vps27 shutdown[20157]: shutting down for system halt

    And notice the time difference between the shutdown and xinetd's start:

        Oct 15 06:21:23 vps27 shutdown[20157]: shutting down for system halt
        Oct 15 06:21:24 vps27 init: Switching to runlevel: 0
        Oct 15 06:21:27 vps27 saslauthd[30614]: server_exit : master exited: 30614
        Oct 15 06:21:38 vps27 named[30661]: shutting down
        Oct 15 06:21:47 vps27 exiting on signal 15
        Oct 15 07:04:34 vps27 syslogd 1.4.1: restart.
        Oct 15 07:05:06 vps27 xinetd[1471]: xinetd Version 2.3.14 started with libwrap loadavg labeled-networking options compiled in.
        Oct 15 07:05:06 vps27 xinetd[1471]: Started working: 0 available services

    And here's what Parallels Power Panel says in terms of status changes:

        Time                      Old Status   Status Obtained
        Oct 15, 2011 06:23:46 AM  Mounted      Down
        Oct 15, 2011 06:22:31 AM  Running      Mounted
        Oct 14, 2011 03:06:48 PM  Starting     Running
        Oct 14, 2011 03:06:23 PM  Down         Starting
        Oct 14, 2011 03:06:08 PM  Mounted      Down
        Oct 14, 2011 02:58:24 PM  Running      Mounted

    For some reason it gets into the Mounted state and then restarts itself. The only problem I can imagine is disk space utilisation, which is now 84%. But can that be a reason for a system halt?

        Time                      Category  Details
        Oct 15, 2011 07:08:33 AM  Resource  counter_disk_share_used yellow alert: current value 82, soft limit 85, hard limit 95
        Oct 15, 2011 06:27:23 AM  Resource  counter_disk_share_used yellow alert: current value 82, soft limit 85, hard limit 95
        Oct 15, 2011 06:23:50 AM  Resource  counter_disk_share_used green alert: current value 0, hard limit 0
        Oct 14, 2011 03:06:24 PM  Resource  counter_disk_share_used yellow alert: current value 83, soft limit 85, hard limit 95
        Oct 14, 2011 03:05:50 PM  Resource  counter_disk_share_used green alert: current value 0, hard limit 0
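
    In a Virtuozzo/OpenVZ container, unexplained restarts are often caused by hitting the container's resource limits rather than by disk space, and those failures are counted per resource inside the guest. A quick check worth running (the beancounters file is standard in Virtuozzo/OpenVZ guests; a non-zero failcnt points at the limit being hit):

        # Any non-zero value in the failcnt column means a resource
        # limit was exceeded at some point
        cat /proc/user_beancounters

        # And confirm which filesystem is actually at 84%
        df -h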


  • New Static Website with Hosted DNS alternating 502, 503 and Page Does Not Exist Errors

    - by Dave
    This has become an increasingly frustrating ordeal. I'm mostly a web developer, so forgive me if I am using improper terminology here. I have a client who had purchased a domain at JustHost. We built him a website and have it on our own server space. Now, I'm mostly used to dealing with GoDaddy, where it is simple enough to manage DNS records and point the A record to our server IP, where Apache on our end deals with the domains via name-based virtual hosts. But for some reason, having set this up with JustHost, when attempting to go to the domain name I either get a 502 or 503 error or "webpage does not exist". Now, I know that the basic functionality of the webpage must be working, because I can access the index etc. straight through my server's www data (i.e. [server-ip]/website_folder). I was on the phone with JustHost technical support for over three hours yesterday, and the best I could get was "That's really weird..." I've checked my logs and there doesn't seem to be anything coming through to my end. Does anybody have an idea of what's going on here? I would love for it to be a problem on my end, because JustHost doesn't seem capable of helping further. Any help is greatly appreciated, thanks. I forgot to mention that we have several other sites up and running and completely accessible.
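
    One hedged reading of the symptoms: if nothing appears in the destination server's logs, the A record may still point at JustHost, whose own front end (not the destination Apache) would then be producing the 502/503. A first step that narrows this down is checking what the record actually resolves to from outside both networks - a minimal sketch, where the domain and the nameserver are placeholders to substitute:

        # Does the domain resolve to your server's IP from the outside?
        dig +short example-client-domain.com A

        # Which nameservers are authoritative, and do they agree?
        dig +short example-client-domain.com NS
        dig +short example-client-domain.com A @ns1.example-nameserver.com   # query one NS directly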


  • How to organise storage for media content such as video and music?

    - by thor
    Currently, we have a single server hosting all content: music, video and software. This content is downloaded by users over HTTP. Now free space is coming to an end and we are exploring different ways of extending our storage capacity. We want to do it cheaply, simply and reliably (protected from disk/server faults). Currently, we see two ways:

    1. Add a couple of cheap servers with 4 disks each (RAID 1?) and run some distributed filesystem on top, like GlusterFS. Pros: hopefully we will see all our disks as a single flat filesystem, just dump content into it and be done. Cons: could be tricky to configure and to handle faults.

    2. Add a couple of cheap servers, all running HTTP servers. Each piece of content (be it a music file or a video) is placed on two randomly selected servers. Pros: we don't have to deal with RAID, as content is duplicated; a single server failure does not bring down any part of the content; doubled distribution capacity (as any single file can be downloaded from either of the two servers hosting it). Cons: requires some scripting for distribution of content and for adding/removing servers.

    Do we miss any other ways? Which of the aforementioned options seems to be the best?
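
    For a sense of what option 1 involves, this is roughly the setup for a two-node replicated GlusterFS volume - hostnames and paths are illustrative, and the exact syntax varies between GlusterFS releases:

        # On either node, after installing glusterfs-server on both:
        gluster peer probe server2

        # One replicated volume across the two nodes (each file stored twice,
        # which mirrors the duplication option 2 would script by hand)
        gluster volume create media replica 2 server1:/export/media server2:/export/media
        gluster volume start media

        # Clients then mount it as one flat filesystem
        mount -t glusterfs server1:/media /mnt/media

    Note that replica 2 across servers gives roughly the same redundancy as option 2 without per-file scripting, at the cost of the operational complexity the question already flags.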


  • Setting cfengine3 class based on command output

    - by gnomie
    This question is very similar to "How can I use the output of a command in cfengine3", but the answer does not apply in my case, I believe. I want to update a git repository via "git pull" and, based on whether that led to changes, trigger some follow-up action. Simplified: if there were something like "match output and set class" via some body if_output_matches, I would want to use something like this:

        bundle agent updateRepo
        {
        commands:
          "/usr/bin/git pull"
            contain => setuidgiddir_sh("$(globals.user)","$(globals.group)","$(target)"),
            classes => if_output_matches("Already up-to-date.","no_update");

        reports:
          no_update::
            "nothing updated";
        }

        body contain setuidgiddir_sh(owner,group,folder)
        {
        exec_owner => "$(owner)";
        exec_group => "$(group)";
        useshell   => "true";
        chdir      => "$(folder)";
        }

    So, is it possible to use the output of a - possibly expensive - command and base some decision on that? The execresult function is no good choice for me, as a) the pull may become expensive at times (it is not recommended for expensive commands by the cfengine3 reference) and b) it does not allow specifying user, group or working directory, which is important in my case: the repository is in user space and not owned by root.
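
    One workaround that stays within stock cfengine3 is to push the output-matching into a wrapper script and let the promise's classes body map the script's return code to a class. A hedged sketch of the wrapper (the path and return-code convention are assumptions, not cfengine requirements):

        #!/bin/sh
        # git-pull-wrapper: exit 0 if the pull changed something,
        # 2 if already current, 1 on error. $1 is the repo directory.
        cd "$1" || exit 1
        out=$(/usr/bin/git pull 2>&1) || exit 1
        case "$out" in
            *"Already up-to-date."*) exit 2 ;;
            *)                       exit 0 ;;
        esac

    In the commands promise, a classes body using kept_returncodes => { "2" } and repaired_returncodes => { "0" } together with promise_kept/promise_repaired class lists should then raise a class such as no_update, while the existing contain body keeps the user/group/chdir handling.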


  • RTorrent stops my torrents, crashes, and I have to manually re-add torrents and start them. How can I stop this cycle of doom?

    - by meder
    I cannot use Transmission, which is the best torrent client, because it's banned from one of the trackers I use, so I am forced to use rtorrent. Normally I am all for command-line programs; however, rtorrent (0.8.6/0.12.6) is simply frustrating. It is not intuitive, IMO. I have 400 MB left on the HD and that's more than enough to download this 200 MB AVI. rtorrent stops the download, though. It says [CLOSED] near the torrent. I do Ctrl-R and that invokes the local hash check, and after that's done rtorrent simply dies (wtf?). Afterwards, it gives me:

        rtorrent: TrackerManager::send_later() m_control->set() == DownloadInfo::STOPPED

    So that leads me to open rtorrent again, then hit Enter, type /home/meder/file.avi.torrent, down arrow, and Ctrl-S. I am looking for multiple things:

    1. How can I tell rtorrent not to worry about disk space? Again, it stops the torrent if my HD only has 400 MB when the torrent I'm downloading is 200 MB (there are no other torrents).
    2. Why does Ctrl-R fail hard? Why does it cause rtorrent to crash?
    3. If #2 is not solvable, can someone provide an easy way to add a torrent and start it - a more efficient method than typing the torrent name, hitting the down arrow, and Ctrl-S?
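
    On point 1, rtorrent's stock configuration includes a low-diskspace watchdog that closes torrents when free space drops below a threshold, which fits the [CLOSED] behaviour; on point 3, a watch directory automates loading and starting. A sketch of the relevant ~/.rtorrent.rc lines - thresholds and paths are illustrative, and the option names are the 0.8.x config syntax:

        # Close torrents only when free space drops below 100 MB
        # (raise or lower the threshold to taste)
        schedule = low_diskspace,5,60,close_low_diskspace=100M

        # Auto-load and start any .torrent dropped into ~/watch
        schedule = watch_directory,5,5,load_start=~/watch/*.torrent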


  • How to serve media across home network?

    - by TK Kocheran
    I'm looking to share my media across my home network. My router fully supports running a DLNA server, but I don't know if it'd be better to run the server from my main server computer instead of from the router, as the router would have to operate off a network share while my server can operate directly off the files. Here's what I need to serve, in order of importance: ISO 1:1 DVD rips (4-8 GB files), MP4/H.264-encoded videos, MKV videos, MP3 files, and JPEG/CR2 images.

    Maybe I'm completely ludicrous for wanting to push full DVD files across my network, but in reality I would assume that only the parts of the actual file needed (i.e. the menu, and the main video payload for the main title) would be served at any one time. Plus, encoding takes time and precious disk space, so why not stream it 1:1? ;) Does anyone know of the best way to accomplish this? The main goal is to serve it to the Logitech Revue downstairs, and the secondary goal is to serve it to other computers in the house. For music, I assume I could run a DAAP server, but I don't think the Revue supports that (and I can't exactly throw together an app that does it just yet).
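
    If the server box runs Linux, one lightweight way to test the server-side option is minidlna (later renamed ReadyMedia), which DLNA clients such as the Revue should see as a standard UPnP media source. A sketch of the config (paths and the friendly name are illustrative; whether raw DVD ISOs actually play is client-dependent - many DLNA renderers won't accept the ISO container at all):

        # /etc/minidlna.conf - one line per media root, with a type prefix:
        # A=audio, V=video, P=pictures
        media_dir=V,/srv/media/video
        media_dir=V,/srv/media/dvd-isos
        media_dir=A,/srv/media/music
        media_dir=P,/srv/media/photos
        friendly_name=HomeMediaServer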


  • HP DL185 - very slow disk read speed

    - by fistameeny
    Hi, I have an HP DL185 G6 server (12-disk model) with the following spec:

        Quad-core Xeon 2.27GHz, 6GB RAM
        HP P212 RAID controller with battery backup
        2 x 128GB 15K SAS 3.5" (RAID-1 for the operating system)
        4 x 750GB 7.5K SAS 3.5" (RAID-5 for the data, 2TB usable space)

    The operating system is Ubuntu Server 9.10. Both arrays have been formatted as ext4. We are finding that the read speed of the RAID-5 array is poor. Disk test results below:

        sudo hdparm -tT /dev/cciss/c0d1p1
        /dev/cciss/c0d1p1:
         Timing cached reads:   15284 MB in  2.00 seconds = 7650.18 MB/sec
         Timing buffered disk reads:   74 MB in  3.02 seconds = 24.53 MB/sec

    For info, the RAID-1 array performs as follows:

        sudo hdparm -tT /dev/cciss/c0d0p1
        /dev/cciss/c0d0p1:
         Timing cached reads:   15652 MB in  2.00 seconds = 7834.26 MB/sec
         Timing buffered disk reads:  492 MB in  3.01 seconds = 163.46 MB/sec

    We thought this was because, with no battery, the read/write cache was disabled. We have bought and installed the battery backup, used the HP bootable CD to change the cache settings to 50% read / 50% write, and checked that the cache is enabled on the drives and the controller. Is there something I'm missing?
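
    One cheap thing worth ruling out before blaming the controller is the kernel's read-ahead setting for the array device, which is often far too small for a RAID-5 stripe. A hedged sketch (device names as in the question; the value is a starting point to experiment with, not a recommendation):

        # Show the current read-ahead in 512-byte sectors
        sudo blockdev --getra /dev/cciss/c0d1

        # Try a much larger read-ahead and re-run the benchmark
        sudo blockdev --setra 16384 /dev/cciss/c0d1
        sudo hdparm -t /dev/cciss/c0d1p1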


  • Update a bootable OS X drive clone with rsync?

    - by Joe
    The question: is it possible to keep a bootable backup drive clone of OS X updated with rsync? If rsync is not a viable option, are there alternatives?

    The setup: one internal Samsung 840 SSD [120 GB] in use as my OS X 10.8 boot disk on a recent-model Mac Mini. I have successfully cloned that drive with Disk Utility to a 125 GB partition of another HDD in an external USB 3 enclosure, and at that point I am able to boot to it.

    The goal: as my last system went out in a fiery blaze, taking much valuable data with it, I have a new respect for a proper backup solution and really want to do this right. My goal is to achieve an automated differential backup/update from disk A to disk B while, most importantly, maintaining bootability on the external drive. And I would prefer to do this differentially to minimize stress on the drives. Hence rsync was the first thing to come to mind.

    What I have tried: following along with Jamie Zawinski's differential Mac bootable backup solution. Running this manually initially worked - I tested it with only a very minuscule file change and everything was fine, the external booted and all. Now, after subsequent passes, rsync fails, throwing errors particularly relating to updating 'boot.efi' (I'm not at the machine currently; I will update with the precise log message once I return home). Is this a drive partition size issue - does rsync require more space? If it can't be done, are there any alternatives? I've heard whispers of dd.
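
    For reference, the general shape of such a clone pass with Apple's bundled rsync looks roughly like this - a sketch, not jwz's exact script, assuming the stock OS X rsync 2.6.9 and a destination volume named Backup:

        # -a archive; -x stay on the boot volume; -E copy extended
        # attributes/resource forks (Apple's rsync); --delete mirrors removals
        sudo rsync -axE --delete \
            --exclude=/Volumes --exclude=/dev --exclude=/private/var/vm \
            / /Volumes/Backup/

    Two hedged notes: the clone typically also needs blessing (bless --folder /Volumes/Backup/System/Library/CoreServices) to remain bootable, and if rsync keeps choking on boot.efi, the usual alternatives given for this job are Carbon Copy Cloner or SuperDuper, which do incremental bootable clones and handle OS X's special files.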


  • What is the peak theoretical WiFi G user density? [closed]

    - by Bigbio2002
    I've seen a few WiFi capacity planning questions, and this one is related, but hopefully different enough not to be closed. Also, this relates specifically to 802.11g, but a similar question could be asked about N. In order to squeeze more WiFi users into a space, the transmit power on the APs needs to be reduced and the APs squeezed closer together. My question is: how far can you practically take this before the network becomes unusable? There will come a point where the transmit power is so weak that nobody will actually be able to pick up a connection, or clients will be constantly roaming between APs spaced a few feet apart as they walk around. There are also only 3 non-overlapping channels to use, which is a factor to consider.

    After determining the peak AP density, one can multiply by users-per-AP, which should be easier to find out. After factoring all of this in and running some back-of-the-envelope calculations, I'd like to be able to get a figure of "XX users per 10 sq ft" or something. This can be considered the physical limit of WiFi, and it would keep people from asking about getting 3,000 people in a ballroom onto WiFi. Can anyone with WiFi experience chime in, or better yet, provide some calculations for a more accurate figure?

    Assumptions: let's assume an ideal environment with no reflection (think of a big, square, open room, with the APs spaced out on a plane), APs placed on the ceiling so humans won't absorb the waves, and the only interference coming from the APs themselves and the devices. As for what devices specifically, that's irrelevant for the first part of the question (AP density, so only channel and transmit power should matter).

    User experience: Wikipedia states that Wireless G has about 22 Mbps maximum effective throughput, or about 2.75 MB/s. For the purpose of this question, anything below 100 KB/s per user can be deemed a poor user experience. As for roaming, I'll assume the user is standing in the same place, so hopefully that will be a non-issue.
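
    As a starting point for the back-of-the-envelope figure the question asks for, here is one hedged calculation under the stated assumptions (each channel is a single shared collision domain, throughput divides evenly, and no contention overhead beyond the 22 Mbps effective figure):

        22 Mbps effective throughput  ≈ 2750 KB/s per AP
        2750 KB/s ÷ 100 KB/s per user ≈ 27 users per AP
        3 non-overlapping channels → 3 co-located APs ≈ 81 users per coverage cell

    So anything much above ~80 simultaneously active users per contiguous 802.11g coverage area needs either more spectrum (5 GHz, 802.11n) or physical separation between cells, and real-world contention would push the practical number lower still.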


  • This .mpg video clip doesn't play well

    - by Roey
    I've installed K-Lite Mega Codec Pack v6.9.0 with playback essentials, without a player. My default and only media player is Windows Media Player. Here is the clip's media info:

        General
          Complete name      : D:\Users\Roey\Downloads\B384MV.mpg
          Format             : MPEG-PS
          File size          : 273 MiB
          Duration           : 4mn 59s
          Overall bit rate   : 7 643 Kbps

        Video
          ID                 : 224 (0xE0)
          Format             : MPEG Video, Version 2
          Format profile     : Main@High
          Format settings    : BVOP: No, Matrix: Default, GOP: M=1, N=15
          Duration           : 4mn 57s
          Bit rate           : 7 363 Kbps (variable; nominal 9 000 Kbps)
          Width x Height     : 1 920 x 1 080 pixels (16:9)
          Frame rate         : 25.000 fps
          Color space        : YUV, 4:2:0 chroma subsampling, 8 bits, progressive
          Compression mode   : Lossy
          Bits/(Pixel*Frame) : 0.142
          Stream size        : 261 MiB (96%)

        Audio
          ID                 : 192 (0xC0)
          Format             : MPEG Audio Version 1 Layer 3
          Mode               : Joint stereo
          Duration           : 4mn 59s
          Bit rate           : 128 Kbps, constant
          Channel(s)         : 2 channels
          Sampling rate      : 44.1 KHz
          Compression mode   : Lossy
          Stream size        : 4.56 MiB (2%)

    When I play it there is no sound (just a little "kahhhh" noise every 10-20 seconds) and the frames move very slowly - it "jumps" frames. A blue tray icon [FFa] "ffdshow audio decoder" pops up with the following details:

        Input:  MP3, stereo, 44100 Hz (libavcodec)
        Output: PCM, stereo, 44100 Hz, 16-bit integer

    Any help will be much appreciated. Thanks.


  • How to resolve virtual disk degraded in Windows Server 2012

    - by harrydev
    I am using the new Storage Spaces feature in Windows Server 2012. I have the following disks:

        FriendlyName  CanPool OperationalStatus HealthStatus Usage       Size
        ------------  ------- ----------------- ------------ -----       ----
        PhysicalDisk2 False   OK                Healthy      Auto-Select 2.73 TB
        PhysicalDisk3 False   OK                Healthy      Auto-Select 2.73 TB
        PhysicalDisk4 False   OK                Healthy      Auto-Select 2.73 TB
        PhysicalDisk5 False   OK                Healthy      Auto-Select 2.73 TB

    There is also a separate OS disk. The above disks are part of a single storage pool:

        FriendlyName OperationalStatus HealthStatus IsPrimordial IsReadOnly
        ------------ ----------------- ------------ ------------ ----------
        Pool         OK                Healthy      False        False

    Within this storage pool some virtual disks are defined, see below:

        FriendlyName ResiliencySettingName OperationalStatus HealthStatus IsManualAttach Size
        ------------ --------------------- ----------------- ------------ -------------- ------
        Docs         Mirror                OK                Healthy      False          500 GB
        Data         Mirror                Degraded          Warning      False          500 GB
        Work         Mirror                Degraded          Warning      False          2 TB

    Now the virtual disks are all normal 2-way mirrors, but two of them are degraded. This is probably because one of the physical disks was offline for a short period of time. However, now the virtual disks cannot be repaired, even though all physical disks are healthy and there is plenty of available space in the storage pool. This I cannot understand, so I was hoping for some help on how to resolve it. Below I have listed the full output from the Get-VirtualDisk cmdlet for the "Work" disk:

        ObjectId                          : {XXXXXXXX}
        PassThroughClass                  :
        PassThroughIds                    :
        PassThroughNamespace              :
        PassThroughServer                 :
        UniqueId                          : XXXXXXXX
        Access                            : Read/Write
        AllocatedSize                     : 412316860416
        DetachedReason                    : None
        FootprintOnPool                   : 824633720832
        FriendlyName                      : Work
        HealthStatus                      : Warning
        Interleave                        : 262144
        IsDeduplicationEnabled            : False
        IsEnclosureAware                  : False
        IsManualAttach                    : False
        IsSnapshot                        : False
        LogicalSectorSize                 : 512
        Name                              :
        NameFormat                        :
        NumberOfAvailableCopies           : 0
        NumberOfColumns                   : 2
        NumberOfDataCopies                : 2
        OperationalStatus                 : Degraded
        OtherOperationalStatusDescription :
        OtherUsageDescription             : Disk for data being worked on (not backed up)
        ParityLayout                      :
        PhysicalDiskRedundancy            : 1
        PhysicalSectorSize                : 4096
        ProvisioningType                  : Thin
        RequestNoSinglePointOfFailure     : True
        ResiliencySettingName             : Mirror
        Size                              : 2199023255552
        UniqueIdFormat                    : Vendor Specific
        UniqueIdFormatDescription         :
        Usage                             : Other
        PSComputerName                    :
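
    A hedged first step in this situation is to ask Storage Spaces to re-synchronise the mirror explicitly and then watch the repair job; these cmdlets are part of the Server 2012 Storage module, though whether they clear this particular degraded state depends on why the resync never started:

        # Kick off a repair of the degraded virtual disk
        Repair-VirtualDisk -FriendlyName "Work"

        # Watch the resync job progress
        Get-StorageJob

        # Afterwards, re-check health
        Get-VirtualDisk -FriendlyName "Work" |
            Select-Object FriendlyName, OperationalStatus, HealthStatus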


  • Solr startup script problem

    - by Camran
    I have installed Solr and it finally works... I now have problems setting it up to start automatically with a start command. I have followed a tutorial and created a file called solr in /etc/init.d. Here is that file:

        #!/bin/sh -e
        # SOLR auto-start
        #
        # description: auto-starts solr engine
        # processname: solr-production
        # pidfile: /var/run/solr-production.pid

        NAME="solr"
        PIDFILE="/var/run/solr-production.pid"
        LOG_FILE="/var/log/solr-production.log"
        SOLR_DIR="/etc/jetty"
        JAVA_OPTIONS="-Xmx1024m -DSTOP.PORT=8079 -DSTOP.KEY=stopkey -jar start.jar"
        JAVA="/usr/bin/java"

        start() {
            echo -n "Starting $NAME... "
            if [ -f $PIDFILE ]; then
                echo "is already running!"
            else
                cd $SOLR_DIR
                $JAVA $JAVA_OPTIONS 2> $LOG_FILE &
                sleep 2
                echo `ps -ef | grep -v grep | grep java | awk '{print $2}'` > $PIDFILE
                echo "(Done)"
            fi
            return 0
        }

        stop() {
            echo -n "Stopping $NAME... "
            if [ -f $PIDFILE ]; then
                cd $SOLR_DIR
                $JAVA $JAVA_OPTIONS --stop
                sleep 2
                rm $PIDFILE
                echo "(Done)"
            else
                echo "can not stop, it is not running!"
            fi
            return 0
        }

        case "$1" in
            start)
                start
                ;;
            stop)
                stop
                ;;
            restart)
                stop
                sleep 5
                start
                ;;
            *)
                echo "Usage: $0 (start | stop | restart)"
                exit 1
                ;;
        esac

    Whenever I do solr -start I get this error:

        Error occurred during initialization of VM
        Could not reserve enough space for object heap

    I think this is because of the file above... Also, here is where I have Solr installed: /var/www/solr, and here is where the start.jar file is located: /var/www/start.jar. Help me out if you know what's causing this. Thanks. BTW: the OS is Ubuntu 9.10.
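
    That JVM error means Java could not reserve the requested heap up front, which fits the -Xmx1024m in JAVA_OPTIONS being larger than the memory actually available to the box (or to the VPS/container it runs in). A hedged tweak to the two script variables worth trying - the 256m value is a starting point, and the directory change reflects where the question says start.jar actually lives:

        # A smaller heap ceiling; raise it again once Solr starts cleanly
        JAVA_OPTIONS="-Xmx256m -DSTOP.PORT=8079 -DSTOP.KEY=stopkey -jar start.jar"

        # start.jar is under /var/www, not /etc/jetty, so cd there instead
        SOLR_DIR="/var/www"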


  • Setting up a network where packets are traced

    - by Marcus
    My situation is the following: I have an internet connection which is shared between people. More or less obviously, people are using it to download illegal stuff. Since I'm the owner of the connection, I want to avoid being sued. I don't want to prevent people from doing the things they want, but I want to be legally safe. Now, I have relatively little competence in network administration, so I was wondering:

    1. Is it possible to set up a network where the source and destination of the packets are logged? I would use this to prove, in case of a lawsuit, that the traffic was coming from a given machine.
    2. If the idea is feasible, is there any wireless router on which I can install Linux, where I can install the packet sniffer?
    3. How much space could the logs take (containing only the timestamp/source/destination), per GB of traffic? A very rough estimate would be very helpful.
    4. If a machine on my network is sending BitTorrent packets to a certain IP, would this log reflect the time, source IP and destination IP? I assume that the torrent payload itself would be encrypted and undecryptable.

    Am I missing something? Is there a better strategy? Any pointer to documentation would be helpful as well - in that case, I would use this as a starting point.
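
    On points 1, 2 and 4: any Linux-based router (e.g. one running OpenWrt) can do this with a netfilter LOG rule on forwarded traffic; syslog then records a timestamp, source and destination for each entry. A minimal sketch, assuming iptables and a syslog daemon are present on the router:

        # Log each *new* forwarded connection
        # (logging every single packet would be enormous)
        iptables -A FORWARD -m state --state NEW -j LOG --log-prefix "CONN: "

        # Entries land in the kernel log / syslog, one line per connection, e.g.
        # "CONN: IN=br-lan OUT=eth0 SRC=192.168.1.23 DST=203.0.113.5 ..."

    On point 3, a rough estimate: logging per connection at ~100-150 bytes per line, volume depends on connection count rather than GB transferred - BitTorrent is connection-heavy, but even thousands of lines per session is only megabytes. Logging every packet instead would be on the order of 50-100 MB per GB of traffic (assuming an average packet size around 1 KB).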


  • Is my OCZ SSD aligned correctly? (Linux)

    - by Barney Gumble
    I have an OCZ Agility 2 SSD with 40 GB of space. I use it as the system drive in Debian Linux (Squeeze) and, in my opinion, it's really fast. But I've read a lot about aligning partitions and filesystems, and I'm not sure if I succeeded in aligning the partitions correctly. Maybe the SSD could be even faster?? ;-) I use ext4, and here is the output of fdisk -cul:

        Disk /dev/sda: 40.0 GB, 40018599936 bytes
        255 heads, 63 sectors/track, 4865 cylinders, total 78161328 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: [...]

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048    73242623    36620288   83  Linux
        /dev/sda2        73244670    78159871     2457601    5  Extended
        /dev/sda5        73244672    78159871     2457600   82  Linux swap / Solaris

    My partitions were created by the Debian Squeeze setup assistant, so I didn't pay attention to the details of partitioning. But now I think maybe the installer didn't align them correctly? Actually, 2048 looks good to me (better than odd values like 63 or so), but I have no idea... ;-) Help, please! According to an "SSD Alignment Calculator" I found on the web, OCZ SSDs have a NAND erase block size of 512 kB and a NAND page size of 4 kB. 2048 is divisible by 4 and 512. So are the partitions aligned correctly?
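
    For what it's worth, the arithmetic that matters is byte offsets, not the sector numbers alone. A short check under the stated 512 kB erase-block / 4 kB page geometry:

        sda1: 2048 sectors x 512 B = 1,048,576 B = 1 MiB
              1 MiB / 4 KiB = 256 (exact); 1 MiB / 512 KiB = 2 (exact) -> aligned
        sda5: 73,244,672 sectors x 512 B = 37,501,272,064 B
              73,244,672 is a multiple of 1024 sectors (= 512 KiB) -> aligned
        sda2: starts at 73,244,670, which is not even 4 KiB-aligned, but the
              extended partition is only a container and holds no data itself

    So by this reckoning both data-bearing partitions sit on erase-block boundaries.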


  • dpkg broken while upgrading Debian Etch to Lenny

    - by artvolk
    Good day! While trying to upgrade a box to Lenny, it seems I've broken things. The upgrade replaced libc and glib, and after that dpkg seems to be broken. I can run apt-get, but it gets a segmentation fault from dpkg:

        # apt-get -f install
        Reading package lists... Done
        Building dependency tree... Done
        0 upgraded, 0 newly installed, 0 to remove and 316 not upgraded.
        9 not fully installed or removed.
        Need to get 0B of archives.
        After unpacking 0B of additional disk space will be used.
        /bin/sh: line 1:  4606 Segmentation fault      /usr/sbin/dpkg-preconfigure --apt
        E: Sub-process /usr/bin/dpkg received a segmentation fault.

    I can log in via SSH, but even ls is not working:

        # ls
        Segmentation fault

    Is there anything I can do remotely via SSH?

        # ldd /bin/ls
        linux-gate.so.1 =>  (0xffffe000)
        librt.so.1 => /lib/tls/i686/cmov/librt.so.1 (0xb7fc8000)
        libacl.so.1 => /lib/libacl.so.1 (0xb7fc2000)
        libselinux.so.1 => /lib/libselinux.so.1 (0xb7fac000)
        libc.so.6 => /lib/i686/cmov/libc.so.6 (0xb7e51000)
        libpthread.so.0 => /lib/tls/i686/cmov/libpthread.so.0 (0xb7e3f000)
        /lib/ld-linux.so.2 (0xb7fd8000)
        libattr.so.1 => /lib/libattr.so.1 (0xb7e3b000)
        libdl.so.2 => /lib/i686/cmov/libdl.so.2 (0xb7e37000)
        libsepol.so.1 => /lib/libsepol.so.1 (0xb7df6000)

    It seems I've temporarily fixed it with:

        # touch /etc/ld.so.nohwcap

    From here: http://saintaardvarkthecarpeted.com/blog/archive/2005/08/_etc_ld_so_nohwcap.html


  • How do I increase the buffer size for domain sockets in OS X 10.6

    - by Chas. Owens
    In Linux I have no problem dumping tons of data into a domain socket, but the same code on OS X 10.6.2 blows up after about 65 records. The socket reader code looks like:

        #!/usr/bin/perl

        use strict;
        use warnings;
        use IO::Socket;

        unlink "foo";

        my $sock = IO::Socket::UNIX->new(
            Local   => 'foo',
            Type    => SOCK_DGRAM,
            Timeout => 600,
        ) or die "Could not create socket: $!\n";

        while (<$sock>) {
            chomp;
            print "[$_]\n";
        }

    And the client code looks like:

        #!/usr/bin/perl

        use strict;
        use warnings;
        use IO::Socket;

        my $sock = IO::Socket::UNIX->new(
            Peer    => 'foo',
            Type    => SOCK_DGRAM,
            Timeout => 600,
        ) or die "Could not create socket: $!\n";

        for my $i (1 .. 1_000_000) {
            print $sock "$i\n" or die $!;
        }
        close $sock;

    The error message I get is "No buffer space available at write.pl line 15." It seems fairly obvious that there is a difference in buffer size between Linux and OS X, but I don't know how to set it on OS X (or what the possible negative side effects might be).
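
    On OS X the datagram limits for Unix-domain sockets live in sysctl, under the net.local.dgram namespace inherited from BSD. A hedged sketch of inspecting and raising them (names as found on 10.6; the value is illustrative, and the change does not persist across reboots unless added to /etc/sysctl.conf):

        # Current Unix-domain datagram limits
        sysctl net.local.dgram.maxdgram net.local.dgram.recvspace

        # Raise the receive buffer so the reader can absorb bursts
        sudo sysctl -w net.local.dgram.recvspace=65536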


  • Windows 7 shows a drive as full in summary but files shown on drive are very small

    - by Rob
    I have a drive partitioned so it is seen by Windows as two drives: C:\ and D:\. Windows 7 shows D:\ as full in the graphical summary in 'My Computer': the bar graph indicates that nearly all of the drive's capacity, 108 GB, is used. So I go into the D:\ drive to look at the files. I see several folders. I select them all and use right-click, Properties, to count their size, expecting the value to be about the same as what Windows reports in the summary, i.e. nearly 108 GB. But the Properties window shows the files are very small - KBs and MBs, nowhere near 108 GB. One of the folders is a backup, but its size is very small. I've set the folder options to show all system files and hidden files too, and counted these in the Properties. Something invisible is holding the space. What is happening here? I'm afraid to delete anything in case it removes valuable backups. Have I got huge backups here? Why can't I see them? How do I see them?
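
    A common culprit for exactly this symptom is space reserved for System Restore points / shadow copies (the System Volume Information folder), which Explorer's Properties won't count even with hidden files shown. A hedged check from an elevated command prompt - the resize line is only an example of how the cap could be reduced, and shrinking it discards older restore points:

        rem How much space shadow copies may use, and are using, on D:
        vssadmin list shadowstorage /for=D:

        rem If that is the culprit, the cap can be reduced, e.g.:
        vssadmin resize shadowstorage /for=D: /on=D: /maxsize=5GB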


  • Preventing h/w RAID cards from dropping slow JBOD disks

    - by Kevin
    I'm considering buying a used SAS h/w RAID card for externally attaching HDDs to an HP ProLiant I'm setting up. However, I only require RAID functionality on some of the drives. Theoretically it should be simple to JBOD the other drives, but some of them are inexpensive SATA disks and probably lack configurable TLER. I'd like to know, prior to actually ordering a RAID card, whether RAID cards typically still enforce dropping of disks that do not respond within a few seconds, even if the disk is in a JBOD, and whether there is any way to disable this.

    Ideally it would be nice to be able to select certain SAS ports to be pass-through, bypassing the RAID engine entirely and just acting as an HBA for those ports. I know I could buy a separate SAS HBA, but that seems like a waste of money and is also impractical, as it's a 1U server, so space is extremely limited. My question, then, is whether the functionality I'm looking for (pass-through on certain ports, or at least JBOD drives not getting dropped due to slow responses) is typical of proper h/w RAID cards such as the PERC 5/E etc. I've browsed through the latter's manual but unfortunately, as with most user manuals, it states the obvious and doesn't state the unobvious. Thanks for any info, Kevin
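
    As an aside on the TLER point: whether a given SATA disk supports a bounded error-recovery time can be checked, and sometimes set, with smartctl via the SCT Error Recovery Control feature. A hedged sketch - not all consumer drives accept the setting, and some forget it on power cycle:

        # Read the current error-recovery timeouts (read, write),
        # reported in tenths of a second
        smartctl -l scterc /dev/sda

        # Ask for 7 seconds read / 7 seconds write, TLER-style
        smartctl -l scterc,70,70 /dev/sda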


  • Filesystem fragmentation on the level of set of files

    - by trismarck
    A file is stored in blocks by the file system; the block is the smallest amount of data the file system can assign to store a file. The classical definition of a fragmented file is that the file is stored in blocks that are 'scattered' (physically non-contiguous) around the hard drive. What I want to ask about is a second type of fragmentation I've come up with. Let's suppose we install a program with very many files. When the program starts, it always loads the contents of those files sequentially. Now, even if the hard disk is defragmented, there is still a possibility that the files (though not the blocks making up each file) will be scattered around the disk, and thus the program's launch time will be longer. Actually, launch time could even be lengthened by defragmenting the disk, as the defragmentation process not only glues fragmented files together but also moves some files around to optimize the free space chunks. The questions: is the type of fragmentation I mentioned relevant for the file system? Is it possible to remedy this kind of fragmentation and, if yes, how would you do it? Also, I'm not sure if this question should belong on Super User or Server Fault (I guess filesystem fragmentation is more important in the server environment).
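
    On ext filesystems this inter-file locality can at least be measured: filefrag (from e2fsprogs) reports each file's physical extents, so listing the first extent of every file in a program's directory shows how scattered the set is. A hedged sketch - the sed line number assumes filefrag -v's usual output layout, with the first extent row on the fourth line:

        # Print the first physical extent of each file; widely scattered
        # physical offsets indicate poor inter-file locality
        for f in /opt/someprog/*; do
            [ -f "$f" ] || continue
            echo "== $f"
            sudo filefrag -v "$f" | sed -n '4p'
        done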


  • KVM and libvirt: How to configure a new disc device to an existing VM?

    - by initall
    I've got an Ubuntu 9.04 server running two VMs. In /etc/libvirt/qemu/machine1.xml, two disk devices are defined like this:

        <devices>
          <emulator>/usr/bin/kvm</emulator>
          <disk type='file' device='disk'>
            <source file='/vserver/machine1/disk0.qcow2'/>
            <target dev='hda' bus='ide'/>
          </disk>
          <disk type='file' device='disk'>
            <source file='/vserver/machine1/disk1.qcow2'/>
            <target dev='hdb' bus='ide'/>
          </disk>

    I need more storage space in at least one of the devices and thought about adding a third hdc device by simply adding one in the same style as above and reorganising my mount structure (the virtual sizes of the current qcow2 files are unfortunately limited). My problem is that reloading libvirtd and restarting the VM do not result in a new visible device (checked with fdisk). I'm aware of extending an existing qcow2 file (converting to raw format, cat-ing/adding the new one, using something like gparted) - but only as a last resort. Hopefully it's something very simple I'm missing?
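
    For what it's worth, editing the XML file directly is the usual trap here: libvirt works from its own in-memory copy of the domain definition, so changes under /etc/libvirt/qemu are ignored until the domain is redefined. A hedged sketch of the workflow (disk path and size are illustrative; on the older libvirt shipped with 9.04, virsh define may be needed instead of virsh edit):

        # Create the new backing file
        qemu-img create -f qcow2 /vserver/machine1/disk2.qcow2 100G

        # Edit the definition the supported way (add the <disk> block for hdc)...
        virsh edit machine1
        # ...or re-read the hand-edited file:
        # virsh define /etc/libvirt/qemu/machine1.xml

        # A full shutdown/start (not just a guest reboot) picks up the new device
        virsh shutdown machine1
        virsh start machine1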


  • I go to www.facebook.com, but a completely different site appears.

    - by Rosarch
    I am going to www.facebook.com, but the site that appears is totally different. This occurs in Chrome 6+, IE9, and FF 3+. What could be happening? Is this a security risk? Facebook was working just fine, then all of a sudden this happened.

    Update: The same problem occurs on my netbook.

    Update 2: When I go to http://69.63.189.11/, it works fine. So... a DNS problem? How do I fix it?

    Update 3: I checked the hosts file:

        # Copyright (c) 1993-2009 Microsoft Corp.
        #
        # This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
        #
        # This file contains the mappings of IP addresses to host names. Each
        # entry should be kept on an individual line. The IP address should
        # be placed in the first column followed by the corresponding host name.
        # The IP address and the host name should be separated by at least one
        # space.
        #
        # Additionally, comments (such as these) may be inserted on individual
        # lines or following the machine name denoted by a '#' symbol.
        #
        # For example:
        #
        #      102.54.94.97     rhino.acme.com          # source server
        #       38.25.63.10     x.acme.com              # x client host

        # localhost name resolution is handled within DNS itself.
        #       127.0.0.1       localhost
        #       ::1             localhost

    Looks like it hasn't been altered.
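
    Since the hosts file is clean and the raw IP works, a sensible next step is to see which DNS server is answering for the name and to clear the local resolver cache. A hedged sketch using standard Windows commands (208.67.222.222 is just a well-known public resolver, OpenDNS, used here for comparison):

        rem What does the configured resolver return for the name?
        nslookup www.facebook.com

        rem Compare against a known public resolver
        nslookup www.facebook.com 208.67.222.222

        rem Flush the local DNS cache and retry the browser
        ipconfig /flushdns

    If the configured resolver returns a different address from the public one, the problem is upstream (router DNS settings or a DNS-changing infection on the router) rather than on the PCs themselves.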

