Search Results

Search found 19606 results on 785 pages for 'the thing'.


  • Reset locale in Debian Squeeze

    - by si2w
    I have problems with locales in Debian. I have tried many things, but none of them work for me:

        locale -a
        locale: Cannot set LC_CTYPE to default locale: No such file or directory
        C
        POSIX
        en_US.utf8

    I tried to set en_US.utf8, without success, with dpkg-reconfigure locales -plow:

        perl: warning: Setting locale failed.
        perl: warning: Please check that your locale settings:
            LANGUAGE = "en_US",
            LC_ALL = (unset),
            LC_CTYPE = "UTF-8",
            LANG = (unset)
        are supported and installed on your system.
        perl: warning: Falling back to the standard locale ("C").
        locale: Cannot set LC_CTYPE to default locale: No such file or directory
        locale: Cannot set LC_ALL to default locale: No such file or directory
        /usr/bin/locale: Cannot set LC_CTYPE to default locale: No such file or directory
        /usr/bin/locale: Cannot set LC_ALL to default locale: No such file or directory
        Generating locales (this might take a while)...
          en_US.UTF-8... done
        Generation complete.

    (the same perl warning block is then printed twice more)

    After a reboot, running a Perl script still gives:

        perl: warning: Setting locale failed.
        perl: warning: Please check that your locale settings:
            LANGUAGE = "en_US",
            LC_ALL = (unset),
            LC_CTYPE = "UTF-8",
            LANG = "en_US.UTF-8"
        are supported and installed on your system.
        perl: warning: Falling back to the standard locale ("C").

    Here is my /etc/default/locale config file:

        cat /etc/default/locale
        LANG=en_US.UTF-8
        LANGUAGE=en_US

    Any idea how to solve this (stupid) problem? Thanks
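
    The warnings above show LC_CTYPE = "UTF-8" in the environment, which is not a valid locale name by itself. A minimal sketch of the usual Debian fix, assuming en_US.UTF-8 is the target locale:

        # Enable the locale, regenerate, and write consistent defaults.
        sudo sed -i 's/^# *en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen
        sudo locale-gen
        sudo update-locale LANG=en_US.UTF-8 LANGUAGE=en_US
        # Also remove any 'export LC_CTYPE="UTF-8"' from shell profiles, or stop an
        # SSH client from forwarding it (SendEnv/AcceptEnv LC_* in the ssh configs).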

    Read the article

  • Problem with Outlook 2010 (SMTP AUTH LOGIN)

    - by Filipe YaBa Polido
    **IGNORE THIS QUESTION - SOLVED WITH A PYTHON SCRIPT available at: http://yabahaus.blogspot.com

    I have to connect one customer's Outlook 2010 to a remote server on which I have no rights and no way to talk to the sysadmin. This is the thing, after installing and reviewing the logs in Wireshark:

    Outlook Express:

        HELO machine
        AUTH LOGIN
        <username, base64 encoded>
        <password, base64 encoded>

    ...and mail goes through.

    Outlook 2010:

        HELO machine
        AUTH DIGEST-MD5
        <response from server>
        (Outlook sends just a *)
        AUTH LOGIN
        <password, base64 encoded>

    So I can send mail within the same domain, but not outside it; that gives me a "relay denied" message. My point is: why on earth doesn't Outlook 2010 send the username AND the password? It can never log in the right way :| With other versions of Outlook it works fine, and with OE it works great: it authenticates and allows sending mail to a different domain. I've googled and nothing worked. I'm pretty sure I'm not alone with this one. My last resort will be to configure a local proxy/server that relays to the original one :| Any help would be appreciated. Sorry for my bad English; it is not my native language. Thanks.
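
    For comparison, a working AUTH LOGIN exchange is easy to reproduce by hand. A sketch (server name and credentials are placeholders):

        printf 'user@example.com' | base64    # dXNlckBleGFtcGxlLmNvbQ==
        printf 'secret' | base64              # c2VjcmV0
        telnet mail.example.com 25
        # EHLO client
        # AUTH LOGIN
        # 334 VXNlcm5hbWU6            <- base64 for "Username:"
        # dXNlckBleGFtcGxlLmNvbQ==
        # 334 UGFzc3dvcmQ6            <- base64 for "Password:"
        # c2VjcmV0
        # 235 2.7.0 Authentication successful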

    Read the article

  • Windows 7 boots to black screen with blinking cursor

    - by murgatroid99
    I have an Alienware M17x that dual-boots Ubuntu 11.04 and Windows 7 Home Premium. Currently, the computer starts at the GRUB loader and will boot into Ubuntu, but if I try to boot into Windows, I immediately get a black screen with a blinking cursor in the upper left corner. The output of fdisk -l is:

        Device Boot      Start        End      Blocks  Id  System
        /dev/dm-0p1          1          5       40131  de  Dell Utility
        Partition 1 does not start on physical sector boundary.
        /dev/dm-0p2          6       1918    15360000   7  HPFS/NTFS
        Partition 2 does not start on physical sector boundary.
        /dev/dm-0p3  *    1918      64772  504878877+   7  HPFS/NTFS
        Partition 3 does not start on physical sector boundary.
        /dev/dm-0p4      64772      77827   104858625   5  Extended
        Partition 4 does not start on physical sector boundary.
        /dev/dm-0p5      64772      67204    19531008  83  Linux
        /dev/dm-0p6      67204      74498    58593536  83  Linux
        /dev/dm-0p7      74498      77577    24731648  83  Linux
        /dev/dm-0p8      77578      77827     2000128  82  Linux swap / Solaris

    I have used the Windows rescue CD and run the automatic error fixer until it finds no errors. I have run chkdsk /R on both the main Windows 7 partition (/dev/dm-0p3) and the recovery partition (/dev/dm-0p2). I set the main Windows 7 partition to be active. I also tried running these commands in the recovery console:

        bootrec /fixmbr
        bootrec /fixboot
        bootrec /rebuildbcd

    None of these helped, and the last set of commands deletes GRUB, which I then have to reinstall from Ubuntu. I think the last thing I did in Windows before this started was install the newest ATI driver for my video card. That would suggest using System Restore, and I actually had a restore point earlier (after the problem started), but after whatever I did, that restore point no longer appears in the list on the recovery disk, so I cannot do a System Restore. Is there anything else I can try to make Windows boot properly again?

    Edit: Running the suggested commands

        bootsect /nt60 c:
        bcdboot c:\windows /s c:

    was also ineffective.
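
    Since bootrec overwrites the MBR, a sketch of putting GRUB back from the Ubuntu side afterwards (assuming the underlying disk is /dev/sda; it shows up here as a dm device, so adjust to the real device):

        sudo grub-install /dev/sda   # reinstall GRUB to the MBR
        sudo update-grub             # regenerate grub.cfg; os-prober should re-detect Windows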

    Read the article

  • Write permissions on uploaded files - Linux, Apache, PHP

    - by letseatfood
    I am working on a PHP script that transfers files using FTP functions. It has always worked on my production server (which is a hosting service). The development server I have just set up (I am a novice with servers) is Debian Lenny with Apache2, PHP5, and MySQL5. The file transfer works correctly, but once the file has been written to the server, it has permissions of 600. This makes it impossible for me to view the file (a JPEG) in the web browser, as permission is denied. I have scoured the internet and even broken my server installation and reinstalled it trying to figure this out (which has been fun, nonetheless!). I know it is unwise to set 777 permissions on publicly accessible files, but even that will not solve the problem. The only thing that works is to chmod 777 thefile.jpg after it has been transferred, which is not a workable solution. I tried changing the owner of my site files to www-data per this post, but that also does not work. My user is mike, and it still does not work whether the owner of the files is mike or root. Would somebody point me in the right direction? Thanks! And, of course, let me know if I can clarify anything.
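
    A sketch of the usual diagnosis (paths are placeholders; the durable fix is normally to have the uploading code set the mode itself, e.g. PHP's ftp_chmod, or to relax the FTP daemon's umask):

        ls -l /var/www/uploads/thefile.jpg      # expect -rw------- (600), owned by the FTP user
        sudo chown mike:www-data /var/www/uploads/thefile.jpg
        sudo chmod 644 /var/www/uploads/thefile.jpg   # readable by Apache without resorting to 777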

    Read the article

  • TCP Server Memory management: #Connections Vs. #Requests

    - by Andrew
    Given that there is no theoretical limit to the number of concurrent TCP connections a Windows 2008 server can handle, the only constraint is that each connection consumes memory on the server. Unfortunately, memory is not unlimited (and I want to use only physical memory). For example, let's say the server has 2 GB of memory. Now there are two extreme cases:

    Case 1: If we allocate a 64 KB buffer for each connection (only to receive the incoming request), then 32,768 connections will consume all 2 GB of memory. That leaves no memory to queue or process incoming requests from those connections.

    Case 2: On the other hand, suppose a single connection (or very few) continuously keeps sending request buffers (for example, video streaming from one connection to another) and the server cannot process them in time. Those buffers pile up on the server and eventually occupy most of its memory, leaving no memory for new connections thereafter.

    This is the real dilemma in server design, and it has been bugging me badly for many days. If I can decide on the maximum request-buffer size per connection and the maximum number of queued requests per connection, then, based on available server memory, I can automatically set a limit on the maximum number of concurrent connections. How do I decide on these limits to achieve the best performance and throughput? I am just looking for perfect utilization of server resources. Are there any standard guidelines or empirical data someone can share with me, please?
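
    The trade-off can be made concrete with a back-of-the-envelope calculation (all numbers below are illustrative, not recommendations):

        # max_connections ~= usable_memory / (recv_buffer + queue_depth * request_size)
        usable=$((2 * 1024 * 1024 * 1024))   # 2 GB reserved for connection state
        recv=$((64 * 1024))                  # 64 KB receive buffer per connection
        queue=$((4 * 16 * 1024))             # e.g. allow 4 queued requests of 16 KB each
        echo $((usable / (recv + queue)))    # -> 16384 connections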

    Read the article

  • Understanding Red Hat's recommended tuned profiles

    - by espenfjo
    We are going to roll out tuned (and numad) on ~1000 servers, the majority of them being VMware guests on either NetApp or 3Par storage. According to Red Hat's documentation, we should choose the virtual-guest profile. What it does can be seen in tuned.conf. We are changing the IO scheduler to NOOP, as both VMware and the NetApp/3Par should do sufficient scheduling for us. However, after investigating a bit, I am not sure why they increase vm.dirty_ratio and kernel.sched_min_granularity_ns.

    As far as I understand, increasing vm.dirty_ratio to 40% means that for a server with 20 GB of RAM, 8 GB can be dirty at any given time, unless vm.dirty_writeback_centisecs is hit first. And while flushing these 8 GB, all IO for the application will be blocked until the dirty pages are freed. Increasing the dirty_ratio probably means higher write performance at peaks, as we now have a larger cache, but then again, when the cache fills, IO will be blocked for a considerably longer time (several seconds).

    The other question is why they increase sched_min_granularity_ns. If I understand it correctly, increasing this value decreases the number of time slices per epoch (sched_latency_ns), meaning running tasks get more time to finish their work. I can understand this being a very good thing for applications with very few threads, but for e.g. Apache or other processes with a lot of threads, would this not be counter-productive?
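
    A sketch of how to inspect and apply the profile on a test box before the rollout (the device name is an example):

        sysctl vm.dirty_ratio kernel.sched_min_granularity_ns   # current values
        cat /sys/block/sda/queue/scheduler                      # active IO scheduler
        tuned-adm profile virtual-guest                         # apply the profile
        tuned-adm active                                        # confirm it took effect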

    Read the article

  • Windows 7 on a 64-bit computer

    - by GetFree
    I read on Wikipedia that Windows 7 on a 64-bit PC needs twice as much RAM as on a 32-bit PC. I understand why that is: every number stored in memory takes 8 bytes rather than just 4. That, in simple terms, means your effective amount of RAM is halved when you use Windows 7 on a 64-bit computer. Now, I have an Intel Core 2 Duo laptop currently running Windows Vista (2 GB of RAM). My question is: since the Core 2 is a 64-bit architecture, if I upgrade to Windows 7, will my laptop work as if it had just 1 GB of RAM? Or, to put it another way: on a 64-bit PC with Windows 7, do you need twice as much RAM as on a 32-bit PC to get the same performance? If I am right, then I'd say it's a terrible deal to have a 64-bit computer with Windows 7 on it (I hope I am mistaken, though). Follow-up: After some answers, I'm realizing it's not the same thing to have a 32-bit OS on a 64-bit PC as a 64-bit OS on a 64-bit PC. Apparently, the claim that Windows 7 requires twice as much RAM on 64-bit architectures applies when both the OS and the PC are 64-bit. I'd like new answers to address this issue. Also, is it possible to have more than 4 GB of RAM on a 64-bit PC using a 32-bit version of Windows?

    Read the article

  • CUPS printer on virtual machine can be accessed via CUPS admin, but not by XP?

    - by SJaguar13
    I have a Zebra label printer connected to a Linux Mint virtual machine. It was set up with CUPS, and a Windows XP computer could then print to it via http://192.168.1.76:632/printers/labelprinter. That was all fine and dandy.

    I then hooked up a Fargo Pro L PVC card printer to a Windows XP virtual machine. I had to disconnect the label printer, as the server that hosts both virtual machines has only one parallel port. Now I have plugged the Zebra in again, and the Windows XP computer cannot print to it anymore. If I go to the CUPS admin panel from the Windows XP computer, I can see the printer, everything looks fine, and I can send it a test page, which prints. If I try to print from Windows, I get an error that the printer is not found/cannot connect to the server.

    The only other thing that changed was the firewall on the router, to allow remote desktop to another computer from outside the network, but all the firewall changes were for external use; nothing affected the IP addressing of the internal network. The Linux Mint VM also had a PDF printer that was shared with CUPS. That printer is also down. I tried setting up a new CUPS installation on another VM, and when I go to share it with XP, I get the same error. I don't know what to try. It has access, it can get to the admin page from that computer, it seems to be up and ready, but when Windows tries to connect, the printer isn't found, even though 4 days ago everything was fine. Any ideas?
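
    A sketch of quick checks on the CUPS host when a Windows client suddenly cannot connect (the queue name above is used as the example):

        lpstat -p labelprinter -d               # is the queue up and accepting jobs?
        sudo netstat -tlnp | grep cups          # is cupsd listening on the expected port and interface?
        grep -E '^(Listen|Port)' /etc/cups/cupsd.conf   # sharing must listen beyond localhost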

    Read the article

  • Gmail won't forward mail sent to myself.

    - by BHare
    I own a dedicated server with a domain; we'll say foobar.com. I use Google Apps to manage my email SMTP servers. Now, I don't want to check two Gmail inboxes: I have my own personal one, and then I have foobar.com's inbox from Google Apps. Naturally, the easiest thing to do is have all of foobar's email forwarded to my personal account, so I am only checking one inbox. This is all fine and dandy. I use msmtp with a wrapper that uses /etc/aliases. I have it set so any mail addressed to root (things from cron, etc.) will go to [email protected]. So when Google Apps (foobar.com) gets an email from the very address I have set up with it ([email protected]), it does not forward the message. This is a "feature" of Gmail/Google Apps, I suppose. How do I get around it? Workarounds? etc. I could just set my alias to my personal email, but I wanted a place where all foobar-related emails are archived in one place (Google Apps).

    Read the article

  • CLOSE_WAIT sockets burst - perhaps because of iptables settings?

    - by Fabrizio Giudici
    I have an Ubuntu 12.04 server virtual box where basically the installed software and configuration are the defaults, plus a Jetty 6 server that serves a few websites. To keep things simple I didn't install Apache httpd, and instead used iptables to expose Jetty (which runs on port 8080) on port 80. These are the results of /sbin/iptables -t nat -L:

        Chain PREROUTING (policy ACCEPT)
        target     prot opt source      destination
        REDIRECT   tcp  --  anywhere    localhost                      tcp dpt:http redir ports 8080
        REDIRECT   tcp  --  anywhere    Ubuntu-1104-natty-64-minimal   tcp dpt:http redir ports 8080

        Chain INPUT (policy ACCEPT)
        target     prot opt source      destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source      destination
        REDIRECT   tcp  --  anywhere    localhost                      tcp dpt:http redir ports 8080
        REDIRECT   tcp  --  anywhere    Ubuntu-1104-natty-64-minimal   tcp dpt:http redir ports 8080

        Chain POSTROUTING (policy ACCEPT)
        target     prot opt source      destination

    I must confess I have only a shallow understanding of how iptables works, in particular of the different kinds of chains. This setup works, but sometimes I get an explosion of sockets that stay permanently in the CLOSE_WAIT state. I know what this state means, but since I didn't write the code that manages the servlets (they are handled by Jetty), I can't fix the problem by patching my own code. Eventually the number of CLOSE_WAIT sockets builds up and makes the server unresponsive, so I have to restart Jetty. I've looked around for similar problems with CLOSE_WAIT, and only found cases related to the programmer's own code, or problems with Tomcat, not Jetty. I was wondering whether they could be related to a partially broken iptables configuration (the alternative is a bug in Jetty 6, but I first want to exclude other possible causes). Thanks.
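
    A sketch of how to watch the build-up and see which local port the stuck sockets belong to (if they cluster on 8080, the redirect path is implicated):

        netstat -tan | awk '$6 == "CLOSE_WAIT" {print $4}' | sort | uniq -c | sort -rn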

    Read the article

  • Mutual piping on Linux

    - by user21919
    I would like the output of A to be the input of B and, at the same time, the output of B to be the input of A. Is that possible? I tried the naïve thing: creating named pipes for A (pipeA) and B (pipeB) and then:

        pipeB | A | pipeA &
        pipeA | B | pipeB &

    But that does not work (pipeB stays empty, and switching the order does not help either). Any help would be appreciated.

    Example: Command A could be the compiled form of this C program:

        #include <stdio.h>

        int main() {
            printf("0\n");                       /* seed the cycle */
            int x = 0;
            while (scanf("%d", &x) != EOF) {
                printf("%d\n", x + 1);
            }
            return 0;
        }

    Command B could be the compiled form of this C program:

        #include <stdio.h>

        int main() {
            int x = 0;
            while (scanf("%d", &x) != EOF) {
                printf("%d\n", x + x);
            }
            return 0;
        }
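
    A sketch of one way this is commonly wired up: a single FIFO plus an ordinary pipe, with stdbuf to defeat stdio's block buffering on pipes (otherwise the two programs deadlock waiting for each other's buffered output):

        mkfifo pipeA
        stdbuf -oL ./A < pipeA | stdbuf -oL ./B > pipeA
        rm pipeA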

    Read the article

  • Unable to sync iPod Touch with my PC

    - by alex
    I'm trying to sync a first gen iPod Touch to my PC running Windows 7 64 bit. The problem is that whenever I connect the iPod, iTunes completely freezes (if I start iTunes after connecting the iPod it will simply hang until it's physically disconnected from the PC). I reinstalled iTunes thinking that it had been corrupted, but without any luck. I've had this problem with all the latest versions of iTunes. I've also tried using MediaMonkey and DoubleTwist. None of these apps see the iPod as being connected; DoubleTwist also freezes, just like iTunes. The really strange thing is that I was able to sync the iPod with this PC a while back, but I now seem to have lost that ability. I don't know what changed. Windows detects the device every time it's plugged in (I can see it in Device Manager and I can browse all photos on iPod as if it were a camera). Also, I can sync it to iTunes on Mac OS X without any major problems.

    Read the article

  • Virtualization in Ubuntu 9.10

    - by Jeff Dege
    I have an existing CentOS 5 installation. I would like to upgrade to Ubuntu. The thing is, I don't want to be down for as long as it will take to move my entire environment over - software installed, connectivity configured, etc. I'd like to take it one step at a time. But I don't really want to keep rebooting back and forth between the new OS and the old OS. That's what I did the last time I upgraded to a new OS, and it got old real fast. So, since my new motherboard is virtualization-ready (AMD Phenom II 945 quad-core), I figured I could create a virtual machine, under the new OS installation, that runs the old OS installation. The problem is that the documentation I've been able to find is pretty sparse. I've found a lot of possibilities, and little information on which of them is capable of doing what I want. I have a new Ubuntu 9.10 installation, and a second disk containing the CentOS 5 installation. And I don't know where to go next. Any help would be appreciated.
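
    A sketch of a KVM smoke test on the Ubuntu 9.10 host, booting the existing CentOS disk directly (assuming the second disk is /dev/sdb; adjust to the real device):

        egrep -c '(vmx|svm)' /proc/cpuinfo     # non-zero means hardware virtualization is available
        sudo apt-get install qemu-kvm
        sudo kvm -m 2048 -drive file=/dev/sdb  # boot the raw CentOS disk in a VM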

    Read the article

  • IIS 7.5 truncating POST body containing JSON data with ASP.NET MVC 3

    - by Guneet Sahai
    I'm facing a problem that I hope is a configuration issue with IIS, but it is causing a lot of trouble right now. Basically, I have a controller that accepts a JSON payload and does some processing. It generally works fine, but every now and then, when the system is under some load, I get an error. After some painful debugging, we figured out that the incoming JSON gets truncated, which causes the deserializer to fail. To narrow down the problem, we wrote a simple controller that accepts a JSON payload and tries to deserialize it, and just logs any failure. This works fine, but when I hit it using a load-testing tool (JMeter), it throws the same (truncation) error for a few requests. The number of failures increases as I increase the number of parallel connections; it starts showing at 150 concurrent requests. We are running IIS 7 on Windows 2008 Server with ASP.NET MVC 3, with a more or less default IIS configuration. More information is available in my question here: http://stackoverflow.com/questions/12662282/content-length-of-http-request-body-size
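
    A sketch of reproducing the load outside JMeter (the URL and payload file are placeholders):

        seq 1 200 | xargs -P 150 -I{} curl -s -o /dev/null -w '%{http_code}\n' \
            -H 'Content-Type: application/json' --data @payload.json \
            http://server/test/echo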

    Read the article

  • Contacts in Outlook 2003/2007, some questions

    - by Ernst
    A few questions about contacts in Outlook 2003/2007:

    1. If I create a distribution list and then select members, can I see fields other than the default ones? In 2007 there are radio buttons for 'Name only' and 'More columns', but the latter seems to return no results at all, regardless of which address book I choose. In 2003 there is no such option.

    2. Is there a plug-in that will break up the recipients (whether To, Cc, or Bcc) into groups of X and send as many mails as required? Our host allows only 50 recipients per mail and only 300 total recipients per 5 minutes. I know the email client blat has exactly this functionality, but it does not seem to be able to connect to the Exchange server to get the contacts needed. Could I perhaps set Outlook to send through blat, which then does the breaking up as necessary?

    3. Can I export only some of the contacts instead of all of them (or is there a plug-in for this)?

    Note that we send mail outside our organisation via our web host, where we've got a few mailboxes, and we use our Exchange (2000) server only internally; the few people who can send email to the outside world have an external mailbox as well as their Exchange account defined. I might be able to convince our general boss to simply give (some) people the ability to send outside via Exchange, but I might just as well not succeed. Alternatively, is there another program that can connect to Exchange to get the contacts (selected based on categories) and then send via SMTP in groups, with delays between the mails?

    Read the article

  • How do I keep a bridge enabled on a bonded interface?

    - by jlawer
    I'm working on setting up a pair of CentOS 6.3 servers that will run a couple of KVM VMs, and I have come across a problem setting up a bridge on a bond. I am using mode 4 (802.3ad) bonding across a pair of stacked Dell PowerConnect 5524 switches connecting to R320 servers. There are two links (one to each switch) that form a link aggregation group (802.3ad / LACP bonding). On top of the bond I have VLAN tagging. I've verified this is a problem with several other bonding modes as well, so it isn't just a mode 4 issue.

    I am testing what happens when one link is dropped (i.e. a switch dies, a cable breaks, etc.). If I don't have a bridge (for KVM), everything works fine and failover happens as expected. If I have the bridge enabled, it works fine until failover (unplugging a cable). When failover happens, /var/log/messages shows the slave link going down, followed within a second by:

        kernel: br1: port 1(bond0.8) entering disabled state

    The thing is, /proc/net/bonding/bond0 shows the link is up, as expected (simply with only one slave instead of two). If I plug the cable back in, it recovers and brings the bridge back to an enabled state. I actually tested this while a ping was running, and if the timing is right, a packet will actually leave the system after the link is lost but before the disabled message occurs. I assumed this disabled state was STP, but I have disabled STP in the bridge configuration and the issue still occurs; brctl showstp br1 still shows the link as disabled when the bond is running with one slave. I also switched between the NICs in the server (I have 2x Broadcom and 4x Intel); it doesn't matter which configuration I use. Does anyone know of a way to force the bridge to stay enabled, or why it detects the bond as disabled when it isn't?
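
    A sketch of the checks involved (interface names taken from the post; the sysfs write needs root):

        cat /proc/net/bonding/bond0                    # bond state after pulling a cable
        brctl show br1                                 # is bond0.8 still attached as a port?
        brctl showstp br1                              # per-port state, even with STP off
        echo 0 > /sys/class/net/br1/bridge/stp_state   # make sure STP is off via sysfs too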

    Read the article

  • Self-connecting printers

    - by Martin Cerny
    Hello, I work as an administrator in a small company using XP Professional on all computers and two servers with Windows Server 2003. Recently a very unusual problem occurred: one of the computers keeps connecting to all the printers on the network. It doesn't matter whether an administrator or a domain user logs in; as soon as somebody logs in, the computer connects all the printers. The printers are installed either on local computers or on the server, and shared. There is no logon script connecting the printers; I install them manually, and none of the other computers shows such behaviour. We have a printer that is installed on two computers, and both of them share it (I'm moving it to the server from a small PC that shared it up to now, but some computers still use the old connection), meaning this specific computer connects to that printer twice and can't use either of the connections.

    How do I prevent this self-connecting to all printers (none of the other computers has this problem)? If I delete them from the "Printers" folder, everything works fine until I reconnect, and the folder is once again full of all the printers we have. I solved the smaller problem - the computer is now capable of printing on all of the printers (it seems there were some registry issues); after cleaning the registry and reinstalling the printer, it seems to work just fine. But the second issue remains: the computer connects to all the printers on the network (when I remove one or several, they are reconnected right after the next log-in by any user).

    Read the article

  • Unable to recover data from a failed HDD

    - by Eslam Elyamany
    My HDD is failing (or is maybe totally dead). I've connected it via USB, but it doesn't appear in fdisk:

        Disk /dev/sda: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0xe9fb38fb

        Device Boot      Start        End      Blocks  Id  System
        /dev/sda1  *      2048     206847      102400   7  HPFS/NTFS/exFAT
        /dev/sda2       206848   40959999    20376576   7  HPFS/NTFS/exFAT
        /dev/sda4     40962046  976771071   467904513   5  Extended
        Partition 4 does not start on physical sector boundary.
        /dev/sda5     82913280   86910975     1998848  82  Linux swap / Solaris
        /dev/sda6     86913024  394113023   153600000   7  HPFS/NTFS/exFAT
        /dev/sda7     40962048   82913279    20975616  83  Linux
        /dev/sda8    394122708  976768064  291322678+   7  HPFS/NTFS/exFAT
        Partition 8 does not start on physical sector boundary.

    No sdc appears here, BUT it does appear under /dev/:

        root@ghost-lap:/home/ghost# ls /dev/sd*
        /dev/sda   /dev/sda2  /dev/sda5  /dev/sda8  /dev/sdb  /dev/sdc1   /dev/sdc2  /dev/sdc6  /dev/sdc8
        /dev/sda1  /dev/sda4  /dev/sda6  /dev/sda9  /dev/sdc  /dev/sdc10  /dev/sdc5  /dev/sdc7  /dev/sdc9

    It also appears in /proc:

        root@ghost-lap:/home/ghost# cat /proc/partitions
        major minor  #blocks  name
           8        0  488386584 sda
           8        1     102400 sda1
           8        2   20376576 sda2
           8        4          1 sda4
           8        5    1998848 sda5
           8        6  153600000 sda6
           8        8  291322678 sda8
           8        9   20975616 sda9
          11        0    1048575 sr0
          11        1      99136 sr1
           8       32  244198583 sdc
           8       33   14651248 sdc1
           8       34          1 sdc2
           8       37   15380480 sdc5
           8       38    4153344 sdc6
           8       39   48829536 sdc7
           8       40   48829536 sdc8
           8       41  110374551 sdc9
           8       42    1975963 sdc10

    And dmesg shows:

        [10604.777168] end_request: I/O error, dev sdc, sector 1
        [10604.817238] sd 26:0:0:0: [sdc] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        [10604.817243] sd 26:0:0:0: [sdc] Sense Key : Aborted Command [current]
        [10604.817248] sd 26:0:0:0: [sdc] Add. Sense: No additional sense information
        [10604.817253] sd 26:0:0:0: [sdc] CDB: Read(10): 28 00 00 00 00 02 00 00 06 00

    OK, now here is what I've tried:

    - testdisk to check for partitions - failed.
    - dd to copy data from /dev/sdcX - produces a strange output size; for example, /dev/sdc1 is about 15 GB, but the dd output grew to 62 GB+, so I had to cancel it.
    - safecopy successfully made an image of the partitions, but I can't fix the images, can't mount them, can't do anything with them.
    - ...and some other tools, which all failed.

    So, any ideas?
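
    Given the read errors on /dev/sdc, a sketch of the usual next step is GNU ddrescue, which tolerates bad sectors far better than plain dd (paths are placeholders; write the image to a different, healthy disk):

        sudo apt-get install gddrescue
        sudo ddrescue -d -r3 /dev/sdc /mnt/backup/sdc.img /mnt/backup/sdc.log
        # Then work on the image, not the dying disk; on newer util-linux:
        sudo losetup -fP /mnt/backup/sdc.img   # exposes partitions as /dev/loopNpM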

    Read the article

  • How do I reinitialise a failed RAID 5 drive using the terminal on Ubuntu Server

    - by Stephen
    I've just put together a new system, and part of that has been creating a software RAID 5 using mdadm in Ubuntu Server. I successfully got to the point where I created the array using:

        sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

    I left it to do its thing overnight, then used the following command to check on it:

        watch cat /proc/mdstat

    To which the following was returned:

        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md0 : active raid5 sdd1[4](S) sdc1[2] sdb1[1] sda1[0](F)
              5860535808 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/2] [_UU_]

        unused devices: <none>

    It appears that one drive has failed (and I'm not too savvy about why another is a spare). So, just to be sure that nothing else is amiss, I wanted to try to re-engage the failed drive. Can someone explain how I can do that, and what I should do with the spare (if anything)? And also, how do I know when synchronisation is complete? The tutorial I used to get this far is located here: http://sonniesedge.co.uk/2009/06/13/software-raid-5-on-ubuntu-904/ Many thanks!

    P.S. Here is some extra information that may help:

        sudo mdadm --detail /dev/md0
        /dev/md0:
                Version : 1.2
          Creation Time : Mon Jun 18 21:14:21 2012
             Raid Level : raid5
             Array Size : 5860535808 (5589.04 GiB 6001.19 GB)
          Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
           Raid Devices : 4
          Total Devices : 4
            Persistence : Superblock is persistent

            Update Time : Mon Jun 18 21:50:26 2012
                  State : clean, FAILED
         Active Devices : 2
        Working Devices : 3
         Failed Devices : 1
          Spare Devices : 1

                 Layout : left-symmetric
             Chunk Size : 512K

                   Name : myraidbox:0  (local to host myraidbox)
                   UUID : a269ee94:a161600c:fb1665e7:bd2f27b3
                 Events : 13

            Number   Major   Minor   RaidDevice State
               0       0        0        0      removed
               1       8       17        1      active sync   /dev/sdb1
               2       8       33        2      active sync   /dev/sdc1
               3       0        0        3      removed

               0       8        1        -      faulty spare   /dev/sda1
               4       8       49        -      spare          /dev/sdd1
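
    A sketch of the usual recovery sequence, assuming /dev/sda1 itself tests healthy (check it first, e.g. with smartctl, before trusting it again):

        sudo mdadm /dev/md0 --remove /dev/sda1   # drop the faulty member
        sudo mdadm /dev/md0 --add /dev/sda1      # re-add it; a rebuild starts from the spare pool
        watch cat /proc/mdstat                   # synchronisation is done when [_UU_] becomes [UUUU]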

    Read the article

  • Is there a historical computer peripherals or accessories museum or even just a current list?

    - by zimmer62
    Thinking about all the unique and different peripherals I've owned over the years, from ISA capture cards to parallel-port-controlled shutter glasses for 3D games, I've seen many computer accessories and peripherals come and go. The nostalgia of these things is a lot of fun. I tried to find some sort of historical timeline or list, but what mostly turned up was computers themselves. I'm more interested in the mice, the scanners, the weird adapters that shouldn't exist, short-run very rare products, strange devices from computer shows in the '80s and '90s... hardware you might find in a geek's basement that would be completely useless now, but was the coolest thing around when it was new. An example would be a drawing tablet I had for my TI-99 computer, or the audio tape player accessory for a C64 that let you save files to audio tapes, or an ISA card that did the same for PCs hooked up to a VCR. Remember the IBM PCjr upgrade kit that added a floppy drive, more memory, and the AT switch in the back? I'd love to find either a wiki or an already-assembled list containing many of these weird (or common) accessories. I've had so many over the years that I suppose I could start a wiki here if such a list doesn't already exist.

    Read the article

  • Is there a screen sharing/remote desktop app for Mac that lets you use a different host screen resolution?

    - by MarqueIV
    OK, there are tons and tons of questions about remote desktop for the Mac, and they're all being closed as duplicates. I, however, am specifically looking for one that will let me use a different resolution than the host, the way you can with Remote Desktop for Windows. For instance, when I connect to my 11" MacBook Air booted into Windows 7 from my quad-screen desktop, also booted into Win7, using Microsoft's Remote Desktop Client, it blanks out the screen on the notebook, then virtualizes the video across all four of my desktop's monitors at their native resolutions (2560x1600, 2 x 1920x1200 and 1600x1200), and the notebook now acts as if it has four physical monitors connected to it. All of this from a notebook that only has a 1366x768 native resolution. Even when running OS X on the client running RDC, while it doesn't support multiple monitors like its Windows counterpart, it still lets me run at the native resolution of the client screen, 2560x1600. Again, it just blanks out the host screen while doing so.

    However, when using the Mac's screen sharing, since that is just glorified VNC, it just mirrors what's already on the host's screen, meaning it will always be a single screen at a resolution of 1366x768. This of course makes sense, since VNC is a mirroring solution, not a video-virtualizing one like RDC, but it means that on my quad-monitor setup the remote window isn't even large enough to fill a single monitor, let alone four (unless you have a client that can scale it up, but that's video scaling; it's still only 1366x768).

    So what I'm looking for is a solution on the Mac that lets me do the same thing as RDC in a Windows environment. I don't care if I have to pay; I'd gladly pay several hundred dollars for this. I just need that specific feature.

    Note: People have suggested various VNC clients, but the VNC host still runs at 1366x768, so that will not work here. Ever. Also, people have suggested Synergy/Synergy+/Teleport and the like, which share the keyboard and mouse, not video - a completely different animal, unrelated to what I'm looking for.

    Read the article

  • Installing wxGTK-devel on CentOS 5.4

    - by jackhab
    I'm trying to install wxGTK-devel on CentOS, and since it's not in the base repo, I added RPMForge. But now I'm getting these broken dependencies. I don't want to start tampering with individual RPMs, because I suspect it will make things worse. I remember installing this package from RPMForge without a problem several months ago. Please advise.

        ...
        wxGTK-2.8.10-1.el4.rf.x86_64 from rpmforge has depsolving problems
          --> Missing Dependency: libgstreamer-0.8.so.1()(64bit) is needed by package wxGTK-2.8.10-1.el4.rf.x86_64 (rpmforge)
        wxGTK-2.8.10-1.el4.rf.x86_64 from rpmforge has depsolving problems
          --> Missing Dependency: libgstgconf-0.8.so.0()(64bit) is needed by package wxGTK-2.8.10-1.el4.rf.x86_64 (rpmforge)
        wxGTK-2.8.10-1.el4.rf.x86_64 from rpmforge has depsolving problems
          --> Missing Dependency: libgstinterfaces-0.8.so.0()(64bit) is needed by package wxGTK-2.8.10-1.el4.rf.x86_64 (rpmforge)
        Error: Missing Dependency: libgstreamer-0.8.so.1()(64bit) is needed by package wxGTK-2.8.10-1.el4.rf.x86_64 (rpmforge)
        Error: Missing Dependency: libgstinterfaces-0.8.so.0()(64bit) is needed by package wxGTK-2.8.10-1.el4.rf.x86_64 (rpmforge)
        Error: Missing Dependency: libgstgconf-0.8.so.0()(64bit) is needed by package wxGTK-2.8.10-1.el4.rf.x86_64 (rpmforge)
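
    The missing libraries are all from GStreamer 0.8, and the package name itself says el4 (an EL4 build being pulled onto CentOS 5), which may be why its dependencies cannot be satisfied. A sketch of how to see what the repos can still offer before touching individual RPMs:

        yum list available --enablerepo=rpmforge | grep -i gstreamer
        yum deplist wxGTK | grep -i gst        # what exactly the package wants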

    Read the article

  • Extract a section of a tgz file

    - by TRiG
    I have a 28.5 GB .tgz file which was created on the command line of a Linux computer, compressing one folder and all its many, many subfolders. I now want to extract a single sub-subfolder from that .tgz file, using 7-Zip on Windows Vista, and I can't see a way to do it. Opening the .tgz file in 7-Zip just shows the .tar file inside it; there doesn't seem to be any way to browse that .tar file and extract the section I want. I assume there is a way to do this, but I can't see it. Simply double-clicking on the .tar file brings up a progress bar which runs slowly until my computer complains it's running out of space; I imagine it's trying to extract the whole thing. Searching for "extract section of tgz" and "extract tgz subfolder" and similar found me a way to do it on the Linux command line, but no obvious way to do it on Windows. (Most results found were about extracting into a subfolder, not extracting a subfolder out of the archive.)
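
    For reference, a sketch of the Linux command-line route mentioned above (GNU tar; the internal path is a placeholder):

        tar -tzf archive.tgz | less                  # browse the archive to find the exact internal path
        tar -xzf archive.tgz path/inside/subfolder   # extract only that subtree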

    Read the article

  • Work from home on an iPad?

    - by Alex Basson
    The situation: My wife has a 13" MacBook Pro that she uses for email, Facebook, web surfing, and working from home. I'm about to buy us our first iPad. My wife's brother's computer just went belly-up, and she's contemplating giving him her MacBook and just using the iPad. The question is whether or not this is possible or realistic. Obviously, the iPad is well-suited for the email/web/Facebook tasks, but the working-from-home thing is an absolute must -- if the iPad can't handle that, it's a deal-breaker. For my wife, working from home means two things: Accessing her workplace computer's Windows Vista desktop, which she currently does via Remote Desktop. Editing Office documents locally, which she currently syncs via Dropbox. Being able to edit documents locally is important, because sometimes she will download documents and edit them when she doesn't have network access (e.g. on the subway). I'm more than happy to get a keyboard dock for her, so typing won't be an issue. Are there any iPad apps she can use to access her work computer and edit her work files? Thanks for any suggestions!

    Read the article

  • Doing port forwarding and then using it from within the internal network

    - by Ram Rachum
    We all know that by setting up port forwarding on a router, computers outside the network can, on the specified ports, reach internal computers by targeting the external IP. I'm now replacing a TP-Link router with a D-Link VDSL N 6740U router (and have copied over all the settings), and I've noticed that one thing stopped working: with the TP-Link router, you could access those port-forwarded computers from within the network, using the external IP, and be forwarded to the relevant computers. With the new D-Link router, this doesn't work.

    You might be wondering: why would you want to use the external IP and port forwarding when you're inside the internal network anyway and can just use the internal IP? One example of why this is useful: you have an iPhone app that connects to a service on an internal computer. The iPhone app knows to connect to the external IP. When you bring that iPhone inside the internal network (via WiFi), it suddenly stops working, because it can't reach the service via the external IP anymore.

    Is it an inherent property of D-Link routers that they do not allow accessing internal servers from inside the network by targeting the external IP? Or is there a way to make it work? (This behaviour is commonly called NAT loopback, or hairpin NAT.)
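
    A sketch of how to confirm the behaviour from inside the LAN (addresses are placeholders; 203.0.113.10 stands in for the external IP):

        curl -m 5 http://192.168.1.10/     # internal address: should always work
        curl -m 5 http://203.0.113.10/     # external address from inside: times out if the router lacks NAT loopback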

    Read the article
