Search Results

Search found 700 results on 28 pages for 'nas'.

Page 17/28 | < Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >

  • do.call(rbind, list) for an uneven number of columns

    - by h.l.m
    I have a list in which each element is a named character vector of differing length. I would like to bind the data as rows so that the column names line up, creating a new column when extra data appears and filling in NAs where data is missing. Below is a mock example of the data I am working with: x <- list() x[[1]] <- letters[seq(2,20,by=2)] names(x[[1]]) <- LETTERS[c(1:length(x[[1]]))] x[[2]] <- letters[seq(3,20, by=3)] names(x[[2]]) <- LETTERS[seq(3,20, by=3)] x[[3]] <- letters[seq(4,20, by=4)] names(x[[3]]) <- LETTERS[seq(4,20, by=4)] The line below is what I would normally do if I were sure that the format of each element was the same: do.call(rbind,x) I was hoping that someone had come up with a nice little solution that matches up the column names and fills in the blanks with NAs, adding new columns whenever the binding process turns up new ones.
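    One minimal sketch of that idea, assuming every list element is a named character vector as in the mock data above (the variable names are arbitrary): take the union of all names and index each vector by it, so anything missing comes back as NA.

      all_names <- unique(unlist(lapply(x, names)))                      # union of every column name seen
      rows <- lapply(x, function(v) setNames(v[all_names], all_names))   # absent names become NA entries
      result <- do.call(rbind, rows)                                     # one row per list element, columns aligned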

    Read the article

  • Degraded RAID5 and no md superblock on one of the remaining drives

    - by ark1214
    This is actually on a QNAP TS-509 NAS. The RAID is basically a Linux RAID. The NAS was configured with RAID 5 with 5 drives (/md0 with /dev/sd[abcde]3). At some point, /dev/sde failed and drive was replaced. While rebuilding (and not completed), the NAS rebooted itself and /dev/sdc dropped out of the array. Now the array can't start because essentially 2 drives have dropped out. I disconnected /dev/sde and hoped that /md0 can resume in degraded mode, but no luck.. Further investigation shows that /dev/sdc3 has no md superblock. The data should be good since the array was unable to assemble after /dev/sdc dropped off. All the searches I done showed how to reassemble the array assuming 1 bad drive. But I think I just need to restore the superblock on /dev/sdc3 and that should bring the array up to a degraded mode which will allow me to backup data and then proceed with rebuilding with adding /dev/sde. Any help would be greatly appreciated. mdstat does not show /dev/md0 # cat /proc/mdstat Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] md5 : active raid1 sdd2[2](S) sdc2[3](S) sdb2[1] sda2[0] 530048 blocks [2/2] [UU] md13 : active raid1 sdd4[3] sdc4[2] sdb4[1] sda4[0] 458880 blocks [5/4] [UUUU_] bitmap: 40/57 pages [160KB], 4KB chunk md9 : active raid1 sdd1[3] sdc1[2] sdb1[1] sda1[0] 530048 blocks [5/4] [UUUU_] bitmap: 33/65 pages [132KB], 4KB chunk mdadm show /dev/md0 is still there # mdadm --examine --scan ARRAY /dev/md9 level=raid1 num-devices=5 UUID=271bf0f7:faf1f2c2:967631a4:3c0fa888 ARRAY /dev/md5 level=raid1 num-devices=2 UUID=0d75de26:0759d153:5524b8ea:86a3ee0d spares=2 ARRAY /dev/md0 level=raid5 num-devices=5 UUID=ce3e369b:4ff9ddd2:3639798a:e3889841 ARRAY /dev/md13 level=raid1 num-devices=5 UUID=7384c159:ea48a152:a1cdc8f2:c8d79a9c With /dev/sde removed, here is the mdadm examine output showing sdc3 has no md superblock # mdadm --examine /dev/sda3 /dev/sda3: Magic : a92b4efc Version : 00.90.00 UUID : ce3e369b:4ff9ddd2:3639798a:e3889841 Creation Time : Sat Dec 8 15:01:19 2012 Raid Level : raid5 Used Dev Size : 1463569600 (1395.77 GiB 1498.70 GB) Array Size : 5854278400 (5583.08 GiB 5994.78 GB) Raid Devices : 5 Total Devices : 4 Preferred Minor : 0 Update Time : Sat Dec 8 15:06:17 2012 State : active Active Devices : 4 Working Devices : 4 Failed Devices : 1 Spare Devices : 0 Checksum : d9e9ff0e - correct Events : 0.394 Layout : left-symmetric Chunk Size : 64K Number Major Minor RaidDevice State this 0 8 3 0 active sync /dev/sda3 0 0 8 3 0 active sync /dev/sda3 1 1 8 19 1 active sync /dev/sdb3 2 2 8 35 2 active sync /dev/sdc3 3 3 8 51 3 active sync /dev/sdd3 4 4 0 0 4 faulty removed [~] # mdadm --examine /dev/sdb3 /dev/sdb3: Magic : a92b4efc Version : 00.90.00 UUID : ce3e369b:4ff9ddd2:3639798a:e3889841 Creation Time : Sat Dec 8 15:01:19 2012 Raid Level : raid5 Used Dev Size : 1463569600 (1395.77 GiB 1498.70 GB) Array Size : 5854278400 (5583.08 GiB 5994.78 GB) Raid Devices : 5 Total Devices : 4 Preferred Minor : 0 Update Time : Sat Dec 8 15:06:17 2012 State : active Active Devices : 4 Working Devices : 4 Failed Devices : 1 Spare Devices : 0 Checksum : d9e9ff20 - correct Events : 0.394 Layout : left-symmetric Chunk Size : 64K Number Major Minor RaidDevice State this 1 8 19 1 active sync /dev/sdb3 0 0 8 3 0 active sync /dev/sda3 1 1 8 19 1 active sync /dev/sdb3 2 2 8 35 2 active sync /dev/sdc3 3 3 8 51 3 active sync /dev/sdd3 4 4 0 0 4 faulty removed [~] # mdadm --examine /dev/sdc3 mdadm: No md superblock detected on /dev/sdc3. 
[~] # mdadm --examine /dev/sdd3 /dev/sdd3: Magic : a92b4efc Version : 00.90.00 UUID : ce3e369b:4ff9ddd2:3639798a:e3889841 Creation Time : Sat Dec 8 15:01:19 2012 Raid Level : raid5 Used Dev Size : 1463569600 (1395.77 GiB 1498.70 GB) Array Size : 5854278400 (5583.08 GiB 5994.78 GB) Raid Devices : 5 Total Devices : 4 Preferred Minor : 0 Update Time : Sat Dec 8 15:06:17 2012 State : active Active Devices : 4 Working Devices : 4 Failed Devices : 1 Spare Devices : 0 Checksum : d9e9ff44 - correct Events : 0.394 Layout : left-symmetric Chunk Size : 64K Number Major Minor RaidDevice State this 3 8 51 3 active sync /dev/sdd3 0 0 8 3 0 active sync /dev/sda3 1 1 8 19 1 active sync /dev/sdb3 2 2 8 35 2 active sync /dev/sdc3 3 3 8 51 3 active sync /dev/sdd3 4 4 0 0 4 faulty removed fdisk output shows /dev/sdc3 partition is still there. [~] # fdisk -l Disk /dev/sdx: 128 MB, 128057344 bytes 8 heads, 32 sectors/track, 977 cylinders Units = cylinders of 256 * 512 = 131072 bytes Device Boot Start End Blocks Id System /dev/sdx1 1 8 1008 83 Linux /dev/sdx2 9 440 55296 83 Linux /dev/sdx3 441 872 55296 83 Linux /dev/sdx4 873 977 13440 5 Extended /dev/sdx5 873 913 5232 83 Linux /dev/sdx6 914 977 8176 83 Linux Disk /dev/sda: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 * 1 66 530113+ 83 Linux /dev/sda2 67 132 530145 82 Linux swap / Solaris /dev/sda3 133 182338 1463569695 83 Linux /dev/sda4 182339 182400 498015 83 Linux Disk /dev/sda4: 469 MB, 469893120 bytes 2 heads, 4 sectors/track, 114720 cylinders Units = cylinders of 8 * 512 = 4096 bytes Disk /dev/sda4 doesn't contain a valid partition table Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sdb1 * 1 66 530113+ 83 Linux /dev/sdb2 67 132 530145 82 Linux swap / Solaris /dev/sdb3 133 182338 1463569695 83 Linux /dev/sdb4 182339 182400 498015 83 Linux Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sdc1 1 66 530125 83 Linux /dev/sdc2 67 132 530142 83 Linux /dev/sdc3 133 182338 1463569693 83 Linux /dev/sdc4 182339 182400 498012 83 Linux Disk /dev/sdd: 2000.3 GB, 2000398934016 bytes 255 heads, 63 sectors/track, 243201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sdd1 1 66 530125 83 Linux /dev/sdd2 67 132 530142 83 Linux /dev/sdd3 133 243138 1951945693 83 Linux /dev/sdd4 243139 243200 498012 83 Linux Disk /dev/md9: 542 MB, 542769152 bytes 2 heads, 4 sectors/track, 132512 cylinders Units = cylinders of 8 * 512 = 4096 bytes Disk /dev/md9 doesn't contain a valid partition table Disk /dev/md5: 542 MB, 542769152 bytes 2 heads, 4 sectors/track, 132512 cylinders Units = cylinders of 8 * 512 = 4096 bytes Disk /dev/md5 doesn't contain a valid partition table
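    For what it's worth, the usual way to get a superblock back in this situation is to re-create the array in place with --assume-clean, which leaves the data blocks untouched. The lines below are only a sketch of that idea: the device order, chunk size, layout and metadata version are copied from the --examine output above, and "missing" stands in for the removed /dev/sde3. Because --create rewrites superblocks, every value must be double-checked (ideally against dd images or overlays) before running anything.

      mdadm --create /dev/md0 --assume-clean --metadata=0.90 --level=5 \
            --chunk=64 --layout=left-symmetric --raid-devices=5 \
            /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3 missing
      cat /proc/mdstat   # md0 should come up active but degraded [5/4]
      # then inspect the contents read-only (mount -o ro, or pvscan/vgchange if LVM sits on top)
      # and back the data up before re-adding /dev/sde and rebuilding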

    Read the article

  • Czech Oracle is moving

    - by david.krch
    In case you are planning to visit us for a seminar or a meeting, it will be useful to know that at the end of last week we said goodbye to our building just off Václavské náměstí (Wenceslas Square), and from today you will find us by the Chodov metro station at this address: V Parku 2308/8, 148 00 Praha 11 (map). If this address sounds familiar, you are guessing right - we have moved in with our new colleagues who originally worked under the Sun Microsystems banner.

    Read the article

  • Caching in .NET Framework 4.0

    - by anobre
    Hello everyone, how are you? Today I am going to present an interesting change to caching compared with earlier versions. Introduction: Version 4.0 of the .NET platform brought a long-awaited structural change to its caching features. In version 3.5 (up to SP1), the platform provided a cache implementation through the System.Web.Caching namespace. In earlier versions the cache lived in the System.Web namespace, which created a dependency on the ASP.NET classes. In the new framework, the System.Runtime.Caching namespace brings together all the API needed to perform the tasks familiar from ASP.NET caching in earlier versions. System.Runtime.Caching and MemoryCache: Everything we need to work with caching, in web applications or otherwise, lives in the System.Runtime.Caching namespace. The basic unit of work is the abstract class ObjectCache, which provides the base for building custom cache implementations. As you would expect, the MemoryCache class is the implementation of the abstract ObjectCache class that stores its data in memory. public class MemoryCache : ObjectCache, IEnumerable, IDisposable Using the cache is very simple, quite similar to the previous model: ObjectCache cache = MemoryCache.Default; string fileContents = cache["filecontents"] as string; if (fileContents == null) { CacheItemPolicy policy = new CacheItemPolicy(); List<string> filePaths = new List<string>(); filePaths.Add("c:\\cache\\example.txt"); policy.ChangeMonitors.Add(new HostFileChangeMonitor(filePaths)); // Fetch the file contents. fileContents = File.ReadAllText("c:\\cache\\example.txt"); cache.Set("filecontents", fileContents, policy); } Label1.Text = fileContents; Extending the Cache: The whole caching mechanism can be customised through several approaches; ScottGu has written about this, and you can read his post via this link. Conclusion: Something long requested in earlier versions - the cache is finally available without a dependency on web-only assemblies. Perfect for anyone building other kinds of application, who can use this feature without dragging in unnecessary code. Cheers!

    Read the article

  • Popcorn Hour C-200 review

    Linux User & Developer: "Besides being a HDD player and a full gigabit ethernet network streaming NAS box, it's also a media server (including Samba, NFS, UpnP, Bonjour and myiHome) and plays host to the MSP Portal, not to mention other third-party media server apps."

    Read the article

  • Partner Training: Oracle Endeca Information Discovery 3.1

    - by Roxana Babiciu
    Please join us on November 19 (NAS, LAD) or November 20 (EMEA) to learn about the new release of Oracle Endeca Information Discovery 3.1. The training covers new and exciting self-service discovery capabilities for business users for unstructured analytics, a live demo, and Q&A with the Product Management team. OEID 3.1 offers unique sales opportunities to all Endeca, BI, and Data Warehousing partners. Don’t miss!

    Read the article

  • Showing ZFS some LOVE

    - by Kristin Rose
    L is for the way you look at us, and O because we're Oracle, but V is very, very, extraordinary, and E, well that's obvious… E is because Oracle's new Sun ZFS Storage Appliance is Excellent, and here at OPN, we like to spell out the obvious! If you haven't already heard, the Sun ZFS Appliance has "A simple, GUI-driven setup and configuration, solid price-performance and world-class Oracle support behind it. The CRN Test Center recommends the Sun ZFS Storage". Read more about what CRN said here. Oracle's Sun ZFS Appliance family delivers enterprise-class network attached storage (NAS) capabilities with leading Oracle integration, simplicity, efficiency, performance, and TCO. The systems offer an easy way to manage and expand your storage environment at a lower cost, with more efficiency, better data integrity, and higher performance when compared with competitive NAS offerings. Did we mention that setup, including configuration, will take you less than an hour since it all comes in one box and is so darn simple to use? So if you L-O-V-E what you're hearing about Oracle's Sun Z-F-S, learn more by watching the video below, and visiting any of our available resources. It Had to Be You, The OPN Communications Team

    Read the article

  • Transmission web client: strange characters in file names

    - by wizard
    I have a NAS: Operating system: Ubuntu Linux 12.04.1. Kernel and CPU: Linux 3.2.0-34-generic on x86_64. Transmission 2.51 (13280). On every operating system (Chrome browser), the Transmission web client shows the symbol "&#8203;" (a zero-width space) after each dot in file names, for example: "The.&#8203;Big.&#8203;Bang.&#8203;Theory.&#8203;S06E05.&#8203;720p.&#8203;WEB-DL??.&#8203;Rus.&#8203;Eng.&#8203;mkv 810.7 MB of 810.7 MB (100%)". How do I remove these characters?

    Read the article

  • Telnet does not give a response

    - by floorish
    Some wireless access points are acting a little weird, so I want to reboot them every couple of hours. Luckily there exists a security flaw which lets me login as root through telnet when using port 1111 (without username and password). Now I want to use that to let my QNAP NAS execute the reboot command through telnet every now and then. The problem is however that that telnet version doesn't give any response if I connect to the AP. The telnet I use on OSX works just fine but the one on the NAS not. BusyBox v1.01 (2012.06.14-18:35+0000) multi-call binary Usage: telnet [-a] [-l USER] HOST [PORT] When I execute telnet <HOST> 1111 nothing happens. I can send the escape character ^] which gives me the following options: Console escape. Commands are: l go to line mode c go to character mode z suspend telnet e exit telnet The only way to get some commands executed is by suspending telnet with z followed by some random command which isn't recognized. Then the prompt shows this: # telnet 192.168.1.5 1111 ^] Console escape. Commands are: l go to line mode c go to character mode z suspend telnet e exit telnet z continuing... asdf Illegal command. 00> After that I am able to communicate with the AP, but when I exit the telnet session and try the same again, the AP refuses to connect at all and it must be manually rebooted (looks like the telnet session isn't shut down properly on the AP). So the question is what commands should I execute in order to communicate with the AP using the Busybox telnet version of the QNAP? (No, can't use ssh unfortunately)
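    A possible workaround, only as a sketch and resting on the assumption (untested here) that the QNAP's BusyBox build also includes nc: skip telnet entirely and let nc shovel the command down the socket, since nc does no tty or option negotiation at all. The IP and port are the ones from the question.

      { sleep 2; echo "reboot"; sleep 2; } | nc 192.168.1.5 1111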

    Read the article

  • Openconnect for Cisco VPN doesn't recognize private key file - asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag

    - by Alexander Skwar
    I'm trying to use my Synology DS212 NAS box also act as VPN gateway to my companies VPN. Sadly, they only use Cisco ASA and to complicate stuff even further, we've got to use personal certificates (which is of course more secure, but more complicate to get going…). So I compiled OpenConnect v4.06 from http://www.infradead.org/openconnect/. As a very basic test, I tried to build a connection by manually invoking openconnect, passing along the key and cert files, like so: /lib/ld-linux.so.3 --library-path /opt/lib \ /opt/openconnect/sbin/openconnect \ --certificate=$VPN_CFG/alexander.crt \ --sslkey=$VPN_CFG/alexander.key \ --cafile=$VPN_CFG/Company_VPN_CA.crt \ --user=alexander --verbose <ip>:443 It fails :( Attempting to connect to <ip>:443 Using certificate file $VPN_CFG/alexander.crt Using client certificate '/[email protected]/OU=Company VPN' 5919:error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag:tasn_dec.c:1315: Loading private key failed (see above errors) Loading certificate failed. Aborting. Failed to open HTTPS connection to <ip> Failed to obtain WebVPN cookie When I run the same command with the same cert/key files on a Ubuntu 12.04 box, it works: openconnect \ --certificate=$VPN_CFG/alexander.crt \ --sslkey=$VPN_CFG/alexander.key \ --cafile=$VPN_CFG/Company_VPN_CA.crt \ --user=alexander --verbose <ip>:443 Attempting to connect to <ip>:443 Using certificate file $VPN_CFG/alexander.crt Extra cert from cafile: '/CN=Company AG VPN CA/O=Company AG/L=Zurich/ST=ZH/C=CH' SSL negotiation with <ip> Server certificate verify failed: self signed certificate Certificate from VPN server "<ip>" failed verification. Reason: self signed certificate Enter 'yes' to accept, 'no' to abort; anything else to view: yes Connected to HTTPS on <ip> GET https://<ip>/ […] Well… The error on the NAS is this: 5919:error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag:tasn_dec.c:1315: Any ideas, what's causing this? On Syno, I use OpenConnect 4.06. On Ubuntu, I just compiled and installed to a custom location OpenConnect 4.06 as well. Thanks, Alexander
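    One common cause of that particular ASN.1 "wrong tag" error is a private key stored in a PEM flavour (for example PKCS#8) that the stripped-down crypto library on the NAS cannot parse, even though a full desktop build can. Whether that is what is happening here is an assumption, but it is cheap to test by converting the key to a traditional RSA PEM on the Ubuntu box and retrying; alexander-rsa.key is just a name for the converted copy.

      openssl rsa -in $VPN_CFG/alexander.key -out $VPN_CFG/alexander-rsa.key   # re-encode as a traditional RSA PEM
      # then retry openconnect on the NAS with --sslkey=$VPN_CFG/alexander-rsa.key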

    Read the article

  • btrfs: can I create a btrfs file system with data as JBOD and metadata mirrored?

    - by Yogi
    I am trying to build a home server that will be my NAS/media server as well as the XBMC front end. I am planning on using Ubuntu with btrfs for the NAS part of it. The current setup consists of a 1TB HDD for the OS etc. and two 2TB HDDs for data. I plan to have the 2TB HDDs used as a JBOD btrfs system to which I can add HDDs as needed later, basically growing the filesystem online. The way I set up the file system for testing was, while installing the OS, to have only one of the HDDs connected with btrfs on it mounted as /data, then add the second HDD to this file system later. When the second disk was added, btrfs made the data RAID 0, with metadata being RAID 1. However, this presents a problem: even if one of the disks fails I lose all my data (mostly media). Also, most of the time the server will be running without doing any disk access, i.e. the HDDs can be spun down; when an access request comes in, the current RAID 0 setup spins up both disks, whereas with a JBOD only the disk that holds the file needs to spin up, which should hopefully reduce wear on each disk. So, is there a way in which I can have btrfs set up such that metadata is mirrored but data stays in a JBOD formation? Another question I have is this: I understand that a full drive failure in JBOD will lose the data on that drive, but with metadata mirrored across all drives, will this help the filesystem correct errors that might creep in (e.g. bit rot), and is btrfs capable of doing this?
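    A sketch of how that split of profiles can be requested (device names are placeholders for the two 2TB drives; /data is the mount point from the question; the convert filters need a reasonably recent kernel):

      mkfs.btrfs -d single -m raid1 /dev/sdb /dev/sdc          # fresh filesystem: data unstriped ("single"), metadata mirrored
      # or convert the existing two-disk filesystem that came up with RAID 0 data:
      btrfs balance start -dconvert=single -mconvert=raid1 /data
      # further disks can still be added online later:
      btrfs device add /dev/sdd /data

    On the bit-rot question: with single-profile data, btrfs can still detect corruption through its checksums, but it only has a second copy to repair from for the mirrored metadata, not for the data itself.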

    Read the article

  • Protocol to mount fat32 network filesystem on Linux with ability to lock files (not advisory locks)

    - by nagul
    I have a fat32 filesystem sitting on a NAS storage device (nslu2) that I need to mount on my Ubuntu system. I've tried Samba and NFS mounts, but neither seems to support proper locking. More specifically, I am unable to save files to the mounted drive through GNUcash, KeepassX etc., which makes the share fairly useless. Is there a protocol that allows me to achieve this? Note that the NAS storage device is running a Linux OS, so I can run pretty much any protocol that has a Linux implementation. The only option I'm not looking for is to reformat the partition to ext3, which I'm not able to do due to other constraints. Alternatively, has anyone managed proper locking of a fat32 system over the network using Samba? Or is advisory locking the best you get with a network-mounted fat32 file system? I've thought of trying sshfs but I've not found any indication that this will solve my problem. Edit: Okay, maybe I can reformat the drive, but to any file system except ext3. The "unslung" nslu2 doesn't like more than one ext3 drive, and I already have one attached. So any solution that involves reformatting the drive to ntfs, hfs etc. is fine, as long as I can mount it on Linux and lock files.

    Read the article

  • Are SATA II and SATA 3.0 Gbps compatible?

    - by Johnny Maelstrom
    I am trying to check that if I buy a new internal HDD it will work in the NAS I am buying. Currently I'm confused about naming schemes and once that is resolved whether there is compatibility. I will gladly author this question to be more general if there is not already an article helping with the confusion of SATA naming and standards. I see similar, but not identical questions and will accept this as a duplicate if thought as such. The specifications on the eCommerce site for the NAS says, "Controller Interface Type Serial ATA-150", the product home page for the manufacturer says, "Compatible with SATA and SATA II HDD". The specifications on the eCommerce site for the hard drives say, "Interface Type Serial ATA-300", the product home page for the manufacturer says, "Interface SATA 3.0 Gbps" Wikipedia says many things about different naming conventions, the closest being, "SATA II 3.0 Gbit/s, which was colloquially referred to as "SATA 3G" [bps] or "SATA 300" [MB/s] since 1.5 Gbit/s SATA I and 1.5 Gbit/s SATA II were referred to as both "SATA 1.5G" [b/s] or "SATA 150" [MB/s]). Therefore, they will operate with negligible differences between them." Are SATA II and SATA 3.0 Gbps the same? I feel I'm tantalisingly close to getting a definitive answer here before I purchase, but really want to clear up these naming schemes.

    Read the article

  • Recover mysql database - mysqldump gives "table <tablename> doesn't exist (1146)"

    - by Matthew
    Backstory Ubuntu died (wouldn't boot) and I couldn't fix it. I booted a live cd to recover the important stuff and saved it to my NAS. One of the things I backed up was /var/lib/mysql. Reinstalled with Linux Mint because I was on Ubuntu 10.0.4 this was a good opportunity to try a new distro (and I don't like Unity). Now I want to recover my old mediawiki, so I shut down mysql daemon, cp -R /media/NAS/Backup/mysql/mediawiki@002d1_19_1 /var/lib/mysql/, set file ownership and permissions correctly, and start mysql back up. Problem Now I'm trying to export the database so I can restore the database, but when I execute the mysqldump I get an error: $ mysqldump -u mediawikiuser -p mediawiki-1_19_1 -c | gzip -9 > wiki.2012-11-15.sql.gz Enter password: mysqldump: Got error: 1146: Table 'mediawiki-1_19_1.archive' doesn't exist when using LOCK TABLES Things I've tried I tried using --skip-lock-tables but I get this: Error: Couldn't read status information for table archive () mysqldump: Couldn't execute 'show create table `archive`': Table 'mediawiki-1_19_1.archive' doesn't exist (1146) I tried logging in to mysql and I can list the tables that should be there, but trying to describe or select from them errors out the same way as the dump: mysql> show tables; +----------------------------+ | Tables_in_mediawiki-1_19_1 | +----------------------------+ | archive | | category | | categorylinks | ... | user_properties | | valid_tag | | watchlist | +----------------------------+ 49 rows in set (0.00 sec) mysql> describe archive; ERROR 1146 (42S02): Table 'mediawiki-1_19_1.archive' doesn't exist I believe mediawiki was installed using innodb and binary data. Am I screwed or is there a way to recover this?
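    A sketch of one likely fix, on the assumption that the whole of the old /var/lib/mysql was backed up, so ibdata1 and the ib_logfile* files are sitting next to the database directory on the NAS: InnoDB keeps its data dictionary in the shared tablespace, so copying only the mediawiki directory leaves tables that appear in SHOW TABLES but "don't exist" when touched. Copying the InnoDB system files across as well usually clears the 1146 errors.

      sudo service mysql stop
      sudo cp -a /media/NAS/Backup/mysql/ibdata1 /media/NAS/Backup/mysql/ib_logfile* /var/lib/mysql/
      sudo chown -R mysql:mysql /var/lib/mysql
      sudo service mysql start
      mysqldump -u mediawikiuser -p mediawiki-1_19_1 -c | gzip -9 > wiki.sql.gz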

    Read the article

  • OpenBSD logins via SSH seem to be ignoring my configured radius server

    - by Steve Kemp
    I've installed and configured a radius server upon my localhost - it is delegating auth to a remote LDAP server. Initially things look good: I can test via the console: # export user=skemp # export pass=xxx # radtest $user $pass localhost 1812 $secret Sending Access-Request of id 185 to 127.0.0.1 port 1812 User-Name = "skemp" User-Password = "xxx" NAS-IP-Address = 192.168.1.168 NAS-Port = 1812 rad_recv: Access-Accept packet from host 127.0.0.1 port 1812, id=185, Similarly I can use the login tool to do the same thing: bash-4.0# /usr/libexec/auth/login_radius -d -s login $user radius Password: $pass authorize However remote logins via SSH are failing, and so are invokations of "login" started by root. Looking at /var/log/radiusd.log I see no actual log of success/failure which I do see when using either of the previous tools. Instead sshd is just logging: sshd[23938]: Failed publickey for skemp from 192.168.1.9 sshd[23938]: Failed keyboard-interactive for skemp from 192.168.1.9 port 36259 ssh2 sshd[23938]: Failed password for skemp from 192.168.1.9 port 36259 ssh2 In /etc/login.conf I have this: # Default allowed authentication styles auth-defaults:auth=radius: ... radius:\ :auth=radius:\ :radius-server=localhost:\ :radius-port=1812:\ :radius-timeout=1:\ :radius-retries=5:

    Read the article

  • Virtual Fileserver

    - by Sergei
    Hi, we are planning to move our production servers to the datacenter and virtualize the remaining servers in the process. The datacenter will have HP blades with vSphere on top. Currently we are using a Celerra NS20 as our fileserver. Since the datacenter is using HP kit and an EVA 4400 as the SAN, we cannot have the Celerra there, as EMC support for Celerra does not apply to non-EMC arrays. I have searched for possible options and one of them was to have an HP NAS blade X3800sb instead of the Celerra. However, this seems like overkill to me: we are only using the Celerra for about 100 users and 50 servers, and I think the X3800sb could be a waste of resources. The other option would be to have a virtual fileserver as part of the VMware environment in the datacenter. We only need CIFS to be provided. The only option I can think of is Windows Storage Server. We had a bad experience with Windows servers used as fileservers (memory leaks, for one thing) in the past, and this was one of the reasons we moved to Celerra. What are the other options? We need something as reliable as Celerra with as many options as possible. For example, Celerra has per-folder quotas, deduplication, dynamic volume allocation, automatic failover, VTLU, and replication. Also, we would need to replicate the NAS data to the failover site. We could use block-level replication, SAN-to-SAN, but this would mean wasted bandwidth, as we need only a subset of folders to be replicated. We used CA XSoft for Windows servers in the past, and Celerra has an option for Celerra replication. Thank you very much in advance; please ask me if I missed any details!

    Read the article

  • Can MS Services for Unix be deployed and accessed from a shared drive?

    - by Ian C.
    I'm interested in experimenting with replacing our dependency on MKS with MS' Services for Unix toolset. I was wondering if anyone has any experience with deploying SFU on a shared drive? We like to, wherever possible, host our dev tools on one central NAS and call to the NAS to access the tools instead of rolling stuff out to each and every desktop. I'm not interested in the NFS support or ActiveState Perl. Really, none of the daemon technology is required here. I'm looking for replacements for the coreutils/binutils stuff you find in Linux (and MKS on Windows): sed, awk, csh, bash, grep, ls, find -- the meat-and-potatoes command line apps that our build and test scripts are built around. If I limit the install to just the Interix GNU Components (and maybe the Remote Connectivity components) will it run nicely from a shared location? To head off some questions: Yes, I've looked at Cygwin. Unfortunately its performance in our build and test environment is poor. It runs considerably slower than MKS and it's not a direct drop-in replacement for MKS (thanks to its internal pathing and limitations with commands like 'ps'), so it's a tougher sell. Yes, I'm looking at the MinGW offering in parallel to this.

    Read the article

  • Is UPS worthwhile for home equipment?

    - by Jon Skeet
    Over the years, I've had to throw away quite a few bits of computing equipment (and the like): several ADSL routers with odd symptoms (losing wireless connections, losing wired connections, DHCP failures, DNS symptoms etc), two PVRs spontaneously rebooting and corrupting themselves (despite the best efforts of the community to diagnose and help), one external hard disk still claiming to function but corrupting data, and one hard disk as part of a NAS RAID array "going bad" (as far as the NAS was concerned). (This is in addition to various laptops and printers dying in ways unrelated to this question.) Obviously it'll be impossible to tell for sure from such a small amount of information, but might these be related to power issues? I don't currently have a UPS for any of this equipment. Everything is on surge-protected gang sockets, but there's nothing to smooth a power cut. Is home UPS really viable and useful? I know there are some reasonably cheap UPSes on the market, but I don't know how useful they really are. I'm not interested in keeping my home network actually running during a power cut, but I'd like it to power down a bit more gracefully if the current situation is putting my hardware in jeopardy.

    Read the article

  • Windows Network File Transfer to Samba server: “Are you sure you want to copy this file without its properties?”

    - by jimp
    I am transferring a lot of files to a new NAS based on OpenMediaVault, with the Samba 3.5.6 service running. I am transferring from Windows 7 64-bit to the NAS, and on some media files Windows is prompting about losing some property data across the transfer. I have never seen this before when transferring to Samba boxes I have built myself (vs this turnkey solution), so I'm guessing there must be a Samba setting I can change to preserve the file properties in question instead of permanently losing whatever they contain (Date Taken? Exposure? Flash Fired? etc). Or maybe I've just never encountered this before; I'm really not sure. I tried adding ea support = yes and store dos attributes = yes to the [global] section, but the problem remains. The Linux file system is ext4 mounted with user_xattr (full options: defaults,acl,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0) as Samba requires. Any ideas would be greatly appreciated. Thank you! Samba config: [global] workgroup = WORKGROUP server string = %h server include = /etc/samba/dhcp.conf dns proxy = no log level = 2 syslog = 2 log file = /var/log/samba/log.%m max log size = 1000 syslog only = yes panic action = /usr/share/samba/panic-action %d encrypt passwords = true passdb backend = tdbsam obey pam restrictions = yes unix password sync = no passwd program = /usr/bin/passwd %u passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* . pam password change = yes socket options = TCP_NODELAY IPTOS_LOWDELAY guest account = nobody load printers = no disable spoolss = yes printing = bsd printcap name = /dev/null unix extensions = yes wide links = no create mask = 0777 directory mask = 0777 use sendfile = no null passwords = no local master = yes time server = yes wins support = yes ea support = yes store dos attributes = yes Note: I found this related question, but it explains the loss due to the user trying to transfer from NTFS to FAT32.
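    One quick check worth doing before blaming Samba (a sketch; the path is a placeholder for the actual share directory, and the getfattr/setfattr tools come from the attr package): confirm the ext4 mount really accepts user xattrs, since ea support = yes can only store what the filesystem will take.

      touch /srv/share/xattr-test
      setfattr -n user.test -v hello /srv/share/xattr-test    # fails if user_xattr isn't actually in effect
      getfattr -d /srv/share/xattr-test                       # should list user.test="hello"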

    Read the article

  • Password Authentication Fails - NTLMv2

    - by JMeterX
    Environment: Windows 2000 sp4 EDIT: Domain Controller with no trust setup with the Win2008 Server Windows XP machines Windows 2008 Server Netapp NAS Problem: We have a shared folder that resides on a NAS using a Windows 2008 AD for the authentication with the proper permissions setup. When the Windows 2000 machine tries to open the share residing on the Win2008 machine, it is prompted for a username and password. Upon entering the credentials it continuously re-asks for credentials. Important Details: The Windows 2000 machine can ping both the XP machines and the Windows 2008 Server The Windows 2008 machine is mandated to only use NTLMv2 The Windows 2000 machine was originally set to NTLM but was recently switched to NTLMv2 if negotiated for the purpose of trying to connect to the share. As I am sure it will come up, we are using Windows 2000 because of contractual obligations Questions: Why is password Authentication failing in this case? After setting a GPO for the Win2000 machine for it to use NTLMv2, do we need to reboot the machine for the changes to take affect? We used SECEDIT to update the GPOs without rebooting. UPDATE We checked both of the 2008 Domain Controllers to find an error code. We received: Microsoft_Auth_Package_V1_0 0xc000006a Event ID: 4776 I know this to be an authentication error via THIS article "The value provided as the current password is not correct" We know this password to be correct, but since these two domains (Win2000 & Win2008) do not have a trust setup what authentication account needs to be used? One that resides on the Win2000 hosted domain?

    Read the article

  • Synchronising a remote folder with a local one.

    - by Workshop Alex
    I am using a network disk (connected to my router by USB) to store several data files. A simple .NET application that I've created is supposed to read and modify these data files. However, some security issues are preventing this application from accessing these files directly. (Actually, these have been built into my application on purpose, since it's not going to support NAS disks.) Since this disk is shared with several computers, I just want a simple synchronisation method which will copy the files to a local folder where my application can access them and, once they are modified, send the modified files back to the NAS disk again. I have two options: 1) Build a second application to do my own synchronisation. 2) Find some built-in function inside Windows 7 Ultimate which can do this for me. Option 2 is preferred. Option 1 is something I can do easily, if need be. I don't need third-party tools. (Still, feel free to add some references to good tools, although I won't accept them as answers.) Basically, is this possible with Windows 7 and if so, how?
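    If a built-in command-line tool counts for option 2, one hedged possibility is robocopy, which ships with Windows 7 and can mirror the share into a local folder (and back) from a scheduled task. The share and folder paths below are placeholders, and note that /MIR deletes files missing from the source, so the two directions should not be run blindly against each other.

      rem pull a mirror of the NAS share down to a local folder
      robocopy \\nas\share C:\LocalData /MIR /FFT /R:2 /W:5
      rem push modified files back to the NAS (run only the direction you need)
      robocopy C:\LocalData \\nas\share /MIR /FFT /R:2 /W:5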

    Read the article

  • Recovering an mdadm+lvm+ext4 partition with a read error

    - by bitwelder
    One of the disks in my NAS has failed. The NAS is running Linux, and it uses mdadm + LVM technology for its filesystems. I do have a backup for most of the contents, but not for the very last changes, and if possible, I'd like to recover those from this failing disk. The disk (a 'green drive' WD10EARS, 1TB in size) throws this kind of error: Oct 3 12:00:41 kernel: [ 3625.620000] ata5.00: read unc at 9453282 Oct 3 12:00:41 kernel: [ 3625.620000] lba 9453282 start 9453280 end 1953511007 Oct 3 12:00:41 kernel: [ 3625.620000] sde5 auto_remap 0 Oct 3 12:00:41 kernel: [ 3625.630000] ata5.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x6 Oct 3 12:00:41 kernel: [ 3625.630000] ata5.00: edma_err_cause=00000084 pp_flags=00000003, dev error, EDMA self-disable Oct 3 12:00:41 kernel: [ 3625.640000] ata5.00: failed command: READ FPDMA QUEUED Oct 3 12:00:41 kernel: [ 3625.650000] ata5.00: cmd 60/40:00:e0:3e:90/00:00:00:00:00/40 tag 0 ncq 32768 in Oct 3 12:00:41 kernel: [ 3625.650000] res 41/40:00:e2:3e:90/12:00:00:00:00/40 Emask 0x409 (media error) <F> Oct 3 12:00:41 kernel: [ 3625.660000] ata5.00: status: { DRDY ERR } However, while testing with 'dd', I noticed that if I skip the first 4kB, the read seems to be OK, i.e. a command like dd if=/dev/sde5 of=/dev/null bs=4k count=1000 skip=1 doesn't return any read error. Supposing that there is no other read failure in the rest of the disk, would I be able to recover this 900 GB partition (as I mentioned before, it's a 'linux raid autodetect' partition that contains an LVM2 volume that contains an ext4 filesystem) if I copy-clone the partition somewhere else, all but the first 4kB?
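    A sketch of that copy-clone idea (the target path is a placeholder and must live on a different disk with roughly 900 GB free; ddrescue is assumed to be installable): image the partition with read errors tolerated, then examine the image instead of the failing drive.

      dd if=/dev/sde5 of=/mnt/backup/sde5.img bs=4k conv=noerror,sync    # pads unreadable blocks with zeros and keeps going
      # GNU ddrescue, if available, is a gentler alternative: it retries and keeps a map of the bad areas
      ddrescue /dev/sde5 /mnt/backup/sde5.img /mnt/backup/sde5.map
      losetup -f --show /mnt/backup/sde5.img    # then run mdadm --examine / pvscan against the loop device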

    Read the article

  • KVM Hosting: How to efficiently replicate guests

    - by javano
    I have three KVM servers, each with 1 guest VM running directly on its local storage (so they are essentially getting a dedicated box's worth of computing power each). In the event of a host failure I would like the guests replicated to at least one of the other hosts so I can spin them up there until the failing host is fixed. I am curious about KVM cloning. I can clone a VM live or when it's suspended/shut down. Obviously suspended VMs will naturally be quicker to clone, but these three VMs comprise three parts of a single solution, so I don't want to ever have any one of them shut down. How can I efficiently clone these VMs between servers? I have had a couple of ideas, but are these insane, or is there a better method I have missed for my scenario? 1) Set up a DRBD partition between box 1 and 2 where VM 1 runs from, so it is replicated between box 1 and box 2; repeat between box 2 & 3, and box 3 & 1 (this could be insane, I have never used DRBD, only read about it). 2) Just use standard KVM CLI clone options to perform live clones (I'm dubious about this because I don't know how long it will take and what the performance impact will be while it runs). 3) Run a copy of each VM on at least one other host, and have the guest on one host export its data to the matching guest on another host where it can import that data, scripting this on the guest. Some other way? Ideas welcome! Side note: these servers have 4x 15k SAS drives in a RAID 10 so they aren't rocketing fast, and as I mentioned, each VM runs from the host's local storage, no NAS or SAN etc. So that is why I am asking this question about guest replication. Also, this isn't about disaster recovery. Guests will be exporting their data to a NAS over a VPN, so I am looking at how I can have them quickly spun up in a host-failure situation.
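    One simple way to keep a standby copy of a guest on a second host, shown only as a sketch (the guest name vm1, the peer name host2 and the image paths are placeholders): ship the disk image and the libvirt definition across periodically. Copying a running guest's disk without pausing or snapshotting it can yield an inconsistent copy, so this gives a warm standby rather than a true backup.

      virsh dumpxml vm1 > /tmp/vm1.xml                                              # capture the guest definition
      rsync -a --sparse /var/lib/libvirt/images/vm1.qcow2 host2:/var/lib/libvirt/images/
      scp /tmp/vm1.xml host2:/tmp/
      ssh host2 "virsh define /tmp/vm1.xml"                                         # defined but left shut off until needed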

    Read the article
