Search Results

Search found 716 results on 29 pages for 'gokhan nas'.

  • How to prevent WD MyBook Live Duo from going to sleep?

    - by Chris Pietschmann
    I have an issue with my WD MyBook Live Duo going into some kind of low-power standby mode when it hasn't been accessed for a while. When I try accessing the drive it takes a minute or two to become responsive, and while this is happening Windows Explorer says the drive is unavailable. If I'm accessing the drive regularly throughout the day, the issue doesn't occur. Does anyone know of a way to make the drive more readily accessible on demand?
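
    One generic workaround people use for NAS drives that spin down (this is not WD-specific documentation, just a sketch that assumes the share is mounted at a hypothetical path like /mnt/mybook on some always-on machine) is a periodic keep-alive touch, which trades the power saving for responsiveness:

        # crontab entry on any always-on Linux box that mounts the share;
        # the 5-minute interval and the path are illustrative only
        */5 * * * * touch /mnt/mybook/.keepalive >/dev/null 2>&1

    On Windows the same effect can be had with a Task Scheduler job that reads or writes a small file on the share every few minutes.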

    Read the article

  • Replacing hard drives in a LaCie 2big Network

    - by Jason
    I have a LaCie 2big Network that currently has 2 500GB drives in it (mirrored). I'd like to upgrade the drives to 1TB each using something like this. I know that LaCie sells a 1TB drive designed for the 2big Network, but it seems to me that these are standard drives with the LaCie holder included. Do I need to use their drives, or can I get my own? (Their customer support pushes me towards their drives.) I'm assuming the device can format the drives for me when I add them.

    Read the article

  • ZFS Configuration advice

    - by rbarrette
    I need some advice on configuring ZFS. Here is what I have. Physical disks: 4x 3 TB, 2x 2 TB, 2x 1 TB. What is the best configuration for my vdevs and storage pool? I want to maximize space but still maintain redundancy. Should I just get 2 more 3 TB drives and create 2x 3-3TB raidz2 storage pools? Create a 1x 4-3TB raidz2 vdev? Or can I put redundancy at the pool level, create individual vdevs for each drive, and then add 2x 1TB+2TB striped vdevs to keep all vdevs the same size? Keep in mind I do need to migrate data from the smaller drives and am planning on adding more 3 TB drives later on. What do you think?
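
    For a concrete sense of the commands involved (a minimal sketch with hypothetical device names, not a recommendation for this exact disk mix): a single raidz2 vdev across the four 3 TB disks gives two disks' worth of parity, and the mismatched pairs can be attached as mirror vdevs in the same pool:

        # one raidz2 vdev over the four 3 TB disks (any two of them may fail)
        zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd

        # add the 2 TB pair as a mirrored vdev in the same pool
        # (zpool warns about the mismatched redundancy level and needs -f)
        zpool add -f tank mirror /dev/sde /dev/sdf

        # verify layout and redundancy
        zpool status tank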

    Read the article

  • How do I backup (from Vista Home Premium) to a FAT32 HDD connected to a wireless router with no user account or password set?

    - by Scubadooper
    I have a Seagate Expansion 2TB HDD which I've managed to get working on my wireless router (thanks to fat32format); however, I haven't managed to get my backup to work with it. Vista Home Premium requires that I input a username and password in the backup utility. As far as I'm aware, that type of access control isn't set on the HDD. Can I set up access control on the HDD? If so, how? Or can I set the backup to work without it? If so, how? Thanks for your help; I haven't been able to find the answer anywhere on the net so far.

    Read the article

  • Get positions for NAs only in the "middle" of a matrix column

    - by Abiel
    I want to obtain an index that refers to the positions of NA values in a matrix where the index is true if a given cell is NA and there is at least one non-NA value before and after it in the column. For example, given the following matrix

             [,1] [,2] [,3] [,4]
        [1,]   NA    1   NA    1
        [2,]    1   NA   NA    2
        [3,]   NA    2   NA    3

    the only value of the index that comes back TRUE should be [2,2]. Is there a compact expression for what I want to do? If I had to I could loop through columns and use something like min(which(!is.na(x[,i]))) to find the first non-NA value in each column, and then set all values before that to FALSE (and the same for all values after the max). This way I would not select leading and trailing NA values. But this seems a bit messy, so I'm wondering if there is a cleaner expression that does this without loops.

    Read the article

  • Best way to reduce consecutive NAs to single NA

    - by digEmAll
    I need to reduce the consecutive NA's in a vector to a single NA, without touching the other values. So, for example, given a vector like this:

        NA NA 8 7 NA NA NA NA NA 3 3 NA -1 4

    what I need to get is the following result:

        NA 8 7 NA 3 3 NA -1 4

    Currently, I'm using the following function:

        reduceConsecutiveNA2One <- function(vect){
          enc <- rle(is.na(vect))
          # helper func
          tmpFun <- function(i){
            if(enc$values[i]){
              data.frame(L=c(enc$lengths[i]-1, 1), V=c(TRUE,FALSE))
            }else{
              data.frame(L=enc$lengths[i], V=enc$values[i])
            }
          }
          Df <- do.call(rbind.data.frame, lapply(1:length(enc$lengths), FUN=tmpFun))
          return(vect[rep.int(!Df$V, Df$L)])
        }

    and it seems to work fine, but probably there's a simpler/faster way to accomplish this task. Any suggestions? Thanks in advance.

    Read the article

  • How do I integrate a OpenSolaris NAS with AD?

    - by Neo
    I basically want an OpenSolaris NAS (ZFS goodies), but I'd like to integrate it with AD, so that when I create a new user in AD, his roaming profile is created on the NAS. That means all his ACLs have to work (I know they're compatible), etc. The tutorials I found don't actually work, so any help would be much appreciated.
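
    For reference, the in-kernel CIFS service is what normally handles the AD side on OpenSolaris; a rough sketch of the usual steps is below (the pool, dataset and domain names are placeholders, DNS and Kerberos must already point at the domain controllers, and exact service names can vary between releases):

        # enable the in-kernel SMB server and join the AD domain (as root)
        svcadm enable -r smb/server
        smbadm join -u Administrator example.com    # prompts for the AD admin password

        # share a ZFS dataset over SMB so AD users and their ACLs land on it
        zfs create -o casesensitivity=mixed tank/profiles
        zfs set sharesmb=name=profiles tank/profiles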

    Read the article

  • Issue with Netgear GS108T Managed Switch and Jumbo Frames

    - by Richie086
    I recently purchased a Netgear GS108T managed switch and I am trying to configure jumbo frames between my NAS (Thecus N4100Pro), my PC and the managed switch. I should mention that I was able to use jumbo frames between my PC and NAS before I purchased the switch, without issue. My desktop has a wired gigabit NIC (Intel 82579V Gigabit) that can be configured for jumbo frames of either 9014 bytes or 4088 bytes; I chose 9014 bytes for the jumbo frame size. My NAS supports jumbo frames as well, and is configured to use 9014 as the frame size. When I go into my Netgear managed switch and set the frame size to 9014 on the ports I am using for my PC and NAS, as soon as I hit apply in the web interface I lose my connection to the SMB shares on my NAS and I can no longer connect to the web admin interface for my NAS. The really strange thing is I can ping my NAS via the ping command, but when I try to connect to the web interface on port 80 or port 443 the page never loads. I did a scan from my PC to my NAS using nmap and I can see the following ports open:

        PORT      STATE SERVICE
        22/tcp    open  ssh
        80/tcp    open  http
        111/tcp   open  rpcbind
        139/tcp   open  netbios-ssn
        443/tcp   open  https
        445/tcp   open  microsoft-ds
        631/tcp   open  ipp
        2000/tcp  open  cisco-sccp
        2049/tcp  open  nfs
        3260/tcp  open  iscsi
        49152/tcp open  unknown
        MAC Address: 00:14:FD:15:00:44 (Thecus Technology)
        Read data files from: C:\Program Files (x86)\Nmap
        Nmap done: 1 IP address (1 host up) scanned in 211.97 seconds
        Raw packets sent: 1 (28B) | Rcvd: 1 (28B)

    Anyone have any idea what is going on here? Why is nmap able to detect that the ports are open and listening for http, https and file sharing, but I can't connect when all devices have jumbo frames enabled? Stranger still - I did a packet capture using Wireshark while the nmap scan was running, filtered so I only saw conversations between my PC and my NAS. Here are the packet details from my scan: only 4 packets over 5k bytes? What is going on here? Do I not need to configure jumbo frame sizes on the switch? I have an internet connection from my PC through the switch to my router - I just cannot connect to my NAS. I just checked on my iPhone and I am able to open my NAS web admin interface without issue on my iPhone! WTF!!!!!! Let me know if you need more details.
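
    One quick way to confirm whether full-size jumbo frames actually survive the path end to end is a don't-fragment ping sized just under the jumbo MTU (with a 9000-byte IP MTU, 9000 minus 28 bytes of IP/ICMP headers leaves an 8972-byte payload; the exact number can differ by vendor). This is only a diagnostic sketch with a placeholder NAS address:

        # from the Windows PC (cmd): -f sets don't-fragment, -l is the payload size
        ping -f -l 8972 192.168.1.50

        # rough Linux equivalent, e.g. if testing from the NAS side
        ping -M do -s 8972 192.168.1.50

        # if this fails while "ping -f -l 1472" succeeds, some hop (often the
        # switch port's MTU setting) is still limited to standard 1500-byte frames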

    Read the article

  • How to set up Ubuntu Server as a NAS?

    - by rifferte
    I am looking to set up Ubuntu Server as a headless NAS for my home. I would like to have file storage there, as well as a central hub for my MP3s and pictures. What are the best packages out there to handle this? Can someone post a link to a good tutorial or post some tips? One constraint I have is that it has to be Windows 7 friendly. By that I mean the shares and streaming should work for a Windows machine.
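
    For the Windows 7-friendly shares specifically, Samba is the usual package; a minimal sketch (the share name, path and user are placeholders, and media streaming would be a separate package such as a DLNA server) looks roughly like this:

        # install the SMB/CIFS server
        sudo apt-get install samba

        # add a share definition to /etc/samba/smb.conf, for example:
        #   [media]
        #      path = /srv/media
        #      browseable = yes
        #      read only = no
        #      valid users = alice

        # give an existing Linux user a Samba password, then restart the service
        sudo smbpasswd -a alice
        sudo service smbd restart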

    Read the article

  • How do I keep folders synced and backed up between two macs using a Linux NAS (rsync?)

    - by Hultner
    I've got two primary computers, a Mac Pro and a MacBook Pro for when I'm on the go. I've also got a Linux server which also acts as a NAS. Currently I back up both computers in their entirety to an external drive with Time Machine, which is rather useless and doesn't sync anything. What I really want to do is keep my important files synced between both computers and my NAS (which is running RAID 5); that way I'm not backing up easily replaceable system files, and I've got all my important files in 3 places, two of which are running RAID, so at least 5 drives would have to crash at the same time before actual data loss occurs. The folders I want to keep synced are basically my photo, documents, development, MAMP and work folders, and I also want to keep the user Library folder backed up but not synced. I'm thinking I'd have to use rsync, but I don't know how. Before anyone suggests Dropbox and the like: I don't want to use them for several reasons, among them security (Dropbox obviously proved this), speed (sometimes I'll sync gigabytes of data, which will be significantly faster locally and probably even through VPN, as I have a gigabit pipe), space (space on my NAS is cheap and practically limited only by my needs), reliability (even if my internet were to go down I still need to be able to keep my files synced in case I need to go somewhere on the fly), and price (I already have all the hardware, and for the amount of gigabytes and bandwidth I'd need I doubt there's any free or cheap service). Those are my main reasons for wanting to keep it local. I'm sorry for any spelling or grammatical mistakes I might have made; I'm writing this on my smartphone from a shaky train and English isn't my mother tongue. I greatly appreciate any answers, even ones that only partly solve my problem.
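
    A minimal rsync sketch for the Mac-to-NAS half of this (hostnames and paths are placeholders; true two-way sync between the Macs is safer with a tool like unison, or by always treating the NAS copy as the master):

        # push a folder from either Mac to the NAS over SSH; -a preserves permissions
        # and times, --delete mirrors deletions, and -n (dry run) is worth adding first
        rsync -avh --delete ~/Documents/ user@nas.local:/srv/sync/Documents/

        # pull the NAS copy back down on the other Mac
        rsync -avh --delete user@nas.local:/srv/sync/Documents/ ~/Documents/

        # a cron or launchd job can run the push periodically, e.g. every 30 minutes:
        # */30 * * * * /usr/bin/rsync -a --delete ~/Documents/ user@nas.local:/srv/sync/Documents/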

    Read the article

  • Can any iSCSI NAS appliance replicate / clone a LUN to an external drive?

    - by Boden
    I would like to back up using Windows Imaging to some kind of NAS appliance. I believe this will require the NAS to support iSCSI. I would then like the appliance to support replication of the iSCSI LUN to an external eSATA or USB disk connected directly to the appliance. I've found plenty of NAS appliances that can do iSCSI and replicate to an external drive, but none that I've found so far can do both at once: the devices can do iSCSI, but then the replication feature doesn't work. The idea here is to back up to an appliance located in a secure office far away from the server room; offsite backups to an external hard drive could then be managed from the appliance. The benefits of such a setup would be: 1) it's very unlikely that fire or random theft would affect both the server-room backup and the "remote" backup appliance, 2) offsite backups could be managed by multiple trusted people without granting access to the server room, and 3) Windows imaging provides poor man's deduplication, so each backup volume can contain a decent backup history. I understand why this would be a non-trivial thing to implement, but I'm wondering if such a thing exists? Preferably a tabletop, low- to medium-cost device. Alternative solutions welcome. NOTE: I'm backing up very few but very large files, so file replication is not a good option.

    Read the article

  • Handling missing/incomplete data in R--is there function to mask but not remove NAs?

    - by doug
    As you would expect from a DSL aimed at data analysis, R handles missing/incomplete data very well. For instance, many R functions have an 'na.rm' flag that you can set to 'T' to remove the NAs:

        mean(c(5,6,12,87,9,NA,43,67), na.rm=T)

    But if you want to deal with NAs before the function call, you need to do something like this:

        # remove each 'NA' from a vector:
        vx = vx[!is.na(vx)]
        # replace each 'NA' in a vector with a '0':
        ifelse(is.na(vx), 0, vx)
        # remove each entire row that contains an 'NA' from a data frame:
        dfx = dfx[complete.cases(dfx),]

    All of these permanently remove the 'NA' values or the rows containing them. Sometimes this isn't quite what you want, though--making an 'NA'-excised copy of the data frame might be necessary for the next step in the workflow, but in subsequent steps you often want those rows back (e.g., to calculate a column-wise statistic for a column that has missing rows caused by a prior call to 'complete.cases', yet that column has no 'NA' values in it). To be as clear as possible about what I'm looking for: Python/NumPy has a 'masked array' class with a 'mask' attribute, which lets you conceal--but not remove--NAs during a function call. Is there an analogous function in R?

    Read the article

  • How to replicate wwwroot over multiple IIS web servers with NLB without NAS?

    - by Igor K
    We're adding a second server with Windows NLB for a bit of redundancy (i.e. in case the power goes out on one of the servers - I know it's not the best solution). How can we keep the data identical between the servers? We don't want to use a SAN or NAS, as that's just something else to go wrong. Customers can upload images with the web app, so changes could be made on either server, as well as us uploading a few changed files. Thanks

    Read the article

  • Is it possible to choose the Amazon Glacier Vault when backing up from Synology NAS?

    - by Jorrit Reedijk
    I am using Amazon Glacier Backup on a Synology NAS (RackStation), but the Synology backup software for Glacier does not allow me to choose the vault to back up to. This means I cannot give the vault my own name, and I also cannot back up to a previously created vault (I have this problem now after replacing a Synology RackStation). Is there any way to choose which vault you want your Synology to back up to? Edit: I have now posted this question to the Synology forum as well: http://forum.synology.com/enu/viewtopic.php?f=174&t=86557&e=0 - if I can solve the problem I'll post the solution here too.

    Read the article

  • Best way to synchronise photos to three machines. Laptop, Desktop, NAS

    - by wookiebreath
    I'm using Picasa as my photo management software, and I have a collection of photos that gets downloaded from my cameras either onto my Desktop or onto my Laptop. I'd like to automatically have copies of all my photos on both my laptop, desktop and my NAS. Does anyone else do this? Do you have any recommendations for Software or processes? Is there anything I need to be careful of? I had a look at Dropbox, but it appears to have a 2 gig limit? What about something like SyncBack?

    Read the article

  • What is the correct way to distinguish services to which RADIUS request belongs?

    - by John
    If I have 1 generic RADIUS server and a number of clients (VoIP server, VPN server, WiFi access point), how do I distinguish the senders (NASes) of access and accounting requests? I didn't find anything helpful by reading all the related RFCs; they only require NAS-IP-Address or NAS-Identifier, but these attributes usually represent the IP address of the NAS, and that address cannot be used as a robust identifier, because it's quite possible to end up with 2 different NASes running on the same server, and these attributes cannot be customized. Any ideas?

    Read the article

  • Mount SMB / AFP 13.10

    - by Jeffery
    I cannot seem to get Ubuntu to mount a Mac share via SMB or AFP. I've tried the following.

    AFP:

        apt-get install afpfs-ng-utils
        mount_afp afp://user:password@localip/share /mnt/share

    Error given: "Could not connect, never got a response to getstatus, Connection timed out". Which is odd, as I can access the share just fine from a Mac.

    SMB:

        apt-get install cifs-utils
        nano /etc/fstab    # added the following line:
        //localip/share /mnt/share cifs username=user,password=pass,iocharset=utf8,sec=nltm 0 0
        mount -a

    Error given:

        root@Asrock:~# mount -a -vvv
        mount: fstab path: "/etc/fstab"
        mount: mtab path: "/etc/mtab"
        mount: lock path: "/etc/mtab~"
        mount: temp path: "/etc/mtab.tmp"
        mount: UID: 0
        mount: eUID: 0
        mount: spec: "//10.0.1.3/NAS"
        mount: node: "/mnt/NAS"
        mount: types: "cifs"
        mount: opts: "username=user,password=pass,iocharset=utf8,sec=nltm"
        mount: external mount: argv[0] = "/sbin/mount.cifs"
        mount: external mount: argv[1] = "//10.0.1.3/NAS"
        mount: external mount: argv[2] = "/mnt/NAS"
        mount: external mount: argv[3] = "-v"
        mount: external mount: argv[4] = "-o"
        mount: external mount: argv[5] = "rw,username=user,password=pass,iocharset=utf8,sec=nltm"
        mount.cifs kernel mount options: ip=10.0.1.3,unc=\\10.0.1.3\NAS,iocharset=utf8,sec=nltm,user=user,pass=*
        mount error(22): Invalid argument
        Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

    I don't really care which one it uses, I just want it to work! Am I doing something wrong?
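
    One thing that stands out in the quoted options is "sec=nltm", which looks like a transposition of "ntlm"; an unrecognised sec= value is a plausible cause of the "Invalid argument" error. A hedged fix to try, using the same placeholders as above (newer servers may want sec=ntlmssp instead):

        # test interactively first; note sec=ntlm, not sec=nltm
        sudo mount -t cifs //localip/share /mnt/share \
            -o username=user,password=pass,iocharset=utf8,sec=ntlm

        # corresponding /etc/fstab line once the interactive mount works:
        # //localip/share /mnt/share cifs username=user,password=pass,iocharset=utf8,sec=ntlm 0 0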

    Read the article

  • Upgrading my home network to Gigabit Ethernet and Wireless-N turns out slower than before

    - by Raheel Khan
    My home network has three desktops, three laptops and some NAS drives. All desktops and NAS drives support Gigabit LAN and all laptops support Wireless-N, but I was running a 100BASE-T switch. I recently purchased a Gigabit Ethernet switch and a Wireless-N ADSL modem-router. After upgrading, I noticed that wireless file transfer speeds from laptop to NAS and vice versa became terribly slow - possibly even slower than before the upgrade. The transfer speeds from desktop to NAS (wired) have improved, though. As an example, copying a 50GB file from laptop to NAS was estimated at 15 hours! Is there something I can do to improve this? Also, should I consider buying a dedicated wireless access point for speed, rather than using the wireless modem-router?
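
    Before swapping hardware it helps to measure raw network throughput separately from file copying, so you can tell whether the wireless link or the NAS itself is the bottleneck. A sketch using iperf3 (assuming it can be installed on a desktop and a laptop; the hostname is a placeholder):

        # on a wired desktop, acting as the test server
        iperf3 -s

        # on a laptop over Wireless-N, running a 30-second test against it
        iperf3 -c desktop.local -t 30

        # repeat with the laptop plugged into the gigabit switch; if the wired run
        # is fast but the wireless run is not, the Wi-Fi link (band, channel,
        # router placement) rather than the NAS is the limiting factor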

    Read the article

  • Windows 7 can ping but can't see device on other subnet

    - by user192702
    I have 2 Windows 7 machines on 2 different subnets, but 1 of them is unable to reach a NAS. The topology is as follows. Any idea why this is the case? Are there some Windows settings I need to apply?

        Subnet 1
          - PC 1
          - NAS
        Subnet 2
          - PC 2

    PC 1 is able to do the following:
    - Load the admin page in the browser.
    - Show the NAS under Windows Explorer - Network.
    - Access the NAS when its \\ path is typed into Windows Explorer.
    PC 2 is unable to do any of the above 3. It can, however, ping the NAS and get a response.

    Read the article

  • How to open a server port outside of an OpenVPN tunnel with a pf firewall on OSX (BSD)

    - by Timbo
    I have a Mac mini that I use as a media server; it runs XBMC and serves media from my NAS to my stereo and TV (which has been color calibrated with a Spyder3Express, happy). The Mac runs OSX 10.8.2 and the internet connection is tunneled for general privacy over OpenVPN through Tunnelblick. I believe my anonymous VPN provider pushes "redirect_gateway" to OpenVPN/Tunnelblick, because when it is on it effectively tunnels all non-LAN traffic, inbound and outbound. As an unwanted side effect, that also opens the box's server ports unprotected to the outside world and bypasses my firewall-router (Netgear SRX5308). I have run nmap from outside the LAN against the VPN IP, and the server ports on the mini are clearly visible and connectable. The mini has the following ports open: ssh/22, ARD/5900 and 8080+9090 for the XBMC iOS client Constellation. I also have a Synology NAS which, apart from LAN file serving over AFP and WebDAV, only serves up an OpenVPN/1194 and a PPTP/1723 server. When outside of the LAN I connect to this from my laptop over OpenVPN, and over PPTP from my iPhone. I only want to connect through AFP/548 from the mini to the NAS. The border firewall (SRX5308) just works excellently, stable and with a very high throughput when streaming from various VOD services. My connection is a 100/10 with close to theoretical max throughput. The ruleset is as follows:

        Inbound:
          PPTP/1723         Allow always to 10.0.0.40 (NAS/VPN server) from a restricted IP range corresponding to the possible cell provider range
          OpenVPN/1194      Allow always to 10.0.0.40 (NAS/VPN server) from any
        Outbound:
          Default outbound policy: Allow Always
          OpenVPN/1194 TCP  Allow always from 10.0.0.40 (NAS) to a.b.8.1-a.b.8.254 (VPN provider)
          OpenVPN/1194 UDP  Allow always from 10.0.0.40 (NAS) to a.b.8.1-a.b.8.254 (VPN provider)
          Block always from NAS to any

    On the mini I have disabled the OSX Application Level Firewall because it throws popups which don't remember my choices from one time to another, and that's annoying on a media server. Instead I run Little Snitch, which controls outgoing connections nicely on an application level. I have configured the excellent OSX builtin firewall pf (from BSD) as follows (Apple App firewall tie-ins removed):

        ### macro name for external interface.
        eth_if = "en0"
        vpn_if = "tap0"
        ### wifi_if = "en1"
        ### usb_if = "en3"
        ext_if = $eth_if
        LAN = "{10.0.0.0/24}"

        ### General housekeeping rules ###
        ### Drop all blocked packets silently
        set block-policy drop

        ### all incoming traffic on external interface is normalized and fragmented
        ### packets are reassembled.
        scrub in on $ext_if all fragment reassemble
        scrub in on $vpn_if all fragment reassemble
        scrub out all

        ### exercise antispoofing on the external interface, but add the local
        ### loopback interface as an exception, to prevent services utilizing the
        ### local loop from being blocked accidentally.
        set skip on lo0
        antispoof for $ext_if inet
        antispoof for $vpn_if inet

        ### spoofing protection for all interfaces
        block in quick from urpf-failed

        #############################
        block all

        ### Access to the mini server over ssh/22 and remote desktop/5900 from LAN/en0 only
        pass in on $eth_if proto tcp from $LAN to any port {22, 5900, 8080, 9090}

        ### Allow all udp and icmp also, necessary for Constellation. Could be tightened.
        pass on $eth_if proto {udp, icmp} from $LAN to any

        ### Allow AFP to 10.0.0.40 (NAS)
        pass out on $eth_if proto tcp from any to 10.0.0.40 port 548

        ### Allow OpenVPN tunnel setup over unprotected link (en0) only to VPN provider IPs
        ### and port ranges
        pass on $eth_if proto tcp from any to a.b.8.0/24 port 1194:1201

        ### OpenVPN Tunnel rules. All traffic allowed out, only in to ports 4100-4110
        ### Outgoing pings ok
        pass in on $vpn_if proto {tcp, udp} from any to any port 4100:4110
        pass out on $vpn_if proto {tcp, udp, icmp} from any to any

    So what are my goals and what does the above setup achieve (until you tell me otherwise :)? 1) Full LAN access to the above ports on the mini/media server (including through my own VPN server). 2) All internet traffic from the mini/media server is anonymized and tunneled over VPN. 3) If OpenVPN/Tunnelblick on the mini drops the connection, nothing is leaked, both because of pf and the router's outgoing ruleset; it can't even do a DNS lookup through the router. So what do I have to hide with all this? Nothing much really, I just got carried away trying to stop port scans through the VPN tunnel :) In any case this setup works perfectly and it is very stable. The problem, at last! I want to run a Minecraft server, and I installed it under a separate user account on the mini server (user=mc) to keep things partitioned. I don't want this server accessible through the anonymized VPN tunnel, because there are lots more port scans and hacking attempts through that than over my regular IP, and I don't trust Java in general. So I added the following pf rules on the mini:

        ### Allow Minecraft public through user mc
        pass in on $eth_if proto {tcp,udp} from any to any port 24983 user mc
        pass out on $eth_if proto {tcp, udp} from any to any user mc

    And these additions on the border firewall:

        Inbound:  Allow always TCP/UDP from any to 10.0.0.40 (NAS)
        Outbound: Allow always TCP port 80 from 10.0.0.40 to any (needed for online account checkups)

    This works fine, but only when the OpenVPN/Tunnelblick tunnel is down. When it is up, no connection to the Minecraft server is possible from outside of the LAN; inside the LAN it is always OK. Everything else functions as intended. I believe the redirect_gateway push is close to the root of the problem, but I want to keep this specific VPN provider because of the fantastic throughput, price and service. The solution? How can I open up the Minecraft server port outside of the tunnel so it's only available over en0, not the VPN tunnel? Should I add a static route? But I don't know which IPs will be connecting... stumbles. How secure would you estimate this setup to be, and do you have other improvements to share? I've searched extensively in the last few days to no avail... If you've read this far I bet you know the answer :)

    Read the article

  • Cannot connect to a VPN server - authentication failed with error code 691

    - by stacker
    When trying to connect to a VPN server, I get the 691 error code on the client, which says: "Error Description: 691: The remote connection was denied because the user name and password combination you provided is not recognized, or the selected authentication protocol is not permitted on the remote access server." I validated that the username and password are correct. I also installed a certificate to use with the IKEv2 security type, and validated that the VPN server supports this security method. But I cannot log in. In the server log I get this: "Network Policy Server denied access to a user. The user DomainName\UserName connected from IP address but failed an authentication attempt due to the following reason: The remote connection was denied because the user name and password combination you provided is not recognized, or the selected authentication protocol is not permitted on the remote access server." Any idea what I can do? Thanks in advance!

        Log Name:      Security
        Source:        Microsoft-Windows-Security-Auditing
        Date:          12/29/2010 7:12:20 AM
        Event ID:      6273
        Task Category: Network Policy Server
        Level:         Information
        Keywords:      Audit Failure
        User:          N/A
        Computer:      VPN.domain.com
        Description:   Network Policy Server denied access to a user. Contact the Network Policy Server administrator for more information.
        User:
            Security ID:                  domain\Administrator
            Account Name:                 domain\Administrator
            Account Domain:               domani
            Fully Qualified Account Name: domain.com/Users/Administrator
        Client Machine:
            Security ID:                  NULL SID
            Account Name:                 -
            Fully Qualified Account Name: -
            OS-Version:                   -
            Called Station Identifier:    192.168.147.171
            Calling Station Identifier:   192.168.147.191
        NAS:
            NAS IPv4 Address: -
            NAS IPv6 Address: -
            NAS Identifier:   VPN
            NAS Port-Type:    Virtual
            NAS Port:         0
        RADIUS Client:
            Client Friendly Name: VPN
            Client IP Address:    -
        Authentication Details:
            Connection Request Policy Name: Microsoft Routing and Remote Access Service Policy
            Network Policy Name:            All
            Authentication Provider:        Windows
            Authentication Server:          VPN.domain.home
            Authentication Type:            EAP
            EAP Type:                       Microsoft: Secured password (EAP-MSCHAP v2)
            Account Session Identifier:     313933
        Logging Results: Accounting information was written to the local log file.
        Reason Code:     16
        Reason:          Authentication failed due to a user credentials mismatch. Either the user name provided does not map to an existing user account or the password was incorrect.

    Read the article

  • How can I fix my corrupted RAID1 ext4 partition on a Synology DS212 NAS?

    - by Neil
    I have two identical 3 TB disks that were in a RAID1 array, where one disk crashed. I replaced the failed disk, but not before the RAID partitions got messed up. I need to figure out how to restore the RAID array and get at my ext4 partition. Here are the properties of the surviving disk:

        # fdisk -l /dev/sda
        fdisk: device has more than 2^32 sectors, can't use all of them
        Disk /dev/sda: 2199.0 GB, 2199023255040 bytes
        255 heads, 63 sectors/track, 267349 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1               1      267350  2147483647+  ee  EFI GPT

        # parted /dev/sda print
        Model: ATA ST3000DM001-9YN1 (scsi)
        Disk /dev/sda: 3001GB
        Sector size (logical/physical): 512B/512B
        Partition Table: gpt
        Disk Flags:

        Number  Start   End     Size    File system     Name  Flags
         1      131kB   2550MB  2550MB  ext4                  raid
         2      2550MB  4698MB  2147MB  linux-swap(v1)        raid
         5      4840MB  3001GB  2996GB                        raid

    I replaced the failed drive and cloned the surviving drive to it so I have something to work with. I cloned the drives with dd if=/dev/sdb of=/dev/sda conv=noerror bs=64M, and now /dev/sda and /dev/sdb are identical. Here is the RAID information:

        # cat /proc/mdstat
        Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
        md1 : active raid1 sdb2[1]
              2097088 blocks [2/1] [_U]
        md0 : active raid1 sdb1[1]
              2490176 blocks [2/1] [_U]
        unused devices: <none>

    It seems that md2 is missing. Here is what testdisk 6.14-WIP finds:

        Disk /dev/sda - 3000 GB / 2794 GiB - CHS 364801 255 63
        Current partition structure:
             Partition                  Start        End    Size in sectors
         1 P Linux Raid                   256    4980735      4980480 [md0]
         2 P Linux Raid               4980736    9175039      4194304 [md1]
        Invalid RAID superblock
         5 P Linux Raid               9453280 5860519007   5851065728
         5 P Linux Raid               9453280 5860519007   5851065728

        # After a quick search
        Disk /dev/sda - 3000 GB / 2794 GiB - CHS 364801 255 63
             Partition                  Start        End    Size in sectors
         D MS Data                        256    4980607      4980352 [1.41.12-2197]
         D Linux Raid                     256    4980735      4980480 [md0]
         D Linux Swap                 4980736    9174895      4194160
         D Linux Raid                 4980736    9175039      4194304 [md1]
        >P MS Data                     9481056 5858437983   5848956928 [1.41.12-2228]

    Listing the files on the last partition in that list shows all of my files intact. What should I do?
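
    A cautious first step (purely read-only, and assuming mdadm is available on the Synology or on a Linux box the disks are attached to) is to check whether the data partition still carries an md superblock before attempting any repair, since testdisk reports it as invalid:

        # inspect the md superblock on the data partition of the surviving disk
        mdadm --examine /dev/sdb5

        # if a valid superblock is reported, try assembling the degraded array read-only
        mdadm --assemble --run --readonly /dev/md2 /dev/sdb5

        # then check the filesystem without touching it before mounting anything
        fsck.ext4 -n /dev/md2

    If the volume was created as a Synology Hybrid RAID or LVM volume there will be an LVM layer on top of md2, in which case vgscan and lvscan are the next step rather than fsck.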

    Read the article
