Search Results

Search found 6879 results on 276 pages for 'azure storage blobs'.

Page 123/276

  • Can't connect to shared folders anymore?

    - by HuskyHuskie
    My home server is running Windows Server 2008 R2. I've had it running for almost a year now without any issues with shared folders. This past week I had an issue with my modem which required it to be power cycled, and with that I power cycled my router. Since then I haven't been able to connect to my shared network folders. I have no idea why that would even cause an issue, as I've power cycled my networking equipment in the past without problems, and none of my settings appear to have been lost. I am mapping these drives on my Windows 7 Ultimate machine using "Map Network Drive"; from there I enter \\SERVER\Storage, as I'm trying to connect to my shared folder named Storage. I receive the following error every time I try mapping the drive:

        Windows cannot access \\Server\Storage
        Check the spelling of the name. Otherwise there might be a problem
        with your network. To try to identify and resolve network problems,
        click Diagnose.
        Details: Error code: 0x80070035. The network path was not found.

    When I click Diagnose I get the following:

        Problems found: file and print sharing resource (SERVER) is online
        but isn't responding to connection attempts. The remote computer
        isn't responding to connections on port 445, possibly due to
        firewall or security policy settings, or because it might be
        temporarily unavailable. Windows couldn't find any problems with
        the firewall on your computer.

    I've tried this from multiple computers with the same result. To resolve the problem so far I've tried:

    - Disabling the firewall on SERVER
    - Reinstalling File Services
    - Modifying NetBT\Parameters registry values
    - Adding a custom inbound rule for port 445
    - Adding port forwarding on my router for port 445
    - Recreating the shared folders
    - Checking and rechecking the shared folder permissions
    - Resetting the password of the user account on the server used to access the shared folder

    I'm pulling my hair out with this problem, mainly because it came out of nowhere. It was working fine the night before and the next day it just stopped working. Any ideas of what I could try next are much appreciated. It should also be noted that this server is also used as a web server, and that functionality still works correctly.
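
    A quick way to test what the diagnostic is claiming (a sketch, assuming you can run commands on both machines; SERVER is the hostname from above):

        :: on the Windows 7 client -- does anything answer on port 445?
        :: (the telnet client may need to be enabled first)
        telnet SERVER 445

        :: on SERVER -- is anything actually listening on 445?
        netstat -an | find ":445"

        :: restart the Server (file sharing) service on SERVER;
        :: it may prompt about dependent services
        net stop server && net start server

    If netstat shows nothing listening on 445, the Server service or its bindings are the place to look rather than the firewall.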

    Read the article

  • Windows 2008 as home file server and more

    - by Christian W
    I currently have a FreeNAS unit as a NAS, and a Win2k8R2 unit as a server. However, I would like to consolidate these two units into one. What I really like about the FreeNAS unit is the ZFS filesystem, and the only reason I care about ZFS is the easy way I can grow an existing filesystem just by inserting a new drive. How would this work in Win2k8? Say I set up my unit with a separate drive as C: and a 1TB drive as D:, where D: is segmented into D:\Videos, D:\Music and D:\Pictures. When everything gets close to filling the storage drive, I would like to expand the storage, but I wouldn't want to end up with E:\Videos or D:\Videos2 (using the NTFS folder mount thingy). I still want all my videos to reside in D:\Videos, and I want the OS to decide which drive they're actually stored on... some kind of on-the-fly JBOD expansion :) Is this at all possible in Windows 2008?
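
    The closest built-in equivalent is a spanned volume on dynamic disks, which keeps the single D: letter while growing onto newly inserted drives. A rough diskpart sketch (disk and volume numbers are assumptions; check "list disk" first, and note a spanned volume has no redundancy, much like JBOD):

        DISKPART> list disk
        DISKPART> select disk 1
        DISKPART> convert dynamic
        DISKPART> select disk 2
        DISKPART> convert dynamic
        DISKPART> select volume D
        DISKPART> extend disk=2

    Both disks must be dynamic before the volume can be extended across them, and unlike ZFS there is no parity or mirroring unless you choose a RAID-5 or mirrored volume instead.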

    Read the article

  • Linux stretch cluster: MD replication, DRBD or Veritas?

    - by PieterB
    At the moment there are a lot of choices for setting up a Linux cluster. For the cluster manager you can use Red Hat Cluster Manager, Pacemaker or Veritas Cluster Server. The first one has the most momentum, the second one comes by default with RH subscriptions, and the last one is very expensive and has a very good reputation ;-) For storage:

    - You can replicate LUNs using a software RAID / md device
    - You can use the network with DRBD replication, which offers a bit more flexibility
    - You can use Veritas Storage Foundation technology to talk to your SAN's replication technology

    Does anyone have any recommendations or experience with these technologies?
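
    For reference, the DRBD option amounts to a resource definition like the following minimal two-node sketch (hostnames, devices and addresses are all placeholders):

        # /etc/drbd.d/r0.res -- minimal synchronous mirror (all names assumed)
        resource r0 {
          protocol C;                      # fully synchronous replication
          on node1 {
            device    /dev/drbd0;
            disk      /dev/sdb1;           # backing LUN on node1
            address   10.0.0.1:7788;
            meta-disk internal;
          }
          on node2 {
            device    /dev/drbd0;
            disk      /dev/sdb1;           # backing LUN on node2
            address   10.0.0.2:7788;
            meta-disk internal;
          }
        }

    The cluster manager (Pacemaker, RHCS or VCS) then promotes /dev/drbd0 to primary on whichever node owns the service.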

    Read the article

  • editing automated SharePoint emails

    - by Richard Collins
    SharePoint sends out an automated email when a site collection has reached its quota warning level. The email says: You are receiving this e-mail message because you are an administrator of the following SharePoint Web site, which has exceeded the warning level for storage: https://mysite.xxx.ac.uk/personal/xxx/. To see how much storage is being taken up by this site, go to the View site collection usage summary: https://mysite.xxx.ac.uk/personal/xxx/_layouts/Usage.aspx. The problem is that the usage summary link needs to point somewhere else. How can I edit the body of the email to change the link? Thanks.

    Read the article

  • RAID 10: SPAN 2 vs SPAN 4

    - by LaDante Riley
    I am currently configuring RAID 10 (first time ever doing RAID) for a server at work. In the configuration utility I am given the option of either span 2 or span 4. Having never done this before, I was curious if someone could tell me the pros and cons of each span? Thanks. The server is a PowerEdge R620 with a PERC H710 Mini (Security Capable) RAID controller, and I have 8 600GB hard drives. I am setting this server up as a network storage drive; I have a SQL Server historian database whose 1TB of storage filled up after 5 years of logging data.

    Read the article

  • Problem with USB drivers (Windows-XP)

    - by Carl
    I obtained the drivers from the manufacturer for my HT-Link NEC USB 2.0 2-port CardBus card. When I plugged in the card before I got the drivers, 3 new entries showed up in the Device Manager: two "NEC PCI to USB Open Host Controller" and one "Standard Enhanced PCI to USB Host Controller". With the card plugged in, I uninstalled those two drivers. I then removed the card. I copied the new drivers to c:\windows\system32\drivers and the .inf file to c:\windows\inf. I also copied the drivers & inf to a new directory called c:\windows\drivers\ousb2. I reinserted the card. Windows automatically installed the same drivers as before. I selected 'update driver' on the "NEC PCI to USB..." entry and didn't see any other options. I then selected 'have disk' and pointed to c:\windows\drivers\ousb2 and got a message "The specified location does not contain information about your hardware." I then selected 'update driver' on the "Standard Enhanced PCI to USB...," and manually selected "USB 2.0 Enhanced Host Controller" (OWC 4/15/2003 2.1.3.1). Windows then automatically found a USB root hub, and I manually selected "USB 2.0 Root Hub Device" (OWC 4/15/2003 2.1.3.1). Now there are two sections in the Device Manager titled "Universal Serial Bus controllers". I plugged in my external USB hard disk adapter, and "USB Mass Storage Device" was added to the first set. Here's how it looks (with drivers from the properties):

        [Universal Serial Bus controllers]
        Intel(R) 82801DB/DBM USB 2.0 Enhanced Host Controller - 24CD (6/1/2002 5.1.2600.0)
        Intel(R) 82801DB/DBM USB Universal Host Controller - 24C2 (7/1/2001 5.1.2600.5512)
        Intel(R) 82801DB/DBM USB Universal Host Controller - 24C4 (7/1/2001 5.1.2600.5512)
        Intel(R) 82801DB/DBM USB Universal Host Controller - 24C7 (7/1/2001 5.1.2600.5512)
        NEC PCI to USB Open Host Controller (7/1/2001 5.1.2600.5512)
        NEC PCI to USB Open Host Controller (7/1/2001 5.1.2600.5512)
        USB Mass Storage Device
        USB Root Hub (7/1/2001 5.1.2600.5512)
        (5 more USB Root Hubs - same driver)

        [Universal Serial Bus controllers]
        USB 2.0 Enhanced Host Controller (OWC 4/15/2003 2.1.3.1)
        USB 2.0 Root Hub Device (OWC 4/15/2003 2.1.3.1)

    When I unplug the card, the two "NEC PCI to USB..." entries in the first set disappear, and the whole second set disappears. (I unplugged the hard disk adapter first...) The hard disk adapter still doesn't work in that CardBus card with the new drivers. I don't think the above looks right: a second set of USB controllers listed in the Device Manager, the NEC entries still in the first set, and the USB mass storage device still in the first set. Any help appreciated. (Windows XP Pro SP3 with all current updates.)

    Read the article

  • Real-time offline folder-to-folder backup application needed (Windows)

    - by niktech
    I recently started using the Intel Matrix Storage RAID solution, which allowed me to use my 5 1TB drives for two RAID volumes: first a 1TB RAID 0 striped across all 5 drives, and second a RAID 5 across the rest of the free space on all drives (around 2.85TB usable). The RAID 0 I use for OS, applications and games, while the RAID 5 I use as more permanent storage (photos, etc). Now I do realize that running the OS and applications on RAID 0 across 5 drives is very dangerous, which brings up the following question: is there a reliable freeware realtime backup application that can back up a set of folders from one drive to another drive (no online backups needed)? I've already tried a few (Mozy, Yadis, Comodo Backup, GFI Backup, Idoo, Crash Plan) but none meet my requirements:

    - Low CPU and RAM usage.
    - Realtime backups: as soon as a file is modified in the source folder, it is added to a backup queue which is processed at the lowest priority when the CPU is idle. This backup queue should persist across computer restarts (i.e. the source and destination folders should always have the same set of files, except for the ones waiting in the backup queue).
    - Incremental backups: if only 10 bytes changed in a 1GB file, the app should only copy those 10 new bytes.
    - Ability to back up locked and open files (some apps, like Yadis, can't back up critical files like browser favorites).
    - Ability to run as a service (no need for any user to log in to have the app started).

    Optional requirements:

    - Compression of the destination into a well-known format (RAR, Zip) that can be read directly without the application.
    - Preset source folders (such as browser favorites, game saves, application settings, etc).

    The idea is to use the RAID 0 array as "semi-persistent RAM-like" storage which, in case of a failure, can be quickly rebuilt by reinstalling the OS, apps and games and copying the settings, saves and favorites back over from the RAID 5. I'm also thinking of taking this RAID-0-as-RAM idea to the extreme with SSDs (as soon as we get some nice 6Gb/s SATA III SSDs out there), where a couple of SSDs chained in RAID 0 would work as yet another semi-persistent cache layer sitting between the RAM and the HD. I'm just hoping there already exists an application that satisfies these requirements... otherwise I'll have to write one myself, which I would prefer not to do.
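
    As a stopgap while looking, Windows' built-in robocopy can approximate the realtime-mirroring requirement, though it copies whole changed files rather than byte deltas and struggles with locked files. A sketch (paths are placeholders):

        :: mirror D:\Data to E:\Backup, re-running after 1 change or 1 minute,
        :: in restartable mode with a few retries for busy files
        robocopy D:\Data E:\Backup /MIR /MON:1 /MOT:1 /Z /R:3 /W:5

    /MON:n and /MOT:m keep robocopy resident and re-run the mirror whenever n changes are seen or m minutes pass, which covers the "persistent queue" requirement in a crude way.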

    Read the article

  • usb device in dual mode on gentoo linux

    - by Idlecool
    I have a flip-flop USB modem which has two modes.

    1. USB mass storage mode:

        root@devbox:/media/F872F0FD72F0C184/Users/idlecool/Downloads# lsusb
        Bus 006 Device 003: ID 19d2:fff5 ONDA Communication S.p.A.

    2. usbserial mode:

        root@devbox:/media/F872F0FD72F0C184/Users/idlecool/Downloads# lsusb
        Bus 006 Device 003: ID 19d2:fffe ONDA Communication S.p.A.

    By default, whenever I plug the modem into the USB port, the Linux machine recognizes it as a USB mass storage device. How can I make it load as a usbserial device? I have used the usb_modeswitch package in the past on Ubuntu 10.04, but I can't install that package on the Gentoo live CD; even udev is not installed on the live CD. How do I change the product ID of the USB device on the Gentoo live disc without udev?
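
    usb_modeswitch is just a small libusb program, so neither udev nor a distro package is strictly required; two possible approaches, sketched with assumptions flagged (the device node and the MessageContent value are device-specific):

        # simplest trick first: many of these modems flip to serial mode when
        # their emulated CD-ROM is ejected (the /dev/sr* node is an assumption)
        eject /dev/sr0

        # otherwise, build usb_modeswitch from source (needs libusb and gcc on
        # the live system) and switch the device by hand, taking the
        # MessageContent for 19d2:fff5 from the usb_modeswitch device database
        usb_modeswitch -v 19d2 -p fff5 -M <MessageContent-from-database>

    After a successful switch, lsusb should show 19d2:fffe and the usbserial/option driver can bind to it.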

    Read the article

  • XenServer migrate machines between hosts

    - by Hubert Kario
    I have a XenServer 5.6 Free setup with 5 VMs (Windows and Linux) using about 1.5TB of directly attached storage. Because our virtualisation needs have grown a bit, we are currently preparing a faster XenServer 6.0 Free machine with more RAM and more storage; again, directly attached disks. How can I migrate the VMs between the XenServer machines? I don't need to keep the machines up and running during migration, but using VM export and import would definitely take too long. Would making a VM with the same configuration on the new host and dd'ing the LVM volume over the network be the only quick and least painful solution? Are there any "gotchas" I should look out for when doing something like this? The old machine has an AMD Phenom II, the new one has Intel Xeon E5 CPUs.
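
    The dd-over-the-network approach described above would look roughly like this sketch (the LV paths are placeholders: on XenServer the VDI logical volumes live in a volume group named after the storage repository UUID, the exact LV naming differs between 5.6 and 6.0, and the VM must be shut down on both sides with a destination VDI of at least the same size already created):

        # on the old host: stream the shut-down VM's disk to the new host
        dd if=/dev/VG_XenStorage-<old-SR-uuid>/<source-vdi-lv> bs=4M \
          | ssh root@newhost "dd of=/dev/VG_XenStorage-<new-SR-uuid>/<dest-vdi-lv> bs=4M"

    lvdisplay on each host shows the actual LV names; piping through gzip or using a larger block size can help on slow links.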

    Read the article

  • What is the safest and least expensive way to store 10 terabytes of data?

    - by Josh T
    I'm a member of a production company and we're preparing for our first feature film. We've been discussing methods of data storage to keep all of our original content safe (for as long as possible). While we understand data is never 100% safe, we'd like to find the safest solution for us. We've considered:

    - a 16TB NAS for on-site storage
    - 4-5 2TB hard drives (cheap, but not redundant): copy the original footage to the drives, then seal them in static-free bags
    - burning the data to Blu-ray discs (time consuming and expensive: 200 discs == $5000)
    - tape drive(s)? I know the least about tape drives, except the fact that they're more reliable than disks.

    Any experience/knowledge with this amount of data is hugely appreciated.

    Read the article

  • Mounting Gluster Volumes

    - by Roman Newaza
    I have created a Hosted Zone with the 2 IP addresses of the Gluster cluster; both IPs are returned by dig. After mounting Gluster, I cannot ls the mount point as it takes a long time. mount shows it's mounted, but df doesn't. Finally, I get this: ls: cannot access /mnt/storage: Transport endpoint is not connected. But if I mount it with just one of the IPs, there is no problem: the volume contents are accessible. OS: Ubuntu 11.10. GlusterFS: 3.2.6. Log: http://pastie.org/private/2jgp4h1hnqgzych3djtg. I can telnet to the storage from the client -- the ports are open.
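
    A native GlusterFS mount only needs one volfile server at mount time (the client then talks to all bricks directly), so round-robin DNS is not required for failover. One common pattern, sketched with placeholder names and assuming the client's mount.glusterfs supports the option:

        # fetch the volfile from server1, falling back to server2 if it is down
        mount -t glusterfs -o backupvolfile-server=server2.example.com \
            server1.example.com:/myvolume /mnt/storage

    This avoids handing the client a single A record that resolves to two addresses, which can produce exactly the hang-then-"Transport endpoint is not connected" behaviour described above.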

    Read the article

  • Software/FakeRAID: Windows 8 Disk Mirroring vs Intel Onboard

    - by Johnny W
    So Windows 8 is out and I have a new motherboard. I wish to create a RAID 1 coupling between two HDDs, for storage purposes only (my OS is on an SSD), but I don't know which is the best route to take. My motherboard (Z77 chipset) comes with the age-old Intel fake RAID, but since I only wish to use my RAID for storage, I wondered if I might be better off using Windows 8 disk mirroring. Can anyone advise which is better? Or perhaps the pros and cons of each, if that's too contentious? I just can't see the benefit of fake RAID. You can see my current setup here, if that might change things. Thanks!

    Read the article

  • ASP.NET Session Management - which SQL Server option?

    - by frumious
    We're developing some custom web parts for our WSS 3 intranet, and have just run into something we'd like to use ASP.NET sessions for. This isn't currently enabled on the development server. We'd like to use SQL Server as the storage mechanism, because the production environment is a web farm with very simple load balancing. There are 3 options you can choose from when setting up SQL Server session storage: tempdb, a default separate DB, or a named DB. Both the tempdb and default-separate-DB options create a new DB to store certain information in; tempdb stores the actual session info in tempdb, which doesn't survive a reboot, while the default separate DB stores everything in the new DB. Since you've got to create the new DB either way, my question is this: why would you ever choose to store the session info in tempdb? The only thing I can think of is if you'd like the ability to wipe the sessions by rebooting the server, but that seems quite apocalyptic!
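
    For reference, the three modes correspond to the -sstype switch of the aspnet_regsql.exe tool that creates the session database (a sketch; the server name is a placeholder and -E uses Windows authentication):

        :: t = session state kept in tempdb (lost on SQL Server restart)
        aspnet_regsql.exe -S SQLSERVER01 -E -ssadd -sstype t

        :: p = persisted in the default ASPState database
        aspnet_regsql.exe -S SQLSERVER01 -E -ssadd -sstype p

        :: c = persisted in a custom-named database
        aspnet_regsql.exe -S SQLSERVER01 -E -ssadd -sstype c -d MySessionDB

    The web farm's web.config then points at it with sessionState mode="SQLServer", using the same machineKey on every node.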

    Read the article

  • Can't access individual samba shares

    - by Richard Maddis
    I've just installed CentOS and I'm configuring Samba. I have a share with the following in the smb.conf file:

        [storage]
        comment = Main storage for all use
        path = /share
        public = yes
        browseable = yes
        writable = yes
        printable = no
        write list = bob root
        create mask = 0775
        guest ok = yes
        available = yes

    In Windows Explorer, I can reach the page listing all the shares on the server, but when I click on the shares themselves, I get an error saying that the folder cannot be found. I have verified that the folder /share exists, and I've also given it 777 permissions, so it cannot be due to permissions. What is causing this? I can post more config files if necessary.
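
    On CentOS specifically, a share that lists but won't open is often SELinux rather than Samba itself; a quick diagnostic sketch, assuming the default targeted policy:

        # is SELinux enforcing?
        getenforce

        # if so, label the directory so smbd is allowed to serve it
        chcon -R -t samba_share_t /share

        # or, more broadly, let smbd export any directory read/write
        setsebool -P samba_export_all_rw on

    If getenforce prints Disabled, this isn't the cause and the smb.conf and log.smbd output are the next things to check.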

    Read the article

  • php include path problem:Same code works on Ubuntu default Apache and php conf, but not on CentOS

    - by Neo
    The same code works on my Ubuntu server, but when I upload it to my dedicated hosting server running CentOS it seems to add an extra prefix of .:/usr/share/pear:/usr/share/php: -- I tried setting include_path to different things, but it just doesn't work. The file is in a directory called language, in the same folder as the file that is including it, and I'm using:

        include dirname(__FILE__).DIRECTORY_SEPARATOR."language".DIRECTORY_SEPARATOR."storage.inc";

    and

        include dirname(__FILE__)."/language/language.php";

    and

        include "language/language.php";

    and a lot of other combinations, but I can't get it to find the file.

        Fatal error: require_once() [function.require]: Failed opening required
        '/home/neo/public_html/migration/include/class/core/storage.inc'
        (include_path='.:/usr/share/pear:/usr/share/php:/home/neo/public_html/migration')
        in /home/neo/public_html/migration/include/class/core/class_lang.inc on line 153

    Read the article

  • Do I have to chmod 777 my NFS folder when I share?

    - by luckytaxi
    Under Red Hat, if I export a folder as an NFS mount, does the folder have to have RW for users/groups/others? Right now /storage/software is -rwxr-xr-x root/root, i.e. in /etc/exports:

        /storage/software *(rw,sync)

    On my client, I can mount but I can't write. I'm using a regular user and NOT root. I think "no_root_squash" fixes it, but I really don't want that. Then again, nor do I want to have to chmod 777 the folder on the server.
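
    The folder does not need to be world-writable: the server enforces ordinary Unix permissions against the UID the client sends, so granting the specific user or group write access is enough. A sketch (user and group names are placeholders, and it assumes the UID/GID match between client and server):

        # on the server: give a dedicated group, not "others", write access
        groupadd software
        usermod -aG software alice      # alice's UID/GID must match the client's
        chgrp software /storage/software
        chmod 775 /storage/software

    no_root_squash only changes how root on the client is treated, so it wouldn't help a regular user here anyway.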

    Read the article

  • Scaling a video processing application on EC2?

    - by Stpn
    I am approaching the need to scale a video-processing application that runs on EC2. So far the setup is one machine:

    - Backbone.js frontend
    - Rails 3.2
    - PostgreSQL
    - Resque + S3 for storage

    The flow of the app is as follows:

    1) Request from frontend: upload a video.
    2) Store the video.
    3) Query external APIs.
    4) Process / encode the video.
    5) Post back to the frontend.

    I can separate the backend and frontend without any problems, but when it comes to distributing the backend between several servers I am a bit puzzled. I can probably come up with a temporary solution (like just duplicating the app across several instances), but since I don't really have expertise in backend system administration, there could be some fundamental mistakes, and I would rather have something that is scalable. I wonder if anyone can give some feedback on the following plan:

    A) Frontend machine. Just the frontend; talks to the backend via a REST API of sorts.
    B) Backend server (BS) with the main database. Gets requests from A), posts to D), saves uploads to C).
    C) S3 storage.
    D) Server for querying the external APIs: basically just Resque workers that post info back to B).
    E) Server for video encoding: processes videos uploaded to C) and uploads them back.

    So I will have:

        A) frontend
             \
        B) MAIN_APP/DB ----- C) S3 Storage (Files)
            /    \               /
        D) ExternalAPI_queries  E) Video_Processing
           (redundant DB)          (redundant DB)

    All of this will supposedly talk to each other via HTTP requests. My reasoning is that the video-processing part is by far the most resource-intensive, so I would run a barebones application there that just accepts requests and starts processing them. Questions:

    1) In this setup I will have the main database at B), and all the other servers will communicate with it via HTTP requests (and also store duplicates of the database, I guess, for safety reasons). Is that the right approach, or should I have one database that everyone connects to (and if so, how)?
    2) Is it a good idea to separate the API queries from the video processing? Logically they are very close (processing is determined by the result of the API queries), but resource-wise video processing is way more intensive.
    3) What should I use to distribute calls between the backend apps based on load?

    Read the article

  • "Always available offline" option missing on one network drive in Windows 7

    - by Rynardt
    My network setup at home has 2 network storage devices. One holds media content on a Popcorn Hour A-110, and the other is a D-Link DNS-320 in RAID 1 configuration for business files. When I access these network drives and right-click a folder, the "Always available offline" context-menu entry appears for the A-110 device, but not for the D-Link. I have tested this in both Windows 7 32-bit and Windows Vista 64-bit; in both instances the "Always available offline" option is only available for the A-110 storage device, and not for the D-Link. How do I get this option for the D-Link? Any advice or ideas are welcome.

    Read the article

  • how to design network for connectivity between private and corporate LANs?

    - by maruti
    There is a bunch of servers connected to shared storage in a private LAN (10.x.x.x). This private LAN is managed by a Windows server (DHCP, DNS and directory services). These hosts need to be reachable from outside of the datacenter, e.g. via Remote Desktop. Can the NIC2 on each of the hosts be connected to the other, public LAN without compromising speed or security? What are the important considerations: additional hardware, like switches? Routing and DNS software? Currently available hardware: a Dell PowerConnect 6224 switch, planned for the storage network. Software: Windows 2003 Server for DHCP, DNS, A/D. Would it be more flexible to use Linux distributions like IPCop, Untangle, etc.? All that I am looking for is good isolation between the private and other networks, avoiding DHCP, DNS and AD clashes.

    Read the article

  • Best Solution For My Requirements

    - by Eray
    Hello, I'm a web developer. I have a few small online web applications and a few WordPress blogs, but I don't have much experience with installing and configuring web servers. One of my web applications needs cron jobs: it will check a lot of web sites' availability, and it will eat a lot of RAM, so I think shared hosting isn't suitable for this. But 1GB of storage is enough, I think; I don't need much storage for my web sites. What do you think? Which hosting solution is more suitable for my requirements: reseller, VPS, cloud server, etc.?

    Read the article

  • How can I create multiple identical AWS EC2 server instances with large amounts of persistent data?

    - by mojones
    I have a CPU-intensive data-processing application that I want to run across many (~100,000) input files. The application needs a large (~20GB) data file in order to run. What I would like to do is:

    - create an EC2 machine image that has my application and the associated data file installed
    - boot up a large number (e.g. 100) of instances of this image
    - split my input files up into 100 batches and send one batch to be processed on each instance

    I am having trouble figuring out the best way to ensure that each instance has access to the large data file. The data file is too big to fit on the root filesystem of an AMI. I could use block storage, but a given block storage volume can only be attached to a single instance, so I would need 100 clones. Is there some way to create a custom image that has more space on the root filesystem, so that I can include my large data file? Or is there a better way to tackle this problem?
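
    One common pattern sidesteps the one-volume-one-instance limit: snapshot an EBS volume holding the data file once, then let every instance create its own volume from that snapshot at launch via the block device mapping. A sketch using today's aws CLI (all IDs and device names are placeholders):

        # one-time: snapshot the 20GB volume that holds the data file
        aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
            --description "20GB reference data"

        # each launched instance gets its own copy of the data as /dev/sdf
        aws ec2 run-instances --image-id ami-0123456789abcdef0 --count 100 \
            --block-device-mappings \
            'DeviceName=/dev/sdf,Ebs={SnapshotId=snap-0123456789abcdef0}'

    Volumes created from a snapshot are independent copies, so 100 instances can each mount their own without any cloning step on your side.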

    Read the article

  • Will a higher hard drive size affect performance

    - by user273010
    My laptop came with a 500GB hard drive. I use my laptop for storing my digital photographs, and I only have about 14GB of file storage left on the original drive. I have a 750GB external hard drive, but am leery of relying on it for primary storage, as I tend to knock things over and it has already crashed once, losing a lot of my files. I am looking at a 1TB internal hard drive, but am concerned whether storing so much data will affect the computer's performance. Should I also increase the RAM from 4 to 8GB (the limit for my 64-bit, Windows 7, Asus A54C laptop)?

    Read the article

  • Datacenter Backup Strategy

    - by EasyEcho
    What are common approaches to backup solutions in remote data centers? I am already familiar with general backup principles and have a very good backup strategy for our local data center, but am having great difficulty extending it to a remote data center. We currently do a full backup on Friday, differentials Monday through Thursday, and rotate offsite Friday morning... rinse and repeat, week after week. BTW, we use disks and have been very happy with this approach. We could buy a large storage server and back everything up to it, but that solution doesn't give us offsite. We could encrypt and upload to Amazon or some other online storage, but that would take a large amount of time given the data, and would be rather expensive in bandwidth leaving the data center and arriving at Amazon. We could drive to the data center every Friday and continue to rotate disks as we do now, but that just seems old-fashioned. What am I missing -- are there better options?
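
    If the online route turns out to be viable after pricing the bandwidth, the full/differential rotation maps fairly directly onto a tool like duplicity with S3 as the target. A sketch (bucket name and paths are placeholders; encryption is GPG-based by default):

        # Friday: encrypted full backup to S3
        duplicity full /srv/data s3+http://mybackup-bucket/datacenter1

        # Mon-Thu: encrypted incrementals against the last full
        duplicity incremental /srv/data s3+http://mybackup-bucket/datacenter1

        # prune, keeping the last two rotation cycles
        duplicity remove-all-but-n-full 2 --force s3+http://mybackup-bucket/datacenter1

    The first full is the painful transfer; after that, only changed data leaves the data center each night.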

    Read the article

  • vsftpd with pam_winbind.so

    - by David
    I'm trying to set up vsftpd to use logins from our domain. I want the FTP users to be able to log in using their Active Directory username/password and have full access to /media/storage/ftp/username. I set up PPTP using winbind and it is working fine, so I believe the issue is with vsftpd and PAM. The FTP server runs but gives a 530 for the login. I turned on debug for the PAM module, but I see nothing in the syslog; vsftpd only logs a failed login in its own log.

    /etc/pam.d/vsftpd:

        auth required pam_winbind.so debug

    /etc/vsftpd.conf:

        listen=YES
        listen_ipv6=NO
        connect_from_port_20=YES
        anonymous_enable=NO
        local_enable=YES
        write_enable=YES
        xferlog_enable=YES
        idle_session_timeout=600
        data_connection_timeout=120
        nopriv_user=ftp
        ftpd_banner=Welcome to Scantiva! Authorized access only!
        local_umask=022
        local_root=/media/storage/ftp/$USER
        user_sub_token=$USER
        chroot_local_user=YES
        secure_chroot_dir=/var/run/vsftpd/empty
        pam_service_name=vsftpd
        guest_enable=YES
        guest_username=ftp
        ssl_enable=YES
        allow_anon_ssl=NO
        force_local_data_ssl=NO
        force_local_logins_ssl=NO
        ssl_tlsv1=YES
        ssl_sslv2=YES
        ssl_sslv3=YES
        rsa_cert_file=/etc/ssl/private/vsftpd.pem
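
    One thing worth checking (an educated guess, not a confirmed fix for this exact setup): vsftpd consults PAM's account facility as well as auth, so a stack containing only an auth line can still return 530 even when the password is correct. A fuller stack might look like:

        # /etc/pam.d/vsftpd -- winbind for both authentication and account checks
        auth    required pam_winbind.so debug
        account required pam_winbind.so debug

    With debug on, the winbind module logs via the auth facility, so the messages may land in /var/log/auth.log (or secure) rather than the main syslog.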

    Read the article
