Search Results

Search found 11618 results on 465 pages for 'shared storage'.

  • Data transfer speed to USB storage connected to wifi router very slow

    - by RonakG
    Here is my setup. A Linksys Cisco E3200 wifi router. A MacbookPro running OS X Lion 10.7.4. A Seagate GoFlex 1TB hard drive connected to wifi router via the USB port. When I try to transfer data from my MBP to the HDD, the data transfer rate is very low. I'm getting around 3MB/s write speed. This is very slow compared to the speed I get when HDD is directly connected to the MBP. The HDD is NTFS formatted. And the router provides access to HDD using Samba share. So I connect to the HDD using smb://. What is the limiting factor here affecting the data transfer rate?
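
    A rough way to separate disk speed from network/SMB overhead (a hedged sketch; the share name and test size are assumptions) is to time a large sequential write to the mounted share and compare it with the same write to the local disk:

        # Mount the router's Samba share from the Mac (share name assumed)
        mkdir -p /Volumes/router_hdd
        mount -t smbfs //admin@192.168.1.1/GoFlex /Volumes/router_hdd

        # Time a 200 MB sequential write over the network...
        time dd if=/dev/zero of=/Volumes/router_hdd/testfile bs=1m count=200

        # ...and the same write to the local disk for comparison
        time dd if=/dev/zero of="$HOME/testfile" bs=1m count=200

    If the network copy is far slower than the local one, the bottleneck is most likely the router's CPU handling SMB plus NTFS over its USB port, rather than the drive itself.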

    Read the article

  • migrating storage to a different controller

    - by bellocarico
    Hello, I've just purchased a couple of Adaptec controllers (2405/5405) for my ESXi 4.0 U1 servers. Currently ESXi and a couple of VMs are hosted on a single SATA boot disk connected to an onboard NVIDIA non-RAID controller. I know that it's possible to migrate from a single disk to RAID 1 with the Adaptec, and I'm pleased with that, but I'm not sure if ESXi already has the right drivers installed/loaded for this controller. Is there any way I can check this? Is ESXi clever enough to recognize the new hardware and load the right module? Thanks
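
    One way to check (a hedged sketch; the commands are run from the ESXi tech support mode console, and the aacraid module name is an assumption) is to list the storage adapters and loaded modules once the card is installed:

        # List storage adapters and the driver ESXi has bound to each
        esxcfg-scsidevs -a

        # Check whether an Adaptec driver module is loaded
        vmkload_mod -l | grep -i aac

        # Confirm the controller is visible on the PCI bus at all
        lspci | grep -i adaptec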

    Read the article

  • error while loading shared libraries Dicom Store SCU / Echo SCU

    - by David Just
    I am running a DICOM receiver on a CentOS 6 box on top of a Xen server. If I attempt to send data to it from a remote server I get the following error:

        storescp: relocation error: /lib/libresolv.so.2: symbol memcpy, version GLIBC_2.0 not defined in file libc.so.6 with link time reference

    If I send data to the server locally it works, but sending to it from a remote host gives the above error. I do not think this is a problem that is specific to the storescp server.
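
    A few checks that usually narrow this kind of error down (a hedged sketch, assuming storescp is on the PATH): confirm which C libraries the binary resolves at run time and whether the installed glibc has been mixed from different builds:

        # Show which shared libraries storescp resolves, and from where
        ldd "$(which storescp)"

        # Verify the installed glibc files have not been altered or replaced
        rpm -V glibc glibc-common

        # A stray LD_LIBRARY_PATH or LD_PRELOAD pointing at an old libc is a
        # common cause of "symbol ... not defined in file libc.so.6"
        env | grep -E 'LD_LIBRARY_PATH|LD_PRELOAD'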

    Read the article

  • Access node.js local server through mobile via same shared wifi

    - by laggingreflex
    EDIT: I was stuck in this situation before, but then it was Apache-related. This time I'm using NodeJS, so the old answer doesn't help. I'm running a NodeJS webserver (on port 80, previously Apache) on Windows 7. I want to access the webserver through my mobile, which shares the wifi router with my PC locally. http://localhost works from the PC, but I can't access http://192.168.1.4 from either my phone or even my computer. ipconfig /all on my computer lists my IP address as 192.168.1.4:

        Wireless LAN adapter Wireless Network Connection:
           IPv4 Address. . . . . . . . . . . : 192.168.1.4(Preferred)

    I can ping my phone's (internal) IP address [192.168.1.5] from the PC, and vice-versa I can ping my PC [192.168.1.4] from my phone. So why can't I access http://192.168.1.4 from my phone (or PC)? The firewall is off.

    Read the article

  • Storage of various linux config files

    - by stantona
    I'm using git to track/store all the various config files required for my Linux setup. They're organized as if they live in my home directory, e.g.:

        .Xresources
        .config/awesome/rc.lua
        .xmodmap
        .zshrc
        vim/    <- submodule
        emacs/  <- submodule
        etc.

    I use git submodules for other things like the vim/emacs configuration (since I also want to keep those as separate repos). I'm thinking of creating a shell script to create the various links to these files. The goal is to make it painless to set up another Linux machine. Is this a reasonable idea? Is there a preferred approach? I'm mostly interested in hearing how other people store their configs.
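
    For the linking step, a minimal sketch of such a script (the repo location ~/dotfiles and the exact file list are assumptions) could look like:

        #!/bin/sh
        # Link each tracked config file from the repo into $HOME,
        # backing up anything that is already there.
        DOTFILES="$HOME/dotfiles"

        for f in .Xresources .xmodmap .zshrc; do
            [ -e "$HOME/$f" ] && mv "$HOME/$f" "$HOME/$f.bak"
            ln -s "$DOTFILES/$f" "$HOME/$f"
        done

        # Directories (including the vim/emacs submodules) work the same way
        mkdir -p "$HOME/.config"
        ln -sfn "$DOTFILES/.config/awesome" "$HOME/.config/awesome"
        ln -sfn "$DOTFILES/vim"   "$HOME/.vim"
        ln -sfn "$DOTFILES/emacs" "$HOME/.emacs.d"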

    Read the article

  • yum update with shared cache

    - by Sammitch
    We've got a big batch of RHEL6 machines that are due for patching, and for some reason the process here does not involve a local repo. I'm new here, I've asked why, ["it just didn't work"] and I don't have enough time to make it work before the window that's already scheduled. So the usual method is to install yum-downloadonly and run yum update --downloadonly --downloaddir=/mnt/cifs_share and then yum update /mnt/cifs_share/*.rpm which just does not look right to me since not all of these machines have the same set of installed packages. The method I tried today was mounting the share to /var/cache/yum/x86_64/6Server/rhel-x86_64-server-6/packages/ which worked, but then yum automatically deleted everything once it finished. I've looked over the yum man page, but I don't see any flag I can feed it to stop it from deleting everything, nor a flag like up2date's --tmpdir=/mnt/cifs_share. Can anyone out there help me kludge this together until I can get a local repository working?
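
    As a rough sketch (hedged; the share path comes from the question, the rest is stock yum configuration), yum can be told to keep its downloaded packages and to use the share as its cache directory, so nothing gets purged after the transaction:

        # /etc/yum.conf
        #   keepcache=1  tells yum not to delete packages after installing them
        #   cachedir     points the package cache at the shared mount
        [main]
        keepcache=1
        cachedir=/mnt/cifs_share/yum-cache/$basearch/$releasever

        # One-off equivalent on yum versions that support --setopt:
        #   yum --setopt=keepcache=1 --setopt=cachedir=/mnt/cifs_share/yum-cache update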

    Read the article

  • Installing Windows XP from removable storage

    - by objectiveME
    Hello, I want to install Windows XP from my flash disk, but upon running DISKPART and LIST PART my flash disk is not being listed. I soon found out on another forum that it's not supported. I tried UNetbootin, but I don't want to go that route. I have learnt that I can make my flash disk appear as a hard drive, but I am not sure how, since I have never done this before. I am following this tutorial: http://www.intowindows.com/how-to-install-windows-7-rc-on-acer-aspire-one-netbook/ If I present a USB stick as a hard drive, will I be able to install XP?

    Read the article

  • Setup shared internet connection on virtualbox with fixed IP

    - by Tom
    I am a web developer and until recently I was using Ubuntu as my OS. For many reasons, I have switched back to Windows. I still want to keep my server on a Linux platform, so I set up my local server as a virtual machine. Everything works great, but I'm struggling a little with the networking. Since I work in different places and visit clients, I connect to all sorts of networks with different settings. That means the possible IP range is very dynamic, which causes issues when I work on my local server. At the moment I have a dynamic IP on my host and a static IP on my guest. That way I can access the server from my host (by adding a record to the hosts file), and I also have an internet connection on the guest. But once I change networks, it stops working (assuming the network has a different configuration). My question is: how do I set up host-guest networking so that, no matter what network I connect to, I can keep the static IP on the guest (which is registered in the hosts file on my host, so I can reach the webserver) and still have an internet connection on the guest? Hope it makes sense. Thank you
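
    One common pattern for this (a hedged sketch; adapter and VM names are placeholders) is to give the guest two interfaces: a NAT adapter for internet access, which works on whatever outside network the host joins, plus a host-only adapter on a fixed subnet that never changes, and to point the hosts-file entry at the host-only address:

        # On the host: create a host-only network with a fixed subnet
        VBoxManage hostonlyif create                       # typically creates vboxnet0
        VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.1

        # Give the VM ("devserver") NAT for internet + host-only for the stable IP
        VBoxManage modifyvm "devserver" --nic1 nat
        VBoxManage modifyvm "devserver" --nic2 hostonly --hostonlyadapter2 vboxnet0

        # Inside the guest, assign the second interface a static address such as
        # 192.168.56.10 and register that IP in the Windows hosts file.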

    Read the article

  • Have a set of cgi scripts shared by multiple domains

    - by rpat
    Goal: Have multiple domains share a set of cgi (Perl) scripts. Environment: Apache 2.0 on a dedicated CentOS server (Apache configuration files generated by cPanel). I have dozens of domains on the dedicated server. The domains are set up by cPanel under the VirtualHost section. I have almost no knowledge of Apache; most of what I do is taken care of by cPanel. I would like to put a set of scripts under one directory (perhaps under / or /opt) and, for each of the domains, create a symbolic link under the individual cgi-bin to this common directory. This way I am hoping to avoid having to keep a copy of the scripts for every domain. Since the Apache config files are generated by cPanel, I would not like to manually make changes to those; besides, I could mess things up. I see that cPanel recommends the use of include files rather than changing httpd.conf. Perhaps I need to have the following of symbolic links enabled in the cgi-bin directory and allow the web server user to execute scripts not owned by it. Maybe I am making things more complicated than they are. I would be glad to use any other means to achieve my goal. Thanks in advance for your help. (I asked this on stackoverflow and someone suggested that I could ask it on serverfault.)
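
    As a hedged sketch (directory names are placeholders; the exact cPanel include file location varies by setup), the shared scripts can live in one central directory with a symlink from each account's cgi-bin, provided the vhost allows symlinks and CGI execution there:

        # Central copy of the scripts, readable and executable by the web server
        mkdir -p /opt/shared-cgi
        cp myscript.pl /opt/shared-cgi/
        chmod 755 /opt/shared-cgi /opt/shared-cgi/*.pl

        # Per-domain symlink inside each account's cgi-bin
        ln -s /opt/shared-cgi /home/account1/public_html/cgi-bin/shared
        ln -s /opt/shared-cgi /home/account2/public_html/cgi-bin/shared

        # The vhost (via a cPanel include file rather than httpd.conf itself)
        # would need something along the lines of:
        #   <Directory "/opt/shared-cgi">
        #       Options +ExecCGI +FollowSymLinks
        #       SetHandler cgi-script
        #       Order allow,deny
        #       Allow from all
        #   </Directory>

    Note that if suexec is enabled (it usually is on cPanel), scripts living outside the user's home directory or not owned by that user will normally be refused, so that constraint may decide whether the symlink approach is workable.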

    Read the article

  • Hard Disk based storage library

    - by Ryan M.
    We have a Tandberg T24 tape device to handle all of our long-term backups right now. We decided that we're not backing up nearly everything that we would like to, and that we still have a lot of vulnerabilities. To get to where we want to be, we're going to have to back up a lot more servers than we're currently doing. All of our internal servers have some sort of directly attached drive (i.e. a LaCie RAID box or a simple portable hard drive) doing backups, but what we want to do is get those backups off-site. The current tape drive is directly attached via SCSI to a Windows Server 2008 File Server, so to back up anything to tape, it has to be funneled through the File Server. With the increase that we have planned, I don't think that funneling everything through the File Server is the right course of action, and I'm thinking that maybe a second backup device would be more appropriate. I would like your input on a couple of ideas. 1) Doing HDD instead of tape. Tape is hard to deal with. We have a regular rotation cycle, so they don't need years and years of shelf life, so I'm wondering if something HDD-based would be better. 2) Something accessible over the network. Instead of having the device directly attached to one specific machine, have it available to all the servers over the network. Our File Server is a 12-disk RAID 6 setup. I was thinking something like that, but with no RAID involved; all disks are standalone so they can be used/installed/removed on an individual basis. Does any such thing exist? Thanks for your ideas. I'm really interested to hear about some of the solutions you guys are using.

    Read the article

  • pfSense routing between two routers with shared network

    - by JohnCC
    I have a network set-up using two pfSense routers arranged like this:

        DMZ1   WAN1        WAN2   DMZ2
         |      |            |      |
         |      |            |      |
         \____ PF1          PF2 ____/
                |            |
                |            |
                \___ TRUSTED ___/

    Each pfSense router has its own separate WAN connection and a separate DMZ network attached to it. They share a common TRUSTED LAN between them. The machines on the trusted network have PF1 as their default gateway. PF1 has a static route defined to DMZ2 via PF2, and PF2 has a static route to DMZ1 via PF1. There is NAT to the WAN, but the internal networks (DMZ1/2 and TRUSTED) use different RFC1918 subnets. I inherited this arrangement, and it all used to work fine. I made a config change to PF1 (relating to multicast), and machines on DMZ2 suddenly could not talk to TRUSTED. I rolled the change back, but the problem persisted. What I guess you'd hope would happen is that TCP packets would go DMZ2 - PF2 - TRUSTED and on return TRUSTED - PF1 - PF2 - DMZ2. That's the only way I can see it would have worked. However, PF1 drops the returning packets. I've verified this using tcpdump. I've worked around this by adding static routes to DMZ2 via PF2 on the servers on TRUSTED, but some devices on there do not support static routes, so this is not ideal. Is there a way to make this arrangement work decently, or is the design inherently flawed? Thanks!

    Read the article

  • (Windows 7) Shared External Drive Permission Issues

    - by connec
    So, say I share my system (C) drive through windows (E.g. properties -> Sharing -> Advanced Sharing -> Share this Folder). I can then access this drive at \\Comp\C on another networked computer - all is well. However, if I insert a removable (USB) disk, say "E", and proceed to share it the same way, when I attempt to access \\Comp\E (either directly or through browsing) I get an error: Windows cannot access \\Comp\E You do not have permission to access \\Comp\E. Contact your network administrator to request access. Now, the permissions (Advanced Sharing -> Permissions) are set with "Everyone" having read access (same as the internal drive), so this doesn't make a lot of sense. Also of note, I have an SSH server on my computer (through Cygwin) and even through SSH (logging in as an administrator user) I cannot access /cygdrive/e (although /cygdrive/c is accessible). As a final note, the drive is of course accessible on the host machine (E:\), and also at \\Comp\E on the host machine.

    Read the article

  • Running RAID on an internal storage drive

    - by Johnny W
    I am running Windows 8 on an SSD, and it's all running swimmingly, but I want to put my documents on a "normal" HD running under RAID 1. I have four SATA 3Gb/s ports on my motherboard (my Windows 8 SSD is on a different 6Gb/s controller). All four are used (1 Blu-ray optical drive, 1 spare HD, and the 2 I wish to turn into a RAID 1 array). In my BIOS I can only change settings for the entire controller, not just individual ports. So my question is: if I switch these four ports to RAID mode, will that negatively affect the non-RAID hardware plugged into the controller? I.e. will a HD or Blu-ray drive be slower or incompatible when plugged into a non-AHCI SATA port? Thanks.

    Read the article

  • Looking for something to monitor an Internet Shared Connection in Windows XP

    - by thenewbie
    If someone knows of software with these features, I'd happily check it out:

    - Works from the tray area
    - Very lightweight
    - Works in Windows XP (preferably workstation, nothing serverish required)
    - Can show a bandwidth usage graph where I can easily see each PC's usage
    - Lets me inspect a computer's connections and find out the ports and applications that are using them (e.g., all bandwidth is consumed by PC #3, which is running uTorrent.exe on port 33003)

    I'm only sharing my connection with (at most) 4 computers, and I'd prefer not having to add hardware or reinstall my system. Thanks in advance!

    Read the article

  • Linux data storage and partitioning

    - by Rajeev
    In the following output of df -h you can see that I have added a new hard drive (/dev/sdb1) and mounted it at /hdd1. My question is: if I start dumping data to /opt, will that data end up on /hdd1 or on /? My goal is to utilise the new drive instead of the old disk (/dev/sda3). How can this be done?

        Filesystem   Size  Used  Avail Use%  Mounted on
        /dev/sda3    442G  312G   12G   86%  /
        tmpfs        1.9G     0  1.9G    0%  /dev/shm
        /dev/sda1    194M   57M  128M   31%  /boot
        /dev/sdb1    1.7T  201M  2.6T    1%  /hdd1
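
    Data written to /opt will land on / (that is, /dev/sda3) unless something is mounted at /opt. A hedged sketch of moving /opt onto the new disk (device and mount point taken from the df output above; stop any services using /opt first):

        # Copy the existing /opt onto the new disk, which is mounted at /hdd1
        mkdir -p /hdd1/opt
        cp -a /opt/. /hdd1/opt/

        # Point /opt at the copy on /dev/sdb1 with a bind mount
        mv /opt /opt.old && mkdir /opt
        mount --bind /hdd1/opt /opt

        # Make it permanent with an /etc/fstab entry:
        #   /hdd1/opt   /opt   none   bind   0 0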

    Read the article

  • Grouping data columns by shared values

    - by Lenna
    I don't know how to properly describe what I need to do, so I will give an example. A colleague has a data set in Excel like so:

        Col A   Col B   Col C
        aaaaa   aaaaa   bbbbb
        bbbbb   ccccc   ccccc
        ccccc           eeeee
        ddddd

    The end result should be something like this:

        Col A   Col B   Col C
        aaaaa   aaaaa
        bbbbb           bbbbb
        ccccc   ccccc   ccccc
        ddddd
                        eeeee

    Or even:

                Col A   Col B   Col C
        aaaaa   Yes     Yes     No
        bbbbb   Yes     No      Yes
        etc.

    (if it helps, the columns are protein extraction methods and the letters are protein IDs - we need to determine which proteins are extracted by which methods) My colleague is doing this by hand, but there is enough data that it would be really helpful to automate it. Is there a formula in Excel to do this automatically?

    Read the article

  • How to reliably mount a shared folder /volume/folder at boot up

    - by Tanmay
    Following is my sample.sh in /usr/local/bin/:

        #!/bin/sh
        mkdir -p /Volumes/folder
        mount -t afp -o rw afp://user:password@server_name/folder_name /Volumes/folder

    Following is my com.apple.sample.plist in /Library/LaunchAgents/:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Label</key>
            <string>com.apple.sample</string>
            <key>ProgramArguments</key>
            <array>
                <string>/usr/local/bin/sample.sh</string>
            </array>
            <key>RunAtLoad</key>
            <true/>
        </dict>
        </plist>

    I am able to run sample.sh independently and it works fine. I have also tried using launchd.conf with:

        mkdir -p /Volumes/folder
        mount -t afp -o rw afp://user:[email protected]/testsuites /Volumes/folder

    Still not working.
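
    A hedged debugging sketch (paths as in the question; the log file paths are placeholders): load the agent by hand and capture the script's output, since launchd agents run with a minimal environment and possibly before the network share is reachable:

        # Load the agent manually and check that launchd accepted it
        launchctl load -w /Library/LaunchAgents/com.apple.sample.plist
        launchctl list | grep com.apple.sample

        # Adding these keys to the plist records whatever the script prints,
        # which usually reveals why the mount fails at login time:
        #   <key>StandardOutPath</key>   <string>/tmp/sample.out</string>
        #   <key>StandardErrorPath</key> <string>/tmp/sample.err</string>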

    Read the article

  • E2K7: Can't add journaling mailbox to storage group

    - by Agent
    We just added a new SG to a standalone E2K7 server, but Exchange pops an error when we try to enable journaling on it. The journal recipient mailbox was created a while back and as far as we can tell there are no issues with it: it's viewable in the GAL and accessible through OWA. The error is:

        Set-MailboxDatabase
        Error: Object "domain.company.com/Journaling/Journal5" could not be found. Please make sure that it was spelled correctly or specify a different object.

    This Journal5 account is in a child domain (like all the other journaling mailboxes we have attached to other mailbox server SGs). We also tried attaching one of the working journaling mailboxes to this SG, but that popped back the same error. The event log is not showing any errors at all. We are running SP1 on this server. I've tried dismounting/remounting the store and bouncing the IS and SA services, but that didn't help. Any suggestions?

    Read the article

  • centos server shared files

    - by Kyle Hudson
    Hi, I want to implement a specialised multi-server hosting environment. I currently have a cloud solution comprising 3 CentOS boxes (2 LAMP web servers, 1 MySQL). What I want to do is implement a 5-server solution with 3 web servers, 1 MySQL box and a fileshare. Basically I want the fileshare to host all the web files for the servers; caching will remain on the individual servers and sessions will be stored in MySQL. So what I am asking is: how do I map the servers to share the same "docroot"? Is it NFS? If so, what's the best way of doing this? Thanks in advance.
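
    If NFS is used, a minimal sketch (hostname, network range and paths are placeholders) is to export the docroot from the fileshare box and mount it at the same path on every web server:

        # On the fileshare server: export the docroot to the web servers
        #   /etc/exports
        #   /srv/www   10.0.0.0/24(rw,sync,no_root_squash)
        yum install -y nfs-utils
        service nfs start
        exportfs -ra

        # On each web server: mount the export where Apache expects it
        yum install -y nfs-utils
        mount -t nfs fileshare:/srv/www /var/www/html

        # /etc/fstab entry so the mount survives reboots:
        #   fileshare:/srv/www   /var/www/html   nfs   defaults,_netdev   0 0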

    Read the article

  • Password History Storage and Variability Comparison

    - by z3ke
    I believe this situation is similar to many others out there, so maybe some of you can shed some light... Supposedly, when making password changes through MS Exchange every 90 days, you cannot use any simple variation of one of your old passwords, up to whatever limit the admins set for the system. My question: if your previous passwords are only stored as hashes, how can the system check for the "just changed one letter" case? Wouldn't it have to have access to the old plain-text passwords in order to make those comparisons? The only other thing I can think of is that, upon original creation of a password, it also stores all other one-character permutations of it, so that they can be banned later?

    Read the article

  • Updating shared files across computers

    - by murgatroid99
    I have a file server running Windows Server 2008 and a couple of laptops running Windows 7 on a network. There are a large number of files that all users will need access to. My plan is to have the files on both the server and the laptops because the users will need to access the files in places with no Internet access. I also want any changes made to the files on any of the laptops to propagate to the server and then propagate to the other laptops whenever they connect to the network. Should I do this with a scheduled batch script with a few xcopy commands or is there a better way to do it?

    Read the article

  • Tomcat directly serve static (css, js) files shared by multiple applications

    - by Josvic Zammit
    I'm using the ExtJS framework, which has a bulk of js and css files that are used by all apps. I intend to share these between a number of web applications (different war files). For this reason I would like to serve the ExtJS js and css directly from the web server, in my case Tomcat6, which can be used to serve static files, as in this helpful link. Therefore I put my files under /var/lib/tomcat6/webapps/ROOT/extjs/. The static files directly under that directory are served correctly, e.g. /extjs/ext.js correctly serves the file at /var/lib/tomcat6/webapps/ROOT/extjs/ext.js. However, files in lower-level directories, for example /extjs/welcome/css/welcome.css, which should serve the file at /var/lib/tomcat6/webapps/ROOT/extjs/welcome/css/welcome.css, return a 404. TL;DR: Tomcat serves static files only in the top-level directory; a 404 is returned for files deeper in the hierarchy. Config file contents: server.xml application's web.xml

    Read the article

  • Installing multiple versions of a shared library

    - by nsfyn55
    I am running Ubuntu 10.04 and I want to use tmux 1.6. tmux has a dependency on libevent 2. My solution was to compile libevent2 and drop it into /usr/local/lib, then compile tmux against this lib and drop it into /usr/local/bin. This works great until... I restart. This is just an assumption on my part, but it seems that other binaries are now linking against the libevent2 library, presumably because it's on the library path. Because there are 60+ packages with libevent1 dependencies, this causes my install to basically lose its mind. Is there an idiomatic way to approach running an application that has a core library dependency on a different version? Should I just statically link the lib?
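
    One way to keep libevent2 private to tmux (a hedged sketch; the install prefixes are assumptions) is to install it under its own prefix and bake that path into the tmux binary with an rpath, so nothing else picks the new library up from the default search path:

        # Build libevent2 into its own prefix instead of /usr/local/lib
        cd libevent-2.0.x
        ./configure --prefix=/opt/libevent2
        make && sudo make install

        # Build tmux against it, embedding the run-time search path (rpath)
        cd ../tmux-1.6
        CFLAGS="-I/opt/libevent2/include" \
        LDFLAGS="-L/opt/libevent2/lib -Wl,-rpath,/opt/libevent2/lib" \
        ./configure
        make && sudo make install

        # Confirm that only tmux resolves the new library
        ldd /usr/local/bin/tmux | grep libevent

    Statically linking libevent (the other option raised in the question) avoids the run-time path entirely, at the cost of a larger binary.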

    Read the article

  • Unix / linux permissions setup for shared hosting with Apache

    - by weiyin
    I'm in the process of setting up a server from a clean CentOS 5 install. What is the best permission structure (users, groups, unix permissions) for running a single instance of Apache for multiple users? Ideally, it should satisfy these requirements:

    - Each user's websites are stored in a subdirectory of their home directory.
    - Users can edit files and permissions.
    - Apache can read the websites of all users.
    - No user can read the website files of other users.

    Bonus question: how to add PHP and/or Perl and/or Ruby to Apache without allowing any users to access any other user's files?
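
    A hedged sketch of one common layout (user, group and path names are placeholders): keep each home directory traverse-only, group-own the web tree with a per-site group that the apache user joins, and leave "other" permissions off so users cannot read each other's files:

        # Per-site group that the web server user is added to
        groupadd site_alice
        usermod -aG site_alice apache

        # Home is traverse-only for everyone else; the site dir is user + group only
        chmod 711 /home/alice
        mkdir -p /home/alice/public_html
        chown -R alice:site_alice /home/alice/public_html
        chmod 750 /home/alice/public_html
        find /home/alice/public_html -type f -exec chmod 640 {} \;

    For the bonus question, the same idea only holds if PHP/Perl/Ruby run as the site owner (e.g. suexec, suPHP or per-user FastCGI) rather than as the shared apache user; otherwise any script can read whatever the apache user can read.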

    Read the article
