Search Results

Search found 22877 results on 916 pages for 'network storage'.


  • Windows file locks allowing multiple users to write to open file over network

    - by JPbuntu
    I have six Windows computers (XP, Vista, 7) that need to access a Samba share (Ubuntu 12.04). I am trying to make it so only one client can open a file at a given time. I thought this was pretty standard behavior for file locks, but I can't get it to work. The way it is right now, a file can be open by two users, and changed and saved by either one of them; the last file saved overwrites whatever changes the other user made. At first I thought this was a Samba configuration problem, but I get this behavior even between two Windows machines. So far I have only tested Windows XP << Windows Vista and Samba << Windows Vista, and both give the same behavior. When I tested the Samba configuration, I had set strict locking = yes and got errors logged like this:

        close_remove_share_mode: Could not get share mode lock for file _prod/part_number_list_COPY.xlsx

    Eventually all of the files are going to be moved onto the Samba share, so that is the configuration I am most concerned about fixing. Any ideas? Thanks in advance.

    EDIT: I tested an Excel file again, and it is now working properly in both cases mentioned above; I am also no longer getting the error above. I don't know what happened; perhaps a restart fixed it? (It also works with strict locking = no.) I still need to find a solution for the CAD/CAM files we use, though. The software is Vector, and it does not seem to use file locks. Is there any software I can use to manage these files so that two people can't open or edit them at the same time? Maybe a Windows application that forces file locks, or a dirt-simple version control system? (The only ones I have seen are too complicated for our needs.)
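
    For experimenting with the locking behavior on the Samba side, a minimal share sketch is below; the share name and path are hypothetical, and whether it helps depends on the applications actually requesting locks (Excel does, many CAD packages do not):

        [prod]
            path = /srv/samba/prod
            # honour share modes and byte-range locks strictly on every access
            strict locking = yes
            # oplocks let a client cache a file locally, which can hide another
            # user's changes; disabling them is a common first test
            oplocks = no
            level2 oplocks = no

    Note that no server-side setting can serialize access for software that never asks for a lock, which is why a check-in/check-out tool may still be needed for the CAD/CAM files.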


  • monitor network bandwidth via ssh

    - by ServerSideX
    I'm running a CentOS 6.4 server with cPanel. WHM (the admin panel) shows about 100 GB of bandwidth this month; however, the server's RTG shows 3.4 TB for the last 30 days, 121 GB in the past 24 hours alone. That doesn't make sense, and I'm trying to trace the cause. It's a shared web hosting server for approximately 300 domains, and I also use the CSF firewall and the ConfigServer exploit scanner. I would appreciate help tracking this down somehow. Day: http://s10.postimg.org/ti1qhj5mx/day.png Week: http://s7.postimg.org/8ho8kds57/week.png
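
    Two tools that can attribute traffic in real time over SSH are sketched below; the interface name is an assumption, and both tools come from the EPEL repository on CentOS 6 rather than the base repositories:

        # per-connection view on the public interface (-P turns on port display)
        iftop -i eth0 -P
        # per-process view, to see which daemon is pushing the traffic
        nethogs eth0

    Watching these during one of the spikes visible in the RTG graphs should narrow the 3.4 TB down to a specific connection or process.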


  • Linux Experts Riddle: Network output of 10 Mbit/s on a 10 Gbit/s NIC

    - by user150324
    I have two CentOS 6 servers and I am trying to transfer files between them. The source server has a 10 Gb/s NIC and the destination server has a 1 Gb/s NIC. Regardless of the command or protocol used, the transfer speed is ~1 megabyte per second; the goal is at least a couple dozen MB per second. I have tried rsync (also with various ciphers), scp, wget, aftp, and nc. Here are some test results with iperf:

        [root@serv ~]# iperf -c XXX.XXX.XXX.XXX -i 1
        ------------------------------------------------------------
        Client connecting to XXX.XXX.XXX.XXX, TCP port 5001
        TCP window size: 64.0 KByte (default)
        ------------------------------------------------------------
        [  3] local XXX.XXX.XXX.XXX port 33180 connected with XXX.XXX.XXX.XXX port 5001
        [ ID] Interval       Transfer     Bandwidth
        [  3]  0.0- 1.0 sec  1.30 MBytes  10.9 Mbits/sec
        [  3]  1.0- 2.0 sec  1.28 MBytes  10.7 Mbits/sec
        [  3]  2.0- 3.0 sec  1.34 MBytes  11.3 Mbits/sec
        [  3]  3.0- 4.0 sec  1.53 MBytes  12.8 Mbits/sec
        [  3]  4.0- 5.0 sec  1.65 MBytes  13.8 Mbits/sec
        [  3]  5.0- 6.0 sec  1.79 MBytes  15.0 Mbits/sec
        [  3]  6.0- 7.0 sec  1.95 MBytes  16.3 Mbits/sec
        [  3]  7.0- 8.0 sec  1.98 MBytes  16.6 Mbits/sec
        [  3]  8.0- 9.0 sec  1.91 MBytes  16.0 Mbits/sec
        [  3]  9.0-10.0 sec  2.05 MBytes  17.2 Mbits/sec
        [  3]  0.0-10.0 sec  1.68 MBytes  14.0 Mbits/sec

    I guess the HD is not the bottleneck here.
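
    Two quick checks worth running on both ends are sketched below; the interface name is an assumption. A duplex mismatch or a link negotiated far below its rated speed would produce numbers in this range, and rerunning iperf with a larger window and parallel streams helps separate a link problem from a TCP tuning problem:

        # confirm what each NIC actually negotiated
        ethtool eth0 | grep -E 'Speed|Duplex'
        # retest with a bigger TCP window and four parallel streams
        iperf -c XXX.XXX.XXX.XXX -w 512K -P 4 -i 1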


  • Network Configuration

    - by Dario
    Hello, this is my situation:

        Router A:  IP 192.168.1.1, network 192.168.1.0/24, connected to the internet.
        Server:
            eth0: inet addr 10.1.1.125,    mask 255.255.255.0 (connected to router B)
            ra0:  inet addr 192.168.1.125, mask 255.255.255.0 (connected to router A)
        Router B:  IP 10.1.1.254, network 10.1.1.0/24, connected to the server's eth0.
        Computer:  connected to router B via WiFi.

    I configured a static route on router B that uses 192.168.1.125 as the default gateway, and I can ping that IP from the computer. The problem is: how can I connect to the internet? In other words, traffic coming in on the server's eth0 should go out through ra0. Any suggestions? Thank you.
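
    A minimal sketch of what the server would need in order to route between the two segments follows; it assumes a Linux server with iptables available, and masquerading is used so that router A does not need a return route for the 10.1.1.0/24 network:

        # let the kernel forward packets between eth0 and ra0
        sysctl -w net.ipv4.ip_forward=1
        # rewrite the source address of traffic leaving towards router A,
        # so replies come back to the server instead of being dropped
        iptables -t nat -A POSTROUTING -o ra0 -j MASQUERADE
        # the server itself should default-route via router A
        route add default gw 192.168.1.1 ra0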


  • Bridging Network Devices with Multiple IPs

    - by Andy
    I have a small server with a single NIC on which I am trying to get a bridge working so that I can run KVM. The NIC has a couple of IPs statically assigned to it:

        eth0   = 192.168.1.1
        eth0:1 = 192.168.1.2
        eth0:2 = 192.168.1.3
        eth0:3 -> assign the bridge to this

    I am attempting to set up the bridge using the following commands:

        sudo brctl addbr br0
        sudo brctl addif br0 eth0:3
        sudo ifconfig br0 192.168.1.120 netmask 255.255.255.0 up
        sudo route add -net 192.168.1.0 netmask 255.255.255.0 br0
        sudo route add default gw 192.168.1.1 br0
        sudo tunctl -b -u root -t tap0 > /dev/null
        sudo ifconfig tap0 up
        sudo brctl addif br0 tap0

    However, the second command (sudo brctl addif br0 eth0:3) puts the ENTIRE eth0 device into promiscuous mode. This knocks the server offline, and it becomes inaccessible by anything other than the local console. Is there a way to bridge JUST eth0:3 to br0 without putting the entire device into promiscuous mode?
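
    For context: eth0:3 is only an address alias, not a separate device, so the kernel resolves it to eth0 and brctl enslaves the whole NIC; bridging "just the alias" isn't possible. The usual arrangement for single-NIC KVM hosts is sketched below (run it from the local console, since connectivity drops while eth0 is being enslaved; the gateway address is an assumption):

        # bridge the physical NIC and move every address onto the bridge
        sudo brctl addbr br0
        sudo brctl addif br0 eth0
        sudo ifconfig eth0 0.0.0.0
        sudo ifconfig br0   192.168.1.1 netmask 255.255.255.0 up
        sudo ifconfig br0:1 192.168.1.2 netmask 255.255.255.0 up
        sudo ifconfig br0:2 192.168.1.3 netmask 255.255.255.0 up
        sudo route add default gw <gateway-ip> br0

    eth0 still sits in promiscuous mode as a bridge port, but the host stays reachable because its addresses now live on br0.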


  • Map the version history in WSS as a network drive

    - by MAD9
    Hi, I believe I once saw that it was possible to map versions of documents in WSS as a share, just like the library itself. E.g., when the library path is http://myShare/SomeDocuments, the versions were available at something like http://myShare/SomeDocuments/versions/1. I can't find any information about it. Do I remember that correctly, is it possible at all, and how do I do it?
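
    For what it's worth, the library itself can be mapped over WebDAV with a plain net use (a sketch; the drive letter is arbitrary, and the WebClient service must be running on the client):

        rem map the document library as a drive
        net use W: http://myShare/SomeDocuments

    Whether old versions show up under such a mapping is a different matter: they are reportedly served from URLs under /_vti_history/ rather than under the library path, so treat any versions/1-style path as something to verify rather than a documented layout.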


  • Server 2008 Print Server vs Network printer

    - by cpgascho
    I have an SBS 2008 Server / Windows 7 environment with approximately 25 users and 6 networked printers. What is the recommended way to set this up, and what are the advantages and disadvantages of sharing all the printers through the Server 2008 print server versus having workstations connect directly to the printers? Off the top of my head, the Server 2008 print server is a single point of failure, but it should make it easier to deploy and manage the printers.


  • synchronous network audio

    - by intuited
    I'd like to have an audio stream shared among the computers on a LAN. Although there are various systems that do this (Shoutcast/Icecast, PulseAudio, etc.), I'm not aware of any that provide synchronization. I'd like different computers in the house to play the same audio, with the same sample playing at the same time. Is there a system which can do this?


  • Cheapest way to connect 20-24 SATA II HDDs in a budget storage server?

    - by Joe Hopfgartner
    I need to assemble a high-density storage server for as cheap as possible. It's been a while for me, and the last systems I integrated didn't even have SATA yet...

    During my research I of course stumbled across the Nexsan SATABeast and the Backblaze storage pods, as well as some ridiculously overpriced HP ProLiant and Dell storage solutions. In the end I settled on Norco cases as the way to go. My eye is set on the RPC-4020, a 4U 19" rackmount case with 20 hot-swap 3.5" SATA/SAS drive trays (backplanes included) plus room for two 2.5" OS drives and a slimline CD-ROM. The backplanes connect with a single SATA port per drive, so there are 20 internal SATA ports to be connected. They also have redundant power ports, which I think is quite nice. The cheapest price I have found is $290 + $40 shipping; in Europe the cheapest is unfortunately 370 € (500 $) + 40 € shipping... A nice alternative would be the RPC-4224, which has SFF-8087 mini-SAS connectors that bundle 4 SATA trays each, but it doesn't seem to be available anywhere in Europe (where I am).

    So here comes my problem: which mainboard/controller should I choose to connect the drives as cheaply as possible while still getting decent data rates? The server is intended as a storage server with 1 Gbps connectivity, and the data transfer will be distributed very evenly across all drives. I don't require any RAID functionality; that is all done at the application level, I just need JBOD. So, for example, if I go for the RPC-4020 model, I need to connect 20 storage + 1 OS + 1 CD-ROM SATA ports. I searched a bit and stumbled across this very low-priced controller: http://www.intel.com/products/server/raid-controllers/SASWT4I/SASWT4I-overview.htm They sell it for 115 € here, and the specs say it can control up to 122 hard drives and has 4 mini-SAS connectors. So I would use four mini-SAS 36-pin to 4x SATA 7-pin fan-out cables to connect 4 SATA drives to each port, choose a mainboard that has 6 SATA ports on board (for example this one), and hurray, I can connect my 22 SATA devices for as little as about 220 € (CPU, RAM, PSU and case not counted).

    Question 1: Would that work? And if not, why not?

    Question 2: If I go for the RPC-4220 or RPC-4224 model, I have internal mini-SAS connectors. Am I right in assuming that the backplane then acts as a "SAS expander"? And can I just plug these SAS connectors into any SAS port I can find on my controller or mainboard, or are there certain requirements? I know that SATA port multipliers only work with controllers that are ready for them, but isn't this kind of expansion already part of the SAS standard? I am sorry that this is a very broad question, but I really spent the last week reading up and it is still not clear to me, especially all the controller hardware specifications!

    Question 3: A lot of hardware specs list "internal channels" and "internal connectors". The connectors are the physical number of places where I can plug a cable in; I got that. But are the "internal channels" always the maximum number of physical drives that can be attached in the end, or can I raise that number further with expanders/fan-outs?

    Question 4 (and last): What do you think of the setup so far? Do you know any good alternatives? Maybe I am going completely the wrong way and some DAS would be way better? Are there any comparable chassis available in Europe? Please feel free to say whatever you think is relevant to the subject!


  • Connecting a network printer via a Thecus N2100 - works in Vista, not in Windows 7

    - by Jon Skeet
    I have a Lexmark E250d printer attached to a Thecus N2100 NAS. On Windows Vista I've managed to configure this using an "Internet" printer port with the URL http://thecus:631/printers/usb-printer. I can add a printer in a similar way in Windows 7, but it never manages to print the test page. If I go to "Configure Port" in Vista, it just has "Security Options"; on Windows 7 it asks about Raw mode vs LPR mode, etc. On Vista I'm using an E250d-specific driver from Lexmark; on Windows 7 there's a Microsoft E250d driver, or a universal PCL XL driver from Lexmark... I wouldn't expect this difference to be related to the problem, but I thought I'd mention it anyway. (Lexmark doesn't have a Windows 7 E250d-specific driver as far as I can see.) Any suggestions? I was thinking of upgrading my main laptop from Vista to Windows 7, but I'd really like to get this sorted first...

    EDIT: If I connect to http://thecus:631/printers/usb-printer via Chrome while capturing with Wireshark, I get this response:

        HTTP/1.1 200 OK
        Date: Wed, 06 Jan 2010 16:47:23 GMT
        Connection: Keep-Alive
        Keep-Alive: timeout=60
        Content-Language: C
        Transfer-Encoding: chunked
        Content-Type: text/html;charset=iso-8859-1

        0

    No idea what that's meant to be doing...

    EDIT: On further consultation, this would appear to be the Internet Printing Protocol, which is layered on HTTP. Printing a test page successfully from Vista POSTs to that URL. Will attempt the same on Windows 7...
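
    One hedged experiment for the Windows 7 side, while chasing the IPP route: since the NAS exposes a CUPS-style queue (port 631, /printers/usb-printer), an LPR port pointed at that queue sometimes works where the Internet port does not, provided the NAS also speaks LPD. The port name below is arbitrary; the queue name is taken from the URL above:

        rem create an LPR port on the Windows 7 machine pointing at the NAS queue
        cscript %WINDIR%\System32\Printing_Admin_Scripts\en-US\prnport.vbs ^
            -a -r LPR_thecus -h thecus -o lpr -q usb-printer

    The printer can then be added against the LPR_thecus port with the Microsoft E250d driver.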


  • Network printer installs driver every time I connect

    - by Patrick Schneider
    I'm running a Citrix XenApp 6.5 farm with a strange problem. We use the Ricoh Universal Print Driver (UPD) 3.10, and every time a user logs on to a Citrix session and gets their printers connected (via logon script), Windows shows the "Finishing installation..." dialog for every printer. The drivers are installed on all Citrix servers, yet the dialog appears every time a user connects a printer. Are there any settings to disable this behaviour?


  • How to test LAN speed with a PS3

    - by Damon
    I am having a heck of a time getting videos to stream to my PS3, and I want to know if there is some way to test the speed between my PC and the PS3 to see if that is the issue. For some reason, when things get really slow between the PS3 and my computer (it's a wired connection through a switch), the wireless internet on a completely different computer gets slow too. I don't know what's causing what, but I don't see why there should be any correlation between internet speed on a wirelessly connected laptop and the LAN speed between a wired PS3 and computer.
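
    The PS3 itself can't run a measurement tool, but the wired path through the switch can be tested between two ordinary machines, which at least rules the switch and cabling in or out. A sketch with iperf (available for both Windows and Linux; the address is a placeholder):

        # on the media PC
        iperf -s
        # on a second machine plugged into the same switch
        iperf -c <media-pc-ip> -i 1

    A healthy 100 Mb/s or 1 Gb/s result here would point at the PS3 side or the streaming software rather than the LAN.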


  • CSS, JS and images are not loading while sharing WAMP over local network

    - by Hardik Thaker
    I have shared my WAMP server over my personal LAN (server IP: 192.168.0.100). When I access the WAMP server locally it works perfectly, but when I open a website hosted on the server from a client machine (192.168.0.103), it doesn't load the CSS, image and JS files. I looked at the console and found that the browser is trying, and failing, to load: localhost//mysite/css/style.css. When I request the same resource directly from the browser as 192.168.0.100/mysite/css/style.css, the CSS file is shown! I am confused about how to solve this problem so that the browser loads the CSS correctly. Please help me! Thanks in advance!


  • Monit network availability checking

    - by viraptor
    Hi, I'd like to start a service with monit, but only when the correct IP is bound to the host. Can this be done somehow with the normal config? For example, I want to start a process xxx with pidfile xxx.pid, but only if the host currently has 10.0.0.1 bound to some interface.
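
    One low-tech sketch that stays within ordinary monit syntax is to guard the start command itself, so the service simply fails to start (and monit alerts) while the address is absent. The paths are the question's placeholders, and the guard is a shell-level assumption, not a monit feature:

        check process xxx with pidfile /var/run/xxx.pid
            # the start command refuses to run unless 10.0.0.1 is bound somewhere
            start program = "/bin/sh -c 'ip addr show | grep -q 10.0.0.1 && exec /etc/init.d/xxx start'"
            stop program  = "/etc/init.d/xxx stop"

    Newer monit releases (5.3+) also offer check program, which could model the address check as a dependency, but the guard above works on older builds too.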


  • Win2k3 server's network problem

    - by Sam
    I'm running four Win2k3 64-bit servers on the same subnet. I've been running them for more than a year without a problem. Recently, I keep losing the connection to one of the servers; let's say it's 'server A' that has the problem. Losing the connection means that I can't reach server A from the other servers. I've checked whether server A has any internet connection problems and looked for abnormal entries in the event logs (eventvwr), but haven't found anything. The problem usually resolves if I restart the server, but as time goes by it keeps happening again and again. I can't afford to restart the server every time, and I really want to find the cause. Can anyone help me out? Let me know if you need any more information.


  • RHCS: GFS2 in A/A cluster with common storage. Configuring GFS with rgmanager

    - by Pavel A
    I'm configuring a two-node A/A cluster with common storage attached via iSCSI, which uses GFS2 on top of clustered LVM. So far I have prepared a simple configuration, but am not sure of the right way to configure the gfs resource. Here is the rm section of /etc/cluster/cluster.conf:

        <rm>
            <failoverdomains>
                <failoverdomain name="node1" nofailback="0" ordered="0" restricted="1">
                    <failoverdomainnode name="rhc-n1"/>
                </failoverdomain>
                <failoverdomain name="node2" nofailback="0" ordered="0" restricted="1">
                    <failoverdomainnode name="rhc-n2"/>
                </failoverdomain>
            </failoverdomains>
            <resources>
                <script file="/etc/init.d/clvm" name="clvmd"/>
                <clusterfs name="gfs" fstype="gfs2" mountpoint="/mnt/gfs" device="/dev/vg-cs/lv-gfs"/>
            </resources>
            <service name="shared-storage-inst1" autostart="0" domain="node1" exclusive="0" recovery="restart">
                <script ref="clvmd">
                    <clusterfs ref="gfs"/>
                </script>
            </service>
            <service name="shared-storage-inst2" autostart="0" domain="node2" exclusive="0" recovery="restart">
                <script ref="clvmd">
                    <clusterfs ref="gfs"/>
                </script>
            </service>
        </rm>

    This is what I mean: when using the clusterfs resource agent to handle a GFS partition, it is not unmounted by default (unless the force_unmount option is given). This way, when I issue

        clusvcadm -s shared-storage-inst1

    clvm is stopped, but GFS is not unmounted, so the node can no longer alter the LVM structure on the shared storage, but can still access the data. Even though a node can do this quite safely (dlm is still running), it seems rather inappropriate to me, since clustat reports that the service on that node is stopped. Moreover, if I later try to stop cman on that node, it finds the dlm locking produced by GFS and fails to stop.

    I could have simply added force_unmount="1", but I would like to know the reason behind the default behavior. Why is it not unmounted? Most of the examples out there silently use force_unmount="0", some don't set it at all, but none of them give any clue as to how the decision was made.

    Apart from that, I have found sample configurations where people manage GFS partitions with the gfs2 init script (https://alteeve.ca/w/2-Node_Red_Hat_KVM_Cluster_Tutorial#Defining_The_Resources), or even simply enable services such as clvm and gfs2 to start automatically at boot (http://pbraun.nethence.com/doc/filesystems/gfs2.html), like:

        chkconfig gfs2 on

    If I understand the latter approach correctly, such a cluster only controls whether nodes are still alive and can fence errant ones, but it has no control over the status of its resources. I have some experience with Pacemaker, and I'm used to all resources being controlled by the cluster, so that action can be taken not only when there are connectivity issues, but whenever any resource misbehaves. So, which is the right way for me to go:

    1. Leave the GFS partition mounted (any reasons to do so?)
    2. Set force_unmount="1". Won't this break anything? Why isn't this the default?
    3. Use a script resource (<script file="/etc/init.d/gfs2" name="gfs"/>) to manage the GFS partition.
    4. Start it at boot and don't include it in cluster.conf (any reasons to do so?)

    This may be the sort of question that cannot be answered unambiguously, so it would also be of much value to me if you shared your experience or thoughts on the issue. What does /etc/cluster/cluster.conf look like, for example, when gfs is configured with Conga or ccs? (They are not available to me, since for now I have to use Ubuntu for the cluster.) Thank you very much!
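
    For reference, the force_unmount variant discussed above is a one-attribute change to the clusterfs resource definition; whether it is safe for a given workload is exactly what the question is asking, so this is the mechanics only, not a recommendation:

        <clusterfs name="gfs" fstype="gfs2" mountpoint="/mnt/gfs"
                   device="/dev/vg-cs/lv-gfs" force_unmount="1"/>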


  • PostgreSQL over the network

    - by sev
    I have a PostgreSQL server running on 192.168.0.102:5432. postgresql.conf has this line:

        listen_addresses = '*'

    and pg_hba.conf has this one:

        host all all 127.0.0.1/32 trust

    I have a Rails app with this config/database.yml:

        development:
          adapter: postgresql
          host: 192.168.0.102
          port: 5432
          encoding: unicode
          database: test
          pool: 5
          username: test
          password:

    But when I run rake db:migrate (I run this from 192.168.0.100) I get:

        FATAL: no pg_hba.conf entry for host "192.168.0.100", user "test", database "postgres", SSL on
        FATAL: no pg_hba.conf entry for host "192.168.0.100", user "test", database "postgres", SSL off
        ...

    Can anyone help with this?
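
    The existing pg_hba.conf entry only matches loopback connections, which is why a client on 192.168.0.100 is rejected. A minimal sketch of an additional entry follows; the /24 scope and the md5 method are assumptions to adjust (trust would also work, but permits passwordless logins from the whole range):

        # TYPE  DATABASE  USER  ADDRESS          METHOD
        host    all       all   192.168.0.0/24   md5

        # pick up the change without a full restart
        service postgresql reload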


  • Network access lags for Win7 when server network utilization is high

    - by Jeff Miles
    We have a Dell PE2950 file server running Windows 2008, hosting a DFS namespace of ~1.2 TB. This server has two Broadcom 1 Gbps NICs teamed together. When there is high traffic going to the server across the network (greater than 200 Mbps), any Windows 7 client accessing a DFS share at the time experiences severe performance problems. For example:

        - Computer A has an AutoCAD drawing open directly from the DFS share. Performance is normal, not causing any issues.
        - Computer B begins a file transfer, putting an 11 GB file onto a different DFS namespace on the same server.
        - Computer A immediately notices lag while using AutoCAD. The cursor momentarily freezes within AutoCAD every 10 seconds or so, and any browsing of the DFS share is extremely slow.
        - Computer B completes the file transfer, and performance returns to normal for Computer A.

    This only affects Windows 7 clients, on a variety of hardware (desktops and laptops). All of our Windows XP clients see no performance impact during the file transfer. Things I have tried with no change:

        - Had Computer A work from an entirely different RAID array from the file transfer destination
        - Updated NIC drivers on clients and server
        - Enabled TCP offload and receive-side scaling on the server NIC (previously disabled when the issue began)
        - Disabled antivirus during the file transfer

    I am currently having a user test applications other than AutoCAD while the file transfer occurs, and will update the question with the result. Does anyone have any recommendations for resolution or additional troubleshooting steps?
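
    One Windows 7-specific knob that makes for a quick A/B test, offered as a diagnostic rather than a fix: the Windows 7 SMB2 client is sensitive to TCP receive-window autotuning in ways the XP SMB1 client is not, and it can be constrained temporarily from an elevated prompt on an affected client:

        netsh interface tcp set global autotuninglevel=highlyrestricted
        rem revert to the default after testing
        netsh interface tcp set global autotuninglevel=normal

    If the AutoCAD lag disappears with autotuning restricted, that points at TCP flow-control behavior on the clients rather than at the server's disks or NIC team.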

