Search Results

Search found 24073 results on 963 pages for 'mount point'.


  • High speed network configuration

    - by Peter M
    Sorry if this seems a stupid question; I'm not sure how to phrase what I want to know when searching Google. I will have 2 or 3 devices pumping out data on a 100Base-T port each. The combined data rate of all devices is about 15MB/s, which exceeds the practical 100Base-T channel capacity (roughly 12MB/s) but is well within the reach of a 1000Base-T connection. Each device sends a burst of data in the form of an FTP transfer to a common, single host computer, sequentially:
    - Device A establishes an FTP connection and transfers data
    - Device B establishes an FTP connection and transfers data
    - Device C establishes an FTP connection and transfers data
    The A&B, B&C and C&A transfers may overlap to some extent in time. There will be minimal traffic going back from the computer to each device (in general, whatever is needed to support the FTP transfers), and the network will be dedicated to transferring data between these devices and the host computer. Is it possible to use a switch to combine the multiple incoming 100Base-T streams into a single outgoing 1000Base-T stream? If so, what features should I be looking for in a switch? Or would it be better to have 3 dedicated point-to-point 100Base-T connections, one between each device and the host computer (thus having at least 3 physical Ethernet interfaces on that computer)? Note that I can't change the interface on the devices, but I am free to choose the network and host computer configuration. Thanks for your help. Peter

    Read the article

  • Accessing a shared folder in Windows Server 2008 R2.

    - by Triztian
    Hello all. My involvement with computers has grown, and I've found myself needing to access a shared folder on a server. I've read some documentation and managed to set the folder up as a share: I created a local group and, for now, just one local user that has access to the share. The folder is in the public user folder and its permissions should be (and I believe they are) read/write. The problem is that I can't connect from a remote machine; I don't know how the share should be accessed. The server has a public IP and we also use it to host our website; I don't know whether that affects anything. The folder will be used as the keeper for the QuickBooks company files and has the database server manager installed. The server has a domain name, "http://www.example.com", that redirects to our website; I am unsure if the share could be accessed that way. The share also has a location displayed when I right-click it and choose Properties. Here's what I've tried:
    - Setting up a VPN connection (Windows Vista and 7): I got to the point where I was asked for credentials and entered the user I created (which is not an admin), but got "connection failed, error 800". I suppose this is because I entered the server's workgroup in the domain field.
    - Right-click, add a network location (Windows 7): I went through the wizard until I reached the point of entering the location and tried many things: the name in the share's properties (\\SOMETHING\Share), http://www.example.com, and the IP address.
    I'm quite unfamiliar with this, so here are my guesses: since the group and user are local, they do not have access to the folder; or the firewall on the server is blocking my connection. Anyway, any help and guidance is truly appreciated.
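
    For what it's worth, a minimal sketch of mapping the share directly from a remote Windows machine, assuming TCP port 445 is reachable on the server's public IP (many ISPs and firewalls block SMB across the internet). The IP, share name and account below are placeholders:

      net use Z: \\203.0.113.10\Share /user:SERVERNAME\shareuser
      rem disconnect later with: net use Z: /delete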

    Read the article

  • Mac updated just now, postgres now broken

    - by user52224
    I run PostgreSQL 9.1 / Ruby 1.9.2 / Rails 3.1.0 on a MacBook Air for local development. It's all been running smoothly for months (though this is the first time I've done development on a Mac). It's a MacBook Air from last year, and today I got the Mac OS X software update message, as I have a few times before; my system downloaded approx 450MB of updates and restarted. It now says it's on OS X 10.7.3. The point is, postgres has stopped working. When I start my thin server (mirroring Heroku cedar) as normal and then browse to my Rails app, I get:
    PG::Error could not connect to server: Permission denied Is the server running locally and accepting connections on Unix domain socket "/var/pgsql_socket/.s.PGSQL.5432"?
    What happened? After browsing around a few questions I'm still confused, but here's some extra info:
    - Running psql from the command line gives the same error
    - I can run pgAdmin 3, connect via it and run SQL with no problems
    - Running "which psql" shows /usr/bin/psql
    - I created a PostgreSQL user back when I got the Mac (it's always been on Lion). I've no idea why; almost certainly I was following a tutorial which I neglected to store in my notes. The point is, I am aware there is a _postgres user as well.
    I know it's rubbish, but apart from a note on passwords I don't have any extra info on how I configured postgres - though the obvious implication is that I did not use the _postgres user. Anyone have suggestions or information on what might have changed, or what I can try to debug and fix? Thanks.
    Edit: playing around based on this question and answer: http://stackoverflow.com/questions/7975414/check-status-of-postgresql-server-mac-os-x - see this string of commands:
    $ sudo su postgreSQL
    bash-3.2$ /Library/PostgreSQL/9.1/bin/pg_ctl start -D /Library/PostgreSQL/9.1/data
    pg_ctl: another server might be running; trying to start server anyway
    server starting
    bash-3.2$ 2012-04-08 19:03:39 GMT FATAL: lock file "postmaster.pid" already exists
    2012-04-08 19:03:39 GMT HINT: Is another postmaster (PID 68) running in data directory "/Library/PostgreSQL/9.1/data"?
    bash-3.2$ exit
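
    One common cause of exactly this error after a 10.7.x update (hedged - it can't be verified from the question alone) is that the update restores Apple's own /usr/bin/psql, which looks for the socket in /var/pgsql_socket, while the PostgreSQL 9.1 installer's server puts its socket elsewhere. A small diagnostic sketch; the paths and user names are illustrative:

      # which server process is actually running, and as which user?
      ps aux | grep postgres
      # bypass the Unix socket entirely and connect over TCP
      psql -h localhost -U postgres
      # or point psql at an explicit socket directory (example path)
      psql -h /tmp -U postgres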

    Read the article

  • GRUB2 UEFI booting from LVM on RAID (with XEN)

    - by pavian
    I'm experimenting with booting the root fs from an LVM volume inside a RAID array (mdraid superblock 1.x) via UEFI with GRUB2. I'm also using the Xen hypervisor. From the GRUB command line I can see my LVM volume (ls command), but I get a kernel panic due to "unable to mount root fs". I saw a note in this article saying it's probably impossible to boot the root fs from RAID via UEFI, but I don't understand why not. Is it possible to boot Linux with this configuration without an initramfs (which I don't want to use)?
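
    For reference, GRUB2 itself can read LVM on mdraid once the right modules are loaded - a sketch of the console commands, with the volume names as placeholders (depending on the GRUB version, ls shows them as (vg-root) or (lvm/vg-root)). The usual catch is on the kernel side: assembling 1.x-superblock RAID and activating LVM normally needs userspace tools, i.e. an initramfs, which would explain this exact "unable to mount root fs" panic:

      insmod part_gpt
      insmod mdraid1x
      insmod lvm
      ls                              # should now list the LVM volumes
      set root=(lvm/vg-root)
      linux /boot/vmlinuz root=/dev/mapper/vg-root ro
      initrd /boot/initrd.img
      boot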

    Read the article

  • How to create a filesystem mountable by Windows in Linux?

    - by wcoenen
    I have attached an external USB disk to my Debian GNU/Linux system. The disk showed up as device /dev/sdc, and I prepared it like this:
    - created a single partition with fdisk /dev/sdc (and some more commands in the interactive session that follows)
    - formatted the partition with mkfs.msdos /dev/sdc1
    If I then attach the USB disk to a Windows XP or Vista system, no new drive becomes available. The disk and its partition show up fine in the disk management tool under "Computer Management", but apparently the file system in the partition is not recognized. How do I create a FAT32 file system which can actually be used in Windows?
    Edit: I've given up on this and went with an NTFS file system created by Windows. In Debian lenny this can be mounted read-write, but apparently it requires you to install the "ntfs-3g" package and explicitly pass the -t ntfs-3g option to the mount command.
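
    A sketch of the FAT32 variant of the same procedure, under two assumptions: the partition type should be set to W95 FAT32 (type 'b' in fdisk), and plain mkfs.msdos without flags may pick FAT12/16 depending on the partition size rather than FAT32:

      # inside fdisk: t -> b ("W95 FAT32"), then w to write the table
      fdisk /dev/sdc
      # force FAT32 explicitly instead of letting the tool choose
      mkfs.vfat -F 32 /dev/sdc1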

    Read the article

  • Compressed disk image on Linux

    - by Aaron Digulla
    I just got my new computer with a much bigger hard disk. I think I copied all important files over, but just to be sure I'd like to keep a disk image of my old disk. To save space I'd like to compress it, but I didn't find an option to mount a compressed image. My goals:
    - The result must be easy to access
    - No need to decompress the whole thing before I can access anything
    - Files should be quick to locate - no TAR/CPIO archive
    - The necessary space should be less than just copying the files over
    So ideally, I'm looking for a read-only, compressed file system which I can create in a file and which grows automatically.
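
    SquashFS matches most of these goals - a compressed, read-only file system in a single file with random access to individual files - with the caveat that the image is built in one pass rather than growing on demand. A minimal sketch, assuming the old disk is mounted at /mnt/olddisk:

      # build a compressed, read-only image of the old disk's contents
      mksquashfs /mnt/olddisk /backup/old-disk.sqsh
      # later: mount it via a loop device and browse like any filesystem
      mount -o loop -t squashfs /backup/old-disk.sqsh /mnt/old-image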

    Read the article

  • Completely replacing (upgrading) a RAID 5 array of disks on an ESXi server

    - by jshin47
    I have a development server that runs several VMs on ESXi 5. It has an array of disks in a RAID 5 configuration, all currently the same size. I would like to expand storage on this box greatly, but I am not sure of the smartest way to go about it. My current plan is to:
    - Turn off all VMs
    - Copy the VM folders from the server to another location
    - Verify that I can mount all the VMs in the new location (i.e. that the copy went OK)
    - Replace all the disks with new, bigger ones
    - Reinstall ESXi 5
    - Copy the VMs back over
    This seems like it might take a while to accomplish and is not terribly slick, especially since I will have to reconfigure ESXi 5. Is there a smarter alternative?

    Read the article

  • How to create a MySQL database that can contain any character, including different languages

    - by Jakke
    I'm trying to create a database that has to contain articles in different languages. I'm using MariaDB as my server and I know bits of SQL. My knowledge doesn't really cover details like the differences between engines (MyISAM, InnoDB, etc.) or character sets (utf8/16/32, latin5/7, etc.). I do know that the character set matters; I guess what I'm looking for is an all-encompassing character set and an engine that best deals with this type of content. Also, is there an advantage in storing articles across multiple data rows (the equivalent of different pages) to make things a little faster, or would you store a whole article in a single data row? Or does that depend on the size of the articles? Sorry for my noobish question; I know the information is all out on the internet, but it would take me quite a long time to research and get a grip on everything. It would be cool if someone with experience could give me a little head start and point me in the right direction. This is for an intranet site; consider the content to be somewhat like a blog (and no, I don't want WordPress or something similar at this point). Not sure if it matters, but I tend to create and manipulate my tables with phpMyAdmin; I use Apache as the web server and it all runs on Linux.
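
    As a hedged starting point: in MariaDB (5.5 and later) the utf8mb4 character set covers the full Unicode range - the older utf8 is limited to 3-byte characters and cannot store things like emoji or some CJK code points - and InnoDB is the usual engine choice for this kind of content. A sketch, with the database and table names as placeholders:

      mysql -u root -p -e "CREATE DATABASE articles
          CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;"
      mysql -u root -p -e "CREATE TABLE articles.posts (
          id INT AUTO_INCREMENT PRIMARY KEY,
          lang CHAR(5),
          body MEDIUMTEXT
      ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;"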

    Read the article

  • How do I unmount a charging device in Linux?

    - by PeanutsMonkey
    I have a mobile device attached via USB to a Linux box and wish to unmount it. I ran the command fdisk -l, however it does not list a mount point. I then ran the command lsusb, which yielded the screenshot below. I then searched the /dev/disk/by-id directory and found the following file, which is a symbolic link to what appears to be /dev/sdc. Questions: why does the device not appear when I run fdisk -l? And how do I unmount it properly, without simply yanking the USB cord from the port?
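
    A sketch of the usual safe-detach sequence, assuming the device really is exposed as the block device /dev/sdc. One hedged explanation for fdisk -l showing nothing: many phones in charge mode expose no block storage at all (they speak MTP/PTP instead), in which case there is no filesystem to unmount:

      # only needed if a filesystem from the device is actually mounted
      umount /dev/sdc1 2>/dev/null
      # ask the kernel to stop/eject the device before unplugging
      eject /dev/sdc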

    Read the article

  • Putting servers inside a refrigerator? [closed]

    - by Muhammad Jamal Shaikh
    It could be a silly question, but I decided to go for it. I shall be buying 3 servers in the next few weeks to set up a small web farm at my home. I am told by different people who work in server rooms that I should keep my servers in an air-conditioned room, which is really expensive, because the temperature here in South Asia is between 10 and 50 degrees C. Here comes the funny part: I have an extra fridge in my home, so why shouldn't I put the servers inside that fridge? Benefits:
    - I don't have to buy the air conditioner
    - I don't have to buy the rack mount for the servers
    - The electricity consumed by the fridge is much less than by an AC
    Give me your suggestions!

    Read the article

  • How can I enforce directory space limits in an OpenVZ system?

    - by George
    The title says it all. I have some programs on a server (CentOS 4, OpenVZ) that use a directory as a temp directory but pay no attention to the size it grows to. I want to enforce a limit, like "this folder cannot exceed 300MB". I would use a quota, but OpenVZ does not support loop devices for mounting a file as a filesystem. Any other solutions, apart from scripting a periodic delete of files in the directory? Editing the application's code to implement such functionality is not entirely out of the question, if it can be done easily and no other way exists. It's written in C++, but I don't know how to implement it.
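
    One workaround to sketch, with loud caveats: a size-limited tmpfs mounted over the temp directory enforces a hard cap without any loop device, but it is RAM/swap-backed, its contents vanish on reboot, and whether an OpenVZ container permits tmpfs mounts depends on how the container is configured:

      # cap the directory at 300MB; writes beyond that fail with ENOSPC
      mount -t tmpfs -o size=300m tmpfs /path/to/tempdir
      # or, to persist across reboots, add to /etc/fstab:
      # tmpfs  /path/to/tempdir  tmpfs  size=300m  0  0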

    Read the article

  • Am I able to forward traffic from an external subdomain to a specific local host?

    - by George Bowman
    I apologise in advance if the question doesn't make sense; please let me know if so. I've got a small LAN (~10 virtual servers) using Windows Server 2008 as a DNS server, behind a Smoothwall Express 3.0 firewall with ports forwarded for specific services. I have a domain (123-reg) with its nameservers set to those of afraid.org (dynamic DNS) and subdomains pointed at my (dynamic) IP address, e.g. subdomain1.example.com - 123.456.789.101. I think that adequately explains my setup. My question is: am I able to have subdomains, e.g. subdomain1.example.com, each point to a specific local host? Like so:
    subdomain1.example.com:80 -> firewall (external facing) -> server1.example.com:80
    subdomain2.example.com:80 -> firewall (external facing) -> server2.example.com:80
    I don't actually necessarily want to use port 80 (otherwise I would just use VirtualHosts on Apache); it is just an example port. Currently I can use either subdomain1.example.com or subdomain2.example.com and they will both point to server1.example.com:80. I do not have to stay with Windows Server 2008 for DNS; I am more than happy to move over to BIND if need be - it was just easier to use Windows Server 2008's DNS. I do not know if this is even possible; I have a feeling it isn't, as I've only got one external IP address, but any information is useful!

    Read the article

  • Is it secure to store the cert/key on a private AMI?

    - by Phillip Oldham
    Are there any major security implications to bundling a private AMI which contains the private key/certificate and environment variables? For resiliency I'm creating an EC2 image which should be able to boot and configure itself without any intervention. After boot it will attempt to:
    - Attach and mount specific EBS volume(s)
    - Associate a specific Elastic IP
    - Start issuing backups of the EBS volume(s) to S3
    However, to do this it will need the private key/pem files, and certain environment variables will need to be available on start-up. Since this is a private AMI, I'm wondering if it is "safe" to store these variables/files directly in the image, so that I don't need to specify any user-data information and can therefore start a new instance remotely (from my iPhone, if needed) should the instance be terminated for any reason.

    Read the article

  • In Stud, which Private RSA Key should be concatenated in the x509 SSL certificate pem file to avoid "self-signed" browser warning?

    - by Aaron
    I'm trying to implement Stud as an SSL termination point in front of HAProxy, as a proof of concept for WebSockets routing. My domain registrar, Gandi.net, offers free 1-year SSL certs. Through OpenSSL I generated a CSR, which gave me two files:
    - domain.key
    - domain.csr
    I gave domain.csr to my trusted authority and they gave me two files:
    - domain.cert
    - GandiStandardSSLCA.pem (I think this is referred to as the intermediate cert?)
    This is where I encountered friction: Stud, which uses OpenSSL, expects there to be an "rsa private key" in the "pem-file", which it describes as "SSL x509 certificate file. REQUIRED." If I add domain.key to the bottom of Stud's pem-file, Stud will start, but I receive the browser warning saying "The certificate is self-signed." If I omit domain.key, Stud will not start, throwing an error from an OpenSSL function that appears intended to determine whether my "pem-file" contains an RSA private key. At this point I cannot determine whether the problem is:
    - A free SSL cert will always be self-signed and will always cause the browser warning
    - I'm just not using Stud correctly
    - I'm using the wrong RSA private key
    - The CA domain cert, the intermediate cert, and the private key are in the wrong order
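
    A hedged sketch of the usual fix: the "self-signed" warning suggests Stud is still serving a default or sample certificate, so rather than appending the key to the shipped pem-file, build a fresh pem-file containing only your own material. The concatenation order expected varies between tools, so if one order fails, try the other:

      # one common layout: leaf cert, intermediate chain, then the key
      cat domain.cert GandiStandardSSLCA.pem domain.key > stud.pem
      # if stud rejects that, try the key first:
      # cat domain.key domain.cert GandiStandardSSLCA.pem > stud.pem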

    Read the article

  • Can I format a USB stick on Windows XP in HFS+ format, and make it Mac OS X bootable?

    - by user717236
    My Intel Mac OS X computer is corrupt and I feel, at this point, that I need to perform a fresh install of the OS. It consistently and automatically logs out right after I log in. I tried logging in as root; I tried safe boot and it wouldn't load. Anyway, the point is that I want to put the Mac OS X installer on a USB thumb drive and have it boot up on the Intel Mac, which, as mentioned above, is inaccessible. So I'm using a Windows XP machine and attempting to create a bootable USB thumb drive that's compatible with Mac OS X. I have tried TransMac, MacDrive, and Paradox for Windows, all of which proved unable to format the USB stick as HFS+. How do I know this? Well, even though TransMac reports that the stick has been formatted HFS+, Computer Management in Windows says otherwise. I even put the installer on the USB drive after TransMac reportedly formatted it properly, and the Mac OS X computer didn't recognize that a USB thumb drive was inserted (via pressing the Option key at boot-up time). I'm not sure what the problem is or how to actually format the drive. Can anybody offer any help? I would appreciate it. Thank you.

    Read the article

  • NTFS write speed really slow (<15MB/s)

    - by Zulakis
    I got a new Seagate 4TB hard drive, formatted with NTFS using:
    parted /dev/sda
    > mklabel gpt
    > mkpart pri 1 -1
    mkfs.ntfs /dev/sda1
    When copying files or testing write speed with dd, the maximum write speed I can get is about 12MB/s. The hard drive should be capable of at least 100MB/s. top shows high CPU usage for the mount.ntfs process. The system has an AMD dual-core CPU. This is the output of parted /dev/sda unit s print:
    Model: ATA ST4000DM000-1F21 (scsi)
    Disk /dev/sda: 7814037168s
    Sector size (logical/physical): 512B/4096B
    Partition Table: gpt
    Number  Start  End          Size         File system  Name  Flags
    1       2048s  7814035455s  7814033408s               pri
    The kernel in use is 3.5.0-23-generic. The ntfs-3g versions I tried are ntfs-3g 2012.1.15AR.1 (the Ubuntu 12.04 default) and the newest version, ntfs-3g 2013.1.13AR.2. When formatted with ext4 I get good write speeds of about 140MB/s. How can I fix the write speed?
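
    One thing worth ruling out, as a sketch: ntfs-3g runs over FUSE and, in versions of that era, used small write requests unless the big_writes mount option was given, which on a modest CPU can cap throughput in exactly this way. An illustrative remount (the mount point is a placeholder):

      umount /dev/sda1
      # big_writes enables larger FUSE write requests (ntfs-3g option)
      mount -t ntfs-3g -o big_writes /dev/sda1 /mnt/data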

    Read the article

  • Modifying the install environment for RH-like installations

    - by javanix
    I am trying to modify the basic installation environment (i.e. what Anaconda runs in) for a customized CentOS distribution. As a first try, I would just like to modify a few of the splash images. My initial attempt entailed:
    1) Mount images/install.img on a directory ~/img/
    2) Copy all files from ~/img/ to ~/tmpimg/
    3) Modify the splash images
    4) mkisofs -o ~/final/install.img
    5) cp ~/final/install.img back to my ~/cdroot/ folder and remake the ISO
    However, the .img generated in step 4 doesn't come close to matching the file size of the original install.img (meaning install.img must be created in some other fashion, using compression), and it fails when I boot my ISO. What settings should I be using to make the install.img file? Is there some other technique for modifying CentOS install environments?
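
    A sketch of the likely missing piece, on the assumption that install.img is a SquashFS image rather than an ISO (it typically is on CentOS 5/6; running file against it will confirm), which would explain both the compression and why the mkisofs-built replacement fails to boot:

      # confirm the image type first
      file images/install.img
      # unpack, edit the splash images, then rebuild as squashfs
      unsquashfs -d ~/tmpimg images/install.img
      # ... modify files under ~/tmpimg ...
      mksquashfs ~/tmpimg ~/final/install.img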

    Read the article

  • How do I access files inside a Wubi virtual ext4 Ubuntu partition from within Windows?

    - by aalaap
    I just installed Ubuntu 10.04 using Wubi on a PC that has Windows XP and Windows 7 installed. I was working in it for a while and everything is just fine. However, when I booted back into Windows 7, I couldn't figure out a way to access the files I had created or downloaded in the Ubuntu partition. They're in a virtual disk called root.disk in C:\ubuntu\disks. Is there a way I can mount this virtual disk in Windows, or at least browse its contents and extract what I need?

    Read the article

  • MacBook Pro 2.2GHz 2011 (OS X 10.6.7) problem with NTFS-3G

    - by James
    I installed NTFS-3G, but now I get the following error message when I try to plug in my external drive (and on startup, about my Windows partition). Uninstalling and reinstalling does not help.
    NTFS-3G could not mount /dev/disk1s1 at /Volumes/FreeAgent GoFlex Drive because the following error occurred:
    /Library/Filesystems/fuse.fs/Support/fusefs.kext failed to load - (libkern/kext) link error; check the system/kernel logs for errors or try kextutil(8). The MacFUSE file system is not available (71)
    Any help would be great. I'd like to avoid reinstalling OS X if possible!
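
    A diagnostic sketch that follows the error message's own hint; these commands only inspect state. One hedged explanation for the link error is a MacFUSE kext that doesn't match the running kernel (e.g. a 32-bit MacFUSE build under the 64-bit kernel that 2011 MacBook Pros boot by default):

      # is any FUSE kext currently loaded?
      kextstat | grep -i fuse
      # try loading it by hand; -v prints the actual link errors
      sudo kextutil -v /Library/Filesystems/fuse.fs/Support/fusefs.kext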

    Read the article

  • Performance decrease in every game and application

    - by Márk Vincze
    When I start a game it initially runs smoothly, but after a couple of minutes the performance gradually decreases to the point of being unplayable (1-2 FPS). The sound also starts to lag at that point. This does not happen every time I start my PC; usually exiting the game, rebooting, then starting the game again solves the problem, and I can then play with perfect FPS for as long as I want. I could not find any deterministic pattern for when this happens and when it doesn't. It happens in every game I've tried (SWTOR, Diablo 3, Skyrim), and not just games: simple applications like a browser or the Control Panel can become unusably slow. This is a brand new PC I bought three months ago, and the problem has occurred since the first day I've been using it. Could you provide any advice on how to diagnose the problem further? I have tried reinstalling Windows and different video card drivers, but it did not help. It would be important to know whether this is a hardware or a software problem, because I can use the warranty if it is a hardware issue. (I did not want to return the PC yet, because I can't reproduce the issue deterministically.) Spec of the PC:
    Motherboard: ASRock H61M-HVS
    CPU: Intel Core i3-2120 3.30GHz LGA1155, boxed
    Memory: Kingmax 4096MB DDR3 1333MHz kit
    Video card: Gigabyte GV-R685OC-1GD HD6850 1GB GDDR5 PCIe
    HDD: Seagate 500GB Barracuda 7200rpm 16MB SATA3 ST500DM002
    I am using Windows 7 64-bit. Thanks a lot in advance!

    Read the article

  • Alternatives to Crashplan for VPS?

    - by Chloe
    I use SFTP Net Drive to mount a remote VPS so I can back it up. However, it's taken over 3 days to scan! I ran ls -lR from my desktop over the mounted network drive and it only took about 5 minutes to list all the files. There are only about 5000 files and 2GB. I know CrashPlan can run headless on the VPS itself, but that sounds like a pain to set up, and it takes a lot of memory on the server; the VPS doesn't have much memory to spare - less than my desktop. Is there another program that can communicate with the CrashPlan backup protocol and has a command-line interface? I want to back up /home.
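
    Not a CrashPlan-protocol client, but the standard low-memory way to pull a small directory tree off a VPS over SSH from the command line is rsync, which walks metadata quickly and only transfers changes. A sketch with a hypothetical host and paths:

      # incremental pull; -a preserves perms/times, -z compresses in transit
      rsync -az --delete user@vps.example.com:/home/ /backups/vps-home/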

    Read the article

  • OpenVPN on port 53 bypasses access restrictions (finding similar ports)

    - by user181216
    Wi-Fi scenario: I'm using Wi-Fi in a hostel which has a Cyberoam firewall, and all the computers use that access point. The access point has the following configuration:
    default gateway: 192.168.100.1
    primary DNS server: 192.168.100.1
    When I try to open a website, the Cyberoam firewall redirects the page to a login page (with correct login information we can browse the internet, otherwise not), and it also imposes website-access and bandwidth limitations. I once heard about PD-Proxy, which finds an open port and tunnels through it (usually UDP 53). Using PD-Proxy with UDP port 53, I can browse the internet without logging in - even the bandwidth limit is bypassed! And with another piece of software, OpenVPN, connecting to an OpenVPN server through UDP port 53, I can likewise browse the internet without ever logging in to the Cyberoam. Both pieces of software use port 53. Now, I have a VPS on which I can install an OpenVPN server and connect through it to browse the internet. I know why this happens: pinging some website (e.g. google.com) returns its IP address, which means the firewall allows DNS queries without login. But the problem is that a DNS service is already running on the VPS on port 53, and as far as I can tell port 53 is the only port that bypasses the limitations, so I cannot run the OpenVPN service on port 53. So: how can I scan the Wi-Fi network for ports like 53, so that I can figure out the magic port and start an OpenVPN service on the VPS on that same port? (I want to scan for ports like 53 on the Cyberoam through which traffic can be tunneled, not scan for services running on ports.) Improvements to the question via retags and edits are always welcome. NOTE: all this is for educational purposes only; I'm curious about network-related knowledge.

    Read the article

  • FreeNAS: Can't access shares from Windows 7

    - by rzlines
    I have just set up a FreeNAS server: I formatted and added a second hard drive, and mounted the data partition of the first hard drive. I have enabled CIFS and added both hard drives to the shares. My workgroup is the default WORKGROUP, and I can see other computers on my network. I can access FreeNAS through my browser - that's how I configured it. One more question: do I need to mount my second drive, and if so, what partition number do I put? It keeps giving me an error about using the wrong partition. Also, my second drive doesn't show up on the home page as available; I can only see my first drive's space there.

    Read the article
