Search Results

Search found 594 results on 24 pages for 'serverfault'.

Page 17/24 | < Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >

  • Migrating 10-15 Websites Running Linux, LAMP, RoR, WordPress

    - by Michael
    The task is to move 10-15 websites running Linux to new servers hosted by Amazon. These boxes are currently on dedicated servers. Some sites are running WordPress, some have a custom CMS, and others might have RoR applications. Unfortunately, there is sparse documentation on each site and on how services/files depend on each other, which means a lot of detective work needs to take place. My goal is to properly document each site, what makes it work, etc., so future admins have at least something to work with. Currently my strategy is to download each site so I have a backup of the files, then scan through them looking for configuration files -- DB connections, Apache configs, etc. -- and then create a spreadsheet with these findings and migrate the sites out to the new server. My question to ServerFault is this: are there things you would look out for? Easier ways to handle this task that I'm missing? Points will be awarded to answers that help with efficiency. Thanks in advance.
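
    A minimal inventory sketch for the "scan for configuration files" step, assuming each downloaded site sits in its own subdirectory under a backup folder; the base path and the filename patterns are placeholders to adjust:

        #!/usr/bin/env python3
        """Rough inventory pass over local copies of each site: record likely
        configuration files in a CSV for the migration spreadsheet.
        SITES_DIR and the filename patterns are assumptions -- adjust to taste."""
        import csv
        import os
        import re

        SITES_DIR = "/backups/sites"   # hypothetical: one subdirectory per downloaded site
        PATTERNS = re.compile(
            r"(wp-config\.php|settings\.php|database\.yml|environment\.rb|"
            r"\.htaccess|.*\.conf|.*\.ini)$", re.IGNORECASE)

        with open("site-inventory.csv", "w", newline="") as out:
            writer = csv.writer(out)
            writer.writerow(["site", "path", "size_bytes"])
            for site in sorted(os.listdir(SITES_DIR)):
                for dirpath, _dirs, filenames in os.walk(os.path.join(SITES_DIR, site)):
                    for name in filenames:
                        if PATTERNS.search(name):
                            full = os.path.join(dirpath, name)
                            writer.writerow([site, full, os.path.getsize(full)])
        print("Wrote site-inventory.csv")

    The resulting CSV can seed the documentation spreadsheet, and re-running it after the migration gives a quick diff of what actually moved.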

    Read the article

  • Adding a Second Wireless Router to an Existing Wired Network

    - by KVCrawford
    I apologize ahead of time; I know this has been asked before, but I'm still having problems... maybe you guys can help. I started out with the basic instructions from the highest-voted answer at http://serverfault.com/questions/41572/adding-a-second-wireless-router-to-my-network The new wireless router in question is a Linksys Wireless-N Gigabit Router, Model # WRT310N. Here are the steps I've taken in setting it up:
    1. Plug my laptop into LAN port #2 on the new router. Nothing else is connected at this point.
    2. Configure the new router to be 192.168.1.200 (the original router is 192.168.1.1, and its DHCP clients are from 192.168.1.100-192.168.1.199).
    3. Set the internet connection on the new router to "DHCP Client".
    4. Turn off the DHCP server & NAT routing on the new router.
    5. Plug a LAN cable from the original router into LAN port #1 on the new router (NOT the WAN port; nothing is plugged in there).
    6. Reset the new router.
    Afterwards, I try to ping 192.168.1.1 from the laptop plugged into LAN port #2 on the new router, with no response. 192.168.1.200 garners no response either. Typing "ipconfig" tells me:
        Autoconfiguration IP Address: 169.254.198.113
        Subnet Mask: 255.255.0.0
        Default Gateway: 169.254.198.113
    What's going wrong? I appreciate any help!
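
    For what it's worth, a 169.254.x.x address means the laptop never got a DHCP lease from the primary router, which points at the LAN-to-LAN cabling or at the final reset wiping the static address. A small, hedged diagnostic sketch (router addresses taken from the question; ping flags vary by OS):

        #!/usr/bin/env python3
        """Quick check from the laptop on LAN port #2: what address did we get,
        and can we reach either router? Addresses are the ones in the question."""
        import platform
        import socket
        import subprocess

        ROUTERS = ["192.168.1.1", "192.168.1.200"]

        def local_ip():
            # A UDP "connect" sends no traffic; it just selects a source address.
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            try:
                s.connect(("192.168.1.1", 80))
                return s.getsockname()[0]
            except OSError:
                return socket.gethostbyname(socket.gethostname())
            finally:
                s.close()

        ip = local_ip()
        print("Local address:", ip)
        if ip.startswith("169.254."):
            print("APIPA address -> no DHCP lease was obtained through the new router.")

        count_flag = "-n" if platform.system() == "Windows" else "-c"
        for router in ROUTERS:
            ok = subprocess.run(["ping", count_flag, "2", router],
                                stdout=subprocess.DEVNULL).returncode == 0
            print(router, "reachable" if ok else "NOT reachable")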

    Read the article

  • CouchDB Errors: "undefined symbol: js_fgets" on Ubuntu 10.04

    - by MattEzell
    Hello, ServerFault Community. I have been wrestling with this problem for weeks without resolution and was hoping that someone here might be able to help. As indicated above, this is on Ubuntu 10.04 (x86) using CouchDB 0.11.0. I have built and installed CouchDB 0.11.0 from source. Everything with the dependencies and with the CouchDB install itself 'goes off without a hitch' - no errors or complaints in the configure, make or install... CouchDB seems to be running 'fine'. I can access Futon without issue and utilize all CouchDB functionality found in Futon... Unfortunately, when I attempt to use any shows/views for an installed Couch App, I get the above js_fgets error before the terminal (and the Couch log) fills up with TONS of JSON. Nothing ever renders in the browser, though Firebug reports it. I have used the official instructions (paying special attention to the 10.04 instructions) and have followed pretty much every Google thread that I can find on similar issues. I have chased SpiderMonkey (and Rhino) as well as Erlang as the culprit, but despite reinstalls and tests with these components, I still cannot get past this CouchDB issue on my system... Ideas? Pointers? Suggestions? Has anyone successfully installed and used CouchDB 0.11.0 on an Ubuntu 10.04 system to RUN APPLICATIONS? I have come across multiple individuals who immediately respond 'yes, I have it installed - it works great' only to have them realize in the end (as I did) that just because Futon thinks things are working doesn't mean CouchDB is properly handling ALL requests. Thank you for your time and assistance!
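
    One way to take Futon out of the picture is to request a view directly over HTTP; the database and design-document names below are placeholders, so substitute whatever your Couch App actually uses:

        #!/usr/bin/env python3
        """Request a view directly from CouchDB over HTTP to reproduce the
        js_fgets failure without Futon. Database and design doc names are
        placeholders for whatever your Couch App uses."""
        import urllib.error
        import urllib.request

        COUCH = "http://127.0.0.1:5984"
        DB = "myapp"                          # hypothetical database name
        VIEW = "_design/myapp/_view/all"      # hypothetical design doc / view

        url = "%s/%s/%s?limit=5" % (COUCH, DB, VIEW)
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                print(resp.status)
                print(resp.read()[:300])
        except urllib.error.HTTPError as err:
            print("HTTP", err.code)
            print(err.read().decode("utf-8", "replace")[:2000])

    A 500 response here usually carries a readable Erlang/JSON error body, which is easier to work with than the flood in couch.log.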

    Read the article

  • why is rdiff-backup not compatible with encfs --reverse

    - by user330273
    I'm trying to use encfs with rdiff-backup to ensure that my backups to a remote server are encrypted. The easiest way to do this would be to use encfs --reverse - which means encfs will create a virtual encrypted file system, which I can then back up using rdiff-backup. Except that it doesn't work. Rdiff-backup fails every time with an "input/output error" on the encfs virtual filesystem. It seems I'm not the only one with this problem, but no one has said what the problem is: this person reported the same issue, but was just told to use sshfs instead (see below on that); in this question on ServerFault, one of the answers just states that "rdiff-backup seems to have trouble accessing the EncFS-reverse filesystem." There's an open bug report on the Debian bug tracker (bug 731413; I can't post the link) about this, but it's been open since December 2013 with no response. Does anyone know what the problem actually is? Is there a workaround? I can't use the two most commonly suggested alternatives - sshfs and then running encfs on that, or using Duplicity - as both require a much higher bandwidth connection than I have access to (Duplicity requires regular full backups).
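
    For reference, this is a sketch of the intended encfs --reverse plus rdiff-backup workflow, mainly useful for reproducing the input/output error with verbose logging; all paths and the remote target are assumptions, and this does not by itself fix the underlying bug:

        #!/usr/bin/env python3
        """Sketch of the encfs --reverse + rdiff-backup workflow with verbose
        logging, to capture exactly where the I/O error occurs. Paths and the
        remote target are assumptions; encfs will prompt for the password."""
        import os
        import subprocess
        import sys

        PLAIN = "/home/me/data"                      # real, unencrypted data
        CIPHER = "/tmp/cipher-view"                  # encfs --reverse mountpoint
        REMOTE = "user@backuphost::/backups/data"    # rdiff-backup target

        os.makedirs(CIPHER, exist_ok=True)
        # --reverse presents an encrypted view of PLAIN at CIPHER.
        subprocess.run(["encfs", "--reverse", PLAIN, CIPHER], check=True)
        try:
            result = subprocess.run(["rdiff-backup", "-v5", CIPHER, REMOTE])
            sys.exit(result.returncode)
        finally:
            subprocess.run(["fusermount", "-u", CIPHER])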

    Read the article

  • Recommendations for colocation in the US

    - by Emil
    Hello ServerFault, I work for a European media company and we are currently looking for colocation in the US. I know the European market quite well; unfortunately, that is not the case for the US. I'm hoping you guys can help me out a bit with a few questions; it would be much appreciated! I am looking for a data center that can deliver a high level of availability (Tier 3 or better). The installation will be fairly large, so capacity is important, as is good internet connectivity/carrier presence. However, most important is good customer support: skilled, dedicated and responsive technical staff, since we won't have tech staff close by. I'm looking for a small and fast-moving company that targets internet businesses rather than big old enterprise hosting. What locations should we go for, given that we want to reach all of the US from a single site and still maintain decent latency? (Do we need east and west coast?) Where are the main internet hubs, and should you try to get as close to them as possible? Are there any good online resources I should look at? Where do the large-scale internet/media services colocate? Lastly, I would be very happy to get some actual recommendations for companies to talk to. P.S. I'm happy to return the favor if anyone has questions regarding data centers and colocation in Europe.

    Read the article

  • Using our own certificate authority for business email encryption

    - by LumenAlbum
    I've read the available similar questions on ServerFault, but I haven't quite found a definite answer to the security aspect of it - hence my question: I'm the administrator of an office working with tax data and we want to start using certificate-based email encryption with our clients. Considering the prices for certificates issued by VeriSign & Co, I was wondering if we couldn't issue the necessary certificates with a certificate authority of our own. I realize that they do not offer the trust hierarchy that commercial certificates do, but I don't see why we would need that. Most of our clients have small businesses and only 20% of them even exchange data with us via email. So if we were to issue certificates for those 20% and our employees, that would enable us to use encrypted email. Of course they would have to trust our certificate authority and thus receive our public root certificate once. But if we were to hand it out to them (or install it) personally, they'd know that it really is our certificate. Is there a huge security risk that I am missing here? As long as nobody has access to our certificate authority server, nobody should be able to interfere with security, right? And the client certificates would be generated and handed out by us as well... Please advise me if I am making an error in judgement here, and thank you in advance.
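
    As a rough illustration of how little tooling an in-house root CA needs, here is a minimal sketch using the Python "cryptography" package; the names and lifetime are placeholders, and per-user S/MIME certificates would then be issued and signed by this root:

        #!/usr/bin/env python3
        """Minimal in-house root CA with the 'cryptography' package
        (pip install cryptography). Names and lifetime are placeholders."""
        import datetime
        from cryptography import x509
        from cryptography.x509.oid import NameOID
        from cryptography.hazmat.primitives import hashes, serialization
        from cryptography.hazmat.primitives.asymmetric import rsa

        key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
        name = x509.Name([
            x509.NameAttribute(NameOID.ORGANIZATION_NAME, u"Example Tax Office"),
            x509.NameAttribute(NameOID.COMMON_NAME, u"Example Tax Office Root CA"),
        ])
        now = datetime.datetime.utcnow()
        cert = (
            x509.CertificateBuilder()
            .subject_name(name)
            .issuer_name(name)                  # self-signed root
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=3650))
            .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
            .sign(key, hashes.SHA256())
        )
        with open("root-ca.key", "wb") as f:   # guard this file carefully
            f.write(key.private_bytes(serialization.Encoding.PEM,
                                      serialization.PrivateFormat.TraditionalOpenSSL,
                                      serialization.NoEncryption()))
        with open("root-ca.pem", "wb") as f:   # this is what clients import and trust
            f.write(cert.public_bytes(serialization.Encoding.PEM))

    The practical risk is less the cryptography and more key custody: anyone who copies root-ca.key can mint certificates that your clients will trust.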

    Read the article

  • Distributed File Systems.

    - by GruffTech
    So, I've been reading several articles around ServerFault as well as Google (for example, this link). My requirements are very similar to the link above; however, I'd also like dynamic or at least resizeable file volumes, so that if necessary I can add 4-5 servers to the pool and then expand the volume. Any distributed file systems that support that, to save me some time? Thanks! Notes on what I've looked at so far:
    - LustreFS: will be my next test cluster to build.
    - GlusterFS: I've built a 3-machine test GlusterFS cluster; however, I quickly became aware of several limitations that it doesn't seem to make clearly public. One, I can't seem to resize a volume. Once a volume is created, it's done, which seems absurd: why have a fully scalable file system if I can't scale a volume? So maybe I'm doing something wrong. I'm not sure.
    - Amazon S3: while it gives the cheapest startup, it adds too much cost when broken down per client per month, so it's out. Building my own system, when prorated over several years with no bandwidth costs, is significantly cheaper.
    - MogileFS: isn't an option, as we'd like this server to be a SAN replacement for storing tons of media from a multitude of systems, which for us means it needs to be POSIX-compliant so it can be remotely mounted via NFS or CIFS.
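
    On the GlusterFS point: newer releases can grow a volume after creation by adding bricks and rebalancing. A hedged sketch wrapping the CLI (volume and brick names are placeholders, and the exact syntax, e.g. replica counts, differs between GlusterFS versions):

        #!/usr/bin/env python3
        """Grow a GlusterFS volume by adding bricks, then rebalance, via the
        gluster CLI. Volume and brick names are placeholders; exact syntax
        differs between GlusterFS releases."""
        import subprocess

        VOLUME = "mediavol"                                  # hypothetical volume
        NEW_BRICKS = ["server4:/export/brick1", "server5:/export/brick1"]

        subprocess.run(["gluster", "volume", "add-brick", VOLUME] + NEW_BRICKS, check=True)
        # Spread existing files across the new bricks.
        subprocess.run(["gluster", "volume", "rebalance", VOLUME, "start"], check=True)
        subprocess.run(["gluster", "volume", "rebalance", VOLUME, "status"])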

    Read the article

  • how to authenticate once for multiple servers, using only apache configs?

    - by Wang
    My problem is, I have a number of prepackaged web apps (a print system, a wiki, a bug tracker, an email archive, etc.) running on different Mac OS X Leopard (soon to be SL) servers that each need to authenticate users from the internet at large. Right now every server presents an Apache basic authentication prompt, which takes a shared login, but it's apparently enough of an inconvenience to log in repeatedly that people are sending email without checking the wiki or bug tracker or archive. In the case of the bug tracker, a user might need to log in twice---once for Apache if he hasn't used any other protected service on that server, once for the bug tracker itself so it can distinguish different people. Since the only common component to all these apps is Apache 2 itself, does it have any way of authenticating a user once, in some way that will be respected by other servers and various web apps? I looked at http://serverfault.com/questions/32421/how-is-session-stickiness-achieved-across-multiple-web-servers but it sounds like the answer assumes that I get to write my own web app. I also looked at Ian Bicking's blog, but it's four years old and recommends something available only for Apache 1.3, not Apache 2. Sorry not to hyperlink the second site---apparently I need 10 reputation points. Edit: Shibboleth does what I need, but I should have specified that I'm looking for a really dumb, really simple solution for in-house services that need to handle all of a dozen users, probably not more than three at a time.

    Read the article

  • MS licensing of multiple RDP sessions for non-MS products in Windows XP Pro

    - by vgv8
    Questions 1) and 2) were moved into a separate thread, "Which Windows remote connections bypass LSA? and what are the definitions of login vs. logon session?"
    3) Do I understand correctly that multiple remote RDP sessions are supported by Windows XP but require additional (or modified) licensing? Which one? Or is it always illegal to run multiple RDP sessions on Windows XP, even through non-MS commercial software?
    Update 1: I already understood my error - the main questions were about definitions (important for finding a common language with others) and the licensing questions were collateral - but it was already answered. I shall try to separate these questions, leaving here the questions about RDP licensing and migrating the other questions into a separate thread.
    Update 2: To the comment "Trying to 'work around' licensing terms is pointless and wasteful of time": I never try "working around" and I never ask anything like this; I am not a specialist in licensing. My clients/employers provide me with tools and licensing support. They have corporate lawyers and planning/accounting/purchase departments for these issues. The questions that I ask are a matter of scalability and efficiency (saving my time and others') in my development work. For example: just because I need authentication against Windows AD, is it time-saving to use ADAM instead of deploying a full-fledged AD with DC + servers + whatever else? To the comment "Nobody is forcing you to use Windows XP": I shall not rush into re-installing the operating systems on all my development machines (at home, at client premises) just because a few guys have a lot of fun downvoting development-related questions on serverfault.com. If I did, I would make a fool of myself in the eyes of my colleagues et al.
    Update: I unmarked this question as answered since the answer had not even addressed the question, at least not mine. Should I understand that Terminal Server PRO, which allows Windows® XP and Windows® Small Business Server 2003 to host multiple remote desktop sessions, is illegal?
    Related: my answer to the question "Has windows XP support multiple remote login session (RDP) at a time?"

    Read the article

  • filesystem compatible with freenas and windows

    - by Daniel
    Hi all, I'm planning on using FreeNAS (I was considering Openfiler but FreeNAS seems simpler) for my home NAS box running off ESXi. I have managed to get local SATA drives to mount in ESXi (http://serverfault.com/questions/216902/esxi-add-datastore-without-partitioning). I've had one of the drives fail on me before and I was able to retrieve most of the data off it using Windows tools (I'm not much of a Linux guy; I know enough to be dangerous!). If I go the FreeNAS route, in the event that something goes bad, what would be the best file system to use so that I could pop the drive out of the FreeNAS box (VM) and put it in another PC running Windows, so I could try and run various recovery tools to get the data back? All in all it's not a major problem if I lose the data, it would just be a bit annoying, so I'm not looking for suggestions around backing up etc. I was considering using NTFS, which the drives are already formatted as, but it appears that while FreeNAS does support NTFS, it's a bit buggy and not 100% reliable. Anyone know if this is still true? I read that on a forum somewhere.

    Read the article

  • DoS/Flood Lag even though Port not Saturated

    - by Asad Moeen
    My GameServers had been under some UDP floods, in response to which they generated output to the attacker, which gave the GameServers some huge lag. Thanks to friends at ServerFault and different kinds of testing, I was able to successfully block the attack. My question is actually something else, but it is important to know how the GameServers reacted to the attack and whether the machine stayed stable or not: 300 kb/s of input would cause a GameServer to generate 2 mb/s of output. So as the input rate kept increasing, the output rate would climb so high that the GameServer could no longer control it, and hence it would lag badly until the attack stopped. Usually the GameServer starts to lag when it sends out more than about 5 mb/s; under that it is controllable. Theoretically, I was able to get 60 mb/s of output from my GameServer by inputting 10 mb/s. That's just the way the GameServer works if not protected. Now on some of my machines, only the GameServer under attack lagged, and although that server was generating 60 mb/s of output, the rest of the GameServers on other ports of the same machine ran fine without lag. But there was another machine, also on a 100 Mbps network port, where even 1 mbps of input (and ZERO output, because the attack is blocked), even on an unused port, would give a constant yellow line (on the Lag-o-Meter) to all the clients on all GameServers, indicating lag, because that line is blue under normal conditions. It would remain the same even at 50 mbps or 900 mbps of input. I tried contacting the host about it because I believe it's the way their network is bridged, but they can't help me with it. Anyone else know about such issues? Because if 900 mbps of input does not saturate the port, how can 1 mbps of input lag the servers, although the port is not saturated and enough bandwidth is available?
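
    A back-of-the-envelope check of the amplification described above, assuming output scales linearly with input (the roughly 6.7x factor comes from the 300 kb/s in / 2 mb/s out observation) and that lag starts near 5 mb/s of output:

        #!/usr/bin/env python3
        """Output rate = input rate * amplification factor, compared against the
        ~5 mb/s point where the GameServer reportedly starts to lag."""
        AMPLIFICATION = 2000 / 300     # ~6.7x, from the 300 kb/s -> 2 mb/s observation
        LAG_THRESHOLD_KBPS = 5000      # ~5 mb/s

        for input_kbps in (100, 300, 750, 1000, 10000):
            output_kbps = input_kbps * AMPLIFICATION
            print("in %6d kb/s -> out %8.0f kb/s  lag: %s"
                  % (input_kbps, output_kbps, output_kbps > LAG_THRESHOLD_KBPS))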

    Read the article

  • What should I be doing while I wait for a progress bar?

    - by Malnizzle
    So I am sitting here waiting for a progress bar to run (20 mins or so), and was wondering how best to use my time as a SysAdmin. I briefly debated not posting this question, as it could get flagged as subjective, but I think it's an important question, and a question that can be legitimately answered (per the FAQ). I know this is something a lot of sysadmins deal with, especially if they are client-based, I would venture to guess. There is a lot of material out there about how to multitask, but SysAdmin work is unique in this area as well. I could switch over to another project, but I could get wrapped up in that and forget about the original project I was working on, and that's hard if you are billing a client for your time, both for tracking your time and for being fair to that client. I could check ServerFault, but that isn't directly work-related; I could sort my email, and so on and so forth. What do you do, or what should I do, when I have time waiting for a progress bar? Thanks! (download done, back to work!)

    Read the article

  • Apache2 VirtualHost on Debian not working

    - by milo5b
    I am having some problems with my Apache2 configuration. I have already tried to find documentation on the web (Apache's site, Debian's site, here on ServerFault, etc.), but nothing really helps. I have tried different configurations, but my current configuration is the following (/etc/apache2/sites-available/default):

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName mysite.dev
            ServerAlias mysite.dev
            DocumentRoot /var/www/mysite.dev/httpdocs/
            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName livesite.com
            ServerAlias www.livesite.com
            DocumentRoot /var/www/livesite.com/httpdocs/
            <Directory /var/www/livesite.com/httpdocs/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    mysite.dev is just an entry in the hosts file on my client machine, while livesite.com is an actual DNS record which resolves to the same IP as the one set in the hosts file for mysite.dev. The problem is that when I type mysite.dev in my browser, it automatically goes to livesite.com. I have tried having separate /etc/apache2/sites-enabled/ files (/etc/apache2/sites-enabled/mysite.dev, /etc/apache2/sites-enabled/livesite.com) - and of course the related sites-available files - but achieved the same results. I have had a peek at error.log and access.log but there's nothing I can see. My httpd.conf contains:

        AccessFileName .htaccess

    And I have no /etc/apache2/conf.d/virtual.conf file. Any help would be greatly appreciated - if I did not provide enough info, please let me know and I will do my best to provide all necessary info. Thanks
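
    One way to rule out browser and DNS effects is to send requests straight to the server's IP with an explicit Host header and see which VirtualHost answers; the IP below is a placeholder (and, if this is Apache 2.2, a missing "NameVirtualHost *:80" directive is a common reason for every name landing on one vhost):

        #!/usr/bin/env python3
        """Ask the server directly which VirtualHost answers for each Host
        header, bypassing browser caching and hosts-file lookups.
        SERVER_IP is a placeholder for the Debian box's address."""
        import urllib.request

        SERVER_IP = "203.0.113.10"     # hypothetical server address

        for host in ("mysite.dev", "livesite.com"):
            req = urllib.request.Request("http://%s/" % SERVER_IP, headers={"Host": host})
            with urllib.request.urlopen(req, timeout=10) as resp:
                snippet = resp.read(200).decode("utf-8", "replace").splitlines()[:1]
                print(host, "->", resp.status, snippet)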

    Read the article

  • Any Recommendations for a Web Based Large File Transfer System?

    - by Glen Richards
    I'm looking for a server software product that:
    - Allows my users to share large files with:
      - the general public
      - 1 or more specific people, securely (notification via email, optionally with a token that gives them x period of time to download)
    - Allows anyone in the general public to share files with my users, perhaps by invitation.
    - Is user-friendly enough that my users can use it without having to bug me as the admin.
    - Can be installed on our own server (we don't want shared data sitting on anyone else's server).
    - Is a web-based solution; using some kind of secure comms channel would be good too, e.g. SSH.
    - Handles files that could be over 1 GB.
    I found the question below, but WebDAV does not sound user-friendly enough: http://serverfault.com/questions/86878/recommendations-for-a-secure-and-simple-dropbox-system I've done a lot of searching, but I can't get the search terms right. There are too many services that provide this, but I want something we can install on our own server. A last resort would be to roll my own. Any ideas appreciated. Glen
    EDIT: Sorry Tom and Jeff, but Glen specifically says that he's looking for a 'product', so given that I specialise in this field I thought that my expertise in this area might be of use to him. I don't see how writing his own services is going to be easy for him to maintain going forward (large IT admin overhead) or simple for his users and the general public to work with.

    Read the article

  • Server nearly unusable when doing disk writes

    - by Wikser
    My question closely relates to my last question here on ServerFault. I was copying about 5 GB from a 10-year-old desktop computer to the server. The copy was done in Windows Explorer. In this situation I would assume the server to be bored by the data flow. But as usual with this server, it really slowed down. At least I could work within the remote session, even though there was some serious latency. The copy took its time (20 min?). During this time I went to a colleague and he tried to log in to the same server via remote desktop (for some other reason). It took about a minute to get to the login screen, a minute to open the control panel, a minute to open the performance monitor... Icons were loading maybe one per second. We saw the following (from memory):
        CPU: 2%
        Avg. Queue Length: 50
        Pages/sec: 115 (?)
    There was no other considerable activity on the server. The server seldom serves some ASP.NET pages, which also became very slow during this time. The relevant configuration is as follows:
        Windows 2003
        Seagate ST3500631NS (7200 rpm, 500 GB)
        LSI MegaRAID based RAID 5
        4 disks, 1 hot spare
        Write Through
        No read-ahead
        Direct Cache Mode
        Harddisk-Cache-Mode: off
    Is this normal behaviour for such a configuration? What measurements could give further clues? Is it reasonable to reduce the priority of such copy I/O and favour other processes like the remote desktop? How would you do that? Many thanks!

    Read the article

  • Why is SMF manifest losing configuration data when exported on SmartOS?

    - by Scott Lowe
    I'm running a server process under SMF (Service Management Facility) on Joyent's Base64 1.8.1 SmartOS image. For those not acquainted with SmartOS, it is a cloud-based distribution of illumos with KVM. But essentially it is like Solaris and inherits from OpenSolaris, so even if you've not used SmartOS, I'm hoping to tap into some Solaris knowledge on ServerFault. My issue is that I want an unprivileged user to be allowed to restart a service that they own. I have worked out how to do that by using RBAC, adding an authorisation to /etc/security/auth_attr and associating that authorisation with my user. I then added the following to my SMF manifest for the service:

        <property_group name='general' type='framework'>
            <!-- Allow to be restarted -->
            <propval name='action_authorization' type='astring'
                     value='solaris.smf.manage.my-server-process' />
            <!-- Allow to be started and stopped -->
            <propval name='value_authorization' type='astring'
                     value='solaris.smf.manage.my-server-process' />
        </property_group>

    And this works well when imported. My unprivileged user is allowed to restart, start and stop its own server process (this is for automated code deployments). However, if I export the SMF manifest, this configuration data is gone... all I see in that section is this:

        <property_group name='general' type='framework'>
            <property name='action_authorization' type='astring'/>
            <property name='value_authorization' type='astring'/>
        </property_group>

    Does anybody know why this is happening? Is my syntax wrong, or am I simply using SMF incorrectly?
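
    As a possible workaround until the export issue is understood, the authorizations can be set directly in the repository with svccfg rather than re-imported from the manifest; the FMRI below is a placeholder and the property syntax is a sketch that may need adjusting on SmartOS:

        #!/usr/bin/env python3
        """Set the authorizations straight into the SMF repository and refresh
        the service. FMRI is a placeholder; svccfg syntax is a sketch."""
        import subprocess

        FMRI = "svc:/site/my-server-process:default"     # hypothetical FMRI
        AUTH = "solaris.smf.manage.my-server-process"

        for prop in ("general/action_authorization", "general/value_authorization"):
            subprocess.run(["svccfg", "-s", FMRI, "setprop", prop, "=", "astring:", AUTH],
                           check=True)
        subprocess.run(["svcadm", "refresh", FMRI], check=True)
        # Show what is actually stored now:
        subprocess.run(["svcprop", "-p", "general", FMRI])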

    Read the article

  • Rewriting HTML links with ModProxyPerlHtml

    - by Juancho
    I'm trying to set up an Apache reverse proxy using mod_proxy and ModProxyPerlHtml. This is my scenario:
    Domain for the proxy: http : // www.myserver.com/
    Destination server (the one behind the proxy): http : // myserver.foo.com/myapp/
    I'm sorry that I have to space out the URLs, but ServerFault doesn't allow me to post more than two links as a "spam protection mechanism" (ridiculous on a site where you ask questions about servers and it's quite likely you'll post the same URL more than twice to explain your question). The idea is to map http : // www.myserver.com/ to http : // myserver.foo.com/myapp/ . Note that the path on the proxy is / and on the destination server it is /myapp/. All of the examples I can find on the net (like the one in the official documentation of ModProxyPerlHtml) are the other way around, i.e. path /myapp/ on the proxy and / on the destination server. This is my current config, which doesn't work:

        ProxyPass / http : // myserver.foo.com/myapp/
        ProxyPassReverse / http : // myserver.foo.com/myapp/
        PerlInputFilterHandler Apache2::ModProxyPerlHtml
        PerlOutputFilterHandler Apache2::ModProxyPerlHtml
        SetHandler perl-script
        PerlSetVar ProxyHTMLVerbose "On"
        LogLevel Info
        <Location / >
            # ProxyPassReverse /myapp/
            PerlAddVar ProxyHTMLURLMap "/myapp/ /"
            PerlAddVar ProxyHTMLURLMap "http : // myserver.foo.com /"
        </Location>

    The examples use ProxyPassReverse inside the Location directive, but in my case that doesn't work; it only works when it is outside. With this configuration the links aren't being replaced as they should be; my guess is that the Location isn't being matched, and thus the rewrite rules aren't being applied. The error log only shows that it uncompresses the content and searches it, but doesn't find anything:

        [Tue Nov 13 08:42:05 2012] [warn] [ModProxyPerlHtml] Uncompressing text/html; charset=UTF-8, Content-Encoding: gzip\n
        [Tue Nov 13 08:42:05 2012] [warn] [ModProxyPerlHtml] Content-type 'text/html; charset=UTF-8' match: /(text\\/javascript|text\\/html|text\\/css|text\\/xml|application\\/.*javascript|application\\/.*xml)/is\n
        [Tue Nov 13 08:42:05 2012] [warn] [ModProxyPerlHtml] Compressing output as Content-Encoding: gzip\n
        [Tue Nov 13 08:42:06 2012] [warn] [ModProxyPerlHtml] Content-type 'text/html; charset=UTF-8' match: /(text\\/javascript|text\\/html|text\\/css|text\\/xml|application\\/.*javascript|application\\/.*xml)/is\n

    What could be wrong?
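
    A quick way to see whether ProxyHTMLURLMap is being applied at all is to fetch a page through the proxy and count links that still point at the backend path or hostname; the hostname below is the spaced-out one from the question, written normally, and the regular expression is only a rough heuristic:

        #!/usr/bin/env python3
        """Fetch the front page through the proxy and list links that still
        point at the backend path or hostname. The regex is only a heuristic."""
        import re
        import urllib.request

        with urllib.request.urlopen("http://www.myserver.com/", timeout=15) as resp:
            html = resp.read().decode("utf-8", "replace")

        leftover = re.findall(r'(?:href|src)="[^"]*(?:/myapp/|myserver\.foo\.com)[^"]*"', html)
        print("links still pointing at the backend:", len(leftover))
        for link in leftover[:10]:
            print(" ", link)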

    Read the article

  • The Server Fault Wiki of recommended practices [migrated]

    - by Avery Payne
    So I've noticed that there are several recommendations on basic practices on Server Fault, but there doesn't seem to be a cohesive view as to how those recommendations would all fit together. So I thought I would lump these together as a kind of mental exercise to see what the "ServerFault Community IT Department" would look like if it were implemented. This would give a few things: it would make a reasonable wiki (in the true wiki spirit of many contributions), it would provide several links to well-vetted practices, and it would be kind of fun to see what the amalgamation would look like. And who knows, it may even point out some interesting issues between different forms of "best practices", although I would be stunned if there was a conflict hidden in there someplace... Add your favorites from Server Fault as answers, and I'll re-edit this section with the results. Here are a few categories to collect different ideas together.
    Hardware Configuration(s): Server room configuration. Server room temperature. Firmware Updates and Scheduling.
    Storage Configuration(s): Selecting a NAS box. Linux: Dealing with /tmp. Linux: Install apps in /var or /opt?
    Network Configuration(s): Checking DNS health and compliance.
    Security Practice(s): Password (General) Best Practices. Password sharing methods. Windows Update: Updating Windows Servers that are hosts for VMs.
    Network Service(s)
    User Service(s): User Naming & Deletion.
    Upgrade Process(es)
    Disaster Recovery: Checking Backups. Documenting an outage for a post-mortem review.
    Last Edit: 2010-02-17

    Read the article

  • How can I recover a Fedora 12 installation that is showing signs of disk errors?

    - by Bob Cross
    I am currently overseas (i.e., very far from my normal library of tools) and my primary machine that would normally act as the data server in the performance test that we're trying to run is failing to boot to Fedora 12 properly. This is a machine that, as of yesterday, was booting fine. However, this morning, very strange portions of the boot process were complaining with messages such as "unexpected 0x0 in rpcbind" and "bad file descriptor" (I don't have the error in front of me - scavenged a windows installation to get onto serverfault). Eventually, the boot hung for a long time at the NFS service and then brought up what looked like the KDE login screen but neither the mouse nor keyboard functioned. In olden days, I would try to get to a point where I could manage to run fsck and pray that the bad sectors would come back into alignment just long enough for me to scrape the critical data off of the machine. However, now that we live in the future, it seems like our options in situations like this should be a little more varied. Is there a way to recover a Fedora 12 installation with bad disk sectors that won't boot properly? For completeness, I am comfortable working with bootable recovery distros-on-CD and such but I don't know which one is likely to work best with modern Fedora. In the absence of guidance, I'm frantically torrenting the Fedora 12 Live CD and DVD, hoping to try rescue mode before tomorrow morning.

    Read the article

  • Finding a shared HDD attached to the network from my F-13 machine

    - by Ramy
    Sorry for the slew of n00bie questions, but here is one more. I recently partitioned my 1.5 TB hard drive according to this question. I then bought this to attach the hard drive to my network. The problem is, how do I navigate to the hard drive to move files over the network to the HDD? Should this be moved to ServerFault?
    Update: the disk isn't even showing up when I call "fdisk -l" (as root). How can I mount it if I can't even find it?

        [root@Moonface ~]# /sbin/fdisk -l

        Disk /dev/sda: 160.0 GB, 160041885696 bytes
        255 heads, 63 sectors/track, 19457 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00018598

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *           1          64      512000   83  Linux
        Partition 1 does not end on cylinder boundary.
        /dev/sda2              64       19458   155777024   8e  Linux LVM

        Disk /dev/dm-0: 53.7 GB, 53687091200 bytes
        255 heads, 63 sectors/track, 6527 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/dm-0 doesn't contain a valid partition table

        Disk /dev/dm-1: 4764 MB, 4764729344 bytes
        255 heads, 63 sectors/track, 579 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/dm-1 doesn't contain a valid partition table

        Disk /dev/dm-2: 101.0 GB, 101032394752 bytes
        255 heads, 63 sectors/track, 12283 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/dm-2 doesn't contain a valid partition table
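
    If the adapter is the usual sort of network USB dongle, it exposes the disk as a network share (SMB/CIFS or FTP) rather than as a block device, so fdisk on the Fedora box will never list it; you locate it on the LAN and mount the share instead. A hedged sweep of common NAS ports (the subnet is an assumption):

        #!/usr/bin/env python3
        """Sweep the LAN for hosts answering on common NAS ports. SUBNET is an
        assumption; the sweep is slow but has no dependencies."""
        import ipaddress
        import socket

        SUBNET = "192.168.1.0/24"      # adjust to your LAN
        PORTS = {445: "SMB", 139: "NetBIOS", 21: "FTP", 2049: "NFS", 80: "HTTP admin"}

        for host in ipaddress.ip_network(SUBNET).hosts():
            for port, label in PORTS.items():
                s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                s.settimeout(0.2)
                if s.connect_ex((str(host), port)) == 0:
                    print("%s\topen %d (%s)" % (host, port, label))
                s.close()

    Once the device shows up, something along the lines of mounting the share (for SMB, mount -t cifs //address/share /mnt/nas with the adapter's credentials) would be the next step.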

    Read the article

  • Tips and Suggestions for IP Address Re-Addressing?

    - by RSXAdmin
    Hello ServerFault universe, my ever-evolving and expanding local area network is currently using class C addresses. My network consists of multiple subnets depending on site/location:
    192.168.1.x is site HQ
    192.168.5.x is a secondary site
    192.168.10.x is... and so on and so forth.
    Long story short, I have inherited this network design from the previous admin, who has left the company, which started off with a dozen people and now has just over 300 full-time/part-time employees. We do not yet have client VPN access, but we do have site-to-site VPN set up. My question is: in preparation for outside client access to my network via a Cisco ASA, I would like to re-address the HQ site, because I understand 192.168.1.x or 192.168.0.x are not very good choices for a company subnet - they may conflict with a home user's LAN when connecting to my LAN, I believe? Through your experience, does anyone out there have any suggestions and tips on how I can proceed with re-addressing my subnets? If I had designed this network I would have gone with 10.0.0.0 networks (mask 255.255.255.0), so I am leaning towards changing it to fit. Thank you.
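
    A small planning sketch with Python's ipaddress module, assuming one 10.x/16 per site carved into /24s; the site names and blocks are placeholders, but this kind of plan avoids the 192.168.0.x/192.168.1.x collisions once client VPN arrives:

        #!/usr/bin/env python3
        """Sketch of a 10.x plan: one /16 per site, carved into /24 VLANs.
        Site names and blocks are placeholders."""
        import ipaddress

        SITES = {"HQ": "10.1.0.0/16", "Site2": "10.5.0.0/16", "Site3": "10.10.0.0/16"}

        for site, block in SITES.items():
            vlans = list(ipaddress.ip_network(block).subnets(new_prefix=24))
            print("%-6s %-14s %4d x /24  first %s  last %s"
                  % (site, block, len(vlans), vlans[0], vlans[-1]))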

    Read the article

  • How to add a privilege to an account in Windows?

    - by mark
    Given:
    - A VM running Windows 2008.
    - I am logged on there using my domain account (SHUNRANET\markk).
    - I have added the "Create global objects" privilege to my domain account.
    - The VM is restarted (I know logout/logon is enough, but I had to restart).
    - I log on again using the same domain account. It still seems to have the privilege.
    - I run some process and examine its Security properties using Process Explorer. The account does not seem to have the privilege.
    This is not idle curiosity. I have a real problem: without this privilege the named-pipe WCF binding works on neither Windows 2008 nor Windows 7! Here is an interesting discussion on this matter - http://social.msdn.microsoft.com/forums/en-US/wcf/thread/b71cfd4d-3e7f-4d76-9561-1e6070414620. Does anyone know how to make this work? Thanks.
    EDIT: BTW, when I run the process elevated, everything is fine and Process Explorer does display the privilege as expected. But I do not want to run it elevated.
    EDIT2: I equally welcome any solution, be it configuration-only or mixed with code.
    EDIT3: I have posted the same question on the MSDN forums and they have redirected me to this page - http://support.microsoft.com/default.aspx?scid=kb;EN-US;132958. I am yet to determine its relevance, but it looks promising. Notice also that it is a completely code-based solution that they propose, so whoever moved this post to ServerFault - please move it back to Stack Overflow.
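
    Along the lines of the KB article mentioned in EDIT3, the right can also be granted programmatically through the LSA API; below is a hedged sketch using pywin32 (run it elevated; the account and privilege names are the ones from the question). Note that if the account is in the Administrators group, UAC token filtering can still strip the privilege from non-elevated processes, which would match what Process Explorer shows:

        """Grant a user right through the LSA API with pywin32 (pip install
        pywin32); run from an elevated prompt. Account and privilege are the
        ones from the question."""
        import win32security

        ACCOUNT = r"SHUNRANET\markk"
        PRIVILEGE = "SeCreateGlobalPrivilege"      # "Create global objects"

        sid, _domain, _type = win32security.LookupAccountName(None, ACCOUNT)
        policy = win32security.LsaOpenPolicy(None, win32security.POLICY_ALL_ACCESS)
        win32security.LsaAddAccountRights(policy, sid, (PRIVILEGE,))
        print("Rights now held:", win32security.LsaEnumerateAccountRights(policy, sid))
        win32security.LsaClose(policy)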

    Read the article

  • Cisco Spam Blocker, IronPort, Lotus Domino, Integration Help

    - by NickToyota
    Hi ServerFault universe, I work for a medium-sized (roughly 200 user) company. We are attempting to integrate our new Cisco Spam and Virus Blocker (IronPort) appliance into our network so that it acts as an incoming filter and then passes mail off to our Lotus Domino mail server, and vice versa. The way our network is currently set up, an MX record points to our Domino SMTP incoming server, which is currently set up as an inbound gateway and filter (using Symantec mail software for Domino). We want to replace that inbound gateway with the IronPort. Our company has also invested in a pool of external IP addresses, which I believe have been assigned to our web and email servers. What would be the proper course of action to successfully integrate the device? An MX record change? Replacing the Domino gateway completely with the IronPort? We attempted to set the IronPort device to the external IP that our MX record points to, without much success. Any help on proper setup would be greatly appreciated.

    Read the article

  • Testing for disk write

    - by Montecristo
    I'm writing an application for storing lots of images (size < 5 MB) on an ext3 filesystem, and this is what I have for now. After some searching here on ServerFault I have decided on a directory structure like this:
        000/000/000000001.jpg
        ...
        236/519/236519107.jpg
    This structure will allow me to save up to 1'000'000'000 images, as I'll store a max of 1'000 images in each leaf. I've created it; from a theoretical point of view it seems fine to me (though I've no experience with this), but I want to find out what will happen when there are directories full of files in there. A question about creating this structure: is it better to create it all in one go (takes approx 50 minutes on my PC) or should I create directories as they are needed? From a developer point of view I think the first option is better (no extra waiting time for the user), but from a sysadmin point of view, is this ok? I thought I could act as if the filesystem were already under the running application: I'll make a script that saves images as fast as it can, monitoring the following:
    - How much time does it take for an image to be saved when there is no or little space used?
    - How does this change when the space starts to be used up?
    - How much time does it take for an image to be read from a random leaf? Does this change a lot when there are lots of files?
    Does launching the command "sync; echo 3 | sudo tee /proc/sys/vm/drop_caches" make any sense at all? Is this the only thing I have to do to have a clean start if I want to start over again with my tests? Do you have any suggestions or corrections?
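
    A rough version of the timing script described above, creating leaf directories on demand and writing dummy payloads in batches; the image size, batch size and base path are arbitrary test values:

        #!/usr/bin/env python3
        """Write-timing sketch for the 000/000/NNNNNNNNN.jpg layout: leaves are
        created on demand, dummy payloads are written in batches, and the rate
        per batch is printed. Sizes and counts are arbitrary test values."""
        import os
        import time

        BASE = "imgstore"
        IMAGE_BYTES = 2 * 1024 * 1024          # ~2 MB dummy payload
        BATCH = 1000

        payload = os.urandom(IMAGE_BYTES)

        def path_for(image_id):
            s = "%09d" % image_id
            return os.path.join(BASE, s[0:3], s[3:6], s + ".jpg")

        image_id = 1
        for batch in range(10):                # 10 batches of 1000 images
            start = time.time()
            for _ in range(BATCH):
                p = path_for(image_id)
                os.makedirs(os.path.dirname(p), exist_ok=True)
                with open(p, "wb") as f:
                    f.write(payload)
                image_id += 1
            elapsed = time.time() - start
            print("batch %2d: %5.1f s  (%.1f images/s)" % (batch, elapsed, BATCH / elapsed))

    Running it once on an empty filesystem and again when the disk is mostly full should answer the first two questions; reads from random leaves can be timed the same way.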

    Read the article

  • Managing Linux Directory Permissions & SFTP

    - by Dizzle
    Good morning; I have a RHEL 5.7 web server configured to allow SSH/SFTP only to specific groups. I'd like content managers to upload content to their respective directories and have that content inherit the user/group ownership of the directory, regardless of upload method or application. For example: John is in group "web" for SSH/SFTP rights and "finance" for directory permissions, and uploads to directory "webstuff" via SFTP. Directory "webstuff" has permissions of "2760" (rwxrws---) and ownership of "apache:finance". If John uploads an update to an existing file in "webstuff", the ownership of the file stays "apache:finance". If John uploads a new file to "webstuff", the ownership of the file is "john:finance". My desire is to have any file John uploads to "webstuff" change to the directory's owner. I've tried with setuid and setgid both set, but the user ownership didn't take. I've seen mentions on ServerFault of using ACLs or a chrooted jail for SFTP, but I have yet to configure and test them, and I don't know if they're a viable solution (they could be; I just don't know because I've never done either). Any thoughts and assistance would be greatly appreciated.
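
    A small probe, under the stated assumptions about the directory, to confirm what setgid can and cannot do here: new files inherit the directory's group (finance), but the user owner is always the uploading account, and only root may chown a file to another user, so an SFTP upload can never arrive owned by "apache". The path is a placeholder:

        #!/usr/bin/env python3
        """Create a probe file in the setgid directory and report what
        ownership it actually gets. TARGET is a placeholder."""
        import grp
        import os
        import pwd
        import stat

        TARGET = "/var/www/webstuff"           # the 2760 apache:finance directory
        probe = os.path.join(TARGET, "probe.txt")

        with open(probe, "w") as f:
            f.write("ownership test\n")

        st = os.stat(probe)
        print("owner:", pwd.getpwuid(st.st_uid).pw_name)        # the uploading user
        print("group:", grp.getgrgid(st.st_gid).gr_name)        # 'finance' via setgid
        print("mode :", stat.filemode(st.st_mode))
        os.remove(probe)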

    Read the article
