Search Results

Search found 11195 results on 448 pages for 'disconnected environment'.

  • Best practice for administering a (hadoop) cluster

    - by Alex
    Dear all, I've recently been playing with Hadoop. I have a six-node cluster up and running with HDFS, and I have run a number of MapRed jobs. So far, so good. However, I'm now looking to do this more systematically and with a larger number of nodes. Our base system is Ubuntu, and the current setup has been administered using apt (to install the correct Java runtime) and ssh/scp (to propagate out the various conf files). This is clearly not scalable over time. Does anyone have any experience of good systems for administering (possibly slightly heterogeneous: different disk sizes, different numbers of CPUs on each node) Hadoop clusters automagically? I would consider diskless boot, but imagine that with a large cluster, getting the cluster up and running might be bottlenecked on the machine serving the OS. Or some form of distributed Debian apt to keep the machines' native environments synchronised? And how do people successfully manage the conf files over a number of (potentially heterogeneous) machines? Thanks very much in advance, Alex
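
    A full configuration-management tool (Puppet, Chef, cfengine) is the usual answer at scale, but as a minimal stopgap a plain loop over a node list can keep conf files in sync. A sketch, assuming a hypothetical nodes.txt of hostnames and passwordless SSH as the hadoop user:

      #!/usr/bin/env bash
      # Push the master copy of the Hadoop conf files to every node listed
      # in nodes.txt (one hostname per line; a hypothetical file).
      set -euo pipefail
      CONF_DIR=/etc/hadoop/conf            # local master copy
      for node in $(cat nodes.txt); do
        echo "Syncing conf to ${node}..."
        rsync -az --delete "${CONF_DIR}/" "hadoop@${node}:${CONF_DIR}/"
      done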

  • How to move mail accounts when migrating webhosting

    - by pkswatch
    I am migrating my website abc.com from one webhosting company to another in a shared hosting environment. Both have cPanel, and the second hosting account I am moving to is my multi-domain hosting account with 3 domains already in it. The problem is, I have many email accounts associated with my website abc.com, which are accessed using webmail. So if I move it to the other host, will I lose all those accounts and their emails? If yes, then how should I synchronise the email accounts so that all the accounts and the contained emails remain intact? I saw several sync tools like IMAP Sync, etc., but these require two hosts while synchronising, and as you see, I have just one domain name to be synchronised over 2 servers. P.S. I do not have any SSH access on either of them, and I have made a complete backup of all files using the backup wizard in cPanel.
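
    For what it's worth, imapsync can run from any third machine (your own PC, say), so the lack of SSH on either server is not a blocker, and the two sides can be addressed by IP rather than by the shared domain name. A minimal sketch with placeholder IPs and credentials, repeated once per mailbox:

      # Old and new servers addressed by IP; all values below are placeholders.
      imapsync \
        --host1 203.0.113.10 --user1 user@abc.com --password1 'oldpass' \
        --host2 198.51.100.20 --user2 user@abc.com --password2 'newpass'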

  • rsync --link-dest behaviour when run as sudo

    - by fotNelton
    In order to create regular backups, I'm using rsync together with --link-dest so as to create hard links for unchanged files. For example:

      rsync -ax \
        --partial --delete --delete-excluded --inplace \
        --exclude-from=/tmp/temp_excludes \
        --link-dest=/Volumes/Backup/current \
        /Users /Volumes/Backup/2012-06-25

    This works very well as long as I start the process from my normal user account. But as soon as I start it using sudo, it behaves erratically: rsync copies all the unchanged files instead of hard-linking them. Since sudo modifies the environment, I've also tried sudo -E in conjunction with making sure that my sudoers file has the corresponding option set. That didn't work either. So, the question is: how can I run rsync using sudo? The above example only backs up the Users directory, but I also need to back up some system files that I can only access as root.
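
    One likely explanation, worth testing: when rsync runs as root, -a additionally preserves owner and group, so files in the --link-dest tree that were created by the unprivileged user (and therefore carry different ownership) never compare equal and get recopied. A sketch that switches off ownership preservation to keep the comparison consistent with the earlier snapshots; this is an assumption about how those snapshots were made, not a confirmed fix:

      sudo rsync -ax --no-owner --no-group \
        --partial --delete --delete-excluded --inplace \
        --exclude-from=/tmp/temp_excludes \
        --link-dest=/Volumes/Backup/current \
        /Users /Volumes/Backup/2012-06-25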

  • large RAID 10 vs small RAID 1

    - by user116399
    The machine will store and serve millions of small files (<15 KB each), and all those files require a total storage space of 400 GB. Considering the exact same SATA hard drive maker and model, in the exact same environment (OS, CPU, RAM, RAID controller, etc.), which one of the setups below would be faster? A) RAID 1 with 2 drives of 2 TB each, making up a total storage of 2 TB. B) RAID 10 with 4 drives of 2 TB each, making up a total storage of 4 TB. [EDIT]: I'm aware RAID 10 is faster than RAID 1. The larger the disk, at least in theory, the longer seeks/writes will take. So, will the performance gain of RAID 10 be outweighed by the "drag" caused by the larger disk area when seek/write operations happen?
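
    Rather than reasoning purely from theory, a short fio run on each candidate array settles this for the actual workload. A minimal sketch roughly modelling random reads of many small files; the path, size, and runtime are placeholders to adjust:

      # Random 4 KB reads over a 10 GB test area; run identically on each
      # array and compare the reported IOPS and latency.
      fio --name=smallfiles --directory=/mnt/array-under-test \
          --rw=randread --bs=4k --size=10G --numjobs=4 \
          --ioengine=libaio --direct=1 --runtime=60 --time_based \
          --group_reporting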

  • is there a GOTCHA - DBCC CHECKDB ('DBNAME', NOINDEX)?

    - by Deb Anderson
    I am turning on DBCC CHECKDB in our OLTP environment (SQL 2005, 2008). System overhead is a very visible thing on our servers, so I want the checks to be as efficient as it makes sense for them to be. Hence, I want to turn on the NOINDEX option, an option I've never used before. My thinking is this: if a problem with an index is detected outside the integrity check, I can just rebuild the index; meanwhile the duration of the integrity checks will be drastically reduced, and the nastier corruption will still be detected. What is the flaw in my plan? Thanks, Deb
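
    For comparison, the option more commonly reached for when overhead is the concern is WITH PHYSICAL_ONLY, which skips the logical checks but still reads every page and validates checksums. A sketch of both invocations via sqlcmd; the server and database names are placeholders:

      REM NOINDEX: skip integrity checks of nonclustered indexes on user tables.
      sqlcmd -S MYSERVER -E -Q "DBCC CHECKDB ('MyDB', NOINDEX);"
      REM PHYSICAL_ONLY: page/checksum-level checks only, much cheaper on a busy OLTP box.
      sqlcmd -S MYSERVER -E -Q "DBCC CHECKDB ('MyDB') WITH PHYSICAL_ONLY;"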

  • Setting the server to look for the index.php file by default

    - by ????? ???????
    I am a web developer, and I've asked our sysadmin to set up a server for my team that will be used as a development environment. PHP is running as a CGI. When I try to open http://myaddress/ I receive 403 Forbidden. When I try to open http://myaddress/index.php everything is fine. How do I set the server to look for the index.php file by default? P.S. The sysadmin is not currently here, so he cannot do it for me.
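
    Assuming the server is Apache (the question doesn't say), the DirectoryIndex directive controls which file is served for a bare directory request; it can go in httpd.conf or, if AllowOverride permits, in a per-directory .htaccess. A sketch with a placeholder docroot:

      # Append to the site's .htaccess (or set DirectoryIndex in httpd.conf).
      echo 'DirectoryIndex index.php index.html' >> /var/www/html/.htaccess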

  • How to set up a simple Ubuntu Server Tomcat cluster on VirtualBox for testing?

    - by Alex Pakka
    I am looking for step-by-step instructions to set up at least two (and later more) simple Ubuntu 12.10 Server VMs on Oracle VirtualBox under Windows 7 64-bit. The test setup would be: an Apache HTTP server on the Windows host acting as a load balancer, and two lean, small-footprint Ubuntu Server guest nodes with Java 7 and Tomcat 7. The result will be that going to http://localhost:8080 balances between the two nodes and proves session replication. The intention is to help everyone doing high-availability / load-balancing development and testing to create a reasonable environment on a local workstation or mainstream notebook in as little time as possible.
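
    Creating the guest VMs themselves is quick from the Windows command line with VBoxManage. A sketch under the assumption of host-only networking (the adapter name below is VirtualBox's default on Windows); names and sizes are placeholders, repeated for tc-node2:

      REM Run from the VirtualBox install directory on the Windows host.
      VBoxManage createvm --name tc-node1 --ostype Ubuntu_64 --register
      VBoxManage modifyvm tc-node1 --memory 768 --cpus 1 ^
        --nic1 hostonly --hostonlyadapter1 "VirtualBox Host-Only Ethernet Adapter"
      VBoxManage createhd --filename tc-node1.vdi --size 8192
      VBoxManage storagectl tc-node1 --name SATA --add sata
      VBoxManage storageattach tc-node1 --storagectl SATA --port 0 --device 0 ^
        --type hdd --medium tc-node1.vdi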

  • SharePoint Session Management - which SQL Server option?

    - by frumious
    We're developing some custom web parts for our WSS 3 intranet and have just run into something we'd like to use ASP.NET sessions for. This isn't currently enabled on the development server. We'd like to use SQL Server as the storage mechanism, because the production environment is a web farm with very simple load balancing. There are 3 options you can choose from when setting up SQL Server session storage: tempdb, the default separate DB, or a named DB. Both the tempdb and default-separate-DB options create a new database to store certain information in; the tempdb option stores the actual session info in tempdb, which doesn't survive a reboot, while the default separate DB stores everything in the new database. Since you've got to create the new database either way, my question is this: why would you ever choose to store the session info in tempdb? The only thing I can think of is if you'd like to have the ability to wipe the sessions by rebooting the server, but that seems quite apocalyptic!
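
    For reference, the storage type is picked when registering session state with aspnet_regsql.exe: -sstype t keeps session data in tempdb (lost on a SQL Server restart, which really is only attractive if cheap session wipes are wanted), while -sstype p persists everything in the ASPState database. A sketch with a placeholder server name:

      REM Run from the .NET Framework directory on a web front-end.
      aspnet_regsql.exe -S SQLSERVER01 -E -ssadd -sstype p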

  • Linux Raid: Can mdadm --grow a raid1 while mounted?

    - by Chris
    I have two 500 GB drives in a RAID 1 setup that I needed to upgrade for more space. I mdadm --fail'ed each drive in turn, used dd to copy each drive to its respective larger drive (2 TB each), removed the smaller drives and replaced them with the larger ones, then reassembled the array and forced a resync. So now I've got a 500 GB RAID 1 sitting on 2 TB drives, and I wish to grow it. The plan is to use mdadm --manage /dev/md0 --grow to grow it, then boot a rescue CD, assemble the array under that environment, and run resize2fs on it. Can I use mdadm --grow on a mounted and live filesystem? Also, do I need more options to make sure the grow operation stays RAID 1?
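
    For what it's worth, growing the component size of a RAID 1 does not change its level, and ext3/ext4 filesystems can be grown while mounted, so the rescue CD step may be unnecessary. A sketch, assuming the array is /dev/md0 with an ext3/ext4 filesystem on it:

      # Expand the array to the full size of the new drives (the RAID 1 level is unchanged).
      mdadm --grow /dev/md0 --size=max
      # Watch the resync finish, then grow the filesystem online.
      cat /proc/mdstat
      resize2fs /dev/md0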

  • SVN Active Directory authentication with ProxyPass redirect in the mix

    - by Jason B. Standing
    We have a BitNami SVN stack running on a Windows machine which holds our SVN repository. It's set up to authenticate against our AD server and uses authz to control rights. Everything works perfectly if Tortoise points at http://[machine name]/svn. However, we need to be able to access it from http://[domain]/svn. The domain name points to a Linux environment that we're decommissioning, but until we do, other systems on that box prevent us from simply re-pointing the domain record. Currently, we've got a ProxyPass record on the Linux machine to forward requests through to http://[machine name]/svn. It seems to work fine, and the endpoint machine asks for credentials and then authenticates, but when that happens, the access attempt is logged as coming from the Linux box rather than from the user who has authenticated. It's almost as if some element of the credentials isn't being passed through to the endpoint machine. Has anyone done this before, or is there other info I can give to help make sense of this problem and figure out a way to solve it? Thank you!
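
    If the complaint is specifically that the endpoint's Apache access log shows the proxy's address, note that mod_proxy adds an X-Forwarded-For header carrying the original client IP, and the endpoint can log that instead. A sketch for the BitNami Apache's httpd.conf; the format name "proxied" is arbitrary:

      LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b" proxied
      CustomLog "logs/access_log" proxied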

  • Clone current host OS to a guest virtual machine.

    - by ProfKaos
    I would like to run a few repeated checks on a setup procedure I am documenting, to verify my document. I would like to somehow create a VM that replicates the environment on my machine, i.e. the host machine, onto a guest VM. Then I can use Windows' System Restore on the guest to return to the point before I commenced the setup procedure, and repeat it as many times as required until no more trial and error is needed to supplement my document. I have Virtual PC and VirtualBox available to install as host environments, running on Windows 7 Professional.
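
    One commonly used route on Windows, offered as a suggestion rather than the only way: Sysinternals Disk2vhd images the running host into a VHD, which VirtualBox can attach and boot; VM snapshots can then stand in for System Restore. Paths and the VM name are examples:

      REM Image the running host's C: drive into a VHD (run as administrator).
      disk2vhd C: D:\host-clone.vhd
      REM Attach D:\host-clone.vhd as the new VM's disk in VirtualBox, then
      REM take a snapshot before each run of the setup procedure:
      VBoxManage snapshot "HostClone" take "before-setup"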

  • How to rewrite index.php (and other valid default files) to the document root using mod_rewrite?

    - by TMG
    Hello, I would like to redirect index.php, as well as any other valid default file (e.g. index.html, index.asp, etc.), to the document root (which contains index.php) with something like this:

      RewriteRule ^index\.(php|htm|html|asp|cfm|shtml|shtm)/?$ / [NC,L]

    However, this of course gives me an infinite redirect loop. What's the right way to do this? If possible, I'd like this to work in both the development and production environments, so I don't want to specify an explicit URL like http://www.mysite.com/ as the target. Thanks!
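
    A standard way out of the loop, sketched here assuming Apache mod_rewrite: test THE_REQUEST, which holds the original HTTP request line and is not altered by later internal rewrite passes, so the rule fires only when the client literally asked for the index file. The relative target / also keeps it host-agnostic across environments:

      RewriteEngine On
      RewriteCond %{THE_REQUEST} ^[A-Z]{3,}\s/+index\.(php|html?|asp|cfm|shtml?|shtm)[\s?/] [NC]
      RewriteRule ^ / [R=301,L]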

  • Automating the installation using SSH

    - by RAY
    I am running a bash script from a remote host to run a binary file which installs the 64-bit JDK 6 update 29 on multiple VMs across the environment. It installs the file, but at the last step I have to hit Enter to complete the installation. I want to fully automate the script so that I do not have to hit Enter at the end. This is what I am using:

      ssh ${V_TIERS}@${V_TIERS} 'cd JDK; sh jdk-6u29-solaris-sparcv9.sh'

    It updates as desired, but during the install I have to hit Enter to continue and complete the installation. Can anybody please help me fully automate the update process?
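
    Since the installer only wants a newline on stdin, piping one in is usually enough; yes "" supplies blank lines indefinitely in case it prompts more than once. A sketch, untested against this particular installer:

      # Feed endless blank lines to the installer's "Press Enter to continue" prompt.
      ssh ${V_TIERS}@${V_TIERS} 'cd JDK && yes "" | sh jdk-6u29-solaris-sparcv9.sh'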

  • How to make a secure MongoDB server?

    - by Earlz
    Hello, I want my website to use MongoDB as its datastore. I've used MongoDB in my development environment with no worries, but I'm worried about security on a public server. My server is a VPS running Arch Linux. The web application will also be running on it, so MongoDB only needs to accept connections from localhost, and no other users (by SSH or otherwise) will have direct access to my server. What should I do to secure my instance of MongoDB?
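
    Given that only localhost needs access, the two most important steps are binding mongod to the loopback interface and enabling authentication. A sketch; the dbpath is a placeholder, and the same settings can live in the config file as bind_ip and auth:

      # Listen only on 127.0.0.1 and require authentication.
      mongod --bind_ip 127.0.0.1 --auth --dbpath /var/lib/mongodb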

  • I'm trying to set up Xvfb to run a GUI app on a remote server with no display

    - by jz87
    I have a third-party Java app that I need to run on a remote server. Unfortunately, the app is designed for the desktop and assumes a GUI is available. The thing is, I would like to leave this app running on the remote server without having to tie up my desktop machine with a persistent VNC connection to the remote machine. I'm trying to set up Xvfb on the remote machine to emulate a graphical environment, connect to the remote machine via VNC to launch the app and configure parameters, and then log off and let it run. Here's what I have so far, on Ubuntu 11.04 server:

      apt-get install xvfb
      apt-get install fluxbox
      apt-get install x11vnc
      Xvfb :1 -screen 0 1024x768x16 &
      fluxbox &

    At this point I run into a problem, because I get a very undescriptive error: Cannot connect to server. How do I know whether the server is running, and that it's running properly?
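
    One frequent cause of exactly this error is that fluxbox (and later x11vnc and the app) never learn which display to use; DISPLAY must point at the Xvfb instance. A sketch of the full sequence under that assumption:

      Xvfb :1 -screen 0 1024x768x16 &
      export DISPLAY=:1                 # point all subsequent X clients at Xvfb
      fluxbox &
      x11vnc -display :1 -bg -forever   # expose the virtual display over VNC
      xdpyinfo -display :1 | head -n 5  # quick check that the X server is up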

  • Do best-practices say to restrict the usage of /var to sudoers?

    - by NewAlexandria
    I wrote a package and would like to use /var to persist some data. The data I'm storing could even be thought of as an addition to /var/db. The pattern I observe is that files in /var/db and its surrounds are owned by root. The primary (intended) use of the package is filtering cron jobs, meaning you would need permissions to edit the crontab. Should I presume a sudo install of the package? Should I have the package gracefully degrade to a /usr subdir, and if so, which one? If I 'opinionate' that any non-sudo install requires a configrc (with paths), where should the package look (presuming a shared-host environment) for that config file? Incidentally, this package is a Ruby gem, and you can find it here.

  • SQL database testing: How to capture state of my database for rollback.

    - by Rising Star
    I have a SQL Server (MS SQL 2005) in my development environment. I have a suite of unit tests for some .NET code that will connect to the database and perform some operations. If the code under test works correctly, then the database should be in the same (or a similar) state to how it was before the tests. However, I would like to be able to roll back the database to its state from before the tests run. One way of doing this would be to programmatically use transactions to roll back each test operation, but this is difficult and cumbersome to program and could easily lead to errors in the test code. I would like to be able to run my tests confidently, knowing that if they destroy my tables, I can quickly restore them. What is a good way to save a snapshot of one of my databases with its tables so that I can easily restore the database to its state from before the test?
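
    SQL Server 2005 has a feature aimed at exactly this: database snapshots (Enterprise and Developer editions only). Creating a snapshot before the test run and reverting afterwards undoes every change in one step. A sketch via sqlcmd; the names and path are placeholders, and NAME must be the database's logical data-file name:

      REM Take the snapshot before the tests:
      sqlcmd -S .\SQL2005 -E -Q "CREATE DATABASE MyDb_snap ON (NAME = MyDb_Data, FILENAME = 'C:\snaps\MyDb.ss') AS SNAPSHOT OF MyDb;"
      REM Revert after the tests (requires exclusive access and no other snapshots):
      sqlcmd -S .\SQL2005 -E -Q "RESTORE DATABASE MyDb FROM DATABASE_SNAPSHOT = 'MyDb_snap';"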

  • Bacula configuration for clients that are turned on and off randomly

    - by Rastloser
    I'm evaluating Bacula as a centralized backup tool for a small network where users will turn machines on and off unpredictably. Some of the headless Linux boxes I need to back up are intended to be turned off by pressing the on/off button on the case, with no way of telling the user to wait for a backup job to finish. So, we don't know when backup jobs may run (anacron might help with this, right?), and we don't know whether they'll be allowed to finish. Is Bacula a reasonable choice for such an environment?

  • Any Windows-based OpenID servers out there? [closed]

    - by Brian Knoblauch
    I've been looking to set up an OpenID server for a special project, but haven't found any workable OpenID server software packages. I was originally looking for a *nix solution and found several, but they all had some kind of issue. So far I've tried JOIDS, community-id, and a couple of others whose names I unfortunately can't remember. I've also come to the conclusion that even if I had managed to get one of those going, the management/upgrade cycles would have placed an undue burden on the company (only a couple of part-time sysadmins with *nix knowledge; the day-to-day people are primarily Windows). So, I'm hoping someone knows of a functional Windows-based one that will be easy to run in a minimal-support environment...

  • Why do mapped drives only reappear after logging out and back in, and not after a reboot?

    - by razumny
    I work in a corporate environment where we use mostly Windows 7 Professional computers, though some legacy applications still run on Windows XP. We have security in place on the network that denies access to network resources for computers that are not members of Active Directory. When logging in, our users get their home folder and a common network drive mapped to H: and F:, respectively. Sometimes this does not happen, and the drives are not mapped. The solution is to have the user log off and back in to Windows; if they reboot instead, the drives remain unmapped. Does anyone know why this may be?

  • How to configure installed Ruby and gems?

    - by NARKOZ
    My current gem env returns:

      RubyGems Environment:
        - RUBYGEMS VERSION: 1.3.6
        - RUBY VERSION: 1.8.7 (2008-08-11 patchlevel 72) [x86_64-linux]
        - INSTALLATION DIRECTORY: /home/USERNAME/.gems
        - RUBYGEMS PREFIX: /home/narkoz
        - RUBY EXECUTABLE: /usr/bin/ruby1.8
        - EXECUTABLE DIRECTORY: /home/USERNAME/.gems/bin
        - RUBYGEMS PLATFORMS:
          - ruby
          - x86_64-linux
        - GEM PATHS:
          - /home/USERNAME/.gems
          - /usr/lib/ruby/gems/1.8
        - GEM CONFIGURATION:
          - :update_sources => true
          - :verbose => true
          - :benchmark => false
          - :backtrace => false
          - :bulk_threshold => 1000
          - "gempath" => ["/home/USERNAME/.gems", "/usr/lib/ruby/gems/1.8"]
          - "gemhome" => "/home/USERNAME/.gems"
        - REMOTE SOURCES:
          - http://rubygems.org/

    How can I change the path /home/USERNAME/ to my own without uninstalling? OS: Debian Linux
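
    The per-user paths in that output come from the gempath/gemhome keys in ~/.gemrc and from the GEM_HOME/GEM_PATH environment variables, so they can be changed without uninstalling anything; existing gems can then be re-installed into the new location. A sketch, with the new directory as a placeholder:

      # In ~/.bashrc (or set the same values in ~/.gemrc as gemhome/gempath):
      export GEM_HOME="$HOME/my-gems"
      export GEM_PATH="$GEM_HOME:/usr/lib/ruby/gems/1.8"
      export PATH="$GEM_HOME/bin:$PATH"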

  • NGINX - Two different rails apps under same domain

    - by Murkin
    I have two different Rails (Passenger) apps that I want to host on one server:

      somehost.com/       <-- App #1
      somehost.com/admin  <-- App #2

    I tried playing with the 'location' directive, but failed to get both to operate. Can someone suggest the correct approach? (I would prefer both to share the same environment, only launching from different directories.) EDIT: a sample of the (desired) config; I am trying to do something like:

      server {
        listen 80;
        server_name myhost.com;
        rails_env production;
        passenger_enabled on;
        location / {
          root /opt/main_site/public/;
        }
        location /dev {
          root /opt/admin_site/public/;
        }
      }
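
    For what it's worth, Passenger's documented pattern for serving a Rails app under a sub-URI is to symlink the sub-app's public directory into the main app's document root and declare it with passenger_base_uri, rather than using a second root inside a location block. A sketch under that assumption (shell command first, then the nginx server block):

      # Shell: expose the admin app inside the main app's docroot.
      ln -s /opt/admin_site/public /opt/main_site/public/admin

      # nginx:
      server {
        listen 80;
        server_name myhost.com;
        root /opt/main_site/public;
        passenger_enabled on;
        rails_env production;
        passenger_base_uri /admin;
      }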

  • AD Local Admins without password sharing

    - by Cocoabean
    My team is building out an Active Directory environment in a small grad school, with support for general computer labs and staff/faculty machine and account management. We have a team of student consultants who are hired to do general help-desk work. As of now, we have a local admin account on every machine; it has the same password, and all of us know it. I know it's not best practice, and I want to avoid this with the new setup. We want to have local admin accounts in case there are network issues that prevent AD authentication, but we do not want this account to be generic with a shared password. Is there a way we can get each machine to cache the necessary information to authenticate a group of local admins, so that if AD is somehow inaccessible, student consultants can still log in with their AD admin accounts?
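
    Windows does this out of the box: domain credential caching (10 logons by default) lets a previously used AD account log on while the domain is unreachable, so putting an AD group into each machine's local Administrators group largely removes the need for a shared local account. A sketch; DOMAIN\LabAdmins is a placeholder group name:

      REM Add the AD admin group to local Administrators (or use Restricted Groups via GPO):
      net localgroup Administrators DOMAIN\LabAdmins /add
      REM Credential caching is on by default (10 logons); verify it hasn't been disabled:
      reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v CachedLogonsCount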

  • High CPU Steal percentage on Amazon EC2 Instance

    - by Aditya Patawari
    I am experiencing a high CPU steal percentage on an Amazon EC2 large instance. I know it means that my virtual CPU is waiting on the real CPU of the machine for time. My question is: what can I do to reduce this percentage and get the maximum out of the CPU? The steal percentage is consistently at 20%, and the system load crosses 10 when this happens. I have checked memory and network, and I am sure they are not the bottleneck. Is that normal for such an environment? Also, are there any system-level optimization techniques for reducing the steal percentage from within the virtual instance?

      avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                52.38    0.00    8.23    0.00   21.21   18.18

  • Network Share unavailable after DNS Change

    - by Justin Largey
    Hi, I have a server called Server1 with various network shares on it. Our users map to these shares using \\Server1\FileShareName1. During a DR test, we rerouted all network traffic from Server1 to Server21. All folder shares are set up on Server21. We were hoping the network shares would still be accessible using \\Server1\FileShareName1; unfortunately, they are not. Does anyone know why this is happening? This is a Win2003 environment, and DNS was flushed. I confirmed that the IP addresses match between the two servers. Any help or insight is much appreciated.
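
    A common culprit on Windows Server 2003: the Server service rejects SMB connections made under a name that isn't its own, so reaching Server21 via the alias Server1 fails even though the name resolves. The usual workaround is the DisableStrictNameChecking registry value (plus, in a Kerberos domain, registering an SPN for the alias). A sketch to run on Server21:

      REM Allow the server to answer to alias names (restart the Server service afterwards):
      reg add "HKLM\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters" /v DisableStrictNameChecking /t REG_DWORD /d 1 /f
      REM In a Kerberos environment, also register an SPN for the alias:
      setspn -A host/Server1 Server21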
