Search Results

Search found 4043 results on 162 pages for 'mod cluster'.


  • Is there a convenient method to pull files from a server in an SSH session?

    - by tel
     I often SSH into a cluster node for work and after processing want to pull several results back to my local machine for analysis. Typically, to do this I use a local shell to scp from the server, but this requires a lot of path manipulation. I'd prefer to use a syntax like interactive FTP and just 'pull' files from the server to my local pwd. Another possible solution might be to have some way to automatically set up my client computer as an ssh alias so that something like scp results home:~/results would work as expected. Is there any obscure SSH trick that'll do this for me? Working from grawity's answer, a complete solution in config files looks like this. Local .ssh/config:

         Host ex
             HostName ssh.example.com
             RemoteForward 10101 localhost:22

     ssh.example.com .ssh/config:

         Host home
             HostName localhost
             Port 10101

     This lets me do commands exactly like scp results home: transferring the file results to my home machine.
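
     For reference, a hedged ad-hoc equivalent that needs no config files at all, assuming sshd is running on the local machine and port 10101 is free on the server:

         # open the reverse tunnel for the length of the session
         ssh -R 10101:localhost:22 user@ssh.example.com
         # then, from the shell on the server, copy back through the tunnel
         scp -P 10101 results localhost:~/results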


  • 403 Forbidden on Apache (CentOS) Server

    - by pouya
     This is my VM setup: HOST: Windows 7 Ultimate 32-bit; GUEST: CentOS 6.3 i386; virtualization software: Oracle VirtualBox 4.1.22; networking: NAT (port forward HOST:8080 => GUEST:80); shared folder: centos. All the project files go into the shared folder, and for each project a virtual host conf file is created in /etc/httpd/conf.d/, like /etc/httpd/conf.d/$domain. I wasn't able to see anything in my browser before disabling both the Windows firewall and iptables in CentOS. After that, if I type for example http://www.$domain:8080/, all I see is: "Forbidden. You don't have permission to access / on this server. Apache/2.2.15 (CentOS) Server at www.$domain.com Port 8080". A sample virtual host conf file:

         <VirtualHost *:80>
             # General
             DocumentRoot /media/sf_centos/path/to/public_html
             ServerAdmin webmaster@$domain
             ServerName www.$domain
             ServerAlias $domain *.$domain
             # Logging
             ErrorLog /var/log/httpd/$domain-error.log
             CustomLog /var/log/httpd/$domain-access.log combined
             # mod_rewrite
             RewriteEngine On
             RewriteLog /var/log/httpd/$domain-rewrite.log
             RewriteLogLevel 0
         </VirtualHost>

     The centos shared folder is available to the guest at /media/sf_centos. These are the file permissions for sf_centos: drwxrwx--- root vboxsf (the vboxsf group includes apache and root). So these are my questions: 1. How do I solve the Forbidden problem? 2. How should I set up both host and guest firewalls? 3. How can I improve this development environment to simulate a production environment as closely as possible, especially regarding security?
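
     A hedged starting point for the Forbidden problem, assuming the mount and paths above; on CentOS 6 the usual culprits are the vboxsf mount mode (which ignores chmod) and SELinux:

         # remount the share world-readable so the apache user can traverse it
         mount -t vboxsf -o umask=0022 centos /media/sf_centos
         # check whether SELinux is the blocker (re-enable and label properly afterwards)
         setenforce 0
         # and make sure Apache 2.2 grants the directory explicitly:
         # <Directory /media/sf_centos/path/to/public_html>
         #     Order allow,deny
         #     Allow from all
         # </Directory>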


  • JBoss behind NAT hostname problem

    - by z0mbix
     My company has a JBoss cluster sitting behind a firewall that performs NAT. We forward ports from the firewall to JBoss so that our client application can access the server. The trouble is that when JBoss replies, it tells the clients to connect to the internal hostname, not the external one with which the initial connection was made. Is this something that is easily resolved/configured? How are other JBoss app servers configured behind NAT firewalls? Split-horizon DNS? Many thanks
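
     A hedged sketch of the classic remedy for this symptom: RMI stubs embed whatever java.rmi.server.hostname resolves to, so starting JBoss with the external name forces replies to carry it (external.example.com is a placeholder; flags from the AS 4/5-era run.sh):

         ./run.sh -b 0.0.0.0 -Djava.rmi.server.hostname=external.example.com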


  • XAMPP server giving 404 error when requested over an IPv4 connection

    - by boyb
     This is in reference to a previous question that I asked, which was answered by womble: http://serverfault.com/a/406280/127729

     "So, now we have the real DNS records, we can do some diagnosis. dig for both A and AAAA on akosiboybastos.broker.freenet6.net gives a valid response, with an appropriate address. Good. dig for both A and AAAA on bastosforum.strangled.net gives the same responses (with a CNAME response thrown in). Also good. This means that the problem is not DNS-related, as those records are in order. wget -6 bastosforum.strangled.net/ gives a 200 OK response. wget -4 bastosforum.strangled.net/ gives a 404 Not Found response. This means that your webserver is misconfigured so that it's not serving the response you desire on IPv4. Given that the initial DNS problem asked in this question has been solved, I would recommend posting a new question with relevant webserver-related configuration, if you can't determine the configuration error yourself."

     I am using XAMPP (latest version) running phpBB 3.0.10 via an IPv6 tunnel from Freenet6, and my domain is akosiboybastos.broker.freenet6.com - nothing fancy, just an out-of-the-box install (with a few cosmetic mods). Both IPv4 and IPv6 traffic can connect using that URL, but when I put a CNAME record on my test domain, bastosforum.strangled.net, pointing to akosiboybastos.broker.freenet6.com, only IPv6 can connect. As womble suggested, this is a misconfigured webserver. To be honest, I don't know where to start checking, as the server works fully if you use the domain given by Freenet6 (akosiboybastos.broker.freenet6.com). Any info on how to go about this server issue is welcome, as I'm really a noob when it comes to computers. regards boyb
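
     One hedged place to start, assuming Apache name-based virtual hosts inside XAMPP: if the only vhost matches the Freenet6 name, IPv4 requests arriving with the strangled.net Host header may fall through to a different default root and 404. Adding an alias would rule that out (the path is hypothetical):

         <VirtualHost *:80>
             ServerName akosiboybastos.broker.freenet6.com
             ServerAlias bastosforum.strangled.net
             DocumentRoot "C:/xampp/htdocs"
         </VirtualHost>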


  • Logging the client IP with Nginx/Varnish/Apache

    - by jetboy
    I have Nginx listening on port 443 as an SSL terminator, and proxying unencrypted traffic to Varnish on the same server. Varnish 3 is handling this traffic, and traffic coming in directly on port 80. All traffic is passed, unencrypted, to Apache instances on other servers in the cluster. The Apache instances use mod_rpaf to replace the logged client IP with the contents of the X-Forwarded-For header. My problem is that if the traffic is coming via Nginx, while the 'correct' client IP is getting logged in the VarnishNCSA logs, it looks as if Varnish is (understandably) replacing Nginx's X-Forwarded-For header with 127.0.0.1 downstream, and this is getting logged with Apache. Is there a nice simple way to stop Varnish rewriting X-Forwarded-For if it's already populated?
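
     A hedged Varnish 3 sketch: the builtin vcl_recv appends client.ip (here 127.0.0.1) to any existing X-Forwarded-For, and that builtin code runs after custom VCL falls through, so one workaround is to strip the trailing local hop on the backend side before the request reaches Apache:

         sub vcl_miss {
             set bereq.http.X-Forwarded-For =
                 regsub(bereq.http.X-Forwarded-For, ", 127\.0\.0\.1$", "");
         }
         sub vcl_pass {
             set bereq.http.X-Forwarded-For =
                 regsub(bereq.http.X-Forwarded-For, ", 127\.0\.0\.1$", "");
         }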


  • Unable to log in to CentOS

    - by Rendl
     I had set up a multinode cluster using CentOS with VMware yesterday. Today, when I reboot the nodes, I get the below error on startup:

         There is a problem with the configuration server.
         (/usr/libexec/gconf-sanity-check-2 exited with status 256)

     I am unable to log in as root or any other user, as the screen is frozen. The solutions online say to change the permissions on some tmp files; my problem is that I cannot reach a terminal because I cannot log in. On reboot I also don't get any recovery options in CentOS - I only see the GRUB command line. I am new to Linux and Hadoop. Please help.
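
     A hedged recovery path, assuming the usual cause of this gconf error (lost permissions on /tmp): boot to a root shell from GRUB by appending single (or init=/bin/bash) to the kernel line, then restore the sticky world-writable bits:

         mount -o remount,rw /      # the root fs may be mounted read-only here
         chmod 1777 /tmp /var/tmp   # 1777 = world-writable with sticky bit
         reboot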


  • Auth user and exec a Node app only with Apache?

    - by Blame
     I couldn't find an answer on the web and I've been trying for days now, so I hope that someone with more experience with Apache can help me out. I am writing a web editor, and the user should be able to edit a file that is on the server, in a directory the user has access to. The problem I am facing is that I need to authenticate against the system users (shadow/passwd). So the user should be able to log in with a system account, and then the Node app which does all the logic should be started with that user's rights. I hope to get this working without any additional script, using only Apache. I found out two things: I can use mod_auth_pam to authenticate the user, and there is a module called suEXEC which can exec the Node app as a specified user. The problem is that I have to hard-code which user suEXEC uses, but I want to decide that when the user logs in. Is there any way to authenticate a user against shadow/passwd and then exec a program with that user's rights? I don't want to run the Node app as root, and each user should only be able to access his own files. Any help would be appreciated! Thanks, Kodak
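
     A hedged sketch of the Apache side, assuming mod_auth_pam is installed. Note that suEXEC pins the user per virtual host via SuexecUserGroup and only applies to CGI, so by itself it cannot choose the user at login time; running something as the authenticated user generally needs a setuid helper outside Apache:

         <VirtualHost *:80>
             SuexecUserGroup editor editor     # hypothetical fixed account, per vhost
             <Location /editor>
                 AuthType Basic
                 AuthName "System login"
                 AuthPAM_Enabled on            # mod_auth_pam: authenticate via PAM/shadow
                 Require valid-user
             </Location>
         </VirtualHost>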


  • Configuring memcached for a particular scenario

    - by pradeepchhetri
     I have a web application which queries an OpenTSDB server (which in the backend uses an HBase cluster) for the datapoints of different metrics, and I plot those metrics using the dygraphs JavaScript graphing library. Since fetching the past day's datapoints for a single metric from OpenTSDB itself takes nearly 2 seconds, my application, which plots nearly 25 metrics, is becoming very slow. To reduce this latency, I am thinking of using the memcached module of PHP 5 to cache all the queries. But I have a few questions regarding memcached: 1. Is there any way to configure memcached to keep updating its cache in the background by running some command-line queries at a particular interval? 2. Is there any way to configure memcached to always answer a query from the cache instead of first updating the cache? My application only plots datapoints for the past day, and missing some datapoints is not that critical.
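
     Memcached itself is passive: it cannot refresh or repopulate entries on its own, so both behaviors are usually built with an external warmer job that overwrites the keys on a schedule while the web app only ever reads them. A hedged PHP sketch, run from cron every minute or so (metric names, host and key scheme are hypothetical):

         <?php
         // warm.php - keeps the day's datapoints hot in memcached
         $mc = new Memcached();
         $mc->addServer('127.0.0.1', 11211);
         $metrics = array('sys.cpu.user', 'sys.mem.free');   // your ~25 metric names
         foreach ($metrics as $m) {
             // OpenTSDB 1.x HTTP query for the past day of datapoints
             $data = file_get_contents(
                 "http://opentsdb.example.com:4242/q?start=1d-ago&m=sum:$m&ascii");
             if ($data !== false) {
                 $mc->set("tsdb:$m:1d", $data, 0);  // TTL 0: cron, not expiry, refreshes it
             }
         }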


  • How can one domain route to an always-changing pool of servers?

    - by ryeguy
     I'm sure this is an easy solution, I'm just not too familiar with how DNS works or whether that's even related to this problem. If I'm running a web service on Amazon EC2, distributed across many instances, how can I make it so that a single domain name can be used to access the entire pool of servers, which will be changing from time to time? Since the instances may be present one second but gone the next (and vice versa), I need a way to randomly pick an active member of the cluster to route to. The updates would have to be instantaneous. Is this even possible, with DNS caching and all?
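
     Because resolvers cache records however they please, pure DNS cannot deliver instantaneous updates; on EC2 the usual answer is a load balancer with a stable name in front of the changing pool. A hedged zone-file sketch of the DNS-only fallback (documentation addresses, short TTL):

         ; round-robin A records, 60-second TTL - approximate, never instantaneous
         pool.example.com.  60  IN  A  203.0.113.10
         pool.example.com.  60  IN  A  203.0.113.11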


  • What's a good tool for collecting statistics on filesystem usage?

    - by Kamil Kisiel
     We have a number of filesystems for our computational cluster, with a lot of users that store a lot of really large files. We'd like to monitor the filesystems and help optimize their usage, as well as plan for expansion. To do this, we need some way to monitor how these filesystems are used. Essentially I'd like to know all sorts of statistics about the files: age, frequency of access, last accessed times, types, sizes. Ideally this information would be available in aggregate form for any directory, so that we could monitor it based on project or user. Short of writing something up myself in Python, I haven't been able to find any tools capable of performing these duties. Any recommendations?
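
     A hedged starting point with GNU find, dumping one line per file for later aggregation (access times are only meaningful if the filesystems are not mounted noatime):

         # atime, mtime, size, type, path - pipe into awk/Python for per-project rollups
         find /fsroot -printf '%A@ %T@ %s %y %p\n' > /tmp/fs-stats.txt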


  • Postfix cleanup daemon access control

    - by Flimzy
     Is there any way to control which hosts are permitted to connect to the cleanup daemon over TCP? Our master.cf contains:

         2526 inet n - - - 0 cleanup

     This is necessary because we have a cluster of SMTP servers running custom code, and they can all inject mail into the centralized Postfix server via the cleanup daemon. However, we want to allow only our authorized servers to connect to the cleanup daemon; the current configuration allows any host to connect to port 2526. Clearly we can use iptables to restrict access, but is there a way to do this within Postfix itself?
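
     One hedged mitigation within Postfix itself: the first field of a master.cf inet entry may carry a bind address, so the listener can be tied to an internal interface that only the SMTP cluster can reach (10.0.0.1 is a placeholder; iptables remains the real access control):

         10.0.0.1:2526 inet n - - - 0 cleanup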


  • Can I recover files on a disk with the first 5% completely wiped (overwritten with 1s)?

    - by ARA
     Recently a virus attacked my PC and wiped the first 5% of my hard disk, which has one partition. I viewed the disk in a hex viewer program (like Active UNDELETE), cleared the virus data, and overwrote it with 1s. I want to recover a large file that is about 10 GB, but no recovery tool seems to be able to recover any files. I want to know: in theory, is this file recoverable? I think the files are fragmented. I have researched the NTFS file system, and I understand that cluster information is saved only in the MFT. Is there any way to recover a file without the MFT structure?
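
     Without the MFT, recovery tools fall back to signature-based carving, which scans the data area for known file headers; it needs no filesystem metadata, but it generally cannot reassemble a fragmented 10 GB file into one piece. (NTFS does keep a mirror, $MFTMirr, but it only covers the first four MFT records.) A hedged attempt with a free carver:

         sudo photorec /dev/sdX    # /dev/sdX is a placeholder for the affected disk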


  • Forensics on Virtual Private Servers [closed]

    - by intiha
     So these days, with talk about hacked machines being used for malware spreading and botnet C&C, the one issue that is not clear to me is: what do law enforcement agencies do once they have identified a server as being a source or controller of an attack/APT, and that server is a VPS in my cluster/datacenter? Do they take away the entire machine? This option seems to have a lot of collateral damage associated with it, so I am not sure what happens and what the best practices are for sysadmins to help law enforcement with its job while keeping our own jobs!


  • Timestamp Updating Constantly on /dev/null

    - by motorleague
     I've been working on a problem with a /dev/null file on an AIX system (for background, it looks as though it was inadvertently deleted and recreated as a normal file by somebody), but in trying to determine what caused the problem, I noticed that the timestamp on it seems to update every minute. I've observed this on several AIX servers at my workplace.

     At present I can't entirely rule out this being something specific to the application used at my workplace, so I compared with CentOS and Debian machines at home last night. The CentOS box, which runs 24 hours, had a mod time on /dev/null of around 4 days ago (during which time it was essentially just being used as a web browser and multimedia player, although it would have had active but essentially unused Apache, MySQL and VMM processes running in the background). The timestamp on /dev/null on the Debian machine, a just-booted laptop, pretty much reflected the boot time, but I tested redirecting STDIN from and STDOUT to it, and the modification time was unchanged (I'm not 100% sure whether directing data to /dev/null constitutes "writing to it" the way it would for a normal file).

     So my question is essentially: could anybody please offer any advice as to what circumstances (permissions changes etc. aside) might cause the timestamp on /dev/null to update? Thanks very much for any suggestions. Alex.
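
     A hedged way to check the write behaviour, at least on a Linux box (AIX's stat output differs): snapshot the timestamps around a redirect to confirm that writes leave a proper character device untouched, and keep an eye on the file type itself:

         stat /dev/null            # note the timestamps and 'character special' type
         echo test > /dev/null     # writing to a real device node...
         stat /dev/null            # ...should leave the timestamps unchanged
         ls -l /dev/null           # 'crw-rw-rw-' is correct; '-rw-...' means a plain file again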


  • RSAT and double accounts

    - by Ryaner
     Since we are looking at migrating our domain admins to use non-domain-admin accounts and runas for admin tasks, a discussion has begun: how do others use RSAT with runas? I know you can Shift+Right-click and choose "Run as different user" to launch it with admin rights, but it loses the icon on the taskbar. The question has also been put: why do Microsoft release the RSAT tools if they recommend admins run under non-domain-admin accounts? Edit: Further to this, some of the initial testing with RSAT via "Run as different user" hasn't worked out well. A few of the options don't function in Hyper-V Manager and Failover Cluster Manager.
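
     A hedged command-line alternative that stays scriptable (the account is a placeholder; dsa.msc is AD Users and Computers, virtmgmt.msc is Hyper-V Manager):

         runas /user:DOMAIN\adm-jsmith "mmc dsa.msc"
         runas /user:DOMAIN\adm-jsmith "mmc virtmgmt.msc"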


  • Multiple public/private key pairs for the same user

    - by bruceb
     First, sorry if this question has already been asked/answered - I've searched, but perhaps I haven't recognised the answer. What we have is a cluster of servers which need to access a single remote server using sftp. We are migrating from one remote server to another at the same (remote) location. We also want to refresh the public/private key pairs in the configuration as part of an ongoing security review. My question is: can we have multiple public/private key pairs for the same user between server A and server B? I want to do this to allow for cutover testing, but am concerned that the software checking keys may only try one key of each type (RSA/DSA?) before rejecting the connection method and moving to the next type of key. Hope it's a straightforward question - please let me know if I need to supply more details. Thanks in advance, Bruce
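
     A hedged sketch of the standard arrangement: the remote account's authorized_keys may hold any number of public keys, and the client can pin a specific key per host alias in ~/.ssh/config, so old and new keys can coexist during cutover (hosts and key files are placeholders):

         Host oldremote
             HostName old.example.com
             IdentityFile ~/.ssh/id_rsa_old
         Host newremote
             HostName new.example.com
             IdentityFile ~/.ssh/id_rsa_new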


  • Windows 2012 Master & Ubuntu Bind 9 Slave & SOA

    - by RecentCoin
     I'm kinda like the maid... I don't do Windows. But thanks to new things we're implementing, I'm now attempting to replicate a single zone from our AD cluster. We had this working just fine, but someone had to "adjust" it, and that broke the replication completely. We've gotten it restarted, but now a different DC is showing as the SOA. Does it matter which of the domain controllers is listed as the SOA? The contents of the zone file appear to be correct. Part of me says "Good enough. Leave it be." but the rest of me doesn't want a 3 AM phone call. So does anyone know if it matters which DC is listed as the SOA?
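
     For reference, a hedged BIND 9 slave stanza for pulling an AD-integrated zone (zone name and DC address are placeholders). AD-integrated DNS is multi-master, so any authoritative DC can act as the transfer master, which is also why the DC named in the SOA can legitimately vary:

         zone "corp.example.com" {
             type slave;
             file "slaves/corp.example.com";
             masters { 192.0.2.10; };    // any reachable DC hosting the zone
         };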


  • High Availability Clustering and Virtualization

    - by tmcallaghan
     I'm trying to understand how the various virtualization vendors (specifically Amazon EC2, but also VMware and Xen) enable software vendors to provide a real HA solution where the servers are virtualized. Specifically, if I'm running any HA application (Exchange, databases, etc.) I need to ensure that my redundant virtual "servers" aren't located on the same physical server. Using in-house virtualization solutions (VMware, Xen, etc.) I can provision accordingly, as well as check the virtual-to-physical arrangement; I could, however, accidentally vMotion onto the same physical hardware. With EC2, I don't even have the ability at provision time to select different physical servers. Since their Cluster Compute Instances are one virtual server per physical server, they seem to be the only way to guarantee I don't have a false sense of redundancy. Any ideas or thoughts would be helpful. What are others doing about this problem? If the vendors provided an API where I could get something as simple as a unique physical system identifier, I could at least know whether I'm going to have an issue. -Tim
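
     On EC2 specifically, one hedged way to get physical separation without a per-host API is to place the redundant instances in different Availability Zones, which are distinct physical facilities (AMI ID and zones are placeholders; classic EC2 API tools syntax):

         ec2-run-instances ami-12345678 -z us-east-1a
         ec2-run-instances ami-12345678 -z us-east-1b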


  • Why would the Apache parent process restart silently?

    - by miracle
     I run Apache 2.2.9 with MPM prefork on Debian Lenny. Following http://httpd.apache.org/docs/2.2/mod/prefork.html, I would expect there to be one parent process, running as root and listening as configured, which starts child processes as defined by the Min/Max/etc. directives. I expect the children to be restarted as per MaxRequestsPerChild, but the parent process to stay put with one process id until I restart it manually. Out of a little paranoia, I started monitoring listening ports including process ids, with a cron job every 20 minutes running netstat -ap | grep LISTEN and diffing the output. Sometimes (about once per day) I see a series of this:

         8c8
         < tcp6       0      0 [::]:www      [::]:*     LISTEN      6194/apache2
         ---
         > tcp6       0      0 [::]:www      [::]:*     LISTEN      6607/apache2
         10c10
         < tcp6       0      0 [::]:https    [::]:*     LISTEN      6194/apache2
         ---
         > tcp6       0      0 [::]:https    [::]:*     LISTEN      6607/apache2

     Over a period of an hour or three, the parent would change its pid at least once every 20 minutes, without any explanation in the log files or any other hint that anything is going wrong. This is not what I expected. What am I missing?
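
     A hedged line of investigation: a graceful reload (apache2ctl graceful, which Debian's logrotate normally triggers) keeps the parent PID, so a changing parent PID implies a full stop/start coming from somewhere. Matching the PID-change times against cron-driven jobs and the error log is a cheap first test:

         grep -r apache2 /etc/logrotate.d/ /etc/cron.d/ /etc/cron.daily/
         # each full start logs a fresh banner:
         grep -i 'resuming normal operations' /var/log/apache2/error.log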


  • MongoDB PHP EC2 Setup Configuration

    - by nathansizemore
     I am new to web development and server setup, and am looking for some advice or a link to a tutorial on setting up a production system. Right now I have one server (Ubuntu, Apache, MongoDB, and PHP). It receives a request, PHP queries Mongo, and PHP sends out the requested data. How do I make that work with more servers? I've read that you can make a cluster of a primary and two slave nodes which work as separate servers running Mongo, but do those also run PHP? Or is the primary the only one running PHP? I have read some docs on the Mongo site and watched a video of someone from 10gen going through it, but they are geared towards people who seem to already understand this stuff; I have no idea and need to start from the beginning. If anyone can help me understand where PHP (acting as my API) lives in these clusters, that would be greatly appreciated! Thanks in advance for any help!
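
     A hedged sketch of where each piece sits: PHP lives on the web servers (each running Apache plus the Mongo driver), not on the database nodes, and the driver is handed the whole replica set so it can find the primary by itself (host names and set name are placeholders; legacy PHP driver syntax):

         <?php
         // on each web server - connect to the replica set, not to one machine
         $conn = new Mongo(
             "mongodb://db1.example.com:27017,db2.example.com:27017,db3.example.com:27017",
             array("replicaSet" => "rs0")
         );
         $collection = $conn->selectDB("mydb")->selectCollection("results");
         $doc = $collection->findOne(array("_id" => $id));  // reads/writes go via the primary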


  • Can I host multiple sites with one Amazon EC2 instance [duplicate]

    - by user22
     This question already has an answer here: Can you help me with my capacity planning? (2 answers)

     I currently have a VPS server for which I pay around $75 per month, and I get: 40 GB HD, 2 GB RAM, 100 GB bandwidth, and a 6-core CPU (which I don't use much). I have only one live website running, with traffic of at most 100 user visits per day. I mostly do my testing there and run some of my internal sites for playing with code, but I do need one server. I am thinking of moving to Amazon EC2 if the price difference is not too great, because then I can learn some more. I am thinking of getting the 3-year Heavy Utilization Reserved Instance because my server will be running day and night. I tried the online calculator with a Medium instance, Heavy Utilization, reserved for 3 years: for EC2 it comes to $31 per month (effective price), and with EBS and S3 on top I figure it comes to around $40 in total, so I would be at no loss compared to what I have at present. Am I correct, or have I missed something?

     In my current VPS I have Apache for PHP sites and mod_wsgi for Python sites. I am not sure if I will be able to do all of that in Amazon EC2. Can I host both Python and PHP sites in one Amazon EC2 instance using named virtual hosts and Nginx?
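
     Nothing about EC2 changes how Apache works: an instance is an ordinary Linux VM, so an existing PHP + mod_wsgi setup carries over as-is. A hedged name-based vhost sketch for Apache 2.2 (hostnames and paths are placeholders; Nginx could equally sit in front as a reverse proxy):

         NameVirtualHost *:80
         <VirtualHost *:80>
             ServerName php-site.example.com
             DocumentRoot /var/www/php-site
         </VirtualHost>
         <VirtualHost *:80>
             ServerName py-site.example.com
             WSGIScriptAlias / /var/www/py-site/app.wsgi
         </VirtualHost>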


  • How to set up a `veth` virtual network

    - by Reinder
     I'd like to set up three virtual network interfaces (veth) which can communicate with each other. To simulate a three-node cluster, each program then binds to one veth interface. I'd like to do it without LXC if possible. I tried the following. Created three veth pairs:

         sudo ip link add type veth

     Created a bridge:

         sudo brctl addbr br0

     Added one of each pair to the bridge:

         sudo brctl addif br0 veth1
         sudo brctl addif br0 veth3
         sudo brctl addif br0 veth5

     Configured the interfaces:

         sudo ifconfig veth0 10.0.0.201 netmask 255.255.255.0 up
         sudo ifconfig veth2 10.0.0.202 netmask 255.255.255.0 up
         sudo ifconfig veth4 10.0.0.203 netmask 255.255.255.0 up

     Then I verified whether it works using:

         ping -I veth0 10.0.0.202

     but it doesn't :( I then added IP addresses to the veth1, veth3, veth5 and br0 interfaces in the 10.0.1.x/24 range, but that doesn't help. Any ideas, or a guide? All I can find is how to use veth with LXC. Or am I trying something that isn't possible?
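
     A hedged explanation plus sketch: because all three addresses are local to the same host, the kernel short-circuits them over the loopback path, so the pings never traverse the veths or the bridge (ping -I does not change that). Moving each peer into its own network namespace makes the three "nodes" genuinely separate; the bridge-side ends must also be brought up:

         sudo ifconfig br0 up && sudo ifconfig veth1 up    # bridge side for node 1
         sudo ip netns add node1
         sudo ip link set veth0 netns node1
         sudo ip netns exec node1 ifconfig veth0 10.0.0.201 netmask 255.255.255.0 up
         # repeat for node2 (veth2/veth3) and node3 (veth4/veth5), then:
         sudo ip netns exec node1 ping 10.0.0.202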


  • Is it possible to extend the Active Directory schema in a Windows 2003 DC (NOT R2) to support DFSR?

    - by JohannesH
     We're in the process of installing a brand new Windows Server 2008 web cluster, and we would like to synchronize some files between the servers. The problem is that the DC in the domain is an old Windows Server 2003 Standard (NOT R2), which apparently doesn't contain some extension to the AD schema. Is it possible to upgrade the schema without upgrading the DC servers to R2? When I try to create a Replication Group on the 2008 server I get the following error message:

         srv.XXXXXX.XX: The Active Directory Domain Services schema on domain controller activedc07.srv.XXXXXX.XX cannot be read. This error might be caused by a schema that has not been extended, or was extended improperly. See Help and Support Center for information about extending the Active Directory Domain Services schema.
         Schema version 30 is not supported.
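
     A hedged pointer: the DFSR objects ship with the Windows Server 2003 R2 schema update, which raises the schema from version 30 to 31 without reinstalling or upgrading any DC. Running adprep once against the schema master should suffice (path as found on R2 disc 2; take a system-state backup of the DC first):

         X:\CMPNENTS\R2\ADPREP\adprep.exe /forestprep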


  • Should I never put a transactional replication distributor on a subscriber server?

    - by Stuart Branham
    What factors into choosing a distribution server for transactional replication? In our topology, we've always had the distributor reside on the publishing server. We rarely generate snapshots and performance is good enough, so this is okay for us today. One of our instances is moving to a cluster, so we need to move the distributor off for resilience/symmetry. Right now our two choices are to use a server physically close to the publishers, or our single subscription server. Our publisher is in our main office, and our subscriber is in a colocation facility off-site which our ISP runs. We have a pretty good line to it. The reason we're even considering the latter is to save work and licensing costs.


