Search Results

Search found 5521 results on 221 pages for 'deeper understanding'.


  • EC2 Configuration

    - by user123683
    I am trying to create a server structure for my EC2 account. The design I have chosen consists of 2 instances running in different availability zones, an elastic load balancer, an auto-scaling group with CloudWatch monitoring configured, and a security group defining rules for access to the instances. This setup is to support an online web application written in PHP. I am trying to decide which is the better policy:
      - Store the MySQL DB on a separate instance
      - Store the MySQL DB on an attached EBS volume (from what I know, auto-scaling will not replicate the attached EBS volume but will generate new instances from a chosen AMI - is this view correct?)
    Regarding the AMI, I plan to use a basic Amazon Linux 64-bit AMI and install Bastille (maybe OSSEC), but I am also looking to use an encrypted file system. Are there any issues with an encrypted file system and communication between the DB and the web app that I need to be aware of? Are there any comms issues using the encrypted filesystem on the instance housing the web app?
    I was going to launch a second instance or attach a second volume in the second availability zone to act as a standby for the database - I'm just looking for some suggestions about how to get the two DBs to talk. Will this be a big task?
    Regarding updates for security, is it best to create a recent snapshot and just relaunch, allowing Amazon to install updates on launch, or is the yum update mechanism a suitable alternative? Is it better practice to relaunch instead of installing updates that force a restart?
    I plan to create two AMI snapshots, one for the app server and one for the DB, each with the same security measures in place - is this reasonable? I just figure it is a better policy than having unnecessary additional applications included in an AMI that I intend to use.
    My plan for backup is to create periodic snapshots of the web app and DB instances (if I use an additional EBS volume instead of separate instances, my understanding is that the EBS volume will persist in S3 storage in the event of an unexpected termination, and I can create snapshots of the volume for backup purposes).
    Thanks in advance for suggestions and advice. I am new to EC2 and I may have described unnecessary overkill, but I want to try to implement what can be considered a best-practice solution, so all advice is appreciated.
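
    For the "how to get the two DBs to talk" part, a minimal MySQL master/standby replication sketch - the hostnames, IPs, server IDs and credentials below are placeholders, not anything from the question:

        # /etc/my.cnf on the primary (assumed instance A)
        [mysqld]
        server-id = 1
        log_bin   = mysql-bin

        # /etc/my.cnf on the standby (assumed instance B)
        [mysqld]
        server-id = 2
        relay_log = mysql-relay-bin

        # on the primary: create a replication account
        mysql -u root -p -e "GRANT REPLICATION SLAVE ON *.* TO 'repl'@'10.0.0.%' IDENTIFIED BY 'secret';"

        # on the standby: point it at the primary (use the real coordinates
        # reported by SHOW MASTER STATUS on the primary)
        mysql -u root -p -e "CHANGE MASTER TO MASTER_HOST='10.0.0.10',
          MASTER_USER='repl', MASTER_PASSWORD='secret',
          MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4;
          START SLAVE;"

    The security group would also need port 3306 opened between the two instances for this to work.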


  • LogMeIn Hamachi for Linux

    - by tlunter
    So far most of my work using LogMeIn Hamachi has been from either a Mac OS X or Windows system to a Windows or Linux computer. Recently I purchased a mini computer and have been running Ubuntu Server on it as my little server. I knew LogMeIn had a Linux client that is command line only, but I often do all my work via the command line anyway, so that wasn't an issue. I added my user to the correct local file so that I could run the hamachi daemon without sudo, and was able to connect to LogMeIn's service. I decided to set up my Linux server as a git server as well, and set it up correctly. The thing is, the server is behind my school's firewall and I need to use Hamachi to get around that.
    Since most of the time I was using either Mac or Windows, I never had an issue SSHing onto any of my computers, since LogMeIn is fully featured for those OSes. From Linux (Arch), though, it seems like the client cannot correctly route to the LogMeIn IPs. I know from Windows I can connect to the Linux computers, both of them. From Linux (Arch), though, I can't connect to my Mac, Windows, or Linux server - it just keeps dropping the connection. I was wondering if there is some configuration I need to make for this to work. I understand that it is most likely going to be a static configuration, since I assume it has to do with the computer not understanding that 5.*.*.* actually refers to another IP:port. Has anyone had any experience getting this to work?
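
    For illustration, a quick way to check whether the Arch client actually has an interface and route for the Hamachi range - the ham0 interface name is an assumption about the Linux client, adjust if yours differs:

        # does the Hamachi interface exist and carry a 5.x address?
        ip addr show ham0

        # is there a route covering the 5.0.0.0/8 range Hamachi uses?
        ip route | grep "^5\."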


  • How to set up ProxMox 1.9 on VPN?

    - by Gnudiff
    Disclaimer: I have only rudimentary knowledge of VPNs. I would love to learn about them properly; however, at the moment I really need to make stuff work on short notice. I am trying to set up a Proxmox virtualization platform in an existing network. The network currently consists of several servers which run VMware free edition. There is some sort of VPN defined in the switch. In order for the VMware management interface to be accessible, a checkbox for VPN has to be ticked in the network settings and the VPN id entered. I didn't notice any such configuration option during the Proxmox installation, so my Proxmox VE on the same physical server, using the same manual IP settings (ip/nm/gw), is not accessible. As I understand it, I should touch Proxmox's underlying Debian config in /etc/network/interfaces, but I have no idea what I should aim for: do I specify the settings on eth0, or do I make a virtual interface? How do I make it accessible for both Proxmox VE and the future VMs underneath it? I read the Proxmox installation guide, but unfortunately it presumes a better understanding of VPNs than I have. A config template or similar would be appreciated. Thanks in advance.
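
    For what it's worth, a minimal /etc/network/interfaces sketch for a Proxmox bridge carrying tagged traffic. This assumes the switch's "VPN id" is really an 802.1Q VLAN tag (20 below is a made-up value), which is only a guess; the addresses are placeholders:

        # /etc/network/interfaces - bridge on a tagged VLAN (id 20 is hypothetical;
        # the eth0.20 sub-interface needs the vlan package / 8021q module)
        auto vmbr0
        iface vmbr0 inet static
            address 192.168.1.50
            netmask 255.255.255.0
            gateway 192.168.1.1
            bridge_ports eth0.20
            bridge_stp off
            bridge_fd 0

    VMs attached to vmbr0 would then sit on the same tagged segment as the host.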


  • How a signal physically passes along a wire

    - by smwikipedia
    Hi, this may be more a question of physics, so pardon me if there's any inconvenience. When I study computer networks, I often read something like this: in order to represent a signal, we place some voltage on one end of the wire and the other end detects the voltage and thus the signal. So I am wondering how exactly a signal passes through the wire.
    Here's my current understanding based on my formal knowledge of electronics: first we need a closed circuit to constrain/hold the electric field. When we place a voltage at some point A of the circuit, an electric field starts to build up within the circuit medium; this process should be nearly as fast as the speed of light. As the electric field builds up, the electrons within the circuit medium are moved, and thus an electric current occurs, and once the current is strong enough to be detected at some other point B on the complete circuit, then B knows what has happened at A and thus communication between A and B is achieved.
    The above only covers the process of sending a single voltage through the wire. If there's a bitstream and we need to send a series of voltages, I am not sure which of the following is true:
      - The 2nd voltage should only be sent from A after the 1st voltage has been detected at B; the time interval is the time needed to stimulate the electric field in the medium and form a detectable current at B.
      - Several different voltages could be sent on the wire one by one; different current values will exist along the wire simultaneously and arrive at B successively.
    I hope I made myself clear and that someone else has pondered this question. (I tag this question with network because I don't know if there's a better option.) Thanks, Sam
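
    As a rough worked example of the timing involved (ballpark figures, not from the question) - the voltage change propagates at roughly two-thirds of the speed of light in a copper cable, so on a long cable many bits can be "in flight" at once, which matches the second option above:

        propagation speed in copper ~ 0.66 * c ~ 2 x 10^8 m/s
        time to cross a 100 m cable: 100 m / (2 x 10^8 m/s) = 500 ns
        bit time at 100 Mbit/s: 1 / 10^8 s = 10 ns
        => about 500 ns / 10 ns = 50 bit intervals fit on the cable at once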


  • How to restrict zone transfers to specific authorized servers only

    - by JonoB
    I recently failed a PCI compliance scan because of the following:
      "This DNS server allows unrestricted zone transfers. Attackers may be able to use this information to gain knowledge on the structure of your networks to aid in device discovery prior to an actual attack."
    And the suggested solution is as follows:
      "Reconfigure this DNS server to restrict zone transfers to specific authorized servers only."
    I am running a dedicated Linux CentOS server. My understanding is that I have to edit the /etc/named.conf file, which I have done, and the relevant part is as follows:
        options {
            acl "trusted" {
                127.0.0.1;
                xxx.xxx.xxx.001;  // this is one of the server's IPs
                xxx.xxx.xxx.002;  // this is another server's IP
            };
            allow-recursion { trusted; };
            allow-notify { trusted; };
            allow-transfer { trusted; };
        };
    I then restarted the named service (/etc/rc.d/init.d/named restart) and requested a re-scan, which failed again for the same reason. Am I missing something obvious here?
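
    For comparison, a minimal BIND sketch of what restricted transfers usually look like - note the acl defined at the top level of named.conf and the per-zone allow-transfer; the zone name and addresses are placeholders:

        acl "xfer-ok" { 127.0.0.1; 192.0.2.10; };    // the authorized secondaries

        options {
            allow-transfer { "xfer-ok"; };           // global default
        };

        zone "example.com" IN {
            type master;
            file "example.com.zone";
            allow-transfer { "xfer-ok"; };           // per-zone, overrides the default
        };

    Testing from a host that is not in the acl with `dig @your-server example.com AXFR` should then come back refused.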


  • SQL Server 2005 standard filegroups / files for performance on SAN

    - by Blootac
    I submitted this to Stack Overflow (here) but realised it should really be on Server Fault, so apologies for the incorrect and duplicate posting.
    OK, so I've just been on a SQL Server course where we discussed the usage scenarios of multiple filegroups and files over local RAID and local disks, but we didn't touch SAN scenarios, so my question is as follows. I currently have a 250 GB database running on SQL Server 2005 where some tables have a huge number of writes and others are fairly static. The database and all objects reside in a single filegroup with a single data file. The log file is also on the same volume. My interpretation is that separate data files should be used across different disks to lessen disk contention, and that filegroups should be used for partitioning of data. However, with a SAN you obviously don't really have the same issue of disk contention that you do with a small RAID setup (or at least we don't at the moment), and Standard Edition doesn't support partitioning.
    So, in order to improve parallelism, what should I do? My understanding of various Microsoft publications is that if I increase the number of data files, separate threads can act on each file separately. Which leads me to the question: how many files should I have? One per core? Should I be putting tables and indexes with high levels of activity in separate filegroups, each with the same number of data files as we have cores? Thank you


  • SQL Server Backup File Significantly Smaller After Table Recreation

    - by userx
    We run automated weekly backups of our SQL Server. The database in question is configured for Simple Recovery. We back up using Full, not differential. Recently, we had to re-create one of our tables with data in it (making 2 varchar fields a couple of characters longer). This required running a script which created a new table, copied the data over, and then dropped the old one. This worked correctly.
    Oddly, though, our weekly backup files then SHRANK by over 75%! The tables don't have large indexes. All data was copied over correctly (and verified). I've verified that we are doing full and not incremental backups. The new files restore just fine. I can't seem to figure out why the backup files would have shrunk so much. I've also noticed that they get about 10 MB larger every week, even though less than that amount of data is being added. I'm guessing that I'm simply not understanding something. Any insight would be appreciated.


  • Setup of high-end web server and DB server cluster on Amazon EC2: Is this how it's done?

    - by user1086584
    Amazon is so technical, I want to confirm that my understanding is correct. We have a large 500 GB database (OrientDB). The two DB instances will be mirrored to one another in the same Availability Zone. We believe the database size will grow rapidly. The plan is:
      - Get 4 large instances of types compatible with Placement Groups (and, ideally, Enhanced Networking): 2 for web, 2 for DB.
      - Use EBS-backed instances to store our operating system. Discussion here: http://alestic.com/2012/01/ec2-ebs-boot-recommended
      - Set up ephemeral SSD instance storage as swap space. (But it is lost after even a reboot. I hear it's hard to add ephemeral storage if booting from EBS, but possible.)
      - For offsite backup, take periodic snapshots and store them on S3. Obviously we need to ensure the database is in a safe state when that snapshot happens, to avoid corruption. (Any hints here, aside from shutting down the DB?)
      - If the database gets too big, create a larger EBS volume. We can use RAID to break the 1 TB limit: http://alestic.com/2009/06/ec2-ebs-raid
      - Store static assets on the web servers on S3.
    Is that correct? Or am I missing something?
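
    On the "safe state for snapshots" point, a common sketch is to freeze the filesystem around the snapshot call. The path, volume ID and region are placeholders, and this assumes the DB's files all live on that one volume and a brief I/O stall is acceptable:

        # quiesce writes on the data volume, snapshot, then thaw
        fsfreeze -f /data
        aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
            --description "orientdb nightly" --region us-east-1
        fsfreeze -u /data

    If the volume is actually an mdadm RAID set of several EBS volumes (the 1 TB workaround linked above), all members have to be snapshotted at the same frozen moment for the set to be restorable.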


  • Terminating multi-mode fiber

    - by murisonc
    I'm looking at the feasibility of terminating multi-mode fiber connections ourselves. We would be using LC connectors. I've done some research and found two different methods: one requires polishing the ends and using epoxy, while the other doesn't. I like the idea of not having to polish the ends, but there doesn't seem to be much information on quality or ease of use. I've found two vendors (3M and Corning) that offer kits for terminating fiber without polishing or using epoxy. Does anyone have experience with both methods who can offer some advice? Copper is easy, but fiber seems to be a whole different animal.
    EDIT: After looking into the fusion splicing suggested in the answer, I've determined it's not for us. It's my understanding that it is primarily used for outside plant and is better suited to single-mode fiber. It's a good answer but doesn't address the question directly. Some more information about our situation: we will only be terminating multi-mode fiber inside a building, and only doing between 4 and 20 pairs a year. Hiring an outside person won't work due to our location. There are currently a couple of people on-site who can terminate fiber (working for another company and charging large fees), but they can only do ST and SC connectors and we only use LC. So, once again, does anyone have experience with terminating using both epoxy-type connectors and the other type (similar to Corning Unicam)?


  • How to verify a self-signed certificate from another server using openssl?

    - by ntsue
    I am new to OpenSSL and I am having some trouble verifying (from a client machine) an FTP server using SSL with a self-signed certificate. I generated the .cer file by going to my server in IIS and exporting the certificate without the private key. I believe that this is all I should need on the client side, right?
    I use the following command to verify the certificate:
        openssl verify ftp.cer
    and the error that I get back is:
        error 20 at 0 depth lookup:unable to get local issuer certificate
    I tried this as well:
        openssl verify -CAfile ftp.cer ftp.cer
    but received the same error. From what I understand about SSL, this is happening because I have no chain of trust that connects to this server. By default, OpenSSL did not install any trusted CAs, and this is fine; I would just like to tell it to trust this server. I tried various tutorials telling me how to add a certificate authority, including this one here, however the instructions are for Linux and include adding a symlink, and I am trying to do this on Windows. If anyone could provide any guidance on how to do this, or enlighten me if I am not understanding something correctly, I would greatly appreciate it. Thanks!
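
    A couple of checks that may narrow this down (a sketch, not a definitive diagnosis): for a truly self-signed certificate the subject and issuer are identical, and `-CAfile` against itself typically verifies, so it's worth confirming what the exported file actually contains and what the server really presents. Hostname and port below are placeholders:

        # is the .cer PEM or DER? If DER, convert it first
        openssl x509 -inform DER -in ftp.cer -out ftp.pem

        # compare subject and issuer - identical means genuinely self-signed
        openssl x509 -in ftp.pem -noout -subject -issuer

        # see the chain the FTPS server actually sends (explicit TLS on port 21)
        openssl s_client -connect ftp.example.com:21 -starttls ftp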


  • Need troubleshooting advice for intermittent DNS problems with requests on ISP nameservers

    - by Mnebuerquo
    I've been having some intermittent DNS problems with a web server, where certain ISPs' DNS servers don't have my hostnames in cache and fail to look them up. At the same time, queries to OpenDNS for those hostnames resolve correctly. It's intermittent, and it always works fine for me, so it's hard to identify the problem when someone reports connectivity problems to my site.
    My website is on a server running Linux with Plesk. My DNS records are configured with Plesk (so my server is its own DNS master). The domain name is registered with GoDaddy. I'm not really knowledgeable about DNS, so I don't know how to begin troubleshooting. I've started learning to use dig, but while I can read the manpage to learn the syntax, I don't really know what questions to ask. Since the problem is intermittent, I haven't been able to catalog many symptoms.
    Symptoms I have observed:
      - Certain people repeatedly reported intermittent problems connecting to my website. This was only from certain networks. (Ex: one guy could connect reliably from his office but not his home.)
      - Sometimes I notice my browser taking a long time looking up the hostname for my site (Firefox shows a message in the status bar at the bottom). For me this is in the ten-second range.
      - ssh connections from anywhere to my server take a long time to connect, but then seem to work fine once connected.
    So hopefully the folks on Server Fault can point me to a good beginner tutorial for understanding DNS, and suggest troubleshooting questions to ask next time one of my users reports connectivity problems.
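
    As a starting point, a few dig queries that are often useful in this kind of situation (the domain and nameserver names are placeholders):

        # follow the delegation from the root down - shows which nameservers
        # the rest of the world is being referred to
        dig +trace www.example.com

        # ask each listed nameserver directly and compare serial numbers
        dig example.com NS +short
        dig @ns1.example.com example.com SOA +short
        dig @ns2.example.com example.com SOA +short

    Mismatched SOA serials, or a registrar delegation that doesn't match the NS records Plesk publishes, can both produce exactly this kind of intermittent failure.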


  • Setting up HTTPS across multiple servers

    - by JohnyD
    I'm looking to offer our online services over HTTPS and I'm having a couple of problems understanding how to accomplish this. To access our services you must pass through our ISA firewall to a Win2000 server running IIS6. About half our services are located there, and the other half take you to a Win2003 server, also running IIS6. So, in order to achieve this, must each server have the proper certificate installed: ISA, IIS6_1 and IIS6_2? Is there a separate configuration that must be made on our ISA firewall?
    The other problem is with the CA and knowing how many certificates I need. It's important to note that the domain name for our services on IIS6_1 is www.domainname.com but the domain name on IIS6_2 is services.domainname.com. I believe that this will require me to purchase more than one certificate. It looks as though we will be going with Thawte's SSL123 as it's a good name and it's fast to get. Will I need to purchase 2 certificates (one for www.domainname.com that will be installed on our ISA firewall as well as IIS6_1, and one for services.domainname.com on IIS6_2)? Or will I need to purchase 3, the extra one being used on our firewall server?
    Another side question is about SANs (subject alternative names). Is this basically adding sub-domains to your cert? So could I purchase one cert with 1 SAN covering both my www. and services. hosts? Thanks a lot for your help! Please let me know if I can provide any further information.
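
    As an aside, a quick way to see which names any issued certificate actually covers (a sketch; the filename is a placeholder):

        # show the CN and any Subject Alternative Names in a certificate
        openssl x509 -in domainname.crt -noout -subject
        openssl x509 -in domainname.crt -noout -text | grep -A1 "Subject Alternative Name"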


  • About load average in htop: how to decide if the server is still doing OK?

    - by Joe Huang
    I use 'htop' to monitor my web server. It's recently been quite loaded, and the load average shows something like this:
        Load average: 3.10 2.56 1.63
    I searched the web about these numbers and found an article about them: http://blog.scoutapp.com/articles/2009/07/31/understanding-load-averages
    In the article, it says that if I have 2 CPUs, 2.0 means 100% CPU utilization. My VPS has two CPUs, so what does 3.1 mean? How could it exceed 100% CPU utilization? And from these numbers, does it mean I should be wary about the load now? The performance seems totally fine, and this is a managed VPS; the hosting company has not sent me any warning about it. During the daytime, the load average always shows these high numbers... here is another snapshot taken while writing:
        Load average: 3.03 2.77 1.97
        Load average: 0.41 1.29 1.60   <---- 5 more minutes later
    So I am wondering how much room is left for this site to grow in the current configuration, and what kind of proactive actions I should take in advance. I don't want to wait until the server bursts. Thanks.
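
    For a rough sense of scale, the usual back-of-the-envelope reading - a sketch; the idea of dividing by the core count is standard, the exact "worry" threshold is a matter of taste:

        nproc          # or: grep -c ^processor /proc/cpuinfo   -> 2 here
        uptime         # prints the same 1 / 5 / 15-minute load averages

        # load 3.1 on 2 cores  ->  3.1 / 2 = 1.55 per core
        # i.e. on average roughly one runnable task per core plus ~0.55 waiting in the queue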


  • How to set up QoS on ADSL router (terracom) for prioritizing browsing

    - by DBZ_A
    I want to configure the ADSL router which connects 10+ machines to the internet. I want to give maximum priority to browsing (ports 80, 443) and set low priority for BitTorrent etc. (port 42180). I have been experimenting with settings, but with no luck. There are three settings which confuse me; here they are, along with my understanding:
      - 802.1 Priority - related to the LAN level; possible values 0-7, higher numbers mean higher priority.
      - 'Mark traffic priority' - clueless about this.
      - IPP/DS - IP Precedence - possible values 0-7; 6 & 7 are reserved, so set 5 for highest priority. Or, when using DSCP, set 46 for highest priority.
    Please help me in getting this done... Similar question for another model of router here, but with fewer confusing options :) How to configure QoS on home router
    Update: from discussion on another thread, QoS can control only upstream traffic (from the router to the internet); while this may in turn affect the downstream rate, there is no direct control over data coming into the router.
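
    If the router (or a Linux box sitting in front of it) can run iptables, DSCP marking of outbound traffic looks roughly like this - a sketch only, and only meaningful if the upstream device actually honours DSCP; the values mirror the 46 / best-effort split described above:

        # mark web traffic with DSCP 46 and the torrent port as best effort (0)
        iptables -t mangle -A POSTROUTING -p tcp --dport 80    -j DSCP --set-dscp 46
        iptables -t mangle -A POSTROUTING -p tcp --dport 443   -j DSCP --set-dscp 46
        iptables -t mangle -A POSTROUTING -p tcp --dport 42180 -j DSCP --set-dscp 0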


  • Excel file growing huge (>150 MB)

    - by Josh
    There is one particular Excel file that is used by a number of employees at my company. It is edited from both Excel 2003 and 2007, with the "Sharing" feature turned on to allow multiple writers at once. The file has a decent amount of data on several sheets with some basic formatting, and used to be about 6 MB, which seems reasonable for its content. But after a few weeks of editing, the file grew to 10, then 20 MB, and eventually skyrocketed to more than 150 MB, even though it still has about the same amount of data as before. It now takes 5-10 minutes to open it, and that much time again to save it.
    The first time this happened, I copied the content of each sheet into a new, blank workbook and saved the new workbook; this brought it back down to about 6 MB. Now it has blown up again. The workbook uses the "Data Validation" feature to limit the values in certain columns to the contents of a few named ranges. Copying all the data into a new workbook means re-setting up all the data validation, which is a pain and not something that we want to do every month.
    As a troubleshooting step, I tried saving the file in "XML Spreadsheet 2003" format, hoping to get some insight into what was being stored. Sure enough, the file was almost a gig, and almost all of the 10 million lines look like this:
        <NamedCell ss:Name="Z_21D5114F_E50C_46AC_AA4F_C3FF540C717F_.wvu.FilterData"/>
        <NamedCell ss:Name="Z_1EE2BA5E_3011_4F9A_8ACD_E58835250FC4_.wvu.FilterData"/>
        <NamedCell ss:Name="Z_1E3BDCEA_6A72_4ECC_BF4F_7B03CC66181E_.wvu.FilterData"/>
    I've seen a few VBScripts online to manage and enumerate named cells that are hidden in Excel's built-in interface, though I wonder how they'd handle my 10 million named cells. What I really need, though, is an understanding of why this keeps happening. What actions in Excel could be causing this?


  • Multiple PHP SAPI configuration

    - by DTest
    I'm trying to build PHP for use as an Apache shared module (--with-apxs2) but also with the 'php-cgi' binary (FastCGI) on Mac OS X 10.6. I'm using this ./configure:
        ./configure --prefix=/usr/local/PHP \
          --with-apxs2=/usr/local/apache/bin/apxs \
          --disable-ipv6 \
          --enable-cgi \
          --with-curl \
          --with-mysqli=/usr/local/mysql/bin/mysql_config \
          --with-openssl=/usr \
          --enable-ftp \
          --enable-shared \
          --enable-soap \
          --enable-sockets \
          --enable-zip \
          --with-zlib-dir
    It builds the Apache php5.so module just fine, but in /usr/local/PHP/bin there is no php-cgi file. If I build it without the --with-apxs2 option (and indeed, I don't even need the --enable-cgi option), the php-cgi file gets built with no problems.
    Background on my setup: PHP 5.3.4, Apache 2.2.14, Mac OS X 10.6, Tomcat with JavaBridge (which is why I need the php-cgi file).
    Without the apxs2 option, /usr/local/php/bin/php -v produces:
        PHP 5.3.4 (cli) (built: Dec 21 2010 21:35:14)
        Copyright (c) 1997-2010 The PHP Group
        Zend Engine v2.3.0, Copyright (c) 1998-2010 Zend Technologies
    and /usr/local/php/bin/php-cgi -v produces:
        PHP 5.3.4 (cgi-fcgi) (built: Dec 21 2010 21:35:12)
        Copyright (c) 1997-2010 The PHP Group
        Zend Engine v2.3.0, Copyright (c) 1998-2010 Zend Technologies
    My question is, what am I not understanding about PHP SAPIs that prevents building the two modules at the same time? Also, can I build it --with-apxs2 the first time, then make clean and rebuild in the same PHP directory (/usr/local/php) for the php-cgi files without issue?
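
    One workaround people use is essentially the question's own two-pass idea - a sketch, assuming a second configure run is acceptable: build and install the apxs2 module first, then reconfigure without it and copy the CGI binary out of the build tree:

        # pass 1: Apache module (plus the rest of your flags)
        ./configure --prefix=/usr/local/PHP --with-apxs2=/usr/local/apache/bin/apxs
        make && sudo make install

        # pass 2: CLI + CGI only (same flags, minus --with-apxs2)
        make distclean
        ./configure --prefix=/usr/local/PHP --enable-cgi
        make
        sudo cp sapi/cgi/php-cgi /usr/local/PHP/bin/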


  • httpd.conf Easy Apache WHM CentOS

    - by jessie
    First let me explain how I got into this situation. I run a streaming video site. Videos are about 100-250 MB in size, and at any given time there are 500 people on the site. So I guess that would make them static files. Recently my site started getting really slow, and the only way to fix it temporarily was to restart Apache. There was no change in traffic that could have caused this, and my site is not being attacked.
    My hosting company recommended implementing mpm_mod and suPHP. They did that by using EasyApache in WHM. Then everything was working fine, but a little slow. I researched around and, to my understanding, that MPM will do that but be more stable. I was told that installing FastCGI would speed things up just enough. Well, that made everything worse: the site is slow and times out. I used WHM and took FastCGI off, but it's still the same; it seems like no matter what I do now, nothing changes. I even rolled back the httpd.conf file, but that didn't work. I'm not sure how to fix this, and my hosting network guy won't be able to touch the problem until Tuesday. I have root access.
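
    For reference, a couple of quick checks that show what Apache is actually running after an EasyApache rebuild - a sketch, run as root; the error-log path is cPanel's usual default and may differ on your box:

        httpd -V | grep -i mpm                         # which MPM the binary was built with
        httpd -l                                       # compiled-in modules
        ps aux | grep -c '[h]ttpd'                     # how many Apache children are alive right now
        tail -n 50 /usr/local/apache/logs/error_log    # recent Apache errors (MaxClients hit, segfaults, etc.)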


  • Http-Only cookies in WebLogic: what versions support them/how and why are they supported?

    - by John
    We want to make all cookies set by our webapp HTTP-only. I only have a basic understanding of the benefits of doing this, but I'm told by security people that it's a Good Thing (tm). Our app is running under JDK 1.6.05 and WebLogic 10.3.0.
    After way too much digging around Oracle's website for documentation, I've found good evidence that the first version of WebLogic to support HTTP-only cookies is 10.3.1. By "support," I mean the cookie-http-only deployment-descriptor element. Before we go about upgrading, it'd be nice to have these questions answered:
      1a) Is it accurate that WL 10.3.1 is the first version to support HTTP-only cookies and that we're out of luck with 10.3.0?
      1b) If we do indeed need to upgrade, is there an easy way to do so under Windows? I've heard people mention an "upgrade jar" that you just stick in the classpath, but I can't find any mention of this by Oracle. Does an easy way exist, or do we need to do a full install of the new version?
      2) What does the cookie-http-only deployment-descriptor element do when enabled? Will it ensure all cookies set by the application have an http-only=true attribute? Will it do more or less? Is there anything I'll have to do programmatically?
      3) Is there anything in general I should know about HTTP-only cookies, getting my web app to take advantage of them, or other security concerns?
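
    For context, the descriptor in question is usually shown as a weblogic.xml session-descriptor setting - a sketch of what it is expected to look like on 10.3.1+; the exact element placement should be checked against the docs for your release:

        <!-- WEB-INF/weblogic.xml -->
        <weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
            <session-descriptor>
                <cookie-http-only>true</cookie-http-only>
            </session-descriptor>
        </weblogic-web-app>

    Note that this governs the container's session cookie; cookies the application writes itself would still need the HttpOnly flag appended when they are set.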


  • How to Refresh or Reset Windows 8 without the System Reserved partition?

    - by Karan
    The article Refresh and reset your PC mentions exactly what happens during the refresh and reset operations in Windows 8:
    Refresh
      1. The PC boots into Windows RE.
      2. Windows RE scans the hard drive for your data, settings, and apps, and puts them aside (on the same drive).
      3. Windows RE installs a fresh copy of Windows.
      4. Windows RE restores the data, settings, and apps it has set aside into the newly installed copy of Windows.
      5. The PC restarts into the newly installed copy of Windows.
    Reset
      1. The PC boots into the Windows Recovery Environment (Windows RE).
      2. Windows RE erases and formats the hard drive partitions on which Windows and personal data reside.
      3. Windows RE installs a fresh copy of Windows.
      4. The PC restarts into the newly installed copy of Windows.
    It is my understanding that Windows RE (Recovery Environment) is included as part of the System Reserved partition created by default on the first hard disk. The size of this partition has gone up to 350 MB from the 100 MB it used to be in Vista/Windows 7, no doubt as a result of adding these features.
    Now, we have already discussed how to skip the creation of this System Reserved partition during Setup. Basically, the same techniques that used to work with Windows 7 work with Windows 8 as well. What I want to know is: what will be the exact repercussions of not having the System Reserved partition in place? I assume Troubleshoot / Advanced options should still be available as before. But what about the Troubleshoot menu itself? Will the Refresh and Reset options disappear? Will they remain but be unavailable? Or possibly they will throw an error if selected?
    Also, will it be possible to access and successfully execute these options if installation media is available? Anything else that might be affected?
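
    One way to see where Windows RE actually lives on a given install is the recovery agent tool - a sketch, run from an elevated prompt; when the System Reserved partition is skipped, the WinRE image typically ends up on the Windows partition itself:

        REM reports whether WinRE is enabled and which partition/path holds the WinRE image
        reagentc /info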


  • Ubuntu 10.04: OpenVZ Kernel and pure-ftpd issues on HOST (no guest setup yet)

    - by Seidr
    After compiling and installing the OpenVZ flavour of kernel under Ubuntu 10.04, I am unable to browse to certain directories when connecting to the pure-ftpd server. The clients are dropping into PASSIVE mode, which is fine; this behaviour was happening before the change of kernel. However, now when I browse to certain directories the connection just gets dropped. This only happens with a few directories under one login (web in specific), whereas with another login it happens as soon as I connect.
    I've got the nf_conntrack_ftp kernel module installed (required to keep track of passive FTP connections as I understand it, and an alias of the ip_conntrack_ftp module), however this has provided no alleviation of my problem. This module was actually required upon initial setup of my OS to get passive FTP working correctly; however, when I compiled the OpenVZ kernel a lot of these modules were missing (iptables, conntrack etc.). I recompiled the kernel with the missing modules, but to no effect.
    I've turned verbosity for the pure-ftpd server up, and still no clues have been spotted in either syslog or the transfer log. Neither did an strace provide any clues (that I could discern anyway) - although one strange thing is that both in the output to the client and in the strace I notice that it does in fact probe the directory and return the number of matches - it just fails after that.
    One more thing to mention is that if I FTP using the same credentials locally, everything works fine. This suggests that it is in fact an issue with either the conntrack_ftp module not functioning as expected, or a deeper networking issue. The kernel was compiled and installed following the instructions at https://help.ubuntu.com/community/OpenVZ - bar the changes to the kernel configuration (such as adding iptables as a module).
    Below is an example of the client log (under FileZilla):
        Status: Resolving address of xxxx.co.uk
        Status: Connecting to 78.46.xxx.xxx:21...
        Status: Connection established, waiting for welcome message...
        Response: 220---------- Welcome to Pure-FTPd [privsep] [TLS] ----------
        Response: 220-You are user number 4 of 10 allowed.
        Response: 220-Local time is now 08:52. Server port: 21.
        Response: 220-This is a private system - No anonymous login
        Response: 220-IPv6 connections are also welcome on this server.
        Response: 220 You will be disconnected after 15 minutes of inactivity.
        Command: USER xxx
        Response: 331 User xxx OK. Password required
        Command: PASS ********
        Response: 230-User xxx has group access to: client1 sshusers
        Response: 230 OK. Current restricted directory is /
        Command: OPTS UTF8 ON
        Response: 200 OK, UTF-8 enabled
        Status: Connected
        Status: Retrieving directory listing...
        Command: PWD
        Response: 257 "/" is your current location
        Status: Directory listing successful
        Status: Retrieving directory listing...
        Command: CWD /web
        Response: 250 OK. Current directory is /web
        Command: TYPE I
        Response: 200 TYPE is now 8-bit binary
        Command: PORT 10,0,2,30,14,143
        Response: 500 I won't open a connection to 10.0.2.30 (only to 188.220.xxx.xxx)
        Command: PASV
        Response: 227 Entering Passive Mode (78,46,79,147,234,110)
        Command: MLSD
        Response: 150 Accepted data connection
        Response: 226-ASCII
        Response: 226-Options: -a -l
        Response: 226 57 matches total
        Error: Could not read from transfer socket: ECONNRESET - Connection reset by peer
        Error: Failed to retrieve directory listing
    Any suggestions please? I'm willing to try anything!
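
    A couple of checks that might help pin down whether the passive data connections are actually getting through - a sketch; the passive port range 30000-30100 is a made-up example, and the conf-file style assumes the Debian/Ubuntu pure-ftpd packaging:

        lsmod | grep conntrack_ftp                    # confirm the FTP helper is really loaded on the new kernel

        # pin pure-ftpd to a fixed passive range and make sure that range
        # is reachable through the firewall, independently of the helper
        echo "30000 30100" > /etc/pure-ftpd/conf/PassivePortRange
        iptables -A INPUT -p tcp --dport 30000:30100 -j ACCEPT
        service pure-ftpd restart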


  • Pitfalls to using Gluster as a home/profile directory server?

    - by Bart Silverstrim
    I was asking recently about options for divvying up access to file servers, as we have a NAS solution that gets fairly bogged down when our users (with giant profiles, especially) all log in nearly simultaneously. I ran across Gluster, and it looks like it can cluster different physical storage media into a single virtual volume and share it out like a virtual NAS from the client perspective, and it supports CIFS.
    My question is whether something like this would be feasible to use for home and profile directories in an Active Directory environment. I was worried about ACLs, primarily, as I didn't think CIFS was fine-grained enough to support NTFS permissions, and it didn't look like Gluster exports those permission levels, just the base permissions for basic file sharing. I got the impression that using Gluster would allow data to be redundant across multiple servers and would speed up access to the files under heavy load, while allowing us to dynamically boost storage capacity by just adding another server and telling Gluster's master node to add that server. Maybe I'm wrong in my understanding of it, though. Does anyone else use it, or care to share how feasible this is?
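
    For reference, the basic shape of a replicated Gluster volume looks something like this - hostnames, brick paths and the volume name are placeholders:

        # on one of the storage nodes
        gluster peer probe server2
        gluster volume create homes replica 2 server1:/bricks/homes server2:/bricks/homes
        gluster volume start homes

        # on a client (native FUSE mount; CIFS access would sit in front via Samba)
        mount -t glusterfs server1:/homes /mnt/homes

    Whether NTFS-style ACLs survive largely depends on the Samba layer re-exporting the mount rather than on Gluster itself.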


  • If I scp a file through an intermediate server, is the file stored temporarily on the server?

    - by Blacklight Shining
    For the sake of simplicity (I find it easier to remember names than arbitrary letters), I will dispense with letters and use names to refer to the machines in this scenario. Say I have two machines, applejack and pinkie-pie, each on their own separate LAN and not in the same physical location. I also have a server, cadance, with a direct Internet-facing connection. I want to copy a file from applejack to pinkie-pie, so to avoid dealing with port forwarding and such, I set up an ssh tunnel from pinkie-pie to cadance (ssh -R etc cadance). Now I can connect to pinkie-pie from anywhere by connecting to cadance and specifying an alternate port to use. I can also easily copy files to pinkie-pie with scp -P $that_port $some_file cadance:$some_path.
    My understanding of how it works is this:
      1. A secure connection is made from applejack to cadance.
      2. I am authenticated to cadance.
      3. A secure connection is made from applejack to pinkie-pie that spans the existing reverse tunnel and the new connection from step 1.
      4. I am authenticated to pinkie-pie.
      5. Files are copied directly from applejack to pinkie-pie over this connection.
    Am I correct here? How secure is this approach? If I'm wrong... are files copied this way decrypted at cadance before being passed on to pinkie-pie? Is there a possibility that traces of unencrypted data could remain on cadance?
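
    For concreteness, a minimal sketch of the tunnel-and-copy commands described above (port 2222, user names and paths are placeholders; reaching the forwarded port from outside assumes cadance's sshd allows it, e.g. via GatewayPorts):

        # on pinkie-pie: expose its sshd on cadance's port 2222
        ssh -N -R 2222:localhost:22 user@cadance

        # on applejack: copy through that forwarded port
        scp -P 2222 some_file user@cadance:/some/path

    In this shape the inner scp session is negotiated end-to-end with pinkie-pie's sshd, so cadance only relays the already-encrypted stream.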


  • How to make my W2003 server reachable from the web again (after setting up bridged networking)

    - by MBaas
    I have recently set up VirtualBox on a W2003 server (which is also used as a web server, accessed from the web). My VBox worked nicely, but then I wanted more: I wanted the VM to appear in the intranet like any ordinary PC. I was advised to set up bridged networking as opposed to NAT. I did so, and in the server's network connections I have bridged the LAN connection and the "VirtualBox Host-Only Network" (yes, it says "host only network", but I assure you that VBox networking is configured to use a network bridge).
    So now my VM is visible in the intranet and it also has WWW access, and the server can also access the web. The only problem that came up is that the server is no longer accessible from the web. I've traced an HTTP request and it says "Can't connect to *:80 (connect: No route to host)". So maybe something in the router's config needs to be adjusted (yeah, well, the server's IP address changed from 192.168.1.199 to ...198). So I went into the router config, reviewed port forwarding for port 80 and adjusted the IP there, but it still didn't work. Unsure whether it was a router problem or rather something in the server's config, I set up a "demilitarized zone" in the router and put the server into it. (My understanding is that this would put the server straight onto the web...) But the result of the HTTP requests is still the same :(
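
    A couple of quick checks from the server itself that might separate a binding problem from a forwarding problem - a sketch for Windows Server 2003:

        REM which IP does the new bridge adapter actually hold now?
        ipconfig /all

        REM is the web server still listening on port 80, and bound to which address?
        netstat -ano | findstr :80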


  • Internal and External DNS from Different Servers, Same Zone

    - by Shane
    Hello all, I am either having trouble understanding how DNS works, or I am having trouble configuring my DNS correctly (either way, that isn't good). I am currently working with a domain, I'll call it webdomain.com, and I need to allow all of our internal users to get out to Dotster for our public DNS entries just like the rest of the world. Then, on top of that, I want to be able to supply just a few override DNS entries for testing servers and equipment that is not available publicly. As an example:
      - public.webdomain.com - should get this from Dotster
      - outside.webdomain.com - should get this from Dotster as well
      - testing.webdomain.com - should get this from my internal DNS controller
    The problem that I seem to be running into at every turn is that if I have an internal DNS controller that contains a zone for webdomain.com, then I can get my specified internal entries but never get anything from the public DNS server. This holds true regardless of the type of DNS server I use, too - I have tried both Linux Bind9 and a Windows 2008 Domain Controller.
    I guess my big question is: am I being unreasonable to think that a system should be able to check my specified internal DNS and, in the case where a requested entry doesn't exist, fail over to the specified public DNS server - or is this just not the way DNS works, and I am lost in the sauce? It seems like it should be as simple as telling my internal DNS server to forward any requests that it can't fulfill to Dotster, but that doesn't seem to work. Could this be a firewall issue? Thanks in advance
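
    For illustration, one common BIND pattern for this situation: instead of hosting an internal zone for all of webdomain.com, define tiny zones only for the names you want to override, so every other name falls through to normal recursion/forwarding. File names and the forwarder address below are placeholders:

        // named.conf - override just one hostname
        zone "testing.webdomain.com" IN {
            type master;
            file "/etc/bind/db.testing.webdomain.com";
        };

        options {
            recursion yes;
            forwarders { 8.8.8.8; };   // or your ISP's resolvers
        };

    The small zone file then only needs an SOA/NS pair and an A record at its apex.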


  • How to configure VirtualBox server for performance at home

    - by BluJai
    I currently have two physical Ubuntu Server 10.10 servers at home: one serves as our firewall/router/DHCP/VPN server and the other performs double duty as a file server and a VirtualBox host for an Ubuntu Desktop 10.10 machine which I use from remote connections (via NoMachine) for many thin-client purposes which are irrelevant to my question. What I'd like to accomplish is to consolidate the two physical machines into one dedicated VirtualBox host (most likely running Ubuntu Server 10.10). Note that I'd like to stick with VirtualBox (if possible) because I'm most comfortable with it and use it on a daily basis at both home and work. Specifically, I plan to have one VM set up as the file server, another as the firewall/router/DHCP/VPN (or possibly split those up a bit), and a third, which is the only current VM (already VirtualBox), which is the thin-client host.
    My question comes down to performance and/or recommendations about the file server VM. The file server hosts about 6 terabytes of data across 4 drives. What I'd like to do is use raw disk access from the VM directly to the existing disks. However, I'm curious what performance advantage or disadvantage that would have compared to using shared folders from the VM host - basically having each whole drive served as a shared folder to the VM, which would then serve it to the other machines on the network. I don't know if virtual disks would even work in this scenario, and I certainly wouldn't want a drive to be filled with just a single 1.5 TB file (a disk image).
    For context (but not to ask for additional advice): I want to virtualize these machines because I intend to regularly use the snapshot capabilities of VirtualBox for the system disks of the VMs (which will be virtual drives), and I have some physical space/power needs to address (as I mentioned, this is at home).
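
    For the raw-disk option, the usual VirtualBox mechanism is a raw VMDK wrapper around the physical device - a sketch; device path, filenames and the VM/controller names are placeholders (check the controller name with VBoxManage showvminfo), and the user running the VM needs read/write access to the device:

        # create a VMDK that points at the whole physical disk
        VBoxManage internalcommands createrawvmdk \
            -filename /home/vbox/storage-disk1.vmdk -rawdisk /dev/sdb

        # attach it to the file-server VM like any other disk
        VBoxManage storageattach "fileserver" --storagectl "SATA" \
            --port 1 --device 0 --type hdd --medium /home/vbox/storage-disk1.vmdk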

