Search Results

Search found 9845 results on 394 pages for 'ntp servers'.

Page 314/394 | < Previous Page | 310 311 312 313 314 315 316 317 318 319 320 321  | Next Page >

  • Enabling Samba Shares Across Subnets

    - by John
    I am curious how to set up Samba so that shares can be seen and used across different subnets. We have some Linux devices that are bound to Active Directory, and we would like them to serve Samba shares to clients that reside in a different subnet than the servers. Is there any way to do this without setting up a WINS server or using legacy NetBIOS methods, since the majority of our clients are Windows 7, Windows Server 2003, Windows Server 2008, and Mac OS X (10.6 or newer)? EDIT: Right now, only clients in the same subnet as the Samba server can see the shares. Clients outside that subnet (i.e. in the client subnet) cannot see or connect to the share. The error returned is: "The specified network name is no longer available." It does not seem to matter whether I use the IP, FQDN, or NetBIOS name to connect to the share. We have a common Cisco router handling the inter-subnet routing. Everything else works correctly with this network setup, and the device can be pinged from multiple subnets. I also do not believe it to be a firewall issue, since the rules for this segment are rather lax.
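
    A minimal smb.conf sketch for this scenario, assuming clients resolve the server by DNS: disabling NetBIOS entirely makes Samba answer only on the directly routed SMB port 445, which needs no WINS and no broadcast discovery. Both option names below are standard [global] settings.

        [global]
            # skip the NetBIOS name-service and session ports entirely
            disable netbios = yes
            # listen only on the direct-hosted SMB port that modern clients use
            smb ports = 445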

    Read the article

  • Lighttpd - byte range requests don't work, can't stream mp4

    - by w-01
    I am attempting to use the latest Flowplayer (http://flowplayer.org); if it worked, it would be pretty awesome. One of the cool things about it is that it uses the new HTML5 video element and supports random seeking/playback. In order to do this, you need a byte-range-request-capable server on the backend. Luckily I'm using Lighttpd 1.5.0 on the backend. Unfortunately the current behavior is that when I do a random seek, the video simply restarts itself from the beginning. The docs say: "For HTML5 video you don't have to do any client side configuration. If your server supports byte range requests then seeking should work on the fly. Most servers including Apache, Nginx and Lighttpd support this." On my page, using the Chrome web developer tools, I can see that when the video is requested, the server response headers indicate it is able to accept byte ranges: Accept-Ranges: bytes. When I do a random seek in the player, I can see that byte ranges are requested appropriately in the request header: Range: bytes=5668-10785. I can also verify the moov atom is at the front of the video file. My question is whether there is something else on the lighttpd side I'm missing in order to enable byte-range requests. The reason I ask is that the current behavior suggests lighttpd simply doesn't understand the byte range request and just serves the video from the beginning. Update: it's clearer to put this here. As per RJS' suggestion I ran a curl command; in the response it looks like lighttpd is working as expected: Content-Range: bytes 1602355-18844965/18844966, Content-Length: 17242611.
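
    A quick way to reproduce that check from any machine, assuming curl is available (the URL is a placeholder): a server that handles ranges end-to-end answers with 206 Partial Content and a matching Content-Range header.

        curl -s -D - -o /dev/null -H "Range: bytes=0-1023" http://example.com/video.mp4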

    Read the article

  • Linux Port 80 to redirect to a Windows box

    - by Richard Staehler
    I have 2 servers here at work. One is a Windows Server 2008 R2 box (for safety's sake, let's use 192.168.1.100) and the other is a Fedora 14 box (192.168.1.101). Currently when you hit our subdomain, x.test.com, our routers send it to our Fedora box, and since Apache is installed and listening on port 80, it displays the Fedora Apache test page. Obviously I don't use port 80 for this machine; however, I do run Nagios on it, and it's always nice to be able to access that from anywhere in the world. When I want to access it, I just type x.test.com/nagios. Now here comes the dilemma: on the Windows R2 box, we recently installed a program that requires us to set up a web server using IIS7. Because of this application, I'm going to create a new subdomain called y.test.com, but since we only have 1 WAN/router, it will still get pointed to our Fedora box. That application wants to use port 80 as well (or whatever port I damn well wish to assign it). So my question is: since our router points to the Fedora 14 box (.101), and I want to keep Nagios accessible from anywhere in the world, how do I tell Apache (httpd) to redirect port 80 to the other server (.100)? If that's not possible, what are my other options? I have rinetd installed on Fedora and have even tried the option "192.168.1.101 80 192.168.1.100 80", and it didn't seem to work "because port 80 was already bound". Thoughts? And thanks!
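
    Because the routing decision here depends on the Host header, a name-based reverse proxy is the usual fit: Apache keeps answering x.test.com (and /nagios) itself and forwards only y.test.com to the Windows box. A sketch, assuming mod_proxy and mod_proxy_http are loaded:

        <VirtualHost *:80>
            ServerName y.test.com
            ProxyPass        / http://192.168.1.100/
            ProxyPassReverse / http://192.168.1.100/
        </VirtualHost>

    This also explains the rinetd failure: rinetd would need to bind port 80 itself, which Apache already holds, and a plain port forward could not keep x.test.com/nagios served locally anyway.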

    Read the article

  • Apache SSL is not working

    - by user1703321
    I have configured virtual hosts on ports 80 and 443 (CentOS 5.6 and Apache 2.2.3). The following is a sample; I have written the configuration in this same order:

        Listen 80
        Listen 443
        NameVirtualHost *:80
        NameVirtualHost *:443

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName www.abc.be
            ServerAlias abc.be
            ...
        </VirtualHost>

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName www.abc.fr
            ServerAlias abc.fr
            ...
        </VirtualHost>

    Then I have defined 443:

        <VirtualHost *:443>
            ServerAdmin [email protected]
            ServerName www.abc.be
            ServerAlias abc.be
            ...
            SSLEngine on
            SSLCertificateFile /etc/ssl/private/abc.be.crt
            SSLCertificateKeyFile /etc/ssl/private/abc.be.key
            SSLCertificateChainFile /etc/ssl/private/gd_bundle_be.crt
        </VirtualHost>

        <VirtualHost *:443>
            ServerAdmin [email protected]
            ServerName www.abc.fr
            ServerAlias abc.fr
            ...
            SSLEngine on
            SSLCertificateFile /etc/ssl/private/abc.fr.crt
            SSLCertificateKeyFile /etc/ssl/private/abc.fr.key
            SSLCertificateChainFile /etc/ssl/private/gd_bundle_fr.crt
        </VirtualHost>

    The first SSL certificate, for abc.be, works fine, but the 2nd domain, abc.fr, still loads the first certificate. The following is the output of apachectl -S:

        VirtualHost configuration:
        wildcard NameVirtualHosts and _default_ servers:
        *:443   is a NameVirtualHost
                default server www.abc.be (/etc/httpd/conf/httpd.conf:1071)
                port 443 namevhost www.abc.fr (/etc/httpd/conf/httpd.conf:1071)

    Thanks
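
    Name-based SSL virtual hosts only work if both ends speak SNI, and Apache 2.2.3 on CentOS 5 predates SNI support, so every *:443 request is answered by the first (default) vhost and its certificate. Short of upgrading Apache and OpenSSL, the usual workaround is one IP address per certificate; a sketch, with the addresses as placeholders for a second address on the server:

        Listen 192.168.1.10:443
        Listen 192.168.1.11:443

        <VirtualHost 192.168.1.10:443>
            ServerName www.abc.be
            SSLEngine on
            SSLCertificateFile /etc/ssl/private/abc.be.crt
            ...
        </VirtualHost>

        <VirtualHost 192.168.1.11:443>
            ServerName www.abc.fr
            SSLEngine on
            SSLCertificateFile /etc/ssl/private/abc.fr.crt
            ...
        </VirtualHost>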

    Read the article

  • (solved) `ssh foo "<command/>"` not loading remote aliases?

    - by TomRoche
    Summary: why does this fail:

        $ ssh foo 'R --version | head -n 1'
        bash: R: command not found

    but this succeeds:

        $ ssh foo 'grep -nHe 'bashrc' ~/.bash_profile'
        /home/me/.bash_profile:3:# source the users .bashrc if it exists
        /home/me/.bash_profile:4:if [ -f "${HOME}/.bashrc" ] ; then
        /home/me/.bash_profile:5:  source "${HOME}/.bashrc"
        $ ssh foo 'grep -nHe "\WR\W" ~/.bashrc'
        /home/me/.bashrc:118:alias R='/share/linux86_64/bin/R'
        $ ssh foo '/share/linux86_64/bin/R --version | head -n 1'
        R version 2.14.1 (2011-12-22)

    Details: I am a (rootless) user on 2 clusters. One uses environment modules, so any given server on that cluster can provide (via module add) pretty much the same resources. The other cluster, on which I must also unfortunately work, has its servers managed individually, so I am in the habit of doing, e.g.:

        EXEC_NAME='whatever'
        for S in 'foo' 'bar' 'baz' ; do
          ssh ${S} "${EXEC_NAME} --version"
        done

    This works fine for packages installed normally/consistently, but often (for reasons unknown to me) packages are not: e.g. (compare the alias below to the alias above),

        $ ssh bar 'R --version | head -n 1'
        bash: R: command not found
        $ ssh bar 'grep -nHe 'bashrc' ~/.bash_profile'
        /home/me/.bash_profile:3:# source the users .bashrc if it exists
        /home/me/.bash_profile:4:if [ -f "${HOME}/.bashrc" ] ; then
        /home/me/.bash_profile:5:  source "${HOME}/.bashrc"
        $ ssh bar 'grep -nHe "\WR\W" ~/.bashrc'
        /home/me/.bashrc:118:alias R='/share/linux/bin/R'
        $ ssh bar '/share/linux86_64/bin/R --version | head -n 1'
        R version 2.14.1 (2011-12-22)

    Using aliases copes well with these install differences when I interactively shell into the server, but fails when I try to script ssh commands (as above); i.e., interactively,

        $ ssh foo
        ...
        foo> R --version

    calls my alias for R on remote host=foo, but scripting

        $ ssh foo 'R --version'

    doesn't. What do I need to do to make ssh foo "<command/>" load my aliases on the remote host?
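
    One known workaround, assuming bash is the remote login shell: aliases are only expanded in interactive shells, and ssh <command> runs a non-interactive one, so the aliases in ~/.bashrc never take effect. Forcing an interactive bash on the remote side makes them visible:

        # -i makes the remote bash interactive (so ~/.bashrc and its aliases load), -c runs the command
        ssh foo 'bash -ic "R --version | head -n 1"'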

    Read the article

  • Scheduled task does not run unattended on Windows 2003 server on VMware, runs fine otherwise

    - by lnm
    A scheduled task does not run on a Windows 2003 server on VMware; the same setup runs fine on a standalone server. The test below explains the problem. We really need to run a more complex bat file, but this shows the issue. I have a bat file that copies a file from server A to server B. I use full path names, no drive mapping. It runs fine on server B from a command prompt. I created a task that runs this bat file under a domain id (with password) that is a member of the administrators group on both servers. The task runs fine from the Scheduled Tasks screen, and as a scheduled task as long as somebody is logged into the server. If nobody is logged in, the task does not run. There is no error message in the Task Scheduler log, just an entry that the task started, but no entry for the finish or an error code. To add insult to injury, if the task copies a file in the opposite direction, from server B to server A, it runs fine as a scheduled unattended task. If I copy a file from server B to server B, the task also runs fine unattended. I recreated exactly the same setup on a standalone server: no issues at all. I checked the obvious things: the task has "Run only if logged on" unchecked, the domain id has the "log on as a batch job" privilege and logon rights, and the Task Scheduler service runs as Local System with automatic start. Any suggestions?
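
    A diagnostic sketch for the unattended case (the log directory and UNC paths below are placeholders): wrapping the copy so that its output and exit code land in a log file shows whether the bat file runs at all, and what the copy actually returns, when nobody is logged on.

        @echo off
        rem hypothetical wrapper; C:\tasklogs and the UNC paths are placeholders
        echo %DATE% %TIME% start >> C:\tasklogs\copy.log
        copy \\serverA\share\file.dat \\serverB\share\ >> C:\tasklogs\copy.log 2>&1
        echo %DATE% %TIME% exit code %ERRORLEVEL% >> C:\tasklogs\copy.log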

    Read the article

  • Windows Server 2008 32-bit & Windows 7 Professional SP1

    - by Harry
    I'm testing my two new Windows Server 2008 32-bit edition servers together with Windows 7 Professional 32-bit as a client. Let's say one is a primary domain controller (PDC) and the other is a backup domain controller (BDC), like in the old days, to keep things simple. Every setup step was done on the PDC and just replicated to the BDC. I didn't set up anything special; I just installed the servers with AD, DNS, and DHCP, that's all. Then I used my Windows 7 Pro 32-bit client to join the domain. It worked. After that I tried to change the password of a user (not administrator), but it always failed, saying it didn't meet the password complexity requirements, while in fact there's no such setting at all in the account policy, the default domain policy, or even the local policy. I tried disabling password complexity in the default domain policy instead of leaving it unconfigured and tested again, but it still failed. I browsed around and found a suggestion to set the minimum and maximum password age to 0, but that also failed. I restarted the server and the client and tried changing the password again; it still failed with the same error about not meeting the password complexity requirements. I looked in rsop.msc but didn't find anything. In fact, on another setup with Windows Server 2003 and Windows XP, using rsop.msc I can see a policy under Computer Configuration > Windows Settings > Security Settings > Account Policies > Password Policy. I also have a Windows 7 Pro 32-bit client in a Windows Server 2003 32-bit environment; I'm unable to find the same setting using rsop there either, yet that Windows 7 machine works fine. Can anyone suggest what the problem is and what to do so I can change my Windows 7 Pro laptop password in a Windows Server 2008 environment? Another thing: is it a correct assumption that we should be able to see all the policy settings in Windows 7, whether it's in a Windows Server 2003 or 2008 environment? Thanks.
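
    Two commands help pin down which policy is actually being enforced, assuming they are run on the domain-joined Windows 7 client: net accounts /domain reports the password policy exactly as the domain controller applies it (complexity, minimum length, and ages), regardless of what the GPO editors display.

        C:\> gpupdate /force
        C:\> net accounts /domain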

    Read the article

  • Configuring Apache and PHP to handle many connections

    - by Marc
    My preliminary setup is like this: two quad-core 8GB servers running Debian 6 with PHP and Apache, and one quad-core 16GB server running Debian 6 with MySQL. My plan is to have one 8GB server act as a proxy server, using vert.x (Java) to handle connections. I will let vert.x use its HttpClient to send web requests to the second 8GB server. That server has Apache installed and uses PHP to deliver any information it gets from the MySQL server on the third, 16GB machine. The main reason I want this setup is to keep things separated, so the proxy is the only way to access the system; the other two servers will only be reachable from the local network. I can have the vert.x proxy handle 5000+ concurrent connections, but I don't know how to configure Apache to handle all the requests coming from the proxy. PHP will connect over mysqli with a persistent connection pool of 500-800 connections; the MySQL server seems to have no issues on that side. In previous projects, the Apache part always caused issues, no matter how I set it up. I might not fully understand how to set up Apache: normally Apache should handle many concurrent connections, but for me it doesn't seem to.
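
    A rough starting point for the Apache side, assuming Apache 2.2 with the prefork MPM (the numbers are illustrative, not tuned): with something like 50 MB per PHP-enabled prefork process, an 8GB machine realistically supports on the order of 150 workers, so the proxy multiplexing many client connections onto fewer, reused backend connections matters more than raising MaxClients.

        <IfModule prefork.c>
            StartServers         20
            MinSpareServers      10
            MaxSpareServers      30
            ServerLimit         150
            MaxClients          150
            MaxRequestsPerChild 5000
        </IfModule>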

    Read the article

  • Issues with sustained traffic with pfSense

    - by Farseeker
    Last week we had to replace our pfSense firewall because it had a catastrophic hardware failure. All but one of the NICs were taken out of the old server and put into the new one. The one NIC that was not moved was the LAN NIC, as it is on-board. The other NICs are all WAN connections, and they must all be present (i.e. I can't disable one just for the sake of testing). After re-installing pfSense and restoring our backup of the configuration, everything came back online just fine; however, on the new hardware any download that takes longer than about 10 seconds just times out in the middle. Example 1: downloading from microsoft.com goes at about 900 KB/sec and times out after about 10 seconds (thus, just under 10 MB of content). Example 2: downloading from cnet.com goes at about 300 KB/sec and times out after about 10 seconds (thus, about 3 MB of content). By "times out" I mean that the download just stops, and you have to pause/resume to get the next part done, rinse and repeat until the download is complete. However, it's not consistent: sometimes it's 10 seconds, sometimes it's 4 seconds, and sometimes you can't even load a heavy HTML page because the page never finishes. I assume this is most likely because pfSense does not like the onboard NIC, as this is the primary difference between the two servers. It's recognised as nfe0, there's no room in the server for any more NICs, and I don't have any dual-port NICs handy to experiment with a different LAN connection. I've never had to troubleshoot this sort of issue before. Can anyone give me some pointers about where to start? Linux is not my forte, so please be kind!
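
    A first experiment worth trying, on the assumption that the nfe(4) driver's hardware offload is the culprit: buggy checksum/segmentation offload on onboard NICs classically produces exactly this stall-mid-transfer pattern. pfSense also exposes "disable hardware offload" settings in its GUI (the location varies by version), but from a shell the features can be turned off per-interface to test:

        # disable transmit/receive checksum offload and, where the driver supports them, TSO and LRO
        ifconfig nfe0 -txcsum -rxcsum -tso -lro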

    Read the article

  • BGP Multihomed/Multi-location best practice

    - by Tom O'Connor
    We're in the process of designing a new iteration of our network where we improve resiliency by adding a second datacentre, with an identical configuration of servers to our primary location. To achieve network connectivity, we're looking into a couple of possible methods; see my earlier questions http://serverfault.com/questions/86736/best-way-to-improve-resilience and http://serverfault.com/questions/101582/dns-round-robin-failover-and-load-balancing. I'm pretty convinced that BGP is the right way to go about this, and this question is not about RRDNS. 1) If we have 2 locations, do we announce the same IP address block from both locations? 2) If we did this, but had a management ssh interface on x.x.x.50 in datacentre A and on x.x.x.150 in datacentre B, what is the best-practice mechanism for reaching them? If I were nearest to A, all my traffic would go to x.50; but if I attempted to connect to x.150, I'd not be able to, because that address would only be valid at B, not at A. Is the best solution to announce 2 different netblocks, one at each location (bringing back the need for RRDNS), or to announce a single block and run some form of VPN between the two sites for management traffic?
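
    For the hybrid variant, the per-site announcement is simply each site's own prefix plus the shared block; a sketch in Quagga bgpd syntax, with the ASNs and prefixes as placeholders from the documentation ranges:

        router bgp 64600
         ! site B's own management prefix, announced only here
         network 203.0.113.0/24
         ! the shared service block, announced from both sites
         network 198.51.100.0/24
         neighbor 192.0.2.1 remote-as 64700
         neighbor 192.0.2.1 description upstream-transit

    Announcing the same block from both sites gives anycast-style failover for the service addresses, while each site's unique block keeps site-specific management interfaces reachable without a VPN.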

    Read the article

  • Accessing SSH_AUTH_SOCK from another non-root user

    - by Danny F
    The scenario: I am running ssh-agent on my local PC, and all my servers/clients are set up to forward SSH agent auth. I can hop between all my machines using the ssh-agent on my local PC. That works. I need to be able to SSH to a machine as myself (user1), change to another user named user2 (sudo -i -u user2), and then ssh to another box using the ssh-agent I have running on my local PC. Let's say I want to do something like ssh user3@machine2 (assuming that user3 has my public SSH key in their authorized_keys file). I have sudo configured to keep the SSH_AUTH_SOCK environment variable. All users involved (user[1-3]) are non-privileged users (not root).

    The problem: when I change to another user, even though the SSH_AUTH_SOCK variable is set correctly (let's say to /tmp/ssh-HbKVFL7799/agent.13799), user2 does not have access to the socket that was created by user1 - which of course makes sense, since otherwise user2 could hijack user1's private key and hop around as that user. This scenario works just fine if, instead of getting a shell via sudo for user2, I get a shell via sudo for root, because naturally root has access to all the files on the machine.

    The question: preferably using sudo, how can I change from user1 to user2 but still have access to user1's SSH_AUTH_SOCK?
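
    One approach that keeps sudo in the picture, assuming the filesystem holding /tmp supports POSIX ACLs (setfacl comes from the acl package): user1 grants user2 just enough access to traverse the agent directory and use the socket, then switches user.

        # run as user1, before switching users
        setfacl -m u:user2:x "$(dirname "$SSH_AUTH_SOCK")"   # execute bit to traverse the directory
        setfacl -m u:user2:rw "$SSH_AUTH_SOCK"               # read/write on the socket itself
        sudo -i -u user2
        # user2 inherits SSH_AUTH_SOCK (sudo keeps it) and can now reach the socket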

    Read the article

  • New virtualization project and old SAN

    - by Chris
    Hi, we'll shortly start a partial virtualization of our infrastructure and consolidate a dozen servers into virtual instances. We'll also add some client application virtualization into the mix for good measure. Two HP DL380s with the new Xeon 56xx CPUs and 96 GB of memory each, running XenServer + XenApp, will then take charge of most of our IT needs. So far, so good. One element that is missing from the picture is the storage part. We need some sort of shared storage to enable live migration and other HA features. We have an IBM DS4300 SAN that we can use for that, but since it has been in production since 2005, I'm not sure about giving such a critical role to a 5-year-old part. So my question is: what is the reliability of this kind of equipment after 5 years? Can it last 10 years with no or few problems? Since our budget is tight, not buying another SAN would be a big plus. This leads me to another question: FC disks cost an arm and a leg from IBM. When I type the replacement part number into Google (for example, IBM 300GB 15K 4GBPS FC HDD 42D0410), I can find it at a fraction of the price at various sites. So am I stupid to buy from IBM, or naive to trust a 3rd-party reseller? Thanks, Chris

    Read the article

  • How to balance the root domain using NS records?

    - by Patrick McCurley
    I have two load balancers that balance incoming traffic across multiple data centers. These work fine; I can test them by doing an 'nslookup mydomain.com <balancer IP>'. I have now taken out DNS services with DYN.com, which lets me manage the DNS zone file, so that typing mydomain.com will ask my load balancers what IP address to resolve to.

    Step 1: the NS records for www. I set up A records (glue) for ns1 & ns2, then the corresponding NS records to delegate the DNS lookup to the balancers instead of DYN.com's nameservers:

        ns1.mydomain.com    A   [ip address of load balancer 1]
        ns2.mydomain.com    A   [ip address of load balancer 2]
        www.mydomain.com    NS  ns1.mydomain.com
        www.mydomain.com    NS  ns2.mydomain.com

    All is well: when I type www.mydomain.com, the request gets delegated to my load balancers, which provide the IP address of the endpoint, and the connection is made successfully.

    Step 2: the NS record for the root. This is where I run into problems. I need customers to be able to type 'mydomain.com' (without the www) and ALSO get delegated to the load balancers for the IP address. However, from the research I have done, and through the DYN control panel, it appears you are not allowed to provide an NS record for the root, as this overrides the default NS servers. How can I delegate both the root and the www to my load balancers?
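
    The apex genuinely cannot be delegated on its own: an NS record at mydomain.com covers the whole zone, not just the bare name. The usual fallback is to leave the zone at DYN and publish the balancer addresses directly as A records at the root, with a low TTL so a failed site can be pulled quickly; a sketch, keeping the placeholders from the question:

        mydomain.com.    60    IN    A    [ip address of load balancer 1]
        mydomain.com.    60    IN    A    [ip address of load balancer 2]

    The alternative is to make the balancers authoritative for the entire zone (point the domain's NS records at ns1/ns2 at the registrar), so they answer for the apex as well as www.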

    Read the article

  • How to set up Proxmox 1.9 on VPN?

    - by Gnudiff
    Disclaimer: I have only rudimentary knowledge of VPNs. I would love to learn about them properly; however, at the moment I really need to make things work on short notice. I am trying to set up a Proxmox virtualization platform in an existing network. The network currently consists of several servers running the free edition of VMware. There is some sort of VPN defined in the switch. In order for the VMware management interface to be accessible, a checkbox needs to be ticked in the network settings for the VPN and the VPN id entered. I didn't notice any such configuration option during the Proxmox installation, so my Proxmox VE on the same physical server, using the same manual IP settings (ip/nm/gw), is not accessible. As I understand it, I should edit Proxmox's underlying Debian config in /etc/network/interfaces, but I have no idea what to aim for: do I specify the settings on eth0, or do I make a virtual interface? How do I make it accessible for both the Proxmox VE host and the future VMs? I read the Proxmox installation guide, but unfortunately it presumes a better understanding of VPNs than I have. A config template or similar would be appreciated. Thanks in advance.
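
    A sketch of /etc/network/interfaces, on the assumption that the "VPN id" on the switch is actually an 802.1Q VLAN tag (which is what the corresponding VMware checkbox configures) and that the 8021q module is available; the tag 20 and the addresses are placeholders. Bridging the tagged sub-interface eth0.20 puts both the Proxmox host and its VMs on that VLAN:

        auto vmbr0
        iface vmbr0 inet static
            address  192.168.10.5
            netmask  255.255.255.0
            gateway  192.168.10.1
            bridge_ports eth0.20
            bridge_stp off
            bridge_fd 0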

    Read the article

  • Remote Desktop Session Black after Minimize

    - by TorgoGuy
    PROBLEM: When I minimize a remote desktop session and restore it, the remote desktop screen shows up black. This only happens when connecting to a particular computer.

    DETAILS: If I start clicking around in the black area, portions of the screen start redrawing and showing up correctly. For example, if I leave a window open in the remote session and click where that window is located on the remote computer, then that window--and only that window--will redraw, and sometimes a portion of that window won't redraw (usually the toolbar). To clarify: the window only has to be minimized momentarily, so it doesn't seem to be a timeout issue. Clicking or typing in the remote session still causes the remote computer to respond appropriately. Disconnecting from the session and reconnecting restores the whole screen image, as does clicking all over the place in the black image (causing each section to redraw).

    CONFIGURATION: This problem only happens for me when connecting to a particular computer (a W2K Server box configured to allow remote administration) and only with certain client computers. I've tried 7 different client computers with various versions of Remote Desktop (the OSes were Win2K, Server 2003, Server 2008, Windows 7 RC, and 3 XP machines), and two of them exhibit the problem (one of the XP boxes and the Windows 7 one). Those same computers can RDP to other computers without problems.

    RESOLUTION ATTEMPTS: I have tried the following:

        1. Disabling the LOCAL screen saver, as mentioned on Technet
        2. Turning off bitmap caching in the client, as mentioned on many forums
        3. Updating to version 6.1 of the Remote Desktop client
        4. Using mRemote (I doubted this would work, since it uses MS's code for connecting to RDP servers)
        5. Turning off all video acceleration

    QUESTION: Any ideas on what is causing this?

    Read the article

  • Setup of high-end web server and DB server cluster on Amazon EC2: Is this how it's done?

    - by user1086584
    Amazon is so technical, I want to confirm that my understanding is correct. We have a large 500 GB database (OrientDB) which will be mirrored between two instances in the same Availability Zone. We believe the database size will grow rapidly. The plan is: get 4 large instances of types compatible with Placement Groups (and ideally Enhanced Networking), 2 for web and 2 for DB. We use EBS-backed instances to store our operating system (discussion here: http://alestic.com/2012/01/ec2-ebs-boot-recommended). We can set up ephemeral SSD instance storage as swap space (though its contents are lost when the instance stops; I hear it's hard to add ephemeral storage when booting from EBS, but possible). For offsite backup, we will take periodic snapshots and store them on S3. Obviously we need to ensure the database is in a safe state when the snapshot happens, to avoid corruption (any hints here, aside from shutting down the DB?). If the database gets too big, we need to create a larger EBS volume; we can use RAID to break the 1 TB limit: http://alestic.com/2009/06/ec2-ebs-raid. Static assets on the web servers will be stored on S3. Is that correct? Or am I missing something?
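
    On the snapshot-consistency hint: a common pattern short of shutting the DB down is to flush and freeze the filesystem for the instant the snapshot is initiated, since EBS snapshots are point-in-time once started. A sketch, assuming a dedicated ext4/XFS data volume, the aws CLI, and placeholder volume id and mount point:

        # freeze writes, cut the snapshot, thaw - the freeze only needs to last a moment
        sudo fsfreeze -f /data
        aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "orientdb backup"
        sudo fsfreeze -u /data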

    Read the article

  • Can't figure out why Apache LDAP auth fails

    - by SethG
    Suddenly, yesterday, one of my Apache servers became unable to connect to my LDAP (AD) server. I have two sites running on that server, both of which use LDAP to auth against my AD server when a user logs in to either site. It had been working fine two days ago; for reasons unknown, as of yesterday, it stopped working. The error log only says this:

        auth_ldap authenticate: user foo authentication failed; URI /FrontPage [LDAP: ldap_simple_bind_s() failed][Can't contact LDAP server], referer: http://mysite.com/

    I thought perhaps my self-signed SSL cert had expired, so I created a new one for mysite.com, but not for the server hostname itself, and the problem persisted. I enabled debug-level logging; it shows the full SSL transaction with the LDAP server, and it appears to complete without errors until the very end, when I get the "Can't contact LDAP server" message. I can run ldapsearch from the command line on this server, and I can log in to it, which also uses LDAP, so I know the server can connect to and query the LDAP/AD server. It is only Apache that cannot connect. Googling for an answer has turned up nothing, so I'm asking here. Can anybody provide insight into this problem? Here's the LDAP section from the Apache config:

        <Directory "/web/wiki/">
            Order allow,deny
            Allow from all
            AuthType Basic
            AuthName "Login"
            AuthBasicProvider ldap
            AuthzLDAPAuthoritative off
            #AuthBasicAuthoritative off
            AuthLDAPUrl ldaps://domain.server.ip/dc=full,dc=context,dc=server,dc=name?sAMAccountName?sub
            AuthLDAPBindDN cn=ldapbinduser,cn=Users,dc=full,dc=context,dc=server,dc=name
            AuthLDAPBindPassword password
            require valid-user
        </Directory>
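
    Given that ldapsearch works but mod_ldap fails right at the end of the TLS exchange, one suspect worth checking is mod_ldap's own certificate validation: it does not use the OpenLDAP client configuration that ldapsearch reads, so an AD certificate that was recently reissued or expired can break Apache alone. A server-level sketch - these directives go in httpd.conf outside any <Directory>, and the CA path is a placeholder:

        # trust the CA that signed the AD server's certificate
        LDAPTrustedGlobalCert CA_BASE64 /etc/pki/tls/certs/ad-ca.pem
        # or, purely as a diagnostic, disable verification to confirm the cause
        LDAPVerifyServerCert Off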

    Read the article

  • Are relative-path symlinks reliable on Rackspace Cloud Sites?

    - by Jakobud
    Rackspace's Cloud Sites have a lot of stupid limitations. For example, no SSH (in or out), no shell, no rsync, etc. (even through cron). Recently I learned that you can't reliably use symlinks in Cloud Sites. Apparently this is because the absolute path of your sites could change at any moment, since it's a shared-host environment split up between many disks/servers. I guess different accounts' sites get moved from disk to disk whenever Rackspace decides to, supposedly to increase efficiency across the board. So after talking with a Rackspace tech, he said they cannot guarantee that symlinks will always work. Obviously this is because if you have a symlink that uses an absolute path like this:

        /mnt/disk-34566/home/user34566/files/sites/www.mysite.com/mydir

    and your files get moved to a different disk (or whatever they do), then the absolute path will be different and the link will now be broken. That makes sense. So next, I asked the Rackspace tech whether relative-path symlinks were reliable. Say I have the following link:

        files/sites/www.mysite.com/mylink --> ../www.myothersite.com/anotherdir

    You can see that the symlink simply points to a nearby directory's sub-directory. He said they cannot guarantee that even those will always work. Since it uses a relative path to another nearby directory, I'm not sure how it could ever break from something Rackspace does. Do relative symlinks somehow rely on absolute paths underneath? Or is Rackspace using some weird custom filesystem where they would break from absolute path changes? It seems like a relative-path symlink would be fine and would only break if the user did something to mess up the directories involved. But when the techs say they "don't officially support symlinks of any kind", that makes me hesitant to use them for large commercial websites in Cloud Sites. Can anyone with Rackspace experience give input on this topic?
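
    On the underlying question: a relative symlink stores only the literal relative text, and the kernel resolves it against the directory containing the link at each access, so nothing absolute is baked in. A quick sketch of that behaviour:

        # the target is stored verbatim and resolves relative to the link's own directory,
        # so it keeps working as long as both trees are moved together
        cd files/sites/www.mysite.com
        ln -s ../www.myothersite.com/anotherdir mylink
        readlink mylink     # prints: ../www.myothersite.com/anotherdir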

    Read the article

  • How to configure TFTPD32 to ignore non PXE DHCP requests?

    - by Ingmar Hupp
    I want to give our Windows guy a way of easily PXE booting machines for deployment by plugging his laptop into one of our site networks. I've set up a TFTPD32 configuration which does just that, and our normal DHCP server ignores the PXE DHCP requests because they carry a magic flag, so this part works as desired. However, I'm not sure how to configure TFTPD32 to respond only to PXE DHCP requests (the ones with the magic flag) and ignore all normal DHCP requests (so that production machines don't get a non-routed address from the PXE server). How do I configure TFTPD32 to ignore these non-PXE DHCP requests? Or if it can't, is there another equally easy-to-use piece of software that he can run on his Windows laptop? Since the TFTPD part is working fine, a DHCP server with the ability to serve PXE only would do. Worst case, I'll have to set up a virtual machine with all this, but I'd much prefer a small, simple solution. I'm not interested in solutions that involve using the existing DHCP servers or separating machines on the network for deployment; the whole point is to be simple and stand-alone.
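
    If the virtual-machine fallback ever becomes necessary, dnsmasq's proxy-DHCP mode does exactly this filtering: it answers only PXE clients, with boot information only, and never hands out addresses, leaving the production DHCP server alone. A dnsmasq.conf sketch (the subnet, boot filename, and TFTP root are assumptions):

        port=0                            # disable the DNS component, keep DHCP/TFTP
        dhcp-range=192.168.1.0,proxy      # proxy-DHCP: supplement boot info, allocate no leases
        pxe-service=x86PC,"PXE network boot",pxelinux
        enable-tftp
        tftp-root=/srv/tftp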

    Read the article

  • Should I enable 802.3x hardware flow control?

    - by Stu Thompson
    What is the conventional wisdom regarding 802.3x flow control? I'm setting up a network at a new colo and am wondering whether I should be enabling it or not. My oh-cool-a-bright-and-shiny-new-toy self wants to enable it, but this seems like one of those decisions that could blow up in my face later on. My network:

        - An HP ProCurve 2510G-24 switch
        - A pair of Debian 5 HP DL380 G5s with built-in NC373i 2-port NICs, LACP'd as one link, 9000-byte jumbo frames enabled (application)
        - A pair of hand-built Ubuntu servers with 4-port Intel Pro/1000 NICs, LACP'd as one link, 9000-byte jumbo frames enabled (NAS)
        - A few other servers with single 1 Gbps ports, but one with 100 Mbps

    Most of this kit is 802.3x-capable. I've been enabling it as I go along, and am about to test the network. But as my go-live day nears, I am worried about the 802.3x decision, as I've never explicitly used it before. Also, I've read some 10-year-old articles out there on the Intertubes that warn against using flow control. Should I be enabling 802.3x hardware flow control?
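
    On the Linux side, what each NIC actually negotiated can be checked and changed per-port with ethtool (shipped with Debian/Ubuntu); confirming the negotiated state on both the servers and the switch ports is worth doing before go-live, whatever the policy decision turns out to be:

        ethtool -a eth0                 # show negotiated pause-frame settings
        ethtool -A eth0 rx on tx on     # force RX/TX pause on, if the driver allows it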

    Read the article

  • How can I remotely tell what brand/model internal SCSI card is installed in a machine?

    - by edmicman
    I am doing some consulting work for a previous employer, upgrading and migrating old servers to new hardware. There is an existing file server (HP ProLiant DL380) with a tape backup drive connected; it uses a SCSI interface, and I'm pretty sure it's an internal SCSI card. They are upgrading to new server hardware (HP ProLiant DL160 G6). The old server is 2U, the new one 1U, and we want to move the tape drive to the new server too. I'm trying to figure out whether the SCSI card in the old server can be installed in the new one or whether we'll need to source a new card; mostly, I don't know the height of the card and whether it's low-profile enough to fit in the new server. There is not much of a technical resource onsite, and the old server is in use anyway, so I would like to avoid making a trip in myself or having someone onsite pop open the case and tell me what card is there. It's running Windows Server 2003 - is there a way to tell from, say, Device Manager what make and model the SCSI card might be? Or any other system diagnostic program that would give me hardware info like that? Thanks for any info!
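
    One remote option on Server 2003, assuming WMI access (e.g. over an RDP session): the built-in wmic tool can list the SCSI controller's reported make and model without opening the case, though the Name string depends on the installed driver.

        C:\> wmic path Win32_SCSIController get Name,Manufacturer,DeviceID

    The PCI vendor/device IDs embedded in DeviceID can then be looked up to identify the exact card and check its physical profile.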

    Read the article

  • Security in shared hosting vs VPS 'virtual appliances'

    - by Pedro Loureiro
    I have to change my hosting provider. Right now I have a shared hosting account, but I'm considering trying the LAMP stack appliance from turnkeylinux.org. I'm very comfortable with Linux; I've been using it for a long time. I have no problem ssh'ing into remote machines to do whatever I have to do (coding, reading logs, moving files, deploying, etc.). The problem is that none of those tasks have involved securing the server/firewall. My experience has been as a desktop user or a developer deploying apps/files on remote servers. Ignoring security in the application logic (read: any scripts, frameworks, or websites I might have created or installed), I'm worried about things like the base configuration of daemons, the firewall, ports, executable scripts being readable from the outside, and whatnot. My question is: how do you compare the (expected) out-of-the-box security of the TurnKey LAMP stack with the (expected) security of a "regular" shared hosting provider? I was hoping to find some guides with a list of steps to protect my server, but the only documentation I found simply refers to Ubuntu's documentation.
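
    As a concrete baseline for the appliance case, a minimal inbound iptables policy for a LAMP box looks like the sketch below (assuming SSH on port 22 and web traffic only; run as root). Shared hosting offers no equivalent control - the trade-off is exactly this responsibility landing on you.

        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT
        iptables -A INPUT -p tcp -m multiport --dports 80,443 -j ACCEPT
        iptables -P INPUT DROP    # set the default-deny last so an active SSH session isn't cut off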

    Read the article

  • MySQL Cluster not working on Ubuntu

    - by user53864
    I am unable to set up MySQL Cluster on Ubuntu servers. As a starting point I worked from the link, but I have not been successful; the tarball version I downloaded is 6.3.45. As I only wanted to test the cluster, the data nodes and SQL nodes are the same machines, but the SQL nodes never appear as connected in the management node console, which looks like this:

        [ndbd(NDB)]     2 node(s)
        id=2    @192.168.1.107  (Version: version number, Nodegroup: 0, Master)
        id=3    @192.168.1.108  (Version: version number, Nodegroup: 0)

        [ndb_mgmd(MGM)] 1 node(s)
        id=1    @192.168.1.105  (Version: version number)

        [mysqld(API)]   2 node(s)
        id=4 (not connected, accepting connect from 192.168.1.107)
        id=5 (not connected, accepting connect from 192.168.1.108)

    On all 3 machines, mysql-server & client (apt-get install mysql-server mysql-client) were already installed; I completely stopped them and removed them from system start-up, and mysqld now comes from the extracted cluster tarball (/usr/local/mysql/support-files/mysql.server). For testing, I created a test database on both data nodes, but the tables are not syncing to the other node. I have checked many links; the configuration remains similar across all of them, but somewhere it's going wrong. Is any extra package required? Could anyone help me here? I have been trying this for the past 3 days. Thank you!
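
    The "not connected" [mysqld(API)] slots usually mean the mysqld processes were started without NDB support. A my.cnf sketch for each combined data/SQL node, using the management address from the output above; note also that tables only replicate if created with ENGINE=NDBCLUSTER, since databases and non-NDB tables never sync between nodes:

        [mysqld]
        ndbcluster
        ndb-connectstring=192.168.1.105

        [mysql_cluster]
        ndb-connectstring=192.168.1.105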

    Read the article

  • WAMP starts Apache or Mysql, but not both?

    - by ladenedge
    When I install WAMP, the Apache and MySQL services are set to run as the LocalService user, and all works well. However, because I need to access remote UNC paths in my PHP code, I need to run at least Apache as a user that exists on both the local host and the remote host - I'll call him WampUser. When both Apache and MySQL are set to start as WampUser, I cannot start both at the same time. If both are stopped, I can start either successfully; when I then attempt to start the other, I get "Error 1053: The service did not respond to the start or control request in a timely fashion." This error appears immediately; there is no timeout. When at least one of the services is set to start as LocalService, both start fine. I can therefore solve my problem by setting Apache to WampUser and MySQL to LocalService, but I'm more interested in why this is happening in the first place, especially because this situation does not occur on other servers - something I've done to this server has made these two services mutually exclusive when running as the same user. Here are some miscellaneous data points:

        - I am using Windows Server 2003.
        - I've granted recursive Full Control on the C:\wamp directory to WampUser.
        - Nothing appears in the event log after the service fails.
        - No log entries appear in either the MySQL log or the Apache error log.
        - Neither application appears in the process list when the appropriate service is stopped.

    Any ideas?

    Read the article

  • TeamCity sends inadequate responses after Selenium tests

    - by Dmitriy Sukharev
    I have TeamCity 7.0.2 on a CentOS 6.2 server without an X server. I've installed x11-fonts*, xvfb, firefox, and xauth, exported the environment variable DISPLAY=localhost:1, and started Xvfb. After that I could run Selenium tests using Maven. The tests execute, but there's an issue with TeamCity. TeamCity usually behaves absolutely inadequately: it confuses images on the page, sends XML or strange text (ampersands and numbers) in responses, and is a bit slower; the tests also run 4 times slower on the server (1h 15m) than on a tester's Windows 7-based machine (25m). It's worth noting that the tests launch two Jetty servers for the tested application (one for the REST-services application and another for the client). In TeamCity I set the JVM command line parameters -Xms256m -Xmx1224m -XX:MaxPermSize=320m, and the additional Maven command line parameters end with "-DMAVEN_OPTS=-Xmx1024m" (without quotes). Both the web services and TeamCity use the same Oracle server (but different Oracle users). Finally, TeamCity and its build agent are on the same server. The server has only 4GB of RAM; during testing there are 400MB of RAM free and 1.2GB of swap in use. TeamCity and Firefox use about 65% of the CPU during testing. There's no firefox process left after testing ends. My knowledge of Selenium is weak; I only know that we use version 2.20.0 of the selenium-java Maven dependency. Please help me determine why TeamCity sends wrong responses after Selenium tests. I've tried to give you all the information I have, but feel free to ask me for more.
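
    One way to take the shared DISPLAY out of the equation, assuming the xvfb-run helper script is available on the system (it wraps Xvfb start-up, display allocation, and teardown around a single command): running the build under its own throwaway display avoids TeamCity and the tests fighting over one long-lived Xvfb instance.

        xvfb-run --auto-servernum --server-args="-screen 0 1280x1024x24" mvn clean verify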

    Read the article
