Search Results

Search found 12497 results on 500 pages for 'linked servers'.

Page 303/500

  • Plesk directory structure problems

    - by johnnietheblack
    I have an entire website with the following directory structure:

        /example.com
            /html (public)
                /css
                /js
                index.php
            /lib
                session.php
                other_lib_files.php
            /views
                index.php
            /models
            /controllers

    As illustrated, the html is public, and anything above it is private. My site now needs to upgrade servers, and the new server (Linux w/ Plesk) has the following structure (reduced to the problematic parts below):

        /myplesksite.com
            /httpdocs
                /css
                /js
                index.php
            /private
                /lib
                /models
                /views

    What I would THINK is that I should be able to put my /lib, /views, /models, etc. in the directory directly above /httpdocs, the same way I had it on my previous server. Is that possible? Or do I have to put them in /private? I would really love not to have to adjust my internal paths throughout the site if not necessary...
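    If the vhost root does turn out to be locked down, one possible workaround, sketched under the assumption that you have shell access and that Plesk allows symlinks next to httpdocs (the vhost path is illustrative):

        # Recreate the old sibling directories as symlinks into /private,
        # so includes such as require '../lib/session.php' keep resolving:
        cd /var/www/vhosts/myplesksite.com
        ln -s private/lib lib
        ln -s private/views views
        ln -s private/models models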

    Read the article

  • Web Application Vulnerability Scanner suggestions?

    - by Chris_K
    I'm looking for a new tool for the ol' admin toolkit and would value some suggestions. I would like to do some "automated" testing of a handful of websites for XSS (cross-site scripting) vulnerabilities, along with checking for SQL injection opportunities. I realize that an automated tool approach isn't necessarily the only or best solution, but I'm hoping it would give me a nice start. The sites I need to scan span a range of stacks, from PHP / MySQL to ColdFusion, with some classic ASP and ASP.NET mixed in for good measure. What tools would you use to scan for web application vulns? (Please note I'm focusing on the web apps directly, not the servers themselves.)
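    Two commonly used open-source scanners that cover this ground, as a starting point (these are the basic documented invocations; make sure you are authorized to scan each target):

        # Nikto: broad web server / application checks, including XSS issues
        nikto -h http://example.com

        # sqlmap: probe a specific parameterized URL for SQL injection
        sqlmap -u "http://example.com/page.php?id=1" --batch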

    Read the article

  • Distributed computing for a company? Is there such a 'free' thing?

    - by Jakub
    I am new to the whole distributed computing / cloud thing. But I had an idea at work for our multimedia tasks like movie encoding and other CPU-intensive jobs (which sometimes take a few hours). Is there a 'free' (Linux?) way to go about using a Windows machine and offloading the CPU cycles for such a task to, say, 10 servers that are generally idle (CPU-wise)? I'm just curious if there is a way to do this, or am I just grasping at straws here? My thought is that a 'cloud' setup would achieve this; however, like I stated initially, I am a total newbie when it comes to it. This is just an idea, looking for some thoughts. Anyone achieved this?
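    One free, low-ceremony sketch of the idea, assuming the idle machines run Linux with SSH access and ffmpeg installed (hostnames and the encode command are placeholders):

        # Fan a directory of encode jobs out to three idle servers over SSH
        # with GNU parallel. --transfer ships each input file out, --return
        # brings the result back, --cleanup removes the remote copies.
        ls *.avi | parallel -S server1,server2,server3 \
            --transfer --return {.}.mp4 --cleanup \
            'ffmpeg -i {} {.}.mp4'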

    Read the article

  • Multi Thread Rsync Transfer

    - by reefine
    For some reason, when running a single rsync command I am getting 1 MB/sec to 2 MB/sec, even when connecting 2 servers that are both on 1 Gbps ports:

        rsync -v --progress -e ssh /backup/mysqldata/mysql-bin.000199 [email protected]:/secondary/mysqldata/mysqldata/mysql-bin.000199

    I have over 800 GB of data to transfer, split among 500 or so files all starting with mysql-bin.000*. I've found that running 25-30 rsyncs simultaneously from separate SSH windows gets me upwards of 25 MB/sec, but it would take me hours to run these all manually. Is there any way to get the 25 MB/sec from a single rsync command?
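    rsync itself is single-stream, but the manual fan-out can be automated; a sketch using xargs (the -P 8 parallelism, user and host are illustrative):

        # Launch up to 8 rsyncs at once, one per binlog file:
        ls /backup/mysqldata/mysql-bin.000* | \
            xargs -n 1 -P 8 -I{} rsync -a -e ssh {} \
            user@remotehost:/secondary/mysqldata/mysqldata/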

    Read the article

  • Amazon EC2 SQL Server Connection

    - by cnxmax
    I have two instances running on Amazon AWS EC2. One is running MS SQL Server 2005; the other is running a web application. I CAN connect to the database in my app using a connection string that references the public IP of the EC2 instance running SQL Server. I CANNOT connect from the web app server if I change the connection string to reference the database server's private IP address. But I can connect if I run that same code on the database server itself. I can remote desktop from the app server to the database server using the private IP. I have a feeling there is something in my SQL Server configuration that is preventing this remote connection. I have remote connections enabled, and I have it set to listen on all IP addresses. Any ideas? Other things I've done:

    - Added exceptions to Windows Firewall
    - Tried connecting using the EC2 DNS names
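    Two things worth double-checking, offered as assumptions rather than a diagnosis: traffic between instances on private addresses must be allowed by the EC2 security group (public-IP traffic may already be), and the firewall exception should cover SQL Server's port explicitly. On a Server 2008-style firewall that would look like the following (1433 assumes the default instance port; older netsh firewall syntax differs):

        REM On the database server: open 1433 for inbound TCP
        netsh advfirewall firewall add rule name="SQL Server" ^
            dir=in action=allow protocol=TCP localport=1433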

    Read the article

  • Problems with connecting to ASM instance

    - by Rodnower
    I have Oracle 11 with RAC installed on RedHat 5. I have two servers, with one instance on each. On each server I can connect to the appropriate database instance, but not to the ASM instance. I connect as user oracle11 and type:

        export ORACLE_SID=+ASM1
        sqlplus "/ as sysdba"

    It connects, but prints:

        Connected to an idle instance

    and when I try to access parameters or views I get errors. To confirm that +ASM1 is the SID, I typed:

        ps aux | grep pmon

    and got:

        asm_pmon_+ASM1

    I also tried with +ASM, but that was also unsuccessful. What is wrong here?
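    One common cause, offered as an assumption (the grid path below is illustrative): in 11g the ASM instance runs out of the Grid Infrastructure home, so connecting with the database ORACLE_HOME attaches sqlplus to a nonexistent instance, which is reported as idle:

        # Point the environment at the grid home before connecting,
        # and use the SYSASM role for ASM administration in 11g:
        export ORACLE_HOME=/u01/app/11.2.0/grid
        export ORACLE_SID=+ASM1
        $ORACLE_HOME/bin/sqlplus / as sysasm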

    Read the article

  • Why does the Java VM process eat up more RAM than specified in the -Xmx parameter?

    - by evilpenguin
    I have multiple servers running CentOS 5.4 with only one application running on a Java VM. I've configured the Java VM with the following arguments:

        java -Xmx4500M -server -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -XX:NewSize=1024m -Djava.net.preferIPv4Stack=true -Dcom.sun.management.jmxremote=true

    The machines I'm running the VM on have 6 GB of RAM and no other applications running. After a while, the java process starts to hit the swap space really hard; I get this info out of the top command:

        7658 root 25 0 11.7g 3.9g 4796 S 39.4 67.3 543:54.17 java

    On the other hand, if I connect via JConsole, it reports that the Java VM has 2.6 GB used, 4.6 GB committed and 4.6 GB max. java -version returns:

        java version "1.6.0_17"
        Java(TM) SE Runtime Environment (build 1.6.0_17-b04)
        Java HotSpot(TM) 64-Bit Server VM (build 14.3-b01, mixed mode)

    Why is the Java VM expanding so much past its allocated heap size? And where does that memory go, if it's not reported in JConsole?
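    For context, -Xmx caps only the Java heap; the process also allocates native memory for the permanent generation, per-thread stacks, NIO direct buffers and the JIT code cache, each with its own limit. A hedged illustration (values are examples, not recommendations, and MyApp is a placeholder):

        # Capping the main non-heap budgets explicitly on a Java 6 VM:
        java -Xmx4500M \
             -XX:MaxPermSize=256m \
             -Xss512k \
             -XX:MaxDirectMemorySize=512m \
             MyApp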

    Read the article

  • Getting NFS clients to retry mount if NFS server down when client boots

    - by z0mbix
    I have an NFS server that several clients mount. I am using the following in my /etc/exports on the server:

        /content *(rw,no_root_squash)

    and on the clients in my /etc/fstab I have:

        content.prd.domain.tld:/content /content nfs rw,hard,intr 0 0

    If the clients boot while the NFS server is down, the share does not get mounted. I read in the NFS man page that the retry defaults should handle this:

        retry=n   The number of minutes to retry an NFS mount operation in the
                  foreground or background before giving up. The default value
                  for foreground mounts is 2 minutes. The default value for
                  background mounts is 10000 minutes, which is roughly one week.

    I have tested this, but it doesn't appear to work. Am I missing something? All servers are RHEL 5.4. Cheers, z0mbix
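    One detail worth noting, as a likely explanation: the 10000-minute retry applies only to background mounts, and a plain fstab entry is a foreground mount, which blocks the boot and gives up after 2 minutes. Adding the bg option makes a failed mount retry in the background instead:

        content.prd.domain.tld:/content /content nfs rw,hard,intr,bg 0 0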

    Read the article

  • Prevent IIS7 HTTPS from binding to all SSL IP addresses

    - by robpaveza
    I've had this interesting problem with IIS7. I have a number of HTTPS sites in IIS7. That hasn't been a problem, until I wanted to go and set up VisualSVN Server using an SSL certificate. The installer had trouble starting the service. When I looked in the event log, the error was that "the file is already in use by another process." I figured that the "file" was really a socket, and checked with netstat - even though IIS was only bound to three specific IP addresses (.160, .156, and .168) with port 443, it was consuming *:443. I could stop the World Wide Web Publishing Service, start VisualSVN, and then start IIS, but then none of my SSL servers would start. Any helpful hints about how I could make IIS not try to default-bind to *:443? Thanks!!
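    A sketch of the commonly suggested fix, under the assumption that the wildcard binding comes from HTTP.sys rather than from the sites themselves (IPs are illustrative): inspect the bindings, then restrict HTTP.sys to the addresses IIS actually uses so other services can take the rest.

        REM Inspect current SSL bindings and the IP listen list
        netsh http show sslcert
        netsh http show iplisten

        REM Restrict HTTP.sys to the three IIS addresses
        netsh http add iplisten ipaddress=10.0.0.160
        netsh http add iplisten ipaddress=10.0.0.156
        netsh http add iplisten ipaddress=10.0.0.168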

    Read the article

  • How To Perform Distributed Website Monitoring?

    - by cballou
    I would like to know how sites like the following perform distributed website monitoring (from multiple checkpoints/countries). pingdom.com, site24x7.com, uptrends.com, siteuptime.com, etc, etc. To be exact, what process would occur in checking if a given domain name went down? If the server finds that the site is down, what is the next step? Would it make a REST API request to a separate server to run the same test and report the results? I have a few theories, including: utilizing host(s) from different countries utilizing proxies from different countries I'm looking for the most proper or correct way to handle this, which can include the usage of servers from multiple countries/hosts.
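    The probe itself is usually simple; a minimal sketch of what one remote checkpoint might run (URL and timeout are illustrative), with a wrapper reporting the result back to a central aggregator over whatever API you choose:

        #!/bin/sh
        # Print the HTTP status code and total request time for one check.
        curl -s -o /dev/null --max-time 10 \
             -w '%{http_code} %{time_total}\n' \
             http://example.com/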

    Read the article

  • VirtualBox - Mac OSX host Win7 guest - no Internet access for guest VM

    - by nodelayheehoo
    I have a Mac running OSX 10.9.2, and I just downloaded and installed a Win7 IE9 VM in VirtualBox. My Mac uses Wi-Fi for internet access, and it's behind a proxy (it's a work machine). VirtualBox loads the VM fine, and at some point the VM can see the DNS servers of the host. But I've never been able to make the VM have internet access. I've tried all kinds of combinations of Network settings on the VM via the VirtualBox Settings, in conjunction with Internet Sharing in OSX's System Preferences, but no luck. Has anyone done a similar setup and made the VM successfully connect to the Internet? Thanks in advance for any inputs. [ Update: I was able to get internet access for the VM when the host was using my home network. When I ran the VPN software to connect to the work network, the internet access went away again.] (Initially posted this on stackoverflow.com, but it was put on hold as off-topic by several users, and was advised to ask here instead)
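    When the host itself reaches the web through a proxy, a NAT guest needs the same proxy configured inside Windows, and DNS lookups can be routed through the host's resolver; a sketch (the VM name is a placeholder):

        # With the VM powered off, pass host DNS resolution to the NAT guest:
        VBoxManage modifyvm "Win7 IE9" --natdnshostresolver1 on

        # Inside the guest, point Windows at the work proxy:
        # Internet Options -> Connections -> LAN settings -> Proxy server.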

    Read the article

  • Application Request Routing (ARR) - Single Server Reverse Proxy(ish) Setup

    - by Justin
    I have 1 webserver that has two .NET apps running on it. These are set up on the server as app1.mydomain.com and app2.mydomain.com. I would like to be able to take any request going to app1.mydomain.com/subfolder and rewrite it to app2.mydomain.com/subfolder using ARR. I am having difficulty getting this to work on a single server, and all the ARR examples on the net seem to imply that I require another server dedicated to ARR sitting in front of the two web servers. Is what I am attempting to do possible on one web server, and if so how?! Thanks all.
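    This is possible on one box; a sketch under the assumption that the URL Rewrite module is installed and ARR's proxy mode is enabled at the server level (the rule name and paths are illustrative), placed in app1's web.config:

        <system.webServer>
          <rewrite>
            <rules>
              <rule name="ProxySubfolderToApp2" stopProcessing="true">
                <match url="^subfolder/(.*)" />
                <action type="Rewrite"
                        url="http://app2.mydomain.com/subfolder/{R:1}" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>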

    Read the article

  • How to create a Linux Media Server using Ubuntu?

    - by Thomas
    Hello all: I'm an intermediate Linux user and a relative beginner to servers. I would like some help finding resources on setting up a basic server. I have Googled, and am a member of the Ubuntu Forums, but figure it can't hurt to ask the Stack Overflow community for help as well. I plan on installing on an old laptop (Lenovo Thinkpad R61i or Toshiba Satellite A105). I have downloaded the latest Ubuntu (9.10) but don't know how to do any of the configuration. I just want a server to store my files where I can access (download and/or stream) them from a browser. Any help you can give is greatly appreciated. Thanks! Thomas
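    A minimal starting point for the browser-accessible part, assuming Apache's stock Ubuntu layout (the media path is illustrative):

        # Install Apache and publish a media directory with an auto index:
        sudo apt-get install apache2
        sudo mkdir -p /var/www/media
        sudo cp ~/Videos/* /var/www/media/
        # Then browse to http://<server-ip>/media/ to list and fetch files.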

    Read the article

  • Sharing Internet Connection using an ad-hoc wifi network

    - by Apps
    I've installed a WiFi adapter in my Windows XP PC and created an ad-hoc network. I am able to connect to the network through my iPod Touch. On the same PC I have a LAN connection to the Internet. I need to share this Internet connection with my iPod too. The problem is that Windows did not assign an IP address to this WiFi network (even though automatic IP assignment is selected). When I tried to share the Internet connection, I got a message that the LAN network adapter's IP address would be changed to 192.168.1.1. But if this happens I will not be able to connect to other devices/servers on my LAN. How do I share the Internet connection through WiFi?

    Read the article

  • Memcached failover

    - by user25164
    We have 2 memcached servers configured and use the Enyim client. When one of the servers is down, it appears that this server is added to the deadServers list (ServerPool.cs) and the client tries to resurrect it every 10 seconds (we have configured deadTimeout to be 10 seconds). Attempting to connect to the failed server causes a TCP timeout, so the pages take a long time to load, which results in a bad user experience.

    1) What is the standard way of resolving this issue? There are some posts about removing the server from the deadServers list. Is it okay to do this?
    2) What is the recommended deadTimeout setting? (I understand it's 2 minutes by default, and we've changed it to 10 seconds in our implementation.)
    3) Am I correct in my understanding that the cached data is not replicated across Server 1 and Server 2? If Server 1 is down, does the client go to the database to fetch these values (without really checking Server 2)?

    Any help is really appreciated.
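    For reference, a sketch of where these knobs live in the Enyim configuration (values are illustrative; verify the attribute names against your client version). Shortening the socket connect timeout is what usually tames the slow pages, since deadTimeout only controls how long a node stays blacklisted:

        <enyim.com>
          <memcached>
            <servers>
              <add address="10.0.0.1" port="11211" />
              <add address="10.0.0.2" port="11211" />
            </servers>
            <socketPool connectionTimeout="00:00:02"
                        deadTimeout="00:00:10" />
          </memcached>
        </enyim.com>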

    Read the article

  • How can I move a load of zone records from a web-based system to a text-based one?

    - by Chris Adams
    Hi there, I have a few domains with Dreamhost where I have set up a load of records using their web-based DNS system, and I'd like to move them to another provider that lets me enter the info directly as a text file for their name server (BIND 9) to use. (If you're interested, I'm moving them to Gandi.net.) Previously, when I used a cPanel-based system to do something similar, there was a tool that let me simply enter a domain name, and any available records were automagically entered into the system, saving me typing them myself (and bringing down sites with silly typos in the process). What open source tool can I use to query a domain for all the relevant subdomains and records and list them in a format like a zone file that I can use with other name servers?
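    dig can produce exactly that when the current name servers permit zone transfers; many providers refuse AXFR, in which case you fall back to querying record types one by one (the names below are placeholders):

        # Full zone transfer, printed in zone-file format:
        dig @ns1.dreamhost.com example.com AXFR > example.com.zone

        # Fallback if AXFR is refused: query common types individually
        for t in SOA NS A AAAA MX CNAME TXT; do
            dig +noall +answer example.com $t
        done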

    Read the article

  • Why am I unable to turn off recursion in ISC BIND?

    - by nbolton
    Here's my named.conf.options file:

        options {
            directory "/var/cache/bind";
            dnssec-enable yes;
            auth-nxdomain no;    # conform to RFC1035
            listen-on-v6 { any; };
            # disable recursion
            recursion no;
        };

    I've tried adding allow-recursion { "none"; }; before recursion, but this also has no effect. I'm testing it by using nslookup on Windows, with google.com. as the query (and it returns an IP, so I assume recursion is on). This issue occurs on two servers with similar setups.
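    A couple of hedged possibilities: the none ACL is conventionally written unquoted, and nslookup on Windows only proves something if it is actually pointed at this server rather than at your default resolver. A test run from the server itself avoids that ambiguity:

        options {
            recursion no;
            allow-recursion { none; };   # built-in ACL, normally unquoted
        };

        # Reload, then issue a recursive query for a zone this server is
        # not authoritative for; expect REFUSED or "recursion requested
        # but not available" if recursion is really off:
        rndc reload
        dig @127.0.0.1 google.com A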

    Read the article

  • VLC Media Server

    - by Josh
    We are using VLC on Ubuntu and trying to set up a streaming media server. We have the HTTP interface working fine from remote computers, and we can also see the video playing as text if we don't run VLC inside screen. Our problem is the output streaming. When we use the main VLC page you get when you go to the server's IP, it does not save the output MRL (refreshing the page makes it go away, even after clicking Save). We tried the VLM page, and it appears to work fine from the HTTP page (it buffers, plays, timers go up when not paused, etc.). However, we still cannot connect remotely with a VLC client. The output parameters do save properly on the VLM page. We are noobs when it comes to this. Does anyone have a to-the-point procedure for getting a file to play and stream on Ubuntu using VLC, assuming VLC is installed?
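    A minimal command-line sketch that sidesteps the web interface entirely (file name, port and mux are illustrative):

        # On the server: stream one file over HTTP as an MPEG-TS on 8080
        cvlc video.mp4 --sout '#standard{access=http,mux=ts,dst=:8080}'

        # On the client: open the stream in VLC
        vlc http://<server-ip>:8080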

    Read the article

  • Passing all traffic through Cloudflare

    - by Nick
    I am new to Linux system administration and I am experimenting with iptables, trying to learn how to really lock down a system with them. One thing a friend of mine recommended was that there is a way to pass all incoming traffic through CloudFlare, so even if attackers resolved the server IP they still couldn't (D)DoS it directly. This is exactly what they said: "Simply config your servers iptables to only allow incoming connections from CloudFlares IP ranges then set it to allow only your IP/IP range to connect on port 21 (SSH)". Could someone help me with the commands I'd need to run on Ubuntu to get this effect?
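    A sketch of that policy (the CloudFlare ranges shown are placeholders, so pull the current list from cloudflare.com; note that SSH normally listens on port 22, not 21):

        # Default-deny inbound; keep loopback and established flows
        iptables -P INPUT DROP
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        # Web traffic only from CloudFlare's published ranges (placeholders)
        for net in 103.21.244.0/22 104.16.0.0/12; do
            iptables -A INPUT -p tcp -m multiport --dports 80,443 \
                -s "$net" -j ACCEPT
        done

        # SSH only from your own address (placeholder)
        iptables -A INPUT -p tcp --dport 22 -s 203.0.113.5 -j ACCEPT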

    Read the article

  • Remote logging for multiple Apache virtual hosts using syslog-ng

    - by James
    I'm running a couple of Apache web servers that each have 4-8 separate virtual hosts on them. I'm trying to set up a dedicated log server that stores each virtual host's access and error logs in a separate directory for that virtual host. For example, on the logging server:

        /var/log/remove/10.0.0.2/virtualhost1 contains access_log and error_log
        /var/log/remove/10.0.0.2/virtualhost2 contains access_log and error_log
        /var/log/remove/10.0.0.3/virtualhost3 contains access_log and error_log

    and so on... Right now I have it split up by host, but I can't figure out how to additionally split it by virtual host. Here are the relevant lines from the logging server's syslog-ng.conf:

        source r_src { tcp(ip("0.0.0.0") port(5140)); };
        destination r_all { file("/opt/splunk/logs/$HOST"); };
        log { source(r_src); destination(r_all); };

    Any help would be appreciated. Thanks!
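    One hedged approach: give each virtual host its own syslog program tag on the Apache side and key the destination on syslog-ng's $PROGRAM macro (the logger invocations and paths are illustrative, and this assumes the web servers' local syslog forwards to the log host):

        # In each Apache <VirtualHost>, pipe logs through logger with a tag:
        CustomLog "|/usr/bin/logger -t virtualhost1_access -p local0.info" combined
        ErrorLog  "|/usr/bin/logger -t virtualhost1_error -p local0.err"

        # On the logging server, split by sending host and by program tag:
        destination r_all { file("/var/log/remove/$HOST/$PROGRAM"); };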

    Read the article

  • Best practice? Using DPM to back up VMs within each VM or through the host?

    - by andrew
    We've got two Hyper-V hosts running multiple VMs (all flavors of Windows Server). One of the VMs is running MS Data Protection Manager 2010, which runs beautifully (most of the time) and is connected to a separate NAS via iSCSI for the DPM storage. I noticed when I installed the DPM agent on the Hyper-V hosts that it enumerates the VMs in the DPM protection listing. I don't want to burn through my storage space too fast with duplicate protection, so I was wondering: is it recommended to back up VMs through the host, or is it better to install the DPM agent on each VM and back it up as I would any other machine? It would seem as though most people (currently including me) do it the second way, but is there any advantage to including the entries under Hyper-V (Backup Using Child Partition Snapshot)?

    Read the article

  • Backing up a streaming server

    - by Maxwell
    I want to set up a new streaming server for my website, which mainly holds video and audio files. But how do we maintain a backup of the streaming server if the storage size is increasing day by day? On database servers like SQL Server, backups can easily be taken and restored, as they do not occupy much space for medium-range applications. How, on the other hand, can we take a backup of a streaming server? If the server fails, there should be an alternative server / solution that decreases the downtime. How is the back-end architecture of YouTube built to handle this?
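    For the bulk-media side, one common pattern is incremental snapshots in which unchanged files are hard-linked rather than re-copied, so daily growth costs only the new files; a sketch (host and paths are placeholders):

        # Nightly snapshot: unchanged files become hard links to yesterday's
        # copy, so every snapshot is browsable but only new data uses space.
        today=$(date +%F)
        rsync -a --delete \
            --link-dest=/backups/media/latest \
            /srv/media/ backuphost:/backups/media/$today/
        ssh backuphost "ln -sfn /backups/media/$today /backups/media/latest"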

    Read the article

  • Inconsistent DNS report results on different websites

    - by Saif Bechan
    I am checking my server's DNS settings on different websites: intodns, dnssy, dnscog. But all the websites give me different results. Some say my mail server settings are not good; some say I have no A records for my NS records. Only on dnssy does everything look OK. Should I just trust the website with the best results, or what should I do? How does this sort of inconsistency occur in the first place? I am new to servers and DNS, and this is very misleading.
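    Rather than trusting whichever report looks best, you can check the disputed records yourself at the source (example.com stands in for your domain):

        # Which name servers is the zone delegated to?
        dig +short NS example.com

        # Does each NS name have an A record?
        dig +short A ns1.example.com
        dig +short A ns2.example.com

        # Mail: MX targets should resolve to A records, not CNAMEs
        dig +short MX example.com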

    Read the article

  • Apache is up but does not read requests

    - by bosh
    This usually happens a few minutes after restarting Apache: the httpd daemons are up, but they are not reading the requests from the sockets. The web clients just wait forever on the connection. When I run netstat, the Recv-Qs show a positive byte count which does not change. So basically the connection between the client and Apache is in the ESTABLISHED state but no progress is made. Restarting Apache solves the problem for a couple of minutes, but then it's déjà vu all over again. Other servers (sshd, ftpd, etc.) are fine. Where should I look further? Any clue? Thanks!
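    A couple of hedged first steps for this symptom, which often means every worker is busy or wedged so nothing is left to accept new requests:

        # Attach to one httpd child and see which syscall it is blocked in:
        strace -p <httpd-child-pid>

        # Check the scoreboard for stuck workers (requires mod_status):
        apachectl fullstatus

        # Snapshot the state of every httpd process:
        ps -eLf | grep httpd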

    Read the article

  • WCF Communication Problem

    - by vincpa
    Two separate servers: one is an IIS7 web application trying to connect to a WCF service on the other server. The initial connection attempt always fails, sometimes the second too; after that, everything works normally. What could be the cause of this problem? The exception that gets thrown is EndpointNotFoundException:

        Could not connect to net.tcp://192.168.0.83/MgrService/Manager.svc.
        The connection attempt lasted for a time span of 00:00:21.0289348.
        TCP error code 10060: A connection attempt failed because the
        connected party did not properly respond after a period of time, or
        established connection failed because connected host has failed to
        respond 192.168.0.83:808.
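    A quick way to separate network problems from service problems, sketched with built-in tools (the address and port come straight from the exception; 808 is the default net.tcp sharing port):

        REM From the web server: can we reach the net.tcp listener at all?
        telnet 192.168.0.83 808

        REM If this connects instantly every time, the listener is fine and
        REM the first-call delay points at the client side (name resolution,
        REM proxy settings, or service warm-up).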

    Read the article
