Search Results

Search found 1575 results on 63 pages for 'scott clements'.

Page 18/63 | < Previous Page | 14 15 16 17 18 19 20 21 22 23 24 25  | Next Page >

  • outbound ftp on server 2008 r2 stalls

    - by Scott Kramer
    The built-in command-line FTP client in Server 2008 does not support passive mode, so I've used these commands to allow outbound FTP (it stalls without them):

      1) Open port 21 on the firewall:

        netsh advfirewall firewall add rule name="FTP (no SSL)" action=allow protocol=TCP dir=in localport=21

      2) Activate the firewall application filter for FTP (aka Stateful FTP), which dynamically opens ports for data connections:

        netsh advfirewall set global StatefulFtp enable

    On Server 2008 R2, however, these commands seem to work but have no effect on outbound FTP; it still stalls. I do not want to use an alternative client.
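
    Not from the original question, but one thing that may be worth checking on the R2 box is whether the outbound control connection is allowed at all, and whether the stateful FTP filter actually took effect. A minimal sketch (the rule name is illustrative):

        netsh advfirewall firewall add rule name="FTP out (control)" dir=out action=allow protocol=TCP remoteport=21
        netsh advfirewall show global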

    Read the article

  • Install PHP 5.1.2, Requires: libcurl.so.3()(64bit) error

    - by Scott Rowley
    I'm trying to install PHP 5.1.2 on a CentOS 6 server (for grandfathering in old websites). I downloaded an RPM file (php-5.1.2-5.x86_64.rpm), but when I use:

        yum install php-5.1.2-5.x86_64.rpm

    I get the following error:

        Error: Package: php-5.1.2-5.x86_64 (/php-5.1.2-5.x86_64)
               Requires: libcurl.so.3()(64bit)

    I have tried several things, including the following:

      1) ln -s /usr/lib64/libcurl.so.4 /usr/lib64/libcurl.so.3 (to make it a symlink to the newer version)

      2) Downloaded curl-7.15.5-2.1.el5_3.5.x86_64.rpm, took libcurl.so.3 out of the RPM, and placed it at /usr/lib64/libcurl.so.3 with the same permissions as libcurl.so.4.

    Nothing has worked. Any ideas?
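
    Not part of the original question, but two commands that can help pin down exactly what the dependency is and whether anything in the repositories can satisfy it (assuming the same local RPM):

        rpm -qpR php-5.1.2-5.x86_64.rpm          # list everything the package requires
        yum provides 'libcurl.so.3()(64bit)'     # ask yum which package, if any, provides it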

    Read the article

  • User-unique .vimrc file for servers as root user

    - by Scott
    I'm getting thrown into an IDE war at the office, where multiple users have root access on our servers and like to have everything their own way with VIM. Unfortunately, our servers are locked down enough that if you want to do anything, you need root access. We get tired of typing sudo before every command, which would mean constantly retyping the wonderfully complex passwords mandated on us, so naturally we all just run sudo su - upon login to avoid all of this, even though that is obviously frowned upon.

    Of course, when it comes to VIM and custom .vimrc files, we are often stepping on someone else's custom .vimrc, and some of these files contain whacked-out functionality that may override behavior we have no idea about, much less the patience to learn.

    When working as root on a Linux box, is there any way for each of us to still maintain our own .vimrc without overwriting the shared file over and over again every time someone wants to use VIM? Ideally the solution would be universal across all servers, since we have many virtual machines with VIM installed, and our Microsoft Windows user-specific home directories are mounted on the servers under /home/username. Any recommendations for accommodating this?
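
    Not from the original question, but one possible approach, assuming the per-user home directories mounted under /home/username as described: each admin keeps a personal vimrc in their own mounted home and points vim at it after switching to root. A minimal sketch (the username and path are illustrative):

        alias vim='vim -u /home/scott/.vimrc'    # each admin substitutes their own mounted home directory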

    Read the article

  • Local dedicated hosting space (own hardware)

    - by Scott
    Where can I find local dedicated hosting space for my own hardware? I know I can rent dedicated hosting from various companies online, but usually I think that means I'm renting their hardware too. I just need a space with a network connection and a power outlet. That's it. How much would this cost? What would I search for? Is it readily available, or is it only the sort of thing huge companies do? I'm in the greater NYC area. It's for a project I'm working on, but the machine is loud and annoying, and I'd be willing to pay a little to get it out of sight and out of mind. I don't even care too much about the quality of the network connection. I'd rather not rent other people's hardware because it would probably cost a fortune to rent a machine like this (tons of RAM).

    Read the article

  • Why would searching in Websphere Portal Administration exclude some results?

    - by Scott Leis
    I have logged into a website based on WebSphere Portal 6.0 and gone to the Administration console, then to Portlet Management - Portlets. At a guess, there are about 200 portlets on this server. 22 portlets have a title starting with "IGM", but if I use the "Title starts with" search option and enter "igm" (or any case variation), only one of these portlets is found. The portlets excluded from the "Title starts with" search are also excluded from the "Title contains" search, but some of them can be found with the "Unique name contains" search (noting that only 5 of them have a unique name). Why would title searches exclude these portlets? I also see similar behaviour in other areas of Portal administration; for example, under Portal Settings - Custom Unique Names - Pages, title searches also exclude some results, depending on the search terms entered.

    Read the article

  • Two users using the same user profile while not in a domain.

    - by Scott Chamberlain
    I have a Windows Server 2003 machine acting as a terminal server; this computer is not a member of any domain. We demo our product on the server by creating a user account. The person logs in, uses the demo for a few weeks, and when they are done we delete the user account. However, every time we do this it creates a new folder in C:\Documents and Settings\. I know that with domains you can have many users point at one profile and make it read-only so all changes are discarded afterwards, but is there a way to do that when the machine is not on a domain? I would really like it if I didn't have to remote in and clean up the folders every time.
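
    Not from the original post, but a rough sketch of applying the mandatory-profile idea to a local (non-domain) account: build the demo profile once, rename its NTUSER.DAT to NTUSER.MAN so it becomes read-only, and point the demo account at it. The account name, password, and path below are illustrative:

        net user DemoUser P@ssw0rd1 /add /profilepath:C:\Profiles\DemoMandatory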

    Read the article

  • Log shipping on select tables.

    - by Scott Chamberlain
    I know I am most likely using incorrect terminology, so please correct me if I use the wrong terms so I can search better. We have a very large database at a client's site, and we would like up-to-date copies of some of its tables sent across the internet to our servers at our office. We only want to copy a few of the tables because the bandwidth required to log-ship the entire database (our current solution) is too high. Replicating directly to our servers is also out of the question, as our servers are not accessible from the internet and management does not want to do replication (more on that later).

    One possible idea we had is to do some form of replication of the tables we need into a second, smaller database on the same server and then log-ship that smaller database, but management is concerned because the client has broken replication on us in the past (though that was between two servers on their internal network) and would like to stay away from it if possible.

    Any recommendations would be greatly appreciated. If some form of replication is the only solution, I am not against replication; I just need compelling arguments to convince management to do it. This is to be set up at multiple sites running either SQL 2005 or SQL 2008; we have both versions on our end to restore the data to, so that is not an issue. Thank you.
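
    Not from the original question, but purely as an illustration of moving only a handful of tables without shipping the whole log, a scheduled export/import with bcp is one low-tech possibility (server, database, and table names are illustrative):

        bcp ClientDB.dbo.Orders out D:\Export\Orders.dat -n -S CLIENTSQL -T
        rem ...copy the .dat files to the office on a schedule, then load them there:
        bcp OfficeDB.dbo.Orders in D:\Import\Orders.dat -n -S OFFICESQL -T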

    Read the article

  • then an error occurred during the login process - Connection Error 233

    - by scott brunner
    We have SQL Server 2008 installed on 64-bit Windows Server 2003. When we try to connect to the local SQL Server using SQL Server Management Studio at the console, we get the error: "A connection was successfully established with the server, but then an error occurred during the login process. provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe." When we try TCP from the same local SSMS to the local server, we get the same error, but instead of the pipe message it's something like "connection forcibly closed". Now, here is the strange part: we CAN connect to this SQL Server from any other machine on the network using SSMS, AND we CAN'T connect to ANY SQL Server from the problem server. So it seems the SQL Server instance is fine and accepting remote connections; however, SSMS on that machine will not connect to any SQL Server, even remotely. When we try an ADO.NET connection from C# remotely we can connect, but running that same code on the console of the trouble server gives the same errors. How can this be solved?

    Read the article

  • How to add "most recent emails from this user" to Gmail inbox as a sidebar

    - by Scott B
    I use and love Gmail. However, since I use email for customer support, I'm always doing a cross-reference lookup via the search feature to see my past conversations with the person whose email I'm reading. I'd love to have a right-sidebar widget that shows me, for any email I choose to read, the list of previous conversations/emails with that person. Is this possible? I'm using Chrome. Ideally, this sidebar would bump or replace the contextual ads that now display over there.

    Read the article

  • Postgres pgpass windows - not working

    - by Scott
    DB: Postgres 9.0. Client: Windows 7. Server: Windows 2008, 64-bit.

    I'm trying to connect remotely to a Postgres instance in order to run pg_dump to my local machine. Everything works from my client machine, except that I have to provide a password at the password prompt, and I'd ultimately like to batch this with a script. I've followed the instructions here: http://www.postgresql.org/docs/current/static/libpq-pgpass.html but it's not working.

    To recap, I've created a file on the client (and tried the server as well): C:/Users/postgres/AppData/postgresql/pgpass.conf, where postgres is the db user. The file has one line with the following data:

        *:5432:*postgres:[mypassword]

    (I've also tried explicit IP/dbname values, all asterisks, every combination in between, and replacing each '*' with [localhost|myip] and [mydatabasename] respectively.) From my client machine, I connect using:

        pg_dump -h [myip] -U postgres -w [mydbname] [mylocaldumpfile]

    I'm presuming that I need to provide the '-w' switch in order to skip the password prompt, at which point it should look in the AppData directory on the server. It just comes back with "connection to database failed: fe_sendauth: no password supplied." Any insights are appreciated.

    As a hack workaround, if there were a way I could tell the Windows batch file on my client machine to inject the password at the Postgres prompt, that would work as well. Thanks.
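
    For reference (not from the original post): the documented pgpass format is one rule per line in the form hostname:port:database:username:password, and on Windows libpq looks for the file at %APPDATA%\postgresql\pgpass.conf (or wherever the PGPASSFILE environment variable points). A line matching the pg_dump command above would look like this (the password is illustrative):

        *:5432:*:postgres:mypassword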

    Read the article

  • VMWare Workstation 8 Disk I/O & Hard Faults

    - by Scott
    I have VMware Workstation 8 installed on a host machine with the following specs:

      - Intel i5 2500K CPU
      - 16GB DDR3 1600 RAM
      - 1TB Western Digital Caviar Black HD

    I have two Windows 7 virtual machines configured (currently running one at a time, but I will be operating both at once when my 32GB RAM kit arrives in a couple of days). Each one is configured with 8GB of RAM and no tweaks or performance customizations; all of the VMware settings are the defaults.

    When I boot into these machines and run various programs (Visual Studio, Outlook, etc.), I can hear the disk thrashing quite a bit, and checking Resource Monitor I can see that I'm getting anywhere between 300-800 hard faults per second. From the host machine, it shows they're coming from the VMware image. If I go to the virtual machine, whatever app I'm currently loading is the image that's causing the hard faults.

    As I understand it, hard faults are (simply) when an address in memory has been swapped out to the page file and has to be read from the page file instead of from memory. I don't understand why this is happening, though. With 8GB of RAM on the guest machine and 6.5GB available, what could be causing this? I know Windows 7 supposedly improved on page file management over XP, but this seems excessive for this kind of slowdown, disk thrashing and high hard fault count when I have that much free RAM. Is there anything I can do to improve the performance in my guest machines? On the host machine, I can open/run any applications at all and hard faults stay around 0 with low disk I/O.
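
    Not from the original post, but a few commonly cited Workstation tuning options target exactly this kind of host-side paging of guest memory; they go in each VM's .vmx file (worth verifying against VMware's documentation for Workstation 8 before relying on them), and the related host preference is Edit > Preferences > Memory > "Fit all virtual machine memory into reserved host RAM":

        MemTrimRate = "0"
        sched.mem.pshare.enable = "FALSE"
        mainMem.useNamedFile = "FALSE"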

    Read the article

  • load balancing two web servers, each on a different ISP?

    - by Scott
    I have two ISPs that provide me hosting via Apache / PHP / MySQL. I am running Drupal on them. On occasion the MySQL server will go away (crash), so I was hoping to find a reasonable way to fail over: if MySQL on server A is down, all traffic is sent to server B. I know traditionally this is handled in DNS, where a second alternate IP is given if there is a problem, or something similar. But I do not have control over the ISPs, other than being able to run PHP, Perl and the usual Apache stuff. Also, I have static IPs on each ISP, and I can create DNS entries (A/CNAME/TXT). So, I was hoping there might be a way for me to have a script that checks whether Drupal has a problem and, if so, somehow alters DNS, or something else? Or any other ideas? (Other than spending lots more money on a better ISP.)

    Read the article

  • SugarCRM CE Won't Install on Ubuntu 10.10

    - by Trenton Scott
    I have a fresh copy of Ubuntu 10.10 Server with a working LAMP installation. I downloaded SugarCRM and browsed to its directory to open the installer (via Firefox). The installer appears fine, I accept the license agreement, and it proceeds to check file permissions. It advises that several directories need looser permissions (chmod 766), and I adjust them accordingly. After making the changes, I click "recheck" and the page just reloads as blank (white). There are no errors visible, nothing in the server logs (Apache/PHP), and installation cannot continue. I'm able to get back to the installation tool by readjusting permissions back to my defaults (0755 for directories, 0644 for files). All files/folders are owned by root and the www-data group. Any idea what's wrong?
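
    Not from the original question, but a hedged first step that often surfaces whatever PHP error is behind a blank white page; the php.ini and log paths are the Ubuntu 10.10 defaults and may differ:

        sudo sed -i 's/^display_errors = Off/display_errors = On/' /etc/php5/apache2/php.ini
        sudo service apache2 restart
        sudo tail -f /var/log/apache2/error.log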

    Read the article

  • How do I launch a process as a specific user at startup on OS X?

    - by Scott Bonds
    I would like to run a script as a particular user on startup (not on login). I thought a launchd LaunchDaemon would do it, but 'man launchd' says: "If you wish your service to run as a certain user, in that user's environment, making it a launchd agent is the ONLY supported means of accomplishing this on Mac OS X. In other words, it is not sufficient to perform a setuid(2) to become a user in the truest sense on Mac OS X." They aren't kidding: when I try to run my script as a LaunchDaemon it doesn't work. In particular, I'm trying to automate some keychain operations using the 'security' command, and it won't let me change the default keychain when I run the script through a LaunchDaemon, though the script works fine when run using sudo from a shell. A LaunchAgent won't work, because the goal is for the process to run without a user logging in, and LaunchAgents only run when someone logs in. I looked at cron and the @reboot directive, and that looks promising, but I read that cron is deprecated on OS X.
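
    Not from the original question, but for what it's worth, a LaunchDaemon plist does accept a UserName key, which makes launchd run the job as that user (though without the user's full login environment, which is the limitation the man page is describing and may be exactly what breaks the keychain calls). A minimal sketch with an illustrative label, user, and script path:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Label</key>            <string>com.example.keychain-setup</string>
            <key>ProgramArguments</key> <array><string>/usr/local/bin/keychain-setup.sh</string></array>
            <key>UserName</key>         <string>scott</string>
            <key>RunAtLoad</key>        <true/>
        </dict>
        </plist>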

    Read the article

  • Does AWS resolve same-datacenter hostnames to 10.* addresses for different customer accounts?

    - by Scott Ritchie
    If I bring up two Amazon EC2 instances and run nslookup on one for the other's hostname, Amazon will return a 10.* address. This is routable within Amazon and works just fine. But does this work between different accounts? If I use one of my nodes to nslookup a hostname belonging to another customer (but still in the same datacenter), will it resolve to a 10.* address, or will it give the standard public IP?

    Read the article

  • Windows (7 & Vista) laptop monitor does not come back after closing lid

    - by Scott Vercuski
    I'm experiencing an issue with Windows Vista (and now Windows 7) on my laptop. When I close the laptop lid the monitor goes blank, but it will not come back on after I re-open the laptop. The screen stays blank and nothing I do will bring it back. The laptop is an HP DV9000 series. Has anyone else run into this issue? One of the solutions I've seen online is to go into Device Manager and replace the lid driver with something nonsensical (the website suggested pointing the driver at the sound recorder). This does solve the problem by disabling the lid, but it doesn't really resolve the root issue. I'm asking if anyone has a method I can use to debug what's going on. How do I tell whether it is an operating system issue or a malfunction within the lid itself? I'd actually like the lid to function as it's meant to. Thank you!

    Read the article

  • Windows Proxy Server advice

    - by Scott
    I have a webserver that currently has about 10 IP addresses. I have various clients that require a proxy server to route their internal traffic through. The load is not that great, so I'd like to have this ONE server act as a proxy server for 10 different clients, each client having their own unique IP on the server. The hardware is already set up, but I'm wondering what software solutions you recommend. I've looked at WinGate, the Squid proxy, etc., but am pretty green with this. Maybe there's even a way to have Windows do this natively? I'm running Windows Server 2008, 32-bit.

    Read the article

  • Windows 7 Permissions

    - by Scott
    I have an odd problem with a Windows 7 laptop. It's currently a single-user installation, a fresh install on an Asus laptop. I have an svn repo checked out on my second partition, and a directory in it that I have added to the svn:ignore list because it is for tmp files. This specific directory shows as read-only, and I need write access on it for my project to function properly. If I right-click and clear the read-only flag on the directory, applying it recursively, it is immediately reverted back to a read-only directory. I have also modified Apache's service to run as myself, to no avail. I'm stumped... Any ideas?
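
    Not from the original post, but two command-line checks that may help separate the read-only attribute (which the folder checkbox reflects, and which Windows largely ignores for write access to folders) from the actual NTFS permissions; the paths and account are illustrative:

        attrib -r "D:\repo\tmp\*" /s /d
        icacls "D:\repo\tmp" /grant "%USERNAME%":(OI)(CI)M /T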

    Read the article

  • how to change the default open-with program to a program on the second disk

    - by Scott????
    I have a 250GB HDD for my system and a 60GB SSD on a SATA port. I installed most of my applications on the SSD. There's a strange thing, though: I cannot change the default open-with program to a program which is on the SSD. I thought it might be caused by permissions, so I gave my user 'full control' on the Security tab in the disk properties, but changing permissions did not work. After I choose an application (I've tried Notepad++, Sublime, 7-Zip, etc.), nothing is added to the window below. Also, if I install 7-Zip on this machine, the right-click menu items cannot be added.

    Read the article

  • mod_rewrite for specific domains in a mappings file

    - by scott
    I have a bunch of domains that I want to send to one domain, but to various parts of that domain.

        # this is what I currently have
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^.*\.?foo\.com$ [NC]
        RewriteRule ^.*$ ${domainmappings:www.foo.com} [L,R=301]

        # rewrite map file
        www.foo.com  www.domain.com/domain/foo.com.php
        www.bar.com  www.domain.com/domain/bar.com.php
        www.baz.com  www.domain.com/other/baz.php.foo

    The problem is that I don't want each domain to have to be part of the RewriteCond. I tried

        RewriteCond %{HTTP_HOST} ^www\.(.*)
        RewriteRule (.*) http://%1/$1 [R=301,L]

    but that will do it for EVERY domain. I only want the domains that are in the mappings file to redirect, and then continue on to the other rewrites if the host doesn't match any domain in the mappings file.
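
    Not from the original question, but a hedged sketch of the usual RewriteMap lookup idiom: use the requested host itself as the map key, and only redirect when the lookup returns something, so hosts that are not in the map fall through to the later rewrites. The map name and file path are illustrative, and RewriteMap must be declared in the server or virtual-host config, not in .htaccess:

        RewriteEngine On
        RewriteMap domainmappings txt:/etc/apache2/domainmappings.txt
        RewriteCond ${domainmappings:%{HTTP_HOST}} ^(.+)$
        RewriteRule ^ http://%1 [L,R=301]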

    Read the article

  • What is the best way to back up a dedicated web server? (Amanda versus Rsync)

    - by Scott
    Hello everyone, I am trying to establish valid backups for my web server, a Linux box running CentOS. I have asked around, and "rsync" was suggested by some of the Server Fault community. However, my coworker says that this only copies over the physical files and isn't really a usable "snapshot." He suggested using "amanda", which takes full server snapshots and is more like what I am accustomed to. I know at my company we have virtual machines that we take snapshots of, and we can restore everything back to just as it was with little effort and little downtime. Is this possible with rsync? Or would I need to create a new server, migrate the files back, and redo various configurations? I would prefer being able to just reset everything to a point in time. Forgive my ignorance; backups are something I have never really had to worry about before.
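
    Not from the original question, but one common way to get point-in-time restore points out of plain rsync is hard-linked, date-stamped snapshot directories: each run looks like a full copy, yet only changed files consume new space. A minimal sketch with illustrative paths:

        rsync -a --delete --link-dest=/backups/latest /var/www/ /backups/$(date +%F)/
        ln -nsf /backups/$(date +%F) /backups/latest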

    Read the article
