Search Results

Search found 15712 results on 629 pages for 'location href'.

  • Forcing a particular SSL protocol for an nginx proxying server

    - by vitch
    I am developing an application against a remote HTTPS web service. While developing, I need to proxy requests from my local development server (running nginx on Ubuntu) to the remote HTTPS web server. Here is the relevant nginx config:

        server {
            server_name project.dev;
            listen 443;
            ssl on;
            ssl_certificate /etc/nginx/ssl/server.crt;
            ssl_certificate_key /etc/nginx/ssl/server.key;
            location / {
                proxy_pass https://remote.server.com;
                proxy_set_header Host remote.server.com;
                proxy_redirect off;
            }
        }

    The problem is that the remote HTTPS server can only accept connections over SSLv3, as can be seen from the following openssl calls.

    Not working:

        $ openssl s_client -connect remote.server.com:443
        CONNECTED(00000003)
        139849073899168:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:177:
        ---
        no peer certificate available
        ---
        No client certificate CA names sent
        ---
        SSL handshake has read 0 bytes and written 226 bytes
        ---
        New, (NONE), Cipher is (NONE)
        Secure Renegotiation IS NOT supported
        Compression: NONE
        Expansion: NONE
        ---

    Working:

        $ openssl s_client -connect remote.server.com:443 -ssl3
        CONNECTED(00000003)
        <snip>
        ---
        SSL handshake has read 1562 bytes and written 359 bytes
        ---
        New, TLSv1/SSLv3, Cipher is RC4-SHA
        Server public key is 1024 bit
        Secure Renegotiation IS NOT supported
        Compression: NONE
        Expansion: NONE
        SSL-Session:
            Protocol : SSLv3
            Cipher   : RC4-SHA
        <snip>

    With the current setup, my nginx proxy gives a 502 Bad Gateway when I connect to it in a browser. Enabling debug in the error log, I can see the message:

        [info] 1451#0: *16 peer closed connection in SSL handshake while SSL handshaking to upstream

    I tried adding ssl_protocols SSLv3; to the nginx configuration, but that didn't help. Does anyone know how I can set this up to work correctly?
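    One avenue worth checking (an assumption on my part, not from the original thread): ssl_protocols only governs the client-facing side of the server block, while the upstream handshake is controlled by the proxy_ssl_* directives. On nginx builds new enough to have it (1.5.6 and later), restricting the upstream protocol would look roughly like this:

        location / {
            proxy_pass https://remote.server.com;
            proxy_set_header Host remote.server.com;
            proxy_redirect off;
            # force the upstream TLS handshake to SSLv3 only
            proxy_ssl_protocols SSLv3;
        }

    On older builds without proxy_ssl_protocols, a common workaround is to terminate the SSLv3 connection in a separate tool such as stunnel and have nginx proxy_pass to it over plain HTTP.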

  • Photoshop CS6 Corrupted File recovery

    - by Ben Franchuk
    Last night I was working on a client application mock-up in Photoshop, but wanted to take a break from my work, so I saved the .PSD file on my internal HDD and put my computer into stand-by mode once the file had finished saving. Unfortunately, my computer crashed while it was entering stand-by and shut itself down (Photoshop was still open). I did not boot it again to make sure all my files were OK, because they had already been saved, but today once I opened up the file again it was extremely corrupted and also completely un-editable (screenshot below). So what I'm asking is: is there any way to recover my work, or at least some of it? I have put in a good few days' work on this project and would hate to have to restart it.

    The size of the file is 3070 KB, even though it reads as 712 KB in Photoshop. I don't know whether these file sizes are larger or smaller than the original non-corrupted file's size, but considering all the layers in the file I suspect it was larger before it corrupted. I'm using Windows XP Professional 32-bit SP3. Both my OS and said .PSD file are located on the same internal HDD (74.4 GB). I do have an external HDD (1.5 TB), but I primarily only use it for movies, music and TV shows. I don't know whether it was plugged in at the time I last edited the document, though, if it means anything. I have tried many image and PSD recovery programs, but none have returned any results that may help recover my work.

    Edit: I tried using a photo recovery program (Odboso Photorecovery) that actually seems to recover the corrupted file in question, judging by the size of the file, but I cannot recover it because of the licence fee. Knowing that the file is still likely on my HDD, where might it be located?

  • CUPS causes printer to click and doesn't print

    - by Pez Cuckow
    I'm suffering a strange problem with my Canon iP4850 when trying to use CUPS on a Raspberry Pi (this is not RPi-specific, please do not vote to move it). When I plug the printer into my laptop (OS X) or my desktop (Windows 7), it identifies as an iP4800 and prints perfectly. So I plug it into the Pi (running Debian), set it up in CUPS, enable sharing, and can now see the iP4800 series shared on the network. However, if I print to it (using AirPrint etc.), the file gets to CUPS safely (it shows in the queue), but when it tries to print, the printer clicks (like a loud thunk) 3/4 times and then gives in, with a double amber flashing light. In CUPS it shows as job completed. Do you know why using the Pi and CUPS would cause what appears to be a hardware fault, and what I can do to fix the problem or to provide further debug info? Thanks for your time!

        Description: Canon iP4800 series
        Location: Lounge
        Driver: Canon PIXMA iP4800 - CUPS+Gutenprint v5.2.9 (color, 2-sided printing)
        Connection: usb://Canon/iP4800%20series?serial=2239B2

    Note: I've tried deleting and re-adding the printer to the laptop, desktop and Pi, and the results are always the same.

    Log for plugging in the printer and printing (attempting to) something until the printer turned off again:

        pi@pezpi /var/log $ dmesg
        [ 7284.176336] usb 1-1.2: new high speed USB device number 8 using dwc_otg
        [ 7284.279703] usb 1-1.2: New USB device found, idVendor=04a9, idProduct=10d5
        [ 7284.279750] usb 1-1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
        [ 7284.279771] usb 1-1.2: Product: iP4800 series
        [ 7284.279786] usb 1-1.2: Manufacturer: Canon
        [ 7284.279800] usb 1-1.2: SerialNumber: 2239B2

    Setting CUPS to verbose (change loglevel in cupsd.conf to debug or debug2):

        pi@pezpi /var/log $ sudo vim /etc/cups/cupsd.conf
        pi@pezpi /var/log $ sudo /etc/init.d/cups restart
        [ ok ] Restarting Common Unix Printing System: cupsd.
        pi@pezpi /var/log $

    The log from /var/log/cups/error_log is at http://pastebin.com/7VZMRMrG (too large to post here). The log contains, in order (I deleted the log and then did the below): restarting the cups server, attempting to print a test page x2, printing from 192.168.1.90 via AirPrint, printing from 192.168.1.90 via network print, turning the printer off and on again.

  • Help me please with this error

    - by Brandon
    I set up IIS. I moved my folder with all the files to the IIS directory. Now when I go to http://localhost/thefolder I get:

        An attempt was made to load a program with an incorrect format. (Exception from HRESULT: 0x8007000B)

        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

        Exception Details: System.BadImageFormatException: An attempt was made to load a program with an incorrect format. (Exception from HRESULT: 0x8007000B)

        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

        Stack Trace:
        [BadImageFormatException: An attempt was made to load a program with an incorrect format. (Exception from HRESULT: 0x8007000B)]
          Luxand.FSDK.ActivateLibrary(String LicenseKey) +0
          FaceRecognition._Default.Page_Load(Object sender, EventArgs e) in D:\Project Details\Layne Projects\DotNet Project\FaceRecognition\FaceRecognition\Default.aspx.cs:60
          System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) +25
          System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) +42
          System.Web.UI.Control.OnLoad(EventArgs e) +132
          System.Web.UI.Control.LoadRecursive() +66
          System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +2428
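    A common cause of BadImageFormatException / 0x8007000B (an assumption here, since the thread doesn't confirm it) is a 32-bit native DLL, such as the Luxand FSDK library in the stack trace, being loaded into a 64-bit IIS worker process. If that's the case, allowing the application pool to run 32-bit would look like:

        %windir%\system32\inetsrv\appcmd set apppool "DefaultAppPool" /enable32BitAppOnWin64:true

    (Substitute the real application pool name; the same switch is exposed in IIS Manager as "Enable 32-Bit Applications" under the pool's advanced settings.)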

  • Cannot access SQL Server database

    - by btrey
    I'm trying to convert an Access database to use a SQL Server backend. I've upsized the database and everything works on the server, but I'm unable to access it remotely.

    I'm running SQL Server Express 2005 on Windows Server 2003. The server is not configured as a domain controller, nor connected to a domain. The computers I'm trying to access the server from are part of a domain, but there are no local domain controllers. I'm at a remote location, and the computers are configured and connected to the domain at the home office, then shipped to us. We normally log in with cached credentials and VPN into the home office when we need to access the domain.

    I can use Remote Desktop Connection to access the 2003 server which is running SQL Server. If I log into the server with my username, I can bring up the database, access it via the trusted connection, and the database works. If I try to run the database locally, however, I get the Server Login dialog box. I cannot use a trusted connection, because my local login is to the home office domain and is not recognized by the SQL Server machine.

    If I try to use the username/password that is local to the SQL Server, I get a login failed error. I've tried entering the username as "username", "workgroup/username" (where "workgroup" is the name of the workgroup on the SQL Server), "sqlservername/username" and "username@1.2.3.4", where "1.2.3.4" is the IP of the SQL Server. In all cases, I get a login failed error.

    As I said, I can log in to the server via Remote Desktop Connection with the same username and password and use the database, so permissions for the username appear to be correct for both a remote connection and for database access. Not sure where to go from here, and any assistance would be appreciated.
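    A couple of things worth ruling out (assumptions on my part, not confirmed in the thread): SQL Server Express 2005 ships with remote TCP/IP connections disabled and, unless chosen during setup, with SQL (mixed-mode) authentication off; Windows logins from an untrusted domain cannot use a trusted connection at all. After enabling TCP/IP and SQL authentication (via SQL Server Configuration Manager / Surface Area Configuration), a quick test from one of the remote machines would look like:

        sqlcmd -S 1.2.3.4\SQLEXPRESS -U username -P password -Q "SELECT @@SERVERNAME"

    Here SQLEXPRESS is the default instance name for an Express install; with SQL authentication the username is given bare, with no workgroup or machine prefix.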

  • Kickstart CentOS 6 prompting for TCP/IP with network set to DHCP

    - by Andy Shinn
    I am trying to stop my kickstart CentOS install prompting me for TCP/IP information. After I click through this prompt (keeping IPv4 and IPv6 at their defaults) the installation continues and completes just fine. Below is my kickstart file:

        # Andy's super awesome VM kickstart file
        install
        url --url=http://mirrors.kernel.org/centos/6/os/x86_64
        lang en_US.UTF-8
        keyboard us
        text
        %include /tmp/network.ks
        rootpw --iscrypted $6$RA8DyrNTsVJkGIgY$ohZ62HHiOjNnn1yDMZlIu3lQ63D3plGPcbVZtPKE8Oq6Z.IGUgN.kNLkxs/ZymZuluRDWsW2eey5zLOl2G3mp.
        firewall --service=ssh
        authconfig --enableshadow --passalgo=sha512
        selinux --disabled
        timezone America/Los_Angeles
        bootloader --location=mbr --driveorder=vda --append="crashkernel=auto rhgb quiet"

        # The following is the partition information you requested
        # Note that any partitions you deleted are not expressed
        # here so unless you clear all partitions first, this is
        # not guaranteed to work
        zerombr
        clearpart --all --drives=vda --initlabel
        part /boot --fstype=ext4 --size=500
        part pv.253002 --grow --size=1
        volgroup vg1 --pesize=4096 pv.253002
        logvol / --fstype=ext4 --name=lv_root --vgname=vg1 --grow --size=1024 --maxsize=51200
        logvol swap --name=lv_swap --vgname=vg1 --grow --size=4032 --maxsize=4032

        repo --name="CentOS" --baseurl=http://mirrors.kernel.org/centos/6/os/x86_64 --cost=100
        repo --name="Puppet Labs Products" --baseurl=http://yum.puppetlabs.com/el/6/products/x86_64
        repo --name="Puppet Labs Dependencies" --baseurl=http://yum.puppetlabs.com/el/6/dependencies/x86_64
        repo --name="EyeFi" --baseurl=http://flexo.eye.fi/6/eye-fi-api

        %packages
        @core
        @server-policy
        puppet
        facter
        %end

        %pre --erroronfail
        #!/bin/bash
        for x in `cat /proc/cmdline`; do
          case $x in
            SERVERNAME*)
              eval $x
              echo "network --onboot yes --device eth0 --bootproto dhcp --hostname ${SERVERNAME}.eye.fi" > /tmp/network.ks
              ;;
          esac;
        done
        %end

        %post
        puppet agent --waitforcert 10 --onetime --no-daemon --pluginsync --server puppet.eye.fi
        %end

        reboot

    My kernel arguments are in the following virt-install command that I use to start the install:

        virt-install -n zabbix -r 2048 --vcpus=2 \
          -l http://mirrors.kernel.org/centos/6/os/x86_64 \
          --disk /dev/vg_inf1/zabbix --network bridge=br85 \
          --initrd-inject=/home/ashinn/vm_kickstart \
          --extra-args "ks=file:/vm_kickstart SERVERNAME=zabbix" \
          --autostart

    During the install, I can pull up a console on the second terminal and verify the contents of /tmp/network.ks are:

        network --onboot=yes --bootproto=dhcp --ipv6=auto --hostname=jenkins2.mydomain.com

    Why might Anaconda be prompting for the TCP/IP settings when they are already set to DHCP?
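    A guess at the cause (not confirmed in the thread): the install tree is fetched over the network before the kickstart's %pre/%include machinery has produced a network line, so Anaconda has to ask how to bring the NIC up. On CentOS 6 that prompt can usually be suppressed from the kernel command line instead, e.g.:

        virt-install ... --extra-args "ks=file:/vm_kickstart SERVERNAME=zabbix ksdevice=eth0 ip=dhcp"

    ksdevice= picks the interface without asking, and ip=dhcp tells the loader stage to configure it via DHCP before the kickstart file is even parsed.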

  • Using GUI ftp on Win7 and Vista without additional software

    - by Stephen Jones
    Goal: provide a 'no-software' method for 'less technical' users to access a password-protected FTP location from Windows 7 and Vista (the existing approach for Windows XP works). 'No software' means without installing additional software (e.g. FileZilla, WinSCP); the solution is supplied to external non-technical users.

    Windows XP (works): using Windows Explorer, XP supports non-technical FTP access by pasting a URL of the following form into the address bar:

        ftp://username:password@server

    The remote FTP site's files / directory structure becomes available and can be copied to / from easily (in the style of local file copy / paste) by a 'less technical' user.

    Windows 7 / Vista (doesn't work): pasting the same URL into Windows Explorer on Windows 7 or Vista causes an error:

        An error occurred opening that folder on the FTP server. Make sure you have permission to access that folder.
        Details: The connection with the server was reset.

    Notes:
    a) The same username/password/server typed from the (DOS) command line achieves access to the server, but this is a more 'technical' solution than desired. I am looking for a Windows XP equivalent solution.
    b) Under 'Control Panel' / 'Internet options' / 'Advanced' tab, the boxes for 'Enable FTP folder view' and 'Use Passive FTP' are ticked (enabled).
    c) Adding an inbound firewall rule for local port 20 (TCP) was attempted, with no difference in results (i.e. failure).

  • Download JDK onto a remote server

    - by itsadok
    I want to get the latest JDK onto a server in a remote location. Downloading the JDK from Sun's website requires jumping through all kinds of hoops until you actually get the file. I'm not sure exactly whether they use cookies or my IP address, but simply copying the file URL and trying wget on the server doesn't work. Googling for mirrors of the JDK, I could only find old versions.

    Right now I'm left with the option of downloading it onto my computer, then uploading it to the server. This feels slow and stupid. Anyone got a better idea?

    EDIT: Thanks for all the replies. Just to clarify: as I'm writing this, I'm rsyncing the 78 MB file to my server. It should be done in about an hour, so it's not such a big deal. However, since this is not the first time I'm doing this, I was hoping for a better solution for next time.

    Solution: What I ended up doing was:

        sudo aptitude install lynx-cur
        www-browser http://java.sun.com/javase/downloads/

    From there it's mostly using the arrow and enter keys, and answering "Yes" to a lot of lynx security questions (about cookies and certificates). Thanks to resonator.
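    An aside, not part of the original answer: for the later Oracle-hosted JDK downloads, the license click-through is enforced with a cookie that can be supplied by hand, along the lines of:

        wget --no-cookies \
          --header "Cookie: oraclelicense=accept-securebackup-cookie" \
          http://download.oracle.com/otn-pub/java/jdk/<version>/jdk-<version>-linux-x64.tar.gz

    The cookie name and URL layout have changed over the years, so treat this as a pattern to adapt (with <version> filled in from the download page) rather than a working command.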

  • Script launching 3 copies of rsync

    - by organicveggie
    I have a simple script that uses rsync to copy a Postgres database to a backup location for use with point-in-time recovery. The script is run every 2 hours via a cron job for the postgres user. For some strange reason, I can see three copies of rsync running in the process list. Any ideas why this might be the case?

    Here's the cron entry:

        # crontab -u postgres -l
        PATH=/bin:/usr/bin:/usr/local/bin
        0 */2 * * * /var/lib/pgsql/9.0/pitr_backup.sh

    And here's the ps list, which shows two copies of rsync running and one sleeping:

        # ps ax |grep rsync
        9102 ?  R  2:06 rsync -avW /var/lib/pgsql/9.0/data/ /var/lib/pgsql/9.0/backups/pitr_archives/20110629100001/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log
        9103 ?  S  0:00 rsync -avW /var/lib/pgsql/9.0/data/ /var/lib/pgsql/9.0/backups/pitr_archives/20110629100001/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log
        9104 ?  R  2:51 rsync -avW /var/lib/pgsql/9.0/data/ /var/lib/pgsql/9.0/backups/pitr_archives/20110629100001/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log

    And here's the uber-simple script that seems to be the cause of the problem:

        #!/bin/sh
        LOG="/var/log/pgsql-pitr-backup.log"
        base_backup_dir="/var/lib/pgsql/9.0/backups"
        wal_archive_dir="$base_backup_dir/wal_archives"
        pitr_archive_dir="$base_backup_dir/pitr_archives"
        timestamp=`date +%Y%m%d%H%M%S`
        backup_dir="$pitr_archive_dir/$timestamp"
        mkdir -p $backup_dir
        echo `date` >> $LOG
        /usr/bin/psql -U postgres -c "SELECT pg_start_backup('$backup_dir');"
        rsync -avW /var/lib/pgsql/9.0/data/ $backup_dir/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log
        /usr/bin/psql -U postgres -c "SELECT pg_stop_backup();"
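    Worth noting (general rsync behavior, not stated in the thread): a single local rsync invocation normally forks into three processes, a sender, a receiver, and a generator forked from the receiver, so three rsync entries with consecutive PIDs is expected for one copy. A process-tree listing makes the parent/child relationship visible:

        ps -o pid,ppid,stat,etime,cmd -C rsync

    If the PPIDs chain into each other (and back to a single cron-launched shell), it's one rsync doing its job, not three overlapping backups.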

  • Postfix unable to find local server

    - by Andrew
    I'm working with Postfix on Fedora 9 and I'm attempting to make some changes to a system set up by my predecessor. Currently the Postfix server on [mail.ourdomain.com] is set up to forward mail sent to two addresses to another server for processing. The other server [www01.ourdomain.com] receives the email and sends it to a PHP script to be processed. Then that PHP script generates and sends a response to the user who sent the original email.

    We're adding more web servers to the system, and as a result we've decided to move these processing scripts to our admin server [admin.ourdomain.com] to make them easier to keep track of. I've already set up and tested the processing scripts on [admin.ourdomain.com], and on the mail server doing the forwarding [mail.ourdomain.com] I added [admin.ourdomain.com] to /etc/hosts and also added another entry, alongside the one for [www01.ourdomain.com], to /etc/postfix/transport for [admin.ourdomain.com]. I also restarted Postfix.

    I've tested the communication from [mail.ourdomain.com] to [admin.ourdomain.com] using telnet and the [admin.ourdomain.com] domain, and everything runs correctly. But as soon as I change the forward address and attempt to send an email to the mail server, I get a bounce message stating "Host or domain name not found. Name service error for name=admin.ourdomain.com type=A: Host not found". If I change the forward settings back to [www01.ourdomain.com] then everything works fine.

    Is there some setting I'm missing in Postfix? The server itself and telnet work fine; it just seems to be Postfix that's not able to discover the location of [admin.ourdomain.com].
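    One likely explanation (an assumption, though the error message points this way): by default Postfix's SMTP client resolves destinations with DNS only and ignores /etc/hosts, so the new hosts entry never gets consulted. Letting it fall back to the system resolver would look like this in main.cf:

        smtp_host_lookup = dns, native

    followed by a `postfix reload`. The alternative is simply to add an A record for admin.ourdomain.com to the DNS server that the mail server actually uses.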

  • How do I make Nginx redirect all requests for files which do not exist to a single php file?

    - by Richard
    I have the following nginx vhost config:

        server {
            listen 80 default_server;
            access_log /path/to/site/dir/logs/access.log;
            error_log /path/to/site/dir/logs/error.log;
            root /path/to/site/dir/webroot;
            index index.php index.html;
            try_files $uri /index.php;
            location ~ \.php$ {
                if (!-f $request_filename) {
                    return 404;
                }
                fastcgi_pass localhost:9000;
                fastcgi_param SCRIPT_FILENAME /path/to/site/dir/webroot$fastcgi_script_name;
                include /path/to/nginx/conf/fastcgi_params;
            }
        }

    I want to redirect all requests that don't match existing files to index.php. This works fine for most URIs at the moment, for example:

        example.com/asd
        example.com/asd/123/1.txt

    Neither asd nor asd/123/1.txt exists, so they get redirected to index.php, and that works fine. However, if I put in the URL example.com/asd.php, it tries to look for asd.php, and when it can't find it, it returns 404 instead of sending the request to index.php. Is there a way to get asd.php also sent to index.php if asd.php doesn't exist?
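    A sketch of one way to do it (assuming the goal is exactly "missing .php files also fall through to index.php"): replace the if/return inside the PHP location with its own try_files, so nginx checks for the file before handing it to FastCGI:

        location ~ \.php$ {
            # fall back to /index.php when the requested .php file is absent
            try_files $uri /index.php;
            fastcgi_pass localhost:9000;
            fastcgi_param SCRIPT_FILENAME /path/to/site/dir/webroot$fastcgi_script_name;
            include /path/to/nginx/conf/fastcgi_params;
        }

    try_files inside the location also avoids the if (!-f) test, which the nginx documentation discourages in rewrite contexts ("if is evil").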

  • What are some techniques to monitor multiple instances of a piece of software?

    - by Geo Ego
    It was recommended that I ask this question here by a member of StackOverflow. I have a piece of self-serve kiosk software that will be running at multiple sites. I'd like to monitor their status remotely. The kiosk application itself is pretty much finished. I am now in the process of creating a piece of software that will monitor all of the kiosks from a central location so that the customer can view particular details remotely (for instance, how many bills are in the acceptor's cash cartridge, what customer is currently logged in, etc.).

    Because I am in such an early stage of development, my options are quite open. I understand that I'm not giving very many qualifications, but I'd like to try to get a good variety of potential solutions. Some details:

    - Kiosk software is a VB6 app running on Windows Embedded
    - Monitoring software will be run on a modern desktop version of Windows (either XP, Vista, or 7)
    - Database is SQL Server 2008

    My initial idea was to develop a .NET app that would simply report the last database transaction for each kiosk at a set interval (say every second or so), but I'd really like for the kiosk software to report its status in real time. I'm not exactly sure where to begin in terms of what modifications may need to be made to the kiosk software, and what the monitoring software will require. Links to articles on these topics would be most welcome.

  • What are the most important aspects to consider when choosing a SAN for a small office virtualization setup?

    - by Prof. Moriarty
    I am in the process of consolidating 6 physical servers running 6 different operating system flavors (don't ask) into two identical physical servers (Dell PowerEdge 2900), using the free VMware ESXi 4.0 platform. We will install an iSCSI SAN over a 1GbE network and store all virtual machine images on the SAN. Each physical server would run 3 VMs, and in the case of a physical server failure, we would manually switch over the other 3. These are all internal servers; while important, they can tolerate some amount of downtime (say <1h) to keep the cost and complexity associated with HA down.

    I now need to choose the SAN to be used for the setup, on a low budget. We currently have about 2 TB of data, but of course I want to be able to grow, do backups of VM snapshots on other drives and remove them to a different location, etc. So what I would like to know is:

    - Which are the must-have features for this setup, without which using a SAN is not worth it?
    - We are mostly a Dell shop, so I have been looking at the EqualLogic PS4000E High Availability model. Any opinions, anecdotes, bad experiences with this model? (This is one of the few models which could accommodate our existing disks from the physical servers.)
    - If you can recommend something that is not Dell but has better value, I would most definitely consider it. Caveats, things to look out for?

  • Windows Server 2008 R2 - Cannot Change DNS Domain Context on Some Machines

    - by Richie086
    So I have a small Windows Server 2008 R2 network consisting of a domain controller, a file server, a SQL server, etc. All machines are joined to a Windows domain (CPUSHIELD.COM) and show up in Active Directory Users and Computers under the Computers OU. Each computer has a DNS record as well, which was populated when I joined each computer to the domain.

    However, when I go to my SQL server VM (which is joined to CPUSHIELD.COM) and try to add domain users or groups to the local users or groups on my file server (which is a physical machine) or my SQL server (which is a virtual machine), for some reason I cannot change the context to the CPUSHIELD.COM domain in the "From this location" field.

    Here is the really strange thing: I have two other servers on my network that do show CPUSHIELD.COM in the "From this location" field (as I would expect with any machine joined to a domain), and on those I am able to search the local machine and/or domain for users/groups to add. I have done hundreds of Windows Server 2008 installs and this is the first time I have run into this issue. Any ideas? Let me know if you need more info.
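    A diagnostic worth trying (an assumption; this symptom often points at a broken secure channel between the member server and the domain): from one of the affected machines, verify and, if needed, reset its machine-account channel to the DC:

        nltest /sc_query:CPUSHIELD.COM
        nltest /sc_reset:CPUSHIELD.COM

    or, in PowerShell on 2008 R2:

        Test-ComputerSecureChannel -Repair

    If the query fails, re-joining the machine to the domain (or netdom resetpwd) usually restores the ability to pick the domain in the object-picker dialog.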

  • Cisco ASA Site-to-Site VPN Dropping

    - by ScottAdair
    I have three sites: Toronto (1.1.1.1), Mississauga (2.2.2.2) and San Francisco (3.3.3.3). All three sites have ASA 5520s. The sites are connected to each other with site-to-site VPN links between each pair of locations. My issue is that the tunnel between Toronto and San Francisco is very unstable, dropping every 40 to 60 minutes. The tunnel between Toronto and Mississauga (which is configured in the same manner) is fine, with no drops. I also noticed that my pings will drop but the ASA thinks that the tunnel is still up and running. Here is the configuration of the tunnel.

    Toronto (1.1.1.1):

        crypto map Outside_map 1 match address Outside_cryptomap
        crypto map Outside_map 1 set peer 3.3.3.3
        crypto map Outside_map 1 set ikev1 transform-set ESP-AES-256-MD5 ESP-AES-256-SHA
        crypto map Outside_map 1 set ikev2 ipsec-proposal AES256
        group-policy GroupPolicy_3.3.3.3 internal
        group-policy GroupPolicy_3.3.3.3 attributes
         vpn-idle-timeout none
         vpn-tunnel-protocol ikev1 ikev2
        tunnel-group 3.3.3.3 type ipsec-l2l
        tunnel-group 3.3.3.3 general-attributes
         default-group-policy GroupPolicy_3.3.3.3
        tunnel-group 3.3.3.3 ipsec-attributes
         ikev1 pre-shared-key *****
         isakmp keepalive disable
         ikev2 remote-authentication pre-shared-key *****
         ikev2 local-authentication pre-shared-key *****

    San Francisco (3.3.3.3):

        crypto map Outside_map0 2 match address Outside_cryptomap_1
        crypto map Outside_map0 2 set peer 1.1.1.1
        crypto map Outside_map0 2 set ikev1 transform-set ESP-AES-256-MD5 ESP-AES-256-SHA
        crypto map Outside_map0 2 set ikev2 ipsec-proposal AES256
        group-policy GroupPolicy_1.1.1.1 internal
        group-policy GroupPolicy_1.1.1.1 attributes
         vpn-idle-timeout none
         vpn-tunnel-protocol ikev1 ikev2
        tunnel-group 1.1.1.1 type ipsec-l2l
        tunnel-group 1.1.1.1 general-attributes
         default-group-policy GroupPolicy_1.1.1.1
        tunnel-group 1.1.1.1 ipsec-attributes
         ikev1 pre-shared-key *****
         isakmp keepalive disable
         ikev2 remote-authentication pre-shared-key *****
         ikev2 local-authentication pre-shared-key *****

    I'm at a loss. Any ideas?
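    One thing that stands out (my reading, not a confirmed fix): both ends have "isakmp keepalive disable", so neither peer runs dead peer detection, and a tunnel that silently dies can still look up; that matches the pings dropping while the ASA reports the tunnel as fine. Re-enabling DPD on both ASAs would look like:

        tunnel-group 3.3.3.3 ipsec-attributes
         isakmp keepalive threshold 10 retry 2

    (and the mirror image for 1.1.1.1 on the San Francisco unit), which lets each side detect and rebuild a dead tunnel instead of waiting for the SA lifetime to expire.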

  • NGINX server_name issues

    - by Unai
    I have the following simple server block on NGINX:

        server {
            listen 80;
            listen 8090;
            server_name domain.com;
            autoindex on;
            root /home/docroot;
            location ~ \.php$ {
                include /usr/local/nginx/conf/fastcgi_params;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /home/docroot$fastcgi_script_name;
                fastcgi_pass 127.0.0.1:9000;
            }
        }

    After I include the relevant settings in my hosts file, I get the following (unexpected) behavior:

    1. http: //domain.com/ and http: //domain.com:8090/ work fine;
    2. http: //domain.com:8090/future-cell-phone-technology-01-150x150.jpg works;
    3. http: //domain.com/future-cell-phone-technology-01-150x150.jpg - ERROR! "The connection was reset".

    (Note: I added a space after "http:" to avoid link protection; this is not really promoting anything.)

    I've been troubleshooting (3) for a couple of hours and I'm unable to identify the culprit. I'm running NGINX 1.0.10 (latest stable) on Debian 6.0.2 32-bit. This NGINX instance runs another 40 or 50 sites with no problems.
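    A debugging sketch (generic steps, not from the thread; the error-log path assumes a default source build under /usr/local/nginx): compare the two requests directly and watch the log while doing so, to see whether the reset comes from this server block at all:

        curl -v http://domain.com/future-cell-phone-technology-01-150x150.jpg
        curl -v http://domain.com:8090/future-cell-phone-technology-01-150x150.jpg
        tail -f /usr/local/nginx/logs/error.log

    If nothing is logged for the failing request, something in front of port 80 (another vhost catching the request, a firewall rule, or a crashing worker) becomes the likely suspect; `nginx -t` at least confirms the loaded configuration parses cleanly.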

  • Installing WindowsAuthentication breaks authentication / web.config?

    - by Ian Quigley
    I have a clean Windows 2008 R2 box (on a VM) and have installed IIS 7.5 with default options. I then copied a website to it (from Windows 7, IIS 7) and after a little tweaking the website is working fine. The website is currently using and working with Anonymous Authentication. I have gone back to the Windows Components/Server Manager, Roles - Security, and ticked and installed Windows Authentication.

    When I check my server in IIS (top level, above sites) - Authentication, I see:

        Anonymous Authentication (enabled)
        ASP.NET Impersonation (disabled)
        Forms Authentication (disabled)
        Windows Authentication (enabled)

    When I check my default website - Authentication, I see as above, but "Retrieving status" and an error dialog saying:

        There was an error while performing this operation.
        Details:
        Filename: c:\inetpub\wwwroot\screwturnwiki\web.config
        Line number: 96
        Error: This configuration section cannot be used in this path. This happens when the section is locked at a parent level. Locking is either by default (overrideModeDefault="Deny"), or set explicitly by a location tag with overrideMode="Deny" or the legacy allowOverride="false".

    I have tried hand-editing the web.config with no success. Un-installing Windows Authentication happily returns my site to working with Anonymous Authentication, and allows me to enable/disable those three options.

    FYI, I am using ScrewTurnWiki with the Active Directory plug-in.
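    The standard unlock for this error (a well-known IIS fix, though not confirmed as the answer here) is to delegate the windowsAuthentication section so sites may set it in their own web.config:

        %windir%\system32\inetsrv\appcmd unlock config /section:system.webServer/security/authentication/windowsAuthentication

    Alternatively, remove the <windowsAuthentication> element from the site's web.config (around line 96, per the error) and configure authentication only at the server level in applicationHost.config.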

  • Can MS Services for Unix be deployed and accessed from a shared drive?

    - by Ian C.
    I'm interested in experimenting with replacing our dependency on MKS with MS' Services for Unix toolset. I was wondering if anyone has any experience with deploying SFU on a shared drive? We like to, wherever possible, host our dev tools on one central NAS and call to the NAS to access the tools instead of rolling stuff out to each and every desktop.

    I'm not interested in the NFS support or ActiveState Perl. Really, none of the daemon technology is required here. I'm looking for replacements for the coreutils/binutils stuff you find in Linux (and MKS on Windows): sed, awk, csh, bash, grep, ls, find -- the meat-and-potatoes command-line apps that our build and test scripts are built around. If I limit the install to just the Interix GNU components (and maybe the Remote Connectivity components), will it run nicely from a shared location?

    To head off some questions: Yes, I've looked at Cygwin. Unfortunately, its performance in our build and test environment is poor. It runs considerably slower than MKS and it's not a direct drop-in replacement for MKS (thanks to its internal pathing and limitations with commands like 'ps'), so it's a tougher sell. Yes, I'm looking at the MinGW offering in parallel to this.

  • Creating Active Directory on an EC2 box

    - by Chiggins
    So I have Active Directory set up on a Windows Server 2008 Amazon EC2 server. It's set up correctly, I think; I never got any errors with it. Just to test that I got it all set up correctly, I have a Windows 7 Professional virtual machine set up on my network to join to AD. I set the VM to use the Active Directory box as its DNS server. I type in my domain to join it, but I get the following error:

        DNS was successfully queried for the service location (SRV) resource record used to locate a domain controller for domain "ad.win.chigs.me":
        The query was for the SRV record for _ldap._tcp.dc._msdcs.ad.win.chigs.me
        The following domain controllers were identified by the query:
        ip-0af92ac4.ad.win.chigs.me
        However no domain controllers could be contacted.
        Common causes of this error include:
        - Host (A) or (AAAA) records that map the names of the domain controllers to their IP addresses are missing or contain incorrect addresses.
        - Domain controllers registered in DNS are not connected to the network or are not running.

    It seems that I can talk to Active Directory, but when I'm trying to contact the Domain Controller, it's giving out a private IP to connect to, at least that's what I can make of it. Here are some nslookup results:

        > win.chigs.me
        Server:  ec2-184-73-35-150.compute-1.amazonaws.com
        Address:  184.73.35.150

        Non-authoritative answer:
        Name:    ec2-184-73-35-150.compute-1.amazonaws.com
        Address:  10.249.42.196
        Aliases:  win.chigs.me

        > ad.win.chigs.me
        Server:  ec2-184-73-35-150.compute-1.amazonaws.com
        Address:  184.73.35.150

        Name:    ad.win.chigs.me
        Address:  10.249.42.196

    win.chigs.me and ad.win.chigs.me are CNAMEs pointing to my EC2 box. Any idea what I need to do so that I can join my virtual machine to the EC2 Active Directory setup I have? Thanks!
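    For what it's worth (a hypothetical workaround, with the usual caveat that exposing AD across the public internet is risky): the DC registers its internal EC2 address (10.249.42.196) in DNS, so the join fails when the client dutifully tries the SRV target. Pinning the DC's name to the instance's public IP in the joining machine's hosts file is one crude way to test this theory, e.g. in C:\Windows\System32\drivers\etc\hosts:

        184.73.35.150  ip-0af92ac4.ad.win.chigs.me ad.win.chigs.me

    A VPN into the EC2 network, so that the private address is actually reachable, is the more realistic fix.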

  • Probability of Blade Chassis Failure

    - by ChrisZZ
    In my organisation we are thinking about buying blade servers instead of rack servers. Of course, technology vendors also make them sound very nice. A concern that I read very often in different forums is that there is a theoretical possibility of the server chassis going down, which would in consequence take all the blades down. That is due to shared infrastructure. My reaction to this probability would be to have redundancy and buy two chassis instead of one (very costly, of course).

    Some people (including e.g. HP vendors) try to convince us that the chassis is very, very unlikely to fail, due to many redundancies (redundant power supply, etc.). Another concern on my side is that if something goes down, spare parts might be required, which is difficult in our location (Ethiopia).

    So I would ask experienced administrators who have managed blade servers: What is your experience? Do they go down as a whole, and what is the sensible shared infrastructure that might fail?

    The question could be extended to shared storage. Again, I would say that we need two storage units instead of only one, and again the vendors say that these things are so rock solid that no failure is expected. Well, I can hardly believe that such critical infrastructure can be very reliable without redundancy, but maybe you can tell me whether you have successful blade-based projects that work without redundancy in their core parts (chassis, storage, ...).

    At the moment we are looking at HP, as IBM looks much too expensive.

    Thanks a lot, best regards, Christian

  • Why can't I open a file with double-click after I moved the application that opens it on Windows?

    - by Glen S. Dalton
    I am on Windows Server 2003, but I guess it is the same on Windows XP. I moved some portable applications (the kind people usually create for USB sticks) to locations like C:\bin\app1\app1.exe. The old location was C:\programs\app1\app1.exe. app1.exe can open files of type *.ap1.

    When I right-click file.ap1 and choose "Open with...", the Open With dialog appears. But it is not working how I expect in this situation. I can choose c:\bin\app1\app1.exe with the "Browse" button, but:

    - app1.exe will not appear in the dialog's program list after clicking OK on it in the browse dialog, like I am used to;
    - app1.exe will not open the file when I click OK in the "Open with" dialog; the application that was assigned until then will still open it.

    What could be the reason? My account is a member of the Administrators group. I just changed the permissions of the folder C:\bin\app1\ and made sure that the group "Administrators" has all rights. I also applied this manually to all subfolders and subfiles. I also tried to move the application (with the whole folder) to "c:\program files\app1\app1.exe".
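    A guess at the cause (a known Explorer quirk, though not verified for this case): HKEY_CLASSES_ROOT\Applications still holds an entry for app1.exe pointing at the old path, and the Open With dialog silently keeps using it instead of the newly browsed location. Deleting the stale entry forces Explorer to re-register the browsed path:

        reg delete "HKCR\Applications\app1.exe" /f

    After that, browsing to C:\bin\app1\app1.exe in the Open With dialog should register the new location (taking a registry backup first is prudent).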

  • Storing changes to multiple databases in a single centralized database

    - by B4x
    The setup: multiple MySQL databases at different locations with the same schema. The databases are in production.

    The motivation: we want to present the information in these databases in a web interface, clearly showing which database each row originated from. We want to be able to get this data from one single source (for different reasons; one of them is pagination, which gets tricky if you use multiple sources).

    The problem: how do we collect data from multiple databases, storing it at a central location and clearly marking the origin of each row? We have discussed using a centralized DB that tracks changes to the production DBs, with the same schema and one additional column for origin. If possible, we would like to avoid having to make changes in the production environment. Since we can't use MySQL's replication (multiple masters to a single slave isn't allowed), what are our other options? Are there any existing solutions for something like this, or do we have to code something ourselves? Is the best solution to change the database schemas in production and add a column for origin?

    The idea of a centralized database isn't set in stone. If there is a solution to this that solves our other problems without a centralized DB, we can be flexible. Any help is much appreciated.
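    One pattern that avoids touching production (a rough sketch under assumed table, database and credential names, not a vetted solution): poll each production DB periodically and copy rows into central tables that carry an extra origin column, for example:

        #!/bin/sh
        # Pull rows from each production server into the central DB, tagging
        # every row with the server it came from. All names are illustrative;
        # assumes staging_siteA/staging_siteB exist with the production schema
        # and agg.orders has an extra leading 'origin' column.
        CENTRAL="mysql -h central.example.com -u writer -psecret"
        for SRC in siteA siteB; do
          mysqldump -h "db-$SRC.example.com" -u reader -psecret \
            --no-create-info appdb orders | $CENTRAL "staging_$SRC"
          $CENTRAL -e "REPLACE INTO agg.orders
                       SELECT '$SRC' AS origin, t.* FROM staging_$SRC.orders t;"
        done

    This keeps the production schemas untouched at the cost of polling latency; a genuinely real-time feed would need the binlog, e.g. one replication slave instance per master running on the central host.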

  • eAccelerator Issue - Cache Directory Empty.

    - by Tom
    Hi all, hoping someone can give me a hand with this. I've recently installed eAccelerator 0.9.6.1 on a CentOS LAMP server. I had it working fine, using /tmp/accelerator as the cache directory.

    php.ini setup:

        zend_extension="/usr/local/lib/php/extensions/no-debug-non-zts-20060613/eaccelerator.so"
        eaccelerator.shm_size="200"
        eaccelerator.cache_dir="/var/cache/eaccelerator"
        eaccelerator.enable="1"
        eaccelerator.optimizer="1"
        eaccelerator.check_mtime="1"
        eaccelerator.debug="0"
        eaccelerator.filter=""
        eaccelerator.shm_max="0"
        eaccelerator.shm_ttl="3600"
        eaccelerator.shm_prune_period="180"
        eaccelerator.shm_only="1"
        eaccelerator.compress="1"
        eaccelerator.compress_level="9"

    php -v output:

        PHP 5.2.12 (cli) (built: Feb 3 2010 00:34:28)
        Copyright (c) 1997-2009 The PHP Group
        Zend Engine v2.2.0, Copyright (c) 1998-2009 Zend Technologies
            with eAccelerator v0.9.6.1, Copyright (c) 2004-2010 eAccelerator, by eAccelerator
            with the ionCube PHP Loader v3.3.20, Copyright (c) 2002-2010, by ionCube Ltd.

    I had to remove the cache directory as I was testing something. I remade it, re-set permissions, and found that eAccelerator was no longer creating cache files within the folder. I thought it might be down to ownership rights on the folder, so I chown'd it apache:apache, and this made no difference.

    I recreated the directory in /var/cache instead, edited php.ini to point to the new cache dir location, chmod'd, chown'd, etc., and still eAccelerator is not creating any of the cache files in the directory (it's just empty). Could someone suggest what I might be doing incorrectly here? I've read through numerous pages to try to troubleshoot the issue, to no avail. Any help appreciated.
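    One detail in the posted php.ini that would explain this (an observation, not confirmed by the thread): eaccelerator.shm_only="1" tells eAccelerator to cache compiled scripts in shared memory only, so nothing is ever written to cache_dir and an empty directory is expected. If on-disk caching is wanted, the relevant change is:

        eaccelerator.shm_only="0"

    followed by an Apache restart; the compress settings likewise only apply to the disk cache.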

  • php_ibm_db2.dll on IIS 7.5 using PHP 5.3 error message

    - by grmbl
    I'm trying to use the ibm_db2 extension to access an iSeries DB2 database. This is the test code (taken from here):

        <?php
        $database = 'ALI452BFAL'; // library
        $user = 'STN452';
        $password = '**********';
        $hostname = 'myserverip';
        $port = 50000;
        $conn_string = "DRIVER={IBM DB2 ODBC DRIVER};DATABASE=$database;" .
            "HOSTNAME=$hostname;PORT=$port;PROTOCOL=TCPIP;UID=$user;PWD=$password;";
        $conn = db2_connect($conn_string, '', '');
        if ($conn) {
            print "ok";
            db2_close($conn);
        } else {
            echo db2_conn_error() . '<br>' . db2_conn_errormsg();
        }
        ?>

    I have installed a very basic package containing the DB2 driver and added this as an extension (IBM Data Server Driver for ODBC, CLI, and .NET.msi). This is my result:

        08001 [IBM][CLI Driver] SQL30081N A communication error has been detected.
        Communication protocol being used: "TCP/IP". Communication API being used: "SOCKETS".
        Location where the error was detected: "10.10.0.120".
        Communication function detecting the error: "connect".
        Protocol specific error code(s): "10061", "", "". SQLSTATE=08001 SQLCODE=-30081

    Has anybody tried this before?
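    For what it's worth (general interpretation, not from the thread): Winsock error 10061 is "connection refused", i.e. nothing on 10.10.0.120 is listening on port 50000. That port is the default for DB2 on Linux/Unix/Windows, not for DB2 on IBM i, which speaks DRDA (port 446) and normally needs DB2 Connect or a comparable gateway in between. A quick check from the web server shows which service is actually listening before touching the PHP side:

        telnet 10.10.0.120 50000
        telnet 10.10.0.120 446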

  • Setting up Live @ EDU

    - by user73721
    [PROBLEM] Hello everyone. I have a small issue here. We are trying to get our Exchange accounts (for students only) ported over from an Exchange Server 2003 to the Microsoft cloud service known as Live@EDU. The problem we are having is that in order to do this we need to install 2 pieces of software:

    1: OLSync
    2: Microsoft Identity Lifecycle Manager

    "Download the Galsync.msi here" - the "here" link takes you to a page that needs a login for an admin account for Live@EDU. That part works. However, once logged in, it redirects to a page that states:

        https://connect.microsoft.com/site185/Downloads/DownloadDetails.aspx?DownloadID=26407
        Page Not Found
        The content that you requested cannot be found or you do not have permission to view it.
        If you believe you have reached this page in error, click the Help link at the top of the page to report the issue and include this ID in your e-mail: afa16bf4-3df0-437c-893a-8005f978c96c

    [WHAT I NEED] I need to download that file. Does anyone know of an alternative location for that installation file? I also need to obtain Identity Lifecycle Manager (ILM) Server 2007, Feature Pack 1 (FP1).

    If anyone has any helpful information, that would be fantastic! As well, if anyone has completed a migration of accounts from an on-site Exchange 2003 server to the Microsoft Live@EDU servers, any general guidance would be helpful! Thanks in advance.
