Search Results


  • How to troubleshoot port forwarding on Windows 7 (64 Bit) with ICS enabled?

    - by LearnCocos2D
    I want to forward some ports (1666 for Perforce, 8081 for Hudson) on my Internet gateway machine. This machine is running Windows 7 (64-bit, legal, user account) and is connected to the Internet via cable modem (it's not a router). The Windows machine shares its Internet connection via ICS, and that works fine on all connected computers. I can access the services via the gateway's public IP (95.x.x.x) on the given ports if they are running on the gateway machine itself.

    I've added the ports and destination IP address (192.168.0.18) in the Internet network adapter's Advanced Settings dialog (Sharing tab). That's the same dialog where you have a list of preconfigured services like HTTP, FTP and other incoming services. When I do that, I can't connect to the services anymore; for some reason port forwarding isn't working. I have uninstalled Bitdefender because I wanted to check if its firewall interferes. I've also disabled the Windows Firewall and Defender, to no avail. I tried a freeware tool that helps to set up port forwarding, but that didn't work either.

    The target machine is a Mac OS X computer whose firewall is disabled, and its IP is static. I can successfully connect to the services using the local IP address (192.168.0.18) from two different machines, including the gateway computer. So both internally and externally it seems to me that the ports are open and not blocked, and the issue is with port forwarding itself. From what I understand, it should be enough to add an entry to the Advanced Settings dialog to enable port forwarding when there are no firewalls interfering.

    How can I troubleshoot why port forwarding isn't working for me? What steps should I follow to alleviate the issue? PS: I gladly accept command line solutions.

    Other things I've tried:
    - adding an Inbound Rule to Windows Firewall for ports 1666 and 8081
    - trying with Windows Firewall enabled and disabled
    - disabling/enabling the network adapter
    - double-checked that the IP addresses are correct
    - mapping a different incoming port to the service's actual port
    - followed or checked the misc tips in this article

    What I haven't dared trying yet (let me know if it's worth a shot):
    - disable/enable ICS
    - remove all network adapters (via Control Panel), then re-install and re-configure them
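
    Since command line solutions are welcome above, a hedged sketch: Windows also ships a built-in port proxy in netsh that can be tested independently of the ICS forwarding table. This is a workaround to try, not a fix for the ICS dialog itself, and it needs the IP Helper service running; the ports and addresses below are the ones from the question:

        :: forward the Perforce and Hudson ports to the internal machine
        netsh interface portproxy add v4tov4 listenport=1666 connectaddress=192.168.0.18 connectport=1666
        netsh interface portproxy add v4tov4 listenport=8081 connectaddress=192.168.0.18 connectport=8081
        :: list the active forwardings to confirm
        netsh interface portproxy show all

    If connections succeed through portproxy but not through the ICS dialog, that narrows the problem to the ICS forwarding table rather than the firewall or the target machine.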

  • FTP through HAProxy

    - by Menda
    I have a machine which is the Host and has HAProxy installed and working. Then I have a Guest KVM virtual machine running inside the Host with the IP 192.168.122.152. I installed an FTP server in the Guest machine with VSFTPD. From the Host, if I try the command $ ftp -p 192.168.122.152, it works perfectly and I can connect to the Guest FTP. I should note that this FTP is configured as passive, but both passive and active connections work from the Host. This is an extract of part of /etc/vsftpd.conf in the Guest:

        # Passive mode
        connect_from_port_20=NO
        tcp_wrappers=YES
        listen_address=192.168.122.152
        pasv_enable=YES
        pasv_promiscuous=NO
        port_enable=YES
        port_promiscuous=NO
        pasv_max_port=10000
        pasv_min_port=10250

    Now it's time to make it accessible from outside, so I configured /etc/haproxy/haproxy.cfg like this:

        listen FTP_Default *:21
            server ftp01 192.168.122.152 check port 21 inter 10s rise 1 fall 2

        listen FTP_Range *:10000-10250
            server ftp01 192.168.122.152 check port 21 inter 10s rise 1 fall 2

    But if I try to connect from another machine on the Internet with $ ftp -p $PUBLICIP, it only responds Connected to <PUBLICIP>, and it never asks for the login and password. Something in the HAProxy config must be wrong, because it's the only point where it fails. By the way, I tried to adapt my configuration to the one in this blog. Thanks.
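
    A hedged sketch of what a TCP-mode HAProxy front for FTP might look like. The assumption here is that a defaults section elsewhere in the file sets mode http, which would explain a connection that opens but never delivers the FTP banner (in HTTP mode HAProxy waits for a request from the client, while the FTP client waits for the server's greeting):

        listen FTP_Default
            bind *:21
            mode tcp
            server ftp01 192.168.122.152:21 check inter 10s rise 1 fall 2

        listen FTP_Range
            bind *:10000-10250
            mode tcp
            server ftp01 192.168.122.152 check port 21 inter 10s rise 1 fall 2

    Two things worth double-checking in the quoted vsftpd.conf as well: pasv_max_port (10000) is lower than pasv_min_port (10250), which looks inverted, and vsftpd's pasv_address directive may need to be set to the public IP so that the address advertised in passive-mode replies is reachable from outside.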

  • James - mail server configuration help needed

    - by Chaitanya
    Hi, I am trying to set up the James mail server on a Linux machine. The Linux machine has a public static IP address assigned. I installed James, and in config.xml I added the servername mydomain.com. In the DNS for mydomain.com, I created an A record, say mx.mydomain.com, which corresponds to the IP address of the above mail server machine. Then I added mx.mydomain.com as the MX record for mydomain.com. In James, I created a new user test. Then from Gmail, I sent a mail to test@mydomain.com. The mail never arrived and it didn't even bounce back.

    The Linux machine is behind a firewall with only ports 22, 80 and 8080 open for the external network. My question here is: do I need to open any other ports on the firewall so that the mail I send from Gmail arrives to James? If it's not a port problem, any views on solving this issue? I don't want to send mails from this server. It's only for receiving mail.
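
    Almost certainly yes: inbound SMTP is delivered on TCP port 25, and with only 22, 80 and 8080 open, Gmail's servers cannot reach James at all (which would also explain the lack of an immediate bounce, since sending servers retry for days before giving up). A quick hedged check from an outside machine, assuming the DNS records above are in place:

        # does the MX resolve, and is anything answering on port 25?
        dig MX mydomain.com
        telnet mx.mydomain.com 25

    If the telnet connection times out, the firewall (or the hosting provider) is still blocking port 25.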

  • Sending microphone input over Remote Desktop 7.0

    - by Taylor Price
    I am using Remote Desktop 7 (the new version that came out with Windows 7) to control a Windows XP Pro machine. I have selected "Record from this computer" in the Remote Audio settings. When I connect to the machine, go to the Control Panel, open the Sound panel, and go to the Audio tab, I find that the default sound playback device is "Microsoft RDP Audio Driver". However, there is no default sound recording device, and as expected, my IP phone thinks there is no recording device. If I am sitting in front of the computer with a mic plugged in, it works just fine. Has anybody else been able to get this to work properly? Is there anything that I have to set up on the XP machine to get this working? Thanks in advance.

    Edit: As John T pointed out below, you have to be connecting to a Windows 7 Enterprise or Ultimate machine for this to work. I've also found out that multi-monitor support has the same requirement.

  • Disable local delivery in Sendmail

    - by Luke P M
    I am using Sendmail on a CentOS server to send email from PHP scripts, but the problem is that mail is delivered to a local mailbox on the machine rather than to what is specified in the MX records for the domain, which actually point to another machine I use for email. I would like Sendmail not to try to deliver mail locally for the domain the machine is set up for. Is there a simple way to disable local delivery? The domain is not in the local-host-names file. I've already done lots of googling and I have looked at:

    http://serverfault.com/questions/26934/sendmail-configuration-to-not-deliver-mail-to-local-machine
    http://serverfault.com/questions/65365/disable-local-delivery-in-sendmail

    But either there is no answer or it is not suitable. I don't want to relay to another server, I just want it to send mail regardless of domain. To provide an example: I have two servers, one is the mail server at mail.example.com and the other is a web server at example.com. When I use the SMTP service on the web server, it currently routes mail to a local mailbox on example.com, but it should be going to mailboxes on mail.example.com. The output of sendmail -bt returns:

        ADDRESS TEST MODE (ruleset 3 NOT automatically invoked)
        Enter <ruleset> <address>
        > 3,0 info@example.com
        canonify           input: info @ example . com
        Canonify2          input: info
        Canonify2        returns: info
        canonify         returns: info
        parse              input: info
        Parse0             input: info
        Parse0           returns: info
        ParseLocal         input: info
        ParseLocal       returns: info
        Parse1             input: info
        Parse1           returns: $# local $: info
        parse            returns: $# local $: info
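
    A hedged sketch of one m4-level approach, assuming the usual sendmail.mc setup on CentOS: MAIL_HUB routes mail that sendmail has classified as local to another host instead of the local mailer, which matches the "parse returns: $# local" trace above. This forwards only locally-classified mail, not a blanket SMART_HOST relay:

        dnl # in /etc/mail/sendmail.mc: hand all locally-classified mail to the real mail server
        define(`MAIL_HUB', `mail.example.com.')dnl
        FEATURE(`stickyhost')dnl

    Then rebuild the config and restart (on CentOS typically make -C /etc/mail && service sendmail restart). The usual root cause is that the machine's own hostname makes sendmail treat example.com as local (class $w) even when the domain is absent from local-host-names.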

  • Appcrash and possible malware

    - by Chris Lively
    First off, I'm running MS Intune Endpoint Protection. It is completely up to date. On 10/25 @ 11:53PM I came across a site that caused Intune to freak out:

        Microsoft Antimalware has detected malware or other potentially unwanted software.
        For more information please see the following:
        http://go.microsoft.com/fwlink/?linkid=37020&name=Trojan:Win64/Sirefef.B&threatid=2147646729
        Name: Trojan:Win64/Sirefef.B
        ID: 2147646729
        Severity: Severe
        Category: Trojan
        Path: file:_C:\Windows\System32\consrv.dll
        Detection Origin: Local machine
        Detection Type: Concrete
        Detection Source: Real-Time Protection
        User: NT AUTHORITY\SYSTEM
        Process Name: C:\Windows\explorer.exe
        Signature Version: AV: 1.115.526.0, AS: 1.115.526.0, NIS: 10.7.0.0
        Engine Version: AM: 1.1.7801.0, NIS: 2.0.7707.0

    I, of course, elected to simply delete the file. Since then my machine has been randomly giving an error that "Host Process for Windows Services" stopped working. There are generally two different pieces of info:

        Description
        Faulting Application Path: C:\Windows\System32\svchost.exe

        Problem signature
        Problem Event Name: BEX64
        Application Name: svchost.exe
        Application Version: 6.1.7600.16385
        Application Timestamp: 4a5bc3c1
        Fault Module Name: StackHash_52d4
        Fault Module Version: 0.0.0.0
        Fault Module Timestamp: 00000000
        Exception Offset: 000062bdabe00000
        Exception Code: c0000005
        Exception Data: 0000000000000008
        OS Version: 6.1.7601.2.1.0.256.27
        Locale ID: 1033
        Additional Information 1: 52d4
        Additional Information 2: 52d47b8b925663f9d6437d7892cdf21b
        Additional Information 3: ed24
        Additional Information 4: ed24528f3b69e8539b5c5c2158896d3e

    and

        Description
        Faulting Application Path: C:\Windows\System32\svchost.exe

        Problem signature
        Problem Event Name: APPCRASH
        Application Name: svchost.exe
        Application Version: 6.1.7600.16385
        Application Timestamp: 4a5bc3c1
        Fault Module Name: mshtml.dll
        Fault Module Version: 9.0.8112.16437
        Fault Module Timestamp: 4e5f1784
        Exception Code: c0000005
        Exception Offset: 00000000002ed3c2
        OS Version: 6.1.7601.2.1.0.256.27
        Locale ID: 1033
        Additional Information 1: 3e9e
        Additional Information 2: 3e9e8b83f6a5f2a25451516023078a83
        Additional Information 3: 432a
        Additional Information 4: 432a0284c502cce3bbb92a3bd555fe65

    Intune claims the machine is clean. I've also tried some of the online scanners like Trend Micro, all of which claimed the system is clean. Finally, I tried sfc /scannow and it said all was good. I left my machine on after I left last night and there were about 50 of those messages. Ideas on how to proceed?
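
    One hedged way to narrow this down: each svchost.exe instance hosts several services, so matching the PID from the Event Viewer crash entries against the service list tells you which services are actually failing:

        tasklist /svc /fi "imagename eq svchost.exe"

    For what it's worth, Sirefef/ZeroAccess is known to replace system components, and consrv.dll is loaded at boot by a system process; simply deleting the detected file can leave related infected or missing pieces behind, which would fit recurring c0000005 crashes. An offline scan from a vendor rescue CD is a reasonable next step.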

  • TFS 2010 : Unable to add Project to a collection

    - by Scott
    This morning I'm trying to set up Team Foundation Server 2010 to demo for my team. As this is just a demo, I thought I would install it on my Windows 7 machine, which also serves as my development machine. My development machine uses Visual Studio 2008 Team Suite. I installed Team Explorer 2008 and then reapplied SP1. Finally I installed and set up TFS 2010. TFS by default gave me administrator privileges.

    I started up Visual Studio and connected to the collection just fine. However, I'm unable to create a new project and get the following error message: "TF30172: You are trying to create a team project either without required permissions or with an older version of Team Explorer. Contact your project admin..."

    To check the permissions, I used my home computer, which is running Visual Studio 2010. On that machine I was able to connect to the same TFS instance and create a project no problem. So it looks as though it is a Team Explorer problem, but everywhere on the web people are saying that not only is what I'm trying to do possible, but they have done it themselves. What am I missing to add a project to TFS 2010 under Visual Studio 2008?
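
    A hedged note: Team Explorer 2008 needs the Forward Compatibility Update (Visual Studio Team System 2008 SP1 Forward Compatibility Update for TFS 2010, KB974558) to work against a 2010 server, and it has to be pointed at the full project-collection URL rather than just the server name, along the lines of (server name assumed for illustration):

        http://localhost:8080/tfs/DefaultCollection

    Even with the update, creating a new team project generally requires a Team Explorer version matching the server, which fits the observation that the Visual Studio 2010 machine could create the project while VS 2008 could not. Creating the project from a 2010 client and then working on it from VS 2008 is the commonly reported workaround.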

  • Bridging Virtual Networking into Real LAN on a OpenNebula Cluster

    - by user101012
    I'm running OpenNebula with 1 cluster controller and 3 nodes. I registered the nodes at the front-end controller and I can start an Ubuntu virtual machine on one of the nodes. However, from my network I cannot ping the virtual machine, and I am not quite sure if I have set up the virtual machine correctly.

    The nodes all have a br0 interface which is bridged with eth0. The IP address is in the 192.168.1.x range. The template file I used for the vmnet is:

        NAME            = "VM LAN"
        TYPE            = RANGED
        BRIDGE          = br0             # Replace br0 with the bridge interface from the cluster nodes
        NETWORK_ADDRESS = 192.168.1.128   # Replace with corresponding IP address
        NETWORK_SIZE    = 126
        NETMASK         = 255.255.255.0
        GATEWAY         = 192.168.1.1
        NS              = 192.168.1.1

    However, I cannot reach any of the virtual machines, even though Sunstone says that the virtual machine is running and onevm list also states that the VM is running. It might be helpful to know that we are using KVM as the hypervisor, and I am not quite sure if the virbr0 interface which was automatically created when installing KVM might be a problem.
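
    A hedged first check on the node that runs the guest: confirm the VM's virtual NIC is actually enslaved to br0 rather than to KVM's default NAT bridge virbr0 (libvirt tooling assumed, as OpenNebula's KVM driver uses it):

        # list bridges and their attached interfaces; the guest's vnetX tap should sit under br0
        brctl show
        # inspect the interface definition of the guest as libvirt sees it
        virsh dumpxml <vm-name> | grep -A3 '<interface'

    If the tap device shows up under virbr0, the guest is on the 192.168.122.x NAT network and will not be pingable from the LAN, which would match the symptoms.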

  • SAN with iSCSI-Target Performance Horrendous

    - by Justin
    We have a poor man's SAN set up in a 1U Ubuntu server running iSCSI-Target, with two 300GB drives in RAID-0. We are using it for block-level storage for virtual machines. The hypervisor is connected to the SAN via gigabit on a dedicated VLAN and interfaces.

    We only have a single virtual machine set up and are doing some benchmarks. If we run hdparm -t /dev/sda1 from the virtual machine, we get 'ok' performance of 75MB/s from the virtual machine to the SAN. Then we basically compile a package with ./configure and make. Things start OK, but then all of a sudden the load average on the SAN grows to 7+ and things slow down to a crawl. When we SSH into the SAN and run top, the load is indeed 7+, but CPU usage is basically nothing, and the server has 1.5GB of memory available. When we kill the compile on the virtual machine, the load on the SAN slowly goes back to sub-1 figures.

    What in the world is causing this? How can we diagnose this further? Here are two screenshots from the SAN during high load:

    1. Output of iotop on the SAN [screenshot in original]
    2. Output of top on the SAN [screenshot in original]
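
    High load with idle CPU usually means processes stuck in uninterruptible I/O wait (D state), which the load average counts. A hedged way to confirm this from the SAN side:

        # per-device utilization and wait times, refreshed every 5 seconds
        iostat -x 5
        # list processes currently blocked on I/O (state D)
        ps -eo state,pid,cmd | awk '$1 == "D"'

    If %util sits near 100 with large await values during the compile, the RAID-0 pair is simply saturated by the many small synchronous writes a compile generates, which is a very different workload from the sequential read hdparm -t measures.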

  • Move database from SQL Server 2012 to 2008

    - by Rich
    I have a database on a SQL Server 2012 instance which I would like to copy to a 2008 server. The 2008 server cannot restore backups created by a 2012 server (I have tried). I cannot find any options in 2012 to create a 2008-compatible backup. Am I missing something? Is there an easy way to export the schema and data to a version-agnostic format which I can then import into 2008? The database does not use any 2012-specific features. It contains tables, data and stored procedures. Here is what I have tried so far:

    I tried "Tasks" > "Generate Scripts" on the 2012 server, and I was able to generate the schema (including stored procedures) as a SQL script. This didn't include any of the data, though. After creating that schema on my 2008 machine, I was able to open the "Export Data" wizard on the 2012 machine, and after configuring the 2012 as source machine and the 2008 as target machine, I was presented with a list of tables which I could copy. I selected all my tables (300+) and clicked through the wizard. Unfortunately it spends ages generating its scripts, then fails with errors like "Failure inserting into the read-only column 'FOO_ID'".

    I also tried the "Copy Database Wizard", which claims to be able to copy "from 2000 or later to 2005 or later". It has two modes:

    1) "Detach and attach", which failed with this error:

        Message: Index was outside the bounds of the array.
        StackTrace: at Microsoft.SqlServer.Management.Smo.PropertyBag.SetValue(Int32 index, Object value)
        ...
        at Microsoft.SqlServer.Management.Smo.DataFile.get_FileName()

    2) "SQL Management Object Method", which failed with the error "Cannot read property IsFileStream. This property is not available on SQL Server 7.0."
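
    Two hedged workarounds that avoid the wizards' SMO quirks. Generate Scripts can in fact include data: in the wizard's Advanced options, "Types of data to script" can be set to "Schema and data", which produces a version-agnostic (if large) script. Alternatively, with the schema already created on the 2008 machine, bcp moves the data table by table and can keep identity values, which sidesteps the "read-only column" failures (server and table names below are placeholders):

        rem export from 2012, import into 2008; -n native format, -E keep identity values, -T trusted connection
        bcp MyDb.dbo.MyTable out MyTable.dat -S SQL2012SERVER -T -n
        bcp MyDb.dbo.MyTable in  MyTable.dat -S SQL2008SERVER -T -n -E

    With 300+ tables this would be scripted, for example from a query over sys.tables.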

  • VMWare Converter - Remote Linux Install P2V

    - by Zed Said
    I am trying to get what I thought was an easy process working. Here is my situation: I have a remote Linux server (running Debian) that I would like to turn into a VM so that I can do testing on it rather than on the production server.

    I downloaded VMware Converter, went to the "Convert machine" wizard, chose "Powered-on machine", and entered my remote machine details. Clicking next shows that it correctly connects to my remote Linux box. Now, on the next screen, for destination type, it enters "VMware Infrastructure virtual machine", and I can't change this. This is where I am stuck. Why do I need another server to convert with? Is this where my VM gets sent to? And why can't I just convert the VM and save it on the computer that I am running the converter program on? If I do have to use a VMware Infrastructure destination, I am confused as to what ESX is and why and how I use it. Any help with this process would be more than appreciated!

  • Cannot Change "Log on through Terminal Services" in Local Security Policy XP from Server 2008 GP

    - by Campo
    This is a mixed AD environment, Server 2003 R2 and 2008 R2. I have a 2003 R2 AD and a 2008 R2 AD. GPO is usually managed from the 2008 R2 machine. I have an RD Gateway on another server as well. I set up the CAP and RAP to allow a normal user to log on to the department's workstations. I also adjusted the GPO for that OU to allow "Log on through Remote Desktop Gateway" for the user group. This worked on my Windows 7 workstation. But unfortunately the policy has a different name in XP: "Allow log on through Terminal Services". I can get right through into the machine, but when the logon actually happens on the local machine I get the "Cannot log on interactively" error.

    This is set (for the local machine) in secpol.msc > Local Security Policy > "User Rights Assignment", but is controlled by the GPO in Computer Configuration > Policies > Security Settings > Local Policies > "User Rights Assignment".

    Do I simply need to adjust the same setting on the same GPO but with a Server 2003 GP editor? It feels like that could cause issues... Looking for some direction, or whether anyone has run into this issue yet.

    UPDATE: Should this work? support.microsoft.com/kb/186529 It still seems like I will have the issue, as the actual GP settings for "Log on through Terminal Services" are still different between Server 2008 R2 and 2003 R2...

    Another thought: should I delete the GPO made for the department and remake it with the 2003 R2 server? I have no 2008-specific settings, as the whole department runs XP other than myself. If that's a solution, I will move my computer out of the department as a workaround... Thoughts?
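
    A hedged aside: despite the differing display names, both "Allow log on through Remote Desktop Services" (2008 R2) and "Allow log on through Terminal Services" (XP/2003) map to the same underlying user right, SeRemoteInteractiveLogonRight, so a GPO that grants the privilege should apply to XP clients regardless of the label. For spot-checking a single XP machine, the 2003 Resource Kit ntrights tool can grant the right directly (tool availability and the group/machine names here are assumptions):

        ntrights +r SeRemoteInteractiveLogonRight -u "DOMAIN\DeptUsers" -m \\xp-workstation

    If granting the right manually fixes the logon, the remaining problem is how the GPO is being applied, not the right itself.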

  • "chown mysql:mysql /data/tmp" command

    - by Mellon
    I am on a Linux Ubuntu machine with MySQL installed. If there is a MySQL installation on an Ubuntu machine, I saw some people doing the following thing:

        sudo chown mysql:mysql /data/tmp

    I get confused. I know the meaning of the above command, which is to change the owner of /data/tmp to the user 'mysql' and change its group to the 'mysql' group. But (my questions):

    1. Why would one run the above command? If I create a table in the my_db database, by default the .frm, .MYD, and .MYI files (data files) will be created automatically by MySQL under /var/lib/mysql/my_db/. So, does the above command change the default MySQL data directory to /data/tmp/ instead of /var/lib/mysql/my_db/? Basically, I would like to know the purpose and effect of the above command (better with examples).

    2. Where do the 'mysql' owner and group come from? Does the installation of MySQL on a Linux machine automatically create the 'mysql' user and group, or do people need to manually create a mysql account for the Linux machine?
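
    A hedged sketch answering by example: the chown only matters when MySQL is told to use that directory for something. On Ubuntu the mysql-server package creates the mysql system user and group automatically at install time, and mysqld runs as that user, so any directory it must write (a relocated datadir, a tmpdir, a target for SELECT ... INTO OUTFILE, and so on) has to be owned by or writable for mysql:mysql. For instance, moving the temp directory (paths are illustrative):

        # in /etc/mysql/my.cnf
        [mysqld]
        tmpdir = /data/tmp

        # the directory must exist and be writable by the mysql user
        sudo mkdir -p /data/tmp
        sudo chown mysql:mysql /data/tmp
        sudo service mysql restart

    Without the chown, mysqld fails with a permission error (errno 13) when it tries to use the directory.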

  • How do I improve my incremental-backup performance?

    - by Alistair Bell
    I'm currently using the traditional rsync+cp -al method to create incremental/snapshot backups of our server tree. The backups are going onto a pair of eight-disk towers connected to the backup machine (a Sandy Bridge machine with 16 GB of RAM, running CentOS 5.5) via four eSATA connections (four disks per connection). Each disk is a regular 2 TB disk, so we have 32 TB of disk space connected to the backup machine. We're backing up about 20 TB of data on the servers with this.

    The problem is that each daily backup is taking more than 24 hours, and the real time-killer isn't the actual rsync, but the time it takes to perform a cp -al of the tree locally on the backup machine. It's taking more than 12 hours just to make the shadow copy of the tree, and as far as I can tell the performance backlog is at the disk (top shows the cp using a lot of RAM but not a lot of CPU, and mostly in uninterruptible-sleep state). We have the server data split into four major volumes (and a few minor ones), and each of these backups runs in parallel (with some offsets in the cron to try to get some disks' cp done first). There are two volumes on the backup drive, both striped LVM volumes of 16 TB each.

    So obviously I need to improve the performance, because it's unusable as it stands. The first question is: when CentOS 6 comes out, with support for btrfs, will making snapshots of subvolumes with btrfs substantially increase this performance? The second is: is there a way, with ext3 or something else supported in CentOS 5 or 6, to 'encourage' it to put the directories/inodes in one part of a volume (which could happen to be the part that's on an SSD, via LVM) and the files in another? That would presumably solve the problem, but I don't know of ways to hint ext3 like that.
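
    A hedged alternative worth benchmarking before any filesystem change: rsync can build the hard-link snapshot itself via --link-dest, which collapses the separate cp -al pass and the rsync into a single traversal of the tree (directory layout below is illustrative):

        # daily.1 is yesterday's snapshot; unchanged files become hard links into it
        rsync -a --delete --link-dest=/backup/volume1/daily.1 \
            server:/volume1/ /backup/volume1/daily.0/

    This reads the tree once instead of twice and writes far less metadata, which tends to help exactly the seek-bound workload described above.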

  • MySQL socket connections working, but not port connections

    - by Neil
    I installed MySQL Community 5.1.45 on my Snow Leopard 10.6 machine, using the pkg from their site. I had previously installed a MySQL binary from entropy.ch. In the previous installation, the connections were working fine before I upgraded to Snow Leopard. In Snow Leopard, both installations are problematic.

    Using an app called Sequel Pro, if I connect with the socket option, it connects properly. However, a standard connection with the same credentials doesn't work. From what I've understood, socket connections happen on the machine itself between processes, whereas normal connections occur over the network/ports, in this case a loopback to my machine, since the server and client are both on the same machine. My new CakePHP installation isn't able to connect to the db with the root credentials I provided.

    By the way, I've been starting the MySQL server using the Preference Pane. When I tried running mysqld from the terminal, it gave me:

        100323  1:54:37 [Warning] Can't create test file /usr/local/mysql-5.1.45-osx10.6-x86_64/data/mbp.lower-test
        100323  1:54:37 [Warning] Can't create test file /usr/local/mysql-5.1.45-osx10.6-x86_64/data/mbp.lower-test
        mysqld: Can't change dir to '/usr/local/mysql-5.1.45-osx10.6-x86_64/data/' (Errcode: 13)
        100323  1:54:37 [ERROR] Aborting
        100323  1:54:37 [Note] mysqld: Shutdown complete

    mbp is the name of my machine. How do I fix this so that my web server can connect to the MySQL server?
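
    Two hedged checks that fit these symptoms. First, socket-works-but-TCP-doesn't is classic skip-networking or bind-address trouble, which can be tested against the running server:

        # force a TCP connection instead of the socket
        mysql -h 127.0.0.1 -P 3306 -u root -p
        # see whether networking is disabled or bound elsewhere (file location assumed)
        grep -E 'skip-networking|bind-address' /etc/my.cnf

    Second, the Errcode 13 above is just a permission error: running mysqld by hand as your own user cannot enter the data directory owned by the MySQL user, so that test is misleading rather than a separate fault. One way to start it by hand without hitting it:

        sudo /usr/local/mysql/bin/mysqld_safe --user=mysql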

  • Amazon AWS EC2 + Puppet, get Puppet to know AWS instance tags

    - by Piotr Jasiulewicz
    I am having a problem with my AWS deployment; I'm fairly new to AWS and Puppet. So, coming to my question: can you distinguish Puppet nodes with AWS machine tags or CNAME domains?

    A little background about the plan:

    - have multiple clusters of machines: one PHP cluster, one legacy PHP cluster, one Java cluster, one Perl cluster
    - control configuration with Puppet; still pretty new to Puppet, but as a developer I like the idea of being able to version-control the configuration of servers
    - have autoscaling enabled on those clusters; obviously the main benefit of the cloud, which makes the much higher cost for any reasonable performance worth it (those Amazon machines are slower than my phone...)
    - deployment controlled by Capistrano, which makes things a lot easier

    So in AWS you get those super nasty public/private machine DNS names... no way you can identify machines by those. To ease the problem, it seems AWS wants you to tag everything, so I did. I found a script that makes a CNAME record for each machine from the tag "ShortName", thanks to the Route53 API. Every machine has a ShortName tag that becomes its CNAME; unfortunately Puppet still resolves the private DNS name. I'd like to have node 'perl-cluster'{} in Puppet. Anyone any clue how to achieve this? Thanks
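
    A hedged sketch of the usual approach: rather than renaming nodes, expose the tag to Puppet as a fact and branch on that. With Facter's external-facts support (facts.d), a small script on each instance can publish the tag. Assumptions: the AWS CLI is installed with a default region configured, the instance role may call ec2:DescribeTags, and the file path and names are illustrative:

        #!/bin/bash
        # /etc/facter/facts.d/ec2_cluster.sh - emits key=value pairs Facter picks up
        INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
        CLUSTER=$(aws ec2 describe-tags \
            --filters "Name=resource-id,Values=$INSTANCE_ID" "Name=key,Values=ShortName" \
            --query 'Tags[0].Value' --output text)
        echo "cluster=$CLUSTER"

    Manifests can then select on the fact instead of the hostname, e.g. case $::cluster { 'perl': { include role::perl } }, which survives autoscaling replacing instances with fresh DNS names.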

  • When I log on to my company desktop, I log on to a domain. How is this domain name installed?

    - by learnerforever
    Hi, when I work on my machine in the company, I have noticed that I log on to a domain (named on the basis of the company name) and not really on to that computer. From what I understand, this has a few advantages, the primary one being that I just need one password for the domain and can work through any of the machines in the company. My questions are:

    1. What software has to be installed on the desktop/network so that the desktop recognizes the domain and gives me the option of logging into it? I would guess that software can be installed on the desktop where we can configure the IP address of the company's domain server and the port number which handles authentication. Is this correct?

    2. This takes me to another question: how is software installed on the end machines in a company? Going to each machine physically and installing it looks very unwieldy from an administrator's point of view. An obvious solution would be to install software (and updates) over the network. What protocols and keywords come into the picture when an administrator installs the OS, software, and updates from his administrator machine to an end machine through the network?

    Thanks,
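
    A hedged pointer on both counts, with Windows and Active Directory as the assumed environment: domain logon is built into Windows itself; no extra client software is needed, the machine just has to be joined to the domain, after which it locates domain controllers via DNS and authenticates with Kerberos. Joining can be done from the command line (domain and account names are placeholders):

        netdom join %COMPUTERNAME% /domain:example.com /userd:EXAMPLE\admin /passwordd:*

    For the second question, the common mechanisms in AD shops are Group Policy Software Installation, WSUS for updates, and tools like SCCM, which push MSI packages over the network, e.g.:

        msiexec /i \\fileserver\deploy\app.msi /qn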

  • MAC addresses on dual-NIC mainboards

    - by Tom O'Connor
    Here's a weird problem. We've got a number of devices with dual-NIC mainboards. Some are Realtek NICs, which suck; some are Intel e1000s, which don't.

    I've just noticed on two machines (one with an Intel NIC, one with a Realtek) that when I put the MAC address of one machine into the dhcpd.conf file on our DHCP server to get it to PXE boot into a rebuild environment, initially everything is fine. The server gets a DHCP allocation and PXE boots into the Ubuntu preseed environment. On one or two machines, it gets as far as Ubuntu's DHCP network configuration, and fails. If I pull up a BusyBox shell (on tty2 on the installing machine) and run ip link, I can see that the UP flag is set on the other NIC.

    Here's the dhcpd.conf entry:

        host xeon16-ghz240-gb48-node1 {
            hardware ethernet BC:AE:C5:07:1F:18;
            filename "pxelinux.0";
            next-server 192.168.123.80;
        }

    This is what ip link on the evil machine looks like [screenshot in original]. Only one NIC is actually connected (deliberately). As you can see, the NIC that's in the dhcpd config is not marked as UP, and the link that is UP isn't the one in DHCP.

    So far I've seen this on two brands of dual-NIC configuration. Does anyone know a) what's causing it, and b) what we can do about it?
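
    A hedged workaround at the installer level: debian-installer can be told which interface to use for DHCP instead of picking the first one it finds, either in the preseed file or on the pxelinux kernel command line (classic ethX interface naming assumed):

        # in the preseed file
        d-i netcfg/choose_interface select eth0

        # or as a boot parameter in pxelinux.cfg
        append ... interface=eth0

    Alternatively, netcfg/choose_interface select auto makes d-i probe for the interface that actually has link, which sidesteps inconsistent NIC enumeration between the two onboard chips.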

  • Windows Terminal Server: occasional memory violation for applications

    - by syneticon-dj
    On a virtualized (ESXi 4.1) Windows Server 2008 SP2 32-bit machine which is used as a terminal server, I occasionally (approximately 1-3 event log entries a day) see applications fail with an 0xc0000005 error, apparently a memory access violation. The problem seems quite random and is only poorly reproducible; applications may run for hours, fail with 0xc0000005 and restart quite fine, or just throw the access violation at startup and start flawlessly at the second attempt.

    The names of executables, modules and offset addresses vary, although a single executable tends to fail with the same modules and the same memory offset addresses (like OUTLOOK.EXE repeatedly failing on module olmapi32.dll with the offset 0x00044b7a), even across multiple users' logons and with several days passing without a single failure in between. The offset addresses seem to change across reboots, however. Only certain executables seem affected by the problem, although I may simply not be seeing a sufficient number of application runs from the other ones.

    I first suspected a possible problem with the physical machine's RAM, but ruled this out as a rather unlikely cause: the memory comes with ECC and I've already moved the virtual machine across hosts several times, without any perceptible change. I saw that DEP was enabled in "OptOut" mode on this machine:

        C:\Users\administrator>wmic OS Get DataExecutionPrevention_SupportPolicy
        DataExecutionPrevention_SupportPolicy
        3

    and tried changing the policy to OptIn via startup options:

        bcdedit.exe /set {current} nx OptIn

    but have yet to see any effect. I would also expect Outlook 12 or Adobe Reader 9 (both affected applications) to play well with DEP. Any other ideas why the apps may be failing?
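
    A small hedged check before ruling DEP in or out: the bcdedit change only takes effect after a reboot, and the resulting state can be verified both from the boot configuration and from WMI again:

        bcdedit /enum {current}
        wmic OS Get DataExecutionPrevention_SupportPolicy

    A SupportPolicy value of 2 corresponds to OptIn (3 is OptOut), so the policy can be confirmed as applied before drawing conclusions from continued crashes.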

  • openssl client authentication error: tlsv1 alert unknown ca: ... SSL alert number 48

    - by JoJoeDad
    I've generated a certificate using openssl and placed it on the client's machine, but when I try to connect to my server using that certificate, I get the error mentioned in the title back from my server. Here's what I've done.

    1) I do a test connect using openssl to see what the acceptable client certificate CA names are for my server. I issue this command from my client machine to my server:

        openssl s_client -connect myupload.mysite.net:443/cgi-bin/posupload.cgi -prexit

    and part of what I get back is as follows:

        Acceptable client certificate CA names
        /C=US/ST=Colorado/L=England/O=Inteliware/OU=Denver Office/CN=Tim Drake/emailAddress=[email protected]
        /C=US/ST=Colorado/O=Inteliware/OU=Denver Office/CN=myupload.mysite.net/emailAddress=[email protected]

    2) Here is what is in the Apache configuration file on the server regarding SSL client authentication:

        SSLCACertificatePath /etc/apache2/certs
        SSLVerifyClient require
        SSLVerifyDepth 10

    3) I generated a self-signed client certificate called "client.pem" using mypos.pem and mypos.key, so when I run this command:

        openssl x509 -in client.pem -noout -issuer -subject -serial

    here is what is returned:

        issuer= /C=US/ST=Colorado/O=Inteliware/OU=Denver Office/CN=myupload.mysite.net/emailAddress=[email protected]
        subject= /C=US/ST=Colorado/O=Inteliware/OU=Denver Office/CN=mlR::mlR/emailAddress=[email protected]
        serial=0E

    (please note that mypos.pem is in /etc/apache2/certs/ and mypos.key is saved in /etc/apache2/certs/private/)

    4) I put client.pem on the client machine, and on the client machine I run the following command:

        openssl s_client -connect myupload.mysite.net:443/cgi-bin/posupload.cgi -status -cert client.pem

    and I get this error:

        CONNECTED(00000003)
        OCSP response: no response sent
        depth=1 /C=US/ST=Colorado/L=England/O=Inteliware/OU=Denver Office/CN=Tim Drake/emailAddress=[email protected]
        verify error:num=19:self signed certificate in certificate chain
        verify return:0
        574:error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca:/SourceCache/OpenSSL098/OpenSSL098-47/src/ssl/s3_pkt.c:1102:SSL alert number 48
        574:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:/SourceCache/OpenSSL098/OpenSSL098-47/src/ssl/s23_lib.c:182:

    I'm really stumped as to what I've done wrong. I've searched quite a bit on this error, and what I found is that people say the issuing CA of the client's certificate is not trusted by the server, yet when I look at the issuer of my client certificate, it matches one of the acceptable CAs returned by my server. Can anyone help, please? Thank you in advance.
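
    One hedged thing to verify on the server: mod_ssl's SSLCACertificatePath only picks up CA certificates through hash-named symlinks, so after dropping a CA file into /etc/apache2/certs the hashes need to be (re)built, and the certificate that actually signed client.pem (the mypos CA) must be among the hashed files:

        # build the hash symlinks mod_ssl looks up during verification
        sudo c_rehash /etc/apache2/certs
        # or manually, for a single CA file:
        ln -s mypos.pem "$(openssl x509 -hash -noout -in mypos.pem).0"
        sudo service apache2 reload

    Also note the depth=1 "self signed certificate in certificate chain" lines come from s_client verifying the server's chain without a local trust store (adding -CAfile would quiet them); the decisive failure is the server's alert number 48.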

  • ReadyBoost in Windows 7

    - by Robert Koritnik
    I bought an SD card today for my photo frame, but when I inserted it into my notebook I saw I could use it for ReadyBoost.

    Some background: I'm a .NET developer, using VMs and developing web applications (and SharePoint). I use an HP notebook with a Core 2 Duo 2GHz, 4GB RAM and a 320GB 7200rpm HD. I simultaneously run:

    - Visual Studio 2010 with some plugins
    - SQL Server
    - Firefox with at least 10 tabs
    - Chrome with about 5 tabs
    - IIS
    - a VM with a Server 2008 machine and SharePoint

    and occasionally also Photoshop and some InDesign as well. So I don't let my machine have a break. :D

    Question: if I buy myself some really fast SDHC card (like a SanDisk 16GB Extreme 30MB/s; is there anything faster?) and use it with Windows 7 ReadyBoost, will I see any performance gain? Is it going to work something like Seagate's Momentus hybrid drive with 4GB of solid state storage? What could I actually expect if I do put this card into my machine? And what would be the recommended size?

    Observations: I guess redirecting the page file to it would speed up the system. Some VM machines on it would probably run faster as well, because they could run parallel to the HD host system, I guess.

  • SQL Server on Linux

    - by TimothyAWiseman
    For a particular project that is coming up, I am trying to expand my knowledge of Linux, so I am going to set up a Linux system at home. Rather than dual booting, I am thinking about putting SQL Server on a Windows virtual machine with Linux as the host, at least until this project is over, when I will probably switch back to Linux. So, I have a couple of different but interrelated questions:

    1. How well does this work? This is only a test machine at home, so I can easily accept a fair bit of degradation, but if it is going to be a horrible reduction in performance I will dual boot instead.

    2. Is there a particular virtual machine manager I should look at to go this route? Since this is my personal machine, price is an issue, but I am quite happy to pay a reasonable amount.

    3. And finally, given the choice of VMM, is there a particular Linux distro I should be looking at?

    [This has been cross-posted at Ask.SqlServerCentral.com. I think it may be appropriate at both sites.]

  • Is it worth using a load balancer on a web server/website?

    - by user427969
    I have a website, and a while ago the web server of the company hosting my website was down for about a day. I consulted the company for a solution to stop this from happening in future, and they suggested having a second machine, connected to my current website/web server by a "load balancer" (at an additional huge cost!!!). The second machine would be a replica of the first, so if one goes down, the other will always be running.

    ---- Explanation -----

    My hosting company suggested that it would be a good idea to have a second machine running at the same time, with both machines connected by a load balancer, which reduces the risk of downtime. The second machine would be a mirror of the first, and any changes to the first must be replicated on the second.

    I don't mind spending money if it really saves my website from going down. I want to know: is it worth having this "load balancer" for my purpose? My website is a 24/7 service. I cannot afford an outage of 24 hours, or even 1 hour. I don't mind using this "load balancer" as long as it is really worth it. I am not sure if it's just a marketing trick of my hosting company or really the "best" solution. Thanks for help. Regards

  • Using mixed disks and OpenFiler to create RAID storage

    - by Cylindric
    I need to improve my home storage to add some resilience. I currently have four disks, as follows:

        D0: 500Gb (System, Boot)
        D1: 1Tb
        D2: 500Gb
        D3: 250Gb

    There's a mix of partitions on there, so it's not JBOD, but data is pretty spread out and not redundant. As this is my primary PC and I don't want to give up the entire OS to storage, my plan is to use OpenFiler in a VM to create a virtual SAN. I will also use Windows software RAID to mirror the OS. Partitions will be created as follows:

        D0 P1: 100Mb: System-Reserved Boot
        D0 P2: 50Gb:  Virtual Machine VMDKs for OS
        D0 P3: 350Gb: Data
        D1 P1: 100Mb: System-Reserved Boot
        D1 P2: 50Gb:  Virtual Machine VMDKs for OS
        D1 P3: 800Gb: Data
        D2 P1: 450Gb: Data
        D3 P1: 200Gb: Data

    This will result in:

    - Mirrored boot partition
    - Mirrored operating system
    - Mirrored virtual machine OS disks
    - Four partitions for data

    In the four data partitions I will create several large VMDK files, which I will "mount" into OpenFiler as block-storage devices, combined into three RAID arrays (due to the differing disk sizes). In effect, I'll end up with the following usable partitions:

        SYSTEM  100Mb  the small boot partition created by the Windows 7 installer (RAID-1)
        HOST    50Gb   the Windows 7 partition (RAID-1)
        GUESTS  50Gb   virtual machine guest VMDKs (RAID-1)
        VG1     900Gb  volume group consisting of a RAID-5 and two RAID-1
        VG2     300Gb  volume group consisting of a single disk

    On VG1 I can dynamically assign storage for my media, photographs, documents, whatever, and it will be safe. On VG2 I can dynamically assign storage for data that is not critical, and easily recoverable, as it is not safe.

    Are there any particular 'gotchas' when implementing a virtual OpenFiler like this? Is the recovery process for a failing disk going to be very problematic? Thanks.
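
    Since the disk-failure question comes up at the end, a hedged rehearsal idea: assuming OpenFiler's software RAID is Linux md underneath (as it normally is), a member failure can be simulated inside the OpenFiler VM before trusting the arrays with real data (device names are illustrative):

        # mark one member as failed, watch the array degrade, then rebuild
        mdadm --manage /dev/md0 --fail /dev/sdb1
        mdadm --detail /dev/md0
        mdadm --manage /dev/md0 --remove /dev/sdb1
        mdadm --manage /dev/md0 --add /dev/sdb1

    Rehearsing this answers the "how problematic is recovery" question empirically for this particular layered VMDK-on-disk setup.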

  • There's a stray current flowing from my monitor through the VGA cable to the PC. Is this safe?

    - by EApubs
    I have two monitors on my machine. One is an old LCD Samsung monitor. Recently, I started to hear a small hum in my speakers (subwoofer) and replaced them. The new ones got the issue too, and then I found out that it's a grounding issue. I unplugged the PC's power cord while the monitor was still switched on. When I checked, there was current on the earth pin (ground pin). When I unplug the monitor, there's no current and the speaker is normal.

    Now I have moved that monitor to my dad's machine and taken his monitor. My question is: is this a big issue? The house's earthing system is working and it's grounding the current. I won't feel it if I touch the machine, like in many other cases. But still, is it good to keep that monitor attached to my machine? Can it harm the computer? What should I do?
