Search Results

Search found 20426 results on 818 pages for 'service packs'.

Page 705 of 818

  • [tcpdump] Proxy delegate refusing connection?

    - by simtris
    Hi guys, I'm a little disappointed! My aim was to build a VERY simple SMTP proxy under Debian to take mail on one port (51234) and forward it to the standard port 25. I compiled and installed "delegate", which can handle that easily. It works very well like this:

        delegated SERVER="smtp://anotherSmtpServer:25" -P51234

    The strange thing is, it works on my virtual test machine and on the dedicated server locally, but I can't manage to use it over the internet. I test it like this:

        telnet [mySrv] 51234

    Of course: no firewall, no DenyHosts, no inetd/xinetd, and the delegated service is listening on the right port... Two clues: the port answers over the internet to nmap as "51234/tcp open tcpwrapped", and have a look at the following tcpdump:

        22:50:54.864398 IP [myIp].1699 > [mySrv].51234: S 2486749330:2486749330(0) win 65535
        22:50:54.864449 IP [mySrv].51234 > [myIp].1699: S 2486963525:2486963525(0) ack 2486749331 win 5840
        22:50:54.948169 IP [myIp].1699 > [mySrv].51234: . ack 1 win 64240
        22:50:54.965134 IP [mySrv].43554 > [myIp].auth: S 2485396968:2485396968(0) win 5840
        22:50:55.243128 IP [myIp] > [mySrv]: ICMP [myIp] tcp port auth unreachable, length 68
        22:50:55.249646 IP [mySrv].51234 > [myIp].1699: F 1:1(0) ack 1 win 46
        22:50:55.309853 IP [myIp].1699 > [mySrv].51234: . ack 2 win 64240
        22:50:55.310126 IP [myIp].1699 > [mySrv].51234: F 1:1(0) ack 2 win 64240
        22:50:55.310137 IP [mySrv].51234 > [myIp].1699: . ack 2 win 46

    The "auth" part seems suspect to me but didn't ring a bell. I could certainly do with some help. Thanks a lot!
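    The [mySrv].43554 > [myIp].auth SYN is an ident callback (the "auth" service, RFC 1413, TCP port 113), which TCP wrappers makes before admitting a connection; that is consistent with nmap reporting the port as "tcpwrapped". A diagnostic sketch, assuming a stock Debian box (the "delegated" entry name in hosts.allow is an assumption):

        # was delegate built against TCP wrappers, and is it being denied?
        grep -i delegate /etc/hosts.allow /etc/hosts.deny
        echo "delegated: ALL" >> /etc/hosts.allow
        # or make ident callbacks fail fast on the client so the wrapper doesn't stall
        iptables -A INPUT -p tcp --dport 113 -j REJECT --reject-with tcp-reset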

  • Windows 7 SSH file server

    - by Siriss
    Hello all - I have looked at the other posts but have not quite found an answer, so here is my question about Windows file sharing over SSH. I have copssh installed and it is working for remote desktop connections. I have port 22 forwarded on my router, etc. I connect from a Mac or PuTTY with this command:

        ssh -l copsshusername -L 3391:localhost:3389 [external ip]

    That works fine. I would like to configure Windows 7 to give the SSH account I log in with access to certain shared folders. I have documents and videos and things that I would like to be able to download externally. I have done this before on Linux, and a long time ago on XP, but I cannot figure out what I am missing on Windows 7. There is a designated SSH user that copssh uses to run the service and that I use to log in as. I have googled and googled and have not found a solution that does everything I need, which is why I am turning here for ideas. I hope I am explaining this correctly. Thank you very much for your help!
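    Two approaches that usually cover this, sketched under the assumption that the copssh build includes the stock OpenSSH sftp subsystem:

        # option 1: skip SMB entirely and pull files over SFTP
        sftp copsshusername@[external ip]
        # option 2: tunnel the Windows file-sharing port and mount it through the tunnel
        ssh -l copsshusername -L 13445:localhost:445 [external ip]

    With option 2 the Mac end would mount smb://localhost:13445/ShareName; the share itself still has to be granted to the copssh user in Windows (right-click the folder, Properties, Sharing).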

  • How to create an init.d script for openssh-server which was compiled and installed from source using configure + make + make install?

    - by Patrick L
    I have installed openssh-server on my Ubuntu PC using apt-get install openssh-server. The version is 5.9. Now I would like to compile and install openssh-server version 6.2 from source. I have successfully downloaded the source code and run the following commands:

        ./configure
        make
        make install

    I found that the new version of openssh-server was installed into /usr/local/sbin/, while the old version is in /usr/sbin/. The service script in /etc/init.d/ssh still points to /usr/sbin/, and the old openssh-server (v5.9) is still running. How can I replace the old openssh-server with the new one that I have just compiled and installed? How can I create an init.d script to manually start and stop the new openssh-server? How do I start the new openssh-server on boot? When I install openssh-server using apt-get install, the config files are installed into /etc/ssh/. If I compile and install it from source, where is the config file? And if I compile openssh-server from source but install the openssh-client package with apt-get install, will the config files conflict? Thanks.
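    A minimal init.d sketch for the source-built daemon, assuming ./configure's defaults of /usr/local/sbin/sshd for the binary (an assumption; the actual paths are in config.log):

        #!/bin/sh
        ### BEGIN INIT INFO
        # Provides:          ssh-local
        # Required-Start:    $remote_fs $syslog
        # Required-Stop:     $remote_fs $syslog
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        ### END INIT INFO
        SSHD=/usr/local/sbin/sshd
        PIDFILE=/var/run/sshd-local.pid
        case "$1" in
          start)
            start-stop-daemon --start --pidfile $PIDFILE --exec $SSHD -- -o PidFile=$PIDFILE
            ;;
          stop)
            start-stop-daemon --stop --pidfile $PIDFILE
            ;;
          restart)
            $0 stop
            sleep 1
            $0 start
            ;;
          *)
            echo "Usage: $0 {start|stop|restart}"
            exit 1
            ;;
        esac

    Registering it for boot would then be update-rc.d ssh-local defaults; the packaged sshd should be disabled first so the two daemons don't fight over port 22.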

  • PTR and A record must match?

    - by somecallmemike
    RFC 1912, section 2.1, states the following:

        Make sure your PTR and A records match. For every IP address, there should be a matching PTR record in the in-addr.arpa domain. If a host is multi-homed (more than one IP address), make sure that all IP addresses have a corresponding PTR record (not just the first one). Failure to have matching PTR and A records can cause loss of Internet services similar to not being registered in the DNS at all. Also, PTR records must point back to a valid A record, not a alias defined by a CNAME. It is highly recommended that you use some software which automates this checking, or generate your DNS data from a database which automatically creates consistent data.

    This does not make any sense to me: should an ISP keep a matching A record for every PTR record? It seems to me that it only matters if the IP address the PTR record describes is hosting a service that is sensitive to DNS mismatches (such as email hosting). In that case the forward zone would be configured under a domain name (examples follow the format 'zone -> record'):

        domain.tld -> mail IN A 1.2.3.4

    And the PTR record would be configured to match:

        3.2.1.in-addr.arpa -> 4 IN PTR mail.domain.tld.

    Would there be any reason for the ISP to host a forward lookup for an IP address on their network like this?

        ispdomain.tld -> broadband-ip-1 IN A 1.2.3.4
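    As an aside, this "forward-confirmed reverse DNS" loop is easy to test from a shell; a sketch using the post's example names:

        host 1.2.3.4                        # expect: ... pointer broadband-ip-1.ispdomain.tld
        host broadband-ip-1.ispdomain.tld   # expect: ... has address 1.2.3.4

    Mail operators typically check only that the PTR target resolves back to the same IP, which is the consistency the RFC paragraph is driving at.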

  • How far should we take the N+N redundancy craziness?

    - by Brann
    The industry standard when it comes to redundancy is quite high, to say the least. To illustrate my point, here is my current setup (I'm running a financial service). Each server has a RAID array in case something goes wrong on one hard drive... and in case something goes wrong on the server, it's mirrored by a spare identical server... and both servers cannot go down at the same time, because I've got redundant power and redundant network connectivity... and my hosting center itself has dual electricity connections to two different energy providers, redundant network connectivity, and redundant toilets in case the two security guards (sorry, four) need to use them at the same time... and in case something goes wrong anyway (a nuclear strike? I can't think of anything else), I've got another identical hosting facility in another country with the exact same setup.

        Cost of reputational damage if down: very high
        Probability of a hardware failure with my setup: <<1%
        Probability of a hardware failure with a less paranoid setup: <<1% as well
        Probability of a software failure in our application code: 1%

    (If your software is never down because of bugs, then I suggest you double-check that your reporting/monitoring system is not down. Even SQL Server, which is arguably developed and tested by clever people with a strong methodology, is sometimes down.) In other words, I feel like I could host a cheap laptop in my mother's flat and the human/software problems would still be my highest risk. Of course, there are other things to take into consideration, such as:

        scalability
        data security
        the clients' expectation that you meet the industry standard

    But still, hosting two servers in two different data centers (without extra spare servers, and with no doubled network equipment apart from what my hosting facility provides) would give me the scalability and the physical security I need. I feel like we're reaching a point where redundancy is just a communication tool. Honestly, what's the difference between 99.999% uptime and 99.9999% uptime when you know you'll be down 1% of the time because of software bugs? How far do you push your redundancy craziness?
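    To put those uptime figures side by side (back-of-the-envelope numbers, not from the post):

        downtime per year = (1 - uptime) x 525,600 minutes
        99.9999% -> ~0.5 minutes/year
        99.999%  -> ~5.3 minutes/year
        99%      -> ~5,256 minutes/year (~3.7 days)

    So the hardware tiers being compared differ by a few minutes a year, while the assumed 1% software failure rate costs days.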

  • My Reverse DNS PTR record seems to be right, but I'm still getting bounced email

    - by johnbr
    Hello, I have a service (statusme.com) where I let people know (for example) that their kids' soccer games are cancelled because of bad weather. We send out emails to the people who have registered. I have a second server as a backup (vps.statusme.com), and I've set up the application to send some of the email through the second server. But I'm getting complaints from various recipient SMTP servers that the email is considered spam. So I did some investigating, and it appears that they think my reverse DNS record isn't correct. But when I look at it via various rDNS websites and instructions I found elsewhere on ServerFault, everything looks correct:

        jb$ host -t a vps.statusme.com 8.8.8.8
        Using domain server:
        Name: 8.8.8.8
        Address: 8.8.8.8#53
        Aliases:
        vps.statusme.com has address 66.84.8.246

        jb$ host -t ptr 246.8.84.66.in-addr.arpa 8.8.8.8
        Using domain server:
        Name: 8.8.8.8
        Address: 8.8.8.8#53
        Aliases:
        246.8.84.66.in-addr.arpa domain name pointer vps.statusme.com.

    I'm confused about what I'm doing wrong. Thanks for any suggestions.
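    Since the forward and reverse lookups already agree, two other common reasons for spam complaints are worth ruling out; a diagnostic sketch (postfix is an assumption, and the SPF record shown is only an example):

        # does the HELO/EHLO name the backup server announces match vps.statusme.com?
        postconf myhostname smtp_helo_name    # assumption: postfix MTA
        # does the sending domain's SPF record authorize the second server's IP?
        dig +short TXT statusme.com           # e.g. "v=spf1 a mx ip4:66.84.8.246 ~all"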

  • Django on Apache + mod_wsgi with virtualenv

    - by ArgsKwargs
    I have some questions about running multiple Django sites on a VPS. I have a server that uses OpenPanel to automatically create VirtualHosts within apache2. My ideal situation is multiple virtualenvs with different dependencies installed, so the Python dist-packages directory isn't contaminated across Django sites. For example:

        /home/user/virtualenv1
        /home/user/virtualenv2

    My Django applications reside at /var/www, for example:

        /var/www/djangosite1
        /var/www/djangosite2

    Having read the OpenPanel docs, the best thing to do seems to be creating a django.conf file inside the mydomain.com.inc folder, i.e. /etc/apache2/openpanel.d/mydomain.com.inc/django.conf, which looks something like:

        DocumentRoot /var/www/djangosite1/project
        WSGIScriptAlias / /var/www/djangosite1/project/wsgi.py
        WSGIDaemonProcess mydomain python-path=/home/user/virtualenv1/lib/python2.6/site-packages
        <Directory /var/www/djangosite1/project>
            Order allow,deny
            Allow from all
        </Directory>
        Alias /static /var/www/djangosite1/project/static-root

    Now my problem is that this setup seems unable to find the virtualenv site-packages, thus not recognizing any dependencies installed in the given virtualenv. Also, commenting out the following line doesn't seem to break or change a thing:

        WSGIDaemonProcess mydomain python-path=/home/user/virtualenv1/lib/python2.6/site-packages

    For example:

        > service apache2 start
        ImportError: No module named South

    When I install South outside the virtualenv, everything works.
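    One detail that would explain why commenting the line out changes nothing: mod_wsgi only runs the application inside the daemon process if a WSGIProcessGroup directive points at the WSGIDaemonProcess; otherwise the app runs in Apache's embedded interpreter and the python-path is never consulted. A sketch (directive names are standard mod_wsgi; paths are the post's own):

        WSGIDaemonProcess mydomain python-path=/var/www/djangosite1:/home/user/virtualenv1/lib/python2.6/site-packages
        WSGIProcessGroup mydomain
        WSGIScriptAlias / /var/www/djangosite1/project/wsgi.py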

  • Transferring an SQL Processor License to a virtual hosted environment

    - by Andrew Shepherd
    My company is currently hosting a service in-house, and we want to move to an externally hosted environment. We would then be using a virtual server. I understand that this might be spread across multiple machines, but from my perspective as a customer that layer is abstracted away; I shouldn't know or care about the hardware the OS is hosted on. We have a licensed edition of SQL Server 2008, one Processor license. Would it be a violation of the licensing agreement to use this in a virtual environment? The reference guide here says:

        When licensed Per Processor: With Workgroup, Web, and Standard editions, for each server to which you have assigned the required number of per processor licenses, you may run, at any one time, any number of instances of the server software in physical and virtual operating system environments on the licensed server. However, the total number of physical and virtual processors used by those operating system environments cannot exceed the number of software licenses assigned to that server.

    For Enterprise edition there is an added option:

        If all physical processors in a machine have been licensed, then you may run unlimited instances of SQL Server 2008 in one physical and an unlimited number of virtual operating environments on that same machine.

    I'm having trouble getting my head around this. Would I theoretically have to get a license for every processor in this virtual environment (which is effectively impossible, because I have no way of knowing how many processors there actually are)? Or can I just say that it's hosted on one "virtual" server, so that's OK?

  • libvirt's dnsmasq does not respond to DNS queries or provide DHCP

    - by Jeremy
    This is on Ubuntu 10.04 server, using KVM to run Ubuntu guests. This system has been working for a long time and I have not changed anything (other than applying security updates), but today I found that dnsmasq no longer responds to requests. I cannot say how long this has been broken, because I don't frequently use the NAT'd guests; it could have started just after the last updates or some other event, and I only found it now. I can connect to port 53 with telnet at 192.168.122.1. I've flushed iptables to be sure it wasn't firewall rules, and that is not the problem. dnsmasq is running, and virsh reports the default network as started. I can't find ANY information on troubleshooting libvirt's dnsmasq, except that it won't play well with other instances of dnsmasq, which is not the problem here. I cannot even find where log entries for this service might be. Any ideas on where to look for more information? Edit: I added another network and that one works fine, so I have a workaround, but I would still like to figure out how to troubleshoot this problem.
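    A few places to look, as a sketch (paths are typical libvirt defaults on 10.04 and may differ):

        ps aux | grep [d]nsmasq                  # the exact flags libvirt started it with
        virsh net-dumpxml default                # what the network definition says
        dig @192.168.122.1 ubuntu.com            # query it directly...
        tcpdump -ni virbr0 port 53 or port 67    # ...while watching the bridge
        tail /var/log/daemon.log                 # dnsmasq normally logs via syslog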

  • How to record TV to a network share with Windows Media Center?

    - by Peterdk
    Well, you would think that Windows 7's new Media Center would be up to the task of recording your TV to a network share/drive. Too bad; it looks like it's just not possible. I have a Windows 2008 R2 server, and a Windows 7 machine with a TV card. Since my server has 2TB of storage, it would be nice to record directly to its networked drive (I mounted it as Z:). I tried the following:

        Selecting it in Media Center itself: not working, not available.
        Editing the registry: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Media Center\Service\Recording, setting RecordPath to Z:\TV. Not working.
        Editing the registry: setting RecordPath to \\server\TV. Not working.
        Creating a symlink (mklink /D) to Z:\TV and \\server\TV and setting that in the registry as RecordPath. Not working.

    Currently I am out of options. I could of course install Windows 7 on my server, but I have no license for that, and my Windows 2008 R2 is free from DreamSpark. Are there people who are successfully recording to a networked drive/storage? Edit: I also need to mention that I must be able to access the stored files from other PCs, like my laptop. So iSCSI is great for recording, but it looks like you can't access iSCSI devices from multiple PCs. Since sharing an iSCSI device is out of the question: are there workarounds to get this thing recording to my network drive?
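    For anyone repeating the registry experiment, it can be scripted; a sketch (the key path is the one from the post, and whether Media Center honours a UNC value there is exactly what's in question):

        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Media Center\Service\Recording" /v RecordPath /t REG_SZ /d "\\server\TV" /f
        net stop ehRecvr
        net start ehRecvr

    (ehRecvr is the Media Center Receiver service; restarting it to pick up the new value is an assumption.)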

  • ApplicationPoolIdentity IIS 7.5 to SQL Server 2008 R2 not working

    - by Jack
    I have a small ASP.NET test script that opens a connection to a SQL Server database on another machine in the domain. It isn't working in all cases. Setup: IIS 7.5 under W2K8R2, trying to connect to a remote SQL Server 2008 R2 instance. All machines are in the same domain. Using the ApplicationPoolIdentity for the web site, it fails to connect to SQL Server with the following:

        Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.Data.SqlClient.SqlException: Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.

    However, if I switch the process model identity to NETWORK SERVICE or my domain account, the database connection is successful. I've granted the machine account (the $ login) access in SQL Server. I am not doing any sort of authentication on the web site; it is just a simple script to open a connection to a database to make sure it works. I have Anonymous Authentication enabled and set to use the application pool identity. How do I make this work? Why is the ApplicationPoolIdentity trying to use ANONYMOUS LOGON? Better yet, how do I make it stop using the anonymous logon?
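    Background that may help frame it: over the network, an ApplicationPoolIdentity (like NETWORK SERVICE) authenticates as the web server's computer account, so ANONYMOUS LOGON usually means the negotiation fell back to a null session (often a Kerberos SPN problem) rather than a missing grant. A sketch of the checks, with placeholder names:

        REM list the SPNs registered for the web server's computer account
        setspn -L MYDOMAIN\WEBHOST$
        REM re-create the SQL login, in case the earlier grant targeted something else
        sqlcmd -S SQLHOST -E -Q "CREATE LOGIN [MYDOMAIN\WEBHOST$] FROM WINDOWS"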

  • Exchange 2003: recover mailbox when migrated to 2007

    - by lyngsie
    We have migrated almost all of our mailboxes from Exchange 2003 to 2007, so both are still up and co-existing in our domain. One mailbox is missing some mails, and I tried to recover it from an Exchange 2003 backup using ExMerge as usual, but it throws an error:

        Error opening message store (MSEMS). Verify that the Microsoft Exchange Information Store service is running and that you have the correct permissions to log on. (0x8004011d)

    This is apparently normal when a mailbox has been migrated to 2007 and you try to recover a 2003 backup, as stated here: http://www.petri.co.il/restoring-exchange-2003-mailboxes-exchange-2007-exmerge.htm. We then tried the solution given in the article (copying the value "homemdb" for the user and pasting it into "msExchOrigMDB" on the Recovery Storage Group object in ADSI Edit). The error unfortunately persists. So my question is: how can I extract a .pst from a mailbox in the recovery storage group when both Exchange 2003 and 2007 are running? I assume this is why the solution in the article didn't work. If no solution is possible using the regular tools (ExMerge, eseutil, etc.), which third-party tool should we choose - preferably the cheapest, as we only need this one mailbox recovered.

  • 503 Error After Microsoft Application Request Routing Is Installed - 32-bit/64-bit madness

    - by KenB
    I have a requirement to install the Microsoft Application Request Routing component for IIS 7.5 running on a Windows 2008 R2 SP1 64-bit machine. After installing Application Request Routing via the Web Platform Installer, our ASP.NET 4.0 application gets "HTTP Error 503. The service is unavailable." The Windows event log error details say:

        The Module DLL 'C:\Program Files\IIS\Application Request Routing\requestRouter.dll' could not be loaded due to a configuration problem. The current configuration only supports loading images built for a AMD64 processor architecture. The data field contains the error number. To learn more about this issue, including how to troubleshooting this kind of processor architecture mismatch error, see http://go.microsoft.com/fwlink/?LinkId=29349.

    I can make this error go away by changing the application pool to run in 32-bit mode, i.e. setting "Enable 32-Bit Applications" to true. However, I would prefer not to do that. My questions are: Why is the Application Request Routing feature trying to load a 32-bit version; isn't there a 64-bit version of it? How do I resolve this issue without switching my application pool to 32-bit mode?
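    For completeness, the 32-bit workaround can be applied from the command line, and it's worth checking which image the installer actually dropped (a sketch; the pool name is a placeholder):

        %windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /enable32BitAppOnWin64:true
        REM an x64 install should live under Program Files, not Program Files (x86)
        dir "C:\Program Files\IIS\Application Request Routing"
        dir "C:\Program Files (x86)\IIS\Application Request Routing"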

  • EventID 1058 Code 5, Sysvol is subdir of Sysvol - how to fix?

    - by nulliusinverba
    I have been trying to resolve this error, like many others:

        The processing of Group Policy failed. Windows attempted to read the file \\domain.local\SysVol\domain.local\Policies\{3EF90CE1-6908-44EC-A750-F0BA70548600}\gpt.ini from a domain controller and was not successful. Group Policy settings may not be applied until this event is resolved. This issue may be transient and could be caused by one or more of the following: a) Name Resolution/Network Connectivity to the current domain controller. b) File Replication Service Latency (a file created on another domain controller has not replicated to the current domain controller). c) The Distributed File System (DFS) client has been disabled. Error code: 5 = Access Denied.

    An incredibly helpful post is this one: http://www.experts-exchange.com/OS/Microsoft_Operating_Systems/Server/2003_Server/A_1073-Diagnosing-and-repairing-Events-1030-and-1058.html. Quoting its list of potential problems that can lead to 1030 and 1058 event errors:

        Sometimes the permissions of the file folders that contain Group Policies (the Sysvol folder) can be corrupted.
        Sometimes you have problems with NetBIOS.
        Sometimes the GPO itself is corrupt, or you have a partial set of data for that GPO.
        Sometimes you may have problems with File Replication Services, which almost always indicates a problem with DNS.
        Sysvol may be a subfolder of itself: Sysvol/Sysvol.

    I have the problem where sysvol is a subfolder of sysvol. The directory structure is:

        sysvol
          domain
          staging
          staging areas
          sysvol  (shared as "\\server\sysvol")
            domain.local
              ClientAgent
              Policies
              scripts

    Interestingly, the second sysvol folder is the one that is shared as "\\server\sysvol". This makes me confident the issue is with the permissions and error code 5. Also interestingly, my Server 2008 R2 servers can see it fine; my Server 2008 servers cannot, and get the error. This is consistent across all my servers, and it makes me uncertain what I need to do to fix this. Do I, for example, simply move the shared sysvol folder up a level to replace the non-shared one? Any help greatly appreciated. Cheers, Tim.
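    Before moving folders around: in a healthy 2008-era SYSVOL, that inner sysvol entry is a junction (reparse point) back to the domain folder, not a real directory, so it's worth confirming what it actually is (a sketch; paths are the Windows defaults):

        net share SYSVOL
        dir /aL C:\Windows\SYSVOL\sysvol    REM a junction shows up as <JUNCTION> domain.local [...]
        dcdiag /test:netlogons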

  • Should I expect ICMP transit traffic to show up when using debug ip packet with a mask on a Cisco IOS router?

    - by David Bullock
    I am trying to trace an ICMP conversation between 192.168.100.230/32, reached over an EZVPN interface (Virtual-Access3), and 192.168.100.20 on BVI4:

        # sh ip access-lists 199
            10 permit icmp 192.168.100.0 0.0.0.255 host 192.168.100.20
            20 permit icmp host 192.168.100.20 192.168.100.0 0.0.0.255
        # sh debug
        Generic IP:
          IP packet debugging is on for access list 199
        # sh ip route | incl 192.168.100
              192.168.100.0/24 is variably subnetted, 2 subnets, 2 masks
        C        192.168.100.0/24 is directly connected, BVI4
        S        192.168.100.230/32 [1/0] via x.x.x.x, Virtual-Access3
        # sh log | inc Buff
        Buffer logging: level debugging, 2145 messages logged, xml disabled,
        Log Buffer (16384 bytes):

    OK, so from my EZVPN client with IP address 192.168.100.230, I ping 192.168.100.20. I know the packet reaches the router across the VPN tunnel, because:

        policy exists on zp vpn-to-in
        Zone-pair: vpn-to-in
        Service-policy inspect : acl-based-policy
          Class-map: desired-traffic (match-all)
            Match: access-group name my-acl
            Inspect
              Number of Half-open Sessions = 1
              Half-open Sessions
                Session 84DB9D60 (192.168.100.230:8)=>(192.168.100.20:0) icmp SIS_OPENING
                Created 00:00:05, Last heard 00:00:00
                ECHO request
                Bytes sent (initiator:responder) [64:0]
          Class-map: class-default (match-any)
            Match: any
            Drop 176 packets, 12961 bytes

    But I get no debug log, and the debugging ACL hasn't matched:

        # sh log | inc IP:
        #
        # sh ip access-lists 198
        Extended IP access list 198
            10 permit icmp 192.168.100.0 0.0.0.255 host 192.168.100.20
            20 permit icmp host 192.168.100.20 192.168.100.0 0.0.0.255

    Am I going crazy, or should I not expect to see this debug log? Thanks!
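    One IOS behaviour worth knowing here: debug ip packet only shows packets that are process-switched, and transit traffic that is CEF- or fast-switched never reaches the process level, so an ACL-filtered debug can stay silent while traffic flows. The usual (CPU-expensive, lab-only) way to confirm is to force process switching on the interfaces involved; a sketch, where the Virtual-Template name is an assumption about what stands behind Virtual-Access3:

        conf t
         interface BVI4
          no ip route-cache
         interface Virtual-Template1
          no ip route-cache
         end
        debug ip packet 199 detail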

  • RAID 5 with hot spare or RAID 10 with no hot spare?

    - by Boden
    Yes, this is one of those "do my job for me" questions, have some pity :) I'm at the limit of what I can do with the number of hard drives in a server without spending a substantial amount of money. I have four drives left to configure, and I can either set them up as a RAID 5 with a dedicated hot spare, or a RAID 10 with no hot spare. The usable size of each will be the same, and the RAID 5 will offer enough performance. I'm RAID 5 shy, but I also don't like the idea of running without a hot spare. I'm not so interested in the degraded performance as in the amount of time the system would be without adequate redundancy. The server and drives are under a 13x5, 4-hour-response contract (although I happen to know that the nearest service provider is at least 2-3 hours away by car in the winter). I should note that the server also has two RAID 1 arrays which would also be protected by the hot spare. Why don't they make drive cages with 9 bays! Heh.

  • Windows 7 turns itself off every day at about 9am

    - by Radek
    I was given this comp at work, and after a week or so a strange thing started happening in the middle of whatever I'm doing :-( It turns itself off at about 9:10am every morning, just once a day, and it works fine after its little nap. It happens whether the comp was on all night or I turned it off before leaving the office. I tried swapping the memory, as I was told there had been an issue with it, but moving or removing any of the modules made no difference. I am not aware of installing any program that could cause this. I installed AVG and set it up to do a daily scan at about 8am, with any required restart needing user confirmation. 'The Software Protection service has stopped' was logged a few minutes before it turned off, but that message has also appeared at other times without the computer turning off. Turning off Windows automatic updates was the first thing I did when I got the comp. Configuration: Windows 7 Ultimate, Intel Core 2 Quad Q9550 at 2.8GHz, 8GB RAM, ST31000528AS Barracuda 7200.12 SATA 3Gb/s 1TB. Update: the messages below are logged when I turn the comp on again.

        Message 1: The previous system shutdown at 9:08:54 AM on 6/29/2010 was unexpected.
        Message 2: Level: Critical. Source: Kernel-Power. Event ID: 41. Task Category: (63). The system has rebooted without cleanly shutting down first. This error could be caused if the system stopped responding, crashed, or lost power unexpectedly.

    Update 2: (screenshots of the log after a manual restart and of Task Scheduler were attached to the original post)
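    Given the regular timing, two cheap checks are whether anything is scheduled around 9am and what the recent dirty-shutdown events say (a sketch; the task list will need eyeballing for 9:0x start times):

        schtasks /query /fo LIST /v > tasks.txt
        wevtutil qe System /q:"*[System[(EventID=41)]]" /c:5 /f:text /rd:true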

  • CentOS 6.3 virtual server under OpenVZ cannot ping, do host lookups, or make outbound connections while postfix is running

    - by Paul Cravey
    My best theory is that some kernel limit is being hit, preventing outbound connections. We have tried basically everything, from tcpdumps to provisioning an entirely new virtual server (we do not have this problem on any other virtuals), but the problem somehow carried over, even with a new postfix build (working). Emails work, and outbound connections work, as long as postfix does not have too much going on. /proc/user_beancounters shows no limits being hit (shown below). Nevertheless, pings fail, even to IP addresses. The TCP stack appears healthy. Load is low. No iowait. iptables already flushed. Has anyone experienced anything like this?

        uid  resource      held       maxheld    barrier              limit                failcnt
        3:   kmemsize      166216365  170262528  9223372036854775807  9223372036854775807  0
             lockedpages   0          0          9223372036854775807  9223372036854775807  0
             privvmpages   285727     351885     9223372036854775807  9223372036854775807  0
             shmpages      16933      17605      9223372036854775807  9223372036854775807  0
             dummy         0          0          0                    0                    0
             numproc       150        303        9223372036854775807  9223372036854775807  0
             physpages     314156     326191     0                    1280000              0
             vmguarpages   0          0          9223372036854775807  9223372036854775807  0
             oomguarpages  165355     165355     9223372036854775807  9223372036854775807  0
             numtcpsock    89         172        9223372036854775807  9223372036854775807  0
             numflock      22         76         9223372036854775807  9223372036854775807  0
             numpty        1          2          9223372036854775807  9223372036854775807  0
             numsiginfo    0          75         9223372036854775807  9223372036854775807  0
             tcpsndbuf     2733472    4371752    9223372036854775807  9223372036854775807  0
             tcprcvbuf     1798336    5427296    9223372036854775807  9223372036854775807  0
             othersockbuf  491120     1000760    9223372036854775807  9223372036854775807  0
             dgramrcvbuf   0          238728     9223372036854775807  9223372036854775807  0
             numothersock  361        505        9223372036854775807  9223372036854775807  0
             dcachesize    135941831  136114679  9223372036854775807  9223372036854775807  0
             numfile       2905       4990       9223372036854775807  9223372036854775807  0
             dummy         0          0          0                    0                    0
             dummy         0          0          0                    0                    0
             dummy         0          0          0                    0                    0
             numiptent     8          9          9223372036854775807  9223372036854775807  0

        [root@bni /]# ping 4.2.2.1
        PING 4.2.2.1 (4.2.2.1) 56(84) bytes of data.
        --- 4.2.2.1 ping statistics ---
        9 packets transmitted, 0 received, 100% packet loss, time 8493ms

        [root@bni /]# service postfix stop
        [root@bni /]# ping 4.2.2.1
        PING 4.2.2.1 (4.2.2.1) 56(84) bytes of data.
        64 bytes from 4.2.2.1: icmp_seq=1 ttl=53 time=8.63 ms
        64 bytes from 4.2.2.1: icmp_seq=2 ttl=53 time=8.62 ms
        64 bytes from 4.2.2.1: icmp_seq=3 ttl=53 time=8.63 ms
        64 bytes from 4.2.2.1: icmp_seq=4 ttl=53 time=8.66 ms

    Outbound connections of all sorts fail when postfix is running.
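    Since the snapshot shows zero failcnt everywhere, it may be worth watching the counters at the moment postfix ramps up, and checking connection tracking on the hardware node (a sketch; the conntrack check assumes the nf_conntrack module is loaded on the host):

        # inside the container: any row whose last column goes nonzero is the limit being hit
        watch -n1 "awk '\$NF > 0' /proc/user_beancounters"
        # on the OpenVZ host: a full conntrack table silently drops new flows
        cat /proc/sys/net/netfilter/nf_conntrack_count /proc/sys/net/netfilter/nf_conntrack_max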

  • "could not find suitable fingerprints matched to available hardware" error

    - by Alex
    I have a ThinkPad T61 with a UPEK fingerprint reader. I'm running Ubuntu 9.10 with fprint installed. Everything works fine (I am able to swipe my fingerprint to authenticate any permission dialogues or "sudo" prompts successfully) except for actually logging onto my laptop when I boot up or end my session. I receive an error below the GNOME login that says "Could not locate any suitable fingerprints matched to available hardware." What is causing this? Here are the contents of my /etc/pam.d/common-auth file:

        #
        # /etc/pam.d/common-auth - authentication settings common to all services
        #
        # This file is included from other service-specific PAM config files,
        # and should contain a list of the authentication modules that define
        # the central authentication scheme for use on the system
        # (e.g., /etc/shadow, LDAP, Kerberos, etc.). The default is to use the
        # traditional Unix authentication mechanisms.
        #
        # As of pam 1.0.1-6, this file is managed by pam-auth-update by default.
        # To take advantage of this, it is recommended that you configure any
        # local modules either before or after the default block, and use
        # pam-auth-update to manage selection of other modules. See
        # pam-auth-update(8) for details.

        # here are the per-package modules (the "Primary" block)
        auth sufficient pam_fprint.so
        auth [success=1 default=ignore] pam_unix.so nullok_secure
        # here's the fallback if no module succeeds
        auth requisite pam_deny.so
        # prime the stack with a positive return value if there isn't one already;
        # this avoids us returning an error just because nothing sets a success code
        # since the modules above will each just jump around
        auth required pam_permit.so
        # and here are more per-package modules (the "Additional" block)
        auth optional pam_ecryptfs.so unwrap
        # end of pam-auth-update config
        #auth sufficient pam_fprint.so
        #auth required pam_unix.so nullok_secure
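    Worth checking: sudo and desktop permission dialogues authenticate through common-auth, but the GDM greeter has its own PAM stack, so a difference between the two would explain exactly this split. A sketch (the 9.10 file layout and the ~/.fprint storage path are assumptions):

        grep -n pam_fprint /etc/pam.d/gdm /etc/pam.d/common-auth
        # and confirm enrolled prints exist for the user GDM is authenticating
        ls ~/.fprint/prints/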

  • Startup script for Red5 on Ubuntu 9.04

    - by user49249
    I am creating a startup script for Red5 on Ubuntu. Red5 is installed in /opt/red5. The following is a working script on a CentOS box running Red5:

        #!/bin/sh
        PROG=red5
        RED5_HOME=/opt/red5/dist
        DAEMON=$RED5_HOME/$PROG.sh
        PIDFILE=/var/run/$PROG.pid
        # Source function library
        . /etc/rc.d/init.d/functions
        [ -r /etc/sysconfig/red5 ] && . /etc/sysconfig/red5
        RETVAL=0
        case "$1" in
          start)
            echo -n $"Starting $PROG: "
            cd $RED5_HOME
            $DAEMON >/dev/null 2>/dev/null &
            RETVAL=$?
            if [ $RETVAL -eq 0 ]; then
              echo $! > $PIDFILE
              touch /var/lock/subsys/$PROG
            fi
            [ $RETVAL -eq 0 ] && success $"$PROG startup" || failure $"$PROG startup"
            echo
            ;;
          stop)
            echo -n $"Shutting down $PROG: "
            killproc -p $PIDFILE
            RETVAL=$?
            echo
            [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/$PROG
            ;;
          restart)
            $0 stop
            $0 start
            ;;
          status)
            status $PROG -p $PIDFILE
            RETVAL=$?
            ;;
          *)
            echo $"Usage: $0 {start|stop|restart|status}"
            RETVAL=1
        esac
        exit $RETVAL

    What do I need to replace for Ubuntu in the above script? My Red5 is in /opt/red5/, and to start it manually I always run /opt/red5/dist/red5.sh. I did not find rc.d/functions on my Ubuntu laptop, and /etc/init.d/functions does not exist either. I would like to be able to use the script with service, as Red Hat distributions do. I did find /lib/lsb/init-functions.
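    A hedged Ubuntu translation, swapping the Red Hat helpers (success, failure, killproc, status) for /lib/lsb/init-functions and start-stop-daemon; the paths come from the post, the rest is a sketch:

        #!/bin/sh
        . /lib/lsb/init-functions
        PROG=red5
        RED5_HOME=/opt/red5/dist
        DAEMON=$RED5_HOME/$PROG.sh
        PIDFILE=/var/run/$PROG.pid
        case "$1" in
          start)
            log_daemon_msg "Starting $PROG"
            start-stop-daemon --start --background --make-pidfile --pidfile $PIDFILE \
              --chdir $RED5_HOME --exec $DAEMON
            log_end_msg $?
            ;;
          stop)
            log_daemon_msg "Stopping $PROG"
            start-stop-daemon --stop --pidfile $PIDFILE
            log_end_msg $?
            ;;
          restart)
            $0 stop
            sleep 1
            $0 start
            ;;
          *)
            echo "Usage: $0 {start|stop|restart}"
            exit 1
            ;;
        esac

    Saved as /etc/init.d/red5, made executable, and registered with update-rc.d red5 defaults, it should then respond to service red5 start/stop.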

  • Backup solution to back up terabytes and lots of static files on a Linux server?

    - by user28679
    Which backup tool or solution would you use to back up terabytes and lots of files on a production Linux server? Note that the files are all different and almost never modified, and usage is mostly adding files, so the data volume is today 3TB, growing all the time at around +15GB/day. Please do not reply rsync. Basic Unix tools are not enough: rsync does not keep history, and rdiff-backup miserably fails from time to time and screws up the history. Moreover, these are all file-based backups, which generate a lot of iowait just to browse directories and query stat(). But I guess, except for R1Soft CDP, there is no way around that. We tried R1Soft CDP backup, which is block-level backup, and it proved good and efficient for all our other servers, but it systematically fails on the server with 3 terabytes and gazillions of files. The engineers of R1Soft and of the datacenter have been playing a hot ball game for more than 2 months now... and there is still no backup except regular rsync. We never tried the big commercial solutions, except R1Soft CDP, since it was provided as an optional service by the datacenter hosting our servers.

  • CherryPy web application won't communicate outside localhost via VPN

    - by Geoffrey Shea
    I'm trying to run a Python 2.7/CherryPy web server on Windows 7 which is connected to a VPN to establish a dedicated IP address. (If I run the exact same application on Windows XP connected to the VPN, it works fine.) On Windows 7 I tried configuring it to use port 8080, 8005, or 80, with no improvement. I turned off Windows Firewall altogether to test, and there was no improvement. If I run Apache on the Windows 7 machine on port 80 it works fine, so I'm pretty sure it's not the VPN service or the router. If I go to WhatIsMyIP.com, it shows that I have the IP address provided by the VPN. Here is the Python code, but I suspect the problem is the network configuration:

        import cherrypy

        class HelloWorld:
            def index(self):
                return "Hello world!3"
            index.exposed = True

        cherrypy.root = HelloWorld()
        cherrypy.config.update({"global": {
            "server.environment": "production",
            "server.socketPort": 8005
        }})
        cherrypy.server.start()

    This will return a web page if I go to localhost:8005, but not if I go to the VPN IP address:8005 from another machine. As I said, if I run Apache on the Win 7 machine on port 80, I can see it at localhost:80 AND at the VPN IP address:80 from another machine. Thanks for any light you can shed! Geoffrey
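    A quick way to tell a binding problem from a routing one (a sketch; the port is the post's):

        :: on the Win 7 box: which address is the server listening on?
        netstat -an | findstr :8005
        :: 0.0.0.0:8005 means all interfaces, 127.0.0.1:8005 means loopback only

    If it turns out to be loopback-only, this 2.x-style config would also take a "server.socketHost": "0.0.0.0" entry next to server.socketPort; treat that key name as an assumption and check the CherryPy 2 docs.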

  • How does Tunlr work?

    - by gravyface
    For those of you not in the US: Tunlr uses DNS witchcraft to let you access US-only (and UK-only, like BBC radio online) services and websites such as Hulu.com without using traditional methods like a VPN or web proxy. From their FAQ:

        Tunlr does not provide a virtual private network (VPN). Tunlr is a DNS (domain name system) unblocking service. We're using sophisticated technologies (a.k.a. the Tunlr Secret Sauce ©) to re-adress certain data envelopes, tricking the receiver into thinking the envelope originated from within the U.S. For these data envelopes, Tunlr is transparently creating a network tunnel from your location to our U.S.-based servers. Any data that's not directly related to the video or music content providers which Tunlr supports is not only left untouched, it's also not even routed through Tunlr. In order to use Tunlr, you will have to change the DNS address. See Get started for more information.

    I can't really wrap my head around how this works; I have always assumed that these services performed a geolocation lookup via your client IP. Just really curious as to how this works. EDIT 2: I believe they're only proxying the initial geo check and then modifying the data stream request to include your real IP address, so that the streaming is direct, not proxied.
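    The EDIT 2 hypothesis is easy to probe from a shell; a sketch (the resolver address is a placeholder for whatever Tunlr publishes):

        dig +short www.hulu.com                  # answer from your normal resolver
        dig +short www.hulu.com @203.0.113.53    # answer from the unblocking DNS

    If only a handful of names (the geo-check endpoints) resolve differently while the bulk CDN hostnames match, that supports the "proxy just the check, stream direct" theory.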

  • How do I enable additional debugging output from Ansible and Vagrant?

    - by Brian Lyttle
    I'm investigating Ansible for server and application provisioning. My application is currently provisioned with shell scripts in Vagrant. Rather than rewrite my scripts, I've taken a sample playbook and attempted to deploy it. It appears to deploy fine, but I'm seeing a failure message after what looks like a series of successful steps:

        » vagrant provision
        [default] Running provisioner: ansible...

        PLAY [web-servers] ************************************************************

        GATHERING FACTS ***************************************************************
        ok: [192.168.9.149]

        TASK: [install python-software-properties] ************************************
        ok: [192.168.9.149] => {"changed": false, "item": ""}

        TASK: [add nginx ppa if it ubuntu 10.04 and up] *******************************
        ok: [192.168.9.149] => {"changed": false, "item": "", "repo": "ppa:nginx/stable", "state": "present"}

        TASK: [update apt repo] *******************************************************
        ok: [192.168.9.149] => {"changed": false, "item": ""}

        TASK: [install nginx] *********************************************************
        ok: [192.168.9.149] => {"changed": false, "item": ""}

        TASK: [copy fixed init for nginx] *********************************************
        ok: [192.168.9.149] => {"changed": false, "gid": 0, "group": "root", "item": "", "mode": "0755", "owner": "root", "path": "/etc/init.d/nginx", "size": 2321, "state": "file", "uid": 0}

        TASK: [service nginx] *********************************************************
        ok: [192.168.9.149] => {"changed": false, "item": "", "name": "nginx", "state": "started"}

        TASK: [write nginx.conf] ******************************************************
        ok: [192.168.9.149] => {"changed": false, "gid": 0, "group": "root", "item": "", "mode": "0644", "owner": "root", "path": "/etc/nginx/nginx.conf", "size": 1067, "state": "file", "uid": 0}

        PLAY RECAP ********************************************************************
        192.168.9.149 : ok=8 changed=0 unreachable=0 failed=0

        Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.

    How do I go about getting additional debug information? I've already added ansible.verbose = true to my Vagrant config, which results in the dictionaries being displayed in the output above.
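    Two knobs that usually surface more detail, sketched with the caveat that the string form of verbose depends on the Vagrant version (older builds accept only a boolean):

        # in the Vagrantfile, instead of true:
        #   ansible.verbose = "vvv"
        # or run the playbook by hand with full verbosity against Vagrant's generated inventory
        # (the inventory path is an assumption; look under .vagrant/provisioners/ansible/):
        ansible-playbook -i .vagrant/provisioners/ansible/inventory playbook.yml -vvvv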

  • Courier-imap login problem after upgrading / enabling verbose logging

    - by halka
    I updated my mail server last night from Debian etch to lenny. So far I've encountered a problem with my postfix installation, mainly that I managed to break IMAP access somehow. When trying to connect to the IMAP server with Thunderbird, all I get in mail.log is:

        Feb 12 11:57:16 mail imapd-ssl: Connection, ip=[::ffff:10.100.200.65]
        Feb 12 11:57:16 mail imapd-ssl: LOGIN: ip=[::ffff:10.100.200.65], command=AUTHENTICATE
        Feb 12 11:57:16 mail authdaemond: received auth request, service=imap, authtype=login
        Feb 12 11:57:16 mail authdaemond: authmysql: trying this module
        Feb 12 11:57:16 mail authdaemond: SQL query: SELECT username, password, "", '105', '105', '/var/virtual', maildir, "", name, "" FROM mailbox WHERE username = '[email protected]' AND (active=1)
        Feb 12 11:57:16 mail authdaemond: password matches successfully
        Feb 12 11:57:16 mail authdaemond: authmysql: sysusername=<null>, sysuserid=105, sysgroupid=105, homedir=/var/virtual, [email protected], fullname=<null>, maildir=xoxo.sk/[email protected]/, quota=<null>, options=<null>
        Feb 12 11:57:16 mail authdaemond: Authenticated: sysusername=<null>, sysuserid=105, sysgroupid=105, homedir=/var/virtual, [email protected], fullname=<null>, maildir=xoxo.sk/[email protected]/, quota=<null>, options=<null>

    ...and then Thunderbird proceeds to complain that it can't log in / lost the connection. Thunderbird is definitely not configured to connect through SSL/TLS. POP3 (also provided by Courier) is working fine. I've mainly been looking for a way to make the courier-imap logging more verbose, like what can be seen, for example, here. Edit: Sorry about the mess; I found that I had been funneling the log through grep imap, which naturally didn't display the entries for authdaemond. The verbose logging setting is in /etc/courier/imapd under DEBUG_LOGIN (set to 1 to enable verbose logging, set to 2 to also dump plaintext passwords to the logfile. Careful.)
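    For the record, the knob from the edit can be flipped and the daemons bounced like this (a sketch; DEBUG_LOGIN=2 logs plaintext passwords, so stick with 1):

        sed -i 's/^DEBUG_LOGIN=.*/DEBUG_LOGIN=1/' /etc/courier/imapd /etc/courier/imapd-ssl
        /etc/init.d/courier-imap restart
        /etc/init.d/courier-imap-ssl restart
        tail -f /var/log/mail.log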
