Search Results

Search found 22756 results on 911 pages for 'cisco vpn client'.


  • How do I enable mutual SSL in IIS7 with a self-signed certificate?

    - by Kant
    I've created a self-signed certificate in IIS7, exported it to a .pfx, and imported it into IE on the client machine. I then set "Require Client Certificate" in the server's IIS configuration. When I try to visit the site with IE, a dialog box comes up asking me to choose a certificate, but there are no certs in that dialog box. When I click "OK" without choosing any cert, I get a 403 Forbidden error. How can I make this work? Appreciate the help in advance.
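
    One direction that may be worth checking, sketched below with example paths and the default site name as assumptions: IE only offers client certificates that sit in the current user's Personal store and whose issuer the server trusts, so the .pfx has to land in the client's Personal store, the self-signed certificate also has to be in Trusted Root Certification Authorities on the server, and IIS has to be told to negotiate and require client certificates.

        rem on the server: trust the self-signed certificate as an issuer (path is an example)
        certutil -addstore Root C:\certs\selfsigned.cer

        rem negotiate and require client certificates for the site (site name is an example)
        %windir%\system32\inetsrv\appcmd set config "Default Web Site" ^
            -section:system.webServer/security/access ^
            /sslFlags:"Ssl, SslNegotiateCert, SslRequireCert" /commit:apphost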


  • Hidden DNS master only sending notify to one slave

    - by Rob
    My hidden DNS master is only sending notifies to one of the name servers for a zone. I have 3 named servers, ns0, ns1 and ns2, all running BIND 9.7.3.dfsg-1ubuntu4.1. When an update is processed, the master (ns0) seems to behave normally:

        ns0 (192.168.2.50)
        zone domain.org/IN: sending notifies (serial 2012060703)
        client 192.168.2.52#42892: transfer of 'domain.org/IN': AXFR-style IXFR started: TSIG rndc-key
        client 192.168.2.52#42892: transfer of 'domain.org/IN': AXFR-style IXFR ended

        ns2 (192.168.2.52)
        client 192.168.2.50#3762: received notify for zone 'domain.org': TSIG 'rndc-key'
        zone domain.org/IN: Transfer started.
        transfer of 'domain.org/IN' from 192.168.2.50#53: connected using 192.168.2.52#55747
        zone domain.org/IN: transferred serial 2012060704: TSIG 'rndc-key'
        transfer of 'domain.org/IN' from 192.168.2.50#53: Transfer completed: 1 messages, 34 records, 1028 bytes, 0.001 secs (1028000 bytes/sec)

    Nothing happens on ns1. I've turned up the logging level, but there's no information in syslog about which name servers BIND has actually sent notifications to, so I guess this is something it doesn't log. I've also watched tcpdump; the master never makes any attempt to notify ns1, only ns2:

        192.168.2.50.56278 > 192.168.2.52.53: [udp sum ok] 56418 notify [b2&3=0x2400] [1a] [1au] ? SOA? domain.org. domain.org. [0s] SOA ns1.domain.net. dnsmaster.domain.net. ? 2012060801 10800 3600 604800 3600 ar: rndc-key. ANY [0s] TSIG hmac-md5.sig-alg.reg.int. fudge=300 maclen=16 origid=56418 error=0 otherlen=0 (174)

    The authoritative zone has both ns1 and ns2 records:

        $ORIGIN domain.org.
        $TTL 3h
        @ IN SOA ns1.domain.net. dnsmaster.domain.net. (
            2012060801 ; Serial yyyymmddnn
            3h         ; Refresh after 3 hours
            1h         ; Retry after 1 hour
            1w         ; Expire after 1 week
            1h )       ; Minimum negative caching of 1 hour
        @ 3600 IN NS ns1.domain.net.
        @ 3600 IN NS ns2.domain.net.

    Edit: I have added also-notify {192.168.2.51;192.168.2.52;}; explicitly to the zone configuration and it all works fine; both ns1 and ns2 get notify messages and transfers succeed. I was under the impression BIND would automatically send notifies to all NS records in a zone; maybe it's bugged?
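
    For reference, a minimal sketch of the workaround described in the edit, with an assumed zone file path and the slave addresses taken from the question:

        zone "domain.org" {
            type master;
            file "/etc/bind/db.domain.org";   # path is an assumption
            # notify the slaves explicitly instead of relying on NS-record discovery
            also-notify { 192.168.2.51; 192.168.2.52; };
            allow-transfer { key rndc-key; };
        };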


  • Kickstart installation: Unable to read package metadata.

    - by yacov
    I'm trying to install CentOS with kickstart, using HTTP as the installation source. The kickstart server and the server being installed both run as VMs on the same machine. After the anaconda installer starts, it fails with an "Unable to read package metadata" error. I tried installing two different versions of CentOS (5.5 and 5.2), and both pass the CD-ROM media test when installed manually. The only errors on the kickstart server side are some entries in the httpd log that I consider irrelevant:

        [Sat Mar 12 23:25:19 2011] [error] [client 192.168.1.112] File does not exist: /tftpboot/linux-install/platforms/CentOS5.5/images/product.img
        [Sat Mar 12 23:25:19 2011] [error] [client 192.168.1.112] File does not exist: /tftpboot/linux-install/platforms/CentOS5.5/disc1

    I've searched the internet for days and haven't found any solution. Does anyone have any idea?
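
    One quick check, sketched with a placeholder hostname and path: anaconda reads repodata/repomd.xml from the URL given in the kickstart file, so confirming that the metadata is actually reachable over HTTP and that it matches the ks.cfg often narrows this down.

        # is the package metadata served over HTTP? (KICKSTART_SERVER and the path are placeholders)
        curl -I http://KICKSTART_SERVER/path/to/CentOS5.5/repodata/repomd.xml

        # and does that location match the install source in the kickstart file?
        grep -E '^(url|repo)' ks.cfg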


  • Messages fail to deliver when sending out a mass email

    - by Jason T.
    A client of mine sent out a mass email to a bunch of people on his contact list. He received a bounce-back email stating: "The server has tried to deliver this message, without success, and has stopped trying. Please try sending this message again. If the problem continues, contact your helpdesk." That error is associated with every recipient of that mass email. Any ideas on where to look to resolve this? He is able to send and receive other emails normally.


  • Problems serving SVN over HTTPS on Ubuntu 10.04

    - by odd parity
    We've been experiencing some problems with our Subversion server after upgrading to Ubuntu 10.04. When trying to access a repository, regardless of client (I've tried git-svn and svn on Windows as well as svn on Ubuntu 10.04, from different computers and network locations), I get a 400 Bad Request. Here's the output from svn:

        svn: Server sent unexpected return value (400 Bad Request) in response to OPTIONS request for 'https://svn.example.org/svn/programs'

    Here are the relevant entries from the Apache logs (I'm running Apache 2.2):

        error.log
        [Mon Jun 14 11:29:31 2010] [error] [client x.x.x.x] request failed: error reading the headers

        ssl_access.log
        x.x.x.x - - [14/Jun/2010:11:29:28 +0200] "OPTIONS /svn/programs HTTP/1.1" 401 2643 "-" "SVN/1.6.6 (r40053) neon/0.29.0"
        x.x.x.x - - [14/Jun/2010:11:29:31 +0200] "ction-set/></D:options>OPTIONS /svn/programs HTTP/1.1" 400 644 "-" "SVN/1.6.6 (r40053) neon/0.29.0"

    If anyone has run into similar problems or could give me a pointer to track down the cause of this I'd be very grateful; I'd really like to avoid having to downgrade the box again.
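
    One way to take the SVN client out of the picture, a sketch using the repository URL from the error message: replay the failing request with curl and see whether the 400 still appears.

        # -k skips certificate verification, -v prints the full request/response exchange
        curl -vk -X OPTIONS https://svn.example.org/svn/programs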


  • LDAP + LTSP 12.04

    - by us3r
    On Ubuntu 12.04 I have some kind of problem with LTSP and LDAP. Sometimes I can log in to the server from a thin client, but sometimes I can't (the window freezes at LDM). Everything is fine when I log in to the server as if it were a local machine, but there is a problem from the thin client. pam_mkhomedir.so creates the home dir, but I can't log in because nothing happens; LDM just freezes. This problem doesn't occur for "local" users (Unix accounts) or for the first LDAP user who logs in. It's important to mention that I can see nothing special in the logs. Does anybody else have a problem with LTSP + LDAP on Ubuntu 12.04? There wasn't any problem on previous versions.

    EDIT: When LDM freezes, the logs show:

        May 17 11:59:52 bar sshd[6066]: Accepted password for student2 from 192.168.100.22 port 44000 ssh2
        May 17 11:59:52 bar sshd[6066]: pam_unix(sshd:session): session opened for user student2 by (uid=0)
        May 17 12:00:03 bar sshd[6315]: subsystem request for sftp by user student2

    And nothing else for this user.


  • JMX Monitoring of GlassFish Servers

    - by tjquinn
    Did you ever wonder what this message in your GlassFish server.log file means?

        JMXStartupService has started JMXConnector on JMXService URL service:jmx:rmi://192.168.2.102:8686/jndi/rmi://192.168.2.102:8686/jmxrmi

    It means you can monitor any GlassFish server process, remotely or locally, using any standard Java Management Extensions (JMX) client, for example jconsole or jvisualvm. Copy the part of the log message that starts with "service:" into the Add JMX Connection dialog of jvisualvm, or into the New Connection dialog of jconsole. (The full string is truncated in the on-screen display, but if you copy it from server.log and paste it into the form it should all be there.) The examples above are for a DAS, and your host will probably be different. The server.log files for other GlassFish servers (instances) will have similar log entries giving the JMX connection string to use for those processes; look for the host and/or port to be different.

    Note a few things about security:

    - Here we've assumed you are using the default admin username and password. If you are not, just enter a valid admin username and password for your installation. Once connected, you have normal access to all the JVM statistics and controls.
    - You can use JMX clients that support MBeans to view the GlassFish configuration. When you connect to the DAS, you can also change that configuration, but you can only view configuration when you connect to an instance.
    - To use a JMX client on one system to connect to a GlassFish server running on another system, you need to enable secure admin if you have not already done so:

        asadmin change-admin-password    (respond to the prompts)
        asadmin enable-secure-admin
        asadmin restart-domain           (as prompted in the output from enable-secure-admin)
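
    Both clients can also be pointed at that URL from the command line; a small sketch using the connection string from the log message above (adjust the host and port to your own server):

        # jconsole accepts the JMX service URL as an argument
        jconsole service:jmx:rmi://192.168.2.102:8686/jndi/rmi://192.168.2.102:8686/jmxrmi

        # VisualVM can open the same connection directly
        jvisualvm --openjmx service:jmx:rmi://192.168.2.102:8686/jndi/rmi://192.168.2.102:8686/jmxrmi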


  • Running a bash script from an HTML link or button

    - by Andrew
    I have a webserver that's hosting lots of images. I want the client to be able to press a button or a link, which will run a bash script, which will create a video based on all these pictures. The script I'm trying to run is this:

        #!/bin/bash
        # cd to the directory
        cd /var/www/gallery
        # use ffmpeg to make video
        ffmpeg -pattern_type glob -i 'img-*jpg' -r 1 video.mp4
        # Take the first file in the directory and name it video.mp4.jpg (for thumbnail)
        cp `ls | sort -n | head -1` video.mp4.jpg

    The script is located on the server. So when the client clicks the link or button, the script will run, and the video is created. I've tried both solutions listed here but I can't seem to get it to work. I have PHP installed on my server.
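
    One low-tech way to wire this up is to expose the script as a CGI endpoint and point the link at it. A sketch, assuming Apache with mod_cgi enabled; the file name and paths are examples, and letting the web server user run ffmpeg on demand has obvious abuse potential:

        #!/bin/bash
        # /usr/lib/cgi-bin/make-video.cgi  (example location; must be executable)
        echo "Content-Type: text/plain"
        echo ""
        cd /var/www/gallery || { echo "gallery directory missing"; exit 0; }
        # same steps as the script above; output goes back to the browser
        ffmpeg -y -pattern_type glob -i 'img-*.jpg' -r 1 video.mp4 2>&1
        cp "$(ls | sort -n | head -1)" video.mp4.jpg
        echo "video.mp4 generated"

    The page would then link to it with something like <a href="/cgi-bin/make-video.cgi">Build video</a>; a small PHP wrapper calling the same script would work equally well, since PHP is already installed.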


  • How to block own rpcap traffic where tshark is running?

    - by Pankaj Goyal
    Platform: Fedora 13, 32-bit.

        RemoteMachine$ ./rpcapd -n
        ClientMachine$ tshark -w "filename" -i "any interface name"

    As soon as the capture starts without any capture filter, thousands of packets get captured. rpcapd binds to port 2002 by default, and while establishing the connection it sends a randomly chosen port number to the client for further communication. Both client and server machines then exchange TCP packets through randomly chosen ports, so I cannot even write a capture filter to block this rpcap-related TCP traffic. Wireshark and tshark for Windows have a "Do not capture own RPCAP traffic" option under Remote Settings in the Edit Interface dialog box, but there is no such option in tshark for Linux. It would also help if anyone could tell me how Wireshark blocks rpcap traffic.
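
    One workaround that may approximate the Windows option, sketched with placeholder addresses: since every rpcap control and data connection runs over TCP between the capturing client and the rpcapd host, a capture filter that excludes that host pair drops them regardless of which random ports are chosen (at the cost of also dropping any other traffic between those two machines).

        # CLIENT_IP and REMOTE_IP are placeholders for the capturing machine and the rpcapd host
        tshark -w "filename" -i "any interface name" -f "not (host CLIENT_IP and host REMOTE_IP and tcp)"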


  • Where does jQuery fit-in with frameworks like JavaScriptMVC, BackboneJS, SproutCore and Knockout?

    - by Prisoner ZERO
    I have been happily using jQuery for the last 2 years and have been quite successful creating some really cool functionality with it, so I am very comfortable with it. I also believe the future of the web will continue on the current client-side path. However...

    The next challenge seems to be coming in the form of various controller frameworks: KnockoutJS, BackboneJS, SproutCore, JavaScriptMVC (the list goes on). Additionally, there are some great AMD loader tools like RequireJS or LabJS, and jQuery now has define and then capabilities baked in. It's getting harder and harder to keep track of it all. My task now is to evaluate and decide on a strategic direction for using some form of either an MVC or MVVM framework client-side, but I have so many questions:

    - Where does jQuery fit in with the various controller frameworks mentioned above?
    - Is jQuery used alongside each, or do some of them have their own 'jQuery-styled version' baked in?
    - Are tools like RequireJS still needed if you implement one of the controller frameworks mentioned above?
    - Do the define and then capabilities baked into jQuery now supersede the AMD loaders mentioned above?
    - Which one seems most modular? (see notes below)

    NOTES: One thing I don't want in any future framework is the requirement to take in vast amounts of functionality that I don't use. I would rather use a framework that is truly modular. For example, to use jQuery UI you have to take in a lot of other core libraries that you might not actually use. I will be experimenting with each one, but some REAL feedback would be great. I've seen some similar questions, but none have really answered the above. Thanks in advance!


  • Why use Google Apps Sync for Outlook to sync email?

    - by Howiecamp
    I currently use Outlook 2007 against an Exchange server for my email and will be moving to Google Apps. There are a number of ways to import your existing email and calendar entries into Google Apps Gmail (e.g. the Google Apps Sync for Outlook tool, the Google Email Uploader, and copying messages using an IMAP client), so I'm covered on the import side. I'm trying to understand the use cases for the Google Apps Sync for Outlook tool (http://mail.google.com/support/bin/topic.py?topic=23333) with respect to email and calendar entries. The description says it syncs your Outlook email and calendar items with Google Apps, but doesn't using Outlook as an IMAP client against Google Apps do the same?


  • Why use FQDN as DNS-server option in DHCP?

    - by Filip Haglund
    I've seen multiple default configurations of DHCP servers with an FQDN set as the DNS-server option. Doesn't this imply a catch-22, or the need for that DNS server to be in the hosts file of every single client? An example from dhcp3-server in Debian 6:

        option domain-name-servers ns1.internal.example.org;

    I can see how using a DNS name is convenient, because it's only an A record to change and the servers can be load balanced if wanted, but I don't see how the client is going to resolve the name. Why are people using FQDNs as DNS-server addresses in DHCP?
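
    The assumption worth verifying here: ISC dhcpd resolves hostnames in option data when it reads its configuration, so the lease handed out should already contain a numeric address rather than the FQDN. A quick check on a Debian/Ubuntu client, sketched with the usual lease-file path (it may differ between dhclient versions):

        # the stored lease should show the resolved IP, not the hostname
        grep domain-name-servers /var/lib/dhcp/dhclient.leases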


  • How to configure chrome to open magnet url's with deluge?

    - by michael_n
    After upgrading to Ubuntu 11.04 (Natty) from 10.10, I can no longer open magnet (torrent) links in Chromium and have deluge automatically open and accept the URL. (Edit: currently ".torrent" files are not a problem; magnet URLs, e.g. of the form "magnet:?xt=urn:...", are now the only problem. Not sure if something updated...?) Rather, now only Transmission will automatically open torrents, magnet links, etc. There doesn't seem to be a way to set deluge to be the default torrent client. (And there also doesn't seem to be a "default application" setting for the bittorrent client to replace Transmission with deluge.)

    Notes:

    - I found some old threads on this issue, and only one or two newer ones. The newer threads seem to suggest xdg-open is to blame. But not many people seem to be running into this problem, so... maybe it's just me?
    - I'm not using Firefox, so manually setting apps for mime-types or extensions doesn't work (that's not an option in chrome/chromium, afaik; you have to rely on the OS).
    - I uninstalled Transmission, and then basically nothing happened when clicking on torrent/magnet links.
    - Running from the shell also opens Transmission (not deluge):

        xdg-open "magnet:?xt=urn:bt..&tr=http://tracker.....com/announce"

    My current URL handlers are:

        $ gconftool -a /desktop/gnome/url-handlers/magnet
         command = deluge "%s"
         needs_terminal = false
         enabled = true

    The only workaround I have (which does work) is to rename /usr/bin/transmission-gtk{,.bak} and create my own /usr/bin/transmission-gtk:

        $ cat /usr/bin/transmission-gtk
        #!/bin/bash
        deluge "$@"

    Anyone else run into this, know of a bug, workaround, or...?
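
    One thing that may be worth trying before the transmission-gtk shim, a sketch that relies on the x-scheme-handler mechanism used by xdg-open/gio and assumes deluge.desktop is the installed desktop file name:

        # register deluge as the handler for magnet: URLs
        xdg-mime default deluge.desktop x-scheme-handler/magnet
        # confirm what is now registered
        xdg-mime query default x-scheme-handler/magnet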


  • How do I disable nginx sending messages to syslog?

    - by altman
    My nginx sends lots of messages to syslog, but I don't need them. In my nginx.conf:

        error_log /var/log/nginx-error.log notice;
        ......
        server {
            access_log off;
            location / {
                ....
            }
        }

    but in my /var/log/messages I see:

        Nov 22 23:25:09 cache3 nginx: 2011/11/22 23:25:09 [error] 3437#0: *32172530 kevent() reported about an closed connection (60: Operation timed out) while reading response header from upstream, client: , server: , request: "GET http://www.igoido012.com//vk HTTP/1.1", upstream: "http:////vk", host: "www.igoido012.com", referrer: "http://www.baidu.com/"
        Nov 22 23:25:09 cache3 nginx: 2011/11/22 23:25:09 [error] 3437#0: *32099531 upstream timed out (60: Operation timed out) while reading response header from upstream, client: , server: , request: "GET http://t.web2.qq.com/channel/poll?msg_id=0&clientid=431509&t=1321975433305 HTTP/1.1", upstream: "http://:80/channel/poll?msg_id=0&clientid=431509&t=1321975433305", host: "t.web2.qq.com", referrer: "http://t.web2.qq.com/proxy.html?v=20110331001"

    How can I prevent nginx from sending messages to my syslog?
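
    If the goal is simply to silence these [error]-level upstream messages, one commonly suggested sketch is to raise the error_log threshold; error_log has no "off" value, so /dev/null is the usual stand-in. Whether the syslog copies come from this error_log or from how nginx was started is a separate question worth checking.

        # discard everything below crit severity
        error_log /dev/null crit;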


  • Streaming media from linux server - low footprint is crucial

    - by Mike Haye
    I recently pre-ordered the Raspberry Pi (http://www.raspberrypi.org/faqs). For those of you who don't know it, it's a machine with 256 MB of RAM and a 700 MHz processor for $35. I plan to run Linux from an SD card on this machine and have it act as an HTPC, VPN and media server. As for the media server part, I need to find some Linux software that has a small footprint but allows me to stream media to other devices connected to the internet (preferably without having to install any additional software on the client machines). Also, I would love it if the video could be compressed, so the data usage wouldn't be so big for the client machine (e.g. when I'm using the data plan on my smartphone ;) ). Thanks in advance for any answers :) Mike.


  • Connecting to an Amazon AWS database [closed]

    - by Adel
    I'm a bit overwhelmed/bewildered by the whole concept of networking/remote desktop, etc. The context is that in my company I need to access a remote database. The standard way I use is to first connect using a VPN client (called Shrew Soft Access Manager); once that says "network device configured, tunnel enabled" I'm good to connect using Windows "Remote Desktop Connection". But now our company has set up an Amazon AWS database, and I'm told I need to connect and only need to use RDP. So I tried the standard Windows one, but it doesn't work. On Wikipedia I looked up remote desktop software and downloaded one called VNC Viewer, but it doesn't work either. Any advice/tips/comments appreciated.

    EDIT: YAY! I finally got a little further. I had to use my username as a fully qualified name:

        Computer: XYZ.XYZ.XYZ.XYZ
        Username: XYZ.XYZ.XYZ.XYZ\aazzam


  • weird postgresql log entries

    - by hyperboreean
    I am trying to figure out why I get some weird entries in my postgresql log after I do a restart:

        2010-05-14 11:30:25 EEST LOG: database system was shut down at 2010-05-14 11:30:22 EEST
        2010-05-14 11:30:25 EEST LOG: autovacuum launcher started
        2010-05-14 11:30:25 EEST LOG: database system is ready to accept connections
        2010-05-14 11:30:25 EEST LOG: incomplete startup packet
        2010-05-14 11:30:40 EEST WARNING: there is already a transaction in progress
        2010-05-14 11:30:40 EEST LOG: could not receive data from client: Connection reset by peer
        2010-05-14 11:30:40 EEST LOG: unexpected EOF on client connection

    First, there's the "incomplete startup packet" line, which bugs me. Does anyone have any idea why this happens? And this one is also very strange: "WARNING: there is already a transaction in progress" ...


  • FTP - 530 Sorry, the maximum number of clients...?

    - by aSeptik
    Hi all! I know this is not properly a code question, but who among you doesn't use an FTP client? ;-) My problem is that my FTP works great, except when I upload files to one particular client's server. On that server, some files upload fine while others stop halfway through the upload at half their size, and then this error is displayed:

        530 Sorry, the maximum number of clients (4) from your host are already connected. Unable to make a connection. Please try again.

    Obviously this is not true; I'm the only one who is uploading! Has anyone had the same experience with this? PS: I have tried many different FTP clients; all display the same error or just hang. Thanks.
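
    If the server really does cap concurrent connections per host, capping the client side to match sometimes gets the uploads through; a sketch with lftp, where the host, user and directories are placeholders:

        lftp -u USER -e "set net:connection-limit 2; mirror -R ./localdir remotedir; quit" ftp.example.com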


  • SVN: Error validating server certificate for svn hook linux

    - by Dr Casper Black
    Hi, I managed to set up an SVN (over SSL) server and a TortoiseSVN client on Windows. I made a post-commit hook for a test project. The post-commit hook updates the web directory so the PHP app can run with the newest version. It all works when done over the shell. The only problem is that when I commit the changes from the client on Windows, the change is committed but the hook throws "post-commit hook failed (exit code 1)" with output:

        Error validating server certificate for 'https://SERVER_IP:443':
        - The certificate is not issued by a trusted authority. Use the fingerprint to validate the certificate manually!
        - The certificate hostname does not match.
        Certificate information:
        - Hostname: DEVSRVR
        - Valid: from Fri, 28 Jan 2011 09:22:45 GMT until Sat, 28 Jan 2012 09:22:45 GMT
        - Issuer: PHP, SS, SS, SRB
        - Fingerprint: 5f:d0:50:d6:dd:a6:d4:64:a5:ac:3a:4b:7c:7d:33:e3:75:dd:23:9f
        (R)eject, accept (t)emporarily or accept (p)ermanently?
        svn: OPTIONS of 'https://SERVER_IP/svn/myproject/trunk': Server certificate verification failed: certificate issued for a different hostname, issuer is not trusted (https://SERVER_IP)
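
    A common way around this inside the hook itself, sketched with example paths and a hypothetical deploy user, is to run the update non-interactively and accept the self-signed certificate, which svn 1.6 and later support via --trust-server-cert:

        #!/bin/sh
        # post-commit hook sketch: update the web copy without prompting about the certificate
        /usr/bin/svn update /var/www/myproject \
            --username deploy --password 'secret' \
            --non-interactive --trust-server-cert >> /var/log/svn-deploy.log 2>&1

    Alternatively, running a one-off command such as svn info https://SERVER_IP/svn/myproject as the same user the hook executes as, and answering (p)ermanently, caches the certificate acceptance in that user's ~/.subversion.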


  • Few GUI problems with minimal install

    - by Toki Tahmid
    I installed a minimal Ubuntu with a completely functional GUI, but I'm facing a few problems:

    - nm-applet's icon won't show in the notification area, although I can connect to wired internet fine. I am not able to configure my wireless or VPN this way.
    - gksu's authentication screen is different from the usual graphical authentication: the screen turns gray as usual, but there are more options, like saving the password for the session or to the keyring. And most importantly, it won't accept my password no matter what.
    - And lastly, Gwibber gets installed no matter what, even though, to my knowledge, not a single package I installed has anything to do with Gwibber.

    I would welcome any help regarding these three issues. I did not mention what packages I installed because the list is long, but I will do so if anyone requests. Thank you in advance!
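
    For the Gwibber part, it may help to ask the package manager what is pulling it in; a short sketch:

        # show the dependency chain that drags gwibber onto the system
        aptitude why gwibber
        # or list installed packages that depend on it
        apt-cache rdepends --installed gwibber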


  • Credentials work for SSMS but not (ODBC) LogParser script

    - by justSteve
    Via SSMS I'm able to connect and navigate the server/db in question, but the same credentials fail when connecting via a LogParser script. I'm trying to execute this from the same box on which the server is running. The username is owner/dbo of the db, and the db has mixed mode authentication.

        C:\TTS\tools\LogParser> c:\tts\tools\logparser\logparser file:c:\tts\tools\logparser\errors2SQL.sql?source="C:\inetpub\logs\LogFiles\W3SVC8\u_ex100521.log" -i:IISW3C -o:SQL -createTable:ON -oConnString:"Driver={SQL Server Native Client 10.0};Server=servername\SQLEXPRESS;db=Tter;uid=logger2;pwd=foo" -stats:OFF

        Task aborted.
        Error connecting to ODBC Server
        SQL State: 28000
        Native Error: 18456
        Error Message: [Microsoft][SQL Server Native Client 10.0][SQL Server]Login failed for user 'logger2'.
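
    One way to isolate whether this is a LogParser/ODBC issue or a SQL login issue, sketched for the same box: try the identical SQL login through another client, and double-check the connection-string keywords (the SQL Server Native Client driver documents Database= rather than db=; whether db= being ignored matters here is an assumption worth ruling out).

        rem same credentials, same instance as in the script above
        sqlcmd -S servername\SQLEXPRESS -U logger2 -P foo -d Tter -Q "SELECT SUSER_SNAME()"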


  • Nomachine 4 for X forwarding

    - by Yair
    I have been using nomachine nx client to connect from my mac to an ubuntu server for a while now and it has been a great experience. The most useful feature for me was the option to open up just one application on the remote machine, instead of a full remote desktop connection. I used to to open a terminal on the remote machine. Basically it was a much faster, much better replacement for ssh -X. All was great until I upgraded to the new version - nomachine 4. In this version I can not find that option. I have to run a full remote desktop session, which slows things down and is also much less convenient for my work. Was this option removed from the client? Or is it hiding somewhere in there and I just can't find it?


  • apc.stat causes 500 internal server error

    - by Legit
    When I turn off apc.stat, I get a 500 internal server error. I checked the Apache error_log and it shows:

        [Tue Jun 26 10:02:59 2012] [error] [client 127.0.0.1] PHP Warning: require(): Filename cannot be empty in /var/www/site1/public/index.php on line 17
        [Tue Jun 26 10:02:59 2012] [error] [client 127.0.0.1] PHP Fatal error: require(): Failed opening required '' (include_path='.:/usr/share/pear:/usr/share/php') in /var/www/site1/public/index.php on line 17

    I checked that line and here's what it contains:

        require('./wp-blog-header.php');

    I don't see anything wrong with it. My current setup: APC version 3.1.10, PHP version 5.4.4. How do I resolve this error when I disable apc.stat?
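
    A workaround often suggested for this combination, a sketch rather than a confirmed fix: make the include path absolute in index.php, which side-steps the relative-path lookup that appears to come back empty when apc.stat is off.

        require( dirname( __FILE__ ) . '/wp-blog-header.php' );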


  • What kind of website or coding is suitable and safe for an artist's website

    - by Dan S
    I have a web design project for a singer. I used Joomla for a previous project and designed good music websites with it, but for this project I cannot find a suitable template to edit and use. As the website is simple and does not need any special functionality, I'm thinking about building it with just plain CSS, HTML and jQuery. I'm good with them and can make it look perfect, but I am not sure about security: in Joomla I use various security plugins, but I don't know what applies to purely client-side scripting. So I generally need your ideas on the following questions:

    - Is Joomla, or a CMS in general, a good option for a music website?
    - What are famous artists' websites based on: a CMS or client-side scripting?
    - Do you recommend creating it manually, without using a CMS or template?
    - And would you suggest WordPress for this type of website?

    (The website will have these pages: Biography, News, Music (with a music player), Photos, Videos and Contacts.)

    Thank you for all your responses. I had a look at Joomla, and the only template I chose is This One, which seems very simple, but I am worried about module positions because it seems not to have any module positions at all. I tried to contact the provider but did not get any response. Does anyone know about its module positions? I mean, is there any way to find them? And is it possible to create 2-3 module positions? Also, I had a look at ThemeForest's WordPress templates and it has some great ones; I think WordPress is more active in creating artistic templates. But is it secure and professional enough to use this CMS for a singer who is kind of famous in his country? I am talking about a template like this. Share your opinions, guys.


  • Run script when POST data is sent to Apache

    - by Nathan Adams
    Over my several years of running servers there seems to be a pattern to most spam activity. My question/idea is: is there a way to tell Apache to run a script when POST data is detected? What I would want to do is perform a reverse DNS lookup on the client's IP address, and then perform a DNS lookup on the hostname in the PTR record. Afterwards, perform some checks; excuse the pseudocode:

        if PTR does not exist:
            deny POST request
        if IP of PTR hostname = client's IP:
            allow POST request
        else:
            deny POST request

    I don't care about GET requests, even though they can be just as malicious; this idea is targeted at spam comments, which use POST data to send the comment to the web server. To avoid much of a time delay, I would run my own recursive DNS server. Please note, this isn't meant to be a silver bullet for spam, but it should decrease the volume. Possible or impossible?
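
    The check itself is straightforward to script. A sketch of the forward-confirmed reverse DNS test from the pseudocode above; how it gets wired into Apache (for example through mod_rewrite's RewriteMap prg: mechanism or a mod_security rule) is left as an assumption, and the file name is an example:

        #!/bin/bash
        # fcrdns-check.sh <client-ip>  : prints "allow" or "deny"
        ip="$1"
        ptr=$(dig +short -x "$ip" | head -1 | sed 's/\.$//')
        [ -z "$ptr" ] && { echo deny; exit 0; }          # no PTR record at all
        for a in $(dig +short "$ptr" A); do
            [ "$a" = "$ip" ] && { echo allow; exit 0; }  # the PTR's A record points back at the client
        done
        echo deny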

