Search Results

Search found 15896 results on 636 pages for 'gnome settings daemon'.


  • Saving an xml file without

    - by little
    Hi, how do I work with an XML file so that, when I update and save it, the commented lines are still present? Here's my code snippet for saving the file:

        public static void WriteSettings(Settings settings, string path)
        {
            XmlSerializer serializer = new XmlSerializer(typeof(Settings));
            TextWriter writer = new StreamWriter(path);
            serializer.Serialize(writer, settings);
            writer.Close();
        }
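    XmlSerializer only round-trips the object graph, so comments in the file are discarded on load and never written back. A minimal sketch of one workaround, assuming the settings live under a <Settings> root element (the element names here are hypothetical): edit the document in place with XmlDocument, which keeps comment nodes intact.

        using System.Xml;

        public static void UpdateSetting(string path, string elementName, string newValue)
        {
            XmlDocument doc = new XmlDocument();
            doc.Load(path);  // comments are loaded as XmlComment nodes
            XmlNode node = doc.SelectSingleNode("/Settings/" + elementName);
            if (node != null)
                node.InnerText = newValue;  // change only the value
            doc.Save(path);  // comments are written back out unchanged
        }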

    Read the article

  • Secure C# Assemblies from unauthorized Callers

    - by Creepy Gnome
    Is there any way to secure your assembly down to the class/property and class/method level, to prevent them being used or called from another assembly that isn't signed by our company? I would like to do this without any requirements on strong naming (like using StrongNameIdentityPermission) and stick with how an assembly is signed. I really do not want to resort to the InternalsVisibleTo attribute, as that is not maintainable in an ever-changing software ecosystem. For example:

    Scenario One
      - Foo.dll is signed by my company and Bar.dll is not signed at all.
      - Foo has Class A; Bar has Class B.
      - Class A has public method GetSomething().
      - Class B tries to call Foo.A.GetSomething() and is rejected.
      - "Rejected" can mean an exception or being ignored in some way.

    Scenario Two
      - Foo.dll is signed by my company and Moo.dll is also signed by my company.
      - Foo has Class A; Moo has Class C.
      - Class A has public method GetSomething().
      - Class C tries to call Foo.A.GetSomething() and is not rejected.
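    For illustration, one runtime approach is to compare the calling assembly's public key token against your own. This is a sketch only: it keys off the strong name, which the question would rather avoid, and a true Authenticode check would have to inspect the caller's file certificate instead; the token bytes below are placeholders.

        using System;
        using System.Linq;
        using System.Reflection;
        using System.Runtime.CompilerServices;

        public class A
        {
            // hypothetical company public key token
            private static readonly byte[] CompanyToken =
                { 0x01, 0x23, 0x45, 0x67, 0x89, 0xAB, 0xCD, 0xEF };

            [MethodImpl(MethodImplOptions.NoInlining)] // keep the real caller one frame up
            public string GetSomething()
            {
                byte[] caller = Assembly.GetCallingAssembly().GetName().GetPublicKeyToken();
                if (caller == null || !caller.SequenceEqual(CompanyToken))
                    throw new UnauthorizedAccessException("Caller is not signed by us.");
                return "something";
            }
        }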

    Read the article

  • ASP.NET MVC Generic Partial

    - by gnome
    Is it possible to have a partial view inherit more than one model? I have three models (Contacts, Clients, Vendors) that all have address information (Address). In the interest of being DRY, I pulled the address info into its own model, Addresses. I created a partial create/update view for addresses and want to render this in the other three models' create/update views.
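    A sketch of the usual pattern, assuming each model exposes the shared address type as a property: a partial can't inherit multiple models, but it can be strongly typed to the shared model and rendered from each parent view against that property. The class shapes and view name here are illustrative.

        public class Address
        {
            public string Street { get; set; }
            public string City { get; set; }
        }

        public class Contact { public Address Address { get; set; } /* ... */ }
        public class Client  { public Address Address { get; set; } /* ... */ }
        public class Vendor  { public Address Address { get; set; } /* ... */ }

        // Type the partial view (e.g. "AddressPartial") to Address, then render it
        // from each create/update view against that view's own model:
        //   <% Html.RenderPartial("AddressPartial", Model.Address); %>  // WebForms views
        //   @Html.Partial("AddressPartial", Model.Address)              // Razor views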

    Read the article

  • Using FiddlerCore to capture HTTP Requests with .NET

    - by Rick Strahl
    Over the last few weeks I’ve been working on my Web load testing utility West Wind WebSurge. One of the key components of a load testing tool is the ability to capture URLs effectively so that you can play them back later under load. One of the options in WebSurge for capturing URLs is to use its built-in capture tool, which acts as an HTTP proxy to capture any HTTP and HTTPS traffic from most Windows HTTP clients, including Web browsers as well as standalone Windows applications and services. To make this happen, I used Eric Lawrence’s awesome FiddlerCore library, which provides most of the functionality of his desktop Fiddler application, all rolled into an easy-to-use library that you can plug into your own applications. FiddlerCore makes it almost too easy to capture HTTP content!

    For WebSurge I needed to capture all HTTP traffic in order to capture the full HTTP request – URL, headers and any content posted by the client. The result of what I ended up creating is a semi-generic capture form. In this post I’m going to demonstrate how easy it is to use FiddlerCore to build this HTTP capture form. If you want to jump right in, here are the links to get Telerik’s FiddlerCore and the code for the demo provided here:

      - FiddlerCore Download
      - FiddlerCore on NuGet
      - Show me the Code (WebSurge integration code from GitHub)
      - Download the WinForms Sample Form
      - West Wind WebSurge (example implementation in live app)

    Note that FiddlerCore is bound by a license for commercial usage – see license.txt in the FiddlerCore distribution for details.

    Integrating FiddlerCore

    FiddlerCore is a library that simply plugs into your application. You can download it from the Telerik site and manually add the assemblies to your project, or you can simply install the NuGet package via:

        PM> Install-Package FiddlerCore

    The library consists of FiddlerCore.dll as well as a couple of support libraries (CertMaker.dll and BCMakeCert.dll) that are used for installing SSL certificates. I’ll have more on SSL captures and certificate installation later in this post. But first let’s see how easy it is to use FiddlerCore to capture HTTP content by looking at how to build the capture form.

    Capturing HTTP Content

    Once the library is installed it’s super easy to hook up Fiddler functionality. Fiddler includes a number of static class methods on the FiddlerApplication object that can be called to hook up callback events as well as to actually start monitoring HTTP URLs. In the following code, directly lifted from WebSurge, I configure a few filter options on a form-level object from the user inputs shown on the form, assigning them to a capture options object. In the live application these settings are persisted configuration values, but in the demo they are one-time values initialized and set on the form.
    Once these options are set, I hook up the AfterSessionComplete event to capture every URL that passes through the proxy after the request is completed, and start up the proxy service:

        void Start()
        {
            if (tbIgnoreResources.Checked)
                CaptureConfiguration.IgnoreResources = true;
            else
                CaptureConfiguration.IgnoreResources = false;

            string strProcId = txtProcessId.Text;
            if (strProcId.Contains('-'))
                strProcId = strProcId.Substring(strProcId.IndexOf('-') + 1).Trim();
            strProcId = strProcId.Trim();

            int procId = 0;
            if (!string.IsNullOrEmpty(strProcId))
            {
                if (!int.TryParse(strProcId, out procId))
                    procId = 0;
            }
            CaptureConfiguration.ProcessId = procId;
            CaptureConfiguration.CaptureDomain = txtCaptureDomain.Text;

            FiddlerApplication.AfterSessionComplete += FiddlerApplication_AfterSessionComplete;
            FiddlerApplication.Startup(8888, true, true, true);
        }

    The key lines for FiddlerCore are just the last two: the event hookup and the Startup() method call. Here I only hook up the AfterSessionComplete event, but there are a number of other events that hook various stages of the HTTP request cycle, including BeforeRequest, BeforeResponse, RequestHeadersAvailable, ResponseHeadersAvailable and so on. In my case I want to capture the request data, and I actually have several options for doing so. AfterSessionComplete is the last event that fires in the request sequence, and it’s the most common choice for capturing content because it is one place where you can look at both the request and the response data.

    The implementation of AfterSessionComplete is responsible for capturing all HTTP request headers and it looks something like this:

        private void FiddlerApplication_AfterSessionComplete(Session sess)
        {
            // Ignore HTTPS connect requests
            if (sess.RequestMethod == "CONNECT")
                return;

            if (CaptureConfiguration.ProcessId > 0)
            {
                if (sess.LocalProcessID != 0 &&
                    sess.LocalProcessID != CaptureConfiguration.ProcessId)
                    return;
            }

            if (!string.IsNullOrEmpty(CaptureConfiguration.CaptureDomain))
            {
                if (sess.hostname.ToLower() != CaptureConfiguration.CaptureDomain.Trim().ToLower())
                    return;
            }

            if (CaptureConfiguration.IgnoreResources)
            {
                string url = sess.fullUrl.ToLower();

                var extensions = CaptureConfiguration.ExtensionFilterExclusions;
                foreach (var ext in extensions)
                {
                    if (url.Contains(ext))
                        return;
                }

                var filters = CaptureConfiguration.UrlFilterExclusions;
                foreach (var urlFilter in filters)
                {
                    if (url.Contains(urlFilter))
                        return;
                }
            }

            if (sess == null || sess.oRequest == null || sess.oRequest.headers == null)
                return;

            string headers = sess.oRequest.headers.ToString();
            var reqBody = sess.GetRequestBodyAsString();

            // if you wanted to capture the response
            //string respHeaders = sess.oResponse.headers.ToString();
            //var respBody = sess.GetResponseBodyAsString();

            // replace the HTTP line to inject full URL
            string firstLine = sess.RequestMethod + " " + sess.fullUrl + " " +
                               sess.oRequest.headers.HTTPVersion;
            int at = headers.IndexOf("\r\n");
            if (at < 0)
                return;
            headers = firstLine + "\r\n" + headers.Substring(at + 1);

            string output = headers + "\r\n" +
                            (!string.IsNullOrEmpty(reqBody) ? reqBody + "\r\n" : string.Empty) +
                            Separator + "\r\n\r\n";

            BeginInvoke(new Action<string>((text) =>
            {
                txtCapture.AppendText(text);
                UpdateButtonStatus();
            }), output);
        }

    The code starts by filtering out some requests based on the capture options set before the capture is started. These options/filters are applied when requests actually come in, which is very useful for narrowing down the requests that are captured for playback based on options the user picked. I find it useful to limit captures to a certain domain, as well as filtering out some request types like static resources – images, CSS, scripts etc. This is of course optional, but I think it’s a common scenario and WebSurge makes good use of this feature.

    AfterSessionComplete, like other FiddlerCore events, provides a Session object parameter which contains all the request and response details; there are oRequest and oResponse objects to hold their respective data. In my case I’m interested in the raw request headers and body only, but as you can see in the commented code you can also retrieve the response headers and body. Here the code captures the request headers and body and simply appends the output to the textbox on the screen. Note that the Fiddler events are asynchronous, so in order to display the content in the UI they have to be marshaled back to the UI thread with BeginInvoke, which here simply takes the generated headers and appends them to the existing textbox text on the form.

    As each request is processed, the headers are captured and appended to the bottom of the textbox, resulting in a session HTTP capture in the format that WebSurge internally supports: raw request headers with a customized first HTTP header line that includes the full URL rather than a server-relative URL. When the capture is done, the user can either copy the raw HTTP session to the clipboard or directly save it to file. This raw capture format is the same format WebSurge and Fiddler use to import/export request data.

    While this code is application specific, it demonstrates the kind of logic that you can easily apply to the request capture process, which is one of the reasons why FiddlerCore is so powerful. You get to choose what content you want to look at as part of your own application logic, and you can then decide how to capture or use that data. The actual captured data in this case is only a string: the user can edit the data by hand or, in the case of WebSurge, save it to disk and automatically open the captured session as a new load test.

    Stopping the FiddlerCore Proxy

    Finally, to stop capturing requests you simply disconnect the event handler and call the FiddlerApplication.Shutdown() method:

        void Stop()
        {
            FiddlerApplication.AfterSessionComplete -= FiddlerApplication_AfterSessionComplete;

            if (FiddlerApplication.IsStarted())
                FiddlerApplication.Shutdown();
        }

    As you can see, adding HTTP capture functionality to an application is very straightforward. FiddlerCore offers tons of features I’m not even touching on here – I suspect basic captures are the most common scenario, but a lot of different things can be done with FiddlerCore’s simple API. Sky’s the limit! The source code for this sample capture form (WinForms) is provided as part of this article.
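    For reference, the same start/capture/stop pattern works outside of WinForms too. Here is a minimal console sketch using only the calls shown above (it assumes the FiddlerCore NuGet package is referenced; nothing here is specific to WebSurge):

        using System;
        using Fiddler;

        class Program
        {
            static void Main()
            {
                // Log each completed request's method and URL to the console
                FiddlerApplication.AfterSessionComplete += sess =>
                    Console.WriteLine(sess.RequestMethod + " " + sess.fullUrl);

                // Port 8888, same flags used in the Start() method above
                FiddlerApplication.Startup(8888, true, true, true);

                Console.WriteLine("Capturing... press Enter to stop.");
                Console.ReadLine();

                if (FiddlerApplication.IsStarted())
                    FiddlerApplication.Shutdown();
            }
        }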
    Adding Fiddler Certificates with FiddlerCore

    One of the sticking points in West Wind WebSurge has been that if you wanted to capture HTTPS/SSL traffic, you needed to have the full version of Fiddler with HTTPS decryption enabled. Essentially you had to use Fiddler to configure HTTPS decryption and the associated installation of the Fiddler local client certificate that is used for local decryption of incoming SSL traffic. While this works just fine, requiring Fiddler to be installed and using a separate application to configure the SSL functionality isn’t ideal. Fortunately FiddlerCore includes the tools to register the Fiddler certificate directly.

    Why does Fiddler need a Certificate in the first Place?

    Fiddler and FiddlerCore are essentially HTTP proxies, which means they inject themselves into the HTTP conversation by re-routing HTTP traffic to a special HTTP port (8888 by default for Fiddler) and then forwarding the HTTP data to the original client. Fiddler injects itself as the system proxy using the WinInet Windows settings, which are the same settings that Internet Explorer uses and that are configured in the Windows and Internet Explorer Internet Settings dialog. Most HTTP clients running on Windows pick up and apply these system-level proxy settings before establishing new HTTP connections, and that’s why most clients automatically work once Fiddler – or FiddlerCore/WebSurge – is running.

    For plain HTTP requests this just works – Fiddler intercepts the HTTP requests on the proxy port and then forwards them to the original port (80 for HTTP and 443 for SSL typically, but it could be any port). For SSL, however, this is not quite as simple – Fiddler can easily act as an HTTPS/SSL client to capture inbound requests from the server, but when it forwards the request to the client it also has to act as an SSL server and provide a certificate that the client trusts. This won’t be the original certificate from the remote site, but rather a custom local certificate that effectively simulates an SSL connection between the proxy and the client. If there is no custom certificate configured for Fiddler, the SSL request fails with a certificate validation error. The key for this to work is that a custom certificate has to be installed that the HTTPS client trusts on the local machine. For a much more detailed description of the process you can check out Eric Lawrence’s blog post on certificates.

    If you’re using the desktop version of Fiddler, you can install a local certificate into the Windows certificate store from the Options menu. This operation does several things:

      - It installs the Fiddler Root Certificate
      - It sets trust to this Root Certificate
      - A new client certificate is generated for each HTTPS site monitored

    Certificate Installation with FiddlerCore

    You can also provide this same functionality using FiddlerCore, which includes a CertMaker class.
    CertMaker is straightforward to use, and it makes it easy to create some simple helpers that can install and uninstall a Fiddler root certificate:

        public static bool InstallCertificate()
        {
            if (!CertMaker.rootCertExists())
            {
                if (!CertMaker.createRootCert())
                    return false;
                if (!CertMaker.trustRootCert())
                    return false;
            }
            return true;
        }

        public static bool UninstallCertificate()
        {
            if (CertMaker.rootCertExists())
            {
                if (!CertMaker.removeFiddlerGeneratedCerts(true))
                    return false;
            }
            return true;
        }

    InstallCertificate() works by first checking whether the root certificate is already installed and, if it isn’t, goes ahead and creates a new one. Creating the certificate is a two-step process – first the actual certificate is created, and then it’s moved into the certificate store to become trusted. I’m not sure why you’d ever split these operations up, since a cert created without trust isn’t going to be of much value, but there are two distinct steps.

    When you trigger the trustRootCert() method, a message box pops up on the desktop to let you know that you’re about to trust a local private certificate. This is a security feature to ensure that you really want to trust the Fiddler root, since you are essentially installing a man-in-the-middle certificate. It’s quite safe to use this generated root certificate, because it’s been specifically generated for your machine and thus is not usable from external sources; the only way to use this certificate in a trusted way is from the local machine. In other words, unless somebody has physical access to your machine, there’s no useful way to hijack this certificate and use it for nefarious purposes (see Eric’s post for more details).

    Once the root certificate has been installed, FiddlerCore/Fiddler creates new certificates for each site that is connected to with HTTPS, so you can end up with quite a few temporary certificates in your certificate store. To uninstall, you can either use Fiddler and simply uncheck the Decrypt HTTPS traffic option followed by the Remove Fiddler certificates button, or you can use FiddlerCore’s CertMaker.removeFiddlerGeneratedCerts(), which removes the root cert and any of the intermediary certificates Fiddler created. Keep in mind that when you uninstall, you uninstall the certificate for both FiddlerCore and Fiddler, so use UninstallCertificate() with care and realize that you might affect the Fiddler application’s operation by doing so as well.

    When to check for an installed Certificate

    Note that the check to see if the root certificate exists is pretty fast, while the actual process of installing the certificate is a relatively slow operation that takes a few seconds even on a fast machine. Further, the trust operation pops up a message box, so you probably don’t want to install the certificate repeatedly. Since the check for the root certificate is fast, you can easily put a call to InstallCertificate() in any capture startup code – in which case the certificate installation only triggers when a certificate is in fact not installed. Personally I like to make certificate installation explicit – just like Fiddler does – so in WebSurge I use a small drop-down option on the menu to install or uninstall the SSL certificate. This calls the InstallCertificate and UninstallCertificate functions respectively; the experience is similar to what you get in Fiddler, with the extra dialog box popping up to prompt confirmation for installation of the root certificate.
    Once the cert is installed you can then capture SSL requests. There’s a gotcha, however…

    Gotcha: FiddlerCore Certificates don’t stick by Default

    When I originally tried to use the Fiddler certificate installation I ran into an odd problem. I was able to install the certificate, and immediately after installation I was able to capture HTTPS requests. Then I would exit the application, come back in, try the same HTTPS capture again, and it would fail due to a missing certificate. CertMaker.rootCertExists() would return false after every restart, and if I re-installed the certificate, a new certificate would get added to the certificate store, resulting in a bunch of duplicated root certificates with different keys. What the heck?

    CertMaker and BCMakeCert create non-sticky Certificates

    It turns out that FiddlerCore by default uses different components from what the full version of Fiddler uses. Fiddler uses a Windows utility called MakeCert.exe to create the Fiddler root certificate. FiddlerCore, however, installs the CertMaker.dll and BCMakeCert.dll assemblies, which use a different crypto library (Bouncy Castle) for certificate creation than MakeCert.exe, which uses the Windows Crypto API. These assemblies provide support for running Fiddler on non-Windows platforms under Mono, as well as support for some non-Windows certificate platforms like iOS and Android decryption.

    The bottom line is that certificates created with the FiddlerCore-provided Bouncy Castle assemblies are not sticky by default, because they are not cached as they are in Fiddler proper. To get certificates to ‘stick’ you have to explicitly cache them in Fiddler’s internal preferences. A cache-aware version of InstallCertificate looks something like this:

        public static bool InstallCertificate()
        {
            if (!CertMaker.rootCertExists())
            {
                if (!CertMaker.createRootCert())
                    return false;
                if (!CertMaker.trustRootCert())
                    return false;

                App.Configuration.UrlCapture.Cert =
                    FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.cert", null);
                App.Configuration.UrlCapture.Key =
                    FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.key", null);
            }
            return true;
        }

        public static bool UninstallCertificate()
        {
            if (CertMaker.rootCertExists())
            {
                if (!CertMaker.removeFiddlerGeneratedCerts(true))
                    return false;
            }
            App.Configuration.UrlCapture.Cert = null;
            App.Configuration.UrlCapture.Key = null;
            return true;
        }

    In this code I store the Fiddler cert and private key in application configuration settings that are persisted with the rest of the application settings (the App.Configuration.UrlCapture object) when WebSurge shuts down. The values are read out of Fiddler’s internal preferences store, which is populated after a new certificate has been created. Likewise, I clear out the configuration settings when the certificate is uninstalled. For these settings to be used, you also have to load them into the Fiddler preferences *before* any call to rootCertExists() is made.
    I do this in the capture form’s constructor:

        public FiddlerCapture(StressTestForm form)
        {
            InitializeComponent();

            CaptureConfiguration = App.Configuration.UrlCapture;
            MainForm = form;

            if (!string.IsNullOrEmpty(App.Configuration.UrlCapture.Cert))
            {
                FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.key",
                    App.Configuration.UrlCapture.Key);
                FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.cert",
                    App.Configuration.UrlCapture.Cert);
            }
        }

    This is kind of a drag to do and not documented anywhere that I could find, so hopefully this will save you some grief if you want to work with the stock certificate logic that installs with FiddlerCore.

    MakeCert provides sticky Certificates and the same functionality as Fiddler

    But there’s actually an easier way. If you want to skip the Fiddler preference configuration code above, you can choose to distribute MakeCert.exe instead of CertMaker.dll and BCMakeCert.dll. When you use MakeCert.exe, the certificate settings are stored in Windows, so they are available without any custom configuration inside of your application. It’s easier to integrate, and as long as you run on Windows and don’t need to support iOS or Android devices it is simply easier to deal with.

    To integrate it into your project, remove the reference to CertMaker.dll (and the BCMakeCert.dll assembly). Instead, copy MakeCert.exe into your output folder: include MakeCert.exe in your project, set its Build Action to None, and set Copy to Output Directory to Copy if newer. After this change the CertMaker.dll reference is gone from the project, and the CertMaker.dll and BCMakeCert.dll files are no longer deployed. Keep in mind that these DLLs are resources of the FiddlerCore NuGet package, so updating the package may end up pushing those files back into your project. Once MakeCert.exe is distributed, FiddlerCore checks for it before using the assemblies, so as long as MakeCert.exe exists it’ll be used for certificate creation (at least on Windows).

    Summary

    FiddlerCore is a pretty sweet tool, and it’s absolutely awesome that we get to plug most of the functionality of Fiddler right into our own applications. A few years back I tried to build this sort of functionality myself for an app and ended up giving up, because it’s a big job to get HTTP right – especially if you need to support SSL. FiddlerCore now provides that functionality as a turnkey solution that can be plugged into your own apps easily.

    The only downside is FiddlerCore’s documentation for more advanced features like certificate installation, which is pretty sketchy. While for the most part FiddlerCore’s feature set is easy to work with without any documentation, advanced features are often not intuitive to glean by just using IntelliSense or the FiddlerCore help file reference (which is not terribly useful). While Eric Lawrence is very responsive on his forum and on Twitter, there simply isn’t much useful documentation on Fiddler/FiddlerCore available online. If you run into trouble, the forum is probably the first place to look, and then ask a question if you can’t find the answer. The best documentation you can find is Eric’s Fiddler Book, which covers a ton of Fiddler and FiddlerCore functionality. The book is a great reference to Fiddler’s feature set as well as providing great insights into the HTTP protocol.
    The second half of the book, which gets into the innards of HTTP, is an excellent read for anybody who wants to know more about some of the more arcane aspects and special behaviors of HTTP – it’s well worth the read. While the book has tons of information in a very readable format, it’s unfortunately not a great reference, as it’s hard to find things in it, and because it’s not available online you can’t electronically search for the great content in it. But it’s hard to complain about any of this given the obvious effort and love that’s gone into this awesome product for all of these years. A mighty big thanks to Eric Lawrence for having created this useful tool that so many of us use all the time, and also to Telerik for picking up Fiddler/FiddlerCore and providing Eric the resources to support and improve this wonderful tool full time while keeping it free for all. Kudos!

    Resources

      - FiddlerCore Download
      - FiddlerCore NuGet
      - Fiddler Capture Sample Form
      - Fiddler Capture Form in West Wind WebSurge (GitHub)
      - Eric Lawrence’s Fiddler Book

    © Rick Strahl, West Wind Technologies, 2005-2014. Posted in .NET, HTTP

    Read the article

  • TLS (STARTTLS) Failure After 10.6 Upgrade to Open Directory Master

    - by Thomas Kishel
    Hello. Environment: Mac OS X 10.6.3 install/import of a Mac OS X 10.5.8 Open Directory Master server. After that upgrade, LDAP+TLS fails on our Mac OS X 10.5, 10.6, CentOS, Debian, and FreeBSD clients (Apache2 and PAM).

    Testing using ldapsearch:

        ldapsearch -ZZ -H ldap://gnome.darkhorse.com -v -x -b "dc=darkhorse,dc=com" '(uid=donaldr)' uid

    ... fails with:

        ldap_start_tls: Protocol error (2)

    Adding "-d 9" shows:

        res_errno: 2, res_error: <unsupported extended operation>, res_matched: <>

    Testing without requiring STARTTLS, or with LDAPS:

        ldapsearch -H ldap://gnome.darkhorse.com -v -x -b "dc=darkhorse,dc=com" '(uid=donaldr)' uid
        ldapsearch -H ldaps://gnome.darkhorse.com -v -x -b "dc=darkhorse,dc=com" '(uid=donaldr)' uid

    ... succeeds with:

        # donaldr, users, darkhorse.com
        dn: uid=donaldr,cn=users,dc=darkhorse,dc=com
        uid: donaldr

        # search result
        search: 2
        result: 0 Success

        # numResponses: 2
        # numEntries: 1
        result: 0 Success

    (We are specifying "TLS_REQCERT never" in /etc/openldap/ldap.conf.)

    Testing with openssl:

        openssl s_client -connect gnome.darkhorse.com:636 -showcerts -state

    ... succeeds:

        CONNECTED(00000003)
        SSL_connect:before/connect initialization
        SSL_connect:SSLv2/v3 write client hello A
        SSL_connect:SSLv3 read server hello A
        depth=1 /C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department
        verify error:num=19:self signed certificate in certificate chain
        verify return:0
        SSL_connect:SSLv3 read server certificate A
        SSL_connect:SSLv3 read server done A
        SSL_connect:SSLv3 write client key exchange A
        SSL_connect:SSLv3 write change cipher spec A
        SSL_connect:SSLv3 write finished A
        SSL_connect:SSLv3 flush data
        SSL_connect:SSLv3 read finished A
        ---
        Certificate chain
         0 s:/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=MIS/CN=gnome.darkhorse.com
           i:/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department
         1 s:/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department
           i:/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department
        ---
        Server certificate
        -----BEGIN CERTIFICATE-----
        <deleted for brevity>
        -----END CERTIFICATE-----
        subject=/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=MIS/CN=gnome.darkhorse.com
        issuer=/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department
        ---
        No client certificate CA names sent
        ---
        SSL handshake has read 2640 bytes and written 325 bytes
        ---
        New, TLSv1/SSLv3, Cipher is AES256-SHA
        Server public key is 1024 bit
        Compression: NONE
        Expansion: NONE
        SSL-Session:
            Protocol  : TLSv1
            Cipher    : AES256-SHA
            Session-ID: D3F9536D3C64BAAB9424193F81F09D5C53B7D8E7CB5A9000C58E43285D983851
            Session-ID-ctx:
            Master-Key: E224CC065924DDA6FABB89DBCC3E6BF89BEF6C0BD6E5D0B3C79E7DE927D6E97BF12219053BA2BB5B96EA2F6A44E934D3
            Key-Arg   : None
            Start Time: 1271202435
            Timeout   : 300 (sec)
        Verify return code: 0 (ok)

    So we believe that the slapd daemon is reading our certificate and serving it to LDAP clients. When SSL is enabled in the LDAP section of the Open Directory service, Apple Server Admin adds a ProgramArguments entry ("-h ldaps:///") to /System/Library/LaunchDaemons/org.openldap.slapd.plist, and adds TLSCertificateFile, TLSCertificateKeyFile, TLSCACertificateFile, and TLSCertificatePassphraseTool to /etc/openldap/slapd_macosxserver.conf. While that appears to be enough for LDAPS, it appears not to be enough for TLS.
    Comparing our 10.6 and 10.5 slapd.conf and slapd_macosxserver.conf configuration files yields no clues. Replacing our certificate (generated with a self-signed CA) with a self-signed certificate generated by Apple Server Admin results in no change in the ldapsearch results. Setting -d to 256 in /System/Library/LaunchDaemons/org.openldap.slapd.plist logs:

        4/13/10 5:23:35 PM org.openldap.slapd[82162] conn=384 op=0 EXT oid=1.3.6.1.4.1.1466.20037
        4/13/10 5:23:35 PM org.openldap.slapd[82162] conn=384 op=0 do_extended: unsupported operation "1.3.6.1.4.1.1466.20037"
        4/13/10 5:23:35 PM org.openldap.slapd[82162] conn=384 op=0 RESULT tag=120 err=2 text=unsupported extended operation

    Any debugging advice much appreciated.

    -- Tom Kishel
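    For reference, the TLS directives that Server Admin writes to /etc/openldap/slapd_macosxserver.conf take roughly this shape (a sketch only – the file paths and tool path below are illustrative placeholders, not values from this server):

        # Illustrative slapd TLS stanza; paths are hypothetical
        TLSCertificateFile           /etc/certificates/gnome.darkhorse.com.crt
        TLSCertificateKeyFile        /etc/certificates/gnome.darkhorse.com.key
        TLSCACertificateFile         /etc/certificates/ca.crt
        TLSCertificatePassphraseTool /usr/sbin/certadmin

    Since LDAPS works but STARTTLS is rejected with "unsupported extended operation" (OID 1.3.6.1.4.1.1466.20037 is the StartTLS extension), it looks like slapd accepted the certificate configuration for the ldaps listener but never enabled StartTLS on the plain LDAP listener, so these directives are worth re-checking against the working 10.5 box.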

    Read the article

  • Prepping a conference

    - by Laurent Bugnion
    I have had the chance to talk at many conferences these past few years, and came up with a way of preparing for them that works really well for me. Most importantly, it makes it quite easy to overcome an emergency (for example, if my laptop were to suddenly lose data). The whole code as well as the slides and other documents are in the cloud. I also use source control for my demos, so that I always have the latest and the greatest, but also a history of changes I made to my demos. Finally, I have a system of code snippets that works great, and I have often had very positive remarks from the audience about it.

    Putting everything in the cloud

    The one thing I used to be the most scared of was a sudden crash of my laptop and being unable to restore in time for a conference. Most conferences ask speakers to send slides a few days (or weeks…) in advance, but let's face it, we all have last-minute changes to our talks, and I always come to the conference with updated slides that I pass to the management team. The answer to that dilemma used to be working off memory sticks, and that worked OK. However, last year I started putting all the documents relating to a conference in a DropBox folder, and that works great too. Obviously DropBox works only if you have connectivity, so if I for instance update slides while on an international flight, I cannot save to the cloud. The obvious answer to that is to back everything up on a memory stick… but I have to admit, I have been trusting my luck, working off my laptop HD and then syncing everything to the cloud after landing. Of course, on some US national flights you get WiFi on board, so in that case it is even simpler :)

    Usually after the conference is done, I remove the files from DropBox and copy them to their "final destination". They are backed up from there to BackBlaze, the great online backup service I am using routinely (I currently have about 90GB of data in BackBlaze).

    Outlining the presentations

    I like to have a written outline of my presentations somewhere. I keep it simple: I just write the various sections of the presentation with timing. I guess it is a remnant of the time when I was a private pilot, using checklists for flight preparation. For example:

        Demo about designability 15' (0:37)
          Switch to Blend
          Open MainPage.xaml
          Create a DataTemplate
          ...

    Here I can immediately see during the presentation if I am taking too much time for my demo (0:37 is where I need to be when I am done with this section of the presentation, and 15' is the time that this particular section takes). I keep these sections reasonable; I don't detail every step of the preparation. Typically I have one such section for every 10-15 minutes of my talks.

    Yes, I am timing my presentations. I keep adjusting these numbers when I rehearse, and this really helps me feel more confident during the presentation. This is especially important for presentations that are long, like my MIX11 demo, which clocked in at 57 minutes (I had a lot of stuff to show…). Such presentations are risky, because if anything goes wrong you will have to cut stuff, so the answer to that is: rehearse, rehearse, and when you're done rehearsing, rehearse a little more.

    I also have a "Preparation" section where I outline what I need to do before a presentation. For instance:

        Preparation
          Reboot in VHD
          Make sure MSN and Twitter are not running
          Open VS10 and load demo
          Open Blend and load demo
          Run the WP7 emulator
          ...
    I typically start preparing my laptop an hour before the talk, starting everything I need and then putting the laptop to sleep.

    Saving and printing the outline, timing

    Printing is a real problem, because it is really hard to find a printer at most conference venues, and also quite hard in hotels. To solve that, I simply write everything in OneNote (synced to the cloud – now you start to know what I like ;) and then print it to a PDF (I use CutePDF Writer) that I save to my Kindle. During the presentation, I read the outline off the Kindle (I mostly just need a quick check to see how my timing is). For timing during the presentation, I use the free tool ChronoGPS on my Windows Phone 7, but of course any phone these days has a clock/chrono application. At some conferences they even have timers that the presenters can see, but those tend to count down and I prefer to count up… so I just use my own :)

    Source control for demos

    For demos, I create a separate folder and use Mercurial as source control. Mercurial has the huge advantage (over SVN or TFS) of working offline too, so I can commit while on a plane, and all the history is saved. Then when I have connectivity I push everything to the cloud (I am using the fantastic Trunksapp.com for my private repositories). Here too, the obvious downside is the risk of losing my last changes if my laptop crashes before I can push to the cloud, and here too the obvious answer would be to work from a memory stick… though I have to admit I haven't done that lately (except when I was writing Silverlight 4 Unleashed, where I was really paranoid…).

    And code snippets?

    I am one of those presenters who hates to type in front of an audience. I can type really fast (writing two books has this advantage: it really teaches you to touch type and be fast at it), but in the context of an audience, on a stage where it is often damn cold (an issue I've had a lot at past conferences – air conditioning can freeze your fingers and make it really hard to type), it doesn't work as well. I don't know about you, but I really dislike watching a presentation where the speaker uses the backspace key more often than the others ;)

    To solve that, I like to have my code ready in snippets and drag them onto the screen. Then I can spend time explaining each code snippet while highlighting portions of the code (always highlight what you talk about – the audience often doesn't even see the cursor and doesn't know where you are on the screen!). Over the years I have used various solutions for code snippets, and now I have one that works really well… if you take a few precautions! I use the Visual Studio Toolbox.

    Preparing the code snippets

    You can store code snippets in the Toolbox for anything: XAML, C#, etc. I arrange the snippets in the order in which I need them, which is a great way to remember what comes next in the presentation. I also separate them by topic, to make them easier to find – for example when I switch to the slides and then back to the code. Remember that no matter how experienced you are, you will feel more nervous on stage than while you are preparing, so anything that makes it easier for you is going to benefit the audience too.

    To store a code snippet, I do the following:

      1. Open the final demo that you want to show to the audience in Visual Studio.
      2. In your code, select a snippet of code that you want to explain in particular.
      3. Make sure that the Visual Studio Toolbox is open (menu View > Toolbox, or Ctrl-Alt-X).
      4. Drag the selected snippet from the code window to the Toolbox.
      5. If needed, drag the snippet to the correct location (for example between two other code snippets, so that you can access it as you speak through the demo).
      6. Right-click the snippet and select Rename Item from the context menu. Select a meaningful name.

    For names, I use the following conventions:

      - If it is a method, I use the method's name.
      - If it is not a whole method, I use a descriptive name.
      - If it is the content of a method (i.e. the body only, without the method's signature), I use "-> MethodName". This reminds me during the presentation that this is only the body, and that I need to insert it into an existing signature. This is the case, for instance, when I use Visual Studio to automatically generate the members of an interface's implementation; then I only need to insert my snippet inside the generated method body.

    Saving the snippets

    This is the most important part!! A few times, VS10 has lost its settings on me. When that happens, the snippets are lost too! Yeah, that really sucks, especially (as happened once) when it is about an hour before a talk… Stress and sweat follow – not good conditions in which to start a talk in front of an audience, believe me. Thankfully, saving snippets is really easy with the following steps:

      1. Select the menu Tools > Import and Export Settings.
      2. Select Export selected environment settings and press Next.
      3. Uncheck All Settings. Then expand General Settings and select Toolbox (only!). Press Next.
      4. Select your source control folder and save under a meaningful name (for instance Snippets.vssettings).
      5. Commit to source control and push to the cloud.

    By the way, this also has the advantage of putting the snippets file (which is an XML file) under source control, so you get history for free on that file!

    Reimporting the snippets

    If VS loses its settings and you need to reimport the snippets, this can be done super easily and very fast. Make sure that the Toolbox is empty first: when you import snippets, they are merged with the existing ones rather than replacing the content of the Toolbox. Unless merging is really what you want, make sure that your Toolbox is clean before you import; it is really easier.

      1. Select the menu Tools > Import and Export Settings.
      2. Select Import selected environment settings and press Next.
      3. Select No, just import new settings and press Next.
      4. Press Browse and select the Snippets.vssettings file.
      5. Press Finish.

    Et voila, all your snippets appear again in the Toolbox. Whew, the worst was averted and you can start your demo without sweating! (I had to do that once literally 5 minutes before the start of a demo, while my laptop was already hooked to the projector, and it went just fine.)

    What about special tools?

    When using special tools (for example beta versions of tools you have early access to), or a special configuration of your laptop, things can get tricky, because you cannot really be sure that you will get a laptop with the same tools and the same configuration at the conference. To solve that, I use the following precautions:

    I make my demos from a Virtual Hard Disk. The great John Papa made a very easy-to-follow web page where he explains how to create a VHD and install Win7 to it. This gives you the full power of your laptop (as fast as booting from the metal). I have a basic configuration that I saved on a USB hard drive (Win7 plus drivers, basic settings for desktop, folder options, taskbar etc.) and Visual Studio 2010 SP1 on it. When preparing, I start by copying this "basis VHD" to my laptop.
    Then I install additional tools and configurations, and save the VHD back to the USB hard drive in a different folder. This would allow me to reinstall my demo environment quite fast, for example in case of hard drive failure: replace the hard drive, copy the VHD to it, configure the BCD, and you can start. Unfortunately this only works if the laptop itself still works. In the worst case of total failure, my safety net is backing up all the installers: the installers I use are synced across my laptops and backed up to BackBlaze. If the worst happens and my laptop is absolutely broken, I can download the installers from BackBlaze and install on another laptop. This of course takes some time, and if that happens 5 minutes before a presentation, well… I don't have an answer to that, except of course crossing my fingers. Still, all of that gives me additional security.

    Conclusion

    Remember folks, talking to an audience, large or small, will make you nervous. Just ask Scott Hanselman :) The goal here is to create the best possible conditions for yourself, and to create an environment where everything is saved and easy to restore, where everything is well known and provides you with additional confidence. The cooler you feel before the presentation (and during ;)), the better your presentation will be. Here too, the goal is to provide the best user experience you can, which in turn will make it more enjoyable for your audience! Happy presenting :)

    Laurent

    Laurent Bugnion (GalaSoft)
    Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • How to Use Images as Navigation with innerfade Slideshow?

    - by Katie
    I am very new to JavaScript and only have the most basic understanding of how it works, so please bear with me. :) I'm using the jquery.innerfade.js script to create a slideshow with fade transitions for a website I'm developing, and I have added navigation buttons (which are set as background images) that navigate between the "slides". The navigation buttons have three states: default/off, hover, and on (each state is a separate image). I created a separate JavaScript document to set the buttons to "on" when they are clicked. The "hover" state is achieved through the CSS. Both the slideshow and the navigation buttons work well. There is just one thing I want to add: I would like the appropriate navigation button to display as "on" while the related "slide" is playing.

    Here's the HTML:

        <div id="mainFeature">
          <ul id="theFeature">
            <li id="the1feature"><a href="#" name="#promo1"><img src="_images/carousel/promo1.jpg" /></a></li>
            <li id="the2feature"><a href="#" name="#promo2"><img src="_images/carousel/promo2.jpg" /></a></li>
            <li id="the3feature"><a href="#" name="#promo3"><img src="_images/carousel/promo3.jpg" /></a></li>
          </ul>
          <div id="promonav-con">
            <div id="primarypromonav">
              <ul class="links">
                <li id="the1title" class="promotop"><a rel="1" href="#promo1" class="promo1" id="promo1" onMouseDown="promo1on()"><strong>Botox Cosmetic</strong></a></li>
                <li id="the2title" class="promotop"><a rel="2" href="#promo2" class="promo2" id="promo2" onMouseDown="promo2on()"><strong>Promo 2</strong></a></li>
                <li id="the3title" class="promotop"><a rel="3" href="#promo3" class="promo3" id="promo3" onMouseDown="promo3on()"><strong>Promo 3</strong></a></li>
              </ul>
            </div>
          </div>
        </div>

    And here is the jquery.innerfade.js, with my changes:

        (function($) {
            $.fn.innerfade = function(options) {
                return this.each(function() {
                    $.innerfade(this, options);
                });
            };

            $.innerfade = function(container, options) {
                var settings = {
                    'speed': 'normal',
                    'timeout': 2000,
                    'containerheight': 'auto',
                    'runningclass': 'innerfade',
                    'children': null
                };
                if (options)
                    $.extend(settings, options);
                var elements;
                if (settings.children === null)
                    elements = $(container).children();
                else
                    elements = $(container).children(settings.children);
                if (elements.length > 1) {
                    $(container).css('position', 'relative')
                                .css('height', settings.containerheight)
                                .addClass(settings.runningclass);
                    for (var i = 0; i < elements.length; i++) {
                        $(elements[i]).css('z-index', String(elements.length - i))
                                      .css('position', 'absolute')
                                      .hide();
                    }
                    this.ifchanger = setTimeout(function() {
                        $.innerfade.next(elements, settings, 1, 0);
                    }, settings.timeout);
                    $(elements[0]).show();
                }
            };

            $.innerfade.next = function(elements, settings, current, last) {
                $(elements[last]).fadeOut(settings.speed);
                $(elements[current]).fadeIn(settings.speed, function() {
                    removeFilter($(this)[0]);
                });
                if ((current + 1) < elements.length) {
                    current = current + 1;
                    last = current - 1;
                } else {
                    current = 0;
                    last = elements.length - 1;
                }
                this.ifchanger = setTimeout(function() {
                    $.innerfade.next(elements, settings, current, last);
                }, settings.timeout);
            };
        })(jQuery);

        // **** remove Opacity-Filter in IE ****
        function removeFilter(element) {
            if (element.style.removeAttribute) {
                element.style.removeAttribute('filter');
            }
        }

        jQuery(document).ready(function() {
            jQuery('ul#theFeature').innerfade({
                speed: 1000,
                timeout: 7000,
                containerheight: '291px'
            });

            // jQuery('#mainFeature .links').children('li').children('a').attr('href', 'javascript:void(0);');
            jQuery('#mainFeature .links').children('li').children('a').click(function() {
                clearTimeout(jQuery.innerfade.ifchanger);
                for (i = 1; i < 5; i++) {
                    jQuery('#the' + i + 'feature').css("display", "none");
                    // jQuery('#the' + i + 'title').children('a').css("background-color", "#226478");
                }
                // if (the_widths[(jQuery(this).attr('rel') - 1)] == 960) {
                //     jQuery("#vic").hide();
                // } else {
                //     jQuery("#vic").show();
                // }
                // jQuery('#the' + (jQuery(this).attr('rel')) + 'title').css("background-color", "#286a7f");
                jQuery('#the' + (jQuery(this).attr('rel')) + 'feature').css("display", "block");
                clearTimeout(jQuery.innerfade.ifchanger);
            });
        });

    And the separate JavaScript that I created (note: the original had a duplicated "promo2" line in promo1on(), corrected here to reset promo3):

        function promo1on() {
            document.getElementById("promo1").className = "promo1on";
            document.getElementById("promo2").className = "promo2";
            document.getElementById("promo3").className = "promo3";
        }

        function promo2on() {
            document.getElementById("promo2").className = "promo2on";
            document.getElementById("promo1").className = "promo1";
            document.getElementById("promo3").className = "promo3";
        }

        function promo3on() {
            document.getElementById("promo3").className = "promo3on";
            document.getElementById("promo1").className = "promo1";
            document.getElementById("promo2").className = "promo2";
        }

    And, finally, the CSS:

        #mainFeature {float: left; width: 672px; height: 290px; margin: 0 0 9px 0; list-style: none;}
        #mainFeature li {list-style: none;}
        #mainFeature #theFeature {margin: 0; padding: 0; position: relative;}
        #mainFeature #theFeature li {position: absolute; top: 0; left: 0;}
        #promonav-con {width: 463px; height: 26px; padding: 0; margin: 0; position: absolute; z-index: 900; top: 407px; left: 283px;}
        #primarypromonav {padding: 0; margin: 0;}
        #mainFeature .links {padding: 0; margin: 0; list-style: none; position: relative; font-family: arial, verdana, sans-serif; width: 463px; height: 26px;}
        #mainFeature .links li.promotop {list-style: none; display: block; float: left; display: inline; margin: 0; padding: 0;}
        #mainFeature .links li a {display: block; float: left; display: inline; height: 26px; text-decoration: none; margin: 0; padding: 0; cursor: pointer;}
        #mainFeature .links li a strong {margin-left: -9999px;}
        #mainFeature .links li a.promo1 {background: url(../_images/carouselnav/promo1.gif); width: 155px;}
        #mainFeature .links li:hover a.promo1 {background: url(../_images/carouselnav/promo1_hover.gif); width: 155px;}
        #mainFeature .links li a.promo1:hover {background: url(../_images/carouselnav/promo1_hover.gif); width: 155px;}
        .promo1on {background: url(../_images/carouselnav/promo1_on.gif); width: 155px;}
        #mainFeature .links li a.promo2 {background: url(../_images/carouselnav/promo2.gif); width: 153px;}
        #mainFeature .links li:hover a.promo2 {background: url(../_images/carouselnav/promo2_hover.gif); width: 153px;}
        #mainFeature .links li a.promo2:hover {background: url(../_images/carouselnav/promo2_hover.gif); width: 153px;}
        .promo2on {background: url(../_images/carouselnav/promo2_on.gif); width: 153px;}
        #mainFeature .links li a.promo3 {background: url(../_images/carouselnav/promo3.gif); width: 155px;}
        #mainFeature .links li:hover a.promo3 {background: url(../_images/carouselnav/promo3_hover.gif); width: 155px;}
        #mainFeature .links li a.promo3:hover {background: url(../_images/carouselnav/promo3_hover.gif); width: 155px;}
        .promo3on {background: url(../_images/carouselnav/promo3_on.gif); width: 155px;}

    Hopefully this makes sense! Again, I'm very new to JavaScript/jQuery, so I apologize if this is a mess. I'm very grateful for any suggestions. Thanks!
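    One minimal way to get the "on" button to follow the rotation (a sketch, not a drop-in patch): give innerfade a callback that fires on every transition and use it to move the "on" class between the buttons. The `oneachfade` option below is an addition of mine, not part of stock jquery.innerfade.js; it would be invoked inside $.innerfade.next, right after the fadeIn call, with `if (settings.oneachfade) settings.oneachfade(current);`.

        // Assumes the markup above: three .links anchors with classes promo1..promo3,
        // and CSS classes promo1on..promo3on for the active state.
        jQuery('ul#theFeature').innerfade({
            speed: 1000,
            timeout: 7000,
            containerheight: '291px',
            oneachfade: function(current) {
                // current = zero-based index of the slide that is now showing
                jQuery('#mainFeature .links a').each(function(i) {
                    // "promoNon" while active, plain "promoN" otherwise
                    this.className = 'promo' + (i + 1) + (i === current ? 'on' : '');
                });
            }
        });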

    Read the article

  • OpenIndiana (illumos): vmxnet3 interface lost on reboot

    - by protomouse
    I want my VMware vmxnet3 interface to be brought up with DHCP on boot. I can manually configure the NIC with:

        # ifconfig vmxnet3s0 plumb
        # ipadm create-addr -T dhcp vmxnet3s0/v4dhcp

    But after creating /etc/dhcp.vmxnet3s0 and rebooting, the interface is down and the logs show:

        Aug 13 09:34:15 neumann vmxnet3s: [ID 654879 kern.notice] vmxnet3s:0: getcapab(0x200000) -> no
        Aug 13 09:34:15 neumann vmxnet3s: [ID 715698 kern.notice] vmxnet3s:0: stop()
        Aug 13 09:34:17 neumann vmxnet3s: [ID 654879 kern.notice] vmxnet3s:0: getcapab(0x200000) -> no
        Aug 13 09:34:17 neumann vmxnet3s: [ID 920500 kern.notice] vmxnet3s:0: start()
        Aug 13 09:34:17 neumann vmxnet3s: [ID 778983 kern.notice] vmxnet3s:0: getprop(TxRingSize) -> 256
        Aug 13 09:34:17 neumann vmxnet3s: [ID 778983 kern.notice] vmxnet3s:0: getprop(RxRingSize) -> 256
        Aug 13 09:34:17 neumann vmxnet3s: [ID 778983 kern.notice] vmxnet3s:0: getprop(RxBufPoolLimit) -> 512
        Aug 13 09:34:17 neumann nwamd[491]: [ID 605049 daemon.error] 1: nwamd_set_unset_link_properties: dladm_set_linkprop failed: operation not supported
        Aug 13 09:34:17 neumann vmxnet3s: [ID 654879 kern.notice] vmxnet3s:0: getcapab(0x20000) -> no
        Aug 13 09:34:17 neumann nwamd[491]: [ID 751932 daemon.error] 1: nwamd_down_interface: ipadm_delete_addr failed on vmxnet3s0: Object not found
        Aug 13 09:34:17 neumann nwamd[491]: [ID 819019 daemon.error] 1: nwamd_plumb_unplumb_interface: plumb IPv4 failed for vmxnet3s0: Operation not supported on disabled object
        Aug 13 09:34:17 neumann nwamd[491]: [ID 160156 daemon.error] 1: nwamd_plumb_unplumb_interface: plumb IPv6 failed for vmxnet3s0: Operation not supported on disabled object
        Aug 13 09:34:17 neumann nwamd[491]: [ID 771489 daemon.error] 1: add_ip_address: ipadm_create_addr failed on vmxnet3s0: Operation not supported on disabled object
        Aug 13 09:34:17 neumann nwamd[491]: [ID 405346 daemon.error] 9: start_dhcp: ipadm_create_addr failed for vmxnet3s0: Operation not supported on disabled object

    I then tried disabling network/physical:nwam in favour of network/physical:default. This works – the interface is brought up – but physical:default fails and my network services (e.g. NFS) refuse to start.
        # ifconfig -a
        lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
                inet 127.0.0.1 netmask ff000000
        vmxnet3s0: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 9000 index 2
                inet 192.168.178.248 netmask ffffff00 broadcast 192.168.178.255
        vmxnet3s0:1: flags=1004842<BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 9000 index 2
                inet 192.168.178.248 netmask ffffff00 broadcast 192.168.178.255
        [... vmxnet3s0:2 through vmxnet3s0:8: identical duplicate logical interfaces, same flags and address ...]
        lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
                inet6 ::1/128
        vmxnet3s0: flags=20002000840<RUNNING,MULTICAST,IPv6> mtu 9000 index 2
                inet6 ::/0

        # cat /var/svc/log/network-physical\:default.log
        [ Aug 16 09:46:39 Enabled. ]
        [ Aug 16 09:46:41 Executing start method ("/lib/svc/method/net-physical"). ]
        [ Aug 16 09:46:41 Timeout override by svc.startd. Using infinite timeout. ]
        starting DHCP on primary interface vmxnet3s0
        ifconfig: vmxnet3s0: DHCP is already running
        [ Aug 16 09:46:43 Method "start" exited with status 96. ]

    NFS server not running:

        # svcs -xv network/nfs/server
        svc:/network/nfs/server:default (NFS server)
         State: offline since August 16, 2012 09:46:40 AM UTC
        Reason: Service svc:/network/physical:default is not running because a method failed.
           See: http://illumos.org/msg/SMF-8000-GE
          Path: svc:/network/nfs/server:default
                svc:/milestone/network:default
                svc:/network/physical:default
        Reason: Service svc:/network/physical:nwam is disabled.
           See: http://illumos.org/msg/SMF-8000-GE
          Path: svc:/network/nfs/server:default
                svc:/milestone/network:default
                svc:/network/physical:nwam
        Reason: Service svc:/network/nfs/nlockmgr:default is disabled.
           See: http://illumos.org/msg/SMF-8000-GE
          Path: svc:/network/nfs/server:default
                svc:/network/nfs/nlockmgr:default
           See: man -M /usr/share/man -s 1M nfsd
        Impact: This service is not running.

    I'm new to the world of Solaris, so any help would be much appreciated. Thanks!

    Read the article

  • username and password for rsync in script

    - by sims
    I'm creating a cron job to keep two dirs in sync. I'm using rsync, and I'm running an rsync daemon. I read the manual and it says:

        RSYNC_PASSWORD
            Setting RSYNC_PASSWORD to the required password allows you to run
            authenticated rsync connections to an rsync daemon without user
            intervention. Note that this does not supply a password to a shell
            transport such as ssh.

        USER or LOGNAME
            The USER or LOGNAME environment variables are used to determine the
            default username sent to an rsync daemon. If neither is set, the
            username defaults to 'nobody'.

    I have something like:

        #!/bin/bash
        USER=name
        RSYNC_PASSWORD=pass
        DEST="server::module"

        /usr/bin/rsync -rltvvv . $DEST

    I also tried exporting (dangerous, I know) USER and RSYNC_PASSWORD. I also tried with LOGNAME. Nothing works. Am I doing this correctly?
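    For what it's worth, plain `VAR=value` assignments in a script are not visible to child processes unless exported, and environment variables can be fragile under cron anyway. A common alternative (a sketch; `name`, `pass`, and `server::module` are the placeholders from the script above) is to put the username directly in the destination and read the password from a root-only file via rsync's --password-file option:

        #!/bin/bash
        # The password file must contain only the password and must not be
        # world-readable, or rsync will refuse to use it.
        echo "pass" > /root/rsync.secret
        chmod 600 /root/rsync.secret

        # The username goes in the destination itself: user@host::module
        /usr/bin/rsync -rltvvv --password-file=/root/rsync.secret . "name@server::module"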

    Read the article

  • central log-server with auditdisp

    - by johan
    I want to set up a central log server. The log server is running Debian 6.0.6 with the audit daemon installed in version 1.7.13-1. The clients are running Red Hat 5.5 and they connect to the log server via audispd. The connection works fine and I get all messages from each node. My question is: is it possible for the auditd daemon on the log server to write the messages from each node to a separate file? I tried transferring the messages via the syslog daemon instead; that works, but then I cannot use tools like ausearch to analyze the log files.
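    I can't say whether auditd supports splitting by node, but note that records received via audispd carry a node= field, so one workaround is to keep the single log and filter per host with ausearch (a sketch, assuming the stock audit userspace tools; "web01" is a placeholder hostname):

        # Show only the events that came from host "web01", interpreted (-i)
        ausearch --node web01 -i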

    Read the article

  • How to elegantly selectively exclude FreeBSD network traffic from OpenVPN interface by port

    - by Polygonica
    Inexperienced sysadmin here. I'm planning on running a net daemon inside a FreeBSD jail through OpenVPN, but I want to be able to SSH directly into the jail and use the daemon's web interface without going through the VPN. As I understand it, an OpenVPN tunnel is normally set up as the default virtual internet interface, so reply traffic to incoming connections will go out on the OpenVPN interface by default (which is problematic, as this incurs latency). I thought: "well, obviously, since all of this traffic is leaving on a handful of ports, I'll just redirect those to the non-VPN gateway." I've tried to look for solutions, but almost all of them involve iptables instead of ipfw (which is the default for FreeBSD) and solve slightly different problems, and alternate solutions like using multiple default routes to ensure that incoming traffic on any interface is always sent out on the same interface seem far-reaching and require deep knowledge of all the tools involved. Is there an elegant way of ensuring that traffic leaving on specific ports exits on a specified non-default interface using ipfw?
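    I'm not certain this counts as elegant, but ipfw's fwd action can do this kind of port-based policy routing (a sketch: the gateway address and ports are placeholders, and on older FreeBSD the kernel needs `options IPFIREWALL_FORWARD` for fwd to take effect):

        # Traffic originating locally from ports 22 (SSH) and 8080 (web UI)
        # is forced out via the LAN gateway instead of the default (VPN) route.
        ipfw add 100 fwd 192.168.1.1 tcp from me 22,8080 to any out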

    Read the article

  • How do I debug an upstart job?

    - by Cerales
    I have the following job in /etc/init/collector:

        start on runlevel [2345]
        stop on runlevel [!2345]
        expect daemon
        exec /usr/bin/twistd -y /path/to/my/tac/file

    When I start the job with sudo service collector start, it hangs. If I ctrl-c and run initctl list, I see this:

        collector start/killed, process 616

    I can't see an instance of the twistd daemon in ps, and the HTTP server it's supposed to be providing does not exist. I even tried this without 'expect daemon' and with a simple call to a one-line bash script using a script stanza, and it still doesn't work. I think I'm doing something very wrong. What could it be?
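
    A few generic debugging moves for a stuck upstart job, sketched; the console log stanza assumes Upstart 1.4 or newer:

        # 1. run the exec line by hand; if this fails, the job can never work
        /usr/bin/twistd -y /path/to/my/tac/file

        # 2. capture the job's own output: add "console log" to the job file,
        #    then restart it and read /var/log/upstart/collector.log

        # 3. make upstart itself chatty
        sudo initctl log-priority debug
        sudo start collector
        tail -f /var/log/syslog

    Note also that "expect daemon" tells upstart the process forks exactly twice; if twistd forks a different number of times, upstart tracks the wrong PID and start hangs in exactly this way.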

    Read the article

  • Difference between two ways of installing tomcat as a service (Linux)

    - by varesa
    I am installing Tomcat on a Linux server and want it to be available as a service. I have found two different ways to achieve this. The first is to copy daemon.sh from $CATALINA_HOME/bin to /etc/init.d, and the other I have seen is to create a simple init script that calls $CATALINA_HOME/bin/startup.sh, etc. startup.sh calls catalina.sh. The contents of daemon.sh and startup.sh look very similar (at least for the env variables and stuff like that). daemon.sh calls jsvc in the end; catalina.sh calls java. What is the (practical) difference between using the two of these when setting up Tomcat as a service?
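
    In short, the jsvc route behaves like a real Unix daemon: it can bind privileged ports (80/443) as root, then drop to an unprivileged user, and it gives init a stable PID to supervise; startup.sh just backgrounds a plain java process under whoever invoked it. A sketch of the difference in invocation (the paths and the tomcat user are assumptions):

        # daemon.sh: starts as root, drops privileges after binding
        sudo /opt/tomcat/bin/daemon.sh start

        # startup.sh: runs entirely as the invoking user, no privilege dance
        sudo -u tomcat /opt/tomcat/bin/startup.sh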

    Read the article

  • [GEEK SCHOOL] Network Security 2: Preventing Disaster with User Account Control

    - by Ciprian Rusen
    In this second lesson in our How-To Geek School series about securing the Windows devices in your network, we will talk about User Account Control (UAC). Users encounter this feature each time they need to install desktop applications in Windows, when some applications need administrator permissions in order to work, and when they have to change various system settings and files.

    UAC was introduced in Windows Vista as part of Microsoft's "Trustworthy Computing" initiative. Basically, UAC is meant to act as a wedge between you and installing applications or making system changes. When you attempt to do either of these actions, UAC will pop up and interrupt you. You may either have to confirm you know what you're doing, or even enter an administrator password if you don't have those rights. Some users find UAC annoying and choose to disable it, but it is a very important security feature of Windows (and we strongly caution against doing that). That's why in this lesson we will carefully explain what UAC is and everything it does. As you will see, this feature has an important role in keeping Windows safe from all kinds of security problems.

    In this lesson you will learn which activities may trigger a UAC prompt asking for permissions, and how UAC can be set so that it strikes the best balance between usability and security. You will also learn what kind of information you can find in each UAC prompt. Last but not least, you will learn why you should never turn off this feature of Windows. By the time we're done today, we think you will have a newfound appreciation for UAC, and will be able to find a happy medium between turning it off completely and letting it annoy you to distraction.

    What is UAC and How Does it Work?

    UAC, or User Account Control, is a security feature that helps prevent unauthorized system changes to your Windows computer or device. These changes can be made by users, applications, and sadly, malware (which is the biggest reason why UAC exists in the first place). When an important system change is initiated, Windows displays a UAC prompt asking for your permission to make the change. If you don't give your approval, the change is not made.

    In Windows, you will encounter UAC prompts mostly when working with desktop applications that require administrative permissions. For example, in order to install an application, the installer (generally a setup.exe file) asks Windows for administrative permissions. UAC initiates an elevation prompt like the one shown earlier, asking you whether it is okay to elevate permissions or not. If you say "Yes", the installer starts as administrator and is able to make the necessary system changes to install the application correctly. When the installer is closed, its administrator privileges are gone. If you run it again, the UAC prompt is shown again because your previous approval is not remembered. If you say "No", the installer is not allowed to run and no system changes are made. If a system change is initiated from a user account that is not an administrator, e.g. the Guest account, the UAC prompt will also ask for the administrator password in order to grant the necessary permissions. Without this password, the change won't be made.

    Which Activities Trigger a UAC Prompt?
    There are many types of activities that may trigger a UAC prompt:

    - Running a desktop application as an administrator
    - Making changes to settings and files in the Windows and Program Files folders
    - Installing or removing drivers and desktop applications
    - Installing ActiveX controls
    - Changing settings of Windows features like the Windows Firewall, UAC, Windows Update, Windows Defender, and others
    - Adding, modifying, or removing user accounts
    - Configuring Parental Controls in Windows 7 or Family Safety in Windows 8.x
    - Running the Task Scheduler
    - Restoring backed-up system files
    - Viewing or changing the folders and files of another user account
    - Changing the system date and time

    You will encounter UAC prompts during some or all of these activities, depending on how UAC is set on your Windows device. If this security feature is turned off, any user account or desktop application can make any of these changes without a prompt asking for permissions. In this scenario, the different forms of malware existing on the Internet will also have a higher chance of infecting and taking control of your system. In Windows 8.x operating systems you will never see a UAC prompt when working with apps from the Windows Store. That's because these apps, by design, are not allowed to modify any system settings or files. You will encounter UAC prompts only when working with desktop programs.

    What Can You Learn from a UAC Prompt?

    When you see a UAC prompt on the screen, take time to read the information displayed so that you get a better understanding of what is going on. Each prompt first tells you the name of the program that wants to make system changes to your device; then you can see the verified publisher of that program. Dodgy software tends not to display this information, and instead of a real company name you will see an entry that says "Unknown". If you have downloaded that program from a less than trustworthy source, then it might be better to select "No" in the UAC prompt. The prompt also shares the origin of the file that's trying to make these changes. In most cases the file origin is "Hard drive on this computer". You can learn more by pressing "Show details". You will see an additional entry named "Program location", where you can see the physical location on your hard drive of the file that's trying to perform system changes. Make your choice based on the trust you have in the program you are trying to run and its publisher. If a less-known file from a suspicious location is requesting a UAC prompt, then you should seriously consider pressing "No".

    What's Different About Each UAC Level?

    Windows 7 and Windows 8.x have four UAC levels:

    - Always notify – when this level is used, you are notified before desktop applications make changes that require administrator permissions, and before you or another user account changes Windows settings like the ones mentioned earlier. When the UAC prompt is shown, the desktop is dimmed and you must choose "Yes" or "No" before you can do anything else. This is the most secure and also the most annoying way to set UAC because it triggers the most UAC prompts.
    - Notify me only when programs/apps try to make changes to my computer (default) – Windows uses this as the default for UAC. When this level is used, you are notified before desktop applications make changes that require administrator permissions. If you are making system changes yourself, UAC doesn't show any prompts and it automatically gives you the necessary permissions for making the changes you desire.
      When a UAC prompt is shown, the desktop is dimmed and you must choose "Yes" or "No" before you can do anything else. This level is slightly less secure than the previous one because malicious programs can be created to simulate the keystrokes or mouse moves of a user and change system settings for you. If you have a good security solution in place, this scenario should never occur.
    - Notify me only when programs/apps try to make changes to my computer (do not dim my desktop) – this level differs from the previous one in that, when the UAC prompt is shown, the desktop is not dimmed. This decreases the security of your system because different kinds of desktop applications (including malware) might be able to interfere with the UAC prompt and approve changes that you might not want to be performed.
    - Never notify – this level is the equivalent of turning off UAC. When using it, you have no protection against unauthorized system changes. Any desktop application and any user account can make system changes without your permission.

    How to Configure UAC

    If you would like to change the UAC level used by Windows, open the Control Panel, then go to "System and Security" and select "Action Center". In the column on the left you will see an entry that says "Change User Account Control settings". The "User Account Control Settings" window is now opened. Change the position of the UAC slider to the level you want applied, then press "OK". Depending on how UAC was initially set, you may receive a UAC prompt requiring you to confirm this change.

    Why You Should Never Turn Off UAC

    If you want to keep the security of your system at decent levels, you should never turn off UAC. When you disable it, everything and everyone can make system changes without your consent. This makes it easier for all kinds of malware to infect and take control of your system. It doesn't matter whether you have a built-in security suite or a third-party antivirus installed: basic common-sense measures like keeping UAC turned on make a big difference in keeping your devices safe from harm.

    We have noticed that some users disable UAC prior to setting up their Windows devices and installing third-party software on them. They keep it disabled while installing all the software they will use and enable it when done, so that they don't have to deal with so many UAC prompts. Unfortunately this causes problems with some desktop applications, which may fail to work after you enable UAC. This happens because, when UAC is disabled, the virtualization techniques UAC uses for your applications are inactive. This means that certain user settings and files are installed in a different place, and when you turn on UAC the applications stop working because their files should be placed elsewhere. Therefore, whatever you do, do not turn off UAC completely!

    Coming up next …

    In the next lesson you will learn about Windows Defender, what this tool can do in Windows 7 and Windows 8.x, what's different about it in these operating systems, and how it can be used to increase the security of your system.

    Read the article

  • unable to access usb device.

    - by Tom
    Hi everyone, I'm reading my boot logs at /var/log, trying to understand why the boot process is taking so long. I found that the system can't access many USB devices, but I can't understand why. Is there a way to stop Ubuntu from trying to access them? Here are the lines:

        /var/log# grep -r "usb_id" .
        ./boot.log:usb_id[716]: unable to access '/devices/pci0000:00/0000:00:1a.0/usb3/3-1/3-1.2/3-1.2:1.0/input/input7/mouse1'
        ./boot.log:usb_id[721]: unable to access '/devices/pci0000:00/0000:00:1a.0/usb3/3-1/3-1.2/3-1.2:1.0/input/input7/event7'
        ./boot.log:usb_id[725]: unable to access '/devices/pci0000:00/0000:00:1a.0/usb3/3-1/3-1.2/3-1.2:1.0/input/input7/event7'
        ./syslog:Jan 12 21:12:05 TomsterInc usb_id[955]: unable to access '/devices/pci0000:00/0000:00:1a.0/usb3/3-1/3-1.2/3-1.2:1.0/input/input16/event16'
        ./syslog:Jan 12 21:12:05 TomsterInc usb_id[956]: unable to access '/devices/pci0000:00/0000:00:1a.0/usb3/3-1/3-1.2/3-1.2:1.0/input/input16/mouse3'
        ./syslog:Jan 12 21:12:05 TomsterInc usb_id[963]: unable to access '/devices/pci0000:00/0000:00:1a.0/usb3/3-1/3-1.2/3-1.2:1.0/input/input16/event16'
        ./daemon.log:Jan 12 21:12:05 TomsterInc usb_id[955]: unable to access '/devices/pci0000:00/0000:00:1a.0/usb3/3-1/3-1.2/3-1.2:1.0/input/input16/event16'
        ./daemon.log:Jan 12 21:12:05 TomsterInc usb_id[956]: unable to access '/devices/pci0000:00/0000:00:1a.0/usb3/3-1/3-1.2/3-1.2:1.0/input/input16/mouse3'
        ./daemon.log:Jan 12 21:12:05 TomsterInc usb_id[963]: unable to access '/devices/pci0000:00/0000:00:1a.0/usb3/3-1/3-1.2/3-1.2:1.0/input/input16/event16'

    Any help will be greatly appreciated. Thanks in advance.

    Read the article

  • Difficulty Mounting Volumes on a Partitioned External HD

    - by Todd
    I'm having a great deal of difficulty with an external hard drive. I'm currently running a dual-boot system (XP Service Pack 3 and Ubuntu 11.04 Natty Narwhal) on a Dell Inspiron B120, and I'm trying to set up a new 80 GB Hitachi external HD. Using GParted, I formatted the drive and set up the partitions. The partitioning scheme is as follows: 10GB NTFS Primary, 2GB Linux-Swap Primary, 50GB FAT32 Primary, 12GB Unallocated. After applying those changes, I went into Disk Utility and the HD appears along with the correct partitions. When I try to mount the volumes for partitions 1 and 3, I get a pop-up stating:

        Error Mounting Volume
        An error occurred while performing an operation on "Home" (Partition 3 of HTS548080m9AT00): The daemon is being inhibited.

    When I try to check the filesystem, I get a pop-up stating:

        Error Checking filesystem on volume
        An error occurred while performing an operation on "Home" (Partition 3 of HTS548080m9AT00): The daemon is being inhibited.

    Throughout the time that I'm attempting to troubleshoot the problem, the external drive light is on and blinking. With my frustration hitting a boiling point, I try to shut down the drive and remove it so that I can plug in a different external HD that works PERFECTLY. However, when I try to shut down and safely remove the drive, I get a pop-up stating:

        Error Detaching Drive
        An error occurred while performing an operation on "80GB Hard Disk" (HTS548080m9AT00): The daemon is being inhibited.

    Can anyone tell me what I'm doing wrong? I'm a newbie and not that skilled with terminal commands, so please dumb it down for me if you request specific command output.

    Read the article

  • unity screensaver? - plans for unity

    - by gare
    Are there plans to develop a screensaver specifically for Unity? Since Unity currently uses gnome-screensaver by default, this seems like an area where a desktop could attract a lot of user appreciation. (My personal interest is in a nice Pictures-folder slideshow screensaver with more settings than gnome-screensaver currently offers, e.g. folder selection and the duration/delay of each slide.) Thank you.

    Read the article

  • Unable to set background image using python (2.7.3), bash and gnome3

    - by malon
        #!/usr/bin/env python
        import os

        bashCommand = "gsettings set org.gnome.desktop.background picture-uri file:///home/malon/autowallpaperchanger/" + pic_name
        print bashCommand
        os.system(bashCommand)

    Print result:

        gsettings set org.gnome.desktop.background picture-uri file:///home/malon/autowallpaperchanger/wallpaper-1252048.jpg

    Copying and pasting the print result into a terminal makes the change successfully, so the command is correct, but os.system isn't processing the request correctly for some reason. In the full script (posted below), I use os.system for a different purpose immediately before this (wget) and that works fine. Thank you! Full script: http://pastebin.com/R90GTmBZ
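
    When a command works in an interactive terminal but not from a script launched by cron or at login, the environment is the usual suspect: gsettings needs DBUS_SESSION_BUS_ADDRESS (and sometimes DISPLAY) to reach the session it should modify. A shell sketch of one common workaround; the user name and wallpaper file come from the output above, everything else is an assumption:

        #!/bin/bash
        # borrow the session bus address from the running gnome-session
        PID=$(pgrep -u malon gnome-session | head -n 1)
        export DBUS_SESSION_BUS_ADDRESS=$(grep -z DBUS_SESSION_BUS_ADDRESS "/proc/$PID/environ" | tr -d '\0' | cut -d= -f2-)
        gsettings set org.gnome.desktop.background picture-uri "file:///home/malon/autowallpaperchanger/wallpaper-1252048.jpg"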

    Read the article

  • Keep Xubuntu Network Manager from overwriting resolv.conf

    - by leeand00
    How do I keep Xubuntu 11.10 from overwriting resolv.conf every time I reboot my machine? Every time I reboot, I get an overwritten resolv.conf that contains the words "# Generated by Network Manager" and no nameservers. I ran the following to get rid of Network Manager, but it's still replacing my resolv.conf when I restart the machine:

        sudo apt-get --purge remove network-manager
        sudo apt-get --purge remove network-manager-gnome
        sudo apt-get --purge remove network-manager-pptp
        sudo apt-get --purge remove network-manager-pptp-gnome
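
    Two common workarounds, sketched: the first pins the nameservers through dhclient so they survive lease renewals, the second is the blunt instrument. The 8.8.8.8 address is a placeholder for your real nameserver:

        # option 1: have dhclient force your DNS servers into resolv.conf
        echo 'supersede domain-name-servers 8.8.8.8;' | sudo tee -a /etc/dhcp/dhclient.conf

        # option 2: write resolv.conf once, then make it immutable
        echo 'nameserver 8.8.8.8' | sudo tee /etc/resolv.conf
        sudo chattr +i /etc/resolv.conf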

    Read the article

  • Automatic login vs. manual login and screensaver lock

    - by Erik Johansson
    Is there a way to prevent a command from running when I log in manually, but have it run when the computer starts up and GDM automatically logs me in? This is the setup: in GNOME's startup applications settings I have a command that locks the screen:

        gnome-screensaver-command -l

    I have automatic login turned on. That means the screen will be locked when I turn on the computer, but it will also be locked when I log in manually from GDM. Is there a way to prevent this?
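
    One approach is a once-per-boot stamp: the startup entry calls a small wrapper that locks only if nothing has locked since the last boot, which in practice means only the automatic login at power-on. A sketch; the stamp lives under /dev/shm purely because that tmpfs is emptied on reboot, and the path is an assumption:

        #!/bin/bash
        # lock the screen only on the first login of this boot (the autologin)
        STAMP="/dev/shm/.screen-locked-once-$USER"
        if [ ! -e "$STAMP" ]; then
            touch "$STAMP"
            gnome-screensaver-command -l
        fi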

    Read the article

  • Can't remove burg theme packages

    - by Lassi
    Today, after trying to install and remove BURG and a few themes, I hit an issue: now I can't install or remove anything. Here is the output (unfortunately partly in Finnish; I couldn't change the language, since that also seems to depend on the package lists):

        lassi@lassi-ubuntu:~$ sudo apt-get autoremove
        Luetaan pakettiluetteloita... Valmis
        Muodostetaan riippuvuussuhteiden puu
        Luetaan tilatietoja... Valmis
        Seuraavat paketit POISTETAAN:
          burg-theme-fortune burg-theme-gnome burg-theme-picchio
        0 päivitetty, 0 uutta asennusta, 3 poistettavaa ja 0 päivittämätöntä.
        3 ei asennettu kokonaan tai poistettiin.
        Toiminnon jälkeen vapautuu 7 180 k t levytilaa.
        Haluatko jatkaa [K/e]? k
        (Luetaan tietokantaa... 166462 files and directories currently installed.)
        Poistetaan pakettia burg-theme-fortune...
        sudo: update-burg: command not found
        dpkg: virhe käsiteltäessä burg-theme-fortune (--remove):
         aliprosessi installed post-removal script palautti virhetilakoodin 1
        Poistetaan pakettia burg-theme-gnome...
        sudo: update-burg: command not found
        dpkg: virhe käsiteltäessä burg-theme-gnome (--remove):
         aliprosessi installed post-removal script palautti virhetilakoodin 1
        Poistetaan pakettia burg-theme-picchio...
        sudo: update-burg: command not found
        dpkg: virhe käsiteltäessä burg-theme-picchio (--remove):
         aliprosessi installed post-removal script palautti virhetilakoodin 1
        Käsittelyssä tapahtui liian monta virhettä:
         burg-theme-fortune burg-theme-gnome burg-theme-picchio
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Basically what seems to happen is this: apt reads the package lists, then tries to remove the package burg-theme-fortune. This fails because the update-burg command is not found, and dpkg reports an error while processing the package. The same goes for all three packages. In the end it claims that there were too many errors, and the packages stay installed. I also tried installing burg, since the scripts try to run update-burg, but it appears apt tries to process (and fails on) these packages whenever I install or remove anything. Any ideas how I could solve this issue?

    Edit: here is the output of apt-get install burg (tried installing again to get English output):

        lassi@lassi-ubuntu:~$ LC_ALL=C sudo apt-get install burg
        [sudo] password for lassi:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        burg is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        3 not fully installed or removed.
        Need to get 0 B/6169 kB of archives.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]? y
        (Reading database ... 167497 files and directories currently installed.)
        Preparing to replace burg-theme-fortune 0.5.0-1 (using .../burg-theme-fortune_0.5.0-1_all.deb) ...
        Unpacking replacement burg-theme-fortune ...
        Generating burg.cfg ...
        /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'.
        No path or device is specified.
        Try `/usr/sbin/burg-probe --help' for more information.
        dpkg: warning: subprocess old post-removal script returned error exit status 1
        dpkg - trying script from the new package instead ...
        Generating burg.cfg ...
        /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'.
        No path or device is specified.
        Try `/usr/sbin/burg-probe --help' for more information.
        dpkg: error processing /var/cache/apt/archives/burg-theme-fortune_0.5.0-1_all.deb (--unpack):
         subprocess new post-removal script returned error exit status 1
        Generating burg.cfg ...
        /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'.
        No path or device is specified.
        Try `/usr/sbin/burg-probe --help' for more information.
        dpkg: error while cleaning up:
         subprocess new post-removal script returned error exit status 1
        [the same sequence repeats for burg-theme-gnome and burg-theme-picchio]
        Errors were encountered while processing:
         /var/cache/apt/archives/burg-theme-fortune_0.5.0-1_all.deb
         /var/cache/apt/archives/burg-theme-gnome_0.5.0-1_all.deb
         /var/cache/apt/archives/burg-theme-picchio_0.5.0-1_all.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)
        lassi@lassi-ubuntu:~$
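
    The removals fail inside the packages' post-removal maintainer scripts, which call update-burg. A common escape hatch is to neutralize those scripts and retry, on the assumption that nothing else in them matters:

        # replace each failing postrm with a no-op, then remove the packages
        for p in burg-theme-fortune burg-theme-gnome burg-theme-picchio; do
            printf '#!/bin/sh\nexit 0\n' | sudo tee /var/lib/dpkg/info/$p.postrm > /dev/null
        done
        sudo dpkg --remove burg-theme-fortune burg-theme-gnome burg-theme-picchio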

    Read the article

  • Advanced System Monitor/Task Manager?

    - by instanceofTom
    When using Kubuntu I noticed that the standard task manager/system monitor was a bit more capable than gnome-system-monitor. Is there a more advanced system/task monitor for Ubuntu that is based on GNOME rather than KDE? Specifically, the features from the Kubuntu task manager that I am looking for are the ability to control the I/O priority of individual processes (not just their nice value), and the ability to control the scheduling algorithm (round-robin, FIFO, etc.). What are my options?
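
    For reference, both knobs are available from the command line even without a GUI for them; a sketch using ionice for I/O priority and chrt for the round-robin/FIFO CPU schedulers (the PID 1234 is a placeholder):

        # best-effort I/O class, highest priority within that class
        ionice -c2 -n0 -p 1234
        # switch the same process to SCHED_RR (round-robin) at priority 10
        sudo chrt -r -p 10 1234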

    Read the article

  • What is the difference between installing a desktop environment and running that distro directly?

    - by Pandya
    My question is: what is the difference between installing a particular desktop environment on Ubuntu, and directly using the distro/flavour that ships with that environment by default? Example, two options: install ubuntu-gnome-desktop, kubuntu-desktop, xubuntu-desktop, etc. (official, recognized derivatives) on top of Ubuntu (default Unity desktop); or run the particular distro/flavour, i.e. Ubuntu GNOME, Kubuntu, Xubuntu, etc. Do both approaches perform the same, and which is the proper way to use a desktop environment?

    Read the article

  • Ubuntu 12.04 freeze on Ctrl+ALT+F1

    - by user93800
    I have a Radeon card connected over HDMI to my TV. To work around a bug with HDMI sound, I switch from the GNOME desktop to a virtual terminal with Ctrl+Alt+F1 and back again with Ctrl+Alt+F7 each time the TV is switched on. It's ugly, but it worked for me. Since yesterday, the system freezes on Ctrl+Alt+F1: no reaction to the keyboard, the mouse pointer disappears entirely, yet the GNOME desktop is still displayed. It probably has nothing to do with HDMI. Any suggestions on how to resolve this?

    Read the article

  • Byobu not displaying correct CPU temperature

    - by aserwin
    I have an AMD FX-8120 processor that is overclocked to 4100 MHz. Since the overclocking, Byobu and other temperature-reading apps (Conky, etc.) do not read the temperature accurately. I can see the correct temp in the BIOS, and with no overclocking everything inside of GNOME reads correctly. Why should this be? It seems to be an issue with Ubuntu (or perhaps GNOME?). Has anyone else experienced this?
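
    One thing worth checking is which sensor those apps read: on AMD FX chips the k10temp driver exposes Tctl, a thermal-control value with a built-in offset rather than the true die temperature, so it can drift from the BIOS reading once clocks and voltages change. A sketch for inspecting the raw sensors (stock Ubuntu package names):

        sudo apt-get install lm-sensors
        sudo sensors-detect   # accept the suggested probes, then load the modules it lists
        sensors               # compare the k10temp reading with what Byobu shows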

    Read the article

< Previous Page | 112 113 114 115 116 117 118 119 120 121 122 123  | Next Page >