Search Results

Search found 881 results on 36 pages for 'verbose'.


  • Dynamic Code for type casting Generic Types 'generically' in C#

    - by Rick Strahl
    C# is a strongly typed language, and while that's a fundamental feature of the language, there are more and more situations where dynamic types make a lot of sense. I've written quite a bit about how I use dynamic for creating new type extensions: Dynamic Types and DynamicObject References in C#; Creating a dynamic, extensible C# Expando Object; Creating a dynamic DataReader for dynamic Property Access. Today I want to point out an example of a much simpler usage for dynamic that I use occasionally to get around potential static typing issues in C# code, especially those concerning generic types.

    TypeCasting Generics

    Generic types have been around since .NET 2.0, and I've run into a number of situations in the past - especially with generic types that don't implement specific interfaces that can be cast to - where I've been unable to properly cast an object when it's passed to a method or assigned to a property. Granted, often this can be a sign of bad design, but in at least some situations the code that needs to be integrated is not under my control, so I have to make do with what's available, or the parent object is too complex or intermingled to be easily refactored to a new usage scenario. Here's an example that I ran into in my own RazorHosting library - so I really have no excuse, but I also don't see another clean way around it in this case.

    A Generic Example

    Imagine I've implemented a generic type like this:

        public class RazorEngine<TBaseTemplateType>
            where TBaseTemplateType : RazorTemplateBase, new()

    You can now happily instantiate new generic versions of this type with custom template bases, or even a non-generic version which is implemented like this:

        public class RazorEngine : RazorEngine<RazorTemplateBase>
        {
            public RazorEngine() : base() { }
        }

    To instantiate one:

        var engine = new RazorEngine<MyCustomRazorTemplate>();

    Now imagine that the template class receives a reference to the engine when it's instantiated. This code is fired as part of the engine pipeline when it gets ready to execute the template. It instantiates the template and assigns itself to the template:

        var template = new TBaseTemplateType() { Engine = this };

    The problem here is that potentially many variations of RazorEngine<T> can be passed. I can have RazorTemplateBase, RazorFolderHostTemplateBase, CustomRazorTemplateBase etc. as generic parameters, and the Engine property has to reflect that somehow. So, how would I cast that?

    My first inclination was to use an interface on the engine class and then cast to the interface. Generally that works, but unfortunately here the engine class is generic and has a few members that require the template type in the member signatures. So while I certainly can implement an interface:

        public interface IRazorEngine<TBaseTemplateType>

    it doesn't really help for passing this generically templated object to the template class - I still can't cast it if multiple differently typed versions of the generic type could be passed. I have the exact same issue in that I can't specify a 'generic' generic parameter, since there's no underlying base type that's common. In light of this I decided on using object and the following syntax for the property (and the same would be true for a method parameter):

        public class RazorTemplateBase : MarshalByRefObject, IDisposable
        {
            public object Engine { get; set; }
        }

    Now, because the Engine property is a non-typed object, when I need to do something with this value I still have no way to cast it explicitly.
    What I really would need is:

        public RazorEngine<> Engine { get; set; }

    but that's not possible.

    Dynamic to the Rescue

    Luckily, with the dynamic type this sort of thing can be mitigated fairly easily. For example, here's a method that uses the Engine property and uses the well-known class interface by simply casting the plain object reference to dynamic and then firing away on the properties and methods of the base template class that are common to all templates:

        /// <summary>
        /// Allows rendering a dynamic template from a string template,
        /// passing in a model. This is like rendering a partial,
        /// but providing the input as a string.
        /// </summary>
        public virtual string RenderTemplate(string template, object model)
        {
            if (template == null)
                return string.Empty;

            // if there's no template markup
            if (!template.Contains("@"))
                return template;

            // use dynamic to get around generic type casting
            dynamic engine = Engine;
            string result = engine.RenderTemplate(template, model);
            if (result == null)
                throw new ApplicationException("RenderTemplate failed: " + engine.ErrorMessage);
            return result;
        }

    Prior to .NET 4.0 I would have had to use Reflection for this sort of thing, which would have been a heck of a lot more verbose, but dynamic makes this so much easier and cleaner, and in this case at least the overhead is negligible since it's a single dynamic operation on an otherwise very complex call.

    Dynamic as a Bailout

    This sort of thing often reeks of a design flaw, and I agree that in hindsight this could have been designed differently. But as is often the case, this particular scenario wasn't planned for originally, and removing the generic signatures from the base type would break a ton of other code in the framework. Given the existing fairly complex engine design, refactoring an interface to remove generic types just to make this particular code work would have been overkill. Instead, dynamic provides a simple and relatively clean solution. Now, if there were many other places where this occurs I would probably consider reworking the code to make it cleaner, but given this isolated instance and relatively low-profile operation, use of dynamic seems a valid choice for me. This solution really works anywhere you might end up with an inheritance structure that doesn't have a common base or interface that is sufficient. In the example above I know what I'm getting, but there's no common base type that I can cast to.

    All that said, it's a good idea to think about use of dynamic before you rush in. In many situations there are alternatives that can still work with static typing. Dynamic definitely has some overhead compared to direct static access of objects, so if possible we should stick to static typing. In the example above the application already uses dynamics extensively for dynamic page templating and passing models around, so introducing dynamics here has very little additional overhead. The operation itself also fires off a fairly resource-heavy operation where the overhead of a couple of dynamic member accesses is not a performance issue.

    So, what's your experience with dynamic as a bailout mechanism?
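    Distilled out of the RazorHosting specifics, the pattern looks like this - a minimal sketch with hypothetical Engine/Template types, not the actual library classes:

        using System;

        // hypothetical stand-in: no common non-generic base type,
        // so Engine<A> and Engine<B> share nothing castable
        public class Engine<T>
        {
            public string Render(string input) { return "rendered: " + input; }
        }

        public class Template
        {
            // untyped on purpose, like the Engine property in the post
            public object Engine { get; set; }

            public string Run(string input)
            {
                // the bailout: late-bound dispatch instead of a static cast
                dynamic engine = Engine;
                return engine.Render(input);
            }
        }

        public static class Demo
        {
            public static void Main()
            {
                var t = new Template { Engine = new Engine<int>() };
                Console.WriteLine(t.Run("hello"));   // works for any Engine<T>
            }
        }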
    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in CSharp.

    Read the article

  • Database continuous integration step by step

    - by David Atkinson
    This post will describe how to set up basic database continuous integration using TeamCity to initiate the build process, SQL Source Control to put your database under source control, and the SQL Compare command line to keep a test database up to date. In my example I will be using Subversion as my source control repository. If you wish to follow my steps verbatim, please make sure you have TortoiseSVN, SQL Compare and SQL Source Control installed.

    Downloading and Installing TeamCity

    TeamCity (http://www.jetbrains.com/teamcity/index.html) is free for up to three agents, so it's a great no-risk tool you can use to experiment with.

    1. Download the latest version from the JetBrains website. For some reason the TeamCity executable didn't download properly for me, stalling frustratingly at 99%, so I tried again with the zip file download option (see screenshot below), which worked flawlessly.

    2. Run the installer using the defaults. This results in a set-up with the server component and agent installed on the same machine, which is ideal for getting started with ease.

    3. Check that the build agent is pointing to the server correctly. This has caught me out a few times before. This setting is in C:\TeamCity\buildAgent\conf\buildAgent.properties and for my installation is serverUrl=http\://localhost\:80 . If you need to change this value, for example if you've had to install the server console on a different port number, the TeamCity Build Agent Service will need to be restarted for the change to take effect.

    4. Open the TeamCity admin console on http://localhost , and specify your own designated username and password at first startup.

    Putting your database in source control using SQL Source Control

    5. Assuming you've got SQL Source Control installed, select a development database in the SQL Server Management Studio Object Explorer and select Link Database to Source Control.

    6. For the Link step you can either create your own empty folder in source control, or you can select Just Evaluating, which just creates a local Subversion repository for you behind the scenes.

    7. Once linked, note that your database turns green in the Object Explorer. Visit the Commit tab to do an initial commit of your database objects by typing in an appropriate comment and clicking Commit.

    8. There is a hidden feature in SQL Source Control that opens up TortoiseSVN (provided it is installed) pointing to the linked repository. Keep Shift depressed and right-click on the text to the right of 'Linked to'; in the example below, it's the red Evaluation Repository text. Select Open TortoiseSVN Repo Browser. This screen should give you an idea of how SQL Source Control manages the object files behind the scenes.

    Back in the TeamCity admin console, we'll now create a new project to monitor the above repository location and to trigger a 'build' each time the repository changes.

    9. In TeamCity Administration, select Create Project and give it a name, such as "My first database CI", and click Create.

    10. Click on Create Build Configuration, and name it something like "Integration build".

    11. Click VCS settings and then Create And Attach new VCS root. This is where you will tell TeamCity about the repository it should monitor.

    12. In my case, since I'm using the Just Evaluating option in SQL Source Control, I should select Subversion.

    13. In the URL field paste your repository location.
    In my case this is file:///C:/Users/David.Atkinson/AppData/Local/Red Gate/SQL Source Control 3/EvaluationRepositories/WidgetDevelopment/WidgetDevelopment

    14. Click on Test Connection to ensure that you can communicate with your source control system. Click Save.

    15. Click Add Build Step, and select Runner Type: Command Line. Should you be familiar with the other runner types, such as NAnt, MSBuild or PowerShell, you can opt for these, but for the sake of keeping it simple I will pick the simplest option.

    16. If you have installed SQL Compare in the default location, set the Command Executable field to: C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe

    17. Flip back to SSMS briefly and add a new database to your server. This will be the database used for continuous integration testing.

    18. Set the command parameters according to your server and the name of the database you have created. In my case I created database RedGateCI on server .\sql2008r2:

        /scripts1:. /server2:.\sql2008r2 /db2:RedGateCI /sync /verbose

    Note that if you pick a server instance that isn't on your local machine, you'll need the TCP/IP protocol enabled in SQL Server Configuration Manager, otherwise the SQL Compare command line will not be able to connect.

    19. Save and select Build Triggering / Add New Trigger / VCS Trigger. This is where you tell TeamCity when it should initiate a build. Click Save.

    20. Now return to SQL Server Management Studio and make a schema change (e.g. add a new object) to your linked development database. A blue indicator will appear in the Object Explorer. Commit this change, typing in an appropriate check-in comment. All being well, within 60 seconds (a TeamCity default that can be changed) a build will be triggered.

    21. Click on Projects in TeamCity to get back to the overview screen. The build log will show you the console output, which is useful for troubleshooting any issues.

    That's it! You now have continuous integration on your database. In future posts I'll cover how you can generate and test the database creation script, the database upgrade script, and run database unit tests as part of your continuous integration script. If you have any trouble getting this up and running please let me know, either by commenting on this post, or email me directly using the email address below.

    Technorati Tags: SQL Server
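    For reference, steps 16 and 18 together amount to the following single command - a sketch assuming the default install path and the server/database names used above, run from the directory containing the checked-out scripts (which is how the /scripts1:. parameter resolves):

        "C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe" /scripts1:. /server2:.\sql2008r2 /db2:RedGateCI /sync /verbose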

    Read the article

  • I want to change DPI with Imagemagick without changing the actual byte-size of the image data

    - by user1694803
    I feel horribly sorry that I have to ask this question here, but after hours of researching how to do an actually very simple task I'm still failing... In Gimp there is a very simple way to do what I want. I only have the German dialog installed, but I'll try to translate it. I'm talking about going to "Picture - Printing Size" and then adjusting the values "X-Resolution" and "Y-Resolution", which are known to me as so-called DPI values. You can also choose the unit, which by default is "Pixel/Inch". (In German the dialog is "Bild - Druckgröße" and there "X-Auflösung" and "Y-Auflösung".)

    OK, the values there are often "72" by default. When I change them to e.g. "300", this has the effect that the image stays the same on the computer, but if I print it, it will look smaller, though all the details are still there - it has a higher resolution on the printed paper (but a smaller size, which is fine for me). I am often doing that when I am working with LaTeX, or to be exact with the command "pdflatex" on a recent Ubuntu machine. When I do the above process with Gimp manually, everything works just fine: the images appear smaller in the resulting PDF but with high printing quality.

    What I am trying to do is to automate the process of going into Gimp and adjusting the DPI values. Since ImageMagick is known to be superb, and I used it for many other tasks, I tried to achieve my goal with this tool. But it just does not do what I want. After trying a lot of things, I think this actually should be the command that is my friend:

        convert input.png -density 300 output.png

    This should set the DPI to 300, as I can read everywhere on the web. It seems to work: when I check the file, it stays the same.

        file input.png output.png
        input.png: PNG image data, 611 x 453, 8-bit grayscale, non-interlaced
        output.png: PNG image data, 611 x 453, 8-bit grayscale, non-interlaced

    When I use this command, it seems like it did what I wanted:

        identify -verbose output.png | grep 300
        Resolution: 300x300
        PNG:pHYs : x_res=300, y_res=300, units=0

    (Funnily enough, the same output comes for input.png, which confuses me... so this might be the wrong parameter to watch?)

    But when I now render my TeX with "pdflatex", the image is still big and blurry. Also, when I open the image with Gimp again, the DPI values are set to "72" instead of "300". So there was actually no effect at all.

    Now what is the problem here? Am I getting something completely wrong? I can't be that wrong, since everything works just fine with Gimp... Thanks for any help with this. I am also open to other automated solutions which are easily done on a Linux system...
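    One hedged observation (an assumption, not something confirmed in the question): the identify output above shows units=0, i.e. the PNG pHYs chunk carries a resolution number with an undefined unit, which many consumers then ignore and fall back to 72 DPI. Explicitly setting the unit may be the missing piece:

        # assumption: the problem is the missing resolution unit, not the density value
        convert input.png -density 300 -units PixelsPerInch output.png

        # verify: the pHYs units field should change from 0 (undefined) to 1 (metre),
        # with the resolution stored as the per-metre equivalent of 300 DPI
        identify -verbose output.png | grep -i -E 'resolution|phys|units'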

    Read the article

  • Custom SNMP Cacti Data Source fails to update

    - by Andrew Wilkinson
    I'm trying to create a custom SNMP data source for Cacti, but despite everything I can check being correct, it is not creating the rrd file, or updating it even when I create it. Other, standard SNMP sources are working correctly, so it's not SNMP or permissions that are the problem. I've created a new Data Query, which when I click on "Verbose Query" on the device screen returns the following:

        + Running data query [10].
        + Found type = '3' [SNMP Query].
        + Found data query XML file at '/volume1/web/cacti/resource/snmp_queries/syno_volume_stats.xml'
        + XML file parsed ok.
        + <oid_num_indexes> missing in XML file, 'Index Count Changed' emulated by counting oid_index entries
        + Executing SNMP walk for list of indexes @ '.1.3.6.1.2.1.25.2.3.1.3' Index Count: 8
        + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.1' value: 'Physical memory'
        + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.3' value: 'Virtual memory'
        + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.6' value: 'Memory buffers'
        + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.7' value: 'Cached memory'
        + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.10' value: 'Swap space'
        + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.31' value: '/'
        + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.32' value: '/volume1'
        + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.33' value: '/opt'
        + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.1' results: '1'
        + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.3' results: '3'
        + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.6' results: '6'
        + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.7' results: '7'
        + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.10' results: '10'
        + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.31' results: '31'
        + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.32' results: '32'
        + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.33' results: '33'
        + Located input field 'index' [walk]
        + Executing SNMP walk for data @ '.1.3.6.1.2.1.25.2.3.1.3'
        + Found item [index='Physical memory'] index: 1 [from value]
        + Found item [index='Virtual memory'] index: 3 [from value]
        + Found item [index='Memory buffers'] index: 6 [from value]
        + Found item [index='Cached memory'] index: 7 [from value]
        + Found item [index='Swap space'] index: 10 [from value]
        + Found item [index='/'] index: 31 [from value]
        + Found item [index='/volume1'] index: 32 [from value]
        + Found item [index='/opt'] index: 33 [from value]
        + Located input field 'volsizeunit' [walk]
        + Executing SNMP walk for data @ '.1.3.6.1.2.1.25.2.3.1.4'
        + Found item [volsizeunit='1024 Bytes'] index: 1 [from value]
        + Found item [volsizeunit='1024 Bytes'] index: 3 [from value]
        + Found item [volsizeunit='1024 Bytes'] index: 6 [from value]
        + Found item [volsizeunit='1024 Bytes'] index: 7 [from value]
        + Found item [volsizeunit='1024 Bytes'] index: 10 [from value]
        + Found item [volsizeunit='4096 Bytes'] index: 31 [from value]
        + Found item [volsizeunit='4096 Bytes'] index: 32 [from value]
        + Found item [volsizeunit='4096 Bytes'] index: 33 [from value]
        + Located input field 'volsize' [walk]
        + Executing SNMP walk for data @ '.1.3.6.1.2.1.25.2.3.1.5'
        + Found item [volsize='1034712'] index: 1 [from value]
        + Found item [volsize='3131792'] index: 3 [from value]
        + Found item [volsize='1034712'] index: 6 [from value]
        + Found item [volsize='775904'] index: 7 [from value]
        + Found item [volsize='2097080'] index: 10 [from value]
        + Found item [volsize='612766'] index: 31 [from value]
        + Found item [volsize='1439812394'] index: 32 [from value]
        + Found item [volsize='1439812394'] index: 33 [from value]
        + Located input field 'volused' [walk]
        + Executing SNMP walk for data @ '.1.3.6.1.2.1.25.2.3.1.6'
        + Found item [volused='1022520'] index: 1 [from value]
        + Found item [volused='1024096'] index: 3 [from value]
        + Found item [volused='32408'] index: 6 [from value]
        + Found item [volused='775904'] index: 7 [from value]
        + Found item [volused='1576'] index: 10 [from value]
        + Found item [volused='148070'] index: 31 [from value]
        + Found item [volused='682377865'] index: 32 [from value]
        + Found item [volused='682377865'] index: 33 [from value]

    As you can see, it appears to be returning the correct data. I've also set up data templates and graph templates to display the data. The "create graphs for a device" screen shows the correct data, and when I select one row and click Create, a new data source and graph are created. Unfortunately the data source is never updated. Increasing the poller log level shows that it appears to not even be querying the data source, despite it being in use. What should my next steps be to debug this issue?

    Read the article

  • WSUS 3.0 SP2 installation fails at "configuring database" step.

    - by flashkube
    Attempting to install WSUS 3.0 SP2 on a Windows Server 2003 Enterprise system. I'm asking the setup to create a new database on one of our existing SQL Server 2005 systems. When the setup gets to the "configuring database" step it stops and throws "There is a problem with this Windows Installer package. A program run as part of the setup did not finish as expected. Contact your support personnel or package vendor." The two logs it suggests I look at are below. I'm not seeing any errors that mean anything to me. Any direction you can give will be greatly appreciated.

    WSUSSetup.log:

        2009-12-04 15:26:21 Success MWUSSetup Validating pre-requisites...
        2009-12-04 15:26:22 Error MWUSSetup Failed to determine if an higher version of WSUS is installed. Assuming it is not... (Error 0x80070002: The system cannot find the file specified.)
        2009-12-04 15:26:28 Success MWUSSetup No SQL instances found
        2009-12-04 15:26:42 Success MWUSSetup Initializing installation details
        2009-12-04 15:26:42 Success MWUSSetup Installing ASP.Net
        2009-12-04 15:27:24 Success MWUSSetup ASP.Net is installed successfully
        2009-12-04 15:27:24 Success MWUSSetup Installing WSUS...
        2009-12-04 15:27:28 Success CustomActions.Dll Unable to get INSTALL_LANGUAGE property, calculating it...
        2009-12-04 15:27:28 Success CustomActions.Dll Successfully set propery of WSUS admin groups' full names
        2009-12-04 15:27:29 Success CustomActions.Dll .Net framework path: C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727
        2009-12-04 15:27:33 Success CustomActions.Dll Creating user group: WSUS Reporters with Description: WSUS Administrators who can only run reports on the Windows Server Update Services server.
        2009-12-04 15:27:33 Success CustomActions.Dll Creating WSUS Reporters user group
        2009-12-04 15:27:33 Success CustomActions.Dll WSUS Reporters user group already exists
        2009-12-04 15:27:33 Success CustomActions.Dll Successfully created WSUS Reporters user group
        2009-12-04 15:27:33 Success CustomActions.Dll Creating user group: WSUS Administrators with Description: WSUS Administrators can administer the Windows Server Update Services server.
        2009-12-04 15:27:33 Success CustomActions.Dll Creating WSUS Administrators user group
        2009-12-04 15:27:33 Success CustomActions.Dll WSUS Administrators user group already exists
        2009-12-04 15:27:33 Success CustomActions.Dll Successfully created WSUS Administrators user group
        2009-12-04 15:27:33 Success CustomActions.Dll Successfully created WSUS user groups
        2009-12-04 15:27:33 Success CustomActions.Dll Succesfully set binary SID property
        2009-12-04 15:27:33 Success CustomActions.Dll Succesfully set binary SID property
        2009-12-04 15:27:33 Success CustomActions.Dll Successfully set binary SID properties
        2009-12-04 15:28:50 Error MWUSSetup InstallWsus: MWUS Installation Failed (Error 0x80070643: Fatal error during installation.)
        2009-12-04 15:28:50 Error MWUSSetup CInstallDriver::PerformSetup: WSUS installation failed (Error 0x80070643: Fatal error during installation.)
        2009-12-04 15:28:50 Error MWUSSetup CSetupDriver::LaunchSetup: Setup failed (Error 0x80070643: Fatal error during installation.)

    From the end of WSUSSetupmsi_091204_1527.log:

        MSI (s) (58:7C) [15:28:49:860]: Note: 1: 1708
        MSI (s) (58:7C) [15:28:49:860]: Product: Windows Server Update Services 3.0 SP2 -- Installation failed.
        MSI (s) (58:7C) [15:28:49:875]: Cleaning up uninstalled install packages, if any exist
        MSI (s) (58:7C) [15:28:49:875]: MainEngineThread is returning 1603
        MSI (s) (58:78) [15:28:49:985]: Destroying RemoteAPI object.
        MSI (s) (58:90) [15:28:49:985]: Custom Action Manager thread ending.
        === Logging stopped: 12/4/2009 15:28:49 ===
        MSI (c) (30:54) [15:28:50:016]: Decrementing counter to disable shutdown. If counter >= 0, shutdown will be denied. Counter after decrement: -1
        MSI (c) (30:54) [15:28:50:016]: MainEngineThread is returning 1603
        === Verbose logging stopped: 12/4/2009 15:28:50 ===

    Read the article

  • SSH-forwarded X11 display from Linux to Mac lost after some time

    - by mklein9
    I have a new and vexing problem with ssh forwarding my X11 connection when logging in from a Mac (10.7.2) to Linux (Ubuntu 8.04). I have no trouble using ssh -X to log in to the remote machine and start an X11-based application from that shell. What has recently started happening is that additional invocations of X11 applications from that same shell, after a while (on the order of hours), are unable to start because the forwarded display is being blocked (I presume). When attempting to start xterm, for example, I get the usual message about a bad DISPLAY setting, such as:

        xterm Xt error: Can't open display: localhost:10.0

    But the X11 application I started right when I logged in is still running along just fine, using that exact same display (localhost:10.0) - it was just started earlier. I turned on verbose logging in sshd_config and I see this in the /var/log/auth.log file in response to the failed xterm startup attempt:

        sshd[22104]: channel 8: open failed: administratively prohibited: open failed

    If I ssh -X to the server again, starting a new shell and getting assigned a new display (localhost:11.0), the same process repeats: the X11 applications started early on run just fine for as long as I keep them open (days), but after a few hours I cannot start any new ones from that shell.

    Particulars: OpenSSH sshd server running on Ubuntu 8.04, display forwarded to a Mac running Lion (10.7.2) with the default Apple X server. The systems are connected on an Ethernet LAN with a single switch between them. Neither machine is running a firewall. Until recently (a few days ago) this setup worked perfectly, so I am mystified as to where to look next. I am by no means an X11 or SSH expert, but I have good UNIX/Linux experience. Nothing obvious has changed in either client or server configuration, although I have tried changing a few options to try to debug this, like setting sshd_config's TCPKeepAlive to no, and setting "xhost +localhost" (you can tell I've been Googling).

    When logging in from a Linux 11.10 laptop to the same remote host over the same network and switch, this problem does not occur - an xterm can be invoked successfully hours later from the same ssh login shell, while the same experiment from the Mac fails (tested this morning to be sure) - so it would appear to be a Mac-specific issue.

    With "LogLevel DEBUG3" set on the remote machine (sshd server), and no change made in the client connections by me, /var/log/auth.log shows one slight change in connection status reports overnight, which is the port number used by the one successful ssh session from the Linux machine (I think), connection #7 below:

        sshd[20173]: debug3: channel 7: status: The following connections are open:\r\n
          #0 server-session (t4 r0 i0/0 o0/0 fd 14/13 cfd -1)\r\n
          #3 X11 connection from 127.0.0.1 port 57564 (t4 r1 i0/0 o0/0 fd 16/16 cfd -1)\r\n
          #4 X11 connection from 127.0.0.1 port 57565 (t4 r2 i0/0 o0/0 fd 17/17 cfd -1)\r\n
          #5 X11 connection from 127.0.0.1 port 57566 (t4 r3 i0/0 o0/0 fd 18/18 cfd -1)\r\n
          #6 X11 connection from 127.0.0.1 port 57567 (t4 r4 i0/0 o0/0 fd 19/19 cfd -1)\r\n
          #7 X11 connection from 127.0.0.1 port 59007

    In this report, everything is the same between status reports except the port number used by connection #7, which I believe is the Linux client - the only one still maintaining a display connection. It continues to increment over time, judging by a sequence of these reports overnight.

    Thanks for any help,
    -Mike
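    A hedged hypothesis, not something established in the question: OpenSSH treats ssh -X forwarding as untrusted and enforces ForwardX11Timeout on it (the default is 20 minutes, though the cutoff observed in practice may differ); once it expires, new X11 channel opens are refused - sshd logs exactly "administratively prohibited: open failed" - while already-running clients keep working. Debian-family Linux distributions ship ForwardX11Trusted yes in /etc/ssh/ssh_config, which would also explain why the Linux laptop is immune while the Mac is not. Something like this could be tried on the Mac:

        # ~/.ssh/config on the Mac client (assumption: untrusted-forwarding timeout)
        Host linuxhost
            ForwardX11 yes
            ForwardX11Trusted yes    # equivalent to using ssh -Y instead of ssh -X
            # alternatively, raise the untrusted-forwarding timeout:
            # ForwardX11Timeout 596h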

    Read the article

  • Pfsense 2.1 OpenVPN can't reach servers on the LAN

    - by Lucas Kauffman
    I have a small network set up like this:

    - I have a pfSense box for connecting my servers to the WAN; they are using NAT from the LAN to the WAN.
    - I have an OpenVPN server using TAP to allow remote workers to be put on the same LAN network as the servers. They connect through the WAN IP to the OVPN interface.
    - The LAN interface also serves as the gateway for the servers to get an internet connection, and has an IP of 10.25.255.254.
    - The OVPN interface and the LAN interface are bridged in BR0.
    - Server A has an IP of 10.25.255.1 and is able to connect to the internet.
    - Client A is connecting through the VPN and is assigned an IP address on its TAP interface of 10.25.24.1 (I reserved a /24 within 10.25.0.0/16 for VPN clients).
    - The firewall currently allows any-any connections from OVPN towards LAN and vice versa.

    Currently when I connect, all routes seem fine on the client side:

        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        300.300.300.300 0.0.0.0         255.255.255.0   U     0      0        0 eth0
        10.25.0.0       10.25.255.254   255.255.0.0     UG    0      0        0 tap0
        10.25.0.0       0.0.0.0         255.255.0.0     U     0      0        0 tap0
        0.0.0.0         300.300.300.300 0.0.0.0         UG    0      0        0 eth0

    I can ping the LAN interface:

        root@server:# ping 10.25.255.254
        PING 10.25.255.254 (10.25.255.254) 56(84) bytes of data.
        64 bytes from 10.25.255.254: icmp_req=1 ttl=64 time=7.65 ms
        64 bytes from 10.25.255.254: icmp_req=2 ttl=64 time=7.49 ms
        64 bytes from 10.25.255.254: icmp_req=3 ttl=64 time=7.69 ms
        64 bytes from 10.25.255.254: icmp_req=4 ttl=64 time=7.31 ms
        64 bytes from 10.25.255.254: icmp_req=5 ttl=64 time=7.52 ms
        64 bytes from 10.25.255.254: icmp_req=6 ttl=64 time=7.42 ms

    But I can't ping past the LAN interface:

        root@server:# ping 10.25.255.1
        PING 10.25.255.1 (10.25.255.1) 56(84) bytes of data.
        From 10.25.255.254: icmp_seq=1 Redirect Host(New nexthop: 10.25.255.1)
        From 10.25.255.254: icmp_seq=2 Redirect Host(New nexthop: 10.25.255.1)

    I ran a tcpdump on my em1 interface (the LAN interface, which has the IP 10.25.255.254):

        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on em1, link-type EN10MB (Ethernet), capture size 96 bytes
        08:21:13.449222 IP 10.25.24.1 > 10.25.255.1: ICMP echo request, id 23623, seq 10, length 64
        08:21:13.458211 ARP, Request who-has 10.25.255.1 tell 10.25.24.1, length 28
        08:21:14.450541 IP 10.25.24.1 > 10.25.255.1: ICMP echo request, id 23623, seq 11, length 64
        08:21:14.458431 ARP, Request who-has 10.25.255.1 tell 10.25.24.1, length 28
        08:21:15.451794 IP 10.25.24.1 > 10.25.255.1: ICMP echo request, id 23623, seq 12, length 64
        08:21:15.458530 ARP, Request who-has 10.25.255.1 tell 10.25.24.1, length 28
        08:21:16.453203 IP 10.25.24.1 > 10.25.255.1: ICMP echo request, id 23623, seq 13, length 64

    So traffic is reaching the LAN interface, but it's not getting past it: there is no answer from the 10.25.255.1 host. I'm not sure what I'm missing.

    Read the article

  • How could I stop ssh offering a wrong key?

    - by Alvaro Maceda
    (This is a problem with ssh, not gitolite.) I've configured gitolite on my home server (Ubuntu 12.04 server, OpenSSH). I want a special identity file to administer the repositories, so I need to access my own host through ssh using two different identity keys. This is the content of my .ssh/config file:

        Host gitadmin.gammu.com
            User git
            IdentityFile /home/alvaro/.ssh/id_gitolite_mantra

        Host git.gammu.com
            User git
            IdentityFile /home/alvaro/.ssh/id_alvaro_mantra

    This is the content of my hosts file:

        # Git
        127.0.0.1 gitadmin.gammu.com
        127.0.0.1 git.gammu.com

    So I should be able to communicate with gitolite this way to access with the "normal" account:

        $ ssh git.gammu.com

    and this way to access with the administrative account:

        $ ssh gitadmin.gammu.com

    When I try to access with the normal account, all is ok:

        alvaro@mantra:~/.ssh$ ssh git.gammu.com
        PTY allocation request failed on channel 0
        hello alvaro, this is gitolite 2.2-1 (Debian) running on git 1.7.9.5
        the gitolite config gives you the following access:
             @R_   @W_ testing
        Connection to git.gammu.com closed.

    When I do the same with the administrative account:

        alvaro@mantra:~$ ssh gitadmin.gammu.com
        PTY allocation request failed on channel 0
        hello alvaro, this is gitolite 2.2-1 (Debian) running on git 1.7.9.5
        the gitolite config gives you the following access:
             @R_   @W_ testing
        Connection to gitadmin.gammu.com closed.

    It should show the administrative repository. If I launch ssh with the verbose option:

        ssh -vvv gitadmin.gammu.com
        ...
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug2: service_accept: ssh-userauth
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug2: key: /home/alvaro/.ssh/id_alvaro_mantra (0x7f7cb6c0fbc0)
        debug2: key: /home/alvaro/.ssh/id_gitolite_mantra (0x7f7cb6c044d0)
        debug1: Authentications that can continue: publickey,password
        debug3: start over, passed a different list publickey,password
        debug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,password
        debug3: authmethod_lookup publickey
        debug3: remaining preferred: keyboard-interactive,password
        debug3: authmethod_is_enabled publickey
        debug1: Next authentication method: publickey
        debug1: Offering RSA public key: /home/alvaro/.ssh/id_alvaro_mantra
        debug3: send_pubkey_test
        debug2: we sent a publickey packet, wait for reply
        debug1: Server accepts key: pkalg ssh-rsa blen 279
        ...

    It's offering the key id_alvaro_mantra, and it shouldn't!! The same happens when I specify the key with the -i option:

        ssh -i /home/alvaro/.ssh/id_gitolite_mantra -vvv gitadmin.gammu.com
        ...
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug2: service_accept: ssh-userauth
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug2: key: /home/alvaro/.ssh/id_alvaro_mantra (0x7fa365237f90)
        debug2: key: /home/alvaro/.ssh/id_gitolite_mantra (0x7fa365230550)
        debug2: key: /home/alvaro/.ssh/id_gitolite_mantra (0x7fa365231050)
        debug1: Authentications that can continue: publickey,password
        debug3: start over, passed a different list publickey,password
        debug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,password
        debug3: authmethod_lookup publickey
        debug3: remaining preferred: keyboard-interactive,password
        debug3: authmethod_is_enabled publickey
        debug1: Next authentication method: publickey
        debug1: Offering RSA public key: /home/alvaro/.ssh/id_alvaro_mantra
        debug3: send_pubkey_test
        debug2: we sent a publickey packet, wait for reply
        debug1: Server accepts key: pkalg ssh-rsa blen 279
        debug2: input_userauth_pk_ok: fp 36:b1:43:36:af:4f:00:e5:e1:39:50:7e:07:80:14:26
        debug3: sign_and_send_pubkey: RSA 36:b1:43:36:af:4f:00:e5:e1:39:50:7e:07:80:14:26
        debug1: Authentication succeeded (publickey).
        ...

    What the hell is happening??? I'm missing something, but I can't find what. These are the contents of my ~/.ssh directory:

        -rw-rw-r-- 1 alvaro alvaro  395 nov 14 18:00 authorized_keys
        -rw-rw-r-- 1 alvaro alvaro  326 nov 21 10:21 config
        -rw------- 1 alvaro alvaro  137 nov 20 20:26 environment
        -rw------- 1 alvaro alvaro 1766 nov 20 21:41 id_alvaromaceda.es
        -rw-r--r-- 1 alvaro alvaro  404 nov 20 21:41 id_alvaromaceda.es.pub
        -rw------- 1 alvaro alvaro 1766 nov 14 17:59 id_alvaro_mantra
        -rw-r--r-- 1 alvaro alvaro  395 nov 14 17:59 id_alvaro_mantra.pub
        -rw------- 1 alvaro alvaro  771 nov 14 18:03 id_developer_mantra
        -rw------- 1 alvaro alvaro 1679 nov 20 12:37 id_dos_pruebasgit
        -rw-r--r-- 1 alvaro alvaro  395 nov 20 12:37 id_dos_pruebasgit.pub
        -rw------- 1 alvaro alvaro 1679 nov 20 12:46 id_gitolite_mantra
        -rw-r--r-- 1 alvaro alvaro  397 nov 20 12:46 id_gitolite_mantra.pub
        -rw------- 1 alvaro alvaro 1675 nov 20 21:44 id_gitpruebas.es
        -rw-r--r-- 1 alvaro alvaro  408 nov 20 21:44 id_gitpruebas.es.pub
        -rw------- 1 alvaro alvaro 1679 nov 20 12:34 id_uno_pruebasgit
        -rw-r--r-- 1 alvaro alvaro  395 nov 20 12:34 id_uno_pruebasgit.pub
        -rw-r--r-- 1 alvaro alvaro 2434 nov 21 10:11 known_hosts

    There are a bunch of other keys which aren't offered... so why is id_alvaro_mantra offered and not the other keys? I can't understand. I need some help, I don't know where to look...
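    One detail worth adding here as a hedged suggestion, not something from the question itself: by default, ssh offers identities already loaded in ssh-agent before the IdentityFile configured for the host, which produces exactly this symptom. IdentitiesOnly pins each host to its configured key:

        # ~/.ssh/config - assumption: an agent-loaded key is being offered first
        Host gitadmin.gammu.com
            User git
            IdentityFile /home/alvaro/.ssh/id_gitolite_mantra
            IdentitiesOnly yes

        Host git.gammu.com
            User git
            IdentityFile /home/alvaro/.ssh/id_alvaro_mantra
            IdentitiesOnly yes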

    Read the article

  • Converting projects to use Automatic NuGet restore

    - by terje
    Originally posted on: http://geekswithblogs.net/terje/archive/2014/06/11/converting-projects-to-use-automatic-nuget-restore.aspx

    Download tool

    In version 2.7 of NuGet, automatic NuGet restore was introduced, meaning you no longer need to distort your MSBuild project files with NuGet target information. Visual Studio and TFS 2013 build have this enabled by default. However, if your project was created before this was introduced, and/or if you have used the "Enable NuGet Package Restore" function afterwards, you now have a series of unwanted things in your projects, and a series of project files that have been modified - and you no longer either want or need this! You might also run into some unwanted issues due to these modifications. This is an MSBuild modification that was needed only before NuGet 2.7!

    So: DON'T USE THIS FUNCTION!!!

    There is an issue https://nuget.codeplex.com/workitem/4019 on the NuGet project site to get this function removed, renamed or at least moved farther away from the top level (please help vote it up!). The response seems to be that it WILL BE removed, around version 3.0. This function does nothing you need after the introduction of NuGet 2.7. What is also unfortunate is the naming of it - it implies that it is needed; it is not, and what is worse, there is no corresponding function to remove what it does!

    So to fix this, use the tool named IFix, which will fix the issue for you - all free of course, and the code is open source. Also report issues there: https://github.com/OsirisTerje/IFix

    IFix information

    DOWNLOAD HERE

    This command-line tool installs using an MSI and adds itself to the system path. If you work in a team, you will probably need to use the tool multiple times: anyone in the team may at any time use the "Enable NuGet Package Restore" function and mess up your project again. The IFix program can be run either in a check mode, where it does not write anything back - it only checks whether you have any issues - or in a fix mode, where it will also perform the necessary fixes for you.

    The IFix program is used like this:

        IFix <command> [-c/--check] [-f/--fix] [-v/--verbose]

    The command in this case is "nugetrestore". It will do a check from the location where it is being called, and run through all subfolders from that location. So "IFix nugetrestore --check" will do the check, and "IFix nugetrestore --fix" will perform the changes, for all files and folders below the current working directory. (Note that --check can be replaced with just -c, and --fix with -f, and so on.)

    BEWARE: When you run the fix option, all solutions to be affected must be closed in Visual Studio!

    So, if you just want to DO it, then run "IFix nugetrestore --check" to see if you have issues, then "IFix nugetrestore --fix" to fix them.

    How does it work

    IFix nugetrestore checks and optionally fixes four issues that the older enabling of NuGet restore introduced. The issues are related to the MSBuild process, and are:

    1. Deleting the nuget.targets file.
    2. Deleting the nuget.exe that is located under the .nuget folder.
    3. Removing all references to nuget.targets in the solution file.
    4. Removing all properties and target imports of nuget.targets inside the csproj files.

    IFix fixes these issues in the same sequence. The first step, removing the nuget.targets file, is the most critical one: all instances of the nuget.targets file within the scope of a solution have to be removed, and in addition it has to be done with the solution closed in Visual Studio.
    If Visual Studio finds a nuget.targets file, the csproj files will automatically be messed up again. This means the removal process above might need to be done multiple times, especially when you're working with a team and that solution context menu still has the "Enable NuGet Package Restore" function - someone on the team might inadvertently use it at any time. It can be a good idea to add this check to a check-in policy if you run TFS standard version control, but that will have no effect if you use TFS Git version control, of course. So, better be prepared to run the IFix check from time to time. Or, even better, install IFix on your build servers, and add a call to IFix nugetrestore --check in the TFS build script.

    How does it look

    As a first example, I have run the IFix program from the top of a set of git repositories, so it spans multiple repositories with multiple solutions. The result from the check option is as follows:

    We see the four red lines; there is one for each of the four checks we talked about in the previous section. The fact that they are red means we have that particular issue.

    The first section (above the first red text line) is the nuget.targets section. Notice No.1: it says it has found no paths to copy. What IFix does here is check whether there are any defined paths to other NuGet galleries; if there are, those are copied over to the nuget.config file, which is where they belong in version 2.7 and above. No.2 says it has found the particular nuget.targets file, and No.3 states it HAS found some other NuGet galleries defined in the targets file, which it would then like to copy to the config file.

    No.4 is the section for nuget.exe files; it lists those it has found and would like to delete.

    No.5 states it has found a reference to nuget.targets in the solution file. This reference comes from the fact that the .nuget folder is a solution folder, and the items within it are described in the solution file.

    It then checks the csproj files, and as can be seen from the last red line, it has found issues in 96 out of 198 csproj files. There are two possible issues in a csproj file. No.6 is the first one - the most common and most important one - an "Import Project" section. This is the section that calls the nuget.targets file. No.7 is another issue, which seems to sometimes be there and sometimes not: a RestorePackages property, which should also go away.

    Now, if we run the IFix nugetrestore --fix command, and then the check again after that, the result is: All green!
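    For reference, the two csproj fragments that No.6 and No.7 refer to typically look like the following - a representative sketch of NuGet 2.x-era project files; the exact Condition attribute varies between NuGet versions:

        <!-- the RestorePackages property (issue No.7) -->
        <RestorePackages>true</RestorePackages>

        <!-- the Import Project section (issue No.6) -->
        <Import Project="$(SolutionDir)\.nuget\NuGet.targets"
                Condition="Exists('$(SolutionDir)\.nuget\NuGet.targets')" />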

    Read the article

  • linux routing bug?

    - by Balázs Pozsár
    I have been struggling with this not-easily-reproducible issue for a while. I am using Linux kernel v3.1.0, and sometimes routing to a few IP addresses does not work. What seems to happen is that instead of sending the packet to the gateway, the kernel treats the destination address as local and tries to get its MAC address via ARP. For example, right now my IP address is 172.16.1.104/24 and the gateway is 172.16.1.254:

        # ifconfig eth0
        eth0      Link encap:Ethernet  HWaddr 00:1B:63:97:FC:DC
                  inet addr:172.16.1.104  Bcast:172.16.1.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:230772 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:171013 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:191879370 (182.9 Mb)  TX bytes:47173253 (44.9 Mb)
                  Interrupt:17

        # route -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        0.0.0.0         172.16.1.254    0.0.0.0         UG    0      0        0 eth0
        172.16.1.0      0.0.0.0         255.255.255.0   U     1      0        0 eth0

    I can ping a few addresses, but not 172.16.0.59:

        # ping -c1 172.16.1.254
        PING 172.16.1.254 (172.16.1.254) 56(84) bytes of data.
        64 bytes from 172.16.1.254: icmp_seq=1 ttl=64 time=0.383 ms
        --- 172.16.1.254 ping statistics ---
        1 packets transmitted, 1 received, 0% packet loss, time 0ms
        rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms

        root@pozsybook:~# ping -c1 172.16.0.1
        PING 172.16.0.1 (172.16.0.1) 56(84) bytes of data.
        64 bytes from 172.16.0.1: icmp_seq=1 ttl=63 time=5.54 ms
        --- 172.16.0.1 ping statistics ---
        1 packets transmitted, 1 received, 0% packet loss, time 0ms
        rtt min/avg/max/mdev = 5.545/5.545/5.545/0.000 ms

        root@pozsybook:~# ping -c1 172.16.0.2
        PING 172.16.0.2 (172.16.0.2) 56(84) bytes of data.
        64 bytes from 172.16.0.2: icmp_seq=1 ttl=62 time=7.92 ms
        --- 172.16.0.2 ping statistics ---
        1 packets transmitted, 1 received, 0% packet loss, time 0ms
        rtt min/avg/max/mdev = 7.925/7.925/7.925/0.000 ms

        root@pozsybook:~# ping -c1 172.16.0.59
        PING 172.16.0.59 (172.16.0.59) 56(84) bytes of data.
        From 172.16.1.104 icmp_seq=1 Destination Host Unreachable
        --- 172.16.0.59 ping statistics ---
        1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms

    When trying to ping 172.16.0.59, I can see in tcpdump that an ARP request was sent:

        # tcpdump -n -i eth0 | grep ARP
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
        15:25:16.671217 ARP, Request who-has 172.16.0.59 tell 172.16.1.104, length 28

    and /proc/net/arp has an incomplete entry for 172.16.0.59:

        # grep 172.16.0.59 /proc/net/arp
        172.16.0.59    0x1    0x0    00:00:00:00:00:00    *    eth0

    Please note that 172.16.0.59 is accessible from this LAN from other computers. Does anyone have any idea what's going on? Thanks.

    Update - replies to the comments below:

    - There are no interfaces besides eth0 and lo.
    - The ARP request cannot be seen on the other end, but that's how it should work; the main problem is that an ARP request should not even be sent in the first place.
    - The problem persists even if I add an explicit route with the command "route add -host 172.16.0.59 gw 172.16.1.254 dev eth0".
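    Not part of the original report, but a useful next diagnostic here would be to ask the kernel directly which route it picks for the problem address, and to inspect the routing cache (which still exists in v3.1):

        # show the kernel's routing decision for the problem address
        ip route get 172.16.0.59
        # expected (via gateway):    172.16.0.59 via 172.16.1.254 dev eth0  src 172.16.1.104
        # buggy (treated as local):  172.16.0.59 dev eth0  src 172.16.1.104

        # look for a stale cached entry, and flush the cache if one is present
        ip route show cache | grep 172.16.0.59
        ip route flush cache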

    Read the article

  • Routes on a sphere surface - Find geodesic?

    - by CaNNaDaRk
    I'm working with some friends on a browser-based game where people can move on a 2D map. It's been almost 7 years and people still play this game, so we are thinking of a way to give them something new. Until now the game map was a limited plane and people could move from (0, 0) to (MAX_X, MAX_Y) in quantized X and Y increments (just imagine it as a big chessboard). We believe it's time to give it another dimension, so just a couple of weeks ago we began to wonder how the game could look with other mappings:

    - Unlimited plane with continuous movement: this could be a step forward, but I'm still not convinced.
    - Toroidal world (continuous or quantized movement): honestly, I've worked with tori before, but this time I want something more...
    - Spherical world with continuous movement: this would be great!

    What we want

    Users' browsers are given a list of coordinates like (latitude, longitude) for each object on the spherical surface map; browsers must then show this on the user's screen, rendering the objects inside a web element (canvas maybe? this is not a problem). When people click on the plane, we convert the (mouseX, mouseY) to (lat, lng) and send it to the server, which has to compute a route between the user's current position and the clicked point.

    What we have

    We began writing a Java library with many useful maths for working with rotation matrices, quaternions, Euler angles, translations, etc. We put it all together and created a program that generates sphere points, renders them and shows them to the user inside a JPanel. We managed to catch clicks and translate them to spherical coords, and to provide some other useful features like view rotation, scale, translation etc. What we have now is like a little (very little indeed) engine that simulates client and server interaction. The client side shows points on the screen and catches other interactions; the server side renders the view and does other calculus, like interpolating the route between the current position and the clicked point.

    Where is the problem?

    Obviously we want to have the shortest path to interpolate between the two route points. We use quaternions to interpolate between two points on the surface of the sphere, and this seemed to work fine until I noticed that we weren't getting the shortest path on the sphere surface. We thought the problem was that the route was calculated as the sum of two rotations about the X and Y axes, so we changed the way we calculate the destination quaternion: we take the third angle (the first is latitude, the second is longitude, the third is the rotation about the vector which points toward our current position), which we called orientation. Now that we have the "orientation" angle, we rotate the Z axis and then use the resulting vector as the rotation axis for the destination quaternion (you can see the rotation axis in grey).

    What we got is the correct route (you can see it lies on a great circle), but we get this ONLY if the starting route point is at latitude, longitude (0, 0), which means the starting vector is (sphereRadius, 0, 0). With the previous version (image 1) we don't get a good result even when the starting point is (0, 0), so I think we're moving towards a solution, but maybe the procedure we follow to get this route is a little "strange"? In the following image you get a view of the problem we get when the starting point is not (0, 0). As you can see, the starting point is not the (sphereRadius, 0, 0) vector, and the destination point (which is correctly drawn!) is not on the route.
    The magenta point (the one which lies on the route) is the route's ending point rotated about the center of the sphere by (-startLatitude, 0, -startLongitude). This means that if I calculate a rotation matrix and apply it to every point on the route, maybe I'll get the real route, but I'm starting to think that there's a better way to do this. Maybe I should try to get the plane through the center of the sphere and the route points, intersect it with the sphere and get the geodesic? But how?

    Sorry for being way too verbose and maybe for incorrect English, but this thing is blowing my mind!

    EDIT: This code version is related to the first image:

        public void setRouteStart(double lat, double lng) {
            EulerAngles tmp = new EulerAngles(
                Math.toRadians(lat), 0, -Math.toRadians(lng));
            // set route start
            qtStart.setInertialToObject(tmp);
            // do other stuff like drawing start point...
        }

        public void impostaDestinazione(double lat, double lng) {
            EulerAngles tmp = new EulerAngles(
                Math.toRadians(lat), 0, -Math.toRadians(lng));
            qtEnd.setInertialToObject(tmp);
            // do other stuff like drawing dest point...
        }

        public V3D interpolate(double totalTime, double t) {
            double _t = t / totalTime;
            Quaternion q = Quaternion.Slerp(qtStart, qtEnd, _t);
            RotationMatrix.inertialQuatToIObject(q);
            V3D p = matInt.inertialToObject(V3D.Xaxis.scale(sphereRadius));
            // other stuff, like drawing point...
            return p;
        }

        // mostly taken from a book!
        public static Quaternion Slerp(Quaternion q0, Quaternion q1, double t) {
            double cosO = q0.dot(q1);
            double q1w = q1.w;
            double q1x = q1.x;
            double q1y = q1.y;
            double q1z = q1.z;
            if (cosO < 0.0f) {
                q1w = -q1w; q1x = -q1x; q1y = -q1y; q1z = -q1z;
                cosO = -cosO;
            }
            double sinO = Math.sqrt(1.0f - cosO * cosO);
            double O = Math.atan2(sinO, cosO);
            double oneOverSinO = 1.0f / sinO;
            double k0 = Math.sin((1.0f - t) * O) * oneOverSinO;
            double k1 = Math.sin(t * O) * oneOverSinO;
            // Interpolate
            return new Quaternion(
                k0 * q0.w + k1 * q1w,
                k0 * q0.x + k1 * q1x,
                k0 * q0.y + k1 * q1y,
                k0 * q0.z + k1 * q1z);
        }

    A little dump of what I get (again, check image 1):

        Route info:
        Sphere radius and center: 200,000, (0.0, 0.0, 0.0)
        Route start: lat 0,000 °, lng 0,000 ° @v: (200,000, 0,000, 0,000), |v| = 200,000
        Route end: lat 30,000 °, lng 30,000 ° @v: (150,000, 86,603, 100,000), |v| = 200,000
        Qt dump: (w, x, y, z), rot. angle°, (x, y, z) rot. axis
        Qt start: (1,000, 0,000, -0,000, 0,000); 0,000 °; (1,000, 0,000, 0,000)
        Qt end: (0,933, 0,067, -0,250, 0,250); 42,181 °; (0,186, -0,695, 0,695)

        Route start: lat 30,000 °, lng 10,000 ° @v: (170,574, 30,077, 100,000), |v| = 200,000
        Route end: lat 80,000 °, lng -50,000 ° @v: (22,324, -26,604, 196,962), |v| = 200,000
        Qt dump: (w, x, y, z), rot. angle°, (x, y, z) rot. axis
        Qt start: (0,962, 0,023, -0,258, 0,084); 31,586 °; (0,083, -0,947, 0,309)
        Qt end: (0,694, -0,272, -0,583, -0,324); 92,062 °; (-0,377, -0,809, -0,450)
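    Incidentally, the plane-through-the-center idea at the end is exactly the standard construction: the geodesic between two surface points can be written directly on the (unit) position vectors, with no rotation axes involved. The identity (ordinary spherical linear interpolation, stated here as a sketch rather than code from the question) is:

        \cos\Omega = \hat{p}_0 \cdot \hat{p}_1, \qquad
        \gamma(t) = R \, \frac{\sin\bigl((1-t)\,\Omega\bigr)\,\hat{p}_0 + \sin(t\,\Omega)\,\hat{p}_1}{\sin\Omega},
        \quad t \in [0,1]

    Every gamma(t) lies in the plane spanned by the two position vectors through the center, so the path traced is the minor arc of the great circle regardless of where the start point is - no special casing for (0, 0).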

    Read the article

  • Self-signed certificates for a known community

    - by costlow
Recently announced changes scheduled for Java 7 update 51 (January 2014) have established that the default security slider will require code signatures and the Permissions Manifest attribute. Code signatures are a common practice recommended in the industry because they help determine that the code your computer will run is the same code that the publisher created. This post is written to help users who need to use self-signed certificates without involving a public Certificate Authority.
The role of self-signed certificates within a known community
You may still use self-signed certificates within a known community. The difference between self-signed and purchased-from-CA is that your users must import your self-signed certificate to indicate that it is valid, whereas Certificate Authorities are already trusted by default. This works for known communities where people will trust that my certificate is mine, but does not scale widely where I cannot actually contact or know the systems that will need to trust my certificate. An example of a known community would be students in a university class sharing their public certificates on a mailing list or web page, employees publishing on the intranet, or a system administrator rolling certificates out to end-users. Public Certificate Authorities, by contrast, are widely trusted already because they abide by many different requirements and frequent checks. Managed machines help here because you can automate the rollout, but they are not required -- the major point is simply that people will trust and import your certificate.
How to distribute self-signed certificates for a known community
There are several steps required to distribute a self-signed certificate to users so that they will properly trust it. These steps are:
Creating a public/private key pair for signing.
Exporting your public certificate for others.
Importing your certificate onto machines that should trust you.
Verifying your work on a different machine.
Creating a public/private key pair for signing
Having a public/private key pair will give you the ability both to sign items yourself and to issue a Certificate Signing Request (CSR) to a certificate authority. Create your public/private key pair by following the instructions for creating key pairs. Every Certificate Authority that I looked at provided similar instructions, but for the sake of cohesiveness I will include the commands that I used here. Generate the key pair:
keytool -genkeypair -alias erikcostlow -keyalg EC -keysize 571 -validity 730 -keystore javakeystore_keepsecret.jks
Provide a good password for this file. The alias "erikcostlow" is my name and therefore easy to remember. Substitute your own name or something like "mykey." The keyalg of EC (Elliptic Curve) and keysize of 571 will give your key a good strong lifetime. All keys are set to expire. Two years or 730 days is a reasonable compromise between not-long-enough and too-long. Most public Certificate Authorities will sign something for one to five years. You will be placing your keys in javakeystore_keepsecret.jks -- this file will contain private keys and therefore should not be shared. If someone else gets these private keys, they can impersonate your signature. Please be cautious about automated cloud backup systems picking up your private keystore. Answer all the questions. It is important to provide good answers because you will stick with them for the "-validity" days that you specified above.
What is your first and last name?
  [Unknown]:  First Last
What is the name of your organizational unit?
  [Unknown]:  Line of Business
What is the name of your organization?
  [Unknown]:  MyCompany
What is the name of your City or Locality?
  [Unknown]:  City Name
What is the name of your State or Province?
  [Unknown]:  CA
What is the two-letter country code for this unit?
  [Unknown]:  US
Is CN=First Last, OU=Line of Business, O=MyCompany, L=City, ST=CA, C=US correct?
  [no]:  yes
Enter key password for <erikcostlow>
        (RETURN if same as keystore password):
Verify your work:
keytool -list -keystore javakeystore_keepsecret.jks
You should see your new key pair.
Exporting your public certificate for others
Public Key Infrastructure relies on two simple concepts: the public key may be made public and the private key must be private. By exporting your public certificate, you are able to share it with others who can then import the certificate to trust you.
keytool -exportcert -keystore javakeystore_keepsecret.jks -alias erikcostlow -file erikcostlow.cer
To verify this, you can open the .cer file by double-clicking it on most operating systems. It should show the information that you entered during the creation prompts. This is the file that you will share with others. They will use this certificate to prove that artifacts signed by this certificate came from you. If you do not manage machines directly, place the certificate file on an area that people within the known community should trust, such as an intranet page.
Import the certificate onto machines that should trust you
In order to trust the certificate, people within your known community must import your certificate into their keystores. The first step is to verify that the certificate is actually yours, which can be done through any out-of-band channel: email, phone, in person, etc. Known communities can usually manage this easily. Determine the right keystore:
For an individual user looking to trust another, the correct file is within that user's directory, e.g. USER_HOME\AppData\LocalLow\Sun\Java\Deployment\security\trusted.certs
For system-wide installations, Java's Certificate Authorities are in JAVA_HOME, e.g. C:\Program Files\Java\jre8\lib\security\cacerts
File paths for Mac and Linux are included in the link above. Follow the instructions to import the certificate into the keystore:
keytool -importcert -keystore THEKEYSTOREFROMABOVE -alias erikcostlow -file erikcostlow.cer
In this case, I am still using my name for the alias because it's easy for me to remember. You may also use an alias of your company name.
Scaling distribution of the import
The easiest way to apply your certificate across many machines is to just push the .certs or cacerts file onto them. When doing this, watch out for any changes that people have made to this file on their machines.
Trusted.certs: When publishing into user directories, your file will overwrite any keys that the user has added since the last update.
CACerts: It is best to re-run the import command with each installation rather than just overwriting the file. If you simply push the same cacerts file with every upgrade, you will wipe out any CAs that have been added or removed since your copy was made. By re-importing, you stay up to date with changes.
Verify work on a different machine
Verification is a way of checking on the client machine to ensure that it properly trusts signed artifacts after you have added your signing certificate. Many people have started using deployment rule sets. You can validate the deployment rule set by:
Creating and signing the deployment rule set on the computer that holds the private key.
Copying the deployment rule set onto the different machine where you have imported the signing certificate.
Verifying that the Java Control Panel's security tab shows your deployment rule set.
Verifying an individual JAR file or multiple JAR files
You can test a certificate chain by using the jarsigner command:
jarsigner -verify filename.jar
If the output does not say "jar verified" then run the following command to see why:
jarsigner -verify -verbose -certs filename.jar
Check the output for the term "CertPath not validated."
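For completeness, the signing step that this workflow feeds into (and which the article assumes has happened) uses the same keystore; a minimal sketch, where myapp.jar is a hypothetical artifact name of mine, not from the article:
# sign a JAR with the key pair created above (myapp.jar is a placeholder)
jarsigner -keystore javakeystore_keepsecret.jks -signedjar myapp-signed.jar myapp.jar erikcostlow
# then verify it exactly as described above
jarsigner -verify -verbose -certs myapp-signed.jar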

    Read the article

  • Merge replication stopping without errors in SQL 2008 R2

    - by Rob Farley
A non-SQL MVP friend of mine, who also happens to be a client, asked me for some help again last week. I was planning on writing this up even before Rob Volk (@sql_r) listed his T-SQL Tuesday topic for this month. Earlier in the year, I (well, LobsterPot Solutions, although I'd been the person mostly involved) had helped out with a merge replication problem. The Merge Agent on the subscriber was just stopping every time, shortly after it started. With no errors anywhere - not in the Windows Event Log, the SQL Agent logs, not anywhere. We'd managed to get the system working again, but didn't have a good explanation for what had happened, and last week, the problem occurred again. I asked him about writing up the experience in a blog post, largely because of the red herrings that we encountered. It was an interesting experience for me, also because I didn't end up touching my computer the whole time - just tapping on my phone via Twitter and Live Messenger. You see, the thing with replication is that a useful troubleshooting option is to reinitialise the thing. We'd done that last time, and it had started to work again - eventually. I say eventually, because the link being used between the sites is relatively slow, and it took a long while for the initialisation to finish. Meanwhile, we'd been doing some investigation into what the problem could be, and were suitably pleased when the problem disappeared. So I got a message saying that a replication problem had occurred again. Reinitialising wasn't going to be an option this time either. In this scenario, the subscriber having the problem happened to be in a different domain to the publisher. The other subscribers (within the domain) were fine; just this one in a different domain had the problem. Part of the problem seemed to be a log file that wasn't being backed up properly. They'd been trying to back up to a backup device that had a corruption, and the log file was growing. Turned out, this wasn't related to the problem, but of course, any time you're troubleshooting and you see something untoward, you wonder. Having got past that problem, my next thought was that perhaps there was a problem with the account being used. But the other subscribers were using the same account, without any problems. The client pointed out that it was almost exactly six months since the last failure (later shown to be a complete red herring). It sounded like something might've expired. Checking through certificates and trusts showed no sign of anything, and besides, there wasn't a problem running a command-prompt window using the account in question, from the subscriber box. ...except that when he ran the sqlcmd -E -S servername command I recommended, it failed with a Named Pipes error. I've seen problems with firewalls rejecting connections via Named Pipes but letting TCP/IP through, so I got him to look into SQL Configuration Manager to see what kind of connection was being preferred... Everything seemed fine. And strangely, he could connect via Management Studio. Turned out, he had a typo in the servername of the sqlcmd command. That particular red herring must've been reflected in his cheeks as he told me. During the time, I also pinged a friend of mine to find out who I should ask, and Ted Kruger (@onpnt)'s name came up. Ted (and thanks again, Ted - really) reconfirmed some of my thoughts around the idea of an account expiring, and also suggested bumping up the logging to level 4 (2 is Verbose, 4 is undocumented ridiculousness).
I'd just told the client to push the logging up to level 2, but the log file wasn't appearing. Checking permissions showed that the user did have permission on the folder, but still no file was appearing. Then it was noticed that the user had been switched earlier as part of the troubleshooting, and switching it back to the real user caused the log file to appear. Still no errors. A lot more information being pushed out, but still no errors. Ted suggested making sure the FQDNs were okay from both ends, in case the servers were unable to talk to each other. DNS problems can lead to hassles which can stop replication from working. No luck there either - it was all working fine. Another server started to report a problem as well. These two boxes were both SQL 2008 R2 (SP1), while the others, still working, were SQL 2005. Around this time, the client tried an idea that I'd shown him a few years ago - using a Profiler trace to see what was being called on the servers. It turned out that the last call being made on the publisher was sp_MSenumschemachange. A quick interwebs search on that showed a problem that exists in SQL Server 2008 R2, when stored procedures have more than 4000 characters. Running that stored procedure (with the same parameters) manually on SQL 2005 listed three stored procedures, the first of which did indeed have more than 4000 characters. Still no error though, and the problem as listed at http://support.microsoft.com/kb/2539378 describes an error that should occur in the Event log. However, this problem is the type of thing that is fixed by a reinitialisation (because it doesn't need to send the procedure change across as a transaction). And a look in the change history of the long stored procs (you all keep them, right?) showed that the problem from six months earlier could well have been down to this too. Applying SP2 (with sufficient paranoia about backups and how to get back out again if necessary) fixed the problem. The stored proc changes went through immediately after the service pack was applied, and it's been running happily since. The funny thing is that I didn't solve the problem. He had put the Profiler trace on the server, and had done the search that found a forum post pointing at this particular problem. I'd asked Ted too, and although he'd given some useful information, nothing that he'd come up with had actually been the solution either. Sometimes, asking for help is the most useful thing you can do. Often though, you don't end up getting the help from the person you asked - the sounding board is actually what you need. @rob_farley
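If you suspect you're hitting the same KB2539378 limitation, one quick check is to measure the definitions of the published procedures yourself; a hedged sketch (the server and database names are placeholders, and sys.sql_modules is the catalog view holding the procedure text):
sqlcmd -E -S publisherserver -d publisheddb -Q "SELECT OBJECT_NAME(object_id) AS proc_name, LEN(definition) AS def_len FROM sys.sql_modules WHERE LEN(definition) > 4000 ORDER BY def_len DESC"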

    Read the article

  • Troubleshooting sudoers via ldap

    - by dafydd
The good news is that I got sudoers via LDAP working on Red Hat Directory Server. The package is sudo-1.7.2p1. I have some LDAP/Kerberos users in an LDAP group called wheel, and I have this entry in LDAP:
# %wheel, SUDOers, example.com
dn: cn=%wheel,ou=SUDOers,dc=example,dc=com
cn: %wheel
description: Members of group wheel have access to all privileges.
objectClass: sudoRole
objectClass: top
sudoCommand: ALL
sudoHost: ALL
sudoUser: %wheel
So, members of group wheel have administrative privileges via sudo. This has been tested and works fine. Now, I have this other sudo privilege set up to allow members of a group called Administrators to perform two commands as the non-root owner of those commands.
# %Administrators, SUDOers, example.com
dn: cn=%Administrators,ou=SUDOers,dc=example,dc=com
sudoRunAsGroup: appGroup
sudoRunAsUser: appOwner
cn: %Administrators
description: Allow members of the group Administrators to run various commands.
objectClass: sudoRole
objectClass: top
sudoCommand: appStop
sudoCommand: appStart
sudoCommand: /path/to/appStop
sudoCommand: /path/to/appStart
sudoUser: %Administrators
Unfortunately, members of Administrators are still refused permission to run appStart or appStop:
-bash-3.2$ sudo /path/to/appStop
[sudo] password for Aaron:
Sorry, user Aaron is not allowed to execute '/path/to/appStop' as root on host.example.com.
-bash-3.2$ sudo -u appOwner /path/to/appStop
[sudo] password for Aaron:
Sorry, user Aaron is not allowed to execute '/path/to/appStop' as appOwner on host.example.com.
/var/log/secure shows me these two sets of messages for the two attempts:
Oct 31 15:02:36 host sudo: pam_unix(sudo:auth): authentication failure; logname=Aaron uid=0 euid=0 tty=/dev/pts/3 ruser= rhost= user=Aaron
Oct 31 15:02:37 host sudo: pam_krb5[1508]: TGT verified using key for 'host/[email protected]'
Oct 31 15:02:37 host sudo: pam_krb5[1508]: authentication succeeds for 'Aaron' ([email protected])
Oct 31 15:02:37 host sudo: Aaron : command not allowed ; TTY=pts/3 ; PWD=/auto/home/Aaron ; USER=root ; COMMAND=/path/to/appStop
Oct 31 15:02:52 host sudo: pam_unix(sudo:auth): authentication failure; logname=Aaron uid=0 euid=0 tty=/dev/pts/3 ruser= rhost= user=Aaron
Oct 31 15:02:52 host sudo: pam_krb5[1547]: TGT verified using key for 'host/[email protected]'
Oct 31 15:02:52 host sudo: pam_krb5[1547]: authentication succeeds for 'Aaron' ([email protected])
Oct 31 15:02:52 host sudo: Aaron : command not allowed ; TTY=pts/3 ; PWD=/auto/home/Aaron ; USER=appOwner ; COMMAND=/path/to/appStop
The questions:
Does sudo have some sort of verbose or debug mode where I can actually watch it evaluate the sudoers privilege list and determine whether or not Aaron should have the privilege to run this command? (This question is probably independent of where the sudoers database is kept.)
Does sudo work with some background mechanism that might have a log level I could turn up? Right now, I can't fix a problem I can't identify.
Is this an LDAP search failure? Is this a group member matching failure? Identifying why the command fails will help me identify the fix...
Next step: Recreate the privilege in /etc/sudoers, and see if it works locally... Cheers!
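Partially answering the first question: sudo can report the privileges it resolves for a user, and the LDAP back-end in sudo 1.7.x reads a debug knob from the LDAP configuration file; a sketch, assuming your build reads /etc/ldap.conf (both the flag and the file location are assumptions to verify against your package):
# as root, list the sudo privileges resolved for Aaron
sudo -l -U Aaron
# raise sudoers-LDAP debugging (sudo 1.7.x); diagnostic output is printed on the next sudo attempt
echo "sudoers_debug 2" >> /etc/ldap.conf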

    Read the article

  • JSP Precompilation for ADF Applications

    - by Duncan Mills
A question that comes up from time to time, particularly in relation to build automation, is how best to pre-compile the .jspx and .jsff files in an ADF application, thus ensuring that the app is ready to run as soon as it's installed into WebLogic. In the normal run of things, the first poor soul to hit a page pays the price and has to wait a little whilst the JSP is compiled into a servlet. Everyone else subsequently gets a free lunch. So it's a reasonable thing to want to do...
Let Me List the Ways
So forth to Google (other search engines are available)... which led me to a fairly old article on WLDJ - Removing Performance Bottlenecks Through JSP Precompilation. Technology-wise, it's somewhat out of date, but the one good point that it made is that it's really not very useful to try and use the precompile option in the weblogic.xml file. That's a really good observation - particularly if you're trying to integrate a pre-compile step into a Hudson Continuous Integration process. That same article mentioned an alternative approach for programmatic pre-compilation using weblogic.jspc. This seemed like a much more useful approach for a CI environment. However, weblogic.jspc is now obsoleted by weblogic.appc so we'll use that instead. Thanks to Steve for the pointer there.
And So To APPC
APPC has documentation - always a great place to start - and supports usage both from Ant via the wlappc task and from the command line using the weblogic.appc command. In my testing I took the latter approach. Usage, as the documentation will show you, is superficially pretty simple. The nice thing here is that you can pass an existing EAR file (generated of course using OJDeploy) and that EAR will be updated in place with the freshly compiled servlet classes created from the JSPs. Appc takes care of all the unpacking, compiling and re-packing of the EAR for you. Neat. So we're done, right...? Not quite.
The Devil is in the Detail
OK, so I'm being overly dramatic, but it's not all plain sailing, so here's a short guide to using weblogic.appc to compile a simple ADF application without pain.
Information You'll Need
The following is based on the assumption that you have a stand-alone WLS install with the Application Development Runtime installed and a suitable ADF-enabled domain created. This could of course all be run off of a JDeveloper install as well.
1. Your WebLogic home directory. Everything you need is relative to this, so make a note. In my case it's c:\builds\wls_ps4.
2. Next, deploy your EAR as normal and have a peek inside it using your favourite zip management tool. First of all look at the weblogic-application.xml inside the EAR /META-INF directory. Have a look for any library references. Something like this:
<library-ref>
   <library-name>adf.oracle.domain</library-name>
</library-ref>
Make a note of the library ref (adf.oracle.domain in this case), you'll need that in a second.
3. Next, open the nested WAR file within the EAR and then have a peek inside the weblogic.xml file in the /WEB-INF directory. Again make a note of the library references.
4. Now start WebLogic as per normal and run the WebLogic console app (e.g. http://localhost:7001/console). In the Domain Structure navigator, select Deployments.
5. For each of the libraries you noted down, drill into the library definition and make a note of the .war, .ear or .jar that defines the library. For example, in my case adf.oracle.domain maps to "C:\ builds\ WLS_PS4\ oracle_common\ modules\ oracle. adf. model_11. 1. 1\ adf. oracle. domain. ear". Note the extra spaces that are salted throughout this string as it is displayed in the console - just to make it annoying, you'll have to strip these out.
6. Finally you'll need the location of adfsharembean.jar. We need to pass this on the classpath for APPC so that the ADFConfigLifeCycleCallBack listener can be found. In a more complex app of your own you may need additional classpath entries as well.
Now we're ready to go, and it's a simple matter of applying the information we have gathered into the relevant command line arguments for the utility.
A Simple CMD File to Run APPC
Here's the stub .cmd file I'm using on Windows to run this:
@echo off
REM Stub weblogic.appc Runner
setlocal
set WLS_HOME=C:\builds\WLS_PS4
set ADF_LIB_ROOT=%WLS_HOME%\oracle_common\modules
set COMMON_LIB_ROOT=%WLS_HOME%\wlserver_10.3\common\deployable-libraries
set ADF_WEBAPP=%ADF_LIB_ROOT%\oracle.adf.view_11.1.1\adf.oracle.domain.webapp.war
set ADF_DOMAIN=%ADF_LIB_ROOT%\oracle.adf.model_11.1.1\adf.oracle.domain.ear
set JSTL=%COMMON_LIB_ROOT%\jstl-1.2.war
set JSF=%COMMON_LIB_ROOT%\jsf-1.2.war
set ADF_SHARE=%ADF_LIB_ROOT%\oracle.adf.share_11.1.1\adfsharembean.jar
REM Set up the WebLogic Environment so appc can be found
call %WLS_HOME%\wlserver_10.3\server\bin\setWLSEnv.cmd
CLS
REM Now compile away!
java weblogic.appc -verbose -library %ADF_WEBAPP%,%ADF_DOMAIN%,%JSTL%,%JSF% -classpath %ADF_SHARE% %1
endlocal
Running the above on a target ADF .ear file will zip through and create all of the relevant compiled classes inside your nested .war file in the \WEB-INF\classes\jsp_servlet\ directory (but don't take my word for it, run it and take a look!)
And So...
In the immortal words of the Pet Shop Boys, Was It Worth It? Well, here's where you'll have to do your own testing. In my case here, with a simple ADF application, pre-compilation shaved a non-scientific "3 Elephants" off of the initial page load time for the first access of each page. That's a pretty significant payback for such a simple step to add into your CI process, so why not give it a go.
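Since the stub ends with %1, invoking it is a one-liner; a hypothetical usage sketch (precompile.cmd and the EAR path are placeholder names of mine, not from the post):
REM run the stub above, assuming it was saved as precompile.cmd
precompile.cmd C:\builds\output\MyADFApp.ear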

    Read the article

  • Linux Kernel not passing through multicast UDP packets

    - by buecking
Recently I've set up a new Ubuntu Server 10.04 and noticed my UDP server is no longer able to see any multicast data sent to the interface, even after joining the multicast group. I've got the exact same set up on two other Ubuntu 8.04.4 LTS machines and there is no problem receiving data after joining the same multicast group. The ethernet card is a Broadcom netXtreme II BCM5709 and the driver used is:
b$ ethtool -i eth1
driver: bnx2
version: 2.0.2
firmware-version: 5.0.11 NCSI 2.0.5
bus-info: 0000:01:00.1
I'm using smcroute to manage my multicast registrations.
b$ smcroute -d
b$ smcroute -j eth1 233.37.54.71
After joining the group, ip maddr shows the newly added registration.
b$ ip maddr
1: lo
   inet 224.0.0.1
   inet6 ff02::1
2: eth0
   link 33:33:ff:40:c6:ad
   link 01:00:5e:00:00:01
   link 33:33:00:00:00:01
   inet 224.0.0.1
   inet6 ff02::1:ff40:c6ad
   inet6 ff02::1
3: eth1
   link 01:00:5e:25:36:47
   link 01:00:5e:25:36:3e
   link 01:00:5e:25:36:3d
   link 33:33:ff:40:c6:af
   link 01:00:5e:00:00:01
   link 33:33:00:00:00:01
   inet 233.37.54.71 <------- McastGroup.
   inet 224.0.0.1
   inet6 ff02::1:ff40:c6af
   inet6 ff02::1
So far so good; I can see that I'm receiving data for this multicast group.
b$ sudo tcpdump -i eth1 -s 65534 host 233.37.54.71
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 65534 bytes
09:30:09.924337 IP 192.164.1.120.58848 > 233.37.54.71.15572: UDP, length 212
09:30:09.947547 IP 192.164.1.120.58848 > 233.37.54.71.15572: UDP, length 212
09:30:10.108378 IP 192.164.1.120.58866 > 233.37.54.71.15574: UDP, length 268
09:30:10.196841 IP 192.164.1.120.58848 > 233.37.54.71.15572: UDP, length 212
...
I can also confirm that the interface is receiving mcast packets.
b$ ethtool -S eth1 | grep mcast_pack
rx_mcast_packets: 103998
tx_mcast_packets: 33
Now here's the problem. When I try to capture the traffic using a simple Ruby UDP server I receive zero data! Here's a simple server that reads data sent on port 15572 and prints the first two characters. This works on the two 8.04.4 Ubuntu Servers, but not the 10.04 server.
require 'socket'
s = UDPSocket.new
s.bind("", 15572)
5.times do
  text, sender = s.recvfrom(2)
  puts text
end
If I send a UDP packet crafted in Ruby to localhost, the server receives it and prints out the first two characters. So I know that the server above is working correctly.
irb(main):001:0> require 'socket'
=> true
irb(main):002:0> s = UDPSocket.new
=> #<UDPSocket:0x7f3ccd6615f0>
irb(main):003:0> s.send("I2 XXX", 0, 'localhost', 15572)
When I check the protocol statistics I see that InMcastPkts is not increasing, while the other 8.04 servers, on the same network, received a few thousand packets in 10 seconds.
b$ netstat -sgu ; sleep 10 ; netstat -sgu
IcmpMsg:
    InType3: 11
    OutType3: 11
Udp:
    446 packets received
    4 packets to unknown port received.
    0 packet receive errors
    461 packets sent
UdpLite:
IpExt:
    InMcastPkts: 4654 <--------- Same as below
    OutMcastPkts: 3426
    InBcastPkts: 9854
    InOctets: -1691733021
    OutOctets: 51187936
    InMcastOctets: 145207
    OutMcastOctets: 109680
    InBcastOctets: 1246341
IcmpMsg:
    InType3: 11
    OutType3: 11
Udp:
    446 packets received
    4 packets to unknown port received.
    0 packet receive errors
    461 packets sent
UdpLite:
IpExt:
    InMcastPkts: 4656 <-------------- Same as above
    OutMcastPkts: 3427
    InBcastPkts: 9854
    InOctets: -1690886265
    OutOctets: 51188788
    InMcastOctets: 145267
    OutMcastOctets: 109712
    InBcastOctets: 1246341
If I try forcing the interface into promisc mode, nothing changes.
At this point I'm stuck. I've confirmed the kernel config has multicast enabled. Perhaps there are other config options I should be checking?
b$ grep CONFIG_IP_MULTICAST /boot/config-2.6.32-23-server
CONFIG_IP_MULTICAST=y
Any thoughts on where to go from here?
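One variable worth eliminating (my assumption, not something confirmed above) is that the Ruby server never performs an IGMP join itself and relies entirely on smcroute; a receiver that joins the group explicitly, such as socat, would separate kernel delivery from group membership:
# socat joins 233.37.54.71 on eth1 itself and prints whatever arrives on UDP port 15572
socat -u UDP4-RECVFROM:15572,ip-add-membership=233.37.54.71:eth1,fork -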

    Read the article

  • bash: per-command history. How does it work?

    - by romainl
OK. I have an old G5 running Leopard and a Dell running Ubuntu 10.04 at home and a MacPro also running Leopard at work. I use Terminal.app/bash a lot. On my home G5 it exhibits a nice feature: using ↑ (up-arrow) to navigate history, I get the last command starting with the few letters that I've typed. This is what I mean (| represents the caret):
$ ssh user@server
$ vim /some/file/just/to/populate/history
$ ss|
So, I've typed the first two letters of "ssh"; hitting ↑ results in this:
$ ssh user@server
instead of this, which is the behaviour I get everywhere else:
$ vim /some/file/just/to/populate/history
If I keep on hitting ↑ or ↓, I can navigate through the history of ssh like this:
$ ssh otheruser@otherserver
$ ssh user@server
$ ssh yetanotheruser@yetanotherserver
It works the same for any command like cat, vim or whatever. That's really cool. Except that I have no idea how to mimic this behaviour on my other machines. Here is my .profile:
export PATH=/Developer/SDKs/flex_sdk_3.4/bin:/opt/local/bin:/opt/local/sbin:/usr/local/bin:/sw/bin:/sw/sbin:/bin:/sbin:/bin:/sbin:/usr/bin:/usr/sbin:$HOME/Applications/bin:/usr/X11R6/bin
export MANPATH=/usr/local/share/man:/usr/local/man:opt/local/man:sw/share/man
export INFO=/usr/local/share/info
export PERL5LIB=/opt/local/lib/perl5
export PYTHONPATH=/opt/local/bin/python2.7
export EDITOR=/opt/local/bin/vim
export VISUAL=/opt/local/bin/vim
export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home
export TERM=xterm-color
export GREP_OPTIONS='--color=auto' GREP_COLOR='1;32'
export CLICOLOR=1
export LS_COLORS='no=00:fi=00:di=01;34:ln=target:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:*.tar=00;31:*.tgz=00;31:*.arj=00;31:*.taz=00;31:*.lzh=00;31:*.zip=00;31:*.z=00;31:*.Z=00;31:*.gz=00;31:*.bz2=00;31:*.deb=00;31:*.rpm=00;31:*.TAR=00;31:*.TGZ=00;31:*.ARJ=00;31:*.TAZ=00;31:*.LZH=00;31:*.ZIP=00;31:*.Z=00;31:*.Z=00;31:*.GZ=00;31:*.BZ2=00;31:*.DEB=00;31:*.RPM=00;31:*.jpg=00;35:*.png=00;35:*.gif=00;35:*.bmp=00;35:*.ppm=00;35:*.tga=00;35:*.xbm=00;35:*.xpm=00;35:*.tif=00;35:*.png=00;35:*.fli=00;35:*.gl=00;35:*.dl=00;35:*.psd=00;35:*.JPG=00;35:*.PNG=00;35:*.GIF=00;35:*.BMP=00;35:*.PPM=00;35:*.TGA=00;35:*.XBM=00;35:*.XPM=00;35:*.TIF=00;35:*.PNG=00;35:*.FLI=00;35:*.GL=00;35:*.DL=00;35:*.PSD=00;35:*.mpg=00;36:*.avi=00;36:*.mov=00;36:*.flv=00;36:*.divx=00;36:*.qt=00;36:*.mp4=00;36:*.m4v=00;36:*.MPG=00;36:*.AVI=00;36:*.MOV=00;36:*.FLV=00;36:*.DIVX=00;36:*.QT=00;36:*.MP4=00;36:*.M4V=00;36:*.txt=00;32:*.rtf=00;32:*.doc=00;32:*.odf=00;32:*.rtfd=00;32:*.html=00;32:*.css=00;32:*.js=00;32:*.php=00;32:*.xhtml=00;32:*.TXT=00;32:*.RTF=00;32:*.DOC=00;32:*.ODF=00;32:*.RTFD=00;32:*.HTML=00;32:*.CSS=00;32:*.JS=00;32:*.PHP=00;32:*.XHTML=00;32:'
export LC_ALL=C
export LANG=C
stty cs8 -istrip -parenb
bind 'set convert-meta off'
bind 'set meta-flag on'
bind 'set output-meta on'
alias ip='curl http://www.whatismyip.org | pbcopy'
alias ls='ls -FhLlGp'
alias la='ls -AFhLlGp'
alias couleurs='$HOME/Applications/bin/colors2.sh'
alias td='$HOME/Applications/bin/todo.sh'
alias scale='$HOME/Applications/bin/scale.sh'
alias stree='$HOME/Applications/bin/tree'
alias envoi='$HOME/Applications/bin/envoi.sh'
alias unfoo='$HOME/Applications/bin/unfoo'
alias up='cd ..'
alias size='du -sh'
alias lsvn='svn list -vR'
alias jsc='/System/Library/Frameworks/JavaScriptCore.framework/Versions/A/Resources/jsc'
alias asl='sudo rm -f /private/var/log/asl/*.asl'
alias trace='tail -f $HOME/Library/Preferences/Macromedia/Flash\ Player/Logs/flashlog.txt'
alias redis='redis-server /opt/local/etc/redis.conf'
source /Users/johncoltrane/Applications/bin/git-completion.sh
export GIT_PS1_SHOWUNTRACKEDFILES=1
export GIT_PS1_SHOWUPSTREAM="verbose git"
export GIT_PS1_SHOWDIRTYSTATE=1
export PS1='\n\[\033[32m\]\w\[\033[0m\] $(__git_ps1 "[%s]")\n\[\033[1;31m\]\[\033[31m\]\u\[\033[0m\] $ \[\033[0m\]'
mkcd () {
    mkdir -p "$*"
    cd "$*"
}
function cdl {
    cd $1
    la
}
n() {
    $EDITOR ~/Dropbox/nv/"$*".txt
}
nls () {
    ls -c ~/Dropbox/nv/ | grep "$*"
}
copy(){
    curl -s -F 'sprunge=<-' http://sprunge.us | pbcopy
}
if [ -f /opt/local/etc/profile.d/cdargs-bash.sh ]; then
    source /opt/local/etc/profile.d/cdargs-bash.sh
fi
if [ -f /opt/local/etc/bash_completion ]; then
    . /opt/local/etc/bash_completion
fi
Any idea?
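For the record, the G5's behaviour matches readline's history-search-backward/forward functions bound to the arrow keys; a minimal sketch that could go into ~/.inputrc or be appended to the .profile above (assuming a terminal that sends the usual \e[A and \e[B sequences):
# prefix-search the history with the arrow keys instead of plain previous/next-history
bind '"\e[A": history-search-backward'
bind '"\e[B": history-search-forward'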

    Read the article

  • Accessing resources on localhost using domain credentials

    - by jas
I'm trying to set up Team Foundation Server 2010, SharePoint Server 2010 and Report Server 2008R2. I apologize for how long my question/problem is, but I'm really lost on where to even look, so I'm being as descriptive as possible in hopes that I'm making sense.
The goal: Since developers can be inside or outside the firewall, there needs to be a single http point of entry to TFS that works regardless of which side of the firewall you are on, and it needs to work with external access to SharePoint and Report Server. Meaning we have it set up in DNS so that buildserver.mydomain.com points to the build service box, which contains all of the services listed at the top of this post, and specific services are defined/located by the port number.
This is working great on every machine, inside and out, except from the build server itself. All services must be able to work using external URLs. If I use http://buildserver.mydomain.com:4800/tfs (the external URL) from my notebook, which is behind the firewall, I'm able to login with my domain credentials as expected. If the other developer points to the same URL from their home, which isn't on the domain, they are also able to login using their domain credentials. However, if I am directly on buildserver and call SharePoint, TFS or Reporting Server from the server itself using the external URL (i.e. http://buildserver.mydomain.com:4800), I am prompted for a username and password. Entering my domain credentials results in another prompt to enter my credentials again. It will prompt three times regardless of which credentials are used (I have rights as a domain admin) and then, after the third prompt, directs me to a blank white page as though access was denied. There are no errors displayed on the page and nothing ends up in the event viewer. From buildserver, if I use just the host name (the internal URL), then I'm prompted a single time for credentials and it works, i.e. http://buildserver:4800/tfs works from the server itself. The behavior is identical for any service requiring authentication. Meaning, from the box itself, SharePoint Central Admin, SharePoint WebApp, TFS, TFS Web Access, Report Server and Report Manager all fail using the external URL but succeed if called using the internal URL.
So the problem comes into play when configuring all of the services to work together. The only way to configure TFS is locally from the server, which means I must point to the internal Report Server URLs (http://buildserver:4800/reports and reportServer respectively, instead of http://buildserver.mydomain.com:4800 like they need to be), since external URLs aren't working from the server itself. If I configure TFS to use the internal URL for Report Server, then creating team projects or working in the SharePoint site for the team project fails for anyone not inside the domain, since their machines have no idea who http://buildserver:4800/reports even is or how to resolve it. I have configured SharePoint with Alternate Access Mappings as well as set up Report Server to listen for external URLs. The external URLs simply aren't working when called from the server itself. I hope this makes sense. Thanks for taking the time to read this rather verbose plea for help.
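One avenue that fits these symptoms (credentials accepted from everywhere except the server itself, three prompts and then a blank page only with the external FQDN) is Windows' NTLM loopback check; Microsoft KB896861 documents a BackConnectionHostNames workaround. A hedged sketch reusing the host name from this post; this is an assumption about the cause, not a confirmed fix:
REM register the external name as a permitted loopback host (per KB896861), then reboot or restart IISAdmin
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0" /v BackConnectionHostNames /t REG_MULTI_SZ /d buildserver.mydomain.com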

    Read the article

  • Not attending the LUGM mini-meetup - 05. Oct 2013

Not attending a meeting of the LUGM can be fun, too. It's becoming a bit of a habit that Ish organises small gatherings, aka mini-meetups, of the Linux User Group Mauritius/Meta (LUGM) almost every Saturday. There they mainly discuss and talk about various elements of using Linux as one's main operating system and the possibilities you are going to have, plus, of course, some tips & tricks about mastering the command line and initial steps in scripting or even writing HTML. In general it sounds like a good portion of fun and a great spirit of community. Unfortunately, I'm usually quite busy with private and family matters during the weekend and so I had already signalled that I wouldn't be around. Well, at least not physically... But this Saturday a couple of things worked out faster than expected and so I was hanging out on my machine. I made virtual contact via one of Pawan's messages over on Facebook... and somehow that kicked off some online fun around the basic configuration of Apache HTTPd 2.2.x and PHP 5.x, and how to improve the overall performance of a newly installed blog based on WordPress.
Default configuration files
Nitin's website finally came alive and despite the dark theme and the hidden Apple 'fanboy' advertisement I was more interested in the technical situation. As with any new installation there is usually quite some adjustment to be done, and Nitin's page was no exception. Unfortunately, out-of-the-box installations of Apache httpd and PHP are too verbose and expose too much information under the hood. You might think that this isn't really a problem at all; well, think about it again after completely reading this article.
First, I checked the HTTP response headers of Nitin's page - using either Chrome Developer Tools or the Firefox Web Developer extension - and based on that I advised him to lower the noise levels a little bit. It's not really necessary that detailed information about the web server software and scripting language be published in every response. Quite a number of script kiddies and exploits actually check for version specifics prior to an attack, so removing at least the version details hardens the system a little bit. In particular, I'm talking about these response values:
Server
X-Powered-By
How to achieve that? By tweaking the configuration files... Namely, we are going to look into the following ones:
apache2.conf
httpd.conf
.htaccess
php.ini
The above list contains some additional files I'm talking about in the next paragraphs. Anyway, those are the ones involved.
Tweaking Apache
Open your favourite text editor and start to modify the apache2.conf. Eventually, you might like to have a quick peek at the file to see whether it is necessary to adjust it or not. Following is a handy combination of commands to get an overview of your active directives:
# sudo grep -v '#' /etc/apache2/apache2.conf | grep -v '^$' | less
There you keep an eye on those two Apache directives:
ServerSignature Off
ServerTokens Prod
If that's not the case, change them as highlighted above. In order to activate your modifications you have to restart the Apache httpd server. On Debian and Ubuntu you might use apache2ctl for that; on other distributions you might have to use service or run the init-scripts again:
# sudo apache2ctl configtest
Syntax OK
# sudo apache2ctl restart
Refresh your website and check the HTTP response header.
Tweaking PHP5 (a little bit)
Next, check your php.ini file with the following statement:
# sudo grep -v ';' /etc/php5/apache2/php.ini | grep -v '^$' | less
And check the value of
expose_php = Off
Again, if it's not as highlighted, change it...
Some more Apache love
Okay, back to Apache. It might also be interesting to improve the situation around browser caching and to remove more obsolete information. When you run your website against the usual performance checks like Google Page Speed and Yahoo YSlow you might see those checkpoints with bad grades on a standard, default configuration. Well, this can be fixed easily.
Configure entity tags (ETags)
ETags are only interesting when you run your websites on a farm of multiple web servers. Removing this data for your static resources is very simple in Apache. As we are going to deal with the HTTP response header information, you have to ensure that Apache is capable of manipulating it. First, check your enabled modules:
# sudo ls -al /etc/apache2/mods-enabled/ | grep headers
And in case the 'headers' module is not listed, you have to enable it from the available ones:
# sudo a2enmod headers
Second, check your httpd.conf file (in case it exists):
# sudo grep -v '#' /etc/apache2/httpd.conf | grep -v '^$' | less
In newer (better said: fresh) installations you might have to create a new configuration file below your conf.d folder with your favourite text editor, like so:
# sudo nano /etc/apache2/conf.d/headers.conf
Then, in order to tweak your HTTP responses, either check for those lines or add them:
Header unset ETag
FileETag None
In case your file doesn't exist or those lines are missing, feel free to create/add them. Afterwards, check your Apache configuration syntax and restart your running instances as already shown above:
# sudo apache2ctl configtest
Syntax OK
# sudo apache2ctl restart
Add Expires headers
To improve the loading performance of your website, you should take some care over the proper configuration of how to leverage the browser's ability to cache certain resources and files. This is done by adding an Expires: value to the HTTP response header. Generally speaking it is advised that you specify a near-future expiry (read: one week or a little bit more) for your static content like JavaScript files or Cascading Style Sheets. One solution to adjust this is to put some instructions into the .htaccess file in the root folder of your web site. Of course, this could also be placed into a more generic location of your Apache installation but honestly, I'd like to keep this at the web site level. Following are some adjustments I'm currently using on this blog site:
# Turn on Expires and set default to 0
ExpiresActive On
ExpiresDefault A0
# Set up caching on media files for 1 year (forever?)
<FilesMatch "\.(flv|ico|pdf|avi|mov|ppt|doc|mp3|wmv|wav)$">
ExpiresDefault A29030400
Header append Cache-Control "public"
</FilesMatch>
# Set up caching on media files for 1 week
<FilesMatch "\.(js|css)$">
ExpiresDefault A604800
Header append Cache-Control "public"
</FilesMatch>
# Set up caching on media files for 31 days
<FilesMatch "\.(gif|jpg|jpeg|png|swf)$">
ExpiresDefault A2678400
Header append Cache-Control "public"
</FilesMatch>
As we are editing the .htaccess files, it is not necessary to restart Apache.
In case your web site doesn't load anymore, or you're experiencing an error while trying to restart your httpd, check that the 'expires' module is actually enabled:
# ls -al /etc/apache2/mods-enabled/ | grep expires
# sudo a2enmod expires
Of course, the instructions above are not feature-complete, but I hope that they might provide a better default configuration for your LAMP stack.
Resume of the day
Within a couple of hours, and while being occupied with an eLearning course on SQL Server 2012, I had some good fun in helping and assisting other LUGM members while they were some kilometres away at Bagatelle. According to other blog articles it seems that Nitin had quite some moments of desperation. Just for the record: at no time was it my intention to either kick his butt or pull his leg. I was simply providing some input based on the lessons I've learned over the last couple of years configuring Apache HTTPd and PHP. Check out the other blogs, too:
LUGM mini-meetup... Epic!
Superb Saturday Linux Meetup
And last but not least, the man himself: The end of a new beginning
Cheers, and happy community'ing!
Updates
Due to our weekly Code & Coffee sessions in the MSCC community, I had a chance to talk to Nitin directly and he showed me the problems on his machine. This led me to update this article, hence the paragraphs on enabling the modules 'headers' and 'expires'.
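A quick way to confirm that all of the tweaks above took hold is to inspect the response headers from the command line; a small sketch, with the URL as a placeholder for one of your own static files:
# Server should now be terse, X-Powered-By and ETag gone, Expires/Cache-Control present on static files
curl -sI http://www.example.org/wp-content/themes/some-theme/style.css | grep -Ei '^(server|x-powered-by|etag|expires|cache-control)'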

    Read the article

  • Error during GENERAL_REQUEST_ENTITY for POST results in ASP .NET session state never getting unlocked

    - by Jesse
I have been trying to chase down the root cause of a condition where ASP .NET session state remains locked after a web request has been terminated due to an unexpected error. We use the SQL Server session state provider because we have several servers in a web farm. This issue first presented itself in the form of many requests getting stuck on the 'AcquireRequestState' event of their lifecycle for no apparent reason. I was able to find corresponding entries for these requests in the session state database in SQL Server that were all locked (column Locked = 1). I was also able to correlate these requests to entries in the IIS log with HTTP status codes of 500 (with a sub-status of 0). These findings lead me to believe that, in some cases, a request was erroring out but was NOT releasing its lock on session state like it should.
I enabled Failed Request Tracing in IIS for the website in question for status code 500, with all available providers selected, each with the 'Verbose' setting for verbosity. I've since gathered several failed traces that have caused permanently locked ASP .NET sessions. They all share the same characteristics:
They are all 'POST' requests where the browser is posting data to be processed/saved.
They all have events indicating that the 'Session' module was invoked during the REQUEST_ACQUIRE_STATE event. At this point the request would have marked the row in the session state database as being "locked". This is normal and expected.
They all have GENERAL_READ_ENTITY_START, GENERAL_READ_ENTITY_END, and GENERAL_REQUEST_ENTITY entries that appear to be reading in the data that was posted to the server as part of the request. This appears to be a buffered operation, as these events get repeated over and over with each one reading in some subset of the posted data.
At some point during the 'read entity' related events an error occurs. Some have the error code "Incorrect function. (0x80070001)" and others have "The I/O operation has been aborted because of either a thread exit or an application request. (0x800703e3)".
Once the error has been encountered, they all jump directly to the END_REQUEST events.
The issue here is that, under normal circumstances, there should be a RELEASE_REQUEST_STATE event that allows the Session module to release the lock it has on the session. This event is being skipped in this scenario. Just to be sure, I enabled failed request tracing for the '200' status code as well and generated several traces of successful requests; those do have the RELEASE_REQUEST_STATE event being handled by the Session module. My theory at this point is that some kind of network issue is causing the 'Incorrect function' and 'I/O operation has been aborted because of either a thread exit or an application request' errors, but I don't understand why this seems to cause the request handling to skip over the RELEASE_REQUEST_STATE event. If the request went through REQUEST_ACQUIRE_STATE, it seems like it should hit RELEASE_REQUEST_STATE as well. I'm loath to say that this is a bug in IIS or ASP .NET, but it certainly appears that way to me at this point. Are there any configuration changes I could make to help ensure that 'RELEASE_REQUEST_STATE' is fired under all error conditions?
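While chasing this, it can also help to watch for orphaned locks directly in the session database; a hedged sketch against the default ASPState schema created by InstallSqlState.sql (the server name is a placeholder, and your database name may differ):
sqlcmd -S sqlserver -d ASPState -Q "SELECT SessionId, LockDate, LockCookie FROM ASPStateTempSessions WHERE Locked = 1 AND LockDate < DATEADD(minute, -5, GETUTCDATE())"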

    Read the article

  • SQL2008R2 install issues on windows 7 - unable to install setup support files?

    - by Liam
I am trying to install the above but am getting the following errors when it attempts to install the setup support files.
This is the first error that occurs during installation of the setup support files:
TITLE: Microsoft SQL Server 2008 R2 Setup
------------------------------
The following error has occurred: The installer has encountered an unexpected error. The error code is 2337. Could not close file: Microsoft.SqlServer.GridControl.dll GetLastError: 0.
Click 'Retry' to retry the failed action, or click 'Cancel' to cancel this action and continue setup.
For help, click: http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft+SQL+Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=10.50.1600.1&EvtType=0xDF039760%25401201%25401
This is the second error that occurs after clicking continue in the installer after the first error is generated:
TITLE: Microsoft SQL Server 2008 R2 Setup
------------------------------
The following error has occurred: SQL Server Setup has encountered an error when running a Windows Installer file. Windows Installer error message: The Windows Installer Service could not be accessed. This can occur if the Windows Installer is not correctly installed. Contact your support personnel for assistance.
Windows Installer file: C:\Users\watto_uk\Desktop\In-Digital\Software\Microsoft\SQL Server 2008 R2\1033_ENU_LP\x64\setup\sqlsupport_msi\SqlSupport.msi
Windows Installer log file: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20110713_205508\SqlSupport_Cpu64_1_ComponentUpdate.log
Click 'Retry' to retry the failed action, or click 'Cancel' to cancel this action and continue setup.
For help, click: http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft+SQL+Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=10.50.1600.1&EvtType=0xDC80C325
These errors are generated from an ISO package downloaded from Microsoft. I have also tried using the Web Platform Installer to install the Express version instead, but the SQL Server installation fails with that also. The Management Studio installs fine, but not the server. I have checked to make sure that the Windows Installer service is started, and it is. I can't seem to find an answer for this anywhere, as all previously reported issues appear to be related to XP. I did have the Express edition installed on the machine previously but uninstalled it to upgrade to the full version. I wish I hadn't now. Can anyone kindly offer any advice or point me in the right direction to stop me going insane with this? Any advice will be appreciated.
Update=======================
After digging a bit deeper I've located details of the error from the setup log file. I can also upload the log file if required.
MSI (s) (E8:28) [23:35:18:705]: Assembly Error:The module '%1' was expected to contain an assembly manifest.
MSI (s) (E8:28) [23:35:18:705]: Note: 1: 1935 2: 3: 0x80131018 4: IStream 5: Commit 6:
MSI (s) (E8:28) [23:35:18:705]: Note: 1: 2337 2: 0 3: Microsoft.SqlServer.GridControl.dll
MSI (s) (E8:28) [23:35:22:869]: Product: Microsoft SQL Server 2008 R2 Setup (English) -- Error 2337. The installer has encountered an unexpected error. The error code is 2337. Could not close file: Microsoft.SqlServer.GridControl.dll GetLastError: 0.
MSI (s) (E8:28) [23:35:22:916]: Internal Exception during install operation: 0xc0000005 at 0x000007FEE908A23E.
MSI (s) (E8:28) [23:35:22:916]: WER report disabled for silent install.
MSI (s) (E8:28) [23:35:22:932]: Internal MSI error. Installer terminated prematurely.
Error 2337. The installer has encountered an unexpected error. The error code is 2337. Could not close file: Microsoft.SqlServer.GridControl.dll GetLastError: 0.
MSI (s) (E8:28) [23:35:22:932]: MainEngineThread is returning 1603
MSI (s) (E8:58) [23:35:22:932]: RESTART MANAGER: Session closed.
Installer stopped prematurely.
MSI (c) (0C:14) [23:35:22:947]: Decrementing counter to disable shutdown. If counter >= 0, shutdown will be denied. Counter after decrement: -1
MSI (c) (0C:14) [23:35:22:947]: MainEngineThread is returning 1601
=== Verbose logging stopped: 13/07/2011 23:35:22 ===
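Error 2337 is frequently associated with corrupted installation media rather than the Windows Installer service itself, so before digging further into the MSI internals it may be worth verifying the downloaded ISO against the checksum published on the download page and re-extracting it with a different tool; a sketch, with the file name as an illustrative placeholder:
REM compute the ISO's SHA1 and compare it to the value published by Microsoft
certutil -hashfile "C:\Users\watto_uk\Desktop\SQLFULL_x64_ENU.iso" SHA1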

    Read the article

  • Building an OpenStack Cloud for Solaris Engineering, Part 1

    - by Dave Miner
One of the signature features of the recently-released Solaris 11.2 is the OpenStack cloud computing platform.  Over on the Solaris OpenStack blog the development team is publishing lots of details about our version of OpenStack Havana as well as some tips on specific features, and I highly recommend reading those to get a feel for how we've leveraged Solaris's features to build a top-notch cloud platform.  In this and some subsequent posts I'm going to look at it from a different perspective, which is that of the enterprise administrator deploying an OpenStack cloud.  But this won't be just a theoretical perspective: I've spent the past several months putting together a deployment of OpenStack for use by the Solaris engineering organization, and now that it's in production we'll share how we built it and what we've learned so far.
In the Solaris engineering organization we've long had dedicated lab systems dispersed among our various sites and a home-grown reservation tool for developers to reserve those systems; various teams also have private systems for specific testing purposes.  But as a developer, it can still be difficult to find systems you need, especially since most Solaris changes require testing on both SPARC and x86 systems before they can be integrated.  We've added virtual resources over the years as well in the form of LDOMs and zones (both traditional non-global zones and the new kernel zones).  Fundamentally, though, these were all still deployed in the same model: our overworked lab administrators set up pre-configured resources and we then reserve them.  Sounds like pretty much every traditional IT shop, right?  Which means that there's a lot of opportunity for efficiencies from greater use of virtualization and the self-service style of cloud computing.  As we were well into development of OpenStack on Solaris, I was recruited to figure out how we could deploy it to both provide more (and more efficient) development and test resources for the organization as well as a test environment for Solaris OpenStack.
At this point, let's acknowledge one fact: deploying OpenStack is hard.  It's a very complex piece of software that makes use of sophisticated networking features and runs as a ton of service daemons with myriad configuration files.  The web UI, Horizon, doesn't often do a good job of providing detailed errors.  Even the command-line clients are not as transparent as you'd like, though at least you can turn on verbose and debug messaging and often get some clues as to what to look for, though it helps if you're good at reading JSON structure dumps.  I'd already learned all of this in doing a single-system Grizzly-on-Linux deployment for the development team to reference when they were getting started so I at least came to this job with some appreciation for what I was taking on.  The good news is that both we and the community have done a lot to make deployment much easier in the last year; probably the easiest approach is to download the OpenStack Unified Archive from OTN to get your hands on a single-system demonstration environment.  I highly recommend getting started with something like it to get some understanding of OpenStack before you embark on a more complex deployment.  For some situations, it may in fact be all you ever need.  If so, you don't need to read the rest of this series of posts!
In the Solaris engineering case, we need a lot more horsepower than a single-system cloud can provide.  We need to support both SPARC and x86 VM's, and we have hundreds of developers so we want to be able to scale to support thousands of VM's, though we're going to build to that scale over time, not immediately.  We also want to be able to test both Solaris 11 updates and a release such as Solaris 12 that's under development so that we can work out any upgrade issues before release.  One thing we don't have is a requirement for extremely high availability, at least at this point.  We surely don't want a lot of down time, but we can tolerate scheduled outages and brief (as in an hour or so) unscheduled ones.  Thus I didn't need to spend effort on trying to get high availability everywhere.
The diagram below shows our initial deployment design.  We're using six systems, most of which are x86 because we had more of those immediately available.  All of those systems reside on a management VLAN and are connected with a two-way link aggregation of 1 Gb links (we don't yet have 10 Gb switching infrastructure in place, but we'll get there).  A separate VLAN provides "public" (as in connected to the rest of Oracle's internal network) addresses, while we use VxLANs for the tenant networks. One system is more or less the control node, providing the MySQL database, RabbitMQ, Keystone, and the Nova API and scheduler as well as the Horizon console.  We're curious how this will perform and I anticipate eventually splitting at least the database off to another node to help simplify upgrades, but at our present scale this works.
I had a couple of systems with lots of disk space, one of which was already configured as the Automated Installation server for the lab, so it's just providing the Glance image repository for OpenStack.  The other node with lots of disks provides Cinder block storage service; we also have a ZFS Storage Appliance that will help back-end Cinder in the near future, I just haven't had time to get it configured in yet.
There's a separate system for Neutron, which is our Elastic Virtual Switch controller and handles the routing and NAT for the guests.  We don't have any need for firewalling in this deployment so we're not doing so.  We presently have only two tenants defined, one for the Solaris organization that's funding this cloud, and a separate tenant for other Oracle organizations that would like to try out OpenStack on Solaris.  Each tenant has one VxLAN defined initially, but we can of course add more.  Right now we have just a single /24 network for the floating IP's; once we get demand up to where we need more then we'll add them.
Finally, we have started with just two compute nodes; one is an x86 system, the other is an LDOM on a SPARC T5-2.  We'll be adding more when demand reaches the level where we need them, but as we're still ramping up the user base it's less work to manage fewer nodes until then.
My next post will delve into the details of building this OpenStack cloud's infrastructure, including how we're using various Solaris features such as Automated Installation, IPS packaging, SMF, and Puppet to deploy and manage the nodes.  After that we'll get into the specifics of configuring and running OpenStack itself.
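As mentioned above, the command-line clients can at least be coaxed into showing their work; a small sketch of the verbose/debug switches referred to in this post, assuming the nova client is installed and credentials are already set in the environment:
# print the full REST request/response exchange (the JSON structure dumps mentioned above)
nova --debug list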

    Read the article

  • VirtualServer reverseproxy works locally, but not from client

    - by Yep
Setup: two webservers pointed to 127.0.0.1:8080 and :8081. Curl validates they work as expected. Apache with the following virtual hosts:
NameVirtualHost 192.168.1.1:80
<VirtualHost 192.168.1.1:80>
    ServerAdmin [email protected]
    ProxyPass / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
    ServerName 192.168.1.1
    ServerAlias http://192.168.1.1
</VirtualHost>
NameVirtualHost 192.168.1.2:80
<VirtualHost 192.168.1.2:80>
    ServerAdmin [email protected]
    ProxyPass / http://127.0.0.1:8081/
    ProxyPassReverse / http://127.0.0.1:8081/
    ServerName 192.168.1.2
    ServerAlias http://192.168.1.2
</VirtualHost>
On the server I can curl to the virtual hosts and receive appropriate responses (curl 192.168.1.1 gives me the webserver's response from localhost:8080, etc). Remote hosts, however, cannot connect to 192.168.1.1 or .2 at all. What am I missing?
Re: comments
Yes, the default directory directive is still in place:
# Deny access to root file system
<Directory />
    Options None
    AllowOverride None
    Order Deny,Allow
    deny from all
</Directory>
No Apache logs are generated when trying to reach 192.168.1.1 remotely. They do get generated when I curl from local. If I point the webservers to *:8080 and *:8081 instead of binding to localhost, I can access them from a remote host via 192.168.1.1 and 192.168.1.2 if I specify the 8080 and 8081 ports (both ports work on both IPs, which is what I'm trying to avoid with the Apache reverse proxy bound to 80 on each interface).
Edit2: curl verbose output (similar for the second webserver, and for 127.0.0.1:portnum):
[user@host mingle_12_2_1]$ curl -v 192.168.1.1
* About to connect() to 192.168.1.1 port 80
* Trying 192.168.1.1... connected
* Connected to 192.168.1.1 (192.168.1.1) port 80
> GET / HTTP/1.1
> User-Agent: curl/7.15.5 (x86_64-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5
> Host: 192.168.1.1
> Accept: */*
>
< HTTP/1.1 302 Found
< Date: Tue, 16 Oct 2012 16:22:08 GMT
< Server: Jetty(6.1.19)
< Cache-Control: no-cache
< Location: http://192.168.1.1/install
< X-Runtime: 130
< Content-Type: text/html; charset=utf-8
< Content-Length: 94
< Connection: close
Closing connection #0
<html><body>You are being <a href="http://192.168.1.1/install">redirected</a>.</body></html>
Log from the local request:
192.168.1.1 - - [16/Oct/2012:12:22:08 -0400] "GET / HTTP/1.1" 302 94
No Apache access log or error log entries are generated for requests from remote clients.
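Given that local curls succeed but remote requests generate no Apache log entries at all, the packets are likely being stopped before they reach Apache; a couple of checks worth running on the server (a sketch; the interface and rule output are illustrative):
# confirm Apache really listens on port 80 on the expected addresses
netstat -tlnp | grep ':80'
# look for a host firewall rule dropping inbound port-80 traffic from non-local sources
iptables -L INPUT -n -v --line-numbers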

    Read the article

  • Router 2wire, Slackware desktop in DMZ mode, iptables policy aginst ping, but still pingable

    - by user135501
My machine is in the router's DMZ mode, so I'm doing my own firewalling. Stealth tests come back all OK, but Shields Up still reports that I answer pings. Yesterday I couldn't get a connection to game servers working because ping blocking was enabled on the router. I disabled it there, but the machine remains pingable even with my own firewall rules in place. What exactly is the relationship between my machine and the router in DMZ mode (only my machine is in the DMZ; there is a bunch of other machines behind the router firewall)? If the router still determines whether I'm pingable, then with the router's ping block disabled, my iptables rules for this scenario are apparently doing nothing. Please ignore the commented rules; I uncomment them as needed. These two should do the job, right?
iptables -A INPUT -p icmp --icmp-type echo-request -j DROP
echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all
Here are my iptables:
#!/bin/sh
# Begin /bin/firewall-start

# Insert connection-tracking modules (not needed if built into the kernel).
#modprobe ip_tables
#modprobe iptable_filter
#modprobe ip_conntrack
#modprobe ip_conntrack_ftp
#modprobe ipt_state
#modprobe ipt_LOG

# allow local-only connections
iptables -A INPUT -i lo -j ACCEPT
# free output on any interface to any ip for any service
# (equal to -P ACCEPT)
iptables -A OUTPUT -j ACCEPT
# permit answers on already established connections
# and permit new connections related to established ones (eg active-ftp)
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

#Gamespy&NWN
#iptables -A INPUT -p tcp -m tcp -m multiport --ports 5120:5129 -j ACCEPT
#iptables -A INPUT -p tcp -m tcp --dport 6667 --tcp-flags SYN,RST,ACK SYN -j ACCEPT
#iptables -A INPUT -p tcp -m tcp --dport 28910 --tcp-flags SYN,RST,ACK SYN -j ACCEPT
#iptables -A INPUT -p tcp -m tcp --dport 29900 --tcp-flags SYN,RST,ACK SYN -j ACCEPT
#iptables -A INPUT -p tcp -m tcp --dport 29901 --tcp-flags SYN,RST,ACK SYN -j ACCEPT
#iptables -A INPUT -p tcp -m tcp --dport 29920 --tcp-flags SYN,RST,ACK SYN -j ACCEPT
#iptables -A INPUT -p udp -m udp -m multiport --ports 5120:5129 -j ACCEPT
#iptables -A INPUT -p udp -m udp --dport 6500 -j ACCEPT
#iptables -A INPUT -p udp -m udp --dport 27900 -j ACCEPT
#iptables -A INPUT -p udp -m udp --dport 27901 -j ACCEPT
#iptables -A INPUT -p udp -m udp --dport 29910 -j ACCEPT

# Log everything else: What's Windows' latest exploitable vulnerability?
iptables -A INPUT -j LOG --log-prefix "FIREWALL:INPUT"

# set a sane policy: everything not accepted > /dev/null
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT DROP
iptables -A INPUT -p icmp --icmp-type echo-request -j DROP

# be verbose on dynamic ip-addresses (not needed in case of static IP)
echo 2 > /proc/sys/net/ipv4/ip_dynaddr

# disable ExplicitCongestionNotification - too many routers are still
# ignorant
echo 0 > /proc/sys/net/ipv4/tcp_ecn

#ping death
echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all

# If you are frequently accessing ftp-servers or enjoy chatting you might
# notice certain delays because some implementations of these daemons have
# the feature of querying an identd on your box for your username for
# logging. Although there's really no harm in this, having an identd
# running is not recommended because some implementations are known to be
# vulnerable.
# To avoid these delays you could reject the requests with a 'tcp-reset':
#iptables -A INPUT -p tcp --dport 113 -j REJECT --reject-with tcp-reset
#iptables -A OUTPUT -p tcp --sport 113 -m state --state RELATED -j ACCEPT

# To log and drop invalid packets, mostly harmless packets that came in
# after netfilter's timeout, sometimes scans:
#iptables -I INPUT 1 -p tcp -m state --state INVALID -j LOG --log-prefix "FIREWALL:INVALID"
#iptables -I INPUT 2 -p tcp -m state --state INVALID -j DROP

# End /bin/firewall-start
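One way to find out whether the box or the 2wire is answering the probes (an assumption worth testing, since in DMZ mode the router may respond to ICMP itself): capture ICMP on the DMZ host while the external test runs. If no echo-requests arrive yet Shields Up still sees replies, the router is the one responding.
# watch for incoming echo-requests during the external scan (eth0 is a placeholder interface name)
tcpdump -ni eth0 'icmp[icmptype] = icmp-echo'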

    Read the article

  • Connect to VPN from Mac on Time Capsule network

    - by Lou Franco
I have a few clients on my network that can connect to my work VPN (Windows PPTP) when they are not on my home network. On my home network (cable modem with a Time Capsule providing Wifi), it fails very early - it looks like it can't even establish a connection. Logs just say that it failed; even verbose logs don't have much. I redacted the host and IP from this log, but I can ping it.
Wed Feb 2 14:32:41 2011 : PPTP connecting to server 'XXX.XXX.com' (XXX.XX.XX.XX)...
Wed Feb 2 14:32:41 2011 : PPTP connection established.
Wed Feb 2 14:32:41 2011 : using link 0
Wed Feb 2 14:32:41 2011 : Using interface ppp0
Wed Feb 2 14:32:41 2011 : Connect: ppp0 <--> socket[34:17]
Wed Feb 2 14:32:41 2011 : sent [LCP ConfReq id=0x1 <asyncmap 0x0> <magic 0x543c7af8> <pcomp> <accomp>]
Wed Feb 2 14:32:44 2011 : sent [LCP ConfReq id=0x1 <asyncmap 0x0> <magic 0x543c7af8> <pcomp> <accomp>]
Wed Feb 2 14:32:47 2011 : sent [LCP ConfReq id=0x1 <asyncmap 0x0> <magic 0x543c7af8> <pcomp> <accomp>]
Wed Feb 2 14:32:50 2011 : sent [LCP ConfReq id=0x1 <asyncmap 0x0> <magic 0x543c7af8> <pcomp> <accomp>]
Wed Feb 2 14:32:53 2011 : sent [LCP ConfReq id=0x1 <asyncmap 0x0> <magic 0x543c7af8> <pcomp> <accomp>]
Wed Feb 2 14:32:56 2011 : sent [LCP ConfReq id=0x1 <asyncmap 0x0> <magic 0x543c7af8> <pcomp> <accomp>]
Wed Feb 2 14:32:59 2011 : sent [LCP ConfReq id=0x1 <asyncmap 0x0> <magic 0x543c7af8> <pcomp> <accomp>]
Wed Feb 2 14:33:02 2011 : sent [LCP ConfReq id=0x1 <asyncmap 0x0> <magic 0x543c7af8> <pcomp> <accomp>]
Wed Feb 2 14:33:05 2011 : sent [LCP ConfReq id=0x1 <asyncmap 0x0> <magic 0x543c7af8> <pcomp> <accomp>]
Wed Feb 2 14:33:08 2011 : sent [LCP ConfReq id=0x1 <asyncmap 0x0> <magic 0x543c7af8> <pcomp> <accomp>]
Wed Feb 2 14:33:11 2011 : LCP: timeout sending Config-Requests
Wed Feb 2 14:33:11 2011 : Connection terminated.
Wed Feb 2 14:33:11 2011 : PPTP disconnecting...
Wed Feb 2 14:33:11 2011 : PPTP disconnected
Others can get to the VPN and I can too, but not on my network. The only clue I have seen in other forums is to set the NAT default host on the Time Capsule - I set this to the IP that my Mac got over DHCP. I made sure that my Mac gets a different range of IP addresses than it would get if it connected to the VPN (192.168.1.x vs. 10.0.0.x). I'm not using any VPN client - just Network System Preferences. It has worked in the past, but that was a while ago, so I can't pinpoint a change. My sysadmin doesn't even see incoming connections to the VPN (nothing logged about me when I connect). Looking for any diagnostic advice at all.
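For what it's worth, this log pattern (the TCP control connection on port 1723 establishes, then every LCP Config-Request times out) is the classic signature of the PPTP data channel, GRE (IP protocol 47), being blocked or mangled by NAT; one way to check from the Mac, with en0 as a placeholder interface name:
# GRE is IP protocol 47; if nothing appears here during a connection attempt, the data channel isn't passing
sudo tcpdump -ni en0 'ip proto 47'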

    Read the article

< Previous Page | 27 28 29 30 31 32 33 34 35 36  | Next Page >