Search Results

Search found 2872 results on 115 pages for 'packet injection'.

Page 71/115

  • Facing issues in setting up a VPN connection (IKEv1) using an iPhone (default Cisco VPN client) and a strongSwan 4.5.0 server

    - by Kushagra Bhatnagar
    I am facing issues in setting up a VPN connection (IKEv1) between an iPhone (default Cisco VPN client) and a strongSwan 4.5.0 server. The strongSwan server is running on Ubuntu Linux, which is connected to a wifi hotspot. This is the guide which was used. I generated the CA, server and client certificates, with the only difference mentioned below. “While generating the server certificate, the guide uses CN=vpn.strongswan.org; instead of this I changed the CN to CN=192.168.43.212.” Once the certificates were generated, the following files (clientCert.p12 and caCert.pem) were sent to the phone via mail and installed on the iPhone. After installation I noticed that the certificates are also considered trusted. Below are the IP addresses assigned to the various interfaces. Linux server wlan0 interface IP where the server is running: 192.168.43.212. iPhone eth0 interface IP address: 192.168.43.72. The iPhone is attached to the same wifi hotspot. Below is a snapshot of the client configuration. Description Strong swan Server 192.168.43.212 Account ipsecvpn Password ***** Use certificate ON Certificate client The above username and password are in sync with the ipsec.secrets file. I am using the following ipsec.conf configuration: # basic configuration config setup plutodebug=all # crlcheckinterval=600 # strictcrlpolicy=yes # cachecrls=yes nat_traversal=yes # charonstart=yes plutostart=yes # Add connections here. # Sample VPN connections conn ios1 keyexchange=ikev1 authby=xauthrsasig xauth=server left=%defaultroute leftsubnet=0.0.0.0/0 leftfirewall=yes leftcert=serverCert.pem right=192.168.43.72 rightsubnet=10.0.0.0/24 rightsourceip=10.0.0.2 rightcert=clientCert.pem pfs=no auto=add With the above configuration, when I enable the VPN on the iPhone it fails with “Could not verify server certificate”. I ran Wireshark on the Linux server and observed that initially some ISAKMP message exchanges happen between client and server and succeed, but before authorization the client sends an informational message, and soon after this the client shows the error popup “Could not verify server certificate”. I captured logs on the strongSwan server; the errors below appear in the server logs. From auth.log: Apr 25 20:16:08 Linux pluto[4025]: | ISAKMP version: ISAKMP Version 1.0 Apr 25 20:16:08 Linux pluto[4025]: | exchange type: ISAKMP_XCHG_INFO Apr 25 20:16:08 Linux pluto[4025]: | flags: ISAKMP_FLAG_ENCRYPTION Apr 25 20:16:08 Linux pluto[4025]: | message ID: 9d 1a ea 4d Apr 25 20:16:08 Linux pluto[4025]: | length: 76 Apr 25 20:16:08 Linux pluto[4025]: | ICOOKIE: f6 b7 06 b2 b1 84 5b 93 Apr 25 20:16:08 Linux pluto[4025]: | RCOOKIE: 86 92 a0 c2 a6 2f ac be Apr 25 20:16:08 Linux pluto[4025]: | peer: c0 a8 2b 48 Apr 25 20:16:08 Linux pluto[4025]: | state hash entry 8 Apr 25 20:16:08 Linux pluto[4025]: | state object not found Apr 25 20:16:08 Linux pluto[4025]: **packet from 192.168.43.72:500: Informational Exchange is for an unknown (expired?)
SA** Apr 25 20:16:08 Linux pluto[4025]: | next event EVENT_RETRANSMIT in 8 seconds for #8 Apr 25 20:16:16 Linux pluto[4025]: | Apr 25 20:16:16 Linux pluto[4025]: | *time to handle event Apr 25 20:16:16 Linux pluto[4025]: | event after this is EVENT_RETRANSMIT in 2 seconds Apr 25 20:16:16 Linux pluto[4025]: | handling event EVENT_RETRANSMIT for 192.168.43.72 "ios1" #8 Apr 25 20:16:16 Linux pluto[4025]: | sending 76 bytes for EVENT_RETRANSMIT through wlan0 to 192.168.43.72:500: Apr 25 20:16:16 Linux pluto[4025]: | a6 a5 86 41 4b fb ff 99 c9 18 34 61 01 7b f1 d9 Apr 25 20:16:16 Linux pluto[4025]: | 08 10 06 01 e9 1c ea 60 00 00 00 4c ba 7d c8 08 Apr 25 20:16:16 Linux pluto[4025]: | 13 47 95 18 19 31 45 30 2e 22 f9 4d 85 2c 27 bc Apr 25 20:16:16 Linux pluto[4025]: | 9e 9b e1 ae 1e 35 51 6f ab 80 f5 73 3c 15 8d 20 Apr 25 20:16:16 Linux pluto[4025]: | 4b 46 47 86 50 24 3f 13 15 7d d5 17 Apr 25 20:16:16 Linux pluto[4025]: | inserting event EVENT_RETRANSMIT, timeout in 40 seconds for #8 Apr 25 20:16:16 Linux pluto[4025]: | next event EVENT_RETRANSMIT in 2 seconds for #10 Apr 25 20:16:16 Linux pluto[4025]: | rejected packet: Apr 25 20:16:16 Linux pluto[4025]: | Apr 25 20:16:16 Linux pluto[4025]: | control: Apr 25 20:16:16 Linux pluto[4025]: | 30 00 00 00 00 00 00 00 00 00 00 00 0b 00 00 00 Apr 25 20:16:16 Linux pluto[4025]: | 6f 00 00 00 02 03 03 00 00 00 00 00 00 00 00 00 Apr 25 20:16:16 Linux pluto[4025]: | 02 00 00 00 c0 a8 2b 48 00 00 00 00 00 00 00 00 Apr 25 20:16:16 Linux pluto[4025]: | name: Apr 25 20:16:16 Linux pluto[4025]: | 02 00 01 f4 c0 a8 2b 48 00 00 00 00 00 00 00 00 Apr 25 20:16:16 Linux pluto[4025]: **ERROR: asynchronous network error report on wlan0 for message to 192.168.43.72 port 500, complainant 192.168.43.72: Connection refused [errno 111, origin ICMP type 3 code 3 (not authenticated)]** Can anybody please provide some insight into this error and how to solve this issue?
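
    A frequent cause of this symptom with the iOS client is a server certificate that carries the dialed address only in the CN and not in a subjectAltName, which the iPhone checks against the "Server" field. A sketch of one thing worth trying, assuming the ipsec pki tool shipped with strongSwan 4.x (option names may differ between versions), is to reissue the server certificate with the IP address in a SAN:

        # Sketch: reissue the server certificate so that the address the iPhone
        # dials (192.168.43.212) appears in a subjectAltName as well as the CN.
        # File names follow the guide's conventions and are assumptions.
        ipsec pki --gen > serverKey.der
        ipsec pki --pub --in serverKey.der | \
            ipsec pki --issue --cacert caCert.der --cakey caKey.der \
                --dn "C=CH, O=strongSwan, CN=192.168.43.212" \
                --san 192.168.43.212 --flag serverAuth > serverCert.der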

    Read the article

  • Server currently under DDoS, not sure what to do

    - by Volex
    My web server is currently under what I believe is a DDoS attack; the messages log is full of these kinds of messages: May 13 15:51:19 kernel: nf_conntrack: table full, dropping packet. May 13 15:51:19 last message repeated 9 times May 13 15:51:24 kernel: __ratelimit: 78 callbacks suppressed May 13 15:51:24 kernel: nf_conntrack: table full, dropping packet. May 13 15:52:06 kernel: possible SYN flooding on port 80. Sending cookies. and netstat shows a huge number of connections like the following: tcp 0 0 my.host.com:http bb176da0.virtua.com.br:4998 SYN_RECV tcp 0 0 my.host.com:http 187.0.43.109:2694 SYN_RECV tcp 0 0 my.host.com:http 109.229.4.145:1722 SYN_RECV tcp 0 0 my.host.com:http 189-84-163-244.sodobr:63267 SYN_RECV tcp 0 0 my.host.com:http bd66839d.virtua.com.br:3469 SYN_RECV tcp 0 0 my.host.com:http 69.101.56.190.dsl.int:52552 SYN_RECV tcp 0 0 my.host.com:http pc-62-230-47-190.cm.vt:2262 SYN_RECV tcp 0 0 my.host.com:http 189-84-163-244.sodobr:63418 SYN_RECV tcp 0 0 my.host.com:http pc-62-230-47-190.cm.vt:1741 SYN_RECV tcp 0 0 my.host.com:http zaq3d739320.zaq.ne.jp:2141 SYN_RECV tcp 0 0 my.host.com:http netacc-gpn-4-80-73.po:52676 SYN_RECV tcpdump shows: 17:11:08.564510 IP 187-4-1xx-4.xxx.ipd.brasiltelecom.net.br.54821 > my.host.com.http: S 999692166:999692166(0) win 65535 <mss 1452,nop,nop,sackOK> 17:11:08.566347 IP 114-44-171-67.dynamic.hinet.net.1129 > my.host.com.http: S 605369055:605369055(0) win 65535 <mss 1440,nop,nop,sackOK> 17:11:08.570210 IP 200-101-13-130.pvoce300.ipd.brasiltelecom.net.br.5590 > my.host.com.http: S 2813379182:2813379182(0) win 16384 <mss 1460,nop,nop,sackOK> 17:11:08.571290 IP dsl-189-143-30-99-dyn.prod-infinitum.com.mx.1615 > my.host.com.http: S 281542700:281542700(0) win 65535 <mss 1452,nop,nop,sackOK> 17:11:08.583847 IP dsl-189-143-30-99-dyn.prod-infinitum.com.mx.1617 > my.host.com.http: S 499413892:499413892(0) win 65535 <mss 1452,nop,nop,sackOK> 17:11:08.588680 IP 170.51.229.112.2569 > my.host.com.http: S 2195084898:2195084898(0) win 65535 <mss 1460,nop,nop,sackOK> 17:11:08.588773 IP gw2-1.211.ru.3180 > my.host.com.http: F 2315901786:2315901786(0) ack 2620913033 win 64240 17:11:08.590656 IP 200-101-13-130.pvoce300.ipd.brasiltelecom.net.br.5614 > my.host.com.http: S 2813715032:2813715032(0) win 16384 <mss 1460,nop,nop,sackOK> 17:11:08.591212 IP 203.82.82.54.15848 > my.host.com.http: S 4070423507:4070423507(0) win 16384 <mss 1400,nop,nop,sackOK> 17:11:08.591254 IP 203.82.82.54.2545 > my.host.com.http: S 1790910784:1790910784(0) win 16384 <mss 1400,nop,nop,sackOK> 17:11:08.591289 IP 203.82.82.54.28306 > my.host.com.http: S 578615626:578615626(0) win 16384 <mss 1400,nop,nop,sackOK> 17:11:08.591591 IP gw2-1.211.ru.3191 > my.host.com.http: F 2316435991:2316435991(0) ack 2634205972 win 64240 17:11:08.591790 IP 200-101-13-130.pvoce300.ipd.brasiltelecom.net.br.5593 > my.host.com.http: S 2813659017:2813659017(0) win 16384 <mss 1460,nop,nop,sackOK> 17:11:08.593691 IP gw2-1.211.ru.3203 > my.host.com.http: F 2316834420:2316834420(0) ack 2629074987 win 64240 I'm not sure what I can do to limit or mitigate this; currently no web pages are being served. Any help gratefully appreciated.
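
    Until the flood subsides, a few kernel and firewall knobs usually buy breathing room against exactly the two symptoms in these logs (conntrack table exhaustion and SYN flooding). A sketch, with illustrative rather than tuned values:

        # Keep answering connections even when the SYN backlog is full
        sysctl -w net.ipv4.tcp_syncookies=1
        # Allow more half-open connections and give up on dead peers sooner
        sysctl -w net.ipv4.tcp_max_syn_backlog=4096
        sysctl -w net.ipv4.tcp_synack_retries=2
        # Grow the conntrack table that is currently overflowing
        sysctl -w net.netfilter.nf_conntrack_max=262144
        # Or stop tracking port-80 traffic altogether so conntrack cannot fill up
        iptables -t raw -A PREROUTING -p tcp --dport 80 -j NOTRACK
        iptables -t raw -A OUTPUT -p tcp --sport 80 -j NOTRACK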

    Read the article

  • TCP stops sending weirdly.

    - by Utoah
    To find out the cause of TCP retransmits on my Linux (RHEL, kernel 2.6.18) servers connected to the same switch, I had a client-server pair send "Hello" to each other every 200us and captured the packets with tcpdump on the client machine. The commands I used to mimic the client and server are: while [ 0 ]; do echo "Hello"; usleep 200; done | nc server 18510 while [ 0 ]; do echo "Hello"; usleep 200; done | nc -l 18510 When the server machine was busy serving some other requests, the client occasionally suffered from abrupt retransmits. But the tcpdump output seemed to make no sense. 16:04:58.898970 IP server.18510 > client.34533: P 4531:4537(6) ack 3204 win 123 <nop,nop,timestamp 1923778643 3452833828> 16:04:58.901797 IP client.34533 > server.18510: P 3204:3210(6) ack 4537 win 33 <nop,nop,timestamp 3452833831 1923778643> 16:04:58.901855 IP server.18510 > client.34533: P 4537:4549(12) ack 3210 win 123 <nop,nop,timestamp 1923778646 3452833831> 16:04:58.903871 IP client.34533 > server.18510: P 3210:3216(6) ack 4549 win 33 <nop,nop,timestamp 3452833833 1923778646> 16:04:58.903950 IP server.18510 > client.34533: P 4549:4555(6) ack 3216 win 123 <nop,nop,timestamp 1923778648 3452833833> 16:04:58.905796 IP client.34533 > server.18510: P 3216:3222(6) ack 4555 win 33 <nop,nop,timestamp 3452833835 1923778648> 16:04:58.905860 IP server.18510 > client.34533: P 4555:4561(6) ack 3222 win 123 <nop,nop,timestamp 1923778650 3452833835> 16:04:58.908903 IP client.34533 > server.18510: P 3222:3228(6) ack 4561 win 33 <nop,nop,timestamp 3452833838 1923778650> 16:04:58.908966 IP server.18510 > client.34533: P 4561:4567(6) ack 3228 win 123 <nop,nop,timestamp 1923778653 3452833838> 16:04:58.911855 IP client.34533 > server.18510: P 3228:3234(6) ack 4567 win 33 <nop,nop,timestamp 3452833841 1923778653> 16:04:59.112573 IP client.34533 > server.18510: P 3228:3234(6) ack 4567 win 33 <nop,nop,timestamp 3452834042 1923778653> 16:04:59.112648 IP server.18510 > client.34533: P 4567:5161(594) ack 3234 win 123 <nop,nop,timestamp 1923778857 3452834042> 16:04:59.112659 IP client.34533 > server.18510: P 3234:3672(438) ack 5161 win 35 <nop,nop,timestamp 3452834042 1923778857> 16:04:59.114427 IP server.18510 > client.34533: P 5161:5167(6) ack 3672 win 126 <nop,nop,timestamp 1923778858 3452834042> 16:04:59.114439 IP client.34533 > server.18510: P 3672:3678(6) ack 5167 win 35 <nop,nop,timestamp 3452834044 1923778858> 16:04:59.116435 IP server.18510 > client.34533: P 5167:5173(6) ack 3678 win 126 <nop,nop,timestamp 1923778860 3452834044> 16:04:59.116444 IP client.34533 > server.18510: P 3678:3684(6) ack 5173 win 35 <nop,nop,timestamp 3452834046 1923778860> Packet 3228:3234(6) from the client was retransmitted due to an ACK timeout. What I could not understand was that the client machine did not send out any packets after the first 3228:3234(6) packet was sent. The server machine had advertised a (scaled) window large enough. The data transfer up to the retransmit was fine, which meant no slow start should have been in action. What can cause the client machine to stop sending until the packet timed out? BTW, I am unable to run tcpdump on the server machine.
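
    One way to narrow this down without tcpdump on the server is to watch whether the sending side is blocked in the application or in TCP. A sketch of the idea (the pgrep pattern is an assumption; match it to the actual nc process):

        # If write() keeps succeeding while tcpdump shows silence, TCP itself is
        # holding the data back; if the pipeline blocks in write()/select(),
        # the stall is on the application side of the socket.
        strace -tt -e trace=write,select,poll -p "$(pgrep -f 'nc server 18510')"
        # The Send-Q column shows bytes accepted from the app but not yet ACKed
        netstat -tanp | grep 18510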

    Read the article

  • Adding the New HTML Editor Extender to a Web Forms Application using NuGet

    - by Stephen Walther
    The July 2011 release of the Ajax Control Toolkit includes a new, lightweight, HTML5-compatible HTML Editor extender. In this blog entry, I explain how you can take advantage of NuGet to quickly add the new HTML Editor control extender to a new or existing ASP.NET Web Forms application. Installing the Latest Version of the Ajax Control Toolkit with NuGet NuGet is a package manager. It enables you to quickly install new software directly from within Visual Studio 2010. You can use NuGet to install additional software when building any type of .NET application including ASP.NET Web Forms and ASP.NET MVC applications. If you have not already installed NuGet then you can install it by navigating to the following address and clicking the giant install button: http://nuget.org/ After you install NuGet, you can add the Ajax Control Toolkit to a new or existing ASP.NET Web Forms application by selecting the Visual Studio menu option Tools, Library Package Manager, Package Manager Console: Selecting this menu option opens the Package Manager Console. You can enter the command Install-Package AjaxControlToolkit in the console to install the Ajax Control Toolkit: After you install the Ajax Control Toolkit with NuGet, your application will include an assembly reference to the AjaxControlToolkit.dll and SanitizerProviders.dll assemblies: Furthermore, your Web.config file will be updated to contain a new tag prefix for the Ajax Control Toolkit controls: <configuration> <system.web> <compilation debug="true" targetFramework="4.0" /> <pages> <controls> <add tagPrefix="ajaxToolkit" assembly="AjaxControlToolkit" namespace="AjaxControlToolkit" /> </controls> </pages> </system.web> </configuration> The configuration file installed by NuGet adds the prefix ajaxToolkit for all of the Ajax Control Toolkit controls. Typing ajaxToolkit: in Source view gives you auto-complete. You can, of course, change this prefix to anything you want. Using the HTML Editor Extender After you install the Ajax Control Toolkit, you can use the HTML Editor Extender with the standard ASP.NET TextBox control to enable users to enter rich formatting such as bold, underline, italic, different fonts, and different background and foreground colors. For example, the following page can be used for entering comments. The page contains a standard ASP.NET TextBox, Button, and Label control. When you click the button, any text entered into the TextBox is displayed in the Label control. It is a pretty boring page: Let’s make this page fancier by extending the standard ASP.NET TextBox with the HTML Editor extender control: Notice that the ASP.NET TextBox now has a toolbar which includes buttons for performing various kinds of formatting. For example, you can change the size and font used for the text. You also can change the foreground and background color – and make many other formatting changes. You can customize the toolbar buttons which the HTML Editor extender displays. 
    To learn how to customize the toolbar, see the HTML Editor Extender sample page here: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/HTMLEditorExtender/HTMLEditorExtender.aspx Here’s the source code for the ASP.NET page: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="WebApplication1.Default" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title>Add Comments</title> </head> <body> <form id="form1" runat="server"> <div> <ajaxToolkit:ToolkitScriptManager ID="TSM1" runat="server" /> <asp:TextBox ID="txtComments" TextMode="MultiLine" Columns="50" Rows="8" Runat="server" /> <ajaxToolkit:HtmlEditorExtender ID="hee" TargetControlID="txtComments" Runat="server" /> <br /><br /> <asp:Button ID="btnSubmit" Text="Add Comment" Runat="server" onclick="btnSubmit_Click" /> <hr /> <asp:Label ID="lblComment" Runat="server" /> </div> </form> </body> </html> Notice that the page above contains 5 controls. The page contains a standard ASP.NET TextBox, Button, and Label control. However, the page also contains an Ajax Control Toolkit ToolkitScriptManager control and HtmlEditorExtender control. The HTML Editor extender control extends the standard ASP.NET TextBox control. The HTML Editor extender's TargetControlID attribute points at the TextBox control. Here’s the code-behind for the page above:   using System; namespace WebApplication1 { public partial class Default : System.Web.UI.Page { protected void btnSubmit_Click(object sender, EventArgs e) { lblComment.Text = txtComments.Text; } } }   Preventing XSS/JavaScript Injection Attacks If you use an HTML Editor -- any HTML Editor -- in a public-facing web page then you are opening your website up to Cross-Site Scripting (XSS) attacks. An evil hacker could submit HTML using the HTML Editor which contains JavaScript that steals private information such as other users’ passwords. Imagine, for example, that you create a web page which enables your customers to post comments about your website. Furthermore, imagine that you decide to redisplay the comments so every user can see them. In that case, a malicious user could submit JavaScript which displays a dialog asking for a user name and password. When an unsuspecting customer enters their secret password, the script could transfer the password to the hacker’s website. So how do you accept HTML content without opening your website up to JavaScript injection attacks? The Ajax Control Toolkit HTML Editor supports the Anti-XSS library. You can use the Anti-XSS library to sanitize any HTML content. The Anti-XSS library, for example, strips away all JavaScript automatically. You can download the Anti-XSS library from NuGet. Open the Package Manager Console and execute the command Install-Package AntiXSS: Adding the Anti-XSS library to your application adds two assemblies to your application named AntiXssLibrary.dll and HtmlSanitizationLibrary.dll. 
    After you install the Anti-XSS library, you can configure the HTML Editor extender to use the Anti-XSS library in your application’s web.config file: <?xml version="1.0" encoding="utf-8"?> <configuration> <configSections> <sectionGroup name="system.web"> <section name="sanitizer" requirePermission="false" type="AjaxControlToolkit.Sanitizer.ProviderSanitizerSection, AjaxControlToolkit"/> </sectionGroup> </configSections> <system.web> <sanitizer defaultProvider="AntiXssSanitizerProvider"> <providers> <add name="AntiXssSanitizerProvider" type="AjaxControlToolkit.Sanitizer.AntiXssSanitizerProvider"></add> </providers> </sanitizer> <compilation debug="true" targetFramework="4.0" /> <pages> <controls> <add tagPrefix="ajaxToolkit" assembly="AjaxControlToolkit" namespace="AjaxControlToolkit" /> </controls> </pages> </system.web> </configuration> Summary In this blog entry, I described how you can quickly get started using the new HTML Editor extender – included with the July 2011 release of the Ajax Control Toolkit – by installing the Ajax Control Toolkit with NuGet. If you want to learn more about the HTML Editor then please take a look at the Ajax Control Toolkit sample site: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/HTMLEditorExtender/HTMLEditorExtender.aspx

    Read the article

  • CodePlex Daily Summary for Thursday, March 11, 2010

    New Projects:
    - ASP.NET Wiki Control: This ASP.NET user control allows you to embed a very useful wiki directly into your already existing ASP.NET website taking advantage of the popula...
    - BabyLog: Log baby daily activity.
    - buddyHome: buddyHome is a project that can make your home smarter. as good as your buddy.
    - Cloud Community: Cloud Community makes it easier for organizations to have a simple to use community platform. Our mission is to create an easy to use community pl...
    - Community Connectors for Microsoft CRM 4.0: Community Connectors for Microsoft CRM 4.0 allows Microsoft CRM 4.0 customers and partners to monitor and analyze customers’ interaction from their...
    - Console Highlighter: Highlights the Microsoft Windows Command prompt (cmd.exe) by outputting ANSI VT100 control sequences to color the output. These sequences are not hand...
    - Cornell Store: This is IN NO WAY officially affiliated or related to the Cornell University store. Instead, this is a project that I am doing for a class. Ther...
    - DevUtilities: This project is for creating some utility tools, and they will be useful during development.
    - DotNetNuke® Skin Maple: A DotNetNuke Design Challenge skin package submitted to the "Personal" category by DyNNamite.co.uk. The package includes 4 color variations and sev...
    - HRNet: HRNet
    - IIS Web Site Monitoring: Software for monitoring a particular web site on IIS, even if its IP is shared between different web sites.
    - Iowa Code Camp: The source code for the Iowa Code Camp website.
    - Leonidas: Leonidas is a virtual tutor
    - Lunch 'n Learn: The Lunch 'n Learn web application is an open source ASP.NET MVC application that allows you to setup lunch 'n learn presentations for your team, c...
    - MNT Cryptography: A very simple cryptography class
    - MooiNooi MVC2LINQ2SQL Web Databinder: mvc2linq2sql is a databinder for ASP.NET MVC that enables developers to cleanly bind objects from HTML forms to LINQ entities. Even 1 to N relations ...
    - MoqBot: MoqBot is an auto mocking library for Moq and Ninject.
    - mtExperience1: hoi
    - MvcPager: MvcPager is a free paging component for ASP.NET MVC web applications; it exposes a series of extension methods for use in ASP.NET MVC applications...
    - OCal: OCal is based on object calisthenics to identify code smells
    - Pex Custom Arithmetic Solver: Pex Custom Arithmetic Solver contains a collection of meta-heuristic search algorithms. The goal is to improve Pex's code coverage for code involvi...
    - SetControls: Extended controls for ASP.NET applications. Full information closer to release...
    - shadowrage1597: CTC 195 Game Design class
    - SharePoint Team-Mailer: A SharePoint 2007 solution that defines a generic CustomList for sending e-mails to SharePoint Groups.
    - Sql Share: SQL Share is a collaboration tool used within the sciences to allow database engineers to work tightly with domain scientists.
    - TechCalendar: Tech Events Calendar ASP.NET project.
    - ZLYScript: A very simple script language compiler.
    New Releases:
    - ALGLIB: ALGLIB 2.4.0: The new ALGLIB release contains improved versions of several linear algebra algorithms: QR decomposition, matrix inversion, condition number estimatio...
    - AmiBroker Plug-Ins with C#: AmiBroker Plug-Ins v0.0.2: Source code and a binary
    - AppFabric Caching UI Admin Tool: AppFabric Caching Beta 2 UI Admin Tool: System Requirements: .NET 4.0 RC, AppFabric Caching Beta2. Tested on: Win 7 (64x)
    - Autodocs - WCF REST Automatic API Documentation Generator: Autodocs.ServiceModel.Web: This archive contains the reference DLL, instructions and license.
    - Compact Plugs & Compact Injection: Compact Injection and Compact Plugs 1.1 Beta: First release of Compact Plugs (CP). The solution includes a simple example project of CP, called "TestCompactPlugs1". Also some fixes were made ...
    - Console Highlighter: Console Highlighter 0.9 (preview release): Preliminary release.
    - Encrypted Notes: Encrypted Notes 1.3: This is the latest version of Encrypted Notes (1.3). It has an installer - it will create a directory 'CPascoe' in My Documents. The last one was ...
    - Family Tree Analyzer: Version 1.0.2: Family Tree Analyzer Version 1.0.2. This early beta version implements loading a gedcom file and displaying some basic reports. These reports inclu...
    - FRC1103 - FRC Dashboard viewer: 2010 Documentation v0.1: This is my current version of the control system documentation for 2010. It isn't complete, but it has the information required for a custom dashbo...
    - jQuery.cssLess: jQuery.cssLess 0.5 (Even less release): NEW - support for nested special CSS classes (like :hover). MAIN RELEASE. This release, code "Even less", is the one that will interpret cssLess wit...
    - MooiNooi MVC2LINQ2SQL Web Databinder: MooiNooi MVC2LINQ2SQL DataBinder: I didn't try this... I just took it off from my project. Please tell me about any problem implementing it in your own development and I'll be pleased to h...
    - MvcPager: MvcPager 1.2 for ASP.NET MVC 1.0: MvcPager 1.2 for ASP.NET MVC 1.0
    - Mytrip.Mvc: Mytrip 1.0 preview 1: Article Manager, Blog Manager, L2S Membership (.NET Framework 3.5), EF Membership (.NET Framework 4), User Manager, File Manager, Localization, Captcha ...
    - NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel 2007 Template, version 1.0.1.117: The NodeXL Excel 2007 template displays a network graph using edge and vertex lists stored in an Excel 2007 workbook. What's New: This version adds ...
    - Pex Custom Arithmetic Solver: PexCustomArithmeticSolver: This is the alpha release containing the Alternating Variable Method and Evolution Strategies to try and solve constraints over floating point vari...
    - Scrum Sprint Monitor: v1.0.0.44877: What is new in this release? Major performance increase in animations (up to 50 fps from 2 fps) by replacing the DropShadow effect with png bitmaps; ...
    - sELedit: sELedit v1.0b: + Added support for empty strings / wstrings + Fixed: critical bug in configuration files (list 53)
    - sPWadmin: pwAdmin v0.9_nightly: + Fixed: XML editor can now open and save character templates + Added: PWI item name database + Added: Plugin Support
    - TechCalendar: Events Calendar v.1.0: Initial release.
    - The Silverlight Hyper Video Player [http://slhvp.com]: Beta 2: Beta 2.0. Some fixes from Beta 1, and a couple of small enhancements. Intensive testing continues, and I will continue to update the code at least ever...
    - ThreadSafe.Caching: 2010.03.10.1: Updates to the scavenging behaviour since the last release. Scavenging will now occur every 30 seconds by default and all objects in the cache will be ...
    - VCC: Latest build, v2.1.30310.0: Automatic drop of latest build
    - Visual Studio DSite: Email Sender (C++): The same Email Sender program that I made, but in Visual C++ 2008 instead of Visual Basic 2008.
    - Web Forms MVP: Web Forms MVP CTP7: The release can be considered stable, and is in use behind several high-traffic, public websites. It has been marked as a CTP release as it is not ...
    - White Tiger: 0.0.3.1: Now you can load or create files with whatever root element you want *checks for/sets file permissions
    Most Popular Projects: MetaSharp, WBFS Manager, Rawr, AJAX Control Toolkit, Microsoft SQL Server Product Samples: Database, Silverlight Toolkit, Windows Presentation Foundation (WPF), ASP.NET, Microsoft SQL Server Community & Samples, ASP.NET Ajax Library
    Most Active Projects: Umbraco CMS, Rawr, SDS: Scientific DataSet library and tools, N2 CMS, Fasterflect - A Fast and Simple Reflection API, jQuery Library for SharePoint Web Services, BlogEngine.NET, Farseer Physics Engine, patterns & practices – Enterprise Library, Caliburn: An Application Framework for WPF and Silverlight

    Read the article

  • PPTP connection disconnect

    - by Vladimir Franciz S. Blando
    My PPTP connection won't stay connected; it disconnects in less than a minute. Here are some relevant log entries: May 31 13:32:31 localhost NetworkManager[931]: <info> Starting VPN service 'pptp'... May 31 13:32:31 localhost NetworkManager[931]: <info> VPN service 'pptp' started (org.freedesktop.NetworkManager.pptp), PID 15216 May 31 13:32:31 localhost NetworkManager[931]: <info> VPN service 'pptp' appeared; activating connections May 31 13:32:31 localhost NetworkManager[931]: <info> VPN plugin state changed: init (1) May 31 13:32:31 localhost NetworkManager[931]: <info> VPN plugin state changed: starting (3) May 31 13:32:31 localhost NetworkManager[931]: <info> VPN connection 'Dynalabs' (Connect) reply received. May 31 13:32:31 localhost pppd[15221]: Plugin /usr/lib/pppd/2.4.5/nm-pptp-pppd-plugin.so loaded. May 31 13:32:31 localhost pppd[15221]: pppd 2.4.5 started by root, uid 0 May 31 13:32:31 localhost pptp[15224]: nm-pptp-service-15216 log[main:pptp.c:314]: The synchronous pptp option is NOT activated May 31 13:32:31 localhost pppd[15221]: Using interface ppp0 May 31 13:32:31 localhost pppd[15221]: Connect: ppp0 <--> /dev/pts/5 May 31 13:32:31 localhost NetworkManager[931]: SCPlugin-Ifupdown: devices added (path: /sys/devices/virtual/net/ppp0, iface: ppp0) May 31 13:32:31 localhost NetworkManager[931]: SCPlugin-Ifupdown: device added (path: /sys/devices/virtual/net/ppp0, iface: ppp0): no ifupdown configuration found. May 31 13:32:32 localhost pptp[15235]: nm-pptp-service-15216 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 1 'Start-Control-Connection-Request' May 31 13:32:32 localhost pptp[15235]: nm-pptp-service-15216 log[ctrlp_disp:pptp_ctrl.c:739]: Received Start Control Connection Reply May 31 13:32:32 localhost pptp[15235]: nm-pptp-service-15216 log[ctrlp_disp:pptp_ctrl.c:773]: Client connection established. May 31 13:32:33 localhost pptp[15235]: nm-pptp-service-15216 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 7 'Outgoing-Call-Request' May 31 13:32:34 localhost pptp[15235]: nm-pptp-service-15216 log[ctrlp_disp:pptp_ctrl.c:858]: Received Outgoing Call Reply. May 31 13:32:34 localhost pptp[15235]: nm-pptp-service-15216 log[ctrlp_disp:pptp_ctrl.c:897]: Outgoing call established (call ID 0, peer's call ID 1536). May 31 13:32:37 localhost pppd[15221]: CHAP authentication succeeded May 31 13:32:37 localhost kernel: [54007.078553] PPP MPPE Compression module registered May 31 13:32:40 localhost pppd[15221]: MPPE 128-bit stateless compression enabled May 31 13:32:42 localhost pppd[15221]: local IP address 10.100.0.52 May 31 13:32:42 localhost pppd[15221]: remote IP address 10.100.0.1 May 31 13:32:42 localhost pppd[15221]: primary DNS address 4.2.2.1 May 31 13:32:42 localhost pppd[15221]: secondary DNS address 255.255.255.255 May 31 13:32:42 localhost NetworkManager[931]: <info> VPN connection 'Dynalabs' (IP Config Get) reply received. 
May 31 13:32:42 localhost NetworkManager[931]: <info> VPN Gateway: 103.28.219.2 May 31 13:32:42 localhost NetworkManager[931]: <info> Tunnel Device: ppp0 May 31 13:32:42 localhost NetworkManager[931]: <info> Internal IP4 Address: 10.100.0.52 May 31 13:32:42 localhost NetworkManager[931]: <info> Internal IP4 Prefix: 32 May 31 13:32:42 localhost NetworkManager[931]: <info> Internal IP4 Point-to-Point Address: 10.100.0.1 May 31 13:32:42 localhost NetworkManager[931]: <info> Maximum Segment Size (MSS): 0 May 31 13:32:42 localhost NetworkManager[931]: <info> Forbid Default Route: no May 31 13:32:42 localhost NetworkManager[931]: <info> Internal IP4 DNS: 4.2.2.1 May 31 13:32:42 localhost NetworkManager[931]: <info> Internal IP4 DNS: 255.255.255.255 May 31 13:32:42 localhost NetworkManager[931]: <info> DNS Domain: '(none)' May 31 13:32:43 localhost dnsmasq[2127]: exiting on receipt of SIGTERM May 31 13:32:43 localhost NetworkManager[931]: <info> DNS: starting dnsmasq... May 31 13:32:43 localhost NetworkManager[931]: <info> (ppp0): writing resolv.conf to /sbin/resolvconf May 31 13:32:43 localhost dnsmasq[15290]: error at line 2 of /var/run/nm-dns-dnsmasq.conf May 31 13:32:43 localhost dnsmasq[15290]: FAILED to start up May 31 13:32:43 localhost NetworkManager[931]: <info> VPN connection 'Dynalabs' (IP Config Get) complete. May 31 13:32:43 localhost NetworkManager[931]: <info> Policy set 'Dynalabs' (ppp0) as default for IPv4 routing and DNS. May 31 13:32:43 localhost NetworkManager[931]: <info> VPN plugin state changed: started (4) May 31 13:32:43 localhost NetworkManager[931]: <warn> dnsmasq exited with error: Configuration problem (1) May 31 13:32:43 localhost NetworkManager[931]: <info> (ppp0): writing resolv.conf to /sbin/resolvconf May 31 13:32:43 localhost dbus[872]: [system] Activating service name='org.freedesktop.nm_dispatcher' (using servicehelper) May 31 13:32:43 localhost dbus[872]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher' May 31 13:33:00 localhost ntpdate[15370]: step time server 91.189.94.4 offset -1.110301 sec May 31 13:33:21 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xd6d6 May 31 13:33:21 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x93aa May 31 13:33:21 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xcc83 May 31 13:33:21 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x2031 May 31 13:33:21 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x13d4 May 31 13:33:22 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x5b11 May 31 13:33:22 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x414b May 31 13:33:22 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x2f5f May 31 13:33:22 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xe9ff May 31 13:33:23 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x8e20 May 31 13:33:23 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x8f0 May 31 13:33:23 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xf166 May 31 13:33:23 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x36e6 May 31 13:33:23 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xdd19 May 31 13:33:23 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xda26 May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xac5 May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 
0x53a5 May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x507e May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x1dc5 May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xf87b May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x2f27 May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xd10c May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x66ef May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xa294 May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xb15 May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x52a2 May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xd863 May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x8a96 May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xde19 May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x9763 May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xb23 May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x83ca May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x964e May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xe8ae May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xf614 May 31 13:33:25 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x9b1 May 31 13:33:25 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xf086 May 31 13:33:25 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xbff4 May 31 13:33:25 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x66c5 May 31 13:33:25 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xe42 May 31 13:33:25 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xf295 May 31 13:33:25 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x86fe May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x3bc1 May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xbaad May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x88b5 May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xd7a May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x30d5 May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x2d8f May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x3933 May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x8d42 May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x4b4 May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xa205 May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x7cc5 May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x1b6a May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xf004 May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x21b6 May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x51eb
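
    Two things stand out in this log: dnsmasq aborts on the NetworkManager-generated config (the pushed secondary DNS 255.255.255.255 is not a valid resolver address), and the storm of Protocol-Rejects for random protocol numbers, which typically means the peers' MPPE state has fallen out of sync so decrypted frames arrive as garbage. A sketch of where to start looking (the connection name comes from the log; the nmcli syntax assumes a reasonably recent NetworkManager, while older versions expose the same settings in the GUI connection editor):

        # Inspect the line dnsmasq rejects
        cat /var/run/nm-dns-dnsmasq.conf
        # Ignore the VPN-pushed DNS and pin a sane resolver instead
        nmcli con mod Dynalabs ipv4.ignore-auto-dns yes ipv4.dns 4.2.2.1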

    Read the article

  • Integrating Coherence & Java EE 6 Applications using ActiveCache

    - by Ricardo Ferreira
    OK, so you are a developer and are starting a new Java EE 6 application using the most wonderful features of the Java EE platform like Enterprise JavaBeans, JavaServer Faces, CDI, JPA and other cool technologies. And your architecture needs to hold pieces of data in distributed caches to improve the application's performance, scalability and reliability? If this is the scenario you are facing, maybe you should look closely at the solutions provided by Oracle WebLogic Server. Oracle has integrated WebLogic Server with its champion data caching technology, Oracle Coherence. This seamless integration between these two products provides a comprehensive environment to develop applications without the complexity of extra Java code to manage the cache as a dependency, since Oracle provides a DI ("Dependency Injection") mechanism for Coherence, the same DI mechanism available in standard Java EE applications. This feature is called ActiveCache. In this article, I will show you how to configure ActiveCache in WebLogic and in your Java EE application. Configuring WebLogic to manage Coherence Before you start changing your application to use Coherence, you need to configure your Coherence distributed cache. The good news is, you can manage all this stuff without writing a single line of XML or even Java code. This configuration can be done entirely in the WebLogic administration console. The first thing to do is the setup of a Coherence cluster. A Coherence cluster is a set of Coherence JVMs configured to form one single view of the cache. This means that you can insert or remove members of the cluster without the client application (the application that produces or consumes data from the cache) knowing about the changes. This concept allows your solution to scale out without changing the application server JVMs. You can grow your application in the data grid layer alone. To start the configuration, you need to configure a machine that points to the server on which you want to execute the Coherence JVMs. WebLogic Server allows you to do this very easily using the Administration Console. In this example, I will call the machine "coherence-server". Remember that in order for the machine concept to work, you need to ensure that the NodeManager is running on the target server that the machine points to. The NodeManager executable can be found in <WLS_HOME>/server/bin/startNodeManager.sh. The next thing to do is to configure a Coherence cluster. In the WebLogic administration console, go to Environment > Coherence Clusters and click "New". Call this Coherence cluster "my-coherence-cluster". Click Next. Specify a valid cluster address and port. The Coherence members will communicate with each other through this address and port. Our Coherence cluster is now configured. Now it is time to configure the Coherence members and add them to this cluster. In the WebLogic administration console, go to Environment > Coherence Servers and click "New". In the field "Name", enter "coh-server-1". In the field "Machine", associate this Coherence server with the machine "coherence-server". In the field "Cluster", associate this Coherence server with the cluster named "my-coherence-cluster". Click "Finish". Start the Coherence server using the "Control" tab of the WebLogic administration console. This will instruct WebLogic to start a new Coherence JVM on the target machine, which should join the pre-defined Coherence cluster. 
    Configuring your Java EE Application to Access Coherence Now let's move on to the fun part of the configuration. The first thing to do is to tell your Java EE application which Coherence cluster to join. Oracle has updated the WebLogic Server deployment descriptors, so you will not have to change your code or the container's deployment descriptors like application.xml, ejb-jar.xml or web.xml. In this example, I will show you how to enable DI ("Dependency Injection") of a Coherence cache in a Servlet 3.0 component. In the WEB-INF/weblogic.xml deployment descriptor, put the following metadata information: <?xml version="1.0" encoding="UTF-8"?> <wls:weblogic-web-app xmlns:wls="http://xmlns.oracle.com/weblogic/weblogic-web-app" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd http://xmlns.oracle.com/weblogic/weblogic-web-app http://xmlns.oracle.com/weblogic/weblogic-web-app/1.4/weblogic-web-app.xsd"> <wls:context-root>myWebApp</wls:context-root> <wls:coherence-cluster-ref> <wls:coherence-cluster-name>my-coherence-cluster</wls:coherence-cluster-name> </wls:coherence-cluster-ref> </wls:weblogic-web-app> As you can see, using the "coherence-cluster-name" tag, we are informing our Java EE application that it should join "my-coherence-cluster" when it loads in the web container. Without this information, the application will not be able to access the predefined Coherence cluster. It will form its own Coherence cluster without any members. So never forget to put this information in. Now put the coherence.jar and active-cache-1.0.jar dependencies in your WEB-INF/lib application classpath. You need to deploy these dependencies so ActiveCache can automatically take care of the Coherence cluster join phase. These dependencies can be found in the following locations: - <WLS_HOME>/common/deployable-libraries/active-cache-1.0.jar - <COHERENCE_HOME>/lib/coherence.jar Finally, you need to write the code that accesses the Coherence cache in your Servlet. In the following example, we have a Servlet 3.0 component that accesses a Coherence cache named "transactions" and prints to the browser output the content (the ammount property) of one specific transaction. package com.oracle.coherence.demo.activecache; import java.io.IOException; import javax.annotation.Resource; import javax.servlet.ServletException; import javax.servlet.annotation.WebServlet; import javax.servlet.http.HttpServlet; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import com.tangosol.net.NamedCache; @WebServlet("/demo/specificTransaction") public class TransactionServletExample extends HttpServlet { @Resource(mappedName = "transactions") NamedCache transactions; protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { int transId = Integer.parseInt(request.getParameter("transId")); Transaction transaction = (Transaction) transactions.get(transId); response.getWriter().println("<center>" + transaction.getAmmount() + "</center>"); } } That's it! No more configuration is necessary and you are all set to start putting data into and getting data out of Coherence. As you can see in the example code, the Coherence cache is treated as a normal dependency in the Java EE container. The magic happens behind the scenes when ActiveCache lets your application join the defined Coherence cluster. 
    The most interesting thing about this approach is that no matter which type of Coherence cache you are using (Distributed, Partitioned, Replicated, WAN-Remote), for the client application it is just a simple attribute member of the com.tangosol.net.NamedCache type. And it's all managed by the Java EE container as a dependency. This means that if you inject the same dependency (the Coherence cache named "transactions") in another Java EE component (a JSF managed bean, a stateless EJB), the cache will be the same. Cool, isn't it? Thanks to CDI technology, we can extend the same support to non-Java EE standard components like simple POJOs. This means that you are not forced to use only Servlets, EJBs or JSF in order to inject Coherence caches. You can take the same approach with regular POJOs created and managed by lightweight containers like Spring or Seam.

    Read the article

  • Can't remove burg theme packages

    - by Lassi
    Today, after trying to install and remove BURG and a few themes, I faced an issue. Now I can't install or remove anything. Here is the output (unfortunately partly in Finnish; I couldn't change the language since that also seems to depend on working package management): lassi@lassi-ubuntu:~$ sudo apt-get autoremove Luetaan pakettiluetteloita... Valmis Muodostetaan riippuvuussuhteiden puu Luetaan tilatietoja... Valmis Seuraavat paketit POISTETAAN: burg-theme-fortune burg-theme-gnome burg-theme-picchio 0 päivitetty, 0 uutta asennusta, 3 poistettavaa ja 0 päivittämätöntä. 3 ei asennettu kokonaan tai poistettiin. Toiminnon jälkeen vapautuu 7 180 k t levytilaa. Haluatko jatkaa [K/e]? k (Luetaan tietokantaa... 166462 files and directories currently installed.) Poistetaan pakettia burg-theme-fortune... sudo: update-burg: command not found dpkg: virhe käsiteltäessä burg-theme-fortune (--remove): aliprosessi installed post-removal script palautti virhetilakoodin 1 Poistetaan pakettia burg-theme-gnome... sudo: update-burg: command not found dpkg: virhe käsiteltäessä burg-theme-gnome (--remove): aliprosessi installed post-removal script palautti virhetilakoodin 1 Poistetaan pakettia burg-theme-picchio... sudo: update-burg: command not found dpkg: virhe käsiteltäessä burg-theme-picchio (--remove): aliprosessi installed post-removal script palautti virhetilakoodin 1 Käsittelyssä tapahtui liian monta virhettä: burg-theme-fortune burg-theme-gnome burg-theme-picchio E: Sub-process /usr/bin/dpkg returned an error code (1) Basically what seems to happen is this: It reads the package lists, then tries to remove the package burg-theme-fortune. This fails because the update-burg command was not found. Then dpkg reports an error while processing the package. The same goes for all 3 packages. In the end it claims that there were too many errors, and the packages stay installed. I also tried installing burg, since the removal scripts try to run the update-burg command, but it appears that apt always tries to remove these packages whenever I try to install or remove anything, or do anything else with apt. Any ideas how I could solve this issue? Edit: Here is the output of apt-get install burg (tried installing again to get English output) lassi@lassi-ubuntu:~$ LC_ALL=C sudo apt-get install burg [sudo] password for lassi: Reading package lists... Done Building dependency tree Reading state information... Done burg is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 3 not fully installed or removed. Need to get 0 B/6169 kB of archives. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? y (Reading database ... 167497 files and directories currently installed.) Preparing to replace burg-theme-fortune 0.5.0-1 (using .../burg-theme-fortune_0.5.0-1_all.deb) ... Unpacking replacement burg-theme-fortune ... Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: warning: subprocess old post-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: error processing /var/cache/apt/archives/burg-theme-fortune_0.5.0-1_all.deb (--unpack): subprocess new post-removal script returned error exit status 1 Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. 
No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: error while cleaning up: subprocess new post-removal script returned error exit status 1 Preparing to replace burg-theme-gnome 0.5.0-1 (using .../burg-theme-gnome_0.5.0-1_all.deb) ... Unpacking replacement burg-theme-gnome ... Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: warning: subprocess old post-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: error processing /var/cache/apt/archives/burg-theme-gnome_0.5.0-1_all.deb (--unpack): subprocess new post-removal script returned error exit status 1 Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: error while cleaning up: subprocess new post-removal script returned error exit status 1 Preparing to replace burg-theme-picchio 0.5.0-1 (using .../burg-theme-picchio_0.5.0-1_all.deb) ... Unpacking replacement burg-theme-picchio ... Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: warning: subprocess old post-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: error processing /var/cache/apt/archives/burg-theme-picchio_0.5.0-1_all.deb (--unpack): subprocess new post-removal script returned error exit status 1 Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: error while cleaning up: subprocess new post-removal script returned error exit status 1 Errors were encountered while processing: /var/cache/apt/archives/burg-theme-fortune_0.5.0-1_all.deb /var/cache/apt/archives/burg-theme-gnome_0.5.0-1_all.deb /var/cache/apt/archives/burg-theme-picchio_0.5.0-1_all.deb E: Sub-process /usr/bin/dpkg returned an error code (1) lassi@lassi-ubuntu:~$
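
    The removals fail only because each theme's post-removal script calls update-burg (which in turn runs burg-probe), and that no longer works on this system. A sketch of the usual way out, assuming the standard dpkg layout: neutralize those maintainer scripts so dpkg can finish, then remove the packages.

        # Overwrite each failing post-removal script with a no-op
        for p in burg-theme-fortune burg-theme-gnome burg-theme-picchio; do
            printf '#!/bin/sh\nexit 0\n' | sudo tee /var/lib/dpkg/info/$p.postrm >/dev/null
        done
        sudo dpkg --remove burg-theme-fortune burg-theme-gnome burg-theme-picchio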

    Read the article

  • From J2EE to Java EE: what has changed?

    - by Bruno.Borges
    See original @Java_EE tweet on 29 May 2014 Yep, it has been 8 years since the term J2EE was replaced, and still some people refer to it (mostly recruiters, luckily!). But then comes the question: what has changed besides the name? Our community friend Abhishek Gupta worked on this question and provided an excellent response titled "What's in a name? Java EE? J2EE?". But let me give you a few highlights here so you don't lose yourself with YATO (yet another tab opened): J2EE used to be an infrastructure and resources provider only, requiring developers to depend on external 3rd-party frameworks to implement application requirements or improve productivity J2EE used to require hundreds of lines of XML to define just a dozen resources like EJBs, MDBs, Servlets, and so on J2EE used to support only EARs (Enterprise Archives) with a bunch of other archives like JARs and WARs just to run a simple Web application And so on, and so on! It was a great technology but still required a lot of work to get something up and running. Remember xDoclet? Remember Struts? The old days of pure Hibernate code? Or when Ajax became a trending topic and we were all implementing it with the DWR Servlet? Still, we J2EE developers survived, and learned, and helped evolve the platform to a whole new level of DX (Developer Experience). A new DX for J2EE suggested a new name. One that referred to the platform as the Enterprise Edition of Java, because "Java is why we're here", quoting Bill Shannon. The release of Java EE 5 included so many features that clearly showed developers the platform was going after all those DX gaps. Radical simplification of the persistence model with the introduction of JPA Support for Annotations following the launch of Java SE 5.0 Updated XML APIs with the introduction of StAX Drastic simplification of the EJB component model (with annotations!) Convention over Configuration and Dependency Injection A few bullets, you may say, but they represented a whole new DX and a vision for upcoming versions. Clearly, the release of Java EE 5 helped drive the future of the platform by reducing the number of XML files and Java interfaces, simplifying configuration, providing convention over configuration, etc.! We then saw the release of Java EE 6 with even more great features like Managed Beans, CDI, Bean Validation, improved JSP and Servlet APIs, JASPIC, the possibility to deploy plain WARs, and so many other improvements that it is difficult to list them in one sentence. And we've got to give the Spring Framework some credit here: thanks to Rod Johnson and team, concepts like Dependency Injection fit perfectly into the Java EE Platform. Clearly, Spring used to be one of the most inspiring frameworks for the Java EE platform, and it is great to see things like Pivotal and Spring supporting the JSR 352 Batch API standard! Cooperation to keep improving DX to the maximum in the server-side Java landscape.  The masterpiece result of these previous releases is seen and called today Java EE 7, which, by providing a new and improved JavaServer Faces release, new features for Web development like the WebSocket API, improved JAX-RS, and JSON-P, and also the Batch API and so many other great improvements, has increased developer productivity and brought innovation to server-side Java developers. Java EE is not just a new name (which was introduced back in May 2006!) but a new Developer Experience for server-side Java developers. 
    To show you why we are here and where we are going (see the Java EE 8 update), we wanted to share with you a draft of the new Java EE logos that the evangelist team created to help you spread the word about Java EE. You can get access to these images at the Java EE Platform Facebook Album, or the Google+ Java EE Platform Album, whichever is better for you, but don't forget to like and/or +1 those social network profiles :-) A message to all job recruiters: stop using J2EE and start using Java EE if you want to find great Java EE 5, Java EE 6, or Java EE 7 developers. To not only save you, recruiter, valuable characters when tweeting that job opportunity but also to match the correct term, we invite you to replace long terms like "Java/J2EE" or even worse "#Java #J2EE #JEE" or all these awkward combinations with the only acceptable hashtag: #JavaEE. And to prove that Java EE is catching on among developers and even recruiters, and that J2EE is past, let me highlight the job trends! The image below is from Indeed.com's trends page, for the following keywords: J2EE, Java/J2EE, Java/JEE, JEE. As you can see, J2EE is indeed going away, while JEE saw some increase. Perhaps because some people are just too lazy to type "Java" but at the same time are aware that J2EE (the '2') is past. We shall forgive that for a while :-) Another proof that J2EE is going away comes from looking at its trending statistics on Google. People have been showing less and less interest in the term J2EE. See the chart below: Recruiter, if you still need proof that J2EE is past, that Java EE is trending, that other job recruiters are seeking Java EE developers, and that the developer community is aware of the new term, perhaps these other charts can show you which term you should be using. See for example the Job Trends for Java EE at Indeed.com and notice where it started... 2006! 8 years ago :-) Last but not least, the Google Trends for the Java EE term (including the still wrong but forgivable JavaEE term) show us that the new term is catching up very well. J2EE is past. Oh, and don't worry about the curves going down. We developers like to be hipsters sometimes, and today only AngularJS, NodeJS and BigData are going up. Java EE and other traditional server-side technologies such as Spring, or even ones from other platforms such as Ruby on Rails, PHP, and Grails, are pretty much consolidated and the curves... well, they are consolidated too. So if you are a Java EE developer, drop that J2EE from your résumé, and let recruiters also know that this term is past. Embrace Java EE, and enjoy a new developer experience for server-side Java developers. Java EE on Twitter | Java EE on Google+ | Java EE on Facebook

    Read the article

  • DTracing TCP congestion control

    - by user12820842
    In a previous post, I showed how we can use DTrace to probe TCP receive and send window events. TCP receive and send windows are in effect both about flow-controlling how much data can be received - the receive window reflects how much data the local TCP is prepared to receive, while the send window simply reflects the size of the receive window of the peer TCP. Both then represent flow control as imposed by the receiver.

    However, consider that without the sender imposing flow control, and with a slow link to a peer, TCP will simply fill up its window with sent segments. With multiple TCP implementations filling their peer TCPs' receive windows in this manner, busy intermediate routers may drop some of these segments, leading to timeout and retransmission, which may again lead to drops. This is termed congestion, and TCP has multiple congestion control strategies. We can see that in this example, we need some way of adjusting how much data we send depending on how quickly we receive acknowledgement - if we get ACKs quickly, we can safely send more segments, but if acknowledgements come slowly, we should proceed with more caution. More generally, we need to implement flow control on the send side also.

    Slow Start and Congestion Avoidance

    From RFC 2581, let's examine the relevant variables: "The congestion window (cwnd) is a sender-side limit on the amount of data the sender can transmit into the network before receiving an acknowledgment (ACK). Another state variable, the slow start threshold (ssthresh), is used to determine whether the slow start or congestion avoidance algorithm is used to control data transmission."

    Slow start is used to probe the network's ability to handle transmission bursts both when a connection is first created and when retransmission timers fire. The latter case is important, as the fact that we have effectively lost TCP data acts as a motivator for re-probing how much data the network can handle from the sending TCP. The congestion window (cwnd) is initialized to a relatively small value, generally a low multiple of the sending maximum segment size. When slow start kicks in, we will only send that number of bytes before waiting for acknowledgement. When acknowledgements are received, the congestion window is increased in size until cwnd reaches the slow start threshold (ssthresh) value. For most congestion control algorithms the window increases exponentially under slow start, assuming we receive acknowledgements: we send 1 segment, receive an ACK, increase the cwnd by 1 MSS to 2*MSS, send 2 segments, receive 2 ACKs, increase the cwnd by 2*MSS to 4*MSS, send 4 segments, etc.

    When the congestion window exceeds the slow start threshold, congestion avoidance is used instead of slow start. During congestion avoidance, the congestion window is generally updated by one MSS for each round-trip time, as opposed to one MSS for each ACK under slow start, so cwnd growth is linear instead of exponential (we may receive multiple ACKs within a single RTT). This continues until congestion is detected. If a retransmit timer fires, congestion is assumed and the ssthresh value is reset: it is set to a fraction of the number of bytes outstanding (unacknowledged) in the network, while at the same time the congestion window is reset to a single maximum segment size. Thus, we initiate slow start until we start receiving acknowledgements again, at which point we can eventually flip over to congestion avoidance when cwnd > ssthresh.
    Congestion control algorithms differ most in how they handle the other indication of congestion - duplicate ACKs. A duplicate ACK is a strong indication that data has been lost, since it often comes from a receiver explicitly asking for a retransmission. In some cases, a duplicate ACK may be generated at the receiver as a result of packets arriving out of order, so it is sensible to wait for multiple duplicate ACKs before assuming packet loss rather than out-of-order delivery. This is termed fast retransmit (i.e. retransmit without waiting for the retransmission timer to expire). Note that on Oracle Solaris 11, the congestion control method used can be customized. See here for more details. In general, 3 or more duplicate ACKs indicate packet loss and should trigger fast retransmit. It's best not to revert to slow start in this case, as the fact that the receiver knew it was missing data suggests it has received data with a higher sequence number, so we know traffic is still flowing. Falling back to slow start would therefore be excessive, so fast recovery is used instead.

    Observing slow start and congestion avoidance

    The following script counts TCP segments sent under slow start (cwnd < ssthresh) and under congestion avoidance (cwnd >= ssthresh):

    #!/usr/sbin/dtrace -s

    #pragma D option quiet

    tcp:::connect-request
    / start[args[1]->cs_cid] == 0 /
    {
            start[args[1]->cs_cid] = 1;
    }

    tcp:::send
    / start[args[1]->cs_cid] == 1 &&
      args[3]->tcps_cwnd < args[3]->tcps_cwnd_ssthresh /
    {
            @c["Slow start", args[2]->ip_daddr, args[4]->tcp_dport] = count();
    }

    tcp:::send
    / start[args[1]->cs_cid] == 1 &&
      args[3]->tcps_cwnd >= args[3]->tcps_cwnd_ssthresh /
    {
            @c["Congestion avoidance", args[2]->ip_daddr, args[4]->tcp_dport] = count();
    }

    As we can see, the script only works on connections initiated after it is started, using the start[] associative array, indexed by connection ID, to record that a connection is new (start[cid] = 1). From there we simply differentiate send events where cwnd < ssthresh (slow start) from those where cwnd >= ssthresh (congestion avoidance). Here's the output taken when I accessed a YouTube video (where rport is 80) and from an FTP session where I put a large file onto a remote system.

    # dtrace -s tcp_slow_start.d
    ^C
    ALGORITHM             RADDR            RPORT  #SEG
    Slow start            10.153.125.222      20     6
    Slow start            138.3.237.7         80    14
    Slow start            10.153.125.222      21    18
    Congestion avoidance  10.153.125.222      20  1164

    We see that in the case of the YouTube video, slow start was used exclusively; most of the segments we sent in that case were likely ACKs. Compare this case - where 14 segments were sent using slow start - to the FTP case, where only 6 segments were sent before we switched to congestion avoidance for 1164 segments. The FTP data connection on port 20 was predominantly sent with congestion avoidance in operation, while the FTP control session (port 21) relied exclusively on slow start.

    For the default congestion control algorithm on Solaris 11 - "newreno" - slow start will increase the cwnd by 1 MSS for every acknowledgement received, and congestion avoidance will increase it by 1 MSS for each RTT. Different pluggable congestion control algorithms operate slightly differently: for example, "highspeed" will update the slow start cwnd by the number of bytes ACKed rather than by the MSS. And to finish, here's a neat one-liner to visually display the distribution of congestion window values for all TCP connections to a given remote port using a quantize() aggregation. In this example, only port 80 is in use and we see the majority of cwnd values for that port are in the 4096-8191 range.
    # dtrace -n 'tcp:::send { @q[args[4]->tcp_dport] = quantize(args[3]->tcps_cwnd); }'
    dtrace: description 'tcp:::send ' matched 10 probes
    ^C

           80
               value  ------------- Distribution ------------- count
                  -1 |                                         0
                   0 |@@@@@@                                   5
                   1 |                                         0
                   2 |                                         0
                   4 |                                         0
                   8 |                                         0
                  16 |                                         0
                  32 |                                         0
                  64 |                                         0
                 128 |                                         0
                 256 |                                         0
                 512 |                                         0
                1024 |                                         0
                2048 |@@@@@@@@@                                8
                4096 |@@@@@@@@@@@@@@@@@@@@@@@@@@              23
                8192 |                                         0
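    As a companion sketch (an editorial assumption built only from the probe arguments already used above, not output from the original post), the same idiom can display the distribution of slow start threshold values per remote port, which helps spot connections whose ssthresh has been cut back by congestion events:

    # dtrace -n 'tcp:::send { @q[args[4]->tcp_dport] = quantize(args[3]->tcps_cwnd_ssthresh); }'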

    Read the article

  • FreeBSD performance tuning. Sysctls, loader.conf, kernel

    - by SaveTheRbtz
    I wanted to share some knowledge of tuning FreeBSD via sysctl.conf/loader.conf/KERNCONF. It was initially based on Igor Sysoev's (the author of nginx) presentation about tuning FreeBSD up to 100,000-200,000 active connections. The tunings are for FreeBSD-CURRENT; since 7.2 amd64 some of them are tuned well by default, and prior to 7.0 some of them are boot-time only (set via /boot/loader.conf) or do not exist at all.

    sysctl.conf:

    # No zero mapping feature
    # May break wine
    # (There are also reports about broken samba3)
    #security.bsd.map_at_zero=0

    # If you have a really busy webserver with apache13 you may run out of processes
    #kern.maxproc=10000
    # Same for servers with apache2 / Pound
    #kern.threads.max_threads_per_proc=4096

    # Max. backlog size
    kern.ipc.somaxconn=4096

    # Shared memory // 7.2+ can use shared memory > 2Gb
    kern.ipc.shmmax=2147483648

    # Sockets
    kern.ipc.maxsockets=204800

    # Can cause problems on older kernels:
    # ( http://old.nabble.com/Significant-performance-regression-for-increased-maxsockbuf-on-8.0-RELEASE-tt26745981.html#a26745981 )
    kern.ipc.maxsockbuf=10485760

    # Mbuf 2k clusters (on amd64 7.2+ 25600 is default)
    # For such a high value vm.kmem_size must be increased to 3G
    kern.ipc.nmbclusters=262144

    # Jumbo pagesize(_SC_PAGESIZE) clusters
    # Used as general packet storage for jumbo frames
    # Can be monitored via `netstat -m`
    #kern.ipc.nmbjumbop=262144

    # Jumbo 9k/16k clusters
    # If you are using them
    #kern.ipc.nmbjumbo9=65536
    #kern.ipc.nmbjumbo16=32768

    # For lower latency you can decrease the scheduler's maximum time slice
    # default: stathz/10 (~ 13)
    #kern.sched.slice=1

    # Increase max command-line length shown in `ps` (e.g. for Tomcat/Java)
    # Default is PAGE_SIZE / 16 or 256 on x86
    # This avoids commands being presented as [executable] in `ps`
    # For more info see: http://www.freebsd.org/cgi/query-pr.cgi?pr=120749
    kern.ps_arg_cache_limit=4096

    # Every socket is a file, so increase them
    kern.maxfiles=204800
    kern.maxfilesperproc=200000
    kern.maxvnodes=200000

    # On some systems HPET is almost 2 times faster than the default ACPI-fast
    # Useful on systems with lots of clock_gettime / gettimeofday calls
    # See http://old.nabble.com/ACPI-fast-default-timecounter,-but-HPET-83--faster-td23248172.html
    # After revision 222222 HPET became default: http://svnweb.freebsd.org/base?view=revision&revision=222222
    kern.timecounter.hardware=HPET

    # Small receive space, only usable on an http server; on a file server this
    # should be increased to 65535 or even more
    #net.inet.tcp.recvspace=8192

    # This is useful on Fat-Long-Pipes
    #net.inet.tcp.recvbuf_max=10485760
    #net.inet.tcp.recvbuf_inc=65535

    # Small send space is useful for http servers that serve small files
    # Autotuned since 7.x
    net.inet.tcp.sendspace=16384

    # This is useful on Fat-Long-Pipes
    #net.inet.tcp.sendbuf_max=10485760
    #net.inet.tcp.sendbuf_inc=65535

    # Turn off receive autotuning
    # You can play with it.
    #net.inet.tcp.recvbuf_auto=0
    #net.inet.tcp.sendbuf_auto=0

    # This should be enabled if you are going to use big spaces (>64k)
    # Also the timestamp field is useful when using syncookies
    net.inet.tcp.rfc1323=1

    # Turn this off on high-speed, lossless connections (LAN 1Gbit+)
    # If you set it there is no need for the TCP_NODELAY sockopt (see man tcp)
    net.inet.tcp.delayed_ack=0

    # This feature is useful if you are serving data over modems, Gigabit Ethernet,
    # or even high speed WAN links (or any other link with a high bandwidth delay product),
    # especially if you are also using window scaling or have configured a large send window.
    # Automatically disables on small RTT ( http://www.freebsd.org/cgi/cvsweb.cgi/src/sys/netinet/tcp_subr.c?#rev1.237 )
    # This sysctl was removed in 10-CURRENT:
    # See: http://www.mail-archive.com/[email protected]/msg06178.html
    #net.inet.tcp.inflight.enable=0

    # TCP slow start algorithm tunings
    # We assume we have very fast clients
    #net.inet.tcp.slowstart_flightsize=100
    #net.inet.tcp.local_slowstart_flightsize=100

    # Disable randomizing of ports to avoid false RSTs
    # Before usage check the SA here: www.bsdcan.org/2006/papers/ImprovingTCPIP.pdf
    # (it also says that port randomization auto-disables at some connection rates, but I haven't checked that)
    #net.inet.ip.portrange.randomized=0

    # Increase portrange
    # For outgoing connections only. Good for seed-boxes and ftp servers.
    net.inet.ip.portrange.first=1024
    net.inet.ip.portrange.last=65535

    # Stops route cache degradation during a high-bandwidth flood
    # http://www.freebsd.org/doc/en/books/handbook/securing-freebsd.html
    #net.inet.ip.rtexpire=2
    net.inet.ip.rtminexpire=2
    net.inet.ip.rtmaxcache=1024

    # Security
    net.inet.ip.redirect=0
    net.inet.ip.sourceroute=0
    net.inet.ip.accept_sourceroute=0
    net.inet.icmp.maskrepl=0
    net.inet.icmp.log_redirect=0
    net.inet.icmp.drop_redirect=1
    net.inet.tcp.drop_synfin=1

    # There is also a good example of sysctl.conf with comments:
    # http://www.thern.org/projects/sysctl.conf

    # icmp may NOT rst, helpful for those pesky spoofed
    # icmp/udp floods that end up taking up your outgoing
    # bandwidth/ifqueue due to all that outgoing RST traffic.
    #net.inet.tcp.icmp_may_rst=0

    # Security
    net.inet.udp.blackhole=1
    net.inet.tcp.blackhole=2

    # IPv6 Security
    # For more info see http://www.fosslc.org/drupal/content/security-implications-ipv6
    # Disable Node info replies
    # To see this vulnerability in action run `ping6 -a sglAac ::1` or `ping6 -w ::1` on an unprotected node
    net.inet6.icmp6.nodeinfo=0
    # Turn on IPv6 privacy extensions
    # For more info see proposal http://unix.derkeiler.com/Mailing-Lists/FreeBSD/net/2008-06/msg00103.html
    net.inet6.ip6.use_tempaddr=1
    net.inet6.ip6.prefer_tempaddr=1
    # Disable ICMP redirect
    net.inet6.icmp6.rediraccept=0
    # Disable acceptance of RAs and auto linklocal generation if you don't use them
    #net.inet6.ip6.accept_rtadv=0
    #net.inet6.ip6.auto_linklocal=0

    # Increases default TTL, sometimes useful
    # Default is 64
    net.inet.ip.ttl=128

    # Lessen max segment life to conserve resources
    # ACK waiting time in milliseconds
    # (default: 30000. The RFC from 1979 recommends 120000)
    net.inet.tcp.msl=5000

    # Max number of time-wait sockets
    net.inet.tcp.maxtcptw=200000

    # Don't use tw on local connections
    # As of 15 Apr 2009 Igor Sysoev says that nolocaltimewait has a buggy implementation,
    # so disable it for now until it gets fixed
    #net.inet.tcp.nolocaltimewait=1

    # FIN_WAIT_2 state fast recycle
    net.inet.tcp.fast_finwait2_recycle=1

    # Time before a tcp keepalive probe is sent
    # default is 2 hours (7200000)
    #net.inet.tcp.keepidle=60000

    # Should be increased until net.inet.ip.intr_queue_drops is zero
    net.inet.ip.intr_queue_maxlen=4096

    # Interrupt handling via multiple CPUs, but with context switch.
    # You can play with it.
    # Default is 1
    #net.isr.direct=0

    # This is for routers only
    #net.inet.ip.forwarding=1
    #net.inet.ip.fastforwarding=1

    # This speeds up dummynet when the channel isn't saturated
    net.inet.ip.dummynet.io_fast=1
    # Increase dummynet(4) hash
    #net.inet.ip.dummynet.hash_size=2048
    #net.inet.ip.dummynet.max_chain_len

    # Should be increased when you have A LOT of files on the server
    # (Increase until vfs.ufs.dirhash_mem becomes lower)
    vfs.ufs.dirhash_maxmem=67108864

    # Note from commit http://svn.freebsd.org/base/head@211031 :
    # For systems with RAID volumes and/or virtualization environments, where
    # read performance is very important, increasing this sysctl tunable to 32
    # or even more will demonstratively yield additional performance benefits.
    vfs.read_max=32

    # Explicit Congestion Notification
    # (see http://en.wikipedia.org/wiki/Explicit_Congestion_Notification)
    net.inet.tcp.ecn.enable=1

    # Flowtable - flow caching mechanism
    # Useful for routers
    #net.inet.flowtable.enable=1
    #net.inet.flowtable.nmbflows=65535

    # Extreme polling tuning
    #kern.polling.burst_max=1000
    #kern.polling.each_burst=1000
    #kern.polling.reg_frac=100
    #kern.polling.user_frac=1
    #kern.polling.idle_poll=0

    # IPFW dynamic rules and timeouts tuning
    # Increase dyn_buckets till net.inet.ip.fw.curr_dyn_buckets is lower
    net.inet.ip.fw.dyn_buckets=65536
    net.inet.ip.fw.dyn_max=65536
    net.inet.ip.fw.dyn_ack_lifetime=120
    net.inet.ip.fw.dyn_syn_lifetime=10
    net.inet.ip.fw.dyn_fin_lifetime=2
    net.inet.ip.fw.dyn_short_lifetime=10

    # Make packets pass the firewall only once when using dummynet,
    # i.e. packets going through a pipe are passing out from the firewall with accept
    #net.inet.ip.fw.one_pass=1

    # shm_use_phys wires all shared pages, making them unswappable
    # Use this to lessen the Virtual Memory Manager's work when using shared memory.
    # Useful for databases
    #kern.ipc.shm_use_phys=1

    # ZFS
    # Enable prefetch. Useful for sequential load types, i.e. fileservers.
    # FreeBSD sets vfs.zfs.prefetch_disable to 1 on any i386 systems and
    # on any amd64 systems with less than 4GB of available memory
    # For additional info check this nabble thread http://old.nabble.com/Samba-read-speed-performance-tuning-td27964534.html
    #vfs.zfs.prefetch_disable=0

    # On highload servers you may notice the following message in dmesg:
    # "Approaching the limit on PV entries, consider increasing either the
    # vm.pmap.shpgperproc or the vm.pmap.pv_entry_max tunable"
    vm.pmap.shpgperproc=2048

    loader.conf:

    # Accept filters for data, http and DNS requests
    # Useful when your software uses select() instead of kevent/kqueue or when you are under DDoS
    # DNS accf available on 8.0+
    accf_data_load="YES"
    accf_http_load="YES"
    accf_dns_load="YES"

    # Async IO system calls
    aio_load="YES"

    # Linux specific devices in /dev
    # As of 8.1 it is only /dev/full
    #lindev_load="YES"

    # Adds NCQ support in FreeBSD
    # WARNING! all ad[0-9]+ devices will be renamed to ada[0-9]+
    # 8.0+ only
    #ahci_load="YES"
    #siis_load="YES"

    # FreeBSD 8.2+
    # New Congestion Control for FreeBSD
    # http://caia.swin.edu.au/urp/newtcp/tools/cc_chd-readme-0.1.txt
    # http://www.ietf.org/proceedings/78/slides/iccrg-5.pdf
    # Initial merge commit message http://www.mail-archive.com/[email protected]/msg31410.html
    #cc_chd_load="YES"

    # Increase kernel memory size to 3G.
    #
    # Use ONLY if you have KVA_PAGES in your kernel configuration and you have more than 3G RAM
    # Otherwise a panic will happen on the next reboot!
    #
    # It's required for high buffer sizes: kern.ipc.nmbjumbop, kern.ipc.nmbclusters, etc
    # Useful on highload stateful firewalls, proxies or ZFS fileservers
    # (FreeBSD 7.2+ amd64 users: Check that the current value is lower!)
    #vm.kmem_size="3G"

    # If your server has lots of swap (>4Gb) you should increase the following value
    # according to http://lists.freebsd.org/pipermail/freebsd-hackers/2009-October/029616.html
    # Otherwise you'll be getting errors:
    # "kernel: swap zone exhausted, increase kern.maxswzone"
    #kern.maxswzone="256M"

    # Older versions of FreeBSD can't tune maxfiles on the fly
    #kern.maxfiles="200000"

    # Useful for databases
    # Sets maximum data size to 1G
    # (FreeBSD 7.2+ amd64 users: Check that the current value is lower!)
    #kern.maxdsiz="1G"

    # Maximum buffer size (vfs.maxbufspace)
    # You can check the current one via vfs.bufspace
    # Should be lowered/raised depending on the server's load type
    # Usually decreased to preserve kmem
    # (default is 10% of mem)
    #kern.maxbcache="512M"

    # Sendfile buffers
    # For i386 only
    #kern.ipc.nsfbufs=10240

    # FreeBSD 9+
    # HPET "legacy route" support. It should allow HPET to work per-CPU
    # See http://www.mail-archive.com/[email protected]/msg03603.html
    #hint.atrtc.0.clock=0
    #hint.attimer.0.clock=0
    #hint.hpet.0.legacy_route=1

    # syncache hash table tuning
    net.inet.tcp.syncache.hashsize=1024
    net.inet.tcp.syncache.bucketlimit=512
    net.inet.tcp.syncache.cachelimit=65536

    # Increased hostcache
    # Later the host cache can be viewed via the hidden net.inet.tcp.hostcache.list sysctl
    # Very useful for its RTT and RTTVAR
    # Must be a power of two
    net.inet.tcp.hostcache.hashsize=65536
    # hashsize * bucketlimit (which is 30 by default)
    # It allocates 255Mb (1966080*136) of RAM
    net.inet.tcp.hostcache.cachelimit=1966080

    # TCP control-block hash table tuning
    net.inet.tcp.tcbhashsize=4096

    # Disable ipfw deny all
    # Should be uncommented when there is a chance that
    # the kernel and the ipfw binary may be out of sync on the next reboot
    #net.inet.ip.fw.default_to_accept=1

    # SIFTR (Statistical Information For TCP Research) is a kernel module that
    # logs a range of statistics on active TCP connections to a log file.
    # See prerelease notes http://groups.google.com/group/mailing.freebsd.current/browse_thread/thread/b4c18be6cdce76e4
    # and man 4 siftr
    #siftr_load="YES"

    # Enable superpages, for 7.2+ only
    # Also read http://lists.freebsd.org/pipermail/freebsd-hackers/2009-November/030094.html
    vm.pmap.pg_ps_enabled=1

    # Useful if you are using an Intel Gigabit NIC
    #hw.em.rxd=4096
    #hw.em.txd=4096
    #hw.em.rx_process_limit="-1"

    # Also if you have A LOT of interrupts on the NIC, play with the following parameters
    # NOTE: You should set them for every NIC
    #dev.em.0.rx_int_delay: 250
    #dev.em.0.tx_int_delay: 250
    #dev.em.0.rx_abs_int_delay: 250
    #dev.em.0.tx_abs_int_delay: 250

    # There is also a multithreaded version of the em/igb drivers, which can be found here:
    # http://people.yandex-team.ru/~wawa/
    #
    # For additional em monitoring and statistics use
    #   sysctl dev.em.0.stats=1 ; dmesg
    #   sysctl dev.em.0.debug=1 ; dmesg
    # Also after r209242 (-CURRENT) there is a separate sysctl for each stat variable

    # Same tunings for igb
    #hw.igb.rxd=4096
    #hw.igb.txd=4096
    #hw.igb.rx_process_limit=100

    # Some useful netisr tunables.
    # See sysctl net.isr
    #net.isr.maxthreads=4
    #net.isr.defaultqlimit=4096
    #net.isr.maxqlimit: 10240
    # Bind netisr threads to CPUs
    #net.isr.bindthreads=1

    # FreeBSD 9.x+
    # Increase interface send queue length
    # See commit message http://svn.freebsd.org/viewvc/base?view=revision&revision=207554
    #net.link.ifqmaxlen=1024

    # Nicer boot logo =)
    loader_logo="beastie"

    And finally here is KERNCONF:

    # Just some of them, see also
    # cat /sys/{i386,amd64,}/conf/NOTES

    # This one is useful only on i386
    #options KVA_PAGES=512

    # You can play with HZ in environments with a high interrupt rate (default is 1000)
    # 100 is for my notebook to prolong its battery life
    #options HZ=100

    # Polling is good on network loads with high packet rates and low-end NICs
    # NB! Do not enable it if you want more than one netisr thread
    #options DEVICE_POLLING

    # Eliminate datacopy on socket read-write
    # To take advantage of zero copy sockets you should have an MTU >= 4k
    # This requirement applies only to receiving data.
    # Read more in man zero_copy_sockets
    # Also see this epic thread on kerneltrap:
    # http://kerneltrap.org/node/6506
    # Here Linus says that "anybody that does it that way (FreeBSD) is totally incompetent"
    #options ZERO_COPY_SOCKETS

    # Support TCP signatures. Used for IPSec
    options TCP_SIGNATURE
    # There was a stack overflow found in the KAME IPSec stack:
    # See http://secunia.com/advisories/43995/
    # For a quick workaround you can use `ipfw add deny proto ipcomp`
    options IPSEC

    # These can be loaded as modules; they are described in the loader.conf section
    #options ACCEPT_FILTER_DATA
    #options ACCEPT_FILTER_HTTP

    # Adding ipfw, also can be loaded as modules
    options IPFIREWALL
    # On 8.1+ you can disable verbose to see blocked packets on the ipfw0 interface.
    # Also there is no point in compiling verbose into the kernel, because
    # now there is the net.inet.ip.fw.verbose tunable.
    #options IPFIREWALL_VERBOSE
    #options IPFIREWALL_VERBOSE_LIMIT=10
    options IPFIREWALL_FORWARD
    # Adding kernel NAT
    options IPFIREWALL_NAT
    options LIBALIAS
    # Traffic shaping
    options DUMMYNET
    # Divert, i.e. for userspace NAT
    options IPDIVERT

    # This is for OpenBSD's pf firewall
    device pf
    device pflog
    # pf's QoS - ALTQ
    options ALTQ
    options ALTQ_CBQ   # Class Based Queueing (CBQ)
    options ALTQ_RED   # Random Early Detection (RED)
    options ALTQ_RIO   # RED In/Out
    options ALTQ_HFSC  # Hierarchical Packet Scheduler (HFSC)
    options ALTQ_PRIQ  # Priority Queuing (PRIQ)
    options ALTQ_NOPCC # Required for SMP build

    # Pretty console
    # A manual can be found here http://forums.freebsd.org/showthread.php?t=6134
    #options VESA
    #options SC_PIXEL_MODE

    # Disable reboot on Ctrl Alt Del
    #options SC_DISABLE_REBOOT
    # Change normal|kernel messages color
    options SC_NORM_ATTR=(FG_GREEN|BG_BLACK)
    options SC_KERNEL_CONS_ATTR=(FG_YELLOW|BG_BLACK)
    # More scroll space
    options SC_HISTORY_SIZE=8192

    # Adding hardware crypto device
    device crypto
    device cryptodev

    # Useful network interfaces
    device vlan
    device tap       # Virtual Ethernet driver
    device gre       # IP over IP tunneling
    device if_bridge # Bridge interface
    device pfsync    # synchronization interface for PF
    device carp      # Common Address Redundancy Protocol
    device enc       # IPsec interface
    device lagg      # Link aggregation interface
    device stf       # IPv4-IPv6 tunneling

    # Also for my notebook, but may be used with an Opteron
    device amdtemp
    # Same for Intel processors
    device coretemp

    # man 4 cpuctl
    device cpuctl # CPU control pseudo-device

    # Support for ECMP: more than one route for a destination
    # Works even with the default route, so one can use it as LB for two ISPs
    # For now the code is unstable and panics (panic: rtfree 2) on route deletions.
    #options RADIX_MPATH

    # Multicast routing
    #options MROUTING
    #options PIM

    # Debug & DTrace
    options KDB           # Kernel debugger related code
    options KDB_TRACE     # Print a stack trace for a panic
    options KDTRACE_FRAME # amd64-only(?)
    options KDTRACE_HOOKS # all architectures - enable general DTrace hooks
    #options DDB
    #options DDB_CTF      # all architectures - kernel ELF linker loads CTF data

    # Adaptive spinning in lockmgr (8.x+)
    # See http://www.mail-archive.com/[email protected]/msg10782.html
    options ADAPTIVE_LOCKMGRS

    # UTF-8 in console (8.x+)
    #options TEKEN_UTF8

    # FreeBSD 8.1+
    # Deadlock resolver thread
    # For additional information see http://www.mail-archive.com/[email protected]/msg18124.html
    # (FYI: the "resolution" is a panic, so use with caution)
    #options DEADLKRES

    # Increase maximum size of Raw I/O and sendfile(2) readahead
    #options MAXPHYS=(1024*1024)
    #options MAXBSIZE=(1024*1024)

    # To debug the scheduler, enable the following option.
    # Debug info will be available via the `kern.sched.stats` sysctl
    # For more information see http://svnweb.freebsd.org/base/head/sys/conf/NOTES?view=markup
    #options SCHED_STATS

    If you are tuning the network for maximum performance you may wish to play with ifconfig options like:

    # You can list all capabilities via `ifconfig -m`
    ifconfig [-]rxcsum [-]txcsum [-]tso [-]lro mtu

    In case you've enabled DDB in your kernel config, you should edit /etc/ddb.conf and add something like this to enable automatic reboot (and a textdump as a bonus):

    script kdb.enter.panic=textdump set; capture on; show pcpu; bt; ps; alltrace; capture off; call doadump; reset
    script kdb.enter.default=textdump set; capture on; bt; ps; capture off; call doadump; reset

    And do not forget to add ddb_enable="YES" to /etc/rc.conf.

    Since FreeBSD 9 you can choose to enable/disable flow control on your NIC:

    # See http://en.wikipedia.org/wiki/Ethernet_flow_control and
    # http://www.mail-archive.com/[email protected]/msg07927.html for additional info
    ifconfig bge0 media auto mediaopt flowcontrol

    PS. Most of FreeBSD's limits can be monitored via
    # vmstat -z
    and
    # limits

    PPS. A variety of network counters can be monitored via
    # netstat -s
    In FreeBSD 9 netstat's -Q option appeared; try the following command to display netisr stats:
    # netstat -Q

    PPPS. Also see
    # man 7 tuning

    PPPPS. I want to thank the FreeBSD community, especially the author of nginx, Igor Sysoev, and the nginx-ru@ and FreeBSD-performance@ mailing lists, for providing useful information about FreeBSD tuning.

    FreeBSD WIP
    * Whats cooking for FreeBSD 7?
    * Whats cooking for FreeBSD 8?
    * Whats cooking for FreeBSD 9?

    So here is the question: what tunings are you using on your FreeBSD servers? You can also post your /etc/sysctl.conf, /boot/loader.conf, kernel options, etc., with a description of what they mean (no copy-paste from sysctl -d, please). Don't forget to specify the server type (web, smb, gateway, etc). Let's share experience!
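    As a quick editorial aside, one practical workflow when experimenting with any of the sysctl knobs above (a sketch; kern.ipc.somaxconn is just an example tunable, and the reload step assumes the stock rc.d script of that era) is to read the description, try the value at runtime, and only persist it once it proves useful. Loader tunables (the loader.conf section) are read at boot and cannot be changed at runtime:

    # Show the description and current value of a tunable
    sysctl -d kern.ipc.somaxconn
    sysctl kern.ipc.somaxconn
    # Try a new value at runtime (lost on reboot)
    sysctl kern.ipc.somaxconn=4096
    # If it helps, add the line to /etc/sysctl.conf and re-apply it with:
    /etc/rc.d/sysctl reload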

    Read the article

  • Copied Netbeans IDE Maven Web project fails to compile

    - by Quaternion
    If I am going to make serious changes to my project I like to make a copy (right click project and select copy). I have created copies of my projects many times (maven web applications included) without issue. Now all I do is copy the project making no changes, leaving the copy settings on default (project name etc) and then try to run it and spring decides to take issue with it. Again I try running the original and it works. I deleted the faulty copy, restarted and tried again and still it does not work. Here are my maven/system details: Apache Maven 2.2.1 (rdebian-4) Java version: 1.6.0_20 Java home: /usr/lib/jvm/java-6-openjdk/jre Default locale: en_CA, platform encoding: UTF-8 OS name: "linux" version: "2.6.35-23-generic" arch: "i386" Family: "unix" Here is the exception: INFO: 2010-12-21 16:08:23,715 ERROR org.springframework.web.context.ContextLoader.initWebApplicationContext:215 - Context initialization failed org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'personService': Injection of persistence methods failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactory' defined in class path resource [applicationContext.xml]: Instantiation of bean failed; nested exception is java.lang.NoClassDefFoundError: org/springframework/jdbc/datasource/lookup/DataSourceLookup at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor.postProcessPropertyValues(PersistenceAnnotationBeanPostProcessor.java:324) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:998) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:472) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory$1.run(AbstractAutowireCapableBeanFactory.java:409) at java.security.AccessController.doPrivileged(Native Method) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:380) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:264) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:261) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:185) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:164) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:429) at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:728) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:380) at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:255) at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:199) at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:45) at org.apache.catalina.core.StandardContext.contextListenerStart(StandardContext.java:4664) at 
com.sun.enterprise.web.WebModule.contextListenerStart(WebModule.java:535) at org.apache.catalina.core.StandardContext.start(StandardContext.java:5266) at com.sun.enterprise.web.WebModule.start(WebModule.java:499) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:928) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:912) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:694) at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1947) at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1619) at com.sun.enterprise.web.WebApplication.start(WebApplication.java:90) at org.glassfish.internal.data.EngineRef.start(EngineRef.java:126) at org.glassfish.internal.data.ModuleInfo.start(ModuleInfo.java:241) at org.glassfish.internal.data.ApplicationInfo.start(ApplicationInfo.java:236) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:339) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:183) at org.glassfish.deployment.admin.DeployCommand.execute(DeployCommand.java:272) at com.sun.enterprise.v3.admin.CommandRunnerImpl$1.execute(CommandRunnerImpl.java:305) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:320) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:1176) at com.sun.enterprise.v3.admin.CommandRunnerImpl.access$900(CommandRunnerImpl.java:83) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1235) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1224) at com.sun.enterprise.v3.admin.AdminAdapter.doCommand(AdminAdapter.java:365) at com.sun.enterprise.v3.admin.AdminAdapter.service(AdminAdapter.java:204) at com.sun.grizzly.tcp.http11.GrizzlyAdapter.service(GrizzlyAdapter.java:166) at com.sun.enterprise.v3.server.HK2Dispatcher.dispath(HK2Dispatcher.java:100) at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:245) at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:791) at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:693) at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:954) at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:170) at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:135) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:102) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:88) at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:76) at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:53) at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:57) at com.sun.grizzly.ContextTask.run(ContextTask.java:69) at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:330) at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:309) at java.lang.Thread.run(Thread.java:636) Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactory' defined in class path resource [applicationContext.xml]: Instantiation of bean failed; nested exception is java.lang.NoClassDefFoundError: org/springframework/jdbc/datasource/lookup/DataSourceLookup at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateBean(AbstractAutowireCapableBeanFactory.java:883) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:839) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.getSingletonFactoryBeanForTypeCheck(AbstractAutowireCapableBeanFactory.java:682) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.getTypeForFactoryBean(AbstractAutowireCapableBeanFactory.java:614) at org.springframework.beans.factory.support.AbstractBeanFactory.isTypeMatch(AbstractBeanFactory.java:450) at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBeanNamesForType(DefaultListableBeanFactory.java:223) at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBeanNamesForType(DefaultListableBeanFactory.java:202) at org.springframework.beans.factory.BeanFactoryUtils.beanNamesForTypeIncludingAncestors(BeanFactoryUtils.java:143) at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor.findDefaultEntityManagerFactory(PersistenceAnnotationBeanPostProcessor.java:503) at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor.findEntityManagerFactory(PersistenceAnnotationBeanPostProcessor.java:473) at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor$PersistenceElement.resolveEntityManager(PersistenceAnnotationBeanPostProcessor.java:599) at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor$PersistenceElement.getResourceToInject(PersistenceAnnotationBeanPostProcessor.java:570) at org.springframework.beans.factory.annotation.InjectionMetadata$InjectedElement.inject(InjectionMetadata.java:192) at org.springframework.beans.factory.annotation.InjectionMetadata.injectMethods(InjectionMetadata.java:117) INFO: at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor.postProcessPropertyValues(PersistenceAnnotationBeanPostProcessor.java:321) ... 57 more Caused by: java.lang.NoClassDefFoundError: org/springframework/jdbc/datasource/lookup/DataSourceLookup at java.lang.Class.getDeclaredConstructors0(Native Method) at java.lang.Class.privateGetDeclaredConstructors(Class.java:2406) at java.lang.Class.getConstructor0(Class.java:2716) at java.lang.Class.getDeclaredConstructor(Class.java:2002) at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:54) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateBean(AbstractAutowireCapableBeanFactory.java:877) ... 71 more Caused by: java.lang.ClassNotFoundException: org.springframework.jdbc.datasource.lookup.DataSourceLookup at java.net.URLClassLoader$1.run(URLClassLoader.java:217) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:205) at org.glassfish.web.loader.WebappClassLoader.findClass(WebappClassLoader.java:959) at org.glassfish.web.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1430) ... 
77 more SEVERE: PWC1306: Startup of context /quickstart_upgrade-0.1-SNAPSHOT failed due to previous errors SEVERE: PWC1305: Exception during cleanup after start failed org.apache.catalina.LifecycleException: PWC2769: Manager has not yet been started at org.apache.catalina.session.StandardManager.stop(StandardManager.java:892) at org.apache.catalina.core.StandardContext.stop(StandardContext.java:5456) at com.sun.enterprise.web.WebModule.stop(WebModule.java:530) at org.apache.catalina.core.StandardContext.start(StandardContext.java:5284) at com.sun.enterprise.web.WebModule.start(WebModule.java:499) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:928) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:912) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:694) at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1947) at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1619) at com.sun.enterprise.web.WebApplication.start(WebApplication.java:90) at org.glassfish.internal.data.EngineRef.start(EngineRef.java:126) at org.glassfish.internal.data.ModuleInfo.start(ModuleInfo.java:241) at org.glassfish.internal.data.ApplicationInfo.start(ApplicationInfo.java:236) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:339) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:183) at org.glassfish.deployment.admin.DeployCommand.execute(DeployCommand.java:272) at com.sun.enterprise.v3.admin.CommandRunnerImpl$1.execute(CommandRunnerImpl.java:305) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:320) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:1176) at com.sun.enterprise.v3.admin.CommandRunnerImpl.access$900(CommandRunnerImpl.java:83) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1235) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1224) at com.sun.enterprise.v3.admin.AdminAdapter.doCommand(AdminAdapter.java:365) at com.sun.enterprise.v3.admin.AdminAdapter.service(AdminAdapter.java:204) at com.sun.grizzly.tcp.http11.GrizzlyAdapter.service(GrizzlyAdapter.java:166) at com.sun.enterprise.v3.server.HK2Dispatcher.dispath(HK2Dispatcher.java:100) at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:245) at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:791) at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:693) at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:954) at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:170) at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:135) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:102) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:88) at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:76) at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:53) at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:57) at com.sun.grizzly.ContextTask.run(ContextTask.java:69) at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:330) at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:309) at java.lang.Thread.run(Thread.java:636) 
SEVERE: ContainerBase.addChild: start: org.apache.catalina.LifecycleException: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'personService': Injection of persistence methods failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactory' defined in class path resource [applicationContext.xml]: Instantiation of bean failed; nested exception is java.lang.NoClassDefFoundError: org/springframework/jdbc/datasource/lookup/DataSourceLookup at org.apache.catalina.core.StandardContext.start(StandardContext.java:5289) at com.sun.enterprise.web.WebModule.start(WebModule.java:499) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:928) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:912) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:694) at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1947) at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1619) at com.sun.enterprise.web.WebApplication.start(WebApplication.java:90) at org.glassfish.internal.data.EngineRef.start(EngineRef.java:126) at org.glassfish.internal.data.ModuleInfo.start(ModuleInfo.java:241) at org.glassfish.internal.data.ApplicationInfo.start(ApplicationInfo.java:236) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:339) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:183) at org.glassfish.deployment.admin.DeployCommand.execute(DeployCommand.java:272) at com.sun.enterprise.v3.admin.CommandRunnerImpl$1.execute(CommandRunnerImpl.java:305) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:320) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:1176) at com.sun.enterprise.v3.admin.CommandRunnerImpl.access$900(CommandRunnerImpl.java:83) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1235) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1224) at com.sun.enterprise.v3.admin.AdminAdapter.doCommand(AdminAdapter.java:365) at com.sun.enterprise.v3.admin.AdminAdapter.service(AdminAdapter.java:204) at com.sun.grizzly.tcp.http11.GrizzlyAdapter.service(GrizzlyAdapter.java:166) at com.sun.enterprise.v3.server.HK2Dispatcher.dispath(HK2Dispatcher.java:100) at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:245) at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:791) at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:693) at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:954) at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:170) at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:135) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:102) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:88) at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:76) at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:53) at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:57) at com.sun.grizzly.ContextTask.run(ContextTask.java:69) at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:330) at 
com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:309) at java.lang.Thread.run(Thread.java:636) Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'personService': Injection of persistence methods failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactory' defined in class path resource [applicationContext.xml]: Instantiation of bean failed; nested exception is java.lang.NoClassDefFoundError: org/springframework/jdbc/datasource/lookup/DataSourceLookup at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor.postProcessPropertyValues(PersistenceAnnotationBeanPostProcessor.java:324) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:998) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:472) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory$1.run(AbstractAutowireCapableBeanFactory.java:409) at java.security.AccessController.doPrivileged(Native Method) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:380) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:264) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:261) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:185) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:164) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:429) at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:728) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:380) at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:255) at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:199) at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:45) at org.apache.catalina.core.StandardContext.contextListenerStart(StandardContext.java:4664) at com.sun.enterprise.web.WebModule.contextListenerStart(WebModule.java:535) at org.apache.catalina.core.StandardContext.start(StandardContext.java:5266) ... 
38 more Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactory' defined in class path resource [applicationContext.xml]: Instantiation of bean failed; nested exception is java.lang.NoClassDefFoundError: org/springframework/jdbc/datasource/lookup/DataSourceLookup at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateBean(AbstractAutowireCapableBeanFactory.java:883) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:839) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.getSingletonFactoryBeanForTypeCheck(AbstractAutowireCapableBeanFactory.java:682) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.getTypeForFactoryBean(AbstractAutowireCapableBeanFactory.java:614) at org.springframework.beans.factory.support.AbstractBeanFactory.isTypeMatch(AbstractBeanFactory.java:450) at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBeanNamesForType(DefaultListableBeanFactory.java:223) at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBeanNamesForType(DefaultListableBeanFactory.java:202) at org.springframework.beans.factory.BeanFactoryUtils.beanNamesForTypeIncludingAncestors(BeanFactoryUtils.java:143) at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor.findDefaultEntityManagerFactory(PersistenceAnnotationBeanPostProcessor.java:503) at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor.findEntityManagerFactory(PersistenceAnnotationBeanPostProcessor.java:473) at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor$PersistenceElement.resolveEntityManager(PersistenceAnnotationBeanPostProcessor.java:599) at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor$PersistenceElement.getResourceToInject(PersistenceAnnotationBeanPostProcessor.java:570) at org.springframework.beans.factory.annotation.InjectionMetadata$InjectedElement.inject(InjectionMetadata.java:192) at org.springframework.beans.factory.annotation.InjectionMetadata.injectMethods(InjectionMetadata.java:117) at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor.postProcessPropertyValues(PersistenceAnnotationBeanPostProcessor.java:321) ... 57 more Caused by: java.lang.NoClassDefFoundError: org/springframework/jdbc/datasource/lookup/DataSourceLookup at java.lang.Class.getDeclaredConstructors0(Native Method) at java.lang.Class.privateGetDeclaredConstructors(Class.java:2406) at java.lang.Class.getConstructor0(Class.java:2716) at java.lang.Class.getDeclaredConstructor(Class.java:2002) at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:54) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateBean(AbstractAutowireCapableBeanFactory.java:877) ... 71 more Caused by: java.lang.ClassNotFoundException: org.springframework.jdbc.datasource.lookup.DataSourceLookup at java.net.URLClassLoader$1.run(URLClassLoader.java:217) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:205) at org.glassfish.web.loader.WebappClassLoader.findClass(WebappClassLoader.java:959) at org.glassfish.web.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1430) ... 77 more
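    A hedged reading of the trace: the root cause is a ClassNotFoundException for org.springframework.jdbc.datasource.lookup.DataSourceLookup, a class that ships in the spring-jdbc artifact, which suggests the copied project's WAR is missing that jar. A sketch of a likely fix, assuming Spring 3.x is in use (the version shown is illustrative; match it to your other Spring dependencies):

    <!-- DataSourceLookup lives in spring-jdbc; re-add it if the copy lost it -->
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-jdbc</artifactId>
        <version>3.0.5.RELEASE</version> <!-- illustrative version -->
    </dependency>

    After editing the pom, a clean rebuild (mvn clean install) and redeploy should confirm whether the class now resolves.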

    Read the article

  • The Incremental Architect's Napkin - #5 - Design functions for extensibility and readability

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/08/24/the-incremental-architectrsquos-napkin---5---design-functions-for.aspx

    The functionality of programs is entered via Entry Points. So what we're talking about when designing software is a bunch of functions handling the requests represented by and flowing in through those Entry Points. Designing software thus consists of at least three phases:

    Analyzing the requirements to find the Entry Points and their signatures
    Designing the functionality to be executed when those Entry Points get triggered
    Implementing the functionality according to the design, aka coding

    I presume you're familiar with phase 1 in some way. And I guess you're proficient in implementing functionality in some programming language. But in my experience developers in general are not experienced in going through an explicit phase 2. "Designing functionality? What's that supposed to mean?" you might already have thought. Here's my definition: to design functionality (or functional design for short) means thinking about... well, functions. You find a solution for what's supposed to happen when an Entry Point gets triggered in terms of functions. A conceptual solution, that is, because those functions only exist in your head (or on paper) during this phase. But you may have guessed that, because it's "design", not "coding".

    And here is what functional design is not: it's not about logic. Logic is expressions (e.g. +, -, && etc.) and control statements (e.g. if, switch, for, while etc.). Also I consider calling external APIs as logic. It's equally basic. It's what code needs to do in order to deliver some functionality or quality. Logic is the part that does what needs to be done by software. Transformations are done either through expressions or API calls. And then there is alternative control flow depending on the result of some expression. Basically it's just jumps in Assembler, sometimes to go forward (if, switch), sometimes to go backward (for, while, do).

    But calling your own function is not logic. It's not necessary to produce any outcome. Functionality is not enhanced by adding functions (subroutine calls) to your code. Nor is quality increased by adding functions. No performance gain, no higher scalability etc. through functions. Functions are not relevant to functionality. Strange, isn't it? What they are important for is security of investment. By introducing functions into our code we can become more productive (re-use) and can increase evolvability (higher understandability, easier to keep code consistent). That's no small feat, however. Evolvable code can hardly be overestimated. That's why to me functional design is so important. It's at the core of software development.

    To sum this up: functional design is on a level of abstraction above (!) logical design or algorithmic design. Functional design is only done until you get to a point where each function is so simple you are very confident you can easily code it. Functional design and logical design (which mostly is coding, but can also be done using pseudo code or flow charts) are complementary. Software needs both. If you start coding right away you end up in a tangled mess very quickly. Then you need to back out through refactoring. Functional design on the other hand is bloodless without actual code. It's just a theory with no experiments to prove it. But how to do functional design?

    An example of functional design

    Let's assume a program to de-duplicate strings.
    The user enters a number of strings separated by commas, e.g. a, b, a, c, d, b, e, c, a, and the program is supposed to clear this list of all doubles, e.g. a, b, c, d, e. There is only one Entry Point to this program: the user triggers the de-duplication by starting the program with the string list on the command line

    C:\>deduplicate "a, b, a, c, d, b, e, c, a"
    a, b, c, d, e

    ...or by clicking on a GUI button. This leads to the Entry Point function getting called. It's the program's main function in case of the batch version, or a button click event handler in the GUI version. That's the physical Entry Point, so to speak. It's inevitable. What then happens is a three step process:

    Transform the input data from the user into a request.
    Call the request handler.
    Transform the output of the request handler into a tangible result for the user.

    Or to phrase it a bit more generally: accept input, transform input into output, present output. This does not mean any of these steps requires a lot of effort. Maybe it's just one line of code to accomplish it. Nevertheless it's a distinct step in doing the processing behind an Entry Point. Call it an aspect or a responsibility - and you will realize it most likely deserves a function of its own to satisfy the Single Responsibility Principle (SRP).

    Interestingly the above list of steps is already functional design. There is no logic, but nevertheless the solution is described - albeit on a higher level of abstraction than you might have done yourself. But it's still on a meta-level. The application to the domain at hand is easy, though:

    Accept string list from command line
    De-duplicate
    Present de-duplicated strings on standard output

    And this concrete list of processing steps can easily be transformed into code:

    static void Main(string[] args) {
        var input = Accept_string_list(args);
        var output = Deduplicate(input);
        Present_deduplicated_string_list(output);
    }

    Instead of a big problem there are three much smaller problems now. If you think each of those is trivial to implement, then go for it. You can stop the functional design at this point. But maybe, just maybe, you're not so sure how to go about the de-duplication, for example. Then just implement what's easy right now, e.g.

    private static string Accept_string_list(string[] args) {
        return args[0];
    }

    private static void Present_deduplicated_string_list(string[] output) {
        var line = string.Join(", ", output);
        Console.WriteLine(line);
    }

    Accept_string_list() contains logic in the form of an API call. Present_deduplicated_string_list() contains logic in the form of an expression and an API call. And then repeat the functional design for the remaining processing step. What's left is the domain logic: de-duplicating a list of strings. How should that be done? Without any logic at our disposal during functional design, you're left with just functions. So which functions could make up the de-duplication? Here's a suggestion:

    De-duplicate
    Parse the input string into a true list of strings.
    Register each string in a dictionary/map/set. That way duplicates get cast away.
    Transform the data structure into a list of unique strings.

    Processing step 2 obviously was the core of the solution. That's where real creativity was needed. That's the core of the domain.
But now, after this refinement, the implementation of each step is easy again:

private static string[] Parse_string_list(string input) {
    return input.Split(',')
                .Select(s => s.Trim())
                .ToArray();
}

private static Dictionary<string, object> Compile_unique_strings(string[] strings) {
    return strings.Aggregate(
        new Dictionary<string, object>(),
        (agg, s) => { agg[s] = null; return agg; });
}

private static string[] Serialize_unique_strings(Dictionary<string, object> dict) {
    return dict.Keys.ToArray();
}

With these three additional functions Main() now looks like this:

static void Main(string[] args) {
    var input = Accept_string_list(args);
    var strings = Parse_string_list(input);
    var dict = Compile_unique_strings(strings);
    var output = Serialize_unique_strings(dict);
    Present_deduplicated_string_list(output);
}

I think that's very understandable code: just read it from top to bottom and you know how the solution to the problem works. It's a mirror image of the initial design:

Accept string list from command line
Parse the input string into a true list of strings.
Register each string in a dictionary/map/set. That way duplicates get cast away.
Transform the data structure into a list of unique strings.
Present de-duplicated strings on standard output

You can even re-generate the design by just looking at the code. Code and functional design thus are always in sync - if you follow some simple rules. But more about that later. And as a bonus: all the functions making up the process are small - which means easy to understand, too.

So much for an initial concrete example. Now it's time for some theory. Because there is method to this madness ;-) The above has only scratched the surface.

Introducing Flow Design

Functional design starts with a given function, the Entry Point. Its goal is to describe the behavior of the program when the Entry Point is triggered using a process, not an algorithm. An algorithm consists of logic; a process on the other hand consists just of steps or stages. Each processing step transforms input into output or a side effect. Also it might access resources, e.g. a printer, a database, or just memory. Processing steps thus can rely on state of some sort. This is different from Functional Programming, where functions are supposed to not be stateful and not cause side effects.[1]

In its simplest form a process can be written as a bullet point list of steps, e.g.

Get data from user
Output result to user
Transform data
Parse data
Map result for output

Such a compilation of steps - possibly on different levels of abstraction - often is the first artifact of functional design. It can be generated by a team in an initial design brainstorming. Next comes ordering the steps. What should happen first, what next etc.?

Get data from user
Parse data
Transform data
Map result for output
Output result to user

That's great for a start into functional design. It's better than starting to code right away on a given function using TDD. Please get me right: TDD is a valuable practice. But it can be unnecessarily hard if the scope of a function is too large. But how do you know beforehand without investing some thinking? And how to do this thinking in a systematic fashion?

My recommendation: For any given function you're supposed to implement, first do a functional design. Then, once you're confident you know the processing steps - which are pretty small - refine and code them using TDD. You'll see that's much, much easier - and leads to cleaner code right away.
For more information on this approach, which I call "Informed TDD", read my book of the same title. Thinking before coding is smart. And writing down the solution as a bunch of functions possibly is the simplest thing you can do, I'd say. It's more in line with the KISS (Keep It Simple, Stupid) principle than returning constants or other trivial stuff TDD development often is started with.

So far so good. A simple ordered list of processing steps will do to start with functional design. As shown in the above example such steps can easily be translated into functions. Moving from design to coding thus is simple. However, such a list does not scale. Processing is not always that simple to be captured in a list. And then the list is just text. Again. Like code. That means the design is lacking visuality. Textual representations need more parsing by your brain than visual representations. Plus they are limited in their "dimensionality": text just has one dimension, it's sequential. Alternatives and parallelism are hard to encode in text. In addition the functional design using numbered lists lacks data. It's not visible what the input, output, and state of the processing steps are.

That's why functional design should be done using a lightweight visual notation. No tool is necessary to draw such designs. Use pen and paper; a flipchart, a whiteboard, or even a napkin is sufficient.

Visualizing processes

The building block of the functional design notation is a functional unit. I mostly draw it like this: Something is done, it's clear what goes in, it's clear what comes out, and it's clear what the processing step requires in terms of state or hardware. Whenever input flows into a functional unit it gets processed and output is produced and/or a side effect occurs. Flowing data is the driver of something happening. That's why I call this approach to functional design Flow Design. It's about data flow instead of control flow.

Control flow like in algorithms is of no concern to functional design. Thinking about control flow simply is too low level. Once you start with control flow you easily get bogged down by tons of details. That's what you want to avoid during design. Design is supposed to be quick, broad brush, abstract. It should give overview.

But what about all the details? As Robert C. Martin rightly said: "Programming is about detail". Detail is a matter of code. Once you start coding the processing steps you designed you can worry about all the detail you want. Functional design does not eliminate all the nitty-gritty. It just postpones tackling it. To me that's also an example of the SRP. Functional design has the responsibility to come up with a solution to a problem posed by a single function (Entry Point). And later coding has the responsibility to implement the solution down to the last detail (i.e. statement, API call). TDD unfortunately mixes both responsibilities. It's just coding - and thereby trying to find detailed implementations (green phase) plus getting the design right (refactoring). To me that's one reason why TDD has failed to deliver on its promise for many developers.

Using functional units as building blocks, functional design processes can be depicted very easily. Here's the initial process for the example problem: For each processing step draw a functional unit and label it. Choose a verb or an "action phrase" as a label, not a noun. Functional design is about activities, not state or structure. Then make the output of an upstream step the input of a downstream step.
Finally think about the data that should flow between the functional units. Write the data above the arrows connecting the functional units in the direction of the data flow. Enclose the data description in brackets. That way you can clearly see if all flows have already been specified. Empty brackets mean "no data is flowing", but nevertheless a signal is sent. A name like "list" or "strings" in brackets describes the data content. Use lower case labels for that purpose. A name starting with an upper case letter like "String" or "Customer" on the other hand signifies a data type. If you like, you can also combine descriptions with data types by separating them with a colon, e.g. (list:string) or (strings:string[]). But these are just suggestions from my practice with Flow Design. You can do it differently, if you like. Just be sure to be consistent.

Flows wired up in this manner I call one-dimensional (1D). Each functional unit just has one input and/or one output. A functional unit without an output is possible. It's like a black hole sucking up input without producing any output. Instead it produces side effects. A functional unit without an input, though, does not make much sense. When should it start to work? What's the trigger? That's why in the above process even the first processing step has an input.

If you like, view such 1D-flows as pipelines. Data is flowing through them from left to right. But as you can see, it's not always the same data. It gets transformed along its passage: (args) becomes a (list) which is turned into (strings).

The Principle of Mutual Oblivion

A very characteristic trait of flows put together from functional units is: no functional unit knows another one. They are all completely independent of each other. Functional units don't know where their input is coming from (or even when it's gonna arrive). They just specify a range of values they can process. And they promise a certain behavior upon input arriving. Also they don't know where their output is going. They just produce it in their own time independent of other functional units. That means at least conceptually all functional units work in parallel.

Functional units don't know their "deployment context". They know nothing about the overall flow they are placed in. They are just consuming input from some upstream, and producing output for some downstream. That makes functional units very easy to test. At least as long as they don't depend on state or resources.

I call this the Principle of Mutual Oblivion (PoMO). Functional units are oblivious of others as well as of an overall context/purpose. They are just parts of a whole focused on a single responsibility. How the whole is built, how a larger goal is achieved, is of no concern to the single functional units.

By building software in such a manner, functional design interestingly follows nature. Nature's building blocks for organisms also follow the PoMO. The cells forming your body do not know each other. Take a nerve cell "controlling" a muscle cell for example:[2] The nerve cell does not know anything about muscle cells, let alone the specific muscle cell it is "attached to". Likewise the muscle cell does not know anything about nerve cells, let alone a specific nerve cell "attached to" it. Saying "the nerve cell is controlling the muscle cell" thus only makes sense when viewing both from the outside. "Control" is a concept of the whole, not of its parts. Control is created by wiring up parts in a certain way. Both cells are mutually oblivious.
Both just follow a contract. One produces Acetylcholine (ACh) as output, the other consumes ACh as input. Where the ACh is going, where it's coming from, neither cell cares about. Millions of years of evolution have led to this kind of division of labor. And millions of years of evolution have produced organism designs (DNA) which lead to the production of these different cell types (and many others) and also to their co-location. The result: the overall behavior of an organism.

How and why this happened in nature is a mystery. For our software, though, it's clear: functional and quality requirements need to be fulfilled. So we as developers have to become "intelligent designers" of "software cells" which we put together to form a "software organism" which responds in satisfying ways to triggers from its environment.

My bet is: If nature gets complex organisms working by following the PoMO, who are we to not apply this recipe for success to our much simpler "machines"? So my rule is: Wherever there is functionality to be delivered, because there is a clear Entry Point into software, design the functionality like nature would do it. Build it from mutually oblivious functional units.

That's what Flow Design is about. In that way it's even universal, I'd say. Its notation can also be applied to biology: Never mind labeling the functional units with nouns. That's ok in Flow Design. You'll do that occasionally for functional units on a higher level of abstraction or when their purpose is close to hardware.

Getting a cockroach to roam your bedroom takes 1,000,000 nerve cells (neurons). Getting the de-duplication program to do its job just takes 5 "software cells" (functional units). Both, though, follow the same basic principle.

Translating functional units into code

Moving from functional design to code is no rocket science. In fact it's straightforward. There are two simple rules:

Translate an input port to a function.
Translate an output port either to a return statement in that function or to a function pointer visible to that function.

The simplest translation of a functional unit is a function. That's what you saw in the above example. Functions are mutually oblivious. That's why Functional Programming likes them so much. It makes them composable. Which is the reason nature works according to the PoMO. Let's be clear about one thing: There is no dependency injection in nature. For all of an organism's complexity no DI container is used. Behavior is the result of smooth cooperation between mutually oblivious building blocks.

Functions will often be the adequate translation for the functional units in your designs. But not always. Take for example the case where a processing step should not always produce an output. Maybe the purpose is to filter input. Here the functional unit consumes words and produces words. But it does not pass along every word flowing in. Some words are swallowed. Think of a spell checker. It probably should not check acronyms for correctness. There are too many of them. Or words with no more than two letters. Such words are called "stop words".

In the above picture the optionality of the output is signified by the asterisk outside the brackets. It means: Any number of (word) data items can flow from the functional unit for each input data item. It might be none or one or even more. This I call a stream of data. Such behavior cannot be translated into a function where output is generated with return. Because a function always needs to return a value.

So the output port is translated into a function pointer or continuation which gets passed to the subroutine when called:[3]

void filter_stop_words(
    string word,
    Action<string> onNoStopWord) {
    if (...check if not a stop word...)
        onNoStopWord(word);
}

If you want to be nitpicky you might call such a function pointer parameter an injection. And technically you're right. Conceptually, though, it's not an injection. Because the subroutine is not functionally dependent on the continuation. Firstly, continuations are procedures, i.e. subroutines without a return type. Remember: Flow Design is about unidirectional data flow. Secondly, the name of the formal parameter is chosen in a way as to not assume anything about downstream processing steps. onNoStopWord describes a situation (or event) within the functional unit only.

Translating output ports into function pointers helps keep functional units mutually oblivious in cases where output is optional or produced asynchronously. Either pass the function pointer to the function upon call. Or make it global by putting it on the encompassing class. Then it's called an event. In C# that's even an explicit feature:

class Filter {
    public void filter_stop_words(
        string word) {
        if (...check if not a stop word...)
            onNoStopWord(word);
    }

    public event Action<string> onNoStopWord;
}
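The wiring of such units is shown only further down the Flow Design road, so purely as an illustration of the PoMO - my own sketch, not the author's code, with a stand-in stop-word rule (words of two letters or fewer get swallowed) - here is how the event-based variant might be wired up by a composition root:

using System;

class Filter {
    public void filter_stop_words(string word) {
        // Stand-in stop word check; a real spell checker would use a richer rule.
        if (word.Length > 2)
            onNoStopWord?.Invoke(word);
    }

    public event Action<string> onNoStopWord;
}

class Program {
    static void Main() {
        var filter = new Filter();
        // Only this wiring code knows both sides; the filter and the
        // consumer stay mutually oblivious.
        filter.onNoStopWord += word => Console.WriteLine(word);

        filter.filter_stop_words("an");      // swallowed as a stop word
        filter.filter_stop_words("napkin");  // flows on downstream
    }
}

The continuation variant is wired the same way, just per call: filter_stop_words("napkin", word => Console.WriteLine(word));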
When to use a continuation and when to use an event depends on how a functional unit is used in flows and how it's packed together with others into classes. You'll see examples further down the Flow Design road.

Another example of 1D functional design

Let's see Flow Design once more in action using the visual notation. How about the famous word wrap kata? Robert C. Martin has posted a much cited solution including an extensive reasoning behind his TDD approach. So maybe you want to compare it to Flow Design. The function signature given is:

string WordWrap(string text, int maxLineLength) {...}

That's not an Entry Point since we don't see an application with an environment and users. Nevertheless it's a function which is supposed to provide a certain functionality. The text passed in has to be reformatted. The input is a single line of arbitrary length consisting of words separated by spaces. The output should consist of one or more lines of a maximum length specified. If a word is longer than the maximum line length it can be split into multiple parts each fitting in a line.

Flow Design

Let's start by brainstorming the process to accomplish the feat of reformatting the text. What's needed?

Words need to be assembled into lines
Words need to be extracted from the input text
The resulting lines need to be assembled into the output text
Words too long to fit in a line need to be split

Does that sound about right? I guess so. And it shows a kind of priority. Long words are a special case. So maybe there is a hint for an incremental design here. First let's tackle "average words" (words not longer than a line). Here's the Flow Design for this increment: The first three bullet points turned into functional units with explicit data added. As the signature requires, a text is transformed into another text. See the input of the first functional unit and the output of the last functional unit. In between no text flows, but words and lines. That's good to see because thereby the domain is clearly represented in the design. The requirements are talking about words and lines and here they are. But note the asterisk! It's not outside the brackets but inside. That means it's not a stream of words or lines, but lists or sequences. For each text a sequence of words is output. For each sequence of words a sequence of lines is produced.

The asterisk is used to abstract from the concrete implementation. Like with streams. Whether the list of words gets implemented as an array or an IEnumerable is not important during design. It's an implementation detail.

Does any processing step require further refinement? I don't think so. They all look pretty "atomic" to me. And if not... I can always backtrack and refine a process step using functional design later once I've gained more insight into a sub-problem.

Implementation

The implementation is straightforward as you can imagine. The processing steps can all be translated into functions. Each can be tested easily and separately. Each has a focused responsibility. And the process flow becomes just a sequence of function calls (a sketch follows below):
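The original post shows the resulting code as an image, which is missing from this copy. As a rough sketch of what the translation might look like - the helper names Extract_words, Reformat, and Format_lines are my assumptions based on the functional units named in the text, not the author's actual code:

using System;
using System.Collections.Generic;

static class WordWrapper {
    // (text) -> Extract words -> (words*) -> Reformat -> (lines*) -> (text)
    public static string WordWrap(string text, int maxLineLength) {
        var words = Extract_words(text);
        var lines = Reformat(words, maxLineLength);
        return Format_lines(lines);
    }

    // Split the input text into words; spaces are the separators.
    static string[] Extract_words(string text) {
        return text.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
    }

    // Assemble words into lines no longer than maxLineLength.
    static IEnumerable<string> Reformat(string[] words, int maxLineLength) {
        var line = "";
        foreach (var word in words) {
            if (line.Length == 0)
                line = word;
            else if (line.Length + 1 + word.Length <= maxLineLength)
                line += " " + word;
            else {
                yield return line;
                line = word;
            }
        }
        if (line.Length > 0) yield return line;
    }

    // Join the lines into the output text.
    static string Format_lines(IEnumerable<string> lines) {
        return string.Join("\n", lines);
    }
}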
Easy to understand. It clearly states how word wrapping works - on a high level of abstraction. And it's easy to evolve as you'll see.

Flow Design - Increment 2

So far only texts consisting of "average words" are wrapped correctly. Words not fitting in a line will result in lines too long. Wrapping long words is a feature of the requested functionality. Whether it's there or not makes a difference to the user. To quickly get feedback I decided to first implement a solution without this feature. But now it's time to add it to deliver the full scope.

Fortunately Flow Design automatically leads to code following the Open Closed Principle (OCP). It's easy to extend it - instead of changing well tested code. How's that possible? Flow Design allows for extension of functionality by inserting functional units into the flow. That way existing functional units need not be changed. The data flow arrow between functional units is a natural extension point. No need to resort to the Strategy Pattern. No need to think ahead where extensions might need to be made in the future. I just "phase in" the remaining processing step: Since neither Extract words nor Reformat know of their environment, neither needs to be touched due to the "detour". The new processing step accepts the output of the existing upstream step and produces data compatible with the existing downstream step.

Implementation - Increment 2

A trivial implementation, checking the assumption that this works, does not do anything to split long words. The input is just passed on (see the sketch below):
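Again the original shows an image here. A sketch of how the trivial increment might read, under the same assumed names as above - the new step is spliced into the flow without touching its neighbors:

static string[] Split_long_words(string[] words, int maxLineLength) {
    // Trivial first increment: pass all words through unchanged.
    // Actually splitting words longer than maxLineLength comes later.
    return words;
}

public static string WordWrap(string text, int maxLineLength) {
    var words = Extract_words(text);
    var split = Split_long_words(words, maxLineLength);
    var lines = Reformat(split, maxLineLength);
    return Format_lines(lines);
}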
Note how clean WordWrap() stays. The solution is easy to understand. A developer looking at this code sometime in the future, when a new feature needs to be built in, quickly sees how long words are dealt with. Compare this to Robert C. Martin's solution:[4] How does this solution handle long words? Long words are not even part of the domain language present in the code. At least I need considerable time to understand the approach.

Admittedly the Flow Design solution with the full implementation of long word splitting is longer than Robert C. Martin's. At least it seems so. Because his solution does not cover all the "word wrap situations" the Flow Design solution handles. Some lines would need to be added to be on par, I guess. But even then... Is a difference in LOC that important as long as it's in the same ball park? I value understandability and openness for extension higher than saving on the last line of code. Simplicity is not just less code, it's also clarity in design. But don't take my word for it. Try Flow Design on larger problems and compare for yourself. What's the easier, more straightforward way to clean code? And keep in mind: You ain't seen all yet ;-) There's more to Flow Design than described in this chapter.

In closing

I hope I was able to give you an impression of functional design that makes you hungry for more. To me it's an inevitable step in software development. Jumping from requirements to code does not scale. And it leads to dirty code all too quickly. Some thought should be invested first. Where there is a clear Entry Point visible, its functionality should be designed using data flows. Because with data flows abstraction is possible. For more background on why that's necessary read my blog article here.

For now let me point out to you - if you haven't already noticed - that Flow Design is a general purpose declarative language. It's "programming by intention" (Shalloway et al.). Just write down how you think the solution should work on a high level of abstraction. This breaks down a large problem into smaller problems. And by following the PoMO the solutions to those smaller problems are independent of each other. So they are easy to test. Or you could even think about getting them implemented in parallel by different team members. Flow Design not only increases evolvability, but also helps you become more productive. All team members can participate in functional design. This goes beyond collective code ownership. We're talking collective design/architecture ownership. Because with Flow Design there is a common visual language to talk about functional design - which is the foundation for all other design activities.

PS: If you like what you read, consider getting my ebook "The Incremental Architect's Napkin". It's where I compile all the articles in this series for easier reading.

[1] I like the strictness of Functional Programming - but I also find it quite hard to live by. And it certainly is not what millions of programmers are used to. Also, to me it seems the real world is full of state and side effects. So why give them such a bad image? That's why functional design takes a more pragmatic approach. State and side effects are ok for processing steps - but be sure to follow the SRP. Don't put too much of it into a single processing step.

[2] Image taken from www.physioweb.org

[3] My code samples are written in C#. C# sports typed function pointers called delegates. Action<T> is such a function pointer type matching functions with signature void someName(T t). Other languages provide similar ways to work with functions as first class citizens - even Java now in version 8. I trust you find a way to map this detail of my translation to your favorite programming language. I know it works for Java, C++, Ruby, JavaScript, Python, Go. And if you're using a Functional Programming language it's of course a no brainer.

[4] Taken from his blog post "The Craftsman 62, The Dark Path".

    Read the article

  • Sharepoint 2007: author.dll status code?

    - by CrazyNick
    Is there a way to find any info using the /_vti_bin/_vti_aut/author.dll status code?

<html><head><title>vermeer RPC packet</title></head>
<body>
<p>method=
<p>status=
<ul>
<li>status=393226
<li>osstatus=0
<li>msg=The form submission cannot be processed because it exceeded the maximum length allowed by the Web administrator. Please resubmit the form with less data.
<li>osmsg=
</ul>
</body>
</html>

    Read the article

  • MS Windows Server 2008R2 slow file copy, slow network connection

    - by MattrixHax
    I just set up a Windows 2008 R2 Standard server, with the only installed app being Hyper-V, and only one Windows XP VM running. Whenever I try to copy a file from my Windows 7 laptop over to the 2008 R2 server machine's admin shares (\\servername\c$) the files start transferring around 60mb/s and then drop to around 5mb/s. My Windows 7 machine and the Server 2008 machine are both in WORKGROUP (no domain here). When I try the same transfer to our Server 2003 box the transfer speeds are fine. I tried disabling autotuning (netsh interface tcp set global autotuninglevel=disabled) as well as turning off the checksum offload on the adapter (tx and rx). I still see strange packet errors (bad header checksum) using Wireshark and just cannot seem to track down what the issue is. Over 1 hour to transfer 4GB of files from one server to another on the same GB switch is just crazy... Any ideas would be greatly appreciated!

    Read the article

  • Windows 2008 R2 DHCP server not responding to DHCP discover

    - by MartinSteel
    I've got two Windows 2008 Enterprise R2 servers both running DNS and DHCP, called cod and lobster. DHCP is set up using the split scope option introduced with 2008 R2, whereby both servers should respond, with the first response providing the lease. Setup is as follows:

Cod
- IP: 192.168.0.231
- Pool: 192.168.0.101 - 192.168.0.179, exclusion for 160-179.
- Response Delay: 0ms
- Authorised in Active Directory (Re-authorised to confirm)
- Windows firewall disabled while testing

Lobster
- IP: 192.168.0.232
- Pool: 192.168.0.101 - 192.168.0.179, exclusion for 101-159.
- Response Delay: 1000ms
- Authorised in Active Directory

All DHCP leases to clients are currently being issued by Lobster rather than Cod. Packet captures with Wireshark show the following (all to broadcast address):

Client - DHCP Discover
Lobster - DHCP Offer (after 1s delay)
Client - DHCP Request
Lobster - DHCP Ack
Client - DHCP Inform

From my setup with two servers I'd expect to see a DHCP Offer coming from Cod almost immediately after the DHCP Discover. Does anybody have any idea what would prevent the DHCP server from responding to the discover?

    Read the article

  • How to make iPhone Cisco VPN client work with ASA with certificate authentication

    - by Ben Jencks
    I have an ASA that's providing IPsec VPN services using certificate authentication (no xauth, just the certs). It works perfectly with the Cisco IPsec VPN Client. Now I'm trying to let iPhones connect. I've installed the CA cert and a client certificate on the iPhone with a profile using iPCU, along with the VPN configuration. Then connecting gives the error "Could not validate the server certificate". Additionally, the ASA logs the error "Received encrypted Oakley Informational packet with invalid payloads". FWIW, I receive the same invalid payload error when trying to use the Snow Leopard IPsec client to connect. Has anyone successfully gotten the iPhone IPsec client to work with certificate auth?

    Read the article

  • Junos custom-attack signature pattern syntax

    - by James Hawkwind
    I am stuck at a point with the configuration of a custom-attack signature in Junos. According to the Junos Custom Attack Definition documentation page, I can set up a custom attack based upon a signature in the packet. In the documentation you can specify a "pattern" to match, but it fails to describe what the pattern syntax should be. Particularly, I want to match the HEX values of 8C 00 13 00 in the first four bytes of the TCP data payload. Does anyone know how to accomplish this correctly?

    Read the article

  • Windows 7 Machine Makes Router Drop -All- Wireless Connections

    - by Hammer Bro.
    Some background: My home network consists of my Desktop, a two-month old Windows 7 (x64) machine which is online most frequently (N-spec), as well as three other Windows XP laptops (all G) that only connect every now and then (one for work, one for Netflix, and the other for infrequent regular laptop uses). I used to have a Belkin F5D8236-4 wireless router, and everything worked great. A week ago, however, I found out that the Belkin absolutely in no way would establish a VPN connection, something that has become important for work. So I bought a Netgear WNR3500v2/U/L.

The wireless was acting a little sketchy at first for just the Windows 7 machine, but I thought it had something to do with 802.11N and I was in a hurry, so I just fished up an ethernet cable and disabled the computer's wireless. It has now become apparent, though, that whenever the Windows 7 machine is connected to the router, all wireless connections become unstable. I was using my work laptop for a solid six hours today with no trouble, having multiple SSH connections open over VPN and streaming internet radio in the background. Then, within two minutes of turning on this Windows 7 box, I had lost all connectivity over the wireless. And I was two feet away from the router. The same sort of thing happens on all of the other laptops -- Netflix can be playing stuff all weekend, but if I come up here and do things on this (W7) computer, the streaming will be dead within ten minutes.

So here are my basic observations:

If the Windows 7 machine is off, then all connections will have a Signal Strength of Very Good or Excellent and a Speed of 48-54 Mbps for an indefinite amount of time.
Shortly after the Windows 7 machine is turned on, all wireless connections will experience a consistent decline in Speed down to 1.0 Mbps, eventually losing their connection entirely. These machines will continue to maintain 70% signal strength, as observed by themselves and the router.
Once dropped, a wireless connection will have difficulty reconnecting. And if a connection manages to become established, it will quickly drop off again.
The Windows 7 machine itself will continue to function just fine if it's using a wired connection, although it will experience these same issues over the wireless.
All of the drivers and firmwares are up to date, and this happened both with the stock Netgear firmware as well as the (current) DD-WRT.

What I've tried:

Making sure each computer is being assigned a distinct IP. (They are.)
Disabling UPnP and Stateful Packet Inspection on the router.
Disabling Network Sharing, SSDP Discovery, TCP/IP NetBios Helper and Computer Browser services on the Windows 7 machine.
Disabling QoS Packet Scheduler, IPv6, and Link Layer Topology Discovery options on my ethernet controller (leaving only Client for Microsoft Networks, File and Printer Sharing, and IPv4 enabled).

What I think: It seems awfully similar to the problems discussed in detail at http://social.msdn.microsoft.com/Forums/en/wsk/thread/1064e397-9d9b-4ae2-bc8e-c8798e591915 (which was both the most relevant and concrete information I could dig up on the internet). I still think that something the Windows 7 IP stack (or just the Operating System itself) is doing is giving the router fits. However, I could be wrong, because I have two key differences. One is that most instances of this problem are reported as the entire router dying or restarting, and mine still works just fine over the wired connection.
The other is that it's a new router, tested with both the factory firmware and the (I assume) well-maintained DD-WRT project. Even if Windows 7 is still secretly sending IPv6 packets or the TCP Window Scaling implementation that I hear Vista caused some trouble with (even though I've tried my best to disable anything fancy), this router should support those functions.

I don't want to get a new or a replacement router unless someone can convince me that this is a defective unit. But the problem seems too specific and predictable by my instincts to be a hardware hiccup. And I don't want to deal with the inevitable problems that always seem to take half a day to resolve when getting a new router, since I'm frantically working (including tomorrow) to complete a project by next week's deadline. Plus, I think in the worst case scenario, I could keep this router connected directly to the modem, disable its wireless entirely, and connect the old Belkin to it directly. That should allow me to still use VPN (although I'll have to plug my work laptop directly into that router), and then maintain wireless connections for all of the other computers. But that feels so wrong to me.

Anyone have any ideas what the cause and possible solution could be?

Clarifications: The Windows 7 machine is directly connected via an ethernet cable to the router for everything above. But while it is online, all other computers' wireless connections become unusable. It is not an issue of signal strength or interference -- no other devices within scanning range are using Channel 1, and the problem will affect computers that are literally feet away from the router with 95% signal strength.

    Read the article

  • Radius Authorization against ActiveDirectory and the users file

    - by mohrphium
    I have a problem with my freeradius server configuration. I want to be able to authenticate users against Windows ActiveDirectory (2008 R2) and the users file, because some of my co-workers are not listed in AD. We use the freeradius server to authenticate WLAN users (PEAP/MSCHAPv2). AD authentication works great, but I still have problems with the /etc/freeradius/users file. When I run freeradius -X -x I get the following:

Mon Jul 2 09:15:58 2012 : Info: ++++[chap] returns noop
Mon Jul 2 09:15:58 2012 : Info: ++++[mschap] returns noop
Mon Jul 2 09:15:58 2012 : Info: [suffix] No '@' in User-Name = "testtest", looking up realm NULL
Mon Jul 2 09:15:58 2012 : Info: [suffix] Found realm "NULL"
Mon Jul 2 09:15:58 2012 : Info: [suffix] Adding Stripped-User-Name = "testtest"
Mon Jul 2 09:15:58 2012 : Info: [suffix] Adding Realm = "NULL"
Mon Jul 2 09:15:58 2012 : Info: [suffix] Authentication realm is LOCAL.
Mon Jul 2 09:15:58 2012 : Info: ++++[suffix] returns ok
Mon Jul 2 09:15:58 2012 : Info: [eap] EAP packet type response id 1 length 13
Mon Jul 2 09:15:58 2012 : Info: [eap] No EAP Start, assuming it's an on-going EAP conversation
Mon Jul 2 09:15:58 2012 : Info: ++++[eap] returns updated
Mon Jul 2 09:15:58 2012 : Info: [files] users: Matched entry testtest at line 1
Mon Jul 2 09:15:58 2012 : Info: ++++[files] returns ok
Mon Jul 2 09:15:58 2012 : Info: ++++[expiration] returns noop
Mon Jul 2 09:15:58 2012 : Info: ++++[logintime] returns noop
Mon Jul 2 09:15:58 2012 : Info: [pap] WARNING: Auth-Type already set. Not setting to PAP
Mon Jul 2 09:15:58 2012 : Info: ++++[pap] returns noop
Mon Jul 2 09:15:58 2012 : Info: +++- else else returns updated
Mon Jul 2 09:15:58 2012 : Info: ++- else else returns updated
Mon Jul 2 09:15:58 2012 : Info: Found Auth-Type = EAP
Mon Jul 2 09:15:58 2012 : Info: # Executing group from file /etc/freeradius/sites-enabled/default
Mon Jul 2 09:15:58 2012 : Info: +- entering group authenticate {...}
Mon Jul 2 09:15:58 2012 : Info: [eap] EAP Identity
Mon Jul 2 09:15:58 2012 : Info: [eap] processing type tls
Mon Jul 2 09:15:58 2012 : Info: [tls] Initiate
Mon Jul 2 09:15:58 2012 : Info: [tls] Start returned 1
Mon Jul 2 09:15:58 2012 : Info: ++[eap] returns handled
Sending Access-Challenge of id 199 to 192.168.61.11 port 3072
  EAP-Message = 0x010200061920
  Message-Authenticator = 0x00000000000000000000000000000000
  State = 0x85469e2a854487589fb1196910cb8ae3
Mon Jul 2 09:15:58 2012 : Info: Finished request 125.
Mon Jul 2 09:15:58 2012 : Debug: Going to the next request
Mon Jul 2 09:15:58 2012 : Debug: Waking up in 2.4 seconds.

After that it repeats the login attempt and at some point tries to authenticate against ActiveDirectory with ntlm, which doesn't work since the user exists only in the users file. Can someone help me out here? Thanks.

PS: Hope this helps, freeradius trying to auth against AD:

Mon Jul 2 09:15:58 2012 : Info: ++[chap] returns noop
Mon Jul 2 09:15:58 2012 : Info: ++[mschap] returns noop
Mon Jul 2 09:15:58 2012 : Info: [suffix] No '@' in User-Name = "testtest", looking up realm NULL
Mon Jul 2 09:15:58 2012 : Info: [suffix] Found realm "NULL"
Mon Jul 2 09:15:58 2012 : Info: [suffix] Adding Stripped-User-Name = "testtest"
Mon Jul 2 09:15:58 2012 : Info: [suffix] Adding Realm = "NULL"
Mon Jul 2 09:15:58 2012 : Info: [suffix] Authentication realm is LOCAL.
Mon Jul 2 09:15:58 2012 : Info: ++[suffix] returns ok
Mon Jul 2 09:15:58 2012 : Info: ++[control] returns ok
Mon Jul 2 09:15:58 2012 : Info: [eap] EAP packet type response id 7 length 67
Mon Jul 2 09:15:58 2012 : Info: [eap] No EAP Start, assuming it's an on-going EAP conversation
Mon Jul 2 09:15:58 2012 : Info: ++[eap] returns updated
Mon Jul 2 09:15:58 2012 : Info: [files] users: Matched entry testtest at line 1
Mon Jul 2 09:15:58 2012 : Info: ++[files] returns ok
Mon Jul 2 09:15:58 2012 : Info: ++[smbpasswd] returns notfound
Mon Jul 2 09:15:58 2012 : Info: ++[expiration] returns noop
Mon Jul 2 09:15:58 2012 : Info: ++[logintime] returns noop
Mon Jul 2 09:15:58 2012 : Info: [pap] WARNING: Auth-Type already set. Not setting to PAP
Mon Jul 2 09:15:58 2012 : Info: ++[pap] returns noop
Mon Jul 2 09:15:58 2012 : Info: Found Auth-Type = EAP
Mon Jul 2 09:15:58 2012 : Info: # Executing group from file /etc/freeradius/sites-enabled/inner-tunnel
Mon Jul 2 09:15:58 2012 : Info: +- entering group authenticate {...}
Mon Jul 2 09:15:58 2012 : Info: [eap] Request found, released from the list
Mon Jul 2 09:15:58 2012 : Info: [eap] EAP/mschapv2
Mon Jul 2 09:15:58 2012 : Info: [eap] processing type mschapv2
Mon Jul 2 09:15:58 2012 : Info: [mschapv2] # Executing group from file /etc/freeradius/sites-enabled/inner-tunnel
Mon Jul 2 09:15:58 2012 : Info: [mschapv2] +- entering group MS-CHAP {...}
Mon Jul 2 09:15:58 2012 : Info: [mschap] Creating challenge hash with username: testtest
Mon Jul 2 09:15:58 2012 : Info: [mschap] Told to do MS-CHAPv2 for testtest with NT-Password
Mon Jul 2 09:15:58 2012 : Info: [mschap] expand: --username=%{mschap:User-Name:-None} -> --username=testtest
Mon Jul 2 09:15:58 2012 : Info: [mschap] No NT-Domain was found in the User-Name.
Mon Jul 2 09:15:58 2012 : Info: [mschap] expand: %{mschap:NT-Domain} ->
Mon Jul 2 09:15:58 2012 : Info: [mschap] ... expanding second conditional
Mon Jul 2 09:15:58 2012 : Info: [mschap] expand: --domain=%{%{mschap:NT-Domain}:-AD.CXO.NAME} -> --domain=AD.CXO.NAME
Mon Jul 2 09:15:58 2012 : Info: [mschap] mschap2: 82
Mon Jul 2 09:15:58 2012 : Info: [mschap] Creating challenge hash with username: testtest
Mon Jul 2 09:15:58 2012 : Info: [mschap] expand: --challenge=%{mschap:Challenge:-00} -> --challenge=dd441972f987d68b
Mon Jul 2 09:15:58 2012 : Info: [mschap] expand: --nt-response=%{mschap:NT-Response:-00} -> --nt-response=7e6c537cd5c26093789cf7831715d378e16ea3e6c5b1f579
Mon Jul 2 09:15:58 2012 : Debug: Exec-Program output: Logon failure (0xc000006d)
Mon Jul 2 09:15:58 2012 : Debug: Exec-Program-Wait: plaintext: Logon failure (0xc000006d)
Mon Jul 2 09:15:58 2012 : Debug: Exec-Program: returned: 1
Mon Jul 2 09:15:58 2012 : Info: [mschap] External script failed.
Mon Jul 2 09:15:58 2012 : Info: [mschap] FAILED: MS-CHAP2-Response is incorrect
Mon Jul 2 09:15:58 2012 : Info: ++[mschap] returns reject
Mon Jul 2 09:15:58 2012 : Info: [eap] Freeing handler
Mon Jul 2 09:15:58 2012 : Info: ++[eap] returns reject
Mon Jul 2 09:15:58 2012 : Info: Failed to authenticate the user.
Mon Jul 2 09:15:58 2012 : Auth: Login incorrect (mschap: External script says Logon failure (0xc000006d)): [testtest] (from client techap01 port 0 via TLS tunnel)

PPS: Maybe the problem is located here: In /etc/freeradius/modules/ntlm_auth I have set ntlm to:

program = "/usr/bin/ntlm_auth --request-nt-key --domain=AD.CXO.NAME --username=%{mschap:User-Name} --password=%{User-Password}"

I need this so users can log in without adding @ad.cxo.name to their usernames. But how can I tell freeradius to try both logins?

testtest@ad.cxo.name (should fail)
testtest (against users file - should work)

    Read the article

  • How to enable "Sleep" for Windows 2012 server

    - by Ramesh
    I recently installed Windows Server 2012. This will serve as the dev instance for our engineers and will be accessed in multiple time zones. I therefore plan to run this 24x7 but want to conserve energy by going to sleep mode when not in use and enabling Wake-On-Magic-Packet. Based on the previous post it appears that there is no option to sleep Windows Server 2012 with Hyper-V. Since I don't care much about virtualization now, I uninstalled Hyper-V. In addition to this, I have done the following:

1) Ran powercfg -a. Don't find sleep.
2) Added [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\hvboot] "Start"=dword:00000003 to try gaining sleep - no luck!
3) Only see Shutdown and Restart in power options.

Please help me sleep and save the planet :-) Thanks, Ramesh

    Read the article

  • My IIS server won't serve SSL sites to some browsers

    - by sbleon
    (Update: This is now cross-posted at http://stackoverflow.com/questions/3355000. This is the more appropriate forum, but StackOverflow gets a lot more traffic.)

I've got an IIS 6.0 server that won't serve pages over SSL to some browsers. In Webkit-based browsers on OS X 10.6, I can't load pages at all. In MSIE 8 on Windows XP SP3, I can load pages, but it will sometimes hang downloading images or sending POSTs.

Working: Firefox 3.6 (OS X + Windows), Chrome (Windows)
Partially working: MSIE 8 (works sometimes, but hangs up, especially on POSTs)
Not working: Chrome 5 (OS X), Safari 5 (OS X), Mobile Safari (iOS 4)

On OS X (the easiest platform for me to test on), Chrome and Firefox both negotiate the same TLS cipher, but Chrome hangs on or after the post-negotiation handshake.

Chrome packet capture (via ssldump):

1 1 0.0485 (0.0485) C>S Handshake ClientHello Version 3.1 cipher suites Unknown value 0xc00a Unknown value 0xc009 Unknown value 0xc007 Unknown value 0xc008 Unknown value 0xc013 Unknown value 0xc014 Unknown value 0xc011 Unknown value 0xc012 Unknown value 0xc004 Unknown value 0xc005 Unknown value 0xc002 Unknown value 0xc003 Unknown value 0xc00e Unknown value 0xc00f Unknown value 0xc00c Unknown value 0xc00d Unknown value 0x2f TLS_RSA_WITH_RC4_128_SHA TLS_RSA_WITH_RC4_128_MD5 Unknown value 0x35 TLS_RSA_WITH_3DES_EDE_CBC_SHA Unknown value 0x32 Unknown value 0x33 Unknown value 0x38 Unknown value 0x39 TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA compression methods NULL
1 2 0.3106 (0.2620) S>C Handshake ServerHello Version 3.1 session_id[32]= bb 0e 00 00 7a 7e 07 50 5e 78 48 cf 43 5a f7 4d d2 ed 72 8f ff 1d 9e 74 66 74 03 b3 bb 92 8d eb cipherSuite TLS_RSA_WITH_RC4_128_MD5 compressionMethod NULL Certificate ServerHelloDone
1 3 0.3196 (0.0090) C>S Handshake ClientKeyExchange
1 4 0.3197 (0.0000) C>S ChangeCipherSpec
1 5 0.3197 (0.0000) C>S Handshake

[hang, no more data transmitted]

Firefox packet capture:

1 1 0.0485 (0.0485) C>S Handshake ClientHello Version 3.1 resume [32]= 14 03 00 00 4e 28 de aa da 7a 25 87 25 32 f3 a7 ae 4c 2d a0 e4 57 cc dd d7 0e d7 82 19 f7 8f b9 cipher suites Unknown value 0xff Unknown value 0xc00a Unknown value 0xc014 Unknown value 0x88 Unknown value 0x87 Unknown value 0x39 Unknown value 0x38 Unknown value 0xc00f Unknown value 0xc005 Unknown value 0x84 Unknown value 0x35 Unknown value 0xc007 Unknown value 0xc009 Unknown value 0xc011 Unknown value 0xc013 Unknown value 0x45 Unknown value 0x44 Unknown value 0x33 Unknown value 0x32 Unknown value 0xc00c Unknown value 0xc00e Unknown value 0xc002 Unknown value 0xc004 Unknown value 0x96 Unknown value 0x41 TLS_RSA_WITH_RC4_128_MD5 TLS_RSA_WITH_RC4_128_SHA Unknown value 0x2f Unknown value 0xc008 Unknown value 0xc012 TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA Unknown value 0xc00d Unknown value 0xc003 Unknown value 0xfeff TLS_RSA_WITH_3DES_EDE_CBC_SHA compression methods NULL
1 2 0.0983 (0.0497) S>C Handshake ServerHello Version 3.1 session_id[32]= 14 03 00 00 4e 28 de aa da 7a 25 87 25 32 f3 a7 ae 4c 2d a0 e4 57 cc dd d7 0e d7 82 19 f7 8f b9 cipherSuite TLS_RSA_WITH_RC4_128_MD5 compressionMethod NULL
1 3 0.0983 (0.0000) S>C ChangeCipherSpec
1 4 0.0983 (0.0000) S>C Handshake
1 5 0.1019 (0.0035) C>S ChangeCipherSpec
1 6 0.1019 (0.0000) C>S Handshake
1 7 0.1019 (0.0000) C>S application_data
1 8 0.2460 (0.1440) S>C application_data
1 9 0.3108 (0.0648) S>C application_data
1 10 0.3650 (0.0542) S>C application_data
1 11 0.4188 (0.0537) S>C application_data
1 12 0.4580 (0.0392) S>C application_data
1 13 0.4831 (0.0251) S>C application_data
[etc]

Update: Here's a Wireshark capture from the server end. What's going on with those two much-delayed RST packets? Is that just IIS terminating what it perceives as a non-responsive connection?

19 10.129450 67.249.xxx.xxx 10.100.xxx.xx TCP 50653 > https [SYN] Seq=0 Win=65535 Len=0 MSS=1460 WS=3 TSV=699250189 TSER=0
20 10.129517 10.100.xxx.xx 67.249.xxx.xxx TCP https > 50653 [SYN, ACK] Seq=0 Ack=1 Win=16384 Len=0 MSS=1460 WS=0 TSV=0 TSER=0
21 10.168596 67.249.xxx.xxx 10.100.xxx.xx TCP 50653 > https [ACK] Seq=1 Ack=1 Win=524280 Len=0 TSV=699250189 TSER=0
22 10.172950 67.249.xxx.xxx 10.100.xxx.xx TLSv1 Client Hello
23 10.173267 10.100.xxx.xx 67.249.xxx.xxx TCP [TCP segment of a reassembled PDU]
24 10.173297 10.100.xxx.xx 67.249.xxx.xxx TCP [TCP segment of a reassembled PDU]
25 10.385180 67.249.xxx.xxx 10.100.xxx.xx TCP 50653 > https [ACK] Seq=148 Ack=2897 Win=524280 Len=0 TSV=699250191 TSER=163006
26 10.385235 10.100.xxx.xx 67.249.xxx.xxx TLSv1 Server Hello, Certificate, Server Hello Done
27 10.424682 67.249.xxx.xxx 10.100.xxx.xx TCP 50653 > https [ACK] Seq=148 Ack=4215 Win=524280 Len=0 TSV=699250192 TSER=163008
28 10.435245 67.249.xxx.xxx 10.100.xxx.xx TLSv1 Client Key Exchange
29 10.438522 67.249.xxx.xxx 10.100.xxx.xx TLSv1 Change Cipher Spec
30 10.438553 10.100.xxx.xx 67.249.xxx.xxx TCP https > 50653 [ACK] Seq=4215 Ack=421 Win=65115 Len=0 TSV=163008 TSER=699250192
31 10.449036 67.249.xxx.xxx 10.100.xxx.xx TLSv1 Encrypted Handshake Message
32 10.580652 10.100.xxx.xx 67.249.xxx.xxx TCP https > 50653 [ACK] Seq=4215 Ack=458 Win=65078 Len=0 TSV=163010 TSER=699250192
7312 57.315338 10.100.xxx.xx 67.249.xxx.xxx TCP https > 50644 [RST, ACK] Seq=1 Ack=1 Win=0 Len=0
19531 142.316425 10.100.xxx.xx 67.249.xxx.xxx TCP https > 50653 [RST, ACK] Seq=4215 Ack=458 Win=0 Len=0

    Read the article

  • How to troubleshoot a remote wmi query/access failure?

    - by Roman
    Hi, I'm using Powershell to query a remote computer in a domain for a WMI object, e.g.: "gwmi -computer test -class win32_bios". I get this error message:

Value does not fall within the expected range

Executing the query locally under the same user works fine. It seems to happen on both Windows 2003 and also 2008 systems. The user that runs the shell has admin rights on the local and remote server. I checked WMI and DCOM permissions as far as I know how to do this; they seem to be the same on a server where it works and on another where it does not. I think it is not a network issue: all ports that are needed are open, and it also happens within the same subnet. When sniffing the traffic we see the following errors:

RPC: c/o Alter Cont Resp: Call=0x2 Assoc Grp=0x4E4E Xmit=0x16D0 Recv=0x16D0
Warning: GssAPIMechanism is not found, either caused by not reassembled, conversation off or filtering.

And an error message from Kerberos:

Kerberos: KRB_ERROR - KDC_ERR_BADOPTION (13)
The option code in the packet is 0x40830000

Any idea what I should look into?

    Read the article

  • pcap stream rotation and pruning

    - by pilcrow
    Some of my servers collect a lot of packet data. Is there a utility (or patch to tcpdump(1)) to log a pcap stream to disk which:

1. Rotates based on size of data written
2. Prunes written files, keeping only the N most recent
3. Does not re-use output filenames
4. Is self-contained (ruling out, e.g., a rotation with external pruning via crond(8)+tmpwatch(8))

Basically I want a multilog or svlogd that groks the pcap record format. The -W filecount option of tcpdump-4.0.0 "prunes" by recycling old filenames, which violates #3 above, forcing me to consult mtimes to determine recency and providing no guarantees against surprise truncation of the log file. The -G option introduces strftime(3)-specifier support in output filenames, which would give me at least second-precision in file names, but I can't figure out how to get pruning to work with this scheme.
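For reference, a sketch of the -G scheme referred to above (time-based rather than size-based, and still without pruning): tcpdump expands the strftime specifiers in the -w template on each rotation, so filenames are never re-used. The interface and path here are placeholders.

# rotate to a fresh, timestamped file every hour; old files are never
# recycled, but nothing prunes them either
tcpdump -i eth0 -G 3600 -w '/var/log/pcap/trace-%Y%m%d-%H%M%S.pcap'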

    Read the article

  • Server Security

    - by mahatmanich
    I want to run my own root server (directly accessible from the web without a hardware firewall) with Debian Lenny, Apache 2, PHP 5, MySQL, Postfix MTA, SFTP (based on SSH) and maybe a DNS server. What measures/software would you recommend, and why, to lock this server down and minimize the attack vector? Web applications aside... This is what I have so far:

iptables (for general packet filtering)
fail2ban (brute force attack defense)
ssh (change default port, disable root access)
modsecurity - is really clumsy and a pain (any alternative here?)
sudo - why should I use it? What is the advantage over normal user handling?
thinking about greensql for mysql (www.greensql.net)
is tripwire worth looking at? snort?

What am I missing? What is hot and what is not? Best practices? I like "KISS" - keep it simple and secure. I know, it would be nice! Thanks in advance ...
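To illustrate the first item only - not a vetted ruleset, and the SSH port and services are assumptions to adapt - a minimal default-deny iptables policy for such a box might start out like this:

# drop everything inbound by default, allow only what is explicitly needed
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 2222 -j ACCEPT   # SSH/SFTP on a non-default port
iptables -A INPUT -p tcp --dport 80 -j ACCEPT     # Apache
iptables -A INPUT -p tcp --dport 25 -j ACCEPT     # Postfix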

    Read the article

< Previous Page | 67 68 69 70 71 72 73 74 75 76 77 78  | Next Page >