Search Results

Search found 1855 results on 75 pages for 'tango 123'.


  • How to install php-devel under CentOS 6.3 x64?

    - by Jeremy Dicaire
    I'm trying to install php-devel on my CentOS 6.3 VPS and get a failed dependencies test. From phpinfos(): SYSTEM Linux 2.6.32-279.5.2.el6.x86_64 #1 x86_64 NTS error: Failed dependencies: php(x86-64) = 5.4.6-1.el6.remi is needed by php-devel-5.4.6-1.el6.remi.x86_64 I've tried the following RPM packages: php54w-devel-5.4.6-1.w6.x86_64.rpm php-devel-5.4.6-1.el6.remi.i686.rpm php-devel-5.4.6-1.el6.remi.x86_64.rpm One of the above package gave me this: root@sv1 [/tmp]# rpm -Uvh php-devel-5.4.6-1.el6.remi.i686.rpm warning: php-devel-5.4.6-1.el6.remi.i686.rpm: Header V3 DSA/SHA1 Signature, key ID 00f97f56: NOKEY error: Failed dependencies: php(x86-32) = 5.4.6-1.el6.remi is needed by php-devel-5.4.6-1.el6.remi.i686 libbz2.so.1 is needed by php-devel-5.4.6-1.el6.remi.i686 libcom_err.so.2 is needed by php-devel-5.4.6-1.el6.remi.i686 libcrypto.so.10 is needed by php-devel-5.4.6-1.el6.remi.i686 libedit.so.0 is needed by php-devel-5.4.6-1.el6.remi.i686 libgmp.so.3 is needed by php-devel-5.4.6-1.el6.remi.i686 libgssapi_krb5.so.2 is needed by php-devel-5.4.6-1.el6.remi.i686 libk5crypto.so.3 is needed by php-devel-5.4.6-1.el6.remi.i686 libkrb5.so.3 is needed by php-devel-5.4.6-1.el6.remi.i686 libncurses.so.5 is needed by php-devel-5.4.6-1.el6.remi.i686 libssl.so.10 is needed by php-devel-5.4.6-1.el6.remi.i686 libstdc++.so.6 is needed by php-devel-5.4.6-1.el6.remi.i686 libxml2.so.2 is needed by php-devel-5.4.6-1.el6.remi.i686 libxml2.so.2(LIBXML2_2.4.30) is needed by php-devel-5.4.6-1.el6.remi.i686 libxml2.so.2(LIBXML2_2.5.2) is needed by php-devel-5.4.6-1.el6.remi.i686 libxml2.so.2(LIBXML2_2.6.0) is needed by php-devel-5.4.6-1.el6.remi.i686 libxml2.so.2(LIBXML2_2.6.11) is needed by php-devel-5.4.6-1.el6.remi.i686 libxml2.so.2(LIBXML2_2.6.5) is needed by php-devel-5.4.6-1.el6.remi.i686 libz.so.1 is needed by php-devel-5.4.6-1.el6.remi.i686 I don't know how to fix this error and download all the dependencies. Thank you. Edit 1 (for quanta): Here is "yum repolist": root@sv1 [/tmp]# yum repolist Loaded plugins: fastestmirror, presto Loading mirror speeds from cached hostfile * base: mirror.atlanticmetro.net * epel: mirror.cogentco.com * extras: mirror.atlanticmetro.net * rpmforge: mirror.us.leaseweb.net * updates: centos.mirror.choopa.net repo id repo name status base CentOS-6 - Base 5,980+366 epel Extra Packages for Enterprise Linux 6 - x86_64 6,493+1,272 extras CentOS-6 - Extras 4 rpmforge RHEL 6 - RPMforge.net - dag 2,123+2,310 updates CentOS-6 - Updates 499+29 repolist: 15,099 root@sv1 [/tmp]# rpm -qa | grep php didn't return any result. 
I forgot to mention I'm using cPanel/WHM Edit 2 after adding the Remi repo: >root@sv1 [/etc/yum.repos.d]# yum clean all Loaded plugins: fastestmirror, presto Cleaning repos: base epel extras remi remi-test rpmforge updates Cleaning up Everything Cleaning up list of fastest mirrors 1 delta-package files removed, by presto >root@sv1 [/etc/yum.repos.d]# yum repolist Loaded plugins: fastestmirror, presto Determining fastest mirrors epel/metalink | 12 kB 00:00 * base: centos.mirror.nac.net * epel: mirror.symnds.com * extras: centos.mirror.choopa.net * remi: remi-mirror.dedipower.com * remi-test: remi-mirror.dedipower.com * rpmforge: mirror.us.leaseweb.net * updates: centos.mirror.nac.net base | 3.7 kB 00:00 base/primary_db | 4.5 MB 00:00 epel | 4.3 kB 00:00 epel/primary_db | 4.7 MB 00:00 extras | 3.0 kB 00:00 extras/primary_db | 6.3 kB 00:00 remi | 2.9 kB 00:00 remi/primary_db | 330 kB 00:00 remi-test | 2.9 kB 00:00 remi-test/primary_db | 85 kB 00:00 rpmforge | 1.9 kB 00:00 rpmforge/primary_db | 2.5 MB 00:00 updates | 3.5 kB 00:00 updates/primary_db | 2.3 MB 00:00 repo id repo name status base CentOS-6 - Base 5,980+366 epel Extra Packages for Enterprise Linux 6 - x86_64 6,493+1,272 extras CentOS-6 - Extras 4 remi Les RPM de remi pour Enterprise Linux 6 - x86_64 96+564 remi-test Les RPM de remi en test pour Enterprise Linux 6 - x86_64 25+139 rpmforge RHEL 6 - RPMforge.net - dag 2,123+2,310 updates CentOS-6 - Updates 499+29 repolist: 15,220 >root@sv1 [/etc/yum.repos.d]# yum install php-devel Loaded plugins: fastestmirror, presto Loading mirror speeds from cached hostfile * base: centos.mirror.nac.net * epel: mirror.symnds.com * extras: centos.mirror.choopa.net * remi: remi-mirror.dedipower.com * remi-test: remi-mirror.dedipower.com * rpmforge: mirror.us.leaseweb.net * updates: centos.mirror.nac.net Setting up Install Process No package php-devel available. Error: Nothing to do >root@sv1 [/etc/yum.repos.d]#

    Read the article

  • SQL SERVER – 5 Tips for Improving Your Data with expressor Studio

    - by pinaldave
    It’s no secret that bad data leads to bad decisions and poor results.  However, how do you prevent dirty data from taking up residency in your data store?  Some might argue that it’s the responsibility of the person sending you the data.  While that may be true, in practice that will rarely hold up.  It doesn’t matter how many times you ask, you will get the data however they decide to provide it. So now you have bad data.  What constitutes bad data?  There are quite a few valid answers, for example: Invalid date values Inappropriate characters Wrong data Values that exceed a pre-set threshold While it is certainly possible to write your own scripts and custom SQL to identify and deal with these data anomalies, that effort often takes too long and becomes difficult to maintain.  Instead, leveraging an ETL tool like expressor Studio makes the data cleansing process much easier and faster.  Below are some tips for leveraging expressor to get your data into tip-top shape. Tip 1:     Build reusable data objects with embedded cleansing rules One of the new features in expressor Studio 3.2 is the ability to define constraints at the metadata level.  Using expressor’s concept of Semantic Types, you can define reusable data objects that have embedded logic such as constraints for dealing with dirty data.  Once defined, they can be saved as a shared atomic type and then re-applied to other data attributes in other schemas. As you can see in the figure above, I’ve defined a constraint on zip code.  I can then save the constraint rules I defined for zip code as a shared atomic type called zip_type for example.   The next time I get a different data source with a schema that also contains a zip code field, I can simply apply the shared atomic type (shown below) and the previously defined constraints will be automatically applied. Tip 2:     Unlock the power of regular expressions in Semantic Types Another powerful feature introduced in expressor Studio 3.2 is the option to use regular expressions as a constraint.   A regular expression is used to identify patterns within data.   The patterns could be something as simple as a date format or something much more complex such as a street address.  For example, I could define that a valid IP address should be made up of 4 numbers, each 0 to 255, and separated by a period.  So 192.168.23.123 might be a valid IP address whereas 888.777.0.123 would not be.   How can I account for this using regular expressions? A very simple regular expression that would look for any 4 sets of 3 digits separated by a period would be:  ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ Alternatively, the following would be the exact check for truly valid IP addresses as we had defined above:  ^(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])$ .  In expressor, we would enter this regular expression as a constraint like this: Here we select the corrective action to be ‘Escalate’, meaning that the expressor Dataflow operator will decide what to do.  Some of the options include rejecting the offending record, skipping it, or aborting the dataflow. Tip 3:     Email pattern expressions that might come in handy In the example schema that I am using, there’s a field for email.  Email addresses are often entered incorrectly because people are trying to avoid spam.  
While there are a lot of different ways to define what constitutes a valid email address, a quick search online yields a couple of really useful regular expressions for validating email addresses: This one is short and sweet:  \b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b (Source: http://www.regular-expressions.info/) This one is more specific about which characters are allowed:  ^([a-zA-Z0-9_\-\.]+)@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.)|(([a-zA-Z0-9\-]+\.)+))([a-zA-Z]{2,4}|[0-9]{1,3})(\]?)$ (Source: http://regexlib.com/REDetails.aspx?regexp_id=26 ) Tip 4:     Reject “dirty data” for analysis or further processing Yet another feature introduced in expressor Studio 3.2 is the ability to reject records based on constraint violations.  To capture reject records on input, simply specify Reject Record in the Error Handling setting for the Read File operator.  Then attach a Write File operator to the reject port of the Read File operator as such: Next, in the Write File operator, you can configure the expressor operator in a similar way to the Read File.  The key difference would be that the schema needs to be derived from the upstream operator as shown below: Once configured, expressor will output rejected records to the file you specified.  In addition to the rejected records, expressor also captures some diagnostic information that will be helpful towards identifying why the record was rejected.  This makes diagnosing errors much easier! Tip 5:    Use a Filter or Transform after the initial cleansing to finish the job Sometimes you may want to predicate the data cleansing on a more complex set of conditions.  For example, I may only be interested in processing data containing males over the age of 25 in certain zip codes.  Using an expressor Filter operator, you can define the conditional logic which isolates the records of importance away from the others. Alternatively, the expressor Transform operator can be used to alter the input value via a user defined algorithm or transformation.  It also supports the use of conditional logic and data can be rejected based on constraint violations. However, the best tip I can leave you with is to not constrain your solution design approach – expressor operators can be combined in many different ways to achieve the desired results.  For example, in the expressor Dataflow below, I can post-process the reject data from the Filter which did not meet my pre-defined criteria and, if successful, Funnel it back into the flow so that it gets written to the target table. I continue to be impressed that expressor offers all this functionality as part of their FREE expressor Studio desktop ETL tool, which you can download from here.  Their Studio ETL tool is absolutely free and they are very open about saying that if you want to deploy their software on a dedicated Windows Server, you need to purchase their server software, whose pricing is posted on their website. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
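Not part of the original article: a minimal C# sketch for sanity-checking the IP-address and email patterns quoted above before wiring them into an expressor Semantic Type constraint. The class name and sample values are illustrative assumptions, not anything from the post.

using System;
using System.Text.RegularExpressions;

class PatternSanityCheck
{
    // Strict IPv4 pattern from Tip 2: each octet is limited to 0-255.
    const string IpPattern =
        @"^(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\." +
        @"(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\." +
        @"(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\." +
        @"(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])$";

    // "Short and sweet" email pattern from Tip 3 (run case-insensitively).
    const string EmailPattern = @"\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b";

    static void Main()
    {
        Console.WriteLine(Regex.IsMatch("192.168.23.123", IpPattern));  // True
        Console.WriteLine(Regex.IsMatch("888.777.0.123", IpPattern));   // False
        Console.WriteLine(Regex.IsMatch("someone@example.com", EmailPattern, RegexOptions.IgnoreCase)); // True
        Console.WriteLine(Regex.IsMatch("not an email", EmailPattern, RegexOptions.IgnoreCase));        // False
    }
}

Anything that fails here should also trip the corresponding constraint inside expressor, which makes this an easy way to build up test values for the Escalate/reject handling described in Tips 2 and 4.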

    Read the article

  • WCF – interchangeable data-contract types

    - by nmarun
    In a WSDL based environment, unlike a CLR-world, we pass around the ‘state’ of an object and not the reference of an object. Well firstly, what does ‘state’ mean and does this also mean that we can send a struct where a class is expected (or vice-versa) as long as their ‘state’ is one and the same? Let’s see. So I have an operation contract defined as below: 1: [ServiceContract] 2: public interface ILearnWcfServiceExtend : ILearnWcfService 3: { 4: [OperationContract] 5: Employee SaveEmployee(Employee employee); 6: } 7:  8: [ServiceBehavior] 9: public class LearnWcfService : ILearnWcfServiceExtend 10: { 11: public Employee SaveEmployee(Employee employee) 12: { 13: employee.EmployeeId = 123; 14: return employee; 15: } 16: } Quite simplistic operation there (which translates to ‘absolutely no business value’). Now, the data contract Employee mentioned above is a struct. 1: public struct Employee 2: { 3: public int EmployeeId { get; set; } 4:  5: public string FName { get; set; } 6: } After compilation and consumption of this service, my proxy (in the Reference.cs file) looks like below (I’ve ignored the rest of the details just to avoid unwanted confusion): 1: public partial struct Employee : System.Runtime.Serialization.IExtensibleDataObject, System.ComponentModel.INotifyPropertyChanged I call the service with the code below: 1: private static void CallWcfService() 2: { 3: Employee employee = new Employee { FName = "A" }; 4: Console.WriteLine("IsValueType: {0}", employee.GetType().IsValueType); 5: Console.WriteLine("IsClass: {0}", employee.GetType().IsClass); 6: Console.WriteLine("Before calling the service: {0} - {1}", employee.EmployeeId, employee.FName); 7: employee = LearnWcfServiceClient.SaveEmployee(employee); 8: Console.WriteLine("Return from the service: {0} - {1}", employee.EmployeeId, employee.FName); 9: } The output is: I now change my Employee type from a struct to a class in the proxy class and run the application: 1: public partial class Employee : System.Runtime.Serialization.IExtensibleDataObject, System.ComponentModel.INotifyPropertyChanged { The output this time is: The state of an object implies towards its composition, the properties and the values of these properties and not based on whether it is a reference type (class) or a value type (struct). And as shown above, we’re actually passing an object by its state and not by reference. Continuing on the same topic of ‘type-interchangeability’, WCF treats two data contracts as equivalent if they have the same ‘wire-representation’. We can do so using the DataContract and DataMember attributes’ Name property. 1: [DataContract] 2: public struct Person 3: { 4: [DataMember] 5: public int Id { get; set; } 6:  7: [DataMember] 8: public string FirstName { get; set; } 9: } 10:  11: [DataContract(Name="Person")] 12: public class Employee 13: { 14: [DataMember(Name = "Id")] 15: public int EmployeeId { get; set; } 16:  17: [DataMember(Name="FirstName")] 18: public string FName { get; set; } 19: } I’ve created two data contracts with the exact same wire-representation. Just remember that the names and the types of data members need to match to be considered equivalent. The question then arises as to what gets generated in the proxy class. Despite us declaring two data contracts (Person and Employee), only one gets emitted – Person. This is because we’re saying that the Employee type has the same wire-representation as the Person type. 
Also that the signature of the SaveEmployee operation gets changed on the proxy side: 1: [System.CodeDom.Compiler.GeneratedCodeAttribute("System.ServiceModel", "4.0.0.0")] 2: [System.ServiceModel.ServiceContractAttribute(ConfigurationName="ServiceProxy.ILearnWcfServiceExtend")] 3: public interface ILearnWcfServiceExtend 4: { 5: [System.ServiceModel.OperationContractAttribute(Action="http://tempuri.org/ILearnWcfServiceExtend/SaveEmployee", ReplyAction="http://tempuri.org/ILearnWcfServiceExtend/SaveEmployeeResponse")] 6: ClientApplication.ServiceProxy.Person SaveEmployee(ClientApplication.ServiceProxy.Person employee); 7: } But, on the service side, the SaveEmployee still accepts and returns an Employee data contract. 1: [ServiceBehavior] 2: public class LearnWcfService : ILearnWcfServiceExtend 3: { 4: public Employee SaveEmployee(Employee employee) 5: { 6: employee.EmployeeId = 123; 7: return employee; 8: } 9: } Despite all these changes, our output remains the same as the last one: This is type-interchangeability at work! Here’s one more thing to ponder about. Our Person type is a struct and Employee type is a class. Then how is it that the Person type got emitted as a ‘class’ in the proxy? It’s worth mentioning that WSDL describes a type called Employee and does not say whether it is a class or a struct (see the SOAP message below): 1: <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" 2: xmlns:tem="http://tempuri.org/" 3: xmlns:ser="http://schemas.datacontract.org/2004/07/ServiceApplication"> 4: <soapenv:Header/> 5: <soapenv:Body> 6: <tem:SaveEmployee> 7: <!--Optional:--> 8: <tem:employee> 9: <!--Optional:--> 10: <ser:EmployeeId>?</ser:EmployeeId> 11: <!--Optional:--> 12: <ser:FName>?</ser:FName> 13: </tem:employee> 14: </tem:SaveEmployee> 15: </soapenv:Body> 16: </soapenv:Envelope> There are some differences between how ‘Add Service Reference’ and the svcutil.exe generate the proxy class, but turns out both do some kind of reflection and determine the type of the data contract and emit the code accordingly. So since the Employee type is a class, the proxy ‘Person’ type gets generated as a class. In fact, reflecting on svcutil.exe application, you’ll see that there are a couple of places wherein a flag actually determines a type as a class or a struct. One example is in the ExportISerializableDataContract method in the System.Runtime.Serialization.CodeExporter class. Seems like these flags have a say in deciding whether the type gets emitted as a struct or a class. This behavior is different if you use the WSDL tool though. WSDL tool does not do any kind of reflection of the data contract / serialized type, it emits the type as a class by default. You can check this using the two command lines below:   Note to self: Remember ‘state’ and type-interchangeability when traversing through the WSDL planet!
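Not part of the original post: a small console sketch that makes the "same wire-representation" claim easy to verify yourself. It assumes the Person and Employee contracts are declared exactly as above and live in the same CLR namespace (which determines the default data-contract namespace); the class and method names here are mine.

using System;
using System.IO;
using System.Runtime.Serialization;
using System.Text;

class WireFormatCheck
{
    static string Serialize<T>(T instance)
    {
        var serializer = new DataContractSerializer(typeof(T));
        using (var stream = new MemoryStream())
        {
            serializer.WriteObject(stream, instance);
            return Encoding.UTF8.GetString(stream.ToArray());
        }
    }

    static void Main()
    {
        string personXml = Serialize(new Person { Id = 123, FirstName = "A" });
        string employeeXml = Serialize(new Employee { EmployeeId = 123, FName = "A" });

        // Both should emit a <Person> element containing <Id> and <FirstName>,
        // because Employee's DataContract/DataMember Name properties map it onto
        // the Person wire format.
        Console.WriteLine(personXml);
        Console.WriteLine(employeeXml);

        // Prints True when the two serialized forms match exactly.
        Console.WriteLine(personXml == employeeXml);
    }
}

Seeing the two XML documents side by side is also a quick way to spot a member whose name or type quietly breaks the equivalence.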

    Read the article

  • Establishing WebLogic Server HTTPS Trust of IIS Using a Microsoft Local Certificate Authority

    - by user647124
    Everyone agrees that self-signed and demo certificates for SSL and HTTPS should never be used in production, and they are best avoided elsewhere too. Most self-signed and demo certificates are provided by vendors with the intention that they are used only to integrate within the same environment. In a vendor’s perfect world all application servers in a given enterprise are from the same vendor, which makes this lack of interoperability in a non-production environment an advantage. For us working in the real world, where not only do we not use a single vendor everywhere but also have to make do with self-signed certificates for all but production, testing HTTPS between an IIS ASP.NET service provider and a WebLogic J2EE consumer application can be very frustrating to set up. It was for me, especially having found many blogs and discussion threads where various solutions were described but did not quite work and were all mostly similar but just a little bit different. To save both you and my future self (who always seems to forget the hardest-won lessons) all of the pain and suffering, I am recording the steps that finally worked here for reference and sanity. How You Know You Need This The first cold clutch of dread telling you it is going to be a long day comes when you attempt to read a WSDL published by IIS from WebLogic over HTTPS and you see the following: <Jul 30, 2012 2:51:31 PM EDT> <Warning> <Security> <BEA-090477> <Certificate chain received from myserver.mydomain.com - 10.555.55.123 was not trusted causing SSL handshake failure.> weblogic.wsee.wsdl.WsdlException: Failed to read wsdl file from url due to -- javax.net.ssl.SSLKeyException: [Security:090477]Certificate chain received from myserver02.mydomain.com - 10.555.55.123 was not trusted causing SSL handshake failure. The above is what started a three-day sojourn of searching for a solution. Even people who had solved it before would tell me how they did it, and then shrug when I demonstrated that the steps did not end in the success they claimed I would experience. Rather than torture you with the details of everything I did that did not work, here is what finally did work. Export the Certificates from IE First, take the offending WSDL URL and paste it into IE (if you have an internal Microsoft CA, you have IE, even if you don’t use it in favor of some other browser). To state the semi-obvious, if you received the error above there is a certificate configured for the IIS host of the service and the SSL port has been configured properly. Otherwise there would be a different error, usually about the site not being found or the connection failing. Once the WSDL loads, to the right of the address bar there will be a lock icon. Click the lock and then click View Certificates in the resulting dialog (if you do not have a lock icon but do have a Certificate Error message, see http://support.microsoft.com/kb/931850 for steps to install the certificate, then you can continue from the point of finding the lock icon). Figure 1: View Certificates in IE Next, select the Details tab in the resulting dialog. Figure 2: Use Certificate Details to Export Certificate Click Copy to File, then Next, then select the Base-64 encoded option for the format. Figure 3: Select the Base-64 encoded option for the format For the sake of simplicity, I choose to save this to the root of the WebLogic domain. It will work from anywhere, but later you will need to type in the full path rather than just the certificate name if you save it elsewhere.
Figure 4: Browse to Save Location Figure 5: Save the Certificate to the Domain Root for Convenience This is the point where I ran into some confusion. Some articles mentioned exporting the entire chain of certificates. This supposedly works for some types of certificates, or if you have a few other tools and the time to learn them. The SSL experts out there already have these tools, know how to use them well, and should not be wasting their time reading this article meant for folks who just want to get things wired up and back to unit testing and development. For the rest of us, the easiest way to make sure things will work is to just export all the links in the chain individually and let WebLogic Server worry about re-assembling them into a chain (which it does quite nicely). While perhaps not the most elegant solution, the multi-step process is easy to repeat and uses only tools that are immediately available and require no learning curve. So… Next, go to Tools, then Internet Options, then the Content tab, and click Certificates. Go to the Trusted Root Certification Authorities tab and find the certificate root for your Microsoft CA cert (look for the Issuer of the certificate you exported earlier). Figure 6: Trusted Root Certification Authorities Tab Export this one the same way as before, with a different name. Figure 7: Use a Unique Name for Each Certificate Repeat this once more for the Intermediate Certification Authorities tab. Import the Certificates to the WebLogic Domain Now, open a command prompt, navigate to [WEBLOGIC_DOMAIN_ROOT]\bin and execute setDomainEnv. You should then be in the root of the domain. If not, CD to the domain root. Assuming you saved the certificate in the domain root, execute the following: keytool -importcert -alias [ALIAS-1] -trustcacerts -file [FULL PATH TO .CER 1] -keystore truststore.jks -storepass [PASSWORD] An example with the variables filled in is: keytool -importcert -alias IIS-1 -trustcacerts -file microsftcert.cer -keystore truststore.jks -storepass password After several lines of output you will be prompted with: Trust this certificate? [no]: The correct answer is ‘yes’ (minus the quotes, of course). You’ll know you were successful if the response is: Certificate was added to keystore If not, check your typing, as that is generally the source of an error at this point. Repeat this for all three of the certificates you exported, changing the [ALIAS-1] and [FULL PATH TO .CER 1] values each time. For example: keytool -importcert -alias IIS-1 -trustcacerts -file microsftcert.cer -keystore truststore.jks -storepass password keytool -importcert -alias IIS-2 -trustcacerts -file microsftcertRoot.cer -keystore truststore.jks -storepass password keytool -importcert -alias IIS-3 -trustcacerts -file microsftcertIntermediate.cer -keystore truststore.jks -storepass password In the above we created a new JKS keystore. You can re-use an existing one by changing the name of the JKS file to one you already have and changing the password to the one that matches that JKS file. For the DemoTrust.jks that is included with WebLogic the password is DemoTrustKeyStorePassPhrase.
An example here would be: keytool -importcert -alias IIS-1 -trustcacerts -file microsoft.cer -keystore DemoTrust.jks -storepass DemoTrustKeyStorePassPhrase keytool -importcert -alias IIS-2 -trustcacerts -file microsoftRoot.cer -keystore DemoTrust.jks -storepass DemoTrustKeyStorePassPhrase keytool -importcert -alias IIS-3 -trustcacerts -file microsoftInter.cer -keystore DemoTrust.jks -storepass DemoTrustKeyStorePassPhrase Whichever keystore you use, you can check your work with: keytool -list -keystore truststore.jks -storepass password Where “truststore.jks” and “password” can be replaced appropriately if necessary. The output will look something like this: Figure 8: Output from keytool -list -keystore Update the WebLogic Keystore Configuration If you used an existing keystore rather than creating a new one, you can restart your WebLogic Server and skip the rest of this section. For those of us who created a new one because those are the instructions we found online… Next, we need to tell WebLogic to use the JKS file (truststore.jks) we just created. Log in to the WebLogic Server Administration Console and navigate to Servers > AdminServer > Configuration > Keystores. Scroll down to “Custom Trust Keystore:” and change the value to “truststore.jks”, then set “Custom Trust Keystore Passphrase:” and “Confirm Custom Trust Keystore Passphrase:” to the password you used earlier, and save your changes. You will get a nice message similar to the following: Figure 9: To Be Safe, Restart Anyways The “No restarts are necessary” message is somewhat of an exaggeration. If you want to be able to use the keystore you may need to restart the server(s). To save myself aggravation, I always do. Your mileage may vary. Conclusion That should get you there. If there are some erroneous steps included for your particular situation, I will offer up only a semi-apology, as the process described above does not take long at all, and even if there is one step that could be dropped from it, this is still much faster than trying to figure it all out from other sources.

    Read the article

  • Ubuntu 12.10 live-usb won't boot

    - by user109175
    I own an Aleutia "Tango" low-power PC. It uses an Intel Atom N2800 CPU @ 1.86GHz. I know there are Linux issues with the "Cedar Trail" graphics on this hardware, but I can happily run Ubuntu 12.04 using the VESA graphics driver. I wanted to try out Ubuntu 12.10, so I created a live USB under 12.04, but I am unable to get it to boot. It gets as far as the GRUB screen, but after that the monitor shuts down. I have tried the F6 "nomodeset" option, but that doesn't make any difference. Does anyone have any knowledge of this problem? Thanks in advance.

    Read the article

  • Silverlight Recruiting Application Part 5 - Jobs Module / View

    Now we starting getting into a more code-heavy portion of this series, thankfully though this means the groundwork is all set for the most part and after adding the modules we will have a complete application that can be provided with full source. The Jobs module will have two concerns- adding and maintaining jobs that can then be broadcast out to the website. How they are displayed on the site will be handled by our admin system (which will just poll from this common database), so we aren't too concerned with that, but rather with getting the information into the system and allowing the backend administration/HR users to keep things up to date. Since there is a fair bit of information that we want to display, we're going to move editing to a separate view so we can get all that information in an easy-to-use spot. With all the files created for this module, the project looks something like this: And now... on to the code. XAML for the Job Posting View All we really need for the Job Posting View is a RadGridView and a few buttons. This will let us both show off records and perform operations on the records without much hassle. That XAML is going to look something like this: 01.<Grid x:Name="LayoutRoot" 02.Background="White"> 03.<Grid.RowDefinitions> 04.<RowDefinition Height="30" /> 05.<RowDefinition /> 06.</Grid.RowDefinitions> 07.<StackPanel Orientation="Horizontal"> 08.<Button x:Name="xAddRecordButton" 09.Content="Add Job" 10.Width="120" 11.cal:Click.Command="{Binding AddRecord}" 12.telerik:StyleManager.Theme="Windows7" /> 13.<Button x:Name="xEditRecordButton" 14.Content="Edit Job" 15.Width="120" 16.cal:Click.Command="{Binding EditRecord}" 17.telerik:StyleManager.Theme="Windows7" /> 18.</StackPanel> 19.<telerikGrid:RadGridView x:Name="xJobsGrid" 20.Grid.Row="1" 21.IsReadOnly="True" 22.AutoGenerateColumns="False" 23.ColumnWidth="*" 24.RowDetailsVisibilityMode="VisibleWhenSelected" 25.ItemsSource="{Binding MyJobs}" 26.SelectedItem="{Binding SelectedJob, Mode=TwoWay}" 27.command:SelectedItemChangedEventClass.Command="{Binding SelectedItemChanged}"> 28.<telerikGrid:RadGridView.Columns> 29.<telerikGrid:GridViewDataColumn Header="Job Title" 30.DataMemberBinding="{Binding JobTitle}" 31.UniqueName="JobTitle" /> 32.<telerikGrid:GridViewDataColumn Header="Location" 33.DataMemberBinding="{Binding Location}" 34.UniqueName="Location" /> 35.<telerikGrid:GridViewDataColumn Header="Resume Required" 36.DataMemberBinding="{Binding NeedsResume}" 37.UniqueName="NeedsResume" /> 38.<telerikGrid:GridViewDataColumn Header="CV Required" 39.DataMemberBinding="{Binding NeedsCV}" 40.UniqueName="NeedsCV" /> 41.<telerikGrid:GridViewDataColumn Header="Overview Required" 42.DataMemberBinding="{Binding NeedsOverview}" 43.UniqueName="NeedsOverview" /> 44.<telerikGrid:GridViewDataColumn Header="Active" 45.DataMemberBinding="{Binding IsActive}" 46.UniqueName="IsActive" /> 47.</telerikGrid:RadGridView.Columns> 48.</telerikGrid:RadGridView> 49.</Grid> I'll explain what's happening here by line numbers: Lines 11 and 16: Using the same type of click commands as we saw in the Menu module, we tie the button clicks to delegate commands in the viewmodel. Line 25: The source for the jobs will be a collection in the viewmodel. Line 26: We also bind the selected item to a public property from the viewmodel for use in code. Line 27: We've turned the event into a command so we can handle it via code in the viewmodel. 
So those first three probably make sense to you as far as Silverlight/WPF binding magic is concerned, but for line 27... This actually comes from something I read onDamien Schenkelman's blog back in the day for creating an attached behavior from any event. So, any time you see me using command:Whatever.Command, the backing for it is actually something like this: SelectedItemChangedEventBehavior.cs: 01.public class SelectedItemChangedEventBehavior : CommandBehaviorBase<Telerik.Windows.Controls.DataControl> 02.{ 03.public SelectedItemChangedEventBehavior(DataControl element) 04.: base(element) 05.{ 06.element.SelectionChanged += new EventHandler<SelectionChangeEventArgs>(element_SelectionChanged); 07.} 08.void element_SelectionChanged(object sender, SelectionChangeEventArgs e) 09.{ 10.// We'll only ever allow single selection, so will only need item index 0 11.base.CommandParameter = e.AddedItems[0]; 12.base.ExecuteCommand(); 13.} 14.} SelectedItemChangedEventClass.cs: 01.public class SelectedItemChangedEventClass 02.{ 03.#region The Command Stuff 04.public static ICommand GetCommand(DependencyObject obj) 05.{ 06.return (ICommand)obj.GetValue(CommandProperty); 07.} 08.public static void SetCommand(DependencyObject obj, ICommand value) 09.{ 10.obj.SetValue(CommandProperty, value); 11.} 12.public static readonly DependencyProperty CommandProperty = 13.DependencyProperty.RegisterAttached("Command", typeof(ICommand), 14.typeof(SelectedItemChangedEventClass), new PropertyMetadata(OnSetCommandCallback)); 15.public static void OnSetCommandCallback(DependencyObject dependencyObject, DependencyPropertyChangedEventArgs e) 16.{ 17.DataControl element = dependencyObject as DataControl; 18.if (element != null) 19.{ 20.SelectedItemChangedEventBehavior behavior = GetOrCreateBehavior(element); 21.behavior.Command = e.NewValue as ICommand; 22.} 23.} 24.#endregion 25.public static SelectedItemChangedEventBehavior GetOrCreateBehavior(DataControl element) 26.{ 27.SelectedItemChangedEventBehavior behavior = element.GetValue(SelectedItemChangedEventBehaviorProperty) as SelectedItemChangedEventBehavior; 28.if (behavior == null) 29.{ 30.behavior = new SelectedItemChangedEventBehavior(element); 31.element.SetValue(SelectedItemChangedEventBehaviorProperty, behavior); 32.} 33.return behavior; 34.} 35.public static SelectedItemChangedEventBehavior GetSelectedItemChangedEventBehavior(DependencyObject obj) 36.{ 37.return (SelectedItemChangedEventBehavior)obj.GetValue(SelectedItemChangedEventBehaviorProperty); 38.} 39.public static void SetSelectedItemChangedEventBehavior(DependencyObject obj, SelectedItemChangedEventBehavior value) 40.{ 41.obj.SetValue(SelectedItemChangedEventBehaviorProperty, value); 42.} 43.public static readonly DependencyProperty SelectedItemChangedEventBehaviorProperty = 44.DependencyProperty.RegisterAttached("SelectedItemChangedEventBehavior", 45.typeof(SelectedItemChangedEventBehavior), typeof(SelectedItemChangedEventClass), null); 46.} These end up looking very similar from command to command, but in a nutshell you create a command based on any event, determine what the parameter for it will be, then execute. It attaches via XAML and ties to a DelegateCommand in the viewmodel, so you get the full event experience (since some controls get a bit event-rich for added functionality). Simple enough, right? 
Viewmodel for the Job Posting View The Viewmodel is going to need to handle all events going back and forth, maintaining interactions with the data we are using, and both publishing and subscribing to events. Rather than breaking this into tons of little pieces, I'll give you a nice view of the entire viewmodel and then hit up the important points line-by-line: 001.public class JobPostingViewModel : ViewModelBase 002.{ 003.private readonly IEventAggregator eventAggregator; 004.private readonly IRegionManager regionManager; 005.public DelegateCommand<object> AddRecord { get; set; } 006.public DelegateCommand<object> EditRecord { get; set; } 007.public DelegateCommand<object> SelectedItemChanged { get; set; } 008.public RecruitingContext context; 009.private QueryableCollectionView _myJobs; 010.public QueryableCollectionView MyJobs 011.{ 012.get { return _myJobs; } 013.} 014.private QueryableCollectionView _selectionJobActionHistory; 015.public QueryableCollectionView SelectedJobActionHistory 016.{ 017.get { return _selectionJobActionHistory; } 018.} 019.private JobPosting _selectedJob; 020.public JobPosting SelectedJob 021.{ 022.get { return _selectedJob; } 023.set 024.{ 025.if (value != _selectedJob) 026.{ 027._selectedJob = value; 028.NotifyChanged("SelectedJob"); 029.} 030.} 031.} 032.public SubscriptionToken editToken = new SubscriptionToken(); 033.public SubscriptionToken addToken = new SubscriptionToken(); 034.public JobPostingViewModel(IEventAggregator eventAgg, IRegionManager regionmanager) 035.{ 036.// set Unity items 037.this.eventAggregator = eventAgg; 038.this.regionManager = regionmanager; 039.// load our context 040.context = new RecruitingContext(); 041.this._myJobs = new QueryableCollectionView(context.JobPostings); 042.context.Load(context.GetJobPostingsQuery()); 043.// set command events 044.this.AddRecord = new DelegateCommand<object>(this.AddNewRecord); 045.this.EditRecord = new DelegateCommand<object>(this.EditExistingRecord); 046.this.SelectedItemChanged = new DelegateCommand<object>(this.SelectedRecordChanged); 047.SetSubscriptions(); 048.} 049.#region DelegateCommands from View 050.public void AddNewRecord(object obj) 051.{ 052.this.eventAggregator.GetEvent<AddJobEvent>().Publish(true); 053.} 054.public void EditExistingRecord(object obj) 055.{ 056.if (_selectedJob == null) 057.{ 058.this.eventAggregator.GetEvent<NotifyUserEvent>().Publish("No job selected."); 059.} 060.else 061.{ 062.this._myJobs.EditItem(this._selectedJob); 063.this.eventAggregator.GetEvent<EditJobEvent>().Publish(this._selectedJob); 064.} 065.} 066.public void SelectedRecordChanged(object obj) 067.{ 068.if (obj.GetType() == typeof(ActionHistory)) 069.{ 070.// event bubbles up so we don't catch items from the ActionHistory grid 071.} 072.else 073.{ 074.JobPosting job = obj as JobPosting; 075.GrabHistory(job.PostingID); 076.} 077.} 078.#endregion 079.#region Subscription Declaration and Events 080.public void SetSubscriptions() 081.{ 082.EditJobCompleteEvent editComplete = eventAggregator.GetEvent<EditJobCompleteEvent>(); 083.if (editToken != null) 084.editComplete.Unsubscribe(editToken); 085.editToken = editComplete.Subscribe(this.EditCompleteEventHandler); 086.AddJobCompleteEvent addComplete = eventAggregator.GetEvent<AddJobCompleteEvent>(); 087.if (addToken != null) 088.addComplete.Unsubscribe(addToken); 089.addToken = addComplete.Subscribe(this.AddCompleteEventHandler); 090.} 091.public void EditCompleteEventHandler(bool complete) 092.{ 093.if (complete) 094.{ 095.JobPosting thisJob = 
_myJobs.CurrentEditItem as JobPosting; 096.this._myJobs.CommitEdit(); 097.this.context.SubmitChanges((s) => 098.{ 099.ActionHistory myAction = new ActionHistory(); 100.myAction.PostingID = thisJob.PostingID; 101.myAction.Description = String.Format("Job '{0}' has been edited by {1}", thisJob.JobTitle, "default user"); 102.myAction.TimeStamp = DateTime.Now; 103.eventAggregator.GetEvent<AddActionEvent>().Publish(myAction); 104.} 105., null); 106.} 107.else 108.{ 109.this._myJobs.CancelEdit(); 110.} 111.this.MakeMeActive(this.regionManager, "MainRegion", "JobPostingsView"); 112.} 113.public void AddCompleteEventHandler(JobPosting job) 114.{ 115.if (job == null) 116.{ 117.// do nothing, new job add cancelled 118.} 119.else 120.{ 121.this.context.JobPostings.Add(job); 122.this.context.SubmitChanges((s) => 123.{ 124.ActionHistory myAction = new ActionHistory(); 125.myAction.PostingID = job.PostingID; 126.myAction.Description = String.Format("Job '{0}' has been added by {1}", job.JobTitle, "default user"); 127.myAction.TimeStamp = DateTime.Now; 128.eventAggregator.GetEvent<AddActionEvent>().Publish(myAction); 129.} 130., null); 131.} 132.this.MakeMeActive(this.regionManager, "MainRegion", "JobPostingsView"); 133.} 134.#endregion 135.public void GrabHistory(int postID) 136.{ 137.context.ActionHistories.Clear(); 138._selectionJobActionHistory = new QueryableCollectionView(context.ActionHistories); 139.context.Load(context.GetHistoryForJobQuery(postID)); 140.} Taking it from the top, we're injecting an Event Aggregator and Region Manager for use down the road and also have the public DelegateCommands (just like in the Menu module). We also grab a reference to our context, which we'll obviously need for data, then set up a few fields with public properties tied to them. We're also setting subscription tokens, which we have not yet seen but I will get into below. The AddNewRecord (50) and EditExistingRecord (54) methods should speak for themselves for functionality, the one thing of note is we're sending events off to the Event Aggregator which some module, somewhere will take care of. Since these aren't entirely relying on one another, the Jobs View doesn't care if anyone is listening, but it will publish AddJobEvent (52), NotifyUserEvent (58) and EditJobEvent (63)regardless. Don't mind the GrabHistory() method so much, that is just grabbing history items (visibly being created in the SubmitChanges callbacks), and adding them to the database. Every action will trigger a history event, so we'll know who modified what and when, just in case. ;) So where are we at? Well, if we click to Add a job, we publish an event, if we edit a job, we publish an event with the selected record (attained through the magic of binding). Where is this all going though? To the Viewmodel, of course! 
XAML for the AddEditJobView This is pretty straightforward except for one thing, noted below: 001.<Grid x:Name="LayoutRoot" 002.Background="White"> 003.<Grid x:Name="xEditGrid" 004.Margin="10" 005.validationHelper:ValidationScope.Errors="{Binding Errors}"> 006.<Grid.Background> 007.<LinearGradientBrush EndPoint="0.5,1" 008.StartPoint="0.5,0"> 009.<GradientStop Color="#FFC7C7C7" 010.Offset="0" /> 011.<GradientStop Color="#FFF6F3F3" 012.Offset="1" /> 013.</LinearGradientBrush> 014.</Grid.Background> 015.<Grid.RowDefinitions> 016.<RowDefinition Height="40" /> 017.<RowDefinition Height="40" /> 018.<RowDefinition Height="40" /> 019.<RowDefinition Height="100" /> 020.<RowDefinition Height="100" /> 021.<RowDefinition Height="100" /> 022.<RowDefinition Height="40" /> 023.<RowDefinition Height="40" /> 024.<RowDefinition Height="40" /> 025.</Grid.RowDefinitions> 026.<Grid.ColumnDefinitions> 027.<ColumnDefinition Width="150" /> 028.<ColumnDefinition Width="150" /> 029.<ColumnDefinition Width="300" /> 030.<ColumnDefinition Width="100" /> 031.</Grid.ColumnDefinitions> 032.<!-- Title --> 033.<TextBlock Margin="8" 034.Text="{Binding AddEditString}" 035.TextWrapping="Wrap" 036.Grid.Column="1" 037.Grid.ColumnSpan="2" 038.FontSize="16" /> 039.<!-- Data entry area--> 040. 041.<TextBlock Margin="8,0,0,0" 042.Style="{StaticResource LabelTxb}" 043.Grid.Row="1" 044.Text="Job Title" 045.VerticalAlignment="Center" /> 046.<TextBox x:Name="xJobTitleTB" 047.Margin="0,8" 048.Grid.Column="1" 049.Grid.Row="1" 050.Text="{Binding activeJob.JobTitle, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}" 051.Grid.ColumnSpan="2" /> 052.<TextBlock Margin="8,0,0,0" 053.Grid.Row="2" 054.Text="Location" 055.d:LayoutOverrides="Height" 056.VerticalAlignment="Center" /> 057.<TextBox x:Name="xLocationTB" 058.Margin="0,8" 059.Grid.Column="1" 060.Grid.Row="2" 061.Text="{Binding activeJob.Location, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}" 062.Grid.ColumnSpan="2" /> 063. 064.<TextBlock Margin="8,11,8,0" 065.Grid.Row="3" 066.Text="Description" 067.TextWrapping="Wrap" 068.VerticalAlignment="Top" /> 069. 070.<TextBox x:Name="xDescriptionTB" 071.Height="84" 072.TextWrapping="Wrap" 073.ScrollViewer.VerticalScrollBarVisibility="Auto" 074.Grid.Column="1" 075.Grid.Row="3" 076.Text="{Binding activeJob.Description, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}" 077.Grid.ColumnSpan="2" /> 078.<TextBlock Margin="8,11,8,0" 079.Grid.Row="4" 080.Text="Requirements" 081.TextWrapping="Wrap" 082.VerticalAlignment="Top" /> 083. 084.<TextBox x:Name="xRequirementsTB" 085.Height="84" 086.TextWrapping="Wrap" 087.ScrollViewer.VerticalScrollBarVisibility="Auto" 088.Grid.Column="1" 089.Grid.Row="4" 090.Text="{Binding activeJob.Requirements, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}" 091.Grid.ColumnSpan="2" /> 092.<TextBlock Margin="8,11,8,0" 093.Grid.Row="5" 094.Text="Qualifications" 095.TextWrapping="Wrap" 096.VerticalAlignment="Top" /> 097. 098.<TextBox x:Name="xQualificationsTB" 099.Height="84" 100.TextWrapping="Wrap" 101.ScrollViewer.VerticalScrollBarVisibility="Auto" 102.Grid.Column="1" 103.Grid.Row="5" 104.Text="{Binding activeJob.Qualifications, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}" 105.Grid.ColumnSpan="2" /> 106.<!-- Requirements Checkboxes--> 107. 
108.<CheckBox x:Name="xResumeRequiredCB" Margin="8,8,8,15" 109.Content="Resume Required" 110.Grid.Row="6" 111.Grid.ColumnSpan="2" 112.IsChecked="{Binding activeJob.NeedsResume, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"/> 113. 114.<CheckBox x:Name="xCoverletterRequiredCB" Margin="8,8,8,15" 115.Content="Cover Letter Required" 116.Grid.Column="2" 117.Grid.Row="6" 118.IsChecked="{Binding activeJob.NeedsCV, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"/> 119. 120.<CheckBox x:Name="xOverviewRequiredCB" Margin="8,8,8,15" 121.Content="Overview Required" 122.Grid.Row="7" 123.Grid.ColumnSpan="2" 124.IsChecked="{Binding activeJob.NeedsOverview, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"/> 125. 126.<CheckBox x:Name="xJobActiveCB" Margin="8,8,8,15" 127.Content="Job is Active" 128.Grid.Column="2" 129.Grid.Row="7" 130.IsChecked="{Binding activeJob.IsActive, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"/> 131. 132.<!-- Buttons --> 133. 134.<Button x:Name="xAddEditButton" Margin="8,8,0,10" 135.Content="{Binding AddEditButtonString}" 136.cal:Click.Command="{Binding AddEditCommand}" 137.Grid.Column="2" 138.Grid.Row="8" 139.HorizontalAlignment="Left" 140.Width="125" 141.telerik:StyleManager.Theme="Windows7" /> 142. 143.<Button x:Name="xCancelButton" HorizontalAlignment="Right" 144.Content="Cancel" 145.cal:Click.Command="{Binding CancelCommand}" 146.Margin="0,8,8,10" 147.Width="125" 148.Grid.Column="2" 149.Grid.Row="8" 150.telerik:StyleManager.Theme="Windows7" /> 151.</Grid> 152.</Grid> The 'validationHelper:ValidationScope' line may seem odd. This is a handy little trick for catching current and would-be validation errors when working in this whole setup. This all comes from an approach found on theJoy Of Code blog, although it looks like the story for this will be changing slightly with new advances in SL4/WCF RIA Services, so this section can definitely get an overhaul a little down the road. The code is the fun part of all this, so let us see what's happening under the hood. Viewmodel for the AddEditJobView We are going to see some of the same things happening here, so I'll skip over the repeat info and get right to the good stuff: 001.public class AddEditJobViewModel : ViewModelBase 002.{ 003.private readonly IEventAggregator eventAggregator; 004.private readonly IRegionManager regionManager; 005. 006.public RecruitingContext context; 007. 008.private JobPosting _activeJob; 009.public JobPosting activeJob 010.{ 011.get { return _activeJob; } 012.set 013.{ 014.if (_activeJob != value) 015.{ 016._activeJob = value; 017.NotifyChanged("activeJob"); 018.} 019.} 020.} 021. 022.public bool isNewJob; 023. 024.private string _addEditString; 025.public string AddEditString 026.{ 027.get { return _addEditString; } 028.set 029.{ 030.if (_addEditString != value) 031.{ 032._addEditString = value; 033.NotifyChanged("AddEditString"); 034.} 035.} 036.} 037. 038.private string _addEditButtonString; 039.public string AddEditButtonString 040.{ 041.get { return _addEditButtonString; } 042.set 043.{ 044.if (_addEditButtonString != value) 045.{ 046._addEditButtonString = value; 047.NotifyChanged("AddEditButtonString"); 048.} 049.} 050.} 051. 052.public SubscriptionToken addJobToken = new SubscriptionToken(); 053.public SubscriptionToken editJobToken = new SubscriptionToken(); 054. 055.public DelegateCommand<object> AddEditCommand { get; set; } 056.public DelegateCommand<object> CancelCommand { get; set; } 057. 
058.private ObservableCollection<ValidationError> _errors = new ObservableCollection<ValidationError>(); 059.public ObservableCollection<ValidationError> Errors 060.{ 061.get { return _errors; } 062.} 063. 064.private ObservableCollection<ValidationResult> _valResults = new ObservableCollection<ValidationResult>(); 065.public ObservableCollection<ValidationResult> ValResults 066.{ 067.get { return this._valResults; } 068.} 069. 070.public AddEditJobViewModel(IEventAggregator eventAgg, IRegionManager regionmanager) 071.{ 072.// set Unity items 073.this.eventAggregator = eventAgg; 074.this.regionManager = regionmanager; 075. 076.context = new RecruitingContext(); 077. 078.AddEditCommand = new DelegateCommand<object>(this.AddEditJobCommand); 079.CancelCommand = new DelegateCommand<object>(this.CancelAddEditCommand); 080. 081.SetSubscriptions(); 082.} 083. 084.#region Subscription Declaration and Events 085. 086.public void SetSubscriptions() 087.{ 088.AddJobEvent addJob = this.eventAggregator.GetEvent<AddJobEvent>(); 089. 090.if (addJobToken != null) 091.addJob.Unsubscribe(addJobToken); 092. 093.addJobToken = addJob.Subscribe(this.AddJobEventHandler); 094. 095.EditJobEvent editJob = this.eventAggregator.GetEvent<EditJobEvent>(); 096. 097.if (editJobToken != null) 098.editJob.Unsubscribe(editJobToken); 099. 100.editJobToken = editJob.Subscribe(this.EditJobEventHandler); 101.} 102. 103.public void AddJobEventHandler(bool isNew) 104.{ 105.this.activeJob = null; 106.this.activeJob = new JobPosting(); 107.this.activeJob.IsActive = true; // We assume that we want a new job to go up immediately 108.this.isNewJob = true; 109.this.AddEditString = "Add New Job Posting"; 110.this.AddEditButtonString = "Add Job"; 111. 112.MakeMeActive(this.regionManager, "MainRegion", "AddEditJobView"); 113.} 114. 115.public void EditJobEventHandler(JobPosting editJob) 116.{ 117.this.activeJob = null; 118.this.activeJob = editJob; 119.this.isNewJob = false; 120.this.AddEditString = "Edit Job Posting"; 121.this.AddEditButtonString = "Edit Job"; 122. 123.MakeMeActive(this.regionManager, "MainRegion", "AddEditJobView"); 124.} 125. 126.#endregion 127. 128.#region DelegateCommands from View 129. 130.public void AddEditJobCommand(object obj) 131.{ 132.if (this.Errors.Count > 0) 133.{ 134.List<string> errorMessages = new List<string>(); 135. 136.foreach (var valR in this.Errors) 137.{ 138.errorMessages.Add(valR.Exception.Message); 139.} 140. 141.this.eventAggregator.GetEvent<DisplayValidationErrorsEvent>().Publish(errorMessages); 142. 143.} 144.else if (!Validator.TryValidateObject(this.activeJob, new ValidationContext(this.activeJob, null, null), _valResults, true)) 145.{ 146.List<string> errorMessages = new List<string>(); 147. 148.foreach (var valR in this._valResults) 149.{ 150.errorMessages.Add(valR.ErrorMessage); 151.} 152. 153.this._valResults.Clear(); 154. 155.this.eventAggregator.GetEvent<DisplayValidationErrorsEvent>().Publish(errorMessages); 156.} 157.else 158.{ 159.if (this.isNewJob) 160.{ 161.this.eventAggregator.GetEvent<AddJobCompleteEvent>().Publish(this.activeJob); 162.} 163.else 164.{ 165.this.eventAggregator.GetEvent<EditJobCompleteEvent>().Publish(true); 166.} 167.} 168.} 169. 170.public void CancelAddEditCommand(object obj) 171.{ 172.if (this.isNewJob) 173.{ 174.this.eventAggregator.GetEvent<AddJobCompleteEvent>().Publish(null); 175.} 176.else 177.{ 178.this.eventAggregator.GetEvent<EditJobCompleteEvent>().Publish(false); 179.} 180.} 181. 
182.#endregion 183.} 184.} We start seeing something new on line 103- the AddJobEventHandler will create a new job and set that to the activeJob item on the ViewModel. When this is all set, the view calls that familiar MakeMeActive method to activate itself. I made a bit of a management call on making views self-activate like this, but I figured it works for one reason. As I create this application, views may not exist that I have in mind, so after a view receives its 'ping' from being subscribed to an event, it prepares whatever it needs to do and then goes active. This way if I don't have 'edit' hooked up, I can click as the day is long on the main view and won't get lost in an empty region. Total personal preference here. :) Everything else should again be pretty straightforward, although I do a bit of validation checking in the AddEditJobCommand, which can either fire off an event back to the main view/viewmodel if everything is a success or sent a list of errors to our notification module, which pops open a RadWindow with the alerts if any exist. As a bonus side note, here's what my WCF RIA Services metadata looks like for handling all of the validation: private JobPostingMetadata() { } [StringLength(2500, ErrorMessage = "Description should be more than one and less than 2500 characters.", MinimumLength = 1)] [Required(ErrorMessage = "Description is required.")] public string Description; [Required(ErrorMessage="Active Status is Required")] public bool IsActive; [StringLength(100, ErrorMessage = "Posting title must be more than 3 but less than 100 characters.", MinimumLength = 3)] [Required(ErrorMessage = "Job Title is required.")] public bool JobTitle; [Required] public string Location; public bool NeedsCV; public bool NeedsOverview; public bool NeedsResume; public int PostingID; [Required(ErrorMessage="Qualifications are required.")] [StringLength(2500, ErrorMessage="Qualifications should be more than one and less than 2500 characters.", MinimumLength=1)] public string Qualifications; [StringLength(2500, ErrorMessage = "Requirements should be more than one and less than 2500 characters.", MinimumLength = 1)] [Required(ErrorMessage="Requirements are required.")] public string Requirements;   The RecruitCB Alternative See all that Xaml I pasted above? Those are now two pieces sitting in the JobsView.xaml file now. The only real difference is that the xEditGrid now sits in the same place as xJobsGrid, with visibility swapping out between the two for a quick switch. I also took out all the cal: and command: command references and replaced Button events with clicks and the Grid selection command replaced with a SelectedItemChanged event. Also, at the bottom of the xEditGrid after the last button, I add a ValidationSummary (with Visibility=Collapsed) to catch any errors that are popping up. 
Simple as can be, and leads to this being the single code-behind file: 001.public partial class JobsView : UserControl 002.{ 003.public RecruitingContext context; 004.public JobPosting activeJob; 005.public bool isNew; 006.private ObservableCollection<ValidationResult> _valResults = new ObservableCollection<ValidationResult>(); 007.public ObservableCollection<ValidationResult> ValResults 008.{ 009.get { return this._valResults; } 010.} 011.public JobsView() 012.{ 013.InitializeComponent(); 014.this.Loaded += new RoutedEventHandler(JobsView_Loaded); 015.} 016.void JobsView_Loaded(object sender, RoutedEventArgs e) 017.{ 018.context = new RecruitingContext(); 019.xJobsGrid.ItemsSource = context.JobPostings; 020.context.Load(context.GetJobPostingsQuery()); 021.} 022.private void xAddRecordButton_Click(object sender, RoutedEventArgs e) 023.{ 024.activeJob = new JobPosting(); 025.isNew = true; 026.xAddEditTitle.Text = "Add a Job Posting"; 027.xAddEditButton.Content = "Add"; 028.xEditGrid.DataContext = activeJob; 029.HideJobsGrid(); 030.} 031.private void xEditRecordButton_Click(object sender, RoutedEventArgs e) 032.{ 033.activeJob = xJobsGrid.SelectedItem as JobPosting; 034.isNew = false; 035.xAddEditTitle.Text = "Edit a Job Posting"; 036.xAddEditButton.Content = "Edit"; 037.xEditGrid.DataContext = activeJob; 038.HideJobsGrid(); 039.} 040.private void xAddEditButton_Click(object sender, RoutedEventArgs e) 041.{ 042.if (!Validator.TryValidateObject(this.activeJob, new ValidationContext(this.activeJob, null, null), _valResults, true)) 043.{ 044.List<string> errorMessages = new List<string>(); 045.foreach (var valR in this._valResults) 046.{ 047.errorMessages.Add(valR.ErrorMessage); 048.} 049.this._valResults.Clear(); 050.ShowErrors(errorMessages); 051.} 052.else if (xSummary.Errors.Count > 0) 053.{ 054.List<string> errorMessages = new List<string>(); 055.foreach (var err in xSummary.Errors) 056.{ 057.errorMessages.Add(err.Message); 058.} 059.ShowErrors(errorMessages); 060.} 061.else 062.{ 063.if (this.isNew) 064.{ 065.context.JobPostings.Add(activeJob); 066.context.SubmitChanges((s) => 067.{ 068.ActionHistory thisAction = new ActionHistory(); 069.thisAction.PostingID = activeJob.PostingID; 070.thisAction.Description = String.Format("Job '{0}' has been edited by {1}", activeJob.JobTitle, "default user"); 071.thisAction.TimeStamp = DateTime.Now; 072.context.ActionHistories.Add(thisAction); 073.context.SubmitChanges(); 074.}, null); 075.} 076.else 077.{ 078.context.SubmitChanges((s) => 079.{ 080.ActionHistory thisAction = new ActionHistory(); 081.thisAction.PostingID = activeJob.PostingID; 082.thisAction.Description = String.Format("Job '{0}' has been added by {1}", activeJob.JobTitle, "default user"); 083.thisAction.TimeStamp = DateTime.Now; 084.context.ActionHistories.Add(thisAction); 085.context.SubmitChanges(); 086.}, null); 087.} 088.ShowJobsGrid(); 089.} 090.} 091.private void xCancelButton_Click(object sender, RoutedEventArgs e) 092.{ 093.ShowJobsGrid(); 094.} 095.private void ShowJobsGrid() 096.{ 097.xAddEditRecordButtonPanel.Visibility = Visibility.Visible; 098.xEditGrid.Visibility = Visibility.Collapsed; 099.xJobsGrid.Visibility = Visibility.Visible; 100.} 101.private void HideJobsGrid() 102.{ 103.xAddEditRecordButtonPanel.Visibility = Visibility.Collapsed; 104.xJobsGrid.Visibility = Visibility.Collapsed; 105.xEditGrid.Visibility = Visibility.Visible; 106.} 107.private void ShowErrors(List<string> errorList) 108.{ 109.string nm = "Errors received: \n"; 110.foreach (string anerror in 
errorList) 111.nm += anerror + "\n"; 112.RadWindow.Alert(nm); 113.} 114.} The first 39 lines should be pretty familiar, not doing anything too unorthodox to get this up and running. Once we hit xAddEditButton_Click on line 40, we're still doing pretty much the same things, except that instead of checking the ValidationHelper errors, we both validate the current activeJob object and check the ValidationSummary errors list. Once that is set, we again use the callback of context.SubmitChanges (lines 68 and 78) to create an ActionHistory which we will use to track these items down the line. That's all? Essentially... yes. If you look back through this post, most of the code and the adventures we took were just to get things working in the MVVM/Prism setup. Since I have the whole 'module' self-contained in a single JobsView+code-behind setup, I don't have to worry about things like sending events off into space for someone to pick up, communicating through an Infrastructure project, or even re-inventing events to be used with attached behaviors. Everything just kinda works, and again with much less code. Here's a picture of the MVVM and code-behind versions of the Jobs and AddEdit views, but since the functionality is the same in both apps you still cannot tell them apart (strike two): Looking ahead, the Applicants module is effectively the same thing as the Jobs module, so most of the code is being cut-and-pasted back and forth with minor tweaks here and there. So that one is being taken care of by me behind the scenes. Next time, we get into a new world of fun - the interview scheduling module, which will pull from available jobs and applicants for each interview being scheduled, tying everything together with RadScheduler to the rescue.

    Read the article

  • Asp.net MVC Route class that supports catch-all parameter anywhere in the URL

    - by Robert Koritnik
    The more I think about it, the more I believe it's possible to write a custom route that would consume these URL definitions: {var1}/{var2}/{var3} Const/{var1}/{var2} Const1/{var1}/Const2/{var2} {var1}/{var2}/Const as well as having at most one greedy parameter at any position within any of the URLs above, like {*var1}/{var2}/{var3} {var1}/{*var2}/{var3} {var1}/{var2}/{*var3} There is one important constraint: routes with a greedy segment can't have any optional parts. All of them are mandatory. Example This is an example request http://localhost/Show/Topic/SubTopic/SubSubTopic/123/This-is-an-example This is the URL route definition {action}/{*topicTree}/{id}/{title} Algorithm Parsing the request route inside GetRouteData() should work like this: Split the request into segments: Show Topic SubTopic SubSubTopic 123 This-is-an-example Process the route URL definition starting from the left, assigning single segment values to parameters (or matching request segment values to static route constant segments). When a route segment is defined as greedy, reverse parsing and go to the last segment. Parse route segments one by one backwards (assigning them request values) until you get to the greedy catch-all one again. When you reach the greedy one again, join all remaining request segments (in original order) and assign them to the greedy catch-all route parameter. Questions As far as I can think of this, it could work. But I would like to know: Has anyone already written this so I don't have to (because there are other aspects to parsing that I didn't mention, such as constraints, defaults etc.)? Do you see any flaws in this algorithm, because I'm going to have to write it myself if no one has done it so far. I haven't thought about the GetVirtualPath() method at all.
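    For illustration, here is a minimal sketch of the two-pass matching described above, written as a free-standing helper rather than the real RouteBase/GetRouteData plumbing; the GreedyMatch name and the simplifications (plain string matching, no constraints or defaults, exactly one catch-all) are assumptions made for the example:

    // Sketch only: forward pass up to the greedy segment, backward pass from the end,
    // then everything left in the middle becomes the catch-all value.
    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class GreedyMatch
    {
        public static Dictionary<string, string> Match(string routeUrl, string requestPath)
        {
            var route = routeUrl.Trim('/').Split('/');
            var request = requestPath.Trim('/').Split('/');
            var values = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);

            int greedy = Array.FindIndex(route, s => s.StartsWith("{*"));
            if (greedy < 0) return null;                      // this sketch only handles the greedy case
            if (request.Length < route.Length) return null;   // greedy routes have no optional parts

            // 1) forward pass: everything to the left of the greedy segment
            for (int i = 0; i < greedy; i++)
                if (!Bind(route[i], request[i], values)) return null;

            // 2) backward pass: everything to the right of the greedy segment
            for (int i = 1; i <= route.Length - 1 - greedy; i++)
                if (!Bind(route[route.Length - i], request[request.Length - i], values)) return null;

            // 3) join what is left (in original order) and assign it to the catch-all
            int tail = route.Length - 1 - greedy;
            var middle = request.Skip(greedy).Take(request.Length - greedy - tail);
            values[route[greedy].Trim('{', '}', '*')] = string.Join("/", middle);
            return values;
        }

        static bool Bind(string routeSeg, string requestSeg, IDictionary<string, string> values)
        {
            if (routeSeg.StartsWith("{")) { values[routeSeg.Trim('{', '}')] = requestSeg; return true; }
            return string.Equals(routeSeg, requestSeg, StringComparison.OrdinalIgnoreCase); // constant segment
        }
    }

    With the example above, Match("{action}/{*topicTree}/{id}/{title}", "Show/Topic/SubTopic/SubSubTopic/123/This-is-an-example") yields action=Show, topicTree=Topic/SubTopic/SubSubTopic, id=123 and title=This-is-an-example.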

    Read the article

  • How to create a column containing a string of stars to indicate levels of a factor in a data frame

    - by PaulHurleyuk
    (second question today - must be a bad day) I have a dataframe with various columns, including a concentration column (numeric), a flag highlighting invalid results (boolean) and a description of the problem (character) dput(df) structure(list(x = c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10), rawconc = c(77.4, 52.6, 86.5, 44.5, 167, 16.2, 59.3, 123, 1.95, 181), reason = structure(c(NA, NA, 2L, NA, NA, NA, 2L, 1L, NA, NA), .Label = c("Fails Acceptance Criteria", "Poor Injection"), class = "factor"), flag = c("False", "False", "True", "False", "False", "False", "True", "True", "False", "False" )), .Names = c("x", "rawconc", "reason", "flag"), row.names = c(NA, -10L), class = "data.frame") I can create a column with the numeric level of the reason column df$level<-as.numeric(df$reason) df x rawconc reason flag level 1 1 77.40 <NA> False NA 2 2 52.60 <NA> False NA 3 3 86.50 Poor Injection True 2 4 4 44.50 <NA> False NA 5 5 167.00 <NA> False NA 6 6 16.20 <NA> False NA 7 7 59.30 Poor Injection True 2 8 8 123.00 Fails Acceptance Criteria True 1 9 9 1.95 <NA> False NA 10 10 181.00 <NA> False NA and here's what I want to do to create a column with 'level' many stars, but it fails df$stars<-paste(rep("*",df$level)sep="",collapse="") Error: unexpected symbol in "df$stars<-paste(rep("*",df$level)sep" df$stars<-paste(rep("*",df$level),sep="",collapse="") Error in rep("*", df$level) : invalid 'times' argument rep("*",df$level) Error in rep("*", df$level) : invalid 'times' argument df$stars<-paste(rep("*",pmax(df$level,0,na.rm=TRUE)),sep="",collapse="") Error in rep("*", pmax(df$level, 0, na.rm = TRUE)) : invalid 'times' argument It seems that rep needs to be fed one value at a time. I feel that this should be possible (and my gut says 'use lapply' but my apply fu is v. poor) Anyone want to try?
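    For what it's worth, rep() is indeed not vectorised over its times argument here, so it has to be fed one value per row. A small sketch against the df above (strrep() would be shorter, but it only exists in more recent R versions):

    # one string of 'level' stars per row; rows with NA level stay NA
    df$stars <- sapply(df$level, function(n) {
      if (is.na(n)) NA_character_ else paste(rep("*", n), collapse = "")
    })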

    Read the article

  • Passing Key-Value pair to a COM method in C#

    - by Sinnerman
    Hello, I have the following problem: I have a project in C# .Net where I use a third-party COM component. The problem is that the above component has a method which takes a string and a number of key-value pairs as arguments. I already managed to call that method through JavaScript like this: var srcName srcName = top.cadView.addSource( 'Database', { driver : 'Oracle', host : '10.10.1.123', port : 1234, database : 'someSID', user : 'someuser', password : 'somepass' } ) if ( srcName != '' ) { ... } ... and it worked perfectly well. However, I have no idea how to do the same using C#. I tried passing the pairs as a Dictionary and as a Struct/Class, but it throws a "Specified cast is not valid." exception. I also tried using a Hashtable like this: Hashtable args = new Hashtable(); args.Add("driver", "Oracle"); args.Add("host", "10.10.1.123"); args.Add("port", 1234); args.Add("database", "someSID"); args.Add("user", "someUser"); args.Add("password", "samePass"); String srcName = axCVCanvas.addSource("Database", args); and although it doesn't throw an exception it still won't do the job, writing the following to a log file: [ Error] [14:38:33.281] Cad::SourceDB::SourceDB(): missing parameter 'driver' Any help will be appreciated, thanks.

    Read the article

  • How to send parameters with same encoding from javascript?

    - by nimcap
    I have a javascript file that lots of people have embedded in their pages. Since I am hosting the file, I have control over that javascript file; I cannot control the way it is embedded because lots of people are using it already. This javascript file sends GET requests to my servlets, and the parameters passed with the request are recorded to a DB. For example, the javascript sends a request to http://myserver.com/servlet?p1=123&p2=aString and then the servlet records 123 and aString to the DB somehow. Before sending strings I use encodeURIComponent() to encode them. But what I figured out is that every client sends the same string with different encodings, depending on either their browser or the site they are visiting. As a result, the same strings are represented with different characters when they reach the servlet (so they are different strings). What I am trying to do is to convert the strings to one kind of encoding from javascript, so that when they reach the servlet the same words are represented with the same characters. How is this possible? PS. If there is a way to convert the encoding from Java, that would work too.
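    If nothing else, the decoding side can be pinned down: encodeURIComponent() always emits UTF-8 percent-escapes, so one hedged approach is to bypass the container's default charset (often ISO-8859-1 for query strings) and decode the raw value as UTF-8 in the servlet. A sketch, assuming it sits inside doGet() (which already declares IOException) and that the parameter is called p2 as in the example:

    // re-decode the raw query-string value as UTF-8, the encoding encodeURIComponent used
    String query = request.getQueryString();          // e.g. "p1=123&p2=a%C3%A9String"
    String p2 = null;
    if (query != null) {
        for (String pair : query.split("&")) {
            String[] kv = pair.split("=", 2);
            if (kv.length == 2 && kv[0].equals("p2")) {
                // UnsupportedEncodingException is an IOException, so doGet's throws clause covers it
                p2 = java.net.URLDecoder.decode(kv[1], "UTF-8");
            }
        }
    }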

    Read the article

  • Google App Engine: how to get an object from the servlet?

    - by Frank
    I have the following class objects in Google App Engine's datastore; I can see them from the "Datastore Viewer": import javax.jdo.annotations.IdGeneratorStrategy; import javax.jdo.annotations.IdentityType; import javax.jdo.annotations.PersistenceCapable; import javax.jdo.annotations.Persistent; import javax.jdo.annotations.PrimaryKey; @PersistenceCapable(identityType=IdentityType.APPLICATION) public class Contact_Info_Entry implements Serializable { @PrimaryKey @Persistent(valueStrategy=IdGeneratorStrategy.IDENTITY) Long Id; public static final long serialVersionUID=26362862L; String Contact_Id="",First_Name="",Last_Name="",Company_Name="",Branch_Name="",Address_1="",Address_2="",City="",State="",Zip="",Country=""; double D_1,D_2; boolean B_1,B_2; Vector<String> A_Vector=new Vector<String>(); public Contact_Info_Entry() { } ...... } How can my Java application get the object from a servlet URL? For instance, if I have an instance of Contact_Info_Entry whose Contact_Id is "ABC-123", and my App Id is nm-java, when my Java program accesses the url "http://nm-java.appspot.com/Check_Contact_Info?Contact_Id=ABC-123, how will the Check_Contact_Info servlet get the object from the datastore and return it to my app? public class Check_Contact_Info_Servlet extends HttpServlet { static boolean Debug=true; public void doGet(HttpServletRequest request,HttpServletResponse response) throws IOException { } ... protected void doPost(HttpServletRequest request,HttpServletResponse response) throws ServletException,IOException { doGet(request,response); } } Frank
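    One possible shape for doGet(), sketched here with the usual PMF singleton helper from the GAE/JDO samples (PMF.get() is an assumption, as is serving the object back as a plain Java-serialized stream, which only works when the caller is also Java; java.util.List and java.io.ObjectOutputStream need to be imported):

    public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
        String contactId = request.getParameter("Contact_Id");            // e.g. "ABC-123"
        javax.jdo.PersistenceManager pm = PMF.get().getPersistenceManager();
        try {
            // JDOQL query on the Contact_Id field
            javax.jdo.Query q = pm.newQuery(Contact_Info_Entry.class, "Contact_Id == idParam");
            q.declareParameters("String idParam");
            @SuppressWarnings("unchecked")
            List<Contact_Info_Entry> results = (List<Contact_Info_Entry>) q.execute(contactId);
            if (results.isEmpty()) { response.sendError(HttpServletResponse.SC_NOT_FOUND); return; }
            Contact_Info_Entry entry = pm.detachCopy(results.get(0));      // detach before the PM closes
            response.setContentType("application/x-java-serialized-object");
            ObjectOutputStream out = new ObjectOutputStream(response.getOutputStream());
            out.writeObject(entry);
            out.flush();
        } finally {
            pm.close();
        }
    }

    The client application can then open the URL and read the object back with an ObjectInputStream over the connection's input stream, since Contact_Info_Entry is Serializable.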

    Read the article

  • Need Help on JavaScript to Trim

    - by MacpaLtd
    I am facing some problems with my server space. The images are using all the space on the server, making it slow. As it is an eCommerce website, it cannot be slow or we lose customers. If I have the following: SKU's : ABC123-001 > catName > Phone ABC753-851 > catName > MAC AT1233-098 > catName > PC How can I use trim to make it the following: SKU's : 123 > catName > Phone 753 > catName > MAC 1233 > catName > PC Which I would use in the following script: <script type="text/javascript"> $(function(){ var sku = $("#ProductBreadcrumb ul li:last").text(); $(".ProductThumbImage img").attr('src','http://img.example.com/images/'+catName+'/'+sku+'.jpg'); }); </script> So, basically, the output for the picture link would be: http://img.example.com/images/phone/123.jpg http://img.example.com/images/mac/753.jpg http://img.example.com/images/pc/1233.jpg So the first problem I have to face is: how can I trim it? I am not familiar with JavaScript, so any help would be really appreciated :D
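    Strictly speaking this is an extraction rather than a trim: keep the digits that sit between the leading letters and the dash. A hedged tweak of the script above (catName is assumed to be defined elsewhere, exactly as in the original snippet):

    <script type="text/javascript">
    $(function () {
        // breadcrumb text, e.g. "ABC123-001"
        var rawSku = $.trim($("#ProductBreadcrumb ul li:last").text());
        // "ABC123-001" -> "123", "AT1233-098" -> "1233"
        var m = rawSku.match(/^[A-Za-z]+(\d+)-/);
        var sku = m ? m[1] : rawSku;   // fall back to the raw value if the pattern ever changes
        $(".ProductThumbImage img").attr("src",
            "http://img.example.com/images/" + catName + "/" + sku + ".jpg");
    });
    </script>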

    Read the article

  • External arrows for jcarousel?

    - by bob dobbs
    Hi all. I'm creating a slideshow with jcarousel. So far, my slideshow uses most of the defaults for a horizontal slideshow with the tango theme. Can I add extra "next" and "previous" arrows? I'd like to have them outside of the slideshow container. Is this possible?
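    It should be, at least with the classic 0.2.x jCarousel API that the tango skin ships with: initCallback hands you the carousel object, and any element on the page can drive it. A sketch (#ext-prev / #ext-next are your own elements outside the container; the option names are from the 0.2.x docs, so double-check them against your version):

    $('#mycarousel').jcarousel({
        buttonNextHTML: null,   // optionally suppress the built-in arrows
        buttonPrevHTML: null,
        initCallback: function (carousel) {
            // wire up controls that live anywhere outside .jcarousel-skin-tango
            $('#ext-prev').click(function () { carousel.prev(); return false; });
            $('#ext-next').click(function () { carousel.next(); return false; });
        }
    });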

    Read the article

  • Nginx ssl - SSL: error:0906D06C:PEM routines:PEM_read_bio:no start line

    - by Alex
    I am trying to enable SSL on a server using a certificate from 123-reg but I keep getting this error: nginx: [emerg] SSL_CTX_use_certificate_chain_file("/opt/nginx/conf/cleantechlms.crt") failed (SSL: error:0906D06C:PEM routines:PEM_read_bio:no start line error:140DC009:SSL routines:SSL_CTX_use_certificate_chain_file:PEM lib) This is my nginx config: server { listen 443; server_name a-fake-url.com; root /file/path/public; passenger_enabled on; ssl on; ssl_certificate /opt/nginx/conf/cleantechlms.crt; ssl_certificate_key /opt/nginx/conf/cleantechlms.key; } I have tried setting my crt and key to full file permissions but there is no difference. My crt file is the crt I was issued concatenated with the CA crt. Update: I have tried copying both certificates into separate files and then running 'cat mykey.crt ca.cert'. I also tried manually copying the certificates into the same file. Any ideas?
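    That error usually means SSL_CTX_use_certificate_chain_file hit something that is not a PEM block: certificates concatenated in the wrong order, a missing newline between the BEGIN/END blocks, or stray bytes (a BOM, Windows line endings) at the top of the file. A quick way to rebuild and sanity-check the bundle; the two source file names here are placeholders for the certificate 123-reg issued and their CA certificate:

    # server certificate first, then the CA/intermediate certificate, each ending with a newline
    cat cleantechlms_domain.crt 123reg_ca.crt > /opt/nginx/conf/cleantechlms.crt

    # should print the subject of the first (server) certificate without complaining
    openssl x509 -in /opt/nginx/conf/cleantechlms.crt -noout -subject
    # should equal the number of certificates you concatenated
    grep -c "BEGIN CERTIFICATE" /opt/nginx/conf/cleantechlms.crt

    # the key must pair with the first certificate: these two digests must match
    openssl x509 -noout -modulus -in /opt/nginx/conf/cleantechlms.crt | openssl md5
    openssl rsa  -noout -modulus -in /opt/nginx/conf/cleantechlms.key | openssl md5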

    Read the article

  • Resolving “ssl handshake failure” error in PostgreSQL

    - by Mitch
    I would like to connect to my Postgres 8.3 database using SSL from my XP client using OpenSSL. This works fine without SSL. When I try it with SSL (no client certificate), I get the error: error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure I have followed the instructions in the Postgres manual for SSL including creating a self-signed certificate. In my pg_hba.conf there is a line: host dbname loginname 123.45.67.89/32 md5 The version of OpenSSL on the server is 0.9.8g and on the client is 0.9.8j. I'd appreciate any suggestions for tracking down the problem. Edit: The uncommented lines from postgresql.conf are: data_directory = '/var/ebs0/postgres/main' hba_file = '/etc/postgresql/8.3/main/pg_hba.conf' ident_file = '/etc/postgresql/8.3/main/pg_ident.conf' external_pid_file = '/var/run/postgresql/8.3-main.pid' listen_addresses = '*' port = 5432 max_connections = 100 unix_socket_directory = '/var/run/postgresql' ssl = true shared_buffers = 24MB
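    Two things that are cheap to check, sketched here with the values from the question (the host name is a placeholder): whether the HBA entry is meant to match SSL connections at all, and whether an SSL-aware libpq client can negotiate when explicitly asked to.

    # pg_hba.conf: 'hostssl' matches only SSL connections ('host' matches both); reload the server after editing
    hostssl  dbname  loginname  123.45.67.89/32  md5

    # from the client, force an SSL attempt and watch the server log for the matching line
    psql "host=db.example.com port=5432 dbname=dbname user=loginname sslmode=require"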

    Read the article

  • Apache Tomcat Server failure

    - by Kenneth Ordona
    I'm trying to set up Apache Tomcat 6 with SSL and once I edited the server.xml file to include the following definitions the server started to fail as soon as I hit startup.bat: <-- Define a SSL Coyote HTTP/1.1 Connector on port 8443 -- < Connector protocol="org.apache.coyote.http11.Http11Protocol" port="8445" maxThreads="200" scheme="https" secure="true" SSLEnabled="true" keystoreFile="${user.home}/.tomcat" keystorePass="pnnlpw" clientAuth="false" sslProtocol="TLS"/ The logs that I have are as follows: Jul 05, 2012 1:52:15 PM org.apache.catalina.core.AprLifecycleListener init INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: C:\Program Files\Java\jdk1.7.0_05\bin;C:\WINDOWS\Sun\Java\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;. Jul 05, 2012 1:52:15 PM org.apache.tomcat.util.digester.Digester fatalError SEVERE: Parse Fatal Error at line 91 column 2: The content of elements must consist of well-formed character data or markup. org.xml.sax.SAXParseException; systemId: file://C/tomcat6/conf/server.xml; lineNumber: 91; columnNumber: 2; The content of elements must consist of well-formed character data or markup. at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:198) at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.fatalError(ErrorHandlerWrapper.java:177) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:441) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:368) at com.sun.org.apache.xerces.internal.impl.XMLScanner.reportFatalError(XMLScanner.java:1388) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.startOfMarkup(XMLDocumentFragmentScannerImpl.java:2565) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2663) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:607) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:488) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:835) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:764) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:123) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1210) at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:568) at org.apache.tomcat.util.digester.Digester.parse(Digester.java:1642) at org.apache.catalina.startup.Catalina.load(Catalina.java:524) at org.apache.catalina.startup.Catalina.load(Catalina.java:562) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at org.apache.catalina.startup.Bootstrap.load(Bootstrap.java:261) at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:413) Jul 05, 2012 1:52:15 PM org.apache.catalina.startup.Catalina load WARNING: Catalina.start using conf/server.xml: org.xml.sax.SAXParseException; 
systemId: file://C/tomcat6/conf/server.xml; lineNumber: 91; columnNumber: 2; The content of elements must consist of well-formed character data or markup. at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1236) at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:568) at org.apache.tomcat.util.digester.Digester.parse(Digester.java:1642) at org.apache.catalina.startup.Catalina.load(Catalina.java:524) at org.apache.catalina.startup.Catalina.load(Catalina.java:562) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at org.apache.catalina.startup.Bootstrap.load(Bootstrap.java:261) at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:413) Jul 05, 2012 1:52:15 PM org.apache.tomcat.util.digester.Digester fatalError SEVERE: Parse Fatal Error at line 91 column 2: The content of elements must consist of well-formed character data or markup. org.xml.sax.SAXParseException; systemId: file://C/tomcat6/conf/server.xml; lineNumber: 91; columnNumber: 2; The content of elements must consist of well-formed character data or markup. at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:198) at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.fatalError(ErrorHandlerWrapper.java:177) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:441) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:368) at com.sun.org.apache.xerces.internal.impl.XMLScanner.reportFatalError(XMLScanner.java:1388) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.startOfMarkup(XMLDocumentFragmentScannerImpl.java:2565) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2663) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:607) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:488) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:835) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:764) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:123) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1210) at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:568) at org.apache.tomcat.util.digester.Digester.parse(Digester.java:1642) at org.apache.catalina.startup.Catalina.load(Catalina.java:524) at org.apache.catalina.startup.Catalina.start(Catalina.java:582) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289) at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414) Jul 05, 2012 1:52:15 PM 
org.apache.catalina.startup.Catalina load WARNING: Catalina.start using conf/server.xml: org.xml.sax.SAXParseException; systemId: file://C/tomcat6/conf/server.xml; lineNumber: 91; columnNumber: 2; The content of elements must consist of well-formed character data or markup. at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1236) at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:568) at org.apache.tomcat.util.digester.Digester.parse(Digester.java:1642) at org.apache.catalina.startup.Catalina.load(Catalina.java:524) at org.apache.catalina.startup.Catalina.start(Catalina.java:582) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289) at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414) Jul 05, 2012 1:52:15 PM org.apache.catalina.startup.Catalina start SEVERE: Cannot start server. Server instance is not configured. Does anyone have an idea why this is happening? I believe it has to do with the configuration of my connector. I'm pretty new to this so any help would be much appreciated.
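    The SAX complaint about server.xml line 91 ("The content of elements must consist of well-formed character data or markup") is exactly what a malformed XML comment produces, and the pasted block starts with <-- instead of <!-- and never closes the Connector element. With the same keystore values, the corrected definition would look like the sketch below (note that the pasted comment says 8443 while the connector actually listens on 8445):

    <!-- Define an SSL Coyote HTTP/1.1 Connector on port 8445 -->
    <Connector protocol="org.apache.coyote.http11.Http11Protocol"
               port="8445" maxThreads="200"
               scheme="https" secure="true" SSLEnabled="true"
               keystoreFile="${user.home}/.tomcat" keystorePass="pnnlpw"
               clientAuth="false" sslProtocol="TLS" />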

    Read the article

  • Hosting 2 Sites Under 1 DiscountASP.Net Hosting Account

    - by Gav
    I've currently got an ASP.Net site hosted with DiscountASP.Net. What I would like to do now is put a second site in a subdirectory of this original site and register a new domain to point at the subfolder, so to the end user it's two completely separate sites. I've asked DiscountASP support if they could do this and they just say they don't support it and won't help. If I registered the second domain with someone like 123-reg, is it possible to make that domain point to a subdirectory of my main site? If not, I may have to put some sort of redirect on my root index page that sends you to the right page based on the URL you requested, but I would really rather not have to go there. I don't want to buy an additional account, as the second site is just a bit of a side project that may well go nowhere.
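    If the second domain ends up pointing at the same site root, the "redirect based on the requested URL" idea can be done as an internal rewrite in Global.asax instead of a visible redirect, so visitors keep the second domain in the address bar. A sketch; the domain and folder names are placeholders:

    // Global.asax.cs of the main site: serve /secondsite content when the request
    // arrives on the second domain, without exposing the subfolder to the visitor.
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        var app = (HttpApplication)sender;
        string host = app.Request.Url.Host;                       // e.g. "www.second-domain.example"
        string path = app.Request.Url.AbsolutePath;

        if (host.EndsWith("second-domain.example", StringComparison.OrdinalIgnoreCase)
            && !path.StartsWith("/secondsite", StringComparison.OrdinalIgnoreCase))
        {
            // internal rewrite: the browser's address bar keeps the second domain
            app.Context.RewritePath("/secondsite" + path);
        }
    }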

    Read the article

  • migration from jboss4.0.5GA to jboss5.1.0 GA

    - by rahul c
    Hi, Currently we have migrated from jboss4.0.5GA to jboss5.1.0 GA.Our application runs under jboss4.0.5GA well.The reason for migration to enable web service. I cann't find a way to create a web service on jboss 4.0.5 GA. while running on jboss 5.1.0 GA with jbossws-cxf-3.2.1.GA installed on it. i got following error 18:53:43,699 INFO [STDOUT] 2009-11-30 18:53:43,699 INFO org.springframework.core.CollectionFactory - JDK 1.4+ collections available 18:53:43,700 INFO [STDOUT] 2009-11-30 18:53:43,700 INFO org.springframework.core.CollectionFactory - Commons Collections 3.x available 18:53:43,973 INFO [STDOUT] 2009-11-30 18:53:43,972 INFO org.springframework.beans.factory.xml.XmlBeanDefinitionReader - Loading XML bean definitions from ServletContext resource [/WEB-INF/applicationContext.xml] 18:53:44,308 INFO [STDOUT] 2009-11-30 18:53:44,302 ERROR org.springframework.web.context.ContextLoader - Context initialization failed org.springframework.beans.factory.BeanDefinitionStoreException: Unexpected exception parsing XML document from ServletContext resource [/WEB-INF/applicationContext.xml]; nested exception is java.lang.IllegalArgumentException: Class [org.apache.cxf.transport.http_jetty.spring.NamespaceHandler] does not implement the NamespaceHandler interface Caused by: java.lang.IllegalArgumentException: Class [org.apache.cxf.transport.http_jetty.spring.NamespaceHandler] does not implement the NamespaceHandler interface at org.springframework.beans.factory.xml.DefaultNamespaceHandlerResolver.initHandlerMappings(DefaultNamespaceHandlerResolver.java:119) at org.springframework.beans.factory.xml.DefaultNamespaceHandlerResolver.(DefaultNamespaceHandlerResolver.java:96) at org.springframework.beans.factory.xml.DefaultNamespaceHandlerResolver.(DefaultNamespaceHandlerResolver.java:82) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.createDefaultNamespaceHandlerResolver(XmlBeanDefinitionReader.java:526) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.createReaderContext(XmlBeanDefinitionReader.java:515) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.registerBeanDefinitions(XmlBeanDefinitionReader.java:495) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:390) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:340) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:317) at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:125) at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:141) at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:123) at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:91) at org.springframework.context.support.AbstractRefreshableApplicationContext.refreshBeanFactory(AbstractRefreshableApplicationContext.java:94) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:292) at org.springframework.web.context.support.AbstractRefreshableWebApplicationContext.refresh(AbstractRefreshableWebApplicationContext.java:156) at 
org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:246) at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:184) at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:49) at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:3910) at org.apache.catalina.core.StandardContext.start(StandardContext.java:4393) at org.jboss.web.tomcat.service.deployers.TomcatDeployment.performDeployInternal(TomcatDeployment.java:310) at org.jboss.web.tomcat.service.deployers.TomcatDeployment.performDeploy(TomcatDeployment.java:142) at org.jboss.web.deployers.AbstractWarDeployment.start(AbstractWarDeployment.java:461) at org.jboss.web.deployers.WebModule.startModule(WebModule.java:118) at org.jboss.web.deployers.WebModule.start(WebModule.java:97) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.jboss.mx.interceptor.ReflectedDispatcher.invoke(ReflectedDispatcher.java:157) at org.jboss.mx.server.Invocation.dispatch(Invocation.java:96) at org.jboss.mx.server.Invocation.invoke(Invocation.java:88) at org.jboss.mx.server.AbstractMBeanInvoker.invoke(AbstractMBeanInvoker.java:264) at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:668) at org.jboss.system.microcontainer.ServiceProxy.invoke(ServiceProxy.java:206) at $Proxy38.start(Unknown Source) at org.jboss.system.microcontainer.StartStopLifecycleAction.installAction(StartStopLifecycleAction.java:42) at org.jboss.system.microcontainer.StartStopLifecycleAction.installAction(StartStopLifecycleAction.java:37) at org.jboss.dependency.plugins.action.SimpleControllerContextAction.simpleInstallAction(SimpleControllerContextAction.java:62) at org.jboss.dependency.plugins.action.AccessControllerContextAction.install(AccessControllerContextAction.java:71) at org.jboss.dependency.plugins.AbstractControllerContextActions.install(AbstractControllerContextActions.java:51) at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348) at org.jboss.system.microcontainer.ServiceControllerContext.install(ServiceControllerContext.java:286) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1631) at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1082) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553) at org.jboss.system.ServiceController.doChange(ServiceController.java:688) at org.jboss.system.ServiceController.start(ServiceController.java:460) at org.jboss.system.deployers.ServiceDeployer.start(ServiceDeployer.java:163) at org.jboss.system.deployers.ServiceDeployer.deploy(ServiceDeployer.java:99) at org.jboss.system.deployers.ServiceDeployer.deploy(ServiceDeployer.java:46) at org.jboss.deployers.spi.deployer.helpers.AbstractSimpleRealDeployer.internalDeploy(AbstractSimpleRealDeployer.java:62) at 
org.jboss.deployers.spi.deployer.helpers.AbstractRealDeployer.deploy(AbstractRealDeployer.java:50) at org.jboss.deployers.plugins.deployers.DeployerWrapper.deploy(DeployerWrapper.java:171) at org.jboss.deployers.plugins.deployers.DeployersImpl.doDeploy(DeployersImpl.java:1439) at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1157) at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1178) at org.jboss.deployers.plugins.deployers.DeployersImpl.install(DeployersImpl.java:1098) at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1631) at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1082) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553) at org.jboss.deployers.plugins.deployers.DeployersImpl.process(DeployersImpl.java:781) at org.jboss.deployers.plugins.main.MainDeployerImpl.process(MainDeployerImpl.java:702) at org.jboss.system.server.profileservice.repository.MainDeployerAdapter.process(MainDeployerAdapter.java:117) at org.jboss.system.server.profileservice.repository.ProfileDeployAction.install(ProfileDeployAction.java:70) at org.jboss.system.server.profileservice.repository.AbstractProfileAction.install(AbstractProfileAction.java:53) at org.jboss.system.server.profileservice.repository.AbstractProfileService.install(AbstractProfileService.java:361) at org.jboss.dependency.plugins.AbstractControllerContext. 
18:53:44,313 INFO [STDOUT] install(AbstractControllerContext.java:348) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1631) at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1082) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553) at org.jboss.system.server.profileservice.repository.AbstractProfileService.activateProfile(AbstractProfileService.java:306) at org.jboss.system.server.profileservice.ProfileServiceBootstrap.start(ProfileServiceBootstrap.java:271) at org.jboss.bootstrap.AbstractServerImpl.start(AbstractServerImpl.java:461) at org.jboss.Main.boot(Main.java:221) at org.jboss.Main$1.run(Main.java:556) at java.lang.Thread.run(Thread.java:619) 18:53:44,315 ERROR [[/smartLMS]] Exception sending context initialized event to listener instance of class org.springframework.web.context.ContextLoaderListener org.springframework.beans.factory.BeanDefinitionStoreException: Unexpected exception parsing XML document from ServletContext resource [/WEB-INF/applicationContext.xml]; nested exception is java.lang.IllegalArgumentException: Class [org.apache.cxf.transport.http_jetty.spring.NamespaceHandler] does not implement the NamespaceHandler interface Caused by: java.lang.IllegalArgumentException: Class [org.apache.cxf.transport.http_jetty.spring.NamespaceHandler] does not implement the NamespaceHandler interface at org.springframework.beans.factory.xml.DefaultNamespaceHandlerResolver.initHandlerMappings(DefaultNamespaceHandlerResolver.java:119) at org.springframework.beans.factory.xml.DefaultNamespaceHandlerResolver.(DefaultNamespaceHandlerResolver.java:96) at org.springframework.beans.factory.xml.DefaultNamespaceHandlerResolver.(DefaultNamespaceHandlerResolver.java:82) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.createDefaultNamespaceHandlerResolver(XmlBeanDefinitionReader.java:526) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.createReaderContext(XmlBeanDefinitionReader.java:515) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.registerBeanDefinitions(XmlBeanDefinitionReader.java:495) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:390) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:340) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:317) at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:125) at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:141) at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:123) Thanks, Rahul.

    Read the article

  • ProxyPass: discard body data

    - by Kay
    I have some rules like <Location /xyz> ProxyPass http://example.com/abc ... </Location> I want to accept requests to http://mypage.lan/xyz/123 and deliver the data of http://example.com/abc/123. I need to accept POST requests, but I don't want to send the body content to example.com. I would like to send a GET request instead, but a POST request with Content-Length: 0 would be fine, too. Is it possible to configure Apache 2 not to forward the request body?

    Read the article

  • Why can blocked IPs get through my iptables? What's wrong with this configuration?

    - by NeedSomeHelp
    (Why can/How are) blocked IPs (get/getting) through my iptables? Hello and thanks for your consideration... I have configured iptables and included (below) output from the command "iptables --line-numbers -n -L" yet IP addresses (like 31.41.219.180) from IP blocks I have already blocked are getting through. Please take a look and share any input you may have. Thank you. P.S. The initial ACCEPT IP addresses are for CloudFlare. . Chain INPUT (policy DROP 0 packets, 0 bytes) num pkts bytes target prot opt in out source destination 1 32267 14M ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 2 0 0 REJECT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp flags:!0x17/0x02 state NEW reject-with tcp-reset 3 149 8570 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 state INVALID 4 434 25606 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 5 0 0 ACCEPT udp -- * * 103.21.244.0/22 0.0.0.0/0 6 0 0 ACCEPT udp -- * * 103.22.200.0/22 0.0.0.0/0 7 0 0 ACCEPT udp -- * * 103.31.4.0/22 0.0.0.0/0 8 0 0 ACCEPT udp -- * * 104.16.0.0/12 0.0.0.0/0 9 0 0 ACCEPT udp -- * * 108.162.192.0/18 0.0.0.0/0 10 0 0 ACCEPT udp -- * * 141.101.64.0/18 0.0.0.0/0 11 0 0 ACCEPT udp -- * * 162.158.0.0/15 0.0.0.0/0 12 0 0 ACCEPT udp -- * * 173.245.48.0/20 0.0.0.0/0 13 0 0 ACCEPT udp -- * * 188.114.96.0/20 0.0.0.0/0 14 0 0 ACCEPT udp -- * * 190.93.240.0/20 0.0.0.0/0 15 0 0 ACCEPT udp -- * * 197.234.240.0/22 0.0.0.0/0 16 0 0 ACCEPT udp -- * * 198.41.128.0/17 0.0.0.0/0 17 0 0 ACCEPT udp -- * * 199.27.128.0/21 0.0.0.0/0 18 0 0 ACCEPT tcp -- * * 103.21.244.0/22 0.0.0.0/0 19 9 468 ACCEPT tcp -- * * 103.22.200.0/22 0.0.0.0/0 20 0 0 ACCEPT tcp -- * * 103.31.4.0/22 0.0.0.0/0 21 0 0 ACCEPT tcp -- * * 104.16.0.0/12 0.0.0.0/0 22 858 44616 ACCEPT tcp -- * * 108.162.192.0/18 0.0.0.0/0 23 376 19552 ACCEPT tcp -- * * 141.101.64.0/18 0.0.0.0/0 24 0 0 ACCEPT tcp -- * * 162.158.0.0/15 0.0.0.0/0 25 257 13364 ACCEPT tcp -- * * 173.245.48.0/20 0.0.0.0/0 26 0 0 ACCEPT tcp -- * * 188.114.96.0/20 0.0.0.0/0 27 0 0 ACCEPT tcp -- * * 190.93.240.0/20 0.0.0.0/0 28 0 0 ACCEPT tcp -- * * 197.234.240.0/22 0.0.0.0/0 29 0 0 ACCEPT tcp -- * * 198.41.128.0/17 0.0.0.0/0 30 92 4784 ACCEPT tcp -- * * 199.27.128.0/21 0.0.0.0/0 31 0 0 DROP tcp -- * * 1.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 32 0 0 DROP tcp -- * * 101.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 33 0 0 DROP tcp -- * * 102.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 34 0 0 DROP tcp -- * * 103.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 35 18 1080 DROP tcp -- * * 109.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 36 0 0 DROP tcp -- * * 112.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 37 12 656 DROP tcp -- * * 113.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 38 0 0 DROP tcp -- * * 114.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 39 0 0 DROP tcp -- * * 115.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 40 8 352 DROP tcp -- * * 116.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 41 0 0 DROP tcp -- * * 117.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 42 0 0 DROP tcp -- * * 118.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 43 2 120 DROP tcp -- * * 119.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 44 0 0 DROP tcp -- * * 120.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 45 0 0 DROP tcp -- * * 121.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 46 4 160 DROP tcp -- * * 122.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 47 4 240 DROP tcp -- * * 123.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 48 0 0 DROP tcp -- * * 125.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 49 0 0 DROP tcp -- * * 134.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 50 0 0 DROP tcp -- * * 146.185.0.0/16 0.0.0.0/0 tcp dpts:1:50000 51 6 360 DROP tcp -- * * 148.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 52 0 0 DROP tcp -- * * 151.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 53 0 0 
DROP tcp -- * * 175.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 54 0 0 DROP tcp -- * * 176.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 55 0 0 DROP tcp -- * * 177.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 56 46 2696 DROP tcp -- * * 178.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 57 0 0 DROP tcp -- * * 179.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 58 4 224 DROP tcp -- * * 180.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 59 0 0 DROP tcp -- * * 181.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 60 0 0 DROP tcp -- * * 182.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 61 34 2040 DROP tcp -- * * 183.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 62 0 0 DROP tcp -- * * 185.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 63 0 0 DROP tcp -- * * 186.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 64 0 0 DROP tcp -- * * 187.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 65 18 912 DROP tcp -- * * 188.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 66 0 0 DROP tcp -- * * 189.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 67 0 0 DROP tcp -- * * 190.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 68 2 120 DROP tcp -- * * 192.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 69 0 0 DROP tcp -- * * 196.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 70 0 0 DROP tcp -- * * 197.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 71 5 300 DROP tcp -- * * 198.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 72 0 0 DROP tcp -- * * 2.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 73 0 0 DROP tcp -- * * 200.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 74 0 0 DROP tcp -- * * 201.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 75 6 360 DROP tcp -- * * 202.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 76 0 0 DROP tcp -- * * 203.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 77 4 160 DROP tcp -- * * 210.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 78 0 0 DROP tcp -- * * 211.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 79 2 96 DROP tcp -- * * 212.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 80 4 240 DROP tcp -- * * 213.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 81 0 0 DROP tcp -- * * 214.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 82 0 0 DROP tcp -- * * 215.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 83 0 0 DROP tcp -- * * 216.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 84 0 0 DROP tcp -- * * 217.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 85 4 172 DROP tcp -- * * 218.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 86 12 576 DROP tcp -- * * 219.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 87 7 372 DROP tcp -- * * 220.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 88 0 0 DROP tcp -- * * 222.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 89 0 0 DROP tcp -- * * 27.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 90 12 608 DROP tcp -- * * 31.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 91 11 528 DROP tcp -- * * 37.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 92 0 0 DROP tcp -- * * 41.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 93 0 0 DROP tcp -- * * 42.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 94 0 0 DROP tcp -- * * 43.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 95 8 480 DROP tcp -- * * 46.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 96 0 0 DROP tcp -- * * 49.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 97 6 360 DROP tcp -- * * 5.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 98 0 0 DROP tcp -- * * 58.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 99 0 0 DROP tcp -- * * 60.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 100 4 160 DROP tcp -- * * 61.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 101 32 1848 DROP tcp -- * * 62.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 102 0 0 DROP tcp -- * * 63.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 103 20 1200 DROP tcp -- * * 64.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 104 0 0 DROP tcp -- * * 65.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 105 266 15960 DROP tcp -- * * 66.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 106 3 180 DROP tcp -- * * 69.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 107 5 272 DROP tcp -- * * 72.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 108 0 0 DROP tcp -- * * 78.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 109 0 0 DROP tcp -- * * 81.0.0.0/8 
0.0.0.0/0 tcp dpts:1:50000 110 3 180 DROP tcp -- * * 82.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 111 0 0 DROP tcp -- * * 83.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 112 8 384 DROP tcp -- * * 84.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 113 0 0 DROP tcp -- * * 85.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 114 0 0 DROP tcp -- * * 86.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 115 6 360 DROP tcp -- * * 87.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 116 7 408 DROP tcp -- * * 88.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 117 0 0 DROP tcp -- * * 89.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 118 0 0 DROP tcp -- * * 90.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 119 0 0 DROP tcp -- * * 91.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 120 3 152 DROP tcp -- * * 92.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 121 20 992 DROP tcp -- * * 93.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 122 9 512 DROP tcp -- * * 94.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 123 5 272 DROP tcp -- * * 95.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 124 0 0 DROP udp -- * * 1.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 125 0 0 DROP udp -- * * 101.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 126 0 0 DROP udp -- * * 102.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 127 0 0 DROP udp -- * * 103.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 128 0 0 DROP udp -- * * 109.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 129 0 0 DROP udp -- * * 112.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 130 0 0 DROP udp -- * * 113.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 131 0 0 DROP udp -- * * 114.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 132 1 112 DROP udp -- * * 115.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 133 0 0 DROP udp -- * * 116.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 134 0 0 DROP udp -- * * 117.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 135 0 0 DROP udp -- * * 118.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 136 0 0 DROP udp -- * * 119.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 137 0 0 DROP udp -- * * 120.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 138 0 0 DROP udp -- * * 121.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 139 0 0 DROP udp -- * * 122.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 140 0 0 DROP udp -- * * 123.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 141 0 0 DROP udp -- * * 125.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 142 0 0 DROP udp -- * * 134.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 143 0 0 DROP udp -- * * 146.185.0.0/16 0.0.0.0/0 udp dpts:1:50000 144 0 0 DROP udp -- * * 148.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 145 0 0 DROP udp -- * * 151.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 146 0 0 DROP udp -- * * 175.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 147 0 0 DROP udp -- * * 176.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 148 1 70 DROP udp -- * * 177.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 149 0 0 DROP udp -- * * 178.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 150 0 0 DROP udp -- * * 179.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 151 0 0 DROP udp -- * * 180.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 152 0 0 DROP udp -- * * 181.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 153 0 0 DROP udp -- * * 182.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 154 0 0 DROP udp -- * * 183.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 155 0 0 DROP udp -- * * 185.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 156 1 74 DROP udp -- * * 186.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 157 0 0 DROP udp -- * * 187.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 158 0 0 DROP udp -- * * 188.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 159 0 0 DROP udp -- * * 189.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 160 0 0 DROP udp -- * * 190.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 161 0 0 DROP udp -- * * 192.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 162 0 0 DROP udp -- * * 196.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 163 0 0 DROP udp -- * * 197.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 164 0 0 DROP udp -- * * 198.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 165 0 0 DROP udp -- * * 2.0.0.0/8 0.0.0.0/0 udp 
dpts:1:50000 166 0 0 DROP udp -- * * 200.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 167 0 0 DROP udp -- * * 201.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 168 0 0 DROP udp -- * * 202.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 169 0 0 DROP udp -- * * 203.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 170 0 0 DROP udp -- * * 210.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 171 0 0 DROP udp -- * * 211.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 172 0 0 DROP udp -- * * 212.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 173 0 0 DROP udp -- * * 213.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 174 0 0 DROP udp -- * * 214.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 175 0 0 DROP udp -- * * 215.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 176 0 0 DROP udp -- * * 216.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 177 0 0 DROP udp -- * * 217.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 178 1 80 DROP udp -- * * 218.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 179 0 0 DROP udp -- * * 219.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 180 0 0 DROP udp -- * * 220.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 181 0 0 DROP udp -- * * 222.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 182 0 0 DROP udp -- * * 27.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 183 0 0 DROP udp -- * * 31.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 184 0 0 DROP udp -- * * 37.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 185 0 0 DROP udp -- * * 41.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 186 0 0 DROP udp -- * * 42.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 187 0 0 DROP udp -- * * 43.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 188 0 0 DROP udp -- * * 46.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 189 0 0 DROP udp -- * * 49.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 190 0 0 DROP udp -- * * 5.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 191 0 0 DROP udp -- * * 58.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 192 0 0 DROP udp -- * * 60.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 193 0 0 DROP udp -- * * 61.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 194 0 0 DROP udp -- * * 62.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 195 0 0 DROP udp -- * * 63.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 196 0 0 DROP udp -- * * 64.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 197 0 0 DROP udp -- * * 65.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 198 0 0 DROP udp -- * * 66.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 199 0 0 DROP udp -- * * 69.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 200 0 0 DROP udp -- * * 72.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 201 0 0 DROP udp -- * * 78.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 202 0 0 DROP udp -- * * 81.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 203 0 0 DROP udp -- * * 82.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 204 0 0 DROP udp -- * * 83.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 205 0 0 DROP udp -- * * 84.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 206 0 0 DROP udp -- * * 85.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 207 0 0 DROP udp -- * * 86.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 208 0 0 DROP udp -- * * 87.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 209 0 0 DROP udp -- * * 88.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 210 0 0 DROP udp -- * * 89.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 211 0 0 DROP udp -- * * 90.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 212 0 0 DROP udp -- * * 91.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 213 0 0 DROP udp -- * * 92.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 214 2 72 DROP udp -- * * 93.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 215 0 0 DROP udp -- * * 94.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 216 0 0 DROP udp -- * * 95.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 217 0 0 DROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:12443 218 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:11443 219 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:11444 220 23 1104 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8447 221 24 1152 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8443 222 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8880 
223 207 11096 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 224 19 996 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:443 225 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:21 226 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 227 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:587 228 4 216 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:25 229 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:465 230 14 840 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:110 231 2 120 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:995 232 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:143 233 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:993 234 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:106 235 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:3306 236 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:5432 237 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:9008 238 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:9080 239 0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 udp dpt:137 240 0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 udp dpt:138 241 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:139 242 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:445 243 0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 udp dpt:1194 244 0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 udp dpt:53 245 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:53 246 73 4488 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0 icmp type 8 code 0 247 77 23598 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain FORWARD (policy DROP 0 packets, 0 bytes) num pkts bytes target prot opt in out source destination 1 0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 2 0 0 REJECT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp flags:!0x17/0x02 state NEW reject-with tcp-reset 3 0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 state INVALID 4 0 0 ACCEPT all -- lo lo 0.0.0.0/0 0.0.0.0/0 5 0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain OUTPUT (policy DROP 0 packets, 0 bytes) num pkts bytes target prot opt in out source destination 1 31004 25M ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 2 1 333 REJECT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp flags:!0x17/0x02 state NEW reject-with tcp-reset 3 0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 state INVALID 4 434 25606 ACCEPT all -- * lo 0.0.0.0/0 0.0.0.0/0 5 328 21324 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0
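    One plausible explanation given this rule set: rules 5-30 accept the CloudFlare networks before any of the /8 DROP rules, and proxied HTTP requests arrive from those CloudFlare addresses, so the visitor's own IP (for example 31.41.219.180) never appears as the layer-3 source - it only travels in the CF-Connecting-IP header, which iptables cannot see. A hedged way to confirm which source addresses actually open port-80 connections:

    # temporary diagnostic: log the layer-3 source of new port-80 connections
    iptables -I INPUT 1 -p tcp --dport 80 -m state --state NEW \
             -j LOG --log-prefix "http-src: " --log-level 4

    # watch the kernel log (kern.log / messages / syslog depending on the distro);
    # if SRC= is always a CloudFlare range, the /8 blocks have to move to the web
    # server or to CloudFlare's own firewall, keyed on the CF-Connecting-IP header
    tail -f /var/log/messages | grep "http-src: "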

    Read the article

  • iptables syn flood countermeasure

    - by Penegal
    I'm trying to adjust my iptables firewall to increase the security of my server, and I found something a bit problematic here : I have to set INPUT policy to ACCEPT and, in addition, to have a rule saying iptables -I INPUT -i eth0 -j ACCEPT. Here comes my script (launched manually for tests) : #!/bin/sh IPT=/sbin/iptables echo "Clearing firewall rules" $IPT -F $IPT -Z $IPT -t nat -F $IPT -t nat -Z $IPT -t mangle -F $IPT -t mangle -Z $IPT -X echo "Defining logging policy for dropped packets" $IPT -N LOGDROP $IPT -A LOGDROP -j LOG -m limit --limit 5/min --log-level debug --log-prefix "iptables rejected: " $IPT -A LOGDROP -j DROP echo "Setting firewall policy" $IPT -P INPUT DROP # Deny all incoming connections $IPT -P OUTPUT ACCEPT # Allow all outgoing connections $IPT -P FORWARD DROP # Deny all forwaring echo "Allowing connections from/to lo and incoming connections from eth0" $IPT -I INPUT -i lo -j ACCEPT $IPT -I OUTPUT -o lo -j ACCEPT #$IPT -I INPUT -i eth0 -j ACCEPT echo "Setting SYN flood countermeasures" $IPT -A INPUT -p tcp -i eth0 --syn -m limit --limit 100/second --limit-burst 200 -j LOGDROP echo "Allowing outgoing traffic corresponding to already initiated connections" $IPT -A OUTPUT -p ALL -m state --state ESTABLISHED,RELATED -j ACCEPT echo "Allowing incoming SSH" $IPT -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --set --name SSH -j ACCEPT echo "Setting SSH bruteforce attacks countermeasures (deny more than 10 connections every 10 minutes)" $IPT -A INPUT -p tcp --dport 22 -m recent --update --seconds 600 --hitcount 10 --rttl --name SSH -j LOGDROP echo "Allowing incoming traffic for HTTP, SMTP, NTP, PgSQL and SolR" $IPT -A INPUT -p tcp --dport 25 -i eth0 -j ACCEPT $IPT -A INPUT -p tcp --dport 80 -i eth0 -j ACCEPT $IPT -A INPUT -p udp --dport 123 -i eth0 -j ACCEPT $IPT -A INPUT -p tcp --dport 5433 -i eth0.2654 -s 172.16.0.2 -j ACCEPT $IPT -A INPUT -p udp --dport 5433 -i eth0.2654 -s 172.16.0.2 -j ACCEPT $IPT -A INPUT -p tcp --dport 8983 -i eth0.2654 -s 172.16.0.2 -j ACCEPT $IPT -A INPUT -p udp --dport 8983 -i eth0.2654 -s 172.16.0.2 -j ACCEPT echo "Allowing outgoing traffic for ICMP, SSH, whois, SMTP, DNS, HTTP, PgSQL and SolR" $IPT -A OUTPUT -p tcp --dport 22 -j ACCEPT $IPT -A OUTPUT -p tcp --dport 25 -o eth0 -j ACCEPT $IPT -A OUTPUT -p tcp --dport 43 -o eth0 -j ACCEPT $IPT -A OUTPUT -p tcp --dport 53 -o eth0 -j ACCEPT $IPT -A OUTPUT -p udp --dport 53 -o eth0 -j ACCEPT $IPT -A OUTPUT -p tcp --dport 80 -o eth0 -j ACCEPT $IPT -A OUTPUT -p udp --dport 80 -o eth0 -j ACCEPT #$IPT -A OUTPUT -p tcp --dport 5433 -o eth0 -d 176.31.236.101 -j ACCEPT #$IPT -A OUTPUT -p udp --dport 5433 -o eth0 -d 176.31.236.101 -j ACCEPT #$IPT -A OUTPUT -p tcp --dport 8983 -o eth0 -d 176.31.236.101 -j ACCEPT #$IPT -A OUTPUT -p udp --dport 8983 -o eth0 -d 176.31.236.101 -j ACCEPT $IPT -A OUTPUT -p tcp --sport 5433 -o eth0.2654 -j ACCEPT $IPT -A OUTPUT -p udp --sport 5433 -o eth0.2654 -j ACCEPT $IPT -A OUTPUT -p tcp --sport 8983 -o eth0.2654 -j ACCEPT $IPT -A OUTPUT -p udp --sport 8983 -o eth0.2654 -j ACCEPT $IPT -A OUTPUT -p icmp -j ACCEPT echo "Allowing outgoing FTP backup" $IPT -A OUTPUT -p tcp --dport 20:21 -o eth0 -d 91.121.190.78 -j ACCEPT echo "Dropping and logging everything else" $IPT -A INPUT -s 0/0 -j LOGDROP $IPT -A OUTPUT -j LOGDROP $IPT -A FORWARD -j LOGDROP echo "Firewall loaded." 
echo "Maintaining new rules for 3 minutes for tests" sleep 180 $IPT -nvL echo "Clearing firewall rules" $IPT -F $IPT -Z $IPT -t nat -F $IPT -t nat -Z $IPT -t mangle -F $IPT -t mangle -Z $IPT -X $IPT -P INPUT ACCEPT $IPT -P OUTPUT ACCEPT $IPT -P FORWARD ACCEPT When I launch this script (I only have a SSH access), the shell displays every message up to Maintaining new rules for 3 minutes for tests, the server is unresponsive during the 3 minutes delay and then resume normal operations. The only solution I found until now was to set $IPT -P INPUT ACCEPT and $IPT -I INPUT -i eth0 -j ACCEPT, but this configuration does not protect me of any attack, which is a great shame for a firewall. I suspect that the error comes from my script and not from iptables, but I don't understand what's wrong with my script. Could some do-gooder explain me my error, please? EDIT: here comes the result of iptables -nvL with the "accept all input" ($IPT -P INPUT ACCEPT and $IPT -I INPUT -i eth0 -j ACCEPT) solution : Chain INPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 1 52 ACCEPT all -- eth0 * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 0 0 LOGDROP tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp flags:0x17/0x02 limit: avg 100/sec burst 200 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 state NEW recent: SET name: SSH side: source 0 0 LOGDROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 recent: UPDATE seconds: 600 hit_count: 10 TTL-Match name: SSH side: source 0 0 ACCEPT tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:25 0 0 ACCEPT tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 0 0 ACCEPT udp -- eth0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:123 0 0 ACCEPT tcp -- eth0.2654 * 172.16.0.2 0.0.0.0/0 tcp dpt:5433 0 0 ACCEPT udp -- eth0.2654 * 172.16.0.2 0.0.0.0/0 udp dpt:5433 0 0 ACCEPT tcp -- eth0.2654 * 172.16.0.2 0.0.0.0/0 tcp dpt:8983 0 0 ACCEPT udp -- eth0.2654 * 172.16.0.2 0.0.0.0/0 udp dpt:8983 0 0 LOGDROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 0 0 LOGDROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 0 0 ACCEPT all -- * lo 0.0.0.0/0 0.0.0.0/0 2 728 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:25 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:43 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:53 0 0 ACCEPT udp -- * eth0 0.0.0.0/0 0.0.0.0/0 udp dpt:53 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 0 0 ACCEPT udp -- * eth0 0.0.0.0/0 0.0.0.0/0 udp dpt:80 0 0 ACCEPT tcp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 tcp spt:5433 0 0 ACCEPT udp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 udp spt:5433 0 0 ACCEPT tcp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 tcp spt:8983 0 0 ACCEPT udp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 udp spt:8983 0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 91.121.190.78 tcp dpts:20:21 0 0 LOGDROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain LOGDROP (5 references) pkts bytes target prot opt in out source destination 0 0 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 limit: avg 5/min burst 5 LOG flags 0 level 7 prefix `iptables rejected: ' 0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 EDIT #2 : I modified my script (policy ACCEPT, defining authorized incoming packets then logging and dropping everything else) to write iptables -nvL results to a file and to allow only 
10 ICMP requests per second, logging and dropping everything else. The result proved unexpected : while the server was unavailable to SSH connections, even already established, I ping-flooded it from another server, and the ping rate was restricted to 10 requests per second. During this test, I also tried to open new SSH connections, which remained unanswered until the script flushed rules. Here comes the iptables stats written after these tests : Chain INPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 600 35520 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 6 360 LOGDROP tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp flags:0x17/0x02 limit: avg 100/sec burst 200 0 0 LOGDROP tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 STRING match "w00tw00t.at.ISC.SANS." ALGO name bm TO 65535 0 0 LOGDROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 STRING match "Host: anoticiapb.com.br" ALGO name bm TO 65535 0 0 LOGDROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 STRING match "Host: www.anoticiapb.com.br" ALGO name bm TO 65535 105 8820 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0 limit: avg 10/sec burst 5 830 69720 LOGDROP icmp -- * * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 state NEW recent: SET name: SSH side: source 0 0 LOGDROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 recent: UPDATE seconds: 600 hit_count: 10 TTL-Match name: SSH side: source 0 0 ACCEPT tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:25 0 0 ACCEPT tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 0 0 ACCEPT udp -- eth0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:80 0 0 ACCEPT udp -- eth0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:123 0 0 ACCEPT tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:443 0 0 ACCEPT tcp -- eth0.2654 * 172.16.0.1 0.0.0.0/0 tcp spt:5433 0 0 ACCEPT udp -- eth0.2654 * 172.16.0.1 0.0.0.0/0 udp spt:5433 0 0 ACCEPT tcp -- eth0.2654 * 172.16.0.1 0.0.0.0/0 tcp spt:8983 0 0 ACCEPT udp -- eth0.2654 * 172.16.0.1 0.0.0.0/0 udp spt:8983 16 1684 LOGDROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 0 0 LOGDROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 600 35520 ACCEPT all -- * lo 0.0.0.0/0 0.0.0.0/0 0 0 LOGDROP tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 owner UID match 33 0 0 LOGDROP udp -- * eth0 0.0.0.0/0 0.0.0.0/0 udp dpt:80 owner UID match 33 116 11136 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:25 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:53 0 0 ACCEPT udp -- * eth0 0.0.0.0/0 0.0.0.0/0 udp dpt:53 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 0 0 ACCEPT udp -- * eth0 0.0.0.0/0 0.0.0.0/0 udp dpt:80 0 0 ACCEPT tcp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 tcp dpt:5433 0 0 ACCEPT udp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 udp dpt:5433 0 0 ACCEPT tcp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 tcp dpt:8983 0 0 ACCEPT udp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 udp dpt:8983 0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:43 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 91.121.190.18 tcp dpts:20:21 7 1249 LOGDROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain LOGDROP (11 references) pkts bytes target prot opt in out source destination 35 3156 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 limit: avg 1/sec burst 5 LOG flags 0 level 7 prefix `iptables rejected: ' 859 73013 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 Here 
comes the log content added during this test : Mar 28 09:52:51 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=52 TOS=0x00 PREC=0x00 TTL=51 ID=55666 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0 Mar 28 09:52:51 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=52 TOS=0x00 PREC=0x00 TTL=51 ID=55667 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0 Mar 28 09:52:51 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=64 TOS=0x00 PREC=0x00 TTL=51 ID=55668 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0 Mar 28 09:52:51 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=64 TOS=0x00 PREC=0x00 TTL=51 ID=55669 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0 Mar 28 09:52:52 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=64 TOS=0x00 PREC=0x00 TTL=51 ID=55670 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0 Mar 28 09:52:54 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=64 TOS=0x00 PREC=0x00 TTL=51 ID=55671 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0 Mar 28 09:52:58 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=64 TOS=0x00 PREC=0x00 TTL=51 ID=55672 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0 Mar 28 09:52:59 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=6 Mar 28 09:52:59 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=7 Mar 28 09:52:59 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=8 Mar 28 09:52:59 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=9 Mar 28 09:52:59 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=59 Mar 28 09:53:00 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=152 Mar 28 09:53:01 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=246 Mar 28 09:53:02 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 
TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=339 Mar 28 09:53:03 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=432 Mar 28 09:53:04 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=524 Mar 28 09:53:05 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=617 Mar 28 09:53:06 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=711 Mar 28 09:53:07 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=804 Mar 28 09:53:08 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=897 Mar 28 09:53:16 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:c0:62:6b:e3:5c:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=61402 DF PROTO=TCP SPT=57637 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0 Mar 28 09:53:19 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:c0:62:6b:e3:5c:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=61403 DF PROTO=TCP SPT=57637 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0 Mar 28 09:53:21 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=64 TOS=0x00 PREC=0x00 TTL=51 ID=55674 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0 Mar 28 09:53:25 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:c0:62:6b:e3:5c:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=61404 DF PROTO=TCP SPT=57637 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0 Mar 28 09:53:37 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=116 TOS=0x00 PREC=0x00 TTL=51 ID=55675 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0 Mar 28 09:53:37 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=116 TOS=0x00 PREC=0x00 TTL=51 ID=55676 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0 Mar 28 09:53:37 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=180 TOS=0x00 PREC=0x00 TTL=51 ID=55677 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0 Mar 28 09:53:38 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=180 TOS=0x00 PREC=0x00 TTL=51 ID=55678 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0 Mar 28 09:53:39 localhost kernel: iptables rejected: IN=eth0 OUT= 
MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=180 TOS=0x00 PREC=0x00 TTL=51 ID=55679 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0 Mar 28 09:53:39 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:c0:62:6b:e3:5c:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=5055 DF PROTO=TCP SPT=57638 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0 Mar 28 09:53:41 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=180 TOS=0x00 PREC=0x00 TTL=51 ID=55680 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0 Mar 28 09:53:42 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:c0:62:6b:e3:5c:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=5056 DF PROTO=TCP SPT=57638 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0 Mar 28 09:53:45 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=180 TOS=0x00 PREC=0x00 TTL=51 ID=55681 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0 Mar 28 09:53:48 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:c0:62:6b:e3:5c:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=5057 DF PROTO=TCP SPT=57638 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0 If I correctly interpreted these results, they say that ICMP rules were correctly interpreted by iptables, but SSH rules were not. This does not make any sense... Does somebody understand where my error comes from? EDIT #3 : After some more tests, I found out that commenting the SYN flood countermeasure removes the problem. I continue researches in this way but, meanwhile, if somebody sees my anti SYN flood rule error...
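For reference, here is a minimal sketch of the limit-based layout that is usually recommended, assuming (from the counters above rather than from the script itself) that the current rule jumps matching SYNs straight to LOGDROP: with -m limit it is the packets *within* the allowed rate that match, so such a rule drops every ordinary SYN, including new SSH connection attempts, which would fit EDIT #3. The conventional arrangement lets in-limit SYNs continue down the chain and only drops the excess (the SYNFLOOD chain name is just an example; LOGDROP is the chain from the script):

    IPT=/sbin/iptables
    # dedicated chain: SYNs within the limit RETURN to INPUT and are filtered
    # normally; only the excess above the rate is logged and dropped
    $IPT -N SYNFLOOD
    $IPT -A SYNFLOOD -m limit --limit 100/second --limit-burst 200 -j RETURN
    $IPT -A SYNFLOOD -j LOGDROP
    # send every new SYN arriving on eth0 through that chain
    $IPT -A INPUT -i eth0 -p tcp --syn -j SYNFLOOD

A second observation from the log above: ACK packets belonging to the already established SSH session also end up in the final LOGDROP, since the INPUT chain has no ESTABLISHED,RELATED accept rule (the OUTPUT chain does have one), so adding a state rule to INPUT would be worth considering as well.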

    Read the article

  • Resolving “ssl handshake failure” error in PostgreSQL

    - by Mitch
    I would like to connect to my Postgres 8.3 database using SSL from my XP client via OpenSSL. This works fine without SSL. When I try it with SSL (no client certificate), I get the error:
        error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure
    I have followed the instructions in the Postgres manual for SSL, including creating a self-signed certificate. In my pg_hba.conf there is the line:
        host dbname loginname 123.45.67.89/32 md5
    The version of OpenSSL on the server is 0.9.8g and on the client is 0.9.8j. I'd appreciate any suggestions for tracking down the problem.
    Edit: The uncommented lines from postgresql.conf are:
        data_directory = '/var/ebs0/postgres/main'
        hba_file = '/etc/postgresql/8.3/main/pg_hba.conf'
        ident_file = '/etc/postgresql/8.3/main/pg_ident.conf'
        external_pid_file = '/var/run/postgresql/8.3-main.pid'
        listen_addresses = '*'
        port = 5432
        max_connections = 100
        unix_socket_directory = '/var/run/postgresql'
        ssl = true
        shared_buffers = 24MB
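    For comparison, a hedged, checklist-style sketch of the server-side pieces the Postgres SSL documentation calls for (the path comes from the postgresql.conf above; the hostssl line, the ownership command, and the sslmode test are illustrative assumptions, not a confirmed fix):

        # on the server: certificate and key must sit in the data directory,
        # and 8.3 refuses a key file that is group- or world-readable
        cd /var/ebs0/postgres/main
        ls -l server.crt server.key
        chown postgres:postgres server.key && chmod 600 server.key

        # pg_hba.conf: a "hostssl" entry matches only SSL connections,
        # while a plain "host" entry matches both SSL and non-SSL
        hostssl  dbname  loginname  123.45.67.89/32  md5

        # on the XP client (if psql/libpq is installed there), force SSL from libpq
        set PGSSLMODE=require
        psql -h <server address> -U loginname dbname

    One related note: Postgres does not speak TLS directly on port 5432; the client first sends a protocol-level SSLRequest, so probing the port with a generic OpenSSL client can report a handshake failure even when the server-side SSL setup is healthy.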

    Read the article

  • Unable to use pbcopy while in tmux session

    - by user62139
    Running tmux 1.4 installed from ports on Snow Leopard, I am unable to use the built-in OS X pbcopy command. Outside of tmux:
        > echo "abc" | pbcopy
        > echo `pbpaste`   # or using ^v
        abc
    But inside of tmux:
        > echo "123" | pbcopy
        > echo `pbpaste`
        abc
    I've scoured the man page but can't find any options that might relate to this behavior. I also can't understand why tmux would mess with shell redirection. Anybody have any clues?
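    The workaround most often pointed to for this (offered as a hedged sketch; whether it applies to this exact setup is an assumption) is the reattach-to-user-namespace wrapper. The tmux server ends up detached from the per-user bootstrap namespace on OS X, so pbcopy/pbpaste inside tmux can no longer reach the pasteboard service, which is why pbpaste keeps returning the old "abc". Re-attaching the shells that tmux starts restores access; the shell name below is a placeholder:

        # after building or installing the reattach-to-user-namespace wrapper:
        echo 'set-option -g default-command "reattach-to-user-namespace -l bash"' >> ~/.tmux.conf
        # restart the tmux server so the new default-command takes effect
        tmux kill-server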

    Read the article

  • Django rewrites URL as IP address in browser - why?

    - by Mitch
    I am using django, nginx and apache. When I access my site by URL (e.g., http://www.foo.com/), what appears in my browser's address bar is the IP address with /admin appended (e.g., http://123.45.67.890/admin/). When I access the site by IP, it is redirected as expected by django's urls.py (e.g., http://123.45.67.890/ -> http://123.45.67.890/accounts/login/?next=/). I would like the named URL to behave the same way as the IP. That is, if the URL goes to a new view, the host in the browser address should remain the same and not change to the IP address. Where should I be looking to fix this? My files:
        ; cpa.com (apache)
        NameVirtualHost *:8080
        <VirtualHost *:8080>
            AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css text/javascript application/javascript application/x-javascript
            BrowserMatch ^Mozilla/4 gzip-only-text/html
            BrowserMatch ^Mozilla/4\.0[678] no-gzip
            BrowserMatch \bMSIE !no-gzip !gzip-only-text/htm
            DocumentRoot /path/to/root
            ServerName www.foo.com
            <IfModule mod_rpaf.c>
                RPAFenable On
                RPAFsethostname On
                RPAFproxy_ips 127.0.0.1
            </IfModule>
            <Directory /public/static>
                AllowOverride None
                AddHandler mod_python .py
                PythonHandler mod_python.publisher
            </Directory>
            Alias / /dj
            <Location />
                SetHandler python-program
                PythonPath "['/usr/lib/python2.5/site-packages/django', '/usr/lib/python2.5/site-packages/django/forms'] + sys.path"
                PythonHandler django.core.handlers.modpython
                SetEnv DJANGO_SETTINGS_MODULE dj.settings
                PythonDebug On
            </Location>
        </VirtualHost>
        ;
        ; ports.conf (apache)
        Listen 127.0.0.1:8080
        ;
        ; cpa.conf (nginx)
        server {
            listen 80;
            server_name www.foo.com;
            location /static {
                root /var/public;
                index index.html;
            }
            location /cpa/js {
                root /var/public/js;
            }
            location /cpa/css {
                root /var/public/css;
            }
            location /djmedia {
                alias "/usr/lib/python2.5/site-packages/django/contrib/admin/media/";
            }
            location / {
                include /etc/nginx/proxy.conf;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass http://127.0.0.1:8080;
            }
        }
        ;
        ; proxy.conf (nginx)
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 500;
        proxy_buffers 32 4k;
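    One thing that may be worth checking, offered as a hedged sketch rather than a confirmed diagnosis: proxy.conf currently has proxy_redirect off, so if Apache/Django ever answers with an absolute redirect built from the backend's own idea of its address (trailing-slash and login redirects are typical), nginx hands that Location header to the browser unchanged and the address bar jumps away from the public name. Letting nginx rewrite backend-issued redirects would look like this:

        # proxy.conf (nginx) - replacing the current "proxy_redirect off;" line
        proxy_redirect  http://127.0.0.1:8080/  http://www.foo.com/;

    The other half of the same check is making sure the backend builds self-referencing URLs from www.foo.com rather than an IP (ServerName is already set on the Apache side, so UseCanonicalName and the forwarded Host header are the usual places to look).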

    Read the article

< Previous Page | 29 30 31 32 33 34 35 36 37 38 39 40  | Next Page >