Search Results

Search found 2916 results on 117 pages for 'prototype chain'.


  • OpenVPN Clients using server's connection (with no default gateway)

    - by Branden Martin
    I wanted an OpenVPN server so that I could create a private VPN network for staff to connect to. However, contrary to plan, when clients connect to the VPN their traffic uses the VPN server's internet connection (e.g. whatsmyip.com reports the server's IP rather than the client's home connection).

    server.conf:

        local <serverip>
        port 1194
        proto udp
        dev tun
        ca ca.crt
        cert x.crt
        key x.key
        dh dh1024.pem
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        client-to-client
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        status openvpn-status.log
        verb 9

    client.conf:

        client
        dev tun
        proto udp
        remote <server> 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca ca.crt
        cert x.crt
        key x.key
        ns-cert-type server
        comp-lzo
        verb 3

    Server's routes:

        Kernel IP routing table
        Destination  Gateway          Genmask          Flags Metric Ref Use Iface
        10.8.0.2     *                255.255.255.255  UH    0      0   0   tun0
        10.8.0.0     10.8.0.2         255.255.255.0    UG    0      0   0   tun0
        69.64.48.0   *                255.255.252.0    U     0      0   0   eth0
        default      static-ip-69-64  0.0.0.0          UG    0      0   0   eth0
        default      static-ip-69-64  0.0.0.0          UG    0      0   0   eth0
        default      static-ip-69-64  0.0.0.0          UG    0      0   0   eth0

    Server's IP Tables:

        Chain INPUT (policy ACCEPT)
        target            prot opt source   destination
        fail2ban-proftpd  tcp  --  anywhere anywhere     multiport dports ftp,ftp-data,ftps,ftps-data
        fail2ban-ssh      tcp  --  anywhere anywhere     multiport dports ssh
        ACCEPT            udp  --  anywhere anywhere     udp dpt:domain
        ACCEPT            tcp  --  anywhere anywhere     tcp dpt:20000
        ACCEPT            tcp  --  anywhere anywhere     tcp dpt:webmin
        ACCEPT            tcp  --  anywhere anywhere     tcp dpt:https
        ACCEPT            tcp  --  anywhere anywhere     tcp dpt:www
        ACCEPT            tcp  --  anywhere anywhere     tcp dpt:imaps
        ACCEPT            tcp  --  anywhere anywhere     tcp dpt:imap2
        ACCEPT            tcp  --  anywhere anywhere     tcp dpt:pop3s
        ACCEPT            tcp  --  anywhere anywhere     tcp dpt:pop3
        ACCEPT            tcp  --  anywhere anywhere     tcp dpt:ftp-data
        ACCEPT            tcp  --  anywhere anywhere     tcp dpt:ftp
        ACCEPT            tcp  --  anywhere anywhere     tcp dpt:domain
        ACCEPT            tcp  --  anywhere anywhere     tcp dpt:smtp
        ACCEPT            tcp  --  anywhere anywhere     tcp dpt:ssh
        ACCEPT            all  --  anywhere anywhere

        Chain FORWARD (policy ACCEPT)
        target  prot opt source       destination
        ACCEPT  all  --  anywhere     anywhere     state RELATED,ESTABLISHED
        ACCEPT  all  --  10.8.0.0/24  anywhere
        REJECT  all  --  anywhere     anywhere     reject-with icmp-port-unreachable

        Chain OUTPUT (policy ACCEPT)
        target  prot opt source  destination

        Chain fail2ban-proftpd (1 references)
        target  prot opt source    destination
        RETURN  all  --  anywhere  anywhere

        Chain fail2ban-ssh (1 references)
        target  prot opt source    destination
        RETURN  all  --  anywhere  anywhere

    My goal is that clients can only talk to the server and to other connected clients. Hope I made sense. Thanks for the help!
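
    A minimal sketch of how the stated goal might be enforced on the server with iptables, assuming eth0 is the public interface and 10.8.0.0/24 the VPN subnet (both taken from the question); an illustration, not a verified fix:

        # client-to-client traffic is relayed inside OpenVPN itself (the
        # client-to-client directive above), so it never traverses FORWARD;
        # blocking VPN -> internet forwarding then keeps clients from
        # using the server's connection
        iptables -I FORWARD -i tun0 -o eth0 -j DROP

    It may also be worth checking that neither the client config nor a server push contains redirect-gateway, since that is what normally sends all client traffic through the VPN.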

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris.

    This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

    The Long Road To Stubs

    This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:

        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This CR encapsulates a number of chronic issues with Solaris builds:

    - We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware.
    - Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel.
    - To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: it's really hard to get right; it's really easy to get it wrong and never know it because things build anyway; and even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand-written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach:

    - To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available.
    - The analysis will take time, and remember that we're constantly trying to make builds faster, not slower.
    - By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand-written rules described above. The hand-written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand-written approach.

    Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot.

    In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game-changing series of realizations:

    - The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime.
    - If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object.
    - In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of its publicly available functions and data. Could we build a stub object using the mapfile for the real object?
    - It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel.
    - When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization.
    - We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following:

    - Present the same set of global symbols, with the same ELF versioning, as the real object.
    - Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment.
    - Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose.
    - For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object.
    - If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object.

    We imagined the stub library feature working as follows:

    - A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored.
    - The extra information needed (function or data, size, and bss details) would be added to the mapfile.
    - When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match.

    In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

        # DATA(i386) __iob 0x3c0
        # DATA(amd64,sparcv9) __iob 0xa00
        # DATA(sparc) __iob 0x140
        __iob;

    A further problem then became clear: if we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan:

    - A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code.
    - Another perl script, used after both objects have been built, compares the real and stub objects, using data from elfdump, and validates that they present the same linking interface.

    By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
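
    For illustration only, a rough sketch of the kind of glue such a script might generate and build for the __iob example above; the file names, compiler flag, and mapfile name here are hypothetical, not the actual prototype:

        # generate C glue giving the data symbol the real object's size
        # (0x3c0 bytes on i386, per the stylized comment above)
        cat > glue.c <<'EOF'
        char __iob[0x3c0];   /* hypothetical glue: size must match the real data */
        EOF
        cc -c -Kpic glue.c -o glue.o
        # link the stub shared object from the glue plus the real object's mapfile
        ld -G -o stubs/libc.so.1 -h libc.so.1 -M mapfile glue.o
        rm glue.c glue.o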
    Ultimately though, the result was unsatisfactory as a basis for a real product. There were so many issues:

    - The use of stylized comments was fine for a prototype, but not close to professional enough for a shipping product. The idea of having to document and support it was a large concern.
    - The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code.
    - A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work.
    - A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one.

    At that point, we needed to apply this prototype to building Solaris. As you might imagine, modifying all the makefiles in the core Solaris code base in order to do this is a massive undertaking, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years.

    Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not, have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had stub objects in mind as I moved forward.

    The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:

        6916788 ld version 2 mapfile syntax
        PSARC/2009/688 Human readable and extensible ld mapfile syntax

    In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

        6916796 OSnet mapfiles should use version 2 link-editor syntax

    That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this:

        We anticipate adding additional features to the new mapfile language
        that will be applicable to ON, and which will require all sharable
        object mapfiles to use the new syntax.
    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

        % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
            -ztext -zdefs -Bdirect ...

        real        0.019708910
        user        0.010101680
        sys         0.008528431

    In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished.

    Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects however was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens.

    And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features...

    ...and so, I backed away, put it down for a few months and did other work...

    ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it.

    Without stubs, the following gives a simplified high level view of how Solaris is built:

    - An initially empty directory known as the proto, and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution.
    - A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area.
    - Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built.
    - Subsequent passes run lint, and do packaging.

    Given this structure, the additions to use stub objects are:

    - A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
    - A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them.
    - A new target is added to the Makefiles called stubinstall, which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the plumbing used by the existing install rule.
    - The setup rule runs stubinstall over the entire lib subtree as part of its initialization.
    - All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization.

    There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them.

    After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do.

    I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult.

    At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
    And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down:

    - Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel.
    - It was suggested that we might be I/O bound, and so, the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings between the stub and non-stub cases were just too suspiciously identical.
    - Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up.

    Eventually, a more plausible and obvious reason emerged: we build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so, wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust.

    And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with

        6993877 ld should produce stub objects
        PSARC/2010/397 ELF Stub Objects

    followed by the work to convert the ON consolidation in snv_161 (February 2011) with

        7009826 OSnet should use stub objects
        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.

    Conclusions and Looking Forward

    Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

  • Report on Android LiveCode #5: creating an Android game in 1 hour with Project Anarchy

    Hello, I had the opportunity to attend Android LiveCode #5 in Paris, at the digital art school Isart, to which Stuart Johnson, a developer at Havok, had been invited. During the session, he used Project Anarchy to build a 3D Android game prototype in one hour. Here is the report on the session and the associated video page. In this session, Stuart Johnson explains what Project Anarchy is and shows us, step by step, how to create a game prototype in one...

  • Firefox for Windows 8: first build available, a pre-release still a work in progress but already functional

    Firefox Metro takes shape: Mozilla publishes images of a first prototype for Windows 8. Update of 03/04/2012. The port of Firefox to the Metro environment of Windows 8 is confirmed. Only a few days after confirming its development plans for a version of Firefox for Windows 8, the Mozilla foundation is already delivering the first results of its work. The organization has just published images of a prototype based on the source code of Fennec (Firefox for mobile) and the XUL user interface language...

  • SSH stops at "using username" with IPTables in effect

    - by Rautamiekka
    We used UFW but couldn't get the Source Dedicated ports open, which was weird, so we purged UFW and switched to IPTables, using Webmin to configure. If the inbound chain is on DENY and the SSH port is open [judged from Webmin], PuTTY prints 'using username "root"' and stops there instead of asking for the public key passphrase. With the inbound chain on ACCEPT, the passphrase is asked for. This problem didn't happen with UFW.

    Picture of the IPTables configuration in Webmin: http://s284544448.onlinehome.us/public/PlusLINE%20Dedicated%20Server,%20Webmin,%20IPTables,%200.jpg (the address points to the previous rautamiekka.org).

    iptables-save when on INPUT DENY:

        # Generated by iptables-save v1.4.8 on Wed Apr 11 16:09:20 2012
        *mangle
        :PREROUTING ACCEPT [1430:156843]
        :INPUT ACCEPT [1430:156843]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [1415:781598]
        :POSTROUTING ACCEPT [1415:781598]
        COMMIT
        # Completed on Wed Apr 11 16:09:20 2012
        # Generated by iptables-save v1.4.8 on Wed Apr 11 16:09:20 2012
        *nat
        :PREROUTING ACCEPT [2:104]
        :POSTROUTING ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        COMMIT
        # Completed on Wed Apr 11 16:09:20 2012
        # Generated by iptables-save v1.4.8 on Wed Apr 11 16:09:20 2012
        *filter
        :INPUT DROP [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [1247:708906]
        -A INPUT -i lo -m comment --comment "Machine-within traffic - always allowed" -j ACCEPT
        -A INPUT -p tcp -m comment --comment "Services - TCP" -m tcp -m multiport --dports 22,80,443,10000,20,21 -m state --state NEW,ESTABLISHED -j ACCEPT
        -A INPUT -p tcp -m comment --comment "Minecraft - TCP" -m tcp --dport 25565 -j ACCEPT
        -A INPUT -p udp -m comment --comment "Minecraft - UDP" -m udp --dport 25565 -j ACCEPT
        -A INPUT -p tcp -m comment --comment "Source Dedicated - TCP" -m tcp --dport 27015 -j ACCEPT
        -A INPUT -p udp -m comment --comment "Source Dedicated - UDP" -m udp -m multiport --dports 4380,27000:27030 -j ACCEPT
        -A INPUT -p udp -m comment --comment "TS3 - UDP - main port" -m udp --dport 9987 -j ACCEPT
        -A INPUT -p tcp -m comment --comment "TS3 - TCP - ServerQuery" -m tcp --dport 10011 -j ACCEPT
        -A OUTPUT -o lo -m comment --comment "Machine-within traffic - always allowed" -j ACCEPT
        COMMIT
        # Completed on Wed Apr 11 16:09:20 2012

    iptables --list when on INPUT DENY:

        Chain INPUT (policy DROP)
        target prot opt source   destination
        ACCEPT all  --  anywhere anywhere  /* Machine-within traffic - always allowed */
        ACCEPT tcp  --  anywhere anywhere  /* Services - TCP */ tcp multiport dports ssh,www,https,webmin,ftp-data,ftp state NEW,ESTABLISHED
        ACCEPT tcp  --  anywhere anywhere  /* Minecraft - TCP */ tcp dpt:25565
        ACCEPT udp  --  anywhere anywhere  /* Minecraft - UDP */ udp dpt:25565
        ACCEPT tcp  --  anywhere anywhere  /* Source Dedicated - TCP */ tcp dpt:27015
        ACCEPT udp  --  anywhere anywhere  /* Source Dedicated - UDP */ udp multiport dports 4380,27000:27030
        ACCEPT udp  --  anywhere anywhere  /* TS3 - UDP - main port */ udp dpt:9987
        ACCEPT tcp  --  anywhere anywhere  /* TS3 - TCP - ServerQuery */ tcp dpt:10011

        Chain FORWARD (policy ACCEPT)
        target prot opt source destination

        Chain OUTPUT (policy ACCEPT)
        target prot opt source   destination
        ACCEPT all  --  anywhere anywhere  /* Machine-within traffic - always allowed */

    The UFW rules prior to purging, on INPUT DENY:

        127.0.0.1         ALLOW IN  127.0.0.1
        3306              DENY IN   Anywhere
        20,21/tcp         ALLOW IN  Anywhere
        22/tcp (OpenSSH)  ALLOW IN  Anywhere
        80/tcp            ALLOW IN  Anywhere
        443/tcp           ALLOW IN  Anywhere
        989               ALLOW IN  Anywhere
        990               ALLOW IN  Anywhere
        8075/tcp          ALLOW IN  Anywhere
        9987/udp          ALLOW IN  Anywhere
        10000/tcp         ALLOW IN  Anywhere
        10011/tcp         ALLOW IN  Anywhere
        25565/tcp         ALLOW IN  Anywhere
        27000:27030/tcp   ALLOW IN  Anywhere
        4380/udp          ALLOW IN  Anywhere
        27014:27050/tcp   ALLOW IN  Anywhere
        30033/tcp         ALLOW IN  Anywhere
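
    A hedged guess at the cause, with a sketch: with INPUT policy DROP, nothing in the ruleset above accepts reply packets for connections the server itself initiates (for example the reverse DNS lookup sshd typically performs during authentication), so that lookup has to time out before login continues. A general conntrack rule would cover such replies; an illustration, not a verified fix:

        # accept replies to connections the server initiated (e.g. sshd's DNS lookups)
        iptables -I INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

    Setting UseDNS no in /etc/ssh/sshd_config is another common way to confirm whether reverse DNS is the culprit.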

  • iptables -P FORWARD DROP makes port forwarding slow

    - by Isaac
    I have three computers, linked like this:

        box1 (ubuntu)    box2 router & gateway (debian)     box3 (opensuse)
        [10.0.1.1] ---- [10.0.1.18,10.0.2.18,10.0.3.18] ---- [10.0.3.15]
                               |
                        box4, www [10.0.2.1]

    Among other things I want box2 to do NAT and port forwarding, so that I can do ssh -p 2223 box2 to reach box3. For this I have the following iptables script:

        #!/bin/bash
        # flush
        iptables -F INPUT
        iptables -F FORWARD
        iptables -F OUTPUT
        iptables -t nat -F PREROUTING
        iptables -t nat -F POSTROUTING
        iptables -t nat -F OUTPUT
        # default
        default_action=DROP
        for chain in INPUT OUTPUT;do
            iptables -P $chain $default_action
        done
        iptables -P FORWARD DROP
        # allow ssh to local computer
        allowed_ssh_clients="10.0.1.1 10.0.3.15"
        for ip in $allowed_ssh_clients;do
            iptables -A OUTPUT -p tcp --sport 22 -d $ip -j ACCEPT
            iptables -A INPUT -p tcp --dport 22 -s $ip -j ACCEPT
        done
        # allow DNS
        iptables -A OUTPUT -p udp --dport 53 -m state \
            --state NEW,ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -p udp --sport 53 -m state \
            --state ESTABLISHED,RELATED -j ACCEPT
        # allow HTTP & HTTPS
        iptables -A OUTPUT -p tcp -m multiport --dports 80,443 -j ACCEPT
        iptables -A INPUT -p tcp -m multiport --sports 80,443 -j ACCEPT
        #
        # ROUTING
        #
        # allow routing
        echo 1 >/proc/sys/net/ipv4/ip_forward
        # nat
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
        # http
        iptables -A FORWARD -p tcp --dport 80 -j ACCEPT
        iptables -A FORWARD -p tcp --sport 80 -j ACCEPT
        # ssh redirect
        iptables -t nat -A PREROUTING -p tcp -i eth1 --dport 2223 -j DNAT \
            --to-destination 10.0.3.15:22
        iptables -A FORWARD -p tcp --sport 22 -j ACCEPT
        iptables -A FORWARD -p tcp --dport 22 -j ACCEPT
        iptables -A FORWARD -p tcp --sport 1024:65535 -j ACCEPT
        iptables -A FORWARD -p tcp --dport 1024:65535 -j ACCEPT
        iptables -I FORWARD -j LOG --log-prefix "iptables denied: "

    While this works, it takes about 10 seconds to get a password prompt from my ssh command. Afterwards, the connection is as responsive as could be. If I change the default policy for my FORWARD chain to "ACCEPT", then the password prompt is there immediately. I have tried analysing the logs, but I cannot spot a difference between the logs for ACCEPT/DROP in my FORWARD chain. Also I have tried allowing all the unprivileged ports, as box1 uses those for doing ssh to box2. Any hints? (If the whole setup seems strange to you - the point of the exercise is to understand iptables ;))
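
    One hedged possibility, given the symptoms (a fixed delay of roughly 10 seconds, then normal speed): some auxiliary connection is being silently dropped in FORWARD and has to time out, classically an ident (TCP 113) callback from sshd or a DNS lookup passing through the router. A sketch of rules that would let such traffic flow or fail fast; an illustration, not a verified fix:

        # let replies to already-established forwarded connections through
        iptables -I FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
        # make ident probes fail immediately instead of timing out
        iptables -A FORWARD -p tcp --dport 113 -j REJECT --reject-with tcp-reset

    Comparing the LOG output while the prompt is stalled should show which port is actually being dropped.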

  • Secure iptables config for Samba

    - by Eric
    I'm trying to set up an iptables config such that outbound connections from my CentOS 6.2 server are allowed ONLY if they are of state ESTABLISHED. Currently, the following setup is working great for sshd, but all the Samba rules get totally ignored for a reason I cannot figure out.

    iptables Bash script to set up ALL rules:

        # Remove all existing rules
        iptables -F

        # Set default chain policies
        iptables -P INPUT DROP
        iptables -P FORWARD DROP
        iptables -P OUTPUT DROP

        # Allow incoming SSH
        iptables -A INPUT -i eth0 -p tcp --dport 22222 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 22222 -m state --state ESTABLISHED -j ACCEPT

        # Allow incoming Samba
        iptables -A INPUT -i eth0 -s 10.1.1.0/24 -p udp --dport 137:138 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -d 10.1.1.0/24 -p udp --sport 137:138 -m state --state ESTABLISHED -j ACCEPT
        iptables -A INPUT -i eth0 -s 10.1.1.0/24 -p tcp --dport 139 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -d 10.1.1.0/24 -p tcp --sport 139 -m state --state ESTABLISHED -j ACCEPT

        # Enable these rules
        service iptables restart

    iptables rule list after running the above script:

        [root@repoman ~]# iptables -L
        Chain INPUT (policy DROP)
        target prot opt source   destination
        ACCEPT tcp  --  anywhere anywhere  tcp dpt:22222 state NEW,ESTABLISHED

        Chain FORWARD (policy DROP)
        target prot opt source destination

        Chain OUTPUT (policy DROP)
        target prot opt source   destination
        ACCEPT tcp  --  anywhere anywhere  tcp spt:22222 state ESTABLISHED

    Ultimately, I'm trying to restrict Samba the same way I have done for sshd. In addition, I'm trying to restrict connections to the following IP address range: 10.1.1.12 - 10.1.1.19. Can you guys offer some pointers or possibly even a full-blown solution? I've read man iptables quite extensively, so I'm not sure why the Samba rules are getting thrown out. Additionally, removing the -s 10.1.1.0/24 flags doesn't change the fact that the rules get ignored.
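
    A likely explanation, offered as a hedged guess: the script's final line, service iptables restart, reloads the ruleset previously saved in /etc/sysconfig/iptables, which wipes out the rules the script just added at runtime; that would explain why only the old SSH rules survive in the listing. A sketch of the fix under that assumption:

        # after adding rules, persist them instead of restarting the service
        service iptables save        # writes the running rules to /etc/sysconfig/iptables
        # or equivalently:
        iptables-save > /etc/sysconfig/iptables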

  • Can access SSH but can't access cPanel web server

    - by Tom
    I've built a CentOS 6.0 VPS and then installed the latest cPanel/WHM. This isn't my first installation, but I've noticed something weird, especially as I've never used the 6.0 version: when I tried to install cPanel, it didn't recognize wget, so I installed it; then cPanel said that Perl wasn't installed, so I installed that too, and the installation went well from there. Now, when I try to access the server via the browser with the IP address as I used to, it doesn't work; it just keeps loading forever. I tried port 2087, still the same, but SSH works. I've also tried the commands to start the server manually, but none of them worked. How do I fix that?

    Edit: iptables -nL result:

        root@server [~]# iptables -nL
        Chain INPUT (policy ACCEPT)
        target   prot opt source    destination
        acctboth all  --  0.0.0.0/0 0.0.0.0/0
        ACCEPT   all  --  0.0.0.0/0 0.0.0.0/0  state RELATED,ESTABLISHED
        ACCEPT   icmp --  0.0.0.0/0 0.0.0.0/0
        ACCEPT   all  --  0.0.0.0/0 0.0.0.0/0
        ACCEPT   tcp  --  0.0.0.0/0 0.0.0.0/0  state NEW tcp dpt:22
        REJECT   all  --  0.0.0.0/0 0.0.0.0/0  reject-with icmp-host-prohibited

        Chain FORWARD (policy ACCEPT)
        target prot opt source    destination
        REJECT all  --  0.0.0.0/0 0.0.0.0/0  reject-with icmp-host-prohibited

        Chain OUTPUT (policy ACCEPT)
        target   prot opt source    destination
        acctboth all  --  0.0.0.0/0 0.0.0.0/0

        Chain acctboth (2 references)
        target prot opt source          destination
               tcp  --  216.119.149.168 0.0.0.0/0        tcp dpt:80
               tcp  --  0.0.0.0/0       216.119.149.168  tcp spt:80
               tcp  --  216.119.149.168 0.0.0.0/0        tcp dpt:25
               tcp  --  0.0.0.0/0       216.119.149.168  tcp spt:25
               tcp  --  216.119.149.168 0.0.0.0/0        tcp dpt:110
               tcp  --  0.0.0.0/0       216.119.149.168  tcp spt:110
               icmp --  216.119.149.168 0.0.0.0/0
               icmp --  0.0.0.0/0       216.119.149.168
               tcp  --  216.119.149.168 0.0.0.0/0
               tcp  --  0.0.0.0/0       216.119.149.168
               udp  --  216.119.149.168 0.0.0.0/0
               udp  --  0.0.0.0/0       216.119.149.168
               all  --  216.119.149.168 0.0.0.0/0
               all  --  0.0.0.0/0       216.119.149.168
               all  --  0.0.0.0/0       0.0.0.0/0
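
    Reading the listing above, the INPUT chain only accepts new connections on port 22 before the final REJECT, which would explain why SSH works while HTTP (80) and WHM (2087) never load. A hedged sketch of rules to open the usual cPanel/WHM web ports (an illustration only; cPanel's own tooling may also manage the firewall):

        # open web and cPanel/WHM ports ahead of the REJECT rule, then persist
        iptables -I INPUT -p tcp -m multiport --dports 80,443,2082,2083,2086,2087 -m state --state NEW -j ACCEPT
        service iptables save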

  • Connection refused after installing vsftp on Ubuntu 8.04 with fail2ban

    - by Patrick
    I have been using an Ubuntu 8.04 server with fail2ban for a while now (12+ months) and using FTP over SSH without any problems. I have a new user who needs to put files onto the server from an IP modem. I have installed vsftpd (sudo apt-get install vsftpd) and everything installed correctly. I have created an FTP user on the server following this guide. Whenever I try to connect to the server with my FTP program (FileZilla) I get an immediate response of:

        Connection attempt failed with "ECONNREFUSED - Connection refused by server".

    I have looked into fail2ban and cannot find any problems. The iptables setup is:

        Chain INPUT (policy ACCEPT)
        target       prot opt source   destination
        fail2ban-ssh tcp  --  anywhere anywhere     multiport dports ssh

        Chain FORWARD (policy ACCEPT)
        target prot opt source destination

        Chain OUTPUT (policy ACCEPT)
        target prot opt source destination

        Chain fail2ban-ssh (1 references)
        target prot opt source   destination
        RETURN all  --  anywhere anywhere

    VSFTP config file (commented lines removed):

        listen=YES
        anonymous_enable=NO
        local_enable=YES
        write_enable=YES
        dirmessage_enable=YES
        xferlog_enable=YES
        connect_from_port_20=YES
        chown_uploads=YES
        chown_username=[username]
        secure_chroot_dir=/var/run/vsftpd
        pam_service_name=vsftpd
        rsa_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        rsa_private_key_file=/etc/ssl/private/ssl-cert-snakeoil.key

    Any ideas on what is preventing access to the server?
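
    One hedged observation: the iptables policies here are all ACCEPT with no FTP-related rules, so "connection refused" more likely means nothing is listening on port 21 (for instance vsftpd exiting at startup on a config error) than a firewall block. A sketch of how to check, under that assumption:

        # is anything listening on the FTP port?
        sudo netstat -tlnp | grep :21
        # if not, restart vsftpd and look for startup errors
        sudo /etc/init.d/vsftpd restart
        tail -n 50 /var/log/syslog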

  • How to Enable IPtables TRACE Target on Debian Squeeze (6)

    - by bernie
    I am trying to use the TRACE target of IPtables but I can't seem to get any trace information logged. I want to use what is described here: Debugger for Iptables. From the iptables man page for TRACE:

        This target marks packets so that the kernel will log every rule which
        match the packets as those traverse the tables, chains, rules. (The
        ipt_LOG or ip6t_LOG module is required for the logging.) The packets
        are logged with the string prefix: "TRACE: tablename:chainname:type:rulenum "
        where type can be "rule" for plain rule, "return" for implicit rule at
        the end of a user defined chain and "policy" for the policy of the
        built in chains. It can only be used in the raw table.

    I use the following rule:

        iptables -A PREROUTING -t raw -p tcp -j TRACE

    but nothing is appended either in /var/log/syslog or /var/log/kern.log! Is there another step missing? Am I looking in the wrong place?

    Edit: even though I can't find log entries, the TRACE target seems to be set up correctly, since the packet counters get incremented:

        # iptables -L -v -t raw
        Chain PREROUTING (policy ACCEPT 193 packets, 63701 bytes)
         pkts bytes target prot opt in  out source   destination
          193 63701 TRACE  tcp  --  any any anywhere anywhere

        Chain OUTPUT (policy ACCEPT 178 packets, 65277 bytes)
         pkts bytes target prot opt in out source destination

    Edit 2: the rule

        iptables -A PREROUTING -t raw -p tcp -j LOG

    does print packet information to /var/log/syslog... Why doesn't TRACE work?
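
    The man page excerpt above already names the missing step: TRACE produces no output unless the kernel logging backend it mentions is loaded (a plain LOG rule loads it implicitly, which would explain why Edit 2 works). A hedged sketch for Debian Squeeze; module names vary with kernel version:

        # load the logging backend TRACE depends on
        modprobe ipt_LOG           # or: modprobe xt_LOG on newer kernels
        # narrow the trace so the log stays readable, then watch for entries
        iptables -t raw -A PREROUTING -p tcp --dport 80 -j TRACE
        tail -f /var/log/kern.log | grep TRACE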

  • CLOSE_WAIT sockets burst - perhaps because of iptables settings?

    - by Fabrizio Giudici
    I have an Ubuntu 12.04 server virtual box where basically the installed software and configuration are the defaults, plus the installation of a Jetty 6 server which serves a few websites. To keep things simple I didn't install Apache httpd and used iptables for exposing Jetty (which runs on port 8080) on port 80. These are the results of /sbin/iptables -t nat -L:

        Chain PREROUTING (policy ACCEPT)
        target   prot opt source   destination
        REDIRECT tcp  --  anywhere localhost                     tcp dpt:http redir ports 8080
        REDIRECT tcp  --  anywhere Ubuntu-1104-natty-64-minimal  tcp dpt:http redir ports 8080

        Chain INPUT (policy ACCEPT)
        target prot opt source destination

        Chain OUTPUT (policy ACCEPT)
        target   prot opt source   destination
        REDIRECT tcp  --  anywhere localhost                     tcp dpt:http redir ports 8080
        REDIRECT tcp  --  anywhere Ubuntu-1104-natty-64-minimal  tcp dpt:http redir ports 8080

        Chain POSTROUTING (policy ACCEPT)
        target prot opt source destination

    I must confess I have a shallow comprehension of how iptables works, in particular of the different kinds of chains. This thing works, but sometimes I have an explosion of sockets that stay permanently in the CLOSE_WAIT state. I know what this state means, but since I didn't write the code that manages servlets (they are handled by Jetty) I can't fix the problem by patching my code. Eventually the number of CLOSE_WAIT sockets builds up and makes the server unresponsive, so I have to restart Jetty. I've looked around for similar problems with CLOSE_WAIT, and only found cases related to the programmer's code, or problems with Tomcat, not Jetty. I was wondering whether they could be related to a partially broken iptables configuration (the alternative is a bug in Jetty 6, but I first want to exclude other possible causes). Thanks.
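
    As a hedged aid to narrowing this down: CLOSE_WAIT means the remote side has closed the connection and the local application has not called close() yet, so the process holding the sockets, rather than the NAT layer, is the usual suspect. A sketch of how to confirm which process owns them:

        # count sockets stuck in CLOSE_WAIT and see which process holds them
        netstat -antp | awk '$6 == "CLOSE_WAIT"' | wc -l
        netstat -antp | awk '$6 == "CLOSE_WAIT" {print $7}' | sort | uniq -c

    If they all belong to the Jetty JVM, the iptables REDIRECT rules can reasonably be ruled out.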

  • iptables -- OK, **now** am I doing it right?

    - by Agvorth
    This is a follow-up to a previous question where I asked whether my iptables config is correct. CentOS 5.3 system. Intended result: block everything except ping, ssh, Apache, and SSL. Based on xenoterracide's advice and the other responses to the question (thanks guys), I created this script:

        # Establish a clean slate
        iptables -P INPUT ACCEPT
        iptables -P FORWARD ACCEPT
        iptables -P OUTPUT ACCEPT
        iptables -F    # Flush all rules
        iptables -X    # Delete all chains

        # Disable routing. Drop packets if they reach the end of the chain.
        iptables -P FORWARD DROP

        # Drop all packets with a bad state
        iptables -A INPUT -m state --state INVALID -j DROP

        # Accept any packets that have something to do with ones we've sent on outbound
        iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT

        # Accept any packets coming or going on localhost (this can be very important)
        iptables -A INPUT -i lo -j ACCEPT

        # Accept ICMP
        iptables -A INPUT -p icmp -j ACCEPT

        # Allow ssh
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT

        # Allow httpd
        iptables -A INPUT -p tcp --dport 80 -j ACCEPT

        # Allow SSL
        iptables -A INPUT -p tcp --dport 443 -j ACCEPT

        # Block all other traffic
        iptables -A INPUT -j DROP

    Now when I list the rules I get...

        # iptables -L -v
        Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target prot opt in  out source   destination
            0     0 DROP   all  --  any any anywhere anywhere  state INVALID
            9   612 ACCEPT all  --  any any anywhere anywhere  state RELATED,ESTABLISHED
            0     0 ACCEPT all  --  lo  any anywhere anywhere
            0     0 ACCEPT icmp --  any any anywhere anywhere
            0     0 ACCEPT tcp  --  any any anywhere anywhere  tcp dpt:ssh
            0     0 ACCEPT tcp  --  any any anywhere anywhere  tcp dpt:http
            0     0 ACCEPT tcp  --  any any anywhere anywhere  tcp dpt:https
            0     0 DROP   all  --  any any anywhere anywhere

        Chain FORWARD (policy DROP 0 packets, 0 bytes)
         pkts bytes target prot opt in out source destination

        Chain OUTPUT (policy ACCEPT 5 packets, 644 bytes)
         pkts bytes target prot opt in out source destination

    I ran it and I can still log in, so that's good. Anyone notice anything major out of whack?
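
    One hedged addition rather than a correction: on CentOS, rules loaded by a script like this live only in the running kernel and vanish on reboot unless saved. A sketch:

        # persist the running ruleset and load it at boot
        service iptables save      # writes to /etc/sysconfig/iptables
        chkconfig iptables on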

  • iptables port redirection on Ubuntu

    - by Xi.
    I have an Apache server running on 8100. When we open http://localhost:8100 in a browser we see the site running correctly. Now I would like to direct all requests on 80 to 8100 so that the site can be accessed without the port number. I am not familiar with iptables, so I searched for solutions online. This is one of the methods that I have tried:

        user@ubuntu:~$ sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT
        user@ubuntu:~$ sudo iptables -A INPUT -p tcp --dport 8100 -j ACCEPT
        user@ubuntu:~$ sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8100

    It's not working. The site works on 8100 but not on 80. If I print out the rules using "iptables -t nat -L -n -v", this is what I see:

        user@ubuntu:~$ sudo iptables -t nat -L -n -v
        Chain PREROUTING (policy ACCEPT 14 packets, 2142 bytes)
         pkts bytes target   prot opt in out source    destination
            0     0 REDIRECT tcp  --  *  *   0.0.0.0/0 0.0.0.0/0    tcp dpt:80 redir ports 8100

        Chain INPUT (policy ACCEPT 14 packets, 2142 bytes)
         pkts bytes target prot opt in out source destination

        Chain OUTPUT (policy ACCEPT 177 packets, 13171 bytes)
         pkts bytes target prot opt in out source destination

        Chain POSTROUTING (policy ACCEPT 177 packets, 13171 bytes)
         pkts bytes target prot opt in out source destination

    The OS is Ubuntu on VMware. I thought this would be a simple task, but I have been working on it for hours without success. :( What am I missing?
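
    A hedged pointer, consistent with the zero packet count on the REDIRECT rule above: the nat PREROUTING chain is only traversed by packets arriving from outside the machine, so tests against http://localhost from the VM itself never hit it. Locally generated traffic goes through the nat OUTPUT chain instead; a sketch:

        # redirect locally generated requests to port 80 as well
        sudo iptables -t nat -A OUTPUT -p tcp -o lo --dport 80 -j REDIRECT --to-ports 8100

    Testing from another machine (or the VM host) against the VM's external IP would exercise the existing PREROUTING rule.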

  • Partner Webcast - Innovations in Products Program

    - by Richard Lefebvre
    We are pleased to invite you to join the Innovations in Products webcast. Innovations in Products presents Oracle Applications products' new functions and features, including sales positioning. The key objective of these webcasts is to inspire System Integrators' implementation personnel to conduct successful after-sales in their customer projects. Innovations in Products is presented on the 1st Monday of each quarter after the billable day (4:00 to 5:00 PM CET). The webcast is intended for System Integrators' Implementation Certified Specialists, but Innovations in Products is open to other interested Oracle Applications System Integrator personnel as well.

    At first, two Oracle representatives will discuss Oracle's contribution to partners. Then you will see product breakout sessions followed by Q&A with Oracle experts. Each session will last for a maximum of 1 hour. A Q&A document covering all questions and answers will be made available after the webcast.

    What are the benefits for partners?

    - Find out how Innovations in Products helps you to improve your after-sales
    - Discover new functions and features so you can enrich your customers' solutions
    - Learn more about Oracle Applications products, especially sales positioning
    - Hear crucial questions raised by colleagues, and learn from their interest
    - Engage and present your questions to subject experts
    - Be inspired by the richness of the Oracle Applications portfolio - for your and your customer's benefit

    Note: Should you already be familiar with a specific product, then choose another one; doing so will expand your knowledge of the overall Applications portfolio. Some presentations contain product demonstrations, although these presentations are not intended to be extremely detailed technical presentations. Note: In the latter part of this email you also have 17 links to the recent Applications Products presentations and 6 links to the Public Sector Value Proposition presentations that were presented in the Innovations in Industries program.

    Product breakout sessions:

    - Fusion Applications Technology and Extensibility: A next-generation platform that adapts to client needs - Matthew Johnson, Sr. Director, SCM Product Development, EMEA
    - Fusion Applications - Transforming your Back-Office Accounting Function: Changing how people work in back office functions to drive value add - Liam Nolan, Director, ERP Product Development, EMEA
    - Fusion HCM & Talent Overview & Extensibility: A more in-depth look into a personalized HCM solution - Synco Jonkeren, Vice-President HCM Product Development & Management, EMEA
    - Fusion HCM Compensation Planning: Compensate To Compete - Rosie Warner, Director, HCM Sales Development
    - Enterprise PLM for the Product Value Chain: Oracle Enterprise PLM offers industry-specific solutions that cover the Product Value Chain - Ulf Köster, Sales Development Leader Enterprise PLM, Oracle Western Europe
    - Oracle's Asset Management and Maintenance Solution: What you need to know to successfully implement Oracle Asset Management solutions within Oracle Installed Base - Philip Carey, Asset Management and Maintenance Solution Specialist

    For more details please visit Innovations in Products and other breakout sessions on the OPN page.

    Delivery format: Innovations in Products is a series of FREE prerecorded Applications product presentations followed by Q&A. It will be delivered over the Web.
    Participants have the opportunity to submit questions during the webcast via chat, and subject matter experts will provide verbal answers live. Innovations in Products consists of several parallel prerecorded product breakout sessions, each lasting for a maximum of 1 hour. At first, two Oracle representatives will discuss Oracle's contribution to partners. Then you'll see the product breakout sessions followed by Q&A with Oracle experts. A Q&A document covering all questions and answers will be made available after the webcast. You can also watch Innovations in Products afterwards, as its content will be available online for the next 6-12 months.

    The next Innovations in Products webcasts will be presented as follows: July 2nd 2012, October 1st 2012, January 14th 2013, April 8th 2013.

    Note: Depending on local network bandwidth, please allow a few seconds for the presentations to download. You might want to refresh your screen by pressing F5. Duration: maximum 1 hour. For further information please contact me, Markku Rouhiainen.

    Recent Innovations in Products presentations

    Applications Products presented on April the 2nd, 2012:

    - Fusion CRM: Effective, Efficient and Easy - James Penfold, Senior Director, Applications Product Development and Product Management
    - Fusion HCM: Talent management overview performance, goals, talent review - Jaime Losantos Viñolas, Director, HCM Sales Development
    - Distributed Order Management - Fusion SCM Solution - Vikram K Singla, Business Development Director, Supply Chain Management Applications, UK
    - Oracle Transportation Management - Dominic Regan, Senior Director Oracle Transportation Management EMEA
    - Oracle Value Chain Planning: Demantra Sales & Operation Planning and Demantra Demand Management - Lionel Albert, Senior Director Value Chain Planning, EMEA
    - Oracle CX (Customer Experience) - formerly CEM: Powering Great Customer Experiences - Maria Ramirez, CRM Presales Consultant, EPC
    - EPM 11.1.2.2 Overview - Nicholas Cox, EMEA Sales Development Director - Enterprise Performance Management
    - Oracle Hyperion Profitability and Cost Management, 11.1.2.1 - Daniela Lazar, Senior EPM Sales Consultant, EPC

    January the 16th 2012:

    - CRM / ATG: Best-in-Class CRM & Commerce - Maria Ramirez, Associate CRM Presales Consultant, EPC
    - CRM / Automate Business Rules for Maximum Efficiency with OPA (Oracle Policy Automation) - Marco Nilo, Associate CRM Presales Consultant, EPC
    - CRM / InQuira - Toby Baker, Principal Sales Consultant, CRM Product Specialist Team
    - EPM / Business Intelligence Foundation Suite - Sales and Product Updates - Liviu Nitescu, Senior BI Sales Consultant, EPC
    - EPM / Hyperion Planning 11.1.2.1 - Sales & Product Updates - Andreea Voinea, EPM Sales Consultant, EPC
    - ERP / JDE EnterpriseOne Fulfillment Management Overview - Mirela Andreea Nasta, ERP Presales Consultant, EPC
    - ERP / Spotlights on iExpenses - Elena Nita, ERP Presales Consultant, EPC
    - MDM / Master Data Management - Martin Boyd, Senior Director Product Strategy
    - Product breakthrough session: Fusion Applications Human Capital Management - Rosie Warner, Director, HCM Sales Development

    Recent Innovations in Industries Value Proposition presentations, January the 16th 2012:

    - Process Modernisation - Iemke Idsingh, Public Sector Solutions Director
    - Shared Services - Ann Smith, Business Development Director, Shared Services
    - Strengthening Financial Discipline Whilst Delivering Cashable Savings - Philippa Headley, UK Sales Development Director, Public Sector - EPM Solutions
    - Social Welfare Industry Solutions - Christian Wernberg-Tougaard, Industry Director - Social Welfare
    - Police Industry Solutions - Jeff Penrose, Solution Sales Director
    - Tax and Revenue Management Industry Solutions - Andre van der Post, Global Director - Tax Solutions and Strategy

  • Why do ICMP Redirect Host messages happen?

    - by El Barto
    I'm setting up a Debian box as a router for 4 subnets. For that I have defined 4 virtual interfaces on the NIC where the LAN is connected (eth1). eth1 Link encap:Ethernet HWaddr 94:0c:6d:82:0d:98 inet addr:10.1.1.1 Bcast:10.1.1.255 Mask:255.255.255.0 inet6 addr: fe80::960c:6dff:fe82:d98/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:6026521 errors:0 dropped:0 overruns:0 frame:0 TX packets:35331299 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:673201397 (642.0 MiB) TX bytes:177276932 (169.0 MiB) Interrupt:19 Base address:0x6000 eth1:0 Link encap:Ethernet HWaddr 94:0c:6d:82:0d:98 inet addr:10.1.2.1 Bcast:10.1.2.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 Interrupt:19 Base address:0x6000 eth1:1 Link encap:Ethernet HWaddr 94:0c:6d:82:0d:98 inet addr:10.1.3.1 Bcast:10.1.3.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 Interrupt:19 Base address:0x6000 eth1:2 Link encap:Ethernet HWaddr 94:0c:6d:82:0d:98 inet addr:10.1.4.1 Bcast:10.1.4.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 Interrupt:19 Base address:0x6000 eth2 Link encap:Ethernet HWaddr 6c:f0:49:a4:47:38 inet addr:192.168.1.10 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::6ef0:49ff:fea4:4738/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:199809345 errors:0 dropped:0 overruns:0 frame:0 TX packets:158362936 errors:0 dropped:0 overruns:0 carrier:1 collisions:0 txqueuelen:1000 RX bytes:3656983762 (3.4 GiB) TX bytes:1715848473 (1.5 GiB) Interrupt:27 eth3 Link encap:Ethernet HWaddr 94:0c:6d:82:c8:72 inet addr:192.168.2.5 Bcast:192.168.2.255 Mask:255.255.255.0 inet6 addr: fe80::960c:6dff:fe82:c872/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:110814 errors:0 dropped:0 overruns:0 frame:0 TX packets:73386 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:16044901 (15.3 MiB) TX bytes:42125647 (40.1 MiB) Interrupt:20 Base address:0x2000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:22351 errors:0 dropped:0 overruns:0 frame:0 TX packets:22351 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:2625143 (2.5 MiB) TX bytes:2625143 (2.5 MiB) tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:10.8.0.1 P-t-P:10.8.0.2 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:41358924 errors:0 dropped:0 overruns:0 frame:0 TX packets:23116350 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:3065505744 (2.8 GiB) TX bytes:1324358330 (1.2 GiB) I have two other computers connected to this network. One has IP 10.1.1.12 (subnet mask 255.255.255.0) and the other one 10.1.2.20 (subnet mask 255.255.255.0). I want to be able to reach 10.1.1.12 from 10.1.2.20. Since packet forwarding is enabled in the router and the policy of the FORWARD chain is ACCEPT (and there are no other rules), I understand that there should be no problem to ping from 10.1.2.20 to 10.1.1.12 going through the router. 
However, this is what I get: $ ping -c15 10.1.1.12 PING 10.1.1.12 (10.1.1.12): 56 data bytes Request timeout for icmp_seq 0 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 81d4 0 0000 3f 01 e2b3 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 1 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 899b 0 0000 3f 01 daec 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 2 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 78fe 0 0000 3f 01 eb89 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 3 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 14b8 0 0000 3f 01 4fd0 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 4 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 8ef7 0 0000 3f 01 d590 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 5 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 ec9d 0 0000 3f 01 77ea 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 6 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 70e6 0 0000 3f 01 f3a1 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 7 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 b0d2 0 0000 3f 01 b3b5 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 8 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 f8b4 0 0000 3f 01 6bd3 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 9 Request timeout for icmp_seq 10 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 1c95 0 0000 3f 01 47f3 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 11 Request timeout for icmp_seq 12 Request timeout for icmp_seq 13 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 62bc 0 0000 3f 01 01cc 10.1.2.20 10.1.1.12 Why does this happen? From what I've read the Redirect Host response has something to do with the fact that the two hosts are in the same network and there being a shorter route (or so I understood). They are in fact in the same physical network, but why would there be a better route if they are not on the same subnet (they can't see each other)? What am I missing? 
Some extra info you might want to see: # route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 10.8.0.2 0.0.0.0 255.255.255.255 UH 0 0 0 tun0 127.0.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 lo 192.168.2.0 0.0.0.0 255.255.255.0 U 0 0 0 eth3 10.8.0.0 10.8.0.2 255.255.255.0 UG 0 0 0 tun0 192.168.1.0 0.0.0.0 255.255.255.0 U 1 0 0 eth2 10.1.4.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 10.1.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 10.1.2.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 10.1.3.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 eth2 0.0.0.0 192.168.2.1 0.0.0.0 UG 100 0 0 eth3 # iptables -L -n Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination # iptables -L -n -t nat Chain PREROUTING (policy ACCEPT) target prot opt source destination Chain POSTROUTING (policy ACCEPT) target prot opt source destination MASQUERADE all -- !10.0.0.0/8 10.0.0.0/8 MASQUERADE all -- 10.0.0.0/8 !10.0.0.0/8 Chain OUTPUT (policy ACCEPT) target prot opt source destination

    Read the article

  • Why do ICMP Redirect Host messages happen?

    - by El Barto
    I'm setting up a Debian box as a router for 4 subnets. For that I have defined 4 virtual interfaces on the NIC where the LAN is connected (eth1):

        eth1 Link encap:Ethernet HWaddr 94:0c:6d:82:0d:98 inet addr:10.1.1.1 Bcast:10.1.1.255 Mask:255.255.255.0 inet6 addr: fe80::960c:6dff:fe82:d98/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:6026521 errors:0 dropped:0 overruns:0 frame:0 TX packets:35331299 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:673201397 (642.0 MiB) TX bytes:177276932 (169.0 MiB) Interrupt:19 Base address:0x6000
        eth1:0 Link encap:Ethernet HWaddr 94:0c:6d:82:0d:98 inet addr:10.1.2.1 Bcast:10.1.2.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 Interrupt:19 Base address:0x6000
        eth1:1 Link encap:Ethernet HWaddr 94:0c:6d:82:0d:98 inet addr:10.1.3.1 Bcast:10.1.3.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 Interrupt:19 Base address:0x6000
        eth1:2 Link encap:Ethernet HWaddr 94:0c:6d:82:0d:98 inet addr:10.1.4.1 Bcast:10.1.4.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 Interrupt:19 Base address:0x6000
        eth2 Link encap:Ethernet HWaddr 6c:f0:49:a4:47:38 inet addr:192.168.1.10 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::6ef0:49ff:fea4:4738/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:199809345 errors:0 dropped:0 overruns:0 frame:0 TX packets:158362936 errors:0 dropped:0 overruns:0 carrier:1 collisions:0 txqueuelen:1000 RX bytes:3656983762 (3.4 GiB) TX bytes:1715848473 (1.5 GiB) Interrupt:27
        eth3 Link encap:Ethernet HWaddr 94:0c:6d:82:c8:72 inet addr:192.168.2.5 Bcast:192.168.2.255 Mask:255.255.255.0 inet6 addr: fe80::960c:6dff:fe82:c872/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:110814 errors:0 dropped:0 overruns:0 frame:0 TX packets:73386 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:16044901 (15.3 MiB) TX bytes:42125647 (40.1 MiB) Interrupt:20 Base address:0x2000
        lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:22351 errors:0 dropped:0 overruns:0 frame:0 TX packets:22351 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:2625143 (2.5 MiB) TX bytes:2625143 (2.5 MiB)
        tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:10.8.0.1 P-t-P:10.8.0.2 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:41358924 errors:0 dropped:0 overruns:0 frame:0 TX packets:23116350 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:3065505744 (2.8 GiB) TX bytes:1324358330 (1.2 GiB)

    I have two other computers connected to this network. One has IP 10.1.1.12 (subnet mask 255.255.255.0) and the other one 10.1.2.20 (subnet mask 255.255.255.0). I want to be able to reach 10.1.1.12 from 10.1.2.20. Since packet forwarding is enabled on the router and the policy of the FORWARD chain is ACCEPT (and there are no other rules), I understand that there should be no problem pinging from 10.1.2.20 to 10.1.1.12 through the router.
    However, this is what I get:

        $ ping -c15 10.1.1.12
        PING 10.1.1.12 (10.1.1.12): 56 data bytes
        Request timeout for icmp_seq 0
        92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12)
        Vr HL TOS Len ID Flg off TTL Pro cks Src Dst
        4 5 00 0054 81d4 0 0000 3f 01 e2b3 10.1.2.20 10.1.1.12
        Request timeout for icmp_seq 1
        92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12)
        Vr HL TOS Len ID Flg off TTL Pro cks Src Dst
        4 5 00 0054 899b 0 0000 3f 01 daec 10.1.2.20 10.1.1.12
        Request timeout for icmp_seq 2
        92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12)
        Vr HL TOS Len ID Flg off TTL Pro cks Src Dst
        4 5 00 0054 78fe 0 0000 3f 01 eb89 10.1.2.20 10.1.1.12
        Request timeout for icmp_seq 3
        92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12)
        Vr HL TOS Len ID Flg off TTL Pro cks Src Dst
        4 5 00 0054 14b8 0 0000 3f 01 4fd0 10.1.2.20 10.1.1.12
        Request timeout for icmp_seq 4
        92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12)
        Vr HL TOS Len ID Flg off TTL Pro cks Src Dst
        4 5 00 0054 8ef7 0 0000 3f 01 d590 10.1.2.20 10.1.1.12
        Request timeout for icmp_seq 5
        92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12)
        Vr HL TOS Len ID Flg off TTL Pro cks Src Dst
        4 5 00 0054 ec9d 0 0000 3f 01 77ea 10.1.2.20 10.1.1.12
        Request timeout for icmp_seq 6
        92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12)
        Vr HL TOS Len ID Flg off TTL Pro cks Src Dst
        4 5 00 0054 70e6 0 0000 3f 01 f3a1 10.1.2.20 10.1.1.12
        Request timeout for icmp_seq 7
        92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12)
        Vr HL TOS Len ID Flg off TTL Pro cks Src Dst
        4 5 00 0054 b0d2 0 0000 3f 01 b3b5 10.1.2.20 10.1.1.12
        Request timeout for icmp_seq 8
        92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12)
        Vr HL TOS Len ID Flg off TTL Pro cks Src Dst
        4 5 00 0054 f8b4 0 0000 3f 01 6bd3 10.1.2.20 10.1.1.12
        Request timeout for icmp_seq 9
        Request timeout for icmp_seq 10
        92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12)
        Vr HL TOS Len ID Flg off TTL Pro cks Src Dst
        4 5 00 0054 1c95 0 0000 3f 01 47f3 10.1.2.20 10.1.1.12
        Request timeout for icmp_seq 11
        Request timeout for icmp_seq 12
        Request timeout for icmp_seq 13
        92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12)
        Vr HL TOS Len ID Flg off TTL Pro cks Src Dst
        4 5 00 0054 62bc 0 0000 3f 01 01cc 10.1.2.20 10.1.1.12

    Why does this happen? From what I've read, the Redirect Host response has something to do with the two hosts being on the same network and there being a shorter route (or so I understood). They are in fact on the same physical network, but why would there be a better route if they are not on the same subnet (they can't see each other)? What am I missing?
    Some extra info you might want to see:

        # route -n
        Kernel IP routing table
        Destination Gateway Genmask Flags Metric Ref Use Iface
        10.8.0.2 0.0.0.0 255.255.255.255 UH 0 0 0 tun0
        127.0.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 lo
        192.168.2.0 0.0.0.0 255.255.255.0 U 0 0 0 eth3
        10.8.0.0 10.8.0.2 255.255.255.0 UG 0 0 0 tun0
        192.168.1.0 0.0.0.0 255.255.255.0 U 1 0 0 eth2
        10.1.4.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
        10.1.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
        10.1.2.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
        10.1.3.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
        0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 eth2
        0.0.0.0 192.168.2.1 0.0.0.0 UG 100 0 0 eth3

        # iptables -L -n
        Chain INPUT (policy ACCEPT)
        target prot opt source destination
        Chain FORWARD (policy ACCEPT)
        target prot opt source destination
        Chain OUTPUT (policy ACCEPT)
        target prot opt source destination

        # iptables -L -n -t nat
        Chain PREROUTING (policy ACCEPT)
        target prot opt source destination
        Chain POSTROUTING (policy ACCEPT)
        target prot opt source destination
        MASQUERADE all -- !10.0.0.0/8 10.0.0.0/8
        MASQUERADE all -- 10.0.0.0/8 !10.0.0.0/8
        Chain OUTPUT (policy ACCEPT)
        target prot opt source destination
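    For what it's worth, a likely explanation given the configuration above: all four subnets sit behind one physical interface (eth1 and its aliases), so when the router forwards a packet back out the same interface it arrived on, the Linux kernel sends an ICMP Redirect telling the sender there is a more direct path. A sketch of how you might suppress that behaviour, assuming stock Debian sysctl keys:

        # On the router: stop emitting redirects on eth1 and its aliases
        sysctl -w net.ipv4.conf.all.send_redirects=0
        sysctl -w net.ipv4.conf.eth1.send_redirects=0
        # On the hosts: ignore any redirects that still arrive
        sysctl -w net.ipv4.conf.all.accept_redirects=0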

    Read the article

  • Issue when using Handlebars.js

    - by Roland
    I'm having a small issue when compiling a template with Handlebars.js. I have a JSON text file which contains a big array of objects (Source), and I'm using XMLHttpRequest to get it and then parse it so I can use it when compiling the template. So far the template has the following structure:

        <div class="product-listing-wrapper">
        <div class="product-listing">
        <div class="left-side-content">
        <div class="thumb-wrapper"> <img src="{{ThumbnailUrl}}"> </div>
        <div class="google-maps-wrapper">
        <div class="google-coordonates-wrapper">
        <div class="google-coordonates">
        <p>{{LatLon.Lat}}</p>
        <p>{{LatLon.Lon}}</p>
        </div>
        </div>
        <div class="google-maps-button">
        <a class="google-maps" href="#" data-latitude="{{LatLon.Lat}}" data-longitude="{{LatLon.Lon}}">Google Maps</a>
        </div>
        </div>
        </div>
        <div class="right-side-content"></div>
        </div>

    And the following block of code is how I'm handling the JS part:

        $(document).ready(function() {
        /* Default Javascript Options ~a javascript object which contains all the variables that will be passed to the cluster class */
        var default_cluster_options = { animations : ['flash', 'bounce', 'shake', 'tada', 'swing', 'wobble', 'wiggle', 'pulse', 'flip', 'flipInX', 'flipOutX', 'flipInY', 'flipOutY', 'fadeIn', 'fadeInUp', 'fadeInDown', 'fadeInLeft', 'fadeInRight', 'fadeInUpBig', 'fadeInDownBig', 'fadeInLeftBig', 'fadeInRightBig', 'fadeOut', 'fadeOutUp', 'fadeOutDown', 'fadeOutLeft', 'fadeOutRight', 'fadeOutUpBig', 'fadeOutDownBig', 'fadeOutLeftBig', 'fadeOutRightBig', 'bounceIn', 'bounceInUp', 'bounceInDown', 'bounceInLeft', 'bounceInRight', 'bounceOut', 'bounceOutUp', 'bounceOutDown', 'bounceOutLeft', 'bounceOutRight', 'rotateIn', 'rotateInDownLeft', 'rotateInDownRight', 'rotateInUpLeft', 'rotateInUpRight', 'rotateOut', 'rotateOutDownLeft', 'rotateOutDownRight', 'rotateOutUpLeft', 'rotateOutUpRight', 'lightSpeedIn', 'lightSpeedOut', 'hinge', 'rollIn', 'rollOut'], json_data_url : 'data.json', template_data_url : 'template.php', base_maps_api_url : 'https://maps.googleapis.com/maps/api/js?sensor=false', cluser_wrapper_id : '#content-wrapper', maps_wrapper_class : '.google-maps', };
        /* Cluster ~main class, handles all javascript operations */
        var Cluster = function(environment, cluster_options) { var self = this; this.options = $.extend({}, default_cluster_options, cluster_options); this.environment = environment; this.animations = this.options.animations; this.json_data_url = this.options.json_data_url; this.template_data_url = this.options.template_data_url; this.base_maps_api_url = this.options.base_maps_api_url; this.cluser_wrapper_id = this.options.cluser_wrapper_id; this.maps_wrapper_class = this.options.maps_wrapper_class; this.test_environment_mode(this.environment); this.initiate_environment(); this.test_xmlhttprequest_availability(); this.initiate_gmaps_lib_load(self.base_maps_api_url); this.initiate_data_processing(); };
        /* Test Environment Mode ~adds a modernizr test which checks whether the cluster class is initiated in development or not */
        Cluster.prototype.test_environment_mode = function(environment) { var self = this; return Modernizr.addTest('test_environment', function() { return (typeof environment !== 'undefined' && environment !== null && environment === "Development") ? true : false; }); };
        /* Test XMLHTTPRequest Availability ~adds a modernizr test which checks whether the xmlhttprequest class is available or not in the browser; the exception is IE */
        Cluster.prototype.test_xmlhttprequest_availability = function() { return Modernizr.addTest('test_xmlhttprequest', function() { return (typeof window.XMLHttpRequest === 'undefined' || window.XMLHttpRequest === null) ? true : false; }); };
        /* Initiate Environment ~depending on what the modernizr test returns it puts LESS in the development mode or not */
        Cluster.prototype.initiate_environment = function() { return (Modernizr.test_environment) ? (less.env = "development", less.watch()) : true; };
        Cluster.prototype.initiate_gmaps_lib_load = function(lib_url) { return Modernizr.load(lib_url); };
        /* Initiate XHR Request ~prototype function that creates an xmlhttprequest for processing json data from a separate json text file */
        Cluster.prototype.initiate_xhr_request = function(url, mime_type) { var request, data; var self = this; (Modernizr.test_xmlhttprequest) ? request = new ActiveXObject('Microsoft.XMLHTTP') : request = new XMLHttpRequest(); request.onreadystatechange = function() { if(request.readyState == 4 && request.status == 200) { data = request.responseText; } }; request.open("GET", url, false); request.overrideMimeType(mime_type); request.send(); return data; };
        Cluster.prototype.initiate_google_maps_action = function() { var self = this; return $(this.maps_wrapper_class).each(function(index, element) { return $(element).on('click', function(ev) { var html = $('<div id="map-canvas" class="map-canvas"></div>'); var latitude = $(element).attr('data-latitude'); var longitude = $(element).attr('data-longitude'); log("LAT : " + latitude); log("LON : " + longitude); $.lightbox(html, { "width": 900, "height": 250, "onOpen" : function() { } }); ev.preventDefault(); }); }); };
        Cluster.prototype.initiate_data_processing = function() { var self = this; var json_data = JSON.parse(self.initiate_xhr_request(self.json_data_url, 'application/json; charset=ISO-8859-1')); var source_data = self.initiate_xhr_request(self.template_data_url, 'text/html'); var template = Handlebars.compile(source_data); for(var i = 0; i < json_data.length; i++ ) { var result = template(json_data[i]); $(result).appendTo(self.cluser_wrapper_id); } self.initiate_google_maps_action(); };
        /* Cluster ~initiate the cluster class */
        var cluster = new Cluster("Development");
        });

    My problem is that I don't think I'm iterating over the JSON object correctly, or I'm using the template the wrong way, because if you check this link (http://rolandgroza.com/labs/valtech/) you will see that there are some numbers there (which represent latitude and longitude), but they are all the same, while if you take even a brief look at the JSON object each number is different. So what am I doing wrong that makes the same number repeat? Or what should I do to fix it? I should mention that I've just started working with templates, so I have little knowledge of them.
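    A side note on the rendering approach: compiling once and rendering inside a manual for loop can work, but Handlebars can also iterate the array itself with its built-in {{#each}} helper, which removes one place where a stale context could sneak in. A minimal, hypothetical sketch (the wrapper markup is trimmed for brevity; json_data is the array parsed above):

        // Hypothetical sketch: let Handlebars iterate the array itself.
        var source = '{{#each this}}' +
          '<div class="product-listing-wrapper">' +
          '<p>{{LatLon.Lat}}</p><p>{{LatLon.Lon}}</p>' +
          '</div>' +
          '{{/each}}';
        var template = Handlebars.compile(source);
        // json_data is the parsed array returned by initiate_xhr_request()
        $(template(json_data)).appendTo('#content-wrapper');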

    Read the article

  • iptables is not allowing me to contact my DNS nameservers

    - by user1272737
    I have the following iptables rules:

        Chain INPUT (policy DROP)
        target prot opt source destination
        ACCEPT tcp -- anywhere anywhere tcp dpt:ssh
        ACCEPT tcp -- anywhere anywhere tcp dpt:http
        ACCEPT tcp -- anywhere anywhere tcp dpt:https
        ACCEPT tcp -- localhost.localdomain anywhere tcp dpt:mysql
        ACCEPT tcp -- anywhere anywhere tcp dpt:14443
        ACCEPT tcp -- anywhere anywhere tcp dpt:ftp
        ACCEPT tcp -- anywhere anywhere tcp dpt:ftp-data
        ACCEPT tcp -- anywhere anywhere tcp dpt:xxxxxxx

        Chain FORWARD (policy ACCEPT)
        target prot opt source destination

        Chain OUTPUT (policy ACCEPT)
        target prot opt source destination

    When I turn off iptables I am able to use wget and all other commands. When these rules are enabled I cannot connect to any address. Any idea why this would be?
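    A note on the likely cause: with the INPUT policy at DROP and only inbound service ports opened, the reply packets for connections the box itself initiates, including UDP answers from the DNS nameservers, are dropped on the way back in. A minimal sketch of the conventional fix, assuming the state match module is available:

        # Accept reply traffic for connections this host initiated
        iptables -I INPUT 1 -m state --state ESTABLISHED,RELATED -j ACCEPT
        # Loopback traffic is usually allowed explicitly as well
        iptables -I INPUT 2 -i lo -j ACCEPT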

    Read the article

  • ip6tables blocks INPUT? Can't connect to the YouTube API

    - by klaas
    I thought I had a simple IPv6 firewall, but it turned out to be hell. Somehow I really can't connect to any IPv6 host from my machine unless I set the INPUT policy to ACCEPT. Below is my current ip6tables configuration:

        ip6tables -L
        Chain INPUT (policy DROP)
        target prot opt source destination
        ACCEPT all anywhere anywhere state RELATED,ESTABLISHED
        ACCEPT ipv6-icmp anywhere anywhere
        ACCEPT tcp anywhere anywhere tcp dpt:http
        ACCEPT tcp anywhere anywhere tcp dpt:https

        Chain FORWARD (policy ACCEPT)
        target prot opt source destination

        Chain OUTPUT (policy ACCEPT)
        target prot opt source destination

    If I try to connect to any IPv6 address, it doesn't work:

        telnet gdata.youtube.com 80
        Trying 2a00:1450:4013:c00::76...

        telnet gdata.youtube.com 443
        Trying 2a00:1450:4013:c00::76...

    When I set:

        ip6tables -P INPUT ACCEPT

    it works. But then, well, then everything is open? What is going on? Help?
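    Since the RELATED,ESTABLISHED rule is already first in the chain, one common culprit, offered here as an assumption rather than something visible in the listing, is that IPv6 connection tracking is not loaded, so reply packets never match ESTABLISHED and fall through to the DROP policy. A quick check and fix to try:

        # Is IPv6 conntrack present? (module name on kernels of this vintage)
        lsmod | grep nf_conntrack_ipv6
        # If missing, load it so the state match can classify return traffic
        modprobe nf_conntrack_ipv6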

    Read the article

  • OpenVPN on EC2 in bridged mode connects, but no ping, DNS, or forwarding

    - by michael
    I am trying to use OpenVPN to access the internet over a secure connection. I have OpenVPN configured and running on Amazon EC2 in bridge mode with client certs. I can successfully connect from the client, but I cannot get access to the internet or ping anything from the client. I checked the following, and everything seems to show a successful connection between the VPN client/server and UDP traffic on 1194:

        [server] sudo tcpdump -i eth0 udp port 1194 (shows UDP traffic after establishing connection)

        [server] sudo iptables -L
        Chain INPUT (policy ACCEPT)
        target prot opt source destination
        ACCEPT all -- anywhere anywhere
        ACCEPT all -- anywhere anywhere
        Chain FORWARD (policy ACCEPT)
        target prot opt source destination
        ACCEPT all -- anywhere anywhere
        Chain OUTPUT (policy ACCEPT)
        target prot opt source destination

        [server] sudo iptables -L -t nat
        Chain PREROUTING (policy ACCEPT)
        target prot opt source destination
        Chain POSTROUTING (policy ACCEPT)
        target prot opt source destination
        MASQUERADE all -- ip-W-X-Y-0.us-west-1.compute.internal/24 anywhere
        Chain OUTPUT (policy ACCEPT)
        target prot opt source destination

        [server] openvpn.log
        Wed Oct 19 03:11:26 2011 localhost/a.b.c.d:61905 [localhost] Inactivity timeout (--ping-restart), restarting
        Wed Oct 19 03:11:26 2011 localhost/a.b.c.d:61905 SIGUSR1[soft,ping-restart] received, client-instance restarting
        Wed Oct 19 03:41:31 2011 MULTI: multi_create_instance called
        Wed Oct 19 03:41:31 2011 a.b.c.d:57889 Re-using SSL/TLS context
        Wed Oct 19 03:41:31 2011 a.b.c.d:57889 LZO compression initialized
        Wed Oct 19 03:41:31 2011 a.b.c.d:57889 Control Channel MTU parms [ L:1574 D:166 EF:66 EB:0 ET:0 EL:0 ]
        Wed Oct 19 03:41:31 2011 a.b.c.d:57889 Data Channel MTU parms [ L:1574 D:1450 EF:42 EB:135 ET:32 EL:0 AF:3/1 ]
        Wed Oct 19 03:41:31 2011 a.b.c.d:57889 Local Options hash (VER=V4): '360696c5'
        Wed Oct 19 03:41:31 2011 a.b.c.d:57889 Expected Remote Options hash (VER=V4): '13a273ba'
        Wed Oct 19 03:41:31 2011 a.b.c.d:57889 TLS: Initial packet from [AF_INET]a.b.c.d:57889, sid=dd886604 ab6ebb38
        Wed Oct 19 03:41:35 2011 a.b.c.d:57889 VERIFY OK: depth=1, /C=US/ST=CA/L=SanFrancisco/O=EXAMPLE/CN=EXAMPLE_CA/[email protected]
        Wed Oct 19 03:41:35 2011 a.b.c.d:57889 VERIFY OK: depth=0, /C=US/ST=CA/L=SanFrancisco/O=EXAMPLE/CN=localhost/[email protected]
        Wed Oct 19 03:41:37 2011 a.b.c.d:57889 Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key
        Wed Oct 19 03:41:37 2011 a.b.c.d:57889 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
        Wed Oct 19 03:41:37 2011 a.b.c.d:57889 Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key
        Wed Oct 19 03:41:37 2011 a.b.c.d:57889 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
        Wed Oct 19 03:41:37 2011 a.b.c.d:57889 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA
        Wed Oct 19 03:41:37 2011 a.b.c.d:57889 [localhost] Peer Connection Initiated with [AF_INET]a.b.c.d:57889
        Wed Oct 19 03:41:39 2011 localhost/a.b.c.d:57889 PUSH: Received control message: 'PUSH_REQUEST'
        Wed Oct 19 03:41:39 2011 localhost/a.b.c.d:57889 SENT CONTROL [localhost]: 'PUSH_REPLY,redirect-gateway def1 bypass-dhcp,route-gateway W.X.Y.Z,ping 10,ping-restart 120,ifconfig W.X.Y.Z 255.255.255.0' (status=1)
        Wed Oct 19 03:41:40 2011 localhost/a.b.c.d:57889 MULTI: Learn: (IPV6) -> localhost/a.b.c.d:57889

        [client] tracert google.com
        Tracing route to google.com [74.125.71.104] over a maximum of 30 hops:
        1 347 ms 349 ms 348 ms PC [w.X.Y.Z]
        2 * * * Request timed out.
    I can also successfully ping the server IP address from the client, and ping google.com from an SSH shell on the server. What am I doing wrong? Here is my config (note: W.X.Y.Z == Amazon EC2 private IP address).

    Bridge config on br0:

        ifconfig eth0 0.0.0.0 promisc up
        brctl addbr br0
        brctl addif br0 eth0
        ifconfig br0 W.X.Y.X netmask 255.255.255.0 broadcast W.X.Y.255 up
        route add default gw W.X.Y.1 br0

    /etc/openvpn/server.conf (from https://help.ubuntu.com/10.04/serverguide/C/openvpn.html):

        local W.X.Y.Z
        dev tap0
        up "/etc/openvpn/up.sh br0"
        down "/etc/openvpn/down.sh br0"
        ;server W.X.Y.0 255.255.255.0
        server-bridge W.X.Y.Z 255.255.255.0 W.X.Y.105 W.X.Y.200
        ;push "route W.X.Y.0 255.255.255.0"
        push "redirect-gateway def1 bypass-dhcp"
        push "dhcp-option DNS 208.67.222.222"
        push "dhcp-option DNS 208.67.220.220"
        tls-auth ta.key 0 # This file is secret
        user nobody
        group nogroup
        log-append openvpn.log

    iptables config:

        sudo iptables -A INPUT -i tap0 -j ACCEPT
        sudo iptables -A INPUT -i br0 -j ACCEPT
        sudo iptables -A FORWARD -i br0 -j ACCEPT
        sudo iptables -t nat -A POSTROUTING -s W.X.Y.0/24 -o eth0 -j MASQUERADE
        echo 1 > /proc/sys/net/ipv4/ip_forward

    Routing tables added:

        route -n
        Kernel IP routing table
        Destination Gateway Genmask Flags Metric Ref Use Iface
        W.X.Y.0 0.0.0.0 255.255.255.0 U 0 0 0 br0
        0.0.0.0 W.X.Y.1 0.0.0.0 UG 0 0 0 br0

        C:\>route print
        ===========================================================================
        Interface List
        32...00 ff ac d6 f7 04 ......TAP-Win32 Adapter V9
        15...00 14 d1 e9 57 49 ......Microsoft Virtual WiFi Miniport Adapter #2
        14...00 14 d1 e9 57 49 ......Realtek RTL8191SU Wireless LAN 802.11n USB 2.0 Network Adapter
        10...00 1f d0 50 1b ca ......Realtek PCIe GBE Family Controller
        1...........................Software Loopback Interface 1
        11...00 00 00 00 00 00 00 e0 Teredo Tunneling Pseudo-Interface
        16...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter
        17...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #2
        18...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #3
        36...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #5
        ===========================================================================
        IPv4 Route Table
        ===========================================================================
        Active Routes:
        Network Destination Netmask Gateway Interface Metric
        0.0.0.0 0.0.0.0 10.1.2.1 10.1.2.201 25
        10.1.2.0 255.255.255.0 On-link 10.1.2.201 281
        10.1.2.201 255.255.255.255 On-link 10.1.2.201 281
        10.1.2.255 255.255.255.255 On-link 10.1.2.201 281
        127.0.0.0 255.0.0.0 On-link 127.0.0.1 306
        127.0.0.1 255.255.255.255 On-link 127.0.0.1 306
        127.255.255.255 255.255.255.255 On-link 127.0.0.1 306
        224.0.0.0 240.0.0.0 On-link 127.0.0.1 306
        224.0.0.0 240.0.0.0 On-link 10.1.2.201 281
        255.255.255.255 255.255.255.255 On-link 127.0.0.1 306
        255.255.255.255 255.255.255.255 On-link 10.1.2.201 281
        ===========================================================================
        Persistent Routes:
        Network Address Netmask Gateway Address Metric
        0.0.0.0 0.0.0.0 10.1.2.1 Default
        ===========================================================================

        C:\>tracert google.com
        Tracing route to google.com [74.125.71.147] over a maximum of 30 hops:
        1 344 ms 345 ms 343 ms PC [W.X.Y.221]
        2 * * * Request timed out.
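    One platform caveat worth stating, as an assumption about EC2 rather than anything in the logs: EC2's network is not a normal Ethernet segment, and broadcast/promiscuous traffic from a bridged tap interface generally does not survive it, so bridged (tap/br0) setups tend to fail in exactly this way while routed (tun) mode works. A minimal routed-mode sketch of the same server, with example addresses:

        # /etc/openvpn/server.conf -- routed-mode sketch (10.8.0.0/24 is an example subnet)
        dev tun
        server 10.8.0.0 255.255.255.0
        push "redirect-gateway def1 bypass-dhcp"
        push "dhcp-option DNS 208.67.222.222"

        # NAT the VPN subnet out of eth0 and enable forwarding
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
        echo 1 > /proc/sys/net/ipv4/ip_forward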

    Read the article

  • svnserve accepts only local connections

    - by stiv
    I've installed svnserve on the Linux box konrad. On konrad I can check out from svn:

        steve@konrad:~$ svn co svn://konrad
        A konrad/build.xml

    On my local Windows PC I can ping konrad, but the checkout doesn't work:

        C:\Projects>svn co svn://konrad
        svn: E730061: Unable to connect to a repository at URL 'svn://konrad'
        svn: E730061: Can't connect to host 'konrad': No connection could be made because the target machine actively refused it.

    My Linux firewall is disabled:

        konrad# iptables -L
        Chain INPUT (policy ACCEPT)
        target prot opt source destination
        Chain FORWARD (policy ACCEPT)
        target prot opt source destination
        Chain OUTPUT (policy ACCEPT)
        target prot opt source destination

    and the Windows firewall is also off (I can't send a screenshot here, so believe me). How can I fix that? Any ideas?
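    Given that the local checkout works and the remote one is actively refused with both firewalls down, a plausible cause (an assumption, since the svnserve launch command is not shown) is that svnserve is bound to the loopback interface only. Something to check, and a possible fix, with the repository path as a placeholder:

        # See which address svnserve (port 3690) is listening on
        netstat -tlnp | grep 3690
        # If it shows 127.0.0.1:3690, restart it bound to all interfaces
        svnserve -d -r /path/to/repos --listen-host 0.0.0.0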

    Read the article

  • How to drop all subnets outside of the US using iptables

    - by Jim
    I want to block all subnets outside the US. I've made a script that has all of the US subnets in it. I want to disallow (DROP) everything but my list. Can someone give me an example of how I can start by denying everything?

    This is the output from -L:

        Chain INPUT (policy DROP)
        target prot opt source destination
        ACCEPT all -- anywhere anywhere
        ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
        ACCEPT tcp -- anywhere anywhere tcp dpt:ftp state NEW
        DROP icmp -- anywhere anywhere
        Chain FORWARD (policy DROP)
        target prot opt source destination
        Chain OUTPUT (policy ACCEPT)
        target prot opt source destination

    And these are the rules:

        iptables --F
        iptables --policy INPUT DROP
        iptables --policy FORWARD DROP
        iptables --policy OUTPUT ACCEPT
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -p tcp -i eth0 --dport 21 -m state --state NEW -j ACCEPT
        iptables -A INPUT -p icmp -j DROP

    Just for clarity: with these rules I can still connect to port 21 without my subnet list in place. I want to block ALL subnets and only open those inside the US.
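    Since the INPUT policy is already DROP, what remains is to accept NEW connections only when the source is in the US list, instead of the blanket port rule. A sketch assuming a one-CIDR-per-line file named us-subnets.txt (the file and set names are made up), using ipset so thousands of networks stay fast:

        # Build a hash set of the US CIDRs
        ipset create us-nets hash:net
        while read net; do ipset add us-nets "$net"; done < us-subnets.txt

        # Allow new FTP connections only from sources in the set
        # (this would replace the existing unrestricted dpt:21 rule)
        iptables -A INPUT -p tcp --dport 21 -m state --state NEW -m set --match-set us-nets src -j ACCEPT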

    Read the article

  • Failing to achieve tunneling to a fresh Ubuntu 10.04 server

    - by user65297
    I've just set up a new 10.04 server and can't get tunneling to work. From the local machine:

        local machine > ssh -L 9090:localhost:9090 [email protected]

    Login succeeds, but when I then try the tunnel from a local browser (http://127.0.0.1:9090), the server terminal echoes:

        channel 3: open failed: connect failed: Connection refused

    auth.log:

        sshd[24502]: error: connect_to localhost port 9090: failed.

        iptables -L
        Chain INPUT (policy ACCEPT)
        target prot opt source destination
        Chain FORWARD (policy ACCEPT)
        target prot opt source destination
        Chain OUTPUT (policy ACCEPT)
        target prot opt source destination

    Trying port 9090 directly at the server works (links http://xx.xxx.xx.xx:9090 works), and sshd_config is identical to the previous 8.04 server, which works fine. What's going on? Thankful for any input. Regards, //t
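    The auth.log line says sshd itself could not reach 127.0.0.1:9090 on the server; combined with the fact that port 9090 answers on the external address, the likely cause (an inference, not something shown above) is a service bound only to the external interface. To confirm and work around it:

        # On the server: see which address port 9090 is bound to
        netstat -tlnp | grep 9090
        # If it shows only the external address, tunnel to that address instead
        ssh -L 9090:xx.xxx.xx.xx:9090 [email protected]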

    Read the article

  • SVN Server not responding

    - by Rob Forrest
    I've been bashing my head against a wall with this one all day and I would greatly appreciate a few more eyes on the problem at hand. We have an in-house SVN server that contains all live and development code for our website. Our live server can connect to this and get updates from the repository. This was all working fine until we migrated the SVN server from a physical machine to a vSphere VM. Now, for some reason that continues to fathom me, we can no longer connect to the SVN server.

    The SVN server runs CentOS 6.2, Apache and SVN 1.7.2. SELinux is well and truly disabled, and the problem remains when iptables is stopped. Our production server does run an older version of CentOS and SVN, but the same system worked previously, so I don't think that this is the issue. Of note, if I have iptables enabled (using service iptables status), I can see a single packet coming in and being accepted, but the production server simply hangs on any svn command. If I give up waiting and do a CTRL-C to break the process I get a "could not connect to server". To me it appears that the SVN server is rejecting external connections, but I have no idea how this would happen. Any thoughts on what I can try from here? Thanks, Rob

    Edit: Network topology. The production server sits externally to our in-house SVN server. Our IPCop (?) firewall allows connections from it (and it alone) on port 80 and passes the connection to the SVN server. The hardware is all pretty decent and I don't doubt that it's doing its job correctly, especially as iptables is seeing the new connections.

    subversion.conf (in /etc/httpd/conf.d):

        LoadModule dav_svn_module modules/mod_dav_svn.so
        <Location /repos>
        DAV svn
        SVNPath /var/svn/repos
        <LimitExcept PROPFIND OPTIONS REPORT>
        AuthType Basic
        AuthName "SVN Server"
        AuthUserFile /var/svn/svn-auth
        Require valid-user
        </LimitExcept>
        </Location>

    ifconfig:

        eth0 Link encap:Ethernet HWaddr 00:0C:29:5F:C8:3A inet addr:172.16.0.14 Bcast:172.16.0.255 Mask:255.255.255.0 inet6 addr: fe80::20c:29ff:fe5f:c83a/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:32317 errors:0 dropped:0 overruns:0 frame:0 TX packets:632 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:2544036 (2.4 MiB) TX bytes:143207 (139.8 KiB)

    netstat -lntp:

        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
        tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN 1484/mysqld
        tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1135/rpcbind
        tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1351/sshd
        tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 1230/cupsd
        tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1575/master
        tcp 0 0 0.0.0.0:58401 0.0.0.0:* LISTEN 1153/rpc.statd
        tcp 0 0 0.0.0.0:5672 0.0.0.0:* LISTEN 1626/qpidd
        tcp 0 0 :::139 :::* LISTEN 1678/smbd
        tcp 0 0 :::111 :::* LISTEN 1135/rpcbind
        tcp 0 0 :::80 :::* LISTEN 1615/httpd
        tcp 0 0 :::22 :::* LISTEN 1351/sshd
        tcp 0 0 ::1:631 :::* LISTEN 1230/cupsd
        tcp 0 0 ::1:25 :::* LISTEN 1575/master
        tcp 0 0 :::445 :::* LISTEN 1678/smbd
        tcp 0 0 :::56799 :::* LISTEN 1153/rpc.statd

    iptables --list -v -n (when iptables is stopped):

        Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
        pkts bytes target prot opt in out source destination
        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
        pkts bytes target prot opt in out source destination
        Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
        pkts bytes target prot opt in out source destination

    iptables --list -v -n (when iptables is running, after one attempted svn connection):

        Chain INPUT (policy ACCEPT 68 packets, 6561 bytes)
        pkts bytes target prot opt in out source destination
        19 1304 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
        0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0
        0 0 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
        0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22
        1 60 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:80
        0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:80
        0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW udp dpt:80
        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
        pkts bytes target prot opt in out source destination
        Chain OUTPUT (policy ACCEPT 17 packets, 1612 bytes)
        pkts bytes target prot opt in out source destination

    tcpdump:

        17:08:18.455114 IP 'production server'.43255 > 'svn server'.local.http: Flags [S], seq 3200354543, win 5840, options [mss 1380,sackOK,TS val 2011458346 ecr 0,nop,wscale 7], length 0
        17:08:18.455169 IP 'svn server'.local.http > 'production server'.43255: Flags [S.], seq 629885453, ack 3200354544, win 14480, options [mss 1460,sackOK,TS val 816478 ecr 2011449346,nop,wscale 7], length 0
        17:08:19.655317 IP 'svn server'.local.http > 'production server'.43255: Flags [S.], seq 629885453, ack 3200354544, win 14480, options [mss 1460,sackOK,TS val 817679 ecr 2011449346,nop,wscale 7], length 0
    Read the article

< Previous Page | 26 27 28 29 30 31 32 33 34 35 36 37  | Next Page >