Search Results

Search found 1976 results on 80 pages for 'nic wise'.


  • Is it possible to create a simple frontend indexer for openbittorrent torrents?

    - by SimonK
    I run a website which distributes a few files every now and again, live music performances by a rock band. I create a torrent file, set the trackers as openbittorrent, publicbt and other similar open trackers. I upload the torrent file to my forum, my users download it and the files are shared. No problems there. What I would like to do is index those torrents properly on my website, though, so I can follow seeders/leechers and other stats online. I know the open torrent trackers don't have an index, but I am aware of many, many indexing sites that do that exact thing. I don't know how, though. So what I'm asking is: what do I need to do to do that myself? I simply want to create a page that lists the torrents I and other users on my site create, the seeders/leechers ratio, a link to the torrent file, etc. What data do I need to be able to do that? I'm proficient in general web design, but I don't know what I would need, data-wise, to pull the required info on the torrents. Thanks
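
    The numbers you're after come from the trackers' "scrape" facility, which is the same data the indexing sites poll. Here is a hedged sketch (assuming an HTTP tracker; openbittorrent and publicbt are primarily UDP trackers, which need the binary BEP 15 protocol instead), with the URL and hash as placeholders, and a bencode decoder still needed for the response:

        import urllib.parse
        import urllib.request

        def scrape(announce_url, info_hash):
            # info_hash is the 20-byte SHA-1 of the torrent's bencoded
            # 'info' dict; by convention the scrape URL is the announce
            # URL with 'announce' replaced by 'scrape'.
            scrape_url = announce_url.replace("/announce", "/scrape")
            qs = "info_hash=" + urllib.parse.quote_from_bytes(info_hash)
            with urllib.request.urlopen(scrape_url + "?" + qs, timeout=10) as resp:
                # Bencoded reply: {'files': {info_hash: {'complete': seeders,
                # 'incomplete': leechers, 'downloaded': snatches}}}
                return resp.read()

    Run something like this from a cron job, cache the counts next to the torrent rows in your database, and build the listing page from the cache rather than hitting the trackers on every page view.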

    Read the article

  • running a laptop continuously

    - by intuited
    I have an experienced laptop — a Dell Latitude D400, with a Pentium M CPU — that I'd like to run as an always-on server. This model was launched in 2004; I got mine second-hand in about 2007. I've heard that continuous operation is generally not a good idea with consumer hardware, but am lacking in specific knowledge about related problems, and have little idea of how much such usage patterns would reduce the lifespan of the machine. I'm mostly concerned with the unit's core components; parts such as the hard drive which are readily replaceable are, well, readily replaceable. What sorts of things can I do to increase the lifespan of this machine under such circumstances? For example, I'm guessing that it would be wise to limit the CPU frequency or take other steps to keep the internal temperature low. However, I'm not sure where the point of diminishing returns would lie with such an approach — 50°C? 40°C? Would it be useful to suspend the machine periodically, for perhaps an hour each day, or a few hours each week?
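
    For the frequency and temperature ideas, a hedged sketch of what that looks like on a Linux install (assuming the cpufrequtils package; the ACPI thermal path varies by machine and kernel, so treat the paths as placeholders):

        # Let the governor hold the Pentium M at low SpeedStep states:
        sudo cpufreq-set -g powersave
        # Watch the internal temperature to find your own point of diminishing returns:
        cat /proc/acpi/thermal_zone/THM*/temperature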

    Read the article

  • PFSense VPN Routing

    - by SvrGuy
    We use PFSense firewalls at three installations with the following LAN networks:
    1.) Datacenter #1: 10.0.0.0/16
    2.) Datacenter #2: 10.1.0.0/16
    3.) HQ: 10.2.0.0/16
    All of these locations are linked via an IPSEC tunnel that works properly. Hosts in any of the above networks can communicate with hosts in any other of the above networks. Now, for our laptops etc. we established a road warrior network 10.3.0.0/16 and have implemented OpenVPN to link the laptops to Datacenter #1. This works great too, so our laptops can connect and communicate with any host in Datacenter #1 (anything on 10.0.0.0/16). The problem is the laptops can't communicate with any hosts that Datacenter #1 can reach by its IPSEC tunnel to Datacenter #2 (and/or the HQ, for that matter). Does anyone know what to do, configuration-wise, on the PFSense box in Datacenter #1 to route packets received on the OpenVPN tunnel to Datacenter #2 over the IPSEC tunnel? It could be a setting on the OpenVPN side, or some sort of static route, or some such. Any ideas?
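
    Two changes are usually needed; the following is a sketch assuming a routed (tun) OpenVPN setup, not a transcription of the exact pfSense screens. First, have the OpenVPN server push routes for the remote LANs so the laptops send that traffic up the tunnel:

        push "route 10.1.0.0 255.255.0.0"   # Datacenter #2
        push "route 10.2.0.0 255.255.0.0"   # HQ

    Second, and easier to miss: IPSEC only carries traffic matching its Phase 2 networks, so add Phase 2 entries on both ends of each tunnel pairing 10.3.0.0/16 with 10.1.0.0/16 (and with 10.2.0.0/16). Without those, replies from the far side have no security association to come back on.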

    Read the article

  • What to do after a fresh Linux install on a production server?

    - by Rhyuk
    I haven't had previous experience with the 'serious' IT scene. At work I've been handed a server that will host an application and MySQL (I will install and configure everything); this will be a production server. Soon I will be installing RHEL5 on it, but I would like to know: if you get a new production server, what are the first 5 things you would do after a fresh Linux install (configuration/security/reliability-wise)?
    EDIT: Added more information regarding the server environment and server roles:
    - The server will be inside my company's intranet/firewall.
    - The server will receive files (GBs) in binary code from another internal server. The application installed on this server is in charge of "translating" all that binary into human-readable input. The server will get queried for this information.
    - Only 2-3 (max) users will be logging in.
    - (2) 145GB HDs in RAID1 for the OS and (2) 600GB HDs in RAID1 for data.
    I mean, I know I may not get the perfect guideline. But at least something that's better than leaving everything on default.
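
    Not a definitive guideline, but a sketch of the usual first steps on a RHEL5 box (package and service names as on a stock install):

        # 1. Patch now, and keep patching on a schedule:
        yum update -y
        # 2. Named admin accounts instead of shared root:
        useradd appadmin && passwd appadmin
        # 3. Lock down SSH in /etc/ssh/sshd_config (PermitRootLogin no,
        #    Protocol 2), then:
        service sshd restart
        # 4. Turn off services you don't need:
        chkconfig --list | grep ':on'
        # 5. Confirm the firewall passes only what the app and MySQL need:
        iptables -L -n

    Beyond that: set up monitoring and tested backups before go-live; the RAID1 sets protect against a dead disk, not against a bad DELETE.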

    Read the article

  • Windows Server 2003 Setup

    - by Barracksbuilder
    I work at a university maintaining the computer science department server. I am looking for a more economical way to streamline the setup of student accounts. CS students are granted a username and password, an IIS virtual directory, an FTP virtual directory, and a MySQL database. The server is running Windows Server 2003 R2 (possibly migrating to 2008 R2). The server is running a domain, though no students physically log into a terminal on it (no computers are part of my domain). Creating the account is a manual process. I did write a PHP script to query the university's AD, copy the information, and write it to my AD. I then have to create, basically, the user's home directory. I tried having AD do it, but since the user never physically logs in, it never creates the directory. Permissions on this folder are set to: user - full, Instructors (group) - full, Users (group) - read, IUSR - read. Inside of the user's folder there is a "Private" folder with permissions: user - full, Instructors (group) - full. Next step is IIS: I create a virtual directory in the default web site pointed to the user's home directory so they have a website. Same goes for an FTP virtual directory in the default FTP configuration, to allow the users to upload files to their website. MySQL: I have to create a user and password, then create a MySQL schema (database) with full access for the user, and full access for the instructors' account so they can access the students' databases. All of this is done manually and takes me a week to do. The closest description is maybe a shared hosting environment. Is there a better way to do this? Scripting-wise, or a better structural setup?
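
    As a sketch of what the scripted version could look like after a 2008 R2 migration (the ActiveDirectory and WebAdministration PowerShell modules ship there; on 2003 the same steps can be driven with dsadd and ADSI scripts). The OU path, D:\Students, and the passwords are placeholders:

        Import-Module ActiveDirectory
        Import-Module WebAdministration

        $user = "jsmith"                      # taken from your AD query script
        $homeDir = "D:\Students\$user"

        # Account, home directory, and ACLs:
        New-ADUser $user -Path "OU=Students,DC=cs,DC=example,DC=edu" `
            -AccountPassword (Read-Host -AsSecureString) -Enabled $true
        New-Item "$homeDir\Private" -ItemType Directory -Force | Out-Null
        icacls $homeDir /grant "${user}:(OI)(CI)F" "Instructors:(OI)(CI)F" "Users:(OI)(CI)R"
        icacls "$homeDir\Private" /inheritance:r /grant "${user}:(OI)(CI)F" "Instructors:(OI)(CI)F"

        # Website (the FTP virtual directory can point at the same path;
        # the exact cmdlets depend on your FTP stack):
        New-WebVirtualDirectory -Site "Default Web Site" -Name $user -PhysicalPath $homeDir

        # MySQL account and schema:
        & mysql -u root -p -e "CREATE DATABASE $user; GRANT ALL ON $user.* TO '$user'@'%' IDENTIFIED BY 'changeme'; GRANT ALL ON $user.* TO 'instructors'@'%';"

    Loop that over the users your PHP script pulls from the university AD and the week of manual work becomes a batch job.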

    Read the article

  • UDP packets are dropped when their size is less than 12 bytes on a certain PC. How do I figure out the reason?

    - by waan
    Hi. I'm stuck on a problem I've never heard of before. I'm making an online game which uses UDP packets for a certain character action. After I developed the UDP module, it seemed to work fine. Most of our team members have no problem, but one man, who is my boss, told me something is wrong with that module. I investigated the problem, and finally I found that on his PC, if the UDP packet size is less than 12 bytes, the packet is never delivered to the other host. The following is some additional information:
    - UDP packets of 1-11 bytes are dropped; packets of 12 bytes and over are OK.
    - O/S: Microsoft Windows Vista Business
    - NIC: Attansic L1 Gigabit Ethernet 10/100/1000Base-T Controller
    - WSASendTo returns success.
    - Loopback UDP packets work fine.
    What do you think of this problem, and what do you think causes it? What should I do next to find the cause? PS: I don't want padding that brings the length of every packet up to 12 bytes.
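
    To pin the behaviour down, it helps to take the game out of the loop. A hedged little sweep test (addresses are placeholders): run the receiver on a second host, the sender on the problem PC, and note which sizes never arrive. Also worth trying: toggle the Attansic L1 driver's checksum-offload settings in Device Manager, since offload handling of very small frames is a plausible suspect.

        import socket
        import sys
        import time

        HOST, PORT = "192.168.0.10", 9999    # placeholder receiver address

        def send():
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            for size in range(1, 21):        # 1..20 byte payloads
                s.sendto(b"x" * size, (HOST, PORT))
                time.sleep(0.05)

        def recv():
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.bind(("", PORT))
            while True:
                data, addr = s.recvfrom(2048)
                print(len(data), "bytes from", addr)   # sizes that never print were dropped

        if __name__ == "__main__":
            recv() if "recv" in sys.argv else send()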

    Read the article

  • Different font SIZES in a Text Editor, based on Script (Alphabet) type (i.e. per Unicode Code-Block)

    - by fred.bear
    Some non-Latin-based scripts (alphabets) have more detail in their glyphs than do the Latin-based-script equivalents, and typically need a larger font to give the same degree of legibility (resolution-wise). Sometimes, both script types need to be present in the same file. Notepad++ allows different font SIZES (and colour, etc.) courtesy of syntax highlighting. This allows me to display larger-fonted non-Latin-based script in a // BIG-FONT comment. Although this has been quite handy for me in some situations, it is quite limited. A Word Processor can handle this scenario, but I'm not interested in that. I want a nice simple(?) plain(?) Text Editor to do it... on a per-script-type basis... e.g. mixing Latin-1 and Devanagari (and Mandarin, and ...). Such a thing may not exist, but Notepad++ has shown that a simple(?) plain(?) Text Editor is capable of it. Does anyone know of such a Text Editor? ...Q. Why not a Word Processor? ...A. Because GCC and Python don't like that format! But UTF-8 is fine.

    Read the article

  • Display on secondary video card (Nvidia 8400 GS): horrible refresh, bogs system

    - by minameismud
    This is my work computer, but it's a small shop. We do business software development. The most hardcore thing we create is some web animations with HTML5 and fancy JavaScript/CSS. The base machine is a Dell Precision T3500 - Xeon W3550 (3.07GHz quad), 6GB RAM, a pair of 500GB hard drives, and Win 7 x64 Enterprise SP1. My primary video card is an ATI FirePro V4800 1GB in a PCIe slot of some speed, driving a pair of 23" monitors at 1920x1080 through DisplayPort-HDMI adapters. The secondary card is an NVidia GeForce 8400 GS in a PCI slot driving a single 17" at 1280x1024 through DVI. On either of the 23" monitors, windows move smoothly, scroll quickly, and are generally very responsive. On the 17", it's slow and chunky, and when I'm trying to scroll a ton of content, Windows will occasionally suggest I drop to the Windows Basic theme. I've updated drivers for both cards, and I've gotten every Windows update relating to video. Specifically:
    ATI FirePro - Provider: Advanced Micro Devices, Inc; Date: 6/22/2014; Version: 13.352.1014.0
    NVidia 8400 GS - Provider: NVIDIA; Date: 7/2/2014; Version: 9.18.13.4052
    Unfortunately, new hardware isn't really an option. Is there anything I can do software-wise to speed up the NVidia-driven monitor?

    Read the article

  • MySQL: Working With 192 Trillion Records... (Yes, 192 Trillion)

    - by Sarah
    Here's the question... Considering 192 trillion records, what should my considerations be? My main concern is speed. Here's the table...
    CREATE TABLE `ref` (
        `id` INTEGER(13) NOT NULL AUTO_INCREMENT,
        `rel_id` INTEGER(13) NOT NULL,
        `p1` INTEGER(13) NOT NULL,
        `p2` INTEGER(13) DEFAULT NULL,
        `p3` INTEGER(13) DEFAULT NULL,
        `s` INTEGER(13) NOT NULL,
        `p4` INTEGER(13) DEFAULT NULL,
        `p5` INTEGER(13) DEFAULT NULL,
        `p6` INTEGER(13) DEFAULT NULL,
        PRIMARY KEY (`id`),
        KEY (`s`),
        KEY (`rel_id`),
        KEY (`p3`),
        KEY (`p4`)
    );
    Here's the queries...
    SELECT id, s FROM ref WHERE rel_id="$rel_id" AND p3="$p3" AND p4="$p4"
    SELECT rel_id, p1, p2, p3, p4, p5, p6 FROM ref WHERE id="$id"
    INSERT INTO ref (rel_id, p1, p2, p3, s, p4, p5, p6) VALUES ("$rel_id", "$p1", "$p2", "$p3", "$s", "$p4", "$p5", "$p6")
    Here's some notes...
    - The SELECTs will be done much more frequently than the INSERTs. However, occasionally I want to add a few hundred records at a time.
    - Load-wise, there will be nothing for hours, then maybe a few thousand queries all at once.
    - Don't think I can normalize any more (need the p values in a combination).
    - The database as a whole is very relational.
    - This will be the largest table by far (the next largest is about 900k).
    UPDATE (08/11/2010): Interestingly, I've been given a second option... Instead of 192 trillion I could store 2.6*10^16 (15 zeros, meaning 26 quadrillion)... But in this second option I would only need to store one BIGINT(18) as the index in a table. That's it - just the one column. So I would just be checking for the existence of a value. Occasionally adding records, never deleting them. So that makes me think there must be a better solution than MySQL for simply storing numbers... Given this second option, should I take it or stick with the first?
    [edit] Just got news of some testing that's been done: 100 million rows with this setup returns the query in 0.0004 seconds. [/edit]
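
    For what it's worth, the second option collapses to something very small in SQL. A sketch (engine and partition count are guesses to benchmark, not recommendations):

        CREATE TABLE ref_exists (
            id BIGINT UNSIGNED NOT NULL,
            PRIMARY KEY (id)
        ) ENGINE=InnoDB
        PARTITION BY KEY (id) PARTITIONS 64;

        -- Existence check: a single index probe on the primary key.
        SELECT EXISTS(SELECT 1 FROM ref_exists WHERE id = 26000000000000000) AS present;

    At quadrillions of rows even a bare key outgrows one instance, though, so this sketches the schema, not the sharding you would still need.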

    Read the article

  • What is the potential for a FUSE mount to destabilize a Linux server?

    - by 200_success
    I'm a sysadmin for a multi-user server, where students in our department have shell accounts. One of our users has requested that we install sshfs on it. I'm debating whether it would be wise to install sshfs as suggested. My main concern is whether a FUSE mount could make our server less reliable. In my experience, bad things can happen to servers when an NFS server suddenly becomes unavailable — the load average shoots up, and you might not be able to unmount it cleanly, to the point where a hard reboot might be necessary. If a FUSE-mounted server suddenly disappears, how hard might it be to clean up the mess? Are there any other likely catastrophes or gotchas I should consider? At least with NFS, only root can mount, and we can choose to mount NFS servers that we consider to be reasonably reliable. Let's assume that our users have no hostile intentions, but might do stupid things accidentally. Also, I'm not really worried about the contents of the filesystems they might mount, since our users already have shell access and can copy anything they want to their home directory.
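
    One scoping point, plus a hedged example: when a FUSE-mounted server disappears, only the processes touching that mountpoint hang, and a lazy unmount normally clears it without a reboot, which is gentler than the hard-NFS failure mode. The options below exist in sshfs/ssh, but test the behaviour you care about before relying on it:

        # Mount with reconnection and dead-peer detection (unknown -o options
        # are passed through to ssh):
        sshfs user@fileserver:/export ~/mnt \
            -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3
        # If the server vanishes anyway, lazily unmount the stuck mountpoint:
        fusermount -u -z ~/mnt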

    Read the article

  • Clustering for Mere Mortals (Pt2)

    - by Geoff N. Hiten
    Planning. I could stop there and let that be the entirety of post #2 in this series.  Planning is the single most important element in building a cluster, and the Laptop Demo Cluster is no exception.  One of the more awkward parts of actually creating a cluster is coordinating information between Windows Clustering and SQL Clustering.  The dialog boxes show up hours apart, but still have to have matching and consistent information. Excel seems to be a good tool for tracking these settings.  My workbook has four pages: Systems, Storage, Network, and Service Accounts.  The systems page looks like this:

    Name         | Role                                        | Software                                   | Location
    East         | Physical Cluster Node 1                     | Windows Server 2008 R2 Enterprise          | Laptop VM
    West         | Physical Cluster Node 2                     | Windows Server 2008 R2 Enterprise          | Laptop VM
    North        | Physical Cluster Node 3 (Future Reserved)   | Windows Server 2008 R2 Enterprise          | Laptop VM
    MicroCluster | Cluster Management Interface                | N/A                                        | Laptop VM
    SQL01        | High-Performance High-Security Instance     | SQL Server 2008 Enterprise Edition x64 SP1 | Laptop VM
    SQL02        | High-Performance Standard-Security Instance | SQL Server 2008 Enterprise Edition x64 SP1 | Laptop VM
    SQL03        | Standard-Performance High-Security Instance | SQL Server 2008 Enterprise Edition x64 SP1 | Laptop VM

    Note that everything that has a computer name is listed here, whether physical or virtual. Storage looks like this:

    Storage Name | Instance     | Purpose         | Volume      | Path                      | Size (GB) | LUN ID | Speed
    Quorum       | MicroCluster | Cluster Quorum  | Quorum      | Q:                        | 2         |        |
    SQL01Anchor  | SQL01        | Instance Anchor | SQL01Anchor | L:                        | 2         |        |
    SQL02Anchor  | SQL02        | Instance Anchor | SQL02Anchor | M:                        | 2         |        |
    SQL01Data1   | SQL01        | SQL Data        | SQL01Data1  | L:\MountPoints\SQL01Data1 | 2         |        |
    SQL02Data1   | SQL02        | SQL Data        | SQL02Data1  | M:\MountPoints\SQL02Data1 |           |        |

    Starting at the left is the name used in the storage array.  It is important to rename resources at each level, whether it is Storage, LUN, Volume, or disk folder.  Otherwise, troubleshooting things gets complex and difficult.  You want to be able to glance at a resource at any level and see where it comes from and what it is connected to. Networking is the same way:

    System | Network   | VLAN     | IP                | Subnet Mask   | Gateway     | DNS1        | DNS2
    East   | Public    | Cluster1 | 10.97.230.x(DHCP) | 255.255.255.0 | 10.97.230.1 | 10.97.230.1 | 10.97.230.1
    East   | Heartbeat | Cluster2 |                   | 255.255.255.0 |             |             |
    West   | Public    | Cluster1 | 10.97.230.x(DHCP) | 255.255.255.0 | 10.97.230.1 | 10.97.230.1 | 10.97.230.1
    West   | Heartbeat | Cluster2 |                   | 255.255.255.0 |             |             |
    North  | Public    | Cluster1 | 10.97.230.x(DHCP) | 255.255.255.0 | 10.97.230.1 | 10.97.230.1 | 10.97.230.1
    North  | Heartbeat | Cluster2 |                   | 255.255.255.0 |             |             |
    SQL01  | Public    | Cluster1 | 10.97.230.x(DHCP) | 255.255.255.0 |             |             |
    SQL02  | Public    | Cluster1 | 10.97.230.x(DHCP) | 255.255.255.0 |             |             |

    One hallmark of a poorly planned and implemented cluster is a bunch of "Local Network Connection #n" entries in the network settings page.  That lets me know that somebody didn't care about the long-term supportability of the cluster.  This can be critically important with Hyper-V Clusters and their high NIC counts.  Final page:

    Instance       | Service Name | Account       | Password   | Domain  | OU
    SQL01          | SQL Server   | SVCSQL01      | Baseline22 | MicroAD | Service Accounts
    SQL01          | SQL Agent    | SVCSQL01      | Baseline22 | MicroAD | Service Accounts
    SQL02          | SQL Server   | SVC_SQL02     | Baseline22 | MicroAD | Service Accounts
    SQL02          | SQL Agent    | SVC_SQL02     | Baseline22 | MicroAD | Service Accounts
    SQL03 (Future) | SQL Server   | SVC_SQL03     | Baseline22 | MicroAD | Service Accounts
    SQL03 (Future) | SQL Agent    | SVC_SQL03     | Baseline22 | MicroAD | Service Accounts
    Installation Account          | administrator |            |         |

    Yes.
I write down the account information.  I secure the file via NTFS, but I don't want to fumble around looking for passwords when it comes time to rebuild a node. Always fill out the workbook COMPLETELY before installing anything.  The whole point is to have everything you need at your fingertips before you begin.  The install experience is so much better and more productive with this information in place.

    Read the article

  • value types in the vm

    - by john.rose
    Or, enduring values for a changing world. Introduction A value type is a data type which, generally speaking, is designed for being passed by value in and out of methods, and stored by value in data structures. The only value types which the Java language directly supports are the eight primitive types. Java indirectly and approximately supports value types, if they are implemented in terms of classes. For example, both Integer and String may be viewed as value types, especially if their usage is restricted to avoid operations appropriate to Object. In this note, we propose a definition of value types in terms of a design pattern for Java classes, accompanied by a set of usage restrictions. We also sketch the relation of such value types to tuple types (which are a JVM-level notion), and point out JVM optimizations that can apply to value types. This note is a thought experiment to extend the JVM’s performance model in support of value types. The demonstration has two phases.  Initially the extension can simply use design patterns, within the current bytecode architecture, and in today’s Java language. But if the performance model is to be realized in practice, it will probably require new JVM bytecode features, changes to the Java language, or both.  We will look at a few possibilities for these new features. An Axiom of Value In the context of the JVM, a value type is a data type equipped with construction, assignment, and equality operations, and a set of typed components, such that, whenever two variables of the value type produce equal corresponding values for their components, the values of the two variables cannot be distinguished by any JVM operation. Here are some corollaries: A value type is immutable, since otherwise a copy could be constructed and the original could be modified in one of its components, allowing the copies to be distinguished. Changing the component of a value type requires construction of a new value. The equals and hashCode operations are strictly component-wise. If a value type is represented by a JVM reference, that reference cannot be successfully synchronized on, and cannot be usefully compared for reference equality. A value type can be viewed in terms of what it doesn’t do. We can say that a value type omits all value-unsafe operations, which could violate the constraints on value types.
    These operations, which are ordinarily allowed for Java object types, are pointer equality comparison (the acmp instruction), synchronization (the monitor instructions), all the wait and notify methods of class Object, and non-trivial finalize methods. The clone method is also value-unsafe, although for value types it could be treated as the identity function. Finally, and most importantly, any side effect on an object (however visible) also counts as a value-unsafe operation. A value type may have methods, but such methods must not change the components of the value. It is reasonable and useful to define methods like toString, equals, and hashCode on value types, and also methods which are specifically valuable to users of the value type. Representations of Value Value types have two natural representations in the JVM, unboxed and boxed. An unboxed value consists of the components, as simple variables. For example, the complex number x=(1+2i), in rectangular coordinate form, may be represented in unboxed form by the following pair of variables: /*Complex x = Complex.valueOf(1.0, 2.0):*/ double x_re = 1.0, x_im = 2.0; These variables might be locals, parameters, or fields. Their association as components of a single value is not defined to the JVM. Here is a sample computation which computes the norm of the difference between two complex numbers: double distance(/*Complex x:*/ double x_re, double x_im,         /*Complex y:*/ double y_re, double y_im) {     /*Complex z = x.minus(y):*/     double z_re = x_re - y_re, z_im = x_im - y_im;     /*return z.abs():*/     return Math.sqrt(z_re*z_re + z_im*z_im); } A boxed representation groups component values under a single object reference. The reference is to a ‘wrapper class’ that carries the component values in its fields. (A primitive type can naturally be equated with a trivial value type with just one component of that type. In that view, the wrapper class Integer can serve as a boxed representation of value type int.) The unboxed representation of complex numbers is practical for many uses, but it fails to cover several major use cases: return values, array elements, and generic APIs. The two components of a complex number cannot be directly returned from a Java function, since Java does not support multiple return values. The same story applies to array elements: Java has no ’array of structs’ feature. (Double-length arrays are a possible workaround for complex numbers, but not for value types with heterogeneous components.) By generic APIs I mean both those which use generic types, like Arrays.asList, and those which have special case support for primitive types, like String.valueOf and PrintStream.println. Those APIs do not support unboxed values, and offer some problems to boxed values. Any ’real’ JVM type should have a story for returns, arrays, and API interoperability. The basic problem here is that value types fall between primitive types and object types. Value types are clearly more complex than primitive types, and object types are slightly too complicated. Objects are a little bit dangerous to use as value carriers, since object references can be compared for pointer equality, and can be synchronized on. Also, as many Java programmers have observed, there is often a performance cost to using wrapper objects, even on modern JVMs. Even so, wrapper classes are a good starting point for talking about value types.
If there were a set of structural rules and restrictions which would prevent value-unsafe operations on value types, wrapper classes would provide a good notation for defining value types. This note attempts to define such rules and restrictions. Let’s Start Coding Now it is time to look at some real code. Here is a definition, written in Java, of a complex number value type. @ValueSafe public final class Complex implements java.io.Serializable {     // immutable component structure:     public final double re, im;     private Complex(double re, double im) {         this.re = re; this.im = im;     }     // interoperability methods:     public String toString() { return "Complex("+re+","+im+")"; }     public List<Double> asList() { return Arrays.asList(re, im); }     public boolean equals(Complex c) {         return re == c.re && im == c.im;     }     public boolean equals(@ValueSafe Object x) {         return x instanceof Complex && equals((Complex) x);     }     public int hashCode() {         return 31*Double.valueOf(re).hashCode()                 + Double.valueOf(im).hashCode();     }     // factory methods:     public static Complex valueOf(double re, double im) {         return new Complex(re, im);     }     public Complex changeRe(double re2) { return valueOf(re2, im); }     public Complex changeIm(double im2) { return valueOf(re, im2); }     public static Complex cast(@ValueSafe Object x) {         return x == null ? ZERO : (Complex) x;     }     // utility methods and constants:     public Complex plus(Complex c)  { return new Complex(re+c.re, im+c.im); }     public Complex minus(Complex c) { return new Complex(re-c.re, im-c.im); }     public double abs() { return Math.sqrt(re*re + im*im); }     public static final Complex PI = valueOf(Math.PI, 0.0);     public static final Complex ZERO = valueOf(0.0, 0.0); } This is not a minimal definition, because it includes some utility methods and other optional parts.  The essential elements are as follows: The class is marked as a value type with an annotation. The class is final, because it does not make sense to create subclasses of value types. The fields of the class are all non-private and final.  (I.e., the type is immutable and structurally transparent.) From the supertype Object, all public non-final methods are overridden. The constructor is private. Beyond these bare essentials, we can observe the following features in this example, which are likely to be typical of all value types: One or more factory methods are responsible for value creation, including a component-wise valueOf method. There are utility methods for complex arithmetic and instance creation, such as plus and changeIm. There are static utility constants, such as PI. The type is serializable, using the default mechanisms. There are methods for converting to and from dynamically typed references, such as asList and cast. The Rules In order to use value types properly, the programmer must avoid value-unsafe operations.  A helpful Java compiler should issue errors (or at least warnings) for code which provably applies value-unsafe operations, and should issue warnings for code which might be correct but does not provably avoid value-unsafe operations.  No such compilers exist today, but to simplify our account here, we will pretend that they do exist. A value-safe type is any class, interface, or type parameter marked with the @ValueSafe annotation, or any subtype of a value-safe type.  If a value-safe class is marked final, it is in fact a value type.  
    All other value-safe classes must be abstract.  The non-static fields of a value class must be non-public and final, and all its constructors must be private. Under the above rules, a standard interface could be helpful to define value types like Complex.  Here is an example: @ValueSafe public interface ValueType extends java.io.Serializable {     // All methods listed here must get redefined.     // Definitions must be value-safe, which means     // they may depend on component values only.     List<? extends Object> asList();     int hashCode();     boolean equals(@ValueSafe Object c);     String toString(); } //@ValueSafe inherited from supertype: public final class Complex implements ValueType { … The main advantage of such a conventional interface is that (unlike an annotation) it is reified in the runtime type system.  It could appear as an element type or parameter bound, for facilities which are designed to work on value types only.  More broadly, it might assist the JVM to perform dynamic enforcement of the rules for value types. Besides types, the annotation @ValueSafe can mark fields, parameters, local variables, and methods.  (This is redundant when the type is also value-safe, but may be useful when the type is Object or another supertype of a value type.)  Working forward from these annotations, an expression E is defined as value-safe if it satisfies one or more of the following: The type of E is a value-safe type. E names a field, parameter, or local variable whose declaration is marked @ValueSafe. E is a call to a method whose declaration is marked @ValueSafe. E is an assignment to a value-safe variable, field reference, or array reference. E is a cast to a value-safe type from a value-safe expression. E is a conditional expression E0 ? E1 : E2, and both E1 and E2 are value-safe. Assignments to value-safe expressions and initializations of value-safe names must take their values from value-safe expressions. A value-safe expression may not be the subject of a value-unsafe operation.  In particular, it cannot be synchronized on, nor can it be compared with the “==” operator, not even with a null or with another value-safe type. In a program where all of these rules are followed, no value-type value will be subject to a value-unsafe operation.  Thus, the prime axiom of value types will be satisfied, that no two value-type values will be distinguishable as long as their component values are equal.
    More Code To illustrate these rules, here are some usage examples for Complex: Complex pi = Complex.valueOf(Math.PI, 0); Complex zero = pi.changeRe(0);  //zero = pi; zero.re = 0; ValueType vtype = pi; @SuppressWarnings("value-unsafe")   Object obj = pi; @ValueSafe Object obj2 = pi; obj2 = new Object();  // ok List<Complex> clist = new ArrayList<Complex>(); clist.add(pi);  // (ok assuming List.add param is @ValueSafe) List<ValueType> vlist = new ArrayList<ValueType>(); vlist.add(pi);  // (ok) List<Object> olist = new ArrayList<Object>(); olist.add(pi);  // warning: "value-unsafe" boolean z = pi.equals(zero); boolean z1 = (pi == zero);  // error: reference comparison on value type boolean z2 = (pi == null);  // error: reference comparison on value type boolean z3 = (pi == obj2);  // error: reference comparison on value type synchronized (pi) { }  // error: synch of value, unpredictable result synchronized (obj2) { }  // unpredictable result Complex qq = pi; qq = null;  // possible NPE; warning: “null-unsafe” qq = (Complex) obj;  // warning: “null-unsafe” qq = Complex.cast(obj);  // OK @SuppressWarnings("null-unsafe")   Complex empty = null;  // possible NPE qq = empty;  // possible NPE (null pollution) The Payoffs It follows from this that either the JVM or the java compiler can replace boxed value-type values with unboxed ones, without affecting normal computations.  Fields and variables of value types can be split into their unboxed components.  Non-static methods on value types can be transformed into static methods which take the components as value parameters. Some common questions arise around this point in any discussion of value types. Why burden the programmer with all these extra rules?  Why not detect programs automagically and perform unboxing transparently?  The answer is that it is easy to break the rules accidentally unless they are agreed to by the programmer and enforced.  Automatic unboxing optimizations are a tantalizing but (so far) unreachable ideal.  In the current state of the art, it is possible to exhibit benchmarks in which automatic unboxing provides the desired effects, but it is not possible to provide a JVM with a performance model that assures the programmer when unboxing will occur.  This is why I’m writing this note, to enlist help from, and provide assurances to, the programmer.  Basically, I’m shooting for a good set of user-supplied “pragmas” to frame the desired optimization. Again, the important thing is that the unboxing must be done reliably, or else programmers will have no reason to work with the extra complexity of the value-safety rules.  There must be a reasonably stable performance model, wherein using a value type has approximately the same performance characteristics as writing the unboxed components as separate Java variables. There are some rough corners to the present scheme.  Since Java fields and array elements are initialized to null, value-type computations which incorporate uninitialized variables can produce null pointer exceptions.  One workaround for this is to require such variables to be null-tested, and the result replaced with a suitable all-zero value of the value type.  That is what the “cast” method does above. Generically typed APIs like List<T> will continue to manipulate boxed values always, at least until we figure out how to do reification of generic type instances.  Use of such APIs will elicit warnings until their type parameters (and/or relevant members) are annotated or typed as value-safe.  
    Retrofitting List<T> is likely to expose flaws in the present scheme, which we will need to engineer around.  Here are a couple of first approaches: public interface java.util.List<@ValueSafe T> extends Collection<T> { … public interface java.util.List<T extends Object|ValueType> extends Collection<T> { … (The second approach would require disjunctive types, in which value-safety is “contagious” from the constituent types.) With more transformations, the return value types of methods can also be unboxed.  This may require significant bytecode-level transformations, and would work best in the presence of a bytecode representation for multiple value groups, which I have proposed elsewhere under the title “Tuples in the VM”. But for starters, the JVM can apply this transformation under the covers, to internally compiled methods.  This would give a way to express multiple return values and structured return values, which is a significant pain-point for Java programmers, especially those who work with low-level structure types favored by modern vector and graphics processors.  The lack of multiple return values has a strong distorting effect on many Java APIs. Even if the JVM fails to unbox a value, there is still potential benefit to the value type.  Clustered computing systems sometimes have copy operations (serialization or something similar) which apply implicitly to command operands.  When copying JVM objects, it is extremely helpful to know when an object’s identity is important or not.  If an object reference is a copied operand, the system may have to create a proxy handle which points back to the original object, so that side effects are visible.  Proxies must be managed carefully, and this can be expensive.  On the other hand, value types are exactly those types which a JVM can “copy and forget” with no downside. Array types are crucial to bulk data interfaces.  (As data sizes and rates increase, bulk data becomes more important than scalar data, so arrays are definitely accompanying us into the future of computing.)  Value types are very helpful for adding structure to bulk data, so a successful value type mechanism will make it easier for us to express richer forms of bulk data. Unboxing arrays (i.e., arrays containing unboxed values) will provide better cache and memory density, and more direct data movement within clustered or heterogeneous computing systems.  They require the deepest transformations, relative to today’s JVM.  There is an impedance mismatch between value-type arrays and Java’s covariant array typing, so compromises will need to be struck with existing Java semantics.  It is probably worth the effort, since arrays of unboxed value types are inherently more memory-efficient than standard Java arrays, which rely on dependent pointer chains. It may be sufficient to extend the “value-safe” concept to array declarations, and allow low-level transformations to change value-safe array declarations from the standard boxed form into an unboxed tuple-based form.  Such value-safe arrays would not be convertible to Object[] arrays.  Certain connection points, such as Arrays.copyOf and System.arraycopy, might need additional input/output combinations, to allow smooth conversion between arrays with boxed and unboxed elements. Alternatively, the correct solution may have to wait until we have enough reification of generic types, and enough operator overloading, to enable an overhaul of Java arrays. 
    Implicit Method Definitions The example of class Complex above may be unattractively complex.  I believe most or all of the elements of the example class are required by the logic of value types. If this is true, a programmer who writes a value type will have to write lots of error-prone boilerplate code.  On the other hand, I think nearly all of the code (except for the domain-specific parts like plus and minus) can be implicitly generated. Java has a rule for implicitly defining a class’s constructor, if it defines no constructors explicitly.  Likewise, there are rules for providing default access modifiers for interface members.  Because of the highly regular structure of value types, it might be reasonable to perform similar implicit transformations on value types.  Here’s an example of a “highly implicit” definition of a complex number type: public class Complex implements ValueType {  // implicitly final     public double re, im;  // implicitly public final     //implicit methods are defined elementwise from the fields:     //  toString, asList, equals(2), hashCode, valueOf, cast     //optionally, explicit methods (plus, abs, etc.) would go here } In other words, with the right defaults, a simple value type definition can be a one-liner.  The observant reader will have noticed the similarities (and suitable differences) between the explicit methods above and the corresponding methods for List<T>. Another way to abbreviate such a class would be to make an annotation the primary trigger of the functionality, and to add the interface(s) implicitly: public @ValueType class Complex { … // implicitly final, implements ValueType (But to me it seems better to communicate the “magic” via an interface, even if it is rooted in an annotation.) Implicitly Defined Value Types So far we have been working with nominal value types, which is to say that the sequence of typed components is associated with a name and additional methods that convey the intention of the programmer.  A simple ordered pair of floating point numbers can be variously interpreted as (to name a few possibilities) a rectangular or polar complex number or Cartesian point.  The name and the methods convey the intended meaning. But what if we need a truly simple ordered pair of floating point numbers, without any further conceptual baggage?  Perhaps we are writing a method (like “divideAndRemainder”) which naturally returns a pair of numbers instead of a single number.  Wrapping the pair of numbers in a nominal type (like “QuotientAndRemainder”) makes as little sense as wrapping a single return value in a nominal type (like “Quotient”).  What we need here are structural value types commonly known as tuples. For the present discussion, let us assign a conventional, JVM-friendly name to tuples, roughly as follows: public class java.lang.tuple.$DD extends java.lang.tuple.Tuple {      double $1, $2; } Here the component names are fixed and all the required methods are defined implicitly.  The supertype is an abstract class which has suitable shared declarations.  The name itself mentions a JVM-style method parameter descriptor, which may be “cracked” to determine the number and types of the component fields. The odd thing about such a tuple type (and structural types in general) is that it must be instantiated lazily, in response to linkage requests from one or more classes that need it.  
The JVM and/or its class loaders must be prepared to spin a tuple type on demand, given a simple name reference, $xyz, where the xyz is cracked into a series of component types.  (Specifics of naming and name mangling need some tasteful engineering.) Tuples also seem to demand, even more than nominal types, some support from the language.  (This is probably because notations for non-nominal types work best as combinations of punctuation and type names, rather than named constructors like Function3 or Tuple2.)  At a minimum, languages with tuples usually (I think) have some sort of simple bracket notation for creating tuples, and a corresponding pattern-matching syntax (or “destructuring bind”) for taking tuples apart, at least when they are parameter lists.  Designing such a syntax is no simple thing, because it ought to play well with nominal value types, and also with pre-existing Java features, such as method parameter lists, implicit conversions, generic types, and reflection.  That is a task for another day. Other Use Cases Besides complex numbers and simple tuples there are many use cases for value types.  Many tuple-like types have natural value-type representations. These include rational numbers, point locations and pixel colors, and various kinds of dates and addresses. Other types have a variable-length ‘tail’ of internal values. The most common example of this is String, which is (mathematically) a sequence of UTF-16 character values. Similarly, bit vectors, multiple-precision numbers, and polynomials are composed of sequences of values. Such types include, in their representation, a reference to a variable-sized data structure (often an array) which (somehow) represents the sequence of values. The value type may also include ’header’ information. Variable-sized values often have a length distribution which favors short lengths. In that case, the design of the value type can make the first few values in the sequence be direct ’header’ fields of the value type. In the common case where the header is enough to represent the whole value, the tail can be a shared null value, or even just a null reference. Note that the tail need not be an immutable object, as long as the header type encapsulates it well enough. This is the case with String, where the tail is a mutable (but never mutated) character array. Field types and their order must be a globally visible part of the API.  The structure of the value type must be transparent enough to have a globally consistent unboxed representation, so that all callers and callees agree about the type and order of components  that appear as parameters, return types, and array elements.  This is a trade-off between efficiency and encapsulation, which is forced on us when we remove an indirection enjoyed by boxed representations.  A JVM-only transformation would not care about such visibility, but a bytecode transformation would need to take care that (say) the components of complex numbers would not get swapped after a redefinition of Complex and a partial recompile.  Perhaps constant pool references to value types need to declare the field order as assumed by each API user. This brings up the delicate status of private fields in a value type.  It must always be possible to load, store, and copy value types as coordinated groups, and the JVM performs those movements by moving individual scalar values between locals and stack.  
    If a component field is not public, what is to prevent hostile code from plucking it out of the tuple using a rogue aload or astore instruction?  Nothing but the verifier, so we may need to give it more smarts, so that it treats value types as inseparable groups of stack slots or locals (something like long or double). My initial thought was to make the fields always public, which would make the security problem moot.  But public is not always the right answer; consider the case of String, where the underlying mutable character array must be encapsulated to prevent security holes.  I believe we can win back both sides of the tradeoff, by training the verifier never to split up the components in an unboxed value.  Just as the verifier encapsulates the two halves of a 64-bit primitive, it can encapsulate the header and body of an unboxed String, so that no code other than that of class String itself can take apart the values. Similar to String, we could build an efficient multi-precision decimal type along these lines: public final class DecimalValue implements ValueType {     protected final long header;     protected final BigInteger digits;     public static DecimalValue valueOf(int value, int scale) {         assert(scale >= 0);         return new DecimalValue(((long)value << 32) + scale, null);     }     public static DecimalValue valueOf(long value, int scale) {         if (value == (int) value)             return valueOf((int)value, scale);         return new DecimalValue(-scale, BigInteger.valueOf(value));     } } Values of this type would be passed between methods as two machine words. Small values (those with a significand which fits into 32 bits) would be represented without any heap data at all, unless the DecimalValue itself were boxed. (Note the tension between encapsulation and unboxing in this case.  It would be better if the header and digits fields were private, but depending on where the unboxing information must “leak”, it is probably safer to make a public revelation of the internal structure.) Note that, although an array of Complex can be faked with a double-length array of double, there is no easy way to fake an array of unboxed DecimalValues.  (Either an array of boxed values or a transposed pair of homogeneous arrays would be reasonable fallbacks, in a current JVM.)  Getting the full benefit of unboxing and arrays will require some new JVM magic. Although the JVM emphasizes portability, system dependent code will benefit from using machine-level types larger than 64 bits.  For example, the back end of a linear algebra package might benefit from value types like Float4 which map to stock vector types.  This is probably only worthwhile if the unboxing arrays can be packed with such values. More Daydreams A more finely-divided design for dynamic enforcement of value safety could feature separate marker interfaces for each invariant.  An empty marker interface Unsynchronizable could cause suitable exceptions for monitor instructions on objects in marked classes.  More radically, an Interchangeable marker interface could cause JVM primitives that are sensitive to object identity to raise exceptions; the strangest result would be that the acmp instruction would have to be specified as raising an exception. 
    @ValueSafe public interface ValueType extends java.io.Serializable,         Unsynchronizable, Interchangeable { … public class Complex implements ValueType {     // inherits Serializable, Unsynchronizable, Interchangeable, @ValueSafe     … It seems possible that Integer and the other wrapper types could be retro-fitted as value-safe types.  This is a major change, since wrapper objects would be unsynchronizable and their references interchangeable.  It is likely that code which violates value-safety for wrapper types exists but is uncommon.  It is less plausible to retro-fit String, since the prominent operation String.intern is often used with value-unsafe code. We should also reconsider the distinction between boxed and unboxed values in code.  The design presented above obscures that distinction.  As another thought experiment, we could imagine making a first class distinction in the type system between boxed and unboxed representations.  Since only primitive types are named with a lower-case initial letter, we could define that the capitalized version of a value type name always refers to the boxed representation, while the initial lower-case variant always refers to unboxed.  For example: complex pi = complex.valueOf(Math.PI, 0); Complex boxPi = pi;  // convert to boxed myList.add(boxPi); complex z = myList.get(0);  // unbox Such a convention could perhaps absorb the current difference between int and Integer, double and Double. It might also allow the programmer to express a helpful distinction among array types. As said above, array types are crucial to bulk data interfaces, but are limited in the JVM.  Extending arrays beyond the present limitations is worth thinking about; for example, the Maxine JVM implementation has a hybrid object/array type.  Something like this which can also accommodate value type components seems worthwhile.  On the other hand, does it make sense for value types to contain short arrays?  And why should random-access arrays be the end of our design process, when bulk data is often sequentially accessed, and it might make sense to have heterogeneous streams of data as the natural “jumbo” data structure?  These considerations must wait for another day and another note. More Work It seems to me that a good sequence for introducing such value types would be as follows: Add the value-safety restrictions to an experimental version of javac. Code some sample applications with value types, including Complex and DecimalValue. Create an experimental JVM which internally unboxes value types but does not require new bytecodes to do so.  Ensure the feasibility of the performance model for the sample applications. Add tuple-like bytecodes (with or without generic type reification) to a major revision of the JVM, and teach the Java compiler to switch in the new bytecodes without code changes. A staggered roll-out like this would decouple language changes from bytecode changes, which is always a convenient thing. A similar investigation should be applied (concurrently) to array types.  In this case, it seems to me that the starting point is in the JVM: Add an experimental unboxing array data structure to a production JVM, perhaps along the lines of Maxine hybrids.  No bytecode or language support is required at first; everything can be done with encapsulated unsafe operations and/or method handles. Create an experimental JVM which internally unboxes value types but does not require new bytecodes to do so.  
    Ensure the feasibility of the performance model for the sample applications. Add tuple-like bytecodes (with or without generic type reification) to a major revision of the JVM, and teach the Java compiler to switch in the new bytecodes without code changes. That’s enough musing for me for now.  Back to work!

    Read the article

  • Lessons Building KeyRef (a .NET developer learning Rails)

    - by Liam McLennan
    Just because I like to build things, and I like to learn, I have been working on a keyboard shortcut reference site. I am using this as an opportunity to improve my ruby and rails skills. The first few days were frustrating. Perhaps the learning curve of all the fun new toys was a bit excessive. Finally tonight things have really started to come together. I still don’t understand the rails built-in testing support but I will get there. Interesting Things I Learned Tonight. RubyMine IDE: Tonight I switched to RubyMine instead of my usual Notepad++. I suspect RubyMine is a powerful tool if you know how to use it – but I don’t. At the moment it gives me errors about some gems not being activated. This is another one of those things that I will get to. I have also noticed that the editor functions significantly differently to the editors I am used to. For example, in visual studio and notepad++ if you place the cursor at the start of a line and press left arrow the cursor is sent to the end of the previous line. In RubyMine nothing happens. Haml: Haml is my favourite view engine. For my .NET work I have been using its non-union Mexican CLR equivalent – nHaml. Multiple CSS Classes: To define a div with more than one css class haml lets you chain them together with a ‘.’, such as:
    .span-6.search_result
      contents of the div go here
    Indent Consistency: I also learnt tonight that both haml and nhaml complain if you are not consistent about indenting. As a consequence of the move from notepad++ to RubyMine my haml views ended up with some tab indenting and some space indenting. For the view to render, all of the indents within a view must be consistent. Sorting Arrays: I guessed that ruby would be able to sort an array alphabetically by a property of the elements, so my first attempt was:
    Application.all.sort {|app| app.name}
    which does not work. You have to supply a comparer (much like .NET). The correct sort is:
    Application.all.sort {|a,b| a.name.downcase <=> b.name.downcase}
    MongoMapper Find by Id: Since document databases are just fancy key-value stores it is essential to be able to easily search for a document by its id. This functionality is so intrinsic that it seems that the MongoMapper author did not bother to document it. To search by id simply pass the id to the find method:
    Application.find('4c19e8facfbfb01794000002')
    Rails And CoffeeScript: I am a big fan of CoffeeScript so integrating it into this application is high on my priorities. My first thought was to copy Dr Nic’s strategy. Unfortunately, I did not get past step 1: Install Node.js. I am doing my development on Windows and node is unix only. I looked around for a solution but eventually had to concede defeat… for now. Quicksearch: The front page of the application I am building displays a list of applications. When the user types in the search box I want to reduce the list of applications to match their search. A quick googlebing turned up quicksearch, a jquery plugin. You simply tell quicksearch where to get its input (the search textbox) and the list of items to filter (the divs containing the names of applications) and it just works. Here is the code:
    $('#app_search').quicksearch('.search_result');
    Summary: I have had a productive evening. The app now displays a list of applications, allows them to be sorted and links through to an application page when an application is selected. Next on the list is to display the set of keyboard shortcuts for an application.

    Read the article

  • The Growing Importance of Network Virtualization

    - by user12608550
    The Growing Importance of Network Virtualization We often focus on server virtualization when we discuss cloud computing, but just as often we neglect to consider some of the critical implications of that technology. The ability to create virtual environments (or VEs [1]) means that we can create, destroy, activate and deactivate, and more importantly, MOVE them around within the cloud infrastructure. This elasticity and mobility has profound implications for how network services are defined, managed, and used to provide cloud services. It's not just servers that benefit from virtualization, it's the network as well. Network virtualization is becoming a hot topic, and not just for discussion but for companies like Oracle and others who have recently acquired net virtualization companies [2,3]. But even before this topic became so prominent, Solaris engineers were working on technologies in Solaris 11 to virtualize network services, known as Project Crossbow [4]. And why is network virtualization so important? Because old assumptions about network devices, topology, and management must be re-examined in light of the self-service, elasticity, and resource sharing requirements of cloud computing infrastructures. Static, hierarchical network designs, and inter-system traffic flows, need to be reconsidered and quite likely re-architected to take advantage of new features like virtual NICs and switches, bandwidth control, load balancing, and traffic isolation. For example, traditional multi-tier Web services (Web server, App server, DB server) that share net traffic over Ethernet wires can now be virtualized and hosted on shared-resource systems that communicate within a larger server at system bus speeds, increasing performance and reducing wired network traffic. And virtualized traffic flows can be monitored and adjusted as needed to optimize network performance for dynamically changing cloud workloads. Additionally, as VEs come and go and move around in the cloud, static network configuration methods cannot easily accommodate the routing and addressing flexibility that VE mobility implies; virtualizing the network itself is a requirement. Oracle Solaris 11 [5] includes key network virtualization technologies needed to implement cloud computing infrastructures. It includes features for the creation and management of virtual NICs and switches, and for the allocation and control of the traffic flows among VEs [6]. Additionally it allows for both sharing and dedication of hardware components to network tasks, such as allocating specific CPUs and vNICs to VEs, and even protocol-specific management of traffic. So, have a look at your current network topology and management practices in view of evolving cloud computing technologies. And don't simply duplicate the physical architecture of servers and connections in a virtualized environment…rethink the traffic flows among VEs and how they can be optimized using Oracle Solaris 11 and other Oracle products and services. [1] I use the term "virtual environment" or VE here instead of the more commonly used "virtual machine" or VM, because not all virtualized operating system environments are full OS kernels under the control of a hypervisor…in other words, not all VEs are VMs. In particular, VEs include Oracle Solaris zones, as well as SPARC VMs (previously called LDoms), and x86-based Solaris and Linux VMs running under hypervisors such as OEL, Xen, KVM, or VMware. 
    [2] Oracle follows VMware into network virtualization space with Xsigo purchase; http://www.mercurynews.com/business/ci_21191001/oracle-follows-vmware-into-network-virtualization-space-xsigo
    [3] Oracle Buys Xsigo; http://www.oracle.com/us/corporate/press/1721421
    [4] Oracle Solaris 11 Networking Virtualization Technology; http://www.oracle.com/technetwork/server-storage/solaris11/technologies/networkvirtualization-312278.html
    [5] Oracle Solaris 11; http://www.oracle.com/us/products/servers-storage/solaris/solaris11/overview/index.html
    [6] For example, the Solaris 11 'dladm' command can be used to limit the bandwidth of a virtual NIC, as follows: dladm create-vnic -l net0 -p maxbw=100M vnic0
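
    Building on the dladm example in [6], here is a slightly fuller sketch of wiring two VEs to a private virtual switch with per-link bandwidth caps. The etherstub and vNIC names are illustrative, not from the article:

        dladm create-etherstub stub0                    # private in-memory "switch" for intra-host traffic
        dladm create-vnic -l stub0 -p maxbw=100M vnic0  # capped vNIC for, say, the web tier
        dladm create-vnic -l stub0 -p maxbw=300M vnic1  # capped vNIC for the DB tier
        dladm show-vnic                                 # verify links, speeds, and MAC assignments

    Because both vNICs hang off the same etherstub, traffic between them never touches a physical wire, which is exactly the intra-host, bus-speed flow the article describes.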

    Read the article

  • windows 2003 server : can't find a primary authoritative dns server for the name srv.domain1.local

    - by phill
    I originally tried to rejoin a computer to the network, which led to a "cannot find domain" error. The username/password box doesn't even come up. Some tests I ran: I can ping the server, but I can't ping the domain name domain1.local. nslookup can't find the domain either; it looks to the ISP's DNS instead of my own to resolve the local machines. So I go to the DC and run netdiag.exe, which gives me this error:

        DNS test . . . . . . . . . . . . . : Failed
        [WARNING] Cannot find a primary authoritative DNS server for the name 'srv.domain1.local.'. [RCODE_SERVER_FAILURE] The name 'srv.domain1.local.' may not be registered in DNS.
        [WARNING] The DNS entries for this DC are not registered correctly on DNS server '68.94.156.1'. Please wait for 30 minutes for DNS server replication.
        [WARNING] The DNS entries for this DC are not registered correctly on DNS server '68.94.157.1'. Please wait for 30 minutes for DNS server replication.
        [FATAL] No DNS servers have the DNS records for this DC registered.
        Redir and Browser test . . . . . . : Passed
        List of NetBt transports currently bound to the Redir: NetBT_Tcpip_{04BB0F6B-06AE-4D60-80C8-2A7A24C1D87B}
        The redir is bound to 1 NetBt transport.
        List of NetBt transports currently bound to the browser: NetBT_Tcpip_{04BB0F6B-06AE-4D60-80C8-2A7A24C1D87B}
        The browser is bound to 1 NetBt transport.

    From previous postings, I've tried adding the domain suffix to the NIC IP properties on both the client machine and the DC, which didn't help. Any ideas? Thanks in advance.
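
    One hedged checklist to try on the DC itself, assuming it should point at its own DNS service rather than the ISP's 68.94.x.x resolvers that netdiag is complaining about (standard Windows commands, but treat the sequence as a sketch):

        ipconfig /all                              (confirm the DC's NIC lists only itself/internal DNS servers)
        ipconfig /flushdns
        ipconfig /registerdns                      (re-register the DC's A and PTR records)
        net stop netlogon & net start netlogon     (re-register the _ldap/_kerberos SRV records)
        nslookup srv.domain1.local 127.0.0.1       (query the local DNS service directly)

    If the last query still fails against 127.0.0.1, the domain1.local forward lookup zone on the DC is the thing to inspect next.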

    Read the article

  • openstack, bridging, netfilter and dnat

    - by Craig Sanders
    In a recent upgrade (from OpenStack Diablo on Ubuntu Lucid to OpenStack Essex on Ubuntu Precise), we found that DNS packets were frequently (almost always) dropped on the bridge interface (br100). For our compute-node hosts, that's a Mellanox MT26428 using the mlx4_en driver module. We've found two workarounds for this:

    1. Use an old Lucid kernel (e.g. 2.6.32-41-generic). This causes other problems, in particular the lack of cgroups and the old version of the kvm and kvm_amd modules (we suspect the kvm module version is the source of a bug we're seeing where occasionally a VM will use 100% CPU). We've been running with this for the last few months, but can't stay here forever.

    2. With the newer Ubuntu Precise kernels (3.2.x), we've found that if we use sysctl to disable netfilter on the bridge (see the sysctl settings below), DNS starts working perfectly again. We thought this was the solution to our problem until we realised that turning off netfilter on the bridge interface will, of course, mean that the DNAT rule to redirect VM requests for the nova-api-metadata server (i.e. redirect packets destined for 169.254.169.254:80 to compute-node's-IP:8775) will be completely bypassed.

    Long story short: with 3.x kernels, we can have reliable networking and a broken metadata service, or we can have broken networking and a metadata service that would work fine if there were any VMs to service. We haven't yet found a way to have both. Anyone seen this problem or anything like it before? Got a fix, or a pointer in the right direction? Our suspicion is that it's specific to the Mellanox driver, but we're not sure of that (we've tried several different versions of the mlx4_en driver, starting with the version built in to the 3.2.x kernels all the way up to the latest 1.5.8.3 driver from the Mellanox web site; the mlx4_en driver in the 3.5.x kernel from Quantal doesn't work at all). BTW, our compute nodes have Supermicro H8DGT motherboards with a built-in Mellanox NIC:

        02:00.0 InfiniBand: Mellanox Technologies MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE] (rev b0)

    We're not using the other two NICs in the system; only the Mellanox and the IPMI card are connected. Bridge netfilter sysctl settings:

        net.bridge.bridge-nf-call-arptables = 0
        net.bridge.bridge-nf-call-iptables = 0
        net.bridge.bridge-nf-call-ip6tables = 0

    Since discovering this bridge-nf sysctl workaround, we've found a few pages on the net recommending exactly this (including OpenStack's latest network troubleshooting page and a launchpad bug report that linked to a blog post with a great description of the problem and the solution)...it's easier to find stuff when you know what to search for :), but we haven't found anything on the DNAT issue that it causes.
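
    For reference, a minimal sketch of the two pieces in play. The sysctl values come from the post above; the file name and the exact form of the metadata DNAT rule are assumptions based on standard nova setups, not from the post:

        # /etc/sysctl.d/99-bridge-nf.conf -- persist the workaround across reboots
        net.bridge.bridge-nf-call-arptables = 0
        net.bridge.bridge-nf-call-iptables = 0
        net.bridge.bridge-nf-call-ip6tables = 0

        # the metadata DNAT that disabling bridge-nf bypasses for bridged traffic
        iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp --dport 80 \
            -j DNAT --to-destination <compute-node-IP>:8775

    With bridge-nf disabled, packets that stay on the bridge never traverse the iptables nat PREROUTING chain, which is exactly why the metadata redirect stops matching.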

    Read the article

  • Old operational master still thinks it is the "one"

    - by Doug
    Hi there, I have a domain with 3 AD servers; for now I'll just call them:

        AD01 (Win 2008 GC, Operations master)
        AD02 (Win 2008 GC)
        AD03 (Win 2003 GC)

    A couple of months ago there were some hardware issues with AD01, so the operations master, PDC and Infrastructure Master roles were moved to AD02. All machines were on while this was happening:

        AD01 (Win 2008 GC)
        AD02 (Win 2008 GC, Operations master)
        AD03 (Win 2003 GC)

    AD01 was then shut down for a month. Upon starting this machine up with replaced hardware (NIC and RAID card) I now have a weird problem: AD01 thinks it is still the operations master in AD on the local box, while AD02 & AD03 think AD02 is the operations master in AD on both boxes. When running DCDIAG on AD01 I get a number of issues (listed below). When running "dcdiag /test:advertising" on AD01:

        Doing primary tests
        Testing server: Default-First-Site-Name\AD01
        Starting test: Advertising
        Warning: DsGetDcName returned information for \\ad02.domain.local, when we were trying to reach AD01.
        SERVER IS NOT RESPONDING or IS NOT CONSIDERED SUITABLE.
        ......................... AD01 failed test Advertising
        Running partition tests on : ForestDnsZones
        Running partition tests on : DomainDnsZones
        Running partition tests on : Schema
        Running partition tests on : Configuration
        Running partition tests on : domain
        Running enterprise tests on : domain.local

    When running "dcdiag" on AD01 I get the following errors (excerpt of the final output):

        Testing server: Default-First-Site-Name\AD01
        Starting test: Advertising
        Warning: DsGetDcName returned information for \\ad02.domain.local, when we were trying to reach AD01.
        SERVER IS NOT RESPONDING or IS NOT CONSIDERED SUITABLE.
        ......................... AD01 failed test Advertising
        Starting test: FrsEvent
        There are warning or error events within the last 24 hours after the SYSVOL has been shared. Failing SYSVOL replication problems may cause Group Policy problems.
        Starting test: NCSecDesc
        Error NT AUTHORITY\ENTERPRISE DOMAIN CONTROLLERS doesn't have Replicating Directory Changes In Filtered Set access rights for the naming context: DC=ForestDnsZones,DC=domain,DC=local
        Error NT AUTHORITY\ENTERPRISE DOMAIN CONTROLLERS doesn't have Replicating Directory Changes In Filtered Set access rights for the naming context: DC=DomainDnsZones,DC=domain,DC=local
        Starting test: Replications
        [Replications Check,Replications Check] Inbound replication is disabled. To correct, run "repadmin /options AD01 -DISABLE_INBOUND_REPL"
        [Replications Check,AD01] Outbound replication is disabled. To correct, run "repadmin /options AD01 -DISABLE_OUTBOUND_REPL"

    So the problem appears to be that when I moved the operations master, AD01 never got the memo, and now that it's started up, all the other AD servers don't think it's the boss anymore when it tries to replicate etc. So I really need to manually update AD01 so that it knows who the operations master, Infrastructure Master and PDC are - but I'm not having any luck. I've been googling for nearly a day and all solutions lead to "the cake is a lie". Your ninja skills will be greatly appreciated.
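
    One hedged way forward, built mostly from the repadmin commands dcdiag itself suggests. Run on AD01 once you're confident AD02 genuinely holds the roles; netdom and repadmin are standard Windows Server tools, but treat the sequence as a sketch rather than a recipe:

        netdom query fsmo                               (confirm which DC the domain believes holds each role)
        repadmin /options AD01 -DISABLE_INBOUND_REPL    (re-enable inbound replication, per the dcdiag output)
        repadmin /options AD01 -DISABLE_OUTBOUND_REPL   (re-enable outbound replication)
        repadmin /syncall AD01 /APed                    (force a full sync so AD01 learns the current role holders)

    The underlying issue the output points at is that replication was disabled on AD01 (likely an automatic safeguard after its long outage), so it never received the role-transfer changes; re-enabling replication and syncing should let it catch up.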

    Read the article

  • Virtual Network Interface and NAT disables localhost access for MySQL and Apache

    - by Interarticle
    I'm running an Ubuntu Server 12.04, and recently I configured it to do NAT for my laptop. Since the server has only one NIC, I followed instructions online to create a virtual network device (eth0:0) that has a LAN IP address, then further configured iptables and UFW to allow internet sharing. However, just a few days ago, I discovered that one of the PHP pages hosted on the server failed for no apparent reason. A little digging revealed that the MySQL server started refusing connections from localhost. The same happened with a page (PhpMyAdmin) that was configured to be accessible only from localhost (in Apache2). The error, as shown by $mysql --protocol=tcp -u root -p, looks like:

        ERROR 1130 (HY000): Host '<host name of eth0>' is not allowed to connect to this MySQL server

    However, the funny thing is, I configured the MySQL server to allow root access from localhost (only). Moreover, the MySQL server listens only on 127.0.0.1:3306, as shown by:

        sudo netstat -npa | head
        Active Internet connections (servers and established)
        Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
        tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN 1029/mysqld

    which means that the connection could only have come from 127.0.0.1. (Note that MySQL is working, because I can still connect to it via unix domain sockets.) In effect, it seems that all TCP connections originating from 127.0.0.1 to 127.0.0.1 appear to any local daemon to come from the eth0 IP address. Indeed, apache2 allowed me to access PhpMyAdmin after I added allow <eth0 IP address>. The following are my network configurations (redacted):

        /etc/hosts:
        127.0.0.1 localhost
        211.x.x.x <host name of eth0> <server name>
        #IPv6 Defaults follows ....

        /etc/network/interfaces:
        auto lo
        iface lo inet loopback
        auto eth0
        iface eth0 inet static
            address 211.x.x.x
            netmask 255.255.255.0
            gateway 211.x.x.x
            dns-nameservers 8.8.8.8
            # dns-* options are implemented by the resolvconf package, if installed
            dns-search xxxxxxx.com
            hwaddress ether xx:xx:xx:xx:xx:xx
        auto eth0:0
        iface eth0:0 inet static
            address 192.168.57.254
            netmask 255.255.254.0
            broadcast 192.168.57.255
            network 192.168.57.0

        /etc/ufw/sysctl.conf (uncommented the following lines):
        net/ipv4/ip_forward=1
        net/ipv6/conf/default/forwarding=1

        /etc/default/ufw:
        DEFAULT_FORWARD_POLICY="ACCEPT"    (changed DROP to ACCEPT)

        /etc/init/internet-sharing.conf (upstart script I wrote), section pre-start script:
        iptables -A FORWARD -o eth0 -i eth0:0 -s 192.168.57.22 -m conntrack --ctstate NEW -j ACCEPT
        iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
        iptables -A POSTROUTING -t nat -j MASQUERADE

    Note again that my problem here is that programs cannot access localhost TCP services from the server itself, and that access is blocked because the services have access control allowing only 127.0.0.1. I have no problem connecting (as in TCP connections) to services via TCP, even if the services listen only on 127.0.0.1. I do NOT want to connect to the services from another computer.
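
    A plausible culprit, offered as a hedged guess rather than a confirmed diagnosis: the final MASQUERADE rule has no output-interface match, so it also rewrites loopback connections, making 127.0.0.1-to-127.0.0.1 traffic appear to come from the eth0 address - which matches the symptom exactly. A sketch of a narrower rule (interface name from the config above; the subnet mask in the post is ambiguous, so adjust the source range to match your LAN):

        # only masquerade traffic actually leaving via eth0 from the shared LAN
        iptables -t nat -A POSTROUTING -o eth0 -s 192.168.57.0/24 -j MASQUERADE

    If that is the cause, loopback connections would again be seen as coming from 127.0.0.1, and the existing MySQL and Apache access rules should work unchanged.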

    Read the article

  • IP failover with 2 nodes on different subnet: cannot ping virtual IP from second node?

    - by quanta
    I'm going to set up a redundant failover Redmine: another instance was installed on the second server without problem, and MySQL (running on the same machine as Redmine) was configured as master-master replication. Because they are in different subnets (192.168.3.x and 192.168.6.x), it seems that VIPArip is the only choice.

    /etc/ha.d/ha.cf on node1:

        logfacility none
        debug 1
        debugfile /var/log/ha-debug
        logfile /var/log/ha-log
        autojoin none
        warntime 3
        deadtime 6
        initdead 60
        udpport 694
        ucast eth1 node2.ip
        keepalive 1
        node node1
        node node2
        crm respawn

    /etc/ha.d/ha.cf on node2:

        logfacility none
        debug 1
        debugfile /var/log/ha-debug
        logfile /var/log/ha-log
        autojoin none
        warntime 3
        deadtime 6
        initdead 60
        udpport 694
        ucast eth0 node1.ip
        keepalive 1
        node node1
        node node2
        crm respawn

    crm configure show:

        node $id="6c27077e-d718-4c82-b307-7dccaa027a72" node1
        node $id="740d0726-e91d-40ed-9dc0-2368214a1f56" node2
        primitive VIPArip ocf:heartbeat:VIPArip \
            params ip="192.168.6.8" nic="lo:0" \
            op start interval="0" timeout="20s" \
            op monitor interval="5s" timeout="20s" depth="0" \
            op stop interval="0" timeout="20s" \
            meta is-managed="true"
        property $id="cib-bootstrap-options" \
            stonith-enabled="false" \
            dc-version="1.0.12-unknown" \
            cluster-infrastructure="Heartbeat" \
            last-lrm-refresh="1338870303"

    crm_mon -1:

        ============
        Last updated: Tue Jun 5 18:36:42 2012
        Stack: Heartbeat
        Current DC: node2 (740d0726-e91d-40ed-9dc0-2368214a1f56) - partition with quorum
        Version: 1.0.12-unknown
        2 Nodes configured, unknown expected votes
        1 Resources configured.
        ============
        Online: [ node1 node2 ]
        VIPArip (ocf::heartbeat:VIPArip): Started node1

    ip addr show lo:

        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
            inet 192.168.6.8/32 scope global lo
            inet6 ::1/128 scope host
               valid_lft forever preferred_lft forever

    I can ping 192.168.6.8 from node1 (192.168.3.x):

        # ping -c 4 192.168.6.8
        PING 192.168.6.8 (192.168.6.8) 56(84) bytes of data.
        64 bytes from 192.168.6.8: icmp_seq=1 ttl=64 time=0.062 ms
        64 bytes from 192.168.6.8: icmp_seq=2 ttl=64 time=0.046 ms
        64 bytes from 192.168.6.8: icmp_seq=3 ttl=64 time=0.059 ms
        64 bytes from 192.168.6.8: icmp_seq=4 ttl=64 time=0.071 ms
        --- 192.168.6.8 ping statistics ---
        4 packets transmitted, 4 received, 0% packet loss, time 3000ms
        rtt min/avg/max/mdev = 0.046/0.059/0.071/0.011 ms

    but I cannot ping the virtual IP from node2 (192.168.6.x) or from outside. Did I miss something?

    PS: you probably want to set IP2UTIL=/sbin/ip in the /usr/lib/ocf/resource.d/heartbeat/VIPArip resource agent script if you get something like this (see http://www.clusterlabs.org/wiki/Debugging_Resource_Failures):

        Jun 5 11:08:10 node1 lrmd: [19832]: info: RA output: (VIPArip:stop:stderr) 2012/06/05_11:08:10 ERROR: Invalid OCF_RESKEY_ip [192.168.6.8]

    Reply to @DukeLion ("Which router receives RIP updates?"): when I start the VIPArip resource, ripd is run with the configuration file below (on node1), /var/run/resource-agents/VIPArip-ripd.conf:

        hostname ripd
        password zebra
        debug rip events
        debug rip packet
        debug rip zebra
        log file /var/log/quagga/quagga.log
        router rip
        !nic_tag
        no passive-interface lo:0
        network lo:0
        distribute-list private out lo:0
        distribute-list private in lo:0
        !metric_tag
        redistribute connected metric 3
        !ip_tag
        access-list private permit 192.168.6.8/32
        access-list private deny any
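
    Regarding the PS above, a minimal sketch of that tweak (the path is the one quoted in the post; placing the variable near the top of the resource agent is an assumption about where it takes effect):

        # /usr/lib/ocf/resource.d/heartbeat/VIPArip
        # force the agent to use the iproute2 binary instead of its default lookup
        IP2UTIL=/sbin/ip

    Separately, since VIPArip works by having ripd advertise a host route to 192.168.6.8/32, it is worth confirming that a router on the 192.168.3.x segment actually listens for RIPv2 updates; if nothing accepts the advertisement, node2 and the outside world never learn a route to the VIP on node1, which would match the symptom described.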

    Read the article

  • Issue with VMWare vSphere and NFS: recurring APD state

    - by Bastian N.
    I am experiencing issues with VMware vSphere 5.1 and NFS storage on 2 different setups, which result in an "All Paths Down" state for the NFS shares. This first happened once or twice a day, but lately it occurs much more frequently, especially when Acronis Backup jobs are running.

        Setup 1 (Production): 2 ESXi 5.1 hosts (Essentials Plus) + OpenFiler with NFS as storage
        Setup 2 (Lab): 1 ESXi 5.1 host + Ubuntu 12.04 LTS with NFS as storage

    Here is an example from the vmkernel.log:

        2013-05-28T08:07:33.479Z cpu0:2054)StorageApdHandler: 248: APD Timer started for ident [987c2dd0-02658e1e]
        2013-05-28T08:07:33.479Z cpu0:2054)StorageApdHandler: 395: Device or filesystem with identifier [987c2dd0-02658e1e] has entered the All Paths Down state.
        2013-05-28T08:07:33.479Z cpu0:2054)StorageApdHandler: 846: APD Start for ident [987c2dd0-02658e1e]!
        2013-05-28T08:07:37.485Z cpu0:2052)NFSLock: 610: Stop accessing fd 0x410007e4cf28 3
        2013-05-28T08:07:37.485Z cpu0:2052)NFSLock: 610: Stop accessing fd 0x410007e4d0e8 3
        2013-05-28T08:07:41.280Z cpu1:2049)StorageApdHandler: 277: APD Timer killed for ident [987c2dd0-02658e1e]
        2013-05-28T08:07:41.280Z cpu1:2049)StorageApdHandler: 402: Device or filesystem with identifier [987c2dd0-02658e1e] has exited the All Paths Down state.
        2013-05-28T08:07:41.281Z cpu1:2049)StorageApdHandler: 902: APD Exit for ident [987c2dd0-02658e1e]!
        2013-05-28T08:07:52.300Z cpu1:3679)NFSLock: 570: Start accessing fd 0x410007e4d0e8 again
        2013-05-28T08:07:52.300Z cpu1:3679)NFSLock: 570: Start accessing fd 0x410007e4cf28 again

    As long as the issue occurred once or twice a day it really wasn't a problem, but now it has an impact on the VMs. The VMs get slow or even hang, resulting in a reset through vCenter in the production environment. I have searched the web extensively and asked in forums, but so far nobody has been able to help me. Based on blog posts and VMware KB articles I tried the following NFS settings:

        Net.TcpipHeapSize = 32
        Net.TcpipHeapMax = 128
        NFS.HeartbeatFrequency = 12
        NFS.HeartbeatMaxFailures = 10
        NFS.HeartbeatTimeout = 5
        NFS.MaxQueueDepth = 64

    Instead of NFS.MaxQueueDepth = 64 I also tried other settings like NFS.MaxQueueDepth = 32 or even NFS.MaxQueueDepth = 1, unfortunately without any luck. It would be great if someone could help me with this issue. It is really annoying. Thanks in advance for all the help.

    [UPDATE] As I explained in the comment below, here is the network setup: on the production setup the NFS traffic is bound to a separate VLAN with ID 20. I am using an HP 1810 24-port switch. The OpenFiler system is connected to the VLAN with 4 Intel GbE NICs with dynamic LACP. The ESXis both have 4 Intel GbE NICs using 2 static LACP trunks containing 2 NICs each; one pair is connected to the regular LAN and the other one to VLAN 20. (Screenshots of the vSwitch, switch configuration, and port configuration were attached to the original post.) On the lab setup it's a single Intel NIC on each side, without VLAN but with a different IP subnet.
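
    For anyone wanting to apply the same advanced options from the command line rather than the vSphere Client, a hedged sketch using esxcli on ESXi 5.x (option names as above; the values are the ones the poster tried, not recommendations, and the TCP/IP heap values generally require a host reboot to take effect):

        esxcli system settings advanced set -o /Net/TcpipHeapSize -i 32
        esxcli system settings advanced set -o /Net/TcpipHeapMax -i 128
        esxcli system settings advanced set -o /NFS/HeartbeatFrequency -i 12
        esxcli system settings advanced set -o /NFS/HeartbeatMaxFailures -i 10
        esxcli system settings advanced set -o /NFS/HeartbeatTimeout -i 5
        esxcli system settings advanced set -o /NFS/MaxQueueDepth -i 64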

    Read the article

  • Issue with Netgear GS108T Managed Switch and Jumbo Frames

    - by Richie086
    I recently purchased a Netgear GS108T managed switch and I am trying to configure jumbo frames between my NAS (Thecus N4100Pro), my PC and the switch. I should mention that I was able to use jumbo frames between my PC and NAS before I purchased the switch, without issue. My desktop has a wired gigabit NIC (Intel 82579V Gigabit) that can be configured for jumbo frames (see pic) of either 9014 bytes or 4088 bytes; I chose 9014 bytes for the jumbo frame size. My NAS supports jumbo frames as well, and is configured to use 9014 as the frame size. In my Netgear managed switch I set the frame size to 9014 on the ports I am using for my PC and NAS (see image). As soon as I hit apply in the web interface, I lose my connection to the SMB shares on my NAS and I can no longer connect to the web admin interface for my NAS. The really strange thing is I can ping my NAS via the ping command, but when I try to connect to the web interface on port 80 or port 443 the page never loads. I did a scan from my PC to my NAS using nmap and I can see the following ports open:

        PORT STATE SERVICE
        22/tcp open ssh
        80/tcp open http
        111/tcp open rpcbind
        139/tcp open netbios-ssn
        443/tcp open https
        445/tcp open microsoft-ds
        631/tcp open ipp
        2000/tcp open cisco-sccp
        2049/tcp open nfs
        3260/tcp open iscsi
        49152/tcp open unknown
        MAC Address: 00:14:FD:15:00:44 (Thecus Technology)
        Read data files from: C:\Program Files (x86)\Nmap
        Nmap done: 1 IP address (1 host up) scanned in 211.97 seconds
        Raw packets sent: 1 (28B) | Rcvd: 1 (28B)

    Anyone have any idea what is going on here? Why is nmap able to detect that the ports are open and listening for HTTP, HTTPS and file sharing, but I can't connect when all devices have jumbo frames enabled? Stranger still - I did a packet capture using Wireshark while the nmap scan was running, filtered so I only saw conversations between my PC and my NAS. Here are the packet details from my scan (see capture): only 4 packets over 5k bytes? What is going on here? Do I not need to configure jumbo frame sizes on the switch? I have an internet connection from my PC to the switch to my router - I just cannot connect to my NAS. I just checked on my iPhone and I am able to open my NAS web admin interface without issue! WTF!!!!!! Let me know if you need more details...
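
    A quick, hedged way to test whether full-size jumbo frames actually survive the path end to end is a don't-fragment ping from the Windows PC. 8972 bytes of payload plus 28 bytes of ICMP/IP headers fills a 9000-byte IP MTU (the 9014 NIC setting includes the 14-byte Ethernet header); the NAS IP is a placeholder:

        ping -f -l 8972 <NAS-IP>      (should succeed if jumbo frames pass cleanly end to end)
        ping -f -l 1472 <NAS-IP>      (baseline: fills a standard 1500-byte MTU)

    If the large ping fails while the small one works, some hop - often the switch port MTU - is still capped at 1500, which would explain why small packets (pings, nmap SYN probes) get through while full-size HTTP and SMB segments are silently dropped.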

    Read the article

  • Problem with authentication of users via IE when using "host header value"

    - by Richard
    Hi, I'm trying to set up multiple web sites in IIS 6. I've got a working virtual site residing under the default web site, but if I create a new web site in IIS, assign it a host header value, point it at the very same file structure as the previously mentioned site, and finally assign Windows Integrated security only to the site - I still cannot log in to the new site using MSIE 6 or 8, though FF 3.5 works fine. In the web log I get these entries if I access the localhost site:

        2009-11-19 09:15:59 W3SVC1 127.0.0.1 GET /client/ - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.2;+Trident/4.0;+.NET+CLR+2.0.50727) 401 2 2148074254
        2009-11-19 09:15:59 W3SVC1 127.0.0.1 GET /client/ - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.2;+Trident/4.0;+.NET+CLR+2.0.50727) 401 1 0
        2009-11-19 09:15:59 W3SVC1 127.0.0.1 GET /client/Default.asp - 80 xxx\Administrator 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.2;+Trident/4.0;+.NET+CLR+2.0.50727) 200 0 0

    If I instead access via the host header value site, I get prompted to log in but the login fails, and I also get an error "401 1 2148074252" which is not present when it succeeds. Can this be the issue?

    Pre-login screen:

        2009-11-19 09:15:59 W3SVC1793297778 127.0.0.1 GET / - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+5.2;+Trident/4.0;+.NET+CLR+2.0.50727) 401 2 2148074254
        2009-11-19 09:15:59 W3SVC1793297778 127.0.0.1 GET / - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+5.2;+Trident/4.0;+.NET+CLR+2.0.50727) 401 1 2148074252
        2009-11-19 09:15:59 W3SVC1793297778 127.0.0.1 GET / - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+5.2;+Trident/4.0;+.NET+CLR+2.0.50727) 401 1 0

    Post-login screen (note that Windows credentials have not been submitted):

        2009-11-19 09:15:59 W3SVC1793297778 127.0.0.1 GET / - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+5.2;+Trident/4.0;+.NET+CLR+2.0.50727) 401 1 0
        2009-11-19 09:15:59 W3SVC1793297778 127.0.0.1 GET / - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+5.2;+Trident/4.0;+.NET+CLR+2.0.50727) 401 1 2148074252

    Firefox will try anonymous access first and then prompt for login; after submitting Windows credentials it all works fine. For what reason is IE so stubbornly refusing to submit credentials to the "host header value" site? The site is in the Local intranet zone and login is ticked for that zone. No teamed NICs, no FW, no nothing - I'm clueless :( /Richard
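
    One hedged hypothesis that fits the "401 1 2148074252" entry (2148074252 is 0x8009030C, SEC_E_LOGON_DENIED) when browsing from the server itself by a name that isn't the machine name: the Windows loopback check described in Microsoft KB 896861, which rejects NTLM logons to local sites accessed via custom host names. A sketch of the usual workaround - the blanket registry switch is a test-environment shortcut, and the per-name BackConnectionHostNames value from the same KB is the safer production choice:

        reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v DisableLoopbackCheck /t REG_DWORD /d 1
        (then reboot, or at minimum restart the IIS services)

    If the tests were run from a separate client rather than the server itself, this doesn't apply, and IE's intranet zone detection for the custom name (IE only auto-submits credentials to names it classifies as intranet) is the next thing to check.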

    Read the article

  • Windows 7 laptop constantly dropping internet connection

    - by Corbin Holbert
    I have scoured the web for an answer and failed to find one. I am using a Toshiba Satellite laptop running Windows 7 64-Bit. I have the computer connected via Wifi. Now, I am no beginner with the Internet, or anything related to computers, as I have grown up teaching everyone around me how to use computers, and went to college for IT. Everything on my network works flawlessly at all times, except for this evil laptop. The worst part is that I fixed this once before a few years back and recently had to replace the hard drive and re-install the OS, but cannot for the life of me remember what I did to make this problem go away. I am in my browser, connected to the Internet. I click a link. Suddenly no internet access. All I do is click down on the WiFi connection in the task bar, disconnect and reconnect immediately. Internet is back the moment I hit "connect." I have read many people had the same issues as I am having, but they all had triggers or other network issues. I have no trigger (this happens literally five to six times per minute no matter what I am doing) and I have no problems with my router, modem, or any other devices or computers on said network. As I am a web designer, and like to test my work live at every turn- this is going to result in this laptop being in pieces if I can't get it fixed soon. If more info is needed, let me know and I will provide. Thanks for any help offered! EDIT: Network Card: Realtek RTL8188CE Wireless LAN 802.11n PCI-E NIC Network state reads as "No Internet Access" when the problem first occurs, then magically I have Internet access for about ten seconds once I disconnect and reconnect. I have turned off IPV6, I have turned off power saving options for the network adapter, no viruses. Any new ideas? Also, I had to disconnect and reconnect four times just to get to this edit screen- and will likely have to do the same just to post it- it's that bad.
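
    Not a fix, but two hedged data points worth collecting from an elevated command prompt the next time the connection drops (both are standard Windows 7 tools; the event log name is the usual WLAN diagnostics channel):

        netsh wlan show interfaces     (state, signal strength and receive rate at the moment of the drop)
        wevtutil qe Microsoft-Windows-WLAN-AutoConfig/Operational /c:20 /rd:true /f:text
                                       (the last 20 WLAN events, including disconnect reason codes)

    The reason codes in that log usually distinguish driver-initiated resets from AP-side deauthentication, which narrows down whether the Realtek driver or the access point is at fault.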

    Read the article

  • Cloudify: bootstrap-localcloud: operation failed?

    - by quanta
    OS: Gentoo, CentOS. Version: 2.1.0. Following the quick start guide, I got the error below when running bootstrap-localcloud:

        cloudify@default> bootstrap-localcloud
        STARTING CLOUDIFY MANAGEMENT
        2012-05-30 14:55:50,396 WARNING [org.cloudifysource.shell.commands.AbstractGSCommand] - ;
        Caused by: org.cloudifysource.shell.commands.CLIException: Error while starting agent.
        Please make sure that another agent is not already running.
        Operation failed.

    What port is Cloudify using to check that the agent is running? PS: it's working fine when running on Windows.

    UPDATE: Wed May 30 22:37:30 ICT 2012

    Reply to @tamirkorem and @Itai Frenkel: I'm pretty sure, because this is the first time I ran that command on the 2 servers. More clearly, here is the output:

        cloudify@default> teardown-localcloud
        Teardown will uninstall all of the deployed services. Do you want to continue [y/n]?
        2012-05-30 22:43:33,145 WARNING [org.cloudifysource.shell.commands.AbstractGSCommand] - Teardown failed. Failed to fetch the currently deployed applications list. For force teardown use the -force flag.
        Operation failed.

        cloudify@default> teardown-localcloud -force
        Teardown will uninstall all of the deployed services. Do you want to continue [y/n]?
        Failed to fetch the currently deployed applications list. Continuing teardown-localcloud.
        .2012-05-30 22:46:39,040 WARNING [org.cloudifysource.shell.commands.AbstractGSCommand] - Teardown aborted, an agent was not found on the local machine.
        Operation failed.

    and this is the detailed result:

        cloudify@default> bootstrap-localcloud --verbose
        NIC Address=127.0.0.1
        Lookup Locators=127.0.0.1:4172
        Lookup Groups=localcloud
        Starting agent and management processes:
        gs-agent.sh gsa.global.lus 0 gsa.lus 0 gsa.gsc 0 gsa.global.gsm 0 gsa.gsm_lus 1 gsa.global.esm 0 gsa.esm 1 >/dev/null 2>&1
        STARTING CLOUDIFY MANAGEMENT
        2012-05-30 22:36:12,870 WARNING [org.cloudifysource.shell.commands.AbstractGSCommand] - ; Caused by: org.cloudifysource.shell.commands.CLIException: Error while starting agent. Please make sure that another agent is not already running.
        Command executed: /usr/local/src/gigaspaces-cloudify-2.1.0-ga/bin/gs-agent.sh gsa.global.lus 0 gsa.lus 0 gsa.gsc 0 gsa.global.gsm 0 gsa.gsm_lus 1 gsa.global.esm 0 gsa.esm 1 >/dev/null 2>&1

    Reply to @Eliran Malka: there is no such process listening on port 4172:

        # netstat --protocol=inet -nlp
        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
        tcp 0 0 127.0.0.1:9050 0.0.0.0:* LISTEN 2363/tor
        tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN 2331/mysqld
        tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 2293/cupsd
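
    Based on the verbose output above (Lookup Locators=127.0.0.1:4172), 4172 is the lookup service port to watch. A hedged pair of checks for a leftover agent from an earlier failed run - a lingering process, rather than a bound port, could also trigger the "another agent" error:

        ps aux | grep -i [g]s-agent                  # any orphaned GigaSpaces agent process?
        netstat --protocol=inet -nlp | grep 4172     # is anything holding the lookup locator port?

    If an orphaned gs-agent process turns up, killing it (or rebooting) before re-running bootstrap-localcloud would be the first thing to try.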

    Read the article

  • LVS Configuration issue (Using piranha Tool)

    - by PravinG
    I have configured LVS on CentOS using the Piranha tool. I am using the VIP of the internal network as the gateway for the real servers. We have two NICs: one with the external IP and the other for the internal network, which is 192.168.3.0/24. But I am not able to connect from the client; it shows a connection refused error. Please suggest iptables rules for the private and public networks to communicate - maybe this is what I am missing. The iptables rules we have added are:

        iptables -t nat -A POSTROUTING -p tcp -s 192.168.3.0/24 --sport 5000 -j MASQUERADE

    This is my ifconfig:

        eth0 Link encap:Ethernet HWaddr 00:00:E8:F6:74:DA
            inet addr:122.166.233.133 Bcast:122.166.233.255 Mask:255.255.255.0
            inet6 addr: fe80::200:e8ff:fef6:74da/64 Scope:Link
            UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
            RX packets:94433 errors:0 dropped:0 overruns:0 frame:0
            TX packets:130966 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:9469972 (9.0 MiB) TX bytes:19929308 (19.0 MiB)
            Interrupt:16 Base address:0x2000

        eth0:1 Link encap:Ethernet HWaddr 00:00:E8:F6:74:DA
            inet addr:122.166.233.136 Bcast:122.166.233.255 Mask:255.255.255.0
            UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
            Interrupt:16 Base address:0x2000

        eth1 Link encap:Ethernet HWaddr 00:E0:20:14:F9:2D
            inet addr:192.168.3.1 Bcast:192.168.3.255 Mask:255.255.255.0
            inet6 addr: fe80::2e0:20ff:fe14:f92d/64 Scope:Link
            UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
            RX packets:123718 errors:0 dropped:0 overruns:0 frame:0
            TX packets:148856 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:18738556 (17.8 MiB) TX bytes:11697153 (11.1 MiB)
            Interrupt:17 Memory:60000400-600004ff

        eth1:1 Link encap:Ethernet HWaddr 00:E0:20:14:F9:2D
            inet addr:192.168.3.10 Bcast:192.168.3.255 Mask:255.255.255.0
            UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
            Interrupt:17 Memory:60000400-600004ff

        eth2 Link encap:Ethernet HWaddr 00:16:76:6E:D1:D2
            UP BROADCAST MULTICAST MTU:1500 Metric:1
            RX packets:0 errors:0 dropped:0 overruns:0 frame:0
            TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
            Interrupt:21 Base address:0xa500

    And the ipvsadm -ln command:

        [root@abts-kk-static-133 ~]# ipvsadm -ln
        IP Virtual Server version 1.2.1 (size=4096)
        Prot LocalAddress:Port Scheduler Flags
          -> RemoteAddress:Port Forward Weight ActiveConn InActConn
        TCP 122.166.233.136:5000 wlc
        TCP 122.166.233.136:5004 wlc

    LVS server routing table:

        Kernel IP routing table
        Destination Gateway Genmask Flags Metric Ref Use Iface
        192.168.3.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
        122.166.233.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
        192.168.122.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0
        169.254.0.0 0.0.0.0 255.255.0.0 U 1003 0 0 eth0
        169.254.0.0 0.0.0.0 255.255.0.0 U 1004 0 0 eth1
        0.0.0.0 122.166.233.1 0.0.0.0 UG 0 0 0 eth0

        real 1
        real 2

    We have configured various ports from 5000 to 5008. Do we need iptables rules for all of these ports? Please suggest how I should solve this issue.
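
    Two hedged observations, not a confirmed fix. First, the ipvsadm -ln output above lists the two virtual services but no "-> RemoteAddress" lines beneath them; in LVS-NAT, a virtual service with no attached real servers will by itself produce connection refused, so verifying that Piranha actually registered the real servers (and that nanny health checks pass) comes before any iptables work. Second, if masquerading is needed for the whole 5000-5008 range, a single port-range rule avoids nine separate ones (standard iptables port-range syntax; adjust interface names to your setup):

        iptables -t nat -A POSTROUTING -o eth0 -p tcp -s 192.168.3.0/24 --sport 5000:5008 -j MASQUERADE

    Also confirm that ip_forward=1 on the director and that the real servers' default gateway really is the internal VIP (192.168.3.10 in the ifconfig above), as Piranha's NAT mode requires return traffic to pass back through the director.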

    Read the article

< Previous Page | 62 63 64 65 66 67 68 69 70 71 72 73  | Next Page >