Search Results

Search found 5516 results on 221 pages for 'scope identity'.


  • What is the IPv6 equivalent to IPv4 RFC1918 addresses?

    - by Kumba
    Having a hard time wrapping my head around IPv6 here. A lot of the lingo seems targeted at enterprise-level IPv6 deployments, discussing link-local, site-local, global unicast, scopes, etc. There is not a lot of solid information on really small networks, like home networks. I want to check my thinking and make sure I am getting the correct translations from IPv4-speak to IPv6-speak.

    The first question is: what's the equivalent of RFC 1918 for IPv6? Initial searches suggested there was no equivalent. Then I stumbled upon Unique Local Addresses (RFC 4193), which states that all ULAs should be assigned the fc00::/7 prefix, followed by a 40-bit random number in the routing prefix. This random number is to "prevent collisions when two IPv6 networks are interconnected" -- again, another reference to an enterprise-level function. If I have a small local LAN at home, numbered using 192.168.4.0/24, what's my equivalent in IPv6's ULA scope? Assuming I will never, ever tie that IPv6 address into the real internet (a router will NAT & firewall it), can I ignore the RFC to an extent and go with fc00::4:0/120? It also seems that any address in fc00::/7 is to be globally routable. Does this mean I'll need extra protections so my router does not automatically start advertising these private IPv6 addresses to the world?

    Second question: what's this link-local thing? Reading suggests a default-assigned address in the fe80::/10 range, with the last 64 bits of the address derived from the interface's MAC address. It seems to be required, too, but I'm annoyed by the constant discussion of it in relation to enterprise networks.

    Third question: what is a scope ID for? It seems to be yet another term tossed around in relation to enterprise networks, especially when interconnecting them, but there is almost no explanation at the smaller home-network level. Can I use a scope ID and CIDR notation together, i.e. fc00::4:0/120%6, or are scope IDs only supposed to be applied to a single /128 IPv6 address?
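
    As an aside on the ULA mechanics: RFC 4193 reserves the fd00::/8 half of fc00::/7 for locally assigned prefixes, built from a 40-bit random Global ID rather than a hand-picked value like fc00::4:0. A hedged sketch of generating one the RFC's way (any source of five random bytes will do):

        # draw 40 random bits and format them as a ULA /48
        printf 'fd%s:%s%s:%s%s::/48\n' $(od -An -N5 -tx1 /dev/urandom)
        # subnet 4 of the resulting prefix, e.g. fdxx:xxxx:xxxx:4::/64,
        # would be the rough ULA analogue of 192.168.4.0/24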


  • Can't connect to EC2 instance: Permission denied (publickey)

    - by Assad Ullah
    I got this when I tried to connect to my new instance (Ubuntu 12.01 EC2) with my newly generated key:

        sh-3.2# ssh ec2-user@**** -v ****.pem
        OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011
        debug1: Reading configuration data /etc/ssh_config
        debug1: Applying options for *
        debug1: Connecting to **** [****] port 22.
        debug1: Connection established.
        debug1: permanently_set_uid: 0/0
        debug1: identity file /var/root/.ssh/id_rsa type -1
        debug1: identity file /var/root/.ssh/id_rsa-cert type -1
        debug1: identity file /var/root/.ssh/id_dsa type -1
        debug1: identity file /var/root/.ssh/id_dsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1
        debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.6
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host '****' is known and matches the RSA host key.
        debug1: Found key in /var/root/.ssh/known_hosts:4
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Trying private key: /var/root/.ssh/id_rsa
        debug1: Trying private key: /var/root/.ssh/id_dsa
        debug1: No more authentication methods to try.
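
    Two details stand out in that transcript: the key file is passed as a bare argument instead of via -i, so ssh never offers it (note the debug lines only trying the default /var/root/.ssh identities), and Ubuntu AMIs normally use the ubuntu login rather than ec2-user. A hedged sketch of the usual invocation, with placeholder names:

        chmod 400 mykey.pem        # ssh ignores keys that are world/group readable
        ssh -i mykey.pem -v ubuntu@ec2-xx-xx-xx-xx.compute-1.amazonaws.com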


  • ssh_exchange_identification: Connection closed by remote host

    - by rick
    Firstly, I know that this question has been asked a million times, and I have read everything I can find and still cannot fix the problem. I am encountering this issue when ssh'ing in from my Mac to my Ubuntu server on a fresh install of Ubuntu (I reinstalled because of this issue). I have SSH mapped to port 7070 because my ISP is blocking 22. On the client:

        bash: ssh -p 7070 -v [email protected]
        debug1: Reading configuration data /etc/ssh_config
        debug1: Connecting to address.org port 7070.
        debug1: Connection established.
        debug1: identity file /home/me/.ssh/identity type -1
        debug1: identity file /home/me/.ssh/id_rsa type 1
        debug1: identity file /home/me/.ssh/id_dsa type -1
        ssh_exchange_identification: Connection closed by remote host

    Here's what I have done to try to resolve the issue:

    - Made sure my MaxStartups setting is OK: grep MaxStartups /etc/ssh/sshd_config gives #MaxStartups 10:30:60
    - Made sure hosts.deny is clear of denials.
    - Made sure hosts.allow has my client IP.
    - Cleared out known_hosts on the client.
    - Changed ownership of /var/run to root.
    - Made sure /var/run/sshd exists.
    - Made sure /var/empty exists.
    - Reinstalled openssh-server.
    - Reinstalled Ubuntu.

    When I run telnet localhost, I get this:

        telnet localhost
        Trying ::1...
        Trying 127.0.0.1...
        telnet: Unable to connect to remote host: Connection refused

    When I run /usr/sbin/sshd -t:

        Could not load host key: /etc/ssh/ssh_host_rsa_key
        Could not load host key: /etc/ssh/ssh_host_dsa_key

    When I regenerate the keys with

        ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key
        ssh-keygen -t dsa -f /etc/ssh/ssh_host_dsa_key

    I get the same error. I am pretty sure this is the issue. Can anyone help?
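
    Since sshd -t cannot load the host keys even after they are regenerated, permissions and ownership on the key files are the usual suspects. A hedged checklist, assuming the stock Ubuntu paths:

        sudo ls -l /etc/ssh/ssh_host_*                  # private keys should be root-owned, mode 600
        sudo rm -f /etc/ssh/ssh_host_rsa_key* /etc/ssh/ssh_host_dsa_key*
        sudo ssh-keygen -t rsa -N '' -f /etc/ssh/ssh_host_rsa_key
        sudo ssh-keygen -t dsa -N '' -f /etc/ssh/ssh_host_dsa_key
        sudo /usr/sbin/sshd -t -f /etc/ssh/sshd_config  # retest against the explicit config file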


  • Assigning IPs to OpenVZ containers

    - by Vojtech
    I have recently bought myself a physical server and I am trying to create containers which have their own IPs. The physical machine has both IPv4 and IPv6 addresses. I also have another IPv4 address and some other IPv6 addresses available which I would like to assign to the container. I managed to assign the addresses as follows:

        # vzctl set 101 --ipadd 144.76.195.252 --save

    I can ping the container from the physical machine, but not from the outside world. This also applies to the IPv6 address I assigned. This is the ifconfig output of the physical machine:

        eth0      Link encap:Ethernet  HWaddr d4:3d:7e:ec:e0:04
                  inet addr:144.76.195.232  Bcast:144.76.195.255  Mask:255.255.255.224
                  inet6 addr: 2a01:4f8:200:71e7::2/64 Scope:Global
                  inet6 addr: fe80::d63d:7eff:feec:e004/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:217895 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:16779 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:322481419 (307.5 MiB)  TX bytes:1672628 (1.5 MiB)

        venet0    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                  inet6 addr: fe80::1/128 Scope:Link
                  UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1
                  RX packets:12 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:12 errors:0 dropped:3 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:1108 (1.0 KiB)  TX bytes:1108 (1.0 KiB)

    This is the ifconfig output of the OpenVZ container:

        # ifconfig
        venet0    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                  inet addr:127.0.0.2  P-t-P:127.0.0.2  Bcast:0.0.0.0  Mask:255.255.255.255
                  inet6 addr: 2a01:4f8:200:71e7::3/64 Scope:Global
                  UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1
                  RX packets:12 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:1108 (1.0 KiB)  TX bytes:1108 (1.0 KiB)

        venet0:0  Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                  inet addr:144.76.195.252  P-t-P:144.76.195.252  Bcast:144.76.195.252  Mask:255.255.255.255
                  UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1

    What do I need to do to have the container accessible from the outside world? What could I have forgotten? Thanks.
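
    With venet, the host answers ARP/NDP on behalf of the container, so "pingable from the host but not from outside" is often proxy ARP being off on the public interface, or the provider simply not routing the extra IP to the host's MAC (some providers require additional IPs to be ordered as routed). A hedged sketch of the host-side checks, assuming eth0 is the public interface:

        # let the host answer ARP for container addresses on eth0
        echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp
        echo 1 > /proc/sys/net/ipv4/ip_forward
        # the IPv6 equivalent is a neighbour proxy entry for the container address
        echo 1 > /proc/sys/net/ipv6/conf/eth0/proxy_ndp
        ip -6 neigh add proxy 2a01:4f8:200:71e7::3 dev eth0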


  • Password-less login into localhost

    - by Brad
    I am trying to setup password-less login into my localhost because it's required for a tutorial. I went through the normal steps of generating an rsa key and appending the public key to authorized_keys but I am still prompted for a password. I've also enabled RSAAuthentication and PubKeyAuthentication in /etc/ssh_config. Following other suggestions I've seen, I tried:

        chmod go-w ~/
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys

    But the problem persists. Here is the output from ssh -v localhost:

        (tutorial)bnels21-2:tutorial bnels21$ ssh -v localhost
        OpenSSH_5.9p1, OpenSSL 0.9.8r 8 Feb 2011
        debug1: Reading configuration data /etc/ssh_config
        debug1: /etc/ssh_config line 20: Applying options for *
        debug1: Connecting to localhost [::1] port 22.
        debug1: Connection established.
        debug1: identity file /Users/bnels21/.ssh/id_rsa type 1
        debug1: identity file /Users/bnels21/.ssh/id_rsa-cert type -1
        debug1: identity file /Users/bnels21/.ssh/id_dsa type -1
        debug1: identity file /Users/bnels21/.ssh/id_dsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9
        debug1: match: OpenSSH_5.9 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.9
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Server host key: RSA 1c:31:0e:56:93:45:dc:f0:77:6c:bd:90:27:3b:c6:43
        debug1: Host 'localhost' is known and matches the RSA host key.
        debug1: Found key in /Users/bnels21/.ssh/known_hosts:11
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: Next authentication method: publickey
        debug1: Offering RSA public key: /Users/bnels21/.ssh/id_rsa
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: Offering RSA public key: id_rsa3
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: Trying private key: /Users/bnels21/.ssh/id_dsa
        debug1: Next authentication method: keyboard-interactive
        Password:

    Any suggestions? I'm running OSX 10.8.
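
    One hint in the transcript: the server offers only publickey,keyboard-interactive and rejects the offered id_rsa, which points at the server side. On OS X the daemon reads /etc/sshd_config; /etc/ssh_config is the client file, so changes there do not affect incoming logins. A hedged checklist, assuming the stock 10.8 layout:

        sudo grep -E 'PubkeyAuthentication|AuthorizedKeysFile' /etc/sshd_config
        # expect: PubkeyAuthentication yes
        #         AuthorizedKeysFile   .ssh/authorized_keys
        ls -ld ~ ~/.ssh ~/.ssh/authorized_keys   # home must not be group/world writable
        ssh-add -l                               # confirm which keys the agent is offering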


  • Windows 2008 R2 DHCP Overlapping Scopes

    - by Buska
    We are trying to troubleshoot a scope overlap problem. We have multiple device types, and we wish to give each type a different range within a /16 subnet. E.g. for X devices we wish to use 192.168.2.1-192.168.2.254/16, and for Y devices 192.168.3.1-192.168.3.254/16. We are trying to accomplish this by creating different scopes and using the option 60 vendor class identifier. The problem is DHCP won't allow us to create these scopes with 16-bit masks because of the potential overlap. We aren't overlapping the address pools, so why does DHCP care, and can we work around this? If this isn't possible, how can I assign specific ranges by device type without creating multiple scopes? Any thoughts would be helpful.

    UPDATE:
    - Entire scope is 192.168.0.0/16; gateway is 192.168.1.1/16.
    - Device Hardware A - 192.168.20.1-192.168.20.254/16
    - Device Hardware B - 192.168.26.1-192.168.26.254/16
    - Device Hardware C - 192.168.85.1-192.168.85.254/16

    We tried to set up multiple scopes for each device type (A, B, C) but couldn't specify a 16-bit mask, as Scope A could technically overlap Scope B even though our start and end addresses don't. I hope this makes more sense. Thanks for your thoughts.
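
    For context: in Windows DHCP a subnet can host only one scope, so on Server 2008 R2 the usual workaround is a single 192.168.0.0/16 scope whose pool is carved up with exclusion ranges and reservations (assigning a different address range per vendor class only became possible with DHCP policies in Server 2012). A hedged netsh sketch of the single-scope approach:

        netsh dhcp server add scope 192.168.0.0 255.255.0.0 "Devices" "All device types"
        netsh dhcp server scope 192.168.0.0 add iprange 192.168.2.1 192.168.85.254
        rem exclude the space between the ranges reserved for device types A, B and C
        netsh dhcp server scope 192.168.0.0 add excluderange 192.168.21.1 192.168.25.254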


  • How to get the IP address of a PPP (Point-to-Point Protocol) network interface?

    - by Xsmael
    I have a Linux machine with two network interfaces, and I'd like to get the IP address of the PPP interface w1g1, but it doesn't show up in ifconfig. There is a public IP on the PPP interface, but there is no internet connection. I'm trying to troubleshoot, but I need to get the IP address of the interface and I can't. ifconfig:

        eth0      Link encap:Ethernet  HWaddr 00:30:48:8D:F0:2C
                  inet addr:192.168.2.254  Bcast:192.168.2.255  Mask:255.255.255.0
                  inet6 addr: fe80::230:48ff:fe8d:f02c/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:9970 errors:0 dropped:567 overruns:0 frame:0
                  TX packets:4338 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:100
                  RX bytes:1441024 (1.3 MiB)  TX bytes:915814 (894.3 KiB)

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:675 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:675 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:50659 (49.4 KiB)  TX bytes:50659 (49.4 KiB)

        w1g1      Link encap:Point-to-Point Protocol
                  UP POINTOPOINT RUNNING NOARP  MTU:240  Metric:1
                  RX packets:748994 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:748992 errors:0 dropped:0 overruns:0 carrier:3
                  collisions:0 txqueuelen:100
                  RX bytes:179758560 (171.4 MiB)  TX bytes:179758080 (171.4 MiB)
                  Interrupt:177 Memory:f881c400-f881e3ff

    w1g1 is connected to a modem by an RJ45-to-serial cable, and the modem is connected to the phone line. The modem is a Nokia DNT2Mi (you can see it here).

    Routing table:

        192.168.2.0/24 dev eth0  proto kernel  scope link  src 192.168.2.254
        169.254.0.0/16 dev eth0  scope link
        default via 192.168.2.180 dev eth0
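
    Since ifconfig only prints an inet line once IPCP has negotiated an address, an empty w1g1 entry usually means the PPP session never fully came up. A hedged way to confirm, assuming iproute2 is available and pppd logs to syslog:

        ip -4 addr show dev w1g1                                  # prints nothing if no IPv4 address is bound
        grep -iE 'ipcp|local *ip|remote *ip' /var/log/messages    # pppd logs the negotiated addresses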


  • Maven2 multi-module ejb 3.1 project - deployment error

    - by gerry
    The problem is that I get the following error while deploying my project to GlassFish:

        java.lang.RuntimeException: Unable to load EJB module. DeploymentContext does not contain any EJB
        Check archive to ensure correct packaging

    But let us start with how the project structure looks in Maven2. I've built the following scenario:

        MultiModuleJavaEEProject - parent module
        - model --- packaged as jar
        - ejb1 ---- packaged as ejb
        - ejb2 ---- packaged as ejb
        - web ---- packaged as war

    So model, ejb1, ejb2 and web are children/modules of the parent MultiModuleJavaEEProject. ejb1 depends on model, ejb2 depends on ejb1, and web depends on ejb2. The POMs look like this.

    Parent:

        <?xml version="1.0" encoding="UTF-8"?>
        <project xmlns="http://maven.apache.org/POM/4.0.0"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
          <modelVersion>4.0.0</modelVersion>
          <groupId>org.dyndns.geraldhuber.testing</groupId>
          <artifactId>MultiModuleJavaEEProject</artifactId>
          <packaging>pom</packaging>
          <version>1.0</version>
          <name>MultiModuleJavaEEProject</name>
          <url>http://maven.apache.org</url>
          <modules>
            <module>model</module>
            <module>ejb1</module>
            <module>ejb2</module>
            <module>web</module>
          </modules>
          <dependencies>
            <dependency>
              <groupId>junit</groupId>
              <artifactId>junit</artifactId>
              <version>4.7</version>
              <scope>test</scope>
            </dependency>
          </dependencies>
          <build>
            <pluginManagement>
              <plugins>
                <plugin>
                  <groupId>org.apache.maven.plugins</groupId>
                  <artifactId>maven-compiler-plugin</artifactId>
                  <configuration>
                    <source>1.6</source>
                    <target>1.6</target>
                  </configuration>
                </plugin>
                <plugin>
                  <groupId>org.apache.maven.plugins</groupId>
                  <artifactId>maven-ejb-plugin</artifactId>
                  <version>2.2</version>
                  <configuration>
                    <ejbVersion>3.1</ejbVersion>
                    <jarName>${project.groupId}.${project.artifactId}-${project.version}</jarName>
                  </configuration>
                </plugin>
              </plugins>
            </pluginManagement>
          </build>
        </project>

    model:

        <project xmlns="http://maven.apache.org/POM/4.0.0"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
          <modelVersion>4.0.0</modelVersion>
          <parent>
            <groupId>testing</groupId>
            <artifactId>MultiModuleJavaEEProject</artifactId>
            <version>1.0</version>
          </parent>
          <artifactId>model</artifactId>
          <packaging>jar</packaging>
          <version>1.0</version>
          <name>model</name>
          <url>http://maven.apache.org</url>
        </project>

    ejb1:

        <project xmlns="http://maven.apache.org/POM/4.0.0"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
          <modelVersion>4.0.0</modelVersion>
          <parent>
            <groupId>testing</groupId>
            <artifactId>MultiModuleJavaEEProject</artifactId>
            <version>1.0</version>
          </parent>
          <artifactId>ejb1</artifactId>
          <packaging>ejb</packaging>
          <version>1.0</version>
          <name>ejb1</name>
          <url>http://maven.apache.org</url>
          <dependencies>
            <dependency>
              <groupId>org.glassfish</groupId>
              <artifactId>javax.ejb</artifactId>
              <version>3.0</version>
              <scope>provided</scope>
            </dependency>
            <dependency>
              <groupId>testing</groupId>
              <artifactId>model</artifactId>
              <version>1.0</version>
            </dependency>
          </dependencies>
        </project>

    ejb2 is identical to ejb1 except that its artifactId/name are ejb2 and the dependency on model is replaced by a dependency on ejb1 (groupId testing, artifactId ejb1, version 1.0).

    web:

        <?xml version="1.0" encoding="UTF-8"?>
        <project xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"
                 xmlns="http://maven.apache.org/POM/4.0.0"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
          <modelVersion>4.0.0</modelVersion>
          <parent>
            <artifactId>MultiModuleJavaEEProject</artifactId>
            <groupId>testing</groupId>
            <version>1.0</version>
          </parent>
          <groupId>testing</groupId>
          <artifactId>web</artifactId>
          <version>1.0</version>
          <packaging>war</packaging>
          <name>web Maven Webapp</name>
          <url>http://maven.apache.org</url>
          <dependencies>
            <dependency>
              <groupId>javax.servlet</groupId>
              <artifactId>servlet-api</artifactId>
              <version>2.4</version>
              <scope>provided</scope>
            </dependency>
            <dependency>
              <groupId>org.glassfish</groupId>
              <artifactId>javax.ejb</artifactId>
              <version>3.0</version>
              <scope>provided</scope>
            </dependency>
            <dependency>
              <groupId>testing</groupId>
              <artifactId>ejb2</artifactId>
              <version>1.0</version>
            </dependency>
          </dependencies>
          <build>
            <plugins>
              <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-war-plugin</artifactId>
                <version>2.0</version>
                <configuration>
                  <archive>
                    <manifest>
                      <addClasspath>true</addClasspath>
                    </manifest>
                  </archive>
                </configuration>
              </plugin>
            </plugins>
            <finalName>web</finalName>
          </build>
        </project>

    And the model is just a simple POJO:

        package testing.model;

        public class Data {
            private String data;
            public String getData() { return data; }
            public void setData(String data) { this.data = data; }
        }

    ejb1 contains only one stateless EJB:

        package testing.ejb1;

        import javax.ejb.Stateless;
        import testing.model.Data;

        @Stateless
        public class DataService {
            private Data data;
            public DataService() {
                data = new Data();
                data.setData("Hello World!");
            }
            public String getDataText() {
                return data.getData();
            }
        }

    As is ejb2:

        package testing.ejb2;

        import javax.ejb.EJB;
        import javax.ejb.Stateless;
        import testing.ejb1.DataService;

        @Stateless
        public class Service {
            @EJB
            DataService service;
            public Service() { }
            public String getText() {
                return service.getDataText();
            }
        }

    And the web module contains only a servlet:

        package testing.web;

        import java.io.IOException;
        import java.io.PrintWriter;
        import javax.ejb.EJB;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import testing.ejb2.Service;

        public class SimpleServlet extends HttpServlet {
            @EJB
            Service service;
            public void doGet(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                PrintWriter out = response.getWriter();
                out.println("SimpleServlet Executed");
                out.println("Text: " + service.getText());
                out.flush();
                out.close();
            }
        }

    And the web.xml file in the web module looks like:

        <!DOCTYPE web-app PUBLIC
            "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"
            "http://java.sun.com/dtd/web-app_2_3.dtd">
        <web-app>
          <display-name>Archetype Created Web Application</display-name>
          <servlet>
            <servlet-name>simple</servlet-name>
            <servlet-class>testing.web.SimpleServlet</servlet-class>
          </servlet>
          <servlet-mapping>
            <servlet-name>simple</servlet-name>
            <url-pattern>/simple</url-pattern>
          </servlet-mapping>
        </web-app>

    No further files were set up by me. There is no ejb-jar.xml in any EJB module, because I'm using EJB 3.1, so I think the ejb-jar.xml descriptors are optional. Is this right? But the problem is the already mentioned error:

        java.lang.RuntimeException: Unable to load EJB module. DeploymentContext does not contain any EJB
        Check archive to ensure correct packaging

    Can anybody help?
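
    You are right that EJB 3.1 makes ejb-jar.xml optional. As a diagnostic (not a confirmed fix for this setup), a minimal descriptor in each EJB module's src/main/resources/META-INF can force the deployer to treat the jar as an EJB module; if deployment then succeeds, the problem lies in annotation scanning rather than in your code:

        <?xml version="1.0" encoding="UTF-8"?>
        <ejb-jar xmlns="http://java.sun.com/xml/ns/javaee"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                                     http://java.sun.com/xml/ns/javaee/ejb-jar_3_1.xsd"
                 version="3.1">
          <!-- intentionally empty: the @Stateless annotations still provide the metadata -->
        </ejb-jar>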


  • SSAS: Using fake dimension and scopes for dynamic ranges

    - by DigiMortal
    In one of my BI projects I needed to find the count of objects in an income range. The usual solution with a range dimension was useless because the range an object belongs to changes over time. These ranges depend on a calculation done over the incomes measure, so I really had no option to use a classic solution. Thanks to the SSAS forums I got my problem solved, and here is the solution.

    The problem – how to create dynamic ranges?

    I have two dimensions in my SSAS cube: one for invoices related to object rents and the other for objects. There is a measure that sums invoice totals and two calculations. One of these calculations performs some computations based on object income and some other object attributes. The second calculation uses the first one to define the income range where an object belongs. What I need is a query that returns how many objects there are in each group. I cannot use a dimension for the range because on one date an object may belong to one range and two days later to another income range. For example, if an object is not rented out for two days it makes no money and its income stays the same as before. If the object is rented out after two days it makes some income, and this income may move it to another income range.

    Solution – fake dimension and scopes

    Thanks to Gerhard Brueckl from pmOne I got everything working fine after some struggling with BI Studio. The original discussion he pointed out can be found in the SSAS official forums thread "Create a banding dimension that groups by a calculated measure". The solution was pretty simple by nature – we have to define a fake dimension for our range and use scopes to assign values to the object count measure. The object count measure is primitive – it just counts objects, and that's it. We will use it to find out how many objects belong to one or another range. We also need a table for the fake ranges, and we have to fill it with the ranges used in the ranges calculation. After creating the table and filling it with ranges we can add the fake range dimension to our cube. Let's now see how to solve the problem step by step.

    Solving the problem

    Suppose you have a ranges calculation defined like this:

        CASE
          WHEN [Measures].[ComplexCalc] < 0 THEN 'Below 0'
          WHEN [Measures].[ComplexCalc] >= 0 AND [Measures].[ComplexCalc] <= 50 THEN '0 - 50'
          ...
        END

    Let's now create a new table in our analysis database and name it FakeIncomeRange. Here is the definition for the table:

        CREATE TABLE [FakeIncomeRange] (
            [range_id] [int] IDENTITY(1,1) NOT NULL,
            [range_name] [nvarchar](50) NOT NULL,
            CONSTRAINT [pk_fake_income_range] PRIMARY KEY CLUSTERED
            (
                [range_id] ASC
            )
        )

    Don't forget to fill this table with the range labels you are using in the ranges calculation. To use the ranges from the table we have to add this table to our data source view and create a new dimension. We cannot bind this table to other tables; we have to leave it as it is. Our dimension has two attributes: ID and Name. The next thing to create is the calculation that returns the object count. This calculation is also fake, because we will override its values for all ranges later. The object count measure can be defined as a calculation like this:

        COUNT([Object].[Object].[Object].members)

    Now comes the most crucial part of our solution – defining the scopes. Based on the data used in this posting, we have to define a scope for each of our ranges. Here is the example for the first range:

        SCOPE([FakeIncomeRange].[Name].&[Below 0], [Measures].[ObjectCount])
            This = COUNT(
                     FILTER(
                       [Object].[Object].[Object].members,
                       [Measures].[ComplexCalc] < 0
                     )
                   )
        END SCOPE

    To get these scopes defined in the cube we need a separate MDX script block for each part given here: the line starting with SCOPE, the block for This = , and the line with END SCOPE. (Take a look at the screenshot in the original post to get a better idea of what I mean; that example is taken from SQL Server Books Online to avoid conflicts with NDA.) And now it is time to deploy and process our cube. Although you may see examples with semicolons at the end of statements, you don't need them: the Visual Studio BI tools generate a separate command from each script block, so you don't need to worry about it.
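
    For completeness, a sketch of the scope for the next band, assuming the same measure and member names as above (one such block is needed per row in FakeIncomeRange):

        SCOPE([FakeIncomeRange].[Name].&[0 - 50], [Measures].[ObjectCount])
            This = COUNT(
                     FILTER(
                       [Object].[Object].[Object].members,
                       [Measures].[ComplexCalc] >= 0 AND [Measures].[ComplexCalc] <= 50
                     )
                   )
        END SCOPE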


  • Framework 4 Features: Summary of Security enhancements

    - by Anthony Shorten
    In the last log entry I mentioned one of the new security features in Oracle Utilities Application Framework 4.0.1. Security is one of the major "tent poles" (to borrow a phrase from Steve Jobs) in this release of the framework. There are a number of security related enhancements, requested by customers and arising from internal reviews, that we have introduced. Here is a summary of some of the security enhancements we have added in this release:

    Security Cache Changes - Security authorization information is automatically cached on the server for performance reasons (security is checked for every single call the product makes, for all modes of access). Prior to this release the cache auto-refreshed every 30 minutes (or so). This has been made more nimble by supporting a cache refresh every minute (or so). This means authorization changes are reflected more quickly than before.

    Business Level Security - Business Services are configurable services that are based upon Application Services. Typically, the business service inherited its security profile from its parent service. Whilst this is sufficient for most needs, it is now possible to further specify security on the Business Service definition itself. This allows granular security, letting the same application service be exposed as different Business Services, each with its own security. This is particularly useful when you base a Business Service on a query zone.

    User Propagation - As with other client server applications, the database connections are pooled and shared as needed. This means that a common database user is used to access the database from the pool to allow sharing. Unfortunately, this makes traceability at the database level that much harder. In Oracle Utilities Application Framework V4 the end userid is now propagated to the database using the CLIENT_IDENTIFIER as part of the Oracle JDBC connection API. This means that the common database userid is still used but the end user is identifiable for the duration of the database call. This can be used for monitoring or to hook into Oracle's database security products. This enhancement is only available to Oracle Database customers.

    Enhanced Security Definitions - Security administrators use the product browser front end to control the access rights of defined users. While this is sufficient for most sites, a new security portal has been introduced to speed up the maintenance of security information.

    Oracle Identity Manager Integration - With the popularity of Oracle's Identity Management Suite, the framework now provides an integration adapter and Identity Manager Generic Transport Connector (GTC) to allow users and group membership to be provisioned to any Oracle Utilities Application Framework based product from Oracle's Identity Manager. This is also available for Oracle Utilities Application Framework V2.2 customers. Refer to My Oracle Support KB Id 970785.1 - Oracle Identity Manager Integration Overview.

    Audit On Inquiry - Typically the configurable audit facility in the Oracle Utilities Application Framework is used to audit changes to records. Business Services and Service Scripts could already be configured to audit inquiries as well. Now it is possible to attach auditing capabilities to zones in the product (including base package ones).

    Time Zone Support - In some of the Oracle Utilities Application Framework based products, the time zone of the end user is a factor in the processing. The user object has been extended to allow the recording of time zone information for use in product functionality.

    JAAS Support - Internally the Oracle Utilities Application Framework uses a number of techniques to validate and transmit security information across the architecture. These various methods have been reconciled into using Java Authentication and Authorization Services for standardized security. This is strictly an internal change with no direct impact on how security operates externally.

    JMX Based Cache Management - In an earlier bullet point I mentioned extra security applied to cache management from the browser. Alternatively, a JMX based interface is now provided to allow IT operations to control the cache without the browser interface. This JMX capability can be initiated from a JSR 160 compliant JMX console or JMX browser. I will be writing another, more detailed blog entry on the JMX enhancements, as it is quite a change and an exciting direction for the product line.

    Data Patch Permissions - The database installer provided with the product required lower levels of security for some operations. Some sites wanted the ability for non-DBAs to execute the utilities in a controlled fashion. The framework now allows feature configuration to delegate patch execution.

    User Enable Support - At some sites, the use of temporary staff such as contractors is commonplace. In this scenario, temporary security setups were required and used. A potential issue arises when a contractor leaves the company. Typically the IT group would remove the contractor from the security repository to prevent login using that contractor's userid, but the userid could NOT be removed from the authorization model because of audit requirements (if any user in the product updates financials or key data, their userid is recorded for audit purposes). It is now possible to effectively disable the user in the security model, preventing any use of the userid whilst retaining audit information.

    These are a subset of the security changes in Oracle Utilities Application Framework. More details about the security capabilities of the product are contained in My Oracle Support KB Id 773473.1 - Oracle Utilities Application Framework Security Overview.
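
    To illustrate the user propagation point, here is a hedged sketch of what tagging a pooled connection with the end user looks like at the plain JDBC level (the framework does this internally; the names below are illustrative, not the framework's API):

        import java.sql.Connection;
        import javax.sql.DataSource;
        import oracle.jdbc.OracleConnection;

        public class ClientIdTagging {
            // Tag the shared pooled connection with the real end user so database
            // auditing (e.g. V$SESSION.CLIENT_IDENTIFIER) can attribute the work.
            static void runAs(DataSource ds, String endUserId) throws Exception {
                try (Connection conn = ds.getConnection()) {
                    OracleConnection oc = conn.unwrap(OracleConnection.class);
                    oc.setClientIdentifier(endUserId);
                    try {
                        // ... JDBC work executed as the shared pool user ...
                    } finally {
                        oc.clearClientIdentifier();  // don't leak the tag back into the pool
                    }
                }
            }
        }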


  • BizTalk Server 2009 - Architecture Options

    - by StuartBrierley
    I recently needed to put forward a proposal for a BizTalk 2009 implementation and as a part of this needed to describe some of the basic architecture options available for consideration. While I already had an idea of the type of environment that I would be looking to recommend, I felt that presenting a range of options, while trying to explain some of the strengths and weaknesses of those options, was a good place to start. These outline architecture options should be equally valid for any version of BizTalk Server from 2004, through 2006 and R2, up to 2009.

    The following diagram shows a crude representation of the common implementation options to consider when designing a BizTalk environment. Each of these options provides differing levels of resilience in the case of failure or disaster, with the later options also providing more scope for performance tuning and scalability.

    Some of the options presented above make use of clustering. Clustering may best be described as a technology that automatically allows one physical server to take over the tasks and responsibilities of another physical server that has failed. Given that all computer hardware and software will eventually fail, the goal of clustering is to ensure that mission-critical applications will have little or no downtime when such a failure occurs. Clustering can also be configured to provide load balancing, which should generally lead to performance gains and increased capacity and throughput.

    (A) Single Servers

    This option is the most basic BizTalk implementation that should be considered. It involves the deployment of a single BizTalk server in conjunction with a single SQL server. This configuration does not provide for any resilience in the case of the failure of either server. It is, however, the cheapest and easiest to implement of the available options.

    Using a single BizTalk server does not provide for the level of performance tuning that is otherwise available when using more than one BizTalk server in a cluster.

    The common edition of BizTalk used in single server implementations is the standard edition. It should be noted, however, that if future demand requires increased capacity for a solution, this BizTalk edition is limited to scaling up the implementation and not scaling out the number of servers in use. Any need to scale out the solution would require an upgrade to the enterprise edition of BizTalk.

    (B) Single BizTalk Server with Clustered SQL Servers

    This option uses a single BizTalk server with a cluster of SQL servers. By utilising clustered SQL servers we can ensure that there is some resilience to the implementation in respect of the databases that BizTalk relies on to operate. The clustering of two SQL servers is possible with the standard edition, but to go beyond this would require the enterprise level edition. While this option offers improved resilience over option (A), it does still present a potential single point of failure at the BizTalk server.

    Using a single BizTalk server does not provide for the level of performance tuning that is otherwise available when using more than one BizTalk server in a cluster.

    The common edition of BizTalk used in single server implementations is the standard edition. It should be noted, however, that if future demand requires increased capacity for a solution, this BizTalk edition is limited to scaling up the implementation and not scaling out the number of servers in use. You are also unable to take advantage of multiple message boxes, which would allow us to balance the SQL load in the event of any bottlenecks in this area of the implementation. Any need to scale out the solution would require an upgrade to the enterprise edition of BizTalk.

    (C) Clustered BizTalk Servers with Clustered SQL Servers

    This option makes use of a cluster of BizTalk servers with a cluster of SQL servers to offer high availability and resilience in the case of failure of either of the server types involved. Clustering of BizTalk is only available with the enterprise edition of the product. Clustering of two SQL servers is possible with the standard edition, but to go beyond this would require the enterprise level edition.

    The use of a BizTalk cluster also provides for the ability to balance load across the servers and gives more scope for performance tuning any implemented solutions. It is also possible to add more BizTalk servers to an existing cluster, giving scope for scaling out the solution as future demand requires.

    This might be seen as the middle cost option, providing a good level of protection in the case of failure and a decent level of future proofing, but at a higher cost than the single BizTalk server implementations.

    (D) Clustered BizTalk Servers with Clustered SQL Servers – with disaster recovery/service continuity

    This option is similar to that offered by (C) and makes use of a cluster of BizTalk servers with a cluster of SQL servers to offer high availability and resilience in case of failure of either of the server types involved. Clustering of BizTalk is only available with the enterprise edition of the product. Clustering of two SQL servers is possible with the standard edition, but to go beyond this would require the enterprise level edition.

    As with (C), the use of a BizTalk cluster also provides for the ability to balance load across the servers and gives more scope for performance tuning the implemented solution. It is also possible to add more BizTalk servers to an existing cluster, giving scope for scaling the solution out as future demand requires.

    In this scenario, however, we would be including some form of disaster recovery or service continuity. An example of this would be making use of multiple sites, with the BizTalk server cluster operating across sites to offer resilience in case of the loss of one or more sites. In this scenario there are options available for the SQL implementation depending on the network implementation: making use of either one cluster per site or a single SQL cluster across the network. A multi-site SQL implementation would require some form of data replication across the sites involved.

    This is obviously an expensive and complex option, but it does provide an extraordinary amount of protection in the case of failure.


  • 2012 Oracle Fusion Innovation Awards - Part 2

    - by Michelle Kimihira
    Author: Moazzam Chaudry

    Continuing from Friday's blog on the 2012 Oracle Fusion Innovation Awards, this blog (Part 2) will provide more details about the customers. It was a tremendous honor to be in a single room of winners; we only wish we could have had more time to share stories from all of them. We received great insight from all the innovative solutions that our customers deploy and would like to share them broadly, so that others can benefit from best practices.

    There was a customer panel session joined by Ingersoll Rand, Nike and Motability, and here is what was discussed:

    Barry Bonar, Enterprise Architect from Ingersoll Rand, shared details about their solution, comprised of Oracle Exalogic, Oracle WebLogic Server and Oracle SOA Suite. This combined solution enabled their business transformation to increase decision-making speed and efficiency, resulting in 40% reduced IT spend, 41X faster response time and huge cost savings.

    Ashok Balakrishnan, Architect from Nike, shared how they leveraged Oracle Coherence to analyze their digital "footprint" of activities. This helps them compete, collaborate and compare athletic data over time.

    Lastly, Ashley Doodly, Head of IT from Motability, shared details about their solution comprised of Oracle SOA Suite, Service Bus, ADF, Coherence, BO and E-Business Suite. This solution helped Motability achieve 100% ROI within the first few months, performance in seconds vs. tens of minutes, and a tremendous improvement in throughput of up to 50%.

    This year's winners by category are:

    Oracle Exalogic - Customer Results using Fusion Middleware
    - Netshoes: ATG on Exalogic - 6X reduced hardware footprint, 6.2X increased throughput and 3 weeks time to market
    - Claro: part of America Movil, running mission-critical Java applications on Exalogic with 35X faster Java response time and 5X throughput
    - Underwriters Laboratories: Exalogic as an apps consolidation platform to power tremendous growth
    - Ingersoll Rand: EBS on Exalogic - up to 40% reduction in overall IT budget, 3X reduced footprint

    Oracle Cloud Application Foundation - Customer Results using Fusion Middleware
    - Mazda Motor Corporation: Tuxedo ART batch runtime environment to migrate their batch apps to a new open environment and reduce mainframe cost
    - HOTELBEDS Technology: open source to WebLogic transformation
    - Globalia Corporation: introduced Oracle Coherence to fully reengineer the DTH system and provide multiple business and technical benefits
    - Nike: Nike+, the digital sports platform, has 8M users and is expecting a 5X increase in users, many of whom will carry multiple devices that frequently sync data with the Digital Sport platform
    - Comcast Corporation: the solution is expected to increase availability, continuity and performance, and to simplify and make the code at the application layer more flexible

    Oracle SOA and Oracle BPM - Customer Results using Fusion Middleware
    - NTT Docomo: network traffic solution based on Oracle event processing and Coherence - massive in scale: 12M users (50M in future), 800,000 events/sec
    - Schneider National, Inc.: SOA/B2B/ADF/Data Integration to orchestrate key order processes across Siebel, OTM & EBS; the platform runs 60M transactions/day and 50 million composite SOA instances per day across 10g and 11g
    - Amadeus: Oracle BPM solution - business rules and processes vary across local (80), regional (~10) and corporate approval processes, with up to 10 levels of approval; plans to deploy across 20+ markets
    - Navitar: SOA solution integrating a fully non-Oracle legacy application/ERP environment using Oracle's SOA Suite and Oracle AIA Foundation Pack
    - Motability: uses SOA Suite to synchronize data across the systems and to manage the vehicle remarketing process

    Oracle WebCenter - Customer Results using Fusion Middleware
    - News Limited: single platform running websites for 50% of Australia's newspapers
    - University of Louisville: "Facebook for Medicine" - Oracle WebCenter platform and Oracle BIEE to analyze patient test data and uncover potential health issues; expecting an annualized ROI of 277%
    - China Mobile Jiangsu: company portal (25K users) to drive collaboration & productivity
    - Life Technologies: portal for remotely monitoring & repairing biotech instruments
    - LA Dept. of Water & Power: Oracle WebCenter Portal to power ladwp.com on desktop and mobile for 1.6 million users

    Oracle Identity Management - Customer Results using Fusion Middleware
    - Education Testing Service: identity management platform for provisioning & SSO of 6 million GRE, GMAT and TOEFL customers
    - Avea: Oracle Identity Manager allowing call center personnel to quickly change identity profiles to handle varying call loads, based on a user self-service interface; decreased admin cost by 30%

    Oracle Data Integration - Customer Results using Fusion Middleware
    - Raymond James: near real-time integration for improved systems (throughput & performance) and enhanced operational flexibility in a 24x7 environment
    - Wm Morrison Supermarkets: electronic point-of-sale integration handling over 80 million transactions a day in near real time (15-minute intervals)

    Oracle Application Development Framework and Oracle Fusion Development - Customer Results using Fusion Middleware
    - Qualcomm Incorporated: solution providing immediate business value, enabling the self-service model necessary for growing the new customer base, an increase in customer satisfaction and reduced time-to-deliver
    - Micros Systems, Inc.: ADF, SOA Suite and WebCenter enable services that include managing distribution of hotel room availability and rates to channels such as the hotel website, Expedia, etc.
    - Marfin Egnatia Bank: a new Web 2.0 UI provides a much richer experience through the ADF solution, with the end result of boosting end-user productivity

    Business Analytics (Oracle BI, Oracle EPM, Oracle Exalytics) - Customer Results using Fusion Middleware
    - INC Research: self-service customer portal delivering 5-10% of the overall revenue, expected to grow fast with the BI solution
    - Experian: reduction in time to complete the financial close process
    - Hologic Inc: solution saving months of decision-making uncertainty!

    We look forward to seeing many more innovative nominations. The nomination process for 2013 begins in April 2013.

    Additional Information:
    - Blog: Oracle WebCenter Award Winners
    - Blog: Oracle Identity Management Winners
    - Blog: Oracle Exalogic Winners
    - Blog: SOA, BPM and Data Integration will feature award winners in their respective areas this week
    - Subscribe to our regular Fusion Middleware Newsletter
    - Follow us on Twitter and Facebook


  • 5 minutes WIF: Make your ASP.NET application use test-STS

    - by DigiMortal
    Windows Identity Foundation (WIF) provides us with a simple dummy STS application we can use to develop our system with no actual STS in place. In this posting I will show you how to add STS support to your existing application and how to generate a dummy application that stands in for your real STS.

    Word of caution! Although it is relatively easy to build your own STS using the WIF tools, I don't recommend you build one. Identity providers must be highly secure and stable in every respect, and this makes development of your own STS a very complex task. If it is possible, use some known STS solution.

    I assume you have WIF and the WIF SDK installed on your development machine. If you don't, here are the links to the download pages:

    - Windows Identity Foundation
    - Windows Identity Foundation SDK

    Adding STS support to your web application

    Suppose you have a web application and you want to externalize authentication so your application is able to detect users, send unauthenticated users to a login page, and otherwise work exactly like it worked before. The WIF tools provide you with all you need.

    1. Click on your web application project and select "Add STS reference…" from the context menu to start adding or updating STS settings for the web application.

    2. Insert your application URI in the application settings window. Note that the web.config file is already selected for you. I inserted a URI that corresponds to my web application address under IIS Express. This URI must exist (later), because otherwise you cannot use the dummy STS service.

    3. Select "Create a new STS project in the current solution" and click the Next button.

    4. The summary screen gives you information about how your site will use the STS. You can run this wizard again whenever you have to modify the STS parameters. Click Finish.

    If everything goes as expected, a new web site is added to your solution, named YourWebAppName_STS.

    Dummy STS application

    The dummy STS is created as a web site project, not as a web application project. It still works fine and you don't have to make any modifications to it. It just works, but it is a dummy.

    Why a dummy STS? Some points about the dummy STS web site:

    - The dummy STS is not a template for your own custom STS identity provider.
    - The dummy STS is a very good and simple replacement for a real STS, giving you a more flexible development environment without having to authenticate against a real service.
    - Of course, you can modify the dummy STS web site to mimic some behavior of your real STS.

    Pages in the dummy STS

    The dummy STS has two pages – Login.aspx and Default.aspx. Default.aspx is the page that handles requests to the STS service. Login.aspx is the page where authentication takes place. The dummy STS authenticates users using forms-based authentication: you can insert whatever username you like and it still works. You can take a look at the code behind these pages to get some idea of how this dummy service is built. But again – this service is there to simplify your life as a developer.

    Authenticating users using the dummy STS

    If you are using the development web server that ships with Visual Studio 2010, I suggest you switch over to IIS or IIS Express and make some more configuration changes, as described in my previous posting "Making WIF local STS work with your ASP.NET application". When you are done with these little modifications you are ready to run your application and see how authentication works. If everything is okay, you are redirected to the dummy STS login page when running your web application. Adam Carter is provided as the username by default. If you click on the submit button you are authenticated and redirected to the application page.

    Conclusion

    As you saw, it is very easy to set up your own dummy STS web site for testing purposes. You coded nothing: you just ran a wizard, inserted some data, modified configuration a little bit, and you were done. Later, when your application goes to production, you can run this STS configuration utility again and it will generate the correct settings for your real STS service automatically.
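
    For reference, the wizard's main output is the microsoft.identityModel section it writes into web.config. A hedged sketch of roughly what it looks like, with placeholder URIs standing in for your application and the generated _STS site:

        <microsoft.identityModel>
          <service>
            <audienceUris>
              <add value="http://localhost:8080/MyWebApp/" />
            </audienceUris>
            <federatedAuthentication>
              <wsFederation passiveRedirectEnabled="true"
                            issuer="http://localhost:8080/MyWebApp_STS/"
                            realm="http://localhost:8080/MyWebApp/"
                            requireHttps="false" />
              <cookieHandler requireSsl="false" />
            </federatedAuthentication>
          </service>
        </microsoft.identityModel>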


  • Data Source Security Part 1

    - by Steve Felts
    I’ve written a couple of articles on how to store data source security credentials using the Oracle wallet. I plan to write a few articles on the various types of security available to WebLogic Server (WLS) data sources. There are more options than you might think!

    There have been several enhancements in this area in WLS 10.3.6. There are a couple more enhancements planned for release WLS 12.1.2 that I will include here for completeness. This isn't intended as a teaser: if you call your Oracle support person, you can get them now as minor patches to WLS 10.3.6.

    The current security documentation is scattered in a few places, has a few incorrect statements, and is missing a few topics. It also seems that the knowledge of how to apply some of these features isn't written down. The goal of these articles is to talk about WLS data source security in a unified way and to introduce some approaches to using the available features.

    Introduction to WebLogic Data Source Security Options

    By default, you define a single database user and password for a data source. You can store it in the data source descriptor or make use of the Oracle wallet. This is a very simple and efficient approach to security. All of the connections in the connection pool are owned by this user and there is no special processing when a connection is given out. That is, it's a homogeneous connection pool and, from a security perspective, any request can get any connection (there are other aspects like affinity). Regardless of the end user of the application, all connections in the pool use the same security credentials to access the DBMS. No additional information is needed when you get a connection, because it's all available from the data source descriptor (or wallet):

        java.sql.Connection conn = mydatasource.getConnection();

    Note: You can enter the password as a name-value pair in the Properties field (this is not permitted for production environments) or you can enter it in the Password field of the data source descriptor. The value in the Password field overrides any password value defined in the Properties passed to the JDBC driver when creating physical database connections. It is recommended that you use the Password attribute in place of the password property in the properties string, because the Password value is encrypted in the configuration file (stored as the password-encrypted attribute in the jdbc-driver-params tag in the module file) and is hidden in the administration console. The Properties and Password fields are located on the administration console Data Source creation wizard or Data Source Configuration tab.

    The JDBC API can also be used to programmatically specify a database user name and password, as in the following:

        java.sql.Connection conn = mydatasource.getConnection("user", "password");

    According to the JDBC specification, this is supposed to take a database user and associated password, but different vendors implement it differently. WLS, by default, treats this as an application server user and password. The pair is authenticated to see if it's a valid user, and that user is used for WLS security permission checks. By default, the user is then mapped to a database user and password using the data source credential mapper, so this API sort of follows the specification, but database credentials are one step removed from the application code. More details and the rationale are described later.

    While the default approach is simple, it does mean that only one database user is doing all of the work. You can't figure out who actually did the update, and you can't restrict SQL operations by who is running the operation, at least at the database level. Any type of per-user logic will need to be in the application code instead of having the database do it. There are various WLS data source features that can be configured to provide some per-user information about the operations to the database.

    WebLogic Data Source Security Options

    This table describes the features available for WebLogic data sources to configure database security credentials, with a brief description of each. It also captures information about the compatibility of these features with one another.

    User authentication (default)
      Description: Default getConnection(user, password) behavior – validate the input and use the user/password in the descriptor.
      Can be used with: Set client identifier, Proxy session, Identity pooling
      Can't be used with: Use database credentials

    Use database credentials
      Description: Instead of using the credential mapper, use the supplied user and password directly.
      Can be used with: Set client identifier, Proxy session, Identity pooling
      Can't be used with: User authentication, Multi Data Source

    Set client identifier
      Description: Set a client identifier property associated with the connection (Oracle and DB2 only).
      Can be used with: Everything

    Proxy session
      Description: Set a light-weight proxy user associated with the connection (Oracle only).
      Can be used with: Set client identifier, Use database credentials
      Can't be used with: Identity pooling, User authentication

    Identity pooling
      Description: Heterogeneous pool of connections owned by specified users.
      Can be used with: Set client identifier, Use database credentials
      Can't be used with: Proxy session, User authentication, Labeling, Multi Data Source, Active GridLink

    Note that all of these features are available with both XA and non-XA drivers.

    Currently, the Proxy Session and Use Database Credentials options are on the Oracle tab of the Data Source Configuration tab of the administration console (even though the Use Database Credentials feature is not just for Oracle databases – oops). The rest of the features are on the Identity tab of the Data Source Configuration tab in the administration console (plan on seeing them all in one place in the future). The subsequent articles will describe these features in more detail. Keep referring back to this table to see the big picture.


  • qemu-kvm virtual machine virtio network freeze under load

    - by Rick Koshi
    I'm having a problem with my virtual machines, where the network will freeze under heavy load. I'm using CentOS 6.2 as both host and guest, not using libvirt, just running qemu-kvm directly as follows: /usr/libexec/qemu-kvm \ -drive file=/data2/vm/rb-dev2-www1-vm.img,index=0,media=disk,cache=none,if=virtio \ -boot order=c \ -m 2G \ -smp cores=1,threads=2 \ -vga std \ -name rb-dev2-www1-vm \ -vnc :84,password \ -net nic,vlan=0,macaddr=52:54:20:00:00:54,model=virtio \ -net tap,vlan=0,ifname=tap84,script=/etc/qemu-ifup \ -monitor unix:/var/run/vm/rb-dev2-www1-vm.mon,server,nowait \ -rtc base=utc \ -device piix3-usb-uhci \ -device usb-tablet /etc/qemu-ifup (used by the above command) is a very simple script, containing the following: #!/bin/sh sudo /sbin/ifconfig $1 0.0.0.0 promisc up sudo /usr/sbin/brctl addif br0 $1 sleep 2 And here's the info on br0 and other interfaces: avl-host3 14# brctl show bridge name bridge id STP enabled interfaces br0 8000.180373f5521a no bond0 tap84 virbr0 8000.525400858961 yes virbr0-nic avl-host3 15# ip addr show 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: em1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000 link/ether 18:03:73:f5:52:1a brd ff:ff:ff:ff:ff:ff 3: em2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000 link/ether 18:03:73:f5:52:1a brd ff:ff:ff:ff:ff:ff 4: em3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000 link/ether 18:03:73:f5:52:1e brd ff:ff:ff:ff:ff:ff 5: em4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000 link/ether 18:03:73:f5:52:20 brd ff:ff:ff:ff:ff:ff 6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP link/ether 18:03:73:f5:52:1a brd ff:ff:ff:ff:ff:ff inet6 fe80::1a03:73ff:fef5:521a/64 scope link valid_lft forever preferred_lft forever 7: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 18:03:73:f5:52:1a brd ff:ff:ff:ff:ff:ff inet 172.16.1.46/24 brd 172.16.1.255 scope global br0 inet6 fe80::1a03:73ff:fef5:521a/64 scope link valid_lft forever preferred_lft forever 8: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 52:54:00:85:89:61 brd ff:ff:ff:ff:ff:ff inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0 9: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 500 link/ether 52:54:00:85:89:61 brd ff:ff:ff:ff:ff:ff 12: tap84: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500 link/ether ba:e8:9b:2a:ff:48 brd ff:ff:ff:ff:ff:ff inet6 fe80::b8e8:9bff:fe2a:ff48/64 scope link valid_lft forever preferred_lft forever bond0 is a bond of em1 and em2. virbr0 and virbr0-nic are vestigial interfaces left over from CentOS's default installation. They are unused (as far as I know). The guest runs perfectly until I run a large 'rsync', when the network will freeze after some seemingly-random time (usually under a minute). When it freezes, there is no network activity in or out of the guest. I can still connect to the guest's console via vnc, but it is unable to speak out its network interface. Any attempt to 'ping' from the guest gives a "Destination Host Unreachable" error for 3/4 packets and no reply for every fourth packet. 
    Sometimes (perhaps two thirds of the time), I can bring the interface back to life by doing a "service network restart" from the guest's console. If this works (and if I do it before the rsync times out), the rsync will resume. Usually it will freeze again within a minute or two. If I repeat, the rsync will eventually finish, and I presume the machine goes back to waiting for another period of heavy load.

    Throughout the whole process, there are no console errors or relevant (that I can see) syslog messages on either guest or host machine. If the "service network restart" doesn't work the first time, trying again (and again and again) never seems to work. The command completes normally, with normal output, but the interface stays frozen. However, a soft reboot of the guest machine (without restarting qemu-kvm) always seems to bring it back.

    I am aware of the "lowest mac address" assignment problem, where the bridge takes on the mac address of the slave interface with the lowest mac address. This causes temporary network freezes, but is definitely not what's happening for me. My freezes are permanent until manual intervention, and you can see from the 'ip addr show' output above that the mac address being used by br0 is that of the physical ethernet.

    There are no other virtual machines running on the host. I've verified that each virtual machine on the subnet has its own unique mac address.

    I have rebuilt the guest machine several times, and I have tried this on three different host machines (identical hardware, built identically). Oddly, I do have one virtual host (the second of this series) which never seemed to have a problem. It never had its network freeze when it was running the same rsync during its build. It's particularly odd because it was the second build. The first, on a different host, did have the freezing problem, but the second did not. I assumed at the time that I had done something wrong with the first build, and that the problem was resolved. Unfortunately, the problem reappeared when I built the third VM. Also unfortunately, I can't do many tests with the working VM, as it's now in production use, and I'm hoping I can find the cause of this issue before that machine starts having problems. It's possible that I just got really lucky while running the rsync on the working machine, and that one time it didn't freeze. Of course it's possible that I somehow changed the build scripts without realizing it and re-broke something, but I can't find any such thing.

    In any case, I'm hoping someone has some idea what could cause this.

    Addendum: Preliminary tests suggest that I don't have the problem if I substitute e1000 for virtio in the first -net flag to qemu-kvm. I don't consider this a solution, but it is suitable for a stopgap. Has anyone else had (or better yet, solved) this problem with the virtio network driver?
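
    For reference, here is a minimal sketch of the e1000 stopgap described in the addendum: the same invocation with only the NIC model swapped, the tap wiring left untouched. This is illustrative of the workaround, not a fix for the underlying virtio issue.

    /usr/libexec/qemu-kvm \
        ... \
        -net nic,vlan=0,macaddr=52:54:20:00:00:54,model=e1000 \
        -net tap,vlan=0,ifname=tap84,script=/etc/qemu-ifup \
        ...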

    Read the article

  • yui compressor maven plugin doesn't compress the js files

    - by hanumant
    I am using the yui compressor to compress the js files in my web app. I have configured the plugin as indicated on the yui compressor maven plugin site. This is the pom plugin configuration:

    <plugin>
      <groupId>net.sf.alchim</groupId>
      <artifactId>yuicompressor-maven-plugin</artifactId>
      <version>0.7.1</version>
      <executions>
        <execution>
          <phase>compile</phase>
          <goals>
            <goal>jslint</goal>
            <goal>compress</goal>
          </goals>
        </execution>
      </executions>
      <configuration>
        <failOnWarning>true</failOnWarning>
        <nosuffix>true</nosuffix>
        <force>true</force>
        <aggregations>
          <aggregation>
            <!-- remove files after aggregation (default: false) -->
            <removeIncluded>false</removeIncluded>
            <!-- insert new line after each concatenation (default: false) -->
            <insertNewLine>false</insertNewLine>
            <output>${project.basedir}/${webcontent.dir}/js/compressedAll.js</output>
            <!-- files to include, path relative to output's directory or absolute path -->
            <!--inputDir>base directory for non absolute includes, default to parent dir of output</inputDir-->
            <includes>
              <include>**/autocomplete.js</include>
              <include>**/calendar.js</include>
              <include>**/dialogs.js</include>
              <include>**/download.js</include>
              <include>**/folding.js</include>
              <include>**/jquery-1.4.2.min.js</include>
              <include>**/jquery.bgiframe.min.js</include>
              <include>**/jquery.loadmask.js</include>
              <include>**/jquery.printelement-1.1.js</include>
              <include>**/jquery.tablesorter.mod.js</include>
              <include>**/jquery.tablesorter.pager.js</include>
              <include>**/jquery.dialogs.plugin.js</include>
              <include>**/jquery.ui.autocomplete.js</include>
              <include>**/jquery.validate.js</include>
              <include>**/jquery-ui-1.8.custom.min.js</include>
              <include>**/languageDropdown.js</include>
              <include>**/messages.js</include>
              <include>**/print.js</include>
              <include>**/tables.js</include>
              <include>**/tabs.js</include>
              <include>**/uwTooltip.js</include>
            </includes>
            <!-- files to exclude, path relative to output's directory -->
          </aggregation>
        </aggregations>
      </configuration>
      <dependencies>
        <dependency>
          <groupId>rhino</groupId>
          <artifactId>js</artifactId>
          <scope>compile</scope>
          <version>1.6R5</version>
        </dependency>
        <dependency>
          <groupId>org.apache.maven</groupId>
          <artifactId>maven-plugin-api</artifactId>
          <version>2.0.7</version>
          <scope>provided</scope>
        </dependency>
        <dependency>
          <groupId>org.apache.maven</groupId>
          <artifactId>maven-project</artifactId>
          <version>2.0.7</version>
          <scope>provided</scope>
        </dependency>
        <dependency>
          <groupId>net.sf.retrotranslator</groupId>
          <artifactId>retrotranslator-runtime</artifactId>
          <version>1.2.9</version>
          <scope>runtime</scope>
        </dependency>
      </dependencies>
    </plugin>

    And here is the log at compress time:

    These will use the artifact files already in the core ClassRealm instead, to allow them to be included in PluginDescriptor.getArtifacts().
    [DEBUG] Configuring mojo 'net.sf.alchim:yuicompressor-maven-plugin:0.7.1:jslint'
    [DEBUG] (f) failOnWarning = true
    [DEBUG] (f) jswarn = true
    [DEBUG] (f) outputDirectory = C:\test\target\classes
    [DEBUG] (f) project = MavenProject: com.test.test1:test2:19-SNAPSHOT @ C:\test\pom.xml
    [DEBUG] (f) resources = [Resource {targetPath: null, filtering: false, FileSet {directory: C:\test\src, PatternSet [includes: {}, excludes: {**/*.class, **/*.java, site/*}]}}]
    [DEBUG] (f) sourceDirectory = C:\test\src\..\js
    [DEBUG] (f) warSourceDirectory = C:\test\src\main\webapp
    [DEBUG] (f) webappDirectory = C:\test\target\test2-19-SNAPSHOT
    [DEBUG] -- end configuration --
    [INFO] [yuicompressor:jslint {execution: default}]
    [INFO] nb warnings: 0, nb errors: 0
    [DEBUG] Configuring mojo 'net.sf.alchim:yuicompressor-maven-plugin:0.7.1:compress' --
    [DEBUG] (f) removeIncluded = false
    [DEBUG] (f) insertNewLine = false
    [DEBUG] (f) output = C:\test\WebContent\js\compressedAll.js
    [DEBUG] (f) includes = [**/autocomplete.js, **/calendar.js, **/dialogs.js, **/download.js, **/folding.js, **/jquery-1.4.2.min.js, **/jquery.bgiframe.min.js, **/jquery.loadmask.js, **/jquery.printelement-1.1.js, **/jquery.tablesorter.mod.js, **/jquery.tablesorter.pager.js, **/jquery.dialogs.plugin.js, **/jquery.ui.autocomplete.js, **/jquery.validate.js, **/jquery-ui-1.8.custom.min.js, **/languageDropdown.js, **/messages.js, **/print.js, **/tables.js, **/tabs.js, **/uwTooltip.js]
    [DEBUG] (f) aggregations = [net.sf.alchim.mojo.yuicompressor.Aggregation@65646564]
    [DEBUG] (f) disableOptimizations = false
    [DEBUG] (f) encoding = Cp1252
    [DEBUG] (f) failOnWarning = true
    [DEBUG] (f) force = true
    [DEBUG] (f) gzip = false
    [DEBUG] (f) jswarn = true
    [DEBUG] (f) linebreakpos = 0
    [DEBUG] (f) nomunge = false
    [DEBUG] (f) nosuffix = true
    [DEBUG] (f) outputDirectory = C:\test\target\classes
    [DEBUG] (f) preserveAllSemiColons = false
    [DEBUG] (f) project = MavenProject: com.test.test1:test2:19-SNAPSHOT @ C:\test\pom.xml
    [DEBUG] (f) resources = [Resource {targetPath: null, filtering: false, FileSet {directory: C:\test\src, PatternSet [includes: {}, excludes: {**/*.class, **/*.java, site/*}]}}]
    [DEBUG] (f) sourceDirectory = C:\test\src\..\js
    [DEBUG] (f) statistics = true
    [DEBUG] (f) suffix = -min
    [DEBUG] (f) warSourceDirectory = C:\test\src\main\webapp
    [DEBUG] (f) webappDirectory = C:\test\target\test2-19-SNAPSHOT
    [DEBUG] -- end configuration --
    [INFO] [yuicompressor:compress {execution: default}]
    [INFO] generate aggregation : C:\test\WebContent\js\compressedAll.js
    [INFO] compressedAll.js (407505b)
    [INFO] nb warnings: 0, nb errors: 0
    [DEBUG] Configuring mojo 'org.apache.maven.plugins:maven-resources-plugin:2.2:testResources' --
    [DEBUG] (f) filters = []
    [DEBUG] (f) outputDirectory = C:\test\target\test-classes
    [DEBUG] (f) project = MavenProject: com.test.test1:test2:19-SNAPSHOT @ C:\test\pom.xml
    [DEBUG] (f) resources = [Resource {targetPath: null, filtering: false, FileSet {directory: C:\test\test, PatternSet [includes: {}, excludes: {**/*.class, **/*.java}]}}]
    [DEBUG] -- end configuration --

    The problem is that the files are getting aggregated into one file, but without being compressed. The link above uses version 1.1, while the plugin version I am using is 0.7.1. Is this going to make any difference? Can someone tell me what is wrong here? PS: I have obfuscated some text in the log to follow the compliance rules in my firm, so you may find it mismatching somewhere.
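
    Not from the thread, but a hedged idea suggested by the plugin's own comment above: non-absolute includes resolve against the parent directory of <output> (here WebContent/js), i.e. the uncompressed sources, while the compress goal writes its minified copies under outputDirectory (C:\test\target\classes in the [DEBUG] log). Pointing <inputDir> at that directory, so the aggregation concatenates the minified files, is a sketch, not a verified fix:

    <aggregation>
      <!-- assumption: with nosuffix=true the compressed files keep their
           original names and land under target/classes per the log above -->
      <inputDir>${project.build.directory}/classes</inputDir>
      <output>${project.basedir}/${webcontent.dir}/js/compressedAll.js</output>
      <includes>
        <include>**/*.js</include>
      </includes>
    </aggregation>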

    Read the article

  • XSLT big integer (int64) handling msxml

    - by Farid Z
    When trying to do math on a big integer (int64) in an XSLT template, I get the wrong result, since there is no native 64-bit integer support in XSLT (an XSLT number is a 64-bit double). I am using msxml 6.0 on Windows XP SP3. Are there any workarounds for this on Windows?

    <tables>
      <table>
        <table_schem>REPADMIN</table_schem>
        <table_name>TEST_DESCEND_IDENTITY_BIGINT</table_name>
        <column>
          <col_name>COL1</col_name>
          <identity>
            <col_min_val>9223372036854775805</col_min_val>
            <col_max_val>9223372036854775805</col_max_val>
            <autoincrementvalue>9223372036854775807</autoincrementvalue>
            <autoincrementstart>9223372036854775807</autoincrementstart>
            <autoincrementinc>-1</autoincrementinc>
          </identity>
        </column>
      </table>
    </tables>

    This test returns true due to overflow (I am assuming) but would actually be false if I could somehow tell the XSLT processor to use int64 rather than the default 64-bit double, since big integer is the actual data type of the numbers in the XML input:

    <xsl:when test="autoincrementvalue = (col_min_val + autoincrementinc)">
      <xsl:value-of select="''"/>
    </xsl:when>

    Here is the complete template:

    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <!-- Reseed Derby identity column -->
      <xsl:output omit-xml-declaration='yes' method='text' />
      <xsl:param name="stmtsep">;</xsl:param>
      <xsl:param name="schemprefix"></xsl:param>
      <xsl:template match="tables">
        <xsl:variable name="identitycount" select="count(table/column/identity)"></xsl:variable>
        <xsl:for-each select="table/column/identity">
          <xsl:variable name="table_schem" select="../../table_schem"></xsl:variable>
          <xsl:variable name="table_name" select="../../table_name"></xsl:variable>
          <xsl:variable name="tablespec">
            <xsl:if test="$schemprefix"><xsl:value-of select="$table_schem"/>.</xsl:if><xsl:value-of select="$table_name"/>
          </xsl:variable>
          <xsl:variable name="col_name" select="../col_name"></xsl:variable>
          <xsl:variable name="newstart">
            <xsl:choose>
              <xsl:when test="autoincrementinc > 0">
                <xsl:choose>
                  <xsl:when test="col_max_val = '' and autoincrementvalue = autoincrementstart">
                    <xsl:value-of select="''"/>
                  </xsl:when>
                  <xsl:when test="col_max_val = ''">
                    <xsl:value-of select="autoincrementstart"/>
                  </xsl:when>
                  <xsl:when test="autoincrementvalue = (col_max_val + autoincrementinc)">
                    <xsl:value-of select="''"/>
                  </xsl:when>
                  <xsl:when test="(col_max_val + autoincrementinc) &lt; autoincrementstart">
                    <xsl:value-of select="autoincrementstart"/>
                  </xsl:when>
                  <xsl:otherwise>
                    <xsl:value-of select="col_max_val + autoincrementinc"/>
                  </xsl:otherwise>
                </xsl:choose>
              </xsl:when>
              <xsl:when test="autoincrementinc &lt; 0">
                <xsl:choose>
                  <xsl:when test="col_min_val = '' and autoincrementvalue = autoincrementstart">
                    <xsl:value-of select="''"/>
                  </xsl:when>
                  <xsl:when test="col_min_val = ''">
                    <xsl:value-of select="autoincrementstart"/>
                  </xsl:when>
                  <xsl:when test="autoincrementvalue = (col_min_val + autoincrementinc)">
                    <xsl:value-of select="''"/>
                  </xsl:when>
                  <xsl:when test="(col_min_val + autoincrementinc) > autoincrementstart">
                    <xsl:value-of select="autoincrementstart"/>
                  </xsl:when>
                  <xsl:otherwise>
                    <xsl:value-of select="col_min_val + autoincrementinc"/>
                  </xsl:otherwise>
                </xsl:choose>
              </xsl:when>
            </xsl:choose>
          </xsl:variable>
          <xsl:if test="not(position()=1)"><xsl:text> </xsl:text></xsl:if>
          <xsl:choose>
            <!-- restart with ddl changes both the next identity value AUTOINCREMENTVALUE and the identity
                 start number AUTOINCREMENTSTART, even though in this case we only want to change
                 the next identity number -->
            <xsl:when test="$newstart != '' and $newstart != autoincrementvalue">alter table <xsl:value-of select="$tablespec"/> alter column <xsl:value-of select="$col_name"/> restart with <xsl:value-of select="$newstart"/><xsl:if test="$identitycount>1">;</xsl:if></xsl:when>
            <xsl:otherwise>-- reseed <xsl:value-of select="$tablespec"/> is not necessary</xsl:otherwise>
          </xsl:choose>
        </xsl:for-each>
      </xsl:template>
    </xsl:stylesheet>
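
    Not part of the original question, but one commonly suggested workaround under XSLT 1.0's double-only arithmetic: keep the value as a string and do the numeric work only on a low-order slice. A minimal sketch, assuming the increment never carries or borrows across the slice boundary (that case would need extra handling):

    <xsl:variable name="len" select="string-length(autoincrementvalue)"/>
    <!-- the high part stays an exact string; the low 6 digits fit a double exactly -->
    <xsl:variable name="high" select="substring(autoincrementvalue, 1, $len - 6)"/>
    <xsl:variable name="low" select="number(substring(autoincrementvalue, $len - 5))"/>
    <!-- assumption: $low + autoincrementinc stays within 0..999999 -->
    <xsl:value-of select="concat($high, format-number($low + autoincrementinc, '000000'))"/>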

    Read the article

  • Hibernate, Spring and SLF4J Binding

    - by asrijaal
    Hi, I'm trying to set up a webapp with maven2-managed dependencies. Here is my pom.xml:

    <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
      <modelVersion>4.0.0</modelVersion>
      <groupId>mtx-production</groupId>
      <artifactId>mtx-production</artifactId>
      <version>0.0.1</version>
      <dependencies>
        <dependency>
          <groupId>org.springframework</groupId>
          <artifactId>spring-core</artifactId>
          <version>3.0.2.RELEASE</version>
        </dependency>
        <dependency>
          <groupId>org.springframework</groupId>
          <artifactId>spring-orm</artifactId>
          <version>3.0.2.RELEASE</version>
          <type>jar</type>
        </dependency>
        <dependency>
          <groupId>org.springframework</groupId>
          <artifactId>spring-web</artifactId>
          <version>3.0.2.RELEASE</version>
          <type>jar</type>
        </dependency>
        <dependency>
          <groupId>mysql</groupId>
          <artifactId>mysql-connector-java</artifactId>
          <version>5.1.12</version>
        </dependency>
        <dependency>
          <groupId>org.hibernate</groupId>
          <artifactId>hibernate-core</artifactId>
          <version>3.5.1-Final</version>
          <type>jar</type>
          <scope>compile</scope>
        </dependency>
        <dependency>
          <groupId>org.hibernate</groupId>
          <artifactId>hibernate-annotations</artifactId>
          <version>3.5.1-Final</version>
          <type>jar</type>
          <scope>compile</scope>
        </dependency>
        <dependency>
          <groupId>org.slf4j</groupId>
          <artifactId>slf4j-log4j12</artifactId>
          <version>1.5.8</version>
          <type>jar</type>
          <scope>compile</scope>
        </dependency>
        <dependency>
          <groupId>log4j</groupId>
          <artifactId>log4j</artifactId>
          <version>1.2.14</version>
          <type>jar</type>
          <scope>compile</scope>
        </dependency>
      </dependencies>
      <repositories>
        <repository>
          <id>jboss-releases</id>
          <url>http://repository.jboss.org/maven2</url>
        </repository>
      </repositories>
    </project>

    Correct me if I got something wrong, but this should work?! I am only getting this exception in my Eclipse:

    13.06.2010 16:48:15 org.springframework.context.support.AbstractApplicationContext$BeanPostProcessorChecker postProcessAfterInitialization
    INFO: Bean 'dataSource' is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
    SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
    SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
    13.06.2010 16:48:15 org.springframework.beans.factory.support.DefaultSingletonBeanRegistry destroySingletons
    INFO: Destroying singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@366573: defining beans [org.springframework.context.annotation.internalConfigurationAnnotationProcessor,org.springframework.context.annotation.internalAutowiredAnnotationProcessor,org.springframework.context.annotation.internalRequiredAnnotationProcessor,org.springframework.context.annotation.internalCommonAnnotationProcessor,org.springframework.context.annotation.internalPersistenceAnnotationProcessor,org.springframework.beans.factory.config.PropertyPlaceholderConfigurer#0,dataSource,sessionFactory,org.springframework.aop.config.internalAutoProxyCreator,org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0,org.springframework.transaction.interceptor.TransactionInterceptor#0,org.springframework.transaction.config.internalTransactionAdvisor,myTxManager,org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor#0]; root of factory hierarchy
    13.06.2010 16:48:15 org.springframework.web.context.ContextLoader initWebApplicationContext
    SCHWERWIEGEND: Context initialization failed
    org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor#0' defined in ServletContext resource [/WEB-INF/applicationContext.xml]: Initialization of bean failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'sessionFactory' defined in ServletContext resource [/WEB-INF/applicationContext.xml]: Invocation of init method failed; nested exception is java.lang.NoClassDefFoundError: org/slf4j/impl/StaticLoggerBinder
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:527)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
        at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:291)
        at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
        at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:288)
        at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:194)
        at org.springframework.context.support.AbstractApplicationContext.registerBeanPostProcessors(AbstractApplicationContext.java:687)
        at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:408)
        at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:276)
        at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:197)
        at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:47)
        at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:3830)
        at org.apache.catalina.core.StandardContext.start(StandardContext.java:4337)
        at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045)
        at org.apache.catalina.core.StandardHost.start(StandardHost.java:719)
        at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045)
        at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443)
        at org.apache.catalina.core.StandardService.start(StandardService.java:516)
        at org.apache.catalina.core.StandardServer.start(StandardServer.java:710)
        at org.apache.catalina.startup.Catalina.start(Catalina.java:566)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:288)
        at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:413)
    Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'sessionFactory' defined in ServletContext resource [/WEB-INF/applicationContext.xml]: Invocation of init method failed; nested exception is java.lang.NoClassDefFoundError: org/slf4j/impl/StaticLoggerBinder
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1412)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
        at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:291)
        at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
        at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:288)
        at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:194)
        at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBeansOfType(DefaultListableBeanFactory.java:387)
        at org.springframework.beans.factory.BeanFactoryUtils.beansOfTypeIncludingAncestors(BeanFactoryUtils.java:266)
        at org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.detectPersistenceExceptionTranslators(PersistenceExceptionTranslationInterceptor.java:139)
        at org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.<init>(PersistenceExceptionTranslationInterceptor.java:79)
        at org.springframework.dao.annotation.PersistenceExceptionTranslationAdvisor.<init>(PersistenceExceptionTranslationAdvisor.java:70)
        at org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor.setBeanFactory(PersistenceExceptionTranslationPostProcessor.java:99)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeAwareMethods(AbstractAutowireCapableBeanFactory.java:1431)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1400)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519)
        ... 25 more
    Caused by: java.lang.NoClassDefFoundError: org/slf4j/impl/StaticLoggerBinder

    What is wrong with my setup?
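
    Not part of the original post, but a hedged guess worth checking: org.slf4j.impl.StaticLoggerBinder lives in the binding jar (slf4j-log4j12), and this NoClassDefFoundError typically shows up when the slf4j-api pulled in transitively (for example by hibernate-core) does not match the binding's version, or when the binding jar never reaches WEB-INF/lib. Pinning both artifacts to the same version is one sketch of a fix; the 1.5.8 below simply mirrors the POM above:

    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-api</artifactId>
      <version>1.5.8</version>
    </dependency>
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-log4j12</artifactId>
      <version>1.5.8</version>
    </dependency>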

    Read the article

  • How-to configure Spring Social via XML

    - by Matthias Steiner
    I spent a few hours trying to get Twitter integration to work with Spring Social using the XML configuration approach. All the examples I could find on the web (and on stackoverflow) use the @Config approach as shown in the samples. For whatever reason, the bean definition to get an instance of the Twitter API throws an AOP exception:

    Caused by: java.lang.IllegalStateException: Cannot create scoped proxy for bean 'scopedTarget.twitter': Target type could not be determined at the time of proxy creation.

    Here's the complete config file I have:

    <?xml version="1.0" encoding="UTF-8"?>
    <beans xmlns="http://www.springframework.org/schema/beans"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xmlns:jaxrs="http://cxf.apache.org/jaxrs"
           xmlns:context="http://www.springframework.org/schema/context"
           xmlns:util="http://www.springframework.org/schema/util"
           xmlns:cxf="http://cxf.apache.org/core"
           xmlns:aop="http://www.springframework.org/schema/aop"
           xmlns:jee="http://www.springframework.org/schema/jee"
           xmlns:mvc="http://www.springframework.org/schema/mvc"
           xmlns:jdbc="http://www.springframework.org/schema/jdbc"
           xsi:schemaLocation="
             http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.1.xsd
             http://cxf.apache.org/jaxrs http://cxf.apache.org/schemas/jaxrs.xsd
             http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd
             http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-3.1.xsd
             http://cxf.apache.org/core http://cxf.apache.org/schemas/core.xsd
             http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-3.1.xsd
             http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee-3.1.xsd
             http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc-3.1.xsd
             http://www.springframework.org/schema/jdbc http://www.springframework.org/schema/jdbc/spring-jdbc-3.1.xsd">

      <import resource="classpath:META-INF/cxf/cxf.xml" />
      <import resource="classpath:META-INF/cxf/cxf-servlet.xml" />

      <jee:jndi-lookup id="dataSource" jndi-name="java:comp/env/jdbc/DefaultDB" />

      <!-- initialize DB required to store user auth tokens -->
      <jdbc:initialize-database data-source="dataSource" ignore-failures="ALL">
        <jdbc:script location="classpath:/org/springframework/social/connect/jdbc/JdbcUsersConnectionRepository.sql"/>
      </jdbc:initialize-database>

      <bean id="connectionFactoryLocator" class="org.springframework.social.connect.support.ConnectionFactoryRegistry">
        <property name="connectionFactories">
          <list>
            <ref bean="twitterConnectFactory" />
          </list>
        </property>
      </bean>

      <bean id="twitterConnectFactory" class="org.springframework.social.twitter.connect.TwitterConnectionFactory">
        <constructor-arg value="xyz" />
        <constructor-arg value="xzy" />
      </bean>

      <bean id="usersConnectionRepository" class="org.springframework.social.connect.jdbc.JdbcUsersConnectionRepository">
        <constructor-arg ref="dataSource" />
        <constructor-arg ref="connectionFactoryLocator" />
        <constructor-arg ref="textEncryptor" />
      </bean>

      <bean id="connectionRepository" factory-method="createConnectionRepository" factory-bean="usersConnectionRepository" scope="request">
        <constructor-arg value="#{request.userPrincipal.name}" />
        <aop:scoped-proxy proxy-target-class="false" />
      </bean>

      <bean id="twitter" factory-method="findPrimaryConnection" factory-bean="connectionRepository" scope="request" depends-on="connectionRepository">
        <constructor-arg value="org.springframework.social.twitter.api.Twitter" />
        <aop:scoped-proxy proxy-target-class="false" />
      </bean>

      <bean id="textEncryptor" class="org.springframework.security.crypto.encrypt.Encryptors" factory-method="noOpText" />

      <bean id="connectController" class="org.springframework.social.connect.web.ConnectController">
        <constructor-arg ref="connectionFactoryLocator"/>
        <constructor-arg ref="connectionRepository"/>
        <property name="applicationUrl" value="https://socialscn.int.netweaver.ondemand.com/socialspringdemo" />
      </bean>

      <bean id="signInAdapter" class="com.sap.netweaver.cloud.demo.social.SimpleSignInAdapter" />
    </beans>

    What puzzles me is that the connectionRepository instantiation works perfectly fine (I commented out the twitter bean and tested the code!). It uses the same features (request scope and interface AOP proxy) and works, but the twitter bean instantiation fails?! The Spring Social config code looks as follows (I cannot see any differences, can you?):

    @Configuration
    public class SocialConfig {

        @Inject
        private Environment environment;

        @Inject
        private DataSource dataSource;

        @Bean
        @Scope(value="singleton", proxyMode=ScopedProxyMode.INTERFACES)
        public ConnectionFactoryLocator connectionFactoryLocator() {
            ConnectionFactoryRegistry registry = new ConnectionFactoryRegistry();
            registry.addConnectionFactory(new TwitterConnectionFactory(
                environment.getProperty("twitter.consumerKey"),
                environment.getProperty("twitter.consumerSecret")));
            return registry;
        }

        @Bean
        @Scope(value="singleton", proxyMode=ScopedProxyMode.INTERFACES)
        public UsersConnectionRepository usersConnectionRepository() {
            return new JdbcUsersConnectionRepository(dataSource, connectionFactoryLocator(), Encryptors.noOpText());
        }

        @Bean
        @Scope(value="request", proxyMode=ScopedProxyMode.INTERFACES)
        public ConnectionRepository connectionRepository() {
            Authentication authentication = SecurityContextHolder.getContext().getAuthentication();
            if (authentication == null) {
                throw new IllegalStateException("Unable to get a ConnectionRepository: no user signed in");
            }
            return usersConnectionRepository().createConnectionRepository(authentication.getName());
        }

        @Bean
        @Scope(value="request", proxyMode=ScopedProxyMode.INTERFACES)
        public Twitter twitter() {
            Connection<Twitter> twitter = connectionRepository().findPrimaryConnection(Twitter.class);
            return twitter != null ? twitter.getApi() : new TwitterTemplate();
        }

        @Bean
        public ConnectController connectController() {
            ConnectController connectController = new ConnectController(connectionFactoryLocator(), connectionRepository());
            connectController.addInterceptor(new PostToWallAfterConnectInterceptor());
            connectController.addInterceptor(new TweetAfterConnectInterceptor());
            return connectController;
        }

        @Bean
        public ProviderSignInController providerSignInController(RequestCache requestCache) {
            return new ProviderSignInController(connectionFactoryLocator(), usersConnectionRepository(), new SimpleSignInAdapter(requestCache));
        }
    }

    Any help/pointers would be appreciated!!! Cheers, Matthias

    Read the article

  • DHCP Relay V DHCP Local Cisco v 3com

    - by DefSol
    Howdy, I have a client who has a WAN with 7 sites. At one site in particular, 4-5 clients at random do not get an IP address. The local gateway is a Cisco 871, which relays to a Windows server in a data center running a valid scope for the subnet. If I put in a Cisco 1800 and configure a DHCP scope on it (disabling the scope on the server), all clients get an IP address and everything is right with the world. The WAN provider keeps saying it's a local issue, although we can work around it with the 1800. The provider says a 3Com switch is at fault, and that because the 871 has a local switch (the 1800 does not), the internal switching will receive a different uplink policy. The 3Com is the only managed switch in the subnet. Any ideas greatly appreciated. Reuben
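
    For context, a minimal sketch of the relay setup under discussion; the addresses below are placeholders, not taken from the thread. On the 871, ip helper-address on the LAN interface is what forwards the clients' DHCP broadcasts to the data-center server:

    interface Vlan1
     description LAN at the problem site (placeholder addressing)
     ip address 192.168.50.1 255.255.255.0
     ip helper-address 10.10.10.25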

    Read the article

  • win2008 r2 IIS7.5 - setting up a custom user for an application pool, and trust issues

    - by Ken Egozi
    Scenario: a blank Win2008 R2 install. The goal was to have a couple of sites running with isolated pools and dedicated users.

    - Created a new folder for a new website, c:\web\siteA\wwwroot, with the app (ASP.NET) deployed there in the /bin folder.
    - Created a user named "appuser" and added it to the IIS_USERS group.
    - Gave the website folder read and execute permissions for IIS_USERS and the appuser.
    - Created the IIS site and set the app-pool identity to the appuser.

    Now I'm getting a YSOD telling me that the trust level is too low: SecurityException: That assembly does not allow partially trusted callers. Adding <trust level="Full" /> to the web.config did not help. Changing the app-pool user to Administrator makes the site run. Setting the "anonymous user identity" to either IUSR or the app pool identity makes no difference. Any idea? Is there a "step by step" howto guide for setting up users for isolated app pools on IIS 7.5?
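
    Not from the thread, but a hedged sketch of two things commonly checked when a dedicated pool user fails where Administrator works; the site name, user, password, and framework path below are assumptions, not taken from the post:

    rem point the pool at the dedicated user
    %windir%\system32\inetsrv\appcmd set apppool "siteA" /processModel.identityType:SpecificUser /processModel.userName:appuser /processModel.password:secret

    rem a custom identity typically also needs modify rights on the ASP.NET temp folder
    icacls "%windir%\Microsoft.NET\Framework64\v2.0.50727\Temporary ASP.NET Files" /grant appuser:(OI)(CI)M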

    Read the article

  • Exchange 2010 PST-Export fails

    - by Chake
    I'm horribly failing at exporting Exchange mailboxes to PST files. Perhaps you are able to help me?

    The System

    I'm running some legacy machines here. The one I'm currently working on (CurrentDC) is a Windows 2008 R2 server with Exchange 2010 on it. Exchange seems to be poorly patched:

    [PS] C:\>get-exchangeserver

    Name       Site            ServerRole  Edition     AdminDisplayVersion
    ----       ----            ----------  -------     -------------------
    OldDC      None                        Enterprise  Version 6.5 (Bui...
    CurrentDC  company.local   Mailbox,... Enterprise  Version 14.0 (Bu...

    The Problem

    After some trouble I managed to get the Export-Mailbox command to run:

    [PS] C:\>Export-Mailbox -Identity marco -PSTFolderPath C:\ExchangeExport

    According to several websites, that seems to be the right command to export the mailbox of the user "marco" to "C:\ExchangeExport". But after running the command, an error occurs (I'm sorry, it is the German version of Windows 2008, but if you translate Fehler with error and Vorgang with process you should be prepared enough to go ;)):

    Fehler für Marco S ([email protected]). Ursache: Fehler bei diesem Vorgang., Fehlercode: -2147467259.
        + CategoryInfo          : InvalidOperation: (0:Int32) [Export-Mailbox], RecipientTaskException
        + FullyQualifiedErrorId : 2317FD3A,Microsoft.Exchange.Management.RecipientTasks.ExportMailbox

    RunspaceId                       : 44415363-371e-44a1-a682-61e6a9b90c86
    Identity                         : company.local/Company User/Marco S
    DistinguishedName                : CN=Marco S,OU=Company User,DC=company,DC=local
    DisplayName                      : Marco S
    Alias                            : marco
    LegacyExchangeDN                 : /o=Erste Organisation/ou=Erste administrative Gruppe/cn=Recipients/cn=marco
    PrimarySmtpAddress               : [email protected]
    SourceServer                     : CurrentDC.company.local
    SourceDatabase                   : Mailbox Database 0279110169
    SourceGlobalCatalog              : CurrentDC
    SourceDomainController           :
    TargetGlobalCatalog              : CurrentDC
    TargetDomainController           :
    TargetMailbox                    :
    TargetServer                     :
    TargetDatabase                   :
    MailboxSize                      : 0 B (0 bytes)
    IsResourceMailbox                : False
    SIDUsedInMatch                   :
    SMTPProxies                      :
    SourceManager                    :
    SourceDirectReports              :
    SourcePublicDelegates            :
    SourcePublicDelegatesBL          :
    SourceAltRecipient               :
    SourceAltRecipientBL             :
    SourceDeliverAndRedirect         :
    MatchedTargetNTAccountDN         :
    IsMatchedNTAccountMailboxEnabled :
    MatchedContactsDNList            :
    TargetNTAccountDNToCreate        :
    TargetManager                    :
    TargetDirectReports              :
    TargetPublicDelegates            :
    TargetPublicDelegatesBL          :
    TargetAltRecipient               :
    TargetAltRecipientBL             :
    TargetDeliverAndRedirect         :
    Options                          : Default
    SourceForestCredential           :
    TargetForestCredential           :
    TargetFolder                     :
    PSTFilePath                      : C:\ExchangeExport\marco.pst
    RecoveryMailboxGuid              :
    RecoveryMailboxLegacyExchangeDN  :
    RecoveryMailboxDisplayName       :
    RecoveryDatabaseGuid             :
    StandardMessagesDeleted          : 0
    AssociatedMessagesDeleted        : 0
    DumpsterMessagesDeleted          : 0
    MoveType                         : ExportToPST
    MoveStage                        : Validation
    StartTime                        : 05.10.2012 13:55:46
    EndTime                          : 05.10.2012 13:55:46
    StatusCode                       : -2147467259
    StatusMessage                    : Fehler bei diesem Vorgang.
    ReportFile                       : C:\Program Files\Microsoft\Exchange Server\V14\Logging\MigrationLogs\export-Mailbox20121005-135545-8170000.xml
    ServerName                       : CurrentDC.company.local

    What I have done

    Well, I must say I'm quite clueless. I was wondering why MailboxSize is 0, so I checked it:

    [PS] C:\>Get-MailboxStatistics marco | ft DisplayName, TotalItemSize, ItemCount

    DisplayName   TotalItemSize              ItemCount
    -----------   -------------              ---------
    Marco S       473 MB (496,011,572 bytes) 4173

    Well, this is not 0 bytes, but I don't know what to do with this information.
    Also, I had a look at the ReportFile mentioned in the output:

    <?xml version="1.0"?>
    <export-Mailbox>
      <TaskHeader>
        <RunningAs>NT-AUTORITÄT\SYSTEM</RunningAs>
        <Name>export-Mailbox</Name>
        <Type>ExportToPST</Type>
        <MaxBadItems>0</MaxBadItems>
        <Version>14.0.639.21</Version>
        <StartTime>10.05.2012 14:19:12</StartTime>
        <Options Identity="marco" PSTFolderPath="C:\ExchangeExport" DeleteContent="False" DeleteAssociatedMessages="False" GlobalCatalog="CurrentDC" MaxThreads="4" BadItemLimit="0" ValidateOnly="False" IncludeFolders="" ExcludeFolders="" StartDate="01.01.0001 00:00:00" EndDate="31.12.9999 23:59:59" SubjectKeywords="" ContentKeywords="" AllContentKeywords="" AttachmentFilenames="" SenderKeywords="" RecipientKeywords="" Locale="" />
      </TaskHeader>
      <TaskDetails>
        <Item MailboxName="Marco S">
          <Source>
            <Identity>company.local/Company User/Marco S</Identity>
            <DistinguishName>CN=Marco Sc,OU=Company User,DC=company,DC=local</DistinguishName>
            <DisplayName>Marco S</DisplayName>
            <Alias>marco</Alias>
            <LegacyExchangeDN>/o=Erste Organisation/ou=Erste administrative Gruppe/cn=Recipients/cn=marco</LegacyExchangeDN>
            <PrimarySmtpAddress>[email protected]</PrimarySmtpAddress>
            <SourceServer>CurrentDC.company.local</SourceServer>
            <SourceDatabase>Mailbox Database 0279110169</SourceDatabase>
            <IsResourceMailbox>False</IsResourceMailbox>
            <SourceGlobalCatalog>CurrentDC</SourceGlobalCatalog>
          </Source>
          <Target>
            <PSTFilePath>C:\ExchangeExport\marco.pst</PSTFilePath>
          </Target>
          <MailboxSize>0 B (0 bytes)</MailboxSize>
          <Duration>00:00:00</Duration>
          <Result IsWarning="False" ErrorCode="-2147467259">Fehler bei diesem Vorgang.</Result>
        </Item>
      </TaskDetails>
      <TaskFooter>
        <EndTime>10.05.2012 14:19:13</EndTime>
        <TotalSize>0 B (0 bytes)</TotalSize>
        <StandardMessagesDeleted>0</StandardMessagesDeleted>
        <AssociatedMessagesDeleted>0</AssociatedMessagesDeleted>
        <DumpsterMessagesDeleted>0</DumpsterMessagesDeleted>
        <Result ErrorCount="1" CompletedCount="0" WarningCount="0" />
      </TaskFooter>
    </export-Mailbox>

    Do you have any clue?

    <UPDATE> Regarding the answer from downthepub, I tried using UNC paths: no change. I also tried installing the management tools on a client and running the scripts from there; no luck there either. </UPDATE>

    Thanks a lot for reading this mess!
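
    Not part of the original question, but one hedged observation: the AdminDisplayVersion above reads "Version 14.0", i.e. Exchange 2010 RTM, and Export-Mailbox is the legacy path there. From SP1 onwards the supported route is New-MailboxExportRequest; a sketch of that route, assuming the server can be brought to SP1 or later (the role assignment and UNC share are requirements of the cmdlet, the names are placeholders):

    # the admin account needs the "Mailbox Import Export" role first
    New-ManagementRoleAssignment -Role "Mailbox Import Export" -User administrator
    # FilePath must be a UNC path; local paths such as C:\... are rejected
    New-MailboxExportRequest -Mailbox marco -FilePath "\\CurrentDC\ExchangeExport\marco.pst"
    Get-MailboxExportRequest | Get-MailboxExportRequestStatistics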

    Read the article

  • dhcpd: varying vendor-class-identifier

    - by jessicah
    I'm having trouble selectively sending parameters in response to a DHCP Inform packet using groups (or even without, just using host declarations) for bootp stuff. My configuration file right now looks like:

    subnet 130.123.131.128 netmask 255.255.255.128 {
      allow unknown-clients;
    }

    host dev-mac-09 {
      option vendor-class-identifier "example-identifier";
      hardware ethernet 10:9a:dd:51:ff:83;
    }

    If I put vendor-class-identifier in the global scope, using tcpdump I can see that the client receives the vendor class option successfully. If I take it out and keep it just in the host scope (or group scope), the client never receives the option. Specifying option dhcp-parameter-request-list 60 doesn't help either. I did try using a class definition inside a group, but then it applied even if the host wasn't a part of the group. As an aside, how do I get detailed logging? At least something to indicate what groups and things got used to generate the response to the client.
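
    On the logging aside (this is not from the thread, just a hedged sketch): ISC dhcpd logs through syslog, so one way to get its chatter into its own file is to pin it to a dedicated facility; the facility name and log path below are arbitrary choices. Running dhcpd in the foreground with -d -f also prints its lease transactions to stderr.

    # dhcpd.conf
    log-facility local7;

    # /etc/rsyslog.d/dhcpd.conf (assuming rsyslog is the syslog daemon)
    local7.*    /var/log/dhcpd.log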

    Read the article

  • User cannot book a room resource in Exchange 2010

    - by bakesale
    I'm having a problem with a specific user not being able to book a room resource. No error message is generated at all. The user attempts to add the room as a meeting resource and it simply doesn't register with the resource's calendar. I have tested the resource with other accounts, and other rooms for this user, but it is only this room that is having issues for this user. This Exchange server was migrated from 2007 to 2010. All rooms except for the problem room were created back on Exchange 2007; this new room was created with Exchange 2010. Maybe this clue has something to do with the problem? This is a paste of the mailbox's folder permissions:

    [PS] C:\Windows\system32>Get-MailboxFolderPermission -Identity room125:\calendar

    RunspaceId   : 448ab44a-4c2a-4403-b46c-fce1106c0823
    FolderName   : Calendar
    User         : Default
    AccessRights : {Author}
    Identity     : Default
    IsValid      : True

    RunspaceId   : 448ab44a-4c2a-4403-b46c-fce1106c0823
    FolderName   : Calendar
    User         : Anonymous
    AccessRights : {None}
    Identity     : Anonymous
    IsValid      : True
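
    Not from the thread, but a hedged sketch of what one might compare between the 2007-era rooms and the new one: the resource booking policy, which lives outside the folder permissions shown above. The cmdlets are standard Exchange 2010; the AutoAccept line is an assumption about the desired behaviour, not a confirmed fix:

    # compare the problem room's policy with a working room
    Get-CalendarProcessing -Identity room125 | Format-List AutomateProcessing,AllBookInPolicy,BookInPolicy,AllRequestInPolicy
    # a common setting for rooms that should book themselves automatically
    Set-CalendarProcessing -Identity room125 -AutomateProcessing AutoAccept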

    Read the article
