Search Results

Search found 10595 results on 424 pages for 'job definition'.

  • Is this iptables NAT exploitable from the external side?

    - by Karma Fusebox
    Could you please have a short look on this simple iptables/NAT-Setup, I believe it has a fairly serious security issue (due to being too simple). On this network there is one internet-connected machine (running Debian Squeeze/2.6.32-5 with iptables 1.4.8) acting as NAT/Gateway for the handful of clients in 192.168/24. The machine has two NICs: eth0: internet-faced eth1: LAN-faced, 192.168.0.1, the default GW for 192.168/24 Routing table is two-NICs-default without manual changes: Destination Gateway Genmask Flags Metric Ref Use Iface 192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 (externalNet) 0.0.0.0 255.255.252.0 U 0 0 0 eth0 0.0.0.0 (externalGW) 0.0.0.0 UG 0 0 0 eth0 The NAT is then enabled only and merely by these actions, there are no more iptables rules: echo 1 > /proc/sys/net/ipv4/ip_forward /sbin/iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE # (all iptables policies are ACCEPT) This does the job, but I miss several things here which I believe could be a security issue: there is no restriction about allowed source interfaces or source networks at all there is no firewalling part such as: (set policies to DROP) /sbin/iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT /sbin/iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT And thus, the questions of my sleepless nights are: Is this NAT-service available to anyone in the world who sets this machine as his default gateway? I'd say yes it is, because there is nothing indicating that an incoming external connection (via eth0) should be handled any different than an incoming internal connection (via eth1) as long as the output-interface is eth0 - and routing-wise that holds true for both external und internal clients that want to access the internet. So if I am right, anyone could use this machine as open proxy by having his packets NATted here. So please tell me if that's right or why it is not. As a "hotfix" I have added a "-s 192.168.0.0/24" option to the NAT-starting command. I would like to know if not using this option was indeed a security issue or just irrelevant thanks to some mechanism I am not aware of. As the policies are all ACCEPT, there is currently no restriction on forwarding eth1 to eth0 (internal to external). But what are the effective implications of currently NOT having the restriction that only RELATED and ESTABLISHED states are forwarded from eth0 to eth1 (external to internal)? In other words, should I rather change the policies to DROP and apply the two "firewalling" rules I mentioned above or is the lack of them not affecting security? Thanks for clarification!
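    A minimal sketch of the tightened ruleset discussed above, assuming the eth0/eth1 roles and the 192.168.0.0/24 LAN exactly as described (not tested on this particular box):

        # default-deny forwarding, then allow only LAN->Internet and return traffic
        /sbin/iptables -P FORWARD DROP
        /sbin/iptables -A FORWARD -i eth1 -o eth0 -s 192.168.0.0/24 -j ACCEPT
        /sbin/iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
        # masquerade only packets that really originate from the LAN
        /sbin/iptables -t nat -A POSTROUTING -o eth0 -s 192.168.0.0/24 -j MASQUERADE
        echo 1 > /proc/sys/net/ipv4/ip_forward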

  • My server's been hacked EMERGENCY

    - by Grant unwin
    I'm on my way into work at 9.30 p.m. on a Sunday because our server has been compromised somehow and was resulting in a DOS attack on our provider. The server's access to the Internet has been shut down, which means 500-600 of our clients' sites are now down. Now this could be an FTP hack, or some weakness in code somewhere. I'm not sure till I get there. How can I track this down quickly? We're in for a whole lot of litigation if I don't get the server back up ASAP. Any help is appreciated. UPDATE Thanks to everyone for your help. Luckily I WASN'T the only person responsible for this server, just the nearest. We managed to resolve this problem, although it may not apply to many others in a different situation. I'll detail what we did. We unplugged the server from the net. It was performing (attempting to perform) a Denial of Service attack on another server in Indonesia, and the guilty party was also based there. We first tried to identify where on the server this was coming from; considering we have over 500 sites on the server, we expected to be looking for some time. However, with SSH access still available, we ran a command to find all files edited or created around the time the attacks started. Luckily, the offending file was created over the winter holidays, which meant that not many other files were created on the server at that time. We were then able to identify the offending file, which was inside the uploaded images folder within a ZenCart website. After a short cigarette break we concluded that, due to the file's location, it must have been uploaded via a file upload facility that was inadequately secured. After some googling, we found that there was a security vulnerability in the ZenCart admin panel that allowed files to be uploaded as a picture for a record company (a section that was never really even used). Posting this form just uploaded any file: it did not check the extension of the file, and didn't even check whether the user was logged in. This meant that any file could be uploaded, including a PHP file for the attack. We secured the vulnerability with ZenCart on the infected site, and removed the offending files. The job was done, and I was home by 2 a.m. The Moral - Always apply security patches for ZenCart, or any other CMS system for that matter, because when security updates are released, the whole world is made aware of the vulnerability. - Always do backups, and back up your backups. - Employ or arrange for someone who will be there in times like these, to prevent anyone from relying on a panicky post on Server Fault. Happy servering!
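    For reference, a command along the lines of the one mentioned above for finding recently changed files could look like this; the web root path and the 14-day window are assumptions, and -printf is GNU find only:

        find /var/www -type f -mtime -14 -printf '%T@ %p\n' | sort -n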

  • Recovering from backup without original install media

    - by KGendron
    A machine from my old job had a complete hard drive failure. I have backups, but I'm running into severe problems restoring from them. The only install media was a secondary restore partition on the system's hard drive. I hate whoever came up with that idea more than I can possibly express with words. I spent several days trying to recover the disk - it is pretty well shot and none of my best tricks could even get it to show up in the BIOS. The machine that broke is an HP with XP Media Center Edition on it (I don't know why either). The backups were created using the default Windows backup tool - I have a .bkf file on an external hard drive that I am trying to restore from. I've replaced the hard drive. My home machine is running Windows 7 64-bit and I'm trying to use it as a platform to restore to the other disk. I downloaded the Windows NT Backup restore utility for Windows 7, however no matter what I do it restores to my C drive rather than the specified drive. Fortunately Win7 security settings prevented it from being a complete disaster - but still not a happy thing. I tried firing up the XP virtual machine. I can browse to the backups but it says they are invalid and refuses to let me view or continue with the restore. I tried installing XP to an extra hard drive on my machine - however it bluescreens on me during the install process and I cry. I tried installing XP Pro to the new drive and attempted to restore over it; it of course blackscreened on me, as that was a stupid idea. I made two partitions on the new hard drive (apparently the BIOS on this accursed piece of junk doesn't allow HD partitions larger than 200G anyway, and thus it fails 40 minutes into the install with an ever-descriptive "Disk Read Error"). Guess how I spent last weekend? My last idea was to install XP Pro to the second partition and then use it to restore from backup to the first. After the first restart it gives me the error "Windows could not start because of a computer disk hardware configuration problem. Could not read from the selected boot disk. Check boot path and disk hardware". My brain made one of those bad hard drive clicky noises. I've tried several boot disks but they don't seem to work. If anyone has a link to a good one it would be greatly appreciated. Anyone have any more ideas? I really hate asking about what seems like such a simple issue, but I am quite literally at my wit's end. Thanks - and sorry for the really long post.

  • VPC SSH port forward into private subnet

    - by CP510
    Ok, so I've been racking my brain for DAYS on this dilemma. I have a VPC setup with a public subnet and a private subnet. The NAT is in place of course. I can connect via SSH into an instance in the public subnet, as well as the NAT. I can even SSH connect to the private instance from the public instance. I changed the SSHD configuration on the private instance to accept both port 22 and an arbitrary port number 1300. That works fine. But I need to set it up so that I can connect to the private instance directly using the 1300 port number, i.e. ssh -i keyfile.pem user@1.2.3.4 -p 1300, and 1.2.3.4 should route it to the internal server 10.10.10.10. Now I heard iptables is the tool for this job, so I went ahead and researched and played around with some routing with that. These are the rules I have set up on the public instance (not the NAT). I didn't want to use the NAT for this since AWS apparently pre-configures the NAT instances when you set them up and I heard using iptables can mess that up. *filter :INPUT ACCEPT [129:12186] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [84:10472] -A INPUT -i lo -j ACCEPT -A INPUT -i eth0 -p tcp -m state --state NEW -m tcp --dport 1300 -j ACCEPT -A INPUT -d 10.10.10.10/32 -p tcp -m limit --limit 5/min -j LOG --log-prefix "SSH Dropped: " -A FORWARD -d 10.10.10.10/32 -p tcp -m tcp --dport 1300 -j ACCEPT -A OUTPUT -o lo -j ACCEPT COMMIT # Completed on Wed Apr 17 04:19:29 2013 # Generated by iptables-save v1.4.12 on Wed Apr 17 04:19:29 2013 *nat :PREROUTING ACCEPT [2:104] :INPUT ACCEPT [2:104] :OUTPUT ACCEPT [6:681] :POSTROUTING ACCEPT [7:745] -A PREROUTING -i eth0 -p tcp -m tcp --dport 1300 -j DNAT --to-destination 10.10.10.10:1300 -A POSTROUTING -p tcp -m tcp --dport 1300 -j MASQUERADE COMMIT So when I try this from home, it just times out. No connection refused messages or anything. And I can't seem to find any log messages about dropped packets. My security groups and ACL settings allow communications on these ports in both directions in both subnets and on the NAT. I'm at a loss. What am I doing wrong?
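    For comparison, a minimal sketch of the pieces such a forwarder usually needs, assuming the public instance has a single eth0 interface; note too that on EC2 the forwarding instance's Source/Destination Check must be disabled in the console, or the redirected packets are silently dropped before iptables ever logs them:

        # on the public-subnet instance
        sysctl -w net.ipv4.ip_forward=1
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 1300 -j DNAT --to-destination 10.10.10.10:1300
        iptables -t nat -A POSTROUTING -d 10.10.10.10 -p tcp --dport 1300 -j MASQUERADE
        iptables -A FORWARD -d 10.10.10.10 -p tcp --dport 1300 -j ACCEPT
        iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT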

  • How to configure apache's mod_proxy_html to work as an ajax proxy?

    - by dcerecedo
    I'm trying to build a web site that lets you view and manipulate data from any page in any other website. To do that, I have to bypass 'Allow Origin' problems: I'm loading the other domain's content in an iframe and I have to manipulate its content with JavaScript downloaded from my domain. My first attempt was to write a simple proxy myself, requesting the other domain's page through a server proxy coded in Java that not only serves the content but rebuilds links (src's and href's) in the content so that the content referenced by these links also gets downloaded through my handmade proxy. The result is not bad but has problems with URLs in CSS and scripts. It's then that I realized that mod_proxy_html is supposed to do exactly all this job. The problem is that I cannot figure out how to make it work as expected. Let's suppose my server runs at my-domain.com, and to proxy and transform content from another domain I'd make a request like this: my-domain.com/proxy?url=http://another-domain.com/some/content I'd want mod_proxy_html to serve the content and rewrite the URLs found in http://another-domain.com/some/content in the following ways: Absolute URLs not from another-domain.com: no rewriting Root-relative URLs: /other/content - /proxy?url=http://another-domain.com/other/content Relative URLs: other/content - /proxy?url=http://another-domain.com/some/content/other/content Relative-to-parent URLs: ../other/content - /proxy?url=http://another-domain.com/some/other/content The URL should be specified at runtime, not at configuration time. Can this be achieved with mod_proxy_html? Could anyone provide a simple working configuration to start with? EDIT 1 - First approach: The following site config will work fine with sites that use absolute URLs everywhere, like http://www.huffingtonpost.es/. You could try this config on localhost: http://localhost/asset/http://www.huffingtonpost.es/ <VirtualHost *:80> ServerName localhost LogLevel debug ProxyRequests off RewriteEngine On RewriteRule ^/asset/(.*) $1 [P] ProxyHTMLURLMap $1 /asset/ <Location /asset/> ProxyPassReverse / ProxyHTMLURLMap / /asset/ </Location> </VirtualHost> But as explained in the documentation, if I hit a site using relative URLs, I'd like to have these rewritten in the HTML via mod_proxy_html. So I should change the Location block as follows: <Location /asset/> ProxyPassReverse / #Depending on your system use one line or the other #Ubuntu: #SetOutputFilter proxy-html #any other system: ProxyHTMLEnable On ProxyHTMLURLMap / /asset/ </Location> ...which doesn't seem to work. Comments, hints and ideas welcome!
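    One variation worth trying on the Location block above (a sketch, not a verified configuration): mod_proxy_html cannot rewrite what it cannot parse, so a gzip-compressed upstream response passes through untouched unless Accept-Encoding is stripped (this needs mod_headers), and ProxyHTMLExtended is required if URLs inside embedded scripts and event handlers should be mapped as well:

        <Location /asset/>
            ProxyPassReverse /
            RequestHeader unset Accept-Encoding
            ProxyHTMLEnable On
            ProxyHTMLExtended On
            ProxyHTMLURLMap / /asset/
        </Location>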

  • MySQL won't start, reinstall fails on Ubuntu 12.04

    - by Evils
    My problem started yesterday night when I tried to change the my.cnf config on my ubuntu 12.04 x64 System. I simply tried to changed the bind-address parameter from 127.0.0.1 to 0.0.0.0. A simple restart after a reboot gave this error: stop: Unknown instance: start: Job failed to start I tried to start mysql then by using 'mysqld' which outputs this: 130701 11:05:59 [Note] Plugin 'FEDERATED' is disabled. mysqld: Table 'mysql.plugin' doesn't exist 130701 11:05:59 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it. 130701 11:05:59 InnoDB: The InnoDB memory heap is disabled 130701 11:05:59 InnoDB: Mutexes and rw_locks use GCC atomic builtins 130701 11:05:59 InnoDB: Compressed tables use zlib 1.2.3.4 130701 11:05:59 InnoDB: Initializing buffer pool, size = 128.0M 130701 11:05:59 InnoDB: Completed initialization of buffer pool 130701 11:05:59 InnoDB: highest supported file format is Barracuda. 130701 11:05:59 InnoDB: Waiting for the background threads to start 130701 11:06:00 InnoDB: 5.5.31 started; log sequence number 1595675 130701 11:06:00 [Note] Server hostname (bind-address): '127.0.0.1'; port: 3306 130701 11:06:00 [Note] - '127.0.0.1' resolves to '127.0.0.1'; 130701 11:06:00 [Note] Server socket created on IP: '127.0.0.1'. 130701 11:06:00 [ERROR] Can't start server : Bind on unix socket: Permission denied 130701 11:06:00 [ERROR] Do you already have another mysqld server running on socket: /var/run/mysqld/mysqld.sock ? 130701 11:06:00 [ERROR] Aborting 130701 11:06:00 InnoDB: Starting shutdown... 130701 11:06:00 InnoDB: Shutdown completed; log sequence number 1595675 130701 11:06:00 [Note] mysqld: Shutdown complete Meanwhile I already tried to reinstall and purge the complete mysql package which results in another error which says that dpkg cant change the admins password. While this error appeared another error came with it. When trying to install something new with apt, it always says 'fopen: permission denied' right after it tries to update my man-db. This is my dmesg output: [ 6879.687998] type=1400 audit(1372669683.397:36): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/mysqld" pid=9336 comm="apparmor_parser" [ 6881.323215] init: mysql main process (9340) terminated with status 1 [ 6881.323316] init: mysql respawning too fast, stopped Any help will be appreciated as this is a productive server which renders useless without mysql.
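    One possible recovery sequence for the "Bind on unix socket: Permission denied" part, assuming the stock Ubuntu paths shown in the log (run as root, adjust if the datadir differs):

        mkdir -p /var/run/mysqld
        chown mysql:mysql /var/run/mysqld
        chown -R mysql:mysql /var/lib/mysql
        # clear a stale AppArmor profile left behind by the failed reinstalls
        apparmor_parser -R /etc/apparmor.d/usr.sbin.mysqld
        service mysql start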

  • Need help troubleshooting highly variable ping times

    - by Elliot.Bradshaw
    I'm at work using Citrix (think Remote Desktop) to connect to client sites. With my job I have to write a fair bit of code while I'm connected remotely via Citrix, so the latency of my internet connection is important. If I'm getting ping times above 250ms, then it becomes almost impossible to scroll, click or type with accuracy. Recently my Comcast business internet has been exhibiting highly variable ping times. If I ping google.com, I'll get pings that range from 9ms all the way up to 1300ms. The problem seems to be at its worst during the hours of 1PM to 4:30PM. Outside of those hours and the variance in pings settles down, mostly between 9ms and 50ms. The signal to noise ratio and upstream power are both fine on my modem--the values are here: http://pastebin.com/D4hWGPXf I ran a trace route from my computer to google.com (the results of which are here: http://pastebin.com/GcdjYvMh) and did another test ping to the IP of the first hop outside of our local network (73.98.44.1)--the variance in ping times existed in exactly the same manner as if I were pinging Google. Connecting directly to the cable modem by CAT5 makes no difference. Here is a screenshot demonstrating the variance of the ping times: http://postimage.org/image/haocdeauv/full/ -- as you can see it can get pretty bad. Three Comcast techs have been out (two of them were here when the problem wasn't happening) and they as well as the regional tier 2 Comcast support were unable to diagnose the problem. I now have a ticket open with tier 3 support, but have yet to hear back from them. Does anyone know what could cause these sorts of problems or have any idea from the traceroute above where it could be originating? The regional tier 2 guy tried to tell me that what I'm seeing is normal--are highly variable ping times like that ever acceptable? Anything I should ask Comcast to do or look at to get this problem fixed? Any tips/advice much appreciated! Edit: This is Comcast cable internet at a small start-up, we've ruled out congestion in our private LAN as a cause (i.e., no one's watching YouTube when the pings become variable). Update: Tier 3 Comcast support advised swapping out the modem, a tech came here today and did that--same problem persists.
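    Not mentioned in the thread, but a long mtr run aggregates per-hop loss and latency over many probes and makes it easier to show Comcast whether the variance starts at the first hop or further upstream; the cycle count here is just a suggestion:

        mtr --report --report-cycles 100 google.com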

  • No mapping found for HTTP request with URI: in a Spring MVC app

    - by Ravi
    Hello All, I'm getting this error. my web.xml has this <servlet> <servlet-name>springweb</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class> <init-param> <param-name>contextConfigLocation</param-name> <param-value>/WEB-INF/web-application-config.xml</param-value> </init-param> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>springweb</servlet-name> <url-pattern>/app/*</url-pattern> </servlet-mapping> I have this in my web-application-config.xml <bean id="viewResolver" class="org.springframework.web.servlet.view.UrlBasedViewResolver"> <property name="viewClass" value="org.springframework.web.servlet.view.JstlView"/> </bean> <bean name="/Scheduling.htm" class="com.web.SchedulingController"/> my com.web.SchedulingController looks like this /* * To change this template, choose Tools | Templates * and open the template in the editor. */ package com.web; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import org.springframework.web.servlet.ModelAndView; import org.springframework.web.servlet.mvc.Controller; public class SchedulingController implements Controller{ public ModelAndView handleRequest(HttpServletRequest request, HttpServletResponse response) throws Exception { ModelAndView modelAndView = new ModelAndView("/jsp/Scheduling_main.jsp"); modelAndView.addObject("message","Hello World MVC!!"); return modelAndView; } } When I hit this controller with the URL http://localhost:8080/project1/app/Scheduling.htm The Scheduling_main.jsp gets displayed but the images are not displayed properly. Also the js and css file are not getting rendered. I'm accessing the images like this <img src="jquerylib/images/save_32x32.png" title="Save Appointment"> If I change the URL mapping in the servlet definition to *.htm, the images get displayed fine. Can you point out where I'm missing out. Here is complete error message WARN [PageNotFound] No mapping found for HTTP request with URI [/mavenproject1/app/jquerylib/images/save_32x32.png] in DispatcherServlet with name 'springweb' Thanks a lot. Ravi
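    Since the DispatcherServlet is mapped to /app/*, a page-relative src such as jquerylib/images/... resolves under /app/ and is handed to Spring, which has no controller for it - hence the "No mapping found" warning. One way around it, assuming the static files live directly under the webapp root, is to build a context-relative path so the image request never reaches the dispatcher:

        <img src="<%= request.getContextPath() %>/jquerylib/images/save_32x32.png" title="Save Appointment">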

  • Validating an XML document fragment against XML schema

    - by shylent
    Terribly sorry if I've failed to find a duplicate of this question. I have a certain document with a well-defined document structure. I am expressing that structure through an XML schema. That data structure is operated upon by a RESTful service, so various nodes and combinations of nodes (not the whole document, but fragments of it) are exposed as "resources". Naturally, I am doing my own validation of the actual data, but it makes sense to validate the incoming/outgoing data against the schema as well (before the fine-grained validation of the data). What I don't quite grasp is how to validate document fragments given the schema definition. Let me illustrate: Imagine, the example document structure is: <doc-root> <el name="foo"/> <el name="bar"/> </doc-root> Rather a trivial data structure. The schema goes something like this: <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <xsd:element name="doc-root"> <xsd:complexType> <xsd:sequence> <xsd:element name="el" type="myCustomType" /> </xsd:sequence> </xsd:complexType> </xsd:element> <xsd:complexType name="myCustomType"> <xsd:attribute name="name" use="required" /> </xsd:complexType> </xsd:schema> Now, imagine, I've just received a PUT request to update an 'el' object. Naturally, I would receive not the full document or not any xml, starting with 'doc-root' at its root, but the 'el' element itself. I would very much like to validate it against the existing schema then, but just running it through a validating parser wouldn't work, since it will expect a 'doc-root' at the root. So, again, the question is, - how can one validate a document fragment against an existing schema, or, perhaps, how can a schema be written to allow such an approach. Hope it made sense.
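    A small JAXP sketch of the idea: a Validator will accept a DOMSource rooted at the fragment, but only if the fragment's element name is declared as a global element in the schema, so the schema above would need an additional top-level <xsd:element name="el" type="myCustomType"/> declaration for each fragment root exposed as a resource. The class and method names below are assumptions:

        import java.io.File;
        import javax.xml.XMLConstants;
        import javax.xml.transform.dom.DOMSource;
        import javax.xml.validation.Schema;
        import javax.xml.validation.SchemaFactory;
        import javax.xml.validation.Validator;
        import org.w3c.dom.Element;

        public class FragmentValidation {
            // validates a single DOM element against the compiled schema
            public static void validate(Element fragment, File xsd) throws Exception {
                SchemaFactory sf = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
                Schema schema = sf.newSchema(xsd);
                Validator validator = schema.newValidator();
                validator.validate(new DOMSource(fragment)); // throws SAXException if invalid
            }
        }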

  • UniversalConnectionPoolManagerMBean already registered.

    - by Vladimir Bezugliy
    I have two web applications. Both of yhem use oracle.ucp.UniversalConnectionPool. When I deploy these applications on JBoss I get following exception: java.sql.SQLException: Unable to start the Universal Connection Pool: java.sql.SQLException: Unable to start the Universal Connection Pool: oracle.ucp.UniversalConnectionPoolException: MBean exception occurred while registering or unregistering the MBean at oracle.ucp.util.UCPErrorHandler.newSQLException(UCPErrorHandler.java:541) at oracle.ucp.jdbc.PoolDataSourceImpl.throwSQLException(PoolDataSourceImpl.java:588) at oracle.ucp.jdbc.PoolDataSourceImpl.startPool(PoolDataSourceImpl.java:277) at oracle.ucp.jdbc.PoolDataSourceImpl.getConnection(PoolDataSourceImpl.java:647) at oracle.ucp.jdbc.PoolDataSourceImpl.getConnection(PoolDataSourceImpl.java:614) at oracle.ucp.jdbc.PoolDataSourceImpl.getConnection(PoolDataSourceImpl.java:608) at org.springframework.jdbc.datasource.LazyConnectionDataSourceProxy.afterPropertiesSet(LazyConnectionDataSourceProxy.java:163) ... Caused by: java.sql.SQLException: Unable to start the Universal Connection Pool: oracle.ucp.UniversalConnectionPoolException: MBean exception occurred while registering or unregistering the MBean at oracle.ucp.util.UCPErrorHandler.newSQLException(UCPErrorHandler.java:541) at oracle.ucp.jdbc.PoolDataSourceImpl.throwSQLException(PoolDataSourceImpl.java:588) at oracle.ucp.jdbc.PoolDataSourceImpl.startPool(PoolDataSourceImpl.java:248) ... 212 more Caused by: oracle.ucp.UniversalConnectionPoolException: MBean exception occurred while registering or unregistering the MBean at oracle.ucp.util.UCPErrorHandler.newUniversalConnectionPoolException(UCPErrorHandler.java:421) at oracle.ucp.util.UCPErrorHandler.newUniversalConnectionPoolException(UCPErrorHandler.java:389) at oracle.ucp.admin.UniversalConnectionPoolManagerMBeanImpl.getUniversalConnectionPoolManagerMBean(UniversalConnectionPoolManagerMBeanImpl.java:148) at oracle.ucp.jdbc.PoolDataSourceImpl.startPool(PoolDataSourceImpl.java:243) ... 212 more Caused by: java.security.PrivilegedActionException: javax.management.InstanceAlreadyExistsException: oracle.ucp.admin:name=UniversalConnectionPoolManagerMBean already registered. at java.security.AccessController.doPrivileged(Native Method) at oracle.ucp.admin.UniversalConnectionPoolManagerMBeanImpl.getUniversalConnectionPoolManagerMBean(UniversalConnectionPoolManagerMBeanImpl.java:135) ... 213 more Caused by: javax.management.InstanceAlreadyExistsException: oracle.ucp.admin:name=UniversalConnectionPoolManagerMBean already registered. 
at org.jboss.mx.server.registry.BasicMBeanRegistry.add(BasicMBeanRegistry.java:756) at org.jboss.mx.server.registry.BasicMBeanRegistry.registerMBean(BasicMBeanRegistry.java:233) at sun.reflect.GeneratedMethodAccessor75.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:585) at org.jboss.mx.interceptor.ReflectedDispatcher.invoke(ReflectedDispatcher.java:157) at org.jboss.mx.server.Invocation.dispatch(Invocation.java:96) at org.jboss.mx.interceptor.AbstractInterceptor.invoke(AbstractInterceptor.java:138) at org.jboss.mx.server.Invocation.invoke(Invocation.java:90) at org.jboss.mx.interceptor.ModelMBeanOperationInterceptor.invoke(ModelMBeanOperationInterceptor.java:140) at org.jboss.mx.server.Invocation.invoke(Invocation.java:90) at org.jboss.mx.server.AbstractMBeanInvoker.invoke(AbstractMBeanInvoker.java:264) at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:668) at org.jboss.mx.server.MBeanServerImpl$3.run(MBeanServerImpl.java:1431) at java.security.AccessController.doPrivileged(Native Method) at org.jboss.mx.server.MBeanServerImpl.registerMBean(MBeanServerImpl.java:1426) at org.jboss.mx.server.MBeanServerImpl.registerMBean(MBeanServerImpl.java:376) at oracle.ucp.admin.UniversalConnectionPoolManagerMBeanImpl$2.run(UniversalConnectionPoolManagerMBeanImpl.java:141) ... 215 more Definition of data source bean: <bean id="oracleDataSource" class="oracle.ucp.jdbc.PoolDataSourceFactory" factory-method="getPoolDataSource"> <!-- hard coded properties --> <property name="connectionFactoryClassName" value="oracle.jdbc.pool.OracleDataSource" /> <property name="validateConnectionOnBorrow" value="true" /> <property name="connectionPoolName" value="ORACLE_CONNECTION_POOL" /> ... </bean> Does anyone know how to fix this problem?

  • How to override C# DateTime serialization with class auto-generated from wsdl?

    - by Calvin Fisher
    I have a WSDL that the consumer of my web service expects will be adhered to strictly. I converted it into an interface with wsdl.exe and had my web service implement it. Except for this problem, I have been generally pleased with the results. A simple GetCurrentTime method will have the following response class generated from the WSDL in the interface definition: [System.CodeDobmCompiler.GeneratedCodeAttribute("wsdl", "2.0.50727.3038")] [System.SerializableAttribute()] [System.Diagnostics.DebuggerStepThroughAttribute()] [System.ComponentModel.DesignerCategoryAttribute("code")] [System.Xml.Serialization.XmlTypeAttribute(Namespace="[Client Namespace]")] public partial class GetCurrentTimeResponse { private System.DateTime timeStampField; [System.Xml.Serialization.XmlElementAttribute(Form=System.Xml.Schema.XmlSchemaForm.Unqualified] public System.DateTime TimeStamp{ // [accesses timeStampField] } } When I put the response data into the automatically generated response class, it gets serialized into an appropriate XML response. (Most of the web methods have much more complicated return types with multiple levels of arrays.) The problem is that the default serialization of DateTime objects violates one of the requirements in the WSDL: ... <xsd:simpleType name="SearchTimeStamp"> <xsd:restriction base="xsd:dateTime"> <xsd:pattern value="[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}(.[0-9]{1,7})?Z"> </xsd:restriction> </xsd:simpleType> ... Note the last part of the pattern where subseconds must be either 1 or 7 characters if they are included. The client seems to be rejecting the response because it does not match that requirement. The main issue is that when .NET serializes a DateTime object, it omits all trailing zeroes, meaning the resulting subsecond value varies in length. (e.g., "12:34:56.700" gets serialized as "<TimeStamp>12:34:56:7</TimeStamp>" by default). We use millisecond precision, so I need all timestamps to format with 7 subsecond digits in order to be compliant with the WSDL. It would be easy if I could specify a format string, but I'm not sure how to control the string that the DateTime object uses to serialize to XML, or to otherwise override the serialization behavior. How do I do this? Keeping in mind the following... I would like to modify the generated code as little as possible... preferably not at all if the change can be made through a partial class or inherited class. Using an inherited class for the return type of the web method will cause the web service to no longer implement the auto-generated interface. The TimeStamp type occurs in other, more complex response types. So, manually overriding the entire serialization process may be prohibitively time-consuming.
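    One common workaround (hedged, since it has to fit around the generated code) is to hide the DateTime from the XmlSerializer and expose a string property in its place, formatted with the round-trip "o" specifier, which always emits seven sub-second digits and a trailing Z for UTC values. The class below only illustrates the shape; wiring it into the wsdl.exe output would mean either adding XmlIgnore to the generated TimeStamp property or applying the same mapping through XmlAttributeOverrides when the serializer is constructed:

        public class TimeStampShim
        {
            [System.Xml.Serialization.XmlIgnore]
            public System.DateTime TimeStamp { get; set; }

            // serialized instead of the DateTime; "o" => e.g. 2010-05-07T12:34:56.7000000Z
            [System.Xml.Serialization.XmlElement("TimeStamp",
                Form = System.Xml.Schema.XmlSchemaForm.Unqualified)]
            public string TimeStampXml
            {
                get { return TimeStamp.ToUniversalTime().ToString("o"); }
                set
                {
                    TimeStamp = System.DateTime.Parse(value, null,
                        System.Globalization.DateTimeStyles.RoundtripKind);
                }
            }
        }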

  • Spring mail support - no subject

    - by Trick
    I have updated my libraries, and now e-mails are sent without subject. I don't know where this happened... Mail API is 1.4.3., Spring 2.5.6. and Spring Integration Mail 1.0.3.RELEASE. <!-- Definitions for SMTP server --> <bean id="mailSender" class="org.springframework.mail.javamail.JavaMailSenderImpl"> <property name="host" value="${mail.host}" /> <property name="username" value="${mail.username}" /> <property name="password" value="${mail.password}" /> </bean> <bean id="adminMailTemplate" class="org.springframework.mail.SimpleMailMessage" > <property name="from" value="${mail.admin.from}" /> <property name="to" value="${mail.admin.to}" /> <property name="cc"> <list> <value>${mail.admin.cc1}</value> </list> </property> </bean> <!-- Mail service definition --> <bean id="mailService" class="net.bbb.core.service.impl.MailServiceImpl"> <property name="sender" ref="mailSender"/> <property name="mail" ref="adminMailTemplate"/> </bean> And properties mail.host,mail.username,mail.password,mail.admin.from,mail.admin.to, mail.admin.cc1. Java class: /** The sender. */ private MailSender sender; /** The mail. */ private SimpleMailMessage mail; public void sendMail() { this.mail.setSubject("Subject"); this.mail.setText("msg body"); try { getSender().send(this.mail); } catch (MailException e) { log.error("Error sending mail!",e); } } public SimpleMailMessage getMail() { return this.mail; } public void setMail(SimpleMailMessage mail) { this.mail = mail; } public MailSender getSender() { return this.sender; } public void setSender(MailSender mailSender1) { this.sender = mailSender1; } Everything worked before, I am wondering if there may be any conflicts with new libraries.
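    Independent of which library change triggered this, the SimpleMailMessage bean above is a shared singleton that gets mutated on every call; the documented pattern is to copy the template per send, which also rules out one message clobbering another's subject. A sketch of sendMail() rewritten that way:

        public void sendMail() {
            // copy the injected template instead of mutating the shared bean
            SimpleMailMessage msg = new SimpleMailMessage(getMail());
            msg.setSubject("Subject");
            msg.setText("msg body");
            try {
                getSender().send(msg);
            } catch (MailException e) {
                log.error("Error sending mail!", e);
            }
        }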

  • ASP.NET MVC 2 InputExtensions different on server than local machine

    - by Mike
    Hi everyone, So this is kind of a crazy problem to me, but I've had no luck Googling it. I have an ASP.NET MVC 2 application (under .NET 4.0) running locally just fine. When I upload it to my production server (under shared hosting) I get Compiler Error Message: CS1061: 'System.Web.Mvc.HtmlHelper' does not contain a definition for 'TextBoxFor' and no extension method 'TextBoxFor' accepting a first argument of type 'System.Web.Mvc.HtmlHelper' could be found (are you missing a using directive or an assembly reference?) for this code: <%= this.Html.TextBoxFor(person => person.LastName) %> This is one of the new standard extension methods in MVC 2. So I wrote some diagnostic code: System.Reflection.Assembly ass = System.Reflection.Assembly.GetAssembly(typeof(InputExtensions)); Response.Write("From GAC: " + ass.GlobalAssemblyCache.ToString() + "<br/>"); Response.Write("ImageRuntimeVersion: " + ass.ImageRuntimeVersion.ToString() + "<br/>"); Response.Write("Version: " + System.Diagnostics.FileVersionInfo.GetVersionInfo(ass.Location).ToString() + "<br/>"); foreach (var method in typeof(InputExtensions).GetMethods()) { Response.Write(method.Name + "<br/>"); } running locally (where it works fine), I get this as output: From GAC: True ImageRuntimeVersion: v2.0.50727 Version: File: C:\Windows\assembly\GAC_MSIL\System.Web.Mvc\2.0.0.0__31bf3856ad364e35\System.Web.Mvc.dll InternalName: System.Web.Mvc.dll OriginalFilename: System.Web.Mvc.dll FileVersion: 2.0.50217.0 FileDescription: System.Web.Mvc.dll Product: Microsoft® .NET Framework ProductVersion: 2.0.50217.0 Debug: False Patched: False PreRelease: False PrivateBuild: False SpecialBuild: False Language: Language Neutral CheckBox CheckBox CheckBox CheckBox CheckBox CheckBox CheckBoxFor CheckBoxFor CheckBoxFor Hidden Hidden Hidden Hidden HiddenFor HiddenFor HiddenFor Password Password Password Password PasswordFor PasswordFor PasswordFor RadioButton RadioButton RadioButton RadioButton RadioButton RadioButton RadioButtonFor RadioButtonFor RadioButtonFor TextBox TextBox TextBox TextBox TextBoxFor TextBoxFor TextBoxFor ToString Equals GetHashCode GetType and when running on the production server (where it fails), I see: From GAC: True ImageRuntimeVersion: v2.0.50727 Version: File: C:\Windows\assembly\GAC_MSIL\System.Web.Mvc\2.0.0.0__31bf3856ad364e35\System.Web.Mvc.dll InternalName: System.Web.Mvc.dll OriginalFilename: System.Web.Mvc.dll FileVersion: 2.0.41001.0 FileDescription: System.Web.Mvc.dll Product: Microsoft® .NET Framework ProductVersion: 2.0.41001.0 Debug: False Patched: False PreRelease: False PrivateBuild: False SpecialBuild: False Language: Language Neutral CheckBox CheckBox CheckBox CheckBox CheckBox CheckBox Hidden Hidden Hidden Hidden Hidden Hidden Password Password Password Password RadioButton RadioButton RadioButton RadioButton RadioButton RadioButton TextBox TextBox TextBox TextBox ToString Equals GetHashCode GetType note that "TextBoxFor" is not present (hence the error). I have MVC referenced in the csproj: <Reference Include="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL"> <SpecificVersion>True</SpecificVersion> <HintPath>References\System.Web.Mvc.dll</HintPath> <Private>True</Private> </Reference> I just can't figure it what to do next. Thoughts? Thanks! -Mike

  • Automatically extracting inline XSD from WSDL into XSD file(s)

    - by Steven Geens
    I am using a third party Web Service whose definition and implementation are beyond my control. This web service will change in the future. The Web Service should be used to generate an XML file which contains some of the same data (represented by the same XSD types) as the Web Service plus some extra information generated by the program. My approach: create my own XSD referring to the XSD definitions of the WSDL of the called web service (This XSD also includes XSD types for the extra information obviously.) use a Java XML databinding framework (like ADB or JiXB) to generate the databinding classes from my own XSD file from step 1 use a Java SOAP framework (like Axis2 or CXF) with the same databinding framework to generate the databinding classes from the WSDL (This would enable me to use the objects retrieved by the web service directly in the generation of the XML.) The XSD types I am going to use in my own XSD file, but are defined in the WSDL, are subject to change. Whenever they change, I would like to automatically process the XSD and WSDL databinding again. (If the change is significant enough, this might trigger some development effort.(But usually not.)) My problem: In step 1 I need an XSD referring to the same types as used by the Web Service. The WSDL is referring to another WSDL, which is referring to another WSDL etc. Eventually there is an WSDL with the needed inline XSD types. As far as I know there is no way to directly reference the inline XSD types of a WSDL from an XSD. The approach I would think most viable, is to include an extra step in the automatic processing (before the databinding) that extracts the inline XSD from the WSDL into other XSD file(s). These other XSD file(s) can then be referred to by my own XSD file. Things I'd like to avoid: Manually copy pasting the inline XSD into an XSD file (I am looking for an automatic process.) Any manual steps.(Like the determining the WSDL that contains the inline types manually.(The location of that WSDL does change as well.)) Using xsd:any in my own XSD. I would like my own XSD file to be correct. Using a non-Java technology(like .NET) Huge amounts of implementation (but hints on how you would implement such an extraction are welcome anyway) PS: I found some similar questions, but they all had responses like: WTH would you want to do that? That is the reason for my rather large background story.
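    For the extraction step itself, a small XSLT like the sketch below (untested, namespace URIs as in WSDL 1.1) pulls the first inline xsd:schema out of wsdl:types into a standalone document. It does not follow wsdl:import chains, so it would be pointed at whichever WSDL actually carries the inline types, or run once per embedded schema:

        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
            xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <xsl:output method="xml" indent="yes"/>
          <xsl:template match="/">
            <xsl:copy-of select="/wsdl:definitions/wsdl:types/xsd:schema[1]"/>
          </xsl:template>
        </xsl:stylesheet>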

  • DataForm commit button is not enabled when data changed.

    - by Grayson Mitchell
    This is a weird problem. I am using a dataform, and when I edit the data the save button is enabled, but the cancel button is not. After looking around a bit I have found that I have to implement the IEditableObject in order to cancel an edit. Great I did that (and it all works), but now the commit button (Save) is grayed out, lol. Anyone have any idea's why the commit button will not activate any more? Xaml <df:DataForm x:Name="_dataForm" AutoEdit="False" AutoCommit="False" CommandButtonsVisibility="All"> <df:DataForm.EditTemplate > <DataTemplate> <StackPanel Name="rootPanel" Orientation="Vertical" df:DataField.IsFieldGroup="True"> <!-- No fields here. They will be added at run-time. --> </StackPanel> </DataTemplate> </df:DataForm.EditTemplate> </df:DataForm> binding DataContext = this; _dataForm.ItemsSource = _rows; ... TextBox textBox = new TextBox(); Binding binding = new Binding(); binding.Path = new PropertyPath("Data"); binding.Mode = BindingMode.TwoWay; binding.Converter = new RowIndexConverter(); binding.ConverterParameter = col.Value.Label; textBox.SetBinding(TextBox.TextProperty, binding); dataField.Content = textBox; // add DataField to layout container rootPanel.Children.Add(dataField); Data Class definition public class Row : INotifyPropertyChanged , IEditableObject { public void BeginEdit() { foreach (var item in _data) { _cache.Add(item.Key, item.Value); } } public void CancelEdit() { _data.Clear(); foreach (var item in _cache) { _data.Add(item.Key, item.Value); } _cache.Clear(); } public void EndEdit() { _cache.Clear(); } private Dictionary<string, object> _cache = new Dictionary<string, object>(); private Dictionary<string, object> _data = new Dictionary<string, object>(); public object this[string index] { get { return _data[index]; } set { _data[index] = value; OnPropertyChanged("Data"); } } public object Data { get { return this; } set { PropertyValueChange setter = value as PropertyValueChange; _data[setter.PropertyName] = setter.Value; } } public event PropertyChangedEventHandler PropertyChanged; protected void OnPropertyChanged(string property) { if (PropertyChanged != null) { PropertyChanged(this, new PropertyChangedEventArgs(property)); } } }
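    One defect in the IEditableObject code above worth fixing regardless of the button behaviour: DataForm may invoke BeginEdit more than once for the same editing session, and the second call would throw because the cache dictionary still holds the keys from the first. Whether this is the cause of the disabled Save button is only an assumption; a defensive version:

        public void BeginEdit()
        {
            _cache.Clear(); // guard against BeginEdit being invoked repeatedly
            foreach (var item in _data)
            {
                _cache.Add(item.Key, item.Value);
            }
        }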

  • EJB3 - using 2 persistence units within a transaction (Exception: Local transaction already has 1 no

    - by Sorcha
    I am trying to use 2 persistence units within the same transaction in a JEE application deployed on Glassfish. The 2 persistence units are defined in persistence.xml, as follows: <persistence-unit name="BeachWater"> <jta-data-source>jdbc/BeachWater</jta-data-source> ... <persistence-unit name="LIMS"> <jta-data-source>jdbc/BeachWaterLIMS</jta-data-source> ... These persistence units correspond to JDBC resources and connection pools which I had defined in Glassfish as follows (include one here as both are identical apart from names & database connection info): JDBC Resource: JNDI Name: jdbc/BeachWaterLIMS Pool Name: BEACHWATER_LIMS Connection Pool: Name: BEACHWATER_LIMS Datasource Classname: com.microsoft.sqlserver.jdbc.SQLServerConnectionPoolDataSource Resource Type: javax.sql.ConnectionPoolDataSource There are 3 stateless session beans, LimsServiceBean, AnalysisServiceBean and AnalysisDataTransformationServiceBean. Here are the relevant snippets from LimsServiceBean: @PersistenceContext(unitName = "LIMS") EntityManager em; ... public ArrayList<Sample> getLatestLIMSData() { Query q = em.createNamedQuery("Sample.findBySubTypeStatus"); return new ArrayList<Sample>(q.getResultList()); } From AnalysisServiceBean: @PersistenceContext(unitName = "BeachWater") EntityManager em; ... public ArrayList<AnalysisType> getAllAnalysisTypes() { Query q = em.createNamedQuery("AnalysisType.findAll"); return new ArrayList<AnalysisType>(q.getResultList()); } And from AnalysisDataTransformationServiceBean: @EJB private AnalysisService analysisService; @EJB private LimsService limsService; public void transformData() { List<AnalysisType> analysisTypes = analysisService.getAllAnalysisTypes(); ArrayList<Sample> samples = limsService.getLatestLIMSData(); This call to limsService.getLatestLIMSData() caused the following exception: [exec] Caused by: javax.ejb.TransactionRolledbackLocalException: Exception thrown from bean; nested exception is: Exception [TOPLINK-4002] (Oracle TopLink Essentials - 2.1 (Build b60e-fcs (12/23/2008))): oracle.toplink.essentials.exceptions.DatabaseException [exec] Internal Exception: java.sql.SQLException: Error in allocating a connection. Cause: java.lang.IllegalStateException: Local transaction already has 1 non-XA Resource: cannot add more resources. Having consulted this page, http://msdn.microsoft.com/en-us/library/ms378484.aspx (among many others), I tried changing the definition of the connection pools to: Connection Pool: Name: BEACHWATER_LIMS Datasource Classname: com.microsoft.sqlserver.jdbc.SQLServerXADataSource Resource Type: javax.sql.XADataSource Ping via the Glassfish admin console succeeds, but call to analysisService.getAllAnalysisTypes() now throws an exception: Caused by: javax.ejb.TransactionRolledbackLocalException: Exception thrown from bean; nested exception is: Exception [TOPLINK-4002] (Oracle TopLink Essentials - 2.1 (Build b60e-fcs (12/23/2008))): oracle.toplink.essentials.exceptions.DatabaseException Internal Exception: java.sql.SQLException: Error in allocating a connection. Cause: javax.transaction.SystemException Any ideas?

  • Can't see why JavaMail doesn't work in a Spring MVC application on Tomcat

    - by Kartoch
    I have wrote a small web application using Spring MVC. It runs on tomcat and use the JavaMail library. At the present time I can't find why the mails are not sent. But I have no pertinent log messages in tomcat log files to find where is the problem. At the present time, my logging is configured in a log4j.properties file in the root of my CLASSPATH: log4j.rootLogger= DEBUG, CONSOLE log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout log4j.appender.CONSOLE.layout.ConversionPattern=%-4r [%t] %-5p %c %x - %m%n How can i see the java mail log message in debug mode ? I think it is related to JDK logging system (as I'm using Sun's JavaMail) but I don't know how to configure it. Edit: Well, one problem solves and another arises. I change my bean definition to include debug support for javamail: mail.debug=true But still no pertinent info in it: DEBUG: JavaMail version 1.4.1ea-SNAPSHOT DEBUG: not loading file: /usr/lib/jvm/java-6-sun-1.6.0.12/jre/lib/javamail.providers DEBUG: java.io.FileNotFoundException: /usr/lib/jvm/java-6-sun-1.6.0.12/jre/lib/javamail.providers (No such file or directory) DEBUG: !anyLoaded DEBUG: not loading resource: /META-INF/javamail.providers DEBUG: successfully loaded resource: /META-INF/javamail.default.providers DEBUG: Tables of loaded providers DEBUG: Providers Listed By Class Name:{com.sun.mail.smtp.SMTPSSLTransport=javax.mail.Provider[TRANSPORT,smtps,com.sun.mail.smtp.SMTPSSLTransport,Sun Microsystems, Inc], com.sun.mail.smtp.SMTPTransport=javax.mail.Provider[TRANSPORT,smtp,com.sun.mail.smtp.SMTPTransport,Sun Microsystems, Inc], com.sun.mail.imap.IMAPSSLStore=javax.mail.Provider[STORE,imaps,com.sun.mail.imap.IMAPSSLStore,Sun Microsystems, Inc], com.sun.mail.pop3.POP3SSLStore=javax.mail.Provider[STORE,pop3s,com.sun.mail.pop3.POP3SSLStore,Sun Microsystems, Inc], com.sun.mail.imap.IMAPStore=javax.mail.Provider[STORE,imap,com.sun.mail.imap.IMAPStore,Sun Microsystems, Inc], com.sun.mail.pop3.POP3Store=javax.mail.Provider[STORE,pop3,com.sun.mail.pop3.POP3Store,Sun Microsystems, Inc]} DEBUG: Providers Listed By Protocol: {imaps=javax.mail.Provider[STORE,imaps,com.sun.mail.imap.IMAPSSLStore,Sun Microsystems, Inc], imap=javax.mail.Provider[STORE,imap,com.sun.mail.imap.IMAPStore,Sun Microsystems, Inc], smtps=javax.mail.Provider[TRANSPORT,smtps,com.sun.mail.smtp.SMTPSSLTransport,Sun Microsystems, Inc], pop3=javax.mail.Provider[STORE,pop3,com.sun.mail.pop3.POP3Store,Sun Microsystems, Inc], pop3s=javax.mail.Provider[STORE,pop3s,com.sun.mail.pop3.POP3SSLStore,Sun Microsystems, Inc], smtp=javax.mail.Provider[TRANSPORT,smtp,com.sun.mail.smtp.SMTPTransport,Sun Microsystems, Inc]} DEBUG: successfully loaded resource: /META-INF/javamail.default.address.map DEBUG: !anyLoaded DEBUG: not loading resource: /META-INF/javamail.address.map DEBUG: not loading file: /usr/lib/jvm/java-6-sun-1.6.0.12/jre/lib/javamail.address.map DEBUG: java.io.FileNotFoundException: /usr/lib/jvm/java-6-sun-1.6.0.12/jre/lib/javamail.address.map (No such file or directory)
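    For completeness, the usual way to switch that trace on from the Spring side is the javaMailProperties property of JavaMailSenderImpl (a sketch; the mail.smtp.auth flag is only an assumption about the SMTP server):

        <bean id="mailSender" class="org.springframework.mail.javamail.JavaMailSenderImpl">
            <property name="host" value="${mail.host}" />
            <property name="username" value="${mail.username}" />
            <property name="password" value="${mail.password}" />
            <property name="javaMailProperties">
                <props>
                    <prop key="mail.debug">true</prop>
                    <prop key="mail.smtp.auth">true</prop>
                </props>
            </property>
        </bean>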

  • C++ template-function -> passing a template-class as template-argument

    - by SeMa
    Hello, i try to make intensive use of templates to wrap a factory class: The wrapping class (i.e. classA) gets the wrapped class (i.e. classB) via an template-argument to provide 'pluggability'. Additionally i have to provide an inner-class (innerA) that inherits from the wrapped inner-class (innerB). The problem is the following error-message of the g++ "gcc version 4.4.3 (Ubuntu 4.4.3-4ubuntu5)": sebastian@tecuhtli:~/Development/cppExercises/functionTemplate$ g++ -o test test.cpp test.cpp: In static member function ‘static classA<A>::innerA<iB>* classA<A>::createInnerAs(iB&) [with iB = int, A = classB]’: test.cpp:39: instantiated from here test.cpp:32: error: dependent-name ‘classA::innerA<>’ is parsed as a non-type, but instantiation yields a type test.cpp:32: note: say ‘typename classA::innerA<>’ if a type is meant As you can see in the definition of method createInnerBs, i intend to pass a non-type argument. So the use of typename is wrong! The code of test.cpp is below: class classB{ public: template < class iB> class innerB{ iB& ib; innerB(iB& b) :ib(b){} }; template<template <class> class classShell, class iB> static classShell<iB>* createInnerBs(iB& b){ // this function creates instances of innerB and its subclasses, // because B holds a certain allocator return new classShell<iB>(b); } }; template<class A> class classA{ // intention of this class is meant to be a pluggable interface // using templates for compile-time checking public: template <class iB> class innerA: A::template innerB<iB>{ innerA(iB& b) :A::template innerB<iB>(b){} }; template<class iB> static inline innerA<iB>* createInnerAs(iB& b){ return A::createInnerBs<classA<A>::template innerA<> >(b); // line 32: error occurs here } }; typedef classA<classB> usable; int main (int argc, char* argv[]){ int a = 5; usable::innerA<int>* myVar = usable::createInnerAs(a); return 0; } Please help me, i have been faced to this problem for several days. Is it just impossible, what i'm trying to do? Or did i forgot something? Thanks, Sebastian
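    A compiling sketch of what the call is presumably meant to be: the template-template argument is the member template itself, passed without angle brackets, and the inner constructors have to be reachable by createInnerBs (access specifiers added here; this is a reduced example, not verified against the original project):

        class classB {
        public:
            template <class iB>
            class innerB {
            public:
                explicit innerB(iB& b) : ib(b) {}
            private:
                iB& ib;
            };

            template <template <class> class classShell, class iB>
            static classShell<iB>* createInnerBs(iB& b) {
                return new classShell<iB>(b);
            }
        };

        template <class A>
        class classA {
        public:
            template <class iB>
            class innerA : public A::template innerB<iB> {
            public:
                explicit innerA(iB& b) : A::template innerB<iB>(b) {}
            };

            template <class iB>
            static innerA<iB>* createInnerAs(iB& b) {
                // the member template itself is the argument, no '<>' needed;
                // 'template' disambiguates because A is a dependent type
                return A::template createInnerBs<innerA>(b);
            }
        };

        int main() {
            int a = 5;
            classA<classB>::innerA<int>* p = classA<classB>::createInnerAs(a);
            (void)p;
            return 0;
        }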

  • Storing objects in the array

    - by Ockonal
    Hello, I want to save boost signals objects in the map (association: signal name ? signal object). The signals signature is different, so the second type of map should be boost::any. map<string, any> mSignalAssociation; The question is how to store objects without defining type of new signal signature? typedef boost::signals2::signal<void (int KeyCode)> sigKeyPressed; mSignalAssociation.insert(make_pair("KeyPressed", sigKeyPressed())); // This is what I need: passing object without type definition mSignalAssociation["KeyPressed"] = (typename boost::signals2::signal<void (int KeyCode)>()); // One more trying which won't work. And I don't want use this sigKeyPressed mKeyPressed; mSignalAssociation["KeyPressed"] = mKeyPressed; All this tryings throw the error: /usr/include/boost/noncopyable.hpp: In copy constructor ‘boost::signals2::signal_base::signal_base(const boost::signals2::signal_base&)’: In file included from /usr/include/boost/signals2/detail/signals_common.hpp:17:0, /usr/include/boost/noncopyable.hpp:27:7: error: ‘boost::noncopyable_::noncopyable::noncopyable(const boost::noncopyable_::noncopyable&)’ is private /usr/include/boost/signals2/signal_base.hpp:22:5: error: within this context ---------- /usr/include/boost/signals2/detail/signal_template.hpp: In copy constructor ‘boost::signals2::signal1<void, int&, boost::signals2::optional_last_value<void>, int, std::less<int>, boost::function<void(int)>, boost::function<void(const boost::signals2::connection&, int)>, boost::signals2::mutex>::signal1(const boost::signals2::signal1<void, int, boost::signals2::optional_last_value<void>, int, std::less<int>, boost::function<void(int)>, boost::function<void(const boost::signals2::connection&, int)>, boost::signals2::mutex>&)’: In file included from /usr/include/boost/preprocessor/iteration/detail/iter/forward1.hpp:52:0, /usr/include/boost/signals2/detail/signal_template.hpp:578:5: note: synthesized method ‘boost::signals2::signal_base::signal_base(const boost::signals2::signal_base&)’ first required here from /usr/include/boost/signals2.hpp:16, --------- /usr/include/boost/signals2/preprocessed_signal.hpp: In copy constructor ‘boost::signals2::signal<void(int)>::signal(const boost::signals2::signal<void(int)>&)’: In file included from /usr/include/boost/signals2/signal.hpp:36:0, /usr/include/boost/signals2/preprocessed_signal.hpp:42:5: note: synthesized method ‘boost::signals2::signal1<void, int, boost::signals2::optional_last_value<void>, int, std::less<int>, boost::function<void(int)>, boost::function<void(const boost::signals2::connection&, int)>, boost::signals2::mutex>::signal1(const boost::signals2::signal1<void, int, boost::signals2::optional_last_value<void>, int, std::less<int>, boost::function<void(int)>, boost::function<void(const boost::signals2::connection&, int)>, boost::signals2::mutex>&)’ first required here from /home/ockonal/Workspace/Projects/Pseudoform-2/include/Core/Systems.hpp:6,
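    A sketch of the usual workaround: signals2::signal is noncopyable, so the map stores a shared_ptr to the signal inside the boost::any. Note that the exact signature type still has to be named wherever the signal is fetched back out, because any_cast requires it:

        #include <map>
        #include <string>
        #include <boost/any.hpp>
        #include <boost/shared_ptr.hpp>
        #include <boost/signals2.hpp>

        typedef boost::signals2::signal<void (int)> sigKeyPressed;

        int main() {
            std::map<std::string, boost::any> mSignalAssociation;

            // the map owns the signal through a shared_ptr; nothing is ever copied
            boost::shared_ptr<sigKeyPressed> keyPressed(new sigKeyPressed);
            mSignalAssociation["KeyPressed"] = keyPressed;

            // retrieval: any_cast back to the shared_ptr of the concrete signature
            boost::shared_ptr<sigKeyPressed> s =
                boost::any_cast<boost::shared_ptr<sigKeyPressed> >(mSignalAssociation["KeyPressed"]);
            (*s)(42);
            return 0;
        }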

  • WCF: WSDL-first approach: Problems with generating fault types

    - by Juri
    Hi, I'm currently in the process of creating a WCF webservice which should be compatible with WS-I Basic Profile 1.1. I'm using a wsdl-first approach (actually for the first time), defining first the xsd for the complex types, the WSDL and then using svcutil.exe for generating the according server as well as client-side interfaces/proxies. So far everything works fine. Then I decided to add a fault to my WSDL. Regenerating with svcutil succeeded, but then I noticed that my generated fault doesn't have the properties I defined in my xsd file (which is imported by my WSDL). Fault XSD definition <?xml version="1.0" encoding="UTF-8"?> <schema xmlns="http://www.w3.org/2001/XMLSchema" targetNamespace="http://product.mycompany.com/groupsfault_v1.xsd" xmlns:tns="http://product.mycompany.com/groupsfault_v1.xsd"> <complexType name="groupsFault"> <sequence> <element name="code" type="int"/> <element name="message" type="string"/> </sequence> </complexType> </schema> Generated .Net fault object [System.Diagnostics.DebuggerStepThroughAttribute()] [System.CodeDom.Compiler.GeneratedCodeAttribute("System.Runtime.Serialization", "3.0.0.0")] [System.Xml.Serialization.XmlSchemaProviderAttribute("ExportSchema")] [System.Xml.Serialization.XmlRootAttribute(IsNullable=false)] public partial class groupFault : object, System.Xml.Serialization.IXmlSerializable { private System.Xml.XmlNode[] nodesField; private static System.Xml.XmlQualifiedName typeName = new System.Xml.XmlQualifiedName("groupFault", "http://sicp.services.siag.it/groups_v1.wsdl"); public System.Xml.XmlNode[] Nodes { get { return this.nodesField; } set { this.nodesField = value; } } public void ReadXml(System.Xml.XmlReader reader) { this.nodesField = System.Runtime.Serialization.XmlSerializableServices.ReadNodes(reader); } public void WriteXml(System.Xml.XmlWriter writer) { System.Runtime.Serialization.XmlSerializableServices.WriteNodes(writer, this.Nodes); } public System.Xml.Schema.XmlSchema GetSchema() { return null; } public static System.Xml.XmlQualifiedName ExportSchema(System.Xml.Schema.XmlSchemaSet schemas) { System.Runtime.Serialization.XmlSerializableServices.AddDefaultSchema(schemas, typeName); return typeName; } } Is this ok?? I'd expect to have an object created that contains properties for "code" and "message" s.t. you can then throw it by using something like ... throw new FaultException<groupFault>(new groupFault { code=100, message="error" }); ... (sorry for the lower-case type definitions, but this is generated code from the WSDL) Why doesn't the svcutil.exe generate those properties?? Some sources on the web suggested to add the /useSerializerForFaults option. I tried it, it doesn't work giving me an exception that the fault type is missing on the wsdl:portType declaration. Validation with several other tools succeeded however. Any help is VERY appreciated :) thx

  • Python performance: iteration and operations on nested lists

    - by J.J.
    Problem Hey folks. I'm looking for some advice on Python performance. Some background on my problem: Given: A mesh of nodes of size (x,y), each with a value (0...255) starting at 0 A list of N input coordinates each at a specified location within the range (0...x, 0...y) Increment the value of the node at the input coordinate and the node's neighbors within range Z up to a maximum of 255. Neighbors beyond the mesh edge are ignored. (No wrapping) BASE CASE: A mesh of size 1024x1024 nodes, with 400 input coordinates and a range Z of 75 nodes. Processing should be O(x*y*Z*N). I expect x, y and Z to remain roughly around the values in the base case, but the number of input coordinates N could increase up to 100,000. My goal is to minimize processing time. Current results I have 2 current implementations: f1, f2 Running speed on my 2.26 GHz Intel Core 2 Duo with Python 2.6.1: f1: 2.9s f2: 1.8s f1 is the initial naive implementation: three nested for loops. f2 replaces the inner for loop with a list comprehension. Code is included below for your perusal. Question How can I further reduce the processing time? I'd prefer sub-1.0s for the test parameters. Please, keep the recommendations to native Python. I know I can move to a third-party package such as numpy, but I'm trying to avoid any third party packages. Also, I've generated random input coordinates, and simplified the definition of the node value updates to keep our discussion simple. The specifics have to change slightly and are outside the scope of my question. thanks much! f1 is the initial naive implementation: three nested for loops. 2.9s def f1(x,y,n,z): rows = [] for i in range(x): rows.append([0 for i in xrange(y)]) for i in range(n): inputX, inputY = (int(x*random.random()), int(y*random.random())) topleft = (inputX - z, inputY - z) for i in xrange(max(0, topleft[0]), min(topleft[0]+(z*2), x)): for j in xrange(max(0, topleft[1]), min(topleft[1]+(z*2), y)): if rows[i][j] <= 255: rows[i][j] += 1 f2 replaces the inner for loop with a list comprehension. 1.8s def f2(x,y,n,z): rows = [] for i in range(x): rows.append([0 for i in xrange(y)]) for i in range(n): inputX, inputY = (int(x*random.random()), int(y*random.random())) topleft = (inputX - z, inputY - z) for i in xrange(max(0, topleft[0]), min(topleft[0]+(z*2), x)): l = max(0, topleft[1]) r = min(topleft[1]+(z*2), y) rows[i][l:r] = [j+1 for j in rows[i][l:r] if j < 255]
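    One more pure-Python variant for comparison (not benchmarked here): it hoists the column bounds out of the row loop, initialises the rows without the inner comprehension, and caps values with a conditional expression instead of the filtering comprehension in f2, which silently shortens a row once any value reaches 255:

        import random

        def f3(x, y, n, z):
            rows = [[0] * y for _ in xrange(x)]
            for _ in xrange(n):
                inputX, inputY = int(x * random.random()), int(y * random.random())
                top, left = inputX - z, inputY - z
                l, r = max(0, left), min(left + (z * 2), y)
                for i in xrange(max(0, top), min(top + (z * 2), x)):
                    row = rows[i]
                    # slice assignment of equal length keeps the mesh rectangular
                    row[l:r] = [v + 1 if v < 255 else 255 for v in row[l:r]]
            return rows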

  • Post parameters are becoming null (at random)

    - by Raghuram Duraisamy
    Hi, My web application is developed with Struts2 and it was working fine till recently. All of a sudden one of the modules has started malfunctioning. The malfunctioning module is 'Update Student details' page. This page has a lot of fields like 'schoolName', 'degreeName', etc . School 1: <input name="schoolName"> School 2: <input name="schoolName"> ..... School n: <input name="schoolName"> As mentioned earlier, the page was working perfectly fine till recently. Now, one/many of the values of 'schoolName', 'degreeName', etc are being received as "" (EMPTY STRING) on the server-side. For debugging, I used firebug and remote-debugging in eclipse. I find that the post-parameters are correct on the client-side. For instance, during one of the submissions the post-parameters were as below (i noted them from firebug). Content-Type: multipart/form-data; boundary=---------------------------2921238217421 Content-Length: 48893 <OTHER_PARAMETERS> <!--Truncated for clarity --> -----------------------------2921238217421 Content-Disposition: form-data; name="schoolName" ABC Institute -----------------------------2921238217421 Content-Disposition: form-data; name="schoolName" Test School -----------------------------2921238217421 Content-Disposition: form-data; name="schoolName" XYZ -----------------------------2921238217421 Content-Disposition: form-data; name="schoolName" Texas Institute -----------------------------2921238217421 Content-Disposition: form-data; name="schoolName" XXXX School -----------------------------2921238217421-- But on the server-side, the request params were as below: schoolName=[ABC Institute, Test School, XYZ, , XXXX School], "Texas Institute" was received as "" (EMPTY STRING) in this particular case. This is not happening consistently. The parameters that become NULL (or EMPTY STRING) seem random to me - during one instance, parameter schoolName[3] became null as illustrated above, parameter schoolName[2] became null during yet another submission, etc. At times, none of the parameters are nullified. The following is the list of the interceptors in the action definition. List of interceptors: ---------------------- FileUploadInterceptor org.apache.struts2.interceptor.FileUploadInterceptor ServletConfigInterceptor org.apache.struts2.interceptor.ServletConfigInterceptor StaticParametersInterceptor com.opensymphony.xwork2.interceptor.StaticParametersInterceptor ParametersInterceptor com.opensymphony.xwork2.interceptor.ParametersInterceptor MyCustomInterceptor com.xxxx.yyyy.interceptors.GetLoggedOnUserInterceptor This issue appears rather weird to me and I have not been able to zero-in on the exact cause of the issue. Any help in this regard would be highly appreciated. Thanks in advance. Thanks, Raghuram

    Read the article

  • Java Builder pattern with Generic type bounds

    - by I82Much
    Hi all, I'm attempting to create a class with many parameters, using a Builder pattern rather than telescoping constructors. I'm doing this in the way described by Joshua Bloch's Effective Java: a private constructor on the enclosing class and a public static Builder class. The Builder class ensures the object is in a consistent state before calling build(), at which point it delegates construction of the enclosing object to the private constructor. Thus:

        public class Foo {
            // Many variables

            private Foo(Builder b) {
                // Use all of b's variables to initialize self
            }

            public static final class Builder {
                public Builder(/* required variables */) {
                }

                public Builder var1(Var var) {
                    // set it
                    return this;
                }

                public Foo build() {
                    return new Foo(this);
                }
            }
        }

    I then want to add type bounds to some of the variables, and thus need to parametrize the class definition. I want the bounds of the Foo class to be the same as those of the Builder class:

        public class Foo<Q extends Quantity> {
            private final Unit<Q> units;
            // Many variables

            private Foo(Builder<Q> b) {
                // Use all of b's variables to initialize self
            }

            public static final class Builder<Q extends Quantity> {
                private Unit<Q> units;

                public Builder(/* required variables */) {
                }

                public Builder units(Unit<Q> units) {
                    this.units = units;
                    return this;
                }

                public Foo build() {
                    return new Foo<Q>(this);
                }
            }
        }

    This compiles fine, but the compiler is allowing me to do things I feel should be compiler errors. E.g.:

        public static final Foo.Builder<Acceleration> x_Body_AccelField = new Foo.Builder<Acceleration>()
            .units(SI.METER)
            .build();

    Here the units argument is not Unit<Acceleration> but Unit<Length>, yet it is still accepted by the compiler. What am I doing wrong here? I want to ensure at compile time that the unit types match up correctly.
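
    For comparison, the stand-alone sketch below (stand-in Quantity/Unit types, not the poster's measurement API, with names chosen only for illustration) keeps every reference in the chain parameterized: units() returns Builder<Q> and build() returns Foo<Q>. With the chain fully parameterized, passing a Unit<Length> where Unit<Acceleration> is expected is rejected at compile time. One thing worth checking in the real code is whether any receiver in the chain is used as a raw type, since members of a raw type have erased signatures and a mismatch then surfaces only as an unchecked warning.

        // Stand-in types for illustration only; not the poster's actual unit library.
        interface Quantity {}
        interface Length extends Quantity {}
        interface Acceleration extends Quantity {}

        class Unit<Q extends Quantity> {}

        final class Foo<Q extends Quantity> {
            private final Unit<Q> units;

            private Foo(Builder<Q> b) {
                this.units = b.units;
            }

            static final class Builder<Q extends Quantity> {
                private Unit<Q> units;

                Builder<Q> units(Unit<Q> units) {   // parameterized return type, not the raw Builder
                    this.units = units;
                    return this;
                }

                Foo<Q> build() {                    // parameterized return type, not the raw Foo
                    return new Foo<Q>(this);
                }
            }
        }

        class Demo {
            static final Unit<Length> METER = new Unit<Length>();
            static final Unit<Acceleration> METER_PER_SQUARE_SECOND = new Unit<Acceleration>();

            static final Foo<Acceleration> accelField =
                    new Foo.Builder<Acceleration>()
                            .units(METER_PER_SQUARE_SECOND)   // matching quantity: compiles
                            .build();

            // Replacing the argument above with METER (a Unit<Length>) fails to compile,
            // as long as no reference in the chain is raw.
        }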

    Read the article

  • How does mysql define DISTINCT() in reference documentation

    - by goran
    EDIT: This question is about finding a definitive reference to MySQL syntax on SELECT-modifying keywords and functions. /EDIT

    AFAIK SQL defines two uses of the DISTINCT keyword: SELECT DISTINCT field... and SELECT COUNT(DISTINCT field)... However, in one of the web applications that I administer I've noticed performance issues on queries like:

    SELECT DISTINCT(field1), field2, field3 ...

    DISTINCT() on a single column makes no sense, and I am almost sure it is interpreted as:

    SELECT DISTINCT field1, field2, field3 ...

    but how can I prove this? I've searched the MySQL site for a reference on this particular syntax, but could not find any. Does anyone have a link to a definition of DISTINCT() in MySQL, or know of another authoritative source on this? Best

    EDIT: After asking the same question on the MySQL forums, I learned that while parsing the SQL, MySQL does not care about whitespace between functions and column names (but I am still missing a reference). It seems you can have whitespace between a function and the parenthesis:

    SELECT LEFT (field1,1), field2...

    and get MySQL to understand it as SELECT LEFT(field1,1). Similarly,

    SELECT DISTINCT(field1), field2...

    seems to get decomposed to SELECT DISTINCT (field1), field2..., and then DISTINCT is taken not as some undefined (or undocumented) function but as a SELECT-modifying keyword, and the parentheses around field1 are evaluated as if they were part of the field expression. It would be great if someone had a pointer to documentation stating that the whitespace between functions and parentheses is not significant, or could provide links to appropriate MySQL forums or mailing lists where I could raise the question of putting this into the reference.

    EDIT: I have found a reference to the server option IGNORE SPACE. It states that "The IGNORE SPACE SQL mode can be used to modify how the parser treats function names that are whitespace-sensitive", and later that recent versions of MySQL have reduced this number from about 200 to 30. One of the remaining 30 is COUNT, for example. With IGNORE SPACE enabled, both

    SELECT COUNT(*) FROM mytable;
    SELECT COUNT (*) FROM mytable;

    are legal. So if this is an exception, I am left to conclude that functions normally ignore space by default. And if functions ignore space by default, then in an ambiguous context, such as the first function in the first item of the select expression, they are not distinguishable from keywords, so no error can be thrown and MySQL must accept them as keywords. Still, my conclusions rest on a lot of assumptions; I would be grateful for, and will accept, any pointers on where to follow up on this.
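
    A short illustration of the two points above; the table and column names are made up, and the behaviour shown for COUNT matches the IGNORE_SPACE SQL mode described in the last edit:

        -- DISTINCT is a SELECT modifier, not a function, so in the first two statements
        -- the parentheses just group the first column expression; all three return the
        -- same distinct rows (table and column names are made up for illustration):
        SELECT DISTINCT(city), country FROM customers;
        SELECT DISTINCT (city), country FROM customers;
        SELECT DISTINCT city, country FROM customers;

        -- COUNT is one of the remaining whitespace-sensitive function names:
        SELECT COUNT (*) FROM customers;          -- parse error under the default sql_mode

        SET SESSION sql_mode = 'IGNORE_SPACE';    -- the option discussed above
        SELECT COUNT (*) FROM customers;          -- now legal: the space before '(' is ignored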

    Read the article

  • How to get rid of annoying HorizontalContentAlignment binding warning?

    - by marco.ragogna
    I am working on a large WPF project, and during debug my Output window is filled with these annoying warnings:

    System.Windows.Data Information: 10 : Cannot retrieve value using the binding and no valid fallback value exists; using default instead.
    BindingExpression:Path=HorizontalContentAlignment; DataItem=null; target element is 'ComboBoxItem' (Name=''); target property is 'HorizontalContentAlignment' (type 'HorizontalAlignment')

    In this specific example, ComboBoxItem is styled this way:

        <Style x:Key="{x:Type ComboBoxItem}" TargetType="{x:Type ComboBoxItem}">
            <Setter Property="OverridesDefaultStyle" Value="True"/>
            <Setter Property="SnapsToDevicePixels" Value="True"/>
            <Setter Property="Template">
                <Setter.Value>
                    <ControlTemplate TargetType="{x:Type ComboBoxItem}">
                        <Border Name="bd" Padding="4,4,4,4" SnapsToDevicePixels="True" CornerRadius="2,2,2,2">
                            <ContentPresenter />
                        </Border>
                        <ControlTemplate.Triggers>
                            <Trigger Property="IsHighlighted" Value="true">
                                <Setter TargetName="bd" Property="Background" Value="{StaticResource MediumBrush}"/>
                                <Setter TargetName="bd" Property="Padding" Value="4,4,4,4"/>
                                <Setter TargetName="bd" Property="CornerRadius" Value="2,2,2,2"/>
                            </Trigger>
                        </ControlTemplate.Triggers>
                    </ControlTemplate>
                </Setter.Value>
            </Setter>
        </Style>

    I know that the problem is generated by the default theme definition for ComboBoxItem, which contains things like:

        <Setter Property="Control.HorizontalContentAlignment">
            <Setter.Value>
                <Binding Path="HorizontalContentAlignment" RelativeSource="{RelativeSource Mode=FindAncestor, AncestorType=ItemsControl, AncestorLevel=1}" />
            </Setter.Value>
        </Setter>

    but I also thought that using

        <Setter Property="OverridesDefaultStyle" Value="True"/>

    would have taken care of the problem; instead, the warnings are still there. Any help is really appreciated.
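
    One workaround that is often suggested for this warning, sketched here as an assumption rather than something verified against this particular template, is to give HorizontalContentAlignment and VerticalContentAlignment explicit values in the item container style, so the theme's FindAncestor binding is never consulted:

        <!-- Sketch only: explicit alignment setters replace the default theme's
             RelativeSource binding, which fails while DataItem is null.
             The values shown are arbitrary; adjust them to the desired layout. -->
        <Style x:Key="{x:Type ComboBoxItem}" TargetType="{x:Type ComboBoxItem}">
            <Setter Property="OverridesDefaultStyle" Value="True"/>
            <Setter Property="SnapsToDevicePixels" Value="True"/>
            <Setter Property="HorizontalContentAlignment" Value="Left"/>
            <Setter Property="VerticalContentAlignment" Value="Center"/>
            <!-- existing Template setter unchanged -->
        </Style>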

    Read the article

< Previous Page | 376 377 378 379 380 381 382 383 384 385 386 387  | Next Page >