Search Results

Search found 17289 results on 692 pages for 'xml transform'.


  • Pick-Up: Web services interoperability between Oracle WebLogic Server 11g and Microsoft .NET WCF 4.0 | WebLogic Channel

    - by ???02
    An article on building interoperable Web services between Oracle WebLogic Server 11g and the Microsoft .NET platform using the WS-* standards: the Oracle side built with Oracle JDeveloper 11g and Oracle WebLogic Server, the Microsoft side with Visual Studio 2010, the .NET 4.0 Framework, and Windows Communication Foundation 4.0. Technologies covered: Oracle JDeveloper 11g, Oracle WebLogic Server 11g, Java API for XML Web Services (JAX-WS) 2.0 (JSR-224), Java Platform, Enterprise Edition (Java EE), Microsoft Visual Studio 2010, Microsoft .NET 4.0, Windows Communication Foundation (WCF) 4.0, WS-Security, WS-SecurityPolicy, WS-Profile, X.509 Token Profile, X.509 certificates, XML, and C#.

    Read the article

  • Unable to start Tomcat6 with HTTPS enabled

    - by ram
    I have the following server.xml settings for my tomcat6 server <!-- COMMENTED <Connector port="8080" maxThreads="150" enableLookups="false" acceptCount="100" scheme="http" redirectPort="8443"/> --> <!-- COMMENTED <Connector port="80" maxThreads="150" enableLookups="false" acceptCount="100" scheme="http" redirectPort="443"/> --> <Connector port="443" maxHttpHeaderSize="8192" maxThreads="150" enableLookups="false" disableUploadTimeout="true" acceptCount="100" scheme="https" secure="true" SSLEnabled="true" SSLCertificateFile="%SSL_CERT%" SSLCertificateKeyFile="%SSL_KEY%" SSLCipherSuite="ALL:!ADH:!kEDH:!SSLv2:!EXPORT40:!EXP:!LOW" compression="on" compressableMimeType="text/html,text/xml,text/plain,application/javascript,application/json,text/javascript"/> Complete server.xml is here but when I try to start the application I get the following error in catalina.*.log file INFO: Initializing Coyote HTTP/1.1 on http-80 Apr 7, 2013 8:38:38 PM org.apache.coyote.http11.Http11AprProtocol init SEVERE: Error initializing endpoint java.lang.Exception: Invalid Server SSL Protocol (error:00000000:lib(0):func(0):reason(0)) at org.apache.tomcat.jni.SSLContext.make(Native Method) at org.apache.tomcat.util.net.AprEndpoint.init(AprEndpoint.java:729) at org.apache.coyote.http11.Http11AprProtocol.init(Http11AprProtocol.java:107) at org.apache.catalina.connector.Connector.initialize(Connector.java:1049) at org.apache.catalina.core.StandardService.initialize(StandardService.java:703) at org.apache.catalina.core.StandardServer.initialize(StandardServer.java:838) at org.apache.catalina.startup.Catalina.load(Catalina.java:538) at org.apache.catalina.startup.Catalina.load(Catalina.java:562) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.catalina.startup.Bootstrap.load(Bootstrap.java:261) at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:413) Apr 7, 2013 8:38:38 PM org.apache.catalina.core.StandardService initialize SEVERE: Failed to initialize connector [Connector[HTTP/1.1-443]] LifecycleException: Protocol handler initialization failed: java.lang.Exception: Invalid Server SSL Protocol (error:00000000:lib(0):func(0):reason(0)) at org.apache.catalina.connector.Connector.initialize(Connector.java:1051) at org.apache.catalina.core.StandardService.initialize(StandardService.java:703) at org.apache.catalina.core.StandardServer.initialize(StandardServer.java:838) at org.apache.catalina.startup.Catalina.load(Catalina.java:538) at org.apache.catalina.startup.Catalina.load(Catalina.java:562) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.catalina.startup.Bootstrap.load(Bootstrap.java:261) at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:413) I've checked the following things already I have given read permissions for everyone for .crt and .key files I copied server.xml to a different working tomcat6 server and it works there, server.xml from the mentioned working tomcat5 webserver doesn't work here and it fails with the same error Works well with just HTTP enabled explicitly mentioning protocol in the 
Connector i.e. protocol="org.apache.coyote.http11.Http11AprProtocol" results in the same exception Please help me if I am missing something. Thanks in advance
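
    The exception is raised by the APR/native connector (Http11AprProtocol) inside SSLContext.make. One hedged workaround sketch is to switch the connector to the JSSE implementation, which uses a Java keystore instead of the SSLCertificateFile/SSLCertificateKeyFile pair; the keystore path and password below are placeholders, not values from the original configuration:

        <Connector port="443" protocol="org.apache.coyote.http11.Http11Protocol"
                   maxThreads="150" scheme="https" secure="true" SSLEnabled="true"
                   keystoreFile="/path/to/keystore.jks" keystorePass="changeit"
                   sslProtocol="TLS" />

    If the APR connector must stay, explicitly setting SSLProtocol="all" on the Connector is another commonly tried adjustment.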

    Read the article

  • netsh wlan add profile not importing encrypted passphrase

    - by sirlancelot
    I exported a wireless network connection profile from a Windows 7 machine correctly connected to a WiFi network with a WPA-TKIP passphrase. The exported xml file shows the correct settings and a keyMaterial node which I can only guess is the encrypted passphrase. When I take the xml to another Windows 7 computer and import it using netsh wlan add profile filename="WiFi.xml", it correctly adds the profile's SSID and encryption type, but a balloon pops up saying that I need to enter the passphrase. Is there a way to import the passphrase along with all other settings or am I missing something about adding profiles? Here is the exported xml with personal information removed: <?xml version="1.0"?> <WLANProfile xmlns="http://www.microsoft.com/networking/WLAN/profile/v1"> <name>[removed]</name> <SSIDConfig> <SSID> <hex>[removed]</hex> <name>[removed]</name> </SSID> <nonBroadcast>false</nonBroadcast> </SSIDConfig> <connectionType>ESS</connectionType> <connectionMode>auto</connectionMode> <autoSwitch>false</autoSwitch> <MSM> <security> <authEncryption> <authentication>WPAPSK</authentication> <encryption>TKIP</encryption> <useOneX>false</useOneX> </authEncryption> <sharedKey> <keyType>passPhrase</keyType> <protected>true</protected> <keyMaterial>[removed]</keyMaterial> </sharedKey> </security> </MSM> </WLANProfile> Any help or advice is appreciated. Thanks. Update: It seems if I export the settings using key=clear, the passphrase is stored in the file unprotected and I can import the file on another computer without issue. I've updated my question to reflect my findings.
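
    For reference, a sketch of that export/import round trip with the key exported in clear text; the profile name, folder, and exported file name (which follows the <interface>-<profile>.xml pattern) are placeholders:

        netsh wlan export profile name="MyNetwork" key=clear folder=C:\profiles
        netsh wlan add profile filename="C:\profiles\Wi-Fi-MyNetwork.xml" user=all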

    Read the article

  • CommunicationException when shutting down JBoss 4.2.2

    - by Brian
    I have deployed an application using JBoss 4.2.2 on a 64-bit RHEL5 server. Since there are other JBoss servers, I had to change some port configurations so that there would be no conflicts when starting the server. So right now I'm using ports-01 from the sample-bindings.xml file that came in the docs/examples/binding-manager/samples directory. In addition, below is a list of all the files I've edited to reflect the new ports: JBOSS_HOME/servers/default/deploy/jboss-web.deployer/server.xml: Changed Connector port - 8080 to 8180 Changed AJP 1.3 Connector port - 8009 to 8109 JBOSS_HOME/server/default/deploy/jbossws.beans/META-INF/jboss-beans.xml Changed 8080 to 8180 JBOSS_HOME/server/default/conf/jboss-service.xml: Changed 8083 to 8183 Changed 1099 to 1299 Changed 1098 to 1298 Changed 4444 to 4644 Changed 4445 to 4645 Changed 4446 to 4646 Changed 4447 to 4647 JBOSS_HOME/server/default/conf/jboss-minimal.xml: Changed 1099 to 1299 Changed 1098 to 1298 When I start the server (binding to localhost) everything is fine and I'm able to access the application. But when I try to shutdown the server I get the following error: Exception in thread "main" javax.naming.CommunicationException: Could not obtain connection to any of these urls: localhost [Root exception is javax.naming.CommunicationException : Failed to connect to server localhost:1099 [Root exception is javax.naming.ServiceUnavailableException: Failed to connect to server localhost:1099 [Root exception is java.net.ConnectException: Connection refused]]] at org.jnp.interfaces.NamingContext.checkRef(NamingContext.java:1562) at org.jnp.interfaces.NamingContext.lookup(NamingContext.java:634) at org.jnp.interfaces.NamingContext.lookup(NamingContext.java:627) at javax.naming.InitialContext.lookup(InitialContext.java:392) at org.jboss.Shutdown.main(Shutdown.java:214) Caused by: javax.naming.CommunicationException: Failed to connect to server localhost:1099 [Root exception is javax.naming.ServiceUnavailableException: Failed to connect to server localhost:1099 [Root exception is java.net.ConnectException: Connection refused]] at org.jnp.interfaces.NamingContext.getServer(NamingContext.java:274) at org.jnp.interfaces.NamingContext.checkRef(NamingContext.java:1533) ... 4 more Caused by: javax.naming.ServiceUnavailableException: Failed to connect to server localhost:1099 [Root exception is java.net.ConnectException: Connection refused] at org.jnp.interfaces.NamingContext.getServer(NamingContext.java:248) ... 5 more Caused by: java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333) at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195) at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366) at java.net.Socket.connect(Socket.java:525) at java.net.Socket.connect(Socket.java:475) at java.net.Socket.(Socket.java:372) at java.net.Socket.(Socket.java:273) at org.jnp.interfaces.TimedSocketFactory.createSocket(TimedSocketFactory.java:84) at org.jnp.interfaces.TimedSocketFactory.createSocket(TimedSocketFactory.java:77) at org.jnp.interfaces.NamingContext.getServer(NamingContext.java:244) ... 5 more Is there any other file that I need to change the 1099 to 1299, or am I missing some other step?
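
    The stock shutdown client defaults to the JNDI port 1099, so after moving the naming service to 1299 it presumably has to be pointed at the new port as well; a hedged sketch:

        cd $JBOSS_HOME/bin
        ./shutdown.sh -S -s jnp://localhost:1299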

    Read the article

  • WordPress not resizing images with Nginx + php-fpm and other issues

    - by Julian Fernandes
    Recently i setup a Ubuntu 12.04 VPS with 512mb/1ghz CPU, Nginx + php-fpm + Varnish + APC + Percona's MySQL server + CloudFlare Pro for our Ubuntu LoCo Team's WordPress blog. The blog get about 3~4k daily hits, use about 180MB and 8~20% CPU. Everything seems to be working insanely fast... page load is really good and is about 16x faster than any of our competitors... but there is one problem. When we upload a image, WordPress don't resize it, so all we can do it insert the full image in the post. If the imagem have, let's say, 30kb, it resize fine... but if the image have 100kb+, it won't... In nginx error logs i see this: upstream timed out (110: Connection timed out) while reading response header from upstream, client: 150.162.216.64, server: www.ubuntubrsc.com, request: "POST /wp-admin/async-upload.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "www.ubuntubrsc.com", referrer: "http://www.ubuntubrsc.com/wp-admin/media-upload.php?post_id=2668&" It seems to be related with the issue, but i dunno. When that timeout happens, i started to get it when i'm trying to view a post too: upstream timed out (110: Connection timed out) while reading response header from upstream, client: 150.162.216.64, server: www.ubuntubrsc.com, request: "GET /tutoriais-gimp-6-adicionando-aplicando-novos-pinceis.html HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "www.ubuntubrsc.com", referrer: "http://www.ubuntubrsc.com/" And only a restart of php5-fpm fix it. I tryed increasing some timeouts and stuffs but it did not worked, so i guess it's some kind of limitation i did not figured yet. Could someone help me with it, please? /etc/nginx/nginx.conf: user www-data; worker_processes 1; pid /var/run/nginx.pid; events { worker_connections 1024; use epoll; multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay off; keepalive_timeout 15; keepalive_requests 2000; types_hash_max_size 2048; server_tokens off; server_name_in_redirect off; open_file_cache max=1000 inactive=300s; open_file_cache_valid 360s; open_file_cache_min_uses 2; open_file_cache_errors off; server_names_hash_bucket_size 64; # server_name_in_redirect off; client_body_buffer_size 128K; client_header_buffer_size 1k; client_max_body_size 2m; large_client_header_buffers 4 8k; client_body_timeout 10m; client_header_timeout 10m; send_timeout 10m; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## error_log /var/log/nginx/error.log; access_log off; ## # CloudFlare's IPs (uncomment when site goes live) ## set_real_ip_from 204.93.240.0/24; set_real_ip_from 204.93.177.0/24; set_real_ip_from 199.27.128.0/21; set_real_ip_from 173.245.48.0/20; set_real_ip_from 103.22.200.0/22; set_real_ip_from 141.101.64.0/18; set_real_ip_from 108.162.192.0/18; set_real_ip_from 190.93.240.0/20; real_ip_header CF-Connecting-IP; set_real_ip_from 127.0.0.1/32; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; gzip_vary on; gzip_proxied any; gzip_comp_level 9; gzip_min_length 1000; gzip_proxied expired no-cache no-store private auth; gzip_buffers 32 8k; # gzip_http_version 1.1; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; ## # nginx-naxsi config ## # Uncomment it if you installed nginx-naxsi ## #include /etc/nginx/naxsi_core.rules; ## # nginx-passenger config ## # Uncomment it if you installed nginx-passenger ## #passenger_root /usr; #passenger_ruby /usr/bin/ruby; ## # 
Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } /etc/nginx/fastcgi_params: fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_FILENAME $request_filename; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; fastcgi_param HTTPS $https; fastcgi_send_timeout 180; fastcgi_read_timeout 180; fastcgi_buffer_size 128k; fastcgi_buffers 256 4k; # PHP only, required if PHP was built with --enable-force-cgi-redirect fastcgi_param REDIRECT_STATUS 200; /etc/nginx/sites-avaiable/default: ## # DEFAULT HANDLER # ubuntubrsc.com ## server { listen 8080; # Make site available from main domain server_name www.ubuntubrsc.com; # Root directory root /var/www; index index.php index.html index.htm; include /var/www/nginx.conf; access_log off; location / { try_files $uri $uri/ /index.php?q=$uri&$args; } location = /favicon.ico { log_not_found off; access_log off; } location = /robots.txt { allow all; log_not_found off; access_log off; } location ~ /\. { deny all; access_log off; log_not_found off; } location ~* ^/wp-content/uploads/.*.php$ { deny all; access_log off; log_not_found off; } rewrite /wp-admin$ $scheme://$host$uri/ permanent; error_page 404 = @wordpress; log_not_found off; location @wordpress { include /etc/nginx/fastcgi_params; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_param SCRIPT_NAME /index.php; fastcgi_param SCRIPT_FILENAME $document_root/index.php; } location ~ \.php$ { try_files $uri =404; include /etc/nginx/fastcgi_params; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; if (-f $request_filename) { fastcgi_pass unix:/var/run/php5-fpm.sock; } } } server { listen 8080; server_name ubuntubrsc.* www.ubuntubrsc.net www.ubuntubrsc.org www.ubuntubrsc.com.br www.ubuntubrsc.info www.ubuntubrsc.in; return 301 $scheme://www.ubuntubrsc.com$request_uri; } /var/www/nginx.conf: # BEGIN W3TC Minify cache location ~ /wp-content/w3tc/min.*\.js$ { types {} default_type application/x-javascript; expires modified 31536000s; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; add_header Vary "Accept-Encoding"; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; } location ~ /wp-content/w3tc/min.*\.css$ { types {} default_type text/css; expires modified 31536000s; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; add_header Vary "Accept-Encoding"; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; } location ~ /wp-content/w3tc/min.*js\.gzip$ { gzip off; types {} default_type application/x-javascript; expires modified 31536000s; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; add_header Vary "Accept-Encoding"; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; add_header Content-Encoding gzip; } location ~ 
/wp-content/w3tc/min.*css\.gzip$ { gzip off; types {} default_type text/css; expires modified 31536000s; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; add_header Vary "Accept-Encoding"; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; add_header Content-Encoding gzip; } # END W3TC Minify cache # BEGIN W3TC Browser Cache gzip on; gzip_types text/css application/x-javascript text/x-component text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon; location ~ \.(css|js|htc)$ { expires 31536000s; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; } location ~ \.(html|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml)$ { expires 3600s; add_header Pragma "public"; add_header Cache-Control "max-age=3600, public, must-revalidate, proxy-revalidate"; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; try_files $uri $uri/ $uri.html /index.php?$args; } location ~ \.(asf|asx|wax|wmv|wmx|avi|bmp|class|divx|doc|docx|eot|exe|gif|gz|gzip|ico|jpg|jpeg|jpe|mdb|mid|midi|mov|qt|mp3|m4a|mp4|m4v|mpeg|mpg|mpe|mpp|otf|odb|odc|odf|odg|odp|ods|odt|ogg|pdf|png|pot|pps|ppt|pptx|ra|ram|svg|svgz|swf|tar|tif|tiff|ttf|ttc|wav|wma|wri|xla|xls|xlsx|xlt|xlw|zip)$ { expires 31536000s; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; } # END W3TC Browser Cache # BEGIN W3TC Minify core rewrite ^/wp-content/w3tc/min/w3tc_rewrite_test$ /wp-content/w3tc/min/index.php?w3tc_rewrite_test=1 last; set $w3tc_enc ""; if ($http_accept_encoding ~ gzip) { set $w3tc_enc .gzip; } if (-f $request_filename$w3tc_enc) { rewrite (.*) $1$w3tc_enc break; } rewrite ^/wp-content/w3tc/min/(.+\.(css|js))$ /wp-content/w3tc/min/index.php?file=$1 last; # END W3TC Minify core # BEGIN W3TC Skip 404 error handling by WordPress for static files if (-f $request_filename) { break; } if (-d $request_filename) { break; } if ($request_uri ~ "(robots\.txt|sitemap(_index)?\.xml(\.gz)?|[a-z0-9_\-]+-sitemap([0-9]+)?\.xml(\.gz)?)") { break; } if ($request_uri ~* \.(css|js|htc|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml|asf|asx|wax|wmv|wmx|avi|bmp|class|divx|doc|docx|eot|exe|gif|gz|gzip|ico|jpg|jpeg|jpe|mdb|mid|midi|mov|qt|mp3|m4a|mp4|m4v|mpeg|mpg|mpe|mpp|otf|odb|odc|odf|odg|odp|ods|odt|ogg|pdf|png|pot|pps|ppt|pptx|ra|ram|svg|svgz|swf|tar|tif|tiff|ttf|ttc|wav|wma|wri|xla|xls|xlsx|xlt|xlw|zip)$) { return 404; } # END W3TC Skip 404 error handling by WordPress for static files # BEGIN Better WP Security location ~ /\.ht { deny all; } location ~ wp-config.php { deny all; } location ~ readme.html { deny all; } location ~ readme.txt { deny all; } location ~ /install.php { deny all; } set $susquery 0; set $rule_2 0; set $rule_3 0; rewrite ^wp-includes/(.*).php /not_found last; rewrite ^/wp-admin/includes(.*)$ /not_found last; if ($request_method ~* "^(TRACE|DELETE|TRACK)"){ return 403; } set $rule_0 0; if ($request_method ~ "POST"){ set $rule_0 1; } if ($uri ~ "^(.*)wp-comments-post.php*"){ set $rule_0 2$rule_0; } if ($http_user_agent ~ "^$"){ set $rule_0 4$rule_0; } if ($rule_0 = "421"){ return 403; } if ($args ~* "\.\./") { set $susquery 1; } if ($args ~* "boot.ini") { set $susquery 1; } if ($args ~* "tag=") { set $susquery 1; } if ($args ~* "ftp:") { set $susquery 1; } if ($args ~* "http:") { set $susquery 1; } if ($args ~* "https:") { set $susquery 1; } if ($args ~* 
"(<|%3C).*script.*(>|%3E)") { set $susquery 1; } if ($args ~* "mosConfig_[a-zA-Z_]{1,21}(=|%3D)") { set $susquery 1; } if ($args ~* "base64_encode") { set $susquery 1; } if ($args ~* "(%24&x)") { set $susquery 1; } if ($args ~* "(\[|\]|\(|\)|<|>|ê|\"|;|\?|\*|=$)"){ set $susquery 1; } if ($args ~* "(&#x22;|&#x27;|&#x3C;|&#x3E;|&#x5C;|&#x7B;|&#x7C;|%24&x)"){ set $susquery 1; } if ($args ~* "(%0|%A|%B|%C|%D|%E|%F|127.0)") { set $susquery 1; } if ($args ~* "(globals|encode|localhost|loopback)") { set $susquery 1; } if ($args ~* "(request|select|insert|concat|union|declare)") { set $susquery 1; } if ($http_cookie !~* "wordpress_logged_in_" ) { set $susquery "${susquery}2"; set $rule_2 1; set $rule_3 1; } if ($susquery = 12) { return 403; } # END Better WP Security /etc/php5/fpm/php-fpm.conf: pid = /var/run/php5-fpm.pid error_log = /var/log/php5-fpm.log emergency_restart_threshold = 3 emergency_restart_interval = 1m process_control_timeout = 10s events.mechanism = epoll /etc/php5/fpm/php.ini (only options i changed): open_basedir ="/var/www/" disable_functions = pcntl_alarm,pcntl_fork,pcntl_waitpid,pcntl_wait,pcntl_wifexited,pcntl_wifstopped,pcntl_wifsignaled,pcntl_wexitstatus,pcntl_wtermsig,pcntl_wstopsig,pcntl_signal,pcntl_signal_dispatch,pcntl_get_last_error,pcntl_strerror,pcntl_sigprocmask,pcntl_sigwaitinfo,pcntl_sigtimedwait,pcntl_exec,pcntl_getpriority,pcntl_setpriority,dl,system,shell_exec,fsockopen,parse_ini_file,passthru,popen,proc_open,proc_close,shell_exec,show_source,symlink,proc_close,proc_get_status,proc_nice,proc_open,proc_terminate,shell_exec ,highlight_file,escapeshellcmd,define_syslog_variables,posix_uname,posix_getpwuid,apache_child_terminate,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,escapeshellarg,posix_uname,ftp_exec,ftp_connect,ftp_login,ftp_get,ftp_put,ftp_nb_fput,ftp_raw,ftp_rawlist,ini_alter,ini_restore,inject_code,syslog,openlog,define_syslog_variables,apache_setenv,mysql_pconnect,eval,phpAds_XmlRpc,phpA ds_remoteInfo,phpAds_xmlrpcEncode,phpAds_xmlrpcDecode,xmlrpc_entity_decode,fp,fput,virtual,show_source,pclose,readfile,wget expose_php = off max_execution_time = 30 max_input_time = 60 memory_limit = 128M display_errors = Off post_max_size = 2M allow_url_fopen = off default_socket_timeout = 60 APC settings: [APC] apc.enabled = 1 apc.shm_segments = 1 apc.shm_size = 64M apc.optimization = 0 apc.num_files_hint = 4096 apc.ttl = 60 apc.user_ttl = 7200 apc.gc_ttl = 0 apc.cache_by_default = 1 apc.filters = "" apc.mmap_file_mask = "/tmp/apc.XXXXXX" apc.slam_defense = 0 apc.file_update_protection = 2 apc.enable_cli = 0 apc.max_file_size = 10M apc.stat = 1 apc.write_lock = 1 apc.report_autofilter = 0 apc.include_once_override = 0 apc.localcache = 0 apc.localcache.size = 512 apc.coredump_unmap = 0 apc.stat_ctime = 0 /etc/php5/fpm/pool.d/www.conf user = www-data group = www-data listen = /var/run/php5-fpm.sock listen.owner = www-data listen.group = www-data listen.mode = 0666 pm = ondemand pm.max_children = 5 pm.process_idle_timeout = 3s; pm.max_requests = 50 I also started to get 404 errors in front page if i use W3 Total Cache's Page Cache (Disk Enhanced). It worked fine untill somedays ago, and then, out of nowhere, it started to happen. Tonight i will disable my mobile plugin and activate only W3 Total Cache to see if it's a conflict with them... And to finish all this, i have been getting this error: PHP Warning: apc_store(): Unable to allocate memory for pool. 
in /var/www/wp-content/plugins/w3-total-cache/lib/W3/Cache/Apc.php on line 41 I already modifed my APC settings, but no sucess. So... could anyone help me with those issuees, please? Ooohh... if it helps, i instaled PHP like this: sudo apt-get install php5-fpm php5-suhosin php-apc php5-gd php5-imagick php5-curl And Nginx from the official PPA. Sorry for my bad english and thanks for your time people! (:
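
    One direction worth sketching (values are illustrative, not taken from the configuration above): WordPress resizes images in PHP, so the PHP-side limits and the APC cache size matter as much as the nginx fastcgi timeouts, which are already at 180 s.

        ; /etc/php5/fpm/php.ini -- illustrative values
        max_execution_time = 120
        memory_limit = 256M
        upload_max_filesize = 8M
        post_max_size = 8M

        ; /etc/php5/fpm/pool.d/www.conf -- illustrative
        request_terminate_timeout = 120s

        ; APC -- illustrative, for the "Unable to allocate memory for pool" warning
        apc.shm_size = 128M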

    Read the article

  • How to easily change columns into rows in Excel?

    - by DavidD
    I have a huge Excel table that I need to transform into paragraphs for a Word report, and I can't find an efficient way to do it. The source looks like this: And I would ultimately need something like this, i.e. through a pivot table. Note that "Item C", which doesn't have any description values, is skipped: Now, to get there I believe I need to transform my source into this intermediate format, which has one description per line: How do I get from the source to the intermediate format in an efficient way? Or maybe there is an easier way to produce the target format that I don't know of? Any help is welcome!
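
    Since the screenshots are not visible here, assume a layout with the item name in column A and its description values spread across columns B onward, one row per item; a minimal VBA sketch that writes the intermediate one-description-per-line format to a new sheet (items with no descriptions, like "Item C", simply produce no rows):

        Sub UnpivotToRows()
            Dim src As Worksheet, dst As Worksheet
            Dim r As Long, c As Long, outRow As Long
            Set src = ActiveSheet
            Set dst = Worksheets.Add
            outRow = 1
            For r = 2 To src.Cells(src.Rows.Count, 1).End(xlUp).Row          ' each data row
                For c = 2 To src.Cells(r, src.Columns.Count).End(xlToLeft).Column
                    If Len(src.Cells(r, c).Value) > 0 Then                   ' skip empty descriptions
                        dst.Cells(outRow, 1).Value = src.Cells(r, 1).Value   ' item
                        dst.Cells(outRow, 2).Value = src.Cells(r, c).Value   ' description
                        outRow = outRow + 1
                    End If
                Next c
            Next r
        End Sub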

    Read the article

  • IPSEC site-to-site Openswan to Cisco ASA

    - by Jim
    I recieved a list of commands that were run on the right side of the VPN tunnel which is where the Cisco ASA resides. On my side, I have a linux based firewall running debian with openswan installed. I am having an issue with getting to Phase 2 of the VPN negotiation. Here is the Cisco Information I was sent: {my_public_ip} = left side of connection tunnel-group {my_public_ip} type ipsec-l2l tunnel-group {my_public_ip} ipsec-attributes pre-shared-key fakefake crypto map vpn1 1 match add customer-ipsec crypto map vpn1 1 set peer {my_public_ip} crypto map vpn1 1 set transform-set aes-256-sha crypto map vpn1 interface outside static (outside,inside) 10.2.1.200 {my_public_ip} netmask 255.255.255.255 crypto ipsec transform-set aes-256-sha esp-aes-256 esp-sha-hmac crypto ipsec security-association lifetime seconds 28800 crypto ipsec security-association lifetime kilobytes 4608000 crypto map vpn1 1 match address customer-ipsec crypto map vpn1 1 set peer {my_public_ip} crypto map vpn1 1 set transform-set aes-256-sha crypto map vpn1 interface outside crypto isakmp enable outside crypto isakmp policy 1 authentication pre-share encryption aes-256 hash sha group 2 lifetime 86400 Myside ipsec.conf config setup klipsdebug=none plutodebug=none protostack=netkey #nat_traversal=yes conn cisco #name of VPN connection type=tunnel authby=secret #left side (myside) left={myPublicIP} leftsubnet=172.16.250.0/24 #net subnet on left sdie to assign to right side leftnexthop=%defaultroute #right security gateway (ASA side) right={CiscoASA_publicIP} #cisco ASA rightsubnet=10.2.1.0/24 rightnexthop=%defaultroute #crypo stuff keyexchange=ike ikelifetime=86400s auth=esp pfs=no compress=no auto=start ipsec.secrets file {CiscoASA_publicIP} {myPublicIP}: PSK "fakefake" When I start ipsec from the left side/my side I don't recieve any errors, however when I run the ipsec auto --status command: 000 "cisco": 172.16.250.0/24==={left_public_ip}<{left_public_ip}>[+S=C]---{left_public_ip_gateway}...{left_public_ip_gateway}--{right_public_ip}<{right_public_ip}>[+S=C]===10.2.1.0/24; prospective erouted; eroute owner: #0 000 "cisco": myip=unset; hisip=unset; 000 "cisco": ike_life: 86400s; ipsec_life: 28800s; rekey_margin: 540s; rekey_fuzz: 100%; keyingtries: 0 000 "cisco": policy: PSK+ENCRYPT+TUNNEL+UP+IKEv2ALLOW+SAREFTRACK+lKOD+rKOD; prio: 24,24; interface: eth0; 000 "cisco": newest ISAKMP SA: #0; newest IPsec SA: #0; 000 000 #2: "cisco":500 STATE_MAIN_I1 (sent MI1, expecting MR1); EVENT_RETRANSMIT in 10s; nodpd; idle; import:admin initiate 000 #2: pending Phase 2 for "cisco" replacing #0 Now I'm new to setting up an site-to-site IPSEC tunnel so the status informatino I am unsure what it means. All I know is it sits at this "pending Phase 2" and I can't ping the other side, Another question I have is, if I do a route -n, should I see anything relating to this connection? Also, I read a few artilcle where configs contained the interface="ipsec0=eth0", is this an interface that I have to create on the linux debian firewall on my side? Appreciate your time to look at this.
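
    The ASA is configured for an IKE policy of AES-256/SHA/group 2 and an ESP transform set of aes-256-sha, while the openswan conn above does not pin its proposals; a hedged sketch of the extra lines that would make the left side offer matching algorithms (option names as used in openswan 2.6.x):

        conn cisco
            ike=aes256-sha1;modp1024
            phase2=esp
            phase2alg=aes256-sha1
            ikelifetime=86400s
            keylife=28800s

    Separately, STATE_MAIN_I1 ("sent MI1, expecting MR1") means the ASA has not answered the very first main-mode packet, so it is also worth confirming that UDP 500/4500 actually reaches the peer before tuning phase 2.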

    Read the article

  • How to write re-usable puppet definitions?

    - by Oliver Probst
    I'd like to write a puppet manifest to install and configure an application on target servers. Parts of this manifest shall be re-usable. Thus I used define for defining my re-usable functionality. Doing so, I've always the problem that there are parts of the definition which are not re-usable. A simple example is a bunch of configuration files to be created. These file must be placed in the same directory. This directory must be created only once. Example: nodes.pp node 'myNode.in.a.domain' { mymodule::addconfig {'configfile1.xml': param => 'somevalue', } mymodule::addconfig {'configfile2.xml': param => 'someothervalue', } } mymodule.pp define mymodule::addconfig ($param) { $config_dir = "/the/directory/" #ensure that directory exits: file { $config_dir: ensure => directory, } #create the configuration file: file { $name: path => "${config_dir}/${name}" content => template('a_template.erb'), require => File[$config_dir], } } This example will fail, because now the resource file {$config_dir: is defined twice. As far as I understood, it is required to extract these parts into a class. Then it looks like this: nodes.pp node 'myNode.in.a.domain' { class { 'mymodule::createConfigurationDirectory': } mymodule::addconfig {'configfile1.xml': param => 'somevalue', require => Class ['mymodule::createConfigurationDirectory'], } mymodule::addconfig {'configfile2.xml': param => 'someothervalue', require => Class ['mymodule::createConfigurationDirectory'], } } But this makes my interface hard use. Every user of my module has to know, that there is a class which is additionally required. For this simple use case the additional class might be acceptable. But with growing module complexity (lots of definitions) I'm a bit afraid of confusing the modules user. So I'd like to know is there a better way to handle this dependencies. Ideally, classes like createConfigurationDirectory are hidden from the user of the modules api. Or are there some other "Best Practices"/Patterns handling such dependencies?
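
    One common way to hide the shared directory from the module's users is to declare it in a small class and have the define include that class itself; include is idempotent, so any number of addconfig instances can be declared without a duplicate-resource error, and nodes only ever call mymodule::addconfig. A sketch (the class name is made up for illustration):

        class mymodule::config_dir {
          file { '/the/directory':
            ensure => directory,
          }
        }

        define mymodule::addconfig ($param) {
          include mymodule::config_dir

          file { "/the/directory/${name}":
            content => template('a_template.erb'),
            require => Class['mymodule::config_dir'],
          }
        }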

    Read the article

  • Ubuntu: Getting rid of a mimetype entry

    - by Epaga
    I have a pesky mimetype entry that I can't seem to get rid of. Here is the current situation: xdg-mime query filetype myfile.mfe application/pesky Using assogiate I have found out the information about this mime type entry (but can't delete it there). I have the following 'pesky.xml' XML file which was used to create the mime type (as far as I can tell, since it exactly matches the entry in assogiate...): <?xml version='1.0'?> <mime-info xmlns='http://www.freedesktop.org/standard'> <mime-type type="application/pesky"> <comment>my pesky type</comment> <glob pattern="*.mfe"/> <magic priority="100"> <match type="string" offset="0" value="application/pesky"/> </magic> </mime-type> <mime-info> However, the following has no effect: sudo xdg-mime uninstall --mode system --novendor pesky.xml The file association remains. Any ideas?
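
    One possibility is that the type was originally installed under a different (for example vendor-prefixed) package file name, in which case uninstalling pesky.xml matches nothing; a hedged fallback is to remove the generated package definition by hand and rebuild the MIME database (paths assume the generated file kept the pesky.xml name):

        # system-wide installation
        sudo rm /usr/share/mime/packages/pesky.xml
        sudo update-mime-database /usr/share/mime

        # per-user installation
        rm ~/.local/share/mime/packages/pesky.xml
        update-mime-database ~/.local/share/mime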

    Read the article

  • Mod_rewrite is ignoring the extension of a file

    - by ngl5000
    This is my entire mod_rewrite condition: <IfModule mod_rewrite.c> <Directory /var/www/> Options FollowSymLinks -Multiviews AllowOverride None Order allow,deny allow from all RewriteEngine On # force www. (also does the IP thing) RewriteCond %{HTTPS} !=on RewriteCond %{HTTP_HOST} !^mysite\.com [NC] RewriteRule ^(.*)$ http://mysite.com/$1 [R=301,L] RewriteCond %{REQUEST_URI} ^system.* RewriteRule ^(.*)$ /index.php?/$1 [L] RewriteCond %{REQUEST_URI} ^application.* RewriteRule ^(.*)$ /index.php?/$1 [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.+)\.(\d+)\.(js|css|png|jpg|gif)$ $1.$3 [L] RewriteCond %{THE_REQUEST} /index\.(php|html) RewriteRule (.*)index\.(php|html)(.*)$ /$1$3 [r=301,L] RewriteCond %{REQUEST_URI} !^(/index\.php|/assets|/robots\.txt|/sitemap\.xml|/favicon\.ico) RewriteRule ^(.*)$ /index.php/$1 [L] # Block access to "hidden" directories or files whose names begin with a period. This # includes directories used by version control systems such as Subversion or Git. RewriteCond %{SCRIPT_FILENAME} -d [OR] RewriteCond %{SCRIPT_FILENAME} -f RewriteRule "(^|/)\." - [F] </Directory> </IfModule> It is suppose to allow only access to mysite.com(/index.php|/assets|/robots.txt|/sitemap.xml|/favicon.ico) The error was noticed with: mysite.com/sitemap vs mysite.com/sitemap.xml Both of these addresses are resolving to the xml file while the first url should be resolving to mysite.com/index.php/sitemap * For some reason mod_rewrite is completely ignoring the lack of an extension. It sounded like a Multiviews problem to me so I disabled Multiviews and it is still going on. ***And then a different rule will eventually take the index.php out, I am having another problem with an extra '/' being left behind when this happens. This httpd file is setting up for my codeigniter php framework

    Read the article

  • Stop VLC player from saving effects & filter settings

    - by bocamax
    VLC Player v1.1.11, Windows 7 Ultimate. VLC player launches each new video with the settings I used once to brighten a dark video. On a second computer, VLC player opens all videos rotated 270°, which was a feature I used once. I now have to manually uncheck Image adjust on one computer and uncheck Transform on the other for each video to have it play without these custom settings. I've custom-mapped VLC hotkeys that I use extensively, so I do not want to "reset preferences" and lose my hotkey mapping. The settings in question are VLC > Tools > Settings > Video Effects > Basic > Image Adjust (Brightness, Contrast, etc.) and VLC > Tools > Settings > Video Effects > Basic > Transform (Rotate 90, 180, 270). Is there a setting I'm not finding to stop VLC from applying a one-time adjustment to all videos?

    Read the article

  • Rewriting html links with modproxyperlhtml

    - by Juancho
    I'm trying to setup an Apache reverse proxy using mod_proxy and modproxyperlhtml. This is my scenario: Domain for the proxy: http : // www.myserver.com/ Destination server (the one behind the proxy): http : // myserver.foo.com/myapp/ I'm sorry that I have to space the URL but serverfault doesn't allow me to post more than two links as "spam protection mechanism" (ridiculous on a site where you ask questions about servers and it's really probable to post more than two times the same URL's to explain your question). The idea is to map http : // www.myserver.com/ to http : // myserver.foo.com/myapp/ . Note that the path on the proxy is / and on the destination server is /myapp/. All of the examples I can find on the net (like the one on the official documentation of modproxyperlhtml) are the other way around, ie. path on the proxy /myapp/ and path on the destination server /. This is my current config that doesn't work: ProxyPass / http : // myserver.foo.com/myapp/ ProxyPassReverse / http : // myserver.foo.com/myapp/ PerlInputFilterHandler Apache2::ModProxyPerlHtml PerlOutputFilterHandler Apache2::ModProxyPerlHtml SetHandler perl-script PerlSetVar ProxyHTMLVerbose "On" LogLevel Info <Location / > # ProxyPassReverse /myapp/ PerlAddVar ProxyHTMLURLMap "/myapp/ /" PerlAddVar ProxyHTMLURLMap "http : // myserver.foo.com /" </Location> The examples use the ProxyPassReverse inside the Location directive, but on my case doesn't work, only when outside. With this configuration the links aren't being replaced as they should be, my guess is that the location isn't being found, thus the rewrite rules aren't being applied. The error log only shows that it uncompresses the content, searches it but doesn't find anything: [Tue Nov 13 0842:05 2012] [warn] [ModProxyPerlHtml] Uncompressing text/html; charset=UTF-8, Content-Encoding: gzip\n [Tue Nov 13 08:42:05 2012] [warn] [ModProxyPerlHtml] Content-type 'text/html; charset=UTF-8' match: /(text\\/javascript|text\\/html|text\\/css|text\\/xml|application\\/.*javascript|application\\/.*xml)/is\n [Tue Nov 13 08:42:05 2012] [warn] [ModProxyPerlHtml] Compressing output as Content-Encoding: gzip\n [Tue Nov 13 08:42:06 2012] [warn] [ModProxyPerlHtml] Content-type 'text/html; charset=UTF-8' match: /(text\\/javascript|text\\/html|text\\/css|text\\/xml|application\\/.*javascript|application\\/.*xml)/is\n What could be wrong ?

    Read the article

  • How to configure IIS for SVG and web testing with Visual Studio?

    - by macias
    Let's say I have a simple web page with svg image in it: <img src="foobar.svg" alt="not working" /> If I make this page as static html page and view it directly svg is displayed. If I type the address of this svg -- it is displayed. But when I make this as .aspx page and launch it dynamically from Visual Studio I get alt text. If I type the address of this svg (from localhost, not as a local file) -- browser tries to download it instead of displaying. I already defined mime type in IIS (for entire server -- "image/svg+xml") and restarted IIS. Same effect as before. Question: what should I do more? Update WireShark won't work (it is in documentation), I tried also RawCap, but it cannot trace my connection (odd), luckily Fiddler worked: From client: GET http://127.0.0.1:1731/svg/document_edit.svg HTTP/1.1 Host: 127.0.0.1:1731 User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:10.0.1) Gecko/20100101 Firefox/10.0.1 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip, deflate Connection: keep-alive Answer from server: HTTP/1.1 200 OK Server: ASP.NET Development Server/10.0.0.0 Date: Thu, 16 Feb 2012 11:14:38 GMT X-AspNet-Version: 4.0.30319 Cache-Control: private Content-Type: application/octet-stream Content-Length: 87924 Connection: Close <?xml version="1.0" encoding="UTF-8" standalone="no"?> <!-- Created with Inkscape (http://www.inkscape.org/) --> <svg xmlns: *** FIDDLER: RawDisplay truncated at 128 characters. Right-click to disable truncation. *** For the record, here is useful Q&A for Fiddler: http://stackoverflow.com/questions/826134/how-to-display-localhost-traffic-in-fiddler-while-debugging-an-asp-net-applicati
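
    The captured response reports Server: ASP.NET Development Server/10.0.0.0, i.e. the request is served by Visual Studio's built-in development server rather than IIS, and that server does not pick up MIME types added in IIS. For the IIS / IIS Express case, a hedged web.config sketch of the mapping:

        <system.webServer>
          <staticContent>
            <remove fileExtension=".svg" />
            <mimeMap fileExtension=".svg" mimeType="image/svg+xml" />
          </staticContent>
        </system.webServer>

    Running the project under IIS or IIS Express instead of the development server would let that mapping take effect.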

    Read the article

  • FTP proxy that translates from passive to active

    - by Jan Aagaard
    Is it possible to install a proxy server that will transform passive ftp to active ftp? Details of my problem: I would like to deploy my web sites using Visual Studio's built-in publish web site function. The problem is that my web hotel only supports active ftp, and unfortunately Visual Studio 2010 has a bug, so the publish function only works with passive ftp. My idea is to install a tiny local FTP proxy that is able to transform passive ftp mode to active mode. I would then enter localhost as the publish server in Visual Studio, and the proxy would do the actual uploading of the files to my web hotel. Visual Studio bug report: Unable to publish website to FTP server that doesn't allow passive mode.

    Read the article

  • Nested %..% representation in Windows 7

    - by prosseek
    In order to organize the PATH environment, I came up with the following method. Set up a variable: ABC = "PATH1;PATH2". Set the path variable using the predefined setup variable: Path = %ABC%%DEF% ... It seems to work fine, but I found that some of the variables are not transformed into real paths when I run the set command (I mean the result is PATH=PATH1;PATH2;%DEF%, for example). What I found was as follows. If the variable name has '_', Windows can't transform it. If the variable's value itself contains a %...% variable, Windows can't transform it. Q1: Am I correct? Q2: If so, is there a way to bypass the nested %...% variable problem?

    Read the article

  • Compare-Object gives false differences

    - by Andy
    I have some problem with Compare-Object. My task is to get difference between two directory snapshots made at different times. First snapshot is taken like this: ls -recurse d:\dir | export-clixml dir-20100129.xml Then, later, I get second snapshot and load both of them: $b = (import-clixml dir-20100130.xml) $a = (import-clixml dir-20100129.xml) Next, I'm trying to compare with Compare-Object, like that: diff $a $b What I get is in some places files that were added to $b since $a, but in some -- files that were in both snapshots, and some files, that were added to $b, are not given in Compare-Object output. Puzzling, but $b.count - $a.count is EXACTLY the same as (diff $a $b).count. Why is that? Ok, Compare-Object has -property param. I try to use that: diff -property fullname $a $b And I get the whole mess of differences: it shows me ALL the files. For example, say $a contains: A\1.txt A\2.txt A\3.txt And $b contains: X\2.mp3 X\3.mp3 X\4.mp3 A\1.txt A\2.txt A\3.txt diff output is something like that: X\2.mp3 => A\1.txt <= X\3.mp3 => A\2.txt <= X\4.mp3 => A\3.txt <= A\1.txt => A\2.txt => A\3.txt => Weird. I think I don't understand something crucial about Compare- Object usage, and manuals are scarce... Please, help me to get the DIFFERENCE between two directory snapshots. Thanks in advance UPDATE: I've saved data as plain strings like that: > import-clixml dir-20100129.xml | % { $_.fullname } | out-file -enc utf8 a.txt And results are the same. Here're excerpts of both snapshots (top 100-something lines, a.txt and b.txt), output of compare-object, and output of UNIX diff (unified). All files are UTF-8: http://dl.dropbox.com/u/2873752/compare-object-problem.zip

    Read the article

  • apache chokes after 300 connections

    - by john titus
    We have an apache webserver in front of Tomcat hosted on EC2, instance type is extra large with 34GB memory. Our application deals with lot of external webservices and we have a very lousy external webservice which takes almost 300 seconds to respond to requests during peak hours. During peak hours the server chokes at just about 300 httpd processes. ps -ef | grep httpd | wc -l =300 I have googled and found numerous suggestions but nothing seems to work.. following are some configuration i have done which are directly taken from online resources. I have increased the limits of max connection and max clients in both apache and tomcat. here are the configuration details: //apache <IfModule prefork.c> StartServers 100 MinSpareServers 10 MaxSpareServers 10 ServerLimit 50000 MaxClients 50000 MaxRequestsPerChild 2000 </IfModule> //tomcat <Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol" connectionTimeout="600000" redirectPort="8443" enableLookups="false" maxThreads="1500" compressableMimeType="text/html,text/xml,text/plain,text/css,application/x-javascript,text/vnd.wap.wml,text/vnd.wap.wmlscript,application/xhtml+xml,application/xml-dtd,application/xslt+xml" compression="on"/> //Sysctl.conf net.ipv4.tcp_tw_reuse=1 net.ipv4.tcp_tw_recycle=1 fs.file-max = 5049800 vm.min_free_kbytes = 204800 vm.page-cluster = 20 vm.swappiness = 90 net.ipv4.tcp_rfc1337=1 net.ipv4.tcp_max_orphans = 65536 net.ipv4.ip_local_port_range = 5000 65000 net.core.somaxconn = 1024 I have been trying numerous suggestions but in vain.. how to fix this? I'm sure m2xlarge server should serve more requests than 300, probably i might be going wrong with my configuration.. The server chokes only during peak hours and when there are 300 concurrent requests waiting for the [300 second delayed] webservice to respond. Please help..

    Read the article

  • Is there a way to use VirtualBox without using its resource registry?

    - by Catskul
    Summary VirtualBox seems to want everything to be "registered" which makes it much more annoying to work with on the command line. I'm attempting to create an automated script which will create, move, start, stop, and destroy virtual machines and virtual disks. Requiring registration will complicate the task for the following reasons. leaves state information around that can cause unpredicted edgecases causing scripts to fail. creates potential name space collisions for multiple process creating VMs with the same name moving/copying resources on the same machine is more complicated because references in the registry need to be updated copying resources (disk + vm combination) to another machine require reconfiguration once they reach their target machine, and require the transfer of extra meta data to do the reconfiguration. If something unexpectedly fails, and an unregister thus fails to happen, left over configuration information can cause problems in subsequent runs. Use Case My specific use case is for a continuous integration server which creates and destroys VMs and Disk images potentially with the same name, and would require more logic to deal with the registry's statefulness. Imaginary Example It seems that I should just be able to for example (using some imaginary and/or incorrect commands): mkdir foobar customdiskimg_script ./foo/foo.vdi vboxmanage createvm --name "foo" --ostype Linux --basefolder ./foo/foo.xml vboxmanage storagectl ./foo/foo.xml --name foo --add ide vboxmanage storageattach --storagectl foo --medium ./foo/foo.vdi ./foo/foo.xml vboxmanage startvm ./foo/foo.xml TLDR Is there a way to use virtualbox without "registering" harddisks and VMs?

    Read the article

  • Code-First Database Creation During TFS 2010 CI Build

    - by jedimindtrickster
    I would like to automate code-first database generation during the automated CI build of a web project in Team Foundation Server 2010. When run locally the tests create a code-first database specified by the connection string in the app.config of the tests project. How do I configure the TFS Build Configuration to mimic this behaviour on the TFS build server? Edit The problem, it turns out, was that the TFS build server was successfully running the test which was using the default connection string in the app.config which pointed to the local SQL Server, not where I expected it. The solution was to use SlowCheetah on the TFS server as a means to transform the App.config file using the QA transform as per this blog article.
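
    For illustration, a QA transform of that kind might look roughly like this (a file named App.QA.config next to App.config; the connection string name and value are placeholders):

        <?xml version="1.0" encoding="utf-8"?>
        <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
          <connectionStrings>
            <add name="DefaultConnection"
                 connectionString="Data Source=BUILDSQL01;Initial Catalog=MyApp_CI;Integrated Security=True"
                 xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
          </connectionStrings>
        </configuration>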

    Read the article

  • Validate java SAML signature from C#

    - by Adrya
    How can i validate in .Net C# a SAML signature created in Java? Here is the SAML Signature that i get from Java: <ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#"> <ds:SignedInfo> <ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"> </ds:CanonicalizationMethod> <ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"> </ds:SignatureMethod> <ds:Reference URI="#_e8bcba9d1c76d128938bddd5ae8c68e1"> <ds:Transforms> <ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"> </ds:Transform> <ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"> <ec:InclusiveNamespaces xmlns:ec="http://www.w3.org/2001/10/xml-exc-c14n#" PrefixList="code ds kind rw saml samlp typens #default xsd xsi"> </ec:InclusiveNamespaces> </ds:Transform> </ds:Transforms> <ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"> </ds:DigestMethod> <ds:DigestValue>zEL7mB0Wkl+LtjMViO1imbucXiE=</ds:DigestValue> </ds:Reference> </ds:SignedInfo> <ds:SignatureValue> jpIX3WbX9SCFnqrpDyLj4TeJN5DGIvlEH+o/mb9M01VGdgFRLtfHqIm16BloApUPg2dDafmc9DwL Pyvs3TJ/hi0Q8f0ucaKdIuw+gBGxWFMcj/U68ZuLiv7U+Qe7i4ZA33rWPorkE82yfMacGf6ropPt v73mC0bpBP1ubo5qbM4= </ds:SignatureValue> <ds:KeyInfo> <ds:X509Data> <ds:X509Certificate> MIIDBDCCAeygAwIBAgIIC/ktBs1lgYcwDQYJKoZIhvcNAQEFBQAwNzERMA8GA1UEAwwIQWRtaW5D QTExFTATBgNVBAoMDEVKQkNBIFNhbXBsZTELMAkGA1UEBhMCU0UwHhcNMDkwMjIzMTAwMzEzWhcN MTgxMDE1MDkyNTQyWjBaMRQwEgYDVQQDDAsxMC41NS40MC42MTEbMBkGA1UECwwST24gRGVtYW5k IFBsYXRmb3JtMRIwEAYDVQQLDAlPbiBEZW1hbmQxETAPBgNVBAsMCFNvZnR3YXJlMIGfMA0GCSqG SIb3DQEBAQUAA4GNADCBiQKBgQCk5EqiedxA6WEE9N2vegSCqleFpXMfGplkrcPOdXTRLLOuRgQJ LEsOaqspDFoqk7yJgr7kaQROjB9OicSH7Hhsu7HbdD6N3ntwQYoeNZ8nvLSSx4jz21zvswxAqw1p DoGl3J6hks5owL4eYs2yRHvqgqXyZoxCccYwc4fYzMi42wIDAQABo3UwczAdBgNVHQ4EFgQUkrpk yryZToKXOXuiU2hNsKXLbyIwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSiviFUK7DUsjvByMfK g+pm4b2s7DAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAwwCgYIKwYBBQUHAwEwDQYJKoZIhvcNAQEF BQADggEBAKb94tnK2obEyvw8ZJ87u7gvkMxIezpBi/SqXTEBK1by0NHs8VJmdDN9+aOvC5np4fOL fFcRH++n6fvemEGgIkK3pOmNL5WiPpbWxrx55Yqwnr6eLsbdATALE4cgyZWHl/E0uVO2Ixlqeygw XTfg450cCWj4yfPTVZ73raKaDTWZK/Tnt7+ulm8xN+YWUIIbtW3KBQbGomqOzpftALyIKLVtBq7L J0hgsKGHNUnssWj5dt3bYrHgzaWLlpW3ikdRd67Nf0c1zOEgKHNEozrtRKiLLy+3bIiFk0CHImac 1zeqLlhjrG3OmIsIjxc1Vbc0+E+z6Unco474oSGf+D1DO+Y= </ds:X509Certificate> </ds:X509Data> </ds:KeyInfo> </ds:Signature> I know to parse SAML, i need to validate the signature. I tried this: public bool VerifySignature() { X509Certificate2 certificate = null; XmlDocument doc = new XmlDocument(); XmlElement xmlAssertionElement = this.GetXml(doc); doc.AppendChild(xmlAssertionElement); // Create a new SignedXml object and pass it // the XML document class. SamlSignedXml signedXml = new SamlSignedXml(xmlAssertionElement); // Get signature XmlElement xmlSignature = this.Signature; if (xmlSignature == null) { return false; } // Load the signature node. signedXml.LoadXml(xmlSignature); // Get the certificate used to sign the assertion if information about this // certificate is available in the signature of the assertion. foreach (KeyInfoClause clause in signedXml.KeyInfo) { if (clause is KeyInfoX509Data) { if (((KeyInfoX509Data)clause).Certificates.Count > 0) { certificate = (X509Certificate2)((KeyInfoX509Data)clause).Certificates[0]; } } } if (certificate == null) { return false; } return signedXml.CheckSignature(certificate, true); } It validates the signature of a SAML signed in .Net but not of this Java one.
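
    One detail that differs between the two stacks: SignedXml resolves the Reference URI (#_e8bc…) through GetIdElement, which by default only looks for standard ID attributes, while a SAML 1.x assertion carries its identifier in AssertionID (SAML 2.0 uses ID). The SamlSignedXml class used above might therefore need an override along these lines (a hedged sketch, not the original implementation):

        using System.Security.Cryptography.Xml;
        using System.Xml;

        // Assumes the assertion's identifier attribute is AssertionID (SAML 1.x) or ID (SAML 2.0).
        public class SamlSignedXml : SignedXml
        {
            public SamlSignedXml(XmlElement element) : base(element) { }

            public override XmlElement GetIdElement(XmlDocument document, string idValue)
            {
                foreach (string attr in new[] { "ID", "AssertionID", "Id" })
                {
                    XmlNodeList nodes = document.SelectNodes(
                        string.Format("//*[@{0}='{1}']", attr, idValue));
                    if (nodes != null && nodes.Count > 0)
                        return nodes[0] as XmlElement;
                }
                return base.GetIdElement(document, idValue);
            }
        }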

    Read the article

  • How do i use GraphMLReader2 in Jung?

    - by askus
    I want to use class GraphMLReader to read a Undirected Graph from graphML with JUNG2.0. The code is as follow: import edu.uci.ics.jung.io.*; import edu.uci.ics.jung.io.graphml.*; import java.io.*; import java.util.*; import org.apache.commons.collections15.Transformer; import edu.uci.ics.jung.graph.*; class Vertex{ int id; String type; String value; } class Edge{ int id ; String type; String value; } public class Loader{ static String src = "test.xsl"; public static void Main( String[] args){ Reader reader = new FileReader(src ); Transformer<NodeMetadata, Vertex> vtrans = new Transformer<NodeMetadata,Vertex>(){ public Vertex transform(NodeMetadata nmd ){ Vertex v = new Vertex() ; v.type = nmd.getProperty("type"); v.value = nmd.getProperty("value"); v.id = Integer.valueOf( nmd.getId() ); return v; } }; Transformer<EdgeMetadata, Edge> etrans = new Transformer<EdgeMetadata,Edge>(){ public Edge transform( EdgeMetadata emd ){ Edge e = new Edge() ; e.type = emd.getProperty("type"); e.value = emd.getProperty("value"); e.id = Integer.valueOf( emd.getId() ); return e; } }; Transformer<HyperEdgeMetadata, Edge> hetrans = new Transformer<HyperEdgeMetadata,Edge>(){ public Edge transform( HyperEdgeMetadata emd ){ Edge e = new Edge() ; e.type = emd.getProperty("type"); e.value = emd.getProperty("value"); e.id = Integer.valueOf( emd.getId() ); return e; } }; Transformer< GraphMetadata , UndirectedSparseGraph> gtrans = new Transformer<GraphMetadata,UndirectedSparseGraph>(){ public UndirectedSparseGraph<Vertex,Edge> transform( GraphMetadata gmd ){ return new UndirectedSparseGraph<Vertex,Edge>(); } }; GraphMLReader2< UndirectedSparseGraph<Vertex,Edge> , Vertex , Edge> gmlr = new GraphMLReader2< UndirectedSparseGraph<Vertex,Edge> ,Vertex, Edge>( reader, gtrans, vtrans, etrans, hetrans); UndirectedSparseGraph<Vertex,Edge> g = gmlr.readGraph(); return ; } } However, compiler alert that: Loader.java:60: cannot find symbol symbol : constructor GraphMLReader2(java.io.Reader,org.apache.commons.collections15.Transformer<edu.uci.ics.jung.io.graphml.GraphMetadata,edu.uci.ics.jung.graph.UndirectedSparseGraph>,org.apache.commons.collections15.Transformer<edu.uci.ics.jung.io.graphml.NodeMetadata,Vertex>,org.apache.commons.collections15.Transformer<edu.uci.ics.jung.io.graphml.EdgeMetadata,Edge>) location: class edu.uci.ics.jung.io.graphml.GraphMLReader2<edu.uci.ics.jung.graph.UndirectedSparseGraph<Vertex,Edge>,Vertex,Edge> new GraphMLReader2< UndirectedSparseGraph<Vertex,Edge> ,Vertex, Edge>( ^ 1 error How can i solve this problem? Thanks.
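
    Judging from the error text, the graph transformer is seen as Transformer<GraphMetadata, UndirectedSparseGraph> with a raw graph type, which matches no GraphMLReader2 constructor; declaring it against the fully parameterized graph type should line the generics up. A hedged sketch:

        Transformer<GraphMetadata, UndirectedSparseGraph<Vertex, Edge>> gtrans =
            new Transformer<GraphMetadata, UndirectedSparseGraph<Vertex, Edge>>() {
                public UndirectedSparseGraph<Vertex, Edge> transform(GraphMetadata gmd) {
                    return new UndirectedSparseGraph<Vertex, Edge>();
                }
            };

        GraphMLReader2<UndirectedSparseGraph<Vertex, Edge>, Vertex, Edge> gmlr =
            new GraphMLReader2<UndirectedSparseGraph<Vertex, Edge>, Vertex, Edge>(
                reader, gtrans, vtrans, etrans, hetrans);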

    Read the article

  • Transforming object world space matrix to a position in world space

    - by Fredrik Boston Westman
    I'm trying to make a function for picking objects with a bounding sphere, however I have run into a problem. First I check against my bounding sphere, then if that checks out I test against the vertices. I have already tested my vertex picking method and it works fine; however, when I check first with my bounding sphere method it doesn't register anything. My conclusion is that when I transform my sphere position into the position of the object in world space, the transformation goes wrong (I base this on the fact that the x coordinate always becomes 1, even though I translate none of my meshes along the x-axis to 1). So my question is: what is the proper way to transform an object's world space matrix to a position vector? This is how I do it now. First I set my position vector to 0: XMVECTOR meshPos = XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f); Then I transform it with my object's world space matrix, and then add the offset to the center of the mesh: meshPos = XMVector3TransformCoord(meshPos, meshWorld) + centerOffset;
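
    For what it's worth, the translation of a world matrix sits in its fourth row, so the world-space position can also be read from the matrix directly; and if centerOffset is defined in the mesh's local space, it would itself need to be transformed by the world matrix rather than added as-is. A hedged DirectXMath sketch:

        // Translation component of the world matrix (fourth row).
        XMVECTOR meshPos = meshWorld.r[3];

        // Equivalent: transform the local-space origin as a point (w treated as 1).
        XMVECTOR origin   = XMVectorSet(0.0f, 0.0f, 0.0f, 1.0f);
        XMVECTOR meshPos2 = XMVector3TransformCoord(origin, meshWorld);

        // If centerOffset is a local-space offset, bring it into world space too:
        XMVECTOR center   = XMVector3TransformCoord(centerOffset, meshWorld);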

    Read the article

  • Oracle Insurance Gets Innovative with Insurance Business Intelligence

    - by nicole.bruns(at)oracle.com
    Oracle Insurance announced yesterday the availability of Oracle Insurance Insight 7.0, an insurance-specific data warehouse and business intelligence (BI) system that transforms the traditional approach to BI by involving business users in its creation and maintenance. "Rapid access to business intelligence is essential to compete and thrive in today's insurance industry," said Srini Venkatasantham, vice president, Product Strategy, Oracle Insurance. "The adaptive data modeling approach of Oracle Insurance Insight 7.0, combined with the insurance-specific data model, offers global insurance companies a faster, easier way to get the intelligence they need to make better-informed business decisions." New features in Oracle Insurance Insight 7.0 include: "Adaptive Data Modeling" via the new warehouse palette: gives business users the power to configure lines of business via an easy-to-use warehouse palette tool. Oracle Insurance Insight then automatically creates data warehouse elements - such as line-specific database structures and extract-transform-load (ETL) processes - speeding up time-to-value for BI initiatives. Out-of-the-box insurance models or create-from-scratch option: includes pre-built content and interfaces for six Property and Casualty (P&C) lines. Additionally, insurers can use the warehouse palette to deploy any and all P&C or General Insurance lines of business from scratch, helping insurers support operations in any country. Leverages Oracle technologies: in addition to Oracle Business Intelligence Enterprise Edition, the solution includes Oracle Database 11g as well as Oracle Data Integrator Enterprise Edition 11g, which delivers an Extract, Load and Transform (E-L-T) architecture and eliminates the need for a separate transformation server. Additionally, the expanded Oracle technology infrastructure enables support for Oracle Exadata. Martina Conlon, a Principal with Novarica's Insurance practice and author of Business Intelligence in Insurance: Current State, Challenges, and Expectations, says, "The need for continued investment by insurers in business intelligence capabilities is widely understood, and the industry is acting. Arming the business intelligence implementation with predefined insurance-specific content, and flexible and configurable technology, will get these projects up and running faster." Learn more: To see a demo of the Oracle Insurance Insight system, click here. To read the press announcement, click here.

    Read the article

  • How can a single script attached to multiple enemies provide separate behavior for each enemy?

    - by Syed
    I am making a TD game and now stucked at multiple enemies using same script. All is well with scripts attached with one enemy only. Problem is in running multiple enemies. Here is the overview. I have a Enemy object with which I have attached a 'RunEnemy' script. Flow of this script is: RunEnemy.cs: PathF p; p = gameObject.GetComponent<PathF>(); //PathF is a pathfinding algo which has 'search; function which returns a array of path from starting position PathList = p.search(starting position); //------------------------------- if(PathList != null) { //if a way found! if(moving forward) transform.Translate(someXvalue,0,0); //translates on every frame until next grid point else if(moving back) transform.Translate(0,someXvalue,0); ...and so on.. if(reached on next point) PathList = p.search(from this point) //call again from next grid point so that if user placed a tower enemy will run again on the returned path } Now I have attached this script and "PathF.cs" to a single enemy which works perfect. I have then made one more enemy object and attached both of these script to it as well, which is not working they both are overlapping movements. I can't understand why, I have attached these scripts on two different gameobjects but still their values change when either enemy changes its value. I don't want to go with a separate script for each enemy because there would be 30 enemies in a scene. How can I fix this problem?

    Read the article
