Search Results

Search found 86974 results on 3479 pages for 'visualsvn server'.


  • Nginx virtual server only partially working with Java application

    - by MFB
    Final Cut Server is a Java application (made by Apple) which launches from a web page. I have Nginx in front of this web server (amongst others) and, whilst the web server can be browsed externally, the Java app fails to launch correctly and throws the errors below. Can anyone offer clues as to what additional config I may have to add to Nginx to get this working? My existing Nginx config:

      user xxxx;
      worker_processes 4;
      pid /var/run/nginx.pid;
      events {
          worker_connections 1024;
          # multi_accept on;
      }
      http {
          sendfile on;
          tcp_nopush on;
          tcp_nodelay on;
          keepalive_timeout 65;
          types_hash_max_size 2048;
          include /etc/nginx/conf/mime.types;
          default_type application/octet-stream;
          access_log /var/log/nginx/access.log;
          error_log /var/log/nginx/error.log;
          gzip on;
          gzip_disable "msie6";

          server {
              server_name _;
              return 444;
          }

          upstream fcs-site {
              server 10.10.5.20:8080;
          }

          server {
              listen 80;
              server_name example.com 10.10.5.90;
              access_log /var/log/nginx/fcs_access.log;
              error_log /var/log/nginx/fcs_error.log;
              location / {
                  proxy_set_header Host $host;
                  proxy_set_header X-Real-IP $remote_addr;
                  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                  proxy_set_header X-Forwarded-Proto $scheme;
                  client_max_body_size 10m;
                  client_body_buffer_size 128k;
                  proxy_connect_timeout 60s;
                  proxy_send_timeout 90s;
                  proxy_read_timeout 90s;
                  proxy_buffering off;
                  proxy_temp_file_write_size 64k;
                  proxy_pass http://fcs-site;
                  proxy_redirect off;
              }
          }

          upstream myapp-site {
              server 127.0.0.1:6543;
          }

          server {
              listen 80;
              server_name otherexample.com www.otherexample.com;
              rewrite ^ https://$server_name$request_uri? permanent;
          }

          server {
              listen 443;
              ssl on;
              ssl_certificate /etc/ssl/otherapp.crt;
              ssl_certificate_key /etc/ssl/otherapp.key;
              server_name otherexample.com www.otherexample.com;
              access_log /var/log/nginx/otherapp_access.log;
              error_log /var/log/nginx/other_error.log;
              location / {
                  proxy_set_header Host $host;
                  proxy_set_header X-Real-IP $remote_addr;
                  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                  proxy_set_header X-Forwarded-Proto $scheme;
                  client_max_body_size 10m;
                  client_body_buffer_size 128k;
                  proxy_connect_timeout 60s;
                  proxy_send_timeout 90s;
                  proxy_read_timeout 90s;
                  proxy_buffering off;
                  proxy_temp_file_write_size 64k;
                  proxy_pass http://myapp-site;
                  proxy_redirect off;
              }
              location /static {
                  root /www;
                  expires 30d;
                  add_header Cache-Control public;
                  access_log off;
              }
          }
      }

    Java errors:

      com.sun.deploy.net.FailedDownloadException: Unable to load resource: http://example.com:8080/FinalCutServer/FinalCutServer_mac.jnlp
        at com.sun.deploy.net.DownloadEngine.actionDownload(Unknown Source)
        at com.sun.deploy.net.DownloadEngine._downloadCacheEntry(Unknown Source)
        at com.sun.deploy.cache.ResourceProviderImpl.getResourceCacheEntry(Unknown Source)
        at com.sun.deploy.cache.ResourceProviderImpl.getResourceCacheEntry(Unknown Source)
        at com.sun.deploy.cache.ResourceProviderImpl.getResource(Unknown Source)
        at com.sun.javaws.Launcher.updateFinalLaunchDesc(Unknown Source)
        at com.sun.javaws.Launcher.prepareToLaunch(Unknown Source)
        at com.sun.javaws.Launcher.prepareToLaunch(Unknown Source)
        at com.sun.javaws.Launcher.launch(Unknown Source)
        at com.sun.javaws.Main.launchApp(Unknown Source)
        at com.sun.javaws.Main.continueInSecureThread(Unknown Source)
        at com.sun.javaws.Main.access$000(Unknown Source)
        at com.sun.javaws.Main$1.run(Unknown Source)
        at java.lang.Thread.run(Thread.java:722)

      java.net.ConnectException: Operation timed out
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
        at java.net.Socket.connect(Socket.java:579)
        at java.net.Socket.connect(Socket.java:528)
        at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:378)
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:473)
        at sun.net.www.http.HttpClient.<init>(HttpClient.java:203)
        at sun.net.www.http.HttpClient.New(HttpClient.java:290)
        at sun.net.www.http.HttpClient.New(HttpClient.java:306)
        at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:995)
        at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:931)
        at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:849)
        at com.sun.deploy.net.BasicHttpRequest.doRequest(Unknown Source)
        at com.sun.deploy.net.BasicHttpRequest.doRequest(Unknown Source)
        at com.sun.deploy.net.BasicHttpRequest.doGetRequest(Unknown Source)
        at com.sun.deploy.net.DownloadEngine.actionDownload(Unknown Source)
        at com.sun.deploy.net.DownloadEngine._downloadCacheEntry(Unknown Source)
        at com.sun.deploy.cache.ResourceProviderImpl.getResourceCacheEntry(Unknown Source)
        at com.sun.deploy.cache.ResourceProviderImpl.getResourceCacheEntry(Unknown Source)
        at com.sun.deploy.cache.ResourceProviderImpl.getResource(Unknown Source)
        at com.sun.javaws.Launcher.updateFinalLaunchDesc(Unknown Source)
        at com.sun.javaws.Launcher.prepareToLaunch(Unknown Source)
        at com.sun.javaws.Launcher.prepareToLaunch(Unknown Source)
        at com.sun.javaws.Launcher.launch(Unknown Source)
        at com.sun.javaws.Main.launchApp(Unknown Source)
        at com.sun.javaws.Main.continueInSecureThread(Unknown Source)
        at com.sun.javaws.Main.access$000(Unknown Source)
        at com.sun.javaws.Main$1.run(Unknown Source)
        at java.lang.Thread.run(Thread.java:722)
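
    The second stack trace is telling: Java Web Start tries to fetch the .jnlp from http://example.com:8080/..., i.e. from port 8080 on the public address, where nothing is listening externally (only the backend 10.10.5.20:8080 is, behind the proxy), so the connection times out. If the URLs embedded in the JNLP cannot easily be changed, one possible direction -- sketched here as an assumption, not a confirmed fix -- is to have Nginx also answer on port 8080 and proxy to the same upstream:

      # Hypothetical extra server block, assuming the JNLP hard-codes port 8080
      server {
          listen 8080;
          server_name example.com;
          location / {
              proxy_set_header Host $host;
              proxy_pass http://fcs-site;
          }
      }

    The corresponding port would also need to be open on any firewall in front of Nginx.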

    Read the article

  • Performance Tuning a High-Load Apache Server

    - by futureal
    I am looking to understand some server performance problems I am seeing with a (for us) heavily loaded web server. The environment is as follows:

      Debian Lenny (all stable packages + patched to security updates)
      Apache 2.2.9
      PHP 5.2.6
      Amazon EC2 large instance

    The behavior we're seeing is that the web site typically feels responsive, but with a slight delay before it begins handling a request -- sometimes a fraction of a second, sometimes 2-3 seconds at our peak usage times. The actual load on the server is being reported as very high -- often 10.xx or 20.xx as reported by top. Further, running other things on the server during these times (even vi) is very slow, so the load is definitely up there. Oddly enough Apache remains very responsive, other than that initial delay. We have Apache configured as follows, using prefork:

      StartServers          5
      MinSpareServers       5
      MaxSpareServers      10
      MaxClients          150
      MaxRequestsPerChild   0

    And KeepAlive as:

      KeepAlive On
      MaxKeepAliveRequests 100
      KeepAliveTimeout 5

    Looking at the server-status page, even at these times of heavy load we are rarely hitting the client cap, usually serving between 80 and 100 requests, many of those in the keepalive state. That tells me to rule out the initial request slowness as "waiting for a handler", but I may be wrong. Amazon's CloudWatch monitoring tells me that even when our OS is reporting a load of 15, our instance CPU utilization is between 75 and 80%. Example output from top:

      top - 15:47:06 up 31 days, 1:38, 8 users, load average: 11.46, 7.10, 6.56
      Tasks: 221 total, 28 running, 193 sleeping, 0 stopped, 0 zombie
      Cpu(s): 66.9%us, 22.1%sy, 0.0%ni, 2.6%id, 3.1%wa, 0.0%hi, 0.7%si, 4.5%st
      Mem: 7871900k total, 7850624k used, 21276k free, 68728k buffers
      Swap: 0k total, 0k used, 0k free, 3750664k cached

    The majority of the processes look like:

      24720 www-data 15 0 202m 26m 4412 S 9 0.3 0:02.97 apache2
      24530 www-data 15 0 212m 35m 4544 S 7 0.5 0:03.05 apache2
      24846 www-data 15 0 209m 33m 4420 S 7 0.4 0:01.03 apache2
      24083 www-data 15 0 211m 35m 4484 S 7 0.5 0:07.14 apache2
      24615 www-data 15 0 212m 35m 4404 S 7 0.5 0:02.89 apache2

    Example output from vmstat at the same time as the above:

      procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
       r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
       8  0      0 215084  68908 3774864   0    0   154   228    5    7 32 12 42  9
       6 21      0 198948  68936 3775740   0    0   676  2363 4022 1047 56 16  9 15
      23  0      0 169460  68936 3776356   0    0   432  1372 3762  835 76 21  0  0
      23  1      0 140412  68936 3776648   0    0   280     0 3157  827 70 25  0  0
      20  1      0 115892  68936 3776792   0    0   188     8 2802  532 68 24  0  0
       6  1      0 133368  68936 3777780   0    0   752    71 3501  878 67 29  0  1
       0  1      0 146656  68944 3778064   0    0   308  2052 3312  850 38 17 19 24
       2  0      0 202104  68952 3778140   0    0    28    90 2617  700 44 13 33  5
       9  0      0 188960  68956 3778200   0    0     8     0 2226  475 59 17  6  2
       3  0      0 166364  68956 3778252   0    0     0    21 2288  386 65 19  1  0

    And finally, output from Apache's server-status:

      Server uptime: 31 days 2 hours 18 minutes 31 seconds
      Total accesses: 60102946 - Total Traffic: 974.5 GB
      CPU Usage: u209.62 s75.19 cu0 cs0 - .0106% CPU load
      22.4 requests/sec - 380.3 kB/second - 17.0 kB/request
      107 requests currently being processed, 6 idle workers

      C.KKKW..KWWKKWKW.KKKCKK..KKK.KKKK.KK._WK.K.K.KKKKK.K.R.KK..C.C.K
      K.C.K..WK_K..KKW_CK.WK..W.KKKWKCKCKW.W_KKKKK.KKWKKKW._KKK.CKK...
      KK_KWKKKWKCKCWKK.KKKCK..........................................
      ................................................................
    From my limited experience I draw the following conclusions/questions:
    - We may be allowing far too many KeepAlive requests.
    - I do see some time spent waiting for IO in the vmstat output, although not consistently and not a lot (I think?), so I am not sure whether this is a big concern; I am less experienced with vmstat.
    - Also in vmstat, I see in some iterations a number of processes waiting to be served, which is what I am attributing the initial page-load delay on our web server to -- possibly erroneously.
    - We serve a mixture of static content (75% or higher) and script content, and the script content is often fairly processor-intensive, so finding the right balance between the two is important; long term we want to move statics elsewhere to optimize both servers, but our software is not ready for that today.
    I am happy to provide additional information if anybody has any ideas. One other note: this is a high-availability production installation, so I am wary of making tweak after tweak, which is why I haven't played with things like the KeepAlive value myself yet.
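
    On the KeepAlive suspicion: with prefork, each kept-alive connection pins an entire Apache process for the duration of KeepAliveTimeout, so a scoreboard full of "K" states can starve new requests even when CPU is available. A common first experiment -- shown here with illustrative values, not a recommendation tested against this workload -- is to shorten the timeout so workers are recycled faster:

      # Illustrative KeepAlive tuning (values are examples only)
      KeepAlive On
      MaxKeepAliveRequests 100
      KeepAliveTimeout 2

    Whether 2 seconds (or even disabling KeepAlive entirely) helps depends on how quickly clients issue follow-up requests, so it is worth measuring before and after.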

    Read the article

  • DNS Server Behind NAT

    - by Bryan
    I've got a Bind 9 DNS server sitting behind a NAT firewall; assume the Internet-facing IP is 1.2.3.4. There are no restrictions on outgoing traffic, and port 53 (TCP/UDP) is forwarded from 1.2.3.4 to the internal DNS server (10.0.0.1). There are no IP Tables rules on either the VPS or the internal Bind 9 server. From a remote Linux VPS located elsewhere on the internet, nslookup works fine:

      # nslookup foo.example.com 1.2.3.4
      Server:  1.2.3.4
      Address: 1.2.3.4#53

      Name:    foo.example.com
      Address: 9.9.9.9

    However, when using the host command on the remote VPS, I receive the following output:

      # host foo.example.com 1.2.3.4
      ;; reply from unexpected source: 1.2.3.4#13731, expected 1.2.3.4#53
      ;; reply from unexpected source: 1.2.3.4#13731, expected 1.2.3.4#53
      ;; connection timed out; no servers could be reached.

    From the VPS, I can establish a connection (using telnet) to 1.2.3.4:53. From the internal DNS server (10.0.0.1), the host command appears to be fine:

      # host foo.example.com 127.0.0.1
      Using domain server:
      Name: 127.0.0.1
      Address: 127.0.0.1#53
      Aliases:

      foo.example.com has address 9.9.9.9

    Any suggestions as to why the host command on my VPS is complaining about the reply coming back from another port, and what I can do to fix this? Further info: from a Windows host external to the network:

      >nslookup foo.example.com 1.2.3.4
      DNS request timed out.
          timeout was 2 seconds.
      Server:  UnKnown
      Address: 1.2.3.4

      DNS request timed out.
          timeout was 2 seconds.
      DNS request timed out.
          timeout was 2 seconds.
      DNS request timed out.
          timeout was 2 seconds.
      DNS request timed out.
          timeout was 2 seconds.
      *** Request to UnKnown timed-out

    This is a default install of bind from Ubuntu 12.04 LTS, with around 11 zones configured.

      $ named -v
      BIND 9.8.1-P1

    TCP dump (filtered) from the internal DNS server:

      20:36:29.175701 IP pc.external.com.57226 > dns.example.com.domain: 1+ PTR? 4.3.2.1.in-addr.arpa. (45)
      20:36:29.175948 IP dns.example.com.domain > pc.external.com.57226: 1 Refused- 0/0/0 (45)
      20:36:31.179786 IP pc.external.com.57227 > dns.example.com.domain: 2+[|domain]
      20:36:31.179960 IP dns.example.com.domain > pc.external.com.57227: 2 Refused-[|domain]
      20:36:33.180653 IP pc.external.com.57228 > dns.example.com.domain: 3+[|domain]
      20:36:33.180906 IP dns.example.com.domain > pc.external.com.57228: 3 Refused-[|domain]
      20:36:35.185182 IP pc.external.com.57229 > dns.example.com.domain: 4+ A? foo.example.com. (45)
      20:36:35.185362 IP dns.example.com.domain > pc.external.com.57229: 4*- 1/1/1 (95)
      20:36:37.182844 IP pc.external.com.57230 > dns.example.com.domain: 5+ AAAA? foo.example.com. (45)
      20:36:37.182991 IP dns.example.com.domain > pc.external.com.57230: 5*- 0/1/0 (119)

    TCP dump from the client during a query:

      21:24:52.054374 IP pc.external.com.43845 > dns.example.com.53: 6142+ A? foo.example.com. (45)
      21:24:52.104694 IP dns.example.com.29242 > pc.external.com.43845: UDP, length 95
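
    The final tcpdump pair shows the problem: the query arrives at port 53, but the reply leaves the NAT with source port 29242, so strict clients such as host (and the Windows nslookup) discard it as coming from an "unexpected source". In other words, the firewall is rewriting the UDP source port on the way out, so the fix belongs on the NAT device, not on Bind. As a sketch only -- assuming the NAT box is Linux running iptables, which may not match this setup -- a rule like the following would pin the source port of translated DNS replies back to 53:

      # Hypothetical rule on the NAT device (assumes Linux/iptables)
      iptables -t nat -A POSTROUTING -p udp -s 10.0.0.1 --sport 53 \
               -j SNAT --to-source 1.2.3.4:53

    On commercial firewalls the equivalent setting is often called "port preservation" or static (one-to-one) NAT rather than PAT.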

    Read the article

  • Local DNS server (bind) and the router DHCP

    - by Luca
    I just set up an HTTP server for internal use (I set up Redmine) in a small network (30 or so PCs). I set up the HTTP server on a VirtualBox Ubuntu guest, which also runs the DNS server (bind). In the DNS zone I added the Redmine server name (redmine.engserver <- 192.168.1.14) and, as forwarders, the outside ISP DNS IP addresses. I am using a small Wi-Fi router (ASUS RT-N66U) as DHCP server (and as gateway). In the DHCP config page I set the DNS to the Ubuntu server IP (it is fixed: 192.168.1.14). Now when I connect a new PC to the network, the DHCP router issues it a new IP, and as DNS servers it issues: primary 192.168.1.14 (the Ubuntu machine) and secondary 192.168.1.1 (the router itself).

      ipconfig /all
      Default Gateway . . . . . . . . . : 192.168.1.1
      DHCP Server . . . . . . . . . . . : 192.168.1.1
      DHCPv6 IAID . . . . . . . . . . . : 248539109
      DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-17-15-AA-3F-D0-67-E5-49-A7-EF
      DNS Servers . . . . . . . . . . . : 192.168.1.14
                                          192.168.1.1
      NetBIOS over Tcpip. . . . . . . . : Enabled

    Before changing the DHCP setting on the router, I would always get only one DNS server: 192.168.1.1 (which probably uses DNS forwarding to external public DNS services). The problem is this: if in my browser I type www.google.com, it works all the time. If in the browser I type http://redmine.engserver/ it works most of the time, but sometimes it ends up on a Yahoo search page or something else, and in the DNS cache (ipconfig /displaydns) it shows as (Server not found). I looked with Wireshark and it seems like sometimes the client PC interrogates the secondary DNS (192.168.1.1) instead of the primary (192.168.1.14). Obviously the router forwards to public DNS servers, which do not have the redmine.engserver entry. What is wrong in this configuration? Is it even legitimate to have 2 DNS servers (one internal and one forwarded by the router) which are inconsistent? Is there another way to have a local name service in a small office network? Why is the router DHCP issuing itself as DNS?
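
    The underlying behavior here is that clients treat the DNS servers handed out by DHCP as interchangeable, not as an ordered fallback, so any query that happens to go to 192.168.1.1 bypasses the internal zone. The usual remedy is to advertise only 192.168.1.14 via DHCP and let bind resolve external names through its forwarders. A minimal sketch of that forwarding configuration -- the upstream addresses below are placeholders, not the actual ISP resolvers:

      // /etc/bind/named.conf.options (sketch; forwarder IPs are placeholders)
      options {
          forwarders {
              203.0.113.53;
              203.0.113.54;
          };
          // internal zones such as engserver are answered locally;
          // everything else is forwarded upstream
      };

    If the router insists on adding itself as a second DNS entry, one commonly reported workaround is to enter 192.168.1.14 in both DNS slots of the DHCP page, if the firmware allows it.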

    Read the article

  • SPF record doesn't work (not sure which DNS server to tweak)

    - by Ion
    Problem: Google (and perhaps others) marks our emails as SPF neutral. Let me give you some background about the setup: initially we got a dedicated server (Hetzner) with Plesk installed to host a domain/web application, let's say bigjaws.com. Plesk automatically creates a DNS zone for it with some records for the various services it provides out of the box, e.g. webmail.bigjaws.com as a CNAME to bigjaws.com to provide Horde/whatever, etc. Let me point out four relevant records (where XXX.XXX.XXX.158 is our dedicated IP):

      bigjaws.com.       A    XXX.XXX.XXX.158
      mail.bigjaws.com.  A    XXX.XXX.XXX.158
      bigjaws.com.       MX   (10) mail.bigjaws.com.
      bigjaws.com.       TXT  "v=spf1 +a +mx -all"

    The above records are not(?) valid anymore though, because after using this dedicated server for a while, our site got bigger and bigger, so we decided to move our operations over to AWS (EC2, RDS, ELB, etc.), but we retained the mail functionality as is, i.e. emails from [email protected] are sent by connecting to our dedicated server, where Plesk takes care of things. This was decided in order not to set up anything from scratch. Of course for all DNS-related things we now use Route53. In Route53 I have the following records:

      mail.bigjaws.com.  A    XXX.XXX.XXX.158
      bigjaws.com.       MX   (10) mail.bigjaws.com
      bigjaws.com.       SPF  "v=spf1 +ip4:XXX.XXX.XXX.158 +mx ~all"

    From my understanding of SPF, the check should have passed: I designate that all email sent by bigjaws.com from XXX.XXX.XXX.158 is valid/not spam (I added +mx there, but I'm not sure if it is needed). When a mail server receives an email, doesn't it look up the SPF record of the domain and check it against the IP it got the email from? Checking with spfquery:

      root@box:~# spfquery -ip XXX.XXX.XXX.158 -sender [email protected] -rcpt-to [email protected]
      StartError
      Context: Failed to query MAIL-FROM
      ErrorCode: (2) Could not find a valid SPF record
      Error: No DNS data for 'bigjaws.com'.
      EndError
      noneneutral
      Please see http://www.openspf.org/Why?id=employee1%40bigjaws.com&ip=XXX.XXX.XXX.158&receiver=spfquery : Reason: default
      spfquery: XXX.XXX.XXX.158 is neither permitted nor denied by domain of bigjaws.com
      Received-SPF: neutral (spfquery: XXX.XXX.XXX.158 is neither permitted nor denied by domain of bigjaws.com) client-ip=XXX.XXX.XXX.158; [email protected];

    If I go to the address listed above (openspf.org), it tells me that the message should have been accepted(!):

      spfquery rejected a message that claimed an envelope sender address of [email protected]. spfquery received a message from static.158.XXX.XXX.XXX.clients.your-server.de (XXX.XXX.XXX.158) that claimed an envelope sender address of [email protected]. The domain bigjaws.com has authorized static.158.XXX.XXX.XXX.clients.your-server.de (XXX.XXX.XXX.158) to send mail on its behalf, so the message should have been accepted. It is impossible for us to say why it was rejected. What should I do? If the problem persists, contact the bigjaws.com postmaster.

    Also, here are some headers from an email sent by one of our [email protected] addresses to a gmail.com address (by the way, bigjaws.de, listed in the "Received: from" field, was the initial domain hosted on the dedicated server before adding the .com one -- both are still listed as separate subscriptions under Plesk):

      Delivered-To: [email protected]
      Received: by 10.14.177.70 with SMTP id c46csp289656eem;
              Wed, 23 Oct 2013 01:11:00 -0700 (PDT)
      X-Received: by 10.14.102.66 with SMTP id c42mr306186eeg.47.1382515860386;
              Wed, 23 Oct 2013 01:11:00 -0700 (PDT)
      Return-Path: <[email protected]>
      Received: from bigjaws.de (static.158.XXX.XXX.XXX.clients.your-server.de. [XXX.XXX.XXX.158])
              by mx.google.com with ESMTPS id l4si19438578eew.161.2013.10.23.01.10.59
              for <[email protected]>
              (version=TLSv1 cipher=RC4-SHA bits=128/128);
              Wed, 23 Oct 2013 01:10:59 -0700 (PDT)
      Received-SPF: neutral (google.com: XXX.XXX.XXX.158 is neither permitted nor denied by best guess record for domain of [email protected]) client-ip=XXX.XXX.XXX.158;
      Authentication-Results: mx.google.com;
             spf=neutral (google.com: XXX.XXX.XXX.158 is neither permitted nor denied by best guess record for domain of [email protected]) [email protected]
      DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=default; d=bigjaws.com;
             b=WwRAS0WKjp9lO17iMluYPXOHzqRcOueiQT4rPdvy3WFf0QzoXiy6rLfxU/Ra53jL1vlPbwlLNa5gjoJBi7ZwKfUcvs3s02hJI7b3ozl0fEgJtTPKoCfnwl4bLPbtXNFu;
             h=Received:Received:Message-ID:Date:From:User-Agent:MIME-Version:To:Subject:Content-Type:Content-Transfer-Encoding;
      Received: (qmail 22722 invoked from network); 23 Oct 2013 10:10:59 +0200
      Received: from hostname.static.ISP.com (HELO ?192.168.1.60?) (YYY.YYY.ISP.IP)
          by static.158.XXX.XXX.XXX.clients.your-server.de. with ESMTPSA (DHE-RSA-AES256-SHA encrypted, authenticated);
          23 Oct 2013 10:10:59 +0200
      Message-ID: <[email protected]>
      Date: Wed, 23 Oct 2013 11:11:00 +0300
      From: BigJaws Employee <[email protected]>
      User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.0.1
      MIME-Version: 1.0
      To: [email protected]
      Subject: test SPF
      Content-Type: text/plain; charset=ISO-8859-1
      Content-Transfer-Encoding: 7bit

      test SPF

    Any ideas why SPF is not working correctly? Also, are there any DNS settings that are not needed anymore and create a problem?
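
    Two details in the output point in the same direction: spfquery reports "No DNS data for 'bigjaws.com'", and Google falls back to a "best guess record". Both suggest that receivers are not finding an SPF policy in a TXT record. The Route53 record above uses the standalone SPF record type, which most receivers never query (it was later formally deprecated by RFC 7208 in favor of TXT). A hedged first step, assuming the zone currently publishes the policy only as an SPF-type record, is to publish the same string as TXT and verify it resolves:

      ;; What receivers actually look up:
      $ dig +short TXT bigjaws.com

      ;; If nothing comes back, add a TXT record in Route53 alongside
      ;; (or instead of) the SPF-type record, e.g.:
      bigjaws.com.  300  IN  TXT  "v=spf1 ip4:XXX.XXX.XXX.158 mx ~all"

    Once the TTL expires, re-running spfquery should report "pass" rather than "neutral" if this was indeed the missing piece.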

    Read the article

  • How do I setup a secondary incoming mail server?

    - by abrahamvegh
    I currently have a server running Debian 6, with postfix and dovecot handling email. This server hosts email for a number of domains and users, so I use MySQL as my backing store for users and forwardings and everything related. Currently, this server is the only server listed in an MX record for all of the domains it serves. I would like to create a secondary server that would be listed in the DNS with a lower priority (e.g. current primary server is priority 5, secondary would be priority 10), so that in the event that I need to reboot the primary server, or otherwise make it unavailable, the secondary server would receive email, and hold it until the primary server came back up, at which point it would deliver any held email to the primary server. I do not need the secondary server to function as a backup sending server. Users would never need to see the secondary server, they would simply not lose incoming emails if the primary server is down, and they would be unable to send or receive until the primary came back up. How would I go about doing this? I would like to use the same software if they can handle this task, because I’m already familiar with managing them.
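
    Postfix handles the backup MX role natively; the secondary only needs to know which domains to queue for and which recipients are valid (so it does not accept-then-bounce spam). A minimal sketch -- the domain names, map path and queue lifetime here are illustrative, not taken from the setup above:

      # /etc/postfix/main.cf on the secondary (sketch)
      relay_domains = example.com, example.net
      relay_recipient_maps = hash:/etc/postfix/relay_recipients
      smtpd_recipient_restrictions = permit_mynetworks, reject_unauth_destination
      maximal_queue_lifetime = 5d   # hold queued mail while the primary is down

    Since the user list lives in MySQL on the primary, the relay_recipients map would need to be exported from (or queried against) that database. With the DNS set to e.g. "MX 5 primary" and "MX 10 secondary", Postfix on the secondary will queue mail and retry delivery to the primary automatically once it is reachable again.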

    Read the article

  • SQL Server 2005: Improving performance for thousands of insert requests (logout-login time = 120ms)

    - by Rad
    Can somebody shed some light on how SQL Server 2005 deals with many requests issued by a client using ADO.NET 2.0? Below is the shortened output of a SQL trace. I can see that connection pooling is working (I believe there is only one connection being pooled). What is not clear to me is why we have so many sp_reset_connection calls, i.e. a series of Audit Login, SQL:BatchStarting, RPC:Starting and Audit Logout for each iteration of the for loop below. I can see that there is constant switching between the tempdb and master databases, which leads me to conclude that the context is lost when the next connection is created by fetching it from the pool based on the ConnectionString argument. I can see that every 15ms I get 100-200 login/logout pairs per second (reported at the same time by Profiler), and then after another 15ms I again get a series of 100-200 login/logouts per second. I need clarification on how this might affect much more complex insert queries in a production environment. I use Enterprise Library 2006; the code is compiled with VS 2005, and it is a console application that parses a flat file with tens of thousands of rows, grouping parent-child rows. It runs on an application server and runs 2 stored procedures on a remote SQL Server 2005: one inserts a parent record and retrieves the Identity value, which is then used to call the second stored procedure 1, 2 or multiple times (sometimes several thousand), inserting child records. The child table has close to 10 million records, with 5-10 indexes, some of them being covering non-clustered. There is a pretty complex Insert trigger that copies each inserted detail record to an archive table. All in all I only get 7 inserts per second, which means it can take 2-4 hours for 50 thousand records. When I run Profiler on the test server (which is almost equivalent to the production server) I can see that there is about 120ms between the Audit Logout and Audit Login trace entries, which almost gives me the chance to insert about 8 records. So my question is whether there is some way to improve the inserting of records, since the company loads 100 thousand records a day, does daily planning, and has SLAs to fulfill client requests coming in as flat-file orders; some big files of 10 thousand rows have to be processed (imported) quickly. 4 hours to import 60 thousand records should be reduced to 30 minutes. I was thinking of using the BatchSize property of the DataAdapter to send multiple stored procedure calls, SQL bulk inserts to batch multiple inserts from a DataReader or DataTable, or SSIS fast load. But I don't know how to properly analyze re-indexing and statistics population, and maybe that has to take some time to finish. What is worse is that the company uses its biggest table for reporting and other online processing, so the indexes cannot be dropped. I manage transactions manually by setting a field to a value and doing a transactional update, changing that value to a new value that other applications use to get committed rows. Please advise how to approach this problem. For now I am trying to use staging tables with minimal logging, in a separate database and with no indexes, and I will try to do batched (massive) parent-child inserts. I believe the production DB has the simple recovery model, but it could be full recovery. If the DB user used by my .NET console application has the bulkadmin role, does that mean its bulk inserts are minimally logged? I understand that when a table has a clustered and many non-clustered indexes, inserts are still logged for each row. Connection pooling is working, but with many login/logouts. Why?
      for (int i = 1; i <= 10000; i++)
      {
          using (SqlConnection conn = new SqlConnection("server=(local);database=master;integrated security=sspi;"))
          {
              conn.Open();
              using (SqlCommand cmd = conn.CreateCommand())
              {
                  cmd.CommandText = "use tempdb";
                  cmd.ExecuteNonQuery();
              }
          }
      }

    SQL Server Profiler trace:

      Audit Login                              master  2010-01-13 23:18:45.337  1 - Nonpooled
      SQL:BatchStarting  use tempdb            master  2010-01-13 23:18:45.337
      RPC:Starting       exec sp_reset_conn    tempdb  2010-01-13 23:18:45.337
      Audit Logout                             tempdb  2010-01-13 23:18:45.337  2 - Pooled
      Audit Login        -- network protocol   master  2010-01-13 23:18:45.383  2 - Pooled
      SQL:BatchStarting  use tempdb            master  2010-01-13 23:18:45.383
      RPC:Starting       exec sp_reset_conn    tempdb  2010-01-13 23:18:45.383
      Audit Logout                             tempdb  2010-01-13 23:18:45.383  2 - Pooled
      Audit Login        -- network protocol   master  2010-01-13 23:18:45.383  2 - Pooled
      SQL:BatchStarting  use tempdb            master  2010-01-13 23:18:45.383
      RPC:Starting       exec sp_reset_conn    tempdb  2010-01-13 23:18:45.383
      Audit Logout                             tempdb  2010-01-13 23:18:45.383  2 - Pooled
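
    Of the options listed above, SqlBulkCopy (in System.Data.SqlClient since .NET 2.0, so available to a VS 2005 project) is usually the lowest-effort win for the child-record inserts, because it replaces thousands of RPC round trips with one bulk stream. A minimal sketch -- the table name and the shape of the DataTable are hypothetical:

      // Hedged sketch: bulk-load child rows instead of per-row stored procedure calls.
      // Note: by default SqlBulkCopy does NOT fire insert triggers; if the archive
      // trigger must run, add SqlBulkCopyOptions.FireTriggers (which changes perf).
      using (SqlConnection conn = new SqlConnection(connectionString))
      {
          conn.Open();
          using (SqlBulkCopy bulk = new SqlBulkCopy(conn))
          {
              bulk.DestinationTableName = "dbo.ChildRecords"; // hypothetical table
              bulk.BatchSize = 5000;      // rows per round trip; tune empirically
              bulk.BulkCopyTimeout = 600; // seconds
              bulk.WriteToServer(childRowsTable); // a DataTable of child rows
          }
      }

    Combined with the staging-table idea (bulk load into an unindexed heap, then one set-based INSERT ... SELECT into the real table), this often turns hours into minutes, though with the trigger and the many non-clustered indexes the final insert will still be fully logged.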

    Read the article

  • Slow NFS and GFS2 performance

    - by Tiago
    Recently I designed and configured a 4-node cluster for a webapp that does lots of file handling. The cluster has been broken down into 2 main roles, webserver and storage. Each role is replicated to a second server using drbd in active/passive mode. The webserver does an NFS mount of the data directory of the storage server, and the latter also has a webserver running to serve files to browser clients. In the storage servers I created a GFS2 FS to hold the data, which is wired to drbd. I chose GFS2 mainly because of the announced performance and also because the volume size has to be pretty high. Since we entered production I have been facing two problems that I think are deeply connected. First of all, the NFS mount on the webservers keeps hanging for a minute or so and then resumes normal operations. By analyzing the logs I found out that NFS stops answering for a while and outputs the following log lines:

      Oct 15 18:15:42 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
      Oct 15 18:15:44 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
      Oct 15 18:15:46 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
      Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
      Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
      Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
      Oct 15 18:15:48 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
      Oct 15 18:15:48 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
      Oct 15 18:15:51 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
      Oct 15 18:15:52 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
      Oct 15 18:15:52 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
      Oct 15 18:15:55 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
      Oct 15 18:15:55 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
      Oct 15 18:15:58 <server hostname> kernel: nfs: server active.storage.vlan OK
      Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
      Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
      Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
      Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
      Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
      Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
      Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
      Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
      Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
      Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
      Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
      Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
      Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK

    In this case the hang lasted for 16 seconds, but sometimes it takes 1 or 2 minutes to resume normal operations. My first guess was that this was happening due to heavy load on the NFS mount, and that by increasing RPCNFSDCOUNT to a higher value this would become stable.
    I increased it several times and apparently, after a while, the log messages started appearing less often. The value is now 32. After further investigating the issue, I came across a different hang, even though the NFS messages still appear in the logs. Sometimes the GFS2 FS simply hangs, which causes both NFS and the storage webserver to stop serving files. Both stay hung for a while and then resume normal operations. These hangs leave no trace on the client side (there are no "nfs: ... not responding" messages either) and, on the storage side, the log system appears to be empty, even though rsyslogd is running. The nodes are connected through a 10Gbps non-dedicated link, but I don't think this is an issue, because the GFS2 hang is confirmed even when connecting directly to the active storage server. I've been trying to solve this for a while now and I tried different NFS configuration options before I found out that the GFS2 FS is also hanging. The NFS mount is exported as such:

      /srv/data/ <ip_address>(rw,async,no_root_squash,no_all_squash,fsid=25)

    And the NFS client mounts with:

      mount -o "async,hard,intr,wsize=8192,rsize=8192" active.storage.vlan:/srv/data /srv/data

    After some tests, these were the configurations that yielded the most performance for the cluster. I am desperate to find a solution for this, as the cluster is already in production mode; I need to fix it so that these hangs won't happen in the future, and I don't really know for sure what and how I should be benchmarking. What I can tell is that this is happening due to heavy load, as I tested the cluster earlier and these problems weren't happening at all. Please tell me if you need me to provide configuration details of the cluster, and which ones you want me to post. As a last resort I can migrate the files to a different FS, but I need some solid pointers on whether that will solve these problems, as the volume size is extremely large at this point. The servers are hosted by a third-party enterprise and I don't have physical access to them. Best regards.

    EDIT 1: The servers are physical servers and their specs are:

      Webservers: Intel Bi Xeon E5606 2x4 2.13GHz, 24GB DDR3, Intel SSD 320 2 x 120GB RAID 1
      Storage: Intel i5 3550 3.3GHz, 16GB DDR3, 12 x 2TB SATA

    Initially there was a VRack setup between the servers, but we upgraded one of the storage servers to have more RAM and it wasn't inside the VRack. They connect through a shared 10Gbps connection between them. Please note that it is the same connection that is used for public access. They use a single IP (using IP failover) to connect between them and to allow for a graceful failover. NFS is therefore over a public connection and not on any private network (it was before the upgrade, when the problem still existed). The firewall was configured and tested thoroughly, but I disabled it for a while to see if the problem still occurred, and it did. From my knowledge the hosting provider isn't blocking or limiting the connection between either the servers or the public domain (at least under a given bandwidth consumption threshold that hasn't been reached yet). Hope this helps figuring out the problem.
    EDIT 2: Relevant software versions:

      CentOS 2.6.32-279.9.1.el6.x86_64
      nfs-utils-1.2.3-26.el6.x86_64
      nfs-utils-lib-1.1.5-4.el6.x86_64
      gfs2-utils-3.0.12.1-32.el6_3.1.x86_64
      kmod-drbd84-8.4.2-1.el6_3.elrepo.x86_64
      drbd84-utils-8.4.2-1.el6.elrepo.x86_64

    DRBD configuration on the storage servers:

      #/etc/drbd.d/storage.res
      resource storage {
          protocol C;
          on <server1 fqdn> {
              device /dev/drbd0;
              disk /dev/vg_storage/LV_replicated;
              address <server1 ip>:7788;
              meta-disk internal;
          }
          on <server2 fqdn> {
              device /dev/drbd0;
              disk /dev/vg_storage/LV_replicated;
              address <server2 ip>:7788;
              meta-disk internal;
          }
      }

    NFS configuration on the storage servers:

      #/etc/sysconfig/nfs
      RPCNFSDCOUNT=32
      STATD_PORT=10002
      STATD_OUTGOING_PORT=10003
      MOUNTD_PORT=10004
      RQUOTAD_PORT=10005
      LOCKD_UDPPORT=30001
      LOCKD_TCPPORT=30001

    (Can there be any conflict in using the same port for both LOCKD_UDPPORT and LOCKD_TCPPORT?)

    GFS2 configuration:

      # gfs2_tool gettune <mountpoint>
      incore_log_blocks = 1024
      log_flush_secs = 60
      quota_warn_period = 10
      quota_quantum = 60
      max_readahead = 262144
      complain_secs = 10
      statfs_slow = 0
      quota_simul_sync = 64
      statfs_quantum = 30
      quota_scale = 1.0000 (1, 1)
      new_files_jdata = 0

    Storage network environment:

      eth0    Link encap:Ethernet  HWaddr <mac address>
              inet addr:<ip address>  Bcast:<bcast address>  Mask:<ip mask>
              inet6 addr: <ip address> Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:957025127 errors:0 dropped:0 overruns:0 frame:0
              TX packets:1473338731 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:2630984979622 (2.3 TiB)  TX bytes:1648430431523 (1.4 TiB)

      eth0:0  Link encap:Ethernet  HWaddr <mac address>
              inet addr:<ip failover address>  Bcast:<bcast address>  Mask:<ip mask>
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

    The IP addresses are statically assigned with the given network configurations:

      DEVICE="eth0"
      BOOTPROTO="static"
      HWADDR=<mac address>
      ONBOOT="yes"
      TYPE="Ethernet"
      IPADDR=<ip address>
      NETMASK=<net mask>

    and

      DEVICE="eth0:0"
      BOOTPROTO="static"
      HWADDR=<mac address>
      IPADDR=<ip failover>
      NETMASK=<net mask>
      ONBOOT="yes"
      BROADCAST=<bcast address>

    Hosts file to allow for a graceful NFS failover, in conjunction with the NFS option fsid=25 set on both storage servers:

      #/etc/hosts
      <storage ip failover address>    active.storage.vlan
      <webserver ip failover address>  active.service.vlan

    As you can see, packet errors are down to 0. I've also run ping for a long time without any packet loss. MTU size is the normal 1500. As there is no VLAN for now, this is the MTU used to communicate between servers. The webservers' network environment is similar. One thing I forgot to mention is that the storage servers handle ~200GB of new files each day through the NFS connection, which is a key point for me to think this is some kind of heavy-load problem with either NFS or GFS2. If you need further configuration details please tell me.

    EDIT 3: Earlier today we had a major filesystem crash on the storage server. I couldn't get the details of the crash right away because the server stopped responding. After the reboot, I noticed the filesystem was extremely slow, and I was not able to serve a single file through either NFS or httpd, perhaps due to cache warming or so. Nevertheless, I've been monitoring the server closely and the following error came up in dmesg. The source of the problem is clearly GFS, which is waiting for a lock and ends up starving after a while.

      INFO: task nfsd:3029 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. nfsd D 0000000000000000 0 3029 2 0x00000080 ffff8803814f79e0 0000000000000046 0000000000000000 ffffffff8109213f ffff880434c5e148 ffff880624508d88 ffff8803814f7960 ffffffffa037253f ffff8803815c1098 ffff8803814f7fd8 000000000000fb88 ffff8803815c1098 Call Trace: [<ffffffff8109213f>] ? wake_up_bit+0x2f/0x40 [<ffffffffa037253f>] ? gfs2_holder_wake+0x1f/0x30 [gfs2] [<ffffffff814ff42e>] __mutex_lock_slowpath+0x13e/0x180 [<ffffffff814ff2cb>] mutex_lock+0x2b/0x50 [<ffffffffa0379f21>] gfs2_log_reserve+0x51/0x190 [gfs2] [<ffffffffa0390da2>] gfs2_trans_begin+0x112/0x1d0 [gfs2] [<ffffffffa0369b05>] ? gfs2_dir_check+0x35/0xe0 [gfs2] [<ffffffffa0377943>] gfs2_createi+0x1a3/0xaa0 [gfs2] [<ffffffff8121aab1>] ? avc_has_perm+0x71/0x90 [<ffffffffa0383d1e>] gfs2_create+0x7e/0x1a0 [gfs2] [<ffffffffa037783f>] ? gfs2_createi+0x9f/0xaa0 [gfs2] [<ffffffff81188cf4>] vfs_create+0xb4/0xe0 [<ffffffffa04217d6>] nfsd_create_v3+0x366/0x4c0 [nfsd] [<ffffffffa0429703>] nfsd3_proc_create+0x123/0x1b0 [nfsd] [<ffffffffa041a43e>] nfsd_dispatch+0xfe/0x240 [nfsd] [<ffffffffa025a5d4>] svc_process_common+0x344/0x640 [sunrpc] [<ffffffff810602a0>] ? default_wake_function+0x0/0x20 [<ffffffffa025ac10>] svc_process+0x110/0x160 [sunrpc] [<ffffffffa041ab62>] nfsd+0xc2/0x160 [nfsd] [<ffffffffa041aaa0>] ? nfsd+0x0/0x160 [nfsd] [<ffffffff81091de6>] kthread+0x96/0xa0 [<ffffffff8100c14a>] child_rip+0xa/0x20 [<ffffffff81091d50>] ? kthread+0x0/0xa0 [<ffffffff8100c140>] ? child_rip+0x0/0x20

    Read the article

  • Deploying Axis2 on GlassFish

    - by user115524
    I'm trying to deploy the Axis2 v1.6.2 war on GlassFish v3.1.2 and I'm having some problems... I need to develop a few web services, and since the main app is being served on GlassFish I was hoping to deploy Axis2 on it so I could test things. I used the GlassFish administration pages to deploy the war downloaded from the Apache site, but after pointing the application deployment form to this war I'm getting an error, and this is the (long) stack trace:

      [#|2013-06-27T11:34:40.701+0200|SEVERE|glassfish3.1.2|javax.enterprise.system.container.web.com.sun.enterprise.web|_ThreadID=84;_ThreadName=admin-thread-pool-4848(4);|WebModule[/axis23403634363287739103]StandardWrapper.Throwable
      java.lang.ExceptionInInitializerError
        at org.apache.axis2.transport.http.AxisServlet.initConfigContext(AxisServlet.java:584)
        at org.apache.axis2.transport.http.AxisServlet.init(AxisServlet.java:454)
        at org.apache.catalina.core.StandardWrapper.initServlet(StandardWrapper.java:1453)
        at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:1250)
        at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:5093)
        at org.apache.catalina.core.StandardContext.start(StandardContext.java:5380)
        at com.sun.enterprise.web.WebModule.start(WebModule.java:498)
        at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:917)
        at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:901)
        at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:733)
        at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:2019)
        at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1669)
        at com.sun.enterprise.web.WebApplication.start(WebApplication.java:109)
        at org.glassfish.internal.data.EngineRef.start(EngineRef.java:130)
        at org.glassfish.internal.data.ModuleInfo.start(ModuleInfo.java:269)
        at org.glassfish.internal.data.ApplicationInfo.start(ApplicationInfo.java:301)
        at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:461)
        at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:240)
        at org.glassfish.deployment.admin.DeployCommand.execute(DeployCommand.java:389)
        at com.sun.enterprise.v3.admin.CommandRunnerImpl$1.execute(CommandRunnerImpl.java:348)
        at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:363)
        at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:1085)
        at com.sun.enterprise.v3.admin.CommandRunnerImpl.access$1200(CommandRunnerImpl.java:95)
        at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1291)
        at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1259)
        at org.glassfish.admin.rest.ResourceUtil.runCommand(ResourceUtil.java:214)
        at org.glassfish.admin.rest.ResourceUtil.runCommand(ResourceUtil.java:207)
        at org.glassfish.admin.rest.resources.TemplateListOfResource.createResource(TemplateListOfResource.java:148)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
        at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
        at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
        at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
        at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:134)
        at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
        at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:134)
        at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
        at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
        at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
        at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
        at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
        at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
        at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
        at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
        at com.sun.jersey.server.impl.container.grizzly.GrizzlyContainer._service(GrizzlyContainer.java:182)
        at com.sun.jersey.server.impl.container.grizzly.GrizzlyContainer.service(GrizzlyContainer.java:147)
        at org.glassfish.admin.rest.adapter.RestAdapter.service(RestAdapter.java:148)
        at com.sun.grizzly.tcp.http11.GrizzlyAdapter.service(GrizzlyAdapter.java:179)
        at com.sun.enterprise.v3.server.HK2Dispatcher.dispath(HK2Dispatcher.java:117)
        at com.sun.enterprise.v3.services.impl.ContainerMapper$Hk2DispatcherCallable.call(ContainerMapper.java:354)
        at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:195)
        at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:860)
        at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:757)
        at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:1056)
        at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:229)
        at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137)
        at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104)
        at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90)
        at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79)
        at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54)
        at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59)
        at com.sun.grizzly.ContextTask.run(ContextTask.java:71)
        at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532)
        at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513)
        at java.lang.Thread.run(Thread.java:724)
      Caused by: org.apache.commons.logging.LogConfigurationException: User-specified log class 'org.apache.commons.logging.impl.Log4JLogger' cannot be found or is not useable.
        at org.apache.commons.logging.impl.LogFactoryImpl.discoverLogImplementation(LogFactoryImpl.java:874)
        at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:604)
        at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:336)
        at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:310)
        at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:685)
        at org.apache.axis2.deployment.DeploymentEngine.<clinit>(DeploymentEngine.java:76)
        ... 68 more
      |#]

    -- cut -- ... the end of the stack trace:

      [#|2013-06-27T11:34:40.714+0200|SEVERE|glassfish3.1.2|javax.enterprise.system.tools.admin.org.glassfish.deployment.admin|_ThreadID=84;_ThreadName=admin-thread-pool-4848(4);|Exception while invoking class com.sun.enterprise.web.WebApplication start method
      java.lang.Exception: java.lang.IllegalStateException: ContainerBase.addChild: start: org.apache.catalina.LifecycleException: org.apache.catalina.LifecycleException: org.apache.commons.logging.LogConfigurationException: User-specified log class 'org.apache.commons.logging.impl.Log4JLogger' cannot be found or is not useable.
        at com.sun.enterprise.web.WebApplication.start(WebApplication.java:138)
        at org.glassfish.internal.data.EngineRef.start(EngineRef.java:130)
        at org.glassfish.internal.data.ModuleInfo.start(ModuleInfo.java:269)
        at org.glassfish.internal.data.ApplicationInfo.start(ApplicationInfo.java:301)
        at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:461)
        at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:240)
        at org.glassfish.deployment.admin.DeployCommand.execute(DeployCommand.java:389)
        at com.sun.enterprise.v3.admin.CommandRunnerImpl$1.execute(CommandRunnerImpl.java:348)
        at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:363)
        at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:1085)
        at com.sun.enterprise.v3.admin.CommandRunnerImpl.access$1200(CommandRunnerImpl.java:95)
        at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1291)
        at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1259)
        at org.glassfish.admin.rest.ResourceUtil.runCommand(ResourceUtil.java:214)
        at org.glassfish.admin.rest.ResourceUtil.runCommand(ResourceUtil.java:207)
        at org.glassfish.admin.rest.resources.TemplateListOfResource.createResource(TemplateListOfResource.java:148)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
        at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
        at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
        at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
        at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:134)
        at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
        at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:134)
        at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
        at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
        at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
        at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
        at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
        at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
        at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
        at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
        at com.sun.jersey.server.impl.container.grizzly.GrizzlyContainer._service(GrizzlyContainer.java:182)
        at com.sun.jersey.server.impl.container.grizzly.GrizzlyContainer.service(GrizzlyContainer.java:147)
        at org.glassfish.admin.rest.adapter.RestAdapter.service(RestAdapter.java:148)
        at com.sun.grizzly.tcp.http11.GrizzlyAdapter.service(GrizzlyAdapter.java:179)
        at com.sun.enterprise.v3.server.HK2Dispatcher.dispath(HK2Dispatcher.java:117)
        at com.sun.enterprise.v3.services.impl.ContainerMapper$Hk2DispatcherCallable.call(ContainerMapper.java:354)
        at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:195)
        at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:860)
        at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:757)
        at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:1056)
        at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:229)
        at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137)
        at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104)
        at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90)
        at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79)
        at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54)
        at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59)
        at com.sun.grizzly.ContextTask.run(ContextTask.java:71)
        at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532)
        at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513)
        at java.lang.Thread.run(Thread.java:724)
      |#]

      [#|2013-06-27T11:34:40.715+0200|SEVERE|glassfish3.1.2|javax.enterprise.system.core.com.sun.enterprise.v3.server|_ThreadID=84;_ThreadName=admin-thread-pool-4848(4);|Exception while loading the app|#]

      [#|2013-06-27T11:34:40.862+0200|SEVERE|glassfish3.1.2|javax.enterprise.system.tools.admin.org.glassfish.deployment.admin|_ThreadID=84;_ThreadName=admin-thread-pool-4848(4);|Exception while loading the app : java.lang.IllegalStateException: ContainerBase.addChild: start: org.apache.catalina.LifecycleException: org.apache.catalina.LifecycleException: org.apache.commons.logging.LogConfigurationException: User-specified log class 'org.apache.commons.logging.impl.Log4JLogger' cannot be found or is not useable.|#]

      [#|2013-06-27T11:34:40.875+0200|INFO|glassfish3.1.2|org.glassfish.admingui|_ThreadID=85;_ThreadName=admin-thread-pool-4848(5);|Exception Occurred :Error occurred during deployment: Exception while loading the app : java.lang.IllegalStateException: ContainerBase.addChild: start: org.apache.catalina.LifecycleException: org.apache.catalina.LifecycleException: org.apache.commons.logging.LogConfigurationException: User-specified log class 'org.apache.commons.logging.impl.Log4JLogger' cannot be found or is not useable.. Please see server.log for more details.|#]

    Is it even possible to deploy Axis2 on GlassFish?
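
    The root cause is the last line of both traces: commons-logging inside the Axis2 war discovers a log4j configuration but cannot load org.apache.commons.logging.impl.Log4JLogger under GlassFish's class loaders. So yes, Axis2 can generally be deployed on GlassFish; the logging discovery just has to be pinned down. A commonly suggested workaround -- sketched here, not verified against this exact version combination -- is to force commons-logging onto the JDK logger by adding a properties file inside the war:

      # WEB-INF/classes/commons-logging.properties (added to axis2.war)
      org.apache.commons.logging.Log=org.apache.commons.logging.impl.Jdk14Logger

    The alternative in the same spirit is to drop a log4j jar into the war's WEB-INF/lib, so that Log4JLogger can actually be constructed by the same class loader.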

    Read the article

  • Interesting things – Twitter annotations and your phone as a web server

    - by jamiet
    I overheard/read a couple of things today that really made me, data junkie that I am, take a step back and think, “Hmmm, yeah, that could be really interesting”, and I wanted to make a note of them here so that (a) I could bring them to the attention of anyone that happens to read this and (b) I can maybe come back here in a few years and see if either of these have come to fruition.

    Your phone as a web server

    I was listening today to Jon Udell’s (twitter) “Interviews with Innovators Podcast”, in which he interviewed Herbert Van de Sompel (twitter) about his Memento project. During the interview Jon and Herbert made the following remarks:

    Jon: [some people] really had this vision of a web of servers, the notion that every node on the internet, every connected entity, is potentially a server and a client…we can see where we’re getting to a point where these endpoint devices we have in our pockets are going to be massively capable and it may be in the not too distant future that significant chunks of the web archive will be cached all over the place including on your own machine…

    Herbert: wasn’t it Opera who at one point turned your browser into a server?

    That really got my brain ticking. We all carry a mobile phone with us, and therefore we all potentially carry a mobile web server with us as well, and to my mind the only things really stopping that from happening are the capabilities of the phone hardware, the capabilities of the network infrastructure, and the will to just bloody do it. Certainly all the standards required for addressing a web server on a phone already exist (to this uninitiated observer DNS and IPv6 seem to solve that problem), so why not? I tweeted about the idea, and Rory Street answered back with “why would you want a phone to be a web server?”. It’s a fair question, and one that I would like to try and answer. Mobile phones are increasingly becoming our window onto the world as we use them to upload messages to Twitter, record our location on FourSquare or interact with our friends on Facebook, but in each of these cases some other service is acting as our intermediary; to see what I’m thinking you have to go via Twitter, to see where I am you have to go to FourSquare (I’m using ‘I’ liberally, I don’t actually use FourSquare before you ask). Why should this have to be the case? Why can’t that data be decentralised? Why can’t we be masters of our own data universe? If my phone acted as a web server then I could expose all of that information without needing those intermediary services. I see a time when we can pass around URLs such as the following:

      http://jamiesphone.net/location/current – Where is Jamie right now?
      http://jamiesphone.net/location/2010-04-21 – Where was Jamie on 21st April 2010?
      http://jamiesphone.net/thoughts/current – What’s on Jamie’s mind right now?
      http://jamiesphone.net/blog – What documents is Jamie sharing with me?
      http://jamiesphone.net/calendar/next7days – Where is Jamie planning to be over the next 7 days?

    and those URLs get served off of the phone in our pockets. If we govern that data then we can control who has access to it and (crucially) how long it is available for. Want to wipe yourself off the face of the web? It’s pretty easy if you’re in control of all the data -- just turn your phone off. None of this exists today, but I look forward to a time when it does. Opera really were onto something last June when they announced Opera Unite (admittedly Unite only works because Opera provide an intermediary DNS-alike system -- it isn’t totally decentralised).

    Opening up Twitter annotations

    Last week Twitter held their first developer conference, called Chirp, where they announced an upcoming new feature called ‘Twitter Annotations’; in short, this will allow us to attach metadata to a tweet, thus enhancing the tweet itself. Think of it as a richer version of hashtags. To think of it another way, Twitter are turning their data into a humongous Entity-Attribute-Value or triple-tuple store. That alone has huge implications both for the web and Twitter as a whole -- the ability to enrich those 140 characters of data and thus make them more useful is indeed compelling -- however today I stumbled upon a blog post from Eugene Mandel entitled “Tweet Annotations – a Way to a Metadata Marketplace?” where he proposed the idea of allowing tweets to have metadata added by people other than the person who tweeted the original tweet. This idea really fascinated me, especially when I read some of the potential uses that Eugene and his commenters suggested. They included:

    - Amazon could attach an ISBN to a tweet that mentions a book. Specialist client apps for book lovers could be built up around this metadata.
    - Advertisers could pay to place adverts in metadata. The revenue generated from those adverts could be shared with the tweeter or the people who add the metadata.

    Granted, allowing anyone to add metadata to a tweet has the potential to create a spam problem the like of which we haven’t even envisaged, but spam hasn’t halted the growth of the web, and neither should it halt the growth of data annotations either. The original tweeter should of course be able to determine who can add metadata and whether it should be moderated. As Eugene says himself:

      Opening publishing tweet annotations to anyone will open the way to a marketplace of metadata where client developers, data mining companies and advertisers can add new meaning to Twitter and build innovative businesses.

    What Eugene and his followers did not mention is what I think is potentially the most fascinating use of opening up annotations. Google’s success today is built on their PageRank algorithm, which measures the validity of a web page by the number of incoming links to it and the page rank of the sites containing those links -- it’s a system built on reputation. Twitter annotations could open up a new paradigm, however -- let’s call it “people rank” -- where reputation can be measured by the metadata that people choose to apply to links and the websites containing those links. It’s not hard to see why Google and Microsoft have paid big bucks to get access to the Twitter firehose! Neither of these features, phones as web servers or the ability to add annotations to other people’s tweets, exists today, but I strongly believe that they could dramatically enhance the web as we know it. I hope to look back on this blog post in a few years in the knowledge that these ideas have been put into place. @Jamiet

    Read the article

  • SQLAuthority News – Community Service and Public Speaking Engagements

    - by pinaldave
    Today is the last day of the year, and I was going over my memories of 2010. Almost all of them are good, and I feel that I am for sure a better person in terms of knowledge, nature and as a human being overall. Looking back at the year is very satisfying, as I was able to go out in public and help the community in various capacities. Though most of the time my contribution was as a speaker, many times I reached out, helped organize events and worked in whatever capacity was needed to get the event out. I have taken part in many TechEds, PASS events, Virtual Tech Days and various community events around the globe, and my contribution is not limited to my country only. Overall, I feel good to be part of this wonderful and supportive community.

- SQLAuthority News – A Successful Community TechDays at Ahmedabad – December 11, 2010
- SQLAuthority News – A Successful Performance Tuning Seminar at Pune – Dec 4-5, 2010
- SQL SERVER – A Successful Performance Tuning Seminar – Hyderabad – Nov 27-28, 2010 – Next Pune
- SQLAuthority News – SQLPASS Nov 8-11, 2010-Seattle – An Alternative Look at Experience
- SQLAuthority News – Statistics and Best Practices – Virtual Tech Days – Nov 22, 2010
- SQLAuthority News – SQL Server Performance Optimizations Seminar – Grand Success – Colombo, Sri Lanka – Oct 4 – 5, 2010
- SQL SERVER – Visiting Alma Mater – Delivering Session on Database Performance and Career – Nirma Institute of Technology
- SQLAuthority News – Feedback Received for Virtual Tech Days Sessions on Spatial Database
- SQLAuthority News – Community Tech Days, Ahmedabad – July 24, 2010
- SQLAuthority News – SQL Data Camp, Chennai, July 17, 2010 – A Huge Success
- SQLAuthority News – 2 Sessions at TechInsight 2010 – June 29 – July 1, 2010
- SQLAuthority News – Author Visit – SQL Server 2008 R2 Launch
- SQLAuthority News – Professional Development and Community
- SQLAuthority News – TechEd India – April 12-14, 2010 Bangalore – An Unforgettable Experience – An Opportunity of A Lifetime
- SQLAuthority News – Speaking Sessions at TechEd India – 3 Sessions – 1 Panel Discussion
- SQLAuthority News – Meeting with Allen Bailochan Tuladhar – An Unlimited Experience
- SQLAuthority News – Author Visit Review – TechMela Nepal – March 29-30, 2010
- SQLAuthority News – Excellent Event – TechEd Sri Lanka – Feb 8, 2010
- SQLAuthority News – Hyderabad Techies February Fever Feb 11, 2010 – Indexing for Performance
- SQLAuthority News – MUGH – Microsoft User Group Hyderabad – Feb 2, 2010 Session Review
- SQLAuthority News – Ahmedabad Community Tech Days – Jan 30, 2010 – Huge Success

For earlier years' contributions you can check my webpage over here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Book Review, SQLAuthority News, T SQL, Technology

    Read the article

  • Developing custom MBeans to manage J2EE Applications (Part III)

    - by philippe Le Mouel
    This is the third and final part in a series of blogs that demonstrate how to add management capability to your own application using JMX MBeans.

In Part I we saw:
- How to implement a custom MBean to manage configuration associated with an application.
- How to package the resulting code and configuration as part of the application's ear file.
- How to register MBeans upon application startup, and unregister them upon application stop (or undeployment).
- How to use generic JMX clients such as JConsole to browse and edit our application's MBean.

In Part II we saw:
- How to add localized descriptions to our MBean, MBean attributes, MBean operations and MBean operation parameters.
- How to specify meaningful names for our MBean operation parameters.

We also touched on future enhancements that will simplify how we can implement localized MBeans. In this third and last part, we will re-write our MBean to simplify how we add localized descriptions. To do so we will take advantage of the functionality we already described in Part II, which is now part of WebLogic 10.3.3.0. We will show how to use WebLogic's localization support to localize our MBeans based on the client's Locale, independently of the server's Locale: each client will see MBean descriptions localized according to his or her own Locale. We will demonstrate this using JConsole, and also using a sample programmatic JMX Java client. The complete code sample and associated build files for Part III are available as a zip file. The code has been tested against WebLogic Server 10.3.3.0 and JDK 6. To build and deploy our sample application, please follow the instructions provided in Part I, as they also apply to Part III's code and associated zip file.

Providing custom descriptions, take II

In Part II we localized our MBean descriptions by extending the StandardMBean class and overriding its many getDescription methods. WebLogic 10.3.3.0, similarly to JDK 7, can automatically localize MBean descriptions as long as those are specified according to the following conventions. Description resource bundle keys are named:
- MBean description: <MBeanInterfaceClass>.mbean
- MBean attribute description: <MBeanInterfaceClass>.attribute.<AttributeName>
- MBean operation description: <MBeanInterfaceClass>.operation.<OperationName>
- MBean operation parameter description: <MBeanInterfaceClass>.operation.<OperationName>.<ParameterName>
- MBean constructor description: <MBeanInterfaceClass>.constructor.<ConstructorName>
- MBean constructor parameter description: <MBeanInterfaceClass>.constructor.<ConstructorName>.<ParameterName>

We also purposely named our resource bundle class MBeanDescriptions and included it as part of the same package as our MBean.
We already followed the above conventions when creating our resource bundle in Part II, and our default resource bundle class with English descriptions looks like:

package blog.wls.jmx.appmbean;

import java.util.ListResourceBundle;

public class MBeanDescriptions extends ListResourceBundle {
    protected Object[][] getContents() {
        return new Object[][] {
            {"PropertyConfigMXBean.mbean",
             "MBean used to manage persistent application properties"},
            {"PropertyConfigMXBean.attribute.Properties",
             "Properties associated with the running application"},
            {"PropertyConfigMXBean.operation.setProperty",
             "Create a new property, or change the value of an existing property"},
            {"PropertyConfigMXBean.operation.setProperty.key",
             "Name that identify the property to set."},
            {"PropertyConfigMXBean.operation.setProperty.value",
             "Value for the property being set"},
            {"PropertyConfigMXBean.operation.getProperty",
             "Get the value for an existing property"},
            {"PropertyConfigMXBean.operation.getProperty.key",
             "Name that identify the property to be retrieved"}
        };
    }
}

We have now also added a resource bundle with French localized descriptions:

package blog.wls.jmx.appmbean;

import java.util.ListResourceBundle;

public class MBeanDescriptions_fr extends ListResourceBundle {
    protected Object[][] getContents() {
        return new Object[][] {
            {"PropertyConfigMXBean.mbean",
             "Manage proprietes sauvegarde dans un fichier disque."},
            {"PropertyConfigMXBean.attribute.Properties",
             "Proprietes associee avec l'application en cour d'execution"},
            {"PropertyConfigMXBean.operation.setProperty",
             "Construit une nouvelle proprietee, ou change la valeur d'une proprietee existante."},
            {"PropertyConfigMXBean.operation.setProperty.key",
             "Nom de la propriete dont la valeur est change."},
            {"PropertyConfigMXBean.operation.setProperty.value",
             "Nouvelle valeur"},
            {"PropertyConfigMXBean.operation.getProperty",
             "Retourne la valeur d'une propriete existante."},
            {"PropertyConfigMXBean.operation.getProperty.key",
             "Nom de la propriete a retrouver."}
        };
    }
}

So now we can just remove the many getDescription methods from our MBean code and end up with a much cleaner:

package blog.wls.jmx.appmbean;

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.File;
import java.net.URL;
import java.util.Map;
import java.util.HashMap;
import java.util.Properties;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.MBeanRegistration;
import javax.management.StandardMBean;
import javax.management.MBeanOperationInfo;
import javax.management.MBeanParameterInfo;

public class PropertyConfig extends StandardMBean
        implements PropertyConfigMXBean, MBeanRegistration {

    private String relativePath_ = null;
    private Properties props_ = null;
    private File resource_ = null;
    private static Map operationsParamNames_ = null;

    static {
        operationsParamNames_ = new HashMap();
        operationsParamNames_.put("setProperty", new String[] {"key", "value"});
        operationsParamNames_.put("getProperty", new String[] {"key"});
    }

    public PropertyConfig(String relativePath) throws Exception {
        super(PropertyConfigMXBean.class, true);
        props_ = new Properties();
        relativePath_ = relativePath;
    }

    public String setProperty(String key, String value) throws IOException {
        String oldValue = null;
        if (value == null) {
            oldValue = String.class.cast(props_.remove(key));
        } else {
            oldValue = String.class.cast(props_.setProperty(key, value));
        }
        save();
        return oldValue;
    }

    public String getProperty(String key) {
        return props_.getProperty(key);
    }

    public Map getProperties() {
        return (Map) props_;
    }

    private void load() throws IOException {
        InputStream is = new FileInputStream(resource_);
        try {
            props_.load(is);
        } finally {
            is.close();
        }
    }

    private void save() throws IOException {
        OutputStream os = new FileOutputStream(resource_);
        try {
            props_.store(os, null);
        } finally {
            os.close();
        }
    }

    public ObjectName preRegister(MBeanServer server, ObjectName name)
            throws Exception {
        // MBean must be registered from an application thread
        // to have access to the application ClassLoader
        ClassLoader cl = Thread.currentThread().getContextClassLoader();
        URL resourceUrl = cl.getResource(relativePath_);
        resource_ = new File(resourceUrl.toURI());
        load();
        return name;
    }

    public void postRegister(Boolean registrationDone) { }
    public void preDeregister() throws Exception { }
    public void postDeregister() { }

    protected String getParameterName(MBeanOperationInfo op,
                                      MBeanParameterInfo param,
                                      int sequence) {
        return operationsParamNames_.get(op.getName())[sequence];
    }
}

The only reason we are still extending the StandardMBean class is to override the default values for our operation parameter names. If this isn't a concern, then one could just write the following code:

package blog.wls.jmx.appmbean;

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.File;
import java.net.URL;
import java.util.Map;
import java.util.Properties;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.MBeanRegistration;

public class PropertyConfig implements PropertyConfigMXBean, MBeanRegistration {

    private String relativePath_ = null;
    private Properties props_ = null;
    private File resource_ = null;

    public PropertyConfig(String relativePath) throws Exception {
        props_ = new Properties();
        relativePath_ = relativePath;
    }

    public String setProperty(String key, String value) throws IOException {
        String oldValue = null;
        if (value == null) {
            oldValue = String.class.cast(props_.remove(key));
        } else {
            oldValue = String.class.cast(props_.setProperty(key, value));
        }
        save();
        return oldValue;
    }

    public String getProperty(String key) {
        return props_.getProperty(key);
    }

    public Map getProperties() {
        return (Map) props_;
    }

    private void load() throws IOException {
        InputStream is = new FileInputStream(resource_);
        try {
            props_.load(is);
        } finally {
            is.close();
        }
    }

    private void save() throws IOException {
        OutputStream os = new FileOutputStream(resource_);
        try {
            props_.store(os, null);
        } finally {
            os.close();
        }
    }

    public ObjectName preRegister(MBeanServer server, ObjectName name)
            throws Exception {
        // MBean must be registered from an application thread
        // to have access to the application ClassLoader
        ClassLoader cl = Thread.currentThread().getContextClassLoader();
        URL resourceUrl = cl.getResource(relativePath_);
        resource_ = new File(resourceUrl.toURI());
        load();
        return name;
    }

    public void postRegister(Boolean registrationDone) { }
    public void preDeregister() throws Exception { }
    public void postDeregister() { }
}

Note: the above would also require changing the operation parameter names in the resource bundle classes.
For instance, PropertyConfigMXBean.operation.setProperty.key would become PropertyConfigMXBean.operation.setProperty.p0.

Client-based localization

When accessing our MBean using JConsole started with the following command line:

jconsole -J-Djava.class.path=$JAVA_HOME/lib/jconsole.jar:$JAVA_HOME/lib/tools.jar:$WL_HOME/server/lib/wljmxclient.jar -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote -debug

we see that our MBean descriptions are localized according to the WebLogic server's Locale (English in this case). Note: consult Part I for information on how to use JConsole to browse and edit our MBean. Now if we specify the client's Locale as part of the JConsole command line, as follows:

jconsole -J-Djava.class.path=$JAVA_HOME/lib/jconsole.jar:$JAVA_HOME/lib/tools.jar:$WL_HOME/server/lib/wljmxclient.jar -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote -J-Dweblogic.management.remote.locale=fr-FR -debug

we see that our MBean descriptions are now localized according to the specified client's Locale (French in this case).

We use the weblogic.management.remote.locale system property to specify the Locale that should be associated with the client's JMX connections. The value is composed of the client's language code and its country code, separated by the '-' character. The country code is not required and can be omitted; for instance:

-Dweblogic.management.remote.locale=fr

We can also specify the client's Locale using a programmatic client, as demonstrated below:

package blog.wls.jmx.appmbean.client;

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.MBeanInfo;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXServiceURL;
import javax.management.remote.JMXConnectorFactory;
import java.util.Hashtable;
import java.util.Set;
import java.util.Locale;

public class JMXClient {
    public static void main(String[] args) throws Exception {
        JMXConnector jmxCon = null;
        try {
            JMXServiceURL serviceUrl = new JMXServiceURL(
                "service:jmx:iiop://127.0.0.1:7001/jndi/weblogic.management.mbeanservers.runtime");
            System.out.println("Connecting to: " + serviceUrl);

            // properties associated with the connection
            Hashtable env = new Hashtable();
            env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
                    "weblogic.management.remote");
            String[] credentials = new String[2];
            credentials[0] = "weblogic";
            credentials[1] = "weblogic";
            env.put(JMXConnector.CREDENTIALS, credentials);
            // specifies the client's Locale
            env.put("weblogic.management.remote.locale", Locale.FRENCH);

            jmxCon = JMXConnectorFactory.newJMXConnector(serviceUrl, env);
            jmxCon.connect();
            MBeanServerConnection con = jmxCon.getMBeanServerConnection();

            Set<ObjectName> mbeans = con.queryNames(
                new ObjectName(
                    "blog.wls.jmx.appmbean:name=myAppProperties,type=PropertyConfig,*"),
                null);
            for (ObjectName mbeanName : mbeans) {
                System.out.println("\n\nMBEAN: " + mbeanName);
                MBeanInfo minfo = con.getMBeanInfo(mbeanName);
                System.out.println("MBean Description: " + minfo.getDescription());
                System.out.println("\n");
            }
        } finally {
            // release the connection
            if (jmxCon != null) jmxCon.close();
        }
    }
}

The above client code is part of the zip file associated with this blog, and can be run using the provided client.sh script.
The resulting output is shown below:

$ ./client.sh
Connecting to: service:jmx:iiop://127.0.0.1:7001/jndi/weblogic.management.mbeanservers.runtime

MBEAN: blog.wls.jmx.appmbean:type=PropertyConfig,name=myAppProperties
MBean Description: Manage proprietes sauvegarde dans un fichier disque.
$

Miscellaneous

Using the Description annotation to specify MBean descriptions

Earlier we saw how to name our MBean description resource keys so that WebLogic 10.3.3.0 automatically uses them to localize our MBean. In some cases we might want to explicitly specify the resource key and resource bundle, for instance when operations are overloaded and the operation name is no longer sufficient to uniquely identify a single operation. In this case we can use the Description annotation provided by WebLogic, as follows:

import weblogic.management.utils.Description;

@Description(resourceKey="myapp.resources.TestMXBean.description",
             resourceBundleBaseName="myapp.resources.MBeanResources")
public interface TestMXBean {

    @Description(resourceKey="myapp.resources.TestMXBean.threshold.description",
                 resourceBundleBaseName="myapp.resources.MBeanResources")
    public int getthreshold();

    @Description(resourceKey="myapp.resources.TestMXBean.reset.description",
                 resourceBundleBaseName="myapp.resources.MBeanResources")
    public int reset(
        @Description(resourceKey="myapp.resources.TestMXBean.reset.id.description",
                     resourceBundleBaseName="myapp.resources.MBeanResources",
                     displayNameKey="myapp.resources.TestMXBean.reset.id.displayName.description")
        int id);
}

The Description annotation should be applied to the MBean interface. It can be used to specify MBean, MBean attribute, MBean operation and MBean operation parameter descriptions, as demonstrated above.

Retrieving the Locale associated with a JMX operation from the MBean code

There are several cases where it is necessary to retrieve the Locale associated with a JMX call from the MBean implementation; for instance, this can be useful when localizing exception messages. This can be done as follows:

import weblogic.management.mbeanservers.JMXContextUtil;
......
// some MBean method implementation
public String setProperty(String key, String value) throws IOException {
    Locale callersLocale = JMXContextUtil.getLocale();
    // use callersLocale to localize Exception messages or
    // potentially some return values such as a Date
    ....
}

Conclusion

With this last part we conclude our three-part series on how to write MBeans to manage J2EE applications. We are far from having exhausted this particular topic, but we have come a long way and are now able to take advantage of the latest functionality provided by the WebLogic application server to write user-friendly MBeans.

    Read the article

  • SQLAuthority News – Random Thoughts and Random Ideas

    - by pinaldave
    There are days when I keep on wondering about SQL, and even about my life overall. Today is Saturday, so I decided to write about SQL Server. Just like any other morning, I woke up at 5 and opened my blog editor. I usually do not open Twitter or Facebook when I am planning to focus on my work, as they are little distractions for me, but today I opened my Twitter account and came across a very interesting quote from a friend: "Can I expect you to be different today?" It was a very powerful quote to read first thing on a new day. It froze me for a while and made me think, "Do I really want to write an SQL Server tip, or something different?" After a little thinking, I realized that today I would write something different: a few of the ideas and thoughts I had yesterday. After writing them all down, I realized that if I am thinking this much in a day, a blog post of my random musings for a week or a month could be very long (and boring). Here are some of the random thoughts I'd like to share with you:

- When the airplane lands, why does everybody get up and rush out when their luggage probably won't arrive for another 20-30 minutes?
- I really do not like being asked, "SQL Server is not using the optimal index which I just created – how can I force it?" I am not going to elaborate on this statement, but you are welcome to in the comments section.
- Why do some people wish us "Good Morning" even when they meet us after 4 PM?
- Can I optimize a query so much that it gives me the result before I execute it?
- Is it corruption when someone does their personal household work at the office?
- The lane where I drive is always the slowest lane.
- Why waste time correcting others when there are plenty of pending improvements for ourselves?
- If I had to get a tattoo, which SQL Server execution plan symbol should I get?
- Why do I reach the office so early that the coffee machine is still running its daily cleaning job?
- Why does every laptop have the 'Page Up' key in a different location on the keyboard?
- While I like color movies, I really appreciate black and white photographs.
- I do not appreciate statements like, "If I receive your books in PDF, I will spread them to many people to give you much greater exposure. So would you please send them to me ASAP?"
- Do not ask me, "Why does the database grow back after shrinking it every day?" I suggest you use "Search this blog" for the explanation.
- Petrol prices are currently at INR 74. I hope the rate remains there.

Let me ask you the same question that started my day today: "Can I expect you to be different today?"

Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • We have our standards, and we need them

    - by Tony Davis
    The presenter suddenly broke off. He was midway through his section on how to apply to the relational database the Continuous Delivery techniques that allow for rapid-fire rounds of development and refactoring while always retaining a "production-ready" state. He sighed deeply and then launched into an astonishing diatribe against Database Administrators, much of his frustration directed toward Oracle DBAs in particular. In broad strokes, he painted the picture of a brave new deployment philosophy being frustratingly shackled by the relational database, and especially by the attitudes of the guardians of these databases. DBAs, he said, shunned change and "still favored tools I'd have been embarrassed to use in the '80s". DBAs, Oracle DBAs especially, were more attached to their vendor than to their employer, since the former was the primary source of their career longevity and spectacular remuneration. He contended that someone could produce the best IDE or tool in the world for Oracle DBAs and yet none of them would give a stuff, unless it happened to come from the "mother ship". I sat blinking in astonishment at the speaker's vehemence, and glanced around nervously. Nobody in the audience disagreed, and a few nodded in assent.

Although the primary target of the outburst was the Oracle DBA, it made me wonder. Are we who work with SQL Server database professionals, or merely SQL Server fanbois? Do DBAs, in general, have an image problem? Is it a good career move to be seen holding onto a particular product by the whites of our knuckles, to the exclusion of all else? If we seek a broad, open-minded knowledge of our chosen technology, the database, and are blessed with merely mortal powers of learning, then we like standards. Vendors of RDBMSs generally don't conform to standards by instinct, but by customer demand. Microsoft has made great strides to adopt the international SQL standards where possible, thanks to considerable lobbying by the community; the implementation of window functions is a great example. There is still work to do, though. SQL Server, for example, has an unusable version of the Information Schema. One cast-iron rule of any RDBMS is that we must be able to query the metadata using the same language that we use to query the data, i.e. SQL, and we do this by running queries against the INFORMATION_SCHEMA views. Developers who've attempted to apply a standard query that works on MySQL, or some other database, but doesn't produce the expected results on SQL Server are advised to shun the standards-based approach in favor of the vendor-specific one, using the catalog views. The argument behind this is sound and well-documented, and of course we all use those catalog views out of necessity. And yet, as database professionals committed to supporting the best databases for the business, whatever they are now and in the future, surely our heart should sink somewhat when we advocate a vendor-specific approach to a developer struggling with something as simple as writing a guard clause – and when we read messages in the Microsoft documentation informing us that we shouldn't rely on INFORMATION_SCHEMA to reliably identify the schema of an object. In SQL Server!
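To make the guard-clause point concrete, here is a minimal sketch of the standards-based approach, using Python with pyodbc purely as scaffolding; the DSN, credentials and table name are placeholders, and the point is that the INFORMATION_SCHEMA query itself should run unchanged against SQL Server, MySQL or any other RDBMS that implements the standard views:

import pyodbc

# Placeholder connection details; substitute your own DSN and credentials.
conn = pyodbc.connect("DSN=MySqlServer;UID=user;PWD=password")
cursor = conn.cursor()

# Standards-based guard clause: check the metadata with plain SQL,
# via the INFORMATION_SCHEMA views, before creating the table.
cursor.execute(
    "SELECT COUNT(*) FROM INFORMATION_SCHEMA.TABLES "
    "WHERE TABLE_SCHEMA = ? AND TABLE_NAME = ?",
    ("dbo", "Widgets"),
)
if cursor.fetchone()[0] == 0:
    cursor.execute("CREATE TABLE dbo.Widgets (WidgetId INT PRIMARY KEY)")
    conn.commit()

The vendor-specific alternative on SQL Server would query sys.tables instead, which is precisely the trade-off the article laments.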

    Read the article

  • Error starting modern compiler

    - by saloni
    In my servlet project I am using Tomcat 5.0 and JRE 1.5.0, but it gives an error when I click on the URL. When I created a war file of my project and deployed it to Tomcat directly, it worked fine, so the problem seems to be only with my Eclipse configuration. The error is:

Apr 5, 2010 3:20:22 PM org.apache.jasper.compiler.Compiler generateClass
SEVERE: Javac exception
Error starting modern compiler
  at org.apache.tools.ant.taskdefs.compilers.Javac13.execute(Javac13.java:69)
  at org.apache.tools.ant.taskdefs.Javac.compile(Javac.java:942)
  at org.apache.tools.ant.taskdefs.Javac.execute(Javac.java:764)
  at org.apache.jasper.compiler.Compiler.generateClass(Compiler.java:382)
  at org.apache.jasper.compiler.Compiler.compile(Compiler.java:472)
  at org.apache.jasper.compiler.Compiler.compile(Compiler.java:451)
  at org.apache.jasper.compiler.Compiler.compile(Compiler.java:439)
  at org.apache.jasper.JspCompilationContext.compile(JspCompilationContext.java:511)
  at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:295)
  at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:292)
  at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:236)
  at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
  at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:237)
  at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:157)
  at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:214)
  at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
  at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
  at org.apache.catalina.core.StandardContextValve.invokeInternal(StandardContextValve.java:198)
  at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:152)
  at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
  at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
  at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:137)
  at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
  at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:118)
  at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:102)
  at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
  at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
  at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
  at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
  at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:929)
  at org.apache.coyote.tomcat5.CoyoteAdapter.service(CoyoteAdapter.java:160)
  at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:799)
  at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.processConnection(Http11Protocol.java:705)
  at org.apache.tomcat.util.net.TcpWorkerThread.runIt(PoolTcpEndpoint.java:577)
  at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:683)
  at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.reflect.InvocationTargetException
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
  at java.lang.reflect.Method.invoke(Unknown Source)
  at org.apache.tools.ant.taskdefs.compilers.Javac13.execute(Javac13.java:61)
  ... 35 more
Caused by: java.lang.VerifyError: class com.sun.tools.javac.jvm.Target overrides final method .
  at java.lang.ClassLoader.defineClass1(Native Method)
  at java.lang.ClassLoader.defineClass(Unknown Source)
  at java.security.SecureClassLoader.defineClass(Unknown Source)
  at org.apache.catalina.loader.WebappClassLoader.findClassInternal(WebappClassLoader.java:1634)
  at org.apache.catalina.loader.WebappClassLoader.findClass(WebappClassLoader.java:860)
  at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1307)
  at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1189)
  at java.lang.ClassLoader.loadClassInternal(Unknown Source)
  at com.sun.tools.javac.Main.compile(Main.java:42)
  ... 40 more

--- Nested Exception ---
java.lang.reflect.InvocationTargetException
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
  at java.lang.reflect.Method.invoke(Unknown Source)
  at org.apache.tools.ant.taskdefs.compilers.Javac13.execute(Javac13.java:61)
  at org.apache.tools.ant.taskdefs.Javac.compile(Javac.java:942)
  at org.apache.tools.ant.taskdefs.Javac.execute(Javac.java:764)
  at org.apache.jasper.compiler.Compiler.generateClass(Compiler.java:382)
  at org.apache.jasper.compiler.Compiler.compile(Compiler.java:472)
  at org.apache.jasper.compiler.Compiler.compile(Compiler.java:451)
  at org.apache.jasper.compiler.Compiler.compile(Compiler.java:439)
  at org.apache.jasper.JspCompilationContext.compile(JspCompilationContext.java:511)
  at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:295)
  at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:292)
  at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:236)
  at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
  at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:237)
  at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:157)
  at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:214)
  at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
  at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
  at org.apache.catalina.core.StandardContextValve.invokeInternal(StandardContextValve.java:198)
  at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:152)
  at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
  at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
  at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:137)
  at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
  at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:118)
  at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:102)
  at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
  at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
  at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
  at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
  at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:929)
  at org.apache.coyote.tomcat5.CoyoteAdapter.service(CoyoteAdapter.java:160)
  at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:799)
  at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.processConnection(Http11Protocol.java:705)
  at org.apache.tomcat.util.net.TcpWorkerThread.runIt(PoolTcpEndpoint.java:577)
  at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:683)
  at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.VerifyError: class com.sun.tools.javac.jvm.Target overrides final method .
  at java.lang.ClassLoader.defineClass1(Native Method)
  at java.lang.ClassLoader.defineClass(Unknown Source)
  at java.security.SecureClassLoader.defineClass(Unknown Source)
  at org.apache.catalina.loader.WebappClassLoader.findClassInternal(WebappClassLoader.java:1634)
  at org.apache.catalina.loader.WebappClassLoader.findClass(WebappClassLoader.java:860)
  at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1307)
  at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1189)
  at java.lang.ClassLoader.loadClassInternal(Unknown Source)
  at com.sun.tools.javac.Main.compile(Main.java:42)
  ... 40 more

Apr 5, 2010 3:20:22 PM org.apache.jasper.compiler.Compiler generateClass
SEVERE: Env: Compile:
javaFileName=/D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/work/Catalina/localhost/SampleSaloni//org/apache/jsp/page\form_jsp.java
classpath=
  /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/classes/;
  /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/ant-launcher.jar;
  /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/ant.jar;
  /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/commons-collections-3.1.jar;
  /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/commons-dbcp-1.2.1.jar;
  /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/commons-el.jar;
  /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/commons-pool-1.2.jar;
  /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/jasper-compiler.jar;
  /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/jasper-runtime.jar;
  /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/jsp-api.jar;
  /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/naming-common.jar;
  /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/naming-factory.jar;
  /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/naming-java.jar;
  /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/naming-resources.jar;
  /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/tools.jar;
  D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\work\Catalina\localhost\SampleSaloni;
  /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/classes/;
  /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/ant-launcher.jar;
  /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/ant.jar;
  /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/commons-collections-3.1.jar;
  /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/commons-dbcp-1.2.1.jar;
  /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/commons-el.jar;
  /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/commons-pool-1.2.jar;
  /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/jasper-compiler.jar;
  /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/jasper-runtime.jar;
  /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/jsp-api.jar;
  /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/naming-common.jar;
  /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/naming-factory.jar;
  /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/naming-java.jar;
  /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/naming-resources.jar;
  /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/tools.jar;
  D:/software setups/jakarta-tomcat-5.0.28/common/classes/;
  D:/software setups/jakarta-tomcat-5.0.28/common/lib/ant-launcher.jar;
  D:/software setups/jakarta-tomcat-5.0.28/common/lib/ant.jar;
  D:/software setups/jakarta-tomcat-5.0.28/common/lib/commons-collections-3.1.jar;
  D:/software setups/jakarta-tomcat-5.0.28/common/lib/commons-dbcp-1.2.1.jar;
  D:/software setups/jakarta-tomcat-5.0.28/common/lib/commons-el.jar;
  D:/software setups/jakarta-tomcat-5.0.28/common/lib/commons-pool-1.2.jar;
  D:/software setups/jakarta-tomcat-5.0.28/common/lib/jasper-compiler.jar;
  D:/software setups/jakarta-tomcat-5.0.28/common/lib/jasper-runtime.jar;
  D:/software setups/jakarta-tomcat-5.0.28/common/lib/jsp-api.jar;
  D:/software setups/jakarta-tomcat-5.0.28/common/lib/naming-common.jar;
  D:/software setups/jakarta-tomcat-5.0.28/common/lib/naming-factory.jar;
  D:/software setups/jakarta-tomcat-5.0.28/common/lib/naming-java.jar;
  D:/software setups/jakarta-tomcat-5.0.28/common/lib/naming-resources.jar;
  D:/software setups/jakarta-tomcat-5.0.28/common/lib/servlet-api.jar;
  D:/software setups/jakarta-tomcat-5.0.28/common/lib/tools.jar;
  /D:/software%20setups/jakarta-tomcat-5.0.28/bin/bootstrap.jar;
  /C:/Program%20Files/Java/jre1.5.0_09/lib/ext/dnsns.jar;
  /C:/Program%20Files/Java/jre1.5.0_09/lib/ext/sunjce_provider.jar;
  /C:/Program%20Files/Java/jre1.5.0_09/lib/ext/sunpkcs11.jar
cp=D:\software setups\jakarta-tomcat-5.0.28\bin\bootstrap.jar
cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\classes
cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\ant-launcher.jar
cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\ant.jar
cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\commons-collections-3.1.jar
cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\commons-dbcp-1.2.1.jar
cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\commons-el.jar
cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\commons-pool-1.2.jar
cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\jasper-compiler.jar
cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\jasper-runtime.jar
cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\jsp-api.jar
cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\naming-common.jar
cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\naming-factory.jar
cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\naming-java.jar
cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\naming-resources.jar
cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\tools.jar
cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\work\Catalina\localhost\SampleSaloni
cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\classes
cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\ant-launcher.jar
cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\ant.jar
cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\commons-collections-3.1.jar
cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\commons-dbcp-1.2.1.jar
cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\commons-el.jar
cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\commons-pool-1.2.jar
cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\jasper-compiler.jar
cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\jasper-runtime.jar
cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\jsp-api.jar
cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\naming-common.jar
cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\naming-factory.jar
cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\naming-java.jar
cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\naming-resources.jar
cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\tools.jar
cp=D:\software setups\jakarta-tomcat-5.0.28\common\classes
cp=D:\software setups\jakarta-tomcat-5.0.28\common\lib\ant-launcher.jar
cp=D:\software setups\jakarta-tomcat-5.0.28\common\lib\ant.jar
cp=D:\software setups\jakarta-tomcat-5.0.28\common\lib\commons-collections-3.1.jar
cp=D:\software setups\jakarta-tomcat-5.0.28\common\lib\commons-dbcp-1.2.1.jar
cp=D:\software setups\jakarta-tomcat-5.0.28\common\lib\commons-el.jar
cp=D:\software setups\jakarta-tomcat-5.0.28\common\lib\commons-pool-1.2.jar
cp=D:\software setups\jakarta-tomcat-5.0.28\common\lib\jasper-compiler.jar
cp=D:\software setups\jakarta-tomcat-5.0.28\common\lib\jasper-runtime.jar
cp=D:\software setups\jakarta-tomcat-5.0.28\common\lib\jsp-api.jar
cp=D:\software setups\jakarta-tomcat-5.0.28\common\lib\naming-common.jar
cp=D:\software setups\jakarta-tomcat-5.0.28\common\lib\naming-factory.jar
cp=D:\software setups\jakarta-tomcat-5.0.28\common\lib\naming-java.jar
cp=D:\software setups\jakarta-tomcat-5.0.28\common\lib\naming-resources.jar
cp=D:\software setups\jakarta-tomcat-5.0.28\common\lib\servlet-api.jar
cp=D:\software setups\jakarta-tomcat-5.0.28\common\lib\tools.jar
cp=D:\software%20setups\jakarta-tomcat-5.0.28\bin\bootstrap.jar
cp=C:\Program%20Files\Java\jre1.5.0_09\lib\ext\dnsns.jar
cp=C:\Program%20Files\Java\jre1.5.0_09\lib\ext\sunjce_provider.jar
cp=C:\Program%20Files\Java\jre1.5.0_09\lib\ext\sunpkcs11.jar
work dir=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\work\Catalina\localhost\SampleSaloni
extension dir=C:\Program Files\Java\jre1.5.0_09\lib\ext
srcDir=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\work\Catalina\localhost\SampleSaloni
include=org/apache/jsp/page/form_jsp.java

Apr 5, 2010 3:20:22 PM org.apache.jasper.compiler.Compiler generateClass
SEVERE: Error compiling file: /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/work/Catalina/localhost/SampleSaloni//org/apache/jsp/page\form_jsp.java
[javac] Compiling 1 source file

    Read the article

  • Apache + Codeigniter + New Server + Unexpected Errors

    - by ngl5000
    Alright, here is the situation: I used to have my CodeIgniter site at Bluehost, where I did not have root access; I have since moved that site to Rackspace. I have not changed any of the PHP code, yet there has been some unexpected behavior.

Unexpected behavior:

http://mysite.com/robots.txt
Both old and new resolve to the robots file.

http://mysite.com/robots.txt/
The old Bluehost setup resolves to my CodeIgniter 404 error page. The Rackspace config resolves to:

Not Found
The requested URL /robots.txt/ was not found on this server.

** This instance leads me to believe that there could be a problem with my mod_rewrite rules, or lack thereof. The first one produces the error correctly through PHP, while it seems the second scenario lets the server handle the error. The next instance of this problem is even more troubling:

'http://mysite.com/search/term/9 x 1-1%2F2 white/'

New site results in:

Bad Request
Your browser sent a request that this server could not understand.

Old site results in: the actual page being loaded and the search term being unencoded.

I have to assume this has something to do with the fact that when I went to the new server I went from a root-level .htaccess file to the httpd.conf file and the virtual server files default and default-ssl. Here they are:

Default file:

<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    ServerName mysite.com
    DocumentRoot /var/www
    <Directory />
        Options +FollowSymLinks
        AllowOverride None
    </Directory>
    <Directory /var/www>
        Options -Indexes +FollowSymLinks -MultiViews
        AllowOverride None
        Order allow,deny
        allow from all
        RewriteEngine On
        RewriteBase /
        # force no www. (also does the IP thing)
        RewriteCond %{HTTPS} !=on
        RewriteCond %{HTTP_HOST} !^mysite\.com [NC]
        RewriteRule ^(.*)$ http://mysite.com/$1 [R=301,L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.+)\.(\d+)\.(js|css|png|jpg|gif)$ $1.$3 [L]
        # index.php remove any index.php parts
        RewriteCond %{THE_REQUEST} /index\.(php|html)
        RewriteRule (.*)index\.(php|html)(.*)$ /$1$3 [r=301,L]
        # codeigniter direct
        RewriteCond $0 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
        RewriteRule ^.*$ index.php [L]
    </Directory>
    ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
    <Directory "/usr/lib/cgi-bin">
        AllowOverride None
        Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
        Order allow,deny
        Allow from all
    </Directory>
    ErrorLog ${APACHE_LOG_DIR}/error.log
    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    LogLevel warn
    CustomLog ${APACHE_LOG_DIR}/access.log combined
    Alias /doc/ "/usr/share/doc/"
    <Directory "/usr/share/doc/">
        Options Indexes MultiViews FollowSymLinks
        AllowOverride None
        Order deny,allow
        Deny from all
        Allow from 127.0.0.0/255.0.0.0 ::1/128
    </Directory>
</VirtualHost>

Default-ssl file:

<IfModule mod_ssl.c>
<VirtualHost _default_:443>
    ServerAdmin webmaster@localhost
    ServerName mysite.com
    DocumentRoot /var/www
    <Directory />
        Options +FollowSymLinks
        AllowOverride None
    </Directory>
    <Directory /var/www>
        Options -Indexes +FollowSymLinks -MultiViews
        AllowOverride None
        Order allow,deny
        allow from all
        RewriteEngine On
        RewriteBase /
        RewriteCond %{SERVER_PORT} !^443
        RewriteRule ^ https://mysite.com%{REQUEST_URI} [R=301,L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.+)\.(\d+)\.(js|css|png|jpg|gif)$ $1.$3 [L]
        # index.php remove any index.php parts
        RewriteCond %{THE_REQUEST} /index\.(php|html)
        RewriteRule (.*)index\.(php|html)(.*)$ /$1$3 [r=301,L]
        # codeigniter direct
        RewriteCond $0 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
        RewriteRule ^.*$ index.php [L]
    </Directory>
    ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
    <Directory "/usr/lib/cgi-bin">
        AllowOverride None
        Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
        Order allow,deny
        Allow from all
    </Directory>
    ErrorLog ${APACHE_LOG_DIR}/error.log
    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    LogLevel warn
    CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined
    Alias /doc/ "/usr/share/doc/"
    <Directory "/usr/share/doc/">
        Options Indexes MultiViews FollowSymLinks
        AllowOverride None
        Order deny,allow
        Deny from all
        Allow from 127.0.0.0/255.0.0.0 ::1/128
    </Directory>
    # SSL Engine Switch:
    # Enable/Disable SSL for this virtual host.
    SSLEngine on
    # Use our self-signed certificate by default
    SSLCertificateFile /etc/apache2/ssl/certs/www.mysite.com.crt
    SSLCertificateKeyFile /etc/apache2/ssl/private/www.mysite.com.key
    # A self-signed (snakeoil) certificate can be created by installing
    # the ssl-cert package. See
    # /usr/share/doc/apache2.2-common/README.Debian.gz for more info.
    # If both key and certificate are stored in the same file, only the
    # SSLCertificateFile directive is needed.
    # SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
    # SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
    # Server Certificate Chain:
    # Point SSLCertificateChainFile at a file containing the
    # concatenation of PEM encoded CA certificates which form the
    # certificate chain for the server certificate. Alternatively
    # the referenced file can be the same as SSLCertificateFile
    # when the CA certificates are directly appended to the server
    # certificate for convinience.
    #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt
    # Certificate Authority (CA):
    # Set the CA certificate verification path where to find CA
    # certificates for client authentication or alternatively one
    # huge file containing all of them (file must be PEM encoded)
    # Note: Inside SSLCACertificatePath you need hash symlinks
    # to point to the certificate files. Use the provided
    # Makefile to update the hash symlinks after changes.
    #SSLCACertificatePath /etc/ssl/certs/
    #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt
    # Certificate Revocation Lists (CRL):
    # Set the CA revocation path where to find CA CRLs for client
    # authentication or alternatively one huge file containing all
    # of them (file must be PEM encoded)
    # Note: Inside SSLCARevocationPath you need hash symlinks
    # to point to the certificate files. Use the provided
    # Makefile to update the hash symlinks after changes.
    #SSLCARevocationPath /etc/apache2/ssl.crl/
    #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl
    # Client Authentication (Type):
    # Client certificate verification type and depth. Types are
    # none, optional, require and optional_no_ca. Depth is a
    # number which specifies how deeply to verify the certificate
    # issuer chain before deciding the certificate is not valid.
    #SSLVerifyClient require
    #SSLVerifyDepth 10
    # Access Control:
    # With SSLRequire you can do per-directory access control based
    # on arbitrary complex boolean expressions containing server
    # variable checks and other lookup directives. The syntax is a
    # mixture between C and Perl. See the mod_ssl documentation
    # for more details.
    #<Location />
    #SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \
    #             and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \
    #             and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \
    #             and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \
    #             and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \
    #            or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/
    #</Location>
    # SSL Engine Options:
    # Set various options for the SSL engine.
    # o FakeBasicAuth:
    #   Translate the client X.509 into a Basic Authorisation. This means that
    #   the standard Auth/DBMAuth methods can be used for access control. The
    #   user name is the `one line' version of the client's X.509 certificate.
    #   Note that no password is obtained from the user. Every entry in the user
    #   file needs this password: `xxj31ZMTZzkVA'.
    # o ExportCertData:
    #   This exports two additional environment variables: SSL_CLIENT_CERT and
    #   SSL_SERVER_CERT. These contain the PEM-encoded certificates of the
    #   server (always existing) and the client (only existing when client
    #   authentication is used). This can be used to import the certificates
    #   into CGI scripts.
    # o StdEnvVars:
    #   This exports the standard SSL/TLS related `SSL_*' environment variables.
    #   Per default this exportation is switched off for performance reasons,
    #   because the extraction step is an expensive operation and is usually
    #   useless for serving static content. So one usually enables the
    #   exportation for CGI and SSI requests only.
    # o StrictRequire:
    #   This denies access when "SSLRequireSSL" or "SSLRequire" applied even
    #   under a "Satisfy any" situation, i.e. when it applies access is denied
    #   and no other module can change it.
    # o OptRenegotiate:
    #   This enables optimized SSL connection renegotiation handling when SSL
    #   directives are used in per-directory context.
    #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
    <FilesMatch "\.(cgi|shtml|phtml|php)$">
        SSLOptions +StdEnvVars
    </FilesMatch>
    <Directory /usr/lib/cgi-bin>
        SSLOptions +StdEnvVars
    </Directory>
    # SSL Protocol Adjustments:
    # The safe and default but still SSL/TLS standard compliant shutdown
    # approach is that mod_ssl sends the close notify alert but doesn't wait for
    # the close notify alert from client. When you need a different shutdown
    # approach you can use one of the following variables:
    # o ssl-unclean-shutdown:
    #   This forces an unclean shutdown when the connection is closed, i.e. no
    #   SSL close notify alert is send or allowed to received. This violates
    #   the SSL/TLS standard but is needed for some brain-dead browsers. Use
    #   this when you receive I/O errors because of the standard approach where
    #   mod_ssl sends the close notify alert.
    # o ssl-accurate-shutdown:
    #   This forces an accurate shutdown when the connection is closed, i.e. a
    #   SSL close notify alert is send and mod_ssl waits for the close notify
    #   alert of the client. This is 100% SSL/TLS standard compliant, but in
    #   practice often causes hanging connections with brain-dead browsers. Use
    #   this only for browsers where you know that their SSL implementation
    #   works correctly.
    # Notice: Most problems of broken clients are also related to the HTTP
    # keep-alive facility, so you usually additionally want to disable
    # keep-alive for those clients, too. Use variable "nokeepalive" for this.
    # Similarly, one has to force some clients to use HTTP/1.0 to workaround
    # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and
    # "force-response-1.0" for this.
    BrowserMatch "MSIE [2-6]" \
        nokeepalive ssl-unclean-shutdown \
        downgrade-1.0 force-response-1.0
    # MSIE 7 and newer should be able to use keepalive
    BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown

httpd.conf file: just a lot of stuff from HTML5 Boilerplate; I will post it if need be.

Old htaccess file:

<IfModule mod_rewrite.c>
    # index.php remove any index.php parts
    RewriteCond %{THE_REQUEST} /index\.(php|html)
    RewriteRule (.*)index\.(php|html)(.*)$ /$1$3 [r=301,L]
    RewriteCond $1 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
    RewriteRule ^(.*)/$ /$1 [r=301,L]
    # codeigniter direct
    RewriteCond $1 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
    RewriteRule ^(.*)$ /index.php/$1 [L]
</IfModule>

Any help would be hugely appreciated!!
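One detail worth isolating from the failing search URL above is that the term "9 x 1-1%2F2 white" contains raw spaces and an already-encoded slash, both of which web servers treat specially: raw spaces in a request line are enough on their own to produce a "Bad Request" response. As a hedged illustration of the encoding involved (the host and route come from the question; only the standard-library behaviour is asserted), here is what the term looks like once fully percent-encoded with Python:

from urllib.parse import quote

term = "9 x 1-1/2 white"
# quote() percent-encodes unsafe characters; safe="" also encodes "/",
# which would otherwise stay literal and create extra path segments.
encoded = quote(term, safe="")
print(encoded)  # 9%20x%201-1%2F2%20white
print("http://mysite.com/search/term/" + encoded)

How the %2F is then handled server-side varies with configuration (Apache's AllowEncodedSlashes directive, for example), which may explain why the two hosting setups behave differently.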

    Read the article

  • SqlMetal, SQL Server 2008 database, table with HierarchyID: dal cs file is created only sometimes?

    - by judek.mp
    I have 2 databases, each with a table that has a HierarchyID field. For one database I can get a dal cs file; for the other database I cannot. HBus is a database I can get the dal cs for:

SqlMetal /server:.\SQLSERVER2008 /database:HBus /code:HBusDC.cs /views /functions /sprocs /namespace:HBusDC /context:HBusDataContext

This kicks out a file which works, but it excludes the HierarchyID field for the table and includes all the other fields for that table. That is OK; I do not mind. The above command line emits a warning but still produces a file, like so:

SqlMetal /server:.\SQLSERVER2008 /database:HBus /code:HBusDC.cs /views /functions /sprocs /namespace:HBusDC /context:HBusDataContext
Microsoft (R) Database Mapping Generator 2008 version 1.00.30729
for Microsoft (R) .NET Framework version 3.5
Copyright (C) Microsoft Corporation. All rights reserved.
Warning : SQM1021: Unable to extract column 'OrgNode' of Table 'dbo.HMsg' from SqlServer because the column's DbType is a user-defined type (UDT).
Warning : SQM1021: Unable to extract column 'OrgNode' of Table 'dbo.vwHMsg' from SqlServer because the column's DbType is a user-defined type (UDT).

HMsg is a table with a HierarchyID field. I have another database, Elf, which is almost the same thing, but for it I get a warning and an error when using SqlMetal, and I do not get a dal cs file:

SqlMetal /server:.\SQLSERVER2008 /database:Elf /code:ElfDataContextDal.cs /views /functions /sprocs /namespace:HBusDC /context:HBusDataContext

An error as well as the warning, and the cs file fails to appear on my disc :-(

SqlMetal /server:.\SQLSERVER2008 /database:Elf /code:ElfDataContextDal.cs /views /functions /sprocs /namespace:HBusDC /context:HBusDataContext
Microsoft (R) Database Mapping Generator 2008 version 1.00.30729
for Microsoft (R) .NET Framework version 3.5
Copyright (C) Microsoft Corporation. All rights reserved.
Warning : SQM1021: Unable to extract column 'OrgNode' of Table 'dbo.EntityLink' from SqlServer because the column's DbType is a user-defined type (UDT).
Error : Requested value 'ELF.SYS.HIERARCHYID' was not found.

The fields are declared the same way: in the Elf db, OrgNode [HierarchyID] null; in the HBus db, OrgNode [HierarchyID] null. Both databases are in the same instance of SQL Server 2008, so HierarchyID is an inbuilt type; neither db has a HierarchyID UDT. Cheers in advance for any replies ...
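Since SqlMetal skips (or chokes on) hierarchyid columns either way, one common workaround is to read the UDT through its string representation in plain SQL rather than through the generated DAL. The following is only a sketch, shown in Python with pyodbc for illustration; the server name and the dbo.HMsg table come from the question, while the driver string and authentication are assumptions to adjust for your environment:

import pyodbc

# Placeholder connection details; .\SQLSERVER2008 and dbo.HMsg are
# from the question, the rest is assumed.
conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=.\\SQLSERVER2008;DATABASE=HBus;Trusted_Connection=yes"
)
cursor = conn.cursor()

# hierarchyid cannot be mapped by LINQ to SQL, but its ToString()
# method returns the familiar /1/2/ path form as a plain string.
cursor.execute("SELECT OrgNode.ToString() AS OrgPath FROM dbo.HMsg")
for row in cursor.fetchall():
    print(row.OrgPath)

The same CAST-to-string idea works from any client, so the generated dal cs file can keep omitting the column while the application still reads the hierarchy paths.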

    Read the article

  • Setting up Django on an internal server (os.environ() not working as expected?)

    - by monkut
    I'm trying to set up Django on an internal company server (no external connection to the Internet). Looking over the server setup documentation, the "Running Django on a shared-hosting provider with Apache" method seems the most likely to work in this situation. Here's the server information: can't install mod_python; no root access; the server is SunOS 5.6 with Python 2.5 and Apache/2.0.46. I've installed Django (and flup) using the --prefix option (reading again, I probably should've used --home, but at the moment it doesn't seem to matter). I've added the .htaccess file and mysite.fcgi file to my root web directory as mentioned here.

    When I run the mysite.fcgi script on the server I get the expected output (the correct site HTML), but it won't work when accessed from a browser. It seems to be a problem with the PYTHONPATH setting, since I'm using the prefix option. If I run mysite.fcgi from the command line without setting the PYTHONPATH environment variable, it throws the following error:

    prompt$ python2.5 mysite.fcgi
    ERROR: No module named flup
    Unable to load the flup package. In order to run django as a FastCGI application, you will need to get flup from http://www.saddi.com/software/flup/ If you've already installed flup, then make sure you have it in your PYTHONPATH.

    I've added sys.path.append(prefixpath) and os.environ['PYTHONPATH'] = prefixpath to mysite.fcgi, but if I set the environment variable to be empty on the command line and then run mysite.fcgi, I still get the above error. Here are some command-line results:

    >>> os.environ['PYTHONPATH'] = 'Null'
    >>> os.system('echo $PYTHONPATH')
    Null
    >>> os.environ['PYTHONPATH'] = '/prefix/path'
    >>> os.system('echo $PYTHONPATH')
    /prefix/path
    >>> exit()
    prompt$ echo $PYTHONPATH
    Null

    It looks like Python is setting the variable OK, but the variable is only visible inside the script. Flup appears to be distributed as an .egg file, and my guess is that the egg machinery doesn't pick up variables added via os.environ['key'] = value, at least when installed with the --prefix option. I'm not that familiar with .pth files, but it seems that the easy-install.pth file is the one that points to flup:

    import sys; sys.__plen = len(sys.path)
    ./setuptools-0.6c6-py2.5.egg
    ./flup-1.0.1-py2.5.egg
    import sys; new=sys.path[sys.__plen:]; del sys.path[sys.__plen:]; p=getattr(sys,'__egginsert',0); sys.path[p:p]=new; sys.__egginsert = p+len(new)

    It looks like it's doing something funky. Is there any way to edit this, or something I can add to my code, so it will find flup?
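    For what it's worth, os.environ['PYTHONPATH'] = ... only affects child processes spawned after the assignment; the interpreter that is already running never re-reads the variable. What does process .pth files (and therefore the egg entries in easy-install.pth) is site.addsitedir(). A sketch of the top of mysite.fcgi under that assumption; the prefix path is a placeholder for wherever --prefix actually installed the packages:

        #!/usr/bin/env python2.5
        import os
        import site

        # Unlike sys.path.append, addsitedir also executes the .pth files in
        # the directory, which is what puts the flup and setuptools eggs on
        # sys.path. Placeholder path: adjust to your --prefix location.
        site.addsitedir('/prefix/path/lib/python2.5/site-packages')

        from flup.server.fcgi import WSGIServer
        from django.core.handlers.wsgi import WSGIHandler

        os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'
        WSGIServer(WSGIHandler()).run()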

    Read the article

  • "Send all traffic over VPN connection" not working with a Windows VPN host

    - by Adam Schiavone
    I am trying to get a Mac (10.8) to connect through VPN to a server running Windows Server 2008 R2 and pass all of the Mac's requests through the server. The VPN is set up: I can connect and reach the server through a web browser, but for all other sites the DNS lookup fails. I have tried adding a DNS server to the VPN host. For example, say the VPN server also hosts a website, example.com. I connect to the VPN with my Mac, point a browser at example.com, and everything works fine. But when I point the browser at google.com it just sits there and eventually comes back with a DNS-lookup-failed message. However, if I run dig @myServersIpHere www.google.com on the Mac, it comes back with the correct IP addresses. I really don't know what to do from here. How can I route all requests from my Mac through my Windows server via the VPN?
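    Since dig @myServersIpHere resolves correctly, the server's DNS service is answering; the Mac just isn't using it as its resolver once the tunnel is up. A sketch of the client-side checks on 10.8 (the service name and DNS address below are placeholders): in System Preferences > Network > select the VPN service > Advanced > Options, tick "Send all traffic over VPN connection", then drag the VPN service to the top of the service order. From Terminal:

        networksetup -listallnetworkservices               # note the exact VPN service name
        networksetup -setdnsservers "VPN (PPTP)" 10.0.0.1  # placeholder name and server IP
        scutil --dns                                       # verify the resolver order while connected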

    Read the article

  • Remote access not working without connected monitor

    - by winSharp93
    I am trying to configure Windows Server 2008 as a home server for my personal use (mainly for storing documents, hosting source control, etc.). The "server" consists of an Intel Atom 2700DC board and an Intel SSD. Configuring remote access, I am confronted with a very strange problem: as long as a monitor is connected to the server, remote access works without any problems. However, when no monitor is connected at boot time, remote access simply won't work: I keep getting errors saying that the remote server was not found or that remote access is disabled. Windows definitely boots when no monitor is connected, because after powering the server down by pulling the power cord I receive a message asking whether to enter safe mode on the next boot. When I plug in a monitor after boot, it stays turned off and remote desktop connections still fail. Do you have any ideas about what I could try?

    Read the article

  • Testing + production server and syncing MySQL data

    - by Matthew
    I have a web application running on LAMP, with a testing server and a production server. Is there a standard practice for keeping the data on the testing server in sync with production? The data on the testing server gets out of date pretty quickly, and I feel there must be an easier way than dumping the production database and copying it onto the testing server every so often. It's not important that the data is in total sync, just that the testing server represents the production environment as accurately as possible.
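    There is no single standard, but the usual low-tech practice is exactly the dump-and-load, automated so it stops being a chore. A sketch, assuming SSH access from the testing box to production (host, user, and database names are placeholders):

        #!/bin/sh
        # Nightly refresh of the testing database from production (run from cron).
        # --single-transaction takes a consistent InnoDB snapshot without locking.
        ssh deploy@prod.example.com \
            "mysqldump --single-transaction --routines --triggers appdb" \
          | mysql appdb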

    Read the article

  • SQL Replication (subscriber) can't connect to publication

    - by a3code
    I have two virtual machines, one with MS SQL Server 2008 R2 and the other with MS SQL Server 2012 Express. On the first I have configured replication (publication), and I would like to set up the Express instance as a subscriber, but I can't connect to the publisher:

    SQL Server replication requires the actual server name to make a connection to the server. Specify the actual server name, 'XXXX'. (Replication.Utilities)

    I tried to cheat by adding the XXXX server name to the hosts file, but it doesn't help. I also followed http://www.hagrin.com/332/fixing-sql-server-replication-requires-actual-server-name-make-connection-server-error to set up the publication the correct way. What do I need to do to connect successfully?
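    That error almost always means the publisher machine was renamed after SQL Server was installed, so @@SERVERNAME no longer matches the actual name. A sketch of the usual check-and-fix, run on the publisher ('OLDNAME' stands for whatever the stale name turns out to be, 'XXXX' for the actual name from the error message):

        -- Compare what SQL Server thinks it is called with the real name:
        SELECT @@SERVERNAME AS RegisteredName,
               SERVERPROPERTY('ServerName') AS ActualName;

        -- If they differ, re-register under the real name and then restart
        -- the SQL Server service before reconnecting the subscriber:
        EXEC sp_dropserver 'OLDNAME';
        EXEC sp_addserver 'XXXX', 'local';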

    Read the article

  • Monitoring the Application alongside SQL Server

    - by Tony Davis
    Sometimes, on Simple-Talk, it takes a while to spot strange and unexpected patterns of user activity, or small bugs. For example, one morning we spotted that an article's comment count had leapt to 1485, but that only four were displayed. With some rooting around in Google Analytics, and the endlessly annoying Community Server admin interface, we were able to work out that a few days previously the article had been subject to a spam attack, and that the comment count was for some reason including both accepted and unaccepted comments (which in turn uncovered a bug in the SQL).

    This sort of incident made us a lot keener on monitoring Simple-Talk website usage effectively. However, the metrics we wanted are troublesome, because they are far too specific for Google Analytics to measure, and the SQL Server backend doesn't keep sufficient information to enable us to plot trends. The latter could provide, for example, the total number of comments made on, or votes cast for, articles, over all time, but not the number that occur by hour over a set time. We lacked a baseline, in other words. We couldn't alter the database, as it is a bought-in package, and we had neither the resources nor the inclination to build in dedicated application monitoring. Possibly, we could investigate a third-party tool to do the job; but then it occurred to us that we were already using a monitoring tool (SQL Monitor) to keep an eye on the database. It stored data, made graphs and sent alerts. Could we get it to monitor some aspects of the application as well?

    Of course, SQL Monitor's single purpose is to check and monitor SQL Server, over time, rather than to monitor applications that use SQL Server. However, how different is the business of gathering and plotting SQL Server wait stats from gathering and plotting various aspects of user activity on the site? Not a lot, it turns out. The latest version allows us to write our own custom monitoring scripts, meaning that we could now monitor any metric in the application that returns an integer. It took little time to write a simple SQL query that collects basic metrics of the total number of subscribers, votes cast, comments made, or views of articles, over time.
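    For a sense of scale, a custom metric in this vein is nothing more exotic than a query that returns a single integer for the tool to sample and plot; the table and column names below are invented for illustration:

        -- Sampled periodically by the monitoring tool and plotted over time.
        -- Counting accepted comments only would have caught the bug above,
        -- where accepted and unaccepted comments were lumped together.
        SELECT COUNT(*)
        FROM dbo.ArticleComments
        WHERE IsApproved = 1;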
    The SQL Monitor database polls Simple-Talk every second or so in order to get the latest totals, and can then store and plot this information, or even correlate SQL Server usage to application usage. You can see the live data by visiting monitor.red-gate.com: click the "Analysis" tab, and select one of the "Simple-talk:" entries in the "Show" box and an appropriate date range (e.g. the last 30 days). It's nascent, and we're still working on it, but it's already given us more confidence that we'll quickly spot trends, bugs, or bursts of 'abnormal' activity. If there is a sudden rise in comments, we get an alert, and if it's due to a spam attack, we can moderate or ban the perpetrator very quickly. We've often argued that a tool should perform a single job well rather than turn into a Swiss-army knife, but ironically we've rather appreciated being able to make best use of what's there anyway for a slightly different purpose. Is this a good or common practice? What do you think? Cheers, Tony.

    Read the article

  • SQL Server Optimizer Malfunction?

    - by Tony Davis
    There was a sharp intake of breath from the audience when Adam Machanic declared the SQL Server optimizer to be essentially "stuck in 1997". It was during his fascinating "Query Tuning Mastery: Manhandling Parallelism" session at the recent PASS SQL Summit. Paraphrasing somewhat, Adam (blog | @AdamMachanic) offered a convincing argument that the optimizer often delivers flawed plans based on assumptions that are no longer valid with today's hardware.

    In 1997, when Microsoft engineers re-designed the database engine for SQL Server 7.0, SQL Server got its initial implementation of a cost-based optimizer. Up to SQL Server 2000, the developer often had to deploy a steady stream of hints in SQL statements to combat the occasionally wilful plan choices made by the optimizer. However, with each successive release, the optimizer has evolved and improved in its decision-making. It is still prone to the occasional stumble when we tackle difficult problems, join large numbers of tables, perform complex aggregations, and so on, but for most of us, most of the time, the optimizer purrs along efficiently in the background.

    Adam, however, challenged any remaining assumption that the current optimizer is competent at providing the most efficient plans for our more complex analytical queries, and in particular at offering up correctly parallelized plans. He painted a picture of a present where complex analytical queries have become ever more prevalent; where disk IO is ever faster, so that reads from disk come into the buffer cache faster than ever; where the improving RAM-to-data ratio means that we have a better chance of finding our data in cache. Most importantly, we have more CPUs at our disposal than ever before. To get these queries to perform, we not only need the right indexes, but also the ability to split the data into subsets and spread its processing evenly across all those available CPUs. Improvements such as support for ColumnStore indexes are taking things in the right direction, but, unfortunately, deficiencies in the current optimizer mean that SQL Server has yet to properly exploit all those extra CPUs.

    Adam's contention was that the current optimizer uses essentially the same costing model for many of its core operations as it did back in the days of SQL Server 7, based on assumptions that are no longer valid. One example he gave was a "slow disk" bias that may have been valid back in 1997 but certainly is not on modern disk systems. Essentially, the optimizer assesses the relative cost of serial versus parallel plans on the assumption that there is no IO cost benefit from parallelization, only CPU. It assumes that a single request will saturate the IO channel, and so a query would not run any faster if we parallelized the IO, because the disk system simply wouldn't be able to handle the extra pressure. As such, the optimizer often decides that a serial plan is lower cost, often in cases where a parallel plan would improve performance dramatically.
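    As a generic illustration of how blunt the serial-versus-parallel decision can be (this is not a technique from Adam's session): the instance-wide cost threshold is the coarsest lever over it, and its default of 5 dates from the same era as the costing model itself.

        -- A query becomes a candidate for a parallel plan only when its
        -- estimated serial cost exceeds this threshold; lowering the value
        -- makes parallel plans eligible for cheaper queries.
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'cost threshold for parallelism', 1;
        RECONFIGURE;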
    It was challenging and thought-provoking stuff, as were his techniques for driving parallelism through query logic based on subsets of rows that define the "grain" of the query. I highly recommend you catch the session if you missed it. I'm interested to hear, though, when and how often people feel the force of the optimizer's shortcomings. Barring mistakes, such as stale statistics, how often do you feel the optimizer fails to find the plan you think it should, and what are the most common causes? Is it fighting to induce it toward parallelism? Combating unexpected plans arising from table partitioning? Something altogether more prosaic? Cheers, Tony.

    Read the article

  • How can I print a web page on a server?

    - by Gavin Schultz
    Suppose I develop a web page using the cool Google Visualization API, and it does everything the user wants. They can change the parameters, look at the graphs, and print the page to get a reasonable-looking report. All good. Now suppose I want to do the same thing server-side. For example, say we need a set of reports generated at a specific time of day, printed to a PDF, and emailed to a manager. It's not a user-initiated action, so we don't have a user's browser or their printer. We have a URL that would render the report if we had a browser, and that's it.

    Is there a good way to do this server-side? Is this just foolish? Has anyone done anything like this before? Do any of the major browsers have APIs that might provide such functionality? Keep in mind, too, that it's not just static HTML; JavaScript will probably run first to shift the DOM around.

    I know we could implement a whole different reporting engine on the server side to do this, but that would (a) generate reports that look a bit different, and (b) require me to build and maintain two sets of functionality. Instead, I'd be happy if I could just render the page or pages I want in an invisible server-side browser and print them to a PDF (let's mostly ignore that step; I know any number of PDF printer drivers that could do it). I don't really want to do it the ugly way either, i.e. by starting a browser process and then sending keystrokes directly to the window; that's bound to fall apart with a slight nudge. The only related question I found had an answer like that. Any advice appreciated!
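    One common answer, offered as a sketch rather than the only option, is a headless WebKit wrapper such as wkhtmltopdf: it loads the page, executes its JavaScript, and prints the finished DOM straight to PDF, with no visible browser window and no keystroke automation. The URL and delay below are placeholders:

        # Give client-side scripts a couple of seconds to build the charts,
        # then render the resulting DOM to a PDF file.
        wkhtmltopdf --javascript-delay 2000 \
            "http://intranet/reports/daily" daily-report.pdf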

    Read the article

< Previous Page | 403 404 405 406 407 408 409 410 411 412 413 414  | Next Page >