Search Results

Search found 30471 results on 1219 pages for 'client side scripting'.


  • How can I improve the "smoothness" of a 2D side-scrolling iPhone game?

    - by MrDatabase
    I'm working on a relatively simple 2D side-scrolling iPhone game. The controls are tilt-based and I use OpenGL ES 1.1 for the graphics. The game state is updated at a rate of 30 Hz, and the drawing is updated at a rate of 30 fps (via NSTimer). The smoothness of the drawing is OK, but not quite as smooth as a game like iFighter. What can I do to improve the smoothness of the game? Here are the potential issues I've briefly considered:

    - I'm varying the opacity of up to 15 small (20x20 pixel) textures at a time; apparently varying the opacity in this manner can degrade drawing performance.
    - I'm rendering at only 30 fps (via NSTimer); perhaps 2D games like iFighter are rendered at a higher frame rate?
    - Perhaps the game state could be updated at a faster rate? Note the acceleration values are updated at 100 Hz, so I could potentially update part of the game state at 100 Hz (a decoupled update/render loop is sketched below).
    - All of my textures are PNG24; perhaps PNG8 would help (due to smaller size etc.).
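    One general technique for this kind of stutter is to decouple the simulation rate from the render rate: step the game state at a fixed timestep and interpolate between the last two states when drawing. A minimal sketch of that loop, in Python purely for readability (on the iPhone the drawing callback would more naturally come from CADisplayLink than NSTimer):

      import time

      DT = 1.0 / 100.0  # fixed simulation step (100 Hz, matching the accelerometer rate)

      def run(state, update, render):
          """Fixed-timestep simulation with render-time interpolation."""
          previous = time.monotonic()
          accumulator = 0.0
          prev_state = state
          while True:
              now = time.monotonic()
              accumulator += now - previous
              previous = now
              # Advance the simulation in fixed DT steps, independent of frame rate.
              while accumulator >= DT:
                  prev_state, state = state, update(state, DT)
                  accumulator -= DT
              # Blend the last two states so a frame drawn between steps still moves.
              alpha = accumulator / DT
              render(prev_state, state, alpha)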

    Read the article

  • Is there a massive other side to software development which I've somehow missed, revolving entirely around Microsoft?

    - by Aerovistae
    I'm still a beginning programmer; I've been at it for 2 years. I've learned to work with a few languages, a bit of web development technology, and a handful of libraries, frameworks, and IDEs. But over the past two years (and long before I even started, really), I keep hearing references to these...things. A million of them. Things such as C#, ADO, SOAP, ASP, ASP.NET, the .NET framework, CLR, F#, and so on. And I've read their Wikipedia articles, in depth, multiple times, and they all mention a million other things on that list, but I just can't seem to grasp what it all is. The only thing I've taken away with any certainty is that Microsoft is behind all of it. It sounds almost like a conspiracy. Are all these technologies just for developing on the Windows platform? What is .NET? Do some software developers dedicate their entire careers just to that side of things? Why would I want to get into it, and what advantage does...whatever it is...have over all the other technologies there are? I hope this makes sense. It's a broad question, but inside it there's a very specific question asking about something I don't know the name of. Hopefully you can grasp my confusion.

    Read the article

  • Record and Play your WebLogic Console Tasks Like a DVR

    - by james.bayer
    Automation using WebLogic Scripting Tool

    Today on the Oracle internal mailing list for WebLogic Server questions, someone asked how to automate the configuration of the thread model for WebLogic Server, and they were having trouble with the jython scripting syntax. I've previously written about this feature, called Work Managers, and the associated constraints. However, I did not show how to automate the process of configuring this without the console using WebLogic Scripting Tool -- the jython scripting automation environment abbreviated as WLST. I've written some very basic introductions to WLST before, and there is also an Oracle By Example on the subject, but this is a bit more advanced. Fear not, because there is a really easy-to-use feature of the WLS console that lets you "Record" user actions just like a DVR. Using these recordings of the web-based console, you can easily create a script even if you are unfamiliar with the WLST syntax and API. I'm a big fan of both DVRs and automation, as can be evidenced by this old Halloween picture taken during simpler times. Obviously the Cast Away and The Big Lebowski references show some age. I was a big TiVo fan-boy back in the day and I still think it's the best DVR.

    I strongly believe that WebLogic Scripting Tool (WLST) is an absolutely essential tool for automating administration tasks in anything beyond a development environment. Even in development environments you can make a case that it makes sense to start the automation for environments downstream. I promise you that once you start using it for any tasks that you do even semi-regularly, you won't go back to clicking through the console. It's simply so much more efficient and less error-prone to run a script.

    Let's say you need to create a Work Manager and MaxThreadsConstraint -- the easy way is to configure it in the WLS console first while capturing the commands with a recording. See the images for the simple steps (click to enlarge): "Record Console Configurations to a File" and "Review the Recordings and Make Slight Modifications".

    In order to make the recorded .py file directly callable as a stand-alone script, I added calls to the connect() and edit() functions at the beginning and calls to disconnect() and exit() at the end -- otherwise the main section of the script was provided by the console recording. Below is the resulting file, which I saved as d:/temp/wm.py:

      connect('weblogic','welcome1', 't3://localhost:7001')
      edit()
      startEdit()

      cd('/SelfTuning/wl_server')
      cmo.createMaxThreadsConstraint('MaxThreadsConstraint-0')

      cd('/SelfTuning/wl_server/MaxThreadsConstraints/MaxThreadsConstraint-0')
      set('Targets',jarray.array([ObjectName('com.bea:Name=examplesServer,Type=Server')], ObjectName))
      cmo.setCount(5)
      cmo.unSet('ConnectionPoolName')

      cd('/SelfTuning/wl_server')
      cmo.createWorkManager('WorkManager-0')
      cd('/SelfTuning/wl_server/WorkManagers/WorkManager-0')
      set('Targets',jarray.array([ObjectName('com.bea:Name=examplesServer,Type=Server')], ObjectName))

      cmo.setMaxThreadsConstraint(getMBean('/SelfTuning/wl_server/MaxThreadsConstraints/MaxThreadsConstraint-0'))
      cmo.setIgnoreStuckThreads(false)

      activate()
      disconnect()
      exit()

    Run the Script

    If you want to test it, be sure to delete the Work Manager and MaxThreadsConstraint that you had previously created in the console. Do something like the following -- set up the environment and tell WLST to execute the script, which happens in the first two lines; the rest doesn't require any user input:

      D:\Oracle\wls11g\wlserver_10.3\samples\domains\wl_server\bin>setDomainEnv.cmd
      D:\Oracle\wls11g\wlserver_10.3\samples\domains\wl_server>java weblogic.WLST d:\temp\wm.py

      Initializing WebLogic Scripting Tool (WLST) ...
      Welcome to WebLogic Server Administration Scripting Shell
      Type help() for help on available commands

      Connecting to t3://localhost:7001 with userid weblogic ...
      Successfully connected to Admin Server 'examplesServer' that belongs to domain 'wl_server'.
      Warning: An insecure protocol was used to connect to the server. To ensure on-the-wire security, the SSL port or Admin port should be used instead.
      Location changed to edit tree. This is a writable tree with DomainMBean as the root. To make changes you will need to start an edit session via startEdit().
      For more help, use help(edit)

      Starting an edit session ...
      Started edit session, please be sure to save and activate your changes once you are done.
      Activating all your changes, this may take a while ...
      The edit lock associated with this edit session is released once the activation is completed.
      Activation completed
      Disconnected from weblogic server: examplesServer
      Exiting WebLogic Scripting Tool.

    Now if you go back and look in the console, the changes have been made, and we now have a complete script. Of course there is a full MBean reference, and you can learn the nuances of jython and WLST, but why not let the WLS console do most of the work for you? Happy scripting.

    Read the article

  • How to configure a WCF service to only accept a single client identified by a x509 certificate

    - by Johan Levin
    I have a WCF client/service app that relies on secure communication between two machines, and I want to use x509 certificates installed in the certificate store to identify the server and client to each other. I do this by configuring the binding as <security authenticationMode="MutualCertificate"/>. There is only one client machine. The server has a certificate issued to server.mydomain.com installed in the Local Computer/Personal store, and the client has a certificate issued to client.mydomain.com installed in the same place. In addition, the server has the client's public certificate in Local Computer/Trusted People, and the client has the server's public certificate in Local Computer/Trusted People. Finally, the client has been configured to check the server's certificate. I did this using the system.servicemodel/behaviors/endpointBehaviors/clientCredentials/serviceCertificate/defaultCertificate element in the config file. So far so good; this all works.

    My problem is that I want to specify in the server's config file that only clients that identify themselves with the client.mydomain.com certificate from the Trusted People certificate store are allowed to connect. The correct information is available on the server via the ServiceSecurityContext, but I am looking for a way to specify in app.config that WCF should do this check, instead of my having to check the security context from code. Is that possible? Any hints would be appreciated. By the way, my server's config file looks like this so far:

      <?xml version="1.0" encoding="utf-8" ?>
      <configuration>
        <system.serviceModel>
          <services>
            <service name="MyServer.Server" behaviorConfiguration="CertificateBehavior">
              <endpoint contract="Contracts.IMyService" binding="customBinding" bindingConfiguration="SecureConfig">
              </endpoint>
              <host>
                <baseAddresses>
                  <add baseAddress="http://localhost/SecureWcf"/>
                </baseAddresses>
              </host>
            </service>
          </services>
          <behaviors>
            <serviceBehaviors>
              <behavior name="CertificateBehavior">
                <serviceCredentials>
                  <serviceCertificate storeLocation="LocalMachine" x509FindType="FindBySubjectName" findValue="server.mydomain.com"/>
                </serviceCredentials>
              </behavior>
            </serviceBehaviors>
          </behaviors>
          <bindings>
            <customBinding>
              <binding name="SecureConfig">
                <security authenticationMode="MutualCertificate"/>
                <httpTransport/>
              </binding>
            </customBinding>
          </bindings>
        </system.serviceModel>
      </configuration>

    Read the article

  • Penetration testing with Nikto, unknown results found

    - by heldrida
    I've scanned my new webserver and I'm surprised to find that the results list programs I never installed. This is a fresh install of Ubuntu 12.04 where I just installed PHP 5.3, MySQL, fail2ban, apache2, git, and a few other things. Not sure if it's related, but I've got Wordpress installed -- that doesn't have anything to do with myphpnuke, does it? I'd like to understand why I am getting these results:

      + OSVDB-27071: /phpimageview.php?pic=javascript:alert(8754): PHP Image View 1.0 is vulnerable to Cross Site Scripting (XSS). http://www.cert.org/advisories/CA-2000-02.html.
      + OSVDB-3931: /myphpnuke/links.php?op=search&query=[script]alert('Vulnerable);[/script]?query=: myphpnuke is vulnerable to Cross Site Scripting (XSS). http://www.cert.org/advisories/CA-2000-02.html.
      + OSVDB-3931: /myphpnuke/links.php?op=MostPopular&ratenum=[script]alert(document.cookie);[/script]&ratetype=percent: myphpnuke is vulnerable to Cross Site Scripting (XSS). http://www.cert.org/advisories/CA-2000-02.html.
      + /modules.php?op=modload&name=FAQ&file=index&myfaq=yes&id_cat=1&categories=%3Cimg%20src=javascript:alert(9456);%3E&parent_id=0: Post Nuke 0.7.2.3-Phoenix is vulnerable to Cross Site Scripting (XSS). http://www.cert.org/advisories/CA-2000-02.html.
      + /modules.php?letter=%22%3E%3Cimg%20src=javascript:alert(document.cookie);%3E&op=modload&name=Members_List&file=index: Post Nuke 0.7.2.3-Phoenix is vulnerable to Cross Site Scripting (XSS). http://www.cert.org/advisories/CA-2000-02.html.
      + OSVDB-4598: /members.asp?SF=%22;}alert('Vulnerable');function%20x(){v%20=%22: Web Wiz Forums ver. 7.01 and below is vulnerable to Cross Site Scripting (XSS). http://www.cert.org/advisories/CA-2000-02.html.
      + OSVDB-2946: /forum_members.asp?find=%22;}alert(9823);function%20x(){v%20=%22: Web Wiz Forums ver. 7.01 and below is vulnerable to Cross Site Scripting (XSS). http://www.cert.org/advisories/CA-2000-02.html.

    Thanks for looking!
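    One quick way to triage findings like these is to re-request each flagged path and check whether the server serves it at all; Nikto matches signatures, so a server that answers oddly can produce hits for software that isn't actually installed. A minimal sketch, assuming the Python requests library and a placeholder host:

      import requests

      BASE = "http://localhost"  # placeholder: the scanned host

      # A few of the flagged paths, stripped of their query strings.
      paths = ["/phpimageview.php", "/myphpnuke/links.php", "/members.asp"]

      for path in paths:
          try:
              r = requests.get(BASE + path, timeout=5, allow_redirects=False)
          except requests.RequestException as exc:
              print(f"{path}: request failed ({exc})")
              continue
          # A 404 here suggests the finding is a false positive; the app isn't there.
          print(f"{path}: HTTP {r.status_code}")

    If those paths come back 404, the hits are signature matches rather than evidence that myphpnuke or PostNuke is present.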

    Read the article

  • Wires from fan on side panel - where do they connect?

    - by Mick
    The side panel of my desktop PC has a small fan built in. There were two wires, one red and one black, that connected inside the PC. Unfortunately I had removed the bolts and the side door fell off, ripping the cables, such that the main length of cable remains attached to the fan. So there must be two short stubs of broken cable somewhere -- but I can't find them! Any ideas where I should look? I can't see anything on the motherboard. The PC is running perfectly well, but I now have the side open to make sure the air can circulate.

    EDIT: The motherboard appears to be an "Asus P6T SE".

    EDIT: The manual shows the location of two "chassis fan" connectors. One is being used by a big chassis fan on the back face of the PC (underneath the power supply). The other appears to be unused -- there is no stub of a connector with wires; it's just the bare pins sticking directly out of the motherboard. There is also an unused "pwr_fan" connector. Would it be OK to simply connect the black and red wires from the fan to the black and red parts of a Molex adapter?

    Read the article

  • Could not find DHCP daemon to get information on Belkin G Wifi Router

    - by Anirudh Goel
    I am using a Belkin G Wireless Router F5D7234, and I have a DSL connection with only an ethernet cable, so I connected the cable to the modem port and allowed it to use a dynamic IP. It worked successfully: an IP was assigned, and multiple computers could connect through it and browse. But then the power went off, and since rebooting it is taking about half an hour to get an IP address. Looking at the log, I see this entry repeatedly:

      07/02/2010 23:22:34 DHCP Client: [WAN]Could not find DHCP daemon to get information
      07/02/2010 23:22:32 DHCP Client: [WAN]Send Discover
      07/02/2010 23:22:30 DHCP Client: [WAN]Send Discover
      07/02/2010 23:22:28 DHCP Client: [WAN]Send Discover
      07/02/2010 23:22:26 DHCP Client: [WAN]Send Discover
      07/02/2010 23:22:26 DHCP Client: [WAN]Could not find DHCP daemon to get information
      07/02/2010 23:22:24 DHCP Client: [WAN]Send Discover
      07/02/2010 23:22:22 DHCP Client: [WAN]Send Discover
      07/02/2010 23:22:20 DHCP Client: [WAN]Send Discover
      07/02/2010 23:22:18 DHCP Client: [WAN]Send Discover

    Any idea what I can do? I tried using another Belkin router of the same model and make, and there I faced the same problem too.

    Read the article

  • setting up bind to work with nsupdate (SERVFAIL)

    - by funny_ha_ha
    I'm trying to update my DNS server dynamically using nsupdate.

    Prerequisite

    I'm using Debian 6 on my DNS server and Debian 4 on my client. I created a public/private key pair using:

      dnssec-keygen -C -a HMAC-MD5 -b 512 -n USER sub.example.com.

    I then edited my named.conf.local to contain my public key and the new zone I wish to update. It now looks like this (note: I also tried allow-update { any; }; without success):

      zone "example.com" {
          type master;
          file "/etc/bind/primary/example.com";
          notify yes;
          allow-update { none; };
          allow-query { any; };
      };

      zone "sub.example.com" {
          type master;
          file "/etc/bind/primary/sub.example.com";
          notify yes;
          allow-update { key "sub.example.com."; };
          allow-query { any; };
      };

      key sub.example.com. {
          algorithm HMAC-MD5;
          secret "xxxx xxxx";
      };

    Next, I copied the private key file (key.private) to another server I want to update the zone from. I also created a text file (update) on this server which contains the update information (note: I tried toying around with this stuff too, no success):

      server example.com
      zone sub.example.com
      update add sub.example.com. 86400 A 10.10.10.1
      show
      send

    Now I'm trying to update the zone using:

      nsupdate -k key.private -v update

    The Problem

    Said command gives me the following output:

      Outgoing update query:
      ;; ->>HEADER<<- opcode: UPDATE, status: NOERROR, id: 0
      ;; flags: ; ZONE: 0, PREREQ: 0, UPDATE: 0, ADDITIONAL: 0
      ;; ZONE SECTION:
      ;sub.example.com. IN SOA
      ;; UPDATE SECTION:
      sub.example.com. 86400 IN A 10.10.10.1

      update failed: SERVFAIL

    named debug level 3 gives me the following information when I issue the nsupdate command on the remote server (note: I obfuscated the client IP):

      06-Aug-2012 14:51:33.977 client X.X.X.X#33182: new TCP connection
      06-Aug-2012 14:51:33.977 client X.X.X.X#33182: replace
      06-Aug-2012 14:51:33.978 clientmgr @0x2ada3c7ee760: createclients
      06-Aug-2012 14:51:33.978 clientmgr @0x2ada3c7ee760: recycle
      06-Aug-2012 14:51:33.978 client @0x2ada475f1120: accept
      06-Aug-2012 14:51:33.978 client X.X.X.X#33182: read
      06-Aug-2012 14:51:33.978 client X.X.X.X#33182: TCP request
      06-Aug-2012 14:51:33.978 client X.X.X.X#33182: request has valid signature
      06-Aug-2012 14:51:33.978 client X.X.X.X#33182: recursion not available
      06-Aug-2012 14:51:33.978 client X.X.X.X#33182: update
      06-Aug-2012 14:51:33.978 client X.X.X.X#33182: send
      06-Aug-2012 14:51:33.978 client X.X.X.X#33182: sendto
      06-Aug-2012 14:51:33.979 client X.X.X.X#33182: senddone
      06-Aug-2012 14:51:33.979 client X.X.X.X#33182: next
      06-Aug-2012 14:51:33.979 client X.X.X.X#33182: endrequest
      06-Aug-2012 14:51:33.979 client X.X.X.X#33182: read
      06-Aug-2012 14:51:33.986 client X.X.X.X#33182: next
      06-Aug-2012 14:51:33.986 client X.X.X.X#33182: request failed: end of file
      06-Aug-2012 14:51:33.986 client X.X.X.X#33182: endrequest
      06-Aug-2012 14:51:33.986 client X.X.X.X#33182: closetcp

    But it doesn't do anything: the zone isn't updated, nor does my nsupdate change anything. I'm not sure if the file /etc/bind/primary/sub.example.com should exist prior to the first update or not. I tried it without the file, with an empty file, and with a pre-configured zone file -- without success. The sparse information I found on the net pointed me towards file and folder permissions on the bind working directory, so I changed the permissions of both /etc/bind and /var/cache/bind (which is the home dir of my "bind" user). I'm not 100% sure if the permissions are correct, but they look good to me:

      ls -lah /var/cache/bind/
      total 224K
      drwxrwxr-x  2 bind bind 4.0K Aug  6 03:13 .
      drwxr-xr-x 12 root root 4.0K Jul 21 11:27 ..
      -rw-r--r--  1 bind bind 211K Aug  6 03:21 named.run

      ls -lah /etc/bind/
      total 72K
      drwxr-sr-x  3 bind bind 4.0K Aug  6 14:41 .
      drwxr-xr-x 87 root root 4.0K Jul 30 01:24 ..
      -rw-------  1 bind bind  125 Aug  6 02:54 key.public
      -rw-------  1 bind bind  156 Aug  6 02:54 key.private
      -rw-r--r--  1 bind bind 2.5K Aug  6 03:07 bind.keys
      -rw-r--r--  1 bind bind  237 Aug  6 03:07 db.0
      -rw-r--r--  1 bind bind  271 Aug  6 03:07 db.127
      -rw-r--r--  1 bind bind  237 Aug  6 03:07 db.255
      -rw-r--r--  1 bind bind  353 Aug  6 03:07 db.empty
      -rw-r--r--  1 bind bind  270 Aug  6 03:07 db.local
      -rw-r--r--  1 bind bind 3.0K Aug  6 03:07 db.root
      -rw-r--r--  1 bind bind  493 Aug  6 03:32 named.conf
      -rw-r--r--  1 bind bind  490 Aug  6 03:07 named.conf.default-zones
      -rw-r--r--  1 bind bind 1.2K Aug  6 14:18 named.conf.local
      -rw-r--r--  1 bind bind  666 Jul 29 22:51 named.conf.options
      drwxr-sr-x  2 bind bind 4.0K Aug  6 03:57 primary/
      -rw-r-----  1 root bind   77 Mar 19 02:57 rndc.key
      -rw-r--r--  1 bind bind 1.3K Aug  6 03:07 zones.rfc1918

      ls -lah /etc/bind/primary/
      total 20K
      drwxr-sr-x 2 bind bind 4.0K Aug  6 03:57 .
      drwxr-sr-x 3 bind bind 4.0K Aug  6 14:41 ..
      -rw-r--r-- 1 bind bind  356 Jul 30 00:45 example.com
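    As an aside, once the server side accepts signed updates, the same TSIG-signed update can also be issued from Python with the dnspython library, which is handy for scripting. A sketch under the same key and record as above (the master's IP is a placeholder):

      import dns.query
      import dns.rcode
      import dns.tsig
      import dns.tsigkeyring
      import dns.update

      # The TSIG secret from named.conf.local (placeholder value, as in the question).
      keyring = dns.tsigkeyring.from_text({"sub.example.com.": "xxxx xxxx"})

      update = dns.update.Update(
          "sub.example.com",
          keyring=keyring,
          keyalgorithm=dns.tsig.HMAC_MD5,
      )
      update.add("sub.example.com.", 86400, "A", "10.10.10.1")

      # Send to the master over TCP, mirroring what nsupdate -v does.
      response = dns.query.tcp(update, "192.0.2.1", timeout=10)  # placeholder master IP
      print(dns.rcode.to_text(response.rcode()))  # 'NOERROR' on success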

    Read the article

  • Invalid SSH key error in juju when using it with MAAS

    - by Captain T
    This is the output of juju from a clean install with 2 nodes, all running 12.04. juju bootstrap finishes with no errors and allocates the machine to the user, but there is still no joy after juju environment-destroy and a rebuild with different users and different nodes.

      root@cloudcontrol:/storage# juju -v status
      2012-06-07 11:19:47,602 DEBUG Initializing juju status runtime
      2012-06-07 11:19:47,621 INFO Connecting to environment...
      2012-06-07 11:19:47,905 DEBUG Connecting to environment using node-386077143930...
      2012-06-07 11:19:47,906 DEBUG Spawning SSH process with remote_user="ubuntu" remote_host="node-386077143930" remote_port="2181" local_port="57004".
      The authenticity of host 'node-386077143930 (10.5.5.113)' can't be established.
      ECDSA key fingerprint is 31:94:89:62:69:83:24:23:5f:02:70:53:93:54:b1:c5.
      Are you sure you want to continue connecting (yes/no)? yes
      2012-06-07 11:19:52,102 ERROR Invalid SSH key
      2012-06-07 11:19:52,426:18541(0x7feb13b58700):ZOO_INFO@log_env@658: Client environment:zookeeper.version=zookeeper C client 3.3.5
      2012-06-07 11:19:52,426:18541(0x7feb13b58700):ZOO_INFO@log_env@662: Client environment:host.name=cloudcontrol
      2012-06-07 11:19:52,426:18541(0x7feb13b58700):ZOO_INFO@log_env@669: Client environment:os.name=Linux
      2012-06-07 11:19:52,426:18541(0x7feb13b58700):ZOO_INFO@log_env@670: Client environment:os.arch=3.2.0-23-generic
      2012-06-07 11:19:52,426:18541(0x7feb13b58700):ZOO_INFO@log_env@671: Client environment:os.version=#36-Ubuntu SMP Tue Apr 10 20:39:51 UTC 2012
      2012-06-07 11:19:52,428:18541(0x7feb13b58700):ZOO_INFO@log_env@679: Client environment:user.name=sysadmin
      2012-06-07 11:19:52,428:18541(0x7feb13b58700):ZOO_INFO@log_env@687: Client environment:user.home=/root
      2012-06-07 11:19:52,428:18541(0x7feb13b58700):ZOO_INFO@log_env@699: Client environment:user.dir=/storage
      2012-06-07 11:19:52,428:18541(0x7feb13b58700):ZOO_INFO@zookeeper_init@727: Initiating client connection, host=localhost:57004 sessionTimeout=10000 watcher=0x7feb11afc6b0 sessionId=0 sessionPasswd=<null> context=0x2dc7d20 flags=0
      2012-06-07 11:19:52,429:18541(0x7feb0e856700):ZOO_ERROR@handle_socket_error_msg@1579: Socket [127.0.0.1:57004] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
      2012-06-07 11:19:55,765:18541(0x7feb0e856700):ZOO_ERROR@handle_socket_error_msg@1579: Socket [127.0.0.1:57004] zk retcode=-4, errno=111(Connection refused): server refused to accept the client

    I have tried numerous ways of creating the keys, with ssh-keygen -t rsa -b 2048, ssh-keygen -t rsa, and plain ssh-keygen, and I have tried adding those to the MAAS web config page, but I always get the same result. I have also added the appropriate public key afterwards to ~/.ssh/authorized_keys. I can ssh to the node, but as I have not been asked to give it a user name or password or set up any sort of account, I cannot manually log into the node; the setup of the node is all handled by the MAAS server. It seems like a simple error of looking at the wrong key or looking in the wrong places. The only other suggestions I can find are to destroy the environment and rebuild (but that hasn't worked umpteen times now), or to leave it to build the instance once the node has powered up -- but I have left it for a few hours, and left it overnight, with no luck.

    Read the article

  • apache mod_jk loadbalancing issue for glassfish cluster instances

    - by SibzTer
    I have a JEE ear application deployed on 2 clusters with 2 instances each on Glassfish v3.1. These are load balanced by an Apache server running on the same machine. My problem is that I am frequently seeing the following error messages in the mod_jk.log file. Can you help me understand what the issue is?

      [Mon Jun 13 09:37:51 2011] [7116:7852] [info] ajp_process_callback::jk_ajp_common.c (1885): Writing to client aborted or client network problems
      [Mon Jun 13 09:37:51 2011] [7116:7852] [info] ajp_service::jk_ajp_common.c (2543): (viewerLocalInstance4) sending request to tomcat failed (unrecoverable), because of client write error (attempt=1)
      [Mon Jun 13 09:37:51 2011] loadbalancerLocal myServer 0.062500
      [Mon Jun 13 09:37:51 2011] [7116:6512] [info] ajp_process_callback::jk_ajp_common.c (1885): Writing to client aborted or client network problems
      [Mon Jun 13 09:37:51 2011] [7116:6512] [info] ajp_service::jk_ajp_common.c (2543): (viewerLocalInstance4) sending request to tomcat failed (unrecoverable), because of client write error (attempt=1)
      [Mon Jun 13 09:37:52 2011] [7116:3080] [info] ajp_process_callback::jk_ajp_common.c (1885): Writing to client aborted or client network problems
      [Mon Jun 13 09:37:52 2011] [7116:3080] [info] ajp_service::jk_ajp_common.c (2543): (viewerLocalInstance4) sending request to tomcat failed (unrecoverable), because of client write error (attempt=1)
      [Mon Jun 13 09:38:21 2011] [7116:6512] [info] service::jk_lb_worker.c (1388): service failed, worker viewerLocalInstance4 is in local error state
      [Mon Jun 13 09:38:21 2011] [7116:7852] [info] service::jk_lb_worker.c (1388): service failed, worker viewerLocalInstance4 is in local error state
      [Mon Jun 13 09:38:21 2011] [7116:6512] [info] service::jk_lb_worker.c (1407): unrecoverable error 200, request failed. Client failed in the middle of request, we can't recover to another instance.
      [Mon Jun 13 09:38:21 2011] [7116:7852] [info] service::jk_lb_worker.c (1407): unrecoverable error 200, request failed. Client failed in the middle of request, we can't recover to another instance.
      [Mon Jun 13 09:38:21 2011] loadbalancerLocal myServer 29.046875
      [Mon Jun 13 09:38:21 2011] loadbalancerLocal myServer 29.171875
      [Mon Jun 13 09:38:21 2011] [7116:6512] [info] jk_handler::mod_jk.c (2620): Aborting connection for worker=loadbalancerLocal
      [Mon Jun 13 09:38:21 2011] [7116:7852] [info] jk_handler::mod_jk.c (2620): Aborting connection for worker=loadbalancerLocal
      [Mon Jun 13 09:38:21 2011] [7116:7852] [info] ajp_process_callback::jk_ajp_common.c (1885): Writing to client aborted or client network problems
      [Mon Jun 13 09:38:21 2011] [7116:7852] [info] ajp_service::jk_ajp_common.c (2543): (viewerLocalInstance4) sending request to tomcat failed (unrecoverable), because of client write error (attempt=1)
      [Mon Jun 13 09:38:21 2011] loadbalancerLocal myServer 0.156250
      [Mon Jun 13 09:38:21 2011] loadbalancerLocal myServer 0.062500
      [Mon Jun 13 09:38:22 2011] [7116:3080] [info] service::jk_lb_worker.c (1388): service failed, worker viewerLocalInstance4 is in local error state
      [Mon Jun 13 09:38:22 2011] [7116:3080] [info] service::jk_lb_worker.c (1407): unrecoverable error 200, request failed. Client failed in the middle of request, we can't recover to another instance.

    Read the article

  • How do I install the Cisco Anyconnect VPN client?

    - by chuck
    I tried to install Cisco AnyConnect on Ubuntu 12.04 (64-bit), but it failed; it installs fine on Ubuntu 10.10 (64-bit). The error log:

      Installing Cisco AnyConnect VPN Client ...
      Extracting installation files to /tmp/vpn.teuSIr/vpninst096243274.tgz...
      Unarchiving installation files to /tmp/vpn.teuSIr...
      Starting the VPN agent...
      /opt/cisco/vpn/bin/vpnagentd: error while loading shared libraries: libxml2.so.2: cannot open shared object file: No such file or directory

    When I saw that, I ran locate libxml2.so.2:

      /usr/lib/x86_64-linux-gnu/libxml2.so.2
      /usr/lib/x86_64-linux-gnu/libxml2.so.2.7.8

    So I created a symlink libxml2.so.2 in /usr/lib, and afterwards I get:

      Installing Cisco AnyConnect VPN Client ...
      Extracting installation files to /tmp/vpn.5cz4FV/vpninst001442979.tgz...
      Unarchiving installation files to /tmp/vpn.5cz4FV...
      Starting the VPN agent...
      /opt/cisco/vpn/bin/vpnagentd: error while loading shared libraries: libxml2.so.2: wrong ELF class: ELFCLASS64

    I have made sure that the 32-bit runtime libraries exist on my device. How can I fix this?
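    For what it's worth, "wrong ELF class: ELFCLASS64" means a 32-bit binary (vpnagentd) was handed a 64-bit library, so a symlink to the x86_64 libxml2 cannot satisfy it. A small sketch for checking which flavour a given .so really is -- plain Python, using the path from the question:

      def elf_class(path):
          """Return '32-bit' or '64-bit' for an ELF binary or shared library."""
          with open(path, "rb") as f:
              header = f.read(5)
          if header[:4] != b"\x7fELF":
              raise ValueError(f"{path} is not an ELF file")
          # Byte 4 of the ELF identification is EI_CLASS: 1 = ELF32, 2 = ELF64.
          return {1: "32-bit", 2: "64-bit"}[header[4]]

      print(elf_class("/usr/lib/x86_64-linux-gnu/libxml2.so.2.7.8"))  # prints: 64-bit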

    Read the article

  • ASP.NET WebAPI Security 4: Examples for various Authentication Scenarios

    - by Your DisplayName here!
    The Thinktecture.IdentityModel.Http repository includes a number of samples for the various authentication scenarios. All the clients follow a basic pattern: acquire a client credential (a single token, multiple tokens, or username/password), then call the service. The service simply enumerates the claims it finds on the request and returns them to the client. I won't show that part of the code, but rather focus on steps 1 and 2.

    Basic Authentication

    This is the most basic (pun intended) scenario. My library contains a class that can create the Basic Authentication header value. Simply set username and password and you are good to go.

      var client = new HttpClient { BaseAddress = _baseAddress };
      client.DefaultRequestHeaders.Authorization =
          new BasicAuthenticationHeaderValue("alice", "alice");

      var response = client.GetAsync("identity").Result;
      response.EnsureSuccessStatusCode();

    SAML Authentication

    To integrate a Web API with an existing enterprise identity provider like ADFS, you can use SAML tokens. This is certainly not the most efficient way of calling a "lightweight service" ;) But very useful if that's what it takes to get the job done.

      private static string GetIdentityToken()
      {
          var factory = new WSTrustChannelFactory(
              new WindowsWSTrustBinding(SecurityMode.Transport),
              _idpEndpoint);
          factory.TrustVersion = TrustVersion.WSTrust13;

          var rst = new RequestSecurityToken
          {
              RequestType = RequestTypes.Issue,
              KeyType = KeyTypes.Bearer,
              AppliesTo = new EndpointAddress(Constants.Realm)
          };

          var token = factory.CreateChannel().Issue(rst) as GenericXmlSecurityToken;
          return token.TokenXml.OuterXml;
      }

      private static Identity CallService(string saml)
      {
          var client = new HttpClient { BaseAddress = _baseAddress };
          client.DefaultRequestHeaders.Authorization =
              new AuthenticationHeaderValue("SAML", saml);

          var response = client.GetAsync("identity").Result;
          response.EnsureSuccessStatusCode();

          return response.Content.ReadAsAsync<Identity>().Result;
      }

    SAML to SWT conversion using the Azure Access Control Service

    Another possible option for integrating SAML-based identity providers is to use an intermediary service that allows converting the SAML token to the more compact SWT (Simple Web Token) format. This way you only need to roundtrip the SAML once and can use the SWT afterwards. The code for the conversion uses the ACS OAuth2 endpoint. The OAuth2Client class is part of my library.

      private static string GetServiceTokenOAuth2(string samlToken)
      {
          var client = new OAuth2Client(_acsOAuth2Endpoint);

          return client.RequestAccessTokenAssertion(
              samlToken,
              SecurityTokenTypes.Saml2TokenProfile11,
              Constants.Realm).AccessToken;
      }

    SWT Authentication

    When you have an identity provider that directly supports a (simple) web token, you can acquire the token directly without the conversion step. Thinktecture.IdentityServer, for example, supports the OAuth2 resource owner credential profile to issue SWT tokens.

      private static string GetIdentityToken()
      {
          var client = new OAuth2Client(_oauth2Address);
          var response = client.RequestAccessTokenUserName("bob", "abc!123", Constants.Realm);
          return response.AccessToken;
      }

      private static Identity CallService(string swt)
      {
          var client = new HttpClient { BaseAddress = _baseAddress };
          client.DefaultRequestHeaders.Authorization =
              new AuthenticationHeaderValue("Bearer", swt);

          var response = client.GetAsync("identity").Result;
          response.EnsureSuccessStatusCode();

          return response.Content.ReadAsAsync<Identity>().Result;
      }

    So you can see that it's pretty straightforward to implement various authentication scenarios using Web API and my authentication library. Stay tuned for more client samples!

    Read the article

  • TeamSpeak 3 Disconnects

    - by ArchUser
    I've recently had a few random TS3 mass disconnects, and I am curious to know where I may find any applications that can help me determine the cause of these TS3 server disconnections, as we plan on having many more users in the future. I run an almost empty VPS (OpenVZ) server with an ArchLinux template on it. I have 1.5/2 GB of RAM, 2 GHz of CPU, and plenty of hard drive space, to run, for the most part, just my TS3 and a low-traffic Apache web server. This is what I am investigating:

      2011-02-04 06:07:05.130343|INFO |VirtualServer | 1| client disconnected 'Valamoor'(id:224) reason 'reasonmsg=connection lost'
      2011-02-04 06:07:05.131338|INFO |VirtualServer | 1| client disconnected 'Kevrow'(id:198) reason 'reasonmsg=connection lost'
      2011-02-04 06:07:05.191849|INFO |VirtualServer | 1| client disconnected 'scuba'(id:200) reason 'reasonmsg=connection lost'
      2011-02-04 06:07:05.192633|INFO |VirtualServer | 1| client disconnected '[Ash] Setna'(id:75) reason 'reasonmsg=connection lost'
      2011-02-04 06:07:05.193350|INFO |VirtualServer | 1| client disconnected 'Akiris'(id:254) reason 'reasonmsg=connection lost'
      2011-02-04 06:07:05.194047|INFO |VirtualServer | 1| client disconnected 'Marcus'(id:258) reason 'reasonmsg=connection lost'
      2011-02-04 06:07:05.194726|INFO |VirtualServer | 1| client disconnected 'Guthry'(id:275) reason 'reasonmsg=connection lost'
      2011-02-04 07:18:50.327071|INFO |VirtualServer | 1| client disconnected 'Valamoor'(id:224) reason 'reasonmsg=connection lost'
      2011-02-04 07:18:51.339018|INFO |VirtualServer | 1| client disconnected 'Marcus'(id:258) reason 'reasonmsg=connection lost'
      2011-02-04 07:18:51.339870|INFO |VirtualServer | 1| client disconnected '[Ash] Setna'(id:75) reason 'reasonmsg=connection lost'
      2011-02-04 07:18:51.340515|INFO |VirtualServer | 1| client disconnected 'Guthry'(id:275) reason 'reasonmsg=connection lost'
      2011-02-05 04:55:20.797353|INFO |VirtualServer | 1| client disconnected 'JohnyRingo'(id:240) reason 'reasonmsg=connection lost'
      2011-02-05 04:55:20.798517|INFO |VirtualServer | 1| client disconnected 'Maloo roots'(id:196) reason 'reasonmsg=connection lost'
      2011-02-05 04:55:20.799314|INFO |VirtualServer | 1| client disconnected 'Cpt dravyn'(id:234) reason 'reasonmsg=connection lost'
      2011-02-05 04:55:20.839254|INFO |VirtualServer | 1| client disconnected 'scuba'(id:200) reason 'reasonmsg=connection lost'
      etc...

    I need to determine whether it is my hosting provider or my server, and what tools I can use to find out. My VPS host has told me this: "I checked out the node that your VPS runs on and there is no abnormal system load, or I/O wait from the drive. I also checked the bandwidth history from the server and there have been no spikes or outages."
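    As a first analysis step -- a sketch, assuming the server log shown above is saved as ts3.log -- grouping the 'connection lost' events by second makes mass-disconnect bursts (typically a network blip rather than individual client problems) easy to spot:

      from collections import Counter

      # Count 'connection lost' disconnects per second to expose burst events.
      bursts = Counter()
      with open("ts3.log") as log:  # placeholder filename
          for line in log:
              if "reasonmsg=connection lost" in line:
                  timestamp = line.split("|", 1)[0].strip()  # '2011-02-04 06:07:05.130343'
                  bursts[timestamp[:19]] += 1                # truncate to whole seconds

      for second, count in sorted(bursts.items()):
          if count >= 3:  # several drops in the same second looks like a network event
              print(f"{second}: {count} clients dropped")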

    Read the article

  • Should one generally develop a client library for REST services to help prevent API breakages?

    - by BestPractices
    We have a project where the UI code will be developed by the same team, but in a different language (Python/Django) from the services layer (REST/Java). The code for each layer lives in a different repository, and each can follow its own release cycle. I'm trying to come up with a process that will prevent or reduce breaking changes in the services layer, from the perspective of the UI layer. I've thought to write integration tests at the UI-layer level that we'll run whenever we build the UI or the services layer (we're using Jenkins as our CI tool to build the code, which is in two Git repos); if there are failures, then something in the services layer broke and the commit is not accepted. Would it also be a good idea (is it a best practice?) for the developer of the services layer to create and maintain a client library for the REST service that lives in the UI layer, which they update whenever there is a breaking change in their service API? Conceivably, we would then have the advantage of a statically-typed API that the UI code builds against. If the client library API changes, then the UI code won't compile (so we'll know sooner that there was a breaking change). I'd also still run the integration tests upon building the UI or services layer, to further validate that the integration between the UI and the service(s) still works.
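    To make the idea concrete, here is a minimal sketch of what such a hand-maintained client library could look like on the Python/Django side -- every name, endpoint, and field is invented for illustration, and it assumes the requests library:

      from dataclasses import dataclass

      import requests


      @dataclass
      class Widget:
          """Typed mirror of a hypothetical service resource."""
          id: int
          name: str


      class WidgetServiceClient:
          """The UI codes against this class instead of raw URLs, so a breaking
          change in the service API surfaces here first."""

          def __init__(self, base_url: str, timeout: float = 5.0):
              self.base_url = base_url.rstrip("/")
              self.timeout = timeout

          def get_widget(self, widget_id: int) -> Widget:
              response = requests.get(
                  f"{self.base_url}/widgets/{widget_id}", timeout=self.timeout
              )
              response.raise_for_status()
              payload = response.json()
              # If the service renames or drops a field, this fails loudly in the
              # client library rather than deep inside UI code.
              return Widget(id=payload["id"], name=payload["name"])

    The Jenkins integration tests would then exercise this class rather than ad-hoc URL calls, which localizes any breakage to one well-known module.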

    Read the article

  • My hosting server giving memory allocation error

    - by Usman
    I have hosted my website on a shared-hosting Linux server. Although at most 10-15 visitors come to my website daily, my WordPress website gives a 500 Internal Server Error most of the time. I accessed my server's error log, and the following error shows up repeatedly:

      [Tue Dec 04 08:57:45 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
      [Tue Dec 04 08:57:45 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
      [Tue Dec 04 08:57:43 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
      [Tue Dec 04 08:57:43 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
      [Tue Dec 04 08:57:42 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
      [Tue Dec 04 08:57:42 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
      [Tue Dec 04 08:57:41 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
      [Tue Dec 04 08:57:41 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
      [Tue Dec 04 08:57:40 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
      [Tue Dec 04 08:57:40 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
      [Tue Dec 04 08:57:32 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
      [Tue Dec 04 08:57:31 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
      [Tue Dec 04 08:57:29 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
      [Tue Dec 04 08:57:29 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
      [Tue Dec 04 08:57:26 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php

    Is my hosting service really that bad? And is there any solution? Thanks in advance.

    Read the article

  • In developing a soap client proxy, which return structure is easier to use and more sensible?

    - by cori
    I'm writing (in PHP) a client/proxy for a SOAP web service. The return types are consistently wrapped in response objects that contain the return values. In many cases this makes a lot of sense -- for instance, when multiple values are being returned:

      GetDetailsResponse Object (
          Results Object (
              [TotalResults] => 10
              [NextPage] => 2
          )
          [Details] => Array (
              [0] => Detail Object (
                  [Id] => 1
              )
          )
      )

    But some of the methods return a single scalar value, or a single object or array, wrapped in a response object:

      GetThingummyIdResponse Object (
          [ThingummyId] => 42
      )

    In some cases these objects might be pretty deep, so getting at properties within requires drilling down several layers:

      $response->Details->Detail[0]->Contents->Item[5]->Id

    And if I unwrap them before passing them back, I can strip out a layer from consumers' code. I know I'm probably being a little bit of an Architecture Astronaut here, but the latter style really bugs me, so I've been working through my code to have my proxy methods just return the scalar value to the client code where there's no absolute need for a wrapper object. My question is, am I actually making things more difficult for the consumers of my code? Would I be better off just leaving the return values wrapped in response objects so that everything is consistent, or is removing unnecessary layers of indirection/abstraction worthwhile?
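    Purely to illustrate the rule being described -- a sketch in Python rather than PHP, with invented names -- the proxy can unwrap a response only when the wrapper carries a single property, and hand back the wrapper unchanged otherwise:

      def unwrap(response_obj):
          """Return the sole public attribute of a one-field response wrapper,
          or the wrapper itself if it has several fields (or none)."""
          fields = {k: v for k, v in vars(response_obj).items()
                    if not k.startswith("_")}
          if len(fields) == 1:
              return next(iter(fields.values()))  # e.g. GetThingummyIdResponse -> 42
          return response_obj

    The cost, as the question itself notes, is that consumers lose one uniform shape across all calls.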

    Read the article

  • How does the timeout work in Restlet's client class?

    - by Greg Noe
    Here's some code:

      Client client = new Client(Protocol.HTTP);
      client.setConnectTimeout(1); // milliseconds
      Response response = client.post(url, paramRepresentation);
      System.out.println("timed out");

    What I would expect to happen is that it prints "timed out" before the resource has time to process. Instead, nothing happens with the timeout, and it doesn't print "timed out" until after the resource returns. Even if I put a Thread.sleep(5000) in the resource that's handling the request, the entire sleep is performed, as if the timeout did nothing. Anyone have experience with this? I'm using Restlet 1.1.1. Thanks.

    Read the article

  • how can I send messages to/from a Websphere Message Broker from an embedded C client (no JVM)?

    - by queBurro
    What are my options for pub-subbing (or point-to-point, but pub-sub is better) messages to and from an IBM message broker, from an embedded, headless C/C++ Linux client that doesn't have a JVM? Ideally we want:

    - large file transfer (2 GB, once per day, off of the client)
    - encryption (SSL)
    - reliable ('assured') delivery (QoS2; maybe QoS1 would do)

    The client in question currently only has exes and some bash scripts. I've been playing with MQTTv3 and RSMB, but for that I'd have to chop the large files up (and reassemble them back home), and I don't want to get into that if there's a transport that will do it for me. I've looked at MQTTv5 (but our client's got no JVM); JMS (no JVM); and XMS, which again looks like it gives me a C API but then needs the JVM to be installed on the client (or am I wrong?). Any clues or hints would be appreciated, cheers.

    Read the article

  • Why does Indy 10's echo server have high CPU usage when the client disconnects?

    - by Virtuo
    When I disconnect the echo client like this:

      EchoClient1.Disconnect;

    the client disconnects fine... but the EchoServer does NOT even register the client disconnection, and it ends up with high CPU usage!? In every example and every doc it says that calling EchoClient.Disconnect is sufficient! Anyone, any ideas? (I am working on Win7 -- could that be a problem?)

    Server code:

      procedure TForm2.EServerConnect(AContext: TIdContext);
      begin
        SrvMsg.Lines.Add('ECHO Client connected !');
      end;

      procedure TForm2.EServerDisconnect(AContext: TIdContext);
      begin
        SrvMsg.Lines.Add('ECHO Client disconnected !');
      end;

    The problem is that TForm2.EServerDisconnect never executes!?

    Read the article

  • How to provide an API client with 1,000,000 database results?

    - by Chris Dutrow
    What is a good way to provide an API client with 1,000,000 database results? We are currently using PostgreSQL. A few suggested methods:

    - Paging using cursors (sketched below)
    - Paging using random numbers (add "GREATER THAN ORDER BY" to each query)
    - Save the information to a file and let the client download it
    - Iterate through the results, then POST the data to the client's server
    - Return only keys to the client, then let the client request the objects from cloud files like Amazon S3 (this may still require paging just to get the file names)

    What haven't I thought of that is stupidly simple and way better than any of these options?
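    For the cursor option, here is a minimal sketch of how a server-side cursor looks with PostgreSQL from Python via psycopg2 (the DSN, table, and the stream_to_client helper are placeholders): the server holds the result set and the client streams it in batches instead of materializing all 1,000,000 rows at once.

      import psycopg2

      conn = psycopg2.connect("dbname=mydb user=me")  # placeholder DSN

      # Naming the cursor makes it a server-side cursor: PostgreSQL keeps the
      # result set and hands it over in chunks rather than all rows at once.
      with conn, conn.cursor(name="big_export") as cur:
          cur.itersize = 10000  # rows fetched per round trip to the server
          cur.execute("SELECT id, payload FROM results ORDER BY id")
          for row in cur:
              stream_to_client(row)  # placeholder: write into the API response

    The "GREATER THAN ORDER BY" variant is keyset pagination (WHERE id > last_seen ORDER BY id LIMIT n); unlike a cursor, it doesn't pin a database connection between client requests.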

    Read the article

  • Application Architecture using WCF and System.AddIn

    - by Silverhalide
    A little background -- we're designing an application that uses a client/server architecture consisting of:

    - A server, which loads server-side modules, potentially developed by other teams.
    - A client, which loads corresponding client-side modules (also potentially developed by those other teams; each client module corresponds to a server module).

    The client side communicates with the server side for general coordination, as well as for module-specific tasks. (At this point, I think that means client talks to server, and client modules talk to server modules.) The environment is .NET 3.5, and the client side is WPF. The deployment scenario introduces the potential to upgrade the server, any server-side module, the client, and any client-side module independently. However, being able to "work" using mismatched versions is required. I'm therefore concerned about versioning issues.

    My thinking so far:

    - A Windows Service for the server.
    - Using System.AddIn for the server to load and communicate with the server modules will give us the greatest flexibility in terms of version compatibility between server and server modules.
    - The server and each server module vend WCF services for communication to the client side; communication between the server and a server module, or between two server modules, uses the AddIn contracts. (One advantage of this is that a module can expose a different interface within the server than outside it.)
    - Similarly, the client uses System.AddIn to find, load, and communicate with the client modules. Client communication with client modules is via the AddIn interface; communications from the client and from client modules to the server side are via WCF.
    - For maximum resilience, each module will run in a separate app domain.

    In general, the system has modest performance requirements, so marshalling and crossing process boundaries is not expected to be a performance concern. (The performance requirement is basically summed up by: don't get in the way of the other parts of the system not described here.) My questions are around the idea of having two different communication and versioning models to work with, which will be an added burden on our developers. System.AddIn seems quite powerful, but also a little unwieldy. (I'm also unsure of Microsoft's commitment to it in the future.) On the other hand, I'm not thrilled with WCF's versioning capabilities. I have a feeling that it would be possible to implement the System.AddIn view/adapter/contract system within WCF, but being fairly new to both technologies, I would have no idea where to start. So... Am I on the right track here? Am I doing this the hard way? Are there gotchas I need to be aware of on this road? Thanks.

    Read the article

  • VPN, Tunneling to hide 'real IP' through my proxy server while showing the client IP on 'real' server side

    - by mickula
    I would like to hide my 'main server' behind a load balancer -- call it the 'proxy server'. However, I use some closed-source software on the 'main server', and it needs the client's IP address to operate correctly. When I set up a VPN connection, that software displays the IP address of my 'proxy server' instead. Is there any option to set up tunneling or a VPN so as to:

    - not reveal the IP of the 'main server', and
    - show the IP of the 'client' to the 'application' on the 'main server'?

    I will be grateful for all your replies and ideas.

    Read the article

  • BlazeDS StreamingAMF: How to detect when flex client closes the connection?

    - by Adrian Pirvulescu
    Hello, I have a Flex application that connects to a BlazeDS server using the StreamingAMF channel. On the server side, the logic is handled by a custom adapter that extends ActionScriptAdapter and implements the FlexSessionListener and FlexClientListener interfaces. How can I detect which "flex-client" has closed a connection, for example when the user closes the browser? (So I can clean up some info in the database.) I have tried the following.

    1. Manually managing the command messages:

      @Override
      public Object manage(final CommandMessage commandMessage) {
          switch (commandMessage.getOperation()) {
              case CommandMessage.SUBSCRIBE_OPERATION:
                  System.out.println("SUBSCRIBE_OPERATION = " + commandMessage.getHeaders());
                  break;
              case CommandMessage.UNSUBSCRIBE_OPERATION:
                  System.out.println("UNSUBSCRIBE_OPERATION = " + commandMessage.getHeaders());
                  break;
          }
          return super.manage(commandMessage);
      }

    But the clientIDs are always different from the ones that came in.

    2. Listening for sessionDestroyed and clientDestroyed events:

      @Override
      public void clientCreated(final FlexClient client) {
          client.addClientDestroyedListener(this);
          System.out.println("clientCreated = " + client.getId());
      }

      @Override
      public void clientDestroyed(final FlexClient client) {
          System.out.println("clientDestroyed = " + client.getId());
      }

      @Override
      public void sessionCreated(final FlexSession session) {
          System.out.println("sessionCreated = " + session.getId());
          session.addSessionDestroyedListener(this);
      }

      @Override
      public void sessionDestroyed(final FlexSession session) {
          System.out.println("sessionDestroyed = " + session.getId());
      }

    But those sessionDestroyed and clientDestroyed methods are never called. :(

    Read the article

  • TeamCity Scheduled Build not getting all files from VSS

    - by Kate
    Within TeamCity, if I trigger a build it all works correctly; however, if the scheduler triggers a build, it does not seem to get all the files from VSS. I have "clean checkout directory" turned on, so I am not sure how it determines the patch for the VSS root. Does anyone have any suggestions on how I can get it to always fetch all files and create a new patch each time? I have put the start of two build logs below; as you can see, the first one transfers the correct 249 MB, whereas the second transfers only 2 MB. The files it doesn't get from VSS seem sporadic, with no relation to what has changed.

    Manual trigger:

      [23:57:49]: Checking for changes
      [00:09:04]: Clean build enabled: removing old files from C:\Builds\Ab 2.0
      [00:09:04]: Clearing temporary directory: C:\TeamCity\buildAgent\temp\buildTmp
      [00:09:05]: Checkout directory: C:\Builds\Ab 2.0
      [00:09:05]: Updating sources: server side checkout... (24m:53s)
      [00:09:05]: [Updating sources: server side checkout...] Will perform clean checkout
      [00:09:05]: [Updating sources: server side checkout...] Clean checkout reasons
      [00:09:05]: [Clean checkout reasons] Checkout directory is empty or doesn't exist
      [00:09:05]: [Clean checkout reasons] "Clean all files before build" turned on
      [00:09:05]: [Updating sources: server side checkout...] Transferring cached clean patch for VCS root: Ab 2.0
      [00:09:42]: [Updating sources: server side checkout...] Building incremental patch over the cached patch
      [00:31:50]: [Updating sources: server side checkout...] Transferring repository sources: 124.0Mb so far...
      [00:32:18]: [Updating sources: server side checkout...] Repository sources transferred: 249.46Mb total
      [00:32:18]: [Updating sources: server side checkout...] Average transfer speed: 183.40Kb per second

    Triggered by the scheduler:

      [07:45:01]: Checking for changes
      [07:55:09]: Clean build enabled: removing old files from C:\Builds\Ab 2.0
      [07:55:22]: Clearing temporary directory: C:\TeamCity\buildAgent\temp\buildTmp
      [07:55:22]: Checkout directory: C:\Builds\Ab 2.0
      [07:55:22]: Updating sources: server side checkout... (24m:24s)
      [07:55:22]: [Updating sources: server side checkout...] Will perform clean checkout
      [07:55:22]: [Updating sources: server side checkout...] Clean checkout reasons
      [07:55:22]: [Clean checkout reasons] Checkout directory is empty or doesn't exist
      [07:55:22]: [Clean checkout reasons] "Clean all files before build" turned on
      [07:55:22]: [Updating sources: server side checkout...] Building clean patch for VCS root: Ab 2.0
      [08:19:46]: [Updating sources: server side checkout...] Transferring cached clean patch for VCS root: Ab 2.0
      [08:19:47]: [Updating sources: server side checkout...] Repository sources transferred: 2.01Mb total

    Read the article
