Search Results

Search found 56465 results on 2259 pages for 'http status 401'.


  • 550 operation not permitted using FTP

    - by monkey_boys
    I'm using FTP to manage some files on a site I run but keep seeing this (truncated) error log:

        Command: DELE calendarpermission.php
        Response: 550 calendarpermission.php: Operation not permitted
        [...]
        Command: DELE button_down.gif
        Response: 550 button_down.gif: Operation not permitted
        Command: CWD /domains/example.com/public_html/admincp
        Response: 250 CWD command successful
        Command: PWD
        Response: 257 "/domains/example.com/public_html/admincp" is the current directory
        Command: RMD control_examples
        Response: 550 control_examples: Operation not permitted
        Command: CWD /domains/example.com/public_html
        Response: 250 CWD command successful
        Command: PWD
        Response: 257 "/domains/example.com/public_html" is the current directory
        Command: RMD admincp
        Response: 550 admincp: Operation not permitted
        Status: Retrieving directory listing...
        Command: PASV
        Response: 227 Entering Passive Mode (122,155,5,50,138,244).
        Command: MLSD
        Response: 150 Opening ASCII mode data connection for MLSD
        Response: 226 Transfer complete
        Status: Directory listing successful
        Status: Set permissions of '/domains/example.com/public_html/admincp' to '777'
        Command: SITE CHMOD 777 admincp
        Response: 550 CHMOD 777 admincp: Operation not permitted

    What do I do to solve this?
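
    A 550 "Operation not permitted" on DELE/RMD/SITE CHMOD usually means the FTP account does not own those files (they were most likely created by the web server or another system user), so no FTP command will help until ownership is fixed. A minimal sketch of what to check over SSH, assuming you have shell access; "ftpuser" and "www-data" are placeholders for your FTP account and web server user:

        # see who owns the files the FTP user cannot touch
        ls -la /domains/example.com/public_html/admincp

        # if they belong to the web server user (e.g. www-data),
        # hand them back to the FTP account
        sudo chown -R ftpuser:ftpuser /domains/example.com/public_html/admincp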

    Read the article

  • VMware ESX 3.5 Host Health shown as unknown

    - by dunxd
    I have an ESX 3.5 Update 5 cluster of five host servers, all fully patched as of this Friday. Today I noticed that one of the servers shows its Hardware Health status as unknown in the Virtual Center Infrastructure Client. When I look at the Health Status view under Configuration for that host, every item has status Unknown. The server has exactly the same configuration as the others - same model (HP DL360 G5), memory, NICs, etc. I have tried restarting the management service with service mgmt-vmware restart, but this has not resolved the issue. Aside from this, I am not seeing any issues with the cluster - however, I hate having a blind spot like this. Any ideas?

    Read the article

  • vim: sending tab-completion key against a mapped keystroke

    - by CDR
    To switch between buffers without installing any plugins, a good way is to type :b <tab>, which shows all the current buffer names in the status bar so you can pick one using the cursor keys and Enter. But :b <tab> is 5 keystrokes and I would like to map it to a <leader> key. However, the following mapping is not working: :nnoremap <Leader>. :b <Tab> It just shows ":b ^I" in the status bar and doesn't actually list the buffer names. Does anyone know why?
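
    A likely explanation and a minimal sketch of the usual workaround: the 'wildchar' key (<Tab> by default) deliberately does not trigger command-line completion inside a mapping; Vim provides the separate 'wildcharm' option for exactly that case, so the mapping has to use that key instead:

        " pick a key that triggers completion inside mappings (CTRL-Z is a common choice)
        set wildcharm=<C-z>
        " the mapping now opens :b completion just like a manual <Tab> would
        nnoremap <Leader>. :b <C-z>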

    Read the article

  • Debian - Can't stop MySQL; permissions?

    - by anon
    I just tried to upgrade from Debian squeeze to unstable by replacing 'squeeze' with 'unstable' in /etc/apt/sources.list. The upgrade went smoothly except for MySQL, which failed because the package scripts couldn't stop it. /etc/init.d/mysql stop simply reports that it failed, and if I try to get the status with /etc/init.d/mysql status it gives me this error: me@debian:~$ sudo /etc/init.d/mysql status /usr/bin/mysqladmin: connect to server at 'localhost' failed error: 'Access denied for user 'debian-sys-maint'@'localhost' (using password: YES)'. MySQL itself is running fine, and I checked the privileges for debian-sys-maint in phpMyAdmin: it's allowed to do everything, but can only connect from 'localhost'.
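
    The init script stops the server through the debian-sys-maint maintenance account, whose password lives in /etc/mysql/debian.cnf; after a dist-upgrade that password can get out of sync with the account inside MySQL. A minimal sketch of the usual repair for MySQL 5.x (the actual password must be copied from debian.cnf; the value below is a placeholder):

        # see what password the init script expects
        sudo grep password /etc/mysql/debian.cnf

        # make the MySQL account match it again
        mysql -u root -p
        mysql> GRANT ALL PRIVILEGES ON *.* TO 'debian-sys-maint'@'localhost'
            ->     IDENTIFIED BY 'password-from-debian.cnf' WITH GRANT OPTION;
        mysql> FLUSH PRIVILEGES;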

    Read the article

  • init.d service died

    - by jerluc
    Adapting some code from a Linux forum, I've added a service script to /etc/init.d on my Ubuntu Natty server to start/stop/restart node.js. It was literally working the first day I made it, but today, after viewing my website this morning, the server threw a 404, and upon further inspection the node.js process was gone. So I went to start the service again, only this time node.js didn't start at all, and ever since I haven't been able to get my service script working. Below is the entire script:

        #!/bin/sh
        #
        # Node Server Startup
        #
        case "$1" in
          start)
                echo -n "Starting node: "
                daemon node /usr/local/www/server.js
                echo
                touch /var/lock/subsys/node
                ;;
          stop)
                echo -n "Shutting down node: "
                killall node
                echo
                rm -f /var/lock/subsys/node
                rm -f /var/run/node.pid
                ;;
          status)
                status node
                ;;
          restart)
                $0 stop
                $0 start
                ;;
          reload)
                echo -n "Reloading node: "
                killall node -HUP
                echo
                ;;
          *)
                echo "Usage: $0 {start|stop|restart|reload|status}"
                exit 1
        esac
        exit 0

    Thanks for any help!
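
    One thing worth noting: the daemon and status helpers used in that script come from Red Hat's /etc/init.d/functions and are typically not available on Ubuntu, so the start branch may have nothing real to run. A minimal sketch of start/stop branches using Ubuntu's start-stop-daemon instead; the server.js path is taken from the question, and /usr/local/bin/node is an assumed location for the node binary:

        start)
              echo -n "Starting node: "
              start-stop-daemon --start --background --make-pidfile \
                    --pidfile /var/run/node.pid \
                    --exec /usr/local/bin/node -- /usr/local/www/server.js
              ;;
        stop)
              echo -n "Shutting down node: "
              start-stop-daemon --stop --pidfile /var/run/node.pid
              rm -f /var/run/node.pid
              ;;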

    Read the article

  • Can't connect to service on ubuntu?

    - by user36914
    I have a service I just installed on an Ubuntu workstation machine. Say it's running on port 511. I can connect locally: telnet localhost 511. When I try to connect from a remote machine, telnet 192.168.0.1 511 fails. What's weird is that when I try to connect locally using the IP address, I get the following error: "Unable to connect to remote host: Connection Refused". I checked the status of the firewall with sudo ufw status and get back "Status: inactive". So does anyone know why I can't connect remotely, given that the firewall is disabled, and why I can't connect locally using the machine's IP address? I don't know if this matters, but it's running under ESXi.
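
    Those symptoms (works on localhost, refused on the machine's own IP, firewall off) usually mean the service is bound only to 127.0.0.1 rather than to all interfaces. A quick sketch of how to confirm that, assuming the service really is on port 511:

        # show which address the listener is bound to
        sudo netstat -tlnp | grep :511

        # 127.0.0.1:511 -> loopback only; reconfigure the service to listen on 0.0.0.0
        # 0.0.0.0:511   -> all interfaces; the problem is elsewhere (routing, vSwitch, etc.)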

    Read the article

  • how to stop deferred emails

    - by Will K
    I have a Postfix mail gateway, and every other host is set to use this gateway as its relay. We have some automated outgoing emails sent from some hosts. I believe the gateway tries to send a deferred status notification back to the system that originated the mail, but that system is a null client, which sends but does not receive any email. Is there any way to stop sending the deferred status? e.g. postfix/smtp[35725]: 2F6A155C256: to=, relay=none, delay=260862, delays=260862/0.01/0/0, dsn=4.4.1, status=deferred (connect to orange.mydom.com[192.168.1.5]:25: Connection refused) Thanks
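
    That log line means the message itself is stuck in the gateway's deferred queue because orange.mydom.com refuses port 25, and Postfix will keep retrying (and eventually bouncing) until the queue lifetime expires. A sketch of the usual ways to deal with it - inspect/clear the queue and shorten how long Postfix keeps trying; the 1d value is only an example:

        # see what is sitting in the deferred queue
        postqueue -p

        # drop one stuck message by its queue ID, or everything that is deferred
        postsuper -d 2F6A155C256
        postsuper -d ALL deferred

        # give up (and stop generating delay/bounce notices) much sooner
        postconf -e 'maximal_queue_lifetime = 1d' 'bounce_queue_lifetime = 1d'
        postfix reload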

    Read the article

  • Primefaces p:fileupload component problem

    - by Nitesh Panchal
    Hello, I am using Primefaces 2.0.1 but the FileUpload component is not working properly. It uses JQuery uploadify behind the scenes. This is my web.xml <?xml version="1.0" encoding="UTF-8"?> <web-app version="3.0" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"> <filter> <filter-name>PrimeFaces FileUpload Filter</filter-name> <filter-class>org.primefaces.webapp.filter.FileUploadFilter</filter-class> </filter> <filter-mapping> <filter-name>PrimeFaces FileUpload Filter</filter-name> <servlet-name>Faces Servlet</servlet-name> </filter-mapping> <servlet> <servlet-name>Faces Servlet</servlet-name> <servlet-class>javax.faces.webapp.FacesServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>Faces Servlet</servlet-name> <url-pattern>*.jsf</url-pattern> </servlet-mapping> <servlet> <servlet-name>Resource Servlet</servlet-name> <servlet-class>org.primefaces.resource.ResourceServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>Resource Servlet</servlet-name> <url-pattern>/primefaces_resource/*</url-pattern> </servlet-mapping> <welcome-file-list> <welcome-file>index.jsf</welcome-file> </welcome-file-list> </web-app> This is my index.xhtml :- <?xml version='1.0' encoding='UTF-8' ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:h="http://java.sun.com/jsf/html" xmlns:p="http://primefaces.prime.com.tr/ui"> <h:head> <title>Facelet Title</title> </h:head> <h:body> <h:form prependId="false"> <h:commandButton actionListener="#{NewJSFManagedBean.add}" value="add"/> <p:fileUpload auto="false" widgetVar="fileUpl" fileUploadListener="#{NewJSFManagedBean.saveFile}"/> </h:form> </h:body> </html> I have following libraries in my classpath :- primefaces 2.0.1 commons-beanutils commons-beanutils-bean-collection commons-digestor commons-fileUpload commons-io commons-logging jhighlight The file gets correctly uploaded in /tmp but in browser it always says HTTP error. Please help me. It used to work till yesterday. But today i did a fresh installation of Glassfish and it has stopped working.

    Read the article

  • kSOAP2 and SOAPAction on Android

    - by Matze
    Hi everyone, I am trying to access a Webservice with kSOAP2 on an Android Phone. I think the connection is being established, but the server won't answer my request since I'm not providing a SOAP Action Header which seems to be required in SOAP Version 1.1(please correct me if I'm wrong here) which I have to use since the server does not support Version 1.2 . The concrete Faultcode which is returning in the request looks like this: faultactor null faultcode "S:Server" (id=830064966432) faultstring "String index out of range: -11" (id=830064966736) The errorcode which is generated on the server (I'm running it on a localhost) looks like this: 4.05.2010 20:20:29 com.sun.xml.internal.ws.transport.http.HttpAdapter fixQuotesAroundSoapAction WARNUNG: Received WS-I BP non-conformant Unquoted SoapAction HTTP header: http://server.contextlayer.bscwi.de/createContext 24.05.2010 20:20:29 com.sun.xml.internal.ws.server.sei.EndpointMethodHandler invoke SCHWERWIEGEND: String index out of range: -11 java.lang.StringIndexOutOfBoundsException: String index out of range: -11 at java.lang.String.substring(Unknown Source) at de.bscwi.contextlayer.xml.XmlValidator.isValid(XmlValidator.java:41) at de.bscwi.contextlayer.server.ContextWS.createContext(ContextWS.java:45) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at com.sun.xml.internal.ws.api.server.InstanceResolver$1.invoke(Unknown Source) at com.sun.xml.internal.ws.server.InvokerTube$2.invoke(Unknown Source) at com.sun.xml.internal.ws.server.sei.EndpointMethodHandler.invoke(Unknown Source) at com.sun.xml.internal.ws.server.sei.SEIInvokerTube.processRequest(Unknown Source) at com.sun.xml.internal.ws.api.pipe.Fiber.__doRun(Unknown Source) at com.sun.xml.internal.ws.api.pipe.Fiber._doRun(Unknown Source) at com.sun.xml.internal.ws.api.pipe.Fiber.doRun(Unknown Source) at com.sun.xml.internal.ws.api.pipe.Fiber.runSync(Unknown Source) at com.sun.xml.internal.ws.server.WSEndpointImpl$2.process(Unknown Source) at com.sun.xml.internal.ws.transport.http.HttpAdapter$HttpToolkit.handle(Unknown Source) at com.sun.xml.internal.ws.transport.http.HttpAdapter.handle(Unknown Source) at com.sun.xml.internal.ws.transport.http.server.WSHttpHandler.handleExchange(Unknown Source) at com.sun.xml.internal.ws.transport.http.server.WSHttpHandler.handle(Unknown Source) at com.sun.net.httpserver.Filter$Chain.doFilter(Unknown Source) at sun.net.httpserver.AuthFilter.doFilter(Unknown Source) at com.sun.net.httpserver.Filter$Chain.doFilter(Unknown Source) at sun.net.httpserver.ServerImpl$Exchange$LinkHandler.handle(Unknown Source) at com.sun.net.httpserver.Filter$Chain.doFilter(Unknown Source) at sun.net.httpserver.ServerImpl$Exchange.run(Unknown Source) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source) at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at java.lang.Thread.run(Unknown Source) The relevant part of the WSDL (at least that's what I'm thinking) looks like this: <operation name="createContext"> <soap:operation soapAction=""/> - <input> <soap:body use="literal" namespace="http://server.contextlayer.bscwi.de/"/> </input> - <output> <soap:body use="literal" namespace="http://server.contextlayer.bscwi.de/"/> </output> </operation> In my code I'm adding a Header, but it seems like I'm doing it wrong: private static final String SOAP_ACTION = 
""; //... SoapSerializationEnvelope soapEnvelope = new SoapSerializationEnvelope (SoapEnvelope.VER11); soapEnvelope.setOutputSoapObject(Request); AndroidHttpTransport aht = new AndroidHttpTransport (URL); //... aht.call(SOAP_ACTION, soapEnvelope); SoapPrimitive resultString = (SoapPrimitive) soapEnvelope.getResponse(); Any advice would be great since I'm running out of ideas.. Thanks folks!

    Read the article

  • Parameter index is out of range

    - by czuroski
    Hello, I am getting the following error when trying to update an object using nhibernate. I am attempting to update a field which is a foreign key. Any thoughts why I might be getting this error? I can't figure it out from that error and my log4net log doesn't give any hints either. Thanks System.IndexOutOfRangeException was unhandled by user code Message="Parameter index is out of range." Source="MySql.Data" StackTrace: at MySql.Data.MySqlClient.MySqlParameterCollection.CheckIndex(Int32 index) at MySql.Data.MySqlClient.MySqlParameterCollection.GetParameter(Int32 index) at System.Data.Common.DbParameterCollection.System.Collections.IList.get_Item(Int32 index) at NHibernate.Type.Int32Type.Set(IDbCommand rs, Object value, Int32 index) at NHibernate.Type.NullableType.NullSafeSet(IDbCommand cmd, Object value, Int32 index) at NHibernate.Type.NullableType.NullSafeSet(IDbCommand st, Object value, Int32 index, ISessionImplementor session) at NHibernate.Persister.Entity.AbstractEntityPersister.Dehydrate(Object id, Object[] fields, Object rowId, Boolean[] includeProperty, Boolean[][] includeColumns, Int32 table, IDbCommand statement, ISessionImplementor session, Int32 index) at NHibernate.Persister.Entity.AbstractEntityPersister.Update(Object id, Object[] fields, Object[] oldFields, Object rowId, Boolean[] includeProperty, Int32 j, Object oldVersion, Object obj, SqlCommandInfo sql, ISessionImplementor session) at NHibernate.Persister.Entity.AbstractEntityPersister.UpdateOrInsert(Object id, Object[] fields, Object[] oldFields, Object rowId, Boolean[] includeProperty, Int32 j, Object oldVersion, Object obj, SqlCommandInfo sql, ISessionImplementor session) at NHibernate.Persister.Entity.AbstractEntityPersister.Update(Object id, Object[] fields, Int32[] dirtyFields, Boolean hasDirtyCollection, Object[] oldFields, Object oldVersion, Object obj, Object rowId, ISessionImplementor session) at NHibernate.Action.EntityUpdateAction.Execute() at NHibernate.Engine.ActionQueue.Execute(IExecutable executable) at NHibernate.Engine.ActionQueue.ExecuteActions(IList list) at NHibernate.Engine.ActionQueue.ExecuteActions() at NHibernate.Event.Default.AbstractFlushingEventListener.PerformExecutions(IEventSource session) at NHibernate.Event.Default.DefaultFlushEventListener.OnFlush(FlushEvent event) at NHibernate.Impl.SessionImpl.Flush() at NHibernate.Transaction.AdoTransaction.Commit() at DataAccessLayer.NHibernateDataProvider.UpdateItem_temp(items_temp item_temp) in C:\Documents and Settings\user\My Documents\Visual Studio 2008\Projects\mySolution\DataAccessLayer\NHibernateDataProvider.cs:line 225 at InventoryDataClean.Controllers.ImportController.Edit(Int32 id, FormCollection formValues) in C:\Documents and Settings\user\My Documents\Visual Studio 2008\Projects\mySolution\InventoryDataClean\Controllers\ImportController.cs:line 101 at lambda_method(ExecutionScope , ControllerBase , Object[] ) at System.Web.Mvc.ActionMethodDispatcher.Execute(ControllerBase controller, Object[] parameters) at System.Web.Mvc.ReflectedActionDescriptor.Execute(ControllerContext controllerContext, IDictionary`2 parameters) at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary`2 parameters) at System.Web.Mvc.ControllerActionInvoker.<>c__DisplayClassa.<InvokeActionMethodWithFilters>b__7() at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodFilter(IActionFilter filter, ActionExecutingContext preContext, Func`1 continuation) InnerException: Here 
is my item mapping - <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="DataTransfer" namespace="DataTransfer"> <class name="DataTransfer.items_temp, DataTransfer" table="items_temp"> <id name="id" unsaved-value="any" > <generator class="assigned"/> </id> <property name="assetid"/> <property name="description"/> <property name="caretaker"/> <property name="category"/> <property name="status" /> <property name="vendor" /> <many-to-one name="statusName" class="status" column="status" /> </class> </hibernate-mapping> Here is my status mapping - <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="DataTransfer" namespace="DataTransfer"> <class name="DataTransfer.status, DataTransfer" table="status"> <id name="id" unsaved-value="0"> <generator class="assigned"/> </id> <property name="name"/> <property name="def"/> </class> </hibernate-mapping> and here is my update function - public void UpdateItem_temp(items_temp item_temp) { ITransaction t = _session.BeginTransaction(); try { _session.SaveOrUpdate(item_temp); t.Commit(); } catch (Exception) { t.Rollback(); throw; } finally { t.Dispose(); } }
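
    A sketch of the usual fix for this kind of IndexOutOfRange during an NHibernate update: the status column is mapped twice (once as the plain <property name="status"> and once as the column of the statusName many-to-one), so the generated UPDATE has fewer parameters than NHibernate tries to bind. Marking one of the two mappings as read-only normally resolves it - for example:

        <!-- keep the raw value readable, but let only the association write the column -->
        <property name="status" insert="false" update="false" />
        <many-to-one name="statusName" class="status" column="status" />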

    Read the article

  • HOWTO: disable jmx in activemq network of brokers (spring, xbean)

    - by subes
    Since I've struggled a lot with this problem, I am posting my solution. Disabling jmx in an activemq network of brokers removes race conditions about the registration of the jmx connector. When starting multiple activemq servers on the same machine: Failed to start jmx connector: Cannot bind to URL [rmi://localhost:1099/jmxrmi]: javax.naming.NameAlreadyBoundException: jmxrmi [Root exception is java.rmi.AlreadyBoundException: jmxrmi] Another problem with this is, that even if you don't cause a race condition, this exception can still occur. Even when starting one broker after another while waiting for them to initialize properly in between. If one process is run by root as the first instance and the other as a normal user, somehow the user process tries to register its own jmx connector, though there already is one. Or another exception which happens when the broker that successfully registered the jmx connector goes down: Failed to start jmx connector: Cannot bind to URL [rmi://localhost:1099/jmxrmi]: javax.naming.ServiceUnavailableException [Root exception is java.rmi.ConnectException: Connection refused to host: localhost; nested exception is: java.net.ConnectException: Connection refused] Those exceptions cause the network of brokers to stop working, or to not work at all. The trick to disable jmx was, that jmx had to be disabled in the connectionfactory aswell. The documentation http://activemq.apache.org/jmx.html does not say that this is needed explicitly. So I had to struggle for 2 days until I found the solution: <beans xmlns="http://www.springframework.org/schema/beans" xmlns:amq="http://activemq.apache.org/schema/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core-5.3.1.xsd"> <!-- Spring JMS Template --> <bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate"> <constructor-arg ref="connectionFactory" /> </bean> <!-- Caching, sodass das jms template überhaupt nutzbar ist in sachen performance --> <bean id="connectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory"> <constructor-arg ref="amqConnectionFactory" /> <property name="exceptionListener" ref="jmsExceptionListener" /> <property name="sessionCacheSize" value="1" /> </bean> <!-- Jeder Client verbindet sich mit seinem eigenen broker, broker sind untereinander vernetzt. Nur wenn hier nochmals jmx deaktiviert wird, bleibt es auch deaktiviert... --> <amq:connectionFactory id="amqConnectionFactory" brokerURL="vm://broker:default?useJmx=false" /> <!-- Broker suchen sich einen eigenen Port und sind gegenseitig verbunden, ergeben dadurch ein Grid. Dies zwar etwas langsamer, aber dafür ausfallsicherer. 
Siehe http://activemq.apache.org/networks-of-brokers.html --> <amq:broker useJmx="false" persistent="false"> <!-- Wird benötigt um JMX endgültig zu deaktivieren --> <amq:managementContext> <amq:managementContext connectorHost="localhost" createConnector="false" /> </amq:managementContext> <!-- Nun die normale Konfiguration für Network of Brokers --> <amq:networkConnectors> <amq:networkConnector networkTTL="1" duplex="true" dynamicOnly="true" uri="multicast://default" /> </amq:networkConnectors> <amq:persistenceAdapter> <amq:memoryPersistenceAdapter /> </amq:persistenceAdapter> <amq:transportConnectors> <amq:transportConnector uri="tcp://localhost:0" discoveryUri="multicast://default" /> </amq:transportConnectors> </amq:broker> With this, there is no need to specify -Dcom.sun.management.jmxremote=false for the jvm. Which somehow also didn't work for me, because the connectionfactory started the jmx connector.

    Read the article

  • Replace JBoss error page with Axis2 fault XML response

    - by Dario
    I'm developing a webservice with Axis2 1.4.1 on JBoss 4.2.3/Tomcat 5.5.27 and Java 1.5.0 (15-b04). It works flawlessly but when an exception happens I get a JBoss error 500 HTML page instead of an Axis2 XML/SOAP fault. This behavoir is vexing, because it difficults to handle errors in the webservice client or in SoapUI while developing. Can I change this to get the SOAP fault? Maybe it's just an Axis2 or JBoss parameter, but I didn't find any clue about. EDIT: Here goes the new stacktrace: [ERROR] WSDoAllReceiver: security processing failed org.apache.axis2.AxisFault: WSDoAllReceiver: security processing failed at org.apache.rampart.handler.WSDoAllReceiver.processBasic(WSDoAllReceiver.java:214) at org.apache.rampart.handler.WSDoAllReceiver.processMessage(WSDoAllReceiver.java:86) at org.apache.rampart.handler.WSDoAllHandler.invoke(WSDoAllHandler.java:72) at org.apache.axis2.engine.Phase.invoke(Phase.java:317) at org.apache.axis2.engine.AxisEngine.invoke(AxisEngine.java:264) at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:163) at org.apache.axis2.transport.http.HTTPTransportUtils.processHTTPPostRequest(HTTPTransportUtils.java:275) at org.apache.axis2.transport.http.AxisServlet.doPost(AxisServlet.java:133) at javax.servlet.http.HttpServlet.service(HttpServlet.java:647) at javax.servlet.http.HttpServlet.service(HttpServlet.java:729) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:269) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:172) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:117) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:108) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:174) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:875) at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java:665) at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:528) at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:81) at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:689) at java.lang.Thread.run(Thread.java:595) Caused by: org.apache.ws.security.WSSecurityException: The security token could not be authenticated or authorized at org.apache.ws.security.processor.UsernameTokenProcessor.handleUsernameToken(UsernameTokenProcessor.java:155) at org.apache.ws.security.processor.UsernameTokenProcessor.handleToken(UsernameTokenProcessor.java:53) at org.apache.ws.security.WSSecurityEngine.processSecurityHeader(WSSecurityEngine.java:311) at org.apache.ws.security.WSSecurityEngine.processSecurityHeader(WSSecurityEngine.java:228) at org.apache.rampart.handler.WSDoAllReceiver.processBasic(WSDoAllReceiver.java:211) ... 
23 more [ERROR] Servlet.service() para servlet AxisServlet lanzó excepción java.lang.NullPointerException at org.apache.rampart.RampartMessageData.<init>(RampartMessageData.java:308) at org.apache.rampart.MessageBuilder.build(MessageBuilder.java:61) at org.apache.rampart.handler.RampartSender.invoke(RampartSender.java:64) at org.apache.axis2.engine.Phase.invoke(Phase.java:317) at org.apache.axis2.engine.AxisEngine.invoke(AxisEngine.java:264) at org.apache.axis2.engine.AxisEngine.sendFault(AxisEngine.java:520) at org.apache.axis2.transport.http.AxisServlet.handleFault(AxisServlet.java:416) at org.apache.axis2.transport.http.AxisServlet.processAxisFault(AxisServlet.java:379) at org.apache.axis2.transport.http.AxisServlet.doPost(AxisServlet.java:167) at javax.servlet.http.HttpServlet.service(HttpServlet.java:647) at javax.servlet.http.HttpServlet.service(HttpServlet.java:729) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:269) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:172) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:117) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:108) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:174) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:875) at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java:665) at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:528) at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:81) at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:689) at java.lang.Thread.run(Thread.java:595) EDIT 2: After giving the bounty I found that I was wrong about 1.2.9-SNAPSHOT version of Axiom. I built it again, made sure the jars where correctly copied to lib directory and it worked! Finally, it was an Axiom bug, as said in the links provided by Vineet. Thanks!

    Read the article

  • CryptoExcercise Encryption/Decryption Problem

    - by venkat
    I am using apples "cryptoexcercise" (Security.Framework) in my application to encrypt and decrypt a data of numeric value. When I give the input 950,128 the values got encrypted, but it is not getting decrypted and exists with the encrypted value only. This happens only with the mentioned numeric values. Could you please check this issue and give the solution to solve this problem? here is my code (void)testAsymmetricEncryptionAndDecryption { uint8_t *plainBuffer; uint8_t *cipherBuffer; uint8_t *decryptedBuffer; const char inputString[] = "950"; int len = strlen(inputString); if (len > BUFFER_SIZE) len = BUFFER_SIZE-1; plainBuffer = (uint8_t *)calloc(BUFFER_SIZE, sizeof(uint8_t)); cipherBuffer = (uint8_t *)calloc(CIPHER_BUFFER_SIZE, sizeof(uint8_t)); decryptedBuffer = (uint8_t *)calloc(BUFFER_SIZE, sizeof(uint8_t)); strncpy( (char *)plainBuffer, inputString, len); NSLog(@"plain text : %s", plainBuffer); [self encryptWithPublicKey:(UInt8 *)plainBuffer cipherBuffer:cipherBuffer]; NSLog(@"encrypted data: %s", cipherBuffer); [self decryptWithPrivateKey:cipherBuffer plainBuffer:decryptedBuffer]; NSLog(@"decrypted data: %s", decryptedBuffer); free(plainBuffer); free(cipherBuffer); free(decryptedBuffer); } (void)encryptWithPublicKey:(uint8_t *)plainBuffer cipherBuffer:(uint8_t *)cipherBuffer { OSStatus status = noErr; size_t plainBufferSize = strlen((char *)plainBuffer); size_t cipherBufferSize = CIPHER_BUFFER_SIZE; NSLog(@"SecKeyGetBlockSize() public = %d", SecKeyGetBlockSize([self getPublicKeyRef])); // Error handling // Encrypt using the public. status = SecKeyEncrypt([self getPublicKeyRef], PADDING, plainBuffer, plainBufferSize, &cipherBuffer[0], &cipherBufferSize ); NSLog(@"encryption result code: %d (size: %d)", status, cipherBufferSize); NSLog(@"encrypted text: %s", cipherBuffer); } (void)decryptWithPrivateKey:(uint8_t *)cipherBuffer plainBuffer:(uint8_t *)plainBuffer { OSStatus status = noErr; size_t cipherBufferSize = strlen((char *)cipherBuffer); NSLog(@"decryptWithPrivateKey: length of buffer: %d", BUFFER_SIZE); NSLog(@"decryptWithPrivateKey: length of input: %d", cipherBufferSize); // DECRYPTION size_t plainBufferSize = BUFFER_SIZE; // Error handling status = SecKeyDecrypt([self getPrivateKeyRef], PADDING, &cipherBuffer[0], cipherBufferSize, &plainBuffer[0], &plainBufferSize ); NSLog(@"decryption result code: %d (size: %d)", status, plainBufferSize); NSLog(@"FINAL decrypted text: %s", plainBuffer); } (SecKeyRef)getPublicKeyRef { OSStatus sanityCheck = noErr; SecKeyRef publicKeyReference = NULL; if (publicKeyRef == NULL) { NSMutableDictionary *queryPublicKey = [[NSMutableDictionary alloc] init]; // Set the public key query dictionary. [queryPublicKey setObject:(id)kSecClassKey forKey:(id)kSecClass]; [queryPublicKey setObject:publicTag forKey:(id)kSecAttrApplicationTag]; [queryPublicKey setObject:(id)kSecAttrKeyTypeRSA forKey:(id)kSecAttrKeyType]; [queryPublicKey setObject:[NSNumber numberWithBool:YES] forKey:(id)kSecReturnRef]; // Get the key. sanityCheck = SecItemCopyMatching((CFDictionaryRef)queryPublicKey, (CFTypeRef *)&publicKeyReference); if (sanityCheck != noErr) { publicKeyReference = NULL; } [queryPublicKey release]; } else { publicKeyReference = publicKeyRef; } return publicKeyReference; } (SecKeyRef)getPrivateKeyRef { OSStatus resultCode = noErr; SecKeyRef privateKeyReference = NULL; if(privateKeyRef == NULL) { NSMutableDictionary * queryPrivateKey = [[NSMutableDictionary alloc] init]; // Set the private key query dictionary. 
[queryPrivateKey setObject:(id)kSecClassKey forKey:(id)kSecClass]; [queryPrivateKey setObject:privateTag forKey:(id)kSecAttrApplicationTag]; [queryPrivateKey setObject:(id)kSecAttrKeyTypeRSA forKey:(id)kSecAttrKeyType]; [queryPrivateKey setObject:[NSNumber numberWithBool:YES] forKey:(id)kSecReturnRef]; // Get the key. resultCode = SecItemCopyMatching((CFDictionaryRef)queryPrivateKey, (CFTypeRef *)&privateKeyReference); NSLog(@"getPrivateKey: result code: %d", resultCode); if(resultCode != noErr) { privateKeyReference = NULL; } [queryPrivateKey release]; } else { privateKeyReference = privateKeyRef; } return privateKeyReference; }
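
    One detail in decryptWithPrivateKey looks like the likely culprit: cipherBufferSize is computed with strlen(), but RSA ciphertext is binary and can contain 0x00 bytes, so for some plaintexts (apparently "950" here) the length gets truncated and SecKeyDecrypt fails. A sketch of the change, assuming the ciphertext is exactly one key block as in the sample:

        // the ciphertext length must be carried over from encryption,
        // not recomputed with strlen() on binary data
        size_t cipherBufferSize = SecKeyGetBlockSize([self getPrivateKeyRef]);
        size_t plainBufferSize  = BUFFER_SIZE;

        status = SecKeyDecrypt([self getPrivateKeyRef], PADDING,
                               &cipherBuffer[0], cipherBufferSize,
                               &plainBuffer[0], &plainBufferSize);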

    Read the article

  • Smart auto detect and replace URLs with anchor tags

    - by Robert Koritnik
    I've written a regular expression that automatically detects URLs in free text that users enter. This is not such a simple task as it may seem at first. Jeff Atwood writes about it in his post. His regular expression works, but needs extra code after detection is done. I've managed to write a regular expression that does everything in a single go. This is how it looks like (I've broken it down into separate lines to make it more understandable what it does): 1 (?<outer>\()? 2 (?<scheme>http(?<secure>s)?://)? 3 (?<url> 4 (?(scheme) 5 (?:www\.)? 6 | 7 www\. 8 ) 9 [a-z0-9] 10 (?(outer) 11 [-a-z0-9/+&@#/%?=~_()|!:,.;cšžcd]+(?=\)) 12 | 13 [-a-z0-9/+&@#/%?=~_()|!:,.;cšžcd]+ 14 ) 15 ) 16 (?<ending>(?(outer)\))) As you may see, I'm using named capture groups (used later in Regex.Replace()) and I've also included some local characters (cšžcd), that allow our localised URL to be parsed as well. You can easily omit them if you'd like. Anyway. Here's what it does (referring to line numbers): 1 - detects if URL starts with open braces (is contained inside braces) and stores it in "outer" named capture group 2 - checks if it starts with URL scheme also detecting whether scheme is SSL or not 3 - starts parsing URL itself (will store it in "url" named capture group) 4-8 - if statement that says: if "sheme" was present then www. part is optional, otherwise mandatory for a string to be a link (so this regular expression detects all strings that start with either http or www) 9 - first character after http:// or www. should be either a letter or a number (this can be extended if you would like to cover even more links, but I've decided to omit other characters because I can't remember a link that would start with some other character 10-14 - if statement that says: if "outer" (braces) was present capture everything up to the last closing braces otherwise capture all 15 - closes the named capture group for URL 16 - if open braces was present, capture closing braces as well and store it in "ending" named capture group First and last line used to have \s* in them as well, so user could also write open braces and put a space inside before pasting link. Anyway. My code that does link replacement with actual anchor HTML elements looks exactly like this: value = Regex.Replace( value, @"(?<outer>\()?(?<scheme>http(?<secure>s)?://)?(?<url>(?(scheme)(?:www\.)?|www\.)[a-z0-9](?(outer)[-a-z0-9/+&@#/%?=~_()|!:,.;cšžcd]+(?=\))|[-a-z0-9/+&@#/%?=~_()|!:,.;cšžcd]+))(?<ending>(?(outer)\)))", "${outer}<a href=\"http${secure}://${url}\">http${secure}://${url}</a>${ending}", RegexOptions.Compiled | RegexOptions.CultureInvariant | RegexOptions.IgnoreCase); As you can see I'm using named capture groups to replace link with an Anchor tag: ${outer}<a href=\"http${secure}://${url}\">http${secure}://${url}</a>${ending} I could as well omit the http(s) part in anchor display to make links look friendlier, but for now I decided not to. Question I would like for my links to be replaced with shortenings as well. So when user copies a very long links (for instance if they would copy a link from google maps that usually generates long links I would like to shorten the visible part of the anchor tag. Link would work, but visible part of an anchor tag would be shortened to some number of characters. Does the replace string support notations like that so I can stil use a singe Regex.Replace() call?
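
    A plain substitution string cannot shorten the visible text, but the same single Regex.Replace() call can take a MatchEvaluator delegate instead of the replacement pattern, which makes the trimming straightforward. A sketch using the named groups above; "pattern" stands for the URL-detecting expression already shown, and the 50-character limit is arbitrary:

        value = Regex.Replace(
            value,
            pattern,   // the same URL-detecting pattern as above
            m =>
            {
                string url = "http" + m.Groups["secure"].Value + "://" + m.Groups["url"].Value;
                string text = url.Length > 50 ? url.Substring(0, 47) + "..." : url;
                return m.Groups["outer"].Value +
                       "<a href=\"" + url + "\">" + text + "</a>" +
                       m.Groups["ending"].Value;
            },
            RegexOptions.Compiled | RegexOptions.CultureInvariant | RegexOptions.IgnoreCase);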

    Read the article

  • news feed using .Net Dataservices / OData / Atom ?

    - by Stephan
    Let's say I have an web CMS type application, and an EDM model with an entity called 'article', and I need to offer the ability for client applications, to a read/query the articles (and other resources stored in our database) a straightforward syndication feed of these articles to end users (along the lines of a simple RSS feed) It seems to me that for the first task, and .net 4's dataservice would be perfect for the job. For the second case, I'm wondering (a) whether atom the right format to choose - I think it is - and (b) whether it's possible to achieve such a feed using the same ado.net OData service. I took a look at some of the examples out there and briefly set up a proof of concept: http://localhost/projectname/DataService.svc/Articles <?xml version="1.0" encoding="utf-8" standalone="yes"?> <feed xml:base="http://localhost/projectname/DataService.svc/" xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns="http://www.w3.org/2005/Atom"> <title type="text">Articles</title> <id>http://localhost/projectname/DataService.svc/Articles</id> <updated>2010-05-21T09:41:22Z</updated> <link rel="self" title="Articles" href="Articles" /> <entry> <id>http://---------DataService.svc/Articles(1)</id> <title type="text"></title> <updated>2010-05-21T09:41:22Z</updated> <author> <name /> </author> <link rel="edit" title="Article" href="Articles(1)" /> <category term="Model1.Article" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" /> <content type="application/xml"> <m:properties> <d:int_ContentID m:type="Edm.Int32">1</d:int_ContentID> <d:Titel>hello world</d:Titel> <d:Source>http://www.google.com</d:Source> </m:properties> </content> </entry> </feed> and noticed that, though the feed works and items are showing up, the title tag on the entry level is left blank. (as a result, when you check this feed in a feed reader, you will see no title). I searched msdn but haven't found a way to do that, but it should be possible. Stackoverflow itself uses an atom feed in that fashion, so it should be possible. Right? So I suppose my question is; Is there a way to make the ado.net dataservice Atom feed look like something suitable for your average news feed reader? - OR, am I using the wrong tool for the wrong purposes, and should I be looking elsewhere (.net syndication API's perhaps)?
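
    Regarding the blank <title>: WCF Data Services can map entity properties into Atom elements ("friendly feeds" / feed customization), which is how a title such as the Titel property ends up where feed readers expect it. A sketch of what that could look like in the CSDL of the EDMX, assuming the entity and property names from the sample feed (m is the usual ado.net metadata namespace, http://schemas.microsoft.com/ado/2007/08/dataservices/metadata):

        <EntityType Name="Article">
          <Key><PropertyRef Name="int_ContentID" /></Key>
          <Property Name="int_ContentID" Type="Int32" Nullable="false" />
          <Property Name="Titel" Type="String"
                    m:FC_TargetPath="SyndicationTitle"
                    m:FC_ContentKind="text"
                    m:FC_KeepInContent="true" />
          <Property Name="Source" Type="String" />
        </EntityType>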

    Read the article

  • Google Analytics V2 SDK for Android EasyTracker giving errors

    - by Prince
    I have followed the tutorial for the new Google Analytics V2 SDK for Android located here: https://developers.google.com/analytics/devguides/collection/android/v2/ Unfortunately whenever I go to run the application the reporting is not working and this is the messages that logcat gives me: 07-09 09:13:16.978: W/Ads(13933): No Google Analytics: Library Incompatible. 07-09 09:13:16.994: I/Ads(13933): To get test ads on this device, call adRequest.addTestDevice("2BB916E1BD6BE6407582A429D763EC71"); 07-09 09:13:17.018: I/Ads(13933): adRequestUrlHtml: <html><head><script src="http://media.admob.com/sdk-core-v40.js"></script><script>AFMA_getSdkConstants();AFMA_buildAdURL({"kw":[],"preqs":0,"session_id":"7925570029955749351","u_sd":2,"seq_num":"1","slotname":"a14fd91432961bd","u_w":360,"msid":"com.mysampleapp.sampleapp","js":"afma-sdk-a-v6.0.1","mv":"8013013.com.android.vending","isu":"2BB916E1BD6BE6407582A429D763EC71","cipa":1,"format":"320x50_mb","net":"wi","app_name":"1.android.com.mysampleapp.sampleapp","hl":"en","u_h":592,"carrier":"311480","ptime":0,"u_audio":3});</script></head><body></body></html> 07-09 09:13:17.041: W/ActivityManager(220): Unable to start service Intent { act=com.google.android.gms.analytics.service.START (has extras) }: not found 07-09 09:13:17.049: W/GAV2(13933): Thread[main,5,main]: Connection to service failed 1 07-09 09:13:17.057: W/GAV2(13933): Thread[main,5,main]: Need to call initializea() and be in fallback mode to start dispatch. 07-09 09:13:17.088: D/libEGL(13933): loaded /system/lib/egl/libGLES_android.so 07-09 09:13:17.096: D/libEGL(13933): loaded /vendor/lib/egl/libEGL_POWERVR_SGX540_120.so 07-09 09:13:17.096: D/libEGL(13933): loaded /vendor/lib/egl/libGLESv1_CM_POWERVR_SGX540_120.so 07-09 09:13:17.096: D/libEGL(13933): loaded /vendor/lib/egl/libGLESv2_POWERVR_SGX540_120.so Here is my code (I have redacted some of the code that had to do with httppost, etc.): package com.mysampleapp.sampleapp; import java.io.BufferedReader; import java.io.InputStream; import java.io.InputStreamReader; import java.util.ArrayList; import java.util.List; import org.apache.http.HttpEntity; import org.apache.http.HttpResponse; import org.apache.http.NameValuePair; import org.apache.http.client.HttpClient; import org.apache.http.client.entity.UrlEncodedFormEntity; import org.apache.http.client.methods.HttpPost; import org.apache.http.impl.client.DefaultHttpClient; import org.apache.http.message.BasicNameValuePair; import org.json.JSONArray; import org.json.JSONObject; import com.google.analytics.tracking.android.EasyTracker; import android.app.Activity; import android.app.ProgressDialog; import android.content.DialogInterface; import android.content.DialogInterface.OnCancelListener; import android.content.Intent; import android.content.SharedPreferences; import android.os.AsyncTask; import android.os.Bundle; import android.preference.PreferenceManager; import android.util.Log; import android.view.View; import android.view.View.OnClickListener; import android.widget.ImageView; import android.widget.TextView; public class viewRandom extends Activity { @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.viewrandom); uservote.setVisibility(View.GONE); new randomViewClass().execute(); } public void onStart() { super.onStart(); EasyTracker.getInstance().activityStart(this); } public void onStop() { super.onStop(); EasyTracker.getInstance().activityStop(this); } }
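
    The "Connection to service failed" / fallback warnings are largely expected on devices without the Play Services analytics service (the v2 SDK falls back to local dispatching), but EasyTracker still needs a res/values/analytics.xml to report anything at all. A minimal sketch of that file; the tracking ID is a placeholder:

        <?xml version="1.0" encoding="utf-8"?>
        <resources>
            <!-- replace with the property ID from the Analytics admin console -->
            <string name="ga_trackingId">UA-XXXXXXXX-Y</string>
            <bool name="ga_autoActivityTracking">true</bool>
            <bool name="ga_reportUncaughtExceptions">true</bool>
            <!-- dispatch hits every 30 seconds while debugging -->
            <integer name="ga_dispatchPeriod">30</integer>
        </resources>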

    Read the article

  • JSON posting, am i pushing JSON too far?

    - by joe90
    Im just wondering if I am pushing JSON too far? and if anyone has hit this before? I have a xml file: <?xml version="1.0" encoding="UTF-8"?> <customermodel:Customer xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:customermodel="http://customermodel" xmlns:personal="http://customermodel/personal" id="1" age="1" name="Joe"> <bankAccounts xsi:type="customermodel:BankAccount" accountNo="10" bankName="HSBC" testBoolean="true" testDate="2006-10-23" testDateTime="2006-10-23T22:15:01+08:00" testDecimal="20.2" testTime="22:15:01+08:00"> <count>0</count> <bankAddressLine>HSBC</bankAddressLine> <bankAddressLine>London</bankAddressLine> <bankAddressLine>31 florence</bankAddressLine> <bankAddressLine>Swindon</bankAddressLine> </bankAccounts> </customermodel:Customer> Which contains elements and attributes.... Which when i convert to JSON gives me: {"customermodel:Customer":{"id":"1","name":"Joe","age":"1","xmlns:xsi":"http://www.w3.org/2001/XMLSchema-instance","bankAccounts":{"testDate":"2006-10-23","testDecimal":"20.2","count":"0","testDateTime":"2006-10-23T22:15:01+08:00","bankAddressLine":["HSBC","London","31 florence","Swindon"],"testBoolean":"true","bankName":"HSBC","accountNo":"10","xsi:type":"customermodel:BankAccount","testTime":"22:15:01+08:00"},"xmlns:personal":"http://customermodel/personal","xmlns:customermodel":"http://customermodel"}} So then i send this too the client.. which coverts to a js object (or whatever) edits some values (the elements) and then sends it back to the server. So i get the JSON string, and convert this back into XML: <customermodel:Customer> <id>1</id> <age>1</age> <name>Joe</name> <xmlns:xsi>http://www.w3.org/2001/XMLSchema-instance</xmlns:xsi> <bankAccounts> <testDate>2006-10-23</testDate> <testDecimal>20.2</testDecimal> <testDateTime>2006-10-23T22:15:01+08:00</testDateTime> <count>0</count> <bankAddressLine>HSBC</bankAddressLine> <bankAddressLine>London</bankAddressLine> <bankAddressLine>31 florence</bankAddressLine> <bankAddressLine>Swindon</bankAddressLine> <accountNo>10</accountNo> <bankName>HSBC</bankName> <testBoolean>true</testBoolean> <xsi:type>customermodel:BankAccount</xsi:type> <testTime>22:15:01+08:00</testTime> </bankAccounts> <xmlns:personal>http://customermodel/personal</xmlns:personal> <xmlns:customermodel>http://customermodel</xmlns:customermodel> </customermodel:Customer> And there is the problem, is doesn't seem to know the difference between elements/attributes so i can not check against a XSD to check this is now valid? Is there a solution to this? I cannot be the first to hit this problem?
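
    Not a fix for the converter shown, but for reference: the usual way JSON preserves the element/attribute distinction is a convention such as BadgerFish (used by libraries like Jettison), where attributes are prefixed with "@" and text content goes under "$", so the round trip back to XML stays unambiguous. A sketch of the same bank account under that convention:

        "bankAccounts": {
            "@xsi:type": "customermodel:BankAccount",
            "@accountNo": "10",
            "@bankName": "HSBC",
            "count": { "$": "0" },
            "bankAddressLine": [ { "$": "HSBC" }, { "$": "London" } ]
        }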

    Read the article

  • MVC SiteMap - when different nodes point to same action SiteMap.CurrentNode does not map to the correct route

    - by awrigley
    Setup: I am using ASP.NET MVC 4, with mvcSiteMapProvider to manage my menus. I have a custom menu builder that evaluates whether a node is on the current branch (ie, if the SiteMap.CurrentNode is either the CurrentNode or the CurrentNode is nested under it). The code is included below, but essentially checks the url of each node and compares it with the url of the currentnode, up through the currentnodes "family tree". The CurrentBranch is used by my custom menu builder to add a class that highlights menu items on the CurrentBranch. The Problem: My custom menu works fine, but I have found that the mvcSiteMapProvider does not seem to evaluate the url of the CurrentNode in a consistent manner: When two nodes point to the same action and are distinguished only by a parameter of the action, SiteMap.CurrentNode does not seem to use the correct route (it ignores the distinguishing parameter and defaults to the first route that that maps to the action defined in the node). Example of the Problem: In an app I have Members. A Member has a MemberStatus field that can be "Unprocessed", "Active" or "Inactive". To change the MemberStatus, I have a ProcessMemberController in an Area called Admin. The processing is done using the Process action on the ProcessMemberController. My mvcSiteMap has two nodes that BOTH map to the Process action. The only difference between them is the alternate parameter (such are my client's domain semantics), that in one case has a value of "Processed" and in the other "Unprocessed": Nodes: <mvcSiteMapNode title="Process" area="Admin" controller="ProcessMembers" action="Process" alternate="Unprocessed" /> <mvcSiteMapNode title="Change Status" area="Admin" controller="ProcessMembers" action="Process" alternate="Processed" /> Routes: The corresponding routes to these two nodes are (again, the only thing that distinguishes them is the value of the alternate parameter): context.MapRoute( "Process_New_Members", "Admin/Unprocessed/Process/{MemberId}", new { controller = "ProcessMembers", action = "Process", alternate="Unprocessed", MemberId = UrlParameter.Optional } ); context.MapRoute( "Change_Status_Old_Members", "Admin/Members/Status/Change/{MemberId}", new { controller = "ProcessMembers", action = "Process", alternate="Processed", MemberId = UrlParameter.Optional } ); What works: The Html.ActionLink helper uses the routes and produces the urls I expect: @Html.ActionLink("Process", MVC.Admin.ProcessMembers.Process(item.MemberId, "Unprocessed") // Output (alternate="Unprocessed" and item.MemberId = 12): Admin/Unprocessed/Process/12 @Html.ActionLink("Status", MVC.Admin.ProcessMembers.Process(item.MemberId, "Processed") // Output (alternate="Processed" and item.MemberId = 23): Admin/Members/Status/Change/23 In both cases the output is correct and as I expect. What doesn't work: Let's say my request involves the second option, ie, /Admin/Members/Status/Change/47, corresponding to alternate = "Processed" and a MemberId of 47. Debugging my static CurrentBranch property (see below), I find that SiteMap.CurrentNode shows: PreviousSibling: null Provider: {MvcSiteMapProvider.DefaultSiteMapProvider} ReadOnly: false ResourceKey: "" Roles: Count = 0 RootNode: {Home} Title: "Process" Url: "/Admin/Unprocessed/Process/47" Ie, for a request url of /Admin/Members/Status/Change/47, SiteMap.CurrentNode.Url evaluates to /Admin/Unprocessed/Process/47. Ie, it is ignorning the alternate parameter and using the wrong route. CurrentBranch Static Property: /// <summary> /// ReadOnly. 
Gets the Branch of the Site Map that holds the SiteMap.CurrentNode /// </summary> public static List<SiteMapNode> CurrentBranch { get { List<SiteMapNode> currentBranch = null; if (currentBranch == null) { SiteMapNode cn = SiteMap.CurrentNode; SiteMapNode n = cn; List<SiteMapNode> ln = new List<SiteMapNode>(); if (cn != null) { while (n != null && n.Url != SiteMap.RootNode.Url) { // I don't need to check for n.ParentNode == null // because cn != null && n != SiteMap.RootNode ln.Add(n); n = n.ParentNode; } // the while loop excludes the root node, so add it here // I could add n, that should now be equal to SiteMap.RootNode, but this is clearer ln.Add(SiteMap.RootNode); // The nodes were added in reverse order, from the CurrentNode up, so reverse them. ln.Reverse(); } currentBranch = ln; } return currentBranch; } } The Question: What am I doing wrong? The routes are interpreted by Html.ActionLlink as I expect, but are not evaluated by SiteMap.CurrentNode as I expect. In other words, in evaluating my routes, SiteMap.CurrentNode ignores the distinguishing alternate parameter.

    Read the article

  • jqGrid multiplesearch - How do I add and/or column for each row?

    - by jimasp
    I have a jqGrid multiple search dialog as below: (Note the first empty td in each row). What I want is a search grid like this (below), where: The first td is filled in with "and/or" accordingly. The corresponding search filters are also built that way. The empty td is there by default, so I assume that this is a standard jqGrid feature that I can turn on, but I can't seem to find it in the docs. My code looks like this: <script type="text/javascript"> $(document).ready(function () { var lastSel; var pageSize = 10; // the initial pageSize $("#grid").jqGrid({ url: '/Ajax/GetJqGridActivities', editurl: '/Ajax/UpdateJqGridActivities', datatype: "json", colNames: [ 'Id', 'Description', 'Progress', 'Actual Start', 'Actual End', 'Status', 'Scheduled Start', 'Scheduled End' ], colModel: [ { name: 'Id', index: 'Id', searchoptions: { sopt: ['eq', 'ne']} }, { name: 'Description', index: 'Description', searchoptions: { sopt: ['eq', 'ne']} }, { name: 'Progress', index: 'Progress', editable: true, searchoptions: { sopt: ['eq', 'ne', "gt", "ge", "lt", "le"]} }, { name: 'ActualStart', index: 'ActualStart', editable: true, searchoptions: { sopt: ['eq', 'ne', "gt", "ge", "lt", "le"]} }, { name: 'ActualEnd', index: 'ActualEnd', editable: true, searchoptions: { sopt: ['eq', 'ne', "gt", "ge", "lt", "le"]} }, { name: 'Status', index: 'Status.Name', editable: true, searchoptions: { sopt: ['eq', 'ne']} }, { name: 'ScheduledStart', index: 'ScheduledStart', searchoptions: { sopt: ['eq', 'ne', "gt", "ge", "lt", "le"]} }, { name: 'ScheduledEnd', index: 'ScheduledEnd', searchoptions: { sopt: ['eq', 'ne', "gt", "ge", "lt", "le"]} } ], jsonReader: { root: 'rows', id: 'Id', repeatitems: false }, rowNum: pageSize, // this is the pageSize rowList: [pageSize, 50, 100], pager: '#pager', sortname: 'Id', autowidth: true, shrinkToFit: false, viewrecords: true, sortorder: "desc", caption: "JSON Example", onSelectRow: function (id) { if (id && id !== lastSel) { $('#grid').jqGrid('restoreRow', lastSel); $('#grid').jqGrid('editRow', id, true); lastSel = id; } } }); // change the options (called via searchoptions) var updateGroupOpText = function ($form) { $('select.opsel option[value="AND"]', $form[0]).text('and'); $('select.opsel option[value="OR"]', $form[0]).text('or'); $('select.opsel', $form[0]).trigger('change'); }; // $(window).bind('resize', function() { // ("#grid").setGridWidth($(window).width()); //}).trigger('resize'); // paging bar settings (inc. buttons) // and advanced search $("#grid").jqGrid('navGrid', '#pager', { edit: true, add: false, del: false }, // buttons {}, // edit option - these are ordered params! {}, // add options {}, // del options {closeOnEscape: true, multipleSearch: true, closeAfterSearch: true, onInitializeSearch: updateGroupOpText, afterRedraw: updateGroupOpText}, // search options {} ); // TODO: work out how to use global $.ajaxSetup instead $('#grid').ajaxError(function (e, jqxhr, settings, exception) { var str = "Ajax Error!\n\n"; if (jqxhr.status === 0) { str += 'Not connect.\n Verify Network.'; } else if (jqxhr.status == 404) { str += 'Requested page not found. 
[404]'; } else if (jqxhr.status == 500) { str += 'Internal Server Error [500].'; } else if (exception === 'parsererror') { str += 'Requested JSON parse failed.'; } else if (exception === 'timeout') { str += 'Time out error.'; } else if (exception === 'abort') { str += 'Ajax request aborted.'; } else { str += 'Uncaught Error.\n' + jqxhr.responseText; } alert(str); }); $("#grid").jqGrid('bindKeys'); $("#grid").jqGrid('inlineNav', "#grid"); }); </script> <table id="grid"/> <div id="pager"/>

    Read the article

  • Advanced Regex: Smart auto detect and replace URLs with anchor tags

    - by Robert Koritnik
    I've written a regular expression that automatically detects URLs in free text that users enter. This is not such a simple task as it may seem at first. Jeff Atwood writes about it in his post. His regular expression works, but needs extra code after detection is done. I've managed to write a regular expression that does everything in a single go. This is how it looks like (I've broken it down into separate lines to make it more understandable what it does): 1 (?<outer>\()? 2 (?<scheme>http(?<secure>s)?://)? 3 (?<url> 4 (?(scheme) 5 (?:www\.)? 6 | 7 www\. 8 ) 9 [a-z0-9] 10 (?(outer) 11 [-a-z0-9/+&@#/%?=~_()|!:,.;cšžcd]+(?=\)) 12 | 13 [-a-z0-9/+&@#/%?=~_()|!:,.;cšžcd]+ 14 ) 15 ) 16 (?<ending>(?(outer)\))) As you may see, I'm using named capture groups (used later in Regex.Replace()) and I've also included some local characters (cšžcd), that allow our localised URLs to be parsed as well. You can easily omit them if you'd like. Anyway. Here's what it does (referring to line numbers): 1 - detects if URL starts with open braces (is contained inside braces) and stores it in "outer" named capture group 2 - checks if it starts with URL scheme also detecting whether scheme is SSL or not 3 - start parsing URL itself (will store it in "url" named capture group) 4-8 - if statement that says: if "sheme" was present then www. part is optional, otherwise mandatory for a string to be a link (so this regular expression detects all strings that start with either http or www) 9 - first character after http:// or www. should be either a letter or a number (this can be extended if you'd like to cover even more links, but I've decided not to because I can't think of a link that would start with some obscure character) 10-14 - if statement that says: if "outer" (braces) was present capture everything up to the last closing braces otherwise capture all 15 - closes the named capture group for URL 16 - if open braces were present, capture closing braces as well and store it in "ending" named capture group First and last line used to have \s* in them as well, so user could also write open braces and put a space inside before pasting link. Anyway. My code that does link replacement with actual anchor HTML elements looks exactly like this: value = Regex.Replace( value, @"(?<outer>\()?(?<scheme>http(?<secure>s)?://)?(?<url>(?(scheme)(?:www\.)?|www\.)[a-z0-9](?(outer)[-a-z0-9/+&@#/%?=~_()|!:,.;cšžcd]+(?=\))|[-a-z0-9/+&@#/%?=~_()|!:,.;cšžcd]+))(?<ending>(?(outer)\)))", "${outer}<a href=\"http${secure}://${url}\">http${secure}://${url}</a>${ending}", RegexOptions.Compiled | RegexOptions.CultureInvariant | RegexOptions.IgnoreCase); As you can see I'm using named capture groups to replace link with an Anchor tag: "${outer}<a href=\"http${secure}://${url}\">http${secure}://${url}</a>${ending}" I could as well omit the http(s) part in anchor display to make links look friendlier, but for now I decided not to. Question I would like my links to be replaced with shortenings as well. So when user copies a very long link (for instance if they would copy a link from google maps that usually generates long links) I would like to shorten the visible part of the anchor tag. Link would work, but visible part of an anchor tag would be shortened to some number of characters. I could as well append ellipsis at the end of at all possible (and make things even more perfect). Does Regex.Replace() method support replacement notations so that I can still use a single call? 
Something similar to what the string.Format() method does when you'd like to format values as strings (decimals, dates, etc.).

    Read the article

  • Geolocation through Android's GPS Provider on a website?

    - by Corey Ogburn
    I'm trying to get the geolocation of the mobile device in a regular website, not a webview of an application or anything native like that. I'm getting a location, but it's highly inaccurate, the accuracy comes back as 3230 or some other outrageous number. I'm assuming that's in meters, either way it's not nearly accurate enough. By comparison, the same webpage on a laptop gets an accuracy of 30-40. My first thought was that it was using the Network Provider instead of the GPS Provider, telling me where I am based on tower location and reach. A little research later I found enableHighAccuracy and set it true in the options that I pass. After including that, I still notice no difference. Here's the test page's HTML/javascript: <html> <head> <script type="text/javascript" src="http://ecn.dev.virtualearth.net/mapcontrol/mapcontrol.ashx?v=7.0"></script> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.3/jquery.min.js"></script> <script type="text/javascript"> function OnLoad() { $("#Status").text("Init"); if (navigator.geolocation) { $("#Status").text("Supports Geolocation"); navigator.geolocation.getCurrentPosition(HandleLocation, LocationError, { enableHighAccuracy: true }); $("#Status").text("Sent position request..."); } else { $("#Status").text("Doesn't support geolocation"); } } function HandleLocation(position) { $("#Status").text("Received response:"); $("#Position").text("(" + position.coords.latitude + ", " + position.coords.longitude + ") accuracy: " + position.coords.accuracy); var loc = new Microsoft.Maps.Location(position.coords.latitude, position.coords.longitude); GetMap(loc); } function LocationError(error) { switch(error.code) { case error.PERMISSION_DENIED: alert("Location not provided"); break; case error.POSITION_UNAVAILABLE: alert("Current location not available"); break; case error.TIMEOUT: alert("Timeout"); break; default: alert("unknown error"); break; } } function GetMap(loc) { var map = new Microsoft.Maps.Map(document.getElementById("mapDiv"), {credentials: "Aj59meaCR1e7rNgkfQy7j08Pd3mzfP1r04hGesGmLe2a3ZwZ3iGecwPX2SNPWq5a", center: loc, mapTypeId: Microsoft.Maps.MapTypeId.road, zoom: 15}); } </script> </head> <body onload="javascript:OnLoad()"> <div id="Status"></div> <div id="Position"></div><br/> <div id='mapDiv' style="position:relative; width:600px; height:400px;"></div> </body> </html> I'm testing this on a rooted MyTouch 3G running Cyanogen 6.1 stable, Android 2.2 and GPS is enabled. In case rooting was a problem, I have also had various friends and coworkers try the webpage on their non-rooted 2.0+ Android devices. Each phone had various effects on the accuracy, but none were better than 1000, I attribute this to the different carriers. I have not (but eventually will) tested with iPhone or other location-aware cell phones.
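
    A sketch of one approach that tends to improve browser geolocation accuracy on phones: a single getCurrentPosition() call often returns a cached, cell-tower-based fix before the GPS has locked on, whereas watchPosition() keeps delivering updates as the fix improves. This reuses the HandleLocation/LocationError callbacks from the page above; the 100 m threshold is arbitrary:

        var watchId = navigator.geolocation.watchPosition(function (position) {
            // keep waiting until the fix is good enough, then stop watching
            if (position.coords.accuracy <= 100) {
                navigator.geolocation.clearWatch(watchId);
                HandleLocation(position);
            }
        }, LocationError, { enableHighAccuracy: true, maximumAge: 0, timeout: 60000 });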

    Read the article

  • Can I create an XML document that specifies elements from 2 nested XSDs without using prefixes?

    - by TweeZz
    I have 2 XSDs which are nested: DefaultSchema.xsd: <?xml version="1.0" encoding="utf-8"?> <xs:schema id="DefaultSchema" targetNamespace="http://myNamespace.com/DefaultSchema.xsd" elementFormDefault="qualified" xmlns="http://myNamespace.com/DefaultSchema.xsd" xmlns:xs="http://www.w3.org/2001/XMLSchema" > <xs:complexType name="ZForm"> <xs:sequence minOccurs="0" maxOccurs="unbounded"> <xs:element name="Part" minOccurs="0" maxOccurs="unbounded" type="Part"/> </xs:sequence> <xs:attribute name="Title" use="required" type="xs:string"/> <xs:attribute name="Version" type="xs:int"/> </xs:complexType> <xs:complexType name="Part"> <xs:sequence minOccurs="0" maxOccurs="unbounded"> <xs:element name="Label" type="Label" minOccurs="0"></xs:element> </xs:sequence> <xs:attribute name="Title" use="required" type="xs:string"/> </xs:complexType> <xs:complexType name="Label"> <xs:simpleContent> <xs:extension base="xs:string"> <xs:attribute name="Title" type="xs:string"/> </xs:extension> </xs:simpleContent> </xs:complexType> </xs:schema> ExportSchema.xsd: (this one kinda wraps 1 more element (ZForms) around the main element (ZForm) of the DefaultSchema) <?xml version="1.0" encoding="utf-8"?> <xs:schema id="ExportSchema" targetNamespace="http://myNamespace.com/ExportSchema.xsd" elementFormDefault="qualified" xmlns="http://myNamespace.com/DefaultSchema.xsd" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:es="http://myNamespace.com/ExportSchema.xsd" > <xs:import namespace="http://myNamespace.com/DefaultSchema.xsd" schemaLocation="DefaultSchema.xsd"/> <xs:element name="ZForms" type="es:ZFormType"></xs:element> <xs:complexType name="ZFormType"> <xs:sequence> <xs:element name="ZForm" type="ZForm" maxOccurs="unbounded" /> </xs:sequence> </xs:complexType> </xs:schema> And then finally I have a generated XML: <?xml version="1.0" encoding="utf-8"?> <ZForms xmlns="http://myNamespace.com/ExportSchema.xsd"> <ZForm Version="1" Title="FormTitle"> <Part Title="PartTitle" > <Label Title="LabelTitle" /> </Part> </ZForm> </ZForms> Visual Studio complains it doesn't know what 'Part' is. I was hoping I would not need to use XML namespace prefixes (..) to make this XML validate, since ExportSchema.xsd has a reference to DefaultSchema.xsd. Is there any way to make that XML structure valid without explicitly specifying the DefaultSchema.xsd? Or is this a no-go?

    Read the article

  • Weird behavior of fork() and execvp() in C

    - by ron
    After some remarks on my previous post, I made the following modifications: int main() { char errorStr[BUFF3]; while (1) { int i , errorFile; char *line = malloc(BUFFER); char *origLine = line; fgets(line, 128, stdin); // get a line from stdin // get complete diagnostics on the given string lineData info = runDiagnostics(line); char command[20]; sscanf(line, "%20s ", command); line = strchr(line, ' '); // here I remove the command from the line , the command is stored in "commmand" above printf("The Command is: %s\n", command); int currentCount = 0; // number of elements in the line int *argumentsCount = &currentCount; // pointer to that // get the elements separated char** arguments = separateLineGetElements(line,argumentsCount); printf("\nOutput after separating the given line from the user\n"); for (i = 0; i < *argumentsCount; i++) { printf("Argument %i is: %s\n", i, arguments[i]); } // here we call a method that would execute the commands pid_t pid ; if (-1 == (pid = fork())) { sprintf(errorStr,"fork: %s\n",strerror(errno)); write(errorFile,errorStr,strlen(errorStr + 1)); perror("fork"); exit(1); } else if (pid == 0) // fork was successful { printf("\nIn son process\n"); // if (execvp(arguments[0],arguments) < 0) // for the moment I ignore this line if (execvp(command,arguments) < 0) // execute the command { perror("execvp"); printf("ERROR: execvp failed\n"); exit(1); } } else // parent { int status = 0; pid = wait(&status); printf("Process %d returned with status %d.", pid, status); } // print each element of the line for (i = 0; i < *argumentsCount; i++) { printf("Argument %i is: %s\n", i, arguments[i]); } // free all the elements from the memory for (i = 0; i < *argumentsCount; i++) { free(arguments[i]); } free(arguments); free(origLine); } return 0; } When I enter in the console: ls > out.txt I get: The Command is: ls execvp: No such file or directory In son process ERROR: execvp failed Process 4047 returned with status 256.Argument 0 is: > Argument 1 is: out.txt So I guess that the son process is active, but for some reason the execvp fails. Why? Regards REMARK: The ls command is just an example. I need to make this work with any given command. EDIT 1: User input: ls > qq.out Program output: The Command is: ls Output after separating the given line from the user Argument 0 is: > Argument 1 is: qq.out In son process >: cannot access qq.out: No such file or directory Process 4885 returned with status 512.Argument 0 is: > Argument 1 is: qq.out

    Read the article

  • ASP.NET GZip Encoding Caveats

    - by Rick Strahl
    GZip encoding in ASP.NET is pretty easy to accomplish using the built-in GZipStream and DeflateStream classes and applying them to the Response.Filter property.  While applying GZip and Deflate behavior is pretty easy, there are a few caveats that you have to watch out for, as I found out today for myself with an application that was throwing up some garbage data. But before looking at caveats let’s review GZip implementation for ASP.NET. ASP.NET GZip/Deflate Basics Response filters basically are applied to the Response.OutputStream and transform it as data is written to it through the ASP.NET Response object. So a Response.Write eventually gets written into the output stream, which, if a filter is assigned, is also written through the filter stream’s interface. To perform the actual GZip (and Deflate) encoding typically used by Web pages, .NET includes the GZipStream and DeflateStream stream classes which can be readily assigned to the Response.OutputStream. With these two stream classes in place it’s almost trivially easy to create a couple of reusable methods that allow you to compress your HTTP output. In my standard WebUtils utility class (from the West Wind Web Toolkit) I created two static utility methods – IsGZipSupported and GZipEncodePage – that check whether the client supports GZip encoding and then actually encode the current output (note that although the method includes ‘Page’ in its name this code will work with any ASP.NET output). /// <summary> /// Determines if GZip is supported /// </summary> /// <returns></returns> public static bool IsGZipSupported() { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (!string.IsNullOrEmpty(AcceptEncoding) && (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate"))) return true; return false; } /// <summary> /// Sets up the current page or handler to use GZip through a Response.Filter /// IMPORTANT: /// You have to call this method before any output is generated! /// </summary> public static void GZipEncodePage() { HttpResponse Response = HttpContext.Current.Response; if (IsGZipSupported()) { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (AcceptEncoding.Contains("deflate")) { Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.Headers.Remove("Content-Encoding"); Response.AppendHeader("Content-Encoding", "deflate"); } else { Response.Filter = new System.IO.Compression.GZipStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.Headers.Remove("Content-Encoding"); Response.AppendHeader("Content-Encoding", "gzip"); } } // Allow proxy servers to cache encoded and unencoded versions separately Response.AppendHeader("Vary", "Content-Encoding"); } As you can see the actual assignment of the Filter is as simple as: Response.Filter = new DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); which applies the filter to the OutputStream. You also need to ensure that your response reflects the new GZip or Deflate encoding and ensure that any pages that are cached in Proxy servers can differentiate between pages that were encoded with the various different encodings (or no encoding). 
To use this utility function now is trivially easy: In any ASP.NET code that wants to compress its Response output you simply use: protected void Page_Load(object sender, EventArgs e) { WebUtils.GZipEncodePage(); Entry = WebLogFactory.GetEntry(); var entries = Entry.GetLastEntries(App.Configuration.ShowEntryCount, "pk,Title,SafeTitle,Body,Entered,Feedback,Location,ShowTopAd", "TEntries"); if (entries == null) throw new ApplicationException("Couldn't load WebLog Entries: " + Entry.ErrorMessage); this.repEntries.DataSource = entries; this.repEntries.DataBind(); } Here I use an ASP.NET page, but the above WebUtils.GZipEncode() method call will work in any ASP.NET application type including HTTP Handlers. The only requirement is that the filter needs to be applied before any other output is sent to the OutputStream. For example, in my CallbackHandler service implementation by default output over a certain size is GZip encoded. The output that is generated is JSON or XML and if the output is over 5k in size I apply WebUtils.GZipEncode(): if (sbOutput.Length > GZIP_ENCODE_TRESHOLD) WebUtils.GZipEncodePage(); Response.ContentType = ControlResources.STR_JsonContentType; HttpContext.Current.Response.Write(sbOutput.ToString()); Ok, so you probably get the idea: Encoding GZip/Deflate content is pretty easy. Hold on there Hoss – Watch your Caching Or is it? There are a few caveats that you need to watch out for when dealing with GZip content. The first issue is that you need to deal with the fact that some clients don’t support GZip or Deflate content. Most modern browsers support it, but if you have a programmatic Http client accessing your content, GZip/Deflate support is by no means guaranteed. For example, WinInet Http clients don’t support GZip out of the box – it has to be explicitly implemented. Other low-level HTTP clients on other platforms also don’t support GZip out of the box. The problem is that your application, your Web Server and Proxy Servers on the Internet might be caching your generated content. If you return content with GZip once and then again without, either caching is not applied or, worse, the wrong type of content is returned back to the client from a cache or proxy. The result is an unreadable response for *some clients* which is also very hard to debug and fix once in production. You already saw the issue of Proxy servers addressed in the GZipEncodePage() function: // Allow proxy servers to cache encoded and unencoded versions separately Response.AppendHeader("Vary", "Content-Encoding"); This ensures that any Proxy servers also check for the Content-Encoding HTTP Header to cache their content – not just the URL. The same thing applies if you do OutputCaching in your own ASP.NET code. If you generate output for GZip on an OutputCached page, the GZipped content will be cached (either by ASP.NET’s cache or in some cases by the IIS Kernel Cache). But what if the next client doesn’t support GZip? She’ll get served a cached GZip page that won’t decode and she’ll get a page full of garbage. Wholly undesirable. 
To fix this you need to add some custom OutputCache rules by way of the GetVaryByCustom() HttpApplication method in your Global.asax file: public override string GetVaryByCustomString(HttpContext context, string custom) { // Override Caching for compression if (custom == "GZIP") { string acceptEncoding = HttpContext.Current.Response.Headers["Content-Encoding"]; if (string.IsNullOrEmpty(acceptEncoding)) return ""; else if (acceptEncoding.Contains("gzip")) return "GZIP"; else if (acceptEncoding.Contains("deflate")) return "DEFLATE"; return ""; } return base.GetVaryByCustomString(context, custom); } In a page that uses Output caching you then specify: <%@ OutputCache Duration="180" VaryByParam="none" VaryByCustom="GZIP" %> to use that custom rule. It’s all Fun and Games until ASP.NET throws an Error Ok, so you’re up and running with GZip, you have your caching squared away and your pages that you are applying it to are jamming along. Then BOOM, something strange happens and you get a lovely garbled page that looks like this: Lovely, isn’t it? What’s happened here is that I have WebUtils.GZipEncode() applied to my page, but there’s an error in the page. The error falls back to the ASP.NET error handler and the error handler removes all existing output (good) and removes all the custom HTTP headers I’ve set manually (usually good, but very bad here). Since I applied the Response.Filter (via GZipEncode) the output is now GZip encoded, but ASP.NET has removed my Content-Encoding header, so the browser receives the GZip encoded content without a notification that it is encoded as GZip. The result is binary output. Here’s what Fiddler says about the raw HTTP header output when an error occurs when GZip encoding was applied: HTTP/1.1 500 Internal Server Error Cache-Control: private Content-Type: text/html; charset=utf-8 Date: Sat, 30 Apr 2011 22:21:08 GMT Content-Length: 2138 Connection: close ?`I?%&/m?{J?J??t??` … binary output stripped here Notice: no Content-Encoding header and that’s why we’re seeing this garbage. ASP.NET has stripped the Content-Encoding header but left our filter intact. So how do we fix this? In my applications I typically have a global Application_Error handler set up and in this case I’ve been using that. One thing that you can do in the Application_Error handler is explicitly clear out the Response.Filter and set it to null at the top: protected void Application_Error(object sender, EventArgs e) { // Remove any special filtering especially GZip filtering Response.Filter = null; … } And voila, I get my Yellow Screen of Death or my custom generated error output back via uncompressed content. BTW, the same is true for Page level errors handled in Page_Error or ASP.NET MVC Error handling methods in a controller. 
Another and possibly even better solution is to check whether a filter is attached just before the headers are sent to the client, as pointed out by Adam Schroeder in the comments: protected void Application_PreSendRequestHeaders() { // ensure that if GZip/Deflate Encoding is applied that headers are set // also works when error occurs if filters are still active HttpResponse response = HttpContext.Current.Response; if (response.Filter is GZipStream && response.Headers["Content-encoding"] != "gzip") response.AppendHeader("Content-encoding", "gzip"); else if (response.Filter is DeflateStream && response.Headers["Content-encoding"] != "deflate") response.AppendHeader("Content-encoding", "deflate"); } This uses the Application_PreSendRequestHeaders() pipeline event to check for compression encoding in a filter and adjusts the content accordingly. This is actually a better solution since this is generic – it’ll work regardless of how the content is cleaned up. For example, an error Response.Redirect() or short error display might get changed and the filter not cleared, and this code actually handles that. Sweet, thanks Adam. It’s unfortunate that ASP.NET doesn’t natively clear out Response.Filters when an error occurs just as it clears the Response and Headers. I can’t see where leaving a Filter in place in an error situation would make any sense, but hey - this is what it is and it’s easy enough to fix as long as you know where to look. Riiiight! IIS and GZip I should also mention that IIS 7 includes good support for compression natively. If you can defer encoding to let IIS perform it for you rather than doing it in your code, by all means you should do it! Especially any static or semi-dynamic content that can be made static should be using IIS built-in compression. Dynamic caching is also supported but is a bit more tricky to judge in terms of performance and footprint. John Forsyth has a great article on the benefits and drawbacks of IIS 7 compression which gives some detailed performance comparisons and impact reviews. I’ll post another entry next with some more info on IIS compression since information on it seems to be a bit hard to come by. Related Content Built-in GZip/Deflate Compression in IIS 7.x HttpWebRequest and GZip Responses © Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET, IIS7  

    Read the article

  • May 2011 Release of the Ajax Control Toolkit

    - by Stephen Walther
    I’m happy to announce that the Superexpert team has published the May 2011 release of the Ajax Control Toolkit at CodePlex. You can download the new release at the following URL: http://ajaxcontroltoolkit.codeplex.com/releases/view/65800 This release focused on improving the ModalPopup and AsyncFileUpload controls. Our team closed a total of 34 bugs related to the ModalPopup and AsyncFileUpload controls. Enhanced ModalPopup Control You can take advantage of the Ajax Control Toolkit ModalPopup control to easily create popup dialogs in your ASP.NET Web Forms applications. When the dialog appears, you cannot interact with any page content which appears behind the modal dialog. For example, the following page contains a standard ASP.NET Button and Panel. When you click the Button, the Panel appears as a popup dialog: <%@ Page Language="vb" AutoEventWireup="false" CodeBehind="Simple.aspx.vb" Inherits="ACTSamples.Simple" %> <%@ Register TagPrefix="act" Namespace="AjaxControlToolkit" Assembly="AjaxControlToolkit" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title>Simple Modal Popup Sample</title> <style type="text/css"> html { background-color: blue; } #dialog { border: 2px solid black; width: 500px; background-color: White; } #dialogContents { padding: 10px; } .modalBackground { background-color:Gray; filter:alpha(opacity=70); opacity:0.7; } </style> </head> <body> <form id="form1" runat="server"> <div> <act:ToolkitScriptManager ID="tsm" runat="server" /> <asp:Panel ID="dialog" runat="server"> <div id="dialogContents"> Here are the contents of the dialog. <br /> <asp:Button ID="btnOK" Text="OK" runat="server" /> </div> </asp:Panel> <asp:Button ID="btnShow" Text="Open Dialog" runat="server" /> <act:ModalPopupExtender TargetControlID="btnShow" PopupControlID="dialog" OkControlID="btnOK" DropShadow="true" BackgroundCssClass="modalBackground" runat="server" /> </div> </form> </body> </html>     Notice that the page includes two controls from the Ajax Control Toolkit: the ToolkitScriptManager and the ModalPopupExtender control. Any page which uses any of the controls from the Ajax Control Toolkit must include a ToolkitScriptManager. The ModalPopupExtender is used to create the popup. The following properties are set: · TargetControlID – This is the ID of the Button or LinkButton control which causes the modal popup to be displayed. · PopupControlID – This is the ID of the Panel control which contains the content displayed in the modal popup. · OKControlID – This is the ID of a Button or LinkButton which causes the modal popup to close. · DropShadow – Displays a drop shadow behind the modal popup. · BackgroundCSSClass – The name of a Cascading Style Sheet class which is used to gray out the background of the page when the modal popup is displayed. The ModalPopup is completely cross-browser compatible. For example, the following screenshots show the same page displayed in Firefox 4, Internet Explorer 9, and Chrome 11: The ModalPopup control has lots of nice properties. For example, you can make the ModalPopup draggable. You also can programmatically hide and show a modal popup from either server-side or client-side code. 
To learn more about the properties of the ModalPopup control, see the following website: http://www.asp.net/ajax/ajaxcontroltoolkit/Samples/ModalPopup/ModalPopup.aspx Animated ModalPopup Control In the May 2011 release of the Ajax Control Toolkit, we enhanced the Modal Popup control so that it supports animations. We made this modification in response to a feature request posted at CodePlex which got 65 votes (plenty of people wanted this feature): http://ajaxcontroltoolkit.codeplex.com/workitem/6944 I want to thank Dani Kenan for posting a patch to this issue which we used as the basis for adding animation support for the modal popup. Thanks Dani! The enhanced ModalPopup in the May 2011 release of the Ajax Control Toolkit supports the following animations: OnShowing – Called before the modal popup is shown. OnShown – Called after the modal popup is shown. OnHiding – Called before the modal popup is hidden. OnHidden – Called after the modal popup is hidden. You can use these animations, for example, to fade-in a modal popup when it is displayed and fade-out the popup when it is hidden. Here’s the code: <act:ModalPopupExtender ID="ModalPopupExtender1" TargetControlID="btnShow" PopupControlID="dialog" OkControlID="btnOK" DropShadow="true" BackgroundCssClass="modalBackground" runat="server"> <Animations> <OnShown> <Fadein /> </OnShown> <OnHiding> <Fadeout /> </OnHiding> </Animations> </act:ModalPopupExtender>     So that you can experience the full joy of this animated modal popup, I recorded the following video: Of course, you can use any of the animations supported by the Ajax Control Toolkit with the modal popup. The animation reference is located here: http://www.asp.net/ajax/ajaxcontroltoolkit/Samples/Walkthrough/AnimationReference.aspx Fixes to the AsyncFileUpload In the May 2011 release, we also focused our energies on performing bug fixes for the AsyncFileUpload control. We fixed several major issues with the AsyncFileUpload including: It did not work in master pages It did not work when ClientIDMode=”Static” It did not work with Firefox 4 It did not work when multiple AsyncFileUploads were included in the same page It generated markup which was not HTML5 compatible The AsyncFileUpload control is a super useful control. It enables you to upload files in a form without performing a postback. 
Here’s some sample code which demonstrates how you can use the AsyncFileUpload: <%@ Page Language="vb" AutoEventWireup="false" CodeBehind="Simple.aspx.vb" Inherits="ACTSamples.Simple1" %> <%@ Register TagPrefix="act" Namespace="AjaxControlToolkit" Assembly="AjaxControlToolkit" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head id="Head1" runat="server"> <title>Simple AsyncFileUpload</title> </head> <body> <form id="form1" runat="server"> <div> <act:ToolkitScriptManager ID="tsm" runat="server" /> User Name: <br /> <asp:TextBox ID="txtUserName" runat="server" /> <asp:RequiredFieldValidator EnableClientScript="false" ErrorMessage="Required" ControlToValidate="txtUserName" runat="server" /> <br /><br /> Avatar: <act:AsyncFileUpload ID="async1" ThrobberID="throbber" UploadingBackColor="yellow" ErrorBackColor="red" CompleteBackColor="green" UploaderStyle="Modern" PersistFile="true" runat="server" /> <asp:Image ID="throbber" ImageUrl="uploading.gif" style="display:none" runat="server" /> <br /><br /> <asp:Button ID="btnSubmit" Text="Submit" runat="server" /> </div> </form> </body> </html> And here’s the code-behind for the page above: Public Class Simple1 Inherits System.Web.UI.Page Private Sub btnSubmit_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles btnSubmit.Click If Page.IsValid Then ' Get Form Fields Dim userName As String Dim file As Byte() userName = txtUserName.Text If async1.HasFile Then file = async1.FileBytes End If ' Save userName, file to database ' Redirect to success page Response.Redirect("SimpleDone.aspx") End If End Sub End Class   The form above contains an AsyncFileUpload which has values for the following properties: ThrobberID – The ID of an element in the page to display while a file is being uploaded. UploadingBackColor – The color to display in the upload field while a file is being uploaded. ErrorBackColor – The color to display in the upload field when there is an error uploading a file. CompleteBackColor – The color to display in the upload field when the upload is complete. UploaderStyle – The user interface style: Traditional or Modern. PersistFile – When true, the uploaded file is persisted in Session state. The last property PersistFile, causes the uploaded file to be stored in Session state. That way, if completing a form requires multiple postbacks, then the user needs to upload the file only once. For example, if there is a server validation error, then the user is not required to re-upload the file after fixing the validation issue. In the sample code above, this condition is simulated by disabling client-side validation for the RequiredFieldValidator control. The RequiredFieldValidator EnableClientScript property has the value false. The following video demonstrates how the AsyncFileUpload control works: You can learn more about the properties and methods of the AsyncFileUpload control by visiting the following page: http://www.asp.net/ajax/ajaxcontroltoolkit/Samples/AsyncFileUpload/AsyncFileUpload.aspx Conclusion In the May 2011 release of the Ajax Control Toolkit, we addressed over 30 bugs related to the ModalPopup and AsyncFileUpload controls. Furthermore, by building on code submitted by the community, we enhanced the ModalPopup control so that it supports animation (Thanks Dani). In our next sprint for the June release of the Ajax Control Toolkit, we plan to focus on the HTML Editor control. 
Subscribe to this blog to keep updated.

    Read the article

< Previous Page | 266 267 268 269 270 271 272 273 274 275 276 277  | Next Page >