Search Results

Search found 11982 results on 480 pages for 'non greedy'.


  • Need help configuring my Tomcat server

    - by gablin
    I just reinstalled my entire server, and now I can't seem to get my JSP-based website to work on Tomcat anymore. I use the same server.xml file, which worked perfectly before the reinstallation, but no longer. Here's the content of the server.xml file which worked before: <!--APR library loader. Documentation at /docs/apr.html --> <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" /> <!--Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html --> <Listener className="org.apache.catalina.core.JasperListener" /> <!-- JMX Support for the Tomcat server. Documentation at /docs/non-existent.html --> <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" /> <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" /> <!-- Global JNDI resources Documentation at /docs/jndi-resources-howto.html --> <GlobalNamingResources> <!-- Editable user database that can also be used by UserDatabaseRealm to authenticate users --> <Resource name="UserDatabase" auth="Container" type="org.apache.catalina.UserDatabase" description="User database that can be updated and saved" factory="org.apache.catalina.users.MemoryUserDatabaseFactory" pathname="conf/tomcat-users.xml" /> </GlobalNamingResources> <!-- A "Service" is a collection of one or more "Connectors" that share a single "Container" Note: A "Service" is not itself a "Container", so you may not define subcomponents such as "Valves" at this level. Documentation at /docs/config/service.html --> <Service name="Catalina"> <!--The connectors can use a shared executor, you can define one or more named thread pools--> <!-- <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="150" minSpareThreads="4"/> --> <!-- A "Connector" represents an endpoint by which requests are received and responses are returned. Documentation at : Java HTTP Connector: /docs/config/http.html (blocking & non-blocking) Java AJP Connector: /docs/config/ajp.html APR (HTTP/AJP) Connector: /docs/apr.html Define a non-SSL HTTP/1.1 Connector on port 8080 --> <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> <!-- A "Connector" using the shared thread pool--> <!-- <Connector executor="tomcatThreadPool" port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> --> <!-- Define a SSL HTTP/1.1 Connector on port 8443 This connector uses the JSSE configuration, when using APR, the connector should be using the OpenSSL style configuration described in the APR documentation --> <!-- <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" /> --> <!-- Define an AJP 1.3 Connector on port 8009 --> <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" /> <!-- An Engine represents the entry point (within Catalina) that processes every request. The Engine implementation for Tomcat stand alone analyzes the HTTP headers included with the request, and passes them on to the appropriate Host (virtual host). 
Documentation at /docs/config/engine.html --> <!-- You should set jvmRoute to support load-balancing via AJP ie : <Engine name="Standalone" defaultHost="localhost" jvmRoute="jvm1"> --> <Engine name="Catalina" defaultHost="localhost"> <!--For clustering, please take a look at documentation at: /docs/cluster-howto.html (simple how to) /docs/config/cluster.html (reference documentation) --> <!-- <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> --> <!-- The request dumper valve dumps useful debugging information about the request and response data received and sent by Tomcat. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.RequestDumperValve"/> --> <!-- This Realm uses the UserDatabase configured in the global JNDI resources under the key "UserDatabase". Any edits that are performed against this UserDatabase are immediately available for use by the Realm. --> <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/> <!-- Define the default virtual host Note: XML Schema validation will not work with Xerces 2.2. --> <!-- <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false"> --> <!-- SingleSignOn valve, share authentication between web applications Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> --> <!-- Access log processes all example. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log." suffix=".txt" pattern="common" resolveHosts="false"/> --> <!-- </Host> --> <Host name="www.rebootradio.nu"> <Alias>rebootradio.nu</Alias> <Context path="" docBase="D:/services/http/rebootradio.nu" debug="1" reloadable="true"/> </Host> </Engine> </Service> </Server> The JSP site doesn't use any WAR files or anything like that; there's just a default.jsp in the specified folder D:/services/http/rebootradio.nu which loads the site. As I said, this configuration worked before, but now with the latest verion of XAMPP and Tomcat it doesn't work anymore. All I get is a 404 message saying The requested resource () is not available.
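    One way to narrow down where the 404 comes from is to request the connector with and without the site's Host header and compare the status codes. A minimal Python sketch (port and host name are taken from the server.xml above; running it against localhost is an assumption):

      import urllib.request
      import urllib.error

      # Probe the HTTP connector directly, then as the named virtual host, to
      # see whether the default host or the "www.rebootradio.nu" Host is the
      # one answering with 404 (i.e. whether the Context ever deployed).
      def probe(url, host_header=None):
          req = urllib.request.Request(url)
          if host_header:
              req.add_header("Host", host_header)
          try:
              with urllib.request.urlopen(req, timeout=5) as resp:
                  return resp.getcode()
          except urllib.error.HTTPError as exc:
              return exc.code            # e.g. 404 from a specific Host
          except OSError as exc:
              return exc                 # connector not listening at all

      print(probe("http://localhost:8080/"))                        # default host
      print(probe("http://localhost:8080/", "www.rebootradio.nu"))  # virtual host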

    Read the article

  • KB2667402 update fails to install with 0x80004005. Yet it's also showing as installed

    - by growse
    I've got 15 Windows 2008 R2 x64 servers that I manage with SCCM2012. I've noticed during my Windows updates reporting that there's two boxes that are showing as 'error' for total updates installed. Digging around, it looks like the update that is failing to install is KB2667402. The Software Centre on the server itself shows the following: The software change returned error code 0x80004005(-2147467259). So SCCM thinks it hasn't installed the update. However, if I go to the Programs and Features application and select 'Windows Updates', I can see an entry for KB2667402: If I try and uninstall this, I get an error: An error occurred. Not all of the updates were successfully uninstalled If I try and download the patch from Microsoft directly, I get the same error installing as displayed in Software Centre. The only odd thing about this setup that I can think would affect this is that I run the RDP service on a non-standard port. However, I do this across all the servers, so it seems odd that it would fail on just 2 out of 15. The tail of the WindowsUpdate.log file is below: 2012-06-26 15:33:53:184 3924 1608 COMAPI ------------- 2012-06-26 15:33:53:190 3924 1608 COMAPI -- START -- COMAPI: Install [ClientId = CcmExec] 2012-06-26 15:33:53:190 3924 1608 COMAPI --------- 2012-06-26 15:33:53:190 3924 1608 COMAPI - Allow source prompts: No; Forced: No; Force quiet: Yes 2012-06-26 15:33:53:190 3924 1608 COMAPI - Updates in request: 1 2012-06-26 15:33:53:190 3924 1608 COMAPI - ServiceID = {3DA21691-E39D-4DA6-8A4B-B43877BCB1B7} Managed 2012-06-26 15:33:53:199 860 1198 Agent ************* 2012-06-26 15:33:53:199 860 1198 Agent ** START ** Agent: Installing updates [CallerId = CcmExec] 2012-06-26 15:33:53:199 860 1198 Agent ********* 2012-06-26 15:33:53:199 860 1198 Agent * Updates to install = 1 2012-06-26 15:33:53:201 860 1198 Agent * Title = Security Update for Windows Server 2008 R2 x64 Edition (KB2667402) 2012-06-26 15:33:53:201 860 1198 Agent * UpdateId = {48859BE4-1331-4CD2-8E70-3B537180A0D0}.103 2012-06-26 15:33:53:201 860 1198 Agent * Bundles 1 updates: 2012-06-26 15:33:53:201 860 1198 Agent * {D854ECF1-99A7-4D67-B435-2D041BF79565}.103 2012-06-26 15:33:53:204 3924 1608 COMAPI - Updates to install = 1 2012-06-26 15:33:53:204 3924 1608 COMAPI <<-- SUBMITTED -- COMAPI: Install [ClientId = CcmExec] 2012-06-26 15:33:53:221 860 1198 Agent WARNING: failed to calculate prior restore point time with error 0x80070002; setting restore point 2012-06-26 15:33:53:222 860 1198 Agent WARNING: LoadLibrary failed for srclient.dll with hr:8007007e 2012-06-26 15:33:53:322 860 1198 DnldMgr Preparing update for install, updateId = {D854ECF1-99A7-4D67-B435-2D041BF79565}.103. 
2012-06-26 15:33:53:325 5700 117c Misc =========== Logging initialized (build: 7.5.7601.17514, tz: +0100) =========== 2012-06-26 15:33:53:325 5700 117c Misc = Process: C:\Windows\system32\wuauclt.exe 2012-06-26 15:33:53:325 5700 117c Misc = Module: C:\Windows\system32\wuaueng.dll 2012-06-26 15:33:53:324 5700 117c Handler ::::::::::::: 2012-06-26 15:33:53:325 5700 117c Handler :: START :: Handler: CBS Install 2012-06-26 15:33:53:325 5700 117c Handler ::::::::: 2012-06-26 15:33:53:330 5700 117c Handler Starting install of CBS update D854ECF1-99A7-4D67-B435-2D041BF79565 2012-06-26 15:33:53:342 5700 117c Handler CBS package identity: Package_for_KB2667402~31bf3856ad364e35~amd64~~6.1.2.0 2012-06-26 15:33:53:366 5700 117c Handler Installing self-contained with source=C:\Windows\SoftwareDistribution\Download\44059e0415033d6f699a50ef69dd5ff2\windows6.1-kb2667402-v2-x64.cab, workingdir=C:\Windows\SoftwareDistribution\Download\44059e0415033d6f699a50ef69dd5ff2\inst 2012-06-26 15:33:56:270 5700 3b8 Handler FATAL: CBS called Error with 0x80004005, 2012-06-26 15:33:56:402 5700 117c Handler FATAL: Completed install of CBS update with type=0, requiresReboot=0, installerError=1, hr=0x80004005 2012-06-26 15:33:56:405 5700 117c Handler ::::::::: 2012-06-26 15:33:56:406 5700 117c Handler :: END :: Handler: CBS Install 2012-06-26 15:33:56:406 5700 117c Handler ::::::::::::: 2012-06-26 15:33:56:433 860 1198 Agent ********* 2012-06-26 15:33:56:433 860 1198 Agent ** END ** Agent: Installing updates [CallerId = CcmExec] 2012-06-26 15:33:56:433 860 1198 Agent ************* 2012-06-26 15:33:56:433 860 d14 AU Can not perform non-interactive scan if AU is interactive-only 2012-06-26 15:33:56:450 3924 e40 COMAPI >>-- RESUMED -- COMAPI: Install [ClientId = CcmExec] 2012-06-26 15:33:56:450 3924 e40 COMAPI - Install call complete (succeeded = 0, succeeded with errors = 0, failed = 1, unaccounted = 0) 2012-06-26 15:33:56:450 3924 e40 COMAPI - Reboot required = No 2012-06-26 15:33:56:450 3924 e40 COMAPI - WARNING: Exit code = 0x00000000; Call error code = 0x80240022 2012-06-26 15:33:56:451 3924 e40 COMAPI --------- 2012-06-26 15:33:56:451 3924 e40 COMAPI -- END -- COMAPI: Install [ClientId = CcmExec] 2012-06-26 15:33:56:451 3924 e40 COMAPI ------------- 2012-06-26 15:33:56:536 860 13a4 AU Triggering Offline detection (non-interactive) 2012-06-26 15:33:56:536 860 d14 AU ############# 2012-06-26 15:33:56:536 860 d14 AU ## START ## AU: Search for updates 2012-06-26 15:33:56:536 860 d14 AU ######### 2012-06-26 15:33:56:539 860 d14 AU <<## SUBMITTED ## AU: Search for updates [CallId = {2DBB046C-2265-421B-A37B-93BDECC6C261}] 2012-06-26 15:33:56:539 860 1788 Agent ************* 2012-06-26 15:33:56:539 860 1788 Agent ** START ** Agent: Finding updates [CallerId = AutomaticUpdates] 2012-06-26 15:33:56:539 860 1788 Agent ********* 2012-06-26 15:33:56:539 860 1788 Agent * Online = No; Ignore download priority = No 2012-06-26 15:33:56:539 860 1788 Agent * Criteria = "IsInstalled=0 and DeploymentAction='Installation' or IsPresent=1 and DeploymentAction='Uninstallation' or IsInstalled=1 and DeploymentAction='Installation' and RebootRequired=1 or IsInstalled=0 and DeploymentAction='Uninstallation' and RebootRequired=1" 2012-06-26 15:33:56:539 860 1788 Agent * ServiceID = {3DA21691-E39D-4DA6-8A4B-B43877BCB1B7} Managed 2012-06-26 15:33:56:539 860 1788 Agent * Search Scope = {Machine} 2012-06-26 15:33:58:562 860 1788 Agent * Found 0 updates and 70 categories in search; evaluated appl. 
rules of 180 out of 1072 deployed entities 2012-06-26 15:33:58:565 860 1788 Agent ********* 2012-06-26 15:33:58:565 860 1788 Agent ** END ** Agent: Finding updates [CallerId = AutomaticUpdates] 2012-06-26 15:33:58:565 860 1788 Agent ************* 2012-06-26 15:33:58:650 860 f2c AU >>## RESUMED ## AU: Search for updates [CallId = {2DBB046C-2265-421B-A37B-93BDECC6C261}] 2012-06-26 15:33:58:650 860 f2c AU # 0 updates detected 2012-06-26 15:33:58:650 860 f2c AU ######### 2012-06-26 15:33:58:650 860 f2c AU ## END ## AU: Search for updates [CallId = {2DBB046C-2265-421B-A37B-93BDECC6C261}] 2012-06-26 15:33:58:650 860 f2c AU ############# 2012-06-26 15:33:58:650 860 f2c AU Featured notifications is disabled. 2012-06-26 15:33:58:651 860 f2c AU Successfully wrote event for AU health state:0 2012-06-26 15:33:58:652 860 f2c AU Successfully wrote event for AU health state:0
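    Since the failing result code only shows up in the middle of a long log, a small script can pull the relevant lines out of WindowsUpdate.log on each of the 15 servers. This is only a convenience sketch: the log path is the standard one, and the patterns are guesses based on the excerpt above.

      import re
      from pathlib import Path

      # Print only the lines that carry a CBS failure or an 0x8xxxxxxx result
      # code, so the KB2667402 attempts are easy to compare across servers.
      log = Path(r"C:\Windows\WindowsUpdate.log")
      wanted = re.compile(r"FATAL|0x8[0-9A-Fa-f]{7}")

      for line in log.read_text(errors="ignore").splitlines():
          if wanted.search(line):
              print(line)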

    Read the article

  • PhpMyAdmin Hangs On MySQL Error

    - by user75228
    I'm currently running PhpMyAdmin 4.0.10 (the latest version supporting PHP 4.2.X) on my Amazon EC2 connecting to a MySQL database on RDS. Everything works perfectly fine except actions that return a mysql error message. Whether I perform "any" kind of action that will return a mysql error, Phpmyadmin will hang with the yellow "Loading" box forever without displaying anything. For example, if I perform the following command in MySQL CLI : select * from 123; It instantly returns the following error : ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '123' at line 1 which is completely normal because table 123 doesn't exist. However, if I execute the exact same command in the "SQL" box in Phpmyadmin, after I click "Go" it'll display "Loading" and stops there forever. Has anyone ever encountered this kind of issue with Phpmyadmin? Is this a bug or I have something wrong with my config.inc.php? Any help would be much appreciated. I also noticed these error messages in my apache error logs : /opt/apache/bin/httpd: symbol lookup error: /opt/php/lib/php/extensions/no-debug-non-zts-20060613/iconv.so: undefined symbol: libiconv_open /opt/apache/bin/httpd: symbol lookup error: /opt/php/lib/php/extensions/no-debug-non-zts-20060613/iconv.so: undefined symbol: libiconv_open /opt/apache/bin/httpd: symbol lookup error: /opt/php/lib/php/extensions/no-debug-non-zts-20060613/iconv.so: undefined symbol: libiconv_open Below are my config.inc.php settings : <?php /* vim: set expandtab sw=4 ts=4 sts=4: */ /** * phpMyAdmin sample configuration, you can use it as base for * manual configuration. For easier setup you can use setup/ * * All directives are explained in documentation in the doc/ folder * or at <http://docs.phpmyadmin.net/>. * * @package PhpMyAdmin */ /* * This is needed for cookie based authentication to encrypt password in * cookie */ $cfg['blowfish_secret'] = 'something_random'; /* YOU MUST FILL IN THIS FOR COOKIE AUTH! */ /* * Servers configuration */ $i = 0; /* * First server */ $i++; /* Authentication type */ $cfg['Servers'][$i]['auth_type'] = 'cookie'; /* Server parameters */ $cfg['Servers'][$i]['host'] = '*.rds.amazonaws.com'; $cfg['Servers'][$i]['connect_type'] = 'tcp'; $cfg['Servers'][$i]['compress'] = true; /* Select mysql if your server does not have mysqli */ $cfg['Servers'][$i]['extension'] = 'mysqli'; $cfg['Servers'][$i]['AllowNoPassword'] = false; $cfg['LoginCookieValidity'] = '3600'; /* * phpMyAdmin configuration storage settings. 
*/ /* User used to manipulate with storage */ $cfg['Servers'][$i]['controlhost'] = '*.rds.amazonaws.com'; $cfg['Servers'][$i]['controluser'] = 'pma'; $cfg['Servers'][$i]['controlpass'] = 'password'; /* Storage database and tables */ $cfg['Servers'][$i]['pmadb'] = 'phpmyadmin'; $cfg['Servers'][$i]['bookmarktable'] = 'pma__bookmark'; $cfg['Servers'][$i]['relation'] = 'pma__relation'; $cfg['Servers'][$i]['table_info'] = 'pma__table_info'; $cfg['Servers'][$i]['table_coords'] = 'pma__table_coords'; $cfg['Servers'][$i]['pdf_pages'] = 'pma__pdf_pages'; $cfg['Servers'][$i]['column_info'] = 'pma__column_info'; $cfg['Servers'][$i]['history'] = 'pma__history'; $cfg['Servers'][$i]['table_uiprefs'] = 'pma__table_uiprefs'; $cfg['Servers'][$i]['tracking'] = 'pma__tracking'; $cfg['Servers'][$i]['designer_coords'] = 'pma__designer_coords'; $cfg['Servers'][$i]['userconfig'] = 'pma__userconfig'; $cfg['Servers'][$i]['recent'] = 'pma__recent'; /* Contrib / Swekey authentication */ // $cfg['Servers'][$i]['auth_swekey_config'] = '/etc/swekey-pma.conf'; /* * End of servers configuration */ /* * Directories for saving/loading files from server */ $cfg['UploadDir'] = ''; $cfg['SaveDir'] = ''; /** * Defines whether a user should be displayed a "show all (records)" * button in browse mode or not. * default = false */ //$cfg['ShowAll'] = true; /** * Number of rows displayed when browsing a result set. If the result * set contains more rows, "Previous" and "Next". * default = 30 */ $cfg['MaxRows'] = 50; /** * disallow editing of binary fields * valid values are: * false allow editing * 'blob' allow editing except for BLOB fields * 'noblob' disallow editing except for BLOB fields * 'all' disallow editing * default = blob */ //$cfg['ProtectBinary'] = 'false'; /** * Default language to use, if not browser-defined or user-defined * (you find all languages in the locale folder) * uncomment the desired line: * default = 'en' */ //$cfg['DefaultLang'] = 'en'; //$cfg['DefaultLang'] = 'de'; /** * default display direction (horizontal|vertical|horizontalflipped) */ //$cfg['DefaultDisplay'] = 'vertical'; /** * How many columns should be used for table display of a database? * (a value larger than 1 results in some information being hidden) * default = 1 */ //$cfg['PropertiesNumColumns'] = 2; /** * Set to true if you want DB-based query history.If false, this utilizes * JS-routines to display query history (lost by window close) * * This requires configuration storage enabled, see above. * default = false */ //$cfg['QueryHistoryDB'] = true; /** * When using DB-based query history, how many entries should be kept? * * default = 25 */ //$cfg['QueryHistoryMax'] = 100; /* * You can find more configuration options in the documentation * in the doc/ folder or at <http://docs.phpmyadmin.net/>. */ ?>

    Read the article

  • Why do my ping command (Windows) results alternate between "timeout" and "network is not reachable"?

    - by Sopalajo de Arrierez
    My Windows is in Spanish, so I will have to paste console outputs in that language (I think that translating without knowing the exact terms used in english versions could give worse results than leaving it as it appears on screen). This is the issue: when pinging a non-existent IP from a WinXP-SP3 machine (clean Windows install, just formatted), I get sometimes a "Timeout" result, and sometimes a "network is not reachable" message. This is the result of: ping 192.168.210.1 Haciendo ping a 192.168.210.1 con 32 bytes de datos: Tiempo de espera agotado para esta solicitud. Respuesta desde 80.58.67.86: Red de destino inaccesible. Respuesta desde 80.58.67.86: Red de destino inaccesible. Tiempo de espera agotado para esta solicitud. Estadísticas de ping para 192.168.210.1: Paquetes: enviados = 4, recibidos = 2, perdidos = 2 (50% perdidos), Tiempos aproximados de ida y vuelta en milisegundos: Mínimo = 0ms, Máximo = 0ms, Media = 0ms 192.168.210.1 does not exist on the network. DHCP client is enabled, and the computer gets assigned those network config by the router. My IP: 192.168.11.2 Netmask: 255.255.255.0 Gateway: 192.168.11.1 DNS: 80.58.0.33/194.224.52.36 This is the output from "route print command": =========================================================================== Rutas activas: Destino de red Máscara de red Puerta de acceso Interfaz Métrica 0.0.0.0 0.0.0.0 192.168.11.1 192.168.11.2 20 127.0.0.0 255.0.0.0 127.0.0.1 127.0.0.1 1 192.168.11.0 255.255.255.0 192.168.11.2 192.168.11.2 20 192.168.11.2 255.255.255.255 127.0.0.1 127.0.0.1 20 192.168.11.255 255.255.255.255 192.168.11.2 192.168.11.2 20 224.0.0.0 240.0.0.0 192.168.11.2 192.168.11.2 20 255.255.255.255 255.255.255.255 192.168.11.2 192.168.11.2 1 255.255.255.255 255.255.255.255 192.168.11.2 3 1 Puerta de enlace predeterminada: 192.168.11.1 =========================================================================== Rutas persistentes: ninguno The output of: ping 1.1.1.1 Haciendo ping a 1.1.1.1 con 32 bytes de datos: Tiempo de espera agotado para esta solicitud. Tiempo de espera agotado para esta solicitud. Tiempo de espera agotado para esta solicitud. Tiempo de espera agotado para esta solicitud. Estadísticas de ping para 1.1.1.1: Paquetes: enviados = 4, recibidos = 0, perdidos = 4 1.1.1.1 does not exist on the network. and the output of: ping 10.1.1.1 Haciendo ping a 10.1.1.1 con 32 bytes de datos: Respuesta desde 80.58.67.86: Red de destino inaccesible. Tiempo de espera agotado para esta solicitud. Tiempo de espera agotado para esta solicitud. Respuesta desde 80.58.67.86: Red de destino inaccesible. Estadísticas de ping para 10.1.1.1: Paquetes: enviados = 4, recibidos = 2, perdidos = 2 (50% perdidos), 10.1.1.1 does not exist on the network. I can do some aproximate translation of what you demand if necessary. I have another computers in the same network (WinXP-SP3 and Win7-SP1), and they have, too, this problem. Gateway (Router): Buffalo WHR-HP-GN (official Buffalo firmware, not DD-WRT). I have some Linux (Debian/Kali) machine in my network, so I tested things on it: ping 192.168.210.1 PING 192.168.210.1 (192.168.210.1) 56(84) bytes of data. From 80.58.67.86 icmp_seq=1 Packet filtered From 80.58.67.86 icmp_seq=2 Packet filtered From 80.58.67.86 icmp_seq=3 Packet filtered From 80.58.67.86 icmp_seq=4 Packet filtered to the non-existing 1.1.1.1 : ping 1.1.1.1 PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data. 
^C --- 1.1.1.1 ping statistics --- 153 packets transmitted, 0 received, 100% packet loss, time 153215ms (no response after waiting a few minutes). And the non-existing 10.1.1.1: ping 10.1.1.1 PING 10.1.1.1 (10.1.1.1) 56(84) bytes of data. From 80.58.67.86 icmp_seq=20 Packet filtered From 80.58.67.86 icmp_seq=22 Packet filtered From 80.58.67.86 icmp_seq=23 Packet filtered From 80.58.67.86 icmp_seq=24 Packet filtered From 80.58.67.86 icmp_seq=25 Packet filtered What is going on here? I am posing this question mainly for learning purposes, but there is another reason: when all pings return "timeout", ping sets %ERRORLEVEL% to 1, but if some of the replies are of the "Network is not reachable" type, %ERRORLEVEL% goes to 0 (no error), and this can be inappropriate for a shell script (we cannot use ping to detect, for example, whether the network is down due to loss of contact with the gateway).
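    For the scripting problem in the last paragraph, one workaround is to stop trusting ping's exit code altogether and instead count successful echo replies in its output. A rough Python sketch, assuming Windows ping and that successful replies always contain "TTL=" regardless of the display language:

      import subprocess

      # Treat a host as up only if at least one reply line looks like a real
      # echo reply; "destination unreachable" and "request timed out" lines
      # never contain "TTL=", so both count as failures here.
      def host_up(ip, count=4):
          out = subprocess.run(["ping", "-n", str(count), ip],
                               capture_output=True, text=True).stdout
          return any("TTL=" in line for line in out.splitlines())

      print(host_up("192.168.11.1"))    # the gateway from the question
      print(host_up("192.168.210.1"))   # the non-existent address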

    Read the article

  • Determining the maximum stack depth

    - by Joa Ebert
    Imagine I have a stack-based toy language that comes with the operations Push, Pop, Jump and If. I have a program, and its input is the toy language. For instance, I get the sequence

      Push 1
      Push 1
      Pop
      Pop

    In that case the maximum stack depth would be 2. A more complicated example would use branches:

      Push 1
      Push true
      If .success
      Pop
      Jump .continue
      .success:
      Push 1
      Push 1
      Pop
      Pop
      Pop
      .continue:

    In this case the maximum stack depth would be 3. However, it is not possible to get the maximum depth by walking the instructions top to bottom as listed, since that would actually result in a stack-underflow error. CFGs to the rescue: you can build a graph and walk every possible path through the basic blocks you have. However, since the number of paths can grow quickly, for n vertices you get up to (n-1)! possible paths. My current approach is to simplify the graph as much as possible so that there are fewer possible paths. This works, but I would consider it ugly. Is there a better (read: faster) way to attack this problem? I am fine if the algorithm produces a stack depth that is not optimal: if the correct stack size is m, then my only constraint is that the result n satisfies n >= m. Is there maybe a greedy algorithm available that would produce a good result here?
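    For reference, the standard alternative to enumerating paths is a worklist pass over the CFG that tracks, per basic block, the largest stack depth seen at its entry; each edge is revisited only when that value improves, instead of walking every path. A Python sketch (the block encoding is invented for the example; it assumes If pops its condition and that the program has no stack-growing loops):

      from collections import deque

      # blocks maps a block name to (net stack effect, peak depth inside the
      # block relative to its entry); edges lists CFG successors.
      def max_stack_depth(blocks, edges, entry):
          best_in = {entry: 0}          # largest known depth at block entry
          overall = 0
          work = deque([entry])
          while work:
              b = work.popleft()
              depth_in = best_in[b]
              net, peak = blocks[b]
              overall = max(overall, depth_in + peak)
              for succ in edges.get(b, []):
                  out = depth_in + net
                  if out > best_in.get(succ, -1):
                      best_in[succ] = out
                      work.append(succ)
          return overall

      # The branching example from the question, encoded by hand.
      blocks = {"head": (1, 2), "success": (-1, 2), "fall": (-1, 0), "exit": (0, 0)}
      edges = {"head": ["success", "fall"], "success": ["exit"], "fall": ["exit"]}
      print(max_stack_depth(blocks, edges, "head"))   # -> 3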

    Read the article

  • Is there a perfect algorithm for chess?

    - by Overflown
    Dear Stack Overflow community, I was recently in a discussion with a non-coder person on the possibilities of chess computers. I'm not well versed in theory, but think I know enough. I argued that there could not exist a deterministic Turing machine that always won or stalemated at chess. I think that, even if you search the entire space of all combinations of player1/2 moves, the single move that the computer decides upon at each step is based on a heuristic. Being based on a heuristic, it does not necessarily beat ALL of the moves that the opponent could do. My friend thought, to the contrary, that a computer would always win or tie if it never made a "mistake" move (however do you define that?). However, being a programmer who has taken CS, I know that even your good choices - given a wise opponent - can force you to make "mistake" moves in the end. Even if you know everything, your next move is greedy in matching a heuristic. Most chess computers try to match a possible end game to the game in progress, which is essentially a dynamic programming traceback. Again, the endgame in question is avoidable though. -- thanks, Allan Edit: Hmm... looks like I ruffled some feathers here. That's good. Thinking about it again, it seems like there is no theoretical problem with solving a finite game like chess. I would argue that chess is a bit more complicated than checkers in that a win is not necessarily by numerical exhaustion of pieces, but by a mate. My original assertion is probably wrong, but then again I think I've pointed out something that is not yet satisfactorily proven (formally). I guess my thought experiment was that whenever a branch in the tree is taken, then the algorithm (or memorized paths) must find a path to a mate (without getting mated) for any possible branch on the opponent moves. After the discussion, I will buy that given more memory than we can possibly dream of, all these paths could be found.

    Read the article

  • wso2 ESB: server configuration CRITICAL

    - by nuvio
    My Scenario: I have server_1 (192.168.10.1) with wso2-ESB and server_2 (192.168.10.2) with Glassfish-v3 + web services. Problem: I am trying to create a proxy in ESB using the java Web Services, but the created proxy does not respond properly. The log says: Unable to sendViaPost for http or https does not change the result. I think I should configure the axis2.xml but I am having trouble, and don't know what to do. What is the configuration for my scenario? Please help me! EDIT: To be clear, I can directly consume the WebService in the Glassfish server, it works normal, both port and url are accessible. Only when I create a "Pass through Proxy" in the ESB, it does not work. I don't think is matter of Proxy configuration...I never had problems while deployed locally, problems started once I have uploaded the ESB to a remote server. I really would need someone to point me what is the correct procedure when installing the ESB on a remote host: configuration of axis2.xml and carbon.xml, ports, transport receivers etc... P.S. I had a look at the official (wso2 esb and carbon) guides with no luck, but I am missing something... Endpoint of Java Web Service: http://192.168.10.2:8080/HelloWorld/Hello?wsdl ESB Proxy Enpoint: http://192.168.10.1:8280/services/HelloProxy The following is my axis2.xml configuration, please check it: <transportReceiver name="http" class="org.apache.synapse.transport.nhttp.HttpCoreNIOListener"> <parameter name="port" locked="false">8280</parameter> <parameter name="non-blocking" locked="false">true</parameter> <parameter name="bind-address" locked="false">192.168.10.1</parameter> <parameter name="WSDLEPRPrefix" locked="false">https//192.168.10.1:8280</parameter> <parameter name="httpGetProcessor" locked="false">org.wso2.carbon.transport.nhttp.api.NHttpGetProcessor</parameter> <!--<parameter name="priorityConfigFile" locked="false">location of priority configuration file</parameter>--> </transportReceiver> <!-- the non blocking https transport based on HttpCore + SSL-NIO extensions --> <transportReceiver name="https" class="org.apache.synapse.transport.nhttp.HttpCoreNIOSSLListener"> <parameter name="port" locked="false">8243</parameter> <parameter name="non-blocking" locked="false">true</parameter> <parameter name="bind-address" locked="false">192.168.10.1</parameter> <parameter name="WSDLEPRPrefix" locked="false">https://192.168.10.1:8243</parameter> <!--<parameter name="priorityConfigFile" locked="false">location of priority configuration file</parameter>--> <parameter name="httpGetProcessor" locked="false">org.wso2.carbon.transport.nhttp.api.NHttpGetProcessor</parameter> <parameter name="keystore" locked="false"> <KeyStore> <Location>repository/resources/security/wso2carbon.jks</Location> <Type>JKS</Type> <Password>wso2carbon</Password> <KeyPassword>wso2carbon</KeyPassword> </KeyStore> </parameter> <parameter name="truststore" locked="false"> <TrustStore> <Location>repository/resources/security/client-truststore.jks</Location> <Type>JKS</Type> <Password>wso2carbon</Password> </TrustStore> </parameter> <!--<parameter name="SSLVerifyClient">require</parameter> supports optional|require or defaults to none --> </transportReceiver>
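    Before touching axis2.xml, it can help to confirm from the ESB host itself that both endpoints quoted above are reachable, so a plain connectivity problem is not mistaken for a proxy misconfiguration. A quick Python check (the two URLs are copied from the question; nothing here is wso2-specific):

      import urllib.request

      # Fetch the Glassfish endpoint directly and then the ESB proxy endpoint,
      # printing the HTTP status or the connection error for each.
      for url in ("http://192.168.10.2:8080/HelloWorld/Hello?wsdl",
                  "http://192.168.10.1:8280/services/HelloProxy"):
          try:
              with urllib.request.urlopen(url, timeout=5) as resp:
                  print(url, "->", resp.getcode())
          except OSError as exc:
              print(url, "->", exc)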

    Read the article

  • BITS client fails to specify HTTP Range header

    - by user256890
    Our system is designed to deploy to regions with unreliable and/or insufficient network connections. We build our own fault tolerating data replication services that uses BITS. Due to some security and maintenance requirements, we implemented our own ASP.NET file download service on the server side, instead of just letting IIS serving up the files. When BITS client makes an HTTP download request with the specified range of the file, our ASP.NET page pulls the demanded file segment into memory and serve that up as the HTTP response. That is the theory. ;) This theory fails in artificial lab scenarios but I would not let the system deploy in real life scenarios unless we can overcome that. Lab scenario: I have BITS client and the IIS on the same developer machine, so practically I have enormous network "bandwidth" and BITS is intelligent enough to detect that. As BITS client discovers the unlimited bandwidth, it gets more and more "greedy". At each HTTP request, BITS wants to grasp greater and greater file ranges (we are talking about downloading CD iso files, videos), demanding 20-40MB inside a single HTTP request, a size that I am not comfortable to pull into memory on the server side as one go. I can overcome that simply by giving less than demanded. It is OK. However, BITS gets really "confident" and "arrogant" demanding files WITHOUT specifying the download range, i.e., it wants the entire file in a single request, and this is where things go wrong. I do not know how to answer that response in the case of a 600MB file. If I just provide the starting 1MB range of the file, BITS client keeps sending HTTP requests for the same file without download range to continue, it hammers its point that it wants the entire file in one go. Since I am reluctant to provide the entire file, BITS gives up after several trials and reports error. Any thoughts?
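    To make the server-side question concrete, the sketch below (plain Python, not the ASP.NET service) shows one way to honour a BITS Range request while capping how much is pulled into memory: parse the Range header, clamp the slice, and describe it in Content-Range so the client learns the full length and keeps requesting the remainder. The size limit and file length are made-up numbers.

      import re

      CHUNK_LIMIT = 4 * 1024 * 1024          # max bytes served per request
      FILE_SIZE = 600 * 1024 * 1024          # pretend 600 MB ISO

      def plan_response(range_header):
          m = re.match(r"bytes=(\d+)-(\d*)", range_header or "")
          if not m:
              # No Range header: advertise range support and the total size
              # instead of streaming the whole file in one response.
              return 200, {"Accept-Ranges": "bytes",
                           "Content-Length": str(FILE_SIZE)}
          start = int(m.group(1))
          wanted_end = int(m.group(2)) if m.group(2) else FILE_SIZE - 1
          end = min(wanted_end, start + CHUNK_LIMIT - 1, FILE_SIZE - 1)
          return 206, {"Content-Range": f"bytes {start}-{end}/{FILE_SIZE}",
                       "Content-Length": str(end - start + 1)}

      print(plan_response("bytes=0-41943039"))   # a greedy 40 MB request, capped
      print(plan_response(None))                 # request with no Range header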

    Read the article

  • Const references when dereferencing iterator on set, starting from Visual Studio 2010

    - by Patrick
    Starting from Visual Studio 2010, iterating over a set seems to return an iterator that dereferences the data as 'const data' instead of non-const. The following code is an example of something that does compile on Visual Studio 2005, but not on 2010 (this is an artificial example, but clearly illustrates the problem we found on our own code). In this example, I have a class that stores a position together with a temperature. I define comparison operators (not all them, just enough to illustrate the problem) that only use the position, not the temperature. The point is that for me two instances are identical if the position is identical; I don't care about the temperature. #include <set> class DataPoint { public: DataPoint (int x, int y) : m_x(x), m_y(y), m_temperature(0) {} void setTemperature(double t) {m_temperature = t;} bool operator<(const DataPoint& rhs) const { if (m_x==rhs.m_x) return m_y<rhs.m_y; else return m_x<rhs.m_x; } bool operator==(const DataPoint& rhs) const { if (m_x!=rhs.m_x) return false; if (m_y!=rhs.m_y) return false; return true; } private: int m_x; int m_y; double m_temperature; }; typedef std::set<DataPoint> DataPointCollection; void main(void) { DataPointCollection points; points.insert (DataPoint(1,1)); points.insert (DataPoint(1,1)); points.insert (DataPoint(1,2)); points.insert (DataPoint(1,3)); points.insert (DataPoint(1,1)); for (DataPointCollection::iterator it=points.begin();it!=points.end();++it) { DataPoint &point = *it; point.setTemperature(10); } } In the main routine I have a set to which I add some points. To check the correctness of the comparison operator, I add data points with the same position multiple times. When writing the contents of the set, I can clearly see there are only 3 points in the set. The for-loop loops over the set, and sets the temperature. Logically this is allowed, since the temperature is not used in the comparison operators. This code compiles correctly in Visual Studio 2005, but gives compilation errors in Visual Studio 2010 on the following line (in the for-loop): DataPoint &point = *it; The error given is that it can't assign a "const DataPoint" to a [non-const] "DataPoint &". It seems that you have no decent (= non-dirty) way of writing this code in VS2010 if you have a comparison operator that only compares parts of the data members. Possible solutions are: Adding a const-cast to the line where it gives an error Making temperature mutable and making setTemperature a const method But to me both solutions seem rather 'dirty'. It looks like the C++ standards committee overlooked this situation. Or not? What are clean solutions to solve this problem? Did some of you encounter this same problem and how did you solve it? Patrick

    Read the article

  • Extract a pattern from the output of curl

    - by allentown
    I would like to use curl, on the command line, to grab a url, pipe it to a pattern, and return a list of urls that match that pattern. I am running into problems with the greedy aspects of the pattern, and cannot seem to get past them. Any help on this would be appreciated. curl http://www.reddit.com/r/pics/ | grep -ioE "http://imgur\.com/.+(jpg|jpeg|gif|png)" So, grab the data from the url, which returns a mess of html, which may need some line breaks somehow replaced in, unless the regex can return more than one match per line. The pattern is pretty simple: any string that matches starts with http://imgur.com/, has A-Z a-z 0-9 (maybe some others), is so far 5 chars long (8 should cover it forever if I wanted to limit that aspect of the pattern, which I don't), and ends in a graphic file format extension (jpg, jpeg, gif, png). That's about it; at that url, with default settings, I should generally get back a good set of images. I would not object to using the RSS feed url for the same page; it may be easier to parse actually. Thanks everyone!
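    The same extraction is easier to reason about when the greedy ".+" is replaced by a bounded character class, so the match can never run past the end of one URL (which also makes missing line breaks irrelevant). A short Python sketch on made-up HTML, with the equivalent grep pattern noted in the comment:

      import re

      # Equivalent grep: grep -ioE 'http://imgur\.com/[A-Za-z0-9]+\.(jpg|jpeg|gif|png)'
      html = ('<a href="http://imgur.com/Ab3dE.jpg">x</a>'
              '<img src="http://imgur.com/ZZ9q.png">')
      pattern = r"http://imgur\.com/[A-Za-z0-9]+\.(?:jpe?g|gif|png)"
      for url in re.findall(pattern, html):
          print(url)
      # -> http://imgur.com/Ab3dE.jpg
      #    http://imgur.com/ZZ9q.png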

    Read the article

  • How to eliminate NULL fields in TSQL

    - by salvationishere
    I am developing a TSQL query in SSMS 2008 R2. I am trying to develop this query to identify one record / client. Because some of these values are NULL, I am currently doing LEFT JOINS on most of the tables. But the problem with the LEFT JOINs is that now I get 1 record for some clients. But if I change this to INNER JOINs then some clients are excluded entirely because they have NULL values for these columns. How do I limit the query result to just one record / client regardless of NULL values? And if there are non-NULL values then I want it to choose the record with non-NULL values. Here is some of my current output: group_profile_id profile_name license_number is_accepting is_accepting_placement managing_office region vendor_name vendor_id applicant_type Office Address status_description Cert Date2 race ethnicity_desc religion 9CD932F1-6BE1-4F80-AB81-0CE32C565BCF Atreides Foster Home 1 Atreides1 1 Yes Manchester, NH Gulf Atlantic Atreides1 00000007 Treatment Foster Home 4042 Arrakis Avenue, Springfield, VT 05156 Open/Re-opened 2011-06-01 00:00:00.000 NULL NULL NULL DCE354D5-A7CC-409F-B5A3-89BF664B7718 Averitte, Leon and Sandra 00000044 1 Yes Birmingham, AL Gulf Atlantic AL Averitte, Leon and Sandra 00000044 Treatment Foster Home 3816 5th Avenue, Bessemer, AL 35020, (205)482-4307 Open/Re-opened 2011-08-05 00:00:00.000 NULL NULL NULL DCE354D5-A7CC-409F-B5A3-89BF664B7718 Averitte, Leon and Sandra 00000044 1 Yes Birmingham, AL Gulf Atlantic AL Averitte, Leon and Sandra 00000044 Treatment Foster Home 3816 5th Avenue, Bessemer, AL 35020, (205)482-4307 Open/Re-opened 2011-08-05 00:00:00.000 Caucasian/White Non Hispanic NULL AD02A43C-6F38-4F35-8C9E-E12422690BFB Bass, Matthew and Sarah 00000076 1 Yes Jacks on, MS Central Gulf Coast MS Bass, Matthew and Sarah 00000076 Treatment Foster Home 506 Eagelwood Drive, Florence, MS 39073, (601)665-7169 Open/Re-opened 2011-04-01 00:00:00.000 NULL NULL NULL AD02A43C-6F38-4F35-8C9E-E12422690BFB Bass, Matthew and Sarah 00000076 1 Yes Jackson, MS Central Gulf Coast MS Bass, Matthew and Sarah 00000076 Treatment Foster Home 506 Eagelwood Drive, Florence, MS 39073, (601)665-7169 Open/Re-opened 2011-04-01 00:00:00.000 Caucasian/White NULL Baptist You can see that both Averitte and Bass profile names have one record with NULL race, ethnicity, religion. How do I eliminate these rows (rows 2 and 4)? 
Here is my query currently: select distinct gp.group_profile_id, gp.profile_name, gp.license_number, gp.is_accepting, case when gp.is_accepting = 1 then 'Yes' when gp.is_accepting = 0 then 'No ' end as is_accepting_placement, mo.profile_name as managing_office, regions.[region_description] as region, pv.vendor_name, pv.id as vendor_id, at.description as applicant_type, dbo.GetGroupAddress(gp.group_profile_id, null, 0) as [Office Address], gsv.status_description, ri.[description] as race, ethnicity.description as ethnicity_desc, religion.description as religion from group_profile gp With (NoLock) --Office Information inner join group_profile_type gpt With (NoLock) on gp.group_profile_type_id = gpt.group_profile_type_id and gpt.type_code = 'FOSTERHOME' and gp.agency_id = @agency_id and gp.is_deleted = 0 inner join group_profile mo With (NoLock) on gp.managing_office_id = mo.group_profile_id left outer join payor_vendor pv With (NoLock) on gp.payor_vendor_id = pv.payor_vendor_id left outer join applicant_type at With (NoLock) on gp.applicant_type_id = at.applicant_type_id and at.is_foster_home = 1 inner join group_status_view gsv With (NoLock) on gp.group_profile_id = gsv.group_profile_id and gsv.status_value = 'OPEN' and gsv.effective_date = (Select max(b.effective_date) from group_status_view b With (NoLock) where gp.group_profile_id = b.group_profile_id) left outer join regions With (NoLock) on isnull(mo.regions_id, gp.regions_id) = regions.regions_id left join enrollment en on en.group_profile_id = gp.group_profile_id join event_log el on el.event_log_id = en.event_log_id left join people client on client.people_id = el.people_id left join race With (NoLock) on el.people_id = race.people_id left join group_profile_race gpr with (nolock) on gpr.race_info_id = race.race_info_id left join race_info ri with (nolock) on ri.race_info_id = gpr.race_info_id left join ethnicity With(NoLock) On client.ethnicity = ethnicity.ethnicity_id left join religion on client.religion = religion.religion_id
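    The selection rule being asked for ("one record per client, preferring records with non-NULL values") can be stated apart from the joins. The Python sketch below only illustrates that rule on invented rows; in T-SQL the usual counterpart is a ROW_NUMBER() OVER (PARTITION BY group_profile_id ORDER BY ...) filter rather than DISTINCT.

      # For each client, keep the row with the fewest NULL (None) columns.
      rows = [
          {"client": "Averitte", "race": None,              "religion": None},
          {"client": "Averitte", "race": "Caucasian/White", "religion": None},
          {"client": "Bass",     "race": None,              "religion": None},
          {"client": "Bass",     "race": "Caucasian/White", "religion": "Baptist"},
      ]

      best = {}
      for row in rows:
          nulls = sum(v is None for v in row.values())
          key = row["client"]
          if key not in best or nulls < best[key][0]:
              best[key] = (nulls, row)

      for nulls, row in best.values():
          print(row)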

    Read the article

  • jQuery fadeIn and fadeOut buggy on hover of image

    - by user186319
    Hi All, I have four images on a page that on hover, needs to replace the main text with relevant text pertaining to that image. It's working but buggy. If I roll over slowly and roll off slowly, I get the desired effect. When I rollover quickly both div's content show. Here is a thinned out version of what I need to be able to do. <img src="btn-open.gif" class="btn" /> <div class="mainText"> <h1>Main text</h1> <p>Morbi mollis auctor magna, eu sodales mi posuere elementum. Donec lacus lorem, vestibulum sed luctus ac, tincidunt sit amet eros. Nullam tristique lectus lobortis nibh pharetra placerat. Aliquam quis tellus mauris. Quisque eu convallis elit. Sed vitae libero est. Suspendisse laoreet magna magna, vitae malesuada diam. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Fusce eros ipsum, interdum et volutpat sed, commodo aliquam odio. Maecenas auctor condimentum mi. Maecenas ante eros, tristique nec viverra sed, molestie sit amet nulla. Suspendisse vitae turpis ac felis rutrum interdum.</p> </div> <div class="replacementText"> <h1>Replacement text</h1> <p>Nulla ac magna nec quam cursus mollis eget a nulla! Vestibulum quis nibh ipsum, ut vehicula leo. Etiam ac felis suscipit mi semper vehicula. Praesent est mi, suscipit sit amet bibendum at, porta quis elit. Integer lectus est, consequat non sodales ac, pharetra sit amet tellus. Suspendisse porttitor massa a dolor suscipit sed ullamcorper ipsum vehicula. In malesuada augue sit amet ante volutpat euismod. Ut vel felis sed enim placerat ultricies. Aliquam erat volutpat. Vivamus rutrum; ante vitae euismod accumsan, felis odio lacinia magna, eu viverra nisl metus non ligula? In metus nisi, viverra vel scelerisque non, ullamcorper sed arcu? In hac habitasse platea dictumst. Donec laoreet dapibus quam vitae pulvinar. Morbi ut purus nisl. Nulla eu velit ipsum; vel mattis magna. Aenean sodales faucibus dapibus.</p> </div> $(document).ready(function() { $(".btn").hover( function() { $(this).css({ cursor: "pointer" }); $(".mainText").hide(); $(".replacementText").slideDown("slow"); }, function() { $(".replacementText").hide(); $(".mainText").slideDown("slow"); }); });

    Read the article

  • ASP.NET MVC Map String Url To A Route Value Object

    - by mwgriffiths
    I am creating a modular ASP.NET MVC application using areas. In short, I have created a greedy route that captures all routes beginning with {application}/{*catchAll}. Here is the action: // get /application/index public ActionResult Index(string application, object catchAll) { // forward to partial request to return partial view ViewData["partialRequest"] = new PartialRequest(catchAll); // this gets called in the view page and uses a partial request class to return a partial view } Example: The Url "/Application/Accounts/LogOn" will then cause the Index action to pass "/Accounts/LogOn" into the PartialRequest, but as a string value. // partial request constructor public PartialRequest(object routeValues) { RouteValueDictionary = new RouteValueDictionary(routeValues); } In this case, the route value dictionary will not return any values for the routeData, whereas if I specify a route in the Index Action: ViewData["partialRequest"] = new PartialRequest(new { controller = "accounts", action = "logon" }); It works, and the routeData values contains a "controller" key and an "action" key; whereas before, the keys are empty, and therefore the rest of the class wont work. So my question is, how can I convert the "/Accounts/LogOn" in the catchAll to "new { controller = "accounts", action = "logon" }"?? If this is not clear, I will explain more! :) Matt This is the "closest" I have got, but it obviously wont work for complex routes: // split values into array var routeParts = catchAll.ToString().Split(new char[] { '/' }, StringSplitOptions.RemoveEmptyEntries); // feels like a hack catchAll = new { controller = routeParts[0], action = routeParts[1] };
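    Stripped of the MVC plumbing, the missing piece is just turning the catch-all string into named route values. A small Python illustration of that parsing (segment names mirror the question; the "id" slot is an assumption):

      # Split "/Accounts/LogOn"-style catch-alls into route values, tolerating
      # a missing action or an extra trailing segment.
      def parse_catch_all(catch_all):
          parts = [p for p in catch_all.strip("/").split("/") if p]
          values = {"controller": None, "action": "index", "id": None}
          for key, part in zip(("controller", "action", "id"), parts):
              values[key] = part.lower()
          return values

      print(parse_catch_all("/Accounts/LogOn"))
      # -> {'controller': 'accounts', 'action': 'logon', 'id': None}
      print(parse_catch_all("Accounts"))
      # -> {'controller': 'accounts', 'action': 'index', 'id': None}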

    Read the article

  • How can you trigger the viewWillAppear of a UITableView AFTER its UINavigationController?

    - by Troy Sartain
    I have a situation where I use a tab bar set up but with nav bar controllers on a couple tabs. Those tabs have table views on them. Everything works great, I can pick a tab and get a different table in a nav bar structure. The other tabs are non-nav controllers. Fine. I want to use the same table view controller and even the same detail screen since they are essentially the same format. I have two-dimensional arrays and a couple of vars tracking which tab and which table row so when I get to the detail it's all good. Now to the problem. It all seems to work just fine until I return to a tab that has already been visited. At that point, I do indeed get a viewWillAppear for both the view controller of that specific tab and the table view controller. However, I get the table view one first! It doesn't know which tab was tapped on; the other one does but that's too late to dynamically change the table! Any suggestions? Am I being too greedy about code duplication? I mean I could just make separate controllers for for each table view and then separate detail view controllers but I thought I had a good solution.

    Read the article

  • Extract a specific string from a curl'd result

    - by allentown
    Given this curl command: curl --user-agent "fogent" --silent -o page.html "http://www.google.com/search?q=insansiate" * Spelling is intentionally incorrect. I want to grab the suggestion as my result. I want to be able to either grep into the page.html file perhaps with grep -oE or pipe it right from curl and never store a file. The result should be: 'instantiate' I need only the word 'instantiate', or the phrase, whatever google is auto correcting, is what I am after. Here is the basic html that is returned: <span class=spell style="color:#cc0000">Did you mean: </span><a href="/search?hl=en&amp;ie=UTF-8&amp;&amp;sa=X&amp;ei=VEMUTMDqGoOINraK3NwL&amp;ved=0CB0QBSgA&amp;q=instantiate&amp;spell=1"class=spell><b><i>instantiate</i></b></a>&nbsp;&nbsp;<span class=std>Top 2 results shown</span> So perhaps from/to of the string below, which I hope is unique enough to cover all my bases. class=spell><b><i>instantiate</i></b></a>&nbsp;&nbsp; I keep running into issues with greedy grep; perhaps I should run it though an html prettify tool first to get a line break or 50 in there. I don't know of any simple way to do so in bash, which is what I would ideally like this to be in. I really don't want to deal with firing up perl, and making sure I have the correct module. Any suggestions, thank you?
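    The non-greedy form of the same idea, shown in Python so it is easy to test offline: a lazy group stops at the first closing tag even though the whole page comes back as one line. The HTML below is the fragment quoted above, shortened.

      import re

      html = ('<span class=spell style="color:#cc0000">Did you mean: </span>'
              '<a href="/search?...&amp;q=instantiate&amp;spell=1"class=spell>'
              '<b><i>instantiate</i></b></a>')
      m = re.search(r"class=spell><b><i>(.+?)</i>", html)
      if m:
          print(m.group(1))   # -> instantiate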

    Read the article

  • Generic ASP.NET MVC Route Conflict

    - by Donn Felker
    I'm working on a Legacy ASP.NET system. I say legacy because there are NO tests around 90% of the system. I'm trying to fix the routes in this project and I'm running into a issue I wish to solve with generic routes. I have the following routes: routes.MapRoute( "DefaultWithPdn", "{controller}/{action}/{pdn}", new { controller = "", action = "Index", pdn = "" }, null ); routes.MapRoute( "DefaultWithClientId", "{controller}/{action}/{clientId}", new { controller = "", action = "index", clientid = "" }, null ); The problem is that the first route is catching all of the traffic for what I need to be routed to the second route. The route is generic (no controller is defined in the constraint in either route definition) because multiple controllers throughout the entire app share this same premise (sometimes we need a "pdn" sometimes we need a "clientId"). How can I map these generic routes so that they go to the proper controller and action, yet not have one be too greedy? Or can I at all? Are these routes too generic (which is what I'm starting to believe is the case). My only option at this point (AFAIK) is one of the following: In the contraints, apply a regex to match the action values like: (foo|bar|biz|bang) and the same for the controller: (home|customer|products) for each controller. However, this has a problem in the fact that I may need to do this: ~/Foo/Home/123 // Should map to "DefaultwithPdn" ~/Foo/Home/abc // Should map to "DefaultWithClientId" Which means that if the Foo Controller has an action that takes a pdn and another action that takes a clientId (which happens all the time in this app), the wrong route is chosen. To hardcode these contstraints into each possible controller/action combo seems like a lot of duplication to me and I have the feeling I've been looking at the problem for too long so I need another pair of eyes to help out. Can I have generic routes to handle this scenario? Or do I need to have custom routes for each controller with constraints applied to the actions on those routes? Thanks
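    As a plain-Python model of the constraint idea (not the MVC routing API): if the two routes are registered with regex constraints, an all-digits last segment can bind to pdn and anything else to clientId, so both routes stay generic without one swallowing the other. The patterns below are assumptions about what a pdn and a clientId look like.

      import re

      routes = [
          ("DefaultWithPdn",      "pdn",      r"\d+"),
          ("DefaultWithClientId", "clientid", r"[A-Za-z][A-Za-z0-9]*"),
      ]

      # First route whose constraint matches the last segment wins.
      def match(url):
          controller, action, last = url.strip("/").split("/")
          for name, param, pattern in routes:
              if re.fullmatch(pattern, last):
                  return name, {"controller": controller, "action": action, param: last}
          return None

      print(match("/Foo/Home/123"))   # -> DefaultWithPdn
      print(match("/Foo/Home/abc"))   # -> DefaultWithClientId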

    Read the article

  • jquery draggable/droppable issue

    - by Scarface
    Hey guys I have a drag and drop function that does not fully work. What I want to do is be able to drag and drop a picture into a div and have that picture pop up in the div. I have a list of 2 pictures right now, so I have the following function echoed for each picture. Both pictures are draggable and droppable, but the first one is the only one that appears in the div, regardless of which picture gets dragged in. I am not sure what is wrong because the jquery function seems to be unique to each picture. If any one has any suggestions I would appreciate it greatly. while ($row = mysql_fetch_assoc($query)) { $image=htmlentities($row['image']); //image name $uniqid=uniqid(); //unique id for jquery functions $dirname = "usercontent/$user/images/$image"; echo "<img id=\"$uniqid\" src=\"$dirname\" width=\"75\" height=\"75\"/>"; echo "<script> $(function() { $(\"#$uniqid\").draggable({ scroll: true, scrollSensitivity: 10, scrollSpeed: 10, revert: true, helper: 'clone', cursorAt: { cursor: 'move', top: 27, left: 27 } }); $(\"#droppable2, #droppable-background , #droppable2-innerer\").droppable({ greedy: true, activeClass: 'ui-state-hover', hoverClass: 'ui-state-active', drop: function(event, ui) { $(this).addClass('ui-state-highlight').find('> p').html('Dropped!'); $('#droppable-background').css(\"background-image\",\"url($dirname)\"); } }); }); </script>"; }

    Read the article

  • How to develop an english .com domain value rating algorithm?

    - by Tom
    I've been thinking about an algorithm that should rougly be able to guess the value of an english .com domain in most cases. For this to work I want to perform tests that consider the strengths and weaknesses of an english .com domain. A simple point based system is what I had in mind, where each domain property can be given a certain weight to factor it's importance in. I had these properties in mind: domain character length Eg. initially 20 points are added. If the domain has 4 or less characters, no points are substracted. For each extra character, one or more points are substracted on an exponential basis (the more characters, the higher the penalty). domain characters Eg. initially 20 points are added. If the domain is only alphabetic, no points are substracted. For each non-alhabetic character, X points are substracted (exponential increase again). domain name words Scans through a big offline english database, including non-formal speech, eg. words like "tweet" should be recognized. Question 1 : where can I get a modern list of english words for use in such application? Are these lists available for free? Are there lists like these with non-formal words? The more words are found per character, the more points are added. So, a domain with a lot of characters will still not get a lot of points. words hype-level I believe this is a tricky one, but this should be the cause to differentiate perfect but boring domains from perfect and interesting domains. For example, the following domain is probably not that valueable: www.peanutgalaxy.com The algorithm should identify that peanuts and galaxies are not very popular topics on the web. This is just an example. On the other side, a domain like www.shopdeals.com should ring a bell to the hype test, as shops and deals are quite popular on the web. My initial thought would be to see how often these keywords are references to on the web, preferably with some database. Question 2: is this logic flawed, or does this hype level test have merit? Question 3: are such "hype databases" available? Or is there anything else that could work offline? The problem with eg. a query to google is that it requires a lot of requests due to the many domains to be tested. domain name spelling mistakes Domains like "freemoneyz.com" etc. are generally (notice I am making a lot of assumptions in this post but that's necessary I believe) not valueable due to the spelling mistakes. Question 4: are there any offline APIs available to check for spelling mistakes, preferably in javascript or some database that I can use interact with myself. Or should a word list help here as well? use of consonants, vowels etc. A domain that is easy to pronounce (eg. Google) is usually much more valueable than one that is not (eg. Gkyld). Question 5: how does one test for such pronuncability? Do you check for consonants, vowels, etc.? What does a valueable domain have? Has there been any work in this field, where should I look? That is what I came up with, which leads me to my final two questions. Question 6: can you think of any more english .com domain strengths or weaknesses? Which? How would you implement these? Question 7: do you believe this idea has any merit or all, or am I too naive? Anything I should know, read or hear about? Suggestions/comments? Thanks!
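    A toy version of the point system described above makes the weighting concrete; every number, the word list, and the "hype" scores are placeholders rather than recommendations, and Questions 1-7 are left open.

      # Score a .com domain on length, characters, dictionary words and a
      # crude hype level; all weights here are arbitrary.
      WORDS = {"shop", "deals", "peanut", "galaxy", "free", "money"}
      HYPE = {"shop": 9, "deals": 8, "peanut": 2, "galaxy": 3}

      def rate(domain):
          name = domain.lower().removesuffix(".com")
          score = max(0, 20 - 2 ** max(0, len(name) - 4))       # length
          score += 20 if name.isalpha() else 5                  # characters
          found = [w for w in WORDS if w in name]
          score += 5 * len(found)                               # known words
          score += sum(HYPE.get(w, 0) for w in found)           # hype level
          return score

      for d in ("shopdeals.com", "peanutgalaxy.com", "gkyld.com"):
          print(d, rate(d))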

    Read the article

  • PHP preg_match: a pattern which satisfies all MySQL field names (including 'table.field' formations)

    - by gsquare567
    I need a pattern which satisfies MySQL field names, but also with the option of having a table name before it. Examples: mytable.myfield myfield my4732894__7289FiEld Here's what I tried: $pattern = "/^[a-zA-Z0-9_]*?[\.[a-zA-Z0-9_]]?$/"; This worked for what I needed before, which was just the field name: $pattern = "/^[a-zA-Z0-9_]*$/"; Any ideas why my addition isn't working? Maybe I'm making up regex, so I'll explain what I added... The first '?' is to say that it isn't greedy, i.e. it will stop if the next part, namely "[.[a-zA-Z0-9_]]?", is satisfied. Now, that second part is just the same as the first except it is optional (hence the '?' at the end) and it starts with a period (hence the '[.' and ']' wrapping my old clause). And obviously, the "^" and "$" represent the beginning and end of the string, so... any ideas? (Also, I'm a tad confused as to why I need to put in those "/"s at the beginning/end anyway, so if you could tell me why they're required, that'd be awesome.) Thanks a lot! (And thanks for reading this all if you actually did... it's quite a ramble.)
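    The reason the edited pattern misbehaves is that character classes do not nest: in "[\.[a-zA-Z0-9_]]?" the inner "[" is just a literal inside the class, the class ends at the first "]", and the trailing "]?" becomes an optional literal bracket rather than "a dot followed by more name characters". Grouping expresses the intent; a Python sketch of the corrected pattern is below (in PHP the same pattern would sit between "/" delimiters, which the preg_* functions require around every pattern).

      import re

      # One identifier, optionally prefixed by "table.".
      pattern = r"[A-Za-z0-9_]+(?:\.[A-Za-z0-9_]+)?"

      for name in ("mytable.myfield", "myfield", "my4732894__7289FiEld",
                   "bad.name.extra", ".leadingdot"):
          print(name, "->", bool(re.fullmatch(pattern, name)))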

    Read the article

  • Regex to remove all but file name from links

    - by Moasely
    Hi, I am trying to write a regexp that removes file paths from links and images. href="path/path/file" to href="file" href="/file" to href="file" src="/path/file" to src="file" and so on... I thought that I had it working, but it messes up if there are two paths in the string it is working on. I think my expression is too greedy. It finds the very last file in the entire string. This is my code that shows the expression messing up on the test input: <script type="text/javascript" src="/javascripts/jquery.js"></script> <script type="text/javascript"> $(document).ready(function(){ var s = '<a href="one/keepthis"><img src="/one/two/keep.this"></a>'; var t = s.replace(/(src|href)=("|').*\/(.*)\2/gi,"$1=$2$3$2"); alert(t); }); </script> It gives the output: <a href="keep.this"></a> The correct output should be: <a href="keepthis"><img src="keep.this"></a> Thanks for any tips!
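    For comparison, here is the same substitution in Python with the greedy parts confined to a negated character class, so a path can never leak across the closing quote of its own attribute; the JavaScript fix would likewise replace both greedy ".*" runs in the original replace() call with [^"']*.

      import re

      s = '<a href="one/keepthis"><img src="/one/two/keep.this"></a>'
      # Everything up to the last "/" inside each quoted value is dropped.
      t = re.sub(r'''(src|href)=(["'])[^"']*/([^"']*)\2''', r"\1=\2\3\2", s,
                 flags=re.IGNORECASE)
      print(t)   # -> <a href="keepthis"><img src="keep.this"></a>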

    Read the article

  • Change columns order in bootstrap 3

    - by TNK
    I am trying to change bootstrap columns order in desktop and mobile screen. <div class="container"> <div class="row"> <section class="content-secondary col-sm-4"> <h4>Don't Miss!</h4> <p>Suspendisse et arcu felis, ac gravida turpis. Suspendisse potenti. Ut porta rhoncus ligula, sed fringilla felis feugiat eget. In non purus quis elit iaculis tincidunt. Donec at ultrices est.</p> <p><a class="btn btn-primary pull-right" href="#">Read more <span class="icon fa fa-arrow-circle-right"></span></a></p> <h4>Check it out</h4> <p>Suspendisse et arcu felis, ac gravida turpis. Suspendisse potenti. Ut porta rhoncus ligula, sed fringilla felis feugiat eget. In non purus quis elit iaculis tincidunt. Donec at ultrices est.</p> <p><a class="btn btn-primary pull-right" href="#">Read more <span class="icon fa fa-arrow-circle-right"></span></a></p> <h4>Finally</h4> <p>Suspendisse et arcu felis, ac gravida turpis. Suspendisse potenti. Ut porta rhoncus ligula, sed fringilla felis feugiat eget. In non purus quis elit iaculis tincidunt. Donec at ultrices est.</p> <p><a class="btn btn-primary pull-right" href="#">Read more <span class="icon fa fa-arrow-circle-right"></span></a></p> </section> <section class="content-primary col-sm-8"> <div class="main-content" ........... ........... </div> </section> </div><!-- /.row --> </div><!-- /.container --> View Port = SM columns order should be : |content- secondary | content-primary | View Port < SM columns order should be : | content-primary | |content- secondary | I tried it something like this using 'pulland 'push clases. <div class="row"> <section class="content-secondary col-sm-4 col-sm-push-4"> Content ...... </section> <section class="content-primary col-sm-8 col-sm-pull-8"> Content ...... </section> </div> But this is not working for me. Hope someone will help me out. Thank you.

    Read the article

  • Processing Email in Outlook

    - by Daniel Moth
    A. Why
Goal 1 = Help others: Have at most a 24-hour response turnaround to internal (from colleague) emails, typically achieving same-day response. Goal 2 = Help projects: Not to implicitly pass/miss an opportunity to have impact on electronic discussions around any project on the radar. Not achieving goals 1 & 2 = Colleagues stop relying on you, drop you off conversations, don't see you as a contributing resource or someone that cares, and you are perceived as someone with no peripheral vision. Note this is perfect if all you are doing is cruising at your job, trying to fly under the radar, with no ambitions of having impact beyond your absolute minimum 'day job'.
B. DON'T: Leave unread email lurking around
Don't: Receive or process all incoming emails in a single folder ('inbox' or 'unread mail'). This is actually possible if you receive a small number of emails (e.g. new to the job, not working at a company like Microsoft). Even so, with (your future) success at any level (company, community) comes large incoming email, so learn to deal with it. With large volumes, it is best to let the system help you by doing some categorization and filtering on your behalf (instead of trying to do that in your head as you process the single folder). See the later section on how to achieve this.
Don't: Leave emails as 'unread' (or worse: read them, then mark them as unread). Often done by individuals who think they possess super powers ("I can mentally cache and distinguish between the emails I chose not to read, the ones that are actually new, and the ones I decided to revisit in the future; the fact that they all show up the same (bold = unread) does not confuse me"). Interactions with these super-powered individuals typically end up with them saying stuff like "I must have missed that email you are talking about (from 2 weeks ago)" or "I am a bit behind, so I haven't read your email, can you remind me". TIP: The only place where you are "allowed" unread email is in your Deleted Items folder.
Don't: Interpret a read email as an email that has been processed. Doing that means you will always end up with fake unread email (that you have actually read, but haven't dealt with completely, so you then marked it as unread) lurking between actual unread email. Another side effect is reading the email and making a 'mental' note to action it, then leaving the email as read, so the only thing left to remind you to carry out the action is… you. You are not superhuman; you will forget. This is a key distinction. Reading (or even scanning) a new email means you now know what needs to be done with it in order for it to be truly considered processed. Truly processing an email is to, for example, write an email of your own (e.g. to reply or forward), or take a non-email related action (e.g. create a calendar entry, do something on some website), or read it carefully to gain some knowledge (e.g. it had a spec as an attachment), or keep it around as reference, etc. 'Reading' means that you know what to do, not that you have done it. An email that is read is an email that is triaged, not an email that is resolved. Sometimes the thing that needs to be done in response to an email is something you can (and want to) do immediately after reading it. That is fine: you read the email and you processed it (typically when it takes no longer than X minutes, where X is your personal tolerance – mine is roughly 2 minutes). 
Other times, you decide that you don't want to spend X minutes at that moment, so after reading the email you need a quick system for "marking" the email as to be processed later (and you still leave it as 'read' in Outlook). See the later section for how.
C. DO: Use Outlook rules and have multiple folders where incoming email is automatically moved to
Outlook email rules are very powerful and easy to configure. Use them to automatically file email into folders. Here are mine (note that if a rule catches an email message then no further rules get processed):
"personal" Email is either personal or business related. Almost all personal email goes to my gmail account. The personal emails that end up on my work email account go to a dedicated folder – that is achieved via a rule that looks at the email's 'From' field. For those that slip through, I use the new Outlook 2010 "Conversation To Folder" quick step to let the slippage only occur once per conversation, and then update my rules.
"External" and "ViaBlog" The remaining external emails either come from my blog (rule on the subject line) or are unsolicited (rule on the domain name not being microsoft) and they are filed accordingly.
"invites" I may do a separate blog post on calendar management, but suffice to say it should be kept up to date. All invite requests end up in this folder, so that even if mail gets out of control, the calendar can stay under control (only 1 folder to check). I.e. so I can let the organizer know why I won't be attending their meeting (or that I will be). Note: This folder is the only one that shows the total number of items in it, instead of the total unread.
"Inbox" The only email that ends up here is email sent TO me and me only. Note that this is also the only email that shows up above the systray icon in the notification toast – all other emails cannot interrupt.
"ToMe++" Email where I am on the TO line, but there are other recipients as well (on the TO or CC line).
"CC" Email where I am on the CC line. I need to read these, but nobody is expecting a response or action from me so they are not as urgent (and if they are and follow up with me, they'll receive a link to this).
"@ XYZ" Emails to aliases that are about projects that I directly work on (and I wasn't on the TO or CC line, of course). Test: these projects are in my commitments that I get measured on at the end of the year.
"Z Mass" and subfolders under it per distribution list (DL) Emails to aliases that are about topics that I am interested in, but not ones that I formally own/contribute to. Test: if I unsubscribed from these aliases, nobody could rightfully complain.
"Admin" folder, which resides under the "Z Mass" folder Emails to aliases that I was added to, typically by an admin, e.g. broad emails to the floor/group/org/building/division/company that I am a member of.
"BCC" folder, which resides under "Z Mass" Emails where I was not on the TO or the CC line explicitly and the alias it was sent to is not one I explicitly subscribed to (or I have been added to the BCC line, which I briefly touched on in another post).
When there are only a few quick minutes to catch up on email, read as much as possible from these folders, in this order: Invites, Inbox, ToMe++. Only when these folders are all read (remember that doesn't mean that each email in them has been fully dealt with) can we move on to the @XYZ and then the CC folders. Only when those are read can we go on to the remaining folders. 
Note that the typical flow in the "Z Mass" subfolders is to scan subject lines and use the new Ctrl+Delete Outlook 2010 feature to ignore conversations.
D. DO: Use Outlook Search folders in combination with categories
As you process each folder, when you open a new email (i.e. click on it and read it in the preview pane) the email becomes read and stays read, and you have to decide whether it can take 2 minutes to deal with for good, right now, or it will take longer than 2 minutes, so it needs to be postponed with a clear next step, which is one of:
ToReply – there may be intermediate action steps, but ultimately someone else needs to receive email about this.
Action – no email is required, but I need to do something.
ReadLater – no email is required from the quick scan, but this is too long to fully read now, so it needs to be read later.
WaitingFor – the email is informing of an intermediate status and 'promising' a future email update. Need to track.
SomedayMaybe – interesting but not important, non-urgent, non-time-bound information. I may want to spend part of one of my weekends reading it.
For all these 'next steps' use Outlook categories (right-click on the email and assign a category, or use a shortcut key). Note that I also use the category 'WaitingFor' for email that I send where I am expecting a response and need to track it. Create a new search folder for each category (I dragged the search folders into my favorites at the top left of Outlook, above my inboxes). So after the activity of reading/triaging email in the normal folders (where the email arrived) is done, the result is a bunch of emails appearing in the search folders (configure them to show the total items, not the total unread items). To actually process email (that takes more than 2 minutes to deal with) process the search folders, starting with ToReply and Action.
E. DO: Get into a Routine
Now you have a system in place, get into a routine of using it. Here is how I personally use mine, but this part I keep tweaking: Spend short bursts of time (between meetings, during boring but mandatory meetings and, in general, 2-4 times a day) aiming to have no unread emails (and in the process deal with some emails that take less than 2 minutes). Spend around 30 minutes at the end of each day processing the most urgent items in the search folders. Spend as long as it takes each Friday (or even the weekend) ensuring there is no unnecessary email baggage carried forward to the following week.
F. Other resources
Official Outlook help on: Create custom actions rules, Manage e-mail messages with rules, creating a search folder. Video on ignoring conversations (Ctrl+Del). Official blog post on Quick Steps and in particular the Move Conversation to folder. If you've read "Getting Things Done" it is very obvious that my approach to email management is driven by GTD. A very similar approach was described previously by ScottHa (also influenced by GTD), worth reading here. He also described how he sets up 2 Outlook rules ('invites' and 'external') which I also use – worth reading that too. Comments about this post welcome at the original blog.

    Read the article

  • Mapping UrlEncoded POST Values in ASP.NET Web API

    - by Rick Strahl
    If there's one thing that's a bit unexpected in ASP.NET Web API, it's the limited support for mapping url encoded POST data values to simple parameters of ApiController methods. When I first looked at this I thought I was doing something wrong, because it seems mighty odd that you can bind query string values to parameters by name, but can't bind POST values to parameters in the same way. To demonstrate here's a simple example. If you have a Web API method like this:[HttpGet] public HttpResponseMessage Authenticate(string username, string password) { …} and then hit with a URL like this: http://localhost:88/samples/authenticate?Username=ricks&Password=sekrit it works just fine. The query string values are mapped to the username and password parameters of our API method. But if you now change the method to work with [HttpPost] instead like this:[HttpPost] public HttpResponseMessage Authenticate(string username, string password) { …} and hit it with a POST HTTP Request like this: POST http://localhost:88/samples/authenticate HTTP/1.1 Host: localhost:88 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Content-type: application/x-www-form-urlencoded Content-Length: 30 Username=ricks&Password=sekrit you'll find that while the request works, it doesn't actually receive the two string parameters. The username and password parameters are null and so the method is definitely going to fail. When I mentioned this over Twitter a few days ago I got a lot of responses back of why I'd want to do this in the first place - after all HTML Form submissions are the domain of MVC and not WebAPI which is a valid point. However, the more common use case is using POST Variables with AJAX calls. The following is quite common for passing simple values:$.post(url,{ Username: "Rick", Password: "sekrit" },function(result) {…}); but alas that doesn't work. How ASP.NET Web API handles Content Bodies Web API supports parsing content data in a variety of ways, but it does not deal with multiple posted content values. In effect you can only post a single content value to a Web API Action method. That one parameter can be very complex and you can bind it in a variety of ways, but ultimately you're tied to a single POST content value in your parameter definition. While it's possible to support multiple parameters on a POST/PUT operation, only one parameter can be mapped to the actual content - the rest have to be mapped to route values or the query string. Web API treats the whole request body as one big chunk of data that is sent to a Media Type Formatter that's responsible for de-serializing the content into whatever value the method requires. The restriction comes from async nature of Web API where the request data is read only once inside of the formatter that retrieves and deserializes it. Because it's read once, checking for content (like individual POST variables) first is not possible. However, Web API does provide a couple of ways to access the form POST data: Model Binding - object property mapping to bind POST values FormDataCollection - collection of POST keys/values ModelBinding POST Values - Binding POST data to Object Properties The recommended way to handle POST values in Web API is to use Model Binding, which maps individual urlencoded POST values to properties of a model object provided as the parameter. 
Model binding requires a single object as input to be bound to the POST data, with each POST key that matches a property name (including nested properties like Address.Street) being mapped and updated including automatic type conversion of simple types. This is a very nice feature - and a familiar one from MVC - that makes it very easy to have model objects mapped directly from inbound data. The obvious drawback with Model Binding is that you need a model for it to work: You have to provide a strongly typed object that can receive the data and this object has to map the inbound data. To rewrite the example above to use ModelBinding I have to create a class maps the properties that I need as parameters:public class LoginData { public string Username { get; set; } public string Password { get; set; } } and then accept the data like this in the API method:[HttpPost] public HttpResponseMessage Authenticate(LoginData login) { string username = login.Username; string password = login.Password; … } This works fine mapping the POST values to the properties of the login object. As a side benefit of this method definition, the method now also allows posting of JSON or XML to the same endpoint. If I change my request to send JSON like this: POST http://localhost:88/samples/authenticate HTTP/1.1 Host: localhost:88 Accept: application/jsonContent-type: application/json Content-Length: 40 {"Username":"ricks","Password":"sekrit"} it works as well and transparently, courtesy of the nice Content Negotiation features of Web API. There's nothing wrong with using Model binding and in fact it's a common practice to use (view) model object for inputs coming back from the client and mapping them into these models. But it can be  kind of a hassle if you have AJAX applications with a ton of backend hits, especially if many methods are very atomic and focused and don't effectively require a model or view. Not always do you have to pass structured data, but sometimes there are just a couple of simple response values that need to be sent back. If all you need is to pass a couple operational parameters, creating a view model object just for parameter purposes seems like overkill. Maybe you can use the query string instead (if that makes sense), but if you can't then you can often end up with a plethora of 'message objects' that serve no further  purpose than to make Model Binding work. Note that you can accept multiple parameters with ModelBinding so the following would still work:[HttpPost] public HttpResponseMessage Authenticate(LoginData login, string loginDomain) but only the object will be bound to POST data. As long as loginDomain comes from the querystring or route data this will work. Collecting POST values with FormDataCollection Another more dynamic approach to handle POST values is to collect POST data into a FormDataCollection. FormDataCollection is a very basic key/value collection (like FormCollection in MVC and Request.Form in ASP.NET in general) and then read the values out individually by querying each. [HttpPost] public HttpResponseMessage Authenticate(FormDataCollection form) { var username = form.Get("Username"); var password = form.Get("Password"); …} The downside to this approach is that it's not strongly typed, you have to handle type conversions on non-string parameters, and it gets a bit more complicated to test such as setup as you have to seed a FormDataCollection with data. On the other hand it's flexible and easy to use and especially with string parameters is easy to deal with. 
It's also dynamic, so if the client sends you a variety of combinations of values on which you make operating decisions, this is much easier to work with than a strongly typed object that would have to account for all possible values up front. The downside is that the code looks old school and isn't as self-documenting as a parameter list or object parameter would be. Nevertheless it's totally functional and a viable choice for collecting POST values.
What about [FromBody]?
Web API also has a [FromBody] attribute that can be assigned to parameters. If you have multiple parameters on a Web API method signature you can use [FromBody] to specify which one will be parsed from the POST content. Unfortunately it's not terribly useful as it only returns content in raw format and requires a totally non-standard format ("=content") to specify your content. For more info on how FromBody works and several related issues with how POST data is mapped, you can check out Mike Stall's post: How WebAPI does Parameter Binding. Not really sure where the Web API team thought [FromBody] would really be a good fit other than as a down and dirty way to send a full string buffer.
Extending Web API to make multiple POST Vars work? Don't think so
Clearly there's no native support for multiple POST variables being mapped to parameters, which is a bit of a bummer. I know in my own work on one project my customer actually found this to be a real sticking point in their AJAX backend work, and we ended up not using Web API and using MVC JSON features instead. That's kind of sad because Web API is supposed to be the proper solution for AJAX backends. With all of ASP.NET Web API's extensibility you'd think there would be some way to build this functionality on our own, but after spending a bit of time digging and asking some of the experts from the team and the Web API community I didn't hear anything that even suggests that this is possible. From what I could find I'd say it's not possible, primarily because Web API's routing engine does not account for the POST variable mapping. This means [HttpPost] methods with urlencoded POST buffers are not mapped to the parameters of the endpoint, and so the routes would never even trigger a request that could be intercepted. Once the routing doesn't work there's not much that can be done. If somebody has an idea how this could be accomplished I would love to hear about it.
Do we really need multi-value POST mapping?
I think that POST value mapping is a feature that one would expect any API tool to have. If you look at common APIs out there like Flickr and Google Maps etc. they all work with POST data. POST data is very prominent, much more so than JSON inputs, and so supporting as many options as possible would seem to be crucial. All that aside, Web API does provide very nice features with Model Binding that allow you to capture many POST variables easily enough, and logistically this will let you build whatever you need with POST data of all shapes as long as you map objects. But having to have an object for every operation that receives a data input is going to take its toll in heavy AJAX applications, with a lot of types created that do nothing more than act as parameter containers. I also think that POST variable mapping is an expected behavior, and Web API's non-support will likely result in many questions like this one: How do I bind a simple POST value in ASP.NET WebAPI RC? with no clear answer. 
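
(To put the model-binding workaround in one place, here is a compact sketch reusing the illustrative LoginData class and /samples/authenticate route shown earlier; it is a recap of the article's own approach, with a dummy credential check added purely for illustration.)

    // Client side, as shown earlier:
    //   $.post("samples/authenticate", { Username: "Rick", Password: "sekrit" }, function(result) { ... });
    using System.Net;
    using System.Net.Http;
    using System.Web.Http;

    public class LoginData
    {
        public string Username { get; set; }
        public string Password { get; set; }
    }

    public class SamplesController : ApiController
    {
        [HttpPost]
        public HttpResponseMessage Authenticate(LoginData login)
        {
            // Username/Password arrive populated from the urlencoded POST body
            // (or from JSON/XML, courtesy of content negotiation).
            bool ok = login.Username == "ricks" && login.Password == "sekrit"; // dummy check
            return Request.CreateResponse(ok ? HttpStatusCode.OK : HttpStatusCode.Unauthorized);
        }
    }
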
I hope for V.next of WebAPI Microsoft will consider this a feature that's worth adding.
Related Articles: Passing multiple POST parameters to Web API Controller Methods; Mike Stall's post: How Web API does Parameter Binding; Where does ASP.NET Web API Fit?
© Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web Api.

    Read the article

  • SQL Server and Hyper-V Dynamic Memory Part 2

    - by SQLOS Team
    Part 1 of this series was an introduction and overview of Hyper-V Dynamic Memory. This part looks at SQL Server memory management and how the SQL engine responds to changing OS memory conditions.   Part 2: SQL Server Memory Management As with any Windows process, sqlserver.exe has a virtual address space (VAS) of 4GB on 32-bit and 8TB in 64-bit editions. Pages in its VAS are mapped to pages in physical memory when the memory is committed and referenced for the first time. The collection of VAS pages that have been recently referenced is known as the Working Set. How and when SQL Server allocates virtual memory and grows its working set depends on the memory model it uses. SQL Server supports three basic memory models:   1. Conventional Memory Model   The Conventional model is the default SQL Server memory model and has the following properties: - Dynamic - can grow or shrink its working set in response to load and external (operating system) memory conditions. - OS uses 4K pages – (not to be confused with SQL Server “pages” which are 8K regions of committed memory).- Pageable - Can be paged out to disk by the operating system.   2. Locked Page Model The locked page memory model is set when SQL Server is started with "Lock Pages in Memory" privilege*. It has the following characteristics: - Dynamic - can grow or shrink its working set in the same way as the Conventional model.- OS uses 4K pages - Non-Pageable – When memory is committed it is locked in memory, meaning that it will remain backed by physical memory and will not be paged out by the operating system. A common misconception is to interpret "locked" as non-dynamic. A SQL Server instance using the locked page memory model will grow and shrink (allocate memory and release memory) in response to changing workload and OS memory conditions in the same way as it does with the conventional model.   This is an important consideration when we look at Hyper-V Dynamic Memory – “locked” memory works perfectly well with “dynamic” memory.   * Note in “Denali” (Standard Edition and above), and in SQL 2008 R2 64-bit (Enterprise and above editions) the Lock Pages in Memory privilege is all that is required to set this model. In 2008 R2 64-Bit standard edition it also requires trace flag 845 to be set, in 2008 R2 32-bit editions it requires sp_configure 'awe enabled' 1.   3. Large Page Model The Large page model is set using trace flag 834 and potentially offers a small performance boost for systems that are configured with large pages. It is characterized by: - Static - memory is allocated at startup and does not change. - OS uses large (>2MB) pages - Non-Pageable The large page model is supported with Hyper-V Dynamic Memory (and Hyper-V also supports large pages), but you get no benefit from using Dynamic Memory with this model since SQL Server memory does not grow or shrink. The rest of this article will focus on the locked and conventional SQL Server memory models.   When does SQL Server grow? For “dynamic” configurations (Conventional and Locked memory models), the sqlservr.exe process grows – allocates and commits memory from the OS – in response to a workload. As much memory is allocated as is required to optimally run the query and buffer data for future queries, subject to limitations imposed by:   - SQL Server max server memory setting. If this configuration option is set, the buffer pool is not allowed to grow to more than this value. 
In SQL Server 2008 this value represents single page allocations, and in "Denali" it represents any size page allocations and also managed CLR procedure allocations.
- Memory signals from OS. The operating system sets a signal on memory resource notification objects to indicate whether it has memory available or whether it is low on available memory. If there is only 32MB free for every 4GB of memory a low memory signal is set, which continues until 64MB/4GB is free. If there is 96MB/4GB free the operating system sets a high memory signal. SQL Server only allocates memory when the high memory signal is set.
To summarize, for SQL Server to grow you need three conditions: a workload, a max server memory setting higher than the current allocation, and high memory signals from the OS.
When does SQL Server shrink caches?
SQL Server as a rule does not like to return memory to the OS, but it will shrink its caches in response to memory pressure. Memory pressure can be divided into "internal" and "external".
- External memory pressure occurs when the operating system is running low on memory and low memory signals are set. The SQL Server Resource Monitor checks for low memory signals approximately every 5 seconds and it will attempt to free memory until the signals stop. To free memory SQL Server does the following:
  - Frees unused memory.
  - Notifies Memory Manager Clients to release memory:
    - Caches – free unreferenced cache objects.
    - Buffer pool – release based on oldest access times.
The freed memory is released back to the operating system. This process continues until the low memory resource notifications stop.
- Internal memory pressure occurs when the size of different caches and allocations increases but the SQL Server process needs to keep its total memory within a target value. For example if max server memory is set and certain caches are growing large, it will cause SQL to free memory for re-use internally, but not to release memory back to the OS. If you lower the value of max server memory you will generate internal memory pressure that will cause SQL to release memory back to the OS.
Memory pressure handling has not changed much since SQL 2005 and it was described in detail in a blog post by Slava Oks. Note that SQL Server Express is an exception to the above behavior. Unlike other editions it does not assume it is the most important process running on the system but tries to be more "desktop" friendly. It will empty its working set after a period of inactivity.
How does SQL Server respond to changing OS memory?
In SQL Server 2005 support for Hot-Add memory was introduced. This feature, available in Enterprise and above editions, allows the server to make use of any extra physical memory that was added after SQL Server started. Being able to add physical memory when the system is running is limited to specialized hardware, but with the Hyper-V Dynamic Memory feature, when new memory is allocated to a guest virtual machine, it looks like hot-add physical memory to the guest. What this means is that thanks to the hot-add memory feature, SQL Server 2005 and higher can dynamically grow if more "physical" memory is granted to a guest VM by Hyper-V Dynamic Memory. SQL Server checks OS memory every second and dynamically adjusts its "target" (based on available OS memory and max server memory) accordingly. In "Denali", Standard Edition will also have sqlserver.exe support for hot-add memory when running virtualized (i.e. 
detecting and acting on Hyper-V Dynamic Memory allocations).
How does a SQL Server workload in a guest VM impact Hyper-V Dynamic Memory scheduling?
When a SQL workload causes the sqlserver.exe process to grow its working set, the Hyper-V memory scheduler will detect memory pressure in the guest VM and add memory to it. SQL Server will then detect the extra memory and grow according to workload demand. In our tests we have seen this feedback process cause a guest VM to grow quickly in response to SQL workload - we are still working on characterizing this ramp-up.
How does SQL Server respond when Hyper-V removes memory from a guest VM through ballooning?
If pressure from other VMs causes Hyper-V Dynamic Memory to take memory away from a VM through ballooning (allocating memory with a virtual device driver and returning it to the host OS), the Windows Memory Manager will page out unlocked portions of memory and signal low resource notification events. When SQL Server detects these events it will shrink memory until the low memory notifications stop (see the cache shrinking description above). This raises another question: can we make SQL Server release memory more readily and hence behave more "dynamically" without compromising performance? In certain circumstances where the application workload is predictable, it may be possible to have a job which varies "max server memory" according to need, lowering it when the engine is inactive and raising it before a period of activity. This would have limited applicability but it is something we're looking into.
What Memory Management changes are there in SQL Server "Denali"?
In SQL Server "Denali" (aka SQL11) the Memory Manager has been re-written to be more efficient. The main changes are summarized in this post. An important change with respect to Hyper-V Dynamic Memory support is that the max server memory setting now includes any size page allocations and managed CLR procedure allocations, so it represents a closer approximation to total sqlserver.exe memory usage. This makes it easier to calculate a value for max server memory, which becomes important when configuring virtual machines to work well with Hyper-V Dynamic Memory Startup and Maximum RAM settings. Another important change is that there is no more AWE or hot-add support for the 32-bit edition. This means that if you're running a 32-bit edition of Denali you're limited to a 4GB address space and will not be able to take advantage of dynamically added OS memory that wasn't present when SQL Server started (though Hyper-V Dynamic Memory is still a supported configuration).
In part 3 we'll develop some best practices for configuring and using SQL Server with Dynamic Memory. Originally posted at http://blogs.msdn.com/b/sqlosteam/
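
(Since max server memory comes up throughout the article, here is a minimal T-SQL sketch of adjusting it at runtime; the 4096 MB figure is purely illustrative and not a recommendation from the post.)

    -- 'max server memory (MB)' is an advanced option, so expose it first
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    -- cap SQL Server's memory target at 4 GB (illustrative value only)
    EXEC sp_configure 'max server memory (MB)', 4096;
    RECONFIGURE;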

    Read the article

< Previous Page | 133 134 135 136 137 138 139 140 141 142 143 144  | Next Page >