Search Results

Search found 13774 results on 551 pages for 'apache modules'.


  • Lucene Fuzzy Match on Phrase instead of Single Word

    - by Koobz
    I'm trying to do a fuzzy match on the phrase "Grand Prarie" (deliberately misspelled) using Apache Lucene. Part of my issue is that the ~ operator only does fuzzy matches on single-word terms and behaves as a proximity match for phrases. Is there a way to do a fuzzy match on a phrase with Lucene?
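
    One possible direction, sketched here as an assumption rather than a confirmed answer: build the phrase out of per-term fuzzy queries and require the resulting spans to be adjacent. A minimal sketch, assuming Lucene 3.1+ (where SpanMultiTermQueryWrapper is available); the field name "city" is hypothetical:

      import org.apache.lucene.index.Term;
      import org.apache.lucene.search.FuzzyQuery;
      import org.apache.lucene.search.spans.SpanMultiTermQueryWrapper;
      import org.apache.lucene.search.spans.SpanNearQuery;
      import org.apache.lucene.search.spans.SpanQuery;

      public class FuzzyPhraseSketch {
          public static SpanNearQuery fuzzyPhrase(String field, String... words) {
              SpanQuery[] clauses = new SpanQuery[words.length];
              for (int i = 0; i < words.length; i++) {
                  // each word matches fuzzily on its own...
                  clauses[i] = new SpanMultiTermQueryWrapper<FuzzyQuery>(
                          new FuzzyQuery(new Term(field, words[i])));
              }
              // ...and the spans must occur next to each other, in order (slop 0)
              return new SpanNearQuery(clauses, 0, true);
          }

          public static void main(String[] args) {
              SpanNearQuery q = fuzzyPhrase("city", "grand", "prarie");
              System.out.println(q);
          }
      }

    A BooleanQuery of FuzzyQuery clauses would also match both words fuzzily, but only the span version preserves the adjacency that makes it a phrase.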

    Read the article

  • Maven webapp with maven-eclipse-plugin doesn't generate <dependent-module>

    - by codevourer
    I use the eclipse:eclipse goal to generate an Eclipse project environment. The deployment works fine. The goal creates the var classpath entries for all needed dependencies. With m2eclipse there was the Maven Container, which defined an export folder (WEB-INF/lib in my case). But I don't want to rely on m2eclipse, so I don't use it anymore. The classpath entries generated by the eclipse:eclipse goal don't have such an export folder. While booting the servlet container with WTP, it publishes all resources and classes except the libraries to the context. What's missing to publish the needed libs, or isn't that possible without m2eclipse integration?

    Environment:

      Eclipse 3.5 JEE Galileo
      Apache Maven 2.2.1 (r801777; 2009-08-06 21:16:01+0200)
      Java version: 1.6.0_14
      m2eclipse

    The maven-eclipse-plugin configuration:

      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-eclipse-plugin</artifactId>
        <version>2.8</version>
        <configuration>
          <projectNameTemplate>someproject-[artifactId]</projectNameTemplate>
          <useProjectReferences>false</useProjectReferences>
          <downloadSources>false</downloadSources>
          <downloadJavadocs>false</downloadJavadocs>
          <wtpmanifest>true</wtpmanifest>
          <wtpversion>2.0</wtpversion>
          <wtpapplicationxml>true</wtpapplicationxml>
          <wtpContextName>someproject-[artifactId]</wtpContextName>
          <additionalProjectFacets>
            <jst.web>2.3</jst.web>
          </additionalProjectFacets>
        </configuration>
      </plugin>

    The generated files: after executing the eclipse:eclipse goal, the dependent-module is not listed in my generated .settings/org.eclipse.wst.common.component, so on server boot I miss the dependencies. This is what I get:

      <?xml version="1.0" encoding="UTF-8"?>
      <project-modules id="moduleCoreId" project-version="1.5.0">
        <wb-module deploy-name="someproject-core">
          <wb-resource deploy-path="/" source-path="src/main/java"/>
          <wb-resource deploy-path="/" source-path="src/main/webapp"/>
          <wb-resource deploy-path="/" source-path="src/main/resources"/>
        </wb-module>
      </project-modules>

    Update for upcoming readers: the problem here was a deviant packaging type. If you use maven-eclipse-plugin, please validate that you use <packaging>war</packaging> (or ear). The problems described above stem from having two build lifecycles in one Maven POM.

    Read the article

  • SAN newbie: What kind of fiber cables do I need to connect to the front-end ports on my EMC CX300?

    - by red888
    I have an old CX300 I'm messing with and I can't determine what kind of HBA/fiber cables I need to connect a server to the front-end ports on the DPE2. The hardware reference doesn't give me any details, and being new to FC SAN, all of the options are pretty overwhelming. I don't know if the ports are SC or LC, or if they require single-mode or multimode fiber. Googling hasn't turned up anything for me either. All I know is that it supports 2Gb Fibre Channel. Here is a pic of one of the modules; you can clearly see the front-end (FE) ports.

    Read the article

  • SOAP Delphi client ends with a timeout on a 1MB call

    - by Cédric Girard
    Hi, we are developing a SOAP web service (Apache/PHP). Everything runs well for small calls, but with a 1MB SOAP call (the HTTPS request size is 1MB) our Delphi SOAP client stops with a timeout on every PC but one, and our PHP clients run well with default_socket_timeout=300 but stop with an "Error Fetching http headers" with default_socket_timeout=60. How can we change the timeout for Delphi? In fact, this timeout seems to be in a Windows XP network library (wininet.dll, called by soaphttptrans.pas). Thanks, Cédric

    Read the article

  • Oracle JDBC connection exception in Solaris but not Windows?

    - by lupefiasco
    I have some Java code that connects to an Oracle database using DriverManager.getConnection(). It works just fine on my Windows XP machine. However, when running the same code on a Solaris machine, I get the following exception. Both machines can reach the database machine on the network. I have included the Oracle trace logs.

      Mar 23, 2010 12:12:33 PM org.apache.commons.configuration.ConfigurationUtils locate
      FINE: ConfigurationUtils.locate(): base is /users/theUser/ADCompare, name is props.txt
      Mar 23, 2010 12:12:33 PM org.apache.commons.configuration.ConfigurationUtils locate
      FINE: Loading configuration from the path /users/theUser/ADCompare/props.txt
      Mar 23, 2010 12:12:33 PM oracle.jdbc.driver.OracleDriver connect
      FINE: OracleDriver.connect(url=jdbc:oracle:thin:@//theServer:1521/theService, info)
      Mar 23, 2010 12:12:33 PM oracle.jdbc.driver.OracleDriver connect
      FINER: OracleDriver.connect() walletLocation:(null)
      Mar 23, 2010 12:12:33 PM oracle.jdbc.driver.OracleDriver parseUrl
      FINER: OracleDriver.parseUrl(url=jdbc:oracle:thin:@//theServer:1521/theService)
      Mar 23, 2010 12:12:33 PM oracle.jdbc.driver.OracleDriver parseUrl
      FINER: sub_sub_index=12, end=46, next_colon_index=16, user=17, slash=18, at_sign=17
      Mar 23, 2010 12:12:33 PM oracle.jdbc.driver.OracleDriver parseUrl
      FINER: OracleDriver.parseUrl(url):return
      Mar 23, 2010 12:12:33 PM oracle.jdbc.driver.OracleDriver connect
      FINER: user=theUser, password=******, database=//theServer:1521/theService, protocol=thin, prefetch=null, batch=null, accumulate batch result =true, remarks=null, synonyms=null
      Mar 23, 2010 12:12:33 PM oracle.jdbc.driver.PhysicalConnection <init>
      FINE: PhysicalConnection.PhysicalConnection(ur="jdbc:oracle:thin:@//theServer:1521/theService", us="theUser", p="******", db="//theServer:1521/theService", info)
      Mar 23, 2010 12:12:33 PM oracle.jdbc.driver.PhysicalConnection <init>
      FINEST: PhysicalConnection.PhysicalConnection() : connectionProperties={user=theUser, password=******, protocol=thin}
      Mar 23, 2010 12:12:33 PM oracle.jdbc.driver.PhysicalConnection initialize
      FINE: PhysicalConnection.initialize(ur="jdbc:oracle:thin:@//theServer:1521/theService", us="theUser", access)
      Mar 23, 2010 12:12:33 PM oracle.jdbc.driver.PhysicalConnection initialize
      FINE: PhysicalConnection.initialize(ur, us):return
      Mar 23, 2010 12:12:33 PM oracle.jdbc.driver.PhysicalConnection needLine
      FINE: PhysicalConnection.needLine()--no return

      java.lang.ArrayIndexOutOfBoundsException: 31
          at oracle.net.nl.NVTokens.parseTokens(Unknown Source)
          at oracle.net.nl.NVFactory.createNVPair(Unknown Source)
          at oracle.net.nl.NLParamParser.addNLPListElement(Unknown Source)
          at oracle.net.nl.NLParamParser.initializeNlpa(Unknown Source)
          at oracle.net.nl.NLParamParser.<init>(Unknown Source)
          at oracle.net.resolver.TNSNamesNamingAdapter.loadFile(Unknown Source)
          at oracle.net.resolver.TNSNamesNamingAdapter.checkAndReload(Unknown Source)
          at oracle.net.resolver.TNSNamesNamingAdapter.resolve(Unknown Source)
          at oracle.net.resolver.NameResolver.resolveName(Unknown Source)
          at oracle.net.resolver.AddrResolution.resolveAndExecute(Unknown Source)
          at oracle.net.ns.NSProtocol.establishConnection(Unknown Source)
          at oracle.net.ns.NSProtocol.connect(Unknown Source)
          at oracle.jdbc.driver.T4CConnection.connect(T4CConnection.java:1037)
          at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:282)
          at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:468)
          at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:165)
          at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:35)
          at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:839)
          at java.sql.DriverManager.getConnection(DriverManager.java:582)
          at java.sql.DriverManager.getConnection(DriverManager.java:185)

    The above exception is also thrown if I use OracleDataSource instead of the generic DriverManager.getConnection(). Any ideas on why the behavior is different between the two environments?
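
    An observation, offered as an assumption rather than a confirmed diagnosis: the stack trace shows the thin driver loading a tnsnames.ora file (TNSNamesNamingAdapter.loadFile), which a jdbc:oracle:thin:@//host:port/service URL normally shouldn't need, so a TNS_ADMIN setting on the Solaris box may be pointing the driver at a malformed tnsnames.ora. A minimal sketch to compare the two machines, assuming the documented oracle.net.tns_admin property; the /tmp/tns path and "thePassword" placeholder are hypothetical:

      import java.sql.Connection;
      import java.sql.DriverManager;

      public class TnsCheck {
          public static void main(String[] args) throws Exception {
              // Where might the driver be picking up a tnsnames.ora on this machine?
              System.out.println("TNS_ADMIN env: " + System.getenv("TNS_ADMIN"));
              System.out.println("oracle.net.tns_admin: "
                      + System.getProperty("oracle.net.tns_admin"));

              // Point the driver at a directory with a known-good (or no) tnsnames.ora
              // before connecting; /tmp/tns is a hypothetical path.
              System.setProperty("oracle.net.tns_admin", "/tmp/tns");

              Class.forName("oracle.jdbc.OracleDriver");
              Connection c = DriverManager.getConnection(
                      "jdbc:oracle:thin:@//theServer:1521/theService",
                      "theUser", "thePassword");
              System.out.println("Connected: " + !c.isClosed());
              c.close();
          }
      }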

    Read the article

  • How can I run Devel::Cover under mod_perl2?

    - by codeholic
    Unfortunately, Devel::Cover does not yet work with threads. It doesn't work with prefork either. Being use'd in startup.pl, Devel::Cover issues "Not a CODE reference. END failed--call queue aborted." perl 5.8.9, Apache 2.2.13. My OS is FreeBSD, if that matters. The same problem is reported for Win32.

    Read the article

  • No compatible network card

    - by sbintcliffe
    Motherboard: Asus K8N-E Deluxe with onboard NIC (nVidia nForce). Secondary NIC: I've tried using a standard NIC (Device Manager displays this as D-Link DFE-538TX 10/100, but under Manufacturer in the General tab of its properties Windows states Realtek Semiconductor Corp.). I downloaded ESXi 4.0.0 build 208167 and burned it to disc. I booted from it; the .TGZ modules load from the yellow-and-grey screen, the progress bar reaches about 60%, and a second later the screen changes and I have the following message on screen: "No compatible network adapter found. Please consult the HCG." I've checked the HCG and found that my motherboard is listed. I also get the same message with the secondary NIC. Any ideas please?

    Read the article

  • Configure Notepad++ DBGP plugin and XDebug for PHP

    - by Wasim
    I followed these steps:

    1. Download x-debug*.dll to D:\Program Files\webserver\php\ext\php_xdebug.dll
    2. Modify php.ini and insert the following:

       zend_extension_ts="D:\Program Files\webserver\php\ext\php_xdebug.dll"
       xdebug.remote_enable=1
       xdebug.remote_handler=dbgp
       xdebug.remote_mode=req
       xdebug.idekey=default
       xdebug.remote_autostart=1

    3. Restart Apache; XDebug is now successfully installed.

    The DBGP plugin is installed successfully and configured with 127.0.0.1, but XDebug is still not connecting/working with Notepad++.

    Read the article

  • AFP/SMB transfers cap at 2 megabytes/sec on wireless N

    - by CQM
    I wanted to transfer files between two Mac computers. The network is wireless N and both computers have wireless N modules in them. The problem is that when I transfer files between them via file sharing (AFP), the network speed caps at 2 megabytes/sec. Just downloading files from the internet I can get faster speeds, so this isn't a constriction of my Wi-Fi bandwidth; it appears to be a constriction of the protocol being used. My Wi-Fi N link is set to 130 Mbit/s, so I should see real-world transfer speeds around 12-16 megabytes/sec. I ran this command on both computers:

      sudo sysctl -w net.inet.tcp.delayed_ack=0

    which is supposed to lower TCP overhead, but this did not affect it. How can I get the speed I am expecting?

    Read the article

  • JPA/EJB3 Relationship

    - by sdoca
    I have been reading about JPA and EJB3 and would like to confirm that my understanding of their relationship is correct. Here's what I think I know...

    JPA is a specification that has been implemented by a number of vendors, including:

    - JBoss/Hibernate
    - Oracle/TopLink Essentials (now EclipseLink)
    - Apache/OpenJPA

    EJB3 is a specification that is implemented in application servers, including:

    - Glassfish
    - JBoss

    Is this correct?
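
    By way of illustration (a sketch, not from the original question): because JPA is the specification, application code compiles only against javax.persistence, and the vendor is chosen in configuration. A minimal sketch, assuming a hypothetical persistence unit named "demo":

      import javax.persistence.EntityManager;
      import javax.persistence.EntityManagerFactory;
      import javax.persistence.Persistence;

      public class JpaVendorNeutral {
          public static void main(String[] args) {
              // "demo" is a hypothetical persistence unit defined in META-INF/persistence.xml.
              // Which implementation answers this call (Hibernate, TopLink Essentials/
              // EclipseLink, OpenJPA, ...) is chosen by the <provider> element there and by
              // what is on the classpath, not by any line of this code.
              EntityManagerFactory emf = Persistence.createEntityManagerFactory("demo");
              EntityManager em = emf.createEntityManager();
              try {
                  em.getTransaction().begin();
                  // work with @Entity classes here via the vendor-neutral javax.persistence API
                  em.getTransaction().commit();
              } finally {
                  em.close();
                  emf.close();
              }
          }
      }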

    Read the article

  • Opscode Chef nginx compile from source issue reports successful run but does nothing

    - by v_abhi_v
    I am trying to install nginx from source with Opscode Chef, and it's a bit weird: the run completes without complaint, but nginx does not get installed either. This is what my role attributes look like:

      "nginx": {
        "default_site_enabled": false,
        "version": "1.2.6",
        "init_style": "init",
        "install_method": "source",
        "configure_flags": [
          "--without-http_access_module",
          "--without-http_auth_basic_module",
          "--without-http_autoindex_module",
          "--without-http_browser_module",
          "--without-http_charset_module",
          "--without-http_fastcgi_module",
          "--without-http_memcached_module",
          "--without-http_referer_module",
          "--without-http_scgi_module",
          "--without-http_split_clients_module"
        ],
        "log_dir": "/var/log/nginx",
        "binary": "/opt/nginx/sbin/nginx",
        "source": {
          "prefix": "/opt/nginx/dist",
          "modules": [
            "http_ssl_module",
            "http_gzip_static_module"
          ]
        }
      },

    The Chef log shows:

      [2012-12-19T02:37:44+00:00] INFO: Processing bash[compile_nginx_source] action run (nginx::source line 82)
      [2012-12-19T02:37:45+00:00] INFO: bash[compile_nginx_source] ran successfully

    I am clueless on what's going on :(

    Read the article

  • Can we use JCR API over MySQL?

    - by n_mysql
    Apache Jackrabbit (or the JCR API) helps you separate the data store from the data management system. This would mean that every data store provider has to implement the JCR API for his own data store. The question: is JCR implemented for MySQL? Can we use the JCR API over MySQL? I want to truly abstract away where I store my content, so that tomorrow, if I don't want to use a relational DB, I can swap it out for the file system with ease.
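
    For illustration, a minimal JCR usage sketch against Jackrabbit's TransientRepository; the admin/admin credentials are Jackrabbit's tutorial defaults, and the node names are hypothetical. The point is that the storage backend (filesystem, or a database such as MySQL via a database persistence manager) is selected in repository.xml, while this code stays the same:

      import javax.jcr.Node;
      import javax.jcr.Repository;
      import javax.jcr.Session;
      import javax.jcr.SimpleCredentials;
      import org.apache.jackrabbit.core.TransientRepository;

      public class JcrSketch {
          public static void main(String[] args) throws Exception {
              // repository.xml decides where content actually lives; this code is
              // identical for a filesystem-backed or database-backed store.
              Repository repository = new TransientRepository();
              Session session = repository.login(
                      new SimpleCredentials("admin", "admin".toCharArray()));
              try {
                  Node root = session.getRootNode();
                  Node book = root.addNode("books").addNode("dracula");
                  book.setProperty("genre", "Horror");
                  session.save();
                  System.out.println("Stored: " + book.getPath());
              } finally {
                  session.logout();
              }
          }
      }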

    Read the article

  • Simple CGI server

    - by Egor Makarov
    How can I run my Perl CGI script without Apache? This is for testing purposes, so some kind of single-process server that handles only one request at a time would be enough for me.
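
    Not from the original question, but since any minimal HTTP server that can exec the script will do for testing, here is a rough single-threaded sketch using the JDK's built-in com.sun.net.httpserver. The script path is hypothetical, and the CGI handling is deliberately crude (it ships the script's header block as part of the body instead of parsing it):

      import com.sun.net.httpserver.HttpExchange;
      import com.sun.net.httpserver.HttpHandler;
      import com.sun.net.httpserver.HttpServer;
      import java.io.ByteArrayOutputStream;
      import java.io.IOException;
      import java.io.InputStream;
      import java.io.OutputStream;
      import java.net.InetSocketAddress;

      public class OneShotCgiServer {
          public static void main(String[] args) throws Exception {
              HttpServer server = HttpServer.create(new InetSocketAddress(8000), 0);
              server.createContext("/", new HttpHandler() {
                  public void handle(HttpExchange ex) throws IOException {
                      try {
                          // run the script per CGI convention: method and query via env vars
                          ProcessBuilder pb = new ProcessBuilder("perl", "/path/to/script.cgi");
                          pb.environment().put("REQUEST_METHOD", ex.getRequestMethod());
                          String q = ex.getRequestURI().getRawQuery();
                          pb.environment().put("QUERY_STRING", q == null ? "" : q);
                          Process p = pb.start();
                          ByteArrayOutputStream buf = new ByteArrayOutputStream();
                          InputStream in = p.getInputStream();
                          byte[] chunk = new byte[4096];
                          for (int n; (n = in.read(chunk)) != -1; ) buf.write(chunk, 0, n);
                          p.waitFor();
                          // crude: a real server would parse the CGI header block first
                          byte[] out = buf.toByteArray();
                          ex.sendResponseHeaders(200, out.length);
                          OutputStream os = ex.getResponseBody();
                          os.write(out);
                          os.close();
                      } catch (InterruptedException e) {
                          throw new RuntimeException(e);
                      }
                  }
              });
              server.start(); // no executor set: requests are handled one at a time
          }
      }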

    Read the article

  • 32bit domU on 64bit dom0

    - by ModuleC
    I'm using a 64-bit CentOS dom0 with:

      2.6.18-164.15.1.el5xen #1 SMP Wed Mar 17 12:04:23 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux

    Recently I migrated some 32-bit CentOS domUs onto this node. According to the specs, 32-bit domUs should work with a 64-bit dom0. The domUs are paravirtualized, and everything works except the iptables limit match. Running csf on a domU produces the following messages in dmesg:

      ip_tables: (C) 2000-2006 Netfilter Core Team
      Netfilter messages via NETLINK v0.30.
      ip_conntrack version 2.4 (2080 buckets, 16640 max) - 304 bytes per conntrack
      ip_tables: limit match: invalid size 40 != 28
      ip_tables: limit match: invalid size 40 != 28
      ip_tables: limit match: invalid size 40 != 28
      ip_tables: limit match: invalid size 40 != 28
      ip_tables: limit match: invalid size 40 != 28
      ip_tables: limit match: invalid size 40 != 28
      ip_tables: limit match: invalid size 40 != 28

    Doing lsmod on both dom0 and domU lists all required iptables modules as loaded. I've found this: http://www.mail-archive.com/[email protected]/msg189433.html but didn't find anything about this issue on CentOS. Am I missing something?

    Read the article

  • udev: waiting for uevents to be processed on my Gentoo

    - by stan31337
    During startup the machine sits on this for about 30 seconds:

      udev: waiting for uevents to be processed

    Then I get a quick message which says something like:

      devfs: timeout (50 seconds)

    I can't see the whole thing because after that the system starts up very fast, including Xfce. What logs and configs do I need to provide for further investigation?

      $ uname -a
      Linux genta 3.6.6-gentoo #1 SMP Sun Nov 11 11:02:23 NOVT 2012 i686 Genuine Intel(R) CPU T2300 @ 1.66GHz GenuineIntel GNU/Linux

    Thank you!

    UPD: rc-status

      genta / # rc-status sysinit
      Runlevel: sysinit
       dmesg         [ started ]
       udev          [ started ]
       devfs         [ started ]
      genta / # rc-status boot
      Runlevel: boot
       hwclock       [ started ]
       modules       [ started ]
       fsck          [ started ]
       root          [ started ]
       mtab          [ started ]
       localmount    [ started ]
       sysctl        [ started ]
       bootmisc      [ started ]
       hostname      [ started ]
       termencoding  [ started ]
       keymaps       [ started ]
       net.lo        [ started ]
       swap          [ started ]
       urandom       [ started ]
       procfs        [ started ]

    Read the article

  • Differences in memory consumption between two identical D7 sites?

    - by aendrew
    I'm running Drupal on a news site that has a lot of different View blocks on the front page (~5 total, all cached). In trying to reduce the memory footprint of the site, I've checked out source from SVN to a local development install to try to convert some of those blocks into more optimized code. Here's the weird thing: the Devel module lists memory consumption at 50 MB on the production site (running Nginx, PHP 5.2.17, XCache and Zend Optimizer) but only 14 MB on my development site (running Apache 2, PHP 5.2.13 and XCache). These are nearly identical versions of the same site; frankly, the production site should use even less memory, as I've disabled some of the modules running on the dev site. Any idea why this might be the case?

    Read the article

  • Async ignored on AJAX requests on Nginx server

    - by eComEvo
    Despite sending an async request to the server over AJAX, the server will not respond until the previous, unrelated request has finished. The following code is only broken in this way on Nginx; it runs perfectly on Apache. This call starts a background process and waits for it to complete so it can display the final result:

      $.ajax({
          type: 'GET',
          async: true,
          url: $(this).data('route'),
          data: $('input[name=data]').val(),
          dataType: 'json',
          success: function (data) { /* do stuff */ },
          error: function (data) { /* handle errors */ }
      });

    The code below is called after the above; on Apache it requires 100ms to execute and repeats itself, showing progress for data being written in the background:

      checkStatusInterval = setInterval(function () {
          $.ajax({
              type: 'GET',
              async: false,
              cache: false,
              url: '/process-status?process=' + currentElement.attr('id'),
              dataType: 'json',
              success: function (data) { /* update progress bar and status message */ }
          });
      }, 1000);

    Unfortunately, when this script is run under Nginx, the progress polling never finishes even a single request until the first AJAX request that sent the data is done. If I change async to true in the above, one executes every interval, but none of them complete until that very first AJAX request finishes.

    Here is the main nginx conf file:

      #user nobody;
      worker_processes 1;

      #error_log logs/error.log;
      #error_log logs/error.log notice;
      #error_log logs/error.log info;

      #pid logs/nginx.pid;

      events {
          worker_connections 1024;
      }

      http {
          include mime.types;
          default_type application/octet-stream;
          server_names_hash_bucket_size 64;

          # configure temporary paths
          # nginx is started with param -p, setting nginx path to serverpack installdir
          fastcgi_temp_path temp/fastcgi;
          uwsgi_temp_path temp/uwsgi;
          scgi_temp_path temp/scgi;
          client_body_temp_path temp/client-body 1 2;
          proxy_temp_path temp/proxy;

          log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
          #access_log logs/access.log main;

          # Sendfile copies data between one FD and other from within the kernel.
          # More efficient than read() + write(), since that requires transferring data to and from the user space.
          sendfile on;

          # Tcp_nopush causes nginx to attempt to send its HTTP response head in one packet,
          # instead of using partial frames. This is useful for prepending headers before calling sendfile,
          # or for throughput optimization.
          tcp_nopush on;

          # don't buffer data-sends (disable Nagle algorithm). Good for sending frequent small bursts of data in real time.
          tcp_nodelay on;

          types_hash_max_size 2048;

          # Timeout for keep-alive connections. Server will close connections after this time.
          keepalive_timeout 90;

          # Number of requests a client can make over the keep-alive connection. This is set high for testing.
          keepalive_requests 100000;

          # allow the server to close the connection after a client stops responding. Frees up socket-associated memory.
          reset_timedout_connection on;

          # send the client a "request timed out" if the body is not loaded by this time. Default 60.
          client_header_timeout 20;
          client_body_timeout 60;

          # If the client stops reading data, free up the stale client connection after this much time. Default 60.
          send_timeout 60;

          # Size Limits
          client_body_buffer_size 64k;
          client_header_buffer_size 4k;
          client_max_body_size 8M;

          # FastCGI
          fastcgi_connect_timeout 60;
          fastcgi_send_timeout 120;
          fastcgi_read_timeout 300; # default: 60 secs; when step debugging with XDEBUG, you need to increase this value
          fastcgi_buffer_size 64k;
          fastcgi_buffers 4 64k;
          fastcgi_busy_buffers_size 128k;
          fastcgi_temp_file_write_size 128k;

          # Caches information about open FDs, freqently accessed files.
          open_file_cache max=200000 inactive=20s;
          open_file_cache_valid 30s;
          open_file_cache_min_uses 2;
          open_file_cache_errors on;

          # Turn on gzip output compression to save bandwidth.
          # http://wiki.nginx.org/HttpGzipModule
          gzip on;
          gzip_disable "MSIE [1-6]\.(?!.*SV1)";
          gzip_http_version 1.1;
          gzip_vary on;
          gzip_proxied any;
          #gzip_proxied expired no-cache no-store private auth;
          gzip_comp_level 6;
          gzip_buffers 16 8k;
          gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript;

          # show all files and folders
          autoindex on;

          server {
              # access from localhost only
              listen 127.0.0.1:80;
              server_name localhost;
              root www;

              # the following default "catch-all" configuration, allows access to the server from outside.
              # please ensure your firewall allows access to tcp/port 80. check your "skype" config.
              # listen 80;
              # server_name _;

              log_not_found off;
              charset utf-8;
              access_log logs/access.log main;

              # handle files in the root path /www
              location / {
                  index index.php index.html index.htm;
              }

              #error_page 404 /404.html;

              # redirect server error pages to the static page /50x.html
              #
              error_page 500 502 503 504 /50x.html;
              location = /50x.html {
                  root www;
              }

              # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100
              #
              location ~ \.php$ {
                  try_files $uri =404;
                  fastcgi_pass 127.0.0.1:9100;
                  fastcgi_index index.php;
                  fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                  include fastcgi_params;
              }

              # add expire headers
              location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt|js|css)$ {
                  expires 30d;
              }

              # deny access to .htaccess files (if Apache's document root concurs with nginx's one)
              # deny access to git & svn repositories
              location ~ /(\.ht|\.git|\.svn) {
                  deny all;
              }
          }

          # include config files of "enabled" domains
          include domains-enabled/*.conf;
      }

    Here is the enabled domain conf file:

      access_log off;
      access_log C:/server/www/test.dev/logs/access.log;
      error_log C:/server/www/test.dev/logs/error.log;

      # HTTP Server
      server {
          listen 127.0.0.1:80;
          server_name test.dev;
          root C:/server/www/test.dev/public;
          index index.php;
          rewrite_log on;
          default_type application/octet-stream;
          #include /etc/nginx/mime.types;

          # Include common configurations.
          include domains-common/location.conf;
      }

      # HTTPS server
      server {
          listen 443 ssl;
          server_name test.dev;
          root C:/server/www/test.dev/public;
          index index.php;
          rewrite_log on;
          default_type application/octet-stream;
          #include /etc/nginx/mime.types;

          # Include common configurations.
          include domains-common/location.conf;
          include domains-common/ssl.conf;
      }

    Contents of ssl.conf:

      # OpenSSL for HTTPS connections.
      ssl on;
      ssl_certificate C:/server/bin/openssl/certs/cert.pem;
      ssl_certificate_key C:/server/bin/openssl/certs/cert.key;
      ssl_session_timeout 5m;
      ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
      ssl_ciphers HIGH:!aNULL:!MD5;
      ssl_prefer_server_ciphers on;

      # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100
      location ~ \.php$ {
          try_files $uri =404;
          fastcgi_param HTTPS on;
          fastcgi_pass 127.0.0.1:9100;
          fastcgi_index index.php;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          include fastcgi_params;
      }

    Contents of location.conf:

      # Remove trailing slash to please Laravel routing system.
      if (!-d $request_filename) {
          rewrite ^/(.+)/$ /$1 permanent;
      }

      location / {
          try_files $uri $uri/ /index.php?$query_string;
      }

      # We don't need .ht files with nginx.
      location ~ /(\.ht|\.git|\.svn) {
          deny all;
      }

      # Added cache headers for images.
      location ~* \.(png|jpg|jpeg|gif)$ {
          expires 30d;
          log_not_found off;
      }

      # Only 3 hours on CSS/JS to allow me to roll out fixes during early weeks.
      location ~* \.(js|css)$ {
          expires 3h;
          log_not_found off;
      }

      # Add expire headers.
      location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt)$ {
          expires 30d;
      }

      # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100
      location ~ \.php$ {
          try_files $uri /index.php =404;
          fastcgi_index index.php;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          include fastcgi_params;
          fastcgi_pass 127.0.0.1:9100;
      }

    Any ideas where this is going wrong?

    Read the article

  • How to view only Mail on shared email account?

    - by TomatoSandwich
    I have a support account which I should have access to in my Outlook 2010; however, since changing from 2003, the situation has become unusual. I used to be able to just view shared mail items in a separate account, without having the Calendar, Tasks and Reminders popping up in my face at all hours of the day. Now, if I add the account, I get upwards of 60 task reminders that are not from my personal account, and they clog up my Reminders window and task list. Is there a way to show only my own Tasks and Reminders in Outlook 2010? I've tried the Advanced Filter option on the Tasks list, but if I set it to show only things from or to myself, either everything disappears or nothing disappears. I tried looking in the email account settings for something like 'Read email only', or something to do with only showing some of the modules of Outlook, but it was useless.

    Read the article

  • HTTP request, strange socket behavior

    - by hoodoos
    I experience strange behavior when doing HTTP requests through sockets. Here is the request:

      POST https://test.com:443/service/XMLSelect HTTP/1.1
      Content-Length: 10926
      Host: test.com
      User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 1.0.3705)
      Authorization: Basic XXX
      SOAPAction: http://test.com/SubmitXml

    Then comes the body of my request with the given content length. After that I receive something like:

      HTTP/1.1 200 OK
      Server: Apache-Coyote/1.1
      Content-Type: text/xml;charset=utf-8
      Transfer-Encoding: chunked
      Date: Tue, 30 Mar 2010 06:13:52 GMT

    So everything seems to be fine here. I read all contents from the network stream and successfully receive the response. But the socket I'm doing polling on switches its modes like this:

      write ( I write headers and request here )
      read  ( after the headers are sent I begin to receive the response )
      write ( STRANGE BEHAVIOR HERE. WHY? Here I send nothing really )
      read  ( here it switches back to read again )

    The last two steps can repeat several times. So I want to ask: what leads to the socket's mode change? In this case it's not a big problem, but when I use gzip compression in my request (no idea how it's related) and ask the server to send a gzipped response to me like this:

      POST https://test.com:443/service/XMLSelect HTTP/1.1
      Content-Length: 1076
      Accept-Encoding: gzip
      Content-Encoding: gzip
      Host: test.com
      User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 1.0.3705)
      Authorization: Basic XXX
      SOAPAction: http://test.com/SubmitXml

    I receive a response like this:

      HTTP/1.1 200 OK
      Server: Apache-Coyote/1.1
      Content-Encoding: gzip
      Content-Type: text/xml;charset=utf-8
      Transfer-Encoding: chunked
      Date: Tue, 30 Mar 2010 07:26:33 GMT

      2000
      ?

    I receive a chunk size and a GZIP header; it's all okay. And here's what is happening with my poor little socket meanwhile:

      write ( I write headers and request here )
      read  ( after the headers are sent I begin to receive the response )
      write ( STRANGE BEHAVIOR HERE. And it finally sits here forever, waiting for me to send something! But per HTTP I don't have to send anything more! )

    What can it be related to? What does it want me to send? Is it the remote web server's problem, or am I missing something?

    PS All actual service references and logins/passwords are replaced with fake ones :)
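
    One aside that may explain the mode changes, stated as general select()/poll() semantics rather than anything from the original post: a socket polls as "writable" whenever its local send buffer has free space, not because the peer expects more data. A minimal Java NIO sketch of that semantics (the peer address is hypothetical):

      import java.net.InetSocketAddress;
      import java.nio.channels.SelectionKey;
      import java.nio.channels.Selector;
      import java.nio.channels.SocketChannel;

      public class WritableDemo {
          public static void main(String[] args) throws Exception {
              Selector selector = Selector.open();
              SocketChannel ch = SocketChannel.open();
              ch.configureBlocking(false);
              ch.connect(new InetSocketAddress("test.com", 443)); // hypothetical peer
              ch.register(selector, SelectionKey.OP_CONNECT);

              while (selector.select() > 0) {
                  for (SelectionKey key : selector.selectedKeys()) {
                      if (key.isConnectable() && ch.finishConnect()) {
                          // once connected, ask to be told about both directions
                          key.interestOps(SelectionKey.OP_READ | SelectionKey.OP_WRITE);
                      }
                      if (key.isValid() && key.isWritable()) {
                          // fires whenever the local send buffer has room -- i.e. almost
                          // constantly -- even though we have nothing left to send; real
                          // code clears OP_WRITE interest until it actually has data
                          System.out.println("writable");
                      }
                      if (key.isValid() && key.isReadable()) {
                          System.out.println("readable (peer sent data)");
                      }
                  }
                  selector.selectedKeys().clear();
              }
          }
      }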

    Read the article

  • Jetty 7 NIO issue?

    - by Continuation
    I saw this test showing Jetty 7's performance dropping drastically (by about 95%) when switched from blocking I/O to NIO: http://wiki.apache.org/HttpComponents/HttpCoreBenchmark Is this a known issue? Have you experienced it first hand? Should I avoid NIO on Jetty?
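
    For reference, a hedged sketch of how the two I/O models are selected in Jetty 7: the choice is made per connector, so a benchmark can swap blocking I/O for NIO by swapping the connector class. The ports are arbitrary:

      import org.eclipse.jetty.server.Connector;
      import org.eclipse.jetty.server.Server;
      import org.eclipse.jetty.server.bio.SocketConnector;
      import org.eclipse.jetty.server.nio.SelectChannelConnector;

      public class ConnectorChoice {
          public static void main(String[] args) throws Exception {
              Server server = new Server();

              // NIO: a selector thread multiplexes many connections.
              SelectChannelConnector nio = new SelectChannelConnector();
              nio.setPort(8080);

              // Blocking I/O: the classic thread-per-connection model.
              SocketConnector bio = new SocketConnector();
              bio.setPort(8081);

              // Benchmark one connector at a time; both are shown for comparison.
              server.setConnectors(new Connector[] { nio, bio });
              server.start();
              server.join();
          }
      }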

    Read the article

  • PHP 5.2.11: Unable to load dynamic library complaints

    - by OldTroll
    I installed PHP 5.2.11 and, upon enabling the php_mssql.dll module (and the obligatory restart of the services), started receiving warnings about not being able to load the dynamic library. The operating environment is IIS ISAPI under Windows 2003 R2. Here is the list of things that were checked:

    - The file exists in the \PHP\ext directory and has the same timestamp as the other distribution-delivered libraries.
    - Permissions to the file are correct.
    - Other extensions are enabled and functioning with identical permissions.
    - ntwdblib.dll has been copied from a SQL Server 2005 installation (per php.net's requirements for the module).
    - Other modules before and after the php_mssql entry are working.

    I've also stripped down and removed all non-essential references, without any detectable improvement. I turned up logging and enabled all the error logs in php.ini; nothing is generated by the logs. Searching Google has turned up a slew of closed/unsolved bug reports on php.net and some miscellaneous near-matches, none of which have really addressed the problem or led to any inspirations.

    Read the article

  • Beans, Lists and JSP

    - by Luigi 1982
    Hi all, I have a little question... On my JSP page I have a List of beans. I want to extract a sublist of beans with a specific property (e.g. all Horror books). Can Apache BeanUtils help me? Thanks in advance...
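
    One possible direction, sketched as an assumption: Commons BeanUtils provides BeanPropertyValueEqualsPredicate, which pairs with Commons Collections' CollectionUtils.select() to filter beans by a property value. The Book bean below is hypothetical:

      import java.util.ArrayList;
      import java.util.Collection;
      import org.apache.commons.beanutils.BeanPropertyValueEqualsPredicate;
      import org.apache.commons.collections.CollectionUtils;

      public class SublistSketch {
          public static class Book {
              private final String title;
              private final String genre;
              public Book(String title, String genre) { this.title = title; this.genre = genre; }
              public String getTitle() { return title; }
              public String getGenre() { return genre; }
          }

          public static void main(String[] args) {
              Collection<Book> books = new ArrayList<Book>();
              books.add(new Book("Dracula", "Horror"));
              books.add(new Book("Emma", "Romance"));

              // keep only the beans whose getGenre() equals "Horror"
              Collection<?> horror = CollectionUtils.select(books,
                      new BeanPropertyValueEqualsPredicate("genre", "Horror"));
              System.out.println(horror.size() + " horror book(s)");
          }
      }

    The filtered result could then be exposed to the JSP as a request attribute like any other list.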

    Read the article

  • PHP framework question

    - by iconiK
    I'm currently working on a browser-based MMO and have chosen the LAMP stack because of the extremely low cost to start with in production (versus Windows + IIS + ASP.NET/C# + SQL Server, even though I have MSDN Universal). However, I will need a PHP framework for this, as it's no easy task. I am not restricted by anything other than the ability to run on Linux, as I will use a dedicated cloud hosting solution (and a VMware image for development) and can configure it as needed. In no specific order:

    - It has to be easily scalable; this is crucial. If the game becomes a steady success it will eventually outgrow the server beyond what the host provides and would have to be moved to several load-balanced servers. It is crucial that this can be done with minimum effort. I do know this might require following strict conventions, so if you know of any for your suggested framework, please explain what would be needed.
    - It has to provide modules for all the core tasks: authentication, ACL, database access, MVC, and so on. One or two missing modules are fine, as long as they can easily be written and integrated.
    - It should support internationalization. I think there is no excuse for any web framework not to provide means of translating the application and switching between languages without a lot of effort from the programmer.
    - It must have very good community support, and preferably commercial support as well. Yes, I do know QCodo/QCubed is nice, but it is not mature enough for this task.
    - Smooth AJAX support is required. Whether the framework comes with AJAX-capable widgets or has an easy way of adding AJAX is not relevant, as long as AJAX is easily doable. I plan to use jQuery + Dojo, or one of them alone; not exactly sure.
    - Auto-magically doing stuff when it improves readability and relieves a lot of effort would be especially nice, if it is generally reliable and does not interfere with the other requirements. This seems to be the case with CakePHP.

    I have read a lot of comparisons and I know it's a really hot debate. The general answer is "try and see for yourself what suits you". However, I can't say that is easy for this task, and I'm calling for your experience with building applications with similar requirements. So far I'm torn between Zend and CakePHP by the general criteria; however, all well-known frameworks offer the same functionality in some way or another, with different approaches, each with its own advantages and disadvantages.

    Edits: I am kinda new to MVC; however, I am willing to learn it, and I don't care if a framework is easier for those new to MVC. I have lots of time to learn MVC and any other architectures (or whatever they're called) you recommend. I will use Zend as a utility "framework", even though it's just a collection of libraries (some good ones though, as I have been told). Current PHP contenders are: CakePHP, Kohana, Zend alone.

    Read the article

  • Why is jsessionid appended to each URL?

    - by sword101
    Greetings all. I am deploying an app using the Spring Framework on Apache Tomcat. When running the application from Tomcat directly, there's no jsessionid appended to any URL at all, but after mapping the application to the domain and trying to run it, I get a jsessionid appended to each URL in the application. I tried the Spring Security attribute disable-url-rewriting, but it doesn't work: it removes the jsessionid from the URLs, but then the application doesn't work any more; the user cannot log in. So I guess it's another problem. Any ideas why this happens and how to solve it? Thanks.
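
    Background that may help, offered as an assumption rather than a confirmed diagnosis: the container appends ;jsessionid when response.encodeURL() cannot yet tell that the client accepts cookies, which is why it often appears only after moving to a new domain (the old session cookie no longer matches). A common workaround is a filter that turns URL rewriting off entirely; a minimal sketch using the standard Servlet API (note this makes sessions depend on cookies):

      import java.io.IOException;
      import javax.servlet.Filter;
      import javax.servlet.FilterChain;
      import javax.servlet.FilterConfig;
      import javax.servlet.ServletException;
      import javax.servlet.ServletRequest;
      import javax.servlet.ServletResponse;
      import javax.servlet.http.HttpServletResponse;
      import javax.servlet.http.HttpServletResponseWrapper;

      public class NoUrlRewritingFilter implements Filter {
          public void init(FilterConfig config) {}
          public void destroy() {}

          public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                  throws IOException, ServletException {
              HttpServletResponse wrapped = new HttpServletResponseWrapper(
                      (HttpServletResponse) res) {
                  @Override public String encodeURL(String url) { return url; }
                  @Override public String encodeRedirectURL(String url) { return url; }
              };
              // Downstream code that calls encodeURL() now gets the URL back untouched,
              // so no ;jsessionid is ever appended.
              chain.doFilter(req, wrapped);
          }
      }

    The filter would be mapped to /* in web.xml ahead of the application's other filters.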

    Read the article
