Search Results

Search found 5670 results on 227 pages for 'zend router'.


  • Disable error_log. Error_log flooding

    - by user36646
    Hello, i got an webserver running and old version of gambio (xt:commerce fork). The error_log in the dir over the public_html is flooding with errors. About 30mb in 15min. How can I disable this log? I can't fix all the errors. Here are a few examples of the errors: [warn] mod_fcgid: stderr: PHP Notice: Undefined variable: key in /usr/www/users/foo//includes/classes/class.inputfilter.php on line 98 [warn] mod_fcgid: stderr: PHP Notice: Undefined index: in /usr/www/users/foo/templ [warn] mod_fcgid: stderr: in /usr/www/users/foo/templates/gambio/source/inc/xtc_show_category_sectionc.inc.php on line 47 They are all errors of: "mod_fcgid: stderr". I tried to grep "error_log" and "error_report" in the public html dir, but i did not find anything. Here is a part from the phpinfo(): PHP Version 4.4.9 System Linux foobar.com 2.6.26-2-686-bigmem #1 SMP Sat Dec 26 09:26:36 UTC 2009 i686 Build Date Feb 11 2010 13:00:33 Configure Command './configure' '--prefix=/usr/local/php4' '--with-config-file-path=/etc/php4/cgi' '--with-gd' '--with-jpeg-dir' '--with-png-dir' '--with-tiff-dir' '--with-ttf' '--enable-force-cgi-redirect' '--enable-safe-mode' '--with-zlib' '--enable-ftp' '--enable-url-includes' '--enable-gd-native-ttf' '--enable-trans-sid' '--enable-dbase' '--with-db4' '--with-ldap' '--enable-bcmath' '--enable-calendar' '--enable-memory-limit' '--with-mcal=/usr' '--with-bz2' '--with-mod-dav' '--enable-sockets' '--with-kerberos' '--with-imap-ssl' '--enable-gd-imgstrttf' '--with-freetype-dir' '--with-curl' '--with-mysql' '--with-mhash' '--with-gdbm' '--with-pgsql' '--with-gettext' '--with-xml' '--with-mcrypt' '--with-openssl' '--with-dom' '--without-pear' '--enable-exif' '--with-zip' '--enable-wddx' '--disable-cli' '--enable-fastcgi' '--with-imap' '--enable-xslt' '--with-xslt-sablot=/usr/local/lib' '--enable-mbstring' '--with-dom-xslt' '--with-dom-exslt' Server API CGI/FastCGI Virtual Directory Support disabled Configuration File (php.ini) Path /home/httpd/php-ini/foo/php.ini PHP API 20020918 PHP Extension 20020429 Zend Extension 20050606 Debug Build no Zend Memory Manager enabled Thread Safety disabled Registered PHP Streams php, http, ftp, https, ftps, compress.bzip2, compress.zlib **Configuration PHP Core** Directive Local Value Master Value allow_call_time_pass_reference On On allow_url_fopen Off Off always_populate_raw_post_data Off Off arg_separator.input & & arg_separator.output & & asp_tags Off Off auto_append_file no value no value auto_prepend_file no value no value browscap no value no value default_charset no value no value default_mimetype text/html text/html define_syslog_variables Off Off disable_classes no value no value disable_functions no value no value display_errors On On display_startup_errors Off Off doc_root no value no value docref_ext no value no value docref_root no value no value enable_dl On On error_append_string no value no value error_log no value no value error_prepend_string no value no value error_reporting 2039 2039 expose_php On On extension_dir /usr/local/php4/lib/php/extensions/no-debug-non-zts-20020429 /usr/local/php4/lib/php/extensions/no-debug-non-zts-20020429 file_uploads On On gpc_order GPC GPC highlight.bg #FFFFFF #FFFFFF highlight.comment #FF8000 #FF8000 highlight.default #0000BB #0000BB highlight.html #000000 #000000 highlight.keyword #007700 #007700 highlight.string #DD0000 #DD0000 html_errors On On ignore_repeated_errors Off Off ignore_repeated_source Off Off ignore_user_abort Off Off implicit_flush Off Off include_path .:/usr/local/lib/php/ 
.:/usr/local/lib/php/ log_errors Off Off log_errors_max_len 1024 1024 magic_quotes_gpc On On magic_quotes_runtime Off Off magic_quotes_sybase Off Off max_execution_time 120 120 max_input_nesting_level 500 500 max_input_time -1 -1 memory_limit 128000000 128000000 open_basedir /usr/www/users/foo:/usr/home/foo:/tmp:/usr/local/lib/php:/usr/local/rmagic:/usr/www/users/he/_system_ /usr/www/users/foo:/usr/home/foo:/tmp:/usr/local/lib/php:/usr/local/rmagic:/usr/www/users/he/_system_ output_buffering no value no value output_handler no value no value post_max_size 128000000 128000000 precision 14 14 register_argc_argv On On register_globals Off Off report_memleaks On On safe_mode Off Off safe_mode_exec_dir no value no value safe_mode_gid Off Off safe_mode_include_dir no value no value sendmail_from no value no value sendmail_path /usr/sbin/sendmail -t /usr/sbin/sendmail -t serialize_precision 100 100 short_open_tag On On SMTP localhost localhost smtp_port 25 25 sql.safe_mode Off Off track_errors Off Off unserialize_callback_func no value no value upload_max_filesize 128000000 128000000 upload_tmp_dir /usr/foo/foo/.tmp /usr/foo/.tmp user_dir no value no value variables_order EGPCS EGPCS xmlrpc_error_number 0 0 xmlrpc_errors Off Off y2k_compliance Off Off
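
    A hedged workaround (not from the thread; the file name below is just an example): because the shop software may call error_reporting() itself and re-enable notices at runtime, a small file loaded on every request via auto_prepend_file in php.ini can force them back off without touching the application code:

        <?php
        // suppress-notices.php -- hypothetical file, referenced from php.ini as
        //   auto_prepend_file = /usr/www/users/foo/suppress-notices.php
        // Re-assert a quieter error level in case the application raises it.
        error_reporting(E_ALL & ~E_NOTICE);
        ini_set('display_errors', '0');
        ini_set('log_errors', '0');
        ?>

    If the goal is only to stop Apache's error_log from growing, raising LogLevel above warn in that vhost would also hide the mod_fcgid [warn] lines, at the cost of hiding other warnings.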

    Read the article

  • redhat Apache fast-cgi selinux permissions

    - by Alejo JM
    My apache installation is running php as fastcgi, and the virtual hosts are pointing to /home/*/public_html. and the fastcgi are home/*/cgi-bin/php.fcgi the public_html setup with selinux was: /usr/sbin/setsebool -P httpd_enable_homedirs 1 chcon -R -t httpd_sys_content_t /home/someuser/public_html The owner and group are the user, for example the user "someuser": ls -all /home/someuser/cgi-bin/ drwxr-xr-x 2 someuser someuser 4096 Sep 7 13:14 . drwx--x--x 6 someuser someuser 4096 Sep 6 18:17 .. -rwxr-xr-x 1 someuser someuser 308 Sep 7 13:14 php.fcgi ls -all /home/someuser/public_html/ | greep info.php -rw-r--r-- 1 someuser someuser 24 Sep 3 16:24 info.php When is visits the site I get "Forbidden ..." and the log said: [Fri Sep 07 12:02:51 2012] [error] [client x.x.x.x] (13)Permission denied: access to /cgi-bin/php.fcgi/info.php denied My selinux conf is: SELINUX=enforcing SELINUXTYPE=targeted SETLOCALDEFS=0 So I kill Selinux (SELINUX=disabled), reboot the system and everything works !!!!! The problem is Selinux, I don't want disable Selinux. I trying this with no success: setsebool -P httpd_enable_cgi 1 chcon -t httpd_sys_script_exec_t /home/someuser/cgi-bin/php.fcgi chcon -R -t httpd_sys_content_t /home/someuser/cgi-bin Or maybe is better change Selinux SELINUX=enforcing to SELINUX=permissive And disable selinux for httpd ? (I think I better find the correct configuration) Thanks for any suggestion on this matter My environment: Red Hat Enterprise Linux Server release 5.8 (Tikanga) Server version: Apache/2.2.3 PHP 5.1.6 (cli) (built: Jun 22 2012 06:20:25) Copyright (c) 1997-2006 The PHP Group Zend Engine v2.1.0, Copyright (c) 1998-2006 Zend Technologies Some logs: ps -ZC httpd LABEL PID TTY TIME CMD system_u:system_r:httpd_t 2822 ? 00:00:00 httpd system_u:system_r:httpd_t 2823 ? 00:00:00 httpd system_u:system_r:httpd_t 2824 ? 00:00:00 httpd system_u:system_r:httpd_t 2825 ? 00:00:00 httpd system_u:system_r:httpd_t 2826 ? 00:00:00 httpd system_u:system_r:httpd_t 2836 ? 00:00:00 httpd system_u:system_r:httpd_t 2837 ? 00:00:00 httpd system_u:system_r:httpd_t 2838 ? 00:00:00 httpd system_u:system_r:httpd_t 2839 ? 00:00:00 httpd system_u:system_r:httpd_t 2840 ? 00:00:00 httpd getsebool -a | grep httpd allow_httpd_anon_write --> off allow_httpd_bugzilla_script_anon_write --> off allow_httpd_cvs_script_anon_write --> off allow_httpd_mod_auth_pam --> off allow_httpd_nagios_script_anon_write --> off allow_httpd_prewikka_script_anon_write --> off allow_httpd_squid_script_anon_write --> off allow_httpd_sys_script_anon_write --> off httpd_builtin_scripting --> on httpd_can_network_connect --> off httpd_can_network_connect_db --> off httpd_can_network_relay --> off httpd_can_sendmail --> on httpd_disable_trans --> off httpd_enable_cgi --> on httpd_enable_ftp_server --> off httpd_enable_homedirs --> on httpd_execmem --> off httpd_read_user_content --> off httpd_rotatelogs_disable_trans --> off httpd_setrlimit --> off httpd_ssi_exec --> off httpd_suexec_disable_trans --> off httpd_tty_comm --> on httpd_unified --> on httpd_use_cifs --> off httpd_use_nfs --> off
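
    A hedged starting point (assuming the default targeted policy on RHEL 5): the php.fcgi wrapper needs an executable script type rather than the plain content type, and the label is better applied with semanage so a filesystem relabel does not undo it (chcon alone is not persistent). The audit log then shows exactly which access is still being denied:

        # label the fcgi wrapper directory as executable CGI content, persistently
        semanage fcontext -a -t httpd_sys_script_exec_t '/home/someuser/cgi-bin(/.*)?'
        restorecon -Rv /home/someuser/cgi-bin

        # see which AVC denial remains, and what policy change would allow it
        ausearch -m avc -ts recent | audit2allow -w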

    Read the article

  • php-fpm + persistent sockets = 502 bad gateway

    - by leeoniya
    Put on your reading glasses - this will be a long-ish one. First, what I'm doing. I'm building a web-app interface for some particularly slow tcp devices. Opening a socket to them takes 200ms and an fwrite/fread cycle takes another 300ms. To reduce the need for both of these actions on each request, I'm opening a persistent tcp socket which reduces the response time by the aforementioned 200ms. I was hoping PHP-FPM would share the persistent connections between requests from different clients (and indeed it does!), but there are some issues which I havent been able to resolve after 2 days of interneting, reading logs and modifying settings. I have somewhat narrowed it down though. Setup: Ubuntu 13.04 x64 Server (fully updated) on Linode PHP 5.5.0-6~raring+1 (fpm-fcgi) nginx/1.5.2 Relevent config: nginx worker_processes 4; php-fpm/pool.d pm = dynamic pm.max_children = 2 pm.start_servers = 2 pm.min_spare_servers = 2 Let's go from coarse to fine detail of what happens. After a fresh start I have 4x nginx processes and 2x php5-fpm processes waiting to handle requests. Then I send requests every couple seconds to the script. The first take a while to open the socket connection and returns with the data in about 500ms, the second returns data in 300ms (yay it's re-using the socket), the third also succeeds in about 300ms, the fourth request = 502 Bad Gateway, same with the 5th. Sixth request once again returns data, except now it took 500ms again. The process repeats for several cycles after which every 4 requests result in 2x 502 Bad Gateways and 2x 500ms Data responses. If I double all the fpm pool values and have 4x php-fpm processes running, the cycles settles in with 4x successful 500ms responses followed by 4x Bad Gateway errors. If I don't use persistent sockets, this issue goes away but then every request is 500ms. What I suspect is happening is the persistent socket keeps each php-fpm process from idling and ties it up, so the next one gets chosen until none are left and as they error out, maybe they are restarted and become available on the next round-robin loop ut the socket dies with the process. I haven't yet checked the 'slowlog', but the nginx error log shows lots of this: *188 recv() failed (104: Connection reset by peer) while reading response header from upstream, client:... All the suggestions on the internet regarding fixing nginx/php-fpm/502 bad gateway relate to high load or fcgi_pass misconfiguration. This is not the case here. Increasing buffers/sizes, changing timeouts, switching from unix socket to tcp socket for fcgi_pass, upping connection limits on the system....none of this stuff applies here. I've had some other success with setting pm = ondemand rather than dynamic, but as soon as the initial fpm-process gets killed off after idling, the persistent socket is gone for all subsequent php-fpm spawns. For the php script, I'm using stream_socket_client() with a STREAM_CLIENT_PERSISTENT flag. A while/stream_select() loop to detect socket data and fread($sock, 4096) to grab the data. I don't call fclose() obviously. If anyone has some additional questions or advice on how to get a persistent socket without tying up the php-fpm processes beyond the request completion, or maybe some other things to try, I'd appreciate it. 
some useful links: Nginx + php-fpm - recv() error Nginx + php-fpm "504 Gateway Time-out" error with almost zero load (on a test-server) Nginx + PHP-FPM "error 104 Connection reset by peer" causes occasional duplicate posts http://www.linuxquestions.org/questions/programming-9/php-pfsockopen-552084/ http://stackoverflow.com/questions/14268018/concurrent-use-of-a-persistent-php-socket http://devzone.zend.com/303/extension-writing-part-i-introduction-to-php-and-zend/#Heading3 http://stackoverflow.com/questions/242316/how-to-keep-a-php-stream-socket-alive http://php.net/manual/en/install.fpm.configuration.php https://www.google.com/search?q=recv%28%29+failed+%28104:+Connection+reset+by+peer%29+while+reading+response+header+from+upstream+%22502%22&ei=mC1XUrm7F4WQyAHbv4H4AQ&start=10&sa=N&biw=1920&bih=953&dpr=1
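
    For reference, a minimal sketch of the client pattern described above (the device address and the "STATUS" command are placeholders, not from the original post). Each php-fpm worker keeps its own persistent handle, which is why the handle dies with the worker:

        <?php
        $sock = stream_socket_client('tcp://10.0.0.50:4000', $errno, $errstr, 5,
                STREAM_CLIENT_CONNECT | STREAM_CLIENT_PERSISTENT);
        if ($sock === false) {
            die("connect failed: $errstr ($errno)");
        }

        fwrite($sock, "STATUS\n");                 // placeholder device command

        $data = '';
        $read = array($sock); $write = $except = null;
        // wait up to 2s for the slow device, then drain whatever arrives
        while (stream_select($read, $write, $except, 2) > 0) {
            $chunk = fread($sock, 4096);
            if ($chunk === '' || $chunk === false) {
                break;                             // peer closed or error
            }
            $data .= $chunk;
            $read = array($sock);                  // stream_select() rewrites $read
        }
        // no fclose() here -- closing would defeat STREAM_CLIENT_PERSISTENT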

    Read the article

  • phpMyAdmin setup issues

    - by EquinoX
    I am trying to follow the tutorial here to setup the user and pass. It says there that "this section is only applicable if your MySQL server is running with --skip-show-database". First question is, how do I check if MySQl server is running with --skip-show-database? Is there any way I can access phpMyAdmin SQL query window without logging in? Otherwise I'd have to execute this SQL from command line. I am also getting this: Cannot load mcrypt extension. Please check your PHP configuration. I have added mcrypt.so to php.ini and doing the following command proves that I have it. [root@DT html]# rpm -qa | grep mcrypt mcrypt-2.6.8-1.el5 php-mcrypt-5.3.5-1.1.w5 libmcrypt-2.5.8-4.el5.centos [root@DT html]# php -v PHP 5.3.5 (cli) (built: Feb 19 2011 13:10:09) Copyright (c) 1997-2010 The PHP Group Zend Engine v2.3.0, Copyright (c) 1998-2010 Zend Technologies Now when I go to phpinfo() and search for mcrypt it can find it inside the Configure Command row ('--with-mcrypt=shared,/usr'). So, what to do next?. UPDATE: I didn't put extension=mcrypt.so in php.ini as it will complain the following: PHP Warning: Module 'mcrypt' already loaded in Unknown on line 0 Here's my nginx.conf: #user nobody; worker_processes 2; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' # '$status $body_bytes_sent "$http_referer" ' # '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; gzip on; server { listen 80; root /usr/share/nginx/html; server_name localhost; #charset koi8-r; #access_log logs/host.access.log main; location / { #root html; index index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { #root html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { #root /usr/local/nginx/html; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /usr/share/nginx/html$fastcgi_script _name; include fastcgi_params; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one location ~ /\.ht { deny all; } } # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # location / { # root html; # index index.html index.htm; # } #} # HTTPS server # #server { # listen 443; # server_name localhost; # ssl on; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # ssl_session_timeout 5m; # ssl_protocols SSLv2 SSLv3 TLSv1; # ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP; # ssl_prefer_server_ciphers on; # location / { # root html; # index index.html index.htm; # } #} }
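
    To answer the first question with a hedged example: the setting can be checked from the mysql command-line client (no phpMyAdmin login needed), and the same client can run the user-setup SQL from the tutorial:

        mysql -u root -p -e "SHOW VARIABLES LIKE 'skip_show_database'"
        # OFF means the server is NOT running with --skip-show-database,
        # so that section of the tutorial can be skipped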

    Read the article

  • Adding DTrace Probes to PHP Extensions

    - by cj
    The powerful DTrace tracing facility has some PHP-specific probes that can be enabled with --enable-dtrace. DTrace for Linux is being created by Oracle and is currently in tech preview. Currently it doesn't support userspace tracing so, in the meantime, Systemtap can be used to monitor the probes implemented in PHP. This was recently outlined in David Soria Parra's post Probing PHP with Systemtap on Linux. My post shows how DTrace probes can be added to PHP extensions and traced on Linux. I was using Oracle Linux 6.3. Not all Linux kernels are built with Systemtap, since this can impact stability. Check whether your running kernel (or others installed) have Systemtap enabled, and reboot with such a kernel: # grep CONFIG_UTRACE /boot/config-`uname -r` # grep CONFIG_UTRACE /boot/config-* When you install Systemtap itself, the package systemtap-sdt-devel is needed since it provides the sdt.h header file: # yum install systemtap-sdt-devel You can now install and build PHP as shown in David's article. Basically the build is with: $ cd ~/php-src $ ./configure --disable-all --enable-dtrace $ make (For me, running 'make' a second time failed with an error. The workaround is to do 'git checkout Zend/zend_dtrace.d' and then rerun 'make'. See PHP Bug 63704) David's article shows how to trace the probes already implemented in PHP. You can also use Systemtap to trace things like userspace PHP function calls. For example, create test.php: <?php $c = oci_connect('hr', 'welcome', 'localhost/orcl'); $s = oci_parse($c, "select dbms_xmlgen.getxml('select * from dual') xml from dual"); $r = oci_execute($s); $row = oci_fetch_array($s, OCI_NUM); $x = $row[0]->load(); $row[0]->free(); echo $x; ?> The normal output of this file is the XML form of Oracle's DUAL table: $ ./sapi/cli/php ~/test.php <?xml version="1.0"?> <ROWSET> <ROW> <DUMMY>X</DUMMY> </ROW> </ROWSET> To trace the PHP function calls, create the tracing file functrace.stp: probe process("sapi/cli/php").function("zif_*") { printf("Started function %s\n", probefunc()); } probe process("sapi/cli/php").function("zif_*").return { printf("Ended function %s\n", probefunc()); } This makes use of the way PHP userspace functions (not builtins) like oci_connect() map to C functions with a "zif_" prefix. Login as root, and run System tap on the PHP script: # cd ~cjones/php-src # stap -c 'sapi/cli/php ~cjones/test.php' ~cjones/functrace.stp Started function zif_oci_connect Ended function zif_oci_connect Started function zif_oci_parse Ended function zif_oci_parse Started function zif_oci_execute Ended function zif_oci_execute Started function zif_oci_fetch_array Ended function zif_oci_fetch_array Started function zif_oci_lob_load <?xml version="1.0"?> <ROWSET> <ROW> <DUMMY>X</DUMMY> </ROW> </ROWSET> Ended function zif_oci_lob_load Started function zif_oci_free_descriptor Ended function zif_oci_free_descriptor Each call and return is logged. The Systemtap scripting language allows complex scripts to be built. There are many examples on the web. To augment this generic capability and the PHP probes in PHP, other extensions can have probes too. Below are the steps I used to add probes to OCI8: I created a provider file ext/oci8/oci8_dtrace.d, enabling three probes. The first one will accept a parameter that runtime tracing can later display: provider php { probe oci8__connect(char *username); probe oci8__nls_start(); probe oci8__nls_done(); }; I updated ext/oci8/config.m4 with the PHP_INIT_DTRACE macro. The patch is at the end of config.m4. 
The macro takes the provider prototype file, a name of the header file that 'dtrace' will generate, and a list of sources files with probes. When --enable-dtrace is used during PHP configuration, then the outer $PHP_DTRACE check is true and my new probes will be enabled. I've chosen to define an OCI8 specific macro, HAVE_OCI8_DTRACE, which can be used in the OCI8 source code: diff --git a/ext/oci8/config.m4 b/ext/oci8/config.m4 index 34ae76c..f3e583d 100644 --- a/ext/oci8/config.m4 +++ b/ext/oci8/config.m4 @@ -341,4 +341,17 @@ if test "$PHP_OCI8" != "no"; then PHP_SUBST_OLD(OCI8_ORACLE_VERSION) fi + + if test "$PHP_DTRACE" = "yes"; then + AC_CHECK_HEADERS([sys/sdt.h], [ + PHP_INIT_DTRACE([ext/oci8/oci8_dtrace.d], + [ext/oci8/oci8_dtrace_gen.h],[ext/oci8/oci8.c]) + AC_DEFINE(HAVE_OCI8_DTRACE,1, + [Whether to enable DTrace support for OCI8 ]) + ], [ + AC_MSG_ERROR( + [Cannot find sys/sdt.h which is required for DTrace support]) + ]) + fi + fi In ext/oci8/oci8.c, I added the probes at, for this example, semi-arbitrary places: diff --git a/ext/oci8/oci8.c b/ext/oci8/oci8.c index e2241cf..ffa0168 100644 --- a/ext/oci8/oci8.c +++ b/ext/oci8/oci8.c @@ -1811,6 +1811,12 @@ php_oci_connection *php_oci_do_connect_ex(char *username, int username_len, char } } +#ifdef HAVE_OCI8_DTRACE + if (DTRACE_OCI8_CONNECT_ENABLED()) { + DTRACE_OCI8_CONNECT(username); + } +#endif + /* Initialize global handles if they weren't initialized before */ if (OCI_G(env) == NULL) { php_oci_init_global_handles(TSRMLS_C); @@ -1870,11 +1876,22 @@ php_oci_connection *php_oci_do_connect_ex(char *username, int username_len, char size_t rsize = 0; sword result; +#ifdef HAVE_OCI8_DTRACE + if (DTRACE_OCI8_NLS_START_ENABLED()) { + DTRACE_OCI8_NLS_START(); + } +#endif PHP_OCI_CALL_RETURN(result, OCINlsEnvironmentVariableGet, (&charsetid_nls_lang, 0, OCI_NLS_CHARSET_ID, 0, &rsize)); if (result != OCI_SUCCESS) { charsetid_nls_lang = 0; } smart_str_append_unsigned_ex(&hashed_details, charsetid_nls_lang, 0); + +#ifdef HAVE_OCI8_DTRACE + if (DTRACE_OCI8_NLS_DONE_ENABLED()) { + DTRACE_OCI8_NLS_DONE(); + } +#endif } timestamp = time(NULL); The oci_connect(), oci_pconnect() and oci_new_connect() calls all use php_oci_do_connect_ex() internally. The first probe simply records that the PHP application made a connection call. I already showed a way to do this without needing a probe, but adding a specific probe lets me record the username. The other two probes can be used to time how long the globalization initialization takes. The relationships between the oci8_dtrace.d names like oci8__connect, the probe guards like DTRACE_OCI8_CONNECT_ENABLED() and probe names like DTRACE_OCI8_CONNECT() are obvious after seeing the pattern of all three probes. I included the new header that will be automatically created by the dtrace tool when PHP is built. I did this in ext/oci8/php_oci8_int.h: diff --git a/ext/oci8/php_oci8_int.h b/ext/oci8/php_oci8_int.h index b0d6516..c81fc5a 100644 --- a/ext/oci8/php_oci8_int.h +++ b/ext/oci8/php_oci8_int.h @@ -44,6 +44,10 @@ # endif # endif /* osf alpha */ +#ifdef HAVE_OCI8_DTRACE +#include "oci8_dtrace_gen.h" +#endif + #if defined(min) #undef min #endif Now PHP can be rebuilt: $ cd ~/php-src $ rm configure && ./buildconf --force $ ./configure --disable-all --enable-dtrace \ --with-oci8=instantclient,/home/cjones/instantclient $ make If 'make' fails, do the 'git checkout Zend/zend_dtrace.d' trick I mentioned. 
The new probes can be seen by logging in as root and running: # stap -l 'process.provider("php").mark("oci8*")' -c 'sapi/cli/php -i' process("sapi/cli/php").provider("php").mark("oci8__connect") process("sapi/cli/php").provider("php").mark("oci8__nls_done") process("sapi/cli/php").provider("php").mark("oci8__nls_start") To test them out, create a new trace file, oci.stp: global numconnects; global start; global numcharlookups = 0; global tottime = 0; probe process.provider("php").mark("oci8-connect") { printf("Connected as %s\n", user_string($arg1)); numconnects += 1; } probe process.provider("php").mark("oci8-nls_start") { start = gettimeofday_us(); numcharlookups++; } probe process.provider("php").mark("oci8-nls_done") { tottime += gettimeofday_us() - start; } probe end { printf("Connects: %d, Charset lookups: %ld\n", numconnects, numcharlookups); printf("Total NLS charset initialization time: %ld usecs/connect\n", (numcharlookups > 0 ? tottime/numcharlookups : 0)); } This calculates the average time that the NLS character set lookup takes. It also prints out the username of each connection, as an example of using parameters. Login as root and run Systemtap over the PHP script: # cd ~cjones/php-src # stap -c 'sapi/cli/php ~cjones/test.php' ~cjones/oci.stp Connected as cj <?xml version="1.0"?> <ROWSET> <ROW> <DUMMY>X</DUMMY> </ROW> </ROWSET> Connects: 1, Charset lookups: 1 Total NLS charset initialization time: 164 usecs/connect This shows the time penalty of making OCI8 look up the default character set. This time would be zero if a character set had been passed as the fourth argument to oci_connect() in test.php.

    Read the article

  • So No TECH job so far.

    - by Ratman21
    O I found some temp work for the US Census and I have managed to keep the house (so far) but, it looks like I/we are going to have to do a short sale and the temp job will be ending soon.   On top of that it looks like the unemployment fund for me is drying up. I will have about one month left after the Census job is done. I am now down to Appling for work at the KFC.   This is type a work I started with, before I was a tech geek and really I didn’t think I would be doing this kind of work in my later years but, I have a wife and kid. So I got to suck it up and do it.   Oh and here is my new resume…go ahead I know you want to tare it up. I really don’t care any more.   Scott L. Newman 45219 Dutton Way, Callahan, FL32011 H: (904)879-4880 C: (352)356-0945 E: [email protected] Web:  http://beingscottnewman.webs.com/                                                       ______                                                                                 OBJECTIVE To obtain a Network or Technical support position     KEYWORD SUMMARY CompTIA A+, Network+, and Security+ Certified., Network Operation, Technical Support, Client/Vendor Relations, Networking/Administration, Cisco Routers/Switches, Helpdesk, Microsoft Office Suite, Website Design/Dev./Management, Frame Relay, ISDN, Windows NT/98/XP, Visio, Inventory Management, CICS, Programming, COBOL IV, Assembler, RPG   QUALIFICATIONS SUMMARY Twenty years’ experience in computer operations, technical support, and technical writing. Also have two and half years’ experience in internet / intranet operations.   PROFESSIONAL EXPERIENCE October 2009 – Present*   Volunteer Web site and PC technician – Part time       True Faith Christian Fellowship Church – Callahan, FL, Project: Create and maintain web site for Church to give it a worldwide exposure Aug 2008 – September 2009:* Volunteer Church sound and video technician – Part time      Thomas Creek Baptist Church – Callahan, FL   *Note Jobs were for the learning and/or keeping updated on skills, while looking for a tech job and training for new skills.   February 2005 to October 2008: Client Server Dev/Analyst I, Fidelity National Information Services, Jacksonville, FL (FNIS acquired Certegy in 2005 and out of 20 personal, was one of three kept on.) August 2003 to February 2005: Senior NetOps Operator, Certegy, St.Pete, Fl. (August 2003, Certegy terminated contract with EDS and out of 40 personal, was one of six kept on.) Projects: Creation and update of listing and placement for all raised floor equipment at St.Pete site. Listing was made up of, floor plan of the raised floor and equipment racks diagrams showing the placement of all devices using Visio. This was cross-referenced with an inventory excel document showing what dept was responsible for each device. Sole creator of Network operation and Server Operation procedures guide (NetOps Guide).  Expertise: Resolving circuit and/or router issues or assist circuit carrier in resolving issue from the company Network Operation Center (NOC). As well as resolving application problems or assist application support in resolution of it.     July 1999 to August 2003: Senior NetOps Operator,EDS (Certegy Account), St.Pete, FL Same expertise and on going projects as listed above for FNIS/Certegy. (Equifax outsourced the NetOps dept. to EDS in 1999)         January 1991 to July 1999: NetOps/Tandem Operator, Equifax, St.Pete & Tampa, FL Same as all of the above for FNIS/Certegy/EDS except for circuit and router issues   EDUCATION ? 
New Horizons Computer Learning Center, Jacksonville, Florida - CompTIA A+, Security+, and     Network+ Certified.                        Currently working on CCNA Certification 07/30/10 ? Mott Community College, Flint, Michigan – Associates Degree - Data Processing and General Education ? Currently studying Japanese

    Read the article

  • Wireless is detected, but not connecting. Ethernet works. How to correct the wireless address?

    - by Lucas
    I am running Ubuntu 14.04 with cable internet, and my wireless is detected and connected, but I cannot connect to the internet. I know the problem is with my machine because other machines are connecting to the same router just fine. I can connect via ethernet just fine as well. Here are some notable tests: ping 192.168.0.105 works with 0% packet loss, but ping 192.168.0.1 has 100% packet loss. When I plug in my ethernet, ping 192.168.0.1 works with 0% packet loss. My wireless name is tg, and the router ip is 192.168.0.1 (where I can enter username and password). I suspect that I need to change my wireless address from 192.168.0.105 to 192.168.0.1. Any suggestions on how to proceed? extra info: [lucas@lucas-ThinkPad-W520]/home/lucas$ iwconfig eth0 no wireless extensions. lo no wireless extensions. wlan0 IEEE 802.11abgn ESSID:"tg" Mode:Managed Frequency:2.462 GHz Access Point: 00:02:6F:83:F8:F4 Bit Rate=1 Mb/s Tx-Power=15 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off Link Quality=62/70 Signal level=-48 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:52 Invalid misc:166 Missed beacon:0 [lucas@lucas-ThinkPad-W520]/home/lucas$ ifconfig eth0 Link encap:Ethernet HWaddr f0:de:f1:b2:53:53 inet addr:192.168.0.100 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::f2de:f1ff:feb2:5353/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:980003 errors:0 dropped:0 overruns:0 frame:0 TX packets:498384 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1320506168 (1.3 GB) TX bytes:59780591 (59.7 MB) Interrupt:20 Memory:f3a00000-f3a20000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:21927 errors:0 dropped:0 overruns:0 frame:0 TX packets:21927 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:1781719 (1.7 MB) TX bytes:1781719 (1.7 MB) wlan0 Link encap:Ethernet HWaddr 24:77:03:29:8f:dc inet addr:192.168.0.105 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::2677:3ff:fe29:8fdc/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:11828 errors:0 dropped:0 overruns:0 frame:0 TX packets:15444 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:4855662 (4.8 MB) TX bytes:2250585 (2.2 MB) [lucas@lucas-ThinkPad-W520]/home/lucas$ lspci -nn | grep 0280 03:00.0 Network controller [0280]: Intel Corporation Centrino Ultimate-N 6300 [8086:4238] (rev 3e) [lucas@lucas-ThinkPad-W520]/home/lucas$ rfkill list 0: hci0: Bluetooth Soft blocked: no Hard blocked: no 1: tpacpi_bluetooth_sw: Bluetooth Soft blocked: no Hard blocked: no 2: phy0: Wireless LAN Soft blocked: no Hard blocked: no with ethernet unplugged: [lucas@lucas-ThinkPad-W520]/home/lucas$ route -n | grep UG 0.0.0.0 192.168.0.1 0.0.0.0 UG 0 0 0 wlan0 with ethernet plugged in: [lucas@lucas-ThinkPad-W520]/home/lucas$ route -n | grep UG 0.0.0.0 192.168.0.1 0.0.0.0 UG 0 0 0 eth0 [lucas@lucas-ThinkPad-W520]/home/lucas$ nm-tool NetworkManager Tool State: connected (global) - Device: wlan0 [tg] ---------------------------------------------------------- Type: 802.11 WiFi Driver: iwlwifi State: connected Default: no HW Address: 24:77:03:29:8F:DC Capabilities: Speed: 52 Mb/s Wireless Properties WEP Encryption: yes WPA Encryption: yes WPA2 Encryption: yes Wireless Access Points (* = current AP) tatum: Infra, 40:8B:07:D8:A5:04, Freq 2437 MHz, Rate 54 Mb/s, Strength 42 W PA WPA2 ums: Infra, 
00:20:A6:72:52:BF, Freq 2437 MHz, Rate 54 Mb/s, Strength 59 Alpha 40: Infra, 28:CF:E9:86:59:5D, Freq 5260 MHz, Rate 54 Mb/s, Strength 30 W PA WPA2 thepromiselan: Infra, 58:6D:8F:51:E5:54, Freq 2452 MHz, Rate 54 Mb/s, Strength 34 $ PA WPA2 xfinitywifi: Infra, 06:1D:D5:84:27:A0, Freq 2437 MHz, Rate 54 Mb/s, Strength 52 *tg: Infra, 00:02:6F:83:F8:F4, Freq 2462 MHz, Rate 54 Mb/s, Strength 73 W PA2 ums: Infra, 00:20:A6:A1:9F:25, Freq 2452 MHz, Rate 54 Mb/s, Strength 44 BRIAN-PC_Network:Infra, 20:AA:4B:DD:93:D6, Freq 2462 MHz, Rate 54 Mb/s, Strength 35 W PA2 HOME-C0F8: Infra, 44:32:C8:D2:C0:F8, Freq 2412 MHz, Rate 54 Mb/s, Strength 40 W PA WPA2 abcsexy: Infra, 28:28:5D:27:5D:85, Freq 2412 MHz, Rate 54 Mb/s, Strength 27 W PA WPA2 IPv4 Settings: Address: 192.168.0.105 Prefix: 24 (255.255.255.0) Gateway: 192.168.0.1 DNS: 192.168.0.1 - Device: eth0 [Wired connection 1] ------------------------------------------- Type: Wired Driver: e1000e State: connected Default: yes HW Address: F0:DE:F1:B2:53:53 Capabilities: Carrier Detect: yes Speed: 100 Mb/s Wired Properties Carrier: on IPv4 Settings: Address: 192.168.0.100 Prefix: 24 (255.255.255.0) Gateway: 192.168.0.1 DNS: 192.168.0.1
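
    A hedged way to narrow this down (with the ethernet cable unplugged, so the wired route cannot mask the test): release and renew the wireless lease, then check whether the gateway even answers ARP on wlan0. The client address 192.168.0.105 itself is fine; it should not be changed to the router's 192.168.0.1:

        sudo dhclient -r wlan0 && sudo dhclient wlan0   # fresh DHCP lease over wifi
        arp -n 192.168.0.1                              # does the gateway resolve at all?
        ping -c 3 -I wlan0 192.168.0.1                  # force the probe out the wifi card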

    Read the article

  • Slow wired internet connection on Realtek RTL8168-8111 (Rev 6)

    - by Mark
    I keep on seeing issues people are having with their wireless. I am having an issue with being wired into my router. I installed Ubuntu 11.10 the other day, on my custom PC. The motherboard I have installed on this PC is an ASUS P8H61-M. The issue I am having is with my speed. I have a dual boot, windows 7 and the new Ubuntu. On my Windows install I am getting test speeds from Speakeasy at 17Mbps and actual downloads around 2-3MB/s. With Ubuntu, I am getting test speeds from Speakeasy at 1.14Mbps and actual downloads around 60KB/s. I have disabled IPv6 and am no using GoogleDNS for my DNS, but it hasn't resolved the issue. I have scanned my router (WRT54GS Linksys) to disable IPv6 connections, and I am not seeing any option for that. I cannot figure out why I am getting such sluggish internet connection. Any help to resolve would be great! I performed an iconfig -a with these results: mark@Mark-ASUS:~$ ifconfig -a eth0 Link encap:Ethernet HWaddr f4:6d:04:d1:2c:4e inet addr:192.168.1.103 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::f66d:4ff:fed1:2c4e/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:21888 errors:0 dropped:21888 overruns:0 frame:21888 TX packets:21068 errors:0 dropped:90 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:26348337 (26.3 MB) TX bytes:2217140 (2.2 MB) Interrupt:46 Base address:0xc000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:7 errors:0 dropped:0 overruns:0 frame:0 TX packets:7 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:952 (952.0 B) TX bytes:952 (952.0 B) My specs are: mark@Mark-ASUS:~$ sudo lspci -nn 04:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller [10ec:8168] (rev 06) 06:00.0 PCI bridge [0604]: ASMedia Technology Inc. Device [1b21:1080] (rev 01) udev information: KERNEL[11.351405] add /devices/pci0000:00/0000:00:1c.2/0000:04:00.0/net/eth0 (net) UDEV_LOG=3 ACTION=add DEVPATH=/devices/pci0000:00/0000:00:1c.2/0000:04:00.0/net/eth0 SUBSYSTEM=net INTERFACE=eth0 IFINDEX=2 SEQNUM=1542 UDEV [11.363905] add /devices/pci0000:00/0000:00:1c.2/0000:04:00.0/net/eth0 (net) UDEV_LOG=3 ACTION=add DEVPATH=/devices/pci0000:00/0000:00:1c.2/0000:04:00.0/net/eth0 SUBSYSTEM=net INTERFACE=eth0 IFINDEX=2 SEQNUM=1542 ID_VENDOR_FROM_DATABASE=Realtek Semiconductor Co., Ltd. ID_MODEL_FROM_DATABASE=RTL8111/8168B PCI Express Gigabit Ethernet controller ID_BUS=pci ID_VENDOR_ID=0x10ec ID_MODEL_ID=0x8168 ID_MM_CANDIDATE=1 dmesg information: [ 2.855982] r8169 Gigabit Ethernet driver 2.3LK-NAPI loaded [ 2.856366] r8169 0000:04:00.0: eth0: RTL8168b/8111b at 0xffffc9000064c000, f4:6d:04:d1:2c:4e, XID 0c900800 IRQ 46 [ 12.540956] r8169 0000:04:00.0: eth0: link down [ 12.540961] r8169 0000:04:00.0: eth0: link down [ 12.541173] ADDRCONF(NETDEV_UP): eth0: link is not ready I took out a lot of information that did not pertain to eth0, because the previous edits wouldn't save. If I need any more information, let me know. I would love to get this resolved. The other issue I am noticing is sometimes my connection disconnects all together, for about a minute, then it reconnects.
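
    One hedged thing to try (an assumption based on the RX dropped/frame counters above, not a confirmed fix): the in-kernel r8169 driver is known to misbehave with some RTL8111/8168 revisions, and toggling the NIC's offload features sometimes restores normal throughput; Realtek's out-of-tree r8168 driver is the other common workaround:

        sudo apt-get install ethtool
        sudo ethtool -K eth0 tso off gso off gro off   # retest speed with offloads disabled
        ethtool -S eth0 | grep -iE 'err|drop'          # watch the NIC-level counters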

    Read the article

  • Obtaining MAC address

    - by rink.attendant.6
    According to Obtain client MAC address in ASP.NET Application, it is not possible. I am not entirely convinced because whenever I connect to Tim Hortons WiFi, my MAC address is known. Occasionally, the network is slow and I see this URL like this before being redirected to the Connect page: http://timhortonswifi.com/cp/tdl3/index.asp ?cmd=login &switchip=172.30.129.73 &mac=60:6c:66:17:1a:83 &ip=10.40.66.229 &essid=Tim%20Hortons%20WiFi &apname=TDL-ON-NEP-02177-WAP1 &apgroup=02177 &url=http%3A%2F%2Fweather%2Egc%2Eca%2Fcity%2Fpages%2Fon-72_metric_e%2Ehtml So according to this URL, the site knows the IP address of the router, my MAC address, the IP address assigned to my device by the router, the network SSID, some other pieces of information, and the URL I was trying to access prior to connecting. There's two options: Tim Hortons WiFi Basic and Tim Hortons WiFi Plus, where the "Plus" option allows me to connect to any Tim Hortons WiFi access point in Canada automatically with this device. Registration requires an email address, so I'm assuming this is possible by checking the MAC address and storing it in a database that routers ping upon connection. More info here. According to the extension of this page, I can safely assume it is ASP. How are they obtaining this information?
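
    A hedged sketch of what the portal page is probably doing (classic ASP to match index.asp above; the variable names are illustrative): the wireless gateway is on the same layer-2 segment as the phone, so it already knows the MAC, and it simply appends it to the redirect URL. The web application never detects anything itself; it just reads the query string, which is why this works for a captive portal but not for an ordinary ASP.NET site out on the internet:

        <%
        ' index.asp -- hypothetical handler for the redirect shown above
        Dim clientMac, clientIp, requestedUrl
        clientMac    = Request.QueryString("mac")   ' e.g. 60:6c:66:17:1a:83
        clientIp     = Request.QueryString("ip")
        requestedUrl = Request.QueryString("url")
        %>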

    Read the article

  • CakePHP Routes: Messing With The MVC

    - by thesunneversets
    So we have a real-estate-related site that has controller/action pairs like "homes/view", "realtors/edit", and so forth. From on high it has been deemed a good idea to refactor the site so that URLS are now in the format "/realtorname/homes/view/id", and perhaps also "/admin/homes/view/id" and/or "/region/..." As a mere CakePHP novice I'm finding it difficult to achieve this in routes.php. I can do the likes of: Router::connect('/:filter/h/:id', array('controller'=>'homes','action'=>'view')); Router::connect('/admin/:controller/:action/:id'); But I'm finding that the id is no longer being passed simply and elegantly to the actions, now that controller and action do not directly follow the domain. Therefore, questions: Is it a stupid idea to play fast and loose with the /controller/action format in this way? Is there a better way of stating these routes so that things don't break egregiously? Would we be better off going back to subdomains (the initial method of achieving this type of functionality, shot down on potentially spurious SEO-related grounds)? Many thanks for any advice! I'm sorry that I'm such a newbie that I don't know whether I'm asking stupid questions or not....
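
    A hedged sketch (CakePHP 1.3 routing; the route elements reuse the :filter name from the attempt above): the third argument to Router::connect() can declare :id as a passed argument, so HomesController::view($id) keeps receiving the id even though controller and action no longer directly follow the domain:

        <?php
        // app/config/routes.php
        Router::connect(
            '/:filter/homes/view/:id',
            array('controller' => 'homes', 'action' => 'view'),
            array('pass' => array('id'), 'id' => '[0-9]+')
        );
        Router::connect(
            '/admin/:controller/:action/:id',
            array(),
            array('pass' => array('id'), 'id' => '[0-9]+')
        );
        ?>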

    Read the article

  • Creating a RESTful service in CakePHP

    - by NathanGaskin
    I'm attempting to create a RESTful service in CakePHP but I've hit a bit of a brick wall. I've enabled the default RESTful routing using Router::mapResources('users') and Router::parseExtensions(). This works well if I make a GET request, and returns some nicely formatted XML. So far so good. The problem is if I want to make a POST or PUT request. CakePHP doesn't seem to be able to read the data from the request. At the moment my add(), edit() and delete() actions don't contain any logic, they're simply setting $this->data to the view. I'm testing with the following cURL command: curl -v -d "<user><username>blahblah</username><password>blahblah</password>" http://localhost/users.xml --header 'content-type: text/xml' Which only returns a 404 header. If I remove the --header parameter then it returns the view but no data is set. It feels like I'm missing something obvious here. Any ideas?
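
    A hedged sketch of one way to get at the body (CakePHP 1.3; the model and field names are assumptions): the framework does not hydrate $this->data from a raw XML payload, so the action can read php://input and convert it with the built-in Xml class. Note the test payload also needs a closing </user> tag to be well-formed:

        <?php
        class UsersController extends AppController {
            var $name = 'Users';
            var $components = array('RequestHandler');

            function add() {
                $raw = file_get_contents('php://input');   // the POSTed XML body
                App::import('Core', 'Xml');
                $xml  = new Xml($raw);
                $data = Set::reverse($xml);                // array('User' => array(...))
                $this->User->save($data);
            }
        }
        ?>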

    Read the article

  • Setting up magic routes for plugins in CakePHP 1.3?

    - by Matt Huggins
    I'm working on upgrading my project from CakePHP 1.2 to 1.3. In the process, it seems that the "magic" routing for plugins by which a controller name (e.g.: "ForumsController") matching the plugin name (e.g.: "forums") no longer automatically routes to the root of the plugin URL (e.g.: "www.example.com/forums" pointing to plugin "forums", controller "forums", action "index"). The error message given is as follows: Error: ForumsController could not be found. Error: Create the class ForumsController below in file: app/controllers/forums_controller.php <?php class ForumsController extends AppController { var $name = 'Forums'; } ?> In fact, even if I navigate to "www.example.com/forums/forums" or "www.example.com/forums/forums/index", I get the same exact error. Do I need to explicitly set up routes to every single plugin I use? This seems to destroy a lot of the magic I like about CakePHP. I've only found that doing the following works: Router::connect('/forums/:action/*', array('plugin' => 'forums', 'controller' => 'forums')); Router::connect('/forums', array('plugin' => 'forums', 'controller' => 'forums', 'action' => 'index')); Setting up 2 routes for every single plugin seems like overkill, does it not? Is there a better solution that will cover all my plugins, or at least reduce the number of routes I need to set up for each plugin?
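
    A hedged way to avoid writing the pair by hand (assuming CakePHP 1.3's App::objects() lists the loaded plugins as expected): generate both routes for every plugin in a loop at the bottom of routes.php:

        <?php
        foreach (App::objects('plugin') as $plugin) {
            $name = Inflector::underscore($plugin);
            Router::connect("/{$name}",
                array('plugin' => $name, 'controller' => $name, 'action' => 'index'));
            Router::connect("/{$name}/:action/*",
                array('plugin' => $name, 'controller' => $name));
        }
        ?>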

    Read the article

  • DELETE causing a redirect loop?

    - by robocode
    I have an express app with a postgres backend where a user can add/delete recipes, and each time they do so they get an updated list of recipes. Adding a recipe is fine, but when I delete one it seems to get stuck in a redirect loop. In app.js I have router.get('/delete/:d', delRec.deleteRecipe); which calls the following code exports.deleteRecipe = function(req, res){ pg.connect(conString, function(err, client) { client.query('DELETE FROM recipes WHERE recipe_name = ', [req.params.d], function(err, result) { if(err) { return console.error('error running query', err); } else if (result) { pg.end(); console.log('deleting'); } }); }); res.redirect('recipes'); }; If I try delete a recipe, console.log('deleting') produces deleting deleting deleting deleting deleting deleting deleting deleting deleting deleting deleting deleting deleting deleting deleting deleting deleting deleting deleting deleting deleting The recipes route is below (sorry that it's so convoluted) router.get('/recipes', function(req, res) { pg.connect(conString, function(err, client) { if(err) { return console.error('could not connect to postgres', err); } client.query('SELECT * FROM recipes', function(err, result) { if(err) { return console.error('error running query', err); } recipes = result.rows; for(var d in recipes) { if (recipes.hasOwnProperty(d)) { recipeList[d] = recipes[d].recipe_name; } } res.render('recipes', {recipes: recipes, recipeList: recipeList}); }); }); });
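
    A hedged rewrite of deleteRecipe (assuming node-postgres' pooled pg.connect(err, client, done) API): the query above has no $1 placeholder for the value, and res.redirect() runs before the query callback fires; a redirect to the relative path 'recipes' issued from /delete/:d can also resolve back under /delete/, which would explain the loop:

        exports.deleteRecipe = function (req, res) {
          pg.connect(conString, function (err, client, done) {
            if (err) { return res.status(500).send('could not connect to postgres'); }
            client.query('DELETE FROM recipes WHERE recipe_name = $1',
                         [req.params.d],
                         function (err, result) {
              done();                            // return the client to the pool
              if (err) {
                console.error('error running query', err);
                return res.status(500).send('delete failed');
              }
              res.redirect('/recipes');          // absolute path, after the delete finishes
            });
          });
        };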

    Read the article

  • How do I send data between two computers over the internet

    - by Johan
    I have been struggling with this for the entire day now, I hope somebody can help me with this. My problem is fairly simple: I wish to transfer data (mostly simple commands) from one PC to another over the internet. I have been able to achieve this using sockets in Java when both computers are connected to my home router. I then connected both computers to the internet using two different mobile phones and attempted to transmit the data again. I used the mobile phones as this provides a direct route to the internet and if I use my router I have to set up port forwarding, at least, that is how I understand it. I think the problem lies in the method that I set up the client socket. I used: Socket kkSocket = new Socket(ipAddress, 3333); where ipAddress is the IP address of the computer running the server. I got the IP address by right-clicking on the connection, status, support. Is that the correct IP address to use or where can I obtain the address of the server? Also, is it possible to get a fixed name for my computer that I can use instead of entering the IP address, as this changes every time I connect to the internet using my mobile phone? Alternatively, are there better methods to solving my problem such as using http, and if so, where can I find more information about this? Thanks!
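
    For the server side, a minimal hedged sketch (class name and port are placeholders): bind to the chosen port, accept connections, and read the command. The client must connect to the server's public IP address (what a "what is my IP" service reports from the server's own connection), not the 192.168.x.x address shown in the adapter's status dialog; a dynamic-DNS hostname can stand in for that address when it changes:

        import java.io.*;
        import java.net.*;

        public class CommandServer {
            public static void main(String[] args) throws IOException {
                try (ServerSocket server = new ServerSocket(3333)) {   // same port as the client
                    while (true) {
                        try (Socket client = server.accept();
                             BufferedReader in = new BufferedReader(
                                     new InputStreamReader(client.getInputStream()))) {
                            System.out.println("command: " + in.readLine());
                        }
                    }
                }
            }
        }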

    Read the article

  • Why Backbone.js isn't binding my event

    - by Saif Bechan
    I have a router like this, as main entry point: window.AppRouter = Backbone.Router.extend({ routes: { '': 'login' }, login: function(){ userLoginView = new UserLoginView(); } }); var appRouter = new AppRouter; Backbone.history.start({pushState: true}); I have a model/collection/view like this: window.User = Backbone.Model.extend({}); window.Users = Backbone.Collection.extend({ model: User }); window.UserLoginView = Backbone.View.extend({ events: { 'click #login-button': 'loginAction' }, initialize: function(){ _.bindAll(this, 'render', 'loginAction'); }, loginAction: function(){ var uid = $("#login-username").val(); var pwd = $("#login-password").val(); var user = new User({uid:uid, pwd:pwd}); } }); And body of my HTML looks like this: <form action="#" method="POST" id="login-form"> <p> <label for="login-username">username</label> <input type="text" id="login-username" autofocus /> </p> <p> <label for="login-password">password</label> <input type="password" id="login-password" /> </p> <a id="login-button" href="#">Inloggen</a> </form> Note: The HTML comes from Node.js using express.js, should I maybe wait for a document ready event somewhere? Edit: I have tried this, create the view when ready, did not solve the problem. $(function(){ userLoginView = new UserLoginView(); });
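
    A hedged sketch of one likely fix: a Backbone view only delegates its events map inside its own el, and the view above never receives one, so it is bound to a detached <div> and never sees clicks on #login-button in the page. Wrapping the setup in a ready handler (as in the edit above) is still sensible, but on its own it leaves the view detached; handing it the form element makes the click reach loginAction:

        $(function () {
            window.AppRouter = Backbone.Router.extend({
                routes: { '': 'login' },
                login: function () {
                    this.userLoginView = new UserLoginView({ el: $('#login-form') });
                }
            });

            var appRouter = new AppRouter();
            Backbone.history.start({ pushState: true });
        });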

    Read the article

  • Benefits of "Don't Fragment" on TCP Packets?

    - by taspeotis
    One of our customers is having trouble submitting data from our application (on their PC) to a server (different geographical location). When sending packets under 1100 bytes everything works fine, but above this we see TCP retransmitting the packet every few seconds and getting no response. The packets we are using for testing are about 1400 bytes (but less than 1472). I can send an ICMP ping to www.google.com that is 1472 bytes and get a response (so it's not their router/first few hops). I found that our application sets the DF flag for these packets, and I believe a router along the way to the server has an MTU less than/equal to 1100 and dropping the packet. This affects 1 client in 5000, but since everybody's routes will be different this is expected. The data is a SOAP envelope and we expect a SOAP response back. I can't justify WHY we do it, the code to do this was written by a previous developer. So... Are there are benefits OR justification to setting the DF flag on TCP packets for application data? I can think of reasons it is needed for network diagnostics applications but not in our situation (we want the data to get to the endpoint, fragmented or not). One of our sysadmins said that it might have something to do with us using SSL, but as far as I know SSL is like a stream and regardless of fragmentation, as long as the stream is rebuilt at the end, there's no problem. If there's no good justification I will be changing the behaviour of our application. Thanks in advance.
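
    For context, a hedged sketch of where the DF bit usually comes from (Linux-specific API; the application in question may be on Windows, where the analogue is the stack's Path MTU Discovery default): the kernel sets DF on TCP segments by default as part of PMTU discovery, and an application has to opt out per socket if it wants routers to fragment instead. So the flag may simply be the platform default rather than a deliberate choice by the previous developer:

        /* allow_fragmentation.c -- clear DF / disable PMTUD on one socket */
        #include <netinet/in.h>
        #include <sys/socket.h>

        int allow_fragmentation(int sockfd)
        {
            int pmtu = IP_PMTUDISC_DONT;   /* let routers fragment; don't set DF */
            return setsockopt(sockfd, IPPROTO_IP, IP_MTU_DISCOVER,
                              &pmtu, sizeof(pmtu));
        }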

    Read the article

  • Receiving UDP on different Android phones gives different results

    - by user1868982
    I am willing to create a server and client program on my android mobile devices. The devices communicate with each other on the same wifi network, therefore, some simple scanning mechanism must be implemented - The client phones search for a server phone through some kind of broadcast. What I did: My protocol - the client phone broadcasts a message port p on the wifi, the server listens on port p. when the server gets the broadcast message it sends a message back, therefore discovering itself to the client. My code - I have opened a broadcast socket on my app, it sends a broadcast message. Meanwhile there is a python script on my PC that listens and replies - I use python so that my testing will be easier - Wireshark on the PC and I can see everything. What happens: When I use one of my Galaxy S phones - it works and I get a response. When I use the other Galaxy S phone - it doesn't work. Now this is what I know: The phone that works actually has Nexus ROM on it Ver. 4.1.1 The phone that doesn't work has 2.3.3 regular galaxy ROM The python code says it receives both of the broadcasts sent from both phones, and replies to both of them without raising any exception. So far I was thought the problem may be 1. the older version'd phone. 2. the windows firewall 3. the router firewall So I have opened Wireshark, and Indeed I saw that both phones are sending their broadcasts - it was logged on Wireshark. But the python script only responded to the first one. So this is why 1 & 3 are irrelevant - if the router firewall was blocking my UDP I would have still seen the python server response, same with the older versioned phone. To get rid of 2 i just disabled the windows firewall - still same problem. Does anyone has a clue to why this effect might happen? Thanks!
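
    One hedged thing worth ruling out (an assumption, not a confirmed diagnosis): the two ROMs may be broadcasting to different destination addresses, the limited broadcast 255.255.255.255 versus the subnet's directed broadcast (e.g. 192.168.1.255), and a listener bound to one specific address will only ever see one of them. Deriving the broadcast address from the Wi-Fi DHCP lease removes that variable on the phone, and binding the Python socket to '' (all interfaces) removes it on the PC. The sketch below assumes it runs inside an Activity or Service, with exception handling trimmed:

        // send the discovery probe to the subnet's directed broadcast address
        private void sendDiscoveryBroadcast(int port) throws IOException {
            WifiManager wifi = (WifiManager) getSystemService(Context.WIFI_SERVICE);
            DhcpInfo dhcp = wifi.getDhcpInfo();
            int bcast = (dhcp.ipAddress & dhcp.netmask) | ~dhcp.netmask;   // directed broadcast
            byte[] quads = new byte[4];
            for (int k = 0; k < 4; k++) {
                quads[k] = (byte) ((bcast >> (k * 8)) & 0xff);             // DhcpInfo is little-endian
            }
            InetAddress broadcastAddr = InetAddress.getByAddress(quads);

            DatagramSocket socket = new DatagramSocket();
            socket.setBroadcast(true);
            byte[] msg = "DISCOVER".getBytes();                            // placeholder payload
            socket.send(new DatagramPacket(msg, msg.length, broadcastAddr, port));
            socket.close();
        }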

    Read the article

  • Protected object this object on the rompager server is protected

    - by Sami-L
    I have a Windows Home Server. When I connect to any web site on it, I get an authentication prompt with the message: "The site http://mydomain.com requires a user name and password. The site says: SmartAX." When I close the prompt, I get an error page saying: "Protected Object: this object on the RomPager server is protected." Could you give me an idea what is going on here? Does it have anything to do with the ADSL router?

    Read the article

  • SSL handshake failed SVN Checkout

    - by Webnet
    I'm using https://IP:PORT/svn/web/repository to check out remotely from our network here at the office and I'm getting the error: "SSL handshake failed: SSL error: unknown protocol". What puzzles me is that I've verified SSL is running and listening on the port I'm specifying, and the port I've specified is forwarded through the office router to the SVN server. Any ideas on what could be going wrong?
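
    A hedged way to check what is actually answering on that port, from a machine outside the office network (a diagnostic, not a fix): if the forwarded port is reaching a plain-HTTP listener or the wrong service, the TLS probe below fails in the same way the svn client does:

        openssl s_client -connect YOUR.PUBLIC.IP:PORT
        # a non-TLS answer here is what produces "SSL error: unknown protocol"
        # on the client; a certificate dump means the forwarding itself is fine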

    Read the article

  • Looking for home networking hardware and software advice

    - by phobos7
    Note: I originally wrote this up in a blog post. I've removed any affiliate links that I put in my original post to ensure I don't annoy anybody. I've recently moved home and I now need to go to the trouble of sorting out my home network yet again. We had Virgin broadband in Hertford but you can't get Virgin in the street we've moved to so I've had to go with O2 Broadband. Normally I prefer to use my own hardward, and previously used the DLink DIR-655 router which was great, but in this situation I am using the O2 Wirelss Box III since I only have an old Netgear DG834PN Wireless G modem router and I'd rather be using Wireless N. Anyway, the place we have moved into has only one phone point in the hallway, has the best TV point in one room and the best place to put the TV and other entertainment stuff in yet another room. So, networking the house up for Internet and TV is required. The diagram below shows the things that I'll have in my home network but there are three points where I'm not quite sure what hardware to us. Wireless Access Point/Bridge, that acts only as a wireless to wire bridge and not an AP, that links up a Media Centre/PC and a couple of consoles to the network. I'm pretty much settled on us an Acer Aspire Revo R3600 as my media PC, probably with Ubuntu or Windows and XBMC installed. Wireless Access Point/Bridge, that acts only as a wireless to wire bridge and not an AP, that links up a device that can decode and stream TV from a TV aerial across the network. The device that is connected to 2). At the moment I'm considering a HDHomeRun by SiliconDust. At the moment I'm considering either the TP LINK TL-WA701ND 150Mbps Wireless Lite N Access Point (very cheap at Amazon) or the Netgear 5 GHz Wireless-N HD Access Point/Bridge. I'd love to get some insight into what you would do in my situation. What Wireless Access Point/Bridge should I put at points 1) and 2)? What device should I choose for point 3) that can decode and stream a TV signal? Is the Acer Aspire Revo R3600 a good choice? ![alt text][6] Note 2: I've also posted this question on AVForums.

    Read the article

  • How to setup a static multicast ARP entry with Cisco SG300?

    - by Fredrik Hedberg
    We're running a Microsoft NLB cluster in multicast mode as a loadbalancer. Using our old Cisco IOS switches we propagate access to the cluster to our branches using a static ARP entry in the core router: arp 10.20.1.226 03bf.0a14.01e2 ARPA But how does one solve this using non-IOS based Cisco hardware such as the SG300 series? Adding a static ARP entry results in an error message telling the user that the hardware address needs to be a valid unicast MAC address.

    Read the article

  • How to put the wamp server online?

    - by srisar
    Hi, I have set up the WAMP server and now I'm ready to put it online. I have put it online and forwarded port 80, but when I point the browser at my WAN address it shows my router's home page instead of my site. I don't know how to solve it.

    Read the article

  • Cisco 800 series won't forward port

    - by sam
    Hello ServerFault, I am trying to forward port 444 from my cisco router to my Web Server (192.168.0.2). As far as I can tell, my port forwarding is configured correctly, yet no traffic will pass through on port 444. Here is my config: ! version 12.3 service config no service pad service tcp-keepalives-in service tcp-keepalives-out service timestamps debug uptime service timestamps log uptime service password-encryption no service dhcp ! hostname QUESTMOUNT ! logging buffered 16386 informational logging rate-limit 100 except warnings no logging console no logging monitor enable secret 5 -removed- ! username administrator secret 5 -removed- username manager secret 5 -removed- clock timezone NZST 12 clock summer-time NZDT recurring 1 Sun Oct 2:00 3 Sun Mar 3:00 aaa new-model ! ! aaa authentication login default local aaa authentication login userlist local aaa authentication ppp default local aaa authorization network grouplist local aaa session-id common ip subnet-zero no ip source-route no ip domain lookup ip domain name quest.local ! ! no ip bootp server ip inspect name firewall tcp ip inspect name firewall udp ip inspect name firewall cuseeme ip inspect name firewall h323 ip inspect name firewall rcmd ip inspect name firewall realaudio ip inspect name firewall streamworks ip inspect name firewall vdolive ip inspect name firewall sqlnet ip inspect name firewall tftp ip inspect name firewall ftp ip inspect name firewall icmp ip inspect name firewall sip ip inspect name firewall fragment maximum 256 timeout 1 ip inspect name firewall netshow ip inspect name firewall rtsp ip inspect name firewall skinny ip inspect name firewall http ip audit notify log ip audit po max-events 100 ip audit name intrusion info list 3 action alarm ip audit name intrusion attack list 3 action alarm drop reset no ftp-server write-enable ! ! ! ! crypto isakmp policy 1 authentication pre-share ! crypto isakmp policy 2 encr 3des authentication pre-share group 2 ! crypto isakmp client configuration group staff key 0 qS;,sc:q<skro1^, domain quest.local pool vpnclients acl 106 ! ! crypto ipsec transform-set tr-null-sha esp-null esp-sha-hmac crypto ipsec transform-set tr-des-md5 esp-des esp-md5-hmac crypto ipsec transform-set tr-des-sha esp-des esp-sha-hmac crypto ipsec transform-set tr-3des-sha esp-3des esp-sha-hmac ! crypto dynamic-map vpnusers 1 description Client to Site VPN Users set transform-set tr-des-md5 ! ! crypto map cm-cryptomap client authentication list userlist crypto map cm-cryptomap isakmp authorization list grouplist crypto map cm-cryptomap client configuration address respond crypto map cm-cryptomap 65000 ipsec-isakmp dynamic vpnusers ! ! ! ! interface Ethernet0 ip address 192.168.0.254 255.255.255.0 ip access-group 102 in ip nat inside hold-queue 100 out ! interface ATM0 no ip address no atm ilmi-keepalive dsl operating-mode auto ! interface ATM0.1 point-to-point pvc 0/100 encapsulation aal5mux ppp dialer dialer pool-member 1 ! ! interface Dialer0 bandwidth 640 ip address negotiated ip access-group 101 in no ip redirects no ip unreachables ip nat outside ip inspect firewall out ip audit intrusion in encapsulation ppp no ip route-cache no ip mroute-cache dialer pool 1 dialer-group 1 no cdp enable ppp pap sent-username -removed- password 7 -removed- ppp ipcp dns request crypto map cm-cryptomap ! 
ip local pool vpnclients 192.168.99.1 192.168.99.254 ip nat inside source list 105 interface Dialer0 overload ip nat inside source static tcp 192.168.0.2 444 interface Dialer0 444 ip nat inside source static tcp 192.168.0.51 9000 interface Dialer0 9000 ip nat inside source static udp 192.168.0.2 1433 interface Dialer0 1433 ip nat inside source static tcp 192.168.0.2 1433 interface Dialer0 1433 ip nat inside source static tcp 192.168.0.2 25 interface Dialer0 25 ip classless ip route 0.0.0.0 0.0.0.0 Dialer0 ip http server no ip http secure-server ! ip access-list logging interval 10 logging 192.168.0.2 access-list 1 remark The local LAN. access-list 1 permit 192.168.0.0 0.0.0.255 access-list 2 permit 192.168.0.0 access-list 2 remark Where management can be done from. access-list 2 permit 192.168.0.0 0.0.0.255 access-list 3 remark Traffic not to check for intrustion detection. access-list 3 deny 192.168.99.0 0.0.0.255 access-list 3 permit any access-list 101 remark Traffic allowed to enter the router from the Internet access-list 101 permit ip 192.168.99.0 0.0.0.255 192.168.0.0 0.0.0.255 access-list 101 deny ip 0.0.0.0 0.255.255.255 any access-list 101 deny ip 10.0.0.0 0.255.255.255 any access-list 101 deny ip 127.0.0.0 0.255.255.255 any access-list 101 deny ip 169.254.0.0 0.0.255.255 any access-list 101 deny ip 172.16.0.0 0.15.255.255 any access-list 101 deny ip 192.0.2.0 0.0.0.255 any access-list 101 deny ip 192.168.0.0 0.0.255.255 any access-list 101 deny ip 198.18.0.0 0.1.255.255 any access-list 101 deny ip 224.0.0.0 0.15.255.255 any access-list 101 deny ip any host 255.255.255.255 access-list 101 permit tcp 67.228.209.128 0.0.0.15 any eq 1433 access-list 101 permit tcp host 120.136.2.22 any eq 1433 access-list 101 permit tcp host 123.100.90.58 any eq 1433 access-list 101 permit udp 67.228.209.128 0.0.0.15 any eq 1433 access-list 101 permit udp host 120.136.2.22 any eq 1433 access-list 101 permit udp host 123.100.90.58 any eq 1433 access-list 101 permit tcp any any eq 444 access-list 101 permit tcp any any eq 9000 access-list 101 permit tcp any any eq smtp access-list 101 permit udp any any eq non500-isakmp access-list 101 permit udp any any eq isakmp access-list 101 permit esp any any access-list 101 permit tcp any any eq 1723 access-list 101 permit gre any any access-list 101 permit tcp any any eq 22 access-list 101 permit tcp any any eq telnet access-list 102 remark Traffic allowed to enter the router from the Ethernet access-list 102 permit ip any host 192.168.0.254 access-list 102 deny ip any host 192.168.0.255 access-list 102 deny udp any any eq tftp log access-list 102 permit ip 192.168.0.0 0.0.0.255 192.168.99.0 0.0.0.255 access-list 102 deny ip any 0.0.0.0 0.255.255.255 log access-list 102 deny ip any 10.0.0.0 0.255.255.255 log access-list 102 deny ip any 127.0.0.0 0.255.255.255 log access-list 102 deny ip any 169.254.0.0 0.0.255.255 log access-list 102 deny ip any 172.16.0.0 0.15.255.255 log access-list 102 deny ip any 192.0.2.0 0.0.0.255 log access-list 102 deny ip any 192.168.0.0 0.0.255.255 log access-list 102 deny ip any 198.18.0.0 0.1.255.255 log access-list 102 deny udp any any eq 135 log access-list 102 deny tcp any any eq 135 log access-list 102 deny udp any any eq netbios-ns log access-list 102 deny udp any any eq netbios-dgm log access-list 102 deny tcp any any eq 445 log access-list 102 permit ip 192.168.0.0 0.0.0.255 any access-list 102 permit ip any host 255.255.255.255 access-list 102 deny ip any any log access-list 105 remark Traffic to NAT access-list 105 deny ip 
192.168.0.0 0.0.0.255 192.168.99.0 0.0.0.255 access-list 105 permit ip 192.168.0.0 0.0.0.255 any access-list 106 remark User to Site VPN Clients access-list 106 permit ip 192.168.0.0 0.0.0.255 any dialer-list 1 protocol ip permit ! line con 0 no modem enable line aux 0 line vty 0 4 access-class 2 in transport input telnet ssh transport output none ! scheduler max-task-time 5000 ! end any ideas? :)
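
    A few hedged IOS diagnostics (not a fix) that show whether the static translation for 444 is being built and whether the inbound access list is what drops the traffic:

        show ip nat translations | include 444
        show access-lists 101
        ! watch translations being created live while retrying from outside:
        debug ip nat detailed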

    Read the article
