Search Results

Search found 15811 results on 633 pages for 'path manipulation'.


  • Async ignored on AJAX requests on Nginx server

    - by eComEvo
    Despite sending an async request to the server over AJAX, the server will not respond until the previous unrelated request has finished. The following code is only broken in this way on Nginx, but runs perfectly on Apache. This call starts a background process and waits for it to complete so it can display the final result. $.ajax({ type: 'GET', async: true, url: $(this).data('route'), data: $('input[name=data]').val(), dataType: 'json', success: function (data) { /* do stuff */ }, error: function (data) { /* handle errors */ } }); The following is called after the above; on Apache it takes about 100ms to execute and repeats itself, showing progress for the data being written in the background: checkStatusInterval = setInterval(function () { $.ajax({ type: 'GET', async: false, cache: false, url: '/process-status?process=' + currentElement.attr('id'), dataType: 'json', success: function (data) { /* update progress bar and status message */ } }); }, 1000); Unfortunately, when this script is run under Nginx, the progress request never finishes even a single call until the first AJAX request that sent the data is done. If I change async to true in the above, one fires every interval, but none of them complete until that very first AJAX request finishes. Here is the main nginx conf file: #user nobody; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; server_names_hash_bucket_size 64; # configure temporary paths # nginx is started with param -p, setting nginx path to serverpack installdir fastcgi_temp_path temp/fastcgi; uwsgi_temp_path temp/uwsgi; scgi_temp_path temp/scgi; client_body_temp_path temp/client-body 1 2; proxy_temp_path temp/proxy; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; # Sendfile copies data between one FD and other from within the kernel. # More efficient than read() + write(), since those require transferring data to and from user space. sendfile on; # Tcp_nopush causes nginx to attempt to send its HTTP response head in one packet, # instead of using partial frames. This is useful for prepending headers before calling sendfile, # or for throughput optimization. tcp_nopush on; # don't buffer data-sends (disable Nagle algorithm). Good for sending frequent small bursts of data in real time. tcp_nodelay on; types_hash_max_size 2048; # Timeout for keep-alive connections. Server will close connections after this time. keepalive_timeout 90; # Number of requests a client can make over the keep-alive connection. This is set high for testing. keepalive_requests 100000; # allow the server to close the connection after a client stops responding. Frees up socket-associated memory. reset_timedout_connection on; # send the client a "request timed out" if the body is not loaded by this time. Default 60. client_header_timeout 20; client_body_timeout 60; # If the client stops reading data, free up the stale client connection after this much time. Default 60.
send_timeout 60; # Size Limits client_body_buffer_size 64k; client_header_buffer_size 4k; client_max_body_size 8M; # FastCGI fastcgi_connect_timeout 60; fastcgi_send_timeout 120; fastcgi_read_timeout 300; # default: 60 secs; when step debugging with XDEBUG, you need to increase this value fastcgi_buffer_size 64k; fastcgi_buffers 4 64k; fastcgi_busy_buffers_size 128k; fastcgi_temp_file_write_size 128k; # Caches information about open FDs, frequently accessed files. open_file_cache max=200000 inactive=20s; open_file_cache_valid 30s; open_file_cache_min_uses 2; open_file_cache_errors on; # Turn on gzip output compression to save bandwidth. # http://wiki.nginx.org/HttpGzipModule gzip on; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; gzip_http_version 1.1; gzip_vary on; gzip_proxied any; #gzip_proxied expired no-cache no-store private auth; gzip_comp_level 6; gzip_buffers 16 8k; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript; # show all files and folders autoindex on; server { # access from localhost only listen 127.0.0.1:80; server_name localhost; root www; # the following default "catch-all" configuration allows access to the server from outside. # please ensure your firewall allows access to tcp/port 80. check your "skype" config. # listen 80; # server_name _; log_not_found off; charset utf-8; access_log logs/access.log main; # handle files in the root path /www location / { index index.php index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root www; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100 # location ~ \.php$ { try_files $uri =404; fastcgi_pass 127.0.0.1:9100; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } # add expire headers location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt|js|css)$ { expires 30d; } # deny access to .htaccess files (if Apache's document root concurs with nginx's one) # deny access to git & svn repositories location ~ /(\.ht|\.git|\.svn) { deny all; } } # include config files of "enabled" domains include domains-enabled/*.conf; } Here is the enabled domain conf file: access_log off; access_log C:/server/www/test.dev/logs/access.log; error_log C:/server/www/test.dev/logs/error.log; # HTTP Server server { listen 127.0.0.1:80; server_name test.dev; root C:/server/www/test.dev/public; index index.php; rewrite_log on; default_type application/octet-stream; #include /etc/nginx/mime.types; # Include common configurations. include domains-common/location.conf; } # HTTPS server server { listen 443 ssl; server_name test.dev; root C:/server/www/test.dev/public; index index.php; rewrite_log on; default_type application/octet-stream; #include /etc/nginx/mime.types; # Include common configurations. include domains-common/location.conf; include domains-common/ssl.conf; } Contents of ssl.conf: # OpenSSL for HTTPS connections.
ssl on; ssl_certificate C:/server/bin/openssl/certs/cert.pem; ssl_certificate_key C:/server/bin/openssl/certs/cert.key; ssl_session_timeout 5m; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100 location ~ \.php$ { try_files $uri =404; fastcgi_param HTTPS on; fastcgi_pass 127.0.0.1:9100; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } Contents of location.conf: # Remove trailing slash to please Laravel routing system. if (!-d $request_filename) { rewrite ^/(.+)/$ /$1 permanent; } location / { try_files $uri $uri/ /index.php?$query_string; } # We don't need .ht files with nginx. location ~ /(\.ht|\.git|\.svn) { deny all; } # Added cache headers for images. location ~* \.(png|jpg|jpeg|gif)$ { expires 30d; log_not_found off; } # Only 3 hours on CSS/JS to allow me to roll out fixes during early weeks. location ~* \.(js|css)$ { expires 3h; log_not_found off; } # Add expire headers. location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt)$ { expires 30d; } # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100 location ~ \.php$ { try_files $uri /index.php =404; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; fastcgi_pass 127.0.0.1:9100; } Any ideas where this is going wrong?
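    A hedged sketch of one commonly reported cause, not confirmed by anything in the post: if the app is PHP with default file-based sessions, PHP holds an exclusive lock on the session file for the whole request, so the long-running call blocks every later request from the same session no matter what the client's async flag says. Releasing the lock early in the long-running handler (hypothetical code) would look like this:

        <?php
        session_start();
        // read any session data the long request needs, then release the
        // session-file lock so the /process-status polls are not serialized
        // behind this request
        session_write_close();
        // ... start the background process and do the slow work here ...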

    Read the article

  • amavisd + postfix + dovecot blocks gif images

    - by David W
    I occasionally have a client who tries to email me and says his email gets blocked by my server. When I check the logs, I see this: Sep 6 18:12:52 myers amavis[15197]: (15197-08) p.path BANNED:1 [email protected]: "P=p003,L=1,M=multipart/mixed | P=p002,L=1/2,M=application/ms-tnef,T=tnef,N=winmail.dat | P=p004,L=1/2/1,T=image,T=gif,N=image001.gif,N=image001.gif", matching_key="(?-xism:^\\.(exe|lha|tnef|cab|dll)$)" And then a little later... Sep 6 18:12:58 myers amavis[15197]: (15197-08) Blocked BANNED (.image,.gif,image001.gif,image001.gif), [213.199.154.205] [157.56.236.229] <[email protected]> -> <[email protected]>, quarantine: banned-g4QhZGvwJvDF, Message-ID <6A9596BE385EC1499F83E464FA9ECCA20C668320@BY2PRD0611MB417.namprd06.prod.outlook.com>, mail_id: g4QhZGvwJvDF, Hits: -, size: 20916, 8439 ms From this and the bounce that he forwards me (to a different address I give him), I determine that it's bouncing because of the file in his signature (image001.gif). However, that does NOT match the "key" in this part of the log: matching_key="(?-xism:^\\.(exe|lha|tnef|cab|dll)$)" Furthermore, the .gif extension is nowhere to be found in the /etc/amavisd.conf file (i.e. I'm not blocking emails because they contain .gif images). Am I missing something here? This is strange... and annoying.
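    A hedged reading of the log itself: the banned part appears to be the application/ms-tnef container (winmail.dat, T=tnef), which the key's "tnef" alternation does match; the gif is merely the file packed inside that container, which would explain why .gif shows up in the banned-names list without .gif being banned anywhere. If that reading is right, a sketch of the amavisd.conf change, assuming the stock rule seen in the log:

        # drop 'tnef' from the banned-extensions alternation from the log
        $banned_filename_re = new_RE(
          qr'^\.(exe|lha|cab|dll)$'i,
        );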

    Read the article

  • Apache 2 proxy for Tomcat 7

    - by hsnm
    Following the how-to, I wanted traffic to the path /app to be proxied to Tomcat 7. I added this to my httpd.conf: LoadModule proxy_module {path-to-modules}/mod_proxy.so LoadModule deflate_module modules/mod_deflate.so ProxyPass /app http://localhost:8081 ProxyPassReverse /app http://localhost:8081 I also have this in my server.xml: <Connector port="8081" enableLookups="false" acceptCount="100" connectionTimeout="20000" proxyName="localhost" proxyPort="80"/> And I have the folder /var/lib/tomcat7/webapps/app with my application files. I restarted both Tomcat 7 and Apache 2 after making the configuration changes above. Problem: When navigating to webpage.com/app, I get a 500 error. The Apache logs say: [warn] proxy: No protocol handler was valid for the URL /app. If you are using a DSO version of mod_proxy, make sure the proxy submodules are included in the configuration using LoadModule. Update: This is running on Ubuntu. I resolved this by adding LoadModule proxy_http_module modules/mod_proxy_http.so to my httpd.conf. Now I have another question: how can I make this proxy work over SSL on port 443?
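    A hedged httpd.conf sketch for the follow-up question, assuming mod_ssl is available and using placeholder certificate paths:

        LoadModule ssl_module modules/mod_ssl.so
        Listen 443
        <VirtualHost *:443>
            ServerName webpage.com
            SSLEngine on
            SSLCertificateFile /etc/ssl/certs/webpage.crt       # placeholder path
            SSLCertificateKeyFile /etc/ssl/private/webpage.key  # placeholder path
            ProxyPass /app http://localhost:8081
            ProxyPassReverse /app http://localhost:8081
        </VirtualHost>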

    Read the article

  • Ruby 1.9.3 - Bundler - Graylog2

    - by Arenstar
    I'm having a strange problem with bundler. Using Ruby 1.8 the following works fine, but with 1.9 it always results in Could not find rake-0.9.2.2 in any of the sources Run `bundle install` to install missing gems. I don't understand why, but it functions correctly with RVM. I cannot, however, use RVM; it is not a solution to my problem. Install Ruby cd /usr/local/src wget http://ftp.ruby-lang.org/pub/ruby/1.9/ruby-1.9.3-p194.tar.gz tar xzf ruby-1.9.3-p194.tar.gz && cd ruby-1.9.3-p194 ./configure --prefix=/opt/lp/ruby-1.9.3-test make all && make install Install Graylog cd /usr/local/src wget https://github.com/downloads/Graylog2/graylog2-web-interface/graylog2-web-interface-0.9.6p1.tar.gz tar xzf graylog2-web-interface-0.9.6p1.tar.gz cd graylog2-web-interface-0.9.6p1 Setup Graylog cd /usr/local/src/graylog2-web-interface-0.9.6p1 sed -i "3 i gem 'thin', '~> 1.3.1'" Gemfile /opt/lp/ruby-1.9.3-test/bin/gem install bundle /opt/lp/ruby-1.9.3-test/bin/bundle install --path vendor/bundle --binstubs Begin the Test cd /usr/local/src/graylog2-web-interface-0.9.6p1 /opt/lp/ruby-1.9.3/bin/bundle exec bin/rake #Could not find rake-0.9.2.2 in any of the sources #Run `bundle install` to install missing gems. cd /usr/local/src/graylog2-web-interface-0.9.6p1 /opt/lp/ruby-1.9.3/bin/bundle exec bin/thin -e production -S test.sock -c . -R config.ru start #Could not find rake-0.9.2.2 in any of the sources #Run `bundle install` to install missing gems. Where am I going wrong?
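    One detail visible in the transcript itself, offered as a hedged first check: the bundle is installed with /opt/lp/ruby-1.9.3-test/bin/bundle, but the tests invoke /opt/lp/ruby-1.9.3/bin/bundle, a different prefix whose gem set would not contain rake-0.9.2.2. Using only paths from the post:

        cd /usr/local/src/graylog2-web-interface-0.9.6p1
        /opt/lp/ruby-1.9.3-test/bin/bundle exec bin/rake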

    Read the article

  • Windows 7 64-bit Visual Studio 2008 libtiff build nmake error

    - by user1244539
    I am trying to build tiff 4.0.2 on my Windows 7 x64 system with Visual Studio 2008, but the build fails with errors: C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2347) : error C2061: syntax error : identifier 'QINT' C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2362) : error C2059: syntax error : '}' C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2397) : error C2061: syntax error : identifier 'JOYCAPS' C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2397) : error C2059: syntax error : ';' C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2398) : error C2061: syntax error : identifier 'PJOYCAPS' C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2398) : error C2059: syntax error : ';' C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2399) : error C2061: syntax error : identifier 'NPJOYCAPS' C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2399) : error C2059: syntax error : ';' C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2400) : error C2061: syntax error : identifier 'LPJOYCAPS' C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2400) : error C2059: syntax error : ';' C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2534) : error C2146: syntax error : missing ')' before identifier 'pjc' C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2534) : error C2081: 'LPJOYCAPSA' : name in formal parameter list illegal C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2534) : error C2061: syntax error : identifier 'pjc' C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2534) : error C2059: syntax error : ';' C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2534) : error C2059: syntax error : ',' C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2534) : error C2059: syntax error : ')' NMAKE: fatal error u1077: "c:\program files(x86)\microsoft visual studio 9.0\vc\bin\cl.exe": return code '0x2' This is what I was doing: extracted tiff 4.0.2; in the VS 2008 command prompt on Windows 7 x64, set the environment for x86 by running vcvars32.bat; changed to the tiff-4.0.2/libtiff folder; ran nmake /f makefile.vc to create a static libtiff library. Following these steps on Windows XP generates the .lib file, but on Windows 7 it fails. This is the first time I'm making any .lib files.

    Read the article

  • /etc/crontab or any user crontab is not being executed

    - by ian
    My server is CentOS 5. When I edit /etc/crontab or edit any user's (including root's) crontab via the "crontab -e" command, it just adds "(system) RELOAD (/etc/crontab)" or "(admin) RELOAD (cron/admin)" to the log. No CMD entries appear in /var/log/cron. Sample entry in /var/log/cron: Aug 10 10:21:33 localhost crontab[31688]: (root) BEGIN EDIT (root) Aug 10 10:21:42 localhost crontab[31688]: (root) REPLACE (root) Aug 10 10:21:42 localhost crontab[31688]: (root) END EDIT (root) Aug 10 10:22:01 localhost crond[2688]: (root) RELOAD (cron/root) Result of "service crond status": crond (pid 1345) is running... The command "cat /var/log/messages | grep cron" does not give anything. Contents of /etc/cron.allow: admin root Contents of /etc/crontab: SHELL=/bin/bash PATH=/sbin:/bin:/usr/sbin:/usr/bin MAILTO=root HOME=/ # run-parts 01 * * * * root run-parts /etc/cron.hourly 02 4 * * * root run-parts /etc/cron.daily 22 4 * * 0 root run-parts /etc/cron.weekly 42 4 1 * * root run-parts /etc/cron.monthly * * * * * root run-parts /bin/date >> /data/date.txt Result of ps aux | grep cron: root 1345 0.0 0.1 5268 1204 ? Ss 11:43 0:00 crond Contents of admin's crontab: * * * * * /bin/date >> /data/date.txt Note that it's not only admin's crontab that isn't running; no cron jobs are running at all. Any ideas why?
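    One grounded detail from the /etc/crontab shown above: run-parts takes a directory, not a command, so the last entry is malformed. A hedged correction for that one line (it does not by itself explain why none of the other jobs log a CMD line):

        * * * * * root /bin/date >> /data/date.txt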

    Read the article

  • WinPcap/Wireshark install: where is packet.dll?

    - by Annonomus Penguin
    I have Wireshark installed, and I'm getting this error: The NPF driver isn't running. You may have trouble capturing or listing interfaces. I realize this has something to do with WinPcap. It's not in the Control Panel, as the FAQ states it should be. I've tried installing it, and it says that there is a previous version installed. This leads me to believe this is the problem: To be absolutely sure that WinPcap has been installed, please look at your system folder: you should find files called packet.* and wpcap.dll. Please check the file dates: these should be compatible with the WinPcap release dates. We've had reports of trojans or other malware that silently install the WinPcap driver, NPF.sys. If you've been infected by them, you'll probably see the driver file in Windows\System32\Drivers, but no entries in the "Add or Remove Programs" applet and no dlls. I've searched my hard drive, but the only path is this: C:\Windows\SysWOW64\packet.dll Is this the file they are talking about? Should I delete it? I'm not quite sure, so I thought I'd verify that this file is the right one.
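    A hedged way to check the reported NPF state from an elevated command prompt, assuming the standard WinPcap service name:

        sc query npf
        net start npf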

    Read the article

  • Difference between "traceroute" and "traceroute -U"

    - by AndiDog
    The manpage of traceroute says that the "-U" parameter (UDP probing) is the default, but I get different results with and without it. With "-U": traceroute -U www.univ-paris1.fr traceroute to www.univ-paris1.fr (193.55.96.121), 30 hops max, 60 byte packets [...] 13 rap-vl165-te3-2-jussieu-rtr-021.noc.renater.fr (193.51.181.101) 59.445 ms 56.924 ms 56.651 ms [...] 18 * paris1web.univ-paris1.fr (193.55.96.121) 23.797 ms 23.603 ms but the normal traceroute gives me a different result (it never reaches the final node) - it either ends with "!X" or just exits after the maximum of 30 hops: traceroute www.univ-paris1.fr traceroute to www.univ-paris1.fr (193.55.96.121), 30 hops max, 60 byte packets [...] 11 te1-1-paris1-rtr-021.noc.renater.fr (193.51.189.38) 28.147 ms 28.250 ms 28.538 ms [... non-responding nodes ...] 28 site-1.03-jussieu.rap.prd.fr (195.221.126.58) 85.941 ms !X * * Note: I tried this very often and always get the same results. The path in my local network is always the same. So what does the "-U" parameter actually change here? I'm especially interested in what the reason for "!X" could be (communication administratively prohibited). EDIT: If that helps, paris-traceroute gives me the following for the last hop: 14 P(1, 6) site-1.03-jussieu.rap.prd.fr (195.221.126.58) 34.938 ms !5 !T2 which means that node discards the packet with TTL=2 and returns an unknown message (not "destination unreachable" or the like).
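    A hedged reading of the manpage, assuming the modern Linux traceroute implementation: the default method sends UDP probes to increasing destination ports starting at 33434, whereas -U sends UDP probes to one fixed port, 53 by default. A firewall that administratively drops high UDP ports (the "!X") may well still admit port 53, which would account for the difference:

        traceroute www.univ-paris1.fr        # UDP, ports 33434 and up
        traceroute -U www.univ-paris1.fr     # UDP, fixed port 53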

    Read the article

  • Compile PHP 5.3.2 with intl extension on Snow Leopard 10.6.3

    - by fsb
    Does anyone have tips on compiling PHP's intl extension on Snow Leopard? I'm getting compile errors each way I try it, and I've been googling for ages and getting nowhere. Any help greatly appreciated. When make gets to the huge gcc command to compile libphp5.bundle, I get the following error: Undefined symbols: "___gxx_personality_v0", referenced from: icu_4_2::MessageFormatAdapter::getArgTypeList(icu_4_2::MessageFormat const&, int&)in msgformat_helpers.o _umsg_parse_helper in msgformat_helpers.o _umsg_format_arg_count in msgformat_helpers.o _umsg_format_helper in msgformat_helpers.o CIE in msgformat_helpers.o ld: symbol(s) not found collect2: ld returned 1 exit status make: *** [libs/libphp5.bundle] Error 1 My compile commands are: MACOSX_DEPLOYMENT_TARGET=10.6 CFLAGS="-arch x86_64 -g -Os -pipe -no-cpp-precomp" CCFLAGS="-arch x86_64 -g -Os -pipe" CXXFLAGS="-arch x86_64 -g -Os -pipe" LDFLAGS="-arch x86_64 -bind_at_load" export CFLAGS CXXFLAGS LDFLAGS CCFLAGS MACOSX_DEPLOYMENT_TARGET ./configure --prefix=/usr \ --mandir=/usr/share/man \ --infodir=/usr/share/info \ --sysconfdir=/private/etc \ --with-apxs2=/usr/sbin/apxs \ --enable-cli \ --with-config-file-path=/etc \ --with-libxml-dir=/usr \ --with-openssl=/usr \ --with-zlib=/usr \ --with-bz2=/usr \ --with-curl=/usr \ --with-gd \ --with-jpeg-dir=/src/jpeg/jpeg-local \ --with-png-dir=/usr/X11R6 \ --with-freetype-dir=/usr/X11R6 \ --with-xpm-dir=/usr/X11R6 \ --with-ldap=/usr \ --with-ldap-sasl=/usr \ --enable-mbstring \ --enable-mbregex \ --with-mysql=mysqlnd \ --with-mysqli=mysqlnd \ --with-pdo-mysql=mysqlnd \ --with-mysql-sock=/var/mysql/mysql.sock \ --with-iodbc=/usr \ --enable-shmop \ --with-snmp=/usr \ --enable-soap \ --enable-sockets \ --enable-sysvmsg \ --enable-sysvsem \ --enable-sysvshm \ --with-xmlrpc \ --with-iconv-dir=/usr \ --with-xsl=/usr \ --with-pcre-regex=/src/pcre/pcre-local/usr/local \ --with-pcre-dir=/src/pcre/pcre-local/usr/local \ --with-icu-dir=/usr/local \ --enable-intl export EXTRA_CFLAGS="-lresolv" make
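    A hedged workaround: ___gxx_personality_v0 is a symbol from the C++ runtime, and ICU (which intl wraps) is C++, so the gcc-driven link of libphp5.bundle appears to be missing the C++ standard library. Adding it to the link flags before configuring is a common fix for this class of error:

        LDFLAGS="-arch x86_64 -bind_at_load -lstdc++"
        export LDFLAGS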

    Read the article

  • Setting Up Apache as a Forward Proxy with Caching

    - by Karl
    I am trying to set up Apache as a forward proxy with caching, but it does not seem to be working correctly. Getting Apache working as a forward proxy was no problem, but no matter what I do it is not caching anything, either to disk or to memory. I already checked to make sure nothing in the mods-enabled directory conflicts with mod_cache (ended up commenting it all out), and I also tried moving all of the caching-related directives to the configuration file for mod_cache. In addition I set up logging for caching requests, but nothing is being written to those logs. Below is my Apache config; any help would be greatly appreciated! <VIRTUALHOST *:8080> ProxyRequests On ProxyVia On #ErrorLog "/var/log/apache2/proxy-error.log" #CustomLog "/var/log/apache2/proxy-access.log" common CustomLog "/var/log/apache2/cached-requests.log" common env=cache-hit CustomLog "/var/log/apache2/uncached-requests.log" common env=cache-miss CustomLog "/var/log/apache2/revalidated-requests.log" common env=cache-revalidate CustomLog "/var/log/apache2/invalidated-requests.log" common env=cache-invalidate LogFormat "%{cache-status}e ..." # This path must be the same as the one in /etc/default/apache2 CacheRoot /var/cache/apache2/mod_disk_cache # This will also cache local documents. It usually makes more sense to # put this into the configuration for just one virtual host. CacheEnable disk / #CacheHeader on CacheDirLevels 3 CacheDirLength 5 ##<IfModule mod_mem_cache.c> # CacheEnable mem / # MCacheSize 4096 # MCacheMaxObjectCount 100 # MCacheMinObjectSize 1 # MCacheMaxObjectSize 2048 #</IfModule> <Proxy *> Order deny,allow Deny from all Allow from x.x.x.x #IP above hidden for this post <filesMatch "\.(xml|txt|html|js|css)$"> ExpiresDefault A7200 Header append Cache-Control "proxy-revalidate" </filesMatch> </Proxy> </VIRTUALHOST> Thank you once again!
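    A hedged sanity check, since the mods-enabled cleanup mentioned above could have unloaded the cache providers themselves: mod_cache needs a storage module (mod_disk_cache or mod_mem_cache on Apache 2.2) loaded alongside it before CacheRoot/CacheEnable do anything.

        a2enmod cache disk_cache
        apache2ctl -M | grep cache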

    Read the article

  • Unable to connect to shared (iscsitarget) dvd-rw drive on ubuntu karmic box

    - by Develop7
    Intro: I have a desktop with a DVD-RW drive that runs primarily on Linux (namely Ubuntu 9.10). My wife has a netbook that runs Windows XP and has no CD/DVD drive. There's also a LAN through our ADSL modem/router. I've "ported" (actually, I've just grabbed the sources and ran dpkg-buildpackage) the iscsitarget package from Ubuntu Lucid to Karmic (here are packages), installed it (sudo aptitude install iscsitarget; sudo m-a a-i iscsitarget) and configured it in the following way (/etc/ietd.conf): Target iqn.2020-01.local.develop7-desktop:storage.disc.dvdrw Lun 0 Path=/dev/sr0,Type=blockio #I've skipped commented lines Also, I've opened port 3260 with ufw: $ sudo ufw status | grep 3260 3260 ALLOW 192.168.1.0/24 Problem: But (here's the trouble) I still can't connect to this target from the Windows box. Microsoft Software iSCSI Initiator reports "Logon failure" on every connect attempt and, accordingly, fails to connect. After an unsuccessful connection attempt I've noticed this line in the output of dmesg | tail: iscsi_trgt: ioctl(299) invalid ioctl cmd c078690d Question: So the question is — what's wrong with my config/iSCSI target/whatever else? Or, in short — what am I doing wrong? Thanks in advance.
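    A hedged experiment rather than a confirmed fix: IET's blockio mode is reportedly picky about non-disk devices, so exporting the optical drive in fileio mode instead may be worth a try (same target name as above):

        Target iqn.2020-01.local.develop7-desktop:storage.disc.dvdrw
            Lun 0 Path=/dev/sr0,Type=fileio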

    Read the article

  • Start and shut down Tomcat via SSH

    - by bxshi
    Updated: I found out that the JDK path it uses is wrong: eval: 1: /opt/Java/jdk1.6.0_25/jre/bin/java: not found The Java should be lower-case java; how did that happen? When I run this script directly on the server, it works fine. I'm trying to start or shut down Tomcat from a remote client. On my server, I've got 3 different Tomcats: tomcat1, tomcat2, and tomcat3. First, I tried to run tomcat_path/bin/shutdown.sh over ssh to stop it, with the command ssh [email protected] "cd /home/jake/tomcat2/bin;exec bash ./shutdown.sh" Both " and ' were tried, but neither works; the output is eval: 1: /opt/Java/jdk1.6.0_25/jre/bin/java: not found It seems as if the shell script ran on my local client, but the server does have this file. Is there any way to run a shell script on the remote server correctly? Updated: I've run ssh [email protected] "sh -x /home/jake/tomcat/bin/shutdown.sh > /home/jake/tomcat.log 2>&1" and the output in tomcat.log is: + PRG=/home/jake/tomcat/bin/shutdown.sh + [ -h /home/jake/tomcat/bin/shutdown.sh ] + dirname /home/jake/tomcat/bin/shutdown.sh + PRGDIR=/home/jake/tomcat/bin + EXECUTABLE=catalina.sh + [ ! -x /home/jake/tomcat/bin/catalina.sh ] + exec /home/jake/tomcat/bin/catalina.sh stop eval: 1: /opt/Java/jdk1.6.0_25/jre/bin/java: not found
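    A hedged workaround: a non-interactive ssh command skips the login profile, so catalina.sh sees whatever stale JAVA_HOME is set in the remote environment (here, the miscapitalized /opt/Java path). Passing the correct home inline sidesteps that; the lower-case path below is assumed from the post's own remark:

        ssh [email protected] 'JAVA_HOME=/opt/java/jdk1.6.0_25 /home/jake/tomcat2/bin/shutdown.sh'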

    Read the article

  • Windows service runs file locally but not on server

    - by Ben
    I created a simple Windows service in .NET which runs a file. When I run the service locally, I see the file running in Task Manager just fine. However, when I run the service on the server, it won't run the file. I've checked the path to the file, which is fine. I also checked the permissions on the folder and file, and they're fine as well. Also, no exceptions are thrown. Below is the code used to launch the process which runs the file. I posted this first on Stack Overflow, and some people thought this was a config issue, so I moved it here. Any ideas? try { // TODO: Add code here to start your service. eventLog1.WriteEntry("VirtualCameraService started"); // Create An instance of the Process class responsible for starting the newly process. System.Diagnostics.Process process1 = new System.Diagnostics.Process(); // Set the directory where the file resides process1.StartInfo.WorkingDirectory = "C:\\VirtualCameraServiceSetup\\"; // Set the filename name of the file to be opened process1.StartInfo.FileName = "VirtualCameraServiceProject.avc"; // Start the process process1.Start(); } catch (Exception ex) { eventLog1.WriteEntry("VirtualCameraService exception - " + ex.InnerException); }
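    A hedged diagnostic, not a fix: services default to running as LocalSystem in session 0, where no visible desktop exists and per-user file associations for .avc may not resolve, so Process.Start can appear to succeed without anything showing in Task Manager. A hypothetical line before process1.Start() narrows down the identity in play:

        // hedged diagnostic: log who the service runs as and whether it is interactive
        eventLog1.WriteEntry("Running as " + Environment.UserName
            + ", interactive: " + Environment.UserInteractive);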

    Read the article

  • Batch copy gives errors, xcopy works fine

    - by ndm13
    I am writing a general file backup program. It searches the drive for files matching a set of types and then writes them to a folder on the desktop. I wrote it using xcopy on Windows XP, but upon learning that xcopy was deprecated in favor of robocopy in Vista and newer, and still wanting to maintain compatibility, I decided to switch to the non-deprecated copy. This is where the problems begin. I'm trying to fix the copy routine; I thought I had everything sorted out, but it doesn't copy anything. My output is zero files copied for every iteration. Original code using xcopy: for /r %%a in (*.bmp *.dds *.gif *.jpg *.jpeg *.png *.psd *.pspimage *.tga *.thm *.tif *.tiff) do ( echo f | xcopy "%%a" "%HOMEDRIVE%%HOMEPATH%\Desktop\LDR\Images\Bitmap\%%~nxa" /q /y /g /c ) Revised (broken) code using copy: for /r %%a in (*.bmp *.dds *.gif *.jpg *.jpeg *.png *.psd *.pspimage *.tga *.thm *.tif *.tiff) do ( copy "%%a" "%HOMEDRIVE%%HOMEPATH%\Desktop\LDR\Images\Bitmap\%%~nxa" /d /y /z ) Output: The system cannot find the path specified. 0 files copied. I know that it seems everyone uses either xcopy or robocopy, but can anyone help with copy? Note: I'm using batch to keep it very lightweight and command-line accessible.
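    A hedged diagnosis that matches the error text: unlike the "echo f | xcopy" trick, plain copy never creates the destination folder, so it fails with "The system cannot find the path specified" when the target tree is missing. Creating it once before the loop should be enough, leaving the for /r loop unchanged:

        if not exist "%HOMEDRIVE%%HOMEPATH%\Desktop\LDR\Images\Bitmap" md "%HOMEDRIVE%%HOMEPATH%\Desktop\LDR\Images\Bitmap"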

    Read the article

  • Re-configure Office 2007 installation unattended: Advertised components --> Local

    - by abstrask
    On our Citrix farm, I just found out that some sub-components are "Installed on 1st Use" (Advertised), which does not play well on terminal servers. Not only that, but you also get a rather non-descriptive error message when a document tries to use a component that is "Installed on 1st Use" (described in Plan to deploy Office 2010 in a Remote Desktop Services environment): Microsoft Office cannot run this add-in. An error occurred and this feature is no longer functioning correctly. Please contact your system administrator. I have ~50 Citrix servers where I need to change the installation state of all Advertised components to Local, so I created an XML file like this: <?xml version="1.0" encoding="utf-8"?> <Configuration Product="ProPlus"> <Display Level="none" CompletionNotice="no" SuppressModal="yes" AcceptEula="yes" /> <Logging Type="standard" Path="C:\InstallLogs" Template="MS Office 2007 Install on 1st Use(*).log" /> <Option Id="AccessWizards" State="Local" /> <Option Id="DeveloperWizards" State="Local" /> <Setting Id="Reboot" Value="NEVER" /> </Configuration> I run it with a command like this (using the appropriate paths): "[..]\setup.exe" /config ProPlus /config "[..]\Install1stUse-to-Forced.xml" According to the log file, the syntax appears to be accepted and the config file parsed: Parsing command line. Config XML file specified: [..]\Install1stUse-to-Forced.xml Modify requested for product: PROPLUS Parsing config.xml at: [..]\Install1stUse-to-Forced.xml Preferred product specified in config.xml to be: PROPLUS But the "Final Option Tree" still reads: Final Option Tree: AlwaysInstalled:local Gimme_OnDemandData:local ProductFiles:local VSCommonPIAHidden:local dummy_MSCOMCTL_PIA:local dummy_Office_PIA:local ACCESSFiles:local ... AccessWizards:advertised DeveloperWizards:advertised ... And the components remain "Advertised". Just to see if the installation state is overridden in another XML file, I ran: findstr /l /s /i "AccessWizards" *.xml against both my installation source and "%ProgramFiles%\Common Files\Microsoft Shared\OFFICE12\Office Setup Controller", but just found DefaultState to be "Local". What am I doing wrong? Thanks!
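    One hedged observation: for an already-installed product, Office 2007 setup normally expects maintenance mode via /modify rather than a doubled /config switch, e.g. (paths elided as in the post):

        "[..]\setup.exe" /modify ProPlus /config "[..]\Install1stUse-to-Forced.xml"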

    Read the article

  • Can't resolve offline file conflicts

    - by Bryan
    We use roaming profiles on our Server 2008 R2 domain, with folder redirection for 'desktop', 'my documents' and 'application data'. As our network is split across two sites, we have one file server at each site, configured to use domain-based DFS namespaces and DFS replication to keep things in sync. The DFS path for the replicated folder is as follows: \\domain\folderredirection$\<username>\<redirected-folder-name> The real paths are \\site-1-server\folderredirection$\<username>\<redirected-folder-name> and \\site-2-server\folderredirection$\<username>\<redirected-folder-name> As our users all switch between sites (sometimes several times per day), our folder redirection policy has to redirect to the DFS roots rather than being hardcoded to a specific server. Both DFS and DFS-R have been proven to be working perfectly. On our laptops, we use offline files for the redirected folders, and this also works fine; however, the problem is as follows: when conflicts occur in offline files, it is impossible to resolve them. I'm given the usual conflict resolution options (i.e. 'Ignore', 'Keep Both', 'Keep network' and 'Keep local'); however, none of these options resolves any conflict, yet no error is produced. We only use offline files on laptops, which have either Windows XP Professional or Windows 7 Professional installed. The problem is not specific to any one laptop; it affects every laptop and every conflicting file in exactly the same way. I would have thought the setup we have is common for companies with multiple sites, so I'm hoping someone will have seen this before.

    Read the article

  • Permissions on Mac for iTunes library with multiple users - idea

    - by John
    I currently have a lot of music on an external drive, with my iTunes library set up from there. However, periodically, when the external drive isn't connected, iTunes will default back to the library location under my home directory. I don't want to mess with an external drive, as my Mac HD is large enough to house the music collection. However, I have 4 family members - all with their own logins - using this same gob of music. I don't want 4 copies of the library, only one that all libraries reference. So, what I want to do is: 1 - move all music files to a shared directory at /Macintosh HD/users/music. I created this directory and adjusted permissions so all four users can read and write to it. 2 - get all four accounts to reference this library instead of the external or local home locations. I am hoping I can just check the box to keep the library organized in my account, which is the admin, and let iTunes move it all. Then delete the current libraries for each account and re-add from the new shared location. Will the iTunes organization process cause permission issues, either by restricting file access to my account only, breaking write permissions, or any other gotcha? I am having a hard time coming up with a smooth solution that won't break everything and cause me to have mega duplicates or access issues. I would prefer not to do any XML library file editing if possible. Am I dreaming? Thanks for any help.
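    A hedged ACL sketch for step 1, assuming the shared folder resolves to /Users/music and all four accounts belong to the built-in staff group; inherited ACEs would keep files writable by everyone regardless of which account iTunes reorganizes them under:

        sudo chmod -R +a "group:staff allow read,write,delete,add_file,add_subdirectory,file_inherit,directory_inherit" /Users/music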

    Read the article

  • Very poor SCSI hd performance on IBM x336 with LSI 1030 RAID1

    - by David Tschoepe
    I'm experiencing very poor performance on an IBM x336 server with dual 73GB 15k hard drives on a U320 controller, an LSI 1030. We're getting maybe 3.5MB/sec max (per the HD Tune utility). It should be at least 100MB/sec, I would think (another x335 box runs at 70-80MB/sec). The server was recently set up, and the problem wasn't really noticed at first, but it may have been there from the beginning; I'm not sure. I have installed the IBM ServeRAID Windows utility. The server is running Windows 2008 R2 Web edition (if that matters). I thought maybe one of the drives was bad, so far I have removed one of the drives from the array and tested again, but got the same results. I'm waiting for the RAID1 to resync, and I will try pulling the other drive next. I've also used the ServeRAID utility but haven't noticed anything in there that might indicate a problem. Not sure if I'm on the right path here, so I'm looking for some advice to track this down.

    Read the article

  • Failed to generate a user instance of SQL Server

    - by Goondocks
    I'm using Windows 7 Beta and trying to install a web application locally. This web site uses Microsoft SQL Server 2005 Express (SQLEXPRESS) and an MDF file in the web site's ~/App_Data folder. I was instructed to configure IIS7 to use the Classic .NET AppPool for this web application. Each time the web site loads, I receive the following error: There was an error trying to connect to the Database Server: Failed to generate a user instance of SQL Server due to failure in retrieving the user's local application data path. Please make sure the user has a local user profile on the computer. The connection will be closed. The Internet is packed with articles written on this subject. The prevailing wisdom seems to be: Configure the SQL Express service to use the Local System account. Delete the following directory: C:\Users\username\AppData\Microsoft\Microsoft SQL Server Data\SQLEXPRESS Neither of these fixes has made any impact. I have tinkered with permissions and settings for hours to no avail. Can anyone suggest a fix or help me understand how to get more detailed information about the problem?
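    A hedged alternative, with placeholder names: the error is specific to the user-instance feature, so disabling it in the connection string sidesteps the per-user data path lookup entirely (the SQL Express service account then needs rights to the data file):

        <add name="SiteDb" providerName="System.Data.SqlClient"
             connectionString="Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\Database.mdf;Integrated Security=True;User Instance=False" />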

    Read the article

  • IIS7 - Virtual Directories' Parent Paths behaving differently than previous versions

    - by MisterZimbu
    I'm doing a migration of a web server running on IIS 5 to IIS 7. I'm noticing that the virtual directories are behaving differently between the two. I have a site located at c:\inetpub\SiteName. This site contains a virtual directory "bob" that points at c:\virtualdirs\bob. There's a script in the bob folder (script.asp) that contains just: <!--#include virtual="../index.asp"--> I'm noticing different behaviors between IIS5 and IIS7 when I attempt to run the script by going to http://SiteName/bob/script.asp: IIS5 references the parent path of the site, and imports c:\inetpub\SiteName\index.asp. IIS7 references the parent folder of the virtual directory, and looks for a c:\virtualdirs\index.asp (that doesn't exist). Doing a Response.Write of a Server.MapPath confirms this. Is there a way to get IIS7 to behave like IIS5 in this regard? Unfortunately, moving index.asp and its logic into the virtualdirs folder isn't an option as the virtual directory will be shared across many sites (with differing index.asps). Thanks.
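    A hedged workaround, assuming index.asp stays at the site root: a root-relative virtual path resolves against the site in both IIS versions, avoiding the parent traversal that IIS7 applies relative to the virtual directory's physical parent:

        <!--#include virtual="/index.asp"-->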

    Read the article

  • Dual pane file manager for Mac OS

    - by Alex Kaushovik
    Is there a good customizable dual-pane file manager for Mac like Total Commander / Far Manager on Windows, or like Krusader / Midnight Commander on Linux? I used to work on Windows for quite a while and mostly used Far Manager and sometimes Total Commander; then I switched to Ubuntu Linux and used Krusader; now I've switched to Mac OS (Snow Leopard) and I'm having a hard time trying to find a good file manager... Many of the existing applications are trying to replace the Finder with "multimedia capabilities nobody cares about in a file manager - IMHO" (Path Finder, ForkLift); some of them are almost good dual-pane file managers (I couldn't remember examples), but none of them worked for me, mostly for one reason: I couldn't integrate my file/folder comparison utility (Araxis Merge for Mac) with them... The way it worked for me on Windows and Linux is that I would set the cursor on one file in the left pane, set the right-pane cursor on another file in the right pane, then press a hotkey that launched Araxis Merge with the comparison results for those two files/folders. It was very easy to set up in Far Manager (Windows) and Krusader (Linux; actually on Linux I used "Meld" instead of Araxis Merge...) The tool I'm looking for doesn't necessarily have to be free... Thank you!

    Read the article

  • How to change controller numbering/enumeration in Solaris 10?

    - by Jim
    After moving a Solaris 10 server to a new machine, the rpool disk is now c1t0d0. We have some third-party applications hard-coded for c0t0d0. How can I change the controller enumeration on this machine? There is no longer a c0. I've tried rebuilding /etc/path_to_inst, but the instance numbers don't seem to match up with the controller numbers. Also, it's not clear if i86pc platforms use this file. I've tried devfsadm -C to clear the dangling links, but I'm not sure how to make devfsadm start numbering from 0 again (or force certain devices in the tree to a specific controller number). Next I am going to try to create the symlinks manually in /dev/dsk and /dev/rdsk to point to the correct /devices entries. I feel like I am going way off path here. Any suggestions? Thanks. Update: This is on virtual ESXi hardware with an additional pass-through HBA. There is no controller 0 on the machine, that is for sure. devfsadm -C cleans up all the c0 device symlinks but keeps the already-linked controllers at their current IDs.

    Read the article

  • Hyper-V Virtual Machine Networking issues related to Max Ethernet Frame Size

    - by Goatmale
    I fixed an issue earlier today, but I'm interested in learning WHY the fix worked. We set up a new Hyper-V virtual machine only to discover that HTTP traffic wasn't working. HTTPS, pings, everything else was working fine. After months of prodding around, I took a shot in the dark: on the Hyper-V host server, the physical NIC card had an advanced setting of "Max Ethernet Frame Size" set to 1500. After setting this to 1514, the issue was fixed. Alternatively, setting it to 1512 did not solve the issue; 1514 is the magic number. My best guess is that when this setting was 1500, pings were getting through because their payload is a lot smaller than that of, say, HTTP traffic. As for HTTPS traffic, I read about something called "Path MTU discovery", which I assume is why HTTPS traffic was getting through fine, albeit more slowly. Looking at this post, people agree that 1518 is the max total frame size. Why didn't I need to change this setting to 1518 instead of 1514 bytes? And why is the default frame size 1500 if that's the max size of the Ethernet payload and not the max frame size?
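    The arithmetic behind those numbers, for reference: the frame-size setting most likely counts the header but not the trailing checksum, which the NIC strips in hardware before any length check applies.

        1500 bytes  max Ethernet payload (the MTU)
      +   14 bytes  header: 6 dst MAC + 6 src MAC + 2 EtherType
      = 1514 bytes  the value that worked
      +    4 bytes  frame check sequence (FCS), normally stripped by the NIC
      = 1518 bytes  the max frame on the wire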

    Read the article
