Search Results

Search found 23949 results on 958 pages for 'test'.

Page 220/958 | < Previous Page | 216 217 218 219 220 221 222 223 224 225 226 227  | Next Page >

  • Why can't “knife data bag from file” find existing json file on chef server?

    - by ellisera
    Summary: I'm running into a problem with "knife data bag from file", where knife doesn't recognize the .json data bag file pulled down from a remote git repo. Background: I'm currently trying to transition from chef-solo use to chef server while using the cookbooks, data bags and other chef info from our remote git repo. I've currently pulled down a copy of our git repo and set the cookbook path and data bag path in knife.rb. I also loaded the cookbooks, made adjustments, etc. Details: When trying to load our .json data bags by doing "knife data bag add from file FOLDER FILE" it looks like it worked until I do "knife data bag list" and it comes up blank. So I decided to try adding the edit option at the end to see what's being loaded, if it is. This is the error I get: knife data bag from file local_settings test.json -e nano ERROR: Could not find or open file 'test.json' in current directory or in 'data_bags/local_settings/test.json' The data bag file does exist, in the proper location, in a tested, working json file. I've also sometimes gotten an error saying "could not open data bag "local_settings". I would obviously like to keep the data bag path within the appropriate git repo folder to be able to keep track of changes in a more centralized location (our git repo, as opposed to the chef server). Any solutions, advice or pointers in the right direction are appreciated.
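
    A minimal sketch of the layout that usually makes this resolve, with repo paths standing in for the real checkout (the paths are assumptions, not taken from the question):
      # knife.rb (assumed): point data_bag_path at the repo checkout, not the server
      #   data_bag_path "/path/to/chef-repo/data_bags"
      cd /path/to/chef-repo                 # run knife from the repo root so relative lookups resolve
      knife data bag create local_settings  # the bag itself must exist on the server first
      knife data bag from file local_settings data_bags/local_settings/test.json
      knife data bag list                   # should now show local_settings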

    Read the article

  • Use icacls to make a directory read-only on Windows 7

    - by Dave G
    I'm attempting to test some filesystem exceptions in a Java-based application. I need to find a way to create a directory that is located under %TMP% that is set to read-only. Essentially on UNIX/POSIX platforms, I can do a chmod -w and get this effect. Under Windows 7/NTFS this is of course a different story. I'm running into multiple issues on this. My user has "administrative" rights (although this may not always be the case) and as such the directory is created with an ACL including: NT AUTHORITY\SYSTEM BUILTIN\Administrators <my current user> Is there a way using icacls to essentially get this directory into a state where it is read-only PERIOD, do my test, then restore the ACL for removal? EDIT With the information provided by @Ansgar Wiechers I was able to come up with a solution. I used the following: icacls dirname /deny %username%:(WD) In the page located here I found this in the remarks section: icacls preserves the canonical order of ACE entries as: * Explicit denials * Explicit grants * Inherited denials * Inherited grants By performing the above icacls command, I was able to set the current user's ability to write or append files (WD) to the directory to deny. Then it was a question of returning it to a state post test: icacls dirname /reset /t /c Done

    Read the article

  • MySQL remote access not working - Port Closed?

    - by dave.zap
    I am not able to get a remote connection established to MySQL. From my PC I am able to telnet to 3306 on the existing server, but when I try the same with the new server it hangs for a few minutes then returns # mysql -utest3 -h [server ip] -p Enter password: ERROR 2003 (HY000): Can't connect to MySQL server on '[server ip]' (110) Here is some output from the server. # nmap -sT -O localhost -p 3306 ... PORT STATE SERVICE 3306/tcp closed mysql ... # netstat -anp | grep mysql tcp 0 0 [server ip]:3306 0.0.0.0:* LISTEN 6349/mysqld unix 2 [ ACC ] STREAM LISTENING 12286 6349/mysqld /DATA/mysql/mysql.sock # netstat -anp | grep 3306 tcp 0 0 [server ip]:3306 0.0.0.0:* LISTEN 6349/mysqld unix 3 [ ] STREAM CONNECTED 3306 1411/audispd # lsof -i TCP:3306 COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME mysqld 6349 mysql 10u IPv4 12285 0t0 TCP [domain]:mysql (LISTEN) I am running... OS CentOS release 5.8 (Final) mysql 5.5.28 (Remi) Note: Internal connections to mysql work fine. I have disabled IPtables, the box has no other firewall, and it runs Apache on port 80 and ssh with no problem. I had followed this tutorial: http://www.cyberciti.biz/tips/how-do-i-enable-remote-access-to-mysql-database-server.html I have bound the IP address in my.cnf user=mysql bind-address = [server ip] port=3306 I even started over by deleting the mysql folder in my datastore and running mysql_install_db --datadir=/DATA/mysql --force Then I recreated all the users as per the manual... http://dev.mysql.com/doc/refman/5.5/en/adding-users.html I have created one test user CREATE USER 'test'@'%' IDENTIFIED BY '[password]'; GRANT ALL PRIVILEGES ON *.* TO 'test'@'%' WITH GRANT OPTION; FLUSH PRIVILEGES; So all I can see is that the port is not really open. Where else might I look? Thanks
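
    A few hedged checks that narrow down where the port is blocked; note the nmap above scanned localhost while mysqld is bound to the public IP, so a closed localhost port is expected:
      nmap -sT -p 3306 [server ip]      # scan the address mysqld actually listens on
      iptables -L -n -v                 # confirm no packet filter rules are loaded on the box
      tcptraceroute [server ip] 3306    # from the remote client: see where the TCP probe stops
                                        # (if it dies upstream, a provider firewall is filtering 3306)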

    Read the article

  • How to create custom content for the nginx error 502 page, keeping the original URL in the browser

    - by user123862
    I'm trying to serve a custom-language message for the nginx error page while keeping the URL in the browser, with no success so far. For example: I go to the URL xaluan.com/aaa/bbb.html while the backend server is down; nginx should show the 502 error at the same URL but with a custom message in my language. Test 1: I created a custom page at /usr/local/nginx/html/502.html with the following config, but the site still shows the default nginx error page at domain.com/502.html (the content of the web page is not what I created): error_page 502 /502.html; location = /502.html { root /usr/local/nginx/html; } Test 2: Then I created the same page in my www domain folder, /home/xaluano/public_html/502.html, but this keeps redirecting me to domain.com/502.html. The content is now what I created, but the URL is still not what I need: error_page 502 /502.html; location = /502.html { root /home/xaluano/public_html; internal; } EDIT/UPDATE for more detail 10/06/2012: please download my nginx config from http://pastebin.com/7iLD6WQq and the vhost config from http://pastebin.com/ZZ91KiY6. The test case: if the Apache httpd service is stopped (#service httpd stop) and I then open a browser and go to xaluan.com/modules.php?name=News&file=article&sid=123456, I see the 502 error with the same URL in the browser address bar. Custom error page: I need a config so that when Apache fails, nginx shows a custom message telling the user to wait one minute for the service to come back and then refreshes the current page with the same URL (the refresh I can do easily with JavaScript); nginx must not change the URL so the JavaScript can work. Any help will be great, thanks in advance.
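
    A quick way to see what is actually being sent back while the backend is down (the hostname is taken from the example above; the rest is a suggestion):
      curl -sI http://xaluan.com/aaa/bbb.html
      # a working error_page + internal setup returns "HTTP/1.1 502 Bad Gateway" with the custom
      # body at the same URL; a 301/302 with a Location: .../502.html header means something in
      # the vhost is issuing a redirect instead of serving the error page internally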

    Read the article

  • PHP/Linux File Permissions

    - by user1733435
    May I ask a question about file permissions? I set up an Ubuntu server running Apache. I have a simple PHP upload form and am able to upload files to /var/www/site/uploads as follows. sandbox@sandbox-virtual-machine:/var/www/site/uploads$ ll total 1736 drwxrwxrwx 2 www-data www-data 4096 Oct 18 02:53 ./ drwxrwxrwx 3 sandbox sandbox 4096 Oct 18 00:42 ../ -rw-r--r-- 1 www-data www-data 145998 Oct 18 02:53 3d wallpaper pic.jpg -rw-r--r-- 1 www-data www-data 166947 Oct 18 02:53 3D Wallpapers 9.jpg -rw-r--r-- 1 www-data www-data 1451489 Oct 18 02:53 6453_3d_landscape_hd_wallpapers_green.jpg Is there any way to upload files so they show up as -rw-r--r-- 1 sandbox sandbox 145998 Oct 18 02:53 3d wallpaper pic.jpg -rw-r--r-- 1 sandbox sandbox 166947 Oct 18 02:53 3D Wallpapers 9.jpg -rw-r--r-- 1 sandbox sandbox 1451489 Oct 18 02:53 6453_3d_landscape_hd_wallpapers_green.jpg so that I can feed them straight to a waiting/running shell script? Right now the waiting script (move, checksums, rename, resize, etc.) is unable to do anything with uploaded files that carry the www-data attributes. If I just create a file as the local account, such as sandbox@sandbox-virtual-machine:/var/www/site/uploads$ touch testfile then the script is able to run as I would like. Any suggestions would be appreciated; thanks in advance. Thanks to everyone who has given me help so far; I was able to make progress. Now I am close to getting this solved and have appended the output: sandbox@sandbox-virtual-machine:/var/www/site/uploads$ ll total 388 drwxrwxrwx 2 www-data www-data 4096 Oct 18 04:22 ./ drwxrwxrwx 3 sandbox sandbox 4096 Oct 18 04:17 ../ -rw-r--r-- 1 sandbox sandbox 166947 Oct 18 04:21 3D Wallpapers 9.jpg -rw-r--r-- 1 sandbox sandbox 219808 Oct 18 04:20 adafruit_pi.png -rw-rw-r-- 1 sandbox sandbox 0 Oct 18 04:22 test How can I set permissions on uploaded files so they look like 'test', the only difference being the group in the middle (compare adafruit_pi.png vs. test)? Which statement should I insert into the PHP code, please?
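
    A hedged workaround, assuming the real goal is only that the sandbox user's script can process files owned by www-data (the group membership and chmod mode below are suggestions, not from the question):
      sudo usermod -a -G www-data sandbox   # let the sandbox account act on www-data's files via the group
      # in the PHP upload handler, after move_uploaded_file():
      #   chmod($destination, 0664);        // make the file group-writable
      # actually re-owning files to sandbox would need a root-run step (e.g. a cron/incron chown),
      # because PHP runs as www-data and an unprivileged process cannot chown to another user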

    Read the article

  • Postfix SASL Authentication using PAM_Python

    - by Christian Joudrey
    Cross-post from: http://stackoverflow.com/questions/4337995/postfix-sasl-authentication-using-pam-python Hey guys, I just set up a Postfix server in Ubuntu and I want to add SASL authentication using PAM_Python. I've compiled pam_python.so and made sure that it is in /lib/security. I've also created the /etc/pam.d/smtp file and added: auth required pam_python.so test.py The test.py file has been placed in /lib/security and contains: # # Duplicates pam_permit.c # DEFAULT_USER = "nobody" def pam_sm_authenticate(pamh, flags, argv): try: user = pamh.get_user(None) except pamh.exception, e: return e.pam_result if user == None: pam.user = DEFAULT_USER return pamh.PAM_SUCCESS def pam_sm_setcred(pamh, flags, argv): return pamh.PAM_SUCCESS def pam_sm_acct_mgmt(pamh, flags, argv): return pamh.PAM_SUCCESS def pam_sm_open_session(pamh, flags, argv): return pamh.PAM_SUCCESS def pam_sm_close_session(pamh, flags, argv): return pamh.PAM_SUCCESS def pam_sm_chauthtok(pamh, flags, argv): return pamh.PAM_SUCCESS When I test the authentication using auth plain amltbXkAamltbXkAcmVhbC1zZWNyZXQ= I get the following response: 535 5.7.8 Error: authentication failed: no mechanism available In the postfix logs I have this: Dec 2 00:37:19 duo postfix/smtpd[16487]: warning: SASL authentication problem: unknown password verifier Dec 2 00:37:19 duo postfix/smtpd[16487]: warning: SASL authentication failure: Password verification failed Dec 2 00:37:19 duo postfix/smtpd[16487]: warning: localhost.localdomain[127.0.0.1]: SASL plain authentication failed: no mechanism available Any ideas? tl;dr Does anyone have step-by-step instructions on how to set up PAM_Python with Postfix? Christian
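
    For reference, the Cyrus-SASL glue that is usually needed so Postfix hands PLAIN logins to PAM via saslauthd; the file locations are the usual Ubuntu ones and are an assumption here:
      # /etc/postfix/sasl/smtpd.conf
      #   pwcheck_method: saslauthd
      #   mech_list: plain login
      # /etc/postfix/main.cf
      #   smtpd_sasl_auth_enable = yes
      # make sure saslauthd is running with the pam mechanism, then restart both services:
      sudo /etc/init.d/saslauthd restart
      sudo /etc/init.d/postfix restart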

    Read the article

  • HTTP Upload Problems

    - by jfoster
    We are running a marketplace on ColdFusion8 and IIS with a widely geographically distributed user base and have been receiving complaints of issues with some HTTP uploads. Most of the complaints are coming from geographically distant locations from our main datacenter on the US east coast. I've attempted to upload the same 70MB file from a US West coast test server to both our main site and a backup running the same code on a different network route and I saw the same issues fairly consistently in both places, so I've ruled out the code, route, and internal network errors. I've also tested uploads using both the native cf upload tag and a third party tool called SaFileUp. I saw the same issues with both upload tools, so I also don't think this is necessarily a ColdFusion problem. I don't have any problems uploading the test file from the East coast to other east coast servers, so I'm beginning to think that the distance between our users and our equipment is a factor. I've also found that smaller files are more likely to succeed than large ones (< 10MB) I tried the test upload with both IE and FF and did notice a difference in the way that the browsers seemed to handle packet errors. IE seemed to have a tough time continuing an upload after dropped / bad packets, whereas FF seemed to have the ability to gracefully resume an upload after experiencing packet problems. Has anyone experienced similar issues? Is there anything we can do on our side to make uploads more forgiving to packet loss or resumable after an error? A different upload tool etc… Do we need upload servers in more than one location to shorten the network routes between clients and servers? Does anyone think that switching uploads to SSL will help (no layer7 packet sniffing may lead to a smoother upload). Thanks.

    Read the article

  • Resource reference passing in puppet

    - by paweloque
    Is it possible to pass puppet resource references to other resources? My use-case is to build a Jenkins build pipeline with puppet. To chain Jenkins jobs into a pipeline I need to pass the successor job to a job. A subset of the definition is: jobs::build { "Build ${release_name}": release => $release_name, jenkins_jobs_path => $jenkins_jobs_path, successors => 'Deploy', } jobs::deploy { "Deploy ${release_name}": release => $release_name, jenkins_jobs_path => $jenkins_jobs_path, successors => 'Smoke Test', } In the definition you see that I define the successors by name, i.e. 'Deploy' and, in the case of the second job, 'Smoke Test'. What I'd like to do is to pass a reference to a resource and extract the name from it: jobs::build { "Build ${release_name}": release => $release_name, jenkins_jobs_path => $jenkins_jobs_path, successors => Jobs::Deploy["Deploy ${release_name}"], } jobs::deploy { "Deploy ${release_name}": release => $release_name, jenkins_jobs_path => $jenkins_jobs_path, successors => Jobs::Smoke_test["Smoke Test ${release_name}"], } And then within the jobs::deploy and jobs::build definition I'd access the resource by reference and query for its type, etc. Is it possible to achieve this in puppet?

    Read the article

  • How to set up Git on remote instance using keys from local machine?

    - by Lucas
    I have a setup where I can ssh into my remote server (i.e. a Google Compute instance) from my local machine. I used to be able to clone, push, and pull from a repository on my remote instance without adding any keys to my remote instance, nor adding any new keys to my repository online (just the public key from my local machine). I believe the remote instance was using the keys from my local machine to authenticate my Git pushes and pulls. However, the system broke when I reinstalled the OS on my local machine. Now when I try to connect to the GitHub server from my remote instance, I get the following: Cannot clone: [lucas@ecoinstance]~/node$ git clone [email protected]:lucasExample/test.git test Cloning into 'test'... Permission denied (publickey). fatal: The remote end hung up unexpectedly Cannot push: [lucas@ecoinstance]~/node/nodetest1$ git status # On branch master # Your branch is ahead of 'origin/master' by 1 commit. # nothing to commit (working directory clean) [lucas@ecoinstance]~/node/nodetest1$ git push Permission denied (publickey). fatal: The remote end hung up unexpectedly Additional info: [lucas@ecoinstance]~/node/nodetest1$ ssh-add -l Could not open a connection to your authentication agent. [lucas@ecoinstance]~/.ssh$ ls authorized_keys known_hosts As you can see, I have no keys on my remote instance. I have never had keys on the remote, and it would push and pull just fine until I re-installed my local OS. I can still clone, push, and pull on my local machine; it is just my remote machine that cannot get authentication. My local OS is Ubuntu 14.04 and my remote OS is Debian Wheezy. Any suggestions would be great. I am not sure how to search for this concept where I can authenticate from a remote instance via my local machine, so any references are appreciated as well.
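
    What most likely made this work before the reinstall is SSH agent forwarding, so the remote git commands reuse the key loaded on the local machine; a minimal sketch (the key path is assumed):
      # on the freshly reinstalled local machine
      eval "$(ssh-agent -s)"
      ssh-add ~/.ssh/id_rsa        # the key whose public half is registered with GitHub
      ssh -A lucas@ecoinstance     # -A forwards the agent into the remote session
      # on the remote instance, ssh-add -l should now list the forwarded key and git push/pull will authenticate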

    Read the article

  • MS SQL to MySQL using MySQL Migration Toolkit: permission issue

    - by Zeno
    I have a MS SQL imported into SQL Server 2008 from a .bak and I set it to Mixed mode. I have a SQL user (called "test") that can correctly access the database using SQL Server. I need to convert this to a MySQL database, so I got the MySQL Migration Toolkit. I pick "MS SQL Server" and then it asks for the hostname/username/password/database. I'm not 100% sure on these, but I used "localhost" (running on same computer), left the port as is (1433) and the username/password ("test") for the SQL Server. And I used the database name for the SQL Server database I'm looking to import. I clicked next, enter my MySQL database details and then attempt to run it and I get this error: Connecting to source database and retrieve schemata names. Initializing JDBC driver ... Driver class MS SQL JDBC Driver Opening connection ... Connection jdbc:jtds:sqlserver://localhost:1433/Orders;user=test;password=blah;charset=utf-8;domain= The list of schema names could not be retrieved (error: 0). ReverseEngineeringMssql.getSchemata :Network error IOException: Connection refused: connect Details: net.sourceforge.jtds.jdbc.ConnectionJDBC2.<init>(ConnectionJDBC2.java:372) net.sourceforge.jtds.jdbc.ConnectionJDBC3.<init>(ConnectionJDBC3.java:50) net.sourceforge.jtds.jdbc.Driver.connect(Driver.java:178) java.sql.DriverManager.getConnection(Unknown Source) java.sql.DriverManager.getConnection(Unknown Source) com.mysql.grt.modules.ReverseEngineeringGeneric.establishConnection(ReverseEngineeringGeneric.java:141) com.mysql.grt.modules.ReverseEngineeringMssql.getSchemata(ReverseEngineeringMssql.java:99) sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) java.lang.reflect.Method.invoke(Unknown Source) com.mysql.grt.Grt.callModuleFunction(Unknown Source)
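
    Since the failure is a plain "Connection refused" from the jTDS driver, a couple of hedged checks on the SQL Server box itself (standard Windows commands; the service name assumes the default instance):
      REM is anything listening on the TCP port jTDS is dialing?
      netstat -an | findstr :1433
      REM if not, enable the TCP/IP protocol in SQL Server Configuration Manager
      REM (it is disabled by default on several editions), then restart the service:
      net stop MSSQLSERVER && net start MSSQLSERVER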

    Read the article

  • can't get php mail() working on Ubuntu desktop version with sendmail and postfix

    - by user36428
    I'm running Ubuntu 9.10 LAMP and trying to do a simple email test with PHP and I'm not getting any emails sent. mail("[email protected]", "eric-linux test", "test") or die("can't send mail"); I get no errors from PHP when running that script. In my php.ini file is: sendmail_path = /usr/lib/sendmail -t -i $ sudo ps aux | grep sendmail eric 2486 0.0 0.4 8368 2344 pts/0 T 14:52 0:00 sendmail -s “Hello world” [email protected] eric 8747 0.0 0.3 5692 1616 pts/2 T 16:18 0:00 sendmail eric 8749 0.0 0.3 5692 1636 pts/2 T 16:18 0:00 sendmail start eric 9190 0.0 0.3 5692 1636 pts/2 T 19:12 0:00 sendmail start eric 9192 0.0 0.3 5692 1616 pts/2 T 19:12 0:00 sendmail eric 9425 0.0 0.3 5692 1620 pts/1 T 19:37 0:00 sendmail eric 9427 0.0 0.3 6584 1636 pts/1 T 19:37 0:00 sendmail restart eric 9429 0.0 0.3 5692 1636 pts/1 T 19:38 0:00 /usr/lib/sendmail restart eric 9432 0.0 0.1 3040 804 pts/1 R+ 19:38 0:00 grep --color=auto sendmail When I run $ sendmail start it just hangs there doing nothing. I installed postfix also to see if it would help, but it didn't. I tried to see port 25: eric@eric-linux:~$ telnet localhost 25 Trying ::1... Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. 220 eric-linux ESMTP Postfix (Ubuntu) thanks
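
    A few hedged checks, since the ps output shows suspended sendmail jobs (state "T") and php.ini points at /usr/lib/sendmail while Ubuntu usually ships the wrapper as /usr/sbin/sendmail:
      which sendmail
      ls -l /usr/sbin/sendmail /usr/lib/sendmail
      # assumed php.ini correction if the wrapper lives in /usr/sbin:
      #   sendmail_path = /usr/sbin/sendmail -t -i
      pkill -f sendmail                                        # clear out the stopped sendmail processes
      echo "test body" | /usr/sbin/sendmail -v user@example.com   # test delivery outside PHP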

    Read the article

  • How do I configure PHP5 and Apache2 on Ubuntu Server?

    - by rofls
    I'm trying to follow these instructions (under the Troubleshooting PHP 5 heading). I have PHP installed and when I run a2enmod php5 it says "Module php5 already enabled". The problem is I created a file, test.php, that's just this: <?php phpinfo(); ?> and put it in /var/www, like the instructions tell me to, but running curl http://localhost/test.php produces an Apache-generated 404 that says it can't find that file. I have: ServerName localhost DocumentRoot /var/www in one of the sites-available files in the /etc/apache2 directory. I should probably figure this out on my own, but the instructions say for troubleshooting to do the following: "If the problem persists, check your PHP file authorisations (it should be readable at least by Ubuntu user "apache"), and check if the PHP code is correct. For instance, copy your PHP file, replace your whole PHP file content by "<?php phpinfo(); ?>" (without the quotation marks): if you get the PHP test page in your web browser, then the problem is in your PHP code, not in Apache or PHP configuration nor in file permissions. If this doesn't work, then it is a problem of file authorisation, Apache or PHP configuration, cache not emptied, or Apache not running or not restarted." And I don't know where the PHP file authorisations are or how to do that.
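
    A quick way to see which vhost and DocumentRoot are actually answering localhost (the site name in the a2ensite line is a placeholder):
      ls -l /var/www/test.php     # confirm the file really is in /var/www and is world-readable
      apache2ctl -S               # dumps the parsed vhosts and which DocumentRoot serves localhost
      sudo a2ensite mysite        # enable the sites-available file if it is not yet linked
      sudo /etc/init.d/apache2 reload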

    Read the article

  • Async ignored on AJAX requests on Nginx server

    - by eComEvo
    Despite sending an async request to the server over AJAX, the server will not respond until the previous unrelated request has finished. The following code is only broken in this way on Nginx, but runs perfectly on Apache. This call will start a background process and it waits for it to complete so it can display the final result. $.ajax({ type: 'GET', async: true, url: $(this).data('route'), data: $('input[name=data]').val(), dataType: 'json', success: function (data) { /* do stuff */} error: function (data) { /* handle errors */} }); The below is called after the above, which on Apache requires 100ms to execute and repeats itself, showing progress for data being written in the background: checkStatusInterval = setInterval(function () { $.ajax({ type: 'GET', async: false, cache: false, url: '/process-status?process=' + currentElement.attr('id'), dataType: 'json', success: function (data) { /* update progress bar and status message */ } }); }, 1000); Unfortunately, when this script is run from nginx, the above progress request never even finishes a single request until the first AJAX request that sent the data is done. If I change the async to TRUE in the above, it executes one every interval, but none of them complete until that very first AJAX request finishes. Here is the main nginx conf file: #user nobody; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; server_names_hash_bucket_size 64; # configure temporary paths # nginx is started with param -p, setting nginx path to serverpack installdir fastcgi_temp_path temp/fastcgi; uwsgi_temp_path temp/uwsgi; scgi_temp_path temp/scgi; client_body_temp_path temp/client-body 1 2; proxy_temp_path temp/proxy; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; # Sendfile copies data between one FD and other from within the kernel. # More efficient than read() + write(), since the requires transferring data to and from the user space. sendfile on; # Tcp_nopush causes nginx to attempt to send its HTTP response head in one packet, # instead of using partial frames. This is useful for prepending headers before calling sendfile, # or for throughput optimization. tcp_nopush on; # don't buffer data-sends (disable Nagle algorithm). Good for sending frequent small bursts of data in real time. tcp_nodelay on; types_hash_max_size 2048; # Timeout for keep-alive connections. Server will close connections after this time. keepalive_timeout 90; # Number of requests a client can make over the keep-alive connection. This is set high for testing. keepalive_requests 100000; # allow the server to close the connection after a client stops responding. Frees up socket-associated memory. reset_timedout_connection on; # send the client a "request timed out" if the body is not loaded by this time. Default 60. client_header_timeout 20; client_body_timeout 60; # If the client stops reading data, free up the stale client connection after this much time. Default 60. 
send_timeout 60; # Size Limits client_body_buffer_size 64k; client_header_buffer_size 4k; client_max_body_size 8M; # FastCGI fastcgi_connect_timeout 60; fastcgi_send_timeout 120; fastcgi_read_timeout 300; # default: 60 secs; when step debugging with XDEBUG, you need to increase this value fastcgi_buffer_size 64k; fastcgi_buffers 4 64k; fastcgi_busy_buffers_size 128k; fastcgi_temp_file_write_size 128k; # Caches information about open FDs, freqently accessed files. open_file_cache max=200000 inactive=20s; open_file_cache_valid 30s; open_file_cache_min_uses 2; open_file_cache_errors on; # Turn on gzip output compression to save bandwidth. # http://wiki.nginx.org/HttpGzipModule gzip on; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; gzip_http_version 1.1; gzip_vary on; gzip_proxied any; #gzip_proxied expired no-cache no-store private auth; gzip_comp_level 6; gzip_buffers 16 8k; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript; # show all files and folders autoindex on; server { # access from localhost only listen 127.0.0.1:80; server_name localhost; root www; # the following default "catch-all" configuration, allows access to the server from outside. # please ensure your firewall allows access to tcp/port 80. check your "skype" config. # listen 80; # server_name _; log_not_found off; charset utf-8; access_log logs/access.log main; # handle files in the root path /www location / { index index.php index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root www; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100 # location ~ \.php$ { try_files $uri =404; fastcgi_pass 127.0.0.1:9100; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } # add expire headers location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt|js|css)$ { expires 30d; } # deny access to .htaccess files (if Apache's document root concurs with nginx's one) # deny access to git & svn repositories location ~ /(\.ht|\.git|\.svn) { deny all; } } # include config files of "enabled" domains include domains-enabled/*.conf; } Here is the enabled domain conf file: access_log off; access_log C:/server/www/test.dev/logs/access.log; error_log C:/server/www/test.dev/logs/error.log; # HTTP Server server { listen 127.0.0.1:80; server_name test.dev; root C:/server/www/test.dev/public; index index.php; rewrite_log on; default_type application/octet-stream; #include /etc/nginx/mime.types; # Include common configurations. include domains-common/location.conf; } # HTTPS server server { listen 443 ssl; server_name test.dev; root C:/server/www/test.dev/public; index index.php; rewrite_log on; default_type application/octet-stream; #include /etc/nginx/mime.types; # Include common configurations. include domains-common/location.conf; include domains-common/ssl.conf; } Contents of ssl.conf: # OpenSSL for HTTPS connections. 
ssl on; ssl_certificate C:/server/bin/openssl/certs/cert.pem; ssl_certificate_key C:/server/bin/openssl/certs/cert.key; ssl_session_timeout 5m; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100 location ~ \.php$ { try_files $uri =404; fastcgi_param HTTPS on; fastcgi_pass 127.0.0.1:9100; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } Contents of location.conf: # Remove trailing slash to please Laravel routing system. if (!-d $request_filename) { rewrite ^/(.+)/$ /$1 permanent; } location / { try_files $uri $uri/ /index.php?$query_string; } # We don't need .ht files with nginx. location ~ /(\.ht|\.git|\.svn) { deny all; } # Added cache headers for images. location ~* \.(png|jpg|jpeg|gif)$ { expires 30d; log_not_found off; } # Only 3 hours on CSS/JS to allow me to roll out fixes during early weeks. location ~* \.(js|css)$ { expires 3h; log_not_found off; } # Add expire headers. location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt)$ { expires 30d; } # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100 location ~ \.php$ { try_files $uri /index.php =404; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; fastcgi_pass 127.0.0.1:9100; } Any ideas where this is going wrong?

    Read the article

  • Cisco Multi-DMZ firewall

    - by BParker
    I need to find a firewall that will give me 1 LAN port, and 5-7 DMZ ports. I have a requirement to replace some FreeBSD systems that are used to run some testing equipment. It is essential that the DMZ ports cannot communicate with each other, but the LAN port can communicate with everyone. That way a user on the LAN can connect to the test systems, but the test systems are isolated entirely and cannot interfere with each other. One of the DMZs will be connected to a VMware ESXi server, one to a standard server, and the rest to various types of equipment. The LAN port will be connected to the corporate LAN switch. Sorry if I am a little vague, I am just trying to work all this out myself! Currently we have a FreeBSD box configured, but the quad-port NICs are pretty expensive, and the PC itself is old, so I would prefer to replace it with a dedicated piece of kit which can do the same job, but more reliably! These test rigs are used all over the place, and get moved quite often, so I am aiming for Cisco kit for ease of configuration and reliability of the hardware itself. Thanks

    Read the article

  • Why are my SOCKS proxies slow?

    - by vps_newcomer
    I have a Linux VPS, and I have tried a few SOCKS proxy setups to test their performance (all tests were run against speedtest.net): a standard SSH tunnel proxy gave 0.8mbit/s download and 0.1-0.2mbit/s upload, and a dante-server proxy gave 1.3mbit/s download and 0.4-0.5mbit/s upload. I am wondering why these speeds are so slow. Is anything shaping them? Is it just the nature of SOCKS proxies? I know that the SSH tunnel has to do encryption and whatnot, so that is why it's slow, but I was surprised to see that the second setup was also quite slow. On the VPS I have seen download speeds of 25MB/s (that's about 200mbit/s) and upload speeds of at least 5MB/s (I haven't got a good enough pipe to test anything faster). The other option I was going to try is to set up OpenVPN and see how that goes; however, I need to find a good tutorial as it's fairly complicated to set up. So why is it so slow? How can I test to see where the bottleneck is? How can I make it faster? :D
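
    One way to separate the tunnel from the raw path (the tool, port, and hostnames below are assumptions, not part of the original setup):
      iperf3 -s                          # on the VPS: raw TCP throughput test server
      iperf3 -c vps.example.com          # from the client: throughput with no proxy involved
      # then pull the same file through the SOCKS proxy and compare:
      curl --socks5 localhost:1080 -o /dev/null http://vps.example.com/testfile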

    Read the article

  • Disk IO slow on ESXi, even slower on a VM (freeNAS + iSCSI)

    - by varesa
    I have a server with ESXi 5 and iSCSI-attached network storage (4x1TB RAID-Z on FreeNAS 8.0.4). Those two machines are connected to each other with gigabit ethernet. The raid-z volume is divided into three parts: two zvols, shared with iSCSI, and one directly on top of ZFS, shared with NFS and similar. I ssh'd into the FreeNAS box and did some testing on the disks. I used dd to test the third part of the disks (straight on top of ZFS). I copied a 4GB (2x the amount of RAM) block from /dev/zero to the disk, and the speed was 80MB/s. One of the iSCSI-shared zvols is a datastore for the ESXi. I did a similar test with time dd there. Since the dd there did not report the speed, I divided the amount of data transferred by the time shown by time. The result was around 30-40 MB/s. That's about half of the speed from the FreeNAS host! Then I tested the IO on a VM running on the same ESXi host. The VM was a light CentOS 6.0 machine, which was not really doing anything else at that time. There were no other VMs running on the server at the time, and the other two "parts" of the disk array were not used. A similar dd test gave me a result of about 15-20 MB/s. That is again about half of the result on a lower level! Of course there is some overhead in raid-z - zfs - zvolume - iSCSI - VMFS - VM, but I don't expect it to be that big. I believe there must be something wrong in my system. I have heard about bad performance of FreeNAS's iSCSI; is that it? I have not managed to get any other "big" SAN OS to run on the box (NexentaSTOR, openfiler). Can you see any obvious problems with my setup?
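
    For reference, the kind of sequential-write test described above looks roughly like this; the paths are placeholders and the available dd flags differ between FreeBSD, busybox (ESXi) and GNU (CentOS) dd:
      dd if=/dev/zero of=/mnt/tank/ddtest bs=1m count=4096                        # on the FreeNAS dataset
      time dd if=/dev/zero of=/vmfs/volumes/datastore1/ddtest bs=1M count=4096    # on the ESXi datastore
      dd if=/dev/zero of=/root/ddtest bs=1M count=4096 conv=fdatasync             # inside the CentOS VM
      # conv=fdatasync (GNU dd) flushes cached data before timing ends, so the VM number is not inflated by the page cache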

    Read the article

  • Copying email with qmail and Plesk

    - by Greg
    I need to keep a copy of all outgoing and incoming email (for a single domain if possible) using qmail or Plesk. I can't recompile qmail, so qmailtap is out of the question, as is setting QUEUE_EXTRA in extra.h. I'm pretty sure it should be possible with Plesk's mailmng utility, aka Mail Handlers but I'm having trouble getting them to work. I've registered 2 hooks: incoming hook ./mailmng --add-handler --handler-name=incoming --recipient-domain=example.com --executable=/xxx/incoming.sh --context=/xxx/incoming/ --hook=before-local incoming.sh #!/bin/bash # The email is passed on stdin - grab it to a variable e=`cat -` # $1 = context (/xxx/incoming) # $3 = recipient ([email protected]) # Create /xxx/incoming/[email protected] mkdir -p $1$3 # Save the email to /xxx/incoming/[email protected]/0123456789.txt echo "$e" > $1$3/`date +%s%N`.txt # Echo PASS to stderr echo 'PASS' >&2 # Echo the email to stdout echo "$e" outgoing hook # ./mailmng --add-handler --handler-name=outgoing --sender-domain=holidaysplease.com --executable=/xxx/outgoing.sh --context=/xxx/outgoing/ --hook=before-remote The outgoing.sh file is the same as incoming.sh, except replace $3 (recipient) with $2 (sender). The incoming hook does work, but saves 2 copies of each email - one before and one after SpamAssassin has run. The outgoing hook doesn't seem to get called at all. So finally, my questions are: How can I make the incoming hook save only a single copy (preferably after SpamAssassin has run)? How can I get the outgoing hook to work?

    Read the article

  • Computer Comparison - which is "better"

    - by David Murdoch
    A company I work with recently replaced their old server and gave it to me. Their old server is a Dell PowerEdge 2600. I've been playing with the machine and even installed Windows Server 2008 on it...and it seems to run it pretty well. Here are the specs for the two machines: Dev Machine: AMD Athlon64 3000+ 2.38 GHz (overclocked from 1.8GHz [@ 280x8.5] - it is stable-ish) Memory (RAM): 1x1GB OCZ PC3200 (Dual-Channel) 300GB HD OS: Windows XP Pro (32bit) SuperPi 1M digit test: 40 seconds Dell PowerEdge 2600 Server: Intel Xeon CPU 2.8GHz 2.8GHz Memory (RAM): 512MBx2 (PC2700, not dual channel) 68GB HD (RAID 5) OS: Windows Server 2000 (32bit) SuperPi 1M digit test: 56 seconds [using 1 processor] (Themes and Aero-Glass UI turned off, of course) I use my computer to regularly run Photoshop CS5, Illustrator CS5, Flash CS5, 5 browsers (Chrome, FF, IE, Safari, Opera), iTunes, Visual Studio 2010, and Kaspersky Internet Security 2010 [sometimes simultaneously :-) ]. The SuperPi test has my dev machine coming in about 30% faster than the Server machine...though this could be due to the server running "Vista" with background processes prioritized. Do you think it would be realistic/advantageous for me to move from my dev machine to the Dell PowerEdge 2600? Is it possible to install additional DVD drives/burners on the server? Can I install my internal 300 GB hard drive on the server? Can I add some USB 2.0 ports? Note: I'll probably install Win XP Pro on the dev machine if I do switch. If not, are there any creative and useful ways for me to take advantage of this server (with the goal of faster computing)?

    Read the article

  • Invalid Parameter on node puppet

    - by chandank
    I am getting an error: err: Could not retrieve catalog from remote server: Error 400 on SERVER: Invalid parameter port at /etc/puppet/manifests/nodes/node.pp:652 on node test-puppet My node definition (line 652 of node.pp): node 'test-puppet' { class { 'syslog_ng': host => "newhost", ip => "192.168.1.10", port => "1999", logfile => "/var/log/test.log", } } On the module side: class syslog_ng::config ( $host , $ip , $port, $logfile){ file {'/etc/syslog-ng/syslog-ng.conf': ensure => present, owner => 'root', group => 'root', content => template('syslog-ng/syslog-ng.conf.erb'), notify => Service['syslog-ng'], require => Class['syslog_ng::install'], } file {"/etc/syslog-ng/conf/${host}.conf": ensure => present, owner => 'root', group => 'root', notify => Service['syslog-ng'], content => template("syslog-ng/${host}.conf.erb"), require => Class['syslog_ng::install'], } } I think I am doing it as per the puppet documentation.
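
    The parameters above are declared on syslog_ng::config, so a class { 'syslog_ng': ... } declaration has no port parameter to accept. A sketch of a forwarding wrapper, assuming that is the intent (the file path and layout are illustrative):
      # modules/syslog_ng/manifests/init.pp
      class syslog_ng ($host, $ip, $port, $logfile) {
        class { 'syslog_ng::config':
          host    => $host,
          ip      => $ip,
          port    => $port,
          logfile => $logfile,
        }
      }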

    Read the article

  • Why am I seeing MailSlot Browse messages on unrouted ports of my Linux box?

    - by nmichaels
    I have a Linux box (Debian squeeze) with several NICs. The ones of interest are: eth3 - my main link to the network (dhcp on 10.20.30.0/24) eth0 - the first connection to my test network (static: 192.168.1.2) eth4 - the second connection to my test network (static: 192.168.1.1) My routing table looks like this: $ sudo route Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 10.20.30.0 * 255.255.255.0 U 0 0 0 eth3 default 10.20.30.254 0.0.0.0 UG 0 0 0 eth3 I have the 2 test net ports connected to each other with a crossover cable and an instance of wireshark running on each port. Every once in a while, I'll see a packet like the following show up. Who could be doing this, and how do I convince them to stop? I do have Samba running on the machine (for a cifs mount) but don't see why it would be sending packets out to unrouted ports. I had a Windows VM running in VMWare Client and thought that might be causing it, but it still happens without it. What I want is totally silent interfaces so I can run some tests with Scapy over them.
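
    A hedged way to pin down the sender: browser/MailSlot announcements normally ride the NetBIOS datagram service, and Samba's nmbd is a common source. Interface names are taken from above; the smb.conf directives are the standard Samba ones:
      tcpdump -i eth0 -nn -e udp port 137 or udp port 138   # note the source MAC/IP of the announcements
      # if nmbd/smbd turn out to be the source, pin Samba to the LAN side only in /etc/samba/smb.conf:
      #   interfaces = eth3
      #   bind interfaces only = yes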

    Read the article

  • Win 2008 R2 terminal server and redirected printer queue security

    - by Ian
    I have a case where I need a non-priv account to be able to make a modification to the redirected printer. I know, it's not advisable, but we're not giving them access - changes will be made in code. So, following the docs (http://technet.microsoft.com/en-us/library/ee524015(WS.10).aspx) I modified the default security for new printer queues. This doesn't work though, as Windows doesn't seem to assign the privs you configure in the printer admin tool to redirected printer queues. As a test I added a non-priv test user to the default security tab in the printer admin tool (Control Panel - Admin Tools - Printer Admin). I assigned it all privs (it's a test) and logged the user into the terminal server. The redirected printers duly appeared as usual. However, if I open the printer properties - security tab, the user appears in the list of accounts/groups but the options I selected (all privs) are not set. Instead the user's special privs box is marked, and when I click on 'advanced options' and view them, there is nothing marked. So, something is clearing these options... the question is, why, and how can I convince it not to? Ian

    Read the article

  • How to stop Cron from sending messages about errors

    - by Beck
    I have these strange mails coming from cron: Return-Path: <[email protected]> Delivered-To: [email protected] Received: by domain.com (Postfix, from userid 0) id 6F944264D0; Mon, 10 Jan 2011 10:35:01 +0000 (UTC) From: [email protected] (Cron Daemon) To: [email protected] Subject: Cron <root@domain> lynx -dump http://www.domain.com/cron/realqueue Content-Type: text/plain; charset=ANSI_X3.4-1968 X-Cron-Env: <SHELL=/bin/sh> X-Cron-Env: <PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin> X-Cron-Env: <HOME=/root> X-Cron-Env: <LOGNAME=root> Message-Id: <[email protected]> Date: Mon, 10 Jan 2011 10:35:01 +0000 (UTC) /bin/sh: lynx: not found I have these cron settings in the crontab file: SHELL=/bin/sh PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin */5 * * * * lynx -dump http://www.domain.com/cron/realqueue 17 * * * * root cd / && run-parts --report /etc/cron.hourly 25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily ) 47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly ) 52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly ) Lynx is installed on my Ubuntu box as well. Of course, domain.com is just a stand-in for my real domain. Thanks ;)
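
    Two hedged fixes that usually stop these mails: give cron the absolute path to lynx (so the job stops failing) and/or discard the job's output, since cron only mails when a job prints something. The path below comes from which lynx and is an assumption:
      which lynx
      # crontab line using the absolute path and discarding output:
      */5 * * * * /usr/bin/lynx -dump http://www.domain.com/cron/realqueue >/dev/null 2>&1
      # if this line lives in /etc/crontab (rather than root's own crontab) it also needs a user
      # field, e.g. "root", before the command; alternatively MAILTO="" at the top suppresses all mail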

    Read the article

  • Cannot 301 redirect with IIS URL Rewrite Module

    - by Justin
    I am trying to troubleshoot my issue with the URL Rewrite Module on IIS 7. I migrated a WordPress blog over to BlogEngine.net. There were only about 5 entries that I wanted to 301-redirect to the new blog, so I wanted to simply create 5 exact-match redirect rules using the rewrite module. For some reason the exact-match rule never seems to take effect; I always get a 404 error when the original URL is navigated to. I verified that my exact-match pattern matched the existing backlinks, and it does. I then tried a simple test and got the same behavior, no redirection. I created a page, test.html, on my site, and I then created a second page, test2.html. So my exact-match pattern is: "http://www.mydomain.com/test.html" And the rule is supposed to do a 301 redirect to "http://www.mydomain.com/test2.html" The redirect never happens. I created the steps for the rule based on the instructions on this page: http://learn.iis.net/page.aspx/461/creating-rewrite-rules-for-the-url-rewrite-module/ I don't see that I left out a step. After I apply the rule I've even gone as far as doing an IISReset to make sure it would be in effect, but still no luck. Any thoughts on what I might have left out? (Note: my rewrite rules don't include the " " around them, but I had to add them since Server Fault thinks I am trying to spam the system with multiple URLs.)
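
    One common gotcha: the URL Rewrite match pattern is tested against the URL path relative to the site root ("test.html"), not the full absolute URL, so an exact-match pattern containing "http://www.mydomain.com/..." never matches. A sketch of the rule in web.config under that assumption:
      <rule name="Old blog post" patternSyntax="ExactMatch" stopProcessing="true">
        <match url="test.html" />
        <action type="Redirect" url="http://www.mydomain.com/test2.html" redirectType="Permanent" />
      </rule>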

    Read the article

  • Recording screen-casts on another computer

    - by paleozogt
    We're trying to record the desktops of users using demo versions of our software (this is an in-house lab setup). We need to have the recording happen on a separate computer (just across the room), so that the recording software doesn't interfere with the user. Every screen recording software I've seen will only record what's happening on the computer it's installed on; i.e., you can't record what's happening on another computer. So it seems I need to cobble together a solution (unless anyone knows of software that will do this). Getting the video to the other computer seems easy enough. I'm using TightVNC with the DFMirage driver on the test computer. The recording computer connects to the test computer with TightVNC and then uses CamStudio to record what's happening. The real problem is how to deal with the audio. We need to record both what the user is saying (through a headset mic) as well as the sounds produced by the test computer. But VNC doesn't transmit audio. :( I'm not sure how to get both audio streams (mic and sounds) over to the recording computer. Any ideas?

    Read the article

< Previous Page | 216 217 218 219 220 221 222 223 224 225 226 227  | Next Page >