Search Results

Search found 2488 results on 100 pages for 'listen'.


  • exim4 seems to have stopped listening

    - by trakos
    Hey, I have a strange problem with my exim4 configuration. I've had a dedicated server running Debian for quite a long time, but I haven't been using it actively lately, so everything just worked due to lack of changes ;) Recently, though, my exim4 SMTP daemon stopped answering on port 25. It does not respond through localhost either, even though it's set to listen on every available interface. Some things I've checked:

        ks:/home/trakos/Maildir/new# netstat -ap | grep exim
        tcp        0      0 *:smtp      *:*      LISTEN      12521/exim4
        ks:/home/trakos/Maildir/new# exiwhat
        12521 daemon: -q30s, listening for SMTP on port 25 (IPv4)
        ks:/home/trakos/Maildir/new# cat /var/log/exim4/rejectlog
        ks:/home/trakos/Maildir/new# cat /var/log/exim4/paniclog

    The queue interval is set to 30s only because I was running exim in non-daemon mode to watch its output. Strangely enough, there is no suspicious output at all, and netstat even shows it listening on port 25, yet telnetting to it still times out. The only things that may have changed recently: I got a second IP for the server, and a few days ago my SpamAssassin crashed and I started it up again. So yeah, I'm really clueless about this one now :P I don't even know what could be failing here. Could someone give me some ideas about what to check next? PS: the machine has an uptime of 442 days, so I haven't really tried rebooting it yet ^^
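
    A listening socket combined with client-side timeouts usually points at packet filtering rather than exim itself, especially after an IP change. A hedged diagnostic sketch (standard tools only, nothing exim-specific):

        # Look for DROP/REJECT rules that could swallow port 25:
        iptables -L -n -v | grep -E 'dpt:25|smtp'
        # Watch whether the SYN even reaches the daemon while retrying:
        tcpdump -i lo -n port 25 &
        telnet localhost 25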


  • nginx virtual hosts are not working, all vhosts go to the default one

    - by Adirael
    Hello, I just did a clean install of nginx + php-fpm on a VPS running Ubuntu 10.10. nginx is serving and PHP is working fine, but I'm not able to add vhosts to it. Well, I can add them, but only one works; the rest all go to that first one. This is my first vhost, for host1:

        server {
            listen 80;
            server_name host1;
            access_log /var/log/nginx/host1.log;
            error_log /var/log/nginx/host1.error.log;
            location / {
                root /var/www/vhosts/host1/;
                index index.html index.htm index.php;
            }
            location ~ \.php$ {
                include /etc/nginx/fastcgi_params;
                #fastcgi_pass 127.0.0.1:9000;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_param SCRIPT_FILENAME /var/www/vhosts/host1/$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                fastcgi_index index.php;
            }
        }

    And the second one, for host2:

        server {
            listen 80;
            server_name host2;
            access_log /var/log/nginx/host2.log;
            error_log /var/log/nginx/host2.error.log;
            location / {
                root /var/www/vhosts/host2/;
                index index.html index.htm index.php;
            }
            location ~ \.php$ {
                include /etc/nginx/fastcgi_params;
                #fastcgi_pass 127.0.0.1:9000;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_param SCRIPT_FILENAME /var/www/vhosts/host2/$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                fastcgi_index index.php;
            }
        }

    The problem is, when I go to http://host1 everything is fine, but http://host2 just shows host1! I don't have Apache installed and everything comes from the repos. Any pointers?
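
    When every name falls through to the first server block, nginx isn't matching the Host header against any server_name, so it falls back to the default server for that listen address (the first block defined, unless one is marked default). Two hedged things to check, assuming the stock Ubuntu layout (file names hypothetical):

        # 1. The second vhost file must actually be enabled:
        ls -l /etc/nginx/sites-enabled/
        sudo ln -s /etc/nginx/sites-available/host2 /etc/nginx/sites-enabled/host2
        # 2. server_name must match the Host header exactly (e.g. a FQDN,
        #    not a short name); then test and reload:
        sudo nginx -t && sudo service nginx reload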


  • server_name seems to be ignored in nginx

    - by user46171
    I have two domains set up in nginx.conf. Both use SSL with their own certificates and proxy to Apache. However, the second domain is completely ignored, and nginx always resolves to the first domain. I can't see what the issue is with this configuration, having set the server_name in each case correctly (as far as I can tell):

        http {
            include mime.types;
            default_type application/octet-stream;
            keepalive_timeout 65;

            upstream site {
                # real IP addresses masked
                server xx.xxx.x.xxx;
                server xx.xxx.x.xxx;
            }

            server {
                # this domain always works
                listen 443;
                server_name *.first-site.com;
                ssl on;
                ssl_certificate /var/ssl/first-site.crt;
                ssl_certificate_key /var/ssl/first-site.key;
                location / {
                    access_log off;
                    proxy_connect_timeout 15;
                    proxy_next_upstream error;
                    proxy_pass http://site;
                    proxy_set_header Host $http_host;
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Forwarded-Protocol https;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_redirect off;
                }
            }

            server {
                # this domain is ignored, always resolves to first-site.com
                listen 443;
                server_name *.second-site.com;
                ssl on;
                ssl_certificate /var/ssl/second-site.crt;
                ssl_certificate_key /var/ssl/second-site.key;
                location / {
                    access_log off;
                    proxy_connect_timeout 15;
                    proxy_next_upstream error;
                    proxy_pass http://site;
                    proxy_set_header Host $http_host;
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Forwarded-Protocol https;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_redirect off;
                }
            }
        }
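
    With two HTTPS server blocks on the same IP and port, nginx can only choose the right block (and certificate) if the client sends the hostname via SNI and the nginx build supports it; otherwise every handshake lands on the first listen 443 block. Also, a wildcard like *.second-site.com does not match the bare second-site.com. A hedged check-and-fix sketch:

        # "TLS SNI support enabled" should appear in the build summary:
        nginx -V
        # And cover the bare domain as well as subdomains:
        server_name second-site.com *.second-site.com;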


  • How to set up Apache to catch a proxy_pass from nginx?

    - by Paté
    I have a working Apache vhost setup such as:

        <VirtualHost localhost:10006>
            DocumentRoot "/home/pate/***/git/kohana_site/public/site/"
        </VirtualHost>
        <VirtualHost *:10006>
            ServerName api.*
            DocumentRoot "/home/pate/***/git/kohana_site/public/api/"
            LogLevel debug
        </VirtualHost>

    If I point to localhost:10006 I get my website, and at api.localhost:10006 I get my API. Then I have haproxy set up on top of that, running on port 10010, and both localhost:10010 and api.localhost:10010 behave as expected. Now I have nginx set up in front with this configuration:

        server {
            listen 10000;
            server_name api.*;
            location / {
                proxy_pass http://legacy_server;
            }
        }

        server {
            listen 10000 default;
            server_name _;
            location /nginx_status {
                stub_status on;
                access_log off;
            }
            # images are accessed via the CDN over HTTP (not https)
            location /n/image {
                proxy_pass http://image_caching_server;
            }
            location / {
                return 301 https://$host:10014$request_uri;
            }
        }

        upstream legacy_server {
            server localhost:10010 fail_timeout=0;
        }

    The problem is that Apache does not recognize the vhost properly and redirects api.localhost to the website instead of the API. I tried playing with set_proxy_header Host $host but it doesn't seem to do anything.
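
    Apache selects a name-based vhost from the Host header, so the proxy has to forward it, and the nginx directive for that is proxy_set_header (there is no set_proxy_header). A minimal sketch:

        server {
            listen 10000;
            server_name api.*;
            location / {
                # Forward the original Host so Apache's api.* vhost can match;
                # without this, nginx sends the upstream's name instead.
                proxy_set_header Host $host;
                proxy_pass http://legacy_server;
            }
        }

    Note also that in Apache, wildcard host matching is normally ServerAlias's job rather than ServerName's, so the API vhost may additionally need "ServerAlias api.*".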


  • Nginx: redirect all requests that don't match a file to a PHP file

    - by cyrbil
    I'm trying to get all requests to http://mydomain.com/downloads/* redirected to http://mydomain.com/downloads/index.php, except when the requested file exists in /downloads/. For example:

        http://mydomain.com/downloads              -> /downloads/index.php
        http://mydomain.com/downloads/unknownfile  -> /downloads/index.php
        http://mydomain.com/downloads/existingfile -> /downloads/existingfile

    My current problem is that I get either the redirection to PHP working but static files not served, or the opposite. Here is my current vhost conf (which redirects fine, but static files are sent to PHP and fail):

        server {
            listen 80; ## listen for ipv4; this line is default and implied
            server_name domain.com;
            root /data/www;
            index index.php index.html;

            location / {
                try_files $uri $uri/ /index.html;
            }

            error_page 404 /404.html;

            # redirect server error pages to the static page /50x.html
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root /usr/share/nginx/www;
            }

            location ^~ /downloads {
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_index index.php;
                include fastcgi_params;
                try_files $uri @downloads;
            }

            location @downloads {
                rewrite ^ /downloads/index.php;
            }

            # pass the PHP scripts to FastCGI server
            location ~ \.php$ {
                try_files $uri =404;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
            }
        }

    Note: the static files are symlinks created by /downloads/index.php. Thank you for your help.
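
    The ^~ /downloads block declares fastcgi_pass directly, so even existing files matched by its try_files get handed to PHP. One common shape (a sketch, untested against this exact setup) is to keep the fallback logic in a plain location and let only .php URIs reach FastCGI:

        location /downloads {
            # Serve the file (or symlink) if it exists,
            # otherwise fall back to the front controller.
            try_files $uri $uri/ /downloads/index.php$is_args$args;
        }

        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index index.php;
            include fastcgi_params;
        }

    One caveat, since the static files are symlinks: a disable_symlinks setting or dangling link targets would still produce 404s here.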


  • Nginx: Rewriting directory path to file

    - by Doug
    I'm a little new to Nginx here, so bear with me. I want to rewrite a URL like foo.bar.com/newfoo?limit=30 to foo.bar.com/newfoo.php?limit=30. It seems pretty simple to do with something like this:

        rewrite ^([a-z]+)(.*)$ $1.php$2 last;

    The part that I am confused about is where to put it - I've tried my hand at some location directives, but I'm doing it wrong. Here's my existing virtual host config; where should I implement my rewrite?

        server {
            listen 80;
            listen [::]:80;
            server_name foo.bar.com;
            root /home/foo;
            index index.php index.html index.htm;

            location / {
                try_files $uri $uri/ /index.html;
            }

            location /doc/ {
                alias /usr/share/doc/;
                autoindex on;
                allow 127.0.0.1;
                deny all;
            }

            location ~ \.php$ {
                try_files $uri =404;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass unix:/tmp/php5-fpm.sock;
                fastcgi_index index.php;
            }
        }

    Thanks!
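
    A rewrite like this usually lives either directly in the server block or, more idiomatically, as a try_files fallback so real files still win. A sketch under the assumption that /newfoo should become /newfoo.php with the query string carried along (nginx appends it automatically):

        location / {
            # Serve real files and directories first, then retry with .php.
            try_files $uri $uri/ @tophp;
        }

        location @tophp {
            rewrite ^(/[^.]+)$ $1.php last;  # /newfoo?limit=30 -> /newfoo.php?limit=30
        }

    The "last" flag restarts location matching, so the rewritten URI is then picked up by the existing "location ~ \.php$" block.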


  • How should I configure my Apache virtual hosts to serve a different site for localhost than for my domain/public IP?

    - by rofls
    I'm trying to test a LAMP setup (with PHP5, specifically) on a machine where Django is already serving a website. I want to do the PHP work on localhost for now, so that when I run something like curl http://localhost/database/script.php?var=1, I get a response from PHP. Right now I'm getting a Django error. I tried something like this in the default file in sites-available:

        Listen 80
        <VirtualHost aaa.bbb.ccc.ddd>
            ServerName localhost
            DocumentRoot /home/phpsite
        </VirtualHost>

    where aaa.bbb.ccc.ddd is the local IP address, and changed my actual site's settings to specify the public IP, like this:

        Listen 80
        <VirtualHost www.xxx.yyy.zzz>
            ServerName mysite.com
            DocumentRoot /srv/www/mysite
            WSGIScriptAlias / /srv/www/mysite.wsgi
        </VirtualHost>

    But then I start getting all kinds of errors when I start Apache, such as "[::]:80 is already in use". I also noticed that the hosts file located in /etc/apache2/ apparently points everything to mysite.com, including my local IP as well as 127.0.0.1 and 127.0.1.1. Do I need to change the configuration there too?
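
    The duplicated Listen 80 is what produces the "already in use" error: Listen belongs once in the main config (ports.conf on Debian-style layouts), while per-site matching goes in the VirtualHost headers and ServerName. A sketch using the poster's placeholder addresses:

        # Listen 80 stays in ports.conf, not in each vhost file.
        <VirtualHost 127.0.0.1:80 aaa.bbb.ccc.ddd:80>
            # Requests arriving on loopback or the LAN IP go to PHP;
            # loopback is listed so "curl http://localhost/..." matches here.
            ServerName localhost
            DocumentRoot /home/phpsite
        </VirtualHost>

        <VirtualHost *:80>
            # Everything else, including the public IP, goes to Django.
            ServerName mysite.com
            WSGIScriptAlias / /srv/www/mysite.wsgi
        </VirtualHost>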


  • NginxHttpAuthBasicModule with Sinatra & Passenger

    - by scainey
    I'm serving static pages from a Sinatra application using Nginx. I've implemented Basic Authentication for one page on the site using NginxHttpAuthBasicModule; the authentication succeeds, but Nginx doesn't resolve the link. The error log gives:

        2010/03/22 12:15:19 [error] 7143#0: *2902 open() "/home/me/live/mysite_home/public/mypage"
        failed (2: No such file or directory), client: 82.71.18.122, server: mysite.com,
        request: "GET /mypage HTTP/1.1", host: "mysite.com"

    The actual file is found at /home/me/live/mysite_home/live/mypage.erb. The configuration file is:

        server {
            listen 80;
            server_name mysite.com;
            root /home/me/live/mysite_home/public;
            passenger_enabled on;

            location /mypage {
                auth_basic "Restricted";
                auth_basic_user_file htpasswd;
            }
        }

        server {
            listen 443;
            server_name mysite.com;
            root /home/me/live/mysite_home/public;
            passenger_enabled on;
            ssl on;
            ssl_certificate /etc/nginx/conf/certs/server.crt;
            ssl_certificate_key /etc/nginx/conf/certs/server.key;
            keepalive_timeout 70;

            location /mypage {
                auth_basic "Restricted";
                auth_basic_user_file htpasswd;
            }
        }

    I'm not sure if this is a Sinatra, Passenger or Nginx thing, or if I'm just missing something.
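
    The open() error suggests nginx is treating /mypage as a static file under root rather than routing it through Passenger: adding a location block of its own can bypass the Passenger handling configured at server level. A hedged sketch (assuming Phusion Passenger's nginx module, which accepts passenger_enabled at location scope):

        location /mypage {
            auth_basic "Restricted";
            auth_basic_user_file htpasswd;
            passenger_enabled on;   # re-enable Passenger inside this location so
                                    # the request reaches Sinatra instead of the
                                    # static-file lookup shown in the error log
        }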


  • Setting up Apache with IPv6

    - by mrz
    I have two virtual machines on my computer. I have Apache installed on one of them (referred to as the "server" below), and I have set Apache to listen on an IPv6 address. When I enter that IPv6 address in a web browser on the "server", I see my index.html file. So far so good.

    I want to be able to open a web browser on the other virtual machine (the "client") and see index.html. But when I enter the "server's" IPv6 address in a browser on the "client", I get an "Unable to establish a connection to the server" error. I can ping6 the "client" from the "server" and vice versa.

    There is one more thing to mention: ifconfig on the "server" shows three different IPv6 addresses, two of which are Global scope, plus one Link scope address. On the "client" there is only one Link scope IPv6 address. I can only ping the Link addresses; pinging the other IPv6 addresses results in "connect: Network is unreachable". And if I set Apache to listen on the Link IPv6 address, "rcapache2 start" fails. Any thoughts on what I am probably missing or doing wrong?
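
    These symptoms fit a scope mismatch: link-local addresses (fe80::/10) are only usable with a zone ID such as fe80::1%eth0, which browsers and Apache's Listen can't really take, while the Global addresses are unreachable because the client has none (and so no route to that prefix). If the client gets a global address on the same prefix, the usual pieces look like this (addresses hypothetical; SUSE-style layout implied by rcapache2):

        # /etc/apache2/listen.conf - IPv6 addresses go in brackets:
        Listen [2001:db8::10]:80

        # From the client, test against the global address, not fe80:::
        #   ping6 2001:db8::10
        #   curl -g 'http://[2001:db8::10]/'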


  • nginx configuration file explained

    - by Chris Muench
    I have a few questions about this configuration file, "default", in /etc/nginx/sites-enabled:

        server {
            root /usr/share/nginx/www;
            index index.html index.htm;

            # Make site accessible from http://localhost/
            server_name localhost;

            location / {
                proxy_pass http://127.0.0.1:8080;
            }

            location /doc {
                root /usr/share;
                autoindex on;
                allow 127.0.0.1;
                deny all;
            }

            location /images {
                root /usr/share;
                autoindex off;
            }
        }

    1. There is no "listen" directive - how does it know to default to port 80?
    2. The server_name is localhost - how does another domain work?
    3. Why is the location directive embedded in the server directive? Does that mean these locations ONLY apply to this server?
    4. None of my configs have "listen 80 default_server;" - how does nginx then pick which configuration to use?
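
    For reference, hedged answers in nginx's own terms: a server block with no listen directive defaults to *:80 when nginx runs as root (otherwise *:8000); locations are always scoped to their enclosing server; and when no block is marked default_server for a listen address, the first matching block in configuration order acts as the default. Making that explicit avoids surprises:

        server {
            listen 80 default_server;   # catch requests whose Host matches no
            server_name _;              # other server_name, instead of relying
            return 444;                 # on file order
        }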


  • nginx root directory not forwarding correctly

    - by user66700
    The server files are stored in /var/www/. Everything was working perfectly, but then I started getting the following errors:

        2011/01/28 17:20:05 [error] 15415#0: *1117703 "/var/www/https:/secure.domain.com/index.html"
        is not found (2: No such file or directory), client: 119.110.28.211,
        server: secure.domain.com, request: "HEAD /https://secure.domain.com/ HTTP/1.1",
        host: "secure.domain.com"

    Here's my config:

        server {
            server_name secure.domain.com;
            listen 443;
            listen [::]:443 default ipv6only=on;
            gzip on;
            gzip_comp_level 1;
            gzip_types text/plain text/html text/css application/x-javascript text/xml text/javascript;
            error_log logs/ssl.error.log;
            gzip_static on;
            gzip_http_version 1.1;
            gzip_proxied any;
            gzip_disable "msie6";
            gzip_vary on;
            ssl on;
            ssl_ciphers RC4:ALL:-LOW:-EXPORT:!ADH:!MD5;
            keepalive_timeout 0;
            ssl_certificate /root/server.pem;
            ssl_certificate_key /root/ssl.key;

            location / {
                root /var/www;
                index index.html index.htm index.php;
            }
        }
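
    Worth noting: the request line itself ("HEAD /https://secure.domain.com/") is malformed - the client sent a full URL as the path, and nginx dutifully appended it to root. That points at a misbehaving bot or monitoring probe rather than the config. A hedged guard, if the log noise matters:

        # Reject paths that embed a scheme; normal browsers never send these.
        location ~* ^/https?:/ {
            return 400;
        }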


  • Can't access Apache from outside my local network

    - by valter
    UPDATE: When I type my external IP, like xxx.xxx.xxx.xxx:8079, I can now reach the XAMPP default page. But strangely, when someone else outside my network tries to access it using the same IP, it doesn't work - and I think it should, because it's the external IP. I'm going crazy.

    I have tried for hours to access the XAMPP default page from outside my local network. My ISP blocks ports 80 and 8080, so I changed Apache to listen on port 8079:

        Listen 8079

    My local computer's IP is 10.1.1.2, and I can reach the web server from any computer on my local network by typing http://10.1.1.2:8079. I also opened port 8079 on my modem (I think I did it right). When Apache is running on my computer and I test port 8079 at http://canyouseeme.org/, I get the message "Success: I can see your service on xxx.xxx.xxx.xxx on port (8079). Your ISP is not blocking port 8079." If Apache is not running, I get "Error: I could not see your service on xxx.xxx.xxx.xxx on port (8079). Reason: Connection refused." So it's clear that port 8079 is open. But when I type xxx.xxx.xxx.xxx:8079 in Google Chrome, for example, I get "Oops! Google Chrome could not connect to xxx.xxx.xxx.xxx:8079". What can I do to get Apache to serve the pages? I don't know what else I should configure. Please help me. Thanks.



  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
    When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as storage, service bus, access control, etc. Interacting with Windows Azure services from Node.js is possible through the Windows Azure Node.js SDK, a module available on NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.

    Consume Windows Azure Storage

    Let's first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is itself built on top of WACS worker roles with some extra features, so in this post I will only demonstrate hosting in a WACS worker role. The Node.js code for consuming WAS can also be used when hosting on WAWS; but since there are no roles in WAWS, the code for consuming the service runtime, mentioned in the next section, cannot be used in a WAWS Node application.

    We can use the solution that I created in my last post. Alternatively, we can create a new Windows Azure project in Visual Studio with a worker role, add "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and mark all files as "Copy always". In order to use Windows Azure services we need the Windows Azure Node.js SDK, known as the module named "azure", which can be installed through NPM. Once downloaded and installed, we need to include it in our worker role project and mark it as "Copy always" too. You can use my "Copy all always" tool, mentioned in my last post, to update the worker role project file; you can also find the source code of this tool here.

    The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool which provides a cross-platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming various Windows Azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them; I will only demonstrate how to use tables and the service runtime information in this post. You can find the full documentation of this SDK here.

    Back in Visual Studio, open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this:
        var express = require("express");
        var sql = require("node-sqlserver");

        var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
        var port = 80;

        var app = express();

        app.configure(function () {
            app.use(express.bodyParser());
        });

        app.get("/", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        app.get("/text/:key/:culture", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    var key = req.params.key;
                    var culture = req.params.culture;
                    var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'";
                    conn.queryRaw(command, function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        app.get("/sproc/:key/:culture", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    var key = req.params.key;
                    var culture = req.params.culture;
                    var command = "EXEC GetItem '" + key + "', '" + culture + "'";
                    conn.queryRaw(command, function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        app.post("/new", function (req, res) {
            var key = req.body.key;
            var culture = req.body.culture;
            var val = req.body.val;

            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')";
                    conn.queryRaw(command, function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.send(200, "Inserted Successful");
                        }
                    });
                }
            });
        });

        app.listen(port);

    Now let's create a new function that copies the records from WASD to the table service:

    1. Delete the table named "resource".
    2. Create a new table named "resource". (These two steps ensure we start with an empty table.)
    3. Load all records from the "resource" table in WASD.
    4. For each record loaded from WASD, insert it into the table one by one.
    5. Notify the user when finished.

    In order to use the table service we need the storage account name and key, which can be found on the developer portal: just select the storage account and click the Manage Keys button. Then create two local variables in our Node.js application for the storage account name and key. Since we need to use WAS, we must import the azure module; I also created another variable to store the table name. To work with the table service I need to create a storage client for it.
    This is very similar to the Windows Azure SDK for .NET. In the code below I created a new variable named "client" and used "createTableService", specifying my storage account name and key:

        var azure = require("azure");
        var storageAccountName = "synctile";
        var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw==";
        var tableName = "resource";
        var client = azure.createTableService(storageAccountName, storageAccountKey);

    Now create a new handler for the URL "/was/init" so that we can trigger it through the browser. In this function we first load all records from WASD:

        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                            }
                        }
                    });
                }
            });
        });

    When we have successfully loaded all records we can start transforming them into the table service. First I need to recreate the table in the table service, which can be done by deleting and creating it through the table client created previously:

        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                                // recreate the table named 'resource'
                                client.deleteTable(tableName, function (error) {
                                    client.createTableIfNotExists(tableName, function (error) {
                                        if (error) {
                                            error["target"] = "createTableIfNotExists";
                                            res.send(500, error);
                                        }
                                        else {
                                            // transform the records
                                        }
                                    });
                                });
                            }
                        }
                    });
                }
            });
        });

    As you can see, the azure SDK provides its methods in the callback pattern; in fact, almost all modules in Node.js use it. For example, to delete a table I invoked the "deleteTable" method, providing the table name and a callback function to be performed when the table has been deleted (or the operation has failed). Under the hood, the azure module performs the deletion asynchronously on the POSIX async thread pool, and once it's done the callback is invoked. This is why we need to nest the table creation code inside the deletion callback: if we simply placed the creation code after the deletion code, the two would run in parallel. Next, for each record from WASD I create an entity and insert it into the table service, and finally I send the response to the browser. Can you find the bug in the code below? I will describe it later in this post.
1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: for (var i = 0; i < results.rows.length; i++) { 27: var entity = { 28: "PartitionKey": results.rows[i][1], 29: "RowKey": results.rows[i][0], 30: "Value": results.rows[i][2] 31: }; 32: client.insertEntity(tableName, entity, function (error) { 33: if (error) { 34: error["target"] = "insertEntity"; 35: res.send(500, error); 36: } 37: else { 38: console.log("entity inserted"); 39: } 40: }); 41: } 42: // send the 43: console.log("all done"); 44: res.send(200, "All done!"); 45: } 46: }); 47: }); 48: } 49: } 50: }); 51: } 52: }); 53: }); Now we can publish it to the cloud and have a try. But normally we’d better test it at the local emulator first. In Node.js SDK there are three build-in properties which provides the account name, key and host address for local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below. 1: // windows azure sql database 2: //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;"; 3: // sql server 4: var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};"; 5:  6: var azure = require("azure"); 7: var storageAccountName = "synctile"; 8: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 9: var tableName = "resource"; 10: // windows azure storage 11: //var client = azure.createTableService(storageAccountName, storageAccountKey); 12: // local storage emulator 13: var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST); Now let’s run the application and navigate to “localhost:12345/was/init” as I hosted it on port 12345. We can find it transformed the data from my local database to local table service. Everything looks fine. But there is a bug in my code. If we have a look on the Node.js command window we will find that it sent response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js perform all IO operations in non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending response was also executed in parallel, even though I wrote it at the end of my logic. 
    The correct logic should be: when all entities have been copied to the table service with no error, send the response to the browser; otherwise, send the error message. To do so I need to import another module, named "async", which helps us coordinate asynchronous code. Install the module and import it at the beginning of the code; then we can use its "forEach" method for the asynchronous insertion of the table entities. The first argument of "forEach" is the array to iterate; the second is the operation to perform on each item; and the third is invoked once all items have been processed or any error has occurred - this is where we send our response to the browser.

        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                                // recreate the table named 'resource'
                                client.deleteTable(tableName, function (error) {
                                    client.createTableIfNotExists(tableName, function (error) {
                                        if (error) {
                                            error["target"] = "createTableIfNotExists";
                                            res.send(500, error);
                                        }
                                        else {
                                            async.forEach(results.rows,
                                                // transform the records
                                                function (row, callback) {
                                                    var entity = {
                                                        "PartitionKey": row[1],
                                                        "RowKey": row[0],
                                                        "Value": row[2]
                                                    };
                                                    client.insertEntity(tableName, entity, function (error) {
                                                        if (error) {
                                                            callback(error);
                                                        }
                                                        else {
                                                            console.log("entity inserted.");
                                                            callback(null);
                                                        }
                                                    });
                                                },
                                                // send response
                                                function (error) {
                                                    if (error) {
                                                        error["target"] = "insertEntity";
                                                        res.send(500, error);
                                                    }
                                                    else {
                                                        console.log("all done");
                                                        res.send(200, "All done!");
                                                    }
                                                }
                                            );
                                        }
                                    });
                                });
                            }
                        }
                    });
                }
            });
        });

    Run it locally, and now the response is sent only after all entities have been inserted.

    Querying entities against the table service is simple as well: just use the "queryEntity" method of the table service client, providing the partition key and row key (a more complex query criteria can be provided too). In the code below I query an entity by partition key and row key and return the proper localization value in the response:

        app.get("/was/:key/:culture", function (req, res) {
            var key = req.params.key;
            var culture = req.params.culture;
            client.queryEntity(tableName, culture, key, function (error, entity) {
                if (error) {
                    res.send(500, error);
                }
                else {
                    res.json(entity);
                }
            });
        });

    I then tested it on the local emulator. Finally, to publish this application to the cloud we would change the database connection string and the storage account. For more information about how to consume the blob and queue services, as well as the service bus, please refer to the MSDN page.

    Consume Service Runtime

    As I mentioned above, before we publish our application to the cloud we need to change the connection string and account information in our code.
    But if you have played with WACS you should know that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. This means we can define these values in the CSCFG and CSDEF files and let the runtime retrieve the proper ones. For example, we can add role settings through the property window of the role, specifying the connection string and storage account for cloud and local use, and we can also use an endpoint defined in the role environment in our Node.js application.

    In the Node.js SDK we can get an object from "azure.RoleEnvironment" which provides the functionality to retrieve the configuration settings, endpoints, and so on. In the code below I define the connection string variables and then use the SDK to retrieve them and initialize the table client:

        var connectionString = "";
        var storageAccountName = "";
        var storageAccountKey = "";
        var tableName = "";
        var client;

        azure.RoleEnvironment.getConfigurationSettings(function (error, settings) {
            if (error) {
                console.log("ERROR: getConfigurationSettings");
                console.log(JSON.stringify(error));
            }
            else {
                console.log(JSON.stringify(settings));
                connectionString = settings["SqlConnectionString"];
                storageAccountName = settings["StorageAccountName"];
                storageAccountKey = settings["StorageAccountKey"];
                tableName = settings["TableName"];

                console.log("connectionString = %s", connectionString);
                console.log("storageAccountName = %s", storageAccountName);
                console.log("storageAccountKey = %s", storageAccountKey);
                console.log("tableName = %s", tableName);

                client = azure.createTableService(storageAccountName, storageAccountKey);
            }
        });

    This way we don't need to amend the code when moving between the local and cloud environments; the service runtime takes care of it. At the end of the code we also listen on the port retrieved from the SDK:

        azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) {
            if (error) {
                console.log("ERROR: getCurrentRoleInstance");
                console.log(JSON.stringify(error));
            }
            else {
                console.log(JSON.stringify(instance));
                if (instance["endpoints"] && instance["endpoints"]["nodejs"]) {
                    var endpoint = instance["endpoints"]["nodejs"];
                    app.listen(endpoint["port"]);
                }
                else {
                    app.listen(8080);
                }
            }
        });

    If we tested the application right now, though, we would find that it cannot retrieve any values from the service runtime. This is because, by default, the entry point of this role is the worker role class. In the Windows Azure environment the service runtime opens a named pipe to the entry point instance so that it can connect to the runtime and retrieve values. In this case the named pipe was established between the worker role class and the service runtime, and since Node.js merely runs inside the role, our Node.js application cannot use it. To fix this we need to open the CSDEF file in the Azure project and add a new element named Runtime, with a child element named EntryPoint that specifies the Node.js command line. With that, the Node.js application has a connection to the service runtime and is able to read the configuration. Starting the Node.js application in the local emulator, we can see it retrieve the connection string and storage account for the local environment.
    And if we publish our application to Azure, it works with WASD and the storage service through the cloud configuration.

    Summary

    In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime to retrieve configuration settings and endpoint information, and how, to make the service runtime available to my Node.js application, I needed to add an EntryPoint element pointing at "node.exe" in the CSDEF file.

    I have used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host a Node.js application, and how to work with the other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very young network application platform, but since it is simple to learn and deploy, and since it uses a single-threaded, non-blocking IO model, it has become more and more popular for web application and web service development, especially for IO-intensive projects. And as Node.js is very good at scaling out, it is all the more useful on a cloud computing platform.

    Using Node.js on the Windows platform is new, too. The modules for SQL Server and the Windows Azure SDK are still under development and enhancement: "node-sqlserver" does not support SQL parameters yet, and "azure" does not support creating the storage client from a storage connection string. Microsoft is working on making them easier to use and on adding more features and functionality.

    PS: you can download the source code here, and the source code of my "Copy all always" tool here.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


  • CUPS basic auth error through web interface

    - by Inaimathi
    I'm trying to configure CUPS to allow remote administration through the web interface. There's enough documentation out there that I could figure out what to change in my cupsd.conf: changing "Listen localhost:631" to "Port 631", and adding "Allow @LOCAL" to the /, /admin and /admin/conf sections. I'm now at the point where I can see the CUPS interface from another machine on the same network. The trouble is, when I try to Add Printer, I'm asked for a username and password, but my response is rejected even when I know I've got it right. (I assume it's asking for the credentials of someone in the lpadmin group on the server machine; I've sshed in with the credentials it's rejecting, and that user has been added to the lpadmin group.) If I disable auth outright by changing "DefaultAuthType Basic" to "DefaultAuthType None", I get an "Unauthorized" error instead of a password prompt when I try to Add Printer. What am I doing wrong? Is there a way to let users from the local network administer the print server through the CUPS web interface?

    EDIT: By request, my complete cupsd.conf (spoiler: a minimally edited version of the default config file that ships with CUPS in the Debian wheezy repos):

        LogLevel warn
        MaxLogSize 0
        SystemGroup lpadmin
        Port 631
        # Listen localhost:631
        Listen /var/run/cups/cups.sock
        Browsing On
        BrowseOrder allow,deny
        BrowseAllow all
        BrowseLocalProtocols CUPS dnssd
        # DefaultAuthType Basic
        DefaultAuthType None
        WebInterface Yes

        <Location />
          Order allow,deny
          Allow @LOCAL
        </Location>

        <Location /admin>
          Order allow,deny
          Allow @LOCAL
        </Location>

        <Location /admin/conf>
          AuthType Default
          Require user @SYSTEM
          Order allow,deny
          Allow @LOCAL
        </Location>

        # Set the default printer/job policies...
        <Policy default>
          # Job/subscription privacy...
          JobPrivateAccess default
          JobPrivateValues default
          SubscriptionPrivateAccess default
          SubscriptionPrivateValues default

          # Job-related operations must be done by the owner or an administrator...
          <Limit Create-Job Print-Job Print-URI Validate-Job>
            Order deny,allow
          </Limit>

          <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job Cancel-My-Jobs Close-Job CUPS-Move-Job CUPS-Get-Document>
            Require user @OWNER @SYSTEM
            Order deny,allow
          </Limit>

          # All administration operations require an administrator to authenticate...
          <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default CUPS-Get-Devices>
            AuthType Default
            Require user @SYSTEM
            Order deny,allow
          </Limit>

          # All printer operations require a printer operator to authenticate...
          <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After Cancel-Jobs CUPS-Accept-Jobs CUPS-Reject-Jobs>
            AuthType Default
            Require user @SYSTEM
            Order deny,allow
          </Limit>

          # Only the owner or an administrator can cancel or authenticate a job...
          <Limit Cancel-Job CUPS-Authenticate-Job>
            Require user @OWNER @SYSTEM
            Order deny,allow
          </Limit>

          <Limit All>
            Order deny,allow
          </Limit>
        </Policy>

        # Set the authenticated printer/job policies...
        <Policy authenticated>
          # Job/subscription privacy...
          JobPrivateAccess default
          JobPrivateValues default
          SubscriptionPrivateAccess default
          SubscriptionPrivateValues default

          # Job-related operations must be done by the owner or an administrator...
          <Limit Create-Job Print-Job Print-URI Validate-Job>
            AuthType Default
            Order deny,allow
          </Limit>

          <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job Cancel-My-Jobs Close-Job CUPS-Move-Job CUPS-Get-Document>
            AuthType Default
            Require user @OWNER @SYSTEM
            Order deny,allow
          </Limit>

          # All administration operations require an administrator to authenticate...
          <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default>
            AuthType Default
            Require user @SYSTEM
            Order deny,allow
          </Limit>

          # All printer operations require a printer operator to authenticate...
          <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After Cancel-Jobs CUPS-Accept-Jobs CUPS-Reject-Jobs>
            AuthType Default
            Require user @SYSTEM
            Order deny,allow
          </Limit>

          # Only the owner or an administrator can cancel or authenticate a job...
          <Limit Cancel-Job CUPS-Authenticate-Job>
            AuthType Default
            Require user @OWNER @SYSTEM
            Order deny,allow
          </Limit>

          <Limit All>
            Order deny,allow
          </Limit>
        </Policy>
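
    One hedged observation: with "DefaultAuthType None", every "AuthType Default" in the policies above resolves to no authentication, and CUPS then refuses privileged operations outright ("Unauthorized") instead of prompting. The usual working shape keeps Basic auth and grants network access, roughly:

        DefaultAuthType Basic

        <Location /admin>
          Order allow,deny
          Allow @LOCAL
        </Location>

    With that in place, the prompt should accept any user in the lpadmin group (the SystemGroup above); if it still rejects them, the PAM side of CUPS Basic auth is worth checking on Debian.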


  • Nginx virtual server only partially working with Java application

    - by MFB
    Final Cut Server is a Java application (made by Apple) which launches from a web page. I have Nginx in front of this web server (amongst others) and, whilst the web server can be browsed externally, the Java app fails to launch correctly and throws the errors below. Can anyone offer clues as to what additional config I may have to add to Nginx to get this working? My existing Nginx config:

        user xxxx;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
            # multi_accept on;
        }

        http {
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;
            include /etc/nginx/conf/mime.types;
            default_type application/octet-stream;
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;
            gzip on;
            gzip_disable "msie6";

            server {
                server_name _;
                return 444;
            }

            upstream fcs-site {
                server 10.10.5.20:8080;
            }

            server {
                listen 80;
                server_name example.com 10.10.5.90;
                access_log /var/log/nginx/fcs_access.log;
                error_log /var/log/nginx/fcs_error.log;
                location / {
                    proxy_set_header Host $host;
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header X-Forwarded-Proto $scheme;
                    client_max_body_size 10m;
                    client_body_buffer_size 128k;
                    proxy_connect_timeout 60s;
                    proxy_send_timeout 90s;
                    proxy_read_timeout 90s;
                    proxy_buffering off;
                    proxy_temp_file_write_size 64k;
                    proxy_pass http://fcs-site;
                    proxy_redirect off;
                }
            }

            upstream myapp-site {
                server 127.0.0.1:6543;
            }

            server {
                listen 80;
                server_name otherexample.com www.otherexample.com;
                rewrite ^ https://$server_name$request_uri? permanent;
            }

            server {
                listen 443;
                ssl on;
                ssl_certificate /etc/ssl/otherapp.crt;
                ssl_certificate_key /etc/ssl/otherapp.key;
                server_name otherexample.com www.otherexample.com;
                access_log /var/log/nginx/otherapp_access.log;
                error_log /var/log/nginx/other_error.log;
                location / {
                    proxy_set_header Host $host;
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header X-Forwarded-Proto $scheme;
                    client_max_body_size 10m;
                    client_body_buffer_size 128k;
                    proxy_connect_timeout 60s;
                    proxy_send_timeout 90s;
                    proxy_read_timeout 90s;
                    proxy_buffering off;
                    proxy_temp_file_write_size 64k;
                    proxy_pass http://myapp-site;
                    proxy_redirect off;
                }
                location /static {
                    root /www;
                    expires 30d;
                    add_header Cache-Control public;
                    access_log off;
                }
            }
        }

    Java errors:

        com.sun.deploy.net.FailedDownloadException: Unable to load resource:
        http://example.com:8080/FinalCutServer/FinalCutServer_mac.jnlp
            at com.sun.deploy.net.DownloadEngine.actionDownload(Unknown Source)
            at com.sun.deploy.net.DownloadEngine._downloadCacheEntry(Unknown Source)
            at com.sun.deploy.cache.ResourceProviderImpl.getResourceCacheEntry(Unknown Source)
            at com.sun.deploy.cache.ResourceProviderImpl.getResourceCacheEntry(Unknown Source)
            at com.sun.deploy.cache.ResourceProviderImpl.getResource(Unknown Source)
            at com.sun.javaws.Launcher.updateFinalLaunchDesc(Unknown Source)
            at com.sun.javaws.Launcher.prepareToLaunch(Unknown Source)
            at com.sun.javaws.Launcher.prepareToLaunch(Unknown Source)
            at com.sun.javaws.Launcher.launch(Unknown Source)
            at com.sun.javaws.Main.launchApp(Unknown Source)
            at com.sun.javaws.Main.continueInSecureThread(Unknown Source)
            at com.sun.javaws.Main.access$000(Unknown Source)
            at com.sun.javaws.Main$1.run(Unknown Source)
            at java.lang.Thread.run(Thread.java:722)

        java.net.ConnectException: Operation timed out
            at java.net.PlainSocketImpl.socketConnect(Native Method)
            at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
            at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
            at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
            at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
            at java.net.Socket.connect(Socket.java:579)
            at java.net.Socket.connect(Socket.java:528)
            at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
            at sun.net.www.http.HttpClient.openServer(HttpClient.java:378)
            at sun.net.www.http.HttpClient.openServer(HttpClient.java:473)
            at sun.net.www.http.HttpClient.<init>(HttpClient.java:203)
            at sun.net.www.http.HttpClient.New(HttpClient.java:290)
            at sun.net.www.http.HttpClient.New(HttpClient.java:306)
            at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:995)
            at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:931)
            at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:849)
            at com.sun.deploy.net.BasicHttpRequest.doRequest(Unknown Source)
            at com.sun.deploy.net.BasicHttpRequest.doRequest(Unknown Source)
            at com.sun.deploy.net.BasicHttpRequest.doGetRequest(Unknown Source)
            at com.sun.deploy.net.DownloadEngine.actionDownload(Unknown Source)
            at com.sun.deploy.net.DownloadEngine._downloadCacheEntry(Unknown Source)
            at com.sun.deploy.cache.ResourceProviderImpl.getResourceCacheEntry(Unknown Source)
            at com.sun.deploy.cache.ResourceProviderImpl.getResourceCacheEntry(Unknown Source)
            at com.sun.deploy.cache.ResourceProviderImpl.getResource(Unknown Source)
            at com.sun.javaws.Launcher.updateFinalLaunchDesc(Unknown Source)
            at com.sun.javaws.Launcher.prepareToLaunch(Unknown Source)
            at com.sun.javaws.Launcher.prepareToLaunch(Unknown Source)
            at com.sun.javaws.Launcher.launch(Unknown Source)
            at com.sun.javaws.Main.launchApp(Unknown Source)
            at com.sun.javaws.Main.continueInSecureThread(Unknown Source)
            at com.sun.javaws.Main.access$000(Unknown Source)
            at com.sun.javaws.Main$1.run(Unknown Source)
            at java.lang.Thread.run(Thread.java:722)
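
    The first line of the trace is the tell: Java Web Start is fetching the JNLP from http://example.com:8080/... directly, because the JNLP's codebase URL bakes in port 8080. Proxying port 80 alone doesn't cover that; either expose 8080 through to the backend or proxy it as well. A hedged sketch:

        server {
            listen 8080;                # match the codebase URL inside the JNLP
            server_name example.com;
            location / {
                proxy_set_header Host $host;
                proxy_pass http://fcs-site;
            }
        }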


  • How to invalidate nginx reverse proxy cache in front of other nginx servers?

    - by Olivier Lance
    I'm running a Proxmox server on a single IP address that dispatches HTTP requests to containers depending on the requested host. I am using nginx on the Proxmox side to listen for HTTP requests, with the proxy_pass directive in my different server blocks dispatching requests according to the server_name. My containers run Ubuntu and each also runs an nginx instance. I'm having trouble with caching on one particular website that is fully static: nginx keeps serving stale content after file updates, until I either clear /var/cache/nginx/ and restart nginx, or set "proxy_cache off" for this server and reload the config. Here's the detail of my configuration.

    On the server (Proxmox), /etc/nginx/nginx.conf:

        user www-data;
        worker_processes 8;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
            use epoll;
        }

        http {
            ##
            # Basic Settings
            ##
            sendfile on;
            #tcp_nopush on;
            tcp_nodelay on;
            #keepalive_timeout 65;
            types_hash_max_size 2048;
            server_tokens off;
            # server_names_hash_bucket_size 64;
            # server_name_in_redirect off;
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            client_body_buffer_size 1k;
            client_max_body_size 8m;
            large_client_header_buffers 1 1K;
            ignore_invalid_headers on;
            client_body_timeout 5;
            client_header_timeout 5;
            keepalive_timeout 5 5;
            send_timeout 5;
            server_name_in_redirect off;

            ##
            # Logging Settings
            ##
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            ##
            # Gzip Settings
            ##
            gzip on;
            gzip_disable "MSIE [1-6]\.(?!.*SV1)";
            gzip_vary on;
            gzip_proxied any;
            gzip_comp_level 6;
            # gzip_buffers 16 8k;
            gzip_http_version 1.1;
            gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

            limit_conn_zone $binary_remote_addr zone=gulag:1m;
            limit_conn gulag 50;

            ##
            # Virtual Host Configs
            ##
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    /etc/nginx/conf.d/proxy.conf:

        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_hide_header X-Powered-By;
        proxy_intercept_errors on;
        proxy_buffering on;
        proxy_cache_key "$scheme://$host$request_uri";
        proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cache:10m inactive=7d max_size=700m;

    /etc/nginx/sites-available/my-domain.conf:

        server {
            listen 80;
            server_name .my-domain.com;
            access_log off;

            location / {
                proxy_pass http://my-domain.local:80/;
                proxy_cache cache;
                proxy_cache_valid 12h;
                expires 30d;
                proxy_cache_use_stale error timeout invalid_header updating;
            }
        }

    On the container (my-domain.local), nginx.conf (everything is inside the main config file -- it's been done quickly...):

        user www-data;
        worker_processes 1;
        error_log logs/error.log;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            #tcp_nopush on;
            keepalive_timeout 65;
            gzip off;

            server {
                listen 80;
                server_name .my-domain.com;
                root /var/www;
                access_log logs/host.access.log;
            }
        }

    I've read many blog posts and answers before resolving to post my own question; most suggest setting "sendfile off", but that didn't work for me. I have tried many other things and double-checked my settings, and all seems fine. So I'm wondering whether I'm expecting nginx's cache to do something it's not meant to do. Basically, I thought that if one of the static files in my container was updated, the cache in my reverse proxy would be invalidated and my browser would get the new version of the file on the next request. But I now suspect I misunderstood many things. Above all, I wonder how nginx on the server could even know that a file in the container has changed. I have seen a directive proxy_header_pass (or something alike); should I use it to let the nginx instance in the container somehow inform the one on Proxmox about updated files? Is this expectation just a dream, or can I do it with nginx on my current architecture?
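
    For what it's worth, that intuition is indeed backwards: proxy_cache never revalidates against the upstream until an entry expires, so with "proxy_cache_valid 12h" plus "expires 30d" stale files are served by design. Two common hedged adjustments - shorten the validity, and add a bypass trigger you can fire after a deploy (the header name here is made up):

        location / {
            proxy_pass http://my-domain.local:80/;
            proxy_cache cache;
            proxy_cache_valid 200 10m;           # recheck the container sooner
            proxy_cache_bypass $http_x_refresh;  # a request carrying "X-Refresh: 1"
                                                 # skips the cache and refreshes
                                                 # the stored entry for that URL
        }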


  • Metro: Query Selectors

    - by Stephen.Walther
    The goal of this blog entry is to explain how to perform queries using selectors when using the WinJS library. In particular, you learn how to use the WinJS.Utilities.query() method and the QueryCollection class to retrieve and modify the elements of an HTML document.

    Introduction to Selectors

    When you are building a Web application, you need some way of easily retrieving elements from an HTML document. For example, you might want to retrieve all of the input elements which have a certain class. Or, you might want to retrieve the one and only element with an id of favoriteColor.

    The standard way of retrieving elements from an HTML document is by using a selector. Anyone who has ever created a Cascading Style Sheet has already used selectors. You use selectors in Cascading Style Sheets to apply formatting rules to elements in a document. For example, the following Cascading Style Sheet rule changes the background color of every INPUT element with a class of red to the color red:

        input.red { background-color: red }

    The "input.red" part is the selector, which matches all INPUT elements with a class of red. The W3C standard for selectors (technically, their recommendation) is entitled "Selectors Level 3" and is located here: http://www.w3.org/TR/css3-selectors/

    Selectors are not only useful for adding formatting to the elements of a document. They are also useful when you need to apply behavior to the elements of a document. For example, you might want to select a particular BUTTON element with a selector and add a click handler to it so that something happens whenever you click the button. Selectors are not specific to Cascading Style Sheets: you can use selectors in your JavaScript code to retrieve elements from an HTML document. jQuery is famous for its support for selectors. Using jQuery, you can use a selector to retrieve matching elements from a document and modify them. The WinJS library enables you to perform the same types of queries as jQuery, using the W3C selector syntax.

    Performing Queries with the WinJS.Utilities.query() Method

    When using the WinJS library, you perform a query using a selector with the WinJS.Utilities.query() method. The following HTML document contains a BUTTON and a DIV element:

        <!DOCTYPE html>
        <html>
        <head>
            <meta charset="utf-8">
            <title>Application1</title>

            <!-- WinJS references -->
            <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet">
            <script src="//Microsoft.WinJS.0.6/js/base.js"></script>
            <script src="//Microsoft.WinJS.0.6/js/ui.js"></script>

            <!-- Application1 references -->
            <link href="/css/default.css" rel="stylesheet">
            <script src="/js/default.js"></script>
        </head>
        <body>
            <button>Click Me!</button>
            <div style="display:none">
                <h1>Secret Message</h1>
            </div>
        </body>
        </html>

    The document contains a reference to the following JavaScript file, named \js\default.js:

        (function () {
            "use strict";

            var app = WinJS.Application;

            app.onactivated = function (eventObject) {
                if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) {
                    WinJS.Utilities.query("button").listen("click", function () {
                        WinJS.Utilities.query("div").clearStyle("display");
                    });
                }
            };

            app.start();
        })();

    The default.js script uses the WinJS.Utilities.query() method to retrieve all of the BUTTON elements in the page. The listen() method is used to wire an event handler to the BUTTON click event. When you click the BUTTON, the secret message contained in the hidden DIV element is displayed.
The clearStyle() method is used to remove the display:none style attribute from the DIV element.

Under the covers, the WinJS.Utilities.query() method uses the standard querySelectorAll() method. This means that you can use any selector which is compatible with the querySelectorAll() method when using the WinJS.Utilities.query() method. The querySelectorAll() method is defined in the W3C Selectors API Level 1 standard located here: http://www.w3.org/TR/selectors-api/

Unlike the querySelectorAll() method, the WinJS.Utilities.query() method returns a QueryCollection. We talk about the methods of the QueryCollection class below.

Retrieving a Single Element with the WinJS.Utilities.id() Method

If you want to retrieve a single element from a document, instead of matching a set of elements, then you can use the WinJS.Utilities.id() method. For example, the following line of code changes the background color of an element to the color red:

    WinJS.Utilities.id("message").setStyle("background-color", "red");

The statement above matches the one and only element with an Id of message. For example, the statement matches the following DIV element:

    <div id="message">Hello!</div>

Notice that you do not use a hash when matching a single element with the WinJS.Utilities.id() method. You would need to use a hash when using the WinJS.Utilities.query() method to do the same thing, like this:

    WinJS.Utilities.query("#message").setStyle("background-color", "red");

Under the covers, the WinJS.Utilities.id() method calls the standard document.getElementById() method. The WinJS.Utilities.id() method returns the result as a QueryCollection. If no element matches the identifier passed to WinJS.Utilities.id() then you do not get an error. Instead, you get a QueryCollection with no elements (length=0).

Using the WinJS.Utilities.children() Method

The WinJS.Utilities.children() method enables you to retrieve a QueryCollection which contains all of the children of a DOM element. For example, imagine that you have a DIV element which contains child DIV elements like this:

    <div id="discussContainer">
        <div>Message 1</div>
        <div>Message 2</div>
        <div>Message 3</div>
    </div>

You can use the following code to add borders around all of the child DIV elements and not the container DIV element:

    var discussContainer = WinJS.Utilities.id("discussContainer").get(0);
    WinJS.Utilities.children(discussContainer).setStyle("border", "2px dashed red");

It is important to understand that the WinJS.Utilities.children() method only works with a DOM element and not a QueryCollection. Notice that the get() method is used to retrieve the DOM element which represents the discussContainer.

Working with the QueryCollection Class

Both the WinJS.Utilities.query() method and the WinJS.Utilities.id() method return an instance of the QueryCollection class. The QueryCollection class derives from the base JavaScript Array class and adds several useful methods for working with HTML elements:

- addClass(name) – Adds a class to every element in the QueryCollection.
- clearStyle(name) – Removes a style from every element in the QueryCollection.
- controls(ctor, options) – Enables you to create controls.
- get(index) – Retrieves the element from the QueryCollection at the specified index.
- getAttribute(name) – Retrieves the value of an attribute for the first element in the QueryCollection.
- hasClass(name) – Returns true if the first element in the QueryCollection has a certain class.
- include(items) – Includes a collection of items in the QueryCollection.
- listen(eventType, listener, capture) – Adds an event listener to every element in the QueryCollection.
- query(query) – Performs an additional query on the QueryCollection and returns a new QueryCollection.
- removeClass(name) – Removes a class from every element in the QueryCollection.
- removeEventListener(eventType, listener, capture) – Removes an event listener from every element in the QueryCollection.
- setAttribute(name, value) – Adds an attribute to every element in the QueryCollection.
- setStyle(name, value) – Adds a style attribute to every element in the QueryCollection.
- template(templateElement, data, renderDonePromiseContract) – Renders a template using the supplied data.
- toggleClass(name) – Toggles the specified class for every element in the QueryCollection.

Because the QueryCollection class derives from the base Array class, it also contains all of the standard Array methods like forEach() and slice().

Summary

In this blog post, I've described how you can perform queries using selectors within a Windows Metro Style application written with JavaScript. You learned how to return an instance of the QueryCollection class by using the WinJS.Utilities.query(), WinJS.Utilities.id(), and WinJS.Utilities.children() methods. You also learned about the methods of the QueryCollection class.
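As a closing illustration of how these methods combine in practice, here is a minimal sketch that chains several QueryCollection methods together; the selector, class name, and element id used below are hypothetical and not taken from the post:

    // Hypothetical chaining example: each method returns the QueryCollection,
    // so the calls can be composed in one fluent statement.
    WinJS.Utilities.query(".toolbar button")
        .addClass("active")                       // add a class to every matched BUTTON
        .setStyle("border", "1px solid gray")     // style every matched element
        .listen("click", function (evt) {         // wire a click handler to each one
            WinJS.Utilities.id("status").setAttribute("data-last-event", evt.type);
        });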

    Read the article

  • BizTalk 2009 - Naming Guidelines

    - by StuartBrierley
The following is effectively a repost of the BizTalk 2004 naming guidelines that I have previously detailed. I have posted these again for completeness under BizTalk 2009 and to allow an element of separation in case I find some reason to amend these for BizTalk 2009. These guidelines should be universal across any version of BizTalk you may wish to apply them to.

General Rules

All names should follow the Pascal casing convention.

Project Namespaces

For message schemas: [CompanyName].XML.Schemas.[FunctionalName]*
Examples: ABC.XML.Schemas.Underwriting, DEF.XML.Schemas.MarshmellowTradingExchange
* Denotes potential for multiple levels of functional name, such as Underwriting.Dictionary.Valuation

For web services: [CompanyName].Web.Services.[FunctionalName]
Example: ABC.Web.Services.OrderJellyBeans

For the main BizTalk projects: [CompanyName].BizTalk.[AssemblyType].[FunctionalName]*
Examples: ABC.BizTalk.Mappings.Underwriting, ABC.BizTalk.Orchestrations.Underwriting
* Denotes potential for multiple levels of functional name, such as Mappings.Underwriting.Valuations

Assemblies

BizTalk assembly names should match the associated project namespace, such as ABC.BizTalk.Mappings.Underwriting. This pertains to both the formal assembly name and the DLL name. The solution name should take the name of the main project within the solution, and therefore also the namespace for that project. Although long names such as this can be unwieldy to work with, the benefits of having the full scope available when the assemblies are installed on the target server are generally judged to outweigh this inconvenience.

Messaging Artifacts

- Schema: <DescriptiveName>.xsd. The .NET type name should match, without the file extension, and the .NET namespace will likely match the assembly name. Examples: PurchaseOrderAcknowledge_FF.xsd or FNMA100330_FF.xsd
- Property Schema: <DescriptiveName>.xsd. Should be named to reflect possible common usage across multiple schemas. Examples: IspecMessagePropertySchema.xsd, UnderwritingOrchestrationKeys.xsd
- Map: <SourceSchema>2<DestinationSchema>.btm. Exceptions to this may be made where the source and destination schemas share the majority of the name, such as in mainframe web service maps. Examples: InstructionResponse2CustomEmailRequest.btm (exception example), AccountCustomerAddressSummaryRequest2MainframeRequest.btm
- Orchestration: <DescriptiveName>.odx. Examples: GetValuationReports.odx, SendMTEDecisionResponse.odx
- Send/Receive Pipeline: <DescriptiveName>.btp. Examples: ValidatingXMLReceivePipeline.btp, FlatFileAssembler.btp
- Receive Port: a plainly worded phrase that will clearly explain the function. Examples: FraudPreventionServices, LetterProcessing
- Receive Location: a plainly worded phrase that will clearly explain the function. (Do we want to include the transport type here?) Example: Arrears Web Service
- Send Port Group: a plainly worded phrase that will clearly explain the function. Example: Customer Updates
- Send Port: a plainly worded phrase that will clearly explain the function. Examples: ABCProductUpdater, LogLendingPolicyOutput
- Parties: a meaningful name for a Trading Partner. If dealing with multiple entities within a Trading Partner organization, the organization name could be used as a prefix.
- Roles: a meaningful name for the role that a Trading Partner plays.

Orchestration Workflow Shapes

- Scope: <DescriptionOfContainedWork> or <DescOfContainedWork><TxType>. Including info about the transaction type may be appropriate in some situations where it adds significant documentation value to the diagram. Example: HandleReportResponse
- Receive: Receive<MessageName>. Typically, MessageName will be the same as the name of the message variable that is being received "into". Example: ReceiveReportResponse
- Send: Send<MessageName>. Typically, MessageName will be the same as the name of the message variable that is being sent. Example: SendValuationDetailsRequest
- Expression: <DescriptionOfEffect>. Expression shapes should be named to describe the net effect of the expression, similar to naming a method. The exception to this is the case where the expression is interacting with an external .NET component to perform a function that overlaps with existing BizTalk functionality; use the closest BizTalk shape name for this case. Example: CreatePrintXML
- Decide: <DescriptionOfDecision>. A description of what will be decided in the "if" branch. Examples: Report Type? Perform MF Save?
- If-Branch: <DescriptionOfDecision>. A (potentially abbreviated) description of what is being decided. Example: Mortgage Valuation Yes
- Else-Branch: Else. Else-branch shapes should always be named "Else".
- Construct Message (Assign): Create<Message> for the Construct shape, <ExpressionDescription> for the contained assignment. If a Construct shape contains a message assignment, it should be prefixed with "Create" followed by an abbreviated name of the message being assigned. The message assignment shape contained within should be named to describe the expression that it contains. Example: CreateReportDataMV, which contains the expression ExtractReportData
- Construct Message (Transform): Create<Message> for the Construct shape, <SourceSchema>2<DestSchema> for the contained transform. If a Construct shape contains a message transform, it should be prefixed with "Create" followed by an abbreviated name of the message being assigned. The message transform shape contained within should generally be named the same as the called map. Example: CreateReportDataMV, which contains the transform ReportDataMV2ReportDataMV
- Construct Message (containing multiple shapes): if a Construct Message shape uses multiple assignments or transforms, the overall shape should be named to communicate the net effect, using no prefix.
- Call/Start Orchestration: Call<OrchestrationName> or Start<OrchestrationName>.
- Throw: Throw<ExceptionType>. The corresponding variable name for the exception type should (often) be the same name as the exception type, only camel-cased. Example: ThrowRuleException, which references the "ruleException" variable
- Parallel: <DescriptionOfParallelWork>. Parallel shapes should be named by a description of what work will be done in parallel.
- Delay: <DescriptionOfWhatWaitingFor>. Delay shapes should be named by a description of what is being waited for. Example: POAcknowledgeTimeout
- Listen: <DescriptionOfOutcomes>. Listen shapes should be named by a description that captures (to the degree possible) all the branches of the Listen shape. Examples: POAckOrTimeout, FirstShippingBid
- Loop: <DescriptionOfLoop>. A (potentially abbreviated) description of what the loop is. Examples: ForEachValuationReport, WhileErrorFlagTrue
- Role Link: see "Roles" in the messaging naming conventions above.
- Suspend: <ReasonDescription>. Describe what action an administrator must take to resume the orchestration. More detail can be passed to the error property, and should include what should be done by the administrator before resuming the orchestration. Example: ReEstablishCreditLink
- Terminate: <ReasonDescription>. Describe why the orchestration terminated. More detail can be passed to the error property. Example: TimeoutsExpired
- Call Rules: Call<PolicyName>. The policy name may need to be abbreviated. Example: CallLendingPolicy
- Compensate: Compensate or Compensate<TxName>. If the shape compensates nested transactions, names should be suffixed with the name of the nested transaction; otherwise it should simply be Compensate. Example: CompensateTransferFunds

Orchestration Types

- Multi-Part Message Types: <LogicalDocumentType>. Multi-part types encapsulate multiple parts. The WSDL spec indicates "parts are a flexible mechanism for describing the logical abstract content of a message." The name of the multi-part type should correspond to the "logical" document type, i.e. what the sum of the parts describes. Example: InvoiceReceipt (which might encapsulate an invoice acknowledgement and a payment voucher)
- Multi-Part Message Part: <SchemaNameOfPart>. Should be named (most often) simply for the schema (or simple type) associated with the part. Example: InvoiceHeader
- Messages: <SchemaName> or <MultiPartMessageTypeName>. Should be named based on the corresponding schema type or multi-part message type. If there is more than one variable of a type, name each for its use within the orchestration. Examples: ReportDataMV, UpdatedReportDataMV
- Variables: <DescriptiveName>. Examples: TargetFilePath, StringProcessor
- Port Types: <FunctionDescription>PortType. Should be named to suggest the nature of an endpoint, with Pascal casing and suffixed with "PortType". If there will be more than one port for a port type, the port type should be named according to the abstract service supplied. The WSDL spec indicates port types are "a named set of abstract operations and the abstract messages involved" that also encapsulates the message pattern (i.e. one-way, request-response, solicit-response) that all operations on the port type adhere to. Examples: ReceiveReportResponsePortType or CallEAEPortType (this is a two-way port, so Receive or Send alone would not be appropriate; it could have been ProcessEAERequestPortType, etc.)
- Ports: <FunctionDescription>Port. Should be named to suggest a grouping of functionality, with Pascal casing and suffixed with "Port". Examples: ReceiveReportResponsePort, CallEAEPort
- Correlation Types: <DescriptiveName>. Should be named based on the logical name of what is being used to correlate. Example: PurchaseOrderNumber
- Correlation Sets: <DescriptiveName>. Should be named based on the corresponding correlation type. If there is more than one, each should be named to reflect its specific purpose within the orchestration. Example: PurchaseOrderNumber
- Orchestration Parameters: <DescriptiveName>. Should be named to match the caller's names for the corresponding variables where appropriate.
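To make the namespace and assembly rules concrete, here is a small hypothetical sketch; the company name, assembly type, and helper class below are invented for illustration and are not part of the guidelines themselves:

    // Hypothetical helper project following the conventions above: the project
    // namespace, the formal assembly name, and the DLL name would all be
    // ABC.BizTalk.Components.Underwriting.
    namespace ABC.BizTalk.Components.Underwriting
    {
        // Pascal-cased type name, per the general rules.
        public class ValuationReportHelper
        {
        }
    }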

    Read the article

  • Craftsmanship is ALL that Matters

    - by Wayne Molina
Today, I'm going to talk about a touchy subject: the notion of working in a company that doesn't use the prescribed "best practices" in its software development endeavours. Over the years I have, using a variety of pseudonyms, asked this question on popular programming forums. Although I always add in some minor variation of the story to avoid suspicion that it's the same person posting, the crux of the tale remains the same:

A Programmer's Tale

A junior software developer has just started a new job at an average company, creating average line-of-business applications for internal use (the most typical scenario programmers find themselves in). This hypothetical newbie has spent a lot of time reading up on the "theory" of software development, devouring books, blogs and screencasts from well-known and respected software developers in the community in order to broaden his knowledge and "do what the pros do". He begins his new job, eager to apply what he's learned on a real-world project, only to discover that his new teammates don't use any of those concepts and techniques. They hack their way through development, or in a best-case scenario use some homebrew, thrown-together semblance of a framework for their applications that follows not one of the best practices suggested by the "elite" in the software community - things like TDD (TDD as a "best practice" is the only subjective part of this post, but it's included here due to a very large following of respected developers who consider it one), the SOLID principles, well-known and venerable tools, even version control in a worst-case and truly nightmarish scenario. Our protagonist is frustrated that he isn't doing things the "proper" way - a way he's spent personal time digesting and learning about and, more importantly, a way that some of the top developers in the industry advocate - and turns to a forum to ask the advice of his peers. Invariably the answer I, in the guise of the concerned newbie, will receive is that A) I don't know anything and should just shut my mouth and sling code the bad way like everybody else on the team, and B) these "best practices" are a fad or a joke, and the only thing that matters is shipping software to your customers.

I am here today to say that anyone who says this, or anything like it, is not only full of crap but indicative of exactly the type of "developer" that has helped to give our industry a bad name. Here is why:

One Who Knows Nothing, Understands Nothing

On one hand, you have the cognoscenti of the .NET development world. Guys like James Avery, Jeremy Miller, Ayende Rahien and Rob Conery; all well-respected and noted programmers that are pretty much our version of celebrities. These guys write blogs, books, and post videos outlining the "correct" way of writing software to make sure it not only works but is maintainable and extensible and a joy to work with. They tout the virtues of the SOLID principles, or of using TDD/BDD, or using a mature ORM like NHibernate, Subsonic or even Entity Framework. On the other hand, you have Joe Everyman, Lead Software Developer at Initrode Corporation - in our hypothetical story Joe is the junior developer's new boss. Joe's been with Initrode for 10 years, starting as the company's very first programmer and over the years building up a little fiefdom of his own until at the present he's in charge of all Initrode's software development. Joe writes code the same way he always has, without bothering to learn much, if anything.
He looked at NHibernate once and found it was "too hard", so he uses a primitive implementation of the TableDataGateway pattern as a wrapper around SqlClient.SqlConnection and SqlClient.SqlCommand instead of an actual ORM (or, in a better case scenario, has created his own ORM); the thought of using LINQ or Entity Framework or really anything other than his own hastily built homebrew solution has never occurred to him. He doesn't understand TDD and considers "testing" to be using the .NET debugger to step through code, or simply loading up an app and entering some values to see if it works. He doesn't really understand SOLID, and he doesn't care to. He's worked as a programmer for years, and that's all that counts. Right? WRONG.

Who would you rather trust? Someone with years of experience and who writes books, creates well-known software and is akin to a celebrity, or someone with no credibility outside their own minute environment who throws around their clout and company seniority as the "proof" of their ability? Joe Everyman may have years of experience at Initrode as a programmer, and says to do things "his way", but people like Jeremy Miller or Ayende Rahien have years of experience at companies just like Initrode, THEY know ten times more than Joe Everyman knows or could ever hope to know, and THEY say to do things "this way". Here's another way of thinking about it: If you wanted to get into politics and needed advice on the best way to do it, would you rather listen to the mayor of Hicktown, USA or Barack Obama? One is a small-time nobody while the other is very well-known and, as such, would probably have much more accurate and beneficial advice.

NOTE: The selection of Barack Obama as an example in no way, shape, or form suggests a political affiliation or political bent to this post or blog, and no political innuendo should be mistakenly read from it; the intent was merely to compare a small-time persona with a well-known persona in a non-software field. Feel free to replace the name "Barack Obama" with any well-known Congressman, Senator or US President of your choice.

DIY Considered Harmful

I will say right now that the homebrew development environment is the WORST one for an aspiring programmer, because it relies on nothing outside its own little box - no useful skill outside of the small pond. If you are forced to use some half-baked, homebrew ORM created by your Director of Software, you are not learning anything valuable you can take with you in the future; now, if you plan to stay at Initrode for 10 years like Joe Everyman, this is fine and dandy. However if, like most of us, you want to advance your career outside a very narrow space, you will do more harm than good by sticking it out in an environment where you, to be frank, know better than everybody else because you are aware of alternative and, in almost all cases, better tools for the job. A junior developer who understands why the SOLID principles are good to follow, or why TDD is beneficial, or who knows that it's better to use NHibernate/Subsonic/EF/LINQ/a well-known ORM versus some in-house one, knows better than a senior developer with 20 years' experience who doesn't understand any of that, plain and simple. Anyone who disagrees is either a liar, or someone who, just like Joe Everyman, Lead Developer, relies on seniority and tenure rather than adapting their knowledge as things evolve.
In many cases, the Joe Everymans of the world act this way out of fear - they cannot possibly fathom that a "junior" could know more than them; after all, they've spent 10 or more years in the same company, doing the same job, cranking out the same shoddy software. And here comes a newbie who hasn't spent 10+ years doing the same things, with a fresh and often radical take on the craft, and Joe Everyman is afraid he might have to put some real effort into his career again instead of just pointing to his 10 years of service at Initrode as "proof" that he's good, or that he might have to learn something new to improve; in most cases the problem is that Joe Everyman, and by extension Initrode itself, has a mentality of just being "good enough", and mediocrity is the rule of the day.

A Thorn Bush is No Place for a Phoenix

My advice is that if you work on a team where they don't use the best practices that some of the most famous developers in our field say are the "right" way to do things (and have legions of people who agree), and YOU are aware of these practices and can see why they work, then LEAVE the company. Find a company where they DO care about quality and craftsmanship; otherwise you will never be happy. There is no point in "dumbing" yourself down to the level of your co-workers and slinging code without care for craftsmanship. In 95% of these situations there will be no point in bringing it to the attention of Joe Everyman because he won't listen; he might even get upset that someone is trying to "upstage" him and fire the newbie, replacing someone with loads of untapped potential with a drone that will just nod affirmatively and grind out the tasks assigned without question. Find a company that has people smart enough to listen to the "best and brightest", and be happy. Do not, I repeat, DO NOT waste away in a job working for ignorant people. At the end of the day software development IS a craft, and a level of craftsmanship is REQUIRED for any serious professional. When you have knowledgeable people with the credibility to back it up saying one thing, and small-time people who are, to put it bluntly, nobodies in the field saying and doing something totally different because they can't comprehend it, leave the nobodies to their own devices to fade into obscurity. Work for a company that uses REAL software engineering techniques and really cares about craftsmanship. The biggest issue affecting our careers, and the reason software development has never been the respected, white-collar career it was meant to be, is that hacks and charlatans can pass themselves off as professional programmers without following a lick of good advice from programmers much better at the craft than they are. These modern-day snake-oil salesmen entrench themselves in companies by hoodwinking non-technical businesspeople and customers with their shoddy wares, end up in senior/lead/executive positions, and push their lack of knowledge on everybody unfortunate enough to work with/for/under them, crushing any dissent or voices of reason and change under their tyrannical heel and leaving behind a trail of dismayed and, often, unemployed junior developers who were made examples of to keep up the facade and avoid the shadow of doubt being cast upon them. To sum this up another way: If you surround yourself with learned people, you will learn. Surround yourself with ignorant people who can't, as the saying goes, see the forest through the trees, and you'll learn nothing of any real value.
There is more to software development than just writing code, and the end goal should not be just "shipping software"; it should be shipping software that is extensible, maintainable, and above all else software whose creation has broadened your knowledge in some capacity, even if a minor one. An eager newbie who knows theory and thirsts for knowledge can easily be moulded and taught the advanced topics, but the same can't be said of someone who only cares about the finish line. This industry needs more people espousing the benefits of software craftsmanship and proper software engineering techniques, and fewer Joe Everymans who are unwilling to adapt or foster new ways of thinking.

Conclusion - I Cast "Protection from Fire"

I am fairly certain this post will spark some controversy and might even invite the flames. Please keep in mind these are opinions and nothing more. A little healthy rant and subsequent flamewar can be good for the soul once in a while. To paraphrase The Godfather: It helps to get rid of the bad blood.

    Read the article

  • OK - What now? How do we become a Social Business?

    - by Michael Snow
We hope that those of you who attended yesterday's Webcast with Brian Solis enjoyed Brian's discussion with Christian Finn for our last Webcast of the season for the Oracle Social Business Thought Leaders Series. For those of you who may have missed the webcast or were stuck at a company holiday party, you'll be glad to hear that the webcast will be available On-Demand starting later today (12/14/12). And any of you who'd like to listen to a quick but informative podcast with Brian can listen to that here.

Some of you may still be left with questions about how to get from point A to point B, and may be even more confused than when you started thinking about this new world of Digital Darwinism. The post below, grabbed from an abundance of great thought leadership prose on Brian's blog, may help you frame the path you need to start walking sooner versus later to stay off of the endangered species list. As you explore your path forward, please keep Oracle in mind - we do offer a wide range of solutions to help your organization optimize the engagement for your customers, employees and partners.

The Path from a Social Brand to a Social Business
Brian Solis, originally posted May 2, 2012

I've been a long-time supporter of MediaTemple's (MT) Residence program along with Gary Vaynerchuk, Neil Patel, and many others whom I respect. I wanted to share my "7 questions to answer to become a social business" with you here.

Social Media is pervasive and is becoming the new normal in corporate marketing. Brands who get this right are starting to build their own media networks rich with customer connections numbering in the millions. Right now, Coca-Cola has over 34 million fans on Facebook, but they're hardly alone. Disney follows just behind with 29 million fans, Starbucks boasts 25 million, and Oreo, Red Bull, and Converse play host to over 20 million fans. If we were to look at other networks such as Twitter and Youtube, we would see a recurring theme. People are connecting en masse with the businesses they support, and new media represents the ability to cultivate consumer relationships in ways not possible with traditional earned or paid media. Sounds great, right? This might sound abrupt, but the truth is that we're hardly realizing the potential of what lies before us. Everything begins with understanding not just how other brands are marketing themselves in social media, but also seeing what they're not doing and envisioning what's possible. We're already approaching the first of many crossroads that new media will present. Do we take the path of a social brand or that of a social business? What's the difference? A social brand is just that, a business that is remodeling or retrofitting its existing marketing practices to new media.
A social business is something altogether different, as it embraces introspection and extrospection to reevaluate internal and external processes, systems, and opportunities and to transform into a living, breathing entity that adapts to market conditions and opportunities. It's a tough decision to make right now, especially at a time when all we read about is how much success many businesses are finding without having to answer this very question. With all of the newfound success in social networks, the truth is that we're only just beginning to learn what's possible, and that's where you come in.

The CMO Survey, an organization that disseminates the opinions of top marketers in order to predict the future of markets, recently published a report that gave credence to the fact that social media is taking off. One of the most profound takeaways from the report was this gem: "The 'like button' [in Facebook] packs more customer-acquisition punch than other demand-generating activities." With insights like this, it's easy to see why the race to social is becoming heated. The report also highlighted exactly where social fits in the marketing mix today and, as you can see, despite all of the hype, it's not a dominant focus yet. As of August 2011, the percentage of overall marketing budgets dedicated to social media hovered at around 7%. However, in 2012 the investment in social media will climb to 10%. And, in five years, social media is expected to represent almost 18% of the total marketing budget. Think about that for a moment. In 2016, social media will only represent 18%? Cue the sound of a record scratching here. With businesses finding success in social networks, why are businesses failing to realize the true opportunity brought forth by the ability to listen to, connect with, and engage with customers?

While there's value in earning views, driving traffic, and building connections through the 3F's (friends, fans and followers), success isn't defined simply by what really amounts to low-hanging fruit. The truth is that businesses cannot measure what it is they don't know to value. As a result, innovation in new engagement initiatives is stifled because we're applying dated or inflexible frameworks to new paradigms. Social media isn't owned by marketing, but instead the entire organization. This changes everything and makes your role so much more important. It's up to you to learn how to think outside of the proverbial social media box to see what others don't: the ability to improve customer experiences through the evolution of a social brand into a social business. Doing so will translate customer insights from what they do and don't share in social networks into better products, services, and processes. See, customers want something more from their favorite businesses than creative campaigns, viral content, and everyday dialogue in social networks. Customers want to be heard and they want to know that you're listening. How businesses use social media must remind them that they're more than just an audience, consumer, or a conduit to "trigger" a desired social effect. Herein lies both the challenge and opportunity of social media. It's bigger than marketing. It's also bigger than customer service.
It's about building relationships with customers that improve experiences and, more importantly, teach businesses how to re-imagine products and internal processes to better adapt to potential crises and seize new opportunities. When it comes down to it, Twitter, Facebook, Youtube and Foursquare are all channels for listening, learning, and engaging. It's what you do within each channel that builds a community around your brand. And, at the end of the day, the value of the community you build counts for everything. It's important to understand that we cannot assume that these networks simply exist for people to line up for our marketing messages or promotional campaigns. Nor can we assume that they're reeling in anticipation for simple dialogue. They want value. They want recognition. They want access to exclusive information and offers. They need direction, answers and resolution.

What we're talking about here is the multidimensional makeup of consumers, and how a one-sided approach to social media forces the need for social media to expand beyond traditional marketing to socialize the various departments, lines of business, and functions to engage based on the nature of the situation or opportunity. In the same CMO study, it was revealed that marketers believe that social media has a long way to go toward integrating into the overall company strategy. On a scale of 1-7, with one being "not integrated at all" and seven being "very integrated," 22% chose "one." Critical functions such as service, HR, sales, R&D, product marketing and development, IR, CSR, etc. are either not engaged or are operating social media within a silo disconnected from other efforts or possibilities. The problem is that customers don't view a company by silo; instead they see one company, one brand, and their experience in social media forms an impression that eventually contributes to their view of your brand.

The first step here is to understand business priorities and objectives to assess how social media can be additive in achieving these goals, and additionally to survey the landscape to determine other areas of interest as it specifically relates to your business:

• Are customers seeking help or direction?
• Who are your most valuable customers and what are they sharing?
• How can you use social media to acquire and retain customers?
• What ideas are circulating, and how can you harness user-generated activity and content to innovate or adapt to better meet the needs of customers?
• How can you broaden a single customer view to recognize the varying needs of customers, and how can your organization organize around each circumstance?
• What insights exist based on how consumers are interacting with one another? How can this intelligence inform marketing, service, products and other important business initiatives?
• How can your business extend its current efforts to deliver better customer experiences and in turn more effectively unite internal collaboration and communication?

Customer demands far exceed the capabilities of the marketing department. While creating a social brand is a necessary endeavor, building a social business is an investment in customer relevance now and over time. Beyond relevance, a social business fosters a culture of change that unites employees and customers and sets a foundation for meaningful and beneficial relationships. Innovation, communication, and creativity are the natural byproducts of engagement and transformation. As a social brand, we are competing for the moment.
As a social business, we are competing for the future in all that we do today.

    Read the article

  • Reading off a socket until end of line C#?

    - by Omar Kooheji
I'm trying to write a service that listens to a TCP socket on a given port until an end of line is received, and then, based on the "line" that was received, executes a command. I've followed a basic socket programming tutorial for C# and have come up with the following code to listen to a socket:

    public void StartListening()
    {
        _log.Debug("Creating Main TCP Listen Socket");
        _mainSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        IPEndPoint ipLocal = new IPEndPoint(IPAddress.Any, _port);
        _log.Debug("Binding to local IP Address");
        _mainSocket.Bind(ipLocal);
        _log.DebugFormat("Listening to port {0}", _port);
        _mainSocket.Listen(10);
        _log.Debug("Creating Asynchronous callback for client connections");
        _mainSocket.BeginAccept(new AsyncCallback(OnClientConnect), null);
    }

    public void OnClientConnect(IAsyncResult asyn)
    {
        try
        {
            _log.Debug("OnClientConnect Creating worker socket");
            Socket workerSocket = _mainSocket.EndAccept(asyn);
            _log.Debug("Adding worker socket to list");
            _workerSockets.Add(workerSocket);
            _log.Debug("Waiting For Data");
            WaitForData(workerSocket);
            _log.DebugFormat("Clients Connected [{0}]", _workerSockets.Count);
            _mainSocket.BeginAccept(new AsyncCallback(OnClientConnect), null);
        }
        catch (ObjectDisposedException)
        {
            _log.Error("OnClientConnection: Socket has been closed\n");
        }
        catch (SocketException se)
        {
            _log.Error("Socket Exception", se);
        }
    }

    public class SocketPacket
    {
        private System.Net.Sockets.Socket _currentSocket;
        public System.Net.Sockets.Socket CurrentSocket
        {
            get { return _currentSocket; }
            set { _currentSocket = value; }
        }

        private byte[] _dataBuffer = new byte[1];
        public byte[] DataBuffer
        {
            get { return _dataBuffer; }
            set { _dataBuffer = value; }
        }
    }

    private void WaitForData(Socket workerSocket)
    {
        _log.Debug("Entering WaitForData");
        try
        {
            lock (this)
            {
                if (_workerCallback == null)
                {
                    _log.Debug("Initializing worker callback to OnDataReceived");
                    _workerCallback = new AsyncCallback(OnDataReceived);
                }
            }
            SocketPacket socketPacket = new SocketPacket();
            socketPacket.CurrentSocket = workerSocket;
            workerSocket.BeginReceive(socketPacket.DataBuffer, 0, socketPacket.DataBuffer.Length, SocketFlags.None, _workerCallback, socketPacket);
        }
        catch (SocketException se)
        {
            _log.Error("Socket Exception", se);
        }
    }

    public void OnDataReceived(IAsyncResult asyn)
    {
        SocketPacket socketData = (SocketPacket)asyn.AsyncState;
        try
        {
            int iRx = socketData.CurrentSocket.EndReceive(asyn);
            char[] chars = new char[iRx + 1];
            _log.DebugFormat("Created Char array to hold incoming data. [{0}]", iRx + 1);
            System.Text.Decoder decoder = System.Text.Encoding.UTF8.GetDecoder();
            int charLength = decoder.GetChars(socketData.DataBuffer, 0, iRx, chars, 0);
            _log.DebugFormat("Read [{0}] characters", charLength);
            String data = new String(chars);
            _log.DebugFormat("Read in String \"{0}\"", data);
            WaitForData(socketData.CurrentSocket);
        }
        catch (ObjectDisposedException)
        {
            _log.Error("OnDataReceived: Socket has been closed. Removing Socket");
            _workerSockets.Remove(socketData.CurrentSocket);
        }
        catch (SocketException se)
        {
            _log.Error("SocketException:", se);
            _workerSockets.Remove(socketData.CurrentSocket);
        }
    }

This, I thought, was going to be a good basis for what I wanted to do, but the code I started from appended the incoming characters to a text box one by one and didn't do anything else with them, which doesn't really work for what I want to do. My main issue is the decoupling of the OnDataReceived method from the WaitForData method,
which means I'm having issues building up a string (I would use a StringBuilder, but since I can accept multiple connections a single one doesn't really work). Ideally I'd like to loop while listening to a socket until I see an end-of-line character, and then call a method with the resulting string as a parameter. What's the best way to go about doing this?
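One way this is often handled, given a minimal sketch and not anything from the original question (the per-connection buffer, the newline delimiter, and the ExecuteCommand callback are all assumptions here), is to give each connection its own StringBuilder inside the state object and scan each received chunk for the line terminator:

    // Hypothetical per-connection line buffering: each SocketPacket carries
    // its own StringBuilder, so concurrent connections don't share state.
    public class SocketPacket
    {
        public Socket CurrentSocket { get; set; }
        public byte[] DataBuffer = new byte[1024];
        public StringBuilder Line = new StringBuilder(); // accumulates one line
    }

    private void OnDataReceived(IAsyncResult asyn)
    {
        SocketPacket packet = (SocketPacket)asyn.AsyncState;
        int bytesRead = packet.CurrentSocket.EndReceive(asyn);
        // bytesRead == 0 would mean the peer closed; handling omitted for brevity.
        string chunk = Encoding.UTF8.GetString(packet.DataBuffer, 0, bytesRead);

        foreach (char c in chunk)
        {
            if (c == '\n')
            {
                // A complete line has arrived; hand it off and start a new one.
                ExecuteCommand(packet.Line.ToString().TrimEnd('\r'));
                packet.Line.Length = 0;
            }
            else
            {
                packet.Line.Append(c);
            }
        }

        WaitForData(packet.CurrentSocket); // keep reading on this connection
    }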

    Read the article

  • bind() fails with windows socket error 10038

    - by herrturtur
I'm trying to write a simple program that will receive a string of max 20 characters and print that string to the screen. The code compiles, but I get a bind() failed: 10038. After looking up the error number on MSDN (socket operation on nonsocket), I changed some code from int sock; to SOCKET sock, which shouldn't make a difference, but one never knows. Here's the code:

    #include <iostream>
    #include <winsock2.h>
    #include <cstdlib>
    using namespace std;

    const int MAXPENDING = 5;
    const int MAX_LENGTH = 20;

    void DieWithError(char *errorMessage);

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            cerr << "Usage: " << argv[0] << " <Port>" << endl;
            exit(1);
        }

        // start winsock2 library
        WSAData wsaData;
        if (WSAStartup(MAKEWORD(2,0), &wsaData) != 0) {
            cerr << "WSAStartup() failed" << endl;
            exit(1);
        }

        // create socket for incoming connections
        SOCKET servSock;
        if (servSock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) == INVALID_SOCKET)
            DieWithError("socket() failed");

        // construct local address structure
        struct sockaddr_in servAddr;
        memset(&servAddr, 0, sizeof(servAddr));
        servAddr.sin_family = AF_INET;
        servAddr.sin_addr.s_addr = INADDR_ANY;
        servAddr.sin_port = htons(atoi(argv[1]));

        // bind to the local address
        int servAddrLen = sizeof(servAddr);
        if (bind(servSock, (SOCKADDR*)&servAddr, servAddrLen) == SOCKET_ERROR)
            DieWithError("bind() failed");

        // mark the socket to listen for incoming connections
        if (listen(servSock, MAXPENDING) < 0)
            DieWithError("listen() failed");

        // accept incoming connections
        int clientSock;
        struct sockaddr_in clientAddr;
        char buffer[MAX_LENGTH];
        int recvMsgSize;
        int clientAddrLen = sizeof(clientAddr);
        for (;;) {
            // wait for a client to connect
            if ((clientSock = accept(servSock, (sockaddr*)&clientAddr, &clientAddrLen)) < 0)
                DieWithError("accept() failed");

            // clientSock is connected to a client
            // BEGIN Handle client
            cout << "Handling client " << inet_ntoa(clientAddr.sin_addr) << endl;
            if ((recvMsgSize = recv(clientSock, buffer, MAX_LENGTH, 0)) < 0)
                DieWithError("recv() failed");
            cout << "Word in the tubes: " << buffer << endl;
            closesocket(clientSock);
            // END Handle client
        }
    }

    void DieWithError(char *errorMessage)
    {
        fprintf(stderr, "%s: %d\n", errorMessage, WSAGetLastError());
        exit(1);
    }
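For reference, the usual pattern for this kind of check parenthesizes the assignment, since == binds more tightly than = in C++; here is a minimal sketch of the socket-creation check written that way (an illustration of the general pattern, not a quote from the post):

    // Hypothetical rewrite of the check: without the extra parentheses,
    // servSock would receive the boolean result of the comparison
    // (0 or 1) rather than the socket handle returned by socket().
    SOCKET servSock;
    if ((servSock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP)) == INVALID_SOCKET)
        DieWithError("socket() failed");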

    Read the article

  • Listening for TCP and UDP requests on the same port

    - by user339328
I am writing a client/server set of programs. Depending on the operation requested by the client, I make a TCP or UDP request. Implementing the client side is straightforward, since I can easily open a connection with either protocol and send the request to the server side. On the server side, on the other hand, I would like to listen both for UDP and TCP connections on the same port. Moreover, I would like the server to open a new thread for each connection request. I have adopted the approach explained in: link text

I have extended this code sample by creating new threads for each TCP/UDP request. This works correctly if I use TCP only, but it fails when I attempt to make UDP bindings. Please give me any suggestion on how I can correct this. Thanks.

Here is the Server code:

    public class Server {
        public static void main(String args[]) {
            try {
                int port = 4444;
                if (args.length > 0)
                    port = Integer.parseInt(args[0]);

                SocketAddress localport = new InetSocketAddress(port);

                // Create and bind a tcp channel to listen for connections on.
                ServerSocketChannel tcpserver = ServerSocketChannel.open();
                tcpserver.socket().bind(localport);

                // Also create and bind a DatagramChannel to listen on.
                DatagramChannel udpserver = DatagramChannel.open();
                udpserver.socket().bind(localport);

                // Specify non-blocking mode for both channels, since our
                // Selector object will be doing the blocking for us.
                tcpserver.configureBlocking(false);
                udpserver.configureBlocking(false);

                // The Selector object is what allows us to block while waiting
                // for activity on either of the two channels.
                Selector selector = Selector.open();
                tcpserver.register(selector, SelectionKey.OP_ACCEPT);
                udpserver.register(selector, SelectionKey.OP_READ);

                System.out.println("Server Started on port: " + port + "!");

                // Load Map
                Utils.LoadMap("mapa");
                System.out.println("Server map ... LOADED!");

                // Now loop forever, processing client connections
                while (true) {
                    try {
                        selector.select();
                        Set<SelectionKey> keys = selector.selectedKeys();

                        // Iterate through the Set of keys.
                        for (Iterator<SelectionKey> i = keys.iterator(); i.hasNext();) {
                            SelectionKey key = i.next();
                            i.remove();

                            Channel c = key.channel();
                            if (key.isAcceptable() && c == tcpserver) {
                                new TCPThread(tcpserver.accept().socket()).start();
                            } else if (key.isReadable() && c == udpserver) {
                                new UDPThread(udpserver.socket()).start();
                            }
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            } catch (Exception e) {
                e.printStackTrace();
                System.err.println(e);
                System.exit(1);
            }
        }
    }

The UDPThread code:

    public class UDPThread extends Thread {
        private DatagramSocket socket = null;

        public UDPThread(DatagramSocket socket) {
            super("UDPThread");
            this.socket = socket;
        }

        @Override
        public void run() {
            byte[] buffer = new byte[2048];
            try {
                DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                socket.receive(packet);

                String inputLine = new String(buffer);
                String outputLine = Utils.processCommand(inputLine.trim());

                DatagramPacket reply = new DatagramPacket(outputLine.getBytes(),
                        outputLine.getBytes().length,
                        packet.getAddress(), packet.getPort());
                socket.send(reply);
            } catch (IOException e) {
                e.printStackTrace();
            }
            socket.close();
        }
    }

I receive:

    Exception in thread "UDPThread" java.nio.channels.IllegalBlockingModeException
            at sun.nio.ch.DatagramSocketAdaptor.receive(Unknown Source)
            at server.UDPThread.run(UDPThread.java:25)
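A note on the likely mechanics here, with a hedged sketch of my own rather than anything from the thread: the adaptor socket returned by DatagramChannel.socket() refuses receive() while the channel is in non-blocking mode, so one option is to read the datagram from the channel itself once the selector reports it readable, assuming the thread is handed the DatagramChannel instead of its socket:

    // Hypothetical variant of UDPThread.run() reading via the channel,
    // which is legal in non-blocking mode, instead of the adaptor socket.
    ByteBuffer buffer = ByteBuffer.allocate(2048);
    SocketAddress client = channel.receive(buffer); // returns null if nothing to read
    if (client != null) {
        buffer.flip();
        String inputLine = new String(buffer.array(), 0, buffer.limit());
        String outputLine = Utils.processCommand(inputLine.trim());
        channel.send(ByteBuffer.wrap(outputLine.getBytes()), client); // reply to sender
    }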

    Read the article

  • Why is my emit not getting called?

    - by cRaZiRiCaN
The client and server connect just fine. For some reason the emit on my client is not firing correctly. I am trying to get testEmit and testEmit2 working. This is my server:

    express = require 'express'
    mongo = require 'mongodb'

    app = express()
    server = (require 'http').createServer(app)
    io = (require 'socket.io').listen(server)
    server.listen(8080)

    app.use(express.static(__dirname + '/public'))

    # db = new mongo.Db("documentsdb", new mongo.Server("localhost", 27017, auto_reconnect: true), {safe:true})

    io.sockets.on 'connection', (socket) ->
      console.log 'Socket.io is connected!'

      # This returns an array of documents sorted via date in decreasing order. (Most recent documents first.)
      socket.on 'loadRecentDocuments', ->
        console.log 'Loading most recent documents.'
        db.collection 'documents', (err, collection) ->
          collection.find().sort(dateAdded: -1).toArray (err, documents) ->
            # This emit is received at index.html where a javascript function sendDocuments manages the documents.
            socket.emit 'sendDocuments', documents
        return

    # The index.html provides the code data from the search box via a javascript.
    io.sockets.on 'findDocuments', (code) ->
      # Returns an array of documents with the corresponding class code.
      documentCodeToSearch = code
      console.log 'Retrieving documents with code: ' + documentCodeToSearch
      db.collection 'documents', (err, collection) ->
        collection.find(code: documentCodeToSearch).toArray (err, documents) ->
          socket.emit 'sendDocuments', documents
      return

    # Uploads a document to the server. documentData is sent via javascript from submit.html
    io.sockets.on 'addDocument', (documentData) ->
      console.log 'Adding document: ' + documentData
      db.collection 'documents', (err, collection) ->
        collection.insert documentData, safe: true
      return

    # Test socket.io
    io.sockets.on 'testEmit', ->
      console.log('Emit received.')
      socket.emit 'testEmit2', 'caca'
      return

    app.listen 1337
    console.log "Listening on port 1337..."

This is my client:

    <!doctype HTML>
    <html>
    <head>
        <title>ProjectShare</title>
        <script src="http://localhost:8080/socket.io/socket.io.js"></script>
        <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script>
        <script>
            //Make sure DOM is ready before mucking around.
            $(document).ready(function() {
                console.log('jQuery entered!');
                var socket = io.connect('http://localhost:8080');
                socket.emit('testEmit');
                socket.on('testEmit2', function(data) {
                    console.log('Emit received at browser.');
                    console.log(data);
                });
                console.log('jQuery exit.');
            });
        </script>
    </head>
    <body>
        <ol>
            <li><a href="index.html">ProjectShare</a></li>
            <li><a href="guidelines.html">Guidelines</a></li>
            <li><a href="upload.html">Upload</a></li>
            <li>
                <form>
                    <input type="search" placeholder="enter class code"/>
                    <input type="submit" value="Go"/>
                </form>
            </li>
        </ol>
        <ol id="documentList">
        </ol>
    </body>
    </html>
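One thing worth noting when reading the server above (an observation and sketch of my own, not taken from the thread): io.sockets.on is normally used for the 'connection' event, while per-socket events such as 'testEmit' are registered on the individual socket inside that handler, roughly like this:

    io.sockets.on 'connection', (socket) ->
      # Register per-socket events on the socket delivered by 'connection';
      # a handler attached via io.sockets.on 'testEmit' would never fire,
      # and 'socket' would not be in scope there either.
      socket.on 'testEmit', ->
        console.log 'Emit received.'
        socket.emit 'testEmit2', 'caca'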

    Read the article
