Search Results

Search found 14464 results on 579 pages for 'del icio us'.


  • Can I add a second mobile/cell network to my Blackberry Express Server?

    - by cagcowboy
    We have recently installed BlackBerry Express Server for our UK BlackBerry users (who are on Vodafone; not sure whether the network matters). Our US users have got wind of this and are hoping we can do the same for them (they're mostly on AT&T). The UK and US Exchange mailboxes are based on two different Exchange servers. So: is it possible to support the US users with the current installation/server configuration?

    Read the article

  • Specifying a Postfix Instance to send outbound email

    - by Catherine Jefferson
    I have a CentOS 6.5 server running Postfix 2.6.x (the default distribution) with five public IPv4 addresses bound to it. Each IP has DNS and rDNS set separately, and each uses a different hostname at a different domain. I have five Postfix instances, one bound to each IP, like this example:

      192.168.34.104  red.example.com      /etc/postfix
      192.168.36.48   green.example.net    /etc/postfix-green
      192.168.36.49   pink.example.org     /etc/postfix-pink
      192.168.36.50   orange.example.info  /etc/postfix-orange
      192.168.36.51   blue.example.us      /etc/postfix-blue

    I've tested each IP by telnetting to port 25; Postfix answers and banners properly with the correct hostname. Email is received on all of these instances with no problems and is routed to the correct place. This setup, minus the final instance, has existed for a couple of years and works. I never bothered to set up outbound email to go through any but the main instance, however; there was no need. Now I need to send email from blue.example.us that actually leaves from that interface and IP, such that the Received headers show blue.example.us as the sending mailhost, so that SPF and DKIM validate, and so on. The email sent from blue.example.us is a feedback loop sent by a single shell account on the server (account5), an account dedicated to sending this email. The account receives the feedback-loop emails from servers on other networks, saves the bodies of those emails, generates a new outbound email header, appends the saved body, and sends the email by piping each message to sendmail -oi -t. We're doing it this way to mask the identities of the initial servers. The procmail script that processes these emails works correctly. However, I cannot configure this account to send email through the proper Postfix instance/IP/interface. The exact same account and script sends email through the main Postfix instance (/etc/postfix) without any issues. When I change MAIL_CONFIG to point to /etc/postfix-blue in either .bash_profile or the procmail script that handles this email, though, I get this error:

      sendmail: fatal: User account5(###) is not allowed to submit mail

    I've read the manuals on Postfix.org, searched Google, and tried the suggestions in three previous answers here on ServerFault.com: "Postfix - specify interface to deliver outbound mail on", "Postfix user is not allowed to submit mail", and "Postfix rejects php mails". I have been careful to stop and restart Postfix after each configuration change and to test the results. Nothing has worked. The main Postfix instance happily accepts outbound email from account5; the postfix-blue instance continues to reject email from account5 with the sendmail error above. As tempting as it is to blame machine hostility, I know that I must be missing something or doing something wrong. Does anybody have any suggestions as to what it might be? Please feel free to ask for further information about my setup if you need it.
    =-=-=-=-=-=-=-=-=-=
    At the request of the responder, here are main.cf and master.cf for a) the main Postfix instance ("red.example.com") and b) the FBL instance ("blue.example.us"). [NOTE: All parameters not specified below were left at the default Postfix 2.6 settings.]

    MAIN: master.cf

      smtp    inet  n  -  n  -  -  smtpd

    MAIN: main.cf

      myhostname = red.example.com
      mydomain = example.com
      inet_interfaces = $myhostname, localhost
      inet_protocols = all
      lmtp_host_lookup = native
      smtp_host_lookup = native
      ignore_mx_lookup_error = yes
      mydestination = $myhostname, localhost.$mydomain, localhost
      local_recipient_maps =
      mynetworks = 192.168.34.104/32
      relay_domains = example.com, example.info, example.net, example.org, example.us
      relayhost = [192.168.34.102]    # Separate physical server, main mailserver.
      relay_recipient_maps = hash:/etc/postfix/relay_recipients
      alias_maps = hash:/etc/aliases
      alias_database = hash:/etc/aliases
      smtpd_banner = $myhostname ESMTP $mail_name
      multi_instance_wrapper = ${command_directory}/postmulti -p --
      multi_instance_enable = yes
      multi_instance_directories = /etc/postfix-green /etc/postfix-pink /etc/postfix-orange /etc/postfix-blue

    FBL: master.cf

      184.173.119.103:25  inet  n  -  n  -  -  smtpd

    FBL: main.cf

      myhostname = blue.example.us
      mydomain = blue.example.us    <= Deliberately set to subdomain only.
      myorigin = $mydomain
      inet_interfaces = $myhostname
      lmtp_host_lookup = native
      smtp_host_lookup = native
      ignore_mx_lookup_error = yes
      mydestination = $myhostname
      local_recipient_maps = unix:passwd.byname $alias_maps $virtual_alias_maps
      mynetworks = 192.168.36.51/32, 192.168.35.20/31    <= Second IP is backup MX servers
      relay_domains = $mydestination
      recipient_canonical_maps = hash:/etc/postfix-blue/canonical
      virtual_alias_maps = hash:/etc/postfix-fbl/virtual
      alias_maps = hash:/etc/aliases, hash:/etc/postfix-blue/canonical
      mailbox_command = /usr/bin/procmail -a "$EXTENSION" DEFAULT=$HOME/Mail/ MAILDIR=$HOME/Mail
      smtpd_banner = $myhostname ESMTP $mail_name
      authorized_submit_users =
      multi_instance_name = postfix-blue
      multi_instance_enable = yes

    Read the article

  • Puppet write hosts using api call

    - by Ben Smith
    I'm trying to write a Puppet function that calls my hosting environment (Rackspace Cloud at the moment) to list servers, then update my hosts file. My get_hosts function is currently this:

      require 'rubygems'
      require 'cloudservers'

      module Puppet::Parser::Functions
        newfunction(:get_hosts, :type => :rvalue) do |args|
          unless args.length == 1
            raise Puppet::ParseError, "Must provide the datacenter"
          end
          DC = args[0]
          USERNAME = DC == "us" ? "..." : "..."
          API_KEY  = DC == "us" ? "..." : "..."
          AUTH_URL = DC == "us" ? CloudServers::AUTH_USA : CloudServers::AUTH_UK
          DOMAIN   = "..."
          cs = CloudServers::Connection.new(:username => USERNAME, :api_key => API_KEY, :auth_url => AUTH_URL)
          cs.list_servers_detail.map {|server|
            server.map {|s|
              { s[:name] + "." + DC + DOMAIN => { :ip => s[:addresses][:private][0], :aliases => s[:name] } }
            }
          }
        end
      end

    And I have a hosts.pp that calls this and 'should' write it to /etc/hosts:

      class hosts::us {
        $hosts = get_hosts("us")
        hostentry { $hosts: }
      }

      define hostentry() {
        host { $name:
          ip           => $name[ip],
          host_aliases => $name[aliases],
        }
      }

    As you can imagine, this isn't currently working, and I'm getting a 'Symbol as array index at /etc/puppet/manifests/hosts.pp:2' error. I imagine that once I've realised what I'm currently doing wrong there will be more errors to come. Is this a good idea? Can someone help me work out how to do this?

    Read the article

  • Fixing Broken Groups

    - by themaestro
    Hey, I just got onto a new project with the student government at my university and we're trying to get our web server into a more workable state. The current problem is that all of us, for some reason, have sudo power on the server, but as far as we can tell we can't write or create files anywhere on it. Our groups are currently as follows:

      /srv/ice/db$ groups goshri sshamim rmenezes
      goshri : goshri
      sshamim : sshamim ptx
      rmenezes : rmenezes ptx
      daifotis : daifotis ptx

    We added a few of us to ptx because we thought that might give us write access, but it didn't. We have a bunch of webapps running on this server, but since it's a university, things change hands quickly. What can we do to give ourselves write access?

    Read the article

  • How Do You Get Answers?

    - by Grant Fritchey
    We all learn differently. Some people really prefer to sit with a book. Others need classroom time with an instructor. Still others just want to do things themselves until they figure them out. And sometimes it's all of the above. Since we all learn differently, is it any surprise that we ask questions differently? Some of us want to phrase a very focused question that should yield a limited and concise answer. Others want to have a long, involved discussion with possible variations. There are those among us who want to know whether or not their peers agree with a given answer. Still more want to see lots of points of view dished out so they can understand things in a different way. Many of us want to know, when a question has been asked, that there is a hard, well-defined answer. The rest are willing to read through a bevy of answers and discussion, deciding for ourselves who has come up with the best solution. Where am I going with this? Excellent question. Since we all ask questions in different ways, isn't it great that we have places to go that let us ask questions and get answers in the way that's best suited to our individual preferences? Do you want the long-running discussion format? Then you should be hanging out on the forums over at SQL Server Central. Do you want specific answers with direct peer evaluation of the strength of those answers? Then you should be hanging out on the forums over at Ask SQL Server Central. You can get answers to your questions, and do it in a way that's most comfortable for you.

    Read the article

  • Can my employer force me to backup my personal machine? [closed]

    - by Eric B
    Here's the background: Approximately 1.25 years ago, the company I work for was acquired by a larger 400 person company. Before acquisition (and today still) we are all remote employees using our own personal hardware for work-related duties (coding, email, etc). We are approximately 15 employees within the larger organization. Some time after acquisition, the now owning company was slapped with a civil lawsuit. Part of this lawsuit (discovery) is requiring them to retrieve & store from us any related information. Because we were a separate company up until acquisition, there is a high probability that our personal machines might contain information about what the lawsuit alleges (email, documents, chat logs?, etc). Obviously, this depends largely on the person's job function (engineer vs. customer support vs. CEO). All employees are being required to comply. Since acquisition (1.25 yrs), the new company has not provided us with company laptops/desktops. We continue to use personal hardware, licenses, etc for work. Email is via POP3s and not hanging around on the mail server - it's on everyone's client. Documents are spread across personal machines. So, now they want us each to backup our complete personal machines. They are allowing us to create a "personal" folder where we can place personal documents. That single folder will be excluded from backup. Of course, that means total re-arrangement of documents, etc. For most of us, 99% of the data on the machine is NOT related to work. So, what's the consensus? Should we comply? What is their recourse if we do not?

    Read the article

  • curl -I stops responding when the URL contains "="

    - by user1512778
    I ran this command:

      curl -I 'http://criminaljustice.state.ny.us/cgi/internet/nsor/fortecgi?serviceName=WebNSOR&templateName=detail.htm&requestingHandler=WebNSORDetailHandler&ID=368343543'

    and it got stuck. But if I run this:

      curl -I 'http://criminaljustice.state.ny.us/cgi/internet/nsor/fortecgi'

      HTTP/1.1 200 OK
      Content-length: 207
      Content-type: text/html
      Server: Sun-ONE-Web-Server/6.1
      Date: Sat, 15 Dec 2012 08:49:14 GMT
      Via: 1.1 proxy-internet-revproxy
      Proxy-agent: Oracle-iPlanet-Proxy-Server/4.0

    it works. Then I tried shortening the first URL:

      curl -I 'http://criminaljustice.state.ny.us/cgi/internet/nsor/fortecgi?serviceName=WebNSOR&templateName=detail.htm'

    and it got stuck too. I don't know why; it seems that if the URL contains "=" the server stops responding. So I tried the URL with the "=" removed (serviceName=WebNSOR becomes serviceNameWebNSOR):

      curl -I 'http://criminaljustice.state.ny.us/cgi/internet/nsor/fortecgi?serviceNameWebNSOR'

      HTTP/1.1 200 OK
      Content-length: 207
      Content-type: text/html
      Server: Sun-ONE-Web-Server/6.1
      Date: Sat, 15 Dec 2012 08:50:38 GMT
      Via: 1.1 proxy-internet-revproxy
      Proxy-agent: Oracle-iPlanet-Proxy-Server/4.0

    Why can't I use "="? Please assist me.
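    If it helps to rule out curl itself or shell quoting, here is a minimal Node.js stand-in for the same HEAD request, with a timeout so a stall is reported instead of hanging forever. The script is a hypothetical debugging aid for illustration only; the URL is the one quoted above, everything else is an assumption.

      // replay.js - rough equivalent of `curl -I` (a HEAD request) in Node.js,
      // useful for checking whether the stall is specific to curl or to the server.
      const http = require("http");

      const url = "http://criminaljustice.state.ny.us/cgi/internet/nsor/fortecgi" +
                  "?serviceName=WebNSOR&templateName=detail.htm" +
                  "&requestingHandler=WebNSORDetailHandler&ID=368343543";

      const req = http.request(url, { method: "HEAD" }, (res) => {
          // Print roughly what `curl -I` would show: status line plus headers.
          console.log(`HTTP/${res.httpVersion} ${res.statusCode} ${res.statusMessage}`);
          for (const [name, value] of Object.entries(res.headers)) {
              console.log(`${name}: ${value}`);
          }
          res.resume(); // HEAD responses carry no body, but drain just in case
      });

      // Report a stall instead of waiting indefinitely.
      req.setTimeout(15000, () => {
          console.error("No response within 15s - the server appears to be stalling.");
          req.destroy();
      });

      req.on("error", (err) => console.error("Request error:", err.message));
      req.end();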

    Read the article

  • Change base language of Windows 8 Installer

    - by Firedragon
    I have access to DreamSpark and Windows 8. When I picked the version I chose English, which is fine, but it is US English; I realised afterwards that I should have picked UK English instead. You are only allowed one version, so I cannot switch it. Now, I can change the language pack later to UK English, but US English is always listed in the language bar and seems impossible to remove, and a system restore reverts to US English. Is there a way to fully change the base language to UK English in the installer, in effect making the installer offer US and UK English, or just UK English, as if I had chosen the correct version?

    Read the article

  • Session Cookies and IE 8

    - by Matt Luongo
    I recently built a simple web app deployed on Tomcat. The app uses pretty standard session-based security, where a user who has logged in is given a session. Sessions work fine in Firefox and Chrome, but require the use of jsessionid in the URL for IE (tested 7 & 8, both set to medium privacy). In IE 8, I tried to override cookie handling, setting "Allow all 3rd party cookies" and "Allow all session cookies" - no dice. However, when I run Tomcat on my local machine, IE accepts the cookie and sessions work just fine. And now, for the HTTP headers. From Chrome, a logged-in user gets a session:

      GET http://devl:8080/testing/ HTTP/1.1
      Host: devl:8080
      Connection: keep-alive
      User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.1.249.1036 Safari/532.5
      Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
      Accept-Encoding: gzip,deflate,sdch
      Accept-Language: en-US,en;q=0.8
      Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

      HTTP/1.1 200 OK
      Server: Apache-Coyote/1.1
      P3P: CP="NON CURa ADMa DEVa TAIa OUR BUS IND UNI COM NAV INT STA"
      Set-Cookie: JSESSIONID=9280023BCE2046F32B13C89130CBC397; Path=/testing
      Content-Type: text/html;charset=UTF-8
      Content-Language: en-US
      Content-Length: 2450
      Date: Fri, 26 Mar 2010 14:14:40 GMT

      GET http://devl:8080/testing/logout HTTP/1.1
      Host: devl:8080
      Connection: keep-alive
      User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.1.249.1036 Safari/532.5
      Referer: http://devl:8080/testing/
      Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
      Accept-Encoding: gzip,deflate,sdch
      Accept-Language: en-US,en;q=0.8
      Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
      Cookie: JSESSIONID=9280023BCE2046F32B13C89130CBC397
      ...

    From IE 8, with standard medium-level security and privacy:

      GET http://devl:8080/testing/ HTTP/1.1
      Accept: application/x-ms-application, image/jpeg, application/xaml+xml, image/gif, image/pjpeg, application/x-ms-xbap, */*
      Accept-Language: en-US
      User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Win64; x64; Trident/4.0; .NET CLR 2.0.50727; SLCC2; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; MDDC; Tablet PC 2.0)
      UA-CPU: AMD64
      Accept-Encoding: gzip, deflate
      Host: devl:8080
      Connection: Keep-Alive

      HTTP/1.1 200 OK
      Server: Apache-Coyote/1.1
      P3P: CP="NON CURa ADMa DEVa TAIa OUR BUS IND UNI COM NAV INT STA"
      Set-Cookie: JSESSIONID=192999F922D6E9C868314452726764BA; Path=/testing
      Content-Type: text/html;charset=UTF-8
      Content-Language: en-US
      Content-Length: 2450
      Date: Fri, 26 Mar 2010 14:32:34 GMT

      GET http://devl:8080/testing/logout HTTP/1.1
      Accept: application/x-ms-application, image/jpeg, application/xaml+xml, image/gif, image/pjpeg, application/x-ms-xbap, */*
      Referer: http://devl:8080/testing/;jsessionid=6371A83EFE39A46997544F9146AA5CEA
      Accept-Language: en-US
      User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Win64; x64; Trident/4.0; .NET CLR 2.0.50727; SLCC2; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; MDDC; Tablet PC 2.0)
      UA-CPU: AMD64
      Accept-Encoding: gzip, deflate
      Connection: Keep-Alive
      Host: devl:8080
      ...

    Note that in the IE trace the follow-up request never sends the JSESSIONID cookie back; the session only shows up as ;jsessionid in the Referer. I thought it might be P3P, but on adding a compact policy nothing changed. This is the standard Tomcat session, so I'm really surprised I haven't been able to find other people with the same problem so far. Anyone have any ideas?
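    For readers who want to experiment with the IE-side behaviour without standing up Tomcat, here is a small Node.js stand-in that serves the same P3P compact policy and a Tomcat-style JSESSIONID cookie, then echoes back whatever cookies the browser returns on the next request. This is a hypothetical sketch for illustration, not the poster's Java/Tomcat setup; the header values are copied from the capture above.

      // cookie-test.js - emit the same header combination as the capture above
      // and report whether the browser sends the session cookie back.
      const http = require("http");

      http.createServer((req, res) => {
          // Values copied from the captured response; the cookie name and path
          // follow Tomcat's defaults (Path=/testing limits where it is returned).
          res.setHeader("P3P", 'CP="NON CURa ADMa DEVa TAIa OUR BUS IND UNI COM NAV INT STA"');
          res.setHeader("Set-Cookie", "JSESSIONID=9280023BCE2046F32B13C89130CBC397; Path=/testing");
          res.setHeader("Content-Type", "text/html;charset=UTF-8");
          // Echo the Cookie header so it is obvious whether JSESSIONID made the round trip.
          res.end("Cookies received: " + (req.headers.cookie || "(none)"));
      }).listen(8080, () => console.log("listening on http://localhost:8080/testing/"));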

    Read the article

  • jQuery Mobile email form with PHP script

    - by Pablo Marino
    I'm working on a mobile website and I'm having problems with my email form. The problem is that when I receive the mail sent through the form, it contains no data; all the data entered by the user is missing. Here is the form code:

      <form action="correo.php" method="post" data-ajax="false">
        <div>
          <p><strong>You can contact us by filling the following form</strong></p>
        </div>
        <div data-role="fieldcontain">
          <fieldset data-role="controlgroup" data-mini="true">
            <label for="first_name">First Name</label>
            <input id="first_name" placeholder="" type="text" />
          </fieldset>
        </div>
        <div data-role="fieldcontain">
          <fieldset data-role="controlgroup" data-mini="true">
            <label for="last_name">Last name</label>
            <input id="last_name" placeholder="" type="text" />
          </fieldset>
        </div>
        <div data-role="fieldcontain">
          <fieldset data-role="controlgroup" data-mini="true">
            <label for="email">Email</label>
            <input id="email" placeholder="" type="email" />
          </fieldset>
        </div>
        <div data-role="fieldcontain">
          <fieldset data-role="controlgroup" data-mini="true">
            <label for="telephone">Phone</label>
            <input id="telephone" placeholder="" type="tel" />
          </fieldset>
        </div>
        <div data-role="fieldcontain">
          <fieldset data-role="controlgroup" data-mini="true">
            <label for="age">Age</label>
            <input id="age" placeholder="" type="number" />
          </fieldset>
        </div>
        <div data-role="fieldcontain">
          <fieldset data-role="controlgroup" data-mini="true">
            <label for="country">Country</label>
            <input id="country" placeholder="" type="text" />
          </fieldset>
        </div>
        <div data-role="fieldcontain">
          <fieldset data-role="controlgroup">
            <label for="message">Your message</label>
            <textarea id="message" placeholder="" data-mini="true"></textarea>
          </fieldset>
        </div>
        <input data-inline="true" data-theme="e" value="Submit" data-mini="true" type="submit" />
      </form>

    Here is the PHP code:

      <?
      /* PHP variables are initialized here */
      if (phpversion() >= "4.2.0") {
          if ( ini_get('register_globals') != 1 ) {
              $supers = array('_REQUEST', '_ENV', '_SERVER', '_POST', '_GET', '_COOKIE', '_SESSION', '_FILES', '_GLOBALS' );
              foreach( $supers as $__s) {
                  if ( (isset($$__s) == true) && (is_array( $$__s ) == true) )
                      extract( $$__s, EXTR_OVERWRITE );
              }
              unset($supers);
          }
      } else {
          if ( ini_get('register_globals') != 1 ) {
              $supers = array('HTTP_POST_VARS', 'HTTP_GET_VARS', 'HTTP_COOKIE_VARS', 'GLOBALS', 'HTTP_SESSION_VARS', 'HTTP_SERVER_VARS', 'HTTP_ENV_VARS' );
              foreach( $supers as $__s) {
                  if ( (isset($$__s) == true) && (is_array( $$__s ) == true) )
                      extract( $$__s, EXTR_OVERWRITE );
              }
              unset($supers);
          }
      }

      /* FROM HERE ON YOU CAN EDIT THE FILE */
      /* complains if the email field of the form has not been filled in */

      /* the response page for a successful send is specified here */
      $respuesta = "index.html#respuesta"; // the response can be another file, or even live on another server

      /* HERE YOU SPECIFY THE ADDRESS THE FORM DATA SHOULD BE SENT TO; TO SEND THE DATA TO MORE THAN ONE ADDRESS, SEPARATE THEM WITH COMMAS */
      $para = "[email protected]"; /* this is not my real email, of course */

      /* HERE YOU SPECIFY THE SUBJECT (Asunto) OF THE EMAIL */
      $sujeto = "Mail de la página - Movil";

      /* the mail header is built here; in future versions of the script I will explain this part better */
      $encabezado = "From: $first_name $last_name <$email>";
      $encabezado .= "\nReply-To: $email";
      $encabezado .= "\nX-Mailer: PHP/" . phpversion();

      /* this captures the IP of whoever sent the message */
      $ip = $REMOTE_ADDR;

      /* the following lines assemble the message */
      $mensaje .= "NOMBRE: $first_name\n";
      $mensaje .= "APELLIDO: $last_name\n";
      $mensaje .= "EMAIL: $email\n";
      $mensaje .= "TELEFONO: $telephone\n";
      $mensaje .= "EDAD: $age\n";
      $mensaje .= "PAIS: $country\n";
      $mensaje .= "MENSAJE: $message\n";

      /* here we try to send the mail; if it fails, an error message is shown */
      if(!mail($para, $sujeto, $mensaje, $encabezado)) {
          echo "<h1>No se pudo enviar el Mensaje</h1>";
          exit();
      } else {
          /* here we redirect to the response page */
          echo "<meta HTTP-EQUIV='refresh' content='1;url=$respuesta'>";
      }
      ?>

    Any help please? Thanks in advance, Pablo
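    A quick way to see which fields the browser actually posts to correo.php is to log the serialized form data before submission. The jQuery snippet below is only an illustrative debugging aid, not part of the original post; note that .serialize(), like a normal form submission, only includes controls that carry a name attribute, so whatever it prints is exactly what ends up in $_POST on the server.

      // Debugging aid: log what this form will actually send to correo.php.
      // jQuery's .serialize() (like a plain form submit) only picks up
      // controls that have a name attribute.
      $(document).on("submit", "form[action='correo.php']", function (event) {
          var data = $(this).serialize();
          console.log("Data to be posted:", data || "(nothing - no named fields)");
          // Uncomment the next line to stop the submit while inspecting the console.
          // event.preventDefault();
      });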

    Read the article

  • Who IS Brian Solis?

    - by Michael Snow
    Q: Brian, Welcome to the WebCenter Blog. Can you tell our readers your current role and what career path brought you here? A: I’m proudly serving as a principal analyst at Altimeter Group, a research based advisory firm in Silicon Valley. My career path, well, let’s just say it’s a long and winding road. As a kid, I was fascinated with technology. I learned programming at an early age and found myself naturally drawn to all things tech. I started my career as a database programmer at a technology marketing agency in Southern California. When I saw the chance to work with tech companies and help them better market their capabilities to businesses and consumers, I switched focus from programming to marketing and advertising. As technologist, my approach to marketing was different. I didn’t believe in hype, fluff or buzz words. I believed in translating features into benefits and specifications and capabilities into solutions for real world problems and opportunities. In the mid 90’s I experimented with direct to consumer/customer engagement in dedicated technology forums and boards. I quickly realized that the entire approach to do so would need to change. Therefore, I learned and developed new methods for a more social and informed way of engaging people in ways that helped them, marketed the company, and also tied to tangible benefits for the company. This work would lead me to start an agency in 1999 dedicated to interactive marketing. As I continued to experiment with interactive platforms, I developed interesting methods for converting one-to-many forms of media into one-to-one-to-many programs. I ran that company until joining Altimeter Group. Along the way, in the early 2000s, I realized that everything was changing and that there were others like me finding success in what would become a more social form of media. I dedicated a significant amount of my time to sharing everything that I learned in the form of articles, blogs, and eventually books. My mission became to share my experience with anyone who’d listen. It would later become much bigger than marketing, this would lead to a decade of work, that still continues, in business transformation. Then and now, I find myself always assuming the role of a student. Q: As an industry analyst & technology change evangelist, what are you primarily focused on these days? A: As a digital analyst, I study how disruptive technology impacts business. As an aspiring social scientist, I study how technology affects human behavior. I explore both horizons professionally and personally to better understand the future of popular culture and also the opportunities that exist for organizations to improve relationships and experiences with customers and the people that are important to them. Q: People cite that the line between work and life is getting more and more blurred. Do you see your personal life influencing your professional work? A: The line between work and life isn’t blurred it’s been overtly crossed and erased. We live in an always on society. The digital lifestyle keeps us connected to one another it keeps us connected all the time. Whether your sending or checking email, trying to catch up, or simply trying to get ahead, people are spending the equivalent of an extra day at work in the time they spend out of work…working. That’s absurd. It’s a matter of survival. It’s also a matter of unintended, subconscious self-causation. We brought this on ourselves and continue to do so. Think about your day. 
You’re in meetings for the better part of each day. You probably spend evenings and weekends catching up on email and actually doing the work you couldn’t get to during the day. And, your co-workers and executives are doing the same thing. So if you try to slow down, you find yourself at a disadvantage as you’re willfully pulling yourself out of an unfortunate culture of whenever wherever business dynamics. If you’re unresponsive or unreachable, someone within your organization or on your team is accessible. Over time, this could contribute to unfavorable impressions. I choose to steer my life balance in ways that complement one another. But, I don’t pretend to have this figured out by any means. In fact, I find myself swimming upstream like those around me. It’s essentially a competition for relevance and at some point I’ll learn how to earn attention and relevance while redrawing the line between work and life. Q: How can people keep up with what you’re working on? A: The easy answer is that people can keep up with me at briansolis.com. But, I also try to reach people where their attention is focused. Whether it’s Facebook (facebook.com/briansolis), Twitter (@briansolis), Google+ (+briansolis), Youtube (briansolis.tv) or through books and conferences, people can usually find me in a place of their choosing. Q: Recently, you’ve been working with us here at Oracle on something exciting coming up later this week. What’s on the horizon? A: I spent some time with the Oracle team reviewing the idea of Digital Darwinism and how technology and society are evolving faster than many organizations can adapt. Digital Darwinism: How Brands Can Survive the Rapid Evolution of Society and Technology Thursday, December 13, 2012, 10 a.m. PT / 1 p.m. ET Q: You’ve been very actively pursued for media interviews and conference and company speaking engagements – anything you’d like to share to give us a sneak peak of what to expect on Thursday’s webcast? A: We’re inviting guests to join us online as we dive into the future of business and how the convergence of technology and connected consumerism would ultimately impact how business is done. It’ll be an exciting and revealing conversation that explores just how much everything is changing. We’ll also review the importance of adapting to emergent trends and how to compete for the future. It’s important to recognize that change is not happening to us, it’s happening because of us. We are part of the revolution and therefore we need to help organizations adapt from the inside out. Watch the Entire Oracle Social Business Thought Leaders Webcast Series On-Demand and Stay Tuned for More to Come in 2013!

    Read the article

  • Visiting the Fire Station in Coromandel

    Hm, I just tried to remember how we actually came up with this cool idea... but it's already too blurred and it doesn't really matter after all. Anyway, if I remember correctly (IIRC), it happened during one of the Linux meetups at Mugg & Bean, Bagatelle where Ajay and I brought our children along and we had a brief conversation about how cool it would be to check out one of the fire stations here in Mauritius. We both thought that it would be a great experience and adventure for the little ones. An idea takes shape And there we go, down the usual routine these... having an idea, checking out the options and discussing who's doing what. Except this time, it was all up to Ajay, and he did a fantastic job. End of August, he told me that he got in touch with one of his friends which actually works as a fire fighter at the station in Coromandel and that there could be an option to come and visit them (soon). A couple of days later - Confirmed! Be there, and in time... What time? Anyway, doesn't really matter... Everything was settled and arranged. I asked the kids on Friday afternoon if they might be interested to see the fire engines and what a fire fighter is doing. Of course, they were all in! Getting up early on Sunday morning isn't really a regular exercise for all of us but everything went smooth and after a short breakfast it was time to leave. Where are we going? Are we there yet? Now, we are in Bambous. Why do you go this way? The kids were so much into it. Absolutely amazing to see their excitement. Are we there yet? Well, we went through the sugar cane fields towards Chebel and then down into the industrial zone at Coromandel. Honestly, I had a clue where the fire station is located but having Google Maps in reach that shouldn't be a problem in case that we might get lost. But my worries were washed away when our children guided us... "There! Over there are the fire engines! We have to turn left, dad." - No comment, the kids were right! As we were there a little bit too early, we parked the car and the kids started to explore the area and outskirts of the fire station. Some minutes later, as if we had placed an order a unit of two cars had to go out for an alarm and the kids could witness them leaving as closely as possible. Sirens on and wow!!! Ladder truck L32 - MAN truck with Rosenbauer built-up and equipment by Metz Taking the tour Ajay arrived shortly after that and guided us finally inside the station to meet with his pal. The three guys were absolutely well-prepared and showed us around in the hall, explaining that there two units out at the moment. But the ladder truck (with max. 32m expandable height) was still around we all got a great insight into the technique and equipment on the vehicle. It was amazing to see all three kids listening to Mambo as give some figures about the truck and how the fire fighters are actually it. The children and 'our' fire fighters of the day had great fun with the various fire engines Absolutely fantastic that the children were allowed to experience this - we had so much fun! Ajay's son brought two of his toy fire engines along, shared them with ours, and they all played very well together. As a parent it was really amazing to see them at such an ease. Enough theory Shortly afterwards the ladder truck was moved outside, got stabilised and ready to go for 'real-life' exercising. With the additional equipment of safety helmets, security belts and so on, we all got a first-hand impression about how it could be as a fire-fighter. 
Actually, I was totally amazed by the curiousity and excitement of my BWE. She was really into it and asked lots of interesting questions - in general but also technical. And while our fighters were busy with Ajay and family, I gave her some more details and explanations about the truck, the expandable ladder, the safety cage at the top and other equipment available. Safety first! No exceptions and always be prepared for the worst case... Also, the equipped has been checked prior to excuse - This is your life saver... Hooked up and ready to go... ...of course not too high. This is just a demonstration - and 32 meters above ground isn't for everyone. Well, after that it was me that had the asking looks on me, and I finally revealed to the local fire fighters that I was in the auxiliary fire brigade, more precisely in the hazard department, for more than 10 years. So not a professional fire fighter but at least a passionate and educated one as them. Inside the station Our fire fighters really took their time to explain their daily job to kids, provided them access to operation seat on the ladder truck and how the truck cabin is actually equipped with the different radios and so on. It was really a great time. Later on we had a brief tour through the building itself, and again all of our questions were answered. We had great fun and started to joke about bits and pieces. For me it was also very interesting to see the comparison between the fire station here in Mauritius and the ones I have been to back in Germany. Amazing to see them completely captivated in the play - the children had lots of fun! Also, that there are currently ten fire stations all over the island, plus two additional but private ones at the airport and at the harbour. The newest one is actually down in Black River on the west coast because the time from Quatre Bornes takes too long to have any chance of an effective alarm at all. IMHO, a very good decision as time is the most important factor in getting fire incidents under control. After all it was great experience for all of us, especially for the children to see and understand that their toy trucks are only copies of the real thing and that the job of a (professional) fire fighter is very important in our society. Don't forget that those guys run into the danger zone while you're trying to get away from it as much as possible. Another unit just came back from a grass fire - and shortly after they went out again. No time to rest, too much to do! Mauritian Fire Fighters now and (maybe) in the future... Thank you! It was an honour to be around! Thank you to Ajay for organising and arranging this Sunday morning event, and of course of Big Thank You to the three guys that took some time off to have us at the Fire Station in Coromandel and guide us through their daily job! And remember to call 115 in case of emergencies!

    Read the article

  • Oracle Enterprise Manager 12c Ops Center Jump-Start for Partners

    - by Get_Specialized!
    Following the announcement at Oracle OpenWorld Tokyo, partners can check out these resources to learn more about Oracle Enterprise Manager 12c Ops Center and then use it to optimize their solutions/services or offer new ones:

      Product Documentation
      Oracle Technology Network resources
      Online Learning Series for Partners in the OPN Enterprise Manager KnowledgeZone
      Whitepaper: Making Infrastructure-as-a-Service in the Enterprise a Reality
      IDC report: Oracle Enterprise Manager 12c Embraces the Cloud with Integrated Lifecycle Management
      Follow-up webcast, April 12th: Total Cloud Control for Systems

    Oracle Enterprise Manager Ops Center 12c is available at no extra charge and is included in the support contract of Oracle Systems customers. To learn more, see the Ops Center Everywhere Program. And if you're not already a member, be sure to join the Oracle Enterprise Manager KnowledgeZone on the Oracle PartnerNetwork Portal.

    Read the article

  • Node.js Adventure - When Node Flying in Wind

    - by Shaun
    In the first post of this series I mentioned some popular modules in the community, such as underscore, async, etc.. I also listed a module named “Wind (zh-CN)”, which is created by one of my friend, Jeff Zhao (zh-CN). Now I would like to use a separated post to introduce this module since I feel it brings a new async programming style in not only Node.js but JavaScript world. If you know or heard about the new feature in C# 5.0 called “async and await”, or you learnt F#, you will find the “Wind” brings the similar async programming experience in JavaScript. By using “Wind”, we can write async code that looks like the sync code. The callbacks, async stats and exceptions will be handled by “Wind” automatically and transparently.   What’s the Problem: Dense “Callback” Phobia Let’s firstly back to my second post in this series. As I mentioned in that post, when we wanted to read some records from SQL Server we need to open the database connection, and then execute the query. In Node.js all IO operation are designed as async callback pattern which means when the operation was done, it will invoke a function which was taken from the last parameter. For example the database connection opening code would be like this. 1: sql.open(connectionString, function(error, conn) { 2: if(error) { 3: // some error handling code 4: } 5: else { 6: // connection opened successfully 7: } 8: }); And then if we need to query the database the code would be like this. It nested in the previous function. 1: sql.open(connectionString, function(error, conn) { 2: if(error) { 3: // some error handling code 4: } 5: else { 6: // connection opened successfully 7: conn.queryRaw(command, function(error, results) { 8: if(error) { 9: // failed to execute this command 10: } 11: else { 12: // records retrieved successfully 13: } 14: }; 15: } 16: }); Assuming if we need to copy some data from this database to another then we need to open another connection and execute the command within the function under the query function. 1: sql.open(connectionString, function(error, conn) { 2: if(error) { 3: // some error handling code 4: } 5: else { 6: // connection opened successfully 7: conn.queryRaw(command, function(error, results) { 8: if(error) { 9: // failed to execute this command 10: } 11: else { 12: // records retrieved successfully 13: target.open(targetConnectionString, function(error, t_conn) { 14: if(error) { 15: // connect failed 16: } 17: else { 18: t_conn.queryRaw(copy_command, function(error, results) { 19: if(error) { 20: // copy failed 21: } 22: else { 23: // and then, what do you want to do now... 24: } 25: }; 26: } 27: }; 28: } 29: }; 30: } 31: }); This is just an example. In the real project the logic would be more complicated. This means our application might be messed up and the business process will be fragged by many callback functions. I would like call this “Dense Callback Phobia”. This might be a challenge how to make code straightforward and easy to read, something like below. 
1: try 2: { 3: // open source connection 4: var s_conn = sqlConnect(s_connectionString); 5: // retrieve data 6: var results = sqlExecuteCommand(s_conn, s_command); 7: 8: // open target connection 9: var t_conn = sqlConnect(t_connectionString); 10: // prepare the copy command 11: var t_command = getCopyCommand(results); 12: // execute the copy command 13: sqlExecuteCommand(s_conn, t_command); 14: } 15: catch (ex) 16: { 17: // error handling 18: }   What’s the Problem: Sync-styled Async Programming Similar as the previous problem, the callback-styled async programming model makes the upcoming operation as a part of the current operation, and mixed with the error handling code. So it’s very hard to understand what on earth this code will do. And since Node.js utilizes non-blocking IO mode, we cannot invoke those operations one by one, as they will be executed concurrently. For example, in this post when I tried to copy the records from Windows Azure SQL Database (a.k.a. WASD) to Windows Azure Table Storage, if I just insert the data into table storage one by one and then print the “Finished” message, I will see the message shown before the data had been copied. This is because all operations were executed at the same time. In order to make the copy operation and print operation executed synchronously I introduced a module named “async” and the code was changed as below. 1: async.forEach(results.rows, 2: function (row, callback) { 3: var resource = { 4: "PartitionKey": row[1], 5: "RowKey": row[0], 6: "Value": row[2] 7: }; 8: client.insertEntity(tableName, resource, function (error) { 9: if (error) { 10: callback(error); 11: } 12: else { 13: console.log("entity inserted."); 14: callback(null); 15: } 16: }); 17: }, 18: function (error) { 19: if (error) { 20: error["target"] = "insertEntity"; 21: res.send(500, error); 22: } 23: else { 24: console.log("all done."); 25: res.send(200, "Done!"); 26: } 27: }); It ensured that the “Finished” message will be printed when all table entities had been inserted. But it cannot promise that the records will be inserted in sequence. It might be another challenge to make the code looks like in sync-style? 1: try 2: { 3: forEach(row in rows) { 4: var entity = { /* ... */ }; 5: tableClient.insert(tableName, entity); 6: } 7:  8: console.log("Finished"); 9: } 10: catch (ex) { 11: console.log(ex); 12: }   How “Wind” Helps “Wind” is a JavaScript library which provides the control flow with plain JavaScript for asynchronous programming (and more) without additional pre-compiling steps. It’s available in NPM so that we can install it through “npm install wind”. Now let’s create a very simple Node.js application as the example. This application will take some website URLs from the command arguments and tried to retrieve the body length and print them in console. Then at the end print “Finish”. I’m going to use “request” module to make the HTTP call simple so I also need to install by the command “npm install request”. The code would be like this. 
1: var request = require("request"); 2:  3: // get the urls from arguments, the first two arguments are `node.exe` and `fetch.js` 4: var args = process.argv.splice(2); 5:  6: // main function 7: var main = function() { 8: for(var i = 0; i < args.length; i++) { 9: // get the url 10: var url = args[i]; 11: // send the http request and try to get the response and body 12: request(url, function(error, response, body) { 13: if(!error && response.statusCode == 200) { 14: // log the url and the body length 15: console.log( 16: "%s: %d.", 17: response.request.uri.href, 18: body.length); 19: } 20: else { 21: // log error 22: console.log(error); 23: } 24: }); 25: } 26: 27: // finished 28: console.log("Finished"); 29: }; 30:  31: // execute the main function 32: main(); Let’s execute this application. (I made them in multi-lines for better reading.) 1: node fetch.js 2: "http://www.igt.com/us-en.aspx" 3: "http://www.igt.com/us-en/games.aspx" 4: "http://www.igt.com/us-en/cabinets.aspx" 5: "http://www.igt.com/us-en/systems.aspx" 6: "http://www.igt.com/us-en/interactive.aspx" 7: "http://www.igt.com/us-en/social-gaming.aspx" 8: "http://www.igt.com/support.aspx" Below is the output. As you can see the finish message was printed at the beginning, and the pages’ length retrieved in a different order than we specified. This is because in this code the request command, console logging command are executed asynchronously and concurrently. Now let’s introduce “Wind” to make them executed in order, which means it will request the websites one by one, and print the message at the end.   First of all we need to import the “Wind” package and make sure the there’s only one global variant named “Wind”, and ensure it’s “Wind” instead of “wind”. 1: var Wind = require("wind");   Next, we need to tell “Wind” which code will be executed asynchronously so that “Wind” can control the execution process. In this case the “request” operation executed asynchronously so we will create a “Task” by using a build-in helps function in “Wind” named Wind.Async.Task.create. 1: var requestBodyLengthAsync = function(url) { 2: return Wind.Async.Task.create(function(t) { 3: request(url, function(error, response, body) { 4: if(error || response.statusCode != 200) { 5: t.complete("failure", error); 6: } 7: else { 8: var data = 9: { 10: uri: response.request.uri.href, 11: length: body.length 12: }; 13: t.complete("success", data); 14: } 15: }); 16: }); 17: }; The code above created a “Task” from the original request calling code. In “Wind” a “Task” means an operation will be finished in some time in the future. A “Task” can be started by invoke its start() method, but no one knows when it actually will be finished. The Wind.Async.Task.create helped us to create a task. The only parameter is a function where we can put the actual operation in, and then notify the task object it’s finished successfully or failed by using the complete() method. In the code above I invoked the request method. If it retrieved the response successfully I set the status of this task as “success” with the URL and body length. If it failed I set this task as “failure” and pass the error out.   Next, we will change the main() function. In “Wind” if we want a function can be controlled by Wind we need to mark it as “async”. This should be done by using the code below. 
1: var main = eval(Wind.compile("async", function() { 2: })); When the application is running, Wind will detect “eval(Wind.compile(“async”, function” and generate an anonymous code from the body of this original function. Then the application will run the anonymous code instead of the original one. In our example the main function will be like this. 1: var main = eval(Wind.compile("async", function() { 2: for(var i = 0; i < args.length; i++) { 3: try 4: { 5: var result = $await(requestBodyLengthAsync(args[i])); 6: console.log( 7: "%s: %d.", 8: result.uri, 9: result.length); 10: } 11: catch (ex) { 12: console.log(ex); 13: } 14: } 15: 16: console.log("Finished"); 17: })); As you can see, when I tried to request the URL I use a new command named “$await”. It tells Wind, the operation next to $await will be executed asynchronously, and the main thread should be paused until it finished (or failed). So in this case, my application will be pause when the first response was received, and then print its body length, then try the next one. At the end, print the finish message.   Finally, execute the main function. The full code would be like this. 1: var request = require("request"); 2: var Wind = require("wind"); 3:  4: var args = process.argv.splice(2); 5:  6: var requestBodyLengthAsync = function(url) { 7: return Wind.Async.Task.create(function(t) { 8: request(url, function(error, response, body) { 9: if(error || response.statusCode != 200) { 10: t.complete("failure", error); 11: } 12: else { 13: var data = 14: { 15: uri: response.request.uri.href, 16: length: body.length 17: }; 18: t.complete("success", data); 19: } 20: }); 21: }); 22: }; 23:  24: var main = eval(Wind.compile("async", function() { 25: for(var i = 0; i < args.length; i++) { 26: try 27: { 28: var result = $await(requestBodyLengthAsync(args[i])); 29: console.log( 30: "%s: %d.", 31: result.uri, 32: result.length); 33: } 34: catch (ex) { 35: console.log(ex); 36: } 37: } 38: 39: console.log("Finished"); 40: })); 41:  42: main().start();   Run our new application. At the beginning we will see the compiled and generated code by Wind. Then we can see the pages were requested one by one, and at the end the finish message was printed. Below is the code Wind generated for us. As you can see the original code, the output code were shown. 
1: // Original: 2: function () { 3: for(var i = 0; i < args.length; i++) { 4: try 5: { 6: var result = $await(requestBodyLengthAsync(args[i])); 7: console.log( 8: "%s: %d.", 9: result.uri, 10: result.length); 11: } 12: catch (ex) { 13: console.log(ex); 14: } 15: } 16: 17: console.log("Finished"); 18: } 19:  20: // Compiled: 21: /* async << function () { */ (function () { 22: var _builder_$0 = Wind.builders["async"]; 23: return _builder_$0.Start(this, 24: _builder_$0.Combine( 25: _builder_$0.Delay(function () { 26: /* var i = 0; */ var i = 0; 27: /* for ( */ return _builder_$0.For(function () { 28: /* ; i < args.length */ return i < args.length; 29: }, function () { 30: /* ; i ++) { */ i ++; 31: }, 32: /* try { */ _builder_$0.Try( 33: _builder_$0.Delay(function () { 34: /* var result = $await(requestBodyLengthAsync(args[i])); */ return _builder_$0.Bind(requestBodyLengthAsync(args[i]), function (result) { 35: /* console.log("%s: %d.", result.uri, result.length); */ console.log("%s: %d.", result.uri, result.length); 36: return _builder_$0.Normal(); 37: }); 38: }), 39: /* } catch (ex) { */ function (ex) { 40: /* console.log(ex); */ console.log(ex); 41: return _builder_$0.Normal(); 42: /* } */ }, 43: null 44: ) 45: /* } */ ); 46: }), 47: _builder_$0.Delay(function () { 48: /* console.log("Finished"); */ console.log("Finished"); 49: return _builder_$0.Normal(); 50: }) 51: ) 52: ); 53: /* } */ })   How Wind Works Someone may raise a big concern when you find I utilized “eval” in my code. Someone may assume that Wind utilizes “eval” to execute some code dynamically while “eval” is very low performance. But I would say, Wind does NOT use “eval” to run the code. It only use “eval” as a flag to know which code should be compiled at runtime. When the code was firstly been executed, Wind will check and find “eval(Wind.compile(“async”, function”. So that it knows this function should be compiled. Then it utilized parse-js to analyze the inner JavaScript and generated the anonymous code in memory. Then it rewrite the original code so that when the application was running it will use the anonymous one instead of the original one. Since the code generation was done at the beginning of the application was started, in the future no matter how long our application runs and how many times the async function was invoked, it will use the generated code, no need to generate again. So there’s no significant performance hurt when using Wind.   Wind in My Previous Demo Let’s adopt Wind into one of my previous demonstration and to see how it helps us to make our code simple, straightforward and easy to read and understand. In this post when I implemented the functionality that copied the records from my WASD to table storage, the logic would be like this. 1, Open database connection. 2, Execute a query to select all records from the table. 3, Recreate the table in Windows Azure table storage. 4, Create entities from each of the records retrieved previously, and then insert them into table storage. 5, Finally, show message as the HTTP response. But as the image below, since there are so many callbacks and async operations, it’s very hard to understand my logic from the code. Now let’s use Wind to rewrite our code. First of all, of course, we need the Wind package. Then we need to include the package files into project and mark them as “Copy always”. Add the Wind package into the source code. Pay attention to the variant name, you must use “Wind” instead of “wind”. 
1: var express = require("express"); 2: var async = require("async"); 3: var sql = require("node-sqlserver"); 4: var azure = require("azure"); 5: var Wind = require("wind"); Now we need to create some async functions by using Wind. All async functions should be wrapped so that it can be controlled by Wind which are open database, retrieve records, recreate table (delete and create) and insert entity in table. Below are these new functions. All of them are created by using Wind.Async.Task.create. 1: sql.openAsync = function (connectionString) { 2: return Wind.Async.Task.create(function (t) { 3: sql.open(connectionString, function (error, conn) { 4: if (error) { 5: t.complete("failure", error); 6: } 7: else { 8: t.complete("success", conn); 9: } 10: }); 11: }); 12: }; 13:  14: sql.queryAsync = function (conn, query) { 15: return Wind.Async.Task.create(function (t) { 16: conn.queryRaw(query, function (error, results) { 17: if (error) { 18: t.complete("failure", error); 19: } 20: else { 21: t.complete("success", results); 22: } 23: }); 24: }); 25: }; 26:  27: azure.recreateTableAsync = function (tableName) { 28: return Wind.Async.Task.create(function (t) { 29: client.deleteTable(tableName, function (error, successful, response) { 30: console.log("delete table finished"); 31: client.createTableIfNotExists(tableName, function (error, successful, response) { 32: console.log("create table finished"); 33: if (error) { 34: t.complete("failure", error); 35: } 36: else { 37: t.complete("success", null); 38: } 39: }); 40: }); 41: }); 42: }; 43:  44: azure.insertEntityAsync = function (tableName, entity) { 45: return Wind.Async.Task.create(function (t) { 46: client.insertEntity(tableName, entity, function (error, entity, response) { 47: if (error) { 48: t.complete("failure", error); 49: } 50: else { 51: t.complete("success", null); 52: } 53: }); 54: }); 55: }; Then in order to use these functions we will create a new function which contains all steps for data copying. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: } 4: catch (ex) { 5: console.log(ex); 6: res.send(500, "Internal error."); 7: } 8: })); Let’s execute steps one by one with the “$await” keyword introduced by Wind so that it will be invoked in sequence. First is to open the database connection. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: } 7: catch (ex) { 8: console.log(ex); 9: res.send(500, "Internal error."); 10: } 11: })); Then retrieve all records from the database connection. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: } 10: catch (ex) { 11: console.log(ex); 12: res.send(500, "Internal error."); 13: } 14: })); After recreated the table, we need to create the entities and insert them into table storage. 
var copyRecords = eval(Wind.compile("async", function (req, res) {
    try {
        // connect to the windows azure sql database
        var conn = $await(sql.openAsync(connectionString));
        console.log("connection opened");
        // retrieve all records from database
        var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]"));
        console.log("records selected. count = %d", results.rows.length);
        if (results.rows.length > 0) {
            // recreate the table
            $await(azure.recreateTableAsync(tableName));
            console.log("table created");
            // insert records in table storage one by one
            for (var i = 0; i < results.rows.length; i++) {
                var entity = {
                    "PartitionKey": results.rows[i][1],
                    "RowKey": results.rows[i][0],
                    "Value": results.rows[i][2]
                };
                $await(azure.insertEntityAsync(tableName, entity));
                console.log("entity inserted");
            }
        }
    }
    catch (ex) {
        console.log(ex);
        res.send(500, "Internal error.");
    }
}));

Finally, send the response back to the browser.

var copyRecords = eval(Wind.compile("async", function (req, res) {
    try {
        // connect to the windows azure sql database
        var conn = $await(sql.openAsync(connectionString));
        console.log("connection opened");
        // retrieve all records from database
        var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]"));
        console.log("records selected. count = %d", results.rows.length);
        if (results.rows.length > 0) {
            // recreate the table
            $await(azure.recreateTableAsync(tableName));
            console.log("table created");
            // insert records in table storage one by one
            for (var i = 0; i < results.rows.length; i++) {
                var entity = {
                    "PartitionKey": results.rows[i][1],
                    "RowKey": results.rows[i][0],
                    "Value": results.rows[i][2]
                };
                $await(azure.insertEntityAsync(tableName, entity));
                console.log("entity inserted");
            }
            // send response
            console.log("all done");
            res.send(200, "All done!");
        }
    }
    catch (ex) {
        console.log(ex);
        res.send(500, "Internal error.");
    }
}));

If we compare this with the previous code, we will find that it has become much more readable and much easier to understand. It is easy to see what this function does, even without any comments. When a user goes to the URL "/was/copyRecords" we execute the function above. The code would be like this.

app.get("/was/copyRecords", function (req, res) {
    copyRecords(req, res).start();
});

And below are the logs printed in the local compute emulator console. As we can see, the functions executed one by one and finally the response came back to my browser.

Scaffold Functions in Wind

Wind provides not only the async flow control and compile functions, but many scaffold methods as well, which let us build our async code even more easily. I'm going to introduce some basic scaffold functions here. In the code above I created some functions that wrap the original async functions, such as open database, create table, etc. All of them are very similar: they create a task with Wind.Async.Task.create and return the error or result object through the task's complete function. In fact, Wind provides some functions that create the task object from the original async function for us. If the original async function only takes a callback parameter, we can use the Wind.Async.Binding.fromCallback method to get the task object directly. For example, the code below returns a task-returning function that wraps the file existence check.
var Wind = require("wind");
var fs = require("fs");

fs.existsAsync = Wind.Async.Binding.fromCallback(fs.exists);

In Node.js a very popular async pattern is that the first parameter of the callback represents the error object and the other parameters are the return values. In this case we can use another built-in function in Wind named Wind.Async.Binding.fromStandard. For example, the open database function can be created by the code below.

sql.openAsync = Wind.Async.Binding.fromStandard(sql.open);

/*
sql.openAsync = function (connectionString) {
    return Wind.Async.Task.create(function (t) {
        sql.open(connectionString, function (error, conn) {
            if (error) {
                t.complete("failure", error);
            }
            else {
                t.complete("success", conn);
            }
        });
    });
};
*/

When I was testing the scaffold functions under Wind.Async.Binding I found that some functions, such as the Azure SDK insert entity function, cannot be wrapped correctly, so I personally suggest writing the wrapped methods manually.

Another scaffold feature in Wind is parallel task coordination. In this example the steps of opening the database, retrieving the records and recreating the table must be invoked one by one, but the copying of data from the database to table storage can be executed in parallel. Wind has a scaffold function named Task.whenAll that can be used here. Task.whenAll accepts a list of tasks and creates a new task which completes only when all of the inner tasks have completed, or when any error occurs. For example, in the code below I used Task.whenAll to make all of the copy operations execute at the same time.

var copyRecordsInParallel = eval(Wind.compile("async", function (req, res) {
    try {
        // connect to the windows azure sql database
        var conn = $await(sql.openAsync(connectionString));
        console.log("connection opened");
        // retrieve all records from database
        var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]"));
        console.log("records selected. count = %d", results.rows.length);
        if (results.rows.length > 0) {
            // recreate the table
            $await(azure.recreateTableAsync(tableName));
            console.log("table created");
            // insert records in table storage in parallel
            var tasks = new Array(results.rows.length);
            for (var i = 0; i < results.rows.length; i++) {
                var entity = {
                    "PartitionKey": results.rows[i][1],
                    "RowKey": results.rows[i][0],
                    "Value": results.rows[i][2]
                };
                tasks[i] = azure.insertEntityAsync(tableName, entity);
            }
            $await(Wind.Async.Task.whenAll(tasks));
            // send response
            console.log("all done");
            res.send(200, "All done!");
        }
    }
    catch (ex) {
        console.log(ex);
        res.send(500, "Internal error.");
    }
}));

app.get("/was/copyRecordsInParallel", function (req, res) {
    copyRecordsInParallel(req, res).start();
});

Besides task creation and coordination, Wind supports cancellation, so that we can send a cancellation signal to running tasks, and it handles exceptions as well: any exception raised inside a task is reported back to the calling function.
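For readers who are more familiar with Python than with JavaScript, the model Wind provides maps quite closely onto Python's own coroutines. The sketch below is only an analogy and does not use Wind at all (the function names in it are invented for illustration), but it shows the same three ideas described above: awaiting wrapped async operations one at a time, running a batch of them in parallel with a whenAll-style combinator (asyncio.gather), and letting failures surface as ordinary exceptions in the caller.

    import asyncio

    # Analogy only: "await" plays the role of Wind's $await, and asyncio.gather
    # plays the role of Wind.Async.Task.whenAll. All names here are made up.
    async def insert_entity_async(table_name, entity):
        await asyncio.sleep(0.01)                    # stand-in for a real storage call
        print("entity inserted:", entity["RowKey"])

    async def copy_records(rows):
        try:
            # sequential, like $await-ing one task after another
            await insert_entity_async("resource", rows[0])
            # parallel, like $await(Wind.Async.Task.whenAll(tasks))
            await asyncio.gather(*(insert_entity_async("resource", r) for r in rows[1:]))
            print("all done")
        except Exception as ex:                      # failures propagate to the caller
            print("copy failed:", ex)

    asyncio.run(copy_records([{"RowKey": str(i)} for i in range(3)]))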
Summary

In this post I introduced a Node.js module named Wind, created by my friend Jeff Zhao. As you can see, unlike other async libraries and frameworks, Wind adopts ideas from F# and C# and uses runtime code generation to make it much easier to write async, callback-based code in a synchronous style. By using Wind there are almost no callbacks, and the code becomes very easy to understand. Wind is still under active development and improvement; there might be some problems, but the author, Jeff, will be very happy and enthusiastic to hear your problems, feedback, suggestions and comments. You can contact Jeff by - Email: [email protected] - Group: https://groups.google.com/d/forum/windjs - GitHub: https://github.com/JeffreyZhao/wind/issues

Source code can be downloaded here.

Hope this helps,
Shaun

All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • The banking sector, a technology transformation challenge

    - by Fabian Gradolph
    The financial sector is at a key moment, not only because of the current economic situation, but also because of structural and regulatory issues that force banks (usually at the forefront of technological innovation, by the way) to keep taking steps towards the future, keeping technology at the heart of their business strategy. This became clear at the event held today in Madrid, Oracle in Banking, where Oracle experts, customers of the company and analysts put on the table some of the challenges the sector faces and ideas for making the most of technology to resolve them. The event was a complete success, with massive attendance by customers and partners. In the image that illustrates this article you can see, in this order: a panoramic view of the room; Modesto Villajos, Regional Sales Manager at Oracle, who acted as master of ceremonies; Leopoldo Boado, Country Manager of Oracle Spain, who gave the introduction; Alex Kwiatkowski, from IDC, who presented the main challenges facing banking; and Máximo Díez, Senior Director Financial Services at Oracle, who laid out the different transformation strategies banks can undertake. The event was rounded out with talks by Oracle customers (Banco Espírito Santo (BES) of Portugal and BBVA of Spain) and with technical presentations and demonstrations.
    Of particular interest was Alex Kwiatkowski's talk. In his view there are four essential areas the sector has to face. The first is the regulatory framework: the financial sector is under constant regulatory pressure (probably heightened in these uncertain times), not only at the national level but also at the European and global levels, and scrupulous compliance with all of these rules is essential for the system to work well. The second critical area is the need to offer a satisfying multichannel user experience in order to strengthen customer retention. It is sometimes hard to realise it, but nowadays our interactions with the bank span a wide variety of channels (branch, ATM, Internet, telephone banking, mobile banking...), which poses a permanent technology and process challenge for financial institutions. The third critical element is increasing operational efficiency while keeping costs under control or reducing them even further. Finally, banks face the challenge of finding new sources of revenue, so that the focus is no longer solely on cost reduction and risk minimisation. The truth is that today most of the attention is on these last two points, but as Alex Kwiatkowski put it, "banks' CIOs are going to arrive at the CEO's table with the need to carry out complete renewals of their core banking systems and the need to invest in the development of new channels". Máximo Díez also emphasised this need in his presentation: banks must find new formulas to drive growth, but implementing strategies along these lines presents serious challenges because of the limitations of existing IT systems. There is no doubt that a very interesting technological future lies ahead for the financial sector.
    What Oracle can do for and offers to financial institutions can be found at this link: Financial Services.

    Read the article

  • Why is this giving me 2 different sets of timezones?

    - by chobo2
    Hi I have this line to get all the timezones Dictionary<string, TimeZoneInfo> storeZoneName = TimeZoneInfo.GetSystemTimeZones().ToDictionary(z => z.DisplayName); Now when I upload I try it on my local machine I get this (UTC-12:00) International Date Line West (UTC-11:00) Coordinated Universal Time-11 (UTC-11:00) Samoa (UTC-10:00) Hawaii (UTC-09:00) Alaska (UTC-08:00) Baja California (UTC-08:00) Pacific Time (US & Canada) (UTC-07:00) Arizona (UTC-07:00) Chihuahua, La Paz, Mazatlan (UTC-07:00) Mountain Time (US & Canada) (UTC-06:00) Central America (UTC-06:00) Central Time (US & Canada) (UTC-06:00) Guadalajara, Mexico City, Monterrey (UTC-06:00) Saskatchewan (UTC-05:00) Bogota, Lima, Quito (UTC-05:00) Eastern Time (US & Canada) (UTC-05:00) Indiana (East) (UTC-04:30) Caracas (UTC-04:00) Asuncion (UTC-04:00) Atlantic Time (Canada) (UTC-04:00) Cuiaba (UTC-04:00) Georgetown, La Paz, Manaus, San Juan (UTC-04:00) Santiago (UTC-03:30) Newfoundland (UTC-03:00) Brasilia (UTC-03:00) Buenos Aires (UTC-03:00) Cayenne, Fortaleza (UTC-03:00) Greenland (UTC-03:00) Montevideo (UTC-02:00) Coordinated Universal Time-02 (UTC-02:00) Mid-Atlantic (UTC-01:00) Azores (UTC-01:00) Cape Verde Is. (UTC) Casablanca (UTC) Coordinated Universal Time (UTC) Dublin, Edinburgh, Lisbon, London (UTC) Monrovia, Reykjavik (UTC+01:00) Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna (UTC+01:00) Belgrade, Bratislava, Budapest, Ljubljana, Prague (UTC+01:00) Brussels, Copenhagen, Madrid, Paris (UTC+01:00) Sarajevo, Skopje, Warsaw, Zagreb (UTC+01:00) West Central Africa (UTC+02:00) Amman (UTC+02:00) Athens, Bucharest, Istanbul (UTC+02:00) Beirut (UTC+02:00) Cairo (UTC+02:00) Harare, Pretoria (UTC+02:00) Helsinki, Kyiv, Riga, Sofia, Tallinn, Vilnius (UTC+02:00) Jerusalem (UTC+02:00) Minsk (UTC+02:00) Windhoek (UTC+03:00) Baghdad (UTC+03:00) Kuwait, Riyadh (UTC+03:00) Moscow, St. Petersburg, Volgograd (UTC+03:00) Nairobi (UTC+03:30) Tehran (UTC+04:00) Abu Dhabi, Muscat (UTC+04:00) Baku (UTC+04:00) Port Louis (UTC+04:00) Tbilisi (UTC+04:00) Yerevan (UTC+04:30) Kabul (UTC+05:00) Ekaterinburg (UTC+05:00) Islamabad, Karachi (UTC+05:00) Tashkent (UTC+05:30) Chennai, Kolkata, Mumbai, New Delhi (UTC+05:30) Sri Jayawardenepura (UTC+05:45) Kathmandu (UTC+06:00) Astana (UTC+06:00) Dhaka (UTC+06:00) Novosibirsk (UTC+06:30) Yangon (Rangoon) (UTC+07:00) Bangkok, Hanoi, Jakarta (UTC+07:00) Krasnoyarsk (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi (UTC+08:00) Irkutsk (UTC+08:00) Kuala Lumpur, Singapore (UTC+08:00) Perth (UTC+08:00) Taipei (UTC+08:00) Ulaanbaatar (UTC+09:00) Osaka, Sapporo, Tokyo (UTC+09:00) Seoul (UTC+09:00) Yakutsk (UTC+09:30) Adelaide (UTC+09:30) Darwin (UTC+10:00) Brisbane (UTC+10:00) Canberra, Melbourne, Sydney (UTC+10:00) Guam, Port Moresby (UTC+10:00) Hobart (UTC+10:00) Vladivostok (UTC+11:00) Magadan, Solomon Is., New Caledonia (UTC+12:00) Auckland, Wellington (UTC+12:00) Coordinated Universal Time+12 (UTC+12:00) Fiji (UTC+12:00) Petropavlovsk-Kamchatsky (UTC+13:00) Nuku'alofa When I run it on a different local machine or my server I have this. 
<option value="(GMT) Casablanca">(GMT) Casablanca</option> <option value="(GMT) Greenwich Mean Time : Dublin, Edinburgh, Lisbon, London">(GMT) Greenwich Mean Time : Dublin, Edinburgh, Lisbon, London</option> <option value="(GMT) Monrovia, Reykjavik">(GMT) Monrovia, Reykjavik</option> <option value="(GMT+01:00) Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna">(GMT+01:00) Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna</option> <option value="(GMT+01:00) Belgrade, Bratislava, Budapest, Ljubljana, Prague">(GMT+01:00) Belgrade, Bratislava, Budapest, Ljubljana, Prague</option> <option value="(GMT+01:00) Brussels, Copenhagen, Madrid, Paris">(GMT+01:00) Brussels, Copenhagen, Madrid, Paris</option> <option value="(GMT+01:00) Sarajevo, Skopje, Warsaw, Zagreb">(GMT+01:00) Sarajevo, Skopje, Warsaw, Zagreb</option> <option value="(GMT+01:00) West Central Africa">(GMT+01:00) West Central Africa</option> <option value="(GMT+02:00) Amman">(GMT+02:00) Amman</option> <option value="(GMT+02:00) Athens, Bucharest, Istanbul">(GMT+02:00) Athens, Bucharest, Istanbul</option> <option value="(GMT+02:00) Beirut">(GMT+02:00) Beirut</option> <option value="(GMT+02:00) Cairo">(GMT+02:00) Cairo</option> <option value="(GMT+02:00) Harare, Pretoria">(GMT+02:00) Harare, Pretoria</option> <option value="(GMT+02:00) Helsinki, Kyiv, Riga, Sofia, Tallinn, Vilnius">(GMT+02:00) Helsinki, Kyiv, Riga, Sofia, Tallinn, Vilnius</option> <option value="(GMT+02:00) Jerusalem">(GMT+02:00) Jerusalem</option> <option value="(GMT+02:00) Minsk">(GMT+02:00) Minsk</option> <option value="(GMT+02:00) Windhoek">(GMT+02:00) Windhoek</option> <option value="(GMT+03:00) Baghdad">(GMT+03:00) Baghdad</option> <option value="(GMT+03:00) Kuwait, Riyadh">(GMT+03:00) Kuwait, Riyadh</option> <option value="(GMT+03:00) Moscow, St. Petersburg, Volgograd">(GMT+03:00) Moscow, St. 
Petersburg, Volgograd</option> <option value="(GMT+03:00) Nairobi">(GMT+03:00) Nairobi</option> <option value="(GMT+03:00) Tbilisi">(GMT+03:00) Tbilisi</option> <option value="(GMT+03:30) Tehran">(GMT+03:30) Tehran</option> <option value="(GMT+04:00) Abu Dhabi, Muscat">(GMT+04:00) Abu Dhabi, Muscat</option> <option value="(GMT+04:00) Baku">(GMT+04:00) Baku</option> <option value="(GMT+04:00) Port Louis">(GMT+04:00) Port Louis</option> <option value="(GMT+04:00) Yerevan">(GMT+04:00) Yerevan</option> <option value="(GMT+04:30) Kabul">(GMT+04:30) Kabul</option> <option value="(GMT+05:00) Ekaterinburg">(GMT+05:00) Ekaterinburg</option> <option value="(GMT+05:00) Islamabad, Karachi">(GMT+05:00) Islamabad, Karachi</option> <option value="(GMT+05:00) Tashkent">(GMT+05:00) Tashkent</option> <option value="(GMT+05:30) Chennai, Kolkata, Mumbai, New Delhi">(GMT+05:30) Chennai, Kolkata, Mumbai, New Delhi</option> <option value="(GMT+05:30) Sri Jayawardenepura">(GMT+05:30) Sri Jayawardenepura</option> <option value="(GMT+05:45) Kathmandu">(GMT+05:45) Kathmandu</option> <option value="(GMT+06:00) Almaty, Novosibirsk">(GMT+06:00) Almaty, Novosibirsk</option> <option value="(GMT+06:00) Astana, Dhaka">(GMT+06:00) Astana, Dhaka</option> <option value="(GMT+06:30) Yangon (Rangoon)">(GMT+06:30) Yangon (Rangoon)</option> <option value="(GMT+07:00) Bangkok, Hanoi, Jakarta">(GMT+07:00) Bangkok, Hanoi, Jakarta</option> <option value="(GMT+07:00) Krasnoyarsk">(GMT+07:00) Krasnoyarsk</option> <option value="(GMT+08:00) Beijing, Chongqing, Hong Kong, Urumqi">(GMT+08:00) Beijing, Chongqing, Hong Kong, Urumqi</option> <option value="(GMT+08:00) Irkutsk, Ulaan Bataar">(GMT+08:00) Irkutsk, Ulaan Bataar</option> <option value="(GMT+08:00) Kuala Lumpur, Singapore">(GMT+08:00) Kuala Lumpur, Singapore</option> <option value="(GMT+08:00) Perth">(GMT+08:00) Perth</option> <option value="(GMT+08:00) Taipei">(GMT+08:00) Taipei</option> <option value="(GMT+09:00) Osaka, Sapporo, Tokyo">(GMT+09:00) Osaka, Sapporo, Tokyo</option> <option value="(GMT+09:00) Seoul">(GMT+09:00) Seoul</option> <option value="(GMT+09:00) Yakutsk">(GMT+09:00) Yakutsk</option> <option value="(GMT+09:30) Adelaide">(GMT+09:30) Adelaide</option> <option value="(GMT+09:30) Darwin">(GMT+09:30) Darwin</option> <option value="(GMT+10:00) Brisbane">(GMT+10:00) Brisbane</option> <option value="(GMT+10:00) Canberra, Melbourne, Sydney">(GMT+10:00) Canberra, Melbourne, Sydney</option> <option value="(GMT+10:00) Guam, Port Moresby">(GMT+10:00) Guam, Port Moresby</option> <option value="(GMT+10:00) Hobart">(GMT+10:00) Hobart</option> <option value="(GMT+10:00) Vladivostok">(GMT+10:00) Vladivostok</option> <option value="(GMT+11:00) Magadan, Solomon Is., New Caledonia">(GMT+11:00) Magadan, Solomon Is., New Caledonia</option> <option value="(GMT+12:00) Auckland, Wellington">(GMT+12:00) Auckland, Wellington</option> <option value="(GMT+12:00) Fiji, Kamchatka, Marshall Is.">(GMT+12:00) Fiji, Kamchatka, Marshall Is.</option> <option value="(GMT+13:00) Nuku'alofa">(GMT+13:00) Nuku'alofa</option> <option value="(GMT-01:00) Azores">(GMT-01:00) Azores</option> <option value="(GMT-01:00) Cape Verde Is.">(GMT-01:00) Cape Verde Is.</option> <option value="(GMT-02:00) Mid-Atlantic">(GMT-02:00) Mid-Atlantic</option> <option value="(GMT-03:00) Brasilia">(GMT-03:00) Brasilia</option> <option value="(GMT-03:00) Buenos Aires">(GMT-03:00) Buenos Aires</option> <option value="(GMT-03:00) Georgetown">(GMT-03:00) Georgetown</option> <option value="(GMT-03:00) Greenland">(GMT-03:00) 
Greenland</option> <option value="(GMT-03:00) Montevideo">(GMT-03:00) Montevideo</option> <option value="(GMT-03:30) Newfoundland">(GMT-03:30) Newfoundland</option> <option value="(GMT-04:00) Atlantic Time (Canada)">(GMT-04:00) Atlantic Time (Canada)</option> <option value="(GMT-04:00) La Paz">(GMT-04:00) La Paz</option> <option value="(GMT-04:00) Manaus">(GMT-04:00) Manaus</option> <option value="(GMT-04:00) Santiago">(GMT-04:00) Santiago</option> <option value="(GMT-04:30) Caracas">(GMT-04:30) Caracas</option> <option value="(GMT-05:00) Bogota, Lima, Quito, Rio Branco">(GMT-05:00) Bogota, Lima, Quito, Rio Branco</option> <option value="(GMT-05:00) Eastern Time (US &amp; Canada)">(GMT-05:00) Eastern Time (US &amp; Canada)</option> <option value="(GMT-05:00) Indiana (East)">(GMT-05:00) Indiana (East)</option> <option value="(GMT-06:00) Central America">(GMT-06:00) Central America</option> <option value="(GMT-06:00) Central Time (US &amp; Canada)">(GMT-06:00) Central Time (US &amp; Canada)</option> <option value="(GMT-06:00) Guadalajara, Mexico City, Monterrey">(GMT-06:00) Guadalajara, Mexico City, Monterrey</option> <option value="(GMT-06:00) Saskatchewan">(GMT-06:00) Saskatchewan</option> <option value="(GMT-07:00) Arizona">(GMT-07:00) Arizona</option> <option value="(GMT-07:00) Chihuahua, La Paz, Mazatlan">(GMT-07:00) Chihuahua, La Paz, Mazatlan</option> <option value="(GMT-07:00) Mountain Time (US &amp; Canada)">(GMT-07:00) Mountain Time (US &amp; Canada)</option> <option value="(GMT-08:00) Pacific Time (US &amp; Canada)">(GMT-08:00) Pacific Time (US &amp; Canada)</option> <option value="(GMT-08:00) Tijuana, Baja California">(GMT-08:00) Tijuana, Baja California</option> <option value="(GMT-09:00) Alaska">(GMT-09:00) Alaska</option> <option value="(GMT-10:00) Hawaii">(GMT-10:00) Hawaii</option> <option value="(GMT-11:00) Midway Island, Samoa">(GMT-11:00) Midway Island, Samoa</option> <option value="(GMT-12:00) International Date Line West">(GMT-12:00) International Date Line West</option> They are different. Same line of code but one is GMT and one is UTC. How can I force it to be always the same? Also I want to have a default choice of "UTC" but I am not sure what the diff is between this (UTC-11:00) Coordinated Universal Time-11 and this (UTC-02:00) Coordinated Universal Time-02

    Read the article

  • Delegate.CreateDelegate() and generics: Error binding to target method

    - by SDReyes
    I'm having problems creating a collection of delegates using reflection and generics. I'm trying to create a delegate collection from the Classy methods, which share a common method signature.

    public class Classy
    {
        public string FirstMethod<out T1, in T2>( string id, Func<T1, int, IEnumerable<T2>> del );
        public string SecondMethod<out T1, in T2>( string id, Func<T1, int, IEnumerable<T2>> del );
        public string ThirdMethod<out T1, in T2>( string id, Func<T1, int, IEnumerable<T2>> del );
        // And so on...
    }

    And the generics cooking:

    // This is the Classy's shared method signature
    public delegate string classyDelegate<out T1, in T2>( string id, Func<T1, int, IEnumerable<T2>> filter );

    // And the linq-way to get the collection of delegates from Classy
    ( from method in typeof( Classy ).GetMethods( BindingFlags.Instance | BindingFlags.DeclaredOnly | BindingFlags.NonPublic )
      let delegateType = typeof( classyDelegate<,> )
      select Delegate.CreateDelegate( delegateType, method )
    ).ToList( );

    But Delegate.CreateDelegate( delegateType, method ) throws an ArgumentException saying "Error binding to target method." :/ What am I doing wrong?

    Read the article

  • Logging in worker threads spawned from a pylons application does not seem to work

    - by TimM
    I have a pylons application where, under certain cirumstances I want to spawn multiple worker threads to process items in a queue. Right now we aren't making use of a ThreadPool (would be ideal, but we'll add that in later). The main problem is that the worker threads logging does not get written to the log files. When I run the code outside of the pylons application the logging works fine. So I think its something to do with the pylons log handler but not sure what. Here is a basic example of the code (trimmed down): import logging log = logging.getLogger(__name__) import sys from Queue import Queue from threading import Thread, activeCount def run(input, worker, args = None, simulteneousWorkerLimit = None): queue = Queue() threads = [] if args is not None: if len(args) > 0: args = list(args) args = [worker, queue] + args args = tuple(args) else: args = (worker, queue) # start threads for i in range(4): t = Thread(target = __thread, args = args) t.daemon = True t.start() threads.append(t) # add ThreadTermSignal inputData = list(input) inputData.extend([ThreadTermSignal] * 4) # put in the queue for data in inputData: queue.put(data) # block until all contents are downloaded queue.join() log.critical("** A log line that appears fine **") del queue for thread in threads: del thread del threads class ThreadTermSignal(object): pass def __thread(worker, queue, *args): try: while True: data = queue.get() if data is ThreadTermSignal: sys.exit() try: log.critical("** I don't appear when run under pylons **") finally: queue.task_done() except SystemExit: queue.task_done() pass Take note, that the log lin within the RUN method will show up in the log files, but the log line within the worker method (which is run in a spawned thread), does not appear. Any help would be appreciated. Thanks ** EDIT: I should mention that I tried passing in the "log" variable to the worker thread as well as redefining a new "log" variable within the thread and neither worked. ** EDIT: Adding the configuration used for the pylons application (which comes out of the INI file). So the snippet below is from the INI file. [loggers] keys = root [handlers] keys = wsgierrors [formatters] keys = generic [logger_root] level = WARNING handlers = wsgierrors [handler_console] class = StreamHandler args = (sys.stderr,) level = WARNING formatter = generic [handler_wsgierrors] class = pylons.log.WSGIErrorsHandler args = () level = WARNING format = generic
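    One likely explanation, offered here as an assumption rather than a confirmed diagnosis, is that pylons.log.WSGIErrorsHandler writes to the wsgi.errors stream of the current request, and a thread spawned outside of a request has no request bound to it, so its records are silently dropped. A minimal workaround sketch using only the standard logging module (the file name and format below are made up) would be to give this module's logger its own handler that does not depend on the request:

        import logging

        # Hypothetical workaround sketch: attach a plain file handler to this module's
        # logger so records emitted from worker threads are written even when no
        # WSGI request (and therefore no wsgi.errors stream) is bound to the thread.
        log = logging.getLogger(__name__)
        handler = logging.FileHandler("workers.log")
        handler.setFormatter(logging.Formatter(
            "%(asctime)s %(threadName)s %(levelname)s %(name)s: %(message)s"))
        handler.setLevel(logging.DEBUG)
        log.addHandler(handler)
        log.setLevel(logging.DEBUG)

        log.critical("** this should now appear in workers.log from any thread **")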

    Read the article

  • NHibernate 2.1 MsSql2000Dialect error

    - by Daniel Dolz
    Hi. I had an old (but great) app using NHibernate 1.0.2. Worked like a charm. But then I decided to upgrade to NHibernate 2.1.2. I had to change some stuff, and it also worked great. The problem is, I found out that the new version works on some machines and does not work on others. What the heck? After thinking for a while, I discovered that it only works on PCs with SQL 2000 installed!! The previous version used to work everywhere.... Check out a piece of my exception; it has to do with MsSql2000Dialect: NHibernate.MappingException: Could not compile the mapping document: Datos.NH_VEN_ComprobanteBF.hbm.xml ---> NHibernate.HibernateException: Could not instantiate dialect class NHibernate.Dialect.MsSql2000Dialect ---> System.Reflection.TargetInvocationException: Se produjo una excepción en el destino de la invocación. ---> System.TypeInitializationException: Se produjo una excepción en el inicializador de tipo de 'NHibernate.NHibernateUtil'. ---> System.TypeLoadException: No se puede cargar el tipo 'System.DateTimeOffset' del ensamblado'mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'. en NHibernate.Type.DateTimeOffsetType.get_ReturnedClass() en NHibernate.NHibernateUtil..cctor() --- Fin del seguimiento de la pila de la excepción interna --- en NHibernate.Dialect.Dialect..ctor() en NHibernate.Dialect.MsSql2000Dialect..ctor() --- Fin del seguimiento de la pila de la excepción interna --- en System.RuntimeTypeHandle.CreateInstance(RuntimeType type, Boolean publicOnly, Boolean noCheck, Boolean& canBeCached, RuntimeMethodHandle& ctor, Boolean& bNeedSecurityCheck) en System.RuntimeType.CreateInstanceSlow(Boolean publicOnly, Boolean fillCache) Could you help? Thanks!!!!

    Read the article

  • Are there any libraries for showing diffs between two web pages?

    - by Chad Johnson
    I am looking for a library in any language--preferably PHP though--that will display the difference between two web pages. The differences can be displayed side-by-side, all in one document, or in any other creative way. Examples of what this would look like: http://1.bp.blogspot.com/_pLC3YDiv_I4/SBZPYQMDsPI/AAAAAAAAADk/wUMxK307jXw/s1600-h/wikipediadiff.jpg http://www.rohland.co.za/wp-content/uploads/2009/10/html_diff_output_text.PNG I am NOT looking for raw code diffing, like this: http://thinkingphp.org/img/code_coverage_html_diff_view.png. I do NOT want to show the difference between two sets of HTML. I want to show differences in rendered, WYSIWYG form. Every solution I tried suffered from one or more of the following problems: If I change the attribute of an element (eg. change [table border="1"] to [table border="2"]), then I'll have an extra table tag in the output (eg. [table border="1"][table border="1"][tr][td]...). And, one table tag will have a del tag around it, while the other will have an ins tag around it, and that will obviously cause problems. If I change [html][body][b]some content here[/b][/body][/html] to [html][body][i]some other content here[/i][/body][/html] then it looks like [html][body][b][del]original[/del][i][ins]new[/ins] content here[/b][/i][/body][/html] I'm looking for out-of-the-box ideas. Any ideas are welcome.

    Read the article

  • PHP Arrays: Loop through array with a lot of conditional statements. Help / Best practices

    - by Jonathan
    Hi, I have a problem and I don't know the best way to solve it. I need to loop through an array like the one below, check if the [country] index is equal to a Spanish-speaking country (there are a lot of countries), then take the [title] indexes for those countries and check for duplicates. The original array: Array ( [0] => Array ( [title] => Jeux de pouvoir [country] => France ) [1] => Array ( [title] => Los secretos del poder [country] => Argentina ) [2] => Array ( [title] => Los secretos del poder [country] => Mexico ) [3] => Array ( [title] => El poder secreto [country] => Uruguay ) ) To help you understand, the final result I need to get looks something like this: Array ( [0] => Array ( [title] => Los secretos del poder [country] => Argentina, Mexico ) [1] => Array ( [title] => El poder secreto [country] => Uruguay ) )
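    The question asks for PHP, but the transformation itself is language-agnostic, so here is a minimal sketch of the grouping idea in Python (the Spanish-speaking country set below is a made-up subset; the real list would be much longer): keep only the Spanish-speaking rows, group them by title, then join the countries of each group.

        # Minimal sketch of the grouping step; SPANISH_SPEAKING is a made-up subset.
        SPANISH_SPEAKING = {"Argentina", "Mexico", "Uruguay", "Spain"}

        movies = [
            {"title": "Jeux de pouvoir", "country": "France"},
            {"title": "Los secretos del poder", "country": "Argentina"},
            {"title": "Los secretos del poder", "country": "Mexico"},
            {"title": "El poder secreto", "country": "Uruguay"},
        ]

        grouped = {}
        for movie in movies:
            if movie["country"] in SPANISH_SPEAKING:
                grouped.setdefault(movie["title"], []).append(movie["country"])

        result = [{"title": title, "country": ", ".join(countries)}
                  for title, countries in grouped.items()]
        print(result)
        # [{'title': 'Los secretos del poder', 'country': 'Argentina, Mexico'},
        #  {'title': 'El poder secreto', 'country': 'Uruguay'}]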

    Read the article

  • Technique to remove common words (and their plural versions) from a string

    - by Jake M
    I am attempting to find tags (keywords) for a recipe by parsing a long string of text. The text contains the recipe ingredients, directions and a short blurb. What do you think would be the most efficient way to remove common words from the tag list? By common words, I mean words like 'the', 'at', 'there', 'their' etc. I have 2 methodologies I can use; which do you think is more efficient in terms of speed, and do you know of a more efficient way I could do this?
    Methodology 1:
    - Determine the number of times each word occurs (using the collections library).
    - Have a list of common words and remove all 'common words' from the Counter object by attempting to delete that key from the Counter object if it exists.
    - Therefore the speed will be determined by the length of the variable delims.

        from collections import Counter

        delims = ['there', 'there\'s', 'theres', 'they', 'they\'re']
        # the above will end up being a really long list!

        word_freq = Counter(recipe_str.lower().split())
        for delim in set(delims):
            del word_freq[delim]
        return word_freq.most_common()

    Methodology 2:
    - For common words that can be plural, look at each word in the recipe string, and check if it partially contains the non-plural version of a common word. E.g. for the string "There's a test", check each word to see if it contains "there" and delete it if it does.

        delims = ['this', 'at', 'them']        # words that can't be plural
        partial_delims = ['there', 'they']     # words that could occur in many forms

        word_freq = Counter(recipe_str.lower().split())
        for delim in set(delims):
            del word_freq[delim]

        # really slow
        for delim in set(partial_delims):
            for word in word_freq:
                if word.find(delim) != -1:
                    del word_freq[delim]
        return word_freq.most_common()
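    As a point of comparison, here is a minimal sketch (not from the question) that combines both ideas without deleting keys while counting: count the words once, then keep only the keys that are neither an exact stop word nor start with one of the stems that cover the plural and contracted forms. The word lists are illustrative only.

        from collections import Counter

        # Illustrative stop-word data only; real lists would be much longer.
        STOPWORDS = {"the", "at", "this", "them", "their"}   # exact matches
        STOP_STEMS = ("there", "they")                       # catches "there's", "they're", ...

        def tag_frequencies(recipe_str):
            word_freq = Counter(recipe_str.lower().split())
            kept = {word: count for word, count in word_freq.items()
                    if word not in STOPWORDS and not word.startswith(STOP_STEMS)}
            return Counter(kept).most_common()

        print(tag_frequencies("Mix the flour there, then bake at 180 until they're golden"))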

    Read the article

  • Is it safe (documented behaviour?) to delete the domain of an iterator during execution

    - by PoorLuzer
    I wanted to know if it is safe (documented behaviour?) to delete the domain space of an iterator while it is executing in Python. Consider the code:

        import os
        import sys

        sampleSpace = [ x*x for x in range( 7 ) ]
        print sampleSpace

        for dx in sampleSpace:
            print str( dx )
            if dx == 1:
                del sampleSpace[ 1 ]
                del sampleSpace[ 3 ]
            elif dx == 25:
                del sampleSpace[ -1 ]

        print sampleSpace

    'sampleSpace' is what I call 'the domain space of an iterator' (if there is a more appropriate word/phrase, let me know). What I am doing is deleting values from it while the iterator 'dx' is running through it. Here is what I expect from the code, iteration versus element being pointed to (*):

        0: [*0, 1, 4, 9, 16, 25, 36]
        1: [0, *1, 4, 9, 16, 25, 36]   ( delete 2nd and 5th element after this iteration )
        2: [0, 4, *9, 25, 36]
        3: [0, 4, 9, *25, 36]          ( delete -1th element after this iteration )
        4: [0, 4, 9, 25*]              ( as the iterator points to nothing/end of list, the loop terminates )

    .. and here is what I get:

        [0, 1, 4, 9, 16, 25, 36]
        0
        1
        9
        25
        [0, 4, 9, 25]

    As you can see, what I expect is what I get, which is contrary to the behaviour I have seen in other languages in such a scenario. Hence, I wanted to ask you if there is some rule like "the iterator becomes invalid if you mutate its space during iteration" in Python. Is it safe (documented behaviour?) in Python to do stuff like this?
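    For completeness, a minimal sketch (not from the question) of the usual defensive pattern: if you do not want to rely on this behaviour, iterate over a snapshot of the list and mutate the original, so the iterator's domain is never changed underneath it.

        # Iterate over a copy and mutate the real list; the values removed here are
        # chosen only to reproduce the question's final result.
        sampleSpace = [x * x for x in range(7)]      # [0, 1, 4, 9, 16, 25, 36]

        for dx in list(sampleSpace):                 # snapshot of the original values
            if dx in (1, 16, 36):
                sampleSpace.remove(dx)

        print(sampleSpace)                           # [0, 4, 9, 25]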

    Read the article

  • SQL SERVER – 2008 – Introduction to Snapshot Database – Restore From Snapshot

    - by pinaldave
    A snapshot database is one of the most interesting concepts that I have used in several places recently. Here is a quick definition of the subject from Books Online: A Database Snapshot is a read-only, static view of a database (the source database). Multiple snapshots can exist on a source database and always reside on the same server instance as the database. Each database snapshot is consistent, in terms of transactions, with the source database as of the moment of the snapshot's creation. A snapshot persists until it is explicitly dropped by the database owner. If you do not know how a snapshot database works, here is a quick note on the subject; however, please refer to the official description in Books Online for accuracy. A snapshot database is a read-only database created from an original database called the "source database", and it operates at the page level. When a snapshot database is created, it is produced on sparse files; in fact, it occupies no space (or very little space) in the operating system. When any data page is modified in the source database, that data page is copied to the snapshot database, making the sparse file size increase. When an unmodified data page is read in the snapshot database, it actually reads the pages of the original database. In other words, the changes that happen in the source database are reflected in the snapshot database. Let us see a simple example of a snapshot. In the following exercise we will do a few operations. Please note that this script is for demo purposes only; there are a few considerations of CPU, disk I/O and memory, which will be discussed in future posts.
    1. Create the snapshot.
    2. Delete data from the original DB.
    3. Restore data from the snapshot.
    First, let us create the first snapshot database and observe the sparse file details.

    USE master
    GO
    -- Create Regular Database
    CREATE DATABASE RegularDB
    GO
    USE RegularDB
    GO
    -- Populate Regular Database with Sample Table
    CREATE TABLE FirstTable (ID INT, Value VARCHAR(10))
    INSERT INTO FirstTable VALUES(1, 'First');
    INSERT INTO FirstTable VALUES(2, 'Second');
    INSERT INTO FirstTable VALUES(3, 'Third');
    GO
    -- Create Snapshot Database
    CREATE DATABASE SnapshotDB ON
    (Name ='RegularDB', FileName='c:\SSDB.ss1')
    AS SNAPSHOT OF RegularDB;
    GO
    -- Select from Regular and Snapshot Database
    SELECT * FROM RegularDB.dbo.FirstTable;
    SELECT * FROM SnapshotDB.dbo.FirstTable;
    GO

    Now let us see the resultset for the same. Next, let us delete something from the original DB and check the same details we checked before.

    -- Delete from Regular Database
    DELETE FROM RegularDB.dbo.FirstTable;
    GO
    -- Select from Regular and Snapshot Database
    SELECT * FROM RegularDB.dbo.FirstTable;
    SELECT * FROM SnapshotDB.dbo.FirstTable;
    GO

    When we check the details of the sparse file created by the snapshot database, we will find something interesting, while the details of the regular DB remain the same. It clearly shows that when we delete data from the regular/source DB, the data pages are copied to the snapshot database; this is the reason why the size of the snapshot DB increases. Now let us take this small exercise to the next level and restore our deleted data from the snapshot DB to the original source DB.
    -- Restore Data from Snapshot Database
    USE master
    GO
    RESTORE DATABASE RegularDB
    FROM DATABASE_SNAPSHOT = 'SnapshotDB';
    GO
    -- Select from Regular and Snapshot Database
    SELECT * FROM RegularDB.dbo.FirstTable;
    SELECT * FROM SnapshotDB.dbo.FirstTable;
    GO
    -- Clean up
    DROP DATABASE [SnapshotDB];
    DROP DATABASE [RegularDB];
    GO

    Now let us check the results of the SELECT statements; we can see that we were successfully able to restore the database from the snapshot database. This is clearly a very useful feature whenever you encounter a business need for it. I would like to request readers to share more details if they are using this feature in their business, and also to let me know what other tasks you think it could potentially be used to achieve. The complete script of the aforementioned operations is as follows, for easy reference:

    USE master
    GO
    -- Create Regular Database
    CREATE DATABASE RegularDB
    GO
    USE RegularDB
    GO
    -- Populate Regular Database with Sample Table
    CREATE TABLE FirstTable (ID INT, Value VARCHAR(10))
    INSERT INTO FirstTable VALUES(1, 'First');
    INSERT INTO FirstTable VALUES(2, 'Second');
    INSERT INTO FirstTable VALUES(3, 'Third');
    GO
    -- Create Snapshot Database
    CREATE DATABASE SnapshotDB ON
    (Name ='RegularDB', FileName='c:\SSDB.ss1')
    AS SNAPSHOT OF RegularDB;
    GO
    -- Select from Regular and Snapshot Database
    SELECT * FROM RegularDB.dbo.FirstTable;
    SELECT * FROM SnapshotDB.dbo.FirstTable;
    GO
    -- Delete from Regular Database
    DELETE FROM RegularDB.dbo.FirstTable;
    GO
    -- Select from Regular and Snapshot Database
    SELECT * FROM RegularDB.dbo.FirstTable;
    SELECT * FROM SnapshotDB.dbo.FirstTable;
    GO
    -- Restore Data from Snapshot Database
    USE master
    GO
    RESTORE DATABASE RegularDB
    FROM DATABASE_SNAPSHOT = 'SnapshotDB';
    GO
    -- Select from Regular and Snapshot Database
    SELECT * FROM RegularDB.dbo.FirstTable;
    SELECT * FROM SnapshotDB.dbo.FirstTable;
    GO
    -- Clean up
    DROP DATABASE [SnapshotDB];
    DROP DATABASE [RegularDB];
    GO

    Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Backup and Restore, SQL Data Storage, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

< Previous Page | 27 28 29 30 31 32 33 34 35 36 37 38  | Next Page >