Search Results

Search found 2000 results on 80 pages for 'stackover flow'.

Page 58/80 | < Previous Page | 54 55 56 57 58 59 60 61 62 63 64 65  | Next Page >

  • Cooling for a small server room

    - by John Zwinck
    I have a server room about 12 feet square with an unfinished ceiling (exposed ducts and wiring). It houses a few servers (about ten, 1U and 2U) and some networking gear (four 1U switches, three routers, three modems, two cable boxes). With the door closed, it runs around 80 degrees Fahrenheit with half the servers turned on. When I turned on all the servers it reached 86 before I chickened out and propped the door open. The room is adjacent to air-conditioned office space, but does not itself have dedicated air conditioning. The ventilation for this room seems to be limited to one duct coming in at ceiling level, with a powered fan to draw air in, and one duct at ceiling level to allow air to flow out (it seems like it may just go into the drop ceiling cavity in the adjacent room). The adjacent office space stays fairly cool, but I'd prefer not to leave the door propped open all the time. There is both 110v and 208v service in the room, and plenty of power available. But there are no windows, and no floor drains (in a pinch we might be able to run a condensation hose through a small hole we'd drill in the wall to a nearby sink area, but only if absolutely necessary). I've considered portable A/C units, but I'm not sure on sizing and a lot less sure how we would run the exhaust hose(s). I suppose we could point one at the existing room exhaust duct (air return), but substantially modifying the duct is probably a no-no. I've also considered installing a fan box in the door of the room, but I'm concerned that this will only drop the temperature a little. Even right now, with all the equipment on, the room is at 83 degrees with the door open. And the main building A/C turns off daily at 6 PM to conserve energy, so the adjacent room temperature rises at night. How would you cool this room? Let's say the goal is to bring the temperature with everything running from a steady state of around 90 degrees down to 75 (equivalently, to offset the heat produced by ten 1U servers).

    Read the article

  • JBoss 5 on AIX 5.3

    - by jess
    I am a very newbie for AIX and system monitoring. Actually our application currently run production on jboss 5.1 in AIX 5.3. Please check below configuration & system settings. AIX system configuration OS Level 5.3.9.0 (oslevel -g) Physical Memory size 24GB (svmon -G) Page space 4GB (lsps -s) processors 3 cores, Processor Type: PowerPC_POWER6, Processor Clock Speed: 4704 MHz (prtconf | grep Processor) Java version JRE 1.6.0 IBM AIX build pap6460sr10fp1-20120321_01 (SR10 FP1) (java -fullversion) JBoss configuration JBoss 5.1/JBoss ESB 4.11 Hornetq messaging with consumer flow control java opts : -d64 -Xms2g -Xmx4g -XX:MaxPermSize=1024m Sometime we observe very strange behavior in the JBoss that freeze without any error logs. Also server log stop without any further trace. We also not able to get thread dump (kill -3) and its not generate at that point. (kill -3 xxxxx works in normal circumstances) Only option available for us was restart the jboss server and its seem all messages that were in queues during the freeze time process after restarting. We try tweak some of setting in JBoss hornetq, we though issue was there. Hornetq Stuck By Default. But we haven't any luck and also unable to isolate the issue in any point. We looking at tool like nmon for monitoring this but no clue is that good enough to do so. Please provide some point to investigate this issue. Thanks

    Read the article

  • Can I have 2Gbit over 1Gbit Nics

    - by Daniel
    So this really baffles me. Apparently because 1Gbit can transmit data in both directions simultaneously it should be possible to get 2Gbit of data transfer on a single NIC (1Gbit flow seend and 1Gbit receive). People claim that because 1Gbit is full-duplex (almost always) it is exactly 2Gbit in total. My intuition and electrical background tells me that something is not right here 4 twisted pairs 250Mbit capacity each gives 1Gbit. Unless it is really possible to transfer data in both directions simultaneously. I did a test with iperf. Ubuntu server 12.04 <-- MacBook Pro. Both with decent CPU speed. Tested speed of connection individually and on Mac I can see 112MB/s regardless which direction data is going. On Ubuntu with vnstat and ifstat I got 970Mbit speeds. Now, launching iperf in server mode on both machines at the same time and sending data using 2 iperf clients shows that I'm for example on Ubuntu box sending at 600Mbit, and receiving 350Mbit. which adds up to pretty much 1Gbit link. So to me there is no magical 2Gbit. Can someone confirm that or tell why I'm wrong? Another thing that confuses me i the fact that e.g. 24-port switch has for example: Throughput»up»to:»50.6Mpps Switching»capacity:»68Gbps Switch»fabric»speed:»88Gbps Which would suggest thay can handle 2GBit per port.

    Read the article

  • The RTL8111/8168B NIC under Linux and the r8168 driver

    - by nik
    So I've got one of the infamous R8168 Realtek ethernet NIC, which have some problems under Linux. After some research, I found out I had to use the r8168 driver for this card (and not the r8169 which still loads when nothing else is available), which I did. So now everything works fine... Sort of. My download and upload rates are more than halved compared to what I should get. When I test (with eg. speedtest) I get something like 20M (often 15M) in download and 30M in upload, but if I test under Windows (everything is otherwise identical: same ethernet cable, same connection, at the same time of the day (well 5 min apart)...), I get 50M upload/download (which is what I expect). Where can it come from? Here's some info: ~ # lspci [...] 06:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 06) ~ # modinfo r8168 filename: /lib/modules/3.2.1-gentoo-r2/net/r8168.ko version: 8.027.00-NAPI license: GPL description: RealTek RTL-8168 Gigabit Ethernet driver author: Realtek and the Linux r8168 crew <[email protected]> srcversion: 0A6E9F1D4E8E51DE4B6BEE3 alias: pci:v00001186d00004300sv00001186sd00004B10bc*sc*i* alias: pci:v000010ECd00008168sv*sd*bc*sc*i* depends: vermagic: 3.2.1-gentoo-r2 SMP mod_unload [...] ~ # mii-tool -v eth0: negotiated 100baseTx-HD, link ok product info: vendor 00:07:32, model 17 rev 4 basic mode: autonegotiation enabled basic status: autonegotiation complete, link ok capabilities: 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD advertising: 100baseTx-HD 10baseT-FD 10baseT-HD flow-control link partner: 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD

    Read the article

  • Multiple Set Peer for VPN Failover

    - by Kyle Brandt
    I will have two Cisco routers at Location A serving the same internal networks, and one router in location B. Currently, I have one router in each location with a IPSec site-to-site tunnel connecting them. It looks something like: Location A: crypto map crypto-map-1 1 ipsec-isakmp description Tunnel to Location B set peer 12.12.12.12 set transform-set ESP-3DES-SHA match address internal-ips Location B: crypto map crypto-map-1 1 ipsec-isakmp description Tunnel to Location A set peer 11.11.11.11 set transform-set ESP-3DES-SHA match address internal-ips Can I achieve fail over by simply adding another set peer at location B?: Location A (New secondary Router, configuration on previous router stays the same): crypto map crypto-map-1 1 ipsec-isakmp description Tunnel to Location B set peer 12.12.12.12 set transform-set ESP-3DES-SHA match address internal-ips Location B (Configuration Changed): crypto map crypto-map-1 1 ipsec-isakmp description Tunnel to Location A set peer 11.11.11.11 ! 11.11.11.100 is the ip of the new second router at location A set peer 11.11.11.100 set transform-set ESP-3DES-SHA match address internal-ips Cisco Says: For crypto map entries created with the crypto map map-name seq-num ipsec-isakmp command, you can specify multiple peers by repeating this command. The peer that packets are actually sent to is determined by the last peer that the router heard from (received either traffic or a negotiation request from) for a given data flow. If the attempt fails with the first peer, Internet Key Exchange (IKE) tries the next peer on the crypto map list. But I don't fully understand that in the context of a failover scenerio (One of the routers as Location A blowing up).

    Read the article

  • Setup staging with multiple SVN

    - by Kapil Sharma
    We are a startup, setting new environments for product to be released soon. Planned server structure with planned release flow is as shown in below image It ideally have a local server (or Staging server, shown in green) in local office, without public IP address and Production Server (Red) at Amazon EC2. Both local and production server have there own SVN copy. Management here want to update production server with production SVN and without providing its access to developers (including freelancers/contract employees). So for developers, there is a Local SVN on local server. Another purpose of local SVN to keep a copy of code on local server, which is under our direct control. Although there are some technical concerns like how will code at local server will be updated from local SVN and commit on production SVN but bigger question is, is that structure correct? Major requirement remain don't provide production SVN access to developers. What are other possible options to achieve that? Another minor question, if suitable here, if above structure is correct, is it possible for a SVN checkout to get updated from one SVN (Local SVN) but commit to other (Production SVN)? If yes, How? edit An answer has been accepted but for bounty, I'm still looking for answer Is that structure correct? Its pros/Cons? Technical solution is already provided by accepted answer.

    Read the article

  • Serving and caching content from Amazon S3 with Tomcat

    - by Rob
    Hi all, We're looking to serve a range of content using Amazon S3 as a store for the content and Tomcat to host the web application. The content is divided into free and paid for content. We intend to authenticate the users when they access the web application running in Tomcat. Based around their authentication we are able to tell if the user has access to paid for content or simply free stuff. So I envision the flow of a request being something like this: Authenticated request to Tomcat If user is "paid" user, display links to premium content Direct requests for paid content back through Tomcat to prevent direct access to it by non-paying users. Tomcat makes request to S3 through a web cache to keep our costs down Content is returned to user. As we have to pay for each request to S3, I'd ideally like to cache content locally to the Tomcat instance after it has been requested for the first time to keep costs to a minimum and to speed things up. I would also like to be able to invalidate this cache if we publish fresh content to S3. So to confirm my proposal: Client Request - Tomcat - Web Cache - S3 To invalidate the cache, I was thinking of using something like PubSubHubbub with the cache waiting for updates to the feed for content that it should invalidate. I'd appreciate some general feedback on this approach as I've no real experience of caching and I'm sure I've made some invalid assumptions. I'd also appreciate any recommendations for caching technologies. Thanks.

    Read the article

  • Windows 7 task bar stuck in hiding, how to fix?

    - by Rainer Blome
    In Windows 7, I use the "Auto-hide the task bar" feature. Usually, it works fine: As soon as the pointer touches the screen bottom, the task bar pops up. However sometimes, it refuses to rise. Pressing the "Windows" key (or Ctrl-ESC) makes the start menu appear, forcing the task bar from hinding as well. Once I've done this, the task-bar auto-rises again. This is annoying, it interrupts flow. Has anyone else noticed this? How do I avoid this? Searching for "Windows 7 task bar auto-raise" shows that at least one other person experienced this problem: http://answers.microsoft.com/en-us/windows/forum/windows_7-desktop/how-can-i-fix-the-taskbars-auto-hide/8cdf6369-7354-4d29-9249-b7096ed0e28b?msgId=6dac3361-9d0f-4a9e-8642-b91a72826ba4 To answer the question posed by the "helpful" support engineer on the above page, of course I am running some apps when this happens, usually Explorer, Firefox, Eclipse, Cygwin/X, Xterm, Emacs, Notes, VPN client, Firewall. If my memory serves correctly, I have seen this behavior on earlier versions of Windows as well, XP at least. To reproduce this behavior, I tried switching between apps, and bringing apps to open other windows. I am unable to reproduce this behavior so far. So far, it appears to happen out of the blue, sometimes multiple times a day. Looks like a bug to me. The task bar should raise no matter what.

    Read the article

  • AWS VPC public web application connecting to database via VPN

    - by Chris
    What I am trying to do is set up a web application that is public facing but makes calls to a database that is on an internal network. I have been trying to set up an AWS VPC with a public subnet, private subnet, and hardware VPN access but I can't seem to get it to work. Can someone help me understand what the process flow here should be? My understanding is that I need a public subnet to handle the website requests and then a private subnet to connect to the VPN but what I do not understand is how to send requests down the chain and get the response. Basically what I am asking is how can I query the database via VPN from that public website? I've tried during rout forwarding but I can't successfully complete the process. Does anyone have any advice on something I can read on this subject or an FAQ on setting something like this up? Is it even possible? I'm out of my league here, this is not my area of expertise but I'm being asked to solve this problem. Any help would be appreciated. Thanks

    Read the article

  • Can Current Backflow from Powered Hub's Adapter & cause PC Damage?

    - by SuperUserMan
    Getting this short: Can current flow from a powered USB hub's power adapter (lying 10 Meter away) back to computer via usb port and cause damage to Computer components like mobo, etc? What should be my concerns? Using a 2 Amp 5V Power adapter to power a 10m Long Active Repeater USB extension cable with 4 port HUB & plugging into PC's Front port, causes PC Chassis fan to keep running (thought slower than regular speed) Front Chassis HDD & power LED to turn on (though bit dim) may be other things which i cant detect/see at chip level, in motherboard?? All this even after PC is shut down (bit scary) More detail (in case still want to read): To run 4 High power (needing 450 mAmps) Wifi Adapters, far away from PC, Bought Active Repeater USB Extension Cable with 4 Ports & power port at far end http://www.ebay.com/itm/33FT-USB-2-0-Male-to-Female-Extension-Cable-Hub-Splitter-Adapter-with-4-USB-Port-/390846115254 Then added a locally bought 2 Amp 240V AC to 5V DC Power Adapter and plugged into USB hub which is a part of & situated at far end of a 10 Meter Active Repeater usb extension cable. Even 4 Wifi Adapters run fine (appear to) using this setup, but running chassis fan, dimly lighted Power & HDD LED, even when PC is switched off is bit scary and surely mean 5V & some current is flowing all though that 10 meter extension cable into my USB port & powering stuff. Can this cause damage? and what should be my concerns. Of course I can't switch off the power adapter (lying 10 meters away from PC) every time I switch off my PC to prevent this.

    Read the article

  • ESXi 5 VM Putty session hangs, vSphere client timing out

    - by user192702
    First of all I believe this is a ESXi issue but let me know if you have seen this. It started about a year ago when I noticed occasionally when I putty via SSH to my VM guests, if I do anything that makes it to display a lot of things at once, the session will hang and I have to start a new one quite often only to find the same behaviour. What I meant by display a lot of things can be any of the following: 1) tail -f filename 2) Paste a long command 3) less filename If I type in one character at a time this won't happen. I tried searching online and it always point me to flow control settings and the various suggestions I've tried have never been able to resolve the issue. Since last week, I've noticed I'm not able to connect to my POP3 server from Outlook (it's timing out from Outlook's perspective). Today I tried to connect to the ESXi via vSphere client and it gives me a time out also. Exact behavior and error I saw is similar to the one posted at the following URL but the suggested technique also failed to resolve the issue. http://davidcocke.blogspot.hk/2012/02/unable-to-login-with-vsphere-client.html Has anyone experienced this before? Any suggestions on how to troubleshoot this?

    Read the article

  • Postfix relay all mail through SES except for one sending domain / address

    - by Kevin
    I'm thinking this is really really super simple, but I can't figure out what I need to do. I don't mess with Postfix much (Just let it run and do its thing) so I've got no idea where to even start with this. We have postfix currently configured to relay all mail out through SES using the code below. We need to modify this so that emails sent from one of our domains (domain.com) DO NOT go through SES. Everything else should continue to flow out through the SES connection. I'm assuming this is like a one line thing but my google skills are not helping me at all. relayhost = email-smtp.us-east-1.amazonaws.com:25 smtp_sasl_auth_enable = yes smtp_sasl_security_options = noanonymous smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd smtp_use_tls = yes smtp_tls_security_level = encrypt smtp_tls_note_starttls_offer = yes smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt smtp_destination_concurrency_limit = 450 Update I have created sender_transport file in /etc/postfix. In it is @domain.com smtp: I then ran this through postmap and placed sender_dependent_default_transport_maps = hash:/etc/postfix/sender_transport above the above block of code and restarted postfix, but still all email is going out through SES. Log after sending Oct 22 14:38:48 web postfix/smtp[19446]: 4B19D640002: to=<[email protected]>, relay=email-smtp.us-east-1.amazonaws.com[54.243.47.187]:25, delay=1.4, delays=0.01/0/0.92/0.44, dsn=2.0.0, status=sent (250 Ok 00000141e21b181f-ee6f7c4f-f0f5-4b0f-ba69-2db146a4f988-000000) Oct 22 14:38:48 web postfix/qmgr[19435]: 4B19D640002: removed I don't think this log is what you're looking for, but it's the only thing that is logged when mail goes out, and this is with me running /usr/sbin/postfix -v start manually and not with the init script.

    Read the article

  • Encrypting peer-to-peer application with iptables and stunnel

    - by Jonathan Oliver
    I'm running legacy applications in which I do not have access to the source code. These components talk to each other using plaintext on a particular port. I would like to be able to secure the communications between the two or more nodes using something like stunnel to facilitate peer-to-peer communication rather than using a more traditional (and centralized) VPN package like OpenVPN, etc. Ideally, the traffic flow would go like this: app@hostA:1234 tries to open a TCP connection to app@hostB:1234. iptables captures and redirects the traffic on port 1234 to stunnel running on hostA at port 5678. stunnel@hostA negotiates and establishes a connection with stunnel@hostB:4567. stunnel@hostB forwards any decrypted traffic to app@hostB:1234. In essence, I'm trying to set this up to where any outbound traffic (generated on the local machine) to port N forwards through stunnel to port N+1, and the receiving side receives on port N+1, decrypts, and forwards to the local application at port N. I'm not particularly concerned about losing the hostA origin IP address/machine identity when stunnel@hostB forwards to app@hostB because the communications payload contains identifying information. The other trick in this is that normally with stunnel you have a client/server architecture. But this application is much more P2P because nodes can come and go dynamically and hard-coding some kind of "connection = hostN:port" in the stunnel configuration won't work.

    Read the article

  • Can I use iptables on my Varnish server to forward HTTPS traffic to a specific server?

    - by Dylan Beattie
    We use Varnish as our front-end web cache and load balancer, so we have a Linux server in our development environment, running Varnish with some basic caching and load-balancing rules across a pair of Windows 2008 IIS web servers. We have a wildcard DNS rule that points *.development at this Varnish box, so we can browse http://www.mysite.com.development, http://www.othersite.com.development, etc. The problem is that since Varnish can't handle HTTPS traffic, we can't access https://www.mysite.com.development/ For dev/testing, we don't need any acceleration or load-balancing - all I need is to tell this box to act as a dumb proxy and forward any incoming requests on port 443 to a specific IIS server. I suspect iptables may offer a solution but it's been a long while since I wrote an iptables rule. Some initial hacking has got me as far as iptables -F iptables -A INPUT -p tcp -m tcp --sport 443 -j ACCEPT iptables -A OUTPUT -p tcp -m tcp --dport 443 -j ACCEPT iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to 10.0.0.241:443 iptables -t nat -A POSTROUTING -p tcp -d 10.0.0.241 --dport 443 -j MASQUERADE iptables -A INPUT -j LOG --log-level 4 --log-prefix 'PreRouting ' iptables -A OUTPUT -j LOG --log-level 4 --log-prefix 'PostRouting ' iptables-save > /etc/iptables.rules (where 10.0.0.241 is the IIS box hosting the HTTPS website), but this doesn't appear to be working. To clarify - I realize there's security implications about HTTPS proxying/caching - all I'm looking for is completely transparent IP traffic forwarding. I don't need to decrypt, cache or inspect any of the packets; I just want anything on port 443 to flow through the Linux box to the IIS box behind it as though the Linux box wasn't even there. Any help gratefully received... EDIT: Included full iptables config script.

    Read the article

  • Windows 7 hangs with 100% disk activity but only when online

    - by jeremy
    I have the same problem as seemingly many other people here, and I think we might all be experiencing the same issue: a compatibility issue in Windows 7 between hard drive and network controller or drivers. I've tried firmware updates of my entire board, wiping my drive and reinstalling from scratch. And yet the problem persists, which suggests it is an operating system error, as the hard drive checks out 100% physically. Additionally, the only time it does not occur is when in safe mode WITHOUT networking. With networking, there are spikes in disc access every so often and a huge flow of processes accessing the disc simultaneously that literally "stick" the disc, and physically jolting my computer unsticks it. Again, this has been tested for hours in a professional service environment, and without network access on, things are fine. As soon as there's network access available, the disc access occasionally cranks up to 100% and sticks everything. I'm using Microsoft Security Essentials, but this also happened under Norton, then McAfee. Again, this happened again after a complete wipe, so the likelihood of malware causing it seems low. I don't visit unsecure sites anyway, as far as I know. This, to me, narrows it down to a Windows 7 process that is somehow repeatedly corrupted, perhaps a corrupt .dll or driver, causing a conflict at the operating system level and temporary hard drive failure. I would encourage anyone who knows more about this stuff (which is probably most people!) to take a shot at this one, and I would encourage anyone else with a sticking hard drive in windows 7 64-bit to check on whether it occurs during safe mode without networking.

    Read the article

  • Linux boot - stop the kernel switching to a new framebuffer mode clearing output

    - by Avio
    I'm working on an embedded system (based onUbuntu 12.04 LTS) and I'm customizing its kernel. I'm having some problem with upstart, mountall and plymouth. Nothing unsolvable I suppose, but the real problem is that I can't diagnose properly what's going on because the kernel (or maybe plymouth) changes the video mode in the middle of the boot process. This completely wipes entire lines of log and prevents any debugging of kernel misconfigurations. My Grub2 config seems to be ok with: GRUB_CMDLINE_LINUX="" GRUB_CMDLINE_LINUX_DEFAULT="acpi=force noplymouth" GRUB_GFXMODE=1024x768x32 GRUB_GFXPAYLOAD_LINUX=keep Here is some relevant output of lspci: 00:00.0 Host bridge: Intel Corporation Mobile 945GSE Express Memory Controller Hub (rev 03) 00:02.0 VGA compatible controller: Intel Corporation Mobile 945GSE Express Integrated Graphics Controller (rev 03) 00:02.1 Display controller: Intel Corporation Mobile 945GM/GMS/GME, 943/940GML Express Integrated Graphics Controller (rev 03) And here is the relevant portion of my kernel configuration: CONFIG_AGP=y CONFIG_AGP_INTEL=y CONFIG_VGA_ARB=y CONFIG_VGA_ARB_MAX_GPUS=16 CONFIG_DRM=y CONFIG_DRM_KMS_HELPER=y CONFIG_DRM_I915=y CONFIG_DRM_I915_KMS=y CONFIG_VIDEO_OUTPUT_CONTROL=y CONFIG_FB=y CONFIG_FB_BOOT_VESA_SUPPORT=y CONFIG_FB_CFB_FILLRECT=y CONFIG_FB_CFB_COPYAREA=y CONFIG_FB_CFB_IMAGEBLIT=y CONFIG_FB_MODE_HELPERS=y CONFIG_FB_VESA=y CONFIG_BACKLIGHT_LCD_SUPPORT=y CONFIG_BACKLIGHT_CLASS_DEVICE=y CONFIG_VGA_CONSOLE=y CONFIG_VGACON_SOFT_SCROLLBACK=y CONFIG_VGACON_SOFT_SCROLLBACK_SIZE=640 CONFIG_DUMMY_CONSOLE=y CONFIG_FRAMEBUFFER_CONSOLE=y CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y CONFIG_FONT_8x8=y CONFIG_FONT_8x16=y CONFIG_LOGO=y CONFIG_LOGO_LINUX_MONO=y CONFIG_LOGO_LINUX_VGA16=y CONFIG_LOGO_LINUX_CLUT224=y Every other custom/stock kernel boot fine with that Grub2 config. What I would like to have is a single flow of messages on a single console (retaining one screen resolution) from the bootup logo till the login prompt. Does anybody know what I have to tweak to achieve this?

    Read the article

  • How could I embed formatted XML source in WORD documents?

    - by eckes
    I'm writing a documentation with WORD that contains XML source code (whole files) as examples. The way I'm embedding the currently XML is quite cumbersome and doesn't seem to me as really maintainable: I'm finishing the editing of the document in WORD and create a PDF from it using Acrobat next, I open my XML files (2x input files, 2x generated output files) with IE and print them with the PDF printer supplied by Acrobat now, I open up Acrobat Pro and attach the four XML-PDF files to my original document The problem with that work flow for me is that it involves too much manual labor in order to get the documentation done. What I've tried up to now is not really satisfying for me: converting each XML to PDF and appending them like described above opening the XML files with SCiTE, copy as RTF and paste into Word playing around with the LaTeX packages minted, pygments and listings (I could write the docs with LaTeX too) but found some unsolvable problems in each of these packages I'm searching for a way that produces my documentation more automatic. For example embedding the XML files including formatting of IE (which I find quite readable). The files should be included by reference so that I don't have to paste the XML sources manually every time the XML changes.

    Read the article

  • Redmine does not return the web page

    - by m0skit0
    I migrated a Redmine installation from an Ubuntu machine to a Debian one (both 32-bits), and now for some reason, for some users it doesn't return the page but only a 200 OK message. Here is the flow (from Wireshark): GET /issues/142 HTTP/1.1 Host: debian:3000 User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:17.0) Gecko/20100101 Firefox/17.0 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-US,en;q=0.5 Accept-Encoding: gzip, deflate Connection: keep-alive Cookie: _redmine_session=BAh7DCIQX2NzcmZfdG9rZW4iMStIM1RBNTlNelZVUXlUazgrR1pUNGUvNGdEbytUZzRyMVFSUnBvNGhlSDg9Ihd0aW1lbG9nX2luZGV4X3NvcnQiEnNwZW50X29uOmRlc2MiD3Nlc3Npb25faWQiJThiMDk0MzVhOTEzYTI0MzVjOGEzYTRmNDU0NzcwMTAwIgx1c2VyX2lkaQoiFmlzc3Vlc19pbmRleF9zb3J0IgxpZDpkZXNjIg1wZXJfcGFnZWlpIgpxdWVyeXsHOg9wcm9qZWN0X2lkaQc6B2lkaQo%3D--8588c221c0642a12f396239455fb702aec14c9c9; my_wiki_session=f70ae11e1c533c86f0e039d63cf3f69c; my_wikiUserID=1; my_wikiUserName=Yasin Cache-Control: max-age=0 HTTP/1.1 200 OK Connection: Keep-Alive Date: Wed, 12 Dec 2012 16:30:16 GMT Server: WEBrick/1.3.1 (Ruby/1.8.7/2010-08-16) Content-Length: 0 As you can see, I get nothing from the server. This is mostly random because this blank page happens sometimes for some users, and for other users it almost never returns the page... I'm absolutely lost here. Any idea about what can be the cause? Thanks in advance!

    Read the article

  • USB-to-Serial showing gibberish at 115200 Baud

    - by Mose
    I've got a serious problem which drives me crazy because I tried everything I could think of. First of all, I made a video: http://youtu.be/boghkuq7L_s but please read the following text for more information, not only view the video! When using a USB-to-Serial interface everything works as long as I don't go beyond 57600 Baud. At higher rates I only get giberish like this: év.­b0JNLYÆÿ¿iëd0U²(kßÞb! ú]/xscB!ï¯!BoXûÿ1ïâÖCÿ6ÌAnè*íÌC)º¿BíÞØ.C.@ÆÃwHJÂs "YE:ñ.èFðÌCÊ÷ÞÄ !x H w6@BtbHJ ̪ Ì6ì H¾a¿bH.">îvy®;f<ßBÌ p­L¨fæH­E ­þ¼MBÞI What makes the problem so strange is, I exchanged every component and the problem still presists. I tried differtent OSes (Ubuntu, WinXP, Win7, OSX 10.7) with 32 and 64 Bit. I tried USB-to-Serial interface from FTDI and Prolific. I tried reading the output from my Raspberry PI and from an Asterisk Appliance. I changed the cables and the wiring. Nothing helped. In the video I made a example with a old Notebook with native COM and put the USB-to-Serial to the same connection as "sniffer" (only Rx and GND connected) to make sure the output and everything is ok as one can see on the native port. The voltage is ok. Settings for both are 115200 Baud, 8 Bit with 1 Stop and no flow control. Native is ok. USB is messed up. I used the newest drivers and double checked all connections. I have no idea what is wrong here. As I couldn't find anyone describing problems like this I question my long experiance in computer science and think I'm doing some completly wrong... Please help :-/

    Read the article

  • Server-side SSH jump hosts

    - by Dan Sosedoff
    Trying to figure out server side SSH jump hosts logic. Current network schema: [Client] <--> [Server A: hostname: a.com] <--> [Server B] [Client] <--> [Server A: hostname: b.com] <--> [Server C] Server A responds to both DNS records. Possible flow: Client opens a ssh connection with ssh [email protected]. Server A accepts it and should automatically jump user onto Server B with ssh user2@server_b.com. Client opens a ssh connection with ssh [email protected]. Server A accepts it and should automatically just user onto Server C with ssh user2@server_c.com. In other words, client should be able to connect to the target without performing any local configuration, assuming that we have a stock ssh config. The problem with ssh jumps is that user has to define hosts in local ~/.ssh/config file, which is not acceptable in my case. It needs to be a default sshd behavior. Im aware that you can define a custom command ~/.ssh/authorized_keys on server, but i dont think there is a way to properly detect source hostname where user tries to connect. It is possible at all ?

    Read the article

  • Need a helpful/managed VPS to help transition from shared hosting

    - by Xeoncross
    I am looking for a VPS that can help me transition out of a shared hosting environment. My main OS is Ubuntu, although I am still new to the linux world. I spend most of my day programming PHP applications using a git over SSH workflow. I want PHP, SSH, git, MySQL/PostgreSQL and Apache to work well. Someday after I figure out server management I'll move on to http://nginx.org/ or something. I don't really understand 1) linux firewalls, 2) mail servers, or 3) proper daily package/lib update flow. I need a host that can help with these so I don't get hit with a security hole. (I monitor apache access logs so I think I can take it from there.) I want to know if there is a sub $50/m VPS that can help me learn (or do for me) these three main things I need to run a server. I can't leave my shared hosts (plural shows my need!) until I am sure my sites will be safe despite my incompetence. To clarify again, I need the most helpful, supportive, walk-me-through, check-up-on-me, be-there-when-I-need you VPS I can get. Learning isn't a problem when there is someone to turn too. ;)

    Read the article

  • mail server checklist..

    - by Jeff
    currently we ran into some issues with our mail server setup. im preparing a list of actions that we should enforce and use in order to maintain a proper email solution within our company. we have around 80 exchange users, and send mass emails out almost on a monthly bases to 20,000 + customers each time.. the checklist i currently have: 1) mcafee mxlogic 'cloud' anti-spam functionality for incoming message. 2) antivirus on each computer in company 3) antivirus on exchange and DNS servers 4) setup SPF record 5) setup DKIM 6) setup domainkey 7) setup senderID 8) submit spf to microsoft, yahoo, etc. for 'whitelist' purposes. 9) configure size limits for messages in exchange to safe numbers 10) i have 2 outside IPs for my email server, incase one gets blacklisted, switch to the backup. 11) my internet site rests on a different ip than the mail server 12) all mass emails for company sent through 3rd party company (listtrak.com) 13) setup domain alias, media, enews, and bounce for the 3rd party mass mail software. 14) verify the setup using [email protected] 15) configure group policy and our opendns.org account to prevent unwanted actions and website viewing mass emails: 1) schedule them to send different amounts at different times (1,000 at 10am, 1,000 at 4pm, 1,000 10am next day).. 2) setup user prefences, decide what they want to receive ect. ( there interests) 3) send a more steady flow of email, maybe 100 a week with top new products instead of 20,000k every other month.. if anyone has suggestions or additions/subtractions to this checklist they are greatly appreciated. thank you

    Read the article

  • How do I connect to and factory reset a Catalyst 3560 Switch?

    - by Josh
    My company just bought another company. In their server room they had some older hardware, which I would like to repurpose. One of these is a Cisco Switch: C3560G-48TS-S. I found some instructions about this switch here but this is not a guide for a beginner. I have no idea how to connect to this thing to begin running the commands. It says Configure the PC terminal emulation software for 9600 baud, 8 data bits, no parity, 1 stop bit, and no flow control. But I can't find anything on how to do this (assuming with telnet?) or even what program to use. I also don't know how to find the IP address of the device to connect to it. My research also says once I get in there, I need to run clear config all Is this the right command? Also, what if I can't get the username and password for these devices? Is there some way to factory reset (my only experience is with devices that have a hardware reset button) EDIT: I should note that when I push the button on the front the three lights blink, which according to the documentation indicated the switch is configured and "not available for express setup"

    Read the article

  • Trying to Understand PLSQL Function

    - by Rachel
    I am new to PLSQL and I have this huge plsql function which am trying to understand and am having hard time understanding the flow and so I would really appreciate if anyone can run me through the big pieces so that I can understand the flow. Guidance would be highly appreciated. FUNCTION monthly_analysis( REGION_ID_P VARCHAR2, COUNTRY_ID_P VARCHAR2 , SUB_REGION_ID_P VARCHAR2 , CUSTOMER_TYPE_ID_P VARCHAR2 , RECEIVED_FROM_DATE_P VARCHAR2 , RECEIVED_TO_DATE_P VARCHAR2, CUSTOMER_ID_P VARCHAR2 , PRIORITY_ID_P VARCHAR2, WORK_GROUP_ID_P VARCHAR2, CITY_ID_P VARCHAR2, USER_ID_P VARCHAR2 ) RETURN AP_ANALYSIS_REPORT_TAB_TYPE pipelined IS with_sql LONG; e_sql LONG; where_sql LONG; group_by_sql LONG; curent_date Date; v_row AP_ANALYSIS_REPORT_ROW_TYPE := AP_ANALYSIS_REPORT_ROW_TYPE( NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ); TYPE rectyp IS REF CURSOR; -- define weak REF CURSOR type rrc_rectyp rectyp; TYPE recordvar IS RECORD( MONTHS VARCHAR2(100), ORDERBY_MONTHS VARCHAR2(100), REQ_RECEIVED NUMBER(9,2), REQ_STILL_OPEN NUMBER(9,2), REQ_AWAIT_ACCEPTANCE NUMBER(9,2), REQ_WITH_ATT NUMBER(9,2), REQ_CLOSED NUMBER(9,2), REQ_CANCELLED NUMBER(9,2) ); res_rec recordvar; BEGIN select sysdate +substr(to_char(systimestamp, 'tzr'),3,1)/24 into curent_date from dual; where_sql := ' AND 1=1 '; IF COUNTRY_ID_P IS NOT NULL THEN where_sql := where_sql ||' AND x.country_id ='|| COUNTRY_ID_P; END IF; IF SUB_REGION_ID_P IS NOT NULL THEN where_sql := where_sql ||' AND x.SUB_REGION_ID ='|| SUB_REGION_ID_P; END IF; IF CUSTOMER_TYPE_ID_P IS NOT NULL THEN where_sql := where_sql ||' AND x.CUSTOMER_TYPE_ID ='|| CUSTOMER_TYPE_ID_P; END IF; IF RECEIVED_FROM_DATE_P IS NOT NULL THEN where_sql := where_sql||' AND convert_time(received_date, ''GMT'', ''GMT'') >= convert_time(trunc(to_date('''||RECEIVED_FROM_DATE_P||''',''dd/mm/yyyy HH24:MI:SS'')), ''Europe/Paris'', ''GMT'')'; END IF; IF RECEIVED_TO_DATE_P IS NOT NULL THEN where_sql := where_sql||' AND convert_time(received_date, ''GMT'', ''GMT'') <= convert_time(trunc(to_date('''||RECEIVED_TO_DATE_P||''',''dd/mm/yyyy HH24:MI:SS'')), ''Europe/Paris'', ''GMT'')'; END IF; IF CUSTOMER_ID_P IS NOT NULL THEN where_sql := where_sql||' AND x.CUSTOMER_ID in(select CUSTOMER_ID from lk_customer where upper(CUSTOMER_NAME) like upper('''||CUSTOMER_ID_P||'%''))'; END IF; IF PRIORITY_ID_P IS NOT NULL THEN where_sql := where_sql ||' AND x.PRIORITY_ID ='|| PRIORITY_ID_P; END IF; IF WORK_GROUP_ID_P IS NOT NULL THEN where_sql := where_sql ||' AND x.WORKGROUP_ID ='|| WORK_GROUP_ID_P; END IF; IF CITY_ID_P IS NOT NULL THEN where_sql := where_sql ||' AND x.CITY_ID = ' || CITY_ID_P; END IF; group_by_sql := ' group by to_char(convert_time(received_date, ''GMT'', ''Europe/Paris''),''mm/YYYY''),to_char(convert_time(received_date, ''GMT'', ''Europe/Paris''),''yyyy/mm'')'; with_sql := 'with b AS (select cep_work_item_no from ap_main where req_accept_date is null and ecep_ap_utils.f_business_days(received_date,'''||curent_date||''')>30), e AS (select cep_work_item_no from ap_main where status_id=1 and req_accept_date is not null and stage_ID != 10 and stage_Id !=4 and ecep_ap_utils.f_business_days(received_date,'''||curent_date||''')>30), --f AS (select cep_work_item_no from ap_main where received_date is not null), m AS (select cep_work_item_no from ap_main where received_date is not null and status_id=1), n AS (select cep_work_item_no from ap_main where status_id=2), o AS (select cep_work_item_no from ap_main where status_id=3)'; --e_sql := ' SELECT MONTHS, REQ_RECEIVED,REQ_STILL_OPEN, REQ_AWAIT_ACCEPTANCE, REQ_WITH_ATT from ('; --e_sql := with_sql; e_sql := with_sql||' select to_char(convert_time(received_date, ''GMT'', ''Europe/Paris''),''mm/YYYY'') MONTHS, to_char(convert_time(received_date, ''GMT'', ''Europe/Paris''),''yyyy/mm'') ORDERBY_MONTHS, count(x.cep_work_item_no) REQ_RECEIVED, count(m.cep_work_item_no) REQ_STILL_OPEN,count(b.cep_work_item_no) REQ_AWAIT_ACCEPTANCE,count(e.cep_work_item_no) REQ_WITH_ATT, count(n.cep_work_item_no) REQ_CLOSED, count(o.cep_work_item_no) REQ_CANCELLED from emea_main x,m,b,e,n,o where x.cep_work_item_no=m.cep_work_item_no(+) and x.cep_work_item_no = b.cep_work_item_no(+) and x.cep_work_item_no=e.cep_work_item_no(+) and x.cep_work_item_no=n.cep_work_item_no(+) and x.cep_work_item_no=o.cep_work_item_no(+) and x.received_date is not null'; e_sql := e_sql|| where_sql||group_by_sql; OPEN rrc_rectyp FOR e_sql; LOOP FETCH rrc_rectyp INTO res_rec; EXIT WHEN rrc_rectyp%NOTFOUND; v_row.MONTHS := res_rec.MONTHS ; v_row.ORDERBY_MONTHS := res_rec.ORDERBY_MONTHS ; v_row.REQ_RECEIVED := res_rec.REQ_RECEIVED; v_row.REQ_STILL_OPEN := res_rec.REQ_STILL_OPEN; v_row.REQ_AWAIT_ACCEPTANCE := res_rec.REQ_AWAIT_ACCEPTANCE; v_row.REQ_WITH_ATT := res_rec.REQ_WITH_ATT; v_row.REQ_CLOSED := res_rec.REQ_CLOSED; v_row.REQ_CANCELLED := res_rec.REQ_CANCELLED; pipe ROW(v_row); END LOOP; RETURN; END monthly_analysis; And would also appreciate if someone can let me know as to what are the important plsql concepts used here so that I can go ahead and understand them in a better way and some small explanation would go long way. As suggested by dcp, i am trying to use debugger, again I have not used it before and so pardon me, here is what am getting: DECLARE REGION_ID_P VARCHAR2(200); COUNTRY_ID_P VARCHAR2(200); SUB_REGION_ID_P VARCHAR2(200); CUSTOMER_TYPE_ID_P VARCHAR2(200); RECEIVED_FROM_DATE_P VARCHAR2(200); RECEIVED_TO_DATE_P VARCHAR2(200); CUSTOMER_ID_P VARCHAR2(200); PRIORITY_ID_P VARCHAR2(200); WORK_GROUP_ID_P VARCHAR2(200); CITY_ID_P VARCHAR2(200); USER_ID_P VARCHAR2(200); v_Return GECEPDEV.AP_ANALYSIS_REPORT_TAB_TYPE; BEGIN REGION_ID_P := NULL; COUNTRY_ID_P := NULL; SUB_REGION_ID_P := NULL; CUSTOMER_TYPE_ID_P := NULL; RECEIVED_FROM_DATE_P := NULL; RECEIVED_TO_DATE_P := NULL; CUSTOMER_ID_P := NULL; PRIORITY_ID_P := NULL; WORK_GROUP_ID_P := NULL; CITY_ID_P := NULL; USER_ID_P := NULL; v_Return := ECEP_AP_REPORTS.MONTHLY_ANALYSIS( REGION_ID_P => REGION_ID_P, COUNTRY_ID_P => COUNTRY_ID_P, SUB_REGION_ID_P => SUB_REGION_ID_P, CUSTOMER_TYPE_ID_P => CUSTOMER_TYPE_ID_P, RECEIVED_FROM_DATE_P => RECEIVED_FROM_DATE_P, RECEIVED_TO_DATE_P => RECEIVED_TO_DATE_P, CUSTOMER_ID_P => CUSTOMER_ID_P, PRIORITY_ID_P => PRIORITY_ID_P, WORK_GROUP_ID_P => WORK_GROUP_ID_P, CITY_ID_P => CITY_ID_P, USER_ID_P => USER_ID_P ); -- Modify the code to output the variable -- DBMS_OUTPUT.PUT_LINE('v_Return = ' || v_Return); END; Can anyone guide me through this query and its goal ?

    Read the article

  • Node.js Adventure - When Node Flying in Wind

    - by Shaun
    In the first post of this series I mentioned some popular modules in the community, such as underscore, async, etc.. I also listed a module named “Wind (zh-CN)”, which is created by one of my friend, Jeff Zhao (zh-CN). Now I would like to use a separated post to introduce this module since I feel it brings a new async programming style in not only Node.js but JavaScript world. If you know or heard about the new feature in C# 5.0 called “async and await”, or you learnt F#, you will find the “Wind” brings the similar async programming experience in JavaScript. By using “Wind”, we can write async code that looks like the sync code. The callbacks, async stats and exceptions will be handled by “Wind” automatically and transparently.   What’s the Problem: Dense “Callback” Phobia Let’s firstly back to my second post in this series. As I mentioned in that post, when we wanted to read some records from SQL Server we need to open the database connection, and then execute the query. In Node.js all IO operation are designed as async callback pattern which means when the operation was done, it will invoke a function which was taken from the last parameter. For example the database connection opening code would be like this. 1: sql.open(connectionString, function(error, conn) { 2: if(error) { 3: // some error handling code 4: } 5: else { 6: // connection opened successfully 7: } 8: }); And then if we need to query the database the code would be like this. It nested in the previous function. 1: sql.open(connectionString, function(error, conn) { 2: if(error) { 3: // some error handling code 4: } 5: else { 6: // connection opened successfully 7: conn.queryRaw(command, function(error, results) { 8: if(error) { 9: // failed to execute this command 10: } 11: else { 12: // records retrieved successfully 13: } 14: }; 15: } 16: }); Assuming if we need to copy some data from this database to another then we need to open another connection and execute the command within the function under the query function. 1: sql.open(connectionString, function(error, conn) { 2: if(error) { 3: // some error handling code 4: } 5: else { 6: // connection opened successfully 7: conn.queryRaw(command, function(error, results) { 8: if(error) { 9: // failed to execute this command 10: } 11: else { 12: // records retrieved successfully 13: target.open(targetConnectionString, function(error, t_conn) { 14: if(error) { 15: // connect failed 16: } 17: else { 18: t_conn.queryRaw(copy_command, function(error, results) { 19: if(error) { 20: // copy failed 21: } 22: else { 23: // and then, what do you want to do now... 24: } 25: }; 26: } 27: }; 28: } 29: }; 30: } 31: }); This is just an example. In the real project the logic would be more complicated. This means our application might be messed up and the business process will be fragged by many callback functions. I would like call this “Dense Callback Phobia”. This might be a challenge how to make code straightforward and easy to read, something like below. 1: try 2: { 3: // open source connection 4: var s_conn = sqlConnect(s_connectionString); 5: // retrieve data 6: var results = sqlExecuteCommand(s_conn, s_command); 7: 8: // open target connection 9: var t_conn = sqlConnect(t_connectionString); 10: // prepare the copy command 11: var t_command = getCopyCommand(results); 12: // execute the copy command 13: sqlExecuteCommand(s_conn, t_command); 14: } 15: catch (ex) 16: { 17: // error handling 18: }   What’s the Problem: Sync-styled Async Programming Similar as the previous problem, the callback-styled async programming model makes the upcoming operation as a part of the current operation, and mixed with the error handling code. So it’s very hard to understand what on earth this code will do. And since Node.js utilizes non-blocking IO mode, we cannot invoke those operations one by one, as they will be executed concurrently. For example, in this post when I tried to copy the records from Windows Azure SQL Database (a.k.a. WASD) to Windows Azure Table Storage, if I just insert the data into table storage one by one and then print the “Finished” message, I will see the message shown before the data had been copied. This is because all operations were executed at the same time. In order to make the copy operation and print operation executed synchronously I introduced a module named “async” and the code was changed as below. 1: async.forEach(results.rows, 2: function (row, callback) { 3: var resource = { 4: "PartitionKey": row[1], 5: "RowKey": row[0], 6: "Value": row[2] 7: }; 8: client.insertEntity(tableName, resource, function (error) { 9: if (error) { 10: callback(error); 11: } 12: else { 13: console.log("entity inserted."); 14: callback(null); 15: } 16: }); 17: }, 18: function (error) { 19: if (error) { 20: error["target"] = "insertEntity"; 21: res.send(500, error); 22: } 23: else { 24: console.log("all done."); 25: res.send(200, "Done!"); 26: } 27: }); It ensured that the “Finished” message will be printed when all table entities had been inserted. But it cannot promise that the records will be inserted in sequence. It might be another challenge to make the code looks like in sync-style? 1: try 2: { 3: forEach(row in rows) { 4: var entity = { /* ... */ }; 5: tableClient.insert(tableName, entity); 6: } 7:  8: console.log("Finished"); 9: } 10: catch (ex) { 11: console.log(ex); 12: }   How “Wind” Helps “Wind” is a JavaScript library which provides the control flow with plain JavaScript for asynchronous programming (and more) without additional pre-compiling steps. It’s available in NPM so that we can install it through “npm install wind”. Now let’s create a very simple Node.js application as the example. This application will take some website URLs from the command arguments and tried to retrieve the body length and print them in console. Then at the end print “Finish”. I’m going to use “request” module to make the HTTP call simple so I also need to install by the command “npm install request”. The code would be like this. 1: var request = require("request"); 2:  3: // get the urls from arguments, the first two arguments are `node.exe` and `fetch.js` 4: var args = process.argv.splice(2); 5:  6: // main function 7: var main = function() { 8: for(var i = 0; i < args.length; i++) { 9: // get the url 10: var url = args[i]; 11: // send the http request and try to get the response and body 12: request(url, function(error, response, body) { 13: if(!error && response.statusCode == 200) { 14: // log the url and the body length 15: console.log( 16: "%s: %d.", 17: response.request.uri.href, 18: body.length); 19: } 20: else { 21: // log error 22: console.log(error); 23: } 24: }); 25: } 26: 27: // finished 28: console.log("Finished"); 29: }; 30:  31: // execute the main function 32: main(); Let’s execute this application. (I made them in multi-lines for better reading.) 1: node fetch.js 2: "http://www.igt.com/us-en.aspx" 3: "http://www.igt.com/us-en/games.aspx" 4: "http://www.igt.com/us-en/cabinets.aspx" 5: "http://www.igt.com/us-en/systems.aspx" 6: "http://www.igt.com/us-en/interactive.aspx" 7: "http://www.igt.com/us-en/social-gaming.aspx" 8: "http://www.igt.com/support.aspx" Below is the output. As you can see the finish message was printed at the beginning, and the pages’ length retrieved in a different order than we specified. This is because in this code the request command, console logging command are executed asynchronously and concurrently. Now let’s introduce “Wind” to make them executed in order, which means it will request the websites one by one, and print the message at the end.   First of all we need to import the “Wind” package and make sure the there’s only one global variant named “Wind”, and ensure it’s “Wind” instead of “wind”. 1: var Wind = require("wind");   Next, we need to tell “Wind” which code will be executed asynchronously so that “Wind” can control the execution process. In this case the “request” operation executed asynchronously so we will create a “Task” by using a build-in helps function in “Wind” named Wind.Async.Task.create. 1: var requestBodyLengthAsync = function(url) { 2: return Wind.Async.Task.create(function(t) { 3: request(url, function(error, response, body) { 4: if(error || response.statusCode != 200) { 5: t.complete("failure", error); 6: } 7: else { 8: var data = 9: { 10: uri: response.request.uri.href, 11: length: body.length 12: }; 13: t.complete("success", data); 14: } 15: }); 16: }); 17: }; The code above created a “Task” from the original request calling code. In “Wind” a “Task” means an operation will be finished in some time in the future. A “Task” can be started by invoke its start() method, but no one knows when it actually will be finished. The Wind.Async.Task.create helped us to create a task. The only parameter is a function where we can put the actual operation in, and then notify the task object it’s finished successfully or failed by using the complete() method. In the code above I invoked the request method. If it retrieved the response successfully I set the status of this task as “success” with the URL and body length. If it failed I set this task as “failure” and pass the error out.   Next, we will change the main() function. In “Wind” if we want a function can be controlled by Wind we need to mark it as “async”. This should be done by using the code below. 1: var main = eval(Wind.compile("async", function() { 2: })); When the application is running, Wind will detect “eval(Wind.compile(“async”, function” and generate an anonymous code from the body of this original function. Then the application will run the anonymous code instead of the original one. In our example the main function will be like this. 1: var main = eval(Wind.compile("async", function() { 2: for(var i = 0; i < args.length; i++) { 3: try 4: { 5: var result = $await(requestBodyLengthAsync(args[i])); 6: console.log( 7: "%s: %d.", 8: result.uri, 9: result.length); 10: } 11: catch (ex) { 12: console.log(ex); 13: } 14: } 15: 16: console.log("Finished"); 17: })); As you can see, when I tried to request the URL I use a new command named “$await”. It tells Wind, the operation next to $await will be executed asynchronously, and the main thread should be paused until it finished (or failed). So in this case, my application will be pause when the first response was received, and then print its body length, then try the next one. At the end, print the finish message.   Finally, execute the main function. The full code would be like this. 1: var request = require("request"); 2: var Wind = require("wind"); 3:  4: var args = process.argv.splice(2); 5:  6: var requestBodyLengthAsync = function(url) { 7: return Wind.Async.Task.create(function(t) { 8: request(url, function(error, response, body) { 9: if(error || response.statusCode != 200) { 10: t.complete("failure", error); 11: } 12: else { 13: var data = 14: { 15: uri: response.request.uri.href, 16: length: body.length 17: }; 18: t.complete("success", data); 19: } 20: }); 21: }); 22: }; 23:  24: var main = eval(Wind.compile("async", function() { 25: for(var i = 0; i < args.length; i++) { 26: try 27: { 28: var result = $await(requestBodyLengthAsync(args[i])); 29: console.log( 30: "%s: %d.", 31: result.uri, 32: result.length); 33: } 34: catch (ex) { 35: console.log(ex); 36: } 37: } 38: 39: console.log("Finished"); 40: })); 41:  42: main().start();   Run our new application. At the beginning we will see the compiled and generated code by Wind. Then we can see the pages were requested one by one, and at the end the finish message was printed. Below is the code Wind generated for us. As you can see the original code, the output code were shown. 1: // Original: 2: function () { 3: for(var i = 0; i < args.length; i++) { 4: try 5: { 6: var result = $await(requestBodyLengthAsync(args[i])); 7: console.log( 8: "%s: %d.", 9: result.uri, 10: result.length); 11: } 12: catch (ex) { 13: console.log(ex); 14: } 15: } 16: 17: console.log("Finished"); 18: } 19:  20: // Compiled: 21: /* async << function () { */ (function () { 22: var _builder_$0 = Wind.builders["async"]; 23: return _builder_$0.Start(this, 24: _builder_$0.Combine( 25: _builder_$0.Delay(function () { 26: /* var i = 0; */ var i = 0; 27: /* for ( */ return _builder_$0.For(function () { 28: /* ; i < args.length */ return i < args.length; 29: }, function () { 30: /* ; i ++) { */ i ++; 31: }, 32: /* try { */ _builder_$0.Try( 33: _builder_$0.Delay(function () { 34: /* var result = $await(requestBodyLengthAsync(args[i])); */ return _builder_$0.Bind(requestBodyLengthAsync(args[i]), function (result) { 35: /* console.log("%s: %d.", result.uri, result.length); */ console.log("%s: %d.", result.uri, result.length); 36: return _builder_$0.Normal(); 37: }); 38: }), 39: /* } catch (ex) { */ function (ex) { 40: /* console.log(ex); */ console.log(ex); 41: return _builder_$0.Normal(); 42: /* } */ }, 43: null 44: ) 45: /* } */ ); 46: }), 47: _builder_$0.Delay(function () { 48: /* console.log("Finished"); */ console.log("Finished"); 49: return _builder_$0.Normal(); 50: }) 51: ) 52: ); 53: /* } */ })   How Wind Works Someone may raise a big concern when you find I utilized “eval” in my code. Someone may assume that Wind utilizes “eval” to execute some code dynamically while “eval” is very low performance. But I would say, Wind does NOT use “eval” to run the code. It only use “eval” as a flag to know which code should be compiled at runtime. When the code was firstly been executed, Wind will check and find “eval(Wind.compile(“async”, function”. So that it knows this function should be compiled. Then it utilized parse-js to analyze the inner JavaScript and generated the anonymous code in memory. Then it rewrite the original code so that when the application was running it will use the anonymous one instead of the original one. Since the code generation was done at the beginning of the application was started, in the future no matter how long our application runs and how many times the async function was invoked, it will use the generated code, no need to generate again. So there’s no significant performance hurt when using Wind.   Wind in My Previous Demo Let’s adopt Wind into one of my previous demonstration and to see how it helps us to make our code simple, straightforward and easy to read and understand. In this post when I implemented the functionality that copied the records from my WASD to table storage, the logic would be like this. 1, Open database connection. 2, Execute a query to select all records from the table. 3, Recreate the table in Windows Azure table storage. 4, Create entities from each of the records retrieved previously, and then insert them into table storage. 5, Finally, show message as the HTTP response. But as the image below, since there are so many callbacks and async operations, it’s very hard to understand my logic from the code. Now let’s use Wind to rewrite our code. First of all, of course, we need the Wind package. Then we need to include the package files into project and mark them as “Copy always”. Add the Wind package into the source code. Pay attention to the variant name, you must use “Wind” instead of “wind”. 1: var express = require("express"); 2: var async = require("async"); 3: var sql = require("node-sqlserver"); 4: var azure = require("azure"); 5: var Wind = require("wind"); Now we need to create some async functions by using Wind. All async functions should be wrapped so that it can be controlled by Wind which are open database, retrieve records, recreate table (delete and create) and insert entity in table. Below are these new functions. All of them are created by using Wind.Async.Task.create. 1: sql.openAsync = function (connectionString) { 2: return Wind.Async.Task.create(function (t) { 3: sql.open(connectionString, function (error, conn) { 4: if (error) { 5: t.complete("failure", error); 6: } 7: else { 8: t.complete("success", conn); 9: } 10: }); 11: }); 12: }; 13:  14: sql.queryAsync = function (conn, query) { 15: return Wind.Async.Task.create(function (t) { 16: conn.queryRaw(query, function (error, results) { 17: if (error) { 18: t.complete("failure", error); 19: } 20: else { 21: t.complete("success", results); 22: } 23: }); 24: }); 25: }; 26:  27: azure.recreateTableAsync = function (tableName) { 28: return Wind.Async.Task.create(function (t) { 29: client.deleteTable(tableName, function (error, successful, response) { 30: console.log("delete table finished"); 31: client.createTableIfNotExists(tableName, function (error, successful, response) { 32: console.log("create table finished"); 33: if (error) { 34: t.complete("failure", error); 35: } 36: else { 37: t.complete("success", null); 38: } 39: }); 40: }); 41: }); 42: }; 43:  44: azure.insertEntityAsync = function (tableName, entity) { 45: return Wind.Async.Task.create(function (t) { 46: client.insertEntity(tableName, entity, function (error, entity, response) { 47: if (error) { 48: t.complete("failure", error); 49: } 50: else { 51: t.complete("success", null); 52: } 53: }); 54: }); 55: }; Then in order to use these functions we will create a new function which contains all steps for data copying. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: } 4: catch (ex) { 5: console.log(ex); 6: res.send(500, "Internal error."); 7: } 8: })); Let’s execute steps one by one with the “$await” keyword introduced by Wind so that it will be invoked in sequence. First is to open the database connection. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: } 7: catch (ex) { 8: console.log(ex); 9: res.send(500, "Internal error."); 10: } 11: })); Then retrieve all records from the database connection. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: } 10: catch (ex) { 11: console.log(ex); 12: res.send(500, "Internal error."); 13: } 14: })); After recreated the table, we need to create the entities and insert them into table storage. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: if (results.rows.length > 0) { 10: // recreate the table 11: $await(azure.recreateTableAsync(tableName)); 12: console.log("table created"); 13: // insert records in table storage one by one 14: for (var i = 0; i < results.rows.length; i++) { 15: var entity = { 16: "PartitionKey": results.rows[i][1], 17: "RowKey": results.rows[i][0], 18: "Value": results.rows[i][2] 19: }; 20: $await(azure.insertEntityAsync(tableName, entity)); 21: console.log("entity inserted"); 22: } 23: } 24: } 25: catch (ex) { 26: console.log(ex); 27: res.send(500, "Internal error."); 28: } 29: })); Finally, send response back to the browser. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: if (results.rows.length > 0) { 10: // recreate the table 11: $await(azure.recreateTableAsync(tableName)); 12: console.log("table created"); 13: // insert records in table storage one by one 14: for (var i = 0; i < results.rows.length; i++) { 15: var entity = { 16: "PartitionKey": results.rows[i][1], 17: "RowKey": results.rows[i][0], 18: "Value": results.rows[i][2] 19: }; 20: $await(azure.insertEntityAsync(tableName, entity)); 21: console.log("entity inserted"); 22: } 23: // send response 24: console.log("all done"); 25: res.send(200, "All done!"); 26: } 27: } 28: catch (ex) { 29: console.log(ex); 30: res.send(500, "Internal error."); 31: } 32: })); If we compared with the previous code we will find now it became more readable and much easy to understand. It’s very easy to know what this function does even though without any comments. When user go to URL “/was/copyRecords” we will execute the function above. The code would be like this. 1: app.get("/was/copyRecords", function (req, res) { 2: copyRecords(req, res).start(); 3: }); And below is the logs printed in local compute emulator console. As we can see the functions executed one by one and then finally the response back to me browser.   Scaffold Functions in Wind Wind provides not only the async flow control and compile functions, but many scaffold methods as well. We can build our async code more easily by using them. I’m going to introduce some basic scaffold functions here. In the code above I created some functions which wrapped from the original async function such as open database, create table, etc.. All of them are very similar, created a task by using Wind.Async.Task.create, return error or result object through Task.complete function. In fact, Wind provides some functions for us to create task object from the original async functions. If the original async function only has a callback parameter, we can use Wind.Async.Binding.fromCallback method to get the task object directly. For example the code below returned the task object which wrapped the file exist check function. 1: var Wind = require("wind"); 2: var fs = require("fs"); 3:  4: fs.existsAsync = Wind.Async.Binding.fromCallback(fs.exists); In Node.js a very popular async function pattern is that, the first parameter in the callback function represent the error object, and the other parameters is the return values. In this case we can use another build-in function in Wind named Wind.Async.Binding.fromStandard. For example, the open database function can be created from the code below. 1: sql.openAsync = Wind.Async.Binding.fromStandard(sql.open); 2:  3: /* 4: sql.openAsync = function (connectionString) { 5: return Wind.Async.Task.create(function (t) { 6: sql.open(connectionString, function (error, conn) { 7: if (error) { 8: t.complete("failure", error); 9: } 10: else { 11: t.complete("success", conn); 12: } 13: }); 14: }); 15: }; 16: */ When I was testing the scaffold functions under Wind.Async.Binding I found for some functions, such as the Azure SDK insert entity function, cannot be processed correctly. So I personally suggest writing the wrapped method manually.   Another scaffold method in Wind is the parallel tasks coordination. In this example, the steps of open database, retrieve records and recreated table should be invoked one by one, but it can be executed in parallel when copying data from database to table storage. In Wind there’s a scaffold function named Task.whenAll which can be used here. Task.whenAll accepts a list of tasks and creates a new task. It will be returned only when all tasks had been completed, or any errors occurred. For example in the code below I used the Task.whenAll to make all copy operation executed at the same time. 1: var copyRecordsInParallel = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: if (results.rows.length > 0) { 10: // recreate the table 11: $await(azure.recreateTableAsync(tableName)); 12: console.log("table created"); 13: // insert records in table storage in parallal 14: var tasks = new Array(results.rows.length); 15: for (var i = 0; i < results.rows.length; i++) { 16: var entity = { 17: "PartitionKey": results.rows[i][1], 18: "RowKey": results.rows[i][0], 19: "Value": results.rows[i][2] 20: }; 21: tasks[i] = azure.insertEntityAsync(tableName, entity); 22: } 23: $await(Wind.Async.Task.whenAll(tasks)); 24: // send response 25: console.log("all done"); 26: res.send(200, "All done!"); 27: } 28: } 29: catch (ex) { 30: console.log(ex); 31: res.send(500, "Internal error."); 32: } 33: })); 34:  35: app.get("/was/copyRecordsInParallel", function (req, res) { 36: copyRecordsInParallel(req, res).start(); 37: });   Besides the task creation and coordination, Wind supports the cancellation solution so that we can send the cancellation signal to the tasks. It also includes exception solution which means any exceptions will be reported to the caller function.   Summary In this post I introduced a Node.js module named Wind, which created by my friend Jeff Zhao. As you can see, different from other async library and framework, adopted the idea from F# and C#, Wind utilizes runtime code generation technology to make it more easily to write async, callback-based functions in a sync-style way. By using Wind there will be almost no callback, and the code will be very easy to understand. Currently Wind is still under developed and improved. There might be some problems but the author, Jeff, should be very happy and enthusiastic to learn your problems, feedback, suggestion and comments. You can contact Jeff by - Email: [email protected] - Group: https://groups.google.com/d/forum/windjs - GitHub: https://github.com/JeffreyZhao/wind/issues   Source code can be download here.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

< Previous Page | 54 55 56 57 58 59 60 61 62 63 64 65  | Next Page >