Search Results

Search found 4685 results on 188 pages for 'proper'.


  • PCI configuration method error (Linux Kernel)

    - by user326580
    (I'm not sure whether this is the best place for this question, so I'd be glad if anyone can suggest a more suitable forum.) I'm trying to install Ubuntu 12.04.4 on a netbook (from a USB drive), but the kernel stops very early in the initialization process. After two days of research, I've found that it boots with the parameter pci=conf2 but not with the default conf1 method. However, after the kernel boots, it seems that Ubuntu can't find any USB devices, so I'm not able to install it. Trying Debian with its graphical installer, I found that the mouse wasn't working either, so I think PCI devices in general are not working. I've tried about 50% of the kernel PCI boot options listed in the kernel-parameters file (in conjunction with the implicit default conf1) without luck. Any suggestions? PS: The problem is the same with kernel 2.6 or 3.
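    For anyone hitting the same wall, a parameter like this is usually passed by editing the kernel line at the installer's boot menu. A minimal sketch, assuming a stock Ubuntu live/installer menu (the exact key and kernel line vary by version):

        # Highlight the boot entry, press F6 (or Tab / 'e', depending on the menu)
        # to edit the kernel command line, then append the PCI access-method override:
        linux /casper/vmlinuz boot=casper quiet splash pci=conf2 --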

    Read the article

  • Hybrid Exchange Online setup with on premise public folders, certificate issues?

    - by exxoid
    We have a hybrid Exchange setup with Exchange Online (v15 tenant) and Exchange 2010 on premise. The hybrid configuration is for the most part working; what I am having an issue with is getting public folders to work for cloud users. I followed the official documentation here (http://technet.microsoft.com/en-us/library/dn249373(v=exchg.150).aspx) and it kind of works. When I access Outlook on public WiFi, I can bring up the cloud mailboxes, and the on-premise public folders show up in Outlook. When a cloud user accesses email via Outlook on the same LAN as the on-premise Exchange, Outlook makes the outlook.com connection for the live/AD/archive mailbox but fails to create a proxy connection for the on-premise public folders. The error I get is a certificate mismatch; it seems that when a user on the LAN accesses Outlook/Exchange, it uses a different certificate than when Outlook is launched on a WiFi network. When I look at the Outlook connection information, I see the connection to outlook.com for the AD/live/archive mailbox but no entry for the public folder connection. Our on-premise Exchange is 2010 SP3 with the latest CUs. The client is a domain-joined laptop with Windows 7 and Office 2010 SP2, with the latest Windows updates applied. Our infrastructure has a working ADFS 3 and DirSync setup for Office 365. My question, then, is: what do I need to do to make sure that the cloud user launching Outlook on the LAN uses the proper certificate (the wildcard 3rd-party cert vs. the self-signed certificate, which it looks like it may be using during the connection attempt)?
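    One way to confirm which certificate each path actually presents is to pull it straight off the endpoint; a diagnostic sketch, with the host name as a placeholder for your internal and external Exchange names:

        # Show the subject/issuer of the certificate served on port 443:
        openssl s_client -connect mail.somedomain.net:443 -servername mail.somedomain.net </dev/null 2>/dev/null |
            openssl x509 -noout -subject -issuer

    Running this against the name Outlook resolves on the LAN versus the name it resolves externally should show whether the self-signed certificate is the one being offered internally.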

    Read the article

  • iPhone 3G can't see any WiFi networks, ever

    - by torbengb
    I've got WiFi turned on in the settings, and "ask before connect" turned off. My iPhone 3G should see several WiFi networks, but it lists none. It also does not list my own network, which my computers see just fine. It worked earlier but stopped working recently (possibly because of, or at the same time as, other trouble, which a restore solved). The iPhone is not jailbroken. The SSID is not hidden and uses WPA2. It also finds no WiFi networks when I'm at a friend's house; his iPhone 2G sees several WiFi networks there, including his own. When I use the manual entry method, specifying my home SSID and the proper WPA2 passkey, then click Join, the iPhone says it couldn't find that network. Same at my friend's place, with his SSID and passkey. I've just backed up my iPhone, then restored it, to see if refreshing the firmware would help. It didn't change anything. Is my iPhone broken? How can I fix this?

    Read the article

  • Configuring SQL Server Express 2005

    - by MrTognio
    What's the proper way to configure SQL Server Express 2005 so that a number of clients can connect to the server? I have my application running on both the server machine and the client machines. Given the nature of my application, the clients are branches that are geographically distant from each other and from the server itself. Every operation a client records must be reported to the server, because the server needs total control over usage and production. But what should I consider when configuring the connection on both sides, the server and the client? I'm not very familiar with SQL Server (I'm a beginner); however, through SQL Server Configuration Manager I have set the main options, without success. The problem seems to be related to trusted connections, even though I have set the server to support both Windows and SQL Server authentication. When the client tries to connect to the server using Windows authentication, it displays no tables; when it tries to communicate using a password (SQL Server authentication), tables are successfully displayed but no access is allowed... Thanks in advance!
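    A quick way to test each authentication mode from a client machine, assuming the SQL command-line tools are installed (the server name, instance name, and credentials below are placeholders):

        rem Test a SQL Server (mixed-mode) login over TCP:
        sqlcmd -S tcp:SERVERNAME\SQLEXPRESS -U appuser -P secret -Q "SELECT @@SERVERNAME"
        rem Test a Windows-authenticated (trusted) connection:
        sqlcmd -S tcp:SERVERNAME\SQLEXPRESS -E -Q "SELECT @@SERVERNAME"

    If the second command fails while the first succeeds, the problem is likely in how the clients' Windows accounts are trusted rather than in the network configuration.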

    Read the article

  • Over gigabit connection, Teracopy does 31MB/s, but Windows 8 does it at ~109MB per second?

    - by Gaurang
    I got my brain-melting first taste of gigabit networking today, between my 2011 Mac Mini and Windows 8 Pro desktop, connected via Cat 5e to a Linksys WRT320N (running DD-WRT). After making sure that the line speed on both systems showed 1 Gbps, I proceeded to copy a 2.4GB MP4 from the Mini to the Win 8 desktop (SMB sharing). Although satisfied with the 30-34 MB/s that Teracopy was showing (a proper step-up for me from 10 MB/s), I was still curious about this massive difference between the advertised and real-world speed. Two hours of Google had me believing that there were other factors that resulted in lower speed, SMB being one. So, just for the sake of doing it, I iPerf'd both systems, and guess what that showed: around 875 Mbps on both systems! I then stumbled upon this little piece of info, after which I turned off Teracopy and copied the same file through Windows 8's regular copier. 109 MB/s. Molten brains :) What exactly is causing this? And can I enable such speeds via Teracopy? I really dig the extra features that Teracopy has, and will surely miss them now :D
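    For anyone wanting to repeat the raw-throughput check described above, the classic two-machine iperf test looks like this (the IP is a placeholder for the listening machine):

        iperf -s                 # run on one machine (server mode)
        iperf -c 192.168.1.100   # run on the other; reports achievable TCP throughput

    Around 875 Mbps here means the wire and switch are fine, so the remaining gap is in the file-copy path.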

    Read the article

  • Kerberos issues after new server of same name joined to domain

    - by MentalBlock
    Environment: Windows Server 2012, 2 domain controllers, 1 domain. A server called Sharepoint1 was joined to the domain (running SharePoint 2013 using NTLM). A fresh install of Sharepoint1 (OS and SharePoint) was then performed, set up for Kerberos, and joined to the domain using the same name. Two SPNs were added, HTTP/sharepoint1 and HTTP/sharepoint1.somedomain.net, for the account SPFarm. Active Directory shows a single, non-duplicate computer account with a create date from the first server and a modify date from the second server's creation. A separate server, also on the domain, has the server added to All Servers in Server Manager. This server shows a local error in the events exactly like this one from TechNet (Kerberos error 4 - KRB_AP_ERR_MODIFIED). Question: Can someone help me understand whether the problem is:
    1. The computer account is still the old account, causing a Kerberos ticket mismatch (granted, some housekeeping in AD might have prevented this)
    2. (In my limited understanding of Kerberos and SPNs) that the SPFarm account used for the SPNs is somehow mismatched with the HTTP calls made by the remote server management tools services in Windows Server 2012
    3. Something completely different?
    I am leaning towards the first one, since I tested the same SPNs on another server and it didn't seem to cause the same issue. If this is the case, can it be easily and safely repaired? Is there a proper way to either reset the account or, better yet, delete and re-add it? Although it sounds simple enough with some PowerShell or clicking around in AD Users and Computers, I am uncertain what impact this might have on an existing server, particularly one running SharePoint. What is the safest and simplest way to proceed? Thanks!
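    A couple of standard diagnostics that can narrow this down (account and host names below are placeholders):

        rem Search the forest for duplicate SPNs:
        setspn -X
        rem List the SPNs registered on the computer account and on the farm account:
        setspn -L SHAREPOINT1
        setspn -L somedomain\SPFarm
        rem On the server logging KRB_AP_ERR_MODIFIED, purge cached tickets and retry:
        klist purge

    KRB_AP_ERR_MODIFIED usually means the service decrypted the ticket with a different key than the KDC used to issue it, which would fit either of the first two theories.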

    Read the article

  • Microsoft Server 2003 Explorer shows duplicate local shares

    - by user52167
    Hi folks, I am new here and I could really use some advice, please. I am having a problem with our file server. When I try to browse the shared folders using Explorer, several of the shared folders all appear to have the same name. Whenever I attempt to rename one of the affected folders, the names of all the affected folders also change. Our file server is Windows Server 2003 R2, and I am logged on directly to the server using Remote Desktop. When I open the folder, all is as it should be: the proper content is there and the address bar displays the correct folder name and path. The share names are correct, so everything that needs to access the shared folders/files can do so. Also, when I browse to the folder using the command line, all is as it should be there too. The only issue seems to be the incorrect display name when browsing using Explorer. Can anyone offer any advice or help as to how to resolve this issue, please? It would be most appreciated. Thanks
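    For comparison while troubleshooting, the server's real share names and paths can be listed without Explorer (built-in command, run on the server):

        net share

    If that output is correct, as described above, the problem is likely confined to how Explorer renders the folder names rather than to the shares themselves.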

    Read the article

  • Free blog sites where the blogs can (if the template does) validate as XHTML Strict 1.0?

    - by Deleted
    I'm looking for a site where I can register a free blog and have the blog validate as XHTML 1.0 Strict. On the surface this may seem like a trivial problem related only to the theme/template in use, but unfortunately that's not the case. One example of a provider which can't fulfill this requirement is Blogger. Although the blog pages there present themselves as XHTML 1.0 Strict, it is impossible to actually comply with the requirements inherited by that markup type (the XHTML generated by Blogger makes the page as a whole invalid). I've sent a mail to Tumblr to see if it is possible with them, but so far my reply consists of them having forwarded my mail with a "suggestion" to the development department. I don't know if we had a communication error or if I'm actually going to receive a proper answer later; time will tell. I haven't had time to investigate Tumblr myself, so they may very well be the solution to this problem. To sum things up, I'm looking for any provider:
    1. Of a free blog.
    2. Where the blog has the capability to validate as XHTML 1.0 Strict. By "capability" I mean that the system shouldn't get in the way of creating/using a theme which complies with XHTML 1.0 Strict.
    3. That preferably is large, or at least likely to stay around for a couple of years to come. But I'm willing to take my chances if none of the established providers are up to the task.
    Thank you for reading! I hope you know of a provider which would be suitable, preferably with proof by linking to a blog there which validates. I'm not looking for suggestions to look into, as there are far too many to investigate and far too little time. If you know of something for sure, I'd be very happy to hear about it.
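    A quick screening test for candidate providers can be done from the command line before bothering with a full validation (the URL is a placeholder):

        # Check the MIME type a candidate blog is actually served with:
        curl -sI http://someblog.example.com/ | grep -i '^content-type'

    The page can then be run through the W3C validator to check the markup itself; both the served type and the generated markup have to cooperate for the Strict doctype to hold.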

    Read the article

  • Mail server checklist

    - by Jeff
    We recently ran into some issues with our mail server setup. I'm preparing a list of actions that we should enforce and use in order to maintain a proper email solution within our company. We have around 80 Exchange users, and send mass emails out almost on a monthly basis to 20,000+ customers each time. The checklist I currently have:
    1) McAfee MXLogic 'cloud' anti-spam functionality for incoming messages.
    2) Antivirus on each computer in the company.
    3) Antivirus on the Exchange and DNS servers.
    4) Set up an SPF record.
    5) Set up DKIM.
    6) Set up DomainKeys.
    7) Set up Sender ID.
    8) Submit the SPF record to Microsoft, Yahoo, etc. for 'whitelist' purposes.
    9) Configure size limits for messages in Exchange to safe numbers.
    10) I have 2 outside IPs for my email server; in case one gets blacklisted, switch to the backup.
    11) My internet site rests on a different IP than the mail server.
    12) All mass emails for the company are sent through a 3rd-party company (listtrak.com).
    13) Set up domain aliases (media, enews, and bounce) for the 3rd-party mass mail software.
    14) Verify the setup using [email protected]
    15) Configure group policy and our opendns.org account to prevent unwanted actions and website viewing.
    Mass emails:
    1) Schedule them to send different amounts at different times (1,000 at 10am, 1,000 at 4pm, 1,000 at 10am the next day).
    2) Set up user preferences; let recipients decide what they want to receive (their interests).
    3) Send a steadier flow of email, maybe 100 a week with top new products, instead of 20,000 every other month.
    If anyone has suggestions or additions/subtractions to this checklist, they are greatly appreciated; a quick way to verify the DNS records in items 4-7 is sketched below. Thank you.
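    Once the DNS records are published, they can be verified with dig (example.com and the selector are placeholders):

        dig +short TXT example.com
        # expect the SPF record, e.g. "v=spf1 ip4:203.0.113.10 include:spf.example.net ~all"
        dig +short TXT selector._domainkey.example.com
        # expect the DKIM public-key record for whichever selector signs your mail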

    Read the article

  • Allow members of a group to be unlocked by a specific account on AD

    - by JohnLBevan
    Background: I'm creating a service to allow support staff to enable their firecall accounts out of hours (i.e. if there's an issue in the night and we can't get hold of someone with admin rights, another member of the support team can enable their personal firecall account in AD, which has previously been set up with admin rights). This service also logs a reason for the change, alerts key people, and does a bunch of other bits to ensure that this change of access is audited and that these temporary admin rights are used in the proper way. To do this, the service account which my service runs under needs permission to enable users in Active Directory. Ideally I'd like to lock this down so that the service account can only enable/disable users in a particular AD security group. Question: How do you grant an account access to enable/disable users who are members of a particular security group in AD? Backup question: If it's not possible to do this by security group, is there a suitable alternative? i.e. could it be done by OU, or would it be best to write a script to loop through all members of the security group and update the permissions on the objects (firecall accounts) themselves? Thanks in advance. Additional tags (I don't yet have access to create new tags here, so listing them below to help with keyword searches until it can be tagged and this bit edited/removed): DSACLS, DSACLS.EXE, FIRECALL, ACCOUNT, SECURITY-GROUP
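    If the OU route turns out to be the workable one, a delegation sketch with DSACLS might look like the following (OU, domain, and account names are placeholders; enabling/disabling a user is a write to its userAccountControl attribute, and this should be tested in a lab first):

        rem Grant the service account write access to userAccountControl
        rem on user objects under the firecall OU (inherited to sub-objects):
        dsacls "OU=Firecall,DC=somedomain,DC=net" /I:S /G "SOMEDOMAIN\svc-firecall:WP;userAccountControl;user"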

    Read the article

  • Need a helpful/managed VPS to help transition from shared hosting

    - by Xeoncross
    I am looking for a VPS that can help me transition out of a shared hosting environment. My main OS is Ubuntu, although I am still new to the Linux world. I spend most of my day programming PHP applications using a git-over-SSH workflow. I want PHP, SSH, git, MySQL/PostgreSQL and Apache to work well. Someday, after I figure out server management, I'll move on to http://nginx.org/ or something. I don't really understand 1) Linux firewalls, 2) mail servers, or 3) a proper daily package/lib update flow. I need a host that can help with these so I don't get hit with a security hole. (I monitor Apache access logs, so I think I can take it from there.) I want to know if there is a sub-$50/m VPS that can help me learn (or do for me) these three main things I need to run a server. I can't leave my shared hosts (the plural shows my need!) until I am sure my sites will be safe despite my incompetence. To clarify again: I need the most helpful, supportive, walk-me-through, check-up-on-me, be-there-when-I-need-you VPS I can get. Learning isn't a problem when there is someone to turn to. ;)
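    Whatever host is chosen, point 3 (and part of point 1) has a common Ubuntu starting point; a sketch, not a substitute for a managed provider:

        sudo apt-get update && sudo apt-get upgrade      # the manual package/lib update flow
        sudo apt-get install unattended-upgrades
        sudo dpkg-reconfigure -plow unattended-upgrades  # turn on automatic security updates
        sudo ufw allow OpenSSH
        sudo ufw enable                                  # minimal default-deny firewall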

    Read the article

  • Windows mounted network drives slow after upgrading switch

    - by Kver
    On our small business network, our old 10/100 consumer-grade switch gave up the ghost, and we replaced it with a proper business-grade gigabit switch. After wiring it in, our Linux and Mac users immediately got back to working off the network drives, but two of our three Windows 7 PCs have suddenly experienced a tremendous slowdown with mapped network drives: Windows becomes stuck "discovering" a folder, causing applications to freeze when trying to open files. It will instantly display and browse files, but the moment you try to open one, the bug hits. To work around this we have our users copying files to the desktop, but it can take a few minutes while Windows is stuck "calculating" the time the copy will take. These aren't big files, mostly Excel sheets of less than 500KB, and these operations are instant on Linux and Mac. (The third Windows machine is having no issues.) I've tried remapping the drives, mapping to different drive letters, rebooting, etc. I'm at a loss, because switches are mostly transparent, and it's only after the switch was replaced that the Windows PCs started acting up. What black-magic voodoo am I missing to make Windows work? Thank you.
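    One experiment that is often suggested for Windows 7 SMB stalls is disabling TCP auto-tuning and receive-side scaling, which some gigabit NIC/driver combinations are reported to mishandle; a hedged, reversible sketch (elevated command prompt):

        netsh interface tcp set global autotuninglevel=disabled
        netsh interface tcp set global rss=disabled
        rem revert later with:
        rem netsh interface tcp set global autotuninglevel=normal
        rem netsh interface tcp set global rss=enabled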

    Read the article

  • Windows 7 not booting after failed SRT (SSD caching) install

    - by david
    This is a fairly new computer, only about a month old: i7 2700K, Z68 motherboard, with a 1.5TB WD Black HD and a 128GB Crucial M4 SSD. I followed the instructions for setting up SSD caching: the SATA controller was set to RAID, I installed the Intel software and enabled acceleration, and it said everything went fine. But when I went to reboot, I received the lovely "Reboot and Select proper Boot device" error message. I checked the BIOS, and it was booting from the correct HD (I tried the only other option anyway just in case; it was the ~50-odd GB of unformatted space left on the SSD). After that I entered the RAID utility (Ctrl-I at boot), removed the acceleration, and deleted the RAID array (because it was being used as a cache, this was non-destructive). Still no boot. So I reinstalled Win7 directly on the SSD, booted, and checked the HDD to make sure it hadn't been wiped. It hadn't; all the files were still there, including all the Windows stuff. I backed up my data to an external drive just in case, but I'd really like to get this install booting again. I trawled the web a bit, and have tried entering recovery mode and using bootrec.exe and bootsect.exe to fix it, but to be honest I'm not sure what I'm doing with those. My question is basically: how do I make my hard drive bootable again?
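    For reference, the usual bootrec sequence from the Windows 7 recovery console looks like this (a sketch; the drive letter seen inside the recovery environment may differ from C:):

        bootrec /fixmbr
        bootrec /fixboot
        bootrec /rebuildbcd
        rem if the BCD still points at the wrong installation, recreate the boot files:
        bcdboot C:\Windows /s C: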

    Read the article

  • Converting Apache rewrite rules to nginx for XenForo

    - by nick
    Hi all, I am migrating some forums from vBulletin 3.8.x to XenForo, and trying to keep my old link structure alive. Basically, XF provides some PHP files that I can redirect the old URL style to, and they handle the proper 301 redirection. Regardless of that end, I am having difficulty rewriting the rules, which I can only find defined in Apache's rewrite style:
    RewriteRule [\d]+-[^/]+/.+-([\d]+)/([\d]+)/ showthread.php?t=$1&page=$2 [NC,L]
    RewriteRule [\d]+-[^/]+/.+-([\d]+)/ showthread.php?t=$1 [NC,L]
    RewriteRule ([\d]+)-[^/]+/([\d]+)/ forumdisplay.php?f=$1&page=$2 [NC,L]
    RewriteRule ([\d]+)-[^/]+/ forumdisplay.php?f=$1 [NC,L]
    I have been experimenting and thought this should work, but obviously it doesn't:
    if (!-e $request_filename) {
        rewrite [0-9a-zA-Z\-]/[0-9a-zA-Z\-]-([0-9])/([0-9])/ /showthread.php?t=$1&page=$2 last;
        rewrite [0-9a-zA-Z\-]/[0-9a-zA-Z\-]-([0-9])/ /showthread.php?t=$1 last;
        rewrite ([0-9])-[0-9a-zA-Z\-]/([0-9])/ /forumdisplay.php?f=$1&page=$2 last;
        rewrite ([0-9])-[0-9a-zA-Z\-]/ /forumdisplay.php?f=$1 last;
        rewrite ^(.*)$ /index.php last;
    }
    Old vB showthread format: website.tld/233-website-issues-requests/wiki-down-73789/
    New XF showthread format: website.tld/threads/the-wiki-is-down.65509/
    Old vB forumdisplay format: website.tld/233-website-issues-requests/
    New XF forumdisplay format: website.tld/forums/website-issues-and-requests.253/
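    A hedged observation on the attempt above: the nginx patterns drop the + quantifiers that the Apache rules rely on, so each character class matches exactly one character instead of a run, and nginx matches against a URI that begins with a slash. A corrected sketch, mirroring the Apache rules directly:

        if (!-e $request_filename) {
            rewrite "^/[\d]+-[^/]+/.+-([\d]+)/([\d]+)/" /showthread.php?t=$1&page=$2 last;
            rewrite "^/[\d]+-[^/]+/.+-([\d]+)/" /showthread.php?t=$1 last;
            rewrite "^/([\d]+)-[^/]+/([\d]+)/" /forumdisplay.php?f=$1&page=$2 last;
            rewrite "^/([\d]+)-[^/]+/" /forumdisplay.php?f=$1 last;
            rewrite ^(.*)$ /index.php last;
        }

    nginx uses PCRE, so the [\d] classes carry over as-is; the Apache [NC] flag has no direct rewrite equivalent, but these patterns only match digits and literal separators, so dropping it should be harmless.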

    Read the article

  • Is it possible to command a common router without using the web interface?

    - by MDeSchaepmeester
    Some background: the internet arrangement in my student home is really weird. There is one ethernet outlet and several WiFi hotspots, and either way requires a login through a web site to get internet access. This is annoying, as each device needs to log in separately, and with a PS3, for example, it is impossible to get connected at all, since the web login procedure doesn't work. Therefore I have installed a D-Link DIR-635 router which is connected to the ethernet outlet. It has DHCP enabled, so it uses NAT, but whatever it is connected to also uses NAT, and I've read this should not work. A fellow student tried it with an Apple AirPort, but that keeps giving errors related to NAT behind NAT. Anyway, my setup does work, so bonus points if you can clarify this. I need to log in to the web site I mentioned earlier with any one device, after which all devices in my LAN have connectivity. This is great. Except... In short: from time to time, I lose internet connectivity, and my D-Link DIR-635 router needs to do a DHCP renew. I can do this via the web interface, but my life would be easier if I could just run a cmd file which tells my router to do this without all the hassle. It would set up a connection to my router and execute the proper command. I have tried googling but couldn't find much helpful stuff.
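    Most consumer routers perform actions like this through plain HTTP requests from their admin pages, so a script can often replay the request. A purely hypothetical sketch (the path and credentials below are invented; the real request has to be captured by watching the browser's traffic to the DIR-635 admin page):

        rem hypothetical - capture the real URL/form fields with the browser's dev tools first
        curl -s -u admin:PASSWORD "http://192.168.0.1/dhcp_renew.cgi"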

    Read the article

  • hg clone has stopped working on my Vista box

    - by vkraemer
    I have a Windows Vista machine that had been connecting to http://hg.netbeans.org productively for a while... until recently. Lately, when I attempt to pull or clone, the update appears to stall. I see the following messages on the screen when I attempt to clone:
    destination directory: web-main
    requesting all changes
    adding changesets
    And then... nothing happens. I have opened the Task Manager and there doesn't appear to be any significant network activity for HOURS. I can contact the server with Firefox and see the proper output. I can clone from the repo with Solaris and/or Mac OS X, so the issue doesn't appear to be at the 'other end'. I had been running a fairly old version of Mercurial before this started happening. After it started happening, I upgraded to Mercurial 1.5.2, which did not help resolve the issue at all. What are the likely causes and workarounds for this?
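    One low-cost next step is Mercurial's global debug flag, which prints each protocol round trip and shows exactly where the transfer stops (the repository path is assumed from the "destination directory" line above):

        hg --debug clone http://hg.netbeans.org/web-main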

    Read the article

  • Server load increases with many httpd processes sharing the same PPID

    - by user3740955
    I can see that my server load increases into the 200-300 range. Until a week ago the maximum load was around 20-25. In top and ps -ef I can see a lot of httpd processes, and most of them share the same PPID. When I checked, that parent process belongs to root. Please let me know how I can reduce the server load. I have searched a lot for this but have not been able to find a proper solution. Please see below part of the ps -ef output:
    apache 29698 2062 1 16:54 ? 00:00:00 /usr/sbin/httpd
    apache 29700 2062 3 16:54 ? 00:00:00 /usr/sbin/httpd
    apache 29701 2062 10 16:54 ? 00:00:02 /usr/sbin/httpd
    apache 29702 2062 0 16:54 ? 00:00:00 /usr/sbin/httpd
    apache 29703 2062 1 16:54 ? 00:00:00 /usr/sbin/httpd
    apache 29705 2062 0 16:54 ? 00:00:00 /usr/sbin/httpd
    apache 29706 2062 3 16:54 ? 00:00:00 /usr/sbin/httpd
    apache 29707 2062 0 16:54 ? 00:00:00 /usr/sbin/httpd
    apache 29708 2062 1 16:54 ? 00:00:00 /usr/sbin/httpd
    apache 29709 2062 0 16:54 ? 00:00:00 /usr/sbin/httpd
    apache 29710 2062 0 16:54 ? 00:00:00 /usr/sbin/httpd
    apache 29711 2062 0 16:54 ? 00:00:00 /usr/sbin/httpd
    apache 29712 2062 0 16:54 ? 00:00:00 /usr/sbin/httpd
    Server version: Apache/2.2.3
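    A note and two quick checks: Apache's worker processes always share one parent (the master httpd started as root, PID 2062 above), so that in itself is normal; the question is how many workers are being spawned and why. Standard commands (the config path may differ by distro):

        # count running httpd workers
        ps -ef | grep '[h]ttpd' | wc -l
        # compare against the configured ceiling and keep-alive settings
        grep -iE 'MaxClients|KeepAlive' /etc/httpd/conf/httpd.conf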

    Read the article

  • Keyboard shortcut for moving a window to another screen

    - by wcoenen
    When working with two (or more) screens, a common problem is that launched applications appear on the "wrong" screen. I find this especially annoying when launching a text editor from the command line, because I have to leave the home row with my right hand in order to drag the window to the "right" screen before I can continue typing. Is it possible to define a keyboard shortcut which moves the current application to the other/next screen? Edit: I'm using Windows XP, but it's good to know that the feature already exists in Windows 7. Edit 2: I went for the AutoHotkey script. This adaptation works for me:
    #q::
    WinGetPos, winx, winy,,, A
    WinGet, mm, MinMax, A
    WinRestore, A
    If (winx > 1270)
    {
        newx := winx-1270
        OutputDebug, Moving left from %winx% to %newx%
    }
    else
    {
        newx := winx+1270
        OutputDebug, Moving right from %winx% to %newx%
    }
    WinMove, A,, newx, winy
    if mm=1
        WinMaximize, A
    Return
    I did have to make use of the OutputDebug statements and DbgView to discover the proper threshold value of 1270 for moving left or right. The exact threshold is especially important when moving maximized windows to the left.

    Read the article

  • Can I use Upstart to start a script which requires the user's X session?

    - by ledneb
    I wrote a script which greps through the output of synclient to determine whether a laptop's touchpad has miraculously turned itself off (Ubuntu seems to /love/ doing this recently) and, if so, turns it back on. The script is something like this:
    #!/bin/bash
    while true ; do
        if [ `synclient | grep -e "TouchpadOff[[:space:]]*1" | wc -l` -ge 1 ] ; then
            synclient TouchpadOff=0
        fi
        sleep 3
    done
    (I don't have the laptop to hand right now, but you get the point! I will update later, when I'm at my laptop, if that's incorrect.) So I tried running this as an Upstart job so my touchpad can heal itself without any interaction. But it seems synclient doesn't find the current user's X session when my script is started by Upstart. I tried running it using something like su -c myscript.sh ledneb in my script stanza, but to no avail. Should I be looking in the direction of /etc/X11/xinit/xinitrc rather than Upstart? Is there a proper way to have this script run in the context of the current (or even a hard-coded) user's X session?
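    For the Upstart side, one commonly tried approach is handing the job the X display explicitly. A sketch under heavy assumptions (display :0, a user named ledneb, and a hypothetical script path; the XAUTHORITY location in particular varies between setups):

        # /etc/init/touchpad-watch.conf  (hypothetical job)
        start on runlevel [2345]
        stop on runlevel [!2345]
        respawn
        script
            export DISPLAY=:0
            export XAUTHORITY=/home/ledneb/.Xauthority
            exec su -s /bin/sh -c /home/ledneb/myscript.sh ledneb
        end script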

    Read the article

  • Notepad++ incorrect syntax highlighting?

    - by user360919
    So I want to build an XHTML 1.0 Strict-based website. Using Notepad++ for syntax highlighting seemed like a good idea. But when I tried to add the XML declaration (as stated in the spec, proper XHTML pages should use an XML declaration and be served as application/xhtml+xml), I couldn't get the entire document highlighted properly. Here is the code I used for a basic page:
    <?xml version="1.0" encoding="UTF-8" standalone="no" ?>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en-us" lang="en-us">
    <head>
        <meta http-equiv="Content-Type" content="application/xhtml+xml; charset=UTF-8" />
        <title>Page</title>
        <script type="application/javascript">
            alert("A perfectly valid xHTML page...");
        </script>
        <style type="text/css">
            #test { text-align: center; }
        </style>
    </head>
    <body>
        <h1 id="test">TEST</h1>
    </body>
    </html>
    Paste this into Notepad++ and you'll see that it won't highlight the code between <script type="application/javascript"> and </script> (it renders its background white) if the language is set to XML. If I set the language to HTML, the script gets correctly highlighted but the XML declaration does not. What to do? How can I make a hybrid language, a combination of XML and HTML?

    Read the article

  • How to deploy new instances of the same application (on 1 server) automatically?

    - by Intru
    I'm working on a SaaS application where each customer runs its own version of the application. All the application instances currently run on a single server. This works quite well for us (we need fewer resources in total). The application doesn't use a lot of resources, so even a small VPS per customer would be overkill (and more expensive). Adding a new customer is currently quite a bit of work:
    1. Create a user that is allowed to ssh.
    2. Create a new MySQL database and user.
    3. Create a virtual host for the application.
    4. Log in with the new user and do a git checkout of the application (in the right location).
    5. Create tables in the new database, and add some init data.
    6. Add some cron jobs.
    7. Create a first user that can log in.
    8. Add this new instance to capistrano.
    What would be the best way to automate these tasks? Are there applications that can (given proper configuration) do this? Ideally it should be usable by a salesperson (so something web-based). I could write a (bash) script that does most of these tasks (a rough sketch follows below), and then maybe add a small web-based wrapper where someone could provide the domain/default user information. Of course, this would also require a delete script, since some customers will eventually leave, which means you need a list of all existing customers/instances.
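    To give a feel for the shape of the bash route, a rough per-customer provisioning sketch (every name is a placeholder and error handling is omitted):

        #!/bin/bash
        # usage: provision.sh CUSTOMER   (hypothetical)
        CUSTOMER=$1
        useradd -m -s /bin/bash "$CUSTOMER"
        mysql -e "CREATE DATABASE \`$CUSTOMER\`;
                  CREATE USER '$CUSTOMER'@'localhost' IDENTIFIED BY 'changeme';
                  GRANT ALL ON \`$CUSTOMER\`.* TO '$CUSTOMER'@'localhost';"
        sudo -u "$CUSTOMER" git clone git://example.com/app.git "/home/$CUSTOMER/app"
        # remaining steps: render a vhost template, import the schema and init data,
        # install the cron entries, create the first login, register with capistrano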

    Read the article

  • Printer irregularly producing garbage output

    - by John Gardeniers
    Every now and then, instead of getting the proper output, we get numerous pages, mostly with just a single line each, of output which appears to be the raw PCL. My theory is that this happens when the first byte or two of the document is somehow not received by the printer, which then doesn't know how to interpret the rest and does its best by spitting it out as text. This is a problem I've seen many times over the years, but it has been popping up more often since we upgraded to Win 7 64-bit, which introduced a number of headaches because of HP's lack of real 64-bit support. It also appears to happen most often when printing PDF files; we have tried several different PDF readers in addition to Adobe's own, but that hasn't helped. While we mainly use HP printers, and the problem is not limited to any particular model, I've also seen it happen on other brands, albeit to a lesser extent. I've also been unable to discern a difference between printers used via a print server and those connected directly by IP address; it also happens to USB-attached printers. Because of the erratic nature of this problem there is precious little I can think of to try to debug it, so I'm after any ideas that might help to eliminate it.

    Read the article

  • How to block access to addresses outside network (internet)

    - by devnull
    I have a home server that is now connected to the internet through its own network device (ath0 - 192.168.1.x). It also has one more network interface (eth0 - 192.168.0.x). Soon I will get a second internet line that will be connected to the second network. The server will then have both networks, with different internet lines, available, but I only want it to connect to the internet on the old ath0 interface, not the new eth0 (192.168.0.x). The background of this setup is that the new line has a traffic volume limit (the old one doesn't) and I need the new line for all the mobile devices and laptops. Those devices should be able to use the new network to connect to the internet and to the server. The home server runs Debian 6 with iptables and some already-written rules. I now need a rule to block all outgoing internet access on the eth0 interface. I guess it could be something with --target != 192.168.0.0, but I did not succeed in finding the proper solution. Edit: found the solution:
    iptables -A OUTPUT -o eth0 -d 192.168.0.0/24 -m state --state NEW,ESTABLISHED -j ACCEPT
    With that setting, traffic that uses the eth0 interface is only allowed if the destination is inside the network 192.168.0.x; all other traffic is denied.
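    A hedged footnote on that edit: an ACCEPT rule only has a blocking effect if something else refuses the remaining eth0 traffic, either a default DROP policy on the OUTPUT chain or a trailing rule such as:

        iptables -A OUTPUT -o eth0 -d 192.168.0.0/24 -m state --state NEW,ESTABLISHED -j ACCEPT
        # assumption: without the next rule (or "iptables -P OUTPUT DROP"),
        # connections to other destinations via eth0 would still be allowed
        iptables -A OUTPUT -o eth0 -j DROP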

    Read the article

  • Red Gate Coder interviews: Alex Davies

    - by Michael Williamson
    Alex Davies has been a software engineer at Red Gate since graduating from university, and is currently busy working on .NET Demon. We talked about tackling parallel programming with his actors framework, a scientific approach to debugging, and how JavaScript is going to affect the programming languages we use in years to come. So, if we start at the start, how did you get started in programming? When I was seven or eight, I was given a BBC Micro for Christmas. I had asked for a Game Boy, but my dad thought it would be better to give me a proper computer. For a year or so, I only played games on it, but then I found the user guide for writing programs in it. I gradually started doing more stuff on it and found it fun. I liked creating. As I went into senior school I continued to write stuff on there, trying to write games that weren’t very good. I got a real computer when I was fourteen and found ways to write BASIC on it. Visual Basic to start with, and then something more interesting than that. How did you learn to program? Was there someone helping you out? Absolutely not! I learnt out of a book, or by experimenting. I remember the first time I found a loop, I was like “Oh my God! I don’t have to write out the same line over and over and over again any more. It’s amazing!” When did you think this might be something that you actually wanted to do as a career? For a long time, I thought it wasn’t something that you would do as a career, because it was too much fun to be a career. I thought I’d do chemistry at university and some kind of career based on chemical engineering. And then I went to a careers fair at school when I was seventeen or eighteen, and it just didn’t interest me whatsoever. I thought “I could be a programmer, and there’s loads of money there, and I’m good at it, and it’s fun”, but also that I shouldn’t spoil my hobby. Now I don’t really program in my spare time any more, which is a bit of a shame, but I program all the rest of the time, so I can live with it. Do you think you learnt much about programming at university? Yes, definitely! I went into university knowing how to make computers do anything I wanted them to do. However, I didn’t have the language to talk about algorithms, so the algorithms course in my first year was massively important. Learning other language paradigms like functional programming was really good for breadth of understanding. Functional programming influences normal programming through design rather than actually using it all the time. I draw inspiration from it to write imperative programs which I think is actually becoming really fashionable now, but I’ve been doing it for ages. I did it first! There were also some courses on really odd programming languages, a bit of Prolog, a little bit of C. Having a little bit of each of those is something that I would have never done on my own, so it was important. And then there are knowledge-based courses which are about not programming itself but things that have been programmed like TCP. Those are really important for examples for how to approach things. Did you do any internships while you were at university? Yeah, I spent both of my summers at the same company. I thought I could code well before I went there. Looking back at the crap that I produced, it was only surpassed in its crappiness by all of the other code already in that company. I’m so much better at writing nice code now than I used to be back then. Was there just not a culture of looking after your code? 
There was, they just didn’t hire people for their abilities in that area. They hired people for raw IQ. The first indicator of it going wrong was that they didn’t have any computer scientists, which is a bit odd in a programming company. But even beyond that they didn’t have people who learnt architecture from anyone else. Most of them had started straight out of university, so never really had experience or mentors to learn from. There wasn’t the experience to draw from to teach each other. In the second half of my second internship, I was being given tasks like looking at new technologies and teaching people stuff. Interns shouldn’t be teaching people how to do their jobs! All interns are going to have little nuggets of things that you don’t know about, but they shouldn’t consistently be the ones who know the most. It’s not a good environment to learn. I was going to ask how you found working with people who were more experienced than you… When I reached Red Gate, I found some people who were more experienced programmers than me, and that was difficult. I’ve been coding since I was tiny. At university there were people who were cleverer than me, but there weren’t very many who were more experienced programmers than me. During my internship, I didn’t find anyone who I classed as being a noticeably more experienced programmer than me. So, it was a shock to the system to have valid criticisms rather than just formatting criticisms. However, Red Gate’s not so big on the actual code review, at least it wasn’t when I started. We did an entire product release and then somebody looked over all of the UI of that product which I’d written and say what they didn’t like. By that point, it was way too late and I’d disagree with them. Do you think the lack of code reviews was a bad thing? I think if there’s going to be any oversight of new people, then it should be continuous rather than chunky. For me I don’t mind too much, I could go out and get oversight if I wanted it, and in those situations I felt comfortable without it. If I was managing the new person, then maybe I’d be keener on oversight and then the right way to do it is continuously and in very, very small chunks. Have you had any significant projects you’ve worked on outside of a job? When I was a teenager I wrote all sorts of stuff. I used to write games, I derived how to do isomorphic projections myself once. I didn’t know what the word was so I couldn’t Google for it, so I worked it out myself. It was horrifically complicated. But it sort of tailed off when I started at university, and is now basically zero. If I do side-projects now, they tend to be work-related side projects like my actors framework, NAct, which I started in a down tools week. Could you explain a little more about NAct? It is a little C# framework for writing parallel code more easily. Parallel programming is difficult when you need to write to shared data. Sometimes parallel programming is easy because you don’t need to write to shared data. When you do need to access shared data, you could just have your threads pile in and do their work, but then you would screw up the data because the threads would trample on each other’s toes. You could lock, but locks are really dangerous if you’re using more than one of them. You get interactions like deadlocks, and that’s just nasty. Actors instead allows you to say this piece of data belongs to this thread of execution, and nobody else can read it. 
If you want to read it, then ask that thread of execution for a piece of it by sending a message, and it will send the data back by a message. And that avoids deadlocks as long as you follow some obvious rules about not making your actors sit around waiting for other actors to do something. There are lots of ways to write actors, NAct allows you to do it as if it was method calls on other objects, which means you get all the strong type-safety that C# programmers like. Do you think that this is suitable for the majority of parallel programming, or do you think it’s only suitable for specific cases? It’s suitable for most difficult parallel programming. If you’ve just got a hundred web requests which are all independent of each other, then I wouldn’t bother because it’s easier to just spin them up in separate threads and they can proceed independently of each other. But where you’ve got difficult parallel programming, where you’ve got multiple threads accessing multiple bits of data in multiple ways at different times, then actors is at least as good as all other ways, and is, I reckon, easier to think about. When you’re using actors, you presumably still have to write your code in a different way from you would otherwise using single-threaded code. You can’t use actors with any methods that have return types, because you’re not allowed to call into another actor and wait for it. If you want to get a piece of data out of another actor, then you’ve got to use tasks so that you can use “async” and “await” to await asynchronously for it. But other than that, you can still stick things in classes so it’s not too different really. Rather than having thousands of objects with mutable state, you can use component-orientated design, where there are only a few mutable classes which each have a small number of instances. Then there can be thousands of immutable objects. If you tend to do that anyway, then actors isn’t much of a jump. If I’ve already built my system without any parallelism, how hard is it to add actors to exploit all eight cores on my desktop? Usually pretty easy. If you can identify even one boundary where things look like messages and you have components where some objects live on one side and these other objects live on the other side, then you can have a granddaddy object on one side be an actor and it will parallelise as it goes across that boundary. Not too difficult. If we do get 1000-core desktop PCs, do you think actors will scale up? It’s hard. There are always in the order of twenty to fifty actors in my whole program because I tend to write each component as actors, and I tend to have one instance of each component. So this won’t scale to a thousand cores. What you can do is write data structures out of actors. I use dictionaries all over the place, and if you need a dictionary that is going to be accessed concurrently, then you could build one of those out of actors in no time. You can use queuing to marshal requests between different slices of the dictionary which are living on different threads. So it’s like a distributed hash table but all of the chunks of it are on the same machine. That means that each of these thousand processors has cached one small piece of the dictionary. I reckon it wouldn’t be too big a leap to start doing proper parallelism. Do you think it helps if actors get baked into the language, similarly to Erlang? Erlang is excellent in that it has thread-local garbage collection. 
C# doesn’t, so there’s a limit to how well C# actors can possibly scale because there’s a single garbage collected heap shared between all of them. When you do a global garbage collection, you’ve got to stop all of the actors, which is seriously expensive, whereas in Erlang garbage collections happen per-actor, so they’re insanely cheap. However, Erlang deviated from all the sensible language design that people have used recently and has just come up with crazy stuff. You can definitely retrofit thread-local garbage collection to .NET, and then it’s quite well-suited to support actors, even if it’s not baked into the language. Speaking of language design, do you have a favourite programming language? I’ll choose a language which I’ve never written before. I like the idea of Scala. It sounds like C#, only with some of the niggles gone. I enjoy writing static types. It means you don’t have to write tests so much. When you say it doesn’t have some of the niggles? C# doesn’t allow the use of a property as a method group. It doesn’t have Scala case classes, or sum types, where you can do a switch statement and the compiler checks that you’ve checked all the cases, which is really useful in functional-style programming. Pattern-matching, in other words. That’s actually the major niggle. C# is pretty good, and I’m quite happy with C#. And what about going even further with the type system to remove the need for tests to something like Haskell? Or is that a step too far? I’m quite a pragmatist, I don’t think I could deal with trying to write big systems in languages with too few other users, especially when learning how to structure things. I just don’t know anyone who can teach me, and the Internet won’t teach me. That’s the main reason I wouldn’t use it. If I turned up at a company that writes big systems in Haskell, I would have no objection to that, but I wouldn’t instigate it. What about things in C#? For instance, there’s contracts in C#, so you can try to statically verify a bit more about your code. Do you think that’s useful, or just not worthwhile? I’ve not really tried it. My hunch is that it needs to be built into the language and be quite mathematical for it to work in real life, and that doesn’t seem to have ended up true for C# contracts. I don’t think anyone who’s tried them thinks they’re any good. I might be wrong. On a slightly different note, how do you like to debug code? I think I’m quite an odd debugger. I use guesswork extremely rarely, especially if something seems quite difficult to debug. I’ve been bitten spending hours and hours on guesswork and not being scientific about debugging in the past, so now I’m scientific to a fault. What I want is to see the bug happening in the debugger, to step through the bug happening. To watch the program going from a valid state to an invalid state. When there’s a bug and I can’t work out why it’s happening, I try to find some piece of evidence which places the bug in one section of the code. From that experiment, I binary chop on the possible causes of the bug. I suppose that means binary chopping on places in the code, or binary chopping on a stage through a processing cycle. Basically, I’m very stupid about how I debug. I won’t make any guesses, I won’t use any intuition, I will only identify the experiment that’s going to binary chop most effectively and repeat rather than trying to guess anything. I suppose it’s quite top-down. Is most of the time then spent in the debugger?
Absolutely, if at all possible I will never debug using print statements or logs. I don’t really hold much stock in outputting logs. If there’s any bug which can be reproduced locally, I’d rather do it in the debugger than outputting logs. And with SmartAssembly error reporting, there’s not a lot that can’t be either observed in an error report and just fixed, or reproduced locally. And in those other situations, maybe I’ll use logs. But I hate using logs. You stare at the log, trying to guess what’s going on, and that’s exactly what I don’t like doing. You have to just look at it and see does this look right or wrong. We’ve covered how you get to grip with bugs. How do you get to grips with an entire codebase? I watch it in the debugger. I find little bugs and then try to fix them, and mostly do it by watching them in the debugger and gradually getting an understanding of how the code works using my process of binary chopping. I have to do a lot of reading and watching code to choose where my slicing-in-half experiment is going to be. The last time I did it was SmartAssembly. The old code was a complete mess, but at least it did things top to bottom. There wasn’t too much of some of the big abstractions where flow of control goes all over the place, into a base class and back again. Code’s really hard to understand when that happens. So I like to choose a little bug and try to fix it, and choose a bigger bug and try to fix it. Definitely learn by doing. I want to always have an aim so that I get a little achievement after every few hours of debugging. Once I’ve learnt the codebase I might be able to fix all the bugs in an hour, but I’d rather be using them as an aim while I’m learning the codebase. If I was a maintainer of a codebase, what should I do to make it as easy as possible for you to understand? Keep distinct concepts in different places. And name your stuff so that it’s obvious which concepts live there. You shouldn’t have some variable that gets set miles up the top of somewhere, and then is read miles down to choose some later behaviour. I’m talking from a very much SmartAssembly point of view because the old SmartAssembly codebase had tons and tons of these things, where it would read some property of the code and then deal with it later. Just thousands of variables in scope. Loads of things to think about. If you can keep concepts separate, then it aids me in my process of fixing bugs one at a time, because each bug is going to more or less be understandable in the one place where it is. And what about tests? Do you think they help at all? I’ve never had the opportunity to learn a codebase which has had tests, I don’t know what it’s like! What about when you’re actually developing? How useful do you find tests in finding bugs or regressions? Finding regressions, absolutely. Running bits of code that would be quite hard to run otherwise, definitely. It doesn’t happen very often that a test finds a bug in the first place. I don’t really buy nebulous promises like tests being a good way to think about the spec of the code. My thinking goes something like “This code works at the moment, great, ship it! Ah, there’s a way that this code doesn’t work. 
Okay, write a test, demonstrate that it doesn’t work, fix it, use the test to demonstrate that it’s now fixed, and keep the test for future regressions.” The most valuable tests are for bugs that have actually happened at some point, because bugs that have actually happened at some point, despite the fact that you think you’ve fixed them, are way more likely to appear again than new bugs are. Does that mean that when you write your code the first time, there are no tests? Often. The chance of there being a bug in a new feature is relatively unaffected by whether I’ve written a test for that new feature because I’m not good enough at writing tests to think of bugs that I would have written into the code. So not writing regression tests for all of your code hasn’t affected you too badly? There are different kinds of features. Some of them just always work, and are just not flaky, they just continue working whatever you throw at them. Maybe because the type-checker is particularly effective around them. Writing tests for those features which just tend to always work is a waste of time. And because it’s a waste of time I’ll tend to wait until a feature has demonstrated its flakiness by having bugs in it before I start trying to test it. You can get a feel for whether it’s going to be flaky code as you’re writing it. I try to write it to make it not flaky, but there are some things that are just inherently flaky. And very occasionally, I’ll think “this is going to be flaky” as I’m writing, and then maybe do a test, but not most of the time. How do you think your programming style has changed over time? I’ve got clearer about what the right way of doing things is. I used to flip-flop a lot between different ideas. Five years ago I came up with some really good ideas and some really terrible ideas. All of them seemed great when I thought of them, but they were quite diverse ideas, whereas now I have a smaller set of reliable ideas that are actually good for structuring code. So my code is probably more similar to itself than it used to be back in the day, when I was trying stuff out. I’ve got more disciplined about encapsulation, I think. There are operational things like I use actors more now than I used to, and that forces me to use immutability more than I used to. The first code that I wrote in Red Gate was the memory profiler UI, and that was an actor, I just didn’t know the name of it at the time. I don’t really use object-orientation. By object-orientation, I mean having n objects of the same type which are mutable. I want a constant number of objects that are mutable, and they should be different types. I stick stuff in dictionaries and then have one thing that owns the dictionary and puts stuff in and out of it. That’s definitely a pattern that I’ve seen recently. I think maybe I’m doing functional programming. Possibly. It’s plausible. If you had to summarise the essence of programming in a pithy sentence, how would you do it? Programming is the form of art that, without losing any of the beauty of architecture or fine art, allows you to produce things that people love and you make money from. So you think it’s an art rather than a science? It’s a little bit of engineering, a smidgeon of maths, but it’s not science. Like architecture, programming is on that boundary between art and engineering. If you want to do it really nicely, it’s mostly art. 
You can get away with doing architecture and programming entirely by having a good engineering mind, but you’re not going to produce anything nice. You’re not going to have joy doing it if you’re an engineering mind. Architects who are just engineering minds are not going to enjoy their job. I suppose engineering is the foundation on which you build the art. Exactly. How do you think programming is going to change over the next ten years? There will be an unfortunate shift towards dynamically-typed languages, because of JavaScript. JavaScript has an unfair advantage. JavaScript’s unfair advantage will cause more people to be exposed to dynamically-typed languages, which means other dynamically-typed languages crop up and the best features go into dynamically-typed languages. Then people conflate the good features with the fact that it’s dynamically-typed, and more investment goes into dynamically-typed languages. They end up better, so people use them. What about the idea of compiling other languages, possibly statically-typed, to JavaScript? It’s a reasonable idea. I would like to do it, but I don’t think enough people in the world are going to do it to make it pick up. The hordes of beginners are the lifeblood of a language community. They are what makes there be good tools and what makes there be vibrant community websites. And any particular thing which is the same as JavaScript only with extra stuff added to it, although it might be technically great, is not going to have the hordes of beginners. JavaScript is always going to be the quickest and easiest way for a beginner to start programming in the browser. And dynamically-typed languages are great for beginners. Compilers are pretty scary and beginners don’t write big code. And having your errors come up in the same place, whether they’re statically checkable errors or not, is quite nice for a beginner. If someone asked me to teach them some programming, I’d teach them JavaScript. If dynamically-typed languages are great for beginners, when do you think the benefits of static typing start to kick in? The value of having a statically typed program is in the tools that rely on the static types to produce a smooth IDE experience rather than actually telling me my compile errors. And only once you’re experienced enough a programmer that having a really smooth IDE experience makes a blind bit of difference, does static typing make a blind bit of difference. So it’s not really about size of codebase. If I go and write up a tiny program, I’m still going to get value out of writing it in C# using ReSharper because I’m experienced with C# and ReSharper enough to be able to write code five times faster if I have that help. Any other visions of the future? Nobody’s going to use actors. Because everyone’s going to be running on single-core VMs connected over network-ready protocols like JSON over HTTP. So, parallelism within one operating system is going to die. But until then, you should use actors. More Red Gater Coder interviews

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
    When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as storage, service bus, access control, etc. Interacting with Windows Azure services is possible in Node.js through the Windows Azure Node.js SDK, which is a module available on NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.   Consume Windows Azure Storage: Let's first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles, with some more features. Hence in this post I will only demonstrate hosting in a WACS worker role. The same Node.js code can be used to consume WAS when hosted on WAWS, but since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used for a WAWS Node application. We can use the solution that I created in my last post. Alternatively, we can create a new Windows Azure project in Visual Studio with a worker role, add "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and mark all files as "Copy always". In order to use Windows Azure services we need the Windows Azure Node.js SDK, also known as the module named "azure", which can be installed through NPM. Once downloaded and installed, we need to include the files in our worker role project and mark them as "Copy always". You can use my "Copy all always" tool mentioned in my last post to update the current worker role project file; you can also find the source code of this tool here. The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool which provides a cross-platform command-line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming various Windows Azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them, but will only demonstrate how to use tables and service runtime information in this post. You can find the full documentation of this SDK here. Back in Visual Studio, open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this.
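    Installing the SDK module itself is a single NPM command, run from the application folder:

        npm install azure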
var express = require("express");
var sql = require("node-sqlserver");

var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
var port = 80;

var app = express();

app.configure(function () {
    app.use(express.bodyParser());
});

app.get("/", function (req, res) {
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.json(results);
                }
            });
        }
    });
});

app.get("/text/:key/:culture", function (req, res) {
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            var key = req.params.key;
            var culture = req.params.culture;
            // NOTE: string concatenation is open to SQL injection;
            // "node-sqlserver" did not support SQL parameters at the time of writing.
            var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'";
            conn.queryRaw(command, function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.json(results);
                }
            });
        }
    });
});

app.get("/sproc/:key/:culture", function (req, res) {
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            var key = req.params.key;
            var culture = req.params.culture;
            var command = "EXEC GetItem '" + key + "', '" + culture + "'";
            conn.queryRaw(command, function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.json(results);
                }
            });
        }
    });
});

app.post("/new", function (req, res) {
    var key = req.body.key;
    var culture = req.body.culture;
    var val = req.body.val;

    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')";
            conn.queryRaw(command, function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot insert record.");
                }
                else {
                    res.send(200, "Inserted successfully");
                }
            });
        }
    });
});

app.listen(port);

Now let's create a new function that copies the records from WASD to the table service:

1. Delete the table named "resource".
2. Create a new table named "resource". (These two steps ensure that we start with an empty table.)
3. Load all records from the "resource" table in WASD.
4. For each record loaded from WASD, insert it into the table one by one.
5. Prompt the user when finished.

In order to use the table service we need the storage account name and key, which can be found in the developer portal: just select the storage account and click the Manage Keys button. Then create two local variables in our Node.js application for the storage account name and key. Since we need to use WAS we have to import the azure module, and I created one more variable to store the table name. In order to work with the table service I need to create the storage client for the table service.
This is very similar to the Windows Azure SDK for .NET. In the code below I created a new variable named "client" and called "createTableService", specifying my storage account name and key.

var azure = require("azure");
var storageAccountName = "synctile";
var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw==";
var tableName = "resource";
var client = azure.createTableService(storageAccountName, storageAccountKey);

Now create a new function for the URL "/was/init" so that we can trigger it through the browser. In this function we will first load all records from WASD.

app.get("/was/init", function (req, res) {
    // load all records from windows azure sql database
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    if (results.rows.length > 0) {
                        // begin to transform the records into table service
                    }
                }
            });
        }
    });
});

When we have successfully loaded all records we can start to transform them into the table service. First I need to recreate the table in the table service. This can be done by deleting and then creating the table through the table client I created previously.

app.get("/was/init", function (req, res) {
    // load all records from windows azure sql database
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    if (results.rows.length > 0) {
                        // begin to transform the records into table service
                        // recreate the table named 'resource'
                        client.deleteTable(tableName, function (error) {
                            client.createTableIfNotExists(tableName, function (error) {
                                if (error) {
                                    error["target"] = "createTableIfNotExists";
                                    res.send(500, error);
                                }
                                else {
                                    // transform the records
                                }
                            });
                        });
                    }
                }
            });
        }
    });
});

As you can see, the azure SDK provides its methods in a callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked the "deleteTable" method, providing the name of the table and a callback function which will be performed when the table has been deleted or the operation has failed. Under the hood, the azure module performs the table deletion asynchronously in the POSIX async threads pool, and once it's done the callback function is invoked. This is the reason we need to nest the table creation code inside the deletion callback: if we placed the table creation code after the deletion call, the two would be invoked in parallel.

Next, for each record from WASD I created an entity and inserted it into the table service. Finally I send the response to the browser. Can you find a bug in the code below? I will describe it later in this post.
app.get("/was/init", function (req, res) {
    // load all records from windows azure sql database
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    if (results.rows.length > 0) {
                        // begin to transform the records into table service
                        // recreate the table named 'resource'
                        client.deleteTable(tableName, function (error) {
                            client.createTableIfNotExists(tableName, function (error) {
                                if (error) {
                                    error["target"] = "createTableIfNotExists";
                                    res.send(500, error);
                                }
                                else {
                                    // transform the records
                                    for (var i = 0; i < results.rows.length; i++) {
                                        var entity = {
                                            "PartitionKey": results.rows[i][1],
                                            "RowKey": results.rows[i][0],
                                            "Value": results.rows[i][2]
                                        };
                                        client.insertEntity(tableName, entity, function (error) {
                                            if (error) {
                                                error["target"] = "insertEntity";
                                                res.send(500, error);
                                            }
                                            else {
                                                console.log("entity inserted");
                                            }
                                        });
                                    }
                                    // send the response
                                    console.log("all done");
                                    res.send(200, "All done!");
                                }
                            });
                        });
                    }
                }
            });
        }
    });
});

Now we could publish it to the cloud and give it a try, but normally we'd better test it in the local emulator first. In the Node.js SDK there are three built-in properties which provide the account name, key and host address of the local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to point at my local database. The code changes as below.

// windows azure sql database
//var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
// local sql server
var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};";

var azure = require("azure");
var storageAccountName = "synctile";
var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw==";
var tableName = "resource";
// windows azure storage
//var client = azure.createTableService(storageAccountName, storageAccountKey);
// local storage emulator
var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST);

Now let's run the application and navigate to "localhost:12345/was/init" (I hosted it on port 12345). We can see it transformed the data from my local database to the local table service. Everything looks fine. But there is a bug in my code. If we have a look at the Node.js command window, we will find that it sent the response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js performs all IO operations in a non-blocking model. When we inserted the records, the table service insert calls were executed in parallel, and the operation of sending the response was also executed in parallel, even though I wrote it at the end of my logic.
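To see the ordering problem in isolation, here is a minimal standalone sketch (my own illustration, not part of the original application) in which setTimeout stands in for the asynchronous insertEntity calls:

// Minimal sketch: callbacks scheduled in a loop only run
// after the synchronous code that follows the loop has finished.
var rows = ["a", "b", "c"];

for (var i = 0; i < rows.length; i++) {
    // setTimeout stands in for an asynchronous I/O call like insertEntity
    setTimeout(function () {
        console.log("entity inserted");
    }, 10);
}

// This line runs immediately, before any callback above has fired,
// just like res.send(200, "All done!") did at the end of the route handler.
console.log("all done");

Running this prints "all done" first and then three "entity inserted" lines, which is exactly the behavior observed in the command window above.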
The correct logic should be: when all entities have been copied to the table service with no error, send the response to the browser; otherwise send the error message. To do so I need to import another module named "async", which helps us coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its "forEach" method for the asynchronous code that inserts the table entities. The first argument of "forEach" is the array to be processed. The second argument is the operation to perform on each item in the array. The third argument will be invoked when all items have been processed or any error has occurred; here we can send our response to the browser.

var async = require("async"); // imported at the beginning of the file

app.get("/was/init", function (req, res) {
    // load all records from windows azure sql database
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    if (results.rows.length > 0) {
                        // begin to transform the records into table service
                        // recreate the table named 'resource'
                        client.deleteTable(tableName, function (error) {
                            client.createTableIfNotExists(tableName, function (error) {
                                if (error) {
                                    error["target"] = "createTableIfNotExists";
                                    res.send(500, error);
                                }
                                else {
                                    async.forEach(results.rows,
                                        // transform the records
                                        function (row, callback) {
                                            var entity = {
                                                "PartitionKey": row[1],
                                                "RowKey": row[0],
                                                "Value": row[2]
                                            };
                                            client.insertEntity(tableName, entity, function (error) {
                                                if (error) {
                                                    callback(error);
                                                }
                                                else {
                                                    console.log("entity inserted.");
                                                    callback(null);
                                                }
                                            });
                                        },
                                        // send response once all inserts are done or one fails
                                        function (error) {
                                            if (error) {
                                                error["target"] = "insertEntity";
                                                res.send(500, error);
                                            }
                                            else {
                                                console.log("all done");
                                                res.send(200, "All done!");
                                            }
                                        }
                                    );
                                }
                            });
                        });
                    }
                }
            });
        }
    });
});

Run it locally and now we can see the response is sent only after all entities have been inserted.

Querying entities against the table service is simple as well: just use the "queryEntity" method of the table service client, providing the partition key and row key. More complex query criteria are supported too; see the sketch at the end of this section. In the code below I query an entity by partition key and row key, and return the proper localization value in the response.

app.get("/was/:key/:culture", function (req, res) {
    var key = req.params.key;
    var culture = req.params.culture;
    client.queryEntity(tableName, culture, key, function (error, entity) {
        if (error) {
            res.send(500, error);
        }
        else {
            res.json(entity);
        }
    });
});

And then I tested it on the local emulator. Finally, if we want to publish this application to the cloud, we should change the database connection string and the storage account. For more information about how to consume the blob and queue services, as well as the service bus, please refer to the MSDN page.
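As a hedged illustration of the complex query criteria mentioned above, the "azure" module of this era exposed a fluent TableQuery builder alongside a "queryEntities" method; the exact names below are recalled from that version of the SDK and should be verified against your release. Inside a route handler like the ones above, selecting every entity of one culture might look like this:

// A sketch, assuming the early "azure" module's TableQuery API:
// build a filter on PartitionKey (the culture) instead of a point lookup.
var query = azure.TableQuery
    .select()
    .from(tableName)
    .where("PartitionKey eq ?", "en-US");

client.queryEntities(query, function (error, entities) {
    if (error) {
        res.send(500, error);
    }
    else {
        // entities is the array of matching rows for the "en-US" culture
        res.json(entities);
    }
});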
Consume Service Runtime

As I mentioned above, before we publish our application to the cloud we need to change the connection string and account information in our code. But if you have played with WACS you will know that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. That means we can define these values in the CSCFG and CSDEF files and let the runtime retrieve the proper values for us. For example, we can add some role settings through the property window of the role, specifying the connection string and storage account for the cloud and local environments. We can also use the endpoint defined in the role environment in our Node.js application.

In the Node.js SDK we can get an object from "azure.RoleEnvironment" which provides the functionality to retrieve configuration settings, endpoints, etc. In the code below I defined the connection string variables and then used the SDK to retrieve them and initialize the table client.

var connectionString = "";
var storageAccountName = "";
var storageAccountKey = "";
var tableName = "";
var client;

azure.RoleEnvironment.getConfigurationSettings(function (error, settings) {
    if (error) {
        console.log("ERROR: getConfigurationSettings");
        console.log(JSON.stringify(error));
    }
    else {
        console.log(JSON.stringify(settings));
        connectionString = settings["SqlConnectionString"];
        storageAccountName = settings["StorageAccountName"];
        storageAccountKey = settings["StorageAccountKey"];
        tableName = settings["TableName"];

        console.log("connectionString = %s", connectionString);
        console.log("storageAccountName = %s", storageAccountName);
        console.log("storageAccountKey = %s", storageAccountKey);
        console.log("tableName = %s", tableName);

        client = azure.createTableService(storageAccountName, storageAccountKey);
    }
});

In this way we don't need to amend the code for the configuration differences between the local and cloud environments, since the service runtime takes care of it. At the end of the code we also listen on the port retrieved from the SDK.

azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) {
    if (error) {
        console.log("ERROR: getCurrentRoleInstance");
        console.log(JSON.stringify(error));
    }
    else {
        console.log(JSON.stringify(instance));
        if (instance["endpoints"] && instance["endpoints"]["nodejs"]) {
            var endpoint = instance["endpoints"]["nodejs"];
            app.listen(endpoint["port"]);
        }
        else {
            app.listen(8080);
        }
    }
});

But if we test the application right now, we will find that it cannot retrieve any values from the service runtime. This is because, by default, the entry point of this role is the worker role class. In the Windows Azure environment the service runtime opens a named pipe to the entry point instance so that it can connect to the runtime and retrieve values. In this case, since the entry point was the worker role class and Node.js was started inside the role, the named pipe was established between our worker role class and the service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the Azure project and add a new element named Runtime, with a child element named EntryPoint that specifies the Node.js command line (a sketch of this element follows below). The Node.js application then has its own connection to the service runtime and is able to read the configuration. Starting the Node.js application in the local emulator, we can see it retrieves the connection string and storage account for the local environment.
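For reference, the CSDEF change described above might look roughly like the sketch below. This is a sketch only: the role name "WorkerRole1" is a placeholder, and the ProgramEntryPoint element and setReadyOnProcessStart attribute are recalled from the Azure SDK of this era, so verify them against your SDK's CSDEF schema.

<!-- A sketch of the Runtime element in ServiceDefinition.csdef;
     role name is a placeholder, attribute names should be verified. -->
<WorkerRole name="WorkerRole1" vmsize="Small">
  <Runtime>
    <EntryPoint>
      <!-- Launch node.exe directly so the service runtime's named pipe
           is established with the Node.js process itself. -->
      <ProgramEntryPoint commandLine="node.exe index.js" setReadyOnProcessStart="true" />
    </EntryPoint>
  </Runtime>
</WorkerRole>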
And if we publish our application to Azure, it works with WASD and the storage service through the configuration for the cloud.

Summary

In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime to retrieve configuration settings and endpoint information, and showed that in order to make the service runtime available to my Node.js application I needed to create an entry point element in the CSDEF file and set "node.exe" as the entry point.

I have used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host a Node.js application, and how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very new and young network application platform, but since it is simple, easy to learn and deploy, and built on a single-threaded, non-blocking IO model, it has become more and more popular for web application and web service development, especially for IO-intensive projects. And as Node.js is very good at scaling out, it is all the more useful on a cloud computing platform.

Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement: "node-sqlserver" does not yet support SQL parameters, and "azure" does not yet support creating the storage client from a storage connection string. But Microsoft is working on making them easier to use and on adding more features and functionality.

PS: you can download the source code here. You can download the source code of my "Copy all always" tool here.

Hope this helps,
Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article
