Search Results

Search found 1666 results on 67 pages for 'practical joke'.


  • Pros/Cons of switching from Exchange to GMail

    - by Brent
    We are a medium-large non-profit company, with around 1000 staff and volunteers, and have been using MS Exchange (currently 2003) for our mail system for years. I recently attended a Google conference where they were positing that "cloud computing is the way of the future" and encouraging us to switch from running our own email on Exchange to using GMail and Google Apps for everything. Additionally, one of our departments has been pushing from inside to make this transition within their own department, if not throughout the entire organization.

    I can definitely see some benefits, such as:
    - Archive space: we never seem to have the space our users want, and of course, the more we get, the more we have to back up.
    - OS agnostic: Exchange is definitely built for Windows, and with Mac and Linux users on the rise, these users increasingly demand better tools / support. Google offers this.
    - Better archiving: the potential for e-discovery, which doesn't exist in a practical way with our current setup.

    Switching would relieve us of a fair bit of server administration, give more options to our end users, and free up the server resources we are now using for Exchange. Our IT department wants to be perceived as providing up-to-date solutions to technical problems, and this change would definitely provide such an image. Google's infrastructure is obviously much more robust than ours, and they employ some of the world's best security and network experts.

    However, there are also some serious drawbacks:
    - We would essentially be outsourcing one of our mission-critical systems to a third party.
    - The switch would inevitably involve Google Apps and perhaps more as well. That means we would have a lot more at the mercy of a single (potentially weak) password. (Is there a way to make this more secure using a password plus a physical key of some sort?)
    - Our data would not remain under our roof - or even in our country (Canada). This obviously has pluses on the disaster recovery side, but I think there are potential negatives on the legal side.
    - I can't imagine that somebody as large as Google would be as responsive as we would want with regard to non-critical issues such as tracing missing emails, etc. (I'm not sure how much access we would have to basic mail logs, for instance.)

    Can anyone help me evaluate this decision? What issues am I overlooking? What experiences have you had with this transition (or the opposite - GMail to Exchange)? Can you add to the points I have already outlined?

    Read the article

  • Can someone explain RAID-0 in plain English?

    - by Edward Tanguay
    I've heard about and read about RAID throughout the years and understand it theoretically as a way to help, e.g., server PCs reduce the chance of data loss, but now I am buying a new PC which I want to be as fast as possible, and I have learned that having two drives can considerably increase the perceived performance of your machine.

    In the question Recommendations for hard drive performance boost, the author says he is going to RAID-0 two 7200 RPM drives together. What does this mean in practical terms for me with Windows 7 installed? E.g., can I buy two drives, go into the device manager and "RAID-0 them together"? I am not a network administrator or a hardware guy, I'm just a developer who is going to have a computer store build me a super fast machine next week. I can read the Wikipedia page on RAID but it is just way too many trees and not enough forest to help me build a faster PC:

    RAID-0: "Striped set without parity" or "Striping". Provides improved performance and additional storage but no redundancy or fault tolerance. Because there is no redundancy, this level is not actually a Redundant Array of Inexpensive Disks, i.e. not true RAID. However, because of the similarities to RAID (especially the need for a controller to distribute data across multiple disks), simple stripe sets are normally referred to as RAID 0. Any disk failure destroys the array, which has greater consequences with more disks in the array (at a minimum, catastrophic data loss is twice as severe compared to single drives without RAID). A single disk failure destroys the entire array because when data is written to a RAID 0 drive, the data is broken into fragments. The number of fragments is dictated by the number of disks in the array. The fragments are written to their respective disks simultaneously on the same sector. This allows smaller sections of the entire chunk of data to be read off the drive in parallel, increasing bandwidth. RAID 0 does not implement error checking so any error is unrecoverable. More disks in the array means higher bandwidth, but greater risk of data loss.

    So in plain English, how can "RAID-0" help me build a faster Windows 7 PC that I am going to order next week?
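    To make the "broken into fragments" idea concrete, here is a small illustrative Python sketch of striping across two disks. It is an illustration only, not something you run to create an array - striping is done by the RAID controller or the OS, never by application code.

        # Illustrative sketch only: how RAID-0 striping splits data across disks
        # in fixed-size chunks. Real arrays use a controller or OS-level striping.
        STRIPE_SIZE = 4  # bytes per chunk; real arrays use e.g. 64 KB stripes

        def stripe_write(data: bytes, disk_count: int):
            """Distribute data round-robin across `disk_count` simulated disks."""
            disks = [[] for _ in range(disk_count)]
            chunks = [data[i:i + STRIPE_SIZE] for i in range(0, len(data), STRIPE_SIZE)]
            for index, chunk in enumerate(chunks):
                disks[index % disk_count].append(chunk)  # chunks alternate between disks
            return disks

        def stripe_read(disks):
            """Reassemble the original data by reading the disks in round-robin order."""
            out = []
            for i in range(max(len(d) for d in disks)):
                for disk in disks:
                    if i < len(disk):
                        out.append(disk[i])
            return b"".join(out)

        disks = stripe_write(b"ABCDEFGHIJKLMNOP", disk_count=2)
        print(disks)  # disk 0 holds ABCD, IJKL; disk 1 holds EFGH, MNOP
        assert stripe_read(disks) == b"ABCDEFGHIJKLMNOP"
        # Lose either disk and half the chunks are gone - hence "no fault tolerance".

    The plain-English takeaway: reads and writes are spread across both spindles, so sequential throughput roughly doubles, but losing either disk loses the whole array.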

    Read the article

  • The Koyal Group Info Mag News: Charged building material could make the renewable grid a reality

    - by Chyler Tilton
    What if your cell phone didn’t come with a battery? Imagine, instead, if the material from which your phone was built was a battery. The promise of strong load-bearing materials that can also work as batteries represents something of a holy grail for engineers. And in a letter published online in Nano Letters last week, a team of researchers from Vanderbilt University describes what it says is a breakthrough in turning that dream into an electrocharged reality.

    The researchers etched nanopores into silicon layers, which were infused with a polyethylene oxide-ionic liquid composite and coated with an atomically thin layer of carbon. In doing so, they created small but strong supercapacitor battery systems, which stored electricity in a solid electrolyte, instead of using corrosive chemical liquids found in traditional batteries. These supercapacitors could store and release about 98 percent of the energy that was used to charge them, and they held onto their charges even as they were squashed and stretched at pressures up to 44 pounds per square inch. Small pieces of them were even strong enough to hang a laptop from—a big, fat Dell, no less.

    Although the supercapacitors resemble small charcoal wafers, they could theoretically be molded into just about any shape, including a cell phone’s casing or the chassis of a sedan. They could also be charged—and evacuated of their charge—in less time than is the case for traditional batteries.

    “We’ve demonstrated, for the first time, the simple proof-of-concept that this can be done,” says Cary Pint, an assistant professor in the university’s mechanical engineering department and one of the authors of the new paper. “Now we can extend this to all kinds of different materials systems to make practical composites with materials specifically tailored to a host of different types of applications. We see this as being just the tip of a very massive iceberg.”

    Pint says potential applications for such materials would go well beyond “neat tech gadgets,” eventually becoming a “transformational technology” in everything from rocket ships to sedans to home building materials. “These types of systems could range in size from electric powered aircraft all the way down to little tiny flying robots, where adding an extra on-board battery inhibits the potential capability of the system,” Pint says. And they could help the world shift to the intermittencies of renewable energy power grids, where powerful batteries are needed to help keep the lights on when the sun is down or when the wind is not blowing. “Using the materials that make up a home as the native platform for energy storage to complement intermittent resources could also open the door to improve the prospects for solar energy on the U.S. grid,” Pint says. “I personally believe that these types of multifunctional materials are critical to a sustainable electric grid system that integrates solar energy as a key power source.”

    Read the article

  • Best RAID setup for multimedia fileserver?

    - by Mr. Schwabe
    I'm building a fileserver for my small office. We do film and multimedia design, with only 3 clients connected. The server is primarily for local access to graphic assets and video files. I'm looking for advice on the hardware and software required, particularly for the RAID. I have the following objectives:

    A) Merged capacity. I'd like all other systems to access the data as a single mapped network drive that has an initial capacity of 10 TB. So perhaps 5x 2 TB drives (plus mirror drives for redundancy).

    B) Easy way to increase capacity. Thinking long term, I'd like to 'easily' add more drives to the array for a potential two- or three-fold increase in capacity. So theoretically it could grow to a 30 TB array consisting of maybe 15x 2 TB drives of capacity (plus mirror drives for redundancy).

    C) Maximum fault tolerance. I want at least 1 mirror drive per capacity drive (in layman's terms). So if I start with 10 TB / 5x 2 TB of capacity, I suppose I would need another 5x 2 TB drives to be mirrors - 10 drives total. But I'd also like the potential for even more redundancy, with up to 2 additional mirrors per 'capacity drive' (and to be able to add them to the array at any time with ease).

    D) Easy way to monitor drive health. I'd like an intuitive interface for managing the RAID and monitoring drive health.

    The other systems accessing this network drive will be running Windows, but also the odd Ubuntu and MacOS system as well. Are these objectives attainable? What type of RAID setup do you recommend? What hardware will be required? Also, what OS do you think this system should be running? Does it really matter? I'm no network admin - just a long-time Windoze user without much Linux experience. That said, I'm not opposed to a Linux solution if it's easy enough and more practical than a Windows OS for this server. Or maybe something such as Openfiler. Budget should hit the sweet spot for value and performance (hence my preference for 2 TB drives). The biggest focus is storage; aside from that, the system just needs to keep the drives running optimally with perhaps 2 or 3 clients accessing / writing files at any given time. The hardware quote would start with something like 10x 2 TB WD Caviar Blacks, about $1900 for the storage + $x for remaining parts. http://ncix.com/products/index.php?sku=42775&vpn=WD2001FASS&manufacture=Western%20Digital%20WD Your advice is appreciated, thanks!
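    As a rough sanity check on the capacity/redundancy trade-off asked about above, here is a back-of-the-envelope Python sketch. It assumes identical 2 TB drives as in the question; the RAID levels shown are common options for comparison, not a recommendation.

        # Back-of-the-envelope capacity math for the trade-offs discussed above.
        # Assumes identically sized drives; ignores filesystem and hot-spare overhead.
        def usable_tb(drives: int, size_tb: float, level: str) -> float:
            """Approximate usable capacity for a few common RAID levels."""
            if level == "raid10":   # every capacity drive has one mirror
                return (drives // 2) * size_tb
            if level == "raid5":    # one drive's worth of parity
                return (drives - 1) * size_tb
            if level == "raid6":    # two drives' worth of parity
                return (drives - 2) * size_tb
            raise ValueError(f"unknown level: {level}")

        for level in ("raid10", "raid5", "raid6"):
            print(level, usable_tb(drives=10, size_tb=2, level=level), "TB usable from 10 x 2 TB")
        # raid10 -> 10 TB, raid5 -> 18 TB, raid6 -> 16 TB

    RAID 10 matches the "one mirror per capacity drive" objective exactly (10 TB usable from ten 2 TB drives), while RAID 6 gives more usable space from the same drives and survives any two drive failures, at the cost of slower writes.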

    Read the article

  • Firefox: non-Vimperator way to do mouseless browsing?

    - by Peter Mortensen
    Is it possible to do efficient browsing with Firefox using only the keyboard (like in Opera)? By efficient I mean something faster than using Tab - that takes far too long. The arrow keys should be for navigation (in Opera it is Shift + arrow key). It can be done with the Vimperator add-on, but isn't there a simpler way?

    Update 2: the closest to Opera's way is to enable caret navigation (F7 toggles this mode). It doesn't jump between links, so it is a little bit slower, but the normal navigation (arrow keys, Page Up, Page Down, etc.) works and the focus/caret/cursor follows (in contrast to a text editor for Page Up/Down). And text can be selected and copied like in a text editor. The biggest drawback is that in practice it is necessary to switch in and out of caret mode, and there is no indication of which mode is currently active.

    Update 1: a workaround (proposed by several people but not really what I am looking for) can be used if 3 settings are changed (to make it practical). After these changes, the first few letters of a link's text can be typed and that link will be selected, so pressing Enter will open it. Using the workaround, the screen will jump around if it is a long page, as it does not restrict itself to the currently visible page, but it is usable.

    First settings change: menu Tools / Options / Advanced / tab General / Accessibility / "Search for text when I start typing". Turn this option on.

    Second settings change: set the option to only go to links. In the address bar enter about:config followed by Enter. Then press "I'll be careful, I promise", find the line accessibility.typeaheadfind.linksonly (it was on line 11 when I tried), select it and change the value to True by either hitting Enter or Shift+F10 / Toggle.

    Third settings change: turn off case sensitivity. Set accessibility.typeaheadfind.casesensitive to 0 (same procedure as for accessibility.typeaheadfind.linksonly, see above; when Enter is pressed a dialog box will appear with the current value - type 0 and press Enter).

    To use: type some part of the link text. If there are several possibilities use Ctrl+G (or F3) to jump between them. Use Ctrl+Enter to open in a new tab.

    Platform: Firefox 3.0.6, Windows XP 64 bit SP2.
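    If you end up applying those three changes on more than one machine, the same prefs can be dropped into the profile's user.js file, which Firefox reads at startup. A minimal Python sketch follows; the profile path is a placeholder, and accessibility.typeaheadfind is assumed to be the pref behind the "Search for text when I start typing" checkbox.

        # Sketch: append the three type-ahead-find prefs described above to user.js.
        # PROFILE_DIR is a placeholder - substitute your own Firefox profile folder.
        from pathlib import Path

        PROFILE_DIR = Path(r"C:\path\to\your\firefox\profile")  # hypothetical path

        prefs = [
            'user_pref("accessibility.typeaheadfind", true);',            # assumed: "search for text when I start typing"
            'user_pref("accessibility.typeaheadfind.linksonly", true);',  # only match links
            'user_pref("accessibility.typeaheadfind.casesensitive", 0);', # case-insensitive
        ]

        with open(PROFILE_DIR / "user.js", "a", encoding="ascii") as f:
            f.write("\n".join(prefs) + "\n")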

    Read the article

  • Should I use EXT4 or XFS to be able to 'sync'/back up to S3?

    - by Rafa
    It's my first message here, so bear with me... (I have already checked quite a few of the "Related Questions" suggested by the editor.)

    Here's the setup: a brand new dedicated server (8 GB RAM, some 140+ GB disk, RAID 1 via hardware controller, 15000 RPM drives) running Ubuntu Server 10.04 LTS 64-bit. It's a production web server (with MySQL on it too, not just serving web requests), not a personal desktop computer or similar.

    We have an Amazon EC2+EBS setup with the EBS volume formatted as XFS for easily taking snapshots to S3 via AWS' console. We are now migrating to the dedicated server and I want to be able to back up our data to Amazon's S3. The main reason is the possibility of using the latest snapshot from an EC2 instance in case of hardware failure on the dedicated server. There are two approaches I am thinking of:

    1. Do a "simple" file-based backup with rsync, dumping the database and other files, and uploading to Amazon via S3 API commands, or to an EC2 instance, or something.

    2. Do a file-system "freeze" (using XFS) with the usual EBS/EC2 snapshot tool to take part of the file system, take a snapshot, and upload it to Amazon.

    Here's my question (or series of questions): Can I safely use XFS for the whole system as the main and only format on the dedicated server? If not, is it safe to use EXT4, or should I use something else? Would it then be possible to make snapshots of the system to upload to Amazon? Is it possible/feasible/practical to do what I want to do, anyway? Any recommendations? When searching around for S3/EBS/XFS, anything relevant to my problem is usually focused on taking snapshots of an XFS system that is already an EBS volume. My intention is to do it on a "real"/bare-metal dedicated server.

    Update: I just saw this on Wikipedia: "XFS does not provide direct support for snapshots, as it expects the snapshot process to be implemented by the volume manager." I had always assumed that I could choose 2 ways of doing snapshots: via LVM or via XFS (without LVM). After reading this, I realize these 2 options are more like it:

    With XFS only: 1) do xfs_freeze; 2) copy the frozen files via, e.g., rsync; 3) unfreeze XFS.

    With LVM and XFS: 1) do xfs_freeze; 2) make a binary copy of the frozen filesystem via lvcreate and related commands; 3) unfreeze XFS; 4) somehow back up the LVM snapshot.

    Thanks a lot in advance. Let me know if I need to clarify something.
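    For what it's worth, the file-based approach and the "freeze, copy, unfreeze" sequence from the update can be combined in one script. The sketch below is a minimal illustration only: the mount point, dump location and rsync destination are placeholders, and it assumes xfs_freeze, rsync and mysqldump are installed and run as root.

        # Minimal sketch of approach 1 above: dump the database, freeze the XFS
        # filesystem, rsync the frozen files off the box, then unfreeze.
        # Paths and the destination are placeholders. The MySQL dump is taken
        # separately because a filesystem freeze alone does not guarantee a
        # consistent InnoDB backup.
        import subprocess

        MOUNTPOINT = "/data"                      # hypothetical XFS mount
        DEST = "backup@ec2-host:/backups/web/"    # hypothetical rsync target

        def run(cmd):
            print("+", " ".join(cmd))
            subprocess.run(cmd, check=True)

        run(["mysqldump", "--all-databases", "--single-transaction",
             "--result-file=/data/dumps/all.sql"])   # consistent DB dump first

        run(["xfs_freeze", "-f", MOUNTPOINT])        # block new writes
        try:
            run(["rsync", "-a", "--delete", MOUNTPOINT + "/", DEST])
        finally:
            run(["xfs_freeze", "-u", MOUNTPOINT])    # always unfreeze, even on error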

    Read the article

  • Broadcom HT1100 SATA controller not working properly with 1TB drives

    - by Jeff C
    I've been using RHEL distros for several years and have always managed to find the answers until now. I know this is more of a hardware issue, but I've been working on this for over a week and trust Linux and the IT community to help more than HP. I have CentOS 6.3 installed on an HP ProLiant DL145 G3 server with the Broadcom HT1100 I/O controller and ServerWorks SATA Controller MMIO BIOS v3.0.0015.6 firmware. This controller does not fully support large drives. Here's what I've tried and the results:

    1. Stock setup - freezes on the ServerWorks POST screen. I can't even enter CMOS setup without disconnecting the drives.

    2. If I simply disconnect the SATA cables before it gets to the ServerWorks screen and reconnect them afterwards, I can boot from a CD, USB or PXE fine. However, fiddling with cables at every boot isn't practical.

    3. If I enter the BIOS config I can set it to not try booting the drives but leave the controller enabled. This lets me boot normally, but the drives are not visible in the OS (live CDs or USB installs).

    I used method #2 to install and update CentOS. I have the /boot partition on a USB drive (everything else is on the SATA drives in software RAID 1), hoping that would work around the issue, but I get this:

    Kernel panic - not syncing: Attempted to kill init!
    Pid: 1, comm: init Not tainted 2.6.32-279.9.1.el6.x86_64 #1
    Call Trace:
    [<ffffffff814fd6ba>] ? panic+0xa0/0x168
    [<ffffffff81070c22>] ? do_exit+0x862/0x870
    [<ffffffff8117cdb5>] ? fput+0x25/0x30
    [<ffffffff81070c88>] ? do_group_exit+0x58/0xd0
    [<ffffffff81070d17>] ? sys_exit_group+0x17/0x20
    [<ffffffff8100b0f2>] ? system_call_fastpath+0x16/0x1b
    panic occurred, switching back to text console

    I'm sure it should be possible to talk to the drives without the BIOS boot check, since the BIOS doesn't see them in method #2 either (they're disconnected when it checks), but Linux sees them fine. If anyone could help me figure out how, I would greatly appreciate it! The other possible option I've come across is a complex firmware update. Tyan has a few boards on their website with the HT1100 and a ServerWorks v3.0.0015.7 update which says "adds support for TB drives" in the release notes. If someone could help me get the Tyan SATA firmware into the HP ROM file so I could just reflash, that would also be very much appreciated. Thanks for any help you guys can offer!

    Read the article

  • ASA5505 Novice: Setting up Outside/Inside and DMZ as Guest Network

    - by GriffJ
    I need a little help in developing a config for our ASA5505. I'm an MCSA/MCITPAS but I don't have a lot of practical Cisco experience. Here is what I need help with: we currently have a PIX as our border gateway and, well, it's antiquated and it only has a 50-user license, which means I'm constantly clearing local-host throughout the day as people complain. I discovered that the last IT person bought a couple of ASA5505s and they've been sitting in the back of a cupboard. So far I've duplicated the configuration from the PIX to the ASA, but as I was going this far I thought I'd go further and remove another old Cisco router that was used only for the guest network; I know the ASA can do both jobs. So I'm going to paste a scenario I wrote up, with the actual IPs changed to protect the innocent.
    ...
    Outside Network: 1.2.3.10 255.255.255.248 (we have a /29)
    Inside Network: 10.10.36.0 255.255.252.0
    DMZ Network: 192.168.15.0 255.255.255.0

    Outside Network on e0/0
    DMZ Network on e0/1
    Inside Network on e0/2-7

    DMZ Network has DHCPD enabled. DMZ DHCPD pool is 192.168.15.50-192.168.15.250.
    DMZ Network needs to be able to see DNS on the Inside Network at 10.10.37.11 and 10.10.37.12.
    DMZ Network needs to be able to access webmail on the inside network at 10.10.37.15.
    DMZ Network needs to be able to access the business website on the inside network at 10.10.37.17.
    DMZ Network needs to be able to access the outside network (access to the internet).
    Inside Network has NO DHCPD (DHCP is handled by the domain controller).
    Inside Network needs to be able to see anything on the DMZ network.
    Inside Network needs to be able to access the outside network (access to the internet).

    There is some access-list stuff and some static mapping already. This maps external IPs from our ISP to our inside server IPs:

    static (inside,outside) 1.2.3.11 10.10.37.15 netmask 255.255.255.255
    static (inside,outside) 1.2.3.12 10.10.37.17 netmask 255.255.255.255
    static (inside,outside) 1.2.3.13 10.10.37.20 netmask 255.255.255.255

    This allows access to our webserver/mailserver/VPN from the outside:

    access-list 108 permit tcp any host 1.2.3.11 eq https
    access-list 108 permit tcp any host 1.2.3.11 eq smtp
    access-list 108 permit tcp any host 1.2.3.11 eq 993
    access-list 108 permit tcp any host 1.2.3.11 eq 465
    access-list 108 permit tcp any host 1.2.3.12 eq www
    access-list 108 permit tcp any host 1.2.3.12 eq https
    access-list 108 permit tcp any host 1.2.3.13 eq pptp

    Here is all the NAT and route stuff I have so far:

    global (outside) 1 interface
    global (outside) 2 1.2.3.11-1.2.3.14 netmask 255.255.255.248
    nat (inside) 1 0.0.0.0 0.0.0.0
    nat (dmz) 1 0.0.0.0 0.0.0.0
    route outside 0.0.0.0 0.0.0.0 1.2.3.9 1

    Read the article

  • Have a server, need to figure out a method of backup

    - by PolishHurricane
    My company has an older Dell 2650 server running Arch Linux x64: http://www.dell.com/downloads/global/products/pedge/en/2650_specs.pdf (2 x 2.4 GHz Intel Xeon with around 3287 MB of RAM according to "free -m"). We use it to host our internal company site and to post some information from our orders to, and we'd like the ability to keep it up as much as possible.

    What we require:
    - It needs to always be functional from 8am to 4pm for our data entry person to use it and for others to do other things required on it.
    - If it goes down, we need a quick way to get the machine running again.
    - If it goes down, we would like to have the data backed up.

    Some of the major problems include:
    - The server's old and it may have memory issues.
    - We don't know when one of the hard drives could fail.
    - Our power goes out here once in a while.

    We have a battery backup, but that's pretty much it, and it's not for the long term. If the server does go down, we have another system in place to store order information that comes in while it's down and repost it when it's back, but we need it up during the day.

    So we're wondering, what should we get for options? These are the things we thought of, sort of:
    - Set up RAID 1? But that would involve wiping everything, right? If we do that, how would we transfer the data over without messing up the server?
    - We could buy an extra server or 2 off eBay for $100, the same model - is that practical, or should we get something else?
    - Should we buy a PC or another, better server and host off that, because it would be easier to exchange parts?
    - Should we keep extra parts handy in case it implodes?
    - Should we buy/use backup software? We hear Drobos are cool, but suck. Perhaps there is a software solution to this problem that backs up to another machine or gets us up and running again quickly.

    Also, if we are to purchase hardware, what is decent? Does anybody know of a good option for Arch Linux/Linux? We both know a ton about computers but we're kind of unsure what step to take with this, especially with this type of server. Thanks
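    On the "should we buy/use backup software" point (see the list above), the data-backup requirement can be covered before any purchase with a small cron-driven script that dumps the site's database and copies everything to a second machine. A rough sketch, assuming a MySQL-backed site; every host and path below is a placeholder:

        # Rough nightly-backup sketch for the requirements above: dump the site's
        # database, then rsync the dumps plus the web root to a second machine.
        # All hosts and paths are placeholders; run from cron outside 8am-4pm.
        import datetime
        import subprocess

        WEB_ROOT = "/srv/http"                         # hypothetical Arch Linux web root
        BACKUP_HOST = "backup@spare-server:/backups/"  # hypothetical second box
        STAMP = datetime.date.today().isoformat()

        subprocess.run(
            ["mysqldump", "--all-databases", f"--result-file=/var/backups/db-{STAMP}.sql"],
            check=True,
        )
        subprocess.run(
            ["rsync", "-a", WEB_ROOT, "/var/backups/", BACKUP_HOST],
            check=True,
        )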

    Read the article

  • Local dns for testing websites using mobile devices

    - by Morpheu5
    Hi. I have no idea where to start from, so sorry in advance if this topic has already been discussed. I usually develop web sites using my laptop as a development server, and recently I needed to test a web site using various mobile devices that can connect via wifi. Having no real AP, I set up an ad-hoc network using my laptop's wireless card, and the devices can correctly browse the Internet and access the laptop's web server. The setup is as follows:

    subnet: 192.168.1.0/24
    gateway to the Internet (wired ADSL router/modem): 192.168.1.1
    laptop: 192.168.1.64 (eth0, the wired interface connected to the gateway) and 192.168.1.32 (eth1, the wifi interface, somewhat bridged to eth0)
    mobile devices (same for all; I only use one of them at any time for simplicity): 192.168.1.11 with default gw 192.168.1.1

    Now, if I open either 192.168.1.32 or 192.168.1.64 from the mobile devices, I correctly get the default host of my Apache configuration. However, I usually work with virtual hosts for many practical reasons, one of which is Drupal's peculiar implementation of multi-sites. For those who don't know how this works, Drupal takes the request's hostname and searches its sites/ subdirectories for an appropriate configuration file. So, for example, suppose I request www.example.com; Drupal would search for a config file in the following directories:

    sites/www.example.com/
    sites/example.com/
    sites/com/
    sites/default/

    So I decided to adopt the following style of virtual hosts: if the website I'm working on will be accessible using www.example.com, I set up a sites/www.example.com/ directory and create a virtual host for local.www.example.com so Drupal has no trouble finding it. I've been told this is suboptimal from a DNS point of view, since I'd have to create an authoritative entry for example.com and turn BIND on only when I'm supposed to access the local copy, which is weird. However, if this is the only path I can follow, I still have some problems with BIND's configuration, as I couldn't find any guide that tells me, in a clear, noob-friendly way, how to set up such an entry. On the other hand, I was wondering if I could set up an authoritative entry for local, so I could access www.example.com.local and tell Apache in some way (which I don't even know is possible) to put www.example.com instead of www.example.com.local in the relevant environment variable.

    Anyway, I have one last problem, sort of: when I launch BIND in debug mode with high verbosity, and make 192.168.1.32 the primary DNS for the devices, the output doesn't say anything about requests being made from the devices to BIND, so I'm not even sure it comes into play. As you can see, I'm a complete noob at these matters, but I'm eager to learn, so any help/pointers will be appreciated.
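    To make the lookup order concrete, here is a small Python sketch of the directory search described above for Drupal multi-site. It is an illustration of that behaviour only, not Drupal's actual code (which also considers ports and path prefixes).

        # Illustration of the multi-site lookup described above: strip leading
        # labels off the requested hostname until a matching sites/ directory is
        # found, falling back to sites/default.
        import os

        def find_site_dir(hostname: str, sites_root: str = "sites") -> str:
            labels = hostname.split(".")
            while labels:
                candidate = os.path.join(sites_root, ".".join(labels))
                if os.path.isdir(candidate):
                    return candidate
                labels.pop(0)   # www.example.com -> example.com -> com
            return os.path.join(sites_root, "default")

        # find_site_dir("www.example.com") checks sites/www.example.com,
        # sites/example.com, sites/com, then falls back to sites/default -
        # which is also why a virtual host named local.www.example.com still
        # resolves to sites/www.example.com once the leading label is dropped.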

    Read the article

  • Productivity Tips

    - by Brian T. Jackett
    A few months ago during my first end of year review at Microsoft I was doing an assessment of my year.  One of my personal goals to come out of this reflection was to improve my personal productivity.  While I hear many people say “I wish I had more hours in the day so that I could get more done” I feel like that is the wrong approach.  There is an inherent assumption that you are being productive with your time that you already have and thus more time would allow you to be as productive given more time.    Instead of wishing I could add more hours to the day I’ve begun adopting a number of processes or behavior changes in my personal life to make better use of my time with the goal of improving productivity.  The areas of focus are as follows: Focus Processes Tools Personal health Email Note: A number of these topics have spawned from reading Scott Hanselman’s blog posts on productivity, reading of David Allen’s book Getting Things Done, and discussions with friends and coworkers who had great insights into this topic.   Focus Pre-reading / viewing: Overcome your work addiction Millennials paralyzed by choice Its Not What You Read Its What You Ignore (Scott Hanselman video)    I highly recommend Scott Hanselman’s video above and this post before continuing with this article.  It is well worth the 40+ mins price of admission for the video and couple minutes for article.  One key takeaway for me was listing out my activities in an average week and realizing which ones held little or no value to me.  We all have a finite amount of time to work each day.  Do you know how much time and effort you spend on various aspects of your life (family, friends, religion, work, personal happiness, etc.)?  Do your actions and commitments reflect your priorities?    The biggest time consumers with little value for me were time spent on social media services (Twitter and Facebook), playing an MMO video game, and watching TV.  I still check up on Facebook, Twitter, Microsoft internal chat forums, and other services to keep contact with others but I’ve reduced that time significantly.  As for TV I’ve cut the cord and no longer subscribe to cable TV.  Instead I use Netflix, RedBox, and over the air channels but again with reduced time consumption.  With the time I’ve freed up I’m back to working out 2-3 times a week and reading 4 nights a week (both of which I had been neglecting previously).  I’ll mention a few tools for helping measure your time in the Tools section.   Processes    Do not multi-task.  I’ll say it again.  Do not multi-task.  There is no such thing as multi tasking.  The human brain is optimized to work on one thing at a time.  When you are “multi-tasking” you are really doing 2 or more things at less than 100%, usually by a wide margin.  I take pride in my work and when I’m doing something less than 100% the results typically degrade rapidly.    Now there are some ways of bending the rules of physics for this one.  There is the notion of getting a double amount of work done in the same timeframe.  Some examples would be listening to podcasts / watching a movie while working out, using a treadmill as your work desk, or reading while in the bathroom.    Personally I’ve found good results in combining one task that does not require focus (making dinner, playing certain video games, working out) and one task that does (watching a movie, listening to podcasts).  I believe this is related to me being a visual and kinesthetic (using my hands or actually doing it) learner.  
I’m terrible with auditory learning.  My fiance and I joke that sometimes we talk and talk to each other but never really hear each other.   Goals / Tasks    Goals can give us direction in life and a sense of accomplishment when we complete them.  Goals can also overwhelm us and give us a sense of failure when we don’t complete them.  I propose that you shift your perspective and not dwell on all of the things that you haven’t gotten done, but focus instead on regularly setting measureable goals that are within reason of accomplishing.    At the end of each time frame have a retrospective to review your progress.  Do not feel guilty about what you did not accomplish.  Feel proud of what you did accomplish and readjust your goals for the next time frame to more attainable goals.  Here is a sample schedule I’ve seen proposed by some.  I have not consistently set goals for each timeframe, but I do typically set 3 small goals a day (this blog post is #2 for today). Each day set 3 small goals Each week set 3 medium goals Each month set 1 large goal Each year set 2 very large goals   Tools    Tools are an extension of our human body.  They help us extend beyond what we can physically and mentally do.  Below are some tools I use almost daily or have found useful as of late. Disclaimer: I am not getting endorsed to promote any of these products.  I just happen to like them and find them useful. Instapaper – Save internet links for reading later.  There are many tools like this but I’ve found this to be a great one.  There is even a “read it later” JavaScript button you can add to your browser so when you navigate to a site it will then add this to your list. Stacks for Instapaper – A Windows Phone 7 app for reading my Instapaper articles on the go.  It does require a subscription to Instapaper (nominal $3 every three months) but is easily worth the cost.  Alternatively you can set up your Kindle to sync with Instapaper easily but I haven’t done so. SlapDash Podcast – Apps for Windows Phone and  Windows 8 (possibly other platforms) to sync podcast viewing / listening across multiple devices.  Now that I have my Surface RT device (which I love) this is making my consumption easier to manage. Feed Reader – Simple Windows 8 app for quickly catching up on my RSS feeds.  I used to have hundreds of unread items all the time.  Now I’m down to 20-50 regularly and it is much easier and faster to consume on my Surface RT.  There is also a free version (which I use) and I can’t see much different between the free and paid versions currently. Rescue Time – Have you ever wondered how much time you’ve spent on websites vs. email vs. “doing work”?  This service tracks your computer actions and then lets you report on them.  This can help you quantitatively identify areas where your actions are not in line with your priorities. PowerShell – Windows automation tool.  It is now built into every client and server OS.  This tool has saved me days (and I mean the full 24 hrs worth) of time and effort in the past year alone.  If you haven’t started learning PowerShell and you administrating any Windows OS or server product you need to start today. Various blogging tools – I wrote a post a couple years ago called How I Blog about my blogging process and tools used.  Almost all of it still applies today.   Personal Health    Some of these may be common sense or debatable, but I’ve found them to help prioritize my daily activities. Get plenty of sleep on a regular basis.  
Sacrificing sleep too many nights a week negatively impacts your cognition, attitude, and overall health.
- Exercise at least three days a week. Exercise could be lifting weights, taking the stairs up multiple flights, walking for 20 minutes, or a number of other "non-traditional" activities. I find that regular exercise helps with sleep and improves my overall attitude.
- Eat a well-balanced diet. Too much sugar, caffeine, junk food, etc. is not good for your body. This is not a matter of losing weight but of taking care of your body and helping you perform at your peak potential.

Email

Email can be one of the biggest time consumers (i.e. wasters) if you aren't careful.
- Time-box your email usage. Set a meeting invite for yourself if necessary to limit how much time you spend checking email.
- Use rules to prioritize your email. Email from external customers, from my manager, or that includes me directly on the To line goes into my inbox. Everything else goes a level down, and I have 30+ rules to further sort it, mostly distribution lists.
- Use keyboard shortcuts (when available). I use Outlook for my primary email and am constantly hitting Alt + S to send, Ctrl + 1 for my inbox, Ctrl + 2 for my calendar, Space / Tab / Shift + Tab to mark items as read, and a number of other useful commands. Learn them and you'll see your speed getting through emails increase.
- Keep emails short. Few people like reading through long emails. The first line should state exactly why you are sending the email, followed by 3-4 lines to support it. Anything longer might be better suited to a phone call or in-person discussion.

Conclusion

In this post I walked through various tips and tricks I've found for improving personal productivity. It is a mix of re-focusing on the things that matter, using tools to assist in your efforts, and cutting out actions that are not aligned with your priorities. I originally had a whole section on keyboard shortcuts, but with my recent purchase of the Surface RT I'm finding that touch gestures have replaced numerous keyboard commands that I used to need. I see a big future in touch-enabled devices. Hopefully some of these tips help you out. If you have any tools, tips, or ideas you would like to share, feel free to add them in the comments section.
-Frog Out

Links

Scott Hanselman Productivity posts: http://www.hanselman.com/blog/CategoryView.aspx?category=Productivity
Overcome your work addiction: http://blogs.hbr.org/hbsfaculty/2012/05/overcome-your-work-addiction.html?awid=5512355740280659420-3271
Millennials paralyzed by choice: http://priyaparker.com/blog/millennials-paralyzed-by-choice
Its Not What You Read Its What You Ignore (video): http://www.hanselman.com/blog/ItsNotWhatYouReadItsWhatYouIgnoreVideoOfScottHanselmansPersonalProductivityTips.aspx
Cutting the cord – Jeff Blankenburg: http://www.jeffblankenburg.com/2011/04/06/cutting-the-cord/
Building a sitting standing desk – Eric Harlan: http://www.ericharlan.com/Everything_Else/building-a-sitting-standing-desk-a229.html
Instapaper: http://www.instapaper.com/u
Stacks for Instapaper: http://www.stacksforinstapaper.com/
Slapdash Podcast (Windows Phone): http://www.windowsphone.com/en-us/store/app/slapdash-podcasts/90e8b121-080b-e011-9264-00237de2db9e
Slapdash Podcast (Windows 8): http://apps.microsoft.com/webpdp/en-us/app/slapdash-podcasts/0c62e66a-f2e4-4403-af88-3430a821741e/m/ROW
Feed Reader: http://apps.microsoft.com/webpdp/en-us/app/feed-reader/d03199c9-8e08-469a-bda1-7963099840cc/m/ROW
Rescue Time: http://www.rescuetime.com/
PowerShell Script Center: http://technet.microsoft.com/en-us/scriptcenter/bb410849.aspx

    Read the article

  • SOA Suite Integration: Part 2: A basic BPEL process

    - by Anthony Shorten
    This is the next in the series about SOA Suite integration with Oracle Utilities Application Framework. One of the first scenarios I am going to illustrate in this series is building a basic BPEL process using Web Service calls to the Oracle Utilities Application Framework. The scenario is this. I will pass in the userid and the BPEL process will call our the AS-User Web Service we created in Part 1. This is just a basic test and illustrate how to import the Web Service into SOA Suite. To use this scenario, you will need access to Oracle SOA Suite, access to a copy of any Oracle Utilities Application Framework based product and Oracle JDeveloper (to build the process). First of all you need to start Oracle JDeveloper and create a new SOA Project to house the BPEL process in. For the purposes of this example I will call the project simpleBPEL and verify that SOA is part of the project. I will select "Composite with BPEL" to denote it as a BPEL process. I can also the same process to create a Mediator or OSB project (refer to the JDeveloper documentation on these technologies). For this example I will use BPEL 1.1 as my specification standard (BPEL 2.0 can also be used if desired). I give the individual BPEL process as simpleBPEL (you can use a different name but I wanted to keep the project and process the same for this example). I will also build a Synchronous BPEL Process as I want a response from the Web Service. I will leave the defaults to save time. I have no have a blank canvas to build my BPEL process against. Note: for simplicity I am going to use as much defaulting as possible. In fact I am not going to specify an input schema for the incoming call as I will use the basic single field used by BPEL as default. The first step is to import the AS-User Web Service into my BPEL project. To do this I use the standard Web Service BPEL component from the Component Palette to import the WSDL into the BPEL project. Now the tricky part (a joke), you drag and drop the component from the Palette onto the right side of the canvas in the Partner Links swim lane. This swim lane is reserved for Partner Links that have a Partner Role (i.e. being called rather than calling). When you drop the Web Service onto the canvas the Create Web Service wizard is invoked to ask for details of the Web Service. At this point you give the BPEL node a name. I have used the name RetrieveUser as a name. I placed the WSDL URL from the XAI Inbound Service screen in the WSDL URL. Once you specify the URL you can press the Find existing WSDL's button to load the information into BPEL from the call. You will notice the Port Type is prefilled with the port from the WSDL. I also suggest that you check copy wsdl and it's dependent artifacts into the project if you intending to work on the BPEL process offline. If you do not check this your target application must be accessible when you work on the BPEL process (that is not always convenient). Note: For the perceptive of you will notice that the URL specified in this example is different to the URL in the last post. The reason is for the demonstrations I shifted to a new server and did not redo all of the past screen captures. If you copy the WSDL into the project you will get an information screen about Localize Files. It is just a confirmation screen. The last confirmation screen is a summary of the partner link (the main tab is locked for editing at this stage). At this stage you have successfully imported the Web Service. 
To complete the setup of the Web Service you need to set the credentials for the Web Service to use. Refer to the past post on how to do that. Now to use the Web Service. To call the Web Service (as it is just imported not connected to the BPEL process yet), you must add an Invoke action to your BPEL Process. To do this, select Invoke action from the BPEL Constructs zone on the Component Palette and drop it on the edit nodes between the receiveInput and replyOutput nodes This will create an empty Invoke action. You will notice some connectors on the Invoke node. Grab the node closest to your Web Service and drag it to connect the Invoke to your Web Service. This instructs BPEL to use the Invoke to call the Web Service. Once the Invoke action is connected to the Web Service an Edit Invoke edit dialog is displayed. At this point I suggest you name the Invoke node. It is important to name the nodes straightaway and name them appropriately for you to trace the logic. I used InvokeUser as the name in this example. To complete the node configuration you must create Variables to hold the input and output for the call. To do this clock on Automatically Create Input Variable on the Edit Invoke dialog. You will be presented with a default variable name. It uses the node name (that is why it is important to name the node before hitting this button) as a prefix. You can name the variable anything but I usually take the default. Repeat the same for the output variable. You now have a completed node for invoking the service. You have a very basic BPEL process which contains an input, invoke and output node. It is not complete yet though. You need to tell the BPEL process how to pass data from the input to the invoke step and how to take the output from the service call and pass it back to the service. You need to now add an Assign node to assign the input to the Web Service. To do this select Assign activity from BPEL Constructs zone in the Component Palette. Drag and drop the Assign activity between the receiveInput and InvokeUser nodes as you want to pass data between these two nodes. You have now added a new Assign node to your BPEL process Double clicking the node allows you to specify the name of the node. I use AssignUser to describe that I am assigning user data. On the Copy Rules tab you can specify the mapping between the input variable InputVariable/payload/process/input string and the input variable for the Web Service call. We are passing data from the input to BPEL to the relevant input variable on the Web Service. This is simply drag and drop between the two data structures. In the example, I am using the input to pass to the user element in my Web Service as the user is the primary key for the object. The fields become linked (which means data from source will be copied to target). Almost there. You now need to process the output from the Web Service call to the outputVariable of the client call. I have decided to pass back one piece of data, the name associated with the user by concatenating the firstName and lastName elements from the Web Service call. To do this I will use a Transform as it is not just a matter of an Assign action. It is a concatenation operation. This also illustrates how you can use BPEL functionality to transform data from a Web Service call. As with the other components you drag and drop the Transform component to the appropriate place in the BPEL process. 
In this case we want to transform the output from the Web Service call so we want it after the InvokeUser action and the replyOutput action. The Transform component is actually part of the Oracle Extensions to the BPEL specification. Double clicking the Transform node will allow you to name the node.  In this example I used TransformName. To complete the transform I need to tell the product the source of the transformation and the target of the transform. In the example this is the InvokeUser output variable. I also named the mapper file to TransformName. By clicking the + or pencil icon next to the map I can create the map. The mapping screen is shows the source and target schemas for me to map across. As with the assign I can map the relevant elements. In my example, I first map the firstName from the Web Service to the result element. As I want to concatenate the names, I drop the concat function on the call line. I now attach the last name to the function to indicate the concatenation of the field. By default the names will be concatenated with no space. To make the name legible I add a space between the field by clicking the function and adding a space in the call. I now have a completed mapping. I can now save the whole project as my BPEL process is now complete. As you can see the following happens: We accept input from the client (the userid for the call) in the receiveInput step. We assign that value to the input parameters for the Web Service call in the AssignUser step. We invoke the Web Service call to retrieve the data from the product in the InvokeUser step. We take the output from the InvokeUser step and concatenate the names in the TransformName step. We pass back the data in the replyOutput step. At this point we can deploy the BPEL process to the SOA Suite server. I will not cover this aspect as it really all SOA Suite specific (it is all done via Oracle JDeveloper). Now we need to test the service in SOA Suite. We will use the Fusion Middleware Control test facility. I will assume that credentials have also been setup as per our previous post (else you will get a 401 error). You navigate to the deployed BPEL process within Fusion Middleware Control and select the Test Service option. Specify some test data on the payload at the bottom of the Test Service screen. In my case I am returning my own userid information. On the response tab you will see the result. It works. You can verify the steps using the Audit trace facility on individual calls. As you can see this is a basic BPEL but you get the idea of importing the Web Service is pretty straightforward. You can create more sophisticated BPEL processes using the full facilities in Oracle SOA Suite. I just showed you the basic principals.
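For completeness, the same test that Fusion Middleware Control performs can also be scripted. The sketch below posts a bare SOAP envelope to the deployed composite with Python; the endpoint URL and target namespace are placeholders modelled on JDeveloper's defaults for a synchronous process, so read the real values from the deployed composite's WSDL, and add HTTP basic authentication if the composite is secured as described in the earlier post (otherwise you will get the 401 mentioned there).

    # Hypothetical sketch: call the deployed simpleBPEL process with a raw SOAP
    # request, mirroring the Fusion Middleware Control "Test Service" step.
    # ENDPOINT and NAMESPACE are placeholders - take them from the composite's WSDL.
    import urllib.request

    ENDPOINT = "http://soa-server:8001/soa-infra/services/default/simpleBPEL/simplebpel_client_ep"
    NAMESPACE = "http://xmlns.oracle.com/simpleBPEL"   # assumed target namespace

    envelope = f"""<?xml version="1.0" encoding="UTF-8"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <process xmlns="{NAMESPACE}">
          <input>myuserid</input>
        </process>
      </soap:Body>
    </soap:Envelope>"""

    request = urllib.request.Request(
        ENDPOINT,
        data=envelope.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8", "SOAPAction": "process"},
    )
    # Add an Authorization header (HTTP basic auth) here if the composite is secured.
    print(urllib.request.urlopen(request).read().decode("utf-8"))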

    Read the article

  • Craftsmanship is ALL that Matters

    - by Wayne Molina
    Today, I'm going to talk about a touchy subject: the notion of working in a company that doesn't use the prescribed "best practices" in its software development endeavours. Over the years I have, using a variety of pseudonyms, asked this question on popular programming forums. Although I always add in some minor variation of the story to avoid suspicion that it's the same person posting, the crux of the tale remains the same:

    A Programmer's Tale

    A junior software developer has just started a new job at an average company, creating average line-of-business applications for internal use (the most typical scenario programmers find themselves in). This hypothetical newbie has spent a lot of time reading up on the "theory" of software development, devouring books, blogs and screencasts from well-known and respected software developers in the community in order to broaden his knowledge and "do what the pros do". He begins his new job, eager to apply what he's learned on a real-world project, only to discover that his new teammates don't use any of those concepts and techniques. They hack their way through development, or in a best-case scenario use some homebrew, thrown-together semblance of a framework for their applications that follows not one of the best practices suggested by the "elite" in the software community - things like TDD (TDD as a "best practice" is the only subjective part of this post, but it's included here due to a very large following of respected developers who consider it one), the SOLID principles, well-known and venerable tools, even version control in a worst-case and truly nightmarish scenario. Our protagonist is frustrated that he isn't doing things the "proper" way - a way he's spent personal time digesting and learning about and, more importantly, a way that some of the top developers in the industry advocate - and turns to a forum to ask the advice of his peers.

    Invariably the answer I, in the guise of the concerned newbie, will receive is that A) I don't know anything and should just shut my mouth and sling code the bad way like everybody else on the team, and B) these "best practices" are a fad or a joke, and the only thing that matters is shipping software to your customers. I am here today to say that anyone who says this, or anything like it, is not only full of crap but indicative of exactly the type of "developer" that has helped to give our industry a bad name. Here is why:

    One Who Knows Nothing, Understands Nothing

    On one hand, you have the cognoscenti of the .NET development world. Guys like James Avery, Jeremy Miller, Ayende Rahien and Rob Conery; all well-respected and noted programmers who are pretty much our version of celebrities. These guys write blogs, books, and post videos outlining the "correct" way of writing software to make sure it not only works but is maintainable and extensible and a joy to work with. They tout the virtues of the SOLID principles, or of using TDD/BDD, or using a mature ORM like NHibernate, Subsonic or even Entity Framework.

    On the other hand, you have Joe Everyman, Lead Software Developer at Initrode Corporation - in our hypothetical story Joe is the junior developer's new boss. Joe's been with Initrode for 10 years, starting as the company's very first programmer and over the years building up a little fiefdom of his own until at present he's in charge of all Initrode's software development. Joe writes code the same way he always has, without bothering to learn much, if anything.
He looked at NHibernate once and found it was "too hard", so he uses a primitive implementation of the TableDataGateway pattern as a wrapper around SqlClient.SqlConnection and SqlClient.SqlCommand instead of an actual ORM (or, in a better case scenario, has created his own ORM); the thought of using LINQ or Entity Framework or really anything other than his own hastily homebrew solution has never occurred to him.  He doesn't understand TDD and considers “testing” to be using the .NET debugger to step through code, or simply loading up an app and entering some values to see if it works.  He doesn't really understand SOLID, and he doesn't care to.  He's worked as a programmer for years, and that's all that counts.  Right?  WRONG. Who would you rather trust?  Someone with years of experience and who writes books, creates well-known software and is akin to a celebrity, or someone with no credibility outside their own minute environment who throws around their clout and company seniority as the "proof" of their ability?  Joe Everyman may have years of experience at Initrode as a programmer, and says to do things "his way" but someone like Jeremy Miller or Ayende Rahien have years of experience at companies just like Initrode, THEY know ten times more than Joe Everyman knows or could ever hope to know, and THEY say to do things "this way". Here's another way of thinking about it: If you wanted to get into politics and needed advice on the best way to do it, would you rather listen to the mayor of Hicktown, USA or Barack Obama?  One is a small-time nobody while the other is very well-known and, as such, would probably have much more accurate and beneficial advice. NOTE: The selection of Barack Obama as an example in no way, shape, or form suggests a political affiliation or political bent to this post or blog, and no political innuendo should be mistakenly read from it; the intent was merely to compare a small-time persona with a well-known persona in a non-software field.  Feel free to replace the name "Barack Obama" with any well-known Congressman, Senator or US President of your choice. DIY Considered Harmful I will say right now that the homebrew development environment is the WORST one for an aspiring programmer, because it relies on nothing outside it's own little box - no useful skill outside of the small pond.  If you are forced to use some half-baked, homebrew ORM created by your Director of Software, you are not learning anything valuable you can take with you in the future; now, if you plan to stay at Initrode for 10 years like Joe Everyman, this is fine and dandy.  However if, like most of us, you want to advance your career outside a very narrow space you will do more harm than good by sticking it out in an environment where you, to be frank, know better than everybody else because you are aware of alternative and, in almost most cases, better tools for the job.  A junior developer who understands why the SOLID principles are good to follow, or why TDD is beneficial, or who knows that it's better to use NHibernate/Subsonic/EF/LINQ/well-known ORM versus some in-house one knows better than a senior developer with 20 years experience who doesn't understand any of that, plain and simple.  Anyone who disagrees is either a liar, or someone who, just like Joe Everyman, Lead Developer, relies on seniority and tenure rather than adapting their knowledge as things evolve. 
In many cases, the Joe Everymans of the world act this way out of fear - they cannot possibly fathom that a “junior” could know more than them; after all, they’ve spent 10 or more years in the same company, doing the same job, cranking out the same shoddy software.  And here comes a newbie who hasn’t spent 10+ years doing the same things, with a fresh and often radical take on the craft, and Joe Everyman is afraid he might have to put some real effort into his career again instead of just pointing to his 10 years of service at Initrode as “proof” that he’s good, or that he might have to learn something new to improve; in most cases the problem is Joe Everyman, and by extension Initrode itself, has a mentality of just being “good enough”, and mediocrity is the rule of the day. A Thorn Bush is No Place for a Phoenix My advice is that if you work on a team where they don't use the best practices that some of the most famous developers in our field say is the "right" way to do things (and have legions of people who agree), and YOU are aware of these practices and can see why they work, then LEAVE the company.  Find a company where they DO care about quality, and craftsmanship, otherwise you will never be happy.  There is no point in "dumbing" yourself down to the level of your co-workers and slinging code without care to craftsmanship.  In 95% of these situations there will be no point in bringing it to the attention of Joe Everyman because he won't listen; he might even get upset that someone is trying to "upstage" him and fire the newbie, and replace someone with loads of untapped potential with a drone that will just nod affirmatively and grind out the tasks assigned without question. Find a company that has people smart enough to listen to the "best and brightest", and be happy.  Do not, I repeat, DO NOT waste away in a job working for ignorant people.  At the end of the day software development IS a craft, and a level of craftsmanship is REQUIRED for any serious professional.  When you have knowledgeable people with the credibility to back it up saying one thing, and small-time people who are, to put it bluntly, nobodies in the field saying and doing something totally different because they can't comprehend it, leave the nobodies to their own devices to fade into obscurity.  Work for a company that uses REAL software engineering techniques and really cares about craftsmanship.  The biggest issue affecting our career, and the reason software development has never been the respected, white-collar career it was meant to be, is because hacks and charlatans can pass themselves off as professional programmers without following a lick of good advice from programmers much better at the craft than they are.  These modern day snake-oil salesmen entrench themselves in companies by hoodwinking non-technical businesspeople and customers with their shoddy wares, end up in senior/lead/executive positions, and push their lack of knowledge on everybody unfortunate enough to work with/for/under them, crushing any dissent or voices of reason and change under their tyrannical heel and leaving behind a trail of dismayed and, often, unemployed junior developers who were made examples of to keep up the facade and avoid the shadow of doubt being cast upon them. To sum this up another way: If you surround yourself with learned people, you will learn.  Surround yourself with ignorant people who can't, as the saying goes, see the forest through the trees, and you'll learn nothing of any real value.  
There is more to software development than just writing code, and the end goal should not be just "shipping software", it should be shipping software that is extensible, maintainable, and above all else software whose creation has broadened your knowledge in some capacity, even if a minor one.  An eager newbie who knows theory and thirsts for knowledge can easily be moulded and taught the advanced topics, but the same can't be said of someone who only cares about the finish line.  This industry needs more people espousing the benefits of software craftsmanship and proper software engineering techniques, and less Joe Everymans who are unwilling to adapt or foster new ways of thinking. Conclusion - I Cast “Protection from Fire” I am fairly certain this post will spark some controversy and might even invite the flames.  Please keep in mind these are opinions and nothing more.  A little healthy rant and subsequent flamewar can be good for the soul once in a while.  To paraphrase The Godfather: It helps to get rid of the bad blood.

    Read the article

  • The Java Specialist: An Interview with Java Champion Heinz Kabutz

    - by Janice J. Heiss
    Dr. Heinz Kabutz is well known for his Java Specialists’ Newsletter, initiated in November 2000, where he displays his acute grasp of the intricacies of the Java platform for an estimated 70,000 readers; for his work as a consultant; and for his workshops and training courses at his home on the Island of Crete, where he has lived since 2006 and where he is known to curl up on the beach with his laptop to hack away, in between dips in the Mediterranean. Kabutz was born of German parents and raised in Cape Town, South Africa, where he developed a love of programming in junior high school through his explorations on a ZX Spectrum computer. He received a B.S. from the University of Cape Town, and at 25, a Ph.D., both in computer science. He will be leading a two-hour hands-on lab session, HOL6500 – “Finding and Solving Java Deadlocks,” at this year’s JavaOne that will explore what causes deadlocks and how to solve them.
Q: Tell us about your JavaOne plans. A: I am arriving on Sunday evening and have just one hands-on-lab to do on Monday morning. This is the first time that a non-Oracle team is doing a HOL at JavaOne under Oracle's stewardship and we are all a bit nervous about how it will turn out. Oracle has been immensely helpful in getting us set up. I have a great team helping me: Kirk Pepperdine, Dario Laverde, Benjamin Evans and Martijn Verburg from jClarity, Nathan Reynolds from Oracle, Henri Tremblay of OCTO Technology and Jeff Genender of Savoir Technologies. Monday will be hard work, but after that, I will hopefully get to network with fellow Java experts, attend interesting sessions and just enjoy San Francisco. Oh, and my kids have already given me a shopping list of things to get, like a GoPro Hero 2 dive housing for shooting those nice videos of Crete. (That's me at the beginning diving down.)
Q: What sessions are you attending that we should know about? A: Sometimes the most unusual sessions are the best. I avoid the "big names". They often are spread too thin with all their sessions, which makes it difficult for them to deliver what I would consider deep content. I also avoid entertainers who might be good at presenting but who do not say that much. In 2010, I attended a session by Vladimir Yaroslavskiy where he talked about sorting. Although he struggled to speak English, what he had to say was spectacular. There was hardly anybody in the room, since few had heard of Vladimir before. To me that was the highlight of 2010. Funnily enough, he was supposed to speak with Joshua Bloch, but if you remember, Google cancelled. If Bloch had been there, the room would have been packed to capacity.
Q: Give us an update on the Java Specialists’ Newsletter. A: The Java Specialists' Newsletter continues to be read by an elite audience around the world. The apostrophe in the name is significant. It is a newsletter for Java specialists. When I started it twelve years ago, I was trying to find non-obvious things in Java to write about. Things that would be interesting to an advanced audience. As an April Fool's joke, I told my readers in Issue 44 that subscribing would remain free, but that unsubscribing would cost US$5 to US$7, depending on their geographical location. I received quite a few angry emails from that one. I would not have earned that much from unsubscriptions. Most readers stay for a very long time. After Oracle bought Sun, the Java community held its breath for about two years whilst Oracle was figuring out what to do with Java.
For a while, we were quite concerned that there was not much progress shown by Oracle. My newsletter still continued, but it was quite difficult finding new things to write about. We have probably about 70,000 readers, which is quite a small number for a Java publication. However, our readers are the top in the Java industry. So I don't mind having "only" 70,000 readers, as long as they are the top 0.7%.
Java concurrency is a very important topic that programmers think they should know about, but often neglect to fully understand. I continued writing about that and made some interesting discoveries. For example, in Issue 165, I showed how we can get thread starvation with the ReadWriteLock. This was a bug in Java 5, which was corrected in Java 6, but perhaps a bit too much. Whereas we could get starvation of writers in Java 5, in Java 6 we could now get starvation of readers. All of these interesting findings make their way into my courseware to help companies avoid these pitfalls.
Another interesting discovery was how polymorphism works in the Server HotSpot compiler in Issue 157 and Issue 158. HotSpot can inline methods from interfaces that have only one implementation class in the JVM. When a new subclass is instantiated and called for the first time, the JVM will undo the previous optimization and re-optimize differently.
Here is a little memory puzzle for your readers:

    public class JavaMemoryPuzzle {
      private final int dataSize =
          (int) (Runtime.getRuntime().maxMemory() * 0.6);

      public void f() {
        {
          byte[] data = new byte[dataSize];
        }
        byte[] data2 = new byte[dataSize];
      }

      public static void main(String[] args) {
        JavaMemoryPuzzle jmp = new JavaMemoryPuzzle();
        jmp.f();
      }
    }

When you run this you will always get an OutOfMemoryError, even though the local variable data is no longer visible outside of the code block. So here comes the puzzle, that I'd like you to ponder a bit. If you very politely ask the VM to release memory, then you don't get an OutOfMemoryError:

    public class JavaMemoryPuzzlePolite {
      private final int dataSize =
          (int) (Runtime.getRuntime().maxMemory() * 0.6);

      public void f() {
        {
          byte[] data = new byte[dataSize];
        }
        for (int i = 0; i < 10; i++) {
          System.out.println("Please be so kind and release memory");
        }
        byte[] data2 = new byte[dataSize];
      }

      public static void main(String[] args) {
        JavaMemoryPuzzlePolite jmp = new JavaMemoryPuzzlePolite();
        jmp.f();
        System.out.println("No OutOfMemoryError");
      }
    }

Why does this work? When I published this in my newsletter, I received over 400 emails from excited readers around the world, most of whom sent me the wrong explanation. After the 300th wrong answer, my replies became unfortunately a bit curt. Have a look at Issue 174 for a detailed explanation, but before you do, put on your thinking caps and try to figure it out yourself.
Q: What do you think Java developers should know that they currently do not know? A: They should definitely get to know more about concurrency. It is a tough subject that most programmers try to avoid. Unfortunately we do come in contact with it. And when we do, we need to know how to protect ourselves and how to solve tricky system errors. Knowing your IDE is also useful. Most IDEs have a ton of shortcuts, which can make you a lot more productive in moving code around. Another thing that is useful is being able to read GC logs. Kirk Pepperdine has a great talk at JavaOne that I can recommend if you want to learn more.
It's this: CON5405 – “Are Your Garbage Collection Logs Speaking to You?”
Q: What are you looking forward to in Java 8? A: I'm quite excited about lambdas, though I must confess that I have not studied them in detail yet. Maurice Naftalin's Lambda FAQ is quite a good start to document what you can do with them. I'm looking forward to finding all the interesting bugs that we will now get due to lambdas obscuring what is really going on underneath, just like we had with generics. I am quite impressed with what the team at Oracle did with OpenJDK's performance. A lot of the benchmarks now run faster. Hopefully Java 8 will come with JSR 310, the Date and Time API. It still boggles my mind that such an important API has been left out in the cold for so long. What I am not looking forward to is losing perm space. Even though some systems run out of perm space, at least the problem is contained and they usually manage to work around it. In most cases, this is due to a memory leak in that region of memory. Once they bundle perm space with the old generation, I predict that memory leaks in perm space will be harder to find. More contracts for us, but also more pain for our customers. Originally published on blogs.oracle.com/javaone.

    Read the article

  • Who writes the words? A rant with graphs.

    - by Roger Hart
    If you read my rant, you'll know that I'm getting a bit of a bee in my bonnet about user interface text. But rather than just yelling about the way the world should be (short version: no UI text would suck), it seemed prudent to actually gather some data. Rachel Potts has made an excellent first foray, by conducting a series of interviews across organizations about how they write user interface text. You can read Rachel's write up here. She presents the facts as she found them, and doesn't editorialise. The result is insightful, but impartial isn't really my style. So here's a rant with graphs. My method, and how it sucked I sent out a short survey. Survey design is one of my hobby-horses, and since some smartarse in the comments will mention it if I don't, I'll step up and confess: I did not design this one well. It was potentially ambiguous, implicitly excluded people, and since I only really advertised it on Twitter and a couple of mailing lists the sample will be chock full of biases. Regardless, these were the questions: What do you do? Select the option that best describes your role What kind of software does your organization make? (optional) In your organization, who writes the text on your software user interfaces? (for example: button names, static text, tooltips, and so on) Tick all that apply. In your organization who is responsible for user interface text? Who "owns" it? The most glaring issue (apart from question 3 being a bit broken) was that I didn't make it clear that I was asking about applications. Desktop, mobile, or web, I wouldn't have minded. In fact, it might have been interesting to categorize and compare. But a few respondents commented on the seeming lack of relevance, since they didn't really make software. There were some other issues too. It wasn't the best survey. So, you know, pinch of salt time with what follows. Despite this, there were 100 or so respondents. This post covers the overview, and you can look at the raw data in this spreadsheet What did people do? Boring graph number one: I wasn't expecting that. Given I pimped the survey on twitter and a couple of Tech Comms discussion lists, I was more banking on and even Content Strategy/Tech Comms split. What the "Others" specified: Three people chipped in with Technical Writer. Author, apparently, doesn't cut it. There's a "nobody reads the instructions" joke in there somewhere, I'm sure. There were a couple of hybrid roles, including Tech Comms and Testing, which sounds gruelling and thankless. There was also, an Intranet Manager, a Creative Director, a Consultant, a CTO, an Information Architect, and a Translator. That's a pretty healthy slice through the industry. Who wrote UI text? Boring graph number two: Annoyingly, I made this a "tick all that apply" question, so I can't make crude and inflammatory generalizations about percentages. This is more about who gets involved in user interface wording. So don't panic about the number of developers writing UI text. First off, it just means they're involved. Second, they might be good at it. What? It could happen. Ours are involved - they write a placeholder and flag it to me for changes. Sometimes I don't make any. It's also not surprising that there's so much UX in the mix. Some of that will be people taking care, and crafting an understandable interface. Some of it will be whatever text goes on the wireframe making it into production. 
I'm going to assume that's what happened at eBay, when their iPhone app purportedly shipped with the placeholder text "Some crappy content goes here". Ahem. Listing all 17 "other" responses would make this post lengthy indeed, but you can read them in the raw data spreadsheet. The award for the approach that sounds the most like a good idea yet carries the highest risk of ending badly goes to whoever offered up "External agencies using focus groups". If you're reading this, and that actually works, leave a comment. I'm fascinated. Who owned UI text Stop. Bar chart time: Wow. Let's cut to the chase, and by "chase", I mean those inflammatory generalizations I was talking about: In around 60% of cases the person responsible for user interface text probably lacks the relevant expertise. Even in the categories I count as being likely to have relevant skills (Marketing Copywriters, Content Strategists, Technical Authors, and User Experience Designers) there's a case for each role being unsuited, as you'll see in Rachel's blog post So it's not as simple as my headline. Does that mean that you personally, Mr Developer reading this, write bad button names? Of course not. I know nothing about you. It rather implies that as a category, the majority of people looking after UI text have neither communication nor user experience as their primary skill set, and as such will probably only be good at this by happy accident. I don't have a way of measuring those frequency of those accidents. What the Others specified: I don't know who owns it. I assume the project manager is responsible. "copywriters" when they wish to annoy me. the client's web maintenance person, often PR or MarComm That last one chills me to the bone. Still, at least nobody said "the work experience kid". You can see the rest in the spreadsheet. My overwhelming impression here is of user interface text as an unloved afterthought. There were fewer "nobody" responses than I expected, and a much broader split. But the relative predominance of developers owning and writing UI text suggests to me that organizations don't see it as something worth dedicating attention to. If true, that's bothersome. Because the words on the screen, particularly the names of things, are fundamental to the ability to understand an use software. It's also fascinating that Technical Authors and Content Strategists are neck and neck. For such a nascent discipline, Content Strategy appears to have made a mark on software development. Or my sample is skewed. But it feels like a bit of validation for my rant: Content Strategy is eating Tech Comms' lunch. That's not a bad thing. Well, not if the UI text is getting done well. And that's the caveat to this whole post. I couldn't care less who writes UI text, provided they consider the user and don't suck at it. I care that it may be falling by default to people poorly disposed to doing it right. And I care about that because so much user interface text sucks. The most interesting question Was one I forgot to ask. It's this: Does your organization have technical authors/writers? Like a lot of survey data, that doesn't tell you much on its own. But once we get a bit dimensional, it become more interesting. So taken with the other questions, this would have let me find out what I really want to know: What proportion of organizations have Tech Comms professionals but don't use them for UI text? Who writes UI text in their place? Why this happens? 
It's possible (feasible is another matter) that hundreds of companies have tech authors who don't work on user interfaces because they've empirically discovered that someone else, say the Marketing Copywriter, is better at it. And once we've all finished laughing, I'll point out that I've met plenty of tech authors who just aren't used to thinking about users at the point of need in the way UI text and embedded user assistance require. If you've got what I regard, perhaps unfairly, as the bad kind of tech author - the old-school kind with the thousand-page pdf and the grammar obsession - if you've got one of those then you probably are better off getting the UX folk or the copywriters to do your UI text. At the very least, they'll derive terminology from user research.

    Read the article

  • Anyone willing to help out a javascript n00b? :-)

    - by Splynx
    Since I am asking for a lot, and know it, the following is a wall of text for those who might show some interest and want to know a little before offering their help to me. First a little about my level of programming skills, and a little about what I ask for. Where I'm at: I am not totally new to Javascript, and have dabbled a little with PHP earlier - well have dabbled a lot with PHP in fact, but never got good at it because I program alone. And I have until now never used forums to get help etc. other that searching to see if anyone else had my problem before and what the solution was. So I am not a intuitive or talented programmer, I'm more of a very maticulate programmer and you would be surprised how far you can get with if else... (ok that's a joke hehe). My solutions are usually (I am guessing here) not the best ones - and slow I take it, and the code is usually too long and I have to look up most of the stuff I use (really a lot of it is not done in "freehand"). I have a LOT of experience with HTML and CSS, and have always done well formed markup, as well as I am really into x-browsing and always require that my work validates when it's done. I also worry about optimizing a lot, and work with sprites for images, minimize the number of http requests etc, using H1,H2 etc. where it is logically correct, as well as use the correct elements and not just div span or p it... So because I am a workhorse and very maticulate I can actually pull off some quite "advanced" features, but it's always the basics that bite me in the end. Not fully understanding the syntax and so on usually gives me problems. Have recently discovered jQuery - wich is a lot of fun.... But I want to use it for the DOM node manipulation/handling only. As I mentioned I worry about optimizing, and jQuery used for everything seems... well not optimal, it strikes me as doing it yourself when possible is faster than accesing another script that may take a whole lot of other considerations into perspective when handling your variables and objects (and I am just guessing here since I as explained know nothing). So thats where I'm at... As mentioned I just started with javascript for "real" so I do not have much to show, but at the end of my WOT you can see two unfinisheded scripts I have made so you can see where I'm at roughly - just check out the URL without the /feedback.html for the second example (I am only allowed to post 1 link since I am also a SO n00b) (and for those rushing over to a validation service, remember I wrote "when it's done"...) What I ask for: I am figuring this... I have a piece of code I am working on at the moment, and this little project has taught me a whole lot already, and I have "grown" a lot as a javascript programmer. If I add a whole lot of comments to the script, and explain what it is intended to do, will you then show me where: I am writing incorrect code - making mistakes Where/how my code could be more optimal Where I am just simply being a muppet The code I want to use as the background for the tuition is the one here http://projects.1000monkeys.dk/feedback.html Use firebug and have a quick look see...

    Read the article

  • ASP.NET MVC 3 embrace dynamic type - CSDN.NET - CSDN Software Development Channel

    - by user559071
    About a decade ago, Microsoft bet everything on WebForms and static types. Encapsulation went from scattered to complete, to the point where almost every page could be viewed as its own program. In the years since, the industry has moved in the opposite direction, favoring separation over encapsulation and late binding over early binding. This leads to two interesting questions. The first is a problem of terminology. In the original Smalltalk MVC, the model, view and controller were not only tightly coupled but usually came in pairs. Most Microsoft frameworks, including classic VB, WinForms, WebForms, WPF and Silverlight, use a code-behind file to store the controller logic. Today, however, "MVC" usually refers to frameworks in which the view and controller are loosely coupled. This is especially true of web frameworks, where the HTML form submission mechanism allows any view to be submitted to any controller. Since this article is mainly about web technologies, we will use the modern definition. The second question is: if you are Microsoft, how do you change course without putting too much pressure on developers? So far the answer has been a new release every year, until developers catch up. The first ASP.NET MVC release shipped in March of last year, ASP.NET MVC 2.0 followed this March, and 3.0 is currently at the Release Candidate stage, expected to ship next March. On December 10, Microsoft released ASP.NET MVC 3.0 Release Candidate 2. RC 2 builds on Microsoft's commitment to jQuery: the default project template now includes jQuery 1.4.4, jQuery Validation 1.7 and jQuery UI. Although some treated the idea of Microsoft shifting its focus away from server-side controls as a joke, the inclusion of jQuery UI shows that the shift is real. For developers worried about scalability, there is now much better control over session state. With the SessionState attribute you can tell a controller that session state is read-only, read-write, or ignored entirely. This makes little difference on a single-server site, but it is a great help when one server would otherwise need to access session state held on another. MVC 3 also includes the Razor view engine. By default the engine HTML-encodes its output, so raw text can easily be written to the screen without exposing the page to HTML injection attacks or broken rendering from unencoded markup. What may be most surprising to many C# developers is that MVC 3 embraces dynamic types for passing data between controller and view. The ViewBag property exposes a dynamic object to which developers can add properties at runtime; in general it is used to send non-model data from the controller to the view. Scott Guthrie's sample uses it to pass text (such as the current time) and the entries used to populate a list box. Source link: http://www.infoq.com/cn/news/2010/12/ASPNET-MVC-3-RC-2; jsessionid = 3561C3B7957F1FB97848950809AD9483
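To make the ViewBag and SessionState points above concrete, here is a minimal C# sketch; the controller names and properties are illustrative only, not taken from the article:

    using System;
    using System.Web.Mvc;
    using System.Web.SessionState;

    // Passing non-model data from controller to view via the dynamic ViewBag.
    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            // Properties are added to the dynamic object at runtime.
            ViewBag.CurrentTime = DateTime.Now;
            ViewBag.Colors = new[] { "Red", "Green", "Blue" };
            return View();
        }
    }

    // The corresponding Razor view (Index.cshtml) would read the same properties:
    //   <p>@ViewBag.CurrentTime</p>
    //   @Html.DropDownList("color", new SelectList((string[])ViewBag.Colors))

    // Declaring a controller's session-state needs so read-only requests are not
    // serialized on the session lock.
    [SessionState(SessionStateBehavior.ReadOnly)]
    public class ReportsController : Controller
    {
        public ActionResult Index()
        {
            return View();
        }
    }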

    Read the article

  • Dec 5th Links: ASP.NET, ASP.NET MVC, jQuery, Silverlight, Visual Studio

    - by ScottGu
    Here is the latest in my link-listing series.  Also check out my VS 2010 and .NET 4 series for another on-going blog series I’m working on. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] ASP.NET ASP.NET Code Samples Collection: J.D. Meier has a great post that provides a detailed round-up of ASP.NET code samples and tutorials from a wide variety of sources.  Lots of useful pointers. Slash your ASP.NET compile/load time without any hard work: Nice article that details a bunch of optimizations you can make to speed up ASP.NET project load and compile times. You might also want to read my previous blog post on this topic here. 10 Essential Tools for Building ASP.NET Websites: Great article by Stephen Walther on 10 great (and free) tools that enable you to more easily build great ASP.NET Websites.  Highly recommended reading. Optimize Images using the ASP.NET Sprite and Image Optimization Framework: A nice article by 4GuysFromRolla that discusses how to use the open-source ASP.NET Sprite and Image Optimization Framework (one of the tools recommended by Stephen in the previous article).  You can use this to significantly improve the load-time of your pages on the client. Formatting Dates, Times and Numbers in ASP.NET: Scott Mitchell has a great article that discusses formatting dates, times and numbers in ASP.NET.  A very useful link to bookmark.  Also check out James Michael’s DateTime is Packed with Goodies blog post for other DateTime tips. Examining ASP.NET’s Membership, Roles and Profile APIs (Part 18): Everything you could possibly want to known about ASP.NET’s built-in Membership, Roles and Profile APIs must surely be in this tutorial series. Part 18 covers how to store additional user info with Membership. ASP.NET with jQuery An Introduction to jQuery Templates: Stephen Walther has written an outstanding introduction and tutorial on the new jQuery Template plugin that the ASP.NET team has contributed to the jQuery project. Composition with jQuery Templates and jQuery Templates, Composite Rendering, and Remote Loading: Dave Ward has written two nice posts that talk about composition scenarios with jQuery Templates and some cool scenarios you can enable with them. Using jQuery and ASP.NET to Build a News Ticker: Scott Mitchell has a nice tutorial that demonstrates how to build a dynamically updated “news ticker” style UI with ASP.NET and jQuery. Checking All Checkboxes in a GridView using jQuery: Scott Mitchell has a nice post that covers how to use jQuery to enable a checkbox within a GridView’s header to automatically check/uncheck all checkboxes contained within rows of it. Using jQuery to POST Form Data to an ASP.NET AJAX Web Service: Rick Strahl has a nice post that discusses how to capture form variables and post them to an ASP.NET AJAX Web Service (.asmx). ASP.NET MVC ASP.NET MVC Diagnostics Using NuGet: Phil Haack has a nice post that demonstrates how to easily install a diagnostics page (using NuGet) that can help identify and diagnose common configuration issues within your apps. ASP.NET MVC 3 JsonValueProviderFactory: James Hughes has a nice post that discusses how to take advantage of the new JsonValueProviderFactory support built into ASP.NET MVC 3.  This makes it easy to post JSON payloads to MVC action methods. 
Practical jQuery Mobile with ASP.NET MVC: James Hughes has another nice post that discusses how to use the new jQuery Mobile library with ASP.NET MVC to build great mobile web applications. Credit Card Validator for ASP.NET MVC 3: Benjii Me has a nice post that demonstrates how to build a [CreditCard] validator attribute that can be used to easily validate credit card numbers are in the correct format with ASP.NET MVC. Silverlight Silverlight FireStarter Keynote and Sessions: A great blog post from John Papa that contains pointers and descriptions of all the great Silverlight content we published last week at the Silverlight FireStarter.  You can watch all of the talks online.  More details on my keynote and Silverlight 5 announcements can be found here. 31 Days of Windows Phone 7: 31 great tutorials on how to build Windows Phone 7 applications (using Silverlight).  Silverlight for Windows Phone Toolkit Update: David Anson has a nice post that discusses some of the additional controls provided with the Silverlight for Windows Phone Toolkit. Visual Studio JavaScript Editor Extensions: A nice (and free) Visual Studio plugin built by the web tools team that significantly improves the JavaScript intellisense support within Visual Studio. HTML5 Intellisense for Visual Studio: Gil has a blog post that discusses a new extension my team has posted to the Visual Studio Extension Gallery that adds HTML5 schema support to Visual Studio 2008 and 2010. Team Build + Web Deployment + Web Deploy + VS 2010 = Goodness: Visual blogs about how to enable a continuous deployment system with VS 2010, TFS 2010 and the Microsoft Web Deploy framework.  Visual Studio 2010 Emacs Emulation Extension and VIM Emulation Extension: Check out these two extensions if you are fond of Emacs and VIM key bindings and want to enable them within Visual Studio 2010. Hope this helps, Scott
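As a rough sketch of the JsonValueProviderFactory scenario linked above (the controller and view-model names here are made up for illustration, not taken from the linked post): in ASP.NET MVC 3, a JSON payload posted with an application/json content type is bound to an action's parameters by the built-in JsonValueProviderFactory.

    using System.Web.Mvc;

    // Hypothetical view model that the posted JSON maps onto.
    public class OrderUpdate
    {
        public int OrderId { get; set; }
        public int Quantity { get; set; }
    }

    public class OrdersController : Controller
    {
        // A client-side call such as
        //   $.ajax({ url: '/Orders/Update', type: 'POST',
        //            contentType: 'application/json',
        //            data: JSON.stringify({ orderId: 7, quantity: 3 }) });
        // arrives here with the parameter already populated.
        [HttpPost]
        public ActionResult Update(OrderUpdate update)
        {
            return Json(new { success = true, id = update.OrderId });
        }
    }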

    Read the article

  • ASP.NET MVC 3: Razor’s @: and <text> syntax

    - by ScottGu
    This is another in a series of posts I’m doing that cover some of the new ASP.NET MVC 3 features: New @model keyword in Razor (Oct 19th) Layouts with Razor (Oct 22nd) Server-Side Comments with Razor (Nov 12th) Razor’s @: and <text> syntax (today) In today’s post I’m going to discuss two useful syntactical features of the new Razor view-engine – the @: and <text> syntax support. Fluid Coding with Razor ASP.NET MVC 3 ships with a new view-engine option called “Razor” (in addition to the existing .aspx view engine).  You can learn more about Razor, why we are introducing it, and the syntax it supports from my Introducing Razor blog post.  Razor minimizes the number of characters and keystrokes required when writing a view template, and enables a fast, fluid coding workflow. Unlike most template syntaxes, you do not need to interrupt your coding to explicitly denote the start and end of server blocks within your HTML. The Razor parser is smart enough to infer this from your code. This enables a compact and expressive syntax which is clean, fast and fun to type. For example, the Razor snippet below can be used to iterate a list of products: When run, it generates output like:   One of the techniques that Razor uses to implicitly identify when a code block ends is to look for tag/element content to denote the beginning of a content region.  For example, in the code snippet above Razor automatically treated the inner <li></li> block within our foreach loop as an HTML content block because it saw the opening <li> tag sequence and knew that it couldn’t be valid C#.  This particular technique – using tags to identify content blocks within code – is one of the key ingredients that makes Razor so clean and productive with scenarios involving HTML creation. Using @: to explicitly indicate the start of content Not all content container blocks start with a tag element tag, though, and there are scenarios where the Razor parser can’t implicitly detect a content block. Razor addresses this by enabling you to explicitly indicate the beginning of a line of content by using the @: character sequence within a code block.  The @: sequence indicates that the line of content that follows should be treated as a content block: As a more practical example, the below snippet demonstrates how we could output a “(Out of Stock!)” message next to our product name if the product is out of stock: Because I am not wrapping the (Out of Stock!) message in an HTML tag element, Razor can’t implicitly determine that the content within the @if block is the start of a content block.  We are using the @: character sequence to explicitly indicate that this line within our code block should be treated as content. Using Code Nuggets within @: content blocks In addition to outputting static content, you can also have code nuggets embedded within a content block that is initiated using a @: character sequence.  For example, we have two @: sequences in the code snippet below: Notice how within the second @: sequence we are emitting the number of units left within the content block (e.g. - “(Only 3 left!”). We are doing this by embedding a @p.UnitsInStock code nugget within the line of content. Multiple Lines of Content Razor makes it easy to have multiple lines of content wrapped in an HTML element.  
For example, below the inner content of our @if container is wrapped in an HTML <p> element – which will cause Razor to treat it as content: For scenarios where the multiple lines of content are not wrapped by an outer HTML element, you can use multiple @: sequences: Alternatively, Razor also allows you to use a <text> element to explicitly identify content: The <text> tag is an element that is treated specially by Razor. It causes Razor to interpret the inner contents of the <text> block as content, and to not render the containing <text> tag element (meaning only the inner contents of the <text> element will be rendered – the tag itself will not).  This makes it convenient when you want to render multi-line content blocks that are not wrapped by an HTML element.  The <text> element can also optionally be used to denote single-lines of content, if you prefer it to the more concise @: sequence: The above code will render the same output as the @: version we looked at earlier.  Razor will automatically omit the <text> wrapping element from the output and just render the content within it.  Summary Razor enables a clean and concise templating syntax that enables a very fluid coding workflow.  Razor’s smart detection of <tag> elements to identify the beginning of content regions is one of the reasons that the Razor approach works so well with HTML generation scenarios, and it enables you to avoid having to explicitly mark the beginning/ending of content regions in about 95% of if/else and foreach scenarios. Razor’s @: and <text> syntax can then be used for scenarios where you want to avoid using an HTML element within a code container block, and need to more explicitly denote a content region. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
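The screenshots from the original post are not reproduced in this excerpt, so here is a hand-written Razor (cshtml) sketch of the constructs being described; the Model.Products collection and its properties are assumed for illustration:

    @* Tag-delimited content inside a code block: the <li> tag tells Razor where markup begins. *@
    <ul>
    @foreach (var p in Model.Products) {
        <li>
            @p.Name
            @if (p.UnitsInStock == 0) {
                @* No wrapping tag here, so @: explicitly marks the line as content. *@
                @:(Out of Stock!)
            }
            else {
                @* A code nugget (@p.UnitsInStock) embedded inside an @: content line. *@
                @:(Only @p.UnitsInStock left!)
            }
        </li>
    }
    </ul>

    @* Multiple lines of content with no wrapping HTML element, using <text>. *@
    @if (Model.ShowShippingNote) {
        <text>
            Orders placed before noon ship the same day.
            Everything else ships the next business day.
        </text>
    }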

    Read the article

  • Your thoughts on Best Practices for Scientific Computing?

    - by John Smith
    A recent paper by Wilson et al (2014) pointed out 24 Best Practices for scientific programming. It's worth to have a look. I would like to hear opinions about these points from experienced programmers in scientific data analysis. Do you think these advices are helpful and practical? Or are they good only in an ideal world? Wilson G, Aruliah DA, Brown CT, Chue Hong NP, Davis M, Guy RT, Haddock SHD, Huff KD, Mitchell IM, Plumbley MD, Waugh B, White EP, Wilson P (2014) Best Practices for Scientific Computing. PLoS Biol 12:e1001745. http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001745 Box 1. Summary of Best Practices Write programs for people, not computers. (a) A program should not require its readers to hold more than a handful of facts in memory at once. (b) Make names consistent, distinctive, and meaningful. (c) Make code style and formatting consistent. Let the computer do the work. (a) Make the computer repeat tasks. (b) Save recent commands in a file for re-use. (c) Use a build tool to automate workflows. Make incremental changes. (a) Work in small steps with frequent feedback and course correction. (b) Use a version control system. (c) Put everything that has been created manually in version control. Don’t repeat yourself (or others). (a) Every piece of data must have a single authoritative representation in the system. (b) Modularize code rather than copying and pasting. (c) Re-use code instead of rewriting it. Plan for mistakes. (a) Add assertions to programs to check their operation. (b) Use an off-the-shelf unit testing library. (c) Turn bugs into test cases. (d) Use a symbolic debugger. Optimize software only after it works correctly. (a) Use a profiler to identify bottlenecks. (b) Write code in the highest-level language possible. Document design and purpose, not mechanics. (a) Document interfaces and reasons, not implementations. (b) Refactor code in preference to explaining how it works. (c) Embed the documentation for a piece of software in that software. Collaborate. (a) Use pre-merge code reviews. (b) Use pair programming when bringing someone new up to speed and when tackling particularly tricky problems. (c) Use an issue tracking tool. I'm relatively new to serious programming for scientific data analysis. When I tried to write code for pilot analyses of some of my data last year, I encountered tremendous amount of bugs both in my code and data. Bugs and errors had been around me all the time, but this time it was somewhat overwhelming. I managed to crunch the numbers at last, but I thought I couldn't put up with this mess any longer. Some actions must be taken. Without a sophisticated guide like the article above, I started to adopt "defensive style" of programming since then. A book titled "The Art of Readable Code" helped me a lot. I deployed meticulous input validations or assertions for every function, renamed a lot of variables and functions for better readability, and extracted many subroutines as reusable functions. Recently, I introduced Git and SourceTree for version control. At the moment, because my co-workers are much more reluctant about these issues, the collaboration practices (8a,b,c) have not been introduced. Actually, as the authors admitted, because all of these practices take some amount of time and effort to introduce, it may be generally hard to persuade your reluctant collaborators to comply them. I think I'm asking your opinions because I still suffer from many bugs despite all my effort on many of these practices. 
Fixing bugs may be, or should be, faster than before, but I can't really measure the improvement. Moreover, much of my time has been invested in defence, meaning that I haven't actually done much data analysis (offence) these days. At what point do diminishing returns set in, productivity-wise? I've already deployed: 1a,b,c, 2a, 3a,b,c, 4b,c, 5a,d, 6a,b, 7a,b. I'm about to have a go at: 5b,c. Not yet: 2b,c, 4a, 7c, 8a,b,c. (I could not really see the advantage of using GNU make (2c) for my purpose. Could anyone tell me how it would help my work with MATLAB?)

    Read the article

  • EBS Techstack Sessions at OAUG/Collaborate 2010

    - by Steven Chan
    We have a large contingent of E-Business Suite Applications Technology Group staff rolling out to the OAUG/Collaborate 2010 conference in Las Vegas new week.  Our Applications Technology Group staff will be appearing as guest speakers or full-speakers at the following E-Business Suite technology stack related sessions:Database Special Interest GroupSunday, April 18, 11:00 AM, Breakers FSIG Leaders:  Michael Brown, Colibri; Sandra Vucinic, Vlad GroupGuest Speaker:  Steven ChanCovering database upcoming and past desupport dates, and database support policies as they apply to E-Business Suite environments, general Q&A E-Business Suite Technology Stack Special Interest GroupSunday, April 18, 3:00 PM, Breakers FSIG Leaders:  Elke Phelps, Paul Jackson, HumanaGuest Speaker:  Steven ChanCovering the latest EBS technology stack certifications, roadmap, desupport noticesupgrade options for Discoverer, OID, SSO, Portal, general Q&A E-Business Suite Applications Technology Roadmap & VisionMonday, April 19, 8:00 AM, South Seas GOracle Speaker:  Uma PrabhalaLatest developments for SOA, AOL, OAF, Web ADI, SES, AMP, ACMP, security, and other technologies Oracle E-Business Suite Applications Strategy and General Manager UpdateMonday, April 19, 2:30 PM, Mandalay Bay Ballroom DOracle Speaker:  Cliff GodwinUpdate on the entire Oracle E-Business Suite product line. The session covers the value delivered by the current release of Oracle E-Business Suite applications, the momentum, and how Oracle E-Business Suite applications integrate into Oracle's overall applications strategy 10 Things You Can Do Today to Prepare for the Next Generation ApplicationsTuesday, April 20, 8:00 AM, South Seas FOracle Speaker:  Nadia Bendjedou"Common sense" and "practical" steps that can be taken today to increase the value of your Oracle Applications (E-Business Suite, PeopleSoft, Siebel, and JDE) investments by using the latest Oracle solutions and technologiesReducing TCO using Oracle E-Business Suite Management PacksTuesday, April 20, 10:30 AM, South Seas EOracle Speaker:  Angelo RosadoLearn how you can reduce the Total Cost of Ownership by implementing Application Management Pack (AMP) and Application Change Management Pack (ACP) for E-Business Suite 11i, R12, R12.1. AMP is Oracle's next generation system manageability product offering that provides a centralized platform to manage and maintain EBS. ACP is Oracle's offering to monitor and manage E-Business Suite changes in the areas of E-Business Suite Customizations, Patches and Functional Setups. E-Business Suite Upgrade Special Interest GroupTuesday, April 20, 3:15 PM, South Seas ESIG Leaders:  John Stouffer; Sandra Vucinic, Vlad GroupGuest Speaker:  Steven ChanParticipating in general Q&A E-Business Suite Technology Essentials: Using the Latest Oracle Technologies with E-Business Suite Wednesday, April 21, 8:00 AM, South Seas HOracle Speaker:  Lisa ParekhOracle continues to build new functionality into the Oracle Database, Fusion Middleware, and Enterprise Manager. Come see how you can enhance the value of E-Business Suite for your users and lower your costs of ownership by utilizing the latest features of these Oracle technologies with E-Business Suite. 
Learn about the latest advanced E-Business Suite topologies and features, including new options for security, performance, third-party integration, SOA, virtualization, clouds, systems management, and much more How to Leverage the New E-Business Suite R12.1 Solutions Without Upgrading your 11.5.10 EnvironmentWednesday, April 21, 10:30 AMOracle Speaker:  Nadia Bendjedou, South Seas ELearn how you can use the latest E-Business Suite 12.1 standalone solutions without upgrading from your E-Business Suite 11.5.10 environment Web 2.0 User Experience and Oracle Fusion Middleware Integration with Oracle E-Business SuiteWednesday, April 21, 4:00 PM, South Seas FOracle Speaker:  Padmaprabodh AmbaleSee the next generation Oracle E-Business Suite OA framework improvements that will provide new rich interactions in components such as LOV, Tables and Attachments.  See  new components like the Rich Container that allows any Web 2.0 content like Flash or OBIEE to be embedded in OA Framework pages. Advanced Technology Deployment Architectures for E-Business Suite Wednesday, April 21, 2:15 PM, South Seas EOracle Speaker:  Steven ChanLearn how to take advantage of the latest version of Oracle Fusion Middleware with Oracle E-Business Suite. Learn how to utilize identity management systems and LDAP directories. In addition, come to this session for answers about advanced network deployments involving reverse proxy servers, load balancers, and DMZ's, and to see how you can take benefit from virtualization and new system management capabilities. Upgrading to Oracle E-Business Suite 12.1 - Best PracticesThursday, April 22, 11:00 AM, South Seas EOracle Speaker:  Lester Gutierrez, Udayan ParvateFundamental of upgrading to Release 12.1, which includes the technology stack components and differences, the upgrade path from various releases of Oracle E-Business Suite, upgrade steps, monitoring the upgrade, hints and tips for minimizing downtime and upgrade best practices for making the upgrade to Release 12.1 a success.  We look forward to seeing you there!

    Read the article

  • Leaks on Wikis: "Corporations...You're Next!" Oracle Desktop Virtualization Can Help.

    - by adam.hawley
    Between all the press coverage on the unauthorized release of 251,287 diplomatic documents and on previous extensive releases of classified documents on the events in Iraq and Afghanistan, one could be forgiven for thinking massive leaks are really an issue for governments, but it is not: It is an issue for corporations as well. In fact, corporations are apparently set to be the next big target for things like Wikileaks. Just the threat of such a release against one corporation recently caused the price of their stock to drop 3% after the leak organization claimed to have 5GB of information from inside the company, with the implication that it might be damaging or embarrassing information. At the moment of this blog anyway, we don't know yet if that is true or how they got the information but how did the diplomatic cable leak happen? For the diplomatic cables, according to press reports, a private in the military, with some appropriate level of security clearance (that is, he apparently had the correct level of security clearance to be accessing the information...he reportedly didn't "hack" his way through anything to get to the documents which might have raised some red flags...), is accused of accessing the material and copying it onto a writeable CD labeled "Lady Gaga" and walking out the door with it. Upload and... Done. In the same article, the accused is quoted as saying "Information should be free. It belongs in the public domain." Now think about all the confidential information in your company or non-profit... from credit card information, to phone records, to customer or donor lists, to corporate strategy documents, product cost information, etc, etc.... And then think about that last quote above from what was a very junior level person in the organization...still feeling comfortable with your ability to control all your information? So what can you do to guard against these types of breaches where there is no outsider (or even insider) intrusion to detect per se, but rather someone with malicious intent is physically walking out the door with data that they are otherwise allowed to access in their daily work? A major first step it to make it physically, logistically much harder to walk away with the information. If the user with malicious intent has no way to copy to removable or moble media (USB sticks, thumb drives, CDs, DVDs, memory cards, or even laptop disk drives) then, as a practical matter it is much more difficult to physically move the information outside the firewall. But how can you control access tightly and reliably and still keep your hundreds or even thousands of users productive in their daily job? Oracle Desktop Virtualization products can help.Oracle's comprehensive suite of desktop virtualization and access products allow your applications and, most importantly, the related data, to stay in the (highly secured) data center while still allowing secure access from just about anywhere your users need to be to be productive.  Users can securely access all the data they need to do their job, whether from work, from home, or on the road and in the field, but fully configurable policies set up centrally by privileged administrators allow you to control whether, for instance, they are allowed to print documents or use USB devices or other removable media.  
Centrally set policies can also control not only whether they can download to removable devices, but also whether they can upload information (see StuxNet for why that is important...)In fact, by using Sun Ray Client desktop hardware, which does not contain any disk drives, or removable media drives, even theft of the desktop device itself would not make you vulnerable to data loss, unlike a laptop that can be stolen with hundreds of gigabytes of information on its disk drive.  And for extreme security situations, Sun Ray Clients even come standard with the ability to use fibre optic ethernet networking to each client to prevent the possibility of unauthorized monitoring of network traffic.But even without Sun Ray Client hardware, users can leverage Oracle's Secure Global Desktop software or the Oracle Virtual Desktop Client to securely access server-resident applications, desktop sessions, or full desktop virtual machines without persisting any application data on the desktop or laptop being used to access the information.  And, again, even in this context, the Oracle products allow you to control what gets uploaded, downloaded, or printed for example.Another benefit of Oracle's Desktop Virtualization and access products is the ability to rapidly and easily shut off user access centrally through administrative polices if, for example, an employee changes roles or leaves the company and should no longer have access to the information.Oracle's Desktop Virtualization suite of products can help reduce operating expense and increase user productivity, and those are good reasons alone to consider their use.  But the dynamics of today's world dictate that security is one of the top reasons for implementing a virtual desktop architecture in enterprises.For more information on these products, view the webpages on www.oracle.com and the Oracle Technology Network website.

    Read the article

  • Copying Columns from Grid to Clipboard in SQL Developer

    - by thatjeffsmith
    There are several ways to get data from a query or a table|view to the clipboard. You know the tried and true, copy and paste. But what if you only want one or more columns, not every column? There are several ways to do this, let’s see if we can’t identify all of them. Write your query to only include the data you want Obvious? Yes. Needed to be said? Definitely. The best tuning tip is to only ask for the data you need, only when you absolutely need it. But let’s look at a few more practical ways to do this. Hide the unwanted columns Mouse right click on an column header. In the context menu, select ‘Columns.’ Hide the columns you don’t want. Copy and paste. WYSIWYG Grids, Hide Columns and Filter Rows Mouse select the columns Obvious, but a bit painful. For a very large dataset, you’ll be holding down the Shift and PageDown buttons – but it works. Remember to use Ctrl+Shift+C to get the column headers with the data. Use the Export Wizard This used to be called ‘Unload’ – agreed, not a great name. So, we changed it. In a grid, right mouse click on the data, and on the context menu, select ‘Export…’ Select your format – I suggest ‘delimited’ or ‘fixed’ for copying data to the clipboard. You can export to the clipboard, yes you can! Click ‘Next.’ Click in the Columns dialog, and choose the columns you want copied. Trim the columns you don't want copied Click ‘Finish.’ Alt or Ctrl tab to your window or application of choice. And Paste! "FIRST_NAME" "LAST_NAME" "Donald" "OConnell" "Douglas" "Grant" "Jennifer" "Whalen" "Pat" "Fay" "Susan" "Mavris" "William" "Gietz" "Alexander" "Hunold" "Bruce" "Ernst" "David" "Austin" "Valli" "Pataballa" "Diana" "Lorentz" "Daniel" "Faviet" "John" "Chen" "Ismael" "Sciarra" "Jose Manuel" "Urman" "Luis" "Popp" "Alexander" "Khoo" "Shelli" "Baida" "Sigal" "Tobias" "Guy" "Himuro" "Karen" "Colmenares" "Matthew" "Weiss" "Adam" "Fripp" "Payam" "Kaufling" "Shanta" "Vollman" "Kevin" "Mourgos" "Julia" "Nayer" "Irene" "Mikkilineni" ... There’s probably at least 2 or 3 more ways, but… But, try these and let me know how we can improve things. I’ve already gotten a request to be able to include the SQL text used to populate the dataset on the the copy to clipboard, and it’s now on our to-do list

    Read the article

  • JavaScript Intellisense Improvements with VS 2010

    - by ScottGu
    This is the twentieth in a series of blog posts I'm doing on the upcoming VS 2010 and .NET 4 release. Today's post covers some of the nice improvements coming to JavaScript intellisense with VS 2010 and the free Visual Web Developer 2010 Express. You'll find with VS 2010 that JavaScript intellisense loads much faster for large script files and large libraries, and that it now provides statement completion support for more advanced scenarios than previous versions of Visual Studio. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    Improved JavaScript Intellisense. Providing intellisense for a dynamic language like JavaScript is more involved than doing so for a statically typed language like VB or C#. Correctly inferring the shape and structure of variables, methods, etc. is pretty much impossible without pseudo-executing the actual code itself, since JavaScript as a language is flexible enough to dynamically modify and morph these things at runtime. VS 2010's JavaScript code editor now has the smarts to perform this type of pseudo-execution as you type, which is how its intellisense completion is kept accurate and complete. Below is a simple walkthrough that shows off how rich and flexible it is in the final release.

    Scenario 1: Basic Type Inference. When you declare a variable in JavaScript you do not have to declare its type. Instead, the type of the variable is based on the value assigned to it. Because VS 2010 pseudo-executes the code within the editor, it can dynamically infer the type of a variable and provide the appropriate intellisense based on the value assigned to it. For example, if we assign a string to a "foo" variable, VS 2010 provides statement completion for a string; if we later assign a numeric value to "foo", the statement completion (after that assignment) automatically changes to provide intellisense for a number.

    Scenario 2: Intellisense When Manipulating Browser Objects. It is pretty common with JavaScript to manipulate the DOM of a page, as well as to work against browser objects available on the client. Previous versions of Visual Studio would provide JavaScript statement completion against the standard browser objects, but didn't provide much help with more advanced scenarios (like creating dynamic variables and methods). VS 2010's pseudo-execution of code within the editor now allows us to provide rich intellisense for a much broader set of scenarios. For example, if we use the browser's window object to create a global variable named "bar", we now get intellisense (with correct type inference for a string) when we later try to use it. When we assign the "bar" variable a number (instead of a string), the VS 2010 intellisense engine correctly infers its type and changes statement completion to that of a number instead.

    Scenario 3: Showing Off. Because VS 2010 is pseudo-executing code within the editor, it is able to handle a bunch of scenarios (both practical and wacky) that you throw at it, and is still able to provide accurate type inference and intellisense. For example, below we are using a for-loop and the browser's window object to dynamically create and name multiple dynamic variables (bar1, bar2, bar3…bar9).
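    The original post shows each of these scenarios as editor screenshots rather than inline code, so the exact samples aren't reproduced in this excerpt. The sketch below is a hedged reconstruction of the kind of JavaScript the walkthrough describes – the names "foo" and "bar" come from the prose above, and everything else is illustrative:

        // Hypothetical reconstruction -- the original post shows these snippets only as
        // screenshots, so this code is illustrative rather than copied from it.

        // Scenario 1: the inferred type follows the assigned value.
        var foo = "hello";        // typing "foo." now completes with string members (length, toUpperCase, ...)
        foo = 42;                 // after this assignment, "foo." completes with number members (toFixed, ...)

        // Scenario 2: a global created through the browser's window object.
        window.bar = "created via window";    // "bar." later completes as a string

        // Scenario 3: variables created and named dynamically in a loop.
        for (var i = 1; i <= 9; i++) {
            window["bar" + i] = "value " + i; // creates the globals bar1 through bar9
        }

        bar3.toUpperCase();       // the editor can now offer bar1...bar9, each with string completion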
    Notice how the editor's intellisense engine identifies these dynamically created variables and provides statement completion for them. Because variables added via the browser's window object are also global variables, they now show up in the global variable intellisense drop-down as well. Better yet, type inference is still fully supported: assign a string to a dynamically named variable and you get type inference for a string; assign a number and you get type inference for a number. Just for fun (and to show off!) we could adjust our for-loop to assign a string to the even-numbered variables (bar2, bar4, bar6, etc.) and a number to the odd-numbered variables (bar1, bar3, bar5, etc.) – the editor then offers string statement completion for "bar2" and number statement completion for "bar1".

    This isn't just a cool pet trick. While the above example is a bit contrived, the approach of dynamically creating variables, methods, and event handlers on the fly is pretty common with many JavaScript libraries. Many of the more popular libraries use these techniques to keep the size of script downloads as small as possible. VS 2010's support for parsing and pseudo-executing libraries that use these techniques ensures that you get better code intellisense out of the box when programming against them.

    Summary. Visual Studio 2010 (and the free Visual Web Developer 2010 Express) now provides much richer JavaScript intellisense support. This support works with pretty much all popular JavaScript libraries, and it should provide a much better development experience when coding client-side JavaScript and enabling AJAX scenarios within your ASP.NET applications.

    Hope this helps,

    Scott

    P.S. You can read my previous blog post on VS 2008's JavaScript intellisense to learn more about the earlier support (and the scenarios it enabled). VS 2010 obviously supports all of the scenarios previously enabled with VS 2008.

    Read the article

< Previous Page | 56 57 58 59 60 61 62 63 64 65 66 67  | Next Page >