Search Results

Search found 27151 results on 1087 pages for 'end to end'.

Page 575/1087

  • #1 O’Reilly eBook for 2010

    - by Jan Goyvaerts
    The year-end issue of O’Reilly’s author newsletter discussed the trends O’Reilly has been seeing over the past few years, and their predictions for 2011. The key trend is that digital is now more than ever poised to take over print: Our digitally distributed products have grown from 18.36% of our publishing mix in 2009 to 28.09% of our mix in 2010. What is more impressive is that our digitally distributed products have produced more than double the revenue that has been lost with the decline of print. I think this is important because some say that digital cannibalizes print products. Our data indicates the contrary, as print is declining much more slowly than digital is growing. I think we may be seeing developers purchase a print book, and then purchase the electronic edition to search and copy code from, as the incremental cost for digital is more than reasonable. My own book seems to be leading this trend. Thanks to everyone who purchased it! And the five bestselling O’Reilly ebook products for 2010: 1) Regular Expressions Cookbook, 2) jQuery Cookbook, 3) Learning Python, 4) HTML5: Up and Running, and 5) JavaScript Cookbook. I think it’s interesting that the top five ebooks are all code-intensive books: they’re great products for search and code reuse. It’s also interesting that none of the top 5 ebooks made the top 5 in print.

    Read the article

  • C# 4.0 in a Nutshell, Fourth Edition

    - by outcoldman
    I just became the lucky owner of this book, C# IN A NUTSHELL, 4th edition. This is the fourth edition in the book’s series. I had seen the previous, third edition – we presented it at one of our events at Yaroslavl State University – but that was the Russian translated version, published in Russia, and that was its bad side: books printed in Russia are printed on really bad paper. I should say that I haven’t read this book to the end yet, but I am already surprised. Why? Because I had heard a lot about Richter’s CLR via C# (I already have the English version of its 3rd edition, and that book is waiting for my attention), and just a few words about C# IN A NUTSHELL, at least in my circles. I only heard about this book once, on one of the Alt.Net group podcasts, and the words were: Richter is a really good book, and C# IN A NUTSHELL is a good handbook. My opinion is: you should read Richter if you want to develop with .NET. But if you want to develop on .NET with C#, you should read C# IN A NUTSHELL too. Read more...

    Read the article

  • Rendering with Direct3D

    - by Jamie
    Hi, I'm slightly confused about how Direct3D rendering works. Basically, as long as I render to one surface, everything is fine. But when I try rendering to multiple surfaces, it seems like everything is still rendered to one surface. I think there's something wrong with my calls. For each update cycle this is what I do:
    1. device->BeginScene()
    2. sprite->Begin(...)
    3. A bunch of GetRenderTarget calls to store the old render target, then SetRenderTarget to set a new surface, and then things like CreateVertexBuffer, SetTexture, etc. to draw on the new render target. Then resetting to the old render target.
    4. sprite->Draw([the back buffer]) (the back buffer is actually another surface, not the actual back buffer. But here it is being drawn onto the actual back buffer, I think)
    5. sprite->End()
    6. device->EndScene()
    7. device->Present(...)
    Also, it seems like if I mix sprite drawing and non-sprite drawing onto a surface, first one set of render commands is executed and then the other set, rather than in the order in which each command was called. If anyone could shed light on any of this, it would be much appreciated.

    Read the article

  • On contract work, obligations to said contract, and looking out for yourself…

    - by jlnorsworthy
    Without boring you all with details, my last two contract assignments were cut short; I was given 3 days' notice on one, and 4 weeks' notice on the other. Neither of these was due to performance – they both basically came down to budget issues. On my second contract, I got the feeling that it may not have been a great place to stay for the duration of my contract. Because of the money/time spent getting me in the door, and the possible negative effect on my employer/recruiter, I decided to stay at least for a few months (and start looking several weeks before the end of my supposedly “extendable” contract). These experiences have left me a little wary of contract work. It seems that if I land a bad contract, my recruiter takes a hit (to his reputation or otherwise) if I quickly find another job. But on the other hand, the client company won't think twice about ending the contract early for any reason. I know that the counter-argument to this is “maybe your recruiter shouldn't have put you into a crappy assignment”… either way, since I am relying on him to provide me with work, it seems that I should try not to damage his reputation with client companies. I'm basically brand new to contracting (these were my first two contracts), so these concerns are new to me. TLDR: Is contract work, by its very nature, largely unstable? Am I worried too much about my recruiter? Should I be quicker to start looking for a new job even after just weeks at a new company (when the environment seems unstable)? If so, do I look through my recruiter or just find another position by any means necessary?

    Read the article

  • Finetuning movement based on gradual rotation towards a target

    - by A.B.
    I have an object which moves towards a target destination by gradually adjusting its facing while moving forwards. If the target destination is in a "blind spot", then the object is incapable of reaching it. This problem is illustrated in the picture below: when the arrow is ordered to move to point A, it will only end up circling around it (following the red circle) because it is not able to adjust its rotation quickly enough. I'm interested in a solution where the movement speed is multiplied by a number from 0.1 to 1 in proportion to necessity. The problem is, how do I calculate whether slowing down is necessary in the first place? How do I calculate an appropriate multiplier that is neither too small nor too large? (A sketch of one approach follows the code.)

        void moveToPoint(sf::Vector2f destination)
        {
            if (destination == position) return;

            auto movement_distance = distanceBetweenPoints(position, destination);
            desired_rotation = angleBetweenPoints(position, destination);

            /// Check whether rotation should be adjusted
            if (rotation != desired_rotation) {
                /// Check whether the object can achieve the desired rotation
                /// within the next adjustment of its rotation
                if (Radian::isWithinDistance(rotation, desired_rotation, rotation_speed)) {
                    rotation = desired_rotation;
                } else {
                    /// Determine whether to increment or decrement rotation
                    /// in order to achieve the desired rotation
                    if (Radian::convert(desired_rotation - rotation) > 0) {
                        /// Increment rotation
                        rotation += rotation_speed;
                    } else {
                        /// Decrement rotation
                        rotation -= rotation_speed;
                    }
                }
            }

            if (movement_distance < movement_speed) {
                position = destination;
            } else {
                position.x = position.x + movement_speed * cos(rotation);
                position.y = position.y + movement_speed * sin(rotation);
            }

            updateGraphics();
        }
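    One way to size the multiplier from the variables above: at full speed the object traces a turning circle of radius roughly movement_speed / rotation_speed (distance per update divided by radians per update), and it can no longer spiral onto a target that lies closer than that circle's diameter. A minimal sketch of this idea in C# – the names mirror the question's fields and are illustrative, not a drop-in API:

        using System;

        static class Steering
        {
            // movementSpeed: units per update; rotationSpeed: radians per update.
            // Returns a multiplier in [0.1, 1.0] to apply to movementSpeed.
            public static double SpeedMultiplier(double distanceToTarget,
                                                 double movementSpeed,
                                                 double rotationSpeed)
            {
                // Radius of the circle traced when turning as hard as possible
                // while moving at full speed.
                double turnRadius = movementSpeed / rotationSpeed;

                // If the target sits inside the turning circle's diameter,
                // shrink the speed (and with it the radius) proportionally.
                double ratio = distanceToTarget / (2.0 * turnRadius);
                return Math.Min(1.0, Math.Max(0.1, ratio));
            }
        }

    Since the turning radius scales linearly with speed, scaling the speed by distance / (2 * radius) shrinks the circle just enough that the target no longer falls inside it; targets farther away than the diameter leave the speed untouched.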

    Read the article

  • Entity Framework & Transactions

    - by Sudheer Kumar
    There are many instances where we might have to use transactions to maintain data consistency. With Entity Framework, it is a little different conceptually.

    Case 1 – Transaction between multiple SaveChanges() calls: here, if you just use a transaction scope, Entity Framework (EF) will use distributed transactions instead of local transactions. The reason is that EF closes and opens the connection only whenever required, which means it uses 2 different connections for different SaveChanges() calls. To resolve this, use the following method. Here we are opening the connection explicitly so the transaction does not span multiple connections.

        try
        {
            using (TransactionScope ts = new TransactionScope())
            {
                context.Connection.Open();
                // Operation 1: context.SaveChanges();
                // Operation 2: context.SaveChanges();
                ts.Complete();
            }
        }
        catch (Exception ex)
        {
            // Handle exception
        }
        finally
        {
            // At the end, close the connection
            if (context.Connection.State == ConnectionState.Open)
            {
                context.Connection.Close();
            }
        }

    Case 2 – Transaction between DB and non-DB operations: for example, assume that you have a table that keeps track of emails to be sent. Here you want to update certain details, like DateSent, once the mail has been successfully sent by the e-mail client.

        Email eml = GetEmailToSend();
        eml.DateSent = DateTime.Now;

        using (TransactionScope ts = new TransactionScope())
        {
            // Update DB
            context.SaveChanges();

            // If the update is successful, send the email using the SMTP client
            smtpClient.Send();

            // If the send was successful, then commit
            ts.Complete();
        }

    Here, since you are dealing with a single context.SaveChanges(), you just need to wrap it in the TransactionScope before saving the context. Hope this was helpful!

    Read the article

  • Problem with SLATEC routine usage with gfortran

    - by user39461
    I am trying to compute the Bessel function of the second kind (Bessel_y) using the SLATEC Amos library available on Netlib (here is the SLATEC code I use). Below I have pasted my test program, which calls the SLATEC routine CBESY.

        PROGRAM BESSELTEST
          IMPLICIT NONE
          REAL :: FNU
          INTEGER, PARAMETER :: N = 2, KODE = 1
          COMPLEX, ALLOCATABLE :: CWRK(:), CY(:)
          COMPLEX :: Z, ci
          INTEGER :: NZ, IERR

          ALLOCATE(CWRK(N), CY(N))
          ci = CMPLX(0.0, 1.0)
          FNU = 0.0E0
          Z = CMPLX(0.3E0, 0.4E0)
          CALL CBESY(Z, FNU, KODE, N, CY, NZ, CWRK, IERR)
          WRITE(*,*) 'CY: ', CY
          WRITE(*,*) 'IERR: ', IERR
          STOP
        END PROGRAM

    And here is the output of the above program:

        CY: ( 5.78591091E-39, 5.80327020E-39) ( 0.0000000 , 0.0000000 )
        IERR: 4

    IERR = 4 means there is some problem with the input itself. To be precise, IERR = 4 means the following, as per the header info in the CBESY.f file:

        ! IERR=4, CABS(Z) OR FNU+N-1 TOO LARGE - NO COMPUTA-
        !         TION BECAUSE OF COMPLETE LOSSES OF SIGNIFI-
        !         CANCE BY ARGUMENT REDUCTION

    Clearly, CABS(Z) (which is 0.50) and FNU + N - 1 (which is 1.0) are not too large, but still the routine CBESY throws error message number 4 as above. The CY array should have the following values for the argument given in the above code (computed using Matlab):

        CY(1) = -0.4983 + 0.6700i
        CY(2) = -1.0149 + 0.9485i

    I can't figure out what the problem is when I call CBESY from the SLATEC library. Any clues? Many thanks for any suggestions/help. PS: if it is of any help, I used gfortran to compile, link and then create the SLATEC library file (the .a file), which I keep in the same directory as my test program. Shell commands to build and run the above code:

        gfortran -c BesselTest.f95
        gfortran -o a *.o libslatec.a
        ./a

    Read the article

  • Google Webmaster Tools strange 404 errors referred from same site

    - by Out of Control
    Starting about a month ago, I noticed a sudden increase in 404 errors in Webmaster Tools for one of my sites (over 1400 errors so far). All the errors are referred from my own site to non-existent pages. The 404 error URLs are all of the same format:

        URL: http://www.helloneighbour.com/save/1347208508000

    The number on the end appears to be a timestamp followed by 3 zeros. The referring page, in this case, is:

        Linked from http://www.helloneighbour.com/save/cmw-insurance-insurance-burnaby

    When I look at the source code of that page, or use Webmaster Tools to view the page as Google sees it, I can't find any link that comes close to the one above. I built the site, and I can't find any place that might be generating these false links either. The server logs (access and error) don't show Google or anyone else trying to access these links. I've marked all these pages as fixed, and waited a couple of weeks, only to find the errors come back again over the last few days. I'm wondering if anyone else has seen anything strange like this, or if someone might have a way for me to debug and replicate this error myself.
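    For what it's worth, "a timestamp followed by 3 zeros" is exactly the shape of a millisecond-precision Unix timestamp (13 digits), the kind JavaScript's Date.now() produces – which would point towards a script somewhere constructing these URLs. A quick check of the example URL (a hedged C# sketch; DateTimeOffset.FromUnixTimeMilliseconds needs .NET 4.6 or later):

        using System;

        class TimestampCheck
        {
            static void Main()
            {
                // The numeric tail of the 404 URL, read as milliseconds
                // since the Unix epoch.
                var when = DateTimeOffset.FromUnixTimeMilliseconds(1347208508000);
                Console.WriteLine(when); // 2012-09-09 16:35:08 UTC
            }
        }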

    Read the article

  • Integrating NetBeans for Raspberry Pi Java Development

    - by speakjava
    Raspberry Pi IDE Java Development

    The Raspberry Pi is an incredible device for building embedded Java applications but, despite being able to run an IDE on the Pi, it really pushes things to the limit. It's much better to use a PC or laptop to develop the code and then deploy and test on the Pi. What I thought I'd do in this blog entry was to run through the steps necessary to set up NetBeans on a PC for Java code development, with automatic deployment to the Raspberry Pi as part of the build process.

    I will assume that your starting point is a Raspberry Pi with an SD card that has one of the latest Raspbian images on it. This is good because this now includes JDK 7 as part of the distro, so there is no need to download and install a separate JDK. I will also assume that you have installed the JDK and NetBeans on your PC. These can be downloaded here.

    There are numerous approaches you can take to this, including mounting the file system from the Raspberry Pi remotely on your development machine. I tried this and I found that NetBeans got rather upset if the file system disappeared, either through network interruption or the Raspberry Pi being turned off. The following method uses copying over SSH, which will fail more gracefully if the Pi is not responding.

    Step 1: Enable SSH on the Raspberry Pi

    To run the Java applications you create, you will need to start Java on the Raspberry Pi with the appropriate class name, classpath and parameters. For non-JavaFX applications you can do this either from the Raspberry Pi desktop or, if you do not have a monitor connected, through a remote command line. To execute the remote command line you need to enable SSH (a secure shell login over the network) and connect using an application like PuTTY.

    You can enable SSH when you first boot the Raspberry Pi, as the raspi-config program runs automatically. You can also run it at any time afterwards with the command:

        sudo raspi-config

    This will bring up a menu of options. Select '8 Advanced Options' and on the next screen select 'A4 SSH'. Select 'Enable' and the task is complete.

    Step 2: Configure Raspberry Pi Networking

    By default, the Raspbian distribution configures the ethernet connection to use DHCP rather than a static IP address. You can continue to use DHCP if you want, but to avoid potentially having to change settings whenever you reboot the Pi, using a static IP address is simpler.

    To configure this on the Pi you need to edit the /etc/network/interfaces file. You will need to do this as root using the sudo command, so something like: sudo vi /etc/network/interfaces. In this file you will see this line:

        iface eth0 inet dhcp

    This needs to be changed to the following:

        iface eth0 inet static
            address 10.0.0.2
            gateway 10.0.0.254
            netmask 255.255.255.0

    You will need to change the address, gateway and netmask values to an appropriate IP address and to match the address of your gateway.

    Step 3: Create a Public-Private Key Pair on Your Development Machine

    How you do this will depend on which operating system you are using.

    Mac OS X or Linux: run the command:

        ssh-keygen -t rsa

    Press ENTER/RETURN to accept the default destination for saving the key. We do not need a passphrase, so simply press ENTER/RETURN for an empty one and once more to confirm. The key will be created in the file .ssh/id_rsa.pub in your home directory. Display the contents of this file using the cat command:

        cat ~/.ssh/id_rsa.pub

    Open a window, SSH to the Raspberry Pi and login. Change directory to .ssh and edit the authorized_keys file (don't worry if the file does not exist). Copy and paste the contents of the id_rsa.pub file into the authorized_keys file and save it.

    Windows: since Windows is not a UNIX-derivative operating system, it does not include the necessary key-generating software by default. To generate the key I used puttygen.exe, which is available from the same site that provides the PuTTY application, here. Download this and run it on your Windows machine. Follow the instructions to generate a key. I remove the key comment, but you can leave that if you want. Click "Save private key", confirm that you don't want to use a passphrase, and select a filename and location for the key.

    Copy the public key from the part of the window marked "Public key for pasting into OpenSSH authorized_keys file". Use PuTTY to connect to the Raspberry Pi and login. Change directory to .ssh and edit the authorized_keys file (don't worry if this does not exist). Paste the key information at the end of this file and save it.

    Logout and then start PuTTY again. This time we need to create a saved session using the private key. Type the IP address of the Raspberry Pi into the "Host Name (or IP address)" field and expand "SSH" under the "Connection" category. Select "Auth". Click the "Browse" button under "Private key file for authentication" and select the file you saved from puttygen. Go back to the "Session" category and enter a short name in the saved sessions field. Click "Save" to save the session.

    Step 4: Test the Configuration

    You should now have the ability to use scp (Mac/Linux) or pscp.exe (Windows) to copy files from your development machine to the Raspberry Pi without needing to authenticate by typing in a password (so we can automate the process in NetBeans). It's a good idea to test this using something like:

        scp /tmp/foo pi@10.0.0.2:/tmp

    on Linux or Mac, or:

        pscp.exe foo pi@raspi:/tmp

    on Windows (note that on Windows we use the saved configuration name instead of the IP address or hostname so the public key is picked up). pscp.exe is another tool available from the creators of PuTTY.

    Step 5: Configure the NetBeans Build Script

    Start NetBeans and create a new project (or open an existing one that you want to deploy automatically to the Raspberry Pi). Select the Files tab in the explorer window and expand your project. You will see a build.xml file. Double click this to edit it. This file will mostly be comments. At the end (but before the closing </project> tag) add the XML for the <target name="-post-jar"> target, shown below:

        <target name="-post-jar">
          <echo level="info" message="Copying dist directory to remote Pi"/>
          <exec executable="scp" dir="${basedir}">
            <arg line="-r"/>
            <arg value="dist"/>
            <arg value="pi@10.0.0.2:NetBeans/CopyTest"/>
          </exec>
        </target>

    For Windows it will be slightly different:

        <target name="-post-jar">
          <echo level="info" message="Copying dist directory to remote Pi"/>
          <exec executable="C:\pi\putty\pscp.exe" dir="${basedir}">
            <arg line="-r"/>
            <arg value="dist"/>
            <arg value="pi@raspi:NetBeans/CopyTest"/>
          </exec>
        </target>

    You will also need to ensure that pscp.exe is in your PATH (or specify a fully qualified pathname). From now on, when you clean and build the project, the dist directory will automatically be copied to the Raspberry Pi ready for testing.

    Read the article

  • Blogging is Hard

    - by Aaron Lazenby
    Not really. But wi-fi access is limited to common areas in the COLLABORATE 10 conference center here in Las Vegas, so my grand roving iPad blog update plan was delayed a day while I measured signal strength and searched for a place to sit. Tuesday morning, I accomplished both. Yesterday I shot a nice, quick video of Bahseer Khan about embedded decision support – a part of his Oracle Fusion Applications presentation that I think could do with some additional discussion as we ramp up for Oracle's next-generation applications. I'll post that video here by the end of the day. Later today I'll also be interviewing OAUG president David Ferguson about the prevailing trends at COLLABORATE 10, the addition of Sun (and Sun's user groups) to the Oracle portfolio, and what the next 12 months hold in store for the Oracle user community. Look for that video later today too. If you can't wait for me to dash down to the lobby to make a blog update, don't forget that you can follow Profit at COLLABORATE 10 on Twitter (@OracleProfit). That way, you'll get updates about Billy Cripe's kilt in real time. More to come as this day develops. Next up: virtualization. Also, notes and coverage from yesterday's keynote presentation.

    Read the article

  • To the world, with love

    - by kaleidoscope
    kaleidoscope 1817, lit. "observer of beautiful forms," coined by its inventor, Sir David Brewster (1781-1868), from Gk. kalos "beautiful" + eidos "shape" (see -oid) + -scope, on model of telescope, etc. Figurative meaning "constantly changing pattern" is first attested 1819 in Lord Byron, whose publisher had sent him one.

    Let's start by setting some context here. "We" are not a single blogger but a bunch of like-minded people who will be contributing to this blog. We belong to a team led by some folks possessed by innovation, and this has rubbed off on us in a good way.

    How it started

    It all started with an initiative from Girish A. A big thank you goes to him. To get it straight from the horse's mouth:

    What is it? - Everybody (as per the published schedule) posts a small write-up (not more than, say, 5/6 lines) regarding any Azure-related concept. - We shall consolidate all these mails (which would be 5/week) and quickly discuss/brainstorm about them at the end of the week, i.e. on Fridays.

    What's the benefit? - This should make our basic Azure concepts rock-solid. - As part of this exercise, we will have a very good collection of Azure FAQs.

    As the team grew stronger, so did the contributions, and after almost 3 months of regular contributions and weekly discussions we thought of sharing the content with the world at large. After all, we are IT folks, and the big I in IT is there for a reason. :)

    The road ahead

    We will post the entire collection as time permits, tagged by contributor. Going forward, each contributor will post individually, adding his/her specific tag. Get blogging!!

    Read the article

  • Portal Content Personalization

    - by john.brunswick
    Personalization is a critical component of making the most effective use of a portal and content management platform, and of delivering the most value to end users. Regardless of what type of constituents you may be serving, content relevance is critical to effectively support business goals like self-service, communication within a geographically distributed organization, lead generation and customer loyalty. This especially holds true when serving external parties, as they generally have a lower threshold for digging through your site to locate a particular item of interest and are apt to leave or dial a helpdesk if their efforts cannot locate the relevant information. Optimal delivery of content can be achieved through a variety of methods, but it is generally a blend of security and filtering via metadata that can drive the most return with the least amount of upfront effort and ongoing upkeep. In a portal environment, various platform components have their strong suits, and by combining the capabilities of enterprise portal and content platforms much of the groundwork for personalization can be achieved in a configuration-based manner. In our discussion we will cover terminology and concepts, example scenarios and technical implementation strategies to help showcase how personalization of content can be achieved within a portal from a technical and strategic standpoint. Read on...

    Read the article

  • Trying to get MythTV working in Kubuntu 10.10

    - by user4109
    I'm trying to get MythTV working in Kubuntu. Unfortunately I've got the following problem: if I fire up the MythTV Frontend and select "Watch TV", a "Please wait..." label appears and after a while the screen falls back to the home screen. tail -f /var/log/mythtv/mythfrontend.log prints out the following:

        2010-10-14 19:22:18.809 MythContext: Connecting to backend server: 127.0.0.1:6543 (try 1 of 1)
        2010-10-14 19:22:18.811 Using protocol version 23056
        2010-10-14 19:22:22.641 TV: Attempting to change from None to WatchingLiveTV
        2010-10-14 19:22:22.641 MythContext: Connecting to backend server: 127.0.0.1:6543 (try 1 of 1)
        2010-10-14 19:22:22.642 Using protocol version 23056
        2010-10-14 19:22:22.715 Spawning LiveTV Recorder -- begin
        2010-10-14 19:22:26.563 Spawning LiveTV Recorder -- end
        2010-10-14 19:22:26.565 ProgramInfo(): Updated pathname '':'' -> '1005_20101014192226.mpg'
        2010-10-14 19:22:26.569 We have a playbackURL(/var/lib/mythtv/livetv/1005_20101014192226.mpg) & cardtype(MPEG)
        2010-10-14 19:22:33.070 RingBuf(/var/lib/mythtv/livetv/1005_20101014192226.mpg): Invalid file (fd -1) when opening '/var/lib/mythtv/livetv/1005_20101014192226.mpg'.
        2010-10-14 19:22:33.072 We have a RingBuffer

    Then there is a whole bunch of these...

        2010-10-14 19:22:33.186 RingBuf(/var/lib/mythtv/livetv/1005_20101014192226.mpg) error: Invalid file descriptor in 'safe_read()'

    ...before it falls back to the main menu. I've got an MSI TV@Anywhere Plus tuner card (Philips Semiconductors SAA7131/SAA7133/SAA7135 Video Broadcast Decoder). Any idea what could be the problem?

    Read the article

  • Community Events and Workshops in November 2012 #ssas #tabular #powerpivot

    - by Marco Russo (SQLBI)
    Alberto and I have a busy agenda until the end of the month, but if you are based in Northern Europe there are many chances to meet one of us in the next couple of weeks!

    Belgium, 20 November 2012 – SQL Server Days 2012, with Marco Russo: I will present two sessions at this conference, “Data Modeling for Tabular” and “Querying and Optimizing DAX”.

    Copenhagen, 21-22 November 2012 – SSAS Tabular Workshop, with Alberto Ferrari: Alberto will be the speaker for 2 days – you can still register if you want a full immersion!

    Copenhagen, 21 November 2012 – Free community event, with Alberto Ferrari (hosted at Microsoft Hellerup): in the evening Alberto will present “Excel 2013 PowerPivot in Action”.

    Munich, 27-28 November 2012 – SSAS Tabular Workshop, with Alberto Ferrari: the SSAS workshop will also run in Germany, this time in Munich. Here, too, some seats are still available.

    Munich, 27 November 2012 – Free community event, with Alberto Ferrari (hosted at Microsoft): in the evening Alberto will present “Excel 2013 PowerPivot in Action”.

    Moscow, 27-28 November 2012 – TechEd Russia 2012, with Marco Russo: I will speak during the keynote on November 27 and I will present two sessions the day after, “Developing an Analysis Services Tabular Project BI Semantic Model” and “Excel 2013 PowerPivot in Action”.

    Stockholm, 29-30 November 2012 – SSAS Tabular Workshop, with Marco Russo: I will run this workshop in Stockholm – if you want to register, hurry up! Few seats are still available!

    Stockholm, 29 November 2012 – Free community event (sold out!), with Marco Russo: in the evening I will present “Excel 2013 PowerPivot in Action”.

    If you want to attend an SSAS Tabular Workshop online, you can also register for the online edition of December 5-6, 2012, which is still in early bird and is scheduled in a friendly time zone for the Americas (which could be good for Europe too, in case you don’t mind attending a workshop until midnight!).

    Read the article

  • Java Spotlight Episode 149: Geertjan Wielenga on NetBeans 7.4 @netbeans @geertjanw

    - by Roger Brinkley
    Interview with Geertjan Wielenga on the NetBeans 7.4 release. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes.

    News
    - Everything in JavaFX is open sourced - Media is now open source (openjdk email)
    - Java SE 7 Update 45 Released
    - 7u45 Caller-Allowable-Codebase and Trusted-Library
    - Updated Security Baseline (7u45) impacts Java 7u40 and before with High Security settings
    - What to do if your applet is blocked or warns of “mixed code”?
    - LiveConnect changes in 7u45
    - JDK 8 end-game details and proposal for making a few exceptions

    Events
    - Oct 28-30, JAX London, London
    - Nov 4-8, Oredev, Malmö, Sweden
    - Nov 6, JFall, Amsterdam, Netherlands
    - Nov 11-15, Devoxx, Belgium

    Feature Interview
    Geertjan Wielenga is a principal product manager in Oracle for NetBeans and has been a member of the NetBeans team for the past 7 years.
    - NetBeans twitter: https://twitter.com/netbeans
    - Geertjan’s twitter: https://twitter.com/geertjanw
    - The Top Ten Coolest Features in NetBeans IDE 7.4

    What’s Cool
    - Nighthacking with James Gosling
    - Arun Gupta waves goodbye and says hello
    - JavaOne 2013 Roundup: Java 8 is Revolutionary, Java is back
    - Survey on Java erasure/reification

    Read the article

  • Why Standards Only Get You So Far

    - by Tim Murphy
    Over the years I have been exposed to a number of standards. EDI was the first. More recently it has been the CIECA standard for insurance, and now the embattled document standards of Open XML and ODF.

    Standards actually came up at the last CAG meeting. The debate was over how effective they really are. Even back in the late 80’s to early 90’s, people found they had to customize these standards to get any work done. I even had one vendor about a year ago tell me that they really weren’t standards, they were more of a guideline.

    The problem is that standards are created either by committee or by companies trying to sell a product. They never fit all situations. This is why most of them leave extension points in their definition. Of course, if you use those extension points, everyone has to have custom code to know how to consume the new product.

    Standards increase reliability, but they stifle innovation and slow the time-to-market cycle of products. In this age of ever-shortening windows of opportunity, that could mean a company loses its competitive advantage.

    I believe that standards are not only good, but essential. I also believe that they are not a silver bullet. People who turn competing standards into a type of holy war are really missing the point. I think we should make the best standards we can, whether that is for a product, so that customers can use the API, or by committee, so that they work across products. But they also need to be as feature-rich and flexible as possible. They can’t be just the lowest common denominator, since that type of standard will be broken the day it is published. In the end, though, the market will vote with its dollars.

    del.icio.us Tags: Office Open XML, ODF, Standards, EDI

    Read the article

  • C#: My World Clock

    - by Bruce Eitman
    [Placeholder: I will post the entire project soon]

    I have been working on cleaning my office of 8 years of stuff from several engineers working on many projects. It turns out that we have a few extra single board computers with displays, so at the end of the day last Friday I thought: why not create a little application to display the time, you know, a clock. How difficult could that be? It turns out that it is quite simple – until I decided to gold-plate the project by adding time displays for our offices around the world.

    I decided to use C#, which actually made creating the main clock quite easy. The application was simply a text box and a timer. I set the timer to fire a couple of times a second, and when it does it uses a DateTime object to get the current time and retrieve a string to display. And I could have been done, but of course that gold plating came up. Seems simple enough: simply offset the time from the local time to the location that I want the time for and display it. Sure enough, I had the time displayed for the UK, Italy, Kansas City, Japan and China in no time at all.

    But it is October, and for those of us still stuck with Daylight Savings Time, we know that the clocks are about to change. My first attempt was to simply check to see if the local time was DST or Standard Time, then change the offset for China. China doesn't have Daylight Savings Time. If you know anything about the time changes around the world, you already know that my plan is flawed – in a big way. It turns out that the transitions in and out of DST take place at different times around the world. If you didn't know that, do a quick search for "Daylight Savings" and you will find many web sites dedicated to tracking the time change dates and times.

    Now the real challenge of this application: how do I programmatically find out when the time changes occur and handle them correctly? After a considerable amount of research it turns out that the solution is to read the data from the registry and parse it to figure out when the time changes occur.

    Reading Time Change Information from the Registry

    Reading the data from the registry is simple; using the data is a little more complicated. First, reading from the registry can be done like:

        byte[] binarydata = (byte[])Registry.GetValue(
            "HKEY_LOCAL_MACHINE\\Time Zones\\Eastern Standard Time", "TZI", null);

    Where I have hardcoded the registry key for example purposes, but in the end I will use some variables. We now have a binary blob with the data, but it needs to be converted to make the real data usable. To start we will need a couple of structs to hold the data: a SYSTEMTIME and a REG_TZI_FORMAT. You may have expected that we would need a TIME_ZONE_INFORMATION struct, but we don't. The data is stored in the registry as a REG_TZI_FORMAT, which excludes some of the values found in TIME_ZONE_INFORMATION.

        struct SYSTEMTIME
        {
            internal short wYear;
            internal short wMonth;
            internal short wDayOfWeek;
            internal short wDay;
            internal short wHour;
            internal short wMinute;
            internal short wSecond;
            internal short wMilliseconds;
        }

        struct REG_TZI_FORMAT
        {
            internal long Bias;
            internal long StdBias;
            internal long DSTBias;
            internal SYSTEMTIME StandardStart;
            internal SYSTEMTIME DSTStart;
        }

    Now we need to convert the binary blob to a REG_TZI_FORMAT. To do that I created the following helper functions:

        private void BinaryToSystemTime(ref SYSTEMTIME ST, byte[] binary, int offset)
        {
            ST.wYear = (short)(binary[offset + 0] + (binary[offset + 1] << 8));
            ST.wMonth = (short)(binary[offset + 2] + (binary[offset + 3] << 8));
            ST.wDayOfWeek = (short)(binary[offset + 4] + (binary[offset + 5] << 8));
            ST.wDay = (short)(binary[offset + 6] + (binary[offset + 7] << 8));
            ST.wHour = (short)(binary[offset + 8] + (binary[offset + 9] << 8));
            ST.wMinute = (short)(binary[offset + 10] + (binary[offset + 11] << 8));
            ST.wSecond = (short)(binary[offset + 12] + (binary[offset + 13] << 8));
            ST.wMilliseconds = (short)(binary[offset + 14] + (binary[offset + 15] << 8));
        }

        private REG_TZI_FORMAT ConvertFromBinary(byte[] binarydata)
        {
            REG_TZI_FORMAT RTZ = new REG_TZI_FORMAT();

            RTZ.Bias = binarydata[0] + (binarydata[1] << 8) + (binarydata[2] << 16) + (binarydata[3] << 24);
            RTZ.StdBias = binarydata[4] + (binarydata[5] << 8) + (binarydata[6] << 16) + (binarydata[7] << 24);
            RTZ.DSTBias = binarydata[8] + (binarydata[9] << 8) + (binarydata[10] << 16) + (binarydata[11] << 24);
            BinaryToSystemTime(ref RTZ.StandardStart, binarydata, 4 + 4 + 4);
            BinaryToSystemTime(ref RTZ.DSTStart, binarydata, 4 + 16 + 4 + 4);

            return RTZ;
        }

    I am the first to admit that there may be a better way to get the settings from the registry and into the REG_TZI_FORMAT, but I am not a great C# programmer, which I have said before on this blog. So sometimes I choose brute force over elegance.

    Now that we have the bias information and the start date information, we can start to make sense of it. The bias is an offset, in minutes, from local time (if already in local time for the time zone in question) to get to UTC – or as Microsoft defines it: UTC = local time + bias. Standard bias is an offset to adjust for standard time, which I think is usually zero. And DST bias is an offset to adjust for daylight savings time.

    Since we don't have the local time for a time zone other than the one that the computer is set to, what we first need to do is convert local time to UTC, which is simple enough using:

        DateTime.Now.ToUniversalTime();

    Then, since we have UTC, we need to do a little math to alter the formula to: local time = UTC – bias. In other words, we need to subtract the bias minutes.

    I am ahead of myself though; the standard and DST start dates really aren't dates. Instead they indicate the month, day of week and week number of the time change. The wDay member of SYSTEMTIME will be set to the week number of the date change, indicating that the change happens on the first, second… occurrence of that day of week in the month. So we need to convert them to dates so that we can determine which bias to use, and when to change to a different bias. To do that, I wrote the following function:

        private DateTime SystemTimeToDateTimeStart(SYSTEMTIME Time, int Year)
        {
            DayOfWeek[] Days = { DayOfWeek.Sunday, DayOfWeek.Monday, DayOfWeek.Tuesday,
                                 DayOfWeek.Wednesday, DayOfWeek.Thursday, DayOfWeek.Friday,
                                 DayOfWeek.Saturday };

            DateTime InfoTime = new DateTime(Year, Time.wMonth,
                Time.wDay == 1 ? 1 : ((Time.wDay - 1) * 7) + 1,
                Time.wHour, Time.wMinute, Time.wSecond, DateTimeKind.Utc);
            DateTime BestGuess = InfoTime;
            while (BestGuess.DayOfWeek != Days[Time.wDayOfWeek])
            {
                BestGuess = BestGuess.AddDays(1);
            }
            return BestGuess;
        }

    SystemTimeToDateTimeStart takes two parameters: a SYSTEMTIME and a year. The reason is that we will try this year and next year, because we are interested in start dates that are in the future, not the past. The function starts by getting a new DateTime with the first possible date and then looks for the correct date.

    Using the start dates, we can then determine the correct bias to use, and the next date that the time will change:

        NextTimeChange = StandardChange;
        CurrentBias = TimezoneSettings.Bias + TimezoneSettings.DSTBias;
        if (DSTChange.Year != 1 && StandardChange.Year != 1)
        {
            if (DSTChange.CompareTo(StandardChange) < 0)
            {
                NextTimeChange = DSTChange;
                CurrentBias = TimezoneSettings.StdBias + TimezoneSettings.Bias;
            }
        }
        else
        {
            // I don't like this, but it turns out that China Standard Time
            // has a DSTBias of -60 on every Windows system that I tested.
            // So, if there are no DST transitions, then just use the Bias
            // without any offset
            CurrentBias = TimezoneSettings.Bias;
        }

    Note that some time zones do not change time, in which case the years will remain set to 1. Further, I found that the registry settings are actually wrong in that the DST bias is set to -60 for China even though there is no DST in China, so I ignore the standard and DST bias for those time zones.

    There is one thing that I have not solved, and don't plan to solve. If the time zone for this computer changes, this application will not update the clock using the new time zone. I tell you this because you may need to deal with it – I do not, because I won't let the user get to the control panel applet to change the time zone.

    Copyright © 2012 – Bruce Eitman All Rights Reserved
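    As an aside, hedged on platform: on the full desktop framework (.NET 3.5 and later, as opposed to the Compact Framework on embedded boards), TimeZoneInfo already wraps the system's time zone data, DST transition rules included, so a world clock there can lean on it directly. A minimal sketch, where the time zone IDs match the registry key names such as "China Standard Time":

        using System;

        class WorldClockSketch
        {
            static void Main()
            {
                // TimeZoneInfo applies the correct bias for the given instant,
                // including any DST transition rules for that zone.
                TimeZoneInfo china = TimeZoneInfo.FindSystemTimeZoneById("China Standard Time");
                DateTime utcNow = DateTime.UtcNow;
                Console.WriteLine(TimeZoneInfo.ConvertTimeFromUtc(utcNow, china));
            }
        }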

    Read the article

  • Getting out of web-development before I make a huge investment? [closed]

    - by zhenka
    I am still in college. I've been doing web-app development for about a year now and I'm growing to hate it more and more. The whole thing feels like a huge hack, and I am losing my interest in programming because of it. Too much time is spent learning tricks and libraries in javascript/css/html and battling the statelessness of it all. I don't so much mind back-end development; I just hate ALL of the frontend technology stack. What attracted me to programming was software architecture. I love design patterns, clean code, etc... I just feel like there is a lot more to play with in that regard in other forms of development. Moreover, I feel like by becoming a Java or .Net expert I will be able to do A LOT more in terms of career choices. I would be able to do anything from server-side to desktop to mobile, but ruby, javascript, php, css etc... make me completely unemployable in any other sub-domain of SE. Plus, most of the learning on the web seems to be technology tricks rather than becoming a better developer and expanding one's mind. Should I get out of it and start coding mobile side projects before I invest too much into it? Does anyone have any advice, or perhaps share this feeling and moved out of it successfully? Thanks!

    Read the article

  • How to negotiate with software vendors who do not follow HL7 standards

    - by Peter Turner
    Take, for instance, the "" (a pair of double quotes). I'd hope that anyone who has spent any time dealing with HL7 messages knows that "" signifies that something should be deleted. "" is not an empty string, it's not a filler, etc... But occasionally, one may meet a vendor who persists in sending "" instead of just sending nothing at all. Since I work for a small business and have an extremely flexible HL7 interface, I can ignore ""s in received messages. But these things are adding up. Some vendors like to send custom-formatted fields with pseudo-components that they leave others to interpret themselves. Some vendors send all their information in note segments and assume you're going to show users only the information they send, in a monospace font. Some vendors even have the audacity to send Carriage Return Line Feeds at the end of each line of a file interface. Some vendors absolutely refuse to send decimal numbers and, in so doing, refuse to send any numbers. So, with all this crippling humanity against the simple plastic software man, how does one bend without breaking*? Or better yet, how does one fight back and still make money?

    *my answer is usually to create an interface for the interface and keep the HL7 processing pure, but I don't think this is the best solution
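    For readers unfamiliar with the convention being argued about: in HL7 v2, an empty field means "leave the stored value unchanged", while the literal "" means "explicitly delete the stored value". A minimal C# sketch of the normalization an "interface for the interface" might perform (the names are illustrative, not from any particular HL7 library):

        using System;

        static class Hl7Field
        {
            public enum Action { NoChange, Delete, Update }

            // Interpret a raw HL7 v2 field per the standard's convention:
            // empty = no change, "" = delete, anything else = new value.
            public static Action Interpret(string raw, out string value)
            {
                value = null;
                if (string.IsNullOrEmpty(raw)) return Action.NoChange;
                if (raw == "\"\"") return Action.Delete;
                value = raw;
                return Action.Update;
            }
        }

    For a vendor that wrongly emits "" for every blank field, such a pre-processing layer can downgrade Delete to NoChange before the pure HL7 logic ever sees the message.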

    Read the article

  • Google Analytics setting cookies on static content despite being on entirely separate domain

    - by Donald Jenkins
    I recently decided to comply with the YSlow recommendation that static content be hosted on a cookieless domain. As I already use the root of my domain (donaldjenkins.com) to host my website – on which Google Analytics sets a few cookies – that meant I had to move the CNAME URL for the CDN serving the static files from cdn.donaldjenkins.com to an entirely separate, dedicated domain. I purchased cdn.dj (yes, it's a real Djibouti domain name), hosted the files on the root (which contains nothing else, other than a robots.txt file) and set a CNAME of e.cdn.dj for the CDN. This setup works, but I was rather surprised to find that YSlow was still flagging the static files for not being cookie-free. The cdn.dj domain was new, and was never used for anything other than hosting these static files. Running httpfox on the site shows the _utma and _utmz Google Analytics cookies are being set on the static files listed above – despite their being hosted on an entirely separate, dedicated domain. Here's my Google Analytics code:

        //Google Analytics tracking code
        var _gaq = [['_setAccount', 'UA-5245947-5'], ['_trackPageview']];
        (function(d, t) {
            var g = d.createElement(t), s = d.getElementsByTagName(t)[0];
            g.src = ('https:' == location.protocol ? '//ssl' : '//www') + '.google-analytics.com/ga.js';
            s.parentNode.insertBefore(g, s)
        }(document, 'script'));
        // [END] Google Analytics tracking code

    I'm not obsessing about this issue – I know it's not really affecting server performance – but I'd just like to understand what is causing it not to go away...

    Read the article

  • MSCC: Purpose and benefits of Version Control Systems (VCS)

    Unfortunately, there was no monthly meetup during May. Which means that it was even more important and interesting to go forward with a great topic for this month. Earlier this year I had already spoken to Nayar Joolfoo about doing a presentation on version control systems (VCS), and he gladly agreed since then. It was just about finding the right date for the action. Furthermore, it was also a great coincidence that Avinash Meetoo announced on social media networks that Knowledge 7 is about to have a new training on "Effective git" - which correlates to a book title Avinash is currently working on - all the best with your approach on this, and reach out to our MSCC craftsmen for reviews.

    Once again a big thank you to Orange Ebene Accelerator for providing the venue for us, and to the MSCC members involved in securing the time slot for our event. Unfortunately, it's kind of tough to get an early confirmation for our meetups these days. I'll keep you posted on that one, as there are some interesting and exciting options coming up soon.

    Okay, let's talk about the meeting and version control systems again. As usual, here is my first impression of the meetup: "Absolutely great topic, questions and discussions on version control systems, like git or VSO. I was also highly pleased by the number of first-timers and female IT geeks. Hopefully, we will be able to keep this trend for future get-togethers." And I really have to emphasise the amount of fresh blood coming to our gathering. Also, during the initial phase it was surprising to see that exactly those first-timers, most of them students at various campuses here on the island, had absolutely no idea about version control systems. More about that further down...

    Reactions of other attendees

    If I counted correctly, we had a total of 17 attendees this month, and I'd like to give you feedback from some of them:

    "Inspiring. Helped me understand more about GIT." -- Sean on event comments

    "Joined the meetup today with literally no idea what is a version control system. I have several reasons why I should be starting to use VCS as from NOW in my projects. Thanks Nayar, Jochen and other participants :)" -- Yudish on event comments

    "Was present today and I'm very satisfied. I was not aware if there was a such tool like git available. Thanks to those who contributed for this meetup. It was great. Learned a lot from this meetup!!" -- Leonardo on event comments

    "Seriously, I can see how it's going to ease my task and help me save time. Gone are the issues with files backups. And since I'll be doing my dissertation this year, using Git would help me a lot for my backups and I'm grateful to Nayar for the great explanation." -- Swan-Iyah on MSCC meetup : Version Controls

    Hopefully, I'll be able to get some other sources - personal blogs preferred - on our meeting. Geeks, thank you so much for those encouraging comments. It's really great to experience that we, all members of the MSCC, are doing the right thing to get more IT information out, and to help each other improve and evolve in our professional careers.

    Our agenda of the day

    Honestly, we had a bumpy start... First, I was battling a little bit with the movable room divider in order to maximize the space. I mean, we had 24 RSVPs and usually there might be additional people coming along. Then, for whatever reason, we were facing power outages - actually twice in short periods. Not too good for the projector after all, but hey, it went smoothly for the rest of the time. And last but not least...
    our first speaker Nayar got stuck somewhere on the road. ;-) Anyway, not a real show-stopper, and we used the time until Nayar's arrival to introduce ourselves a little bit. It is always important for me to get to know the "newbies" a little bit, and as a result we had lots of university students - first year, second year and recent graduates - among them. Surprisingly, none of them had ever been in contact with version control systems at all. I mean, this is a shocking discovery! Similar to the ability to touch-type, I'd say that being able to use (and master) any kind of version control system is compulsory in any job in the IT industry. Seriously, I'm wondering what is being taught during the classes on campus. All of them have to work on semester assessments or final projects, even in small teams of 2-4 people. That's the perfect occasion to get started with VCS. Already in this phase, we had great input from more experienced VCS users, like Sean, Avinash and myself.

    git - a modern approach to VCS - Nayar

    What a tour! Nayar gave us the full round of git from start to finish, even touching some more advanced techniques. First, he started by explaining the importance of version control systems as an essential tool for software developers, even when working alone on a project, and the ability to have a kind of "time machine" that allows you to inspect and revert to a previous version of source code at any time. Then he showed how easy it is to install git on an Ubuntu-based system, but also mentioned that git is available for literally any operating system, like Windows, Mac OS X and of course other Linux distributions. Next, he showed us how to set the initial configuration values of user name and email address, which simplifies the daily usage of the git client while working with your repositories. Then he initialised and added a new repository for some local development of a blogging software. All commands were done using the command line interface (CLI) so that they can be repeated on any system as reference. The syntax and the procedure are always the same, and Nayar clearly mentioned this to the attendees.

    Now, having a git repository in place, it was about time to work on some "important" changes to the blogging software - just for the sake of demonstrating the ease of use and power of git. One interesting question came very early: "How many commands do we have to learn? It looks quite difficult at the moment." - Well, rest assured that during daily development cycles you will need fewer than 10 git commands on a regular basis: git add, commit, push, pull, checkout, and merge. And Nayar demo'd all of them. Much to the delight of everyone, he also showed gitk, which is the git repository browser. It's a UI tool to display changes in a repository or a selected set of commits. This includes visualizing the commit graph, showing information related to each commit, and the files in the trees of each revision.

    Using gitk to display and browse information of a local git repository

    And last but not least, we took advantage of the internet connectivity and reached out to various online portals offering git hosting for free. Nayar showed us how to push the local repository into a remote system on github, showing the web-based git browser and history handling, and then also explained and demo'd how to connect to existing online repositories in order to get access to either your own source code or other people's open source projects.
    Next to github, we also spoke about bitbucket and gitlab as potential online platforms for your projects. Have a look at the conditions and details of their free service packages and what you can get additionally as a paying customer. Usually, you already get a lot of services for up to five users for free, but there might be other important aspects that have an impact on your decision. Anyway, moving git-based repositories between systems is a piece of cake, and changing online platforms is possible at any stage of your development.

    Visual Studio Online (VSO) - Jochen

    Well, Nayar literally covered all elements of working with git during his session, including the use of external online platforms. So, what would be the advantage of talking about Visual Studio Online (VSO)? First of all, VSO is "just another" online platform for hosting and managing git repositories on remote systems, equivalent to github, bitbucket, or any other web site. At the moment (of writing), Microsoft also provides a free package of up to five users / developers on a git repository, but there is more in that package. Of course, it is related to software development on Windows systems and the bonds are tightened towards the use of Visual Studio, but from experience you are absolutely not restricted to that. Connecting a Linux or Mac OS X machine with a git client or an integrated development environment (IDE) like Eclipse or Xcode works as smoothly as expected.

    So, why should one opt in for VSO? Well, one of the main aspects that I would like to mention here is that VSO integrates Microsoft's Application Lifecycle Management (ALM) methodology in the platform. Meaning that you get agile project management with backlogs, sprints and burn-down charts, as well as the ability to track tasks, bug reports and work items, next to collaborative team chats. It's the whole package of agile development you'll get. And, something I mentioned briefly at the beginning of our meeting, VSO gives you the possibility of an automated continuous integration (CI) process which builds and can run tests of your source code after each commit of changes. Having a proper CI strategy is also part of the Clean Code Developer practices - on Level Green actually - and not only simplifies your life as a software developer but also reduces the sources of potential errors.

    Seamless integration and automated deployment between Microsoft Azure Web Sites and a git repository

    But my favourite feature is the seamless continuous deployment to Microsoft Azure. Especially while working on web projects, it's absolutely astonishing that as soon as you commit your changes it just takes a couple of seconds until your modifications are deployed and available on your Azure-hosted web sites.

    Upcoming events and networking

    Due to the adjusted times, everybody was kind of hungry and we didn't follow up on networking or upcoming events - very unfortunate in my opinion, and this will have an impact on future planning of our meetups. I would rather see more conversations during and at the end of our meetings than everyone just packing their laptops, bags and accessories and rushing off to grab some food. I was hoping to get some information regarding this year's Code Challenge - supposedly to be organised during July? Maybe someone could leave a comment on that - but I couldn't get any updates. Well, I'll keep digging...
    In case you would like to get more into git and how to use it effectively, please check out Knowledge 7's upcoming course on "Effective git". Thanks, Avinash, for your vital input into today's conversation, and I'm looking forward to getting a grip on your book title very soon.

    My resume of the day

    Do not work in IT without any kind of version control system! Seriously, without a VCS in place you're doing it wrong. It's like driving a car without seat belts attached or riding your bike without a safety helmet. You don't do that! End of discussion. ;-)

    Nowadays, with access to free (as in cost) tools to install on your machine and numerous online platforms that host your source code for free for up to five users, it's a no-brainer to get yourself familiar with VCS. Today's sessions gave a good overview of how to start using git and how to connect to various remote services like github or VSO.

    Read the article

  • Stir Trek: Thor Edition Registration Opens March 17th

    - by Brian Jackett
    Registration for Stir Trek: Thor Edition opens at 12:00am “Thors”day, March 17th. Stir Trek is now in its third year, and this is the second year I’ve helped with planning. For those unfamiliar with the Stir Trek conference, here is the description from the website:

    Stir Trek is an opportunity to learn about the newest advances and latest trends in Web and Mobile development. There will be 30 sessions in six tracks, so you can pick the content that interests you the most. And the best part? At the end of the day you will be treated to a private screening of Thor on its opening day!

    Last year Stir Trek: Iron Man Edition sold out well before the conference and had a long waitlist. Based on CodeMash selling out in just 3.5 days earlier this year, I highly recommend you register early. We also have a star-studded list of speakers, ranging from international experts to local leaders. This will be the best $35 you spend all year.

    Easter egg: I originally had an idea that we should start selling tickets at 1:30am rather than 12:00am. If you can figure out why I proposed 1:30am, leave a comment below. Any good sleuths will find this riddle elementary.

    -Frog Out

    Read the article

  • What are the pros (and cons) of using “Sign in with Twitter/Facebook” for a new website?

    - by Paul D. Waite
    A friend and I are looking to launch a little forum site. I’m considering using the “Sign in with Facebook/Twitter” APIs, possibly exclusively (à la e.g. Lanyrd), for user login. I haven’t used either of these before, nor run a site with user logins at all. What are the pros (and cons) of these APIs? Specifically: What benefits do I get as a developer from using them? What drawbacks are there? Do end users actually like/dislike them? Have you experienced any technical/logistical issues with these APIs specifically? Here are the pros and cons I’ve got so far:

    Pros
    - More convenient for the user (“register” with two clicks, sign in with one)
    - Possibly no need to maintain our own login system

    Cons
    - No control over our login process
    - Excludes Facebook/Twitter users who are worried about us having some sort of access to their accounts
    - Users’ accounts on our site are compromised if their Facebook/Twitter accounts are compromised

    And if we don’t maintain our own alternative login system:
    - Dependency on Facebook/Twitter for our login system
    - Excludes non-Facebook/non-Twitter users from our site

    Read the article

  • SQL SERVER – Removing Leading Zeros From Column in Table – Part 2

    - by pinaldave
    Earlier I wrote a blog post about Removing Leading Zeros From Column In Table. It was a great coincidence that my friend Madhivanan (no need of an introduction for him) also posted a similar article over on BeyondRelational.com. I strongly suggest reading his blog as well, as he has suggested some cool solutions to the same problem. The original blog post asked two questions: 1) is my sample for testing correct, and 2) is there any better method to achieve the same? The response was amazing. I am proud of our SQL community: we all keep on improving on each other's contributions. There were some really good suggestions in the comments. Let us go over them right now.

    Improving the result set

    I had missed including all-zero values in my sample set, which was an oversight. Here is the new sample, which includes all-zero values as well.

        USE tempdb
        GO
        -- Create sample table
        CREATE TABLE Table1 (Col1 VARCHAR(100))
        INSERT INTO Table1 (Col1)
        SELECT '0001'
        UNION ALL
        SELECT '000100'
        UNION ALL
        SELECT '100100'
        UNION ALL
        SELECT '000 0001'
        UNION ALL
        SELECT '00.001'
        UNION ALL
        SELECT '01.001'
        UNION ALL
        SELECT '0000'
        GO

    Now let us go over some of the fantastic solutions we have received.

    Response from Rainmaker:

        SELECT CASE PATINDEX('%[^0 ]%', Col1 + ' ')
                 WHEN 0 THEN ''
                 ELSE SUBSTRING(Col1, PATINDEX('%[^0 ]%', Col1 + ' '), LEN(Col1))
               END
        FROM Table1

    Response from Harsh, solution 1:

        SELECT SUBSTRING(Col1, PATINDEX('%[^0 ]%', Col1 + 'a'), LEN(Col1))
        FROM Table1

    Response from Harsh, solution 2:

        SELECT RIGHT(Col1, LEN(Col1) + 1 - PATINDEX('%[^0 ]%', Col1 + 'a'))
        FROM Table1

    Response from lucazav:

        SELECT T.Col1,
               label = CAST(CAST(REPLACE(T.Col1, ' ', '') AS FLOAT) AS VARCHAR(10))
        FROM Table1 AS T

    Response from iamAkashSingh:

        SELECT REPLACE(LTRIM(REPLACE(col1, '0', ' ')), ' ', '0')
        FROM table1

    Each of the above scripts removes any leading zeros or spaces and displays the number accordingly. If you believe there is a better solution, please leave a comment. I am just glad to see so many varied responses, and all of them teach us something new.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Caching in the .NET Stack: Inside-Out

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/06/28/caching-in-the-.net-stack-inside-out.aspx

    I'm delighted to have my first course published on Pluralsight - Caching in the .NET Stack: Inside-Out. It's a pretty comprehensive look at caching in .NET solutions. The first half covers using local, remote and persistent cache stores inside the solution, including the .NET MemoryCache, NCache Express, AppFabric Caching, memcached, Azure Table Storage and local disk stores. The second half covers caching outside the solution in HTTP clients and proxies, and how to set up ASP.NET WebForms, MVC, Web API and WCF projects to use HTTP validation and expiration caching.

    The course takes a hands-on approach, starting with a distributed solution that has no caching, analysing key points which can benefit from caching, and adding different types of cache. At the end of the course I run through a set of before-and-after performance tests, stressing the solution under load. Without caching and with 60 concurrent users, the page response time maxes out at 18 seconds - with caching, that falls to 2 seconds, so it's a huge improvement from very little effort. I'd be glad to hear feedback if you watch the course, especially if it's as positive as my editor's.
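    As a taster of the in-process option mentioned above, here is a minimal sketch (my own illustration, not taken from the course) of the built-in System.Runtime.Caching.MemoryCache with absolute expiration; it requires a reference to the System.Runtime.Caching assembly (.NET 4 and later):

        using System;
        using System.Runtime.Caching;

        class CacheDemo
        {
            static readonly MemoryCache Cache = MemoryCache.Default;

            static string GetReport(string key)
            {
                // Serve the cached copy when present; otherwise build it and
                // cache it for five minutes so repeat calls skip the slow path.
                string cached = Cache.Get(key) as string;
                if (cached != null) return cached;

                string report = BuildReportSlowly(key);
                Cache.Set(key, report, DateTimeOffset.UtcNow.AddMinutes(5));
                return report;
            }

            static string BuildReportSlowly(string key)
            {
                return "report for " + key; // stand-in for an expensive query
            }

            static void Main()
            {
                Console.WriteLine(GetReport("sales"));
                Console.WriteLine(GetReport("sales")); // second call hits the cache
            }
        }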

    Read the article
