Search Results

Search found 5749 results on 230 pages for 'miles away'.

  • Shared Development Space

    - by PatrickWalker
    Currently the company I work for gives each developer their own development virtual machine. On this machine (Windows 7) they install the entire stack of the product (minus the database). This stack is normally spread amongst multiple machines of differing OSes (although we are moving towards Windows 2008 and 2008 R2). So when a developer has a new project they are likely to be updating only a small piece of their stack, and as such the rest of it can become out of date with the latest production code. The isolation from others means some issues won't be found until the code goes into shared test environments/production. I'm suggesting a move from functional testing on these isolated machines to plugging machines into a shared environment. The goal is to move towards a deployment that's closer to production in mechanism and server type. Developers would still make code changes on their Win7 VM and run unit/component testing locally, but for functional testing they would leverage a shared environment. Does anyone else use a shared development environment like this? Are there many reasons against this sort of sandbox environment? The biggest drawback is a move away from only checking in code when you've done local functional testing to checking in after static testing. I'm hoping an intelligent git branching strategy can take care of this for us.

    Read the article

  • FileNameColumnName property, Flat File Source Adapter : SSIS Nugget

    - by jamiet
    I saw a question on MSDN’s SSIS forum the other day that went something like this: I’m loading data into a table from a flat file, but I want to be able to store the name of that file as well. Is there a way of doing that? I don’t want to come across as disrespecting those who took the time to reply, but there were a few answers along the lines of “loop over the files using a For Each, store the file name in a variable, yadda yadda yadda”, when in fact there is a much, much simpler way of accomplishing this; it just happens to be a little hidden away, as I shall now explain! The Flat File Source Adapter has a property called FileNameColumnName which, for some reason, isn’t exposed through the Flat File Source editor; it is, however, exposed via the Advanced Properties: You’ll see in the screenshot above that I have set FileNameColumnName=“Filename” (it doesn’t matter what name you use; anything except a zero-length string will work). What this will do is create a new column in our dataflow called “Filename” that contains, unsurprisingly, the name of the file from which the row was sourced. All very simple. This is particularly useful if you are extracting data from multiple files using the MultiFlatFile Connection Manager, as it allows you to differentiate between data from each of the files, as you can see in the following screenshot: So there you have it, the FileNameColumnName property; a little-known secret of SSIS. I hope it proves to be useful to someone out there. @Jamiet
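    As a quick illustration of what you end up with downstream, here is a minimal sketch (not from the original post) that queries a staging table after such a load and counts rows per source file. The connection string, table name (dbo.ImportedData) and column names are assumptions for the example:

        using System;
        using System.Data.SqlClient;

        class FilenameColumnDemo
        {
            static void Main()
            {
                // Illustrative only: server, database and table names are made up.
                const string connStr = "Server=.;Database=Staging;Integrated Security=true";
                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand(
                    "SELECT [Filename], COUNT(*) AS RowCnt FROM dbo.ImportedData GROUP BY [Filename]", conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        // Prints one line per source file, e.g. "customers_01.txt: 1250 rows"
                        while (reader.Read())
                            Console.WriteLine("{0}: {1} rows", reader.GetString(0), reader.GetInt32(1));
                    }
                }
            }
        }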

    Read the article

  • SQL Developer at Oracle Open World 2012

    - by thatjeffsmith
    We have a lot going on in San Francisco this fall. One of the most personally exciting bits, for what will be my 4th or 5th Open World, is that this will be my FIRST as a member of Team Oracle. I’ve presented once before, but most years it was just me pressing flesh at the vendor booths. After 3-4 days of standing and talking, you’re ready to just go home and not do anything for a few weeks. This time I’ll have a chance to walk around and talk with our users and get a good idea of what’s working and what’s not. Of course it will be a great opportunity for you to find us and get to know your SQL Developer team! (Photo caption: 3.4 miles across and back – thanks, Ashley, for signing me up for the run!) This year is going to be a bit crazy. Work-wise I’ll be presenting twice, working a booth, and proctoring several of our Hands-On Labs. The fun parts will be equally crazy though – running across the Bay Bridge (I don’t run), swimming the Bay (I don’t swim), having my wife fly out on Wednesday for the concert, and then our first WhiskyFest on Friday (I do drink whisky though.) But back to work – let’s talk about EVERYTHING you can expect from the SQL Developer team.

    Booth Hours
    We’ll have 2 ‘demo pods’ in the Exhibition Hall over at Moscone South. Look for the farm of Oracle booths; we’ll be there under the signs that say ‘SQL Developer.’ There will be several people on hand, mostly developers (yes, they still count as people), who can answer your questions or demo the latest features. Come by and say ‘Hi!’, and let us know what you like and what you think we can do better. Seriously.
    - Monday 10AM – 6PM
    - Tuesday 9:45AM – 6PM
    - Wednesday 9:45AM – 4PM

    Presentations
    Stop by for an hour, pull up a chair, sit back and soak in all the SQL Developer goodness. You’ll only have to suffer my bad jokes for two of the presentations, so please at least try to come to the other ones. We’ll be talking about data modeling, migrations, source control, and new features in versions 3.1 and 3.2 of SQL Developer and SQL Developer Data Modeler.
    - Monday 10:45 – What’s New in SQL Developer
    - Monday 4:45 – Why Move to Oracle Application Express Listener
    - Tuesday 10:15 – Using Subversion in Oracle SQL Developer Data Modeler
    - Tuesday 11:45 – Oracle SQL Developer Tips & Tricks
    - Tuesday 5:00 – Database Design with Oracle SQL Developer Data Modeler
    - Wednesday 11:45 – Migrating Third-Party Databases and Applications to Oracle Exadata
    - Wednesday 3:30 – 11g Enterprise Options and Management Packs for Developers

    Hands-On Labs (HOLs)
    The Hands-On Labs allow you to come into a classroom environment, sit down at a computer, and run through some exercises. We’ll provide the hardware, software, and training materials. It’s self-paced, but we’ll have several helpers walking around to answer questions and chat up any SQL Developer or database topic that comes to mind. If your employer is sending you to Open World for all that great training, the HOLs are a great opportunity to capitalize on that. They are only 60 minutes each, so you don’t have to worry about burning out. And there’s no homework! Of course, if you do want to take the labs home with you, many are already available via the Developer Day Hands-On Database Applications Developer Lab. You will need your own computer for those, but we’ll take care of the rest.
    - Wednesday 10:15 – PL/SQL Development and Unit Testing with Oracle SQL Developer
    - Wednesday 11:45 – Performance Tuning with Oracle SQL Developer
    - Thursday 11:15 – The Soup to Nuts of Data Modeling with Oracle SQL Developer Data Modeler

    Some Parting Advice
    Always wanted to meet your favorite Oracle authors, speakers, and thought-leaders? Don’t be shy; walk right up to them and introduce yourself. Normal social rules still apply, but at the conference everyone is open and up for meeting and talking with attendees. Just understand, if there’s a line, that you might only get a minute or two. It’s a LONG conference though, so you’ll have plenty of time to catch up with everyone. If you’re going to be around on Tuesday evening, head on over to the OTN Lounge from 4:30 to 6:30 and hang out for our Tweet Meet. That’s right, all the Oracle nerds on Twitter will be there in one place. Be sure to put your Twitter handle on your name tag so we know who you are!

    Read the article

  • SQLAuthority News – Don’t Be Afraid To Fool The World – Video by John Sonmez

    - by Pinal Dave
    Sometimes certain words and statements grab your attention, and it is hard to stop thinking about them afterwards. Something similar happened a few days ago when I read a tweet from my friend and Pluralsight author John Sonmez. A few days ago he tweeted a very interesting statement: “I don’t know a single successful person, who doesn’t deep down think that have the world fooled. #fooltheworld” by John Sonmez. When I read it, I was extremely intrigued by this statement. I read it many times, I shared it with my family, and I just could not stop interpreting it. It was indeed fun to read it again and again, and there are so many different meanings one can take away from it. I know John very well; he is a wonderful person and has very positive energy for life. I just had to request him to build a video around it. Within 5 days of my request, John created a wonderful video around this subject. I watched it multiple times. I am not going to write much about what is in the video, as I suggest you watch it yourself. Here is one of the personal stories I want to share which is absolutely relevant to this video. I think my story resonates 100% with John’s story.

    A Real Story from My Past
    Three years ago, I submitted a session at a SharePoint conference as a SQL Server session. My session was accepted and I prepared it very well. I put more than 2 months of time into preparing for the session and I was very excited to present it. I traveled thousands of miles to reach the event venue. However, there was a little mix-up with the sessions. There were multiple sessions with titles similar to mine. One of the other speakers had also proposed a database-related session, and it was selected. When the material went to print, the printing team got confused and by mistake swapped the sessions. The other speaker got the Performance with SQL Server session and I received the Performance with SharePoint session. It was indeed a big mix-up, but that is how it appeared in the event guide, and everything at the event was marketed the same way.

    A Big Mix-Up
    I talked with the event organizer and we came to the conclusion that we all had good intentions but things just got mixed up, and now it was time for “the show must go on”. I had a great amount of hesitation about going up to present the session, as I had personally never worked with SharePoint that closely in my life, and my session abstract talked about SharePoint tricks in depth. Two hours before the session I took the help of one of my friends and installed SharePoint on my box. He showed me a few things here and there, but it was never enough time to learn everything I wanted to learn.

    The Moments of Confidence
    I was very scared and nervous about going on the stage, as SharePoint was not something I felt comfortable with. However, I decided to go on stage with confidence, as a SharePoint expert. Though I did not know SharePoint at my best, I had confidence that whatever I knew was correct and I would not misguide people. I had no intention of fooling people, but I also had no intention of admitting that I was a fool and that everyone had wasted their time and money to attend my session. I decided to be honest, but at the same time I decided to take the session beyond my expertise. The sixty minutes of the session went very well and I was able to manage all the difficult questions at a satisfactory level.

    When the session was over, my feeling was that I would not have presented or talked any differently if I had had more knowledge of SharePoint at that time. I think it was one of my best sessions, and that was reflected in the session feedback as well. I was the best-rated speaker across all the tracks and my session had the highest ranking. I was delighted, and I learned a very valuable lesson: I must go beyond my limits and knowledge. I must aim higher and work harder. I should not lie, but I should have confidence that I have a good heart and that I put 100% into my efforts.

    Lessons Learned
    Since this incident I have learned a lot about SharePoint, and I am now a regular speaker at various SharePoint conferences along with my SQL Server sessions. I am motivated and I am not afraid. I know people have high expectations of me, but I have learned not to judge myself before I do my best. I leave the judgment of my efforts to my audience. I do not take the burden of the feedback on me, even though I know what my audience expects from me. I know what I know, and I put in my best. I must go out; if I fail, I learn from my mistakes, but I must keep my progress trajectory very high. As John said in the video, sometimes success is not something we can achieve 100%, but we can keep getting closer to it. As long as we do not lose focus on our goal and do not deviate from our path, we are doing things right. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • SSO Configuration MMC Snap-in

    - by Christopher House
    This may be old news to most people, but I've been away from BizTalk for about a year, so this was a welcome development for me.  The other day, I was discussing with my client the various options for storing configuration data required by our project.  I brought up SSO, as it's something I've used with success on previous projects.  The client hadn't previously used SSO and was concerned about the maintainability of configuration stored in SSO.  I offered to do a quick POC to demonstrate storing/retrieving/maintaining configuration via SSO.  As I set about creating the POC, I needed to download Richard Seroter's SSO configuration tool, since that's what I've used previously for managing SSO data.  I went to Google to track it down and was pleasantly surprised to discover that Microsoft has finally released an MMC snap-in for maintaining SSO applications. The download contains three components.  The first is the MMC snap-in, which allows you to create/delete applications as well as name/value pairs within an application.  Next is a C# class file, SSOConfigHelper.cs, which can be used to retrieve values from an SSO application.  Finally, there's an MSBuild task that allows you to deploy SSO application data with your builds. I didn't see any information as to which versions are supported; I'm using it in a BizTalk 2009 environment and it seems to work quite nicely.  The download package is available here.
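    To give a feel for the SSOConfigHelper.cs piece, here is a minimal sketch of reading a configuration value with it. Treat the exact signature as an assumption (the helper in the download exposes a static Read method taking an application name and property name, if memory serves), and the application/property names below are made up:

        using System;

        class SsoReadDemo
        {
            static void Main()
            {
                // Minimal sketch: assumes SSOConfigHelper.cs from the download is
                // compiled into this project. "MyApp.Config" and "BatchSize" are
                // made-up example names, not anything the snap-in ships with.
                string batchSize = SSOConfigHelper.Read("MyApp.Config", "BatchSize");
                Console.WriteLine("BatchSize from SSO: " + batchSize);
            }
        }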

    Read the article

  • Problem resolving many of the Web Pages

    - by Aditya
    I am presently running Ubuntu 12.04 and using Chrome/Firefox along with OpenDNS (I have tried Google Public DNS as well as my ISP's DNS). Suddenly, a lot of websites that I visit frequently don't load anymore. Some of them are imgur, Yahoo, fed-sudoku, Microsoft, and the add-ons page of Firefox. I am sure there are many more that won't load. I have Windows 7 in dual-boot and there are no problems whatsoever opening these pages on Windows. A little history: 2 weeks back I installed Ubuntu 12.10. I faced this issue right away on Ubuntu 12.10. I thought something must have gone wrong with the installation, so I removed Ubuntu 12.10 and instead installed Lubuntu 12.10. But the same issue persisted on Lubuntu as well. So, I tried opening these webpages in live environments (of Ubuntu 12.10, Lubuntu 12.10 and Ubuntu 12.04.1) from USB. The issue was there for Ubuntu 12.10 and Lubuntu 12.10. However, I was able to access these webpages from Ubuntu 12.04.1. So, I installed 12.04.1 on my hard disk. Everything on 12.04 was fine till yesterday; but suddenly, these sites don't load anymore. All this while, Windows 7 in dual-boot works flawlessly. Please help me resolve this issue.

    Read the article

  • Tykie

    - by Brian
    Here’s the obituary my mother wrote for Tykie; I still miss the little guy quite a bit. Anyone who’s interested in further information on hearing dogs should check out the IHDI website. I cannot begin to express how helpful a hearing dog can be for the hearing impaired. If you feel so inclined, please make a donation. In Memoriam, Tykie 1993-2010 The American Legion Post 401, South Wichita, KS, supported one of its members, its commander, by sponsoring a service dog for him. Unlike most service dogs, this one was for the hearing impaired. Both Ocie and Betty Sims had hearing loss – Ocie more than Betty. The Post and Auxiliary had garage sales, auctions and other fund-raising endeavors to get donations for the dog. Betty made teddy bears with growlers that were auctioned for donations to bring a hearing dog from International Hearing Dog, Henderson, Colorado. Tykie, a small, wiry, salt-and-pepper terrier, arrived September 1, 1994 to begin his work, which included attending Post 401 meetings and celebrations as well as raising more money to be donated to IHD to help others have hearing dogs. Tykie was a young dog, less than a year old, when he came to Wichita. He was always anxious to please and seldom barked, though he did put out a kind of cry when he was giving his urgent announcement that someone was at the door or the telephone was ringing. He also enjoyed chasing squirrels in the backyard garden that Ocie prized. In 1995, Betty almost died of a lung infection. Tykie was at the hospital with Ocie when he could visit. Several weeks after she was able to come home, after a miraculous recovery, Tykie and Ocie went to a car show in downtown Wichita. Ocie’s retina tore loose in the only eye he could see out of, and, almost blind, he was in great pain. How Ocie and Tykie got home is still a mystery, but the family legend goes that Tykie added seeing-eye dog to his repertoire and helped drive him home. Health problems continued for Ocie, and when he was placed in a nursing home, Tykie was moved to be Betty’s hearing dog. No problem for Tykie; he still saw his friends at the Post and continued to help with visitors at the door. The night of May 3, 1999, Betty and Tykie were in the bedroom watching TV when Tykie began hitting her with both front paws as he would if something were urgent. She said later she thought he wanted to go out. As she and the dog walked down the hall towards the back of the house, Tykie hit her again with his front paws with such urgency that she fell into a small coat closet. That small 2-by-2 closet became their refuge as that very second the roof of her house came off as the F4 tornado raced through the city. Betty acquired one small wound on her hand from a piece of flying glass as she pulled Tykie into the closet with her. Tykie was a hero that day and a lot of days after. He kept Betty going as she rebuilt her home and after her husband died April 15, 2000. Tykie had to be cared for, so she had to take him outside and bring him inside. He attended weddings of grandchildren and funerals of Post friends. When Betty died February 17, 2002, Tykie’s life changed again. IHD gave approval for his transfer and retirement to Betty and Ocie’s grandson, Brian Laird, who has a hearing loss similar to his grandfather’s. A few days after the funeral Tykie flew to his new home in Rutherford, NJ, where he was able to take long walks for a couple of years before moving back to the Kansas City area. He was still full of adventure.
    He was written up in a book about service dogs, where his story of the tornado and his picture appeared. He spent weekends at Brian’s mother’s farm to get muddy and be afraid of cats and chickens. He also took on an odyssey when he slipped from his fenced yard in Lenexa one day and walked more than seven miles in Overland Park traffic before being found by a good Samaritan who called IHD to find out where he belonged. Tykie was deaf for about the last two years of his long life and became blind as well, but he continued to strive to please. Tykie was 16 years and 4 months old when he was cremated. His ashes were scattered on the graves of Betty and Ocie Sims at Greenwood Cemetery west of Wichita on the afternoon of March 21, 2010, with about a dozen family and Post 401 members present. It is still the rule: service dogs are the only dogs allowed inside the Post home. Submitted by Linda Laird, daughter of Betty and Ocie and mother of Brian Laird.

    Read the article

  • Techie Land Silly Questions

    - by GeekAgilistMercenary
    Ok, it is time for an off the cuff, random, oddball, just for fun blog entry.  Two questions for the readers in Internet Land. Question #1:  If you did not have to work, had a few dollars stashed away so that you could live comfortably and do whatever you wanted, what would you do?  Would you still code?  Would you still create?  What would you create?  Would you be able to stay idle? Question #2:  Based on whatever you did with your free time, what would you title yourself?  Chief Potato Masher, Pencil Pushing Writer o’ Stories, or Coffee Endeavorer o’ Tastiness? There are a million possibilities, I would love to know what you would call yourself, so please do leave a comment or three. I will have my answers later in the week.  So stay tuned and help me out with some comments.  You can bet it will include something along the lines of what I already do, but I'll keep it a secret until then.  : )  Feel free to check out the original entry here to leave a comment.

    Read the article

  • Cannot connect my Ubuntu to TV with HDMI

    - by Hannes Johannes
    Another problem on my way into the Ubuntu world.. Trying to make things work the Linux way, day 4, problem #5321: I have a Compaq laptop with an Nvidia graphics adapter. I also have a Panasonic TV, which I thought I could use as a second screen just like I used to when I was still using Windows Vista with my laptop. Plug the screen into the comp and voilà, I can watch my videos on the big screen. And a boll***s I can, not with Ubuntu anyway. Nvidia X Server Settings (or whatever it's called) does recognize my TV. It has a very badly designed GUI, as I can never be sure whether it has saved my changes or not (most often not), but I reckon I've got that one sorted out. I haven't found a way to set a primary monitor there, but it does see there are two monitors, my laptop and the TV. So far so good. But then? Ubuntu's own monitor settings tool only sees the laptop screen. If I reboot (with the HDMI cable connected) I end up with a black screen. Then I have to take the cable away, cut the power off and restart the comp.. Please help, any help would be appreciated! I would so much love to like this Linux more than Windows, but it surely takes a lot of trying...

    Read the article

  • Free .NET Training at DevCares in Dallas...

    - by [email protected]
    Come take an early look at the debugging experience in VS 2010 this Friday (3/25/2010) at TekFocus in Dallas, at the InfoMart, at 9 AM. In this session, we’ll:
    - Dive deep into the new IntelliTrace (formerly historical debugging) feature, which enables you to step back in time within your debugging session and inspect or re-execute code, without having to restart your application
    - See how to manage large numbers of breakpoints with labeling, searching and filtering
    - Extend “data tips” by adding comments and notes and strategically “pinning” these resources to maintain their visibility throughout your session
    - Demonstrate “collaborative debugging,” by debugging a portion of an application and then exporting breakpoints and labeled data tips, so that others can leverage your effort without having to start over
    - Leverage these new debugging features in applications built on earlier versions of the .NET Framework through the multi-targeting features available in VS 2010
    You’ll walk away with a clear understanding of how you can use this upcoming technology to vastly increase your productivity and build better software. Register to attend ==> http://www.dallasdevcares.com/upcoming-sessions/ DevCares is a monthly series of FREE half-day events sponsored by TekFocus and Microsoft. Targeted specifically at developers, the content is presented by experts on a variety of .NET topics. These briefings include expert testimonials, working demos and sample code designed to help you get the most out of application development with .NET. Events are held on the last Friday of each month at the TekFocus offices in the InfoMart near downtown Dallas. TekFocus is a full-service technology training provider with a core business delivering Microsoft-certified technical training and product skills enhancements to customers worldwide.

    Read the article

  • CloudBerry Online Backup 1.5 for Windows Home Server

    - by The Geek
    Overview
    CloudBerry Online Backup version 1.5 is a front-end application for Amazon S3 storage for backing up your Windows Home Server data. It makes backing up your essential data to Amazon S3 an easy process in the event disaster strikes.

    Installation
    You install the CloudBerry add-in as you do any add-in for Windows Home Server. On a PC on your network, browse to the shared folders on your server, open the Add-Ins folder, and copy over WHS_CloudBerryOnlineBackupSetup_v1.5.0.81S3o.msi (link below), then close out of the folder. Next launch the Windows Home Server Console, click Settings, then Add-Ins. Click on the Available tab and click the Install button. It installs very quickly, and when you get the Installation Succeeded dialog click OK. You will lose your connection through the Console; just click OK, then reconnect. After reconnecting, you’ll see CloudBerry Backup has been installed, and you can begin using it. You can set up a backup plan right away or find out what’s new in version 1.5.

    Amazon S3 Account
    If you don’t already have an Amazon S3 account, you’ll be prompted to create a new one. Click on the Create an account hyperlink, which takes you to the Amazon S3 page where you can sign up. After reviewing the functionality of Amazon S3, click on the Sign Up for Amazon S3 button. Enter your contact information and accept the Amazon Web Services Customer Agreement. You’re then shown their pricing for storage plans. The amount of storage space you use will depend on your needs. It’s relatively cheap for smaller amounts of data. Just keep in mind that the more data you store and download, the more S3 is going to cost. Note: Amazon S3 is introducing Reduced Redundancy Storage, which will lower the cost of the data stored on S3. CloudBerry 1.5 will support this new feature. You can find out more about this new pricing structure. Note: Keep in mind that after you first sign up for an Amazon S3 account, it can take up to 24 hours to be authorized. In fact, you may want to sign up for the S3 account before installing the add-in. After you sign up for your S3 account, you’ll be given access credentials, which you can enter in, and create a storage bucket name.

    Features & Use
    CloudBerry is wizard-driven, straightforward and easy to use. Here we take a look at creating a backup plan. To begin, click on the Setup Backup Plan button to kick off the wizard. Select your backup mode based on the number of features you want. In our example we’re going to select Advanced Mode, as it offers more features than Simple Mode. Select your backup storage account or create a new one. You can select a default account by checking Use currently selected account as default. Now you can go through and select the files and folders you want to back up from your home server. Check the box Show physical drives to get a wider selection of files and folders. This also allows you to back up files from your data drive as well. It has full support for Drive Extender, so you can back up your shares too. The cool thing about CloudBerry is that it allows you to drill down to specific files and folders, unlike other WHS backup utilities. Next you can use advanced filters to specify files and/or folders to skip if you want. There are compression and encryption options as well. These will save storage space and bandwidth, and keep your data secure. Purge Options allow you to customize options for getting rid of older files. You can also select the option to delete files from the S3 service that have been deleted locally. Be careful with this option, however, as you won’t be able to restore files if you delete them locally. You have some nice scheduling options, from running backups manually, to a specific date and time, to recurring daily, weekly or monthly. Receive email notifications in all cases or only when a backup fails. This is a good option so you know whether things were successful or something failed and you need to back it up manually. Email notifications… Give your plan a name… Then if the summary page looks good you can continue, or still go back at this point if something doesn’t look correct and needs adjusting. That’s it! You’re ready to go, and you have an option to start your first backup right away. After you’ve created a backup plan, you can go in and edit, delete, view history, or restore files.

    Restoring Files using CloudBerry
    To restore data from your backups, kick off the Restore Wizard and select the backup to restore from. You can select the last backup, a specific point in time, or manually browse through the files. Browse through the directory and select the files you need to restore. Choose the destination to restore the files to. You can select the original location, a specific location, whether to overwrite existing files, or set the location as the default for future restores. If the files are encrypted, enter the correct passwords. If the summary looks good, click on Next to start the restore process. You’ll be shown a progress bar at the bottom of the screen while the files are restored. After the process has completed, close out of the Restore Wizard. In this example we restored a couple of music files to the desktop of Windows Home Server… but as shown above you can save them to the original location, other network locations, or WHS shared folders. This can make it a lot easier to keep track of files you’ve restored. You can also access different options for CloudBerry by clicking Settings in the WHS Console, then CloudBerry Backup. Here you can set up a new storage account, check for updates, change app options and diagnostics, and send feedback. Under Options there are several settings you can tweak to get the best experience for your WHS backups.

    CloudBerry Web Interface
    Another nice feature is the CloudBerry Web Interface, which lets you access your data from anywhere you have an Internet connection. To check it out in the WHS Console, click on the Backup Web Interface link… you’ll probably want to bookmark the link in your favorite browser. Note: This feature is still in beta, and at the time of this review the Web Interface wasn’t up and running, so we weren’t able to test it out.

    Performance
    The CloudBerry app works very well through the Windows Home Server Console. The amount of time it takes to back up or restore your data will depend on the speed of your Internet connection and the size of the files. In our tests, backing up 1GB of data to the Amazon S3 account took around an hour, but we were running it on a DSL line with limited upload speeds, so your mileage will vary.

    Product Support
    In our experience, the team at CloudBerry offered great support in a timely manner when we contacted them. You can fill out a help request through a form on their website, and they also have a community forum.

    Conclusion
    We were very pleased with CloudBerry Online Backup for WHS. Its wizard-driven interface makes it extremely easy to use, and it offers comprehensive backup choices for your Amazon S3 account. CloudBerry will only back up files that have been modified, so if files haven’t been changed, they won’t be backed up again. They offer a free 15-day trial, and a full license is $29.99 after that. Once you buy the app you own it, and charges to your S3 account will vary depending on the amount of data you upload. If you’re looking for an effective and easy-to-use front-end application to back up your Windows Home Server data to your Amazon S3 account, CloudBerry is a recommended affordable choice.

    Download CloudBerry for Windows Home Server
    Sign Up For Amazon S3 Account

    Rating
    - Installation: 9
    - Ease of Use: 8
    - Features: 8
    - Performance: 8
    - Product Support: 8
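    Since charges depend on what you upload, it can help to picture what each backup operation amounts to. Purely as an illustration (this is not CloudBerry's code), here is a minimal sketch of a single-file upload using the AWS SDK for .NET; the bucket name, key, and local path are made up:

        using System;
        using System.Threading.Tasks;
        using Amazon.S3;
        using Amazon.S3.Model;

        class S3BackupSketch
        {
            static async Task Main()
            {
                // Credentials come from the standard AWS SDK credential chain.
                // Bucket, key, and local file path below are made-up examples.
                using (var s3 = new AmazonS3Client())
                {
                    var response = await s3.PutObjectAsync(new PutObjectRequest
                    {
                        BucketName = "my-whs-backup",
                        Key = "shares/Music/song.mp3",
                        FilePath = @"D:\shares\Music\song.mp3"
                    });
                    Console.WriteLine("Uploaded, HTTP status: " + response.HttpStatusCode);
                }
            }
        }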

    Read the article

  • Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E

    - by Sebastian Bugiu
    I had Ubuntu 11.10 and in the last few weeks I experienced an obscure problem: after the computer had been running for a few days I could no longer connect to google.com or anything related to Google. All sites worked in all browsers (Firefox, Chrome, Opera) except Google's. A connection attempt remained in the connecting phase for a few minutes and either timed out or finally connected with this huge delay. Even if I visited other sites, such as this one, if the page had anything to do with Google, such as AdSense or gstatic or whatever with a g in it, that site took a long time to load, waiting on connecting to gstatic.com. Anything Google-related took minutes to work, but everything else worked instantly! I tried rebooting or using another machine (with Windows on it) and that worked, so it's not network related. But after a few days it started not working again... So I upgraded to the Precise Pangolin, hoping this behavior would go away. It didn't! After a few days I get the same behavior as in 11.10. What am I supposed to do? Reboot every other day? I didn't have this problem with either 10.10 or 11.04. I found the Realtek RTL8168/8111E issue with the r8169 driver, but this is not exactly the same card, so trying r8168 probably won't help.

        Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 02)
            Subsystem: Toshiba America Info Systems Device ff1c
            Flags: bus master, fast devsel, latency 0, IRQ 44
            I/O ports at 4000 [size=256]
            Memory at d0010000 (64-bit, prefetchable) [size=4K]
            Memory at d0000000 (64-bit, prefetchable) [size=64K]
            Capabilities: [40] Power Management version 7
            Capabilities: [50] MSI: Enable+ Count=1/1 Maskable- 64bit+
            Capabilities: [70] Express Endpoint, MSI 01
            Capabilities: [ac] MSI-X: Enable- Count=2 Masked-
            Capabilities: [cc] Vital Product Data
            Capabilities: [100] Advanced Error Reporting
            Capabilities: [140] Virtual Channel
            Capabilities: [160] Device Serial Number 09-00-00-00-ff-ff-00-00
            Kernel driver in use: r8169
            Kernel modules: r8169

    Read the article

  • Jailbreak Your Kindle for Dead Simple Screensaver Customization

    - by Jason Fitzpatrick
    If you’re less than delighted with the default screensaver pack on the Kindle, relief is just a simple hack and a reboot away. Read on to learn how to apply a painless jailbreak to your Kindle and create custom screensavers. Unlike jailbreaking other devices like the iPad and Android devices—which usually includes deep mucking about in the guts of your device and the potential, however remote, for catastrophic bricking—jailbreaking the Kindle is not only extremely safe, but Amazon, by releasing the Kindle source code, has practically approved the process with a wink and a nod. Installing the jailbreak and the screensaver hack to replace the default screensavers is so simple we promise you’ll spend 1000% more time messing around making fun screensaver images than you will actually installing the hack. The default screensaver pack for the Amazon Kindle is a collection of 23 images that include portraits of famous authors, woodcarvings from centuries past, blueprints, book reliefs, and other suitably literature-oriented subjects. If you’re not a big fan of the pack—and we don’t blame you if, despite Emily Dickinson being your favorite single lady, you want to mix things up—it’s extremely simple to replace the default screensaver pack with as many custom images as your Kindle can hold. This hack works on every Kindle except the first generation; we’ll be demonstrating it on the brand new Kindle 3, with accompanying notes to direct users with older Kindles.

    Read the article

  • They Wrote The Book On It

    - by steve.diamond
    First of all, an apology to you all for my not posting this yesterday, when I should have. For those of you bloggers out there, you know the difference between "Save" and "Preview." But I temporarily forgot it. Nevertheless, while I'm not impressed with this mishap, I'm blown away by the initiative three of my colleagues have taken. Jeff Saenger, Tim Koehler, and Louis Peters, recently wrote a book, "Oracle CRM On Demand Deployment Guide." Not only that, they got this book PUBLISHED. These guys know their stuff. They have worked in the CRM industry for many years. And trust me, they command a lot of respect inside this organization. In the words of Louis Peters (who posted this verbiage yesterday on LinkedIn), "We've assembled all the best practices and lessons learned over the past six years working with CRM On Demand. The book covers a range of topics - working with SaaS-based applications, planning and executing a successful rollout, designing elegant and high-performing applications, and working effectively with Oracle. We even included several sample designs based on successful real-world deployments. Our main target audience is the CRM On Demand project team - sponsors, project managers, administrators, developers - really anyone planning, implementing or maintaining the application." Now these guys don't know it, but I'll be interviewing one of them and including audio excerpts of that conversation right here next Wednesday. In the meantime, if you want to learn more about successful CRM deployments in general, and working with Oracle CRM On Demand in particular, you should check out this book.

    Read the article

  • Pro BizTalk 2009

    - by Sean Feldman
    I have finished reading the Pro BizTalk 2009 book from Apress. This is a great book if you've never dealt with BizTalk in the past and want a quick "on-ramp". Although the book is mostly concerned with the right way of building traditional BizTalk applications, it also dedicates a chapter to the ESB Toolkit and does a good job of analyzing it. The fact that the authors were concerned with subjects such as coupling, hard-coding, automation, etc. makes it very interesting. One warning: if you are expecting a book that will guide you through step-by-step examples, forget it. This book doesn't do that, nor is it possible, due to the multiple errata found in it. I was really disappointed by the number of typos, inaccuracies, and technical mistakes. These kinds of things turn readers away, especially readers who have had no experience with BizTalk in the past. But this is the only bad thing about it. As for the rest – great content. Next week I am taking a deep-dive course on BizTalk 2009. I really feel that this book has helped me a lot to get my feet wet. The next target will be the ESB Toolkit. To get to that, I will follow what the book's authors wrote: "you have to understand how BizTalk as an engine works".

    Read the article

  • Manually writing a dx11 tessellation shader

    - by Tudor
    I am looking for resources on the steps of manually implementing tessellation (I'm using Unity cg). Today it seems to be all the rage to hide most of the GPU code far away and use rather rigid simplifications such as Unity's SURFace shaders, which seem useless unless you're doing superficial stuff. A little background: I have procedurally generated meshes (using marching cubes) which have quality normals but no UVs and no tangents. I have successfully written a custom vertex and fragment shader to do triplanar texture and bumpmap projection as well as some custom stuff (custom lighting, procedurally warping the texture for variation, etc). I am using the GPU Gems book as reference. Now I need to implement tessellation, but it seems I must calculate the tangents at runtime by swizzling normals (Ctrl+F this in Gems: <normal.z, normal.y, -normal.x>) before the tessellator gets them. And I also need to keep my custom vert+frag setup (with my custom parameters/textures being passed between them) - so apparently I cannot use surface shaders. Can anyone provide some guidance?
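    For what it's worth, the swizzle mentioned above is a one-liner. Here is a minimal CPU-side C# sketch of it, purely illustrative (in the GPU Gems approach the same swizzle runs per vertex in the shader, and the helper name here is made up):

        using System.Numerics;

        static class TangentHelper
        {
            // The swizzle quoted above from GPU Gems: tangent = (n.z, n.y, -n.x).
            // Note it is only an approximation of a true tangent (it is not exactly
            // perpendicular to the normal unless n.y is 0), but it is a common
            // fallback when a mesh has no UVs to derive real tangents from.
            public static Vector3 TangentFromNormal(Vector3 n)
            {
                return new Vector3(n.Z, n.Y, -n.X);
            }
        }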

    Read the article

  • Differences in cg shader code for OpenGL vs. for DirectX?

    - by Cray
    I have been trying to use an existing library that automatically generates shaders (the Hydrax plugin for Ogre3D). These shaders are used to render water and are somewhat involved, but not extremely complicated. However, there seem to be some differences in how the cg shaders are handled by OpenGL and DirectX; more specifically, I am pretty sure that the author of the library has only debugged the shaders for DirectX, where they work flawlessly, but not so in OpenGL. There are no compiler errors, but the result just doesn't look the same. (And I have to run the library in OpenGL.) Isn't cg supposed to be a language that can freely use the exact same code on both platforms? Are there any specific known caveats one should know about when using the same code for them? Are there any fast ways to find which parts of the code work differently? (I am pretty sure that the shaders are the problem. Otherwise Ogre3D has great support for both platforms, and everything is abstracted away nicely. Other shaders work in OpenGL, etc...)

    Read the article

  • So long and thanks for the fish…

    - by Geoff N. Hiten
    This marks my last post as a SQLPASS Board member.  I learned a lot during my year of service and I thank everyone involved for this opportunity.  I would especially like to thank the Chapter leaders and the Regional Mentors for Virtual Chapters, who (mostly) patiently taught me about Virtual Chapters.  I hope the changes I put in place will help strengthen and grow VCs and PASS going forward.  I would also like to thank everyone who encouraged me to reach beyond my comfort zone and accept a leadership position within the PASS organization.  My overall principle was to be a good steward of the PASS community.  Could I have done more?  Always. Did I do enough?  I hope so.  But PASS is a volunteer organization and my time, like yours, is limited.  I have other obligations in life that supersede PASS.  Now I have more time for some of those.  I won’t be going away or leaving the SQL community.  I will still contribute to the community and support PASS, just in a different role.  Time to let somebody else enjoy the hot seat for a while. Finally, everyone who voted (not just for me) deserves thanks.  More voters and more engaged voters, strong candidates, and a vigorous debate were all I wanted out of declaring as a candidate last year. This year the SQL community got exactly that. Thank you.

    Read the article

  • Creating a Dynamic DataRow for easier DataRow Syntax

    - by Rick Strahl
    I've been thrown back into an older project that uses DataSets and DataRows as their entity storage model. I have several applications internally that I still maintain that run just fine (and I sometimes wonder if this wasn't easier than all this ORM crap we deal with in 'newer' improved technology today - but I digress) but use this older code. For the most part DataSets/DataTables/DataRows are abstracted away in a pseudo entity model, but in some situations, like queries, DataTables and DataRows are still surfaced to the business layer. Here's an example: a business object method that runs a dynamic query, where the code ends up looping over the result set using the ugly DataRow array syntax:

        public int UpdateAllSafeTitles()
        {
            int result = this.Execute("select pk, title, safetitle from " + Tablename +
                                      " where EntryType=1", "TPks");
            if (result < 0)
                return result;

            result = 0;
            foreach (DataRow row in this.DataSet.Tables["TPks"].Rows)
            {
                string title = row["title"] as string;
                string safeTitle = row["safeTitle"] as string;
                int pk = (int)row["pk"];

                string newSafeTitle = this.GetSafeTitle(title);
                if (newSafeTitle != safeTitle)
                {
                    this.ExecuteNonQuery("update " + this.Tablename +
                                         " set safeTitle=@safeTitle where pk=@pk",
                        this.CreateParameter("@safeTitle", newSafeTitle),
                        this.CreateParameter("@pk", pk));
                    result++;
                }
            }
            return result;
        }

    The problem with looping over DataRow objects is twofold: the array syntax is tedious to type and not very clear to look at, and explicit casting is required in order to do anything useful with the values. I've highlighted the place where this matters. Using the DynamicDataRow class I'll show in a minute, this code can be changed to look like this:

        public int UpdateAllSafeTitles()
        {
            int result = this.Execute("select pk, title, safetitle from " + Tablename +
                                      " where EntryType=1", "TPks");
            if (result < 0)
                return result;

            result = 0;
            foreach (DataRow row in this.DataSet.Tables["TPks"].Rows)
            {
                dynamic entry = new DynamicDataRow(row);

                string newSafeTitle = this.GetSafeTitle(entry.title);
                if (newSafeTitle != entry.safeTitle)
                {
                    this.ExecuteNonQuery("update " + this.Tablename +
                                         " set safeTitle=@safeTitle where pk=@pk",
                        this.CreateParameter("@safeTitle", newSafeTitle),
                        this.CreateParameter("@pk", entry.pk));
                    result++;
                }
            }
            return result;
        }

    The code looks a bit more natural and describes what's happening a little more clearly as well. Using the new dynamic features in .NET it's actually quite easy to implement the DynamicDataRow class.

    Creating your own custom Dynamic Objects
    .NET 4.0 introduced the Dynamic Language Runtime (DLR) and opened up a whole bunch of new capabilities for .NET applications. The dynamic type is an easy way to avoid Reflection and directly access members of 'dynamic' or 'late bound' objects at runtime. There's a lot of very subtle but extremely useful stuff that dynamic does (especially for COM interop scenarios), but in its simplest form it often allows you to do away with manual Reflection at runtime. In addition, you can create DynamicObject implementations that can perform custom interception of member accesses and so allow you to provide more natural access to more complex or awkward data structures like the DataRow that I use as an example here. Basically you can subclass DynamicObject and then implement a few methods (TryGetMember, TrySetMember, TryInvokeMember) to provide the ability to return dynamic results from just about any data structure using simple property/method access.
    In the code above, I created a custom DynamicDataRow class which inherits from DynamicObject and implements only TryGetMember and TrySetMember. Here's what the simple class looks like:

        /// <summary>
        /// This class provides an easy way to turn a DataRow
        /// into a dynamic object that supports direct property
        /// access to the DataRow fields.
        ///
        /// The class also automatically fixes up DbNull values
        /// (DbNull becomes .NET null, and null becomes DbNull on the DataRow)
        /// </summary>
        public class DynamicDataRow : DynamicObject
        {
            /// <summary>
            /// Instance of the DataRow passed in
            /// </summary>
            DataRow DataRow;

            /// <summary>
            /// Pass in a DataRow to work off
            /// </summary>
            /// <param name="dataRow"></param>
            public DynamicDataRow(DataRow dataRow)
            {
                DataRow = dataRow;
            }

            /// <summary>
            /// Returns a value from the DataRow items array.
            /// If the field doesn't exist null is returned.
            /// DbNull values are turned into .NET nulls.
            /// </summary>
            /// <param name="binder"></param>
            /// <param name="result"></param>
            /// <returns></returns>
            public override bool TryGetMember(GetMemberBinder binder, out object result)
            {
                result = null;
                try
                {
                    result = DataRow[binder.Name];
                    if (result == DBNull.Value)
                        result = null;
                    return true;
                }
                catch { }
                result = null;
                return false;
            }

            /// <summary>
            /// Property setter implementation writes the value into the
            /// underlying DataRow, converting null to DbNull
            /// </summary>
            /// <param name="binder"></param>
            /// <param name="value"></param>
            /// <returns></returns>
            public override bool TrySetMember(SetMemberBinder binder, object value)
            {
                try
                {
                    if (value == null)
                        value = DBNull.Value;
                    DataRow[binder.Name] = value;
                    return true;
                }
                catch { }
                return false;
            }
        }

    To demonstrate the basic features, here's a short test:

        [TestMethod]
        [ExpectedException(typeof(RuntimeBinderException))]
        public void BasicDataRowTests()
        {
            DataTable table = new DataTable("table");
            table.Columns.Add(new DataColumn() { ColumnName = "Name", DataType = typeof(string) });
            table.Columns.Add(new DataColumn() { ColumnName = "Entered", DataType = typeof(DateTime) });
            table.Columns.Add(new DataColumn() { ColumnName = "NullValue", DataType = typeof(string) });

            DataRow row = table.NewRow();

            DateTime now = DateTime.Now;
            row["Name"] = "Rick";
            row["Entered"] = now;
            row["NullValue"] = null; // converted into DbNull

            dynamic drow = new DynamicDataRow(row);

            string name = drow.Name;
            DateTime entered = drow.Entered;
            string nulled = drow.NullValue;

            Assert.AreEqual(name, "Rick");
            Assert.AreEqual(entered, now);
            Assert.IsNull(nulled);

            // this should throw a RuntimeBinderException
            Assert.AreEqual(entered, drow.enteredd);
        }

    The DynamicDataRow requires a custom constructor that accepts a single parameter that sets the DataRow. Once that's done, you can access property values that match the field names. Note that types are automatically converted - no type casting is needed in the code you write. The class also automatically converts DbNulls to regular nulls and vice versa, which makes it much easier to deal with data returned from a database. What's cool here isn't so much the functionality - even if I'd prefer to leave DataRow behind ASAP - but the fact that we can create a dynamic type that uses a DataRow as its 'data source' to serve member values. It's a pretty useful feature if you think about it, especially given how little code it takes to implement.
    By implementing these two simple methods we get to provide two features I was complaining about at the beginning that are missing from the DataRow:

    - Direct property syntax
    - Automatic type casting, so no explicit casts are required

    Caveats
    As cool and easy as this functionality is, it's important to understand that it doesn't come for free. The dynamic features in .NET are - well - dynamic. Which means they are essentially evaluated at runtime (late bound). Rather than static typing, where everything is compiled and linked by the compiler/linker, member invocations are looked up at runtime and essentially call into your custom code. There's some overhead in this. Direct invocations - the original code I showed - are going to be faster than the equivalent dynamic code. However, in the above code the difference between running the dynamic code and the original data access code was very minor. The loop running over 1500 result records took on average 13ms with the original code and 14ms with the dynamic code. Not exactly a serious performance bottleneck. One thing to remember is that Microsoft optimized the DLR code significantly, so that repeated calls to the same operations are routed very efficiently, which actually makes for very fast evaluation. The bottom line for performance with dynamic code is: make sure you test and profile your code if you think there might be a performance issue. However, in my experience with dynamic types so far, performance is pretty good for repeated operations (i.e. in loops). While usually a little slower, the perf hit is typically a lot less than equivalent Reflection work. Although the code in the second example looks like standard object syntax, dynamic is not static code. It's evaluated at runtime, so there's no type recognition until runtime. This means no IntelliSense at development time, and any invalid references that call into 'properties' (i.e. fields in the DataRow) that don't exist still cause runtime errors. So in the case of the data row you still get a runtime error if you mistype a column name:

        // this should throw a RuntimeBinderException
        Assert.AreEqual(entered, drow.enteredd);

    Dynamic - Lots of uses
    The arrival of dynamic types in .NET has been met with mixed emotions. Die-hard .NET developers decry dynamic types as an abomination to the language. After all, what dynamic accomplishes goes against everything a static language is supposed to provide. On the other hand, there are clearly scenarios where dynamic can make life much easier (COM interop being one). Think of the possibilities. What other data structures would you like to expose through a simple property interface rather than some sort of collection or dictionary? And beyond what I showed here, you can also implement 'method missing' behavior on objects with TryInvokeMember, which essentially allows you to create dynamic methods. It's all very flexible and, maybe just as important, easy to do. There's a lot of power hidden in this seemingly simple interface. Your move…

    © Rick Strahl, West Wind Technologies, 2005-2011
    Posted in CSharp .NET

    Read the article

  • Step Aside Google

    - by David Dorf
    Step aside, Google. While search will always be a huge part of the web, I can see a day in the not-too-distant future where search takes a backseat to the social graph. Links between pages will give way to relationships between people, including context like location. What does this mean for retail? It means your e-commerce strategy will slowly transition to an f-commerce strategy. Remember when a large portion of the online population was held captive inside the walls of AOL? All the commercials listed an AOL keyword, not a web address, because that's where the majority of people surfed. Now people are spending a huge amount of time on Facebook (despite Betty White's proclamation that it's a big waste of time). According to Facebook, users spend 500 billion minutes per month on the site. Selling products where consumers are spending their time makes sense. The power of Like and Share is the most effective approach to marketing. More and more stores are popping up on Facebook, and soon they will be the front end to e-commerce systems. As sites adopt the Facebook Open Graph API, users will have a harder time distinguishing the open web from their Facebook experience, including shopping. Of course e-commerce sites won't go away, but a large portion of their traffic will emanate from Facebook, and in some cases Facebook will act as the front end for the web store. Ignore Facebook Open Graph at your peril. In a Mashable article, Mitchell Harper made several predictions about how e-commerce will change based on Facebook. His five points are not far-fetched at all, so we need to watch this space carefully.

    Read the article

  • SOA Suite 11gR1 Patch Set 2 (PS2) released today!

    - by Demed L'Her
We just released SOA Suite 11gR1 Patch Set 2 (PS2) this morning! You can download it as usual from OTN (main platforms only) or eDelivery (all platforms). 11gR1 PS2 is delivered as a sparse installer, that is to say it is meant to be applied on top of the latest full release (11gR1 PS1). The good part is that it's great for existing PS1 users, who simply need to apply the patch and run the patch assistant; the not-so-good part is that new users will first need to download PS1. What's in this release? Bug fixes, of course, but also several significant new features. Here is a short selection of the most significant features in PS2:
- Spring component (for native Java extensibility and integration)
- SOA Partitions (to organize and manage your composites)
- Direct Binding (for transactional invocations to and from Oracle Service Bus)
- HTTP binding (for those of you trying to do away with SOAP and looking for simple GET and POST)
- Resequencer (for ordering out-of-order messages)
- WS Atomic Transactions (WS-AT) support (for propagation of transactions across heterogeneous environments)
Check out the complete list of new features in PS2 for more (including links to the documentation for the above)! But maybe even more importantly, we are also releasing Oracle Service Bus 11gR1 and BPM Suite 11gR1 at the same time, all on the same base platform (WebLogic Server 10.3.3)! (NB: it might take a while for all pages and caches to be updated with the new content, so if you don't find what you need today, try again soon!) Technorati Tags: ps1, 11gr1ps2, new release, oracle soa suite, oracle

    Read the article

  • SQL SERVER – Auto Recovery File Settings in SSMS – SQL in Sixty Seconds #034 – Video

    - by pinaldave
Every developer once in a while faces an unfortunate situation where they have not yet saved their work and SQL Server Management Studio crashes. Well, you can minimize the loss by optimizing the auto recovery settings. In this video we see how to set them. Go to SSMS >> Tools >> Options >> Environment >> AutoRecover. There are two different settings:
1) Save AutoRecover Information Every Minutes: this option saves the SQL query file at a set interval. Set this option to the minimum value possible to avoid loss. If you have set this value to 5, in the worst possible case you can lose the last 5 minutes of work.
2) Keep AutoRecover Information for Days: this option preserves the AutoRecover information for the specified number of days. That said, in case of an accident I suggest you open SQL Server Management Studio right away and recover your file; do not put this important task off to a future date.
Related Tips in SQL in Sixty Seconds:
- Manage Help Settings – CTRL + ALT + F1
- SSMS 2012 Reset Keyboard Shortcuts to Default
- A Cool Trick – Restoring the Default SQL Server Management Studio – SSMS
- Color Coding SQL Server Management Studio Status Bar – SQL in Sixty Seconds #023 – Video
- Clear Drop Down List of Recent Connection From SQL Server Management Studio
- SELECT TOP Shortcut in SQL Server Management Studio (SSMS)
What would you like to see in the next SQL in Sixty Seconds video? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video Tagged: Excel

    Read the article

  • Oracle Cloud Solutions at Cloud Expo West

    - by Gene Eun
Oracle is proud to be the Platinum Sponsor at next week's Cloud Expo West in Santa Clara (Nov 4-7). This is the third consecutive year that Oracle has sponsored Cloud Expo, and each year our involvement and presence at the conference have grown. This year we have a great lineup of sessions, listed below. If you're attending Cloud Expo West, we'd love to have you attend these sessions showcasing our thought leadership and leading solutions in the cloud. You should also swing by Booth #130 to see some of our latest cloud offerings firsthand.
- Monday, Nov 4, 3:00 pm - 3:45 pm: Ten Myths of Cloud Computing - General Session (All Tracks; Ballroom A-H)
- Monday, Nov 4, 5:10 pm - 5:55 pm: Driving Recurring Revenue Streams Through Cloud Billing (Cloud Computing and Big Data; M1)
- Monday, Nov 4, 5:10 pm - 5:55 pm: An Introduction to Oracle's Cloud Application Marketplace (Cloud Bootcamp; Great America Room J)
- Tuesday, Nov 5, 6:25 pm - 7:05 pm: Delivering Database as a Service with Oracle Database 12c (Deploying the Cloud; Great America Room 2)
- Wednesday, Nov 6, 5:35 pm - 6:20 pm: Accelerating Your Journey to Self-Service IT (Enterprise Cloud Computing; B2)
- Thursday, Nov 7, 1:35 pm - 2:20 pm: Oracle's Strategy for Public Cloud Platform and Infrastructure Services (Infrastructure Management | Virtualization; M2)
At Cloud Expo West you'll get to learn about and experience the latest in Cloud and Big Data. If you're in Silicon Valley or the Bay Area and don't have a pass to Cloud Expo, no problem: Oracle is giving away FREE VIP Gold Passes, and we would love to have you as our VIP guest. Just go to Oracle's Cloud Expo 2013 event registration page and follow the instructions to get your complimentary pass. Stay tuned to this blog and follow us on Twitter (@OracleCloudZone) during and after Cloud Expo for more of our insight and observations about this year's conference.

    Read the article

  • What's wrong with cplusplus.com?

    - by Kerrek SB
This is perhaps not a perfectly suitable forum for this question, but let me give it a shot, at the risk of being moved away. There are several references for the C++ standard library, including the invaluable ISO standard, MSDN, IBM, cppreference, and cplusplus. Personally, when writing C++ I need a reference that has quick random access, short load times, and usage examples, and I've been finding cplusplus.com pretty useful. However, I've been hearing negative opinions about that website frequently here on SO, so I would like to get specific: What are the errors, misconceptions, or bad pieces of advice given by cplusplus.com? What are the risks of using it to make coding decisions? Let me add this point: I want to be able to answer questions here on SO with accurate quotes of the standard, and thus I would like to post immediately usable links, and cplusplus.com would have been my choice site were it not for this issue. Update: There have been many great responses, and I have seriously changed my view on cplusplus.com. I'd like to list a few choice results here; feel free to suggest more (and keep posting answers). As of June 29, 2011:
- Incorrect description of some algorithms (e.g. remove).
- Information about the behaviour of functions is sometimes incorrect (atoi), fails to mention special cases (strncpy), or omits vital information (iterator invalidation).
- Examples contain deprecated code (#include style).
- Inexact terminology is doing a disservice to learners and the general community ("STL", "compiler" vs "toolchain").
- Incorrect and misleading description of the typeid keyword.

    Read the article

< Previous Page | 91 92 93 94 95 96 97 98 99 100 101 102  | Next Page >