Search Results

Search found 20869 results on 835 pages for 'things i hate'.


  • Migrating SQL Server Compact Edition (SQL CE) database to SQL Server using Web Matrix

    - by Harish Ranganathan
    One of the things keeping us busy is the Web Camps we are delivering across 5 cities.  If you read this blog and also attended one of these web camps, there is a good chance you have seen me, since I have been at all of them so far.  The topics we cover include Visual Studio 2010 SP1, SQL CE, ASP.NET MVC & HTML5.

    Whenever I talk about SQL CE, the immediate response is amazement that Microsoft has shipped a FREE compact edition database – an embedded database that can be x-copy deployed.  You might think: well, didn't Microsoft already ship SQL Express, which is also FREE?  The difference is that SQL Express runs as a service on the machine (open SQL Server Configuration Manager and you will see SQL Express running as a service alongside your SQL Server engine, if it is installed).  This means that even if you are willing to use SQL Express when you deploy your application, it needs to be installed on the production machine (hosting provider) and it needs to run as a service – and many hosters don't allow such services to run on their space.  SQL CE, on the other hand, is an x-copy deployable database: just a few DLLs are required to run it, and they don't even need to be installed in the GAC on the production machine.  In fact, if you have Visual Studio 2010 SP1 installed, the "Add Deployable Dependencies" option in Project Properties will detect that SQL CE is something you probably want to add as a deployable dependency for your project, and it bundles the required DLLs in the "_bin_deployableAssemblies" folder.  Your project can then be x-copy deployed and just works.

    However, SQL CE has a 4GB storage limit.  Real-world applications often require more than 4GB of data, so it often turns out that people would like to use SQL CE for the development/ramp-up stages and then migrate to full-fledged SQL Server after a while.  It is only natural, then, that the question arises: "How do I move my SQL CE database to SQL Server?"  And honestly, there is no obvious built-in support for it.  Since I got this question in almost every place where we talked about SQL CE, I spoke to Ambrish Mishra (PM in the SQL CE team, Hyderabad), and he was kind enough to demonstrate how this can be accomplished using WebMatrix:

      - Open WebMatrix (it can be installed for free from www.microsoft.com/web) and click on "Site from Template".
      - Click on the "Bakery" template (by default it uses a SQL CE database and has all the required sample data) and click "OK".
      - In the project, navigate to the Database tab and you will find that the Bakery site uses a SQL CE database, "bakery.sdf".
      - Select "bakery.sdf" and you will see the "Migrate" button at the top right.
      - Click the "Migrate" button and a popup wizard opens, configured for SQL Express by default.  You can edit it to point to your local SQL Server instance or a remote server.

    Upon filling in the Server Name, Username and Password and clicking "OK", a couple of things happen:

      1. The database is migrated to SQL Server (local or remote – subject to permissions on the remote server).  You can open SQL Server Management Studio and connect to the server to verify that the "bakery" database exists under the "Databases" node.
      2. In WebMatrix, if you navigate to the "Files" tab and open the web.config file, the connection string now points to the SQL Server instance (yes, the Migrate button was smart enough to make this change too).

    And there it is – your SQL Server Compact Edition database, now migrated to SQL Server!  In a future post, I will explain the steps involved when using Visual Studio.  Cheers!
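    If you prefer to confirm the migration from a query window rather than Object Explorer, something like the following works.  This is only a rough sketch – the dbo.Products table name is a guess at the Bakery sample schema, so substitute a table your own site actually has.

      -- Run against the target SQL Server instance after the WebMatrix migration
      SELECT name, create_date
      FROM   sys.databases
      WHERE  name = 'bakery';          -- the migrated database should show up here

      USE bakery;
      GO
      -- Hypothetical table name: replace Products with a table from your own site
      SELECT COUNT(*) AS RowsMigrated
      FROM   dbo.Products;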

    Read the article

  • SQL SERVER – IO_COMPLETION – Wait Type – Day 10 of 28

    - by pinaldave
    For any good system three things are vital: CPU, memory and IO (disk).  Among these three, IO is the most crucial factor for SQL Server.  Looking at real-world cases, I do not see IT people upgrading CPU and memory frequently; however, the disk is often upgraded to improve space, speed or throughput.  Today we will look at an IO-related wait type.

    From Books Online: "Occurs while waiting for I/O operations to complete.  This wait type generally represents non-data page I/Os.  Data page I/O completion waits appear as PAGEIOLATCH_* waits."

    IO_COMPLETION explanation: tasks are waiting for I/O to finish.  This is a good indication that the IO subsystem needs to be looked at.

    Reducing IO_COMPLETION waits – when IO is the issue, one should look at the following things related to the IO subsystem:

      - Proper placement of files is very important.  Check the file system for proper placement – LDF and MDF on separate drives, TempDB on another separate drive, hot-spot tables on a separate filegroup (and on a separate disk), etc.
      - Check the file statistics and see whether there are high IO read and IO write stalls (see SQL SERVER – Get File Statistics Using fn_virtualfilestats).
      - Check the event log and error log for any errors or warnings related to IO.
      - If you are using a SAN (Storage Area Network), check the throughput of the SAN system as well as the configuration of the HBA queue depth.  In one of my recent projects the SAN was performing really badly, but the SAN administrator would not accept it.  After some investigation he agreed to change the HBA queue depth on the development (test) environment, and as soon as we raised the HBA queue depth to a considerably higher value there was a sudden, big improvement in performance.
      - It is very possible that there are no proper indexes in the system and there are lots of table scans and heap scans.  Creating proper indexes can reduce the IO bandwidth considerably.  If SQL Server can use an appropriate covering index instead of the clustered index, it can effectively reduce a lot of CPU, memory and IO (considering the covering index has fewer columns than the clustered table; it depends upon the situation).  You can refer to the two articles I wrote about optimizing indexes: Create Missing Indexes and Drop Unused Indexes.

    Checking memory-related Perfmon counters:

      - SQLServer: Memory Manager\Memory Grants Pending (a value consistently higher than 0-2 is a concern)
      - SQLServer: Memory Manager\Memory Grants Outstanding (consistently higher value; benchmark)
      - SQLServer: Buffer Manager\Buffer Cache Hit Ratio (higher is better; greater than 90% for a usually smooth-running system)
      - SQLServer: Buffer Manager\Page Life Expectancy (a value consistently lower than 300 seconds is a concern)
      - Memory: Available MBytes (information only)
      - Memory: Page Faults/sec (benchmark only)
      - Memory: Pages/sec (benchmark only)

    Checking disk-related Perfmon counters:

      - Average Disk sec/Read (consistently higher than 4-8 milliseconds is not good)
      - Average Disk sec/Write (consistently higher than 4-8 milliseconds is not good)
      - Average Disk Read/Write Queue Length (consistently higher than the benchmark is not good)

    Note: The information presented here is from my experience and I do not claim it to be completely accurate.  I suggest reading Books Online for further clarification.  All the discussions of wait stats in this blog are generic and vary from system to system.  It is recommended that you test this on a development server before implementing it on a production server.
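    If you want a quick look at whether IO_COMPLETION and IO stalls are actually significant on your server, the queries below against the DMVs are a reasonable starting point.  This is only a sketch – thresholds and interpretation depend entirely on your workload, so treat the numbers as a benchmarking exercise.

      -- How much waiting has been recorded for IO_COMPLETION?
      SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
      FROM   sys.dm_os_wait_stats
      WHERE  wait_type = 'IO_COMPLETION';

      -- Per-file read/write stalls (the DMV equivalent of fn_virtualfilestats)
      SELECT DB_NAME(vfs.database_id) AS database_name,
             mf.physical_name,
             vfs.io_stall_read_ms,
             vfs.io_stall_write_ms
      FROM   sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
      JOIN   sys.master_files AS mf
             ON  mf.database_id = vfs.database_id
             AND mf.file_id     = vfs.file_id
      ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;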
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Types, SQL White Papers, T SQL, Technology

    Read the article

  • Floating Panels and Describe Windows in Oracle SQL Developer

    - by thatjeffsmith
    One of the challenges I face as I try to share tips about our software is that I tend to assume there are features that you just 'know about.'  Either they're so intuitive that you MUST know about them, or it's a feature I've been using for so long that I forget others may have never even seen it before.  I want to cover two of those today:

      - Describe (DESC) – SHIFT+F4
      - Floating panels

    My super-exciting desktop: SQL Developer and Describe

    DESC, or Describe, is an Oracle SQL*Plus command.  It shows what a table or view is composed of in terms of its column definitions.  Here's an example:

      SQL*Plus: Release 11.2.0.3.0 Production on Fri Sep 21 14:25:37 2012
      Copyright (c) 1982, 2011, Oracle. All rights reserved.
      Connected to:
      Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
      With the Partitioning, OLAP, Data Mining and Real Application Testing options

      SQL> desc beer;
       Name      Null?    Type
       --------- -------- --------------
       BREWERY   NOT NULL VARCHAR2(100)
       CITY               VARCHAR2(100)
       STATE              VARCHAR2(100)
       COUNTRY            VARCHAR2(100)
       ID                 NUMBER

    You can get the same information – and a good bit more – in SQL Developer using SQL Developer's DESC command.  You invoke it with SHIFT+F4, and it opens a floating (non-modal!) window with the information you want: column definitions, constraints, stats, privileges, etc.

    A few 'cool' things you should be aware of:

      - I can open as many as I want and still work in my worksheet, browser, etc.
      - I can also DESC an index, user, or most any other database object.
      - I can of course move them off my primary desktop display.
      - The DESC panels are read-only.  I can't drop a constraint from within the DESC window of a given table.  But for dragging columns into my worksheet, and checking out the stats for my objects as I query them, it's very, very handy.

    Try this right now: type 'scott.emp' (or some other table you have), place your cursor on the text, and hit SHIFT+F4.  You'll see the EMP object open.  Now click into a column name in the Columns page and drag it into your worksheet.  It will paste that column name into your query.  This is an alternative for those who don't like our code insight feature or dragging columns off the connection tree (new for v3.2!)  Got it?

    SQL Developer's Floating Panels

    OK, let's talk about a similar feature.  Did you know that any dockable panel from the View menu can also be 'floated'?  One of my favorite features is the SQL History: every query I run is recorded, and I can recall it later without having to remember what I ran and when.  I USUALLY use the keyboard shortcuts for this.  Let your trouble float away – if only it were as easy as a right-click in the real world.  But sometimes I still want to see my recall list without having to give up my screen real estate, so I just right-click on the panel tab and select 'Float,' then move it over to my secondary display – see the poorly lit picture at the beginning of this post.

    And that's it.  Simple, I know.  But I thought you should know about these two things!
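    And if you ever want the same column information in a script rather than interactively, the data dictionary gives you a rough equivalent of DESC.  Just a sketch – the SCOTT/EMP names below are the usual sample-schema placeholders, so point it at your own owner and table.

      -- Rough scripted equivalent of DESC for a given table
      SELECT column_name,
             nullable,
             data_type,
             data_length
      FROM   all_tab_columns
      WHERE  owner      = 'SCOTT'      -- placeholder owner
      AND    table_name = 'EMP'        -- placeholder table
      ORDER BY column_id;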

    Read the article

  • SQL SERVER – Why Do We Need Data Quality Services – Importance and Significance of Data Quality Services (DQS)

    - by pinaldave
    Databases are awesome.  I'm sure my readers know my opinion about this – I have made SQL Server my life's work, after all!  I love technology and all things computer-related.  Of course, even with my love for technology, I have to admit that it has its limits.  For example, it takes a human brain to notice that data has been input incorrectly.  Computer "brains" might be faster than humans, but human brains are still better at pattern recognition.  A human brain will notice that "300" is a ridiculous age for a human to be, but to a computer it is just a number.  A human will also notice the similarity between "P. Dave" and "Pinal Dave," but this would stump most computers.

    In a database, these sorts of anomalies are incredibly important.  Databases are often used by multiple people who rely on the data being true and accurate, so data quality is key.  That is why SQL Server's improved Master Data Management story includes Data Quality Services.  This service can recognize and flag anomalies like out-of-range numbers and similarities between values.  This allows a human brain, with its pattern-recognition abilities, to double-check and ensure that P. Dave really is the same person as Pinal Dave.

    A nice feature of Data Quality Services is that once you set the rules for it to follow, it will not only keep your data organized in the future, but also go back and "fix up" data that has already been entered.  It also allows you to combine data from multiple places and applies these rules across the board, so you don't get the weird issues that crop up when trying to fit a round peg into a square hole.

    There are two parts of Data Quality Services that help you accomplish all these neat things.  The first part is DQS Server, which you can think of as the server-side engine of the system.  It is installed alongside SQL Server (it has to be installed separately, after SQL Server is installed) and runs quietly in the background, performing all its cleanup services.  DQS Client is the user interface you interact with to set the rules and check over your data.  There are three main aspects of the Client: knowledge base management, data quality projects and administration.  Knowledge base management is the part of the system that allows you to set the rules – to program the "knowledge base" – so that your database is clean and consistent.  Data quality projects are what clean up the data that is already present.  Administration allows you to check what DQS Client is doing, change rules, and generally oversee the entire process.  The whole thing is user-friendly and a pleasure to use.  I highly recommend implementing Data Quality Services in your database.
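    To make the idea concrete, here is the kind of check a human would otherwise run by eye – and that a DQS knowledge base can encode as a rule.  This is only an illustrative sketch: the Customers table, its columns and the thresholds are invented for the example and are not part of DQS itself.

      -- Out-of-range values: ages no human should have
      SELECT CustomerID, FirstName, LastName, Age
      FROM   dbo.Customers
      WHERE  Age < 0 OR Age > 120;

      -- Likely duplicates: same last name, one record using only an initial ("P." vs "Pinal")
      SELECT a.CustomerID, a.FirstName, b.CustomerID AS OtherID, b.FirstName AS OtherFirstName, a.LastName
      FROM   dbo.Customers AS a
      JOIN   dbo.Customers AS b
             ON  a.LastName = b.LastName
             AND a.CustomerID < b.CustomerID
             AND LEFT(a.FirstName, 1) = LEFT(b.FirstName, 1)
             AND (LEN(a.FirstName) <= 2 OR LEN(b.FirstName) <= 2);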
    Here are a few of my blog posts related to Data Quality Services, and I encourage you to try this out:

      - SQL SERVER – Installing Data Quality Services (DQS) on SQL Server 2012
      - SQL SERVER – Step by Step Guide to Beginning Data Quality Services in SQL Server 2012 – Introduction to DQS
      - SQL SERVER – DQS Error – Cannot connect to server – A .NET Framework error occurred during execution of user-defined routine or aggregate "SetDataQualitySessions" – SetDataQualitySessionPhaseTwo
      - SQL SERVER – Configuring Interactive Cleansing Suggestion Min Score for Suggestions in Data Quality Services (DQS) – Sensitivity of Suggestion
      - SQL SERVER – Unable to DELETE Project in Data Quality Projects (DQS)

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Data Quality Services, DQS

    Read the article

  • SQL SERVER – Speed Up! – Parallel Processes and Unparalleled Performance – TechEd 2012 India

    - by pinaldave
    TechEd India 2012 is just around the corner and I will be presenting two different sessions there.  SQL Server performance tuning is a very challenging subject that requires expertise in both database administration and database development, and I have always enjoyed talking about it.  Just like doctors, I like to call every attempt to improve the performance of SQL Server queries and database servers a "practice" too.  I have been working with SQL Server for more than 8 years and I believe I have mastered many performance tuning concepts.  However, performance tuning is not a simple subject.  There are occasions when I feel stumped, occasions when I am not sure what the next step should be.  When I face a situation I cannot figure out easily, it makes me happiest, because I clearly see it as a learning opportunity.

    I have been presenting at TechEd India for the last three years; this is my fourth opportunity to present a technical session on SQL Server.  Just like every other year, I decided to present something different, something I have spent years learning.  This time, I am going to present on parallel processes.  It is widely believed that more CPUs will improve the performance of the server, and that is true in many cases.  However, there are cases where limiting CPU usage has improved the overall health of the server.  I will be presenting on the subject of parallel processes and their effects.  I have spent more than a year working on this subject alone, and after working with various queries on multi-CPU systems I have personally learned a few things.  In the coming TechEd session, I am going to share my experience with parallel processes and performance tuning.

    Session Details

    Title: Speed Up! – Parallel Processes and Unparalleled Performance (Add to Calendar)

    Abstract: "More CPU, More Performance" – a very common understanding is that using multiple CPUs can improve the performance of a query.  To get maximum performance out of any query, one has to master various aspects of parallel processing.  In this deep-dive session, we will explore this complex subject with a very simple interactive demo.  Attendees will walk away with a proper understanding of the CXPACKET wait type, MAXDOP, the parallelism cost threshold and various other concepts.

    Date and Time: March 23, 2012, 12:15 to 13:15

    Location: Hotel Lalit Ashok – Kumara Krupa High Grounds, Bengaluru – 560001, Karnataka, India.  Add to Calendar

    Please submit your questions in the comments area and I will be sure to discuss them during my session.  If I pick your question to discuss during my session, here is the gift I commit to right now – the SQL Server Interview Questions and Answers book.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology Tagged: TechEd, TechEdIn
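    If you would like to experiment with the knobs this session covers before TechEd, the server-level settings and the query hint below are the usual starting points.  The values are placeholders only, and the OrderDetails table is hypothetical – appropriate settings vary from system to system, so please test on a development server first.

      -- Server-wide settings discussed in the session (values are placeholders)
      EXEC sp_configure 'show advanced options', 1;
      RECONFIGURE;
      EXEC sp_configure 'max degree of parallelism', 4;       -- MAXDOP
      EXEC sp_configure 'cost threshold for parallelism', 25; -- parallelism threshold
      RECONFIGURE;

      -- Per-query override using a hint
      SELECT OrderID, SUM(Quantity) AS TotalQty
      FROM   dbo.OrderDetails          -- hypothetical table
      GROUP BY OrderID
      OPTION (MAXDOP 1);

      -- How much CXPACKET waiting has the server seen so far?
      SELECT wait_type, wait_time_ms, waiting_tasks_count
      FROM   sys.dm_os_wait_stats
      WHERE  wait_type = 'CXPACKET';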

    Read the article

  • Autoscaling in a modern world&hellip;. last chapter

    - by Steve Loethen
    As we all know as coders, things like logging are never important.  Our code will work right the first time.  So, you can understand my surprise when, the first time I deployed the autoscaling worker role to the actual Azure fabric, it did not scale.  I mean, it worked on my machine.  How dare the datacenter argue with that.  So, how did I track down the problem?  (It turns out it was not so much code as the lack of the right certificate.)  When I ran it locally in the developer fabric, I was able to see a wealth of information – lots of periodic status info every time the autoscaler came around to check on my rules and decide whether to act or not.  But that information was not making it to Azure storage.  The diagnostics were not being transferred to where I could easily see them and use them to track down why things were not being cooperative.  After a bit of digging, I discovered the problem: you need to add a bit of extra configuration to get the correct information stored for you.  I added the following to my app.config:

      <system.diagnostics>
        <sources>
          <source name="Autoscaling General" switchName="SourceSwitch"
                  switchType="System.Diagnostics.SourceSwitch">
            <listeners>
              <add name="AzureDiag" />
              <remove name="Default" />
            </listeners>
          </source>
          <source name="Autoscaling Updates" switchName="SourceSwitch"
                  switchType="System.Diagnostics.SourceSwitch">
            <listeners>
              <add name="AzureDiag" />
              <remove name="Default" />
            </listeners>
          </source>
        </sources>
        <switches>
          <add name="SourceSwitch"
               value="Verbose, Information, Warning, Error, Critical" />
        </switches>
        <sharedListeners>
          <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
               name="AzureDiag" />
        </sharedListeners>
        <trace>
          <listeners>
            <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
                 name="AzureDiagnostics">
              <filter type="" />
            </add>
          </listeners>
        </trace>
      </system.diagnostics>

    Suddenly all the rich tracing info I needed was filling up my storage account.  After a few cycles of attempting to scale, I identified the cert problem, uploaded a correct certificate, and away it went.  I hope this was helpful.

    Read the article

  • The busy developers guide to the Kinect SDK Beta

    - by mbcrump
    The Kinect is awesome.  From day one, I've said this thing has got potential.  After playing with several open-source Kinect projects, I am pleased to announce that Microsoft released the official SDK beta on 6/16/2011.  I've created this quick start guide to get you up to speed in no time flat.  Let's begin.

    What is it?

    The Kinect for Windows SDK beta is a starter kit for application developers that includes APIs, sample code, and drivers.  This SDK enables the academic research and enthusiast communities to create rich experiences by using Microsoft Xbox 360 Kinect sensor technology on computers running Windows 7.  (As defined by Microsoft.)

    Links worth checking out:

      - Download Kinect for Windows SDK beta – you can download either a 32- or 64-bit SDK depending on your OS.
      - Readme for Kinect for Windows SDK Beta from Microsoft Research
      - Programming Guide: Getting Started with the Kinect for Windows SDK Beta
      - Code walkthroughs of the samples that ship with the Kinect for Windows SDK beta (found in the \Samples folder)
      - Coding4Fun Kinect Toolkit – lots of extension methods and controls for WPF and WinForms.
      - Kinect Mouse Cursor – use your hands to control things like a mouse; created by Brian Peek.
      - Kinect Paint – basically MS Paint, but you use your hands!
      - Kinect for Windows SDK Quickstarts: Installing and Using the Kinect Sensor

    Getting it installed

    After downloading the Kinect SDK beta, double-click the installer to get the ball rolling.  Hit the Next button a few times and it should complete installing.  Once you have everything installed, simply plug your Kinect device into the USB port on your computer and hopefully you will get the following screen:

    Once installed, you are going to want to check out the following folders:

      - C:\Program Files (x86)\Microsoft Research KinectSDK – this contains the actual Kinect sample executables along with the documentation as a CHM file.
      - C:\Users\Public\Documents\Microsoft Research KinectSDK Samples – the main thing to note here is that these folders (Audio, NUI) contain the source code for the sample applications, so you can compile/build them yourself.

    DEMO Time

    Let's get started with some demos.  Navigate to the C:\Program Files (x86)\Microsoft Research KinectSDK folder and double-click ShapeGame.exe.  Next up is SkeletalViewer.exe (image taken from http://www.i-programmer.info/news/91-hardware/2619-microsoft-launch-kinect-sdk-beta.html as I could not get a good image using SnagIt).  At this point, you will want to download Kinect Mouse Cursor – this is really cool because you can use your hands to control the mouse cursor; I actually used it to resize its own window.  Last up is Kinect Paint – this is very cool, just make sure you read the instructions!  MS Paint on steroids!

    A few tips for getting started building Kinect applications:

      - It appears WPF is the way to go when building Kinect applications.  You must also use a version of Visual Studio 2010.
      - You're going to need to reference Microsoft.Research.Kinect.dll when building a Kinect application.  Right-click on References, go to Browse, navigate to C:\Program Files (x86)\Microsoft Research KinectSDK and select Microsoft.Research.Kinect.dll.
      - You are going to want to make sure your project has the Platform target set to x86.
      - The Coding4Fun Kinect Toolkit really makes things easier, with extension methods and controls.  Just note that this is for WinForms or WPF.

    Conclusion

    It looks like we have a lot of fun in store with the Kinect SDK.
I’m very excited about the release and have already been thinking about all the applications that I can begin building. It seems that development will be easier now that we have an official SDK and the great work from Coding4Fun. Please subscribe to my blog or follow me on twitter for more information about Kinect, Silverlight and other great technology.  Subscribe to my feed

    Read the article

  • Motorola Droid App Recommendations

    - by Brian Jackett
    Just as a disclaimer, the views and opinions expressed in this post are solely my own and I'm not getting paid or compensated for anything.

    Ok, so I'm one of the crazy few who went out and bought a Droid the week it was released a few months back.  The Motorola Droid was a MAJOR upgrade in phone capabilities for me, as my previous phone had no GPS, no web access, limited apps, etc.  I now use my Droid for so much of my life, from work to personal to community-based events.  Since I've been using my Droid for a while, a number of friends (@toddklindt, @spmcdonough, @jfroushiii, and many more) who later got a Droid asked me which apps I recommend.  While there are a few sites on the web listing useful Android apps, here's my quick list (with a few updates since it was first put together).

    Note: * denotes a highly recommended app

    Android App Recommendations for Motorola Droid (updated after the 2.1 update):

      - RemoteDroid – install a thin client on another computer and the Droid becomes a mouse pad / keyboard; control the computer remotely.
      - PdaNet – the free version allows tethering (only to HTTP, no HTTPS) without paying an extra monthly charge.  A paid version allows HTTPS access.
      - SportsTap – keep track of about a dozen sports, favorite teams, etc.
      - *Movies – set up favorite theaters, find movie times, buy tickets, etc.
      - WeatherBug Elite – paid app, but gives weather alerts, a 4-day forecast, etc.  A free version also exists.  (Update: Android 2.1 offers a free weather app, but I still prefer WeatherBug.)
      - *Advanced Task Killer – manually free up memory and kill apps that are not needed.
      - Google Voice – you have to have a Google Voice account to really use it, but it allows visual voicemail, sending calls to specific phones, and too many other things to list.
      - AndroZip – access your phone memory like a file system.
      - Twidroid – best Twitter client I've found so far, but personal preference varies.  I'm using the free version and it suits me just fine.
      - Skype (beta) – I only use this to send chat messages; not sure how/if phone calls work on it.  (Update: the Skype Mobile app was just released, but I uninstalled it after a few days as it kept launching in the background and using up memory when not wanted.)
      - *NewsRob – RSS reader that syncs with Google Reader.  I use this multiple times a day; excellent app.  (Update: this app does ask for your Google username and password, so security-minded folks be cautioned.)
      - ConnectBot – I don't use it often myself, but it allows SSH into a remote computer.  Great if you need to remotely manage a server.
      - Speed Test – same as the online website; allows finding upload/download speeds.
      - WiFinder – store wifi preferences and find wifi spots in the area.
      - TagReader – simple Microsoft Tag reader, works great.
      - *Google Listen – audible podcast catcher that allows putting items into a queue, syncing with Google Reader RSS, etc.  I personally love this app, which has now replaced the iPod I used to use in my car, but I have heard mixed reviews from others.
      - Robo Defense – (paid app) tower defense game, but with RPG elements to upgrade towers over a lifetime of playing.  I've never played FieldRunners but I'm told it is a very similar offering.  Nice distraction when in an airport or when you have some time to burn.
      - Phit Droid 3rd Edition – drag and drop block shapes into a rectangular box; simple game to pass the time, with literally 1000s of levels.  Note this game has been updated dozens of times with numerous editions, so I am unsure exactly which are still on the market.
      - Google Sky Map – impress your friends by holding the Droid up to the sky and viewing constellations on the Droid screen.
      - wootCheck Lite – check up on daily offerings on Woot.com and the affiliated wine, sellout, shirt, and kids sites.

    Side notes: I've seen that Glympse and TripIt have recently come out with Android apps.  I've installed them but haven't gotten to use either yet, though I hear good things.  I will try them out on 2 upcoming trips in May and update with my impressions.

    -Frog Out

    Image linked from http://images.tolmol.com/images/grpimages/200910191814100_motorola-droid.gif

    Read the article

  • St. Louis ALT.NET

    - by Brian Schroer
    I'm a huge fan of the St. Louis .NET User Group and a regular attendee of their meetings, but I always wished there was a local group that discussed more advanced .NET topics.  (That's not a criticism of the group – I appreciate that they want to serve developers with a broad range of skill levels.)  That's why I was thrilled when Nicholas Cloud started a St. Louis ALT.NET group in 2010.  Here's the "about us" statement from the group's web site:

    The ALT.NET community is a loosely coupled, highly cohesive group of like-minded individuals who believe that the best developers do not align themselves with platforms and languages, but with principles and ideas.  In 2007, David Laribee created the term "ALT.NET" to explain this "alternative" view of the Microsoft development universe – a view that challenged the "Microsoft-only" approach to software development.  He distilled his thoughts into four key developer characteristics which form the basis of the ALT.NET philosophy:

      - You're the type of developer who uses what works while keeping an eye out for a better way.
      - You reach outside the mainstream to adopt the best of any community: Open Source, Agile, Java, Ruby, etc.
      - You're not content with the status quo.  Things can always be better expressed, more elegant and simple, more mutable, higher quality, etc.
      - You know tools are great, but they only take you so far.  It's the principles and knowledge that really matter.  The best tools are those that embed the knowledge and encourage the principles (e.g. ReSharper).

    The St. Louis ALT.NET meetup group is a place where .NET developers can learn, share, and critique approaches to software development on the .NET stack.  We cater to the highest common denominator, not the lowest, and want to help all St. Louis .NET developers achieve a superior level of software craftsmanship.

    I don't see a lot of ALT.NET talk in blogs these days.  The movement was harmed early on by the negative attitudes of some of its early leaders, including jerk moves like the Entity Framework "vote of no confidence", but I do see occasional mentions of local groups like the St. Louis one.  I think ALT.NET has been successful at bringing some of its ideas into the .NET world, including heavily influencing ASP.NET MVC and raising the general level of software craftsmanship for developers working on the Microsoft stack.  The ideas and ideals live on, they're just not branded as "this is ALT.NET!"

    In the past 18 months, St. Louis ALT.NET meetups have discussed topics like:

      - NHibernate
      - F# and other functional languages
      - AOP
      - CoffeeScript
      - "How Ruby Is Making Me a Stronger C# Developer"
      - Using rake for builds
      - CQRS
      - .NET dynamic programming
      - Micro web frameworks – Nancy & Jessica
      - Git

    ALT.NET doesn't mean (to me, anyway) "alternatives to .NET", but "alternatives for .NET".  We look at how things are done in Ruby and other languages/platforms, but always with the idea "What can I learn from this to take back to my 'day job' with .NET?"

    Meetings are held at 7 PM on the fourth Wednesday of each month at the offices of Professional Employment Group.  PEG is located at 999 Executive Parkway (Suite 100 – lower level) in Creve Coeur (south of Olive off of Mason Road – here's a map).  Food is not supplied (sorry if you're a big fan of the Papa John's Crust-Lovers' Pizza that's a staple of user group meetings), but attendees are encouraged to come early and bring/share beer, so that's cool.

    Thanks to Nick for organizing, and to Professional Employment Group for lending their offices.
Please visit the meetup site for more information.

    Read the article

  • Agile Manifesto, Revisited

    - by GeekAgilistMercenary
    Again, conversations give me a zillion things to write about.  The conversation that has cropped up again recently concerns my various viewpoints on the Agile Manifesto – not all the processes that came after the manifesto was written, just the core manifesto itself.  For context, here is the manifesto in all its glory:

    We are uncovering better ways of developing software by doing it and helping others do it.  Through this work we have come to value:

      - Individuals and interactions over processes and tools
      - Working software over comprehensive documentation
      - Customer collaboration over contract negotiation
      - Responding to change over following a plan

    That is, while there is value in the items on the right, we value the items on the left more.

    Several of the key signatories at the time went on to write some of the core books that really gave Agile Software Development traction.  If you check out the Agile Manifesto site and do a search for any of those people, you will find a treasure trove of software development information.

    My 2 Cents

    First off, I agree with a few people out there: Agile is not Scrum, for instance.  Do NOT get these things confused when checking out Agile, or when pushing forward with Scrum.  As David Starr points out in his blog entry, "About 35 minutes into this discussion, I realized I hadn't heard a question or comment that wasn't related to Scrum.  I asked the room, 'How many people are on an agile team that is NOT using Scrum?'  5 hands.  Seriously, out of about 150 people or so.  5 hands."

    So know – as this is one of my biggest pet peeves these days – that Scrum is not Agile.  Another quote from David: "I assure you, dear reader, 2 week time boxes does not an agile team make."  This is the exact problem.  Take a look at the actual manifesto above.  The first ideal is "Individuals and interactions over processes and tools".  There are a couple of meanings in this ideal, just as there are in the other written ideals, but this one has a lot of contention with a set practice such as Scrum.  There are other formulas, namely XP (eXtreme Programming) and Kanban, that come to mind often.  But none of these are Agile; each is instead a process based on the ideals of Agile.

    Some of you may be thinking, "that's the same thing".  Well, no, it is not.  This type of differentiation is vitally important.  Agile is a set of ideals.  Processes are nice, but they can change; they may work for some and not for others.  The Agile Manifesto covers the ideals behind what is intended, that intention being to learn and find new ways to build better software.

    Ideals, not processes.  Definition versus implementation.  Class versus object.  The ideals are of utmost importance, the processes are secondary.  The first ideal is what really lays this out for me: "Individuals and interactions over processes and tools".  Yes, we need tools, but we need the individuals and their interactions more.

    For those coming into a development team, I hope you take this to mind.  It is of utmost importance that this differentiation is known and fought for.  The second the process becomes more important than the individuals and interactions, the team will effectively lose the advantages of the Agile ideals.

    This is just one of my first thoughts on the topic of Agile.  I will be writing more in the near future about each of the ideals.  I will make a point to outline more of my thoughts, my opinions, and my experience with the ideals of Agile and the various processes that are out there.

    Maybe I will stumble upon something new with the help of my readers?  It would be a grand overture to the ideals I hold.

    For the original entry, check out my personal blog with other juicy tech tidbits, rants, raves, and the like.  Agilist Mercenary

    Read the article

  • Stuck due to "knowing too much"

    - by Ran Biron
    Note: more discussion at http://news.ycombinator.com/item?id=4037794

    Welcome Hacker News visitors!  While HN is a fine forum for discussion and debate, Programmers - Stack Exchange is not.  From the FAQ: If your motivation for asking the question is "I would like to participate in a discussion about ____", then you should not be asking here.  However, if your motivation is "I would like others to explain ____ to me", then you are probably OK.  (Discussions are of course welcome in our real time web chat.)  Currently, this question is viewed by the membership of Programmers.SE as more likely to provoke unproductive discussion than constructive answers; while debates on its form and future are conducted, it will be locked to prevent arguments and vandalism.  -- Shog9

    I have a relatively simple development task, but every time I try to attack it, I end up spiraling into deep thoughts – how it could be extended in the future, what the 2nd-generation clients are going to need, how it affects "non-functional" aspects (e.g. performance, authorization...), how it would best be architected to allow change...

    I remember myself a while ago, younger and, perhaps, more eager.  The "me" I was then wouldn't have given a thought to all that – he would've gone ahead and written something, then rewritten it, then rewritten it again (and again...).  The "me" today is more hesitant, more careful.  I find it much easier today to sit and plan and instruct other people on how to do things than to actually go ahead and do them myself – not because I don't like to code (the opposite, I love to!) but because every time I sit at the keyboard, I end up in that same annoying place.

    Is this wrong?  Is this a natural evolution, or did I drive myself into a rut?

    Fair disclosure – in the past I was a developer; today my job title is "system architect".  Good luck figuring out what that means – but that's the title.

    Wow.  I honestly didn't expect this question to generate that many responses.  I'll try to sum it up.

    Reasons:

      - Analysis paralysis / over-engineering / gold plating / (any other "too much thinking up-front can hurt you")
      - Too much experience for the given task
      - Not focusing on what's important
      - Not enough experience (and realizing that)

    Solutions (not matched to reasons):

      - Testing first.
      - Start coding (+ for fun).
      - One to throw away (+ one API to throw away).
      - Set time constraints.
      - Strip away the fluff, stay with the stuff.
      - Make flexible code (kind of the opposite of "one to throw away", no?).

    Thanks to everyone – I think the major benefit here was realizing that I'm not alone in this experience.  I have, actually, already started coding, and some of the too-big things have fallen off naturally.  Since this question is closed, I'll accept the answer with the most votes as of today.  When/if it changes, I'll try to follow.

    Read the article

  • Pondering New Technology

    - by MOSSLover
    So I have been standing at the end of a fork for a year peering down a corner looking at this way and that way trying to figure out where I fit in.  I was so enthusiastic and excited about Silverlight when it came out.  It was this amazing awesome technology that had this really cool animation and webcam/multimedia piece.  I thought if I put my money on Silverlight it’s going somewhere, then HTML 5 came out and the wind shifted.  I realized times were changing. I have been working with web technologies since I was 15 years old.  Playing with html and javascript and even css back when it first came out.  In tech years 15 years is forever.  Things change so quickly and so often.  So I guess the question is where am I heading?  The answer is mobile technology.  For some reason I was resisting change and I have no idea why.  I guess I really wanted to see more than one player.  I didn’t quite feel that Microsoft was ready with Windows Phone 7.  It was a great start, but it just didn’t feel like they went all the way.  Now with Windows 8 it feels like they are at version 2.0.  It’s like hitting Silverlight 2.0 where they finally added the .Net bits.  The path is paved, but we don’t know where it’s leading.  Then we had 3.0, 4.0, and 5.0 to mature the technology into what it is today (man I’m hoping they are going to roll some of the cool bits into other tech if they don’t exist). Anyway, I’m on board, but I’m not buying a Windows Tablet just yet.  I was hoping for a 7 inch screen from Apple around $300 or just above and a 7 inch screen from the MS side around the same price.  What I got was the Apple side, but nothing from Microsoft.  I was pretty disappointed with the $500 market price on the RT version.  I realized Microsoft is close, but not quite where Apple is today.  Yes the devices have Office that they are offering, but the sticker is just too much for a first generation device.  If you guys remember correctly the first generation iPad was quite expensive.  I guess for a 1st generation device $500 is pretty good. So I guess what I’m trying to say is that I am shifting my focus entirely away from Silverlight and more towards mobile.  I will be doing a lot of postings on iOS, Windows 8, and Windows Phone 8 with SharePoint 2013.  Since I don’t have a tablet and don’t foresee myself buying one just yet it might be mostly on the phones for right now.  I want to do a bunch of testing on various devices on what needs to be done in apps on each device.  I might add a bit on porting code from one to the other.  I think it’s going to be a lot of fun and make things flow a little better for me.  In a way it’s kind of like Star Trek 6 where they talk about the Undiscovered Country.  I’m going to jump forward completely and see where I land. Technorati Tags: SharePoint 2013,Mobile,Windows 8,iOS

    Read the article

  • Surface V2.0

    - by Dennis Vroegop
    It's been quiet around here.  And the reason for that is that it's been quiet around Surface for a while.  Now, a lot of people assume that when a product team isn't making much noise, they must have stopped working on their product.  Remember the PDC keynote in 2010?  Just because WPF wasn't mentioned there, a lot of people got the idea that WPF was dead and abandoned in favor of Silverlight.  Of course, this couldn't be farther from the truth.  The same applies to Surface.  While we didn't hear much from the team in Redmond, they were busy putting together the next version of the platform, and at CES in January the world saw what they have been up to all along: Surface V2.0, as it's commonly known.  Of course, the product is still in development.  It's not here yet; we can't buy one yet.  However, more and more information is becoming available, and I think this is a good time to share with you what it's all about!

    The biggest change from an organizational point of view is that Microsoft decided to stop producing the hardware themselves.  Instead, they have formed a partnership with Samsung, who will manufacture the devices.  This means that you as a buyer get the benefits of a large, worldwide supplier with all the services they can offer.  Not that Microsoft didn't do that before, but since Surface wasn't a 'big' product it was sometimes hard to get to the right people.  The new device is officially called the "Samsung SUR40 for Microsoft Surface", which is quite a mouthful.  The software that runs the device is of course still coming from Microsoft.

    Let's dive into the technical specs (note: all of this is preliminary, it's still in the Alpha phase!):

      - Audio out: HDMI / stereo RCA / S/PDIF / 2x 3.5mm audio out jack
      - Brightness: 300 cd/m2
      - Communications: Gigabit Ethernet / 802.11 / Bluetooth
      - Contrast ratio: 1:1000
      - CPU: AMD Athlon X2 245e 2.9GHz dual core
      - Display resolution: Full HD 1080p, 1920x1080 / 16:9 aspect ratio
      - GPU: AMD Radeon HD 6750, 1GB GDDR5
      - HDD: 320 GB / 7200 RPM
      - HDMI in / HDMI out: yes
      - I/O ports: 4 USB, SD card reader
      - Operating system: embedded Windows 7 Professional, 64-bit
      - Panel size: 40" diagonal
      - Protection glass: Gorilla Glass
      - RAM: 4 GB DDR3
      - Weight (with standard legs): 70.0 kg / 154 lbs
      - Weight (standalone): 39.5 kg / 87 lbs
      - Height (without legs): 4 inches
      - Contact points recognized: > 50
      - Cool factor: extremely

    OK, the last point is not official, but I do think it needs to be there.

    Let's talk software.  As noted, it runs Windows 7 Professional 64-bit, which means you can run Visual Studio 2010 on it.  The software is going to be developed in WPF 4.0 with the additional Surface SDK 2.0.  It will contain all the things you've seen before plus some extras.  They have taken some steps to align it more with the Surface Toolkit, which you can download today, so if you do things right your software should be portable between a WPF 4.0 Windows 7 multi-touch app and the Surface V2 environment.  It still uses infrared to detect contacts, so in that respect not much has changed conceptually: we can still differentiate between a finger, a tag or a blob.  Of course, since the new platform has a much higher resolution (compared to the 1024x768 of the first version) you might need to look at your code again.  I've seen a lot of applications on Surface that assume the old resolution, and moving those to V2 is going to be some work.  To be honest: as I am under NDA I cannot disclose much about the new software besides what I have told you here, but trust me: it's going to blow people away.
Now, the biggest question for me is: when can I get one? Until we can, have a look here: Tags van Technorati: surface,samsung,WPF

    Read the article

  • regarding the Windows Phone 7 series, XNA and Visual Basic

    - by Chris Williams
    as long as we're talking about VB... I figured I would share this as well. Hi everyone, I'm about to express a sentiment that might ruffle a few feathers, but I think most of you know me well enough to know I love like accept VB for what it is and that what I'm about to say is with good intentions. (The rest of you, who don't know me, please take my word for it.) The world is full of VB developers, I was one of them for a long time. I think it's safe to assume that none of us are ignorant people who require handholding. We're working professionals, making a living by using our skills as developers. I'm also willing to bet that quite a few of us are fluent in C# as well as VB. It may not be your preferred language, but many of you can do it and you prove that nearly every day. Honestly, I don't know ANY developers or consultants that have only known ONE language ever. So it pains me greatly when I see the word "CAN'T" being tossed around like a crutch... as in "we CAN'T develop for the windows phone or we CAN'T develop XNA games." At MIX, Microsoft hath decreed that C# is the language of choice for developing for the Windows Phone 7. I think it's a safe bet that you won't see VB support if it isn't there already. (Just like XNA... which is up to version 4.0 by now.)  So what? (Yeah... I said it.) I think everyone here can agree that actual coding is only one part of software design and development. There is nothing stopping ANY of you from beginning the process of designing your killer phone app, writing up specs, requirements, doing UI design, workflow, mockups, storyboards, art, etc.... None of these things are language dependent. IF by the time you've got that stuff out of the way, and there's still no VB support, then start doing some rapid prototyping of your app in C# (I know, I know... heresy!)  You still have to spend time learning how the phone does things, what UI tricks do what, what paradigms make sense, how to use to accelerometer and the tilt and the multitouch functionality. I can guarantee you that time spent doing this is a great investment, no matter WHAT extension your code files have. Eventually, you may have a working prototype. IF by this time, there's STILL no VB support... fret not, you've made significant progress on your app. You've designed it, prototyped it, figured out how to use the phone specific features... so you might as well finish it and pat yourself on the back for learning something new... and possibly being first to market with your new app. I'll be happy to argue any and all of these points online or off with anyone who cares to do so, but there is one undeniable point that you simply can't argue:  Your potential customers do not care AT ALL what programming language you used to write the app they are about to purchase. They care that it works. If your biggest concern is being first to market, than stop complaining and get busy because you're running out of time and the 3000+ people who were at MIX certainly aren't waiting for you. They've already started working on their apps.

    Read the article

  • T-SQL Tuesday - the swag

    - by Rob Farley
    This month’s T-SQL Tuesday is hosted by Kendal van Dyke (@SQLDBA), and is on the topic of swag. He asks about the best SQL Server swag that we’ve ever received from a conference. I can’t say I ever focus on getting the swag at conferences, as I see some people doing. I know there are plenty of people that get around all the sponsors as soon as they’ve arrived, collecting whatever goodies they can, sometimes as token gifts for those at home, sometimes as giveaways for the user groups they attend. I remember a few years ago at my first PASS Summit, the SQLCAT team gave me a large pile of leftover SQL Server swag to give away to my user group – piles of branded things to stop your phone sliding off your car dashboard, and other things. The user group members thought it was great, and over the course of a few months, happily cleared me out of it all. I tend to consider swag to be something that you haven’t earned except by being at a conference, and there was no winning associated with it, it was simply a giveaway item at a sponsor booth. That means I don’t include the HP Mini laptop that was given away at TechEd Australia a few years ago to every attendee, or the SQL Server bag and Camelbak bottle that I was given as a thank-you for writing a guest blog post (which I use as my regular laptop bag and water bottle for work). I don’t even include the copy of Midtown Madness that I got as a door prize at my vey first TechEd event in 1999 (that was a really good game, and even meant that when I went to Chicago last year, I felt a strange familiarity about the place). I don’t want to include shirts in the mix either. I was given a nice SQL Server shirt about five years ago TechEd Australia. It’s a business shirt (buttons, cuffs, pocket on the chest), black with the SQL Server logo on it. It was such a nice shirt that I commented about it to the Product Marketing Manager for Australia (Christine, at the time), who unexpectedly arranged for me to get another one. That was certainly an improvement on the tent I was given at one of the MVP conference I attended. So when I consider these ‘rules’, two pieces of swag come to mind, and I think both were at PASS Summits (although I can’t be sure). One was a hand-warmer from HP, one of the “crystallisation-type” ones, which proved extremely popular when I got home, until one day when it didn’t survive being recharged – not overly SQL related, but still it was good swag. The other was an umbrella, from expressor, which was from the PASS Summit in 2010, my first PASS Summit. I remember it well – Blythe Morrow (now Gietz) (@blythemorrow) was working the booth, having stopped working for PASS some time before, but she’d been on my list of people to meet, as I’d had plenty of contact with her while she’d worked at PASS, my being a chapter leader and general volunteer. There had been an expressor dinner on one of the first evenings, which I’d been asked to be at, which is when I’d met lots of SQL people in person for the first time, including Ted Krueger (@onpnt), Jessica Moss (@jessicamoss) and Blythe. Anyway, at some point the next day I swung by their booth to say hello and thank them for the dinner, and Blythe says “Oh, we have the best swag – here!” and handed me an umbrella. And she was right. It’s excellent. @rob_farley

    Read the article

  • Required Parameters [SSIS Denali]

    - by jamiet
    SQL Server Integration Services (SSIS), in its 2005 and 2008 incarnations, expects you to set property values within your package at runtime using configurations.  SSIS developers tend to have rather a lot of issues with SSIS configurations; in this blog post I am going to highlight one of those problems and how it has been alleviated in SQL Server code-named Denali.

    A configuration is a property path/value pair that exists outside of a package, typically within SQL Server or in a collection of one or more configurations in a file called a .dtsConfig file.  Within the package one defines a pointer to a configuration that says to the package "When you execute, go and get a configuration value from this location", and if all goes well the package will fetch that configuration value as it starts to execute and you will see something like the following in your output log:

      Information: 0x40016041 at Package: The package is attempting to configure from the XML file "C:\Configs\MyConfig.dtsConfig".

    Unfortunately things DON'T always go well.  Perhaps the .dtsConfig file is unreachable, or the name of the SQL Server holding the configuration value has been defined incorrectly – any one of a number of things can go wrong.  In that circumstance you might see something like the following in your log output instead:

      Warning: 0x80012014 at Package: The configuration file "C:\Configs\MyConfig.dtsConfig" cannot be found. Check the directory and file name.

    The problem I want to draw attention to here, though, is that the package ignores the fact it can't find the configuration and executes anyway.  This is really, really bad, because the package will not be doing what it is supposed to do and, worse, if you have not isolated your environments you might not even know about it.  Can you imagine a package executing for months while all the time inserting data into the wrong server?  It sounds ridiculous, but I have absolutely seen this happen, and the root cause was that no-one picked up on configuration warnings like the one above.

    Happily, in SSIS code-named Denali this problem has gone away, as configurations have been replaced with parameters.  Each parameter has a property called 'Required':

    Any parameter with Required=True must have a value passed to it when the package executes; any attempt to execute the package without supplying one results in an error.  Here we see that error when attempting to execute using the SSMS UI, and similarly when executing using T-SQL:

      Msg 27184, Level 16, State 1, Procedure prepare_execution, Line 112
      In order to execute this package, you need to specify values for the required parameters.

    As you can see, SSIS code-named Denali has mechanisms built in to prevent the problem I described at the top of this blog post.  Marking a parameter as Required means that no package in that project can execute until a value for the parameter has been supplied.  This is a very good thing.  I am loath to make recommendations so early in the development cycle, but right now I'm thinking that all Project Parameters should have Required=True – certainly any that are used to define external locations should, anyway.

    @Jamiet
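    For completeness, here is the general shape of supplying that required parameter value when executing via T-SQL against the SSIS catalog.  Treat it as a sketch against the current Denali CTP – the folder, project, package and parameter names are placeholders, and the procedure signatures may yet change before release.

      DECLARE @execution_id BIGINT;

      -- Create an execution for the package (names are placeholders)
      EXEC SSISDB.catalog.create_execution
           @folder_name   = N'MyFolder',
           @project_name  = N'MyProject',
           @package_name  = N'MyPackage.dtsx',
           @execution_id  = @execution_id OUTPUT;

      -- Supply a value for the required parameter before starting
      EXEC SSISDB.catalog.set_execution_parameter_value
           @execution_id    = @execution_id,
           @object_type     = 20,               -- 20 = project parameter, 30 = package parameter
           @parameter_name  = N'ServerName',    -- placeholder parameter name
           @parameter_value = N'MYSERVER';

      -- Without the call above, starting the execution fails with Msg 27184
      EXEC SSISDB.catalog.start_execution @execution_id;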

    Read the article

  • Binding a select in a client template

    - by Bertrand Le Roy
    I recently got a question on one of my client template posts asking me how to bind a select tag's value to data in client templates.  I was surprised not to find anything on the web addressing the problem, so I thought I'd write a short post about it.  It really is very simple once you know where to look.  You just need to bind the value property of the select tag, like this:

      <select sys:value="{binding color}">

    If you do it from markup like here, you just need to use the sys: prefix.  It just works.  Here's the full source code for my sample page:

      <!DOCTYPE html>
      <html>
      <head>
          <title>Binding a select tag</title>
          <script src="http://ajax.microsoft.com/ajax/beta/0911/Start.js"
                  type="text/javascript"></script>
          <script type="text/javascript">
              Sys.require(Sys.scripts.Templates, function() {
                  var colors = [
                      "red", "green", "blue",
                      "cyan", "purple", "yellow"
                  ];
                  var things = [
                      { what: "object", color: "blue" },
                      { what: "entity", color: "purple" },
                      { what: "thing", color: "green" }
                  ];
                  Sys.create.dataView("#thingList", {
                      data: things,
                      itemRendered: function(view, ctx) {
                          Sys.create.dataView(
                              Sys.get("#colorSelect", ctx), {
                                  data: colors
                              });
                      }
                  });
              });
          </script>
          <style type="text/css">
              .sys-template {display: none;}
          </style>
      </head>
      <body xmlns:sys="javascript:Sys">
          <div>
              <ul id="thingList" class="sys-template">
                  <li>
                      <span sys:id="thingName"
                            sys:style-color="{binding color}">{{what}}</span>
                      <select sys:id="colorSelect"
                              sys:value="{binding color}"
                              class="sys-template">
                          <option sys:value="{{$dataItem}}"
                                  sys:style-background-color="{{$dataItem}}"
                                  >{{$dataItem}}</option>
                      </select>
                  </li>
              </ul>
          </div>
      </body>
      </html>

    This produces the following page: each of the items sees its color change as you select a different color in the drop-down.  Other details worth noting in this page are the use of the script loader to get the framework from the CDN, and the sys:style-background-color syntax to bind the background color style property from markup.  Of course, I've used a fair amount of custom ASP.NET Ajax markup in here, but everything could be done imperatively and with completely clean markup from the itemRendered event using Sys.bind.

    Read the article

  • Free tools versus paid tools.

    - by Dennis Vroegop
    We live in a strange world. Information should be free. Tools should be free. Software should be free (and I mean free as in free beer, not as in free speech). Of course, since I make my living (and pay my mortgage) by writing software I tend to disagree. Or rather: I want to get paid for the things I do in the daytime. Next to that I also spend time on projects I feel are valuable for the community, which I do for free. The reason I can do that is because I get paid enough in the daytime to afford that time. It gives me a good feeling, I help others, and it's fun to do. But the baseline is: I get paid to write software. I am sure this goes for a lot of other developers. We get paid for what we do during the daytime and spend our free time giving back. So why does everyone always make a fuss when a company suddenly starts to charge for software? To me, this seems like a very reasonable decision. Companies need money: they have staff to pay, buildings to rent, coffee to buy, etc. All of this doesn't come free, so it makes sense that they charge their customers for the things they produce. I know there's a very big Open Source market out there, where companies give away (parts of) their software and get revenue out of the services they provide. But this doesn't work if your product doesn't need services. If you build a great tool that is very easy to use, and you give it away for free, you won't get any money by selling services that no user of your tool really needs. So what do you do? You charge money for your tool. It's either that or stop developing the tool and turn to other, more profitable projects. Like it or not, that's simple economics at work. You have something other people want, so you charge them for it. This week it was announced that what I believe is the most used tool for .net developers (besides Visual Studio of course), namely Red Gate's .NET Reflector, will stop being a free tool. They will charge you $35 for the next version. Suddenly Twitter was on fire and everyone was mad about it. But why? The tool is downloaded by so many developers that it must be valuable to them. I know of no serious .net developer who hasn't got it on his or her machine. So apparently the tool gives them something they need. So why do they expect it to be free? There are developers out there maintaining and extending the tool, building new and better versions of it. And the price? $35 doesn't seem like much. If I think of the time the tool has saved me, the 35 dollars would be earned back within a day. If by spending this amount of money I can rely on great software that helps me do my job better and faster, I have no problem spending it. I know that there is a great team behind it (the Red Gate tools are a must-have when developing SQL systems, for instance), and I believe they are well within their rights to charge for it. So.. there you have it. This is, of course, my opinion. You may think otherwise. Please let me know in the comments what you think! Technorati Tags: redgate,reflector,opensource

    Read the article

  • Bose USB audio: crackling popping sound, eventually die

    - by Richard Barrett
    I've been trying to troubleshoot this issue for a while now. Any help would be much appreciated. I'm having trouble getting my Bose "Companion 5 multimedia speakers" working with my installation of Ubuntu 12.04 (link to Bose product here: http://www.bose.com/controller?url=/shop_online/digital_music_systems/computer_speakers/companion_5/index.jsp ). The issue seems to be low level (not just Ubuntu). What happens: When I boot into Ubuntu, I can get Rhythmbox to play ok. However, if I try anything else (an .avi file, a webpage, or Clementine player with mp3 files) I get crackling, popping, or choppy sounds. If I move the mouse around, especially if it seems graphics-intensive, the problem gets worse (more crackling noises). The more taxing it appears to be, the more likely it is that the sound will just die altogether until I reboot. For some reason the videos at www.bloomberg.com seem especially bad for it (my sound normally goes dead in under 45 seconds and won't work until reboot). Both my desktop running Ubuntu 12.04 and my laptop (running the same) have the same crackling problem. Troubleshooting so far: A friend of mine who knows Linux well tried to solve it for me without any luck. He took PulseAudio out of the equation, but still had the problem just using ALSA. Among the many things he tried was adjusting the latency, but that didn't help either. I've also tried things like adjusting the USB device setting in the config file from -2 to -1 so that it will use my USB sound, and I also commented out the lines that would stop that. These don't do anything. (That really seems like it's for someone who is getting no sound at all, so it's not surprising this won't work.) My friend's laptop, running Arch Linux, could play my Bose USB speakers without any problems. I also tried setting my daemon.conf file to use 6 channels (based on this http://lotphelp.com/lotp/configure-ubuntu-51-surround-sound ) but that didn't work either. I recently used a DVD to boot into Ubuntu Studio 12.04 (because it uses a low-latency audio kernel) and this happened: I got perfect sound for a minute or two. When I started moving windows around while sound was playing, the sound died again. Perhaps more interesting: There is a headphone-out jack on the Bose system. When I use it, the audio is perfect for all applications (even the deadly bloomberg.com videos with .avi playing at the same time and moving around windows). Also, there is an audio-in jack on the Bose system. I can use a male-to-male mini jack to go from my soundcard's output to the Bose input and then all sound works perfectly. However, it still requires the Bose to be plugged in via USB; otherwise I lose all sound. Any thoughts? Any suggestions for troubleshooting? (Or any suggestions for somewhere else to post to solve this?) Any logs or other files I can provide to help someone help me work this out? Your help is much appreciated! Rick BTW: I sometimes get people posting responses like "My Bose USB system works great with Ubuntu 12.04," without any more details. Is there anything I should ask such people to narrow down my problem? (It's kind of annoying to hear such a response because it doesn't help solve my problem.)
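    For readers trying to follow the two config experiments mentioned above, this is a sketch of where those settings usually live on Ubuntu 12.04; the lines are illustrative of what was described, not taken from the poster's machine.

```
# /etc/modprobe.d/alsa-base.conf -- stock Ubuntu ships this line, which pushes
# USB audio to the last card slot; the "-2 to -1" experiment above means
# editing (or commenting out) this line so the Bose USB device can become the
# default card.
options snd-usb-audio index=-2

# /etc/pulse/daemon.conf -- the 6-channel experiment corresponds to setting:
default-sample-channels = 6
```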

    Read the article

  • Developer Profile: Marcelo Quinta

    - by Tori Wieldt
    As the Java developer community lead for Oracle, I find the best part of my job is going to conferences and meeting Java developers. I've had the pleasure of meeting men and women who are smart, fun and passionate about Java—they make the Java community happen. The current issue of Java Magazine provides profiles of other young Java developers around the world. Subscribe to read them! Marcelo Quinta Age: 24 Occupation: Professor, Federal University of Goias Location: Goias, Brazil Twitter: @mrquinta [Photo: Marcelo (white polo shirt, center) and class] OTN: When did you realize that you were good at programming? When I was in graduate school, I developed a Java system that worked out the logic of getting the maximum coverage using the fewest resources (for example, the minimum number of soldiers, and their positions, needed to cover a battlefield). It may not seem difficult, but it's a hard problem to solve mathematically. Here I was, a freshman, coming up with an app that "solved" it. Some Master's students use my software today. It was then that I began to believe in what I could do. OTN: What most inspires you about programming? I'm really inspired by the challenges and tension that come from solving complicated problems. Lately I've been building a new system focused on education and digital inclusion, and it was very gratifying to see it working and producing results. I felt useful to the community. OTN: What are some things you would like to accomplish using Java? Java is a very strong platform, and that gives us the power to develop applications for different devices and purposes, from home automation with small microcontrollers to systems on big servers. I would like to build more systems that integrate with people's lives and different business contexts, from PCs to cell phones and tablets, ubiquitously. I think IT has reached a level where the current challenge is to make systems that leverage existing technologies that are present in daily life. Java gives us a very interesting set of options to put that into practice, especially in systems that require more robustness. OTN: What technical insights into Java technology have been most important to you? I have really enjoyed the way that Java has evolved with Oracle, with new features added, many of which were suggested by the community. Java 7 came with substantial improvements in the language syntax, and it seems that Java 8 takes it even further. I also made some applications in JavaFX and liked the new version. The Java GUI story is now at a higher level than much of what is offered out there. I saw some JavaFX prototypes running on modern tablets and I got excited. OTN: What would you like to be doing 10 years from now? I want my work to make a difference for individuals or an institution. It would be interesting to still be improving one of the systems that I am building today. Recently I've been mixing my hobbies and work, playing with Arduino and home automation. The JHome project, winner of the Duke's Choice Award in 2011, is very interesting to me. OTN: Do you listen to music when you write code? If so, what kind? Absolutely! I usually listen to electronic music (Prodigy, Fatboy Slim and Paul Oakenfold), rock (Metallica, the Strokes, The Black Keys) and a bit of local alternative music. I live in Goiânia, "the Brazilian Seattle," and I take full advantage of it. OTN: What do you do when you're not programming? I like to play guitar and to fish. Last year I sold my economy car and bought an old jeep. Some people called me crazy, but since then I've been having a great time and having adventures on the backroads of Brazil. Once I broke my glasses in a funny game involving my car's suspension and the airbags. OTN: Does your girlfriend think you are crazy? Crazy is someone who doesn't have the courage to do strange things! My girlfriend likes my style. =D Subscribe to the free Java Magazine to read profiles of other young Java developers. Visit the Java channel on YouTube to see a video of Marcelo in action.

    Read the article

  • MVC Portable Areas &ndash; Deploying Static Files

    - by Steve Michelotti
    This is the second post in a series related to build and deployment considerations as I've been exploring MVC Portable Areas: #1 – Using Web Application Project to build portable areas #2 – Conventions for deploying portable area static files #3 – Portable area static files as embedded resources As I've been digging more into portable areas, one of the things I've liked best is the deployment story, which enables my *.aspx and *.ascx pages to be compiled into the assembly as embedded resources rather than having to maintain all those files separately. In traditional web forms, that was always the thing that prevented developers from utilizing *.ascx user controls across projects (see this post for using portable areas in web forms). However, though the aspx pages are embedded, the supporting static files (e.g., images, css, javascript) are *not*. Most of the demos available online today tend to gloss over this issue and focus solely on the aspx side of things. But to create truly robust portable areas, it's important to have a good story for these supporting files as well. I've been working with two different approaches so far (of course I'd really like to hear if other people are using alternatives). Scenario For the approaches below, the scenario really isn't that important. It could be something as trivial as this partial view: 1: <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %> 2: <img src="<%: Url.Content("~/images/arrow.gif") %>" /> Hello World! The point is that there needs to be careful consideration for *any* scenario that links to an external file such as an image, *.css, *.js, etc. The example shown above uses the Url.Content() method to convert to an application-relative path. But this method won't necessarily work, depending on how you deploy your portable area. One approach to address this issue is to build your portable area project with MSDeploy/WebDeploy so that it is packaged properly before incorporating it into the host application. All of the *.cs files are removed and the project is ready for xcopy deployment – however, I do *not* need the "Views" folder since all of the markup has been compiled into the assembly as embedded resources. Now in the host application we create a folder called "Modules" and deploy any portable areas as sub-folders under that. At this point we can add a simple assembly reference to the Widget1.dll sitting in the Modules\Widget1\bin folder. I can now render the portable area in my view like any other area. However, the problem is that the view renders incorrectly: it couldn't find arrow.gif because it looked for /images/arrow.gif when the file was *actually* located at /Modules/Widget1/images/arrow.gif. One solution is to make the physical location of the portable area configurable from the perspective of the host, like this: 1: <appSettings> 2: <add key="Widget1" value="Modules\Widget1"/> 3: </appSettings> Using the <appSettings> section is a little cheesy but it could be better formalized into its own config section. In fact, if you were willing to rely on conventions (e.g., "Modules\{areaName}") then the config could be eliminated completely.
    With this config in place, we could create our own Html helper method called Url.AreaContent() that "wraps" the OOTB Url.Content() method while simply pre-pending the area location path: 1: public static string AreaContent(this UrlHelper urlHelper, string contentPath) 2: { 3: var areaName = (string)urlHelper.RequestContext.RouteData.DataTokens["area"]; 4: var areaPath = (string)ConfigurationManager.AppSettings[areaName]; 5: 6: return urlHelper.Content("~/" + areaPath + "/" + contentPath); 7: } With these two items in place, we just change our Url.Content() call to Url.AreaContent() like this: 1: <img src="<%: Url.AreaContent("/images/arrow.gif") %>" /> Hello World! and the arrow.gif now renders correctly. Since we're just using our own Url.AreaContent() rather than the built-in Url.Content(), this solution works for images, *.css, *.js, or any externally referenced files. Additionally, any images referenced inside a CSS file will work provided they are relative references and not absolute references. An alternative to this approach is to build the static files into the assembly as embedded resources themselves. I'll explore this in another post (linked at the top). A convention-based variant of the helper is sketched below.
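    Following on from the note about relying on conventions, here is a minimal sketch (not from the original post) of what the helper might look like if every portable area were assumed to live under ~/Modules/{areaName}, removing the need for the <appSettings> entry:

```csharp
using System.Web.Mvc;

public static class PortableAreaUrlExtensions
{
    // Hypothetical convention-based variant: resolves content for the current
    // area under ~/Modules/{areaName} instead of reading the path from config.
    public static string AreaContent(this UrlHelper urlHelper, string contentPath)
    {
        var areaName = (string)urlHelper.RequestContext.RouteData.DataTokens["area"];

        // Trim any leading "~" or "/" so the segments concatenate cleanly.
        return urlHelper.Content("~/Modules/" + areaName + "/" + contentPath.TrimStart('~', '/'));
    }
}
```

    The trade-off is that the host can no longer relocate a module without breaking its static links, which is exactly the flexibility the config entry buys you.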

    Read the article

  • T-SQL in Chicago – the LobsterPot teams with DataEducation

    - by Rob Farley
    In May, I’ll be in the US. I have board meetings for PASS at the SQLRally event in Dallas, and then I’m going to be spending a bit of time in Chicago. The big news is that while I’m in Chicago (May 14-16), I’m going to teach my “Advanced T-SQL Querying and Reporting: Building Effectiveness” course. This is a course that I’ve been teaching since the 2005 days, and have modified over time for 2008 and 2012. It’s very much my most popular course, and I love teaching it. Let me tell you why. For years, I wrote queries and thought I was good at it. I was a developer. I’d written a lot of C (and other, more fun languages like Prolog and Lisp) at university, and then got into the ‘real world’ and coded in VB, PL/SQL, and so on through to C#, and saw SQL (whichever database system it was) as just a way of getting the data back. I could write a query to return just about whatever data I wanted, and that was good. I was better at it than the people around me, and that helped. (It didn’t help my progression into management, then it just became a frustration, but for the most part, it was good to know that I was good at this particular thing.) But then I discovered the other side of querying – the execution plan. I started to learn about the translation from what I’d written into the plan, and this impacted my query-writing significantly. I look back at the queries I wrote before I understood this, and shudder. I wrote queries that were correct, but often a long way from effective. I’d done query tuning, but had largely done it without considering the plan, just inferring what indexes would help. This is not a performance-tuning course. It’s focused on the T-SQL that you read and write. But performance is a significant and recurring theme. Effective T-SQL has to be about performance – it’s the biggest way that a query becomes effective. There are other aspects too though – such as using constructs better. For example – I can write code that modifies data nicely, but if I haven’t learned about the MERGE statement and the way that it can impact things, I’m missing a few tricks. If you’re going to do this course, a good place to be is the situation I was in a few years before I wrote this course. You’re probably comfortable with writing T-SQL queries. You know how to make a SELECT statement do what you need it to, but feel there has to be a better way. You can write JOINs easily, and understand how to use LEFT JOIN to make sure you don’t filter out rows from the first table, but you’re coding blind. The first module I cover is on Query Execution. Take a look at the Course Outline at Data Education’s website. The first part of the first module is on the components of a SELECT statement (where I make you think harder about GROUP BY than you probably have before), but then we jump straight into Execution Plans. Some stuff on indexes is in there too, as is simplification and SARGability. Some of this is stuff that you may have heard me present on at conferences, but here you have me for three days straight. I’m sure you can imagine that we revisit these topics throughout the rest of the course as well, and you’d be right. In the second and third modules we look at a bunch of other aspects, including some of the T-SQL constructs that lots of people don’t know, and various other things that can help your T-SQL be, well, more effective. I’ve had quite a lot of people do this course and be itching to get back to work even on the first day. 
That’s not a comment about the jokes I tell, but because people want to look at the queries they run. LobsterPot Solutions is thrilled to be partnering with Data Education to bring this training to Chicago. Visit their website to register for the course. @rob_farley

    Read the article

  • How to create (via installer script) a task that will install my bash script so it runs on DE startup?

    - by MountainX
    I've been reading for the last couple of hours about Upstart, .xinitrc, .xsessions, rc.local, /etc/init.d/, /etc/xdg/autostart, @reboot in crontab and so many other things that I'm totally confused! Here is my bash script. It should start/run after the desktop environment is started and it should continue to run at all times until logout/shutdown. It should start again on reboot. Any time the DE is running, it should run. #!/bin/bash while true; do if [[ -s ~/.updateNotification.txt ]]; then read MSG < ~/.updateNotification.txt kdialog --title 'The software has been updated' --msgbox "$MSG" cat /dev/null > ~/.updateNotification.txt fi sleep 3600 done exit 0 I know zero about using Upstart, but I understand that Upstart is one way to handle this. I'll consider other approaches but most of the things I've been reading about are too complex for me. Furthermore, I can't figure out which approach will meet my requirements (which I'll detail below). There are two steps in my question: How to automatically start the script above, as described above. How to "install" that Upstart task via a bash script (i.e., my "installer"). I assume (or hope) that step 2 is almost trivial once I understand step 1. I have to support all flavors of Ubuntu desktops. Therefore, the kdialog call above will be replaced. I'm considering easybashgui for this. (Or I could use zenity on GNOME DEs.) My requirements are: The setup process (installation) must be done via a bash script. I cannot use the GUI method described in the Ubuntu doc AddingProgramToSessionStartup, for example. I must be able to script/automate the setup (installing) process using bash. Currently, it is as simple as having the bash installer script copy the above script into /home/$USER/.kde/Autostart/ The setup process must be universal across Ubuntu derivatives including Unity, KDE and GNOME desktops. The same setup script (installer) should run on Linux Mint, Kubuntu, Xubuntu (basically any flavor of Ubuntu and major derivatives such as Linux Mint). For example, we cannot continue to put a script file in /home/$USER/.kde/Autostart/ because that exists only on KDE. The above script should work for each of the limited flavors we use. Hence our interest in using easybashgui instead of kdialog or zenity. See below. The installed monitoring script should only be started after the desktop is started since it will display a GUI message to the user if the update is found. The monitoring script (above) should run without root privileges, of course. But the installer (bash script) can be run as root. I'm not a real developer or a sysadmin. This is a part-time volunteer thing for me, so it needs to be easy/simple. I can write bash scripts and I can program a little, but I know nothing about Upstart or systemd, for example. And, unfortunately, my job doesn't give me time to become an expert on init systems or much of anything else related to development and sysadmin. So I have to stick with simple solutions. The easybashgui version of the script might look like this: #!/bin/bash source easybashgui while true; do if [[ -s ~/.updateNotification.txt ]]; then read MSG < ~/.updateNotification.txt message "$MSG" cat /dev/null > ~/.updateNotification.txt fi sleep 3600 done exit 0
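    One common way to meet the cross-desktop requirement (offered here as a sketch, not as the canonical answer) is to have the installer drop an XDG autostart .desktop entry, which Unity, GNOME, KDE and Xfce all honour; per-user installs go in ~/.config/autostart, system-wide installs in /etc/xdg/autostart. The paths and file names below are illustrative.

```bash
#!/bin/bash
# Hypothetical installer sketch: copies the monitoring script into place and
# registers it to start with any XDG-compliant desktop session.
set -e

INSTALL_DIR="$HOME/.local/bin"
AUTOSTART_DIR="$HOME/.config/autostart"   # use /etc/xdg/autostart (and /usr/local/bin) for all users

mkdir -p "$INSTALL_DIR" "$AUTOSTART_DIR"

# The monitoring script shown in the question, saved as updateNotification.sh.
cp updateNotification.sh "$INSTALL_DIR/updateNotification.sh"
chmod +x "$INSTALL_DIR/updateNotification.sh"

# The .desktop entry is what makes the desktop environment launch it at login.
cat > "$AUTOSTART_DIR/update-notification.desktop" <<EOF
[Desktop Entry]
Type=Application
Name=Update notification monitor
Exec=$INSTALL_DIR/updateNotification.sh
X-GNOME-Autostart-enabled=true
EOF
```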

    Read the article

  • Seizing the Moment with Mobility

    - by Kathryn Perry
    A guest post by Hernan Capdevila, Vice President, Oracle Fusion Apps Mobile devices are forcing a paradigm shift in the workplace – they're changing the way businesses can do business and the type of cultures they can nurture. As our customers talk about their mobile needs, we hear them saying they want instant-on access to enterprise data so workers can be more effective at their jobs anywhere, anytime. They also are interested in being more cost-effective from an IT point of view. The mobile revolution – with the idea of BYOD (bring your own device) – has added an interesting dynamic because previously IT was driving the employee device strategy and ecosystem. That's been turned on its head with the consumerization of IT. Now employees are figuring out how to use their personal devices for work purposes and IT has to figure out how to adapt. Blurring the Lines between Work and Personal Life My vision of where businesses will be five years from now is that our work lives and personal lives will be more closely interwoven. In turn, enterprises will have to determine how to make employees' work lives fit more into the fabric of their personal lives. And personal devices like smartphones are going to drive significant business value because they let us accomplish things very incrementally. I can be sitting on a train or in a taxi and be productive. At the end of any meeting, I can capture ideas and tasks or follow up with people in real time. Mobile devices enable this notion of seizing the moment – capitalizing on opportunities that might otherwise have slipped away because we're not connected. For the industry shapers out there, this is game changing. The lean and agile workforce is definitely the future. This notion of the board sitting down with the executive team to lay out strategic objectives for a three- to five-year plan, bringing in HR to determine how they're going to staff the strategic activities, kicking off the execution, and then revisiting the plan in three to five years to create another three- to five-year plan is yesterday's model. Businesses that continue to approach innovation in that way are in the dinosaur age. Today it's about incremental planning and incremental execution, which requires a lot of cohesion and synthesis within the workforce. There needs to be this interweaving notion within the workforce about how ideas cascade down, how people engage, how they stay connected, and how insights are shared. How to Survive and Thrive in Today's Marketplace The notion of Facebook isn't new. We lived it in the pre-Internet days with America Online and Prodigy – Facebook is just the renaissance of these services in a more viral and pervasive way. And given the trajectory of the consumerization of IT with people bringing their personal tooling to work, the enterprise has no option but to adapt. The sooner businesses realize this from a top-down point of view, the sooner they will be able to really drive significant innovation and adapt to the marketplace. There are a small number of companies right now (I think it's closer to 20% than 80%, but the number is expanding) that are able to really innovate in this incremental marketplace. So from a competitive point of view, there's no choice but to be social and stay connected. By far the majority of users on Facebook and LinkedIn are mobile users – people on iPhones, smartphones, Android phones, and tablets. It's not the couch people, right? It's the on-the-go people – those people at the coffee shops.
    When you're sitting at your desk at a big desktop computer, you typically have better things to do than to be on Facebook. This is a topic I'm extremely passionate about because I think mobile devices are game changing. Mobility delivers significant value to businesses – it also brings dramatic simplification from a functional point of view and transforms our work-life experience. Hernan Capdevila, Vice President, Oracle Applications Development

    Read the article

  • maintaining a growing, diverse codebase with continuous integration

    - by Nate
    I am in need of some help with the philosophy and design of a continuous integration setup. Our current CI setup uses buildbot. When I started out designing it, I inherited (well, not strictly, as I was involved in its design a year earlier) a bespoke CI builder that was tailored to run the entire build at once, overnight. After a while, we decided that this was insufficient, and started exploring different CI frameworks, eventually choosing buildbot. One of my goals in transitioning to buildbot (besides getting to enjoy all the whiz-bang extras) was to overcome some of the inadequacies of our bespoke nightly builder. Humor me for a moment, and let me explain what I have inherited. The codebase for my company is almost 150 unique C++ Windows applications, each of which has dependencies on one or more of a dozen internal libraries (and many on 3rd party libraries as well). Some of these libraries are interdependent, and have dependent applications that (while they have nothing to do with each other) have to be built with the same build of that library. Half of these applications and libraries are considered "legacy" and unportable, and must be built with several distinct configurations of the IBM compiler (for which I have written unique subclasses of Compile), and the other half are built with Visual Studio. The code for each compiler is stored in two separate Visual SourceSafe repositories (which I am simply handling using a bunch of ShellCommands, as there is no support for VSS). Our original nightly builder simply took down the source for everything, and built stuff in a certain order. There was no way to build only a single application, or pick a revision, or to group things. It would launch virtual machines to build a number of the applications. It wasn't very robust, it wasn't distributable. It wasn't terribly extensible. I wanted to be able to overcome all of these limitations in buildbot. The way I did this originally was to create entries for each of the applications we wanted to build (all 150ish of them), then create triggered schedulers that could build various applications as groups, and then subsume those groups under an overall nightly build scheduler. These could run on dedicated slaves (no more virtual machine chicanery), and if I wanted I could simply add new slaves. Now, if we want to do a full build out of schedule, it's one click, but we can also build just one application should we so desire. There are four weaknesses of this approach, however. One is our source tree's complex web of dependencies. In order to simplify config maintenance, all builders are generated from a large dictionary. The dependencies are retrieved and built in a not-terribly robust fashion (namely, keying off of certain things in my build-target dictionary). The second is that each build has between 15 and 21 build steps, which makes builds hard to browse and look at in the web interface, and since there are around 150 columns, the page takes forever to load (think from 30 seconds to multiple minutes). Thirdly, we no longer have autodiscovery of build targets (although, as much as one of my coworkers harps on me about this, I don't see what it got us in the first place). Finally, the aforementioned coworker likes to constantly bring up the fact that we can no longer perform a full build on our local machine (though I never saw what that got us, either, considering that it took three times as long as the distributed build; I think he is just paranoically phobic of ever breaking the build).
Now, moving to new development, we are starting to use g++ and subversion (not porting the old repository, mind you - just for the new stuff). Also, we are starting to do more unit testing ("more" might give the wrong picture... it's more like any), and integration testing (using python). I'm having a hard time figuring out how to fit these into my existing configuration. So, where have I gone wrong philosophically here? How can I best proceed forward (with buildbot - it's the only piece of the puzzle I have license to work on) so that my configuration is actually maintainable? How do I address some of my design's weaknesses? What really works in terms of CI strategies for large, (possibly over-)complex codebases?
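    For readers unfamiliar with the pattern described above, here is a rough sketch (buildbot 0.8.x-era API; the target names, commands and slave names are invented) of builders generated from a dictionary and grouped behind Triggerable schedulers that a nightly umbrella builder fires:

```python
from buildbot.process.factory import BuildFactory
from buildbot.steps.shell import ShellCommand
from buildbot.steps.trigger import Trigger
from buildbot.schedulers.triggerable import Triggerable
from buildbot.config import BuilderConfig

# A stand-in for the "large dictionary" the question describes.
BUILD_TARGETS = {
    'AppA': {'compiler': 'msvc', 'deps': ['LibCore']},
    'AppB': {'compiler': 'ibm',  'deps': ['LibCore', 'LibLegacy']},
}

builders, schedulers = [], []
for name, target in BUILD_TARGETS.items():
    factory = BuildFactory()
    # Real steps would key off target['compiler'] and target['deps'].
    factory.addStep(ShellCommand(name='checkout', command=['checkout.bat', name]))
    factory.addStep(ShellCommand(name='build', command=['build.bat', name]))
    builders.append(BuilderConfig(name=name, slavenames=['slave1'], factory=factory))
    schedulers.append(Triggerable(name='trig-' + name, builderNames=[name]))

# The nightly umbrella builder just triggers the per-application schedulers;
# a timed Nightly scheduler pointing at 'nightly-full' would kick it off.
nightly = BuildFactory()
nightly.addStep(Trigger(schedulerNames=['trig-' + n for n in BUILD_TARGETS],
                        waitForFinish=True))
builders.append(BuilderConfig(name='nightly-full', slavenames=['slave1'], factory=nightly))

# In master.cfg these would be assigned to c['builders'] and c['schedulers'].
```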

    Read the article
