Search Results

Search found 7384 results on 296 pages for 'upk team'.

Page 71/296 | < Previous Page | 67 68 69 70 71 72 73 74 75 76 77 78  | Next Page >

  • Go Big or Go Special

    - by Ajarn Mark Caldwell
    Watching Shark Tank tonight and the first presentation was by Mango Mango Preserves and it highlighted an interesting contrast in business trends today and how to capitalize on opportunities.  <Spoiler Alert> Even though every one of the sharks was raving about the product samples they tried, with two of them going for second and third servings, none of them made a deal to invest in the company.</Spoiler>  In fact, one of the sharks, Kevin O’Leary, kept ripping into the owners with statements to the effect that he thinks they are headed over a financial cliff because he felt their costs were way out of line and would be their downfall if they didn’t take action to radically cut costs. He said that he had previously owned a jams and jellies business and knew the cost ratios that you had to have to make it work.  I don’t doubt he knows exactly what he’s talking about and is 100% accurate…for doing business his way, which I’ll call “Go Big”.  But there’s a whole other way to do business today that would be ideal for these ladies to pursue. As I understand it, based on his level of success in various businesses and the fact that he is even in a position to be investing in other companies, Kevin’s approach is to go mass market (Go Big) and make hundreds of millions of dollars in sales (or something along that scale) while squeezing out every ounce of cost that you can to produce an acceptable margin.  But there is a very different way of making a very successful business these days, which is all about building a passionate and loyal community of customers that are rooting for your success and even actively trying to help you succeed by promoting your product or company (Go Special).  This capitalizes on the power of social media, niche marketing, and The Long Tail.  One of the most prolific writers about capitalizing on this trend is Seth Godin, and I hope that the founders of Mango Mango pick up a couple of his books (probably Purple Cow and Tribes would be good starts) or at least read his blog.  I think the adoration expressed by all of the sharks for the product is the biggest hint that they have a remarkable product and that they are perfect for this type of business approach. Both are completely valid business models, and it may certainly be that the scale at which Kevin O’Leary wants to conduct business where he invests his money is well beyond the long tail, but that doesn’t mean that there is not still a lot of money to be made there.  I wish them the best of luck with their endeavors!

    Read the article

  • The internal storage of a DATETIME value

    - by Peter Larsson
    SELECT  [Now],
            BinaryFormat,
            SUBSTRING(BinaryFormat, 1, 4) AS DayPart,
            SUBSTRING(BinaryFormat, 5, 4) AS TimePart,
            CAST(SUBSTRING(BinaryFormat, 1, 4) AS INT) AS [Days],
            DATEADD(DAY, CAST(SUBSTRING(BinaryFormat, 1, 4) AS INT), 0) AS [Today],
            CAST(SUBSTRING(BinaryFormat, 5, 4) AS INT) AS [Ticks],
            DATEADD(MILLISECOND, 1000.E / 300.E * CAST(SUBSTRING(BinaryFormat, 5, 4) AS INT), 0) AS Peso
    FROM    (
                SELECT  GETDATE() AS [Now],
                        CAST(GETDATE() AS BINARY(8)) AS BinaryFormat
            ) AS d

    Read the article

  • Improve your Application Performance with .NET Framework 4.0

    Nice article on CodeGuru. The processors we use today are quite different from those of just a few years ago, as most processors now provide multiple cores and/or multiple threads. With multiple cores and/or threads we need to change how we tackle problems in code. Yes, we can still continue to write code that performs an action in a top-down fashion to complete a task. This approach will continue to work; however, you are not taking advantage of the extra processing power available. The best way to take advantage of the extra cores prior to .NET Framework 4.0 was to create threads and/or utilize the ThreadPool. For many developers, utilizing Threads or the ThreadPool can be a little daunting. The .NET 4.0 Framework drastically simplified the process of utilizing the extra processing power through the Task Parallel Library (TPL). This article covers the following topics: “Data Parallelism”, “Parallel LINQ (PLINQ)” and “Task Parallelism”.

    Read the article

  • How to read values from RESX file in ASP.NET using ResXResourceReader

    Here is a method which returns the value for a particular key in a given resource file. The method below assumes resourceFileName is the resource file name and key is the key for which the value has to be retrieved.

        // Requires System.Resources (ResXResourceReader) and System.Collections (DictionaryEntry).
        public static string ReadValueFromResourceFile(String resourceFileName, String key)
        {
            String _value = String.Empty;
            ResXResourceReader _resxReader = new ResXResourceReader(
                String.Format("{0}{1}\\{2}",
                    System.AppDomain.CurrentDomain.BaseDirectory.ToString(),
                    StringConstants.ResourceFolderName,
                    resourceFileName));

            foreach (DictionaryEntry _item in _resxReader)
            {
                if (_item.Key.Equals(key))
                {
                    _value = _item.Value.ToString();
                    break;
                }
            }
            return _value;
        }

    Read the article

  • Google I/O 2010 - Run corp apps on App Engine? Yes we do.

    Google I/O 2010 - Run corporate applications on Google App Engine? Yes we do. App Engine, Enterprise 201. Speakers: Ben Fried, Irwin Boutboul, Justin McWilliams, Matthew Simmons. Hear Google CIO Ben Fried and his team of engineers describe how Google builds on App Engine. If you're interested in building corp apps that run on Google's cloud, this team has been doing exactly that. Learn how these teams have been able to respond more quickly to business needs while reducing operational burden. For all I/O 2010 sessions, please go to code.google.com/events/io/2010/sessions.html. From: GoogleDevelopers. Time: 55:53.

    Read the article

  • Using LINQ Lambda Expressions to Design Customizable Generic Components

    LINQ makes code easier to write and maintain by abstracting the data source. It provides a uniform way to handle widely diverse data structures within an application. LINQ’s Lambda syntax is clever enough to even allow you to create generic building blocks with hooks, into which you can inject arbitrary functions. Michael Sorens explains, and demonstrates with examples.

    Read the article

  • Database Mirroring on SQL Server Express Edition

    - by Most Valuable Yak (Rob Volk)
    Like most SQL Server users I'm rather frustrated by Microsoft's insistence on making the really cool features only available in Enterprise Edition.  And it really doesn't help that they changed the licensing for SQL 2012 to be core-based, so now it's like 4 times as expensive!  It almost makes you want to go with Oracle.  That, and a desire to have Larry Ellison do things to your orifices. And since they've introduced Availability Groups, and marked database mirroring as deprecated, you'd think they'd make mirroring available in all editions.  Alas…they don't…officially anyway.  Thanks to my constant poking around in places I'm not "supposed" to, I've discovered the low-level code that implements database mirroring, and found that it's available in all editions! It turns out that the query processor in all SQL Server editions prepends a simple check before every edition-specific DDL statement:

        IF CAST(SERVERPROPERTY('Edition') AS NVARCHAR(MAX)) NOT LIKE '%e%e%e% Edition%'
            PRINT 'Lame'
        ELSE
            PRINT 'Cool'

    If that statement returns true, it fails. (The PRINT statements are just placeholders.)  Go ahead and test it on Standard, Workgroup, and Express editions compared to an Enterprise or Developer edition instance (which support everything). Once again thanks to Argenis Fernandez (b | t) and his awesome sessions on using Sysinternals, I was able to watch the exact process SQL Server performs when setting up a mirror.  Surprisingly, it's not actually implemented in SQL Server!  Some of it is, but that's something of a smokescreen; the real meat of it is simple filesystem primitives. The NTFS filesystem supports links, both hard links and symbolic, so that you can create two entries for the same file in different directories and/or different names.  You can create them using the MKLINK command in a command prompt:

        mklink /D D:\SkyDrive\Data D:\Data
        mklink /D D:\SkyDrive\Log D:\Log

    This creates a symbolic link from my data and log folders to my SkyDrive folder.  Any file saved in either location will instantly appear in the other.  And since my SkyDrive will be automatically synchronized with the cloud, any changes I make will be copied instantly (depending on my internet bandwidth of course). So what does this have to do with database mirroring?  Well, it seems that the mirroring endpoint that you have to create between mirror and principal servers is really nothing more than a SkyDrive link.  Although it doesn't actually use SkyDrive, it performs the same function.  So in effect, the following statement:

        ALTER DATABASE Mir SET PARTNER = 'TCP://MyOtherServer.domain.com:5022'

    is turned into:

        mklink /D "D:\Data" "\\MyOtherServer.domain.com\5022$"

    The 5022$ "port" is actually a hidden system directory on the principal and mirror servers. I haven't quite figured out how the log files are included in this, or why you have to SET PARTNER on both principal and mirror servers, except maybe that mklink has to do something special when linking across servers.  I couldn't get the above statement to work correctly, but found that doing mklink to a local SkyDrive folder gave me similar functionality.
    To wrap this up, all you have to do is the following:

        1. Install SkyDrive on both SQL Servers (principal and mirror) and set the local SkyDrive folder (D:\SkyDrive in these examples).
        2. On the principal server, run mklink /D on the data and log folders to point to SkyDrive:  mklink /D D:\SkyDrive\Data D:\Data
        3. On the mirror server, run the complementary linking:  mklink /D D:\Data D:\SkyDrive\Data
        4. Create your database and make sure the files map to the principal data and log folders (D:\Data and D:\Log).

    Voilà! Your databases are kept in sync on multiple servers! One wrinkle you will encounter is that the mirror server will show the data and log files, but you won't be able to attach them to the mirror SQL instance while they are attached to the principal. I think this is a bug in SkyDrive, but as it turns out that's fine: you can't access a mirror while it's hosted on the principal either.  So you don't quite get automatic failover, but you can attach the files to the mirror if the principal goes offline.  It's also not exactly synchronous, but it's better than nothing, and easier than either replication or log shipping, with a lot less latency. I will end this with the obvious "not supported by Microsoft" and "Don't do this in production without an updated resume" spiel that you should by now assume with every one of my blog posts, especially considering the date.

    Read the article

  • How to Name Linked Servers

    - by Bill Graziano
    I did another SQL Server migration over the weekend that dealt with linked servers.  I’ve seen all kinds of odd naming schemes and there are a few I like and a few I suggest you avoid.

    Don’t name your linked server for its IP address.  At some point whatever is on the other end of that IP address will move.  You’ll probably need to point your linked server to a new IP address but not change the name of the linked server.  And then you’ve completely lost any context around this.  Bonus points if a new SQL Server eventually ends up at the old IP address further adding confusion when you’re trying to troubleshoot.

    Don’t name your linked server based on its instance name.  This one is less obvious.  It sounds nice to have a linked server named [VSRV1\SQLTRAN01].  You know what it is and it’s easy to use.  It’s less nice when you’ve got 200 stored procedures that all reference this linked server but the database they reference has moved to a new instance.  Now when you query this you’re actually querying a different instance. (Please note: I’m not saying it’s a good idea to have 200 stored procedures that all reference a linked server.  I’m just saying it’s not all that uncommon.)

    Consider naming your linked server something that you can easily search on.  See my note above.  You can also get around this by always enclosing the name in brackets.  That is harder to enforce unless you use some odd characters in it.

    Consider naming your linked server based on the function.  For example, I’ve had some luck having a linked server named [DW] that points to our data warehouse server.  That server can change names or physically move and all I need to do is update the linked server to point to the new destination.  The descriptive name of the linked server is still accurate.  No code needs to change and people still know what it is just by looking at it.

    Consider naming your linked server for the database.  I’m still thinking through this one.  It may mean you have multiple linked servers that point to the same instance.  I’ve found that database names rarely change.  It also makes it easier to move individual databases to new servers.

    Consider pointing your linked servers to DNS entries and not IP addresses.  I’ve done this for reporting databases and had some success.  Especially for read-only snapshots that can get created on the main database or on the mirror.

    What issues have you had with linked server names?  What has worked for you?  Where are the holes in my approach?
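    A rough illustration of the function-based naming approach: the linked server below is named for its role ([DW]) and pointed at a DNS alias rather than an IP address or instance name. This is a hedged sketch rather than code from the article; the server name, DNS alias, and database names are made up.

        -- Hypothetical example: a linked server named for its function,
        -- pointing at a DNS entry instead of an IP address or instance name.
        EXEC sp_addlinkedserver
            @server     = N'DW',              -- the name your code will reference
            @srvproduct = N'',                -- empty when a provider is specified
            @provider   = N'SQLNCLI',         -- SQL Server Native Client OLE DB provider
            @datasrc    = N'dw.example.com';  -- DNS alias; repoint it when the server moves

        EXEC sp_addlinkedsrvlogin
            @rmtsrvname = N'DW',
            @useself    = N'True';            -- connect using the caller's credentials

        -- Queries then reference the role, not the machine:
        -- SELECT TOP (10) * FROM [DW].[WarehouseDB].[dbo].[FactSales];

    If the data warehouse later moves, only @datasrc (or the DNS entry itself) changes; the [DW] name in stored procedures stays the same.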

    Read the article

  • 24 Hours of PASS: Whine, Whine, Whine

    - by Most Valuable Yak (Rob Volk)
    24 Hours of Pass (or 24HOP) is a great program offered by PASS to provide free, online training for anyone who wants to learn more about SQL Server.  They routinely have the best SQL Server presenters available for these sessions, and attract hundreds, perhaps even a thousand attendees from around the world.  This is definitely one of the best things they've started doing in the past few years, and every session I've attended has been excellent. So why am I so grumpy about it? I'm not really, pretty much everything here is a minor annoyance that I can deal with.  However since they're so minor they seem to be things that can be easily corrected and would make the process much better. First off, this is my biggest gripe, the registration page: https://www323.livemeeting.com/lrs/8000181573/Registration.aspx?pageName=lj6378f4fhf5hpdm What grinds my gears about this?  I have to scroll alllllllllllllllllllllllllllll the way to bottom to actually register for the sessions.  This wouldn't be so bad except all the details of the session, including the presenter, is in a separate list at the top.  Both lists contain info the other does not, and scrolling between them to determine "Should I make time to listen to this?  Who is speaking at this time anyway?" is really unnecessary. My preference would be to keep the top list and add the checkboxes and schedule info in separate columns.  This is a full-width design, so there's plenty of space for this data, which is pretty small anyway.  The other huge benefit is halving the size of the page, which improves performance and lowers bandwidth usage considerably.  And if you know HTML/ASP.Net, and you view the page source, you can find PLENTY of other things that can be reduced even further.  (not just viewstate) One nice thing that PASS does is send iCal reminders to your email address so you can accept them to your calendar.  Again, they leave off the presenter in the appointment details, while still duplicating the meeting title in the body.  Sometimes I make decisions based on speaker rather than content (Natalie Portman is reading the Yellow Pages??? I'M THERE!) and having the speaker in the iCal is helpful. Next minor annoyances are the necessity for providing a company name, and the survey questions.  I know PASS needs to market themselves effectively, and they need information to do that, and since this is a free event it's really not worth complaining about, but why ask the survey question twice? (once at registration, once again when joining the LiveMeeting)  Same thing for the company name.  All of this should be tied to email address, so that's all I should need to enter when joining the LiveMeeting. The last one is also minor, but it irks me in this day and age of multiple browsers and the decline of Internet Explorer as a dominant platform.  The registration page was originally created in Visual Studio 2003, and has a lot of IE-specific crud representative of the browser situation of 2003. (IE5 references? really? and is the aforementioned viewstate big enough?)  This causes some grief with other browsers like Firefox, Chrome, and sometimes IE8 or 9.  And don't get me started on using the page on a Mac or in Safari. My main point is that PASS is an international organization, welcoming everyone from all levels of SQL Server proficiency, and in that spirit I think it would help to accommodate a wider range of browser software, especially since the registration page is extremely simple.  
I recognize that this page is not hosted on the PASS website and may be maintained by some division of Microsoft, but to me that's even worse if MS can't update their own pages.  They've deprecated IE6, so they don't need to maintain support on their own websites anymore. OK, I'll shut up now. #sqlpass #24HOP

    Read the article

  • Unlock the full potential of Oracle Retail with Oracle Retail Consulting

    - by user801960
    In this video, Maria Porretta, Engagement Director, introduces Oracle Retail Consulting, which supports Oracle Retail customers by unlocking the potential of the software solutions they are utilising. Oracle Retail Consulting comprises a global team of over 300 consultants, 70 of whom are EMEA-based. 90% of the team have a retail background in either IT or business, ensuring true industry expertise and maximum business benefit. Oracle Retail Consulting offers two primary streams of service: design authority, which covers analysis and design to ensure a guided process through to implementation, and delivery ownership, which runs throughout the implementation process. Further information is available on our website regarding Oracle Retail Consulting.

    Read the article

  • How to get current connection settings

    - by Peter Larsson
    SELECT  name AS Setting,
            CASE
                WHEN @@OPTIONS & number = number THEN 'ON'
                ELSE 'OFF'
            END AS Value
    FROM    master..spt_values
    WHERE   type = 'SOP'
            AND number > 0

    Read the article

  • How to block the ASP.NET page while the AJAX UpdateProgress is being displayed

    Step 1: Copy the following styles to your aspx page.

        <style type="text/css">
            .hide { display: none; }
            .show { display: inherit; }
            .progressBackgroundFilter
            {
                position: absolute;
                top: 0px;
                bottom: 0px;
                left: 0px;
                right: 0px;
                overflow: hidden;
                padding: 0;
                margin: 0;
                background-color: #000;
                filter: alpha(opacity=50);
                opacity: 0.5;
                z-index: 1000;
            }
            .processMessage
            {
                position: absolute;
                font-family: Verdana;
                font-size: 12px;
                font-weight: normal;
                color: #000066;
                top: 30%;
                left: 43%;
                padding: 10px;
                width: 18%;
                z-index: 1001;
                background-color: #fff;
            }
        </style>

    Step 2: Put the divs as shown below in UpdateProgress control.

        <asp:UpdateProgress ID="updPrgsBaselineTab" runat="server">
            <ProgressTemplate>
                <div id="progressBackgroundFilter" class="progressBackgroundFilter"></div>
                <div id="processMessage" class="processMessage">
                    <table width="100%">
                        <tr style="width: 100%">
                            <td style="width: 100%">
                                Please Wait..........
                            </td>
                        </tr>
                        <tr style="width: 100%">
                            <td style="width: 100%" align="center">
                                <img src="../Images/Update_Progress.gif" />
                            </td>
                        </tr>
                    </table>
                </div>
            </ProgressTemplate>
        </asp:UpdateProgress>

    Read the article

  • Scheduling Jobs in SQL Server Express

    As we all know, SQL Server 2005 Express is a very powerful free edition of SQL Server 2005. However, it does not contain the SQL Server Agent service. Because of this, scheduling jobs is not possible. So if we want to do this we have to install a free or commercial 3rd party product. This usually isn't allowed due to the security policies of many hosting companies and thus presents a problem. Maybe we want to schedule daily backups, database reindexing, statistics updating, etc. This is why I wanted to have a solution based only on SQL Server 2005 Express and not dependent on the hosting company. And of course there is one, based on our old friend the Service Broker.
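    The article's full solution isn't reproduced in this excerpt, but the Service Broker technique it refers to generally rests on a conversation timer plus an activated stored procedure. The following is only a minimal sketch of that pattern; the object names, the backup command used as the "job", and the 24-hour interval are assumptions for illustration, and the database must have Service Broker enabled.

        -- Queue and service that will receive the timer messages.
        CREATE QUEUE dbo.SchedulerQueue;
        CREATE SERVICE SchedulerService ON QUEUE dbo.SchedulerQueue ([DEFAULT]);
        GO
        -- Activation procedure: do the work, then re-arm the timer.
        CREATE PROCEDURE dbo.RunScheduledJob
        AS
        BEGIN
            DECLARE @handle UNIQUEIDENTIFIER, @msgtype SYSNAME;

            RECEIVE TOP (1) @handle  = conversation_handle,
                            @msgtype = message_type_name
            FROM dbo.SchedulerQueue;

            IF @msgtype = N'http://schemas.microsoft.com/SQL/ServiceBroker/DialogTimer'
            BEGIN
                BACKUP DATABASE MyDb TO DISK = 'D:\Backup\MyDb.bak' WITH INIT;  -- the scheduled "job"
                BEGIN CONVERSATION TIMER (@handle) TIMEOUT = 86400;             -- fire again in 24 hours
            END
        END;
        GO
        ALTER QUEUE dbo.SchedulerQueue
            WITH ACTIVATION (STATUS = ON,
                             PROCEDURE_NAME = dbo.RunScheduledJob,
                             MAX_QUEUE_READERS = 1,
                             EXECUTE AS OWNER);
        GO
        -- Kick it off once: open a dialog and arm the first timer.
        DECLARE @dialog UNIQUEIDENTIFIER;
        BEGIN DIALOG CONVERSATION @dialog
            FROM SERVICE SchedulerService
            TO SERVICE 'SchedulerService'
            WITH ENCRYPTION = OFF;
        BEGIN CONVERSATION TIMER (@dialog) TIMEOUT = 60;  -- first run in 60 seconds

    Each time the timer fires, the activation procedure runs the job and arms a new timer, which is what stands in for SQL Server Agent's scheduler on Express.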

    Read the article

  • Infragistics - Now Available! 2 New Packs and Silverlight CTPs

    Infragistics® glows silver this season as we continue to innovate for the Silverlight 3 platform and deliver stockings stuffed with the high-performance controls needed to quickly and easily create great user experiences in Silverlight, plus two new ICON packs guaranteed to make your applications shine.

    First Silverlight Pivot Grid Now Available: Perfect for working with multi-dimensional data, the xamWebPivotGrid™ presents decision makers with highly interactive pivoting views of business intelligence. Our new high-performance Silverlight charting control, the xamWebDataChart™, enables blazing-fast updates every few milliseconds to charts with millions of data points. Both of these controls are planned for the 2010 Volume 1 release of NetAdvantage for Silverlight Data Visualization.

    Gift to Silverlight Line of Business Applications: You’ll be able to deliver superior user experiences in LOB applications with Silverlight RIA Services support, a ZIP compression library, a new control persistence framework, and new Silverlight data grid features like unbound columns and template layouts, plus an Office 2007-style ribbon UI for Silverlight.

    Tis the Season to Add Some Shine: The NetAdvantage ICONS Legal Pack adds a touch of legalese to any application user interface with its rich, legal system-themed graphic icons. The NetAdvantage ICONS Education Pack supplies familiar academic icons that developers can easily add to software reaching students, educators, schools and universities. Sold in themed packs, the ICON packs already available are: Web & Commerce, Healthcare, Office Basics, Business & Finance and Software & Computing. Buy any two of the seven packs for $299 USD (MSRP), three packs for $399 USD (MSRP), or buy them separately for $199 USD (MSRP) each.

    For more product details contact Infragistics: +1 (800) 231-8588; In Europe (English speakers): +44 (0) 20 8387 1474; En France (en langue française): +33 (0) 800 667 307; Für Deutschland (Deutscher Sprecher): 0800 368 6381; In India: +91 (80) 6785 1111.

    Read the article

  • Google I/O 2010 - Android UI design patterns

    Google I/O 2010 - Android UI design patterns. Android 201. Speakers: Chris Nesladek, German Bauer, Richard Fulcher, Christian Robertson, Jim Palmer. In this session, the Android User Experience team will show the types of patterns you can use to build a great Android application. We'll cover things like how to use Interactive Titlebars, Quick Contacts, and Bottom bars, as well as some new patterns which will get an I/O-only preview. The team will also be available for a no-holds-barred Q&A session. For all I/O 2010 sessions, please go to code.google.com. From: GoogleDevelopers. Time: 58:42.

    Read the article

  • Advantages of Singleton Class over Static Class?

    Point 1) Singleton: We can get the singleton object and then pass it to other methods. Static class: We cannot pass a static class to other methods the way we pass objects.

    Point 2) Singleton: In the future, it is easy to change the object-creation logic to some pooling mechanism. Static class: It is very difficult to implement pooling logic for a static class; we would need to make the class non-static and make all its methods non-static, so the entire code base would need to change.

    Point 3) Singleton: Can a singleton class be inherited by a subclass? The singleton pattern does not impose any restriction on inheritance, so we should be able to do this. There's nothing fundamentally wrong with subclassing a class that is intended to be a singleton; there are many reasons you might want to do it, and there are many ways to accomplish it, depending on the language you use. Static class: We cannot inherit a static class from another static class in C#. Think about it this way: you access static members via the type name, like this: MyStaticType.MyStaticMember(); Were you to inherit from that class, you would have to access it via the new type name: MyNewType.MyStaticMember(); Thus, the new type bears no relationship to the original when used in code, and there is no way to take advantage of an inheritance relationship for things like polymorphism.

    Read the article

  • Interviews: Going Beyond the Technical Quiz

    - by Tony Davis
    All developers will be familiar with the basic format of a technical interview. After a bout of CV-trawling to gauge basic experience, strengths and weaknesses, the interview turns technical. The whiteboard takes center stage and the challenge is set to design a function or query, or solve what on the face of it might seem a disarmingly simple programming puzzle. Most developers will have experienced those few panic-stricken moments, when one’s mind goes as blank as the whiteboard, before un-popping the marker pen, and hopefully one’s mental functions, to work through the problem. It is a way to probe the candidate’s knowledge of basic programming structures and techniques and to challenge their critical thinking. However, these challenges or puzzles, often devised by some of the smartest brains in the development team, have a tendency to become unnecessarily ‘tricksy’. They often seem somewhat academic in nature. While the candidate straight out of IT school might breeze through the construction of a Markov chain, a candidate with bags of practical experience but less in the way of formal training could become nonplussed. Also, a whiteboard and a marker pen make up only a very small part of the toolkit that a programmer will use in everyday work. I remember vividly my first job interview, for a position as technical editor. It went well, but after the usual CV grilling and technical questions, I was only halfway there. Later, they sat me alongside a team of editors, in front of a computer loaded with MS Word and a copy of SQL Server Query Analyzer, and my task was to edit a real chapter for a real SQL Server book that they planned to publish, including validating and testing all the code. It was a tough challenge but I came away with a sound knowledge of the sort of work I’d do, and its context. It makes perfect sense, yet my impression is that many organizations don’t do this. Indeed, it is only relatively recently that Red Gate started to move over to this model for developer interviews. Now, instead of, or perhaps in addition to, the whiteboard challenges, the candidate can expect to sit with their prospective team, in front of Visual Studio, loaded with all the useful tools in the developer’s kit (ReSharper and so on) and asked to, for example, analyze and improve a real piece of software. The same principles should apply when interviewing for a database position. In addition to the usual questions challenging the candidate’s knowledge of such things as b-trees, object permissions, database recovery models, and so on, sit the candidate down with the other database developers or DBAs. Arm them with a copy of Management Studio, and a few other tools, then challenge them to discover the flaws in a stored procedure, and improve its performance. Or present them with a corrupt database and ask them to get the database back online, and discover the cause of the corruption.

    Read the article

  • Guessing Excel Data Types

    - by AjarnMark
    Note to Self: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\4.0\Engines\Excel: TypeGuessRows = 0 means scan everything.

    Note to Others: About 10 years ago I stumbled across this bit of information just when I needed it and it saved my project.  Then a few years later, when it would have been nice but not critical, for some reason I could not find it again anywhere.  Well, now I have stumbled across it again, and to preserve my future self from nightmares and sudden baldness due to pulling my hair out, I have decided to blog it in the hopes that I can find it again this way.

    Here’s the story…  When you query data from an Excel spreadsheet, such as with old-fashioned DTS packages in SQL 2000 (my first reference) or simply with an OLEDB Data Adapter from ASP.NET (recent task), and if you are using the Microsoft Jet 4.0 driver (newer ones may deal with this differently), then you can get funny results where the query reports back that a cell value is null even when you know it contains data.

    What happens is that Excel doesn’t really have data types.  While you can format information in cells to appear like certain data types (e.g. Date, Time, Decimal, Text, etc.) that is not really defining the cell as being of a certain type like we think of when working with databases.  But, presumably, to make things more convenient for the user (programmer), when you issue a query against Excel, the query processor tries to guess what type of data is contained in each column and returns it in an appropriate manner.  This is all well and good IF your data is consistent in every row and matches what the processor guessed.  And, for efficiency’s sake, when the query processor is trying to figure out each column’s data type, it does so by analyzing only the first 8 rows of data (default setting).

    Now here’s the problem: suppose that your spreadsheet contains information about clothing, and one of the columns is Size.  Now suppose that in the first 8 rows, all of your sizes look like 32, 34, 18, 10, and so on, using numbers, but then, somewhere after the 8th row, you have some rows with sizes like S, M, L, XL.  What happens is that by examining only the first 8 rows, the query processor inferred that the column contained numerical data, and then when it hits the non-numerical data in later rows, it comes back blank.  Major bummer, and a real pain to track down if you don’t know that Excel is doing this, because you study the spreadsheet and say, “the data is RIGHT THERE!  WHY doesn’t the query see it?!?!”  And the hair-pulling begins.

    So, what’s a developer to do?  One option is to go to the registry setting noted above and change the DWORD value of TypeGuessRows from the default of 8 to 0 (zero).  Setting this value to zero will force Jet to scan every row in the spreadsheet before making its determination as to what type of data the column contains.  And that means that in the example above, it would have treated the column as a string rather than as numeric, and presto! your query now returns all of the values that you know are in there.

    Of course, there is a caveat… if you are querying large spreadsheets, making Jet scan every row can be quite a performance hit.  You could enter a different number (more than 8) that you believe is a better sampling of rows to make the guess, but you still have the possibility that every row scanned looks alike, but that later rows are different, and that you might get blanks when there really is data there.
    That’s the type of gamble I really don’t like to take with my data. Anyone with a better approach, or with experience with more recent drivers that have a better way of handling data types, please chime in!
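    The registry change is the fix the post describes; purely as a complementary illustration (not from the article), here is how the same Jet provider can be exercised from T-SQL with OPENROWSET. The file path and sheet name are made up, ad hoc distributed queries must be enabled, and the Jet 4.0 provider is 32-bit only. The IMEX=1 extended property asks Jet to treat mixed-type columns as text (governed by the separate ImportMixedTypes registry value), which can help with exactly the Size-column scenario above.

        -- Illustrative only: querying a spreadsheet through the Jet 4.0 provider.
        SELECT *
        FROM OPENROWSET(
            'Microsoft.Jet.OLEDB.4.0',
            'Excel 8.0;Database=C:\Data\Clothing.xls;HDR=YES;IMEX=1',
            'SELECT * FROM [Sheet1$]');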

    Read the article

  • SSMS Tools Pack 1.9.3 is out!

    - by Mladen Prajdic
    This release adds a great new feature and fixes a few bugs. The new feature, called Window Content History, saves the whole text in all opened SQL windows every N minutes, with the default being 30 minutes. This feature fixes the shortcoming of the Query Execution History, which is saved only when the query is run. If you're working on a large script and never execute it, the existing Query Execution History wouldn't save it. By contrast, the Window Content History saves everything in a .sql file so you can even open it in SSMS. The Query Execution History and Window Content History files are correlated by the same directory and file name, so when you search through the Query Execution History you get to see the whole saved Window Content History for that query. Because Window Content History saves data in simple searchable .sql files there isn't a special search editor built in. It is turned ON by default but, despite the built-in optimizations for space minimization, be careful not to let it fill your disk. You can see how it looks in the pictures in the feature list.

    The fixed bugs are:
    - SSMS 2008 R2 slowness reported by a few people.
    - An Object Explorer context menu bug where it showed multiple SSMS Tools entries and showed wrong entries for a node.
    - A datagrid bug in SQL snippets.
    - Ability to read illegal XML characters from log files.
    - Fixed the 5 MB upper limit bug for saved history text.
    - A bug where searching through result sets prevented the search.
    - A bug with Text formatting erroring out for certain scripts.
    - A bug with finding servers where it would return null even though servers existed.
    - Run Custom Scripts objects had a bug where |SchemaName| didn't display the correct table schema for columns. This is fixed. Also |NodeName| and |ObjectName| values now show the same thing.

    You can download the new version 1.9.3 here. Enjoy it!

    Read the article

  • Starting Spring Lineup

    - by onefloridacoder
    This morning I finished removing all VS2008-related frameworks and installed items related to VS2010, based on posts around the community.  Here's the config for my dev laptop:

        HP Pavilion dv6, Win7 64-bit, 4 GB RAM

        Installed developer tools and frameworks:
        - Sync 2.0 SDK
        - Visual Studio 2010
        - Pex Power Tools
        - Enterprise Library 5.0
        - SQL Server 2008 Developer Edition
        - Visual Studio 2008 Ultimate
        - Expression Blend 4 RC
        - Team Foundation Server 2010
        - Team Foundation Server 2010 SDK

    The only item I did not reinstall on top of VS2010 was ReSharper 4.5, only because I read mixed reviews on the dev experience.  At this point I really just want to get used to the new(er) IDEs without adding any confusion to my dev machine.  I'll level off my desire for early adoption at Blend, EntLib 5.0, and Pex; I'm also interested in the Moles integration as well.  Something else I didn't have to install was my IDE theme, which was left behind in my user folder and was merged during installation; afterwards it was nice to see once the dust settled.

    Read the article

  • TDD with limited resources

    - by bunglestink
    I work in a large company, but on just a two-man team developing desktop LOB applications. I have been researching TDD for quite a while now, and although it is easy to see its benefits for larger applications, I am having a hard time trying to justify the time to begin using TDD on the scale of our applications. I understand its advantages in automating testing, improving maintainability, etc., but on our scale, writing even basic unit tests for all of our components could easily double development time. Since we are already undermanned with extreme deadlines, I am not sure what direction to take. While other practices such as agile iterative development make perfect sense, I am kind of torn over the productivity trade-offs of TDD on a small team. Are the advantages of TDD worth the extra development time on small teams with very tight schedules?

    Read the article

  • How can I convince cowboy programmers to use source control?

    - by P.Brian.Mackey
    UPDATE: I work on a small team of devs, 4 guys. They have all used source control. Most of them can't stand source control and instead choose not to use it. I strongly believe source control is a necessary part of professional development. Several issues make it very difficult to convince them to use source control:

    - The team is not used to using TFS. I've had 2 training sessions, but was only allotted 1 hour, which is insufficient.
    - Team members directly modify code on the server. This keeps code out of sync, requiring comparison just to be sure you are working with the latest code, and complex merge problems arise.
    - Time estimates offered by developers exclude time required to fix any of these problems. So, if I say "no, it will take 10x longer"…I have to constantly explain these issues and risk myself because now management may perceive me as "slow".
    - The physical files on the server differ in unknown ways over ~100 files. Merging requires knowledge of the project at hand and, therefore, developer cooperation which I am not able to obtain.
    - Other projects are falling out of sync. Developers continue to have a distrust of source control and therefore compound the issue by not using source control.
    - Developers argue that using source control is wasteful because merging is error-prone and difficult. This is a difficult point to argue, because when source control is being so badly misused and continually bypassed, it is error-prone indeed. Therefore, the evidence "speaks for itself" in their view.
    - Developers argue that directly modifying server code, bypassing TFS, saves time. This is also difficult to argue, because the merge required to synchronize the code to start with is time-consuming. Multiply this by the 10+ projects we manage.
    - Permanent files are often stored in the same directory as the web project, so publishing (full publish) erases these files that are not in source control. This also drives distrust for source control, because "publishing breaks the project". Fixing this (moving stored files out of the solution subfolders) takes a great deal of time and debugging, as these locations are not set in web.config and often exist across multiple code points.

    So, the culture perpetuates itself. Bad practice begets more bad practice. Bad solutions drive new hacks to "fix" much deeper, much more time-consuming problems. Servers and hard drive space are extremely difficult to come by. Yet user expectations are rising. What can be done in this situation?

    Read the article

  • Testing with Profiler Custom Events and Database Snapshots

    We've all had them. One of those stored procedures that is huge and contains complex business logic which may or may not be executed. These procedures make it an absolute nightmare when it comes to debugging problems because they're so complex and have so many logic offshoots that it's very easy to get lost when you're trying to determine the path that the procedure code took when it ran. Fortunately Profiler lets you define custom events that you can raise in your code and capture in a trace so you get a better window into the sub events occurring in your code. I found it very useful to use custom events and a database snapshot to debug some code recently and we'll explore both in this article. I find raising these events and running Profiler to be very useful for testing my stored procedures on my own as well as when my code is going through official testing and user acceptance. It's a simple approach and a great way to catch any performance problems or logic errors.
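    The article's own examples aren't included in this excerpt, but the two building blocks it refers to are simple to sketch. In the hedged example below, the database name, file names, and message text are made up; event IDs 82 through 91 are the ones Profiler exposes as UserConfigurable:0 through 9.

        -- Raise a custom event from inside a stored procedure so a Profiler trace
        -- (capturing the UserConfigurable:0 event) can record which branch ran.
        DECLARE @msg NVARCHAR(128);
        SET @msg = N'Entered discount-calculation branch for order 12345';
        EXEC sp_trace_generateevent @eventid = 82, @userinfo = @msg;

        -- Create a database snapshot before the test run so the data can be
        -- inspected or reverted afterwards.
        CREATE DATABASE Sales_Snapshot
            ON (NAME = Sales_Data, FILENAME = 'D:\Snapshots\Sales_Data.ss')
            AS SNAPSHOT OF Sales;

        -- If the test leaves the data in a bad state, roll the database back:
        -- RESTORE DATABASE Sales FROM DATABASE_SNAPSHOT = 'Sales_Snapshot';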

    Read the article

< Previous Page | 67 68 69 70 71 72 73 74 75 76 77 78  | Next Page >