Search Results

Search found 20062 results on 803 pages for 'malcolm post'.

Page 219/803

  • SQL Server 2008 Cumulative Updates are available!

    - by AaronBertrand
    SQL Server 2008 Service Pack 2 Cumulative Update #9
    KB article: http://support.microsoft.com/kb/2673382
    Build number is 10.00.4330
    7 fixes (5 in the database engine, 2 in SSAS)
    Relevant for: builds of SQL Server between 10.00.4000 and 10.00.4329

    SQL Server 2008 Service Pack 3 Cumulative Update #4
    KB article: http://support.microsoft.com/kb/2673383
    Build number is 10.00.5775
    10 fixes listed
    Relevant for: builds of SQL Server between 10.00.5000 and 10.00.5774

    As usual, I'll post my standard disclaimer...(read more)
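
    Before applying either update it can help to confirm which build an instance is currently on, so the "relevant for" ranges above are easy to evaluate. A minimal check along these lines (my own convenience query, not part of Aaron's post):

        SELECT  SERVERPROPERTY('ProductVersion') AS BuildNumber,   -- e.g. 10.00.4279
                SERVERPROPERTY('ProductLevel')   AS ServicePack,   -- e.g. SP2
                SERVERPROPERTY('Edition')        AS Edition;
        -- SP2 CU9 applies to builds 10.00.4000 - 10.00.4329; SP3 CU4 to builds 10.00.5000 - 10.00.5774.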

    Read the article

  • Automating deployments with the SQL Compare command line

    - by Jonathan Hickford
    In my previous article, “Five Tips to Get Your Organisation Releasing Software Frequently” I looked at how teams can automate processes to speed up release frequency. In this post, I’m looking specifically at automating deployments using the SQL Compare command line. SQL Compare compares SQL Server schemas and deploys the differences. It works very effectively in scenarios where only one deployment target is required – source and target databases are specified, compared, and a change script is automatically generated and applied. But if multiple targets exist, and pressure to increase the frequency of releases builds, this solution quickly becomes unwieldy. This is where SQL Compare’s command line comes into its own.

    I’ve put together a PowerShell script that loops through the Servers table and pulls out the server and database; these are then passed to sqlcompare.exe to be used as target parameters. In the example the source is a scripts folder, a folder structure of scripted-out database objects used by both SQL Source Control and SQL Compare. The script can easily be adapted to use schema snapshots.

        -- Create a DeploymentTargets database and a Servers table
        CREATE DATABASE DeploymentTargets
        GO
        USE DeploymentTargets
        GO
        CREATE TABLE [dbo].[Servers](
            [id] [int] IDENTITY(1,1) NOT NULL,
            [serverName] [nvarchar](50) NULL,
            [environment] [nvarchar](50) NULL,
            [databaseName] [nvarchar](50) NULL,
            CONSTRAINT [PK_Servers] PRIMARY KEY CLUSTERED ([id] ASC)
        )
        GO

        -- Now insert your target server and database details
        INSERT INTO dbo.Servers (serverName, environment, databaseName)
        VALUES (N'myserverinstance', N'myenvironment1', N'mydb1')
        INSERT INTO dbo.Servers (serverName, environment, databaseName)
        VALUES (N'myserverinstance', N'myenvironment2', N'mydb2')

    Here’s the PowerShell script you can adapt for yourself as well.

        # We're holding the server names and database names that we want to deploy to in a database table.
        # We need to connect to that server to read these details.
        $serverName = ""
        $databaseName = "DeploymentTargets"
        $authentication = "Integrated Security=SSPI"
        #$authentication = "User Id=xxx;PWD=xxx" # If you are using database authentication instead of Windows authentication.

        # Path to the scripts folder we want to deploy to the databases
        $scriptsPath = "SimpleTalk"

        # Path to SQLCompare.exe
        $SQLComparePath = "C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe"

        # Create SQL connection string, and connection
        $ServerConnectionString = "Data Source=$serverName;Initial Catalog=$databaseName;$authentication"
        $ServerConnection = new-object system.data.SqlClient.SqlConnection($ServerConnectionString);

        # Create a Dataset to hold the DataTable
        $dataSet = new-object "System.Data.DataSet" "ServerList"

        # Create a query
        $query = "SET NOCOUNT ON;"
        $query += "SELECT serverName, environment, databaseName "
        $query += "FROM dbo.Servers; "

        # Create a DataAdapter to populate the DataSet with the results
        $dataAdapter = new-object "System.Data.SqlClient.SqlDataAdapter" ($query, $ServerConnection)
        $dataAdapter.Fill($dataSet) | Out-Null

        # Close the connection
        $ServerConnection.Close()

        # Populate the DataTable
        $dataTable = new-object "System.Data.DataTable" "Servers"
        $dataTable = $dataSet.Tables[0]

        # For every row in the DataTable
        $dataTable | FOREACH-OBJECT {
            "Server Name: $($_.serverName)"
            "Database Name: $($_.databaseName)"
            "Environment: $($_.environment)"

            # Compare the scripts folder to the database and synchronize the database to match.
            # NB. Have set SQL Compare to abort on medium level warnings.
            $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/AbortOnWarnings:Medium") # + @("/sync") # Commented out the 'sync' parameter for safety.
            write-host $arguments
            & $SQLComparePath $arguments
            "Exit Code: $LASTEXITCODE"

            # Some interesting variations:

            # Check that every database matches a folder.
            # For example this might be a pre-deployment step to validate everything is at the same baseline state,
            # or a post-deployment script to validate the deployment worked.
            # An exit code of 0 means the databases are identical.
            #
            # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical")

            # Generate a report of the difference between the folder and each database, and a SQL update script for each database.
            # For example use this after the above to generate upgrade scripts for each database.
            # Examine the warnings and the HTML diff report to understand how the script will change objects.
            #
            # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/ScriptFile:update_$($_.environment+"_"+$_.databaseName).sql", "/report:update_$($_.environment+"_"+$_.databaseName).html", "/reportType:Interactive", "/showWarnings", "/include:Identical")
        }

    It’s worth noting that the above example generates the deployment scripts dynamically. This approach should be problem-free for the vast majority of changes, but it is still good practice to review and test a pre-generated deployment script prior to deployment. An alternative approach would be to pre-generate a single deployment script using SQL Compare, and run this en masse against multiple targets programmatically using sqlcmd, or using a tool like SQL Multi Script. You can use the /ScriptFile, /report, and /showWarnings flags to generate change scripts, difference reports and any warnings. See the commented-out example in the PowerShell:

        #$arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/ScriptFile:update_$($_.environment+"_"+$_.databaseName).sql", "/report:update_$($_.environment+"_"+$_.databaseName).html", "/reportType:Interactive", "/showWarnings", "/include:Identical")

    There is a drawback to running a pre-generated deployment script: it assumes that a given database target hasn’t drifted from its expected state. Often there are (rightly or wrongly) many individuals within an organization who have permissions to alter the production database, and changes can therefore be made outside of the prescribed development processes. The consequence is that at deployment time, the applied script has been validated against a target that no longer represents reality. The solution here would be to add a check for drift prior to running the deployment script. This is achieved by using sqlcompare.exe to compare the target against the expected schema snapshot using the /Assertidentical flag. Should this return any differences (sqlcompare.exe exit code 79), a drift report is output instead of executing the deployment script. See the commented-out example:

        # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical")

    Any checks and processes that should be undertaken prior to a manual deployment should also happen during an automated deployment. You might think about triggering backups prior to deployment – even better, automate the verification of the backup too.

    You can use SQL Compare’s command line interface along with PowerShell to automate multiple actions and checks that you need in your deployment process. Automation is a practical solution where multiple targets and a higher release cadence come into play. As we know, with great power comes great responsibility – responsibility to ensure that the necessary checks are made so deployments remain trouble-free. (The code sample supplied in this post automates the simple dynamic deployment case – if you are considering more advanced automation, e.g. the drift checks, script generation, deploying to large numbers of targets and backup/verification, please email me at [email protected] for further script samples or if you have further questions.)

    Read the article

  • Open Source AI Bot interfaces

    - by David Young
    What are some open source AI bot interfaces? Similar to Pogamut 3 GameBots2004 for custom Unreal Tournament bots, or the Brood War API for StarCraft bots, etc. If you could, please post one AI bot interface per answer (make sure to provide a link) and give a brief summary as to its content. Please include what type of bot interface structure it is – client/server, server/server, etc. For example, BWAPI is client/server and emulates a real player.

    Read the article

  • Developing a Cost Model for Cloud Applications

    - by BuckWoody
    Note - please pay attention to the date of this post. As much as I attempt to make the information below accurate, the nature of distributed computing means that components, units and pricing will change over time. The definitive costs for Microsoft Windows Azure and SQL Azure are located here, and are more accurate than anything you will see in this post: http://www.microsoft.com/windowsazure/offers/

    When writing software that is run on a Platform-as-a-Service (PaaS) offering like Windows Azure / SQL Azure, one of the questions you must answer is how much the system will cost. I will not discuss the comparisons between on-premise costs (which are nigh impossible to calculate accurately) versus cloud costs, but instead focus on creating a general model for estimating costs for a given application. You should be aware that there are (at this writing) two billing mechanisms for Windows and SQL Azure: “Pay-as-you-go” or consumption, and “Subscription” or commitment. Conceptually, you can consider the former a pay-as-you-go cell phone plan, where you pay by the unit used (at a slightly higher rate), and the latter a standard cell phone plan where you commit to a contract and thus pay lower rates. In this post I’ll stick with the pay-as-you-go mechanism for simplicity, which should be the maximum cost you would pay. From there you may be able to get a lower cost if you use the other mechanism. In any case, the model you create should hold.

    Developing a good cost model is essential. As a developer or architect, you’ll most certainly be asked how much something will cost, and you need to have a reliable way to estimate that. Businesses and organizations have been used to paying for servers, software licenses, and other infrastructure as an up-front cost, and power, people to run the systems and so on as an ongoing (and sometimes not factored) cost. When presented with a new paradigm like distributed computing, they may not understand the true cost/value proposition, and that’s where the architect and developer can guide the conversation to make a choice based on features of the application versus the true costs.

    The two big buckets of use-types for these applications are customer-based and steady-state. In the customer-based use type, each successful use of the program results in a sale or income for your organization. Perhaps you’ve written an application that provides the spot-price of foo, and your customer pays for the use of that application. In that case, once you’ve estimated your cost for a successful traversal of the application, you can build that into the price you charge the user. It’s a standard restaurant model, where the price of the meal is determined by the cost of making it, plus any profit you can make. In the second use-type, the application will be used by a more-or-less constant number of processes or users and no direct revenue is attached to the system. A typical example is a customer-tracking system used by the employees within your company. In this case, the cost model is often created “in reverse” - meaning that you pilot the application, monitor the use (and costs) and that cost is held steady. This is where the comparison with an on-premise system becomes necessary, even though it is more difficult to estimate those on-premise true costs. For instance, do you know exactly how much of the cost of the air conditioning is there because you have a team of system administrators? This may sound trivial, but that, along with the insurance for the building, the wiring, and every other part of the system is in fact a cost to the business.

    There are three primary methods that I’ve been successful with in estimating the cost. None are perfect; all are demand-driven. The general process is to lay out a matrix of components, units and cost per unit, and then multiply that by the usage of the system, based on which components you use in the program. That sounds a bit simplistic, but using those metrics in a calculation becomes more detailed. In all of the methods that follow, you need to know your application. The components for a PaaS include computing instances, storage, transactions, bandwidth and, in the case of SQL Azure, database size. In most cases, architects start with the first model and progress through the other methods to gain accuracy.

    Simple Estimation
    The simplest way to calculate costs is to architect the application (even UML or on-paper, no coding involved) and then estimate which of the components you’ll use, and how much of each will be used. Microsoft provides two tools to do this - one is a simple slider-application located here: http://www.microsoft.com/windowsazure/pricing-calculator/  The other is a tool you download to create a “Return on Investment” (ROI) spreadsheet, which has the advantage of leading you through various questions to estimate what you plan to use, located here: https://roianalyst.alinean.com/msft/AutoLogin.do?d=176318219048082115  You can also just create a spreadsheet yourself, with columns such as: Program Element, Azure Component, Unit of Measure, Cost Per Unit, Estimated Use of Component, Total Cost Per Component, and Cumulative Cost. Of course, the consideration with this model is that it is difficult to predict a system that is not running or hasn’t even been developed. Which brings us to the next model type.

    Measure and Project
    A more accurate model is to actually write the code for the application, using the Software Development Kit (SDK) which can run entirely disconnected from Azure. The code should be instrumented to estimate the use of the application components, logging to a local file on the development system. A series of unit and integration tests should be run, which will create load on the test system. You can use standard development concepts to track this usage, and even use Windows Performance Monitor counters. The best place to start with this method is to use the Windows Azure Diagnostics subsystem in your code, which you can read more about here: http://blogs.msdn.com/b/sumitm/archive/2009/11/18/introducing-windows-azure-diagnostics.aspx This set of APIs greatly simplifies tracking the application, and in fact you can use this information for more than just a cost model. After you have the tracking logs, you can plug the numbers into any of the tools above, which should give a representative cost or in some cases a unit cost. The consideration with this model is that the SDK fabric is not a one-to-one comparison with performance on the actual Windows Azure fabric. Those differences are usually small, but they do need to be considered. Also, you may not be able to accurately predict the load on the system, which might lead to an architectural change, which changes the model. This leads us to the next, most accurate method for a cost model.

    Sample and Estimate
    Using standard statistical and other predictive math, once the application is deployed you will get a bill each month from Microsoft for your Azure usage. The bill is quite detailed, and you can export the data from it to do analysis, and using methods like regression and so on, project into the future what the costs will be. I normally advise that the architect also extrapolate a unit cost from those metrics as well. This is the information that should be reported back to the executives that pay the bills: the past cost, future projected costs, and unit cost “per click” or “per transaction”, as your case warrants. The challenge here is in the model itself - statistical methods are not foolproof, and the size of the sample (in this case I recommend the entire population, not a smaller sample) is key.

    References and Tools
    Articles:
    http://blogs.msdn.com/b/patrick_butler_monterde/archive/2010/02/10/windows-azure-billing-overview.aspx
    http://technet.microsoft.com/en-us/magazine/gg213848.aspx
    http://blog.codingoutloud.com/2011/06/05/azure-faq-how-much-will-it-cost-me-to-run-my-application-on-windows-azure/
    http://blogs.msdn.com/b/johnalioto/archive/2010/08/25/10054193.aspx
    http://geekswithblogs.net/iupdateable/archive/2010/02/08/qampa-how-can-i-calculate-the-tco-and-roi-when.aspx
    Other Tools:
    http://cloud-assessment.com/
    http://communities.quest.com/community/cloud_tools
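
    To make the components / units / cost-per-unit matrix above concrete, here is a small illustrative sketch. The component names follow the list in the post, but the unit prices and usage figures are placeholders invented for the example, not current Azure rates.

        -- Hypothetical cost matrix; prices and usage below are made-up placeholders
        CREATE TABLE #CostModel (
            AzureComponent varchar(50),
            UnitOfMeasure  varchar(50),
            CostPerUnit    decimal(10,4),
            EstimatedUnits decimal(10,2)
        );
        INSERT INTO #CostModel VALUES ('Compute instance', 'instance-hour',  0.1200, 1440.00);
        INSERT INTO #CostModel VALUES ('Storage',          'GB per month',   0.1500,   50.00);
        INSERT INTO #CostModel VALUES ('Transactions',     '10K operations', 0.0100,  500.00);
        INSERT INTO #CostModel VALUES ('Bandwidth',        'GB out',         0.1500,   20.00);

        SELECT AzureComponent,
               CostPerUnit * EstimatedUnits              AS TotalCostPerComponent,
               SUM(CostPerUnit * EstimatedUnits) OVER () AS CumulativeCost
        FROM #CostModel;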

    Read the article

  • Creation of the Blog

    - by Wesley Faria
    Hello everyone! This is my first post on the blog, and in this first contact I would like to invite everyone to take part in this new blog dedicated to Solaris and SPARC technology. My intent when I created the blog was to build a bridge between this fantastic operating system and the audience that wants to know a little more about where and how Solaris can make their lives easier. Soon I will be posting tips, news, links and tutorials.

    Read the article

  • SQL Developer Data Modeler v3.3 Early Adopter: Link Model Objects Across Designs

    - by thatjeffsmith
    This is the third post in our “What’s New in SQL Developer Data Modeler v3.3” series: SQL Developer Data Modeler now allows you to link objects across models. If you need to catch up on the earlier posts, here are the first two: New and Improved Search, and Collaborative Design via Excel. Today’s post is a very simple and straightforward discussion on how to share objects across models and designs.

    In previous releases you could easily copy and paste objects between models and designs. Simply select your object, right-click and select ‘Copy’. Once copied, paste it into your other designs and then make changes as required. Once you paste the object, it is no longer associated with the source it was copied from. You are free to make any changes you want in the new location without affecting the source material. And it works the other way as well – make any changes to the source material and the new object is also unaffected.

    However. What if you want to LINK a model object instead of COPYING it? In version 3.3, you can now do this. Simply drag and drop the object instead of copying and pasting it. Select the object, in this case a relational model table, and drag it to your other model. It’s as simple as it sounds; here’s a little animated GIF to show you what I’m talking about.

    [Animated GIF: drag and drop between models/designs to LINK an object]

    Notes:
    - The ‘linked’ object cannot be modified from the destination space.
    - Updating the source object will propagate the changes forward to wherever it’s been linked.
    - You can drag a linked object to another design, so dragging from A - B and then from B - C will work.
    - Linked objects are annotated in the model with a ‘Chain’ bitmap, indicating the object has been linked from another design/model and cannot be modified.

    A very simple feature, but I like the flexibility here. Copy and paste = new independent object. Drag and drop = linked object.

    Read the article

  • SQL SERVER – Create a Very First Report with the Report Wizard

    - by Pinal Dave
    This example is from the book Beginning SSRS by Kathi Kellenberger. Supporting files are available with a free download from the www.Joes2Pros.com web site.

    What is the Report Wizard? In today’s world automation is all around you. Henry Ford began building his Model T automobiles on a moving assembly line a century ago and changed the world. The moving assembly line allowed Ford to build identical cars quickly and cheaply. Henry Ford said in his autobiography “Any customer can have a car painted any color that he wants so long as it is black.” Today you can buy a car straight from the factory with your choice of several colors and with many options like back up cameras, built-in navigation systems and heated leather seats. The assembly lines now use robots to perform some tasks along with human workers. When you order your new car, if you want something special, not offered by the manufacturer, you will have to find a way to add it later. In computer software, we also have “assembly lines” called wizards. A wizard will ask you a series of questions, often branching to specific questions based on earlier answers, until you get to the end of the wizard. These wizards are used for many things, from something simple like setting up a rule in Outlook to performing administrative tasks on a server. Often, a wizard will get you part of the way to the end result, enough to get much of the tedious work out of the way. Once you get the product from the wizard, if the wizard is not capable of doing something you need, you can tweak the results.

    Create a Report with the Report Wizard. Let’s get started with your first report! Launch SQL Server Data Tools (SSDT) from the Start menu under SQL Server 2012. Once SSDT is running, click New Project to launch the New Project dialog box. On the left side of the screen expand Business Intelligence and select Reporting Services. Configure the properties, making sure to select Report Server Project Wizard as the project type and to save the project in the C:\Joes2Pros\SSRSCompanionFiles\Chapter3\Project folder. Click OK and wait for the Report Wizard to launch. Click Next on the Welcome screen.

    On the Select the Data Source screen, make sure that New data source is selected. Type JProCo as the data source name. Make sure that Microsoft SQL Server is selected in the Type dropdown. Click Edit to configure the connection string on the Connection Properties dialog box. If your SQL Server database server is installed on your local computer, type in localhost for the Server name and select the JProCo database from the Select or enter a database name dropdown. Click OK to dismiss the Connection Properties dialog box. Check Make this a shared data source and click Next.

    On the Design the Query screen, you can use the query builder to build a query if you wish. Since this post is not meant to teach you T-SQL queries, you will copy all queries from files that have been provided for you. In the C:\Joes2Pros\SSRSCompanionFiles\Chapter3\Resources folder open the sales by employee.sql file. Copy and paste the code from the file into the Query string text box. Click Next. On the Select the Report Type screen, choose Tabular and click Next.

    On the Design the Table screen, you have to figure out the groupings of the report. How do you do this? Well, you often need to know a bit about the data and report requirements. I often draw the report out on paper first to help me determine the groups. In the case of this report, I could group the data several ways. Do I want to see the data grouped by Year and Month? Do I want to see the data grouped by Employee or Category? The only thing I know for sure about this ahead of time is that the TotalSales goes in the Details section. Let’s assume that the CIO asked to see the data grouped first by Year and Month, then by Category. Let’s move the fields to the right-hand side. This is done by selecting Page >, Group >, or Details >, and then clicking Next.

    On the Choose the Table Layout screen, select Stepped and check Include subtotals and Enable drilldown. On the Choose the Style screen, choose any color scheme you wish (unlike the Model T) and click Next. I chose the default, Slate. On the Choose the Deployment Location screen, change the deployment folder to Chapter 3 and click Next. At the Completing the Wizard screen, name your report Employee Sales and click Finish.

    After clicking Finish, the report and a shared data source will appear in the Solution Explorer and the report will also be visible in Design view. Click the Preview tab at the top. This report expects the user to supply a year which the report will then use as a filter. Type in a year between 2006 and 2013 and click View Report. Click the plus sign next to the Sales Year to expand the report to see the months, then expand again to see the categories and finally the details. You now have the assembly line report completed, and you probably already have some ideas on how to improve the report.

    Tomorrow’s Post: Tomorrow’s blog post will show how to create your own data sources and data sets in SSRS. If you want to learn SSRS in simple, easy words, I strongly recommend you get the Beginning SSRS book from Joes 2 Pros.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Reporting Services, SSRS
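
    The actual query ships in the sales by employee.sql companion file and isn’t reproduced in the post, so purely as an illustration of the shape such a query might take, here is a hypothetical sketch. The table and column names are assumptions for the example, not the real JProCo schema; @SalesYear stands in for the year parameter the finished report prompts for.

        -- Hypothetical sketch only; the real query comes from sales by employee.sql
        SELECT  YEAR(o.OrderDate)            AS SalesYear,
                DATENAME(month, o.OrderDate) AS SalesMonth,
                c.CategoryName               AS Category,
                e.LastName                   AS Employee,
                SUM(o.OrderTotal)            AS TotalSales
        FROM    dbo.[Order] o
                JOIN dbo.Employee e ON e.EmployeeID = o.EmployeeID
                JOIN dbo.Category c ON c.CategoryID = o.CategoryID
        WHERE   YEAR(o.OrderDate) = @SalesYear   -- becomes the report's year parameter
        GROUP BY YEAR(o.OrderDate), DATENAME(month, o.OrderDate), c.CategoryName, e.LastName;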

    Read the article

  • CryptographicException: The handle is invalid.

    - by Wil Peck
    More than once I have come across an issue where we have had a problem using an X509Cert from the certificate store. Everything is configured properly in the certificate store, but when we attempt to create the signature we end up with a cryptographic exception for no apparent reason. The CryptographicException: The handle is invalid post by Benoit Martin explains the problem and shows how this issue can be resolved. Technorati Tags: Exceptions, Help, Cryptography

    Read the article

  • Download ASP.NET MVC Source Code

    - by Editor
    From Scott Guthrie’s blog: Last month I blogged about our ASP.NET MVC Roadmap. Two weeks ago we shipped the ASP.NET Preview 2 Release. Phil Haack from the ASP.NET team published a good blog post about the release here. Scott Hanselman has created a bunch of great ASP.NET MVC tutorial videos [...]

    Read the article

  • Using polygons instead of quads on Cocos2d

    - by rraallvv
    I've been looking under the hood of Cocos2d, and I think (please correct me if I'm wrong) that although working with quads is a key feature of the engine, it shouldn't be difficult to make it work with arrays of vertices (aka polygons) instead of quads, a quad being just a special case of an array of four vertices. By the way, does anyone have any code that makes Cocos2d render a texture-filled polygon inside a batch node? The code posted here (http://www.cocos2d-iphone.org/forum/topic/8142/page/2#post-89393) does a nice job rendering a texture-filled polygon, but the class doesn't work with batch nodes.

    Read the article

  • Does tempdb Get Recreated From model at Startup?

    - by Jonathan Kehayias
    In my last post, Does the tempdb Log file get Zero Initialized at Startup?, I questioned whether or not tempdb is actually created from the model database at startup. There is actually an easy way to prove that this statement, at least internally to the tempdb database, is in fact TRUE. Many thanks go out to Bob Ward (Blog | Twitter) for pointing this out after trading emails with him. To validate that tempdb is actually copied at startup from the model database, all that is necessary...(read more)
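
    The full proof is in the linked article; as a rough sketch of the general idea (my own minimal illustration, not Jonathan's exact script), you can drop a marker object into model, restart the instance, and then look for it in tempdb:

        USE model;
        CREATE TABLE dbo.TempdbCopyProof (id int NOT NULL);
        -- restart the SQL Server service, then:
        USE tempdb;
        SELECT name, create_date FROM sys.tables WHERE name = N'TempdbCopyProof';
        -- remember to drop the marker table from model again afterwards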

    Read the article

  • ColdFusion Search with Excel Sheet creation

    - by user71602
    I created a search form with two buttons - one for results on the web page and one to create an Excel sheet. Problem: when a user conducts a search with results on the web page, the user then has to select the criteria again and click the create Excel button to create the sheet. How can I work it so that the user, after conducting a search, can just click one button to create the Excel sheet? (Using "Post" for the form.)

    Read the article

  • Tagged: 5 things SQL Server should drop

    - by AaronBertrand
    I was tagged by Paul Randal (blog | twitter) last night in his latest blog post, entitled "What 5 things should SQL Server get rid of?" His top 5 pretty much coincide with my top 5, so I'll have to dig a little deeper. In no particular order: Syntax inconsistencies. This isn't really a specific thing that Microsoft should get rid of, but rather an attitude and overall approach to SQL Server's long-term development. Every time they add a feature or option to SQL Server, it seems to be implemented...(read more)

    Read the article

  • An XEvent a Day (26 of 31) – Configuring Session Options

    - by Jonathan Kehayias
    There are 7 session-level options that can be configured in Extended Events that affect the way an Event Session operates. These options can impact performance and should be considered when configuring an Event Session. I have made use of a few of these periodically throughout this month's blog posts, and in today's blog post I'll cover each of the options separately, and provide further information about their usage. Mike Wachal from the Extended Events team at Microsoft, talked...(read more)
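
    For orientation before reading the detail, all seven session-level options live in the WITH clause of CREATE EVENT SESSION. A minimal sketch follows; the values shown are simply the defaults as I recall them from the documentation, not recommendations drawn from the article.

        CREATE EVENT SESSION [SessionOptionsDemo] ON SERVER
        ADD EVENT sqlserver.sql_statement_completed
        ADD TARGET package0.ring_buffer
        WITH (
            MAX_MEMORY            = 4 MB,                     -- memory for the session's buffers
            EVENT_RETENTION_MODE  = ALLOW_SINGLE_EVENT_LOSS,  -- events may be dropped under pressure
            MAX_DISPATCH_LATENCY  = 30 SECONDS,               -- how long events wait before dispatch
            MAX_EVENT_SIZE        = 0 KB,                     -- 0 = events must fit in the buffers
            MEMORY_PARTITION_MODE = NONE,                     -- no per-NUMA/per-CPU partitioning
            TRACK_CAUSALITY       = OFF,                      -- correlate related events when ON
            STARTUP_STATE         = OFF                       -- start automatically with the instance when ON
        );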

    Read the article

  • Geek City: Growing Rows with Snapshot Isolation

    - by Kalen Delaney
    I just finished a wonderful week in Stockholm, teaching a class for Cornerstone Education. We had 19 SQL Server enthusiasts, all eager to find out everything they could about SQL Server Internals. One question came up on Thursday that I wasn't sure of the answer to. I jokingly told the student who asked it to consider it a homework exercise, but then I was so interested in the answer that I tried to figure it out myself on Thursday evening. In this post, I'll tell you what I did to try to answer the question....(read more)

    Read the article

  • Don't even believe SSMS when you think it's telling the truth

    - by fatherjack
    LiveJournal Tags: how to, ssms, tips and tricks, tsql
    This is a follow-up to my last post, How to make sure you see the truth in Management Studio, which explained that the time that you see at the bottom of a Management Studio window cannot be believed to represent the time it takes for a query to execute, as it also includes the time taken for SSMS to receive and format the data grid. Now, very soon after that went live I received a comment from Dave Ballantyne (Blog | Twitter) mentioning that having...(read more)
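
    One common way to get a timing that does not include the time SSMS spends receiving and rendering the grid (a general technique, not necessarily the one the follow-up post settles on) is to ask the server to report its own elapsed time:

        SET STATISTICS TIME ON;
        SELECT TOP (100000) c1.name
        FROM sys.all_columns AS c1 CROSS JOIN sys.all_columns AS c2;
        SET STATISTICS TIME OFF;
        -- The "SQL Server Execution Times" in the Messages tab are measured on the server,
        -- so they exclude the time SSMS takes to receive and format the result grid.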

    Read the article

  • Can I use my computer as an A2DP receiver?

    - by Pierre-Yves Gillier
    First, the problem: I'm using a Cowon MP3 player as my main music player with basic earplugs. It offers A2DP, and I'd like to have my netbook (running UNE 10.04) act as a receiver. There are some resources on the web about A2DP, but most are out of date: http://fosswire.com/post/2008/10/better-bluetooth-audio/ http://jprvita.wordpress.com/2009/12/15/1-2-3-4-a2dp-stream/ http://dpillay.wordpress.com/2010/09/27/ubuntu-10-04-a2dp-awesome-headset-music/

    Read the article

  • Responsive Design: Media query fix for IE10 on Windows Phone 8

    - by ihaynes
    Originally posted on: http://geekswithblogs.net/ihaynes/archive/2013/07/01/responsive-design-media-query-fix-for-ie10-on--windows.aspx
    The version of IE10 on Windows Phone 8 apparently has a bug which results in media queries not seeing the correct device width. This post from Devhammer explains all: http://devhammer.net/responsive-design-fix-for-windows-phone-8-device-adaptation
    I'd not noticed this on the WP8 emulator, which proves yet again that testing on real devices is essential.

    Read the article

  • Introduction to the Unreal Development Kit with Visual Studio, an article by Aymeric SUTEAU

    Hello everyone, With this post I am announcing that a first tutorial on UDK, the Unreal Development Kit, is now online. The goal of this tutorial is to get to grips with UDK in a Visual Studio environment, in order to create, configure and run a project based on the Unreal Engine 3 framework. Here is the link to the article in question: Introduction à UDK avec Visual Studio. Feel free to post your feedback on the article in this discussion. Happy reading...

    Read the article

  • Generating Report for NUnit

    - by thangchung
    All source code for this post can be found on my GitHub. Some time ago, I was asked how to generate reports of test results from NUnit. In fact, I had never done this. In my little programming world, I only cared about the test results, red-green-refactoring, and that was it. The question was quite unexpected. I knew I could use NCover to generate reports, but NCover's reports are too simple; they do not give details on the number of test cases, test methods, and so on. So I began to look into creating a useful report for NUnit. I was lucky to find an open source project here. Its authors call it NUnit2Report, but one disadvantage is that it only runs on .NET 1.0 - really old compared to the current version 4.0. I downloaded the preview but could not run it, so I opened its source code and found that it uses XSLT to convert the NUnit output from XML to HTML. Nothing really special, because I already knew that after an NUnit run an XML output file is created; the author simply uses this file to convert to HTML with XSLT. I decided to convert it to .NET 4.0, because then I would not have to code it from scratch. The conversion took some time, but in the end I had what I wanted. Thanks, Gilles, for this OSS - I will send him a mail to thank him for his efforts in putting it out as open source. Now I will show how to do it. I used NAnt for the automated build and NUnit for running the test cases, with the Selenium testing framework. After writing three test cases using Selenium, I ran NUnit and got the following results: one failure and two successes. The bin directory of the project contains the NUnit output file. Then I created a build file, and a bat file for easy running (PowerShell could be used here as well). Double-click the bat file to create the report, and finally open the index.html file in the folder to view it. As everyone can see, the test cases are divided up very clearly, which is what I required. This is really good. Once again, I really thank Gilles for NUnit2Report. He can be contacted via the mail address [email protected] or the website http://nunit2report.sourceforge.net. It really is useful for QA. Hopefully this post will help anyone interested in generating reports for NUnit.

    Read the article

  • adding custom SSIS transformation to visual studio toolbox fails

    - by ryangaraygay
    Just very recently I encountered an issue in deploying a custom SSIS component assembly which turned out to be a relative "no-brainer" error, if only the clues had been more straightforward. Basically, after deploying the assembly I could not find my component listed in the "SSIS Data Flow Items" tab list. It turns out that the assembly containing the component simply had missing or incorrect assembly references. I have outlined the steps I took that guided me in the right direction in this blog post of mine: adding custom SSIS transformation to visual studio toolbox fails

    Read the article

  • Oracle Open World 2012 is Here!

    - by thatjeffsmith
    Just a quick post today and then probably not much more until next week. Speaking, running hands-on labs, meets and greets, and trying to keep up with folks like @oraclenerd means I won’t have much time to write until I get home from San Francisco. Wanted to give a quick shout out to my co-worker and partner-in-Product Management-crime, Ashley Chen, this morning. She signed me up for a run across the Golden Gate and back with @bamcgill a few months ago…mostly with my permission. The only thing was, I didn’t run at the time, and that’s basically a 5k. But having goals is good. And yesterday I met a big goal of mine – not looking stupid trying to run across the Golden Gate Bridge. Ok, I did the run and maybe looked a little bit stupid.

    [Photo: Ashley, Barry, and I pre-run.] Perfect weather and no fog to cloud the view!

    So the pre-show fun is over and now it’s time for the show fun to begin. At Oracle Open World? Come by our demo pods. We’re with the other Database folks in the back right-hand corner. We’ll have folks on hand to talk about and show Oracle SQL Developer, Oracle SQL Developer Data Modeler, Migrations, and the Oracle APEX Listener.

    [Photo: the Oracle SQL Developer demo pod.]

    I have the full schedule of SQL Developer presentations and hands on labs here. I know there’s a lot of news on tap this week in the world of Oracle, and we’ll start talking more about it soon. So be sure to subscribe to my feed if you don’t want to miss any of my posts. And I promise not to post any more pictures of me. Speaking of pictures, thanks to @dmcghan – or as I call him, ‘Dan the Man’ – for running with us and being our official portrait photographer! If you don’t follow him, he’s a great fountain of knowledge in the Oracle APEX world and is one of our ACEs.

    Read the article

  • New Version of JMS Whitepaper

    - by ACShorten
    In the last post, I mentioned that the JMS functionality was extended to include support for JMS Topics, JMS Selectors, etc. The Oracle WebLogic JMS Integration and Oracle Utilities Application Framework whitepaper (Doc Id 1308181.1), available from My Oracle Support, has been updated to include implementation details for each of these extensions, as well as new advice on implementing JMS. The whitepaper is now available.

    Read the article

  • Separating a "wad of stuff" utility project into individual components with "optional" dependencies

    - by romkyns
    Over the years of using C#/.NET for a bunch of in-house projects, we've had one library grow organically into one huge wad of stuff. It's called "Util", and I'm sure many of you have seen one of these beasts in your careers. Many parts of this library are very much standalone, and could be split up into separate projects (which we'd like to open-source). But there is one major problem that needs to be solved before these can be released as separate libraries. Basically, there are lots and lots of cases of what I might call "optional dependencies" between these libraries. To explain this better, consider some of the modules that are good candidates to become stand-alone libraries. CommandLineParser is for parsing command lines. XmlClassify is for serializing classes to XML. PostBuildCheck performs checks on the compiled assembly and reports a compilation error if they fail. ConsoleColoredString is a library for colored string literals. Lingo is for translating user interfaces. Each of those libraries can be used completely stand-alone, but if they are used together then there are useful extra features to be had. For example, both CommandLineParser and XmlClassify expose post-build checking functionality, which requires PostBuildCheck. Similarly, the CommandLineParser allows option documentation to be provided using the colored string literals, requiring ConsoleColoredString, and it supports translatable documentation via Lingo. So the key distinction is that these are optional features. One can use a command line parser with plain, uncolored strings, without translating the documentation or performing any post-build checks. Or one could make the documentation translatable but still uncolored. Or both colored and translatable. Etc. Looking through this "Util" library, I see that almost all potentially separable libraries have such optional features that tie them to other libraries. If I were to actually require those libraries as dependencies then this wad of stuff isn't really untangled at all: you'd still basically require all the libraries if you want to use just one. Are there any established approaches to managing such optional dependencies in .NET?

    Read the article
