Search Results

Search found 35094 results on 1404 pages for 'post build'.

  • Azure - Part 4 - Table Storage Service in Windows Azure

    - by Shaun
    In the Windows Azure platform there are three storage services we can use to save our data in the cloud: Table, Blob and Queue. Before the Chinese New Year, Microsoft announced that Azure SDK 1.1 had been released and that it supports a new type of storage, Drive, which allows us to work with NTFS files in the cloud. I will cover it in the coming few posts, but for now I would like to talk a bit about Table Storage.
    Concept of the Table Storage Service
    The most common development scenario is to retrieve, create, update and remove data from a data store, and normally we do that against a database. When we move our application to the cloud, one of the most common requirements is therefore a storage service. Windows Azure provides a built-in service that allows us to store structured data, called the Windows Azure Table Storage Service. The data stored in the table service is a collection of entities, and the entities are similar to rows or records in a traditional database. An entity must have a partition key, a row key, a timestamp and a set of properties. You can treat the partition key as a group name, the row key as a primary key and the timestamp as the identifier used to resolve concurrency conflicts. Unlike a table in a database, the table service does not enforce a schema for its tables, which means you can have two entities in the same table with different property sets. The partition key is used for load balancing by the Azure OS and for entity group transactions. As you know, in the cloud you never know which machine is hosting your application and your data; it could be moved based on the transaction load and the number of requests. If the Azure OS finds that many requests hit your Book entities whose partition key equals “Novel”, it can move them to another idle machine to increase performance. So when choosing the partition key for your entities, make sure it indicates the category or group information, so that the Azure OS can perform the load balancing you expect.
    Consuming the Table
    Although the table service looks like a database, you cannot access it the way you access a database today, through ADO.NET or ODBC. The table service exposes itself through the ADO.NET Data Services protocol, which means you consume it RESTfully via HTTP requests. The Azure SDK provides a set of classes for connecting to it, and there are two classes we will need: TableServiceContext and TableServiceEntity. TableServiceContext inherits from DataServiceContext, which represents the runtime context of an ADO.NET data service. It provides four methods we mainly use. CreateQuery: creates an IQueryable instance for a given entity type. AddObject: adds the specified entity to the table service. UpdateObject: updates an existing entity in the table service. DeleteObject: deletes an entity from the table service. Before you operate on the table service you need to provide valid account information. It is something like the connection string of a database, but it uses the account name and account key you received when you created the storage service on the Windows Azure Development Portal. After getting the CloudStorageAccount you can create a CloudTableClient instance, which provides a set of methods for using the table service. A very useful method is CreateTableIfNotExist: it creates the table container for you if it does not exist yet. A sketch of this setup is shown below.
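    The following is a minimal sketch of that setup, assuming the Azure SDK 1.1 StorageClient library; the setting name DataConnectionString and the table name Account are illustrative and not taken from the original walkthrough.

    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    public static class TableSetup
    {
        public static CloudTableClient GetTableClient()
        {
            // Reads the storage account from the role configuration, e.g.
            // "UseDevelopmentStorage=true" while working against the local fabric.
            // Assumes the configuration setting publisher has been set up in
            // WebRole.cs, as described later in this post.
            CloudStorageAccount account =
                CloudStorageAccount.FromConfigurationSetting("DataConnectionString");

            CloudTableClient tableClient = account.CreateCloudTableClient();

            // Creates the table container only if it does not exist yet.
            tableClient.CreateTableIfNotExist("Account");

            return tableClient;
        }
    }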
    And then you can operate on the entities in that table through the methods I mentioned above. Let me explain a bit more through an example; we all prefer code to prose.
    Straightforward Access to the Table
    Here I would like to build a WCF service on the Windows Azure platform with, for now, just one requirement: it should allow the client to create an account entity in the table service. The WCF service has a method named Register that accepts an instance of the account the client wants to create. After performing some validation it adds the entity to the table service. So the first thing I did was create a Cloud Application in my Visual Studio 2010 RC. (The Azure SDK 1.1 only supports VS2008 and VS2010 RC.) The solution looks like the one below. Then I added a configuration item for the storage account through the Settings section of the cloud project. (Double-click the service file under the Roles folder and navigate to the Settings section.) This setting will be used when retrieving my storage account information. Since I am still in the development phase, I selected “UseDevelopmentStorage=true”. Then I navigated to the WebRole.cs file under my WCF project. If you have read my previous posts you will know that this file defines what happens when the application starts and terminates on the cloud. What I need to do is, when the application starts, set the configuration publisher to load my configuration file using the configuration name I specified, so the code looks like the snippet below. I removed the original service and contract created by the VS template and added my IAccountService contract and its implementation class, AccountService. I added the service method Register with the parameters email and password; it returns a boolean value to indicate the result, which is very simple. At this moment, if I press F5 the application is deployed to my local development fabric and I can see in the browser that my service runs well. Let's implement the service method Register, which adds a new entity to the table service. As I said before, the entities you want to store in the table service must have three properties: partition key, row key and timestamp. You could create a class with these three properties yourself, but the Azure SDK provides a base class for that, named TableServiceEntity, in the Microsoft.WindowsAzure.StorageClient namespace. So what we need to do is simpler: create a class named Account and derive it from TableServiceEntity. Then I add my own properties: Email, Password, DateCreated and DateDeleted. DateDeleted is a nullable date/time value to indicate whether this entity has been deleted and when. Did you notice that I missed something here? Yes, it's the partition key and the row key that I didn't assign. The TableServiceEntity base class defines two constructors: one is a parameter-less constructor that is used to fill the properties with values from the table service when retrieving data; the other takes two parameters, the partition key and the row key. As I said earlier, the partition key affects load balancing and the row key must be unique, so here I use the email as the partition key and the email plus a GUID as the row key. OK, now we have finished the entity class that we want to store in the table service. The next step is to create a data access class to add it. The Azure SDK gives us a base class for this, named TableServiceContext, as I mentioned earlier. So let's create a class for operating on the Account entities.
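    Before building that class, here is a minimal sketch of the Account entity just described, again assuming the Azure SDK 1.1 StorageClient library; the exact row key format shown is only one possible choice.

    using System;
    using Microsoft.WindowsAzure.StorageClient;

    public class Account : TableServiceEntity
    {
        // Parameter-less constructor used by the table service when it
        // materializes entities from query results.
        public Account()
        {
        }

        // Email as the partition key, email plus a GUID as the row key.
        public Account(string email)
            : base(email, string.Format("{0}_{1}", email, Guid.NewGuid()))
        {
            Email = email;
            DateCreated = DateTime.UtcNow;
        }

        public string Email { get; set; }
        public string Password { get; set; }
        public DateTime DateCreated { get; set; }

        // Nullable, so we can tell whether (and when) the entity was deleted.
        public DateTime? DateDeleted { get; set; }
    }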
    The TableServiceContext needs the storage account information for its constructor: the combination of the storage service URI that we create on the Windows Azure platform and the corresponding account name and key. The TableServiceContext uses this information to find the related address and to verify the account before operating on the storage entities. Hence in my AccountDataContext class I need to override this constructor and pass the storage account into it. All entities are saved in table storage in one or more tables, which we call “table containers”. Before we operate on an entity we need to make sure the table container has been created in the storage account. There is a method we can use for that: CloudTableClient.CreateTableIfNotExist. So in the constructor I call it first, to make sure every other method is invoked only after the table has been created. Notice that I passed the storage account endpoint URI and the credentials to specify where my storage is located and who I am. Another piece of advice: make your entity class name the same as the table name when creating the table. It improves performance when you operate on it over the cloud, especially when querying. Since the Register WCF method adds a new account to the table service, I created a corresponding method to add the account entity. Before implementing it, I added a reference to System.Data.Services.Client to the project. This reference provides the common methods of the ADO.NET Data Services client that can be used with the Windows Azure Table Service; I use its AddObject method to create my account entity. Since the table service does not fully implement ADO.NET Data Services, some methods in System.Data.Services.Client, such as AddLinks, are not supported by TableServiceContext. Then I implemented the service method to add the account entity through the AccountDataContext. You can see that in the service implementation I load the storage account information through my configuration file and create the account table entity from the parameters. Then I create the AccountDataContext; if it is the first time this method is invoked, the constructor of the AccountDataContext creates the table container for me. Then I use the Add method to add the account entity to the table. A rough sketch of this data context and service method is shown below. Next, let's create a fairly simple client application to test this service. I created a Windows console application and added a service reference to my WCF service. The metadata of the WCF service cannot be retrieved once it is deployed on Windows Azure, even though <serviceMetadata httpGetEnabled="true"/> has been set. If we need its metadata, we can deploy it on the local development service and then change the endpoint to the address in the cloud. In the client-side app.config file I specified the endpoint as the local development fabric address, and then implemented the client so that I can enter an email and a password and invoke the WCF service to add my account. Let's run the application and see the result. Of course it returns TRUE to me, and in the local SQL Express instance I can see that the data has been saved in the table.
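    The data context and service method described above could look roughly like the sketch below. It is based on the Azure SDK 1.1 StorageClient library, the validation logic is omitted, and DataConnectionString is an illustrative setting name.

    using System.ServiceModel;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    [ServiceContract]
    public interface IAccountService
    {
        [OperationContract]
        bool Register(string email, string password);
    }

    public class AccountDataContext : TableServiceContext
    {
        public AccountDataContext(CloudStorageAccount account)
            : base(account.TableEndpoint.AbsoluteUri, account.Credentials)
        {
            // Make sure the table container exists before any other operation.
            account.CreateCloudTableClient().CreateTableIfNotExist("Account");
        }

        public void Add(Account account)
        {
            // AddObject and SaveChanges come from the underlying
            // ADO.NET Data Services client (System.Data.Services.Client).
            AddObject("Account", account);
            SaveChanges();
        }
    }

    public class AccountService : IAccountService
    {
        public bool Register(string email, string password)
        {
            // Validation omitted for brevity.
            CloudStorageAccount storageAccount =
                CloudStorageAccount.FromConfigurationSetting("DataConnectionString");

            var context = new AccountDataContext(storageAccount);
            context.Add(new Account(email) { Password = password });
            return true;
        }
    }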
    Summary
    In this post I explained more about the Windows Azure Table Storage Service. I also created a small application to demonstrate how to connect to it and consume it through the ADO.NET Data Services managed library provided with the Azure SDK. I only showed how to create an entity in the storage service. In the next post I will explain how to query entities with conditions through LINQ, and I will refactor my AccountDataContext class to make it dynamic for any kind of entity.
    Hope this helps, Shaun
    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

  • Oracle Announces Oracle Cloud Office and Oracle Open Office 3.3

    - by Paulo Folgado
    Oracle today introduced Oracle Cloud Office and Oracle Open Office 3.3, two complete, open standards-based office productivity suites for the desktop, web and mobile devices - helping users significantly improve productivity, reduce costs and achieve greater innovation across the enterprise. Oracle Cloud Office 1.0 is a web and mobile office suite that enables web 2.0-style collaboration and mobile document access. Compatibility with Microsoft Office and integration with Oracle Open Office enable rich and seamless offline editing of complex presentations, text and spreadsheet documents. Oracle Open Office 3.3 includes new enterprise connectors to Oracle Business Intelligence, Oracle E-Business Suite, other Oracle Applications and Microsoft SharePoint, to allow for fast, seamless integration into existing enterprise software stacks. In addition, it adds increased stability, compatibility and performance at up to five times lower license cost compared to Microsoft Office. Based on the Open Document Format (ODF) and open web standards, Oracle Office enables users to share files on any system as it is compatible with both legacy Microsoft Office documents and modern web 2.0 publishing. The Oracle Office APIs and open standards-based approach provide IT users with flexibility, lower short- and long-term costs and freedom from vendor lock-in - enabling organizations to build a complete Open Standard Office Stack. If you're interested in learning more, read today's press release or visit oracle.com/office.

  • Oracle Announces Oracle Cloud Office and Oracle Open Office 3.3

    - by Harald Behnke
    Oracle today introduced Oracle Cloud Office and Oracle Open Office 3.3, two complete, open standards-based office productivity suites for the desktop, web and mobile devices - helping users significantly improve productivity, reduce costs and achieve greater innovation across the enterprise. Oracle Cloud Office 1.0 is a web and mobile office suite that enables web 2.0-style collaboration and mobile document access. Compatibility with Microsoft Office and integration with Oracle Open Office enable rich and seamless offline editing of complex presentations, text and spreadsheet documents. Oracle Open Office 3.3 includes new enterprise connectors to Oracle Business Intelligence, Oracle E-Business Suite, other Oracle Applications and Microsoft SharePoint, to allow for fast, seamless integration into existing enterprise software stacks. In addition, it adds increased stability, compatibility and performance at up to five times lower license cost compared to Microsoft Office. Based on the Open Document Format (ODF) and open web standards, Oracle Office enables users to share files on any system as it is compatible with both legacy Microsoft Office documents and modern web 2.0 publishing. The Oracle Office APIs and open standards-based approach provide IT users with flexibility, lower short- and long-term costs and freedom from vendor lock-in - enabling organizations to build a complete Open Standard Office Stack. If you're interested in learning more, read today's press release or visit oracle.com/office.

  • AVerMedia A309-B mini-PCI to AVerMedia A317 Mini PCI card

    - by Chris
    I have an HP Pavilion hdx 16 1060ED laptop (Windows Vista) with a DVB-T tuner card. Now I would like a hybrid or analog tuner card in it. According to the HP data for a more expensive variant, an AVerMedia A317 Mini PCI card is installed in that model. My system has an AVerMedia A309-B mini-PCI card in it. My questions: 1 - is it possible to replace it with the more expensive one (the AVerMedia A317 Mini PCI card)? 2 - what will this cost? 3 - can I install it myself, and what can I do with the old card? I'd like to hear from you.

  • Webby Nominations for Retail

    - by David Dorf
    The Webby Awards were created back in 1996 when the internet was just a baby. This is their 14th year of highlighting excellence on the Web, and there are lots of great nominations. It's quite amazing to see the rich content and interactivity of today's websites. Some interesting nominees for this year are: Sephora did a campaign at Christmas, and what remains of the Sephora Clause website is a bunch of wishes. The Starbucks "All you need is Love" campaign has lots of cool videos. The Sound Check from Walmart highlights rising music artists. Refinery29 has their fashion info hub. The five nominees in the retail category are: Bugaboo.com's website for selling high-end baby strollers. If you're looking for a high-end bag, check out Crumpler's Flash-based e-commerce site. It's highly interactive, but a little on the slow side. I Make My Case sells custom-designed iPod/iPhone and Blackberry cases. At MOO.com, they love to print: tons of art for printing customized business cards, post cards, etc. If you want light shoes, check out Puma L.I.F.T. and see just how light shoes can be. Check them out, cast a few votes, and see if you're inspired to create something even better.

  • Static IP on FEDORA12 from Virtualbox

    - by Diego Castro
    I'm trying to get my Fedora 12 to have a static IP - inside VirtualBox - inside Ubuntu. Let me rephrase that: I have an Ubuntu 9.04 system with VirtualBox and a Fedora 12 VM there, and I would like to give the Fedora VM a static IP (Amahi needs it), but I'm getting stuck... I'm using NAT (if that's any help). I tried a few tutorials, but no go. I'm kind of new to the *nix world but I'm old school on M$ Edit: Screenshots UBUNTU 9.04 (host that has the vm) hxxp://pic.imagefap.com/images/full/43/154/1548751086.jpg FEDORA hxxp://pic.imagefap.com/images/full/43/205/2050216515.jpg hxxp://pic.imagefap.com/images/full/43/118/1182276176.jpg (sorry, can't post pics... not enough rep)

  • MySQL error: Can't find symbol '_mysql_plugin_interface_version_' in library

    - by Josh Smith
    The boring, necessary details: I'm on Snow Leopard running MySQL locally. I'm trying to install the Sphinx engine for MySQL like so: mysql> install plugin sphinx soname 'sphinx.so'; ERROR 1127 (HY000): Can't find symbol '_mysql_plugin_interface_version_' in library I've Googled everywhere and can't seem to find an actual solution to this problem. For example this issue on the Sphinx forums seems unresolved. Someone else also raised this issue with similar results. The first post linked to an O'Reilly article which says: There is a common problem that might occur at this point: ERROR 1127 (HY000): Can't find symbol '_mysql_plugin_interface_version_' in library If you see a message like this, it is likely that you forgot to include the -DMYSQL_DYNAMIC_PLUGIN option when compiling the plugin. Adding this option to the g++ compile line is required to create a dynamically loadable plug-in. But the article ends on that point; I have no idea what this means or how to resolve the issue.

  • Upgrading Sharepoint MOSS 2007 Farm to Sharepoint 2010 "waiting to get a lock to upgrade the farm"

    - by Wes Weeks
    My first in-place upgrade of a MOSS 2007 farm to SharePoint 2010 went pretty smoothly. I read the pre-upgrade documentation and was comfortable with the steps.  Since it was a fairly new installation of MOSS, changes were minimal and I wasn't anticipating too many problems. The one issue I hit came after installing the software on all of the farm servers.  I went to the first machine, which ran SharePoint 2010 Central Administration, and ran the SharePoint 2010 Products Configuration Wizard.  I received the message that I would need to run the configuration on each server in the farm.  Fair enough, I expected as much. The wizard completed without issue on the first server, but when I tried to run it on the others it hung with a "waiting to get a lock to upgrade the farm" message.  It hung for about 10 minutes and then the wizard failed.  Did a few searches on Google and Bing and got 0 results for that message.  None, nothing, zilch.  I'm on my own... For grins, I hit the help button on the configuration wizard and it seemed to indicate that the configuration wizard needed to be run on all farm servers simultaneously.  I started it again on the first server up to the point where I got the message about needing to run it on all servers in the farm, and then started the wizard on the other servers and ran them to that point as well.  I then clicked OK on the first server and then on the subsequent servers. It took a while and it did hang on the lock message for some time, but then it kicked off and completed successfully on all of them.  Yeah! Hope this helps someone else!  Now there should be at least one post with this error message in it!

  • How to cache dynamic javascript/jquery/ajax/json content with Akamai

    - by Starfs
    Trying to wrap my head around how things are cached on a CDN; it is new territory for me. In the document we received about sending in environment requests, it says "Dynamically-generated content will not benefit much from EdgeSuite". I feel like this is a simplified statement, and there has to be a way to cache dynamically generated content if the tools are configured correctly. The site we are working with runs off a WordPress database, and uses JavaScript and Ajax to build the pages, based on the JSON objects that PHP scripts have generated. The process: the user's browser requests the URL, the browser talks to the EdgeSuite tools, which will have cached certain pre-defined elements, and then requests from the host web server anything that is not cached; once EdgeSuite has compiled a combination of the two, it sends that information back to the browser. Can we not simply cache all JSON objects (and of course images, JS, CSS) so that the web browser never has to hit the host server's database, at which point, in essence, we have cached our dynamic content? Does anyone have any pointers on the most efficient configuration for this type of system -- Akamai/CDN -- to serve JavaScript/Ajax/JSON-generated pages that ideally already hit pre-cached JSON data? Any and all feedback is welcome!

  • CodePlex Daily Summary for Thursday, February 25, 2010

    CodePlex Daily Summary for Thursday, February 25, 2010New ProjectsAptusSoftware.Threading: AptusSoftware.Threading is a class library designed primarily to assist in the development of multi-threaded WinForm applications, although there i...AxiomGameDesigner: It is going to be a universal scene editor for Axiom 3D game engine. It is in pure C# and will be kept portable to MONO for compatibility with linu...Badger - Unity Productivity Extensions: A set of Microsoft Unity Extensions. Why Badger? Because I love badgers.Business & System Analysis Templates and Best Practices for Russian: http://saway.codeplex.com/Conectayas: Conectayas is an open source "Connect Four" alike game but transformable to "Tic-Tac-Toe" and to a lot of similar games that uses mouse. Written in...FastCode: .NET 3.5 Extensions set to increase coding speed.Hundiyas: Hundiyas is an open source "Battleship" alike game totally written in DHTML (JavaScript, CSS and HTML) that uses mouse. This cross-platform and cro...Icelandic Online Banking: Icelandic Online Banking is project defining a web service interface for online banking.IE8 AddOns XML Creator: Application that helps on creating the xml files for IE8 Accelerators, Search Providers and the markup for Web Slices.iKnowledge: a asp.net mvc demoLearn ASP.NET MVC: Learn ASP.NET MVC is a project for the members of the Peer Learning group in Silicon Valley. It contains the SportsStore solution from the Pro ASP...Live at Education Meta Web-Service: Live at Education Meta Web-Service is intended to abstract from several technologies that are included in Live@edu set of services. This web-ser...Low level wave sound output for VB.NET: Low level sound output class for VB.NET using platform invocation services to call winmm.dllMailQ: MailQ makes it easier for developers to send mail messages from an application. The system sends mails based on a database queue system (store, se...Managed DXGI: Managed DXGI library is Fully managed wrapper writen on C# for DXGI 1.0 and 1.1 technology. It makes easier to support DXGI in managed application....Multivalue AutoComplete WinForms TextBox in C#: This project is a sample application that demonstrates how to create a multivalue WinForms textbox in C# using .NET Framework 3.5.Nifty CSharp Tools: Nifty CSharp Tools, will contain various tools and snippets. IRCBot, splashscreens, linq, world of warcraft log parsing, screenshot uploaders, twi...PHP MPQ: A port of StormLib to PHP for handling Blizzard MPQ files.RedDevils strategy - Project Hoshimi Programming Battle: Source Code of RedDevils strategy. Imagine Cup 2008 - Project Hoshimi Programming Battle.RNUNIT: rNunit is a distributed Nunit project. Many application these days are client-server application, distributed application and regular unit testing ...Samar Solution: Samar Solutions is a business system for office automation.Silverlight OOMRPG Game Engine: Silverlight OOMRPG Game EngineSimulator: GPSSimulatorSLARToolkit - Silverlight Augmented Reality Toolkit: SLARToolkit is a flexible Augmented Reality library for Silverlight with the aim to make real time Augmented Reality applications with Silverlight ...Spiral Architecture Driven Development (SADD) for Russian: Это русская версия сайта sadd.codeplex.comSQLSnapshotManager: Easily manage SQL Server database snapshots in a easy to use visual interface.Twilio with VB.NET MVC: Twilio with VB.NET MVC is a sample application for developing with Twilio's REST based telephony API. 
It includes an XML Schema of the TwiML respon...Ultra Speed Dial: UltraSpeedDial.com - Online Speed Dial Page.Visual HTML Editor justHTML: justHTML - is simle windows-application WYSIWYG editor that allow everyone - without any knowledge of HTML - to create and edit web-pages. It supp...WinMTR.NET: .NET Clone of the popular Windows clone of the popular Linux Matt's TracerouteWPF Dialogs: "WPF Dialogs" is a library for different Dialogs in WPF (e.g. FolderBrowseDialog, SaveFileDialog, OpenFileDialog etc.). These Dialogs are written i...WPFLogin: A small Login window in WPF and C#XNA PerformanceTimers: CPU Timers for Windows and Xbox360. Can track multiple threads, and presents output as a log on-screen.New ReleasesAptusSoftware.Threading: 2.0.0: First public release. This release is in production as part of several commercial applications and is stable. The source code download includes a...BizTalk Software Factory: BizTalk Software Factory v2.1: This is a service release for the BizTalk Software Factory for BizTalk Server 2009, containing so far: Fix for x64: the SN.EXE tool is now locate...Business & System Analysis Templates and Best Practices for Russian: R00 The Place reserver: Just to reserve the place Will be filled out soonChronos WPF: Chronos v1.0 Beta 2: Added a new SplashScreen Added a new Login View and implemented Log Off Added a new PasswordBoxHelper (http://www.codeproject.com/Articles/371...dotNetTips: dotNetTips.Utility 3.5 R2: This is a new release (version 3.5.0.3) compatible with .NET 3.5. Lots of new classes/features!! Requires SP1 if using the Entity Framework extensi...fleXdoc: template-based server-side document generator (docx): fleXdoc 1.0 beta 3: The third and final beta of fleXdoc. fleXdoc consists of a webservice and a (test)client for the service. Make sure you also download the testclien...FluentPS: FluentPS v1.0: - FluentPS is moved from ASMX to WCF interface of the Project Server Interface (PSI) - Impersonation changes to work in compliance with WCF interfa...FolderSize: FolderSize.Win32.1.0.4.0: FolderSize.Win32.1.0.3.0 A simple utility intended to be used to scan harddrives for the folders that take most place and display this to the user...iTuner - The iTunes Companion: iTuner 1.1.3707 Beta 3: As promised, the iTuner Automated Librarian is now available. This automatically cleans an entire album of dead tracks and duplicates as tracks ar...Live at Education Meta Web-Service: LAEMWS v 1.0 beta: Release Candidate for LAEMWS.Macaw Reusable Code Library: LanguageConfigurationSolution: This Solution helps developing a multi language publishing web siteManaged DXGI: Initial Release.: Base declaration of interfaces, most of them untested yet.Math.NET Numerics: 2010.2.24.667 Build: Latest alpha buildMiniTwitter: 1.08.1: MiniTwitter 1.08.1 更新内容 変更 インクリメンタル検索時には大文字小文字の区別をしないように変更 クライアント名の表示を本家にあわせて from から via に変更 修正 公式 RT 時にステータスが上に表示されたり二重に表示されるバグを修正 自分が自分へ返信...Multivalue AutoComplete WinForms TextBox in C#: 1.0 First public release: Multivalue autocomplete textbox control and host application in this release are released in a single Visual Studio 2008 projects. See my related b...NMock3: NMock3 - Beta3, .NET 3.5: This release has some exciting new features. Please start providing feedback on the tutorials. The first several are complete and the rest are no...nxAjax - an asp.net ajax library using jQuery: nxAjax v3 codeplex 7: nxAjax v3 codeplex 7 binary and test website. 
Bug Fixed: ajax:Form control Add: Drag and drop Rewritten: DragnDropManager DragPanel DropPan...Office Apps: 0.8.7: whats new? Document.Editor and Document.Viewer now supports FlowDocument (.xaml) files bug fix'sPDF Rider: PDF Rider 0.3: Application PrerequisitesMicrosoft Windows Operating Systems (XP (tested) - Vista - 7) Microsoft .NET Framework 3.5 runtime A PDF rendering sof...ShellLight: ShellLight 0.1.0.1 Src: Codeplex project released. This is only a preview of the product. Until the first final release there will be many improvements.Silverlight OOMRPG Game Engine: SilverlightGameTutorialSolution v1.01: Please visit my blog for Silverlight OOMROG Game Tutorial: http://www.cnblogs.com/Jax/archive/2010/02/24/1673053.html.Simple Savant: Simple Savant v0.4: Added support for full-text indexing (See Full-Text Indexing) Added support for attribute spanning and compression for property values larger tha...Spiral Architecture Driven Development (SADD) for Russian: R00: R00 to reserve site nameTeamReview - TFS Code Review: Release 1.1.3: Release Features New expanded product positioning for capturing any targeted coding work as a trackable, assignable, reportable Work Item for any r...Text Designer Outline Text Library: 10th minor release: Version 0.3.1 (10th minor release)Fixed the gradient brush being too big for the text, resulting in not much gradient shown in the text. Gradient...TFS Workflow Control: TeamExplorer and TSWA control 1.0 for TFS 2010 RC: This is a special version for TFS 2010 RC. Use the RC version of the power tools to modify the layout of your work items (http://visualstudiogaller...thinktecture WSCF.blue: WSCF.blue V1 Update (1.0.7) - VS2010 RC Support: This update adds support for Visual Studio 2010 RC in addition to Visual Studio 2008. Please note that Visual Studio 2010 Beta 2 is NOT supported a...Tumblen3: tumblen3 Version 25Feb2010: ready for Twitter's xAuthUMD文本编辑器: UMDEditor文本编辑器V2.1.0: 2.1.0 (2010-02-24) 增加查找章节内指定文本内容的功能 2.0.4 (2010-02-06) 章节内容框增加右键菜单,包含编辑文本的基本操作 ------------------------------------------------------- 执行 reg.bat ...VCC: Latest build, v2.1.30224.0: Automatic drop of latest buildVisual HTML Editor justHTML: Latest binary: Latest buid here. Executable and mshtml.dll included in this archive. Ready to use ;)Visual HTML Editor justHTML: Source code for version 2.5: Visual studio 2008 project with full source code.VOB2MKV: vob2mkv-1.0.2: The release vob2mkv-1.0.2 is a feature update of the VOB2MKV project. It now includes a DirectShow source filter, MKVSOURCE. 
A source filter allo...WinMTR.NET: V 1.0: V 1.0WPF Dialogs: Version 0.1.0: Version 0.1.0 FolderBrowseDialog is implementet for more information look here Version 0.1.0 (german: Version 0.1.0 - Deutsch).WPF Dialogs: Version 0.1.1: Version 0.1.1 Features FolderBrowseDialog was extended / FolderBrowseDialog - Deutsch wurde erweitertXNA PerformanceTimers: XNA PerformanceTimers 0.1: Initial release.Zeta Resource Editor: Release 2010-02-24: Added HTTP proxy server support.Most Popular ProjectsASP.NET Ajax LibraryManaged Extensibility FrameworkWindows 7 USB/DVD Download ToolDotNetZip LibraryMDownloaderVirtual Router - Wifi Hot Spot for Windows 7 / 2008 R2MFCMAPIDroid ExplorerUseful Sharepoint Designer Custom Workflow ActivitiesOxiteMost Active ProjectsDinnerNow.netBlogEngine.NETRawrInfoServiceSLARToolkit - Silverlight Augmented Reality ToolkitNB_Store - Free DotNetNuke Ecommerce Catalog ModuleSharpMap - Geospatial Application Framework for the CLRjQuery Library for SharePoint Web ServicesRapid Entity Framework. (ORM). CTP 2Common Context Adapters

  • How to install and use Tally ERP 9 in Wine

    - by askmish
    I am trying to run Tally ERP 9 (http://www.tallysolutions.com/), post-installation, in Wine 1.7. I am getting an "out of memory" error the moment it starts (at the Tally splash screen). I then have no option other than to close it. OS: Xubuntu 12.04.3 Wine: 1.6\1.7.9 How I installed it: Inserted the Tally installation CD and clicked on the Tally installer executable. Installed by following the setup. Configured for multi-user, client configuration. The server is at a specified LAN address running on Windows. Then, once installation was done, I clicked on the executable / ran tally.exe from the command line. Please guide.

  • importing a VCard in the address book , objective C [migrated]

    - by user1044771
    I am designing a QR code reader, and it needs to detect and import contact cards in vCard format. Is there a way to add the card data to the system Address Book directly, or do I need to parse the vCard myself and add each field individually? I will be getting the vCard as an NSString. I tried the code below (from a different post) and it didn't work: -(IBAction)saveContacts{ NSString *vCardString = @"vCardDataHere"; CFDataRef vCardData = (__bridge_retained CFDataRef)[vCardString dataUsingEncoding:NSUTF8StringEncoding]; ABAddressBookRef book = ABAddressBookCreate(); ABRecordRef defaultSource = ABAddressBookCopyDefaultSource(book); CFArrayRef vCardPeople = ABPersonCreatePeopleInSourceWithVCardRepresentation(defaultSource, vCardData); for (CFIndex index = 0; index < CFArrayGetCount(vCardPeople); index++) { ABRecordRef person = CFArrayGetValueAtIndex(vCardPeople, index); ABAddressBookAddRecord(book, person, NULL); CFRelease(person); } CFRelease(vCardPeople); CFRelease(defaultSource); ABAddressBookSave(book, NULL); CFRelease(book); } I have searched a bit and fixed the code; this is how it looks now. It doesn't crash anymore, but it doesn't save the vCard (NSString format) to the address book. Any clues?

  • CodePlex Daily Summary for Sunday, March 14, 2010

    CodePlex Daily Summary for Sunday, March 14, 2010New ProjectsBeerMath.net: BeerMath.net lets brewers calculate expected values for their recipes. Written entirely in C#, it can be used in any .Net language.Bible Study: Данный проект предусматривает создание программного обеспечения, предоставляющего пользователю гибкие и мощные инструменты для чтения и изучения Пи...E-Messenger: Description détaillé du sujet : Développement d'une application (client lourd) de messagerie instantané et de partage de fichier interne à ESPRIT....Facebook Azure Toolkit: The Facebook Azure Toolit provides a flexible and scalable hosting platform for the smallest and largest of Facebook applications. This toolkit hel...Gherkin editor: A simple text editor to write specifications using Gherkin. The editor supports code completion, syntax highlighting, spell checker and more.Mydra Center: Mydra Center is a Media center with the particularity to be very flexible, allowing developers to extend it and add new features. The philosophy be...MyTwits - A rich Twitter client for Windows powered by WPF: MyTwits is a free Twitter client for Windows XP/Vista/7 powered by WPF which gives you freedom to twit right from your desktop. You can do almost a...na laborke: aaaaaaaaaaaaaaasssssssssssssssddddddddddddddddddddfffffffffffffNMTools: The "Network Management Tools" (NMTools) complete OpenSLIM CMDB capa­bil­i­ties with Network Discovery, Automa­tion and Con­fig­u­ra­tion Man­age­m...orionSRO: This project aims to make a fully functional server.Project Naduvar: Project Naduvar, is a centralized Locking Service in distribute systems. You can use this service in any of your existing distributed application. ...Silverlight Input Keyboard: Silverlight Input Keyboard and Behaviorsuh: uh.py is a command line tool that helps developers porting native projects from a case-insensitive filesystem to a case-sensitive filesystem by sea...New ReleasesAmiBroker Plug-ins with C#. A non official AmiBroker Plug-in SDK: AmiBroker Plug-in SDK v0.0.3: Small changesAmiBroker Plug-ins with C#. A non official AmiBroker Plug-in SDK: AmiBroker Plug-in SDK v0.0.4: Small updatesAStyle AddIn for SharpDevelop (Alex): 2.0 Production: #D 3.* add in with updated GUI elements.Coding Cockerel code samples: Validation with ASP .NET MVC and jQuery: Code sample related to the following blog post, http://codingcockerel.co.uk/consistent-validation-with-asp-net-mvc-and-jquery/.CoreSystem Library: Release - 1.0.3725.10575: This release contains a new class Crypto which makes encryption and descryption of string easy, it uses TripleDESCrystal Mapper: Release - 2.0.3725.11614: This is preview if release 2.0* that I promised, it contains following new features Tracking dirty entities and provide Save function to save all ...Digital Media Processing Project 1: Image Processor: Image Processor Alpha: First Release Features Include: Curve Adjustment Tool Region Growing Segmetation Threshold Segmentation Guassian/Butterworth High/Low pass filter...Exepack.NET: Exepack.NET version 0.03 beta: Exepack.NET is executable file compressor for .NET Framework. It allows to package your .NET application consisting of an executable file and sever...Export code as Code Snippet - Addin for Visual Studio 2008/2010 RC: VS 2010 Release Candidate: This release targets Visual Studio 2010 Release Candidate. It includes full Visual Basic 2010 source code. 
Fixes already available in previous ve...Facebook Azure Toolkit: 0.9 Beta: This is the initial beta releaseFamily Tree Analyzer: Version 1.0.5.0: Version 1.0.5.0 Change the way Census & Individual reports columns are sized so that user can resize later. Add filter to exclude individuals over...Home Access Plus+: v3.1.2.1: Version 3.1.2.1 Release Change Log: Added SSL SMTP Added SSL Authentication File Changes: ~/bin/CHS Extranet.dll ~/bin/CHS Extranet.pdb ~/we...Home Access Plus+: v3.1.3.1: Version 3.1.3.1 Release Change Log: Fixed Help Desk File Changes: ~/bin/CHS Extranet.dll ~/bin/CHS Extranet.pdb ~/helpdesk/*.htmIceChat: IceChat 2009 Alpha 11.6 Full Install: IceChat 2009 Alpha 11.6 - Full Installer, installs IceChat 2009, and the Emoticons, and will also download .Net Framework 2.0 if needed.IceChat: IceChat 2009 Alpha 11.6 Simple Binaries: This simply the IceChat2009.exe and the IPluginIceChat.dll needed to run IceChat 2009. Is not an installer, does not include emoticons.IceChat: IceChat 2009 Alpha 11.6 Source Code: IceChat 2009 Alpha 11.6 Source CodeLunar Phase Silverlight Gadget: Lunar Phase RC: Stable release. 6 languages Auto refresh. Name / Light problem fixedMiracle OS: Miracle OS Alpha 0.001: Our first release is the Alpha 0.001. Miracle OS doens't work at all, but we work on it. You to? Please help us.MyTwits - A rich Twitter client for Windows powered by WPF: MyTwits BETA 1: I'm happy to release first BETA version of MyTwits. Just download the zip file attached and run setup.exe and you are done! If you've any problem...MyTwits - A rich Twitter client for Windows powered by WPF: MyTwits Source BETA 1: I'm providing you just a project file, I'll upload complete source code once I fine tuned the code.NMock3: NMock3 - Beta 5, .NET 3.5: Hilights of this releaseTutorials have been updated and are in a much better place now. (they compile) Public API is getting locked down. Void me...Project Naduvar: com.declum.naduvar.locking: First ReleaseQueryToGrid Module for DotNetNuke®: QueryToGrid Module version 01.00.01: This module is a proof of concept for both using AJAX in a DotNetNuke® module, and for using SQL in a module. »»» IMPORTANT NOTE ««« Using this mo...SCSI Interface for Multimedia and Block Devices: Release 10 - Almost like a commercial burner!!: I made many changes in the ISOBurn program in this version, making it much more user-friendly than before. You can now add, rename, and delete file...Silverlight Input Keyboard: Initial Release: For more information see http://www.orktane.com/Blog/post/2009/11/09/Virtual-Input-Keyboard-Behaviours-for-Silverlight.aspxThe Silverlight Hyper Video Player [http://slhvp.com]: RC: The release candidate is now in place. Unfortunately, because there are aspects of it that I'm not yet ready to discuss, the code for the RC will...twNowplaying: twNowplaying 1.0.0.3: Press the Twitter icon to get started, don't forget to submit bugs to the issue tracker. What's new This release has some minor UI fixes.uh: 1.0: This is the first stable release. It isn't super full featured but it does the basics.UriTree: UriTree 2.0.0: This release is the WPF version of this application.VCC: Latest build, v2.1.30313.0: Automatic drop of latest buildVr30 OS: Blackra1n: The software was made by blackra1n for jailbreak iphone and ipod touch. Is not the Vr30 OS Team ProjectVr30 OS: Vr30 Operating System Live Cd 1.0: The Operating system linux made by team. 
For more information go to http://vr30os.tuxfamily.orgWatchersNET.TagCloud: WatchersNET.TagCloud 01.01.00: Whats New Decide between Tags generated from the Search words, or create your own Tag List Custom Tag list changes Small BugfixesZeta Resource Editor: Source code release 2010-03-13: New sources, some small fixes.ZipStorer - A Pure C# Class to Store Files in Zip: ZipStorer 2.35: Improved UTF-8 Support Correct writting of modification time for extracted filesMost Popular ProjectsMetaSharpWBFS ManagerRawrAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)ASP.NET Ajax LibraryASP.NETMicrosoft SQL Server Community & SamplesMost Active ProjectsRawrN2 CMSBlogEngine.NETpatterns & practices – Enterprise LibrarySharePoint Team-MailerFasterflect - A Fast and Simple Reflection APICaliburn: An Application Framework for WPF and SilverlightjQuery Library for SharePoint Web ServicesCalcium: A modular application toolset leveraging PrismFarseer Physics Engine

  • Setting Up GLFW3 in Visual Studio

    - by sm81095
    I decided a couple of days ago that I was going to start trying to develop games in C++ with OpenGL, instead of C# MonoGame like I have been doing for a while. I was looking around for libraries to use, to make OpenGL a little easier to work with. I settled on GLEW and GLFW. GLEW was a super easy copy/paste, but GLFW3 was not. After looking around for a while and fighting with CMake, I got the glfw3.lib file created, I added the additional include directories and library directories, and I linked my program to the glfw3.lib file I just created. The problem is, I get these linker errors when I try to run or build my program: Error 1 error LNK2019: unresolved external symbol _glfwInit referenced in function _main C:\Codex Interactive\Projects\OGLTest\OGLTest\test.obj OGLTest Error 2 error LNK2019: unresolved external symbol _glfwTerminate referenced in function _main C:\Codex Interactive\Projects\OGLTest\OGLTest\test.obj OGLTest Error 3 error LNK2019: unresolved external symbol _glfwSetErrorCallback referenced in function _main C:\Codex Interactive\Projects\OGLTest\OGLTest\test.obj OGLTest and 10 other LNK2019 errors, all referring to some glfw method, as well as: Error 14 error LNK1120: 13 unresolved externals C:\Codex Interactive\Projects\OGLTest\Debug\OGLTest.exe 1 1 OGLTest at the very bottom of the error list. I've looked up most of these errors on their own, and the solutions I find either do nothing to solve the problem, or are people commenting on how dumb people are for not being able to solve this linker problem. Any assistance in solving these errors would be greatly appreciated. Info: I built GLFW3 with CMake for Visual Studio 11, 32-bit and 64-bit, and both threw the same errors. The only extra libraries I linked were opengl32.lib, glu32.lib, and glfw3.lib. Here is the test code (from GLFW3's latest tutorial): Code

  • jQuery Templates and Data Linking (and Microsoft contributing to jQuery)

    - by ScottGu
    The jQuery library has a passionate community of developers, and it is now the most widely used JavaScript library on the web today. Two years ago I announced that Microsoft would begin offering product support for jQuery, and that we’d be including it in new versions of Visual Studio going forward. By default, when you create new ASP.NET Web Forms and ASP.NET MVC projects with VS 2010 you’ll find jQuery automatically added to your project. A few weeks ago during my second keynote at the MIX 2010 conference I announced that Microsoft would also begin contributing to the jQuery project.  During the talk, John Resig -- the creator of the jQuery library and leader of the jQuery developer team – talked a little about our participation and discussed an early prototype of a new client templating API for jQuery. In this blog post, I’m going to talk a little about how my team is starting to contribute to the jQuery project, and discuss some of the specific features that we are working on such as client-side templating and data linking (data-binding). Contributing to jQuery jQuery has a fantastic developer community, and a very open way to propose suggestions and make contributions.  Microsoft is following the same process to contribute to jQuery as any other member of the community. As an example, when working with the jQuery community to improve support for templating to jQuery my team followed the following steps: We created a proposal for templating and posted the proposal to the jQuery developer forum (http://forum.jquery.com/topic/jquery-templates-proposal and http://forum.jquery.com/topic/templating-syntax ). After receiving feedback on the forums, the jQuery team created a prototype for templating and posted the prototype at the Github code repository (http://github.com/jquery/jquery-tmpl ). We iterated on the prototype, creating a new fork on Github of the templating prototype, to suggest design improvements. Several other members of the community also provided design feedback by forking the templating code. There has been an amazing amount of participation by the jQuery community in response to the original templating proposal (over 100 posts in the jQuery forum), and the design of the templating proposal has evolved significantly based on community feedback. The jQuery team is the ultimate determiner on what happens with the templating proposal – they might include it in jQuery core, or make it an official plugin, or reject it entirely.  My team is excited to be able to participate in the open source process, and make suggestions and contributions the same way as any other member of the community. jQuery Template Support Client-side templates enable jQuery developers to easily generate and render HTML UI on the client.  Templates support a simple syntax that enables either developers or designers to declaratively specify the HTML they want to generate.  Developers can then programmatically invoke the templates on the client, and pass JavaScript objects to them to make the content rendered completely data driven.  These JavaScript objects can optionally be based on data retrieved from a server. Because the jQuery templating proposal is still evolving in response to community feedback, the final version might look very different than the version below. 
This blog post gives you a sense of how you can try out and use templating as it exists today (you can download the prototype by the jQuery core team at http://github.com/jquery/jquery-tmpl or the latest submission from my team at http://github.com/nje/jquery-tmpl).  jQuery Client Templates You create client-side jQuery templates by embedding content within a <script type="text/html"> tag.  For example, the HTML below contains a <div> template container, as well as a client-side jQuery “contactTemplate” template (within the <script type="text/html"> element) that can be used to dynamically display a list of contacts: The {{= name }} and {{= phone }} expressions are used within the contact template above to display the names and phone numbers of “contact” objects passed to the template. We can use the template to display either an array of JavaScript objects or a single object. The JavaScript code below demonstrates how you can render a JavaScript array of “contact” object using the above template. The render() method renders the data into a string and appends the string to the “contactContainer” DIV element: When the page is loaded, the list of contacts is rendered by the template.  All of this template rendering is happening on the client-side within the browser:   Templating Commands and Conditional Display Logic The current templating proposal supports a small set of template commands - including if, else, and each statements. The number of template commands was deliberately kept small to encourage people to place more complicated logic outside of their templates. Even this small set of template commands is very useful though. Imagine, for example, that each contact can have zero or more phone numbers. The contacts could be represented by the JavaScript array below: The template below demonstrates how you can use the if and each template commands to conditionally display and loop the phone numbers for each contact: If a contact has one or more phone numbers then each of the phone numbers is displayed by iterating through the phone numbers with the each template command: The jQuery team designed the template commands so that they are extensible. If you have a need for a new template command then you can easily add new template commands to the default set of commands. Support for Client Data-Linking The ASP.NET team recently submitted another proposal and prototype to the jQuery forums (http://forum.jquery.com/topic/proposal-for-adding-data-linking-to-jquery). This proposal describes a new feature named data linking. Data Linking enables you to link a property of one object to a property of another object - so that when one property changes the other property changes.  Data linking enables you to easily keep your UI and data objects synchronized within a page. If you are familiar with the concept of data-binding then you will be familiar with data linking (in the proposal, we call the feature data linking because jQuery already includes a bind() method that has nothing to do with data-binding). Imagine, for example, that you have a page with the following HTML <input> elements: The following JavaScript code links the two INPUT elements above to the properties of a JavaScript “contact” object that has a “name” and “phone” property: When you execute this code, the value of the first INPUT element (#name) is set to the value of the contact name property, and the value of the second INPUT element (#phone) is set to the value of the contact phone property. 
The properties of the contact object and the properties of the INPUT elements are also linked – so that changes to one are also reflected in the other. Because the contact object is linked to the INPUT element, when you request the page, the values of the contact properties are displayed: More interesting, the values of the linked INPUT elements will change automatically whenever you update the properties of the contact object they are linked to. For example, we could programmatically modify the properties of the “contact” object using the jQuery attr() method like below: Because our two INPUT elements are linked to the “contact” object, the INPUT element values will be updated automatically (without us having to write any code to modify the UI elements): Note that we updated the contact object above using the jQuery attr() method. In order for data linking to work, you must use jQuery methods to modify the property values. Two Way Linking The linkBoth() method enables two-way data linking. The contact object and INPUT elements are linked in both directions. When you modify the value of the INPUT element, the contact object is also updated automatically. For example, the following code adds a client-side JavaScript click handler to an HTML button element. When you click the button, the property values of the contact object are displayed using an alert() dialog: The following demonstrates what happens when you change the value of the Name INPUT element and click the Save button. Notice that the name property of the “contact” object that the INPUT element was linked to was updated automatically: The above example is obviously trivially simple.  Instead of displaying the new values of the contact object with a JavaScript alert, you can imagine instead calling a web-service to save the object to a database. The benefit of data linking is that it enables you to focus on your data and frees you from the mechanics of keeping your UI and data in sync. Converters The current data linking proposal also supports a feature called converters. A converter enables you to easily convert the value of a property during data linking. For example, imagine that you want to represent phone numbers in a standard way with the “contact” object phone property. In particular, you don’t want to include special characters such as ()- in the phone number - instead you only want digits and nothing else. In that case, you can wire-up a converter to convert the value of an INPUT element into this format using the code below: Notice above how a converter function is being passed to the linkFrom() method used to link the phone property of the “contact” object with the value of the phone INPUT element. This convertor function strips any non-numeric characters from the INPUT element before updating the phone property.  Now, if you enter the phone number (206) 555-9999 into the phone input field then the value 2065559999 is assigned to the phone property of the contact object: You can also use a converter in the opposite direction also. For example, you can apply a standard phone format string when displaying a phone number from a phone property. Combining Templating and Data Linking Our goal in submitting these two proposals for templating and data linking is to make it easier to work with data when building websites and applications with jQuery. Templating makes it easier to display a list of database records retrieved from a database through an Ajax call. 
Data linking makes it easier to keep the data and user interface in sync for update scenarios. Currently, we are working on an extension of the data linking proposal to support declarative data linking. We want to make it easy to take advantage of data linking when using a template to display data. For example, imagine that you are using the following template to display an array of product objects: Notice the {{link name}} and {{link price}} expressions. These expressions enable declarative data linking between the SPAN elements and properties of the product objects. The current jQuery templating prototype supports extending its syntax with custom template commands. In this case, we are extending the default templating syntax with a custom template command named “link”. The benefit of using data linking with the above template is that the SPAN elements will be automatically updated whenever the underlying “product” data is updated.  Declarative data linking also makes it easier to create edit and insert forms. For example, you could create a form for editing a product by using declarative data linking like this: Whenever you change the value of the INPUT elements in a template that uses declarative data linking, the underlying JavaScript data object is automatically updated. Instead of needing to write code to scrape the HTML form to get updated values, you can instead work with the underlying data directly – making your client-side code much cleaner and simpler. Downloading Working Code Examples of the Above Scenarios You can download this .zip file to get with working code examples of the above scenarios.  The .zip file includes 4 static HTML page: Listing1_Templating.htm – Illustrates basic templating. Listing2_TemplatingConditionals.htm – Illustrates templating with the use of the if and each template commands. Listing3_DataLinking.htm – Illustrates data linking. Listing4_Converters.htm – Illustrates using a converter with data linking. You can un-zip the file to the file-system and then run each page to see the concepts in action. Summary We are excited to be able to begin participating within the open-source jQuery project.  We’ve received lots of encouraging feedback in response to our first two proposals, and we will continue to actively contribute going forward.  These features will hopefully make it easier for all developers (including ASP.NET developers) to build great Ajax applications. Hope this helps, Scott P.S. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    Read the article

  • SQLAuthority News – SQL Server Cheat Sheet from MidnightDBA

    - by pinaldave
    When I read the article from MidnightDBA (I should say MidnightDBAs, because it is about Jen and Sean) regarding T-SQL for the Absentminded DBA, my natural reaction was that it is a perfect extension of my own cheat sheet. A year ago, around the same month, I had created the SQL Server Cheatsheet, and I have distributed a lot of copies of it since I produced it. In fact, while attending TechMela in Nepal today, I am getting many requests for copies of the SQL Server Cheatsheet. When I checked my RSS feed, I realized that Jen and Sean have a perfect cheat sheet for intermediate-level developers. I would suggest that all of you read their post and download the Absentminded DBA's Cheat Sheet for Intermediate TSQL. It is available in two formats: PDF and Docx. I just love how the members of the community help each other grow. I am fortunate to have received excellent feedback, corrections, and criticism on my blog posts so many times. Criticism and corrections, after all, are absolutely needed and make for a better community as a whole. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: MVP, Pinal Dave, SQL, SQL Authority, SQL Download, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: SQL Cheat Sheet

    Read the article

  • In HLSL pixel shader , why is SV_POSITION different to other semantics?

    - by tina nyaa
    In my HLSL pixel shader, SV_POSITION seems to have different values to any other semantic I use. I don't understand why this is. Can you please explain it? For example, I am using a triangle with the following coordinates: (0.0f, 0.5f) (0.5f, -0.5f) (-0.5f, -0.5f). The w and z values are 0 and 1, respectively. This is the pixel shader.

    struct VS_IN { float4 pos : POSITION; };
    struct PS_IN { float4 pos : SV_POSITION; float4 k : LOLIMASEMANTIC; };

    PS_IN VS( VS_IN input )
    {
        PS_IN output = (PS_IN)0;
        output.pos = input.pos;
        output.k = input.pos;
        return output;
    }

    float4 PS( PS_IN input ) : SV_Target
    {
        // screenshot 1
        return input.pos;
        // screenshot 2
        return input.k;
    }

    technique10 Render
    {
        pass P0
        {
            SetGeometryShader( 0 );
            SetVertexShader( CompileShader( vs_4_0, VS() ) );
            SetPixelShader( CompileShader( ps_4_0, PS() ) );
        }
    }

    Screenshot 1: http://i.stack.imgur.com/rutGU.png
    Screenshot 2: http://i.stack.imgur.com/NStug.png
    (Sorry, I'm not allowed to post images until I have a lot of 'reputation')
    When I use the first statement (result is first screenshot), the one that uses the SV_POSITION semantic, the result is completely unexpected and is yellow, whereas using any other semantic will produce the expected result. Why is this?

    Read the article

  • Bug Triage

    In this blog post brain dump, I'll attempt to describe the process my team tries to follow when dealing with new bug reports (specifically, code defect reports). This is not official Microsoft policy, just the way we do things… if you do things differently and want to share, you can do so at the bottom in the comments (or on your blog).

    Feature Triage Team
    A subset of the feature crew, the triage team (which has representation from the PM, Dev and QA disciplines), looks at all unassigned bugs at regular intervals. This can be weekly or daily (or another frequency) depending on which part of the product cycle we are in and what the untriaged bug load looks like. They discuss each bug, considering the evidence, and decide whether the bug goes from Not Yet Assigned to Assigned (plus the name of the DEV to fix it) or whether it goes from Active to Resolved (which means it gets assigned back to the requestor for closure, or for further debate if they were not present at the triage meeting). Close to critical milestones, the feature triage team needs to further justify bugs they take to additional higher-level triage teams.

    Bug Opened = Not Yet Assigned
    Someone (typically an SDET from the QA team) creates the bug item (e.g. in TFS), ensuring they populate all the relevant fields including: Title, Description, Repro Steps (including the Actual Result at the end of the steps), attachments of code and/or screenshots, the Build number they observed the issue in, regression details if applicable, how it was found, whether a test case exists or needs to be created, etc. They also indicate their opinion on the Priority and Severity. The bug status is left as Not Yet Assigned.

    "Issue" versus "Fix for issue"
    The solution to some bugs is easy to determine, e.g. "bug: the column name is misspelled". Obviously the fix is to correct the spelling – still, the triage team should be explicit and enter the correct spelling in the bug's Description. Note that a bad bug name here would be "bug: fix the spelling of the column" (it describes the solution, rather than the problem). Other solutions are trickier to establish, e.g. "bug: the column header is not accessible (can only be clicked on with the mouse, not reached via keyboard)". What is the correct solution here? The last thing to do is leave this undetermined and just assign it to a developer. The solution has to be entered in the description. Behind this type of bug usually hides a spec defect or a new feature request. The person opening the bug should focus on describing the issue, rather than the solution. They indicate what the fix is in their opinion by stating the Expected Result (immediately after stating the Actual Result). If they have a complex suggested solution, that should be split out in a separate part, but the triage team has the final say before assigning it. If the solution is lengthy/complicated to describe, the bug can be assigned to the PM. Note: the strict interpretation suggests that any bug with no clear, obvious solution is always a hole in the spec and should always go to the PM. This also ensures the spec gets updated.

    Not Yet Assigned - Not Yet Assigned (on someone else's plate)
    If the bug is observed in our feature, but the cause is actually another team, we change the Area Path (which is the way we identify teams in TFS) and leave it as Not Yet Assigned. The triage team may add more comments as appropriate, including potentially changing the repro steps.
    In some cases, we may even resolve the bug in our area path and open a new bug in the area path of the other team. Even though there is no action on a dev on the team, the bug still needs to be tracked. One way of doing this is to implement some notification system that informs the team when the tracked bug changes status; another way is to occasionally run a global query (against all area paths) for bugs that have been opened by a member of the team and follow up with the current owners for stale bugs.

    Not Yet Assigned - Resolved
    This state transition can only be made by the Feature Triage Team.
    0. Sometimes the bug description is not clear and in that case it gets Resolved as More Information Needed, so the original requestor can provide it.
    After understanding what the bug item is about, the first decision is to determine whether it needs to go to a dev.
    1. If it is a known bug, it gets resolved as "Duplicate" and linked to the existing bug.
    2. If it is "By Design" it gets resolved as such, indicating that the triage team does not think this is a bug.
    3. If the bug does not repro on latest bits, it is resolved as "No Repro".
    4. The most painful: if it is decided that we cannot fix it for this release, it gets resolved as "Postponed" or "Won't Fix". The former is typically due to resources and time constraints, while the latter is due to deciding that it is not important enough to consume our resources in any release (yes, not all bugs must be fixed!). For both cases, there are other factors that contribute to the decision, such as: existence of a reasonable workaround, frequency we expect users to encounter the issue, dependencies on other teams to offer a solution, whether it breaks a core scenario, whether it prohibits customer feedback on a major feature, whether it is a regression from a previous release, impact of the fix on other partner teams (e.g. User Education, User Experience, Localization/Globalization), whether this is the right fix, whether the fix impacts performance goals, and last but not least, severity of the bug (e.g. loss of customer data, security threat, crash, hang).
    The bar for fixing a bug goes up as the release date approaches. The triage team becomes hardnosed about which bugs to take, while the developers are busy resolving assigned bugs; thus everyone drives for Zero Bug Bounce (ZBB). ZBB is when you have 0 active bugs older than 48 hours.

    Not Yet Assigned - Assigned
    If the bug is something we decide to fix in this release and the solution is known, then it is assigned to a DEV. This is either the developer that will do the work, or a Lead that can further assign it to one of his developer team based on a load balancing algorithm of their choosing. Sometimes, the triage team needs the dev to do some investigation work before deciding whether to take the fix; similarly, the checkin for the fix may be gated on code review by the triage team. In these cases, these instructions are provided in the comments section of the bug, and when the developer is done they notify the triage team for a final decision. Additionally, a Priority and Severity (from 0 to 4) has to be entered, e.g. a P0 means "drop anything you are doing and fix this now" whereas a P4 is something you get to after all P0, P1, P2 and P3 bugs are fixed. From a testing perspective, if the bug was found through ad-hoc testing or by an external team, a decision is made whether test cases should be added to avoid future regressions.
    This is communicated to the QA team.

    Assigned - Resolved
    When the developer receives the bug (they should be checking daily for new bugs on their plate looking at bugs in order of priority and from older to newer) they can send it back to triage if the information is not clear. Otherwise, they investigate the bug, setting the Sub Status to "Investigating"; if they cannot make progress, they set the Sub Status to "Blocked" and discuss this with triage or whoever else can help them get unblocked. Once they are unblocked, they set the Sub Status to "Working on Solution"; once they are code complete they send a code review request, setting the Sub Status to "Fix Available". After the iterative code review process is over and everyone is happy with the fix, the developer checks it in and changes the state of the bug from Active (and Assigned to them) to Resolved (and Assigned to someone else). The developer needs to ensure that when the status is changed to Resolved that it is assigned to a QA person. For example, maybe the PM opened the bug, but it should be a QA person that will verify the fix - the developer needs to manually change the assignee in that case. Typically the QA person will send an email to the original requestor notifying them that the fix is verified.

    Resolved - ??
    In all cases above, note that the final state was Resolved. What happens after that? The final step should be Closed. The bug is closed once the QA person verifying the fix is happy with it. If the person is not happy, then they change the state from Resolved to Active, thus sending it back to the developer. If the developer and QA person cannot reach agreement, then triage can be brought into it. An easy way to do that is change the status back to Not Yet Assigned with appropriate comments so the triage team can re-review. It is important to note that only QA can close a bug. That means that if the opener of the bug was a PM, when the bug gets resolved by the dev it may land on the PM's plate and after a quick review, the PM would re-assign to an SDET, which is the only role that can close bugs. One exception to this is if the person that filed the bug is external: in that case, we leave it Resolved and assigned to them and also send them a notification that they need to verify the fix. Another exception is if specialized developer knowledge is needed for verifying the bug fix (e.g. it was a refactoring suggestion bug typically not observable by the user) in which case it is fine to have a developer verify the fix, and ideally a different developer to the one that opened the bug.

    Other links on bug triage
    A quick search reveals that others have talked about this subject, e.g. here, here, here, here and here.

    Your take?
    If you have other best practices your team uses to deal with incoming bug reports, feel free to share in the comments below or on your blog. Comments about this post welcome at the original blog.

    Read the article

  • Litespeed enable Access-Control-Allow-Origin

    - by Joe Coder Guy
    Seriously, I can't find a single page discussing this for Litespeed. Using this format in the htaccess "Header set Access-Control-Allow-Origin http://aSite.com" (and https) sends the setting in the header, but I still get the "XMLHttpRequest cannot load https://aSite.com/aFile.php. Origin aSite.com is not allowed by Access-Control-Allow-Origin" error. Is the server still blocking it even though I've sent the proper headers? I read elsewhere that it helps to add these headers as well:
    Access-Control-Allow-Headers X-Requested-With
    Access-Control-Allow-Methods OPTIONS, GET, POST
    Access-Control-Allow-Headers Content-Type, Depth, User-Agent, X-File-Size, X-Requested-With, If-Modified-Since, X-File-Name, Cache-Control
    but I don't see these in my headers. Using these, my PHP files aren't even reached (because they register no errors or anything), so it looks like it comes from the server only, but what do I know. Thanks in advance!
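    For reference, here is a hedged sketch of how the headers listed in the question might be expressed as Apache-style directives in .htaccess (Litespeed reads Apache-style .htaccess with mod_headers syntax; whether it actually honors these for cross-origin and preflight requests is exactly what the question is about, and the origin URL below is the question's placeholder, not a real value):

        # Hypothetical .htaccess sketch based on the headers listed above
        <IfModule mod_headers.c>
            Header set Access-Control-Allow-Origin "https://aSite.com"
            Header set Access-Control-Allow-Methods "OPTIONS, GET, POST"
            Header set Access-Control-Allow-Headers "Content-Type, Depth, User-Agent, X-File-Size, X-Requested-With, If-Modified-Since, X-File-Name, Cache-Control"
        </IfModule>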

    Read the article

  • How to Install Mac OS X Lion on Your HP ProBook (or Compatible Laptop)

    - by Usman
    There’s nothing more satisfying than building a hackintosh, i.e. installing Mac OS X on a non-Apple machine. Although it isn’t as easy as it sounds, the end result is worth the effort. Building a PC with specific components and installing Mac OS X on it can save you thousands of dollars you might spend on a real Mac. And now, it’s time to step into the portable world. Today we will show how you can turn an HP ProBook (or any compatible Sandy Bridge laptop) into a 95% MacBook Pro! Why should (or shouldn’t) you do it? Let’s clarify whether or not it should be done. Firstly, we all know Apple makes awesome laptops. The design, build quality, and the aesthetics (not to mention the glowing Apple) would make you crave one. Secondly, all these Apple laptops are bundled with Mac OS X, which (for some people) is the most user-friendly and annoyance-free operating system. Digital artists, musicians, and video editors all prefer Mac for a reason. So the verdict is: if hardware design is what you really look for, you should get a real Mac, and we are not at all stopping you from doing so. But if you’re only concerned with the OS (and saving a few bucks in your pocket), you may consider giving this a shot. But remember, it may not perform as well as a real Mac does. The results vary, so hope for the best, and proceed with caution. Why HP ProBook?

    Read the article

  • Messed up installation of mysql-server - cannot complete installation or deinstallation

    - by Christian Engel
    apt-get got stuck while installing mysql-server. I don't know why, but it just stopped working and never continued. I had to reboot the machine in the middle of the setup process. Now, if I try to install or purge the mysql-server package, apt-get tries to configure mysql-server first (it tells me it's not installed before that) and cancels with an error message: Sub-process /usr/bin/dpkg returned an error code (1). apt-get also tells me that two packages have not been successfully installed or removed. This is the complete console output:

    christian@devbox:~$ sudo apt-get install mysql-server
    [sudo] password for christian:
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    mysql-server is already the newest version.
    0 upgraded, 0 newly installed, 0 to remove and 17 not upgraded.
    2 not fully installed or removed.
    After this operation, 0 B of additional disk space will be used.
    Do you want to continue [Y/n]? y
    Setting up mysql-server-5.5 (5.5.32-0ubuntu7) ...
    start: Job failed to start
    invoke-rc.d: initscript mysql, action "start" failed.
    dpkg: error processing mysql-server-5.5 (--configure):
     subprocess installed post-installation script returned error exit status 1
    dpkg: dependency problems prevent configuration of mysql-server:
     mysql-server depends on mysql-server-5.5; however:
      Package mysql-server-5.5 is not configured yet.
    dpkg: error processing mysql-server (--configure):
     dependency problems - leaving unconfigured
    No apport report written because the error message indicates its a followup error from a previous failure.
    Errors were encountered while processing:
     mysql-server-5.5
     mysql-server
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    christian@devbox:~$
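    As a hedged sketch only (not a confirmed fix for this particular machine), these are diagnostic and recovery commands one might try on a Debian/Ubuntu system in this situation; the log path and package names are standard defaults and are assumptions here:

        # Ask dpkg to finish configuring any half-installed packages
        sudo dpkg --configure -a

        # Check why the MySQL service failed to start (default log location assumed)
        sudo tail -n 50 /var/log/mysql/error.log
        sudo service mysql status

        # As a last resort, purge and reinstall the packages; purging removes
        # configuration, so back up any data under /var/lib/mysql first
        sudo apt-get purge mysql-server mysql-server-5.5
        sudo apt-get install mysql-server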

    Read the article

  • 500 Metro Style WP7 Icons

    - by Bil Simser
    I was inspired by The Noun Project, a project that offers up “Metro-style” icons in SVG format. The project is licensed under a public domain license and while it’s a great project, all of the content is in SVG format. Jon Galloway has a great post (from 2007) talking about the differences between SVG and XAML so I highly recommend that for some background. I thought it would be helpful to the WPF/Windows Phone 7/Silverlight community to provide the content in alternative formats for use in your applications. The Goods I’ve put together a package of the 500 icons (502 actually) in PNG, XAML and the original SVG format along with a couple of sample projects so you can see them in action. There’s a WPF desktop app: And a Windows Phone 7 app: Building It To get all the content first I wrote up a quick program to suck the original SVG files. Luckily they’re all in a common path just named 1.SVG, 2.SVG, and so on. Easy sleazy to grab the contents. Once I had 500 SVG files I used the latest copy of XamlTune, an open source CodePlex project that has a command line conversion tool to convert the directory of SVG files into XAML (the tool also created a PNG file of each SVG so that’s just icing on the cake). Conversions The conversion from SVG to XAML isn’t 100%. While you can just drop the content into a WPF app, it doesn’t work that way for WP7. There are just some small adjustments I made to each format so you’ll have to do the same. Follow the information below or refer to the sample applications. As a sample, here’s an icon we want to use: Here’s the original SVG file: <svg version="1.0" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px" width="100px" height="94.616px" viewBox="0 0 100 94.616" enable-background="new 0 0 100 94.616" xml:space="preserve"> <path d="M25.076,15.639c4.324,0.009,7.824-3.488,7.82-7.82C32.9,3.512,29.4,0.012,25.076,0c-4.313,0.012-7.814,3.512-7.821,7.819 C17.262,12.15,20.763,15.648,25.076,15.639L25.076,15.639z"/> <path d="M4.593,43.388h6.861l4.137-15.135h1.716L13.22,43.388h24.318l-4.389-15.135h1.817l2.32,7.415 c1.08,3.131,3.852,3.851,6.003,1.162l8.375-10.142c2.651-3.42-2.104-7.021-4.844-4.035l-4.993,5.952 c0.007,0.095-0.96-3.278-0.96-3.278c-1.135-3.978-4.918-7.903-10.595-7.922H19.576c-5.071,0.019-9.043,4.434-9.888,7.214 L4.593,43.388L4.593,43.388z"/> <polygon points="56.206,22.753 56.206,7.163 49.192,7.163 49.192,22.753 56.206,22.753 "/> <path d="M79.87,15.738c4.332-0.014,7.831-3.516,7.82-7.82c0.011-4.332-3.488-7.833-7.82-7.82c-4.306-0.013-7.806,3.488-7.821,7.82 C72.064,12.222,75.564,15.725,79.87,15.738L79.87,15.738z"/> <path d="M89.759,89.556v-43.19h5.751V22.804c0.007-3.079-2.757-5.448-6.71-5.449H70.436c-3.65,0.001-4.539,1.186-5.551,2.168 L49.597,37.889c-3.098,3.848,2.428,8.333,5.55,4.743L69.88,25.226v64.43c-0.019,6.475,9.06,6.686,9.081,0.201v-36.58h1.765v36.379 C80.748,96.109,89.772,96.13,89.759,89.556L89.759,89.556z"/> <polygon points="100,54.035 100,45.155 0,45.155 0,54.035 100,54.035 "/> </svg> Here’s the XAML that XamlTune created. 
It can be used in any WPF app without any changes: <Canvas Name="Layer_1" Width="100" Height="94.616" ClipToBounds="True" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"> <Path Fill="#FF000000"> <Path.Data> <PathGeometry FillRule="Nonzero" Figures="M25.076,15.639C29.4,15.648 32.9,12.151 32.896,7.819 32.9,3.512 29.4,0.012 25.076,0 20.763,0.012 17.262,3.512 17.255,7.819 17.262,12.15 20.763,15.648 25.076,15.639L25.076,15.639z" /> </Path.Data> </Path> <Path Fill="#FF000000"> <Path.Data> <PathGeometry FillRule="Nonzero" Figures="M4.593,43.388L11.454,43.388 15.591,28.253 17.307,28.253 13.22,43.388 37.538,43.388 33.149,28.253 34.966,28.253 37.286,35.668C38.366,38.799,41.138,39.519,43.289,36.83L51.664,26.688C54.315,23.268,49.56,19.667,46.82,22.653L41.827,28.605C41.834,28.7 40.867,25.327 40.867,25.327 39.732,21.349 35.949,17.424 30.272,17.405L19.576,17.405C14.505,17.424,10.533,21.839,9.688,24.619L4.593,43.388 4.593,43.388z" /> </Path.Data> </Path> <Path Fill="#FF000000"> <Path.Data> <PathGeometry FillRule="Nonzero" Figures="M56.206,22.753L56.206,7.163 49.192,7.163 49.192,22.753 56.206,22.753z" /> </Path.Data> </Path> <Path Fill="#FF000000"> <Path.Data> <PathGeometry FillRule="Nonzero" Figures="M79.87,15.738C84.202,15.724 87.701,12.222 87.69,7.918 87.701,3.586 84.202,0.0849999999999991 79.87,0.097999999999999 75.564,0.084999999999999 72.064,3.586 72.049,7.918 72.064,12.222 75.564,15.725 79.87,15.738L79.87,15.738z" /> </Path.Data> </Path> <Path Fill="#FF000000"> <Path.Data> <PathGeometry FillRule="Nonzero" Figures="M89.759,89.556L89.759,46.366 95.51,46.366 95.51,22.804C95.517,19.725,92.753,17.356,88.8,17.355L70.436,17.355C66.786,17.356,65.897,18.541,64.885,19.523L49.597,37.889C46.499,41.737,52.025,46.222,55.147,42.632L69.88,25.226 69.88,89.656C69.861,96.131,78.94,96.342,78.961,89.857L78.961,53.277 80.726,53.277 80.726,89.656C80.748,96.109,89.772,96.13,89.759,89.556L89.759,89.556z" /> </Path.Data> </Path> <Path Fill="#FF000000"> <Path.Data> <PathGeometry FillRule="Nonzero" Figures="M100,54.035L100,45.155 0,45.155 0,54.035 100,54.035z" /> </Path.Data> </Path> </Canvas> The XAML works AS-IS in a WPF application but there are some changes I did to get it to work in a WP7 app. 
Here’s the modified XAML in a WP7 application: <Canvas Grid.Row="0" Grid.Column="0" Name="Icon_1" Width="100" Height="94.616"> <Path Fill="#FF000000" Data="M25.076,15.639C29.4,15.648 32.9,12.151 32.896,7.819 32.9,3.512 29.4,0.012 25.076,0 20.763,0.012 17.262,3.512 17.255,7.819 17.262,12.15 20.763,15.648 25.076,15.639L25.076,15.639z"> </Path> <Path Fill="#FF000000" Data="M4.593,43.388L11.454,43.388 15.591,28.253 17.307,28.253 13.22,43.388 37.538,43.388 33.149,28.253 34.966,28.253 37.286,35.668C38.366,38.799,41.138,39.519,43.289,36.83L51.664,26.688C54.315,23.268,49.56,19.667,46.82,22.653L41.827,28.605C41.834,28.7 40.867,25.327 40.867,25.327 39.732,21.349 35.949,17.424 30.272,17.405L19.576,17.405C14.505,17.424,10.533,21.839,9.688,24.619L4.593,43.388 4.593,43.388z"> </Path> <Path Fill="#FF000000" Data="M56.206,22.753L56.206,7.163 49.192,7.163 49.192,22.753 56.206,22.753z"> </Path> <Path Fill="#FF000000" Data="M79.87,15.738C84.202,15.724 87.701,12.222 87.69,7.918 87.701,3.586 84.202,0.0849999999999991 79.87,0.097999999999999 75.564,0.084999999999999 72.064,3.586 72.049,7.918 72.064,12.222 75.564,15.725 79.87,15.738L79.87,15.738z"> </Path> <Path Fill="#FF000000" Data="M89.759,89.556L89.759,46.366 95.51,46.366 95.51,22.804C95.517,19.725,92.753,17.356,88.8,17.355L70.436,17.355C66.786,17.356,65.897,18.541,64.885,19.523L49.597,37.889C46.499,41.737,52.025,46.222,55.147,42.632L69.88,25.226 69.88,89.656C69.861,96.131,78.94,96.342,78.961,89.857L78.961,53.277 80.726,53.277 80.726,89.656C80.748,96.109,89.772,96.13,89.759,89.556L89.759,89.556z"> </Path> <Path Fill="#FF000000" Data="M100,54.035L100,45.155 0,45.155 0,54.035 100,54.035z"> </Path> </Canvas> All I did was take the data portion and put it directly into a Data attribute on the Path. Note that while it does show up in the app (on the emulator or device) it wouldn’t show up in Visual Studio for me. Maybe some XAML guru out there can tell me why. You can just as easily use the PNG files in WP7 but if you want the crispness of vector graphics, go for the XAML version. Of course with XamlTune being open source you could always modify the output of that program to cater it to your app. If you do make a change that’s worthy please consider submitting a patch to the project so everyone can benefit. Hope this helps and happy programming! Resources and Links Sample Project and Icons XamlTune an open source project to convert SVG to XAML The Noun Project source of the original files Jon Galloways post on SVG and XAML StackOverflow question on converting SVG to XAML

    Read the article

  • The action can't be completed because the file is open in IIS Worker Process

    - by Bjørn Øyvind Halvorsen
    I have a Windows Server 2008 R2 machine running a couple of sites for me; while developing, I usually post updates to this server. Now, I tried posting an update but it failed, saying a file is in use. The error message is as follows: "The action can't be completed because the file is open in IIS Worker Process" "Close the file and try again." I have tried stopping the site but this does not help. The site is running in the same application pool as the site that should stay online, so a recycle might break the running site (if this is wrong, please correct me). The question is: is there a way to "release" the lock that is put on the file without stopping other currently running sites?
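    As a hedged sketch only, on IIS 7.x one might inspect which worker process holds the lock and recycle just that application pool using the built-in appcmd tool from an elevated command prompt; "MyAppPool" and "MySite" are placeholders, and recycling a pool still briefly restarts every site assigned to it, which is the trade-off described above:

        rem List running worker processes and the application pools they belong to
        %windir%\system32\inetsrv\appcmd list wp

        rem Recycle only the pool that owns the locking worker process (placeholder name)
        %windir%\system32\inetsrv\appcmd recycle apppool /apppool.name:"MyAppPool"

        rem Optionally move the frequently updated site into its own pool (placeholder names)
        %windir%\system32\inetsrv\appcmd set app "MySite/" /applicationPool:"MyDedicatedPool"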

    Read the article

  • Enterprise SharePoint 2010 Hosting, SharePoint Foundation 2010 Hosting, SharePoint Standard 2010 Hosting

    - by Michael J. Hamilton, Sr.
    Sclera, a Microsoft Hosted Services Provider Partner, is offering key Service Offerings around the Microsoft SharePoint Server 2010 stack. Specifically, if you’re looking for SharePoint Foundation, SharePoint Standard or Enterprise 2010 hosting provisions, check out the Service Offerings from Sclera Hosting (www.sclerahosting.com) and compare with some of the lowest prices available on the web today. I wanted to post this so you could shop around and compare. There are a couple of larger on-demand hosting agencies (247hosting and fpweb hosting) that charge outrageous fees, like $350 a month for SharePoint Foundation 2010 hosting. The most incredible part? This is on a shared domain name – not the client’s domain. It’s hosting on something like http://<yourSiteName>.sharepointsites.com – or something crazy like that. Sclera Hosting provides you on-demand SharePoint Foundation and SharePoint Server Standard/Enterprise 2010 RTM bits within minutes of your order – ON YOUR DOMAIN – and that is a major perk for me. You have complete SharePoint Designer 2010 integration; complete support for custom assemblies, web parts, you name it – this hosting provider gives you more bang for the buck than any provider on the Net today. Now, some teasers: I was in a meeting this week and I heard SharePoint Foundation, 2010 RTM bits, unlimited users, 10 GB content database quota, full SharePoint Designer 2010 integration/support, all on the client’s domain – sit down and soak this up – $175.00 per month, no kidding. Now, I do not know about you, but I have not seen a deal like that EVER on the Net, so get over to www.sclerahosting.com or email the Sales Team at Sclera Design, Inc. today for more details. Have a great weekend!

    Read the article

  • SQLAuthority News – Download SQL Azure Labs Codename “Data Explorer” Client

    - by pinaldave
    Microsoft SQL Azure Labs has recently released the Data Explorer client. I had been looking forward to a visualizing tool for quite a while, and I am delighted to see this tool. I will be trying it out in the coming week and will post my experience here. I have listed a few of the resources related to Data Explorer at the end. Please let me know if I have missed any and I will add them. With “Data Explorer” you can:
    Identify the data you care about from the sources you work with (e.g. Excel spreadsheets, files, SQL Server databases).
    Discover relevant data and services via automatic recommendations from the Windows Azure Marketplace.
    Enrich your data by combining it and visualizing the results.
    Collaborate with your colleagues to refine the data.
    Publish the results to share them with others or power solutions.
    The Data Explorer Client package contains the Data Explorer workspace as well as an Office plugin that integrates Data Explorer into Excel. Resources: Download Data Explorer, Data Explorer Blog, Desktop Client Video of Contoso Bikes and Frozen Yogurt (Data Explorer). Please note that this is not the final release of the product. Please do not attempt this on a production server. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Azure, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

< Previous Page | 718 719 720 721 722 723 724 725 726 727 728 729  | Next Page >