Search Results

Search found 69877 results on 2796 pages for 'ibm data studio'.


  • Visual Studio .NET 2003 on Windows 7 hangs on search

    - by Nikhil
    So I have Visual Studio 2003 running on Windows 7 - yes, I am aware it isn't officially supported, and no, unfortunately I can't change that situation :-( For the most part it works OK, but there is one specific problem I can't figure out: the application hangs if you do a project-wide search (Ctrl+Shift+F) for a string. I have a reasonably powerful machine, and all the other heavy tasks like compiling and debugging work fine. The search also works if I restrict it to the current document (Ctrl+F). I am running as administrator, and VS.NET 2003 SP1 has been applied. The size of the project does not seem to be the problem, since a colleague is also experiencing this issue with a single-project solution containing 5 pages. I am currently using Windows Search as a workaround, and I was wondering if there is something I missed that I should try. PS: I have asked this question on Stack Overflow as well, but I suspect this might be a problem with the Windows 7 OS, so I thought I'd cross-post it here too.

    Read the article

  • Visual Studio 2008/2010 Intellisense disable tab key

    - by Sean Edwards
    So I've been having problems with my left wrist when working on code for extended periods of time, and I've pretty much narrowed the cause down to IntelliSense autocomplete and the Tab key. OK, to be fair, I can't blame IntelliSense, but constantly reaching over to hit that key is causing problems. I've discovered Enter does the exact same thing in that context, but that's not the key I instinctively reach for. Is it possible to outright disable the Tab key in IntelliSense, so I'm forced to use Enter instead (which I can hit without contorting my wrist oddly)? Thanks. P.S. I do have Visual Assist, so if it's not possible in Visual Studio itself, can VAssistX help?

    Read the article

  • How to set up Aptana Studio 3 with a Bitbucket private repo

    - by Titus
    I have just started playing around with Git and would like to push a personal project to a newly created, private repo on Bitbucket using Aptana Studio 3. I tried to use the Git integration in Aptana, but I couldn't figure out where to enter my username and password for Bitbucket. I tried using the Team > Share Project context menu, but that keeps throwing the following message:

        Warning: Permanently added the RSA host key for IP address '207.223.240.181' to the list of known hosts.
        Permission denied (publickey).
        fatal: The remote end hung up unexpectedly

    I'm pretty sure that's because my repo is private. However, I couldn't find anywhere to provide any form of credentials for linking to a private repo. Any ideas?

    Read the article

  • Visual Studio 10 Disable Find Autocomplete

    - by trinithis
    When I misspell something in Visual Studio 2010's find box (the one that appears in the editor's main window, not the pop-up "Find and Replace" window), VS 2010 autocompletes to that misspelled word whenever I try to fix it. A Ctrl+Z will remove the entire autocompletion, including the last character I typed (and it can even change the case!). Backspacing and retyping will still autocomplete to the misspelled word; only a cut and paste seems to work. Is there a way to disable this horrible feature!?

    Read the article

  • Publish database between two open database connections (Visual Studio 2005)

    - by danielswe
    I have two open database connections, one to a local database and one to a remote one. How do I copy the local database schema to the remote database? The reason I don't use "Publish to provider" is that I'm not sure I have all the information necessary to do so: I have the database name, server, username, and password, but not the "web service address" or "web service password". I work in Visual Studio 2005, and the server is an MSSQL 2005 server. I have tried using queries, but I only get errors doing so.

    Read the article

  • Visual Studio 2010 tool to order members inside #region

    - by Michael Swan
    Does a tool exist for Visual Studio that orders the members inside each #region alphabetically? I mean, given:

        #region
        A1() ...
        A2() ...
        A0() ...
        B1() ...
        Z1() ...
        B5() ...
        #endregion
        #region
        C1() ...
        C2() ...
        C0() ...
        D1() ...
        X1() ...
        Y5() ...
        #endregion

    with that tool I would want:

        #region
        A0() ...
        A1() ...
        A2() ...
        B1() ...
        B5() ...
        Z1() ...
        #endregion
        #region
        C0() ...
        C1() ...
        C2() ...
        D1() ...
        X1() ...
        Y5() ...
        #endregion

    Thank you

    Read the article

  • Quick Replace in Visual Studio 2010 fails to use Tagged Expressions

    - by slomojo
    I'm trying to do some basic regex Quick Replace operations in Visual Studio 2010, but when I use regex grouping I don't get Tagged Expressions (i.e. \1, \2, etc.) returning their values; instead they are blank. For example:

    Text:

        int a = int.Parse("10");
        int b = int.Parse("20");
        int c = int.Parse("30");

    Search pattern (regex enabled):

        int\.Parse\("([0-9]*)"\);

    Replace:

        \1;

    Replaced text:

        int a = ;
        int b = ;
        int c = ;
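
    A likely explanation, offered as an editorial note rather than part of the original question, and assuming the classic pre-2012 Find and Replace behaviour: Visual Studio 2010's Find and Replace dialog uses its own regular expression flavor rather than the .NET one, and in that flavor tagged expressions are written with braces instead of parentheses. Under that assumption the search pattern would need to be

        int\.Parse\("{[0-9]*}"\);

    so that the replacement \1; receives the captured digits.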

    Read the article

  • Visual Studio 2012 window border leaks onto other screen

    - by chrisstuart
    I have several 30" monitors and as a result, I use the Win+Left/Right to tile windows to the left and right side of each screen. I've noticed an annoying feature of Visual Studio 2012 is that it seems to "leak" onto the next screen. I can see a line on the adjacent screen as if the edge of the window is slightly over the border. Anyone else see this? Is it a bug? This is on Windows 7 64 bit. I've never seen this with any other application.

    Read the article

  • What does it mean for a computer to be an "IBM Compatible PC"?

    - by Jon
    A couple of questions about this: 1) Is this term even relevant any more? 2) Does it mean anything from a developer's standpoint? It is not exactly clear to me whether it refers to a BIOS, an architecture, a bus, or some combination. A piece of software I'm working on expects to see a "Description" of the system, and currently Windows machines report "AT/AT Compatible". Having been tasked with porting this to the Mac, I really don't know what a proper "Description" would be - it will most likely be omitted, but I was wondering if anyone could provide some insight on the modern usage of this term.

    Read the article

  • Visual Studio 2012 intermittent lockup

    - by user1892678
    Visual Studio 2012 intermittently locks up on me. I notice that devenv.exe jumps to 50% CPU utilization, stays at that level for a few minutes, and then drops. While it's at 50% utilization I can still use the IDE; however, it intermittently stops responding (as though it were performing some sort of background process). It only lasts for a few seconds, and it also happens when debugging. I'm running under Windows 7 and I'm using Telerik controls. I've disabled add-ins and extensions with no success. Any ideas would be appreciated. Thanks

    Read the article

  • SL3/SL4 - Ado.Net Data Services Error during new DataServiceCollection<T>(queryResponse)

    - by Soulhuntre
    Hey all, I have two functions in a SL project (VS2010) that do almost exactly the same thing, yet one throws an error and the other does not. It seems to be related to the projections, but I am unsure about the best way to resolve it. The function that works is...

        public void LoadAllChunksExpandAll(DataHelperReturnHandler handler, string orderby)
        {
            DataServiceCollection<CmsChunk> data = null;
            DataServiceQuery<CmsChunk> theQuery = _dataservice
                .CmsChunks
                .Expand("CmsItemState")
                .AddQueryOption("$orderby", orderby);
            theQuery.BeginExecute(
                delegate(IAsyncResult asyncResult)
                {
                    _callback_dispatcher.BeginInvoke(
                        () =>
                        {
                            try
                            {
                                DataServiceQuery<CmsChunk> query = asyncResult.AsyncState as DataServiceQuery<CmsChunk>;
                                if (query != null)
                                {
                                    // create a tracked DataServiceCollection from the result of the asynchronous query.
                                    QueryOperationResponse<CmsChunk> queryResponse = query.EndExecute(asyncResult) as QueryOperationResponse<CmsChunk>;
                                    data = new DataServiceCollection<CmsChunk>(queryResponse);
                                    handler(data);
                                }
                            }
                            catch
                            {
                                handler(data);
                            }
                        });
                },
                theQuery);
        }

    This compiles and runs as expected. A very, very similar function (shown below) fails...

        public void LoadAllPagesExpandAll(DataHelperReturnHandler handler, string orderby)
        {
            DataServiceCollection<CmsPage> data = null;
            DataServiceQuery<CmsPage> theQuery = _dataservice
                .CmsPages
                .Expand("CmsChildPages")
                .Expand("CmsParentPage")
                .Expand("CmsItemState")
                .AddQueryOption("$orderby", orderby);
            theQuery.BeginExecute(
                delegate(IAsyncResult asyncResult)
                {
                    _callback_dispatcher.BeginInvoke(
                        () =>
                        {
                            try
                            {
                                DataServiceQuery<CmsPage> query = asyncResult.AsyncState as DataServiceQuery<CmsPage>;
                                if (query != null)
                                {
                                    // create a tracked DataServiceCollection from the result of the asynchronous query.
                                    QueryOperationResponse<CmsPage> queryResponse = query.EndExecute(asyncResult) as QueryOperationResponse<CmsPage>;
                                    data = new DataServiceCollection<CmsPage>(queryResponse);
                                    handler(data);
                                }
                            }
                            catch
                            {
                                handler(data);
                            }
                        });
                },
                theQuery);
        }

    Clearly the issue is the Expand projections that involve a self-referencing relationship (pages can contain other pages). This is under SL4 or SL3 using ADO.NET Data Services SL3 Update CTP3. I am open to any workaround or pointers to good information; a Google search for the error results in two hits, neither particularly helpful that I can decipher. The short error is "An item could not be added to the collection. When items in a DataServiceCollection are tracked by the DataServiceContext, new items cannot be added before items have been loaded into the collection." The full error is...

        System.Reflection.TargetInvocationException was caught
          Message=Exception has been thrown by the target of an invocation.
          StackTrace:
            at System.RuntimeMethodHandle.InvokeMethodFast(IRuntimeMethodInfo method, Object target, Object[] arguments, SignatureStruct& sig, MethodAttributes methodAttributes, RuntimeType typeOwner)
            at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture, Boolean skipVisibilityChecks)
            at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
            at System.Reflection.MethodBase.Invoke(Object obj, Object[] parameters)
            at System.Data.Services.Client.ClientType.ClientProperty.SetValue(Object instance, Object value, String propertyName, Boolean allowAdd)
            at System.Data.Services.Client.AtomMaterializer.ApplyItemsToCollection(AtomEntry entry, ClientProperty property, IEnumerable items, Uri nextLink, ProjectionPlan continuationPlan)
            at System.Data.Services.Client.AtomMaterializer.ApplyFeedToCollection(AtomEntry entry, ClientProperty property, AtomFeed feed, Boolean includeLinks)
            at System.Data.Services.Client.AtomMaterializer.MaterializeResolvedEntry(AtomEntry entry, Boolean includeLinks)
            at System.Data.Services.Client.AtomMaterializer.Materialize(AtomEntry entry, Type expectedEntryType, Boolean includeLinks)
            at System.Data.Services.Client.AtomMaterializer.DirectMaterializePlan(AtomMaterializer materializer, AtomEntry entry, Type expectedEntryType)
            at System.Data.Services.Client.AtomMaterializerInvoker.DirectMaterializePlan(Object materializer, Object entry, Type expectedEntryType)
            at System.Data.Services.Client.ProjectionPlan.Run(AtomMaterializer materializer, AtomEntry entry, Type expectedType)
            at System.Data.Services.Client.AtomMaterializer.Read()
            at System.Data.Services.Client.MaterializeAtom.MoveNextInternal()
            at System.Data.Services.Client.MaterializeAtom.MoveNext()
            at System.Linq.Enumerable.d_b11.MoveNext()
            at System.Data.Services.Client.DataServiceCollection`1.InternalLoadCollection(IEnumerable`1 items)
            at System.Data.Services.Client.DataServiceCollection`1.StartTracking(DataServiceContext context, IEnumerable`1 items, String entitySet, Func`2 entityChanged, Func`2 collectionChanged)
            at System.Data.Services.Client.DataServiceCollection`1..ctor(DataServiceContext context, IEnumerable`1 items, TrackingMode trackingMode, String entitySetName, Func`2 entityChangedCallback, Func`2 collectionChangedCallback)
            at System.Data.Services.Client.DataServiceCollection`1..ctor(IEnumerable`1 items)
            at Phinli.Dashboard.Silverlight.Helpers.DataHelper.<>c__DisplayClass44.<>c__DisplayClass46.<LoadAllPagesExpandAll>b__43()
          InnerException: System.InvalidOperationException
            Message=An item could not be added to the collection. When items in a DataServiceCollection are tracked by the DataServiceContext, new items cannot be added before items have been loaded into the collection.
            StackTrace:
              at System.Data.Services.Client.DataServiceCollection`1.InsertItem(Int32 index, T item)
              at System.Collections.ObjectModel.Collection`1.Add(T item)
            InnerException:

    Thanks for any help!
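
    One possible workaround, offered as an untested editorial sketch rather than a known fix: since the failure happens while change tracking is being set up for the self-referencing CmsChildPages/CmsParentPage feed, the collection could be materialized without tracking. This assumes the DataServiceCollection<T>(IEnumerable<T>, TrackingMode) constructor overload is available in this version of the client library.

        // Hypothetical sketch: materialize the query result without change tracking.
        // TrackingMode.None avoids attaching the self-referencing pages to the
        // DataServiceContext while the collection is still being loaded.
        QueryOperationResponse<CmsPage> queryResponse =
            query.EndExecute(asyncResult) as QueryOperationResponse<CmsPage>;
        data = new DataServiceCollection<CmsPage>(queryResponse, TrackingMode.None);
        handler(data);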

    Read the article

  • VS 2008 Service Pack 1 problem

    - by Compiler
    Hi, my OS is XP with Service Pack 3 installed. I can't install VS 2008 Service Pack 1; in the log file I see that 'Visual C++ 2008 SP1 Design-Time Components for x86 - KB947888' can't be installed. The error code is 1603. The last part of the installation log is here:

        Returning IDOK. INSTALLMESSAGE_ERROR
        [Error 1335. The cabinet file 'patch.cab' required for this installation is corrupt and cannot be used. This could indicate a network error, an error reading from the CD-ROM, or a problem with this package.]
        [1/12/2009, 10:14:50] (IronSpigot::MsiExternalUiHandler::UiHandler) Returning IDOK. INSTALLMESSAGE_ACTIONSTART
        [Action 10:14:50: Rollback. Rolling back action:]
        [1/12/2009, 10:17:29] (IronSpigot::MspInstallerT<class ATL::CStringT<unsigned short,class ATL::StrTraitATL<unsigned short,class ATL::ChTraitsCRT<unsigned short ::PerformMsiOperation) Patch (C:\DOCUME~1\Cem\LOCALS~1\Temp\Microsoft Visual Studio 2008 SP1\VS90sp1-KB945140-X86-ENU.msp; C:\DOCUME~1\Cem\LOCALS~1\Temp\Microsoft Visual Studio 2008 SP1\VC90sp1-KB947888-x86-enu.msp) install failed on product (Microsoft Visual Studio 2008 Professional Edition - ENU). Msi Log: Microsoft Visual Studio 2008 SP1_20090112_100005671-Microsoft Visual Studio 2008 Professional Edition - ENU-MSP0.txt
        [1/12/2009, 10:17:29] (IronSpigot::MspInstallerT<class ATL::CStringT<unsigned short,class ATL::StrTraitATL<unsigned short,class ATL::ChTraitsCRT<unsigned short ::PerformMsiOperation) MsiApplyMultiplePatches returned 0x643

    Read the article

  • 3rd party data - Store in Data Warehouse or Primary database?

    - by brydgesk
    This is mostly a data warehouse philosophy question. My project involves an Oracle forms application, and a Teradata Data Warehouse for reporting and ad-hoc purposes. In addition to the primary data created by the users of our application, we also require data from various other sources. Currently, this 3rd party data comes via FTPd flat files directly to our Data Warehouse. To access the data, our users must use a series of custom BusinessObjects reports. My question is, would it make more sense for this data to be sent to our source Oracle system instead? Is it ever appropriate for a Data Warehouse to be the point of origin for users to access raw data? In short, is it more important that the operational database contain only the data created by your project, or that the data warehouse remain dedicated solely to reporting and analysis?

    Read the article

  • How can I add list data to my object?

    - by Phsika
    The code below runs perfectly, but I want to rewrite it more simply:

        static void YeniMethodListele()
        {
            Calisan calisan = new Calisan() { ID = 1, Ad = "yusuf", SoyAd = "karatoprak" };
            List<Calisan> myList = new List<Calisan>();
            myList.Add(calisan);
            MyCalisan myCalisan = new MyCalisan() { list = myList };
            //myCalisan.list.Add(calisan);
            foreach (Calisan item in myCalisan.list)
            {
                Console.WriteLine(item.Ad.ToString());
            }
        }

        public class Calisan
        {
            public int ID { get; set; }
            public string Ad { get; set; }
            public string SoyAd { get; set; }
        }

        public class MyCalisan
        {
            public List<Calisan> list { get; set; }

            public MyCalisan()
            {
                list = new List<Calisan>();
            }
        }

    The simpler form I am trying to arrive at is:

        static void YeniMethodListele()
        {
            Calisan calisan = new Calisan() { ID = 1, Ad = "yusuf", SoyAd = "karatoprak" };
            MyCalisan myCalisan = new MyCalisan();
            myCalisan.list.Add(calisan);
            foreach (Calisan item in myCalisan.list)
            {
                Console.WriteLine(item.Ad.ToString());
            }
        }

        public class Calisan
        {
            public int ID { get; set; }
            public string Ad { get; set; }
            public string SoyAd { get; set; }
        }

        public class MyCalisan
        {
            public List<Calisan> list { get; set; }

            public MyCalisan()
            {
                list = new List<Calisan>();
            }
        }
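
    A possible further simplification, offered as an editorial sketch under the assumption that the Calisan and MyCalisan classes stay exactly as posted: object and collection initializers can remove the temporary list entirely, because MyCalisan's constructor already creates the inner list and the initializer syntax simply calls Add on it.

        // Sketch only: relies on the Calisan/MyCalisan classes defined in the question.
        static void YeniMethodListele()
        {
            MyCalisan myCalisan = new MyCalisan
            {
                // Collection initializer on the already-constructed list:
                // each entry is passed to myCalisan.list.Add(...).
                list =
                {
                    new Calisan { ID = 1, Ad = "yusuf", SoyAd = "karatoprak" }
                }
            };

            foreach (Calisan item in myCalisan.list)
            {
                Console.WriteLine(item.Ad);
            }
        }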

    Read the article

  • Transforming binary data using SSIS and SQL Server 2008

    - by Rick
    Hello all - I have a task to import/transform and extract zipped binary files that contain both text data and embedded binary data. Within the files is data that is relational in nature and needs to be processed into a defined database structure. Currently I have a C# single-threaded app that essentially grabs all the files from the directory (currently there are 13K files of varying sizes), extracts the data on a single thread line by line, and inserts it into the database. As you can imagine, this is a very slow process and unacceptable. There are several different parsing routines used depending on the header record in the file. There are potentially up to a million rows per file when all the data is extracted to the row level of detail. A follow-on task is to parse those rows into their appropriate tables based on their content, i.e. the textual content has to be parsed further into "buckets" of like data in the database. That about sums up the big picture.

    Now for the problem task list:

    How do I iterate through a packet of data using SSIS? In the app, the file is decompressed and then parsed using streams and byte arrays, and is routed to the required parsing routine based on the header data of each packet. There is bit swapping involved as well. Should I wrap the app code up into a script task (or tasks) and let it do the custom processing?

    The data is separated by year, and the SQL Server tables are partitioned by year as well. I also need to be able to "catch" bad file data and, most likely, process it by hand.

    Should I simply load the zipped file into SQL as a blob and parse the file with T-SQL? Would that be multi-threaded if done that way? I'm not sure how to do the parsing involved here in T-SQL. Which do you think would be faster?

    Potentially, the data that is currently processed via files could come to us via a socket. Can SSIS collect that data in real time? How would I go about setting that up?

    Processing these new files from the directories will become a daily task. I can manage the data once I get it to SQL Server; getting it there in a timely fashion seems to be the long pole in the tent for me. I would appreciate any comments or suggestions from the group.

    Rick
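
    As a rough illustration of the "wrap the parsing in a script task" option - an editorial sketch, not code from the original post, with the header layout, field sizes, and names invented for the example - the core loop hosted inside an SSIS Script Component acting as a source could look something like this (decompression is assumed to have already happened upstream):

        // Hypothetical packet-parsing loop; the record layout is invented for illustration.
        using System;
        using System.IO;

        static class PacketParser
        {
            public static void ParseFile(string path)
            {
                using (FileStream file = File.OpenRead(path))
                using (BinaryReader reader = new BinaryReader(file))
                {
                    while (true)
                    {
                        // Assumed header: 2-byte record type + 4-byte payload length.
                        byte[] header = reader.ReadBytes(6);
                        if (header.Length < 6)
                        {
                            break; // end of stream
                        }

                        ushort recordType = BitConverter.ToUInt16(header, 0);
                        int payloadLength = BitConverter.ToInt32(header, 2);
                        byte[] payload = reader.ReadBytes(payloadLength);

                        // Route to the parsing routine for this record type and push
                        // the resulting row(s) into the script component's output
                        // buffer instead of inserting row by row over the network.
                    }
                }
            }
        }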

    Read the article

  • Starting to construct a data access layer. Things to consider?

    - by Phil
    Our organisation uses inline SQL. We have been tasked with providing a suitable data access layer and are weighing up the pros and cons of which way to go:

        DataSets
        ADO.NET
        LINQ
        Entity Framework
        SubSonic
        Other?

    Some tutorials and articles I have been using for reference:

        http://www.asp.net/(S(pdfrohu0ajmwt445fanvj2r3))/learn/data-access/tutorial-01-vb.aspx
        http://www.simple-talk.com/dotnet/.net-framework/designing-a-data-access-layer-in-linq-to-sql/
        http://msdn.microsoft.com/en-us/magazine/cc188750.aspx
        http://msdn.microsoft.com/en-us/library/aa697427(VS.80).aspx
        http://www.subsonicproject.com/

    I'm extremely torn, and finding it very difficult to make a decision on which way to go. Our site is a series of 2 internal portals and a public web site. We are using VS 2008 SP1 and framework version 3.5. Please can you give me advice on what factors to consider and any pros and cons you have faced with your data access layer. Thanks.
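
    Whichever technology wins, one consideration worth spelling out with a sketch - an editorial illustration, not from the original question, with the IRepository name and members invented - is keeping the portals and the public site coded against a thin abstraction, so the underlying choice (DataSets, LINQ to SQL, Entity Framework, SubSonic) can change later without touching the sites:

        // Hypothetical data access boundary; the members are placeholders.
        using System.Collections.Generic;

        public interface IRepository<T>
        {
            T GetById(int id);
            IEnumerable<T> GetAll();
            void Add(T entity);
            void Remove(T entity);
            void Save();
        }

        // A LINQ to SQL, Entity Framework, or SubSonic-backed class would implement
        // IRepository<T>; the web sites depend only on the interface.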

    Read the article

  • While trying to open the project - getting an error

    - by Gopal
    Using Visual Studio 2008. I have a .NET project file named UPS.Tek (version: Visual Studio 2005). Its C# files are:

        UPSReport.cs (Visual C# Source File)
        UPSConn.cs (Visual C# Source File)
        UPSBase.cs (Visual C# Source File)
        ...

    When I try to open the C# files individually in VS 2008, they open fine. But when I try to open UPS.Tek in Visual Studio 2008, it shows the error "make sure the application for the project type .csproj is installed". How do I solve this error?

    Read the article

  • Using Core Data Concurrently and Reliably

    - by John Topley
    I'm building my first iOS app, which in theory should be pretty straightforward, but I'm having difficulty making it sufficiently bulletproof for me to feel confident submitting it to the App Store.

    Briefly, the main screen has a table view; upon selecting a row it segues to another table view that displays information relevant for the selected row in a master-detail fashion. The underlying data is retrieved as JSON data from a web service once a day and then cached in a Core Data store. The data previous to that day is deleted to stop the SQLite database file from growing indefinitely. All data persistence operations are performed using Core Data, with an NSFetchedResultsController underpinning the detail table view.

    The problem I am seeing is that if you switch quickly between the master and detail screens several times whilst fresh data is being retrieved, parsed and saved, the app freezes or crashes completely. There seems to be some sort of race condition, maybe due to Core Data importing data in the background whilst the main thread is trying to perform a fetch, but I'm speculating. I've had trouble capturing any meaningful crash information; usually it's a SIGSEGV deep in the Core Data stack.

    The table below shows the actual order of events that happen when the detail table view controller is loaded:

        Main Thread                                   Background Thread
        viewDidLoad
        Get JSON data (using AFNetworking)
        Create child NSManagedObjectContext (MOC)
                                                      Parse JSON data
                                                      Insert managed objects in child MOC
                                                      Save child MOC
                                                      Post import completion notification
        Receive import completion notification
        Save parent MOC
        Perform fetch and reload table view
                                                      Delete old managed objects in child MOC
                                                      Save child MOC
                                                      Post deletion completion notification
        Receive deletion completion notification
        Save parent MOC

    Once the AFNetworking completion block is triggered when the JSON data has arrived, a nested NSManagedObjectContext is created and passed to an "importer" object that parses the JSON data and saves the objects to the Core Data store. The importer executes using the new performBlock method introduced in iOS 5:

        NSManagedObjectContext *child = [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
        [child setParentContext:self.managedObjectContext];
        [child performBlock:^{
            // Create importer instance, passing it the child MOC...
        }];

    The importer object observes its own MOC's NSManagedObjectContextDidSaveNotification and then posts its own notification which is observed by the detail table view controller. When this notification is posted the table view controller performs a save on its own (parent) MOC. I use the same basic pattern with a "deleter" object for deleting the old data after the new data for the day has been imported. This occurs asynchronously after the new data has been fetched by the fetched results controller and the detail table view has been reloaded.

    One thing I am not doing is observing any merge notifications or locking any of the managed object contexts or the persistent store coordinator. Is this something I should be doing? I'm a bit unsure how to architect this all correctly, so would appreciate any advice.

    Read the article

  • Waiting for a background operation to complete

    - by JohnHenry
    I have been suffering from the all-too-common 'Waiting for a background operation to complete...' message in Visual Studio 2012 (Professional) for a while now, but it has been fairly sporadic. Lately, though, I am really struggling to use Visual Studio: pretty much whenever I try to do anything with any Razor views (mostly clicking to move the cursor), Visual Studio hangs and the above message appears for about a minute at a time. (If, when it's finished, I then click in the view again, the process repeats, and repeats, and repeats.....) I have searched high and low, read loads of articles and people's suggestions regarding this, and tried changing indentation settings, resetting settings, etc., but none of it has worked. Has anyone come across something else that may work? This is seriously impeding my ability to use Visual Studio and sadly provoking much cursing.

    Read the article

  • d:DesignData issue, Visual Studio 2010 can't build after adding sample design data with Expression Blend

    - by Valko
    Hi, my VS 2010 solution and Silverlight project build fine. Then:

    I open the MyView.xaml view in Expression Blend 4 and add sample data from a class (I use my own class defined in the same project). After I add the new sample design data with Expression Blend 4, everything looks fine: you see the added sample data in EB 4, and you also see the data in the VS 2010 designer. I close EB 4, and the next VS 2010 build gives me these errors:

        Error 7   XAML Namespace http://schemas.microsoft.com/expression/blend/2008 is not resolved.   C:\Code\source\...myview.xaml

    and:

        Error 12   Object reference not set to an instance of an object.   ... TestSampleData.xaml

    When I open TestSampleData.xaml I see that the namespace for my class used to define the sample data is not recognized. However, this namespace and the class itself exist in the same project! If I remove the design data from MyView.xaml:

        d:DataContext="{d:DesignData /SampleData/TestSampleData.xaml}"

    it builds fine, and the namespace in TestSampleData.xaml is recognized this time?? And then if I add:

        d:DataContext="{d:DesignData /SampleData/TestSampleData.xaml}"

    I again see the sample data in the VS 2010 designer, but the next build fails, and again the studio can't find the namespace in my TestSampleData.xaml containing the sample data. That cycle is driving me crazy. Am I missing something here? Is it not possible to have the class defining your sample design data in the same project as the MyView.xaml view?

    Cheers, Valko

    Read the article

  • Used HDD/ran DiskSmartView/40,000 power-on hours?? Should I trust it w/ my data, or take it back and complain?

    - by David Lindsay
    I just bought a used hard drive from a university surplus store and decided to run DiskSmartView to make sure it wasn't ready to fail. It reports 40,000 power-on hours. I don't know if I feel like trusting my data to something used that heavily. I really don't know if that's unreasonably old, but when I compare it to the POH readings I get when testing my other HDDs, it's more than 3x older (my others have 2,110 hours, 6,150 hours, etc.). It's a Western Digital, so that gives me a little bit of hope (WDC WD4000KD-00NAB0). I could sure use someone else's opinion here. Thanks, DAVE

    Read the article

  • Dark Visual Experience in Visual Studio 2012

    - by Jalpesh P. Vadgama
    I have written a whole series of posts related to Visual Studio 2012 features, and this post is also part of the same series. You can find all my posts related to Visual Studio at the following link:

    Visual Studio 2012 feature series

    A few days ago I was searching for something and found a great way to change the visual experience of Visual Studio 2012: there are two types of themes available in Visual Studio 2012, Light and Dark, under Tools -> Options -> Environment -> General. This is one of the newest features I have found in Visual Studio 2012.

    Read More >>

    Read the article

  • Windows Azure Recipe: Big Data

    - by Clint Edmonson
    As the name implies, what we're talking about here is the explosion of electronic data that comes from huge volumes of transactions, devices, and sensors being captured by businesses today. This data often comes in unstructured formats and/or too fast for us to effectively process in real time. Collectively, we call these the 4 big data V's: Volume, Velocity, Variety, and Variability. These qualities make this type of data best managed by NoSQL systems like Hadoop, rather than by a conventional Relational Database Management System (RDBMS). We know that there are patterns hidden inside this data that might provide competitive insight into market trends. The key is knowing when and how to leverage these "NoSQL" tools combined with traditional tools such as SQL-based relational databases and warehouses and other business intelligence tools.

    Drivers

        Petabyte scale data collection and storage
        Business intelligence and insight

    Solution

    The sketch below shows one of many big data solutions using Hadoop's unique highly scalable storage and parallel processing capabilities combined with Microsoft Office's Business Intelligence Components to access the data in the cluster.

    Ingredients

        Hadoop - this big data industry heavyweight provides both large scale data storage infrastructure and a highly parallelized map-reduce processing engine to crunch through the data efficiently. Here are the key pieces of the environment:
            Pig - a platform for analyzing large data sets that consists of a high-level language for expressing data analysis programs, coupled with infrastructure for evaluating these programs.
            Mahout - a machine learning library with algorithms for clustering, classification and batch based collaborative filtering that are implemented on top of Apache Hadoop using the map/reduce paradigm.
            Hive - data warehouse software built on top of Apache Hadoop that facilitates querying and managing large datasets residing in distributed storage. Directly accessible to Microsoft Office and other consumers via add-ins and the Hive ODBC data driver.
            Pegasus - a peta-scale graph mining system that runs in a parallel, distributed manner on top of Hadoop and that provides algorithms for important graph mining tasks such as Degree, PageRank, Random Walk with Restart (RWR), Radius, and Connected Components.
            Sqoop - a tool designed for efficiently transferring bulk data between Apache Hadoop and structured data stores such as relational databases.
            Flume - a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data to HDFS.
        Database - directly accessible to Hadoop via the Sqoop-based Microsoft SQL Server Connector for Apache Hadoop, so data can be efficiently transferred to traditional relational data stores for replication, reporting, or other needs.
        Reporting - provides easily consumable reporting when combined with a database being fed from the Hadoop environment.

    Training

    These links point to online Windows Azure training labs where you can learn more about the individual ingredients described above.

        Hadoop Learning Resources (20+ tutorials and labs) - a huge collection of resources for learning about all aspects of Apache Hadoop-based development on Windows Azure and the Hadoop and Windows Azure ecosystems.
        SQL Azure (7 labs) - Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending the SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.

    See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.

    Read the article

  • Favorite Visual Studio 2010 Extensions

    - by Scott Dorman
    Now that Visual Studio 2010 has been released, there are a lot of extensions being written. In fact, as of today (May 1, 2010 at 15:40 UTC) there are 809 results for Visual Studio 2010 in the Visual Studio Gallery. If you filter this list to show just the free items, there are still 251 extensions available. Given that number (and it is currently increasing weekly), it can be difficult to find extensions that are useful. Here is the list of extensions that I currently have installed and find useful:

        Word Wrap with Auto-Indent
        Indentation Matcher Extension
        Structure Adornment (this also installs the following extensions:)
            BlockTagger
            BlockTaggerImpl
            SettingsStore
            SettingsStoreImpl
        Source Outliner
        Triple Click
        ItalicComments
        Go To Definition
        Spell Checker
        Remove and Sort Using
        Format Document
        Open Folder in Windows Explorer
        Find Results Highlighter
        Regular Expressions Margin
        Indention Matcher Extension
        Word Wrap with Auto-Indent
        VSCommands
        HelpViewerKeywordIndex
        StyleCop
        Visual Studio Color Theme Editor
        PowerCommands for Visual Studio 2010
        Extension Analyzer
        CodeCompare
        Team Foundation Server Power Tools
        VS10x Selection Popup
        Color Picker Completion
        Numbered Bookmarks

    Technorati Tags: Visual Studio, Extensions

    Read the article
