Search Results

Search found 69877 results on 2796 pages for 'ibm data studio'.


  • Setting up Subsonic 3.0 on an ASP.NET web application in Visual Studio 2008

    - by Rob Paul
    I followed the steps in the tutorial video at the SubSonic website. Everything seems to be self-explanatory, but when I copy the .tt files into my Visual Studio project, nothing happens. I have read other questions relating to this problem on this website, but they don't seem to fix it. I also went into regedit to check the generator key for .tt files, and it is properly set. This is not Visual Studio Express or anything but the Professional edition. Also, I am trying to build an ASP.NET web application, not an MVC application like the demo. Any help will be greatly appreciated. Thank you.
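    For reference, this is what I understand a correctly wired-up template entry to look like inside the .csproj (a sketch only - the file name here is an example, and your SubSonic template names will differ). Each .tt file should carry the TextTemplatingFileGenerator custom tool; if it does, right-clicking the file and choosing "Run Custom Tool" should regenerate the output:

        <ItemGroup>
          <None Include="ActiveRecord.tt">
            <Generator>TextTemplatingFileGenerator</Generator>
            <LastGenOutput>ActiveRecord.cs</LastGenOutput>
          </None>
        </ItemGroup>

    Equivalently, selecting the .tt file in Solution Explorer and setting its "Custom Tool" property to TextTemplatingFileGenerator should have the same effect.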


  • Visual Studio debugger problem

    - by Alexandre Pepin
    In Visual Studio 2008, after debugging for 1-2 minutes, when I press F10 (Step Over) the debugger hangs and Visual Studio freezes for 5-10 seconds before going to the next line. From then on, whatever I do (F10, F5, F11, etc.), the debugger continues execution as if I had pressed F5, and all the forms I was debugging close. I always have to restart the application. It is very hard to reproduce and it does not occur every time I debug something. Does anyone have a solution?


  • Can I use Visual Studio 2010's C++ compiler with Visual Studio 2008's C++ Runtime Library?

    - by BillyONeal
    I have an application that needs to operate on Windows 2000. I'd also like to use Visual Studio 2010 (mainly because of the change in the definition of the auto keyword). However, I'm in a bit of a bind because I need the app to be able to operate on older OS's, namely:

    - Windows 2000
    - Windows XP RTM
    - Windows XP SP1

    Visual Studio 2010's runtime library depends on the EncodePointer / DecodePointer API, which was introduced in Windows XP SP2. If using the alternate runtime library is possible, will this break code that relies on C++0x features added in VS2010, like std::regex?
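    As an aside (my own sketch, not from the original question): one way to confirm which machines actually export the API is to probe kernel32 directly. PowerShell won't exist on the Windows 2000 targets themselves, so this is only a diagnostic to run wherever PowerShell is available:

        # Probe kernel32.dll for EncodePointer via GetProcAddress
        $sig = '[DllImport("kernel32.dll")] public static extern IntPtr GetModuleHandle(string name); [DllImport("kernel32.dll")] public static extern IntPtr GetProcAddress(IntPtr module, string procName);'
        Add-Type -Namespace Win32 -Name NativeMethods -MemberDefinition $sig
        $kernel32 = [Win32.NativeMethods]::GetModuleHandle("kernel32.dll")
        if ([Win32.NativeMethods]::GetProcAddress($kernel32, "EncodePointer") -eq [IntPtr]::Zero) {
            "EncodePointer is NOT exported here - the stock VS2010 CRT would fail to load"
        } else {
            "EncodePointer is available"
        }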


  • Manipulating the case of the build macros in Visual Studio: $(TargetDir)

    - by miked
    I've come across a weird problem today in Visual Studio 2005. If I create a new configuration "NewConfiguration" for one of my projects, the output directory referred to by the project in $(TargetDir) is "/path/to/build/area/newconfiguration". Note the loss of capital letters; I'd expect it to be "/path/to/build/area/NewConfiguration". In another project that I created yesterday, the capital letters are there. Normally this wouldn't be a problem, but it's part of a fairly complicated build system where some of it runs on Unix and we need to worry about case-sensitive filenames. Does anyone know where the source strings for the Visual Studio macros like $(TargetDir) are stored, so that I can get the case to be consistent and match what we need for our build system?
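    If it helps, my understanding (an assumption worth verifying, not something I have confirmed for this project) is that the casing comes from the Name attribute of the <Configuration> element in the .vcproj file, which is plain XML you can fix in a text editor. A hypothetical excerpt:

        <Configurations>
            <Configuration
                Name="NewConfiguration|Win32"
                OutputDirectory="$(SolutionDir)$(ConfigurationName)"
                IntermediateDirectory="$(ConfigurationName)">
                <!-- tool settings elided -->
            </Configuration>
        </Configurations>

    If the Name attribute reads "newconfiguration|Win32" there, correcting the case in the file (with the project unloaded) should correct $(ConfigurationName) and everything derived from it, such as $(TargetDir).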


  • Cleaning a dataset of song data - what sort of problem is this?

    - by Rob Lourens
    I have a set of data about songs. Each entry is a line of text which includes the artist name, song title, and some extra text. Some entries are only "extra text". My goal is to resolve as many of these as possible to songs on Spotify using their web API. My strategy so far has been to search for the entry via the API - if there are no results, apply a transformation such as "remove all text between ( )" and search again. I have a list of heuristics and I've had reasonable success with this but as the code gets more and more convoluted I keep thinking there must be a more generic and consistent way. I don't know where to look - any suggestions for what to try, topics to study, buzzwords to google?
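    To make the current approach concrete, here is a minimal sketch of the heuristic pipeline as described above (Search-Spotify is a hypothetical wrapper around the Spotify web API, and the transformations are just examples):

        function Resolve-Entry($entry) {
            # Ordered heuristics: each takes the raw line and returns a cleaned-up guess
            $transforms = @(
                { param($s) $s },                               # try the entry as-is first
                { param($s) $s -replace '\([^)]*\)', '' },      # remove all text between ( )
                { param($s) $s -replace '\[[^\]]*\]', '' },     # likewise for [ ]
                { param($s) $s -replace '(?i)feat\..*$', '' }   # drop "feat. ..." credits
            )
            foreach ($t in $transforms) {
                $candidate = (& $t $entry).Trim()
                $hit = Search-Spotify $candidate                # hypothetical API wrapper
                if ($hit) { return $hit }
            }
            return $null                                        # unresolved ("extra text" only?)
        }

    The question, of course, is whether there is something more generic than an ever-growing ordered list like this.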


  • Augmenting your Social Efforts via Data as a Service (DaaS)

    - by Mike Stiles
    The following is the third in a series of posts on the value of leveraging social data across your enterprise, by Oracle VP Product Development Don Springer and Oracle Cloud Data and Insight Service Sr. Director Product Management Niraj Deo. In this post, we will discuss the approach and value of integrating additional "public" data via a cloud-based Data-as-a-Service platform (or DaaS) to augment your socially enabled Big Data analytics and CX management.

    Let's assume you have a functional Social-CRM platform in place. You are now successfully and continuously listening to and learning from your customers and key constituents in social media, you are identifying relevant posts and following up with direct engagement where warranted (1:1, 1:community, 1:all), and you are starting to integrate signals for communication into your appropriate Customer Experience (CX) Management systems, as well as insights for analysis in your business intelligence application. What is the next step?

    Augmenting Social Data with other Public Data for More Advanced Analytics

    When we say advanced analytics, we are talking about understanding causality and correlation from a wide variety, volume and velocity of data to Key Performance Indicators (KPIs), to achieve and optimize business value - and in some cases, to predict future performance so you can make appropriate course corrections and change the outcome to your advantage while you still can. The data to acquire, process and analyze for this is very nuanced:

    - It can vary across structured, semi-structured and unstructured data
    - It can span content, profile, and communities-of-profiles data
    - It is increasingly public, curated and user generated

    The key is not just getting the data, but making it value-added data and using it to help discover the insights that connect to and improve your KPIs. As we spend time working with our larger customers on advanced analytics, we have seen a need arise for more business applications to be able to ingest and use "quality" curated, social, transactional reference data and corresponding insights. The challenge for the enterprise has been getting this data inline into an easily accessible system, and providing the contextual integration of the underlying data, enriched with insights, to be exported into the enterprise's business applications. The following diagram shows the requirements for this next-generation data and insights service, or DaaS.

    Some quick points on these requirements. Public Data, in this context, is about Common Business Entities, such as:

    - Customers, Suppliers, Partners, Competitors (all organizations)
    - Contacts, Consumers, Employees (all people)
    - Products, Brands

    This data can be broadly categorized, incrementally, as:

    - Base Utility data (address, industry classification)
    - Public Master Reference data (trade style, hierarchy)
    - Social/Web data (news, feeds, graph)
    - Transactional data generated by enterprise processes, workflows, etc.

    This data has traits of high volume, variety and velocity, and the technology needed to efficiently integrate it for your needs includes:

    - Change management of Public Reference Data across all categories
    - Applied Big Data to extract statistics as well as real-time insights
    - Knowledge Diagnostics and Data Mining

    As you consider how to deploy this solution, many of our customers will be using an online "cloud" service that provides quality data and insights uniformly to all their necessary applications. In addition, they are requesting a service that is:

    - Agile and easy to use: applications integrated with the service can obtain data on demand, quickly and simply
    - Cost-effective: pre-integrated into applications, so customers don't have to do that integration themselves
    - High in data quality: a single point of access to reference data, for data quality and linkages to transactional, curated and social data
    - Supportive of data governance: control of data privacy and compliance becomes more manageable and cost-effective when it can be enforced in a centralized place

    Data-as-a-Service (DaaS)

    Just as the cloud has transformed, and now offers a better path for, how an enterprise manages its IT - from infrastructure to platform to software (IaaS, PaaS and SaaS) - the next step is data (DaaS). Over the last three years, we have seen the market begin to offer cloud-based data services and gain initial traction. On one side of the DaaS continuum, we see an "appliance" type of service that provides a single, reliable source of accurate business data plus social information about accounts, leads, contacts, etc. On the other side of the continuum, we see more of an online market "exchange" approach, where ISVs and data publishers can publish and sell premium datasets within the exchange, with the exchange providing a rich set of web interfaces to ease data integration. Why the difference? It depends on the provider's philosophy about how fast certain data types will become commoditized.

    How do you decide the best approach? Our perspective, as shown in the diagram below, is that the enterprise should develop an elastic schema to support multi-domain applicability. This allows the enterprise to take the most flexible approach to harnessing the speed and breadth of public data to achieve value. The key tenet of the proposed approach is that an enterprise carefully federates common utility and master reference data end points, mobility considerations and content processing, so that they are pervasively available. One way you may already be familiar with this approach is in how you do address verification treatments for accounts, contacts, etc. If you design and revise this service so that it is also easily available to social analytic needs, you could extend it to launch geo-location-based social use cases (marketing, sales, etc.).

    Our fundamental belief is that real value will be achieved through value-added data: data enriched with specialized algorithms, and with business "know-how" applied to weight-factor KPIs based on innovative combinations across an ever-increasing variety, volume and velocity of data. Essentially, Data-as-a-Service becomes a single entry point for the ever-increasing richness and volume of public data, with enrichment and combination capabilities to extract and integrate the right data from the right sources, with the right factoring, at the right time, for faster decision-making and action within your core business applications. As more data becomes available (and in many cases commoditized), this value-added data processing approach will provide you with ongoing competitive advantage.

    Let's look at a quick example of creating a master reference relationship that could be used as an input for a variety of your existing business applications. In phase 1, a simple master relationship is established between a company (e.g. General Motors) and social insights for a variety of its car brands. The reference data allows for easy sorting, export and integration into a set of CRM use cases for analytics, sales and marketing. In phase 2, you create more data relationships (e.g. competitors, contacts, other brands) to build broader and deeper references (social profiles, social metadata) for more use cases across CRM, HCM, SRM, etc. This is just the tip of the iceberg, as the number of master reference relationships is constrained only by your imagination and the availability of quality curated data to work with.

    DaaS is just now emerging in the marketplace as the next step in cloud transformation. For some of you, this may be the first you have heard of it. Let us know if you have questions or perspectives. In the meantime, we will continue to share insights as we can.

    Photo: Erik Araujo, stock.xchng


  • Cannot open files in Visual Studio but can in Delphi and Notepad

    - by Andrew J. Brehm
    About an hour ago Visual Studio 2008 decided that it cannot find files any more. This is on 64-bit Windows Vista. When I right-click on a text file (source code or otherwise) and select "Open with" and "Visual Studio 2008", I get the following error (example): Windows cannot find 'C:\Users\ajbrehm\Documents\Visual Studio 2008\Projects\Hello Prism\Hello Prism\Main.pas'. Make sure you typed the name correctly, and then try again. When I right-click the same file and select "Open with" and "Delphi 2010" or "Notepad" (the two other options available for text files on my system), the file opens correctly. Oddly enough, when the file is part of a Visual Studio project and I open the project itself with Visual Studio (this works), I can open the file from within Visual Studio. Any ideas what might be going on? This started about an hour after I made a complete backup of my Vista VM and after I installed IIS 7, SQL Express, and SourceGear Vault. The first files I noticed couldn't be opened in Visual Studio any more were Pascal source files in checked-out folders from Vault. Vault also seems to be unable to see one of the source files and claims it doesn't exist. I found out about Visual Studio not opening ANY files any more when I tried to recreate the file Vault refused to see. Update: I just checked. Another user, "administrator", can still open text files with Visual Studio 2008. Both users have administrator rights. Update: I just restored the hours-old backup. Same problem. Apparently whatever triggered this happened before the install of IIS 7 and SQL Express. I never noticed it before.
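    In case it helps anyone reproduce this, the place I intend to compare between the working "administrator" account and mine is the "Open with" registration for devenv.exe (just a guess at the culprit on my part, not a confirmed cause):

        # Compare the per-machine and per-user "Open with" registrations for devenv.exe
        Get-ItemProperty 'HKLM:\SOFTWARE\Classes\Applications\devenv.exe\shell\Open\Command' -ErrorAction SilentlyContinue
        Get-ItemProperty 'HKCU:\Software\Classes\Applications\devenv.exe\shell\Open\Command' -ErrorAction SilentlyContinue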


  • Working with data and meta data that are separated on different servers

    - by afuzzyllama
    While developing a product, I've come across a situation where my group wants to store the meta data for data entry forms (questions, layout, etc.) in a different database than the database where the collected data is stored. This is mostly for security: we want our meta data to be public facing while keeping collected data as secure as possible. I was thinking about writing a web service that provides the meta information that the data collection program could access. The only issue I see with this approach is that the front end is going to have to match the meta data with the collected data, which would be more efficient as a join on the back end. Currently, this system is slated to run on .NET and MSSQL. I haven't played around with .NET libraries running in SQL, but I'm considering trying to create logic that would pull from the web service, convert the meta data into a table that SQL can join on, and return the combined data and meta data that way. Is this solution the wrong way to approach the problem? Is there a pattern or "industry standard" way of bringing together two datasets that don't live in the same database?
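    Conceptually (all names here are hypothetical, just to illustrate the shape of the idea), what I am imagining on the SQL side is a CLR table-valued function that calls the metadata web service and exposes the result as rows the engine can join on:

        -- Sketch only: dbo.GetFormMetadata() would be a CLR TVF wrapping the web service
        SELECT  d.ResponseId,
                d.QuestionId,
                d.Answer,
                m.QuestionText,
                m.LayoutHint
        FROM    dbo.CollectedData AS d
        JOIN    dbo.GetFormMetadata() AS m
                ON m.QuestionId = d.QuestionId;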


  • The Business case for Big Data

    - by jasonw
    The Business Case for Big Data, Part 1: What's the Big Deal?

    Okay, so a new buzzword is emerging. Actually, it's gone beyond just a buzzword now, and I think it is going to change the landscape of retail, financial services, healthcare... everything. Let me set out what I'm going to talk about. Massive amounts of data are being collected every second, more than ever imaginable, and the size of this data is more than can be practically managed by today's strategies and technologies. There is a revolution at hand centering on this groundswell of data, and it will change how we execute our businesses through greater efficiencies, new revenue discovery and even enabling innovation. It is the revolution of Big Data. This is more than just a new buzzword being tossed around technology circles. This blog series on Big Data will explain this new wave of technology and provide a roadmap for businesses to take advantage of this growing trend.

    Cases for Big Data

    There is a growing list of use cases for Big Data. We naturally think of marketing as the low-hanging fruit: many projects look to analyze Twitter feeds to find new ways to do marketing. I think of a great example from a TED talk on data visualization of Facebook data that I saw recently during my master's studies at the University of Virginia: we can see when break-ups are most likely to occur by looking at status changes and updates on users' Walls (Ted Video). This is the intersection of Big Data, analytics and traditional structured data. Marketers can use this to sell more stuff. I also really like the following piece on analyzing Twitter feeds to measure mood. The company behind it was bought by a hedge fund: they could predict how the S&P was going to do within three days at 85% accuracy (link to the article). Here we see a convergence of predictive analytics and Big Data. So, we'll look at a lot of these business cases and start talking about what this means for the business. It's more than just finding ways to use Hadoop + NoSQL, and we'll talk about that too. How do I start in Big Data? That's what is coming in the next post.


  • Updates to Nino’s .hgignore files for Visual Studio

    - by PSteele
    As I move more of my repositories from SVN to Mercurial, I'm constantly referring to Nino's sample .hgignore file he provided for Visual Studio developers. I always start with his file but add a few more lines, and thought I'd share them here. Start with Nino's .hgignore file and add the following two lines at the bottom:

        TestResults\*
        glob:desktop.ini

    Obviously, we don't need to version our TestResults. And I don't want to version the occasional desktop.ini that gets generated by XP when you tweak folder settings.


  • Big Data – Buzz Words: What is NoSQL – Day 5 of 21

    - by Pinal Dave
    In yesterday's blog post we explored the basic architecture of Big Data. In this article we will take a quick look at one of the four most important buzz words that go around Big Data - NoSQL.

    What is NoSQL? NoSQL stands for Not Relational SQL or Not Only SQL. Lots of people think that NoSQL means there is no SQL, which is not true - the two sound the same but the meaning is totally different. NoSQL does use SQL, but it uses more than SQL to achieve its goal. As per Wikipedia's NoSQL database definition - "A NoSQL database provides a mechanism for storage and retrieval of data that uses looser consistency models than traditional relational databases."

    Why use NoSQL? A traditional relational database usually deals with predictable, structured data. But as the world has moved on to unstructured data, we often see the limitations of the traditional relational database in dealing with it. For example, nowadays we have data in the form of SMS messages, wave files, photos and videos. It is a bit difficult to manage such data using a traditional relational database. I often see people using a BLOB field to store it. A BLOB can store the data, but when we have to retrieve or process it, the BLOB is extremely slow at handling unstructured data. A NoSQL database is the type of database that can handle the unstructured, unorganized and unpredictable data that our business needs. Along with support for unstructured data, the other advantages of a NoSQL database are high performance and high availability.

    Eventual Consistency: Additionally, note that a NoSQL database may not provide 100% ACID (Atomicity, Consistency, Isolation, Durability) compliance. Though NoSQL databases do not support ACID, they provide eventual consistency: over a long enough period of time, all updates can be expected to propagate through the system and the data will become consistent.

    Taxonomy: Taxonomy is the practice of classifying things or concepts, and its principles. The NoSQL taxonomy supports column stores, document stores, key-value stores, and graph databases. We will discuss the taxonomy in detail in later blog posts. Here are a few examples from each NoSQL category:

    - Column: Hbase, Cassandra, Accumulo
    - Document: MongoDB, Couchbase, Raven
    - Key-value: Dynamo, Riak, Azure, Redis, Cache, GT.m
    - Graph: Neo4J, Allegro, Virtuoso, Bigdata

    As of now there are over 150 NoSQL databases, and you can read everything about them in this single link.

    Tomorrow: In tomorrow's blog post we will discuss the buzz word Hadoop.

    Reference: Pinal Dave (http://blog.sqlauthority.com)


  • Visual Studio 2008 Solution Setup

    - by Ben Griswold
    In this screencast, Noah and I demonstrate preferred practices around .NET solution setup, naming conventions and version control. I consider this an introductory video. If you've been around the block, you might want to skip this episode, but if you're a .NET/Visual Studio newbie, it may be worth a look: YouTube - Visual Studio 2008 Solution Setup. This is one of our first screencasts. Actually, it is the very first. If you have feedback, I'd love to hear it.


  • Querying Visual Studio project files using T-SQL and Powershell

    - by jamiet
    Earlier today I had a need to get some information out of a Visual Studio project file, and in this blog post I'm going to share a couple of ways of going about that, because I'm pretty sure I won't be the only person that ever wants to do this. The specific problem I was trying to solve was finding out how many objects in my database project (i.e. in my .dbproj file) had any warnings suppressed, but the techniques discussed below will work pretty well for any Visual Studio project file, because every such file is simply an XML document, hence it can be queried by anything that can query XML documents. Ever heard the phrase "when all you've got is a hammer, everything looks like a nail"? Well, that's me with querying stuff - if I can write SQL then I'm writing SQL. Here's a little noddy database project I put together for demo purposes: two views and a stored procedure, nothing fancy. I suppressed warnings for [View1] & [Procedure1], and hence the pertinent part of my project file looks like this:

        <ItemGroup>
          <Build Include="Schema Objects\Schemas\dbo\Views\View1.view.sql">
            <SubType>Code</SubType>
            <SuppressWarnings>4151,3276</SuppressWarnings>
          </Build>
          <Build Include="Schema Objects\Schemas\dbo\Views\View2.view.sql">
            <SubType>Code</SubType>
          </Build>
          <Build Include="Schema Objects\Schemas\dbo\Programmability\Stored Procedures\Procedure1.proc.sql">
            <SubType>Code</SubType>
            <SuppressWarnings>4151</SuppressWarnings>
          </Build>
        </ItemGroup>

    Note the <SuppressWarnings> elements - those are the bits of information that I am after. With a lot of help from folks on the SQL Server XML forum I came up with the following query that nailed what I was after. It reads the contents of the .dbproj file into a variable of type XML and then shreds it using T-SQL's XML data type methods:

        DECLARE @xml XML;
        SELECT @xml = CAST(pkgblob.BulkColumn AS XML)
        FROM   OPENROWSET(BULK 'C:\temp\QueryingProjectFileDemo\QueryingProjectFileDemo.dbproj' -- <-Change this path!
                         ,single_blob) AS pkgblob
        ;
        WITH XMLNAMESPACES( 'http://schemas.microsoft.com/developer/msbuild/2003' AS ns)
        SELECT  REVERSE(SUBSTRING(REVERSE(ObjectPath),0,CHARINDEX('\',REVERSE(ObjectPath)))) AS [ObjectName]
               ,[SuppressedWarnings]
        FROM   (
               SELECT  build.query('.') AS [_node]
               ,       build.value('ns:SuppressWarnings[1]','nvarchar(100)') AS [SuppressedWarnings]
               ,       build.value('@Include','nvarchar(1000)') AS [ObjectPath]
               FROM    @xml.nodes('//ns:Build[ns:SuppressWarnings]') AS R(build)
               )q

    And here's the output:

    And that's it - an easy way of discovering which warnings have been suppressed, and for which objects, in your database projects. I won't bother going over the code as it is fairly self-explanatory - peruse it at your leisure. Once I had the SQL above I figured I'd share it around a little in case it was ever useful to anyone else; hence I'm writing this blog post, and I also posted it on the Visual Studio Database Development Tools forum at FYI: Discover which objects have had warnings suppressed. Luckily Kevin Goode saw the thread and posted a different solution to the same problem, one that uses Powershell. The advantage of Kevin's Powershell approach is that it is easy to analyse many .dbproj files at the same time.
    Below is Kevin's code, which I have tweaked ever so slightly so that it produces the same results as my SQL script (I just want any object that had had a warning suppressed, whereas Kevin was querying specifically for warning 4151):

        cd 'C:\Temp\QueryingProjectFileDemo\'
        cls
        $projects = ls -r -i *.dbproj
        Foreach($project in $projects)
        {
            $xml = new-object System.Xml.XmlDocument
            $xml.set_PreserveWhiteSpace( $true )
            $xml.Load($project)
            #$xpath = @{Start="/e:Project/e:ItemGroup/e:Build[e:SuppressWarnings=4151]/@Include"}
            #$xpath = @{Start="/e:Project/e:ItemGroup/e:Build[contains(e:SuppressWarnings,'4151')]/@Include"}
            $xpath = @{Start="/e:Project/e:ItemGroup/e:Build[e:SuppressWarnings]/@Include"}
            $ns = @{ e = "http://schemas.microsoft.com/developer/msbuild/2003" }
            $xml | Select-Xml -XPath $xpath.Start -Namespace $ns | Select -Expand Node | Select -expand Value
        }

    And here's the output:

    Nice reusable Powershell and SQL scripts - not bad for an evening's work. Thank you to Kevin for allowing me to share his code. Don't forget that these techniques can easily be adapted to query any Visual Studio project file; they're only XML documents after all! Doubtless many people out there already have code for doing this, but nonetheless here is another offering to the great script library in the sky. Have fun! @Jamiet


  • BAM Data Control in multiple ADF Faces Components

    - by [email protected]
    As we know, Oracle BAM data control instance sharing is not supported. When two or more ADF Faces components must display the same data, and are bound to the same Oracle BAM data control definition, we have to make sure that we wrap each ADF Faces component in an ADF task flow, and set the Data Control Scope to isolated. This blog will show a small sample to demonstrate this. In this sample we will create a Pie and a Bar graph using the same BAM data control, such that both components use the same data control but have isolated scope. This sample can be downloaded from Sample1.zip.

    Set-up: Create a BAM data control using the Employees DO (sample).

    Steps:
    1. Right-click on the ViewController project and select "New -> ADF Task Flow".
    2. Check "Create Bounded Task Flow", give the task flow (TF) a meaningful name (e.g. EmpPieTF.xml) and click "OK".
    3. From the Components Palette, drag and drop "View" into the task flow diagram. Give the view a meaningful name.
    4. Double-click the view and click "OK" in the "Create New JSF Page Fragment" dialog.
    5. From "Data Controls", drag and drop "Employees -> Query" onto this jsff page as "Graph -> Pie" (Pie: Sales_Number, Slices: Salesperson).
    6. Repeat steps 1 through 4 for another task flow (e.g. EmpBarTF).
    7. From "Data Controls", drag and drop "Employees -> Query" onto that jsff page as "Graph -> Bar" (Bars: Sales_Number, X-axis: Salesperson).
    8. Open the task flow created in step 2.
    9. In the Structure pane, right-click on "Task Flow Definition - EmpPieTF".
    10. Click "Insert inside Task Flow Definition - EmpPieTF -> ADF Task Flow -> Data Control Scope" and click "OK".
    11. For the "Data Control Scope", in the Property Inspector -> General section, change the data control scope from Shared to Isolated.
    12. Repeat steps 8 through 11 for the second task flow.
    13. Now create a new jspx page, e.g. Main.jspx. Drag and drop both task flows ("EmpPieTF" and "EmpBarTF") onto it as regions. Surround them with panel components as needed.
    14. Run the page Main.jspx.

    Now when the page runs, although both components were created using the same data control, the bindings are not shared and each component has a separate instance of the data control.


  • Finding Those Pesky Unicode Characters in Visual Studio

    - by fallen888
    Sometimes I'm handed HTML that I need to wire up, and I find these characters. Usually there are only a couple on the page and, while annoying to find, it's not a big deal. Recently I found dozens and dozens of these guys on a page and wasn't very happy at the prospect of having to manually search them all out and remove/replace them. That is, until I did some research and found this very helpful article by Aaron Jensen - Finding Non-ASCII Characters with Visual Studio.

    Aaron's wonderful solution: try searching your code with the following regular expression:

        [^\x00-\x7f]

    Open any of Visual Studio's find windows and enter the regular expression above into the "Find what:" text box. Click the "Find Options" plus sign to expand the list of options. Check the last box "Use:" and choose "Regular expressions" from the drop-down menu. Easy and efficient. Thanks, Aaron!


  • Using Visual Studio as a Task-Focused IDE

    - by Jay Stevens
    Are there patterns or libraries or any official Microsoft SDK for using Visual Studio as a specifically Task-Focused UI? For example, both Revolution R (IDE for the R language) and SQL 2012 (and I think SQL 2008 and possibly 2005) use Visual Studio as the underlying IDE framework. Is there an officially supported SDK and/or examples/samples for doing this type of thing? I am building a language Parser for an existing language - whose only available IDE is INSANELY expensive - using Irony (and eventually will generate a Language Service as well). Any direct or indirect suggestions/answers are appreciated.


  • Pair programming remotely with Visual Studio?

    - by shamp00
    What tools exist to facilitate pair programming with Visual Studio when the programmers are not in the same physical location? At the moment we are thinking voice (Skype?) plus remote desktop (VNC? TeamViewer?), but it would be good to know of other suggestions and experiences. Also, is there anything more integrated with Visual Studio? A bit more background: we are two experienced developers who have collaborated well for a long time on a large, mature project (ASP.NET, Windows Forms and SQL Server), although we are not usually working on the same part of the code base at the same time. We intend to spend some weeks doing substantial refactoring, and it would be ideal if we could do this work with a pair-programming approach.


  • Are there compatibility issues opening Visual Studio Professional projects in Visual Studio Express, and vice versa?

    - by theGreenCabbage
    Disclaimer: I have taken a look at the 50+ StackExchange forums to find the right place, and it seems /Programmers/ is the most suitable Exchange for this. If this is the wrong place to ask, however, please let me know and I will personally delete the thread. I am in the process of purchasing a single license of Visual Studio 2013 for my firm of 2-3 developers. One license is approximately $498.00 USD. As a small firm, our funds are short, but since we will be creating commercial software, we decided we will need the features of the Professional edition. At the same time, our decision is to use the Express edition for the other developers. My question is: will there be compatibility issues between Express projects and Professional projects in Visual Studio?


  • Visual Studio 2010 & .NET 4.0 RC in Feb-2010

    Scott says, "In order to make sure that these fixes truly address the performance issues reported, and to help validate them across the broadest number of scenarios and machine configurations, we've decided to ship another public preview release of VS 2010 and .NET 4 before we ship. Specifically, we plan to make a Release Candidate build available in February that everyone will be able to download and test. It will be a public build and include a broad 'go live' license that supports production deployment. The goal behind the Release Candidate is to get broad feedback on the readiness of the product. In order to ensure that we are able to receive and react to this feedback, we will also be moving the launch of Visual Studio 2010 and .NET 4 back a few weeks."


  • MSM Merge Modules in Visual Studio 2013

    - by theGreenCabbage
    Could someone please let me know where I might find resources for creating MSM files? While I am able to create MSI files using InstallShield, it seems that Visual Studio no longer supports Merge Module Projects, judging by the link below and the screenshot of my version of Visual Studio 2013: http://msdn.microsoft.com/en-us/library/z6z02ts5(v=vs.80).aspx

    To create a new merge module project:
    1. On the File menu, point to Add, then click New Project.
    2. In the resulting Add New Project dialog box, in the Project types pane, open the Other Project Types node and select Setup and Deployment Projects.
    3. In the Templates pane, choose Merge Module Project.


  • IBM Informix Spatial datablade LineFromText function

    - by swatit
    Hi everybody, I am using IBM Informix for my school project as part of the "Informix on-campus" activity conducted by IBM. I am using the spatial datablade to store spatial data. My spatial data table looks like this:

        CREATE TABLE xmlTest (
            row_id  integer NOT NULL,
            pre     integer,
            post    integer,
            parent  integer,
            tagname varchar(40,1),
            point   ST_POINT
        );

    Then I inserted the spatial data into the table. Now I am trying to select the points lying under a given polygon, where the coordinates of the polygon are dynamic, i.e. the coordinates are decided in the SELECT query. My query is like:

        SELECT v2.*
        FROM   xmlTest v1, xmlTest v2
        WHERE  ST_Contains(ST_Polygon(ST_LineFromText('linestring (0 0, 22000 0,22000 22000,0 22000,0 0)',5)),ST_Point(v1.pre,v1.post,5))
        AND    v1.tagname LIKE 'n1'
        AND    ST_Contains(ST_Polygon(ST_LineFromText('linestring (0 0,v1.pre 0,v1.pre v1.post,0 v1.post,0 0 )',5)),ST_Point(v2.pre,v2.post,5))
        AND    v2.tagname LIKE 'n2'

    However, it gives me the error "(USE31) - Too few points for geometry type in ST_LineFromText." for the second ST_LineFromText call. I checked the number of points in the linestring, but could not find the source of the error. I think the linestring text has to be a fixed literal, so column references like v1.pre inside the string are not substituted. Is that right? Then what is the alternate way to specify my polygon coordinates dynamically? Or is there any mistake in my query? I hope my question is clear. I appreciate your help!
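    For what it's worth, the workaround I am considering (untested - an assumption that the || string-concatenation operator can build the linestring text at run time, with explicit casts of the integer columns) looks like this:

        SELECT v2.*
        FROM   xmlTest v1, xmlTest v2
        WHERE  ST_Contains(ST_Polygon(ST_LineFromText('linestring (0 0, 22000 0,22000 22000,0 22000,0 0)',5)),ST_Point(v1.pre,v1.post,5))
        AND    v1.tagname LIKE 'n1'
        AND    ST_Contains(
                 ST_Polygon(ST_LineFromText(
                   'linestring (0 0,' || v1.pre::varchar(12) || ' 0,'
                     || v1.pre::varchar(12) || ' ' || v1.post::varchar(12)
                     || ',0 ' || v1.post::varchar(12) || ',0 0)', 5)),
                 ST_Point(v2.pre, v2.post, 5))
        AND    v2.tagname LIKE 'n2'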


  • How to access application.xml file of an EAR deployed to IBM WebSphere 6.1

    - by Matt1776
    I am deploying an EAR file to IBM WebSphere 6.1 and I want to be able to access the EAR application name, which is stored in the deployment descriptor under 'display-name'. Looking through Stack Overflow posts on related subjects, I've been able to gather that this is possible via the Java MBean API or IBM's WAS API. The problem is I cannot find a place where these APIs are summarized, i.e. I cannot figure out which one to begin looking at. I could hardcode the WAS install location and find the file by looking in the 'installedApps' directory, but this is not dynamic. Does anyone have any experience working with these APIs? Any other way to dynamically find the deployed EAR's display name? EDIT - I should add that the reason I would like this information is to dynamically load our properties files, which are named by the convention "EARAppName.properties" - so you see there IS a reasonable 'rationale' behind desiring this information in my application. EDIT 2 - I should also note that this app will always be deployed on a WAS, but in case it isn't, a generic non-proprietary solution would be preferred, though not necessary at this moment. EDIT 3 - What I want to accomplish: is there a way to dynamically find the deployed EAR's display name from within the application code?
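    As a starting point, here is the kind of MBean lookup I have been piecing together (a sketch only - I have not verified this on 6.1, and the "name" key property carrying the application name is my assumption):

        import java.util.Set;
        import javax.management.ObjectName;
        import com.ibm.websphere.management.AdminService;
        import com.ibm.websphere.management.AdminServiceFactory;

        public class AppNameLookup {
            public static void main(String[] args) throws Exception {
                // Connect to the in-process admin service of this WAS server
                AdminService admin = AdminServiceFactory.getAdminService();
                // Query all Application MBeans registered on this server
                ObjectName pattern = new ObjectName("WebSphere:type=Application,*");
                Set apps = admin.queryNames(pattern, null);
                for (Object o : apps) {
                    ObjectName app = (ObjectName) o;
                    System.out.println(app.getKeyProperty("name"));
                }
            }
        }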

