Search Results

Search found 26384 results on 1056 pages for 'visual studio 2008 sp1'.

Page 320/1056

  • changing ASP.NET tag formatting

    - by Steven
    When I drag a Label control onto my document, I get the following markup: <asp:Label ID="Label1" runat="server" text="Label"></asp:Label> I prefer my markup to look like this instead: <asp:Label ID="Label1" runat="server" Text="Label" /> How can I get Visual Studio to emit this by default? I looked in Tools - Options - Text Editor, where you'd expect to find it, but I couldn't find anything relevant there.

    Read the article

  • Why does SQL Server recommend creating an index when it already exists?

    - by Pierre-Alain Vigeant
    I ran a very basic query against one of our tables and noticed in the execution plan that the query processor is recommending we create an index on a column. The query is:

        SELECT SUM(DATALENGTH(Data))
        FROM Item
        WHERE Namespace = 'http://some_url/some_namespace/'

    After running it, I get the following message:

        -- The Query Processor estimates that implementing the following index could improve the query cost by 96.7211%.
        CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname,>]
        ON [dbo].[Item] ([Namespace])

    My problem is that I already have such an index on that column:

        CREATE NONCLUSTERED INDEX [IX_ItemNamespace] ON [dbo].[Item]
        (
            [Namespace] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]

    Why is SQL Server recommending that I create this index when it already exists?
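
    Whatever the reason for the duplicate suggestion, one thing worth experimenting with for this particular query (offered as a sketch, not a confirmed fix) is a covering variant of the existing index, since SUM(DATALENGTH(Data)) has to fetch Data for every matching row; the index name below is made up:

        -- Hypothetical covering index: Namespace for the seek, Data carried along so no lookup is needed
        CREATE NONCLUSTERED INDEX IX_ItemNamespace_Covering
        ON dbo.Item ([Namespace])
        INCLUDE (Data);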

    Read the article

  • VS2008 - Find and Replace - Searches too many files.

    - by Pam Bullock
    I've used VS2008 a lot and have never had this problem. However, I started a new job and am using a new machine, and ever since I got here the VS Find feature has been acting strangely. I first noticed it when I did a Replace All scoped to "All Open Files": the project wouldn't build because values had actually been replaced in other files in the solution that were not open, and that didn't even open after I pressed Replace All. I've found that I can never use Replace All on this machine because I never know what it is going to do. Even if I just do a Find on "Current Document", once it's done with the document, instead of the "No more matches found" message it actually OPENS another random file from my solution where there is a match and keeps on going. It never seems to make any difference which "Look in" option I've chosen. My coworker has an install off the same disk and claims not to be experiencing this. We're in the middle of a stressful, huge project with a close deadline, so I know my boss won't let me do a reinstall. Has anyone else ever had this happen? Anyone know a fix? Thanks, Pam

    Read the article

  • Will more CPUs/cores help with VS.NET build times?

    - by LoveMeSomeCode
    I was wondering if anyone knew whether Visual Studio .NET has a parallel build process or not. I have a solution with lots of projects; every project has lots of markup/code, lots of types, etc. Just sitting there with IntelliSense on runs it up to about 700MB. But the build times are really slow and only seem to max out one of my two CPU cores. Does this mean the build process is single-threaded? My solution's build dependency chain isn't linear, so I don't see why it couldn't be building some of the projects in parallel. I remember Joel Spolsky blogging about his new SSD and how it didn't help with compile times, but he didn't mention which compiler he was using. We're using VS 2005. Anyone know how its compilation works? And is it any different/better in 2008/2010?

    Read the article

  • WHERE clause in a join vs. WHERE clause in a subquery

    - by Kanavi
    DDL:

        create table t
        (
            id int Identity(1,1),
            nam varchar(100)
        )

        create table t1
        (
            id int Identity(1,1),
            nam varchar(100)
        )

    DML:

        Insert into t (nam) values ('a')
        Insert into t (nam) values ('b')
        Insert into t (nam) values ('c')
        Insert into t (nam) values ('d')
        Insert into t (nam) values ('e')
        Insert into t (nam) values ('f')

        Insert into t1 (nam) values ('aa')
        Insert into t1 (nam) values ('bb')
        Insert into t1 (nam) values ('cc')
        Insert into t1 (nam) values ('dd')
        Insert into t1 (nam) values ('ee')
        Insert into t1 (nam) values ('ff')

    Query 1:

        Select t.*, t1.*
        From t t
        Inner join t1 t1 on t.id = t1.id
        Where t.id = 1

    Query 1 SQL Profiler result: Reads = 56, Duration = 4

    Query 2:

        Select T1.*, K.*
        from
        (
            Select id, nam
            from t
            Where id = 1
        ) K
        Inner Join t1 T1 on T1.id = K.id

    Query 2 SQL Profiler result: Reads = 262, Duration = 2

    You can also see my SQLFiddle. Which query should be used, and why?
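
    If it helps to compare the two shapes yourself beyond the Profiler numbers, a quick sketch (same tables as above) is to run both with statistics switched on and compare the logical reads and CPU reported in the Messages tab:

        SET STATISTICS IO ON;
        SET STATISTICS TIME ON;

        Select t.*, t1.*
        From t t
        Inner join t1 t1 on t.id = t1.id
        Where t.id = 1;

        Select T1.*, K.*
        from (Select id, nam from t Where id = 1) K
        Inner Join t1 T1 on T1.id = K.id;

        SET STATISTICS IO OFF;
        SET STATISTICS TIME OFF;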

    Read the article

  • SSRS Tablix Header Row Formatting

    - by tnriverfish
    I've got several reports, and they have been built with various formatting. Nothing huge; just the header row is different between them. I'd like to pick a standard and update the reports so they all look the same. This can be done on a textbox-by-textbox basis, setting the font, font color, font size and background color. It seems like I should be able to select more than one textbox and set the formatting on them all at once, but the "textbox properties" item is disabled when I've selected more than one. Any thoughts?

    Read the article

  • SSIS: "Failure inserting into the read-only column <ColumnName>"

    - by Cory
    I have an Excel source going into an OLE DB destination. I'm inserting data into a view that has an INSTEAD OF trigger that handles all inserts. When I try to execute the package I receive this error: "Failure inserting into the read-only column ColumnName". What can I do to let SSIS know that this view is safe to insert into, because there is an INSTEAD OF trigger that will handle the insert?

    EDIT (additional info): I have a flat file that is being inserted into a normalized database. My initial problem was how to take a flat file and insert that data into multiple tables while keeping track of all the primary/foreign key relationships. My solution was to create a VIEW that mimicked the structure of the flat file and then create an INSTEAD OF trigger on that view. In the INSTEAD OF trigger I handle the logic of maintaining all the relationships between tables. My view looks something like this:

        CREATE VIEW ImportView
        AS
        SELECT
            CONVERT(varchar(100), NULL) AS CustomerName,
            CONVERT(varchar(100), NULL) AS Address1,
            CONVERT(varchar(100), NULL) AS Address2,
            CONVERT(varchar(100), NULL) AS City,
            CONVERT(char(2), NULL) AS State,
            CONVERT(varchar(250), NULL) AS ItemOrdered,
            CONVERT(int, NULL) AS QuantityOrdered
            ...

    I will never need to select from this view; I only use it to insert data from the flat file I receive. I need some way to tell SQL Server that the fields aren't really read-only, because there is an INSTEAD OF trigger on this view.
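
    For reference, the trigger side of that setup looks roughly like the sketch below; the target table and its columns are hypothetical stand-ins, and a real trigger would also wire up the foreign keys between the normalized tables:

        CREATE TRIGGER tr_ImportView_Insert ON dbo.ImportView
        INSTEAD OF INSERT
        AS
        BEGIN
            SET NOCOUNT ON;

            -- 'inserted' holds the flat-file rows the package tried to push into the view
            INSERT INTO dbo.Customer (CustomerName, Address1, Address2, City, State)
            SELECT i.CustomerName, i.Address1, i.Address2, i.City, i.State
            FROM inserted AS i;

            -- ...then insert the order rows, joined back to the keys generated above
        END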

    Read the article

  • How can I get the VS2008 WinForms designer to render a form that inherits from an abstract base class?

    - by BeowulfOF
    Hi, I ran into a problem with inherited controls in WinForms and need some advice on it. I use a base class for items in a list (a self-made GUI list built from a panel) and some inherited controls, one for each type of data that can be added to the list. There was no problem with this, but I have now realized that it would be correct to make the base control an abstract class, since it has methods that are called from code inside the base control and need to be implemented in all inherited controls, but must not and cannot be implemented in the base class itself. When I mark the base control as abstract, the VS2008 designer refuses to load the window. Is there any way to get the designer to work with the base control made abstract?

    Read the article

  • Organizing Git repositories with common nested sub-modules

    - by André Caron
    I'm a big fan of Git sub-modules. I like being able to track a dependency along with its version, so that you can roll back to a previous version of your project and have the corresponding version of the dependency build safely and cleanly. Moreover, it's easier to release our libraries as open source projects, as the history of the libraries is separate from that of the applications that depend on them (and which are not going to be open sourced). I'm setting up the workflow for multiple projects at work, and I was wondering what it would be like if we took this approach to a bit of an extreme instead of having a single monolithic project. I quickly realized there is a potential can of worms in really using sub-modules. Suppose a pair of applications, studio and player, and dependent libraries core, graph and network, where the dependencies are as follows:

        core is standalone
        graph depends on core (sub-module at ./libs/core)
        network depends on core (sub-module at ./libs/core)
        studio depends on graph and network (sub-modules at ./libs/graph and ./libs/network)
        player depends on graph and network (sub-modules at ./libs/graph and ./libs/network)

    Suppose that we're using CMake and that each of these projects has unit tests and all the works. Each project (including studio and player) must be able to be compiled standalone to perform code metrics, unit testing, etc. The thing is, after a recursive git submodule fetch you get the following directory structure:

        studio/
        studio/libs/                    (sub-module depth: 1)
        studio/libs/graph/
        studio/libs/graph/libs/         (sub-module depth: 2)
        studio/libs/graph/libs/core/
        studio/libs/network/
        studio/libs/network/libs/       (sub-module depth: 2)
        studio/libs/network/libs/core/

    Notice that core is cloned twice in the studio project. Aside from wasting disk space, this gives me a build system problem, because I'm building core twice and I potentially get two different versions of core.

    Question: how do I organize sub-modules so that I get the versioned dependency and standalone build without getting multiple copies of common nested sub-modules?

    Possible solution: if the library dependency is somewhat of a suggestion (i.e. in a "known to work with version X" or "only version X is officially supported" fashion) and potential dependent applications or libraries are responsible for building with whatever version they like, then I could imagine the following scenario:

        Have the build system for graph and network tell them where to find core (e.g. via a compiler include path).
        Define two build targets, "standalone" and "dependency", where "standalone" is based on "dependency" and adds the include path pointing to the local core sub-module.
        Introduce an extra dependency: studio on core. Then studio builds core, sets the include path to its own copy of the core sub-module, and builds graph and network in "dependency" mode.

    The resulting folder structure looks like:

        studio/
        studio/libs/                    (sub-module depth: 1)
        studio/libs/core/
        studio/libs/graph/
        studio/libs/graph/libs/         (empty folder, sub-modules not fetched)
        studio/libs/network/
        studio/libs/network/libs/       (empty folder, sub-modules not fetched)

    However, this requires some build system magic (I'm pretty confident this can be done with CMake) and a bit of manual work on version updates (updating graph might also require updating core and network to get a compatible version of core in all projects). Any thoughts on this?

    Read the article

  • How do I mix functions in complex SSRS expressions?

    - by Boydski
    I'm writing a report against a data repository that has null values in some of the columns. The problem is that building expressions is as temperamental as a hormonal old lady, and it doesn't like my mixing of functions. Here's an expression I've written that does not work if the data in the field is null/nothing:

        =IIF(
            IsNumeric(Fields!ADataField.Value),
            RunningValue(
                IIF(
                    DatePart("q", Fields!CreatedOn.Value) = "2",
                    Fields!ADataField.Value,
                    0
                ),
                Sum,
                Nothing
            ),
            Sum(0)
        )

    (Pseudocode) "If the data is valid and was created in the second quarter of the year, add it to the overall sum; otherwise, add zero to the sum." Looks pretty straightforward, and the individual pieces of the expression work by themselves (IsNumeric(), DatePart(), etc.). But when I put them all together, the expression throws an error. I've attempted just about every permutation of what's shown above, all to no avail. Null values in Fields!ADataField.Value cause errors. Thoughts?

    Read the article

  • value of Identity column returning null when retrieved by value in dataGridView

    - by Raven Dreamer
    Greetings. I'm working on a windows forms application that interacts with a previously created SQL database. Specifically, I'm working on implementing an "UPDATE" query.

        for (int i = 0; i < dataGridView1.RowCount; i++)
        {
            string firstName = (string)dataGridView1.Rows[i].Cells[0].Value;
            string lastName = (string)dataGridView1.Rows[i].Cells[1].Value;
            string phoneNo = (string)dataGridView1.Rows[i].Cells[2].Value;
            short idVal = (short)dataGridView1.Rows[i].Cells[3].Value;

            this.contactInfoTableAdapter.UpdateQuery(firstName, lastName, phoneNo, idVal);
        }

    The dataGridView has 4 columns: First Name, Last Name, Phone Number, and ID (which was created as an identity column when I initially formed the table in SQL). When I try to run this code, the three strings are returned properly, but dataGridView1.Rows[i].Cells[3].Value is returning "null". I'm guessing this is because it was created as an identity column rather than a normal column. What's a better way to retrieve the relevant ID value for my UpdateQuery?

    Read the article

  • Weird behavior of std::vector

    - by Nima
    I have a class like this:

        class OBJ { ... };

        class A
        {
        public:
            vector<OBJ> v;
            A(int SZ) { v.clear(); v.reserve(SZ); }
        };

        A *a = new A(123);
        OBJ something;
        a->v.push_back(something);

    This is a simplified version of my code. The problem is that in debug mode it works perfectly, but in release mode it crashes at the "push_back" line (with all optimization flags OFF). I debugged it in release mode and the problem is in the constructor of A: the size of the vector is something really big with dummy values, and when I clear it, it doesn't change... Do you know why? Thanks,

    Read the article

  • Can you create a trigger on a field within a table?

    - by chris
    Is it possible to create a trigger on a field within a table being updated? So if I have:

        TableA
            Field1
            Field2
            ...

    I want to update a certain value when Field1 is changed. In this instance, I want to update Field2 when Field1 is updated, but I don't want that change to cause another trigger invocation, etc.
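
    Triggers in SQL Server are defined per table rather than per column, but a table-level trigger can test which columns were part of the update. A minimal sketch, assuming TableA has a key column named Id (hypothetical here) and that Field2 is derived from Field1:

        CREATE TRIGGER trg_TableA_SyncField2 ON dbo.TableA
        AFTER UPDATE
        AS
        BEGIN
            SET NOCOUNT ON;

            -- UPDATE(Field1) is true only when Field1 appeared in the SET list of the statement
            IF UPDATE(Field1)
            BEGIN
                UPDATE a
                SET a.Field2 = i.Field1                    -- replace with the real derivation
                FROM dbo.TableA AS a
                INNER JOIN inserted AS i ON i.Id = a.Id;   -- Id is the hypothetical key column
            END
        END

    By default the UPDATE inside the trigger does not fire the same trigger again; direct recursion only happens when the database option RECURSIVE_TRIGGERS is ON.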

    Read the article

  • Repeatedly execute a stored procedure

    - by manivineet
    I have a situation where I need to repeatedly execute a stored procedure. This procedure (spMAIN) has a cursor inside it which reads from a table, T1, with the following structure:

        ID   Status
        ---- --------
        1    New
        2    New
        3    success
        4    Error

    The cursor looks for all rows with a status of 'New'. While processing, if an iteration of the cursor encounters an error, another SP, say spError, needs to be called, the Status column in T1 needs to be updated to 'Error', and spMAIN needs to be called again, which repeats the process, looking for rows with 'New'. How do I do it? Also, while we are at it, what if an SP has other SPs inside it and any of those SPs raises an error? The same thing needs to be done: the T1 table needs to be updated ('Error') and spMAIN needs to be called again. Can you recommend something? Here's some code:

        ALTER PROC zzSpMain
        AS
        BEGIN
            DECLARE @id INT
            BEGIN TRY
                IF EXISTS ( SELECT * FROM dbo.zzTest WHERE istatus = 'new' )
                BEGIN
                    DECLARE c CURSOR FOR
                        SELECT id FROM zztest WHERE istatus = 'new'

                    OPEN c
                    FETCH NEXT FROM c INTO @id
                    WHILE @@FETCH_STATUS = 0
                    BEGIN
                        PRINT @id
                        IF @id = 2
                        BEGIN
                            UPDATE zztest SET istatus = 'error' WHERE id = @id
                            RAISERROR ( 'Error occured', 16, 1 )
                        END

                        UPDATE zztest SET istatus = 'processed' WHERE id = @id
                        FETCH NEXT FROM c INTO @id
                    END
                    CLOSE c
                    DEALLOCATE c
                END
            END TRY
            BEGIN CATCH
                EXEC zzSpError
            END CATCH
        END
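
    A minimal sketch of a driver for the re-run requirement, using the object names from the code above; it assumes zzSpMain keeps flagging each failing row as 'error' before returning, so the loop always makes progress:

        WHILE EXISTS (SELECT 1 FROM dbo.zzTest WHERE istatus = 'new')
        BEGIN
            -- zzSpMain catches its own errors (and calls zzSpError), so this simply re-runs it
            EXEC dbo.zzSpMain;
        END

    One caveat with the procedure as written: when the RAISERROR fires, the CLOSE/DEALLOCATE after the loop is skipped, so a later invocation may fail because a cursor named c still exists; declaring it with DECLARE c CURSOR LOCAL FOR ... (or closing and deallocating it in the CATCH block as well) avoids that.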

    Read the article

  • Debugging a release version of a DLL (with PDB file)

    - by Martin
    If I have a DLL (that was built in release mode) and the corresponding PDB file, is it possible to debug (step into) classes/methods contained in that DLL? If so, what are the required steps/configuration (e.g. where to put the PDB file)? Edit: I have the PDB file in the same place as the DLL (in the bin/debug directory of a simple console test application). I can see that the symbols for the DLL are loaded (in the Output window and also in the Modules window), but I still cannot step into the methods of that DLL. Could this be the result of compiler optimizations (as described by Michael in his answer)?

    Read the article

  • Poor execution plans when using a filter and CONTAINSTABLE in a query

    - by Paul McLoughlin
    We have an interesting problem that I was hoping someone could help shed some light on. At a high level the problem is as follows. The following query executes quickly (1 second):

        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1
            ON T1.[Key] = SA.UNIQUE_ID

    but if we add a filter to the query, it takes approximately 2 minutes to return:

        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1
            ON T1.[Key] = SA.UNIQUE_ID
        WHERE SA.CHG_DATE > '19 Feb 2010'

    Looking at the execution plan for the two queries, I can see that in the second case there are two places with huge differences between the actual and estimated number of rows: 1) for the FulltextMatch table-valued function, where the estimate is approx 22,000 rows and the actual is 29 million rows (which are then filtered down to 1,670 rows before the join), and 2) for the index seek on the full-text index, where the estimate is 1 row and the actual is 13,000 rows. As a result of the estimates, the optimiser chooses a nested loops join (since it assumes a small number of rows), hence the plan is inefficient. We can work around the problem by either (a) parameterising the query and adding OPTION (OPTIMIZE FOR UNKNOWN) to it, or (b) forcing a HASH JOIN to be used. In both of these cases the query returns in under 1 second and the estimates appear reasonable. My question really is: why are the estimates used in the poorly performing case so wildly inaccurate, and what can be done to improve them? Statistics are up to date on the indexes of the indexed view being used here. Any help greatly appreciated.
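
    For reference, the two workarounds described above look roughly like this (sketches only; the parameter name is illustrative):

        -- (a) parameterise the filter and stop the optimiser sniffing the value
        DECLARE @cutoff datetime;
        SET @cutoff = '19 Feb 2010';

        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1
            ON T1.[Key] = SA.UNIQUE_ID
        WHERE SA.CHG_DATE > @cutoff
        OPTION (OPTIMIZE FOR UNKNOWN);

        -- (b) force the hash join the optimiser refuses to pick on its own
        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        INNER HASH JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1
            ON T1.[Key] = SA.UNIQUE_ID
        WHERE SA.CHG_DATE > '19 Feb 2010';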

    Read the article

  • Tame this format with a cross tab?

    - by Damien Joe
    I have the result of a query in this form:

        EmpId  Profit  OrderID  CompanyName
        -----  ------  -------  --------------
        1      500 $   1        Acme Company
        1      200 $   1        Evolve Corp.
        2      400 $   1        Acme Company
        2      100 $   1        Evolve Corp.
        3      500 $   1        Acme Company
        3      500 $   1        Evolve Corp.

    Now the desired report format is:

        EmpId  OrderId  Acme's Profit  Evolve's Profit
        -----  -------  -------------  ---------------
        1      1        700 $          700 $
        2      1        500 $          500 $
        3      3        1000 $         1000 $

    I tried hard at the cross tab but I'm unable to figure out how to group the records. I tried moving CompanyName into the cross tab columns and EmpId into the rows, and tried a cross tab group, but the results are not as expected. My questions are: 1) Is this format achievable with a cross tab? 2) How do I group records by EmpId in my cross tab in such a way that the companies are moved horizontally?

    Read the article

  • Cannot Debug Unmanaged Dll from C#

    - by JustSmith
    I have a DLL that was written in C++ and is called from a C# application. The DLL is unmanaged code. If I copy the DLL and its .pdb file to the C# app's debug execution dir with a post-build event, I still can't hit any breakpoints I put in the DLL code. The breakpoint has a message attached to it saying "no symbols have been loaded for this document". What else do I have to do to get debugging working in the DLL source? I have "Tools - Options - Debugging - General - Enable Just My Code" disabled. The DLL is being compiled with "Runtime tracking and disable optimizations (/ASSEMBLYDEBUG)" and Generate Debug Info set to "Yes (/DEBUG)".

    Read the article

  • How to register System.DirectoryServices for use in SQL CLR User Functions?

    - by Saul Dolgin
    I am porting an old 32-bit COM component that was written in VB6 for the purpose of reading from and writing to an Active Directory server. The new solution will be in C# and will use SQL CLR user functions. The assembly that I am trying to deploy to SQL Server contains a reference to System.DirectoryServices. The project compiles without any errors, but I am unable to deploy the assembly to SQL Server because of the following error:

        Error: Assembly 'system.directoryservices, version=2.0.0.0, culture=neutral, publickeytoken=b03f5f7f11d50a3a.' was not found in the SQL catalog.

    What are the correct steps for registering System.DirectoryServices on SQL Server?
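
    A sketch of what that registration typically looks like; the database name and file paths below are illustrative, and because System.DirectoryServices is not on SQL Server's list of supported in-process libraries it has to be catalogued explicitly, before the user assembly that references it:

        -- Hypothetical names/paths; run in the target database
        ALTER DATABASE MyDatabase SET TRUSTWORTHY ON;   -- or sign the assemblies instead

        CREATE ASSEMBLY [System.DirectoryServices]
        FROM 'C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.DirectoryServices.dll'
        WITH PERMISSION_SET = UNSAFE;

        CREATE ASSEMBLY [MyAdClrFunctions]
        FROM 'C:\Deploy\MyAdClrFunctions.dll'
        WITH PERMISSION_SET = UNSAFE;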

    Read the article

  • SQL Server full-text index: CONTAINS returns empty

    - by max
    Hi, all: I have an issue with a full-text index; can anybody help me with this? I set up the full-text index like this:

        CREATE FULLTEXT INDEX ON dbo.Companies   -- my table name
        (
            CompanyName Language 0X0             -- column of my table
        )
        KEY INDEX IX_Companies_CompanyAlias ON QuestionsDB
        WITH CHANGE_TRACKING AUTO
        GO

    Then I use CONTAINS to find the matching rows:

        SELECT CompanyId, CompanyName
        FROM dbo.Companies
        WHERE CONTAINS(CompanyName, 'Micro')

    Everything seems to be going well, except that it just returns an empty result set, and I am sure there is a company with CompanyName "Microsoft" in the Companies table. Much appreciated if anybody can do me a favor on this.
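
    One likely explanation, offered as a sketch rather than a confirmed diagnosis: CONTAINS matches whole words by default, so 'Micro' will not match 'Microsoft'. A prefix search needs the starred form:

        -- Prefix term: note the inner double quotes and the trailing *
        SELECT CompanyId, CompanyName
        FROM dbo.Companies
        WHERE CONTAINS(CompanyName, '"Micro*"');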

    Read the article

  • What is the true nature of @ in Transact-SQL?

    - by Richard77
    Hello, I am reading some old ScottGu blog posts on LINQ to SQL, and am now doing the sproc part. I'd like to know the exact meaning of @variable. See this from ScottGu's blog:

        ALTER PROCEDURE dbo.GetCustomersDetails
        (
            @customerID nchar(5),
            @companyName nvarchar(40) output
        )
        AS
            SELECT @companyName = CompanyName
            FROM Customers
            WHERE CustomerID = @customerID

            SELECT *
            FROM Orders
            WHERE CustomerID = @customerID
            ORDER BY OrderID

    I'm kind of lost because, so far, I've thought of anything preceded by an @ as a placeholder for user input. But in the example above, it looks like @companyName is used as a regular variable, as in C# for instance (SELECT @companyName = ...), even though @companyName is not known yet. So what is the true nature of something preceded by an @ like this: a variable, or a simple placeholder to accommodate a user-entered value? Thanks for helping.
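
    A short sketch of the distinction, reusing the Northwind-style names from the example above: an @name is simply how T-SQL spells a variable; parameters are variables whose values are supplied by the caller, while DECLARE creates local ones you assign yourself:

        CREATE PROCEDURE dbo.DemoVariables
            @customerID nchar(5)                 -- parameter: a variable whose value the caller supplies
        AS
        BEGIN
            DECLARE @companyName nvarchar(40);   -- local variable: starts out NULL

            SELECT @companyName = CompanyName    -- assignment: copies the column value into the variable
            FROM dbo.Customers
            WHERE CustomerID = @customerID;

            SELECT @companyName AS CompanyName;  -- read it back like any other value
        END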

    Read the article

  • R6034: An application has made an attempt to load the C runtime library incorrectly.

    - by mattias
    I am getting this R6034 error when running a program that I just updated (and cleaned) from VS2003 to VS2008. To be more exact: "R6034: An application has made an attempt to load the C runtime library incorrectly." It seems to happen at almost the same place every time when running. I have no real idea why, but I tried some of the suggestions I found when googling this, for example adding the MSVC DLLs, but that didn't work. Any help on why this error occurs would be great. Thanks

    Read the article

  • SQL2k8 T-SQL: Output into XML file

    - by Nai
    I have two tables.

    Table name: Graph

        UID1  UID2
        ----  ----
        12    23
        12    32
        41    51
        32    41

    Table name: Profiles

        NodeID  UID  Name
        ------  ---  -----
        1       12   Robs
        2       23   Jones
        3       32   Lim
        4       41   Teo
        5       51   Zacks

    I want to get an XML file like this:

        <graph directed="0">
          <node id="1">
            <att name="UID" value="12"/>
            <att name="Name" value="Robs"/>
          </node>
          <node id="2">
            <att name="UID" value="23"/>
            <att name="Name" value="Jones"/>
          </node>
          <node id="3">
            <att name="UID" value="32"/>
            <att name="Name" value="Lim"/>
          </node>
          <node id="4">
            <att name="UID" value="41"/>
            <att name="Name" value="Teo"/>
          </node>
          <node id="5">
            <att name="UID" value="51"/>
            <att name="Name" value="Zacks"/>
          </node>
          <edge source="12" target="23" />
          <edge source="12" target="32" />
          <edge source="41" target="51" />
          <edge source="32" target="41" />
        </graph>

    Thanks very much!
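
    A minimal FOR XML PATH sketch that produces this shape, assuming the table and column names above (writing the result out to a file, e.g. via bcp or SSIS, is a separate step):

        SELECT
            0 AS '@directed',
            (SELECT p.NodeID AS '@id',
                    (SELECT a.name AS '@name', a.[value] AS '@value'
                     FROM (VALUES ('UID', CAST(p.UID AS varchar(20))),
                                  ('Name', p.Name)) AS a(name, [value])
                     FOR XML PATH('att'), TYPE)
             FROM dbo.Profiles AS p
             ORDER BY p.NodeID
             FOR XML PATH('node'), TYPE),
            (SELECT g.UID1 AS '@source', g.UID2 AS '@target'
             FROM dbo.Graph AS g
             FOR XML PATH('edge'), TYPE)
        FOR XML PATH('graph');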

    Read the article

  • Windows Forms: difference from a simple console app

    - by daemonfire300
    I have recently started to "port" my console projects to WinForms, but it seems I am failing badly at it. I am simply used to a console structure: my classes interact with each other depending on the input coming from the console. A simple flow: Input -> ProcessInput -> Execute -> Output -> wait for input. Now I have this big Form1.cs (etc.) and the "Application.Run(Form1);", but I really have no clue how my classes can interact with the form and create a flow like the one described above. I mean, I just have these "...._Click(object sender....)" handlers for each "item" inside the form, and I do not know where to place/start my flow/loop, or how my classes should interact with the form.

    Read the article
