Search Results

Search found 16890 results on 676 pages for '2008 archive'.

  • Visual Studio Question: When doing a compile/debug, is VS supposed to delete existing files in the bin

    - by Greg
    Hi, Q1 - When doing a compile/debug, is VS supposed to delete existing files in the bin\debug area? (This is VS2008.) If not, then: Q2 - My WinForms app checks for the existence of a sqlite.db3 file and creates it programmatically if it needs to. The behaviour I want is for the target Debug area to be cleared on each Compile/Debug, so that the program exercises the code that builds the database file. How would I organise this? Thanks. (A sketch of one approach follows below.)

    Read the article
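
    One hedged way to get the behaviour described above (a sketch only; the file name and location come from the question, the startup hook itself is an assumption):

        // Hypothetical startup hook in the WinForms app: under a Debug build,
        // remove any stale database so the creation code path runs every time.
        #if DEBUG
        var dbPath = System.IO.Path.Combine(
            AppDomain.CurrentDomain.BaseDirectory, "sqlite.db3");
        if (System.IO.File.Exists(dbPath))
            System.IO.File.Delete(dbPath);
        #endif

    An alternative is a pre-build event that deletes the file from the output directory, which keeps the cleanup out of the application code.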

  • VS2008 Windows Form Designer does not like my control.

    - by Thedric Walker
    I have a control that is created like so:

        public partial class MYControl : MyControlBase
        {
            public string InnerText
            {
                get { return textBox1.Text; }
                set { textBox1.Text = value; }
            }

            public MYControl()
            {
                InitializeComponent();
            }
        }

        partial class MYControl
        {
            /// <summary>
            /// Required designer variable.
            /// </summary>
            private System.ComponentModel.IContainer components = null;

            /// <summary>
            /// Clean up any resources being used.
            /// </summary>
            /// <param name="disposing">true if managed resources should be disposed; otherwise, false.</param>
            protected override void Dispose(bool disposing)
            {
                if (disposing && (components != null))
                {
                    components.Dispose();
                }
                base.Dispose(disposing);
            }

            #region Component Designer generated code

            /// <summary>
            /// Required method for Designer support - do not modify
            /// the contents of this method with the code editor.
            /// </summary>
            private void InitializeComponent()
            {
                this.textBox1 = new System.Windows.Forms.TextBox();
                this.listBox1 = new System.Windows.Forms.ListBox();
                this.label1 = new System.Windows.Forms.Label();
                this.SuspendLayout();
                //
                // textBox1
                //
                this.textBox1.Location = new System.Drawing.Point(28, 61);
                this.textBox1.Name = "textBox1";
                this.textBox1.Size = new System.Drawing.Size(100, 20);
                this.textBox1.TabIndex = 0;
                //
                // listBox1
                //
                this.listBox1.FormattingEnabled = true;
                this.listBox1.Location = new System.Drawing.Point(7, 106);
                this.listBox1.Name = "listBox1";
                this.listBox1.Size = new System.Drawing.Size(120, 95);
                this.listBox1.TabIndex = 1;
                //
                // label1
                //
                this.label1.AutoSize = true;
                this.label1.Location = new System.Drawing.Point(91, 42);
                this.label1.Name = "label1";
                this.label1.Size = new System.Drawing.Size(35, 13);
                this.label1.TabIndex = 2;
                this.label1.Text = "label1";
                //
                // MYControl
                //
                this.AutoScaleDimensions = new System.Drawing.SizeF(6F, 13F);
                this.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Font;
                this.Controls.Add(this.label1);
                this.Controls.Add(this.listBox1);
                this.Controls.Add(this.textBox1);
                this.Name = "MYControl";
                this.Size = new System.Drawing.Size(135, 214);
                this.ResumeLayout(false);
                this.PerformLayout();
            }

            #endregion

            private System.Windows.Forms.Label label1;
        }

    MyControlBase contains the definition for the ListBox and TextBox. Now when I try to view this control in the Form Designer it gives me these errors:

        The variable 'listBox1' is either undeclared or was never assigned.
        The variable 'textBox1' is either undeclared or was never assigned.

    This is obviously wrong as they are defined in MyControlBase with public access. Is there any way to massage Form Designer into allowing me to visually edit my control?

    Read the article

  • How to add generated files to multiple projects using CodeSmith tools

    - by Dugan
    I'm using CodeSmith on my current project and I'm trying to figure out an issue. For my CodeSmith project (.csp) I can select an option to have it automatically add all generated files to the current project (.csproj). But I want to be able to add the output to multiple projects (.csproj). Is there an option inside of CodeSmith to allow this? Or is there a good way to programmatically do that? Thanks.

    Read the article

  • VisualAssert Testing in C++, Loading a test fixture.

    - by C_Bevan
    Good day, I am learning testing in Visual Studio C++ and have followed several tutorials. I am trying to load a test fixture. I have tried putting the test .cpp file in many different places, but it is still not picked up when I click "Run Tests" or "Run Tests without debugging". In the tutorials I found, the fixtures seemed to load into the Test Explorer automatically, but in mine there is an icon with an X next to (PROJECTNAME).EXE, and when I hover over it I get "the process exited without registering with the agent... this is due to the model not containing any test fixtures". How can I load my tests into the Test Explorer, or register them with my project? I've tried right-clicking and choosing "Add Fixture...", but that just starts a new test file and I have the same problem. Does anybody know how I can solve this issue?

    Read the article

  • Nested INSERT EXEC workaround

    - by stackoverflowuser
    I have 2 stored procedures, usp_SP1 and usp_SP2. Both of them make use of insert into #tt exec sp_somesp. I wanted to create a 3rd stored procedure which will decide which stored proc to call, something like:

        create proc usp_Decision
        (
            @value int
        )
        as
        begin
            if (@value = 1)
                exec usp_SP1   -- this proc already has insert into #tt exec usp_somestoredproc
            else
                exec usp_SP2   -- this proc too has insert into #tt exec usp_somestoredproc
        end

    Later I realized I needed some structure defined for the return value from usp_Decision so that I can populate the SSRS dataset field. So here is what I tried: within usp_Decision I created a temp table and tried to do "insert into #tt exec usp_SP1". That didn't work out: error "insert exec cannot be nested". Within usp_Decision I then tried passing a table variable to each of the stored procs, updating the table inside the stored procs, and doing "select * from ". That didn't work out either; a table variable passed as a parameter cannot be modified within the stored proc. Please suggest what can be done. (A sketch of one workaround follows below.)

    Read the article
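
    One hedged workaround for the question above, relying on the fact that a temp table created in the outer procedure is visible to the procedures it calls (a sketch only; the #result columns are illustrative, and usp_SP1/usp_SP2 would need to be changed to insert into #result directly instead of using insert ... exec):

        create proc usp_Decision
        (
            @value int
        )
        as
        begin
            -- Created here, so it is visible inside usp_SP1 / usp_SP2.
            create table #result (Col1 int, Col2 varchar(50));

            if (@value = 1)
                exec usp_SP1;   -- rewritten to do: insert into #result select ...
            else
                exec usp_SP2;   -- rewritten the same way

            -- Fixed shape for the SSRS dataset.
            select * from #result;
        end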

  • Thoughts on using Alpha Five v10 with Codeless AJAX for building an AJAX database app in a short amo

    - by william Hunter
    I need to build an AJAX application against our MS SQL Server database for my company. The app has to have user permissions and reporting and is pretty complex, and I am really under the gun in terms of time: the company I work for needs the app for an important project launch. A colleague/friend of mine in a different company recommended that I look at a product from Alpha Software called Alpha Five v10 with Codeless AJAX. He has told me that he has used it extensively and that it saves him a "serious boat load of time", and he says that he has not run into limitations because you can also write your own JavaScript or wire in jQuery. Before I commit to Alpha Five v10, I would like to get some other opinions. Thanks. Norman Stern, Chicago

    Read the article

  • Trouble upgrading to .NET 4 with VS2008. What am I missing?

    - by Matt H.
    I downloaded the .NET 4 framework from Microsoft here: http://www.microsoft.com/downloads/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992&displaylang=en I installed it and rebooted. When I go to the compile options -- target framework -- .NET 4 isn't on the list. Is .NET 4 not compatible with VS2008? It would be nice if Microsoft stated that somewhere...

    Read the article

  • Adding Class instance as a new Row in DataGridView (c#)

    - by Amit Shah
    Hi all, I have a class, say:

        [Serializable]
        public class Answer
        {
            [DisplayName("ID")]
            public string ID { get; set; }

            [DisplayName("Value")]
            public string Value { get; set; }
        }

    and I have a DataGridView with columns bound to the above class. Instances of this Answer class are created dynamically as and when required. How do I update the DataGridView when each instance of the class is created? Is it possible to do something of this sort?

        dataGridView.Rows.Add(classInstance);

    Thanks in advance, Amit. (A binding sketch follows below.)

    Read the article
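
    A minimal sketch of one way to do this (assuming the grid's columns are already bound by DataPropertyName to "ID" and "Value"): bind the grid to a BindingList<Answer> once, then simply add each new instance to the list.

        using System.ComponentModel;        // BindingList<T>
        using System.Windows.Forms;

        // Inside the form:
        private readonly BindingList<Answer> answers = new BindingList<Answer>();

        private void BindGrid()
        {
            dataGridView.AutoGenerateColumns = false;   // keep the existing bound columns
            dataGridView.DataSource = answers;
        }

        private void OnAnswerCreated(Answer answer)
        {
            answers.Add(answer);                        // the grid adds the row automatically
        }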

  • Is there a way to turn this query to regular inline SQL

    - by Luke101
    I would like to turn this query into regular inline SQL without using stored procedures:

        declare @nod hierarchyid

        select @nod = DepartmentHierarchyNode
        from Organisation
        where DepartmentHierarchyNode = 0x6BDA

        select *
        from Organisation
        where @nod.IsDescendantOf(DepartmentHierarchyNode) = 1

    Is there a way to do it? (A single-statement sketch follows below.)

    Read the article
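
    A hedged sketch of one way to fold the variable away: self-join Organisation against the anchor row so the whole thing is a single statement (assuming the hierarchyid method can be called through the table alias, which is the usual way CLR type methods are invoked on columns).

        select o.*
        from Organisation as o
        join Organisation as anchor
            on anchor.DepartmentHierarchyNode.IsDescendantOf(o.DepartmentHierarchyNode) = 1
        where anchor.DepartmentHierarchyNode = 0x6BDA;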

  • Test multiple domains using ASP.NET development server

    - by Pete Lunenfeld
    I am developing a single web application that will dynamically change its content depending on which domain name is used to reach the site. Multiple domains will point to the same application. I wish to use the following code (or something close) to detect the domain name and perform the customizations:

        string theDomainName = Request.Url.Host;
        switch (theDomainName)
        {
            case "www.clientone.com":
                // do stuff
                break;
            case "www.clienttwo.com":
                // do other stuff
                break;
        }

    I would like to test this functionality using the ASP.NET development server. I created mappings in the local HOSTS file to map www.clientone.com to 127.0.0.1 and www.clienttwo.com to 127.0.0.1, then browsed to the application using www.clientone.com (etc.). When I try to test this code using the ASP.NET development server, the URL always says localhost; it does NOT capture the host entered in the browser, only localhost. Is there a way to test the URL detection functionality using the development server? Thanks. (A sketch of one workaround follows below.)

    Read the article
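
    One hedged workaround sketch for testing under the development server, which always reports localhost: allow the host to be overridden from configuration while testing, and fall back to the real request host otherwise. The "HostOverride" appSettings key is made up for this example, and the snippet assumes a reference to System.Configuration.

        // Hypothetical helper: prefer a configured override (useful under the
        // development server), otherwise use the real host from the request.
        string theDomainName =
            System.Configuration.ConfigurationManager.AppSettings["HostOverride"];
        if (string.IsNullOrEmpty(theDomainName))
            theDomainName = Request.Url.Host;

        switch (theDomainName)
        {
            case "www.clientone.com":
                // do stuff
                break;
            case "www.clienttwo.com":
                // do other stuff
                break;
        }

    Testing against full IIS, where the site can be bound to the real host names, avoids the need for the override entirely.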

  • Troubleshooting failover cluster problem in W2K8 / SQL05

    - by paulland
    I have an active/passive W2K8 (64) cluster pair, running SQL05 Standard. Shared storage is on an HP EVA SAN (FC). I recently expanded the filesystem on the active node for a database, adding a drive designation. The shared storage drives are designated as F:, I:, J:, L: and X:, with SQL filesystems on the first four and X: used as a backup destination. Last night, as part of a validation process (the passive node had been offline for maintenance), I moved the SQL instance to the other cluster node. The database in question immediately moved to Suspect status. Review of the system logs showed that the database would not load because the file "K:\SQLDATA\whatever.ndf" could not be found. (Note that we do not have a K: drive designation.) A review of the J: storage drive showed zero contents -- nothing -- and this is where "whatever.ndf" should have been. Hmm, I thought. Problem with the server. I'll just move SQL back to the other server and figure out what's wrong... Still no database. Suspect. Uh-oh. "Whatever.ndf" had gone into the bit bucket. I finally decided to just restore from the backup (which had been taken immediately before the validation test), so nothing was lost but a few hours of sleep. The questions: (1) Why did the passive node think the whatever.ndf files were supposed to go to drive K:, when this drive didn't exist as a resource on the active node? (2) How can I get the cluster nodes re-synced so that failover can be accomplished? I don't know that there wasn't a K: drive as a cluster resource at some time in the past, but I do know that this drive did not exist on the original cluster at the time of the resource move.

    Read the article

  • SSIS: "Failure inserting into the read-only column <ColumnName>"

    - by Cory
    I have an Excel source going into an OLE DB destination. I'm inserting data into a view that has an INSTEAD OF trigger that handles all inserts. When I try to execute the package I receive this error: "Failure inserting into the read-only column ColumnName". What can I do to let SSIS know that this view is safe to insert into, because there is an INSTEAD OF trigger that will handle the insert?

    EDIT (additional info): I have a flat file that is being inserted into a normalized database. My initial problem was how to take a flat file and insert that data into multiple tables while keeping track of all the primary/foreign key relationships. My solution was to create a VIEW that mimicked the structure of the flat file and then create an INSTEAD OF trigger on that view; in the INSTEAD OF trigger I handle the logic of maintaining all the relationships between the tables. My view looks something like this:

        CREATE VIEW ImportView
        AS
        SELECT
            CONVERT(varchar(100), NULL) AS CustomerName,
            CONVERT(varchar(100), NULL) AS Address1,
            CONVERT(varchar(100), NULL) AS Address2,
            CONVERT(varchar(100), NULL) AS City,
            CONVERT(char(2), NULL) AS State,
            CONVERT(varchar(250), NULL) AS ItemOrdered,
            CONVERT(int, NULL) AS QuantityOrdered
            ...

    I will never need to select from this view; I only use it to insert data into it from this flat file I receive. I need some way to tell SQL Server that the fields aren't really read-only, because there is an INSTEAD OF trigger on this view. (A trigger sketch follows below.)

    Read the article
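
    For context on the setup described above, a minimal sketch of the kind of INSTEAD OF trigger involved (the target tables, columns and key-matching logic are purely illustrative; the real trigger would resolve keys however the schema requires):

        CREATE TRIGGER tr_ImportView_Insert
        ON ImportView
        INSTEAD OF INSERT
        AS
        BEGIN
            SET NOCOUNT ON;

            -- Parents first, so their generated keys exist...
            INSERT INTO Customer (Name, Address1, Address2, City, State)
            SELECT i.CustomerName, i.Address1, i.Address2, i.City, i.State
            FROM inserted AS i;

            -- ...then the dependent rows, joined back to pick up those keys.
            INSERT INTO OrderLine (CustomerID, ItemOrdered, QuantityOrdered)
            SELECT c.CustomerID, i.ItemOrdered, i.QuantityOrdered
            FROM inserted AS i
            JOIN Customer AS c ON c.Name = i.CustomerName;
        END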

  • Alter Table Filestream runs even if IF statement is false

    - by Allison
    I am writing a script to update a database to add FILESTREAM capability. The script needs to be able to run multiple times without erroring. This is what I currently have:

        IF ((select count(*)
             from sys.columns a
             inner join sys.objects b on a.object_id = b.object_id
             inner join sys.default_constraints c on c.parent_object_id = a.object_id
             where a.name = 'evidence_data'
               and b.name = 'evidence'
               and c.name = 'DF__evidence_evidence_data') = 0)
        begin
            ALTER TABLE evidence SET ( FILESTREAM_ON = AnalysisFSGroup )
            ALTER TABLE evidence ALTER COLUMN id ADD ROWGUIDCOL;
        end
        GO

    The first time I run this against the database it works fine. The second time, when the IF statement should be false, it throws an error saying "Cannot add FILESTREAM filegroup or partition scheme since table 'evidence' has a FILESTREAM filegroup or partition scheme already." If I put a simple SELECT into the IF statement and take out the ALTER TABLE ... FILESTREAM_ON line, it functions correctly and does not run the body of the IF statement. So essentially it is always running the ALTER TABLE ... FILESTREAM_ON statement even if the IF statement is false. Any thoughts or suggestions would be great. Thanks. (A sketch of one workaround follows below.)

    Read the article
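
    A hedged workaround sketch: wrap the DDL in dynamic SQL so it is only compiled and executed when the branch is actually taken, rather than being validated along with the rest of the batch.

        IF ((select count(*)
             from sys.columns a
             inner join sys.objects b on a.object_id = b.object_id
             inner join sys.default_constraints c on c.parent_object_id = a.object_id
             where a.name = 'evidence_data'
               and b.name = 'evidence'
               and c.name = 'DF__evidence_evidence_data') = 0)
        begin
            EXEC sp_executesql
                N'ALTER TABLE evidence SET ( FILESTREAM_ON = AnalysisFSGroup );
                  ALTER TABLE evidence ALTER COLUMN id ADD ROWGUIDCOL;';
        end
        GO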

  • Why does SQL Server recommend creating an index when it already exists?

    - by Pierre-Alain Vigeant
    I ran a very basic query against one of our tables and noticed that the query processor in the execution plan is recommending that we create an index on a column. The query is:

        SELECT SUM(DATALENGTH(Data))
        FROM Item
        WHERE Namespace = 'http://some_url/some_namespace/'

    After running it, I get the following message:

        // The Query Processor estimates that implementing the following index could improve the query cost by 96.7211%.
        CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname,>]
        ON [dbo].[Item] ([Namespace])

    My problem is that I already have such an index on that column:

        CREATE NONCLUSTERED INDEX [IX_ItemNamespace] ON [dbo].[Item]
        (
            [Namespace] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]

    Why is SQL Server recommending that I create such an index when it already exists?

    Read the article

  • Determine if an Index has been used as a hint

    - by Joe Bloggs
    In SQL Server, there is the option to use query hints, e.g.:

        SELECT c.ContactID
        FROM Person.Contact c WITH (INDEX(AK_Contact_rowguid))

    I am in the process of getting rid of unused indexes and was wondering how I could go about determining if an index was used as a query hint. Does anyone have suggestions on how I could do this? Cheers, Joe. (A sketch of one approach follows below.)

    Read the article
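
    A hedged sketch of one approach: search the text of stored modules and of cached plans for the index name, which catches explicit hints in procedures, functions and views as well as in recently executed ad hoc SQL (it will not catch hints in SQL that never reaches this server or that has aged out of the plan cache).

        -- Hints hard-coded in stored procedures, functions, triggers, views:
        SELECT OBJECT_NAME(m.object_id) AS module_name
        FROM sys.sql_modules AS m
        WHERE m.definition LIKE '%AK_Contact_rowguid%';

        -- Hints appearing in statements currently in the plan cache:
        SELECT TOP (50) st.text
        FROM sys.dm_exec_cached_plans AS cp
        CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
        WHERE st.text LIKE '%AK_Contact_rowguid%';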

  • How to force the build to be out of date when a text file is modified?

    - by demoncodemonkey
    The Scenario

    My project has a post-build phase set up to run a batch file, which reads a text file "version.txt". The batch file uses the information in version.txt to inject the DLL with a version block using this tool. The version.txt is included in my project to make it easy to modify. It looks a bit like this:

        @set #Description="TankFace Utility Library"
        @set #FileVersion="0.1.2.0"
        @set #Comments=""

    Basically the batch file renames this file to version.bat, calls it, then renames it back to version.txt afterwards.

    The Problem

    When I modify version.txt (e.g. to increment the file version) and then press F7, the build is not seen as out-of-date, so the post-build step is not executed, so the DLL's version doesn't get updated. I really want to include the .txt file as an input to the build, but without anything actually trying to use it. If I #include the .txt file from a CPP file in the project, the compiler fails because it obviously doesn't understand what "@set" means. If I add /* ... */ comments around the @set commands, then the batch file has some syntax errors but eventually succeeds. But this is a poor solution I think. So... how would you do it?

    Read the article

  • Problem with TreeView

    - by Xaver
    I want to configure a TreeView so that when all child checkboxes of a parent are checked, the parent checkbox is checked, and when all of them are unchecked, the parent checkbox is unchecked. Does the TreeView class have a standard property for that? (A sketch of one approach follows below.)

    Read the article
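
    There is no built-in property for this, as far as I know, so a hedged sketch of the usual approach: handle AfterCheck (with CheckBoxes = true) and propagate the state down to children and up to parents, ignoring the events raised by the handler's own programmatic changes.

        // Wire up: treeView1.AfterCheck += treeView1_AfterCheck;
        private void treeView1_AfterCheck(object sender, TreeViewEventArgs e)
        {
            // Programmatic Checked changes raise AfterCheck with Action == Unknown;
            // skip them so this handler does not react to its own updates.
            if (e.Action == TreeViewAction.Unknown) return;

            SetChildren(e.Node, e.Node.Checked);

            // A parent is checked only when every one of its children is checked.
            for (TreeNode parent = e.Node.Parent; parent != null; parent = parent.Parent)
            {
                bool allChecked = true;
                foreach (TreeNode sibling in parent.Nodes)
                    if (!sibling.Checked) { allChecked = false; break; }
                parent.Checked = allChecked;
            }
        }

        private static void SetChildren(TreeNode node, bool isChecked)
        {
            foreach (TreeNode child in node.Nodes)
            {
                child.Checked = isChecked;
                SetChildren(child, isChecked);
            }
        }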

  • How to allow multiple users to manage an application running on a server?

    - by Mary-Chan
    I'm not sure if the title makes sense. Hard question to ask. I have an application running on a server under my network account, and it's scheduled to run daily. I can remote in with my user credentials and check on the application. What if I want more than one person to be able to remote in and check it? I can create a new account on the server, but it wouldn't have network rights and the application needs access to network folders. What would be the best approach? Thanks! :-) P.S. Feel free to edit the tags. I can't figure out what to pick.

    Read the article

  • Backing Up Transaction Logs to Tape?

    - by David Stein
    I'm about to put my database into the Full recovery model and start taking transaction log backups. I am taking a full nightly backup to another server, and later in the evening this file and many others are backed up to tape. My question is this: I will take hourly (or more frequent if necessary) t-log backups and store them on the other server as well. However, if my full backups are passing DBCC and integrity checks, do I need to put my t-logs on tape? If someone wants point-in-time recovery to yesterday at 2pm, I would need the previous full backup and the transaction logs. However, other than that case, if I know my full backups are good, is there value in keeping the previous day's transaction log backups?

    Read the article

  • Poor execution plans when using a filter and CONTAINSTABLE in a query

    - by Paul McLoughlin
    We have an interesting problem that I was hoping someone could help to shed some light on. At a high level, the problem is as below. The following query executes quickly (1 second):

        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1
            ON T1.[Key] = SA.UNIQUE_ID

    but if we add a filter to the query, then it takes approximately 2 minutes to return:

        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1
            ON T1.[Key] = SA.UNIQUE_ID
        WHERE SA.CHG_DATE < '19 Feb 2010'

    Looking at the execution plan for the two queries, I can see that in the second case there are two places where there are huge differences between the actual and estimated number of rows, these being:

    1) the FulltextMatch table-valued function, where the estimate is approx 22,000 rows and the actual is 29 million rows (which are then filtered down to 1670 rows before the join), and

    2) the index seek on the full-text index, where the estimate is 1 row and the actual is 13,000 rows.

    As a result of the estimates, the optimiser is choosing to use a nested loops join (since it assumes a small number of rows), hence the plan is inefficient. We can work around the problem by either (a) parameterising the query and adding an OPTION (OPTIMIZE FOR UNKNOWN) to the query or (b) forcing a HASH JOIN to be used. In both of these cases the query returns in sub 1 second and the estimates appear reasonable. My question really is 'why are the estimates being used in the poorly performing case so wildly inaccurate and what can be done to improve them'? Statistics are up to date on the indexes on the indexed view being used here. Any help greatly appreciated. (The two workarounds are sketched below.)

    Read the article
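
    For reference, a sketch of the two workarounds mentioned above (the date filter is as reconstructed in the question; adjust as needed):

        -- (b) Force a hash join via a query hint:
        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1
            ON T1.[Key] = SA.UNIQUE_ID
        WHERE SA.CHG_DATE < '19 Feb 2010'
        OPTION (HASH JOIN);

        -- (a) Or parameterise the date and keep the optimiser from basing the
        --     plan on the sniffed value:
        -- ... WHERE SA.CHG_DATE < @chgDate
        -- OPTION (OPTIMIZE FOR UNKNOWN);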

  • How can I solve an out of memory exception in a generic List?

    - by Phsika
    How can I solve an out of memory exception in a generic List when adding new values?

        foreach (DataColumn dc in dTable.Columns)
            foreach (DataRow dr in dTable.Rows)
                myScriptCellsCount.MyCellsCharactersCount.Add(dr[dc].ToString().Length);

    My base class:

        public class MyExcelSheetsCells
        {
            public List<int> MyCellsCharactersCount { get; set; }

            public MyExcelSheetsCells()
            {
                MyCellsCharactersCount = new List<int>();
            }
        }

    (A sketch of one mitigation follows below.)

    Read the article
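
    One hedged mitigation sketch: pre-size the list to the number of cells so it never has to grow. Each time a List<int> outgrows its internal array it allocates a new array of double the size and copies into it, and for very large tables that doubling spike (old and new arrays alive at once) is a common way to hit OutOfMemoryException even when the final list would fit.

        // Assumes the list is (re)created just before the scan.
        int cellCount = dTable.Rows.Count * dTable.Columns.Count;
        myScriptCellsCount.MyCellsCharactersCount = new List<int>(cellCount);

        foreach (DataColumn dc in dTable.Columns)
            foreach (DataRow dr in dTable.Rows)
                myScriptCellsCount.MyCellsCharactersCount.Add(dr[dc].ToString().Length);

    If even the pre-sized list is too large for the process, the counts have to be aggregated or streamed instead of all being kept in memory at once.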

  • Make Visual Studio show all compile errors!

    - by BDotA
    Visual Studio does not show all the compile errors at once. For example, one time it says I have two errors, and when I fix them, 102 more compile errors show up, and these new errors are not dependent on those two previous errors. How can we tell it to go through all the code and show all compile errors at once?

    Read the article
