Search Results

Search found 24517 results on 981 pages for 'visual studo 2008'.

Page 275/981 | < Previous Page | 271 272 273 274 275 276 277 278 279 280 281 282  | Next Page >

  • Faster way to transfer table data from linked server

    - by spender
    After much fiddling, I've managed to install the right ODBC driver and have successfully created a linked server on SQL Server 2008, by which I can access my PostgreSQL db from SQL Server. I'm copying all of the data from some of the tables in the PgSQL DB into SQL Server using merge statements that take the following form:

        with mbRemote as
        (
            select *
            from openquery(someLinkedDb, 'select * from someTable')
        )
        merge into someTable mbLocal
        using mbRemote
        on mbLocal.id = mbRemote.id
        when matched
            /*edit*/
            /*clause below really speeds things up when many rows are unchanged*/
            /*can you think of anything else?*/
            and not (mbLocal.field1 = mbRemote.field1
                and mbLocal.field2 = mbRemote.field2
                and mbLocal.field3 = mbRemote.field3
                and mbLocal.field4 = mbRemote.field4)
            /*end edit*/
        then update set
            mbLocal.field1 = mbRemote.field1,
            mbLocal.field2 = mbRemote.field2,
            mbLocal.field3 = mbRemote.field3,
            mbLocal.field4 = mbRemote.field4
        when not matched then insert
            (id, field1, field2, field3, field4)
            values (mbRemote.id, mbRemote.field1, mbRemote.field2, mbRemote.field3, mbRemote.field4)
        when not matched by source then delete;

    After this statement completes, the local (SQL Server) copy is fully in sync with the remote (PgSQL server). A few questions about this approach: is it sane? It strikes me that an update will be run over all fields in local rows that haven't necessarily changed. The only prerequisite is that the local and remote id fields match. Is there a more fine-grained approach/a way of constraining the merge statement to only update rows that have actually changed?
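
    A hedged aside, not from the post itself: one commonly used way to make the "has anything changed" test both shorter and NULL-safe is to compare the row pairs with EXCEPT inside EXISTS instead of chaining column-by-column comparisons. A minimal sketch using the same placeholder names as above:

        merge into someTable mbLocal
        using (select * from openquery(someLinkedDb, 'select * from someTable')) mbRemote
        on mbLocal.id = mbRemote.id
        when matched
            -- EXCEPT returns a row only when at least one column differs, and it treats NULLs as equal
            and exists (select mbRemote.field1, mbRemote.field2, mbRemote.field3, mbRemote.field4
                        except
                        select mbLocal.field1, mbLocal.field2, mbLocal.field3, mbLocal.field4)
        then update set
            mbLocal.field1 = mbRemote.field1,
            mbLocal.field2 = mbRemote.field2,
            mbLocal.field3 = mbRemote.field3,
            mbLocal.field4 = mbRemote.field4
        when not matched then insert (id, field1, field2, field3, field4)
            values (mbRemote.id, mbRemote.field1, mbRemote.field2, mbRemote.field3, mbRemote.field4)
        when not matched by source then delete;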

    Read the article

  • Generating the query plan takes 5 minutes, the query itself runs in milliseconds. What's up?

    - by TheImirOfGroofunkistan
    I have a fairly complex (or ugly depending on how you look at it) stored procedure running on SQL Server 2008. It bases a lot of the logic on a view that has a pk table and a fk table. The fk table is left joined to the pk table slightly more than 30 times (the fk table has a poor design - it uses name value pairs that I need to flatten out. Unfortunately, it's 3rd party and I cannot change it). Anyway, it had been running fine for weeks until I periodically noticed a run that would take 3-5 minutes. It turns out that this is the time it takes to generate the query plan. Once the query plan exists and is cached, the stored procedure itself runs very efficiently. Things run smoothly until there is a reason to regenerate and cache the query plan again. Has anyone seen this? Why does it take so long to generate the plan? Are there ways to make it come up with a plan faster?
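
    Not part of the question, but two query hints sometimes tried in this situation are OPTION (KEEPFIXED PLAN), which discourages recompiles triggered by statistics changes, and OPTION (FORCE ORDER), which shrinks the optimizer's join-reordering search space - often the expensive part when a view joins the same table 30+ times. This is only a sketch; the view and parameter names are placeholders, and whether either hint is appropriate depends on the actual query:

        -- hypothetical shape of the statement inside the stored procedure;
        -- @SomeKey stands in for one of its parameters
        select v.*
        from dbo.FlattenedNameValueView as v
        where v.SomeKey = @SomeKey
        option (keepfixed plan, force order);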

    Read the article

  • How to prevent duplicate records being inserted with SqlBulkCopy when there is no primary key

    - by kscott
    I receive a daily XML file that contains thousands of records, each being a business transaction that I need to store in an internal database for use in reporting and billing. I was under the impression that each day's file contained only unique records, but have discovered that my definition of unique is not exactly the same as the provider's.

    The current application that imports this data is a C# .NET 3.5 console application; it uses SqlBulkCopy into a MS SQL Server 2008 database table where the columns exactly match the structure of the XML records. Each record has just over 100 fields, and there is no natural key in the data; or rather, the fields I can come up with that make sense as a composite key end up also having to allow nulls. Currently the table has several indexes, but no primary key. Basically the entire row needs to be unique: if one field is different, it is valid enough to be inserted.

    I looked at creating an MD5 hash of the entire row, inserting that into the database and using a constraint to prevent SqlBulkCopy from inserting the row, but I don't see how to get the MD5 hash into the bulk copy operation, and I'm not sure whether the whole operation would fail and roll back if any one record failed, or whether it would continue. The file contains a very large number of records, so going row by row through the XML, querying the database for a record that matches all fields, and then deciding whether to insert is really the only alternative I can see. I was just hoping not to have to rewrite the application entirely, and the bulk copy operation is so much faster.

    Does anyone know of a way to use SqlBulkCopy while preventing duplicate rows, without a primary key? Or any suggestion for a different way to do this?
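
    Not from the question, but a sketch of one approach that keeps SqlBulkCopy: bulk load into an identically shaped staging table, then copy across only the rows that do not already exist, letting EXCEPT do the whole-row, NULL-safe comparison. The table names here are placeholders:

        -- point SqlBulkCopy at dbo.Transactions_Staging instead of dbo.Transactions
        truncate table dbo.Transactions_Staging;

        -- ... run the bulk copy from the application ...

        -- then move over only the rows not already present;
        -- EXCEPT compares every column and treats NULLs as equal
        insert into dbo.Transactions
        select s.* from dbo.Transactions_Staging as s
        except
        select t.* from dbo.Transactions as t;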

    Read the article

  • ssh tunneling with visualsvn

    - by DeveloperChris
    I have been asked to set up VisualSVN for Visual Studio 2008. Due to firewall restrictions and server configuration, I need to use SSH tunneling. My problem is this: the local machine needs to connect to a gateway machine via SSH and then connect on to the Subversion server, so

        Local machine ---{ssh}--- gateway ---{ssh}--- subversion server

    I am not exactly sure of the correct process to do this. It appears that I must start an ssh process using plink to open a local port and forward it to the remote Subversion server, e.g.:

        plink user@gateway -L 22:192.168.1.1:22

    Then, when VisualSVN starts, it uses TortoisePlink to make the actual connection through to the Subversion server using svn+ssh://username@localhost:22/myrepo

    This seems very, very clunky: firstly it needs several steps to set up the connection, secondly I need plink running, which leaves a command prompt on the desktop (clutter = yuck), and lastly I need to use two different programs that do the same thing (plink + TortoisePlink). The problem is that TortoisePlink doesn't run in the background. As soon as I connect to the SSH gateway and enter the password, it closes again, so I can't use it to create the initial connection. If I use plink instead of TortoisePlink in VisualSVN, then I never get prompted for the password, so it just hangs with an open command prompt and no password request.

    Is there a way to set up VisualSVN so that everything happens in one command line? I have searched high and low for a suitable and clean method to tunnel from VisualSVN to the remote server and have found very little. It all either assumes one hop (not two like mine) or it glosses over all the hard bits.

    DC

    Read the article

  • Select the latest record for each category linked available on an object

    - by Simpleton
    I have a tblMachineReports with the columns Status (varchar), LogDate (datetime), Category (varchar), and MachineID (int). I want to retrieve the latest status update from each category for every machine, in effect getting a snapshot of the latest statuses of all the machines, unique to their MachineID. The table data would look like:

        Category - Status - MachineID - LogDate
        cata - status1 - 001 - date1
        cata - status2 - 002 - date2
        catb - status3 - 001 - date2
        catc - status2 - 002 - date4
        cata - status3 - 001 - date5
        catc - status1 - 001 - date6
        catb - status2 - 001 - date7
        cata - status2 - 002 - date8
        catb - status2 - 002 - date9
        catc - status2 - 001 - date10

    Restated, I have multiple machines reporting on multiple statuses in this tblMachineReports. All the rows are created through inserts, so there will obviously be duplicate entries for machines as new statuses come in. None of the columns can be predicted, so I can't do any = 'some hard coded string' comparisons in any part of the select statement. For the sample table I provided, the desired results would look like:

        Category - Status - MachineID - LogDate
        catc - status2 - 002 - date4
        cata - status3 - 001 - date5
        catb - status2 - 001 - date7
        cata - status2 - 002 - date8
        catb - status2 - 002 - date9
        catc - status2 - 001 - date10

    What would the select statement look like to achieve this, getting the latest status for each category on each machine, using MS SQL Server 2008? I have tried different combinations of subqueries combined with aggregate MAX(LogDate)s, along with joins, group bys, distincts, and what-not, but have yet to find a working solution.
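
    Not part of the question, but the usual SQL Server 2008 shape of this answer is ROW_NUMBER() partitioned by machine and category, newest log date first, keeping row 1 of each group. A minimal sketch against the table described above:

        with Ranked as
        (
            select Category, Status, MachineID, LogDate,
                   row_number() over (partition by MachineID, Category
                                      order by LogDate desc) as rn
            from tblMachineReports
        )
        select Category, Status, MachineID, LogDate
        from Ranked
        where rn = 1          -- the latest row per (MachineID, Category)
        order by LogDate;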

    Read the article

  • Why does text from Assembly.GetManifestResourceStream() start with three junk characters?

    - by flipdoubt
    I have a SQL file added to my VS.NET 2008 project as an embedded resource. Whenever I use the following code to read the file's content, the string returned always starts with three junk characters and then the text I expect. I assume this has something to do with the Encoding.Default I am using, but that is just a guess. Why does this text keep showing up? Should I just trim off the first three characters, or is there a more informed approach?

        public string GetUpdateRestoreSchemaScript()
        {
            var type = GetType();
            var a = Assembly.GetAssembly(type);
            var script = "UpdateRestoreSchema.sql";
            var resourceName = String.Concat(type.Namespace, ".", script);
            using(Stream stream = a.GetManifestResourceStream(resourceName))
            {
                byte[] buffer = new byte[stream.Length];
                stream.Read(buffer, 0, buffer.Length);
                // UPDATE: Should be Encoding.UTF8
                return Encoding.Default.GetString(buffer);
            }
        }

    Update: I now know that my code works as expected if I simply change the last line to return a UTF-8 encoded string. It will always be true for this embedded file, but will it always be true? Is there a way to test any buffer to determine its encoding?

    Read the article

  • How do I eliminate Error 3002?

    - by Andrew
    Say I have the following table definitions in SQL Server 2008:

        CREATE TABLE Person
            (PersonId INT IDENTITY NOT NULL PRIMARY KEY,
             Name VARCHAR(50) NOT NULL,
             ManyMoreIrrelevantColumns VARCHAR(MAX) NOT NULL)

        CREATE TABLE Model
            (ModelId INT IDENTITY NOT NULL PRIMARY KEY,
             ModelName VARCHAR(50) NOT NULL,
             Description VARCHAR(200) NULL)

        CREATE TABLE ModelScore
            (ModelId INT NOT NULL REFERENCES Model (ModelId),
             Score INT NOT NULL,
             Definition VARCHAR(100) NULL,
             PRIMARY KEY (ModelId, Score))

        CREATE TABLE PersonModelScore
            (PersonId INT NOT NULL REFERENCES Person (PersonId),
             ModelId INT NOT NULL,
             Score INT NOT NULL,
             PRIMARY KEY (PersonId, ModelId),
             FOREIGN KEY (ModelId, Score) REFERENCES ModelScore (ModelId, Score))

    The idea here is that each Person may have only one ModelScore per Model, but each Person may have a score for any number of defined Models. As far as I can tell, this SQL should enforce these constraints naturally. The ModelScore has a particular "meaning," which is contained in the Definition. Nothing earth-shattering there.

    Now, I try translating this into Entity Framework using the designer. After updating the model from the database and doing some editing, I have a Person object, a Model object, and a ModelScore object. PersonModelScore, being a join table, is not an object but rather is included as an association with some other name (let's say ModelScorePersonAssociation). The mapping details for the association are as follows:

        - Association
            - Maps to PersonModelScore
            - ModelScore
                ModelId : Int32 <=> ModelId : int
                Score : Int32 <=> Score : int
            - Person
                PersonId : Int32 <=> PersonId : int

    On the right-hand side, the ModelId and PersonId values have primary key symbols, but the Score value does not. Upon compilation, I get:

        Error 3002: Problem in Mapping Fragment starting at line 5190:
        Potential runtime violation of table PersonModelScore's keys (PersonModelScore.ModelId, PersonModelScore.PersonId):
        Columns (PersonModelScore.PersonId, PersonModelScore.ModelId) are mapped to EntitySet ModelScorePersonAssociation's
        properties (ModelScorePersonAssociation.Person.PersonId, ModelScorePersonAssociation.ModelScore.ModelId) on the
        conceptual side but they do not form the EntitySet's key properties (ModelScorePersonAssociation.ModelScore.ModelId,
        ModelScorePersonAssociation.ModelScore.Score, ModelScorePersonAssociation.Person.PersonId).

    What have I done wrong in the designer or otherwise, and how can I fix the error? Many thanks!

    Read the article

  • How to open a help webpage after installation is complete

    - by Kapil
    I have built an installer for my Windows app using the Visual Studio 2008 IDE. I also use some custom actions to do a few extra things at installation/uninstallation time. What I also want is that when the installation is completed, the installer launches a help webpage or a getting-started page so users know how to go ahead with the app. I am doing the following for this:

        private void CustomInstaller_Committed(object sender, InstallEventArgs e)
        {
            System.Windows.Forms.LinkLabel lbl = new System.Windows.Forms.LinkLabel();
            lbl.Links.Remove(lbl.Links[0]);
            lbl.Links.Add(0, lbl.Text.Length, "http://www.mywebsite.com/help.aspx");
            ProcessStartInfo pInfo = new ProcessStartInfo(lbl.Links[0].LinkData.ToString());
            Process.Start(pInfo);
        }

    Also, I set the event handler as such:

        this.Committed += new InstallEventHandler(CustomInstaller_Committed);

    This is launching the webpage, but not at the right point in the flow where I would have wanted it. It launches the IE browser even before the user dismisses the installation window (i.e. clicks 'Close' in the Installation Completed dialog box). I want the webpage to open only when the user finally dismisses the installation. Any ideas how I can achieve this?

    Thanks, Kapil

    Read the article

  • Pass table as parameter to SQLCLR TV-UDF

    - by Skeolan
    We have a third-party DLL that can operate on a DataTable of source information and generate some useful values, and we're trying to hook it up through SQLCLR to be callable as a table-valued UDF in SQL Server 2008.

    Taking the concept here one step further, I would like to program a CLR table-valued function that operates on a table of source data from the DB. I'm pretty sure I understand what needs to happen on the T-SQL side of things; but what should the method signature look like in the .NET (C#) code? What would be the parameter datatype for "table data from SQL Server?" e.g.:

        /* Setup */
        CREATE TYPE InTableType AS TABLE (LocationName VARCHAR(50), Lat FLOAT, Lon FLOAT)
        GO
        CREATE TYPE OutTableType AS TABLE (LocationName VARCHAR(50), NeighborName VARCHAR(50), Distance FLOAT)
        GO
        CREATE ASSEMBLY myCLRAssembly FROM 'D:\assemblies\myCLR_UDFs.dll'
            WITH PERMISSION_SET = EXTERNAL_ACCESS
        GO
        CREATE FUNCTION GetDistances(@locations InTableType) RETURNS OutTableType
            AS EXTERNAL NAME myCLRAssembly.GeoDistance.SQLCLRInitMethod
        GO

        /* Execution */
        DECLARE @myTable InTableType
        INSERT INTO @myTable(LocationName, Lat, Lon) VALUES('aaa', -50.0, -20.0)
        INSERT INTO @myTable(LocationName, Lat, Lon) VALUES('bbb', -20.0, -50.0)
        SELECT * FROM @myTable

        DECLARE @myResult OutTableType
        INSERT INTO @myResult
        MyCLRTVFunction @myTable --returns a table result calculated using the input

    The lat/lon - distance thing is a silly example that should of course be better handled entirely in SQL; but I hope it illustrates the general intent of table-in - table-out through a table-valued UDF tied to a SQLCLR assembly. I am not certain this is possible; what would the SQLCLRInitMethod method signature look like in the C#?

        public class GeoDistance
        {
            [SqlFunction(FillRowMethodName = "FillRow")]
            public static IEnumerable SQLCLRInitMethod(<appropriateType> myInputData)
            {
                //...
            }

            public static void FillRow(...)
            {
                //...
            }
        }

    If it's not possible, I know I can use a "context connection=true" SQL connection within the C# code to have the CLR component query for the necessary data given the relevant keys; but that's sensitive to changes in the DB schema, so I hope to just have SQL bundle up all the source data and pass it to the function.

    Bonus question - assuming this works at all, would it also work with more than one input table?

    Read the article

  • What does SQL Server's BACKUPIO wait type mean?

    - by solublefish
    I'm using Sql Server 2008 ("R1"), with some maintenance plans that back up my databases to a network share. Some of my backup jobs show long waits of type "BACKUPIO". Of course it seems like this is an I/O subsystem limitation, but I'm skeptical. Perfmon stats for I/O on the production (source) server are well within normal trends for that server. The destination server shows a sustained 7MB/s write rate, which seems incredibly low, even for a slow disk. The network link is gigabit ethernet and nowhere near saturated. The few docs I've turned up about BACKUPIO indicate that it's not specifically a wait on I/O, surprisingly enough. This MSFT doc says it's abnormal unless you're using a tape drive, which I'm not. But it doesn't say (or I don't understand) exactly what resource is missing. http://www.docstoc.com/docs/24580659/Performance-Tuning-in-SQL-Server-2005 And this piece says it's not related to I/O performance at all. http://www.informit.com/articles/article.aspx?p=686168&seqNum=5 "Note that BACKUPIO and IO_AUDIT_MUTEX are not related to IO performance." Anyway, does anyone know what BACKUPIO actually means and/or what I can do to diagnose or eliminate it?

    Read the article

  • Limit the number of rows returned on the server side (forced limit)

    - by evolve
    So we have a piece of software which has a poorly written SQL statement that is causing every row from a table to be returned. There are several million rows in the table, so this is causing serious memory issues and crashes on our clients' machines. The vendor is in the process of creating a patch for the issue; however, it is still a few weeks out. In the meantime we were attempting to figure out a method of limiting the number of results returned on the server side, just as a temporary fix. I have no real hope of there being a solution - I've looked around and don't really see any way of doing this - but I'm hoping someone might have an idea. Thank you in advance.

    EDIT: I forgot an important piece of information: we have no access to the source code, so we cannot change this on the client side where the SQL statement is formed. There is no real server-side component; the client just accesses the database directly. Any solution would basically require a procedure, trigger, or some sort of SQL Server 2008 setting/command.
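
    Not something the post proposes, but a sketch of one temporary server-side workaround when the client selects straight from a table by name: rename the table and put a view with the original name in front of it that only exposes the newest N rows. All object and column names below are placeholders, and anything else that reads from or writes to the table would need to be tested against this first:

        -- give the real table a new name, then shadow it with a view
        exec sp_rename 'dbo.BigTable', 'BigTable_Full';
        go
        create view dbo.BigTable
        as
            select top (100000) *          -- cap on rows the client can pull back
            from dbo.BigTable_Full
            order by SomeDateColumn desc;  -- "newest first"; pick whichever column makes sense
        go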

    Read the article

  • DataTableReader is invalid for current DataTable 'TempTable'

    - by Sk93
    Hi,

    I'm getting the following error whenever my code creates a DataTableReader from a valid DataTable object: "DataTableReader is invalid for current DataTable 'TempTable'." The thing is, if I reboot my machine it works fine for an undetermined amount of time, then dies with the above. The code that throws this error could have been working fine for hours and then: bang, you get this error. It's not limited to one line either; it's every single location that a DataTableReader is used. Also, this error does NOT occur on the production web server - ever.

    This is an example of one of the lines where it falls over (screenshots omitted). If I step over that line, I get the error above. However, if I run the same call in the immediate window, I get no problems. Same goes if I actually use that line in the code.

    This has been driving me nuts for the best part of a week, and I've failed to find anything on Google that could help (as I'm pretty positive this isn't a coding issue). Some technical info:

        DEV Box:
            Vista 32bit (with all current windows updates)
            Visual Studio 2008 v9.0.30729.1 SP
            dotNet Framework 3.5 SP1

        SQL Server:
            Microsoft SQL Server 2005 Standard Edition - 9.00.4035.00 (X64)
            Windows 2003 64bit (with all current windows updates)

        Web Server:
            Windows 2003 64bit (with all current windows updates)

    Any help, ideas, or advice would be greatly appreciated!

    Cheers, Ian

    Read the article

  • SqlException: User does not have permission to perform this action.

    - by Oskar Kjellin
    I have been using my website (ASP.NET MVC) in Visual Studio, but now I want to host it on my server. I published from Visual Studio onto the network share to be used. The server is running Windows Home Server, IIS 6 and SQL Server 2008 R2 (Express).

    In Microsoft SQL Server Management Studio, I've attached the database and made sure that the user IUSR_SERVER is owner of the db. I also made sure that the user Network Service has access. The web site is configured in IIS to run anonymously as IUSR_SERVER. I have granted write and read access to IUSR_SERVER as well as Network Service and made sure that nothing is read only. The web.config has this connection string:

        <connectionStrings>
            <remove name="ApplicationServices" />
            <add name="ApplicationServices"
                 connectionString="Data Source=.\SQLExpress;Integrated Security=True;Initial Catalog=MyDatebase"
                 providerName="System.Data.SqlClient" />
        </connectionStrings>

    However, I cannot browse my web site. I only get this error:

        Server Error in '/' Application.
        User does not have permission to perform this action.

        Description: An unhandled exception occurred during the execution of the current web request. Please review the
        stack trace for more information about the error and where it originated in the code.

        Exception Details: System.Data.SqlClient.SqlException: User does not have permission to perform this action.

        Source Error: An unhandled exception was generated during the execution of the current web request. Information
        regarding the origin and location of the exception can be identified using the exception stack trace below.

        Stack Trace:
        [SqlException (0x80131904): User does not have permission to perform this action.]
           System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) +4846887
           System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) +194

    Feels like I've tried everything. Would be very grateful for your aid in this.
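
    An aside that is not from the question: with Integrated Security, the identity IIS runs the site under must exist as a SQL Server login and as a user in the database. The sketch below shows the usual shape of that T-SQL; which account IIS actually presents here (IUSR_SERVER vs. NT AUTHORITY\NETWORK SERVICE) and the machine name are assumptions that would need to be confirmed:

        use MyDatebase;
        go
        -- create a server login for the Windows account the site runs as (hypothetical account name)
        create login [SERVER\IUSR_SERVER] from windows;
        -- map it to a database user and grant read/write roles
        create user [SERVER\IUSR_SERVER] for login [SERVER\IUSR_SERVER];
        exec sp_addrolemember N'db_datareader', N'SERVER\IUSR_SERVER';
        exec sp_addrolemember N'db_datawriter', N'SERVER\IUSR_SERVER';
        go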

    Read the article

  • SSIS package not picking up configuration files correctly

    - by blntechie
    We have recently migrated some 30 DTS packages in SQL Server 2000 to SSIS packages in SQL Server 2008. We created the packages in such a way that all environment-related variables and other required information are picked up from configuration files, for ease of maintaining the packages across different environments like Dev, QA and Prod.

    After setting up all the packages with the config files, when we tested the packages from Business Intelligence Development Studio, they worked fine and picked up the values from the configuration file. When we changed the values in the config files to Dev or another environment, they correctly picked up the new values and executed. We tried this for 2 different environments and the packages worked fine, so we deployed to Prod and everything was working fine there too.

    Yesterday, I had to make a functional change to one package, so I made the change (it is just a change to a parameter in a SQL procedure execution task and not related to any variables) and tested it in BIDS with 2 environments, and it worked fine. As the change was not related to any environment change, we deployed only the updated package (not the associated config file) manually in Prod (i.e. without the use of a manifest). The config file which was used by the package previously, and which was working fine in Prod, remained unaltered. But when the package was executed, it was pointing to QA, so I believe the package didn't read from the config file. One reason may be that it is still using the last executed values, which usually remain in the .dtsx file (this can be checked by opening the file in a text editor). But normally, when a package is executed, those values are overwritten from the config file; I guess that is not happening here.

    What are the possible reasons for this? We have tested switching between test environments extensively and it does not show this behavior, but we have encountered it in Prod twice now. Has anyone else experienced this, and how did you resolve it?

    Read the article

  • Correct escaping of delimited identifers in SQL Server without using QUOTENAME

    - by Ross Bradbury
    Is there anything else that the code must do to sanitize identifiers (table, view, column) other than to wrap them in double quotation marks and "double up" any double quotation marks present in the identifier name? References would be appreciated.

    I have inherited a code base that has a custom object-relational mapping (ORM) system. SQL cannot be written in the application, but the ORM must still eventually generate the SQL to send to the SQL Server. All identifiers are quoted with double quotation marks:

        string QuoteName(string identifier)
        {
            return "\"" + identifier.Replace("\"", "\"\"") + "\"";
        }

    If I were building this dynamic SQL in SQL, I would use the built-in SQL Server QUOTENAME function:

        declare @identifier nvarchar(128);
        set @identifier = N'Client"; DROP TABLE [dbo].Client; --';

        declare @delimitedIdentifier nvarchar(258);
        set @delimitedIdentifier = QUOTENAME(@identifier, '"');

        print @delimitedIdentifier;
        -- "Client""; DROP TABLE [dbo].Client; --"

    I have not found any definitive documentation about how to escape quoted identifiers in SQL Server. I have found Delimited Identifiers (Database Engine), and I also saw this stackoverflow question about sanitizing. If the ORM had to call the QUOTENAME function just to quote the identifiers, that would be a lot of traffic to SQL Server that should not be needed.

    The ORM seems to be pretty well thought out with regards to SQL injection. It is in C# and predates the nHibernate port, Entity Framework, etc. All user input is sent using ADO.NET SqlParameter objects; it is just the identifier names that I am concerned about in this question. This needs to work on SQL Server 2005 and 2008.

    Read the article

  • LinqToSQL Double Insert issue

    - by Vaccano
    I have a WCF service with an object structure similar to this:

        public class MyClass
        {
            public List<MySubItem> SubItems { get; set; }
        }

        public class MySubItem
        {
            public List<MySubSubItem> SubSubItems { get; set; }
        }

        public class MySubSubItem
        {
            public string DataValue { get; set; }
        }

        public class MyClassDAL
        {
            public void InsertMyClass(MyClass myClass)
            {
                ctx.MyClasses.InsertOnSubmit(myClass);
                ctx.SubmitChanges();
            }
        }

    Sometimes my client will call in with a MyClass that submits only half of the values that it has in the SubSubItems list. Later it calls the insert with the rest of the list. The problem is that when it does this I get a primary key violation. The reason is that it is trying to insert the MySubItem again (because there are more items in the SubSubItems list owned by the same MySubItem object).

    How do I deal with this? Do I just call an update? Do I have to try to separate them out (updates from inserts)? SQL Server 2008 has really cool merge functionality. Is there some way to access that from LinqToSQL?

    Read the article

  • SQL Server - Multi-Column substring matching

    - by hamlin11
    One of my clients is hooked on multi-column substring matching. I understand that Contains and FreeText search for words (and at least in the case of Contains, word prefixes). However, based upon my understanding of this MSDN book, neither of these nor their variants are capable of searching substrings. I have used LIKE rather extensively (select * from A where A.B like '%substr%').

    Sample table A:

        ID | Col1     | Col2     | Col3     |
        -------------------------------------
        1  | oklahoma | colorado | Utah     |
        2  | arkansas | colorado | oklahoma |
        3  | florida  | michigan | florida  |
        -------------------------------------

    The following code will give us row 1 and row 2:

        select * from A
        where Col1 like '%klah%'
           or Col2 like '%klah%'
           or Col3 like '%klah%'

    This is rather ugly, probably slow, and I just don't like it very much, probably because the implementations that I'm dealing with have 10+ columns that need to be searched. The following may be a slight improvement as far as code readability goes, but as far as performance we're still in the same ball park:

        select * from A
        where (Col1 + ' ' + Col2 + ' ' + Col3) like '%klah%'

    I have thought about simply adding insert, update, and delete triggers that add the concatenated version of the above columns into a separate table that shadows this table.

    Sample Shadow_Table:

        ID | searchtext                  |
        ---------------------------------
        1  | oklahoma colorado Utah      |
        2  | arkansas colorado oklahoma  |
        3  | florida michigan florida    |
        ---------------------------------

    This would allow us to perform the following query to search for '%klah%':

        select * from Shadow_Table
        where searchtext like '%klah%'

    I really don't like having to remember that this shadow table exists and that I'm supposed to use it when I am performing multi-column substring matching, but it probably yields pretty quick reads at the expense of write time and storage space.

    My gut feeling tells me there is an existing solution built into SQL Server 2008. However, I don't seem to be able to find anything other than research papers on the subject. Any help would be appreciated.
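
    Not in the question, but a middle ground that avoids both triggers and a shadow table is a persisted computed column that concatenates the searchable columns, so the combined text lives in the same row and is maintained automatically. A leading-wildcard LIKE still cannot seek an index, so this mainly buys simplicity rather than speed; the column names are taken from the sample table, and NULL handling (a NULL column makes the whole concatenation NULL under default settings) would need attention:

        -- maintained automatically on insert/update, no triggers needed
        alter table A
            add SearchText as (Col1 + ' ' + Col2 + ' ' + Col3) persisted;

        select *
        from A
        where SearchText like '%klah%';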

    Read the article

  • Automatically add links to class source files under a specified directory of another project in Visu

    - by Binary255
    I want to share some class source files between two projects in Visual Studio 2008. I can't create a project for the common parts and reference it (see my comment if you are curious as to why). I've managed to share some source files, but it could be a lot neater. I've created a test solution called Commonality.

    The Solution Explorer of the Commonality solution, which contains projects One and Two (screenshot omitted).

    What I like: All class files under the Common folder of project One are automatically added to project Two by linking. It's mostly the same as if I had chosen Add / Existing Item... : Add As Link on each new class source file. It's clear that these files have been linked in; the shortcut arrow symbol marks each file icon.

    What I do not like: The file and folder tree structure under Common of project One isn't included - it's all flat. The linked source files are shown under the project root of project Two. It would look much less cluttered if they were located under Common like in project One.

    The file tree structure of the Commonality solution, which contains projects One and Two:

        $ tree /F /A
        Folder PATH listing for volume Cystem
        Volume serial number is 0713370 1337:F6A4
        C:.
        |   Commonality.sln
        |
        +---One
        |   |   One.cs
        |   |   One.csproj
        |   |
        |   +---bin
        |   |   \---Debug
        |   |           One.vshost.exe
        |   |           One.vshost.exe.manifest
        |   |
        |   +---Common
        |   |   |   Common.cs
        |   |   |   CommonTwo.cs
        |   |   |
        |   |   \---SubCommon
        |   |           CommonThree.cs
        |   |
        |   +---obj
        |   |   \---Debug
        |   |       +---Refactor
        |   |       \---TempPE
        |   \---Properties
        |           AssemblyInfo.cs
        |
        \---Two
            |   Two.cs
            |   Two.csproj
            |   Two.csproj.user
            |   Two.csproj~
            |
            +---bin
            |   \---Debug
            +---obj
            |   \---Debug
            |       +---Refactor
            |       \---TempPE
            \---Properties
                    AssemblyInfo.cs

    And the relevant part of project Two's project file Two.csproj:

        <ItemGroup>
          <Compile Include="..\One\Common\**\*.cs">
          </Compile>
          <Compile Include="Two.cs" />
          <Compile Include="Properties\AssemblyInfo.cs" />
        </ItemGroup>

    How do I address what I do not like, while keeping what I like?

    Read the article

  • VERY simple C program won't compile with VC 64

    - by Paperflyer
    Here is a very simple C program:

        #include <stdio.h>

        int main (int argc, char *argv[])
        {
            printf("sizeof(short) = %d\n",(int)sizeof(short));
            printf("sizeof(int) = %d\n",(int)sizeof(int));
            printf("sizeof(long) = %d\n",(int)sizeof(long));
            printf("sizeof(long long) = %d\n",(int)sizeof(long long));
            printf("sizeof(float) = %d\n",(int)sizeof(float));
            printf("sizeof(double) = %d\n",(int)sizeof(double));
            return 0;
        }

    While it compiles fine on Win32 (command line: cl main.c), it does not using the Win64 compiler ("c:\Program Files(x86)\Microsoft Visual Studio 9.0\VC\bin\amd64\cl.exe" main.c). Specifically, it says "error LNK2019: unresolved external symbol printf referenced in function main". As far as I understand this, it cannot link to printf, right?

    Obviously, I have Microsoft Visual C++ Compiler 2008 (Standard enu) x86 and x64 installed and am using the 64-bit flavor of Windows (7). What is the problem here?

    Read the article

  • Temporary debug releases and final application releases

    - by baron
    I have a quick question regarding debug and release builds in VS 2008. I have an app I've been working on - it's not yet complete, but the bulk of the functionality is there. So basically I'm trying to give a copy of it now to the person helping with documentation - just so they can have a play and get a feel for what I've made.

    Now the question is how to provide it to them. I was told to just copy the .exe out of the debug/bin folder and put that onto USB. But when testing, if I run this .exe anywhere else (outside of this folder) it crashes. I've now worked out why this is:

        var path = ConfigurationManager.AppSettings["PathToUse"];
        var files = Directory.GetFiles(path);

    throws a null reference, so the App.config file is not being used. If I copy that file in with the .exe, it works again.

    So actually my question is about the best way to manage this situation. What is the best way to provide a working copy to people, and is there a reference on preparing apps for release - so everything is packaged together and installed in a clean, structured folder hierarchy?

    Read the article

  • image analysis and 64bit OS

    - by picciopiccio
    I developed a C# application that makes use of the Cognex vision library (VPro). My application is developed with Visual Studio 2008 Pro on a 32-bit Windows PC with 3GB of RAM. During startup of the application I see that a large amount of memory is allocated. So far so good, but when I add more and more vision elaborations the memory allocation increases and part of the application (only the Cognex OCX) stops working well. The rest of the application still works (working threads, com on socket...). I did whatever I could to save memory, but when the allocated memory reaches about 700MB I begin to have the problems. A note in the documentation of the Cognex library says that /LARGEADDRESSAWARE is not supported.

    Anyway, I'm thinking about trying to migrate my app to Win64, but what do I have to do? Can I simply use a 64-bit processor and 64-bit Windows without recompiling my application - which would remain a 32-bit application - and still take advantage of 64 bits? Or should I recompile my application? If I don't need to recompile my application, can I link it with the 64-bit Cognex library? If I have to recompile my application, is it possible to cross-compile it so that my development suite stays on a 32-bit PC?

    Any help will be very much appreciated!! Thanks in advance

    Read the article

  • WCF Service instead of ASMX Web Service?

    - by wchrisjohnson
    I'm writing a SOAP server that will act as an endpoint for an external client. The external client expects SOAP 1.1. I'll be taking embedded business objects in the SOAP messages and passing them to an internal application, getting responses back and responding with SOAP messages to the external client. I did the traditional ASMX-based web services several years ago. Now, I've been exploring WCF services and wondering which is the best approach to take.

    1) Should WCF be considered a superset of ASMX web services?
    2) Is there any reason to still write new web services using ASMX instead of WCF?
    3) Does WCF provide better facilities for working with SOAP messages, as opposed to SOAP Extensions?
    4) Can I restrict communication to SOAP 1.1 using WCF, the way I can with a web.config change in ASMX?
    5) Does WCF have an easy way to log or review the requests that hit the service without resorting to something like SOAP extensions?

    Sorry my questions are not very specific; still trying to get a handle on what I need to know... Using VS2008, Windows Server 2008.

    Chris

    Read the article

  • Finding values from a table that are *not* in a grouping of another table and what group that value

    - by Bkins
    I hope I am not missing something very simple here. I have done a Google search (or several) and searched through stackoverflow. Here is the situation:

    For simplicity's sake, let's say I have a table called "PeoplesDocs", in a SQL Server 2008 DB, that holds a bunch of people and all the documents that they own. So one person can have several documents. I also have a table called "RequiredDocs" that simply holds all the documents that a person should have. Here is sort of what it looks like:

        PeoplesDocs:

        PersonID DocID
        -------- -----
        1        A
        1        B
        1        C
        1        D
        2        C
        2        D
        3        A
        3        B
        3        C

        RequiredDocs:

        DocID DocName
        ----- ---------
        A     DocumentA
        B     DocumentB
        C     DocumentC
        D     DocumentD

    How do I write a SQL query that returns some variation of:

        PersonID MissingDocs
        -------- -----------
        2        DocumentA
        2        DocumentB
        3        DocumentD

    I have tried, and most of my searching has pointed to, something like:

        SELECT DocID
        FROM DocsRequired
        WHERE NOT EXIST IN (SELECT DocID FROM PeoplesDocs)

    but obviously this will not return anything in this example because everyone has at least one of the documents. Also, if a person does not have any documents then there will be one record in the PeoplesDocs table with the DocID set to NULL.

    Thank you in advance for any help you can provide,
    Ben
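
    A sketch of the usual shape of this answer, which is not part of the question itself: pair every person with every required document via a cross join, then keep the pairs that have no matching row in PeoplesDocs. Deriving the list of people from PeoplesDocs is an assumption here; a dedicated People table would be better if one exists:

        select p.PersonID, r.DocName as MissingDocs
        from (select distinct PersonID from PeoplesDocs) as p   -- every known person
        cross join RequiredDocs as r                            -- paired with every required document
        where not exists (select 1
                          from PeoplesDocs as pd
                          where pd.PersonID = p.PersonID
                            and pd.DocID = r.DocID)             -- keep only pairs the person lacks
        order by p.PersonID, r.DocName;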

    Read the article

  • Static variable not initialized

    - by Simon Linder
    Hi all,

    I've got a strange problem with a static variable that is obviously not initialized as it should be. I have a huge project that runs on both Windows and Linux. As the Linux developer doesn't have this problem, I would suspect this is some kind of weird Visual Studio issue.

    Header file:

        class MyClass
        {
            // some other stuff here ...
        private:
            static AnotherClass* const Default_;
        };

    CPP file:

        AnotherClass* const Default_(new AnotherClass(""));

        MyClass(AnotherClass* const var)
        {
            assert(Default_);
            ...
        }

    The problem is that Default_ is always NULL. I also tried a breakpoint at the initialization of that variable, but I cannot catch it. There is a similar problem in another class.

    CPP file:

        std::string const MyClass::MyString_ ("someText");

        MyClass::MyClass()
        {
            assert(MyString_ != "");
            ...
        }

    In this case MyString_ is always empty, so again not initialized.

    Does anyone have an idea about that? Is this a Visual Studio settings problem?

    Cheers
    Simon

    Read the article
