Search Results

Search found 144001 results on 5761 pages for 'sql server data tools'.


  • Does your team develop its own supporting tools, or should this be outsourced?

    - by Pierre 303
    By supporting tools, I mean:
      - a reference data manager, like virus definitions for anti-virus software
      - test data generators
      - level builders for games
      - simulators or advanced mocking systems
    Should the team building the core product (in the cases above, the game or the anti-virus) be significantly involved in developing the supporting tools, or is this a task you would move outside the team so it can focus on the product? I don't have enough experience to weigh the pros & cons of each approach, so I'm hoping you can share personal experiences, or even studies or papers you have read on the subject.

    Read the article

  • Design practice for securing data inside Azure SQL

    - by Sid
    Update: I'm looking for a specific design practice as we try to build our own database encryption. Azure SQL doesn't support many of the encryption features found in SQL Server (table and column encryption). We need to store some sensitive information that must be encrypted, and we've rolled our own using AesCryptoServiceProvider to encrypt/decrypt data to/from the database. This solves the immediate issue (no cleartext in the db) but poses other problems, such as:
      - Key rotation: we have to roll our own code for this, walking through the db converting old ciphertext into new ciphertext.
      - Metadata mapping of which tables and which columns are encrypted: this is simple when it's just a couple of columns (send an email to all devs / document it), but it quickly gets out of hand.
    So, what is the best practice for doing application-level encryption against a database that doesn't support encryption? In particular, what is a good design to solve the two bullet points above? If you have specific schema additions I'd love the details ("have an NVARCHAR(MAX) column to store the cipher metadata as JSON", or a SQL script/commands). If someone would like to recommend a library, I'd be happy to stay away from DIY too. Before going too deep - I assume there isn't any way I can add encryption support to Azure by creating a stored procedure, right?
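
    One possible shape for the metadata-mapping piece is a single table that records which columns are encrypted and under which key version, so a rotation job knows what still needs re-encrypting. This is only a sketch of one design, not an established practice from the post; every table, column, and value name below is illustrative.

        -- Hypothetical mapping table: one row per encrypted column.
        CREATE TABLE dbo.EncryptionMap
        (
            TableName    SYSNAME      NOT NULL,
            ColumnName   SYSNAME      NOT NULL,
            KeyVersion   INT          NOT NULL,              -- bumped when a new key is rolled out
            Algorithm    VARCHAR(50)  NOT NULL DEFAULT 'AES-256-CBC',
            RotatedAtUtc DATETIME2    NULL,                  -- when this column last finished re-encryption
            CONSTRAINT PK_EncryptionMap PRIMARY KEY (TableName, ColumnName)
        );

        -- A rotation job can then ask which columns are still on an old key.
        SELECT TableName, ColumnName
        FROM dbo.EncryptionMap
        WHERE KeyVersion < 2;   -- 2 = the key version currently being rolled out

    Keeping the map in the database (rather than in emails or docs) also gives the application a single place to read at startup to decide which columns need decryption.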

    Read the article

  • How can I perform sentiment analysis on extracted text from online sources?

    - by aniket69
    I'm working on extracting the sentiment from YouTube comments, blogs, news content, Facebook wall posts, and Twitter feeds. I'm looking for an automated way to do this: the two third-party solutions I've found have been AlchemyAPI and RapidMiner. Are these the best way to approach this project, or should I be using something else? Is there a more efficient way to approach sentiment analysis? What techniques have worked for you in a project like this?

    Read the article

  • Installing SQL Server 2012 on Windows 2012 Server

    - by andyleonard
    In Want to Learn SQL Server 2012? I wrote about obtaining a fully-featured version of SQL Server 2012 (Developer Edition). This post represents one way to install SQL Server 2012 Developer Edition on a Hyper-V virtual machine running the Windows 2012 Server Standard Edition operating system. This is by no means exhaustive. My goal in writing this is to help you get a default instance of SQL Server 2012 up and running. I do not cover setting up the Hyper-V virtual machine. I begin after loading the...(read more)

    Read the article

  • Choices in Architecture, Design, Algorithms, Data Structures for effective RDF Reasoning and Querying in a Big Data Environment [on hold]

    - by user2891213
    As part of my academic project I would like to know what choices in architecture, design, algorithms, and data structures are needed in order to provide effective and efficient RDF reasoning and querying in a Big Data environment. Basically I want information on the points below:
      - What systems and software give an appropriate architecture?
      - What kind of API layer(s) would we need on top of the Big Data stores to make this possible?
      - The indexing structures we will need.
      - The appropriate algorithms, including algorithms for query planning across Big Data stores.
      - The performance analysis and cost models we will need to justify the design decisions made along the way.
    Can anyone please provide pointers? Thanks, David

    Read the article

  • Why would URLs submitted in Google Webmaster Tools drop to 0?

    - by ambient
    Why would URLs submitted in Google Webmaster Tools drop to 0? It's a small site, only about 20 pages. I submitted the XML sitemap, and for about a week it said 20 URLs were submitted. A day or so ago it showed about 17 of the pages indexed, but today it says not only that 0 are indexed but also that 0 have been submitted. I did a site: search on Google and the pages are clearly still indexed, so is this just an error in Google Webmaster Tools? Any help or thoughts would be appreciated. Thanks!

    Read the article

  • Faster way to transfer table data from linked server

    - by spender
    After much fiddling, I've managed to install the right ODBC driver and have successfully created a linked server on SQL Server 2008, by which I can access my PostgreSQL db from SQL Server. I'm copying all of the data from some of the tables in the PgSQL DB into SQL Server using merge statements that take the following form:

        with mbRemote as
        (
            select *
            from openquery(someLinkedDb,'select * from someTable')
        )
        merge into someTable mbLocal
        using mbRemote on mbLocal.id=mbRemote.id
        when matched
            /*edit*/
            /*clause below really speeds things up when many rows are unchanged*/
            /*can you think of anything else?*/
            and not (mbLocal.field1=mbRemote.field1
                and mbLocal.field2=mbRemote.field2
                and mbLocal.field3=mbRemote.field3
                and mbLocal.field4=mbRemote.field4)
            /*end edit*/
        then update set
            mbLocal.field1=mbRemote.field1,
            mbLocal.field2=mbRemote.field2,
            mbLocal.field3=mbRemote.field3,
            mbLocal.field4=mbRemote.field4
        when not matched then insert
        (
            id,
            field1,
            field2,
            field3,
            field4
        )
        values
        (
            mbRemote.id,
            mbRemote.field1,
            mbRemote.field2,
            mbRemote.field3,
            mbRemote.field4
        )
        when not matched by source then delete;

    After this statement completes, the local (SQL Server) copy is fully in sync with the remote (PgSQL) server. A few questions about this approach:
      - Is it sane?
      - It strikes me that an update will be run over all fields in local rows that haven't necessarily changed. The only prerequisite is that the local and remote id fields match. Is there a more fine-grained approach / a way of constraining the merge statement to only update rows that have actually changed?
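
    One common refinement, offered only as a sketch (it reuses the table and column names from the example above and is not from the original post), is to replace the long AND NOT (...) comparison with an EXISTS/EXCEPT test. EXCEPT treats NULLs as equal, so rows where a field changed to or from NULL are still detected, and only genuinely changed rows are updated:

        with mbRemote as
        (
            select * from openquery(someLinkedDb, 'select * from someTable')
        )
        merge into someTable as mbLocal
        using mbRemote on mbLocal.id = mbRemote.id
        when matched and exists
        (
            -- non-empty only when at least one field differs (NULL-safe)
            select mbRemote.field1, mbRemote.field2, mbRemote.field3, mbRemote.field4
            except
            select mbLocal.field1, mbLocal.field2, mbLocal.field3, mbLocal.field4
        )
        then update set
            mbLocal.field1 = mbRemote.field1,
            mbLocal.field2 = mbRemote.field2,
            mbLocal.field3 = mbRemote.field3,
            mbLocal.field4 = mbRemote.field4
        when not matched by target then insert (id, field1, field2, field3, field4)
            values (mbRemote.id, mbRemote.field1, mbRemote.field2, mbRemote.field3, mbRemote.field4)
        when not matched by source then delete;

    On the speed question, pulling the openquery result into a local staging table first (and merging from that) is often faster than merging directly over the linked server, since the remote fetch then happens once as a single streamed read.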

    Read the article

  • Error importing IIS logs into SQL Server 2008

    - by Vivek Chandraprakash
    I'm trying to import IIS logs into SQL Server 2008 and get the error below:

        Error 0xc02020a1: Data Flow Task 1: Data conversion failed. The data conversion for column "cs(User-Agent)" returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page.". (SQL Server Import and Export Wizard)

    I tried changing the column width of cs(User-Agent) to varchar(8000) and nvarchar(4000), with no luck. Please help. -Vivek

    Read the article

  • SQL Server deadlocks between select/update or multiple selects

    - by RobW
    All of the documentation on SQL Server deadlocks talks about the scenario in which operation 1 locks resource A then attempts to access resource B, while operation 2 locks resource B and attempts to access resource A. However, I quite often see deadlocks between a select and an update, or even between multiple selects, in some of our busy applications. I find some of the finer points of the deadlock trace output pretty impenetrable, but I would really just like to understand what can cause a deadlock between two single operations. Surely if a select has a read lock, the update should just wait before obtaining an exclusive lock, and vice versa? This is happening on SQL Server 2005, not that I think that makes a difference.
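
    For what it's worth, a single select and a single update can deadlock when they reach the same rows through different access paths in opposite order - for example, a select doing a key lookup from a nonclustered index while an update locks the clustered index first and then the nonclustered index. The sketch below shows two commonly used starting points (general SQL Server knowledge, not taken from this question; "YourDatabase" is a placeholder):

        -- 1) Write a detailed deadlock graph to the SQL Server error log so the
        --    conflicting statements and indexes can be identified.
        DBCC TRACEON (1222, -1);

        -- 2) Let readers use row versioning instead of shared locks, which removes
        --    most select-vs-update deadlocks (needs a moment with no other users
        --    connected to switch on).
        ALTER DATABASE YourDatabase SET READ_COMMITTED_SNAPSHOT ON;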

    Read the article

  • SQL Server: Profiling statements inside a User-Defined Function

    - by Craig Walker
    I'm trying to use SQL Server Profiler (2005) to track down some application performance problems. One of the calls being made is to a table-valued user-defined function. This function wraps a select that joins several tables together. In SQL Server Profiler, the call to the UDF is logged. However, the select that underlies the UDF isn't being logged at all. Because of this, I'm not getting useful data on which tables & indexes are being hit. I'd like to feed this info into the Database Tuning Advisor for some indexing advice. Is there any way (short of unwrapping the queries themselves) to log the tables called by UDFs in Profiler?
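
    If Profiler keeps hiding the statements, one workaround (a sketch built on standard 2005 DMVs, not from the original question; the function name is made up) is to pull the UDF's cached statements and their costs straight from the plan cache, which at least shows what the underlying select is doing:

        -- Statements cached for a specific table-valued function, heaviest readers first.
        SELECT TOP (20)
            qs.execution_count,
            qs.total_logical_reads,
            SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
                (CASE qs.statement_end_offset
                     WHEN -1 THEN DATALENGTH(st.text)
                     ELSE qs.statement_end_offset
                 END - qs.statement_start_offset) / 2 + 1) AS statement_text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        WHERE st.objectid = OBJECT_ID('dbo.MyTableValuedFunction')   -- hypothetical UDF name
        ORDER BY qs.total_logical_reads DESC;

    Another option in Profiler itself is to include the SP:StmtCompleted event class, which reports statement-level activity inside stored procedures and functions rather than just the outer call.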

    Read the article

  • How do I keep a table up to date across 4 DBs to be used in SQL replication filtering?

    - by Refracted Paladin
    I have a WinForms data-entry application that uses 4 separate databases. This is an occasionally connected app that uses merge replication (SQL 2005) to stay in sync, and that part is working just fine. The next hurdle I am trying to tackle is adding filters to my publications. Right now we are replicating 70 MB, compressed, to each of our 150 subscribers when, truthfully, they only need a tiny fraction of that. Using filters I am able to accomplish this (see the statements below), but I had to make a mapping table in order to do so. This mapping table consists of 3 columns: a PrimaryID (Guid), WorkerName (varchar), and ClientID (int). The problem is I need this table present in all FOUR databases in order to use it for the filter since, to my knowledge, views or cross-db queries are not allowed in a filter statement. What are my options? It seems like I would set it up to be maintained in 1 database and then use triggers to keep it updated in the other 3 databases. In order to be part of the filter I have to include that table in the replication set, so how do I flag it appropriately? Is there a better way altogether?

        SELECT <published_columns> FROM [dbo].[tblPlan] WHERE [ClientID] IN
            (select ClientID from [dbo].[tblWorkerOwnership] where WorkerID = SUSER_SNAME())

    Filters can be chained together; this next one sits below the first one, so it only pulls from the first's filtered set.

        SELECT <published_columns> FROM [dbo].[tblPlan] INNER JOIN
            [dbo].[tblHealthAssessmentReview] ON [tblPlan].[PlanID] = [tblHealthAssessmentReview].[PlanID]

    P.S. - I know how illogical the DB structure sounds. I didn't make it. I inherited it and was then told to make it a "disconnected app."
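
    A minimal sketch of the trigger-based idea mentioned above, assuming all four databases live on the same instance and the mapping table is authored in only one of them (database, table, and column names here are illustrative, loosely following the question):

        -- In the "owning" database: mirror every change into one of the other databases.
        -- Repeat the DELETE/INSERT pair for the remaining two databases.
        CREATE TRIGGER dbo.trg_tblWorkerOwnership_Sync
        ON dbo.tblWorkerOwnership
        AFTER INSERT, UPDATE, DELETE
        AS
        BEGIN
            SET NOCOUNT ON;

            -- Remove rows that were deleted or are about to be replaced...
            DELETE t
            FROM OtherDb1.dbo.tblWorkerOwnership AS t
            WHERE EXISTS (SELECT 1 FROM deleted AS d WHERE d.PrimaryID = t.PrimaryID);

            -- ...then copy in the current versions.
            INSERT INTO OtherDb1.dbo.tblWorkerOwnership (PrimaryID, WorkerName, ClientID)
            SELECT i.PrimaryID, i.WorkerName, i.ClientID
            FROM inserted AS i;
        END;

    The triggers only keep the copies identical so that each publication's filter sees the same data; whether the mapping table also needs to be published as its own article depends on whether subscribers query it directly.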

    Read the article

  • SubSonic 2.x now supports TVPs - SqlDbType.Structured / DataTables for SQL Server 2008

    - by ElHaix
    For those interested, I have now modified the SubSonic 2.x code to recognize and support DataTable parameter types. You can read more about SQL Server 2008 features here: http://download.microsoft.com/download/4/9/0/4906f81b-eb1a-49c3-bb05-ff3bcbb5d5ae/SQL%20SERVER%202008-RDBMS/T-SQL%20Enhancements%20with%20SQL%20Server%202008%20-%20Praveen%20Srivatsav.pdf

    What this enhancement now allows you to do is create a partial StoredProcedures.cs class with a method that overrides the stored procedure wrapper method. A bit about good form: my DAL has no direct table access, and my DB only has execute permissions for that user to my sprocs. As such, SubSonic only generates the AllStructs and StoredProcedures classes.

    The sproc:

        ALTER PROCEDURE [dbo].[testInsertToTestTVP]
            @UserDetails TestTVP READONLY,
            @Result INT OUT
        AS
        BEGIN
            SET NOCOUNT ON;
            SET @Result = -1
            --SET IDENTITY_INSERT [dbo].[tbl_TestTVP] ON
            INSERT INTO [dbo].[tbl_TestTVP]
            ( [GroupInsertID], [FirstName], [LastName] )
            SELECT [GroupInsertID], [FirstName], [LastName]
            FROM @UserDetails
            IF @@ROWCOUNT > 0
            BEGIN
                SET @Result = 1
                SELECT @Result
                RETURN @Result
            END
            --SET IDENTITY_INSERT [dbo].[tbl_TestTVP] OFF
        END

    The TVP:

        CREATE TYPE [dbo].[TestTVP] AS TABLE(
            [GroupInsertID] [varchar](50) NOT NULL,
            [FirstName] [varchar](50) NOT NULL,
            [LastName] [varchar](50) NOT NULL
        )
        GO

    When the auto-gen tool runs, it creates the following erroneous method:

        /// <summary>
        /// Creates an object wrapper for the testInsertToTestTVP Procedure
        /// </summary>
        public static StoredProcedure TestInsertToTestTVP(string UserDetails, int? Result)
        {
            SubSonic.StoredProcedure sp = new SubSonic.StoredProcedure("testInsertToTestTVP", DataService.GetInstance("MyDAL"), "dbo");
            sp.Command.AddParameter("@UserDetails", UserDetails, DbType.AnsiString, null, null);
            sp.Command.AddOutputParameter("@Result", DbType.Int32, 0, 10);
            return sp;
        }

    It sets UserDetails as type string. As it's good form to have two folders for a SubSonic DAL - Custom and Generated - I created a StoredProcedures.cs partial class in Custom that looks like this:

        /// <summary>
        /// Creates an object wrapper for the testInsertToTestTVP Procedure
        /// </summary>
        public static StoredProcedure TestInsertToTestTVP(DataTable dt, int? Result)
        {
            DataSet ds = new DataSet();
            SubSonic.StoredProcedure sp = new SubSonic.StoredProcedure("testInsertToTestTVP", DataService.GetInstance("MyDAL"), "dbo");
            // TODO: Modify the SubSonic code base in sp.Command.AddParameter to accept
            // a parameter type of System.Data.SqlDbType.Structured, as it currently only accepts
            // System.Data.DbType.
            //sp.Command.AddParameter("@UserDetails", dt, System.Data.SqlDbType.Structured, null, null);
            sp.Command.AddParameter("@UserDetails", dt, SqlDbType.Structured);
            sp.Command.AddOutputParameter("@Result", DbType.Int32, 0, 10);
            return sp;
        }

    As you can see, the method signature now contains a DataTable, and with my modification to the SubSonic framework this now works perfectly. I'm wondering if the SubSonic guys can modify the auto-gen to recognize a TVP in a sproc signature, so as to avoid having to re-write the wrapper? Does SubSonic 3.x support structured data types? Also, I'm sure many will be interested in using this code, so where can I upload the new code? Thanks.

    Read the article

  • SSMS cannot connect to default SQL Server instance without specifying the port number

    - by Oliver
    I have multiple SQL Server 2005 instances on a box. From SSMS on my desktop I can connect to that box's named instances with no problem. After some recent network configuration changes, when I want to connect to the default instance from SSMS on my desktop, I have to specify the port number. Before the network changes, I did not have to specify the port number of the default instance. If I remote to any other box (including the one in question), and use that box's SSMS to connect to that default instance, success. From my desktop, and only from my desktop, I have to specify the port number. Is it a SQL Server configuration that I've missed? Is it possible something in my PC's configuration is getting in the way? Where would I look, or what could I pass on to the network folks to help them resolve this? Any help is appreciated.
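
    When handing this to the network folks, it may help to report the exact address and port the default instance is actually listening on. The query below is general SQL Server knowledge rather than anything from the question; run it from a connection that works (for example, from SSMS on the server itself):

        -- Shows the server-side address, TCP port, and auth scheme of the current connection.
        SELECT local_net_address, local_tcp_port, auth_scheme
        FROM sys.dm_exec_connections
        WHERE session_id = @@SPID;

    A default instance is normally reached directly on TCP 1433 without involving the SQL Browser service, while named instances are resolved through SQL Browser on UDP 1434; so "named instances work, default instance needs an explicit port" often points at TCP 1433 being blocked or rerouted between that one desktop and the server.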

    Read the article

  • How do I insert a unique key into a table?

    - by Ben McCormack
    I want to insert data into a table where I don't know the next unique key that I need. I'm not sure how to format my INSERT query so that the value of the key field is 1 greater than the maximum value for the key in the table. I know this is a hack, but I'm just running a quick test against a database and need to make sure I always send over a unique key. Here's the SQL I have so far:

        INSERT INTO [CMS2000].[dbo].[aDataTypesTest]
                   ([KeyFld]
                   ,[Int1])
             VALUES
                   ((SELECT Max([KeyFld]) FROM [dbo].[aDataTypesTest]) + 1
                   ,1)

    which errors out with:

        Msg 1046, Level 15, State 1, Line 5
        Subqueries are not allowed in this context. Only scalar expressions are allowed.

    I'm not able to modify the underlying database table. What do I need to do to ensure a unique insert in my INSERT SQL code?
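
    One way around that particular error, shown only as a sketch using the table names from the question, is to switch from a VALUES list to INSERT ... SELECT, which is allowed to contain the aggregate; the locking hints serialize concurrent runs of this same statement, at the cost of briefly blocking the whole table:

        INSERT INTO [CMS2000].[dbo].[aDataTypesTest] ([KeyFld], [Int1])
        SELECT MAX([KeyFld]) + 1, 1
        FROM [CMS2000].[dbo].[aDataTypesTest] WITH (TABLOCKX, HOLDLOCK);

    This is still MAX()+1, so it remains a test-only hack; an identity column or a key-allocation table is the usual fix when the schema can be changed.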

    Read the article

  • How to optimize Core Data query for full text search

    - by dk
    Can I optimize a Core Data query when searching for matching words in a text? (This question also pertains to the wisdom of custom SQL versus Core Data on an iPhone.)

    I'm working on a new (iPhone) app that is a handheld reference tool for a scientific database. The main interface is a standard searchable table view and I want as-you-type response as the user types new words. Word matches must be prefixes of words in the text. The text is composed of 100,000s of words.

    In my prototype I coded SQL directly. I created a separate "words" table containing every word in the text fields of the main entity. I indexed words and performed searches along the lines of

        SELECT id, * FROM textTable
        JOIN (SELECT DISTINCT textTableId FROM words
              WHERE word BETWEEN 'foo' AND 'fooz') ON id=textTableId
        LIMIT 50

    This runs very fast. Using an IN would probably work just as well, i.e.

        SELECT * FROM textTable
        WHERE id IN (SELECT textTableId FROM words
                     WHERE word BETWEEN 'foo' AND 'fooz')
        LIMIT 50

    The LIMIT is crucial and allows me to display results quickly. I notify the user that there are too many to display if the limit is reached. This is kludgy.

    I've spent the last several days pondering the advantages of moving to Core Data, but I worry about the lack of control in the schema, indexing, and querying for an important query. Theoretically an NSPredicate of textField MATCHES '.*\bfoo.*' would just work, but I'm sure it will be slow. This sort of text search seems so common that I wonder what is the usual attack? Would you create a words entity as I did above and use a predicate of "word BEGINSWITH 'foo'"? Will that work as fast as my prototype? Will Core Data automatically create the right indexes? I can't find any explicit means of advising the persistent store about indexes.

    I see some nice advantages of Core Data in my iPhone app. The faulting and other memory considerations allow for efficient database retrievals for tableview queries without setting arbitrary limits. The object graph management allows me to easily traverse entities without writing lots of SQL. Migration features will be nice in the future. On the other hand, in a limited resource environment (iPhone) I worry that an automatically generated database will be bloated with metadata, unnecessary inverse relationships, inefficient attribute datatypes, etc. Should I dive in or proceed with caution?
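
    For reference, the index that makes those prefix-range lookups fast is not shown in the question; an assumed shape for the prototype's schema (SQLite, illustrative names) would be something like:

        CREATE TABLE words (
            textTableId INTEGER NOT NULL,   -- row id of the owning record in textTable
            word        TEXT    NOT NULL    -- one row per word occurrence
        );

        -- Covering index: the BETWEEN 'foo' AND 'fooz' range scan can be answered
        -- entirely from the index, including the textTableId it needs to return.
        CREATE INDEX idx_words_word ON words (word, textTableId);

    Whatever the storage layer, it is that ordered index on word that turns the prefix match into a short range scan, which is the property the Core Data version would also need to preserve.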

    Read the article

  • SQL Server replication - how to sync tables from internal database to read-only website database

    - by frankadelic
    I have an internal SQL Server 2005 database "ADMIN_DATA" that is used by admin users. We would like to sync three of the database tables in ADMIN_DATA out to another SQL Server 2005 database "WEB_DATA", which is used by a public web app. WEB_DATA is read-only - only SELECT statements are allowed, while ADMIN_DATA is updated all the time. What is the best solution? How can this be accomplished with minimal custom coding and/or changes to database tables? Notes: ADMIN_DATA and WEB_DATA are different physical machines and on different subnets. The syncing operation doesn't need to be instantaneous.

    Read the article

  • Debugging stored procedures without the SSMS 2008 debugger or the Visual Studio debugger (outputting variable values)

    - by Albert
    I have a SQL Server 2005 database with some Stored Procedures (SP) that I would like to debug...essentially I would just like to check variable values at certain points throughout the SP execution. I have SSMS 2008, but when I try to use the debugger, I get an error that it can't debug SQL Server 2005 databases. And I can't use the Visual Studio debugger (by stepping into the SP via Server Explorer) because Remote Debugging is blocked by our firewall, and I'm rightfully not allowed to touch the firewall. So my question is how can I check variable values at certain points in the SP execution? Is there some way to output those values somewhere, perhaps along with some text?
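
    Two low-tech options that work on SQL Server 2005 without any debugger (a general-purpose sketch, not from the original question; the table and variable names are made up):

        -- 1) Print progress and variable values to the Messages tab as the proc runs.
        --    RAISERROR ... WITH NOWAIT flushes immediately, whereas PRINT output is buffered.
        DECLARE @rows INT;
        SET @rows = 42;
        RAISERROR('After step 1, @rows = %d', 0, 1, @rows) WITH NOWAIT;

        -- 2) Persist values to a scratch table that can be queried after (or during) the run.
        CREATE TABLE dbo.DebugLog
        (
            LoggedAt DATETIME      NOT NULL DEFAULT GETDATE(),
            Step     VARCHAR(100)  NOT NULL,
            Details  NVARCHAR(MAX) NULL
        );

        INSERT INTO dbo.DebugLog (Step, Details)
        VALUES ('After step 1', 'rows = ' + CAST(@rows AS NVARCHAR(20)));

    Sprinkling either of these at the points of interest inside the stored procedure gives a poor man's trace without touching the firewall or upgrading the server.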

    Read the article
