Search Results

Search found 47492 results on 1900 pages for 'sql management studio'.

  • ASP.NET MVC 2: Updating a Linq-To-Sql Entity with an EntitySet

    - by Simon
    I have a LINQ to SQL entity which has an EntitySet. In my View I display the entity with its properties, plus an editable list for the child entities. The user can dynamically add and delete those child entities. The DefaultModelBinder works fine so far; it correctly binds the child entities.

    Now my problem is that I just can't get LINQ to SQL to delete the deleted child entities: it will happily add new ones, but not delete the removed ones. I have enabled cascade delete on the foreign key relationship, and the LINQ to SQL designer added the DeleteOnNull="true" attribute to the foreign key relationship. If I manually delete a child entity like this, the child record is deleted from the DB:

        myObject.Childs.Remove(child);
        context.SubmitChanges();

    But I can't get it to work for a model-bound object. I tried the following:

        // this does nothing
        public ActionResult Update(int id, MyObject obj)   // obj now has 4 child entities
        {
            var obj2 = _repository.GetObj(id);             // obj2 has 6 child entities
            if (TryUpdateModel(obj2))                      // it successfully updates obj2 and its childs
            {
                _repository.SubmitChanges();               // nothing happens, records stay in DB
            }
            else
            {
                // .....
            }
            return RedirectToAction("List");
        }

    And this throws an InvalidOperationException. I have a German OS so I'm not exactly sure what the error message is in English, but it says something along the lines of the entity needing a version (timestamp column?) or no update-check policies. I have set UpdateCheck="Never" on every column except the primary key column.

        public ActionResult Update(MyObject obj)
        {
            _repository.MyObjectTable.Attach(obj, true);
            _repository.SubmitChanges();                   // never gets here, exception at Attach
        }

    I've read a lot about similar "problems" with LINQ to SQL, but it seems most of those "problems" are actually by design. So am I right in my assumption that this doesn't work the way I expect it to? Do I really have to manually iterate through the child entities and delete, update and insert them myself? For such a simple object this may work, but I plan to create more complex objects with nested EntitySets and so on. This is just a test to see what works and what doesn't. So far I'm disappointed with LINQ to SQL (maybe I just don't get it). Would Entity Framework or NHibernate be a better choice for this scenario, or would I run into the same problem?
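
    A sketch of the manual reconciliation the asker hopes to avoid, reusing the question's own names (MyObject, Childs, _repository, GetObj) and assuming System.Linq and System.Collections.Generic are in scope; this is untested against the real model, and binder/ordering caveats may apply:

        public ActionResult Update(int id, MyObject posted)
        {
            var stored = _repository.GetObj(id);
            if (TryUpdateModel(stored))                     // binder updates existing + adds new children
            {
                // Delete the children the user removed in the view. ID != 0 guards
                // freshly added (unsaved) children, which have the default key value.
                var postedIds = new HashSet<int>(posted.Childs.Select(c => c.ID));
                foreach (var child in stored.Childs
                                            .Where(c => c.ID != 0 && !postedIds.Contains(c.ID))
                                            .ToList())      // snapshot: we mutate the collection below
                {
                    stored.Childs.Remove(child);            // with DeleteOnNull=true this becomes a DELETE
                }
                _repository.SubmitChanges();
            }
            return RedirectToAction("List");
        }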

  • Why are there connections open to my databases?

    - by Everett
    I have a program that stores user projects as databases. Naturally, the program should allow the user to create and delete the databases as they need to. When the program boots up, it looks for all the databases in a specific SQL Server instance that have the structure the program is expecting. These databases are then loaded into a listbox so the user can pick one to open as a project to work on.

    When I try to delete a database from the program, I always get a SQL error saying that the database is currently open, and the operation fails. I've determined that the code that checks for the databases to load is causing the problem. I'm not sure why, though, because I'm quite sure that all the connections are being properly closed.

    Here are all the relevant functions. After calling BuildProjectList, running "DROP DATABASE database_name" from ExecuteSQL fails with the message: "Cannot drop database because it is currently in use". I'm using SQL Server 2005.

        private SqlConnection databaseConnection;
        private string connectionString;
        private ArrayList databases;

        public ArrayList BuildProjectList()
        {
            // databases is an ArrayList of all the databases in an instance
            if (databases.Count <= 0)
            {
                return null;
            }
            ArrayList databaseNames = new ArrayList();
            for (int i = 0; i < databases.Count; i++)
            {
                string db = databases[i].ToString();
                connectionString = "Server=localhost\\SQLExpress;Trusted_Connection=True;Database=" + db + ";";
                // Check if the database has the table required for the project
                string sql = "select * from TableExpectedToExist";
                if (ExecuteSQL(sql))
                {
                    databaseNames.Add(db);
                }
            }
            return databaseNames;
        }

        private bool ExecuteSQL(string sql)
        {
            bool success = false;
            openConnection();
            SqlCommand cmd = new SqlCommand(sql, databaseConnection);
            try
            {
                cmd.ExecuteNonQuery();
                success = true;
            }
            catch (SqlException ae)
            {
                MessageBox.Show(ae.Message.ToString());
            }
            closeConnection();
            return success;
        }

        public void openConnection()
        {
            databaseConnection = new SqlConnection(connectionString);
            try
            {
                databaseConnection.Open();
            }
            catch (Exception e)
            {
                MessageBox.Show(e.ToString(), "Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
            }
        }

        public void closeConnection()
        {
            if (databaseConnection != null)
            {
                try
                {
                    databaseConnection.Close();
                }
                catch (Exception e)
                {
                    MessageBox.Show(e.ToString(), "Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
                }
            }
        }
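
    One likely explanation (an assumption, since the question doesn't confirm it) is ADO.NET connection pooling: Close() only returns the physical connection to the pool, and a pooled connection is enough for SQL Server to consider the database "in use". A minimal sketch of flushing the pools before the drop, connecting to master rather than to the database being dropped:

        using System.Data.SqlClient;

        private void DropDatabase(string name)
        {
            // Pooled connections keep the database "in use" even after Close();
            // flush them first (or add "Pooling=false" to the connection string).
            SqlConnection.ClearAllPools();
            using (var conn = new SqlConnection(
                "Server=localhost\\SQLExpress;Trusted_Connection=True;Database=master;"))
            using (var cmd = new SqlCommand("DROP DATABASE [" + name + "]", conn))
            {
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }

    The using blocks also guarantee the connection and command are disposed even when an exception is thrown, which the original openConnection/closeConnection pair does not.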

  • LexisNexis and Oracle Join Forces to Prevent Fraud and Identity Abuse

    - by Tanu Sood
    Author: Mark Karlstrand

    About the Writer: Mark Karlstrand is a Senior Product Manager at Oracle focused on innovative security for enterprise web and mobile applications. Over the last sixteen years Mark served as a director at a number of tech startups before joining Oracle in 2007. Working with a team of talented architects and engineers, Mark developed Oracle Adaptive Access Manager, a best-of-breed access security solution.

    The world's top enterprise software company and the world leader in data-driven solutions have teamed up to provide a new integrated security solution to prevent fraud and misuse of identities. LexisNexis Risk Solutions, a Gold level member of Oracle PartnerNetwork (OPN), today announced it has achieved Oracle Validated Integration of its Instant Authenticate product with Oracle Identity Management.

    Oracle provides the most complete identity and access management platform. As the only identity management provider to offer advanced capabilities including device fingerprinting, location intelligence, real-time risk analysis, and context-aware authentication and authorization, Oracle's offering is unique in the industry. LexisNexis Risk Solutions provides the industry-leading Instant Authenticate dynamic knowledge-based authentication (KBA) service, which offers customers a secure and cost-effective means to authenticate new users or verify identity for password resets, lockouts and similar scenarios. Oracle and LexisNexis now offer an integrated solution that combines the power of the most advanced identity management platform with superior data-driven user authentication to stop identity fraud in its tracks and, in turn, deliver significant operational cost savings. The solution can challenge users with dynamic knowledge-based authentication based on the risk of an access request or transaction, thereby adding a further level to other authentication methods such as static challenge questions or one-time passwords when needed.

    For example, with Oracle Identity Management self-service, the forgotten-password reset workflow utilizes advanced capabilities including device fingerprinting, location intelligence, risk analysis and one-time password (OTP) via short message service (SMS) to secure this sensitive flow. Even when a user has lost or misplaced his/her mobile phone and therefore cannot receive the SMS, the new integrated solution eliminates the need to contact the help desk: the Oracle Identity Management platform dynamically switches to the LexisNexis Instant Authenticate service for authentication if the user is not able to authenticate via OTP. The integrated solution thus both improves the user experience and saves money by avoiding unnecessary help desk calls.

    Oracle Identity and Access Management secures applications, Juniper SSL VPN and other web resources with a thoroughly modern, layered and context-aware platform. Users don't gain access just because they happen to have a valid username and password. An enterprise utilizing the Oracle solution can predicate access on the specific context of the current situation: the device, location, temporal data, and any number of other attributes are evaluated in real time to determine the specific risk at that moment. If the risk is elevated, a user can be challenged for additional authentication, refused access, or allowed access with limited privileges.

    The LexisNexis Instant Authenticate dynamic KBA service plugs into the Oracle platform to provide an additional layer of security by validating a user's identity in high-risk access or transactions. The large and varied pool of data the LexisNexis solution utilizes to quiz a user makes this challenge mechanism even more robust. This strong combination of Oracle and LexisNexis user authentication capabilities greatly mitigates the risk of exposing sensitive applications and services on the Internet, which helps an enterprise grow its business with confidence.

    Resources:
        Press release: LexisNexis® Achieves Oracle Validated Integration with Oracle Identity Management
        Oracle Access Management (HTML)
        Oracle Adaptive Access Manager (PDF)

  • Columnstore Case Study #1: MSIT SONAR Aggregations

    - by aspiringgeek
    Preamble: This is the first in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014. Many of these can be found in this deck along with details such as internals, best practices, caveats, etc. The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative.

    Why Columnstore? If we're looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we're asking a question which by design needs to hit lots of rows—DW, reporting, aggregations, grouping, scans, etc.—SQL Server never had a good mechanism until columnstore. Columnstore indexes were introduced in SQL Server 2012; however, they're still largely unknown. Some adoption blockers existed, yet columnstore was nonetheless a game changer for many apps. In SQL Server 2014 the potential blockers have been largely removed, and they're going to profoundly change the way we interact with our data. The purpose of this series is to share the performance benefits of columnstore and to document why columnstore is a compelling reason to upgrade to SQL Server 2014.

    App: MSIT SONAR Aggregations. At MSIT, performance & configuration data is captured by SCOM. We archive much of the data in a partitioned data warehouse table in SQL Server 2012 for reporting via an application called SONAR. By definition, this is a primary use case for columnstore—report queries requiring aggregation over large numbers of rows. New data is refreshed each night by an automated table partitioning mechanism—a best-practices scenario for columnstore.

    The Win: Compared to performance using classic indexing (which resulted in the expected query plan selection, including partition elimination), SQL Server 2012 nonclustered columnstore query performance increased significantly. Logical reads were reduced by over a factor of 50; both CPU & duration improved by factors of 20 or more. Other than creating the columnstore index, no special modifications or tweaks to the app or database schema were necessary to achieve the performance improvements. Existing nonclustered indexes were rendered superfluous & were deleted, thus mitigating maintenance challenges such as defragging as well as conserving disk capacity.

    Details: The table provides the raw data & summarizes the performance deltas.

                                        Logical Reads (8K pages)   CPU (ms)    Duration (ms)
        Columnstore                                      160,323     20,360            9,786
        Conventional Table & Indexes                   9,053,423    549,608          193,903
        Δ                                                    x56        x27              x20

    The charts provide additional perspective on this data. "Conventional vs. Columnstore Metrics" documents the raw data; note on this linear display the magnitude of the conventional index metrics vs. columnstore. The "Metrics (Δ)" chart expresses these values as a ratio.

    Summary: For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing. I have documented here, in the first of a series of reports on columnstore implementations, results from an initial implementation at MSIT in which logical reads were reduced by over a factor of 50; both CPU & duration improved by factors of 20 or more. Subsequent features in this series document performance enhancements that are even more significant.
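
    For reference, the single schema change this case study describes (creating the nonclustered columnstore index) can be sketched as follows. Table, column, database and connection names are invented for illustration; the post does not give the real DDL. Note that in SQL Server 2012 the index makes the table read-only, which is one reason the nightly refresh via partition switching is a good fit:

        using System.Data.SqlClient;

        class ColumnstoreSetup
        {
            static void Main()
            {
                // Hypothetical fact table and columns; adjust to the real schema.
                const string ddl = @"
                    CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactPerf
                    ON dbo.FactPerf (CounterId, SampleDate, SampleValue);";
                using (var conn = new SqlConnection(
                    @"Server=.;Database=SONAR_DW;Integrated Security=True"))
                using (var cmd = new SqlCommand(ddl, conn))
                {
                    conn.Open();
                    cmd.ExecuteNonQuery();  // SQL 2012: table is read-only while this index exists
                }
            }
        }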

  • A question about the basics of LINQ to SQL

    - by Alex
    I just started learning LINQ to SQL, and so far I'm impressed with the ease of use and good performance. I used to think that when doing LINQ queries like

        from Customer in DB.Customers where Customer.Age > 30 select Customer

    LINQ gets all customers from the database ("SELECT * FROM Customers"), moves them into an array, and then searches that array using .NET methods. That would be very inefficient: what if there are hundreds of thousands of customers in the database? Making such big SELECT queries would kill the web application.

    Now, after experiencing how fast LINQ to SQL actually is, I suspect that when doing the query I just wrote, LINQ somehow converts it to a SQL query string

        SELECT * FROM Customers WHERE Age > 30

    and only runs the query when necessary. So my question is: am I right? And when is the query actually run?

    The reason I'm asking is not only that I want to understand how it works in order to build well-optimized applications, but also that I came across the following problem. I have two tables; one of them is Books, the other has information on how many books were sold on certain days. My goal is to select books that had at least 50 sales/day in the past 10 days. It's done with this simple query:

        from Book in DB.Books
        where (from Sale in DB.Sales
               where Sale.SalesAmount >= 50 && Sale.DateOfSale >= DateTime.Now.AddDays(-10)
               select Sale.BookID).Contains(Book.ID)
        select Book

    The point is, I have to use the checking part in several queries, so I decided to create an array with the IDs of all popular books:

        var popularBooksIDs = from Sale in DB.Sales
                              where Sale.SalesAmount >= 50 && Sale.DateOfSale >= DateTime.Now.AddDays(-10)
                              select Sale.BookID;

    BUT when I try to do the query now:

        from Book in DB.Books where popularBooksIDs.Contains(Book.ID) select Book

    it doesn't work! That's why I think we can't use this kind of shortcut in LINQ to SQL queries, just as we can't in real SQL. We have to create straightforward queries, am I right?
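
    A sketch of the deferred-execution behavior the question is probing, reusing the question's names; "MyDataContext" is a hypothetical stand-in for the generated DataContext, and the key assumption is that popularBooksIDs stays an IQueryable<int> rather than being materialized:

        using System;
        using System.Linq;

        static void Demo(MyDataContext DB)
        {
            // Defining the query sends nothing to the server...
            IQueryable<int> popularBooksIDs =
                from s in DB.Sales
                where s.SalesAmount >= 50 && s.DateOfSale >= DateTime.Now.AddDays(-10)
                select s.BookID;

            // ...and a subquery that is still IQueryable can be composed into
            // another query; LINQ to SQL translates Contains() into an
            // IN (subquery) clause. If popularBooksIDs were materialized first
            // (e.g. via ToArray()), the provider would instead build an IN list
            // from the local values.
            var popularBooks = DB.Books.Where(b => popularBooksIDs.Contains(b.ID));

            // Only here, when the query is enumerated, is the SQL generated and run.
            foreach (var book in popularBooks)
                Console.WriteLine(book.ID);
        }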

  • Multiple Connection Types for one Designer Generated TableAdapter

    - by Tim
    I have a Windows Forms application with a DataSet (.xsd) that is currently set to connect to a SQL CE database. Compact Edition is being used so the users can use this application in the field without an internet connection, and then sync their data at day's end.

    I have been given a new project to create a supplemental web interface for displaying some of the same reports as the Windows Forms application, so certain users can obtain reports without installing the Windows app. What I've done so far is create a new web project and add it to my current solution. I have split both the reports (.rdlc) and DataSets out of the Windows Forms project into their own projects so they can be accessed by both the Windows and web applications. So far, this is working fine.

    Here's my dilemma: as I said before, the DataSets are currently set up to connect to a local SQL CE database file. This is correct for the Windows app, but for the web application I would like to use these same TableAdapters and queries to connect to the SQL Server 2005 database. I have found that the designer-generated, strongly-typed TableAdapter classes have a ConnectionModifier property that allows you to make the TableAdapter's Connection public. This exposes the Connection property and allows me to set it; however, it is strongly typed as a SqlCeConnection, whereas I would like to set it to a SqlConnection for my web project.

    I'm assuming the DataSet designer strongly-types the Connection, Command, and DataAdapter objects based on the provider of the connection string as indicated in the app.config file. Is there any way I can use some generic provider so that the DataSet designer will use object types that can connect to both a SQL CE database file AND the actual SQL Server 2005 database? I know that SqlCeConnection and SqlConnection both inherit from DbConnection, which implements IDbConnection. Similarly, SqlCeCommand/SqlCommand derive from DbCommand, which implements IDbCommand. It would be nice if I could just figure out a way for the designer to use the interface types rather than the strong types, but I doubt that's possible.

    I hope my problem and question are clear. Any help is much appreciated. Let me know if there's anything I can clarify.
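
    For hand-written data access (not the designer-generated adapters, which remain strongly typed), provider-neutral code is possible via DbProviderFactories, since both connection types derive from DbConnection. A sketch; the provider invariant names shown are the standard registered ones, though the Compact Edition name varies by version:

        using System.Data.Common;

        static DbConnection CreateConnection(bool compact, string connectionString)
        {
            // "System.Data.SqlServerCe.3.5" for SQL CE 3.5; "System.Data.SqlClient"
            // for full SQL Server. The rest of the code sees only DbConnection.
            DbProviderFactory factory = DbProviderFactories.GetFactory(
                compact ? "System.Data.SqlServerCe.3.5" : "System.Data.SqlClient");
            DbConnection conn = factory.CreateConnection();
            conn.ConnectionString = connectionString;
            return conn;
        }

    The same factory can create DbCommand and DbDataAdapter objects, so a thin data layer written against the base classes can serve both the desktop and web projects.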

  • SQL/Schema comparison and upgrade

    - by Workshop Alex
    I have a simple situation. A large organisation is using several different versions of some (desktop) application, and each version has its own database structure. There are about 200 offices, and each office will have its own version, which can be one of 7 different ones. The company wants to upgrade all applications to the latest version, which will be version 8.

    The problem is that they don't have a separate database for each version, nor a separate database for each office. They have one single database handled by a dedicated server, thus keeping things like management and backups easier. Every office has its own database schema, and within the schema there's the whole database structure for their specific application version. As a result, I'm dealing with 200 different schemas which need to be upgraded, each at one of 7 possible versions. Fortunately, every schema knows its proper version, so checking the version isn't difficult.

    My problem is that I need to create upgrade scripts which can upgrade from version 1 to version 2 to version 3 and so on. Basically, all schemas need to be bumped up one version until they're all version 8. Writing the code that will do this is no problem; the challenge is how to create the upgrade script from one version to the next, preferably with some automated tool. I've examined RedGate's SQL Compare and Altova's DatabaseSpy, but they're not practical. Altova is way too slow. RedGate requires too much processing afterwards, since the generated SQL script still has a few errors and it refers to the schema name. Furthermore, the code needs to become part of a stored procedure, and the code generated by RedGate doesn't really fit inside a single procedure. (Plus, it's doing too much transaction handling, while I need everything within a single transaction.)

    I have been considering another SQL comparison tool, but it seems to me that my case is just too different from what standard tools can deliver. So I'm going to write my own comparison tool. To do this, I'll be using ADOX with Delphi to read the catalogues for every schema version in the database, then use this to write the SQL statements that will upgrade each schema to its next version. (Comparing 1 with 2, 2 with 3, 3 with 4, etc.) I'm not unfamiliar with writing SQL script generators, so I don't expect too many problems. And I'll only be upgrading the table structures, not any of the other database objects.

    So, does anyone have good tips and tricks to apply when doing this kind of comparison? Things to be aware of? Practical tips to increase speed?
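
    For illustration only (the asker plans Delphi + ADOX): the same catalogue information is also available portably from INFORMATION_SCHEMA. A C# sketch that loads one schema's column catalogue, so two versions can be diffed to generate the ALTER statements; connection details are placeholders:

        using System.Data;
        using System.Data.SqlClient;

        static DataTable LoadColumns(string connectionString, string schemaName)
        {
            const string sql = @"
                SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE,
                       CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE
                FROM INFORMATION_SCHEMA.COLUMNS
                WHERE TABLE_SCHEMA = @schema
                ORDER BY TABLE_NAME, ORDINAL_POSITION;";
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.Parameters.AddWithValue("@schema", schemaName);
                var table = new DataTable();
                new SqlDataAdapter(cmd).Fill(table);   // Fill opens/closes the connection itself
                return table;
            }
        }

    Diffing the result for a version-N schema against a version-N+1 reference schema yields the added, dropped, and retyped columns from which the per-step upgrade script can be emitted.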

  • Columnstore Case Study #2: Columnstore faster than SSAS Cube at DevCon Security

    - by aspiringgeek
    Preamble: This is the second in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014. Many of these can be found in my big deck along with details such as internals, best practices, caveats, etc. The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative. See also Columnstore Case Study #1: MSIT SONAR Aggregations.

    Why Columnstore? As stated previously, if we're looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we're asking a question which by design needs to hit lots of rows—DW, reporting, aggregations, grouping, scans, etc.—SQL Server never had a good mechanism until columnstore. Columnstore indexes were introduced in SQL Server 2012; however, they're still largely unknown. Some adoption blockers existed, yet columnstore was nonetheless a game changer for many apps. In SQL Server 2014 the potential blockers have been largely removed, and they're going to profoundly change the way we interact with our data. The purpose of this series is to share the performance benefits of columnstore and to document why columnstore is a compelling reason to upgrade to SQL Server 2014.

    The Customer: DevCon Security provides home & business security services & has been in business for 135 years. I met DevCon personnel while speaking to the Utah County SQL User Group on 20 February 2012. (Thanks to TJ Belt (b|@tjaybelt) & Ben Miller (b|@DBADuck) for the invitation, which serendipitously coincided with the height of ski season.)

    The App: DevCon Security Reporting: Optimized & Ad Hoc Queries. DevCon users interrogate a SQL Server 2012 Analysis Services cube via SSRS. In addition, the SQL Server 2012 relational back end is the target of ad hoc queries; this DW back end is refreshed nightly during a brief maintenance window via conventional table partition switching.

    SSRS, SSAS, & MDX: Conventional relational structures were unable to provide adequate performance for user interaction with the SSRS reports. An SSAS solution was implemented, requiring personnel to ramp up technically, including learning enough MDX to satisfy requirements.

    Ad Hoc Queries: Even though the fact table is relatively small—only 22 million rows & 33 GB—the table was a typical DW table in terms of its width: 137 columns, any of which could be the target of ad hoc interrogation. As is common in DW reporting scenarios such as this, it is often nearly impossible to optimize for such queries using conventional indexing. DevCon DBAs & developers attended PASS 2012 & were introduced to the marvels of columnstore in a session presented by Klaus Aschenbrenner (b|@Aschenbrenner).

    The Details: Classic vs. columnstore before-&-after metrics are impressive.

        Scenario        Conventional Structures             Columnstore      Δ
        SSRS via SSAS   10 - 12 seconds                     1 second         >10x
        Ad Hoc          5 - 7 minutes (300 - 420 seconds)   1 - 2 seconds    >100x

    Here are two charts characterizing this data graphically. The first is a linear representation of report duration (in seconds) for conventional structures vs. columnstore indexes. As is so often the case when we chart such significant deltas, the linear scale doesn't expose some of the dramatically improved values corresponding to the columnstore metrics. Just to make it fair, here's the same data represented logarithmically; yet even here the values corresponding to 1 - 2 seconds aren't visible.

    The Wins. Performance: Even prior to columnstore implementation, canned report performance of 10 - 12 seconds against the SSAS cube was tolerable; yet the 1-second performance afterward is clearly better. As significant as that is, imagine the user experience for ad hoc interrogation. The difference between several minutes and one or two seconds is a game changer, literally changing the way users interact with their data—no mental context switching, no wondering when the results will appear, no preoccupation with the spinning, mind-numbing, hurry-up-&-wait indicators. As we've commonly found elsewhere, columnstore indexes here provided performance improvements of one, two, or more orders of magnitude.

    Simplified Infrastructure: Because in this case a nonclustered columnstore index on a conventional DW table was faster than an Analysis Services cube, the entire SSAS infrastructure was rendered superfluous & was retired.

    PASS Rocks: Once again, the value of attending PASS is proven out. The trip to Charlotte combined with eager & enquiring minds led directly to this success story. Find out more about the next PASS Summit here, hosted this year in Seattle on November 4 - 7, 2014. DevCon BI Team Lead Nathan Allan provided this unsolicited feedback: "What we found was pretty awesome. It has been a game changer for us in terms of the flexibility we can offer people that would like to get to the data in different ways."

    Summary: For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing. I have documented here, in the second of a series of reports on columnstore implementations, results from DevCon Security, a live customer production app for which performance increased by factors of 10x to 100x for all report queries, including canned queries, as well as reducing time-to-results for ad hoc queries from 5 - 7 minutes to 1 - 2 seconds. As a result of columnstore performance, the customer retired their SSAS infrastructure. I invite you to consider leveraging columnstore in your own environment. Let me know if you have any questions.

  • SQL Triggers and when (or when not) to use them

    - by John Mitchell
    When I was originally learning about SQL I was always told: only use triggers if you really need to, and opt for stored procedures instead if possible. Unfortunately, at the time (a good few years ago) I wasn't as curious about fundamentals as I am now, so I never asked why. What's the community's opinion on this? Is it just someone's personal preference, or should triggers be avoided (just like cursors) unless there is a good reason for them?

  • Is there a recommended approach for using SQL Server as an Authorization store and extending AD properties using .Net? [closed]

    - by Jim
    We are going to be using SQL Server as an authorization store for our .NET Windows services and WCF services, as well as storing additional metadata about users and groups to extend the AD properties. Doing this will make it self-service and not require IT to change anything for our department (for users or groups). What, if any, are the recommended strategies or technologies for doing this?

  • Management - Level 9 in the Stairway to Reporting Services

    In the last article of the series, you will learn how to manage your reports once you've finished development, including how to use the Report Manager, deploy reports, and send reports to the appropriate end users.

  • Painless management of a logging table in SQL Server

    Tables that log a record of what happens in an application can get very large, especially if they're growing by half a billion rows a day. You'll very soon need to devise a scheduled routine to remove old records, but the DELETE statement just isn't a realistic option with that volume of data. Hugo Kornelis explains a pain-free technique for SQL Server.
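
    The article's own technique isn't reproduced in this teaser. As a baseline for comparison, the usual alternative to one massive DELETE is purging in small batches so locks and log growth stay bounded; a sketch with invented table and column names:

        using System;
        using System.Data.SqlClient;

        static void PurgeOldRecords(string connectionString, DateTime cutoff)
        {
            // Each batch commits separately, keeping the transaction log and lock
            // footprint small. The linked article's "painless" approach will likely
            // outperform this at half a billion rows a day.
            const string sql = @"DELETE TOP (50000) FROM dbo.AppLog WHERE LoggedAt < @cutoff;";
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.Parameters.AddWithValue("@cutoff", cutoff);
                conn.Open();
                int deleted;
                do
                {
                    deleted = cmd.ExecuteNonQuery();
                } while (deleted > 0);  // repeat until no old rows remain
            }
        }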

  • Cannot connect to a SQL Server 2005 Analysis Services cube after installing SQL Server 2008 SP1.

    - by Luc
    I've been developing an application that talks directly to an SSAS 2005 OLAP cube. Note that I also have SQL Server 2008 installed, so the other day I ran Windows Update and decided to include SQL Server 2008 SP1 in the update. After doing that, my SSAS 2005 cube is no longer accessible from my application. I'm able to browse the data just fine within SQL Server 2005 BI Studio Manager, but I'm not able to connect to the cube from my application.

    Here is my connection string that used to work:

        Data Source=localhost;Provider=msolap;Initial Catalog=Adventure Works DW

    Here is the error message I get:

        Either the user, [Server]/[User], does not have access to the Adventure Works DW database, or the database does not exist.

    Here is the beginning of my stack trace, if it helps:

        Microsoft.AnalysisServices.AdomdClient.AdomdErrorResponseException was unhandled by user code
          HelpLink=""
          Message="Either the user, Luc-PC\\Luc, does not have access to the Adventure Works DW database, or the database does not exist."
          Source="Microsoft SQL Server 2005 Analysis Services"
          ErrorCode=-1055391743
          StackTrace:
            at Microsoft.AnalysisServices.AdomdClient.AdomdConnection.XmlaClientProvider.Microsoft.AnalysisServices.AdomdClient.IDiscoverProvider.Discover(String requestType, IDictionary restrictions, DataTable table)
            at Microsoft.AnalysisServices.AdomdClient.ObjectMetadataCache.Discover(AdomdConnection connection, String requestType, ListDictionary restrictions, DataTable destinationTable, Boolean doCreate)
            at Microsoft.AnalysisServices.AdomdClient.ObjectMetadataCache.PopulateSelf()
            at Microsoft.AnalysisServices.AdomdClient.ObjectMetadataCache.Microsoft.AnalysisServices.AdomdClient.IObjectCache.Populate()
            at Microsoft.AnalysisServices.AdomdClient.CacheBasedNotFilteredCollection.PopulateCollection()
            at Microsoft.AnalysisServices.AdomdClient.CacheBasedNotFilteredCollection.get_Count()
            at Microsoft.AnalysisServices.AdomdClient.CubesEnumerator.MoveNext()
            at Microsoft.AnalysisServices.AdomdClient.CubeCollection.Enumerator.MoveNext()
            at blah blah...

    I've looked for a solution for the last 4+ hours and haven't had any success. Thanks in advance for any help. Luc

  • Javascript new keyword and memory management

    - by Whyamistilltyping
    Coming from C++, it is deeply ingrained in my mind that every time I call new I call delete. In JavaScript I find myself calling new occasionally in my code, but (hoping) the garbage collection functionality in the browser will take care of the mess for me. I don't like this. Is there a 'delete' in JavaScript, and is how I use it different from in C++? Thanks.

  • Display a Photo Gallery using Asp.Net and SQL

    - by sweetcoder
    I have recently added photos to my SQL database and have displayed them on an *.aspx page using Asp:Image. The ImageUrl for this control points to a separate *.aspx page that streams the image. It works great for profile pictures.

    I have a new issue at hand: I need each user to be able to have their own photo gallery page. I want the photos to be stored in the SQL database; storing the photos is not difficult. The issue is displaying them. I want the photos shown in a thumbnail grid; when the user clicks on a photo, it should bring up the photo on a separate page. What is the best way to do this? Obviously it is not to use Asp:Image. I am curious whether I should use a GridView. If so, how do I do that, and should there be a thumbnail size stored in the database for this? Once the picture is clicked on, how does the other page know which image to display? I would think it is not correct to send the photoId through the URL. Below is code from the page I use to display profile pictures.

        protected void Page_Load(object sender, EventArgs e)
        {
            string sql = "SELECT [ProfileImage] FROM [UserProfile] WHERE [UserId] = '"
                         + User.Identity.Name.ToString() + "'";
            string strCon = System.Web.Configuration.WebConfigurationManager
                .ConnectionStrings["SocialSiteConnectionString"].ConnectionString;
            SqlConnection conn = new SqlConnection(strCon);
            SqlCommand comm = new SqlCommand(sql, conn);
            conn.Open();
            Response.ContentType = "image/jpeg";
            Response.BinaryWrite((byte[])comm.ExecuteScalar());
            conn.Close();
        }
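
    A hypothetical sketch of the "separate page" as an .ashx handler that streams one photo by id. Passing the id in the query string is the common pattern; the safeguard is a parameterized query that also checks ownership server-side, rather than avoiding the id in the URL. Table and column names are invented:

        using System;
        using System.Data.SqlClient;
        using System.Web;

        public class PhotoHandler : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                int photoId = int.Parse(context.Request.QueryString["id"]);
                const string sql =
                    "SELECT Photo FROM UserPhotos WHERE PhotoId = @id AND UserId = @user";
                string strCon = System.Web.Configuration.WebConfigurationManager
                    .ConnectionStrings["SocialSiteConnectionString"].ConnectionString;
                using (var conn = new SqlConnection(strCon))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    // Parameterized: no string concatenation into the SQL, and the
                    // UserId predicate stops users from fetching each other's photos.
                    cmd.Parameters.AddWithValue("@id", photoId);
                    cmd.Parameters.AddWithValue("@user", context.User.Identity.Name);
                    conn.Open();
                    context.Response.ContentType = "image/jpeg";
                    context.Response.BinaryWrite((byte[])cmd.ExecuteScalar());
                }
            }

            public bool IsReusable { get { return true; } }
        }

    The gallery page can then databind a GridView or DataList of ImageButtons whose ImageUrl is "PhotoHandler.ashx?id=..." for each row, with a stored thumbnail column serving the grid if full-size images are too heavy.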

  • Issues Connecting to SQLExpress using Oracle SQL Developer

    - by ArtDeveloper
    Hey guys, I'm trying to create a connection inside Oracle SQL Developer to a SQL Express database I have. Everything resides on the same machine, so there shouldn't be any network issues to deal with, but every time I follow the instructions and try to connect I get the following message: "Failure - Unable to get information from SQL Server: localhost."

    I can connect to the SQL Express DB using SQL Management Studio and through an ODBC connection. I've installed the third-party extensions and enabled the TCP protocol in the SQL Server Configuration Manager, as well as enabled the IP addresses. I'm assuming that the SQL Express database is on port 1433 because I didn't change this when I installed, but if someone can tell me how to double-check that, I would appreciate that info as well.

    I set up the new connection with the following information (I'm using Windows authentication, so the username and password aren't filled in):

        name: databasename
        host: localhost
        port: 1433/databasename;instance=SQLEXPRESS

    (databasename is replaced with the actual DB name; I've just changed the name here to protect the innocent.)

    I've spent about a full day on this trying to get it connected, and many Google results where other people have had this issue and solved it through various methods haven't resolved mine. Any information would be much appreciated. Thank you in advance, AD

  • User management for custom application built on google apps

    - by Ali
    Hi guys, I'm modifying our collaboration system so it can be listed on Google Apps. A small issue I'm facing is the registering of user details. By default, whenever someone logs into their Google Apps account they are pretty much logged into the application. For every action taken by a logged-in user, I store that user's ID whenever an update is made in the database. However, the Google Apps user sign-in process is different in this respect: there isn't anything visible as a user ID for me to work with. Any ideas?

  • WCF fault logging and SQL Exception 4060 error.

    - by Bill
    I have been attempting to compile/run a sample WCF application from Juval Lowy's website (author of Programming WCF Services and founder of IDesign) for several days. The example app utilizes Juval's ServiceModelEx library, which logs faults/errors to a "WCFLogbook" SQL database. Unfortunately, when the sample app faults, I get the following error:

        SQL Exception 4060: "Cannot open database \"WCFLogbook\" requested by the login. The login failed.\r\nLogin failed for user 'Bill-PC\Bill'."

    I confirmed that the SQL WCFLogbook database has been created and have granted all of the appropriate permissions for my (Bill-PC\Bill) access to the database. Additionally, ports 8006 and 1433 have been opened in the firewall, TCP/IP has been enabled, and "Allow remote connections to this server" has been checked. I am using the following endpoint within the App.config file:

        <client>
          <endpoint name="LogbookTCP"
                    address="net.tcp://Bill-PC:8006/LogbookManager"
                    binding="netTcpBinding"
                    contract="ILogbookManager" />
        </client>

    Unfortunately, SQL is a 'world' that I hadn't needed to venture into before now, and I am terribly frustrated with my lack of success. Would anyone have any suggestions on how to get this working? Have I missed anything?

  • Memory management in objective-c

    - by prathumca
    I have this code in one of my classes:

        - (void)processArray
        {
            NSMutableArray *array = [self getArray];
            // ...
            [array release];
            array = nil;
        }

        - (NSMutableArray *)getArray
        {
            // NO 1: NSMutableArray *array = [[NSMutableArray alloc] init];
            // NO 2: NSMutableArray *array = [NSMutableArray array];
            // ...
            return array;
        }

    NO 1: I create an array and return it; in the processArray method I release it. NO 2: I get an array by simply calling array. As I'm not the owner of this, I don't need to release it in the processArray method. Which is the best alternative, NO 1 or NO 2? Or is there a better solution for this?

  • Objective-C (iPhone) Memory Management

    - by Steven
    I'm sorry to ask such a simple question, but it's a specific question I've not been able to find an answer for. I'm not a native Objective-C programmer, so I apologise if I use any C# terms!

    If I define an object in test.h:

        @interface test : something {
            NSString *_testString;
        }

    and initialise it in test.m:

        - (id)init {
            _testString = [[NSString alloc] initWithString:@"hello"];
        }

    then I understand that I would release it in dealloc, as every init should have a release:

        - (void)dealloc {
            [_testString release];
        }

    However, what I need clarification on is what happens if, in init, I use one of the shortcut methods for object creation. Do I still release it in dealloc? Doesn't this break the "one release for one init" rule? e.g.

        - (id)init {
            _testString = [NSString stringWithString:@"hello"];
        }

    Thanks for your help, and if this has been answered somewhere else, I apologise! Steven

  • What Project Management Software should I use?

    - by Vecdid
    I am looking for either an MS tool like Project or an open-source equivalent. Yes, I could Google it, but I am looking for some insight from people who handle that end of the software, as I would be coming at it as a programmer. The tool has to run using IIS as the web server. What are some of the best features of your suggestion?

  • Can't connect to SQL Server 2008 - looks like Shared Memory problem

    - by Proposition Joe
    I am unable to connect to my local instance of SQL Server 2008 Express using SQL Server Management Studio. I believe the problem is related to a change I made to the connection protocols. Before the error occurred, I had Shared Memory enabled and Named Pipes and TCP/IP disabled. I then enabled both Named Pipes and TCP/IP, and this is when I started experiencing the problem.

    When I try to connect to the server with SSMS (with either my SQL Server sysadmin login or with Windows authentication), I get the following error message:

        A connection was successfully established with the server, but then an error occurred during the login process. (provider: Named Pipes Provider, error: 0 - No process is on the other end of the pipe.) (Microsoft SQL Server, Error: 233)

    Why is it returning a Named Pipes error? Why would it not just use Shared Memory, as this has a higher priority in the list of connection protocols? It seems like it is not listening on Shared Memory for some reason. When I set Named Pipes to enabled and try to connect, I get the same error message. My Windows account does not have administrator privileges on my computer; perhaps this is making a difference in some way (as some of the discussions in this post about a "SuperSocketNetLib\Lpc" registry key seem to suggest).
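
    One way to isolate which protocol is actually failing (a sketch; ".\SQLEXPRESS" is an assumed instance name) is to force each protocol in turn using the documented lpc:/np:/tcp: server-name prefixes, which override the client's protocol priority order per connection:

        using System;
        using System.Data.SqlClient;

        class ProtocolProbe
        {
            static void Main()
            {
                // lpc: = shared memory, np: = named pipes, tcp: = TCP/IP
                foreach (var prefix in new[] { "lpc:", "np:", "tcp:" })
                {
                    try
                    {
                        using (var conn = new SqlConnection(
                            "Data Source=" + prefix + @".\SQLEXPRESS;Integrated Security=True"))
                        {
                            conn.Open();
                            Console.WriteLine(prefix + " OK");
                        }
                    }
                    catch (SqlException ex)
                    {
                        Console.WriteLine(prefix + " failed: " + ex.Message);
                    }
                }
            }
        }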
