Search Results

Search found 14482 results on 580 pages for 'ssrs 2008'.

Page 528/580

  • Managing multiple customer databases in ASP.NET MVC application

    - by Robert Harvey
    I am building an application that requires separate SQL Server databases for each customer. To achieve this, I need to be able to create a new customer folder, put a copy of a prototype database in the folder, change the name of the database, and attach it as a new "database instance" to SQL Server. The prototype database contains all of the required table, field and index definitions, but no data records. I will be using SMO to manage attaching, detaching and renaming the databases. In the process of creating the prototype database, I tried attaching a copy of the database (companion .MDF, .LDF pair) to SQL Server, using Sql Server Management Studio, and discovered that SSMS expects the database to reside in c:\program files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\MyDatabaseName.MDF Is this a "feature" of SQL Server? Is there a way to manage individual databases in separate directories? Or am I going to have to put all of the customer databases in the same directory? (I was hoping for a little better control than this). NOTE: I am currently using SQL Server Express, but for testing purposes only. The production database will be SQL Server 2008, Enterprise version. So "User Instances" are not an option.
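
    SQL Server itself does not require data files to live in the default DATA directory; that path is just where SSMS creates and looks for files by default. A minimal SMO sketch of attaching a copied prototype from a per-customer folder (the instance name, folder, file names and database name below are hypothetical, and the SQL Server service account must be able to read that folder):

        using System.Collections.Specialized;
        using Microsoft.SqlServer.Management.Smo;

        // Connect to the target instance (name is an assumption).
        Server server = new Server(@"MYSERVER\SQL2008");

        // The copied prototype files, already renamed and placed in the customer folder.
        StringCollection files = new StringCollection();
        files.Add(@"D:\Customers\Acme\AcmeCustomer.mdf");
        files.Add(@"D:\Customers\Acme\AcmeCustomer_log.ldf");

        // Attach under a per-customer database name; Server.DetachDatabase reverses it.
        server.AttachDatabase("AcmeCustomer", files);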

    Read the article

  • nServiceBus - Not all commands being received by handler

    - by SimonB
    In a test project based on the nServiceBus pub/sub sample, I've replaced the bus.Publish with bus.Send in the server. The server sends 50 messages with a 1 sec wait after each 5 (i.e. 10 bursts of 5 messages). The client does not get all the messages. The solution has 3 projects - Server, Client, and common Messages. The server and client are hosted via the nServiceBus generic host. Only a single bus is defined. Both client and server are configured to use the StructureMap builder and BinarySerialisation. Server Endpoint: public class EndPointConfig : AsA_Publisher, IConfigureThisEndpoint, IWantCustomInitialization { public void Init() { NServiceBus.Configure.With() .StructureMapBuilder() .BinarySerializer(); } } Server Code: for (var nextId = 1; nextId <= 50; nextId++) { Console.WriteLine("Sending {0}", nextId); IDataMsg msg = new DataMsg { Id = nextId, Body = string.Format("Batch Msg #{0}", nextId) }; _bus.SendLocal(msg); Console.WriteLine(" ...sent {0}", nextId); if ((nextId % 5) == 0) Thread.Sleep(1000); } Client Endpoint: public class EndPointConfig : AsA_Client, IConfigureThisEndpoint, IWantCustomInitialization { public void Init() { NServiceBus.Configure.With() .StructureMapBuilder() .BinarySerializer(); } } Client Code: public class DataMsgHandler : IMessageHandler<IDataMsg> { public void Handle(IDataMsg msg) { Console.WriteLine("DataMsgHandler.Handle({0}, {1}) - ({2})", msg.Id, msg.Body, Thread.CurrentThread.ManagedThreadId); } } Client and Server App.Config: <MsmqTransportConfig InputQueue="nsbt02a" ErrorQueue="error" NumberOfWorkerThreads="1" MaxRetries="5" /> <UnicastBusConfig DistributorControlAddress="" DistributorDataAddress=""> <MessageEndpointMappings> <add Messages="Test02.Messages" Endpoint="nsbt02a" /> </MessageEndpointMappings> </UnicastBusConfig> All run via Visual Studio 2008. All 50 messages are sent, but after the 1st or 2nd batch only 1 message per batch is received. Any ideas? I'm assuming it's a configuration issue or misuse on my part, but...?

    Read the article

  • SQL Server full text query across multiple tables - why so slow?

    - by Mikey Cee
    Hi. I'm trying to understand the performance of an SQL Server 2008 full-text query I am constructing. The following query, using a full-text index, returns the correct results immediately: SELECT O.ID, O.Name FROM dbo.EventOccurrence O WHERE FREETEXT(O.Name, 'query') i.e. all EventOccurrences with the word 'query' in their name. And the following query, using a full-text index from a different table, also returns straight away: SELECT V.ID, V.Name FROM dbo.Venue V WHERE FREETEXT(V.Name, 'query') i.e. all Venues with the word 'query' in their name. But if I try to join the tables and do both full-text queries at once, it takes 12 seconds to return: SELECT O.ID, O.Name FROM dbo.EventOccurrence O INNER JOIN dbo.Event E ON O.EventID = E.ID INNER JOIN dbo.Venue V ON E.VenueID = V.ID WHERE FREETEXT(E.Name, 'search') OR FREETEXT(V.Name, 'search') Here is the execution plan: http://uploadpad.com/files/query.PNG From my reading, I didn't think it was even possible to make a free-text query across multiple tables in this way, so I'm not sure I am understanding this correctly. Note that if I remove the WHERE clause from this last query then it returns all results within a second, so it's definitely the full-text that is causing the issue here. Can someone explain (i) why this is so slow and (ii) whether this is even supported / whether I am understanding this correctly? Thanks in advance for your help.
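
    A workaround that is often suggested for this pattern (an assumption here, not something from the post): give each FREETEXT predicate its own SELECT and UNION the results, so each full-text index is queried on its own rather than the OR being evaluated across the join. A rough C# sketch, where the connection string and search term are placeholders:

        using System;
        using System.Data.SqlClient;

        static class FullTextSearch
        {
            // Each FREETEXT predicate gets its own SELECT; UNION removes duplicates.
            const string Sql = @"
                SELECT O.ID, O.Name
                FROM dbo.EventOccurrence O
                INNER JOIN dbo.Event E ON O.EventID = E.ID
                WHERE FREETEXT(E.Name, @q)
                UNION
                SELECT O.ID, O.Name
                FROM dbo.EventOccurrence O
                INNER JOIN dbo.Event E ON O.EventID = E.ID
                INNER JOIN dbo.Venue V ON E.VenueID = V.ID
                WHERE FREETEXT(V.Name, @q);";

            public static void Run(string connectionString, string term)
            {
                using (SqlConnection conn = new SqlConnection(connectionString))
                using (SqlCommand cmd = new SqlCommand(Sql, conn))
                {
                    cmd.Parameters.AddWithValue("@q", term);
                    conn.Open();
                    using (SqlDataReader reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            Console.WriteLine("{0}: {1}", reader["ID"], reader["Name"]);
                    }
                }
            }
        }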

    Read the article

  • Any diff/merge tool that provides a report (metrics) of conflicts?

    - by cad
    CONTEXT: I am preparing a big C# merge using Visual Studio 2008 and TFS. I need to create a report with the files and the number of collisions (total changes and conflicts) for each file (and in total, of course). PROBLEM: I cannot do it, for two reasons (the first one is solved): 1- Using TFS merge I can have access to the file comparison but I cannot export the list of conflicting files... I can only try to resolve the conflicts. (I have solved problem 1 using Beyond Compare. It allows me to export the file list.) 2- Using TFS merge I can only get the number of conflicts manually, file by file... but I have more than 800 files (and will probably have to repeat this in the near future, so doing it manually is not an option). There are dozens of file comparison tools (http://en.wikipedia.org/wiki/Comparison_of_file_comparison_tools ) but I am not sure which one could (if any) give me these metrics. I have also read several forums and questions here, but they are more general questions (which diff tool is better) and I am looking for a very specific report. So my questions are: Is Visual Studio 2010 (still using TFS 2008) capable of producing such reports/exports? Is there any tool that provides this kind of metrics? (For now I am trying Beyond Compare.)

    Read the article

  • SQL Server becomes slow after restart

    - by Tobi DM
    We use SQL Server 2005 on a Windows Server 2008. The server has 48 GB RAM. SQL Server is configured to use 40 GB RAM. There is only one database hosted (about 70 GB). The only app beside SQL Server is our app server, which connects the clients to the database. Now we encounter the following problem: after a restart of the server the performance is great. The server grabs the 40 GB RAM which it is allowed to and then runs fast as hell. But after about 4 weeks the system becomes slower and slower. The execution times of statements (seen in the profiler) are rising slowly. But I cannot see that there is something going wrong on the server: CPU usage is at about 20%; I/O also seems to be no problem; the process monitor does not show any strange apps or anything like that; the event log has no interesting messages; no open transactions or blocking to see. We already tried the following things without effect: dropped the caches using the statements DBCC FreeProcCache, DBCC FREESYSTEMCACHE('ALL') and DBCC DropCleanbuffers; restarted the app server we are using; restarted the SQL Server service. But nothing helped except restarting the whole server. Any ideas?

    Read the article

  • Object reference not set to an instance of an object.

    - by Lambo
    I have the following code - private static void convert() { string csv = File.ReadAllText("test.csv"); string year = "2008"; XDocument doc = ConvertCsvToXML(csv, new[] { "," }); doc.Save("update.xml"); XmlTextReader reader = new XmlTextReader("update.xml"); XmlDocument testDoc = new XmlDocument(); testDoc.Load(@"update.xml"); XDocument turnip = XDocument.Load("update.xml"); webservice.singleSummary[] test = new webservice.singleSummary[1]; webservice.FinanceFeed CallWebService = new webservice.FinanceFeed(); foreach(XElement el in turnip.Descendants("row")) { test[0].account = el.Descendants("var").Where(x => (string)x.Attribute("name") == "account").SingleOrDefault().Attribute("value").Value; test[0].actual = System.Convert.ToInt32(el.Descendants("var").Where(x => (string)x.Attribute("name") == "actual").SingleOrDefault().Attribute("value").Value); test[0].commitment = System.Convert.ToInt32(el.Descendants("var").Where(x => (string)x.Attribute("name") == "commitment").SingleOrDefault().Attribute("value").Value); test[0].costCentre = el.Descendants("var").Where(x => (string)x.Attribute("name") == "costCentre").SingleOrDefault().Attribute("value").Value; test[0].internalCostCentre = el.Descendants("var").Where(x => (string)x.Attribute("name") == "internalCostCentre").SingleOrDefault().Attribute("value").Value; MessageBox.Show(test[0].account, "Account"); MessageBox.Show(System.Convert.ToString(test[0].actual), "Actual"); MessageBox.Show(System.Convert.ToString(test[0].commitment), "Commitment"); MessageBox.Show(test[0].costCentre, "Cost Centre"); MessageBox.Show(test[0].internalCostCentre, "Internal Cost Centre"); CallWebService.updateFeedStatus(test, year); } It is coming up with the error of - NullReferenceException was unhandled, saying that the object reference not set to an instance of an object. The error occurs on the first line test[0].account. How can I get past this?
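
    The usual cause of this exact error with the posted code (an assumption based only on what's shown) is that new webservice.singleSummary[1] allocates the array but leaves its single element null, so the very first test[0].account assignment dereferences a null element. A minimal sketch of the fix, reusing the names from the post:

        webservice.singleSummary[] test = new webservice.singleSummary[1];
        test[0] = new webservice.singleSummary();   // create the element before setting its fields

        foreach (XElement el in turnip.Descendants("row"))
        {
            test[0].account = el.Descendants("var")
                .Where(x => (string)x.Attribute("name") == "account")
                .SingleOrDefault()
                .Attribute("value").Value;
            // ...populate the remaining fields exactly as in the original code...
            CallWebService.updateFeedStatus(test, year);
        }

    If the exception persists after that, SingleOrDefault() returning null for a missing var element would be the other suspect on the same line.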

    Read the article

  • About conversion of simplexmlobject to array.

    - by Rishi2686
    Hi guys, I tried the way you suggested but I'm really getting stuck on some points. I am getting the array fields, but when it comes to child nodes I go off track. Here I'm giving a single user's SimpleXML object only: SimpleXMLElement Object ( [users] => SimpleXMLElement Object ( [@attributes] => Array ( [type] => array ) [user] => Array ( [0] => SimpleXMLElement Object ( [id] => 1011 [name] => saroj [location] => SimpleXMLElement Object ( ) [description] => SimpleXMLElement Object ( ) [profile_image_url] => http://a5.example.com/profile_images/img_normal.jpg [url] => SimpleXMLElement Object ( ) [created_at] => Fri Apr 16 17:04:13 +0000 2010 [status] => SimpleXMLElement Object ( [created_at] => Wed May 26 02:56:03 +0000 2008 [id] => 1473 [text] => hi enjoying here! [in_reply_to_user_id] => SimpleXMLElement Object ( ) ) ) To get this into an array I am writing the code below: $users = simplexml_load_string($xml); $arr2 = array(); foreach($users->users->user as $user) { $arr1 = array(); foreach($user as $k=>$v) { $arr1[$k] = (string) $v; } $arr2[] = $arr1; } I am getting all the fields as expected in $arr2, except the status field, which contains an array (important: the id & created_at fields are repeated there); for it I just get the key status. While trying different approaches I get some of the fields of the status array too. Please help me with this; I have really been trying for the last two days. Help will be greatly appreciated.

    Read the article

  • How to efficiently compare the sign of two floating-point values while handling negative zeros

    - by François Beaune
    Given two floating-point numbers, I'm looking for an efficient way to check if they have the same sign, given that if any of the two values is zero (+0.0 or -0.0), they should be considered to have the same sign. For instance, SameSign(1.0, 2.0) should return true SameSign(-1.0, -2.0) should return true SameSign(-1.0, 2.0) should return false SameSign(0.0, 1.0) should return true SameSign(0.0, -1.0) should return true SameSign(-0.0, 1.0) should return true SameSign(-0.0, -1.0) should return true A naive but correct implementation of SameSign in C++ would be: bool SameSign(float a, float b) { if (fabs(a) == 0.0f || fabs(b) == 0.0f) return true; return (a >= 0.0f) == (b >= 0.0f); } Assuming the IEEE floating-point model, here's a variant of SameSign that compiles to branchless code (at least with Visual C++ 2008): bool SameSign(float a, float b) { int ia = binary_cast<int>(a); int ib = binary_cast<int>(b); int az = (ia & 0x7FFFFFFF) == 0; int bz = (ib & 0x7FFFFFFF) == 0; int ab = (ia ^ ib) >= 0; return (az | bz | ab) != 0; } with binary_cast defined as follows: template <typename Target, typename Source> inline Target binary_cast(Source s) { union { Source m_source; Target m_target; } u; u.m_source = s; return u.m_target; } I'm looking for two things: A faster, more efficient implementation of SameSign, using bit tricks, FPU tricks or even SSE intrinsics. An efficient extension of SameSign to three values.
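
    For the three-value extension, one correct (if naive) reading of the rules above is that every pair must agree, with zeros matching either sign. A small C# transliteration as an illustrative sketch (not the faster bit-level answer being asked for):

        static class SignCheck
        {
            // Naive but correct: +0.0f and -0.0f both compare equal to zero,
            // so either zero matches any sign.
            public static bool SameSign(float a, float b)
            {
                if (a == 0.0f || b == 0.0f)
                    return true;
                return (a >= 0.0f) == (b >= 0.0f);
            }

            // Three-value extension: every pair must agree (zeros act as wildcards).
            public static bool SameSign(float a, float b, float c)
            {
                return SameSign(a, b) && SameSign(a, c) && SameSign(b, c);
            }
        }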

    Read the article

  • using a 64-bit compiler in microsoft visual c++

    - by Ben
    This question is essentially identical to an earlier question I had that didn't receive any answers. Hopefully someone can help me out this time. I am trying to compile a VC++ project as 64-bit using Visual C++ Express 2010. I know that the 64-bit compiler does not come with the default installation of VC++ Express, so I installed the Windows SDK for Windows 7 as specified here (http://msdn.microsoft.com/en-us/library/9yb4317s.aspx), which includes the 64-bit compiler as I understand it. However, there is still no 64-bit option in the configuration manager for VC++. After some searching I found and completed this tutorial (http://jenshuebel.wordpress.com/2009/02/12/visual-c-2008-express-edition-and-64-bit-targets/) as well as the various links at the bottom of that page. Despite all my efforts, I still cannot get the 64-bit compiler to show in VC++ (i.e. the 64-bit compiler won't show under "active solutions platform" in the configuration manager). If anyone has any experience/tips with getting this to work I would really appreciate it. FYI - I am running Windows 7 (x64).

    Read the article

  • Unhandled Exception error message

    - by Joshua Green
    Does anyone know why including a term such as: t = PL_new_term_ref(); would cause an Unhandled Exception error message: 0xC0000005: Access violation reading location 0x0000000c. (Visual Studio 2008) I have a header file: class UserTaskProlog : public ArAction { public: UserTaskProlog( const char* name = " sth " ); ~UserTaskProlog( ); AREXPORT virtual ArActionDesired *fire( ArActionDesired currentDesired ); private: term_t t; }; and a cpp file: UserTaskProlog::UserTaskProlog( const char* name ) : ArAction( name, " sth " ) { char** argv; argv[ 0 ] = "libpl.dll"; PL_initialise( 1, argv ); PlCall( "consult( 'myProg.pl' )" ); } UserTaskProlog::~UserTaskProlog( ) { } ArActionDesired *UserTaskProlog::fire( ArActionDesired currentDesired ) { cout << " something " << endl; t = PL_new_term_ref( ); } Without t=PL_new_term_ref() everything works fine, but when I start adding my Prolog code (declarations first, such as t=PL_new_term_ref), I get this Access Violation error message. I'd appreciate any help. Thanks,

    Read the article

  • VB.NET: Dialog exits when enter pressed?

    - by Camilo Martin
    Hi all; My problem seems to be quite simple, but it's not working the intuitive way. I'm designing a Windows Forms Application, and there is a dialog that should NOT exit when the enter key is pressed, instead it has to validate data first, in case enter was pressed after changing the text of a ComboBox. I've tried by telling it what to do on KeyPress event of the ComboBox if e is the Enter key: Private Sub ComboBoxSizeChoose_KeyPress(ByVal sender As System.Object, ByVal e As System.Windows.Forms.KeyPressEventArgs) Handles ComboBoxSizeChoose.KeyPress If e.KeyChar = Convert.ToChar(Keys.Enter) Then Try TamanhoDaNovaFonte = Single.Parse(ComboBoxSizeChoose.Text) Catch ex As Exception Dim Dialogo2 As New Dialog2 Dialog2.ShowDialog() ComboBoxSizeChoose.Text = TamanhoDaNovaFonte End Try End If End Sub But no success so far. When the Enter key is pressed, even with the ComboBox on focus, the whole dialog is closed, returning to the previous form. The validation is NOT done at all, and it has to be done before exiting. In fact, I don't even want to exit on the form's enter KeyPress, the only purpose of the enter key on the whole dialog is to validate the ComboBox (but only when in focus, for the sake of an intuitive UI). I've also tried appending the validation to the KeyPress event of the whole dialog's form, if the key is Enter. NO SUCCESS! It's like my code wasn't there at all. What should I do? (Visual Studio 2008, VB.NET)
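
    One pattern that usually works for this (an assumption, not a confirmed fix for this exact form) is to intercept Enter before the dialog's AcceptButton ever sees it, by overriding ProcessDialogKey on the form. The sketch below is C# WinForms, but the same members exist in VB.NET; ValidateFontSize is a hypothetical helper wrapping the Try/Catch validation shown above:

        // Inside the dialog's form class:
        protected override bool ProcessDialogKey(Keys keyData)
        {
            // Only intercept Enter while the ComboBox has focus.
            if (keyData == Keys.Enter && ComboBoxSizeChoose.Focused)
            {
                ValidateFontSize();   // hypothetical helper doing the parse/validate work
                return true;          // swallow the key so the dialog does not close
            }
            return base.ProcessDialogKey(keyData);
        }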

    Read the article

  • mercurial fails with a file named ---.config - any way around this?

    - by Travis Laborde
    We are just beginning to learn and evaluate Mercurial, due to an increasing number of nightmare merges, and various other problems we've had with SVN lately. A client wants us to pull down a live copy of their site, do some SEO work on it, and push it back to them. They have no source control at all. I figure this is a great project to work on with Mercurial. Instead of putting it into our SVN and exporting when we are done, we'll use Mercurial... But right away it seems I have some problem :) They have a file called "---.config" (without quotes) which seems to cause our Mercurial to barf. It just can't commit that file. I've created the repo and committed everything else, but I just can't get this one file committed. We are running on Windows 2008 x64 with TortoiseHG 1.0. I suppose I could ignore the file since it is unlikely we'll need to work with it, but still - I'd like to learn how to use Mercurial a bit better. Is there a way around this?

    Read the article

  • AnyCPU/x86/x64 for C# application and it's C++/CLI dependency

    - by Soonts
    I'm a Windows developer using Microsoft Visual Studio 2008 SP1. My developer machine is 64-bit. The software I'm currently working on is a managed .exe written in C#. Unfortunately, I was unable to solve the whole problem solely in C#. That's why I also developed a small managed DLL in C++/CLI. Both projects are in the same solution. My C# .exe build target is "Any CPU". When my C++ DLL build target is "x86", the DLL is not loaded. As far as I understood from googling, the reason is that C++/CLI, unlike other .NET languages, compiles to native code, not managed code. I switched the C++ DLL build target to x64, and everything works now. However, AFAIK everything will stop working as soon as my client installs my product on a 32-bit OS. I have to support Windows Vista and 7, both the 32-bit and 64-bit versions of each of them. I don't want to fall back to 32 bits. Those 250 lines of C++ code in my DLL are only 2% of my codebase. And that DLL is only used in several places, so in the typical usage scenario it's not even loaded. My DLL implements two COM objects with ATL, so I can't use the "/clr:safe" project setting. Is there a way to configure the solution and the projects so that the C# project builds an "Any CPU" version, the C++ project builds both 32-bit and 64-bit versions, and then at runtime, when the managed .EXE is starting up, it uses either the 32-bit DLL or the 64-bit DLL depending on the OS? Or maybe there's some better solution I'm not aware of? Thanks in advance!
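
    One common arrangement for exactly this situation (a sketch under the assumption that the C++/CLI wrapper is built twice and shipped in x86/x64 subfolders; the assembly and folder names here are hypothetical): keep the EXE "Any CPU" and resolve the correct mixed-mode DLL at runtime based on the pointer size of the running process.

        using System;
        using System.IO;
        using System.Reflection;

        static class NativeWrapperLoader
        {
            // Call this before any type from the C++/CLI assembly is touched.
            public static void Install()
            {
                AppDomain.CurrentDomain.AssemblyResolve += delegate(object sender, ResolveEventArgs args)
                {
                    if (!args.Name.StartsWith("MyCppCliWrapper"))   // hypothetical assembly name
                        return null;

                    string arch = (IntPtr.Size == 8) ? "x64" : "x86";
                    string path = Path.Combine(
                        AppDomain.CurrentDomain.BaseDirectory,
                        Path.Combine(arch, "MyCppCliWrapper.dll"));

                    return Assembly.LoadFile(path);
                };
            }
        }

    The usual caveat is that Install must run before the JIT touches any method that references wrapper types, so keep the wrapper calls behind a separate method.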

    Read the article

  • IIS Flash Remoting Request Limit introduced with Coldfusion 9

    - by ciaranarcher
    Hi all. We've just been upgrading our ColdFusion servers from Enterprise CF 8.01 to CF 9. They are running Win 2008. We ran into trouble on those servers that provide the Flash Remoting back-end for a Flex application we provide. Once the CF 9 upgrade was complete we noticed that during busy times, when many Flex clients were connecting, we appeared to have a hard limit of 25 Flash Remoting requests running, despite having much higher limits (in fact 150) set in CF Admin. Initially we thought that this was an issue with the fact that BlazeDS is now bundled with CF 9 (rather than being a separate install), so we decided to roll back the CF 9 installation. This, unfortunately, didn't work and we were still stuck with our hard limit of 25 Flash Remoting requests. Then, looking at IIS, we noticed that the CF 9 ISAPI filter was still installed (even after we had run the Web Service Configuration part of the install). That was removed and the CF 8 one was re-run, and all of a sudden the Flash Remoting hard limit disappeared. So it seems that it might have had something to do with the wsconfig of CF 9 (C:\ColdFusion9\runtime\bin\wsconfig.exe). Has anyone else had this problem, or does anybody know where these hard limits are configured in IIS? Any and all help appreciated!

    Read the article

  • Basic Team Foundation Server 2010 Question - System Resource Usage?

    - by user127954
    Guys / gals, I have a real basic Team Foundation Server 2010 question. For those of you who have played around with TFS 2010: is it a lot more lightweight than TFS 2008? I remember installing all the pieces needed for TFS 2008 on one machine at work. I remember it being a pain to install (I know 2010 is supposed to be much better). We wanted to play around with it a little bit to see if it met our needs. Well, it brought that machine to a screeching halt. I need a source control repository for home and I thought, why not just install TFS 2010 so I can get familiar with it, and maybe in the future I can make a better sell to my organization and FINALLY get them to move off of SourceSafe. But my concern is that I only have one server at home (granted, I already have SQL Server installed) and I don't want to buy a machine just for this purpose. I'd also like to get more familiar with CI too. Anyway, if TFS is going to be too heavy I'll just use Subversion, but I'd like to use TFS if possible. Any help would be appreciated. Thanks, Ncage

    Read the article

  • c++ callback syntax in a class

    - by Mr Bell
    I am trying to figure out the syntax to register a callback with this 3rd party software. I think it is probably a basic question, I just am not too familiar with c++. They have a method for registering a callback function so their code can call a function in my code when an event happens. They provided a working example that registers the callback from the main file, but I want to know how to do it when working inside a class Their method signature: smHTRegisterHeadPoseCallback(smEngineHandle engine_handle, void *user_data, smHTHeadPoseCallback callback_fun); Working example from the main file: void STDCALL receiveHeadPose(void *,smEngineHeadPoseData head_pose, smCameraVideoFrame video_frame) { ... } void main() { ... smHTRegisterHeadPoseCallback(engine_handle,0,receiveHeadPose) ... } But I want to use this from my class MyClass.h class FaceEngine { public: void STDCALL receiveFaceData(void *, smEngineFaceData face_data, smCameraVideoFrame video_frame); ... MyClass.cpp void FaceEngine::Start(void) { rc = smHTRegisterFaceDataCallback(hFaceAPIEngine,0,&FaceEngine::receiveFaceData); ... Results in this compiler error: Error 1 error C2664: 'smHTRegisterFaceDataCallback' : cannot convert parameter 3 from 'void (__stdcall FaceEngine::* )(void *,smEngineFaceData,smCameraVideoFrame)' to 'smHTFaceDataCallback' d:\stuff\programming\visual studio 2008\projects\tut02_vertices\faceengine.cpp 43 Beard If my question isn't clear please let me know how I can clarify.

    Read the article

  • SQLDeveloper using over 100MB of PGA+UGA

    - by Leigh Riffel
    Perhaps this is normal, but in my Oracle 11g database I am seeing programmers using Oracle's SQL Developer regularly consume more than 100MB of combined UGA and PGA memory. I'd like to know if this is normal and what can be done about it. Our database is on the 32 bit version of Windows 2008, so memory limitations are becoming an increasing concern. I am using the following query to show the memory usage: SELECT e.SID, e.username, e.status, b.PGA_MEMORY FROM v$session e LEFT JOIN (select y.SID, y.value pga, TO_CHAR(ROUND(y.value/1024/1024),99999999) || ' MB' PGA_MEMORY from v$sesstat y, v$statname z where y.STATISTIC# = z.STATISTIC# and NAME = 'session pga memory') b ON e.sid=b.sid WHERE (PGA)/1024/1024 > 20 ORDER BY 4 DESC; It seems that the resource usage goes up any time a table is opened in SQLDeveloper, but even when it is closed the memory does not go away. The problem is worse if the table is sorted while it was open as that seems to use even more memory. I understand how this would use memory while it is sorting, and perhaps even while it is still open, but to use memory after it is closed seems wrong to me. Can anyone confirm this? Update: I discovered that my numbers were off due to not understanding that the UGA is stored in the PGA under dedicated server mode. This makes the numbers lower than they were, but the problem still remains that SQL Developer seems to use excessive PGA.

    Read the article

  • Not able to connect to TFS Server from TFS Proxy

    - by GV India
    In our office we have set up TFS for project development. The TFS server is a Win 2003 Server SP2 machine with VSTFS 2008 and is running fine. Now we need to set up a TFS proxy server on the client site for the client to access. Before going for the client setup, I wanted to build and test the proxy in our office on a dummy server (I'll call it the Proxy server from here on) by keeping it on a different domain. The OS configuration of the Proxy server is the same as the TFS server. I have installed and configured the TFS proxy on the Proxy server to connect to the TFS server. Also, we have built trust between the two different domains to enable communication. Now the problem is that I am not able to connect to the TFS server at all. I am trying to connect from Internet Explorer on the proxy server using the proxy service account. It gives me the error: The page cannot be displayed. HTTP 500 - Internal server error. The page I was browsing was http://tfs:8080/VersionControl/v1.0/ProxyStatistics.asmx. I think I have done all the required steps correctly to configure the proxy as described in MSDN and also in the TFS installation guide. Here the proxy service account is a member of the 'Team Foundation Valid Users' group. I am able to connect to the TFS server (specifying the port) using Telnet from the command prompt on the proxy server, as suggested by a few sites. The TFS server web sites have been configured to use Integrated Windows Authentication. The event logs on both servers are also not giving any errors. Overall, I'm not able to get it done. Any ideas on what might be the problem?

    Read the article

  • Users Hierarchy Logic

    - by user342944
    Hi guys, I am writing a user security module using SQL Server 2008 and therefore need to design a database accordingly. Formerly I had a UserInfo table with UserID, Username and ParentID to build the recursion, and populated a tree to represent the hierarchy, but now I have the following criteria to implement. I now have USERS, ADMINISTRATORS and GROUPS. Each node in the user hierarchy is either a user, an administrator or a group. User: someone who has login access to my application. Administrator: a user who may also manage all their child user accounts (and their children etc). This may include creating new users and assigning permissions to those users. There is no limit to the number of administrators in the user structure. The higher up in the hierarchy I go, the more child accounts administrators have to manage, which include other child administrators. Group: a user account can be designated as a group. This will be an account which is used to group one or more users together so that they can be managed as a unit. But no one can log in to my application using a group account. This is the structure I want to create: a Super Administrator (administrator) at the top, with three children, Manager A, Manager B and Manager C (all administrators); one of the managers has three children of its own, Employee A (User), Employee B (User) and Sales Employees (Group); the Sales Employees group in turn contains Emp C, Emp D and Emp E (Users). Now, how do I build the table structure to achieve this? Do I need to create a Users table along with a Group table, or what? Please guide me; I would really appreciate it.
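
    One possible shape for the data (an assumption, not a confirmed design): keep a single self-referencing table like the existing UserInfo one and add a node-type discriminator, so users, administrators and groups all live in one hierarchy. Sketched here as the C# type such a table would map to:

        // Discriminator stored as a column (e.g. tinyint) on the single hierarchy table.
        public enum AccountType
        {
            User = 0,
            Administrator = 1,
            Group = 2            // cannot log in; only used to group child accounts
        }

        // One row per node; ParentID is null for the super administrator.
        public class AccountNode
        {
            public int UserID;
            public string Username;
            public int? ParentID;
            public AccountType Type;
        }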

    Read the article

  • table subtraction challenge

    - by Valentin
    I have a challenge that I haven't overcome in the last two days using stored procedures and SQL 2008. I took several approaches but all fell short. One very interesting approach was using a table subtraction. It's really all about table subtraction. I was wondering if you could help me crack this one. Here is the challenge: two tables, 1Testdb and 2Testdb. My first step was to select the ID relationships ([2Testdb].Acc_id) in table 2Testdb for one given individual ([2Testdb].Bus_id). Then query table 1Testdb for records not matching my original selection from 2Testdb. But other approaches are welcome. Data and structures: USE [Challengedb] GO SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE TABLE [dbo].[1Testdb]( [Acc_id] [uniqueidentifier] NULL, [Name] [varchar](10) NULL ) ON [PRIMARY] GO CREATE TABLE [dbo].[2Testdb]( [Acc_id] [uniqueidentifier] NULL, [Bus_id] [uniqueidentifier] NULL ) ON [PRIMARY] GO Records in 1Testdb: 34455F60-9474-4521-804E-66DB39A579F3, John C23523F6-2309-4F58-BB3F-EF7486C7AF8B, Pete DC711615-3BE4-4B31-9EF2-B1314185CA62, Dave E3AAB073-2398-476D-828B-92829F686A4C, Adam Records in 2Testdb (relationship table, e.g. friend relationships): Record #1: DC711615-3BE4-4B31-9EF2-B1314185CA62, 34455F60-9474-4521-804E-66DB39A579F3 Record #2: E3AAB073-2398-476D-828B-92829F686A4C, 34455F60-9474-4521-804E-66DB39A579F3 Record #3: DC711615-3BE4-4B31-9EF2-B1314185CA62, E3AAB073-2398-476D-828B-92829F686A4C Record #4: E3AAB073-2398-476D-828B-92829F686A4C, DC711615-3BE4-4B31-9EF2-B1314185CA62 Challenge: select from table 1Testdb only those distinct records that do not have a relationship with John [34455F60-9474-4521-804E-66DB39A579F3] in table 2Testdb. Expected result (who does John not have a relationship with?): C23523F6-2309-4F58-BB3F-EF7486C7AF8B, Pete Thank you, Valentin
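
    One way to express the subtraction (a sketch that reuses the posted schema and John's ID; the connection string is a placeholder): take everyone in 1Testdb except John himself and except anyone that 2Testdb links to John.

        using System;
        using System.Data.SqlClient;

        static class TableSubtraction
        {
            const string Sql = @"
                SELECT t1.Acc_id, t1.Name
                FROM dbo.[1Testdb] AS t1
                WHERE t1.Acc_id <> @person
                  AND NOT EXISTS (SELECT 1
                                  FROM dbo.[2Testdb] AS t2
                                  WHERE t2.Bus_id = @person
                                    AND t2.Acc_id = t1.Acc_id);";

            public static void Run(string connectionString)
            {
                Guid john = new Guid("34455F60-9474-4521-804E-66DB39A579F3");
                using (SqlConnection conn = new SqlConnection(connectionString))
                using (SqlCommand cmd = new SqlCommand(Sql, conn))
                {
                    cmd.Parameters.AddWithValue("@person", john);
                    conn.Open();
                    using (SqlDataReader reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            Console.WriteLine("{0}, {1}", reader["Acc_id"], reader["Name"]);   // expect only Pete
                    }
                }
            }
        }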

    Read the article

  • SQL-Server: Is there an equivalent of a trigger for general stored procedure execution

    - by Arj
    Hi All, hope you can help. Is there a way to reliably detect when a stored proc is being run on SQL Server without altering the SP itself? Here's the requirement. We need to track users running reports from our enterprise data warehouse, as the core product we use doesn't allow for this. Both core product reports and a slew of in-house ones we've added all return their data from individual stored procs. We don't have a practical way of altering the parts of the product webpages where reports are called from. We also can't change the stored procs for the core product reports. (It would be trivial to add a logging line to the start/end of each of our in-house ones.) What I'm trying to find, therefore, is whether there's a way in SQL Server (2005 / 2008) to execute a logging stored proc whenever any other stored procedure runs, without altering those stored procedures themselves. We have general control over the SQL Server instance itself as it's local; we just don't want to change the product stored procs themselves. Anyone have any ideas? Is there a kind of "stored proc executing trigger"? Is there an event model for SQL Server that we can hook custom .Net code into? (Just to discount it from the start, we want to try and make a change to SQL Server rather than get into capturing the report being run from the product's webpages etc.) Thoughts appreciated. Thanks

    Read the article

  • HELP ME my dataset xsd content HAS GONE

    - by Mustafa Magdy
    I'm working on an ERP project using Visual Studio 2008 SP1. I have a typed dataset; it contained a lot of DataTables and a lot of TableAdapters, and the .Designer.cs file was 8 MB. Suddenly, when I try to open it using the Visual Studio designer, the following content is all that comes up: <?xml version="1.0" encoding="utf-8"?> <xs:schema id="erpDataSet" targetNamespace="http://tempuri.org/erpDataSet1.xsd" xmlns:mstns="http://tempuri.org/erpDataSet1.xsd" xmlns="http://tempuri.org/erpDataSet1.xsd" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:msdata="urn:schemas-microsoft-com:xml-msdata" xmlns:msprop="urn:schemas-microsoft-com:xml-msprop" attributeFormDefault="qualified" elementFormDefault="qualified"> <xs:annotation> <xs:appinfo source="urn:schemas-microsoft-com:xml-msdatasource"> <DataSource DefaultConnectionIndex="0" FunctionsComponentName="QueriesTableAdapter" Modifier="AutoLayout, AnsiClass, Class, Public" SchemaSerializationMode="IncludeSchema" xmlns="urn:schemas-microsoft-com:xml-msdatasource"> <Connections> <Connection AppSettingsObjectName="Settings" AppSettingsPropertyName="erpConnectionString" IsAppSettingsProperty="true" Modifier="Assembly" Name="erpConnectionString (Settings)" ParameterPrefix="@" PropertyReference="ApplicationSettings.Sbic.Pro My XSD file content has gone. :( :( :( I don't understand why, and how can I recover it? I did make one change, but I don't know whether it is the reason or not: my connection string was in the settings (app.config); I removed it and added it to the resources of another project that is referenced by the main project. What can I do? Please help me.

    Read the article

  • 'LINQ query plan' horribly inefficient but 'Query Analyser query plan' is perfect for same SQL!

    - by Simon_Weaver
    I have a LINQ to SQL query that generates the following SQL: exec sp_executesql N'SELECT COUNT(*) AS [value] FROM [dbo].[SessionVisit] AS [t0] WHERE ([t0].[VisitedStore] = @p0) AND (NOT ([t0].[Bot] = 1)) AND ([t0].[SessionDate] > @p1)',N'@p0 int,@p1 datetime', @p0=1,@p1='2010-02-15 01:24:00' (This is the actual SQL taken from SQL Profiler on SQL Server 2008.) The query plan generated when I run this SQL from within Query Analyser is perfect. It uses an index containing VisitedStore, Bot, SessionDate. The query returns instantly. However, when I run this from C# (with LINQ) a different query plan is used that is so inefficient it doesn't even return in 60 seconds. This query plan is trying to do a key lookup on the clustered primary key, which contains a couple million rows. It has no chance of returning. What I just can't understand, though, is that the EXACT same SQL is being run - either from within LINQ or from within Query Analyser - yet the query plan is different. I've run the two queries many, many times and they're now running in isolation from any other queries. The date is DateTime.Now.AddDays(-7), but I've even hardcoded that date to eliminate caching problems. Is there anything I can change in LINQ to SQL to affect the query plan or try to debug this further? I'm very, very confused!
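
    This symptom (identical SQL, different plan, slow only from the application) often comes down to the application and Query Analyser running with different SET options, so they get separate cached plans, combined with parameter sniffing when the application's plan is compiled. One hedged workaround sketch, assuming a LINQ to SQL DataContext for the same database: issue the same statement through ExecuteQuery with OPTION (RECOMPILE) so a plan is compiled for the actual values.

        using System;
        using System.Data.Linq;
        using System.Linq;

        static class VisitCounter
        {
            // db is any LINQ to SQL DataContext for the same database (assumed);
            // {0}/{1} become parameters, and OPTION (RECOMPILE) makes SQL Server
            // compile a plan for these exact values instead of reusing a generic one.
            public static int CountRecentVisits(DataContext db, int visitedStore)
            {
                return db.ExecuteQuery<int>(
                    @"SELECT COUNT(*) AS [value]
                      FROM dbo.SessionVisit
                      WHERE VisitedStore = {0}
                        AND NOT (Bot = 1)
                        AND SessionDate > {1}
                      OPTION (RECOMPILE)",
                    visitedStore, DateTime.Now.AddDays(-7)).Single();
            }
        }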

    Read the article

  • how to change ASP.NET Configuration tool connection string

    - by Zviadi
    Hello, how can I change the ASP.NET Configuration tool's connection string name? (That is, which connection string will the ASP.NET Configuration tool use?) I'm learning ASP.NET, and everywhere, including the book I'm reading now, there's a connection string named LocalSqlServer. I want to use my local SQL Server database instead of SQL Express to store Roles, Membership and other data. I have used aspnet_regsql.exe to create the needed data structures in my database. After that I changed my web.config to look like: <connectionStrings> <remove name="LocalSqlServer"/> <add name="LocalSqlServer" connectionString="Server=(LOCAL); Database=MyDatabase;Integrated Security=True" providerName="System.Data.SqlClient" /> </connectionStrings> but when I run the ASP.NET Configuration tool it says: "The connection name 'ApplicationServices' was not found in the applications configuration or the connection string is empty." The ASP.NET Configuration tool uses the connection string named ApplicationServices, not LocalSqlServer. Because of that I have to modify web.config to: <connectionStrings> <add name="ApplicationServices" connectionString="Server=(LOCAL); Database=MyDatabase;Integrated Security=True" providerName="System.Data.SqlClient" /> </connectionStrings> and everything works fine. I wish to know why the hell my web site uses a connection string named ApplicationServices while all the books and online documentation use LocalSqlServer, and how to change it to LocalSqlServer. I have: Windows 7, SQL Server 2008 R2, Visual Studio 2010 Premium. Project type is website.

    Read the article

  • Have you had DLL's fail after upgrading to 64 bit server?

    - by quakkels
    Hey All, I'm wondering if anyone else has experienced failed DLLs after upgrading their servers. My company is in the process of upgrading our code and servers after ten years of using classic ASP. We've set up our new server running Windows 2008 and IIS 7. Our classic ASP code and our new ASP.NET MVC code work pretty well. Our problems started happening when we began moving our old websites to the new server. When trying to load the page on the actual server machine's browser, we initially got a 500 error. If we refreshed the page then some of the page would load but then display an error: Server object error 'ASP 0177 : 800401f3' Server.CreateObject Failed /folder/scriptname.asp, line 24 800401f3 btw: On remote machines we would just get 500 errors. Line 24 is the first executable code in the script: '23 lines of comments set A0SQL_DATA = server.createobject("olddllname.Data") 'the rest of the script That specific line is trying to use a ten-year-old DLL to create a server object. I don't think the server configuration is a problem because I'm able to create "adodb.recordset" server objects without any problems. Is there an issue when running correctly registered old DLLs on 64-bit systems? Is there a way to get old DLLs working on 64-bit systems?

    Read the article
