Search Results

Search found 90468 results on 3619 pages for 'server architecture'.


  • Is there an established or defined best practice for source control branching between development and production builds?

    - by Matthew Patrick Cashatt
    Thanks for looking. I struggled with how to phrase my question, so let me give an example in hopes of making clearer what I am after: I currently work on a dev team responsible for maintaining and adding features to a web application. We have a development server and we use source control (TFS). Each day everyone checks in their code, and when the code (running on the dev server) passes our QA/QC process, it goes to production. Recently, however, we had a bug in production which required an immediate production fix. The problem was that several of us developers had code checked in that was not ready for production, so we had to either quickly complete and QA the code, or roll back everything, undo pending changes, etc. In other words, it was a mess. This made me wonder: is there an established design pattern that prevents this type of scenario? It seems like there must be some "textbook" answer to this, but I am unsure what that would be. Perhaps a development branch of the code and a "release-ready" or production branch of the code?
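    A hedged sketch of the "textbook" layout the question is reaching for: a permanently release-ready MAIN branch plus a DEV branch for daily check-ins (the branch names and hotfix flow are illustrative assumptions, not a TFS prescription):

        $/WebApp
            MAIN    <- always shippable; production builds and emergency fixes come from here
            DEV     <- daily developer check-ins; merged to MAIN only after QA/QC sign-off

        Hotfix flow: fix on MAIN (or a release branch cut from it), deploy, then merge
        MAIN back into DEV so the fix survives the next promotion.

    With that layout, half-finished work lives in DEV and never has to be rolled back to ship an emergency production fix.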


  • Bi-directional WCF Client-Server Communication

    - by Bill
    I have been working for weeks on creating a client/server to control a music-server application located on the server-side that is controlled by several client apps located across the LAN. I've been successful in getting the client-side to communicate with the Server, sending commands to operate the music-server, and through the use of callbacks, reply to the clients so that all of the client UI's can be appropriately updated. My problem is however, that I unable to figure-out how to broadcast other messages that need to be sent from the server app to the clients. I was hoping to utilize the callback method; however I have not been able to access it from the server side. Do I need to modify or create another contract that provides for communication from the server to the clients? Does the binding require modification? As I mentioned earlier, I have truly been working on this for weeks (which is beginning to feel like 'years'), and hope to get this last piece of the application working. Would someone please steer me in the right direction? Client Side SERVICE REFERENCE: <?xml version="1.0" encoding="utf-8"?> <ServiceReference> <ProxyGenerationParameters ServiceReferenceUri="http://localhost:8001/APService/mex" Name="APGateway" NotifyPropertyChange="True" UseObservableCollection="False"> </ProxyGenerationParameters> <EndPoints> <EndPoint Address="net.tcp://localhost:8000/APService/service" BindingConfiguration="TcpBinding" Contract="APClient.APGateway.APUserService" > </EndPoint> <EndPoint Address="http://localhost:8001/APService/service" BindingConfiguration="HttpBinding" Contract="APClient.APGateway.APUserService" > </EndPoint> </EndPoints> </ServiceReference> Client Side AP CONFIG <?xml version="1.0" encoding="utf-8" ?> <configuration> <configSections> <sectionGroup name="applicationSettings" type="System.Configuration.ApplicationSettingsGroup, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" > <section name="APClient.Properties.Settings" type="System.Configuration.ClientSettingsSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" /> </sectionGroup> </configSections> <system.serviceModel> <client> <endpoint address="net.tcp://localhost:8000/APService/service" binding="netTcpBinding" contract="APClient.APGateway.APUserService" name="TcpBinding" /> <endpoint address="http://localhost:8001/APService/service" binding="wsDualHttpBinding" contract="APClient.APGateway.APUserService" name="HttpBinding" /> </client> </system.serviceModel> <applicationSettings> <APClient.Properties.Settings> <setting name="pathToDatabase" serializeAs="String"> <value>C:\Users\Bill\Documents\APData\</value> </setting> </APClient.Properties.Settings> </applicationSettings> Server Side AP.CONFIG <?xml version="1.0" encoding="utf-8" ?> <configuration> <system.serviceModel> <behaviors> <serviceBehaviors> <behavior name="MetadataBehavior"> <serviceMetadata httpGetEnabled="true" httpGetUrl="http://localhost:8001/APService/mex" /> </behavior> </serviceBehaviors> </behaviors> <services> <service behaviorConfiguration="MetadataBehavior" name="APService.APService"> <endpoint address="service" binding="netTcpBinding" name="TcpBinding" contract="APService.IAPServiceInventory" /> <endpoint address="service" binding="wsDualHttpBinding" name="HttpBinding" contract="APService.IAPServiceInventory" /> <endpoint address="mex" binding="mexHttpBinding" name="MexBinding" contract="IMetadataExchange" /> <host> <baseAddresses> <add 
baseAddress="net.tcp://localhost:8000/APService/" /> <add baseAddress="http://localhost:8001/APService/" /> </baseAddresses> </host> </service> </services> </system.serviceModel> </configuration> Server Side APSERVICE.CS namespace APService { [ServiceBehavior(ConcurrencyMode=ConcurrencyMode.Single,InstanceContextMode=InstanceContextMode.PerCall)] public class APService : IAPServiceInventory { private static List<IClientCallback> _callbackList = new List<IClientCallback>(); private static int _beerInventory = Convert.ToInt32(System.Configuration.ConfigurationManager.AppSettings["InitialBeerInventory"]); public APService() {} public int SubscribeToServer(string guestName) { IClientCallback guest = OperationContext.Current.GetCallbackChannel<IClientCallback>(); if(!_callbackList.Contains(guest)) { _callbackList.Add(guest); } else { Console.WriteLine(guest + " is already logged onto the Server."); } _callbackList.ForEach(delegate(IClientCallback callback) { callback.NotifyGuestJoinedParty(guestName); }); } public void UpdateClients(string guestName,string UpdateInfo) { _callbackList.ForEach(delegate(IClientCallback callback) { callback.NotifyUpdateClients(guestName,UpdateInfo); }); } public void SendRequestToServer(string guestName, string request) { _callbackList.ForEach(delegate(IClientCallback callback) { callback.NotifyRequestMadeToServer(guestName,request); }); if(request == "Play") { APControl.Play(); } else if(request == "Stop") { APControl.Stop(); } else if(request == "Pause") { APControl.PlayPause(); } else if(request == "Next Track") { APControl.NextTrack(); } else if(request == "Previous Track") { APControl.PreviousTrack(); } else if(request == "Mute") { APControl.Mute(); } else if(request == "Volume Up") { APControl.VolumeUp(5); } else if(request == "Volume Down") { APControl.VolumeDown(5); } } public void CancelServerSubscription(string guestName) { IClientCallback guest = OperationContext.Current.GetCallbackChannel<IClientCallback>(); if(_callbackList.Contains(guest)) { _callbackList.Remove(guest); } _callbackList.ForEach(delegate(IClientCallback callback) { callback.NotifyGuestLeftParty(guestName); }); } } Server Side IAPSERVICE.CS namespace APService { [ServiceContract(Name="APUserService",Namespace="http://AP.com/WCFClientServer/",SessionMode=SessionMode.Required, CallbackContract=typeof(IClientCallback))] public interface IAPServiceInventory { [OperationContract()] int SubscribeToServer(string guestName); [OperationContract(IsOneWay=true)] void SendRequestToServer(string guestName,string request); [OperationContract(IsOneWay=true)] void UpdateClients(string guestName,string UpdateInfo); [OperationContract(IsOneWay=true)] void CancelServerSubscription(string guestName); } } Server side - IAPServiceCallback.cs namespace APService { public interface IClientCallback { [OperationContract(IsOneWay=true)] void NotifyGuestJoinedParty(string guestName); [OperationContract(IsOneWay=true)] void NotifyUpdateClients(string guestName,string UpdateInfo); [OperationContract(IsOneWay=true)] void NotifyRequestMadeToServer(string guestName,string request); [OperationContract(IsOneWay=true)] void NotifyGuestLeftParty(string guestName); }


  • iOS chat application design, sending/relaying the message over to the end user

    - by AyBayBay
    I have a design question. Let us say you were tasked with building a chat application, specifically for iOS (iOS Chat Application). For simplicity let us say you can only chat with one person at a time (no group chat functionality). How then can you achieve sending a message directly to an end user from phone A to phone B? Obviously there is a web service layer with some API calls. One of the API calls available will be startChat(). After starting a chat, when you send a message, you make another async call, let us call it sendMessage() and pass in a string with your message. Once it goes to the web service layer, the message gets stored in a database. Here is where I am currently stuck. After the message gets sent to the web service layer, how do we then achieve sending/relaying the message over to the end user? Should the web server send out a message to the end user and notify them, or should each client call a receiveMessage() method periodically, and if the server side has some info for them it can then respond with that info? Finally, how can we handle the case in which the user you are trying to send a message to is offline? How can we make sure the end user gets the packet when he moves back to an area with signal?
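    For what it's worth, the usual shape of the answer is: persist every message first, push it immediately if the recipient has an open connection (a socket or long-poll rather than tight periodic polling, which burns battery and bandwidth), and otherwise fall back to a push notification (APNs on iOS) plus fetch-on-reconnect. A hedged server-side sketch (all type and member names are illustrative assumptions):

        using System.Collections.Generic;

        // Illustrative relay logic; a real store would be a database table.
        public interface IClientConnection { void Deliver(string fromUser, string body); }

        public class ChatRelay
        {
            private readonly Dictionary<string, IClientConnection> online = new Dictionary<string, IClientConnection>();
            private readonly List<string[]> undelivered = new List<string[]>();

            public void SendMessage(string fromUser, string toUser, string body)
            {
                undelivered.Add(new[] { fromUser, toUser, body }); // persist before anything else

                IClientConnection conn;
                if (online.TryGetValue(toUser, out conn))
                {
                    conn.Deliver(fromUser, body); // recipient connected: push now
                }
                // else: recipient offline. Fire an APNs-style notification to wake the
                // device; the queued rows are fetched when the client reconnects, which
                // also covers the "back in signal range" case in the question.
            }
        }

    The store-then-notify ordering is what makes delivery reliable: the push is only a hint, and the fetch on reconnect is the source of truth.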


  • Why Do I See the "In Recovery" Msg, and How Can I Prevent it?

    - by John Hansen
    The project I'm working on creates a local copy of the SQL Server database for each SVN branch you work on. We're running SQL Server 2008 Express with Advanced Services on our local machines to host it. When we create a new branch, the build script creates a new database with the ID of that branch, creates the schema objects, and copies over a selection of data from the production shadow server. After the database is created, it, or other databases on the local machine, will often go into "In Recovery" mode for several minutes. After several refreshes it comes up and is happy, but will occasionally go back into "In Recovery" mode.

    The database is created in simple recovery mode. The file names aren't specified, so it uses default paths for files. The size of the database after loading data is ~400 megs, and it runs in SQL Server 2005 compatibility mode. The command that creates the database is:

        sqlcmd -S $(DBServer) -Q "IF NOT EXISTS (SELECT [name] FROM sysdatabases WHERE [name] = '$(DBName)') BEGIN CREATE DATABASE [$(DBName)]; print 'Created $(DBName)'; END"

    ...where $(DBName) and $(DBServer) are MSBuild parameters.

    I got a nice clean log file this morning. When I turned on my computer it started all five databases; however, two of them show transactions being rolled forward and backward. Then it just keeps trying to start up all five of the databases:

        2010-06-10 08:24:59.74 spid52 Starting up database 'ASPState'.
        2010-06-10 08:24:59.82 spid52 Starting up database 'CommunityLibrary'.
        2010-06-10 08:25:03.97 spid52 Starting up database 'DLG-R8441'.
        2010-06-10 08:25:05.07 spid52 2 transactions rolled forward in database 'DLG-R8441' (6). This is an informational message only. No user action is required.
        2010-06-10 08:25:05.14 spid52 0 transactions rolled back in database 'DLG-R8441' (6). This is an informational message only. No user action is required.
        2010-06-10 08:25:05.14 spid52 Recovery is writing a checkpoint in database 'DLG-R8441' (6). This is an informational message only. No user action is required.
        2010-06-10 08:25:11.23 spid52 Starting up database 'DLG-R8979'.
        2010-06-10 08:25:12.31 spid36s Starting up database 'DLG-R8441'.
        2010-06-10 08:25:13.17 spid52 2 transactions rolled forward in database 'DLG-R8979' (9). This is an informational message only. No user action is required.
        2010-06-10 08:25:13.22 spid52 0 transactions rolled back in database 'DLG-R8979' (9). This is an informational message only. No user action is required.
        2010-06-10 08:25:13.22 spid52 Recovery is writing a checkpoint in database 'DLG-R8979' (9). This is an informational message only. No user action is required.
        2010-06-10 08:25:18.43 spid52 Starting up database 'Rls QA'.
        2010-06-10 08:25:19.13 spid46s Starting up database 'DLG-R8979'.
        2010-06-10 08:25:23.29 spid36s Starting up database 'DLG-R8441'.
        2010-06-10 08:25:27.91 spid52 Starting up database 'ASPState'.
        2010-06-10 08:25:29.80 spid41s Starting up database 'DLG-R8979'.
        2010-06-10 08:25:31.22 spid52 Starting up database 'Rls QA'.

    In this case it kept trying to start the databases continuously until I shut down SQL Server at 08:48:19.72, 23 minutes later. Meanwhile, I actually am able to use the databases much of the time.
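    One thing worth ruling out: endlessly repeating "Starting up database" lines are the classic signature of AUTO_CLOSE, which is ON by default for databases created under SQL Server Express and shuts a database down whenever its last connection closes, re-running recovery on the next connection. A hedged check and fix (the database name is copied from the log above; verify against your own instance first):

        -- Which databases have AUTO_CLOSE on, and what state are they in?
        SELECT name, is_auto_close_on, state_desc FROM sys.databases;

        -- Turn it off for an affected database
        ALTER DATABASE [DLG-R8441] SET AUTO_CLOSE OFF;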


  • Server Admin is not allowing me to configure DNS

    - by Clinton Blackmore
    We have a Mac OS X 10.5.8 Server running DNS (and a few other services). When I connect to it (using Server Admin 10.5.3, which comes from the Server Admin 10.5.7 tools) and click to look at the DNS settings, all appears normal -- it shows many reverse entries and two top-level domains. However, when I select one of our domains and open the disclosure triangle, the list is empty! [There should be over a dozen entries, and the reverse entries do show up.] If I then tell it I want to add, say, an A record to the domain, almost everything disappears -- I am left with a list showing our two domains, one with a disclosure triangle underneath it showing a single entry, and one reverse entry to correspond to the new A record.

    named appears to be working fine and DNS names resolve. It appears to simply be that Server Admin is having problems with the data on the computer. No one here would have manually created a DNS entry.

    Now, while I think I've backed up the DNS (I backed up /var/named/, /etc/named.conf, and /etc/dns/, as mentioned here), I'm really not sure whether just replacing the files would restore the DNS settings we have if things go south. I am contemplating going to settings and changing the log level from "Information" to "Debug", but 1) I am just a little concerned that it might write a bad configuration to disk, and 2) I think it would only affect named and not Server Admin, and, so far as I can tell, named is not having a problem. (Nothing looks strange in /Library/Logs/named.log when I open it via Console/Terminal. Oddly, though, when I click the 'log' button for DNS in Server Admin, I see no text at all, just a fully white window. When I look at one of our secondary DNS servers, I am able to see the log file through Server Admin.)

    This entry appears in the system log when I run Server Admin on the server:

        Jun 17 09:02:08 od1 Server Admin[3892]: Unexpected call to doMarkConfigurationAsDirty by 'DNS' plugin during updateConfigurationViewFromDescription

    It seems to occur after I've looked at DNS, looked at another service, and then clicked back on DNS. Thinking that the most likely cause is a corrupt configuration file, I glanced through all the files that I backed up, and none of them is obviously gobbledygook.

    Here are some oddities I find when running Server Admin from a remote computer to manage the DNS. When I click to see the log file for DNS, the server starts writing messages like these to its system.log:

        Jun 17 09:59:04 od1 kernel[0]: Limiting open port RST response from 252 to 250 packets per second
        Jun 17 09:59:06 od1 kernel[0]: Limiting open port RST response from 258 to 250 packets per second

    This stops when I click on a different service. The indeterminate progress indicator (the spinning wheel that appears beside the "Revert" and "Save" buttons in the bottom-right corner of Server Admin) looks really strange: as far as I can tell, instead of just spinning and waiting, it is being told to start spinning repeatedly, resulting in a jerky animation.

    Here are some of the messages being logged on the computer running Server Admin.

    At startup:

        *** ERROR: -[GRAxes computeLayout]:1124 - plotRect height = 0.000000 <= 0.0 ***
        *** ERROR: -[GRChartView computeLayout]:1194 - Layout for overlay axes (0x18758f50) failed. ***

    (These messages don't concern me too much, as they go away for a while if you delete ~/Library/Preferences/com.apple.ServerAdmin.plist.)

    At shutdown:

        2010-06-17 10:02:17.202 Server Admin[7770:10b] *** -[GroupTextField windowDidResignKey:]: unrecognized selector sent to instance 0x16e12490

    More concerning are these messages:

        2010-06-17 09:59:47.269 Server Admin[7770:10b] Unexpected call to doMarkConfigurationAsDirty by 'DNS' plugin during updateConfigurationViewFromDescription
        Server Admin(7770,0xb0453000) malloc: *** error for object 0x1c115390: double free
        *** set a breakpoint in malloc_error_break to debug
        2010-06-17 10:01:00.795 Server Admin[7770:10b] *** -[ServiceEntry sessionHost]: unrecognized selector sent to instance 0x2af500

    Any thoughts on what the problem is, how I can troubleshoot it, or how to fix it? If I do need to wipe out DNS and restart, is there a good way to do this?


  • "then an error occurred during the login process" - Connection Error 233

    - by scott brunner
    We have SQL Server 2008 installed on 64-bit Windows Server 2003. When we try to connect to the local SQL Server using SQL Server Management Studio at the console, we get the error:

        A connection was successfully established with the server, but then an error occurred during
        the login process. (provider: Shared Memory Provider, error: 0 - No process is on the other
        end of the pipe.)

    When we try TCP from the same local SSMS to the local server, we get the same error, but instead of the pipe message it's something like "connection forcibly closed". Now, here is the strange part: we CAN connect to this SQL Server from any other machine on the network using SSMS, AND we CAN'T connect to ANY SQL Server from the problem server. So it seems the SQL Server instance is fine and accepting remote connections; however, SSMS on that machine will not connect to any SQL Server, even a remote one. When we try an ADO.NET connection from C# remotely we can connect; run that same code at the console of the trouble server and we get the same errors. How can this be solved?


  • How to setup bindings for development IIS 7.5 with lot of sites

    - by Antonio Bakula
    I am a programmer in a small ASP.NET shop with very little experience in server administration, and I have to set up IIS 7.5 to host lots of sites on a newly installed Windows Server 2008 R2. These sites are test "clones" of sites on the "real" web server and should be accessible only on the local network (domain). Developers add new sites for our new customers; project managers use this server to check progress and test new sites and features; QA people have to access and test sites before we copy them to the "real" web server. Developers have access to the IIS console; in fact, they can RDP to the test server with their developer domain credentials and permissions, and they are local admins on that machine (tester).

    On our previous server I used different port numbers for each site. That worked, but I don't like this solution; I would prefer to use subdomains. But here are the problems: manually adding DNS records is not an option, because we do not want developers to have to administer the domain DNS server, and currently this has to be done with domain administrator credentials. Is there some way to add DNS records automatically? I tried to add a wildcard DNS record for subdomains of the test server (*.tester) and that seemed to work for a while, but the change caused some bad problems in our domain network and the admin forbade me to mess with DNS. He said that I have to add a DNS record for every subdomain manually, that I cannot use wildcards, and that there is nothing I can do about it, mainly for "political" reasons :( Obviously our admin is pretty uncooperative, outsourced from a different organization, and I can't do anything about that. Can I add another DNS server on that machine? What must be set up on client machines to "tell" them to use both the domain DNS server and the tester domain server?

    So please, I need someone to give me some advice. What should I do? Are different port numbers the only option left? Thanks!
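    On the "let developers add sites without hand-editing anything" point, one hedged option is to at least script the IIS half with Microsoft.Web.Administration and keep a short, reviewable list of host headers to hand to the DNS admin. The sketch below creates a site plus binding programmatically (site name, host header, and path are illustrative assumptions):

        using Microsoft.Web.Administration; // ships with IIS 7.x; reference it from %windir%\System32\inetsrv

        class AddTestSite
        {
            static void Main()
            {
                using (ServerManager mgr = new ServerManager())
                {
                    // Binds the new site to http://newcustomer.tester on port 80.
                    mgr.Sites.Add("NewCustomer", "http", "*:80:newcustomer.tester", @"D:\Sites\NewCustomer");
                    mgr.CommitChanges();
                }
            }
        }

    The DNS record itself still has to exist, so this removes the IIS busywork but not the per-subdomain DNS request; if ports turn out to be the only politically workable option, the same API can assign those as well.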


  • Give Markup support for Custom server control with public PlaceHolders properties

    - by ravinsp
    I have a custom server control with two public PlaceHolder properties exposed to the outside. I can use this control in a page like this:

        <cc1:MyControl ID="MyControl1" runat="server">
          <TitleTemplate>
            Title text and anything else
          </TitleTemplate>
          <ContentTemplate>
            <asp:TextBox ID="TextBox1" runat="server"></asp:TextBox>
            <asp:Button ID="Button1" runat="server" Text="Button" />
          </ContentTemplate>
        </cc1:MyControl>

    TitleTemplate and ContentTemplate are properties of the ASP.NET PlaceHolder type. Everything works fine: the control takes any content given to these custom properties and produces custom HTML output around it. If I want a Button1_Click event handler, I can attach the event handler in Page_Load like the following code, and it works:

        protected void Page_Load(object sender, EventArgs e)
        {
            Button1.Click += new EventHandler(Button1_Click);
        }

        void Button1_Click(object sender, EventArgs e)
        {
            TextBox1.Text = "Button1 clicked";
        }

    But if I try to attach the click event handler in .aspx markup, I get an error when running the application: "Compiler Error Message: CS0117: 'ASP.default_aspx' does not contain a definition for 'Button1_Click'"

        <asp:Button ID="Button1" runat="server" Text="Button" OnClick="Button1_Click" />

    AutoEventWireup is set to "true" in the page markup. This happens only for child controls inside my custom control. I can programmatically access the child controls correctly; the only problem is event handler assignment from markup. When I select the child Button in markup, the properties window only detects it as a <BUTTON>, not System.Web.UI.WebControls.Button, and it doesn't display the "Events" tab. How can I give markup support for this scenario? Here's the code for the MyControl class if needed. And remember, I'm not using any ITemplate types for this; the custom properties I provide are of type PlaceHolder.

        [ToolboxData("<{0}:MyControl runat=server>" +
            "<TitleTemplate></TitleTemplate>" +
            "<ContentTemplate></ContentTemplate>" +
            "</{0}:MyControl>")]
        public class MyControl : WebControl
        {
            PlaceHolder contentTemplate, titleTemplate;

            public MyControl()
            {
                contentTemplate = new PlaceHolder();
                titleTemplate = new PlaceHolder();
                Controls.Add(contentTemplate);
                Controls.Add(titleTemplate);
            }

            [Browsable(true)]
            [TemplateContainer(typeof(PlaceHolder))]
            [PersistenceMode(PersistenceMode.InnerProperty)]
            public PlaceHolder TitleTemplate
            {
                get { return titleTemplate; }
            }

            [Browsable(true)]
            [TemplateContainer(typeof(PlaceHolder))]
            [PersistenceMode(PersistenceMode.InnerProperty)]
            public PlaceHolder ContentTemplate
            {
                get { return contentTemplate; }
            }

            protected override void RenderContents(HtmlTextWriter output)
            {
                output.Write("<div>");
                output.Write("<div class=\"title\">");
                titleTemplate.RenderControl(output);
                output.Write("</div>");
                output.Write("<div class=\"content\">");
                contentTemplate.RenderControl(output);
                output.Write("</div>");
                output.Write("</div>");
            }
        }
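    For the record, the standard way to get designer and markup event support in a control like this is to expose the templates as ITemplate properties marked TemplateInstance.Single (the pattern asp:UpdatePanel uses) instead of raw PlaceHolders; with a single-instance template, the page parser treats Button1 as a normal page-level member, so OnClick="Button1_Click" can resolve. A hedged sketch of the property shape (the CreateChildControls body is an assumption, not taken from the posted code):

        [Browsable(true)]
        [PersistenceMode(PersistenceMode.InnerProperty)]
        [TemplateInstance(TemplateInstance.Single)] // template children become page-level members
        public ITemplate ContentTemplate { get; set; }

        protected override void CreateChildControls()
        {
            Controls.Clear();
            if (ContentTemplate != null)
            {
                Control container = new Control();
                ContentTemplate.InstantiateIn(container); // inflate the template into the control tree
                Controls.Add(container);
            }
        }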


  • Tales from the Trenches – Building a Real-World Silverlight Line of Business Application

    - by dwahlin
    There's rarely a boring day working in the world of software development. Part of the fun of being a developer is that change is guaranteed, and the more you learn about a particular technology the more you realize there's always a different or better way to perform a task. I've had the opportunity to work on several real-world Silverlight Line of Business (LOB) applications over the past few years and wanted to put together a list of some of the key things I've learned, as well as key problems I've encountered and resolved. There are several topics I could cover related to "lessons learned" (some of them were more painful than others), but I'll keep it to 5 items for this post and cover additional lessons learned in the future. The topics discussed were put together for a TechEd talk:

      • Pick a Pattern and Stick To It
      • Data Binding and Nested Controls
      • Notify Users of Successes (and Failures)
      • Get an Agent - A Service Agent
      • Extend Existing Controls

    The first topic relates to architecture best practices and how the MVVM pattern can save you time in the long run. When I was first introduced to MVVM I thought it was a lot of work for very little payoff. I've since learned (the hard way in some cases) that my initial impressions were dead wrong and that my criticisms of the pattern were generally caused by doing things the wrong way. In addition to MVVM pros, the slides and sample app below also jump into data binding tricks in nested control scenarios and discuss how animations and media can be used to enhance LOB applications in subtle ways. Finally, there is a discussion of creating a re-usable service agent to interact with backend services, and of how existing controls make good candidates for customization. I tried to keep the samples simple while still covering the topics as much as possible, so if you're new to Silverlight you should be able to follow along with a little study and practice. I'd recommend starting with the SilverlightDemos.View project, moving to the SilverlightDemos.ViewModels project, and then going to the SilverlightDemos.ServiceAgents project. All of the backend "Model" code can be found in the SilverlightDemos.Web project. Custom controls used in the app can be found in the SilverlightDemos.Controls project.

    Sample Code and Slides


  • Windows Azure Use Case: Infrastructure Limits

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description: Physical hardware components take up room, use electricity, create heat and therefore need cooling, and require wiring and special storage units. All of these requirements cost money, whether you rent space at a data center or build out a local facility. In some cases, this can be a catalyst for evaluating options to remove the infrastructure requirement entirely by moving to a distributed computing environment.

    Implementation: There are three main options for moving to a distributed computing environment.

    Infrastructure as a Service (IaaS): The first option is simply to virtualize the current hardware and move the VMs to a provider. You can do this with Microsoft's Hyper-V product or other software, build the systems, and host them locally on fewer physical machines. This is a good option for canned applications (where you have to type setup.exe) but not as useful for custom applications, as you still have to license and patch those servers, and there are hard limits on the VM sizes.

    Software as a Service (SaaS): If there is already software available that does what you need, it may make sense to purchase not only the software license but the use of it on the vendor's servers. Microsoft's Exchange Online is an example of simply using an offering from a vendor on their servers. If you do not need a great deal of customization, have no interest in owning or extending the source code, and need to implement a solution quickly, this is a good choice.

    Platform as a Service (PaaS): If you do need to write software for your environment, your next choice is a Platform as a Service such as Windows Azure. In this case you no longer manage physical or even virtual servers. You start at the code and data level of control and responsibility, and your focus is more on the design and maintenance of the application itself. In this case you own the source code and can extend or change it as you see fit. An interesting side benefit of using Windows Azure as a PaaS is that the Application Fabric component allows a hybrid approach, which gives you a basis to let on-premises applications leverage distributed computing paradigms.

    No one solution fits every situation. It's common to see organizations pick a mixture of on-premises, IaaS, SaaS and PaaS components. In fact, that's a great advantage to this form of computing - choice.

    References:

      • 5 Enterprise steps for adopting a Platform as a Service: http://blogs.msdn.com/b/davidmcg/archive/2010/12/02/5-enterprise-steps-for-adopting-a-platform-as-a-service.aspx?wa=wsignin1.0
      • Application Patterns for the Cloud: http://blogs.msdn.com/b/kashif/archive/2010/08/07/application-patterns-for-the-cloud.aspx


  • Should I be using a JavaScript SPA design when security is important

    - by ryanzec
    I asked something kind of similar on Stack Overflow with a particular piece of code, but I want to ask this in a broader sense. I have a web application that I have started to write in Backbone using a Single Page Architecture (SPA), but I am starting to second-guess myself because of security. We are not storing and sending credit card information or anything like that through this web application, but we are storing sensitive information that people upload to us and will have the ability to re-download.

    The obvious security concern with JavaScript is that you can't trust anything that comes from it, yet in a Backbone SPA application, everything is sent through JavaScript. There are two security features I have to build in JavaScript: permissions and authentication. The authentication piece is just me overriding the Backbone.Router.prototype.navigate method to check the fragment it is trying to load; if application.session.loggedIn is not set to true (and the page is not one that anonymous users may view), the user is redirected to the login page automatically. A user could easily set application.session.loggedIn to true (or modify the Backbone.Router.prototype.navigate method), but then they would also have to (not so easily) dynamically embed a link into the page (or modify a current one) that has the proper classes, data-* attributes, and href values to load a page that should only be loaded when the user has logged in (and has the permissions).

    For permissions I have an acl object. All someone would have to do to view pages (or parts of pages) they shouldn't is call acl.addPermission(resource, permission) with the proper permissions, or modify acl.hasPermission() to always return true, and then navigate away and back to the page. Certain things in ECMAScript 5, like Object.seal() or Object.freeze(), would help with some of this, but we have to support IE 8, which does not support them.

    The REST API also performs security checks on every request, so technically, even if someone can see parts of the interface they shouldn't, they still should not be able to actually affect any data. The main benefits for me in developing a JavaScript SPA application are that the application is a lot more responsive, since it transfers only the minimum amount of JSON data for the requested action and performs the minimum amount of work, and there are other benefits I think are valuable too: you have to develop an API for the data (good if you want to expand your application to different platforms/technologies), and there is more separation between front-end and back-end. However, if security is a concern, is it really wise to go down the road of a JavaScript SPA application for the front-end?
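    If it helps frame the decision: the browser-side router override and acl object can stay as pure UX, provided every REST endpoint re-derives identity and permission itself. A hedged sketch of that server-side gate (ASP.NET Web API-flavored C#; the attribute and PermissionStore are illustrative assumptions, not the poster's code):

        using System.Net;
        using System.Net.Http;
        using System.Threading;
        using System.Web.Http.Controllers;
        using System.Web.Http.Filters;

        // Applied to sensitive API actions; the browser-side acl object is UI sugar,
        // and this check is the one that actually counts.
        public class RequirePermissionAttribute : AuthorizationFilterAttribute
        {
            private readonly string permission;
            public RequirePermissionAttribute(string permission) { this.permission = permission; }

            public override void OnAuthorization(HttpActionContext actionContext)
            {
                var user = Thread.CurrentPrincipal;
                if (user == null || !user.Identity.IsAuthenticated
                    || !PermissionStore.Has(user.Identity.Name, permission))
                {
                    actionContext.Response = actionContext.Request.CreateResponse(HttpStatusCode.Forbidden);
                }
            }
        }

        // Illustrative stand-in for a real permission lookup.
        public static class PermissionStore
        {
            public static bool Has(string userName, string permission)
            {
                return false; // query the real permission data here
            }
        }

    With that in place, tampering with the SPA only changes what the attacker's own browser renders, never what the server returns.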


  • Characteristics of a Web service that promote reusability and change

    Characteristics of a Web service that promote reusability and change:

      • Standardized data exchange formats (XML, JSON)
      • Standardized communication protocols (SOAP, REST)
      • Promotion of loosely coupled systems

    Standardized data exchange formats (XML, JSON)

    XML: W3.org defines Extensible Markup Language (XML) as a simple text format derived from SGML. XML was designed to solve challenges found in large-scale electronic publishing. In addition, XML plays an important role in the exchange of data, primarily focusing on data exchange on the web.

    JSON: JavaScript Object Notation (JSON) is a human-readable, text-based standard designed for data interchange. This format is used for serializing and transmitting data over a network connection in a structured format. The primary use of JSON is to transmit data between a server and a web application. JSON is an alternative to XML.

    Standardized communication protocols (SOAP, REST)

    SOAP: W3Schools.com defines SOAP as a simple XML-based protocol that lets applications exchange data over HTTP. SOAP provides a way to communicate between applications running on different operating systems, with different technologies and programming languages.

    REST: In 2007, Stefan Tilkov defined Representational State Transfer (REST) as a set of principles that outlines how Web standards are supposed to be used. Using REST in an application will ensure that it exploits the Web's architecture to its benefit.

    Promotes loosely coupled systems

    "Loose coupling [is] an approach to interconnecting the components in a system or network so that those components, also called elements, depend on each other to the least extent practicable. Coupling refers to the degree of direct knowledge that one element has of another." (TechTarget.com, 2007)

    "[A] loosely coupled system can be easily broken down into definable elements. The extent of coupling in a system can be measured by mapping the maximum number of element changes that can occur without adverse effects. Examples of such changes include adding elements, removing elements, renaming elements, reconfiguring elements, modifying internal element characteristics and rearranging the way in which elements are interconnected." (TechTarget.com, 2007)

    References:
    W3C. (2011). Extensible Markup Language (XML). Retrieved from W3.org: http://www.w3.org/XML/
    W3Schools.com. (2011). SOAP Introduction. Retrieved from W3Schools.com: http://www.w3schools.com/soap/soap_intro.asp
    Tilkov, Stefan. (2007). A Brief Introduction to REST. Retrieved from Infoq.com: http://www.infoq.com/articles/rest-introduction
    TechTarget.com. (2011). Loose coupling. Retrieved from TechTarget.com: http://searchnetworking.techtarget.com/definition/loose-coupling
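    To make the two interchange formats concrete, here is the same illustrative record expressed both ways (the field names are assumptions):

        XML:
            <customer>
              <id>42</id>
              <name>Acme Corp</name>
              <active>true</active>
            </customer>

        JSON:
            { "id": 42, "name": "Acme Corp", "active": true }

    Either payload can be produced and consumed by any platform, which is exactly the property that keeps a service reusable as the systems around it change.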


  • Declarative Architectures in Infrastructure as a Service (IaaS)

    - by BuckWoody
    I deal with computing architectures by first laying out requirements, and then laying in any constraints on its success. Only then do I bring in computing elements to apply to the system. As an example, a requirement might be "world-wide availability" and a constraint might be "with less than 80 ms response time and full HA", or something similar. Then I can choose the best fit from technologies that range from full on-premises computing to IaaS, PaaS, or SaaS.

    I also deal in abstraction layers: on-premises systems are fully under your control; in IaaS the hardware is abstracted (but not the OS, scale, runtimes, and so on); in PaaS the hardware and the OS are abstracted and you focus on code and data only; and in SaaS everything is abstracted - you merely purchase the function you want (like an e-mail server or some such) and simply use it.

    When you think about solutions this way, the architecture becomes the primary factor in your decision. It's problem-first architecting: state the problem, then lay in whatever technology or vendor best fixes it. To that end, most architects design a solution using a graphical tool (I use Visio) and then create documents that let the rest of the team (and business) know what is required. It's the template, or recipe, for the solution.

    This is extremely easy to do for SaaS: you merely point out what the needs are, research the vendor, and present the findings (and bill) to the business; IT might not even be involved. In PaaS it's not much more complicated: you use the same Application Lifecycle Management and design tools you always have for code, such as Visual Studio or some other process and toolset, and you can "stamp out" the application in multiple locations, update it, and so on.

    IaaS is another story. Here you have multiple machines, operating systems, patches, virus scanning, runtimes, scale patterns, tools, and much more to deal with, since essentially it's just an in-house system being hosted by someone else. You can certainly automate builds of servers; we do this as technical professionals every day. From Windows to Linux, it's simple enough to create a "build script" that makes a system just like the one we made yesterday. What is more problematic is being able to tie those systems together in a coherent way (as a solution) and then stamp that out repeatedly, especially when you might want to deploy that solution on-premises, or on one cloud vendor or another.

    Lately I've been working with a company called RightScale that does exactly this. I'll point you to their site for more info, but the general idea is that you document your intent for a set of servers, and it will deploy them to on-premises clouds, Windows Azure, and other cloud providers, all from the same script. In other words, it doesn't contain the images or anything like that; it contains the scripts to build them on-premises or on a cloud vendor like Microsoft. Using a tool like this, you combine the steps of designing a system (all the way down to passwords and accounts if you wish), and then the document drives the distribution and implementation of that intent. As more and more companies implement solutions on various providers (perhaps for HA and DR), this becomes a compelling investigation. The RightScale information is here, if you want to investigate it further. Yes, there are other methods I've found, but most are tied to a single kind of cloud, and I'm not into vendor lock-in.
    Poppa Bear Level - Hands-on: Evaluate RightScale at no cost. Just bring your Windows Azure credentials and follow these tutorials:

      • Sign Up for Windows Azure
      • Add Windows Azure to a RightScale Account
      • Windows Azure Virtual Machines 3-tier Deployment

    Momma Bear Level - Just the right level... ;0)

      • Windows Azure Evaluation Guide - if you are new to Windows Azure Virtual Machines and new to RightScale, we recommend that you read the entire evaluation guide to gain a more complete understanding of the Windows Azure + RightScale solution.
      • Windows Azure Support Page @ support.rightscale.com - FAQs, tutorials, etc. for Windows Azure Virtual Machines (Work in Progress)

    Baby Bear Level - Marketing

      • Windows Azure Page @ www.rightscale.com - find overview information including solution briefs and presentation & demonstration videos
      • Scale and Automate Applications on Windows Azure Solution Brief - how RightScale makes Windows Azure Virtual Machines even better
      • SQL Server on Windows Azure Solution Brief - Run Highly Available SQL Server on Windows Azure Virtual Machines


  • How do you manage extensibility in your multi-tenant systems?

    - by Brian MacKay
    I've got a few big web-based multi-tenant products now, and very soon I can see that there will be a lot of customizations that are tenant-specific. An extra field here or there, maybe an extra page or some extra logic in the middle of a workflow, that sort of thing. Some of these customizations can be rolled into the core product, and that's great. Some of them are highly specific and would get in everyone else's way.

    I have a few ideas in mind for managing this, but none of them seem to scale well. The obvious solution is to introduce a ton of client-level settings, allowing various 'features' to be enabled on a per-client basis. The downside with that, of course, is massive complexity and clutter: you could introduce a truly huge number of settings, and over time various types of logic (presentation, business) could get way out of hand. Then there's the problem of client-specific fields, which begs for something cleaner than just adding a bunch of nullable fields to the existing tables.

    So what are people doing to manage this? Force.com seems to be the master of extensibility; obviously they've created a platform from the ground up that is super extensible, and you can add on to almost anything with their web-based UI. FogBugz did something similar with a robust plugin model that, come to think of it, might actually have been inspired by Force. I know they spent a lot of time and money on it, and if I'm not mistaken the intention was to actually use it internally for future product development. It sounds like the kind of thing I could be tempted to build but probably shouldn't. :) Is a massive investment in pluggable architecture the only way to go? How are you managing these problems, and what kind of results are you seeing?

    EDIT: It does look as though FogBugz handled the problem by building a fairly robust platform and then using it to put together their screens. To extend it you create a DLL containing classes that implement interfaces like ISearchScreenGridColumn, and that becomes a module. I'm sure it was tremendously expensive to build, considering that they have a large team of devs who worked on it for months, and their surface area is perhaps 5% of the size of my application. Right now I am seriously wondering whether Force.com is the right way to handle this. And I am a hard-core ASP.Net guy, so this is a strange position to find myself in.
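    One hedged middle ground between "a ton of settings" and a full Force.com-style platform is to model tenant extensions as data rather than schema: feature flags behind named checks, plus a custom-field definition table that drives UI and validation, with values landing in one narrow table instead of nullable columns everywhere. A sketch in C# (all names are illustrative assumptions):

        using System.Collections.Generic;

        // Per-tenant extensibility modeled as data, not schema changes.
        public class TenantProfile
        {
            public HashSet<string> EnabledFeatures = new HashSet<string>();        // e.g. "ExtraApprovalStep"
            public List<CustomFieldDef> CustomFields = new List<CustomFieldDef>(); // drives UI + validation
        }

        public class CustomFieldDef
        {
            public string Entity;   // which core record it extends, e.g. "Order"
            public string Name;     // e.g. "CostCenter"
            public string DataType; // e.g. "string", "date"
        }

        // Values land in one narrow table rather than on every core table:
        //   CustomFieldValue(TenantId, Entity, RecordId, FieldName, Value)
        public static class Workflow
        {
            public static void Submit(TenantProfile tenant /*, Order order */)
            {
                if (tenant.EnabledFeatures.Contains("ExtraApprovalStep"))
                {
                    // tenant-specific branch lives behind a flag, not a fork of the codebase
                }
            }
        }

    This keeps the core product on one schema and one codebase; the highly specific customizations that cannot be expressed as flags or fields are the ones that genuinely argue for a plugin model.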


  • Single Responsibility Principle Implementation

    - by Mike S
    In my spare time, I've been designing a CMS in order to learn more about actual software design and architecture, etc. Going through the SOLID principles, I already notice that ideas like "MVC", "DRY", and "KISS" pretty much fall right into place. That said, I'm still having problems deciding if one of two implementations is the best choice when it comes to the Single Responsibility Principle.

    Implementation #1:

        class User
            getName
            getPassword
            getEmail
            // etc...

        class UserManager
            create
            read
            update
            delete

        class Session
            start
            stop

        class Login
            main

        class Logout
            main

        class Register
            main

    The idea behind this implementation is that all user-based actions are separated out into different classes (creating a possible case of the aptly-named Ravioli Code), following the SRP to a "tee", almost literally. But then I thought that it was a bit much, and came up with this next implementation:

        class UserView extends View
            getLogin       // Returns the html for the login screen
            getShortLogin  // Returns the html for an inline login bar
            getLogout      // Returns the html for a logout button
            getRegister    // Returns the html for a register page
            // etc... as needed

        class UserModel extends DataModel implements IDataModel
            // Implements no new methods yet, outside of the interface methods
            // Haven't figured out anything special to go here at the moment
            // All CRUD operations are handled by DataModel
            // through methods implemented by the interface

        class UserControl extends Control implements IControl
            login
            logout
            register
            startSession
            stopSession

        class User extends DataObject
            getName
            getPassword
            getEmail
            // etc...

    This is obviously still very organized, and still very "single responsibility". The User class is a data object that I can manipulate data on and then pass to the UserModel to save it to the database. All the user data rendering (what the user will see) is handled by UserView and its methods, and all the user actions are in one space in UserControl (plus some automated stuff required by the CMS to keep a user logged in or to ensure that they stay out.)

    I personally can't think of anything wrong with this implementation either. In my personal feelings I feel that both are effectively correct, but I can't decide which one would be easier to maintain and extend as life goes on (despite leaning towards Implementation #1.) So what about you guys? What are your opinions on this? Which one is better? What basics (or otherwise, nuances) of that principle have I missed in either design?


  • Keeping your options open in a cloud solution

    - by BuckWoody
    In on-premises solutions we have the full range of options open for a given computing solution, but we don't always take advantage of them, for multiple reasons. Data goes in a Relational Database Management System, files go on a share, and e-mail goes to the Exchange server. Over time, vendors (including ourselves) add functionality to one product that allows non-standard use of the platform. For example, SQL Server (and Oracle, and others) allow large binary storage in or through the system, something not originally intended for an RDBMS to handle. There are certainly times when this makes sense, of course, but often these platform hammers turn every problem into a nail. It can make us "lazy" in our design; we sometimes don't take the time to learn another architecture because the one we've spent so much time with can handle what we want to do. But there's a distinct danger here. In nature, when a population shares too many of the same traits, it can suffer a complete collapse if a situation exploits a weakness shared by that population. The same is true of not using the right tool for the job in a computing environment. Your company or organization depends on your knowledge as a professional to select the best mix of supportable, flexible, cost-effective technologies to solve their problems, whether you're in an architect role or not.

    So take some time today to learn something new. The way I do this is to select a given problem and try to solve it with a technology I'm not familiar with. For instance: create a Purchase Order system in Excel, then in Hadoop or MongoDB, or even in flat files using PowerShell as an interface. No, I'm not suggesting any of these architectures are the proper way to solve the PO problem, but taking something concrete that you know well and applying that meta-knowledge to another platform will assist you in exercising the "little grey cells" and help you and your organization understand what is open to you. And of course you can do all of this on-premises, but my recommendation is to check out a cloud platform (my suggestion would of course be Windows Azure :) ) and try it there. Most providers (including Microsoft) provide free time to do that.


  • Separating logic and data in browser game

    - by Tesserex
    I've been thinking this over for days and I'm still not sure what to do. I'm trying to refactor a combat system in PHP (...sorry.) Here's what exists so far: there are two (so far) types of entities that can participate in combat; let's just call them players and NPCs. Their data is already written pretty well. When involved in combat, these entities are wrapped in another object in the DB called a Combatant, which gives them information about the particular fight. They can be involved in multiple combats at once.

    I'm trying to write the logic engine for combat by having combatants injected into it, and I want to be able to mock everything for testing. In order to separate logic and data, I want two interfaces / base classes, one being ICombatantData and the other ICombatantLogic. The two implementers of data will be one for the real objects stored in the database and one for my mock objects.

    I'm now running into uncertainties with designing the logic side of things. I can have one implementer for each of players and NPCs, but then I have an issue: a combatant needs to be able to return the entity that it wraps. Should this getter method be part of logic or data? I feel strongly that it should be in data, because the logic part is used for executing combat and won't be available if someone is just looking up information about an upcoming fight. But the data classes only separate mock from DB, not player from NPC. If I try having two child classes of the DB data implementer, one for each entity type, then how do I architect that while keeping my mocks in the loop? Do I need some third interface like IEntityProvider that I inject into the data classes?

    Also, with some of the ideas I've been considering, I feel like I'll have to put checks in place to make sure you don't mismatch things, like making the logic for an NPC accidentally wrap the data for a player. Does that make any sense? Is that a situation that would even be possible if the architecture is correct, or would the right design prohibit it completely so I don't need to check for it? If someone could help me just lay out a class diagram or something for this, it would help me a lot. Thanks.

    Edit: Also useful to note, the mock data class doesn't really need the Entity, since I'll just be specifying all the parameters like combat stats directly instead. So maybe that will affect the correct design.
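    A hedged sketch of one way to slice it, shown in C# for brevity (the same shape ports directly to PHP interfaces): the entity getter lives on the data side, as the poster suspects; both the DB and mock data classes satisfy it; and a small entity abstraction keeps the player/NPC split from multiplying data classes. All names are assumptions:

        // Player or NPC behind one abstraction, so the data layer needs no per-type subclasses.
        public interface IEntity
        {
            string Name { get; }
        }

        public interface ICombatantData
        {
            IEntity Entity { get; } // lookup-only info belongs on the data side
            int Health { get; set; }
        }

        public interface ICombatantLogic
        {
            void TakeTurn(ICombatantData self, ICombatantData target);
        }

        // DB-backed and mock data differ only in where the numbers come from.
        public class DbCombatantData : ICombatantData
        {
            public IEntity Entity { get; private set; }
            public int Health { get; set; }
            public DbCombatantData(IEntity wrapped, int health) { Entity = wrapped; Health = health; }
        }

        public class MockCombatantData : ICombatantData
        {
            public IEntity Entity { get; private set; } // a stub, so tests never touch the DB
            public int Health { get; set; }
            public MockCombatantData(string name, int health)
            {
                Entity = new StubEntity { Name = name };
                Health = health;
            }
            private class StubEntity : IEntity { public string Name { get; set; } }
        }

    Because logic only ever sees ICombatantData, the player/NPC distinction collapses into which IEntity is wrapped, which also removes the "NPC logic wrapping player data" mismatch the poster worries about.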


  • SQL Server Issue: Could not allocate space for object ... primary filegroup is full

    - by Luke
    Trying to figure out a problem at an office that has SQL Server 2005 installed on Windows SBS Server 2008. Here's the setup: it's an office, and the person who set this all up is nowhere to be found; I'm the best hope they have...

    One of the programs they use on a workstation gives an error of "Could not allocate space for object 'Billing' in database "MyDatabase" because primary filegroup is full" when trying to save an entry in their software. I searched around for hours looking for possible solutions. One was to check for available disk space, and another was to defrag. I checked the hard drives on the server, and there is plenty of space free. I also defragged, which may have helped the problem somewhat; it's hard to say, because it seems like, with the nature of the error, if you try over and over you might get it to actually save.

    My next step was to see whether autogrowth was enabled on the database. This would seem a likely solution, but I can't access the database! If I run SQL Server Management Studio, I can log in as my Windows user and view the list of databases. However, if I try to do anything (actually view the database, view the properties, add or edit users), I get errors that I don't have permission. For what it's worth, I also tried running Management Studio as Administrator, in case that would help. No difference, though.

    Now, what I'm guessing is going on, from my limited knowledge of SQL and from reading online, is that although I'm logged in as a Windows administrator, that account does NOT have SQL access. I do see a list of SQL users, including sa, but I again don't have permission to add one or to change the password on an existing one. And nobody at the office has any idea what the SQL passwords could be. So here's my thinking thus far:

    1 - The "Could not allocate" error likely points to a database that needs to be allowed to autogrow, especially since I verified there is plenty of free space and the HD has been defragmented.

    2 - Enabling autogrow would be very easy to do if I had the proper access within SQL Server Management Studio. That leads me to this link: http://blogs.technet.com/b/sqlman/archive/2011/06/14/tips-amp-tricks-you-have-lost-access-to-sql-server-now-what.aspx

    It sounds like a step-by-step guide for regaining the access I need to SQL. I'm guessing that if I followed this guide, I would then be able to log in to the SQL Server via Management Studio with the proper permissions, enable autogrow (or simply view the status of the existing database), and hopefully solve the "Could not allocate space" problem!

    So I guess I have a few questions:

    1 - Would you guys agree with my "diagnosis"? Am I barking up the right tree?

    2 - Is there any risk at all of hurting / disabling / wrecking the current SQL database or setup by going through the guide to regain SQL access? I understand that, per the guide, I would have to temporarily shut down SQL, so obviously it wouldn't be accessible during that time. But it wouldn't be worth the risk if there's a chance I could mess anything up... Like I said, the workstations ARE currently accessing the database somehow, but nobody knows with what login info or anything. Basically, it's set up, it works (usually), but if they had to reload the software, nobody would know how.

    Any feedback would be appreciated! The problem is such that it's not an emergency for them, but an annoyance. If I could fix it, it would be wonderful. But if not, I think they'll manage, especially as they are going to eventually stop using this software. Thank you so much for your time! Luke
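    For reference, "primary filegroup is full" is raised when the data file hits its size cap (or autogrow is off or failing), not when the disk itself is full; and if the instance turns out to be Express edition, the hard 4 GB database limit in SQL Server 2005 Express produces exactly this error regardless of the growth settings. The linked single-user-mode procedure is the standard, low-risk way back in, with the restart downtime as the main cost. Once access is regained, a hedged check looks like this (the database and logical file names are assumptions):

        USE MyDatabase;

        -- Current size (MB), growth setting, and cap for each file
        SELECT name, size/128 AS size_mb, growth, max_size
        FROM sys.database_files;

        -- If the data file is capped or autogrow is off, raise it
        ALTER DATABASE MyDatabase
        MODIFY FILE (NAME = MyDatabase_Data, MAXSIZE = UNLIMITED, FILEGROWTH = 64MB);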


  • Visual Studio 2008 App_Data defaults

    - by Dennis Keith
    Is it possible to use the App_Data folder in conjunction with SQL Server 2005? When I try, it specifies Express, even though I have changed Tools > Options > Database Tools > Data Connections to point at the correct server. I have downloaded the SQLEXPR32_x86_ENU.exe version 10.0.1600.22 file locally and have gone through 7 installs and uninstalls with a variety of different errors. I have pretty much given up on Express and would like to find a workaround if it exists. Thanks, Dennis Keith
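    If the goal is to keep the .mdf under App_Data but have the full SQL Server 2005 instance serve it, one hedged workaround is to bypass the designer default and write the connection string yourself. AttachDbFilename works against a non-Express instance as long as "User Instance=True" (the Express-only feature the designer assumes) is left out and the instance is local, so its service account can reach the file. Names and paths below are illustrative:

        <connectionStrings>
          <add name="AppDb"
               connectionString="Data Source=.;AttachDbFilename=|DataDirectory|\MyApp.mdf;Integrated Security=True"
               providerName="System.Data.SqlClient" />
        </connectionStrings>

    For a remote server, attach the database once in Management Studio and switch to a normal Initial Catalog connection string instead.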


  • Server http://www.myopenid.com/server responds that the 'check_authentication' call is not valid

    - by viatropos
    I've been struggling with this for a few days now and haven't pinpointed the problem. I am trying to get OpenID to work in Rails 2.3 and Rails 3, using ruby-openid, rack-openid, and open_id_authentication. I am logging in using my viatropos.myopenid.com account, but it consistently returns this error:

        Server http://www.myopenid.com/server responds that the 'check_authentication' call is not valid

    What could that be from? It's not a very descriptive error. Does it have to do with something Ruby-specific, or is this entirely on the OpenID protocol side of things? More specifically, I am using Authlogic and ActiveRecord, so could this be a problem with my User or UserSession models somehow? Or does it have more to do with the header or request? The response I'm getting in Ruby (from puts inside ruby-openid) is:

        #<OpenID::Consumer::FailureResponse:0x25e282c
          @reference=nil,
          @endpoint=#<OpenID::OpenIDServiceEndpoint:0x2601984
            @local_id="http://viatropos.myopenid.com/",
            @display_identifier=nil,
            @type_uris=["http://specs.openid.net/auth/2.0/signon",
                        "http://openid.net/sreg/1.0",
                        "http://openid.net/extensions/sreg/1.1",
                        "http://schemas.openid.net/pape/policies/2007/06/phishing-resistant",
                        "http://openid.net/srv/ax/1.0"],
            @used_yadis=true,
            @server_url="http://www.myopenid.com/server",
            @canonical_id=nil,
            @claimed_id="http://viatropos.myopenid.com/">,
          @message="Server http://www.myopenid.com/server responds that the 'check_authentication' call is not valid",
          @contact=nil>

    Any tips would be greatly appreciated. Thanks


  • SQL Server 2008 Filestream Impersonation Access Denied error

    - by Adi
    I've been trying to upload a file to the database using SQL Server 2008 FILESTREAM and the impersonation technique to save the file in the file system, but I keep getting an Access Denied error, even though I've set the permissions for the impersonating user on the FILESTREAM folder (C:\SQLFILESTREAM\Dev_DB). When I debugged the code, I found the server returns a UNC path (\\Server_Name\MSSQLSERVER\v1\Dev_LMDB\dbo\FileData\File_Data\13C39AB1-8B91-4F5A-81A1-940B58504C17), which was not accessible through Windows Explorer. My web application is hosted on my local machine (Windows 7); SQL Server is located on a remote server (Windows Server 2008 R2). SQL authentication was used to call the stored procedure. Following is the code I've used to do the above operations:

        SqlCommand sqlCmd = new SqlCommand("AddFile");
        sqlCmd.CommandType = CommandType.StoredProcedure;
        sqlCmd.Parameters.Add("@File_Name", SqlDbType.VarChar, 512).Value = filename;
        sqlCmd.Parameters.Add("@File_Type", SqlDbType.VarChar, 5).Value = Path.GetExtension(filename);
        sqlCmd.Parameters.Add("@Username", SqlDbType.VarChar, 20).Value = username;
        sqlCmd.Parameters.Add("@Output_File_Path", SqlDbType.VarChar, -1).Direction = ParameterDirection.Output;

        DAManager PDAM = new DAManager(DAManager.getConnectionString());
        using (SqlConnection connection = (SqlConnection)PDAM.CreateConnection())
        {
            connection.Open();
            SqlTransaction transaction = connection.BeginTransaction();
            WindowsImpersonationContext wImpersonationCtx;
            NetworkSecurity ns = null;
            try
            {
                PDAM.ExecuteNonQuery(sqlCmd, transaction);
                string filepath = sqlCmd.Parameters["@Output_File_Path"].Value.ToString();

                sqlCmd = new SqlCommand("SELECT GET_FILESTREAM_TRANSACTION_CONTEXT()");
                sqlCmd.CommandType = CommandType.Text;
                byte[] Context = (byte[])PDAM.ExecuteScalar(sqlCmd, transaction);

                byte[] buffer = new byte[4096];
                int bytedRead;

                ns = new NetworkSecurity();
                wImpersonationCtx = ns.ImpersonateUser(IMP_Domain, IMP_Username, IMP_Password,
                    LogonType.LOGON32_LOGON_INTERACTIVE, LogonProvider.LOGON32_PROVIDER_DEFAULT);

                SqlFileStream sfs = new SqlFileStream(filepath, Context, System.IO.FileAccess.Write);
                while ((bytedRead = inFS.Read(buffer, 0, buffer.Length)) != 0)
                {
                    sfs.Write(buffer, 0, bytedRead);
                }
                sfs.Close();
                transaction.Commit();
            }
            catch (Exception ex)
            {
                transaction.Rollback();
            }
            finally
            {
                sqlCmd.Dispose();
                connection.Close();
                connection.Dispose();
                ns.undoImpersonation();
                wImpersonationCtx = null;
                ns = null;
            }
        }

    Can someone help me with this issue? Reference exception:

        Type : System.ComponentModel.Win32Exception, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089
        Message : Access is denied
        Source : System.Data
        Help link :
        NativeErrorCode : 5
        ErrorCode : -2147467259
        Data : System.Collections.ListDictionaryInternal
        TargetSite : Void OpenSqlFileStream(System.String, Byte[], System.IO.FileAccess, System.IO.FileOptions, Int64)
        Stack Trace :
            at System.Data.SqlTypes.SqlFileStream.OpenSqlFileStream(String path, Byte[] transactionContext, FileAccess access, FileOptions options, Int64 allocationSize)
            at System.Data.SqlTypes.SqlFileStream..ctor(String path, Byte[] transactionContext, FileAccess access, FileOptions options, Int64 allocationSize)
            at System.Data.SqlTypes.SqlFileStream..ctor(String path, Byte[] transactionContext, FileAccess access)

    Thanks
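    One detail in the question is very likely the whole answer: Win32 streaming access to FILESTREAM data (what SqlFileStream does with that UNC path) is supported only with integrated Windows authentication; a SQL login can execute the T-SQL, but the subsequent OpenSqlFileStream call is checked against the process's Windows identity, which yields exactly this Access Denied. A hedged reordering sketch reusing the question's own helpers (the server and database names are assumptions):

        // Impersonate *before* opening the connection, and use integrated security,
        // so both the T-SQL and the Win32 FILESTREAM open run as the permitted user.
        using (WindowsImpersonationContext ctx = ns.ImpersonateUser(
                   IMP_Domain, IMP_Username, IMP_Password,
                   LogonType.LOGON32_LOGON_INTERACTIVE, LogonProvider.LOGON32_PROVIDER_DEFAULT))
        using (SqlConnection conn = new SqlConnection(
                   "Data Source=REMOTESQL;Initial Catalog=Dev_LMDB;Integrated Security=True"))
        {
            conn.Open();
            using (SqlTransaction tx = conn.BeginTransaction())
            {
                // ... run AddFile and SELECT GET_FILESTREAM_TRANSACTION_CONTEXT() here,
                // then new SqlFileStream(path, context, FileAccess.Write) as before ...
                tx.Commit();
            }
        }

    The impersonated account then needs SQL-side permissions on the table as well as the NTFS rights already granted on the FILESTREAM share.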


  • Architecture for Qt SIGNAL with subclass-specific, templated argument type

    - by Barry Wark
    I am developing a scientific data acquisition application using Qt. Since I'm not a deep expert in Qt, I'd like some architecture advice from the community on the following problem.

    The application supports several hardware acquisition interfaces, but I would like to provide a common API on top of them. Each interface has a sample data type and units for its data, so I'm representing a vector of samples from each device as a std::vector of Boost.Units quantities (i.e. std::vector<boost::units::quantity<unit,sample_type> >). I'd like to use a multi-cast style architecture, where each data source broadcasts newly received data to one or more interested parties, and Qt's signal/slot mechanism is an obvious fit for this style. So, I'd like each data source to emit a signal like

        typedef std::vector<boost::units::quantity<unit,sample_type> > SampleVector;

        signals:
            void samplesAcquired(SampleVector sampleVector);

    for the unit and sample_type appropriate for that device. Since templated QObject subclasses aren't supported by the meta-object compiler, there doesn't seem to be a way to have a (templated) base class for all data sources which defines the samplesAcquired signal. In other words, the following won't work:

        template<T,U> // sample type and units
        class DataSource : public QObject
        {
            Q_OBJECT
            ...
        public:
            typedef std::vector<boost::units::quantity<U,T> > SampleVector;
        signals:
            void samplesAcquired(SampleVector sampleVector);
        };

    The best option I've been able to come up with is a two-layered approach:

        template<T,U> // sample type and units
        class IAcquiredSamples
        {
        public:
            typedef std::vector<boost::units::quantity<U,T> > SampleVector;
            virtual shared_ptr<SampleVector> acquiredData(TimeStamp ts, unsigned long nsamples);
        };

        class DataSource : public QObject
        {
            ...
        signals:
            void samplesAcquired(TimeStamp ts, unsigned long nsamples);
        };

    The samplesAcquired signal now gives a timestamp and number of samples for the acquisition, and clients must use the IAcquiredSamples API to retrieve those samples. Obviously data sources must subclass both DataSource and IAcquiredSamples. The disadvantage of this approach appears to be a loss of simplicity in the API... it would be much nicer if clients could get the acquired samples in the connected slot. Being able to use Qt's queued connections would also make threading issues easier, instead of having to manage them in the acquiredData method within each subclass.

    One other possibility is to use a QVariant argument. This necessarily puts the onus on each subclass to register its particular sample vector type with Q_DECLARE_METATYPE/qRegisterMetaType. Not really a big deal. Clients of the base class, however, will have no way of knowing what the QVariant's value type is unless a tag struct is also passed with the signal. I consider this solution at least as convoluted as the one above, as it forces clients of the abstract base class API to deal with some of the gnarlier aspects of the type system.

    So, is there a way to achieve the templated signal parameter? Is there a better architecture than the one I've proposed?
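    To make the QVariant option the poster mentions concrete, here is a hedged sketch of how little boilerplate it actually needs: a non-template base carries the signal (so moc is happy), and the template subclass emits through it after its concrete vector type is registered as a metatype, which also makes queued connections work. Method and class names are illustrative assumptions:

        #include <QObject>
        #include <QVariant>
        #include <boost/units/quantity.hpp>
        #include <vector>

        // Non-template core: this is the only part moc has to process.
        class DataSourceBase : public QObject
        {
            Q_OBJECT
        signals:
            void samplesAcquired(QVariant samples); // slot recovers it with qvariant_cast
        };

        // Template layer: no Q_OBJECT here, so moc never sees the template.
        template <typename T, typename U> // sample type and units
        class DataSource : public DataSourceBase
        {
        public:
            typedef std::vector<boost::units::quantity<U, T> > SampleVector;

            void publishSamples(const SampleVector& v)
            {
                // Assumes Q_DECLARE_METATYPE(...) for each concrete SampleVector, plus a
                // qRegisterMetaType<...>() call at startup so queued connections work.
                emit samplesAcquired(QVariant::fromValue(v));
            }
        };

    The remaining wart is the one the poster identifies: the slot must know (or be told via a tag) which concrete SampleVector to qvariant_cast back out.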

