Search Results

Search found 16554 results on 663 pages for 'programmers identity'.


  • Looking for Application Framework Features Lists, Comparisons and Guides [closed]

    - by Blah McBlah
    I am looking for lists of the things that application frameworks can do, and for websites that have matrices, marketing content, blog articles and whatnot for comparing application frameworks to each other or simply selling a framework. I'm talking generally, regardless of programming language, operating system or client device. I want it all. I've found a few online, and would appreciate whatever sources I can glean from this site too.

    Read the article

  • .NET developer needs FoxPro advice

    - by katit
    We have a prospect with a FoxPro 2.6 system (whatever that means). Our product usually integrates with other systems by means of triggers. We would place a couple of triggers on system X and then just pull the collected data for our own use. This way there is no need to customize the customer's product, and it works great (almost real time - we poll for changes every 30 seconds). Questions: Can I put triggers on FoxPro 2.6? Can I access FoxPro from .NET? Any catches/caveats?
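
    For the second question, a minimal sketch of reading FoxPro DBF tables from .NET via OLE DB, assuming the Visual FoxPro OLE DB provider (VFPOLEDB) is installed and can open the 2.6-era files; the folder path, table and column names below are placeholders, not taken from the actual system:

        using System;
        using System.Data.OleDb;

        class FoxProPollSketch
        {
            static void Main()
            {
                // Point the provider at the directory that holds the .DBF files (placeholder path).
                var connectionString = @"Provider=VFPOLEDB.1;Data Source=C:\FoxProData\";

                using (var connection = new OleDbConnection(connectionString))
                {
                    connection.Open();

                    // Poll for recently changed rows. last_modified is a hypothetical column:
                    // FoxPro 2.6 free tables have no built-in change tracking, so some marker
                    // column (or comparing snapshots) would be needed instead of triggers.
                    var command = new OleDbCommand(
                        "SELECT * FROM customers WHERE last_modified > ?", connection);
                    command.Parameters.AddWithValue("?", DateTime.Now.AddSeconds(-30));

                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                            Console.WriteLine(reader[0]);
                    }
                }
            }
        }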

    Read the article

  • How to loop over arrays which have different data for each array

    - by Suriani Salleh
    I have this XML file that I need to convert from XML to MySQL. If it had only one array then I would know how to do it; my question is how to extract these two arrays, where each array has different data. For example, for the first array pmIntervalTxEthMaxUtilization has data 0, 74, 0, 0, 48, and for the second array pmIntervalRxPowerLevel has data -79, -68, -52 and pmIntervalTxPowerLevel has data 13, 11, -55. Can someone help guide me in writing PHP code to extract this XML file into MySQL?

        <mi>
          <mts>20130618020000</mts>
          <gp>900</gp>
          <mt>pmIntervalRxUndersizedFrames</mt>      [this is the first array]
          <mt>pmIntervalRxUnicastFrames</mt>
          <mt>pmIntervalTxUnicastFrames</mt>
          <mt>pmIntervalRxEthMaxUtilization</mt>
          <mt>pmIntervalTxEthMaxUtilization</mt>
          <mv>
            <moid>port:1:3:23-24</moid>
            <sf>FALSE</sf>
            <r>0</r>                                 [the data for the 1st array I want to insert in the DB]
            <r>0</r>
            <r>0</r>
            <r>5</r>
            <r>0</r>
          </mv>
        </mi>
        <mi>
          <mts>20130618020000</mts>
          <gp>900</gp>
          <mt>pmIntervalRxSES</mt>                   [this is the second array]
          <mt>pmIntervalRxPowerLevel</mt>
          <mt>pmIntervalTxPowerLevel</mt>
          <mv>
            <moid>client:1:3:23-24</moid>
            <sf>FALSE</sf>
            <r>0</r>                                 [the data for the 2nd array I want to insert in the DB]
            <r>-79</r>
            <r>13</r>
          </mv>
        </mi>

    This is the code I wrote for one array; I don't know how to write code for two arrays, because the fields appear twice and have different data values in each array:

        // Loop through the specified xpath
        foreach ($xml->mi->mv as $subchild) {
            $port_no  = $subchild->moid;
            $rx_ses   = $subchild->r[0];
            $rx_es    = $subchild->r[1];
            $tx_power = $subchild->r[10];
            // dump into database;
            ...........................

    I have done a little research on it; this is the outcome:

        $i = 0;
        while ($i < 5) {
            // Loop through the specified xpath
            foreach ($xml->md->mi->mv as $subchild) {
                $port_no = $subchild->moid;
                $rx_uni  = $subchild->r[10];
                $tx_uni  = $subchild->r[11];
                $rx_eth  = $subchild->r[16];
                $tx_eth  = $subchild->r[17];
                // dump into database;
                ..............................
                $i++;
                if ($i == 5) break;
            }
        }

        // Loop through the specified xpath
        foreach ($xml->mi->mv as $subchild) {
            $port_no  = $subchild->moid;
            $rx_ses   = $subchild->r[0];
            $rx_es    = $subchild->r[1];
            $tx_power = $subchild->r[10];
            // dump into database;
            .......................

    Read the article

  • Sending email notifications to users

    - by Web Girl
    What is the preferable way to send email notifications to users? I can do it either way, but which is better?

    1. C# code calls a stored procedure in the database; the stored procedure, based on some logic, pulls all the email data and sends the email using Database Mail.
    2. C# code calls a stored procedure, gets all the necessary data back, and sends the email itself using an SMTP server, etc.

    I just wonder which way is preferable in terms of performance and so on. The C# code is a library that would be part of the web application. So the question is where it's better to put the load: on the application server or the database server? The system will not be crazy busy; it's not like Amazon or something. But still, it would be nice to create something that makes sense.
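
    As a rough sketch of the second option, the application pulls the recipient data and sends the mail itself through System.Net.Mail; the SMTP host, credentials and addresses here are placeholders:

        using System.Collections.Generic;
        using System.Net;
        using System.Net.Mail;

        public class NotificationSender
        {
            // Sends one notification per recipient through an SMTP relay.
            public void Send(IEnumerable<string> recipients, string subject, string body)
            {
                using (var client = new SmtpClient("smtp.example.com", 25))
                {
                    client.Credentials = new NetworkCredential("user", "password"); // placeholder credentials

                    foreach (var to in recipients)
                    {
                        using (var message = new MailMessage("noreply@example.com", to, subject, body))
                        {
                            client.Send(message);
                        }
                    }
                }
            }
        }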

    Read the article

  • GIS-based data visualization and maintenance tool

    - by Dave Jarvis
    Background: Looking to leverage an existing GIS system for exploring organizational data.

    Architecture: The following figure represents a high-level overview of the system's desired features. The most basic usage would be as follows:

    1. The user visits a web site.
    2. The system presents a map (having regions, cities, and buildings).
    3. The user drills down on the map to a particular building.
    4. The system provides a basic CRUD interface.
    5. The user can view and modify information about personnel (e.g., their assigned teams), equipment (e.g., network appliances), applications, and the building itself (e.g., contact and phone numbers).

    Ideally, all the components should be open source (or otherwise free).

    Problem: This must be a small project that needs a quick (but functional) prototype, mostly to confirm whether or not such a system would be useful in the long term.

    Questions: What software components would you use to quickly develop a working prototype? What open-source solutions already exist, if any?

    Ideas: Here is what I am thinking:

    - PostGIS - define the regions, cities, and sites
    - Google Maps - display an interactive, clickable map
    - geoJSON - protocol between PostGIS and Google Maps
    - Seam - CRUD interface
    - Custom development, which would entail: installation and configuration (SSH for remote logins, Subversion or git, PostgreSQL, PostGIS, Java, Tomcat, Seam, JasperReports); entering GIS information into PostGIS; aggregating data sources into the PostgreSQL database; developing the starting page for the map interface; developing the clickable Google Maps interface; developing summary reports; and developing a CRUD interface using Seam for data maintenance.

    Surely something like this already exists? Thank you!

    Read the article

  • Security of logging people in automatically from another app?

    - by Simon
    I have 2 apps. They both have accounts, and each account has users. These apps are going to share the same users and accounts, and they will always be in sync. I want to be able to log in automatically from one app to the other. My solution is to generate a login_key once a day, for example 2sa7439e-a570-ac21-a2ao-z1qia9ca6g25, and provide an automated login link to the other app... for example, if the user clicks on https://account_name.securityhole.io/login/2sa7439e-a570-ac21-a2ao-z1qia9ca6g25/user/123 they are logged in automatically and a session is created. So here we have 3 things that an intruder has to get right in order to gain access: the account name, the login key, and the user ID. Bad idea? Or should I go down the path of making one app an OAuth provider? Or is there a better way?
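
    For illustration only, a minimal sketch of generating such a daily login_key from a cryptographically secure random source rather than anything guessable; the key length is an assumption, and the URL format is simply the one described above:

        using System;
        using System.Security.Cryptography;

        static class LoginKeySketch
        {
            // Generate an unguessable key; 32 random bytes gives 256 bits of entropy.
            public static string NewLoginKey()
            {
                var bytes = new byte[32];
                using (var rng = RandomNumberGenerator.Create())
                {
                    rng.GetBytes(bytes);
                }
                return BitConverter.ToString(bytes).Replace("-", "").ToLowerInvariant();
            }

            // Build the automated login link in the format described in the question.
            public static string LoginUrl(string accountName, string loginKey, int userId)
            {
                return $"https://{accountName}.securityhole.io/login/{loginKey}/user/{userId}";
            }
        }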

    Read the article

  • Dictionary as DataMember in WCF after installing .NET 4.5 [migrated]

    - by Mauricio Ulate
    After installing .NET Framework 4.5 with Visual Studio 2012, whenever I add a reference to a WCF service, my dictionaries are changed into arrays. For example, Dictionary<int, double> is changed into ArrayOfKeyValueOfintdoubleKeyValueOfintdouble. This happens in both Visual Studio 2012 and 2010 (both Express). I've reviewed my configuration, and the dictionary data type in the service reference configuration is System.Collections.Generic.Dictionary. Changing this doesn't make a difference. Reverting to just using Visual Studio 2010 and .NET 4.0 is not an option.
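
    A sketch of the kind of contract being described; the type and member names here are illustrative, not taken from the original service:

        using System.Collections.Generic;
        using System.Runtime.Serialization;

        [DataContract]
        public class MeasurementSet
        {
            // On the service side this is a plain generic dictionary; on the client it
            // shows up as the generated ArrayOfKeyValueOfintdouble type unless the
            // service reference is configured to reuse the Dictionary collection type.
            [DataMember]
            public Dictionary<int, double> Values { get; set; }
        }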

    Read the article

  • Best way to use GIT to maintain web application template

    - by Darren
    I am a sole developer and I have a web application template that I have created in Visual Studio. I am using Git for source control, but only on my development machine. Presently I have a master and I create branches for new features, merging them back into the master as I complete the features. I am at a point now where I am ready to use the template for deployments, and of course I want to continue adding new features via branching/merging. My question is: what would be the typical/recommended way for me to create application deployments based on the master? Should I clone the repository into a new directory that is for a particular web application? Or should I also use branching to do project development based on the main project? The projects would never be merged back into the master. However, it would be nice if I could merge future features into the master and have the ability to incorporate them into previously completed projects if desired. For more specific details of my environment: I am using TortoiseGit on Windows 7, Visual Studio 2012, ASP.NET Web Pages. Obviously the main differences between deployments would simply be differing pages, CSS files and jQuery scripts. I found this post as I was writing this one. In order to do this, should I clone the master repository and checkout from it?

    Read the article

  • Nicest way to map rgb colors from html to led

    - by back_ache
    I have attached an RGB LED to a color picker on a webpage and have hit the obvious problem: although the LED is 8-bit like HTML, its color rendition is very different, so for the more subtle shades the LED values for a color are wildly different from the HTML values. The brute-force method would be to have a lookup table on the webserver to map the two sets of values, but I would ideally like to do it more elegantly. Before I start listing all my 101 ideas for doing this, I wondered whether anyone else had come across the issue. The end game would be to abstract the color rendition of different LEDs and make it available as a web service (HTML value and device ID in, LED value out).
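
    One common non-table starting point, offered here only as a hedged sketch rather than a calibrated solution, is per-channel gamma correction, since LED output responds non-linearly compared with how HTML colors are perceived; the exponent would need tuning per device:

        using System;

        static class LedColorMap
        {
            // Map an 8-bit HTML channel value to an 8-bit LED PWM value with a power curve.
            // A gamma of roughly 2.2-2.8 is a typical first guess; the right value depends
            // on the specific LED and driver, so treat this as an approximation.
            public static byte ToLed(byte htmlChannel, double gamma = 2.5)
            {
                double normalized = htmlChannel / 255.0;
                return (byte)Math.Round(Math.Pow(normalized, gamma) * 255.0);
            }
        }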

    Read the article

  • How to manage primary key while updating [migrated]

    - by Subin Jacob
    In the following table, primaryKeyColumn is the primary key. To maintain data history I always read values with a WHERE condition (WHERE StatusColumn = 1), and I set StatusColumn to 0 when the data is edited (so that I keep the previous data). The problem is that once I update a row to 0, I can't insert the same key into primaryKeyColumn again, since the column is validated as a primary key. How can I manage this kind of validation? What mistake did I make in this design?

        primaryKeyColumn    ValueColumn    StatusColumn
        ----------------    -----------    ------------
        2                   Name1          1
        3                   Name2          1
        4                   Name3          0

    Read the article

  • Is there a way to add unique items to an array without doing a ton of comparisons?

    - by hydroparadise
    Please bear with me; I want this to be as language agnostic as possible because of the languages I am working with (one of which is a language called PowerOn). However, most languages support for loops and arrays. Say I have the following list in an array:

        0x  0  Foo
        1x  1  Bar
        2x  0  Widget
        3x  1  Whatsit
        4x  0  Foo
        5x  1  Bar

    Anything with a 1 should be uniquely added to another array, with the following result:

        0x  1  Bar
        1x  1  Whatsit

    Keep in mind this is a very elementary example. In reality, I am dealing with tens of thousands of elements in the old list. Here is what I have so far, in pseudocode:

        For each element in oldList
            For each element in newList
                Compare
                If value of oldList element equals newList element, break newList loop
            If reached end of newList with nothing equal from oldList, add value from oldList to newList
        End
        End

    Is there a better way of doing this? Algorithmically, is there any room for improvement? And as a bonus question, what is the O notation for this type of algorithm (if there is one)?
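
    For comparison, a minimal C# sketch of trading the inner loop for a hash-based lookup (PowerOn may have no equivalent structure, so this only illustrates the idea): the nested-loop version above is roughly O(n*m), while a set lookup is O(1) on average, making the whole pass roughly O(n).

        using System;
        using System.Collections.Generic;

        class UniqueFilterSketch
        {
            static void Main()
            {
                var values  = new[] { "Foo", "Bar", "Widget", "Whatsit", "Foo", "Bar" };
                var flagged = new[] { false, true, false, true, false, true };

                var seen    = new HashSet<string>();  // remembers values already added
                var newList = new List<string>();

                for (int i = 0; i < values.Length; i++)
                {
                    // Only items flagged with 1, and only the first occurrence of each value.
                    // HashSet.Add returns false if the value was already present.
                    if (flagged[i] && seen.Add(values[i]))
                        newList.Add(values[i]);
                }

                foreach (var item in newList)
                    Console.WriteLine(item);  // prints: Bar, Whatsit
            }
        }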

    Read the article

  • Is comparing an OO compiler to a SQL compiler/optimizer valid?

    - by Brad
    I'm now doing a lot of SQL development at my new job, whereas before I was doing object-oriented desktop app work. I keep running across very large scripts (thousands of lines) and wanting to refactor in some way. I am seeing that SQL is a different sort of beast, and it's probably fine to have these big scripts for the most part, but while explaining this to me people are also insisting that the whole idea of refactoring is bad: that something like the .NET compiler is actually burdened by refactored code, and that a big wall of code is more efficient and better design than code designed for reuse, readability and scalability. The other argument is that OO compilers are almost dangerously inefficient, don't have efficient memory management, or run too many CPU instructions compared to older "simpler" compilers and compared to SQL. Are these valid complaints? Even if some compiler like a C compiler is modestly more "efficient" (whatever that means at this high a level without seeing code), would you want to write applications in C over C# or Java? Is comparing an OO compiler to a SQL compiler/optimizer even valid?

    Read the article

  • How to write good code with new stuff?

    - by Reza M.
    I always try to write easily readable code that is well structured. I face a particular problem when I am messing around with something new. I keep changing the code, the structure, and so many other things. In the end, I look at the code and am annoyed at how complicated it became when I was trying to do something so simple. Once I've completed something, I refactor it heavily so that it's cleaner. This occurs after completion most of the time, and it is annoying because the bigger the code, the more annoying it is to rewrite it. I am curious: how do people deal with such agony, especially on big projects shared between many people?

    Read the article

  • How to facilitate code reviews in a small team for embedded software?

    - by Adam Lewis
    Short question: Does a cost-effective tool / workflow exist to facilitate code reviews in a small team? More specifically, a small team that relies on post-commit code reviews.

    Background: Our team currently consists of 3 full-time and 1 part-time software engineers, with plans on hiring more in the near future. Due to our team size and the volume of projects we all must juggle, the pre-commit workflow that major tools (such as Review Board and Code Collaborator) use is not obtainable for us right now. The best we can do at the moment is to perform post-commit reviews before major releases or as time permits. Nearly all of our projects are hosted on RepositoryHosting.com (which I highly recommend) and contain a mixture of SVN and Git repositories.

    Current thoughts: Since I cannot find a tool that fits our needs right now, I am turning to TRAC, which is built into our repository's site. At the moment we use TRAC to file tickets and track milestones, so to me this seems like a natural fit for code review results as well. The direction I am heading in right now is to use a spreadsheet (or spreadsheets) to log all of the bugs and comments, do some macro magic to get it into a format that TRAC's ticket-import method can use, and then let TRAC's ticketing system create the action items / bug reports automatically. The auto ticket generation is darn near a must-have; adding in bugs and comments one at a time from a web GUI is really painful.

    Secondary question: If this workflow makes sense, is there a good / standard template to use as a code review log?

    Read the article

  • How would I implement this application idea?

    - by Mike Wills
    I am a D&D gamer and a developer that has mostly worked with ASP.NET applications professionally. I have written some chat bots in Node.js, and I have only played a little with PHP but have written nothing serious. I have been inspired to create a site that allows a person to keep track of characters (aka the character sheet). I am thinking of using this as a learning opportunity to learn NoSQL and to write a full JavaScript front-end. I want this application to save a value as I change it, so if I edit the armor class it is saved immediately instead of waiting until I hit a submit button. I think that will make it easier to use while gaming, and I won't lose anything because I forgot to save a change. I have never done anything like this. How do you implement this style of application? Is there a tutorial or how-to to get me on the right path? While I would really like to use ASP.NET, I don't have a Windows server to publish on (and I really can't afford to pay for a service). What language that runs on Linux would work well for this type of application? Note: I feel NoSQL would work in this case because of the sheer number of tables that would be required to create something like this in SQL.

    Read the article

  • How to attach WAR file in email from jenkins

    - by birdy
    We have a case where a developer needs to access the last successfully built WAR file from Jenkins. However, they can't access the Jenkins server. I'd like to configure Jenkins so that on every successful build it sends the WAR file to this user. I've installed the ext-email plugin and it seems to be working fine. Emails are being received along with the build.log. However, the WAR file isn't being received. The WAR file lives at this path on the server: /var/lib/jenkins/workspace/Ourproject/dist/our.war So I configured the attachment under Post-build Actions accordingly. The problem is that emails are sent but the WAR file isn't being attached. Do I need to do something else?

    Read the article

  • C# Interview Preparation - References?

    - by Kanini
    This is a specific question relating to C#; however, it can be extrapolated to other languages too. While preparing for an interview for a C# developer position (ASP.NET or WinForms or ...), what would be the typical reference material to look at? Are there any good books or interview-question collections one should study to be better prepared? This is just to know the different scenarios. For example, I might be comfortable writing SQL stored procedures and queries, but I might stumble when suddenly asked: "Given an Employee table with the columns EmployeeId, EmployeeName, ManagerId, write a SQL query that returns each employee's name along with their manager's name." NOTE: I am not asking for a question bank so that I can learn by rote what the questions are and reproduce them (which obviously will NOT work!).

    Read the article

  • How should I architect my Model and Data Access layer objects in my website?

    - by Robin Winslow
    I've been tasked with designing the data layer for a website at work, and I am very interested in architecting the code for the best flexibility, maintainability and readability. I am generally acutely aware of the value in completely separating my actual Models from the Data Access layer, so that the Models are completely naive when it comes to data access. In this case it's particularly useful, as the Models may be built from the database or from a SOAP web service. So it seems to make sense to have factories in my data access layer which create Model objects. Here's what I have so far (in my made-up pseudocode):

        class DataAccess.ProductsFromXml extends DataAccess.ProductFactory {}
        class DataAccess.ProductsFromDatabase extends DataAccess.ProductFactory {}

    These then get used in the controller in a fashion similar to the following:

        var xmlProductCreator = DataAccess.ProductsFromXml(xmlDataProvider);
        var databaseProductCreator = DataAccess.ProductsFromDatabase(dbDataProvider);

        // Returns array of Product model objects
        var XmlProducts = xmlProductCreator.Products();

        // Returns array of Product model objects
        var DbProducts = databaseProductCreator.Products();

    So my question is, is this a good structure for my Data Access layer? Is it a good idea to use a Factory for building my Model objects from the data? Do you think I've misunderstood something? And are there any general patterns I should read up on for how to write my data access objects to create my Model objects?
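
    For reference, a hedged C# rendering of the same idea, with the factory behind an interface so a controller depends only on the abstraction; all type names here are illustrative, not prescribed:

        using System.Collections.Generic;

        // Naive model: no data-access code at all.
        public class Product
        {
            public string Name { get; set; }
        }

        public interface IProductFactory
        {
            IReadOnlyList<Product> Products();
        }

        public class ProductsFromXml : IProductFactory
        {
            private readonly string _xml;
            public ProductsFromXml(string xml) { _xml = xml; }

            public IReadOnlyList<Product> Products()
            {
                // Parse the XML (or SOAP response) into Product objects here.
                return new List<Product>();
            }
        }

        public class ProductsFromDatabase : IProductFactory
        {
            private readonly string _connectionString;
            public ProductsFromDatabase(string connectionString) { _connectionString = connectionString; }

            public IReadOnlyList<Product> Products()
            {
                // Query the database and map rows to Product objects here.
                return new List<Product>();
            }
        }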

    Read the article

  • Finding complexity of a program as a service [on hold]

    - by Seshu
    I would like to find the complexity of a specific chunk of code written in Java. Is there a place/website/service where I can find out the complexity of an arbitrary program? The program might include loops and recursion. Using theory we can compute the complexity ourselves, but I'm just curious whether any service is out there that does it. We have several code-quality-related tools; do any of them also report the complexity of given code? Could anyone point me or direct me to such a utility/site/service?

    Read the article

  • Designing a Content-Based ETL Process with .NET and SFDC

    - by Patrick
    As my firm makes the transition to using SFDC as our main operational system, we've spun together a couple of SFDC portals where we can post customer-specific documents to be viewed at will. As such, we've had the need to implement pseudo-ETL applications that are able to extract metadata from the documents our analysts generate internally (most are industry-standard PDF, XML, or MS Office formats) and place in networked "queue" folders. From there, our applications scoop up the queued documents and upload them to the appropriate SFDC CRM Content Library along with some select pieces of metadata. I've mostly used DbAmp to broker communication with SFDC (DbAmp is a Linked Server provider that allows you to use SQL conventions to interact with your SFDC Org data). I've been able to create [console] applications in C# that work pretty well, and they're usually structured something like this:

        static void Main()
        {
            // Load parameters from app.config.

            // Get documents from queue.
            var files = someInterface.GetFiles(someFilterOrRegexPattern);

            foreach (var file in files)
            {
                // Extract metadata from the file.

                // Validate some attributes of the file; add any validation errors to an
                // in-memory structure (e.g. List<ValidationErrors>).

                if (isValid)
                {
                    var fileData = File.ReadAllBytes(file);

                    // Upload using some wrapper for an ORM or DAL.
                    someInterface.Upload(fileData, meta.Param1, meta.Param2, ...);
                }
                else
                {
                    // Bounce the file.
                }
            }

            // Report any validation errors (via message bus or SMTP or some such).
        }

    And that's pretty much it. Most of the time I wrap all these operations in a "Worker" class that takes the needed interfaces as constructor parameters. This approach has worked reasonably well, but I just get this feeling in my gut that there's something awful about it and would love some feedback. Is writing an ETL process as a C# console app a bad idea? I'm also wondering if there are some design patterns that would be useful in this scenario that I'm clearly overlooking. Thanks in advance!

    Read the article

  • "Dedication of the Harvard Mark I computer, 1944 August 7"- Which text is Brooks referring to and where can I find it?

    - by JW01
    I am reading the epilogue of the Anniversary Edition of The Mythical Man-Month. The author, Frederick Brooks, says: "Still vivid in my mind is the wonder and delight with which I - then 13 years old - read the account of the August 7, 1944, dedication of the Harvard Mark I computer..." Which text is he referring to? I want to be filled with wonder and delight too. Where can I get hold of this text so that I can read it too?

    Read the article

  • How to prepare for the GRE Computer Science Subject Test?

    - by Maddy.Shik
    How do I prepare for the GRE Computer Science subject test? Are there any standard textbooks I should follow? I want to score as competitively as possible. What are some good references? Is there anything that top schools like CMU, MIT, and Stanford would expect? For example, Cormen et al. is considered very good for algorithms. Please tell me the standard textbooks for each subject covered by the test, like Computer Architecture, Database Design, Operating Systems, Discrete Maths, etc.

    Read the article

  • Terminology: .NET C++ vs. traditional C++

    - by Mike Clark
    Hello. I've recently been working with a team that's using both .NET C++ and pre-.NET C++. I fully understand the technical differences between the two technologies. However, I sometimes feel like I'm floundering when it comes to the terminology used to differentiate the two. Example: Say we have two projects: ProjectA contains "C++" code that builds a .NET assembly DLL. ProjectB contains Visual C++ code that builds a traditional native Windows DLL. What is the best way to succinctly and terminologically draw a distinction between the two projects? Again, I'm not asking for an in-depth technical description of the differences between the two technologies. I'm just looking for names and labels. This is how I might try to make the distinction when talking to someone about Project A and Project B: "ProjectA is a managed .NET C++ project" and "ProjectB is an unmanaged Visual C++ DLL project." However, I am not at all certain that this terminology is ideal, or even correct. Please describe what you feel the ideal language to use in this situation (or similar situations) might be. Feel free to motivate your answer.

    Read the article

  • EF4 generates invalid script

    - by Jaxidian
    When I right-click in a .EDMX file and click Generate Database From Model, the resulting script is obviously wrong because of the table names. What it generates is the following script. Note the table names in the DROP TABLE part versus the CREATE TABLE part. Why is this inconsistent? This is obviously not a reusable script. What I created was an Entity named "Address" and an Entity named "Company", etc. (all singular). The EntitySet names are pluralized. The "Pluralize New Objects" boolean does not change this either. So what's the deal? For what it's worth, I originally generated the EDMX by pointing it at a database that had tables with non-pluralized names, and now that I've made some changes, I want to go back the other way. I'd like to have the option to go back and forth, as neither the db-first nor the model-first approach is ideal in all scenarios, and I have the control to ensure that there will be no merging issues from multiple people going both ways at the same time.

        -- --------------------------------------------------
        -- Dropping existing FOREIGN KEY constraints
        -- NOTE: if the constraint does not exist, an ignorable error will be reported.
        -- --------------------------------------------------
        ALTER TABLE [Address] DROP CONSTRAINT [FK_Address_StateID-State_ID];
        GO
        ALTER TABLE [Company] DROP CONSTRAINT [FK_Company_AddressID-Address_ID];
        GO
        ALTER TABLE [Employee] DROP CONSTRAINT [FK_Employee_BossEmployeeID-Employee_ID];
        GO
        ALTER TABLE [Employee] DROP CONSTRAINT [FK_Employee_CompanyID-Company_ID];
        GO
        ALTER TABLE [Employee] DROP CONSTRAINT [FK_Employee_PersonID-Person_ID];
        GO
        ALTER TABLE [Person] DROP CONSTRAINT [FK_Person_AddressID-Address_ID];
        GO

        -- --------------------------------------------------
        -- Dropping existing tables
        -- NOTE: if the table does not exist, an ignorable error will be reported.
        -- --------------------------------------------------
        DROP TABLE [Address];
        GO
        DROP TABLE [Company];
        GO
        DROP TABLE [Employee];
        GO
        DROP TABLE [Person];
        GO
        DROP TABLE [State];
        GO

        -- --------------------------------------------------
        -- Creating all tables
        -- --------------------------------------------------
        -- Creating table 'Addresses'
        CREATE TABLE [Addresses] ([ID] int IDENTITY(1,1) NOT NULL, [StreetAddress] nvarchar(100) NOT NULL, [City] nvarchar(100) NOT NULL, [StateID] int NOT NULL, [Zip] nvarchar(10) NOT NULL);
        GO
        -- Creating table 'Companies'
        CREATE TABLE [Companies] ([ID] int IDENTITY(1,1) NOT NULL, [Name] nvarchar(100) NOT NULL, [AddressID] int NOT NULL);
        GO
        -- Creating table 'People'
        CREATE TABLE [People] ([ID] int IDENTITY(1,1) NOT NULL, [FirstName] nvarchar(100) NOT NULL, [LastName] nvarchar(100) NOT NULL, [AddressID] int NOT NULL);
        GO
        -- Creating table 'States'
        CREATE TABLE [States] ([ID] int IDENTITY(1,1) NOT NULL, [Name] nvarchar(100) NOT NULL, [Abbreviation] nvarchar(2) NOT NULL);
        GO
        -- Creating table 'Employees'
        CREATE TABLE [Employees] ([ID] int IDENTITY(1,1) NOT NULL, [PersonID] int NOT NULL, [CompanyID] int NOT NULL, [Position] nvarchar(100) NOT NULL, [BossEmployeeID] int NULL);
        GO

        -- --------------------------------------------------
        -- Creating all PRIMARY KEY constraints
        -- --------------------------------------------------
        -- Creating primary key on [ID] in table 'Addresses'
        ALTER TABLE [Addresses] ADD CONSTRAINT [PK_Addresses] PRIMARY KEY ([ID]);
        GO
        -- Creating primary key on [ID] in table 'Companies'
        ALTER TABLE [Companies] ADD CONSTRAINT [PK_Companies] PRIMARY KEY ([ID]);
        GO
        -- Creating primary key on [ID] in table 'People'
        ALTER TABLE [People] ADD CONSTRAINT [PK_People] PRIMARY KEY ([ID]);
        GO
        -- Creating primary key on [ID] in table 'States'
        ALTER TABLE [States] ADD CONSTRAINT [PK_States] PRIMARY KEY ([ID]);
        GO
        -- Creating primary key on [ID] in table 'Employees'
        ALTER TABLE [Employees] ADD CONSTRAINT [PK_Employees] PRIMARY KEY ([ID]);
        GO

        -- --------------------------------------------------
        -- Creating all FOREIGN KEY constraints
        -- --------------------------------------------------
        -- Creating foreign key on [StateID] in table 'Addresses'
        ALTER TABLE [Addresses] ADD CONSTRAINT [FK_Address_StateID_State_ID] FOREIGN KEY ([StateID]) REFERENCES [States] ([ID]) ON DELETE NO ACTION ON UPDATE NO ACTION;
        -- Creating non-clustered index for FOREIGN KEY 'FK_Address_StateID_State_ID'
        CREATE INDEX [IX_FK_Address_StateID_State_ID] ON [Addresses] ([StateID]);
        GO
        -- Creating foreign key on [AddressID] in table 'Companies'
        ALTER TABLE [Companies] ADD CONSTRAINT [FK_Company_AddressID_Address_ID] FOREIGN KEY ([AddressID]) REFERENCES [Addresses] ([ID]) ON DELETE NO ACTION ON UPDATE NO ACTION;
        -- Creating non-clustered index for FOREIGN KEY 'FK_Company_AddressID_Address_ID'
        CREATE INDEX [IX_FK_Company_AddressID_Address_ID] ON [Companies] ([AddressID]);
        GO
        -- Creating foreign key on [AddressID] in table 'People'
        ALTER TABLE [People] ADD CONSTRAINT [FK_Person_AddressID_Address_ID] FOREIGN KEY ([AddressID]) REFERENCES [Addresses] ([ID]) ON DELETE NO ACTION ON UPDATE NO ACTION;
        -- Creating non-clustered index for FOREIGN KEY 'FK_Person_AddressID_Address_ID'
        CREATE INDEX [IX_FK_Person_AddressID_Address_ID] ON [People] ([AddressID]);
        GO
        -- Creating foreign key on [CompanyID] in table 'Employees'
        ALTER TABLE [Employees] ADD CONSTRAINT [FK_Employee_CompanyID_Company_ID] FOREIGN KEY ([CompanyID]) REFERENCES [Companies] ([ID]) ON DELETE NO ACTION ON UPDATE NO ACTION;
        -- Creating non-clustered index for FOREIGN KEY 'FK_Employee_CompanyID_Company_ID'
        CREATE INDEX [IX_FK_Employee_CompanyID_Company_ID] ON [Employees] ([CompanyID]);
        GO
        -- Creating foreign key on [BossEmployeeID] in table 'Employees'
        ALTER TABLE [Employees] ADD CONSTRAINT [FK_Employee_BossEmployeeID_Employee_ID] FOREIGN KEY ([BossEmployeeID]) REFERENCES [Employees] ([ID]) ON DELETE NO ACTION ON UPDATE NO ACTION;
        -- Creating non-clustered index for FOREIGN KEY 'FK_Employee_BossEmployeeID_Employee_ID'
        CREATE INDEX [IX_FK_Employee_BossEmployeeID_Employee_ID] ON [Employees] ([BossEmployeeID]);
        GO
        -- Creating foreign key on [PersonID] in table 'Employees'
        ALTER TABLE [Employees] ADD CONSTRAINT [FK_Employee_PersonID_Person_ID] FOREIGN KEY ([PersonID]) REFERENCES [People] ([ID]) ON DELETE NO ACTION ON UPDATE NO ACTION;
        -- Creating non-clustered index for FOREIGN KEY 'FK_Employee_PersonID_Person_ID'
        CREATE INDEX [IX_FK_Employee_PersonID_Person_ID] ON [Employees] ([PersonID]);
        GO

        -- --------------------------------------------------
        -- Script has ended
        -- --------------------------------------------------

    Read the article

  • How can we make agile enjoyable for developers that like to personally, independently own large chunks from start to finish

    - by Kris
    We’re roughly midway through our transition from waterfall to agile using scrum; we’ve changed from large teams in technology/discipline silos to smaller cross-functional teams. As expected, the change to agile doesn’t suit everyone. There are a handful of developers that are having a difficult time adjusting to agile. I really want to keep them engaged and challenged, and ultimately enjoying coming to work each day. These are smart, happy, motivated people that I respect on both a personal and a professional level. The basic issue is this: Some developers are primarily motivated by the joy of taking a piece of difficult work, thinking through a design, thinking through potential issues, then solving the problem piece by piece, with only minimal interaction with others, over an extended period of time. They generally complete work to a high level of quality and in a timely way; their work is maintainable and fits with the overall architecture. Transitioning to a cross-functional team that values interaction and shared responsibility for work, and delivery of working functionality within shorter intervals, the teams evolve such that the entire team knocks that difficult problem over. Many people find this to be a positive change; someone that loves to take a problem and own it independently from start to finish loses the opportunity for work like that. This is not an issue with people being open to change. Certainly we’ve seen a few people that don’t like change, but in the cases I’m concerned about, the individuals are good performers, genuinely open to change, they make an effort, they see how the rest of the team is changing and they want to fit in. It’s not a case of someone being difficult or obstructionist, or wanting to hoard the juiciest work. They just don’t find joy in work like they used to. I’m sure we can’t be the only place that has bumped up against this. How have others approached it? If you’re a developer that is motivated by personally owning a big chunk of work from end to end, and you’ve adjusted to a different way of working, what did it for you?

    Read the article
