Search Results

Search found 16554 results on 663 pages for 'programmers identity'.

Page 155/663

  • How to document/verify consistent layering?

    - by Morten
    I have recently moved to the dark side: I am now a CUSTOMER of software development -- mainly websites. With this new role come new concerns. As a programmer I know how solid an application becomes when it is properly layered, and I want to use this knowledge in my new job. I don't want business logic in my presentation layer, and certainly not presentation code in my data layer. Thus, I want to be able to demand from my supplier that they document the level of layering, and how neat and consistent the layering is. The big question is: how is the level of layering documented to me as a customer, and is that a reasonable demand for me to make, so that I don't have to look at the code myself (I'm not supposed to do that anymore)?
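    One way a supplier can back such documentation is with a mechanical dependency check whose report is handed to the customer. Below is a minimal C# sketch, assuming the layers are split into separately named .NET assemblies (the "Acme.Shop.*" names are hypothetical); it only verifies that the data layer never references presentation assemblies, which is the kind of evidence a customer can ask for without reading the code.

        using System;
        using System.Linq;
        using System.Reflection;

        static class LayeringCheck
        {
            // A layer passes if none of its referenced assemblies start with a forbidden prefix.
            public static bool RespectsLayering(Assembly layer, params string[] forbiddenPrefixes) =>
                layer.GetReferencedAssemblies()
                     .All(reference => !forbiddenPrefixes.Any(prefix =>
                         reference.Name != null &&
                         reference.Name.StartsWith(prefix, StringComparison.OrdinalIgnoreCase)));

            public static void Main()
            {
                // Hypothetical assembly names standing in for the supplier's real layers.
                var dataLayer = Assembly.Load("Acme.Shop.Data");
                Console.WriteLine(RespectsLayering(dataLayer, "Acme.Shop.Web", "Acme.Shop.Presentation")
                    ? "OK: the data layer does not reference presentation code."
                    : "Violation: the data layer references presentation code.");
            }
        }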

    Read the article

  • Why should I use Zend_Application?

    - by Billy ONeal
    I've been working on a Zend Framework application which currently does a bunch of things through Zend_Application and a few resource plugins written for it. However, looking at this codebase now, it seems to me that using Zend_Application just makes things more complicated; a plain, more "traditional" bootstrap file would do a better job of being transparent. This is even more the case because the individual components of Zend -- Zend_Controller, Zend_Navigation, etc. -- don't reference Zend_Application at all. Therefore they say things like "just call setRoute and be on your way," and the user is left scratching their head as to how to implement that in terms of the application.ini configuration file. This is not to say that one can't figure out what's going on by spelunking through the ZF source code. My problem with that approach is that it's too easy to end up depending on something that's an implementation detail rather than a contract, and that all it seems to do is add an extra layer of indirection one must wade through to understand an application. I look at pre-ZF 1.8 example code, before Zend_Application existed, and everywhere I see plain bootstrap files that set up the MVC framework and get on their way. The code is clear and easy to understand, even if it is a bit repetitive. I like the DRY that Zend_Application gets you, but particularly since I'm assuming the first people looking at the app's code won't really be familiar with Zend at all, I'm considering dropping any dependence I have on Zend_Application and returning to a traditional bootstrap file. Now, my concern here is that I don't have much experience doing this, and I don't want to get rid of Zend_Application if it does something particularly important of which I am unaware, or something of that nature. Is there a really good reason I should keep it around?

    Read the article

  • Recaptcha php problem [closed]

    - by Sam Gabriel
    Hey guys, I'm using reCAPTCHA and I've got a problem: when a user clicks the signup button it redirects him to the signup verification page, and here is the code, found at the very top of that page, that checks the entered reCAPTCHA data:

        <?php
        require_once('recaptchalib.php');
        $privatekey = "***";
        $resp = recaptcha_check_answer($privatekey,
                                       $_SERVER["REMOTE_ADDR"],
                                       $_POST["recaptcha_challenge_field"],
                                       $_POST["recaptcha_response_field"]);
        if (!$resp->is_valid) {
            header('location: signup.php');
        }
        ?>

    But it seems that whatever I type into the reCAPTCHA box, be it right or wrong, I get redirected to the signup.php page. Here is the reCAPTCHA code in the signup.php page:

        <?php
        ini_set('display_errors', 'On');
        error_reporting(E_ALL | E_STRICT);
        require_once('recaptchalib.php');
        $publickey = "***";
        echo recaptcha_get_html($publickey);
        ?>

    Read the article

  • Books for MCSD and advice

    - by Mahesha999
    Hi there, I am thinking of getting MCSD certification: http://www.microsoft.com/learning/en/us/certification/cert-mcsd-web-applications.aspx

    I did find books for the first exam, 480, covering CSS3, HTML5 and JavaScript. However, I could not find books for the other exams:

    486: ASP.NET MVC 4.5 Apps -- will ASP.NET 4 books suffice for this? Should I also learn Web Forms, even though I already know a considerable part of it?

    487: Windows Azure and Web Services -- what book should I use? It seems that the syllabus is huge and will take considerable time.

    Does anyone have advice on tackling such exams, since this is going to be my first one? How should I prepare? Should I take this exam? Will it help? Sorry, I know I asked many questions in one -- a bad practice -- but the books question is a big concern for me.

    Read the article

  • Can the csv format be defined by a regex?

    - by Spencer Rathbun
    A colleague and I have recently argued over whether a pure regex is capable of fully encapsulating the CSV format, such that it is capable of parsing all files with any given escape char, quote char, and separator char. The regex need not be capable of changing these chars after creation, but it must not fail on any other edge case. I have argued that this is impossible for just a tokenizer. The only regex that might be able to do this is a very complex PCRE style that moves beyond just tokenizing. I am looking for something along the lines of: "... the CSV format is a context-free grammar and as such, it is impossible to parse with regex alone ..." Or am I wrong? Is it possible to parse CSV with just a POSIX regex? For example, if both the escape char and the quote char are ", then these two lines are valid CSV:

        """this is a test.""",""
        "and he said,""What will be, will be."", to which I replied, ""Surely not!""","moving on to the next field here..."
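    For reference, a minimal C# sketch of the usual tokenizing regex for the dialect described here (escape char and quote char are both "): it splits one physical line into fields and undoubles the quotes. It is only a tokenizer under those fixed assumptions; it does not handle embedded line breaks or a configurable separator, which is exactly where the "regex alone" approach starts to strain.

        using System;
        using System.Text.RegularExpressions;

        class CsvFieldDemo
        {
            // A field is either a quoted field (with "" as an escaped quote) or a bare field,
            // each preceded by the start of the line or a comma.
            static readonly Regex Field = new Regex(
                "(?:^|,)(?:\"(?<q>(?:[^\"]|\"\")*)\"|(?<b>[^,\"]*))");

            static void Main()
            {
                string line = "\"\"\"this is a test.\"\"\",\"\"";
                foreach (Match m in Field.Matches(line))
                {
                    string value = m.Groups["q"].Success
                        ? m.Groups["q"].Value.Replace("\"\"", "\"")   // undouble escaped quotes
                        : m.Groups["b"].Value;
                    Console.WriteLine("[" + value + "]");
                }
            }
        }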

    Read the article

  • Free NOSQL database for use with C# client [closed]

    - by Mitten
    I've never used NOSQL databases before, but so far they seem like the best data storage solution for my project. I am going to implement a datamining application. The data I would like to mine is thousands of documents which cannot be imported directly into datamining applications. To make the import easier and faster (than importing thousands of documents), I am planning to import these documents into a NOSQL database first and then import the NOSQL database into the datamining software. At the very least, once I have all the data in a NOSQL database I should be able to code the simplest datamining logic myself. Am I correct that NOSQL databases let you create records of data but don't mandate that all records adhere to the same data schema (the same column names/types as in a classic table-oriented SQL database)? I think for each document I would create a row/entry/object (not sure what the correct term is in the NOSQL world) which would have a string id, a few (columns) of unstructured text data, and a dozen or so columns mostly of datetime and integer types. As its name suggests, NOSQL does not support SQL query syntax, but it does support locating an object (row/entry?) by its unique id. Does NOSQL support querying objects using property=value syntax? Unfortunately most free NOSQL databases only support Java/C++ clients; which free NOSQL database would you recommend for a C# programmer?
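    As a concrete illustration of both points (schema-less records and property=value lookups), here is a minimal sketch using MongoDB, one of the free options that ships an official C# driver. The connection string, database, collection and field names are placeholders, not a recommendation of a particular schema.

        using System;
        using MongoDB.Bson;
        using MongoDB.Driver;

        class ImportSketch
        {
            static void Main()
            {
                var client = new MongoClient("mongodb://localhost:27017");
                var documents = client.GetDatabase("mining")
                                      .GetCollection<BsonDocument>("documents");

                // Documents in the same collection are not required to share a schema.
                documents.InsertOne(new BsonDocument
                {
                    { "_id", "doc-0001" },
                    { "text", "unstructured body text of the imported document ..." },
                    { "importedAt", DateTime.UtcNow },
                    { "wordCount", 1842 }
                });

                // property = value style lookup, no SQL involved.
                var filter = Builders<BsonDocument>.Filter.Eq("wordCount", 1842);
                foreach (var doc in documents.Find(filter).ToList())
                    Console.WriteLine(doc["_id"]);
            }
        }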

    Read the article

  • Does learning to develop for iOS create a lock-in?

    - by Jungle Hunter
    If I begin my career (first job) developing on the iOS platform, does that lock me into iOS and Mac OS X development only? By locking me in, I mean: will that create barriers to switching technologies, since I would be working mainly with Objective-C? If so, does that limit my career choices? I'm interested in comparing this with Android development, which, if pursued, will leave me with Java skills (correct me if I'm wrong) that I can use elsewhere.

    Read the article

  • Architecture design with MyBatis mappers

    - by Wolf
    I am creating a REST web service for providing data. I am using Spring MVC for handling REST requests, and MyBatis for data access. The application should be designed so that it is easy to change the data access implementation (for example to Hibernate or something else), and it has to be fast (so I am trying to avoid unnecessarily overcomplicating the design). Now my question is about the general design of the layers. I would normally use a DAO interface and then different implementations for different data access strategies, but MyBatis uses interfaces to access the data. So I can think of two possible models, but I am not sure which one is better or whether there is another, nicer way:

    1. The controller layer uses service layer interfaces; the services are then implemented for each data access strategy. For example, with MyBatis, the service implementation uses mapper classes to access the data, does whatever it needs to do with it, and sends it to the controller layer.

    2. The controller layer uses the service layer, and the service layer uses DAO interfaces; the DAOs are then implemented for each data access strategy. For example, with MyBatis, the DAO class uses a mapper interface to access the data and sends it to the service layer; the service layer then does whatever it needs to do with it and sends it to the controller layer.

    I prefer the first strategy as it seems less complicated, but then I would have to write all of the service code again for another data access strategy. What do you think? Thank you.

    Read the article

  • Increase Performance of VS 2010 by using a SSD

    - by System.Data
    After searching the internet for performance improvements when using Visual Studio 2010 with a solid-state drive, I heard a lot of different opinions. A lot of people said that there isn't really a benefit to using an SSD, but in contrast others said the exact opposite. I am a bit confused by the contrasting opinions and I cannot really decide whether buying an SSD would make a difference. What are your experiences with this issue, and which SSD did you use?

    Read the article

  • Handling permissions in a MVP application

    - by Chathuranga
    In a Windows Forms payroll application employing the MVP pattern (for a small-scale client) I'm planning permission-based user permission handling as follows, since its implementation should basically be uncomplicated and straightforward. NOTE: The system could be used by a few users simultaneously (three at most) and the database is on the server side. This is my User model; each user has a list of permissions granted to them:

        class User
        {
            string UserID { get; set; }
            string Name { get; set; }
            string NIC { get; set; }
            string Designation { get; set; }
            string PassWord { get; set; }
            List<string> PermissionList = new List<string>();
            bool status { get; set; }
            DateTime EnteredDate { get; set; }
        }

    When a user logs in, the system keeps the current user in memory. For example, in the BankAccountDetailEntering view I control the permission as follows:

        public partial class BankAccountDetailEntering : Form
        {
            bool AccountEditable { get; set; }

            private void BankAccountDetailEntering_Load(object sender, EventArgs e)
            {
                cmdEditAccount.Enabled = false;
                OnLoadForm(sender, e);   // Event fires...
                if (AccountEditable)
                {
                    cmdEditAccount.Enabled = true;
                }
            }
        }

    For this purpose all the relevant presenters (like BankAccountDetailPresenter) should be aware of the User model as well, in addition to the corresponding business model they present to the view:

        class BankAccountDetailPresenter
        {
            BankAccountDetailEntering _View;
            BankAccount _Model;
            User _UserModel;
            DataService _DataService;

            BankAccountDetailPresenter(BankAccountDetailEntering view,
                                       BankAccount model,
                                       User userModel,
                                       DataService dataService)
            {
                _View = view;
                _Model = model;
                _UserModel = userModel;
                _DataService = dataService;
                WireUpEvents();
            }

            private void WireUpEvents()
            {
                _View.OnLoadForm += new EventHandler(_View_OnLoadForm);
            }

            private void _View_OnLoadForm(Object sender, EventArgs e)
            {
                foreach (string s in _UserModel.PermissionList)
                {
                    if (s == "CanEditAccount")
                    {
                        _View.AccountEditable = true;
                        return;
                    }
                }
            }

            public void Show()
            {
                _View.ShowDialog();
            }
        }

    So I'm handling the user permissions in the presenter by iterating through the list. Should this be done in the presenter or in the view? Are there any other, more promising ways to do this? Thanks.
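    One small refinement, sketched below under the same class names as above: give the user model (or a separate authorization service) a single permission-check method, so that neither the view nor the presenter iterates over the raw list. The presenter's load handler then collapses to one assignment, and the view stays a passive consumer of AccountEditable. This is only a sketch of one option, not the one true answer to the presenter-vs-view question.

        using System.Collections.Generic;

        class User
        {
            public List<string> PermissionList { get; } = new List<string>();

            // Central place for the lookup; easy to extend (roles, caching) later.
            public bool Can(string permission)
            {
                return PermissionList.Contains(permission);
            }
        }

        // In the presenter the handler becomes:
        //   private void _View_OnLoadForm(object sender, EventArgs e)
        //   {
        //       _View.AccountEditable = _UserModel.Can("CanEditAccount");
        //   }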

    Read the article

  • Most efficient way to rebuild a tree structure from data

    - by Ahsan
    I have a question about recursively populating a JsTree using the .NET wrapper available via NuGet. Any help would be greatly appreciated. The .NET class JsTree3Node has a property named Children which holds a list of JsTree3Nodes, and the pre-existing table which contains the node structure looks like this:

        NodeId   ParentNodeId   Data          AbsolutePath
        1        NULL           News          /News
        2        1              Financial     /News/Financial
        3        2              StockMarket   /News/Financial/StockMarket

    I have an EF data context for the database, so my code currently looks like this:

        var parentNode = new JsTree3Node(Guid.NewGuid().ToString());
        foreach (var nodeItem in context.Nodes)
        {
            parentNode.Children.Add(nodeItem.Data);
            // What is the most efficient logic to do this recursively?
        }

    As the inline comment says, what would be the most efficient way to load the JsTree data onto the parentNode object? I can change the existing node table to suit the logic, so feel free to suggest any changes that would improve performance.
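    A minimal sketch of one way to do this without recursion: materialize the rows once, index them by NodeId in a dictionary, and attach each node to its parent in a single pass (O(n), rather than a lookup per level). The NodeRow shape is hypothetical, and the JsTree3Node members (the string constructor and the Children list) are assumed to behave as shown in the question; adjust to the wrapper's actual API.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical row shape mirroring the Nodes table.
        class NodeRow
        {
            public int NodeId { get; set; }
            public int? ParentNodeId { get; set; }
            public string Data { get; set; }
        }

        static class TreeBuilder
        {
            public static JsTree3Node Build(IReadOnlyCollection<NodeRow> rows)   // e.g. context.Nodes.ToList()
            {
                var root = new JsTree3Node(Guid.NewGuid().ToString());

                // Pass 1: one JsTree3Node per row, indexed by NodeId.
                // (Constructor argument assumed to carry the node's text/id.)
                var byId = rows.ToDictionary(r => r.NodeId, r => new JsTree3Node(r.Data));

                // Pass 2: hook every node onto its parent (or onto the synthetic root).
                foreach (var row in rows)
                {
                    JsTree3Node parent = root;
                    if (row.ParentNodeId.HasValue && byId.TryGetValue(row.ParentNodeId.Value, out var p))
                        parent = p;
                    parent.Children.Add(byId[row.NodeId]);
                }
                return root;
            }
        }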

    Read the article

  • Which Java web framework do you recommend for intranet webapp (not content website)?

    - by pregzt
    I'm about to start development of a small purpose-built intranet web application for a small software vendor. It will be the administration console of the server that manages licenses for off-the-shelf software installed by users. There will be a few users who need to be able to sign in, issue a batch of license codes, revoke some, renew outdated ones, resolve issues, etc. Bear in mind that my customer requires Java for this solution. I'm a seasoned Java programmer, and I have used different frameworks to implement webapps before, mainly Apache Struts in the past and Spring MVC recently. I was wondering what else you could recommend for such a specific intranet webapp. I have looked at using:

    Google Web Toolkit (possibly with SmartGWT)
    Ext JS for fancy UI widgets with a REST back-end in Spring MVC
    Spring MVC with jQuery UI

    Could you please share any recommendation with regard to the choice I'm about to make?

    Read the article

  • Algorithm to optimize grouping

    - by Jeroen
    I would like to know if there's a known algorithm or best-practice way to do the following. I have a collection with a subcollection, for example:

        R1   R2   R3
        --   --   --
        M    M    M
        N    N
        L    L
             A

    What I need is an algorithm that produces the following result:

        R1, R2: M N L
        R2:     A
        R3:     M

    This is NOT what I want -- it has more repeated values of R than the above:

        R1, R2, R3: M
        R1, R2:     N L
        R2:         A

    I need to group in a way that gives me the most optimized groups of R: the fewer groups of R the better, so that I get the largest subcollections. Another example (with the most obvious result):

        R1   R2   R3
        --   --   --
        M    M    A
        V    V    B
        L    L    C

    should result in:

        R1, R2: M V L
        R3:     A B C

    I need to do this in LINQ/C#. Any solutions? Tips? Links?

    Read the article

  • How can we make agile enjoyable for developers who like to personally and independently own large chunks of work from start to finish?

    - by Kris
    We're roughly midway through our transition from waterfall to agile using Scrum; we've changed from large teams in technology/discipline silos to smaller cross-functional teams. As expected, the change to agile doesn't suit everyone. There are a handful of developers who are having a difficult time adjusting to agile. I really want to keep them engaged and challenged, and ultimately enjoying coming to work each day. These are smart, happy, motivated people that I respect on both a personal and a professional level. The basic issue is this: some developers are primarily motivated by the joy of taking a piece of difficult work, thinking through a design, thinking through potential issues, then solving the problem piece by piece, with only minimal interaction with others, over an extended period of time. They generally complete work to a high level of quality and in a timely way; their work is maintainable and fits with the overall architecture. In a cross-functional team that values interaction, shared responsibility for work, and delivery of working functionality within shorter intervals, the team evolves such that the entire team knocks that difficult problem over. Many people find this to be a positive change, but someone who loves to take a problem and own it independently from start to finish loses the opportunity for work like that. This is not an issue with people being open to change. Certainly we've seen a few people who don't like change, but in the cases I'm concerned about, the individuals are good performers, genuinely open to change; they make an effort, they see how the rest of the team is changing and they want to fit in. It's not a case of someone being difficult or obstructionist, or wanting to hoard the juiciest work. They just don't find joy in their work like they used to. I'm sure we're not the only place that has bumped up against this. How have others approached it? If you're a developer who is motivated by personally owning a big chunk of work from end to end, and you've adjusted to a different way of working, what did it for you?

    Read the article

  • Is it normal to need time to understand code I wrote recently?

    - by user1478167
    By recently I mean a few weeks ago. I am trying to continue a project I left two weeks ago, and I need time to understand some functions I wrote (not copied from somewhere), and it takes me a while. Normally I don't need to, because my functions, methods, etc. are black boxes, but when I need to change something it's really hard. Does this mean I write bad code? I am still in school and I am the only one who writes/uses the code, so I don't have feedback, but I am afraid that if it is difficult for me to understand it, it would be ten times more difficult for someone else. What should I do? I write a lot of comments, but most of the time they are useless when reviewing. Do you have any suggestions?

    Read the article

  • How popular is ITIL in the rest of the world?

    - by Oz123
    I am sorry if this question is not 100% programming related; I just didn't know where else to ask. Consider yourself lucky if you don't know what ITIL is. You can tell from my tone that I don't like it -- I find ITIL the complete opposite of how an IT company should work: too bureaucratic and complicated. In Germany, where I work, it seems to be very popular, and I have been asked in several job interviews whether I know ITIL. Do you know how popular it is in the rest of the world? Should I worry about ITIL, or can I snub it? I must also ask my European colleagues: why do you think ITIL is so popular? Is there strong empirical evidence that ITIL works? By empirical, I mean not personal experiences of the kind "We are a company that is working with ITIL...". I can hardly imagine a multi-million dollar company like Apple or Google working with ITIL, but I can also hardly see how it can benefit small companies...

    Read the article

  • How to store generated eigenfaces for future face recognition?

    - by user3237134
    My code works in the following manner:

    1. First, it obtains several images from the training set.
    2. After loading these images, we find the normalized faces and the mean face and perform several calculations.
    3. Next, we ask for the name of an image we want to recognize.
    4. We then project the input image into the eigenspace, and based on the difference from the eigenfaces we make a decision.
    5. Depending on the eigen weight vector for each input image, we make clusters using the kmeans command.

    Source code I tried:

        clear all
        close all
        clc

        % number of images on your training set.
        M=1200;

        %Chosen std and mean.
        %It can be any number that it is close to the std and mean of most of the images.
        um=60;
        ustd=32;

        %read and show images(bmp);
        S=[];   %img matrix
        for i=1:M
            str=strcat(int2str(i),'.jpg');   %concatenates two strings that form the name of the image
            eval('img=imread(str);');
            [irow icol d]=size(img);   % get the number of rows (N1) and columns (N2)
            temp=reshape(permute(img,[2,1,3]),[irow*icol,d]);   %creates a (N1*N2)x1 matrix
            S=[S temp];   %X is a N1*N2xM matrix after finishing the sequence
                          %this is our S
        end

        %Here we change the mean and std of all images. We normalize all images.
        %This is done to reduce the error due to lighting conditions.
        for i=1:size(S,2)
            temp=double(S(:,i));
            m=mean(temp);
            st=std(temp);
            S(:,i)=(temp-m)*ustd/st+um;
        end

        %show normalized images
        for i=1:M
            str=strcat(int2str(i),'.jpg');
            img=reshape(S(:,i),icol,irow);
            img=img';
        end

        %mean image;
        m=mean(S,2);                      %obtains the mean of each row instead of each column
        tmimg=uint8(m);                   %converts to unsigned 8-bit integer. Values range from 0 to 255
        img=reshape(tmimg,icol,irow);     %takes the N1*N2x1 vector and creates a N2xN1 matrix
        img=img';                         %creates a N1xN2 matrix by transposing the image.

        % Change image for manipulation
        dbx=[];   % A matrix
        for i=1:M
            temp=double(S(:,i));
            dbx=[dbx temp];
        end

        %Covariance matrix C=A'A, L=AA'
        A=dbx';
        L=A*A';
        % vv are the eigenvector for L
        % dd are the eigenvalue for both L=dbx'*dbx and C=dbx*dbx';
        [vv dd]=eig(L);
        % Sort and eliminate those whose eigenvalue is zero
        v=[];
        d=[];
        for i=1:size(vv,2)
            if(dd(i,i)>1e-4)
                v=[v vv(:,i)];
                d=[d dd(i,i)];
            end
        end

        %sort, will return an ascending sequence
        [B index]=sort(d);
        ind=zeros(size(index));
        dtemp=zeros(size(index));
        vtemp=zeros(size(v));
        len=length(index);
        for i=1:len
            dtemp(i)=B(len+1-i);
            ind(i)=len+1-index(i);
            vtemp(:,ind(i))=v(:,i);
        end
        d=dtemp;
        v=vtemp;

        %Normalization of eigenvectors
        for i=1:size(v,2)        %access each column
            kk=v(:,i);
            temp=sqrt(sum(kk.^2));
            v(:,i)=v(:,i)./temp;
        end

        %Eigenvectors of C matrix
        u=[];
        for i=1:size(v,2)
            temp=sqrt(d(i));
            u=[u (dbx*v(:,i))./temp];
        end

        %Normalization of eigenvectors
        for i=1:size(u,2)
            kk=u(:,i);
            temp=sqrt(sum(kk.^2));
            u(:,i)=u(:,i)./temp;
        end

        % show eigenfaces;
        for i=1:size(u,2)
            img=reshape(u(:,i),icol,irow);
            img=img';
            img=histeq(img,255);
        end

        % Find the weight of each face in the training set.
        omega = [];
        for h=1:size(dbx,2)
            WW=[];
            for i=1:size(u,2)
                t = u(:,i)';
                WeightOfImage = dot(t,dbx(:,h)');
                WW = [WW; WeightOfImage];
            end
            omega = [omega WW];
        end

        % Acquire new image
        % Note: the input image must have a bmp or jpg extension.
        % It should have the same size as the ones in your training set.
        % It should be placed on your desktop
        ed_min=[];
        srcFiles = dir('G:\newdatabase\*.jpg');   % the folder in which ur images exists
        for b = 1 : length(srcFiles)
            filename = strcat('G:\newdatabase\',srcFiles(b).name);
            Imgdata = imread(filename);
            InputImage=Imgdata;
            InImage=reshape(permute((double(InputImage)),[2,1,3]),[irow*icol,1]);
            temp=InImage;
            me=mean(temp);
            st=std(temp);
            temp=(temp-me)*ustd/st+um;
            NormImage = temp;
            Difference = temp-m;

            p = [];
            aa=size(u,2);
            for i = 1:aa
                pare = dot(NormImage,u(:,i));
                p = [p; pare];
            end

            InImWeight = [];
            for i=1:size(u,2)
                t = u(:,i)';
                WeightOfInputImage = dot(t,Difference');
                InImWeight = [InImWeight; WeightOfInputImage];
            end
            noe=numel(InImWeight);

            % Find Euclidean distance
            e=[];
            for i=1:size(omega,2)
                q = omega(:,i);
                DiffWeight = InImWeight-q;
                mag = norm(DiffWeight);
                e = [e mag];
            end
            ed_min=[ed_min MinimumValue];
            theta=6.0e+03;
            %disp(e)
            z(b,:)=InImWeight;
        end

        IDX = kmeans(z,5);
        clustercount=accumarray(IDX, ones(size(IDX)));
        disp(clustercount);

    QUESTIONS:

    1. It works fine for M=50 (i.e. the training set contains 50 images) but not for M=1200 (the training set contains 1200 images). It shows no error, but there is no output; I waited for 10 minutes and still nothing. I think it is going into an infinite loop. What is the problem? Where did I go wrong?

    2. Instead of running over the training set every time, how can the generated eigenfaces be stored, so that the stored eigenfaces can be reused for future face recognition of a new input image? That would avoid wasting time.

    Read the article

  • RMS java web framework

    - by Kamil Tomšík
    We're currently reconsidering technologies and frameworks to get more agile with "simple" RMS CRUD-based projects -- in short, short-lived things like this. Right now we have a custom extension on top of SmartGWT, but after some time it has proven not to be flexible enough. I also personally dislike the Java-to-JS compilation process and the whole GWT codebase: not only is its design ugly, it also makes certain low-level JS things very complicated, if not completely impossible. So what I'm looking for is:

    as close to the web as possible, like JSF or possibly Tapestry; it is very important to be able to get "low" and weave the framework if necessary -- that happens more often than we thought
    datagrid capable -- Ext.js and PrimeFaces look pretty good, and Vaadin does too
    DB-schema generators (optional, in whichever direction)

    If it were only up to me, I'd probably stick to Ext.js plus a custom REST-based Java solution, possibly generated from the database schema (not sure about the concrete tooling yet). I only have experience with vanilla Ext.js, vanilla GWT and JSF 2.0 / Seam, so it is kind of hard for me to judge or even propose other frameworks. What would you propose? What problems have you faced, what was your solution, and how hard do you think they were to deal with in the "big picture"?

    Read the article

  • Does it matter the direction of a Huffman's tree child node?

    - by Omega
    So, I'm on my quest to create a Java implementation of Huffman's algorithm for compressing/decompressing files (as you might know, ever since "Why create a Huffman tree per character instead of a Node?") for a school assignment. I now have a better understanding of how this thing is supposed to work. Wikipedia has a great-looking algorithm that seemed to make my life way easier; taken from http://en.wikipedia.org/wiki/Huffman_coding:

    1. Create a leaf node for each symbol and add it to the priority queue.
    2. While there is more than one node in the queue:
       - Remove the two nodes of highest priority (lowest probability) from the queue.
       - Create a new internal node with these two nodes as children and with probability equal to the sum of the two nodes' probabilities.
       - Add the new node to the queue.
    3. The remaining node is the root node and the tree is complete.

    It looks simple and great. However, it left me wondering: when I "merge" two nodes (make them children of a new internal node), does it even matter which direction (left or right) each node goes? I still don't fully understand Huffman coding, and I'm not very sure whether there is a criterion for deciding whether a node should go to the right or to the left. I assumed that perhaps the highest-frequency node would go to the right, but I've seen some Huffman trees on the web that don't seem to follow such a criterion. For instance, Wikipedia's example image http://upload.wikimedia.org/wikipedia/commons/thumb/8/82/Huffman_tree_2.svg/625px-Huffman_tree_2.svg.png seems to put the highest ones to the right, but other images like this one http://thalia.spec.gmu.edu/~pparis/classes/notes_101/img25.gif have them all to the left. However, they're never mixed up in the same image (some to the right and others to the left). So, does it matter? Why?
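    A language-agnostic sketch of the quoted merge loop (written in C# with .NET's PriorityQueue for brevity). The point it illustrates: which dequeued node becomes Left and which becomes Right is an arbitrary convention; swapping them flips the 0/1 labels on that branch but leaves every code length, and therefore the compressed size, unchanged, so any consistent choice shared by encoder and decoder works.

        using System.Collections.Generic;

        class Node
        {
            public char? Symbol;      // null for internal nodes
            public long Frequency;
            public Node Left, Right;
        }

        static class Huffman
        {
            public static Node BuildTree(IDictionary<char, long> frequencies)
            {
                var queue = new PriorityQueue<Node, long>();
                foreach (var kv in frequencies)
                    queue.Enqueue(new Node { Symbol = kv.Key, Frequency = kv.Value }, kv.Value);

                while (queue.Count > 1)
                {
                    var first = queue.Dequeue();    // lowest frequency
                    var second = queue.Dequeue();   // second lowest
                    var parent = new Node
                    {
                        Frequency = first.Frequency + second.Frequency,
                        Left = first,               // the left/right assignment is a convention,
                        Right = second              // not something optimality depends on
                    };
                    queue.Enqueue(parent, parent.Frequency);
                }
                return queue.Dequeue();             // the remaining node is the root
            }
        }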

    Read the article

  • Should each app have its own database, or should small apps be merged into one?

    - by King
    We have a bunch of small to medium-sized apps, each of which has its own database (MS SQL Server). There was a suggestion that we consolidate the 'related' databases into a smaller number of larger databases. The apps don't particularly share a lot of data; they would just be under a similar business group. For example, a 'Finance' DB would hold the tables and procedures for the finance apps. Would it be appropriate to use a different schema for each app? E.g.:

        App1.SomeTable
        App1.SomeOtherTable
        AppTwo.SomeTable

    What are the pros and cons of this approach? What should I watch out for? Thanks

    Read the article

  • Architecture for Social Graph data that has a Time Frame Associated?

    - by Jay Stevens
    I am adding some "social" features to an existing application. There are a limited number of node and edge types. Overall the data itself is relatively small (50,000 - 70,000 of each type of node), and there will be a number of edges (relationships) between them (almost all directional). This, I know, is relatively easy to represent with an RDF store (such as BrightstarDB) or something like Microsoft's Trinity (or really many of the NoSQL options). The thing that, I think, makes this a unique use case is that each relationship will have a timeframe associated with it (start and end dates). Right now, I'm thinking of just storing this in a relational structure and dealing with the headaches of "traversing the graph", but I'm looking for suggestions on a better approach (both in terms of data structure and server):

        Column
        ================
        From_Node_ID
        Relationship
        To_Node_ID
        StartDate
        EndDate

    Any suggestions or thoughts are welcome.
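    For what it's worth, the relational fallback described above boils down to a time-bounded edge plus an "active at" filter. A minimal C# sketch follows (the types and the open-ended EndDate convention are assumptions); it shows the check every traversal step has to repeat, which is exactly the overhead a graph-aware store would hide:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // One directional, time-bounded relationship; a null EndDate means "still in effect".
        record Edge(int FromNodeId, string Relationship, int ToNodeId,
                    DateTime StartDate, DateTime? EndDate);

        static class TemporalGraph
        {
            // Outgoing edges of a node that were active at a given instant.
            public static IEnumerable<Edge> OutgoingAt(IEnumerable<Edge> edges, int fromNodeId, DateTime at) =>
                edges.Where(e => e.FromNodeId == fromNodeId
                              && e.StartDate <= at
                              && (e.EndDate == null || at <= e.EndDate));
        }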

    Read the article

  • Knowledge Transfer without a Plan

    - by Kanini
    Hello... We are doing work for a particular client, managing their CRM implementation. (The CRM itself is a product which has been largely customized to suit my client's needs.) Now they want us to manage the Oracle batch jobs/ETL as well, and for this they are ready to provide us with knowledge transfer. (The Oracle batch jobs/ETL are currently managed in-house by the client.) After much persuasion, I got one of the project leads (designation-wise) to email the client asking for a KT plan. (The project lead kept saying that they have never had KT plans before and all that, upon which I offered to draft a template -- and even that was rejected!)

    Email from us to them: "Can you please share with us the KT plan?"

    Response from them: "Not sure what is expected from my side? The KT is planned for tomorrow from 11 am onwards, where functional knowledge of the existing ETL data migration package will be shared."

    How do you handle such a client? Most likely what is going to happen is this: the person giving the KT will say "I have given a complete knowledge transfer", and we will go back and say "No, this was not covered; for that they provided only an overview and left it at that!", and so on... My project lead also did not respond to that email. He just said that the meeting is scheduled for 11 AM (basically repeating whatever the email said) and left for the day! What could I possibly do? PS: "Look for another job" is a very helpful answer, but I am not looking for that. :-)

    Read the article

  • Security in a private web service

    - by Oni
    I am developing a web site and a web service for a small online game. Technically, I'll be using Express (Node.js) and MongoDB + Redis for the databases. This is the structure I came up with:

    One Express server that will act as the web service. This will connect to the databases.
    One Express server that will provide the web site. It will connect to the web service to retrieve and push information.
    iOS and Android applications will be able to interact with the web service.

    Taking into account that it is a small game, that the information transferred is not critical, and that there will NOT be third-party applications (at least for the moment), my concern is about which level of security I should use in each of these scenarios:

    Security of the user playing through the web browser.
    Security of the applications and the web server connecting to the WS.

    I have taken a look at the different options. OAuth and/or HTTPS are too much for this scenario, aren't they? Would it be a good option to hash the user and password with MD5 (or similar) and some salt? I would like to get some directions and investigate on my own rather than getting a response like "you should use this node.js module...". Thanks in advance.
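    On the "hash with some salt" idea: here is a minimal sketch of salted password hashing, shown in C# for concreteness and with PBKDF2 substituted for plain MD5 (a deliberate swap -- a slow, salted key-derivation function is the sturdier version of the same idea, and Node has an equivalent in crypto.pbkdf2). The salt size and iteration count are illustrative, not a recommendation.

        using System.Security.Cryptography;

        static class PasswordHasher
        {
            const int Iterations = 100_000;   // illustrative work factor

            // Returns a random salt plus the derived hash; store both next to the user record.
            public static (byte[] Salt, byte[] Hash) Hash(string password)
            {
                using (var kdf = new Rfc2898DeriveBytes(password, saltSize: 16, iterations: Iterations))
                {
                    return (kdf.Salt, kdf.GetBytes(32));
                }
            }

            // Re-derive with the stored salt and compare in constant time.
            public static bool Verify(string password, byte[] salt, byte[] expectedHash)
            {
                using (var kdf = new Rfc2898DeriveBytes(password, salt, Iterations))
                {
                    return CryptographicOperations.FixedTimeEquals(kdf.GetBytes(32), expectedHash);
                }
            }
        }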

    Read the article

  • What don't you like about your web framework of "choice"?

    - by 0101
    Most of the time we don't have a choice when it comes to web frameworks; in Java, every company is using a different one (big thanks to web-framework developers -- you will burn in hell). However, now I have the choice of picking which framework we will use. I will probably pick the one I know best, since I know how to work around its downfalls. In every comparison we only see what is good about each framework, and any downfalls are swept under the carpet. What are the downfalls of the best-known frameworks?

    Read the article

  • How to deal with colleagues who refuse to follow practices?

    - by Adrian Shum
    I was discussing with a colleague what we should use when a DB entity refers to another one. I don't think there is any good reason to break the practice of putting the primary key in the referring entity. However, my colleague says: "You should use a surrogate key in the entity, but it is better to put the human-readable natural key in the referring entity. As long as it is unique, it is fine, and it is easier when you are doing support or maintenance work." I know it will work, but obviously it is not good practice to use a non-PK unique column as a "foreign key" just to gain a bit of ease when writing SQL during support, because it means fewer table joins. Although I pointed out that his approach is conceptually incorrect and causes practical problems too, he seems happy to trade off correctness in the data model for ease of maintenance. And he said: "I know it is not good practice, but good practice is not a golden rule." Honestly, I feel frustrated when dealing with something like this. I know there are always cases where we should break some rule or practice, but this is doubtless not such a case. What do you do when you face a situation like this? Please assume you are a senior developer who is expected to contribute to overall development direction and conventions.

    Read the article
