Search Results

Search found 16554 results on 663 pages for 'programmers identity'.


  • Regarding Microsoft MVC framework and usage [closed]

    - by Thomas
    It would help if someone could tell me what type of web application should be developed using the Microsoft MVC framework. I am familiar with Web Forms but not with MS MVC. I feel any type of web application can be developed with Web Forms, and I have searched Google a lot for the specific reasons to use MS MVC instead. I am keen to know when I should develop web apps using the MS MVC framework and when I should use Web Forms. I would be happy if someone could discuss this issue in detail. Thanks.


  • What you don't like in your web-framework of "choice"?

    - by 0101
    Most of the time we don't have a choice when it comes to web frameworks; in Java, every company is using a different one (big thanks to web-framework developers - you will burn in hell). However, now I do have a choice of which framework we will use, and I will probably pick the one I know best, since I know how to work around its downfalls. In every comparison we only see what is good in each framework, and any downfalls are swept under the carpet. What are the downfalls of the best-known frameworks?


  • Does it matter the direction of a Huffman's tree child node?

    - by Omega
    So, I'm on my quest to create a Java implementation of Huffman's algorithm for compressing/decompressing files (as you might know, ever since "Why create a Huffman tree per character instead of a Node?") for a school assignment. I now have a better understanding of how this thing is supposed to work, and Wikipedia has a great-looking algorithm that seemed to make my life much easier. Taken from http://en.wikipedia.org/wiki/Huffman_coding: create a leaf node for each symbol and add it to the priority queue; while there is more than one node in the queue, remove the two nodes of highest priority (lowest probability) from the queue, create a new internal node with these two nodes as children and with probability equal to the sum of the two nodes' probabilities, and add the new node to the queue; the remaining node is the root node and the tree is complete. It looks simple and great. However, it left me wondering: when I "merge" two nodes (make them children of a new internal node), does it even matter which direction (left or right) each node ends up on? I still don't fully understand Huffman coding, and I'm not sure whether there is a criterion for deciding whether a node should go to the right or to the left. I assumed that perhaps the highest-frequency node would go to the right, but I've seen some Huffman trees on the web that don't seem to follow such a criterion. For instance, Wikipedia's example image http://upload.wikimedia.org/wikipedia/commons/thumb/8/82/Huffman_tree_2.svg/625px-Huffman_tree_2.svg.png seems to put the highest ones to the right, but other images, like http://thalia.spec.gmu.edu/~pparis/classes/notes_101/img25.gif, have them all to the left. However, they're never mixed up in the same image (some to the right and others to the left). So, does it matter? Why?
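
    A minimal Java sketch of the build loop above (class and method names invented for illustration) shows why the direction is a free choice: nothing in the algorithm constrains which popped node becomes the left child, and swapping the two children anywhere only flips the 0/1 labels beneath that point, giving different codewords of exactly the same lengths:

        import java.util.PriorityQueue;

        class Node implements Comparable<Node> {
            final char symbol;        // meaningful only for leaves
            final int freq;
            final Node left, right;   // both null for leaves

            Node(char symbol, int freq) { this(symbol, freq, null, null); }
            Node(char symbol, int freq, Node left, Node right) {
                this.symbol = symbol; this.freq = freq; this.left = left; this.right = right;
            }
            public int compareTo(Node o) { return Integer.compare(freq, o.freq); }
        }

        public class HuffmanSketch {
            static Node build(char[] symbols, int[] freqs) {
                PriorityQueue<Node> queue = new PriorityQueue<>();
                for (int i = 0; i < symbols.length; i++)
                    queue.add(new Node(symbols[i], freqs[i]));
                while (queue.size() > 1) {
                    Node a = queue.poll();   // lowest frequency
                    Node b = queue.poll();   // second lowest
                    // Swapping a and b here would still yield an optimal tree.
                    queue.add(new Node('\0', a.freq + b.freq, a, b));
                }
                return queue.poll();         // the root
            }

            static void printCodes(Node n, String code) {
                if (n.left == null) { System.out.println(n.symbol + " -> " + code); return; }
                printCodes(n.left, code + "0");   // left = 0, right = 1 is pure convention
                printCodes(n.right, code + "1");
            }

            public static void main(String[] args) {
                printCodes(build(new char[]{'a','b','c','d'}, new int[]{5, 2, 1, 1}), "");
            }
        }

    The compressed size depends only on the code lengths, which the child order cannot change; all that matters is that the encoder and decoder agree on (or transmit) the same tree.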


  • Representing and executing simple rules - framework or custom?

    - by qtips
    I am creating a system where users will be able to subscribe to events and get notified when an event has occurred. Examples of events can be phone call durations and costs, phone data traffic notifications, and even stock rate changes. Examples of events: customer 13532 completed a call with duration 11:45 min and cost $0.4; the stock rate for Google decreased by 0.01%. Customers can subscribe to events using some simple rules, e.g. when the stock rate of Google decreases more than 0.5%, or when the cost of a call on my subscription is over $1. Now, as the set of different rules is currently predefined, I can easily create a custom implementation that applies rules to an event. But if the set of rules became much larger, and if we also allowed custom rules (e.g. when the stock rate of Google decreases more than 0.5% AND the stock rate of Apple increases by 0.5%), the system should be able to adapt. I am therefore thinking of a system that can interpret rules written in a simple grammar and then apply them. After some research I found that there are rule-based engines that could be used, but I am unsure, as they seem complicated and may be a little overkill for my situation. Is there a Java framework suited to this area? Should we use a framework, a rule engine, or should we create something custom? What are the pros and cons?
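
    As a point of comparison before reaching for a full rule engine (Drools is the usual Java example), the predefined rules can be modelled as composable predicates. A minimal Java sketch, where the Event type and its fields are invented for illustration:

        import java.util.function.Predicate;

        // a simplified event: what it concerns plus a couple of numeric facts
        record Event(String subject, double percentChange, double cost) {}

        public class RuleSketch {
            public static void main(String[] args) {
                // "when the stock rate of Google decreases more than 0.5%"
                Predicate<Event> googleDrops =
                    e -> e.subject().equals("GOOG") && e.percentChange() < -0.5;

                // "when the cost of a call is over $1"
                Predicate<Event> expensiveCall =
                    e -> e.subject().equals("call") && e.cost() > 1.0;

                // custom combinations come free via and()/or()
                Predicate<Event> custom = googleDrops.or(expensiveCall);

                System.out.println(custom.test(new Event("GOOG", -0.7, 0.0)));  // true
                System.out.println(custom.test(new Event("call", 0.0, 0.5)));   // false
            }
        }

    This covers hard-coded and programmatically combined rules; the step up to rules typed in by customers is exactly where a small parser for a simple grammar, or an off-the-shelf rule engine, starts to pay for itself.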


  • Why should I use Zend_Application?

    - by Billy ONeal
    I've been working on a Zend Framework application which currently does a bunch of things through Zend Application and a few resource plugins written for it. However, looking at this codebase now, it seems to me that using Zend_Application just makes things more complicated, and a plain, more "traditional" bootstrap file would do a better job of being transparent. This is even more the case because the individual components of Zend -- Zend_Controller, Zend_Navigation, etc. -- don't reference Zend_Application at all. Therefore they say things like "Well just call setRoute and be on your way," and the user is left scratching their head as to how to implement that in terms of the application.ini configuration file. This is not to say that one can't figure out what's going on by spelunking through the ZF source code. My problem with that approach is that it's too easy to depend on something that's an implementation detail, rather than a contract, and that all it seems to do is add an extra layer of indirection that one must wade through to understand an application. I look at pre-ZF-1.8 example code, before Zend_Application existed, and everywhere I see plain bootstrap files that set up the MVC framework and get on their way. The code is clear and easy to understand, even if it is a bit repetitive. I like the DRY concept that Application gets you, but especially since I assume the first people looking at the app's code won't really be familiar with Zend at all, I'm considering blowing away any dependence I have on Zend_Application and returning to a traditional bootstrap file. Now, my concern here is that I don't have much experience doing this, and I don't want to get rid of Zend_Application if it does something particularly important of which I am unaware, or something of that nature. Is there a really good reason I should keep it around?


  • Idea of an algorithm to detect a website's navigation structure?

    - by Uwe Keim
    Currently I am in the process of developing an importer for any existing, arbitrary (static) HTML website into the upcoming release of our CMS. While downloading the files is solved successfully, I'm pulling my hair out when it comes to detecting the site structure (pages and subpages) purely from the HTML files, without the user specifying additional hints. Basically I want to get a tree like:

        + Root page 1
          + Child page 1
          + Child page 2
            + Child child page 1
          + Child page 3
        + Root page 2
          + Child page 4
        + Root page 3
        + ...

    I.e. I want to be able to detect the menu structure from the links inside the pages. This does not have to be 100% accurate, but at least I want to achieve more than just a flat list. I thought of looking at multiple pages to spot similar areas, identifying those as menu areas and parsing the links there, but after all I'm not that satisfied with this idea. My question: can you imagine any algorithm for detecting such a structure? Update 1: What I'm looking for is not a web spider, but an algorithm to create a logical tree of the relationships between the pages, so that I can create pages and subpages inside my CMS when importing them. Update 2: Following Robert's suggestion, I'll solve this by starting at the root page and then simply parsing links as I go, treating every link inside a page as a child page. I'll probably recurse not in a depth-first manner but rather breadth-first, to get a more balanced navigation structure (a sketch of this follows below).
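
    A minimal Java sketch of that breadth-first idea, assuming the pages have already been downloaded into a map from URL to HTML (PageNode and buildTree are invented names; the HTML parsing uses Jsoup):

        import java.util.*;
        import org.jsoup.Jsoup;
        import org.jsoup.nodes.Document;
        import org.jsoup.nodes.Element;

        class PageNode {
            final String url;
            final List<PageNode> children = new ArrayList<>();
            PageNode(String url) { this.url = url; }
        }

        public class SiteTreeSketch {
            static PageNode buildTree(String rootUrl, Map<String, String> htmlByUrl) {
                PageNode root = new PageNode(rootUrl);
                Set<String> seen = new HashSet<>(List.of(rootUrl));
                Deque<PageNode> queue = new ArrayDeque<>(List.of(root));
                while (!queue.isEmpty()) {
                    PageNode page = queue.poll();
                    String html = htmlByUrl.get(page.url);
                    if (html == null) continue;                   // external or missing page
                    Document doc = Jsoup.parse(html, page.url);
                    for (Element link : doc.select("a[href]")) {
                        String target = link.absUrl("href");
                        if (!target.isEmpty() && seen.add(target)) {
                            PageNode child = new PageNode(target); // first page to link to a
                            page.children.add(child);              // URL becomes its parent
                            queue.add(child);
                        }
                    }
                }
                return root;
            }
        }

    Because each page is claimed by the first (shallowest) page that links to it, the result is a tree rather than a graph, which matches the "more balanced" structure the breadth-first order is meant to produce.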


  • Codeigniter Open Source Applications for Coding References

    - by Hafizul Amri
    I chose CodeIgniter as my first framework to work with. Now I'm looking for open source applications built on this framework, to serve as references for good coding practices and standards. From my previous experience in application development, it was hard for me to maintain, upgrade or modify existing applications due to my bad coding practices. Do you have any suggestions for applications based on the CodeIgniter framework that I could refer to, so that they can help me write better code by following their good, and maybe best, coding practices? Thank you!


  • Wisdom of using open source code in a commercial software product

    - by Mr. Jefferson
    I'm looking at using some open source code in my ASP.NET web app (specifically dapper). Management is not a fan, because open source is seen as a risk that has bitten us before. Apparently previous developers have had to rewrite things after having open-source components fail. The pros seem to be:
    - It does a lot of stuff for me that would otherwise involve either lots of boilerplate code or Microsoft's recommended but slower solution (Entity Framework).
    Cons:
    - It's complex enough that if it were to fail suddenly in production, I would be hard pressed to fix it. However, it's in use on a much higher-traffic site than mine, so I don't think it'll end up being a high-risk portion of the project.
    What is the consensus here? Is it unwise to use open source code in my project that I don't know/understand as well as I do my own code?


  • Most efficient way to rebuild a tree structure from data

    - by Ahsan
    Have a question on recursively populating JsTree using the .NET wrapper available via NuGet. Any help would be greatly appreciated. The .NET class JsTree3Node has a property named Children which holds a list of JsTree3Nodes, and the pre-existing table which contains the node structure looks like this:

        NodeId  ParentNodeId  Data         AbsolutePath
        1       NULL          News         /News
        2       1             Financial    /News/Financial
        3       2             StockMarket  /News/Financial/StockMarket

    I have an EF data context for the database, so my code currently looks like this:

        var parentNode = new JsTree3Node(Guid.NewGuid().ToString());
        foreach (var nodeItem in context.Nodes)
        {
            parentNode.Children.Add(nodeItem.Data);
            // What is the most efficient logic to do this recursively?
        }

    As the inline comment says in the above code, what would be the most efficient way to load the JsTree data onto the parentNode object? I can change the existing node table to suit the logic, so feel free to suggest any changes to improve performance.
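
    Since the table already stores ParentNodeId, the usual answer is that no recursion (and no repeated querying) is needed at all: index the nodes by id in one pass, then attach each node to its parent in a second. A language-agnostic sketch, written here in Java with invented Row/TreeNode types standing in for the EF entities and JsTree3Node:

        import java.util.*;

        record Row(int id, Integer parentId, String data) {}

        class TreeNode {
            final String data;
            final List<TreeNode> children = new ArrayList<>();
            TreeNode(String data) { this.data = data; }
        }

        public class TreeBuildSketch {
            // assumes every non-null parentId refers to an id present in rows
            static List<TreeNode> build(List<Row> rows) {
                Map<Integer, TreeNode> byId = new HashMap<>();
                for (Row r : rows) byId.put(r.id(), new TreeNode(r.data()));
                List<TreeNode> roots = new ArrayList<>();
                for (Row r : rows) {
                    TreeNode node = byId.get(r.id());
                    if (r.parentId() == null) roots.add(node);       // NULL parent = root
                    else byId.get(r.parentId()).children.add(node);  // attach to parent
                }
                return roots;  // O(n), two passes, rows may arrive in any order
            }
        }

    The same two-pass shape transfers directly to C# with a Dictionary<int, JsTree3Node> built over context.Nodes.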


  • Why should you document code?

    - by Edwin Tripp
    I am a graduate software developer for a financial company that uses an old COBOL-like language/flat-file record storage system. The code is completely undocumented: no code comments, no overall system design, and there is no help on the web (the language is unused outside the industry). The current developers have been working on the system for between 10 and 30 years and are adamant that documentation is unnecessary, as you can just read the code to work out what's going on, and that you can't trust comments. Why should such a system be documented?


  • Class Design -- Multiple Calls from One Method or One Call from Multiple Methods?

    - by Andrew
    I've been working on some code recently that interfaces with a CMS we use, and it's presented me with a question on class design that I think is applicable in a number of situations. Essentially, what I am doing is extracting information from the CMS and transforming it into objects that I can use programmatically for other purposes. This consists of two steps: (1) retrieve the data from the CMS (we have a DAL that I use, so this is essentially just specifying what data from the CMS I want -- no connection logic or anything like that), and (2) map the parsed data to my own [C#] objects. There are basically two ways I can approach this.

    One call from multiple methods:

        public void MainMethodWhereIDoStuff()
        {
            IEnumerable<MyObject> myObjects = GetMyObjects();
            // Do other stuff with myObjects
        }

        private static IEnumerable<MyObject> GetMyObjects()
        {
            IEnumerable<CmsDataItem> cmsDataItems = GetCmsDataItems();
            List<MyObject> mappedObjects = new List<MyObject>();
            // do stuff to map the CmsDataItems to MyObjects
            return mappedObjects;
        }

        private static IEnumerable<CmsDataItem> GetCmsDataItems()
        {
            List<CmsDataItem> cmsDataItems = new List<CmsDataItem>();
            // do stuff to get the CmsDataItems I want
            return cmsDataItems;
        }

    Multiple calls from one method:

        public void MainMethodWhereIDoStuff()
        {
            IEnumerable<CmsDataItem> cmsDataItems = GetCmsDataItems();
            IEnumerable<MyObject> myObjects = GetMyObjects(cmsDataItems);
            // do stuff with myObjects
        }

        private static IEnumerable<MyObject> GetMyObjects(IEnumerable<CmsDataItem> itemsToMap)
        {
            // ...
        }

        private static IEnumerable<CmsDataItem> GetCmsDataItems()
        {
            // ...
        }

    I am tempted to say that the latter is better than the former, as GetMyObjects does not depend on GetCmsDataItems, and the calling method is explicit about the steps executed to retrieve the objects (I'm concerned that the first approach is kind of an object-oriented version of spaghetti code). On the other hand, the two helper methods are never going to be used outside of the class, so I'm not sure if it really matters whether one depends on the other. Furthermore, I like the fact that in the first approach the objects can be retrieved in one line -- most likely anyone working with the main method doesn't care how the objects are retrieved; they just need to retrieve them, and the "daisy-chained" helper methods hide the exact steps needed to do so (in practice, I actually have a few more methods but am still able to retrieve the object collection I want in one line). Is one of these methods right and the other wrong? Or is it simply a matter of preference, or context-dependent?


  • how to develop a common pool of functions?

    - by user975234
    I need to develop an application which runs on the web as well as on mobile platforms. I want to make something like a directory that holds my common functions shared between the web and mobile platforms. This is the diagram which describes exactly what I want: I just want to know how I should implement this. If you can help me with the technical details, that would be great! P.S.: I am new to web apps and such!


  • HTML Parsing for multiple input files using java code [closed]

    - by mkp
        String temp1;
        int i;
        FileReader f0 = new FileReader("123.html");
        StringBuilder sb = new StringBuilder();
        BufferedReader br = new BufferedReader(f0);
        while ((temp1 = br.readLine()) != null) {
            sb.append(temp1);
        }
        String para = sb.toString().replaceAll("<br>", "\n");
        String textonly = Jsoup.parse(para).text();
        System.out.println(textonly);
        FileWriter f1 = new FileWriter("123.txt");
        char buf1[] = new char[textonly.length()];
        textonly.getChars(0, textonly.length(), buf1, 0);
        for (i = 0; i < buf1.length; i++) {
            if (buf1[i] == '\n')
                f1.write("\r\n");   // write Windows line endings instead of bare \n
            else
                f1.write(buf1[i]);
        }
        br.close();
        f1.close();

    I have this code, but it processes only one file at a time, and I want to process multiple files. I have 2000 files, named with numbers from 1 to 2000 as "1.html" and so on. So I want to use a for loop like for(i=1;i<=2000;i++), and after execution a separate txt file should be generated for each input file.
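
    A minimal sketch of the loop the question describes, wrapping the same single-file logic (unchanged apart from try-with-resources) in a pass over 1.html through 2000.html:

        import java.io.*;
        import org.jsoup.Jsoup;

        public class BatchConvert {
            public static void main(String[] args) throws IOException {
                for (int n = 1; n <= 2000; n++) {
                    StringBuilder sb = new StringBuilder();
                    try (BufferedReader br = new BufferedReader(new FileReader(n + ".html"))) {
                        String line;
                        while ((line = br.readLine()) != null) sb.append(line);
                    }
                    String para = sb.toString().replaceAll("<br>", "\n");
                    String textonly = Jsoup.parse(para).text();
                    try (FileWriter f1 = new FileWriter(n + ".txt")) {
                        f1.write(textonly.replace("\n", "\r\n"));  // same \n -> \r\n intent
                    }
                }
            }
        }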


  • Recaptcha php problem [closed]

    - by Sam Gabriel
    Hey guys, I'm using reCAPTCHA and I've got a problem: when a user clicks the signup button, it redirects him to the signup verification page. Here is the code, found at the very top of that page, which checks the entered reCAPTCHA data:

        <?php
        require_once('recaptchalib.php');
        $privatekey = "***";
        $resp = recaptcha_check_answer($privatekey,
                                       $_SERVER["REMOTE_ADDR"],
                                       $_POST["recaptcha_challenge_field"],
                                       $_POST["recaptcha_response_field"]);
        if (!$resp->is_valid) {
            header('location: signup.php');
        }
        ?>

    But it seems that whatever I type into the reCAPTCHA box, be it right or wrong, I get redirected to the signup.php page. Here is the reCAPTCHA code in the signup.php page:

        <?php
        ini_set('display_errors', 'On');
        error_reporting(E_ALL | E_STRICT);
        require_once('recaptchalib.php');
        $publickey = "***";
        echo recaptcha_get_html($publickey);
        ?>


  • Don Knuth and MMIXAL vs. Chuck Moore and Forth -- Algorithms and Ideal Machines -- was there cross-pollination / influence in their ideas / work?

    - by AKE
    Question: To what extent is it known (or believed) that Chuck Moore and Don Knuth had influence on each other's thoughts on ideal machines, or their work on algorithms? I'm interested in citations, interviews, articles, links, or any other sort of evidence. It could also be evidence of the form "A and B here suggest that Moore might have borrowed or influenced C and D from Knuth here", or vice versa. (Opinions are of course welcome, but references / links would be better!)

    Context: Until fairly recently, I have been primarily familiar with Knuth's work on algorithms and computing models, mostly through TAOCP but also through his interviews and other writings. However, the more I have been using Forth, the more I am struck by both the power of a stack-based machine model, and the way in which the spareness of the model makes fundamental algorithmic improvements more readily apparent. A lot of what Knuth has done in fundamental analysis of algorithms has, it seems to me, a very similar flavour, and I can easily imagine that in a parallel universe, Knuth might perhaps have chosen Forth as his computing model.

    That's the software / algorithms / programming side of things. When it comes to "ideal computing machines", Knuth in the 70s came up with the MIX computer model, and then, collaborating with designers of state-of-the-art RISC chips through the 90s, updated this with the modern MMIX model and its attendant assembly language MMIXAL. Meanwhile, Moore, having been using and refining Forth as a language, but running it on top of whatever processor happened to be in the computer he was programming, began to imagine a world in which the efficiency and value of stack-based programming were reflected in hardware. So he went on in the 80s to develop his own stack-based hardware chips, defining the term MISC (Minimal Instruction Set Computers) along the way, and ending up eventually with the first Forth chip, the MuP21.

    Both are brilliant men with keen insight into the art of programming and algorithms, and both work at the intersection between algorithms, programs, and bare metal hardware (i.e. hardware without the clutter of operating systems). Which leads me to the headlined question: to what extent is it known (or believed) that Chuck Moore and Don Knuth had influence on each other's thoughts on ideal machines, or their work on algorithms?


  • Objective C and ios development training courses feedback

    - by Mutuelinvestor
    I'm thinking about taking an Objective-C / iPhone/iPad development training course. Pragmatic Studio and Big Nerd Ranch appear to be the key players. I'd love to hear any feedback from anyone who has completed training with Pragmatic, Big Nerd or others. I'm interested in the quality of instruction, how much you learned, strengths, weaknesses and any other valuable insights. It's a pretty big financial commitment for me and I want to get the most bang for my buck. Thanks in advance for your input.


  • Vectorization of matlab code for faster execution

    - by user3237134
    My code works in the following manner:

    1. First, it obtains several images from the training set.
    2. After loading these images, we find the normalized faces and the mean face, and perform several calculations.
    3. Next, we ask for the name of an image we want to recognize.
    4. We then project the input image into the eigenspace, and based on the difference from the eigenfaces we make a decision.
    5. Depending on the eigen weight vector for each input image, we make clusters using the kmeans command.

    Source code I tried:

        clear all
        close all
        clc

        % number of images on your training set.
        M=1200;

        % Chosen std and mean.
        % It can be any number that is close to the std and mean of most of the images.
        um=60;
        ustd=32;

        % read and show images (bmp);
        S=[];                               % img matrix
        for i=1:M
            str=strcat(int2str(i),'.jpg');  % forms the name of the image
            eval('img=imread(str);');
            [irow icol d]=size(img);        % get the number of rows (N1) and columns (N2)
            temp=reshape(permute(img,[2,1,3]),[irow*icol,d]);  % creates a (N1*N2)x1 matrix
            S=[S temp];                     % S is a N1*N2xM matrix after finishing the sequence
        end

        % Here we change the mean and std of all images. We normalize all images.
        % This is done to reduce the error due to lighting conditions.
        for i=1:size(S,2)
            temp=double(S(:,i));
            m=mean(temp);
            st=std(temp);
            S(:,i)=(temp-m)*ustd/st+um;
        end

        % show normalized images
        for i=1:M
            str=strcat(int2str(i),'.jpg');
            img=reshape(S(:,i),icol,irow);
            img=img';
        end

        % mean image
        m=mean(S,2);                   % obtains the mean of each row instead of each column
        tmimg=uint8(m);                % converts to unsigned 8-bit integer (values 0 to 255)
        img=reshape(tmimg,icol,irow);  % takes the N1*N2x1 vector and creates a N2xN1 matrix
        img=img';                      % creates a N1xN2 matrix by transposing the image

        % Change image for manipulation
        dbx=[];                        % A matrix
        for i=1:M
            temp=double(S(:,i));
            dbx=[dbx temp];
        end

        % Covariance matrix C=A'A, L=AA'
        A=dbx';
        L=A*A';
        % vv are the eigenvectors for L
        % dd are the eigenvalues for both L=dbx'*dbx and C=dbx*dbx';
        [vv dd]=eig(L);
        % Sort and eliminate those whose eigenvalue is zero
        v=[];
        d=[];
        for i=1:size(vv,2)
            if(dd(i,i)>1e-4)
                v=[v vv(:,i)];
                d=[d dd(i,i)];
            end
        end

        % sort, will return an ascending sequence
        [B index]=sort(d);
        ind=zeros(size(index));
        dtemp=zeros(size(index));
        vtemp=zeros(size(v));
        len=length(index);
        for i=1:len
            dtemp(i)=B(len+1-i);
            ind(i)=len+1-index(i);
            vtemp(:,ind(i))=v(:,i);
        end
        d=dtemp;
        v=vtemp;

        % Normalization of eigenvectors
        for i=1:size(v,2)              % access each column
            kk=v(:,i);
            temp=sqrt(sum(kk.^2));
            v(:,i)=v(:,i)./temp;
        end

        % Eigenvectors of C matrix
        u=[];
        for i=1:size(v,2)
            temp=sqrt(d(i));
            u=[u (dbx*v(:,i))./temp];
        end

        % Normalization of eigenvectors
        for i=1:size(u,2)
            kk=u(:,i);
            temp=sqrt(sum(kk.^2));
            u(:,i)=u(:,i)./temp;
        end

        % show eigenfaces
        for i=1:size(u,2)
            img=reshape(u(:,i),icol,irow);
            img=img';
            img=histeq(img,255);
        end

        % Find the weight of each face in the training set.
        omega = [];
        for h=1:size(dbx,2)
            WW=[];
            for i=1:size(u,2)
                t = u(:,i)';
                WeightOfImage = dot(t,dbx(:,h)');
                WW = [WW; WeightOfImage];
            end
            omega = [omega WW];
        end

        % Acquire new image
        % Note: the input image must have a bmp or jpg extension.
        % It should have the same size as the ones in your training set.
        % It should be placed on your desktop
        ed_min=[];
        srcFiles = dir('G:\newdatabase\*.jpg');   % the folder in which your images exist
        for b = 1 : length(srcFiles)
            filename = strcat('G:\newdatabase\',srcFiles(b).name);
            Imgdata = imread(filename);
            InputImage=Imgdata;
            InImage=reshape(permute((double(InputImage)),[2,1,3]),[irow*icol,1]);
            temp=InImage;
            me=mean(temp);
            st=std(temp);
            temp=(temp-me)*ustd/st+um;
            NormImage = temp;
            Difference = temp-m;
            p = [];
            aa=size(u,2);
            for i = 1:aa
                pare = dot(NormImage,u(:,i));
                p = [p; pare];
            end
            InImWeight = [];
            for i=1:size(u,2)
                t = u(:,i)';
                WeightOfInputImage = dot(t,Difference');
                InImWeight = [InImWeight; WeightOfInputImage];
            end
            noe=numel(InImWeight);
            % Find Euclidean distance
            e=[];
            for i=1:size(omega,2)
                q = omega(:,i);
                DiffWeight = InImWeight-q;
                mag = norm(DiffWeight);
                e = [e mag];
            end
            ed_min=[ed_min MinimumValue];   % note: MinimumValue is never assigned above
            theta=6.0e+03;
            % disp(e)
            z(b,:)=InImWeight;
        end

        IDX = kmeans(z,5);
        clustercount=accumarray(IDX, ones(size(IDX)));
        disp(clustercount);

    Running time for 50 images: Elapsed time is 103.947573 seconds.

    QUESTIONS:

    1. It works fine for M=50 (i.e. the training set contains 50 images) but not for M=1200 (the training set contains 1200 images). It shows no error; there is simply no output. I waited for 10 minutes and there was still no output. I think it is going into an infinite loop. What is the problem? Where did I go wrong?
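
    For reference, the linear algebra this code's eigenvector section implements is the standard Turk-Pentland "small matrix" trick, written below in the code's own variable names. With dbx the N x M matrix of mean-adjusted images (N pixels per image, M images, M << N), the code diagonalizes the small M x M matrix instead of the huge N x N covariance matrix:

        L = \mathrm{dbx}^{\top}\,\mathrm{dbx} \in \mathbb{R}^{M \times M},
        \qquad
        C = \mathrm{dbx}\,\mathrm{dbx}^{\top} \in \mathbb{R}^{N \times N}

        L v_i = \lambda_i v_i
        \;\Longrightarrow\;
        C\,(\mathrm{dbx}\,v_i) = \lambda_i\,(\mathrm{dbx}\,v_i),
        \qquad
        u_i = \frac{\mathrm{dbx}\,v_i}{\sqrt{\lambda_i}}

    So the loop u=[u (dbx*v(:,i))./temp] recovers the top eigenvectors of C from those of L, and the following loop renormalizes each u_i to unit length. Note the cost scaling: diagonalizing L is O(M^3), so going from M=50 to M=1200 multiplies that step's work by roughly 24^3 (about 13,800x), and the repeated concatenations (S=[S temp], dbx=[dbx temp]) reallocate and copy the whole array on every pass, a cost that grows quadratically with M.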


  • How to store generated eigen faces for future face recognition?

    - by user3237134
    My code works in the following manner:

    1. First, it obtains several images from the training set.
    2. After loading these images, we find the normalized faces and the mean face, and perform several calculations.
    3. Next, we ask for the name of an image we want to recognize.
    4. We then project the input image into the eigenspace, and based on the difference from the eigenfaces we make a decision.
    5. Depending on the eigen weight vector for each input image, we make clusters using the kmeans command.

    The source code I tried is the same as in the previous question, "Vectorization of matlab code for faster execution", above.

    QUESTIONS:

    1. It works fine for M=50 (i.e. the training set contains 50 images) but not for M=1200 (the training set contains 1200 images). It shows no error; there is simply no output. I waited for 10 minutes and there was still no output. I think it is going into an infinite loop. What is the problem? Where did I go wrong?

    2. Instead of running over the training set every time, how can the generated eigenfaces be stored, so that the stored eigenfaces can be reused for future face recognition of a new input image? That would avoid wasting time.


  • Is it ok to replace optimized code with readable code?

    - by Coder
    Sometimes you run into a situation where you have to extend or improve existing code. You see that the old code is very lean, but it's also difficult to extend and takes time to read. Is it a good idea to replace it with modern code? Some time ago I liked the lean approach, but now it seems to me that it's better to sacrifice a lot of optimizations in favor of higher abstractions, better interfaces, and more readable, extendable code. The compilers seem to be getting better as well, so things like struct abc = {} are silently turned into memsets, shared_ptrs produce pretty much the same code as raw pointer twiddling, and templates work super well because they produce super lean code. But still, sometimes you see stack-based arrays and old C functions with some obscure logic, and usually they are not on the critical path. Is it a good idea to change such code if you have to touch a small piece of it either way?


  • MBO in software development

    - by Euphoric
    I just found out that our middle-sized software development company is going to implement MBO (management by objectives). As a big fan of Agile, I see this as counter-intuitive, because I believe it is impossible to create an objective that is measurable, especially where creativity, experience, knowledge and professionalism are important traits and expanding those are the objectives and goals of many. So my question is: what is your opinion on using MBO in software development? And are there development companies using MBO, either successfully or unsuccessfully?


  • C++ Library API Design

    - by johannes
    I'm looking for a good resource for learning about good API design for C++ libraries, looking at shared objects/DLLs etc. There are many resources on writing nice APIs, nice classes, templates and so on at the source level, but barely anything about putting things together in shared libs and executables. Books like Large-Scale C++ Software Design by John Lakos are interesting but massively outdated. What I'm looking for is advice on, for example, handling templates. With templates in my API I often end up with library code in my executable (or in another library), so if I fix a bug in there I can't simply roll out the new library but have to recompile and redistribute all clients of that code. (And yes, I know some solutions, like trying to instantiate at least the most common versions inside the library, etc.) I'm also looking for other caveats and things to keep in mind for preserving binary compatibility while working on C++ libraries. Is there a good website or book on such things?


  • Developing Web Portal

    - by Ya Basha
    I'm a PHP, Ruby on Rails and HTML5 developer, and I need some advice and suggestions for a web portal project that I will build from scratch. This is my first time building a web portal. Which scripting language do you prefer for development, and why? And how should I start planning my project, as it will contain many modules? I'm excited to start building this project and I want to build it the right way, with planning. If you know some web resources that would help me decide and plan my project, please share them. Best Regards,


  • Setter Validation can affect performance?

    - by TiagoBrenck
    Within a scenario where you use an ORM to map your entities to the DB, and you have setter validations (nullable, date lower than today, etc.), every time the ORM materializes a result it passes through the setters to instantiate the object. If I have a grid that usually returns 500 records, I assume that for each record all the validations run. If my entity has 5 setter validations, then I have run 2,500 validations. Will those 2,500 validations affect performance? Would 15,000 validations be any different? In my opinion, and according to this answer (http://stackoverflow.com/questions/4893558/calling-setters-from-a-constructor/4893604#4893604), setter validation is more useful than constructor validation. Is there a way to avoid unnecessary validation, given that I can be sure the values I send to the DB when saving the entity won't change until I edit them in my system? (One common approach is sketched below.)
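
    One common way out, sketched minimally in Java with an invented entity (many ORMs get the same effect for free by hydrating fields directly instead of calling setters):

        public class Invoice {
            private double amount;

            // called by application code: validate, because the value is new
            public void setAmount(double amount) {
                if (amount < 0) throw new IllegalArgumentException("amount must be >= 0");
                this.amount = amount;
            }

            // called when loading from the DB: the value was validated when saved,
            // so rehydration assigns the field directly and skips the setter
            static Invoice fromDatabase(double amount) {
                Invoice i = new Invoice();
                i.amount = amount;
                return i;
            }

            public double getAmount() { return amount; }
        }

    Whether a few thousand trivial checks are even measurable next to the database round-trip is worth profiling first; simple comparisons usually cost far less than fetching the 500 rows themselves.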


  • When should I make the first commit to source control?

    - by Kendall Frey
    I'm never sure when a project is far enough along to first commit to source control. I tend to put off committing until the project is 'framework-complete' and primarily commit features from then on. (I haven't done any personal projects large enough to have a core framework too big for this.) I have a feeling this isn't best practice, though I'm not sure what all could go wrong. Let's say, for example, I have a project which consists of a single code file. It will take about 10 lines of boilerplate code, and 100 lines to get the project working with extremely basic functionality (1 or 2 features). Should I first check in:
    - the empty file?
    - the boilerplate code?
    - the first features?
    - at some other point?
    Also, what are the reasons to check in at a specific point?


  • Should a view and a model communicate or not?

    - by Stefano Borini
    According to the Wikipedia page for the MVC architecture, the view is free to be notified by the model, and is also free to query the model about its current state. However, according to Paul Hegarty's course on iOS 5 at Stanford (lecture 1, page 18), all interaction must go through the controller, and the Model and View are never supposed to know each other. It is not clear to me whether Hegarty's statement should be taken as a simplification for the course, but I am tempted to say that he intends the design to be exactly that. How do you explain these two opposite points of view? (A sketch of the Wikipedia-style variant follows below.)
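
    For the Wikipedia-style reading, here is a minimal Java sketch (names invented) of a view that observes the model directly while user input still flows through the controller:

        import java.beans.PropertyChangeListener;
        import java.beans.PropertyChangeSupport;

        class CounterModel {
            private final PropertyChangeSupport pcs = new PropertyChangeSupport(this);
            private int value;

            void addListener(PropertyChangeListener l) { pcs.addPropertyChangeListener(l); }
            void increment() { pcs.firePropertyChange("value", value, ++value); }
        }

        class CounterView {
            CounterView(CounterModel model) {
                // the view knows the model (queries + notifications), never vice versa
                model.addListener(e -> render((int) e.getNewValue()));
            }
            void render(int v) { System.out.println("count = " + v); }
        }

        class CounterController {
            private final CounterModel model;
            CounterController(CounterModel model) { this.model = model; }
            void onButtonClicked() { model.increment(); }  // input -> controller -> model
        }

    Hegarty's stricter rule removes the model-to-view arrow as well, so the controller relays change notifications too; both arrangements keep the model ignorant of any concrete view, which is the part everyone agrees on.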

