Search Results

Search found 30146 results on 1206 pages for 'back to the future'.

Page 50 of 1206

  • PHP+MySQL Update TimeStamp and get NOW() back

    - by Ben
    Is it possible to merge these two MySQL queries into one? I want to get NOW() returned to a PHP variable.

        mysql_query('INSERT INTO translate (IDRef, RefType, Lang, Text, LastChangeTS)
                     VALUES ('.$id.', \''.$reftype.'\', \''.$lang.'\', \''.$text.'\', NOW())
                     ON DUPLICATE KEY UPDATE text = \''.$text.'\', LastChangeTS = NOW()');

        mysql_query('SELECT LastChangeTS FROM translate
                     WHERE IDRef = '.$id.' AND RefType = \''.$reftype.'\' AND Lang = \''.$lang.'\'');
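
    One way to end up with a single source for the timestamp (a sketch of one possible approach, not part of the original question; it assumes a mysqli connection in $db and the table described above) is to fetch NOW() once and reuse that value in the upsert:

        // Fetch the server time once, then write the same value in the upsert,
        // so $now already holds the value stored in LastChangeTS.
        $res = $db->query('SELECT NOW()');
        $row = $res->fetch_row();
        $now = $row[0];

        $stmt = $db->prepare(
            'INSERT INTO translate (IDRef, RefType, Lang, Text, LastChangeTS)
             VALUES (?, ?, ?, ?, ?)
             ON DUPLICATE KEY UPDATE Text = VALUES(Text), LastChangeTS = VALUES(LastChangeTS)');
        $stmt->bind_param('issss', $id, $reftype, $lang, $text, $now);
        $stmt->execute();

    Prepared statements are used here only to keep the sketch safe against quoting problems; the same idea works with the mysql_query() calls above.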

    Read the article

  • How to copy the memory allocated in a device function back to main memory

    - by xhe8
    I have a CUDA program containing a host function and a device function Execute(). In the host function, I allocate global memory, output, which is then passed to the device function and used to store the address of the global memory allocated within the device function. I want to access the in-kernel allocated memory in the host function. The following is the code:

        #include <stdio.h>

        typedef struct {
            int * p;
            int num;
        } Structure_A;

        __global__ void Execute(Structure_A *output);

        int main(){
            Structure_A *output;
            cudaMalloc((void***)&output,sizeof(Structure_A)*1);

            dim3 dimBlockExecute(1,1);
            dim3 dimGridExecute(1,1);
            Execute<<<dimGridExecute,dimBlockExecute>>>(output);

            Structure_A * output_cpu;
            int * p_cpu;
            cudaError_t err;

            output_cpu= (Structure_A*)malloc(1);
            err=cudaMemcpy(output_cpu,output,sizeof(Structure_A),cudaMemcpyDeviceToHost);
            if( err != cudaSuccess)
            {
                printf("CUDA error a: %s\n", cudaGetErrorString(err));
                exit(-1);
            }

            p_cpu=(int *)malloc(1);
            err=cudaMemcpy(p_cpu,output_cpu[0].p,sizeof(int),cudaMemcpyDeviceToHost);
            if( err != cudaSuccess)
            {
                printf("CUDA error b: %s\n", cudaGetErrorString(err));
                exit(-1);
            }

            printf("output=(%d,%d)\n",output_cpu[0].num,p_cpu[0]);
            return 0;
        }

        __global__ void Execute(Structure_A *output){
            int thid=threadIdx.x;
            output[thid].p= (int*)malloc(thid+1);
            output[thid].num=(thid+1);
            output[thid].p[0]=5;
        }

    I can compile the program, but when I run it I get an error showing that there is an invalid argument in the following memory copy call: err=cudaMemcpy(p_cpu,output_cpu[0].p,sizeof(int),cudaMemcpyDeviceToHost); CUDA version: 4.2. CUDA card: Tesla C2075. OS: x86_64 GNU/Linux.
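
    A likely reason for the "invalid argument" (an interpretation, not stated in the question) is that buffers created with in-kernel malloc() live on the device heap and, per the CUDA programming guide, cannot be used with cudaMemcpy(). A minimal sketch of one workaround is to allocate the int buffer from the host with cudaMalloc() and let the kernel store into it; names follow the question's code, but the changed kernel signature is hypothetical:

        // Kernel variant that writes into a host-allocated device buffer instead of
        // calling malloc() on the device heap.
        __global__ void Execute(Structure_A *output, int *buffer){
            int thid = threadIdx.x;
            output[thid].p    = buffer;      // device pointer obtained from cudaMalloc()
            output[thid].num  = thid + 1;
            output[thid].p[0] = 5;
        }

        // Host side: allocate the buffer up front and copy it back directly.
        int *d_buffer;
        cudaMalloc((void**)&d_buffer, sizeof(int));
        Execute<<<dimGridExecute,dimBlockExecute>>>(output, d_buffer);
        cudaMemcpy(p_cpu, d_buffer, sizeof(int), cudaMemcpyDeviceToHost);

    The host-side malloc(1) calls would also need to request at least sizeof(Structure_A) and sizeof(int) respectively.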

    Read the article

  • Flex: cannot resize player back from Full Screen

    - by Patrick
    Hi, my Flex app is not picking up the key event. Since it is really simple code, I cannot understand where the problem is...

        init() {
            stage.addEventListener(KeyboardEvent.KEY_DOWN, escHandler);
        }

        private function escHandler(event:KeyboardEvent):void {
            debugF.text = "ESC pressed";
        }

    Thanks
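
    One possible explanation (an assumption, not confirmed by the question): while Flash content is in full-screen mode, keyboard input is restricted and pressing Esc itself takes the player out of full screen, so the KeyboardEvent may never reach the handler. A sketch of reacting to the mode change instead, assuming init() runs once the stage exists (e.g. on applicationComplete):

        import flash.events.FullScreenEvent;

        private function init():void {
            stage.addEventListener(FullScreenEvent.FULL_SCREEN, fullScreenHandler);
        }

        private function fullScreenHandler(event:FullScreenEvent):void {
            if (!event.fullScreen) {
                debugF.text = "Left full screen";   // resize / re-layout the player here
            }
        }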

    Read the article

  • The future continues to be brighter than ever for JD Edwards as the first ERP suite to run on Apple iPad.

    - by mseika
    Announcing JD Edwards Tools
    JD Edwards Latest and Greatest: Live Demo and Webcast of the New Applications User Interface & Tools on Apple iPad
    Tuesday, December 6, 2011 at 8:00 a.m. Pacific
    Click here to register

    Oracle’s JD Edwards Development Team just completed an exciting new EnterpriseOne User Interface and a massive number of feature innovations for users and system administrators. We are looking forward to demonstrating the new User Interface and Tools. We have a panel of experts lined up just for you and we will be sure to answer all your questions:

    Lyle Ekdahl – Oracle Group Vice President
    Gary Grieshaber – Oracle Strategy Senior Director
    Brian Stanz – Oracle Development Senior Director

    The future continues to be brighter than ever for JD Edwards as the first ERP suite to run on Apple iPad. Please join us for this important webcast and see why we are so excited about these cool tools that make your work more mobile and efficient. Click here to register for the live webcast on Tuesday, December 6th, 2011 at 8:00 a.m. Pacific time!

    Copyright © 2011, Oracle Corporation and/or its affiliates. All rights reserved.

    Read the article

  • Should I go with OpenGL to see my future in Game Development industry? [closed]

    - by Priyank
    Possible Duplicate: Should I continue studying OpenGL or just switch to DirectX to give me a better chance of landing a job in the game industry?

    I tried Google but found quite old articles, so I am in search of an answer in the context of the year 2012. I don't know if you will consider this question appropriate for this community, but I am constantly searching for a perfect answer. What I have seen is that most of the games released these days are DirectX 1x based, except for a few games like StarCraft or Diablo, which don't have high-end graphics and use OpenGL. The platforms I would like to target are PC (Windows), Xbox 360 and PS3 (must). So I have a few questions to ask:

    1. Should I go with learning OpenGL to see my future in the game development industry, or should I shift to DirectX?
    2. If I learn OpenGL first, will it be difficult to learn DirectX afterwards?
    3. Which API is most suitable for indie development?
    4. Which of the two APIs is better from a coder's (programmer's) point of view, e.g. OOP and style of coding?
    5. Should OpenGL being cross-platform be the only reason to choose it over DirectX, even when vendors are not providing enough stable drivers for it?

    Thanks in advance. I have read this post, but I still have a few questions: Should I continue studying OpenGL or just switch to DirectX to give me a better chance of landing a job in the game industry?

    Read the article

  • Did Oracle make public any plans to charge for JDK in the near future? [closed]

    - by Eduard Florinescu
    I recently read an article, Twelve Disaster Scenarios Which Could Damage the Technology Industry, which mentioned among other possible "disaster scenarios": Oracle starts charging for the JDK, giving the following as the argument:

    Oracle could start requiring license fees for the JDK from everyone but desktop users who haven't uninstalled the Java plug-in for some reason. This would burn down half the Java server-side market, but allow Oracle to fully monetize its acquisitions and investments. [...] Oracle tends to destroy markets to create products it can fully monetize. Even if you're not a Java developer, this would have a ripple effect throughout the market. [...] I actually haven't figured out why Larry hasn't decided Java should go this route yet.

    Some version of this scenario is actually in my company's statement of risks. I know guessing the future is impossible, and speculating about it would be endless, so I will try to frame my question in an objective, answerable way: did Oracle, or someone from Oracle speaking anonymously, make public, hint at, or leak such a possibility, or is the above plain journalistic speculation? I am unable to find the answer myself; searching for JDK on Google generates a lot of noise.

    Read the article

  • What are some general guidelines for setting up an iOS project I will want to personally publish but sell in the future?

    - by RLH
    I have an idea for a personal iOS project that I would like to write and release to the iOS store. I'm the type of developer who enjoys developing and publishing. I want to write quality software and take care of my customers. Assuming that I wrote an application that had reasonable success, there is a fair chance that I would want to sell the ownership rights of the app to another party, and I'd use the proceeds to develop my next personal project, which, in turn, I'd probably want to sell in the future.

    With that said, what are some general guidelines for creating and publishing an iOS project that I will eventually want to transfer to another company/developer? I know this is a bit of a broad question, but I request that the given advice be a general list of tips, suggestions and pitfalls to avoid. If any particular bullet point on your list needs more explanation, I'll either search for the answer or post a new question specific to that requirement. Thank you!

    Note regarding this question: I am posting this question on Programmers.SO because I think that this is an issue of software architecture, seeking advice for setting up a new application project and publishing a project to the Apple iOS store -- all within the requirements for questions on this site.

    Read the article

  • Is it justified to use project-wide unique function and variable names to help future refactoring?

    - by kahoon
    Refactoring tools (like ReSharper) often can't be sure whether to rename a given identifier when, for example, refactoring a JavaScript function. I guess this is a consequence of JavaScript's dynamic nature. ReSharper solves this problem by offering to rename reasonable lexical matches too. The developer can opt out of renaming certain functions if he finds that the lexical match is purely accidental. This means that the developer has to approve every instance that will be affected by the renaming. For example, let's consider two Backbone classes which are used completely independently from each other in our application:

        var Header = Backbone.View.extend({
            close: function() {...}
        })

        var Dialog = Backbone.View.extend({
            close: function() {...}
        })

    If you try to rename Dialog's close method to, for example, closeAndNotify, then ReSharper will offer to rename all occurrences of Header's close method just because they are the same lexically prior to the renaming. To avoid this problem, consider this:

        var Header = Backbone.View.extend({
            closeHeader: function() {...}
        })

        var Dialog = Backbone.View.extend({
            closeDialog: function() {...}
        })

    Now you can rename closeDialog unambiguously - given that you have no other classes having a method with the same name. Is it worth it to name my functions this way to help future refactoring?

    Read the article

  • Should I go back to college and graduate with a poor GPA or try to jump into an entry-level development position? [closed]

    - by jshin47
    I once attended a top-10 American university, but I am currently not in school, for several different reasons. Chief among them is that I did very poorly two semesters and even failed one of them (got two F's), which put me in automatic suspension. My major is not CS but math. I am in a pickle at the moment. After I was suspended I got a job at a niche IT company in the area. I am employed as something of an IT generalist; my primary responsibilities are Windows systems administration/networking, but I also do some Android, iOS, and .NET development. I have released a few apps to the app store under my name and my company's name, and we have done work for a few big clients. I started working at my job about 1.5 years ago and I am somewhat happily employed, but I do not see it as a long-term fit because it is a small company with little opportunity to advance.

    I would like to move out to California, and particularly to the Bay Area, to get a job at a more reputable or exciting company, even at a lower rate of pay, but I am not sure if I should do that or try to go back to school. If I went back to school, it would take 1-1.5 years to graduate and some $. Best case scenario I would graduate with a 2.9 or 3.0 GPA. It is a top-10 school, but that's a crappy GPA. If I do not go back to school, I will be in a field where most people have degrees, without a degree. If anything goes wrong I could be really screwed, as I feel I will get no respect without a degree. On the other hand, I really would like to get started in the field and get more serious about developing good development practices, learning new languages/frameworks, and working with people who know a lot more than I do, so I can learn and grow as a developer and eventually do my own thing.

    Basically, I am wondering:

    1. Should I just go back to school?
    2. How much does the bad GPA / good school reputation weigh in? What about the fact that I am a math major and not a CS major (I have never taken a CS course)?
    3. Does my skill set as something of a generalist bode well for me finding work at a startup in the Bay Area?
    4. If not (2), should I hunker down and focus on producing a really good (or a few mediocre) iOS apps? Android apps? etc...
    5. How would you look at someone who did great in HS, kind of goofed off in college and eventually quit, and got into development?

    Thanks for any thoughts or input.

    Read the article

  • If you tried to use Ubuntu then went back to your old OS, why did you do so?

    - by Michael Forrest
    Over the past few years I've been dipping in and out of Ubuntu every so often because I believe in the idea. However, there have always been factors that have made me give up and return to either Windows or OS X, intending to come back when Ubuntu has had a bit more time to 'bake'. Anecdotally, if you have had similar experiences, why did you go back? If we can address these sorts of issues, maybe Ubuntu can get over this hump. I mean 'chasm'.

    Read the article

  • MySQL - InnoDB - is in the future! Current system log sequence number

    - by Ward Loockx
    I have this error in my syslog. After some reading, I restored an older dump to solve the problem. Now the errors in syslog are far fewer than before, but a few times an hour I still get:

        Sep 1 14:23:29 homer mysqld: 120901 14:23:29 InnoDB: Error: page 96637 log sequence number 7 1223357717
        Sep 1 14:23:29 homer mysqld: InnoDB: is in the future! Current system log sequence number 6 647303887.
        Sep 1 14:23:29 homer mysqld: InnoDB: Your database may be corrupt or you may have copied the InnoDB
        Sep 1 14:23:29 homer mysqld: InnoDB: tablespace but not the InnoDB log files. See
        Sep 1 14:23:29 homer mysqld: InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
        Sep 1 14:23:29 homer mysqld: InnoDB: for more information.
        Sep 1 14:23:29 homer mysqld: 120901 14:23:29 InnoDB: Error: page 96638 log sequence number 8 150027924
        Sep 1 14:23:29 homer mysqld: InnoDB: is in the future! Current system log sequence number 6 647303887.
        Sep 1 14:23:29 homer mysqld: InnoDB: Your database may be corrupt or you may have copied the InnoDB
        Sep 1 14:23:29 homer mysqld: InnoDB: tablespace but not the InnoDB log files. See
        Sep 1 14:23:29 homer mysqld: InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
        Sep 1 14:23:29 homer mysqld: InnoDB: for more information.
        Sep 1 14:23:29 homer mysqld: 120901 14:23:29 InnoDB: Error: page 96639 log sequence number 7 4208567151
        Sep 1 14:23:29 homer mysqld: InnoDB: is in the future! Current system log sequence number 6 647303887.
        Sep 1 14:23:29 homer mysqld: InnoDB: Your database may be corrupt or you may have copied the InnoDB
        Sep 1 14:23:29 homer mysqld: InnoDB: tablespace but not the InnoDB log files. See
        Sep 1 14:23:29 homer mysqld: InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
        Sep 1 14:23:29 homer mysqld: InnoDB: for more information.

    Does anybody know how I can track down which database is causing this issue, and how to fix it?
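
    A sketch of the recovery path the linked manual page describes (the general procedure, not something stated in the question; paths and the recovery level are assumptions):

        # /etc/mysql/my.cnf -- start mysqld with forced recovery so the data can be read
        [mysqld]
        innodb_force_recovery = 1    # raise step by step (up to 6) only if startup still fails

        # then, from a shell: dump everything while the server is in this state
        mysqldump --all-databases > all-databases.sql

        # stop mysqld, move ibdata1 and ib_logfile* aside, remove the
        # innodb_force_recovery line, start mysqld again and re-import:
        mysql < all-databases.sql

    The mismatch is between page LSNs and the InnoDB log files, so it is typically resolved by rebuilding the ibdata/ib_logfile set as a whole rather than by repairing a single database.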

    Read the article

  • Application Integration Future – BizTalk Server & Windows Azure (TechEd 2012)

    - by SURESH GIRIRAJAN
    I am really excited to see a lot of news around BizTalk at TechEd 2012. I was recently watching the session presented by Sriram and Rajesh on "Application Integration Futures: The Road Map and What's Next on Windows Azure". It was a great session with lots of interesting detail about the feature updates for BizTalk and Azure integration. I have highlighted them below. Customers who haven't started using the Microsoft BizTalk ESB Toolkit should definitely start, as it is going to be part of the core BizTalk product in a future release, which is cool...

    BizTalk Server feature enhancements

    Manageability:
    - ESB Toolkit is going to be part of the core BizTalk product and setup.
    - Visualize BizTalk artifact dependencies in the BizTalk administration console.
    - HIS administration using configuration files.

    Performance:
    - Improvements in ordered send port scenarios.
    - Improved performance in dynamic send ports and ESB, plus the ability to configure the BizTalk host handler for dynamic send ports. Right now they run under the default host, which does not allow them to scale.
    - MLLP adapter enhancements and DB2 client transaction load balancing / client bulk insert.

    Platform Support:
    - Visual Studio 2012, Windows 8 Server, SQL Server 2012, Office 15 and System Center 2012.
    - B2B enhancements to support the latest industry standards natively.

    Connectivity Improvements:
    - Consume REST services directly in BizTalk.
    - SharePoint integration made easier.
    - Improvements to the SMTP adapter, to add macros for sending the same email with different content to different parties.
    - Connectivity to Azure Service Bus relay, queues and topics.
    - DB2 client connectivity to SQL Server and SQL Server connectivity to Informix.
    - CICS HTTP client connectivity to Windows.

    Azure:
    - Use Azure as IaaS/PaaS for the BizTalk environment.
    - Use Azure to provision a BizTalk environment for test / development, then later move on-premises or build a hybrid cloud approach.
    - Eliminate hardware procurement for BizTalk environments for testing / demos / development etc.
    - Create a BizTalk farm easily and remove/add servers as needed.

    EAI Service:
    - EAI Bridge: protocol transformation, message transformation, running custom code, message enrichment.
    - Hybrid connectivity: LOB applications on-premises, on-premises applications, connectivity to applications in the cloud, queues/topics, FTP, devices, web services.
    - A bridge can be customized based on the service needs to provide the different capabilities required as part of the bridge. Look at the EDI bridge for an EDI service sample, also with tracking enabled through the portal: http://msdn.microsoft.com/en-us/library/windowsazure/hh674490

    Adapters:
    - Service Bus Messaging adapter - new adapter added.
    - WebHttp adapter - for REST services.
    - NetTcpRelay adapter - new adapter added.

    I will start posting more once I start playing with this...

    Read the article

  • Is there a low carbon future for the retail industry?

    - by user801960
    Recently Oracle published a report in conjunction with The Future Laboratory and a global panel of experts to highlight the issue of energy use in modern industry and the serious need to reduce carbon emissions radically by 2050. Emissions must be cut by 80-95% below the levels in 1990 – but what can the retail industry do to keep up with this? There are three key aspects to the retail industry where carbon emissions can be cut: manufacturing, transport and IT.

    Manufacturing

    Naturally, manufacturing is going to be a big area where businesses across all industries will be forced to make considerable savings in carbon emissions as well as other forms of pollution. Many retailers of all sizes will use third party factories and will have little control over specific environmental impacts from the factory, but retailers can reduce environmental impact at the factories by managing orders more efficiently – better planning for stock requirements means economies of scale both in terms of finance and the environment. The John Lewis Partnership has made detailed commitments to reducing manufacturing and packaging waste on both its own-brand products and products it sources from third party suppliers. It aims to divert 95 percent of its operational waste from landfill by 2013, which is a huge logistics challenge. The John Lewis Partnership's website provides a large amount of information on its responsibilities towards the environment.

    Transport

    Similarly to manufacturing, tightening up on logistical planning for stock distribution will make savings on carbon emissions from haulage. More accurate supply and demand analysis will mean less stock re-allocation after initial distribution, and better warehouse management will mean more efficient stock distribution. UK grocery retailer Morrisons has introduced double-decked trailers to its haulage fleet and adjusted distribution logistics accordingly to reduce the number of kilometres travelled by the fleet. Morrisons measures route planning efficiency in terms of cases moved per kilometre and has, over the last two years, increased the number of cases per kilometre by 12.7%. See Morrisons' Corporate Responsibility report for more information.

    IT

    IT infrastructure is often initially overlooked by businesses when considering environmental efficiency. Datacentres and web servers often need to run 24/7 to handle both consumer orders and internal logistics, and this both requires a lot of energy and puts out a lot of heat. Many businesses are lowering environmental impact by reducing IT system fragmentation in their offices, while an increasing number of businesses are outsourcing their datacentres to cloud-based services. Using centralised datacentres reduces the power usage at smaller offices, while using cloud-based services means the datacentres can be based in a more environmentally friendly location. For example, Facebook is opening a massive datacentre in Sweden – close to the Arctic Circle – to reduce the need for artificial cooling methods. In addition, moving to a cloud-based solution makes IT services more easily scalable, reducing redundant IT systems that would still use energy. In store, the UK's Carbon Trust reports that on average, lighting accounts for 25% of a retailer's electricity costs, and for grocery retailers, up to 50% of their electricity bill comes from refrigeration units. On a smaller scale, retailers can invest in greener technologies in store and in their offices.

    The report concludes that the widely shared objectives of energy security, reduced emissions and continued economic growth are dependent on the development of a smart grid capable of delivering energy efficiency and demand response, as well as integrating renewable and variable sources of energy. The report is available to download from http://emeapressoffice.oracle.com/imagelibrary/detail.aspx?MediaDetailsID=1766. I'd be interested to hear your thoughts on the report.

    Read the article

  • More Stuff less Fluff

    - by brendonpage
    Originally posted on: http://geekswithblogs.net/brendonpage/archive/2013/11/08/more-stuff-less-fluff.aspx

    YAGNI – "You Aren't Going To Need It". This is an acronym commonly used in software development to remind developers to only write what they need. This acronym exists because software developers have gotten into the habit of writing everything they need to solve a problem and then everything they think they're going to possibly need in the future. Since we can't predict the future this results in a large portion of the code that we write never being used. That extra code causes unnecessary complexity, which makes it harder to understand and harder to modify when we inevitably have to write something that we didn't think of.

    I've known about YAGNI for some time now but I never really got it. The words made sense and the idea was clear but the concept never sank in. I was one of those devs who'd happily write a ton of code in the anticipation of future needs. In my mind this was an essential part of writing high quality code. I didn't realise that in doing so I was actually writing low quality code.

    If you are anything like me you are probably thinking "Lies and propaganda! High quality code needs to be future proof." I agree! But what makes code future proof? If we could see into the future the answer would be simple, code that allows for or meets all future requirements. Since we can't see the future the best we can do is write code that can easily adapt to future requirements, this means writing flexible code. Flexible code is:

    - Fast to understand.
    - Fast to add to.
    - Fast to modify.

    To be flexible code has to be simple, this means only making it as complex as it needs to be to meet those 3 criteria. That is high quality code. YAGNI! The art is in deciding where to place the seams (abstractions) that will give you flexibility without making decisions about future functionality. Robert C Martin explains it very nicely, he says a good architecture allows you to defer decisions because if you can defer a decision then you have the flexibility to change it.

    I've recently had a YAGNI experience which brought this all into perspective. I was working on a new project which had multiple clients that connect to a server hosted in the cloud. I was tasked with adding a feature to the desktop client that would allow users to capture items that would then be saved to the cloud. My immediate thought was "Hey we have multiple clients so I should build a web service for these items, that way we can access them from other clients", so I went to work and this is what I created.

    I stood back and gazed upon what I'd created with a warm fuzzy feeling. It was beautiful! Then the time came for the team to use the design I'd created for another feature with a new entity. Let's just say that they didn't get the same warm fuzzy feeling that I did when they looked at the design. After much discussion they eventually got it through to me that I'd bloated the design based on an assumption of future functionality. After much more discussion we cut the design down to the following.

    This design gives us future flexibility with no extra work, it is as complex as it needs to be. It has been a couple of months since this incident and we still haven't needed to access either of the entities from other clients. Using the simpler design allowed us to do more stuff with less stuff!

    Read the article

  • Playing HTML5 Video with fall back for IE8/IE7 and earlier versions of other browsers using Silverlight

    - by Harish Ranganathan
    One of the popular HTML5 tags is the video tag. The ability to play videos without depending on a plugin is something that excites web developers to a great extent, and no wonder you end up seeing video demos at all HTML5 conferences. Now, coming to HTML5 video, the tag itself in its simplest form is <video id="ID" src="FILENAME.mp4/ogv/webm">. This also means that the video needs to be an H.264-encoded MP4 or one of the other formats mentioned above. For a detailed specification, check this Wikipedia article.

    HTML5 video is supported by all the modern browsers such as IE9 (currently in RC stage), Mozilla Firefox 4 and the latest versions of Chrome. Here below is a simple example of an HTML5 video tag and a screenshot of how it looks in IE9 RC:

        <!DOCTYPE html>
        <head></head>
        <body>
            <h1>This is a sample of an HTML5 Video</h1>
            <video src="video.mp4" id="myvideo">Your browser doesn't support this currently</video>
        </body>
        </html>

    You can add attributes to the video tag such as "autoplay", which will automatically start playing the video. Also, you can specify "poster" to display an initial picture before the video starts playing, etc., but I am not going into those for now.

    This would play well in the modern browsers mentioned above. However, if the end users are viewing this page from an earlier version of browsers such as IE8/IE7 or IE6, this video wouldn't play; whatever text is specified between the video tags would just show up. Note: for demo purposes, I went to the IE9 developer toolbar and chose IE8 as Browser Mode to exhibit this legacy behaviour. However, in the interest of serving the larger community of users who visit the site, we would like to have a fall-back mechanism for playing videos on older versions of browsers.

    Now, Silverlight is supported in IE6/7 & 8 and other browsers too. If we have the same video encoded for Silverlight, we can put in the fall-back code as follows:

        <video src="videos/video.mp4" id="myvideo">
            <object height="252" type="application/x-silverlight-2" width="448">
                <param name="source" value="resources/player.xap">
                <param name="initParams" value="deferredLoad=true, duration=0, m=http://localhost/DemoSite/videos/video.mp4, autostart=false, autohide=true, showembed=true, postid=0" />
                <param name="background" value="#00FFFFFF" />
            </object>
        </video>

    Note, this sample uses a Silverlight XAP file with the same video and uses the object tag to embed it instead of the HTML5 video tag. So, when I now run this sample and switch to IE8 (using the IE9 Developer toolbar's Browser Mode), I get the Silverlight fall-back player, and the video plays when clicking on the "Play" icon.

    Note, there are multiple ways to play videos in Silverlight and this is one of them. For a complete list of Silverlight samples, visit http://www.silverlight.net/learn/  Also, we can use Flash to play video in the fall-back mechanism as well. Thus, we can create a fall-back mechanism for playing HTML5 videos for older browsers and ensure that the end users get the same experience. Cheers !!!
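
    A small illustrative addition (not from the original post, which uses a single src attribute): listing several <source> encodings lets each modern browser pick a codec it supports, while the nested fall-back object is still used only by browsers with no HTML5 video support at all.

        <video id="myvideo" controls>
            <source src="video.mp4"  type="video/mp4">
            <source src="video.webm" type="video/webm">
            <source src="video.ogv"  type="video/ogg">
            <!-- fall-back for browsers without HTML5 video, e.g. the Silverlight object above -->
            <object height="252" width="448" type="application/x-silverlight-2">
                <param name="source" value="resources/player.xap">
            </object>
        </video>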

    Read the article

  • How to store generated eigen faces for future face recognition?

    - by user3237134
    My code works in the following manner:

    1. First, it obtains several images from the training set.
    2. After loading these images, we find the normalized faces and the mean face, and perform several calculations.
    3. Next, we ask for the name of an image we want to recognize.
    4. We then project the input image into the eigenspace, and based on the difference from the eigenfaces we make a decision.
    5. Depending on the eigen weight vector for each input image, we make clusters using the kmeans command.

    Source code I tried:

        clear all
        close all
        clc

        % number of images on your training set.
        M=1200;

        %Chosen std and mean.
        %It can be any number that it is close to the std and mean of most of the images.
        um=60;
        ustd=32;

        %read and show images(bmp);
        S=[];   %img matrix
        for i=1:M
            str=strcat(int2str(i),'.jpg');   %concatenates two strings that form the name of the image
            eval('img=imread(str);');
            [irow icol d]=size(img);   % get the number of rows (N1) and columns (N2)
            temp=reshape(permute(img,[2,1,3]),[irow*icol,d]);   %creates a (N1*N2)x1 matrix
            S=[S temp];   %X is a N1*N2xM matrix after finishing the sequence; this is our S
        end

        %Here we change the mean and std of all images. We normalize all images.
        %This is done to reduce the error due to lighting conditions.
        for i=1:size(S,2)
            temp=double(S(:,i));
            m=mean(temp);
            st=std(temp);
            S(:,i)=(temp-m)*ustd/st+um;
        end

        %show normalized images
        for i=1:M
            str=strcat(int2str(i),'.jpg');
            img=reshape(S(:,i),icol,irow);
            img=img';
        end

        %mean image;
        m=mean(S,2);                    %obtains the mean of each row instead of each column
        tmimg=uint8(m);                 %converts to unsigned 8-bit integer. Values range from 0 to 255
        img=reshape(tmimg,icol,irow);   %takes the N1*N2x1 vector and creates a N2xN1 matrix
        img=img';                       %creates a N1xN2 matrix by transposing the image.

        % Change image for manipulation
        dbx=[];   % A matrix
        for i=1:M
            temp=double(S(:,i));
            dbx=[dbx temp];
        end

        %Covariance matrix C=A'A, L=AA'
        A=dbx';
        L=A*A';
        % vv are the eigenvector for L
        % dd are the eigenvalue for both L=dbx'*dbx and C=dbx*dbx';
        [vv dd]=eig(L);
        % Sort and eliminate those whose eigenvalue is zero
        v=[];
        d=[];
        for i=1:size(vv,2)
            if(dd(i,i)>1e-4)
                v=[v vv(:,i)];
                d=[d dd(i,i)];
            end
        end

        %sort, will return an ascending sequence
        [B index]=sort(d);
        ind=zeros(size(index));
        dtemp=zeros(size(index));
        vtemp=zeros(size(v));
        len=length(index);
        for i=1:len
            dtemp(i)=B(len+1-i);
            ind(i)=len+1-index(i);
            vtemp(:,ind(i))=v(:,i);
        end
        d=dtemp;
        v=vtemp;

        %Normalization of eigenvectors
        for i=1:size(v,2)   %access each column
            kk=v(:,i);
            temp=sqrt(sum(kk.^2));
            v(:,i)=v(:,i)./temp;
        end

        %Eigenvectors of C matrix
        u=[];
        for i=1:size(v,2)
            temp=sqrt(d(i));
            u=[u (dbx*v(:,i))./temp];
        end

        %Normalization of eigenvectors
        for i=1:size(u,2)
            kk=u(:,i);
            temp=sqrt(sum(kk.^2));
            u(:,i)=u(:,i)./temp;
        end

        % show eigenfaces;
        for i=1:size(u,2)
            img=reshape(u(:,i),icol,irow);
            img=img';
            img=histeq(img,255);
        end

        % Find the weight of each face in the training set.
        omega = [];
        for h=1:size(dbx,2)
            WW=[];
            for i=1:size(u,2)
                t = u(:,i)';
                WeightOfImage = dot(t,dbx(:,h)');
                WW = [WW; WeightOfImage];
            end
            omega = [omega WW];
        end

        % Acquire new image
        % Note: the input image must have a bmp or jpg extension.
        % It should have the same size as the ones in your training set.
        % It should be placed on your desktop
        ed_min=[];
        srcFiles = dir('G:\newdatabase\*.jpg');   % the folder in which ur images exists
        for b = 1 : length(srcFiles)
            filename = strcat('G:\newdatabase\',srcFiles(b).name);
            Imgdata = imread(filename);
            InputImage=Imgdata;
            InImage=reshape(permute((double(InputImage)),[2,1,3]),[irow*icol,1]);
            temp=InImage;
            me=mean(temp);
            st=std(temp);
            temp=(temp-me)*ustd/st+um;
            NormImage = temp;
            Difference = temp-m;

            p = [];
            aa=size(u,2);
            for i = 1:aa
                pare = dot(NormImage,u(:,i));
                p = [p; pare];
            end

            InImWeight = [];
            for i=1:size(u,2)
                t = u(:,i)';
                WeightOfInputImage = dot(t,Difference');
                InImWeight = [InImWeight; WeightOfInputImage];
            end
            noe=numel(InImWeight);

            % Find Euclidean distance
            e=[];
            for i=1:size(omega,2)
                q = omega(:,i);
                DiffWeight = InImWeight-q;
                mag = norm(DiffWeight);
                e = [e mag];
            end
            ed_min=[ed_min MinimumValue];
            theta=6.0e+03;
            %disp(e)
            z(b,:)=InImWeight;
        end

        IDX = kmeans(z,5);
        clustercount=accumarray(IDX, ones(size(IDX)));
        disp(clustercount);

    QUESTIONS:

    1. It is working fine for M=50 (i.e. the training set contains 50 images) but not for M=1200 (i.e. the training set contains 1200 images). It is not showing any error; there is simply no output. I waited for 10 minutes and still there was no output. I think it is going into an infinite loop. What is the problem? Where did I go wrong?
    2. Instead of running the training set every time, how can the generated eigenfaces be stored, so that the stored eigenfaces are used for future face recognition of a new input image? That would reduce the wasted time.
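
    On question 2, a minimal sketch (variable names are taken from the script above; this is an assumption, not code from the original post) is to persist the training-phase results once with save and reload them in the recognition script with load, instead of recomputing the eigenfaces every run:

        % after the training phase has produced u (eigenfaces), m (mean face),
        % omega (training weights) and the image parameters:
        save('eigenspace.mat', 'u', 'm', 'omega', 'irow', 'icol', 'ustd', 'um');

        % later, at the start of the recognition script:
        load('eigenspace.mat');   % restores u, m, omega, irow, icol, ustd, um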

    Read the article

  • Develop in trunk and then branch off, or in release branch and then merge back?

    - by Torben Gundtofte-Bruun
    Say that we've decided on following a "release-based" branching strategy, so we'll have a branch for each release, and we can add maintenance updates as sub-branches from those. Does it matter whether we:

    1. develop and stabilize a new release in the trunk and then "save" that state in a new release branch; or
    2. first create that release branch and only merge into the trunk when the branch is stable?

    I find the former to be easier to deal with (less merging necessary), especially when we don't develop on multiple upcoming releases at the same time. Under normal circumstances we would all be working on the trunk, and only work on released branches if there are bugs to fix. What is the trunk actually used for in the latter approach? It seems to be almost obsolete, because I could create a future release branch based on the most recent released branch rather than from the trunk.

    Details based on a comment below: Our product consists of a base platform and a number of modules on top; each is developed and even distributed separately from the others. Most team members work on several of these areas, so there's partial overlap between people. We generally work only on one future release and not at all on existing releases. One or two people might work on a bugfix for an existing release for short periods of time. Our work isn't compiled and it's a mix of Unix shell scripts, XML configuration files, SQL packages, and more -- so there's no way to have push-button builds that can be tested. That's done manually, which is a bit laborious. A release cycle is typically half a year or more for the base platform; often 1 month for the modules.
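
    For the first approach, the mechanics are light regardless of the VCS; a sketch in git terms, with hypothetical branch names (the question itself is tool-agnostic and only speaks of a "trunk"):

        # develop and stabilize the upcoming release on the trunk (master) ...
        git checkout master
        # ... and when it is declared stable, "save" that state as the release branch
        git branch release/2.0 master

        # master then moves on to the next release; later maintenance fixes are made
        # on release/2.0 (or sub-branches of it) and merged back where needed.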

    Read the article

  • Android: How to restore List data when pressing the "back" button?

    - by Rob
    Hi there, my question is about restoring complex activity-related data when coming back to the activity using the "back" button. Activity A has a ListView which is connected to an ArrayAdapter serving as its data source - this happens in onCreate(). By default, if I move to activity B and press "back" to get back to activity A, does my list stay intact with all the data, or do I just get a visual "copy" of the screen while the data is lost? What can I do when more than two activities are involved? Let's say activity A starts activity B, which starts activity C, and then I press "back" twice to get to A. How do I ensure the integrity of A's data when it gets back to the foreground? PrefsManager does not seem to handle complex objects very intuitively. Thanks, Rob
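
    For what it's worth, pressing "back" normally resumes the same instance of activity A, so the adapter and its data are still in memory; saving state matters for the case where the system kills A's process while B or C is in front. A minimal sketch of that safety net (hypothetical field names, assuming a ListActivity backed by an ArrayList of strings):

        private ArrayList<String> items = new ArrayList<String>();

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            if (savedInstanceState != null) {
                // activity was recreated: restore the list data saved below
                items = savedInstanceState.getStringArrayList("items");
            }
            setListAdapter(new ArrayAdapter<String>(this,
                    android.R.layout.simple_list_item_1, items));
        }

        @Override
        protected void onSaveInstanceState(Bundle outState) {
            super.onSaveInstanceState(outState);
            outState.putStringArrayList("items", items);
        }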

    Read the article

  • Can I send a variable to paypal, and have it post it back to me when payment completes?

    - by Yegor
    I've been using the Express Checkout API to convert people's accounts on my site to premium accounts after paying. The only problem with it is that it doesn't send the user back to the site until they click the button to return, and it updates their permissions when that happens. About 40% of the users don't seem to do that, so their accounts never get credited after payment. Although PayPal does an instant post-back upon the successful payment, I was never able to make it actually update the user's account right away, since I can't get it to send back some sort of information that would identify the user who just completed the payment. I could only do that when the user is sent back to the site, which sends the transaction ID that I logged with a post-back; it searches for it, and grants permission if it was found in the DB. Is there a way to submit some sort of a variable to PayPal, that it will then post back to me? Something like &user_id=123, which would make it very handy to update the user's permissions.
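
    One commonly used hook for this (an assumption about the setup, not something stated in the question) is PayPal's Instant Payment Notification: a value supplied in the checkout call's custom field is echoed back as the custom variable in the IPN post, so a listener can credit the account without waiting for the user to return. A minimal PHP sketch of such a listener (hypothetical file name and crediting logic):

        <?php
        // ipn.php - registered as the IPN notification URL in the PayPal account.
        // Echo the received fields back to PayPal to verify the notification is genuine.
        $req = 'cmd=_notify-validate';
        foreach ($_POST as $key => $value) {
            $req .= '&' . $key . '=' . urlencode($value);
        }

        $ch = curl_init('https://www.paypal.com/cgi-bin/webscr');
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $req);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $res = curl_exec($ch);
        curl_close($ch);

        if ($res === 'VERIFIED' && $_POST['payment_status'] === 'Completed') {
            $userId = (int) $_POST['custom'];   // whatever was passed as the custom value
            // ... mark $userId as premium in the database here ...
        }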

    Read the article

  • What are best practices when switching between projects/coming back to projects frequently?

    - by dj444
    The nature of my job is that I have to switch back and forth between projects every few weeks. I find that one of the biggest impediments to my productivity is the ramp-up time needed to get all the relevant pieces of code "back in my head" again after not seeing them for a while. This happens to a smaller or larger extent depending on how long the break was. Obviously, good design, documentation, commenting, and physical structure all help with this (not to mention switching between projects as infrequently as possible). But I'm wondering if there are practices or tools that I may be missing out on. What are your specific practices for improving on this?

    Read the article

  • How should I incorporate a hotfix back into a feature branch using gitflow?

    - by Mark Trapp
    I've started using gitflow for a project, and I have an outstanding feature branch as well as a newly created hotfix. Per the gitflow workflow, the hotfix gets applied to both the master and develop branches, but nothing is said or done about extant feature branches. Nevertheless, I'd like to incorporate the hotfix changes back into my feature branch, which as near as I can tell leaves three options:

    1. Don't incorporate the changes. If the changes were needed for the feature branch, they should have been part of the feature branch.
    2. Merge develop back into the feature branch. This seems to follow the gitflow workflow the best, but would cause out-of-order commits.
    3. Rebase the feature branch onto develop. This would preserve commit order, but rebasing seems to be completely absent from the general gitflow workflow.

    What's the best practice here?
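
    For reference, option 2 needs nothing beyond plain git commands; a sketch with a hypothetical branch name (the gitflow helper scripts are not required for this step):

        git checkout develop
        git pull                            # make sure the merged hotfix is present locally
        git checkout feature/my-feature     # hypothetical feature branch name
        git merge develop                   # brings the hotfix commits into the feature branch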

    Read the article

  • How to get Windows back? Lubuntu?

    - by Jon
    I installed Lubuntu and I don't like it. I did a clean install of Lubuntu rather than making it a dual boot. Now that I want to go back to Windows, it won't work. I put the Windows Vista CD in and it won't load; it just boots Lubuntu again. I know for a fact that the CD works just fine, so I don't know what to do. I did a little research and it says I deleted the Windows loader or the partition. Can someone help me?

    Read the article

  • Is it possible to get the old workspace switcher back in the top panel?

    - by Arne
    I just updated my Ubuntu from 10.04 to 12.04 yesterday and so far I'm very satisfied with it. There is only one thing still missing for me to have the same comfort as I had with the previous version. I work on 4 workspaces and would like the ability to switch from one to another with just one click. With GNOME there was the possibility to add the workspace switcher to the top panel next to the system tray, and I could easily click on just the workspace I wanted to switch to. Of course I could switch using the keyboard shortcuts, but I would like to use the mouse in addition. I don't like using the switcher on the left side because it's much slower. Is it possible to get the old workspace switcher back in the panel at the top?

    Read the article

  • How do I get my folders back on my desktop?

    - by Christopher Allen
    Using the 12.04 version. This morning the Update Manager popped up; I clicked it and it said some items will be uninstalled, some will be upgraded and new ones will be installed. I did it and restarted. On restart, I entered my password about 5 times and it said it was incorrect. I changed the shell from GNOME to Ubuntu and logged on, then changed it back to GNOME and I was able to log on. However, all my folders on the screen are gone, but they appear under Desktop in the Home folder. How do I get them back on the screen?

    Read the article
