Search Results

Search found 55281 results on 2212 pages for 'get set'.


  • Entity Framework RC1 DbContext query issue

    - by Steve
    I'm trying to implement the repository pattern using entity framework code first rc 1. The problem I am running into is with creating the DbContext. I have an ioc container resolving the IRepository and it has a contextprovider which just news up a new DbContext with a connection string in a windsor.config file. With linq2sql this part was no problem but EF seems to be choking. I'll describe the problem below with an example. I've pulled out the code to simplify things a bit so that is why you don't see any repository pattern stuff here. just sorta what is happening without all the extra code and classes. using (var context = new PlssContext()) { var x = context.Set<User>(); var y = x.Where(u => u.UserName == LogOnModel.UserName).FirstOrDefault(); } using (var context2 = new DbContext(@"Data Source=.\SQLEXPRESS;Initial Catalog=PLSS.Models.PlssContext;Integrated Security=True;MultipleActiveResultSets=True")) { var x = context2.Set<User>(); var y = x.Where(u => u.UserName == LogOnModel.UserName).FirstOrDefault(); } PlssContext is where I am creating my DbContext class. The repository pattern doesn't know anything about PlssContext. The best I thought I could do was create a DbContext with the connection string to the sqlexpress database and query the data that way. The connection string in the var context2 was grabbed from the context after newing up the PlssContext object. So they are pointing at the same sqlexpress database. The first query works. The second query fails miserably with this error: The model backing the 'DbContext' context has changed since the database was created. Either manually delete/update the database, or call Database.SetInitializer with an IDatabaseInitializer instance. For example, the DropCreateDatabaseIfModelChanges strategy will automatically delete and recreate the database, and optionally seed it with new data. on this line var y = x.Where(u => u.UserName == LogOnModel.UserName).FirstOrDefault(); Here is my DbContext namespace PLSS.Models { public class PlssContext : DbContext { public DbSet<User> Users { get; set; } public DbSet<Corner> Corners { get; set; } public DbSet<Lookup_County> Lookup_County { get; set; } public DbSet<Lookup_Accuracy> Lookup_Accuracy { get; set; } public DbSet<Lookup_MonumentStatus> Lookup_MonumentStatus { get; set; } public DbSet<Lookup_CoordinateSystem> Lookup_CoordinateSystem { get; set; } public class Initializer : DropCreateDatabaseAlways<PlssContext> { protected override void Seed(PlssContext context) { I've tried all of the Initializer strategies with the same errors. I don't think the database is changing. If I remove the modelBuilder.Conventions.Remove<IncludeMetadataConvention>(); Then the error returns is The entity type User is not part of the model for the current context. Which sort of makes sense. But how do you bring this all together?
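    One way to keep the repository pattern and avoid both errors is to hand the repository a factory that creates the derived, model-aware context instead of newing up a bare DbContext from a connection string. A minimal sketch is below; IDbContextFactory and EfRepository<T> are illustrative names, not the poster's code, and the initializer should be registered once at application start.

```csharp
using System;
using System.Data.Entity;
using System.Linq;
using System.Linq.Expressions;
using PLSS.Models;   // PlssContext lives here in the question

public interface IDbContextFactory
{
    DbContext Create();
}

// Registered with Windsor instead of "new DbContext(connectionString)".
public class PlssContextFactory : IDbContextFactory
{
    public DbContext Create()
    {
        // Database.SetInitializer(new PlssContext.Initializer()) should be
        // called once at application start, not here per request.
        return new PlssContext();   // the derived context carries the model
    }
}

// Illustrative generic repository; it never needs to know about PlssContext.
public class EfRepository<T> : IDisposable where T : class
{
    private readonly DbContext _context;

    public EfRepository(IDbContextFactory factory)
    {
        _context = factory.Create();
    }

    public T FirstOrDefault(Expression<Func<T, bool>> predicate)
    {
        return _context.Set<T>().Where(predicate).FirstOrDefault();
    }

    public void Dispose()
    {
        _context.Dispose();
    }
}
```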

    Read the article

  • S#harp architecture mapping many-to-many and ADO.NET Data Services: A single resource was expected for the result

    - by Leg10n
    Hi, I'm developing an application that reads data from a SQL server database (migrated from a legacy DB) with nHibernate and s#arp architecture through ADO.NET Data services. I'm trying to map a many-to-many relationship. I have a Error class: public class Error { public virtual int ERROR_ID { get; set; } public virtual string ERROR_CODE { get; set; } public virtual string DESCRIPTION { get; set; } public virtual IList<ErrorGroup> GROUPS { get; protected set; } } And then I have the error group class: public class ErrorGroup { public virtual int ERROR_GROUP_ID {get; set;} public virtual string ERROR_GROUP_NAME { get; set; } public virtual string DESCRIPTION { get; set; } public virtual IList<Error> ERRORS { get; protected set; } } And the overrides: public class ErrorGroupOverride : IAutoMappingOverride<ErrorGroup> { public void Override(AutoMapping<ErrorGroup> mapping) { mapping.Table("ERROR_GROUP"); mapping.Id(x => x.ERROR_GROUP_ID, "ERROR_GROUP_ID"); mapping.IgnoreProperty(x => x.Id); mapping.HasManyToMany<Error>(x => x.Error) .Table("ERROR_GROUP_LINK") .ParentKeyColumn("ERROR_GROUP_ID") .ChildKeyColumn("ERROR_ID").Inverse().AsBag(); } } public class ErrorOverride : IAutoMappingOverride<Error> { public void Override(AutoMapping<Error> mapping) { mapping.Table("ERROR"); mapping.Id(x => x.ERROR_ID, "ERROR_ID"); mapping.IgnoreProperty(x => x.Id); mapping.HasManyToMany<ErrorGroup>(x => x.GROUPS) .Table("ERROR_GROUP_LINK") .ParentKeyColumn("ERROR_ID") .ChildKeyColumn("ERROR_GROUP_ID").AsBag(); } } When I view the Data service in the browser like: http://localhost:1905/DataService.svc/Errors it shows the list of errors with no problems, and using it like http://localhost:1905/DataService.svc/Errors(123) works too. The Problem When I want to see the Errors in a group or the groups form an error, like: "http://localhost:1905/DataService.svc/Errors(123)?$expand=GROUPS" I get the XML Document, but the browser says: The XML page cannot be displayed Cannot view XML input using XSL style sheet. Please correct the error and then click the Refresh button, or try again later. -------------------------------------------------------------------------------- Only one top level element is allowed in an XML document. Error processing resource 'http://localhost:1905/DataServic... <error xmlns="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"> -^ I view the sourcecode, and I get the data. However it comes with an exception: <error xmlns="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"> <code></code> <message xml:lang="en-US">An error occurred while processing this request.</message> <innererror xmlns="xmlns"> <message>A single resource was expected for the result, but multiple resources were found.</message> <type>System.InvalidOperationException</type> <stacktrace> at System.Data.Services.Serializers.Serializer.WriteRequest(IEnumerator queryResults, Boolean hasMoved)&#xD; at System.Data.Services.ResponseBodyWriter.Write(Stream stream)</stacktrace> </innererror> </error> A I missing something??? Where does this error come from?
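    While the $expand behaviour is being diagnosed, the related rows can sometimes still be reached by addressing the navigation directly (…/Errors(123)/GROUPS) or through a filtered service operation. The sketch below shows the latter; ErrorDataService and ErrorCatalog are hypothetical names standing in for the poster's service and data-source classes, and the LINQ provider behind the service must be able to translate the Any() call.

```csharp
using System.Data.Services;
using System.Linq;
using System.ServiceModel.Web;

// ErrorCatalog stands in for whatever class already exposes IQueryable<Error>
// to the reflection provider in the poster's project.
public class ErrorDataService : DataService<ErrorCatalog>
{
    public static void InitializeService(IDataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
        // Service operations are locked down unless explicitly enabled.
        config.SetServiceOperationAccessRule("ErrorsInGroup", ServiceOperationRights.AllRead);
    }

    // GET DataService.svc/ErrorsInGroup?groupId=1
    [WebGet]
    public IQueryable<Error> ErrorsInGroup(int groupId)
    {
        // Requires a LINQ provider that can translate Any(); if it cannot,
        // query from the ErrorGroups side instead.
        return CurrentDataSource.Errors
            .Where(e => e.GROUPS.Any(g => g.ERROR_GROUP_ID == groupId));
    }
}
```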

    Read the article

  • More elegant way to make a C++ member function change different member variables based on template parameter

    - by Eric Moyer
    Today, I wrote some code that needed to add elements to different container variables depending on the type of a template parameter. I solved it by writing a friend helper class specialized on its own template parameter which had a member variable of the original class. It saved me a few hundred lines of repeating myself without adding much complexity. However, it seemed kludgey. I would like to know if there is a better, more elegant way. The code below is a greatly simplified example illustrating the problem and my solution. It compiles in g++. #include <vector> #include <algorithm> #include <iostream> namespace myNS{ template<class Elt> struct Container{ std::vector<Elt> contents; template<class Iter> void set(Iter begin, Iter end){ contents.erase(contents.begin(), contents.end()); std::copy(begin, end, back_inserter(contents)); } }; struct User; namespace WkNS{ template<class Elt> struct Worker{ User& u; Worker(User& u):u(u){} template<class Iter> void set(Iter begin, Iter end); }; }; struct F{ int x; explicit F(int x):x(x){} }; struct G{ double x; explicit G(double x):x(x){} }; struct User{ Container<F> a; Container<G> b; template<class Elt> void doIt(Elt x, Elt y){ std::vector<Elt> v; v.push_back(x); v.push_back(y); Worker<Elt>(*this).set(v.begin(), v.end()); } }; namespace WkNS{ template<class Elt> template<class Iter> void Worker<Elt>::set(Iter begin, Iter end){ std::cout << "Set a." << std::endl; u.a.set(begin, end); } template<> template<class Iter> void Worker<G>::set(Iter begin, Iter end){ std::cout << "Set b." << std::endl; u.b.set(begin, end); } }; }; int main(){ using myNS::F; using myNS::G; myNS::User u; u.doIt(F(1),F(2)); u.doIt(G(3),G(4)); } User is the class I was writing. Worker is my helper class. I have it in its own namespace because I don't want it causing trouble outside myNS. Container is a container class whose definition I don't want to modify, but is used by User in its instance variables. doIt<F> should modify a. doIt<G> should modify b. F and G are open to limited modification if that would produce a more elegant solution. (As an example of one such modification, in the real application F's constructor takes a dummy parameter to make it look like G's constructor and save me from repeating myself.) In the real code, Worker is a friend of User and member variables are private. To make the example simpler to write, I made everything public. However, a solution that requires things to be public really doesn't answer my question. Given all these caveats, is there a better way to write User::doIt?
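    One commonly suggested alternative to the specialized friend class is a set of private accessor overloads that map the element type to the right container, so plain overload resolution does the dispatch. The sketch below reworks the simplified example, keeping its names; it is one possible shape, not the only one.

```cpp
#include <iostream>
#include <vector>

namespace myNS {

template <class Elt>
struct Container {
    std::vector<Elt> contents;
    template <class Iter>
    void set(Iter begin, Iter end) { contents.assign(begin, end); }
};

struct F { int x;    explicit F(int x) : x(x) {} };
struct G { double x; explicit G(double x) : x(x) {} };

struct User {
    Container<F> a;
    Container<G> b;

    template <class Elt>
    void doIt(Elt x, Elt y) {
        std::vector<Elt> v;
        v.push_back(x);
        v.push_back(y);
        // Overload resolution on the pointer type picks the right container.
        containerFor(static_cast<Elt*>(0)).set(v.begin(), v.end());
    }

private:
    Container<F>& containerFor(F*) { std::cout << "Set a.\n"; return a; }
    Container<G>& containerFor(G*) { std::cout << "Set b.\n"; return b; }
};

} // namespace myNS

int main() {
    myNS::User u;
    u.doIt(myNS::F(1), myNS::F(2));
    u.doIt(myNS::G(3.0), myNS::G(4.0));
}
```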

    Read the article

  • How to select chosen columns from two different entities into one DTO using NHibernate?

    - by Pawel Krakowiak
    I have two classes (just recreating the problem): public class User { public virtual int Id { get; set; } public virtual string FirstName { get; set; } public virtual string LastName { get; set; } public virtual IList<OrgUnitMembership> OrgUnitMemberships { get; set; } } public class OrgUnitMembership { public virtual int UserId { get; set; } public virtual int OrgUnitId { get; set; } public virtual DateTime JoinDate { get; set; } public virtual DateTime LeaveDate { get; set; } } There's a Fluent NHibernate map for both, of course: public class UserMapping : ClassMap<User> { public UserMapping() { Table("Users"); Id(e => e.Id).GeneratedBy.Identity(); Map(e => e.FirstName); Map(e => e.LastName); HasMany(x => x.OrgUnitMemberships) .KeyColumn(TypeReflector<OrgUnitMembership> .GetPropertyName(p => p.UserId))).ReadOnly().Inverse(); } } public class OrgUnitMembershipMapping : ClassMap<OrgUnitMembership> { public OrgUnitMembershipMapping() { Table("OrgUnitMembership"); CompositeId() .KeyProperty(x=>x.UserId) .KeyProperty(x=>x.OrgUnitId); Map(x => x.JoinDate); Map(x => x.LeaveDate); References(oum => oum.OrgUnit) .Column(TypeReflector<OrgUnitMembership> .GetPropertyName(oum => oum.OrgUnitId)).ReadOnly(); References(oum => oum.User) .Column(TypeReflector<OrgUnitMembership> .GetPropertyName(oum => oum.UserId)).ReadOnly(); } } What I want to do is to retrieve some users based on criteria, but I would like to combine all columns from the Users table with some columns from the OrgUnitMemberships table, analogous to a SQL query: select u.*, m.JoinDate, m.LeaveDate from Users u inner join OrgUnitMemberships m on u.Id = m.UserId where m.OrgUnitId = :ouid I am totally lost, I tried many different options. Using a plain SQL query almost works, but because there are some nullable enums in the User class AliasToBean fails to transform, otherwise wrapping a SQL query would work like this: return Session .CreateSQLQuery(sql) .SetParameter("ouid", orgUnitId) .SetResultTransformer(Transformers.AliasToBean<UserDTO>()) .List<UserDTO>() I tried the code below as a test (a few different variants), but I'm not sure what I'm doing. It works partially, I get instances of UserDTO back, the properties coming from OrgUnitMembership (dates) are filled, but all properties from User are null: User user = null; OrgUnitMembership membership = null; UserDTO dto = null; var users = Session.QueryOver(() => user) .SelectList(list => list .Select(() => user.Id) .Select(() => user.FirstName) .Select(() => user.LastName)) .JoinAlias(u => u.OrgUnitMemberships, () => membership) //.JoinQueryOver<OrgUnitMembership>(u => u.OrgUnitMemberships) .SelectList(list => list .Select(() => membership.JoinDate).WithAlias(() => dto.JoinDate) .Select(() => membership.LeaveDate).WithAlias(() => dto.LeaveDate)) .TransformUsing(Transformers.AliasToBean<UserDTO>()) .List<UserDTO>();
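    The usual explanation for the null User properties is that only the membership columns are given a WithAlias(...), so AliasToBean has nothing to match the user columns against, and splitting the projection across two SelectList calls compounds the problem. A hedged sketch of the corrected projection is below; it assumes UserDTO exposes Id, FirstName and LastName properties with those exact names.

```csharp
// using NHibernate.Transform;  -- for Transformers
User user = null;
OrgUnitMembership membership = null;
UserDTO dto = null;

// One SelectList for all columns; every column gets an alias that matches
// a property on UserDTO.
var users = Session.QueryOver(() => user)
    .JoinAlias(u => u.OrgUnitMemberships, () => membership)
    .Where(() => membership.OrgUnitId == orgUnitId)
    .SelectList(list => list
        .Select(() => user.Id).WithAlias(() => dto.Id)
        .Select(() => user.FirstName).WithAlias(() => dto.FirstName)
        .Select(() => user.LastName).WithAlias(() => dto.LastName)
        .Select(() => membership.JoinDate).WithAlias(() => dto.JoinDate)
        .Select(() => membership.LeaveDate).WithAlias(() => dto.LeaveDate))
    .TransformUsing(Transformers.AliasToBean<UserDTO>())
    .List<UserDTO>();
```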

    Read the article

  • Databinding in combo box

    - by muralekarthick
    Hi I have two forms, and a class, queries return in Stored procedure. Stored Procedure: ALTER PROCEDURE [dbo].[Payment_Join] @reference nvarchar(20) AS BEGIN -- SET NOCOUNT ON added to prevent extra result sets from -- interfering with SELECT statements. SET NOCOUNT ON; -- Insert statements for procedure here SELECT p.iPaymentID,p.nvReference,pt.nvPaymentType,p.iAmount,m.nvMethod,u.nvUsers,p.tUpdateTime FROM Payment p, tblPaymentType pt, tblPaymentMethod m, tblUsers u WHERE p.nvReference = @reference and p.iPaymentTypeID = pt.iPaymentTypeID and p.iMethodID = m.iMethodID and p.iUsersID = u.iUsersID END payment.cs using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Data; using System.Data.SqlClient; using System.Windows.Forms; namespace Finance { class payment { string connection = global::Finance.Properties.Settings.Default.PaymentConnectionString; #region Fields int _paymentid = 0; string _reference = string.Empty; string _paymenttype; double _amount = 0; string _paymentmethod; string _employeename; DateTime _updatetime = DateTime.Now; #endregion #region Properties public int paymentid { get { return _paymentid; } set { _paymentid = value; } } public string reference { get { return _reference; } set { _reference = value; } } public string paymenttype { get { return _paymenttype; } set { _paymenttype = value; } } public string paymentmethod { get { return _paymentmethod; } set { _paymentmethod = value; } } public double amount { get { return _amount;} set { _amount = value; } } public string employeename { get { return _employeename; } set { _employeename = value; } } public DateTime updatetime { get { return _updatetime; } set { _updatetime = value; } } #endregion #region Constructor public payment() { } public payment(string refer) { reference = refer; } public payment(int paymentID, string Reference, string Paymenttype, double Amount, string Paymentmethod, string Employeename, DateTime Time) { paymentid = paymentID; reference = Reference; paymenttype = Paymenttype; amount = Amount; paymentmethod = Paymentmethod; employeename = Employeename; updatetime = Time; } #endregion #region Methods public void Save() { try { SqlConnection connect = new SqlConnection(connection); SqlCommand command = new SqlCommand("payment_create", connect); command.CommandType = CommandType.StoredProcedure; command.Parameters.Add(new SqlParameter("@reference", reference)); command.Parameters.Add(new SqlParameter("@paymenttype", paymenttype)); command.Parameters.Add(new SqlParameter("@amount", amount)); command.Parameters.Add(new SqlParameter("@paymentmethod", paymentmethod)); command.Parameters.Add(new SqlParameter("@employeename", employeename)); command.Parameters.Add(new SqlParameter("@updatetime", updatetime)); connect.Open(); command.ExecuteScalar(); connect.Close(); } catch { } } public void Load(string reference) { try { SqlConnection connect = new SqlConnection(connection); SqlCommand command = new SqlCommand("Payment_Join", connect); command.CommandType = CommandType.StoredProcedure; command.Parameters.Add(new SqlParameter("@Reference", reference)); //MessageBox.Show("ref = " + reference); connect.Open(); SqlDataReader reader = command.ExecuteReader(); while (reader.Read()) { this.reference = Convert.ToString(reader["nvReference"]); // MessageBox.Show(reference); // MessageBox.Show("here"); // MessageBox.Show("payment type id = " + reader["nvPaymentType"]); // MessageBox.Show("here1"); this.paymenttype = Convert.ToString(reader["nvPaymentType"]); // 
MessageBox.Show(paymenttype.ToString()); this.amount = Convert.ToDouble(reader["iAmount"]); this.paymentmethod = Convert.ToString(reader["nvMethod"]); this.employeename = Convert.ToString(reader["nvUsers"]); this.updatetime = Convert.ToDateTime(reader["tUpdateTime"]); } reader.Close(); } catch (Exception ex) { MessageBox.Show("Check it again" + ex); } } #endregion } } i have already binded the combo box items through designer, When i run the application i just get the reference populated in form 2 and combo box just populated not the particular value which is fetched. New to c# so help me to get familiar
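    The Load method only fills the payment object; the form still has to push those values into its controls, and for a ComboBox whose items were added in the designer that means selecting the matching item. A rough sketch is below; the control names (cboPaymentType, cboPaymentMethod, txtReference, txtAmount) are assumptions, not from the question.

```csharp
// Somewhere in the second form, after the reference has been passed in.
private void LoadPayment(string reference)
{
    payment p = new payment(reference);
    p.Load(reference);

    txtReference.Text = p.reference;
    txtAmount.Text = p.amount.ToString("0.00");

    // Select the designer-populated item that matches the value read from the DB.
    cboPaymentType.SelectedIndex = cboPaymentType.FindStringExact(p.paymenttype);
    cboPaymentMethod.SelectedIndex = cboPaymentMethod.FindStringExact(p.paymentmethod);
    // FindStringExact returns -1 when there is no match, which simply
    // leaves the ComboBox with nothing selected.
}
```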

    Read the article

  • How do I make my applet turn the user's input into an integer and compare it to the computer's random number?

    - by Kitteran
    I'm in beginning programming and I don't fully understand applets yet. However, (with some help from internet tutorials) I was able to create an applet that plays a game of guess with the user. The applet compiles fine, but when it runs, this error message appears: "Exception in thread "main" java.lang.NumberFormatException: For input string: "" at java.lang.NumberFormatException.forInputString(NumberFormatException.java:48) at java.lang.Integer.parseInt(Integer.java:470) at java.lang.Integer.parseInt(Integer.java:499) at Guess.createUserInterface(Guess.java:101) at Guess.<init>(Guess.java:31) at Guess.main(Guess.java:129)" I've tried moving the "userguess = Integer.parseInt( t1.getText() );" on line 101 to multiple places, but I still get the same error. Can anyone tell me what I'm doing wrong? The Code: // Creates the game GUI. import javax.swing.*; import java.awt.*; import java.awt.event.*; public class Guess extends JFrame{ private JLabel userinputJLabel; private JLabel lowerboundsJLabel; private JLabel upperboundsJLabel; private JLabel computertalkJLabel; private JButton guessJButton; private JPanel guessJPanel; static int computernum; int userguess; static void declare() { computernum = (int) (100 * Math.random()) + 1; //random number picked (1-100) } // no-argument constructor public Guess() { createUserInterface(); } // create and position GUI components private void createUserInterface() { // get content pane and set its layout Container contentPane = getContentPane(); contentPane.setLayout( null ); contentPane.setBackground( Color.white ); // set up userinputJLabel userinputJLabel = new JLabel(); userinputJLabel.setText( "Enter Guess Here -->" ); userinputJLabel.setBounds( 0, 65, 120, 50 ); userinputJLabel.setHorizontalAlignment( JLabel.CENTER ); userinputJLabel.setBackground( Color.white ); userinputJLabel.setOpaque( true ); contentPane.add( userinputJLabel ); // set up lowerboundsJLabel lowerboundsJLabel = new JLabel(); lowerboundsJLabel.setText( "Lower Bounds Of Guess = 1" ); lowerboundsJLabel.setBounds( 0, 0, 170, 50 ); lowerboundsJLabel.setHorizontalAlignment( JLabel.CENTER ); lowerboundsJLabel.setBackground( Color.white ); lowerboundsJLabel.setOpaque( true ); contentPane.add( lowerboundsJLabel ); // set up upperboundsJLabel upperboundsJLabel = new JLabel(); upperboundsJLabel.setText( "Upper Bounds Of Guess = 100" ); upperboundsJLabel.setBounds( 250, 0, 170, 50 ); upperboundsJLabel.setHorizontalAlignment( JLabel.CENTER ); upperboundsJLabel.setBackground( Color.white ); upperboundsJLabel.setOpaque( true ); contentPane.add( upperboundsJLabel ); // set up computertalkJLabel computertalkJLabel = new JLabel(); computertalkJLabel.setText( "Computer Says:" ); computertalkJLabel.setBounds( 0, 130, 100, 50 ); //format (x, y, width, height) computertalkJLabel.setHorizontalAlignment( JLabel.CENTER ); computertalkJLabel.setBackground( Color.white ); computertalkJLabel.setOpaque( true ); contentPane.add( computertalkJLabel ); //Set up guess jbutton guessJButton = new JButton(); guessJButton.setText( "Enter" ); guessJButton.setBounds( 250, 78, 100, 30 ); contentPane.add( guessJButton ); guessJButton.addActionListener( new ActionListener() // anonymous inner class { // event handler called when Guess button is pressed public void actionPerformed( ActionEvent event ) { guessActionPerformed( event ); } } // end anonymous inner class ); // end call to addActionListener // set properties of application's window setTitle( "Guess Game" ); // set title bar text setSize( 500, 500 ); // set 
window size setVisible( true ); // display window //create text field TextField t1 = new TextField(); // Blank text field for user input t1.setBounds( 135, 78, 100, 30 ); contentPane.add( t1 ); userguess = Integer.parseInt( t1.getText() ); //create section for computertalk Label computertalkLabel = new Label(""); computertalkLabel.setBounds( 115, 130, 300, 50); contentPane.add( computertalkLabel ); } // Display computer reactions to user guess private void guessActionPerformed( ActionEvent event ) { if (userguess > computernum) //if statements (computer's reactions to user guess) computertalkJLabel.setText( "Computer Says: Too High" ); else if (userguess < computernum) computertalkJLabel.setText( "Computer Says: Too Low" ); else if (userguess == computernum) computertalkJLabel.setText( "Computer Says:You Win!" ); else computertalkJLabel.setText( "Computer Says: Error" ); } // end method oneJButtonActionPerformed // end method createUserInterface // main method public static void main( String args[] ) { Guess application = new Guess(); application.setDefaultCloseOperation (JFrame.EXIT_ON_CLOSE); } // end method main } // end class Phone
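    The stack trace points at the Integer.parseInt call in createUserInterface, which runs while the text field is still empty. The usual fix is to parse inside the button handler instead; a sketch is below, assuming t1 is promoted from a local variable to an instance field so the listener can read it.

```java
// Replacement for guessActionPerformed: parse the guess when the button is
// clicked, not while the UI is being built.
private void guessActionPerformed(ActionEvent event) {
    int userguess;
    try {
        userguess = Integer.parseInt(t1.getText().trim());
    } catch (NumberFormatException nfe) {
        computertalkJLabel.setText("Computer Says: please enter a whole number");
        return;
    }

    if (userguess > computernum) {
        computertalkJLabel.setText("Computer Says: Too High");
    } else if (userguess < computernum) {
        computertalkJLabel.setText("Computer Says: Too Low");
    } else {
        computertalkJLabel.setText("Computer Says: You Win!");
    }
}
```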

    Read the article

  • NoClassDefFoundError and Netty

    - by Dmytro Leonenko
    Hi. First to say I'm n00b in Java. I can understand most concepts but in my situation I want somebody to help me. I'm using JBoss Netty to handle simple http request and using MemCachedClient check existence of client ip in memcached. import org.jboss.netty.channel.ChannelHandler; import static org.jboss.netty.handler.codec.http.HttpHeaders.*; import static org.jboss.netty.handler.codec.http.HttpHeaders.Names.*; import static org.jboss.netty.handler.codec.http.HttpResponseStatus.*; import static org.jboss.netty.handler.codec.http.HttpVersion.*; import com.danga.MemCached.*; import java.util.List; import java.util.Map; import java.util.Map.Entry; import java.util.Set; import org.jboss.netty.buffer.ChannelBuffer; import org.jboss.netty.buffer.ChannelBuffers; import org.jboss.netty.channel.ChannelFuture; import org.jboss.netty.channel.ChannelFutureListener; import org.jboss.netty.channel.ChannelHandlerContext; import org.jboss.netty.channel.ExceptionEvent; import org.jboss.netty.channel.MessageEvent; import org.jboss.netty.channel.SimpleChannelUpstreamHandler; import org.jboss.netty.handler.codec.http.Cookie; import org.jboss.netty.handler.codec.http.CookieDecoder; import org.jboss.netty.handler.codec.http.CookieEncoder; import org.jboss.netty.handler.codec.http.DefaultHttpResponse; import org.jboss.netty.handler.codec.http.HttpChunk; import org.jboss.netty.handler.codec.http.HttpChunkTrailer; import org.jboss.netty.handler.codec.http.HttpRequest; import org.jboss.netty.handler.codec.http.HttpResponse; import org.jboss.netty.handler.codec.http.HttpResponseStatus; import org.jboss.netty.handler.codec.http.QueryStringDecoder; import org.jboss.netty.util.CharsetUtil; /** * @author <a href="http://www.jboss.org/netty/">The Netty Project</a> * @author Andy Taylor ([email protected]) * @author <a href="http://gleamynode.net/">Trustin Lee</a> * * @version $Rev: 2368 $, $Date: 2010-10-18 17:19:03 +0900 (Mon, 18 Oct 2010) $ */ @SuppressWarnings({"ALL"}) public class HttpRequestHandler extends SimpleChannelUpstreamHandler { private HttpRequest request; private boolean readingChunks; /** Buffer that stores the response content */ private final StringBuilder buf = new StringBuilder(); protected MemCachedClient mcc = new MemCachedClient(); private static SockIOPool poolInstance = null; static { // server list and weights String[] servers = { "lcalhost:11211" }; //Integer[] weights = { 3, 3, 2 }; Integer[] weights = {1}; // grab an instance of our connection pool SockIOPool pool = SockIOPool.getInstance(); // set the servers and the weights pool.setServers(servers); pool.setWeights(weights); // set some basic pool settings // 5 initial, 5 min, and 250 max conns // and set the max idle time for a conn // to 6 hours pool.setInitConn(5); pool.setMinConn(5); pool.setMaxConn(250); pool.setMaxIdle(21600000); //1000 * 60 * 60 * 6 // set the sleep for the maint thread // it will wake up every x seconds and // maintain the pool size pool.setMaintSleep(30); // set some TCP settings // disable nagle // set the read timeout to 3 secs // and don't set a connect timeout pool.setNagle(false); pool.setSocketTO(3000); pool.setSocketConnectTO(0); // initialize the connection pool pool.initialize(); // lets set some compression on for the client // compress anything larger than 64k //mcc.setCompressEnable(true); //mcc.setCompressThreshold(64 * 1024); } @Override public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception { HttpRequest request = this.request = (HttpRequest) e.getMessage(); 
if(mcc.get(request.getHeader("X-Real-Ip")) != null) { HttpResponse response = new DefaultHttpResponse(HTTP_1_1, OK); response.setHeader("X-Accel-Redirect", request.getUri()); ctx.getChannel().write(response).addListener(ChannelFutureListener.CLOSE); } else { sendError(ctx, NOT_FOUND); } } private void writeResponse(MessageEvent e) { // Decide whether to close the connection or not. boolean keepAlive = isKeepAlive(request); // Build the response object. HttpResponse response = new DefaultHttpResponse(HTTP_1_1, OK); response.setContent(ChannelBuffers.copiedBuffer(buf.toString(), CharsetUtil.UTF_8)); response.setHeader(CONTENT_TYPE, "text/plain; charset=UTF-8"); if (keepAlive) { // Add 'Content-Length' header only for a keep-alive connection. response.setHeader(CONTENT_LENGTH, response.getContent().readableBytes()); } // Encode the cookie. String cookieString = request.getHeader(COOKIE); if (cookieString != null) { CookieDecoder cookieDecoder = new CookieDecoder(); Set<Cookie> cookies = cookieDecoder.decode(cookieString); if(!cookies.isEmpty()) { // Reset the cookies if necessary. CookieEncoder cookieEncoder = new CookieEncoder(true); for (Cookie cookie : cookies) { cookieEncoder.addCookie(cookie); } response.addHeader(SET_COOKIE, cookieEncoder.encode()); } } // Write the response. ChannelFuture future = e.getChannel().write(response); // Close the non-keep-alive connection after the write operation is done. if (!keepAlive) { future.addListener(ChannelFutureListener.CLOSE); } } @Override public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) throws Exception { e.getCause().printStackTrace(); e.getChannel().close(); } private void sendError(ChannelHandlerContext ctx, HttpResponseStatus status) { HttpResponse response = new DefaultHttpResponse(HTTP_1_1, status); response.setHeader(CONTENT_TYPE, "text/plain; charset=UTF-8"); response.setContent(ChannelBuffers.copiedBuffer( "Failure: " + status.toString() + "\r\n", CharsetUtil.UTF_8)); // Close the connection as soon as the error message is sent. ctx.getChannel().write(response).addListener(ChannelFutureListener.CLOSE); } } When I try to send request like http://127.0.0.1:8090/1/2/3 I'm getting java.lang.NoClassDefFoundError: com/danga/MemCached/MemCachedClient at httpClientValidator.server.HttpRequestHandler.<clinit>(HttpRequestHandler.java:66) I believe it's not related to classpath. May be it's related to context in which mcc doesn't exist. Any help appreciated EDIT: Original code http://docs.jboss.org/netty/3.2/xref/org/jboss/netty/example/http/snoop/package-summary.html I've modified some parts to fit my needs.
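    A NoClassDefFoundError raised from a static initializer usually means the memcached client jar was on the compile classpath but not the runtime classpath (or that one of its own dependencies failed to load). A small diagnostic sketch that checks visibility with the same class loader the server uses:

```java
// Run this with exactly the classpath the Netty server is started with.
public final class ClasspathCheck {
    public static void main(String[] args) {
        try {
            Class<?> c = Class.forName("com.danga.MemCached.MemCachedClient");
            Object location = (c.getProtectionDomain().getCodeSource() == null)
                    ? "the bootstrap class path"
                    : c.getProtectionDomain().getCodeSource().getLocation();
            System.out.println("Found " + c.getName() + " in " + location);
        } catch (ClassNotFoundException e) {
            System.out.println("The java_memcached jar is not on the runtime classpath");
        } catch (NoClassDefFoundError e) {
            System.out.println("Class loaded, but a dependency is missing: " + e.getMessage());
        }
    }
}
```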

    Read the article

  • 47 memory leaks. STL pointers.

    - by icelated
    I have a major amount of memory leaks. I know that the Sets have pointers and i cannot change that! I cannot change anything, but clean up the mess i have... I am creating memory with new in just about every function to add information to the sets. I have a Cd/ DVD/book: super classes of ITEM class and a library class.. In the library class i have 2 functions for cleaning up the sets.. Also, the CD, DVD, book destructors are not being called.. here is my potential leaks.. library.h #pragma once #include <ostream> #include <map> #include <set> #include <string> #include "Item.h" using namespace std; typedef set<Item*> ItemSet; typedef map<string,Item*> ItemMap; typedef map<string,ItemSet*> ItemSetMap; class Library { public: // general functions void addKeywordForItem(const Item* const item, const string& keyword); const ItemSet* itemsForKeyword(const string& keyword) const; void printItem(ostream& out, const Item* const item) const; // book-related functions const Item* addBook(const string& title, const string& author, int const nPages); const ItemSet* booksByAuthor(const string& author) const; const ItemSet* books() const; // music-related functions const Item* addMusicCD(const string& title, const string& band, const int nSongs); void addBandMember(const Item* const musicCD, const string& member); const ItemSet* musicByBand(const string& band) const; const ItemSet* musicByMusician(const string& musician) const; const ItemSet* musicCDs() const; // movie-related functions const Item* addMovieDVD(const string& title, const string& director, const int nScenes); void addCastMember(const Item* const movie, const string& member); const ItemSet* moviesByDirector(const string& director) const; const ItemSet* moviesByActor(const string& actor) const; const ItemSet* movies() const; ~Library(); void Purge(ItemSet &set); void Purge(ItemSetMap &map); }; here is some functions for adding info using new in library. Keep in mind i am cutting out alot of code to keep this post short. library.cpp #include "Library.h" #include "book.h" #include "cd.h" #include "dvd.h" #include <iostream> // general functions ItemSet allBooks; ItemSet allCDS; ItemSet allDVDs; ItemSetMap allBooksByAuthor; ItemSetMap allmoviesByDirector; ItemSetMap allmoviesByActor; ItemSetMap allMusicByBand; ItemSetMap allMusicByMusician; const ItemSet* Library::itemsForKeyword(const string& keyword) const { const StringSet* kw; ItemSet* obj = new ItemSet(); return obj; const Item* Library::addBook(const string& title, const string& author, const int nPages) { ItemSet* obj = new ItemSet(); Book* item = new Book(title,author,nPages); allBooks.insert(item); // add to set of all books obj->insert(item); return item; const Item* Library::addMusicCD(const string& title, const string& band, const int nSongs) { ItemSet* obj = new ItemSet(); CD* item = new CD(title,band,nSongs); return item; void Library::addBandMember(const Item* musicCD, const string& member) { ItemSet* obj = new ItemSet(); (((CD*) musicCD)->addBandMember(member)); obj->insert((CD*) musicCD); here is the library destructor..... 
Library::~Library() { Purge(allBooks); Purge(allCDS); Purge(allDVDs); Purge(allBooksByAuthor); Purge(allmoviesByDirector); Purge(allmoviesByActor); Purge(allMusicByBand); Purge(allMusicByMusician); } void Library::Purge(ItemSet &set) { for (ItemSet::iterator it = set.begin(); it != set.end(); ++it) delete *it; set.clear(); } void Library::Purge(ItemSetMap &map) { for (ItemSetMap::iterator it = map.begin(); it != map.end(); ++it) delete it->second; map.clear(); } so, basically item, cd, dvd class all have a set like this: typedef set<string> StringSet; class CD : public Item StringSet* music; and i am deleting it like: but those superclasses are not being called.. Item destructor is. CD::~CD() { delete music; } Do, i need a copy constructor? and how do i delete those objects i am creating in the library class? and how can i get the cd,dvd, destructor called? would the addbandmember function located in the library.cpp cause me to have a copy constructor? Any real help you can provide me to help me clean up this mess instead of telling me not to use pointers in my sets i would really appreciate. How can i delete the memory i am creating in those functions? I cannot delete them in the function!!
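    Two changes usually account for most of the leaks described here: Item needs a virtual destructor so that deleting through an Item* runs the CD/DVD/Book destructors, and the ItemSet objects created inside the add functions must either be stored in the member maps (so Purge can free them) or not allocated at all. A partial sketch using the question's own names is below; it is a fragment of Item.h and library.cpp, not a complete rewrite.

```cpp
// Item.h -- a virtual destructor makes "delete *it;" on an Item* call the
// CD/DVD/Book destructors, which in turn free their StringSet members.
class Item {
public:
    virtual ~Item() {}
    // ... existing members unchanged ...
};

// library.cpp -- don't allocate ItemSets that are never stored; insert into
// the containers that Purge() already cleans up.
const Item* Library::addMusicCD(const string& title, const string& band,
                                const int nSongs)
{
    CD* item = new CD(title, band, nSongs);
    allCDS.insert(item);                    // the Item is freed by Purge(allCDS)

    ItemSet*& byBand = allMusicByBand[band];
    if (byBand == 0)
        byBand = new ItemSet();             // the set itself is freed by Purge(allMusicByBand)
    byBand->insert(item);

    return item;
}
```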

    Read the article

  • A "Trig" Calculating Class

    - by Clinton Scott
    I have been trying to create a gui that calculates trigonometric functions based off of the user's input. I have had success in the GUI part, but my class that I wrote to hold information using inheritance seems to be messed up, because when I run it gives an error saying: Exception in thread "main" java.lang.RuntimeException: Uncompilable source code - constructor ArcTrigCalcCon in class TrigCalc.ArcTrigCalcCon cannot be applied to given types; required: double,double,double,double,double,double found: java.lang.Double,java.lang.Double,java.lang.Double reason: actual and formal argument lists differ in length at TrigCalc.TrigCalcGUI.(TrigCalcGUI.java:31) at TrigCalc.TrigCalcGUI.main(TrigCalcGUI.java:87) Java Result: 1 and says it is the object causing the problem. Below Will be my code. First I will put up my inheritance class with cosecant secant and cotangent and then my original class with the original 3 trig functions: { public ArcTrigCalcCon(double s, double cs, double t, double csc, double sc, double ct) { // Inherit from the Trig Calc class super(s, cs, t); cosecant = 1/s; secant = 1/cs; cotangent = 1/t; } public void setCsc(double csc) { cosecant = csc; } public void setSec(double sc) { secant = sc; } public void setCot(double ct) { cotangent = ct; } } Here is the first Trigonometric class: public class TrigCalcCon { public double sine; public double cosine; public double tangent; public TrigCalcCon(double s, double cs, double t) { sine = s; cosine = cs; tangent = t; } public void setSin(double s) { sine = s; } public void setCos(double cs) { cosine = cs; } public void setTan(double t) { tangent = t; } public void set(double s, double cs, double t) { sine = s; cosine = cs; tangent = t; } public double getSin() { return Math.sin(sine); } public double getCos() { return Math.cos(cosine); } public double getTan() { return Math.tan(tangent); } } and here is the demo class to run the gui: public class TrigCalcGUI extends JFrame implements ActionListener { // Instance Variables private String input; private Double s, cs, t, csc, sc, ct; private JPanel mainPanel, sinPanel, cosPanel, tanPanel, cscPanel, secPanel, cotPanel, buttonPanel, inputPanel, displayPanel; // Panel Display private JLabel sinLabel, cosLabel, tanLabel, secLabel, cscLabel, cotLabel, inputLabel; private JTextField sinTF, cosTF, tanTF, secTF, cscTF, cotTF, inputTF; //Text Fields for sin, cos, and tan, and inverse private JButton calcButton, clearButton; // Calculate and Exit Buttons // Object ArcTrigCalcCon trC = new ArcTrigCalcCon(s, cs, t); public TrigCalcGUI() { // title bar text. super("Trig Calculator"); // Corner exit button action. 
setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); // Create main panel to add each panel to mainPanel = new JPanel(); mainPanel.setLayout(new GridLayout(3,2)); displayPanel = new JPanel(); displayPanel.setLayout(new GridLayout(3,2)); // Assign Panel to each variable inputPanel = new JPanel(); sinPanel = new JPanel(); cosPanel = new JPanel(); tanPanel = new JPanel(); cscPanel = new JPanel(); secPanel = new JPanel(); cotPanel = new JPanel(); buttonPanel = new JPanel(); // Call each constructor buildInputPanel(); buildSinCosTanPanels(); buildCscSecCotPanels(); buildButtonPanel(); // Add each panel to content pane displayPanel.add(sinPanel); displayPanel.add(cscPanel); displayPanel.add(cosPanel); displayPanel.add(secPanel); displayPanel.add(tanPanel); displayPanel.add(cotPanel); // Add three content panes to GUI mainPanel.add(inputPanel, BorderLayout.NORTH); mainPanel.add(displayPanel, BorderLayout.CENTER); mainPanel.add(buttonPanel, BorderLayout.SOUTH); //add mainPanel this.add(mainPanel); // size of window to content this.pack(); // display window setVisible(true); } public static void main(String[] args) { new TrigCalcGUI(); } private void buildInputPanel() { inputLabel = new JLabel("Enter a Value: "); inputTF = new JTextField(5); inputPanel.add(inputLabel); inputPanel.add(inputTF); } // Building Constructor for sinPanel cosPanel, and tanPanel private void buildSinCosTanPanels() { // Set layout and border for sinPanel sinPanel.setLayout(new GridLayout(2,2)); sinPanel.setBorder(BorderFactory.createTitledBorder("Sine")); // sinTF = new JTextField(5); sinTF.setEditable(false); sinPanel.add(sinTF); // Set layout and border for cosPanel cosPanel.setLayout(new GridLayout(2,2)); cosPanel.setBorder(BorderFactory.createTitledBorder("Cosine")); cosTF = new JTextField(5); cosTF.setEditable(false); cosPanel.add(cosTF); // Set layout and border for tanPanel tanPanel.setLayout(new GridLayout(2,2)); tanPanel.setBorder(BorderFactory.createTitledBorder("Tangent")); tanTF = new JTextField(5); tanTF.setEditable(false); tanPanel.add(tanTF); } // Building Constructor for cscPanel secPanel, and cotPanel private void buildCscSecCotPanels() { // Set layout and border for cscPanel cscPanel.setLayout(new GridLayout(2,2)); cscPanel.setBorder(BorderFactory.createTitledBorder("Cosecant")); // cscTF = new JTextField(5); cscTF.setEditable(false); cscPanel.add(cscTF); // Set layout and border for secPanel secPanel.setLayout(new GridLayout(2,2)); secPanel.setBorder(BorderFactory.createTitledBorder("Secant")); secTF = new JTextField(5); secTF.setEditable(false); secPanel.add(secTF); // Set layout and border for cotPanel cotPanel.setLayout(new GridLayout(2,2)); cotPanel.setBorder(BorderFactory.createTitledBorder("Cotangent")); cotTF = new JTextField(5); cotTF.setEditable(false); cotPanel.add(cotTF); } private void buildButtonPanel() { // Create buttons and add events calcButton = new JButton("Calculate"); calcButton.addActionListener(new CalcButtonListener()); clearButton = new JButton("Clear"); clearButton.addActionListener(new ClearButtonListener()); buttonPanel.add(calcButton); buttonPanel.add(clearButton); } @Override public void actionPerformed(ActionEvent e) { } private class CalcButtonListener implements ActionListener { public void actionPerformed(ActionEvent ae) { // Declare boolean variable boolean incorrect = true; // Set input variable to input text field text input = inputTF.getText(); ImageIcon newIcon; ImageIcon frowny = new ImageIcon(TrigCalcGUI.class.getResource("/Sad_Face.png")); Image gm = frowny.getImage(); 
Image newFrowny = gm.getScaledInstance(100, 100, java.awt.Image.SCALE_FAST); newIcon = new ImageIcon(newFrowny); // If boolean is true, throw exception if(incorrect) { try{Double.parseDouble(input); incorrect = false;} catch(NumberFormatException nfe) { String s = "Invalid Input " + "/n Input Must Be a Numerical value." + "/nPlease Press Ok and Try Again"; JOptionPane.showMessageDialog(null, s, "Invalid", JOptionPane.ERROR_MESSAGE, newIcon); inputTF.setText(""); inputTF.requestFocus(); } } // If boolean is not true, proceed with output if (incorrect != true) { /* Set each text field's output to the String double value * of inputTF */ sinTF.setText(input); cosTF.setText(input); tanTF.setText(input); cscTF.setText(input); secTF.setText(input); cotTF.setText(input); } } } /** * Private inner class that handles the event when * the user clicks the Exit button. */ private class ClearButtonListener implements ActionListener { public void actionPerformed(ActionEvent ae) { // Clear field sinTF.setText(""); cosTF.setText(""); tanTF.setText(""); cscTF.setText(""); secTF.setText(""); cotTF.setText(""); // Clear textfield and set cursor focus to field inputTF.setText(""); inputTF.requestFocus(); } } }
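    The compiler message is accurate: ArcTrigCalcCon declares a six-argument constructor but the GUI invokes it with three arguments, and the three Double fields are still null at field-initialization time anyway. One possible fix is sketched below: a three-argument constructor that derives the reciprocal functions itself, constructed only after the input has been parsed. Taking reciprocals of the trig functions (rather than of the raw inputs) is an assumption about the intent.

```java
public class ArcTrigCalcCon extends TrigCalcCon {

    private double cosecant, secant, cotangent;

    public ArcTrigCalcCon(double s, double cs, double t) {
        super(s, cs, t);
        cosecant  = 1.0 / Math.sin(s);
        secant    = 1.0 / Math.cos(cs);
        cotangent = 1.0 / Math.tan(t);
    }

    public double getCsc() { return cosecant; }
    public double getSec() { return secant; }
    public double getCot() { return cotangent; }
}

// In CalcButtonListener.actionPerformed, once the input has parsed cleanly:
//     double value = Double.parseDouble(input);
//     ArcTrigCalcCon trC = new ArcTrigCalcCon(value, value, value);
//     sinTF.setText(String.valueOf(trC.getSin()));
//     cscTF.setText(String.valueOf(trC.getCsc()));
//     ... and so on for the remaining text fields.
```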

    Read the article

  • Embedding ADF UI Components into OAF regions

    - by Juan Camilo Ruiz
    Having finished the 2 Webcast on ADF integration with Oracle E-Business Suite, Sara Woodhull, Principal Product Manager on the Oracle E-Business Suite Applications Technology team and I are going to continue adding entries to the series on this topic, trying to cover as many use cases as possible. In this entry, Sara created an overview on how Oracle ADF pages can be embedded into an Oracle Application Framework region. This is a very interesting approach that will enable those of you who are exploring ADF as a technology stack to enhanced some of the Oracle E-Business Suite flows and leverage your skill on Oracle Applications Framework (OAF). In upcoming entries we will start unveiling the internals needed to achieve session sharing between the regions. Stay tuned for more entries and enjoy this new post.   Document Scope This document only covers information that is specific to embedding an Oracle ADF page in an Oracle Application Framework–based page. It assumes knowledge of Oracle ADF and Oracle Application Framework development. It also assumes knowledge of the material in My Oracle Support Note 974949.1, “Oracle E-Business Suite SDK for Java” and My Oracle Support Note 1296491.1, "FAQ for Integration of Oracle E-Business Suite and Oracle Application Development Framework (ADF) Applications". Prerequisite Patch Download Patch 12726556:R12.FND.B from My Oracle Support and install it. The implementation described below requires Patch 12726556:R12.FND.B to provide the accessors for the ADF page. This patch is required in addition to the Oracle E-Business Suite SDK for Java patch described in My Oracle Support Note 974949.1. Development Environments You need two different JDeveloper environments: Oracle ADF and OA Framework. Oracle ADF Development Environment You build your Oracle ADF page using JDeveloper 11g. You should use JDeveloper 11g R1 (the latest is 11.1.1.6.0) if you need to use other products in the Oracle Fusion Middleware Stack, such as Oracle WebCenter, Oracle SOA Suite, or BI. You should use JDeveloper 11g R2 (the latest is 11.1.2.3.0) if you do not need other Oracle Fusion Middleware products. JDeveloper 11g R2 is an Oracle ADF-specific release that supports the latest Java EE standards and has various core improvements. Oracle Application Framework Development Environment Build your OA Framework page using a development environment corresponding to your Oracle E-Business Suite version. You must use Release 12.1.2 or later because the rich content container was introduced in Release 12.1.2. See “OA Framework - How to find the correct version of JDeveloper to use with eBusiness Suite 11i or Release 12.x” (My Oracle Support Doc ID 416708.1). Building your Oracle ADF Page Typically you build your ADF page using the session management feature of the Oracle E-Business Suite SDK for Java as described in My Oracle Support Note 974949.1. Also see My Oracle Support Note 1296491.1, "FAQ for Integration of Oracle E-Business Suite and Oracle Application Development Framework (ADF) Applications". Building an ADF Page with the Hierarchy Viewer If you are using the ADF hierarchy viewer, you should set up the structure and settings of the ADF page as follows or the hierarchy viewer may not fill the entire area it is supposed to fill (especially a problem in Firefox). Create a stretchable component as the parent component for the hierarchy viewer, such as af:panelStretchLayout (underneath the af:form component in the structure). 
Use af:panelStretchLayout for Oracle ADF 11.1.1.6 and earlier. For later versions of Oracle ADF, use af:panelGridLayout. Create your hierarchy viewer component inside the stretchable component. Create Function in Oracle E-Business Suite Instance In your Oracle E-Business Suite instance, create a function for your ADF page with the following parameters. You can use either the Functions window in the System Administrator responsibility or the Functions page in the Functional Administrator responsibility. Function Function Name Type=External ADF Function (ADFX) HTML Call=GWY.jsp?targetPage=faces/<your ADF page> ">You must also add your function to an Oracle E-Business Suite menu or permission set and set up function security or role-based access control (RBAC) so that the user has authorization to access the function. If you do not want the function to appear on the navigation menu, add the function without a menu prompt. See the Oracle E-Business Suite System Administrator's Guide Documentation Set for more information. Testing the Function from the Oracle E-Business Suite Home Page It’s a good idea to test launching your ADF page from the Oracle E-Business Suite Home Page. Add your function to the navigation menu for your responsibility with a prompt and try launching it. If your ADF page expects parameters from the surrounding page, those might not be available, however. Setting up the Oracle Application Framework Rich Container Once you have built your Oracle ADF 11g page, you need to embed it in your Oracle Application Framework page. Create Rich Content Container in your OA Framework JDeveloper environment In the OA Extension Structure pane for your OAF page, select the region where you want to add the rich content, and add a richContainer item to the region. Set the following properties on the richContainer item: id Content Type=Others (for Release 12.1.3. This property value may change in a future release.) Destination Function=[function code] Width (in pixels or percent, such as 100%) Height (in pixels) Parameters=[any parameters your Oracle ADF page is expecting to receive from the Oracle Application Framework page] Parameters In the Parameters property, specify parameters that will be passed to the embedded content as a list of comma-separated, name-value pairs. Dynamic parameters may be specified as paramName={@viewAttr}. Dynamic Rich Content Container Properties If you want your rich content container to display a different Oracle ADF page depending on other information, you would set up a different function for each different Oracle ADF page. You would then set the Destination Function and Parameters properties programmatically, instead of setting them in the Property Inspector. 
In the processRequest() method of your Oracle Application Framework page controller, where OAFRichContentPage is the ID of your richContainer item and the parameters are whatever parameters your ADF page expects, your code might look similar to this code fragment: OARichContainerBean richBean = (OARichContainerBean) webBean.findChildRecursive("OAFRichContentPage"); if(richBean != null){ if(isFirstCondition){ richBean.setFunctionName("ADF_EXAMPLE_EMBEDDED"); richBean.setParameters("ParamLoginPersonId="+loginPersonId +"&ParamPersonId="+personId+"&ParamUserId="+userId +"&ParamRespId="+respId+"&ParamRespApplId="+respApplId +"&ParamFromOA=Y"+"&ParamSecurityGroupId="+securityGroupId); } else if(isSecondCondition){ richBean.setFunctionName("ADF_EXAMPLE_OTHER_FUNCTION"); richBean.setParameters("ParamLoginPersonId=" +loginPersonId+"&ParamPersonId="+personId +"&ParamUserId="+userId+"&ParamRespId="+respId +"&ParamRespApplId="+respApplId +"&ParamFromOA=Y" +"&ParamSecurityGroupId="+securityGroupId); } }
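    As the parameter list grows, concatenating it by hand gets error-prone. The small helper below is illustrative only — it is not part of the article or of the OAF API — and simply assembles the name=value&name=value string that setParameters() expects from a map. Values are assumed not to contain '&' or '='; if they can, escape them in whatever way your framework version expects.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Plain string assembly for OARichContainerBean.setParameters(); no OAF types involved.
public final class RichContainerParams {

    private RichContainerParams() { }

    public static String build(Map<String, String> params) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (sb.length() > 0) {
                sb.append('&');
            }
            sb.append(e.getKey()).append('=').append(e.getValue());
        }
        return sb.toString();
    }

    // Usage inside processRequest(), mirroring the article's fragment:
    //   Map<String, String> p = new LinkedHashMap<String, String>();
    //   p.put("ParamLoginPersonId", String.valueOf(loginPersonId));
    //   p.put("ParamFromOA", "Y");
    //   richBean.setParameters(RichContainerParams.build(p));
}
```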

    Read the article

  • ActiveX component can't create Object Error? Check 64 bit Status

    - by Rick Strahl
    If you're running on IIS 7 and a 64 bit operating system you might run into the following error using ASP classic or ASP.NET with COM interop. In classic ASP applications the error will show up as: ActiveX component can't create object   (Error 429) (actually without error handling the error just shows up as 500 error page) In my case the code that's been giving me problems has been a FoxPro COM object I'd been using to serve banner ads to some of my pages. The code basically looks up banners from a database table and displays them at random. The ASP classic code that uses it looks like this: <% Set banner = Server.CreateObject("wwBanner.aspBanner") banner.BannerFile = "wwsitebanners" Response.Write(banner.GetBanner(-1)) %> Originally this code had no specific error checking as above so the ASP pages just failed with 500 error pages from the Web server. To find out what the problem is this code is more useful at least for debugging: <% ON ERROR RESUME NEXT Set banner = Server.CreateObject("wwBanner.aspBanner") Response.Write(err.Number & " - " & err.Description) banner.BannerFile = "wwsitebanners" Response.Write(banner.GetBanner(-1)) %> which results in: 429 - ActiveX component can't create object which at least gives you a slight clue. In ASP.NET invoking the same COM object with code like this: <% dynamic banner = wwUtils.CreateComInstance("wwBanner.aspBanner") as dynamic; banner.cBANNERFILE = "wwsitebanners"; Response.Write(banner.getBanner(-1)); %> results in: Retrieving the COM class factory for component with CLSID {B5DCBB81-D5F5-11D2-B85E-00600889F23B} failed due to the following error: 80040154 Class not registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG)). The class is in fact registered though and the COM server loads fine from a command prompt or other COM client. This error can be caused by a COM server that doesn't load. It looks like a COM registration error. There are a number of traditional reasons why this error can crop up of course. The server isn't registered (run regserver32 to register a DLL server or /regserver on an EXE server) Access permissions aren't set on the COM server (Web account has to be able to read the DLL ie. Network service) The COM server fails to load during initialization ie. failing during startup One thing I always do to check for COM errors fire up the server in a COM client outside of IIS and ensure that it works there first - it's almost always easier to debug a server outside of the Web environment. In my case I tried the server in Visual FoxPro on the server with: loBanners = CREATEOBJECT("wwBanner.aspBanner") loBanners.cBannerFile = "wwsitebanners" ? loBanners.GetBanner(-1) and it worked just fine. If you don't have a full dev environment on the server you can also use VBScript do the same thing and run the .vbs file from the command prompt: Set banner = Server.CreateObject("wwBanner.aspBanner") banner.BannerFile = "wwsitebanners" MsgBox(banner.getBanner(-1)) Since this both works it tells me the server is registered and working properly. This leaves startup failures or permissions as the problem. I double checked permissions for the Application Pool and the permissions of the folder where the DLL lives and both are properly set to allow access by the Application Pool impersonated user. Just to be sure I assigned an Admin user to the Application Pool but still no go. So now what? 64 bit Servers Ahoy A couple of weeks back I had set up a few of my Application pools to 64 bit mode. 
My server is Server 2008 64 bit and by default Application Pools run 64 bit. Originally when I installed the server I set up most of my Application Pools to 32 bit mainly for backwards compatibility. But as more of my code migrates to 64 bit OS's I figured it'd be a good idea to see how well code runs under 64 bit code. The transition has been mostly painless. Until today when I noticed the problem with the code above when scrolling to my IIS logs and noticing a lot of 500 errors on many of my ASP classic pages. The code in question in most of these pages deals with this single simple COM object. It took a while to figure out that the problem is caused by the Application Pool running in 64 bit mode. The issue is that 32 bit COM objects (ie. my old Visual FoxPro COM component) cannot be loaded in a 64 bit Application Pool. The ASP pages using this COM component broke on the day I switched my main Application Pool into 64 bit mode but I didn't find the problem until I searched my logs for errors by pure chance. To fix this is easy enough once you know what the problem is by switching the Application Pool to Enable 32-bit Applications: Once this is done the COM objects started working correctly again. 64 bit ASP and ASP.NET with DCOM Servers This is kind of off topic, but incidentally it's possible to load 32 bit DCOM (out of process) servers from ASP.NET and ASP classic even if those applications run in 64 bit application pools. In fact, in West Wind Web Connection I use this capability to run a 64 bit ASP.NET handler that talks to a 32 bit FoxPro COM server which allows West Wind Web Connection to run in native 64 bit mode without custom configuration (which is actually quite useful). It's probably not a common usage scenario but it's good to know that you can actually access 32 bit COM objects this way from ASP.NET. For West Wind Web Connection this works out well as the DCOM interface only makes one non-chatty call to the backend server that handles all the rest of the request processing. Application Pool Isolation is your Friend For me the recent incident of failure in the classic ASP pages has just been another reminder to be very careful with moving applications to 64 bit operation. There are many little traps when switching to 64 bit that are very difficult to track and test for. I described one issue I had a couple of months ago where one of the default ASP.NET filters was loading the wrong version (32bit instead of 64bit) which was extremely difficult to track down and was caused by a very sneaky configuration switch error (basically 3 different entries for the same ISAPI filter all with different bitness settings). It took me almost a full day to track this down). Recently I've been taken to isolate individual applications into separate Application Pools rather than my past practice of combining many apps into shared AppPools. This is a good practice assuming you have enough memory to make this work. Application Pool isolate provides more modularity and allows me to selectively move applications to 64 bit. The error above came about precisely because I moved one of my most populous app pools to 64 bit and forgot about the minimal COM object use in some of my old pages. It's easy to forget. To 64bit or Not Is it worth it to move to 64 bit? Currently I'd say -not really. In my - admittedly limited - testing I don't see any significant performance increases. 
In fact 64 bit apps just seem to consume considerably more memory (30-50% more in my pools on average) and performance is minimally improved (less than 5% at the very best) in the load testing I've performed on a couple of sites in both modes. The only real incentive for 64 bit would be applications that require huge data spaces that exceed the 32 bit 4 gigabyte memory limit. However I have a hard time imagining an application that needs 4 gigs of memory in a single Application Pool :-). Curious to hear other opinions on benefits of 64 bit operation. © Rick Strahl, West Wind Technologies, 2005-2011Posted in COM   ASP.NET  FoxPro  
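    For scripted deployments, the same "Enable 32-bit Applications" switch shown in the screenshot can be flipped with appcmd; a hedged example is below, with the application pool name as a placeholder.

```bat
rem Placeholder pool name; run from an elevated command prompt on the server.
%windir%\system32\inetsrv\appcmd set apppool "MyClassicAspPool" /enable32BitAppOnWin64:true

rem Confirm the setting afterwards:
%windir%\system32\inetsrv\appcmd list apppool "MyClassicAspPool" /text:enable32BitAppOnWin64
```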

    Read the article

  • Working with Silverlight DataGrid RowDetailsTemplate

    - by mohanbrij
    In this post I am going to show how we can use the Silverlight DataGrid RowDetails Template, Before I start I assume that you know basics of Silverlight and also know how you create a Silverlight Projects. I have started with the Silverlight Application, and kept all the default options before I created a Silverlight Project. After this I added a Silverlight DataGrid control to my MainForm.xaml page, using the DragDrop feature of Visual Studio IDE, this will help me to add the default namespace and references automatically. Just to give you a quick look of what exactly I am going to do, I will show you in the screen below my final target, before I start explaining rest of my codes. Before I start with the real code, first I have to do some ground work, as I am not getting the data from the DB, so I am creating a class where I will populate the dummy data. EmployeeData.cs public class EmployeeData { public string FirstName { get; set; } public string LastName { get; set; } public string Address { get; set; } public string City { get; set; } public string State { get; set; } public string Country { get; set; } public EmployeeData() { } public List<EmployeeData> GetEmployeeData() { List<EmployeeData> employees = new List<EmployeeData>(); employees.Add ( new EmployeeData { Address = "#407, PH1, Foyer Appartment", City = "Bangalore", Country = "India", FirstName = "Brij", LastName = "Mohan", State = "Karnataka" }); employees.Add ( new EmployeeData { Address = "#332, Dayal Niketan", City = "Jamshedpur", Country = "India", FirstName = "Arun", LastName = "Dayal", State = "Jharkhand" }); employees.Add ( new EmployeeData { Address = "#77, MSR Nagar", City = "Bangalore", Country = "India", FirstName = "Sunita", LastName = "Mohan", State = "Karnataka" }); return employees; } } The above class will give me some sample data, I think this will be good enough to start with the actual code. now I am giving below the XAML code from my MainForm.xaml First I will put the Silverlight DataGrid, <data:DataGrid x:Name="gridEmployee" CanUserReorderColumns="False" CanUserSortColumns="False" RowDetailsVisibilityMode="VisibleWhenSelected" HorizontalAlignment="Center" ScrollViewer.VerticalScrollBarVisibility="Auto" Height="200" AutoGenerateColumns="False" Width="350" VerticalAlignment="Center"> Here, the most important property which I am going to set is RowDetailsVisibilityMode="VisibleWhenSelected" This will display the RowDetails only when we select the desired Row. Other option we have in this is Collapsed and Visible. Which will either make the row details always Visible or Always Collapsed. but to get the real effect I have selected VisibleWhenSelected. Now I am going to put the rest of my XAML code. <data:DataGrid.Columns> <!--Begin FirstName Column--> <data:DataGridTextColumn Width="150" Header="First Name" Binding="{Binding FirstName}"/> <!--End FirstName Column--> <!--Begin LastName Column--> <data:DataGridTextColumn Width="150" Header="Last Name" Binding="{Binding LastName}"/> <!--End LastName Column--> </data:DataGrid.Columns> <data:DataGrid.RowDetailsTemplate> <!-- Begin row details section. --> <DataTemplate> <Border BorderBrush="Black" BorderThickness="1" Background="White"> <Grid> <Grid.ColumnDefinitions> <ColumnDefinition Width="0.2*" /> <ColumnDefinition Width="0.8*" /> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition /> <RowDefinition /> <RowDefinition /> <RowDefinition /> </Grid.RowDefinitions> <!-- Controls are bound to FullAddress properties. 
--> <TextBlock Text="Address : " Grid.Column="0" Grid.Row="0" /> <TextBlock Text="{Binding Address}" Grid.Column="1" Grid.Row="0" /> <TextBlock Text="City : " Grid.Column="0" Grid.Row="1" /> <TextBlock Text="{Binding City}" Grid.Column="1" Grid.Row="1" /> <TextBlock Text="State : " Grid.Column="0" Grid.Row="2" /> <TextBlock Text="{Binding State}" Grid.Column="1" Grid.Row="2" /> <TextBlock Text="Country : " Grid.Column="0" Grid.Row="3" /> <TextBlock Text="{Binding Country}" Grid.Column="1" Grid.Row="3" /> </Grid> </Border> </DataTemplate> <!-- End row details section. --> </data:DataGrid.RowDetailsTemplate>   In the code above, I first declare plain DataGridTextColumn columns for FirstName and LastName, and after that I create the RowDetailsTemplate, where we write the same kind of markup we would normally use to lay out a Grid. Nothing in it is specific to the RowDetailsTemplate; the code inside the template is plain and simple, binding the remaining address columns. And that's it. Once we bind the DataGrid, we are ready to go. In the code below from MainForm.xaml.cs, I am just binding the DataGrid: public partial class MainPage : UserControl { public MainPage() { InitializeComponent(); BindControls(); } private void BindControls() { EmployeeData employees = new EmployeeData(); gridEmployee.ItemsSource = employees.GetEmployeeData(); } } Once you run it, you will see the output shown in the screenshot above. This is only a basic example; it is now up to your creativity and requirements: you can put other controls such as checkboxes, images, or even another DataGrid inside this RowDetailsTemplate. I am attaching my sample source code with this post. I used Silverlight 3 and Visual Studio 2008, but it is fully compatible with Silverlight 4 and Visual Studio 2010; you may just need to upgrade the attached sample. You can download it from here.
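If you later want to let users flip between the two display modes at runtime, the same RowDetailsVisibilityMode property can be set from code. The sketch below is a small, hypothetical addition to the MainPage code-behind shown above; the CheckBox named chkShowAll is an assumption and is not part of the original sample, so wire it up in the XAML however you prefer.

// Hypothetical handlers to add to the MainPage partial class above.
// chkShowAll is an assumed CheckBox declared in the XAML with
// Checked="chkShowAll_Checked" and Unchecked="chkShowAll_Unchecked".
private void chkShowAll_Checked(object sender, RoutedEventArgs e)
{
    // Show the details section for every row.
    gridEmployee.RowDetailsVisibilityMode = DataGridRowDetailsVisibilityMode.Visible;
}

private void chkShowAll_Unchecked(object sender, RoutedEventArgs e)
{
    // Return to the behavior used in this post: details only for the selected row.
    gridEmployee.RowDetailsVisibilityMode = DataGridRowDetailsVisibilityMode.VisibleWhenSelected;
}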

    Read the article

  • Securing an ASP.NET MVC 2 Application

    - by rajbk
    This post attempts to look at some of the methods that can be used to secure an ASP.NET MVC 2 Application called Northwind Traders Human Resources.  The sample code for the project is attached at the bottom of this post. We are going to use a slightly modified Northwind database. The screen capture from SQL server management studio shows the change. I added a new column called Salary, inserted some random salaries for the employees and then turned off AllowNulls.   The reporting relationship for Northwind Employees is shown below.   The requirements for our application are as follows: Employees can see their LastName, FirstName, Title, Address and Salary Employees are allowed to edit only their Address information Employees can see the LastName, FirstName, Title, Address and Salary of their immediate reports Employees cannot see records of non immediate reports.  Employees are allowed to edit only the Salary and Title information of their immediate reports. Employees are not allowed to edit the Address of an immediate report Employees should be authenticated into the system. Employees by default get the “Employee” role. If a user has direct reports, they will also get assigned a “Manager” role. We use a very basic empId/pwd scheme of EmployeeID (1-9) and password test$1. You should never do this in an actual application. The application should protect from Cross Site Request Forgery (CSRF). For example, Michael could trick Steven, who is already logged on to the HR website, to load a page which contains a malicious request. where without Steven’s knowledge, a form on the site posts information back to the Northwind HR website using Steven’s credentials. Michael could use this technique to give himself a raise :-) UI Notes The layout of our app looks like so: When Nancy (EmpID 1) signs on, she sees the default page with her details and is allowed to edit her address. If Nancy attempts to view the record of employee Andrew who has an employeeID of 2 (Employees/Edit/2), she will get a “Not Authorized” error page. When Andrew (EmpID 2) signs on, he can edit the address field of his record and change the title and salary of employees that directly report to him. Implementation Notes All controllers inherit from a BaseController. The BaseController currently only has error handling code. When a user signs on, we check to see if they are in a Manager role. We then create a FormsAuthenticationTicket, encrypt it (including the roles that the employee belongs to) and add it to a cookie. private void SetAuthenticationCookie(int employeeID, List<string> roles) { HttpCookiesSection cookieSection = (HttpCookiesSection) ConfigurationManager.GetSection("system.web/httpCookies"); AuthenticationSection authenticationSection = (AuthenticationSection) ConfigurationManager.GetSection("system.web/authentication"); FormsAuthenticationTicket authTicket = new FormsAuthenticationTicket( 1, employeeID.ToString(), DateTime.Now, DateTime.Now.AddMinutes(authenticationSection.Forms.Timeout.TotalMinutes), false, string.Join("|", roles.ToArray())); String encryptedTicket = FormsAuthentication.Encrypt(authTicket); HttpCookie authCookie = new HttpCookie(FormsAuthentication.FormsCookieName, encryptedTicket); if (cookieSection.RequireSSL || authenticationSection.Forms.RequireSSL) { authCookie.Secure = true; } HttpContext.Current.Response.Cookies.Add(authCookie); } We read this cookie back in Global.asax and set the Context.User to be a new GenericPrincipal with the roles we assigned earlier. 
protected void Application_AuthenticateRequest(Object sender, EventArgs e){ if (Context.User != null) { string cookieName = FormsAuthentication.FormsCookieName; HttpCookie authCookie = Context.Request.Cookies[cookieName]; if (authCookie == null) return; FormsAuthenticationTicket authTicket = FormsAuthentication.Decrypt(authCookie.Value); string[] roles = authTicket.UserData.Split(new char[] { '|' }); FormsIdentity fi = (FormsIdentity)(Context.User.Identity); Context.User = new System.Security.Principal.GenericPrincipal(fi, roles); }} We ensure that a user has permissions to view a record by creating a custom attribute AuthorizeToViewID that inherits from ActionFilterAttribute. public class AuthorizeToViewIDAttribute : ActionFilterAttribute{ IEmployeeRepository employeeRepository = new EmployeeRepository(); public override void OnActionExecuting(ActionExecutingContext filterContext) { if (filterContext.ActionParameters.ContainsKey("id") && filterContext.ActionParameters["id"] != null) { if (employeeRepository.IsAuthorizedToView((int)filterContext.ActionParameters["id"])) { return; } } throw new UnauthorizedAccessException("The record does not exist or you do not have permission to access it"); }} We add the AuthorizeToView attribute to any Action method that requires authorization. [HttpPost][Authorize(Order = 1)]//To prevent CSRF[ValidateAntiForgeryToken(Salt = Globals.EditSalt, Order = 2)]//See AuthorizeToViewIDAttribute class[AuthorizeToViewID(Order = 3)] [ActionName("Edit")]public ActionResult Update(int id){ var employeeToEdit = employeeRepository.GetEmployee(id); if (employeeToEdit != null) { //Employees can edit only their address //A manager can edit the title and salary of their subordinate string[] whiteList = (employeeToEdit.IsSubordinate) ? new string[] { "Title", "Salary" } : new string[] { "Address" }; if (TryUpdateModel(employeeToEdit, whiteList)) { employeeRepository.Save(employeeToEdit); return RedirectToAction("Details", new { id = id }); } else { ModelState.AddModelError("", "Please correct the following errors."); } } return View(employeeToEdit);} The Authorize attribute is added to ensure that only authorized users can execute that Action. We use the TryUpdateModel with a white list to ensure that (a) an employee is able to edit only their Address and (b) that a manager is able to edit only the Title and Salary of a subordinate. This works in conjunction with the AuthorizeToViewIDAttribute. The ValidateAntiForgeryToken attribute is added (with a salt) to avoid CSRF. The Order on the attributes specify the order in which the attributes are executed. The Edit View uses the AntiForgeryToken helper to render the hidden token: ......<% using (Html.BeginForm()) {%><%=Html.AntiForgeryToken(NorthwindHR.Models.Globals.EditSalt)%><%= Html.ValidationSummary(true, "Please correct the errors and try again.") %><div class="editor-label"> <%= Html.LabelFor(model => model.LastName) %></div><div class="editor-field">...... The application uses View specific models for ease of model binding. 
public class EmployeeViewModel{ public int EmployeeID; [Required] [DisplayName("Last Name")] public string LastName { get; set; } [Required] [DisplayName("First Name")] public string FirstName { get; set; } [Required] [DisplayName("Title")] public string Title { get; set; } [Required] [DisplayName("Address")] public string Address { get; set; } [Required] [DisplayName("Salary")] [Range(500, double.MaxValue)] public decimal Salary { get; set; } public bool IsSubordinate { get; set; }} To help with displaying readonly/editable fields, we use a helper method. //Simple extension method to display a TextboxFor or DisplayFor based on the isEditable variablepublic static MvcHtmlString TextBoxOrLabelFor<TModel, TProperty>(this HtmlHelper<TModel> htmlHelper, Expression<Func<TModel, TProperty>> expression, bool isEditable){ if (isEditable) { return htmlHelper.TextBoxFor(expression); } else { return htmlHelper.DisplayFor(expression); }} The helper method is used in the view like so: <%=Html.TextBoxOrLabelFor(model => model.Title, Model.IsSubordinate)%> As mentioned in this post, there is a much easier way to update properties on an object. Download Demo Project VS 2008, ASP.NET MVC 2 RTM Remember to change the connectionString to point to your Northwind DB NorthwindHR.zip Feedback and bugs are always welcome :-)
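One piece the post references but does not list is the sign-in action that calls SetAuthenticationCookie. A minimal sketch of what it could look like is below; LogOnModel is the standard MVC 2 template model, while ValidateUser and GetRolesForEmployee are hypothetical helpers standing in for the empId/password check and the "add Manager when the employee has direct reports" logic described above, so treat this as an outline rather than the code in the download.

[HttpPost]
public ActionResult LogOn(LogOnModel model, string returnUrl)
{
    if (ModelState.IsValid)
    {
        int employeeID;

        // Hypothetical: checks the EmployeeID/password pair described in the requirements.
        if (ValidateUser(model.UserName, model.Password, out employeeID))
        {
            // Hypothetical: returns "Employee", plus "Manager" if the employee has direct reports.
            List<string> roles = GetRolesForEmployee(employeeID);

            // The method shown earlier in this post.
            SetAuthenticationCookie(employeeID, roles);

            return string.IsNullOrEmpty(returnUrl)
                ? (ActionResult)RedirectToAction("Index", "Employees")
                : Redirect(returnUrl);
        }

        ModelState.AddModelError("", "The user name or password is incorrect.");
    }

    return View(model);
}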

    Read the article

  • Maintaining a shared service in an ASP.NET MVC Application

    - by kazimanzurrashid
    Depending on the application sometimes we have to maintain some shared service throughout our application. Let’s say you are developing a multi-blog supported blog engine where both the controller and view must know the currently visiting blog, it’s setting , user information and url generation service. In this post, I will show you how you can handle this kind of case in most convenient way. First, let see the most basic way, we can create our PostController in the following way: public class PostController : Controller { public PostController(dependencies...) { } public ActionResult Index(string blogName, int? page) { BlogInfo blog = blogSerivce.FindByName(blogName); if (blog == null) { return new NotFoundResult(); } IEnumerable<PostInfo> posts = postService.FindPublished(blog.Id, PagingCalculator.StartIndex(page, blog.PostPerPage), blog.PostPerPage); int count = postService.GetPublishedCount(blog.Id); UserInfo user = null; if (HttpContext.User.Identity.IsAuthenticated) { user = userService.FindByName(HttpContext.User.Identity.Name); } return View(new IndexViewModel(urlResolver, user, blog, posts, count, page)); } public ActionResult Archive(string blogName, int? page, ArchiveDate archiveDate) { BlogInfo blog = blogSerivce.FindByName(blogName); if (blog == null) { return new NotFoundResult(); } IEnumerable<PostInfo> posts = postService.FindArchived(blog.Id, archiveDate, PagingCalculator.StartIndex(page, blog.PostPerPage), blog.PostPerPage); int count = postService.GetArchivedCount(blog.Id, archiveDate); UserInfo user = null; if (HttpContext.User.Identity.IsAuthenticated) { user = userService.FindByName(HttpContext.User.Identity.Name); } return View(new ArchiveViewModel(urlResolver, user, blog, posts, count, page, achiveDate)); } public ActionResult Tag(string blogName, string tagSlug, int? page) { BlogInfo blog = blogSerivce.FindByName(blogName); if (blog == null) { return new NotFoundResult(); } TagInfo tag = tagService.FindBySlug(blog.Id, tagSlug); if (tag == null) { return new NotFoundResult(); } IEnumerable<PostInfo> posts = postService.FindPublishedByTag(blog.Id, tag.Id, PagingCalculator.StartIndex(page, blog.PostPerPage), blog.PostPerPage); int count = postService.GetPublishedCountByTag(tag.Id); UserInfo user = null; if (HttpContext.User.Identity.IsAuthenticated) { user = userService.FindByName(HttpContext.User.Identity.Name); } return View(new TagViewModel(urlResolver, user, blog, posts, count, page, tag)); } } As you can see the above code heavily depends upon the current blog and the blog retrieval code is duplicated in all of the action methods, once the blog is retrieved the same blog is passed in the view model. Other than the blog the view also needs the current user and url resolver to render it properly. One way to remove the duplicate blog retrieval code is to create a custom model binder which converts the blog from a blog name and use the blog a parameter in the action methods instead of the string blog name, but it only helps the first half in the above scenario, the action methods still have to pass the blog, user and url resolver etc in the view model. 
Now lets try to improve the the above code, first lets create a new class which would contain the shared services, lets name it as BlogContext: public class BlogContext { public BlogInfo Blog { get; set; } public UserInfo User { get; set; } public IUrlResolver UrlResolver { get; set; } } Next, we will create an interface, IContextAwareService: public interface IContextAwareService { BlogContext Context { get; set; } } The idea is, whoever needs these shared services needs to implement this interface, in our case both the controller and the view model, now we will create an action filter which will be responsible for populating the context: public class PopulateBlogContextAttribute : FilterAttribute, IActionFilter { private static string blogNameRouteParameter = "blogName"; private readonly IBlogService blogService; private readonly IUserService userService; private readonly BlogContext context; public PopulateBlogContextAttribute(IBlogService blogService, IUserService userService, IUrlResolver urlResolver) { Invariant.IsNotNull(blogService, "blogService"); Invariant.IsNotNull(userService, "userService"); Invariant.IsNotNull(urlResolver, "urlResolver"); this.blogService = blogService; this.userService = userService; context = new BlogContext { UrlResolver = urlResolver }; } public static string BlogNameRouteParameter { [DebuggerStepThrough] get { return blogNameRouteParameter; } [DebuggerStepThrough] set { blogNameRouteParameter = value; } } public void OnActionExecuting(ActionExecutingContext filterContext) { string blogName = (string) filterContext.Controller.ValueProvider.GetValue(BlogNameRouteParameter).ConvertTo(typeof(string), Culture.Current); if (!string.IsNullOrWhiteSpace(blogName)) { context.Blog = blogService.FindByName(blogName); } if (context.Blog == null) { filterContext.Result = new NotFoundResult(); return; } if (filterContext.HttpContext.User.Identity.IsAuthenticated) { context.User = userService.FindByName(filterContext.HttpContext.User.Identity.Name); } IContextAwareService controller = filterContext.Controller as IContextAwareService; if (controller != null) { controller.Context = context; } } public void OnActionExecuted(ActionExecutedContext filterContext) { Invariant.IsNotNull(filterContext, "filterContext"); if ((filterContext.Exception == null) || filterContext.ExceptionHandled) { IContextAwareService model = filterContext.Controller.ViewData.Model as IContextAwareService; if (model != null) { model.Context = context; } } } } As you can see we are populating the context in the OnActionExecuting, which executes just before the controllers action methods executes, so by the time our action methods executes the context is already populated, next we are are assigning the same context in the view model in OnActionExecuted method which executes just after we set the  model and return the view in our action methods. Now, lets change the view models so that it implements this interface: public class IndexViewModel : IContextAwareService { // More Codes } public class ArchiveViewModel : IContextAwareService { // More Codes } public class TagViewModel : IContextAwareService { // More Codes } and the controller: public class PostController : Controller, IContextAwareService { public PostController(dependencies...) { } public BlogContext Context { get; set; } public ActionResult Index(int? 
page) { IEnumerable<PostInfo> posts = postService.FindPublished(Context.Blog.Id, PagingCalculator.StartIndex(page, Context.Blog.PostPerPage), Context.Blog.PostPerPage); int count = postService.GetPublishedCount(Context.Blog.Id); return View(new IndexViewModel(posts, count, page)); } public ActionResult Archive(int? page, ArchiveDate archiveDate) { IEnumerable<PostInfo> posts = postService.FindArchived(Context.Blog.Id, archiveDate, PagingCalculator.StartIndex(page, Context.Blog.PostPerPage), Context.Blog.PostPerPage); int count = postService.GetArchivedCount(Context.Blog.Id, archiveDate); return View(new ArchiveViewModel(posts, count, page, achiveDate)); } public ActionResult Tag(string blogName, string tagSlug, int? page) { TagInfo tag = tagService.FindBySlug(Context.Blog.Id, tagSlug); if (tag == null) { return new NotFoundResult(); } IEnumerable<PostInfo> posts = postService.FindPublishedByTag(Context.Blog.Id, tag.Id, PagingCalculator.StartIndex(page, Context.Blog.PostPerPage), Context.Blog.PostPerPage); int count = postService.GetPublishedCountByTag(tag.Id); return View(new TagViewModel(posts, count, page, tag)); } } Now, the last thing where we have to glue everything, I will be using the AspNetMvcExtensibility to register the action filter (as there is no better way to inject the dependencies in action filters). public class RegisterFilters : RegisterFiltersBase { private static readonly Type controllerType = typeof(Controller); private static readonly Type contextAwareType = typeof(IContextAwareService); protected override void Register(IFilterRegistry registry) { TypeCatalog controllers = new TypeCatalogBuilder() .Add(GetType().Assembly) .Include(type => controllerType.IsAssignableFrom(type) && contextAwareType.IsAssignableFrom(type)); registry.Register<PopulateBlogContextAttribute>(controllers); } } Thoughts and Comments?
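Because every view model implements IContextAwareService, shared chrome such as the master page can read the same context without a second lookup. Below is a small hedged sketch of an HtmlHelper extension that does this; the Blog.Name property is an assumption about BlogInfo (the post does not show its members), so substitute whatever the real type exposes.

using System.Web.Mvc;

public static class BlogContextHelpers
{
    // Lets a master page write <%= Html.Encode(Html.CurrentBlogName()) %>
    // without knowing the concrete view model type.
    public static string CurrentBlogName(this HtmlHelper html)
    {
        IContextAwareService model = html.ViewData.Model as IContextAwareService;

        // Name is an assumed property of BlogInfo.
        return (model != null && model.Context != null && model.Context.Blog != null)
                   ? model.Context.Blog.Name
                   : string.Empty;
    }
}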

    Read the article

  • Move penetrating OBB out of another OBB to resolve collision

    - by Milo
    I'm working on collision resolution for my game. I just need a good way to get an object out of another object if it gets stuck. In this case a car. Here is a typical scenario. The red car is in the green object. How do I correctly get it out so the car can slide along the edge of the object as it should. I tried: if(buildings.size() > 0) { Entity e = buildings.get(0); Vector2D vel = new Vector2D(); vel.x = vehicle.getVelocity().x; vel.y = vehicle.getVelocity().y; vel.normalize(); while(vehicle.getRect().overlaps(e.getRect())) { vehicle.setCenter(vehicle.getCenterX() - vel.x * 0.1f, vehicle.getCenterY() - vel.y * 0.1f); } colided = true; } But that does not work too well. Is there some sort of vector I could calculate to use as the vector to move the car away from the object? Thanks Here is my OBB2D class: public class OBB2D { // Corners of the box, where 0 is the lower left. private Vector2D corner[] = new Vector2D[4]; private Vector2D center = new Vector2D(); private Vector2D extents = new Vector2D(); private RectF boundingRect = new RectF(); private float angle; //Two edges of the box extended away from corner[0]. private Vector2D axis[] = new Vector2D[2]; private double origin[] = new double[2]; public OBB2D(Vector2D center, float w, float h, float angle) { set(center,w,h,angle); } public OBB2D(float left, float top, float width, float height) { set(new Vector2D(left + (width / 2), top + (height / 2)),width,height,0.0f); } public void set(Vector2D center,float w, float h,float angle) { Vector2D X = new Vector2D( (float)Math.cos(angle), (float)Math.sin(angle)); Vector2D Y = new Vector2D((float)-Math.sin(angle), (float)Math.cos(angle)); X = X.multiply( w / 2); Y = Y.multiply( h / 2); corner[0] = center.subtract(X).subtract(Y); corner[1] = center.add(X).subtract(Y); corner[2] = center.add(X).add(Y); corner[3] = center.subtract(X).add(Y); computeAxes(); extents.x = w / 2; extents.y = h / 2; computeDimensions(center,angle); } private void computeDimensions(Vector2D center,float angle) { this.center.x = center.x; this.center.y = center.y; this.angle = angle; boundingRect.left = Math.min(Math.min(corner[0].x, corner[3].x), Math.min(corner[1].x, corner[2].x)); boundingRect.top = Math.min(Math.min(corner[0].y, corner[1].y),Math.min(corner[2].y, corner[3].y)); boundingRect.right = Math.max(Math.max(corner[1].x, corner[2].x), Math.max(corner[0].x, corner[3].x)); boundingRect.bottom = Math.max(Math.max(corner[2].y, corner[3].y),Math.max(corner[0].y, corner[1].y)); } public void set(RectF rect) { set(new Vector2D(rect.centerX(),rect.centerY()),rect.width(),rect.height(),0.0f); } // Returns true if other overlaps one dimension of this. private boolean overlaps1Way(OBB2D other) { for (int a = 0; a < axis.length; ++a) { double t = other.corner[0].dot(axis[a]); // Find the extent of box 2 on axis a double tMin = t; double tMax = t; for (int c = 1; c < corner.length; ++c) { t = other.corner[c].dot(axis[a]); if (t < tMin) { tMin = t; } else if (t > tMax) { tMax = t; } } // We have to subtract off the origin // See if [tMin, tMax] intersects [0, 1] if ((tMin > 1 + origin[a]) || (tMax < origin[a])) { // There was no intersection along this dimension; // the boxes cannot possibly overlap. return false; } } // There was no dimension along which there is no intersection. // Therefore the boxes overlap. return true; } //Updates the axes after the corners move. Assumes the //corners actually form a rectangle. 
private void computeAxes() { axis[0] = corner[1].subtract(corner[0]); axis[1] = corner[3].subtract(corner[0]); // Make the length of each axis 1/edge length so we know any // dot product must be less than 1 to fall within the edge. for (int a = 0; a < axis.length; ++a) { axis[a] = axis[a].divide((axis[a].length() * axis[a].length())); origin[a] = corner[0].dot(axis[a]); } } public void moveTo(Vector2D center) { Vector2D centroid = (corner[0].add(corner[1]).add(corner[2]).add(corner[3])).divide(4.0f); Vector2D translation = center.subtract(centroid); for (int c = 0; c < 4; ++c) { corner[c] = corner[c].add(translation); } computeAxes(); computeDimensions(center,angle); } // Returns true if the intersection of the boxes is non-empty. public boolean overlaps(OBB2D other) { if(right() < other.left()) { return false; } if(bottom() < other.top()) { return false; } if(left() > other.right()) { return false; } if(top() > other.bottom()) { return false; } if(other.getAngle() == 0.0f && getAngle() == 0.0f) { return true; } return overlaps1Way(other) && other.overlaps1Way(this); } public Vector2D getCenter() { return center; } public float getWidth() { return extents.x * 2; } public float getHeight() { return extents.y * 2; } public void setAngle(float angle) { set(center,getWidth(),getHeight(),angle); } public float getAngle() { return angle; } public void setSize(float w,float h) { set(center,w,h,angle); } public float left() { return boundingRect.left; } public float right() { return boundingRect.right; } public float bottom() { return boundingRect.bottom; } public float top() { return boundingRect.top; } public RectF getBoundingRect() { return boundingRect; } public boolean overlaps(float left, float top, float right, float bottom) { if(right() < left) { return false; } if(bottom() < top) { return false; } if(left() > right) { return false; } if(top() > bottom) { return false; } return true; } };
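One vector that works well here is the minimum translation vector (MTV) from the separating axis test: project both boxes onto each box's two edge directions, keep the axis with the smallest overlap, and move the car by that overlap along that axis (flipped so it points away from the building) in a single step instead of the while loop above. The sketch below is only an illustration; it uses C# with System.Numerics rather than the Java Vector2D class, and it assumes you can hand it the four corners and the centers of both OBBs, which the OBB2D class above already exposes.

using System;
using System.Numerics;

static class ObbResolver
{
    // Returns the smallest vector that moves box A (the car) out of box B (the building),
    // or Vector2.Zero if the boxes do not overlap at all.
    public static Vector2 MinimumTranslation(Vector2[] cornersA, Vector2 centerA,
                                             Vector2[] cornersB, Vector2 centerB)
    {
        // Candidate axes: the two perpendicular edge directions of each box
        // (for a rectangle these are also its face normals).
        Vector2[] axes =
        {
            Vector2.Normalize(cornersA[1] - cornersA[0]),
            Vector2.Normalize(cornersA[3] - cornersA[0]),
            Vector2.Normalize(cornersB[1] - cornersB[0]),
            Vector2.Normalize(cornersB[3] - cornersB[0])
        };

        float minOverlap = float.MaxValue;
        Vector2 mtvAxis = Vector2.Zero;

        foreach (Vector2 axis in axes)
        {
            Project(cornersA, axis, out float minA, out float maxA);
            Project(cornersB, axis, out float minB, out float maxB);

            float overlap = Math.Min(maxA, maxB) - Math.Max(minA, minB);
            if (overlap <= 0f)
                return Vector2.Zero; // separating axis found, so there is no collision

            if (overlap < minOverlap)
            {
                minOverlap = overlap;
                mtvAxis = axis;
            }
        }

        // Flip the axis if needed so it pushes A away from B.
        if (Vector2.Dot(centerA - centerB, mtvAxis) < 0f)
            mtvAxis = -mtvAxis;

        return mtvAxis * minOverlap;
    }

    private static void Project(Vector2[] corners, Vector2 axis, out float min, out float max)
    {
        min = max = Vector2.Dot(corners[0], axis);
        for (int i = 1; i < corners.Length; i++)
        {
            float p = Vector2.Dot(corners[i], axis);
            if (p < min) min = p;
            if (p > max) max = p;
        }
    }
}

Applying the result is then a single call along the lines of vehicle.setCenter(centerX + mtv.X, centerY + mtv.Y), after which the velocity can be projected onto the wall edge so the car slides along it instead of stopping.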

    Read the article

  • Finding the normal of an OBB face with another OBB penetrating

    - by Milo
    Below is an illustration: I have an OBB in an OBB (see below for OBB2D code if needed). What I need to determine is, what face it is in, and what direction do I point the normal? The goal is to get the OBB out of the OBB so the normal needs to face outward of the OBB. How could I go about: Finding what face the line is penetrating given the 4 corners of the OBB and the class below: if we define dx=x2-x1 and dy=y2-y1, then the normals are (-dy, dx) and (dy, -dx). Which normal points outward of the OBB? Thanks public class OBB2D { // Corners of the box, where 0 is the lower left. private Vector2D corner[] = new Vector2D[4]; private Vector2D center = new Vector2D(); private Vector2D extents = new Vector2D(); private RectF boundingRect = new RectF(); private float angle; //Two edges of the box extended away from corner[0]. private Vector2D axis[] = new Vector2D[2]; private double origin[] = new double[2]; public OBB2D(Vector2D center, float w, float h, float angle) { set(center,w,h,angle); } public OBB2D(float left, float top, float width, float height) { set(new Vector2D(left + (width / 2), top + (height / 2)),width,height,0.0f); } public void set(Vector2D center,float w, float h,float angle) { Vector2D X = new Vector2D( (float)Math.cos(angle), (float)Math.sin(angle)); Vector2D Y = new Vector2D((float)-Math.sin(angle), (float)Math.cos(angle)); X = X.multiply( w / 2); Y = Y.multiply( h / 2); corner[0] = center.subtract(X).subtract(Y); corner[1] = center.add(X).subtract(Y); corner[2] = center.add(X).add(Y); corner[3] = center.subtract(X).add(Y); computeAxes(); extents.x = w / 2; extents.y = h / 2; computeDimensions(center,angle); } private void computeDimensions(Vector2D center,float angle) { this.center.x = center.x; this.center.y = center.y; this.angle = angle; boundingRect.left = Math.min(Math.min(corner[0].x, corner[3].x), Math.min(corner[1].x, corner[2].x)); boundingRect.top = Math.min(Math.min(corner[0].y, corner[1].y),Math.min(corner[2].y, corner[3].y)); boundingRect.right = Math.max(Math.max(corner[1].x, corner[2].x), Math.max(corner[0].x, corner[3].x)); boundingRect.bottom = Math.max(Math.max(corner[2].y, corner[3].y),Math.max(corner[0].y, corner[1].y)); } public void set(RectF rect) { set(new Vector2D(rect.centerX(),rect.centerY()),rect.width(),rect.height(),0.0f); } // Returns true if other overlaps one dimension of this. private boolean overlaps1Way(OBB2D other) { for (int a = 0; a < axis.length; ++a) { double t = other.corner[0].dot(axis[a]); // Find the extent of box 2 on axis a double tMin = t; double tMax = t; for (int c = 1; c < corner.length; ++c) { t = other.corner[c].dot(axis[a]); if (t < tMin) { tMin = t; } else if (t > tMax) { tMax = t; } } // We have to subtract off the origin // See if [tMin, tMax] intersects [0, 1] if ((tMin > 1 + origin[a]) || (tMax < origin[a])) { // There was no intersection along this dimension; // the boxes cannot possibly overlap. return false; } } // There was no dimension along which there is no intersection. // Therefore the boxes overlap. return true; } //Updates the axes after the corners move. Assumes the //corners actually form a rectangle. private void computeAxes() { axis[0] = corner[1].subtract(corner[0]); axis[1] = corner[3].subtract(corner[0]); // Make the length of each axis 1/edge length so we know any // dot product must be less than 1 to fall within the edge. 
for (int a = 0; a < axis.length; ++a) { axis[a] = axis[a].divide((axis[a].length() * axis[a].length())); origin[a] = corner[0].dot(axis[a]); } } public void moveTo(Vector2D center) { Vector2D centroid = (corner[0].add(corner[1]).add(corner[2]).add(corner[3])).divide(4.0f); Vector2D translation = center.subtract(centroid); for (int c = 0; c < 4; ++c) { corner[c] = corner[c].add(translation); } computeAxes(); computeDimensions(center,angle); } // Returns true if the intersection of the boxes is non-empty. public boolean overlaps(OBB2D other) { if(right() < other.left()) { return false; } if(bottom() < other.top()) { return false; } if(left() > other.right()) { return false; } if(top() > other.bottom()) { return false; } if(other.getAngle() == 0.0f && getAngle() == 0.0f) { return true; } return overlaps1Way(other) && other.overlaps1Way(this); } public Vector2D getCenter() { return center; } public float getWidth() { return extents.x * 2; } public float getHeight() { return extents.y * 2; } public void setAngle(float angle) { set(center,getWidth(),getHeight(),angle); } public float getAngle() { return angle; } public void setSize(float w,float h) { set(center,w,h,angle); } public float left() { return boundingRect.left; } public float right() { return boundingRect.right; } public float bottom() { return boundingRect.bottom; } public float top() { return boundingRect.top; } public RectF getBoundingRect() { return boundingRect; } public boolean overlaps(float left, float top, float right, float bottom) { if(right() < left) { return false; } if(bottom() < top) { return false; } if(left() > right) { return false; } if(top() > bottom) { return false; } return true; } };
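For the outward direction, a common test is to compare each candidate normal against the vector from the box centre to the edge's midpoint: the normal whose dot product with that vector is positive points out of the box, regardless of corner winding. A small sketch of that test is below; as with the other post, it is written in C# with System.Numerics purely for illustration, not against the Java Vector2D class.

using System.Numerics;

static class EdgeNormals
{
    // Given one edge (p1 -> p2) of a box and the box centre,
    // returns the unit normal of that edge that points away from the box.
    public static Vector2 OutwardNormal(Vector2 p1, Vector2 p2, Vector2 boxCenter)
    {
        Vector2 edge = p2 - p1;

        // The two candidates from the question: (-dy, dx) and (dy, -dx).
        Vector2 n1 = new Vector2(-edge.Y, edge.X);
        Vector2 n2 = new Vector2(edge.Y, -edge.X);

        // The outward normal points roughly the same way as (edge midpoint - centre).
        Vector2 toEdge = (p1 + p2) / 2f - boxCenter;

        return Vector2.Dot(n1, toEdge) >= 0f
            ? Vector2.Normalize(n1)
            : Vector2.Normalize(n2);
    }
}

Since OBB2D builds corner[0] through corner[3] in a fixed order, the winding is consistent, so the choice ends up being the same formula for every edge; the dot-product test simply saves you from working out which one by hand.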

    Read the article

  • How to Easily Put a Windows PC into Kiosk Mode With Assigned Access

    - by Chris Hoffman
    Windows 8.1′s Assigned Access feature allows you to easily lock a Windows PC to a single application, such as a web browser. This feature makes it easy for anyone to configure Windows 8.1 devices as point-of-sale or other kiosk systems. In the past, setting up a Windows PC in kiosk mode involved much more work, requiring the use of third-party software, group policy, or Linux distributions designed around kiosk mode. Assigned Access is available on Windows 8.1 RT, Windows 8.1 Professional, and Windows 8.1 Enterprise. The standard edition of Windows 8.1 doesn’t support Assigned Access. Create a User Account for Assigned Access Rather than turn your entire computer into a locked-down kiosk system, Assigned Access allows you to create a separate user account that can only launch a single app — such as a web browser. To set this up, you must be logged into Windows as a user with administrator permissions. First, open the PC settings app — swipe in from the right or press Windows Key + C to open the charms bar, tap Settings, and tap Change PC settings. In the PC settings app, select Accounts and select Other accounts. Use the Add an account button to create a new Windows account. Select  the “Sign in without a Microsoft account” option and select Local account to create a local user account. You could also create a Microsoft account, but you may not want to do this if you just want a locked-down account with only browser access. If you need to install apps from the Windows Store to use in Assigned Access mode, you’ll have to set up a Microsoft account instead of a local account. A local account will still allow you access to the preinstalled apps, such as Internet Explorer. You may want to create a user account with a blank password. This would make it simple for anyone to access kiosk mode, even if the system becomes locked or needs to be rebooted. The account will be created as a standard user account with limited permissions. Leave it as a standard user account — don’t make it an administrator account. Set Up Assigned Access Once you’ve created an account, you’ll first need to sign into it. If you don’t, you’ll see a “This account has no apps” message when trying to enable Assigned Access. Go back to the welcome screen, log in to the new account you created, and allow Windows to go through the first-time account setup process. If you want to use a non-default app in kiosk mode, install it while logged in as that user account. Once you’re done, log out of the other account, log back in as your administrator account, and go back to the Other accounts screen. Click the Set up an account for assigned access option to continue. Select the user account you created and select the app you want to limit the account to. For a web-based kiosk, this can be a web browser such as the Modern version of Internet Explorer. Businesses can also create their own Modern apps and set them to run in kiosk mode in this way. Note that Microsoft’s documentation says “web browsers are not good choices for assigned access” because they require more permissions than average Modern (or “Windows Store”) apps. However, if you want to provide a kiosk for web-browsing, using Assigned Access is a much better option than using Guest Mode and offering up a full Windows desktop. When you’re done, restart your PC and log in as the Assigned Access account. Windows will automatically open the app you chose and won’t allow a user to leave that app. 
Standard Windows 8 features like the charms bar, app switcher, and Start screen won’t appear. Pressing the Windows key once will do nothing. To sign out of Assigned Access mode, press the Windows key five times — quickly — while signed in. You’ll be sent back to the standard login screen. The account will actually still be logged in and the app will remain running — this method just “locks” the screen and allows another user to log in. Automatically Log Into Assigned Access Whenever your Windows device boots, you can log into the Assigned Access account and turn it into a kiosk system. While this isn’t ideal for all kiosk systems, you may want the device to automatically launch the specific app when it boots without requiring any login process. To do so, you’ll just need to have Windows automatically log into the Assigned Access account when it boots. This option is hidden and not available in the standard Control Panel. You’ll need to use the hidden netplwiz Control Panel tool to set up automatic login on boot. If you didn’t create a password for the user account, leave the Password field empty while configuring this. Security Considerations If you’re using this feature to turn a Windows 8.1 system into a kiosk and leaving it open to the public, remember to consider security. Anyone could come up to the system, press the Windows key five times, and try to log into your standard administrator user account. Ensure the administrator user account has a strong password so people won’t be able to get past the kiosk system’s limitations and tamper with the system. Even Windows 8′s detractors have to admit that it’s an ideal system for a touch-screen kiosk device, running either a browser or another specific application. Assigned Access finally makes this easy to set up on Windows systems in the real world — no IT experience, third-party software, or Linux distributions necessary.     

    Read the article

  • Improving Manageability of Virtual Environments

    - by Jeff Victor
    Boot Environments for Solaris 10 Branded Zones Until recently, Solaris 10 Branded Zones on Solaris 11 suffered one notable regression: Live Upgrade did not work. The individual packaging and patching tools work correctly, but the ability to upgrade Solaris while the production workload continued running did not exist. A recent Solaris 11 SRU (Solaris 11.1 SRU 6.4) restored most of that functionality, although with a slightly different concept, different commands, and without all of the feature details. This new method gives you the ability to create and manage multiple boot environments (BEs) for a Solaris 10 Branded Zone, and modify the active or any inactive BE, and to do so while the production workload continues to run. Background In case you are new to Solaris: Solaris includes a set of features that enables you to create a bootable Solaris image, called a Boot Environment (BE). This newly created image can be modified while the original BE is still running your workload(s). There are many benefits, including improved uptime and the ability to reboot into (or downgrade to) an older BE if a newer one has a problem. In Solaris 10 this set of features was named Live Upgrade. Solaris 11 applies the same basic concepts to the new packaging system (IPS) but there isn't a specific name for the feature set. The features are simply part of IPS. Solaris 11 Boot Environments are not discussed in this blog entry. Although a Solaris 10 system can have multiple BEs, until recently a Solaris 10 Branded Zone (BZ) in a Solaris 11 system did not have this ability. This limitation was addressed recently, and that enhancement is the subject of this blog entry. This new implementation uses two concepts. The first is the use of a ZFS clone for each BE. This makes it very easy to create a BE, or many BEs. This is a distinct advantage over the Live Upgrade feature set in Solaris 10, which had a practical limitation of two BEs on a system, when using UFS. The second new concept is a very simple mechanism to indicate the BE that should be booted: a ZFS property. The new ZFS property is named com.oracle.zones.solaris10:activebe (isn't that creative? ). It's important to note that the property is inherited from the original BE's file system to any BEs you create. In other words, all BEs in one zone have the same value for that property. When the (Solaris 11) global zone boots the Solaris 10 BZ, it boots the BE that has the name that is stored in the activebe property. Here is a quick summary of the actions you can use to manage these BEs: To create a BE: Create a ZFS clone of the zone's root dataset To activate a BE: Set the ZFS property of the root dataset to indicate the BE To add a package or patch to an inactive BE: Mount the inactive BE Add packages or patches to it Unmount the inactive BE To list the available BEs: Use the "zfs list" command. To destroy a BE: Use the "zfs destroy" command. Preparation Before you can use the new features, you will need a Solaris 10 BZ on a Solaris 11 system. You can use these three steps - on a real Solaris 11.1 server or in a VirtualBox guest running Solaris 11.1 - to create a Solaris 10 BZ. The Solaris 11.1 environment must be at SRU 6.4 or newer. Create a flash archive on the Solaris 10 system s10# flarcreate -n s10-system /net/zones/archives/s10-system.flar Configure the Solaris 10 BZ on the Solaris 11 system s11# zonecfg -z s10z Use 'create' to begin configuring a new zone. 
zonecfg:s10z create -t SYSsolaris10 zonecfg:s10z set zonepath=/zones/s10z zonecfg:s10z exit s11# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - s10z configured /zones/s10z solaris10 excl Install the zone from the flash archive s11# zoneadm -z s10z install -a /net/zones/archives/s10-system.flar -p You can find more information about the migration of Solaris 10 environments to Solaris 10 Branded Zones in the documentation. The rest of this blog entry demonstrates the commands you can use to accomplish the aforementioned actions related to BEs. New features in action Note that the demonstration of the commands occurs in the Solaris 10 BZ, as indicated by the shell prompt "s10z# ". Many of these commands can be performed in the global zone instead, if you prefer. If you perform them in the global zone, you must change the ZFS file system names. Create The only complicated action is the creation of a BE. In the Solaris 10 BZ, create a new "boot environment" - a ZFS clone. You can assign any name to the final portion of the clone's name, as long as it meets the requirements for a ZFS file system name. s10z# zfs snapshot rpool/ROOT/zbe-0@snap s10z# zfs clone -o mountpoint=/ -o canmount=noauto rpool/ROOT/zbe-0@snap rpool/ROOT/newBE cannot mount 'rpool/ROOT/newBE' on '/': directory is not empty filesystem successfully created, but not mounted You can safely ignore that message: we already know that / is not empty! We have merely told ZFS that the default mountpoint for the clone is the root directory. List the available BEs and active BE Because each BE is represented by a clone of the rpool/ROOT dataset, listing the BEs is as simple as listing the clones. s10z# zfs list -r rpool/ROOT NAME USED AVAIL REFER MOUNTPOINT rpool/ROOT 3.55G 42.9G 31K legacy rpool/ROOT/zbe-0 1K 42.9G 3.55G / rpool/ROOT/newBE 3.55G 42.9G 3.55G / The output shows that two BEs exist. Their names are "zbe-0" and "newBE". You can tell Solaris that one particular BE should be used when the zone next boots by using a ZFS property. Its name is com.oracle.zones.solaris10:activebe. The value of that property is the name of the clone that contains the BE that should be booted. s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT NAME PROPERTY VALUE SOURCE rpool/ROOT com.oracle.zones.solaris10:activebe zbe-0 local Change the active BE When you want to change the BE that will be booted next time, you can just change the activebe property on the rpool/ROOT dataset. s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT NAME PROPERTY VALUE SOURCE rpool/ROOT com.oracle.zones.solaris10:activebe zbe-0 local s10z# zfs set com.oracle.zones.solaris10:activebe=newBE rpool/ROOT s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT NAME PROPERTY VALUE SOURCE rpool/ROOT com.oracle.zones.solaris10:activebe newBE local s10z# shutdown -y -g0 -i6 After the zone has rebooted: s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT rpool/ROOT com.oracle.zones.solaris10:activebe newBE local s10z# zfs mount rpool/ROOT/newBE / rpool/export /export rpool/export/home /export/home rpool /rpool Mount the original BE to see that it's still there. s10z# zfs mount -o mountpoint=/mnt rpool/ROOT/zbe-0 s10z# ls /mnt Desktop export platform Documents export.backup.20130607T214951Z proc S10Flar home rpool TT_DB kernel sbin bin lib system boot lost+found tmp cdrom mnt usr dev net var etc opt Patch an inactive BE At this point, you can modify the original BE. 
If you would prefer to modify the new BE, you can restore the original value to the activebe property and reboot, and then mount the new BE to /mnt (or another empty directory) and modify it. Let's mount the original BE so we can modify it. (The first command is only needed if you haven't already mounted that BE.) s10z# zfs mount -o mountpoint=/mnt rpool/ROOT/zbe-0 s10z# patchadd -R /mnt -M /var/sadm/spool 104945-02 Note that the typical usage will be: Create a BE Mount the new (inactive) BE Use the package and patch tools to update the new BE Unmount the new BE Reboot Delete an inactive BE ZFS clones are children of their parent file systems. In order to destroy the parent, you must first "promote" the child. This reverses the parent-child relationship. (For more information on this, see the documentation.) The original rpool/ROOT file system is the parent of the clones that you create as BEs. In order to destroy an earlier BE that is that parent of other BEs, you must first promote one of the child BEs to be the ZFS parent. Only then can you destroy the original BE. Fortunately, this is easier to do than to explain: s10z# zfs promote rpool/ROOT/newBE s10z# zfs destroy rpool/ROOT/zbe-0 s10z# zfs list -r rpool/ROOT NAME USED AVAIL REFER MOUNTPOINT rpool/ROOT 3.56G 269G 31K legacy rpool/ROOT/newBE 3.56G 269G 3.55G / Documentation This feature is so new, it is not yet described in the Solaris 11 documentation. However, MOS note 1558773.1 offers some details. Conclusion With this new feature, you can add and patch packages to boot environments of a Solaris 10 Branded Zone. This ability improves the manageability of these zones, and makes their use more practical. It also means that you can use the existing P2V tools with earlier Solaris 10 updates, and modify the environments after they become Solaris 10 Branded Zones.

    Read the article

  • How to store generated eigen faces for future face recognition?

    - by user3237134
    My code works in the following manner: 1.First, it obtains several images from the training set 2.After loading these images, we find the normalized faces,mean face and perform several calculation. 3.Next, we ask for the name of an image we want to recognize 4.We then project the input image into the eigenspace, and based on the difference from the eigenfaces we make a decision. 5.Depending on eigen weight vector for each input image we make clusters using kmeans command. Source code i tried: clear all close all clc % number of images on your training set. M=1200; %Chosen std and mean. %It can be any number that it is close to the std and mean of most of the images. um=60; ustd=32; %read and show images(bmp); S=[]; %img matrix for i=1:M str=strcat(int2str(i),'.jpg'); %concatenates two strings that form the name of the image eval('img=imread(str);'); [irow icol d]=size(img); % get the number of rows (N1) and columns (N2) temp=reshape(permute(img,[2,1,3]),[irow*icol,d]); %creates a (N1*N2)x1 matrix S=[S temp]; %X is a N1*N2xM matrix after finishing the sequence %this is our S end %Here we change the mean and std of all images. We normalize all images. %This is done to reduce the error due to lighting conditions. for i=1:size(S,2) temp=double(S(:,i)); m=mean(temp); st=std(temp); S(:,i)=(temp-m)*ustd/st+um; end %show normalized images for i=1:M str=strcat(int2str(i),'.jpg'); img=reshape(S(:,i),icol,irow); img=img'; end %mean image; m=mean(S,2); %obtains the mean of each row instead of each column tmimg=uint8(m); %converts to unsigned 8-bit integer. Values range from 0 to 255 img=reshape(tmimg,icol,irow); %takes the N1*N2x1 vector and creates a N2xN1 matrix img=img'; %creates a N1xN2 matrix by transposing the image. % Change image for manipulation dbx=[]; % A matrix for i=1:M temp=double(S(:,i)); dbx=[dbx temp]; end %Covariance matrix C=A'A, L=AA' A=dbx'; L=A*A'; % vv are the eigenvector for L % dd are the eigenvalue for both L=dbx'*dbx and C=dbx*dbx'; [vv dd]=eig(L); % Sort and eliminate those whose eigenvalue is zero v=[]; d=[]; for i=1:size(vv,2) if(dd(i,i)>1e-4) v=[v vv(:,i)]; d=[d dd(i,i)]; end end %sort, will return an ascending sequence [B index]=sort(d); ind=zeros(size(index)); dtemp=zeros(size(index)); vtemp=zeros(size(v)); len=length(index); for i=1:len dtemp(i)=B(len+1-i); ind(i)=len+1-index(i); vtemp(:,ind(i))=v(:,i); end d=dtemp; v=vtemp; %Normalization of eigenvectors for i=1:size(v,2) %access each column kk=v(:,i); temp=sqrt(sum(kk.^2)); v(:,i)=v(:,i)./temp; end %Eigenvectors of C matrix u=[]; for i=1:size(v,2) temp=sqrt(d(i)); u=[u (dbx*v(:,i))./temp]; end %Normalization of eigenvectors for i=1:size(u,2) kk=u(:,i); temp=sqrt(sum(kk.^2)); u(:,i)=u(:,i)./temp; end % show eigenfaces; for i=1:size(u,2) img=reshape(u(:,i),icol,irow); img=img'; img=histeq(img,255); end % Find the weight of each face in the training set. omega = []; for h=1:size(dbx,2) WW=[]; for i=1:size(u,2) t = u(:,i)'; WeightOfImage = dot(t,dbx(:,h)'); WW = [WW; WeightOfImage]; end omega = [omega WW]; end % Acquire new image % Note: the input image must have a bmp or jpg extension. % It should have the same size as the ones in your training set. 
% It should be placed on your desktop ed_min=[]; srcFiles = dir('G:\newdatabase\*.jpg'); % the folder in which ur images exists for b = 1 : length(srcFiles) filename = strcat('G:\newdatabase\',srcFiles(b).name); Imgdata = imread(filename); InputImage=Imgdata; InImage=reshape(permute((double(InputImage)),[2,1,3]),[irow*icol,1]); temp=InImage; me=mean(temp); st=std(temp); temp=(temp-me)*ustd/st+um; NormImage = temp; Difference = temp-m; p = []; aa=size(u,2); for i = 1:aa pare = dot(NormImage,u(:,i)); p = [p; pare]; end InImWeight = []; for i=1:size(u,2) t = u(:,i)'; WeightOfInputImage = dot(t,Difference'); InImWeight = [InImWeight; WeightOfInputImage]; end noe=numel(InImWeight); % Find Euclidean distance e=[]; for i=1:size(omega,2) q = omega(:,i); DiffWeight = InImWeight-q; mag = norm(DiffWeight); e = [e mag]; end ed_min=[ed_min MinimumValue]; theta=6.0e+03; %disp(e) z(b,:)=InImWeight; end IDX = kmeans(z,5); clustercount=accumarray(IDX, ones(size(IDX))); disp(clustercount); QUESTIONS: 1.It is working fine for M=50(i.e Training set contains 50 images) but not for M=1200(i.e Training set contains 1200 images).It is not showing any error.There is no output.I waited for 10 min still there is no output. I think it is going infinite loop.What is the problem?Where i was wrong? 2.Instead of running the training set everytime how eigen faces generated are stored so that stored eigen faces are used for future face recoginition for a new input image.So it reduces wastage of time.

    Read the article

  • Get your TFS 2012 task board demo ready in under 1 minute

    - by Tarun Arora
    Release Notes – http://tfsdemosetup.codeplex.com/  | Download | Source Code | Report a Bug | Ideas In this blog post, I’ll show you how to use the ‘TfsDemoSetup’ application to configure and setup the TFS 2012 task board for a demo in well less than 1 minute Step 1 – Note what you get with a newly created Team Project Create a new Team Project on TFS Preview         2. Click Create Project         3. The project creation has completed        4. Open the team web access and have a look at the home page Note – Since I created the project I am the only Team Member       A default Team by the name AdventureWorks Team has been created       A few sprints have been assigned to the default team but no dates for sprint start and end have been specified        A default Area Path for the team is missing       Step 2. Download the TFS Demo Setup Console application from Codeplex 1. Navigate to the TFS Demo Setup project on codeplex https://tfsdemosetup.codeplex.com/       2. Download Instructions and TFSDemo_<version>      3. Follow the steps in the Instructions.txt file      4. Unzip TFSDemo_<version> and open the target folder. Two important files in this folder, DemoDictionary.xml – This file contains the settings using which the demo environment will be setup SetupTfsDemo.exe – This will run the TFS demo environment setup application       Step 3 – Configure the setup (i.e. team name, members, sprint dates, etc) 1. Open up DemoDictionary.xml      2. Walkthrough DemoDictionary.xml             a. Basic Team Details         <Name> – Specify the name of the team         <Description> – Specify a description to go with the team         <SetAsDefaultTeam> – This accepts a value “true/false” when set to true, the newly created team will be set as the default team in the project         <BacklogIterationPath> – Specify a backlog iteration path for the team     b. Iterations – The iterations you specify here will be set as the Teams iterations        <Iterations> – Accepts multiple <Iteration> nodes.        <Iteration> – This is the most granular level of an Iteration        <Path> – The path to the sprint, sample values, Release 1\Sprint 1 or Release 2\Sprint 2        <StartDate> – The sprint start date, this accepts the format yyyy-MM-dd        <FinishDate> – The sprint finish date, this accepts the format yyyy-MM-dd     c. Team Members – Team Members that need to be added to the newly created team will be added under this section         <TeamMembers> – Accepts multiple <TeamMember> nodes.         <TeamMember> – This is the most granular level of a Team Member         <User> – This accepts the username, if you are running this against TFSPreview then the live id of the user will need to be passed. If you are running this against TFS Server then the user id i.e. Domain\UserName will need to be passed          <Team> – Specify the name of the team that you want the user to be assigned to.     d. WorkItems – This section will allow you to add work items (product backlog Items and linked tasks) to the current sprint of the team         <WorkItems> – Accepts multiple <WorkItem> nodes.         <WorkItem> – Accepts one <ProductBacklogItem> and multiple <Task> nodes         <ProductBacklogItem> – Used to create a Product Backlog Item type work item               <Title> – The title of the Product Backlog Item               <Description> – The description of the Product Backlog Type Work Item               <AssignedTo> – Used to assign the work item to a team member. 
The team member name or email address can be passed.               <Effort> – The total effort required to complete the Product Backlog Item         <Task> – Used to create a linked task to the Product Backlog type work item               <Title> – The title of the task type work item               <Description> – The description of the Task Type Work Item               <AssignedTo> – Used to assign the work item to a team member. The team member name or email address can be passed.               <RemainingWork> – The remaining effort to complete the task type work item Step 4 – Setup the demo environment against the newly created Team Project 1. Run SetupTfsDemo.exe    2. Enter Y or y on the prompt to continue setting up TFS Demo setup.     3. Select the newly created Team project, for this blogpost I had created the Team Project – AdventureWorks, so that is what I’ll select in the Connect to TFS Server pop up    3. Click Connect and follow the messages that are written to the console application       Step 5 – Validate that the Demo environment is set up as per the configuration 1. The team web access is all lit up You have a Sprint, a burn down chart, team members…    2. The team Demo has been added and has been set up as the default team    3. The Sprint Backlog Iteration path, Sprints and Sprint start and finish dates have been set    4. The default area path has been setup    5. Taskboard – Backlog items view    6. Taskboard – Team members view      Step 6 – Exception Handling! 1. This solution has been tested against TFS 2012 Service/Server for the Scrum 2.1 process template. 2. You are likely to run into an exception if you mess up the config file 3. If the team already exists and you run the console app to set up the team (with the same name) you will run into exceptions. Please remember this is just an alpha release, if you have any feedback please leave a comment! Didn’t I say that it would just take 1 minute, Enjoy!
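If you find yourself producing DemoDictionary.xml for a lot of demo teams, it can also be generated rather than hand-edited. The sketch below builds a minimal file with LINQ to XML using only the elements described in Step 3; note that the root element name and the exact nesting are assumptions inferred from that walkthrough (compare with the DemoDictionary.xml in the download before relying on it), and every value shown is just an illustrative placeholder.

using System;
using System.Xml.Linq;

class DemoDictionaryBuilder
{
    static void Main()
    {
        // Element names follow Step 3 above; the root element and nesting are assumed,
        // and all values are placeholders for your own team, sprints and members.
        var doc = new XDocument(
            new XElement("DemoDictionary",
                new XElement("Name", "Demo"),
                new XElement("Description", "Demo team created by TfsDemoSetup"),
                new XElement("SetAsDefaultTeam", "true"),
                new XElement("BacklogIterationPath", "Release 1"),
                new XElement("Iterations",
                    new XElement("Iteration",
                        new XElement("Path", @"Release 1\Sprint 1"),
                        new XElement("StartDate", "2012-10-01"),
                        new XElement("FinishDate", "2012-10-14"))),
                new XElement("TeamMembers",
                    new XElement("TeamMember",
                        new XElement("User", "someone@example.com"),
                        new XElement("Team", "Demo"))),
                new XElement("WorkItems",
                    new XElement("WorkItem",
                        new XElement("ProductBacklogItem",
                            new XElement("Title", "Sample backlog item"),
                            new XElement("Description", "Seeded for the demo"),
                            new XElement("AssignedTo", "someone@example.com"),
                            new XElement("Effort", 5)),
                        new XElement("Task",
                            new XElement("Title", "Sample task"),
                            new XElement("Description", "Seeded for the demo"),
                            new XElement("AssignedTo", "someone@example.com"),
                            new XElement("RemainingWork", 3))))));

        doc.Save("DemoDictionary.xml");
        Console.WriteLine("DemoDictionary.xml written.");
    }
}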

    Read the article

  • MapRedux - PowerShell and Big Data

    - by Dittenhafer Solutions
MapRedux – #PowerShell and #Big Data

Have you been hearing about “big data”, “map reduce” and other large-scale computing terms over the past couple of years and been curious to dig into more detail? Have you read some of the Apache Hadoop online documentation and unfortunately concluded that it wasn't feasible to set up a “test” Hadoop environment on your machine? More recently, I have read about some of Microsoft's work to enable Hadoop on the Azure cloud. Being a "Microsoft"-leaning technologist, I am more inclined to be successful with experimentation when on the Windows platform. Of course, it is not that I am "religious" about one set of technologies over another, but rather that I am more experienced with one.

Anyway, within the past couple of weeks I have been thinking about PowerShell a bit more as the 2012 PowerShell Scripting Games approach, and it occurred to me that PowerShell's support for Windows Remote Management (WinRM) and some other inherent features of PowerShell might lend themselves particularly well to a simple implementation of the MapReduce framework. I fired up my PowerShell ISE and started writing just to see where it would take me. Quite simply, the ScriptBlock feature combined with the ability of Invoke-Command to create remote jobs on networked servers provides much of the plumbing of a distributed computing environment. There are some limiting factors, of course. Microsoft provided some default settings which prevent PowerShell from taking over a network without administrative approval first. But even with just one adjustment, a given Windows-based machine can become a node in a MapReduce-style distributed computing environment.

OK, so enough introduction. Let's talk about the code. First, any machine that will participate as a remote "node" will need WinRM enabled for remote access, as shown below. This is not exactly practical for hundreds of intended nodes, but for one (or five) machines in a test environment it does just fine.

C:\> winrm quickconfig
WinRM is not set up to receive requests on this machine.
The following changes must be made:
    Set the WinRM service type to auto start.
    Start the WinRM service.
Make these changes [y/n]? y

Alternatively, you could take the approach described in the Remotely enable PSRemoting post from the TechNet forum and use PowerShell to create remote scheduled tasks that will call Enable-PSRemoting on each intended node.

Invoke-MapRedux

Moving on, now that you have one or more remote "nodes" enabled, you can consider the actual Map and Reduce algorithms. Consider the following snippet:

$MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose

Invoke-MapRedux takes an instance of a MapReduceItem which references the Map and Reduce scriptblocks, an array of computer names which are the remote nodes, and the initial data set to be processed. As simple as that, you can start working with concepts of big data and the MapReduce paradigm.

Now, how did we get there? I have published the initial version of my PsMapRedux PowerShell module on GitHub. The PsMapRedux module provides the Invoke-MapRedux function described above. Feel free to browse the underlying code and even contribute to the project! In a later post, I plan to show some of the inner workings of the module, but for now let's move on to how the Map and Reduce functions are defined.

Map

Both the Map and Reduce functions need to follow a prescribed prototype. The prototype for a Map function in the MapRedux module is a simple scriptblock that takes one PsObject parameter and returns a hashtable. It is important to note that the PsObject $dataset parameter is a MapRedux custom object that has a "Data" property which offers an array of data to be processed by the Map function.

$aMap =
{
    Param
    (
        [PsObject] $dataset
    )

    # Indicate the job is running on the remote node.
    Write-Host ($env:computername + "::Map");

    # The hashtable to return
    $list = @{};

    # ... Perform the mapping work and prepare the $list hashtable result with your custom PSObject...
    # ... The $dataset has a single 'Data' property which contains an array of data rows
    #     which is a subset of the originally submitted data set.

    # Return the hashtable (Key, PSObject)
    Write-Output $list;
}

Reduce

Likewise, the Reduce function must follow a simple prototype that takes a $key and a result $dataset from the MapRedux partitioning function (which joins the Map results by key). Again, the $dataset is a MapRedux custom object that has a "Data" property as described in the Map section.

$aReduce =
{
    Param
    (
        [object] $key,
        [PSObject] $dataset
    )

    Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count)

    # The hashtable to return
    $redux = @{};

    # Return
    Write-Output $redux;
}

All Together Now

When everything is put together in a short example script, you implement your Map and Reduce functions, query for some starting data, build the MapReduxItem via New-MapReduxItem and call Invoke-MapRedux to get the process started:

# Import the MapRedux and SQL Server providers
Import-Module "MapRedux"
Import-Module "sqlps" -DisableNameChecking

# Query the database for a dataset
Set-Location SQLSERVER:\sql\dbserver1\default\databases\myDb
$query = "SELECT MyKey, Date, Value1 FROM BigData ORDER BY MyKey";
Write-Host "Query: $query"
$dataset = Invoke-SqlCmd -query $query

# Build the Map function
$MyMap =
{
    Param
    (
        [PsObject] $dataset
    )

    Write-Host ($env:computername + "::Map");
    $list = @{};
    foreach($row in $dataset.Data)
    {
        # Write-Host ("Key: " + $row.MyKey.ToString());
        if($list.ContainsKey($row.MyKey) -eq $true)
        {
            $s = $list.Item($row.MyKey);
            $s.Sum += $row.Value1;
            $s.Count++;
        }
        else
        {
            $s = New-Object PSObject;
            $s | Add-Member -Type NoteProperty -Name MyKey -Value $row.MyKey;
            $s | Add-Member -Type NoteProperty -Name Sum -Value $row.Value1;
            $list.Add($row.MyKey, $s);
        }
    }
    Write-Output $list;
}

# Build the Reduce function
$MyReduce =
{
    Param
    (
        [object] $key,
        [PSObject] $dataset
    )

    Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count)
    $redux = @{};
    $count = 0;
    foreach($s in $dataset.Data)
    {
        $sum += $s.Sum;
        $count += 1;
    }

    # Reduce
    $redux.Add($s.MyKey, $sum / $count);

    # Return
    Write-Output $redux;
}

# Create the item data
$Mr = New-MapReduxItem "My Test MapReduce Job" $MyMap $MyReduce

# Array of processing nodes...
$MyNodes = ("node1", "node2", "node3", "node4", "localhost")

# Run the Map Reduce routine...
$MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose

# Show the results
Set-Location C:\
$MyMrResults | Out-GridView

Conclusion

I hope this article has shown that PowerShell has a significant infrastructure available for distributed computing. While it does take some code to expose a MapReduce-style framework, much of the work is already done, and PowerShell could prove to be the easiest platform on which to develop and run big data jobs in your corporate data center, potentially in the Azure cloud, or certainly as an academic exercise at home or school.
Follow me on Twitter to stay up to date on the continuing progress of my PowerShell MapRedux module, and thanks for reading! Daniel

    Read the article

  • ndd on Solaris 10

    - by user12620111
This is mostly a repost of LaoTsao's Weblog, with some tweaks. Last time that I tried to cut & paste directly off of his page, some of the XML was messed up. I run this from my MacBook; it should also work from your Windows laptop if you use cygwin.

================ If not already present, create an ssh key on your laptop ================

# ssh-keygen -t rsa

================ Enable passwordless ssh from my laptop. You need to type in the root password for the remote machines once; after that, you no longer need to type in the password when you ssh or scp from the laptop to the servers. ================

#!/usr/bin/env bash
for server in `cat servers.txt`
do
  echo root@$server
  cat ~/.ssh/id_rsa.pub | ssh root@$server "cat >> .ssh/authorized_keys"
done

================ servers.txt ================

testhost1
testhost2

================ etc_system_addins ================

set rpcmod:clnt_max_conns=8
set zfs:zfs_arc_max=0x1000000000
set nfs:nfs3_bsize=131072
set nfs:nfs4_bsize=131072

================ ndd-nettune.txt ================

#!/sbin/sh
#
# ident   "@(#)ndd-nettune.xml    1.0     01/08/06 SMI"

. /lib/svc/share/smf_include.sh
. /lib/svc/share/net_include.sh

# Make sure that the libraries essential to this stage of booting can be found.
LD_LIBRARY_PATH=/lib; export LD_LIBRARY_PATH
echo "Performing Directory Server Tuning..." >> /tmp/smf.out
#
# Standard SuperCluster Tunables
#
/usr/sbin/ndd -set /dev/tcp tcp_max_buf 2097152
/usr/sbin/ndd -set /dev/tcp tcp_xmit_hiwat 1048576
/usr/sbin/ndd -set /dev/tcp tcp_recv_hiwat 1048576

# Reset the library path now that we are past the critical stage
unset LD_LIBRARY_PATH

================ ndd-nettune.xml ================

<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<!-- ident "@(#)ndd-nettune.xml 1.0 04/09/21 SMI" -->
<service_bundle type='manifest' name='SUNWcsr:ndd'>
  <service name='network/ndd-nettune' type='service' version='1'>
    <create_default_instance enabled='true' />
    <single_instance />
    <dependency name='fs-minimal' type='service' grouping='require_all' restart_on='none'>
      <service_fmri value='svc:/system/filesystem/minimal' />
    </dependency>
    <dependency name='loopback-network' grouping='require_any' restart_on='none' type='service'>
      <service_fmri value='svc:/network/loopback' />
    </dependency>
    <dependency name='physical-network' grouping='optional_all' restart_on='none' type='service'>
      <service_fmri value='svc:/network/physical' />
    </dependency>
    <exec_method type='method' name='start' exec='/lib/svc/method/ndd-nettune' timeout_seconds='3' > </exec_method>
    <exec_method type='method' name='stop'  exec=':true' timeout_seconds='3' > </exec_method>
    <property_group name='startd' type='framework'>
      <propval name='duration' type='astring' value='transient' />
    </property_group>
    <stability value='Unstable' />
    <template>
      <common_name>
        <loctext xml:lang='C'> ndd network tuning </loctext>
      </common_name>
      <documentation>
        <manpage title='ndd' section='1M' manpath='/usr/share/man' />
      </documentation>
    </template>
  </service>
</service_bundle>

================ system_tuning.sh ================

#!/usr/bin/env bash
for server in `cat servers.txt`
do
  cat etc_system_addins | ssh root@$server "cat >> /etc/system"
  scp ndd-nettune.xml root@${server}:/var/svc/manifest/site/ndd-nettune.xml
  scp ndd-nettune.txt root@${server}:/lib/svc/method/ndd-nettune
  ssh root@$server chmod +x /lib/svc/method/ndd-nettune
  ssh root@$server svccfg validate /var/svc/manifest/site/ndd-nettune.xml
  ssh root@$server svccfg import /var/svc/manifest/site/ndd-nettune.xml
done

    Read the article

  • Reading XML Content

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Xml.Linq;
using System.Diagnostics;
using System.Threading;
using System.Xml;
using System.Reflection;

namespace XMLReading
{
    class Program
    {
        static void Main(string[] args)
        {
            string fileName = @"C:\temp\t.xml";

            List<EmergencyContactXMLDTO> emergencyContacts =
                new XmlReader<EmergencyContactXMLDTO, EmergencyContactXMLDTOMapper>().Read(fileName);

            foreach (var item in emergencyContacts)
            {
                Console.WriteLine(item.FileNb);
            }
        }
    }

    public class XmlReader<TDTO, TMAPPER>
        where TDTO : BaseDTO, new()
        where TMAPPER : PCPWXMLDTOMapper, new()
    {
        public List<TDTO> Read(String fileName)
        {
            XmlTextReader reader = new XmlTextReader(fileName);
            List<TDTO> emergencyContacts = new List<TDTO>();

            while (true)
            {
                TMAPPER mapper = new TMAPPER();

                // Position the reader on the next main element; stop when none are left.
                bool isFound = SeekElement(reader, mapper.GetMainXMLTagName());
                if (!isFound)
                    break;

                TDTO dto = new TDTO();

                // Read each mapped child element into the matching DTO property.
                foreach (var propertyKey in mapper.GetPropertyXMLMap())
                {
                    String dtoPropertyName = propertyKey.Key;
                    String xmlPropertyName = propertyKey.Value;

                    SeekElement(reader, xmlPropertyName);
                    SetValue(dto, dtoPropertyName, reader.ReadElementString());
                }

                emergencyContacts.Add(dto);
            }

            return emergencyContacts;
        }

        private void SetValue(Object dto, String propertyName, String value)
        {
            PropertyInfo prop = dto.GetType().GetProperty(propertyName, BindingFlags.Public | BindingFlags.Instance);
            prop.SetValue(dto, value, null);
        }

        private bool SeekElement(XmlTextReader reader, String elementName)
        {
            while (reader.Read())
            {
                XmlNodeType nodeType = reader.MoveToContent();
                if (nodeType != XmlNodeType.Element)
                {
                    continue;
                }

                if (reader.Name == elementName)
                {
                    return true;
                }
            }

            return false;
        }
    }

    public class BaseDTO
    {
    }

    public class EmergencyContactXMLDTO : BaseDTO
    {
        public string FileNb { get; set; }
        public string ContactName { get; set; }
        public string ContactPhoneNumber { get; set; }
        public string Relationship { get; set; }
        public string DoctorName { get; set; }
        public string DoctorPhoneNumber { get; set; }
        public string HospitalName { get; set; }
    }

    public interface PCPWXMLDTOMapper
    {
        Dictionary<string, string> GetPropertyXMLMap();
        String GetMainXMLTagName();
    }

    public class EmergencyContactXMLDTOMapper : PCPWXMLDTOMapper
    {
        public Dictionary<string, string> GetPropertyXMLMap()
        {
            return new Dictionary<string, string>
            {
                { "FileNb", "XFileNb" },
                { "ContactName", "XContactName" },
                { "ContactPhoneNumber", "XContactPhoneNumber" },
                { "Relationship", "XRelationship" },
                { "DoctorName", "XDoctorName" },
                { "DoctorPhoneNumber", "XDoctorPhoneNumber" },
                { "HospitalName", "XHospitalName" },
            };
        }

        public String GetMainXMLTagName()
        {
            return "EmergencyContact";
        }
    }
}
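The post does not show the input file, so the following is a minimal sketch of an XML document the reader above would accept. The element names come from EmergencyContactXMLDTOMapper, while the EmergencyContacts root element and the sample values are assumptions. Note that SeekElement only moves forward, so the child elements must appear in the order the mapper enumerates them.

using System.Xml.Linq;

static class SampleData
{
    // Writes a one-record sample file to the path used in Main above.
    public static void WriteSampleFile(string fileName)
    {
        var doc = new XDocument(
            new XElement("EmergencyContacts",          // root element name is an assumption
                new XElement("EmergencyContact",       // matches GetMainXMLTagName()
                    new XElement("XFileNb", "F-1001"),
                    new XElement("XContactName", "Jane Doe"),
                    new XElement("XContactPhoneNumber", "555-0100"),
                    new XElement("XRelationship", "Spouse"),
                    new XElement("XDoctorName", "Dr. Smith"),
                    new XElement("XDoctorPhoneNumber", "555-0101"),
                    new XElement("XHospitalName", "General Hospital"))));

        doc.Save(fileName);
    }
}

Calling SampleData.WriteSampleFile(@"C:\temp\t.xml") before running Program.Main would then print one FileNb value, "F-1001".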

    Read the article

  • creating objects in list

    - by prince23
List<Users> myList = new List<Users>
{
    new Users{ Name="Kumar",           Gender="M", Age=75, Parent="All"},
    new Users{ Name="Suresh",          Gender="M", Age=50, Parent="Kumar"},
    new Users{ Name="Bennette",        Gender="F", Age=45, Parent="Kumar"},
    new Users{ Name="kian",            Gender="M", Age=20, Parent="Suresh"},
    new Users{ Name="Nathani",         Gender="M", Age=15, Parent="Suresh"},
    new Users{ Name="Peter",           Gender="M", Age=90, Parent="All"},
    new Users{ Name="Mica",            Gender="M", Age=56, Parent="Peter"},
    new Users{ Name="Linderman",       Gender="M", Age=51, Parent="Peter"},
    new Users{ Name="john",            Gender="M", Age=25, Parent="Mica"},
    new Users{ Name="tom",             Gender="M", Age=21, Parent="Mica"},
    new Users{ Name="Ando",            Gender="M", Age=64, Parent="All"},
    new Users{ Name="Maya",            Gender="M", Age=24, Parent="Ando"},
    new Users{ Name="Niki Sanders",    Gender="F", Age=2,  Parent="Maya"},
    new Users{ Name="Angela Patrelli", Gender="F", Age=3,  Parent="Maya"},
};

Now I need to format this data into a hierarchy. Based on the Parent value, I will be creating objects under the parent objects. For example, { Name="Kumar", Gender="M", Age=75, Parent="All" } is a top-level item because its Parent property is "All". Under the parent object Kumar I then need to create new objects to store the users whose Parent is "Kumar":

new Users{ Name="Suresh",   Gender="M", Age=50, Parent="Kumar"},
new Users{ Name="Bennette", Gender="F", Age=45, Parent="Kumar"},

In other words, I need to check the Parent of each user and create further objects under it. I need to achieve something like the sample below, and I am looking for the syntax to do it.

public class SampleProjectData
{
    public static ObservableCollection<Product> GetSampleData()
    {
        DateTime dtS = DateTime.Now;
        ObservableCollection<Product> teams = new ObservableCollection<Product>();

        teams.Add(new Product() { PDName = "Product1", OverallStartTime = dtS, OverallEndTime = dtS + TimeSpan.FromDays(3) });

        Project emp = new Project() { PName = "Project1", OverallStartTime = dtS + TimeSpan.FromDays(1), OverallEndTime = dtS + TimeSpan.FromDays(6) };
        emp.Tasks.Add(new Task() { StartTime = dtS, EndTime = dtS + TimeSpan.FromDays(2), TaskName = "John's Task 3" });
        emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(3), EndTime = dtS + TimeSpan.FromDays(4), TaskName = "John's Task 2" });
        teams[0].Projects.Add(emp);

        emp = new Project() { PName = "Project2", OverallStartTime = dtS + TimeSpan.FromDays(1.5), OverallEndTime = dtS + TimeSpan.FromDays(5.5) };
        emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(1), EndTime = dtS + TimeSpan.FromDays(4), TaskName = "Victor's Task" });
        teams[0].Projects.Add(emp);

        emp = new Project() { PName = "Project3", OverallStartTime = dtS + TimeSpan.FromDays(2), OverallEndTime = dtS + TimeSpan.FromDays(5) };
        emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(1), EndTime = dtS + TimeSpan.FromDays(4), TaskName = "Jason's Task 1" });
        emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(7), EndTime = dtS + TimeSpan.FromDays(9), TaskName = "Jason's Task 2" });
        teams[0].Projects.Add(emp);

        teams.Add(new Product() { PDName = "Product2", OverallStartTime = dtS, OverallEndTime = dtS + TimeSpan.FromDays(3) });

        emp = new Project() { PName = "Project4", OverallStartTime = dtS + TimeSpan.FromDays(0.5), OverallEndTime = dtS + TimeSpan.FromDays(3.5) };
        emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(1.5), EndTime = dtS + TimeSpan.FromDays(4), TaskName = "Vicky's Task" });
        teams[1].Projects.Add(emp);

        emp = new Project() { PName = "Project5", OverallStartTime = dtS + TimeSpan.FromDays(2), OverallEndTime = dtS + TimeSpan.FromDays(6) };
        emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(2.2), EndTime = dtS + TimeSpan.FromDays(3.8), TaskName = "Oleg's Task 1" });
        emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(5), EndTime = dtS + TimeSpan.FromDays(6), TaskName = "Oleg's Task 2" });
        emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(8), EndTime = dtS + TimeSpan.FromDays(9.6), TaskName = "Oleg's Task 3" });
        teams[1].Projects.Add(emp);

        emp = new Project() { PName = "Project6", OverallStartTime = dtS + TimeSpan.FromDays(2.5), OverallEndTime = dtS + TimeSpan.FromDays(4.5) };
        emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(0.8), EndTime = dtS + TimeSpan.FromDays(2), TaskName = "Kim's Task" });
        teams[1].Projects.Add(emp);

        teams.Add(new Product() { PDName = "Product3", OverallStartTime = dtS, OverallEndTime = dtS + TimeSpan.FromDays(3) });

        emp = new Project() { PName = "Project7", OverallStartTime = dtS + TimeSpan.FromDays(5), OverallEndTime = dtS + TimeSpan.FromDays(7.5) };
        emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(1.5), EndTime = dtS + TimeSpan.FromDays(4), TaskName = "Balaji's Task 1" });
        emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(5), EndTime = dtS + TimeSpan.FromDays(8), TaskName = "Balaji's Task 2" });
        teams[2].Projects.Add(emp);

        emp = new Project() { PName = "Project8", OverallStartTime = dtS + TimeSpan.FromDays(3), OverallEndTime = dtS + TimeSpan.FromDays(6.3) };
        emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(1.75), EndTime = dtS + TimeSpan.FromDays(2.25), TaskName = "Li's Task" });
        teams[2].Projects.Add(emp);

        emp = new Project() { PName = "Project9", OverallStartTime = dtS + TimeSpan.FromDays(2), OverallEndTime = dtS + TimeSpan.FromDays(6) };
        emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(2), EndTime = dtS + TimeSpan.FromDays(3), TaskName = "Stacy's Task" });
        teams[2].Projects.Add(emp);

        return teams;
    }
}

The above is sample data where I am doing the grouping with static data. I need to do the same for data that comes from the database, and store it in lists. The data for the three tables, Product, Project, and Task, comes from three different web services. I have created three lists where I store the returned data:

List<Project> objpro = new List<Project>();
List<Product> objproduct = new List<Product>();
List<Task> objTask = new List<Task>();

Now I need to do the mapping between these tables. As you can see above, under a Product object I have added Project objects, and under each Project object I have added Task objects. From the data stored in the three lists I need to do the same mapping between the classes and group the data. The classes are defined as follows.

public class Product : INotifyPropertyChanged
{
    public Product()
    {
        this.Projects = new ObservableCollection<Project>();
    }

    public string PDName { get; set; }
    public ObservableCollection<Project> Projects { get; set; }

    private DateTime _st;
    public DateTime OverallStartTime
    {
        get { return _st; }
        set
        {
            if (this._st != value)
            {
                TimeSpan dur = this._et - this._st;
                this._st = value;
                this.OnPropertyChanged("OverallStartTime");
                this.OverallEndTime = value + dur;
            }
        }
    }

    private DateTime _et;
    public DateTime OverallEndTime
    {
        get { return _et; }
        set
        {
            if (this._et != value)
            {
                this._et = value;
                this.OnPropertyChanged("OverallEndTime");
            }
        }
    }

    #region INotifyPropertyChanged Members

    protected void OnPropertyChanged(string name)
    {
        if (this.PropertyChanged != null)
            this.PropertyChanged(this, new PropertyChangedEventArgs(name));
    }

    public event PropertyChangedEventHandler PropertyChanged;

    #endregion
}

public class Project : INotifyPropertyChanged
{
    public Project()
    {
        this.Tasks = new ObservableCollection<Task>();
    }

    public string PName { get; set; }
    public ObservableCollection<Task> Tasks { get; set; }

    DateTime _st;
    public DateTime OverallStartTime
    {
        get { return _st; }
        set
        {
            if (this._st != value)
            {
                TimeSpan dur = this._et - this._st;
                this._st = value;
                this.OnPropertyChanged("OverallStartTime");
                this.OverallEndTime = value + dur;
            }
        }
    }

    DateTime _et;
    public DateTime OverallEndTime
    {
        get { return _et; }
        set
        {
            if (this._et != value)
            {
                this._et = value;
                this.OnPropertyChanged("OverallEndTime");
            }
        }
    }

    #region INotifyPropertyChanged Members

    protected void OnPropertyChanged(string name)
    {
        if (this.PropertyChanged != null)
            this.PropertyChanged(this, new PropertyChangedEventArgs(name));
    }

    public event PropertyChangedEventHandler PropertyChanged;

    #endregion
}

public class Task : INotifyPropertyChanged
{
    public string TaskName { get; set; }

    DateTime _st;
    public DateTime StartTime
    {
        get { return _st; }
        set
        {
            if (this._st != value)
            {
                TimeSpan dur = this._et - this._st;
                this._st = value;
                this.OnPropertyChanged("StartTime");
                this.EndTime = value + dur;
            }
        }
    }

    private DateTime _et;
    public DateTime EndTime
    {
        get { return _et; }
        set
        {
            if (this._et != value)
            {
                this._et = value;
                this.OnPropertyChanged("EndTime");
            }
        }
    }

    #region INotifyPropertyChanged Members

    protected void OnPropertyChanged(string name)
    {
        if (this.PropertyChanged != null)
            this.PropertyChanged(this, new PropertyChangedEventArgs(name));
    }

    public event PropertyChangedEventHandler PropertyChanged;

    #endregion
}

I hope my question is clear.
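The post does not include an answer, so here is a minimal sketch of one way to do the grouping. The flat row types and the ProductName / ProjectName linking keys below are assumptions, since the question does not show how the rows returned by the three services reference their parents; substitute the real key properties from your DTOs.

using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Linq;

// Hypothetical flat rows, roughly what the three web services might return.
// The ProductName and ProjectName key properties are assumptions.
public class ProductRow { public string PDName { get; set; } public DateTime Start { get; set; } public DateTime End { get; set; } }
public class ProjectRow { public string PName { get; set; } public string ProductName { get; set; } public DateTime Start { get; set; } public DateTime End { get; set; } }
public class TaskRow    { public string TaskName { get; set; } public string ProjectName { get; set; } public DateTime Start { get; set; } public DateTime End { get; set; } }

public static class HierarchyBuilder
{
    public static ObservableCollection<Product> Build(
        IEnumerable<ProductRow> productRows,
        IEnumerable<ProjectRow> projectRows,
        IEnumerable<TaskRow> taskRows)
    {
        // Index the child rows by their parent key once, up front.
        var projectsByProduct = projectRows.ToLookup(p => p.ProductName);
        var tasksByProject = taskRows.ToLookup(t => t.ProjectName);

        var teams = new ObservableCollection<Product>();

        foreach (var pr in productRows)
        {
            var product = new Product { PDName = pr.PDName, OverallStartTime = pr.Start, OverallEndTime = pr.End };

            foreach (var pj in projectsByProduct[pr.PDName])
            {
                var project = new Project { PName = pj.PName, OverallStartTime = pj.Start, OverallEndTime = pj.End };

                foreach (var t in tasksByProject[pj.PName])
                {
                    project.Tasks.Add(new Task { TaskName = t.TaskName, StartTime = t.Start, EndTime = t.End });
                }

                product.Projects.Add(project);
            }

            teams.Add(product);
        }

        return teams;
    }
}

The same ToLookup pattern also covers the first half of the question: group the Users list by its Parent value and attach children under each user whose Parent is "All".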

    Read the article

  • Asp.net Custom user control button. How to stop multiple clicks by user.

    - by Laurence Burke
I am trying to modify an open-source forum called YetAnotherForum.NET. The project has a custom user control called YAF:ThemeButton, which is rendered as an anchor with an onclick handler. Here is ThemeButton.cs:

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

namespace YAF.Controls
{
    /// <summary>The theme button.</summary>
    public class ThemeButton : BaseControl, IPostBackEventHandler
    {
        /// <summary>The _click event.</summary>
        protected static object _clickEvent = new object();

        /// <summary>The _command event.</summary>
        protected static object _commandEvent = new object();

        /// <summary>The _attribute collection.</summary>
        protected AttributeCollection _attributeCollection;

        /// <summary>The _localized label.</summary>
        protected LocalizedLabel _localizedLabel = new LocalizedLabel();

        /// <summary>The _theme image.</summary>
        protected ThemeImage _themeImage = new ThemeImage();

        /// <summary>Initializes a new instance of the <see cref="ThemeButton"/> class.</summary>
        public ThemeButton()
            : base()
        {
            Load += new EventHandler(ThemeButton_Load);
            this._attributeCollection = new AttributeCollection(ViewState);
        }

        /// <summary>ThemePage for the optional button image.</summary>
        public string ImageThemePage
        {
            get { return this._themeImage.ThemePage; }
            set { this._themeImage.ThemePage = value; }
        }

        /// <summary>ThemeTag for the optional button image.</summary>
        public string ImageThemeTag
        {
            get { return this._themeImage.ThemeTag; }
            set { this._themeImage.ThemeTag = value; }
        }

        /// <summary>Localized Page for the optional button text.</summary>
        public string TextLocalizedPage
        {
            get { return this._localizedLabel.LocalizedPage; }
            set { this._localizedLabel.LocalizedPage = value; }
        }

        /// <summary>Localized Tag for the optional button text.</summary>
        public string TextLocalizedTag
        {
            get { return this._localizedLabel.LocalizedTag; }
            set { this._localizedLabel.LocalizedTag = value; }
        }

        /// <summary>Defaults to "yafcssbutton".</summary>
        public string CssClass
        {
            get { return (ViewState["CssClass"] != null) ? ViewState["CssClass"] as string : "yafcssbutton"; }
            set { ViewState["CssClass"] = value; }
        }

        /// <summary>Setting the link property will make this control non-postback.</summary>
        public string NavigateUrl
        {
            get { return (ViewState["NavigateUrl"] != null) ? ViewState["NavigateUrl"] as string : string.Empty; }
            set { ViewState["NavigateUrl"] = value; }
        }

        /// <summary>Localized Page for the optional link description (title).</summary>
        public string TitleLocalizedPage
        {
            get { return (ViewState["TitleLocalizedPage"] != null) ? ViewState["TitleLocalizedPage"] as string : "BUTTON"; }
            set { ViewState["TitleLocalizedPage"] = value; }
        }

        /// <summary>Localized Tag for the optional link description (title).</summary>
        public string TitleLocalizedTag
        {
            get { return (ViewState["TitleLocalizedTag"] != null) ? ViewState["TitleLocalizedTag"] as string : string.Empty; }
            set { ViewState["TitleLocalizedTag"] = value; }
        }

        /// <summary>Non-localized Title for optional link description.</summary>
        public string TitleNonLocalized
        {
            get { return (ViewState["TitleNonLocalized"] != null) ? ViewState["TitleNonLocalized"] as string : string.Empty; }
            set { ViewState["TitleNonLocalized"] = value; }
        }

        /// <summary>Gets Attributes.</summary>
        public AttributeCollection Attributes
        {
            get { return this._attributeCollection; }
        }

        /// <summary>Gets or sets CommandName.</summary>
        public string CommandName
        {
            get
            {
                if (ViewState["commandName"] != null)
                {
                    return ViewState["commandName"].ToString();
                }

                return null;
            }

            set { ViewState["commandName"] = value; }
        }

        /// <summary>Gets or sets CommandArgument.</summary>
        public string CommandArgument
        {
            get
            {
                if (ViewState["commandArgument"] != null)
                {
                    return ViewState["commandArgument"].ToString();
                }

                return null;
            }

            set { ViewState["commandArgument"] = value; }
        }

        #region IPostBackEventHandler Members

        /// <summary>Raises the post back event.</summary>
        void IPostBackEventHandler.RaisePostBackEvent(string eventArgument)
        {
            OnCommand(new CommandEventArgs(CommandName, CommandArgument));
            OnClick(EventArgs.Empty);
        }

        #endregion

        /// <summary>Sets up the child controls before render.</summary>
        private void ThemeButton_Load(object sender, EventArgs e)
        {
            if (!String.IsNullOrEmpty(this._themeImage.ThemeTag))
            {
                // add the theme image...
                Controls.Add(this._themeImage);
            }

            // render the text if available
            if (!String.IsNullOrEmpty(this._localizedLabel.LocalizedTag))
            {
                Controls.Add(this._localizedLabel);
            }
        }

        /// <summary>The render.</summary>
        protected override void Render(HtmlTextWriter output)
        {
            // get the title...
            string title = GetLocalizedTitle();

            output.BeginRender();
            output.WriteBeginTag("a");
            output.WriteAttribute("id", ClientID);

            if (!String.IsNullOrEmpty(CssClass))
            {
                output.WriteAttribute("class", CssClass);
            }

            if (!String.IsNullOrEmpty(title))
            {
                output.WriteAttribute("title", title);
            }
            else if (!String.IsNullOrEmpty(TitleNonLocalized))
            {
                output.WriteAttribute("title", TitleNonLocalized);
            }

            if (!String.IsNullOrEmpty(NavigateUrl))
            {
                output.WriteAttribute("href", NavigateUrl.Replace("&", "&amp;"));
            }
            else
            {
                // string.Format("javascript:__doPostBack('{0}','{1}')",this.ClientID,""));
                output.WriteAttribute("href", Page.ClientScript.GetPostBackClientHyperlink(this, string.Empty));
            }

            bool wroteOnClick = false;

            // handle additional attributes (if any)
            if (this._attributeCollection.Count > 0)
            {
                // add attributes...
                foreach (string key in this._attributeCollection.Keys)
                {
                    // get the attribute and write it...
                    if (key.ToLower() == "onclick")
                    {
                        // special handling... add to it...
                        output.WriteAttribute(key, string.Format("{0};{1}", this._attributeCollection[key], "this.blur();this.display='none';"));
                        wroteOnClick = true;
                    }
                    else if (key.ToLower().StartsWith("on") || key.ToLower() == "rel" || key.ToLower() == "target")
                    {
                        // only write javascript attributes -- and a few other attributes...
                        output.WriteAttribute(key, this._attributeCollection[key]);
                    }
                }
            }

            // IE fix
            if (!wroteOnClick)
            {
                output.WriteAttribute("onclick", "this.blur();this.style.display='none';");
            }

            output.Write(HtmlTextWriter.TagRightChar);

            output.WriteBeginTag("span");
            output.Write(HtmlTextWriter.TagRightChar);

            // render the optional controls (if any)
            base.Render(output);

            output.WriteEndTag("span");
            output.WriteEndTag("a");
            output.EndRender();
        }

        /// <summary>Gets the localized title.</summary>
        protected string GetLocalizedTitle()
        {
            if (Site != null && Site.DesignMode == true && !String.IsNullOrEmpty(TitleLocalizedTag))
            {
                return String.Format("[TITLE:{0}]", TitleLocalizedTag);
            }
            else if (!String.IsNullOrEmpty(TitleLocalizedPage) && !String.IsNullOrEmpty(TitleLocalizedTag))
            {
                return PageContext.Localization.GetText(TitleLocalizedPage, TitleLocalizedTag);
            }
            else if (!String.IsNullOrEmpty(TitleLocalizedTag))
            {
                return PageContext.Localization.GetText(TitleLocalizedTag);
            }

            return null;
        }

        /// <summary>Raises the Click event.</summary>
        protected virtual void OnClick(EventArgs e)
        {
            var handler = (EventHandler) Events[_clickEvent];

            if (handler != null)
            {
                handler(this, e);
            }
        }

        /// <summary>Raises the Command event and bubbles it up.</summary>
        protected virtual void OnCommand(CommandEventArgs e)
        {
            var handler = (CommandEventHandler) Events[_commandEvent];

            if (handler != null)
            {
                handler(this, e);
            }

            RaiseBubbleEvent(this, e);
        }

        /// <summary>The click event.</summary>
        public event EventHandler Click
        {
            add { Events.AddHandler(_clickEvent, value); }
            remove { Events.RemoveHandler(_clickEvent, value); }
        }

        /// <summary>The command event.</summary>
        public event CommandEventHandler Command
        {
            add { Events.AddHandler(_commandEvent, value); }
            remove { Events.RemoveHandler(_commandEvent, value); }
        }
    }
}

That is just the .cs file. The control is used like this in an .ascx page of the actual site:

<YAF:ThemeButton ID="Save" runat="server" CssClass="yafcssbigbutton leftItem" TextLocalizedTag="SAVE" OnClick="Save_Click" />

The OnClick is wired to a code-behind handler that does some server-side work:

protected void Save_Click(object sender, EventArgs e)
{
    // some server-side code here
}

The problem is that the user can click the link several times and fire that server-side function multiple times. For now I have added an extra onclick="this.style.display='none'" in the .cs code, but that is an ugly fix. Does anyone have a better idea for disabling the ThemeButton client-side? Please let me know if I need to give more examples or explain the question further. Thanks.
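One less invasive option is to leave ThemeButton.cs alone and attach a one-shot guard from the page's code-behind. This is only a sketch against the Render logic shown above, which merges a custom onclick attribute into the rendered anchor ahead of its postback href; the data-clicked attribute name is an arbitrary choice, not a YAF convention.

// In the page code-behind that declares the Save ThemeButton (a sketch, untested against YAF).
protected void Page_Load(object sender, EventArgs e)
{
    // ThemeButton.Render() appends its own "this.blur();..." script after this one,
    // so on the first click the guard only sets the flag and the postback href still runs.
    // On any later click "return false" cancels the anchor's default navigation,
    // i.e. the javascript:__doPostBack href, so the handler fires only once.
    Save.Attributes["onclick"] =
        "if (this.getAttribute('data-clicked')) { return false; } " +
        "this.setAttribute('data-clicked', '1');";
}

This keeps the control reusable: pages that need the guard opt in with one attribute, and pages that do not are unaffected.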

    Read the article
