Search Results

Search found 216 results on 9 pages for 'jane mcdowell'.


  • ASP.NET MVC (loading data from database)

    - by rah.deex
    Hi experts, it's me again... I have some code like this: using System; using System.Collections.Generic; using System.Linq; using System.Web; namespace MvcGridSample.Models { public class CustomerService { private List<SVC> Customers { get { List<SVC> customers; if (HttpContext.Current.Session["Customers"] != null) { customers = (List<SVC>) HttpContext.Current.Session["Customers"]; } else { //Create customer data store and save in session customers = new List<SVC>(); InitCustomerData(customers); HttpContext.Current.Session["Customers"] = customers; } return customers; } } public SVC GetByID(int customerID) { return this.Customers.AsQueryable().First(customer => customer.ID == customerID); } public IQueryable<SVC> GetQueryable() { return this.Customers.AsQueryable(); } public void Add(SVC customer) { this.Customers.Add(customer); } public void Update(SVC customer) { } public void Delete(int customerID) { this.Customers.RemoveAll(customer => customer.ID == customerID); } private void InitCustomerData(List<SVC> customers) { customers.Add(new SVC { ID = 1, FirstName = "John", LastName = "Doe", Phone = "1111111111", Email = "[email protected]", OrdersPlaced = 5, DateOfLastOrder = DateTime.Parse("5/3/2007") }); customers.Add(new SVC { ID = 2, FirstName = "Jane", LastName = "Doe", Phone = "2222222222", Email = "[email protected]", OrdersPlaced = 3, DateOfLastOrder = DateTime.Parse("4/5/2008") }); customers.Add(new SVC { ID = 3, FirstName = "John", LastName = "Smith", Phone = "3333333333", Email = "[email protected]", OrdersPlaced = 25, DateOfLastOrder = DateTime.Parse("4/5/2000") }); customers.Add(new SVC { ID = 4, FirstName = "Eddie", LastName = "Murphy", Phone = "4444444444", Email = "[email protected]", OrdersPlaced = 1, DateOfLastOrder = DateTime.Parse("4/5/2003") }); customers.Add(new SVC { ID = 5, FirstName = "Ziggie", LastName = "Ziggler", Phone = null, Email = "[email protected]", OrdersPlaced = 0, DateOfLastOrder = null }); customers.Add(new SVC { ID = 6, FirstName = "Michael", LastName = "J", Phone = "666666666", Email = "[email protected]", OrdersPlaced = 5, DateOfLastOrder = DateTime.Parse("12/3/2007") }); } } } This code is an example I found on the internet. In it, the data is created and saved in session before it's shown. What I want to ask is: what if I want to load the data from a database table instead? I'm a newbie here, please help :) Thanks in advance.
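    A minimal sketch of the database-backed version, assuming a Customers table with matching columns and a connection string named "Default" in web.config (both names are placeholders, not part of the original sample):

      using System;
      using System.Collections.Generic;
      using System.Configuration;
      using System.Data.SqlClient;

      public class CustomerService
      {
          private static readonly string ConnectionString =
              ConfigurationManager.ConnectionStrings["Default"].ConnectionString;

          // Replaces InitCustomerData: reads the customers from a table
          // instead of hard-coding them into the session.
          public List<SVC> GetAll()
          {
              var customers = new List<SVC>();
              using (var connection = new SqlConnection(ConnectionString))
              using (var command = new SqlCommand(
                  "SELECT ID, FirstName, LastName, Phone, Email FROM Customers", connection))
              {
                  connection.Open();
                  using (var reader = command.ExecuteReader())
                  {
                      while (reader.Read())
                      {
                          customers.Add(new SVC
                          {
                              ID = reader.GetInt32(0),
                              FirstName = reader.GetString(1),
                              LastName = reader.GetString(2),
                              Phone = reader.IsDBNull(3) ? null : reader.GetString(3),
                              Email = reader.GetString(4)
                          });
                      }
                  }
              }
              return customers;
          }
      }

    From there the session-caching pattern in the original code still applies; only the source of the list changes.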

    Read the article

  • select value of td and download content of selected tds

    - by user1272145
    I have this table <table class="results" id="summary_results"> <tr> <td>select all</td> <td>name</td> <td>id</td> <td>address</td> <td>url</td> </tr> <tr> <td> <input type="checkbox"> </td> <td>john doe</td> <td>1</td> <td>33.85 some address</td> <td>http://www.domain.com</td> </tr> <tr> <td> <input type="checkbox"> </td> <td>jane doe</td> <td>2</td> <td>34.85 some address</td> <td>http://www.domain2.com</td> </tr> <tr> <td> <input type="checkbox"> </td> <td>sam</td> <td>3</td> <td>33.86 some address</td> <td>http://www.domain3.com</td> </tr> </table> I would like to select all the rows and then download the content of the URLs, knowing that each URL is linked to the id. For example, the first URL will be www.domain.com?id=1&report=report.
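    A possible jQuery sketch, assuming the column order shown above (id in the third cell, URL in the fifth) and that the target pages are fetchable from the browser (same origin or CORS-enabled):

      // Check every box ("select all"), then collect the checked rows.
      $('#summary_results input[type="checkbox"]').prop('checked', true);

      var urls = [];
      $('#summary_results tr').has('input[type="checkbox"]:checked').each(function () {
          var cells = $(this).find('td');
          var id  = $.trim(cells.eq(2).text());  // third cell: id
          var url = $.trim(cells.eq(4).text());  // fifth cell: url
          urls.push(url + '?id=' + id + '&report=report');
      });

      // Fetch each page; note the browser blocks cross-domain requests
      // unless the remote hosts allow them.
      $.each(urls, function (i, u) {
          $.get(u, function (data) {
              console.log('downloaded ' + u);
          });
      });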

    Read the article

  • Getting Started with ASP.NET Membership, Profile and RoleManager

    - by Ben Griswold
    A new ASP.NET MVC project includes preconfigured Membership, Profile and RoleManager providers right out of the box.  Try it yourself – create an ASP.NET MVC application, crack open the web.config file and have a look.  First, you’ll find the ApplicationServices database connection: <connectionStrings>   <add name="ApplicationServices"        connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|aspnetdb.mdf;User Instance=true"        providerName="System.Data.SqlClient"/> </connectionStrings>   Notice the connection string is referencing the aspnetdb.mdf database hosted by SQL Express and it’s using integrated security so it’ll just work for you without having to call out a specific database login or anything. Scroll down the file a bit and you’ll find each of the three noted sections: <membership>   <providers>     <clear/>     <add name="AspNetSqlMembershipProvider"          type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"          connectionStringName="ApplicationServices"          enablePasswordRetrieval="false"          enablePasswordReset="true"          requiresQuestionAndAnswer="false"          requiresUniqueEmail="false"          passwordFormat="Hashed"          maxInvalidPasswordAttempts="5"          minRequiredPasswordLength="6"          minRequiredNonalphanumericCharacters="0"          passwordAttemptWindow="10"          passwordStrengthRegularExpression=""          applicationName="/"             />   </providers> </membership>   <profile>   <providers>     <clear/>     <add name="AspNetSqlProfileProvider"          type="System.Web.Profile.SqlProfileProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"          connectionStringName="ApplicationServices"          applicationName="/"             />   </providers> </profile>   <roleManager enabled="false">   <providers>     <clear />     <add connectionStringName="ApplicationServices" applicationName="/" name="AspNetSqlRoleProvider" type="System.Web.Security.SqlRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />     <add applicationName="/" name="AspNetWindowsTokenRoleProvider" type="System.Web.Security.WindowsTokenRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />   </providers> </roleManager> Really. It’s all there. Still don’t believe me?  Run the application, walk through the registration process and finally log in and log out.  Completely functional – and you didn’t have to do a thing! What else?  Well, you can manage your users via the Configuration Manager which is hiding in Visual Studio behind Projects > ASP.NET Configuration. The ASP.NET Web Site Administration Tool isn’t MVC-specific (neither is the Membership, Profile or RoleManager stuff) but it’s neat and I hardly ever see anyone using it.  Here you can set up and edit users, roles, and set access permissions for your site. You can manage application settings, establish your SMTP settings, configure debugging and tracing, define a default error page and even take your application offline.  The UI is rather plain-Jane but it works great. And here’s the best of all.  Let’s say you, like most of us, don’t want to run your application on top of the aspnetdb.mdf database.  Let’s suppose you want to use your own database and you’d like to add the membership stuff to it.  Well, that’s easy enough. 
    Take a look inside your [drive:]\%windir%\Microsoft.Net\Framework\v2.0.50727\ folder.  Here you’ll find a bunch of files.  If you were to run the InstallCommon.sql, InstallMembership.sql, InstallRoles.sql and InstallProfile.sql files against the database of your choice, you’d be installing the same membership, profile and role artifacts found in aspnetdb.mdf into your own database.  Too much trouble?  Okay. Run [drive:]\%windir%\Microsoft.Net\Framework\v2.0.50727\aspnet_regsql.exe from the command line instead.  This will launch the ASP.NET SQL Server Setup Wizard which walks you through the installation of those same database objects into the new or existing database of your choice. You may not always have the luxury of using this tool on your destination server, but you should use it whenever you can.  Last tip: don’t forget to update the ApplicationServices connection string to point to your custom database after the setup is complete. At the risk of sounding like a smarty, everything I’ve mentioned in this post has been around for quite a while. The thing is that not everyone has had the opportunity to use it.  And it makes sense. I know I’ve worked on projects which used custom membership services.  Why bother with the out-of-the-box stuff, right?   And the .NET framework is so massive, who can know it all? Well, eventually you might have a chance to architect your own solution using any implementation you’d like or you will have the time to play around with another aspect of the framework.  When you do, think back to this post.
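    For reference, the wizard run can also be scripted, and the connection string change is small. A sketch, with ".\SQLEXPRESS" and "MyAppDb" as placeholder server and database names:

      aspnet_regsql.exe -S .\SQLEXPRESS -E -A all -d MyAppDb

      <connectionStrings>
        <add name="ApplicationServices"
             connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;Initial Catalog=MyAppDb"
             providerName="System.Data.SqlClient"/>
      </connectionStrings>

    The -A all switch installs every feature set; -A m, -A r and -A p would install just membership, roles or profile respectively.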

    Read the article

  • Finding a person in the forest

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved Finding a person in the forest or Limiting the AD result in SharePoint People Picker There are times when we need to limit certain farms, servers, or site collections to a particular audience. One of my experiences involved limiting access to US citizens, another to a particular location. Now, most of us – your humble servant included – are not Active Directory experts – but we must be able to handle the “audience restrictions” as required. So here is how it’s done in a nutshell. Important note: not all of this can be done in PowerShell (at least not yet)! There are no Windows PowerShell commands to configure People Picker. The stsadm command is: stsadm -o setproperty -pn peoplepicker-searchadcustomquery -pv ADQuery -url http://somethingOrOther Note the long-hyphenated property name. Now to filling in the ADQuery.   LDAP Query in a nutshell Syntax LDAP is no older than SQL and an LDAP query is actually a query against the LDAP Database. LDAP attributes are the equivalent of Database columns, so why do we have to learn a new query language? Beats me! But we must, so here it is. The syntax of an LDAP query string is made of individual statements with relational operators including: = Equal <= Lower than or equal >= Greater than or equal… and memberOf – a group membership. ! Not * Wildcard Equal and memberOf are the most commonly used. Checking for absence uses the ! – not and the * - wildcard Example: (SN=Grant) All whose last name – SurName – is Grant Example: (!(SN=Grant)) All except Grant Example: (!(SN=*)) all where there is no SurName i.e. SurName is absent (probably Rappers). Example: (CN=MyGroup) Common Name is MyGroup.  Example: (GN=J*) all the Given Names that start with J (JJ, Jane, Jon, John, etc.) The cryptic SN, CN, GN, etc. are attributes and more about them later All the queries are enclosed in parentheses (Query). Complex queries are comprised of sets that are in AND or OR conditions. AND is denoted by the ampersand (&) and the OR is denoted by the vertical pipe (|). The general syntax is that of prefix Polish notation, where the operator precedes the operands. E.g. +ab is the sum of a and b. In an LDAP query (&(A)(B)) will garner the objects for which both A and B are true. In an LDAP query (&(A)(B)(C)) will garner the objects for which A, B and C are true. There’s no limit to the number of conditions. In an LDAP query (|(A)(B)) will garner the objects for which either A or B are true. In an LDAP query (|(A)(B)(C)) will garner the objects for which at least one of A, B and C is true. There’s no limit to the number of conditions. More complex queries have both types of conditions and the parentheses determine the order of operations. Attributes Now let’s get into the SN, CN, GN, and other attributes of the query SN – is the SurName (last name) GN – is the Given Name (first name) CN – is the Common Name, usually GN followed by SN OU – is an Organizational Unit such as division, department etc. DC – is a Domain Component in the AD forest l – lower case ‘L’ stands for location. Jerusalem anybody? Or Katmandu. UPN – User Principal Name, is usually the first part of an email address. By nature it is unique in the forest. Most systems set the UPN to be the first initial followed by the SN of the person involved. Some limit the total to 8 characters. If we have many ‘jsmith’ we have to somehow distinguish them from each other. DN – is the distinguished name – a name unique to the AD forest in which it lives. 
    Usually it’s a CN with some domain or group distinguishers. DN is important in conjunction with the memberOf relation. Groups have a stricter requirement. Each group has to have a unique name – its CN – and it has to be unique regardless of its place. See more below. All of the attributes are case insensitive. CN, cn, Cn, and cN are identical. objectCategory is an element that requires special consideration. AD contains many different objects like computers, printers, and of course people and groups. In the queries below, we’re limiting our search to people (person). Putting it all together Let’s get a list of everyone in the SPAdmin group of the Jerusalem OU in the local domain. (&(objectCategory=person)(memberOf=cn=SPAdmin,ou=Jerusalem,dc=local)) The memberOf=cn=SPAdmin uses the cn (Common Name) of the SPAdmin group. This is how the memberOf relation is used. ‘SPAdmin’ is actually the DN of the group. Also the memberOf relation does not allow wild cards (*) in the group name. Also, you are limited to at most one ‘OU’ entry. Let’s add Marvin Minsky to the search above. (|(&(objectCategory=person)(memberOf=cn=SPAdmin,ou=Jerusalem,dc=local))(CN=Marvin Minsky)) Here I added the OR pipe at the beginning of the query and put the CN requirement for Minsky at the end. Note that if Marvin was already in the prior result, he’s not going to be listed twice. One last note: You may see a drier but more complete list of attribute rules and examples in: http://www.tek-tips.com/faqs.cfm?fid=5667 And finally (thus negating the claim that my previous note was last), to the best of my knowledge there are 3 more ways to limit the audience. One is to use the peoplepicker-searchadcustomfilter property using the same ADQuery. This works only in SP1 and above. The second is to limit the search to users within this particular site collection – the property name is peoplepicker-onlysearchwithinsitecollection and the value is yes (-pv yes) And the third is -pn peoplepicker-serviceaccountdirectorypaths -pv “OU=ou1,DC=dc1…..” Again you are limited to at most one ‘OU’ phrase – no OU=ou1,OU=ou2… And now the real end. The main property discussed in this sprawling and seemingly endless monograph – peoplepicker-searchadcustomquery – is the most general way of getting the job done. Here are a few examples of command lines that worked and some that didn’t. Can you see why? C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsadm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfilter -pv (Title=David) Operation completed successfully. C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsadm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfilter -pv (!Title=David) Operation completed successfully. C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsadm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfilter -pv (OU=OURealName,OU=OUMid,OU=OUTop,DC=TopDC,DC=MidDC,DC=BottomDC) Command line error. Too many OUs C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsadm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfilter -pv (OU=OURealName) Operation completed successfully. C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsadm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfilter -pv (DC=TopDC,DC=MidDC,DC=BottomDC) Operation completed successfully. 
    C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsadm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfilter -pv (OU=OURealName,DC=TopDC,DC=MidDC,DC=BottomDC) Operation completed successfully.   That’s all folks!

    Read the article

  • Java EE 6 and NoSQL/MongoDB on GlassFish using JPA and EclipseLink 2.4 (TOTD #175)

    - by arungupta
    TOTD #166 explained how to use MongoDB in your Java EE 6 applications. The code in that tip used the APIs exposed by the MongoDB Java driver and so requires you to learn a new API. However if you are building Java EE 6 applications then you are already familiar with the Java Persistence API (JPA). EclipseLink 2.4, scheduled to release as part of Eclipse Juno, provides support for NoSQL databases by mapping a JPA entity to a document. Their wiki provides a complete explanation of how the mapping is done. This Tip Of The Day (TOTD) will show how you can leverage that support in your Java EE 6 applications deployed on GlassFish 3.1.2. Before we dig into the code, here are the key concepts ... A POJO is mapped to a NoSQL data source using @NoSql or the <no-sql> element in "persistence.xml". A subset of JPQL and Criteria queries are supported, based upon the underlying data store Connection properties are defined in "persistence.xml" Now, let's take a look at the code ... Download the latest EclipseLink 2.4 Nightly Bundle. There is an Installer, Source, and Bundle - make sure to download the Bundle link (20120410) and unzip. Download the GlassFish 3.1.2 zip and unzip. Install the EclipseLink 2.4 JARs in GlassFish Remove the following JARs from "glassfish/modules": org.eclipse.persistence.antlr.jar org.eclipse.persistence.asm.jar org.eclipse.persistence.core.jar org.eclipse.persistence.jpa.jar org.eclipse.persistence.jpa.modelgen.jar org.eclipse.persistence.moxy.jar org.eclipse.persistence.oracle.jar Add the following JARs from the EclipseLink 2.4 nightly build to "glassfish/modules": org.eclipse.persistence.antlr_3.2.0.v201107111232.jar org.eclipse.persistence.asm_3.3.1.v201107111215.jar org.eclipse.persistence.core.jpql_2.4.0.v20120407-r11132.jar org.eclipse.persistence.core_2.4.0.v20120407-r11132.jar org.eclipse.persistence.jpa.jpql_2.0.0.v20120407-r11132.jar org.eclipse.persistence.jpa.modelgen_2.4.0.v20120407-r11132.jar org.eclipse.persistence.jpa_2.4.0.v20120407-r11132.jar org.eclipse.persistence.moxy_2.4.0.v20120407-r11132.jar org.eclipse.persistence.nosql_2.4.0.v20120407-r11132.jar org.eclipse.persistence.oracle_2.4.0.v20120407-r11132.jar Start MongoDB Download the latest MongoDB from here (2.0.4 as of this writing). Create the default data directory for MongoDB as: sudo mkdir -p /data/db && sudo chown `id -u` /data/db Refer to Quickstart for more details. Start MongoDB as: arungup-mac:mongodb-osx-x86_64-2.0.4 <arungup> -> ./bin/mongod ./bin/mongod --help for help and startup options Mon Apr  9 12:56:02 [initandlisten] MongoDB starting : pid=3124 port=27017 dbpath=/data/db/ 64-bit host=arungup-mac.local Mon Apr  9 12:56:02 [initandlisten] db version v2.0.4, pdfile version 4.5 Mon Apr  9 12:56:02 [initandlisten] git version: 329f3c47fe8136c03392c8f0e548506cb21f8ebf Mon Apr  9 12:56:02 [initandlisten] build info: Darwin erh2.10gen.cc 9.8.0 Darwin Kernel Version 9.8.0: Wed Jul 15 16:55:01 PDT 2009; root:xnu-1228.15.4~1/RELEASE_I386 i386 BOOST_LIB_VERSION=1_40 Mon Apr  9 12:56:02 [initandlisten] options: {} Mon Apr  9 12:56:02 [initandlisten] journal dir=/data/db/journal Mon Apr  9 12:56:02 [initandlisten] recover : no journal files present, no recovery needed Mon Apr  9 12:56:02 [websvr] admin web console waiting for connections on port 28017 Mon Apr  9 12:56:02 [initandlisten] waiting for connections on port 27017 Check out the JPA/NoSQL sample from the SVN repository. The complete source code built in this TOTD can be downloaded here. 
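    For context, the shape of what gets persisted: a JPA entity annotated for document storage rather than a table. A rough sketch modeled on the EclipseLink wiki example (this Customer is illustrative; the real model classes come from the SVN checkout):

      import javax.persistence.Entity;
      import javax.persistence.GeneratedValue;
      import javax.persistence.Id;
      import org.eclipse.persistence.nosql.annotations.DataFormatType;
      import org.eclipse.persistence.nosql.annotations.Field;
      import org.eclipse.persistence.nosql.annotations.NoSql;

      // A JPA entity mapped to a MongoDB document instead of a table.
      @Entity
      @NoSql(dataFormat = DataFormatType.MAPPED)
      public class Customer {

          @Id
          @GeneratedValue
          @Field(name = "_id")   // maps to MongoDB's document id
          private String id;

          @Field(name = "NAME")
          private String name;

          public String getId() { return id; }
          public String getName() { return name; }
          public void setName(String name) { this.name = name; }
      }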
    Create Java EE 6 web app Create a Java EE 6 Maven web app as: mvn archetype:generate -DarchetypeGroupId=org.codehaus.mojo.archetypes -DarchetypeArtifactId=webapp-javaee6 -DgroupId=model -DartifactId=javaee-nosql -DarchetypeVersion=1.5 -DinteractiveMode=false Copy the model files from the checked out workspace to the generated project as: cd javaee-nosql && cp -r ~/code/workspaces/org.eclipse.persistence.example.jpa.nosql.mongo/src/model src/main/java Copy "persistence.xml": mkdir src/main/resources && cp -r ~/code/workspaces/org.eclipse.persistence.example.jpa.nosql.mongo/src/META-INF ./src/main/resources Add the following dependencies: <dependency> <groupId>org.eclipse.persistence</groupId> <artifactId>org.eclipse.persistence.jpa</artifactId> <version>2.4.0-SNAPSHOT</version> <scope>provided</scope></dependency><dependency> <groupId>org.eclipse.persistence</groupId> <artifactId>org.eclipse.persistence.nosql</artifactId> <version>2.4.0-SNAPSHOT</version></dependency><dependency> <groupId>org.mongodb</groupId> <artifactId>mongo-java-driver</artifactId> <version>2.7.3</version></dependency> The first one is for the EclipseLink latest APIs, the second one is for EclipseLink/NoSQL support, and the last one is the MongoDB Java driver. And the following repository: <repositories> <repository> <id>EclipseLink Repo</id> <url>http://www.eclipse.org/downloads/download.php?r=1&amp;nf=1&amp;file=/rt/eclipselink/maven.repo</url> <snapshots> <enabled>true</enabled> </snapshots> </repository>  </repositories> Copy the "Test.java" to the generated project: mkdir src/main/java/example && cp -r ~/code/workspaces/org.eclipse.persistence.example.jpa.nosql.mongo/src/example/Test.java ./src/main/java/example/ This file contains the source code to CRUD the JPA entity to MongoDB. This sample is explained in detail on the EclipseLink wiki. Create a new Servlet in the "example" directory as: package example;import java.io.IOException;import java.io.PrintWriter;import javax.servlet.ServletException;import javax.servlet.annotation.WebServlet;import javax.servlet.http.HttpServlet;import javax.servlet.http.HttpServletRequest;import javax.servlet.http.HttpServletResponse;/** * @author Arun Gupta */@WebServlet(name = "TestServlet", urlPatterns = {"/TestServlet"})public class TestServlet extends HttpServlet { protected void processRequest(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { response.setContentType("text/html;charset=UTF-8"); PrintWriter out = response.getWriter(); try { out.println("<html>"); out.println("<head>"); out.println("<title>Servlet TestServlet</title>"); out.println("</head>"); out.println("<body>"); out.println("<h1>Servlet TestServlet at " + request.getContextPath() + "</h1>"); try { Test.main(null); } catch (Exception ex) { ex.printStackTrace(); } out.println("</body>"); out.println("</html>"); } finally { out.close(); } } @Override protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { processRequest(request, response); } @Override protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { processRequest(request, response); }} Build the project and deploy it as: mvn clean package && glassfish3/bin/asadmin deploy --force=true target/javaee-nosql-1.0-SNAPSHOT.war Accessing http://localhost:8080/javaee-nosql/TestServlet shows the following messages in the server.log: connecting(EISLogin( platform=> MongoPlatform user name=> "" MongoConnectionSpec())) . 
. .Connected: User: Database: 2.7  Version: 2.7 . . .Executing MappedInteraction() spec => null properties => {mongo.collection=CUSTOMER, mongo.operation=INSERT} input => [DatabaseRecord( CUSTOMER._id => 4F848E2BDA0670307E2A8FA4 CUSTOMER.NAME => AMCE)]. . .Data access result: [{TOTALCOST=757.0, ORDERLINES=[{DESCRIPTION=table, LINENUMBER=1, COST=300.0}, {DESCRIPTION=balls, LINENUMBER=2, COST=5.0}, {DESCRIPTION=rackets, LINENUMBER=3, COST=15.0}, {DESCRIPTION=net, LINENUMBER=4, COST=2.0}, {DESCRIPTION=shipping, LINENUMBER=5, COST=80.0}, {DESCRIPTION=handling, LINENUMBER=6, COST=55.0},{DESCRIPTION=tax, LINENUMBER=7, COST=300.0}], SHIPPINGADDRESS=[{POSTALCODE=L5J1H7, PROVINCE=ON, COUNTRY=Canada, CITY=Ottawa,STREET=17 Jane St.}], VERSION=2, _id=4F848E2BDA0670307E2A8FA8,DESCRIPTION=Pingpong table, CUSTOMER__id=4F848E2BDA0670307E2A8FA7, BILLINGADDRESS=[{POSTALCODE=L5J1H8, PROVINCE=ON, COUNTRY=Canada, CITY=Ottawa, STREET=7 Bank St.}]}] You'll not see any output in the browser, just the output in the console. But the code can be easily modified to do so. Once again, the complete Maven project can be downloaded here. Do you want to try accessing relational and non-relational (aka NoSQL) databases in the same PU ?

    Read the article

  • Android - Create a custom multi-line ListView bound to an ArrayList

    - by Bill Osuch
    The Android HelloListView tutorial shows how to bind a ListView to an array of string objects, but you'll probably outgrow that pretty quickly. This post will show you how to bind the ListView to an ArrayList of custom objects, as well as create a multi-line ListView. Let's say you have some sort of search functionality that returns a list of people, along with addresses and phone numbers. We're going to display that data in three formatted lines for each result, and make it clickable. First, create your new Android project, and create two layout files. Main.xml will probably already be created by default, so paste this in: <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"  android:orientation="vertical"  android:layout_width="fill_parent"   android:layout_height="fill_parent">  <TextView   android:layout_height="wrap_content"   android:text="Custom ListView Contents"   android:gravity="center_vertical|center_horizontal"   android:layout_width="fill_parent" />   <ListView    android:id="@+id/ListView01"    android:layout_height="wrap_content"    android:layout_width="fill_parent"/> </LinearLayout> Next, create a layout file called custom_row_view.xml. This layout will be the template for each individual row in the ListView. You can use pretty much any type of layout - Relative, Table, etc., but for this we'll just use Linear: <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"  android:orientation="vertical"  android:layout_width="fill_parent"   android:layout_height="fill_parent">   <TextView android:id="@+id/name"   android:textSize="14sp"   android:textStyle="bold"   android:textColor="#FFFF00"   android:layout_width="wrap_content"   android:layout_height="wrap_content"/>  <TextView android:id="@+id/cityState"   android:layout_width="wrap_content"   android:layout_height="wrap_content"/>  <TextView android:id="@+id/phone"   android:layout_width="wrap_content"   android:layout_height="wrap_content"/> </LinearLayout> Now, add an object called SearchResults. Paste this code in: public class SearchResults {  private String name = "";  private String cityState = "";  private String phone = "";  public void setName(String name) {   this.name = name;  }  public String getName() {   return name;  }  public void setCityState(String cityState) {   this.cityState = cityState;  }  public String getCityState() {   return cityState;  }  public void setPhone(String phone) {   this.phone = phone;  }  public String getPhone() {   return phone;  } } This is the class that we'll be filling with our data, and loading into an ArrayList. Next, you'll need a custom adapter. This one just extends the BaseAdapter, but you could extend the ArrayAdapter if you prefer. 
    public class MyCustomBaseAdapter extends BaseAdapter {  private static ArrayList<SearchResults> searchArrayList;    private LayoutInflater mInflater;  public MyCustomBaseAdapter(Context context, ArrayList<SearchResults> results) {   searchArrayList = results;   mInflater = LayoutInflater.from(context);  }  public int getCount() {   return searchArrayList.size();  }  public Object getItem(int position) {   return searchArrayList.get(position);  }  public long getItemId(int position) {   return position;  }  public View getView(int position, View convertView, ViewGroup parent) {   ViewHolder holder;   if (convertView == null) {    convertView = mInflater.inflate(R.layout.custom_row_view, null);    holder = new ViewHolder();    holder.txtName = (TextView) convertView.findViewById(R.id.name);    holder.txtCityState = (TextView) convertView.findViewById(R.id.cityState);    holder.txtPhone = (TextView) convertView.findViewById(R.id.phone);    convertView.setTag(holder);   } else {    holder = (ViewHolder) convertView.getTag();   }      holder.txtName.setText(searchArrayList.get(position).getName());   holder.txtCityState.setText(searchArrayList.get(position).getCityState());   holder.txtPhone.setText(searchArrayList.get(position).getPhone());   return convertView;  }  static class ViewHolder {   TextView txtName;   TextView txtCityState;   TextView txtPhone;  } } (This is basically the same as the List14.java API demo) Finally, we'll wire it all up in the main class file: public class CustomListView extends Activity {     @Override     public void onCreate(Bundle savedInstanceState) {         super.onCreate(savedInstanceState);         setContentView(R.layout.main);                 ArrayList<SearchResults> searchResults = GetSearchResults();                 final ListView lv1 = (ListView) findViewById(R.id.ListView01);         lv1.setAdapter(new MyCustomBaseAdapter(this, searchResults));                 lv1.setOnItemClickListener(new OnItemClickListener() {          @Override          public void onItemClick(AdapterView<?> a, View v, int position, long id) {           Object o = lv1.getItemAtPosition(position);           SearchResults fullObject = (SearchResults)o;           Toast.makeText(CustomListView.this, "You have chosen: " + " " + fullObject.getName(), Toast.LENGTH_LONG).show();          }          });     }         private ArrayList<SearchResults> GetSearchResults(){      ArrayList<SearchResults> results = new ArrayList<SearchResults>();            SearchResults sr1 = new SearchResults();      sr1.setName("John Smith");      sr1.setCityState("Dallas, TX");      sr1.setPhone("214-555-1234");      results.add(sr1);            sr1 = new SearchResults();      sr1.setName("Jane Doe");      sr1.setCityState("Atlanta, GA");      sr1.setPhone("469-555-2587");      results.add(sr1);            sr1 = new SearchResults();      sr1.setName("Steve Young");      sr1.setCityState("Miami, FL");      sr1.setPhone("305-555-7895");      results.add(sr1);            sr1 = new SearchResults();      sr1.setName("Fred Jones");      sr1.setCityState("Las Vegas, NV");      sr1.setPhone("612-555-8214");      results.add(sr1);            return results;     } } Notice that we first get an ArrayList of SearchResults objects (normally this would be from an external data source...), pass it to the custom adapter, then set up a click listener. The listener gets the item that was clicked, converts it back to a SearchResults object, and does whatever it needs to do. 
Fire it up in the emulator, and you should wind up with something like this:

    Read the article

  • Right-Time Retail Part 3

    - by David Dorf
    This is part three of the three-part series.  Read Part 1 and Part 2 first. Right-Time Marketing Real-time isn’t just about executing faster; it extends to interactions with customers as well. As an industry, we’ve spent many years analyzing all the data that’s been collected. Yes, that data has been invaluable in helping us make better decisions like where to open new stores, how to assort those stores, and how to price our products. But the recent advances in technology are now making it possible to analyze and deliver that data very quickly… fast enough to impact a potential sale in near real-time. Let me give you two examples. Salesmen in car dealerships get pretty good at sizing people up. When a potential customer walks in the door, it doesn’t take long for the salesman to figure out the revenue at stake. Is this person a real buyer, or just looking for a fun test drive? Will this person buy today or three months from now? Will this person opt for the expensive packages, or go bare bones? While the salesman certainly asks some leading questions, much of the information is discerned through body language. But body language doesn’t translate very well over the web. Eloqua, which was acquired by Oracle earlier this year, reads internet body language. By tracking the behavior of the people visiting your web site, Eloqua categorizes visitors based on their propensity to buy. While Eloqua’s roots have been in B2B, we’ve been looking at leveraging the technology with ATG to target B2C. Knowing what sites were previously visited, how often the customer has been to your site recently, and how long they’ve spent searching can help understand where the customer is in their purchase journey. And knowing that bit of information may be enough to help close the deal with a real-time offer, follow-up email, or online customer service pop-up. This isn’t so different from the days gone by when the clerk behind the counter of the corner store noticed you were lingering in a particular aisle, so he walked over to help you compare two products and close the sale. You appreciated the personalized service, and he knew the value of the long-term relationship. Move that same concept into the digital world and you have Oracle’s CX Suite, a cloud-based offering of end-to-end customer experience tools, assembled primarily from acquisitions. Those tools are Oracle Marketing (Eloqua), Oracle Commerce (ATG, Endeca), Oracle Sales (Oracle CRM On Demand), Oracle Service (RightNow), Oracle Social (Collective Intellect, Vitrue, Involver), and Oracle Content (Fatwire). We are providing the glue that binds the CIO and CMO together to unleash synergies that drive the top-line higher, and by virtue of the cloud-approach, keep costs at bay. 
    My second example of real-time marketing takes place in the store but leverages the concepts of Web marketing. In 1962 the decline of personalized service in retail began. Anyone know the significance of that year? That’s when Target, K-Mart, and Walmart each opened their first stores, and over the succeeding years the industry chose scale over personal service. No longer were you known as “Jane with the snotty kid so make sure we check her out fast,” but you suddenly became “time-starved female age 20-30 with kids.” I’m not saying that was a bad thing – it was the right thing for our industry at the time, and it enabled a huge amount of growth, cheaper prices, and more variety of products. But scale alone is no longer good enough. Today’s sophisticated consumer demands scale, experience, and personal attention. To some extent we’ve delivered that on websites via the magic of cookies, your willingness to log in, and sophisticated data analytics. What store manager wouldn’t love a report detailing all the visitors to his store, where they came from, and which products they examined? People trackers are getting more sophisticated, incorporating infrared, video analytics, and even face recognition. (Next time you walk in front of a mannequin, don’t be surprised if it’s looking back.) But the ultimate marketing conduit is the mobile phone. Since each mobile phone emits a unique number on WiFi networks, it becomes the cookie of the physical world. Assuming Congress keeps privacy safeguards reasonable, we’ll have a win-win situation for both retailers and consumers. Retailers get to know more about the consumer’s purchase journey, and consumers get higher levels of service with the retailer. When I call my bank, a couple things happen before the call is connected. A reverse look-up on my phone number identifies me so my accounts can be retrieved from Siebel CRM. Then the system anticipates why I’m calling based on recent transactions. In this example, it sees that I was just charged a foreign currency fee, so it assumes that’s the reason I’m calling. It puts all the relevant information on the customer service rep’s screen as it connects the call. When I complain about the fee, the rep immediately sees I’m a great customer and I travel lots, so she suggests switching me to their traveler’s card that doesn’t have foreign transaction fees. That technology is powered by a product called Oracle Real-Time Decisions, a rules engine built to execute very quickly, basically in the time it takes the phone to ring once. So let’s combine the power of that product with our new-found mobile cookie and provide contextual customer interactions in real-time. Our first opportunity comes when a customer crosses a pre-defined geo-fence, typically a boundary around the store. Context is the key to our interaction: that’s the customer (known or anonymous), the time of day and day of week, and location. Thomas near the downtown store on a Wednesday at noon means he’s heading to lunch. If he were near the mall location on a Saturday morning, that’s a completely different context. But on his way to lunch, we’ll let Thomas know that we’ve got a new shipment of ASICS running shoes on display with a simple text message. We used the context to look up Thomas’ past purchases and understood he was an avid runner. We used the fact that this was lunchtime to select the type of message, in this case an informational message instead of an offer. Thomas enters the store, phone in hand, and walks to the shoe department. 
    He scans one of the new ASICS shoes using the convenient QR Codes we provided on the shelf-tags, but then he starts scanning low-end Nikes. Each scan is another opportunity to both learn from Thomas and potentially interact via another message. Since he historically buys low-end Nikes and keeps scanning them, he’s likely falling back into his old ways. Our marketing rules are currently set to move loyal customers to higher-margin products. We could have set the dials to increase visit frequency, move overstocked items, increase basket size, or many other settings, but today we are trying to move Thomas to higher-margin products. We send Thomas another text message, this time it’s a personalized offer for 10% off ASICS good for 24 hours. Offering him a discount on Nikes would be throwing margin away since he buys those anyway. We are using our marketing dollars to change behavior that increases the long-term value of Thomas. He decides to buy the ASICS and scans the discount code on his phone at checkout. Checkout is yet another opportunity to interact with Thomas, so the transaction is sent back to Oracle RTD for evaluation. Since Thomas didn’t buy anything with the shoes, we’ll print a bounce-back coupon on the receipt offering 30% off ASICS socks if he returns within seven days. We have successfully started moving Thomas from low-margin to high-margin products. In both of these marketing scenarios, we are able to leverage data in near real-time to decide how best to interact with the customer and lead to an increase in the lifetime value of the customer. The key here is acting at the moment the customer shows interest using the context of the situation. We aren’t pushing random products at haphazard times. We are tailoring the marketing to be very specific to this customer, and it’s the technology that allows this to happen in near real-time. Conclusion As we enable more right-time integrations and interactions, retailers will begin to offer increased service to their customers. Localized and personalized service at scale will drive loyalty and lead to meaningful revenue growth for the retailers that execute well. Our industry needs to support Commerce Anywhere…and commerce anytime as well.

    Read the article

  • Are your personal insecurities screwing up your internal communications?

    - by Lucy Boyes
    I do some internal comms as part of my job. Quite a lot of it involves talking to people about stuff. I’m spending the next couple of weeks talking to lots of people about internal comms itself, because we haven’t done a lot of audience/user feedback gathering, and it turns out that if you talk to people about how they feel and what they think, you get some pretty interesting insights (and an idea of what to do next that isn’t just based on guesswork and generalising from self). Three things keep coming up from talking to people about what we suck at  in terms of internal comms. And, as far as I can tell, they’re all examples where personal insecurity on the part of the person doing the communicating makes the experience much worse for the people on the receiving end. 1. Spending time telling people how you’re going to do something, not what you’re doing and why Imagine you’ve got to give an update to a lot of people who don’t work in your area or department but do have an interest in what you’re doing (either because they want to know because they’re curious or because they need to know because it’s going to affect their work too). You don’t want to look bad at your job. You want to make them think you’ve got it covered – ideally because you do*. And you want to reassure them that there’s lots of exciting work going on in your area to make [insert thing of choice] happen to [insert thing of choice] so that [insert group of people] will be happy. That’s great! You’re doing a good job and you want to tell people about it. This is good comms stuff right here. However, you’re slightly afraid you might secretly be stupid or lazy or incompetent. And you’re exponentially more afraid that the people you’re talking to might think you’re stupid or lazy or incompetent. Or pointless. Or not-adding-value. Or whatever the thing that’s the worst possible thing to be in your company is. So you open by mentioning all the stuff you’re going to do, spending five minutes or so making sure that everyone knows that you’re DOING lots of STUFF. And the you talk for the rest of the time about HOW you’re going to do the stuff, because that way everyone will know that you’ve thought about this really hard and done tons of planning and had lots of great ideas about process and that you’ve got this one down. That’s the stuff you’ve got to say, right? To prove you’re not fundamentally worthless as a human being? Well, maybe. But probably not. See, the people who need to know how you’re going to do the stuff are the people doing the stuff. And those are the people in your area who you’ve (hopefully-please-for-the-love-of-everything-holy) already talked to in depth about how you’re going to do the thing (because else how could they help do it?). They are the only people who need to know the how**. It’s the difference between strategy and tactics. The people outside of your bubble of stuff-doing need to know the strategy – what it is that you’re doing, why, where you’re going with it, etc. The people on the ground with you need the strategy and the tactics, because else they won’t know how to do the stuff. But the outside people don’t really need the tactics at all. Don’t bother with the how unless your audience needs it. They probably don’t. It might make you feel better about yourself, but it’s much more likely that Bob and Jane are thinking about how long this meeting has gone on for already than how personally impressive and definitely-not-an-idiot you are for knowing how you’re going to do some work. 
Feeling marginally better about yourself (but, let’s face it, still insecure as heck) is not worth the cost, which in this case is the alienation of your audience. 2. Talking for too long about stuff This is kinda the same problem as the previous problem, only much less specific, and I’ve more or less covered why it’s bad already. Basic motivation: to make people think you’re not an idiot. What you do: talk for a very long time about what you’re doing so as to make it sound like you know what you’re doing and lots about it. What your audience wants: the shortest meaningful update. Some of this is a kill your darlings problem – the stuff you’re doing that seems really nifty to you seems really nifty to you, and thus you want to share it with everyone to show that you’re a smart person who thinks up nifty things to do. The downside to this is that it’s mostly only interesting to you – if other people don’t need to know, they likely also don’t care. Think about how you feel when someone is talking a lot to you about a lot of stuff that they’re doing which is at best tangentially interesting and/or relevant. You’re probably not thinking that they’re really smart and clearly know what they’re doing (unless they’re talking a lot and being really engaging about it, which is not the same as talking a lot). You’re probably thinking about something totally unrelated to the thing they’re talking about. Or the fact that you’re bored. You might even – and this is the opposite of what they’re hoping to achieve by talking a lot about stuff – be thinking they’re kind of an idiot. There’s another huge advantage to paring down what you’re trying to say to the barest possible points – it clarifies your thinking. The lightning talk format, as well as other formats which limit the time and/or number of slides you have to say a thing, are really good for doing this. It’s incredibly likely that your audience in this case (the people who need to know some things about your thing but not all the things about your thing) will get everything they need to know from five minutes of you talking about it, especially if trying to condense ALL THE THINGS into a five-minute talk has helped you get clear in your own mind what you’re doing, what you’re trying to say about what you’re doing and why you’re doing it. The bonus of this is that by being clear in your thoughts and in what you say, and in not taking up lots of people’s time to tell them stuff they don’t really need to know, you actually come across as much, much smarter than the person who talks for half an hour or more about things that are semi-relevant at best. 3. Waiting until you’ve got every detail sorted before announcing a big change to the people affected by it This is the worst crime on the list. It’s also human nature. Announcing uncertainty – that something important is going to happen (big reorganisation, product getting canned, etc.) but you’re not quite sure what or when or how yet – is scary. There are risks to it. Uncertainty makes people anxious. It might even paralyse them. You can’t run a business while you’re figuring out what to do if you’ve paralysed everyone with fear over what the future might bring. And you’re scared that they might think you’re not the right person to be in charge of [thing] if you don’t even know what you’re doing with it. Best not to say anything until you know exactly what’s going to happen and you can reassure them all, right? Nope. 
    The people who are going to be affected by whatever it is that you don’t quite know all the details of yet aren’t stupid***. You wouldn’t have hired them if they were. They know something’s up because you’ve got your guilty face on and you keep pulling people into meeting rooms and looking vaguely worried. Here’s the deal: it’s a lot less stressful for everyone (including you) if you’re up front from the beginning. We took this approach during a recent company-wide reorganisation and got really positive feedback. People would much, much rather be told that something is going to happen but you’re not entirely sure what it is yet than have you wait until it’s all fixed up and then fait accompli the heck out of them. They will tell you this themselves if you ask them. And here’s why: by waiting until you know exactly what’s going on to communicate, you remove any agency that the people that the thing is going to happen to might otherwise have had. I know you’re scared that they might get scared – and that’s natural and kind of admirable – but it’s also patronising and infantilising. Ask someone whether they’d rather work on a project which has an openly uncertain future from the beginning, or one where everything’s great until it gets shut down with no forewarning, and very few people are going to tell you they’d prefer the latter. Uncertainty is humanising. It’s you admitting that you don’t have all the answers, which is great, because no one does. It allows you to be consultative – you can actually ask other people what they think and how they feel and what they’d like to do and what they think you should do, and they’ll thank you for it and feel listened to and respected as people and colleagues. Which is a really good reason to start talking to them about what’s going on as soon as you know something’s going on yourself. All of the above assumes you actually care about talking to the people who work with you and for you, and that you’d like to do the right thing by them. If that’s not the case, you can cheerfully disregard the advice here, but if it is, you might want to think about the ways above – and the inevitable countless other ways – that making internal communication about you and not about your audience could actually be doing the people you’re trying to communicate with a huge disservice. So take a deep breath and talk. For five minutes or so. About the important things. Not the other things. As soon as you possibly can. And you’ll be fine.   *Of course you do. You’re good at your job. Don’t worry. **This might not always be true, but it is most of the time. Other people who need to know the how will either be people who you’ve already identified as needing-to-know and thus part of the same set as the people in your area you’ve already discussed this with, or else they’ll ask you. But don’t bring this stuff up unless someone asks for it, because most of the people in the audience really don’t care and you’re wasting their time. ***I mean, they might be. But let’s give them the benefit of the doubt and assume they’re not.

    Read the article

  • reading Twitter API with JSON framework

    - by iPixFolio
    Hi, I'm building a Twitter reader into an app. I'm using this JSON library to parse the Twitter API. I'm seeing some odd results on certain messages. I know that the Twitter API returns results in UTF8 format. I'm wondering if I'm doing something wrong when reading the JSON parsed fields. My code is spread out across multiple classes so it's hard to give a concise code drop with the symptoms, but here's what I've got: I am using ASIHTTP for async HTTP processing. Here is how I process a response from ASIHTTP: ... NSMutableString* tempString = [[NSMutableString alloc] initWithString:[request responseString]]; NSError *error; SBJSON *json = [[SBJSON alloc] init]; id JSONresponse = [json objectWithString:tempString error:&error]; [tempString release]; [json release]; if (JSONresponse) { self.response = JSONresponse; ... self.response holds the JSON representation of the result from the Twitter call. Now, I will take the JSON response and write each tweet into a container object (Tweet). In the following code, the response from above is referenced as request.response: ... // save list of tweets to local cache for (NSDictionary* response in request.response) { Tweet* tweet = [[Tweet alloc] init]; tweet.text = [response objectForKey:@"text"]; tweet.id = [response objectForKey:@"id"]; tweet.created = [response objectForKey:@"created_at"]; [Tweet addTweet:tweet]; [tweet release]; } ... At this point, I have a container holding the tweets. I'm only keeping 3 fields from the tweet: "id", "text", and "created_at". The "text" field is the problem field. To display the tweets, I build an HTML page from the container of tweets, like this: ... Tweet* tweet = nil; for (int i = 0; i < [Tweet tweetCount]; i++) { tweet = [Tweet tweetAtIndex:i]; [html appendString:@"<div class='tweet'>"]; [html appendFormat:@"<div class='tweet-date'>%@</div>", tweet.created ]; [html appendFormat:@"<div class='tweet-text'>%@</div>", tweet.text ]; [html appendString:@"</div>"]; } ... In another routine, I save the HTML page to a temp file. if (html && [html length] > 0 ) { NSString* uniqueString = [[NSProcessInfo processInfo] globallyUniqueString]; NSString* filename = [NSString stringWithFormat:@"%@.html", uniqueString ]; filename = [tempDir stringByAppendingPathComponent:filename]; NSError* error = nil; [html writeToFile:filename atomically:NO encoding:NSUTF8StringEncoding error:&error]; ... I then create a URLRequest from the file and load it into a UIWebView: NSURL* url = [NSURL fileURLWithPath:filename]; NSURLRequest* request = [NSURLRequest requestWithURL:url]; [self.webView loadRequest:request]; ... At this point, I can see the tweets in a browser window. Some of the tweets will show invalid characters like this: iPhone 4 ad spoofed with Glee’s Jane Lynch ... Glee’s should be Glee's Can anybody shed any light on what I'm doing wrong and offer suggestions on how to fix it? Basically, to summarize: I'm reading a UTF8 feed with JSON, I write the UTF8 strings into an HTML file, and I display the HTML file with a UIWebView. Some of the UTF8 strings are not properly decoded. I need to know where to decode them and how to do it. Thanks! Mark
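    A sketch of one likely fix, assuming the bytes are intact and the mangling happens at render time (a UIWebView has to guess the file's encoding unless it is told otherwise); the snippet either declares the charset in the generated markup or hands the webview the encoding directly:

      // Option 1: declare the encoding at the top of the generated markup.
      [html insertString:@"<meta http-equiv=\"Content-Type\" content=\"text/html; charset=utf-8\">"
                 atIndex:0];

      // Option 2: skip the temp file and state the encoding when loading.
      [self.webView loadData:[html dataUsingEncoding:NSUTF8StringEncoding]
                    MIMEType:@"text/html"
            textEncodingName:@"UTF-8"
                     baseURL:nil];

    If the garbling is already visible in responseString before any HTML is built, the place to look instead is ASIHTTPRequest's response encoding handling rather than the webview.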

    Read the article

  • How to get javascript object references or reference count?

    - by Tauren
    How to get reference count for an object Is it possible to determine if a javascript object has multiple references to it? Or if it has references besides the one I'm accessing it with? Or even just to get the reference count itself? Can I find this information from javascript itself, or will I need to keep track of my own reference counters? Obviously, there must be at least one reference to it for my code to access the object. But what I want to know is if there are any other references to it, or if my code is the only place it is accessed. I'd like to be able to delete the object if nothing else is referencing it. If you know the answer, there is no need to read the rest of this question. Below is just an example to make things more clear. Use Case In my application, I have a Repository object instance called contacts that contains an array of ALL my contacts. There are also multiple Collection object instances, such as a friends collection and a coworkers collection. Each collection contains an array with a different set of items from the contacts Repository. Sample Code To make this concept more concrete, consider the code below. Each instance of the Repository object contains a list of all items of a particular type. You might have a repository of Contacts and a separate repository of Events. To keep it simple, you can just get, add, and remove items, and add many via the constructor. var Repository = function(items) { this.items = items || []; } Repository.prototype.get = function(id) { for (var i=0,len=this.items.length; i<len; i++) { if (this.items[i].id === id) { return this.items[i]; } } } Repository.prototype.add = function(item) { if (Object.prototype.toString.call(item) === "[object Array]") { this.items = this.items.concat(item); } else { this.items.push(item); } } Repository.prototype.remove = function(id) { for (var i=0,len=this.items.length; i<len; i++) { if (this.items[i].id === id) { this.removeIndex(i); } } } Repository.prototype.removeIndex = function(index) { if (this.items[index]) { if (/* this.items[index] has no other references to it */) { // Only remove item from repository if nothing else references it this.items.splice(index,1); return; } } } Note the commented condition in removeIndex. I only want to remove the item from my master repository of objects if no other objects have a reference to the item. 
    Here's Collection: var Collection = function(repo,items) { this.repo = repo; this.items = items || []; } Collection.prototype.remove = function(id) { for (var i=0,len=this.items.length; i<len; i++) { if (this.items[i].id === id) { // Remove object from this collection this.items.splice(i,1); // Tell repo to remove it (only if no other references to it) this.repo.remove(id); return; } } } And then this code uses Repository and Collection: var contactRepo = new Repository([ {id: 1, name: "Joe"}, {id: 2, name: "Jane"}, {id: 3, name: "Tom"}, {id: 4, name: "Jack"}, {id: 5, name: "Sue"} ]); var friends = new Collection( contactRepo, [ contactRepo.get(2), contactRepo.get(4) ] ); var coworkers = new Collection( contactRepo, [ contactRepo.get(1), contactRepo.get(2), contactRepo.get(5) ] ); contactRepo.items; // contains item ids 1, 2, 3, 4, 5 friends.items; // contains item ids 2, 4 coworkers.items; // contains item ids 1, 2, 5 coworkers.remove(2); contactRepo.items; // contains item ids 1, 2, 3, 4, 5 friends.items; // contains item ids 2, 4 coworkers.items; // contains item ids 1, 5 friends.remove(4); contactRepo.items; // contains item ids 1, 2, 3, 5 friends.items; // contains item ids 2 coworkers.items; // contains item ids 1, 5 Notice how coworkers.remove(2) didn't remove id 2 from contactRepo? This is because it was still referenced from friends.items. However, friends.remove(4) causes id 4 to be removed from contactRepo, because no other collection is referring to it. Summary The above is what I want to do. I'm sure there are ways I can do this by keeping track of my own reference counters and such. But if there is a way to do it using javascript's built-in reference management, I'd like to hear about how to use it.
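    A minimal sketch of the manual-counter route mentioned in the summary, since JavaScript itself exposes no reference counts (the acquire/release names are made up for illustration):

      // Per-id reference counter grafted onto Repository; collections call
      // acquire(id) when they take an item and release(id) when they drop it.
      Repository.prototype.acquire = function(id) {
          this.refCounts = this.refCounts || {};
          this.refCounts[id] = (this.refCounts[id] || 0) + 1;
          return this.get(id);
      };

      Repository.prototype.release = function(id) {
          this.refCounts = this.refCounts || {};
          this.refCounts[id] = (this.refCounts[id] || 1) - 1;
          if (this.refCounts[id] <= 0) {
              delete this.refCounts[id];
              this.remove(id); // nothing holds it any more, drop it from the master list
          }
      };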

    Read the article

  • Fluent NHibernate/SQL Server 2008 insert query problem

    - by Mark
    Hi all, I'm new to Fluent NHibernate and I'm running into a problem. I have a mapping defined as follows: public PersonMapping() { Id(p => p.Id).GeneratedBy.HiLo("1000"); Map(p => p.FirstName).Not.Nullable().Length(50); Map(p => p.MiddleInitial).Nullable().Length(1); Map(p => p.LastName).Not.Nullable().Length(50); Map(p => p.Suffix).Nullable().Length(3); Map(p => p.SSN).Nullable().Length(11); Map(p => p.BirthDate).Nullable(); Map(p => p.CellPhone).Nullable().Length(12); Map(p => p.HomePhone).Nullable().Length(12); Map(p => p.WorkPhone).Nullable().Length(12); Map(p => p.OtherPhone).Nullable().Length(12); Map(p => p.EmailAddress).Nullable().Length(50); Map(p => p.DriversLicenseNumber).Nullable().Length(50); Component<Address>(p => p.CurrentAddress, m => { m.Map(p => p.Line1, "Line1").Length(50); m.Map(p => p.Line2, "Line2").Length(50); m.Map(p => p.City, "City").Length(50); m.Map(p => p.State, "State").Length(50); m.Map(p => p.Zip, "Zip").Length(2); }); Map(p => p.EyeColor).Nullable().Length(3); Map(p => p.HairColor).Nullable().Length(3); Map(p => p.Gender).Nullable().Length(1); Map(p => p.Height).Nullable(); Map(p => p.Weight).Nullable(); Map(p => p.Race).Nullable().Length(1); Map(p => p.SkinTone).Nullable().Length(3); HasMany(p => p.PriorAddresses).Cascade.All(); } public PreviousAddressMapping() { Table("PriorAddress"); Id(p => p.Id).GeneratedBy.HiLo("1000"); Map(p => p.EndEffectiveDate).Not.Nullable(); Component<Address>(p => p.Address, m => { m.Map(p => p.Line1, "Line1").Length(50); m.Map(p => p.Line2, "Line2").Length(50); m.Map(p => p.City, "City").Length(50); m.Map(p => p.State, "State").Length(50); m.Map(p => p.Zip, "Zip").Length(2); }); } My test is [Test] public void can_correctly_map_Person_with_Addresses() { var myPerson = new Person("Jane", "", "Doe"); var priorAddresses = new[] { new PreviousAddress(ObjectMother.GetAddress1(), DateTime.Parse("05/13/2010")), new PreviousAddress(ObjectMother.GetAddress2(), DateTime.Parse("05/20/2010")) }; new PersistenceSpecification<Person>(Session) .CheckProperty(c => c.FirstName, myPerson.FirstName) .CheckProperty(c => c.LastName, myPerson.LastName) .CheckProperty(c => c.MiddleInitial, myPerson.MiddleInitial) .CheckList(c => c.PriorAddresses, priorAddresses) .VerifyTheMappings(); } GetAddress1() (yeah, horrible name) has Line2 == null The tables seem to be created correctly in sql server 2008, but the test fails with a SQLException "String or binary data would be truncated." When I grab the sql statement in SQL Profiler, I get exec sp_executesql N'INSERT INTO PriorAddress (Line1, Line2, City, State, Zip, EndEffectiveDate, Id) VALUES (@p0, @p1, @p2, @p3, @p4, @p5, @p6)',N'@p0 nvarchar(18),@p1 nvarchar(4000),@p2 nvarchar(10),@p3 nvarchar(2),@p4 nvarchar(5),@p5 datetime,@p6 int',@p0=N'6789 Somewhere Rd.',@p1=NULL,@p2=N'Hot Coffee',@p3=N'MS',@p4=N'09876',@p5='2010-05-13 00:00:00',@p6=1001 Notice the @p1 parameter is being set to nvarchar(4000) and being passed a NULL value. Why is it setting the parameter to nvarchar(4000)? How can I fix it? Thanks!
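One likely culprit, going purely by the mapping and the test data above: Length(2) on the Zip component creates a two-character column, while the test addresses carry five-character zips such as '09876', which is exactly the kind of mismatch that produces "String or binary data would be truncated". A plausible fix is to widen the column in both component mappings:
// Educated guess, not a confirmed answer: widen Zip in both
// PersonMapping and PreviousAddressMapping.
m.Map(p => p.Zip, "Zip").Length(5); // or Length(10) if ZIP+4 formats are expected
The nvarchar(4000) on @p1 is probably a red herring: when a parameter value is null, ADO.NET cannot infer a size and falls back to the maximum, so that parameter is not the one overflowing.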

    Read the article

  • Nested property binding

    - by EtherealMonkey
    Recently, I have been trying to wrap my mind around BindingList<T> and INotifyPropertyChanged. More specifically - how do I make a collection of objects (having objects as properties) which will allow me to subscribe to events throughout the tree? To that end, I have examined the code offered as examples by others. One such project that I downloaded was Nested Property Binding - CodeProject by "seesharper". Now, the article explains the implementation, but there was a question by "Someone@AnotherWorld" about "INotifyPropertyChanged in nested objects". His question was: Hi, nice stuff! But after some time using your solution I realized the ObjectBindingSource ignores the PropertyChanged event of nested objects. E.g. I've got a class 'Foo' with two properties named 'Name' and 'Bar'. 'Name' is a string and 'Bar' references an instance of class 'Bar', which has a 'Name' property of type string too, and both classes implement INotifyPropertyChanged. With your binding source, reading and writing both properties ('Name' and 'Bar_Name') works fine, but the PropertyChanged event works only for the 'Name' property, because the binding source listens only for events of 'Foo'. One workaround is to retrigger the PropertyChanged event in the appropriate class (here 'Foo'). Which is very unclean! The other approach would be to extend ObjectBindingSource so that all owners of nested properties which implement INotifyPropertyChanged get used to receive changes, but how? Thanks! I had asked about BindingList<T> yesterday and received a good answer from Aaronaught. In my question, I had a similar point as "Someone@AnotherWorld": if Keywords were to implement INotifyPropertyChanged, would changes be accessible to the BindingList through the ScannedImage object? To which Aaronaught's response was: No, they will not. BindingList only looks at the specific object in the list, it has no ability to scan all dependencies and monitor everything in the graph (nor would that always be a good idea, if it were possible). I understand Aaronaught's comment regarding this behavior not necessarily being a good idea. Additionally, his suggestion to have my bound object "relay" events on behalf of its member objects works fine and is perfectly acceptable. For me, "re-triggering" the PropertyChanged event does not seem as unclean as "Someone@AnotherWorld" laments. I do understand why he protests - in the interest of loosely coupled objects. However, I believe that coupling between objects that are part of a composition is logical and not so undesirable as it may be in other scenarios. (I am a newb, so I could be waaayyy off base.)
Anyway, in the interest of exploring an answer to the question by "Someone@AnotherWorld", I altered the MainForm.cs file of the example project from Nested Property Binding - CodeProject by "seesharper" to the following: using System; using System.Collections.Generic; using System.ComponentModel; using System.Core.ComponentModel; using System.Windows.Forms; namespace ObjectBindingSourceDemo { public partial class MainForm : Form { private readonly List<Customer> _customers = new List<Customer>(); private readonly List<Product> _products = new List<Product>(); private List<Order> orders; public MainForm() { InitializeComponent(); dataGridView1.AutoGenerateColumns = false; dataGridView2.AutoGenerateColumns = false; CreateData(); } private void CreateData() { _customers.Add( new Customer(1, "Jane Wilson", new Address("98104", "6657 Sand Pointe Lane", "Seattle", "USA"))); _customers.Add( new Customer(1, "Bill Smith", new Address("94109", "5725 Glaze Drive", "San Francisco", "USA"))); _customers.Add( new Customer(1, "Samantha Brown", null)); _products.Add(new Product(1, "Keyboard", 49.99)); _products.Add(new Product(2, "Mouse", 10.99)); _products.Add(new Product(3, "PC", 599.99)); _products.Add(new Product(4, "Monitor", 299.99)); _products.Add(new Product(5, "LapTop", 799.99)); _products.Add(new Product(6, "Harddisc", 89.99)); customerBindingSource.DataSource = _customers; productBindingSource.DataSource = _products; orders = new List<Order>(); orders.Add(new Order(1, DateTime.Now, _customers[0])); orders.Add(new Order(2, DateTime.Now, _customers[1])); orders.Add(new Order(3, DateTime.Now, _customers[2])); #region Added by me OrderLine orderLine1 = new OrderLine(_products[0], 1); OrderLine orderLine2 = new OrderLine(_products[1], 3); orderLine1.PropertyChanged += new PropertyChangedEventHandler(OrderLineChanged); orderLine2.PropertyChanged += new PropertyChangedEventHandler(OrderLineChanged); orders[0].OrderLines.Add(orderLine1); orders[0].OrderLines.Add(orderLine2); #endregion // Removed by me in lieu of region above. //orders[0].OrderLines.Add(new OrderLine(_products[0], 1)); //orders[0].OrderLines.Add(new OrderLine(_products[1], 3)); ordersBindingSource.DataSource = orders; } #region Added by me // Have to wait until the form is Shown to wire up the events // for orderDetailsBindingSource. Otherwise, they are triggered // during MainForm().InitializeComponent(). private void MainForm_Shown(object sender, EventArgs e) { orderDetailsBindingSource.AddingNew += new AddingNewEventHandler(orderDetailsBindSrc_AddingNew); orderDetailsBindingSource.CurrentItemChanged += new EventHandler(orderDetailsBindSrc_CurrentItemChanged); orderDetailsBindingSource.ListChanged += new ListChangedEventHandler(orderDetailsBindSrc_ListChanged); } private void orderDetailsBindSrc_AddingNew( object sender, AddingNewEventArgs e) { } private void orderDetailsBindSrc_CurrentItemChanged( object sender, EventArgs e) { } private void orderDetailsBindSrc_ListChanged( object sender, ListChangedEventArgs e) { ObjectBindingSource bindingSource = (ObjectBindingSource)sender; if (!(bindingSource.Current == null)) { // Unsure if GetType().ToString() is required b/c ToString() // *seems* // to return the same value. if (bindingSource.Current.GetType().ToString() == "ObjectBindingSourceDemo.OrderLine") { if (e.ListChangedType == ListChangedType.ItemAdded) { // I wish that I knew of a way to determine // if the 'PropertyChanged' delegate assignment is null. // I don't like the current test, but it seems to work. 
if (orders[ordersBindingSource.Position].OrderLines[e.NewIndex].Product == null) { orders[ordersBindingSource.Position].OrderLines[e.NewIndex].PropertyChanged += new PropertyChangedEventHandler(OrderLineChanged); } } if (e.ListChangedType == ListChangedType.ItemDeleted) { // Will throw exception when leaving // an OrderLine row with uninitialized properties. // // I presume this is because the item // has already been 'disposed' of at this point. // *but* // Will it actually be released from memory // if the delegate assignment for PropertyChanged // was never removed??? if (orders[ordersBindingSource.Position].OrderLines[e.NewIndex].Product != null) { orders[ordersBindingSource.Position].OrderLines[e.NewIndex].PropertyChanged -= new PropertyChangedEventHandler(OrderLineChanged); } } } } } private void OrderLineChanged(object sender, PropertyChangedEventArgs e) { MessageBox.Show(e.PropertyName, "Property Changed:"); } #endregion } } In the method private void orderDetailsBindSrc_ListChanged(object sender, ListChangedEventArgs e) I am able to hook up the PropertyChangedEventHandler to the OrderLine object as it is being created. However, I cannot seem to find a way to unhook the PropertyChangedEventHandler from the OrderLine object before it is removed from the orders[i].OrderLines list. So, my questions are: Am I simply trying to do something that is very, very wrong here? Will the OrderLine objects that I add the delegate assignments to ever be released from memory if the assignment is not removed? Is there a "sane" method of achieving this scenario? Also, note that this question is not specifically related to my prior one. I have actually solved the issue which had prompted me to inquire before. However, I have reached a point with this particular topic of discovery where my curiosity has exceeded my patience - hopefully someone here can shed some light on this?
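Two of the closing questions have fairly definite answers. A .NET event subscription stores the delegate on the publisher (here the OrderLine), so a handler that is never unhooked does not keep the OrderLine alive; an unreachable OrderLine is still collectable, subscribers or not. As for a saner unhook strategy, one common pattern is a shadow set of wired items, so the handler can be detached before, or independently of, the ItemDeleted notification. A sketch follows; Wire, Unwire and _wired are made-up names, not part of the code above:
// Inside the form class: a shadow set of items whose PropertyChanged we subscribed to.
private readonly HashSet<OrderLine> _wired = new HashSet<OrderLine>();

private void Wire(OrderLine line)
{
    if (_wired.Add(line)) // Add() returns false if the item is already tracked
        line.PropertyChanged += OrderLineChanged;
}

private void Unwire(OrderLine line)
{
    if (_wired.Remove(line))
        line.PropertyChanged -= OrderLineChanged;
}
Because ListChangedType.ItemDeleted fires after the item has left the list, the shadow set is also the only reliable way to reach the removed instance: diffing _wired against the list's current contents on each ListChanged identifies exactly which OrderLine to Unwire.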

    Read the article

  • xslt cookbook example not working

    - by Liza dawson
    Hi I am working on this from xslt cookbook type my.xml <?xml version="1.0" encoding="UTF-8"?> <people> <person name="Al Zehtooney" age="33" sex="m" smoker="no"/> <person name="Brad York" age="38" sex="m" smoker="yes"/> <person name="Charles Xavier" age="32" sex="m" smoker="no"/> <person name="David Williams" age="33" sex="m" smoker="no"/> <person name="Edward Ulster" age="33" sex="m" smoker="yes"/> <person name="Frank Townsend" age="35" sex="m" smoker="no"/> <person name="Greg Sutter" age="40" sex="m" smoker="no"/> <person name="Harry Rogers" age="37" sex="m" smoker="no"/> <person name="John Quincy" age="43" sex="m" smoker="yes"/> <person name="Kent Peterson" age="31" sex="m" smoker="no"/> <person name="Larry Newell" age="23" sex="m" smoker="no"/> <person name="Max Milton" age="22" sex="m" smoker="no"/> <person name="Norman Lamagna" age="30" sex="m" smoker="no"/> <person name="Ollie Kensington" age="44" sex="m" smoker="no"/> <person name="John Frank" age="24" sex="m" smoker="no"/> <person name="Mary Williams" age="33" sex="f" smoker="no"/> <person name="Jane Frank" age="38" sex="f" smoker="yes"/> <person name="Jo Peterson" age="32" sex="f" smoker="no"/> <person name="Angie Frost" age="33" sex="f" smoker="no"/> <person name="Betty Bates" age="33" sex="f" smoker="no"/> <person name="Connie Date" age="35" sex="f" smoker="no"/> <person name="Donna Finster" age="20" sex="f" smoker="no"/> <person name="Esther Gates" age="37" sex="f" smoker="no"/> <person name="Fanny Hill" age="33" sex="f" smoker="yes"/> <person name="Geta Iota" age="27" sex="f" smoker="no"/> <person name="Hillary Johnson" age="22" sex="f" smoker="no"/> <person name="Ingrid Kent" age="21" sex="f" smoker="no"/> <person name="Jill Larson" age="20" sex="f" smoker="no"/> <person name="Kim Mulrooney" age="41" sex="f" smoker="no"/> <person name="Lisa Nevins" age="21" sex="f" smoker="no"/> </people> type generic-attr-to-csv.xslt <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:csv="http://www.ora.com/XSLTCookbook/namespaces/csv"> <xsl:param name="delimiter" select=" ',' "/> <xsl:output method="text" /> <xsl:strip-space elements="*"/> <xsl:template match="/"> <xsl:for-each select="$columns"> <xsl:value-of select="@name"/> <xsl:if test="position( ) != last( )"> <xsl:value-of select="$delimiter/> </xsl:if> </xsl:for-each> <xsl:text>&#xa;</xsl:text> <xsl:apply-templates/> </xsl:template> <xsl:template match="/*/*"> <xsl:variable name="row" select="."/> <xsl:for-each select="$columns"> <xsl:apply-templates select="$row/@*[local-name(.)=current( )/@attr]" mode="csv:map-value"/> <xsl:if test="position( ) != last( )"> <xsl:value-of select="$delimiter"/> </xsl:if> </xsl:for-each> <xsl:text>&#xa;</xsl:text> </xsl:template> <xsl:template match="@*" mode="map-value"> <xsl:value-of select="."/> </xsl:template> </xsl:stylesheet> type my.xsl <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:csv="http://www.ora.com/XSLTCookbook/namespaces/csv"> <xsl:import href="generic-attr-to-csv.xslt"/> <!--Defines the mapping from attributes to columns --> <xsl:variable name="columns" select="document('')/*/csv:column"/> <csv:column name="Name" attr="name"/> <csv:column name="Age" attr="age"/> <csv:column name="Gender" attr="sex"/> <csv:column name="Smoker" attr="smoker"/> <!-- Handle custom attribute mappings --> <xsl:template match="@sex" mode="csv:map-value"> <xsl:choose> <xsl:when test=".='m'">male</xsl:when> <xsl:when test=".='f'">female</xsl:when> <xsl:otherwise>error</xsl:otherwise> 
</xsl:choose> </xsl:template> </xsl:stylesheet> using the apache xalan parser D:\Test>java org.apache.xalan.xslt.Process -in my.xml -xsl my.xsl -out my.csv [Fatal Error] generic-attr-to-csv.xslt:15:6: The value of attribute "select" associated with an element type "xsl:value-of" must not contain the '<' character. file:///D:/Test/generic-attr-to-csv.xslt; Line #15; Column #6; org.xml.sax.SAXParseException: The value of attribute "select" associated with an element type "xsl:value-of" must not contain the '<' character. java.lang.NullPointerException at org.apache.xalan.transformer.TransformerImpl.createSerializationHandler(TransformerImpl.java:1171) at org.apache.xalan.transformer.TransformerImpl.createSerializationHandler(TransformerImpl.java:1060) at org.apache.xalan.transformer.TransformerImpl.transform(TransformerImpl.java:1268) at org.apache.xalan.transformer.TransformerImpl.transform(TransformerImpl.java:1251) at org.apache.xalan.xslt.Process.main(Process.java:1048) Exception in thread "main" java.lang.RuntimeException at org.apache.xalan.xslt.Process.doExit(Process.java:1155) at org.apache.xalan.xslt.Process.main(Process.java:1128) Any ideas what I am doing wrong?
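The Xalan error is literal: on line 15 of generic-attr-to-csv.xslt the select attribute is missing its closing quote, so the parser runs on into the next '<'. The fix is a single character:
<!-- broken: closing quote missing -->
<xsl:value-of select="$delimiter/>
<!-- fixed -->
<xsl:value-of select="$delimiter"/>
One more thing worth checking once it parses, judging from the two stylesheets side by side: the fallback template is declared with mode="map-value" while the apply-templates calls use mode="csv:map-value". XSLT's built-in rules will still copy attribute values in any mode, so the output may come through regardless, but the cookbook presumably intended the two modes to match.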

    Read the article

  • Where am I going wrong in my Xml Schema?

    - by chobo2
    Hi, I am trying to make an XML Schema, but every time I use it and try to validate my data I get an error. I get this error: Validation of the XML Document failed! Error message(s): Could not find schema information for the element 'Email'. Line: 1 Column:1213 http://www.xmlforasp.net/SchemaValidator.aspx My XML file I am trying to validate: <?xml version="1.0" encoding="utf-8" ?> <School> <SchoolPrefix>BCIT</SchoolPrefix> <TeacherAccounts> <Account> <StudentNumber>A00140000</StudentNumber> <Password>123456</Password> <Email>[email protected]</Email> </Account> <Account> <StudentNumber>A00000041</StudentNumber> <Password>1234567</Password> <Email>[email protected]</Email> </Account> <Account> <StudentNumber>A0400100</StudentNumber> <Password>1234567</Password> <Email>[email protected]</Email> </Account> </TeacherAccounts> <FullTimeAccounts> <Account> <StudentNumber>A00000000</StudentNumber> <Password>1234567</Password> <Email>[email protected]</Email> </Account> <Account> <StudentNumber>A00141000</StudentNumber> <Password>1234567</Password> <Email>[email protected]</Email> </Account> </FullTimeAccounts> <PartTimeAccounts> <Account> <StudentNumber>A81020409</StudentNumber> <Password>1234567</Password> <Email>[email protected]</Email> </Account> <Account> <StudentNumber>A040014000</StudentNumber> <Password>1234567</Password> <Email>[email protected]</Email> </Account> <Account> <StudentNumber>A00024040</StudentNumber> <Password>1234567</Password> <Email>[email protected]</Email> </Account> <Account> <StudentNumber>A00004101</StudentNumber> <Password>1234567</Password> <Email>[email protected]</Email> </Account> </PartTimeAccounts> </School> XML Schema: <?xml version="1.0" encoding="utf-8"?> <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" targetNamespace="http://www.nothing.com" xmlns="http://www.nothing.com" elementFormDefault="qualified"> <xs:element name="School"> <xs:complexType> <xs:sequence> <xs:element name="SchoolPrefix" minOccurs="1" maxOccurs="1"> <xs:simpleType> <xs:restriction base="xs:string"> <xs:minLength value="2" /> <xs:maxLength value="8" /> </xs:restriction> </xs:simpleType> </xs:element> <xs:element name="TeacherAccounts" minOccurs="1" maxOccurs="1"> <xs:complexType> <xs:sequence> <xs:element name="Account" type="UserInfo" minOccurs="0" maxOccurs="unbounded" /> </xs:sequence> </xs:complexType> </xs:element> <xs:element name="FullTimeAccounts"> <xs:complexType> <xs:sequence> <xs:element name="Account" type="UserInfo" minOccurs="0" maxOccurs="unbounded"/> </xs:sequence> </xs:complexType> </xs:element> <xs:element name="PartTimeAccounts"> <xs:complexType> <xs:sequence> <xs:element name="Account" type="UserInfo" minOccurs="0" maxOccurs="unbounded" /> </xs:sequence> </xs:complexType> </xs:element> </xs:sequence> </xs:complexType> </xs:element> <xs:complexType name="UserInfo"> <xs:sequence> <xs:element name="StudentNumber"> <xs:simpleType> <xs:restriction base="xs:string"> <xs:minLength value="1"/> <xs:maxLength value="50"/> </xs:restriction> </xs:simpleType> </xs:element> <xs:element name="Password"> <xs:simpleType> <xs:restriction base="xs:string"> <xs:minLength value="6"/> <xs:maxLength value="50"/> </xs:restriction> </xs:simpleType> </xs:element> <xs:element name="Email"> <xs:simpleType> <xs:restriction base="xs:string"> <xs:pattern value="\w+([-+.']\w+)*@\w+([-.]\w+)*\.\w+([-.]\w+)*"/> </xs:restriction> </xs:simpleType> </xs:element> </xs:sequence> </xs:complexType> </xs:schema>
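"Could not find schema information" means the validator could not associate the instance elements with the schema at all. The schema declares targetNamespace="http://www.nothing.com" (with elementFormDefault="qualified"), but the instance document's elements are in no namespace. The most direct fix is to put the instance into that namespace:
<?xml version="1.0" encoding="utf-8" ?>
<School xmlns="http://www.nothing.com">
  <!-- everything else stays the same -->
</School>
The alternative is to remove targetNamespace and the default xmlns from xs:schema so that it describes no-namespace documents; either way, the instance and the schema have to agree.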

    Read the article

  • Capturing and Transforming ASP.NET Output with Response.Filter

    - by Rick Strahl
    During one of my Handlers and Modules sessions at DevConnections this week one of the attendees asked a question that I didn’t have an immediate answer for. Basically he wanted to capture response output completely and then apply some filtering to the output – effectively injecting some additional content into the page AFTER the page had completely rendered. Specifically the output should be captured from anywhere – not just a page – and have this code injected into the page. Some time ago I posted some code that allows you to capture ASP.NET Page output by overriding the Render() method, capturing the HtmlTextWriter() and reading its content, modifying the rendered data as text and then writing it back out. I’ve actually used this approach on a few occasions and it works fine for ASP.NET pages. But this obviously won’t work outside of the Page class environment and it’s not really generic – you have to create a custom page class in order to handle the output capture. [updated 11/16/2009 – updated ResponseFilterStream implementation and a few additional notes based on comments] Enter Response.Filter However, ASP.NET includes a Response.Filter which can be used – well, to filter output. Basically Response.Filter is a stream through which the OutputStream is piped back to the Web Server (indirectly). As content is written into the Response object, the filter stream receives the appropriate Stream commands like Write, Flush and Close, as well as read operations, although for a Response.Filter those are uncommon. The Response.Filter can be programmatically replaced at runtime, which allows you to effectively intercept all output generation that runs through ASP.NET. A common Example: Dynamic GZip Encoding A rather common use of Response.Filter is hooking up code-based, dynamic GZip compression for requests, which is dead simple: apply a GZipStream (or DeflateStream) to Response.Filter. The following generic routines can be used very easily to detect GZip capability of the client and compress response output with a single line of code and a couple of library helper routines: WebUtils.GZipEncodePage(); which is handled with a few lines of reusable code and a couple of static helper methods: /// <summary> /// Sets up the current page or handler to use GZip through a Response.Filter /// IMPORTANT: /// You have to call this method before any output is generated!
/// </summary> public static void GZipEncodePage() { HttpResponse Response = HttpContext.Current.Response; if (IsGZipSupported()) { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (AcceptEncoding.Contains("deflate")) { Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.AppendHeader("Content-Encoding", "deflate"); } else { Response.Filter = new System.IO.Compression.GZipStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.AppendHeader("Content-Encoding", "gzip"); } } // Allow proxy servers to cache encoded and unencoded versions separately Response.AppendHeader("Vary", "Content-Encoding"); } /// <summary> /// Determines if GZip is supported /// </summary> /// <returns></returns> public static bool IsGZipSupported() { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (!string.IsNullOrEmpty(AcceptEncoding) && (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate"))) return true; return false; } GZipStream and DeflateStream are streams that are assigned to Response.Filter and by doing so apply the appropriate compression on the active Response. Response.Filter content is chunked So implementing a Response.Filter effectively requires only that you implement a custom stream and handle the Write() method to capture Response output as it’s written. At first blush this seems very simple – you capture the output in Write, transform it and write out the transformed content in one pass. And that indeed works for small amounts of content. But you see, the problem is that output is written in small buffer chunks (a little less than 16k it appears) rather than just a single Write() statement into the stream, which makes perfect sense for ASP.NET to stream data back to IIS in smaller chunks to minimize memory usage en route. Unfortunately this also makes it more difficult to implement any filtering routines, since you don’t directly get access to all of the response content, which is problematic especially if those filtering routines require you to look at the ENTIRE response in order to transform or capture the output, as is needed for the solution the gentleman in my session asked for. So in order to address this a slightly different approach is required, one that basically captures all the Write() buffers passed in into a cached stream and then makes the stream available only when it’s complete and ready to be flushed. As I was thinking about the implementation I also started thinking about the few instances when I’ve used Response.Filter implementations. Each time I had to create a new Stream subclass and create my custom functionality, but in the end each implementation did the same thing – capturing output and transforming it. I thought there should be an easier way to do this by creating a re-usable Stream class that can handle stream transformations that are common to Response.Filter implementations. Creating a semi-generic Response Filter Stream Class What I ended up with is a ResponseFilterStream class that provides a handful of Events that allow you to capture and/or transform Response content.
The class implements a subclass of Stream and then overrides Write() and Flush() to handle capturing and transformation operations. By exposing events it’s easy to hook up capture or transformation operations via single focused methods. ResponseFilterStream exposes the following events: CaptureStream, CaptureString Captures the output only and provides either a MemoryStream or String with the final page output. Capture is hooked to the Flush() operation of the stream. TransformStream, TransformString Allows you to transform the complete response output with events that receive a MemoryStream or String respectively and let you modify the output, then return it back as a return value. The transformed output is then written back out in a single chunk to the response output stream. These events capture all output internally first, then write the entire buffer into the response. TransformWrite, TransformWriteString Allows you to transform the Response data as it is written in its original chunk size in the Stream’s Write() method. Unlike TransformStream/TransformString which operate on the complete output, these events only see the current chunk of data written. This is more efficient as there’s no caching involved, but can cause problems due to searched content splitting over multiple chunks. Using this implementation, creating a custom Response.Filter transformation becomes as simple as the following code. To hook up the Response.Filter using the MemoryStream version event: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformStream += filter_TransformStream; Response.Filter = filter; and the event handler to do the transformation: MemoryStream filter_TransformStream(MemoryStream ms) { Encoding encoding = HttpContext.Current.Response.ContentEncoding; string output = encoding.GetString(ms.ToArray()); output = FixPaths(output); ms = new MemoryStream(output.Length); byte[] buffer = encoding.GetBytes(output); ms.Write(buffer,0,buffer.Length); return ms; } private string FixPaths(string output) { string path = HttpContext.Current.Request.ApplicationPath; // override root path wonkiness if (path == "/") path = ""; output = output.Replace("\"~/", "\"" + path + "/").Replace("'~/", "'" + path + "/"); return output; } The idea of the event handler is that you can do whatever you want to the stream and return back a stream – either the same one that’s been modified or a brand new one – which is then sent back as the final response. The above code can be simplified even more by using the string version events which handle the stream to string conversions for you: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformString += filter_TransformString; Response.Filter = filter; and the event handler to do the transformation calling the same FixPaths method shown above: string filter_TransformString(string output) { return FixPaths(output); } The events for capturing output and capturing and transforming chunks work in a very similar way. By using events to handle the transformations ResponseFilterStream becomes a reusable component and we don’t have to create a new stream class or subclass an existing Stream-based class. By the way, the example used here is kind of a cool trick which transforms “~/” expressions inside of the final generated HTML output – even in plain HTML markup, not just server controls – and transforms them into the appropriate application relative path in the same way that ResolveUrl would do.
So you can write plain old HTML like this: <a href="~/default.aspx">Home</a> and have it turned into: <a href="/myVirtual/default.aspx">Home</a> without having to use an ASP.NET control like Hyperlink or Image or having to constantly use: <img src="<%= ResolveUrl("~/images/home.gif") %>" /> in MVC applications (which frankly is one of the most annoying things about MVC, especially given the path hell that extension-less and endpoint-less URLs impose). I can’t take credit for this idea. While discussing the Response.Filter issues on Twitter, a hint from Dylan Beattie pointed me at one of his examples, which does something similar. I thought the idea was cool enough to use as an example for future demos of Response.Filter functionality in ASP.NET next time I do the Modules and Handlers talk (which was great fun BTW). How practical this is is debatable, however, since there’s definitely some overhead to using a Response.Filter in general, and especially on one that caches the output and then re-writes it later. Make sure to test for performance anytime you use a Response.Filter hookup and make sure it doesn’t end up killing perf on you. You’ve been warned :-}. How does ResponseFilterStream work? The big win of this implementation IMHO is that it’s a reusable component – so for implementation there’s no new class, no subclassing – you simply attach to an event to implement an event handler method with a straightforward signature to retrieve the stream or string you’re interested in. The implementation is based on a subclass of Stream as is required in order to handle the Response.Filter requirements. What’s different from other implementations I’ve seen in various places is that it supports capturing output as a whole to allow retrieving the full response output for capture or modification. The exceptions are the TransformWrite and TransformWriteString events, which operate only on the active chunk of data written by the Response. For captured output, the Write() method captures output into an internal MemoryStream that is cached until writing is complete. So Write() is called when ASP.NET writes to the Response stream, but the filter doesn’t pass on the Write immediately to the filter’s internal stream. The data is cached and only when the Flush() method is called to finalize the Stream’s output do we actually send the cached stream off for transformation (if the events are hooked up) and THEN finally write out the returned content in one big chunk. Here’s the implementation of ResponseFilterStream: /// <summary> /// A semi-generic Stream implementation for Response.Filter with /// an event interface for handling Content transformations via /// Stream or String. /// <remarks> /// Use with care for large output as this implementation copies /// the output into a memory stream and so increases memory usage.
/// </remarks> /// </summary> public class ResponseFilterStream : Stream { /// <summary> /// The original stream /// </summary> Stream _stream; /// <summary> /// Current position in the original stream /// </summary> long _position; /// <summary> /// Stream that original content is read into /// and then passed to TransformStream function /// </summary> MemoryStream _cacheStream = new MemoryStream(5000); /// <summary> /// Internal pointer that keeps track of the size /// of the cacheStream /// </summary> int _cachePointer = 0; /// <summary> /// /// </summary> /// <param name="responseStream"></param> public ResponseFilterStream(Stream responseStream) { _stream = responseStream; } /// <summary> /// Determines whether the stream is captured /// </summary> private bool IsCaptured { get { if (CaptureStream != null || CaptureString != null || TransformStream != null || TransformString != null) return true; return false; } } /// <summary> /// Determines whether the Write method is outputting data immediately /// or delaying output until Flush() is fired. /// </summary> private bool IsOutputDelayed { get { if (TransformStream != null || TransformString != null) return true; return false; } } /// <summary> /// Event that captures Response output and makes it available /// as a MemoryStream instance. Output is captured but won't /// affect Response output. /// </summary> public event Action<MemoryStream> CaptureStream; /// <summary> /// Event that captures Response output and makes it available /// as a string. Output is captured but won't affect Response output. /// </summary> public event Action<string> CaptureString; /// <summary> /// Event that allows you to transform the stream as each chunk of /// the output is written in the Write() operation of the stream. /// This means that it's possible/likely that the input /// buffer will not contain the full response output but only /// one of potentially many chunks. /// /// This event is called as part of the filter stream's Write() /// operation. /// </summary> public event Func<byte[], byte[]> TransformWrite; /// <summary> /// Event that allows you to transform the response stream as /// each chunk of byte[] output is written during the stream's write /// operation. This means it's possible/likely that the string /// passed to the handler only contains a portion of the full /// output. Typical buffer chunks are around 16k a piece. /// /// This event is called as part of the stream's Write operation. /// </summary> public event Func<string, string> TransformWriteString; /// <summary> /// This event allows capturing and transformation of the entire /// output stream by caching all write operations and delaying final /// response output until Flush() is called on the stream. /// </summary> public event Func<MemoryStream, MemoryStream> TransformStream; /// <summary> /// Event that can be hooked up to handle Response.Filter /// Transformation. Passed a string that you can modify and /// return back as a return value. The modified content /// will become the final output.
/// </summary> public event Func<string, string> TransformString; protected virtual void OnCaptureStream(MemoryStream ms) { if (CaptureStream != null) CaptureStream(ms); } private void OnCaptureStringInternal(MemoryStream ms) { if (CaptureString != null) { string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray()); OnCaptureString(content); } } protected virtual void OnCaptureString(string output) { if (CaptureString != null) CaptureString(output); } protected virtual byte[] OnTransformWrite(byte[] buffer) { if (TransformWrite != null) return TransformWrite(buffer); return buffer; } private byte[] OnTransformWriteStringInternal(byte[] buffer) { Encoding encoding = HttpContext.Current.Response.ContentEncoding; string output = OnTransformWriteString(encoding.GetString(buffer)); return encoding.GetBytes(output); } private string OnTransformWriteString(string value) { if (TransformWriteString != null) return TransformWriteString(value); return value; } protected virtual MemoryStream OnTransformCompleteStream(MemoryStream ms) { if (TransformStream != null) return TransformStream(ms); return ms; } /// <summary> /// Allows transforming of strings /// /// Note this handler is internal and not meant to be overridden /// as the TransformString event has to be hooked up in order /// for this handler to even fire to avoid the overhead of string /// conversion on every pass through. /// </summary> /// <param name="responseText"></param> /// <returns></returns> private string OnTransformCompleteString(string responseText) { if (TransformString != null) responseText = TransformString(responseText); return responseText; } /// <summary> /// Wrapper method for OnTransformString that handles /// stream to string and vice versa conversions /// </summary> /// <param name="ms"></param> /// <returns></returns> internal MemoryStream OnTransformCompleteStringInternal(MemoryStream ms) { if (TransformString == null) return ms; //string content = ms.GetAsString(); string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray()); content = TransformString(content); byte[] buffer = HttpContext.Current.Response.ContentEncoding.GetBytes(content); ms = new MemoryStream(); ms.Write(buffer, 0, buffer.Length); //ms.WriteString(content); return ms; } /// <summary> /// /// </summary> public override bool CanRead { get { return true; } } public override bool CanSeek { get { return true; } } /// <summary> /// /// </summary> public override bool CanWrite { get { return true; } } /// <summary> /// /// </summary> public override long Length { get { return 0; } } /// <summary> /// /// </summary> public override long Position { get { return _position; } set { _position = value; } } /// <summary> /// /// </summary> /// <param name="offset"></param> /// <param name="direction"></param> /// <returns></returns> public override long Seek(long offset, System.IO.SeekOrigin direction) { return _stream.Seek(offset, direction); } /// <summary> /// /// </summary> /// <param name="length"></param> public override void SetLength(long length) { _stream.SetLength(length); } /// <summary> /// /// </summary> public override void Close() { _stream.Close(); } /// <summary> /// Override flush by writing out the cached stream data /// </summary> public override void Flush() { if (IsCaptured && _cacheStream.Length > 0) { // Check for transform implementations _cacheStream = OnTransformCompleteStream(_cacheStream); _cacheStream = OnTransformCompleteStringInternal(_cacheStream); OnCaptureStream(_cacheStream);
OnCaptureStringInternal(_cacheStream); // write the stream back out if output was delayed if (IsOutputDelayed) _stream.Write(_cacheStream.ToArray(), 0, (int)_cacheStream.Length); // Clear the cache once we've written it out _cacheStream.SetLength(0); } // default flush behavior _stream.Flush(); } /// <summary> /// /// </summary> /// <param name="buffer"></param> /// <param name="offset"></param> /// <param name="count"></param> /// <returns></returns> public override int Read(byte[] buffer, int offset, int count) { return _stream.Read(buffer, offset, count); } /// <summary> /// Overridden to capture output written by ASP.NET into a cached /// stream that is written out later when Flush() /// is called. /// </summary> /// <param name="buffer"></param> /// <param name="offset"></param> /// <param name="count"></param> public override void Write(byte[] buffer, int offset, int count) { if (IsCaptured) { // copy to holding buffer only - we'll write out later _cacheStream.Write(buffer, 0, count); _cachePointer += count; } // just transform this buffer if (TransformWrite != null) buffer = OnTransformWrite(buffer); if (TransformWriteString != null) buffer = OnTransformWriteStringInternal(buffer); if (!IsOutputDelayed) _stream.Write(buffer, offset, buffer.Length); } } The key features are the events and corresponding OnXXX methods that handle the event hookups, and the Write() and Flush() methods of the stream implementation. All the rest of the members tend to be plain jane passthrough stream implementation code without much consequence. I do love the way Action<T> and Func<T> make it so easy to create the event signatures for the various events – sweet. A few Things to consider Performance Response.Filter is not great for performance in general as it adds another layer of indirection to the ASP.NET output pipeline, and this implementation in particular adds a memory hit as it basically duplicates the response output into the cached memory stream, which is necessary since you may have to look at the entire response. If you have large pages in particular this can cause potentially serious memory pressure in your server application. So be careful of wholesale adoption of this (or other) Response.Filters. Make sure to do some performance testing to ensure it’s not killing your app’s performance. Response.Filter works everywhere A few questions came up in comments and discussion as to capturing ALL output hitting the site and – yes you can definitely do that by assigning a Response.Filter inside of a module. If you do this however you’ll want to be very careful and decide which content you actually want to capture, especially in IIS 7 which passes ALL content – including static images/CSS etc. – through the ASP.NET pipeline. So it is important to filter only on what you’re looking for – like the page extension or, maybe more effectively, the Response.ContentType.
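As an aside, here is a minimal sketch (mine, not from the article) of what that module-based hookup might look like, filtering on Response.ContentType as suggested; ResponseFilterModule is a made-up name, and PostReleaseRequestState is one reasonable point to attach a filter, not the only one:
using System;
using System.Web;

public class ResponseFilterModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        // Attach late enough that the handler has been selected and
        // Response.ContentType is known.
        app.PostReleaseRequestState += (sender, e) =>
        {
            HttpResponse response = ((HttpApplication)sender).Response;
            // Only filter HTML; skip images, CSS and other static content
            if (response.ContentType == "text/html")
            {
                ResponseFilterStream filter = new ResponseFilterStream(response.Filter);
                filter.TransformString += output => output; // your transformation here
                response.Filter = filter;
            }
        };
    }

    public void Dispose() { }
}
The module still needs the usual <modules> registration in web.config before it runs.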
Response.Filter Chaining Originally I thought that filter chaining doesn’t work at all due to a bug in the stream implementation code. But it’s quite possible to assign multiple filters to the Response.Filter property. So the following actually works to both compress the output and apply the transformed content: WebUtils.GZipEncodePage(); ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformString += filter_TransformString; Response.Filter = filter; However, the following does not work, resulting in invalid content-encoding errors: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformString += filter_TransformString; Response.Filter = filter; WebUtils.GZipEncodePage(); In other words, multiple Response filters can work together, but it depends entirely on the implementation whether they can be chained, or in which order they can be chained. In this case the GZip/Deflate stream filter apparently relies on the original content length of the output and chokes when the content is modified. But if the compression is attached first it works fine, as unintuitive as that may seem. Resources Download example code Capture Output from ASP.NET Pages © Rick Strahl, West Wind Technologies, 2005-2010 Posted in ASP.NET

    Read the article

  • Storing unique values into an array and comparing against a loop - PHP

    - by Aphex22
    I'm writing a PHP report which is designed to be exported purely as a CSV file, using comma delimiters. There are three columns relating to product_id; these three columns are as follows:
SKU       Parent / Child   Parent SKU
12345     parent           12345
12345_1   child            12345
12345_2   child            12345
12345_3   child            12345
12345_4   child            12345
18099     parent           18099
18099_1   child            18099
Here's a link to the full CSV file: http://i.imgur.com/XELufRd.png At the moment the code looks like this: $sql = "select * from product WHERE on_amazon = 'on' AND active = 'on'"; $result = mysql_query($sql) or die ( mysql_error() );?> <? // set headers echo " Type, SKU, Parent / Child, Parent SKU, Product name, Manufacturer name, Gender, Product_description, Product price, Discount price, Quantity, Category, Photo 1, Photo 2, Photo 3, Photo 4, Photo 5, Photo 6, Photo 7, Photo 8, Color id, Color name, Size name <br> "; // load all stock while ($line = mysql_fetch_assoc($result) ) { ?> <?php // Loop through each possible size variation to see whether any of the quantity column has stock > 0 $con_size = array (35,355,36,37,375,38,385,39,395,40,405,41,415,42,425,43,435,44,445,45,455,46,465,47,475,48,485); $arrlength=count($con_size); for($x=0;$x<$arrlength;$x++) { // check if size is available if($line['quantity_c_size_'.$con_size[$x].'_chain'] > 0 ) { ?> <? echo 'Shoes'; ?>, <?=$line['product_id']?>, , , <?=$line['title']?>, <? $brand = $line['jys_brand']; echo ucfirst($brand); ?>, <? $gender = $line['category']; if ($gender == 'Mens') { echo 'H'; } else{ echo 'F'; } ?>, <?=preg_replace('/[^\da-z]/i', ' ', $line['amazon_desc']) ?>, <?=$line['price']?>, <?=$line['price']?>, <?=$line['quantity_c_size_'.$con_size[$x].'_chain']?>, <? $category = $line['style1']; switch ($category) { case "ankle-boots": echo "10013"; break; case "knee-high-boots": echo "10011"; break; case "high-heel-boots": echo "10033"; break; case "low-heel-boots": echo "10014"; break; case "wedge-boots": echo "10014"; break; case "western-boots": echo "10032"; break; case "flat-shoes": echo "10034"; break; case "high-heel-shoes": echo "10039"; break; case "low-heel-shoes": echo "10039"; break; case "wedge-shoes": echo "10035"; break; case "ballerina-shoes": echo "10008"; break; case "boat-shoes": echo "10018"; break; case "loafer-shoes": echo "10037"; break; case "work-shoes": echo "10039"; break; case "flat-sandals": echo "10041"; break; case "low-heel-sandals": echo "10042"; break; case "high-heel-sandals": echo "10042"; break; case "wedge-sandals": echo "10042"; break; case "mule-sandals": echo "10038"; break; case "mary-jane-shoes": echo "10039"; break; case "sports-shoes": echo "10026"; break; case "court-shoes": echo "10035"; break; case "peep-toe-shoes": echo "10035"; break; case "flat-boots": echo "10609"; break; case "mid-calf-boots": echo "10014"; break; case "trainer-shoes": echo "10009"; break; case "wellington-boots": echo "10012"; break; case "lace-up-boots": echo "10609"; break; case "chelsea-and-jodphur-boots": echo "10609"; break; case "desert-and-chukka-boots": echo "10032"; break; case "lace-up-shoes": echo "10034"; break; case "slip-on-shoes": echo "10043"; break; case "gibson-and-derby-shoes": echo "10039"; break; case "oxford-shoes": echo "10039"; break; case "brogue-shoes": echo "10039"; break; case "winter-boots": echo "10021"; break; case "slipper-shoes": echo "10016"; break; case "mid-heel-shoes": echo "10039"; break; case "sandals-and-beach-shoes": echo "10044"; break; case "mid-heel-sandals": echo "10042"; break; case "mid-heel-boots": echo
"10014"; break; default: echo ""; } ?>, http://www.getashoe.co.uk/full/<?=$line['product_id']?>_1.jpg, http://www.getashoe.co.uk/full/<?=$line['product_id']?>_2.jpg, http://www.getashoe.co.uk/full/<?=$line['product_id']?>_3.jpg, http://www.getashoe.co.uk/full/<?=$line['product_id']?>_4.jpg, , , , , <? $colour = preg_replace('/[^\da-z]/i', ' ', $line['colour']); if( preg_match( '/white.*/i', $colour)) { echo '1'; } elseif( preg_match( '/yellow.*/i', $colour)) { echo '4'; } elseif( preg_match( '/orange.*/i', $colour)) { echo '7'; } elseif( preg_match( '/red.*/i', $colour)) { echo '8'; } elseif( preg_match( '/pink.*/i', $colour)) { echo '13'; } elseif( preg_match( '/purple.*/i', $colour)) { echo '15'; } elseif( preg_match( '/blue.*/i', $colour)) { echo '19'; } elseif( preg_match( '/green.*/i', $colour)) { echo '25'; } elseif( preg_match( '/brown.*/i', $colour)) { echo '28'; } elseif( preg_match( '/grey.*/i', $colour)) { echo '35'; } elseif( preg_match( '/black.*/i', $colour)) { echo '38'; } elseif( preg_match( '/gold.*/i', $colour)) { echo '41'; } elseif( preg_match( '/silver.*/i', $colour)) { echo '46'; } elseif( preg_match( '/multi.*/i', $colour)) { echo '594'; } elseif( preg_match( '/beige.*/i', $colour)) { echo '6887'; } elseif( preg_match( '/nude.*/i', $colour)) { echo '6887'; } else { echo '534'; } ?>, <?=$line['colour']?>, <?=$con_size[$x]?> <br> <? // finish checking if size is available } } ?> So at the moment this is simply echoing out the product_ID into the SKU column. The code would need to enter the product_id into an array and check whether it is unique. If the product_id is unique to the array, then the product_id is echoed out unaltered, and parent is echoed out to the 'Parent/Child' column and then the product_id is repeated to the 'Parent SKU' column. However, if the array is checked and the product_id already exists in the array, then the product_id is echoed out to the 'SKU' column with a suffix i.e. _1. Then child is echoed to the 'Parent / Child' column and the original parent product_id echoed to the 'Parent SKU' column. HOWEVER - the same SKU cannot be repeated with the same suffix i.e. 12345_1, 12345_1 - so presumably there would be to be another array for the suffixed SKUs to be checked against. If anybody could help, it would be great. Thanks --- UPDATE ANSWER --- I managed to solved this myself and thought I would share my solution for future reference. /* * Array to collect product_ids and check whether unique. * If unique product_id becomes parent SKU * If not product_id becomes child of previous parent and suffixed with _1, _2 etc... */ if (!in_array($line['product_id'], $SKU)) { $SKU[] = $line['product_id']; $parent = $line['product_id']; $a = 0; ?> <? echo 'Shoes'; ?>, <? echo $parent; ?>, <? echo "Parent"; ?>, <? echo $parent; ?>, <? } else { $child = $line['product_id'] . "_" . $a; ?> <? echo 'Shoes'; ?>, <? echo $child; ?>, <? echo "Child"; ?>, <? echo $child; <? // increment suffix value for child SKU $a++; ?>

    Read the article
