Search Results

Search found 5770 results on 231 pages for 'sense hofstede'.


  • Error 2013: Lost connection to MySQL server during query when executing CHECK TABLE FOR UPGRADE

    - by Dean Richardson
    I just upgraded Ubuntu from 11.10 to 12.04. My Rails app now returns the (Passenger) error "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111) (Mysql2::Error)". I get a similar error when I try to access MySQL at the command line on my Ubuntu server using mysql -u root -p. I have mysql-server 5.5 installed. I've checked, and MySQL is not running. When I try to restart it, it fails. Here are some key lines from the tail of /var/log/syslog after an attempted restart:

        dean@dgwjasonfried:/etc/mysql$ tail -f /var/log/syslog
        Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: Looking for 'mysqlcheck' as: /usr/bin/mysqlcheck
        Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock'
        Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock'
        Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: /usr/bin/mysqlcheck: Got error: 2013: Lost connection to MySQL server during query when executing 'CHECK TABLE ... FOR UPGRADE'
        Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: FATAL ERROR: Upgrade failed
        Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: molex_app_development.assets OK
        Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: molex_app_development.ecd_types OK
        Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5124]: Checking for insecure root accounts.
        Mar 7 08:55:27 dgwjasonfried kernel: [ 7551.769657] init: mysql main process (5064) terminated with status 1
        Mar 7 08:55:27 dgwjasonfried kernel: [ 7551.769697] init: mysql respawning too fast, stopped

    Here is most of /etc/mysql/my.cnf:

        # Remember to edit /etc/mysql/debian.cnf when changing the socket location.
        [client]
        port = 3306
        socket = /var/run/mysqld/mysqld.sock
        # Here is entries for some specific programs
        # The following values assume you have at least 32M ram
        # This was formally known as [safe_mysqld]. Both versions are currently parsed.
        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice = 0
        [mysqld]
        # Basic Settings
        user = mysql
        pid-file = /var/run/mysqld/mysqld.pid
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        lc-messages-dir = /usr/share/mysql
        skip-external-locking
        # Instead of skip-networking the default is now to listen only on localhost
        # which is more compatible and is not less secure.
        bind-address = 127.0.0.1

    And here are the permissions for /var/run/mysqld/mysqld.sock:

        srwxrwxrwx 1 mysql mysql 0 Mar 7 09:18 mysqld.sock

    I'd be grateful for any suggestions the community might have. I reviewed the related questions here and attempted some of the fixes offered, but to no avail. Thanks! Dean Richardson

    Update: Thanks to quanta's suggestion, I looked at the /var/log/mysql/error.log file. I found error messages relating to pointers, fatal signals, and more that I really couldn't make much sense of. I also found references to the MySQL man pages, however. One suggested that I try starting mysqld with the --innodb_force_recovery=# option, then attempt to dump (or drop) the offending/corrupted database or table.
    I worked through the escalating option levels one by one (innodb_force_recovery=1, innodb_force_recovery=2, etc.). This allowed me to successfully run mysql -u root -p from the command line and execute several commands. I was able to run queries on my production database, but any attempt to query, dump, or even drop my development database raised an error and led to me losing the connection to MySQL. So I've made progress, but until I'm somehow able to drop or repair my development db, I'm still unable to get my app to load. Any further advice or suggestions? Thanks! Dean

    Update: Right after retrying sudo mysqld --innodb_force_recovery=1 from the command line, the error.log file shows this:

        130308 4:55:39 [Note] Plugin 'FEDERATED' is disabled.
        130308 4:55:39 InnoDB: The InnoDB memory heap is disabled
        130308 4:55:39 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        130308 4:55:39 InnoDB: Compressed tables use zlib 1.2.3.4
        130308 4:55:39 InnoDB: Initializing buffer pool, size = 128.0M
        130308 4:55:39 InnoDB: Completed initialization of buffer pool
        130308 4:55:39 InnoDB: highest supported file format is Barracuda.
        InnoDB: The log sequence number in ibdata files does not match
        InnoDB: the log sequence number in the ib_logfiles!
        130308 4:55:39 InnoDB: Database was not shut down normally!
        InnoDB: Starting crash recovery.
        InnoDB: Reading tablespace information from the .ibd files...
        InnoDB: Restoring possible half-written data pages from the doublewrite
        InnoDB: buffer...
        130308 4:55:40 InnoDB: Waiting for the background threads to start
        130308 4:55:41 InnoDB: 1.1.8 started; log sequence number 10259220
        130308 4:55:41 InnoDB: !!! innodb_force_recovery is set to 1 !!!
        130308 4:55:41 [Note] Server hostname (bind-address): '127.0.0.1'; port: 3306
        130308 4:55:41 [Note] - '127.0.0.1' resolves to '127.0.0.1';
        130308 4:55:41 [Note] Server socket created on IP: '127.0.0.1'.
        130308 4:55:41 [Note] Event Scheduler: Loaded 0 events
        130308 4:55:41 [Note] mysqld: ready for connections.
        Version: '5.5.29-0ubuntu0.12.04.2' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu)

    Then after mysql -u root -p and:

        mysql> drop database molex_app_development;
        ERROR 2013 (HY000): Lost connection to MySQL server during query

    the error.log contains:

        dean@dgwjasonfried:/var/log/mysql$ tail -f error.log
        /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f6a3ff9ecbd]
        Trying to get some variables.
        Some pointers may be invalid and cause the dump to abort.
        Query (7f6a1c004bd8): is an invalid pointer
        Connection ID (thread ID): 1
        Status: NOT_KILLED
        The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
        information that should help you find out what is causing the crash.
        [the startup and crash-recovery messages from 4:55:39 through "ready for connections" repeat as above]
        130308 4:58:23 [ERROR] Incorrect definition of table mysql.proc: expected column 'comment' at position 15 to have type text, found type char(64).
        130308 4:58:23 InnoDB: Assertion failure in thread 140168992810752 in file fsp0fsp.c line 3639
        InnoDB: We intentionally generate a memory trap.
        InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
        InnoDB: If you get repeated assertion failures or crashes, even
        InnoDB: immediately after the mysqld startup, there may be
        InnoDB: corruption in the InnoDB tablespace. Please refer to
        InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html
        InnoDB: about forcing recovery.
        10:58:23 UTC - mysqld got signal 6 ;
        This could be because you hit a bug. It is also possible that this binary
        or one of the libraries it was linked against is corrupt, improperly built,
        or misconfigured. This error can also be caused by malfunctioning hardware.
        We will try our best to scrape up some info that will hopefully help
        diagnose the problem, but since we have already crashed, something is
        definitely wrong and this may fail.
        key_buffer_size=16777216
        read_buffer_size=131072
        max_used_connections=1
        max_threads=151
        thread_count=1
        connection_count=1
        It is possible that mysqld could use up to
        key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 346681 K bytes of memory
        Hope that's ok; if not, decrease some variables in the equation.
        Thread pointer: 0x7f7ba4f6c2f0
        Attempting backtrace. You can use the following information to find out
        where mysqld died. If you see no messages after this, something went
        terribly wrong...
        stack_bottom = 7f7ba3065e60 thread_stack 0x30000
        mysqld(my_print_stacktrace+0x29)[0x7f7ba3609039]
        mysqld(handle_fatal_signal+0x483)[0x7f7ba34cf9c3]
        /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x7f7ba2220cb0]
        /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35)[0x7f7ba188c425]
        /lib/x86_64-linux-gnu/libc.so.6(abort+0x17b)[0x7f7ba188fb8b]
        mysqld(+0x65e0fc)[0x7f7ba37160fc]
        mysqld(+0x602be6)[0x7f7ba36babe6]
        mysqld(+0x635006)[0x7f7ba36ed006]
        mysqld(+0x5d7072)[0x7f7ba368f072]
        mysqld(+0x5d7b9c)[0x7f7ba368fb9c]
        mysqld(+0x6a3348)[0x7f7ba375b348]
        mysqld(+0x6a3887)[0x7f7ba375b887]
        mysqld(+0x5c6a86)[0x7f7ba367ea86]
        mysqld(+0x5ae3a7)[0x7f7ba36663a7]
        mysqld(_Z15ha_delete_tableP3THDP10handlertonPKcS4_S4_b+0x16d)[0x7f7ba34d3ffd]
        mysqld(_Z23mysql_rm_table_no_locksP3THDP10TABLE_LISTbbbb+0x568)[0x7f7ba3417f78]
        mysqld(_Z11mysql_rm_dbP3THDPcbb+0x8aa)[0x7f7ba339780a]
        mysqld(_Z21mysql_execute_commandP3THD+0x394c)[0x7f7ba33b886c]
        mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x10f)[0x7f7ba33bb28f]
        mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x1380)[0x7f7ba33bc6e0]
        mysqld(_Z24do_handle_one_connectionP3THD+0x1bd)[0x7f7ba346119d]
        mysqld(handle_one_connection+0x50)[0x7f7ba3461200]
        /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f7ba2218e9a]
        /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f7ba1949cbd]
        Trying to get some variables.
        Some pointers may be invalid and cause the dump to abort.
        Query (7f7b7c004b60): is an invalid pointer
        Connection ID (thread ID): 1
        Status: NOT_KILLED
        The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
        information that should help you find out what is causing the crash.

    --Dean
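    A hedged sketch of the recovery path those man pages describe (mysqldump and mysql_upgrade are standard MySQL tooling; the config values are only an illustration, not a verified fix for this crash). Note too that the "Incorrect definition of table mysql.proc" error in the log is the classic sign that mysql_upgrade never completed after the 11.10-to-12.04 release upgrade:

        # /etc/mysql/my.cnf -- run the server in conservative recovery mode
        [mysqld]
        innodb_force_recovery = 1    # raise one level at a time only if startup still fails

        # with the server up, salvage whatever is still readable before dropping anything
        mysqldump -u root -p --all-databases > all_databases_backup.sql

        # once the server stays up, repair the system tables the release upgrade left behind
        mysql_upgrade -u root -p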


  • MVC2 EditorTemplate for DropDownList

    - by tschreck
    I've spent the majority of the past week knee-deep in the new templating functionality baked into MVC2. I had a hard time trying to get a DropDownList template working. The biggest problem I've been working to solve is how to get the source data for the drop-down list to the template. I saw a lot of examples where you put the source data in the ViewData dictionary (ViewData["DropDownSourceValuesKey"]) and then retrieve it in the template itself (var sourceValues = ViewData["DropDownSourceValuesKey"];). This works, but I did not like having a silly string as the linchpin for making it work. Below is an approach I've come up with; I wanted to get opinions on it. Here are my design goals:

    - The view model should contain the source data for the drop-down list
    - Limit silly strings
    - Do not use the ViewData dictionary
    - The controller is responsible for filling the property with the source data for the drop-down list

    Here's my view model:

        public class CustomerViewModel
        {
            [ScaffoldColumn(false)]
            public String CustomerCode { get; set; }

            [UIHint("DropDownList")]
            [DropDownList(DropDownListTargetProperty = "CustomerCode")]
            [DisplayName("Customer Code")]
            public IEnumerable<SelectListItem> CustomerCodeList { get; set; }

            public String FirstName { get; set; }
            public String LastName { get; set; }
            public String PhoneNumber { get; set; }
            public String Address1 { get; set; }
            public String Address2 { get; set; }
            public String City { get; set; }
            public String State { get; set; }
            public String Zip { get; set; }
        }

    My view model has a CustomerCode property, which is the value the user selects from a list of values. I have a CustomerCodeList property that is the list of possible CustomerCode values and is the source for a drop-down list. I've created a DropDownList attribute with a DropDownListTargetProperty. DropDownListTargetProperty points to the property that will be populated based on the user's selection from the generated drop-down (in this case, the CustomerCode property). Notice that the CustomerCode property has [ScaffoldColumn(false)], which forces the generator to skip the field in the generated output. My DropDownList.ascx file will generate a drop-down list form element with the source data from the CustomerCodeList property. The generated drop-down list will use the value of the DropDownListTargetProperty from the DropDownList attribute as the Id and Name attributes of the select form element. So the generated code will look like this:

        <select id="CustomerCode" name="CustomerCode"> <option>... </select>

    This works out great because when the form is submitted, MVC will populate the target property with the selected value from the drop-down list, since the name of the generated drop-down list IS the target property. I kind of visualize the CustomerCodeList property as an extension of sorts of the CustomerCode property: I've coupled the source data to the property.
    Here's my code for the controller:

        public ActionResult Create()
        {
            // retrieve CustomerCodes from a data source of your choosing
            List<CustomerCode> customerCodeList = modelService.GetCustomerCodeList();
            CustomerViewModel viewModel = new CustomerViewModel();
            viewModel.CustomerCodeList = customerCodeList
                .Select(s => new SelectListItem()
                {
                    Text = s.CustomerCode,
                    Value = s.CustomerCode,
                    Selected = (s.CustomerCode == viewModel.CustomerCode)
                }).AsEnumerable();
            return View(viewModel);
        }

    Here's my code for the DropDownListAttribute:

        namespace AutoForm.Attributes
        {
            public class DropDownListAttribute : Attribute
            {
                public String DropDownListTargetProperty { get; set; }
            }
        }

    Here's my code for the template (DropDownList.ascx):

        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<IEnumerable<SelectListItem>>" %>
        <%@ Import Namespace="AutoForm.Attributes"%>
        <script runat="server">
            DropDownListAttribute GetDropDownListAttribute()
            {
                var dropDownListAttribute = new DropDownListAttribute();
                if (ViewData.ModelMetadata.AdditionalValues.ContainsKey("DropDownListAttribute"))
                {
                    dropDownListAttribute = (DropDownListAttribute)ViewData.ModelMetadata.AdditionalValues["DropDownListAttribute"];
                }
                return dropDownListAttribute;
            }
        </script>
        <% DropDownListAttribute attribute = GetDropDownListAttribute(); %>
        <select id="<%= attribute.DropDownListTargetProperty %>" name="<%= attribute.DropDownListTargetProperty %>">
            <% foreach (SelectListItem item in ViewData.Model) { %>
                <% if (item.Selected == true) { %>
                    <option value="<%= item.Value %>" selected="true"><%= item.Text %></option>
                <% } else { %>
                    <option value="<%= item.Value %>"><%= item.Text %></option>
                <% } %>
            <% } %>
        </select>

    I tried using the Html.DropDownList helper, but it would not let me change the Id and Name attributes of the generated select element. NOTE: you have to override the CreateMetadata method of the DataAnnotationsModelMetadataProvider for the DropDownListAttribute. Here's the code for that:

        public class MetadataProvider : DataAnnotationsModelMetadataProvider
        {
            protected override ModelMetadata CreateMetadata(IEnumerable<Attribute> attributes, Type containerType, Func<object> modelAccessor, Type modelType, string propertyName)
            {
                var metadata = base.CreateMetadata(attributes, containerType, modelAccessor, modelType, propertyName);
                var additionalValues = attributes.OfType<DropDownListAttribute>().FirstOrDefault();
                if (additionalValues != null)
                {
                    metadata.AdditionalValues.Add("DropDownListAttribute", additionalValues);
                }
                return metadata;
            }
        }

    Then you have to register the new MetadataProvider in Application_Start of Global.asax.cs:

        protected void Application_Start()
        {
            RegisterRoutes(RouteTable.Routes);
            ModelMetadataProviders.Current = new MetadataProvider();
        }

    Well, I hope this makes sense and that this approach may save you some time. I'd like some feedback on this approach, please. Is there a better approach?
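    One possible explanation for the fight with Html.DropDownList, with a sketch (untested, and resting on MVC2's templated-helper behavior): inside an editor template, helpers prepend ViewData.TemplateInfo.HtmlFieldPrefix (here "CustomerCodeList") to the generated name, so the built-in helper can never emit a bare "CustomerCode". Clearing the prefix first would let the stock helper replace the hand-rolled select loop in DropDownList.ascx:

        <% DropDownListAttribute attribute = GetDropDownListAttribute();
           // Templated helpers prefix names with TemplateInfo.HtmlFieldPrefix;
           // blank it so the select is named after the target property instead.
           ViewData.TemplateInfo.HtmlFieldPrefix = ""; %>
        <%= Html.DropDownList(attribute.DropDownListTargetProperty, Model) %>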


  • JPA : optimize EJB-QL query involving large many-to-many join table

    - by Fabien
    Hi all. I'm using Hibernate Entity Manager 3.4.0.GA with Spring 2.5.6 and MySQL 5.1. I have a use case where an entity called Artifact has a reflexive many-to-many relation with itself, and the join table is quite large (1 million lines). As a result, the HQL query performed by one of the methods in my DAO takes a long time. Any advice on how to optimize this and still use HQL? Or do I have no choice but to switch to a native SQL query that would perform a join between the table ARTIFACT and the join table ARTIFACT_DEPENDENCIES? Here is the problematic query performed in the DAO:

        @SuppressWarnings("unchecked")
        public List<Artifact> findDependentArtifacts(Artifact artifact) {
            Query query = em.createQuery("select a from Artifact a where :artifact in elements(a.dependencies)");
            query.setParameter("artifact", artifact);
            List<Artifact> list = query.getResultList();
            return list;
        }

    And the code for the Artifact entity:

        package com.acme.dependencytool.persistence.model;

        import java.util.ArrayList;
        import java.util.List;
        import javax.persistence.CascadeType;
        import javax.persistence.Column;
        import javax.persistence.Entity;
        import javax.persistence.FetchType;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;
        import javax.persistence.JoinColumn;
        import javax.persistence.JoinTable;
        import javax.persistence.ManyToMany;
        import javax.persistence.Table;
        import javax.persistence.UniqueConstraint;

        @Entity
        @Table(name = "ARTIFACT", uniqueConstraints = {@UniqueConstraint(columnNames = {"GROUP_ID", "ARTIFACT_ID", "VERSION"})})
        public class Artifact {

            @Id
            @GeneratedValue
            @Column(name = "ID")
            private Long id = null;

            @Column(name = "GROUP_ID", length = 255, nullable = false)
            private String groupId;

            @Column(name = "ARTIFACT_ID", length = 255, nullable = false)
            private String artifactId;

            @Column(name = "VERSION", length = 255, nullable = false)
            private String version;

            @ManyToMany(cascade = CascadeType.ALL, fetch = FetchType.EAGER)
            @JoinTable(
                name = "ARTIFACT_DEPENDENCIES",
                joinColumns = @JoinColumn(name = "ARTIFACT_ID", referencedColumnName = "ID"),
                inverseJoinColumns = @JoinColumn(name = "DEPENDENCY_ID", referencedColumnName = "ID")
            )
            private List<Artifact> dependencies = new ArrayList<Artifact>();

            public Long getId() { return id; }
            public void setId(Long id) { this.id = id; }
            public String getGroupId() { return groupId; }
            public void setGroupId(String groupId) { this.groupId = groupId; }
            public String getArtifactId() { return artifactId; }
            public void setArtifactId(String artifactId) { this.artifactId = artifactId; }
            public String getVersion() { return version; }
            public void setVersion(String version) { this.version = version; }
            public List<Artifact> getDependencies() { return dependencies; }
            public void setDependencies(List<Artifact> dependencies) { this.dependencies = dependencies; }
        }

    Thanks in advance.

    EDIT 1: The DDLs are generated automatically by Hibernate EntityManager based on the JPA annotations in the Artifact entity. I have no explicit control over the automatically generated join table, and the JPA annotations don't let me explicitly set an index on a column of a table that does not correspond to an actual Entity (in the JPA sense). So I guess the indexing of table ARTIFACT_DEPENDENCIES is left to the DB, MySQL in my case, which apparently uses a composite index based on both columns but doesn't index the column that is most relevant in my query (DEPENDENCY_ID).
        mysql> describe ARTIFACT_DEPENDENCIES;
        +---------------+------------+------+-----+---------+-------+
        | Field         | Type       | Null | Key | Default | Extra |
        +---------------+------------+------+-----+---------+-------+
        | ARTIFACT_ID   | bigint(20) | NO   | MUL | NULL    |       |
        | DEPENDENCY_ID | bigint(20) | NO   | MUL | NULL    |       |
        +---------------+------------+------+-----+---------+-------+

    EDIT 2: When turning on showSql in the Hibernate session, I see many occurrences of the same type of SQL query, as below:

        select dependenci0_.ARTIFACT_ID as ARTIFACT1_1_, dependenci0_.DEPENDENCY_ID as DEPENDENCY2_1_,
               artifact1_.ID as ID1_0_, artifact1_.ARTIFACT_ID as ARTIFACT2_1_0_,
               artifact1_.GROUP_ID as GROUP3_1_0_, artifact1_.VERSION as VERSION1_0_
        from ARTIFACT_DEPENDENCIES dependenci0_
        left outer join ARTIFACT artifact1_ on dependenci0_.DEPENDENCY_ID=artifact1_.ID
        where dependenci0_.ARTIFACT_ID=?

    Here's what EXPLAIN in MySQL says about this type of query:

        mysql> explain select dependenci0_.ARTIFACT_ID as ARTIFACT1_1_, dependenci0_.DEPENDENCY_ID as DEPENDENCY2_1_, artifact1_.ID as ID1_0_, artifact1_.ARTIFACT_ID as ARTIFACT2_1_0_, artifact1_.GROUP_ID as GROUP3_1_0_, artifact1_.VERSION as VERSION1_0_ from ARTIFACT_DEPENDENCIES dependenci0_ left outer join ARTIFACT artifact1_ on dependenci0_.DEPENDENCY_ID=artifact1_.ID where dependenci0_.ARTIFACT_ID=1;
        +----+-------------+--------------+--------+-------------------+-------------------+---------+---------------------------------------------+------+-------+
        | id | select_type | table        | type   | possible_keys     | key               | key_len | ref                                         | rows | Extra |
        +----+-------------+--------------+--------+-------------------+-------------------+---------+---------------------------------------------+------+-------+
        |  1 | SIMPLE      | dependenci0_ | ref    | FKEA2DE763364D466 | FKEA2DE763364D466 | 8       | const                                       |  159 |       |
        |  1 | SIMPLE      | artifact1_   | eq_ref | PRIMARY           | PRIMARY           | 8       | dependencytooldb.dependenci0_.DEPENDENCY_ID |    1 |       |
        +----+-------------+--------------+--------+-------------------+-------------------+---------+---------------------------------------------+------+-------+

    EDIT 3: I tried setting the FetchType to LAZY in the JoinTable annotation, but I then get the following exception:

        Hibernate: select artifact0_.ID as ID1_, artifact0_.ARTIFACT_ID as ARTIFACT2_1_, artifact0_.GROUP_ID as GROUP3_1_, artifact0_.VERSION as VERSION1_ from ARTIFACT artifact0_ where artifact0_.GROUP_ID=? and artifact0_.ARTIFACT_ID=?
        51545 [btpool0-2] ERROR org.hibernate.LazyInitializationException - failed to lazily initialize a collection of role: com.acme.dependencytool.persistence.model.Artifact.dependencies, no session or session was closed
        org.hibernate.LazyInitializationException: failed to lazily initialize a collection of role: com.acme.dependencytool.persistence.model.Artifact.dependencies, no session or session was closed
            at org.hibernate.collection.AbstractPersistentCollection.throwLazyInitializationException(AbstractPersistentCollection.java:380)
            at org.hibernate.collection.AbstractPersistentCollection.throwLazyInitializationExceptionIfNotConnected(AbstractPersistentCollection.java:372)
            at org.hibernate.collection.AbstractPersistentCollection.readSize(AbstractPersistentCollection.java:119)
            at org.hibernate.collection.PersistentBag.size(PersistentBag.java:248)
            at com.acme.dependencytool.server.DependencyToolServiceImpl.createArtifactViewBean(DependencyToolServiceImpl.java:93)
            at com.acme.dependencytool.server.DependencyToolServiceImpl.createArtifactViewBean(DependencyToolServiceImpl.java:109)
            at com.acme.dependencytool.server.DependencyToolServiceImpl.search(DependencyToolServiceImpl.java:48)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at com.google.gwt.user.server.rpc.RPC.invokeAndEncodeResponse(RPC.java:527)
            at com.google.gwt.user.server.rpc.RemoteServiceServlet.processCall(RemoteServiceServlet.java:166)
            at com.google.gwt.user.server.rpc.RemoteServiceServlet.doPost(RemoteServiceServlet.java:86)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
            at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:487)
            at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:362)
            at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
            at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
            at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:729)
            at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405)
            at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
            at org.mortbay.jetty.handler.RequestLogHandler.handle(RequestLogHandler.java:49)
            at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
            at org.mortbay.jetty.Server.handle(Server.java:324)
            at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:505)
            at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:843)
            at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:647)
            at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:205)
            at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:380)
            at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:395)
            at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:488)
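    A hedged sketch using the same names as the question (untested against this schema): the "in elements(...)" form typically compiles to a correlated subquery, while an explicit join through the collection lets MySQL drive directly off the join table; and since JPA 1 annotations can't declare the missing index, a one-off native DDL statement can add it:

        @SuppressWarnings("unchecked")
        public List<Artifact> findDependentArtifacts(Artifact artifact) {
            // join through the collection instead of "in elements(...)"
            Query query = em.createQuery(
                "select a from Artifact a join a.dependencies d where d = :artifact");
            query.setParameter("artifact", artifact);
            return query.getResultList();
        }

        -- one-off DDL outside JPA; the generated schema only indexes ARTIFACT_ID
        CREATE INDEX IDX_ARTIFACT_DEP_DEPENDENCY ON ARTIFACT_DEPENDENCIES (DEPENDENCY_ID);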


  • Large Object Heap Fragmentation

    - by Paul Ruane
    The C#/.NET application I am working on is suffering from a slow memory leak. I have used CDB with SOS to try to determine what is happening, but the data does not seem to make any sense, so I was hoping one of you may have experienced this before. The application is running on the 64-bit framework. It is continuously calculating and serialising data to a remote host and is hitting the Large Object Heap (LOH) a fair bit. However, most of the LOH objects I expect to be transient: once the calculation is complete and has been sent to the remote host, the memory should be freed. What I am seeing, however, is a large number of (live) object arrays interleaved with free blocks of memory, e.g., taking a random segment from the LOH:

        0:000> !DumpHeap 000000005b5b1000 000000006351da10
                 Address               MT     Size
        ...
        000000005d4f92e0 0000064280c7c970 16147872
        000000005e45f880 00000000001661d0  1901752 Free
        000000005e62fd38 00000642788d8ba8     1056 <--
        000000005e630158 00000000001661d0  5988848 Free
        000000005ebe6348 00000642788d8ba8     1056
        000000005ebe6768 00000000001661d0  6481336 Free
        000000005f214d20 00000642788d8ba8     1056
        000000005f215140 00000000001661d0  7346016 Free
        000000005f9168a0 00000642788d8ba8     1056
        000000005f916cc0 00000000001661d0  7611648 Free
        00000000600591c0 00000642788d8ba8     1056
        00000000600595e0 00000000001661d0   264808 Free
        ...

    Obviously I would expect this to be the case if my application were creating long-lived, large objects during each calculation. (It does do this, and I accept there will be a degree of LOH fragmentation, but that is not the problem here.) The problem is the very small (1056-byte) object arrays you can see in the above dump, which I cannot see in code being created and which are remaining rooted somehow. Also note that CDB is not reporting the type when the heap segment is dumped: I am not sure if this is related or not. If I dump the marked (<--) object, CDB/SOS reports it fine:

        0:015> !DumpObj 000000005e62fd38
        Name: System.Object[]
        MethodTable: 00000642788d8ba8
        EEClass: 00000642789d7660
        Size: 1056(0x420) bytes
        Array: Rank 1, Number of elements 128, Type CLASS
        Element Type: System.Object
        Fields: None

    The elements of the object array are all strings, and the strings are recognisable as from our application code. Also, I am unable to find their GC roots, as the !GCRoot command hangs and never comes back (I have even tried leaving it overnight). So, I would very much appreciate it if anyone could shed any light as to why these small (<85k) object arrays are ending up on the LOH: what situations will .NET put a small object array in there? Also, does anyone happen to know of an alternative way of ascertaining the roots of these objects? Thanks in advance.

    Update 1: Another theory I came up with late yesterday is that these object arrays started out large but have been shrunk, leaving the blocks of free memory that are evident in the memory dumps. What makes me suspicious is that the object arrays always appear to be 1056 bytes long (128 elements): 128 * 8 for the references and 32 bytes of overhead. The idea is that perhaps some unsafe code in a library or in the CLR is corrupting the number-of-elements field in the array header. Bit of a long shot, I know...

    Update 2: Thanks to Brian Rasmussen (see accepted answer), the problem has been identified as fragmentation of the LOH caused by the string intern table!
    I wrote a quick test application to confirm this:

        static void Main()
        {
            const int ITERATIONS = 100000;

            for (int index = 0; index < ITERATIONS; ++index)
            {
                string str = "NonInterned" + index;
                Console.Out.WriteLine(str);
            }

            Console.Out.WriteLine("Continue.");
            Console.In.ReadLine();

            for (int index = 0; index < ITERATIONS; ++index)
            {
                string str = string.Intern("Interned" + index);
                Console.Out.WriteLine(str);
            }

            Console.Out.WriteLine("Continue?");
            Console.In.ReadLine();
        }

    The application first creates and dereferences unique strings in a loop. This is just to prove that the memory does not leak in this scenario. Obviously it should not, and it does not. In the second loop, unique strings are created and interned. This action roots them in the intern table. What I did not realise is how the intern table is represented. It appears it consists of a set of pages -- object arrays of 128 string elements -- that are created in the LOH. This is more evident in CDB/SOS:

        0:000> .loadby sos mscorwks
        0:000> !EEHeap -gc
        Number of GC Heaps: 1
        generation 0 starts at 0x00f7a9b0
        generation 1 starts at 0x00e79c3c
        generation 2 starts at 0x00b21000
        ephemeral segment allocation context: none
        segment    begin    allocated    size
        00b20000 00b21000  010029bc 0x004e19bc(5118396)
        Large object heap starts at 0x01b21000
        segment    begin    allocated    size
        01b20000 01b21000  01b8ade0 0x00069de0(433632)
        Total Size  0x54b79c(5552028)
        ------------------------------
        GC Heap Size  0x54b79c(5552028)

    Taking a dump of the LOH segment reveals the pattern I saw in the leaking application:

        0:000> !DumpHeap 01b21000 01b8ade0
        ...
        01b8a120 793040bc      528
        01b8a330 00175e88       16 Free
        01b8a340 793040bc      528
        01b8a550 00175e88       16 Free
        01b8a560 793040bc      528
        01b8a770 00175e88       16 Free
        01b8a780 793040bc      528
        01b8a990 00175e88       16 Free
        01b8a9a0 793040bc      528
        01b8abb0 00175e88       16 Free
        01b8abc0 793040bc      528
        01b8add0 00175e88       16 Free
        total 1568 objects
        Statistics:
              MT    Count    TotalSize Class Name
        00175e88      784        12544      Free
        793040bc      784       421088 System.Object[]
        Total 1568 objects

    Note that the object array size is 528 (rather than 1056) because my workstation is 32-bit and the application server is 64-bit. The object arrays are still 128 elements long. So the moral of this story is to be very careful interning. If the string you are interning is not known to be a member of a finite set, then your application will leak due to fragmentation of the LOH, at least in version 2 of the CLR. In our application's case, there is general code in the deserialisation code path that interns entity identifiers during unmarshalling: I now strongly suspect this is the culprit. However, the developer's intentions were obviously good, as they wanted to make sure that if the same entity is deserialised multiple times then only one instance of the identifier string will be maintained in memory.
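    If the goal is still "one instance per identifier" without rooting pages in the intern table, a minimal sketch of an application-owned pool (a hypothetical helper, not from the original post) keeps the de-duplication but bounds the lifetime to the pool object itself:

        using System.Collections.Generic;

        // Hypothetical alternative to string.Intern: identifiers still collapse
        // to a single retained instance, but the pool can be scoped or dropped,
        // so nothing is permanently rooted in the CLR's LOH-resident intern table.
        sealed class StringPool
        {
            private readonly Dictionary<string, string> _pool =
                new Dictionary<string, string>();

            public string GetOrAdd(string value)
            {
                string existing;
                if (_pool.TryGetValue(value, out existing))
                    return existing;          // reuse the one retained copy
                _pool.Add(value, value);
                return value;
            }
        }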


  • MVC 2 Entity Framework View Model Insert

    - by cannibalcorpse
    This is driving me crazy. Hopefully my question makes sense... I'm using MVC 2 and Entity Framework 1 and am trying to insert a new record with two navigation properties. I have a SQL table, Categories, that has a lookup table CategoryTypes and another self-referencing lookup, CategoryParent. EF makes two nav properties on my Category model: one called Parent and another called CategoryType, both instances of their respective models. On the view that creates the new Category, I have two dropdowns, one for the CategoryType and another for the parent Category. When I try to insert the new Category WITHOUT the parent Category, which allows nulls, everything is fine. As soon as I add the parent Category, the insert fails and, oddly (or so I think), complains about the CategoryType in the form of this error:

        0 related 'CategoryTypes' were found. 1 'CategoryTypes' is expected.

    When I step through, I can verify that both ID properties coming in on the action method parameter are correct. I can also verify that when I go to the db to get the CategoryType and parent Category with the IDs, the records are being pulled fine. Yet it fails on SaveChanges(). All that I can see is that my CategoryParent DropDownListFor in my view is somehow causing the insert to bomb. Please see my comments in my HttpPost Create action method. My view model looks like this:

        public class EditModel
        {
            public Category MainCategory { get; set; }
            public IEnumerable<CategoryType> CategoryTypesList { get; set; }
            public IEnumerable<Category> ParentCategoriesList { get; set; }
        }

    My Create action methods look like this:

        // GET: /Categories/Create
        public ActionResult Create()
        {
            return View(new EditModel()
            {
                CategoryTypesList = _db.CategoryTypeSet.ToList(),
                ParentCategoriesList = _db.CategorySet.ToList()
            });
        }

        // POST: /Categories/Create
        [HttpPost]
        public ActionResult Create(Category mainCategory)
        {
            if (!ModelState.IsValid)
                return View(new EditModel()
                {
                    MainCategory = mainCategory,
                    CategoryTypesList = _db.CategoryTypeSet.ToList(),
                    ParentCategoriesList = _db.CategorySet.ToList()
                });

            mainCategory.CategoryType = _db.CategoryTypeSet.First(ct => ct.Id == mainCategory.CategoryType.Id);

            // This db call DOES get the correct Category, but fails on _db.SaveChanges().
            // Oddly the error is related to CategoryTypes and not Category.
            // Entities in 'DbEntities.CategorySet' participate in the 'FK_Categories_CategoryTypes' relationship.
            // 0 related 'CategoryTypes' were found. 1 'CategoryTypes' is expected.
            //mainCategory.Parent = _db.CategorySet.First(c => c.Id == mainCategory.Parent.Id);

            // If I just use the literal ID of the same Category,
            // AND comment out the CategoryParent DropDownListFor in the view, all is fine.
            mainCategory.Parent = _db.CategorySet.First(c => c.Id == 2);

            _db.AddToCategorySet(mainCategory);
            _db.SaveChanges();
            return RedirectToAction("Index");
        }

    Here is my Create form on the view:

        <% using (Html.BeginForm()) {%>
            <%= Html.ValidationSummary(true) %>
            <fieldset>
                <legend>Fields</legend>
                <div>
                    <%= Html.LabelFor(model => model.MainCategory.Parent.Id) %>
                    <%= Html.DropDownListFor(model => model.MainCategory.Parent.Id, new SelectList(Model.ParentCategoriesList, "Id", "Name")) %>
                    <%= Html.ValidationMessageFor(model => model.MainCategory.Parent.Id) %>
                </div>
                <div>
                    <%= Html.LabelFor(model => model.MainCategory.CategoryType.Id) %>
                    <%= Html.DropDownListFor(model => model.MainCategory.CategoryType.Id, new SelectList(Model.CategoryTypesList, "Id", "Name"))%>
                    <%= Html.ValidationMessageFor(model => model.MainCategory.CategoryType.Id)%>
                </div>
                <div>
                    <%= Html.LabelFor(model => model.MainCategory.Name) %>
                    <%= Html.TextBoxFor(model => model.MainCategory.Name)%>
                    <%= Html.ValidationMessageFor(model => model.MainCategory.Name)%>
                </div>
                <div>
                    <%= Html.LabelFor(model => model.MainCategory.Description)%>
                    <%= Html.TextAreaFor(model => model.MainCategory.Description)%>
                    <%= Html.ValidationMessageFor(model => model.MainCategory.Description)%>
                </div>
                <div>
                    <%= Html.LabelFor(model => model.MainCategory.SeoName)%>
                    <%= Html.TextBoxFor(model => model.MainCategory.SeoName, new { @class = "large" })%>
                    <%= Html.ValidationMessageFor(model => model.MainCategory.SeoName)%>
                </div>
                <div>
                    <%= Html.LabelFor(model => model.MainCategory.HasHomepage)%>
                    <%= Html.CheckBoxFor(model => model.MainCategory.HasHomepage)%>
                    <%= Html.ValidationMessageFor(model => model.MainCategory.HasHomepage)%>
                </div>
                <p><input type="submit" value="Create" /></p>
            </fieldset>
        <% } %>

    Maybe I've just been staying up too late playing with MVC 2? :) Please let me know if I'm not being clear enough.
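    For what it's worth, a commonly suggested pattern for EF 1 (a hedged sketch; the *Reference property names are what EF 1's default code generator typically emits for navigation properties, so treat them as assumptions, though the "DbEntities" container name does appear in the error message): attach the relationships by EntityKey instead of loading the related entities, which avoids the relationship-count check on stub entities:

        [HttpPost]
        public ActionResult Create(Category mainCategory)
        {
            // Wire up the relationships by key; no extra queries needed.
            mainCategory.CategoryTypeReference.EntityKey =
                new EntityKey("DbEntities.CategoryTypeSet", "Id", mainCategory.CategoryType.Id);
            mainCategory.ParentReference.EntityKey =
                new EntityKey("DbEntities.CategorySet", "Id", mainCategory.Parent.Id);

            _db.AddToCategorySet(mainCategory);
            _db.SaveChanges();
            return RedirectToAction("Index");
        }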


  • Returning new object, overwrite the existing one in Java

    - by lupin
    Note: This is an assignment. Hi, OK, I have this method that is supposed to create a union of 2 sets.

        import java.io.*;

        class Set {
            public int numberOfElements;
            public String[] setElements;
            public int maxNumberOfElements;

            // constructor for our Set class
            public Set(int numberOfE, int setE, int maxNumberOfE) {
                this.numberOfElements = numberOfE;
                this.setElements = new String[setE];
                this.maxNumberOfElements = maxNumberOfE;
            }

            // Helper method to shorten/remove an element of the array, since we're using a
            // basic array instead of ArrayList or HashSet from the collection interface :(
            static String[] removeAt(int k, String[] arr) {
                final int L = arr.length;
                String[] ret = new String[L - 1];
                System.arraycopy(arr, 0, ret, 0, k);
                System.arraycopy(arr, k + 1, ret, k, L - k - 1);
                return ret;
            }

            int findElement(String element) {
                int retval = 0;
                for (int i = 0; i < setElements.length; i++) {
                    if (setElements[i] != null && setElements[i].equals(element)) {
                        return retval = i;
                    }
                    retval = -1;
                }
                return retval;
            }

            void add(String newValue) {
                int elem = findElement(newValue);
                if (numberOfElements < maxNumberOfElements && elem == -1) {
                    setElements[numberOfElements] = newValue;
                    numberOfElements++;
                }
            }

            int getLength() {
                if (setElements != null) {
                    return setElements.length;
                } else {
                    return 0;
                }
            }

            String[] emptySet() {
                setElements = new String[0];
                return setElements;
            }

            Boolean isFull() {
                Boolean True = new Boolean(true);
                Boolean False = new Boolean(false);
                if (setElements.length == maxNumberOfElements) {
                    return True;
                } else {
                    return False;
                }
            }

            Boolean isEmpty() {
                Boolean True = new Boolean(true);
                Boolean False = new Boolean(false);
                if (setElements.length == 0) {
                    return True;
                } else {
                    return False;
                }
            }

            void remove(String newValue) {
                for (int i = 0; i < setElements.length; i++) {
                    if (setElements[i] != null && setElements[i].equals(newValue)) {
                        setElements = removeAt(i, setElements);
                    }
                }
            }

            int isAMember(String element) {
                int retval = -1;
                for (int i = 0; i < setElements.length; i++) {
                    if (setElements[i] != null && setElements[i].equals(element)) {
                        return retval = i;
                    }
                }
                return retval;
            }

            void printSet() {
                for (int i = 0; i < setElements.length; i++) {
                    if (setElements[i] != null) {
                        System.out.println("Member elements on index: " + i + " " + setElements[i]);
                    }
                }
            }

            String[] getMember() {
                String[] tempArray = new String[setElements.length];
                for (int i = 0; i < setElements.length; i++) {
                    if (setElements[i] != null) {
                        tempArray[i] = setElements[i];
                    }
                }
                return tempArray;
            }

            Set union(Set x, Set y) {
                String[] newXtemparray = new String[x.getLength()];
                String[] newYtemparray = new String[y.getLength()];
                int len = newYtemparray.length + newXtemparray.length;
                Set temp = new Set(0, len, len);
                newXtemparray = x.getMember();
                newYtemparray = x.getMember();
                for (int i = 0; i < newYtemparray.length; i++) {
                    temp.add(newYtemparray[i]);
                }
                for (int j = 0; j < newXtemparray.length; j++) {
                    temp.add(newXtemparray[j]);
                }
                return temp;
            }

            Set difference(Set x, Set y) {
                String[] newXtemparray = new String[x.getLength()];
                String[] newYtemparray = new String[y.getLength()];
                int len = newYtemparray.length + newXtemparray.length;
                Set temp = new Set(0, len, len);
                newXtemparray = x.getMember();
                newYtemparray = x.getMember();
                for (int i = 0; i < newXtemparray.length; i++) {
                    temp.add(newYtemparray[i]);
                }
                for (int j = 0; j < newYtemparray.length; j++) {
                    int retval = temp.findElement(newYtemparray[j]);
                    if (retval != -1) {
                        temp.remove(newYtemparray[j]);
                    }
                }
                return temp;
            }
        }

        // This is the SetDemo class that will make use of our Set class
        class SetDemo {
            public static void main(String[] args) {
                // get input from keyboard
                BufferedReader keyboard;
                InputStreamReader reader;
                String temp = "";
                reader = new InputStreamReader(System.in);
                keyboard = new BufferedReader(reader);
                try {
                    System.out.println("Enter string element to be added");
                    temp = keyboard.readLine();
                    System.out.println("You entered " + temp);
                } catch (IOException IOerr) {
                    System.out.println("There was an error during input");
                }

                /*
                 * Test cases for our newly created Set class.
                 */
                Set setA = new Set(0, 10, 10);
                setA.add(temp);
                setA.add("b");
                setA.add("b");
                setA.add("hello");
                setA.add("world");
                setA.add("six");
                setA.add("seven");
                setA.add("b");
                int size = setA.getLength();
                System.out.println("Set size is: " + size);
                Boolean isempty = setA.isEmpty();
                System.out.println("Set is empty? " + isempty);
                int ismember = setA.isAMember("sixb");
                System.out.println("Element sixb is member of setA? " + ismember);
                Boolean output = setA.isFull();
                System.out.println("Set is full? " + output);
                //setA.printSet();
                int index = setA.findElement("world");
                System.out.println("Element b located on index: " + index);
                setA.remove("b");
                //setA.emptySet();
                int resize = setA.getLength();
                System.out.println("Set size is: " + resize);
                //setA.printSet();

                Set setB = new Set(0, 10, 10);
                setB.add("b");
                setB.add("z");
                setB.add("x");
                setB.add("y");

                Set setC = setA.union(setB, setA);
                System.out.println("Elements of setA");
                setA.printSet();
                System.out.println("Union of setA and setB");
                setC.printSet();
            }
        }

    The union method "works" in the sense that I can call it, but it doesn't do the job: it is supposed to create a union of all the elements of setA and setB, but it only returns the elements of setB. Sample output follows:

        java SetDemo
        Enter string element to be added
        hello
        You entered hello
        Set size is: 10
        Set is empty? false
        Element sixb is member of setA? -1
        Set is full? true
        Element b located on index: 2
        Set size is: 9
        Elements of setA
        Member elements on index: 0 hello
        Member elements on index: 1 world
        Member elements on index: 2 six
        Member elements on index: 3 seven
        Union of setA and setB
        Member elements on index: 0 b
        Member elements on index: 1 z
        Member elements on index: 2 x
        Member elements on index: 3 y

    thanks, lupin
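    An observation on the union method as posted (a suggested fix, not a verified one): both temp arrays are filled from x, so y's elements never reach the result -- which matches the output above, where setA.union(setB, setA) returns only setB's members. The difference method has the same copy-paste slip. A corrected union:

        Set union(Set x, Set y) {
            String[] newXtemparray = x.getMember();
            String[] newYtemparray = y.getMember(); // was x.getMember(): y was ignored
            int len = newXtemparray.length + newYtemparray.length;
            Set temp = new Set(0, len, len);
            // add() already skips duplicates via findElement(), so just add both sides
            for (int i = 0; i < newYtemparray.length; i++) {
                temp.add(newYtemparray[i]);
            }
            for (int j = 0; j < newXtemparray.length; j++) {
                temp.add(newXtemparray[j]);
            }
            return temp;
        }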


  • Save object states in .data or attr - Performance vs CSS?

    - by Neysor
    In response to my answer yesterday about rotating an image, Jamund told me to use .data() instead of .attr(). At first I thought he was right, but then I thought about a bigger context... Is it always better to use .data() instead of .attr()? I looked at some other posts, like what-is-better-data-or-attr or jquery-data-vs-attrdata. The answers were not satisfactory for me... So I moved on and edited the example by adding CSS. I thought it might be useful to apply a different style to each image depending on its rotation. My style was the following:

        .rp[data-rotate="0"]   { border:10px solid #FF0000; }
        .rp[data-rotate="90"]  { border:10px solid #00FF00; }
        .rp[data-rotate="180"] { border:10px solid #0000FF; }
        .rp[data-rotate="270"] { border:10px solid #00FF00; }

    Because design and coding are often separated, it could be a nice feature to handle this in CSS instead of adding the functionality to JavaScript. Also, in my case data-rotate is like a special state the image currently has, so in my opinion it makes sense to represent it within the DOM. I also thought this could be a case where it is much better to save with .attr() than with .data() -- something never mentioned in any of the posts I read. But then I thought about performance. Which function is faster? I built my own test:

        <!DOCTYPE HTML>
        <html>
        <head>
            <title>test</title>
            <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js"></script>
            <script type="text/javascript">
                function runfirst(dobj, dname) {
                    console.log("runfirst " + dname);
                    console.time(dname + "-attr");
                    for (i = 0; i < 10000; i++) {
                        dobj.attr("data-test", "a" + i);
                    }
                    console.timeEnd(dname + "-attr");
                    console.time(dname + "-data");
                    for (i = 0; i < 10000; i++) {
                        dobj.data("data-test", "a" + i);
                    }
                    console.timeEnd(dname + "-data");
                }
                function runlast(dobj, dname) {
                    console.log("runlast " + dname);
                    console.time(dname + "-data");
                    for (i = 0; i < 10000; i++) {
                        dobj.data("data-test", "a" + i);
                    }
                    console.timeEnd(dname + "-data");
                    console.time(dname + "-attr");
                    for (i = 0; i < 10000; i++) {
                        dobj.attr("data-test", "a" + i);
                    }
                    console.timeEnd(dname + "-attr");
                }
                $().ready(function() {
                    runfirst($("#rp4"), "#rp4");
                    runfirst($("#rp3"), "#rp3");
                    runlast($("#rp2"), "#rp2");
                    runlast($("#rp1"), "#rp1");
                });
            </script>
        </head>
        <body>
            <div id="rp1">Testdiv 1</div>
            <div id="rp2" data-test="1">Testdiv 2</div>
            <div id="rp3">Testdiv 3</div>
            <div id="rp4" data-test="1">Testdiv 4</div>
        </body>
        </html>

    It should also show whether there is a difference when data-test is predefined or not. One result was this:

        runfirst #rp4
        #rp4-attr: 515ms
        #rp4-data: 268ms
        runfirst #rp3
        #rp3-attr: 505ms
        #rp3-data: 264ms
        runlast #rp2
        #rp2-data: 260ms
        #rp2-attr: 521ms
        runlast #rp1
        #rp1-data: 284ms
        #rp1-attr: 525ms

    So the .attr() function always needed more time than the .data() function. An argument for .data(), I thought, because performance is always an argument! Then I wanted to post my results here with some questions, and in the act of writing I compared with the questions Stack Overflow showed me (similar titles). Sure enough, there was one interesting post about performance. I read it and ran its example. And now I am confused! That test showed that .data() is slower than .attr()!?! Why is that so? First I thought it was because of a different jQuery library, so I edited it and saved the new one. But the result didn't change... So now my questions to you:

    1. Why are there differences in performance between these two examples?
    2. Would you prefer data- HTML5 attributes over .data(), if the value represents a state, even though it wouldn't be needed at the time of coding? Why / why not?
    3. Depending on performance: would performance be an argument for using .attr() instead of .data(), if .attr() turns out to be faster, even though data is meant to be used with .data()?

    UPDATE 1: I did see that without the overhead, .data() is much faster; I misinterpreted the data :) But I'm more interested in my second question: would you prefer data- HTML5 attributes over .data(), if the value represents a state, even though it wouldn't be needed at the time of coding? Why / why not? Are there other reasons you can think of to use .attr() and not .data()? E.g. interoperability? Because .data() is jQuery-style, while HTML attributes can be read by everything...

    UPDATE 2: As we see from T.J. Crowder's speed test in his answer, attr is much faster than data! Which is again confusing me :) But please! Performance is an argument, but not the highest! So please give answers to my other questions too!
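    One technical point that bears on question 2 (a short sketch; the selector comes from the CSS above): CSS attribute selectors match the DOM attribute, not jQuery's internal data store. Calling .data('rotate', 90) writes only to jQuery's cache, so the .rp[data-rotate="90"] rule never fires; driving CSS state requires .attr():

        // CSS attribute selectors see DOM attributes, not jQuery's data cache.
        var $img = $('.rp').first();

        $img.data('rotate', 90);          // cache only -- [data-rotate="90"] does NOT match
        $img.attr('data-rotate', '90');   // attribute updated -- the CSS rule now applies

        // Note: .data('rotate') reads the data-rotate attribute once as a seed,
        // then serves jQuery's cached value on later reads.
        console.log($img.data('rotate'));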


  • PHP - My array returns NULL values when placed in a function, but works fine outside of the function

    - by orbit82
    Okay, let me see if I can explain this. I am making a newspaper WordPress theme. The theme pulls posts from categories. The front page shows multiple categories, organized as "newsboxes". Each post should show up only ONCE on the front page, even if said post is in two or more categories. To prevent posts from duplicating on the front page, I've created an array that keeps track of the individual post IDs. When a post FIRST shows up on the front page, its ID gets added to the array. Before looping through the posts for each category, the code first checks the array to see which posts have ALREADY been displayed. OK, so now remember how I said earlier that the front page shows multiple categories organized as "newsboxes"? Well, these newsboxes are called onto the front page using PHP includes. I have 6 newsboxes appearing on the front page, and the code to call them is EXACTLY the same. I didn't want to repeat the same code 6 times, so I put all of the inclusion code into a function. The function works, but the only problem is that it screws up the duplicate-posts code I mentioned earlier. The posts all repeat. Running var_dump on the $do_not_duplicate variable returns an array with NULL values. Everything works PERFECTLY if I don't put the code inside a function, but once I do put it in a function, it's like the arrays aren't even connecting with the posts. Here is the code with the arrays. The key variables in question here are $do_not_duplicate[] = $post->ID, $do_not_duplicate, and 'post__not_in' => $do_not_duplicate:

        <?php query_posts('cat='.$settings['cpress_top_story_category'].'&posts_per_page='.$settings['cpress_number_of_top_stories'].''); ?>
        <?php if (have_posts()) : ?>
        <!--TOP STORY-->
        <div id="topStory">
        <?php while ( have_posts() ) : the_post(); $do_not_duplicate[] = $post->ID; ?>
            <a href="<?php the_permalink() ?>" rel="bookmark" title="Permanent Link to <?php the_title_attribute(); ?>"><?php the_post_thumbnail('top-story-thumbnail'); ?></a>
            <h2 class="extraLargeHeadline"><a href="<?php the_permalink() ?>" rel="bookmark" title="Permanent Link to <?php the_title_attribute(); ?>"><?php the_title(); ?></a></h2>
            <div class="topStory_author"><?php cpress_show_post_author_byline(); ?></div>
            <div <?php post_class('topStory_entry') ?> id="post-<?php the_ID(); ?>">
                <?php if($settings['cpress_excerpt_or_content_top_story_newsbox'] == "content") {
                    the_content(); ?><a href="<?php the_permalink(); ?>" title="<?php the_title_attribute(); ?>"><span class="read_more"><?php echo $settings['cpress_more_text']; ?></span></a>
                <?php } else {
                    the_excerpt(); ?><a href="<?php the_permalink(); ?>" title="<?php the_title_attribute(); ?>"><span class="read_more"><?php echo $settings['cpress_more_text']; ?></span></a>
                <?php } ?>
            </div><!--/topStoryentry-->
            <div class="topStory_meta"><?php cpress_show_post_meta(); ?></div>
        <?php endwhile; wp_reset_query(); ?>
        <?php if(!$settings['cpress_hide_top_story_more_stories']) { ?>
            <!--More Top Stories--><div id="moreTopStories">
            <?php $category_link = get_category_link(''.$settings['cpress_top_story_category'].''); ?>
            <?php if (have_posts()) : ?>
            <?php query_posts( array( 'cat' => ''.$settings['cpress_top_story_category'].'', 'posts_per_page' => ''.$settings['cpress_number_of_more_top_stories'].'', 'post__not_in' => $do_not_duplicate ) ); ?>
            <h4 class="moreStories">
                <?php if($settings['cpress_make_top_story_more_stories_link']) { ?>
                    <a href="<?php echo $category_link; ?>" title="<?php echo strip_tags($settings['cpress_top_story_more_stories_text']); ?>"><?php echo strip_tags($settings['cpress_top_story_more_stories_text']); ?></a>
                <?php } else { echo strip_tags($settings['cpress_top_story_more_stories_text']); } ?>
            </h4>
            <ul>
            <?php while( have_posts() ) : the_post(); $do_not_duplicate[] = $post->ID; ?>
                <li><h2 class="mediumHeadline"><a href="<?php the_permalink() ?>" rel="bookmark" title="Permanent Link to <?php the_title_attribute(); ?>"><?php the_title(); ?></a></h2>
                <?php if(!$settings['cpress_hide_more_top_stories_excerpt']) { ?>
                    <div <?php post_class('moreTopStory_postExcerpt') ?> id="post-<?php the_ID(); ?>"><?php if($settings['cpress_excerpt_or_content_top_story_newsbox'] == "content") {
                        the_content(); ?><a href="<?php the_permalink(); ?>" title="<?php the_title_attribute(); ?>"><span class="read_more"><?php echo $settings['cpress_more_text']; ?></span></a>
                    <?php } else {
                        the_excerpt(); ?><a href="<?php the_permalink(); ?>" title="<?php the_title_attribute(); ?>"><span class="read_more"><?php echo $settings['cpress_more_text']; ?></span></a>
                    <?php } ?>
                    </div>
                <?php } ?>
                <div class="moreTopStory_postMeta"><?php cpress_show_post_meta(); ?></div>
                </li>
            <?php endwhile; wp_reset_query(); ?>
            </ul>
            <?php endif; ?>
            </div><!--/moreTopStories-->
        <?php } ?>
        <?php echo(var_dump($do_not_duplicate)); ?>
        </div><!--/TOP STORY-->
        <?php endif; ?>

    And here is the code that includes the newsboxes onto the front page. This is the code I'm trying to put into a function to avoid duplicating it 6 times on one page:

        function cpress_show_templatebitsf($tbit_num, $tbit_option)
        {
            global $tbit_path;
            global $shortname;
            $settings = get_option($shortname.'_options');

            // display the templatebits (usually these will be sidebars)
            for ($i = 1; $i <= $tbit_num; $i++) {
                $tbit = strip_tags($settings[$tbit_option.$i]);
                if ($tbit != "") {
                    include_once(TEMPLATEPATH . $tbit_path . $tbit . '.php');
                }
            }
            unset($tbit_option);
        }

    I hope this makes sense. It's kind of a complex thing to explain, but I've tried many things to fix it and have had no luck. I'm stumped. I'm hoping it's just some little thing I'm overlooking, because it seems like it shouldn't be such a problem.
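    A hedged guess at the mechanism, with a sketch (it assumes the newsbox templates are the files included by cpress_show_templatebitsf): a file included inside a function inherits that function's variable scope, so the $do_not_duplicate the newsboxes read and append to is a fresh local inside cpress_show_templatebitsf, not the one the front page built -- hence the NULL values in var_dump. Declaring it global in both places would share one array:

        function cpress_show_templatebitsf($tbit_num, $tbit_option)
        {
            global $tbit_path;
            global $shortname;
            // Files included here run in this function's scope, so pull the
            // duplicate-tracking array in from the global scope for them to use.
            global $do_not_duplicate;
            if (!is_array($do_not_duplicate)) {
                $do_not_duplicate = array();
            }

            $settings = get_option($shortname.'_options');
            for ($i = 1; $i <= $tbit_num; $i++) {
                $tbit = strip_tags($settings[$tbit_option.$i]);
                if ($tbit != "") {
                    // include, not include_once: the same newsbox file may
                    // legitimately be rendered more than once per request
                    include(TEMPLATEPATH . $tbit_path . $tbit . '.php');
                }
            }
        }

    The front-page template (and any file WordPress loads through load_template(), which also has its own scope) would need the same global $do_not_duplicate; line before first using the array.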


  • iOS bluetooth low energy not detecting peripherals

    - by user3712524
My app won't detect peripherals. I'm using LightBlue to simulate a Bluetooth low energy peripheral, and my app just won't sense it. I even installed LightBlue on two devices to make sure it was generating a peripheral signal properly, and it is. Any suggestions? My labels are updating, and NSLog shows that scanning is starting. Thanks in advance.

// --- ViewController.h ---
#import <UIKit/UIKit.h>
#import <CoreBluetooth/CoreBluetooth.h>

@interface ViewController : UIViewController
@property (weak, nonatomic) IBOutlet UITextField *navDestination;
@end

// --- ViewController.m ---
#import "ViewController.h"

@implementation ViewController

- (IBAction)connect:(id)sender {
}

- (IBAction)navDestination:(id)sender {
    NSString *destinationText = self.navDestination.text;
}

- (void)viewDidLoad {
    [super viewDidLoad];
}

- (void)viewWillDisappear:(BOOL)animated {
    [super viewWillDisappear:animated];
}

- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}

@end

// --- BlueToothViewController.h ---
#import <UIKit/UIKit.h>
#import "ViewController.h"

// Declaring the delegate protocols here lets the compiler verify the callback
// signatures below.
@interface BlueToothViewController : UIViewController <CBCentralManagerDelegate, CBPeripheralDelegate>

@property (strong, nonatomic) CBCentralManager *centralManager;
@property (strong, nonatomic) CBPeripheral *discoveredPerepheral;
@property (strong, nonatomic) NSMutableData *data;
@property (strong, nonatomic) IBOutlet UITextView *textview;
@property (weak, nonatomic) IBOutlet UILabel *charLabel;
@property (weak, nonatomic) IBOutlet UILabel *isConnected;
@property (weak, nonatomic) IBOutlet UILabel *myPeripherals;
@property (weak, nonatomic) IBOutlet UILabel *aLabel;

- (void)centralManagerDidUpdateState:(CBCentralManager *)central;
// The selector must be spelled centralManager:didDiscoverPeripheral:advertisementData:RSSI:
// exactly. The original code had "centralManger:" (missing the second "a"), so Core
// Bluetooth never called this method and no peripherals were ever reported.
- (void)centralManager:(CBCentralManager *)central didDiscoverPeripheral:(CBPeripheral *)peripheral advertisementData:(NSDictionary *)advertisementData RSSI:(NSNumber *)RSSI;
- (void)centralManager:(CBCentralManager *)central didFailToConnectPeripheral:(CBPeripheral *)peripheral error:(NSError *)error;
- (void)cleanup;
- (void)centralManager:(CBCentralManager *)central didConnectPeripheral:(CBPeripheral *)peripheral;
- (void)peripheral:(CBPeripheral *)peripheral didDiscoverServices:(NSError *)error;
- (void)peripheral:(CBPeripheral *)peripheral didDiscoverCharacteristicsForService:(CBService *)service error:(NSError *)error;
- (void)centralManager:(CBCentralManager *)central didDisconnectPeripheral:(CBPeripheral *)peripheral error:(NSError *)error;
- (void)peripheral:(CBPeripheral *)peripheral didUpdateValueForCharacteristic:(CBCharacteristic *)characteristic error:(NSError *)error;
- (void)peripheral:(CBPeripheral *)peripheral didUpdateNotificationStateForCharacteristic:(CBCharacteristic *)characteristic error:(NSError *)error;
@end

// --- BlueToothViewController.m ---
@interface BlueToothViewController ()
@end

@implementation BlueToothViewController

- (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil {
    self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil];
    if (self) {
        // Custom initialization
    }
    return self;
}

- (void)viewDidLoad {
    [super viewDidLoad];
    _centralManager = [[CBCentralManager alloc] initWithDelegate:self queue:nil options:nil];
    _data = [[NSMutableData alloc] init];
}

- (void)viewWillDisappear:(BOOL)animated {
    [super viewWillDisappear:animated];
    [_centralManager stopScan];
}

- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}

- (void)centralManagerDidUpdateState:(CBCentralManager *)central {
    // you should test all scenarios
    if (central.state == CBCentralManagerStateUnknown) {
        self.aLabel.text = @"I dont do anything because my state is unknown.";
        return;
    }
    if (central.state == CBCentralManagerStatePoweredOn) {
        // scan for devices
        [_centralManager scanForPeripheralsWithServices:nil options:@{ CBCentralManagerScanOptionAllowDuplicatesKey : @YES }];
        NSLog(@"Scanning Started");
    }
    if (central.state == CBCentralManagerStateResetting) {
        self.aLabel.text = @"I dont do anything because my state is resetting.";
        return;
    }
    if (central.state == CBCentralManagerStateUnsupported) {
        self.aLabel.text = @"I dont do anything because my state is unsupported.";
        return;
    }
    if (central.state == CBCentralManagerStateUnauthorized) {
        self.aLabel.text = @"I dont do anything because my state is unauthorized.";
        return;
    }
    if (central.state == CBCentralManagerStatePoweredOff) {
        self.aLabel.text = @"I dont do anything because my state is powered off.";
        return;
    }
}

// Selector spelling fixed here as well (was "centralManger:"); with the typo this
// delegate method is silently ignored.
- (void)centralManager:(CBCentralManager *)central didDiscoverPeripheral:(CBPeripheral *)peripheral advertisementData:(NSDictionary *)advertisementData RSSI:(NSNumber *)RSSI {
    NSLog(@"Discovered %@ at %@", peripheral.name, RSSI);
    self.myPeripherals.text = [NSString stringWithFormat:@"%@%@", peripheral.name, RSSI];
    if (_discoveredPerepheral != peripheral) {
        // save a copy of the peripheral
        _discoveredPerepheral = peripheral;
        // and connect
        NSLog(@"Connecting to peripheral %@", peripheral);
        [_centralManager connectPeripheral:peripheral options:nil];
        self.aLabel.text = [NSString stringWithFormat:@"%@", peripheral];
    }
}

- (void)centralManager:(CBCentralManager *)central didFailToConnectPeripheral:(CBPeripheral *)peripheral error:(NSError *)error {
    NSLog(@"Failed to connect");
    [self cleanup];
}

- (void)cleanup {
    // see if we are subscribed to a characteristic on the peripheral
    if (_discoveredPerepheral.services != nil) {
        for (CBService *service in _discoveredPerepheral.services) {
            if (service.characteristics != nil) {
                for (CBCharacteristic *characteristic in service.characteristics) {
                    if ([characteristic.UUID isEqual:[CBUUID UUIDWithString:@"508EFF8E-F541-57EF-BD82-B0B4EC504CA9"]]) {
                        if (characteristic.isNotifying) {
                            [_discoveredPerepheral setNotifyValue:NO forCharacteristic:characteristic];
                            return;
                        }
                    }
                }
            }
        }
    }
    [_centralManager cancelPeripheralConnection:_discoveredPerepheral];
}

- (void)centralManager:(CBCentralManager *)central didConnectPeripheral:(CBPeripheral *)peripheral {
    NSLog(@"Connected");
    [_centralManager stopScan];
    NSLog(@"Scanning stopped");
    self.isConnected.text = [NSString stringWithFormat:@"Connected"];
    [_data setLength:0];
    peripheral.delegate = self;
    [peripheral discoverServices:nil];
}

- (void)peripheral:(CBPeripheral *)peripheral didDiscoverServices:(NSError *)error {
    if (error) {
        [self cleanup];
        return;
    }
    for (CBService *service in peripheral.services) {
        [peripheral discoverCharacteristics:nil forService:service];
    }
    // discover other characteristics
}

- (void)peripheral:(CBPeripheral *)peripheral didDiscoverCharacteristicsForService:(CBService *)service error:(NSError *)error {
    if (error) {
        [self cleanup];
        return;
    }
    for (CBCharacteristic *characteristic in service.characteristics) {
        [peripheral setNotifyValue:YES forCharacteristic:characteristic];
    }
}

- (void)peripheral:(CBPeripheral *)peripheral didUpdateValueForCharacteristic:(CBCharacteristic *)characteristic error:(NSError *)error {
    if (error) {
        NSLog(@"Error");
        return;
    }
    NSString *stringFromData = [[NSString alloc] initWithData:characteristic.value encoding:NSUTF8StringEncoding];
    self.charLabel.text = [NSString stringWithFormat:@"%@", stringFromData];
    // Have we got everything we need?
    if ([stringFromData isEqualToString:@"EOM"]) {
        [_textview setText:[[NSString alloc] initWithData:self.data encoding:NSUTF8StringEncoding]];
        [peripheral setNotifyValue:NO forCharacteristic:characteristic];
        [_centralManager cancelPeripheralConnection:peripheral];
    }
}

- (void)peripheral:(CBPeripheral *)peripheral didUpdateNotificationStateForCharacteristic:(CBCharacteristic *)characteristic error:(NSError *)error {
    // note: [characteristic.UUID isEqual:nil] is always NO, so this guard never fires;
    // compare against a real CBUUID for the characteristic you care about.
    if ([characteristic.UUID isEqual:nil]) {
        return;
    }
    if (characteristic.isNotifying) {
        NSLog(@"Notification began on %@", characteristic);
    } else {
        [_centralManager cancelPeripheralConnection:peripheral];
    }
}

- (void)centralManager:(CBCentralManager *)central didDisconnectPeripheral:(CBPeripheral *)peripheral error:(NSError *)error {
    _discoveredPerepheral = nil;
    self.isConnected.text = [NSString stringWithFormat:@"Connecting..."];
    [_centralManager scanForPeripheralsWithServices:nil options:@{ CBCentralManagerScanOptionAllowDuplicatesKey : @YES }];
}

@end

    Read the article

  • Am I just not understanding TDD unit testing (Asp.Net MVC project)?

    - by KallDrexx
I am trying to figure out how to correctly and efficiently unit test my ASP.NET MVC project. When I started on this project I bought the Pro ASP.NET MVC book, and with that book I learned about TDD and unit testing. Between the examples and the fact that I work as a software engineer in QA at my current company, I was amazed at how awesome TDD seemed to be. So I started working on my project and went gung-ho writing unit tests for my database layer, business layer, and controllers. Everything got a unit test prior to implementation. At first I thought it was awesome, but then things started to go downhill. Here are the issues I started encountering:

I ended up writing application code in order to make it possible for unit tests to be performed. I don't mean this in the good way, where my code was broken and I had to fix it so the unit tests would pass. I mean that abstracting out the database to a mock database is impossible due to the use of LINQ for data retrieval (using the generic repository pattern). The reason is that with LINQ to SQL or LINQ to Entities you can do joins just by doing:

var objs = from p in _container.Projects select p.Objects;

However, if you mock the database layer out, in order to have that LINQ pass the unit test you must change it to:

var objs = from p in _container.Projects join o in _container.Objects on o.ProjectId equals p.Id select o;

Not only does this mean you are changing your application logic just so you can unit test it, but you are making your code less efficient for the sole purpose of testability, and giving up a lot of the advantages of using an ORM in the first place. Furthermore, since a lot of the IDs for my models are database generated, it turned out I had to write additional code to handle the non-database tests, since IDs were never generated, and I still had to handle those cases for the unit tests to pass even though they would never occur in real scenarios. Thus I ended up throwing out my database unit testing.

Writing unit tests for controllers was easy as long as I was returning views. However, the major part of my application (and the one that would benefit most from unit testing) is a complicated Ajax web application. For various reasons I decided to change the app from returning views to returning JSON with the data I needed. After this my unit tests became extremely painful to write, as I have not found any good way to write unit tests for non-trivial JSON. After pounding my head and wasting a ton of time trying to find a good way to unit test the JSON, I gave up and deleted all of my controller unit tests (all controller actions are focused on this part of the app so far).

So finally I was left with testing the service layer (BLL). Right now I am using EF4; however, I had this issue with LINQ to SQL as well. I chose the EF4 model-first approach because, to me, it makes sense to do it that way (define my business objects and let the framework figure out how to translate them into the SQL backend). This was fine at the beginning, but now it is becoming cumbersome due to relationships. For example, say I have Project, User, and Object entities. One Object must be associated with a Project, and a Project must be associated with a User. This is not only a database-specific rule; these are my business rules as well. However, say I want to write a unit test verifying that I am able to save an Object (as a simple example).

I now have to write the following code just to make sure the save worked:

User usr = new User { Name = "Me" };
_userService.SaveUser(usr);

Project prj = new Project { Name = "Test Project", Owner = usr };
_projectService.SaveProject(prj);

Object obj = new Object { Name = "Test Object" };
_objectService.SaveObject(obj);

// Perform verifications

There are several issues with having to do all this just to perform one unit test. For starters, if I add a new dependency, such as all projects must belong to a category, I must go into EVERY single unit test that references a project, add code to save the category, then add code to attach the category to the project. This can be a HUGE effort down the road for a very simple business logic change, and yet almost none of the unit tests I will be modifying for this requirement are actually meant to test that feature/requirement. If I then add verifications to my SaveProject method, so that projects cannot be saved unless they have a name with at least 5 characters, I have to go through every Object and Project unit test to make sure the new requirement doesn't make any unrelated unit tests fail. If there is an issue in the UserService.SaveUser() method, it will cause all Project and Object unit tests to fail, and the cause won't be immediately obvious without digging through the exceptions. Thus I have removed all service layer unit tests from my project.

I could go on and on, but so far I have not seen any way for unit testing to actually help me and not get in my way. I can see specific cases where I can, and probably will, implement unit tests, such as making sure my data verification methods work correctly, but those cases are few and far between. Some of my issues could probably be mitigated, but not without adding extra layers to my application, and thus more points of failure, just so I can unit test. So I have no unit tests left in my code. Luckily I use source control heavily, so I can get them back if I need to, but I just don't see the point.

Everywhere on the internet I see people talking about how great TDD unit tests are, and I'm not just talking about the fanatical people. The few people who dismiss TDD/unit tests give bad arguments, claiming they can debug more efficiently by hand through the IDE, or that their coding skills are so amazing that they don't need tests. I recognize that both of those arguments are utter bollocks, especially for a project that needs to be maintainable by multiple developers, but valid rebuttals to TDD seem to be few and far between. So the point of this post is to ask: am I just not understanding how to use TDD and automated unit tests?
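A pattern that usually tames the dependency cascade described above is a set of test-data builders (sometimes called an Object Mother): one factory per entity that produces a minimally valid object graph, so each business rule is encoded in exactly one place instead of in every test. Here is a minimal sketch of the idea in Java; the entity and method names are hypothetical stand-ins for the post's User/Project/Object model, not code from the project.

// Test-data builders: each factory returns a minimally valid object graph.
// A new invariant (e.g. "every project needs a category") is then added in
// ONE place instead of in every test that happens to touch a project.
// All names here are hypothetical.
class User {
    final String name;
    User(String name) { this.name = name; }
}

class Project {
    final String name;
    final User owner;
    Project(String name, User owner) { this.name = name; this.owner = owner; }
}

class Item {
    final String name;
    final Project project;
    Item(String name, Project project) { this.name = name; this.project = project; }
}

final class TestData {
    private TestData() {}  // static factory holder

    static User aUser() { return new User("Me"); }

    static Project aProject() {
        // If projects later require a category or a 5-character name,
        // enforce it here once and every existing test stays valid.
        return new Project("Test Project", aUser());
    }

    static Item anItem() { return new Item("Test Item", aProject()); }
}

public class BuilderDemo {
    public static void main(String[] args) {
        Item item = TestData.anItem();  // one call builds the whole valid graph
        System.out.println(item.name + " belongs to " + item.project.name);
    }
}

With builders like these, the hypothetical category requirement means changing aProject() once; the many tests that merely need some valid project keep passing untouched.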

    Read the article

  • Java MVC project - either I can't update the drawing, or I can't see it

    - by user1881164
I've got a project based around the Model-View-Controller paradigm, and I've been having a lot of trouble getting it to work properly. The program has 4 panels, which are supposed to let me modify an oval drawn on the screen in various ways. These seem to work fine, and after considerable trouble I was able to get them to display in the JFrame that holds the whole shebang. I've managed to get them to display by breaking away from the provided instructions, but when I do that, I can't seem to get the oval to update. However, if I follow the directions to the letter, I only ever see an empty frame. The project had pretty specific directions, which I followed up to a point, but some of the documentation was unclear. I think what I'm missing must be something simple, since nothing is jumping out at me as not making sense. I have to admit, though, that my Java experience is limited and my experience with GUI design/paradigms is even more so. Anyway, I've been searching the web and this site extensively trying to figure out what's wrong, but this is a somewhat specific example, and honestly I just don't know enough about this to generalize any of the answers I've found online and figure out what's missing. I've been poring over this code for far too long now, so I'm really hoping someone can help me out.

public class Model {
    private Controller controller;
    private View view;
    private MvcFrame mvcFrame;

    private int radius = 44;
    private Color color = Color.BLUE;
    private boolean solid = true;

    // bunch of mutators and accessors for the above variables

    public Model() {
        controller = new Controller(this);
        view = new View(this);
        mvcFrame = new MvcFrame(this);
    }
}

Here's the model class. This seems to be fairly simple. I think my understanding of what's going on here is solid, and nothing seems to be wrong. Included mostly for context.

public class Controller extends JPanel {
    private Model model;

    public Controller(Model model) {
        this.model = model;
        setBorder(BorderFactory.createLineBorder(Color.GREEN));
        setLayout(new GridLayout(4, 1));
        add(new RadiusPanel(model));
        add(new ColorPanel(model));
        add(new SolidPanel(model));
        add(new TitlePanel(model));
    }
}

This is the Controller class. As far as I can tell, the setBorder, setLayout, and series of add calls do nothing here. I had them commented out, but this is the way the instructions told me to do it, so either there's a mistake there or something about my setup is wrong. When I did it this way, I would get an empty window (JFrame), but none of the panels would show up in it. What I did to fix that was to put those add calls in the MvcFrame class:

public class MvcFrame extends JFrame {
    private Model model;

    public MvcFrame(Model model) {
        this.model = model;
        //setLayout(new GridLayout(4,1));
        //add(new RadiusPanel(model));
        //add(new ColorPanel(model));
        //add(new SolidPanel(model));
        //add(new TitlePanel(model));
        //add(new View(model));
        setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        setLocationRelativeTo(null);
        setSize(800, 600);
        setVisible(true);
    }
}

So here's where things started getting weird. The first block of commented-out code is the same as what's in the Controller class. The reason I have it commented out is that it was just a lucky guess; it's not supposed to be like that according to the instructions. However, it did work for getting the panels to show up, but at that point I was still tearing my hair out trying to get the oval to display.

The other commented-out line ( add(new View(model)); ) was a different attempt at making things work. In this case, I put those add calls in the View class (see the commented-out code below). This actually worked to display both the oval and the panels, but that approach wouldn't allow me to update the oval. Also, though I briefly had the oval displaying, I can't figure out what exactly made that happen, and I can't seem to make it come back.

public class View extends JPanel {
    private Model model;

    public View(Model model) {
        this.model = model;
        //setLayout(new GridLayout(4,1));
        //add(new RadiusPanel(model));
        //add(new ColorPanel(model));
        //add(new SolidPanel(model));
        //add(new TitlePanel(model));
        repaint();
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        // center of view panel, in pixels:
        int xCenter = getWidth() / 2;
        int yCenter = getHeight() / 2;
        int radius = model.getRadius();
        int xStart = xCenter - radius;
        int yStart = yCenter - radius;
        int xWidth = 2 * radius;
        int yHeight = 2 * radius;
        g.setColor(model.getColor());
        g.clearRect(0, 0, getWidth(), getHeight());
        if (model.isSolid()) {
            g.fillOval(xStart, yStart, xWidth, yHeight);
        } else {
            g.drawOval(xStart, yStart, xWidth, yHeight);
        }
    }
}

Same idea as before: the commented-out code is stuff I added to try to get things working, but it's not based on the provided directions. In the case where that code was uncommented, the add(new View(model)); line in MvcFrame was uncommented as well. The various panel classes (SolidPanel, ColorPanel, etc.) simply extend a class called ControlPanel, which extends JPanel. These all seem to work as expected, and I'm not having much issue with them. There is also a driver that launches the GUI; it also works as expected. The main problem I'm having is that I can't get the oval to show up, and the one time I could make it show up, none of the options for changing it seemed to work. I feel like I'm close, but I'm just at a loss for other things to try at this point. Anyone who can help will have my sincerest gratitude.
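For what it's worth, a frequent cause of the "panels show but the oval never updates" symptom in MVC exercises like this one is that nothing ever tells the view to repaint after the model's mutators run. Below is a minimal sketch of one conventional wiring; the ModelListener interface and the method names are hypothetical, not part of the assignment's provided skeleton.

import java.util.ArrayList;
import java.util.List;

// Minimal observer wiring: the model stays Swing-agnostic and simply tells
// registered listeners that something changed; the view repaints on change.
interface ModelListener {
    void modelChanged();
}

class ObservableModel {
    private final List<ModelListener> listeners = new ArrayList<ModelListener>();
    private int radius = 44;

    public void addListener(ModelListener listener) {
        listeners.add(listener);
    }

    public int getRadius() {
        return radius;
    }

    public void setRadius(int radius) {
        this.radius = radius;
        fireChanged();  // every mutator ends with a notification
    }

    private void fireChanged() {
        for (ModelListener listener : listeners) {
            listener.modelChanged();
        }
    }
}

With that in place, the View registers itself in its constructor, for example model.addListener(new ModelListener() { public void modelChanged() { repaint(); } });, the frame adds exactly one View instance alongside the control panels, and every control that mutates the model triggers a repaint automatically.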

    Read the article

  • Linked List manipulation, issues retrieving data c++

    - by floatfil
I'm trying to implement some functions to manipulate a linked list. The implementation is a class template (typename T); the class is 'List', which contains a 'head' pointer and also a nested struct:

struct Node {    // the node in a linked list
    T* data;     // pointer to actual data, operations in T
    Node* next;  // pointer to a Node
};

Since it is a template, and 'T' can be any data type, how do I go about checking the data in the list to see if it matches the data passed into the function? The function is called 'retrieve' and takes two parameters, the data and a pointer:

bool retrieve(T target, T*& ptr);  // This is the prototype we need to use for the project

"bool retrieve: similar to remove, but the item is not removed from the list. If there are duplicates in the list, the first one encountered is retrieved. The second parameter is unreliable if the return value is false. E.g.,"

Employee target("duck", "donald");
success = company1.retrieve(target, oneEmployee);
if (success) {
    cout << "Found in list: " << *oneEmployee << endl;
}

And the function is called like this:

company4.retrieve(emp3, oneEmployee)

so that when you cout *oneEmployee, you get the data that pointer refers to (in this case, data of type Employee). (This assumes all data types have the appropriate overloaded operators.)

I hope this makes sense so far, but my issue is in comparing the data in the parameter with the data encountered while walking the list. (The data types we use all include overloads for the equality operators, so oneData == twoData is valid.) This is what I have so far:

template <typename T>
bool List<T>::retrieve(T target, T*& ptr) {
    List<T>::Node* dummyPtr = head;  // point dummy pointer at what the list's head points to
    // EDIT: it compiled but broke on the comparison with an Access Violation error.
    // The culprit was the original for(;;) loop: it never tests for the end of the
    // list, so once dummyPtr walks past the last node it is NULL, and dereferencing
    // it is exactly an access violation. Guarding the loop on dummyPtr fixes that.
    while (dummyPtr != NULL) {
        if (dummyPtr->data != NULL && *dummyPtr->data == target) {
            ptr = dummyPtr->data;    // set the parameter pointer to the matching data
            return true;
        }
        dummyPtr = dummyPtr->next;   // else, move on to the next node
    }
    return false;                    // reached the end of the list without a match
}

Here is the implementation of the Employee class:

//-------------------------- constructor -----------------------------------
Employee::Employee(string last, string first, int id, int sal) {
    idNumber = (id >= 0 && id <= MAXID ? id : -1);
    salary = (sal >= 0 ? sal : -1);
    lastName = last;
    firstName = first;
}

//-------------------------- destructor ------------------------------------
// Needed so that memory for strings is properly deallocated
Employee::~Employee() {
}

//---------------------- copy constructor -----------------------------------
Employee::Employee(const Employee& E) {
    lastName = E.lastName;
    firstName = E.firstName;
    idNumber = E.idNumber;
    salary = E.salary;
}

//-------------------------- operator= ---------------------------------------
Employee& Employee::operator=(const Employee& E) {
    if (&E != this) {
        idNumber = E.idNumber;
        salary = E.salary;
        lastName = E.lastName;
        firstName = E.firstName;
    }
    return *this;
}

//----------------------------- setData ------------------------------------
// set data from file
bool Employee::setData(ifstream& inFile) {
    inFile >> lastName >> firstName >> idNumber >> salary;
    return idNumber >= 0 && idNumber <= MAXID && salary >= 0;
}

//------------------------------- < ----------------------------------------
// < defined by value of name
bool Employee::operator<(const Employee& E) const {
    return lastName < E.lastName || (lastName == E.lastName && firstName < E.firstName);
}

//------------------------------- <= ----------------------------------------
// <= defined in terms of < and ==
bool Employee::operator<=(const Employee& E) const {
    return *this < E || *this == E;
}

//------------------------------- > ----------------------------------------
// > defined by value of name
bool Employee::operator>(const Employee& E) const {
    return lastName > E.lastName || (lastName == E.lastName && firstName > E.firstName);
}

//------------------------------- >= ----------------------------------------
// >= defined in terms of > and ==
bool Employee::operator>=(const Employee& E) const {
    return *this > E || *this == E;
}

//----------------- operator == (equality) ----------------
// if name of calling and passed object are equal,
// return true, otherwise false
bool Employee::operator==(const Employee& E) const {
    return lastName == E.lastName && firstName == E.firstName;
}

//----------------- operator != (inequality) ----------------
// return opposite value of operator==
bool Employee::operator!=(const Employee& E) const {
    return !(*this == E);
}

//------------------------------- << ---------------------------------------
// display Employee object
ostream& operator<<(ostream& output, const Employee& E) {
    output << setw(4) << E.idNumber << setw(7) << E.salary << " " << E.lastName << " " << E.firstName << endl;
    return output;
}

I was planning to add a NULL-pointer check anyway; with the guard above in place, it can be tested on a list that may or may not contain the data being checked. Thanks to whoever can help. As usual, this is for a course, so I don't expect or want the answer, but any tips as to what might be going wrong will help immensely!
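As a cross-check on the traversal logic itself, here is the same guard-the-pointer loop in Java, where running past the end yields a null reference test instead of an access violation; the SimpleList and retrieve names below are illustrative, not part of the course assignment.

// Generic singly linked list with the same retrieve contract as the C++
// version: the first matching element is returned without being removed.
class SimpleList<T> {
    private static class Node<T> {
        final T data;
        Node<T> next;
        Node(T data, Node<T> next) { this.data = data; this.next = next; }
    }

    private Node<T> head;

    public void push(T value) {
        head = new Node<T>(value, head);  // insert at the front
    }

    // Walk until the reference runs out; the cur != null guard is the exact
    // analogue of the NULL check that the C++ for(;;) loop was missing.
    public T retrieve(T target) {
        for (Node<T> cur = head; cur != null; cur = cur.next) {
            if (cur.data != null && cur.data.equals(target)) {
                return cur.data;  // first match wins; duplicates are ignored
            }
        }
        return null;  // caller tests for null instead of a bool flag
    }
}

public class RetrieveDemo {
    public static void main(String[] args) {
        SimpleList<String> company = new SimpleList<String>();
        company.push("duck donald");
        company.push("mouse mickey");
        System.out.println(company.retrieve("duck donald"));  // prints: duck donald
    }
}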

    Read the article

  • Array structure returned by Yii's model

    - by user1104955
    I am a Yii beginner and am running into a bit of a wall and hope someone will be able to help me get back onto track. I think this might be a fairly straight forward question to the seasoned Yii user. So here goes... In the controller, let's say I run the following call to the model- $variable = Post::model()->findAll(); All works fine and I pass the variable into the view. Here's where I get pretty stuck. The array that is returned in the above query is far more complex than I anticipated and I'm struggling to make sense of it. Here's a sample- print_r($variable); gives- Array ( [0] => Post Object ( [_md:CActiveRecord:private] => CActiveRecordMetaData Object ( [tableSchema] => CMysqlTableSchema Object ( [schemaName] => [name] => tbl_post [rawName] => `tbl_post` [primaryKey] => id [sequenceName] => [foreignKeys] => Array ( ) [columns] => Array ( [id] => CMysqlColumnSchema Object ( [name] => id [rawName] => `id` [allowNull] => [dbType] => int(11) [type] => integer [defaultValue] => [size] => 11 [precision] => 11 [scale] => [isPrimaryKey] => 1 [isForeignKey] => [autoIncrement] => 1 [_e:CComponent:private] => [_m:CComponent:private] => ) [post] => CMysqlColumnSchema Object ( [name] => post [rawName] => `post` [allowNull] => [dbType] => text [type] => string [defaultValue] => [size] => [precision] => [scale] => [isPrimaryKey] => [isForeignKey] => [autoIncrement] => [_e:CComponent:private] => [_m:CComponent:private] => ) ) [_e:CComponent:private] => [_m:CComponent:private] => ) [columns] => Array ( [id] => CMysqlColumnSchema Object ( [name] => id [rawName] => `id` [allowNull] => [dbType] => int(11) [type] => integer [defaultValue] => [size] => 11 [precision] => 11 [scale] => [isPrimaryKey] => 1 [isForeignKey] => [autoIncrement] => 1 [_e:CComponent:private] => [_m:CComponent:private] => ) [post] => CMysqlColumnSchema Object ( [name] => post [rawName] => `post` [allowNull] => [dbType] => text [type] => string [defaultValue] => [size] => [precision] => [scale] => [isPrimaryKey] => [isForeignKey] => [autoIncrement] => [_e:CComponent:private] => [_m:CComponent:private] => ) ) [relations] => Array ( [responses] => CHasManyRelation Object ( [limit] => -1 [offset] => -1 [index] => [through] => [joinType] => LEFT OUTER JOIN [on] => [alias] => [with] => Array ( ) [together] => [scopes] => [name] => responses [className] => Response [foreignKey] => post_id [select] => * [condition] => [params] => Array ( ) [group] => [join] => [having] => [order] => [_e:CComponent:private] => [_m:CComponent:private] => ) ) [attributeDefaults] => Array ( ) [_model:CActiveRecordMetaData:private] => Post Object ( [_md:CActiveRecord:private] => CActiveRecordMetaData Object *RECURSION* [_new:CActiveRecord:private] => [_attributes:CActiveRecord:private] => Array ( ) [_related:CActiveRecord:private] => Array ( ) [_c:CActiveRecord:private] => [_pk:CActiveRecord:private] => [_alias:CActiveRecord:private] => t [_errors:CModel:private] => Array ( ) [_validators:CModel:private] => [_scenario:CModel:private] => [_e:CComponent:private] => [_m:CComponent:private] => ) ) [_new:CActiveRecord:private] => [_attributes:CActiveRecord:private] => Array ( [id] => 1 [post] => User Post ) [_related:CActiveRecord:private] => Array ( ) [_c:CActiveRecord:private] => [_pk:CActiveRecord:private] => 1 [_alias:CActiveRecord:private] => t [_errors:CModel:private] => Array ( ) [_validators:CModel:private] => [_scenario:CModel:private] => update [_e:CComponent:private] => [_m:CComponent:private] => ) ) [sorry if there's an easier way to show this 
array, I'm not aware of it] Can anyone explain why the model returns such a complex array? It doesn't seem to matter which tables, columns, or relations the application uses; they all seem to come back in this format. Also, can someone explain the structure to me so that I can isolate the values I want to recover? Many thanks in advance, Nick

    Read the article

  • 26 Days: Countdown to Oracle OpenWorld 2012

    - by Michael Snow
Welcome to our countdown to Oracle OpenWorld! Oracle OpenWorld 2012 is just around the corner. In less than 26 days, San Francisco will be invaded by an expected 50,000 people from all over the world. Here on the Oracle WebCenter team, we've all been working to help make the experience a great one for all our WebCenter customers. For a sneak peek, we'll be spending this week giving you a teaser of what to look forward to if you are joining us in San Francisco from September 30th through October 4th. We have Oracle WebCenter sessions covering all topics imaginable. Take a look and use the tools we provide to build out your schedule in advance and reserve your seats in your favorite sessions. That gives you plenty of time to plan for your week with us in San Francisco. If, unfortunately, your boss denied your request to attend, there are still some ways that you can join in the experience virtually On-Demand.

This year we are expanding even more up north of Market Street and will be taking over Union Square as well. Check out this map of San Francisco to get a sense of how much of a footprint Oracle OpenWorld has grown to this year. With so much to see and so many sessions to learn from, it's no wonder that people get excited. Add to that a good mix of fun and all of the possible WebCenter sessions you could attend, and you won't want to sleep at all to take full advantage of such an opportunity. We'll also have our annual WebCenter Customer Appreciation reception; stay tuned this week for more info on registration to make sure you'll be able to join us. If you've been following the America's Cup at all and believe in EXTREME PERFORMANCE, you'll definitely want to take a look at this video from last year's OpenWorld keynote.

Important OpenWorld Links:

  • Attendee / Presenters Toolkit
  • Oracle Schedule Builder
  • WebCenter Sessions (listed in the catalog under Fusion Middleware as "Portals, Sites, Content, and Collaboration")
  • Oracle Music Festival - AMAZING line up!!
  • Oracle Customer Appreciation Night - LOOK HERE!!
  • Oracle OpenWorld LIVE On-Demand

Here are all the WebCenter sessions broken down by day for your viewing pleasure.

Monday, October 1st

CON8885 - Simplify CRM Engagement with Contextual Collaboration
Are your sales teams disconnected and disengaged? Do you want a tool for easily connecting expertise across your organization and providing visibility into the complete sales process? Do you want a way to enhance and retain organization knowledge? Oracle Social Network is the answer. Attend this session to learn how to make CRM easy, effective, and efficient for use across virtual sales teams. Also learn how Oracle Social Network can drive sales force collaboration with natural conversations throughout the sales cycle, promote sales team productivity through purposeful social networking without the noise, and build cross-team knowledge by integrating conversations with CRM and other business applications.

CON8268 - Oracle WebCenter Strategy: Engaging Your Customers. Empowering Your Business
Oracle WebCenter is a user engagement platform for social business, connecting people and information. Attend this session to learn about the Oracle WebCenter strategy, and understand where Oracle is taking the platform to help companies engage customers, empower employees, and enable partners. Business success starts with ensuring that everyone is engaged with the right people and the right information and can access what they need through the channel of their choice—Web, mobile, or social. Are you giving customers, employees, and partners the best-possible experience? Come learn how you can!

HOL10208 - Add Social Capabilities to Your Enterprise Applications
Oracle Social Network enables you to add real-time collaboration capabilities into your enterprise applications, so that conversations can happen directly within your business systems. In this hands-on lab, you will try out the Oracle Social Network product to collaborate with other attendees, using real-time conversations with document sharing capabilities. Next you will embed social capabilities into a sample Web-based enterprise application, using embedded UI components. Experts will also write simple REST-based integrations, using the Oracle Social Network API to programmatically create social interactions.

CON8893 - Improve Employee Productivity with Intuitive and Social Work Environments
Social technologies have already transformed the ways customers, employees, partners, and suppliers communicate and stay informed. Forward-thinking organizations today need technologies and infrastructures to help them advance to the next level and integrate social activities with business applications to deliver a user experience that simplifies business processes and enterprise application engagement. Attend this session to hear from an innovative Oracle Social Network customer and learn how you can improve productivity with intuitive and social work environments and empower your employees with innovative social tools to enable contextual access to content and dynamic personalization of solutions.

CON8270 - Oracle WebCenter Content Strategy and Vision
Oracle WebCenter provides a strategic content infrastructure for managing documents, images, e-mails, and rich media files. With a single repository, organizations can address any content use case, such as accounts payable, HR onboarding, document management, compliance, records management, digital asset management, or Website management. In this session, learn about future plans for how Oracle WebCenter will address new use cases as well as new integrations with Oracle Fusion Middleware and Oracle Applications, leveraging your investments by making your users more productive and error-free.

CON8269 - Oracle WebCenter Sites Strategy and Vision
Oracle's Web experience management solution, Oracle WebCenter Sites, enables organizations to use the online channel to drive customer acquisition and brand loyalty. It helps marketers and business users easily create and manage contextually relevant, social, interactive online experiences across multiple channels on a global scale. In this session, learn about future plans for how Oracle WebCenter Sites will provide you with the tools, capabilities, and integrations you need in order to continue to address your customers' evolving requirements for engaging online experiences and keep moving your business forward.

CON8896 - Living with SharePoint
SharePoint is a popular platform, but it's not always the best fit for Oracle customers. In this session, you'll discover the technical and nontechnical limitations and pitfalls of SharePoint and learn about Oracle alternatives for collaboration, portals, enterprise and Web content management, social computing, and application integration. The presentation shows you how to integrate with SharePoint when business or IT requirements dictate and covers cloud-based (Office 365) and on-premises versions of SharePoint. Presented by a former Microsoft director of SharePoint product management and backed by independent customer research, this session will prepare you to answer the question "Why don't we just use SharePoint for that?" the next time it comes up in your organization.

CON7843 - Content-Enabling Enterprise Processes with Oracle WebCenter
Organizations today continually strive to automate business processes, reduce costs, and improve efficiency. Many business processes are content-intensive and unstructured, requiring ad hoc collaboration, and distributed in nature, requiring many approvals and generating huge volumes of paper. In this session, learn how Oracle and SYSTIME have partnered to help a customer content-enable its enterprise with Oracle WebCenter Content and Oracle WebCenter Imaging 11g and integrate them with Oracle Applications.

CON6114 - Tape Robotics' Newest Superhero: Now Fueled by Oracle Software
For small, midsize, and rapidly growing businesses that want the most energy-efficient, scalable storage infrastructure to meet their rapidly growing data demands, Oracle's most recent addition to its award-winning tape portfolio leverages several pieces of Oracle software. With Oracle Linux, Oracle WebLogic, and Oracle Fusion Middleware tools, the library achieves a higher level of usability than previous products while offering customers a familiar interface for management, plus ease of use. This session examines the competitive advantages of the tape library and how Oracle software raises customer satisfaction. Learn how the combination of Oracle engineered systems, Oracle Secure Backup, and Oracle's StorageTek tape libraries provide end-to-end coverage of your data.

CON9437 - Mobile Access Management
With more than five billion mobile devices on the planet and an increasing number of users using their own devices to access corporate data and applications, securely extending identity management to mobile devices has become a hot topic. This session focuses on how to extend your existing identity management infrastructure and policies to securely and seamlessly enable mobile user access.

CON7815 - Customer Experience Online in Cloud: Oracle WebCenter Sites, Oracle ATG Apps, Oracle Exalogic
Oracle WebCenter Sites and Oracle's ATG product line together can provide a compelling marketing and e-commerce experience. When you couple them with the extreme performance of Oracle Exalogic, you'll see unmatched scalability that provides you with a true cloud-based solution. In this session, you'll learn how running Oracle WebCenter Sites and ATG applications on Oracle Exalogic delivers both a private and a public cloud experience. Find out what it takes to get these systems working together and delivering engaging Web experiences. Even if you aren't considering Oracle Exalogic today, the rich Web experience of Oracle WebCenter, paired with the depth of the ATG product line, can provide your business full support, from merchandising through sale completion.

CON8271 - Oracle WebCenter Portal Strategy and Vision
To innovate and keep a competitive edge, organizations need to leverage the power of agile and responsive Web applications. Oracle WebCenter Portal enables you to do just that, by delivering intuitive user experiences for enterprise applications to drive innovation with composite applications and mashups. Attend this session to learn firsthand from customers how Oracle WebCenter Portal extends the value of existing enterprise applications, business processes, and content; delivers a superior business user experience; and maximizes limited IT resources.

CON8880 - The Connected Customer Experience Begins with the Online Channel
There's a lot of talk these days about how to connect the customer journey across various touchpoints—from Websites and e-commerce to call centers and in-store—to provide experiences that are more relevant and engaging and ultimately gain competitive edge. Doing it all at once isn't a realistic objective, so where do you start? Come to this session, and hear about three steps you can take that can help you begin your journey toward delivering the connected customer experience. You'll hear how Oracle now has an integrated digital marketing platform for your corporate Website, your e-commerce site, your self-service portal, and your marketing and loyalty campaigns, and you'll learn what you can do today to begin executing on your customer experience initiatives.

GEN11451 - General Session: Building Mobile Applications with Oracle Cloud
With the prevalence of smart mobile devices, companies are facing an increased demand to provide access to data and applications from new channels. However, developing applications for mobile devices poses some unique challenges. Come to this session to learn how Oracle addresses these challenges, offering a simpler way to develop and deploy cross-device mobile applications. See how Oracle Cloud enables you to access applications, data, and services from mobile channels in an easier way.

CON8272 - Oracle Social Network Strategy and Vision
One key way of increasing employee productivity is by bringing people, processes, and information together—providing new social capabilities to enable business users to quickly correspond and collaborate on business activities. Oracle WebCenter provides a user engagement platform with social and collaborative technologies to empower business users to focus on their key business processes, applications, and content in the context of their role and process. Attend this session to hear how the latest social capabilities in Oracle Social Network are enabling organizations to transform themselves into social businesses.

Tuesday, October 2nd

HOL10194 - Enterprise Content Management Simplified: Oracle WebCenter Content's Next-Generation UI
Regardless of the nature of your business, unstructured content underpins many of its daily functions. Whether you are working with traditional presentations, spreadsheets, or text documents—or even with digital assets such as images and multimedia files—your content needs to be accessible and manageable in convenient and intuitive ways to make working with the content easier. Additionally, you need the ability to easily share documents with coworkers to facilitate a collaborative working environment. Come to this session to see how Oracle WebCenter Content's next-generation user interface helps modern knowledge workers easily manage personal and enterprise documents in a collaborative environment.

CON8877 - Develop a Mobile Strategy with Oracle WebCenter: Engage Customers, Employees, and Partners
Mobile technology has gone from nice-to-have to a cornerstone of user engagement. Mobile access enables users to have information available at their fingertips, enabling them to take action the moment they make a decision, interact in the moment of convenience, and take advantage of new service offerings in their preferred channels. All your employees have your mobile applications in their pocket; now what are you going to do? It is a critical step for companies to think through what their employees, customers, and partners really need on their devices. Attend this session to see how Oracle WebCenter enables you to better engage your customers, employees, and partners by providing a unified experience across multiple channels.

CON9447 - Enabling Access for Hundreds of Millions of Users
How do you grow your business by identifying, authenticating, authorizing, and federating users on the Web, leveraging social identity and the open source OAuth protocol? How do you scale your access management solution to support hundreds of millions of users? With social identity support out of the box, Oracle's access management solution is also benchmarked for 250-million-user deployment according to real-world customer scenarios. In this session, you will learn about the social identity capability and the 250-million-user benchmark testing of Oracle Access Manager and Oracle Adaptive Access Manager running on Oracle Exalogic and Oracle Exadata.

HOL10207 - Build an Intranet Portal with Oracle WebCenter
In this hands-on lab, you'll work with Oracle WebCenter Portal and Oracle WebCenter Content to build out an enterprise portal that maximizes the productivity of teams and individual contributors. Using browser-based tools, you'll manage site resources such as page styles, templates, and navigation. You'll edit content stored in Oracle WebCenter Content directly from your portal. You'll also experience the latest features that promote collaboration, social networking, and personal productivity.

CON2906 - Get Proactive: Best Practices for Maintaining Oracle Fusion Middleware
You chose Oracle Fusion Middleware products to help your organization deliver superior business results. Now learn how to take full advantage of your software with all the great tools, resources, and product updates you're entitled to through Oracle Support. In this session, Oracle product experts provide proven best practices to help you work more efficiently, plan and prepare for upgrades and patching more effectively, and manage risk. Topics include configuration management tools, remote diagnostics, My Oracle Support Community, and My Oracle Support Lifecycle Advisors. New users and Oracle Fusion Middleware experts alike are guaranteed to leave with fresh ideas and practical, easy-to-implement next steps.

CON8878 - Oracle WebCenter's Cloud Strategy: From Social and Platform Services to Mashups
Cloud computing represents a paradigm shift in how we build applications, automate processes, collaborate, and share and in how we secure our enterprise. Additionally, as you adopt cloud-based services in your organization, it's likely that you will still have many critical on-premises applications running. With these mixed environments, multiple user interfaces, different security, and multiple datasources and content sources, how do you start evolving your strategy to account for these challenges? Oracle WebCenter offers a complete array of technologies enabling you to solve these challenges and prepare you for the cloud. Attend this session to learn how you can use Oracle WebCenter in the cloud as well as create on-premises and cloud application mash-ups.

CON8901 - Optimize Enterprise Business Processes with Oracle WebCenter and Oracle BPM
Do you have business processes that span multiple applications? Are you grappling with how to have visibility across these business processes; how to manage content that is associated with these processes; and, most importantly, how to model and optimize these business processes? Attend this session to hear how Oracle WebCenter and Oracle Business Process Management provide a unique set of integrated solutions to provide a composite application dashboard across these business processes and offer a solution for content-centric business processes.

CON8883 - Deliver Engaging Interfaces to Oracle Applications with Oracle WebCenter
Critical business processes live within enterprise applications, and application users need to manage and execute these processes as effectively as possible. Oracle provides a comprehensive user engagement platform to increase user productivity and optimize overall processes within Oracle Applications—Oracle E-Business Suite and Oracle's Siebel, PeopleSoft, and JD Edwards product families—and third-party applications. Attend this session to learn how you can integrate these applications with Oracle WebCenter to deliver composite application dashboards to your end users—whether they are your customers, partners, or employees—for enhanced usability and Web 2.0–enabled enterprise portals.

Wednesday, October 3rd

CON8895 - Future-Ready Intranets: How Aramark Re-engineered the Application Landscape
There are essential techniques and technologies you can use to deliver employee portals that garner higher productivity, improve business efficiency, and increase user engagement. Attend this session to learn how you can leverage Oracle WebCenter Portal as a user engagement platform for bringing together business process management, enterprise content management, and business intelligence into a highly relevant and integrated experience. Hear how Aramark has leveraged Oracle WebCenter Portal and Oracle WebCenter Content to deliver a unified workspace providing simpler navigation and processing, consolidation of tools, easy access to information, integrated search, and single sign-on.

CON8886 - Content Consolidation: Save Money, Increase Efficiency, and Eliminate Silos
Organizations are looking for ways to save money and be more efficient. With content in many different places, it's difficult to know where to look for a document and whether the document is the most current version. With Oracle WebCenter, content can be consolidated into one best-of-breed repository that is secure, scalable, and integrated with your business processes and applications. Users can find the content they need, where they need it, and ensure that it is the right content. This session covers content challenges that affect your business; content consolidation that can lead to savings in storage and administration costs and can lower risks; and how companies are realizing savings.

CON8911 - Improve Online Experiences for Customers and Partners with Self-Service Portals
Are you able to provide your customers and partners an easy-to-use online self-service experience? Are you processing high-volume transactions and struggling with call center bottlenecks or back-end systems that won't integrate, causing order delays and customer frustration? Are you looking to target content such as product and service offerings to your end users? This session shares approaches to providing targeted delivery as well as strategies and best practices for transforming your business by providing an intuitive user experience for your customers and partners.

CON6156 - Top 10 Ways to Integrate Oracle WebCenter Content
This session covers 10 common ways to integrate Oracle WebCenter Content with other enterprise applications and middleware. It discusses out-of-the-box modules that provide expanded features in Oracle WebCenter Content—such as enterprise search, SOA, and BPEL—as well as developer tools you can use to create custom integrations. The presentation also gives guidance on which integration option may work best in your environment.

HOL10207 - Build an Intranet Portal with Oracle WebCenter
In this hands-on lab, you'll work with Oracle WebCenter Portal and Oracle WebCenter Content to build out an enterprise portal that maximizes the productivity of teams and individual contributors. Using browser-based tools, you'll manage site resources such as page styles, templates, and navigation. You'll edit content stored in Oracle WebCenter Content directly from your portal. You'll also experience the latest features that promote collaboration, social networking, and personal productivity.

CON7817 - Migration to Oracle WebCenter Imaging 11g
Customers today continually strive to automate business processes, reduce costs, and improve efficiency. The accounts payable process—which is often distributed in nature, requires many approvals, and generates huge volumes of paper invoices—is automated by many customers. In this session, learn how Oracle and SYSTIME have partnered to help a customer migrate its existing Oracle Imaging and Process Management Release 7.6 to the latest Oracle WebCenter Imaging 11g and integrate it with Oracle's JD Edwards family of products.

CON8910 - How to Engage Customers Across Web, Mobile, and Social Channels
Whether on desktops at the office, on tablets at home, or on mobile phones when on the go, today's customers are always connected. To engage today's customers, you need to make the online customer experience connected and consistent across a host of devices and multiple channels, including Web, mobile, and social networks. Managing this multichannel environment can result in lots of headaches without the right tools. Attend this session to learn how Oracle WebCenter Sites solves the challenge of multichannel customer engagement.

HOL10206 - Oracle WebCenter Sites 11g: Transforming the Content Contributor Experience
Oracle WebCenter Sites 11g makes it easy for marketers and business users to contribute to and manage Websites with the new visual, contextual, and intuitive Web authoring interface. In this hands-on lab, you will create and manage content for a sports-themed Website, using many of the new and enhanced features of the 11g release.

CON8900 - Building Next-Generation Portals: An Interactive Customer Panel Discussion
Social and collaborative technologies have changed how people interact, learn, and collaborate, and providing a modern, social Web presence is imperative to remain competitive in today's market. Can your business benefit from a more collaborative and interactive portal environment for employees, customers, and partners? Attend this session to hear from Oracle WebCenter Portal customers as they share their strategies and best practices for providing users with a modern experience that adapts to their needs and includes personalized access to content in context. The panel also addresses how customers have benefited from creating next-generation portals by migrating from older portal technologies to Oracle WebCenter Portal.

CON9625 - Taking Control of Oracle WebCenter Security
Organizations are increasingly looking to extend their Oracle WebCenter portal for social business, to serve external users and provide seamless access to the right information. In particular, many organizations are extending Oracle WebCenter in a business-to-business scenario requiring secure identification and authorization of business partners and their users. This session focuses on how customers are leveraging, securing, and providing access control to Oracle WebCenter portal and mobile solutions. You will learn best practices and hear real-world examples of how to provide flexible and granular access control for Oracle WebCenter deployments, using Oracle Platform Security Services and Oracle Access Management Suite product offerings.

CON8891 - Extending Social into Enterprise Applications and Business Processes
Oracle Social Network is an extensible social platform that enables contextual collaboration within enterprise applications and business processes, providing relevant data from across various enterprise systems in one place. Attend this session to see how an Oracle Social Network customer is integrating multiple applications—such as CRM, HCM, and business processes—into Oracle Social Network and Oracle WebCenter to enable individuals and teams to solve complex cross-organizational business problems more effectively by utilizing the social enterprise.

Thursday, October 4th

CON8899 - Becoming a Social Business: Stories from the Front Lines of Change
What does it really mean to be a social business? How can you change your organization to embrace social approaches? What pitfalls do you need to avoid? In this lively panel discussion, customer and industry thought leaders in social business explore these topics and more as they share their stories of the good, the bad, and the ugly that can happen when embracing social methods and technologies to improve business success. Using moderated questions and open Q&A from the audience, the panel discusses vital topics such as the critical factors for success, the major issues to avoid, how to gain senior executive support for social efforts, how to handle undesired behavior, and how to measure business impact. It takes a thought-provoking look at becoming a social business from the inside.

CON6851 - Oracle WebCenter and Oracle Business Intelligence Enterprise Edition to Create Vendor Portals
Large manufacturers of grocery items routinely find themselves depending on the inventory management expertise of their wholesalers and distributors. Inventory costs can be managed more efficiently by the manufacturers if they have better insight into the inventory levels of items carried by their distributors. This creates a unique opportunity for distributors and wholesalers to leverage this knowledge into a revenue-generating subscription service. Oracle Business Intelligence Enterprise Edition and Oracle WebCenter Portal play a key part in enabling creation of business-managed business intelligence portals for vendors. This session discusses one customer that implemented this by leveraging Oracle WebCenter and Oracle Business Intelligence Enterprise Edition.

CON8879 - Provide a Personalized and Consistent Customer Experience in Your Websites and Portals
Your customers engage with your company online in different ways throughout their journey—from prospecting by acquiring information on your corporate Website to transacting through self-service applications on your customer portal—and then the cycle begins again when they look for new products and services. Ensuring that the customer experience is consistent and personalized across online properties—from branding and content to interactions and transactions—can be a daunting task. Oracle WebCenter enables you to speak and interact with your customers with one voice across your Websites and portals by providing an integrated platform for delivery of self-service and engagement that unifies and personalizes the online experience. Learn more in this session.

CON8898 - Land Mines, Potholes, and Dirt Roads: Navigating the Way to ECM Nirvana
Ten years ago, people were predicting that by this time in history, we'd be some kind of utopian paperless society. As we all know, we're not there yet, but are we getting closer? What is keeping companies from driving down the road to enterprise content management bliss? Most people understand that using ECM as a central platform enables organizations to expedite document-centric processes, but most business processes in organizations are still heavily paper-based. Many of these processes could be automated and improved with an ECM platform infrastructure. In this panel discussion, you'll hear from Oracle WebCenter customers that have already solved some of these challenges as they share their strategies for success and roads to avoid along your journey.

CON8908 - Oracle WebCenter Portal: Creating and Using Content Presenter Templates
Oracle WebCenter Portal applications use task flows to display and integrate content stored in the Oracle WebCenter Content server. Among the most flexible task flows is Content Presenter, which renders various types of content on an Oracle WebCenter Portal page. Although Oracle WebCenter Portal comes with a set of predefined Content Presenter templates, developers can create their own templates for specific rendering needs. This session shows the lifecycle of developing Content Presenter task flows, including how to create, package, import, modify at runtime, and use such templates. In addition to simple examples with Oracle Application Development Framework (Oracle ADF) UI elements to render the content, it shows how to use other UI technologies, CSS files, and JavaScript libraries.

CON8897 - Using Web Experience Management to Drive Online Marketing Success
Every year, the online channel becomes more imperative for driving organizational top-line revenue, but for many companies, mastering how to best market their products and services in a fast-evolving online world with high customer expectations for personalized experiences can be a complex proposition. Come to this panel discussion, and hear directly from online marketers how they are succeeding today by using Web experience management to drive marketing success, using capabilities such as targeting and optimization, user-generated content, mobile site publishing, and site visitor personalization to deliver engaging online experiences.

CON8892 - Oracle's Journey to Social Business
Social business is a revolution, one that is causing rapidly accelerating change in how companies and customers engage with one another and how employees work together. Oracle's goal in becoming a social business is to create a socially connected organization in which working collaboratively across geographical locations, lines of business, and management chains is second nature, enabling innovative solutions to business challenges. We can achieve this by connecting the right people, finding the right content, communicating with the right people, collaborating at the right time, and building the right communities in the right context—all ready in the CLOUD. Attend this session to see how Oracle is transforming itself into a social business.

If you've read all the way to the end here, we are REALLY looking forward to seeing you in San Francisco.

    Read the article

  • Tip/Trick: Fix Common SEO Problems Using the URL Rewrite Extension

    - by ScottGu
Search engine optimization (SEO) is important for any publicly facing web-site. A large % of traffic to sites now comes directly from search engines, and improving your site’s search relevancy will lead to more users visiting your site from search engine queries. This can directly or indirectly increase the money you make through your site. This blog post covers how you can use the free Microsoft URL Rewrite Extension to fix a bunch of common SEO problems that your site might have. It takes less than 15 minutes (and no code changes) to apply 4 simple URL Rewrite rules to your site, and in doing so help search engines drive more visitors and traffic to your site. The techniques below work equally well with both ASP.NET Web Forms and ASP.NET MVC based sites. They also work with all versions of ASP.NET (and even with non-ASP.NET content). [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] Measuring the SEO of your website with the Microsoft SEO Toolkit A few months ago I blogged about the free SEO Toolkit that we’ve shipped. This useful tool enables you to automatically crawl/scan your site for SEO correctness, and it then flags any SEO issues it finds. I highly recommend downloading and using the tool against any public site you work on. It makes it easy to spot SEO issues you might have in your site, and pinpoint ways to optimize it further. Below is a simple example of a report I ran against one of my sites (www.scottgu.com) prior to applying the URL Rewrite rules I’ll cover later in this blog post: Search Relevancy and URL Splitting Two of the important things that search engines evaluate when assessing your site’s “search relevancy” are: How many other sites link to your content. Search engines assume that if a lot of people around the web are linking to your content, then it is likely useful and so weight it higher in relevancy. The uniqueness of the content it finds on your site. If search engines find that the content is duplicated in multiple places around the Internet (or on multiple URLs on your site) then it is likely to drop the relevancy of the content. One of the things you want to be very careful to avoid when building public facing sites is allowing different URLs to retrieve the same content within your site. Doing so will hurt you in both of the situations above. In particular, allowing external sites to link to the same content with multiple URLs will cause your link-count and page-ranking to be split up across those different URLs (and so give you a smaller page rank than you would otherwise have if it were just one URL). Preventing external sites from linking to you in different ways sounds easy in theory – but you might wonder what exactly this means in practice and how you avoid it. 4 Really Common SEO Problems Your Sites Might Have Below are 4 really common scenarios that can cause your site to inadvertently expose multiple URLs for the same content. When this happens, external sites linking to yours will end up splitting their page links across multiple URLs - and as a result cause you to have a lower page ranking with search engines than you deserve. SEO Problem #1: Default Document IIS (and other web servers) supports the concept of a “default document”. This allows you to avoid having to explicitly specify the page you want to serve at either the root of the web-site/application, or within a sub-directory.  
This is convenient – but means that by default this content is available via two different publicly exposed URLs (which is bad). For example: http://scottgu.com/ http://scottgu.com/default.aspx SEO Problem #2: Different URL Casings Web developers often don’t realize URLs are case sensitive to search engines on the web. This means that search engines will treat the following links as two completely different URLs: http://scottgu.com/Albums.aspx http://scottgu.com/albums.aspx SEO Problem #3: Trailing Slashes Consider the below two URLs – they might look the same at first, but they are subtly different. The trailing slash creates yet another situation that causes search engines to treat the URLs as different and so split search rankings: http://scottgu.com http://scottgu.com/ SEO Problem #4: Canonical Host Names Some sites support a web-site with both a leading “www” hostname prefix as well as just the hostname itself. This causes search engines to treat the URLs as different and split search rankings: http://scottgu.com/albums.aspx/ http://www.scottgu.com/albums.aspx/ How to Easily Fix these SEO Problems in 10 minutes (or less) using IIS Rewrite If you haven’t been careful when coding your sites, chances are you are suffering from one (or more) of the above SEO problems. Addressing these issues will improve your search engine relevancy ranking and drive more traffic to your site. The “good news” is that fixing the above 4 issues is really easy using the URL Rewrite Extension. This is a completely free Microsoft extension available for IIS 7.x (on Windows Server 2008, Windows Server 2008 R2, Windows 7 and Windows Vista). The great thing about using the IIS Rewrite extension is that it allows you to fix the above problems *without* having to change any code within your applications. You can easily install the URL Rewrite Extension in under 3 minutes using the Microsoft Web Platform Installer (a free tool we ship that automates setting up web servers and development machines). Just click the green “Install Now” button on the URL Rewrite Spotlight page to install it on your Windows Server 2008, Windows 7 or Windows Vista machine: Once installed you’ll find that a new “URL Rewrite” icon is available within the IIS 7 Admin Tool: Double-clicking the icon will open up the URL Rewrite admin panel – which will display the list of URL Rewrite rules configured for a particular application or site: Notice that our rewrite rule list above is currently empty (which is the default when you first install the extension). We can click the “Add Rule…” link button in the top-right of the panel to add and enable new URL Rewriting logic for our site. Scenario 1: Handling Default Document Scenarios One of the SEO problems I discussed earlier in this post was the scenario where the “default document” feature of IIS causes you to inadvertently expose two URLs for the same content on your site. For example: http://scottgu.com/ http://scottgu.com/default.aspx We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the second URL to instead go to the first one. We will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve. Let’s look at how we can create such a rule. We’ll begin by clicking the “Add Rule” link in the screenshot above.  
This will cause the below dialog to display: We’ll select the “Blank Rule” template within the “Inbound rules” section to create a new custom URL Rewriting rule. This will display an empty pane like below: Don’t worry – setting up the above rule is easy. The following 4 steps explain how to do so: Step 1: Name the Rule Our first step will be to name the rule we are creating. Naming it with a descriptive name will make it easier to find and understand later. Let’s name this rule our “Default Document URL Rewrite” rule: Step 2: Setup the Regular Expression that Matches this Rule Our second step will be to specify a regular expression filter that will cause this rule to execute when an incoming URL matches the regex pattern. Don’t worry if you aren’t good with regular expressions - I suck at them too. The trick is to know someone who is good at them or copy/paste them from a web-site. Below we are going to specify the following regular expression as our pattern rule: (.*?)/?Default\.aspx$ This pattern will match any URL string that ends with Default.aspx. The "(.*?)" matches any preceding character zero or more times. The "/?" part says to match the slash symbol zero or one time. The "$" symbol at the end will ensure that the pattern will only match strings that end with Default.aspx. Combining all these regex elements allows this rule to work not only for the root of your web site (e.g. http://scottgu.com/default.aspx) but also for any application or subdirectory within the site (e.g. http://scottgu.com/photos/default.aspx). Because the “ignore case” checkbox is selected it will match both “Default.aspx” as well as “default.aspx” within the URL. One nice feature built-into the rule editor is a “Test pattern” button that you can click to bring up a dialog that allows you to test out a few URLs with the rule you are configuring: Above I've added a “products/default.aspx” URL and clicked the “Test” button. This will give me immediate feedback on whether the rule will execute for it. Step 3: Setup a Permanent Redirect Action We’ll then setup an action to occur when our regular expression pattern matches the incoming URL: In the dialog above I’ve changed the “Action Type” drop down to be a “Redirect” action. The “Redirect Type” will be a HTTP 301 Permanent redirect – which means search engines will follow it. I’ve also set the “Redirect URL” property to be: {R:1}/ This indicates that we want to redirect the web client requesting the original URL to a new URL that has the originally requested URL path - minus the "Default.aspx" in it. For example, requests for http://scottgu.com/default.aspx will be redirected to http://scottgu.com/, and requests for http://scottgu.com/photos/default.aspx will be redirected to http://scottgu.com/photos/. The "{R:N}" regex construct, where N >= 0, is called a back-reference and N is the back-reference index. In the case of our pattern "(.*?)/?Default\.aspx$", if the input URL is "products/Default.aspx" then {R:0} will contain "products/Default.aspx" and {R:1} will contain "products". We are going to use this {R:1}/ value to be the URL we redirect users to. 
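To make the pattern and back-reference behavior concrete outside of IIS, here is a minimal console sketch that runs the same regular expression through .NET's Regex class; the sample URLs are illustrative assumptions, and the sketch is not part of the rewrite rule itself:

using System;
using System.Text.RegularExpressions;

class BackReferenceDemo
{
    static void Main()
    {
        // The same pattern the rule uses; IgnoreCase mirrors the "ignore case" checkbox.
        var pattern = new Regex(@"(.*?)/?Default\.aspx$", RegexOptions.IgnoreCase);

        foreach (var url in new[] { "default.aspx", "products/Default.aspx" })
        {
            Match m = pattern.Match(url);
            if (m.Success)
            {
                // m.Groups[0] corresponds to {R:0}; m.Groups[1] corresponds to {R:1}.
                Console.WriteLine("{0} -> redirect to \"{1}/\"", url, m.Groups[1].Value);
            }
        }
        // Prints:
        // default.aspx -> redirect to "/"
        // products/Default.aspx -> redirect to "products/"
    }
}

Running it shows {R:1} carrying the path prefix ("products") while the root case yields an empty capture, which is exactly why the single {R:1}/ redirect URL works for both cases.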
Step 4: Apply and Save the Rule Our final step is to click the “Apply” button in the top right hand of the IIS admin tool – which will cause the tool to persist the URL Rewrite rule into our application’s root web.config file (under a <system.webServer/rewrite> configuration section): <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Because IIS 7.x and ASP.NET share the same web.config files, you can actually just copy/paste the above code into your web.config files using Visual Studio and skip the need to run the admin tool entirely.  This also makes adding/deploying URL Rewrite rules with your ASP.NET applications really easy. Step 5: Try the Rule Out Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site: http://scottgu.com/ http://scottgu.com/default.aspx Notice that the second URL automatically redirects to the first one.  Because it is a permanent redirect, search engines will follow the URL and should update the page ranking of http://scottgu.com to include links to http://scottgu.com/default.aspx as well. Scenario 2: Different URL Casing Another common SEO problem I discussed earlier in this post is that URLs are case sensitive to search engines on the web.  This means that search engines will treat the following links as two completely different URLs: http://scottgu.com/Albums.aspx http://scottgu.com/albums.aspx We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL to instead go to the second (all lower-case) one.  Like before, we will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve. To create such a rule we’ll click the “Add Rule” link in the URL Rewrite admin tool again.  This will cause the “Add Rule” dialog to appear again: Unlike the previous scenario (where we created a “Blank Rule”), with this scenario we can take advantage of a built-in “Enforce lowercase URLs” rule template.  When we click the “ok” button we’ll see the following dialog which asks us if we want to create a rule that enforces the use of lowercase letters in URLs: When we click the “Yes” button we’ll get a pre-written rule that automatically performs a permanent redirect if an incoming URL has upper-case characters in it – and automatically send users to a lower-case version of the URL: We can click the “Apply” button to use this rule “as-is” and have it apply to all incoming URLs to our site.  Because my www.scottgu.com site uses ASP.NET Web Forms, I’m going to make one small change to the rule we generated above – which is to add a condition that will ensure that URLs to ASP.NET’s built-in “WebResource.axd” handler are excluded from our case-sensitivity URL Rewrite logic.  URLs to the WebResource.axd handler will only come from server-controls emitted from my pages – and will never be linked to from external sites.  While my site will continue to function fine if we redirect these URLs to automatically be lower-case – doing so isn’t necessary and will add an extra HTTP redirect to many of my pages.  
The good news is that adding a condition that prevents my URL Rewriting rule from happening with certain URLs is easy. We simply need to expand the “Conditions” section of the form above. We can then click the “Add” button to add a condition clause. This will bring up the “Add Condition” dialog: Above I’ve entered {URL} as the Condition input – and said that this rule should only execute if the URL does not match a regex pattern which contains the string “WebResource.axd”. This will ensure that WebResource.axd URLs to my site will be allowed to execute just fine without having the URL be re-written to be all lower-case. Note: If you have static resources (like references to .jpg, .css, and .js files) within your site that currently use upper-case characters you’ll probably want to add additional condition filter clauses so that URLs to them also don’t get redirected to be lower-case (just add rules for patterns like .jpg, .gif, .js, etc). Your site will continue to work fine if these URLs get redirected to be lower case (meaning the site won’t break) – but it will cause an extra HTTP redirect to happen on your site for URLs that don’t need to be redirected for SEO reasons. So it makes sense to add a condition clause. When I click the “ok” button above and apply our lower-case rewriting rule the admin tool will save the following additional rule to our web.config file: <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>                 <rule name="Lower Case URLs" stopProcessing="true">                     <match url="[A-Z]" ignoreCase="false" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{ToLower:{URL}}" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Try the Rule Out Now that we’ve saved the rule, let’s try it out on our site. Try the following two URLs on my site: http://scottgu.com/Albums.aspx http://scottgu.com/albums.aspx Notice that the first URL (which has a capital “A”) automatically does a redirect to a lower-case version of the URL. Scenario 3: Trailing Slashes Another common SEO problem I discussed earlier in this post is the scenario of trailing slashes within URLs. The trailing slash creates yet another situation that causes search engines to treat the URLs as different and so split search rankings: http://scottgu.com http://scottgu.com/ We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL (that does not have a trailing slash) to instead go to the second one that does. Like before, we will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve. To create such a rule we’ll click the “Add Rule” link in the URL Rewrite admin tool again. This will cause the “Add Rule” dialog to appear again: The URL Rewrite admin tool has a built-in “Append or remove the trailing slash symbol” rule template. 
When we select it and click the “ok” button we’ll see the following dialog which asks us if we want to create a rule that automatically redirects users to a URL with a trailing slash if one isn’t present: Like within our previous lower-casing rewrite rule we’ll add one additional condition clause that will exclude WebResource.axd URLs from being processed by this rule. This will avoid an unnecessary redirect from happening for those URLs. When we click the “OK” button we’ll get a pre-written rule that automatically performs a permanent redirect if the URL doesn’t have a trailing slash – and if the URL is not processed by either a directory or a file. This will save the following additional rule to our web.config file: <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>                 <rule name="Lower Case URLs" stopProcessing="true">                     <match url="[A-Z]" ignoreCase="false" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{ToLower:{URL}}" />                 </rule>                 <rule name="Trailing Slash" stopProcessing="true">                     <match url="(.*[^/])$" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />                         <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{R:1}/" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Try the Rule Out Now that we’ve saved the rule, let’s try it out on our site. Try the following two URLs on my site: http://scottgu.com http://scottgu.com/ Notice that the first URL (which has no trailing slash) automatically does a redirect to a URL with the trailing slash. Because it is a permanent redirect, search engines will follow the URL and update the page ranking. Scenario 4: Canonical Host Names The final SEO problem I discussed earlier is the scenario where a site works with both a leading “www” hostname prefix as well as just the hostname itself. This causes search engines to treat the URLs as different and split search rankings: http://www.scottgu.com/albums.aspx http://scottgu.com/albums.aspx We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL (that has a www prefix) to instead go to the second URL. Like before, we will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve. To create such a rule we’ll click the “Add Rule” link in the URL Rewrite admin tool again. This will cause the “Add Rule” dialog to appear again: The URL Rewrite admin tool has a built-in “Canonical domain name” rule template. 
When we select it and click the “ok” button we’ll see the following dialog which asks us if we want to create a redirect rule that automatically redirects users to a primary host name URL: Above I’m entering the primary URL address I want to expose to the web: scottgu.com.  When we click the “OK” button we’ll get a pre-written rule that automatically performs a permanent redirect if the URL has another leading domain name prefix.  This will save the following additional rule to our web.config file: <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Cannonical Hostname">                     <match url="(.*)" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{HTTP_HOST}" pattern="^scottgu\.com$" negate="true" />                     </conditions>                     <action type="Redirect" url="http://scottgu.com/{R:1}" />                 </rule>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>                 <rule name="Lower Case URLs" stopProcessing="true">                     <match url="[A-Z]" ignoreCase="false" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{ToLower:{URL}}" />                 </rule>                 <rule name="Trailing Slash" stopProcessing="true">                     <match url="(.*[^/])$" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />                         <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{R:1}/" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Try the Rule Out Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site: http://www.scottgu.com/albums.aspx http://scottgu.com/albums.aspx Notice that the first URL (which has the “www” prefix) now automatically does a redirect to the second URL which does not have the www prefix.  Because it is a permanent redirect, search engines will follow the URL and update the page ranking. 4 Simple Rules for Improved SEO The above 4 rules are pretty easy to setup and should take less than 15 minutes to configure on existing sites you already have.  The beauty of using a solution like the URL Rewrite Extension is that you can take advantage of it without having to change code within your web-site – and without having to break any existing links already pointing at your site.  Users who follow existing links will be automatically redirected to the new URLs you wish to publish.  And search engines will start to give your site a higher search relevancy ranking – which will list your site higher in search results and drive more traffic to it. 
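Once all four rules are deployed, it is worth verifying that each non-canonical URL really answers with a 301 pointing at its canonical twin. Here is a minimal verification sketch, assuming the example URLs above and that the server returns an absolute Location header (both are assumptions; adjust for your own site):

using System;
using System.Net;

class RedirectCheck
{
    static void Main()
    {
        // Pairs of (non-canonical URL, expected canonical target).
        var cases = new[]
        {
            Tuple.Create("http://scottgu.com/default.aspx", "http://scottgu.com/"),
            Tuple.Create("http://scottgu.com/Albums.aspx", "http://scottgu.com/albums.aspx"),
            Tuple.Create("http://www.scottgu.com/albums.aspx", "http://scottgu.com/albums.aspx")
        };

        foreach (var c in cases)
        {
            var request = (HttpWebRequest)WebRequest.Create(c.Item1);
            request.AllowAutoRedirect = false;   // we want to inspect the 301 itself

            using (var response = (HttpWebResponse)request.GetResponse())
            {
                bool ok = response.StatusCode == HttpStatusCode.MovedPermanently
                          && response.Headers["Location"] == c.Item2;
                Console.WriteLine("{0} -> {1} ({2})",
                    c.Item1, response.Headers["Location"], ok ? "OK" : "UNEXPECTED");
            }
        }
    }
}

With AllowAutoRedirect disabled, HttpWebRequest hands back the 3xx response instead of following it, so the status code and Location header can be checked directly.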
Customizing your URL Rewriting rules further is easy to do either by editing the web.config file directly, or alternatively, just double-click the URL Rewrite icon within the IIS 7.x admin tool and it will list all the active rules for your web-site or application: Clicking any of the rules above will open the rules editor back up and allow you to tweak/customize/save them further. Summary Measuring and improving SEO is something every developer building a public-facing web-site needs to think about and focus on. If you haven’t already, download and use the SEO Toolkit to analyze the SEO of your sites today. New URL Routing features in ASP.NET MVC and ASP.NET Web Forms 4 make it much easier to build applications that have more control over the URLs that are published. Tools like the URL Rewrite Extension that I’ve talked about in this blog post make it much easier to improve the URLs that are published from sites you already have built today – without requiring you to change a lot of code. The URL Rewrite Extension provides a bunch of additional great capabilities – far beyond just SEO - as well. I’ll be covering these additional capabilities more in future blog posts. Hope this helps, Scott

    Read the article

  • Capturing and Transforming ASP.NET Output with Response.Filter

    - by Rick Strahl
During one of my Handlers and Modules sessions at DevConnections this week one of the attendees asked a question that I didn’t have an immediate answer for. Basically he wanted to capture response output completely and then apply some filtering to the output – effectively injecting some additional content into the page AFTER the page had completely rendered. Specifically the output should be captured from anywhere – not just a page – and have this code injected into the page. Some time ago I posted some code that allows you to capture ASP.NET Page output by overriding the Render() method, capturing the HtmlTextWriter() and reading its content, modifying the rendered data as text then writing it back out. I’ve actually used this approach on a few occasions and it works fine for ASP.NET pages. But this obviously won’t work outside of the Page class environment and it’s not really generic – you have to create a custom page class in order to handle the output capture. [updated 11/16/2009 – updated ResponseFilterStream implementation and a few additional notes based on comments] Enter Response.Filter However, ASP.NET includes a Response.Filter which can be used – well, to filter output. Basically Response.Filter is a stream through which the OutputStream is piped back to the Web Server (indirectly). As content is written into the Response object, the filter stream receives the appropriate Stream commands like Write, Flush and Close as well as read operations, although for a Response.Filter those are uncommonly hit. The Response.Filter can be programmatically replaced at runtime which allows you to effectively intercept all output generation that runs through ASP.NET. A common Example: Dynamic GZip Encoding A rather common use of Response.Filter is hooking up code-based, dynamic GZip compression for requests, which is dead simple: just apply a GZipStream (or DeflateStream) to Response.Filter. The following generic routines can be used very easily to detect GZip capability of the client and compress response output with a single line of code and a couple of library helper routines: WebUtils.GZipEncodePage(); which is handled with a few lines of reusable code and a couple of static helper methods: /// <summary> /// Sets up the current page or handler to use GZip through a Response.Filter /// IMPORTANT: /// You have to call this method before any output is generated! 
/// </summary> public static void GZipEncodePage() {     HttpResponse Response = HttpContext.Current.Response;     if (IsGZipSupported())     {         string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];         if (AcceptEncoding.Contains("deflate"))         {             Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter,                                        System.IO.Compression.CompressionMode.Compress);             Response.AppendHeader("Content-Encoding", "deflate");         }         else         {             Response.Filter = new System.IO.Compression.GZipStream(Response.Filter,                                       System.IO.Compression.CompressionMode.Compress);             Response.AppendHeader("Content-Encoding", "gzip");         }     }     // Allow proxy servers to cache encoded and unencoded versions separately     Response.AppendHeader("Vary", "Accept-Encoding"); } /// <summary> /// Determines if GZip is supported /// </summary> /// <returns></returns> public static bool IsGZipSupported() { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (!string.IsNullOrEmpty(AcceptEncoding) && (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate"))) return true; return false; } GZipStream and DeflateStream are streams that are assigned to Response.Filter and by doing so apply the appropriate compression on the active Response. Response.Filter content is chunked So to implement a Response.Filter effectively requires only that you implement a custom stream and handle the Write() method to capture Response output as it’s written. At first blush this seems very simple – you capture the output in Write, transform it and write out the transformed content in one pass. And that indeed works for small amounts of content. But you see, the problem is that output is written in small buffer chunks (a little less than 16k it appears) rather than just a single Write() statement into the stream, which makes perfect sense for ASP.NET to stream data back to IIS in smaller chunks to minimize memory usage en route. Unfortunately this also makes it more difficult to implement any filtering routines since you don’t directly get access to all of the response content, which is problematic especially if those filtering routines require you to look at the ENTIRE response in order to transform or capture the output, as is needed for the solution the gentleman in my session asked for. So in order to address this a slightly different approach is required that basically captures all the Write() buffers passed into a cached stream and then makes the stream available only when it’s complete and ready to be flushed. As I was thinking about the implementation I also started thinking about the few instances when I’ve used Response.Filter implementations. Each time I had to create a new Stream subclass and create my custom functionality but in the end each implementation did the same thing – capturing output and transforming it. I thought there should be an easier way to do this by creating a re-usable Stream class that can handle stream transformations that are common to Response.Filter implementations. Creating a semi-generic Response Filter Stream Class What I ended up with is a ResponseFilterStream class that provides a handful of Events that allow you to capture and/or transform Response content. 
The class implements a subclass of Stream and then overrides Write() and Flush() to handle capturing and transformation operations. By exposing events it’s easy to hook up capture or transformation operations via single focused methods. ResponseFilterStream exposes the following events: CaptureStream, CaptureString Captures the output only and provides either a MemoryStream or String with the final page output. Capture is hooked to the Flush() operation of the stream. TransformStream, TransformString Allows you to transform the complete response output with events that receive a MemoryStream or String respectively; you can modify the output and return it back as a return value. The transformed output is then written back out in a single chunk to the response output stream. These events capture all output internally first then write the entire buffer into the response. TransformWrite, TransformWriteString Allows you to transform the Response data as it is written in its original chunk size in the Stream’s Write() method. Unlike TransformStream/TransformString which operate on the complete output, these events only see the current chunk of data written. This is more efficient as there’s no caching involved, but can cause problems due to searched content splitting over multiple chunks. Using this implementation, creating a custom Response.Filter transformation becomes as simple as the following code. To hook up the Response.Filter using the MemoryStream version event: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformStream += filter_TransformStream; Response.Filter = filter; and the event handler to do the transformation: MemoryStream filter_TransformStream(MemoryStream ms) { Encoding encoding = HttpContext.Current.Response.ContentEncoding; string output = encoding.GetString(ms.ToArray()); output = FixPaths(output); ms = new MemoryStream(output.Length); byte[] buffer = encoding.GetBytes(output); ms.Write(buffer,0,buffer.Length); return ms; } private string FixPaths(string output) { string path = HttpContext.Current.Request.ApplicationPath; // override root path wonkiness if (path == "/") path = ""; output = output.Replace("\"~/", "\"" + path + "/").Replace("'~/", "'" + path + "/"); return output; } The idea of the event handler is that you can do whatever you want to the stream and return back a stream – either the same one that’s been modified or a brand new one – which is then sent back as the final response. The above code can be simplified even more by using the string version events which handle the stream to string conversions for you: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformString += filter_TransformString; Response.Filter = filter; and the event handler to do the transformation calling the same FixPaths method shown above: string filter_TransformString(string output) { return FixPaths(output); } The events for capturing output and capturing and transforming chunks work in a very similar way. By using events to handle the transformations ResponseFilterStream becomes a reusable component and we don’t have to create a new stream class or subclass an existing Stream based class. By the way, the example used here is kind of a cool trick which transforms “~/” expressions inside of the final generated HTML output – even in plain HTML markup, not just server controls – and transforms them into the appropriate application relative path in the same way that ResolveUrl would do. 
So you can write plain old HTML like this: <a href="~/default.aspx">Home</a> and have it turned into: <a href="/myVirtual/default.aspx">Home</a> without having to use an ASP.NET control like Hyperlink or Image or having to constantly use: <img src="<%= ResolveUrl("~/images/home.gif") %>" /> in MVC applications (which frankly is one of the most annoying things about MVC especially given the path hell that extension-less and endpoint-less URLs impose). I can’t take credit for this idea. While discussing the Response.Filter issues on Twitter, I got a hint from Dylan Beattie, who pointed me at one of his examples which does something similar. I thought the idea was cool enough to use as an example for future demos of Response.Filter functionality in ASP.NET next time I do the Modules and Handlers talk (which was great fun BTW). How practical this is is debatable however since there’s definitely some overhead to using a Response.Filter in general and especially on one that caches the output and then re-writes it later. Make sure to test for performance anytime you use a Response.Filter hookup and make sure it doesn’t end up killing perf on you. You’ve been warned :-}. How does ResponseFilterStream work? The big win of this implementation IMHO is that it’s a reusable component – so for implementation there’s no new class, no subclassing – you simply attach to an event to implement an event handler method with a straightforward signature to retrieve the stream or string you’re interested in. The implementation is based on a subclass of Stream as is required in order to handle the Response.Filter requirements. What’s different than other implementations I’ve seen in various places is that it supports capturing output as a whole to allow retrieving the full response output for capture or modification. The exceptions are the TransformWrite and TransformWriteString events, which operate only on the active chunk of data written by the Response. For captured output, the Write() method captures output into an internal MemoryStream that is cached until writing is complete. So Write() is called when ASP.NET writes to the Response stream, but the filter doesn’t pass on the Write immediately to the filter’s internal stream. The data is cached and only when the Flush() method is called to finalize the Stream’s output do we actually send the cached stream off for transformation (if the events are hooked up) and THEN finally write out the returned content in one big chunk. Here’s the implementation of ResponseFilterStream: /// <summary> /// A semi-generic Stream implementation for Response.Filter with /// an event interface for handling Content transformations via /// Stream or String. /// <remarks> /// Use with care for large output as this implementation copies /// the output into a memory stream and so increases memory usage. 
/// </remarks> /// </summary> public class ResponseFilterStream : Stream { /// <summary> /// The original stream /// </summary> Stream _stream; /// <summary> /// Current position in the original stream /// </summary> long _position; /// <summary> /// Stream that original content is read into /// and then passed to TransformStream function /// </summary> MemoryStream _cacheStream = new MemoryStream(5000); /// <summary> /// Internal pointer that keeps track of the size /// of the cacheStream /// </summary> int _cachePointer = 0; /// <summary> /// /// </summary> /// <param name="responseStream"></param> public ResponseFilterStream(Stream responseStream) { _stream = responseStream; } /// <summary> /// Determines whether the stream is captured /// </summary> private bool IsCaptured { get { if (CaptureStream != null || CaptureString != null || TransformStream != null || TransformString != null) return true; return false; } } /// <summary> /// Determines whether the Write method is outputting data immediately /// or delaying output until Flush() is fired. /// </summary> private bool IsOutputDelayed { get { if (TransformStream != null || TransformString != null) return true; return false; } } /// <summary> /// Event that captures Response output and makes it available /// as a MemoryStream instance. Output is captured but won't /// affect Response output. /// </summary> public event Action<MemoryStream> CaptureStream; /// <summary> /// Event that captures Response output and makes it available /// as a string. Output is captured but won't affect Response output. /// </summary> public event Action<string> CaptureString; /// <summary> /// Event that allows you to transform the stream as each chunk of /// the output is written in the Write() operation of the stream. /// This means that it's possible/likely that the input /// buffer will not contain the full response output but only /// one of potentially many chunks. /// /// This event is called as part of the filter stream's Write() /// operation. /// </summary> public event Func<byte[], byte[]> TransformWrite; /// <summary> /// Event that allows you to transform the response stream as /// each chunk of byte[] output is written during the stream's write /// operation. This means it's possible/likely that the string /// passed to the handler only contains a portion of the full /// output. Typical buffer chunks are around 16k a piece. /// /// This event is called as part of the stream's Write operation. /// </summary> public event Func<string, string> TransformWriteString; /// <summary> /// This event allows capturing and transformation of the entire /// output stream by caching all write operations and delaying final /// response output until Flush() is called on the stream. /// </summary> public event Func<MemoryStream, MemoryStream> TransformStream; /// <summary> /// Event that can be hooked up to handle Response.Filter /// Transformation. Passed a string that you can modify and /// return back as a return value. The modified content /// will become the final output. 
/// </summary> public event Func<string, string> TransformString; protected virtual void OnCaptureStream(MemoryStream ms) { if (CaptureStream != null) CaptureStream(ms); } private void OnCaptureStringInternal(MemoryStream ms) { if (CaptureString != null) { string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray()); OnCaptureString(content); } } protected virtual void OnCaptureString(string output) { if (CaptureString != null) CaptureString(output); } protected virtual byte[] OnTransformWrite(byte[] buffer) { if (TransformWrite != null) return TransformWrite(buffer); return buffer; } private byte[] OnTransformWriteStringInternal(byte[] buffer) { Encoding encoding = HttpContext.Current.Response.ContentEncoding; string output = OnTransformWriteString(encoding.GetString(buffer)); return encoding.GetBytes(output); } private string OnTransformWriteString(string value) { if (TransformWriteString != null) return TransformWriteString(value); return value; } protected virtual MemoryStream OnTransformCompleteStream(MemoryStream ms) { if (TransformStream != null) return TransformStream(ms); return ms; } /// <summary> /// Allows transforming of strings /// /// Note this handler is internal and not meant to be overridden /// as the TransformString Event has to be hooked up in order /// for this handler to even fire to avoid the overhead of string /// conversion on every pass through. /// </summary> /// <param name="responseText"></param> /// <returns></returns> private string OnTransformCompleteString(string responseText) { if (TransformString != null) responseText = TransformString(responseText); return responseText; } /// <summary> /// Wrapper method for OnTransformString that handles /// stream to string and vice versa conversions /// </summary> /// <param name="ms"></param> /// <returns></returns> internal MemoryStream OnTransformCompleteStringInternal(MemoryStream ms) { if (TransformString == null) return ms; //string content = ms.GetAsString(); string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray()); content = TransformString(content); byte[] buffer = HttpContext.Current.Response.ContentEncoding.GetBytes(content); ms = new MemoryStream(); ms.Write(buffer, 0, buffer.Length); //ms.WriteString(content); return ms; } /// <summary> /// /// </summary> public override bool CanRead { get { return true; } } public override bool CanSeek { get { return true; } } /// <summary> /// /// </summary> public override bool CanWrite { get { return true; } } /// <summary> /// /// </summary> public override long Length { get { return 0; } } /// <summary> /// /// </summary> public override long Position { get { return _position; } set { _position = value; } } /// <summary> /// /// </summary> /// <param name="offset"></param> /// <param name="direction"></param> /// <returns></returns> public override long Seek(long offset, System.IO.SeekOrigin direction) { return _stream.Seek(offset, direction); } /// <summary> /// /// </summary> /// <param name="length"></param> public override void SetLength(long length) { _stream.SetLength(length); } /// <summary> /// /// </summary> public override void Close() { _stream.Close(); } /// <summary> /// Override flush by writing out the cached stream data /// </summary> public override void Flush() { if (IsCaptured && _cacheStream.Length > 0) { // Check for transform implementations _cacheStream = OnTransformCompleteStream(_cacheStream); _cacheStream = OnTransformCompleteStringInternal(_cacheStream); OnCaptureStream(_cacheStream); 
OnCaptureStringInternal(_cacheStream); // write the stream back out if output was delayed if (IsOutputDelayed) _stream.Write(_cacheStream.ToArray(), 0, (int)_cacheStream.Length); // Clear the cache once we've written it out _cacheStream.SetLength(0); } // default flush behavior _stream.Flush(); } /// <summary> /// /// </summary> /// <param name="buffer"></param> /// <param name="offset"></param> /// <param name="count"></param> /// <returns></returns> public override int Read(byte[] buffer, int offset, int count) { return _stream.Read(buffer, offset, count); } /// <summary> /// Overridden to capture output written by ASP.NET and captured /// into a cached stream that is written out later when Flush() /// is called. /// </summary> /// <param name="buffer"></param> /// <param name="offset"></param> /// <param name="count"></param> public override void Write(byte[] buffer, int offset, int count) { if ( IsCaptured ) { // copy to holding buffer only - we'll write out later _cacheStream.Write(buffer, 0, count); _cachePointer += count; } // just transform this buffer if (TransformWrite != null) buffer = OnTransformWrite(buffer); if (TransformWriteString != null) buffer = OnTransformWriteStringInternal(buffer); if (!IsOutputDelayed) _stream.Write(buffer, offset, buffer.Length); } } The key features are the events and corresponding OnXXX methods that handle the event hookups, and the Write() and Flush() methods of the stream implementation. All the rest of the members tend to be plain jane passthrough stream implementation code without much consequence. I do love the way Action<T> and Func<T> make it so easy to create the event signatures for the various events – sweet. A few Things to consider Performance Response.Filter is not great for performance in general as it adds another layer of indirection to the ASP.NET output pipeline, and this implementation in particular adds a memory hit as it basically duplicates the response output into the cached memory stream which is necessary since you may have to look at the entire response. If you have large pages in particular this can cause potentially serious memory pressure in your server application. So be careful of wholesale adoption of this (or other) Response.Filters. Make sure to do some performance testing to ensure it’s not killing your app’s performance. Response.Filter works everywhere A few questions came up in comments and discussion as to capturing ALL output hitting the site and – yes you can definitely do that by assigning a Response.Filter inside of a module. If you do this however you’ll want to be very careful and decide which content you actually want to capture especially in IIS 7 which passes ALL content – including static images/CSS etc. through the ASP.NET pipeline. So it is important to filter only on what you’re looking for – like the page extension or maybe more effectively the Response.ContentType. Response.Filter Chaining Originally I thought that filter chaining doesn’t work at all due to a bug in the stream implementation code. But it’s quite possible to assign multiple filters to the Response.Filter property. 
So the following actually works to both compress the output and apply the transformed content: WebUtils.GZipEncodePage(); ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformString += filter_TransformString; Response.Filter = filter; However, the following does not work, resulting in invalid content encoding errors: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformString += filter_TransformString; Response.Filter = filter; WebUtils.GZipEncodePage(); In other words multiple Response filters can work together but it depends entirely on the implementation whether they can be chained or in which order they can be chained. In this case the GZip/Deflate stream filter apparently relies on the original content length of the output and chokes when the content is modified. But if the compression is attached first it works fine, as unintuitive as that may seem. (For the site-wide capture scenario mentioned earlier, a minimal module-based hookup sketch follows below.) Resources Download example code Capture Output from ASP.NET Pages © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET 
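Following up on the “Response.Filter works everywhere” note above, here is a minimal sketch of hooking the filter up site-wide from an HTTP module. It assumes the ResponseFilterStream class shown earlier is in scope; the module name, the .aspx path check (a simplification of the ContentType advice), and the replacement string are all placeholder assumptions:

using System;
using System.Web;

public class ResponseTransformModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += OnBeginRequest;
    }

    private void OnBeginRequest(object sender, EventArgs e)
    {
        var app = (HttpApplication)sender;

        // Skip static images/CSS/JS - in IIS 7 integrated mode even those
        // requests flow through ASP.NET, and we don't want to buffer them.
        if (!app.Request.Path.EndsWith(".aspx", StringComparison.OrdinalIgnoreCase))
            return;

        var filter = new ResponseFilterStream(app.Response.Filter);

        // Placeholder transformation - swap in whatever rewriting you need.
        filter.TransformString += output => output.Replace("~/", "/");

        app.Response.Filter = filter;
    }

    public void Dispose() { }
}

The module would then be registered in the modules section of web.config like any other IHttpModule; checking Response.ContentType later in the pipeline is a finer-grained alternative to the extension check used here.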

    Read the article

  • AngularJS on top of ASP.NET: Moving the MVC framework out to the browser

    - by Varun Chatterji
Heavily drawing inspiration from Ruby on Rails, MVC4’s convention over configuration model of development soon became the Holy Grail of .NET web development. The MVC model brought with it the goodness of proper separation of concerns between business logic, data, and the presentation logic. However, the MVC paradigm was still one in which server side .NET code could be mixed with presentation code. The Razor templating engine, though cleaner than its predecessors, still encouraged and allowed you to mix .NET server side code with presentation logic. Thus, for example, if the developer required a certain <div> tag to be shown if a particular variable ShowDiv was true in the View’s model, the code could look like the following: Fig 1: To show a div or not. Server side .NET code is used in the View Mixing .NET code with HTML in views can soon get very messy. Wouldn’t it be nice if the presentation layer (HTML) could be pure HTML? Also, in the ASP.NET MVC model, some of the business logic invariably resides in the controller. It is tempting to use an anti-pattern like the one shown above to control whether a div should be shown or not. However, best practice would indicate that the Controller should not be aware of the div. The ShowDiv variable in the model should not exist. A controller should ideally only be used to do the plumbing of getting the data populated in the model and nothing else. The view (ideally pure HTML) should render the presentation layer based on the model. In this article we will see how Angular JS, a new JavaScript framework by Google, can be used effectively to build web applications where: 1. Views are pure HTML 2. Controllers (in the server sense) are pure REST based API calls 3. The presentation layer is loaded as needed from partial HTML-only files. What is MVVM? MVVM, short for Model View View Model, is a new paradigm in web development. In this paradigm, the Model and View stuff exists on the client side through javascript instead of being processed on the server through postbacks. These frameworks are JavaScript frameworks that facilitate the clear separation of the “frontend” or the data rendering logic from the “backend” which is typically just a REST based API that loads and processes data through a resource model. The frameworks are called MVVM as a change to the Model (through javascript) gets reflected in the view immediately i.e. Model > View. Also, a change on the view (through manual input) gets reflected in the model immediately i.e. View > Model. The following figure shows this conceptually (comments are shown in red): Fig 2: Demonstration of MVVM in action In Fig 2, two text boxes are bound to the same variable model.myInt. Thus, changing the view manually (changing one text box through keyboard input) also changes the other textbox in real time, demonstrating the V > M property of a MVVM framework. Furthermore, clicking the button adds 1 to the value of model.myInt, thus changing the model through JavaScript. This immediately updates the view (the value in the two textboxes), thus demonstrating the M > V property of a MVVM framework. Thus we see that the model in a MVVM JavaScript framework can be regarded as “the single source of truth“. This is an important concept. Angular is one such MVVM framework. We shall use it to build a simple app that sends SMS messages to a particular number. Application, Routes, Views, Controllers, Scope and Models Angular can be used in many ways to construct web applications. 
For this article, we shall only focus on building Single Page Applications (SPAs). Many of the approaches we will follow in this article have alternatives. It is beyond the scope of this article to explain every nuance in detail but we shall try to touch upon the basic concepts and end up with a working application that can be used to send SMS messages using Sent.ly Plus (a service that is itself built using Angular). Before you read on, we would like to urge you to forget what you know about Models, Views, Controllers and Routes in the ASP.NET MVC4 framework. All these words have different meanings in the Angular world. Whenever these words are used in this article, they will refer to Angular concepts and not ASP.NET MVC4 concepts. The following figure shows the skeleton of the root page of an SPA: Fig 3: The skeleton of a SPA The skeleton of the application is based on the Bootstrap starter template which can be found at: http://getbootstrap.com/examples/starter-template/ Apart from loading the Angular, jQuery and Bootstrap JavaScript libraries, it also loads our custom scripts /app/js/controllers.js /app/js/app.js These scripts define the routes, views and controllers which we shall come to in a moment. Application Notice that the body tag (Fig. 3) has an extra attribute: ng-app="smsApp" Providing this tag “bootstraps” our single page application. It tells Angular to load a “module” called smsApp. This “module” is defined in /app/js/app.js: angular.module('smsApp', ['smsApp.controllers', function () {}]) Fig 4: The definition of our application module The line shown above declares a module called smsApp. It also declares that this module “depends” on another module called “smsApp.controllers”. The smsApp.controllers module will contain all the controllers for our SPA. Routing and Views Notice that in the Navbar (in Fig 3) we have included two hyperlinks to: “#/app” “#/help” This is how Angular handles routing. Since the URLs start with “#”, they are actually just bookmarks (and not server side resources). However, our route definition (in /app/js/app.js) gives these URLs a special meaning within the Angular framework. angular.module('smsApp', ['smsApp.controllers', function () { }]) //Configure the routes .config(['$routeProvider', function ($routeProvider) { $routeProvider.when('/binding', { templateUrl: '/app/partials/bindingexample.html', controller: 'BindingController' }); }]); Fig 5: The definition of a route with an associated partial view and controller As we can see from the previous code sample, we are using the $routeProvider object in the configuration of our smsApp module. Notice how the code “asks for” the $routeProvider object by specifying it as a dependency in the [] braces and then defining a function that accepts it as a parameter. This is known as dependency injection. Please refer to the following link if you want to delve into this topic: http://docs.angularjs.org/guide/di What the above code snippet is doing is telling Angular that when the URL is “#/binding”, it should load the HTML snippet (“partial view”) found at /app/partials/bindingexample.html. Also, for this URL, Angular should load the controller called “BindingController”. We have also marked the div with the class “container” (in Fig 3) with the ng-view attribute. This attribute tells Angular that views (partial HTML pages) defined in the routes will be loaded within this div. 
You can see that the Angular JavaScript framework, unlike many other frameworks, works purely by extending HTML tags and attributes. It also allows you to extend HTML with your own tags and attributes (through directives) if you so desire; you can find out more about directives at the following URL: http://www.codeproject.com/Articles/607873/Extending-HTML-with-AngularJS-Directives

Controllers and Models

We have seen how we define which views and controllers should be loaded for a particular route. Let us now consider how controllers are defined. Our controllers are defined in the file /app/js/controllers.js. The following snippet shows the definition of the "BindingController", which is loaded when we hit the URL http://localhost:port/index.html#/binding (as defined in the route shown earlier in Fig 5). Remember that we declared that our application module "smsApp" depends on the "smsApp.controllers" module (see Fig 4). The code snippet below shows how the "BindingController" named in the route in Fig 5 is defined in the module smsApp.controllers:

angular.module('smsApp.controllers', [])
    .controller('BindingController', ['$scope', function ($scope) {
        $scope.model = {};
        $scope.model.myInt = 6;
        $scope.addOne = function () {
            $scope.model.myInt++;
        }
    }]);

Fig 6: The definition of a controller in the "smsApp.controllers" module

The pieces are falling into place! Remember Fig 2? That was the code of a partial view that was loaded within the container div of the skeleton SPA shown in Fig 3. The route definition shown in Fig 5 also specified that the controller called "BindingController" (shown in Fig 6) was loaded when we loaded the URL http://localhost:22544/index.html#/binding. The button in Fig 2 was marked with the attribute ng-click="addOne()", which adds 1 to the value of model.myInt. In Fig 6, we can see that this function is actually defined in the "BindingController".

Scope

We can see from Fig 6 that in the definition of "BindingController" we declared a dependency on $scope and then, as usual, defined a function that "asks for" $scope as per the dependency injection pattern. So what is $scope? As you might have guessed, a scope is a particular "address space" where variables and functions may be defined - similar in meaning to scope in a programming language like C#.

Model: The Scope is not the Model

It is tempting to assign variables on the scope directly. For example, we could have defined myInt as $scope.myInt = 6 in Fig 6 instead of $scope.model.myInt = 6. The reason this is a bad idea is that scope is hierarchical in Angular. If we define a controller within another controller (nested controllers), the inner controller inherits the scope of the parent controller, and this inheritance follows JavaScript prototypal inheritance. Let's say the parent controller defines a variable through $scope.myInt = 6. The child controller inherits the scope, which basically means the child scope has a myInt that refers to the parent scope's myInt variable. If we now assign a value to myInt in the parent, the child scope sees the same value, since its myInt still refers to the parent scope's myInt variable.

However, if we assign a value to the myInt variable in the child scope, the link to the parent scope is broken: the child scope gets its own myInt holding the new value, which shadows (and no longer refers to) the parent scope's myInt variable. But if we define a variable model in the parent scope, the child scope also sees a model variable that points to the model object in the parent scope. Updating $scope.model.myInt in the parent scope changes it in the child scope too, as both point to the same object. Now, changing the value of $scope.model.myInt in the child scope ALSO changes the value in the parent scope. This is because the model reference in the child scope points to the model object in the parent scope: we made no new assignment to the model variable in the child scope, we only changed an attribute of the object it refers to. Since the model variable in the child scope points to the model variable in the parent scope, we have successfully changed the value of myInt in the parent scope. Thus $scope.model.myInt in the parent scope remains the "single source of truth". This is a tricky concept, so it is considered good practice NOT to rely on scope inheritance with bare scope variables; keep your data on a model object instead. More info on prototypal inheritance in Angular can be found in the "JavaScript Prototypal Inheritance" section at the following URL: https://github.com/angular/angular.js/wiki/Understanding-Scopes
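The behaviour described above can be condensed into a minimal sketch (the controller names here are illustrative, not part of the sample app):

.controller('ParentController', ['$scope', function ($scope) {
    $scope.myInt = 6;            // a bare primitive on the scope
    $scope.model = { myInt: 6 }; // a primitive wrapped in a model object
}])
.controller('ChildController', ['$scope', function ($scope) {
    // Assigning a bare primitive creates a NEW myInt on the child scope
    // that shadows the parent's value - the two scopes silently diverge.
    $scope.myInt = 7;
    // Assigning through the inherited object reference mutates the SAME
    // object the parent scope sees, so both scopes stay in sync.
    $scope.model.myInt = 7;
}]);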
Building It: An Angular JS application using a .NET Web API Backend

Now that we have a perspective on the basic components of an MVVM application built using Angular, let's build something useful. We will build an application that can be used to send out SMS messages to a given phone number. The following diagram describes the architecture of the application we are going to build:

Fig 7: Broad application architecture

We are going to add an HTML partial to our project. This partial will contain the form fields that accept the phone number and the message to be sent as an SMS. It will also display all the messages that have previously been sent. All the executable code that runs on the occurrence of events (button clicks, etc.) in the view resides in the controller. The controller interacts with the ASP.NET WebAPI through a REST based API to get a history of SMS messages, add a message, and so on. For simplicity, we will use an in-memory data structure for this application. Thus, the tasks ahead of us are:

1. Creating the REST WebAPI with GET, PUT, POST, DELETE methods.
2. Creating the SmsView.html partial.
3. Creating the SmsController controller with methods that are called from the SmsView.html partial.
4. Adding a new route that loads the controller and the partial.

1. Creating the REST WebAPI

This is a simple task that should be quite straightforward for any .NET developer.
The following listing shows our ApiController:

public class SmsMessage
{
    public string to { get; set; }
    public string message { get; set; }
}

public class SmsResource : SmsMessage
{
    public int smsId { get; set; }
}

public class SmsResourceController : ApiController
{
    public static Dictionary<int, SmsResource> messages = new Dictionary<int, SmsResource>();
    public static int currentId = 0;

    // GET api/<controller>
    public List<SmsResource> Get()
    {
        List<SmsResource> result = new List<SmsResource>();
        foreach (int key in messages.Keys)
        {
            result.Add(messages[key]);
        }
        return result;
    }

    // GET api/<controller>/5
    public SmsResource Get(int id)
    {
        if (messages.ContainsKey(id))
            return messages[id];
        return null;
    }

    // POST api/<controller>
    public List<SmsResource> Post([FromBody] SmsMessage value)
    {
        // Synchronize on messages so we don't have id collisions
        lock (messages)
        {
            // Copy the posted message into a resource that carries an id
            // (the model binder gives us an SmsMessage, not an SmsResource)
            SmsResource res = new SmsResource { to = value.to, message = value.message };
            res.smsId = currentId++;
            messages.Add(res.smsId, res);
            //SentlyPlusSmsSender.SendMessage(value.to, value.message);
            return Get();
        }
    }

    // PUT api/<controller>/5
    public List<SmsResource> Put(int id, [FromBody] SmsMessage value)
    {
        // Synchronize on messages so we don't have id collisions
        lock (messages)
        {
            if (messages.ContainsKey(id))
            {
                // Update the message
                messages[id].message = value.message;
                messages[id].to = value.to;
            }
            return Get();
        }
    }

    // DELETE api/<controller>/5
    public List<SmsResource> Delete(int id)
    {
        if (messages.ContainsKey(id))
        {
            messages.Remove(id);
        }
        return Get();
    }
}

Once this class is defined, we should be able to access the WebAPI with a simple GET request from the browser: http://localhost:port/api/SmsResource

Notice the commented line: //SentlyPlusSmsSender.SendMessage

The SentlyPlusSmsSender class is defined in the attached solution. We have shown this line commented out because we first want to explain the core Angular concepts. If you load the attached solution, this line is uncommented in the source and an actual SMS will be sent!

By default, the API returns XML. For consumption of the API in Angular, we would like it to return JSON. To change the default to JSON, we make the following change to the WebApiConfig.cs file located in the App_Start folder:

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional }
        );

        var appXmlType = config.Formatters.XmlFormatter.SupportedMediaTypes
            .FirstOrDefault(t => t.MediaType == "application/xml");
        config.Formatters.XmlFormatter.SupportedMediaTypes.Remove(appXmlType);
    }
}

We now have our backend REST API, which we can consume from Angular!
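For illustration, with JSON as the default format, a GET to /api/smsresource after one message has been posted would return a payload shaped like this (the values are made up):

[
    { "to": "+44 7778 609466", "message": "Hello from Angular!", "smsId": 0 }
]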
2. Creating the SmsView.html partial

This simple partial defines two fields: the destination phone number (in international format, starting with a +) and the message. These fields are bound to model.phoneNumber and model.message. We also add a button that we hook up to sendMessage() in the controller. A list of all previously sent messages (bound to model.allMessages) is displayed below the form input. The following listing shows the code for the partial:

<!-- If model.errorMessage is defined, then render the error div -->
<div class="alert alert-danger alert-dismissable" style="margin-top: 30px;"
     ng-show="model.errorMessage != undefined">
    <button type="button" class="close" data-dismiss="alert" aria-hidden="true">&times;</button>
    <strong>Error!</strong> <br /> {{ model.errorMessage }}
</div>

<!-- The input fields bound to the model -->
<div class="well" style="margin-top: 30px;">
    <table style="width: 100%;">
        <tr>
            <td style="width: 45%; text-align: center;">
                <input type="text" placeholder="Phone number (eg: +44 7778 609466)"
                       ng-model="model.phoneNumber" class="form-control" style="width: 90%"
                       onkeypress="return checkPhoneInput();" />
            </td>
            <td style="width: 45%; text-align: center;">
                <input type="text" placeholder="Message" ng-model="model.message"
                       class="form-control" style="width: 90%" />
            </td>
            <td style="text-align: center;">
                <button class="btn btn-danger" ng-click="sendMessage();"
                        ng-disabled="model.isAjaxInProgress" style="margin-right: 5px;">Send</button>
                <img src="/Content/ajax-loader.gif" ng-show="model.isAjaxInProgress" />
            </td>
        </tr>
    </table>
</div>

<!-- The past messages -->
<div style="margin-top: 30px;">
    <!-- The following div is shown if there are no past messages -->
    <div ng-show="model.allMessages.length == 0">
        No messages have been sent yet!
    </div>
    <!-- The following div is shown if there are some past messages -->
    <div ng-show="model.allMessages.length > 0">
        <table style="width: 100%;" class="table table-striped">
            <tr>
                <td>Phone Number</td>
                <td>Message</td>
                <td></td>
            </tr>
            <!-- The ng-repeat directive is like the repeater control in .NET,
                 but as you can see this partial is pure HTML which is much cleaner -->
            <tr ng-repeat="message in model.allMessages">
                <td>{{ message.to }}</td>
                <td>{{ message.message }}</td>
                <td>
                    <button class="btn btn-danger" ng-click="delete(message.smsId);"
                            ng-disabled="model.isAjaxInProgress">Delete</button>
                </td>
            </tr>
        </table>
    </div>
</div>

The above code is commented and should be self-explanatory. Conditional rendering is achieved with the ng-show="condition" attribute on various div tags. Input fields are bound to the model, and the Send button is bound to the sendMessage() function in the controller through the ng-click="sendMessage()" attribute defined on the button tag. While AJAX calls are taking place, the controller sets model.isAjaxInProgress to true; based on this variable, buttons are disabled through the ng-disabled directive, added as an attribute to the buttons. The ng-repeat directive, added as an attribute to the tr tag, causes the table row to be rendered multiple times, much like an ASP.NET repeater.

3. Creating the SmsController controller

The penultimate piece of our application is the controller that responds to events from our view and interacts with our MVC4 REST WebAPI. The following listing shows the code we need to add to /app/js/controllers.js. Note that controller definitions can be chained. Also note that this controller "asks for" the $http service. The $http service is a simple way in Angular to do AJAX. So far we have only encountered modules, controllers, views and directives in Angular; $http is a new kind of Angular entity called a service. More information on Angular services can be found at the following URL: http://docs.angularjs.org/guide/dev_guide.services.understanding_services
.controller('SmsController', ['$scope', '$http', function ($scope, $http) {
    // We define the model
    $scope.model = {};
    // We define the allMessages array in the model
    // that will contain all the messages sent so far
    $scope.model.allMessages = [];
    // The error, if any
    $scope.model.errorMessage = undefined;
    // We initially load data, so set isAjaxInProgress = true
    $scope.model.isAjaxInProgress = true;

    // Load all the messages
    $http({ url: '/api/smsresource', method: "GET" }).
        success(function (data, status, headers, config) {
            // this callback will be called asynchronously
            // when the response is available
            $scope.model.allMessages = data;
            // We are done with AJAX loading
            $scope.model.isAjaxInProgress = false;
        }).
        error(function (data, status, headers, config) {
            // called asynchronously if an error occurs
            // or the server returns a response with an error status
            $scope.model.errorMessage = "Error occurred status:" + status;
            // We are done with AJAX loading
            $scope.model.isAjaxInProgress = false;
        });

    $scope.delete = function (id) {
        // We are making an ajax call so we set this to true
        $scope.model.isAjaxInProgress = true;
        $http({ url: '/api/smsresource/' + id, method: "DELETE" }).
            success(function (data, status, headers, config) {
                $scope.model.allMessages = data;
                $scope.model.isAjaxInProgress = false;
            }).
            error(function (data, status, headers, config) {
                $scope.model.errorMessage = "Error occurred status:" + status;
                $scope.model.isAjaxInProgress = false;
            });
    }

    $scope.sendMessage = function () {
        $scope.model.errorMessage = undefined;
        var message = '';
        if ($scope.model.message != undefined)
            message = $scope.model.message.trim();
        if ($scope.model.phoneNumber == undefined || $scope.model.phoneNumber == ''
            || $scope.model.phoneNumber.length < 10 || $scope.model.phoneNumber[0] != '+') {
            $scope.model.errorMessage = "You must enter a valid phone number in international format. Eg: +44 7778 609466";
            return;
        }
        if (message.length == 0) {
            $scope.model.errorMessage = "You must specify a message!";
            return;
        }
        // We are making an ajax call so we set this to true
        $scope.model.isAjaxInProgress = true;
        $http({
            url: '/api/smsresource',
            method: "POST",
            data: { to: $scope.model.phoneNumber, message: $scope.model.message }
        }).
            success(function (data, status, headers, config) {
                $scope.model.allMessages = data;
                $scope.model.isAjaxInProgress = false;
            }).
            error(function (data, status, headers, config) {
                $scope.model.errorMessage = "Error occurred status:" + status;
                $scope.model.isAjaxInProgress = false;
            });
    }
}]);

We can see from the previous listing how the functions called from the view are defined in the controller. It should also be evident how easy it is to make AJAX calls to consume our MVC4 REST WebAPI. Now we are left with the final piece: we need to define a route that associates a particular path with the view and the controller we have defined.

4. Add a new route that loads the controller and the partial

This is the easiest part of the puzzle.
We simply define another route in the /app/js/app.js file:

$routeProvider.when('/sms', {
    templateUrl: '/app/partials/smsview.html',
    controller: 'SmsController'
});

Conclusion

In this article we have seen how much of the server side functionality in the MVC4 framework can be moved to the browser, delivering a snappy and fast user interface. We have seen how we can build client side, HTML-only views that avoid the messy syntax of server side Razor views. We have built a functioning app from the ground up. The significant advantage of this approach to building web apps is that the front end can be completely platform independent: even though we used ASP.NET to create our REST API, we could just as easily have used any other platform such as Node.js or Ruby without changing a single line of our front end code. Angular is a rich framework and we have only touched on the basic functionality required to create a SPA. For readers who wish to delve further into the Angular framework, we would recommend the following URL as a starting point: http://docs.angularjs.org/misc/started

To get started with the code for this project:

1. Sign up for an account at http://plus.sent.ly (free).
2. Add your phone number.
3. Go to the "My Identies Page".
4. Note down your Sender ID, Consumer Key and Consumer Secret.
5. Download the code for this article at: https://docs.google.com/file/d/0BzjEWqSE31yoZjZlV0d0R2Y3eW8/edit?usp=sharing
6. Change the values of Sender Id, Consumer Key and Consumer Secret in the web.config file.
7. Run the project through Visual Studio!
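For reference, with step 4 in place the complete route configuration in /app/js/app.js, assembled from the snippets shown in this article, looks roughly like this:

angular.module('smsApp', ['smsApp.controllers'])
    .config(['$routeProvider', function ($routeProvider) {
        // route for the binding example partial
        $routeProvider.when('/binding', {
            templateUrl: '/app/partials/bindingexample.html',
            controller: 'BindingController'
        });
        // route for the SMS view partial
        $routeProvider.when('/sms', {
            templateUrl: '/app/partials/smsview.html',
            controller: 'SmsController'
        });
    }]);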

    Read the article

  • You Might Be a DBA

    - by BuckWoody
    With all apologies to Jeff Foxworthy, I was up late Friday night on a holiday weekend (which translated into T-SQL becomes “Maintenance Window”) and I got bored in between the two or three minutes I had between clicks. So I started a “Twitter” meme – and it just took off. I haven’t cleaned these up much, but here, in author order as of Saturday the 29th of May is the list “You might be a DBA” from around the Twitterverse: buckwoody Your two main enemies are developers and SAN admins #youmightbeaDBA  buckwoody People can use Access as a cross or garlic on you #youmightbeaDBA  buckwoody You always plan an exit strategy, even when entering a McDonald's #youmightbeaDBA  buckwoody You can't explain to your family what you really do for a living #youmightbeaDBA  buckwoody You have at least one set of scripts you won't share #youmightbeaDBA  buckwoody You have an opinion on the best code-beautifier #youmightbeaDBA  buckwoody You have children older than the rest of your team #youmightbeaDBA  buckwoody You and the Oracle DBA would kill each other, but you'll happily fight off a developer together first #youmightbeaDBA  buckwoody You've threatened to quit if they give anyone the sa password on production #youmightbeaDBA  buckwoody You've sent a vendor suggestions on improving their database design or code (and been ignored) #youmightbeaDBA  buckwoody You've sent a vendor suggestions on improving their database design or code (and been ignored) #youmightbeaDBA  buckwoody You have an opinion on the best code-beautifier #youmightbeaDBA  buckwoody You have at least one set of scripts you won't share #youmightbeaDBA  buckwoody You refer to co-workers as "carbon-units" #youmightbeaDBA  buckwoody Being paranoid is on your resume at the top #youmightbeaDBA  buckwoody Everyone comes to your cube to find the MSDN DVD's #youmightbeaDBA  buckwoody You always plan an exit strategy, even when entering a McDonald's #youmightbeaDBA  buckwoody You've worn down developers to get your way by explaining normalization levels #youmightbeaDBA  buckwoody You refer to clothes as "Data Abstractions" #youmightbeaDBA  buckwoody Users pester you to be able to put data in a database, then they pester you to take it out and put it in Excel #youmightbeaDBA  buckwoody Others try to de-duplicate data, you try to copy it to more than three locations #youmightbeaDBA  buckwoody You have at least one DLT tape in the trunk of your car #youmightbeaDBA  buckwoody You use twitter and facebook to talk with colleagues because there's no one else in your company that does what you do #youmightbeaDBA  buckwoody Your spouse knows what "ETL" means #youmightbeaDBA  buckwoody You've referred to yourself as the "Data Janitor" #youmightbeaDBA  buckwoody You don't have positive connotations of the word "upgrade" #youmightbeaDBA  buckwoody You get your coffee before you check your servers, because you know you won't get any if you don't #youmightbeaDBA  buckwoody You always come to work through the back door so no one hijacks you on the way to your cube #youmightbeaDBA  buckwoody You check your server logs before you check your e-mail in the morning so you can reply "Yeah, I already fixed that." 
#youmightbeaDBA  buckwoody You have more conference badges than clean socks #youmightbeaDBA  buckwoody Your coffee mug says "It depends" #youmightbeaDBA  buckwoody You can convince a boss that you need 16GB of RAM in your laptop #youmightbeaDBA  buckwoody You've used ebay to find production equipment #youmightbeaDBA  buckwoody You pad all project timelines by 2X, and you still miss them #youmightbeaDBA  buckwoody You know when your company is acquiring another even before the CFO #youmightbeaDBA  buckwoody You pad all project timelines by 2X, and you still miss them #youmightbeaDBA  buckwoody You call aspirin "work vitamins" #youmightbeaDBA  buckwoody You get the same amount of sleep even after you have a child #youmightbeaDBA  buckwoody You obsess about performance metrics from over one year ago #youmightbeaDBA  buckwoody The first thing you buy after the database software is aftermarket tools to manage the database software #youmightbeaDBA  buckwoody You've tried to convince someone else to become a DBA #youmightbeaDBA  buckwoody You use twitter and facebook to talk with colleagues because there's no one else in your company that does what you do #youmightbeaDBA  buckwoody You only know other DBA's by their Tweet Handle #youmightbeaDBA  buckwoody You've explained the difference between 32 and 64-bit to more than one manager in terms they can understand, using puppets #youmightbeaDBA  buckwoody Your two main enemies are developers and SAN admins #youmightbeaDBA  buckwoody You've driven to the Datacenter to install SQL Server because "you don't trust those NOC admins" #youmightbeaDBA  buckwoody You pay more for faster Internet connections than cable at home so you don't have to drive in #youmightbeaDBA  buckwoody You call texting a "queuing system" #youmightbeaDBA  buckwoody You know that if someone can read Perl, they manage an Oracle system #youmightbeaDBA  buckwoody You have an e-mail rule for backup notifications #youmightbeaDBA  buckwoody Your food pyramid includes coffee, salt and fat #youmightbeaDBA  buckwoody You wish everything had a graphical query plan #youmightbeaDBA  buckwoody You refactor your e-mails #youmightbeaDBA  buckwoody You've gotten more help from twitter and facebook than all your years in college #youmightbeaDBA  buckwoody You would pay money for a license plate that has the letters S-Q-L together #youmightbeaDBA  buckwoody You have actually considered making a RAID array from thumb drives #youmightbeaDBA  buckwoody Everything on your laptop is installed from your MSDN subscription #youmightbeaDBA  buckwoody You've written blog posts on technology you've never actually implemented in production #youmightbeaDBA  buckwoody Everything on your laptop is installed from your MSDN subscription #youmightbeaDBA  buckwoody @MidnightDBA Click the #youmightbeaDBA tag. I've had WAY too much coffee today.  
buckwoody There is no other position that is 1-deep except you and the CEO #youmightbeaDBA  buckwoody When you watch "The Office" you call it "OJT" #youmightbeaDBA  buckwoody You would pay money for a license plate that has the letters S-Q-L together #youmightbeaDBA  buckwoody Your blog would make a "best practices" or "worst practices" book #youmightbeaDBA  buckwoody You have actually considered making a RAID array from thumb drives #youmightbeaDBA  buckwoody The first thing you install on your netbook is SSMS #youmightbeaDBA  buckwoody Everything on your laptop is installed from your MSDN subscription #youmightbeaDBA  buckwoody Your watch is set to UTC because it's just easier #youmightbeaDBA  buckwoody You make plenty of money, but you're excited to get a $2.00 squeeze-ball from Quest and Redgate #youmightbeaDBA  buckwoody You make plenty of money, but you're excited to get a $2.00 squeeze-ball from Quest and Redgate #youmightbeaDBA  buckwoody You think data can be represented as something OTHER than XML #youmightbeaDBA  buckwoody You tell people that you made a database query go faster, and expect them to be happy for you #youmightbeaDBA  buckwoody You take the word "NoSQL" as a personal attack #youmightbeaDBA  buckwoody People can use Access as a cross or garlic on you #youmightbeaDBA  buckwoody * == bad #youmightbeaDBA  buckwoody * == bad #youmightbeaDBA  buckwoody There are just as many females in your technical field as males #youmightbeaDBA  buckwoody People can use Access as a cross or garlic on you #youmightbeaDBA  buckwoody You've gotten more help from twitter and facebook than all your years in college #youmightbeaDBA  buckwoody You think that something OTHER than the database might be the performance bottleneck #youmightbeaDBA  buckwoody You refer to time as a "Clustered Index" #youmightbeaDBA  buckwoody You know why "user" refers to both business people and crack addicts #youmightbeaDBA  buckwoody You make plenty of money, but you're excited to get a $2.00 squeeze-ball from Quest and Redgate #youmightbeaDBA  buckwoody You can't explain to your family what you really do for a living #youmightbeaDBA  buckwoody You tell people that you made a database query go faster, and expect them to be happy for you #youmightbeaDBA  buckwoody You think a millisecond is a really long time #youmightbeaDBA  buckwoody You're sitting and typing #youmightbeaDBA when you could be outside #youmightbeaDBA  buckwoody You can't wait for a technical conference so you can wear a kilt - and you're not Scottish #youmightbeaDBA  buckwoody You know that "DBA" stands for "Default Blame Acceptor" #youmightbeaDBA  buckwoody People can use Access as a cross or garlic on you #youmightbeaDBA  buckwoody You know what "the truth, thole truth and nothing but the truth, so help me Codd" means #youmightbeaDBA  buckwoody You've gotten more help from twitter and facebook than all your years in college #youmightbeaDBA  buckwoody You can't talk fast enough to get a concept out of your head so you tweet it instead #youmightbeaDBA  buckwoody You cry when someone doesn't use a WHERE clause #youmightbeaDBA  buckwoody You think data can be represented as something OTHER than XML #youmightbeaDBA  buckwoody You think "Set theory" is not an verb but a noun #youmightbeaDBA  buckwoody You try to convince random strangers to vote on your Connect item #youmightbeaDBA  buckwoody You think 3 hours of contiguous sleep is a good thing #youmightbeaDBA or #youmightbeamother  buckwoody You don't like Oracle, and not just because of what she 
did to Neo #youmightbeaDBA  buckwoody You know when to say "sequel" and "s-q-l" #youmightbeaDBA  buckwoody You know where the data is #youmightbeaDBA  buckwoody You refer to your children as "Fully Redundant Mirrors" #youmightbeaDBA  buckwoody Holiday == "Maintenance Window" #youmightbeaDBA  buckwoody Your laptop is more powerful than the servers in most companies - including your own #youmightbeaDBA  buckwoody You capitalize SELECTed words #youmightbeaDBA  buckwoody You take the word "NoSQL" as a personal attack #youmightbeaDBA  buckwoody You know why "user" refers to both business people and crack addicts #youmightbeaDBA  buckwoody You cringe in public when the word "upgrade" is used in a sentence #youmightbeaDBA  buckwoody Holiday == "Maintenance Window" #youmightbeaDBA  buckwoody All Data Is MetaData means something to you #youmightbeaDBA  buckwoody You've never seen the driveway to your house in the daylight #youmightbeaDBA  buckwoody You think that something OTHER than the database might be the performance bottleneck #youmightbeaDBA  buckwoody Most of your bloodstream is composed of caffeine #youmightbeaDBA  buckwoody Your task list is labeled "CRUD Matrix" #youmightbeaDBA  buckwoody You call your wife/husband a "Linked Server" #youmightbeaDBA  anonythemouse When someone tells you they are going to take a dump and you wonder of which database then #youmightbeaDBA  anonythemouse When it's 11pm on a holiday weekend and you are working #youmightbeaDBA  anonythemouse When you sit down at a table and look for it's primary key #youmightbeaDBA  anonythemouse When getting milk from the fridge you check the expiry date is > getdate() #youmightbeaDBA  blakmk when you wake up dreaming about sql #youmightbeaDBA  CharlesGarver You think a @buckwoody bobblehead would be a cool thing to have on the dashboard of your car #youmightbeaDBA  CharlesGarver Your friends don't understand why you think there's a difference between single and double quotes #youmightbeaDBA  CharlesGarver Even the newest employees know your name from all the downtime notices you've sent out #youmightbeaDBA  CharlesGarver You sometimes feel anxious and think "I should test restoring those backups" and then the feeling passes #youmightbeadba  CharlesGarver You know what a co-worker means when they ask "how is your squirrel server?" 
#youmightbeadba  CharlesGarver You can't sleep at night and you ponder the logisitcs of collecting every copy of Access for the world's biggest bonfire #youmightbeaDBA  CharlesGarver You can't sleep at night and you ponder the logisitcs of collecting every copy of Access for the world's biggest bonfire #youmightbeaDBA  CharlesGarver You're willing to move someone's job up in priority for a box of #voodoodonuts #youmightbeaDBA  CharlesGarver Each person in your company seems to think you work for THEM #youmightbeaDBA  CharlesGarver You have a Love/Hate relationship going on with #Microsoft #youmightbeaDBA  CharlesGarver People ask you to troubleshoot their Access program #youmightbeaDBA  CharlesGarver The first words you hear in the morning are 'your voicemail box is full' #youmightbeaDBA  CharlesGarver The thought of disrupting 500 people's work so you can do something doesn't phase you #youmightbeaDBA  CharlesGarver You can't sleep at night and you ponder the logisitcs of collecting every copy of Access for the world's biggest bonfire #youmightbeaDBA  CharlesGarver Your home computer is backed up in 3 different places #youmightbeaDBA  CharlesGarver Your wardrobe for work includes pajamas #youmightbeaDBA  CharlesGarver Someone tells you to look in the INDEX and you look puzzled before finally going to the back of the book. #youmightbeaDBA  chuckboycejr If you have ever set up a SQLAgent job to email your mobile phone to serve as an alarm clock #youmightbeaDBA  chuckboycejr If you'd rather meet Itzik than Jay Z #youmightbeaDBA  chuckboycejr If you'd rather meet Itzik than Jay Z #youmightbeaDBA  chuckboycejr If you'd wrestle a SysAdmin to the ground to implement #DPA best practices as per @aspiringgeek #youmightbeaDBA  databaseguy I need to be up in 7 hours, so I'm off to bed! I'll have to read the rest of @buckwoody's #youmightbeaDBA posts in the AM. (g'night Buck!)  databaseguy When people ask you about your house, the first thing you describe is the network. #youmightbeaDBA  databaseguy The last thing you say at the office each day is, "is anybody else here? I'm shutting off the lights!" #youmightbeaDBA  databaseguy Your blood pressure rises when you read application specs drafted by marketing. #youmightbeaDBA  databaseguy A good day at work is one when nobody pays you no mind. #youmightbeaDBA  databaseguy You care about latches and wait states. #youmightbeaDBA  databaseguy You have worked over 200 hours on a performance tuning project that required no application changes at all. #youmightbeaDBA  databaseguy The late-night security guard knows the names of your spouse and kids. #youmightbeaDBA  databaseguy You have had vigorous debates about whether it should be pronounced "sequel" or "ess-queue-ell". #youmightbeaDBA  databaseguy You have VPN and RDP software installed on your phone ... just in case. #youmightbeaDBA  databaseguy You have edited a data file by hand, just to see what would happen. #youmightbeaDBA  databaseguy You decorate your office walls with database catalog posters. #youmightbeaDBA  databaseguy You've built programs that access data just to keep other developers from asking you to run queries all the time. #youmightbeaDBA  databaseguy When you watch movies like The Matrix, you find yourself calculating the fasibility of storing all that data. #youmightbeaDBA  databaseguy You have tried to convince someone to spend money on an SSD storage array. #youmightbeaDBA  databaseguy When CPU is spiked on a server, you want to gather forensic evidence. 
#youmightbeaDBA  databaseguy You have to remind developers not to push code to production without checking if the database is ready. #youmightbeaDBA  databaseguy Nobody cares what you wear to work, as long as the thing keeps running. #youmightbeaDBA  databaseguy Telepathy is a job requirement when working with app dev teams. #youmightbeaDBA  databaseguy You read database statistics for the educational value. #youmightbeaDBA  databaseguy And your boss freely admits this to anyone within earshot. #youmightbeaDBA  databaseguy Your boss cannot explain or understand what you do. #youmightbeaDBA  databaseguy You envision ERDs when you see a GUI. #youmightbeaDBA  databaseguy You say things like "applications come and go, but data lasts forever." #youmightbeaDBA  databaseguy You have memorized the names of several of the AdventureWorks employees. #youmightbeaDBA  databaseguy You know what MAXDOP setting you can get away with for a big query based on current server load. #youmightbeaDBA  databaseguy And you immediately recognize the recursion in my last tweet. #youmightbeaDBA  databaseguy You find 50 simultaneous tweets from @buckwoody about #youmightbeaDBA :O)  DBAishness You have "funny stories" about the times your developers accidentally deleted the T-log in their test environment. #youmightbeaDBA  DBAishness Planning to slice and dice your MDW data with PowerPivot makes you giggle like a schoolgirl. #youmightbeaDBA  donalddotfarmer You think @buckwoody lives in the "real world." #youmightbeaDBA  jamach09 @buckwoody #youmightbeaDBA Why go outside when you can sit in the nice cool server room?  jamach09 If you refer to procreation as "Replication", #youmightbeaDBA.  jamach09 If you think ORM is a four-letter word, #youmightbeaDBA  JamesMarsh If you have ever preached the value of Source Code Control, #YouMightBeADBA  jethrocarr @venzann You store your shopping list in a ACID compliant DB #youmightbeaDBA  joe_positive @buckwoody thought it stood for "Don't Bother Asking" #youmightbeaDBA  joe_positive when you check your IT Events Calendar before making weekend plans #youmightbeaDBA  LadyRuna You cringe whenever someone calls Excel a database #youmightbeaDBA  LadyRuna When the waiter says he'll be your server today, you ask how many terabytes he is #youmightbeaDBA  LadyRuna you always call the asterisk a "Star" #youmightbeaDBA  LadyRuna You walk into a server room, say "Nice RACK!" and everyone there knows you're talking about server rack... #youmightbeaDBA  LadyRuna You receive more messages from servers than from friends #youmightbeaDBA  LadyRuna hmmm... #youmightbeaDBA if your recipe for gumbo is "SELECT * FROM Refrigerator"  markjholmes @SQLSoldier Heh. #youmightbeaDBA if you correct other DBAs' spelling of @PaulRandal  markjholmes #youmightbeaDBA if you actually test RAID5 vs RAID10 on your SAN because when it comes to configuration, "it depends."  markjholmes #youmightbeaDBA if you have at least 3 definitions of the word "cluster"  MarlonRibunal 3 Words: @BrentO, snicker, & Access #youmightbeaDBA  MarlonRibunal @onpnt @mikeSQL my appeal was a couple of mins late. Enjoying #youmightbeaDBA  MarlonRibunal @mikeSQL @onpnt pls, don't mention bacon #youmightbeaDBA  merv @buckwoody You HATE 3-way joins #youmightbeaDBA  MidnightDBA If you're up at midnight Tweeting about SQL #youmightbeaDBA  MidnightDBA @buckwoody I'd noticed that. 
:) #youmightbeaDBA  mikeSQL when people talk about "their type" you're thinking varchar, bigint, binary, etc #youmightbeadba  mikeSQL people ask you to go to lunch , but you can't go because you're attending #SQLlunch #youmightbeadba  mikeSQL you laugh for hours at all of the #sqlmoviequotes ....things in which a normal individual would scratch their head at. #youmightbeadba  mikeSQL you laugh for hours at all of the #sqlmoviequotes ....things in which a normal individual would scratch their head at. #youmightbeadba  mrdenny If you think that @buckwoody's demo using PowerPivot to analyze index usage data from DMVs is awesome then #youmightbeaDBA  mrdenny You wish @PaulRandal still worked at Microsoft so that they would make a bobble head of him #youmightbeadba  mrdenny When it's 11pm on a holiday weekend, and your posting stupid jokes on Twitter then #youmightbeadba  mrdenny If you go out with friends and wonder why no one's wearing a kilt then #YouMightBeADBA  mrdenny You can't do basic math, but you know off the top of your head how many CALs $14,412 can buy you. #YoumightbeaDBA  mrdenny If you've ever setup a SQL Job to email you to get you out of a regularly scheduled meeting #YouMightBeADBA.  mrdenny You throw up in your mouth a little when ever you here the word "Access". Even if it doesn't relate to a MS product. #YouMightBeADBA  msdtjones You spend more time listening to @buckwoody than your wife #youmightbeaDBA  NFDotCom You perform "hail deltas" on a regular basis. #YouMightBeADBA  NoelMcKinney If you tell your wife you want to go to Columbus Ohio for your wedding anniversary so you can attend #sqlsat42 then #youmightbeaDBA  NoelMcKinney You read a union is on strike and wonder if it's a UNION ALL #youmightbeaDBA  NoelMcKinney You read a union is on strike and wonder if it's a UNION ALL #youmightbeaDBA  NoelMcKinney Someone asks you to throw another log on the fire and you tell them not to worry about it because Autogrowth is turned on #youmightbeaDBA  Nuurdygirl Even if you have a girlfriend...its possible #youmightbeadba. Yeah-i said its possible!  
Nuurdygirl When your girlfriend has to lean around the laptop to kiss you goodnight #youmightbeadba  Old_Man_Fish If you worry about how big your package is and how long it takes to finish #youmightbeaDBA  Old_Man_Fish If you no longer wonder if someone is in trouble or died if you are getting calls at 2AM #youmightbeaDBA  Old_Man_Fish If, when you hear the word ACCESS with no connotation you blood pressure jumps 50 points, #youmightbeaDBA  onpnt When you hear the word inject you immediately get concerned if your databases are OK #youmightbeaDBA  onpnt Your servers haven't been rebooted in a year #youmightbeaDBA  onpnt You know why it's funny when @PaulRandal has the word, "Sheep" in a tweet #youmightbeaDBA  onpnt You have read BOL without actually having a problem to figure out #youmightbeaDBA  onpnt You can type "SELECT columns FROM tables" without typos but tipen ni Banglish ares a messis #youmightbeaDBA  onpnt DR strategies doesn't include the word, RAID in them #youmightbeaDBA  onpnt you can move a SQL Server instance to a new server without the users ever knowing #youmightbeaDBA  onpnt You have made an SSIS package that is more than one step #youmightbeaDBA  onpnt You have the balls to say no to your boss when they ask for the sa password #youmightbeaDBA  onpnt you google to trouble shoot a problem and end up at your own blog (and it fixes it) #youmightbeaDBA  onpnt You talk your wife into moving the family vacation a week earlier so you can attend the areas local SSUG meeting #youmightbeaDBA  onpnt you can explain to a nontechnical person what a deadlock is #youmightbeaDBA  onpnt You hope a girl asks you what your collation is #youmightbeaDBA  onpnt you make jokes that include the words shrink, truncate and 1205. And you are the only one that laughs at them #youmightbeaDBA  onpnt You rate your ability to stay awake to work longer on blogs, twitter, forums and your day to day job with the 5 9's goal #youmightbeaDBA  onpnt you have major surgery and beg the doctor to release you back to work 5 days later because you miss your servers #youmightbeaDBA #TrueStory  onpnt You do have backups and you know how to use them #youmightbeaDBA  onpnt It's the network #youmightbeaDBA  onpnt When the developers get to work your mood changes rapidly #youmightbeaDBA  onpnt When someone says, "PASS", you first think of karaoke #youmightbeaDBA  onpnt Recruiters try to get you to call them *just* because they think you'll give them @BrentO contact info #youmightbeaDBA  onpnt You chuckle every time you go to grab the "CLR" Calcium, Lime and Rust Remover to clean something #youmightbeaDBA  onpnt @MarlonRibunal @mikeSQL Sorry man, it was already in motion ;-) #youmightbeaDBA  onpnt When you have an "I love bacon" sticker on your laptop. #youmightbeaDBA http://twitpic.com/1ry671  onpnt You sing SELECT statements in the shower #youmightbeaDBA  onpnt When you see a chicken it doesn't remind you of food. It reminds you of a guy named Jorge #youmightbeaDBA  onpnt At time, SQL is your mistress #youmightbeaDBA  onpnt Your wife wonders if SQL is the code name of your mistress at times #youmightbeaDBA  onpnt it's Friday and you are on twitter thinking really hard about what would be funny for hash tag #youmightbeaDBA  onpnt You organize your wife's "decorative"pillows on the bed in a B-Tree structure #youmightbeaDBA  PaulWhiteNZ If you: SELECT TOP (1) milk FROM fridge WHERE use_by_date >= GET_DATE() ORDER BY use_by_date ASC #YouMightBeaDBA  RonDBA #youmightbeaDBA if you read @buckwoody's and @BrentO's blogs.  
ryaneastabrook @buckwoody omg, you have to stand up a website with these on them, they are awesome #youmightbeaDBA  soulvy @StrateSQL @LadyRuna Or a "Splat" #youmightbeaDBA  speedracer You can still fall asleep after three cups of coffee #youmightbeaDBA  speedracer You retweet @buckwoody on a Friday night #youmightbeaDBA  speedracer You can still fall asleep after three cups of coffee #youmightbeaDBA  speedracer Developers make you twitch #youmightbeaDBA  sqlagentman You know what X/1024*8 is. #YouMightBeADBA  SqlAsylum Your still in the office at 5:00 on memorial day weekend. #youmightbeadba :)  SQLBob Whenever someone you know gets pregnant you bring up INNER JOINs or SQL Injection attacks... #youmightbeaDBA  SQLChicken You know one or more SQL folks in the community with an animal in their username #youmightbeaDBA  SQLChicken You've used one or more car analogies to explain how a database works #youmightbeaDBA  SQLChicken “@sqljoe: #youmightbeaDBA if you applied to attend #sqlu and requested @SQLChicken to pull strings for you” lmao nice!  SQLChicken When talking about SSIS your discussions break down into various jokes about packages #youmightbeaDBA  SQLChicken Just SEEING the code for cursors makes you break out in hives #youmightbeaDBA  SQLChicken Just SEEING the code for cursors makes you break out in hives #youmightbeaDBA  SQLCraftsman You coined the phrase "Magic SAN Dust" because calling a vendor's marketing claims BS is not acceptable in a meeting. #YouMightBeADBA  SQLCraftsman If you hear about a new feature with the acronym "DAC" and wonder what disaster of a feature it is attached to this time. #YouMightBeADBA  SQLCraftsman You really own a "Stick of Much Developer Whacking" #YouMightBeADBA  SQLCraftsman You coined the phrase "Magic SAN Dust" because calling a vendor's marketing claims BS is not acceptable in a meeting. #YouMightBeADBA  SQLCraftsman Default Blame Acceptor #YouMightBeADBA  SQLCraftsman If you hear about a new feature with the acronym "DAC" and wonder what disaster of a feature it is attached to this time. #YouMightBeADBA  SQLCraftsman Default Blame Acceptor #YouMightBeADBA  SQLCraftsman If you hear about a new feature with the acronym "DAC" and wonder what disaster of a feature it is attached to this time. #YouMightBeADBA  sqljoe #youmightbeaDBA if you wished your wife knew T-sql. 
USE ShoppingList SELECT NecessaryItems from Supermarket WHERE Category<> ("junk food")  sqljoe #youmightbeaDBA if the first thing you kiss when you wake up is your mobile for not waking you up in the middle of the night  sqljoe #youmightbeaDBA if your wife has a "Do Not Fly" family vacation list of her own including your laptop and mobile  sqljoe #youmightbeaDBA if you have researched for DBA Anonymous groups and attended a #SSUG willing to drop your database (vice)  sqljoe #youmightbeaDBA if your only maintenance windows are staff meetings  sqljoe #youmightbeaDBA if you think of yourself as "The One" in The Matrix "balancing the equation" from The Architect's (developers) poor coding  sqljoe #youmightbeaDBA if you think @PaulRandal should have played the Oracle in The Matrix  sqljoe #youmightbeaDBA if home CD & Movie collection is stored in secured containers,in logical order & naming convention,and with a backup copy  sqljoe #youmightbeaDBA if you applied to attend #sqlu and requested @SQLChicken to pull strings for you  sqljoe #youmightbeaDBA if you have tried to TiVo @MidnightDBA broadcasts  sqljoe #youmightbeaDBA if your #sql user group feels like #AA meetings  sqljoe #youmightbeaDBA if you thought of bringing your #sql books to #sqlsaturday and #sqlpass for autographs  sqljoe #youmightbeaDBA if #sqlpass feels like the #oscars  sqljoe #youmightbeaDBA if you are proud of your small package  SQLLawman #youmightbeaDBA when you hear MDX and Acura is not first thought that comes to mind.  sqlrunner If your wife double checks that there isn't a SQLSat within 200 miles of your vacation destination #youmightbeaDBA  sqlrunner When you're on a conference call and your wife thinks your speaking in a foreign language #youmightbeaDBA  sqlrunner When you're on a conference call and your wife thinks your speaking in a foreign language #youmightbeaDBA  sqlrunner You treat the word 'access' as a verb, not a noun #youmightbeaDBA  sqlrunner If you are happy with sub-second performance #youmightbeaDBA  sqlrunner When you know the names of the NOC people AND their families #youmightbeadba  sqlrunner When you know the names of the NOC people AND their families #youmightbeadba  sqlrunner Your company set's up international phone coverage for your cruise #youmightbeaDBA  sqlsamson @buckwoody if your manager asks you for data and you respond with "there's a script for that" #youmightbeadba  sqlsamson @buckwoody If you receive more messages from your server then your spouse #youmightbeadba  SQLSoldier You've spent all night Valentines Day upgrading the SQL Servers and forgot to tell your wife you'd be working late. #youmightbeadba  SQLSoldier You're flattered when someone calls you a geek. #youmightbeadba  SQLSoldier @llangit @mrdenny it's 11pm on a holiday weekend, & your reading stupid jokes on Twitter then #youmightbeadba  SQLSoldier Your manager borrows lunch money from you because your salary is 30% higher than his. #youmightbeaDBA  SQLSoldier You think "intellisense" is a double negative because it's not intelligent nor makes sense. #youmightbeaDBA  SQLSoldier 75% of the emails you receive at home have the phrase "now following you on Twitter!" in the subject line. #youmightbeaDBA  SQLSoldier You petition Ken Burns to remake Office Space because it should have been 18 hours long. #youmightbeaDBA  SQLSoldier You select a candidate for a Jr DBA position because his resume said he's willing to get your coffee. 
#youmightbeaDBA  SQLSoldier Somebody misquotes @PaulRandall and you call him on your cell to verify. #youmightbeaDBA  SQLSoldier You wish the elevator in your building was slower because it's the last time you'll be left alone all day. #youmightbeaDBA  SQLSoldier The developers sacrifice small animals before giving you their code for review. #youmightbeaDBA  SQLSoldier Developers bring you coffee and a BLT when you review their code. #youmightbeaDBA #IWish  SQLSoldier You can get out of any family get-together by saying you have to work and nobody questions it. #youmightbeaDBA  SQLSoldier You've requested a HP Superdome for you "test" box. #youmightbeaDBA  SQLSoldier Your leave work early because your internet connection to the data center is better at home #youmightbeaDBA  SQLSoldier The new CEO asks you to justify your salary, so you go on vacation for 2 weeks. And he never questions you again. #youmightbeaDBA  SQLSoldier You cheer when Milton burns down the company in Office Space #youmightbeaDBA  SQLSoldier A dev. asks if you've heard about some great new feature in SQL and you show the 16 blog posts you wrote on it ... last year #youmightbeaDBA  SQLSoldier Your dev team is still testing SQL 2008 and you're already planning for SQL 11. #youmightbeaDBA #TrueStory  SQLSoldier The new CEO asks you to justify your salary, so you go on vacation for 2 weeks. And he never questions you again. #youmightbeaDBA  SQLSoldier Your dev team is still testing SQL 2008 and you're already planning for SQL 11. #youmightbeaDBA  SQLSoldier You use a cell phone service coverage map to plan your next vacation. #youmightbeaDBA  SQLSoldier You come in to work at 7 AM because it gives you at least 3 hours without any developers around. #youmightbeaDBA  SQLSoldier You figure out a way to make take your wife on a cruise and deduct it as a business expense. #youmightbeaDBA #sqlcruise  SQLSoldier You name your cat SQLDog because the name @SQLCat was already taken. #youmightbeaDBA  SQLSoldier You rate your blog posts based on the number of retweets you get. #youmightbeaDBA  SQLSoldier You disable random logins just to mess with people. #youmightbeaDBA  SQLSoldier You fall for the pickup line, "Hey baby, what's your collation?" #youmightbeaDBA  SQLSoldier You can blame an outage on anyone in the company because you're the only one that knows how to find out what really happened #youmightbeaDBA  SQLSoldier You can blame an outage on anyone in the company because you're the only one that knows how to find out what really happened #youmightbeaDBA  SQLSoldier You cheer when Milton burns down the company in Office Space #youmightbeaDBA  SQLSoldier Your leave work early because your internet connection to the data center is better at home #youmightbeaDBA  SQLSoldier You cheer when Milton burns down the company in Office Space #youmightbeaDBA  SQLSoldier Your think the 4 food groups are coffee, bacon, fast food, and Mountain Dew. #youmightbeaDBA  SQLSoldier You tell someone your job title and they ask "What?" You describe it and they ask "What?". So you say "computer geek". #youmightbeaDBA  SQLSoldier The #1 referrer to your blog is Twitter.com. #youmightbeaDBA  SQLSoldier Your idea of a good time on a Saturday involves free training. #youmightbeaDBA #sqlsat43  SQLSoldier You write a book that all of your co-workers have and none have read it. #youmightbeaDBA  SQLSoldier You write a book that sells a couple thousand copies and is heralded a best seller. 
#youmightbeaDBA  SQLSoldier No matter how sick you are, you go to work if it's time to pass the pager on to the next guy. #youmightbeaDBA #TrueStory  SQLSoldier You go out on the town, and strangers walk up to you and say, "Hey you're that SQL guy" #youmightbeaDBA #TrueStory  SQLSoldier Your wife asks you to fix something, and you request a downtime window. #youmightbeaDBA  SQLSoldier Your wife asks when you'll be home, and you tell her that you wish you knew. #youmightbeaDBA  SQLSoldier Your best pickup line, "Hey baby, what's your collation?" #youmightbeaDBA  SQLSoldier Your wife asks when you'll be home, and you tell her that you wish you knew. #youmightbeaDBA  SQLSoldier You know that @BuckWoody is not someone's porno name. #youmightbeaDBA  SQLSoldier You list TSQL as your native language on the 2010 census. #youmightbeaDBA  SQLSoldier Starbucks' stock price drops every time you go on vacation. #youmightbeaDBA  SQLSoldier You're happy when the web master says that the website is down. #youmightbeaDBA  SQLSoldier You know that @BuckWoody is not someone's porno name. #youmightbeaDBA  SQLSoldier You get mad when someone calls your car a "heap" because you've always considered it to be a "clustered index". #youmightbeaDBA  SQLSoldier Your blog has more hits than your company's website. #youmightbeaDBA  SQLSoldier You systematically remove the asterisk key from all keyboards in the company except yours. #youmightbeaDBA  SQLSoldier When asked if you recycle, you reply that you run sp_cycle_errorlog every night at midnight #youmightbeaDBA  SQLSoldier You wouldn't allow someone named @AdamMachanic to work on your car. #youmightbeaDBA  SQLSoldier You switch offices every 3 days to avoid developers #youmightbeaDBA  SQLSoldier PSS has your number on speed dial. #youmightbeaDBA  SQLSoldier You frown when you they tell Neo that he's going to the Oracle #youmightbeaDBA  swhaley you regretted saying "This shouldn't effect production" #youmightbeaDBA  swhaley you regretted saying "This shouldn't effect production" #youmightbeaDBA  Tarwn A pleasurable saturday means spending the day learning more about what you already do the rest of the week #youmightbeaDBA ...oh, wait...  thelostforum For great justice; all our base are belong to YOU !! #youmightbeadba  thelostforum @SQLSoldier: You need a witness to use a mirror #youmightbeaDBA ;)  TimCost you capitalize key words. always. everywhere. you can't help it, usually don't even notice. #youmightbeaDBA  Toshana Your the only one in your company not impressed with the developers new application. #youmightbeaDBA  venzann Coming soon from a (respected) book publisher - @buckwoody's #youmightbeaDBA  venzann He's on a role tonight. @buckwoody is summing up my life with his #youmightbeaDBA tweets...  venzann I love the #youmightbeaDBA tag. Found at least 6 new DBAs to follow..  venzann He's on a role tonight. @buckwoody is summing up my life with his #youmightbeaDBA tweets...  venzann You use #sqlhelp as a primary resource during troubleshooting #youmightbeaDBA  venzann You insist on stricter password security for your sql servers than you implement on your own laptop #youmightbeaDBA  WesBrownSQL @buckwoody you are up so late the only tweets you see are from @buckwoody #youmightbeaDBA  WesBrownSQL @SQLSoldier you are upgrading all your 2005 prod servers to 2008 R2 on a three day weekend... #youmightbeaDBA  zippy1981 #youmightbeaDBA if everytime you do something with #mongodb you think of the Vulcan proverb "only Nixon could go to China."  

    Read the article

  • Using HTML 5 SessionState to save rendered Page Content

    - by Rick Strahl
    HTML 5 SessionState and LocalStorage are very useful and super easy to use for managing client side state. For building rich client side or SPA style applications, it's a vital feature to be able to cache user data as well as HTML content in order to swap pages in and out of the browser's DOM. What might not be so obvious is that you can also use the sessionStorage and localStorage objects even in classic server rendered HTML applications to provide caching features between pages. These APIs have been around for a long time and are supported by most relatively modern browsers, even all the way back to IE8, so you can use them safely in your Web applications.

SessionState and LocalStorage are easy

The APIs that make up sessionStorage and localStorage are very simple. Both objects feature the same API interface: a simple, string based key-value store with getItem, setItem, removeItem, clear and key methods. The objects are also pseudo-array objects, so they can be iterated like an array via a length property, and you have array indexers to set and get values with. Basic usage for storing and retrieval looks like this (using sessionStorage, but the syntax is the same for localStorage - just switch the objects):

// set
var lastAccess = new Date().getTime();
if (sessionStorage)
    sessionStorage.setItem("myapp_time", lastAccess.toString());

// retrieve in another page or on a refresh
var time = null;
if (sessionStorage)
    time = sessionStorage.getItem("myapp_time");
if (time)
    time = new Date(time * 1);
else
    time = new Date();

sessionStorage stores data that is browser session specific and has a lifetime of the active browser session or window. Shut down the browser or tab and the storage goes away. localStorage uses the same API interface, but the data is stored permanently in the browser's storage area until deleted via code or by clearing out browser cookies (not the cache).

Both sessionStorage and localStorage space is limited. The spec is ambiguous about this - supposedly sessionStorage should allow for unlimited size, but it appears that most WebKit browsers support only 2.5mb for either object. This means you have to be careful what you store, especially since other applications might be running on the same domain and also use the storage mechanisms. That said, 2.5mb worth of character data is quite a bit and would go a long way.

The easiest way to get a feel for how sessionStorage and localStorage work is to look at a simple example. You can go check out the following example online in Plunker: http://plnkr.co/edit/0ICotzkoPjHaWa70GlRZ?p=preview which looks like this:

Plunker is an online HTML/JavaScript editor that lets you write and run JavaScript code, similar to JsFiddle but a bit cleaner to work in IMHO (thanks to John Papa for turning me on to it). The sample has two text boxes with counts that update session/local storage every time you click the related button. The counts are 'cached' in Session and Local storage. The point of these examples is that both counters survive full page reloads, and the LocalStorage counter survives a complete browser shutdown and restart. Go ahead and try it out by clicking the Reload button after updating both counters and then shutting down the browser completely and going back to the same URL (with the same browser). What you should see is that reloads leave both counters intact at the counted values, while a browser restart will leave only the local storage counter intact.
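As a quick aside, the pseudo-array behavior mentioned above means you can enumerate whatever is currently in storage; a small illustrative sketch (not part of the article's sample):

// list every key/value pair currently in sessionStorage
for (var i = 0; i < sessionStorage.length; i++) {
    var key = sessionStorage.key(i);  // key name at index i
    console.log(key + " = " + sessionStorage.getItem(key));
}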
The code to deal with the SessionStorage (and LocalStorage, not shown here) in the example is isolated into a couple of wrapper methods to simplify the code:

function getSessionCount() {
    var count = 0;
    if (sessionStorage) {
        count = sessionStorage.getItem("ss_count");
        count = !count ? 0 : count * 1;
    }
    $("#txtSession").val(count);
    return count;
}

function setSessionCount(count) {
    if (sessionStorage)
        sessionStorage.setItem("ss_count", count.toString());
}

These two functions essentially load and store a session counter value. The two key methods used here are:

sessionStorage.getItem(key);
sessionStorage.setItem(key, stringVal);

Note that the value given to setItem and returned by getItem has to be a string. If you pass another type you get an error. Don't let that limit you though - you can easily enough serialize data to JSON, so it's quite possible to store complex objects in a single sessionStorage value:

var user = { name: "Rick", id: "ricks", level: 8 };
sessionStorage.setItem("app_user", JSON.stringify(user));

and to retrieve it:

var user = sessionStorage.getItem("app_user");
if (user)
    user = JSON.parse(user);

Simple! If you're using the Chrome Developer Tools (F12) you can also check out the session and local storage state on the Resources tab. You can also use this tool to refresh or remove entries from storage. What we just looked at is a purely client side implementation where a couple of counters are stored. For rich client centric AJAX applications sessionStorage and localStorage provide a very nice and simple API to store application state while the application is running. But you can also use these storage mechanisms to manage server centric HTML applications when you combine server rendering with some JavaScript to perform client side data caching. You can either store some state information and data on the client (ie. store a JSON object and carry it forth between server rendered HTML requests) or you can use it for good old HTTP based caching where some rendered HTML is saved and then restored later. Let's look at the latter with a real life example.

Why do I need Client-side Page Caching for Server Rendered HTML?

I don't know about you, but in a lot of my existing server driven applications I have lists that display a fair amount of data. Typically these lists contain links to then drill down into more specific data either for viewing or editing. You can then click on a link and go off to a detail page that provides more concise content. So far so good. But now you're done with the detail page and need to get back to the list, so you click on a 'bread crumbs trail' or an application level 'back to list' button and… …you end up back at the top of the list - the scroll position, the current selection, in some cases even filter conditions - all gone with the wind. You've left behind the state of the list and are starting from scratch in your browsing of the list from the top. Not cool! Sound familiar? This is a pretty common scenario with server rendered HTML content where it's so common to display lists to drill into, only to lose state in the process of returning back to the original list. Look at just about any traditional forums application, or even StackOverflow, to see what I mean here. Scroll down a bit to look at a post or entry, drill in, then use the bread crumbs or tab to go back… In some cases returning to the top of a list is not a big deal.
On StackOverflow that sort of works because content is turning around so quickly that you probably want to actually look at the top posts. Not always though - if you're browsing through a list of search topics you're interested in and drill in, there's no way back to that position. Essentially any time you're actively browsing the items in the list, that's when state becomes important, and if it's not handled the user experience can be really disruptive.

Content Caching

If you're building client centric SPA style applications this is a fairly easy problem to solve - you tend to render the list once and then update the page content to overlay the detail content, only hiding the list temporarily until it's used again later. It's relatively easy to accomplish this simply by hiding content on the page and later making it visible again. But if you use server rendered content, hanging on to all the detail like filters, selections and scroll position is not quite as easy. Or is it??? This is where sessionStorage comes in handy. What if we just save the rendered content of a previous page, and then restore it when we return to this page based on a special flag that tells us to use the cached version? Let's see how we can do this.

A real World Use Case

Recently my local ISP asked me to help out with updating an ancient classifieds application. They had a very busy, local classifieds app that was originally an ASP classic application. The old app was - wait for it - frames based, and even though I lobbied against it, the decision was made to keep the frames based layout to allow rapid browsing of the hundreds of posts that are made on a daily basis. The primary reason they wanted this was precisely for the ability to quickly browse content item by item. While I personally hate working with frames, I have to admit that the UI actually works well with the frames layout as long as you're running on a large desktop screen. You can check out the frames based desktop site here: http://classifieds.gorge.net/ However when I rebuilt the app I also added a secondary view that doesn't use frames. The main reason for this of course was mobile displays, which work horribly with frames. So there's a somewhat mobile-friendly version of the interface, which ditches the frames and uses some responsive design tweaking for mobile capable operation: http://classifieds.gorge.net/mobile (or browse the base URL with your browser width under 800px). In the mobile, non-frames view the classifieds posts appear as a plain list, and there's a separate page for drilling down into each item. And of course… originally we ran into that usability issue I mentioned earlier where the browse, view detail, go back to the list cycle resulted in lost list state. Originally in mobile mode you scrolled through the list, found an item to look at and drilled in to display the item detail. Then you clicked back to the list and BAM - you've lost your place. Because there are so many items added on a daily basis the full list is never fully loaded; instead there's a "Load Additional Listings" entry at the bottom. Not only did we originally lose our place when coming back to the list, but any 'additionally loaded' items were no longer there because the list was now rendering as if it was the first page hit. The additional listings, any filters, the selection of an item - all were lost. Major Suckage!
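For contrast, the SPA-style fix described above is literally a couple of jQuery calls - the list never leaves the DOM, so its scroll position, filters and selection survive untouched. A minimal sketch, where the #listView and #detailView IDs are made up for illustration:

// drill in: swap visibility instead of navigating away
$("#listView").hide();
$("#detailView").show();

// 'back': reverse the swap - the list state was never lost
$("#detailView").hide();
$("#listView").show();

A server rendered app doesn't get that for free, which is where the caching approach below comes in.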
Using Client SessionStorage to cache Server Rendered Content

To work around this problem I decided to cache the rendered page content from the list in sessionStorage. Any time the list renders or is updated with Load Additional Listings, the page HTML is cached and stored in session storage. Any back links from the detail page or the login or write entry forms then point back to the list page with a back=true query string parameter. If the server side sees this parameter it doesn't render the part of the page that is cached. Instead the client side code retrieves the data from the sessionStorage cache and simply inserts it into the page. It sounds pretty simple, and the overall process is really easy, but there are a few gotchas that I'll discuss in a minute. But first let's look at the implementation. Let's start with the server side here because that'll give a quick idea of the doc structure. As I mentioned, the server renders data from an ASP.NET MVC view. The URL used when returning to the list page from the display page (or a host of other pages) looks like this: https://classifieds.gorge.net/list?back=True The query string value is a flag that indicates whether the server should render the HTML. Here's what the top level MVC Razor view for the list page looks like:

@model MessageListViewModel
@{
    ViewBag.Title = "Classified Listing";
    bool isBack = !string.IsNullOrEmpty(Request.QueryString["back"]);
}
<form method="post" action="@Url.Action("list")">
    <div id="SizingContainer">
        @if (!isBack)
        {
            @Html.Partial("List_CommandBar_Partial", Model)
            <div id="PostItemContainer" class="scrollbox" xstyle="-webkit-overflow-scrolling: touch;">
                @Html.Partial("List_Items_Partial", Model)
                @if (Model.RequireLoadEntry)
                {
                    <div class="postitem loadpostitems" style="padding: 15px;">
                        <div id="LoadProgress" class="smallprogressright"></div>
                        <div class="control-progress">
                            Load additional listings...
                        </div>
                    </div>
                }
            </div>
        }
    </div>
</form>

As you can see, the query string triggers a conditional block that, if set, is simply not rendered. The content inside of #SizingContainer basically holds the entire page's HTML sans the headers and scripts, but including the filter options and menu at the top. In this case this makes good sense - in other situations the fact that the menu or filter options might be dynamically updated might make you cache only the list rather than essentially the entire page. In this particular instance all of the content works and produces the proper result, as both the list and any filter conditions in the form inputs are restored. Ok, let's move on to the client. On the client there are two page level functions that deal with saving and restoring state. Like the counter example I showed earlier, I like to wrap the logic to save and restore values from sessionStorage into separate functions because they are almost always used in several places.

page.saveData = function(id) {
    if (!sessionStorage)
        return;
    var data = {
        id: id,
        scroll: $("#PostItemContainer").scrollTop(),
        html: $("#SizingContainer").html()
    };
    sessionStorage.setItem("list_html", JSON.stringify(data));
};
page.restoreData = function() {
    if (!sessionStorage)
        return;
    var data = sessionStorage.getItem("list_html");
    if (!data)
        return null;
    return JSON.parse(data);
};

The data that is saved is an object which contains an id - the selected element when the user clicks - and a scroll position. These two values are used to reset the scroll position when the data is used from the cache.
Finally the HTML from the #SizingContainer element is stored, which makes up the bulk of the document's HTML. In this application the HTML captured can be a substantial bit of data. If you recall, I mentioned that the server side code renders a small chunk of data initially and then gets more data if the user reads through the first 50 or so items. The rest of the items retrieved can be rather sizable. Other than the JSON serialization overhead that's OK, and since I'm using sessionStorage - which is cleared when the browser session ends - the storage space is not an immediate concern here. Next is the core logic to handle saving and restoring the page state. At first glance this would seem pretty simple, and in some cases it might be, but as the following code demonstrates there are a few gotchas to watch out for. Here's the relevant code I use to save and restore:

$( function() {
    …
    var isBack = getUrlEncodedKey("back", location.href);
    if (isBack) {
        // remove the back key from URL
        setUrlEncodedKey("back", "", location.href);
        var data = page.restoreData(); // restore from sessionState
        if (!data) {
            // no data - force redisplay of the server side default list
            window.location = "list";
            return;
        }
        $("#SizingContainer").html(data.html);
        var el = $(".postitem[data-id=" + data.id + "]");
        $(".postitem").removeClass("highlight");
        el.addClass("highlight");
        $("#PostItemContainer").scrollTop(data.scroll);
        setTimeout(function() {
            el.removeClass("highlight");
        }, 2500);
    }
    else if (window.noFrames)
        page.saveData(null); // save when page loads

    $("#SizingContainer").on("click", ".postitem", function() {
        var id = $(this).attr("data-id");
        if (!id)
            return true;
        if (window.noFrames)
            page.saveData(id);
        var contentFrame = window.parent.frames["Content"];
        if (contentFrame)
            contentFrame.location.href = "show/" + id;
        else
            window.location.href = "show/" + id;
        return false;
    });
    …

The code starts out by checking for the back query string flag, which triggers restoring from the client cache. If it's present, the cached data structure is read from sessionStorage. It's important here to check if data was returned. If the user had back=true on the query string but there is no cached data, he likely bookmarked this page or otherwise shut down the browser and came back to this URL. In that case the server didn't render any detail and we have no cached data, so all we can do is redirect to the original default list view using window.location. If we continued, the page would render no data - so make sure to always check the cache retrieval result. Always! If there is data, it's loaded and the data.html value is restored back into the document by simply injecting the HTML into the document's #SizingContainer element:

$("#SizingContainer").html(data.html);

It's that simple, and it's quite quick even with a fully loaded list of additional items, even on a phone. The actual HTML data is stored to the cache on every page load initially and then again when the user clicks on an element to navigate to a particular listing. The former ensures that the client cache always has something in it, and the latter updates the cache with the selected element. For the click handling I use a data-id attribute on the list item (.postitem) in the list and retrieve the id from that. That id is then used to navigate to the actual entry as well as being stored in the cached data. The id is used to reset the selection by searching for the data-id value in the restored elements.
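The getUrlEncodedKey() and setUrlEncodedKey() helpers used above are part of the page's own support script and aren't shown in this excerpt. Purely as an assumption about what the retrieval half might look like, a minimal sketch:

function getUrlEncodedKey(key, url) {
    // hypothetical implementation: pull a single query string value out of a URL
    var match = new RegExp("[?&]" + key + "=([^&#]*)").exec(url);
    return match ? decodeURIComponent(match[1].replace(/\+/g, " ")) : null;
}

// getUrlEncodedKey("back", "https://classifieds.gorge.net/list?back=True") returns "True"

setUrlEncodedKey() would be the inverse, rewriting the current URL without the flag - presumably via history.replaceState() where available - so that a manual refresh doesn't re-trigger the cache restore.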
The overall save/restore process is pretty straightforward and it doesn't require a bunch of code, yet it yields a huge improvement in the usability of the site on mobile devices (or for anybody who uses the non-frames view).

Some things to watch out for

As easy as it conceptually seems to simply store and retrieve cached content, you have to be quite aware of what type of content you are caching. The code above is all that's specific to the cache/restore cycle and it works, but it took a few tweaks to the rest of the script code and server code to make it all work. There were a few gotchas that weren't immediately obvious. Here are a few things to pay attention to:

- JavaScript event handling logic
- Timing of manipulating DOM elements
- Inline script code
- Bookmarking the cached URL when no cache exists

Do you have inline script code in your HTML? That script code isn't going to run if you restore from cache and simply assign the HTML, or it may not run at the time you'd expect in the normal DOM rendering cycle.

JavaScript Event Hookups

The biggest issue I ran into with this approach almost immediately is that originally I had various static event handlers hooked up to various UI elements that are now cached. If you have an event handler like:

$("#btnSearch").click( function() {…});

that works fine when the page loads with server rendered HTML, but that code breaks when you load the HTML from cache. Why? Because the elements you're trying to hook those events to may not actually be there - yet. Luckily there's an easy workaround for this by using deferred events. With jQuery you can use the .on() event handler instead:

$("#SelectionContainer").on("click", "#btnSearch", function() {…});

which monitors a parent element for the events and checks for the inner selector elements to handle events on. This effectively defers event binding to runtime, so as more items are added to the document the bindings still work. For any cached content, use deferred events.

Timing of manipulating DOM Elements

Along the same lines, make sure that your DOM manipulation code runs after the code that loads the cached content into the page, so that you don't manipulate DOM elements that don't exist just yet. Ideally you'll want to check for the condition to restore cached content towards the top of your script code, but that can be tricky if you have components or other logic that might not all run in a straight line.

Inline Script Code

Here's another small problem I ran into: I use a DateTime picker widget I built a while back that relies on the jQuery date time picker. I also created a helper function that allows keyboard date navigation into it that uses JavaScript logic. Because of MVC's limited 'object model', the only way to embed widget content into the page is through inline script. This code broke when I inserted the cached HTML into the page because the script code was not available when the component actually got injected into the page. As with the last bullet - it's a matter of timing. There's no good workaround for this - in my case I pulled out the jQuery date picker and relied on native <input type="date" /> logic instead - a better choice these days anyway, especially since this view is meant primarily to serve mobile devices, which actually support date input through the browser (unlike desktop browsers, of which only WebKit seems to support it).

Bookmarking Cached Urls

When you cache HTML content you have to make a decision whether you cache on the client and also not render that same content on the server.
In the Classifieds app I didn't render the server side content, so if the user comes to the page with back=True and there is no cached content, I have to have a Plan B. Typically this happens when somebody ends up bookmarking the back URL. The easiest and safest solution for this scenario is to ALWAYS check the cache result to make sure it exists, and if not, have a safe URL to go back to - in this case the plain uncached list URL, which amounts to effectively redirecting. This seems really obvious in hindsight, but it's easy to overlook and not see a problem until much later, when it's not obvious at all why the page is not rendering anything.

Don't use <body> to replace Content

Since we're practically replacing all the HTML in the page, it may seem tempting to simply replace the HTML content of the <body> tag. Don't. The body tag usually contains key things that should stay in the page and be there when it loads - specifically script tags and other embedded elements and content. It's best to create a top level DOM element specifically as a placeholder container for your cached content and wrap it just around the actual content you want to replace. In the app above, #SizingContainer is that container.

Other Approaches

The approach I've used for this application is kind of specific to the existing server rendered application we're running, so it's just one approach you can take with caching. However, for server rendered content caching this is a pattern I've used in a few apps to retrofit some client caching into list displays. In this application I took the path of least resistance to the existing server rendering logic. Here are a few other ways that come to mind:

Using Partial HTML Rendering via AJAX: Instead of rendering the page initially on the server, the page would load empty and the client would render the UI by retrieving the respective HTML from a partial view and embedding it into the page. This effectively makes the initial rendering and the cached rendering logic identical and relieves the server of having to decide whether this request needs to be rendered or not (ie. no checking for a back=true switch). All the logic related to caching is handled on the client in this case.

Using JSON Data and Client Rendering: The hardcore client option is to do the whole UI SPA style - pull JSON data from the server and render it using templates or client side databinding with Knockout/Angular et al. As with the partial rendering approach, the advantage is that there's no difference in the logic between pulling the data from cache or rendering from scratch, other than the initial check for the cache request. Of course, if the app is a full-on SPA app, then caching may not even be required - the list could just stay in memory and be hidden and reactivated.

I'm sure there are a number of other ways this can be handled as well, especially using AJAX. AJAX rendering might simplify the logic, but it also complicates search engine optimization since there's no content loaded initially. So there are always tradeoffs, and it's important to look at all angles before deciding on any sort of caching solution in general.

State of the Session

SessionState and LocalStorage are easy to use in client code and can be integrated even with server centric applications to provide nice caching features for content and data.
In this post I've shown a very specific scenario of storing HTML content for the purpose of remembering list view data and state, and making the browsing experience for lists a bit more friendly, especially if there's dynamically loaded content involved. If you haven't played with sessionStorage or localStorage, I encourage you to give it a try. There's a lot of cool stuff that you can do with this beyond the specific scenario I've covered here…

Resources

Overview of localStorage (also applies to sessionStorage)
Web Storage Compatibility
Modernizr Test Suite

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in JavaScript  HTML5  ASP.NET  MVC

    Read the article

  • Silverlight Recruiting Application Part 5 - Jobs Module / View

Now we're starting to get into a more code-heavy portion of this series. Thankfully, this means the groundwork is mostly set, and after adding the modules we will have a complete application that can be provided with full source. The Jobs module has two concerns: adding and maintaining jobs that can then be broadcast out to the website. How they are displayed on the site will be handled by our admin system (which will just poll from this common database), so we aren't too concerned with that, but rather with getting the information into the system and allowing the backend administration/HR users to keep things up to date. Since there is a fair bit of information that we want to display, we're going to move editing to a separate view so we can get all that information in an easy-to-use spot. With all the files created for this module, the project looks something like this: And now... on to the code.

XAML for the Job Posting View

All we really need for the Job Posting View is a RadGridView and a few buttons. This will let us both display records and perform operations on them without much hassle. That XAML is going to look something like this: 01.<Grid x:Name="LayoutRoot" 02.Background="White"> 03.<Grid.RowDefinitions> 04.<RowDefinition Height="30" /> 05.<RowDefinition /> 06.</Grid.RowDefinitions> 07.<StackPanel Orientation="Horizontal"> 08.<Button x:Name="xAddRecordButton" 09.Content="Add Job" 10.Width="120" 11.cal:Click.Command="{Binding AddRecord}" 12.telerik:StyleManager.Theme="Windows7" /> 13.<Button x:Name="xEditRecordButton" 14.Content="Edit Job" 15.Width="120" 16.cal:Click.Command="{Binding EditRecord}" 17.telerik:StyleManager.Theme="Windows7" /> 18.</StackPanel> 19.<telerikGrid:RadGridView x:Name="xJobsGrid" 20.Grid.Row="1" 21.IsReadOnly="True" 22.AutoGenerateColumns="False" 23.ColumnWidth="*" 24.RowDetailsVisibilityMode="VisibleWhenSelected" 25.ItemsSource="{Binding MyJobs}" 26.SelectedItem="{Binding SelectedJob, Mode=TwoWay}" 27.command:SelectedItemChangedEventClass.Command="{Binding SelectedItemChanged}"> 28.<telerikGrid:RadGridView.Columns> 29.<telerikGrid:GridViewDataColumn Header="Job Title" 30.DataMemberBinding="{Binding JobTitle}" 31.UniqueName="JobTitle" /> 32.<telerikGrid:GridViewDataColumn Header="Location" 33.DataMemberBinding="{Binding Location}" 34.UniqueName="Location" /> 35.<telerikGrid:GridViewDataColumn Header="Resume Required" 36.DataMemberBinding="{Binding NeedsResume}" 37.UniqueName="NeedsResume" /> 38.<telerikGrid:GridViewDataColumn Header="CV Required" 39.DataMemberBinding="{Binding NeedsCV}" 40.UniqueName="NeedsCV" /> 41.<telerikGrid:GridViewDataColumn Header="Overview Required" 42.DataMemberBinding="{Binding NeedsOverview}" 43.UniqueName="NeedsOverview" /> 44.<telerikGrid:GridViewDataColumn Header="Active" 45.DataMemberBinding="{Binding IsActive}" 46.UniqueName="IsActive" /> 47.</telerikGrid:RadGridView.Columns> 48.</telerikGrid:RadGridView> 49.</Grid>

I'll explain what's happening here by line numbers:
Lines 11 and 16: Using the same type of click commands as we saw in the Menu module, we tie the button clicks to delegate commands in the viewmodel.
Line 25: The source for the jobs will be a collection in the viewmodel.
Line 26: We also bind the selected item to a public property from the viewmodel for use in code.
Line 27: We've turned the event into a command so we can handle it via code in the viewmodel.
So those first three probably make sense to you as far as Silverlight/WPF binding magic is concerned, but for line 27... This actually comes from something I read on Damien Schenkelman's blog back in the day for creating an attached behavior from any event. So, any time you see me using command:Whatever.Command, the backing for it is actually something like this: SelectedItemChangedEventBehavior.cs: 01.public class SelectedItemChangedEventBehavior : CommandBehaviorBase<Telerik.Windows.Controls.DataControl> 02.{ 03.public SelectedItemChangedEventBehavior(DataControl element) 04.: base(element) 05.{ 06.element.SelectionChanged += new EventHandler<SelectionChangeEventArgs>(element_SelectionChanged); 07.} 08.void element_SelectionChanged(object sender, SelectionChangeEventArgs e) 09.{ 10.// We'll only ever allow single selection, so will only need item index 0 11.base.CommandParameter = e.AddedItems[0]; 12.base.ExecuteCommand(); 13.} 14.} SelectedItemChangedEventClass.cs: 01.public class SelectedItemChangedEventClass 02.{ 03.#region The Command Stuff 04.public static ICommand GetCommand(DependencyObject obj) 05.{ 06.return (ICommand)obj.GetValue(CommandProperty); 07.} 08.public static void SetCommand(DependencyObject obj, ICommand value) 09.{ 10.obj.SetValue(CommandProperty, value); 11.} 12.public static readonly DependencyProperty CommandProperty = 13.DependencyProperty.RegisterAttached("Command", typeof(ICommand), 14.typeof(SelectedItemChangedEventClass), new PropertyMetadata(OnSetCommandCallback)); 15.public static void OnSetCommandCallback(DependencyObject dependencyObject, DependencyPropertyChangedEventArgs e) 16.{ 17.DataControl element = dependencyObject as DataControl; 18.if (element != null) 19.{ 20.SelectedItemChangedEventBehavior behavior = GetOrCreateBehavior(element); 21.behavior.Command = e.NewValue as ICommand; 22.} 23.} 24.#endregion 25.public static SelectedItemChangedEventBehavior GetOrCreateBehavior(DataControl element) 26.{ 27.SelectedItemChangedEventBehavior behavior = element.GetValue(SelectedItemChangedEventBehaviorProperty) as SelectedItemChangedEventBehavior; 28.if (behavior == null) 29.{ 30.behavior = new SelectedItemChangedEventBehavior(element); 31.element.SetValue(SelectedItemChangedEventBehaviorProperty, behavior); 32.} 33.return behavior; 34.} 35.public static SelectedItemChangedEventBehavior GetSelectedItemChangedEventBehavior(DependencyObject obj) 36.{ 37.return (SelectedItemChangedEventBehavior)obj.GetValue(SelectedItemChangedEventBehaviorProperty); 38.} 39.public static void SetSelectedItemChangedEventBehavior(DependencyObject obj, SelectedItemChangedEventBehavior value) 40.{ 41.obj.SetValue(SelectedItemChangedEventBehaviorProperty, value); 42.} 43.public static readonly DependencyProperty SelectedItemChangedEventBehaviorProperty = 44.DependencyProperty.RegisterAttached("SelectedItemChangedEventBehavior", 45.typeof(SelectedItemChangedEventBehavior), typeof(SelectedItemChangedEventClass), null); 46.} These end up looking very similar from command to command, but in a nutshell you create a command based on any event, determine what the parameter for it will be, then execute. It attaches via XAML and ties to a DelegateCommand in the viewmodel, so you get the full event experience (since some controls get a bit event-rich for added functionality). Simple enough, right?
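To see how the pattern generalizes, here's what the same approach might look like wrapped around a TextBox's TextChanged event - a hypothetical sketch, not part of this series, assuming the same CommandBehaviorBase<T> base class from the Composite Application Library used above:

// Hypothetical: the attached-behavior pattern applied to TextBox.TextChanged
public class TextChangedEventBehavior : CommandBehaviorBase<TextBox>
{
    public TextChangedEventBehavior(TextBox element)
        : base(element)
    {
        element.TextChanged += new TextChangedEventHandler(element_TextChanged);
    }

    void element_TextChanged(object sender, TextChangedEventArgs e)
    {
        // pass the current text along as the command parameter, then fire the command
        base.CommandParameter = ((TextBox)sender).Text;
        base.ExecuteCommand();
    }
}

The companion static class would follow the SelectedItemChangedEventClass shape above verbatim, just swapping DataControl for TextBox in the attached property plumbing.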
Viewmodel for the Job Posting View The Viewmodel is going to need to handle all events going back and forth, maintaining interactions with the data we are using, and both publishing and subscribing to events. Rather than breaking this into tons of little pieces, I'll give you a nice view of the entire viewmodel and then hit up the important points line-by-line: 001.public class JobPostingViewModel : ViewModelBase 002.{ 003.private readonly IEventAggregator eventAggregator; 004.private readonly IRegionManager regionManager; 005.public DelegateCommand<object> AddRecord { get; set; } 006.public DelegateCommand<object> EditRecord { get; set; } 007.public DelegateCommand<object> SelectedItemChanged { get; set; } 008.public RecruitingContext context; 009.private QueryableCollectionView _myJobs; 010.public QueryableCollectionView MyJobs 011.{ 012.get { return _myJobs; } 013.} 014.private QueryableCollectionView _selectionJobActionHistory; 015.public QueryableCollectionView SelectedJobActionHistory 016.{ 017.get { return _selectionJobActionHistory; } 018.} 019.private JobPosting _selectedJob; 020.public JobPosting SelectedJob 021.{ 022.get { return _selectedJob; } 023.set 024.{ 025.if (value != _selectedJob) 026.{ 027._selectedJob = value; 028.NotifyChanged("SelectedJob"); 029.} 030.} 031.} 032.public SubscriptionToken editToken = new SubscriptionToken(); 033.public SubscriptionToken addToken = new SubscriptionToken(); 034.public JobPostingViewModel(IEventAggregator eventAgg, IRegionManager regionmanager) 035.{ 036.// set Unity items 037.this.eventAggregator = eventAgg; 038.this.regionManager = regionmanager; 039.// load our context 040.context = new RecruitingContext(); 041.this._myJobs = new QueryableCollectionView(context.JobPostings); 042.context.Load(context.GetJobPostingsQuery()); 043.// set command events 044.this.AddRecord = new DelegateCommand<object>(this.AddNewRecord); 045.this.EditRecord = new DelegateCommand<object>(this.EditExistingRecord); 046.this.SelectedItemChanged = new DelegateCommand<object>(this.SelectedRecordChanged); 047.SetSubscriptions(); 048.} 049.#region DelegateCommands from View 050.public void AddNewRecord(object obj) 051.{ 052.this.eventAggregator.GetEvent<AddJobEvent>().Publish(true); 053.} 054.public void EditExistingRecord(object obj) 055.{ 056.if (_selectedJob == null) 057.{ 058.this.eventAggregator.GetEvent<NotifyUserEvent>().Publish("No job selected."); 059.} 060.else 061.{ 062.this._myJobs.EditItem(this._selectedJob); 063.this.eventAggregator.GetEvent<EditJobEvent>().Publish(this._selectedJob); 064.} 065.} 066.public void SelectedRecordChanged(object obj) 067.{ 068.if (obj.GetType() == typeof(ActionHistory)) 069.{ 070.// event bubbles up so we don't catch items from the ActionHistory grid 071.} 072.else 073.{ 074.JobPosting job = obj as JobPosting; 075.GrabHistory(job.PostingID); 076.} 077.} 078.#endregion 079.#region Subscription Declaration and Events 080.public void SetSubscriptions() 081.{ 082.EditJobCompleteEvent editComplete = eventAggregator.GetEvent<EditJobCompleteEvent>(); 083.if (editToken != null) 084.editComplete.Unsubscribe(editToken); 085.editToken = editComplete.Subscribe(this.EditCompleteEventHandler); 086.AddJobCompleteEvent addComplete = eventAggregator.GetEvent<AddJobCompleteEvent>(); 087.if (addToken != null) 088.addComplete.Unsubscribe(addToken); 089.addToken = addComplete.Subscribe(this.AddCompleteEventHandler); 090.} 091.public void EditCompleteEventHandler(bool complete) 092.{ 093.if (complete) 094.{ 095.JobPosting thisJob = 
_myJobs.CurrentEditItem as JobPosting; 096.this._myJobs.CommitEdit(); 097.this.context.SubmitChanges((s) => 098.{ 099.ActionHistory myAction = new ActionHistory(); 100.myAction.PostingID = thisJob.PostingID; 101.myAction.Description = String.Format("Job '{0}' has been edited by {1}", thisJob.JobTitle, "default user"); 102.myAction.TimeStamp = DateTime.Now; 103.eventAggregator.GetEvent<AddActionEvent>().Publish(myAction); 104.} 105., null); 106.} 107.else 108.{ 109.this._myJobs.CancelEdit(); 110.} 111.this.MakeMeActive(this.regionManager, "MainRegion", "JobPostingsView"); 112.} 113.public void AddCompleteEventHandler(JobPosting job) 114.{ 115.if (job == null) 116.{ 117.// do nothing, new job add cancelled 118.} 119.else 120.{ 121.this.context.JobPostings.Add(job); 122.this.context.SubmitChanges((s) => 123.{ 124.ActionHistory myAction = new ActionHistory(); 125.myAction.PostingID = job.PostingID; 126.myAction.Description = String.Format("Job '{0}' has been added by {1}", job.JobTitle, "default user"); 127.myAction.TimeStamp = DateTime.Now; 128.eventAggregator.GetEvent<AddActionEvent>().Publish(myAction); 129.} 130., null); 131.} 132.this.MakeMeActive(this.regionManager, "MainRegion", "JobPostingsView"); 133.} 134.#endregion 135.public void GrabHistory(int postID) 136.{ 137.context.ActionHistories.Clear(); 138._selectionJobActionHistory = new QueryableCollectionView(context.ActionHistories); 139.context.Load(context.GetHistoryForJobQuery(postID)); 140.} Taking it from the top, we're injecting an Event Aggregator and Region Manager for use down the road, and we also have the public DelegateCommands (just like in the Menu module). We also grab a reference to our context, which we'll obviously need for data, then set up a few fields with public properties tied to them. We're also setting subscription tokens, which we have not yet seen, but I will get into them below. The AddNewRecord (50) and EditExistingRecord (54) methods should speak for themselves as far as functionality goes; the one thing of note is that we're sending events off to the Event Aggregator, which some module somewhere will take care of. Since these modules aren't entirely reliant on one another, the Jobs View doesn't care if anyone is listening, but it will publish AddJobEvent (52), NotifyUserEvent (58) and EditJobEvent (63) regardless. Don't mind the GrabHistory() method so much; it just grabs history items (visibly being created in the SubmitChanges callbacks) and adds them to the database. Every action will trigger a history event, so we'll know who modified what and when, just in case. ;) So where are we at? Well, if we click to Add a job, we publish an event; if we edit a job, we publish an event with the selected record (attained through the magic of binding). Where is this all going though? To the Viewmodel, of course!
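One piece this post doesn't show is the event classes themselves (AddJobEvent, EditJobEvent and friends). In Prism/CAL these are typically empty CompositePresentationEvent<TPayload> subclasses that exist only to fix the payload type. Judging from the Publish() calls in the viewmodel above, they would look something like this - an assumption, since the actual declarations live in the shared Infrastructure project we don't see here:

// Assumed declarations - payload types inferred from the Publish() calls above
public class AddJobEvent : CompositePresentationEvent<bool> { }
public class EditJobEvent : CompositePresentationEvent<JobPosting> { }
public class NotifyUserEvent : CompositePresentationEvent<string> { }
public class AddActionEvent : CompositePresentationEvent<ActionHistory> { }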
XAML for the AddEditJobView This is pretty straightforward except for one thing, noted below: 001.<Grid x:Name="LayoutRoot" 002.Background="White"> 003.<Grid x:Name="xEditGrid" 004.Margin="10" 005.validationHelper:ValidationScope.Errors="{Binding Errors}"> 006.<Grid.Background> 007.<LinearGradientBrush EndPoint="0.5,1" 008.StartPoint="0.5,0"> 009.<GradientStop Color="#FFC7C7C7" 010.Offset="0" /> 011.<GradientStop Color="#FFF6F3F3" 012.Offset="1" /> 013.</LinearGradientBrush> 014.</Grid.Background> 015.<Grid.RowDefinitions> 016.<RowDefinition Height="40" /> 017.<RowDefinition Height="40" /> 018.<RowDefinition Height="40" /> 019.<RowDefinition Height="100" /> 020.<RowDefinition Height="100" /> 021.<RowDefinition Height="100" /> 022.<RowDefinition Height="40" /> 023.<RowDefinition Height="40" /> 024.<RowDefinition Height="40" /> 025.</Grid.RowDefinitions> 026.<Grid.ColumnDefinitions> 027.<ColumnDefinition Width="150" /> 028.<ColumnDefinition Width="150" /> 029.<ColumnDefinition Width="300" /> 030.<ColumnDefinition Width="100" /> 031.</Grid.ColumnDefinitions> 032.<!-- Title --> 033.<TextBlock Margin="8" 034.Text="{Binding AddEditString}" 035.TextWrapping="Wrap" 036.Grid.Column="1" 037.Grid.ColumnSpan="2" 038.FontSize="16" /> 039.<!-- Data entry area--> 040. 041.<TextBlock Margin="8,0,0,0" 042.Style="{StaticResource LabelTxb}" 043.Grid.Row="1" 044.Text="Job Title" 045.VerticalAlignment="Center" /> 046.<TextBox x:Name="xJobTitleTB" 047.Margin="0,8" 048.Grid.Column="1" 049.Grid.Row="1" 050.Text="{Binding activeJob.JobTitle, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}" 051.Grid.ColumnSpan="2" /> 052.<TextBlock Margin="8,0,0,0" 053.Grid.Row="2" 054.Text="Location" 055.d:LayoutOverrides="Height" 056.VerticalAlignment="Center" /> 057.<TextBox x:Name="xLocationTB" 058.Margin="0,8" 059.Grid.Column="1" 060.Grid.Row="2" 061.Text="{Binding activeJob.Location, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}" 062.Grid.ColumnSpan="2" /> 063. 064.<TextBlock Margin="8,11,8,0" 065.Grid.Row="3" 066.Text="Description" 067.TextWrapping="Wrap" 068.VerticalAlignment="Top" /> 069. 070.<TextBox x:Name="xDescriptionTB" 071.Height="84" 072.TextWrapping="Wrap" 073.ScrollViewer.VerticalScrollBarVisibility="Auto" 074.Grid.Column="1" 075.Grid.Row="3" 076.Text="{Binding activeJob.Description, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}" 077.Grid.ColumnSpan="2" /> 078.<TextBlock Margin="8,11,8,0" 079.Grid.Row="4" 080.Text="Requirements" 081.TextWrapping="Wrap" 082.VerticalAlignment="Top" /> 083. 084.<TextBox x:Name="xRequirementsTB" 085.Height="84" 086.TextWrapping="Wrap" 087.ScrollViewer.VerticalScrollBarVisibility="Auto" 088.Grid.Column="1" 089.Grid.Row="4" 090.Text="{Binding activeJob.Requirements, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}" 091.Grid.ColumnSpan="2" /> 092.<TextBlock Margin="8,11,8,0" 093.Grid.Row="5" 094.Text="Qualifications" 095.TextWrapping="Wrap" 096.VerticalAlignment="Top" /> 097. 098.<TextBox x:Name="xQualificationsTB" 099.Height="84" 100.TextWrapping="Wrap" 101.ScrollViewer.VerticalScrollBarVisibility="Auto" 102.Grid.Column="1" 103.Grid.Row="5" 104.Text="{Binding activeJob.Qualifications, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}" 105.Grid.ColumnSpan="2" /> 106.<!-- Requirements Checkboxes--> 107. 
108.<CheckBox x:Name="xResumeRequiredCB" Margin="8,8,8,15" 109.Content="Resume Required" 110.Grid.Row="6" 111.Grid.ColumnSpan="2" 112.IsChecked="{Binding activeJob.NeedsResume, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"/> 113. 114.<CheckBox x:Name="xCoverletterRequiredCB" Margin="8,8,8,15" 115.Content="Cover Letter Required" 116.Grid.Column="2" 117.Grid.Row="6" 118.IsChecked="{Binding activeJob.NeedsCV, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"/> 119. 120.<CheckBox x:Name="xOverviewRequiredCB" Margin="8,8,8,15" 121.Content="Overview Required" 122.Grid.Row="7" 123.Grid.ColumnSpan="2" 124.IsChecked="{Binding activeJob.NeedsOverview, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"/> 125. 126.<CheckBox x:Name="xJobActiveCB" Margin="8,8,8,15" 127.Content="Job is Active" 128.Grid.Column="2" 129.Grid.Row="7" 130.IsChecked="{Binding activeJob.IsActive, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"/> 131. 132.<!-- Buttons --> 133. 134.<Button x:Name="xAddEditButton" Margin="8,8,0,10" 135.Content="{Binding AddEditButtonString}" 136.cal:Click.Command="{Binding AddEditCommand}" 137.Grid.Column="2" 138.Grid.Row="8" 139.HorizontalAlignment="Left" 140.Width="125" 141.telerik:StyleManager.Theme="Windows7" /> 142. 143.<Button x:Name="xCancelButton" HorizontalAlignment="Right" 144.Content="Cancel" 145.cal:Click.Command="{Binding CancelCommand}" 146.Margin="0,8,8,10" 147.Width="125" 148.Grid.Column="2" 149.Grid.Row="8" 150.telerik:StyleManager.Theme="Windows7" /> 151.</Grid> 152.</Grid> The 'validationHelper:ValidationScope' line may seem odd. This is a handy little trick for catching current and would-be validation errors when working in this whole setup. This all comes from an approach found on the Joy Of Code blog, although it looks like the story for this will be changing slightly with new advances in SL4/WCF RIA Services, so this section can definitely get an overhaul a little down the road. The code is the fun part of all this, so let us see what's happening under the hood.

Viewmodel for the AddEditJobView

We are going to see some of the same things happening here, so I'll skip over the repeat info and get right to the good stuff: 001.public class AddEditJobViewModel : ViewModelBase 002.{ 003.private readonly IEventAggregator eventAggregator; 004.private readonly IRegionManager regionManager; 005. 006.public RecruitingContext context; 007. 008.private JobPosting _activeJob; 009.public JobPosting activeJob 010.{ 011.get { return _activeJob; } 012.set 013.{ 014.if (_activeJob != value) 015.{ 016._activeJob = value; 017.NotifyChanged("activeJob"); 018.} 019.} 020.} 021. 022.public bool isNewJob; 023. 024.private string _addEditString; 025.public string AddEditString 026.{ 027.get { return _addEditString; } 028.set 029.{ 030.if (_addEditString != value) 031.{ 032._addEditString = value; 033.NotifyChanged("AddEditString"); 034.} 035.} 036.} 037. 038.private string _addEditButtonString; 039.public string AddEditButtonString 040.{ 041.get { return _addEditButtonString; } 042.set 043.{ 044.if (_addEditButtonString != value) 045.{ 046._addEditButtonString = value; 047.NotifyChanged("AddEditButtonString"); 048.} 049.} 050.} 051. 052.public SubscriptionToken addJobToken = new SubscriptionToken(); 053.public SubscriptionToken editJobToken = new SubscriptionToken(); 054. 055.public DelegateCommand<object> AddEditCommand { get; set; } 056.public DelegateCommand<object> CancelCommand { get; set; } 057.
058.private ObservableCollection<ValidationError> _errors = new ObservableCollection<ValidationError>(); 059.public ObservableCollection<ValidationError> Errors 060.{ 061.get { return _errors; } 062.} 063. 064.private ObservableCollection<ValidationResult> _valResults = new ObservableCollection<ValidationResult>(); 065.public ObservableCollection<ValidationResult> ValResults 066.{ 067.get { return this._valResults; } 068.} 069. 070.public AddEditJobViewModel(IEventAggregator eventAgg, IRegionManager regionmanager) 071.{ 072.// set Unity items 073.this.eventAggregator = eventAgg; 074.this.regionManager = regionmanager; 075. 076.context = new RecruitingContext(); 077. 078.AddEditCommand = new DelegateCommand<object>(this.AddEditJobCommand); 079.CancelCommand = new DelegateCommand<object>(this.CancelAddEditCommand); 080. 081.SetSubscriptions(); 082.} 083. 084.#region Subscription Declaration and Events 085. 086.public void SetSubscriptions() 087.{ 088.AddJobEvent addJob = this.eventAggregator.GetEvent<AddJobEvent>(); 089. 090.if (addJobToken != null) 091.addJob.Unsubscribe(addJobToken); 092. 093.addJobToken = addJob.Subscribe(this.AddJobEventHandler); 094. 095.EditJobEvent editJob = this.eventAggregator.GetEvent<EditJobEvent>(); 096. 097.if (editJobToken != null) 098.editJob.Unsubscribe(editJobToken); 099. 100.editJobToken = editJob.Subscribe(this.EditJobEventHandler); 101.} 102. 103.public void AddJobEventHandler(bool isNew) 104.{ 105.this.activeJob = null; 106.this.activeJob = new JobPosting(); 107.this.activeJob.IsActive = true; // We assume that we want a new job to go up immediately 108.this.isNewJob = true; 109.this.AddEditString = "Add New Job Posting"; 110.this.AddEditButtonString = "Add Job"; 111. 112.MakeMeActive(this.regionManager, "MainRegion", "AddEditJobView"); 113.} 114. 115.public void EditJobEventHandler(JobPosting editJob) 116.{ 117.this.activeJob = null; 118.this.activeJob = editJob; 119.this.isNewJob = false; 120.this.AddEditString = "Edit Job Posting"; 121.this.AddEditButtonString = "Edit Job"; 122. 123.MakeMeActive(this.regionManager, "MainRegion", "AddEditJobView"); 124.} 125. 126.#endregion 127. 128.#region DelegateCommands from View 129. 130.public void AddEditJobCommand(object obj) 131.{ 132.if (this.Errors.Count > 0) 133.{ 134.List<string> errorMessages = new List<string>(); 135. 136.foreach (var valR in this.Errors) 137.{ 138.errorMessages.Add(valR.Exception.Message); 139.} 140. 141.this.eventAggregator.GetEvent<DisplayValidationErrorsEvent>().Publish(errorMessages); 142. 143.} 144.else if (!Validator.TryValidateObject(this.activeJob, new ValidationContext(this.activeJob, null, null), _valResults, true)) 145.{ 146.List<string> errorMessages = new List<string>(); 147. 148.foreach (var valR in this._valResults) 149.{ 150.errorMessages.Add(valR.ErrorMessage); 151.} 152. 153.this._valResults.Clear(); 154. 155.this.eventAggregator.GetEvent<DisplayValidationErrorsEvent>().Publish(errorMessages); 156.} 157.else 158.{ 159.if (this.isNewJob) 160.{ 161.this.eventAggregator.GetEvent<AddJobCompleteEvent>().Publish(this.activeJob); 162.} 163.else 164.{ 165.this.eventAggregator.GetEvent<EditJobCompleteEvent>().Publish(true); 166.} 167.} 168.} 169. 170.public void CancelAddEditCommand(object obj) 171.{ 172.if (this.isNewJob) 173.{ 174.this.eventAggregator.GetEvent<AddJobCompleteEvent>().Publish(null); 175.} 176.else 177.{ 178.this.eventAggregator.GetEvent<EditJobCompleteEvent>().Publish(false); 179.} 180.} 181. 
182.#endregion 183.} 184.} We start seeing something new on line 103 - the AddJobEventHandler will create a new job and set that as the activeJob item on the ViewModel. When this is all set, the view calls that familiar MakeMeActive method to activate itself. I made a bit of a judgment call on making views self-activate like this, but I figured it works for one reason: as I create this application, views may not exist that I have in mind, so after a view receives its 'ping' from being subscribed to an event, it prepares whatever it needs to do and then goes active. This way if I don't have 'edit' hooked up, I can click as the day is long on the main view and won't get lost in an empty region. Total personal preference here. :) Everything else should again be pretty straightforward, although I do a bit of validation checking in the AddEditJobCommand, which can either fire off an event back to the main view/viewmodel if everything is a success, or send a list of errors to our notification module, which pops open a RadWindow with the alerts if any exist. As a bonus side note, here's what my WCF RIA Services metadata looks like for handling all of the validation:

private JobPostingMetadata() { }

[StringLength(2500, ErrorMessage = "Description should be more than one and less than 2500 characters.", MinimumLength = 1)]
[Required(ErrorMessage = "Description is required.")]
public string Description;

[Required(ErrorMessage = "Active Status is Required")]
public bool IsActive;

[StringLength(100, ErrorMessage = "Posting title must be more than 3 but less than 100 characters.", MinimumLength = 3)]
[Required(ErrorMessage = "Job Title is required.")]
public string JobTitle;

[Required]
public string Location;

public bool NeedsCV;
public bool NeedsOverview;
public bool NeedsResume;
public int PostingID;

[Required(ErrorMessage = "Qualifications are required.")]
[StringLength(2500, ErrorMessage = "Qualifications should be more than one and less than 2500 characters.", MinimumLength = 1)]
public string Qualifications;

[StringLength(2500, ErrorMessage = "Requirements should be more than one and less than 2500 characters.", MinimumLength = 1)]
[Required(ErrorMessage = "Requirements are required.")]
public string Requirements;

The RecruitCB Alternative

See all that XAML I pasted above? Those are now two pieces sitting in the JobsView.xaml file. The only real difference is that the xEditGrid now sits in the same place as xJobsGrid, with visibility swapping between the two for a quick switch. I also took out all the cal: and command: command references, replaced the Button commands with Click events, and replaced the Grid selection command with a SelectedItemChanged event. Also, at the bottom of the xEditGrid after the last button, I add a ValidationSummary (with Visibility=Collapsed) to catch any errors that pop up.
Simple as can be, and leads to this being the single code-behind file: 001.public partial class JobsView : UserControl 002.{ 003.public RecruitingContext context; 004.public JobPosting activeJob; 005.public bool isNew; 006.private ObservableCollection<ValidationResult> _valResults = new ObservableCollection<ValidationResult>(); 007.public ObservableCollection<ValidationResult> ValResults 008.{ 009.get { return this._valResults; } 010.} 011.public JobsView() 012.{ 013.InitializeComponent(); 014.this.Loaded += new RoutedEventHandler(JobsView_Loaded); 015.} 016.void JobsView_Loaded(object sender, RoutedEventArgs e) 017.{ 018.context = new RecruitingContext(); 019.xJobsGrid.ItemsSource = context.JobPostings; 020.context.Load(context.GetJobPostingsQuery()); 021.} 022.private void xAddRecordButton_Click(object sender, RoutedEventArgs e) 023.{ 024.activeJob = new JobPosting(); 025.isNew = true; 026.xAddEditTitle.Text = "Add a Job Posting"; 027.xAddEditButton.Content = "Add"; 028.xEditGrid.DataContext = activeJob; 029.HideJobsGrid(); 030.} 031.private void xEditRecordButton_Click(object sender, RoutedEventArgs e) 032.{ 033.activeJob = xJobsGrid.SelectedItem as JobPosting; 034.isNew = false; 035.xAddEditTitle.Text = "Edit a Job Posting"; 036.xAddEditButton.Content = "Edit"; 037.xEditGrid.DataContext = activeJob; 038.HideJobsGrid(); 039.} 040.private void xAddEditButton_Click(object sender, RoutedEventArgs e) 041.{ 042.if (!Validator.TryValidateObject(this.activeJob, new ValidationContext(this.activeJob, null, null), _valResults, true)) 043.{ 044.List<string> errorMessages = new List<string>(); 045.foreach (var valR in this._valResults) 046.{ 047.errorMessages.Add(valR.ErrorMessage); 048.} 049.this._valResults.Clear(); 050.ShowErrors(errorMessages); 051.} 052.else if (xSummary.Errors.Count > 0) 053.{ 054.List<string> errorMessages = new List<string>(); 055.foreach (var err in xSummary.Errors) 056.{ 057.errorMessages.Add(err.Message); 058.} 059.ShowErrors(errorMessages); 060.} 061.else 062.{ 063.if (this.isNew) 064.{ 065.context.JobPostings.Add(activeJob); 066.context.SubmitChanges((s) => 067.{ 068.ActionHistory thisAction = new ActionHistory(); 069.thisAction.PostingID = activeJob.PostingID; 070.thisAction.Description = String.Format("Job '{0}' has been added by {1}", activeJob.JobTitle, "default user"); 071.thisAction.TimeStamp = DateTime.Now; 072.context.ActionHistories.Add(thisAction); 073.context.SubmitChanges(); 074.}, null); 075.} 076.else 077.{ 078.context.SubmitChanges((s) => 079.{ 080.ActionHistory thisAction = new ActionHistory(); 081.thisAction.PostingID = activeJob.PostingID; 082.thisAction.Description = String.Format("Job '{0}' has been edited by {1}", activeJob.JobTitle, "default user"); 083.thisAction.TimeStamp = DateTime.Now; 084.context.ActionHistories.Add(thisAction); 085.context.SubmitChanges(); 086.}, null); 087.} 088.ShowJobsGrid(); 089.} 090.} 091.private void xCancelButton_Click(object sender, RoutedEventArgs e) 092.{ 093.ShowJobsGrid(); 094.} 095.private void ShowJobsGrid() 096.{ 097.xAddEditRecordButtonPanel.Visibility = Visibility.Visible; 098.xEditGrid.Visibility = Visibility.Collapsed; 099.xJobsGrid.Visibility = Visibility.Visible; 100.} 101.private void HideJobsGrid() 102.{ 103.xAddEditRecordButtonPanel.Visibility = Visibility.Collapsed; 104.xJobsGrid.Visibility = Visibility.Collapsed; 105.xEditGrid.Visibility = Visibility.Visible; 106.} 107.private void ShowErrors(List<string> errorList) 108.{ 109.string nm = "Errors received: \n"; 110.foreach (string anerror in
errorList) 111.nm += anerror + "\n"; 112.RadWindow.Alert(nm); 113.} 114.} The first 39 lines should be pretty familiar; we're not doing anything too unorthodox to get this up and running. Once we hit the xAddEditButton_Click on line 40, we're still doing pretty much the same things, except instead of checking the ValidationHelper errors, we both run a check on the current activeJob object and check the ValidationSummary errors list. Once that is done, we again use the callback of context.SubmitChanges (lines 68 and 78) to create an ActionHistory which we will use to track these items down the line. That's all? Essentially... yes. If you look back through this post, most of the code and adventures we have taken were just to get things working in the MVVM/Prism setup. Since I have the whole 'module' self-contained in a single JobsView+code-behind setup, I don't have to worry about things like sending events off into space for someone to pick up, communicating through an Infrastructure project, or even re-inventing events to be used with attached behaviors. Everything just kinda works, and again with much less code. Here's a picture of the MVVM and code-behind versions of the Jobs and AddEdit views; since the functionality is the same in both apps, you still cannot tell them apart (strike two): Looking ahead, the Applicants module is effectively the same thing as the Jobs module, so most of the code is being cut-and-pasted back and forth with minor tweaks here and there. So that one is being taken care of by me behind the scenes. Next time, we get into a new world of fun - the interview scheduling module, which will pull from available jobs and applicants for each interview being scheduled, tying everything together with RadScheduler to the rescue.

    Read the article

  • iPhone SDK vs Windows Phone 7 Series SDK Challenge, Part 1: Hello World!

In this series, I will be taking sample applications from the iPhone SDK and implementing them on Windows Phone 7 Series. My goal is to do as much of an apples-to-apples comparison as I can. This series will be written to not only compare and contrast how easy or difficult it is to complete tasks on either platform, how many lines of code it takes, etc., but I'd also like it to be a way for iPhone developers to get started on Windows Phone 7 Series development, or for developers in general to learn the platform. Here's my methodology:
1. Run the iPhone SDK app in the iPhone Simulator to get a feel for what it does and how it works, without looking at the implementation
2. Implement the equivalent functionality on Windows Phone 7 Series using Silverlight
3. Compare the two implementations based on complexity, functionality, lines of code, number of files, etc.
4. Add some functionality to the Windows Phone 7 Series app that shows off a way to make the scenario more interesting, leverages an aspect of the platform, or uses a better design pattern to implement the functionality
You can download Microsoft Visual Studio 2010 Express for Windows Phone CTP here, and the Expression Blend 4 Beta here.

Hello World!

Of course no first post would be allowed if it didn't focus on the hello world scenario. The iPhone SDK follows that tradition with the Your First iPhone Application walkthrough. I will say that the developer documentation for iPhone is pretty good. There are plenty of walkthroughs, they break things down into nicely sized steps, and they do a good job of bringing the user along. As expected, this application is quite simple. It comprises a text box, a label, and a button. When you push the button, the label changes to "Hello" plus the word you typed into the text box. Makes perfect sense for a starter application. There's not much to this, but it covers a few basic elements:
- Laying out basic UI
- Handling user input
- Hooking up events
- Formatting text
So, let's get started building a similar app for Windows Phone 7 Series!

Implementing the UI:

UI in Silverlight (and therefore Windows Phone 7) is defined in XAML, which is a declarative XML language also used by WPF on the desktop. For anyone that's familiar with similar types of markup, it's relatively straightforward to learn, but it has a lot of power in it once you get it figured out. We'll talk more about that. This UI is very simple. When I look at this, I note a couple of things: elements are arranged vertically, and they are all centered. So, let's create our application and then start with the UI. Once you have the VS 2010 Express for Windows Phone tool running, create a new Windows Phone Project, and call it Hello World: Once created, you'll see the designer on one side and your XAML on the other: Now, we can create our UI in one of three ways:
1. Use the designer in Visual Studio to drag and drop the components
2. Use the designer in Expression Blend 4 to drag and drop the components
3. Enter the XAML by hand in either of the above
We'll start with (1), then kind of move to (3) just for instructional value. To develop this UI in the designer: First, delete all of the markup inside of the Grid element (LayoutRoot).
You should be left with just this XAML for your MainPage.xaml (I shortened all the xmlns declarations below for brevity):

1: <phoneNavigation:PhoneApplicationPage
2:     x:Class="HelloWorld.MainPage"
3:     xmlns="...[snip]"
4:     FontFamily="{StaticResource PhoneFontFamilyNormal}"
5:     FontSize="{StaticResource PhoneFontSizeNormal}"
6:     Foreground="{StaticResource PhoneForegroundBrush}">
7:
8:     <Grid x:Name="LayoutRoot" Background="{StaticResource PhoneBackgroundBrush}">
9:
10:    </Grid>
11:
12: </phoneNavigation:PhoneApplicationPage>

We'll be adding XAML at line 9, so that's the important part. Now: click on the center area of the phone surface, open the Toolbox and double-click StackPanel, then double-click TextBox, TextBlock, and Button. That will create the necessary UI elements, but they won't be arranged quite right. We'll fix that in a second.
Here's the XAML that we end up with:

1: <StackPanel Height="100" HorizontalAlignment="Left" Margin="10,10,0,0" Name="stackPanel1" VerticalAlignment="Top" Width="200">
2:     <TextBox Height="32" Name="textBox1" Text="TextBox" Width="100" />
3:     <TextBlock Height="23" Name="textBlock1" Text="TextBlock" />
4:     <Button Content="Button" Height="70" Name="button1" Width="160" />
5: </StackPanel>

The designer does its best at guessing what we want, but in this case we want things to be a bit simpler, so we'll just clean it up a bit. We want the items to be centered, and we want them to have a little bit of a margin on either side, so here's what we end up with. I've also made it match the values and style from the iPhone app:

1: <StackPanel Margin="10">
2:     <TextBox Name="textBox1" HorizontalAlignment="Stretch" Text="You" TextAlignment="Center"/>
3:     <TextBlock Name="textBlock1" HorizontalAlignment="Center" Margin="0,100,0,0" Text="Hello You!" />
4:     <Button Name="button1" HorizontalAlignment="Center" Margin="0,150,0,0" Content="Hello"/>
5: </StackPanel>

Now let's take a look at what we've done there.

Line 1: We removed all of the formatting from the StackPanel, except for Margin, as that's all we need. Since our parent element is a Grid, by default the StackPanel will be sized to fit in that space. The Margin says that we want to reserve 10 pixels on each side of the StackPanel.

Line 2: We've set the HorizontalAlignment of the TextBox to Stretch, which says that it should fill its parent's size horizontally. We want to do this so the TextBox is always full-width. We also set TextAlignment to Center, to center the text.

Line 3: In contrast to the TextBox above, we don't care how wide the TextBlock is, just so long as it is big enough for its text. That'll happen automatically, so we just set its HorizontalAlignment to Center. We also set a Margin above the TextBlock of 100 pixels to bump it down a bit, per the iPhone UI.

Line 4: We do the same things here as in Line 3.

Here's how the UI looks in the designer: Believe it or not, we're almost done!

Implementing the App Logic

Now, we want the TextBlock to change its text when the Button is clicked. In the designer, double-click the Button to be taken to the event handler for the Button's Click event. In that event handler, we take the Text property from the TextBox, format it into a string, then set it into the TextBlock. That's it!
1: private void button1_Click(object sender, RoutedEventArgs e)
2: {
3:     string name = textBox1.Text;
4:
5:     // if there isn't a name set, just use "World"
6:     if (String.IsNullOrEmpty(name))
7:     {
8:         name = "World";
9:     }
10:
11:     // set the value into the TextBlock
12:     textBlock1.Text = String.Format("Hello {0}!", name);
13:
14: }

We use the String.Format() method to handle the formatting for us. Now all that's left is to test the app in the Windows Phone Emulator and verify it does what we think it does! And it does!

Comparing against the iPhone

Looking at the iPhone example, there are basically three things that you have to touch as the developer: (1) the UI in the Nib file, (2) the app delegate, and (3) the view controller. Counting lines is a bit tricky here, but to try to keep this even, I'm going to only count lines of code that I could not have (or would not have) generated with the tooling. Meaning, I'm not counting XAML, and I'm not counting operations that happen in the Nib file with the XCode designer tool. So in the case of the above, even though I modified the XAML, I could have done all of those operations using the visual designer tool. And normally I would have, but the XAML is more instructive (and fewer steps!). I'm interested in things that I, as the developer, have to figure out in code. I'm also not counting lines that just have a curly brace on them, or lines that are generated for me (e.g. method names that are generated for me when I make a connection, etc.). So, by that count, here's what I get from the code listing for the iPhone app found here:

HelloWorldAppDelegate.h: 6
HelloWorldAppDelegate.m: 12
MyViewController.h: 8
MyViewController.m: 18

Which gives me a grand total of about 44 lines of code on iPhone. I really do recommend looking at the iPhone code for a comparison to the above. Now, for the Windows Phone 7 Series application, the only code I typed was in the event handler above:

Main.Xaml.cs: 4

So, a total of 4 lines of code on Windows Phone 7. And more importantly, the process is just A LOT simpler. For example, I was surprised that the User Interface Designer in XCode doesn't automatically create instance variables for me and wire them up to the corresponding elements. I assumed I wouldn't have to write this code myself (and risk getting it wrong!). I don't need to worry about view controllers or anything. I just write my code. This blog post up to this point has covered almost every aspect of this app's development in a few pages. The iPhone tutorial has 5 top-level steps with 2-3 sub-sections each. Now, it's worth pointing out that the iPhone development model uses the Model View Controller (MVC) pattern, which is a very flexible and powerful pattern that enforces proper separation of concerns. But it's fairly complex and difficult to understand when you first walk up to it.
Here at Microsoft we've dabbled in MVC a bit, with frameworks like MFC on Visual C++ and with the ASP.NET MVC framework now. Both are very powerful frameworks. But one of the reasons we've stayed away from MVC with client UI frameworks is that it's difficult to tool. We haven't seen the type of value that beats "double click, write code!" for the broad set of scenarios. Another thing to think about is how many of those lines of code were focused on my app's functionality, or, the converse: how many lines of code were boilerplate plumbing? In both examples, the actual number of functional code lines is similar. I count most of them in MyViewController.m, in the changeGreeting method. It's about 7 lines of code that do the work of taking the value from the TextBox and putting it into the label, versus 4 on the Windows Phone 7 side. But, unfortunately, on iPhone I still have to write those other 37 lines of code, just to get there. 10% of the code, 1 file instead of 4; it's just much simpler.

Making Some Tweaks

It turns out I can actually do this application with ZERO lines of code, if I'm willing to change the spec a bit. The data binding functionality in Silverlight is incredibly powerful, and what I can do is databind the TextBox's value directly to the TextBlock. Take some time looking at the XAML below. You'll see that I have added another nested StackPanel and two more TextBlocks. Why? Because that's how I build that string, and the nested StackPanel will lay things out horizontally for me, as specified by the Orientation property.

1: <StackPanel Margin="10">
2:     <TextBox Name="textBox1" HorizontalAlignment="Stretch" Text="You" TextAlignment="Center"/>
3:     <StackPanel Orientation="Horizontal" HorizontalAlignment="Center" Margin="0,100,0,0" >
4:         <TextBlock Text="Hello " />
5:         <TextBlock Name="textBlock1" Text="{Binding ElementName=textBox1, Path=Text}" />
6:         <TextBlock Text="!" />
7:     </StackPanel>
8:     <Button Name="button1" HorizontalAlignment="Center" Margin="0,150,0,0" Content="Hello" Click="button1_Click" />
9: </StackPanel>

Now, the real action is there in the TextBlock.Text property:

Text="{Binding ElementName=textBox1, Path=Text}"

That does all the heavy lifting.
It sets up a data binding between the TextBox.Text property on textBox1 and the TextBlock.Text property on textBlock1. As I change the text of the TextBox, the label updates automatically. In fact, I don't even need the button any more, so I could get rid of it altogether. And no button means no event handler. No event handler means no C# code at all.
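For completeness, a sketch of what that zero-code version could look like with the button removed (assuming the Click handler is also deleted from the code-behind):

<StackPanel Margin="10">
    <TextBox Name="textBox1" HorizontalAlignment="Stretch" Text="You" TextAlignment="Center"/>
    <StackPanel Orientation="Horizontal" HorizontalAlignment="Center" Margin="0,100,0,0">
        <TextBlock Text="Hello " />
        <TextBlock Text="{Binding ElementName=textBox1, Path=Text}" />
        <TextBlock Text="!" />
    </StackPanel>
</StackPanel>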

    Read the article

  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation was complete, I would play the animation and delete the Azure deployment. The standing joke with the audience was that it was a “$2 demo”, as the compute charges for running the 16 instances for an hour were $1.92; factor in the bandwidth charges and it’s a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, using Windows Azure can work out very cost effective. The “$2 demo” was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one-hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a “$2 demo” to a “$30 demo”. The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour. This article will run through how I achieved this.

Ray Tracing

Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90’s with companies like Pixar creating feature length computer animations, and also the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray-traced image, the ray of light that would pass from the viewpoint must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate whether the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image.

Pin-Board Toys

Having very little artistic talent and a basic understanding of maths, I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I’ve always liked the pin-board desktop toys that became popular in the 80’s, and when I was working as a 3D animator back in the 90’s I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months.

PolyRay

Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray-tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours of processing on a 486, produces a high quality ray-traced image. The following is an example of a basic PolyRay scene file.
background Midnight_Blue   static define matte surface { ambient 0.1 diffuse 0.7 } define matte_white texture { matte { color white } } define matte_black texture { matte { color dark_slate_gray } } define position_cylindrical 3 define lookup_sawtooth 1 define light_wood <0.6, 0.24, 0.1> define median_wood <0.3, 0.12, 0.03> define dark_wood <0.05, 0.01, 0.005>     define wooden texture { noise surface { ambient 0.2  diffuse 0.7  specular white, 0.5 microfacet Reitz 10 position_fn position_cylindrical position_scale 1  lookup_fn lookup_sawtooth octaves 1 turbulence 1 color_map( [0.0, 0.2, light_wood, light_wood] [0.2, 0.3, light_wood, median_wood] [0.3, 0.4, median_wood, light_wood] [0.4, 0.7, light_wood, light_wood] [0.7, 0.8, light_wood, median_wood] [0.8, 0.9, median_wood, light_wood] [0.9, 1.0, light_wood, dark_wood]) } } define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 }} define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7  } define steely_blue texture { shiny { color black } } define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }   viewpoint {     from <4.000, -1.000, 1.000> at <0.000, 0.000, 0.000> up <0, 1, 0> angle 60     resolution 640, 480 aspect 1.6 image_format 0 }       light <-10, 30, 20> light <-10, 30, -20>   object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }   object { sphere <0.000, 0.000, 0.000>, 1.00 chrome } object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }   After setting up the background and defining colors and textures, the viewpoint is specified. The “camera” is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct, this will be fixed later. Modeling the Pin Board The frame of the pin-board is made up of three boxes, and six cylinders, the front box is modeled using a clear, slightly reflective solid, with the same refractive index of glass. The other shapes are modeled as metal. object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass } object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue } object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue } object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }   In order to create the matrix of pins that make up the pin board I used a basic console application with a few nested loops to create two intersecting matrixes of pins, which models the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code. 
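The generator itself isn't included in the post, but a minimal C# sketch of the nested-loop approach described above might look like this (the grid dimensions, spacing, and pin radii are assumptions for illustration; "chrome" is the texture defined in the scene file):

using System.IO;
using System.Text;

class PinBoardGenerator
{
    static void Main()
    {
        var scene = new StringBuilder();
        const double spacing = 0.1;  // assumed pin spacing
        const int rows = 76;         // assumed grid dimensions
        const int cols = 76;

        for (int row = 0; row < rows; row++)
        {
            for (int col = 0; col < cols; col++)
            {
                // First matrix of pins.
                AddPin(scene, col * spacing, row * spacing, 0.0);
                // Second, intersecting matrix, offset by half the spacing.
                AddPin(scene, (col + 0.5) * spacing, (row + 0.5) * spacing, 0.0);
            }
        }

        File.WriteAllText("pins.pi", scene.ToString());
    }

    // Emit a sphere (head) and a cylinder (shaft) for one pin; the z value is the
    // slide-out distance that the depth images control later in the post.
    static void AddPin(StringBuilder scene, double x, double y, double z)
    {
        scene.AppendFormat("object {{ sphere <{0:F3}, {1:F3}, {2:F3}>, 0.05 chrome }}\n", x, y, z);
        scene.AppendFormat("object {{ cylinder <{0:F3}, {1:F3}, {2:F3}>, <{0:F3}, {1:F3}, {3:F3}>, 0.02 chrome }}\n", x, y, z, z - 0.8);
    }
}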
For the complete animation, 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used. The position of the pin can be set based on the color of the pixel at the appropriate position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated. The challenge now was to make a cool animation. The Azure logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not be the same as the depth of the objects from the camera. In order to simulate the pin board accurately, a series of frames from a depth camera could be used.

Windows Kinect

The Kinect controllers for the X-Box 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer's perspective. Both the X-Box 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the X-Box Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode to allow depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions.

Creating a Depth Field Animation

The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range, the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below. The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation; the Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below. Once a series of 2,000 depth images has been captured, the task of creating the animation can begin.

Rendering a Test Frame

In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board.
The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used, the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below. The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour.

Windows Azure Worker Roles

The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server.

Number of Servers    Cost
1                    $500
16                   $8,000
256                  $128,000

As well as the cost of the servers, there would be additional costs for networking, racks, etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling. The Windows Azure compute services provide worker roles, which are ideal for performing processor-intensive compute tasks. With the scalability available in Windows Azure, a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below.

Number of Worker Roles    Render Time    Cost
1                         256 hours      $30.72
16                        16 hours       $30.72
256                       1 hour         $30.72

Using worker roles in Windows Azure gives the same cost for the 256-hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles.

Creating a Render Farm in Windows Azure

The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components:

On-Premise
- Windows Kinect – used with the Kinect Explorer to create a stream of depth images.
- Animation Creator – this application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages added to the jobs queue.
- Process Monitor – this application queries the role instance lifecycle table and displays statistics about the render farm environment and render process.
- Image Downloader – this application polls the image queue and downloads the rendered animation files once they are complete.

Windows Azure
- Azure Storage – queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.

The architecture of each worker role is shown below. The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image.
The service definition for the worker role, with the local storage configuration highlighted, is shown below.

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="CloudRay" >
  <WorkerRole name="CloudRayWorkerRole" vmsize="Small">
    <Imports>
    </Imports>
    <ConfigurationSettings>
      <Setting name="DataConnectionString" />
    </ConfigurationSettings>
    <LocalResources>
      <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />
    </LocalResources>
  </WorkerRole>
</ServiceDefinition>

The two executable programs, PolyRay.exe and DTA.exe, are included in the Azure project with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90’s, it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90’s. Each worker role will use the following process to render the animation frames.

1. The worker process polls the job queue; if a job is available, the scene description file is downloaded from blob storage to local storage.
2. PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file.
3. DTA.exe is started in a process with the appropriate command line arguments to convert the TGA file to a JPG file.
4. The JPG file is uploaded from local storage to the images blob container.
5. A message is placed on the images queue to indicate a new image is available for download.
6. The job message is deleted from the job queue.
7. The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used.

The code for this is shown below.

public override void Run()
{
    // Set environment variables
    string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);
    string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);

    LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");
    string localStorageRootPath = rayStorage.RootPath;

    JobQueue jobQueue = new JobQueue("renderjobs");
    JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");
    CloudRayBlob sceneBlob = new CloudRayBlob("scenes");
    CloudRayBlob imageBlob = new CloudRayBlob("images");
    RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();

    Frames = 0;

    while (true)
    {
        // Get the render job from the queue
        CloudQueueMessage jobMsg = jobQueue.Get();

        if (jobMsg != null)
        {
            // Get the file details
            string sceneFile = jobMsg.AsString;
            string tgaFile = sceneFile.Replace(".pi", ".tga");
            string jpgFile = sceneFile.Replace(".pi", ".jpg");

            string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);
            string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);
            string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);

            // Copy the scene file to local storage
            sceneBlob.DownloadFile(sceneFilePath);

            // Run the ray tracer.
            string polyrayArguments =
                string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);
            Process polyRayProcess = new Process();
            polyRayProcess.StartInfo.FileName =
                Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);
            polyRayProcess.StartInfo.Arguments = polyrayArguments;
            polyRayProcess.Start();
            polyRayProcess.WaitForExit();

            // Convert the image
            string dtaArguments =
                string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName(jpgFilePath));
            Process dtaProcess = new Process();
            dtaProcess.StartInfo.FileName =
                Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);
            dtaProcess.StartInfo.Arguments = dtaArguments;
            dtaProcess.Start();
            dtaProcess.WaitForExit();

            // Upload the image to blob storage
            imageBlob.UploadFile(jpgFilePath);

            // Add a download job.
            downloadQueue.Add(jpgFile);

            // Delete the render job message
            jobQueue.Delete(jobMsg);

            Frames++;
        }
        else
        {
            Thread.Sleep(1000);
        }

        // Log the worker role activity.
        roleLifecycleDataSource.Alive("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);
    }
}

Monitoring Worker Role Instance Lifecycle

In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation, data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.

public class RoleLifecycle : TableServiceEntity
{
    public string ServerName { get; set; }
    public string Status { get; set; }
    public DateTime StartTime { get; set; }
    public DateTime EndTime { get; set; }
    public long SecondsRunning { get; set; }
    public DateTime LastActiveTime { get; set; }
    public int Frames { get; set; }
    public string Comment { get; set; }

    public RoleLifecycle()
    {
    }

    public RoleLifecycle(string roleName)
    {
        PartitionKey = roleName;
        RowKey = Utils.GetAscendingRowKey();
        Status = "Started";
        StartTime = DateTime.UtcNow;
        LastActiveTime = StartTime;
        EndTime = StartTime;
        SecondsRunning = 0;
        Frames = 0;
    }
}

A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame to record the total number of frames rendered and the total processing time. These statistics are used by the monitoring application to determine the effectiveness of use of resources in the render farm.

Rendering the Animation

The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows the application to be tested in the cloud environment, and the performance of the application determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below.
<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
  <Role name="CloudRayWorkerRole">
    <Instances count="16" />
    <ConfigurationSettings>
      <Setting name="DataConnectionString"
        value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

About six minutes after deploying the application, the first worker roles became active and started to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames. Five minutes after the first worker role became active, the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation. With 16 worker roles up and running, it can be seen that one hour and 45 minutes of CPU time has been used to render 32 frames, with a render time of just under 10 minutes. At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour, more processing power will be required. Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
  <Role name="CloudRayWorkerRole">
    <Instances count="256" />
    <ConfigurationSettings>
      <Setting name="DataConnectionString"
        value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

Six minutes after the new configuration was applied, 75 new worker roles had activated and were processing their first frames. Five minutes later the full configuration of 256 worker roles was up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds. We are now halfway through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes. The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes, with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute.
The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in distributing the load across the 256 worker role instances. The 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.

Completed Animation

I’ve uploaded the completed animation to YouTube; a low resolution preview is shown below. Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles. The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc

Effective Use of Resources

According to the CloudRay monitor statistics, the animation took 6 days, 7 hours and 22 minutes of CPU time to render; this works out at 152 hours of compute time, rounded up to the nearest hour. As the worker role instances are billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles, and utilize the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application. The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively.

Grid Computing Scenarios

Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective.
- Windows Azure can provide massive compute power, on demand, in a matter of minutes.
- The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution.
- Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget.
- No charges for inbound data transfer make the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.)

Tips for Using Windows Azure for Grid Computing Scenarios

I found the implementation of a render farm using Windows Azure a fairly simple scenario to implement. I was impressed by the ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes; in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure.
- Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances.
- Windows Azure accounts are typically limited to 20 cores. If you need to use more than this, a call to support and a credit card check will be required.
- Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started.
- Monitor the utilization of the resources you are provisioning; ensure that you are not paying for worker roles that are idle.
- If you are deploying third party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective.
- Third party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a startup task and a possible re-boot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles.
- Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal, it could get very expensive!

    Read the article

  • What is consuming memory?

    - by Milan Babuškov
    Feel free to change the question title. I have a VPS with standard LAMP stack and a busy website. Operating system is CentOS 5.5. Virtualization is done with VMWare. My server gets real slow about every 6 hours. Logging into it I see that 1.6GB of RAM is consumed. However, listing the active processes does not explain the problem. Can anyone make any sense of this? "free" shows this: total used free shared buffers cached Mem: 2059456 2049280 10176 0 14780 380968 -/+ buffers/cache: 1653532 405924 Swap: 2096472 96 2096376 While this is output of "ps": [root@vmi29 /]# ps aux USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.0 0.0 10348 688 ? Rs Jun05 0:01 init [3] root 2 0.0 0.0 0 0 ? S< Jun05 0:00 [migration/0] root 3 0.0 0.0 0 0 ? SN Jun05 0:00 [ksoftirqd/0] root 4 0.0 0.0 0 0 ? S< Jun05 0:00 [migration/1] root 5 0.0 0.0 0 0 ? SN Jun05 0:00 [ksoftirqd/1] root 6 0.0 0.0 0 0 ? S< Jun05 0:00 [migration/2] root 7 0.0 0.0 0 0 ? SN Jun05 0:00 [ksoftirqd/2] root 8 0.0 0.0 0 0 ? S< Jun05 0:00 [migration/3] root 9 0.0 0.0 0 0 ? SN Jun05 0:00 [ksoftirqd/3] root 10 0.0 0.0 0 0 ? S< Jun05 0:06 [events/0] root 11 0.0 0.0 0 0 ? S< Jun05 0:00 [events/1] root 12 0.0 0.0 0 0 ? S< Jun05 0:00 [events/2] root 13 0.0 0.0 0 0 ? S< Jun05 0:00 [events/3] root 14 0.0 0.0 0 0 ? S< Jun05 0:00 [khelper] root 31 0.0 0.0 0 0 ? S< Jun05 0:00 [kthread] root 38 0.0 0.0 0 0 ? S< Jun05 0:00 [kblockd/0] root 39 0.0 0.0 0 0 ? S< Jun05 0:00 [kblockd/1] root 40 0.0 0.0 0 0 ? S< Jun05 0:00 [kblockd/2] root 41 0.0 0.0 0 0 ? S< Jun05 0:00 [kblockd/3] root 42 0.0 0.0 0 0 ? S< Jun05 0:00 [kacpid] root 204 0.0 0.0 0 0 ? S< Jun05 0:00 [cqueue/0] root 205 0.0 0.0 0 0 ? S< Jun05 0:00 [cqueue/1] root 206 0.0 0.0 0 0 ? S< Jun05 0:00 [cqueue/2] root 207 0.0 0.0 0 0 ? S< Jun05 0:00 [cqueue/3] root 210 0.0 0.0 0 0 ? S< Jun05 0:00 [khubd] root 212 0.0 0.0 0 0 ? S< Jun05 0:00 [kseriod] root 302 0.0 0.0 0 0 ? S Jun05 0:00 [khungtaskd] root 303 0.0 0.0 0 0 ? S Jun05 0:00 [pdflush] root 304 0.0 0.0 0 0 ? S Jun05 0:01 [pdflush] root 305 0.0 0.0 0 0 ? S< Jun05 0:05 [kswapd0] root 306 0.0 0.0 0 0 ? S< Jun05 0:00 [aio/0] root 307 0.0 0.0 0 0 ? S< Jun05 0:00 [aio/1] root 308 0.0 0.0 0 0 ? S< Jun05 0:00 [aio/2] root 309 0.0 0.0 0 0 ? S< Jun05 0:00 [aio/3] root 515 0.0 0.0 0 0 ? S< Jun05 0:00 [kpsmoused] root 582 0.0 0.0 0 0 ? S< Jun05 0:00 [mpt_poll_0] root 583 0.0 0.0 0 0 ? S< Jun05 0:00 [mpt/0] root 584 0.0 0.0 0 0 ? S< Jun05 0:00 [scsi_eh_0] root 590 0.0 0.0 0 0 ? S< Jun05 0:00 [ata/0] root 591 0.0 0.0 0 0 ? S< Jun05 0:00 [ata/1] root 592 0.0 0.0 0 0 ? S< Jun05 0:00 [ata/2] root 593 0.0 0.0 0 0 ? S< Jun05 0:00 [ata/3] root 594 0.0 0.0 0 0 ? S< Jun05 0:00 [ata_aux] root 610 0.0 0.0 0 0 ? S< Jun05 0:00 [kstriped] root 631 0.0 0.0 0 0 ? S< Jun05 0:05 [kjournald] root 656 0.0 0.0 0 0 ? S< Jun05 0:00 [kauditd] root 689 0.0 0.0 13364 928 ? S<s Jun05 0:00 /sbin/udevd -d root 2123 0.0 0.0 0 0 ? S< Jun05 0:00 [kmpathd/0] root 2124 0.0 0.0 0 0 ? S< Jun05 0:00 [kmpathd/1] root 2126 0.0 0.0 0 0 ? S< Jun05 0:00 [kmpathd/2] root 2127 0.0 0.0 0 0 ? S< Jun05 0:00 [kmpathd/3] root 2128 0.0 0.0 0 0 ? S< Jun05 0:00 [kmpath_handlerd] root 2203 0.0 0.0 0 0 ? S< Jun05 0:00 [kjournald] root 2613 0.0 0.0 5908 648 ? Ss Jun05 0:00 syslogd -m 0 root 2617 0.0 0.0 3804 424 ? Ss Jun05 0:00 klogd -x root 2707 0.0 0.0 10760 372 ? Ss Jun05 0:02 irqbalance apache 2910 0.5 0.6 213964 12912 ? S 00:22 0:07 /usr/sbin/httpd dbus 3011 0.0 0.0 21256 904 ? Ss Jun05 0:00 dbus-daemon --system root 3025 0.0 0.0 3800 576 ? 
Ss Jun05 0:00 /usr/sbin/acpid 68 3038 0.0 0.2 31152 4336 ? Ss Jun05 0:01 hald root 3039 0.0 0.0 21692 1176 ? S Jun05 0:00 hald-runner 68 3046 0.0 0.0 12324 856 ? S Jun05 0:00 hald-addon-acpi: listening on acpid socket /var/run/acpid.s 68 3052 0.0 0.0 12324 856 ? S Jun05 0:00 hald-addon-keyboard: listening on /dev/input/event0 root 3105 0.0 0.0 62624 1212 ? Ss Jun05 0:00 /usr/sbin/sshd root 3264 0.0 0.0 74820 1156 ? Ss Jun05 0:00 crond root 3292 0.0 0.0 18416 472 ? S Jun05 0:00 /usr/sbin/smartd -q never root 3300 0.0 0.0 3792 480 tty2 Ss+ Jun05 0:00 /sbin/mingetty tty2 root 3301 0.0 0.0 3792 480 tty3 Ss+ Jun05 0:00 /sbin/mingetty tty3 root 3302 0.0 0.0 3792 484 tty4 Ss+ Jun05 0:00 /sbin/mingetty tty4 root 3304 0.0 0.0 3792 480 tty5 Ss+ Jun05 0:00 /sbin/mingetty tty5 root 3306 0.0 0.0 3792 480 tty6 Ss+ Jun05 0:00 /sbin/mingetty tty6 apache 5158 0.4 0.5 211896 11848 ? S 00:28 0:04 /usr/sbin/httpd apache 5519 0.4 0.5 211896 11992 ? S 00:29 0:03 /usr/sbin/httpd root 5649 0.0 0.0 63848 1184 pts/0 S Jun05 0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --soc mysql 5696 2.1 1.9 411060 40392 pts/0 Rl Jun05 2:01 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql apache 5943 0.4 0.5 211896 12000 ? S 00:30 0:03 /usr/sbin/httpd apache 5976 0.6 0.5 211896 11792 ? S 00:30 0:04 /usr/sbin/httpd apache 6073 0.4 0.5 211896 11208 ? S 00:31 0:03 /usr/sbin/httpd apache 6122 0.4 0.5 211896 11848 ? S 00:31 0:03 /usr/sbin/httpd apache 6128 0.3 0.5 211896 11940 ? S 00:31 0:02 /usr/sbin/httpd apache 6159 0.5 0.5 211896 11872 ? S 00:31 0:04 /usr/sbin/httpd apache 6636 0.4 0.6 213960 13444 ? S 00:32 0:02 /usr/sbin/httpd apache 6787 0.3 0.5 211884 11308 ? S 00:33 0:02 /usr/sbin/httpd apache 6796 0.4 0.5 211884 12024 ? S 00:33 0:02 /usr/sbin/httpd apache 6801 0.3 0.5 211896 11920 ? S 00:33 0:01 /usr/sbin/httpd apache 6804 0.4 0.5 211884 11848 ? S 00:33 0:02 /usr/sbin/httpd apache 6825 0.4 0.5 211896 11972 ? S 00:33 0:02 /usr/sbin/httpd apache 6866 0.3 0.5 210860 11044 ? S 00:33 0:01 /usr/sbin/httpd apache 6870 0.2 0.5 211896 11108 ? S 00:33 0:01 /usr/sbin/httpd apache 6872 0.3 0.5 211896 11900 ? S 00:33 0:01 /usr/sbin/httpd apache 6993 0.3 0.5 211896 11836 ? S 00:33 0:02 /usr/sbin/httpd apache 6994 0.3 0.5 211896 11792 ? S 00:33 0:01 /usr/sbin/httpd apache 7136 0.2 0.5 211896 11432 ? S 00:34 0:01 /usr/sbin/httpd apache 7143 0.2 0.5 210860 11052 ? S 00:34 0:01 /usr/sbin/httpd apache 7145 0.2 0.5 211896 11136 ? S 00:34 0:01 /usr/sbin/httpd apache 7266 0.2 0.6 213952 12748 ? S 00:34 0:01 /usr/sbin/httpd apache 7299 0.2 0.5 211884 11276 ? S 00:34 0:01 /usr/sbin/httpd apache 7311 0.2 0.5 211884 11300 ? S 00:34 0:01 /usr/sbin/httpd apache 7313 0.3 0.5 211884 11876 ? S 00:34 0:01 /usr/sbin/httpd apache 7345 0.2 0.5 210872 11100 ? S 00:34 0:01 /usr/sbin/httpd apache 7349 0.2 0.5 210860 11008 ? S 00:34 0:01 /usr/sbin/httpd apache 7350 0.2 0.5 211896 11832 ? S 00:34 0:01 /usr/sbin/httpd apache 7351 0.1 0.5 211884 11072 ? S 00:34 0:00 /usr/sbin/httpd apache 7352 0.2 0.5 210872 11096 ? S 00:34 0:01 /usr/sbin/httpd apache 7449 0.1 0.5 210860 11020 ? S 00:35 0:00 /usr/sbin/httpd root 7490 0.3 0.0 0 0 ? S Jun05 3:11 [vmmemctl] root 7597 0.0 0.0 72656 1260 ? Ss Jun05 0:06 /usr/lib/vmware-tools/sbin64/vmware-guestd --background /va apache 7720 0.1 0.5 210860 10748 ? S 00:36 0:00 /usr/sbin/httpd apache 7726 0.1 0.4 209836 9304 ? R 00:36 0:00 /usr/sbin/httpd apache 7727 0.1 0.5 210860 10916 ? S 00:36 0:00 /usr/sbin/httpd apache 7731 0.1 0.5 210860 10780 ? S 00:36 0:00 /usr/sbin/httpd apache 7732 0.3 0.5 210860 10916 ? 
S 00:36 0:01 /usr/sbin/httpd apache 7733 0.1 0.5 210872 11000 ? S 00:36 0:00 /usr/sbin/httpd apache 7735 0.1 0.5 211884 11048 ? S 00:36 0:00 /usr/sbin/httpd apache 7761 0.1 0.5 210860 10552 ? S 00:36 0:00 /usr/sbin/httpd apache 7776 0.1 0.4 209836 8648 ? R 00:37 0:00 /usr/sbin/httpd apache 7790 0.2 0.3 208812 7724 ? R 00:40 0:00 /usr/sbin/httpd apache 7800 0.2 0.3 208812 8088 ? R 00:40 0:00 /usr/sbin/httpd root 7801 0.0 0.0 3792 484 tty1 Ss+ 00:41 0:00 /sbin/mingetty tty1 apache 7820 0.2 0.3 208812 7552 ? R 00:41 0:00 /usr/sbin/httpd apache 7834 0.2 0.3 207788 6756 ? R 00:42 0:00 /usr/sbin/httpd apache 7864 0.2 0.2 207788 6148 ? R 00:42 0:00 /usr/sbin/httpd apache 7872 0.3 0.2 207788 5856 ? R 00:43 0:00 /usr/sbin/httpd apache 7874 2.5 0.3 207788 6336 ? R 00:43 0:00 /usr/sbin/httpd root 7875 0.3 0.0 63844 1056 ? S 00:43 0:00 sh -c lsb_release -sd 2>/dev/null root 7879 1.6 0.0 65604 964 pts/0 R+ 00:43 0:00 ps aux root 16316 0.0 0.1 90128 3272 ? Ss Jun05 0:00 sshd: milanb [priv] milanb 16358 0.0 0.0 90128 1752 ? S Jun05 0:00 sshd: milanb@pts/0 milanb 16360 0.0 0.0 66076 1480 pts/0 Ss Jun05 0:00 -bash root 16875 0.0 0.0 101068 1324 pts/0 S Jun05 0:00 su - root 16877 0.0 0.0 66184 1692 pts/0 S Jun05 0:00 -bash root 24373 0.0 0.3 206764 7348 ? Rs Jun05 0:01 /usr/sbin/httpd
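For anyone trying to account for the total from output like the above, one way to aggregate the resident set sizes by command name (a sketch using standard procps and awk; adjust as needed for this CentOS 5.5 system):

ps -eo rss,comm --no-headers | awk '{sum[$2]+=$1} END {for (c in sum) printf "%8d KB %s\n", sum[c], c}' | sort -rn | head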

    Read the article

  • How to reduce RAM consumption when my server is idle

    - by Julien Genestoux
    We use Slicehost, with 512MB instances running Ubuntu 9.10. I installed a few packages, and I'm now trying to optimize RAM consumption before running anything on there. A simple ps gives me the list of running processes:

    # ps faux
    USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
    root         2  0.0  0.0      0     0 ?        S<   Jan04   0:00 [kthreadd]
    root         3  0.0  0.0      0     0 ?        S<   Jan04   0:15  \_ [migration/0]
    root         4  0.0  0.0      0     0 ?        S<   Jan04   0:01  \_ [ksoftirqd/0]
    root         5  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [watchdog/0]
    root         6  0.0  0.0      0     0 ?        S<   Jan04   0:04  \_ [events/0]
    root         7  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [cpuset]
    root         8  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [khelper]
    root         9  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [async/mgr]
    root        10  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [xenwatch]
    root        11  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [xenbus]
    root        13  0.0  0.0      0     0 ?        S<   Jan04   0:02  \_ [migration/1]
    root        14  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [ksoftirqd/1]
    root        15  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [watchdog/1]
    root        16  0.0  0.0      0     0 ?        S<   Jan04   0:07  \_ [events/1]
    root        17  0.0  0.0      0     0 ?        S<   Jan04   0:02  \_ [migration/2]
    root        18  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [ksoftirqd/2]
    root        19  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [watchdog/2]
    root        20  0.0  0.0      0     0 ?        R<   Jan04   0:07  \_ [events/2]
    root        21  0.0  0.0      0     0 ?        S<   Jan04   0:04  \_ [migration/3]
    root        22  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [ksoftirqd/3]
    root        23  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [watchdog/3]
    root        24  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [events/3]
    root        25  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [kintegrityd/0]
    root        26  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [kintegrityd/1]
    root        27  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [kintegrityd/2]
    root        28  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [kintegrityd/3]
    root        29  0.0  0.0      0     0 ?        S<   Jan04   0:01  \_ [kblockd/0]
    root        30  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [kblockd/1]
    root        31  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [kblockd/2]
    root        32  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [kblockd/3]
    root        33  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [kseriod]
    root        34  0.0  0.0      0     0 ?        S    Jan04   0:00  \_ [khungtaskd]
    root        35  0.0  0.0      0     0 ?        S    Jan04   0:05  \_ [pdflush]
    root        36  0.0  0.0      0     0 ?        S    Jan04   0:06  \_ [pdflush]
    root        37  0.0  0.0      0     0 ?        S<   Jan04   1:02  \_ [kswapd0]
    root        38  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [aio/0]
    root        39  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [aio/1]
    root        40  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [aio/2]
    root        41  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [aio/3]
    root        42  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [jfsIO]
    root        43  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [jfsCommit]
    root        44  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [jfsCommit]
    root        45  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [jfsCommit]
    root        46  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [jfsCommit]
    root        47  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [jfsSync]
    root        48  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [xfs_mru_cache]
    root        49  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [xfslogd/0]
    root        50  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [xfslogd/1]
    root        51  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [xfslogd/2]
    root        52  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [xfslogd/3]
    root        53  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [xfsdatad/0]
    root        54  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [xfsdatad/1]
    root        55  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [xfsdatad/2]
    root        56  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [xfsdatad/3]
    root        57  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [xfsconvertd/0]
    root        58  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [xfsconvertd/1]
    root        59  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [xfsconvertd/2]
    root        60  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [xfsconvertd/3]
    root        61  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [glock_workqueue]
    root        62  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [glock_workqueue]
    root        63  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [glock_workqueue]
    root        64  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [glock_workqueue]
    root        65  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [delete_workqueu]
    root        66  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [delete_workqueu]
    root        67  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [delete_workqueu]
    root        68  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [delete_workqueu]
    root        69  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [kslowd]
    root        70  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [kslowd]
    root        71  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [crypto/0]
    root        72  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [crypto/1]
    root        73  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [crypto/2]
    root        74  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [crypto/3]
    root        77  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [net_accel/0]
    root        78  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [net_accel/1]
    root        79  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [net_accel/2]
    root        80  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [net_accel/3]
    root        81  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [sfc_netfront/0]
    root        82  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [sfc_netfront/1]
    root        83  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [sfc_netfront/2]
    root        84  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [sfc_netfront/3]
    root       310  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [kstriped]
    root       315  0.0  0.0      0     0 ?        S<   Jan04   0:00  \_ [ksnapd]
    root      1452  0.0  0.0      0     0 ?        S<   Jan04   4:31  \_ [kjournald]
    root         1  0.0  0.1  19292   948 ?        Ss   Jan04   0:15 /sbin/init
    root      1545  0.0  0.1  13164  1064 ?        S    Jan04   0:00 upstart-udev-bridge --daemon
    root      1547  0.0  0.1  17196   996 ?        S<s  Jan04   0:00 udevd --daemon
    root      1728  0.0  0.2  20284  1468 ?        S<   Jan04   0:00  \_ udevd --daemon
    root      1729  0.0  0.1  17192   792 ?        S<   Jan04   0:00  \_ udevd --daemon
    root      1881  0.0  0.0   8192   152 ?        Ss   Jan04   0:00 dd bs=1 if=/proc/kmsg of=/var/run/rsyslog/kmsg
    syslog    1884  0.0  0.2 185252  1200 ?        Sl   Jan04   1:00 rsyslogd -c4
    103       1894  0.0  0.1  23328   700 ?        Ss   Jan04   1:08 dbus-daemon --system --fork
    root      2046  0.0  0.0    136    32 ?        Ss   Jan04   4:05 runsvdir -P /etc/service log: gems/custom_require.rb:31:in `require'??from /mnt/app/superfeedr-firehoser/current/script/component:52?/opt/ruby-enterprise/lib/ruby/si
    root      2055  0.0  0.0    112    32 ?        Ss   Jan04   0:00  \_ runsv chef-client
    root      2060  0.0  0.0    132    40 ?        S    Jan04   0:02  |   \_ svlogd -tt ./main
    root      2056  0.0  0.0    112    28 ?        Ss   Jan04   0:20  \_ runsv superfeedr-firehoser_2
    root      2059  0.0  0.0    132    40 ?        S    Jan04   0:29  |   \_ svlogd /var/log/superfeedr-firehoser_2
    root      2057  0.0  0.0    112    28 ?        Ss   Jan04   0:20  \_ runsv superfeedr-firehoser_1
    root      2062  0.0  0.0    132    44 ?        S    Jan04   0:26      \_ svlogd /var/log/superfeedr-firehoser_1
    root      2058  0.0  0.0  18708   316 ?        Ss   Jan04   0:01 cron
    root      2095  0.0  0.1  49072   764 ?        Ss   Jan04   0:06 /usr/sbin/sshd
    root      9832  0.0  0.5  78916  3500 ?        Ss   00:37   0:00  \_ sshd: root@pts/0
    root      9846  0.0  0.3  17900  2036 pts/0    Ss   00:37   0:00      \_ -bash
    root     10132  0.0  0.1  15020  1064 pts/0    R+   09:51   0:00          \_ ps faux
    root      2180  0.0  0.0   5988   140 tty1     Ss+  Jan04   0:00 /sbin/getty -8 38400 tty1
    root     27610  0.0  1.4  47060  8436 ?        S    Apr04   2:21 python /usr/sbin/denyhosts --daemon --purge --config=/etc/denyhosts.conf --config=/etc/denyhosts.conf
    root     22640  0.0  0.7 119244  4164 ?        Ssl  Apr05   0:05 /usr/sbin/console-kit-daemon
    root     10113  0.0  0.0   3904   316 ?        Ss   09:46   0:00 /usr/sbin/collectdmon -P /var/run/collectdmon.pid -- -C /etc/collectd/collectd.conf
    root     10114  0.0  0.2 201084  1464 ?        Sl   09:46   0:00  \_ collectd -C /etc/collectd/collectd.conf -f

    As you can see, there is nothing serious here. If I sum up the RSS column over all of these, I get the following:

    # ps -aeo rss | awk '{sum+=$1} END {print sum}'
    30096

    Which makes sense. However, I get a pretty big surprise when I run free:

    # free
                 total       used       free     shared    buffers     cached
    Mem:        591180     343684     247496          0      25432     161256
    -/+ buffers/cache:     156996     434184
    Swap:      1048568          0    1048568

    As you can see, 60% of the available memory is already consumed, which leaves me with only 40% to run my own applications if I want to avoid swapping. Quite disappointing! Two questions arise: where is all this memory, and how can I take some of it back for my own apps?
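    The free output above already contains most of the answer: the bulk of the "used" figure is the kernel's buffers and page cache, which are released automatically whenever applications ask for memory, and the "-/+ buffers/cache" row shows the real application footprint. A minimal awk sketch over /proc/meminfo makes the accounting explicit (the field names assume the 2.6-era /proc/meminfo layout that Ubuntu 9.10 ships):

    # Sketch: reconcile "free" with per-process RSS by pulling buffers
    # and page cache out of the "used" figure.
    awk '/^(MemTotal|MemFree|Buffers|Cached):/ { m[$1] = $2 }
         END {
             used = m["MemTotal:"] - m["MemFree:"]
             apps = used - m["Buffers:"] - m["Cached:"]
             printf "used: %d kB (cache+buffers: %d kB, apps: %d kB)\n",
                    used, m["Buffers:"] + m["Cached:"], apps
         }' /proc/meminfo

    On the numbers above this yields 343684 - 25432 - 161256 = 156996 kB genuinely held by processes, matching the "-/+ buffers/cache" row, so nothing needs to be taken back by hand: the cache shrinks on demand. (On 2.6.16+ kernels, sync; echo 3 > /proc/sys/vm/drop_caches releases it immediately, though there is rarely a reason to.)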


  • Android app working on the emulator but not on the phone

    - by raqz
    I have an app that I developed, and it works great on the emulator with no errors whatsoever. But the moment I try to run the same app on the phone for testing, it crashes with a FileNotFoundException: it says the file /res/drawable/divider_horizontal.9.png is missing. Strictly speaking, I have never referenced that file from my code; I believe it is a system/OS resource that is unavailable on the device. I have a custom list view, so I guess it is the divider there. Could somebody please suggest what is wrong here? I believe a similar issue is discussed here, but I am unable to make sense of it: http://code.google.com/p/transdroid/issues/detail?id=14

    The listview.xml layout file:

    <?xml version="1.0" encoding="utf-8"?>
    <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:gravity="left|center"
        android:paddingBottom="5px"
        android:paddingTop="5px"
        android:paddingLeft="5px">
        <ImageView
            android:id="@+id/linkImage"
            android:layout_width="wrap_content"
            android:layout_height="fill_parent"
            android:layout_marginRight="6dip"
            android:src="@drawable/icon" />
        <LinearLayout
            android:orientation="vertical"
            android:layout_width="0dip"
            android:layout_weight="1"
            android:layout_height="fill_parent">
            <TextView
                android:id="@+id/firstLineView"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:gravity="center"
                android:textColor="#FFFF00"
                android:text="first line title" />
            <TextView
                android:id="@+id/secondLineView"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_marginLeft="10px"
                android:gravity="center"
                android:textColor="#0099CC"
                android:text="second line title" />
        </LinearLayout>
    </LinearLayout>

    The main XML file that calls listview.xml:

    <?xml version="1.0" encoding="UTF-8"?>
    <FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent">
        <LinearLayout
            android:orientation="vertical"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent">
            <LinearLayout
                android:orientation="horizontal"
                android:layout_width="fill_parent"
                android:layout_height="40px">
                <Button
                    android:id="@+id/todayButton"
                    android:layout_width="fill_parent"
                    android:layout_height="fill_parent"
                    android:text="Today"
                    android:textSize="12sp"
                    android:gravity="center"
                    android:layout_weight="1" />
                <Button
                    android:id="@+id/tomorrowButton"
                    android:layout_width="fill_parent"
                    android:layout_height="fill_parent"
                    android:text="Tomorrow"
                    android:textSize="12sp"
                    android:layout_weight="1" />
                <Button
                    android:id="@+id/WeekButton"
                    android:layout_width="fill_parent"
                    android:layout_height="fill_parent"
                    android:text="Future"
                    android:textSize="12sp"
                    android:layout_weight="1" />
            </LinearLayout>
            <LinearLayout
                android:id="@+id/listLayout"
                android:orientation="vertical"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent">
                <ListView
                    android:id="@+id/ListView01"
                    android:layout_width="fill_parent"
                    android:layout_height="fill_parent" />
                <TextView
                    android:id="@id/android:empty"
                    android:layout_width="fill_parent"
                    android:layout_height="fill_parent"
                    android:text="No Results" />
            </LinearLayout>
        </LinearLayout>
    </FrameLayout>

    And the adapter code:

    private class EfficientAdapter extends BaseAdapter {
        private LayoutInflater mInflater;
        private String[] eventTitleArray;
        private String[] eventDateArray;
        private String[] eventImageLinkArray;

        public EfficientAdapter(Context context, String[] eventTitleArray,
                String[] eventDateArray, String[] eventImageLinkArray) {
            mInflater = LayoutInflater.from(context);
            this.eventDateArray = eventDateArray;
            this.eventTitleArray = eventTitleArray;
            this.eventImageLinkArray = eventImageLinkArray;
        }

        public int getCount() {
            //return XmlParser.todayEvents.size()-1;
            return this.eventDateArray.length;
        }

        public Object getItem(int position) {
            return position;
        }

        public long getItemId(int position) {
            return position;
        }

        public View getView(int position, View convertView, ViewGroup parent) {
            ViewHolder holder;
            if (convertView == null) {
                // Inflating R.layout.listview is the step that crashes on the phone.
                convertView = mInflater.inflate(R.layout.listview, null);
                holder = new ViewHolder();
                holder.firstLine = (TextView) convertView.findViewById(R.id.firstLineView);
                holder.secondLine = (TextView) convertView.findViewById(R.id.secondLineView);
                holder.imageView = (ImageView) convertView.findViewById(R.id.linkImage);
                //holder.checkbox = (CheckBox) convertView.findViewById(R.id.star);
                holder.firstLine.setFocusable(false);
                holder.secondLine.setFocusable(false);
                holder.imageView.setFocusable(false);
                //holder.checkbox.setFocusable(false);
                convertView.setTag(holder);
            } else {
                holder = (ViewHolder) convertView.getTag();
            }
            Log.i(tag, "Creating the list");
            holder.firstLine.setText(this.eventTitleArray[position]);
            holder.secondLine.setText(this.eventDateArray[position]);
            // Note: these images are fetched synchronously on the UI thread,
            // once per row; slow, but unrelated to the crash above.
            Bitmap bitmap;
            try {
                bitmap = BitmapFactory.decodeStream((InputStream) new URL(
                        "http://eventur.sis.pitt.edu/images/heinz7.jpg").getContent());
            } catch (MalformedURLException e1) {
                e1.printStackTrace();
            } catch (Exception e1) {
                bitmap = BitmapFactory.decodeFile("assets/heinz7.jpg"); //decodeFile(getResources().getAssets().open("icon.png"));
                e1.printStackTrace();
            }
            try {
                try {
                    bitmap = BitmapFactory.decodeStream((InputStream) new URL(
                            this.eventImageLinkArray[position]).getContent());
                } catch (Exception e) {
                    bitmap = BitmapFactory.decodeStream((InputStream) new URL(
                            "http://eventur.sis.pitt.edu/images/heinz7.jpg").getContent());
                }
                int width = 0;
                int height = 0;
                int newWidth = 50;
                int newHeight = 40;
                try {
                    width = bitmap.getWidth();
                    height = bitmap.getHeight();
                } catch (Exception e) {
                    width = 50;
                    height = 40;
                }
                // Scale the downloaded bitmap to roughly 50x40 via a Matrix.
                float scaleWidth = ((float) newWidth) / width;
                float scaleHeight = ((float) newHeight) / height;
                Matrix mat = new Matrix();
                mat.postScale(scaleWidth, scaleHeight);
                try {
                    Bitmap newBitmap = Bitmap.createBitmap(bitmap, 0, 0, width, height, mat, true);
                    BitmapDrawable bmd = new BitmapDrawable(newBitmap);
                    holder.imageView.setImageDrawable(bmd);
                    holder.imageView.setScaleType(ScaleType.CENTER);
                } catch (Exception e) {
                }
            } catch (MalformedURLException e) {
                e.printStackTrace();
            } catch (IOException e) {
                e.printStackTrace();
            }
            return convertView;
        }

        class ViewHolder {
            TextView firstLine;
            TextView secondLine;
            ImageView imageView;
            //CheckBox checkbox;
        }
    }

    The stack trace:

    12-12 22:55:25.022: ERROR/AndroidRuntime(11069): Uncaught handler: thread main exiting due to uncaught exception
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069): android.view.InflateException: Binary XML file line #6: Error inflating class java.lang.reflect.Constructor
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.view.LayoutInflater.createView(LayoutInflater.java:512)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at com.android.internal.policy.impl.PhoneLayoutInflater.onCreateView(PhoneLayoutInflater.java:56)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.view.LayoutInflater.createViewFromTag(LayoutInflater.java:562)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.view.LayoutInflater.rInflate(LayoutInflater.java:617)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.view.LayoutInflater.inflate(LayoutInflater.java:407)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.view.LayoutInflater.inflate(LayoutInflater.java:320)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.view.LayoutInflater.inflate(LayoutInflater.java:276)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at com.eventur.MainActivity$EfficientAdapter.getView(MainActivity.java:566)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.widget.AbsListView.obtainView(AbsListView.java:1274)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.widget.ListView.makeAndAddView(ListView.java:1661)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.widget.ListView.fillDown(ListView.java:610)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.widget.ListView.fillFromTop(ListView.java:673)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.widget.ListView.layoutChildren(ListView.java:1519)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.widget.AbsListView.onLayout(AbsListView.java:1113)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.view.View.layout(View.java:6156)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.widget.LinearLayout.setChildFrame(LinearLayout.java:1119)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.widget.LinearLayout.layoutVertical(LinearLayout.java:998)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.widget.LinearLayout.onLayout(LinearLayout.java:918)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.view.View.layout(View.java:6156)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.widget.LinearLayout.setChildFrame(LinearLayout.java:1119)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.widget.LinearLayout.layoutVertical(LinearLayout.java:998)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.widget.LinearLayout.onLayout(LinearLayout.java:918)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.view.View.layout(View.java:6156)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.widget.FrameLayout.onLayout(FrameLayout.java:333)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.view.View.layout(View.java:6156)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.widget.FrameLayout.onLayout(FrameLayout.java:333)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.view.View.layout(View.java:6156)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.widget.LinearLayout.setChildFrame(LinearLayout.java:1119)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.widget.LinearLayout.layoutVertical(LinearLayout.java:998)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.widget.LinearLayout.onLayout(LinearLayout.java:918)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.view.View.layout(View.java:6156)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.widget.FrameLayout.onLayout(FrameLayout.java:333)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.view.View.layout(View.java:6156)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.view.ViewRoot.performTraversals(ViewRoot.java:950)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.view.ViewRoot.handleMessage(ViewRoot.java:1529)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.os.Handler.dispatchMessage(Handler.java:99)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.os.Looper.loop(Looper.java:123)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.app.ActivityThread.main(ActivityThread.java:3977)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at java.lang.reflect.Method.invokeNative(Native Method)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at java.lang.reflect.Method.invoke(Method.java:521)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:782)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:540)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at dalvik.system.NativeStart.main(Native Method)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069): Caused by: java.lang.reflect.InvocationTargetException
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.widget.ImageView.<init>(ImageView.java:128)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at java.lang.reflect.Constructor.constructNative(Native Method)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at java.lang.reflect.Constructor.newInstance(Constructor.java:446)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.view.LayoutInflater.createView(LayoutInflater.java:499)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     ... 42 more
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069): Caused by: android.content.res.Resources$NotFoundException: File res/drawable/divider_horizontal_dark.9.png from drawable resource ID #0x7f020001
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.content.res.Resources.loadDrawable(Resources.java:1643)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.content.res.TypedArray.getDrawable(TypedArray.java:548)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.widget.ImageView.<init>(ImageView.java:138)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     ... 46 more
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069): Caused by: java.io.FileNotFoundException: res/drawable/divider_horizontal_dark.9.png
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.content.res.AssetManager.openNonAssetNative(Native Method)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.content.res.AssetManager.openNonAsset(AssetManager.java:417)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     at android.content.res.Resources.loadDrawable(Resources.java:1636)
    12-12 22:55:25.212: ERROR/AndroidRuntime(11069):     ... 48 more
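    Two details in the trace narrow this down. The crash happens while the ImageView in listview.xml resolves its drawable, and the failing resource ID (0x7f020001) is in the application's own package (IDs starting 0x7f), yet the packaged APK maps that ID to divider_horizontal_dark.9.png, a file that is not in the APK. That mismatch between the compiled R constants and the packaged resource table typically comes from a stale build, so a full clean and rebuild (so R.java and the APK are regenerated together) is the first thing to try. Independently, if a framework divider drawable was copied into res/drawable and is still referenced somewhere, the list's dependency on it can be removed by setting the divider in code. A minimal sketch, not from the original post; the activity name is hypothetical, and R.layout.main and R.id.ListView01 refer to the layout shown above:

    import android.app.Activity;
    import android.graphics.drawable.ColorDrawable;
    import android.os.Bundle;
    import android.widget.ListView;

    public class EventListActivity extends Activity {  // hypothetical name
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.main);  // the FrameLayout shown above
            ListView list = (ListView) findViewById(R.id.ListView01);
            // A self-contained divider: no nine-patch drawable is resolved
            // from resources, so a missing .9.png cannot crash inflation.
            list.setDivider(new ColorDrawable(0xFF0099CC));
            list.setDividerHeight(1);
        }
    }

    Setting the divider this way uses only ListView.setDivider(Drawable) and setDividerHeight(int), which exist on every API level, so it behaves identically on the emulator and the device.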

