Search Results

Search found 1032 results on 42 pages for 'repeated'.


  • Cowboy Agile?

    - by Robert May
    In a previous post, I outlined the rules of Scrum. This post details one of those rules. I’ve often heard similar phrases around Scrum that clue me in to someone who doesn’t understand Scrum. The phrases go something like this:

    “We don’t do Agile because the idea of letting people just do whatever they want is wrong. We believe in a more structured approach.” (i.e. Work is Prison, and I’m the Warden!)
    “I love Agile. Agile lets us do whatever we want!” (Cowboy Agile?)
    “We’re Agile, but we use a process that I’ve created.” (Cowboy Agile?)

    All of those phrases have one thing in common: the assumption that Agile – and I mean Scrum – lets you do whatever you want. This is simply not true. Executing Scrum properly requires more dedication, rigor, and diligence than happens in most traditional development methods.

    Scrum and Waterfall Compared

    Since Scrum and Waterfall are two of the most commonly used methodologies, a little contrasting and comparing is in order.

    Waterfall: A project manager defines all tasks and then manages the tasks that team members are working on.
    Scrum: The team members define the tasks and estimates of the stories for the current iteration. Any team member may work on any task in the iteration.

    Waterfall: There are usually only a few milestones that need to be met, the milestones are measured in months, and these milestones are expected to be missed. Little work is ever done to improve estimates, and poor estimators can hide behind high estimates.
    Scrum: Stories must be delivered every iteration, milestones are measured in hours, and the team is expected to figure out why their estimates were wrong – even when they were under. Repeated misses can get the entire team fired.

    Waterfall: Partially completed work is normal.
    Scrum: Partially completed work doesn’t count.

    Waterfall: Nobody knows the task you’re working on.
    Scrum: Everyone knows what you’re working on, whether or not you’re making progress, and how much longer you think it’s going to take, in hours.

    Waterfall: Little requirement to show working code. Prototypes are OK.
    Scrum: Working code must be shown each iteration. No smoke and mirrors allowed.

    Waterfall: Testing is done in lengthy cycles at the end of development. Developers aren’t held accountable.
    Scrum: Testing is part of the team. If the testers don’t accept the story as complete, the team can’t count it. Complete means that the story’s functionality works as designed and the team has no open defects on the story.

    Waterfall: Velocity is rarely truly measured and is difficult to evaluate.
    Scrum: Velocity is integral to the process, can be seen at a glance, and everyone in the company knows what it is.

    Waterfall: A business analyst writes requirements. Designers mock up screens. Developers hide behind “I did it just like the spec doc told me to and made the screen exactly like the picture.”
    Scrum: Developers are expected to collaborate in real time. If a design is bad or lacks needed details, the developers are required to get it right in the iteration, because all software must be functional. Designers and business analysts are part of the team and must do their work in iterations slightly ahead of the developers.

    Waterfall: Upper management is often surprised: “You told me things were going well two months ago!”
    Scrum: Management receives updates at the end of every iteration showing exactly what the team did and how that compares to what remains in the backlog. Managers know every iteration what their money is buying.

    Waterfall: Status meetings are rare or don’t occur. Email is a primary form of communication.
    Scrum: Teams coordinate with each other every single day and use other high-bandwidth communication channels to make sure they’re making progress. Email is used only as a last resort. Instead, team members stand up, walk to each other, and talk, face to face. If that’s not possible, they pick up the phone.

    Waterfall: If someone asks what happened, it’s at the end of a lengthy development cycle measured in months, and nobody really knows why it happened.
    Scrum: Someone asks what happened every iteration. The team talks about what happened, and then adapts to make sure that what happened either never happens again or happens every time.

    That’s probably enough for now. As you can see, a lot is required of Scrum teams! One of the key differences in Scrum is that the burden for many activities is shifted to a group of people who share responsibility, instead of a single person having responsibility. This is a very good thing, since small groups usually come up with better and more insightful work than single individuals. This shift also results in better velocity: team members can take vacations and the rest of the team simply picks up the slack, whereas with Waterfall, if a key team member takes a vacation, delays can ensue. Scrum requires much more out of every team member, and as a result, Scrum teams outperform non-Scrum teams working 60-hour weeks.

    Recommended Reading

    Everyone considering Scrum should read Mike Cohn’s excellent book, User Stories Applied.

    Read the article

  • PASS: Board Q&A at the Summit

    - by Bill Graziano
    The last two years we’ve put the Board in front of the members and taken questions. We’re going to do that again this year. It will be in Room 307/308 from 12:15 to 1:30 on Friday.

    Yes, this time overlaps with the Birds of a Feather Lunch and the start of the afternoon sessions – but only partially. You can attend the Q&A and still get to parts of both of those. There just isn’t a great time to do this; every time overlaps with something. We can’t do it after the last session on Friday. We can’t fit it between the last session and the evening events on Wednesday or Thursday. We had some discussion around breakfast time, but I didn’t think that was realistic. This is the least bad time we could come up with.

    Last year we had 60-70 people attend. These are the items that were specific things that I could work on:

    The first question was whether to increase transparency around individual votes of Board members. We approved this at the Board meeting the following day. The only caveat was that if the Board is given confidential information as a basis for their vote, then we may not be able to disclose individual votes. Putting a Director in a position where they can’t publicly defend the reason for their vote is a difficult situation. Thanks Kendal!

    Can we have a Board member discretionary fund? As background, I took a couple of people to lunch so we could have a quiet place to talk. I bought lunch but wasn’t able to expense it back to PASS. We just don’t have a budget item for things like this. I think we should. I would guess the entire Board would like it also. It was in an earlier version of the budget but came out as part of a cost-cutting move to balance the budget. I’d like to see it added back in, but we’ll have to see.

    I know there were comments about the elections. At this point we had created the Election Review Committee. I’ve already written at length about this process.

    Where does IT work go? PASS started to publish our internal management reports in December 2010. You can find them on our Governance page. These aren’t filtered at all and include a variety of information about IT projects. The most recent update had roughly a page of updates related to IT. Much of the work was related to the Summit and the Orator tool that we use to manage speaker submissions.

    There were numerous requests that Tina Turner not be repeated. Done. I don’t think we’ll do anything quite like that again.

    We had a request for a payment plan for the Summit. We looked into this briefly but didn’t take any action. We didn’t think the effort was worth the small number of people that would use it. If you disagree, submit this on our Summit Feedback site and get some votes.

    There were lots of suggestions around the first-timers events – especially from first-timers. You can find all our current activities related to first-timers at the First Timers page on the Summit web site, plus links to 34 (!) blog posts with suggestions for first-timers. And a big THANK YOU to Confio and Red Gate for sponsoring this.

    I hope you get the chance to attend. These events are very helpful to me as a Board member. I like being able to look around the room as comments are being made and see the audience reaction. It helps me gauge the interest in an idea.

    I’d also like to direct you to the Summit Feedback site. You can submit and vote on ideas to make the Summit a better experience. As of right now we still have the suggestions from last year up. We may reset these prior to the Summit, though.

    Read the article

  • Mapping Repeating Sequence Groups in BizTalk

    - by Paul Petrov
    Repeating sequence groups can often be seen in real-life XML documents. They occur when a certain sequence of elements repeats in the instance document. Here’s a fairly abstract example of a schema definition that contains a sequence group:

    <xs:schema xmlns:b="http://schemas.microsoft.com/BizTalk/2003"
               xmlns:xs="http://www.w3.org/2001/XMLSchema"
               xmlns="NS-Schema1"
               targetNamespace="NS-Schema1">
      <xs:element name="RepeatingSequenceGroups">
        <xs:complexType>
          <xs:sequence maxOccurs="1" minOccurs="0">
            <xs:sequence maxOccurs="unbounded">
              <xs:element name="A" type="xs:string" />
              <xs:element name="B" type="xs:string" />
              <xs:element name="C" type="xs:string" minOccurs="0" />
            </xs:sequence>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
    </xs:schema>

    And here’s a corresponding XML instance document:

    <ns0:RepeatingSequenceGroups xmlns:ns0="NS-Schema1">
      <A>A1</A>
      <B>B1</B>
      <C>C1</C>
      <A>A2</A>
      <B>B2</B>
      <A>A3</A>
      <B>B3</B>
      <C>C3</C>
    </ns0:RepeatingSequenceGroups>

    As you can see, elements A, B, and C are children of an anonymous xs:sequence element which in turn can be repeated N times. Let’s say we need to do a simple mapping to a schema with a similar structure but different element names:

    <ns0:Destination xmlns:ns0="NS-Schema2">
      <Alpha>A1</Alpha>
      <Beta>B1</Beta>
      <Gamma>C1</Gamma>
      <Alpha>A2</Alpha>
      <Beta>B2</Beta>
      <Gamma>C2</Gamma>
    </ns0:Destination>

    The basic map for such a typical task would look pretty straightforward. If we test this map without any modification, it produces the following result:

    <ns0:Destination xmlns:ns0="NS-Schema2">
      <Alpha>A1</Alpha>
      <Alpha>A2</Alpha>
      <Alpha>A3</Alpha>
      <Beta>B1</Beta>
      <Beta>B2</Beta>
      <Beta>B3</Beta>
      <Gamma>C1</Gamma>
      <Gamma>C3</Gamma>
    </ns0:Destination>

    The original order of the elements inside the sequence is lost, and that’s not what we want. The default behavior of the BizTalk 2009 and 2010 Map Editor is to generate a map compatible with older versions that did not have the ability to preserve sequence order. To enable this feature, simply open the map file (*.btm) in a text/XML editor, find the PreserveSequenceOrder attribute of the root <mapsource> element, set its value to Yes, and re-test the map:

    <ns0:Destination xmlns:ns0="NS-Schema2">
      <Alpha>A1</Alpha>
      <Beta>B1</Beta>
      <Gamma>C1</Gamma>
      <Alpha>A2</Alpha>
      <Beta>B2</Beta>
      <Alpha>A3</Alpha>
      <Beta>B3</Beta>
      <Gamma>C3</Gamma>
    </ns0:Destination>

    The result is as expected – all corresponding elements are in the same order as in the source document. Under the hood this is achieved by using one common xsl:for-each statement that pulls all elements in their original order (rather than one for-each statement per element name, as in the default mode) and xsl:if statements to test the current element in the loop:

    <xsl:template match="/s0:RepeatingSequenceGroups">
      <ns0:Destination>
        <xsl:for-each select="A|B|C">
          <xsl:if test="local-name()='A'">
            <Alpha>
              <xsl:value-of select="./text()" />
            </Alpha>
          </xsl:if>
          <xsl:if test="local-name()='B'">
            <Beta>
              <xsl:value-of select="./text()" />
            </Beta>
          </xsl:if>
          <xsl:if test="local-name()='C'">
            <Gamma>
              <xsl:value-of select="./text()" />
            </Gamma>
          </xsl:if>
        </xsl:for-each>
      </ns0:Destination>
    </xsl:template>

    The BizTalk Map Editor became smarter, so learn and use this lesser-known feature in your maps and XSL stylesheets.
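    For reference, the edit in the .btm file is a single attribute on the root element. A minimal sketch of how that root element might look after the change – the surrounding attribute names and values here are illustrative, not copied from a real map:

    <mapsource Name="MapToDestination" BizTalkServerVersion="2010"
               Version="2" PreserveSequenceOrder="Yes">
      <!-- SrcTree, TrgTree, Pages ... as generated by the Map Editor -->
    </mapsource>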

    Read the article

  • ROracle support for TimesTen In-Memory Database

    - by Sam Drake
    Today's guest post comes from Jason Feldhaus, a Consulting Member of Technical Staff in the TimesTen Database organization at Oracle. He shares with us a sample session using ROracle with the TimesTen In-Memory Database.

    Beginning in version 1.1-4, ROracle includes support for the Oracle TimesTen In-Memory Database, version 11.2.2. TimesTen is a relational database providing very fast response times and high throughput through its memory-centric architecture. TimesTen is designed for low-latency, high-volume data, event, and transaction management. A TimesTen database resides entirely in memory, so no disk I/O is required for transactions and query operations. TimesTen is used in applications requiring very fast and predictable response time, such as real-time financial services trading applications and large web applications. TimesTen can be used as the database of record or as a relational cache database to Oracle Database.

    ROracle provides an interface between R and the database, providing the rich functionality of the R statistical programming environment using the SQL query language. ROracle uses the OCI libraries to handle database connections, providing much better performance than standard ODBC.

    The latest ROracle enhancements include:

    - Support for Oracle TimesTen In-Memory Database
    - Support for Date-Time using R's POSIXct/POSIXlt data types
    - RAW, BLOB and BFILE data type support
    - Option to specify number of rows per fetch operation
    - Option to prefetch LOB data
    - Break support using Ctrl-C
    - Statement caching support

    TimesTen 11.2.2 contains enhanced support for analytics workloads and complex queries:

    - Analytic functions: AVG, SUM, COUNT, MAX, MIN, DENSE_RANK, RANK, ROW_NUMBER, FIRST_VALUE and LAST_VALUE
    - Analytic clauses: OVER PARTITION BY and OVER ORDER BY
    - Multidimensional grouping operators – grouping clauses: GROUP BY CUBE, GROUP BY ROLLUP, GROUP BY GROUPING SETS; grouping functions: GROUP, GROUPING_ID, GROUP_ID
    - WITH clause, which allows repeated references to a named subquery block
    - Aggregate expressions over DISTINCT expressions
    - General expressions that return a character string in the source or a pattern within the LIKE predicate
    - Ability to order nulls first or last in a sort result (NULLS FIRST or NULLS LAST in the ORDER BY clause)

    Note: Some functionality is only available with Oracle Exalytics; refer to the TimesTen product licensing document for details.

    Connecting to TimesTen is easy with ROracle. Simply install and load the ROracle package and load the driver.

    > install.packages("ROracle")
    > library(ROracle)
    Loading required package: DBI
    > drv <- dbDriver("Oracle")

    Once the ROracle package is installed, create a database connection object and connect to a TimesTen direct driver DSN as the OS user.

    > conn <- dbConnect(drv, username ="", password="", dbname = "localhost/SampleDb_1122:timesten_direct")

    You have the option to report the server type – Oracle or TimesTen?

    > print (paste ("Server type =", dbGetInfo (conn)$serverType))
    [1] "Server type = TimesTen IMDB"

    To create tables in the database using R data frame objects, use the function dbWriteTable. In the following example we write the built-in iris data frame to TimesTen. The iris data set is a small example data set containing 150 rows and 5 columns. We include it here not to highlight performance, but so users can easily run this example in their R session.

    > dbWriteTable (conn, "IRIS", iris, overwrite=TRUE, ora.number=FALSE)
    [1] TRUE

    Verify that the newly created IRIS table is available in the database. To list the available tables and table columns in the database, use dbListTables and dbListFields, respectively.

    > dbListTables (conn)
    [1] "IRIS"
    > dbListFields (conn, "IRIS")
    [1] "SEPAL.LENGTH" "SEPAL.WIDTH" "PETAL.LENGTH" "PETAL.WIDTH" "SPECIES"

    To retrieve a summary of the data from the database, we need to save the results to a local object. The following call saves the results of the query as a local R object, iris.summary. The ROracle function dbGetQuery is used to execute an arbitrary SQL statement against the database. When connected to TimesTen, the SQL statement is processed completely within main memory for the fastest response time.

    > iris.summary <- dbGetQuery(conn, 'SELECT SPECIES, AVG ("SEPAL.LENGTH") AS AVG_SLENGTH, AVG ("SEPAL.WIDTH") AS AVG_SWIDTH, AVG ("PETAL.LENGTH") AS AVG_PLENGTH, AVG ("PETAL.WIDTH") AS AVG_PWIDTH FROM IRIS GROUP BY ROLLUP (SPECIES)')
    > iris.summary
         SPECIES AVG_SLENGTH AVG_SWIDTH AVG_PLENGTH AVG_PWIDTH
    1     setosa    5.006000   3.428000       1.462   0.246000
    2 versicolor    5.936000   2.770000       4.260   1.326000
    3  virginica    6.588000   2.974000       5.552   2.026000
    4       <NA>    5.843333   3.057333       3.758   1.199333

    Finally, disconnect from the TimesTen database.

    > dbCommit (conn)
    [1] TRUE
    > dbDisconnect (conn)
    [1] TRUE

    We encourage you to download Oracle software for evaluation from the Oracle Technology Network. See these links for our software: TimesTen In-Memory Database, ROracle. As always, we welcome comments and questions on the TimesTen and Oracle R technical forums.
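    As a small companion to the session above, the standard DBI call dbReadTable can pull the stored table back into R as a data frame – a minimal sketch, assuming the IRIS table created above and the same open connection:

    > iris.df <- dbReadTable (conn, "IRIS")   # whole table into an R data frame
    > dim (iris.df)                           # 150 rows, 5 columns, per the post above
    [1] 150   5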

    Read the article

  • How to tell whether your programmers are under-performing?

    - by A Team Lead
    I am a team lead with 5+ developers. I have a developer (let's call him A) who is a good programmer, who writes good, clean, easy-to-understand code. However, he is somewhat difficult to manage, and sometimes I wonder whether he is really under-performing or not.

    Our company requires the developers to indicate work progress in the bug tracker we use – not so much to monitor the programmers as to let the stakeholders know the progress. The thing is, A only updates a task's progress when it is done (maybe 3 weeks after it is first worked on), and this leaves everyone wondering what is going on in the middle of the development week. He wouldn't change his habit despite repeated probing. (It's OK, developers hate paperwork; I do, too.)

    In the last 2-3 months he has been on leave quite often due to various events – either he is sick, or he has to attend a lot of personal events, etc. (It's OK, bad things happen in a string. It's just a coincidence.)

    We define sprints, or roadmaps, for each month. At the beginning of the sprint, we discuss the amount of work each of the developers has to do in the sprint, and the developers get to set the amount of time they need for each task. He usually won't be able to complete all of them. (It's OK, developers regularly miss deadlines through no fault of their own.)

    If only one or two of the above happened, I wouldn't feel that A is under-performing, but they all happen together. So I have the feeling that A is under-performing and maybe – God forbid – slacking off. This is just a feeling based on my years of experience as a programmer. But I could be wrong.

    It is notoriously hard to measure the work of a programmer, given that no two tasks are alike, and there is no standard, objective way to measure a programmer's commitment to your company. It is downright impossible to tell whether a programmer is doing his job or slacking off. All you can do is trust them – yes, trusting and giving them autonomy is the best way for programmers to work, I know that, so don't start a lecture on why you need to trust your programmers, thank you very much – but if they abuse your trust, can you know?

    My question is: how can you tell whether your programmers are under-performing? Surely there are experienced team leads who know better than me on this?

    Outcome: I had a straight talk with him regarding my perception of his performance. He was indignant when I suggested that I had the feeling he wasn't performing at his best level. He felt that this was a completely unfair feeling. I then replied that this was my feeling and I didn't know whether my feeling was right or not. He would have none of this and ended the discussion immediately. Before he left he said that he "would try to give more to the company" in a very cold tone. I was taken aback by his reaction. I am sure that I offended him in some way. Not too sure whether it was the right thing for me to be so frank with him, though.

    Extra notes: I hate micromanaging. So all that we have for our software process is the sprint (where tasks get prioritized and assigned, and at the end of the month, a review of the amount of work done). Developers are required to update their tasks as they go along every day. There is no standup meeting, or anything of the sort, mainly because we have the freedom to work from home and everyone cherishes this freedom.

    Although I am the one who sets the deadline, the developers provide the estimate for each task and I decide – based on the estimate – the tasks that go into a particular sprint. If they can't finish the tasks at the end of the sprint, I push them to the next one. So theoretically one could do only 1 or 2 tasks during the whole sprint and push the remaining 99 tasks to the next sprint, and still be fine as long as he justifies this – in the form of daily work progress updates.

    Read the article

  • Session Report - Modern Software Development Anti-Patterns

    - by Janice J. Heiss
    In this standing-room-only session, building upon his 2011 JavaOne Rock Star “Diabolical Developer” session, Martijn Verburg, this time along with Ben Evans, identified and explored common “anti-patterns” – ways of doing things that keep developers from doing their best work. They emphasized the importance of social interaction and team communication, along with identifying certain psychological pitfalls that lead developers astray. Their emphasis was less on technical coding errors and more on how to function well and keep one’s focus on what really matters. They are the authors of the highly regarded The Well-Grounded Java Developer and are both movers and shakers in the London JUG community and on the Java Community Process. The large room was packed as they gave a fast-moving, witty presentation with lots of laughs and personal anecdotes. Below are a few of the anti-patterns they discussed.

    Anti-Pattern One: Conference-Driven Delivery

    The theme here is the belief that “real pros hack code and write their slides minutes before their talks.” Their response to this anti-pattern is an expression popular in the military – PPPPPP, which stands for “Proper preparation prevents piss-poor performance.” “Communication is very important – probably more important than the code you write,” claimed Verburg. “The more you speak in front of large groups of people the easier it gets, but it’s always important to do dry runs, to present to smaller groups. And important to be members of user groups where you can give presentations. It’s a great place to practice speaking skills, to gain new skills, to get new contacts, to network.” They encouraged attendees to record themselves and listen to themselves giving a presentation. They advised them to start with a spouse or friends if need be. Learning to communicate to a group, they argued, is essential to being a successful developer. The emphasis here is that software development is a team activity, and good, clear, accessible communication is essential to the functioning of software teams.

    Anti-Pattern Two: Mortgage-Driven Development

    The main theme here was that, in a period of worldwide recession and economic stagnation, people are concerned about keeping their jobs. So there is a tendency for developers to treat knowledge as power and not share what they know about their systems with their colleagues, so that when it comes time to fix a problem in production, they will be the only ones who know how to fix it – and will have made themselves an indispensable cog in a machine so they cannot be fired. So developers avoid documentation at all costs, or if documentation is required, put it on a USB stick and lock it in a lock box. As in the first anti-pattern, the idea here is that communicating well with your colleagues is essential, and documentation is a key part of this. Social interactions are essential. Both Verburg and Evans insisted that increasingly, year by year, successful software development is more about communication than the technical aspects of the craft. Developers who understand this are the ones who will have the most success.

    Anti-Pattern Three: Distracted by Shiny – Always Use the Latest Technology to Stay Ahead

    The temptation here is to pick out some obscure framework, try a bit of Scala, HTML5, and Clojure, always use the latest technology, and upgrade to the latest point release of everything. Don’t worry if something works poorly, because you are ahead of the curve. Verburg and Evans insisted that there need to be sound reasons for everything a developer does. Developers should not bring in something simply because for some reason they just feel like it or because it’s new. They recommended a site run by a developer named Matt Raible with excellent comparison spreadsheets regarding web frameworks and other apps. They praised it as a useful tool to help developers in their decision-making processes. They pointed out that good developers sometimes make bad choices out of boredom, to add shiny things to their CV, out of frustration with existing processes, or just from a lack of understanding. They pointed out that some code may stay in a business system for 15 or 20 years, but not all code is created equal and some may change after 3 or 6 months. Developers need to know where the code they are contributing fits in. What is its likely lifespan?

    Anti-Pattern Four: Design-Driven Design

    The anti-pattern: if you want to impress your colleagues and bosses, use design patterns left, right, and center – MVC, Session Facades, SOA, etc. Or the UML modeling suite from IBM, back in the day… Generate super fast code. And the more jargon you can talk when in the vicinity of the manager, the better. Verburg shared a true story about a time when he was interviewing a guy for a job and asked him what his previous work was. The interviewee said that he essentially took patterns from an approved book of enterprise architecture patterns and applied them. Verburg was dumbstruck that someone could have a job in which they took patterns from a book and applied them. He pointed out that the idea that design is a separate activity is simply wrong. He repeated a saying that he uses: “You should pay your junior developers for the lines of code they write and the things they add; you should pay your senior developers for what they take away.” He explained that by encouraging people to take things away, the code base gets simpler and reflects the actual business use cases developers are trying to solve, as opposed to the framework that is being imposed. He told another true story about a project to decommission a very large system. 98% of the code was decommissioned and people got a nice bonus. But the 2% remained on the mainframe, so the 98% reduction in code resulted in zero reduction in costs, because the entire mainframe was still needed to run the 2% that was left. There is an incentive to get rid of source code and subsystems when they are no longer needed.

    The session continued with several more anti-patterns that were equally insightful.

    Read the article


  • Finding it Hard to Deliver Right Customer Experience: Think BPM!

    - by Ajay Khanna
    Our relationship with our customers is not just a single interaction, and we should not treat it like one. A customer’s relationship with a vendor is like a journey which starts way before the customer makes a purchase and lasts long after that. The journey may start with the customer researching a product, which may lead to the eventual purchase, and may continue with support or service needs for the product.

    As you may notice, customers tend to use multiple channels to interact with a company throughout their journey. They also expect a consistent experience, no matter what interaction channel they choose. Customers do not like to repeat information they have already provided, and expect companies to remember their preferences and offer them relevant products and services. If the company fails to meet this expectation, customers will not only abandon the purchase and go to a competitor but may also influence others’ purchase decisions. Gone are the days when word of mouth was the only medium and a customer could influence “six” others. This is the age of social media, and a customer’s good or bad experiences – especially the bad – get highly amplified and may influence hundreds of others.

    Challenges that face B2C companies today include:

    Delivering a consistent experience: This is challenging due to fragmented data, disjointed systems, and siloed multichannel interactions. Customers tend to get different service quality if they use web vs. phone vs. store. They get different responses from different service agents, or inconsistent answers if they call the sales vs. the service group in the company. Such inconsistent experiences result in lower customer satisfaction or NPS (net promoter score) numbers.

    Increasing revenue: To stay competitive, companies frequently introduce new products and services. Delay in launching such offerings has a significant impact on revenue realization. In addition to new product revenue, there are multiple opportunities to up-sell and cross-sell that impact the bottom line. If companies are not able to identify such opportunities, bring a product to market quickly, or offer the right product to the right customer at the right time, significant loss of revenue may occur.

    Ensuring compliance: Companies must comply with ever-changing regulations – these could be about Know Your Customer (KYC), export/import regulations, or taxation policies. In addition to government agencies, companies also need to comply with the SLAs they have committed to their customers. A lapse in meeting any of these requirements may lead to serious fines, penalties, and loss of business. Companies have to make sure that they are in compliance with all such regulations and SLA commitments at any given time.

    With the advent of social networks and mobile technology, companies need to focus not only on process efficiency but also on customer engagement. Improving engagement means delivering the experience the customer expects and interacting with the customer at the right time using the right channel. Customers expect to be able to contact you via any channel of their choice (web, email, chat, mobile, social media) and purchase via any viable channel (web, phone, store, mobile). Customers expect companies to understand their particular needs and remember their preferences on repeated visits.

    To deliver such an integrated, consistent, and contextual experience, the power of BPM is a must. Your company may be organized in departments like Marketing, Sales, and Service. You may hold prospect data in SFA, order information in ERP, and customer issues in CRM. However, the experience delivered to the customer must not be constrained by your system legacy. BPM helps in designing the right experience for the right customer and integrates all the underlying channels, systems, and applications to make sure the right information is delivered to the right knowledge worker, or to the customer, every single time.

    Orchestrating information across all systems (MDM, CRM, ERP), departments (commerce, merchandising, marketing, service) and channels (email, phone, web, social) is the key, and that’s what BPM delivers. In addition to orchestrating systems and channels for consistency, BPM also provides capabilities for analysis and decision management. By using data from historical transactions, social media, and other systems, users can determine customer preferences, customer value, and churn propensity. This information, in context, is then used while making a decision at a process step. Working with a real-time decision management system can also suggest the right up-sell or cross-sell offers, discounts, or next-best-action steps for a particular customer.

    Timely action on customer issues or requests is also a key tenet of a good customer experience. BPM’s complex event processing capabilities help companies take proactive action before issues escalate. A BPM system can be designed to listen for certain event patterns, deduce from them customer situations (credit card stolen, baggage lost, change of address), and triage before the situation goes out of control. If such a situation arises, you can send alerts to the right people or immediately invoke corrective actions.

    Last but not least, one of BPM’s key values is to drive continuous improvement. Learning from customers’ past experiences, interactions, and social conversations provides valuable insight. Such insight can be used to improve products, customer-facing processes, and the customer experience. You can take these insights as input to design better, more efficient, and more customer-friendly sales, contact center, or self-service processes.

    If customer experience is important for your business, make sure you have incorporated BPM as part of your strategy to design, orchestrate, and improve your customer-facing processes.

    Read the article

  • LINQ and ArcObjects

    - by Marko Apfel
    Motivation

    LINQ (language integrated query) has been a component of the Microsoft .NET Framework since version 3.5. It allows SQL-like queries against various data sources such as SQL, XML, etc. Like SQL, LINQ provides a declarative notation for problem solving – i.e., you don’t need to describe in detail how a task is to be solved; you describe what is to be solved at all. This frees the developer from error-prone iterator constructs. Ideally, of course, one would access features this way. Then this construct is conceivable:

    var largeFeatures =
        from feature in features
        where (feature.GetValue("SHAPE_Area").ToDouble() > 3000)
        select feature;

    or its equivalent as a lambda expression:

    var largeFeatures = features.Where(feature => (feature.GetValue("SHAPE_Area").ToDouble() > 3000));

    This requires an appropriate provider, which manages the corresponding iterator logic. This is easier than you might think at first sight – you only have to deliver the desired entities as IEnumerable<IFeature>. LINQ automatically establishes a state machine in the background, whose execution is delayed (deferred execution) – only when you really request entities (foreach, Count(), ToList(), ...) does the processing take place, although the construct was created at a completely different place. Especially with multiple iterations through entities, during your first debugging sessions you may rub your eyes when the execution pointer magically jumps back into the iterator logic.

    Realization

    A very concise logic for constructing IEnumerable<IFeature> can be achieved by running through an IFeatureCursor. You return each feature via yield. For easier usage I have put the logic in an extension method GetFeatures() for IFeatureClass:

    public static IEnumerable<IFeature> GetFeatures(this IFeatureClass featureClass, IQueryFilter queryFilter, RecyclingPolicy policy)
    {
        IFeatureCursor featureCursor = featureClass.Search(queryFilter, RecyclingPolicy.Recycle == policy);
        IFeature feature;
        while (null != (feature = featureCursor.NextFeature()))
        {
            yield return feature;
        }
        // this is skipped in unit tests with a cursor mock
        if (Marshal.IsComObject(featureCursor))
        {
            Marshal.ReleaseComObject(featureCursor);
        }
    }

    So you can now easily generate the IEnumerable<IFeature>:

    IEnumerable<IFeature> features = _featureClass.GetFeatures(null, RecyclingPolicy.DoNotRecycle);

    You have to be careful with the recycling cursor. After a deferred execution in the same context, it is not a good idea to re-iterate over the features. In this case only the content of the last (recycled) feature is provided, and all the features are the same in the second pass. Therefore, this expression would be critical:

    largeFeatures.ToList().ForEach(feature => Debug.WriteLine(feature.OID));

    because ToList() iterates once through the list, and so the cursor was already moved through the features. The extension method ForEach() then always delivers the same feature. In such situations, you must not use a recycling cursor. Repeated executions of ForEach() are not a problem, because each time the state machine is re-instantiated and thus the cursor runs again – that’s the magic already mentioned above.

    Perspective

    Now you can also go one step further and realize your own implementation of the interface IEnumerable<IFeature>. This requires only that the method and property to access the enumerator be programmed. In the enumerator itself, in the Reset() method, you organize the re-execution of the search. This could be achieved with an appropriate delegate in the constructor:

    new FeatureEnumerator<IFeatureClass>(_featureClass, featureClass => featureClass.Search(_filter, isRecyclingCursor));

    which is called in Reset():

    public void Reset()
    {
        _featureCursor = _resetCursor(_t);
    }

    In this manner, enumerators for completely different scenarios can be implemented, which are used on the client side exactly as described above. Thus cursors, selection sets, etc. merge into a single matter, and the reusability of code increases immensely. On top of that, in automated unit tests an IEnumerable can be mocked very easily – a major step towards better software quality.

    Conclusion

    Nevertheless, caution should be exercised with these constructs in performance-relevant queries. Because of the state machine managed in the background, a lot of overhead is created. The processing costs additional time – about 20 to 100 percent. In addition, working without a recycling cursor can quickly become a performance gap. However, declarative LINQ code is much more elegant, flawless, and easy to maintain than manually iterating, comparing, and building a list of results. In my experience, the code size is reduced by an average of 75 to 90 percent! So I like to wait a few milliseconds longer. As so often, it has to be balanced between maintainability and performance – and for me, maintainability is gaining in priority. In times of multi-core processors, the processing time of most business processes is anyway not dominated by code execution but by waiting for user input.

    Demo source code

    The source code for this prototype, with several unit tests, can be downloaded here: https://github.com/esride-apf/Linq2ArcObjects.
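    To make the Perspective section more concrete, here is a minimal sketch of what such a FeatureEnumerator could look like. It is an illustration under assumptions, not the code from the linked repository: the ArcObjects interfaces (IFeature, IFeatureCursor) are real, but the class shape and field names are mine, and COM cleanup is reduced to the essentials.

    using System;
    using System.Collections;
    using System.Collections.Generic;
    using System.Runtime.InteropServices;
    using ESRI.ArcGIS.Geodatabase;

    // Sketch of a re-runnable enumerator: Reset() re-executes the search
    // through the delegate passed in the constructor, as described above.
    public class FeatureEnumerator<T> : IEnumerator<IFeature>
    {
        private readonly T _t;                                  // cursor source (e.g. a feature class)
        private readonly Func<T, IFeatureCursor> _resetCursor;  // how to (re)open the cursor
        private IFeatureCursor _featureCursor;
        private IFeature _current;

        public FeatureEnumerator(T t, Func<T, IFeatureCursor> resetCursor)
        {
            _t = t;
            _resetCursor = resetCursor;
            Reset();
        }

        public IFeature Current
        {
            get { return _current; }
        }

        object IEnumerator.Current
        {
            get { return _current; }
        }

        public bool MoveNext()
        {
            _current = _featureCursor.NextFeature();
            return _current != null;
        }

        public void Reset()
        {
            // re-execute the search so a second foreach starts from scratch
            _featureCursor = _resetCursor(_t);
        }

        public void Dispose()
        {
            if (_featureCursor != null && Marshal.IsComObject(_featureCursor))
            {
                Marshal.ReleaseComObject(_featureCursor);
            }
        }
    }

    A matching IEnumerable<IFeature> wrapper then only needs to hand out new FeatureEnumerator instances from its GetEnumerator() methods.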

    Read the article

  • asp.net mvc2 - controller for master page and code organization

    - by ile
    I've just finished my first ASP.NET MVC (2) CMS. The next step is to build a website that will show data from the CMS's database. This is the website design:

    #1 (red box) – displays article categories. ViewModel:

    public class CategoriesDisplay
    {
        public CategoriesDisplay() { }
        public int CategoryID { set; get; }
        public string CategoryTitle { set; get; }
    }

    #2 (brown box) – displays the last x articles; skips those from green box #3. ViewModel:

    public class ArticleDisplay
    {
        public ArticleDisplay() { }
        public int CategoryID { set; get; }
        public string CategoryTitle { set; get; }
        public int ArticleID { set; get; }
        public string ArticleTitle { set; get; }
        public string URLArticleTitle { set; get; }
        public DateTime ArticleDate;
        public string ArticleContent { set; get; }
    }

    #3 (green box) – displays the last x articles. Uses the same ViewModel as brown box #2.

    #4 (blue box) – displays a list of upcoming events. Uses dataContext.Model.Event as its ViewModel.

    Boxes #1, #2 and #4 will repeat all over the site, and they are part of the master page. So, my question is: what is the best way to transfer this data from Model to Controller and finally to the View pages?

    1. Should I make a controller for the master page and a ViewModel class that wraps all these classes together, OR
    2. Should I create partial views for each of these boxes and make each of them inherit the appropriate class (if it is even possible for it to work this way?), OR
    3. Should I put this repeated code in all controllers and do all additional data transfer via ViewData, which would probably be the worst way, OR
    4. Is there maybe a better and simpler way that I don't know/see?

    Thanks in advance, Ile

    EDIT: If your answer is #1, then please explain how to make a controller for the master page!

    EDIT 2: This tutorial describes how to pass data to the master page using an abstract class: http://www.asp.net/LEARN/mvc/tutorial-13-cs.aspx. In "Listing 5 – Controllers\MoviesController.cs", data is retrieved directly from the database using LINQ, not from a repository. So I wonder if this is just for this tutorial, or if there is some trick here and a repository can't/shouldn't be used?
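    For option 1, the abstract-base-controller approach from the linked tutorial could look roughly like this – a hedged sketch, where the repository interface and the ViewData key names are illustrative assumptions, not code from the tutorial:

    using System.Web.Mvc;

    // Base controller: every derived controller inherits the master page data.
    public abstract class ApplicationController : Controller
    {
        // ISiteRepository / SiteRepository are hypothetical stand-ins for your data access
        private readonly ISiteRepository _repository = new SiteRepository();

        protected ApplicationController()
        {
            // populated once per request; the master page reads these keys
            ViewData["Categories"] = _repository.GetCategories();
            ViewData["LatestArticles"] = _repository.GetLatestArticles(5);
            ViewData["UpcomingEvents"] = _repository.GetUpcomingEvents();
        }
    }

    // Page controllers then only supply their page-specific model:
    public class ArticlesController : ApplicationController
    {
        public ActionResult Details(int id)
        {
            return View(/* page-specific view model */);
        }
    }

    The master page (or partial views inside it) would then render ViewData["Categories"] and friends, while each action's strongly-typed model stays page-specific.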

    Read the article

  • Why does my layout repeat on the device but look fine on the emulator when an image is added in a RelativeLayout?

    - by Honey H
    Let me try to explain clearly what I want here. I have a RelativeLayout that contains a header image and the content below it. When I have a header image and a ListView below it, the page fits the device screen properly and the layout does not repeat. But when I place an image below the header image, the layout repeats on the device, meaning I can see two header images. On some pages, I can see half of a header image that is not supposed to be there (a repeated header image). However, on the emulator, all the pages look fine and fit the screen nicely. I tried changing to LinearLayout, RelativeLayout inside LinearLayout, and plain RelativeLayout; they all gave me the same outcome. I hope someone can tell me why this happens. Thanks. Attached is my layout.

    <?xml version="1.0" encoding="utf-8"?>
    <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent"
        android:orientation="vertical"
        android:background="@drawable/bg" >

        <ImageView
            android:id="@+id/header"
            android:contentDescription="@string/image"
            android:src="@drawable/header"
            android:layout_width="fill_parent"
            android:layout_height="wrap_content"
            android:adjustViewBounds="true"
            android:scaleType="centerCrop"/>

        <RelativeLayout
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_centerInParent="true"
            android:orientation="vertical" >

            <ImageView
                android:id="@+id/journal"
                android:layout_width="150dp"
                android:layout_height="150dp"
                android:layout_alignTop="@+id/kick"
                android:adjustViewBounds="true"
                android:contentDescription="@string/image"
                android:padding="10dp"
                android:src="@drawable/favourite_journal" />

            <ImageView
                android:id="@+id/kick"
                android:layout_width="150dp"
                android:layout_height="150dp"
                android:layout_toRightOf="@+id/journal"
                android:layout_below="@+id/header"
                android:adjustViewBounds="true"
                android:contentDescription="@string/image"
                android:padding="10dp"
                android:src="@drawable/favourite_kick" />

            <ImageView
                android:id="@+id/labour"
                android:layout_width="150dp"
                android:layout_height="150dp"
                android:layout_below="@+id/journal"
                android:layout_alignLeft="@+id/journal"
                android:adjustViewBounds="true"
                android:contentDescription="@string/image"
                android:padding="10dp"
                android:src="@drawable/favourite_labour" />

            <ImageView
                android:id="@+id/share"
                android:layout_width="150dp"
                android:layout_height="150dp"
                android:layout_below="@+id/kick"
                android:layout_alignLeft="@+id/kick"
                android:adjustViewBounds="true"
                android:contentDescription="@string/image"
                android:padding="10dp"
                android:src="@drawable/favourite_share" />

        </RelativeLayout>
    </RelativeLayout>

    Read the article

  • How do I work around an HTML Table rendering bug in IE 7?

    - by osmaniac
    I have a table. Some cells span multiple columns. Some cells span multiple rows and columns. But one row (which spans all columns but the rightmost one) creates an artifact: part of the text in the cell is erroneously repeated, left-justified, on a new row just below the table. I'm baffled. I tried rendering with and without "table-layout: fixed;". Same result.

    When I originally composed the design using just HTML and CSS, it looked great. But then I worked it into a page and had to add more columns on the right of my master table to hold buttons. These buttons are in three groups, each having its own div to control floating and rewrapping when the window gets narrower. One div has another table inside it that groups a single row of buttons. Thus I have a table inside a div inside a td inside an outer table. I would prefer a simpler design, but how? This is what I want to have:

    ...................................................................................
    .                             .                                                   .
    . Four rows of data           . Three groups of buttons that can reflow          .
    . With several columns        . if window gets narrower                          .
    . meticulously laid out,      .                                                  .
    . That should not resize      .                                                  .
    . when window gets narrower   .                                                  .
    ...................................................................................
    . One more row of data spanning the whole screen which stays below the buttons   .
    ...................................................................................

    What I was doing was putting the three divs with the buttons in the upper right in a single cell that spanned four rows. What other opportunities does CSS offer? The buttons are not allowed to overlap the data on the left or go past the data line below. The original design had the divs with the buttons NOT in a table with the data, but when the window gets narrow, some of the buttons flow such that they go underneath the data, which looks bad.

    I would post actual HTML, except it is generated by ASP, huge, with lots of CSS styling, and the feature that lets me view the final HTML is not working at the moment. (Built-in security in the application.)
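    Since the actual HTML can't be posted, here is a hedged skeletal reconstruction of the structure described above – purely illustrative, with class names invented – to make the nesting concrete:

    <!-- outer data table; top-right cell spans the four data rows -->
    <table class="master">
      <tr>
        <td>data</td><td>data</td><td>data</td>
        <td rowspan="4">
          <div class="buttons">...group 1...</div>
          <div class="buttons">...group 2...</div>
          <div class="buttons">
            <!-- third group wraps its own one-row table -->
            <table><tr><td>button</td><td>button</td></tr></table>
          </div>
        </td>
      </tr>
      <tr><td>data</td><td>data</td><td>data</td></tr>
      <tr><td>data</td><td>data</td><td>data</td></tr>
      <tr><td>data</td><td>data</td><td>data</td></tr>
      <tr><td colspan="4">one more row of data spanning the whole width</td></tr>
    </table>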

    Read the article

  • Autoconf/Automake "configure.ac:2: option `-Wall' not recognized"

    - by Atmocreations
    Hello. I'm trying to get started with autoconf/automake for a new project. To do so, I'm reading "Using GNU Autotools" and trying to build the hello-world tutorial. The required files from page 96 (real page = 105, because it's a LaTeX presentation) – configure.ac, Makefile.am and src/Makefile.am – look exactly as stated in the document. After that I tried:

    $ autoreconf --install
    configure.ac:2: option `-Wall' not recognized
    autoreconf: automake failed with exit status: 1

    Well, it seems that automake doesn't like the second line:

    AM_INIT_AUTOMAKE([-Wall -Werror foreign])

    Therefore I executed:

    $ autoreconf -v --install
    autoreconf: Entering directory `.'
    autoreconf: configure.ac: not using Gettext
    autoreconf: running: aclocal
    autoreconf: configure.ac: tracing
    autoreconf: configure.ac: not using Libtool
    autoreconf: running: /usr/bin/autoconf
    autoreconf: running: /usr/bin/autoheader
    autoreconf: running: automake --add-missing --copy --no-force
    configure.ac:2: option `-Wall' not recognized
    autoreconf: automake failed with exit status: 1

    You can easily see that autoreconf runs automake --add-missing --copy --no-force, which I repeated with the verbose option. And it only returns this:

    $ automake -v --add-missing --copy --no-force
    automake: thread 0: reading autoconf --trace=_LT_AC_TAGCONFIG:\$f:\$l::\$d::\$n::\${::}% --trace=AM_ENABLE_MULTILIB:\$f:\$l::\$d::\$n::\${::}% --trace=AM_SILENT_RULES:\$f:\$l::\$d::\$n::\${::}% --trace=AC_INIT:\$f:\$l::\$d::\$n::\${::}% --trace=_AM_COND_IF:\$f:\$l::\$d::\$n::\${::}% --trace=AC_CONFIG_FILES:\$f:\$l::\$d::\$n::\${::}% --trace=AC_CANONICAL_TARGET:\$f:\$l::\$d::\$n::\${::}% --trace=AC_CONFIG_LIBOBJ_DIR:\$f:\$l::\$d::\$n::\${::}% --trace=AC_FC_SRCEXT:\$f:\$l::\$d::\$n::\${::}% --trace=AC_CANONICAL_HOST:\$f:\$l::\$d::\$n::\${::}% --trace=AM_GNU_GETTEXT:\$f:\$l::\$d::\$n::\${::}% --trace=AC_LIBSOURCE:\$f:\$l::\$d::\$n::\${::}% --trace=AM_INIT_AUTOMAKE:\$f:\$l::\$d::\$n::\${::}% --trace=AC_CANONICAL_BUILD:\$f:\$l::\$d::\$n::\${::}% --trace=AM_AUTOMAKE_VERSION:\$f:\$l::\$d::\$n::\${::}% --trace=_AM_SUBST_NOTMAKE:\$f:\$l::\$d::\$n::\${::}% --trace=AC_CONFIG_AUX_DIR:\$f:\$l::\$d::\$n::\${::}% --trace=sinclude:\$f:\$l::\$d::\$n::\${::}% --trace=AM_PROG_CC_C_O:\$f:\$l::\$d::\$n::\${::}% --trace=AC_CONFIG_LINKS:\$f:\$l::\$d::\$n::\${::}% --trace=AC_REQUIRE_AUX_FILE:\$f:\$l::\$d::\$n::\${::}% --trace=m4_sinclude:\$f:\$l::\$d::\$n::\${::}% --trace=LT_SUPPORTED_TAG:\$f:\$l::\$d::\$n::\${::}% --trace=AM_CONDITIONAL:\$f:\$l::\$d::\$n::\${::}% --trace=AC_CONFIG_HEADERS:\$f:\$l::\$d::\$n::\${::}% --trace=AM_MAINTAINER_MODE:\$f:\$l::\$d::\$n::\${::}% --trace=m4_include:\$f:\$l::\$d::\$n::\${::}% --trace=_AM_COND_ELSE:\$f:\$l::\$d::\$n::\${::}% --trace=AM_GNU_GETTEXT_INTL_SUBDIR:\$f:\$l::\$d::\$n::\${::}% --trace=_AM_COND_ENDIF:\$f:\$l::\$d::\$n::\${::}% --trace=AC_SUBST_TRACE:\$f:\$l::\$d::\$n::\${::}%
    configure.ac:2: option `-Wall' not recognized

    Does anybody have an idea why this doesn't work? My impression is that none of my files are wrong...

    I would like to use it for compiling C++ code for Linux and Windows (using mingw32-g++). Do you know a good base to start from, and what do I have to pay attention to? I'm on Ubuntu 9.10 64-bit. Any help is appreciated. Thanks in advance, regards.
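    For context, the tutorial's configure.ac presumably looks close to the canonical Autotools hello-world – this is a sketch from memory, not a copy of the book's listing, and the package name and version are placeholders:

    AC_INIT([hello], [1.0])
    AM_INIT_AUTOMAKE([-Wall -Werror foreign])
    AC_PROG_CC
    AC_CONFIG_HEADERS([config.h])
    AC_CONFIG_FILES([Makefile src/Makefile])
    AC_OUTPUT

    The error is reported against line 2 of exactly such a file, i.e. the AM_INIT_AUTOMAKE([-Wall -Werror foreign]) call shown above.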

    Read the article

  • ASP.NET MVC ajax chat

    - by nccsbim071
    I built an ajax chat in one of my MVC websites. Everything is working fine. I am using polling: at a certain interval I use $.post to get the messages from the db. But there is a problem: the messages retrieved using $.post keep repeating. Here are my JavaScript code and controller method.

    var t;
    function GetMessages() {
        var LastMsgRec = $("#hdnLastMsgRec").val();
        var RoomId = $("#hdnRoomId").val();
        // Get all the messages associated with this roomId
        $.post("/Chat/GetMessages", { roomId: RoomId, lastRecMsg: LastMsgRec },
            function(Data) {
                if (Data.Messages.length != 0) {
                    $("#messagesCont").append(Data.Messages);
                    if (Data.newUser.length != 0)
                        $("#usersUl").append(Data.newUser);
                    $("#messagesCont").attr({ scrollTop: $("#messagesCont").attr("scrollHeight") - $('#messagesCont').height() });
                    $("#userListCont").attr({ scrollTop: $("#userListCont").attr("scrollHeight") - $('#userListCont').height() });
                }
                $("#hdnLastMsgRec").val(Data.LastMsgRec);
            }, "json");
        t = setTimeout("GetMessages()", 3000);
    }

    And here is my controller method to get the data:

    public JsonResult GetMessages(int roomId, DateTime lastRecMsg)
    {
        StringBuilder messagesSb = new StringBuilder();
        StringBuilder newUserSb = new StringBuilder();

        List<Message> msgs = (dc.Messages).Where(m => m.RoomID == roomId && m.TimeStamp > lastRecMsg).ToList();

        if (msgs.Count == 0)
        {
            return Json(new { Messages = "", LastMsgRec = System.DateTime.Now.ToString() });
        }

        foreach (Message item in msgs)
        {
            messagesSb.Append(string.Format(messageTemplate, item.User.Username, item.Text));
            if (item.Text == "Just logged in!")
                newUserSb.Append(string.Format(newUserTemplate, item.User.Username));
        }

        return Json(new {
            Messages = messagesSb.ToString(),
            LastMsgRec = System.DateTime.Now.ToString(),
            newUser = newUserSb.ToString().Length == 0 ? "" : newUserSb.ToString()
        });
    }

    Everything is working absolutely perfectly, but some messages are getting repeated. The first time the page loads, I retrieve the data and call the GetMessages() function. I load the value of the field hdnLastMsgRec the first time the page loads, and afterwards the value for this field is set by the JavaScript. I think the messages keep repeating because of asynchronous calls. I don't know; maybe you guys can help me solve this, or suggest a better way to implement this.
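    One hedged reading of the race suspected above: setTimeout re-arms the poll outside the $.post callback, so a new request can fire before the previous response has updated #hdnLastMsgRec, and both requests then use the same lastRecMsg value and return the same rows. A minimal sketch of the reordering, with everything else as in the original code:

    var t;
    function GetMessages() {
        var LastMsgRec = $("#hdnLastMsgRec").val();
        var RoomId = $("#hdnRoomId").val();
        $.post("/Chat/GetMessages", { roomId: RoomId, lastRecMsg: LastMsgRec },
            function (Data) {
                // ...append Data.Messages / Data.newUser as before...
                $("#hdnLastMsgRec").val(Data.LastMsgRec);
                t = setTimeout(GetMessages, 3000); // re-arm only after the marker is updated
            }, "json");
    }

    With the timer moved inside the callback, at most one request is ever in flight, so overlapping polls can no longer share a stale "last received" timestamp.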

    Read the article

  • Best practices concerning view model and model updates with a subset of the fields

    - by Martin
    By picking MVC for developing our new site, I find myself in the midst of "best practices" being developed around me in apparent real time. Two weeks ago, NerdDinner was my guide, but with the development of MVC 2, even it seems outdated. It's a thrilling experience and I feel privileged to be in close contact with intelligent programmers daily. Right now I've stumbled upon an issue I can't seem to get a straight answer on - from all the blogs anyway - and I'd like to get some insight from the community. It's about editing (read: the Edit action). The bulk of the material out there, tutorials and blogs, deals with creating and viewing the model. So while this post may not spell out a single question, I hope to get some discussion going that contributes to my decision about the path of development I'm to take.

    My model represents a user with several fields like name, address and email. The names, in fact, take one field each for first name, last name and middle name. The Details view displays all these fields, but you can change only one set of fields at a time - for instance, your names. The user expands a form while the other fields are still visible above and below, so the form that is posted back contains only a subset of the fields representing the model. While this is appealing to us and our layout concerns, for various reasons, it seems to be shunned by serious MVC developers. I've been reading about some patterns and best practices, and it seems this is not in keeping with the paradigm of view model == view. Or have I got it wrong?

    Anyway, NerdDinner dictates using FormCollection and UpdateModel, and all the null fields are happily ignored. Since then, the MVC community has abandoned this approach to such a degree that a bug in MVC 2 went undiscovered: UpdateModel does not work without a complete model in your form collection. The view model pattern receiving the most praise seems to be the dedicated view model, which contains a custom view model entity and is the only one my design issue could be made compatible with. It entails a tedious amount of mapping - albeit lightened by the use of AutoMapper and the ideas of Jimmy Bogard, who also proposes a 1:1 relationship between view and view model - that may or may not be worthwhile. In keeping with these design paradigms, I am to create a view and an associated view model for each of my expanding sets of fields. The view models would each be nearly identical, differing only in which fields are read-only, and the views would contain much repeated markup. This seems absurd to me; in the future I may want to display two, more, or all sets of fields open simultaneously.

    I will most attentively read the discussion I hope to spark. Many thanks in advance.
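
    For concreteness, here is a minimal sketch of the dedicated-view-model approach under discussion, applied to just the name fields. All names here are hypothetical, the repository is assumed, and the AutoMapper map is assumed to be configured at startup - this is an illustration of the pattern, not a recommendation:

        // hypothetical dedicated edit model covering only the posted subset
        public class EditNamesViewModel
        {
            public string FirstName { get; set; }
            public string MiddleName { get; set; }
            public string LastName { get; set; }
        }

        [HttpPost]
        public ActionResult EditNames(int id, EditNamesViewModel vm)
        {
            var user = _repository.GetUser(id);  // assumed repository
            Mapper.Map(vm, user);                // AutoMapper copies only the name fields onto the entity
            _repository.Save(user);
            return RedirectToAction("Details", new { id });
        }

    The appeal is that the binder only ever sees fields the form actually posted, so nothing is nulled out by accident; the cost is one such class per expanding form, which is exactly the repetition the question objects to.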

    Read the article

  • IE6 "frame" layout with 100% height and scrollbars

    - by Joe
    I want to achieve a simple "frame" layout with a fixed header, a fixed left navigation area, and a main content area that fills 100% of the remainder of the viewport, with scrollbars if necessary. My best attempt is below - but when I add enough content to the main div to force scrolling, I see that the scrollbar extends below the bottom of the viewport. What am I doing wrong? Or what is IE6 doing wrong, and how can I fix it? NB: please don't recommend using a more recent browser - I'd love to, but can't.

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
          <title></title>
          <style type="text/css">
            html, body { height:100%; margin:0; padding:0; overflow:hidden; }
            div { border:0; margin:0; padding:0; }
            div#top { background-color:#dddddd; height:100px; }
            div#left { background-color:#dddddd; float:left; width:120px; height:100%; overflow:hidden; }
            div#main { height:100%; overflow:auto; }
          </style>
        </head>
        <body>
          <div id="top">Title</div>
          <div id="left">LeftNav</div>
          <div id="main">
            Content
            <p> Lorem ipsum ... </p>
            ... repeated several times ...
          </div>
        </body>
        </html>
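
    For what it's worth, a sketch of one classic workaround - untested here and IE6-specific by design. The height:100% on #main is computed against the whole viewport and ignores the 100px header, which is why the scrollbar runs off the bottom; and since IE6 does not honour top and bottom offsets together, the usual escape hatch is a proprietary CSS expression that subtracts the header height:

        div#main {
            /* IE6-only hack: recompute height from the viewport, minus the 100px header */
            height: expression((document.documentElement.clientHeight - 100) + "px");
            overflow: auto;
        }

    CSS expressions are re-evaluated constantly and cost performance, so this is a last resort that only IE6 will even parse.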

    Read the article

  • Modeling distribution of performance measurements

    - by peterchen
    How would you mathematically model the distribution of repeated real-life performance measurements? "Real life" means you are not just looping over the code in question; it is a short snippet within a large application running in a typical user scenario. My experience shows that you usually have a peak around the average execution time that can be modeled adequately with a Gaussian distribution. In addition, there's a "long tail" containing outliers, often taking a multiple of the average time. (The behavior is understandable considering the factors contributing to the first-execution penalty.)

    My goal is to model aggregate values that reasonably reflect this, and that can be calculated from aggregate values (as for the Gaussian, where you calculate mu and sigma from N, the sum of values and the sum of squares). In other words, the number of repetitions is unlimited, but memory and calculation requirements should be minimized. A normal Gaussian distribution can't model the long tail appropriately, and its average will be biased strongly by even a very small percentage of outliers. I am looking for ideas, especially if this has been attempted or analysed before. I've checked various distribution models, and I think I could work something out, but my statistics are rusty and I might end up with an overblown solution. Oh, a complete shrink-wrapped solution would be fine, too ;)

    Other aspects / ideas: Sometimes you get "two humps" distributions, which would be acceptable in my scenario with a single mu/sigma covering both, but ideally would be identified separately. Extrapolating this, another approach would be a "floating probability density calculation" that uses only a limited buffer and adjusts automatically to the range (due to the long tail, bins may not be spaced evenly) - I haven't found anything, but with some assumptions about the distribution it should be possible in principle.

    Why (since it was asked): for a complex process we need to make guarantees such as "only 0.1% of runs exceed a limit of 3 seconds, and the average processing time is 2.8 seconds". The performance of an isolated piece of code can be very different from a normal run-time environment involving varying levels of disk and network access, background services, scheduled events that occur within a day, etc. This can be solved trivially by accumulating all data; however, to accumulate this data in production, the data produced needs to be limited. For analysis of isolated pieces of code, a Gaussian distribution plus a first-run penalty is OK. That doesn't work any more for the distributions found above.

    [edit] I've already got very good answers (and finally - maybe - some time to work on this). I'm starting a bounty to look for more input / ideas.
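
    As one concrete possibility - a suggestion, not something from the question itself: a log-normal distribution has exactly the "peak plus long right tail" shape described, and it can still be fitted from streaming aggregates by accumulating sums of log t and (log t)^2 instead of t and t^2, so the memory requirement stays O(1). A minimal sketch:

        import math

        class LogNormalAggregate:
            """Streaming log-normal fit from aggregate values; O(1) memory."""

            def __init__(self):
                self.n = 0
                self.sum_log = 0.0
                self.sum_log_sq = 0.0

            def add(self, t):
                # t must be a positive duration (e.g. seconds)
                x = math.log(t)
                self.n += 1
                self.sum_log += x
                self.sum_log_sq += x * x

            def params(self):
                # mu and sigma of the underlying normal in log-space
                mu = self.sum_log / self.n
                var = max(self.sum_log_sq / self.n - mu * mu, 0.0)
                return mu, math.sqrt(var)

    From mu and sigma in log-space, quantiles like the 99.9th percentile follow analytically, which is what the "only 0.1% of runs exceed 3 seconds" guarantee needs. It would not capture a genuine "two humps" case, though - that would need a small mixture model on top.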

    Read the article

  • Problem with displaying and downloading email attachments using the zend framework

    - by Ali
    Hi guys, I'm trying to display the emails in my inbox and their respective attachments. At the same time I also wish to be able to download the attachments, but I'm kind of stuck here. Here is the code I use to display the attachments:

        $one_message = $mail->getMessage($i);
        $one_message->id = $i;
        $one_message->UID = $mail->getUniqueId($i);
        // get the contact of this message
        $email = array_pop(extract_emails_from($one_message->from));
        $one_message->contactFrom = $contact_manager->get_person_from_email($email);
        $one_message->parts = array();
        foreach (new RecursiveIteratorIterator($mail->getMessage($i)) as $ii => $part) {
            try {
                $tpart = $part;
                $one_message->parts[$ii] = $tpart; // index the part with what I assume is its position
                if (strtok($part->contentType, ';') == 'text/html') {
                    $b = $part->getContent();
                    $h2t->set_html($b);
                    $one_message->body = $h2t->get_text();
                }
                if (strtok($part->contentType, ';') == 'text/plain') {
                    $b = $part->getContent();
                    $one_message->body = $part->getContent();
                }
            } catch (Zend_Mail_Exception $e) {
                // ignore
            }
        }

    And I display a download link for each attachment like this, where the part variable is the index $ii taken from the loop above:

        ?action=download&element=email-attachment&box=inbox&uid=<?php echo $one_message->UID;?>&id=<?php echo $one_message->id;?>&part=<?php echo $ii;?>

    However, when I try to download using the exact same code, modified slightly:

        $id = $_GET['id'];
        $pid = $_GET['part'];
        $one_message = $mail->getMessage($id);
        foreach (new RecursiveIteratorIterator($mail->getMessage($id)) as $ii => $part) {
            if ($pid == $ii):
                $content = base64_decode($part->getContent());
                $cnt_typ = explode(";", $part->contentType);
                $name = explode("=", $cnt_typ[1]);
                $filename = $name[1]; // the file name of the attachment in the browser
                // strip quotes from the file name (as sent e.g. from Yahoo mail)
                $filename = str_replace('"', " ", $filename);
                $filename = trim($filename);
                header("Cache-Control: public");
                header("Content-Type: " . $cnt_typ[0]);
                header("Content-Disposition: attachment; filename=\"" . $filename . "\"");
                header("Content-Transfer-Encoding: binary");
                echo $content;
                exit;
            endif;
        }

    For some reason it doesn't work at all. I did a var_dump and noticed that in the loop with the RecursiveIteratorIterator the indexes $ii returned are 1 - 2 - 2! What's going on here - how can the indexes be repeated? How can I tell one part from the others?
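
    A sketch of one way around this - my assumption about the cause, not a confirmed one: the iterator's keys are part numbers within each nesting level of a multipart message, so they restart at each level and collide across levels (hence 1 - 2 - 2). Keeping a counter of your own gives every part a flat, unique index that is stable across both pages, as long as both loops walk the message the same way:

        $pos = 0;
        foreach (new RecursiveIteratorIterator($mail->getMessage($id)) as $part) {
            $pos++;                 // our own flat position, unique across nesting levels
            if ($pid == $pos) {
                // ... decode and stream this part exactly as before ...
            }
        }

    The display loop would use the same $pos counter when generating the &part= links.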

    Read the article

  • C# searching for new Tool for the tool box, how to template this code

    - by Nix
    Hello all, I have something I have been trying to do for a while and have yet to find a good strategy for; I am not sure C# can even support what I am trying to do. For example, imagine a template like this, repeated throughout the manager code as an overarching concept: each function returns a result consisting of a success flag and an error list.

        public Result<Boolean> RemoveLocation(LocationKey key)
        {
            List<Error> errorList = new List<Error>();
            Boolean result = false;
            try
            {
                result = locationDAO.RemoveLocation(key);
            }
            catch (UpdateException ue)
            {
                // An error happened - pass it back to the user!
                errorList = ue.ErrorList;
            }
            return new Result<Boolean>(result, errorList);
        }

    I am looking to turn it into a template like the one below, where "do something" is some call (preferably not static) that returns a Boolean. I know I could do this in a stack sense, but I am really looking for a way to do it via object reference.

        public Result<Boolean> RemoveLocation(LocationKey key)
        {
            var magic = locationDAO.RemoveLocation(key);
            return ProtectedDAOCall(magic);
        }

        public Result<Boolean> CreateLocation(LocationKey key)
        {
            var magic = locationDAO.CreateLocation(key);
            return ProtectedDAOCall(magic);
        }

        public Result<Boolean> ProtectedDAOCall(Func<..., bool> doSomething)
        {
            List<Error> errorList = new List<Error>();
            Boolean result = false;
            try
            {
                result = doSomething();
            }
            catch (UpdateException ue)
            {
                // An error happened - pass it back to the user!
                errorList = ue.ErrorList;
            }
            return new Result<Boolean>(result, errorList);
        }

    If there is any more information you may need, let me know. I am interested to see what someone else can come up with.

    Marc's solution applied to the code above:

        public Result<Boolean> CreateLocation(LocationKey key)
        {
            LocationDAO locationDAO = new LocationDAO();
            return WrapMethod(() => locationDAO.CreateLocation(key));
        }

        public Result<Boolean> RemoveLocation(LocationKey key)
        {
            LocationDAO locationDAO = new LocationDAO();
            return WrapMethod(() => locationDAO.RemoveLocation(key));
        }

        static Result<T> WrapMethod<T>(Func<Result<T>> func)
        {
            try
            {
                return func();
            }
            catch (UpdateException ue)
            {
                return new Result<T>(default(T), ue.Errors);
            }
        }
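
    One detail worth noting - an observation of mine, not part of the original thread: the applied WrapMethod takes a Func<Result<T>>, which assumes the DAO methods themselves already return Result<T>. If they keep returning plain booleans, as in the first template, a variant that wraps a Func<T> may be closer to the intent. A sketch:

        static Result<T> WrapMethod<T>(Func<T> func)
        {
            try
            {
                // the DAO returns a bare T; wrap it with an empty error list on success
                return new Result<T>(func(), new List<Error>());
            }
            catch (UpdateException ue)
            {
                return new Result<T>(default(T), ue.Errors);
            }
        }

        // usage: return WrapMethod(() => locationDAO.RemoveLocation(key));

    The lambda defers the call, so the try/catch in WrapMethod actually surrounds the DAO work - which is what the eager "var magic = ..." version could not achieve.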

    Read the article

  • Is this asking too much of a browser?

    - by Matt Ball
    I'm embedding a large array in <script> tags in my HTML, like this (nothing surprising):

        <script>
            var largeArray = [/* lots of stuff in here */];
        </script>

    In this particular example, the array has 210,000 elements. That's well below the theoretical maximum of 2^31 - by four orders of magnitude. Here's the fun part: if I save the JS source for the array to a file, that file is 44 megabytes (46,573,399 bytes, to be exact). If you want to see for yourself, you can download it from my Dropbox. (All the data in there is canned, so much of it is repeated. This will not be the case in production.)

    Now, I'm really not concerned about serving that much data. My server gzips its responses, so it really doesn't take all that long to get the data over the wire. However, there is a really nasty tendency for the page, once loaded, to crash the browser. I'm not testing at all in IE (this is an internal tool). My primary targets are Chrome 8 and Firefox 3.6. In Firefox, I can see a reasonably useful error in the console:

        Error: script stack space quota is exhausted

    In Chrome, I simply get the sad-tab crash page. Cut to the chase, already: is this really too much data for our modern, "high-performance" browsers to handle? Is there anything I can do* to gracefully handle this much data? Incidentally, I was able to get this to work (read: not crash the tab) on-and-off in Chrome. I really thought that Chrome, at least, was made of tougher stuff, but apparently I was wrong...

    Edit 1
    @Crayon: I wasn't looking to justify why I'd like to dump this much data into the browser at once. Short version: either I solve this one (admittedly not-that-easy) problem, or I have to solve a whole slew of other problems. I'm opting for the simpler approach for now.
    @various: right now, I'm not especially looking for ways to actually reduce the number of elements in the array. I know I could implement Ajax paging or what-have-you, but that introduces its own set of problems for me in other regards.
    @Phrogz: each element looks something like this:

        {dateTime:new Date(1296176400000), terminalId:'terminal999', 'General___BuildVersion':'10.05a_V110119_Beta', 'SSM___ExtId':26680, 'MD_CDMA_NETLOADER_NO_BCAST___Valid':'false', 'MD_CDMA_NETLOADER_NO_BCAST___PngAttempt':0}

    @Will: but I have a computer with a 4-core processor, 6 gigabytes of RAM, over half a terabyte of disk space... and I'm not even asking for the browser to do this quickly - I'm just asking for it to work at all!

    * other than the obvious: sending less data to the browser
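
    One mitigation sometimes suggested for giant literals - an assumption that it applies here, since the elements contain new Date(...) calls that would have to become plain timestamp numbers first: deliver the payload as JSON text and parse it with JSON.parse, instead of making the JS parser chew through a 44 MB script. A sketch (the endpoint URL is hypothetical):

        // fetch the data as plain text and parse it as JSON;
        // assumes /data/large-array.json serves the same records
        // with dateTime stored as a raw timestamp number
        $.ajax({
            url: '/data/large-array.json',
            dataType: 'text',
            success: function (text) {
                var largeArray = JSON.parse(text);
                // revive dates lazily, only when an element is actually used:
                // var d = new Date(largeArray[i].dateTime);
            }
        });

    JSON.parse builds the values directly rather than compiling 44 MB of script, which sidesteps the "script stack space quota" class of failure, though the resulting objects still occupy whatever memory they occupy.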

    Read the article

  • System architecture: simple approach for setting up background tasks behind a web application -- wil

    - by Tim Molendijk
    I have a Django web application and I have some tasks that should operate (or actually: be initiated) in the background. The application is deployed as follows: apache2-mpm-worker; mod_wsgi in daemon mode (1 process, 15 threads). The background tasks have the following characteristics:

    - they need to operate at a regular interval (every 5 minutes or so);
    - they require the application context (i.e. the application packages need to be available in memory);
    - they do not need any input other than database access, in order to perform some not-so-heavy tasks such as sending out e-mail and updating the state of the database.

    Now I was thinking that the simplest approach to this problem would be to piggyback on the existing application process (as spawned by mod_wsgi). By implementing the task as part of the application and providing an HTTP interface for it, I would avoid the overhead of another process that holds all of the application in memory. A simple cronjob can be set up that sends a request to this HTTP interface every 5 minutes, and that would be it. Since the application process provides 15 threads and the tasks are quite lightweight and only run every 5 minutes, I figure they would not hinder the performance of the web application's user-facing operations. A sketch of this approach follows below.

    Yet... I have done some online research and I have seen nobody advocating this approach. Many articles suggest a significantly more complex approach based on a full-blown messaging component (such as Celery, which uses RabbitMQ). Although that's sexy, it sounds like overkill to me. Some articles suggest setting up a cronjob that executes a script which performs the tasks. But that doesn't feel very attractive either, as it results in creating a new process that loads the entire application into memory, performs some tiny task, and destroys the process again. And this is repeated every 5 minutes. It does not sound like an elegant solution.

    So, I'm looking for some feedback on the piggyback approach described above. Is my reasoning correct? Am I overlooking (potential) problems? What about my assumption that the application's performance will not be impeded?
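
    A minimal sketch of that piggyback approach, for reference - the view name, URL, and task functions are all hypothetical, and the IP check merely stands in for whatever authentication would actually be appropriate:

        # views.py - hypothetical names throughout
        from django.http import HttpResponse, HttpResponseForbidden

        def send_pending_emails():
            pass  # placeholder: the real task would query the DB and send mail

        def update_database_state():
            pass  # placeholder: the real task would run housekeeping queries

        def run_background_tasks(request):
            # only allow the local cronjob to trigger the tasks
            if request.META.get('REMOTE_ADDR') != '127.0.0.1':
                return HttpResponseForbidden()
            send_pending_emails()
            update_database_state()
            return HttpResponse('ok')

        # crontab entry on the same host:
        # */5 * * * * curl -s http://localhost/internal/run-tasks/ > /dev/null

    One design caveat worth stating: the task runs on one of the 15 request threads, so a task that ever becomes slow would tie up a worker for its whole duration - which is essentially the risk the Celery-style articles are guarding against.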

    Read the article

  • Deleting items from a ListView using a custom BaseAdapter

    - by HXCaine
    I am using a customised BaseAdapter to display items on a ListView. The items are just strings held in an ArrayList. The list items have a delete button on them (a big red X), and I'd like to remove the item from the ArrayList and notify the ListView to update itself. However, every implementation I've tried gets mysterious position numbers given to it, so for example clicking item 2's delete button will delete item 5's. It seems to be almost entirely random. One thing to note is that elements may be repeated, but must be kept in the same order. For example, I can have "Irish" twice, as elements 3 and 7. My code is below:

        private static class ViewHolder {
            TextView lang;
            int position;
        }

        public View getView(final int position, View convertView, ViewGroup parent) {
            ViewHolder holder;
            if (convertView == null) {
                convertView = mInflater.inflate(R.layout.language_link_row, null);
                holder = new ViewHolder();
                holder.lang = (TextView) convertView.findViewById(R.id.language_link_text);
                holder.position = position;
                final ImageView deleteButton = (ImageView) convertView.findViewById(R.id.language_link_cross_delete);
                deleteButton.setOnClickListener(this);
                convertView.setTag(holder);
                deleteButton.setTag(holder);
            } else {
                holder = (ViewHolder) convertView.getTag();
            }
            holder.lang.setText(mLanguages.get(position));
            return convertView;
        }

    I later attempt to retrieve the deleted element's position by grabbing the tag, but it's always the wrong position in the list. There is no noticeable pattern to the position given here; it always seems random.

        // The delete button's listener
        public void onClick(View v) {
            ViewHolder deleteHolder = (ViewHolder) v.getTag();
            int pos = deleteHolder.position;
            // ...
        }

    I would be quite happy to just delete the item from the ArrayList and have the ListView update itself, but the position I'm getting is incorrect, so I can't do that. Please note that I did, at first, have the deleteButton clickListener inside the getView method, and used 'position' to delete the value, but I had the same problem. Any suggestions appreciated - this is really irritating me.
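
    For what it's worth, a sketch of one likely explanation - my reading of the code as posted, not a confirmed diagnosis: ListView recycles row views, and holder.position is only assigned inside the convertView == null branch, so a recycled row keeps the position it was first inflated at. Refreshing the tag on every bind should make the deletes line up:

        // end of getView(): refresh the position on every bind, not only on inflation
        } else {
            holder = (ViewHolder) convertView.getTag();
        }
        holder.position = position;   // <- the added line
        holder.lang.setText(mLanguages.get(position));
        return convertView;

    After that, the click handler can safely do mLanguages.remove(pos) followed by notifyDataSetChanged() on the adapter to refresh the list.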

    Read the article

  • DOM Elements with same id and jQuery

    - by Steve
    Hi, I have multiple elements with the same structure in my application. The second div element's id varies according to the comment id in the db, which is unique. There are form elements with the ids 'vote_up' and 'vote_down', and this structure gets repeated for each comment. I want to perform an Ajax request. The first occurrence of this structure functions properly using Ajax, but the rest perform a full HTTP request. By the way, I am developing a Rails application and I am using jQuery.

        <div id="post_comment">
          john<i> says </i> Comment<br/>
          <div id="comment_10_div">
            <form action="/comments/vote_up" id="vote_up" method="post">
              <div style="margin:0;padding:0;display:inline">
                <input name="authenticity_token" type="hidden" value="w873BgYHLxQmadUalzMRUC+1ql4AtP3U7f78dT8x9ho=" />
              </div>
              <input id="Comment_place_id" name="Comment[post_id]" type="hidden" value="3" />
              <input id="Comment_id" name="Comment[id]" type="hidden" value="10" />
              <input id="Comment_user_id" name="Comment[user_id]" type="hidden" value="2" />
              <input name="commit" type="submit" value="Vote up" />
            </form>
            <label id="comment_10">10</label>
            <form action="/comments/vote_down" id="vote_down" method="post">
              <div style="margin:0;padding:0;display:inline">
                <input name="authenticity_token" type="hidden" value="w873BgYHLxQmadUalzMRUC+1ql4AtP3U7f78dT8x9ho=" />
              </div>
              <input id="Comment_place_id" name="Comment[place_id]" type="hidden" value="3" />
              <input id="Comment_id" name="Comment[id]" type="hidden" value="10" />
              <input id="Comment_user_id" name="Comment[user_id]" type="hidden" value="2" />
              <input name="commit" type="submit" value="Vote Down" />
            </form>
          </div>
        </div>

    Can you please help me to solve this? Thanks.
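
    A sketch of the usual fix, assuming the markup can be changed: duplicate ids are invalid HTML, and jQuery's #id selector only ever returns the first match, which is exactly why only the first comment's forms get the Ajax behaviour. Switching the repeated forms to classes and binding a delegated handler (here with .live(), which supports 'submit' from jQuery 1.4 onward) covers every comment, including ones added later:

        // markup change: class="vote_up" / class="vote_down" instead of id="..."
        $('form.vote_up, form.vote_down').live('submit', function () {
            var form = $(this);
            $.post(form.attr('action'), form.serialize(), function (data) {
                // ... update the matching comment's counter here ...
            });
            return false; // suppress the normal full-page HTTP post
        });

    The per-comment data (the unique comment id) is already carried by the hidden inputs, so form.serialize() posts everything the controller needs.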

    Read the article

  • INSERT OR IGNORE in a trigger

    - by dan04
    I have a database (for tracking email statistics) that has grown to hundreds of megabytes, and I've been looking for ways to reduce it. It seems that the main reason for the large file size is that the same strings tend to be repeated in thousands of rows. To avoid this problem, I plan to create another table for a string pool, like so:

        CREATE TABLE AddressLookup (
            ID INTEGER PRIMARY KEY AUTOINCREMENT,
            Address TEXT UNIQUE
        );
        CREATE TABLE EmailInfo (
            MessageID INTEGER PRIMARY KEY AUTOINCREMENT,
            ToAddrRef INTEGER REFERENCES AddressLookup(ID),
            FromAddrRef INTEGER REFERENCES AddressLookup(ID)
            /* Additional columns omitted for brevity. */
        );

    And for convenience, a view to join these tables:

        CREATE VIEW EmailView AS
            SELECT MessageID, A1.Address AS ToAddr, A2.Address AS FromAddr
            FROM EmailInfo
            LEFT JOIN AddressLookup A1 ON (ToAddrRef = A1.ID)
            LEFT JOIN AddressLookup A2 ON (FromAddrRef = A2.ID);

    In order to be able to use this view as if it were a regular table, I've made some triggers:

        CREATE TRIGGER trg_id_EmailView INSTEAD OF DELETE ON EmailView
        BEGIN
            DELETE FROM EmailInfo WHERE MessageID = OLD.MessageID;
        END;

        CREATE TRIGGER trg_ii_EmailView INSTEAD OF INSERT ON EmailView
        BEGIN
            INSERT OR IGNORE INTO AddressLookup(Address) VALUES (NEW.ToAddr);
            INSERT OR IGNORE INTO AddressLookup(Address) VALUES (NEW.FromAddr);
            INSERT INTO EmailInfo
                SELECT NEW.MessageID, A1.ID, A2.ID
                FROM AddressLookup A1, AddressLookup A2
                WHERE A1.Address = NEW.ToAddr AND A2.Address = NEW.FromAddr;
        END;

        CREATE TRIGGER trg_iu_EmailView INSTEAD OF UPDATE ON EmailView
        BEGIN
            UPDATE EmailInfo SET MessageID = NEW.MessageID
                WHERE MessageID = OLD.MessageID;
            REPLACE INTO EmailView
                SELECT NEW.MessageID, NEW.ToAddr, NEW.FromAddr;
        END;

    The problem: after

        INSERT OR REPLACE INTO EmailView VALUES (1, '[email protected]', '[email protected]');
        INSERT OR REPLACE INTO EmailView VALUES (2, '[email protected]', '[email protected]');

    the updated rows contain:

        MessageID  ToAddr             FromAddr
        ---------  ------             --------
        1          NULL               [email protected]
        2          [email protected]    [email protected]

    There's a NULL that shouldn't be there; the corresponding cell in the EmailInfo table contains an orphaned ToAddrRef value. If you do the INSERTs one at a time, you'll see that Alice's ID in the AddressLookup table changes! It appears that this behavior is documented:

        An ON CONFLICT clause may be specified as part of an UPDATE or INSERT action
        within the body of the trigger. However if an ON CONFLICT clause is specified
        as part of the statement causing the trigger to fire, then conflict handling
        policy of the outer statement is used instead.

    So the "REPLACE" in the top-level "INSERT OR REPLACE" statement is overriding the critical "INSERT OR IGNORE" in the trigger program. Is there a way I can make it work the way that I wanted?
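
    A sketch of one workaround (an assumption on my part, not verified against every SQLite version): avoid the conflict-resolution machinery inside the trigger altogether, so the outer statement's REPLACE policy has nothing to override. An insert guarded by WHERE NOT EXISTS never raises a uniqueness conflict, so it cannot be turned into a destructive REPLACE:

        -- inside trg_ii_EmailView, replace the two INSERT OR IGNOREs with
        -- inserts that are conditional rather than conflict-resolved:
        INSERT INTO AddressLookup (Address)
        SELECT NEW.ToAddr
        WHERE NOT EXISTS (SELECT 1 FROM AddressLookup WHERE Address = NEW.ToAddr);

        INSERT INTO AddressLookup (Address)
        SELECT NEW.FromAddr
        WHERE NOT EXISTS (SELECT 1 FROM AddressLookup WHERE Address = NEW.FromAddr);

    Since no conflict ever occurs, Alice's row is never deleted and re-created, her ID stays stable, and the ToAddrRef in EmailInfo is never orphaned.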

    Read the article

  • bbPress Loop help

    - by Cameron
    Hi, I have the following code:

        <?php if ( bb_forums() ) : ?>
        <?php while ( bb_forum() ) : ?>
            <div class="box">
            <?php if (bb_get_forum_is_category()) : ?>
                <h3><a href="<?php forum_link(); ?>"><?php forum_name(); ?></a></h3>
            <?php continue; endif; ?>
                <ol>
                    <li class="arrow f_unread">
                        <a href="<?php forum_link(); ?>">
                            <strong><?php forum_name(); ?></strong><br />
                            <?php forum_topics(); ?> Topics / <?php forum_posts(); ?> Replies
                        </a>
                    </li>
                </ol>
            </div>
        <?php endwhile; ?>
        <?php endif; // bb_forums() ?>

    I'm struggling to get it to work how I want, which is basically this - for each category, output:

        <div class="box">
            <h3>CAT</h3>
            <ol>
                <li>Forum</li>
                <li>Forum</li>
            </ol>
        </div>

    So that div should be repeated for each block. If a set of forums has no category, then all of the code above would still run, just with no h3. Can anyone help? Thanks.
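
    A sketch of one way to restructure the loop - this assumes bb_forum() walks the forum tree in order, yielding each category immediately before its forums, which is my assumption rather than something stated in the bbPress docs. The idea is to track whether a box is currently open, close it whenever a new category starts, and close the last one after the loop; forums that arrive before any category get a box with no h3:

        <?php if ( bb_forums() ) : $open = false; ?>
        <?php while ( bb_forum() ) : ?>
            <?php if ( bb_get_forum_is_category() ) : ?>
                <?php if ( $open ) echo '</ol></div>'; $open = true; ?>
                <div class="box">
                <h3><a href="<?php forum_link(); ?>"><?php forum_name(); ?></a></h3>
                <ol>
            <?php else : ?>
                <?php if ( ! $open ) { echo '<div class="box"><ol>'; $open = true; } // uncategorised forums: box without an h3 ?>
                <li class="arrow f_unread">
                    <a href="<?php forum_link(); ?>"><strong><?php forum_name(); ?></strong><br />
                    <?php forum_topics(); ?> Topics / <?php forum_posts(); ?> Replies</a>
                </li>
            <?php endif; ?>
        <?php endwhile; ?>
        <?php if ( $open ) echo '</ol></div>'; ?>
        <?php endif; ?>

    The key change from the posted attempt is that the category branch no longer uses continue, so the div and ol are always properly closed before the next box opens.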

    Read the article
