Search Results

Search found 75052 results on 3003 pages for 'adf bam data control prod'.


  • Change Data Capture Webinar

    I am going to be doing a webinar with our friends at Attunity on Change Data Capture.  Attunity have a good story around this technology and you can use it in your SSIS loads to great effect. Join Attunity and Konesans/SQLIS for a Webinar on 17 September. Space is limited. Reserve your Webinar seat now at: https://www1.gotomeeting.com/register/693735512 Want increased efficiency and real-time speed when conducting ETL loads? Need lower implementation costs while minimizing system impact? Learn how change data capture (CDC) technologies can reduce ETL load times. Allan Mitchell, Principal Consultant at Konesans and SQL Server MVP specialising in ETL, will explain CDC concepts and benefits and how CDC can dramatically reduce ETL load times. Ian Archibald, Pre-Sales Director EMEA for Attunity, will present and demonstrate Attunity's award-winning Oracle-CDC for SSIS, a fully-integrated SSIS solution for designing, deploying and managing Oracle CDC processes. Title: Change Data Capture - Reducing ETL Load Times Date: Thursday, September 17, 2009 Time: 10:00 AM - 11:00 AM BST ABOUT THE SPEAKERS: Allan Mitchell is the joint owner of Konesans Ltd, a UK-based consultancy specializing in SQL Server, and most importantly SQL Server Integration Services. Having been working with SQL Server from 6.5 onwards, he has extensive experience in many aspects of SQL Server, but now focuses on the BI suite of tools. He is a SQL Server MVP, a frequent poster on the MS SSIS/DTS newsgroups, and runs the sqldts.com and sqlis.com resource sites. Ian Archibald, Attunity Pre-Sales Director EMEA, has worked in Attunity’s UK Office for 17 years. An expert in Attunity solutions, Ian has extensive knowledge of Attunity’s products and data integration & CDC technologies. After registering you will receive a confirmation email containing information about joining the Webinar. System Requirements PC-based attendees Required: Windows® 2000, XP Home, XP Pro, 2003 Server, Vista Macintosh®-based attendees Required: Mac OS® X 10.4 (Tiger®) or newer

    Read the article

  • Do you need all that data?

    - by BuckWoody
    I read an amazing post over on ars technica (link: http://arstechnica.com/science/news/2010/03/the-software-brains-behind-the-particle-colliders.ars?utm_source=rss&utm_medium=rss&utm_campaign=rss) about the LHC, or as they are also known, the "particle colliders". Beyond just the pure scientific geek awesomeness, these instruments have the potential to collect more data than you can (or possibly should) store. Actually, this problem has a lot in common with a BI system. There's so much granular detail available in the source systems that a designer has to decide how, and how much, to roll up the data. Whenever you do that, you lose fidelity, but in many cases that's OK. Take, for example, your car's speedometer. You don't actually need to track each and every point of speed as it happens. You only need to know that you're hovering around the speed limit at a certain point in time. Since this is the way that humans perceive data, is there some lesson we should take in the design of data "flows" - and what implications does this have for new technologies like StreamInsight?

    Read the article

  • Accessing Server-Side Data from Client Script: Accessing JSON Data From an ASP.NET Page Using jQuery

    When building a web application, we must decide how and when the browser will communicate with the web server. The ASP.NET WebForms model greatly simplifies web development by providing a straightforward mechanism for exchanging data between the browser and the server. With WebForms, each ASP.NET page's rendered output includes a <form> element that performs a postback to the same page whenever a Button control within the form is clicked, or whenever the user modifies a control whose AutoPostBack property is set to True. On postback, the server sends the entire contents of the web page back to the browser, which then displays this new content. With WebForms we don't need to spend much time or effort thinking about how or when the browser will communicate with the server or how that returned information will be processed by the browser. It just works. While this approach certainly works and has its advantages, it's not without its drawbacks. The primary concern with postback forms is that they require a large amount of information to be exchanged between the browser and the server. Specifically, the browser sends back all of its form fields (including hidden ones, like view state, which may be quite large) and then the server sends back the entire contents of the web page. Granted, there are scenarios where this large quantity of data needs to be exchanged, but in many cases we can use techniques that exchange much less information. However, these techniques necessitate spending more time and effort thinking about how and when to have the browser communicate with the server and intelligently deciding on what information needs to be exchanged. This article, the first in a multi-part series, examines different techniques for accessing server-side data from a browser using client-side script. Throughout this series we will explore alternative ways to expose data on the server so that it can be accessed from the browser using script; we will also examine various tools for communicating with the server from JavaScript, including jQuery and the ASP.NET AJAX library. Read on to learn more! Read More >
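    To make the idea of a lighter-weight exchange concrete, here is a minimal sketch - not code from this article, and with illustrative names and data only - of a hand-rolled ASP.NET HTTP handler that returns JSON, the kind of endpoint a jQuery call such as $.getJSON("ProductsHandler.ashx", ...) could consume instead of a full postback:

    using System.Web;
    using System.Web.Script.Serialization;

    // A minimal, hypothetical handler that returns JSON instead of a full page;
    // the field names and data are placeholders.
    public class ProductsHandler : IHttpHandler
    {
        public bool IsReusable
        {
            get { return true; }
        }

        public void ProcessRequest(HttpContext context)
        {
            var products = new[]
            {
                new { Id = 1, Name = "Chai", Price = 18.00m },
                new { Id = 2, Name = "Chang", Price = 19.00m }
            };

            // Serialize to JSON and hand it back without any view state or page markup.
            context.Response.ContentType = "application/json";
            context.Response.Write(new JavaScriptSerializer().Serialize(products));
        }
    }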

    Read the article

  • SQL SERVER – Standards Support, Protocol, Data Portability – 3 Important SQL Server Documentations for Downloads

    - by pinaldave
    I have been working with SQL Server continuously for more than 8 years now and I like to read a lot. Sometimes I read easy things and sometimes I read stuff that is not so easy. Here are a few recently released articles which I referred to and read. They are not easy reads, but they are indeed very important reads if you like to read things which are more advanced. SQL Server Standards Support Documentation The SQL Server standards support documentation provides detailed support information for certain standards that are implemented in Microsoft SQL Server. Microsoft SQL Server Protocol Documentation The Microsoft SQL Server protocol documentation provides technical specifications for Microsoft proprietary protocols that are implemented and used in Microsoft SQL Server 2008. Microsoft SQL Server Data Portability Documentation The SQL Server data portability documentation explains various mechanisms by which user-created data in SQL Server can be extracted for use in other software products. These mechanisms include import/export functionality, documented APIs, industry standard formats, or documented data structures/file formats. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Advanced donut caching: using dynamically loaded controls

    - by DigiMortal
    Yesterday I solved one caching problem with a local community portal. I enabled output cache on SharePoint Server 2007 to make the site faster. Although caching works fine I needed to do some additional work, because there are some controls that show different content to different users. In this example I will show you how to use “donut caching” with user controls – a powerful way to drive some content around the cache. About donut caching Donut caching means that although you are caching your content you have some holes in it, so you can still affect the output that goes to the user. For example, you can cache the front page of your site and still show a welcome message that contains the correct user name. To get a better idea about donut caching I suggest you read ScottGu’s posting Tip/Trick: Implement "Donut Caching" with the ASP.NET 2.0 Output Cache Substitution Feature. Basically donut caching uses the ASP.NET substitution control. In the output this control is replaced by the string you return from a static method bound to the substitution control. Again, take a look at the ScottGu blog posting I referred to above. Problem If you look at Scott’s example it is pretty plain and easy in its output. All it does is write out the current user name as a string. Here are examples of my login area for anonymous and authenticated users. It is clear that outputting the mark-up for these views as a string is pretty lame to implement in code at the string level. Every little change in design will end up with a new version of the controls library, because some parts of the design “live” there. Solution: using user controls I worked out an easy solution to my problem. I used cache substitution and user controls together. I have three user controls: LogInControl – this is the proxy control that checks which “real” control to load. AnonymousLogInControl – template and logic for the anonymous users login area. AuthenticatedLogInControl – template and logic for the authenticated users login area. This is the control we render for each user separately, because it contains the user name and the user profile fill percent. The anonymous control is not very interesting, because it is only about keeping mark-up in a separate file. The interesting parts are LogInControl and AuthenticatedLogInControl. Creating the proxy control The first thing was to create a control that has a substitution area where the “real” control is loaded. This proxy control should also be able to decide which control to load. The definition of the control is very primitive. <%@ Control EnableViewState="false" Inherits="MyPortal.Profiles.LogInControl" %> <asp:Substitution runat="server" MethodName="ShowLogInBox" /> But the code is a little bit tricky. Based on the current user instance we decide which login control to load. Then we create a page instance and load our control through it. When the control is loaded we call its DataBind() method. In this method we evaluate all fields in the loaded control (it was the best choice, as Load and other events will not be fired). Take a look at the code.
    public static string ShowLogInBox(HttpContext context)
    {
        var user = SPContext.Current.Web.CurrentUser;
        string controlName;

        if (user != null)
            controlName = "AuthenticatedLogInControl.ascx";
        else
            controlName = "AnonymousLogInControl.ascx";

        var path = "~/_controltemplates/" + controlName;
        var output = new StringBuilder(10000);

        using (var page = new Page())
        using (var ctl = page.LoadControl(path))
        using (var writer = new StringWriter(output))
        using (var htmlWriter = new HtmlTextWriter(writer))
        {
            ctl.DataBind();
            ctl.RenderControl(htmlWriter);
        }

        return output.ToString();
    }

    When the control is bound to data we ask it to render its contents to the StringBuilder. Now we have the output of the control as a string and we can return it from our method. Of course, notice how careful I am about disposing resources. :) The method that returns the contents for the substitution control is a static method that has no connection with the control instance, because when the page is read from cache there are no instances of controls available. Conclusion As you saw, it was not very hard to use donut caching with user controls. Instead of writing the mark-up of the controls as a string in the static method that is bound to the substitution control, we can still use our user controls.

    Read the article

  • WebCenter .NET Accelerator - Microsoft SharePoint Data via WSRP

    - by john.brunswick
    Platforms in the enterprise will never be homogeneous. As much as any vendor would enjoy having their single development or application technology be exclusively adopted by customers, too much legacy, time, education, innovation and vertical business needs exist to make using a single platform practical. Java and .NET are the two industry application platform heavyweights and, more often than not, business users are leveraging various systems in their day to day activities that incorporate applications developed on top of both platforms. BEA Systems acquired Plumtree Software to complete their "liquid" view of data, stressing that regardless of the particular source system, heterogeneous data could interoperate not only through layers that allowed for data aggregation, but also at the "glass" or UI layer. The technical components that allowed the integration at the glass thrive today at Oracle, helping WebCenter to provide a rich composite application framework. Oracle Ensemble and the Oracle .NET Application Accelerator allow WebCenter to consume and interact with the UI layers provided by .NET applications and a series of other technologies. The beauty of the .NET accelerator is that it can consume any .NET application and act as a Web Services for Remote Portlets (WSRP) producer. I recently had a chance to leverage the .NET accelerator to expose an ASP.NET 2.0 (C#) application in the WebCenter UI (pictured above) and wanted to share a few tips to help others get started with similar integrations. I was using two virtual machines for the exercise - one with Windows Server 2003, running SharePoint, and the other running WebCenter Spaces 11g. For my sample application data I ended up using SharePoint 2007 lists and calendars (MOSS 2007) to supply results using a .NET API for SharePoint.
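    For readers curious what the SharePoint side of such an exercise can look like, the following is a rough sketch - with a hypothetical site URL and list name, not the author's actual code - of reading MOSS 2007 list items through the SharePoint server object model:

    using System;
    using Microsoft.SharePoint;

    // Minimal sketch: open a site, grab a list, and pull the Title field from
    // each item so it can be surfaced to the consuming application.
    class ListReader
    {
        static void Main()
        {
            using (SPSite site = new SPSite("http://moss-server/sites/demo"))
            using (SPWeb web = site.OpenWeb())
            {
                SPList list = web.Lists["Calendar"];
                foreach (SPListItem item in list.Items)
                {
                    Console.WriteLine(item["Title"]);
                }
            }
        }
    }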

    Read the article

  • Filtering a Grid of Data in ASP.NET MVC

    This article is the fourth installment in an ongoing series on displaying a grid of data in an ASP.NET MVC application. The previous two articles in this series - Sorting a Grid of Data in ASP.NET MVC and Displaying a Paged Grid of Data in ASP.NET MVC - showed how to sort and page data in a grid. This article explores how to present a filtering interface to the user and then only show those records that conform to the filtering criteria. In particular, the demo we examine in this installment presents an interface with three filtering criteria: the category, minimum price, and whether to omit discontinued products. Using this interface the user can apply one or more of these criteria, allowing a variety of filtered displays. For example, the user could opt to view: all products in the Condiments category; those products in the Confections category that cost $50.00 or more; all products that cost $25.00 or more and are not discontinued; or any other such combination. Like with its predecessors, this article offers step-by-step instructions and includes a complete, working demo available for download at the end of the article. Read on to learn more! Read More >
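    As a hedged illustration of the kind of filtering logic the article describes - this is not the article's demo code, and the entity and property names are assumptions - a repository or controller helper might compose the three optional criteria with LINQ like this:

    using System.Linq;

    public class Product
    {
        public int CategoryID { get; set; }
        public decimal UnitPrice { get; set; }
        public bool Discontinued { get; set; }
    }

    public static class ProductFilter
    {
        // Each criterion is applied only when the user supplied it, mirroring the
        // category / minimum price / omit-discontinued filters described above.
        public static IQueryable<Product> Apply(
            IQueryable<Product> products,
            int? categoryId,
            decimal? minPrice,
            bool omitDiscontinued)
        {
            if (categoryId.HasValue)
                products = products.Where(p => p.CategoryID == categoryId.Value);
            if (minPrice.HasValue)
                products = products.Where(p => p.UnitPrice >= minPrice.Value);
            if (omitDiscontinued)
                products = products.Where(p => !p.Discontinued);
            return products;
        }
    }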

    Read the article

  • Convert ddply {plyr} to Oracle R Enterprise, or use with Embedded R Execution

    - by Mark Hornick
    The plyr package contains a set of tools for partitioning a problem into smaller sub-problems that can be more easily processed. One function within {plyr} is ddply, which allows you to specify subsets of a data.frame and then apply a function to each subset. The result is gathered into a single data.frame. Such a capability is very convenient. The function ddply also has a parallel option that if TRUE, will apply the function in parallel, using the backend provided by foreach. This type of functionality is available through Oracle R Enterprise using the ore.groupApply function. In this blog post, we show a few examples from Sean Anderson's "A quick introduction to plyr" to illustrate the correpsonding functionality using ore.groupApply. To get started, we'll create a demo data set and load the plyr package. set.seed(1) d <- data.frame(year = rep(2000:2014, each = 3),         count = round(runif(45, 0, 20))) dim(d) library(plyr) This first example takes the data frame, partitions it by year, and calculates the coefficient of variation of the count, returning a data frame. # Example 1 res <- ddply(d, "year", function(x) {   mean.count <- mean(x$count)   sd.count <- sd(x$count)   cv <- sd.count/mean.count   data.frame(cv.count = cv)   }) To illustrate the equivalent functionality in Oracle R Enterprise, using embedded R execution, we use the ore.groupApply function on the same data, but pushed to the database, creating an ore.frame. The function ore.push creates a temporary table in the database, returning a proxy object, the ore.frame. D <- ore.push(d) res <- ore.groupApply (D, D$year, function(x) {   mean.count <- mean(x$count)   sd.count <- sd(x$count)   cv <- sd.count/mean.count   data.frame(year=x$year[1], cv.count = cv)   }, FUN.VALUE=data.frame(year=1, cv.count=1)) You'll notice the similarities in the first three arguments. With ore.groupApply, we augment the function to return the specific data.frame we want. We also specify the argument FUN.VALUE, which describes the resulting data.frame. From our previous blog posts, you may recall that by default, ore.groupApply returns an ore.list containing the results of each function invocation. To get a data.frame, we specify the structure of the result. The results in both cases are the same, however the ore.groupApply result is an ore.frame. In this case the data stays in the database until it's actually required. This can result in significant memory and time savings whe data is large. R> class(res) [1] "ore.frame" attr(,"package") [1] "OREbase" R> head(res)    year cv.count 1 2000 0.3984848 2 2001 0.6062178 3 2002 0.2309401 4 2003 0.5773503 5 2004 0.3069680 6 2005 0.3431743 To make the ore.groupApply execute in parallel, you can specify the argument parallel with either TRUE, to use default database parallelism, or to a specific number, which serves as a hint to the database as to how many parallel R engines should be used. The next ddply example uses the summarise function, which creates a new data.frame. In ore.groupApply, the year column is passed in with the data. Since no automatic creation of columns takes place, we explicitly set the year column in the data.frame result to the value of the first row, since all rows received by the function have the same year. 
# Example 2 ddply(d, "year", summarise, mean.count = mean(count)) res <- ore.groupApply (D, D$year, function(x) {   mean.count <- mean(x$count)   data.frame(year=x$year[1], mean.count = mean.count)   }, FUN.VALUE=data.frame(year=1, mean.count=1)) R> head(res)    year mean.count 1 2000 7.666667 2 2001 13.333333 3 2002 15.000000 4 2003 3.000000 5 2004 12.333333 6 2005 14.666667 Example 3 uses the transform function with ddply, which modifies the existing data.frame. With ore.groupApply, we again construct the data.frame explicilty, which is returned as an ore.frame. # Example 3 ddply(d, "year", transform, total.count = sum(count)) res <- ore.groupApply (D, D$year, function(x) {   total.count <- sum(x$count)   data.frame(year=x$year[1], count=x$count, total.count = total.count)   }, FUN.VALUE=data.frame(year=1, count=1, total.count=1)) > head(res)    year count total.count 1 2000 5 23 2 2000 7 23 3 2000 11 23 4 2001 18 40 5 2001 4 40 6 2001 18 40 In Example 4, the mutate function with ddply enables you to define new columns that build on columns just defined. Since the construction of the data.frame using ore.groupApply is explicit, you always have complete control over when and how to use columns. # Example 4 ddply(d, "year", mutate, mu = mean(count), sigma = sd(count),       cv = sigma/mu) res <- ore.groupApply (D, D$year, function(x) {   mu <- mean(x$count)   sigma <- sd(x$count)   cv <- sigma/mu   data.frame(year=x$year[1], count=x$count, mu=mu, sigma=sigma, cv=cv)   }, FUN.VALUE=data.frame(year=1, count=1, mu=1,sigma=1,cv=1)) R> head(res)    year count mu sigma cv 1 2000 5 7.666667 3.055050 0.3984848 2 2000 7 7.666667 3.055050 0.3984848 3 2000 11 7.666667 3.055050 0.3984848 4 2001 18 13.333333 8.082904 0.6062178 5 2001 4 13.333333 8.082904 0.6062178 6 2001 18 13.333333 8.082904 0.6062178 In Example 5, ddply is used to partition data on multiple columns before constructing the result. Realizing this with ore.groupApply involves creating an index column out of the concatenation of the columns used for partitioning. This example also allows us to illustrate using the ORE transparency layer to subset the data. # Example 5 baseball.dat <- subset(baseball, year > 2000) # data from the plyr package x <- ddply(baseball.dat, c("year", "team"), summarize,            homeruns = sum(hr)) We first push the data set to the database to get an ore.frame. We then add the composite column and perform the subset, using the transparency layer. Since the results from database execution are unordered, we will explicitly sort these results and view the first 6 rows. BB.DAT <- ore.push(baseball) BB.DAT$index <- with(BB.DAT, paste(year, team, sep="+")) BB.DAT2 <- subset(BB.DAT, year > 2000) X <- ore.groupApply (BB.DAT2, BB.DAT2$index, function(x) {   data.frame(year=x$year[1], team=x$team[1], homeruns=sum(x$hr))   }, FUN.VALUE=data.frame(year=1, team="A", homeruns=1), parallel=FALSE) res <- ore.sort(X, by=c("year","team")) R> head(res)    year team homeruns 1 2001 ANA 4 2 2001 ARI 155 3 2001 ATL 63 4 2001 BAL 58 5 2001 BOS 77 6 2001 CHA 63 Our next example is derived from the ggplot function documentation. This illustrates the use of ddply within using the ggplot2 package. We first create a data.frame with demo data and use ddply to create some statistics for each group (gp). We then use ggplot to produce the graph. We can take this same code, push the data.frame df to the database and invoke this on the database server. The graph will be returned to the client window, as depicted below. 
# Example 6 with ggplot2 library(ggplot2) df <- data.frame(gp = factor(rep(letters[1:3], each = 10)),                  y = rnorm(30)) # Compute sample mean and standard deviation in each group library(plyr) ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y)) # Set up a skeleton ggplot object and add layers: ggplot() +   geom_point(data = df, aes(x = gp, y = y)) +   geom_point(data = ds, aes(x = gp, y = mean),              colour = 'red', size = 3) +   geom_errorbar(data = ds, aes(x = gp, y = mean,                                ymin = mean - sd, ymax = mean + sd),              colour = 'red', width = 0.4) DF <- ore.push(df) ore.tableApply(DF, function(df) {   library(ggplot2)   library(plyr)   ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))   ggplot() +     geom_point(data = df, aes(x = gp, y = y)) +     geom_point(data = ds, aes(x = gp, y = mean),                colour = 'red', size = 3) +     geom_errorbar(data = ds, aes(x = gp, y = mean,                                  ymin = mean - sd, ymax = mean + sd),                   colour = 'red', width = 0.4) }) But let's take this one step further. Suppose we wanted to produce multiple graphs, partitioned on some index column. We replicate the data three times and add some noise to the y values, just to make the graphs a little different. We also create an index column to form our three partitions. Note that we've also specified that this should be executed in parallel, allowing Oracle Database to control and manage the server-side R engines. The result of ore.groupApply is an ore.list that contains the three graphs. Each graph can be viewed by printing the list element. df2 <- rbind(df,df,df) df2$y <- df2$y + rnorm(nrow(df2)) df2$index <- c(rep(1,300), rep(2,300), rep(3,300)) DF2 <- ore.push(df2) res <- ore.groupApply(DF2, DF2$index, function(df) {   df <- df[,1:2]   library(ggplot2)   library(plyr)   ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))   ggplot() +     geom_point(data = df, aes(x = gp, y = y)) +     geom_point(data = ds, aes(x = gp, y = mean),                colour = 'red', size = 3) +     geom_errorbar(data = ds, aes(x = gp, y = mean,                                  ymin = mean - sd, ymax = mean + sd),                   colour = 'red', width = 0.4)   }, parallel=TRUE) res[[1]] res[[2]] res[[3]] To recap, we've illustrated how various uses of ddply from the plyr package can be realized in ore.groupApply, which affords the user explicit control over the contents of the data.frame result in a straightforward manner. We've also highlighted how ddply can be used within an ore.groupApply call.

    Read the article

  • Remote Data connection in iphone app

    - by Tariq- iPHONE Programmer
    Hello, I am working on a social networking iPhone app which requires a remote data connection. So I hired a PHP developer to provide me with RESTful services. But when I started working with him, he argued that he will not create stored procedures and web services. Instead, he suggested I pass the query as a parameter. Suppose I have to call the Search service: he told me to send a POST request with 3 parameters: Query="select * from users", username=abd and password = 123. I think there is no such architecture for using remote data. Then he said it is possible through socket programming. I am 100% sure this is not an appropriate way to access remote data. This is simply illogical. Thousands of iPhone applications use REST/SOAP services to make remote data connections, yet he declined to provide me RESTful services. Please, it is my heartfelt request to all developers to post your own views over here, so that I can show that developer that these are the views of developers worldwide.

    Read the article

  • LibGdx efficient data saving/loading?

    - by grimrader22
    Currently, my LibGDX game consists of a 512 x 512 map of Tiles and entities such as players and monsters. I am wondering how to efficiently save and load the data of my levels. At the moment I am using JSON serialization for each class I want to save. I implement the Json.Serializable interface for all of these classes and write only the variables that are necessary. So my map consists of 512 x 512 tiles, that's 260,000 tiles. Each tile on the map consists of a Tile object, which points to some final Tile object like a GRASS_TILE or a STONE_TILE. When I serialize each level tile, the final Tile that it points to is re-serialized over and over again, so if I have 100 Tiles all pointing to GRASS_TILE, the data of GRASS_TILE is written 100 times over. When I go to load/deserialize my objects, 100 GrassTile objects are created, but they are each their own object. They no longer point to the final tile object. I feel like this makes reading/writing files very slow. If I were to abandon JSON serialization, to my knowledge my next best option would be saving the level data to an SQL database. Unless there is a way to speed up serializing/deserializing 260,000 tiles I may have to do this. Is this a good idea? Could I really write that many tiles to the database efficiently? To sum all this up, I am trying to save my levels using JSON serialization, but it is VERY slow. What other options do I have for saving the data of so many tiles? I also must note that the JSON serialization is not slow on a PC, it is only VERY slow on a mobile device. Since file writing/reading is so slow on mobile devices, what can I do?

    Read the article

  • MVVM - child windows and data contexts

    - by GlenH7
    Should a child window have its own data context (View-Model) or use the data context of the parent? More broadly, should each View have its own View-Model? Are there any rules to guide making that decision? What if the various View-Models will be accessing the same Model? I haven't been able to find any consistent guidance on my question. The MS definition of MVVM appears to be silent on child windows. For one example, I have created a warning message notification View. It really didn't need a data context since it was passed the message to display. But if I needed to fancy it up a bit, I would have tapped the parent's data context. I have run into another scenario that needs a child window and is more complicated than the notification box. The parent's View-Model is already getting cluttered, so I had planned on generating a dedicated VM for the child window. But I can't find any guidance on whether this is a good idea or what the potential consequences may be. FWIW, I happen to be working in Silverlight, but I don't know that this question is strictly a Silverlight issue.
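    One option, offered here only as a sketch rather than as the canonical MVVM answer, is to give the child window its own View-Model while handing it the same underlying Model instance the parent already holds (all names below are hypothetical):

    // Invented names; only the wiring is the point.
    public class OrderModel
    {
        // shared domain state used by both parent and child
    }

    public class ChildViewModel
    {
        private readonly OrderModel _model;

        public ChildViewModel(OrderModel model)
        {
            _model = model;
        }

        public OrderModel Model
        {
            get { return _model; }
        }

        // expose only the properties and commands the child view needs
    }

    // In the parent View-Model, when the child window is opened (Silverlight's
    // ChildWindow shown as comments because it needs the Silverlight assemblies):
    //   var child = new SomeChildWindow();
    //   child.DataContext = new ChildViewModel(_sharedModel);
    //   child.Show();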

    Read the article

  • Best Persistence choice for J2EE-App with frequently changing Data Model

    - by Ben-G
    Whenever I develop a J2EE application, I at some point decide to switch from my dummy persistence (simply using Lists and other data structures) to some sort of database persistence - mostly when I hope the data model is more or less complete. From this point on, changes to the data model become exhausting, but unluckily they occur rather often. I've used different object-relational mappers (iBatis, Hibernate) for my projects. They definitely reduce the pain that comes with data model changes, but they still make me adjust code/configuration in 3 or 4 places for every single change. To me, that's cumbersome and error prone. I had a better experience with DB4O, which simply persists Java objects as they are, but I believe its performance does not scale for huge applications. Is there any way to maintain performance while leaving out all the ugly configuration work? I'm seeking a performant framework which really hides persistence from my code. Wishful thinking? Or am I missing out on THE technology? Hope you can help.

    Read the article

  • Organising data access for dependency injection

    - by IanAWP
    In our company we have a relatively long history of database backed applications, but have only just begun experimenting with dependency injection. I am looking for advice about how to convert our existing data access pattern into one more suited for dependency injection. Some specific questions: Do you create one access object per table (Given that a table represents an entity collection)? One interface per table? All of these would need the low level Data Access object to be injected, right? What about if there are dozens of tables, wouldn't that make the composition root into a nightmare? Would you instead have a single interface that defines things like GetCustomer(), GetOrder(), etc? If I took the example of EntityFramework, then I would have one Container that exposes an object for each table, but that container doesn't conform to any interface itself, so doesn't seem like it's compatible with DI. What we do now, in case it helps: The way we normally manage data access is through a generic data layer which exposes CRUD/Transaction capabilities and has provider specific subclasses which handle the creation of IDbConnection, IDbCommand, etc. Actual table access uses Table classes that perform the CRUD operations associated with a particular table and accept/return domain objects that the rest of the system deals with. These table classes expose only static methods, and utilise a static DataAccess singleton instantiated from a config file.
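    For comparison with the per-table approach described above, a common alternative is a single generic repository abstraction resolved once in the composition root; the sketch below is purely illustrative (the interface and class names are invented, not an existing framework's API):

    using System;
    using System.Collections.Generic;

    // One generic contract instead of an interface per table.
    public interface IRepository<T> where T : class
    {
        T GetById(int id);
        IEnumerable<T> GetAll();
        void Add(T entity);
        void Remove(T entity);
    }

    public class Customer
    {
        public int Id { get; set; }
    }

    // A consumer depends only on the abstraction and receives it via constructor
    // injection, replacing the static Table classes and the DataAccess singleton.
    public class CustomerService
    {
        private readonly IRepository<Customer> _customers;

        public CustomerService(IRepository<Customer> customers)
        {
            if (customers == null) throw new ArgumentNullException("customers");
            _customers = customers;
        }

        public Customer Find(int id)
        {
            return _customers.GetById(id);
        }
    }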

    Read the article

  • The Oracle MDM Portfolio & Strategy Session - It All Comes Down to Master Data

    - by Mala Narasimharajan
    By Narayana Machiraju. We are less than a week away from the start of Oracle Open World 2012 and I would like to introduce you all to one of the most awaited MDM strategy sessions this year, titled “What’s there to Know about Oracle’s Master Data Management Portfolio and Roadmap?”. Manouj Tahiliani, Senior Director of MDM Product Strategy, provides you with a complete picture of the Oracle MDM portfolio, the product releases, the strategy and the roadmaps. Manouj will be discussing Oracle Fusion MDM applications, the first enterprise-grade SaaS MDM product suite. You’ll hear strategies for leveraging MDM and data quality in the enterprise and how you can derive business value by deploying an MDM foundation for strategic initiatives such as customer experience management, product innovation, and financial transformation. And as a bonus, he is also going to discuss the confluence of MDM with emerging technologies such as big data, social, and mobile. The session is co-presented by GEHC and Westpac. Tony Craddock from Westpac is going to share insights from their MDM implementation along the lines of business drivers, data governance, ROI and other important implementation considerations. A representative from GEHC is going to talk about their MDM journey and the multi-domain MDM story. I strongly recommend you not miss this important session. The MDM track at Oracle Open World covers a variety of topics related to MDM. In addition to the product management team presenting product updates and roadmaps, we have several Customer Panels, Conference sessions and Customer round table sessions featuring a lot of marquee Customers. You can see an overview of MDM sessions here.

    Read the article

  • SQLAuthority News – Download Whitepaper – Power View Infrastructure Configuration and Installation: Step-by-Step and Scripts

    - by pinaldave
    Power View, a feature of SQL Server 2012 Reporting Services Add-in for Microsoft SharePoint Server 2010 Enterprise Edition, is an interactive data exploration, visualization, and presentation experience. It provides intuitive ad-hoc reporting for business users such as data analysts, business decision makers, and information workers. Microsoft has recently released a very interesting whitepaper which covers a sample scenario that validates the connectivity of the Power View reports to both PowerPivot workbooks and tabular models. This white paper talks about the following important concepts about Power View: Understanding the hardware and software requirements and their download locations Installing and configuring the required infrastructure when Power View and its data models are on the same computer and on different computers Installing and configuring a computer used for client access to Power View reports, models, SharePoint 2012 and Power View in a workgroup Configuring single sign-on access for double-hop scenarios with and without Kerberos You can download the whitepaper from here. This whitepaper talks about many interesting scenarios. It would be really interesting to know if you are using Power View in your production environment. If yes, would you please share your experience over here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, Data Warehousing, PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, T SQL, Technology

    Read the article

  • Data Pump: Consistent Export?

    - by Mike Dietrich
    Ouch ... I have to admit it, as I did say in several workshops in the past weeks that a data pump export with expdp is per se consistent. Well ... I thought it was ... but it's not. Thanks to a customer who is doing a large Unicode migration at the moment. We were discussing parameters in the expdp par file. And I did ask my colleagues after doing some research on MOS. And here are the results of my "research": MOS Note 377218.1 has a nice example showing a data pump export of a partitioned table with DELETEs on that table as being inconsistent. Background: Back in the old 9i days, when Data Pump was designed, flashback technology wasn't as popular and well known as it is today - and UNDO usage was the major concern, as a consistent-by-default export would have relied heavily on UNDO. That's why - similar to good ol' exp - the export won't operate in consistent mode by default. To get a consistent data pump export with expdp you'll have to set: FLASHBACK_TIME=SYSTIMESTAMP in your parameter file. Then it will be consistent according to the timestamp at which the process was started. You could use FLASHBACK_SCN instead and determine the SCN beforehand if you'd like to be exact. So sorry if I had proclaimed a feature which unfortunately is not there by default - Mike
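    A minimal parameter file along those lines might look like the sketch below; the directory, dump file and schema names are placeholders, and only the FLASHBACK_TIME line comes from the note above:

    DIRECTORY=DATA_PUMP_DIR
    DUMPFILE=scott_consistent.dmp
    LOGFILE=scott_consistent.log
    SCHEMAS=SCOTT
    FLASHBACK_TIME=SYSTIMESTAMP

    It would then be referenced on the command line with something like expdp system PARFILE=exp_consistent.par.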

    Read the article

  • Flashback Data Archives: A Good Memory for DBAs and Developers

    - by Heinz-Wilhelm Fabry (DBA Community)
    Data is stored and sometimes retained for a long time. Occasionally data is changed after it has first been stored, perhaps even several times. Depending on legal or business requirements, those changes may even have to be traceable. That in turn calls for mechanisms which ensure that the sequence of versions is complete, without gaps. And implicitly it also means that the versions must be protected against deletion and modification. Versioning can be done through the application with which the data is captured, through triggers, or through special tools. Each of these solutions has its own weaknesses. In addition, there is the open question of how to protect versioned data against unauthorized deletion or modification. Flashback Data Archives answer this question: they not only provide an effective mechanism for versioning records, they also protect those versions against modification and finally even delete them automatically once their retention period has expired. Originally the archives were introduced as a separate option for the Enterprise Edition of Oracle Database 11g under the name Total Recall. At the end of June 2012 the Flashback Data Archives lost their status as a standalone option. Because the archives were always compressed, however, Oracle instead made them a feature of the Advanced Compression Option of the Enterprise Edition (ACO). Since version 11.2.0.4 of the database, compression is no longer mandatory for the archives but optional. This brings another change in licensing terms: anyone who uses compression still has to license ACO. Anyone who uses Flashback Data Archives without compression - developers, for example - gets them as part of every edition of the database from 11.2.0.4 onwards. This change is documented in the licensing manuals for versions 11.2 and 12.1 of the database. The Flashback Data Archives have already been covered within the DBA Community. The article presented here replaces all previous posts on the topic.

    Read the article

  • Hack a Linksys Router into a Ambient Data Monitor

    - by Jason Fitzpatrick
    If you have a data source (like a weather report, bus schedule, or other changing data set) you can pull it and display it with an ambient data monitor; this fun build combines a hacked Linksys router and a modified toy bus to display transit arrival times. John Graham-Cumming wanted to keep an eye on the current bus arrival time tables without constantly visiting the web site to check them. His workaround turns a hacked Linksys router, a display, a modified London city bus (you could hack apart a more project-specific enclosure, of course), and a simple bit of code that polls the bus schedule's API, into a cool ambient data monitor that displays the arrival time, in minutes, of the next two buses that will pass by his stop. The whole thing could easily be adapted to another API to display anything from stock prices to weather temps. Hit up the link below for more information on the project. Ambient Bus Arrival Monitor Hacked from Linksys Router [via Make]

    Read the article

  • Compressing 2D level data

    - by Lucius
    So, I'm developing a 2D, tile-based game and a map maker thingy - all in Java. The problem is that recently I've been having some memory issues when about 4 maps are loaded. Each one of these maps is composed of 128x128 tiles and has 4 layers (for details and stuff). I already spent a good amount of time searching for solutions and the best thing I found was run-length encoding (RLE). It seems easy enough to use with static data, but is there a way to use it with data that is constantly changing, without a big drop in performance? In my maps, supposing I'm compressing the columns, I would have 128 rows, each with some amount of data (hopefully less than it would be without RLE). Whenever I change a tile, that whole row would have to be checked and I'm afraid that would slow down production too much (and I'm on a somewhat tight schedule). Well, worst case scenario I work on each map individually, and save them using RLE, but it would be really nice if I could avoid that. EDIT: What I'm currently using to store the data for the tiles is a 2D array of HashMaps that use the layer as key and store the id of the tile in that position - like this: private HashMap<Integer, Integer>[][] tiles;

    Read the article

  • Data indexing frameworks fit for large E-Commerce applications

    - by Dabu
    we wrote and still maintain a large E-Commerce application. Our feature list resembles what you would expect from most shops. We'd like to improve some of our features, and now the search/suggestion list functionality (enter some letters, a JScripted suggestion list appears) has caught our eye. Currently, we use http://xapian.org/. It has some drawbacks. Firstly, it's not actually the right solution. It has been created to index documents, not ever-changing data in a granularity that an E-Commerce application would need. Secondly, the load on the database is significant when we reindex all data every night. We'd like a framework that has been designed for indexing database data, which can add to the index easily and without much load, which can supply data changes in the backoffice quickly to the frontend without much load and delay. I'm aware of the fact that Xapian is Open Source and even Free Software, so we could adapt it to our needs if we decided to invest the time and manpower. But taking a quick look around for a solution more suited seems fair, right? Oh, and commercial applications are fine, too. FOSS is not required. Thanks a bunch.

    Read the article

  • Non use of persisted data – Part deux

    - by Dave Ballantyne
    In my last blog I showed how persisted data may not be used if you have used the base data in an include on an index. That wasn't the only problem I've had that showed the same symptom. Using the same code as before, I was executing something similar to the below: select BillToAddressID,SOD.SalesOrderDetailID,SOH.CleanedGuid from sales.salesorderheader SOH join Sales.SalesOrderDetail SOD on SOH.SalesOrderID = SOD.SalesOrderID But, due to a distribution error in statistics, I found it necessary to use a table hint. In this case, I wanted to force a loop join: select BillToAddressID,SOD.SalesOrderDetailID,SOH.CleanedGuid from sales.salesorderheader SOH inner loop join Sales.SalesOrderDetail SOD on SOH.SalesOrderID = SOD.SalesOrderID But, being the diligent TSQL developer that I am, looking at the execution plan I noticed that the ‘compute scalar’ operator was again calling the function. Again, Profiler is a more graphic way to view this. All very odd - just because I've forced a join, which has NOTHING to do with my persisted data, something is causing the data to be re-evaluated. Not sure if there is any easy fix you can do to the TSQL here, but again it's a lesson learned (or rather reinforced): examine the execution plan of every query you write to ensure that it is operating as you thought it would.

    Read the article

  • Displaying a Grid of Data in ASP.NET MVC

    One of the most common tasks we face as web developers is displaying data in a grid. In its simplest incarnation, a grid merely displays information about a set of records - the orders placed by a particular customer, perhaps; however, most grids offer features like sorting, paging, and filtering to present the data in a more useful and readable manner. In ASP.NET WebForms the GridView control offers a quick and easy way to display a set of records in a grid, and offers features like sorting, paging, editing, and deleting with just a little extra work. On page load, the GridView automatically renders as an HTML <table> element, freeing you from having to write any markup and letting you focus instead on retrieving and binding the data to display to the GridView. In an ASP.NET MVC application, however, developers are on the hook for generating the markup rendered by each view. This task can be a bit daunting for developers new to ASP.NET MVC, especially those who have a background in WebForms. This is the first in a series of articles that explore how to display grids in an ASP.NET MVC application. This installment starts with a walk-through of creating the ASP.NET MVC application and data access code used throughout this series. Next, it shows how to display a set of records in a simple grid. Future installments examine how to create richer grids that include sorting, paging, filtering, and client-side enhancements. We'll also look at pre-built grid solutions, like the Grid component in the MvcContrib project and JavaScript-based grids like jqGrid. But first things first - let's create an ASP.NET MVC application and see how to display database records in a web page. Read on to learn more! Read More >
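    As a small taste of that hand-written approach - a sketch only, not the article's downloadable demo, with placeholder type names - the controller side can be as simple as handing the records to the view, which then emits the <table> markup itself:

    using System.Collections.Generic;
    using System.Web.Mvc;

    public class Product
    {
        public string Name { get; set; }
        public decimal UnitPrice { get; set; }
    }

    public interface IProductRepository
    {
        IEnumerable<Product> GetAll();
    }

    // The view that Index() returns would loop over the model and write the
    // <table> rows by hand, which is exactly the markup work the article covers.
    public class ProductsController : Controller
    {
        private readonly IProductRepository _repository;

        public ProductsController(IProductRepository repository)
        {
            _repository = repository;
        }

        public ActionResult Index()
        {
            IEnumerable<Product> products = _repository.GetAll();
            return View(products);
        }
    }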

    Read the article

  • Generate a Word document from list data

    - by PeterBrunone
    This came up on a discussion list lately, so I threw together some code to meet the need. In short, a colleague needed to take the results of an InfoPath form survey and give them to the user in Word format. The form data was already in a list item, so it was a simple matter of using the SharePoint API to get the list item, formatting the data appropriately, and using response headers to make the client machine treat the response as MS Word content. The following rudimentary code can be run in an ASPX (or an assembly) in the 12 hive. When you link to the page, send the list name and item ID in the querystring and use them to grab the appropriate data.

    // Clear the current response headers and set them up to look like a Word doc.
    HttpContext.Current.Response.Clear();
    HttpContext.Current.Response.Charset = "";
    HttpContext.Current.Response.ContentType = "application/msword";
    string strFileName = "ThatWordFileYouWanted" + ".doc";
    HttpContext.Current.Response.AddHeader("Content-Disposition", "inline;filename=" + strFileName);

    // Using the current site, get the List by name and then the Item by ID (from the URL).
    string myListName = HttpContext.Current.Request.QueryString["listName"];
    int myID = Convert.ToInt32(HttpContext.Current.Request.QueryString["itemID"]);
    SPSite oSite = SPContext.Current.Site;
    SPWeb oWeb = oSite.OpenWeb();
    SPList oList = oWeb.Lists[myListName];
    SPListItem oListItem = oList.Items.GetItemById(myID);

    // Build a string with the data -- format it with HTML if you like.
    StringBuilder strHTMLContent = new StringBuilder();
    // *
    // Here's where you pull individual fields out of the list item.
    // *

    // Once everything is ready, spit it out to the client machine.
    HttpContext.Current.Response.Write(strHTMLContent);
    HttpContext.Current.Response.Flush();
    HttpContext.Current.Response.End();

    Read the article

  • PASS Summit Preconference and Sessions

    - by Davide Mauri
    I’m very pleased to announce that I’ll be delivering a Pre-Conference at PASS Summit 2012. I’ll speak about Business Intelligence again (as I did in 2010) but this time I’ll focus only on the Data Warehouse, since it’s a big topic even on its own. I’ll discuss not only what a Data Warehouse is and how it can be modeled and built, but also how its development can be approached using an Agile approach, bringing the experience I have gathered in this field. Building the Agile Data Warehouse with SQL Server 2012 http://www.sqlpass.org/summit/2012/Sessions/SessionDetails.aspx?sid=2821 I’m sure you’ll like it, especially if you’re starting to create a BI Solution and you’re wondering what a Data Warehouse is, whether it is still useful nowadays when everyone talks about Self-Service BI and In-Memory databases, and what the correct path is to follow in order to have a successful project up and running. Besides this Preconference, I’ll also deliver a regular session, this time related to database administration, monitoring and tuning: DMVs: Power in Your Hands http://www.sqlpass.org/summit/2012/Sessions/SessionDetails.aspx?sid=3204 Here we’ll dive into the most useful DMVs, so that you’ll see how they can help in everyday management to discover, understand and optimize your SQL Server installation, from the server itself down to the single query. See you there!!!!!

    Read the article
