Search Results

Search found 59643 results on 2386 pages for 'data migration'.

Page 104/2386 | < Previous Page | 100 101 102 103 104 105 106 107 108 109 110 111  | Next Page >

  • Keeping a domain model consistent with actual data

    - by fstuijt
    Recently domain driven design got my attention, and while thinking about how this approach could help us I came across the following problem. In DDD the common approach is to retrieve entities (or better, aggregate roots) from a repository which acts as an in-memory collection of these entities. After these entities have been retrieved, they can be updated or deleted by the user; however, after retrieval they are essentially disconnected from the data source, and one must actively inform the repository to update the data source and make it consistent again with our in-memory representation. What is the DDD approach to retrieving entities that should remain connected to the data source? For example, in our situation we retrieve a series of sensors that have a specific measurement at the time of retrieval. Over time, these measurement values may change, and our business logic in the domain model should respond to these changes properly. E.g., domain events may be raised if a sensor value exceeds a predefined threshold. However, using the repository approach, these sensor values are just snapshots, disconnected from the data source. Does anyone have an idea on how to solve this following the DDD approach?
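    One direction sometimes taken (a sketch only, with invented names, not an answer taken from the post): keep the repository as a plain disconnected collection, and let an application service periodically re-read the live measurements and push them into the aggregates, which decide for themselves whether a domain event must be raised.

        using System;
        using System.Collections.Generic;

        // Aggregate root: raises a domain event when a freshly observed value crosses its threshold.
        public class Sensor
        {
            public Guid Id { get; private set; }
            public double Threshold { get; private set; }
            public double LastMeasurement { get; private set; }

            public Sensor(Guid id, double threshold) { Id = id; Threshold = threshold; }

            public void RecordMeasurement(double value, Action<SensorThresholdExceeded> publish)
            {
                LastMeasurement = value;
                if (value > Threshold)
                    publish(new SensorThresholdExceeded(Id, value));
            }
        }

        public class SensorThresholdExceeded
        {
            public Guid SensorId { get; private set; }
            public double Value { get; private set; }
            public SensorThresholdExceeded(Guid sensorId, double value) { SensorId = sensorId; Value = value; }
        }

        // Assumed interfaces: the repository stays a snapshot collection,
        // while a separate gateway exposes the live readings.
        public interface ISensorRepository { IEnumerable<Sensor> GetAll(); }
        public interface ISensorReadings { double CurrentValueFor(Guid sensorId); }

        // Application service: the "connection" to the data source lives here,
        // outside the domain model, and simply feeds fresh values in.
        public class SensorPollingService
        {
            private readonly ISensorRepository repository;
            private readonly ISensorReadings readings;

            public SensorPollingService(ISensorRepository repository, ISensorReadings readings)
            {
                this.repository = repository;
                this.readings = readings;
            }

            public void Poll(Action<SensorThresholdExceeded> publish)
            {
                foreach (var sensor in repository.GetAll())
                    sensor.RecordMeasurement(readings.CurrentValueFor(sensor.Id), publish);
            }
        }

    The aggregates themselves never hold a live connection; only the polling (or event-subscribing) service does, which keeps the domain model testable and the repository semantics unchanged.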

    Read the article

  • PASS Summit Preconference and Sessions

    - by Davide Mauri
    I’m very pleased to announce that I’ll be delivering a Pre-Conference at PASS Summit 2012. I’ll speak about Business Intelligence again (as I did in 2010), but this time I’ll focus only on the Data Warehouse, since it’s a big topic on its own. I’ll discuss not only what a Data Warehouse is and how it can be modeled and built, but also how its development can be approached using an Agile approach, bringing in the experience I’ve gathered in this field. Building the Agile Data Warehouse with SQL Server 2012 http://www.sqlpass.org/summit/2012/Sessions/SessionDetails.aspx?sid=2821 I’m sure you’ll like it, especially if you’re starting to create a BI solution and you’re wondering what a Data Warehouse is, whether it is still useful nowadays when everyone talks about Self-Service BI and In-Memory databases, and what’s the correct path to follow in order to have a successful project up and running. Besides this Preconference, I’ll also deliver a regular session, this time related to database administration, monitoring and tuning: DMVs: Power in Your Hands http://www.sqlpass.org/summit/2012/Sessions/SessionDetails.aspx?sid=3204 Here we’ll dive into the most useful DMVs, so that you’ll see how they can help in everyday management to discover, understand and optimize your SQL Server installation, from the server itself down to the single query. See you there!!!!!

    Read the article

  • How can data not stored in a DB be accessed from any activity in Android?

    - by jul
    hi, I'm passing data to a ListView to display some restaurant names. Now when clicking on an item I'd like to start another activity to display more restaurant data. I'm not sure about how to do it. Shall I pass all the restaurant data in a bundle through the intent object? Or shall I just pass the restaurant id and get the data in the other activity? In that case, how can I access my restaurantList from the other activity? In any case, how can I get data from the restaurant I clicked on (the view only contains the name)? Any help, pointers welcome! Thanks Jul

        ListView lv = (ListView) findViewById(R.id.listview);
        lv.setAdapter(new ArrayAdapter<String>(this,
                android.R.layout.simple_list_item_1,
                restaurantList.getRestaurantNames()));
        lv.setOnItemClickListener(new OnItemClickListener() {
            public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
                Intent i = new Intent(Atable.this, RestaurantEdit.class);
                Bundle b = new Bundle();
                //b.putInt("id", ? );
                startActivityForResult(i, ACTIVITY_EDIT);
            }
        });

    RestaurantList.java

        package org.digitalfarm.atable;

        import java.util.ArrayList;
        import java.util.List;

        public class RestaurantList {

            private List<Restaurant> restaurants = new ArrayList<Restaurant>();

            public List<Restaurant> getRestaurants() {
                return this.restaurants;
            }

            public void setRestaurants(List<Restaurant> restaurants) {
                this.restaurants = restaurants;
            }

            public List<String> getRestaurantNames() {
                List<String> restaurantNames = new ArrayList<String>();
                for (int i = 0; i < this.restaurants.size(); i++) {
                    restaurantNames.add(this.restaurants.get(i).getName());
                }
                return restaurantNames;
            }
        }

    Restaurant.java

        package org.digitalfarm.atable;

        public class Restaurant {

            private int id;
            private String name;
            private float latitude;
            private float longitude;

            public int getId() { return this.id; }
            public void setId(int id) { this.id = id; }

            public String getName() { return this.name; }
            public void setName(String name) { this.name = name; }

            public float getLatitude() { return this.latitude; }
            public void setLatitude(float latitude) { this.latitude = latitude; }

            public float getLongitude() { return this.longitude; }
            public void setLongitude(float longitude) { this.longitude = longitude; }
        }
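    One possible direction (a sketch only, not from the original post; the extra name "restaurant_id" is invented here): pass just the clicked restaurant's id through the Intent, and let RestaurantEdit reload the details from whatever shared store holds them (a database, or the RestaurantList kept in a custom Application subclass).

        // In the list activity: the adapter was built from getRestaurantNames(),
        // so 'position' lines up with the order of restaurantList.getRestaurants().
        lv.setOnItemClickListener(new OnItemClickListener() {
            public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
                Restaurant clicked = restaurantList.getRestaurants().get(position);
                Intent i = new Intent(Atable.this, RestaurantEdit.class);
                i.putExtra("restaurant_id", clicked.getId());
                startActivityForResult(i, ACTIVITY_EDIT);
            }
        });

        // In RestaurantEdit.onCreate(): read the id back and look the restaurant up again.
        int restaurantId = getIntent().getIntExtra("restaurant_id", -1);

    Passing only the id keeps the Intent small and avoids making Restaurant Parcelable; the trade-off is that the second activity needs access to the same data source to re-fetch the record.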

    Read the article

  • What are best practices for collecting, maintaining and ensuring accuracy of a huge data set?

    - by Kyle West
    I am posing this question looking for practical advice on how to design a system. Sites like amazon.com and Pandora have and maintain huge data sets to run their core business. For example, Amazon (and every other major e-commerce site) has millions of products for sale, images of those products, pricing, specifications, and so on. Ignoring the data coming in from 3rd party sellers and the user generated content, all that "stuff" had to come from somewhere and is maintained by someone. It's also incredibly detailed and accurate. How do they do it? Is there just an army of data-entry clerks, or have they devised systems to handle the grunt work? My company is in a similar situation. We maintain a huge (tens of millions of records) catalog of automotive parts and the cars they fit. We've been at it for a while now and have come up with a number of programs and processes to keep our catalog growing and accurate; however, it seems like to grow the catalog to x items we need to grow the team to y. I need to figure out some ways to increase the efficiency of the data team, and hopefully I can learn from the work of others. Any suggestions are appreciated; even more valuable would be links to content I could spend some serious time reading. THANKS! Kyle

    Read the article

  • About Data Objects and DAO Design when using Hibernate

    - by X. Ma
    I'm hesitating between two designs for a database project using Hibernate. Design #1: (1) Create a general data provider interface, including a set of DAO interfaces and general data container classes. It hides the underlying implementation. A data provider implementation could access data in a database, an XML file, a service, or something else; the user of a data provider does not need to know about it. (2) Create a database library with Hibernate. This library implements the data provider interface in (1). The bad thing about Design #1 is that in order to hide the implementation details, I need to create two sets of data container classes: one in the general data provider interface - let's call them DPI-Objects - and another set used in the database library, exclusively for entity/attribute mapping in Hibernate - let's call them H-Objects. In the DAO implementation, I need to read data from the database to create H-Objects (via Hibernate) and then convert the H-Objects into DPI-Objects. Design #2: Do not create a general data provider interface. Expose H-Objects directly to components that use the database library, so the user of the database library needs to be aware of Hibernate. I like Design #1 more, but I don't want to create two sets of data container classes. Is that the right way to hide H-Objects and other Hibernate implementation details from the user who uses the database-based data provider? Are there any drawbacks to Design #2? I will not implement another data provider in the near future, so should I just forget about the data provider interface and use Design #2? What do you think about this? Thanks for your time!
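    For what it's worth, a minimal sketch of what Design #1 boils down to (all class names here are invented for illustration, and CustomerEntity stands for a Hibernate-mapped H-Object with getId()/getName()): the DAO interface only ever returns DPI-Objects, and the Hibernate-backed implementation converts its H-Objects before handing them out.

        // The DPI-Object: a plain, Hibernate-free data holder.
        public class CustomerDto {
            private final long id;
            private final String name;
            public CustomerDto(long id, String name) { this.id = id; this.name = name; }
            public long getId() { return id; }
            public String getName() { return name; }
        }

        // General data provider interface: callers see only the DPI-Object.
        public interface CustomerDao {
            CustomerDto findById(long id);
        }

        // Hibernate-based implementation: loads the mapped H-Object and converts it,
        // so Hibernate types never leak outside the database library.
        public class HibernateCustomerDao implements CustomerDao {

            private final org.hibernate.SessionFactory sessionFactory;

            public HibernateCustomerDao(org.hibernate.SessionFactory sessionFactory) {
                this.sessionFactory = sessionFactory;
            }

            public CustomerDto findById(long id) {
                org.hibernate.Session session = sessionFactory.openSession();
                try {
                    CustomerEntity entity = (CustomerEntity) session.get(CustomerEntity.class, id);
                    return entity == null ? null : new CustomerDto(entity.getId(), entity.getName());
                } finally {
                    session.close();
                }
            }
        }

    Design #2 simply removes the conversion step and returns CustomerEntity from the DAO: less code, but every caller becomes coupled to the Hibernate mapping.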

    Read the article

  • Understanding the code: how the heap is written in process migration in Solaris

    - by akshay
    Hi guys, I need help understanding what this piece of code actually does, as it is part of my project and I am stuck here. The code is from libckpt on Solaris.

        /**********************************
         * function: write_heap
         * args:     map_fd  -- file descriptor for map file
         *           data_fd -- file descriptor for data file
         * returns:  no. of chunks written on success, -1 on failure
         * side effects: writes all included segments of the heap to ckpt files
         * misc.:    If we are forking and copyonwrite is set, we will write the
         *           heap from bottom to top, moving the brk pointer up each time
         *           so that we don't get a page copied if the
         * called from: take_ckpt()
         ***********************************/
        static int write_heap(int map_fd, int data_fd)
        {
            Dlist curptr, endptr;
            int no_chunks = 0, pn;
            long size;
            caddr_t stop, addr;

            if (ckptflags.incremental) {    /*-- incremental checkpointing on? --*/
                endptr = ckptglobals.inc_list->main->flink;

                /*-- for each included chunk of the heap --*/
                for (curptr = ckptglobals.inc_list->main->blink->blink;
                     curptr != endptr; curptr = curptr->blink) {

                    /*-- write out the last page in the included chunk --*/
                    stop = curptr->addr;
                    pn = ((long)curptr->stop - (long)sys.DATASTART) / PAGESIZE;
                    if (isdirty(pn)) {
                        addr = (caddr_t)max((long)curptr->addr,
                                            (long)((pn * PAGESIZE) + sys.DATASTART));
                        size = (long)curptr->stop - (long)addr;
                        debug(stderr, "DEBUG: Writing heap from 0x%x to 0x%x, pn = %d\n",
                              addr, addr + size, pn);
                        if (write_chunk(addr, size, map_fd, data_fd) == -1) {
                            return -1;
                        }
                        if ((int)addr > (int)(&end) && ckptflags.enhanced_fork) {
                            brk(addr);
                        }
                        no_chunks++;
                    }

                    /*-- write out all the whole pages in the middle of the chunk --*/
                    for (pn--; pn * PAGESIZE + sys.DATASTART >= stop; pn--) {
                        if (isdirty(pn)) {
                            addr = (caddr_t)((pn * PAGESIZE) + sys.DATASTART);
                            debug(stderr, "DEBUG: Writing heap from 0x%x to 0x%x, pn = %d\n",
                                  addr, addr + PAGESIZE, pn);
                            if (write_chunk(addr, PAGESIZE, map_fd, data_fd) == -1) {
                                return -1;
                            }
                            if ((int)addr > (int)(&end) && ckptflags.enhanced_fork) {
                                brk(addr);
                            }
                            no_chunks++;
                        }
                    }

                    /*-- write out the first page in the included chunk --*/
                    addr = curptr->addr;
                    size = ((pn + 1) * PAGESIZE + sys.DATASTART) - addr;
                    if (size > 0 && (isdirty(pn))) {
                        debug(stderr, "DEBUG: Writing heap from 0x%x to 0x%x\n",
                              addr, addr + size);
                        if (write_chunk(addr, size, map_fd, data_fd) == -1) {
                            return -1;
                        }
                        if ((int)addr > (int)(&end) && ckptflags.enhanced_fork) {
                            brk(addr);
                        }
                        no_chunks++;
                    }
                }
            } else {    /*-- incremental checkpointing off! --*/
                endptr = ckptglobals.inc_list->main->blink;

                /*-- for each included chunk of the heap --*/
                for (curptr = ckptglobals.inc_list->main->flink->flink;
                     curptr != endptr; curptr = curptr->flink) {
                    debug(stderr, "DEBUG: saving memory from 0x%x to 0x%x\n",
                          curptr->addr, curptr->addr + curptr->size);
                    if (write_chunk(curptr->addr, curptr->size, map_fd, data_fd) == -1) {
                        return -1;
                    }
                    if ((int)addr > (int)(&end) && ckptflags.enhanced_fork) {
                        brk(addr);
                    }
                    no_chunks++;
                }
            }
            return no_chunks;
        }

    Read the article

  • How do I bind an iTunes style source list to an NSTableView using Core Data?

    - by Austin
    I have an iTunes style interface in my application: a source list (NSOutlineView) on the left that contains different libraries and playlists, with an NSTableView on the right side of the interface displaying information for "Presentations". Similar to iTunes, I am showing the same type of information in the table view whether a library or playlist is selected (title, author, date created, etc.). I currently have an NSArrayController connected to my NSTableView and was setting the fetch predicate based on what was selected in the source list. This works fine when selecting a library because I can just set the fetch predicate to filter by the "type" field in my Presentation Core Data entity. When I try to adjust the fetch predicate for a playlist, however, it doesn't look like there is any way to set it, because I've got a table in between Playlists and Presentations to keep track of the order within the Playlist. According to the Apple docs, these types of predicates are not doable with Core Data (it basically doesn't do multiple inner joins). Below is the relevant portion of my Data Model. Is my data model set up incorrectly? Should I drop the NSArrayController and connect the NSTableView up by hand? I'm trying to figure out if there is a simple fix, or really a design flaw.

    Read the article

  • Handling missing/incomplete data in R--is there a function to mask but not remove NAs?

    - by doug
    As you would expect from a DSL aimed at data analysis, R handles missing/incomplete data very well. For instance, many R functions have an 'na.rm' flag that you can set to 'T' to remove the NAs:

        mean(c(5, 6, 12, 87, 9, NA, 43, 67), na.rm=T)

    But if you want to deal with NAs before the function call, you need to do something like this. To remove each 'NA' from a vector:

        vx = vx[!is.na(vx)]

    To remove each 'NA' from a vector and replace it with a '0':

        ifelse(is.na(vx), 0, vx)

    To remove each entire row that contains an 'NA' from a data frame:

        dfx = dfx[complete.cases(dfx),]

    All of these functions permanently remove 'NA' or rows with an 'NA' in them. Sometimes this isn't quite what you want, though--making an 'NA'-excised copy of the data frame might be necessary for the next step in the workflow, but in subsequent steps you often want those rows back (e.g., to calculate a column-wise statistic for a column that has missing rows caused by a prior call to 'complete.cases', even though that column itself has no 'NA' values in it). To be as clear as possible about what I'm looking for: Python/NumPy has a class, 'masked array', with a 'mask' method, which lets you conceal--but not remove--NAs during a function call. Is there an analogous function in R?
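    A small sketch of the closest base-R idiom (an illustration, not part of the original question; it assumes dfx holds only numeric columns): keep a logical index around as the "mask" and apply it per call, so the underlying objects are never altered.

        # A reusable logical "mask": TRUE for rows without any NA
        ok <- complete.cases(dfx)

        colMeans(dfx[ok, ])   # the mask is applied for this call only
        nrow(dfx)             # dfx itself still contains every row, NAs included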

    Read the article

  • How to write these two queries for a simple data warehouse, using ANSI SQL?

    - by morpheous
    I am writing a simple data warehouse that will allow me to query the table to observe periodic (say weekly) changes in data, as well as changes in the change of the data (e.g. the week-to-week change in the weekly sale amount). For the purposes of simplicity, I will present very simplified (almost trivialized) versions of the tables I am using here. The sales data table is a view and has the following structure:

        CREATE TABLE sales_data (
            sales_time date   NOT NULL,
            sales_amt  double NOT NULL
        )

    For the purpose of this question, I have left out other fields you would expect to see - like product_id, sales_person_id, etc. - as they have no direct relevance to this question. AFAICT, the only fields that will be used in the query are the sales_time and the sales_amt fields (unless I am mistaken). I also have a date dimension table with the following structure:

        CREATE TABLE date_dimension (
            id         integer NOT NULL,
            datestamp  date    NOT NULL,
            day_part   integer NOT NULL,
            week_part  integer NOT NULL,
            month_part integer NOT NULL,
            qtr_part   integer NOT NULL,
            year_part  integer NOT NULL
        )

    which partitions dates into reporting ranges. I need to write queries that will allow me to do the following: (1) Return the change in week-on-week sales_amt for a specified period, for example the change between sales today and sales N days ago - where N is a positive integer (N == 7 in this case). (2) Return the change in the change of sales_amt for a specified period. In (1) we calculated the week-on-week change; now we want to know how that change differs from the (week-on-week) change calculated last week. I am stuck at this point, as SQL is my weakest skill. I would be grateful if an SQL master could explain how I can write these queries in a DB-agnostic way (i.e. using ANSI SQL).
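    One portable way such queries are often written is with self-joins on a date offset. The sketch below is only an illustration (it assumes at most one sales_data row per sales_time; with several rows per day you would aggregate into a daily total first):

        -- (1) Week-on-week change: today's sales versus the same day 7 days earlier.
        SELECT cur.sales_time,
               cur.sales_amt - prev.sales_amt AS week_on_week_change
        FROM   sales_data cur
        JOIN   sales_data prev
               ON prev.sales_time = cur.sales_time - INTERVAL '7' DAY;

        -- (2) Change of the change: this week's delta minus last week's delta.
        SELECT cur.sales_time,
               (cur.sales_amt - prev1.sales_amt)
             - (prev1.sales_amt - prev2.sales_amt) AS change_of_change
        FROM   sales_data cur
        JOIN   sales_data prev1 ON prev1.sales_time = cur.sales_time - INTERVAL '7' DAY
        JOIN   sales_data prev2 ON prev2.sales_time = cur.sales_time - INTERVAL '14' DAY;

    Date arithmetic syntax varies by engine (INTERVAL is the ANSI form; SQL Server, for example, uses DATEADD), and the date_dimension table can replace the interval math entirely by joining on week_part and year_part instead.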

    Read the article

  • [C#] How to receive uncrackable data or so ? ;P

    - by Prix
    Hi, I am working on a C# application to communicate with my website and retrieve some information from it, using SSL, which is working just fine. Now what I want/need is a way to receive encrypted or codified or obfuscated data, such that if someone cracks my application they will not be able to decrypt the data because it needs something from the server (API, website), yet the application still needs to decrypt it in order to use it... Initially I was thinking of an internal RSA key pair to send and receive the encrypted data, but let's consider that someone has cracked the application: they could just replace those keys with keys they have made. So I was looking into some methods but haven't found, or been able to think of, any way to harden this... I was learning about RSA and encryption, started developing this as a self-learning project, got involved with it, and now I am trying to figure out a way to receive data like that... I have considered obfuscating and compiling my code with packers etc., but this is not about packing; I am more interested in knowing a better way to secure what I described. I know it may be, or is, impossible, but I am still looking for some approach. I would appreciate advice, suggestions and C# code samples; if you need more information or anything please let me know.

    Read the article

  • How do I authenticate an ADO.NET Data Service?

    - by lsb
    Hi! I've created an ADO.NET Data Service hosted in an Azure worker role. I want to pass credentials from a simple console client to the service, then validate them using a QueryInterceptor. Unfortunately, the credentials don't seem to be making it over the wire. The following is a simplified version of the code I'm using, starting with the DataService on the server:

        using System;
        using System.Data.Services;
        using System.Linq.Expressions;
        using System.ServiceModel;
        using System.Web;

        namespace Oslo.Worker
        {
            [ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
            public class AdminService : DataService<OsloEntities>
            {
                public static void InitializeService(IDataServiceConfiguration config)
                {
                    config.SetEntitySetAccessRule("*", EntitySetRights.All);
                    config.SetServiceOperationAccessRule("*", ServiceOperationRights.All);
                }

                [QueryInterceptor("Pairs")]
                public Expression<Func<Pair, bool>> OnQueryPairs()
                {
                    // This doesn't work!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
                    if (HttpContext.Current.User.Identity.Name != "ADMIN")
                        throw new Exception("Ooops!");

                    return p => true;
                }
            }
        }

    Here's the AdminHost I'm using to instantiate the AdminService in my Azure worker role:

        using System;
        using System.Data.Services;

        namespace Oslo.Worker
        {
            public class AdminHost : DataServiceHost
            {
                public AdminHost(Uri baseAddress)
                    : base(typeof(AdminService), new Uri[] { baseAddress })
                {
                }
            }
        }

    And finally, here's the client code:

        using System;
        using System.Data.Services.Client;
        using System.Net;
        using Oslo.Shared;

        namespace Oslo.ClientTest
        {
            public class AdminContext : DataServiceContext
            {
                public AdminContext(Uri serviceRoot, string userName, string password)
                    : base(serviceRoot)
                {
                    Credentials = new NetworkCredential(userName, password);
                }

                public DataServiceQuery<Order> Orders
                {
                    get { return base.CreateQuery<Order>("Orders"); }
                }
            }
        }

    I should mention that the code works great, with the single exception that the credentials are not being passed over the wire. Any help in this regard would be greatly appreciated! Thanks....
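    A sketch of one workaround that is sometimes used here (an assumption on my part, not code from the post; serviceRoot stands for the service URI, and a hand-rolled Basic header like this is only meaningful over SSL): NetworkCredential is typically sent only after an authentication challenge, so attach an explicit Authorization header to every outgoing request and read it back inside the service.

        // Client side: stamp every request with a header the service can check.
        var ctx = new AdminContext(serviceRoot, "ADMIN", "secret");
        ctx.SendingRequest += (sender, e) =>
        {
            string token = Convert.ToBase64String(
                System.Text.Encoding.UTF8.GetBytes("ADMIN:secret"));
            e.RequestHeaders.Add("Authorization", "Basic " + token);
        };

        // Server side, e.g. inside OnQueryPairs(): read the header from the WCF web context.
        string authHeader = System.ServiceModel.Web.WebOperationContext.Current
                                .IncomingRequest.Headers["Authorization"];

    The interceptor can then decode and validate the header itself instead of relying on HttpContext.Current.User, which is only populated when the host is configured for IIS/ASP.NET authentication.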

    Read the article

  • faster way to compare rows in a data frame

    - by aguiar
    Consider the data frame below. I want to compare each row with the rows below it and then take the rows that are equal in more than 3 values. I wrote the code below, but it is very slow if you have a large data frame. How could I do that faster?

        data <- as.data.frame(matrix(c(10,11,10,13,9,
                                       10,11,10,14,9,
                                       10,10, 8,12,9,
                                       10,11,10,13,9,
                                       13,13,10,13,9), nrow=5, byrow=T))
        rownames(data) <- c("sample_1","sample_2","sample_3","sample_4","sample_5")

        > data
                 V1 V2 V3 V4 V5
        sample_1 10 11 10 13  9
        sample_2 10 11 10 14  9
        sample_3 10 10  8 12  9
        sample_4 10 11 10 13  9
        sample_5 13 13 10 13  9

        tab <- data.frame(sample = NA, duplicate = NA, matches = NA)
        dfrow <- 1
        for (i in 1:nrow(data)) {
            sample <- data[i, ]
            for (j in (i+1):nrow(data)) if (i+1 <= nrow(data)) {
                matches <- 0
                for (V in 1:ncol(data)) {
                    if (data[j, V] == sample[, V]) {
                        matches <- matches + 1
                    }
                }
                if (matches > 3) {
                    duplicate <- data[j, ]
                    pair <- cbind(rownames(sample), rownames(duplicate), matches)
                    tab[dfrow, ] <- pair
                    dfrow <- dfrow + 1
                }
            }
        }

        > tab
            sample duplicate matches
        1 sample_1  sample_2       4
        2 sample_1  sample_4       5
        3 sample_2  sample_4       4
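    One faster direction (a sketch, not from the original post): convert to a matrix and compare all row pairs at once, which drops both inner loops and the slow data-frame indexing.

        # Compare every pair of rows in one shot
        m <- as.matrix(data)
        idx <- combn(nrow(m), 2)                                  # all row-index pairs (i < j)
        matches <- apply(idx, 2, function(p) sum(m[p[1], ] == m[p[2], ]))

        keep <- matches > 3
        tab <- data.frame(sample    = rownames(m)[idx[1, keep]],
                          duplicate = rownames(m)[idx[2, keep]],
                          matches   = matches[keep])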

    Read the article

  • CRM@Oracle Series: Showcasing Innovation with Oracle Customer Hub

    - by tony.berk
    When is having too many customers a challenge? It is not something too many people would complain about. But from a data perspective, one challenge is to keep each customer's data consistent across multiple enterprise systems such as CRM, ERP, and all of your other related applications. Buckle your seat belts, we are going a bit technical today... If you have ever tried it, you know it isn't easy. If you haven't, don't go there alone! Customer data integration projects are challenging and, depending on the environment, require sharp, innovative people to succeed. Want to hear from some guys who have done it and succeeded? Here is an interview with Dan Lanir and Afzal Asif from Oracle's Applications IT CRM Systems group on implementing Oracle Customer Hub and innovation. For more interesting discussions on innovation, check out the Oracle Innovation Showcase.

    Read the article

  • Copying Columns from Grid to Clipboard in SQL Developer

    - by thatjeffsmith
    There are several ways to get data from a query or a table|view to the clipboard. You know the tried and true: copy and paste. But what if you only want one or more columns, not every column? There are several ways to do this; let’s see if we can’t identify all of them.

    Write your query to only include the data you want
    Obvious? Yes. Needed to be said? Definitely. The best tuning tip is to only ask for the data you need, only when you absolutely need it. But let’s look at a few more practical ways to do this.

    Hide the unwanted columns
    Mouse right click on a column header. In the context menu, select ‘Columns.’ Hide the columns you don’t want. Copy and paste.
    WYSIWYG Grids, Hide Columns and Filter Rows

    Mouse select the columns
    Obvious, but a bit painful. For a very large dataset, you’ll be holding down the Shift and PageDown buttons – but it works. Remember to use Ctrl+Shift+C to get the column headers with the data.

    Use the Export Wizard
    This used to be called ‘Unload’ – agreed, not a great name. So, we changed it. In a grid, right mouse click on the data, and on the context menu, select ‘Export…’ Select your format – I suggest ‘delimited’ or ‘fixed’ for copying data to the clipboard. You can export to the clipboard, yes you can! Click ‘Next.’ Click in the Columns dialog, and choose the columns you want copied.
    Trim the columns you don't want copied
    Click ‘Finish.’ Alt or Ctrl tab to your window or application of choice. And paste!

        "FIRST_NAME" "LAST_NAME"
        "Donald" "OConnell"
        "Douglas" "Grant"
        "Jennifer" "Whalen"
        "Pat" "Fay"
        "Susan" "Mavris"
        "William" "Gietz"
        "Alexander" "Hunold"
        "Bruce" "Ernst"
        "David" "Austin"
        "Valli" "Pataballa"
        "Diana" "Lorentz"
        "Daniel" "Faviet"
        "John" "Chen"
        "Ismael" "Sciarra"
        "Jose Manuel" "Urman"
        "Luis" "Popp"
        "Alexander" "Khoo"
        "Shelli" "Baida"
        "Sigal" "Tobias"
        "Guy" "Himuro"
        "Karen" "Colmenares"
        "Matthew" "Weiss"
        "Adam" "Fripp"
        "Payam" "Kaufling"
        "Shanta" "Vollman"
        "Kevin" "Mourgos"
        "Julia" "Nayer"
        "Irene" "Mikkilineni"
        ...

    There’s probably at least 2 or 3 more ways, but try these and let me know how we can improve things. I’ve already gotten a request to be able to include the SQL text used to populate the dataset on the copy to clipboard, and it’s now on our to-do list.

    Read the article

  • And the Winners of Fusion Middleware Innovation Awards in Data Integration are…

    - by Irem Radzik
    At OpenWorld, we announced the winners of the Fusion Middleware Innovation Awards 2012. Raymond James and Morrison Supermarkets were selected in the data integration category for their innovative use of Oracle’s data integration products and the great results they have achieved. In this blog I would like to briefly introduce you to these award winning projects. Raymond James is a diversified financial services company, which provides financial planning, wealth management, investment banking, and asset management. They are using Oracle GoldenGate and Oracle Data Integrator to feed their operational data store (ODS), which supports application services across the enterprise. A major requirement for their project was low data latency, as key decisions are made based on the data in the ODS. They were able to fulfill this requirement due to Oracle Data Integrator’s integrated solution with Oracle GoldenGate. Oracle GoldenGate captures changed data from different systems, including Oracle Database, HP NonStop and Microsoft SQL Server, into a single data store on SQL Server 2008. Oracle Data Integrator provides data transformations for the ODS. Leveraging ODI’s integration with GoldenGate, Raymond James now sees a 9 second median latency (from source commit to ODS target commit). The ODS solution delivers high quality, accurate data for consuming applications such as Raymond James’ next generation client and portfolio management systems as well as real-time operational reporting. It enables timely information for making better decisions. There are more benefits Raymond James achieved with this implementation of Oracle’s data integration solution. The software developers and architects of this solution, Tim Garrod and Ryan Fonnett, told us during their presentation at OpenWorld that they also reduced application complexity significantly while improving developer productivity through trusted operational services. They were able to utilize CDC to generate alerts for business users, and for applications (for example for cache hydration mechanisms). One cool innovation example among many in this project is that, using ODI's flexible architecture, Tim and Ryan could build 24/7 self-healing processes. And these processes have hardly ever failed; the integration processes fix errors themselves. Pretty amazing, and a great solution for environments that need such reliability and availability. (You can see Tim and Ryan’s photo with the Innovation Award above.) The other winner this year in the data integration category, Morrison Supermarkets, is the UK’s 4th largest grocery retailer. The company has been migrating all their legacy applications on to a new-world application set based on Oracle and consolidating all BI on to a single Oracle platform.
    The company recently implemented Oracle Exadata as the data warehouse engine and uses Oracle Business Intelligence EE. Their goal with deploying GoldenGate and ODI was to provide BI data to the enterprise in a way that also supports operational decision making requirements from a wide range of Oracle based ERP applications such as E-Business Suite, PeopleSoft, and Oracle Retail Suite. They use GoldenGate’s log-based change data capture capabilities and Oracle Data Integrator to populate the Oracle Retail Data Model. The electronic point of sale (EPOS) integration solution they built processes over 80 million transactions/day at busy periods in near real time (15 mins). It provides valuable insight to Retail and Commercial teams for both intra-day and historical trend analysis. As I mentioned in yesterday’s blog, the right data integration platform can transform the business. Here is another example: the point-of-sale integration enabled the grocery chain to optimize its stock management, leading to another award: Morrisons won the Grocer 33 award in 2012 - beating all other major UK supermarkets in product availability. Congratulations, Morrisons, on another award! Celebrating the innovation and the success of our customers with Oracle’s data integration products was definitely a highlight of Oracle OpenWorld for me. I look forward to hearing more from Raymond James, Morrisons, and the other customers that presented their data integration projects at OpenWorld, on how they are creating more value for their organizations.

    Read the article

  • Announcing Sesame Data Browser

    On the occasion of MIX10, which is currently taking place in Las Vegas, I'd like to announce Sesame Data Browser. Sesame will be a suite of tools for dealing with data, and Sesame Data Browser will be the first tool from that suite. Today, during the second MIX10 keynote, Microsoft demonstrated how they are pushing hard to get OData adopted. If you don't know about OData, you can visit the just revamped dedicated website: http://odata.org. There you'll find out about the OData protocol, which allows you...

    Read the article

  • New Exadata Book Available Soon

    - by Rob Reynolds
    Oracle Press is set to release the first book on data warehouse performance and Exadata on March 14th. Achieving Extreme Performance with Oracle Exadata, by my colleagues Rick Greenwald, Robert Stackowiak, Maqsood Alam, and Mans Bhuller, will be available at your favorite booksellers next week. I've seen a sneak peek of the content in this book and it's a great way to fully grasp the power of Exadata and how to best apply it to achieve extreme data warehouse performance. From the publisher's description: Achieving Extreme Performance with Oracle Exadata and the Sun Oracle Database Machine is filled with best practices for deployments, hardware sizing, architecting the database machine environments for maximum availability, and backup and recovery. Oracle Database 11gR2 features used within these offerings, as well as migration options and paths for Oracle and non-Oracle databases to Oracle Exadata, are covered. This Oracle Press guide also discusses architecture, administration, maintenance, monitoring, and tuning of Oracle Exadata Storage Servers and the Sun Oracle Database Machine. If your company is considering Exadata, or if you need more horsepower out of your data warehouse, I highly recommend grabbing a copy of this book next week.

    Read the article

  • Implementing Silverlight Coverflow with ADO.NET/WCF Data Services in SharePoint 2010

    - by Sahil Malik
    WOOHOO!! My next video is online. In this video, I show how you can implement a Silverlight Coverflow-like UI using the Telerik Silverlight toolset. In this demo, I talk to a picture library running in SharePoint 2010, and use ADO.NET Data Services to load up the various images stored in the picture library. I then use the Telerik Silverlight toolset integrated with ADO.NET Data Services/WCF Data Services, and show a fancy Coverflow-like UI. As always, very few slides, completely hands-on, all code written in front of your eyes! Enjoy the video. And yes, there are a couple of more exciting videos coming! Stay tuned!

    Read the article

  • WCF Error when using “Match Data” function in MDS Excel AddIn

    - by Davide Mauri
    If you’re using MDS and DQS with the Excel integration, you may get an error when trying to use the “Match Data” feature, which uses DQS to help identify duplicate data in your data set. The error is quite obscure, and you have to enable WCF error reporting in order to get the error details; you’ll then discover that they are related to some missing permissions in the MDS and DQS_STAGING_DATA databases. To fix the problem you just have to grant the needed permissions, as the following script does:

        USE MDS
        GO
        GRANT SELECT ON mdm.tblDataQualityOperationsState TO [VMSRV02\mdsweb]
        GRANT INSERT ON mdm.tblDataQualityOperationsState TO [VMSRV02\mdsweb]
        GRANT DELETE ON mdm.tblDataQualityOperationsState TO [VMSRV02\mdsweb]
        GRANT UPDATE ON mdm.tblDataQualityOperationsState TO [VMSRV02\mdsweb]

        USE [DQS_STAGING_DATA]
        GO
        ALTER AUTHORIZATION ON SCHEMA::[db_datareader] TO [VMSRV02\mdsweb]
        ALTER AUTHORIZATION ON SCHEMA::[db_datawriter] TO [VMSRV02\mdsweb]
        ALTER AUTHORIZATION ON SCHEMA::[db_ddladmin] TO [VMSRV02\mdsweb]
        GO

    where “VMSRV02\mdsweb” is the user you configured for MDS service execution. If you don’t remember it, you can just check which account has been assigned to the IIS application pool that your MDS website is using:

    Read the article

  • Computer Science or Computer Engineering for Data Science and Machine Learning

    - by ATMathew
    I'm a 25 year old data consultant who is considering returning to school to get a second bachelor's degree in computer science or engineering. My interest is data science and machine learning. I use programming as a means to an end, and use languages like Python, R, C, Java, and Hadoop to find meaning in large data sets. Would a computer science or computer engineering degree be better for this? I realize that a statistics degree may be even more beneficial, but I'll be at a school which doesn't have a stats department or a computational math department.

    Read the article

  • Google I/O 2011: Large-scale Data Analysis Using the App Engine Pipeline API

    Presented by Brett Slatkin. The Pipeline API makes it easy to analyze complex data using App Engine. This talk will cover how to build multi-phase MapReduce workflows; how to merge multiple large data sources with "join" operations; and how to build reusable analysis components. It will also cover the API's concurrency model, how to debug in production, and built-in testing facilities. (Session length: 51:39.)

    Read the article

  • Reset the controls within an ASP.NET placeholder

    - by alaa9jo
    I use placeholders a lot in many projects, but in one of the projects I was asked to reset the controls within a specific placeholder to their original state after the user has finished inserting his/her data. As everyone knows, to keep the controls within a placeholder from disappearing after postbacks you have to recreate them; we add such code in, for example, Page_Load, the controls are recreated, and the values of some of them (TextBox, CheckBox, etc.) are loaded from view state automatically. That placeholder contains only TextBoxes, so what I need is to block/clear that view state after inserting the data. First thought: customizing the placeholder. I thought about it and tried overriding many methods, but with no success at all... maybe I'm missing something, not sure. Second thought: recreate the controls twice. In Page_Load I recreate the controls within that placeholder, then in the button click handler (the button that saves the user's data) I recreate them once more, and it worked! I just thought I'd share my experience with this case in case anyone needs it; any better suggestion(s) are welcome.
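    A minimal sketch of the "recreate twice" idea described above (control and method names are invented for illustration; SaveUserData() stands in for whatever persists the posted values):

        // Recreate the dynamic TextBoxes on every request so they survive postback;
        // ASP.NET restores their posted values into them automatically.
        protected void Page_Load(object sender, EventArgs e)
        {
            BuildControls();
        }

        // After saving, clear and rebuild the placeholder, then blank the values
        // explicitly so the user gets empty controls for the next entry.
        protected void SaveButton_Click(object sender, EventArgs e)
        {
            SaveUserData();                  // hypothetical: read and persist the posted values
            PlaceHolder1.Controls.Clear();
            BuildControls();
            foreach (Control c in PlaceHolder1.Controls)
            {
                if (c is TextBox) ((TextBox)c).Text = string.Empty;
            }
        }

        private void BuildControls()
        {
            for (int i = 0; i < 5; i++)
            {
                PlaceHolder1.Controls.Add(new TextBox { ID = "txtField" + i });
            }
        }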

    Read the article

  • Performance tuning of tabular data models in Analysis Services

    - by Greg Low
    More and more practical information about working with tabular data models is starting to appear as more and more sites get deployed. At SQL Down Under, we've already helped quite a few customers move to tabular data models in Analysis Services and have started to collect quite a bit of information on what works well (and what doesn't) in terms of performance of these models. We've also been running a lot of training on tabular data models. It was great to see a whitepaper on the performance of these models released today. Performance Tuning of Tabular Models in SQL Server 2012 Analysis Services was written by John Sirmon, Greg Galloway, Cindy Gross and Karan Gulati. You'll find it here: http://msdn.microsoft.com/en-us/library/dn393915.aspx

    Read the article
