Search Results

Search found 64086 results on 2564 pages for 'algebraic data types'.

Page 104/2564 | < Previous Page | 100 101 102 103 104 105 106 107 108 109 110 111  | Next Page >

  • Storing statistics of multiple data types in SQL Server 2008

    - by Mike
    I am creating a statistics module in SQL Server 2008 that allows users to save data in any number of formats (date, int, decimal, percent, etc...). Currently I am using a single table to store these values as type varchar, with an extra field to denote the datatype that it should be. When I display the value, I use that datatype field to format it. I use sprocs to calculate the data for reporting, and the datatype field to convert to the appropriate datatype for the appropriate calculations. This approach works, but I don't like storing all kinds of data in a varchar field. The only alternative that I can see is to have separate tables for each datatype I want to store, and save the record information to the appropriate table based on datatype. To retrieve, I run a case statement to join the appropriate table and get the data. This seems to solve the problem. This, however, seems like a lot of work for ... what gain? Wondering if I'm missing something here. Is there a better way to do this? Thanks in advance!
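    A minimal sketch of a middle-ground alternative (not from the original post; table and column names are hypothetical): keep the single table and the datatype discriminator, but store each value in a nullable column of its native type. SQL Server 2008's sql_variant type is another option, though it complicates indexing and comparisons.

    ```sql
    -- Hypothetical sketch: one nullable column per native type, with a CHECK
    -- constraint ensuring exactly one of them is populated per row.
    CREATE TABLE StatisticValue (
        StatisticValueId INT IDENTITY(1,1) PRIMARY KEY,
        DataType         VARCHAR(10) NOT NULL,  -- 'date', 'int', 'decimal', 'percent'
        DateValue        DATETIME NULL,
        IntValue         INT NULL,
        DecimalValue     DECIMAL(18,4) NULL,    -- percent stored as a decimal fraction
        CONSTRAINT CK_StatisticValue_OneValue CHECK (
            CASE WHEN DateValue    IS NOT NULL THEN 1 ELSE 0 END +
            CASE WHEN IntValue     IS NOT NULL THEN 1 ELSE 0 END +
            CASE WHEN DecimalValue IS NOT NULL THEN 1 ELSE 0 END = 1
        )
    );
    ```

    Each sproc then reads the one typed column it needs, with no CONVERT calls and no table per type.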

    Read the article

  • Optimal Serialization of Primitive Types

    - by Greg Dean
    We are beginning to roll out more and more WAN deployments of our product (.Net fat client w/ IIS hosted Remoting backend). Because of this we are trying to reduce the size of the data on the wire. We have overridden the default serialization by implementing ISerializable (similar to this), and we are seeing anywhere from 12% to 50% gains. Most of our efforts focus on optimizing arrays of primitive types. I would like to know if anyone knows of any fancy way of serializing primitive types, beyond the obvious? For example today we serialize an array of ints as follows: [4-bytes (array length)][4-bytes][4-bytes] Can anyone do significantly better? The most obvious example of a significant improvement, for boolean arrays, is putting 8 bools in each byte, which we already do. Note: Saving 7 bits per bool may seem like a waste of time, but when you are dealing with large volumes of data (which we are), it adds up very fast. Note: We want to avoid general compression algorithms because of the latency associated with them. Remoting only supports buffered requests/responses (no chunked encoding). I realize there is a fine line between compression and optimal serialization, but our tests indicate we can afford very specific serialization optimizations at very little cost in latency. Whereas reprocessing the entire buffered response into a new compressed buffer is too expensive.
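    One technique beyond bit-packing, sketched here as an illustration rather than the poster's code: variable-length (7-bit) encoding, so small int values cost one or two bytes on the wire instead of four. For sorted arrays, delta-encoding the values before writing shrinks them further.

    ```csharp
    // A hedged sketch of 7-bit varint encoding for int arrays.
    using System.IO;

    static class VarIntSerializer
    {
        public static void WriteInts(BinaryWriter w, int[] values)
        {
            Write7Bit(w, (uint)values.Length);   // varint length prefix
            foreach (int v in values)
                Write7Bit(w, (uint)v);           // best suited to non-negative values
        }

        static void Write7Bit(BinaryWriter w, uint v)
        {
            // Emit 7 bits per byte; the high bit means "more bytes follow".
            while (v >= 0x80)
            {
                w.Write((byte)(v | 0x80));
                v >>= 7;
            }
            w.Write((byte)v);
        }
    }
    ```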

    Read the article

  • CreateDelegate with unknown types

    - by Giorgi
    Hello, I am trying to create a Delegate for reading/writing properties of an unknown class type at runtime. I have a generic class Main<T> and a method which looks like this: Delegate.CreateDelegate(typeof(Func<T, object>), get) where get is the MethodInfo of the property that should be read. The problem is that when the property returns int (I guess this happens for value types) the above code throws ArgumentException because the method cannot be bound. In the case of string it works well. To solve the problem I changed the code so that the corresponding Delegate type is generated by using MakeGenericType. So now the code is: Type func = typeof(Func<,>); Type generic = func.MakeGenericType(typeof(T), get.ReturnType); var result = Delegate.CreateDelegate(generic, get) The problem now is that the created delegate is an instance of a runtime-constructed generic type, so I have to use DynamicInvoke, which would be as slow as using pure reflection to read the field. So my questions are: why does the first snippet of code fail with value types, when according to MSDN it should work ("The return type of a delegate is compatible with the return type of a method if the return type of the method is more restrictive than the return type of the delegate"), and how can I execute the delegate in the second snippet so that it is faster than reflection? Thanks.
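    The first snippet fails because delegate return-type covariance only covers reference conversions; returning an int where the delegate promises object requires a boxing conversion, which changes the value's representation, so the runtime refuses to bind. A common workaround, sketched here with illustrative names (not the poster's code): pay the reflection cost once to build a strongly typed delegate, then wrap it in a boxing lambda so each call stays fast. From inside Main<T>, T is already known.

    ```csharp
    using System;
    using System.Reflection;

    static class GetterFactory
    {
        public static Func<T, object> Create<T>(MethodInfo get)
        {
            // One-time reflection: close the helper over the runtime return type.
            MethodInfo helper = typeof(GetterFactory)
                .GetMethod(nameof(CreateTyped), BindingFlags.NonPublic | BindingFlags.Static)
                .MakeGenericMethod(typeof(T), get.ReturnType);
            return (Func<T, object>)helper.Invoke(null, new object[] { get });
        }

        static Func<T, object> CreateTyped<T, TReturn>(MethodInfo get)
        {
            // The strongly typed delegate binds fine; boxing happens in the wrapper.
            var typed = (Func<T, TReturn>)Delegate.CreateDelegate(typeof(Func<T, TReturn>), get);
            return obj => (object)typed(obj);
        }
    }
    ```

    Per call, this is a direct delegate invocation plus one box, which is far cheaper than DynamicInvoke.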

    Read the article

  • How to record different authentication types (username / password vs token based) in audit log

    - by RM
    I have two types of users for my system: normal human users with a username / password, and delegation-authorized accounts through OAuth (i.e. using a token identifier). The information that is stored for each is quite different, and they are managed by different subsystems. They do however interact with the same tables / data within the system, so I need to maintain the audit trail regardless of whether a human user or a token-based user modified the data. My solution at the moment is to have a table called something like AuditableIdentity, and then have the two types inheriting off that table (either in the single table, or as two separate tables with a 1-to-1 PK to AuditableIdentity). All operations would use the common AuditableIdentity PK for CreatedBy, ModifiedBy etc columns. There isn't any FK constraint on the audit columns, so any text can go in there, but I want an easy way to determine whether it was a human or system that made the change, and joining to the one AuditableIdentity table seems like a clean way to do that? Is there a best practice for this scenario? Is this an appropriate way of approaching the problem - or would you not bother with the common table and just rely on joins (to the two separate, unrelated user / token tables) later to determine which user type matches which audit records?
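    A hedged sketch of the supertype approach described above (all names illustrative): audit columns reference AuditableIdentity, and a discriminator answers "human or system?" with a single join.

    ```sql
    CREATE TABLE AuditableIdentity (
        AuditableIdentityId INT PRIMARY KEY,
        IdentityType        CHAR(1) NOT NULL   -- 'H' = human, 'T' = OAuth token
    );

    CREATE TABLE HumanUser (
        AuditableIdentityId INT PRIMARY KEY
            REFERENCES AuditableIdentity (AuditableIdentityId),
        UserName VARCHAR(100) NOT NULL
    );

    CREATE TABLE TokenAccount (
        AuditableIdentityId INT PRIMARY KEY
            REFERENCES AuditableIdentity (AuditableIdentityId),
        TokenIdentifier VARCHAR(200) NOT NULL
    );

    -- CreatedBy / ModifiedBy columns FK to AuditableIdentity, so the
    -- IdentityType discriminator resolves "human or system?" in one join.
    ```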

    Read the article

  • About Data Objects and DAO Design when using Hibernate

    - by X. Ma
    I'm hesitating between two designs of a database project using Hibernate. Design #1. (1) Create a general data provider interface, including a set of DAO interfaces and general data container classes. It hides the underlying implementation. A data provider implementation could access data in a database, or an XML file, or a service, or something else. The user of a data provider does not need to know about it. (2) Create a database library with Hibernate. This library implements the data provider interface in (1). The bad thing about Design #1 is that in order to hide the implementation details, I need to create two sets of data container classes. One in the general data provider interface - let's call them DPI-Objects - the other set is used in the database library, exclusively for entity/attribute mapping in Hibernate - let's call them H-Objects. In the DAO implementation, I need to read data from the database to create H-Objects (via Hibernate) and then convert H-Objects into DPI-Objects. Design #2. Do not create a general data provider interface. Expose H-Objects directly to components that use the database lib. So the user of the database library needs to be aware of Hibernate. I like Design #1 more, but I don't want to create two sets of data container classes. Is that the right way to hide H-Objects and other Hibernate implementation details from the user who uses the database-based data provider? Are there any drawbacks to Design #2? I will not implement another data provider in the near future, so should I just forget about the data provider interface and use Design #2? What do you think about this? Thanks for your time!
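    A hedged sketch of Design #1's shape (all names illustrative, not from the post): callers depend only on the provider interface and its plain DPI containers, and the Hibernate-backed implementation converts H-Objects at the boundary. The conversion is the price of hiding Hibernate; Design #2 deletes the DTO class and the conversion, but every consumer then compiles against the entity.

    ```java
    public interface CustomerProvider {
        CustomerDto findById(long id);
    }

    // DPI-Object: a plain container exposed by the provider interface.
    class CustomerDto {
        final long id;
        final String name;
        CustomerDto(long id, String name) { this.id = id; this.name = name; }
    }

    // H-Object: the Hibernate-mapped entity (mapping omitted here).
    class CustomerEntity {
        long id;
        String name;
    }

    class HibernateCustomerProvider implements CustomerProvider {
        public CustomerDto findById(long id) {
            CustomerEntity e = loadEntity(id);     // session.get(...) in real code
            return new CustomerDto(e.id, e.name);  // convert H-Object -> DPI-Object
        }

        private CustomerEntity loadEntity(long id) {
            CustomerEntity e = new CustomerEntity(); // placeholder for a Hibernate call
            e.id = id;
            e.name = "stub";
            return e;
        }
    }
    ```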

    Read the article

  • How do I bind an iTunes style source list to an NSTableView using Core Data?

    - by Austin
    I have an iTunes style interface in my application: Source list (NSOutlineView) on the left that contains different libraries and playlists, with an NSTableView on the right side of the interface displaying information for "Presentations". Similar to iTunes, I am showing the same type of information in the table view whether a library or playlist is selected (title, author, date created, etc). I currently have an NSArrayController connected to my NSTableView and was setting the fetch predicate based on what was selected in the source list. This works fine when selecting a library because I can just set the fetch predicate to filter by the "type" field in my Presentation Core Data entity. When I try to adjust the fetch predicate for the playlist however, it doesn't look like there is any way to set the fetch predicate because I've got a table in between Playlists and Presentations to keep up with the order within the Playlist. According to the Apple docs, these types of predicates are not doable with Core Data (it basically doesn't support multiple inner joins). Below is the relevant portion of my Data Model. Is my data model set up incorrectly? Should I drop the NSArrayController and handle connecting the NSTableView up by hand? I'm trying to figure out if there is a simple fix, or really a design flaw.

    Read the article

  • C: incompatible types in assignment, problem with pointers?

    - by Fantastic Fourier
    Hi, I'm working with C and I have a question about assigning pointers. struct foo { int _bar; char * _car[MAXINT]; // this is meant to be an array of char * so that it can hold pointers to names of cars } int foofunc (void * arg) { int bar; char * car[MAXINT]; struct foo thing = (struct foo *) arg; bar = arg->_bar; // this works fine car = arg->_car; // this gives compiler errors of incompatible types in assignment } car and _car have the same declaration, so why am I getting an error about incompatible types? My guess is that it has something to do with them being pointers (because they are pointers to arrays of char *, right?) but I don't see why that is a problem. When I declared char * car; instead of char * car[MAXINT]; it compiled fine, but I don't see how that would be useful to me later when I need to access certain info using an index; it would be very annoying to access that info later. In fact, I'm not even sure if I am going about it the right way; maybe there is a better way to store a bunch of strings instead of using an array of char *?
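    Arrays are not assignable in C, which is why the plain pointer declaration compiled while the array did not. A hedged sketch of the usual fix: keep a pointer to the caller's struct instead of copying it, and let the member array decay to char **.

    ```c
    #include <stdio.h>

    #define MAXINT 16

    struct foo {
        int _bar;
        char *_car[MAXINT];   /* array of char* (pointers to car names) */
    };

    int foofunc(void *arg)
    {
        struct foo *thing = (struct foo *)arg;  /* pointer, not a struct copy */
        int bar = thing->_bar;
        char **car = thing->_car;               /* the array decays to char** */
        printf("%d %s\n", bar, car[0]);
        return 0;
    }

    int main(void)
    {
        struct foo f = { 42, { "volvo" } };
        return foofunc(&f);
    }
    ```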

    Read the article

  • Handling missing/incomplete data in R--is there function to mask but not remove NAs?

    - by doug
    As you would expect from a DSL aimed at data analysis, R handles missing/incomplete data very well. For instance, many R functions have an 'na.rm' flag that you can set to 'T' to remove the NAs: mean( c(5,6,12,87,9,NA,43,67), na.rm=T) But if you want to deal with NAs before the function call, you need to do something like this: to remove each 'NA' from a vector: vx = vx[!is.na(vx)] to remove each 'NA' from a vector and replace it w/ a '0': ifelse(is.na(vx), 0, vx) to remove each entire row that contains an 'NA' from a data frame: dfx = dfx[complete.cases(dfx),] All of these functions permanently remove 'NA' or rows with an 'NA' in them. Sometimes this isn't quite what you want though--making an 'NA'-excised copy of the data frame might be necessary for the next step in the workflow, but in subsequent steps you often want those rows back (e.g., to calculate a column-wise statistic for a column that has missing rows caused by a prior call to 'complete cases', yet that column has no 'NA' values in it). To be as clear as possible about what I'm looking for: python/numpy has a class, 'masked array', with a 'mask' method, which lets you conceal--but not remove--NAs during a function call. Is there an analogous function in R?
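    Base R has no masked-array class as such, but a logical mask gets the same effect without excising anything: keep the data intact and subset per call. A minimal sketch:

    ```r
    vx <- c(5, 6, 12, 87, 9, NA, 43, 67)

    mask <- is.na(vx)        # the "mask": nothing is removed from vx
    mean(vx[!mask])          # masked mean; vx still contains its NA
    mean(vx, na.rm = TRUE)   # equivalent one-off form

    # For data frames, compute on complete values without dropping rows:
    dfx <- data.frame(a = c(1, NA, 3), b = c(4, 5, 6))
    colMeans(dfx, na.rm = TRUE)
    ```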

    Read the article

  • Incompatible types when assigning to type 'struct compartido'

    - by user1660559
    I have one problem with this code. I should create one structure and share it across 5 new processes created from the parent: #include <stdio.h> #include <stdlib.h> #include <sys/wait.h> #include <unistd.h> #include <sys/types.h> #include <sys/ipc.h> #include <sys/shm.h> #include <sys/sem.h> #include <time.h> struct compartido { int pid1, pid2, pid3, pid4, pid5; int propietario; int contador; int pidpadre; }; struct compartido var; int main(int argc, char *argv[]) { key_t llave1,llavesem; int idmem,idsem; llave1=ftok("/tmp",'a'); idmem=shmget(llave1,sizeof(int),IPC_CREAT|0600); if (idmem==-1) { perror ("shmget"); return 1; } var=shmat(idmem,0,0); /*This line is giving the error*/ /*rest of the code*/ } The exact error it gives is: error: incompatible types when assigning to type 'struct compartido' from type 'void *' I need to put this structure in the shared variable to be able to see and modify all that data from the 6 processes (5 children and the parent). What am I doing wrong? Thanks in advance and best regards,
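    shmat() returns a pointer into the shared segment, so the shared variable should be declared as a struct pointer rather than a struct; note too that the posted code sizes the segment with sizeof(int) instead of sizeof(struct compartido). A hedged sketch of the fix:

    ```c
    #include <stdio.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    struct compartido {
        int pid1, pid2, pid3, pid4, pid5;
        int propietario;
        int contador;
        int pidpadre;
    };

    int main(void)
    {
        key_t llave1 = ftok("/tmp", 'a');
        int idmem = shmget(llave1, sizeof(struct compartido), IPC_CREAT | 0600);
        if (idmem == -1) { perror("shmget"); return 1; }

        struct compartido *var = shmat(idmem, 0, 0);  /* pointer to shared struct */
        if (var == (void *)-1) { perror("shmat"); return 1; }

        var->contador = 0;  /* all 6 processes can now read/write var-> fields */
        return 0;
    }
    ```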

    Read the article

  • C: incompatible types in assignment

    - by The.Anti.9
    I'm writing a program to check to see if a port is open in C. One line in particular copies one of the arguments to a char array. However, when I try to compile, it says: error: incompatible types in assignment Here's the code; the error is on the assignment of addr #include <sys/socket.h> #include <sys/time.h> #include <sys/types.h> #include <arpa/inet.h> #include <netinet/in.h> #include <errno.h> #include <fcntl.h> #include <stdio.h> #include <netdb.h> #include <stdlib.h> #include <string.h> #include <unistd.h> int main(int argc, char **argv) { u_short port; /* user specified port number */ char addr[1023]; /* will be a copy of the address entered by u */ struct sockaddr_in address; /* the libc network address data structure */ short int sock = -1; /* file descriptor for the network socket */ port = atoi(argv[1]); addr = strncpy(addr, argv[2], 1023); bzero((char *)&address, sizeof(address)); /* init addr struct */ address.sin_addr.s_addr = inet_addr(addr); /* assign the address */ address.sin_port = htons(port); /* translate int2port num */ sock = socket(AF_INET, SOCK_STREAM, 0); if (connect(sock,(struct sockaddr *)&address,sizeof(address)) == 0) { printf("%i is open\n", port); } if (errno == 113) { fprintf(stderr, "Port not open!\n"); } close(sock); return 0; } I'm new to C, so I'm not sure why it would do this.
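    An array name is not an assignable lvalue, so the fix is to call strncpy for its side effect and drop the assignment. A minimal sketch (also null-terminating explicitly, since strncpy does not guarantee it):

    ```c
    #include <string.h>

    char addr[1024];

    void copy_addr(const char *src)
    {
        strncpy(addr, src, sizeof(addr) - 1);  /* no "addr = ..." on the left */
        addr[sizeof(addr) - 1] = '\0';         /* strncpy may not terminate */
    }
    ```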

    Read the article

  • How to write these two queries for a simple data warehouse, using ANSI SQL?

    - by morpheous
    I am writing a simple data warehouse that will allow me to query the table to observe periodic (say weekly) changes in data, as well as changes in the change of the data (e.g. week to week change in the weekly sale amount). For the purposes of simplicity, I will present very simplified (almost trivialized) versions of the tables I am using here. The sales data table is a view and has the following structure: CREATE TABLE sales_data ( sales_time date NOT NULL, sales_amt double NOT NULL ) For the purposes of this question, I have left out other fields you would expect to see - like product_id, sales_person_id etc, etc, as they have no direct relevance to this question. AFAICT, the only fields that will be used in the query are the sales_time and the sales_amt fields (unless I am mistaken). I also have a date dimension table with the following structure: CREATE TABLE date_dimension ( id integer NOT NULL, datestamp date NOT NULL, day_part integer NOT NULL, week_part integer NOT NULL, month_part integer NOT NULL, qtr_part integer NOT NULL, year_part integer NOT NULL, ); which partitions dates into reporting ranges. I need to write queries that will allow me to do the following: Return the change in week on week sales_amt for a specified period. For example the change between sales today and sales N days ago - where N is a positive integer (N == 7 in this case). Return the change in the change of sales_amt for a specified period. In (1) we calculated the week-on-week change; now we want to know how that change differs from the (week-on-week) change calculated last week. I am stuck however at this point, as SQL is my weakest skill. I would be grateful if an SQL master can explain how I can write these queries in a DB agnostic way (i.e. using ANSI SQL).
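    A hedged ANSI SQL sketch (using the column names above, and ignoring year boundaries for brevity): aggregate sales per week, self-join on the prior week for the week-on-week change, then join once more for the change in change.

    ```sql
    WITH weekly AS (
        SELECT d.year_part, d.week_part, SUM(s.sales_amt) AS week_amt
        FROM   sales_data s
        JOIN   date_dimension d ON d.datestamp = s.sales_time
        GROUP  BY d.year_part, d.week_part
    ),
    delta AS (
        SELECT cur.year_part, cur.week_part,
               cur.week_amt - prev.week_amt AS wow_change
        FROM   weekly cur
        JOIN   weekly prev
          ON   prev.year_part = cur.year_part
         AND   prev.week_part = cur.week_part - 1
    )
    SELECT cur.year_part, cur.week_part,
           cur.wow_change,
           cur.wow_change - prev.wow_change AS change_in_change
    FROM   delta cur
    JOIN   delta prev
      ON   prev.year_part = cur.year_part
     AND   prev.week_part = cur.week_part - 1;
    ```

    A production version would handle the week-1/week-53 seam, e.g. by joining on a continuous week sequence number instead of week_part - 1.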

    Read the article

  • [C#] How to receive uncrackable data or so ? ;P

    - by Prix
    Hi, I am working on a C# application to communicate with my website and retrieve some information from it, using SSL, which is working just fine. Now what I want/need is a way to receive encrypted or codified or obfuscated data such that if someone cracks my application they will not be able to decrypt the data, because it needs something from the server (api, website) - but yet the application needs to decrypt it in order to use it... Initially I was thinking of an internal RSA key pair to send and receive the encrypted data, but let's consider that someone has cracked the application: they could just replace those keys with keys they have made. So I was looking into some methods but haven't found or been able to think of any way to harden this... I was learning about RSA, encryption and such and started developing this as a self-learning exercise, got involved with it, and now I am trying to figure out a way to receive data like that... I have considered obfuscating and compiling my code with packers etc. but this is not about packing... I am more interested in knowing a better way to secure what I described. I know it may be hard or even impossible, but I am still looking for some approach. I would appreciate advice, suggestions and C# code samples; if you need more information or anything please let me know.
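    One standard building block here, sketched as an illustration rather than a full answer: the server signs each payload with its private key and the client verifies it with an embedded public key. This detects tampering in transit, though, as the question anticipates, it cannot stop someone who patches the verification out of the client binary itself.

    ```csharp
    // Illustrative only: verify a server-signed payload against a public key
    // shipped inside the client (hypothetical key/XML handling).
    using System.Security.Cryptography;

    static class SignedPayload
    {
        public static bool Verify(byte[] data, byte[] signature, string publicKeyXml)
        {
            using (var rsa = new RSACryptoServiceProvider())
            {
                rsa.FromXmlString(publicKeyXml);  // public key embedded in the app
                return rsa.VerifyData(data, SHA256.Create(), signature);
            }
        }
    }
    ```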

    Read the article

  • Namespace scoped aliases for generic types in C#

    - by TN
    Let's take the following example: public class X { } public class Y { } public class Z { } public delegate IDictionary<Y, IList<Z>> Bar(IList<X> x, int i); public interface IFoo { // ... Bar Bar { get; } } public class Foo : IFoo { // ... public Bar Bar { get { return null; //... } } } void Main() { IFoo foo; //= ... IEnumerable<IList<X>> source; //= ... var results = source.Select(foo.Bar); } The compiler says: The type arguments for method 'System.Linq.Enumerable.Select(System.Collections.Generic.IEnumerable, System.Func)' cannot be inferred from the usage. Try specifying the type arguments explicitly. It's because it cannot convert Bar to Func<IList<X>, int, IDictionary<Y, IList<Z>>>. It would be great if I could create namespace-scoped aliases for generic types in C#. Then I would define Bar not as a delegate, but rather as a namespace-scoped alias for Func<IList<X>, int, IDictionary<Y, IList<Z>>>. public alias Bar = Func<IList<X>, int, IDictionary<Y, IList<Z>>>; I could then also define a namespace-scoped alias for e.g. IDictionary<Y, IList<Z>>. And if used appropriately :), it would make the code more readable. Now I have to inline the generic types, and the real code is not very readable :( Have you found the same trouble? :) Is there any good reason why it is not in C# 3.0? Or is there no good reason, and it's just a matter of money and/or time? EDIT: I know that I can use using, but it is not namespace-based - not so convenient for my case.
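    The root issue is that delegate types are never structurally interchangeable: Bar and Func<IList<X>, int, IDictionary<Y, IList<Z>>> have identical signatures but remain distinct types, so inference fails. A hedged sketch of the closest existing mechanism, the file-scoped using alias (which, as noted, must be repeated per file rather than declared once per namespace):

    ```csharp
    using BarFunc = System.Func<
        System.Collections.Generic.IList<X>,
        int,
        System.Collections.Generic.IDictionary<Y, System.Collections.Generic.IList<Z>>>;

    public class X { }
    public class Y { }
    public class Z { }

    public interface IFoo
    {
        // Structurally a Func<,,>, so Enumerable.Select can infer its type
        // arguments from foo.Bar without an explicit conversion.
        BarFunc Bar { get; }
    }
    ```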

    Read the article

  • Using delegate Types vs methods

    - by Grant Sutcliffe
    I see increasing use of the delegate types offered in the System namespace (Action, Predicate, etc). As these are delegates, my understanding is that they should be used where we have traditionally used delegates in the past (asynchronous calls, starting threads, event handling, etc). Is it just preference, or is it considered good practice to use these delegate types in scenarios such as the one below, rather than using calls to methods we have declared (or anonymous methods)? public void MyMethod() { Action<string> action = delegate(string userName) { try { XmlDocument profile = DataHelper.GetProfile(userName); UpdateMember(profile); } catch (Exception exception) { if (_log.IsErrorEnabled) _log.ErrorFormat(exception.Message); throw (exception); } }; GetUsers().ForEach(action); } At first, I found the code less intuitive to follow than using declared or anonymous methods. I am starting to code this way, and wonder what the views are in this regard. The example above is all within a method. Is this delegate overuse?
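    A hedged sketch (with a hypothetical User type) of the same ForEach call written two other ways; since C# 3.0, lambdas usually read better than the delegate keyword, and a named method remains clearest once the body grows:

    ```csharp
    using System;
    using System.Collections.Generic;

    class User { public string Name = ""; }

    class Updater
    {
        public void UpdateAll(List<User> users)
        {
            users.ForEach(u => Console.WriteLine(u.Name));  // lambda form
            users.ForEach(UpdateMember);                    // method-group form
        }

        private void UpdateMember(User u)
        {
            Console.WriteLine(u.Name);
        }
    }
    ```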

    Read the article

  • How do I authenticate an ADO.NET Data Service?

    - by lsb
    Hi! I've created an ADO.Net Data Service hosted in an Azure worker role. I want to pass credentials from a simple console client to the service and then validate them using a QueryInterceptor. Unfortunately, the credentials don't seem to be making it over the wire. The following is a simplified version of the code I'm using, starting with the DataService on the server: using System; using System.Data.Services; using System.Linq.Expressions; using System.ServiceModel; using System.Web; namespace Oslo.Worker { [ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)] public class AdminService : DataService<OsloEntities> { public static void InitializeService( IDataServiceConfiguration config) { config.SetEntitySetAccessRule("*", EntitySetRights.All); config.SetServiceOperationAccessRule("*", ServiceOperationRights.All); } [QueryInterceptor("Pairs")] public Expression<Func<Pair, bool>> OnQueryPairs() { // This doesn't work!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! if (HttpContext.Current.User.Identity.Name != "ADMIN") throw new Exception("Ooops!"); return p => true; } } } Here's the AdminHost I'm using to instantiate the AdminService in my Azure worker role: using System; using System.Data.Services; namespace Oslo.Worker { public class AdminHost : DataServiceHost { public AdminHost(Uri baseAddress) : base(typeof(AdminService), new Uri[] { baseAddress }) { } } } And finally, here's the client code. using System; using System.Data.Services.Client; using System.Net; using Oslo.Shared; namespace Oslo.ClientTest { public class AdminContext : DataServiceContext { public AdminContext(Uri serviceRoot, string userName, string password) : base(serviceRoot) { Credentials = new NetworkCredential(userName, password); } public DataServiceQuery<Order> Orders { get { return base.CreateQuery<Pair>("Orders"); } } } } I should mention that the code works great with the single exception that the credentials are not being passed over the wire. Any help in this regard would be greatly appreciated! Thanks....

    Read the article

  • C++ Design Question on template types

    - by user231536
    I have a templated class: template <typename T> class MyContainerClass Any type substituted for T has to satisfy many requirements: for example, get_id(), int data(), etc. Obviously none of the fundamental types (PODs) are substitutable. One way I can provide this is via wrappers for the PODs that provide these functions. Is this an acceptable way? Another way would be to change the template to: template < typename T, typename C=traits<T> > class MyContainerClass and inside MyContainerClass, call traits::data() instead of data() on T objects. I will specialize traits<int>, traits<const char *>, etc. Is this good design? How do I design such a traits class (completely static methods, or allow for inheritance)? Or are the wrapper classes a good solution? What other alternatives are there?
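    A hedged sketch of the traits route (member names illustrative): a primary template forwards to the type's own methods, and a full specialization adapts a POD like int without any wrapper object. Completely static methods are the usual convention here, since the traits class is never instantiated.

    ```cpp
    #include <string>

    template <typename T>
    struct traits {                       // primary template: T has the members
        static int data(const T& t) { return t.data(); }
        static std::string id(const T& t) { return t.get_id(); }
    };

    template <>
    struct traits<int> {                  // adapts the POD, no wrapper needed
        static int data(const int& t) { return t; }
        static std::string id(const int& t) { return std::to_string(t); }
    };

    template <typename T, typename C = traits<T>>
    class MyContainerClass {
    public:
        int first_data(const T& t) const { return C::data(t); }  // C::, not t.data()
    };
    ```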

    Read the article

  • Container of Generic Types in java

    - by Cyker
    I have a generic class Foo<T> and parameterized types Foo<String> and Foo<Integer>. Now I want to put different parameterized types into a single ArrayList. What is the correct way of doing this? Candidate 1: public class MMM { public static void main(String[] args) { Foo<String> fooString = new Foo<String>(); Foo<Integer> fooInteger = new Foo<Integer>(); ArrayList<Foo<?> > list = new ArrayList<Foo<?> >(); list.add(fooString); list.add(fooInteger); for (Foo<?> foo : list) { // Do something on foo. } } } class Foo<T> {} Candidate 2: public class MMM { public static void main(String[] args) { Foo<String> fooString = new Foo<String>(); Foo<Integer> fooInteger = new Foo<Integer>(); ArrayList<Foo> list = new ArrayList<Foo>(); list.add(fooString); list.add(fooInteger); for (Foo foo : list) { // Do something on foo. } } } class Foo<T> {} In short, it comes down to the difference between Foo<?> and the raw type Foo. Update: the question "What is the difference between the unbounded wildcard parameterized type and the raw type?" at this link may be helpful.
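    A hedged sketch of the practical difference: with Foo<?> the compiler still enforces that you cannot pass in an element of unknown type, while the raw type silently drops all generic checking.

    ```java
    import java.util.ArrayList;
    import java.util.List;

    class Foo<T> {
        void put(T item) { }
    }

    public class WildcardVsRaw {
        public static void main(String[] args) {
            List<Foo<?>> list = new ArrayList<>();
            list.add(new Foo<String>());
            list.add(new Foo<Integer>());

            for (Foo<?> foo : list) {
                // foo.put("x");       // compile error: element type is unknown
            }

            Foo raw = new Foo<String>();  // raw type: compiles with a warning
            raw.put(42);                  // unchecked call, heap pollution risk
        }
    }
    ```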

    Read the article

  • faster way to compare rows in a data frame

    - by aguiar
    Consider the data frame below. I want to compare each row with rows below and then take the rows that are equal in more than 3 values. I wrote the code below, but it is very slow if you have a large data frame. How could I do that faster? data <- as.data.frame(matrix(c(10,11,10,13,9,10,11,10,14,9,10,10,8,12,9,10,11,10,13,9,13,13,10,13,9), nrow=5, byrow=T)) rownames(data)<-c("sample_1","sample_2","sample_3","sample_4","sample_5") >data V1 V2 V3 V4 V5 sample_1 10 11 10 13 9 sample_2 10 11 10 14 9 sample_3 10 10 8 12 9 sample_4 10 11 10 13 9 sample_5 13 13 10 13 9 tab <- data.frame(sample = NA, duplicate = NA, matches = NA) dfrow <- 1 for(i in 1:nrow(data)) { sample <- data[i, ] for(j in (i+1):nrow(data)) if(i+1 <= nrow(data)) { matches <- 0 for(V in 1:ncol(data)) { if(data[j,V] == sample[,V]) { matches <- matches + 1 } } if(matches > 3) { duplicate <- data[j, ] pair <- cbind(rownames(sample), rownames(duplicate), matches) tab[dfrow, ] <- pair dfrow <- dfrow + 1 } } } >tab sample duplicate matches 1 sample_1 sample_2 4 2 sample_1 sample_4 5 3 sample_2 sample_4 4
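    A hedged sketch of a vectorized alternative: enumerate all row pairs once with combn, then count matching cells for every pair in a single elementwise comparison instead of three nested loops.

    ```r
    m <- as.matrix(data)
    pairs <- t(combn(nrow(m), 2))                      # all row-index pairs (i < j)
    matches <- rowSums(m[pairs[, 1], , drop = FALSE] ==
                       m[pairs[, 2], , drop = FALSE])  # equal cells per pair
    keep <- matches > 3
    tab <- data.frame(sample    = rownames(m)[pairs[keep, 1]],
                      duplicate = rownames(m)[pairs[keep, 2]],
                      matches   = matches[keep])
    # Reproduces the loop version's result: sample_1/2, sample_1/4, sample_2/4
    ```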

    Read the article

  • CRM@Oracle Series: Showcasing Innovation with Oracle Customer Hub

    - by tony.berk
    When is having too many customers a challenge? It is not something too many people would complain about. But from a data perspective, one challenge is to keep each customer's data consistent across multiple enterprise systems such as CRM, ERP, and all of your other related applications. Buckle your seat belts, we are going a bit technical today... If you have ever tried it, you know it isn't easy. If you haven't, don't go there alone! Customer data integration projects are challenging and, depending on the environment, require sharp, innovative people to succeed. Want to hear from some guys who have done it and succeeded? Here is an interview with Dan Lanir and Afzal Asif from Oracle's Applications IT CRM Systems group on implementing Oracle Customer Hub and innovation. For more interesting discussions on innovation, check out the Oracle Innovation Showcase.

    Read the article

  • Copying Columns from Grid to Clipboard in SQL Developer

    - by thatjeffsmith
    There are several ways to get data from a query or a table|view to the clipboard. You know the tried and true, copy and paste. But what if you only want one or more columns, not every column? There are several ways to do this, let’s see if we can’t identify all of them. Write your query to only include the data you want Obvious? Yes. Needed to be said? Definitely. The best tuning tip is to only ask for the data you need, only when you absolutely need it. But let’s look at a few more practical ways to do this. Hide the unwanted columns Mouse right click on a column header. In the context menu, select ‘Columns.’ Hide the columns you don’t want. Copy and paste. WYSIWYG Grids, Hide Columns and Filter Rows Mouse select the columns Obvious, but a bit painful. For a very large dataset, you’ll be holding down the Shift and PageDown buttons – but it works. Remember to use Ctrl+Shift+C to get the column headers with the data. Use the Export Wizard This used to be called ‘Unload’ – agreed, not a great name. So, we changed it. In a grid, right mouse click on the data, and on the context menu, select ‘Export…’ Select your format – I suggest ‘delimited’ or ‘fixed’ for copying data to the clipboard. You can export to the clipboard, yes you can! Click ‘Next.’ Click in the Columns dialog, and choose the columns you want copied. Trim the columns you don't want copied Click ‘Finish.’ Alt or Ctrl tab to your window or application of choice. And Paste! "FIRST_NAME" "LAST_NAME" "Donald" "OConnell" "Douglas" "Grant" "Jennifer" "Whalen" "Pat" "Fay" "Susan" "Mavris" "William" "Gietz" "Alexander" "Hunold" "Bruce" "Ernst" "David" "Austin" "Valli" "Pataballa" "Diana" "Lorentz" "Daniel" "Faviet" "John" "Chen" "Ismael" "Sciarra" "Jose Manuel" "Urman" "Luis" "Popp" "Alexander" "Khoo" "Shelli" "Baida" "Sigal" "Tobias" "Guy" "Himuro" "Karen" "Colmenares" "Matthew" "Weiss" "Adam" "Fripp" "Payam" "Kaufling" "Shanta" "Vollman" "Kevin" "Mourgos" "Julia" "Nayer" "Irene" "Mikkilineni" ... There are probably at least 2 or 3 more ways. But try these and let me know how we can improve things. I’ve already gotten a request to be able to include the SQL text used to populate the dataset with the copy to clipboard, and it’s now on our to-do list.

    Read the article

  • And the Winners of Fusion Middleware Innovation Awards in Data Integration are…

    - by Irem Radzik
    At OpenWorld, we announced the winners of Fusion Middleware Innovation Awards 2012. Raymond James and Morrison Supermarkets were selected for the data integration category for their innovative use of Oracle’s data integration products and the great results they have achieved. In this blog I would like to briefly introduce you to these award winning projects. Raymond James is a diversified financial services company, which provides financial planning, wealth management, investment banking, and asset management. They are using Oracle GoldenGate and Oracle Data Integrator to feed their operational data store (ODS), which supports application services across the enterprise. A major requirement for their project was low data latency, as key decisions are made based on the data in the ODS. They were able to fulfill this requirement due to the Oracle Data Integrator’s integrated solution with Oracle GoldenGate. Oracle GoldenGate captures changed data from different systems including Oracle Database, HP NonStop and Microsoft SQL Server into a single data store on SQL Server 2008. Oracle Data Integrator provides data transformations for the ODS. Leveraging ODI’s integration with GoldenGate, Raymond James now sees a 9 second median latency (from source commit to ODS target commit). The ODS solution delivers high quality, accurate data for consuming applications such as Raymond James’ next generation client and portfolio management systems as well as real-time operational reporting. It enables timely information for making better decisions. There are more benefits Raymond James achieved with this implementation of Oracle’s data integration solution. The software developers and architects of this solution, Tim Garrod and Ryan Fonnett, have told us during their presentation at OpenWorld that they also reduced application complexity significantly while improving developer productivity through trusted operational services. They were able to utilize CDC to generate alerts for business users, and for applications (for example for cache hydration mechanisms). One cool innovation example among many in this project is that using ODI's flexible architecture, Tim and Ryan could build 24/7 self-healing processes. And these processes have hardly failed. Integration processes fixes the errors itself. Pretty amazing; and a great solution for environments that need such reliability and availability. (You can see Tim and Ryan’s photo with the Innovation Award above.) The other winner of this year in the data integration category, Morrison Supermarkets, is the UK’s 4th largest grocery retailer. The company has been migrating all their legacy applications on to a new-world application set based on Oracle and consolidating all BI on to a single Oracle platform.
The company recently implemented Oracle Exadata as the data warehouse engine and uses Oracle Business Intelligence EE. Their goal with deploying GoldenGate and ODI was to provide BI data to the enterprise in a way that it also supports operational decision making requirements from a wide range of Oracle based ERP applications such as E-Business Suite, PeopleSoft, Oracle Retail Suite. They use GoldenGate’s log-based change data capture capabilities and Oracle Data Integrator to populate the Oracle Retail Data Model. The electronic point of sale (EPOS) integration solution they built processes over 80 million transactions/day at busy periods in near real time (15 mins). It provides valuable insight to Retail and Commercial teams for both intra-day and historical trend analysis. As I mentioned in yesterday’s blog, the right data integration platform can transform the business. Here is another example: The point-of-sale integration enabled the grocery chain to optimize its stock management, leading to another award: Morrisons won the Grocer 33 award in 2012 - beating all other major UK supermarkets in product availability. Congratulations, Morrisons, on another award! Celebrating the innovation and the success of our customers with Oracle’s data integration products was definitely a highlight of Oracle OpenWorld for me. I look forward to hearing more from Raymond James, Morrisons, and the other customers that presented their data integration projects at OpenWorld, on how they are creating more value for their organizations.

    Read the article

  • Announcing Sesame Data Browser

    At the occasion of MIX10, which is currently taking place in Las Vegas, I'd like to announce Sesame Data Browser. Sesame will be a suite of tools for dealing with data, and Sesame Data Browser will be the first tool from that suite. Today, during the second MIX10 keynote, Microsoft demonstrated how they are pushing hard to get OData adopted. If you don't know about OData, you can visit the just revamped dedicated website: http://odata.org. There you'll find out about the OData protocol, which allows you...

    Read the article

  • New Exadata Book Available Soon

    - by Rob Reynolds
    Oracle Press is set to release the first book on data warehouse performance and Exadata on March 14th. Achieving Extreme Performance with Oracle Exadata, by my colleagues Rick Greenwald, Robert Stackowiak, Maqsood Alam, and Mans Bhuller, will be available at your favorite booksellers next week. I've seen a sneak peek of the content in this book and it's a great way to fully grasp the power of Exadata and how to best apply it to achieve extreme data warehouse performance. From the publisher's description: Achieving Extreme Performance with Oracle Exadata and the Sun Oracle Database Machine is filled with best practices for deployments, hardware sizing, architecting the database machine environments for maximum availability, and backup and recovery. Oracle Database 11gR2 features used within these offerings, as well as migration options and paths for Oracle and non-Oracle databases to Oracle Exadata are covered. This Oracle Press guide also discusses architecture, administration, maintenance, monitoring, and tuning of Oracle Exadata Storage Servers and the Sun Oracle Database Machine. If your company is considering Exadata, or if you need more horsepower out of your data warehouse, I highly recommend grabbing a copy of this book next week.

    Read the article

  • Implementing Silverlight Coverflow with ADO.NET/WCF Data Services in SharePoint 2010

    - by Sahil Malik
    WOOHOO!! My next video is online. In this video, I show how you can implement Silverlight Coverflow like UI using the Telerik silverlight toolset. In this demo, I talk to a picture library running in SharePoint 2010, and use ADO.NET Data Services to load up the various images loaded in the picture library. I then use the Telerik Silverlight toolset integrated with the ADO.NET Data Services/WCF Data Services, and show a fancy coverflow like UI. As always, very few slides, completely hands-on, all code written in front of your eyes! Enjoy the video. And yes, there are a couple of more exciting videos coming! Stay tuned!

    Read the article

  • WCF Error when using “Match Data” function in MDS Excel AddIn

    - by Davide Mauri
    If you’re using MDS and DQS with the Excel Integration, you may get an error when trying to use the “Match Data” feature, which uses DQS to help identify duplicate data in your data set. The error is quite obscure, and you have to enable WCF error reporting in order to get the error details; you’ll discover that they are related to some missing permissions in the MDS and DQS_STAGING_DATA databases. To fix the problem you just have to grant the needed permissions, as the following script does: use MDS go GRANT SELECT ON mdm.tblDataQualityOperationsState TO [VMSRV02\mdsweb] GRANT INSERT ON mdm.tblDataQualityOperationsState TO [VMSRV02\mdsweb] GRANT DELETE ON mdm.tblDataQualityOperationsState TO [VMSRV02\mdsweb] GRANT UPDATE ON mdm.tblDataQualityOperationsState TO [VMSRV02\mdsweb] USE [DQS_STAGING_DATA] GO ALTER AUTHORIZATION ON SCHEMA::[db_datareader] TO [VMSRV02\mdsweb] ALTER AUTHORIZATION ON SCHEMA::[db_datawriter] TO [VMSRV02\mdsweb] ALTER AUTHORIZATION ON SCHEMA::[db_ddladmin] TO [VMSRV02\mdsweb] GO Where “VMSRV02\mdsweb” is the user you configured for MDS service execution. If you don’t remember it, you can just check which account has been assigned to the IIS application pool that your MDS website is using.

    Read the article
