Search Results

Search found 58578 results on 2344 pages for 'data retrieval'.


  • HTTP 401.3 when PUT, DELETE to ADO.NET Data Service (.svc)

    - by Nate
    I have an ADO.NET Data Service (we'll call it service.svc). When I deploy it to an IIS 6 site with Integrated Windows Authentication turned on, all requests (GET, POST, PUT, and DELETE) work fine for me, because I am an administrator on the box. However, when a non-admin user hits the service, only GET and POST requests work. When they try a PUT or DELETE request, they get an HTTP 401.3 "Access is Denied" error: "Error message 401.3: You do not have permission to view this directory or page using the credentials you supplied (access denied due to Access Control Lists). Ask the web server's administrator to give you access to '...\service.svc'." If I give the "Authenticated Users" local group write access to the .svc file, everything works as it should, but I really don't want to do this (and don't think I should have to do this to get this to work). In fact, I'm confused as to why changing the file permissions would affect this at all, but it definitely seems to be the problem. I've found a couple of different suggestions to fix somewhat similar problems in the Microsoft forums (Here, and I would post more links, but am being told that new users can only post one link in a post), but none of the solutions help. Any help is much appreciated. I am certainly no IIS expert, and this one has got me stumped.

    Read the article

  • Deleting object in NSMutableSet Core Data

    - by Luke
    Hi, I am having trouble deleting an object in an NSMutableSet using Core Data. I am trying to delete a "player" object in the second section of my table view. I am getting the error: "Invalid update: invalid number of rows in section 1. The number of rows contained in an existing section after the update (6) must be equal to the number of rows contained in that section before the update (6), plus or minus the number of rows inserted or deleted from that section (0 inserted, 1 deleted) and plus or minus the number of rows moved into or out of that section (0 moved in, 0 moved out)." Take a look at my code. - (void)tableView:(UITableView *)tableView commitEditingStyle:(UITableViewCellEditingStyle)editingStyle forRowAtIndexPath:(NSIndexPath *)indexPath { if (editingStyle == UITableViewCellEditingStyleDelete) { if (indexPath.section==0){ }else{ Player *p = [self.fetchedResultsController.fetchedObjects objectAtIndex: indexPath.section]; [_team removePlayersObject:p]; [_team.players removeObject:p]; AppDelegate *delegate = [[UIApplication sharedApplication] delegate]; _managedObjectContext = delegate.managedObjectContext; [_managedObjectContext deleteObject:p]; [self.tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath] withRowAnimation:UITableViewRowAnimationTop]; } } } - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { switch(section){ case 0: return 7; case 1: return [self.fetchedResultsController.fetchedObjects count]; } return 0; }

    Read the article

  • Core Data to-many relationship in code

    - by Jan Bezemer
    I have three entities: Session, User and Test. A session has 0-many users and a user can perform 0-6 tests. (I say 0 but in the real application always at least 1 is required, at least 1 user for a session and at least 1 test for a user. But I say 0 to express an empty start.) All entities have their own specific data attributes too. A user has a name, A session has a name, a test has six values to be filled in by the user, and so on. But my issue is with the relationships. How do I set multiple users and have them added to one session (same goes for multiple tests for one user). How do I show the content in a right way? How do I show a session that has multiple users and these users having completed multiple tests? Here's my code so far with regard to issue 1: Session *session = [NSEntityDescription insertNewObjectForEntityForName:@"Session" inManagedObjectContext:context]; session.name = @"Session 1"; User *users = [NSEntityDescription insertNewObjectForEntityForName:@"User" inManagedObjectContext:context]; users.age = [NSNumber numberWithInt:28]; users.session = session; //sessie.users = users; [sessie addUserObject:users]; With regard to issue 2: I can log the session, but I can't get the user(s) logged from a session. NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; NSEntityDescription *entity = [NSEntityDescription entityForName:@"Session" inManagedObjectContext:context]; [fetchRequest setEntity:entity]; NSArray *fetchedObjects = [context executeFetchRequest:fetchRequest error:&error]; for (Session *info in fetchedObjects) { NSLog(@"Name: %@", info.name); NSLog(@"Having problems with this: %@",info.user); //User *details = info.user; //NSLog(@"User: %@", details.age); }

    Read the article

  • Scientific Data processing (Graph comparison and interpretation)

    - by pinkynobrain
    Hi stackoverflow friends, I'm trying to write a program to automate one of my more boring and repetitive work tasks. I have some programming experience but none with processing or interpreting large volumes of data, so I am seeking your advice (both suggestions of techniques to try and also things to read to learn more about doing this stuff). I have a piece of equipment that monitors an experiment by taking repeated samples and displays the readings on its screen as a graph. The input of the experiment can be altered, and one of these changes should produce a change in a section of the graph, which I currently identify by eye and which is what I'm looking for in the experiment. I want to automate it so that a computer looks at a set of results and spots the experiment input that causes the change. I can already extract the results from the machine. Currently the results for a run are in the form of an integer array, with the index being the sample number and the corresponding value being the measurement. The overall shape of the graph will be similar for each experiment run. The change I'm looking for will be roughly the same and will occur in approximately the same place every time for the correct experiment input. Unfortunately there are a few gotchas that make this problem more difficult. There is some noise in the measuring process which means there is some random variation in the measured values between different runs, although the overall shape of the graph remains the same. The time the experiment takes varies slightly each run, causing two effects. First, a whole graph may be shifted slightly on the x axis relative to another run's graph. Second, individual features may appear slightly wider or narrower in different runs. In both these cases the variation isn't particularly large, and you can assume that the only non-random variation is caused by the correct input being found. Thank you for your time, Pinky

    Read the article

  • A good (elegant) way to retrieve records with counts

    - by user93422
    Context: ASP.NET MVC 2.0, C#, SQL Server 2007, IIS7. I have a 'scheduledMeetings' table in the database. There is a one-to-many relationship: scheduledMeeting - meetingRegistration, so that you could have 10 people registered for a meeting. meetingRegistration has fields Name and Gender (for example). I have a "calendar view" on my site that shows all coming events, as well as a gender count for each event. At the moment I use LINQ to SQL to pull the data: var meetings = db.Meetings.Select( m => new { MeetingId = m.Id, Girls = m.Registrations.Count(r => r.Gender == 0), Boys = m.Registrations.Count(r=>r.Gender == 1) }); (the actual query is half a page long). Because there is an anonymous type in use I can't extract it into a method (since I have several different flavors of calendar view, with different information on each, and I don't want to create a new class for each). Any suggestions on how to improve this? Is a database view the answer? Or should I go ahead and create a named type? Any feedback/suggestions are welcome. My DataLayer is huge and I want to trim it, I just don't know how. Pointers to good reading would be welcome too.
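
    One way to avoid the anonymous type, sketched below with made-up type and property names rather than anything taken from the question, is to project into a small named type so the query can live in a method shared by the different calendar views:

        using System.Collections.Generic;
        using System.Linq;

        // Skeletal stand-ins for the question's LINQ to SQL entities (shape assumed).
        public class Registration { public int Gender { get; set; } }

        public class Meeting
        {
            public int Id { get; set; }
            public List<Registration> Registrations { get; set; }
        }

        // A named projection type instead of an anonymous type, so the query
        // can be returned from a reusable method.
        public class MeetingGenderCount
        {
            public int MeetingId { get; set; }
            public int Girls { get; set; }
            public int Boys { get; set; }
        }

        public static class CalendarQueries
        {
            public static IQueryable<MeetingGenderCount> GenderCounts(IQueryable<Meeting> meetings)
            {
                return meetings.Select(m => new MeetingGenderCount
                {
                    MeetingId = m.Id,
                    Girls = m.Registrations.Count(r => r.Gender == 0),
                    Boys = m.Registrations.Count(r => r.Gender == 1)
                });
            }
        }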

    Read the article

  • Best way to store data in database when you don't know the type

    - by stiank81
    I have a table in my database that represents datafields in a custom form. The DataField gives some representation of what kind of control it should be represented with, and what value type it should take. Simplified you can say that I have 2 entities in this table - Textbox taking any string and Textbox only taking numbers. Now I have the different values stored in a separate table, referencing the datafield definition. What is the best way to store the data value here, when the type differs? One possible solution is to have the FieldValue table hold one field per possible value type. Now this would certainly be redundant, but at least I would get the value stored in its correct form - simplifying queries later. FieldValue ---------- Id DataFieldId IntValue DoubleValue BoolValue DataValue .. Another possibility is just storing everything as String, and casting this in the queries. I am using .Net with NHibernate, and I see that at least here there is a Projections.Cast that can be used to cast e.g. string to int in the query. Either way in these two solutions I need to know which type to use when doing the query, but I will know that from the DataField, so that won't be a problem. Anyway; I don't think any of these solutions sounds good. Are they? Or is there a better way?
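
    For illustration only, a minimal sketch of the one-column-per-type option might look like the following; the enum and property names are assumptions rather than anything from the question, and the helper simply returns the column that matches the field's declared type:

        using System;

        // Illustrative sketch of the "one column per possible type" option; all names assumed.
        public enum FieldType { Text, Integer, Decimal, Boolean, Date }

        public class FieldValue
        {
            public int Id { get; set; }
            public int DataFieldId { get; set; }
            public FieldType Type { get; set; }       // mirrors the DataField definition
            public string StringValue { get; set; }
            public int? IntValue { get; set; }
            public double? DoubleValue { get; set; }
            public bool? BoolValue { get; set; }
            public DateTime? DateValue { get; set; }

            // Returns whichever column is meaningful for the declared type,
            // so callers never have to cast strings at query time.
            public object Value
            {
                get
                {
                    switch (Type)
                    {
                        case FieldType.Integer: return IntValue;
                        case FieldType.Decimal: return DoubleValue;
                        case FieldType.Boolean: return BoolValue;
                        case FieldType.Date:    return DateValue;
                        default:                return StringValue;
                    }
                }
            }
        }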

    Read the article

  • Making the same data appear only once

    - by friendishan
    I have the following code, which produces the following output: <? $tablaes = mysql_query("SELECT * FROM members where id='$order[user_id]'"); $user = mysql_fetch_array($tablaes); $idsd=$user['id']; $rPaid=mysql_query("SELECT SUM(`price`) AS total FROM order_history WHERE type!='rent_referral' AND date>'" . strtotime($time1) . "' AND date<'" . strtotime($time2) . "'"); $hdPaid = mysql_fetch_array($rPaid); $sPaid=mysql_query("SELECT SUM(`price`) AS total FROM order_history WHERE user_id='$idsd' AND type!='rent_referral' AND date>'" . strtotime($time1) . "' AND date<'" . strtotime($time2) . "'"); while ($hPaid = mysql_fetch_array($sPaid)) { ?> <td><?=$user['username']?></td> <td><?=$hPaid['total']?></td> <? } ?> </tr> It appears like this: http://dl.dropbox.com/u/14384295/darrenan.jpg. I want the same data to appear only once, like Username: Vegas and the price for him shown only once.

    Read the article

  • Bind data of interface properties only

    - by nivpenso
    I am new to Entity Framework models and data binding. I created an interface and generated a model class from my Candidate table in the db. public interface ICandidate { String ID { get; set; } string Name { get; set; } string Mail { get; set; } } I created a partial class for the generated Candidate model so I will be able to implement the ICandidate interface without changing any generated code. public partial class Candidates : ICandidate { string ICandidate.ID { get { return this.PK; } set { _PK = value; } } string ICandidate.Name { get{ return this._Name; } set { _Name = value; } } string ICandidate.Mail { get { return this._Email; } set { this._Email = value; } } } Of course, the generated class has more properties than the interface has (like an IsDeleted field that is not necessary for the interface). I want to display all the candidates from the db in a DataGridView, but I want only the interface's properties to be shown as columns in the DataGridView. Is there a way to bind only the interface's properties to the DataGridView? In my DB there is a table called Candidate_To_Company with these columns: PK, Candidate_FK, Company_FK, Insertion_Date. I would like to bind this table to a DataGridView, but instead of displaying Candidate_FK I would like to display the candidate name from ICandidate. Is this possible?
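
    A common approach, shown here only as a sketch and not necessarily matching the poster's exact model, is to project the entities through the interface into a small display row and bind that list, so the grid generates columns only for ID, Name and Mail:

        using System.Collections.Generic;
        using System.Linq;
        using System.Windows.Forms;

        // Hypothetical display row exposing only the interface's properties.
        public class CandidateRow
        {
            public string ID { get; set; }
            public string Name { get; set; }
            public string Mail { get; set; }
        }

        public static class CandidateGridBinding
        {
            // Projects the entities through ICandidate (as declared in the question)
            // and binds the result; the grid then auto-generates columns only for
            // ID, Name and Mail.
            public static void Bind(DataGridView grid, IEnumerable<ICandidate> candidates)
            {
                grid.DataSource = candidates
                    .Select(c => new CandidateRow { ID = c.ID, Name = c.Name, Mail = c.Mail })
                    .ToList();
            }
        }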

    Read the article

  • Issue with dynamic array Queue data structure with void pointer

    - by Nazgulled
    Hi, Maybe there's no way to solve this the way I'd like it but I don't know everything so I better ask... I've implemented a simple Queue with a dynamic array so the user can initialize with whatever number of items it wants. I'm also trying to use a void pointer as to allow any data type, but that's the problem. Here's my code: typedef void * QueueValue; typedef struct sQueueItem { QueueValue value; } QueueItem; typedef struct sQueue { QueueItem *items; int first; int last; int size; int count; } Queue; void queueInitialize(Queue **queue, size_t size) { *queue = xmalloc(sizeof(Queue)); QueueItem *items = xmalloc(sizeof(QueueItem) * size); (*queue)->items = items; (*queue)->first = 0; (*queue)->last = 0; (*queue)->size = size; (*queue)->count = 0; } Bool queuePush(Queue * const queue, QueueValue value, size_t val_sz) { if(isNull(queue) || isFull(queue)) return FALSE; queue->items[queue->last].value = xmalloc(val_sz); memcpy(queue->items[queue->last].value, value, val_sz); queue->last = (queue->last+1) % queue->size; queue->count += 1; return TRUE; } Bool queuePop(Queue * const queue, QueueValue *value) { if(isEmpty(queue)) return FALSE; *value = queue->items[queue->first].value; free(queue->items[queue->first].value); queue->first = (queue->first+1) % queue->size; queue->count -= 1; return TRUE; } The problem lies on the queuePop function. When I call it, I lose the value because I free it right away. I can't seem to solve this dilemma. I want my library to be generic and modular. The user should not care about allocating and freeing memory, that's the library's job. How can the user still get the value from queuePop and let the library handle all memory allocs/frees?

    Read the article

  • Migration Core Data error code 134130

    - by magichero
    I want to do migration with 2 core database. I have read apple developer document. For the first database, I add some attributes (string, integer and date properties) to new version database. And follow all steps, I have done migration successfully with the first once. But the second database, I also add attributes (string, integer, date, transformable and binary-data properties) to new version database. And follow all steps (like the first database) but system return an error (134130). Here is the code: if (persistentStoreCoordinator_) { PMReleaseSafely(persistentStoreCoordinator_); } // Notify NSNotificationCenter *nc = [NSNotificationCenter defaultCenter]; [nc postNotificationName:GCalWillMigrationNotification object:self]; // NSString *sourceStoreType = NSSQLiteStoreType; NSString *dataStorePath = [PMUtility dataStorePathForName:GCalDBWarehousePersistentStoreName]; NSURL *storeURL = [NSURL fileURLWithPath:dataStorePath]; BOOL storeExists = [[NSFileManager defaultManager] fileExistsAtPath:dataStorePath]; // NSError *error = nil; NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithBool:YES], NSMigratePersistentStoresAutomaticallyOption, [NSNumber numberWithBool:YES], NSInferMappingModelAutomaticallyOption, nil]; persistentStoreCoordinator_ = [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:self.managedObjectModel]; [persistentStoreCoordinator_ addPersistentStoreWithType:sourceStoreType configuration:nil URL:storeURL options:options error:&error]; if (error != nil) { abort(); } error is not nil and below is log: Error Domain=NSCocoaErrorDomain Code=134130 "The operation couldn\u2019t be completed. (Cocoa error 134130.)" UserInfo=0x856f790 {URL=file://localhost/Users/greensun/Library/Application%20Support/iPhone%20Simulator/5.0/Applications/D10712DE-D9FE-411A-8182-C4F58C60EC6D/Library/Application%20Support/Promise%20Mail/GCalDBWarehouse.sqlite, metadata={type = immutable dict, count = 7, entries = 2 : {contents = "NSStoreModelVersionIdentifiers"} = {type = immutable, count = 1, values = ( 0 : {contents = ""} )} 4 : {contents = "NSPersistenceFrameworkVersion"} = {value = +386, type = kCFNumberSInt64Type} 6 : {contents = "NSStoreModelVersionHashes"} = {type = immutable dict, count = 2, entries = 0 : {contents = "SyncEvent"} = {length = 32, capacity = 32, bytes = 0xfdae355f55c13fbd0344415fea26c8bb ... 4c1721aadd4122aa} 1 : {contents = "ImportEvent"} = {length = 32, capacity = 32, bytes = 0x7676888f0d7eaff4d1f844343028ce02 ... 040af6cbe8c5fd01} } 7 : {contents = "NSStoreUUID"} = {contents = "51678BAC-CCFB-4D00-AF5C-8FA1BEDA6440"} 8 : {contents = "NSStoreType"} = {contents = "SQLite"} 9 : {contents = "_NSAutoVacuumLevel"} = {contents = "2"} 10 : {contents = "NSStoreModelVersionHashesVersion"} = {value = +3, type = kCFNumberSInt32Type} }, reason=Can't find model for source store} I try a lot of solutions but it does not work. I just add more attributes to 2 new version database, and succeed in migrating once. Thanks for reading and your help.

    Read the article

  • Multiple views and source list in a Core Data app

    - by Ellie P.
    I'm working on my first major Cocoa app for an undergraduate research project. The application is document-based and uses Core Data. One of the entities is an abstract entity, Page. Page is parent of several types of pages: ie PageWithHeaderAndFooter, PageWithTwoColumns, BasicPage etc. Page has attributes, such as title and author, that all pages have in common. Each specific type of page has a certain number of layout blocks (PageWithHeaderAndFooter has three: header, footer, body. BasicPage has one: body. etc.) Additionally, all Page subclasses define layout-specific implementations of certain methods. The other relevant entity is Style, which defines the visual look of a Page. (Think of Pages as HTML and Style as CSS.) I would like my app to have an iTunes/Mail-like source list with sections. (One section would be Pages, the other would be Styles.) I have a pretty good idea how to do the sectioned source list (this was a great help). However, after hours of headbanging and fruitless googling, here's what I can't figure out: Pages and Styles listed in the source list, and when you select one of them, all of the relevant fields for that object appear at the right (mostly NSTextViews, pop up menus, etc). I laid that out and did all of the bindings in Interface Builder. The problem is, if my source list contains different types of pages, how do I get a different view to display at the right depending on the type of page selected? For example, if a BasicPage is selected, I want just what you see above: the general page stuff and one NSTextView that corresponds to the one field body of BasicPage. But if I select a PageWithHeaderAndFooter, I want to display the general page stuff plus three NSTextViews (one for header, body, and footer.) If I have a Style selected, I want to display various pop up menus, color wells, etc. For the pages at least, we're only talking about one or more NSTextViews, each of which corresponds to a String attribute of the respective entity. How would you do this? Thank you for your help!

    Read the article

  • Crash in OS X Core Data Utility Tutorial

    - by vinogradov
    I'm trying to follow Apple's Core Data utility Tutorial. It was all going nicely, until... The tutorial uses a custom sub-class of NSManagedObject, called 'Run'. Run.h looks like this: #import <Foundation/Foundation.h> #import <CoreData/CoreData.h> @interface Run : NSManagedObject { NSInteger processID; } @property (retain) NSDate *date; @property (retain) NSDate *primitiveDate; @property NSInteger processID; @end Now, in Run.m we have an accessor method for the processID variable: - (void)setProcessID:(int)newProcessID { [self willChangeValueForKey:@"processID"]; processID = newProcessID; [self didChangeValueForKey:@"processID"]; } In main.m, we use functions to set up a managed object model and context, instantiate an entity called run, and add it to the context. We then get the current NSprocessInfo, in preparation for setting the processID of the run object. NSManagedObjectContext *moc = managedObjectContext(); NSEntityDescription *runEntity = [[mom entitiesByName] objectForKey:@"Run"]; Run *run = [[Run alloc] initWithEntity:runEntity insertIntoManagedObjectContext:moc]; NSProcessInfo *processInfo = [NSProcessInfo processInfo]; Next, we try to call the accessor method defined in Run.m to set the value of processID: [run setProcessID:[processInfo processIdentifier]]; And that's where it's crashing. The object run seems to exist (I can see it in the debugger), so I don't think I'm messaging nil; on the other hand, it doesn't look like the setProcessID: message is actually being received. I'm obviously still learning this stuff (that's what tutorials are for, right?), and I'm probably doing something really stupid. However, any help or suggestions would be gratefully received! ===MORE INFORMATION=== Following up on Jeremy's suggestions: The processID attribute in the model is set up like this: NSAttributeDescription *idAttribute = [[NSAttributeDescription alloc]init]; [idAttribute setName:@"processID"]; [idAttribute setAttributeType:NSInteger32AttributeType]; [idAttribute setOptional:NO]; [idAttribute setDefaultValue:[NSNumber numberWithInteger:-1]]; which seems a little odd; we are defining it as a scalar type, and then giving it an NSNumber object as its default value. In the associated class, Run, processID is defined as an NSInteger. Still, this should be OK - it's all copied directly from the tutorial. It seems to me that the problem is probably in there somewhere. By the way, the getter method for processID is defined like this: - (int)processID { [self willAccessValueForKey:@"processID"]; NSInteger pid = processID; [self didAccessValueForKey:@"processID"]; return pid; } and this method works fine; it accesses and unpacks the default int value of processID (-1). Thanks for the help so far!

    Read the article

  • Solr Facet Search spring-data-solr

    - by sv1
    I am new to Solr and we are using Spring-data for Solr I have a question may be its too simple but I am unable to comprehend. Basically I need to search on any field but I have a "zip" field as one of the facet fields. I have a Solr repository . (Am not sure if the annotations on the Repository are correct.) public interface MyRepository extends SolrCrudRepository<MyPOJO, String> { @Query(value = "*:*") @Facet(fields={"zip"}) public FacetPage<MyPOJO> findByQueryandAnno(String searchTerm,Pageable page); } In my Service class I am trying to call this methods by Injecting MyRepository like below public class MyService { @Inject MyRepository solrRepo; @Inject SolrTemplate solrTemplate; public FacetPage<MyPOJO> tryFacets(String searchString){ //Here is where I am struggling SimpleFacetQuery query = new SimpleQuery(new SimpleStringCriteria(searchString)); query.setFacetOptions(new FacetOptions("zip")); //Not sure how to get the Pageable object. //But the repository doesnt accept without it. return solrTemplate.queryForPage(query,{Pageable Instance to be passed here}) } From my jUnit I am loading context files needed for Solr and Spring //In jUnit the test method looks like this service.tryFacets("some value"); and it fails with - method not found in the service class at the call to the repository method. *********EDIT**************** As per ChristophStrobl's advice created a copyfield called multivaluedCopyField and the searchString argument works good. But the facet still isnt working...Now my code looks like this. I get the MyPOJO object as response but the facetcount and the faceted values are missing. public interface MyRepository extends SolrCrudRepository<MyPOJO, String> { @Query(value = "*:*") @Facet(fields={"zip"}) public FacetPage<MyPOJO> findByQueryandAnno(String searchTerm,Pageable page); } My service class looks like public class MyService { @Inject MyRepository solrRepo; public FacetPage<MyPOJO> tryFacets(String searchString){ //Here is where I am struggling SimpleFacetQuery query = new SimpleQuery(new SimpleStringCriteria(searchString)).setPageRequest new PageRequest(0,5)); query.setFacetOptions(new FacetOptions("zip")); FacetPage<MyPOJO> facetedPOJO= solrRepo.findByQueryandAnno(searchString,query.getPageRequest()); return facetedPOJO; } My jUnit method call is like service.tryFacets("some value");

    Read the article

  • Principles of Big Data by Jules J Berman, O'Reilly Media (Book Review)

    - by Compudicted
    Originally posted on: http://geekswithblogs.net/Compudicted/archive/2013/11/04/principles-of-big-data-by-jules-j-berman-orsquoreilly-media.aspx A fantastic book! It should become part, if it is not yet, of the fundamentals of Big Data as a field of science. Highly recommended to those who are in Big Data practice. I confess this book is one of my best reads this year, and for a number of reasons: the book is full of wisdom, intimate insight, historical facts and real-life examples of how Big Data projects get conceived, operate and, sadly, yes, sometimes die. But not only that: most importantly, the book is filled with valuable advice and an accurate, even overwhelming (in a positive sense) amount of references, and the author does not even stop there. There are numerous technical excerpts, links and examples allowing you to quickly accomplish many daunting tasks, or making you aware of what one needs to perform as a data practitioner (excuse my use of the word practitioner; I just did not find a better substitute for it when trying to refer to all who face Big Data). Be aware that Jules Berman's background is in medicine; naturally, this book discusses that subject a lot, as it is very dear to the author's heart, I believe. This does not make the book any less significant, quite the opposite: I trust that if there is an area in science or practice where the biggest benefits can be reaped from Big Data projects, it is indeed medical science. Let's make cancer history! On a personal note, as a database and BI professional it has helped me better understand the motives behind Big Data initiatives, the underwater rivers and high-altitude winds that divert or propel them forward. Additionally, I was impressed by the depth and number of mining algorithms covered in it. I must say this made me very curious and tempted to find out more about these indispensable attributes of Big Data, so I will surely be stretching my wallet to acquire several books that go more in depth on the most popular of them. My favorite parts of the book: well, all of them actually, but especially chapter 9, Analysis; it is just very close to my heart, but the real reason is that it let me see what I do with data from a different angle. And then the next, "Special Considerations"; they are just two logical parts. The writing style of this book is accessible to all levels, and I had no technical problem reading it in ebook format on my 8" tablet or a large-screen monitor. If I were asked to say at least something negative, I would have to state that I initially had a feeling that the book's first part reads like academic material, relaxing the reader as the book progresses. I admit I am impressed with Jules' ability to use several programming languages and OSS tools, bravo! And I agree, it is not too, too hard to grasp at least the principles of a modern programming language, which seems to be becoming a de facto standard piece of knowledge for any modern human being. So grab a copy of this book, read it end to end, and shield yourself from making mistakes at any stage of your Big Data initiative; this book also helps you build better future Big Data projects. Disclaimer: I received a free electronic copy of this book as part of the O'Reilly Blogger Program.

    Read the article

  • Optimal storage of data structure for fast lookup and persistence

    - by Mikael Svenson
    Scenario: I have the following methods: public void AddItemSecurity(int itemId, int[] userIds) and public int[] GetValidItemIds(int userId). Initially I'm thinking storage of the form: itemId -> userId, userId, userId and userId -> itemId, itemId, itemId. AddItemSecurity is based on how I get data from a third party API; GetValidItemIds is how I want to use it at runtime. There are potentially 2000 users and 10 million items. Item ids are of the form: 2007123456, 2010001234 (10 digits where the first four represent the year). AddItemSecurity does not have to perform super fast, but GetValidItemIds needs to be subsecond. Also, if there is an update on an existing itemId I need to remove that itemId for users no longer in the list. I'm trying to think about how I should store this in an optimal fashion. Preferably on disk (with caching), but I want the code maintainable and clean. If the item ids had started at 0, I thought about creating a byte array the length of MaxItemId / 8 for each user, and setting a true/false bit if the item was present or not. That would limit the array length to a little over 1 MB per user and give fast lookups as well as an easy way to update the list per user. By persisting this as memory-mapped files with the .NET 4 framework I think I would get decent caching as well (if the machine has enough RAM) without implementing caching logic myself. Parsing the id, stripping out the year, and storing an array per year could be a solution. The ItemId - UserId[] list can be serialized directly to disk and read/written with a normal FileStream in order to persist the list and diff it when there are changes. Each time a new user is added all the lists have to be updated as well, but this can be done nightly. Question: should I continue to try out this approach, or are there other paths which should be explored as well? I'm thinking SQL Server will not perform fast enough, and it would give an overhead (at least if it's hosted on a different server), but my assumptions might be wrong. Any thoughts or insights on the matter are appreciated. And I want to try to solve it without adding too much hardware :) [Update 2010-03-31] I have now tested with SQL Server 2008 under the following conditions: table with two columns (userid, itemid), both Int; clustered index on the two columns; added ~800,000 items for 180 users, a total of 144 million rows; allocated 4 GB RAM for SQL Server; dual-core 2.66 GHz laptop; SSD disk; use a SqlDataReader to read all itemids into a List; loop over all users. If I run one thread it averages 0.2 seconds. When I add a second thread it goes up to 0.4 seconds, which is still OK. From there on the results deteriorate. Adding a third thread brings a lot of the queries up to 2 seconds. A fourth thread, up to 4 seconds; a fifth spikes some of the queries up to 50 seconds. The CPU is maxed out while this is going on, even on one thread. My test app takes some of the time due to the speedy loop, and SQL the rest. Which leads me to the conclusion that it won't scale very well, at least not on my tested hardware. Are there ways to optimize the database, say storing an array of ints per user instead of one record per item? But this makes it harder to remove items.
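
    As a rough sketch of the bit-per-item idea described above (assuming the year prefix has already been stripped and the ids remapped to a dense 0..maxItemId range, which the question leaves open), the per-user structure could look something like this:

        using System;

        // Sketch of the bit-per-item lookup described above. Assumes item ids have
        // already been remapped to a dense 0..maxItemId range (year handling omitted).
        public class UserItemBitmap
        {
            private readonly byte[] bits;

            public UserItemBitmap(int maxItemId)
            {
                bits = new byte[maxItemId / 8 + 1];
            }

            public void Set(int itemId, bool allowed)
            {
                if (allowed)
                    bits[itemId >> 3] |= (byte)(1 << (itemId & 7));
                else
                    bits[itemId >> 3] &= (byte)~(1 << (itemId & 7));
            }

            // Fast membership test; GetValidItemIds would scan these bits per user.
            public bool IsAllowed(int itemId)
            {
                return (bits[itemId >> 3] & (1 << (itemId & 7))) != 0;
            }
        }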

    Read the article

  • No data when attempting to get JSONP data from cross domain PHP script

    - by Alex
    I am trying to pull latitude and longitude values from another server on a different domain using a singe id string. I am not very familiar with JQuery, but it seems to be the easiest way to go about getting around the same origin problem. I am unable to use iframes and I cannot install PHP on the server running this javascript, which is forcing my hand on this. My queries appear to be going through properly, but I am not getting any results back. I was hoping someone here might have an idea that could help, seeing as I probably wouldn't recognize most obvious errors here. My javascript function is: var surl = "http://...omitted.../pull.php"; var idnum = 5a; //in practice this is defined above alert("BEFORE"); $.ajax({ url: surl, data: {id: idnum}, dataType: "jsonp", jsonp : "callback", jsonp: "jsonpcallback", success: function (rdata) { alert(rdata.lat + ", " + rdata.lon); } }); alert("BETWEEN"); function jsonpcallback(rtndata) { alert("CALLED"); alert(rtndata.lat + ", " + rtndata.lon); } alert("AFTER"); When my javascript is run, the BEFORE, BETWEEN and AFTER alerts are displayed. The CALLED and other jsonpcallback alerts are not shown. Is there another way to tell if the jsoncallback function has been called? Below is the PHP code I have running on the second server. I added the count table to my database just so that I can tell when this script is run. Every time I call the javascript, count has had an extra item inserted and the id number is correct. <?php header("content-type: application/json"); if (isset($_GET['id']) || isset($_POST['id'])){ $db_handle = mysql_connect($server, $username, $password); if (!$db_handle) { die('Could not connect: ' . mysql_error()); } $db_found = mysql_select_db($database, $db_handle); if ($db_found) { if (isset($_POST['id'])){ $SQL = sprintf("SELECT * FROM %s WHERE loc_id='%s'", $loctable, mysql_real_escape_string($_POST['id'])); } if (isset($_GET['id'])){ $SQL = sprintf("SELECT * FROM %s WHERE loc_id='%s'", $loctable, mysql_real_escape_string($_GET['id'])); } $result = mysql_query($SQL, $db_handle); $db_field = mysql_fetch_assoc($result); $rtnjsonobj -> lat = $db_field["lat"]; $rtnjsonobj -> lon = $db_field["lon"]; if (isset($_POST['id'])){ echo $_POST['jsonpcallback']. '('. json_encode($rtnjsonobj) . ')'; } if (isset($_GET['id'])){ echo $_GET['jsonpcallback']. '('. json_encode($rtnjsonobj) . ')'; } $SQL = sprintf("INSERT INTO count (bullshit) VALUES ('%s')", $_GET['id']); $result = mysql_query($SQL, $db_handle); $db_field = mysql_fetch_assoc($result); } mysql_close($db_handle); } else { $rtnjsonobj -> lat = 404; $rtnjsonobj -> lon = 404; echo $_GET['jsonpcallback']. '('. json_encode($rtnjsonobj) . ')'; }?> I am not entirely sure if the jsonp returned by this PHP is correct. When I go directly to the PHP script without including any parameters, I do get the following. ({"lat":404,"lon":404}) The callback function is not included, but that much can be expected when it isn't included in the original call. Does anyone have any idea what might be going wrong here? Thanks in advance!

    Read the article

  • Java/SQL data migration - comparing ResultSets in a failing JUnit test

    - by user1865053
    I CANNOT get this JUnit Test to pass for the life of me. Can somebody point out where this has gone wrong. I am doing a data migration(MSSQL SERVER 2005), but I have the sourceDBUrl and the targetDCUrl the same URL so to narrow it down to syntax errors. So that is what I have, a syntax error. I am comparing the results of a table for the query SELECT programmeapproval, resourceapproval FROM tr_timesheet WHERE timesheetid = ? and the test always fails, but passes for other junit tests I have developed. I created 3 diffemt resultSetsEqual methods and none work. Yet, some other JUnit tests I have developed have PASSED. THE QUERY: SELECT timesheetid, programmeapproval, resourceapproval FROM tr_timesheet Returns three columns timesheetid (PK,int, not null) (populated with a range of numbers 2240 - 2282) programmeapproval (smallint,not null) (populated with the number 1 in every field) resourceapproval (smallint, not null) (populated with a number 1 in every field) When I run the query that is embedded in the code it only returns one row with the programmeapproval and resourceapproval columns and both field populated with the number 1. I have all jdbc drivers correctly installed and tested for connectivity. The JUnit Test is failing at this point according to the IDE. assertTrue(helper.resultSetsEqual2(sourceVal,targetVal)); This is the code: /*THIS IS A JUNIT CLASS****? package a7.unittests.dao; import static org.junit.Assert.assertTrue; import java.sql.Connection; import java.sql.PreparedStatement; import java.sql.ResultSet; import java.sql.Types; import org.junit.Test; import artemispm.tritonalerts.TimesheetAlert; public class UnitTestTimesheetAlert { @Test public void testQUERY_CHECKALERT() throws Exception{ UnitTestHelper helper = new UnitTestHelper(); Connection con = helper.getConnection(helper.sourceDBUrl); Connection conTarget = helper.getConnection(helper.targetDBUrl); PreparedStatement stmt = con.prepareStatement("select programmeapproval, resourceapproval from tr_timesheet where timesheetid = ?"); stmt.setInt(1, 2240); ResultSet sourceVal = stmt.executeQuery(); stmt = conTarget.prepareStatement("select programmeapproval, resourceapproval from tr_timesheet where timesheetid = ?"); stmt.setInt(1,2240); ResultSet targetVal = stmt.executeQuery(); assertTrue(helper.resultSetsEqual2(sourceVal,targetVal)); }} /*END**/ /*THIS IS A REGULAR CLASS**/ package a7.unittests.dao; import java.sql.Connection; import java.sql.DriverManager; import java.sql.ResultSet; import java.sql.ResultSetMetaData; import java.sql.SQLException; public class UnitTestHelper { static String sourceDBUrl = "jdbc:sqlserver://127.0.0.1:1433;databaseName=a7itm;user=a7user;password=a7user"; static String targetDBUrl = "jdbc:sqlserver://127.0.0.1:1433;databaseName=a7itm;user=a7user;password=a7user"; public Connection getConnection(String url)throws Exception{ return DriverManager.getConnection(url); } public boolean resultSetsEqual3 (ResultSet rs1, ResultSet rs2) throws SQLException { int col = 1; //ResultSetMetaData metadata = rs1.getMetaData(); //int count = metadata.getColumnCount(); while (rs1.next() && rs2.next()) { final Object res1 = rs1.getObject(col); final Object res2 = rs2.getObject(col); // Check values if (!res1.equals(res2)) { throw new RuntimeException(String.format("%s and %s aren't equal at common position %d", res1, res2, col)); } // rs1 and rs2 must reach last row in the same iteration if ((rs1.isLast() != rs2.isLast())) { throw new RuntimeException("The two ResultSets contains different number of 
columns!"); } } return true; } public boolean resultSetsEqual (ResultSet source, ResultSet target) throws SQLException{ while(source.next()) { target.next(); ResultSetMetaData metadata = source.getMetaData(); int count = metadata.getColumnCount(); for (int i =1; i<=count; i++) { if(source.getObject(i) != target.getObject(i)) { return false; } } } return true; } public boolean resultSetsEqual2 (ResultSet source, ResultSet target) throws SQLException{ while(source.next()) { target.next(); ResultSetMetaData metadata = source.getMetaData(); int count = metadata.getColumnCount(); for (int i =1; i<=count; i++) { if(source.getObject(i).equals(target.getObject(i))) { return false; } } } return true; } } /END***/ /*PASTED NEW CLASS - THIS IS A JUNIT TEST CLASS*/ package a7.unittests.dao; import static org.junit.Assert.*; import java.sql.Connection; import java.sql.DriverManager; import org.junit.Test; public class TestDatabaseConnection { @Test public void testConnection() throws Exception{ UnitTestHelper helper = new UnitTestHelper(); Connection con = helper.getConnection(helper.sourceDBUrl); Connection conTarget = helper.getConnection(helper.targetDBUrl); assertTrue(con != null && conTarget != null); } } /**END***/

    Read the article

  • During Spring unit test, data written to db but test not seeing the data

    - by richever
    I wrote a test case that extends AbstractTransactionalJUnit4SpringContextTests. The single test case I've written creates an instance of class User and attempts to write it to the database using Hibernate. The test code then uses SimpleJdbcTemplate to execute a simple select count(*) from the user table to determine if the user was persisted to the database or not. The test always fails though. I was suspect because in the Spring controller I wrote, the ability to save an instance of User to the db is successful. So I added the Rollback annotation to the unit test and sure enough, the data is written to the database since I can even see it in the appropriate table -- the transaction isn't rolled back when the test case is finished. Here's my test case: @ContextConfiguration(locations = { "classpath:context-daos.xml", "classpath:context-dataSource.xml", "classpath:context-hibernate.xml"}) public class UserDaoTest extends AbstractTransactionalJUnit4SpringContextTests { @Autowired private UserDao userDao; @Test @Rollback(false) public void teseCreateUser() { try { UserModel user = randomUser(); String username = user.getUserName(); long id = userDao.create(user); String query = "select count(*) from public.usr where usr_name = '%s'"; long count = simpleJdbcTemplate.queryForLong(String.format(query, username)); Assert.assertEquals("User with username should be in the db", 1, count); } catch (Exception e) { e.printStackTrace(); Assert.assertNull("testCreateUser: " + e.getMessage()); } } } I think I was remiss by not adding the configuration files. context-hibernate.xml <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd> <bean id="namingStrategy" class="org.springframework.beans.factory.config.FieldRetrievingFactoryBean"> <property name="staticField"> <value>org.hibernate.cfg.ImprovedNamingStrategy.INSTANCE</value> </property> </bean> <bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean" destroy-method="destroy" scope="singleton"> <property name="namingStrategy"> <ref bean="namingStrategy"/> </property> <property name="dataSource" ref="dataSource"/> <property name="mappingResources"> <list> <value>com/company/model/usr.hbm.xml</value> </list> </property> <property name="hibernateProperties"> <props> <prop key="hibernate.dialect">org.hibernate.dialect.PostgreSQLDialect</prop> <prop key="hibernate.show_sql">true</prop> <prop key="hibernate.use_sql_comments">true</prop> <prop key="hibernate.query.substitutions">yes 'Y', no 'N'</prop> <prop key="hibernate.cache.provider_class">org.hibernate.cache.EhCacheProvider</prop> <prop key="hibernate.cache.use_query_cache">true</prop> <prop key="hibernate.cache.use_minimal_puts">false</prop> <prop key="hibernate.cache.use_second_level_cache">true</prop> <prop key="hibernate.current_session_context_class">thread</prop> </props> </property> </bean> <bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager"> <property name="sessionFactory" ref="sessionFactory"/> <property name="nestedTransactionAllowed" value="false" /> </bean> <bean id="transactionInterceptor" class="org.springframework.transaction.interceptor.TransactionInterceptor"> <property name="transactionManager"> <ref local="transactionManager"/> </property> <property 
name="transactionAttributes"> <props> <prop key="create">PROPAGATION_REQUIRED</prop> <prop key="delete">PROPAGATION_REQUIRED</prop> <prop key="update">PROPAGATION_REQUIRED</prop> <prop key="*">PROPAGATION_SUPPORTS,readOnly</prop> </props> </property> </bean> </beans> context-dataSource.xml <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd"> <bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close"> <property name="driverClass" value="org.postgresql.Driver" /> <property name="jdbcUrl" value="jdbc\:postgresql\://localhost:5432/company_dev" /> <property name="user" value="postgres" /> <property name="password" value="postgres" /> </bean> </beans> context-daos.xml <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd"> <bean id="extendedFinderNamingStrategy" class="com.company.dao.finder.impl.ExtendedFinderNamingStrategy"/> <bean id="finderIntroductionAdvisor" class="com.company.dao.finder.impl.FinderIntroductionAdvisor"/> <bean id="abstractDaoTarget" class="com.company.dao.impl.GenericDaoHibernateImpl" abstract="true" depends-on="sessionFactory"> <property name="sessionFactory"> <ref bean="sessionFactory"/> </property> <property name="namingStrategy"> <ref bean="extendedFinderNamingStrategy"/> </property> </bean> <bean id="abstractDao" class="org.springframework.aop.framework.ProxyFactoryBean" abstract="true"> <property name="interceptorNames"> <list> <value>transactionInterceptor</value> <value>finderIntroductionAdvisor</value> </list> </property> </bean> <bean id="userDao" parent="abstractDao"> <property name="proxyInterfaces"> <value>com.company.dao.UserDao</value> </property> <property name="target"> <bean parent="abstractDaoTarget"> <constructor-arg> <value>com.company.model.UserModel</value> </constructor-arg> </bean> </property> </bean> </beans> Some of this I've inherited from someone else. I wouldn't have used the proxying that is going on here because I'm not sure it's needed but this is what I'm working with. Any help much appreciated.

    Read the article

  • Performance impact: What is the optimal payload for SqlBulkCopy.WriteToServer()?

    - by Linchi Shea
    For many years, I have been using a C# program to generate the TPC-C compliant data for testing. The program relies on the SqlBulkCopy class to load the data generated by the program into the SQL Server tables. In general, the performance of this C# data loader is satisfactory. Lately however, I found myself in a situation where I needed to generate a much larger amount of data than I typically do and the data needed to be loaded within a confined time frame. So I was driven to look into the code...(read more)
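
    The article itself digs into the loader's code, but as a rough illustration, a harness along these lines could be used to experiment with different batch sizes; the destination table and connection string are placeholders, not the author's actual TPC-C loader:

        using System.Data;
        using System.Data.SqlClient;

        // Minimal harness for experimenting with the bulk-copy payload; the
        // destination table and connection string are placeholders.
        public static class BulkLoader
        {
            public static void Load(string connectionString, DataTable rows, int batchSize)
            {
                using (var bulkCopy = new SqlBulkCopy(connectionString))
                {
                    bulkCopy.DestinationTableName = rows.TableName;  // e.g. a TPC-C table
                    bulkCopy.BatchSize = batchSize;   // rows sent to the server per batch
                    bulkCopy.BulkCopyTimeout = 0;     // no timeout for large loads
                    bulkCopy.WriteToServer(rows);
                }
            }
        }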

    Read the article

  • Oracle Announces Oracle Data Integrator 12c and Oracle GoldenGate 12c

    - by Roxana Babiciu
    In today’s data-driven business environment, organizations need to cost-effectively manage the ever-growing streams of information originating both inside and outside the firewall and address emerging deployment styles like cloud, big data analytics, and real-time replication. To help customers succeed, Oracle is enhancing its data integration offering with Oracle Data Integrator 12c and Oracle GoldenGate 12c. These flexible and comprehensive solutions help customers capitalize on their data to reduce costs and drive business growth. Read more here

    Read the article

  • Next-Generation Data Integration on Oracle Exadata

    - by Julien Testut
    Companies are currently faced with increasing data volumes and retention times while simultaneously batch windows are shrinking. In the 'Next-Generation Data Integration on Oracle Exadata' session we will be discussing how Oracle, with its innovative Data Integration solution along with Exadata, can help companies tackle that challenge. Oracle Data Integrator and Oracle GoldenGate provide industry-leading performance and scalability for data integration on Oracle Exadata. They are both uniquely designed to take full advantage of the power of the database and to eliminate unnecessary middle-tier components which can often be bottlenecks for data movement and transformation. Combined with the extreme performance provided by Exadata, our Data Integration products help companies move towards a more efficient and flexible data integration infrastructure. If you're interested in hearing more about how our customers maximize the performance of their Exadata systems while minimizing batch windows, all without adding more hardware resources, join us for the following session: Next-Generation Data Integration on Oracle Exadata, Thursday October 4th, 11:15AM - 12:15PM, Moscone West, Room 3005. We also have many other exciting sessions, including 'Oracle Data Integrator Product Update and Future Strategy' on October 2nd at 1:15PM in Moscone West Room 3005. In this session we will discuss the ODI roadmap and its integration with engineered systems such as the Oracle Big Data Appliance. It's a session not to be missed! You can find a list of all the Data Integration sessions happening at Oracle OpenWorld in this document: Focus On Data Integration. If you will not be able to come to OpenWorld, for more information please check out our data sheet Oracle Data Integration Solutions and the Oracle Exadata Database Machine.

    Read the article

  • Instruction vs data cache usage

    - by Nick Rosencrantz
    Say I've got a cache design where instructions and data have separate cache memories (a "Harvard architecture"). Which cache, instruction or data, is used most often? I mean "most often" in terms of time, not amount of data, since the data cache might be used "more" in terms of amount of data while the instruction cache might be used "more often", especially depending on the program. Are there different answers a) in general and b) for a specific program?

    Read the article

  • Transparent Data Encryption

    Transparent Data Encryption is designed to protect data by encrypting the physical files of the database, rather than the data itself. Its main purpose is to prevent unauthorized access to the data by someone restoring the files to another server; with Transparent Data Encryption in place, that requires the original encryption certificate and master key. It was introduced in the Enterprise edition of SQL Server 2008. John Magnabosco explains it fully, and guides you through the process of setting it up.
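
    For readers who want a feel for the steps the article walks through, the documented TDE setup sequence looks roughly like the following, wrapped here in C#; the database, certificate and password names are placeholders, and in practice the certificate and its private key should also be backed up:

        using System.Data.SqlClient;

        // Documented TDE setup steps executed over a single connection;
        // database, certificate and password names are placeholders only.
        public static class TdeSetup
        {
            public static void Enable(string masterConnectionString, string databaseName)
            {
                string[] steps =
                {
                    "USE master;",
                    "CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Use_A_Strong_Password_Here!';",
                    "CREATE CERTIFICATE TdeCertificate WITH SUBJECT = 'TDE certificate';",
                    "USE [" + databaseName + "];",
                    "CREATE DATABASE ENCRYPTION KEY WITH ALGORITHM = AES_256 " +
                        "ENCRYPTION BY SERVER CERTIFICATE TdeCertificate;",
                    "ALTER DATABASE [" + databaseName + "] SET ENCRYPTION ON;"
                };

                using (var connection = new SqlConnection(masterConnectionString))
                {
                    connection.Open();
                    foreach (var sql in steps)
                    {
                        using (var command = new SqlCommand(sql, connection))
                        {
                            command.ExecuteNonQuery();
                        }
                    }
                }
            }
        }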

    Read the article

  • Google I/O 2012 - Spatial Data Visualization

    Google I/O 2012 - Spatial Data Visualization, presented by Brendan Kenny and Enoch Lau. Maps were among the first data visualizations, but they can also provide the backdrop for visualizing your own spatial data. In this session, we'll take a voyage through the world of map-based data visualization, arming you with the tools you need to most effectively bring your data to life on a map using the Maps API v3. For all I/O 2012 sessions, go to developers.google.com. (From: GoogleDevelopers, duration 01:00:17)

    Read the article

  • OTN Virtual Technology Summit - July 9 - Middleware Track

    - by OTN ArchBeat
    The Architecture of Analytics: Big Time Big Data and Business Intelligence This four-session track, part of the free OTN Virtual Technology Summit on July 9, will present a solution architect's perspective on how business intelligence products in Oracle's Fusion Middleware family and beyond fit into an effective big data architecture, offering insight and expertise from Oracle ACE Directors and product team experts specializing in business Intelligence to help you meet your big data business intelligence challenges. Register now! Sessions Oracle Big Data Appliance Case Study: Using Big Data to Analyze Cancer-Genome Relationships Tom Plunkett, Lead Author of the Oracle Big Data Handbook What does it take to build an award winning Big Data solution? This presentation takes a deep technical dive into the use of the Oracle Big Data Appliance in a project for the National Cancer Institute's Frederick National Laboratory for Cancer Research. The Frederick National Laboratory and the Oracle team won several awards for analyzing relationships between genomes and cancer subtypes with big data, including the 2012 Government Big Data Solutions Award, the 2013 Excellence.Gov Finalist for Innovation, and the 2013 ComputerWorld Honors Laureate for Innovation. [30 mins] Getting Value from Big Data Variety Richard Tomlinson, Director, Product Management, Oracle Big data variety implies big data complexity. Performing analytics on diverse data typically involves mashing up structured, semi-structured and unstructured content. So how can we do this effectively to get real value? How do we relate diverse content so we can start to analyze it? This session looks at how we approach this tricky problem using Endeca Information Discovery. [30 mins] How To Leverage Your Investment In Oracle Business Intelligence Enterprise Edition Within a Big Data Architecture Oracle ACE Director Kevin McGinley More and more organizations are realizing the value Big Data technologies contribute to the return on investment in Analytics. But as an increasing variety of data types reside in different data stores, organizations are finding that a unified Analytics layer can help bridge the divide in modern data architectures. This session will examine how you can enable Oracle Business Intelligence Enterprise Edition (OBIEE) to play a role in a unified Analytics layer and the benefits and use cases for doing so. [30 mins] Oracle Data Integrator 12c As Your Big Data Data Integration Hub Oracle ACE Director Mark Rittman Oracle Data Integrator 12c (ODI12c), as well as being able to integrate and transform data from application and database data sources, also has the ability to load, transform and orchestrate data loads to and from Big Data sources. In this session, we'll look at ODI12c's ability to load data from Hadoop, Hive, NoSQL and file sources, transform that data using Hive and MapReduce processing across the Hadoop cluster, and then bulk-load that data into an Oracle Data Warehouse using Oracle Big Data Connectors. We will also look at how ODI12c enables ETL-offloading to a Hadoop cluster, with some tips and techniques on real-time capture into a Hadoop data reservoir and techniques and limitations when performing ETL on big data sources. [90 mins] Register now!

    Read the article
